Python has stopped working - python

I'm running a Python script on my Windows 10 machine. The script reads compressed data files stored as .tar.gz, processes each one, and then reads the next. In this manner it processes thousands of files.
I run the script in a Windows 10 PowerShell and, seemingly at random, I often get a "Python has stopped working" error dialog.
Sometimes this happens after a day, sometimes after only a few minutes.
I select 'Close program' and the script is terminated. Looking into the Windows Event Viewer, I can see the following entry:
Faulting application name: python.exe, version: 3.6.2150.1013, time stamp: 0x59c1326e
Faulting module name: multiarray.cp36-win_amd64.pyd, version: 0.0.0.0, time stamp: 0x59c3eeda
Exception code: 0xc0000005
Any ideas on how to avoid this error message?

0xc0000005 is a memory access violation error.
The faulting module, multiarray.cp36-win_amd64.pyd, is NumPy's core array extension, so the crash seems to happen while NumPy is processing arrays rather than in pure Python code.
You can try to troubleshoot by adding logging, for example which file is currently being processed, or a crash trace as sketched below, so you can identify what triggers the crash.
Once you know where it happens, the problem may be solved by changing the related code.
If you are able to replicate the issue consistently and the Python code seems correct, it may be a rare case of a bug in Python itself or in the extension module.
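A minimal sketch of that kind of crash tracing, using the standard faulthandler module (the file name crash_traceback.log is just a placeholder): faulthandler can often dump the Python traceback of the thread that hit the access violation right before the process dies.

import faulthandler

# Keep the file object open for the lifetime of the process; faulthandler only
# writes to it when a fatal error (such as an access violation) occurs.
crash_log = open("crash_traceback.log", "w")
faulthandler.enable(file=crash_log, all_threads=True)

# ... the existing .tar.gz processing loop runs below, unchanged ...

The same effect is available without code changes by setting the environment variable PYTHONFAULTHANDLER=1 or by running the script with python -X faulthandler.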

Related

Timed out waiting for debuggee to spawn

Randomly, when I run a Python script in VS Code, I get this error message:
"Timed out waiting for debuggee to spawn"
It happens randomly; sometimes it goes away after killing all VS Code processes, but sometimes that doesn't help.
My VS Code version is:
and the debugger extension launcher is:
/homes/noyah/.vscode-server/extensions/ms-python.python-2022.19.13251009/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher
There are many possible reasons for this. You can try the following methods:
Right-click VS Code and run it as administrator.
Check your Python version; if it is too old, this problem can occur (see the version-check snippet after this list).
Try an older version of the Python extension, or the pre-release version.
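As a quick sanity check of which interpreter VS Code actually launches and how old it is (as far as I know, recent debugpy-based debuggers require Python 3.7 or newer), run a tiny script with the interpreter selected in the status bar:

import sys

print(sys.executable)    # path of the interpreter in use
print(sys.version_info)  # e.g. sys.version_info(major=3, minor=7, ...)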

Error opening libpython3.9.so.1.0 on Raspberry Pi

I'm trying to run a program that was originally compiled for x86 processors on my Raspberry Pi. I tried to launch it with the box86 emulator, which I installed, but it returns a long list of errors of the following type:
"Error: Global Symbol XML_SetEncoding not found, cannot apply R_386_BLOB_DAT #0xba19f98 ((nil)) in ~/program/libpython3.9.so.1.0"
and ending with
"Error loading Python lib '/program/libpython3.9.so.1.0': dlopen Cannot dlopen("/program/libpython3.9.so.1.0"/0xb6380560, 102)
I can't find any reference to these types of errors and am unsure whether it is something in the setup or due to the box86 emulator. The program I am running is initialised from a .sh file and runs a Python script, and the libpython3.9.so.1.0 file is located in the directory it is looking in.
I know this is not the best way to go about this, but I have been tasked with getting the program running on a Pi; if that proves impossible I can try to purchase an industrial PC for the task, but I would like to try this first. My next step will be to see whether ExaGear helps, but assistance with these errors would be useful first.
Edit: I have not done any configuration of Python or the path since setting up the Pi; however, the file I am running starts with #!/usr/bin/env/ bash
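As far as I understand, box86 only translates 32-bit x86 binaries (its companion project box64 handles x86_64), so it is worth confirming which architecture the bundled libpython3.9.so.1.0 was actually built for. A minimal sketch that reads the ELF header of the file from the error message:

import struct

def elf_arch(path):
    # Inspect the ELF header: e_machine (2 bytes at offset 18) identifies the
    # target architecture of the shared object.
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        return "not an ELF file"
    e_machine = struct.unpack_from("<H", header, 18)[0]
    names = {0x03: "x86 (32-bit)", 0x3e: "x86_64", 0x28: "ARM (32-bit)", 0xb7: "AArch64"}
    return names.get(e_machine, hex(e_machine))

print(elf_arch("/program/libpython3.9.so.1.0"))

If it reports x86_64, box86 on its own will not be able to load it.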

What causes a "... memory could not be written" error in Windows?

I'm getting an application error on Windows 10 with Python in PyCharm when I run a script. After the error, my PyCharm terminal looks as though the script is still running, but when I open Task Manager all my Python processes show 0% CPU usage; the script uses a multiprocessing pool set to 14 processes. It will sit like this forever, I am guessing, and no error is printed to the terminal. The script also works entirely on one dataset but throws this error on the other; although that database is larger, the table is about the same size, maybe smaller, and the number of records the script is looking for is about 1,000 fewer than on the dataset it works on.
What does this error mean?
How do I fix it?
Is it linked to multiprocessing? To data size and memory? Or something else?
Python 3.7.0; PyCharm 2021.1.1 Community Edition, Build #PC-211.7142.13, Runtime version 11.0.10+9-b1341.41 amd64; GDAL 3.1.4
Faulting application name: python.exe, version: 3.7.150.1013, time stamp: 0x5b331a30
Faulting module name: gdal301.dll, version: 3.1.4.0, time stamp: 0x5fdf9dd8
Exception code: 0xc0000005
Fault offset: 0x0000000000b7043b
Faulting process id: 0x8160
Faulting application start time: 0x01d8f8795e672f31
Faulting application path: C:\OSGeo4W64\apps\Python37\python.exe
Faulting module path: C:\OSGEO4~1\bin\gdal301.dll
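For what it is worth, the exception code 0xc0000005 is again an access violation, here inside gdal301.dll, and when a pool worker dies that way multiprocessing.Pool can simply hang, which matches the 0% CPU symptom. A sketch of one way to narrow it down (process_record and do_gdal_work are hypothetical stand-ins for the real per-record work): have each worker log what it is about to process, and use concurrent.futures.ProcessPoolExecutor, which raises BrokenProcessPool when a worker is killed instead of waiting forever.

import os
from concurrent.futures import ProcessPoolExecutor

def do_gdal_work(record):
    return record                        # stand-in so the sketch runs on its own

def process_record(record):
    # Log the record to a per-process file and flush before doing the real work;
    # after a crash, a "start" line with no matching "done" line points at the
    # record that killed the worker.
    with open("worker_%d.log" % os.getpid(), "a") as log:
        log.write("start %r\n" % (record,))
        log.flush()
        result = do_gdal_work(record)
        log.write("done %r\n" % (record,))
        log.flush()
    return result

def main(records):
    # BrokenProcessPool is raised here if a worker process dies abruptly.
    with ProcessPoolExecutor(max_workers=14) as pool:
        return list(pool.map(process_record, records))

if __name__ == "__main__":
    main(range(1000))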

Python: Process finished with exit code -1073741819 (0xC0000005). How to Debug?

I'm getting a strange error, "Process finished with exit code -1073741819 (0xC0000005)", running Python 3.7 on a Windows machine. The process just crashes. The error occurs at random times and seems to appear inside a thread.
Is there some way to get more information about where exactly the error comes from? Right now my only solution is to add logging, but that is really time-consuming.
Thanks a lot for any hint.
I've seen this error occur when a Python script has infinite (or very deep) recursion and the following code is used to increase the recursion limit:
import sys
sys.setrecursionlimit(4000)
I would guess that the error means the interpreter runs out of stack space for the deep recursion.
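If deep recursion really is the cause, one workaround (a sketch; deep() is a placeholder for the real recursive workload) is to run the recursive part in a thread with an explicitly enlarged C stack, since sys.setrecursionlimit() only lifts Python's own check and does nothing about the native stack:

import sys
import threading

def deep(n):
    # placeholder for the real deeply recursive function
    return 0 if n == 0 else 1 + deep(n - 1)

def main():
    print(deep(20000))

if __name__ == "__main__":
    sys.setrecursionlimit(25000)             # lift the Python-level limit
    threading.stack_size(128 * 1024 * 1024)  # request a 128 MiB C stack for new threads
    worker = threading.Thread(target=main)
    worker.start()
    worker.join()

If there is no intentional deep recursion in the code, the same exit code usually points at a crash inside a C extension module instead.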
I had the same issue not long ago and solved it with the following steps:
Reinstall Python – you may be missing python33.dll in c:\WINDOWS\system32\.
Maybe you have different Python versions – look at the folders in the root of c:\.
If so, point to your version of python.exe in PyCharm > Settings > Project Interpreter.

Python 2.7 script crashing Ubuntu 16.04: how to find the reason?

I run a complex Python program that is computationally demanding.
While it is complex in terms of lines of code, the code itself is simple: it is not multi-threaded, not multi-process, and does not use any "external" library, with the exception of colorama, installed via pip.
The program does not require a large amount of memory.
When I run it and monitor it via htop, one of the eight CPUs is used 100% by the script, and around 1.16 GB (out of 62.8 GB) of memory is used (this number remains more or less steady).
After running the script for a while (10 to 20 minutes), my Dell desktop running Ubuntu 16.04 systematically freezes. I can move the mouse, but clicks do not work, the keyboard is unresponsive, and running programs (e.g. htop) freeze. I can only hard-reboot. Note that the last frame displayed by htop does not show anything unexpected (e.g. no increase in memory usage).
I never experience such freezes when not running the Python program.
I do nothing special in parallel with running the script, aside from browsing with Firefox or handling mail in Thunderbird (i.e. nothing that would use CPU or RAM in a significant way).
I have added print traces to my Python code: the script is never at the same point when the freeze occurs.
I also watch the kernel logs in another terminal: nothing special is printed at the time of the freeze.
I do not use any IDE, and run the script directly from a terminal.
Searching for similar issues, they usually seem to be related to excessive memory usage, which does not appear to be my case.
I have no idea how to investigate this issue.
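Since nothing survives the hard reboot, one low-effort way to collect more data (a sketch; the file name watchdog.log and the 5-second interval are arbitrary) is a background thread inside the script that periodically appends its own peak memory usage to a file and forces it to disk, so the last entries written before the freeze are preserved:

import os
import time
import resource
import threading

def watchdog(path="watchdog.log", interval=5):
    # Every few seconds, append a timestamp and the peak resident set size
    # (ru_maxrss is reported in kilobytes on Linux), then fsync so the entry
    # survives a hard reset.
    with open(path, "a") as f:
        while True:
            rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            f.write("%s peak RSS: %d kB\n" % (time.strftime("%H:%M:%S"), rss_kb))
            f.flush()
            os.fsync(f.fileno())
            time.sleep(interval)

monitor = threading.Thread(target=watchdog)
monitor.daemon = True   # do not keep the process alive once the real work ends
monitor.start()

# ... the existing program continues here ...

If the logged values stay flat right up to the freeze, memory pressure from this process can reasonably be ruled out.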
