I seem to be getting a Runtime error whilst running my Python script in Blackmagic Fusion.
# "The application has requested the Runtime to terminate it in an unusual way".
This does not happen every time I run the script. It only seems to pop up when I feed the Python script a heavy workload, or if I run the script multiple times inside the Blackmagic Fusion compositing software without restarting the package. I thought this might be a memory leak, but when I watch the memory usage it does not seem to flinch at all.
Does anyone have any idea what might be causing this, or at least a solution of how I might start to debug the script?
Many thanks.
If you know how to reproduce the runtime error, run your script under pdb.
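For example, you can start the script with python -m pdb yourscript.py, or drop a breakpoint just before the heavy part. A minimal sketch; heavy_part is only a placeholder for your own code:

import pdb

def heavy_part():
    pass  # placeholder for the workload that seems to trigger the error

pdb.set_trace()   # execution pauses here; step with 'n', continue with 'c'
heavy_part()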
Perhaps this'll help. It's apparently a common error with Microsoft Visual C++:
http://support.microsoft.com/kb/884538
Related
I have a PyQt application that occasionally will crash due to memory issues with the error shown below
The instruction at 0X00007FFC450FA07 referenced memory at 0X0000000000000048. The memory could not be read.
I personally have not been able to reproduce this error in any way. It has happened on the computers of a few people I work with while running my application. I have looked and see a lot of questions for languages like C# or C++, where it seems to be a pointer issue. Since I am using Python, though, those questions and solutions are not very relevant, as Python does not expose pointers.
Seen here: C# example question, where the problem was a long-running thread. I do use threading in my application, so I have been looking there. I have run my application many times with the Task Manager open to monitor threads and have not seen any problems. All threads seem to close. If there is a better way to monitor this, please let me know.
I am using PyCharm and my Python version is 3.9. I am also using PyInstaller to create an executable version of the application. I have run the program from the command line, by launching the executable, and directly through PyCharm, and have not been able to see the error at all or debug it.
Is it possible this is happening due to bitness issues? That is, I am building the executable with a 32-bit version of Python; could some computers have a problem with that? Would building with a 64-bit version possibly solve this?
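For what it's worth, you can confirm which build you are actually bundling with a couple of lines (just a quick check, not a fix):

import platform
import sys

print(sys.maxsize > 2**32)       # True -> 64-bit Python, False -> 32-bit
print(platform.architecture())   # e.g. ('64bit', 'WindowsPE')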
Fairly new 'programmer' here, trying to understand how Python interacts with Windows when multiple unrelated scripts are run simultaneously, for example from Task Manager or just by starting them manually from IDLE. The scripts just make HTTP calls and write files to disk, and the environment is Python 3.6.
Is the interpreter able to draw resources from the OS (processor/memory/disk) independently, such that the time to complete each script is more or less the same as it would be if it were the only script running (assuming the scripts cumulatively get nowhere near using up all the CPU or memory)? If so, what are the limitations (number of scripts, etc.)?
Pardon mistakes in terminology. Note the quotes on 'programmer'.
how Python interacts with Windows
Python is an executable, a program. When a program is executed, a new process is created.
python myscript.py starts a new python.exe process where the first argument is your script.
when multiple unrelated scripts are run simultaneously
They are multiple processes.
Is the interpreter able to draw resources from the OS (processor/memory/disk) independently?
Yes. Each process may access the OS API however it wishes, to the extent that it is possible.
What are the limitations?
Most likely RAM. The same limitations as any other process might encounter.
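For illustration, here is a minimal sketch of what 'running multiple unrelated scripts' amounts to; the script names are placeholders. Each Popen call starts its own python.exe process, and the OS schedules them independently:

import subprocess
import sys

scripts = ["fetch_a.py", "fetch_b.py", "fetch_c.py"]   # hypothetical script names
procs = [subprocess.Popen([sys.executable, s]) for s in scripts]

for p in procs:
    p.wait()   # each process finishes on its own schedule
    print(p.args, "exited with code", p.returncode)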
These are difficult questions to answer, in part because they depend on:
Your operating system: Your OS gets to schedule and run tasks when it wants, which the Python programmer often does not have control over.
What your scripts are actually doing: If your scripts are all trying to write to the same drive, their execution may be stalled more often than if no device were being written to. Or a script might even run faster when only one script writes to the drive, as the CPU can let one script compute while another writes. (It's hard to tell without benchmark testing.)
How many CPUs you're using: The number of Central Processing Units can improve parallel processing of programs -- but perhaps not. If your programs are constantly reading and writing from the same disk, more CPUs may not be a benefit.
Your Python version: (I'm just adding this for completeness.)
Ultimately, the only way you're going to get any real information on this is if you do your own benchmarking -- and even then, you should remember that those figures you find are only applicable to your current setup. That is, if you go to another computer elsewhere, you may find you get different results.
If you aren't familiar with Python's timeit module, I recommend you look into it. (It's part of the standard library, so you already have it.) It'll help you do benchmark testing and get some definitive answers for your platform.
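A minimal sketch of what that looks like (the function being timed is just a placeholder):

import timeit

def work():
    return sum(i * i for i in range(10_000))

# Run the snippet 1000 times, repeat the measurement 5 times, keep the best.
best = min(timeit.repeat(work, number=1000, repeat=5))
print(f"best of 5: {best:.4f} s for 1000 calls")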
By asking questions like yours, you may soon hear about Python's GIL (Global Interpreter Lock). It has to do with Python threads, and some people think it's a blessing, and some think it's a curse. Either way, this page:
https://realpython.com/python-gil/
has a good high-level explanation of it, when it can work well, and when it might not.
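To see the GIL's effect for yourself, you can time a CPU-bound function in threads versus processes. This is only a rough sketch and the numbers depend entirely on your machine; for I/O-bound scripts like yours (HTTP calls, file writes) threads are usually not the bottleneck anyway:

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n=5_000_000):
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(pool_cls):
    start = time.perf_counter()
    with pool_cls(max_workers=4) as pool:
        list(pool.map(burn, [5_000_000] * 4))
    return time.perf_counter() - start

if __name__ == "__main__":   # the guard is required on Windows for process pools
    print("threads:  ", timed(ThreadPoolExecutor))
    print("processes:", timed(ProcessPoolExecutor))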
Related to this thread...
I am trying to track down a bug in which the results from processing on an IPython cluster do not match what happens when the same process is run locally, even when the IPython cluster is entirely local and the CPU is simply running multiple engines.
I cannot seem to figure out how to log data as it is being processed on the engines. Print statements don't work, and even when I try to have each engine write to a separate file, the file is created but nothing is written to it.
There must be a way to debug code running on the IPython parallel engines.
Not sure why, but I narrowed the problem and workaround down to the fact that I am using Cython and compiling the .pyx files before running the program.
For some reason the Cython cdef initialization of my float variables was not being done properly on the engines; however, it was done correctly when I ran outside of the Client() queue.
Changing these variables to normal Python variables solved the problem, though it does not seem like this should have to happen. Can anyone shed more light on this?
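As for the original question of getting output from the engines: one approach worth trying is to have each engine write its own log file instead of relying on print. A rough sketch, assuming ipyparallel with a running local cluster; the file naming is only illustrative:

import ipyparallel as ipp

def traced_task(x):
    import logging, os
    logging.basicConfig(
        filename="engine_%d.log" % os.getpid(),
        level=logging.DEBUG,
        force=True,   # Python 3.8+; drop this argument on older versions
    )
    logging.debug("processing %r", x)
    return x * x

client = ipp.Client()
view = client[:]                       # direct view over all engines
print(view.map_sync(traced_task, range(8)))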
I am implementing Kosaraju's strongly connected components (SCC) graph search algorithm in Python.
The program runs great on small data sets, but when I run it on a super-large graph (more than 800,000 nodes), it says "Segmentation fault".
What might be the cause of it? Thank you!
Additional Info:
First I got this error when running on the super-large data set:
"RuntimeError: maximum recursion depth exceeded in cmp"
Then I reset the recursion limit using
sys.setrecursionlimit(50000)
but got a 'Segmentation fault'
Believe me, it's not an infinite loop; it runs correctly on relatively smaller data. Is it possible the program exhausted the resources?
This happens when a Python extension (written in C) tries to access memory beyond its reach.
You can trace it in the following ways.
Add sys.settrace at the very first line of the code (see the sketch after the gdb commands below).
Use gdb as described by Mark in this answer. At the command prompt:
gdb python
(gdb) run /path/to/script.py
## wait for segfault ##
(gdb) backtrace
## stack trace of the c code
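For the sys.settrace route mentioned above, a minimal sketch looks like this (it prints every Python line as it executes, so the last line printed before the segfault points at the offending call; expect it to slow the program down considerably):

import sys

def trace_lines(frame, event, arg):
    if event == "line":
        print("%s:%d" % (frame.f_code.co_filename, frame.f_lineno), flush=True)
    return trace_lines

sys.settrace(trace_lines)

# ... the rest of your script runs below this point ...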
I understand you've solved your issue, but for others reading this thread, here is the answer: you have to increase the stack size that your operating system allocates for the Python process.
The way to do it is operating-system dependent. On Linux, you can check your current value with the command ulimit -s, and you can increase it with ulimit -s <new_value>.
Try doubling the previous value and continue doubling until you find one that works or you run out of memory.
A segmentation fault is a generic error; there are many possible reasons for it:
Low memory
Faulty RAM
Fetching a huge data set from the database with a query (if the fetched data is larger than swap memory)
A wrong query / buggy code
A long loop (deep or multiple recursion)
Updating the ulimit worked for my Kosaraju's SCC implementation, fixing the segfault on both the Python (a Python segfault... who knew!) and C++ implementations.
For my Mac, I found the possible maximum via:
$ ulimit -s -H
65532
A Google search brought me to this thread, and I did not see the following "personal solution" discussed.
My recent annoyance with Python 3.7 on Windows Subsystem for Linux: on two machines with the same pandas library, one gives me a segmentation fault and the other reports a warning. It was not clear which version was newer, but "re-installing" pandas solved the problem.
The command that I ran on the buggy machine:
conda install pandas
More details: I was running identical scripts (synced through Git) on two Windows 10 machines with WSL + Anaconda. On the machine where command-line Python complains about "Segmentation fault (core dumped)", JupyterLab simply restarts the kernel every single time; worse still, no warning is given at all.
Update, a few months later: I quit hosting Jupyter servers on the Windows machine. I now use WSL on Windows to forward ports opened on a remote Linux server and run all my jobs on that Linux machine. I have not experienced any execution error in a good number of months :)
I was experiencing this segmentation fault after upgrading dlib on a Raspberry Pi.
I traced back the stack as suggested by Shiplu Mokaddim above, and it settled on the OpenBLAS library.
Since OpenBLAS is itself multi-threaded, using it in a multi-threaded application will keep multiplying threads until a segmentation fault occurs. For multi-threaded applications, set OpenBLAS to single-thread mode.
In a Python virtual environment, tell OpenBLAS to use only a single thread by editing:
$ workon <myenv>
$ nano .virtualenv/<myenv>/bin/postactivate
and add:
export OPENBLAS_NUM_THREADS=1
export OPENBLAS_MAIN_FREE=1
After a reboot I was able to run all my image-recognition apps on the RPi 3B, which were previously crashing it.
reference:
https://github.com/ageitgey/face_recognition/issues/294
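If editing the postactivate hook is not convenient, the same limit can be set from inside the script, as long as it happens before numpy (or anything else linking OpenBLAS) is imported. A minimal sketch:

import os
os.environ["OPENBLAS_NUM_THREADS"] = "1"   # must be set before the import below

import numpy as np   # OpenBLAS reads the variable when it is loaded
print(np.dot(np.ones((512, 512)), np.ones((512, 512))).sum())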
Looks like you are out of stack memory. You may want to increase it as Davide stated. To do it in Python code, you would need to run your "main()" using threading:
import sys
import threading

def main():
    pass  # write your code here

sys.setrecursionlimit(2097152)    # adjust numbers
threading.stack_size(134217728)   # for your needs

main_thread = threading.Thread(target=main)
main_thread.start()
main_thread.join()
Source: c1729's post on Codeforces. Running it with PyPy is a bit trickier.
I'd run into the same error. I learned from another SO answer that you need to set the recursion limit through the sys and resource modules.
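Roughly what that looks like (a minimal sketch, assuming a Unix-like OS; raising the soft stack limit up to the existing hard limit normally needs no special privileges):

import resource
import sys

soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))   # lift the soft limit to the hard cap
sys.setrecursionlimit(10**6)                              # pick a value suited to your input size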
I am running a stand-alone Python v3.2.2/Tkinter program on Windows that does not call any external libraries. IDLE has been very helpful in reporting exceptions, and the program has been debugged to the point where none are reported. However, the Python interpreter does occasionally crash at non-deterministic times: operations will run fine for a while and then suddenly hang. The crash triggers the standard Windows non-responding-process dialog asking if I want to send a crash dump to Microsoft:
"pythonw.exe has encountered a problem and needs to close.
We are sorry for the inconvenience."
Crash reporting in Python says that the interpreter itself rarely crashes. My question is: no matter how many mistakes there are in a python script, is there any way it should in theory be able to crash the interpreter? Since there are no exceptions being reported and the crashes happen at random times, it's hard to narrow down. But if the interpreter is in theory supposed to be crash-proof, then something I'm doing is triggering a bug.
The code (a scrolling strip-chart demonstration) is posted at What is the best real time plotting widget for wxPython?. It has 3 buttons: Run, Stop, Reset. To cause a crash, just press the buttons in random order for a minute or so. With no interaction, the demo will run forever without crashing.
Of course, the goal is for something like Python to never crash. Alas, we live in an imperfect world. A more useful question to ask, I think, is "What should I do if Python crashes?"
If you want to help make a more perfect world, first make a quick search at the Python issue tracker to see if a similar problem has already been reported and possibly fixed in a newer or as yet unreleased version of Python. If not, see if you can find a way to reproduce the problem, with clear directions about the steps involved, what OS platform and version, and what versions of Python and third-party libraries, as applicable. Then open a new issue with all the details.
Keep in mind that Python, like many open source projects, is an all-volunteer project, so there can be no guarantee when or if the problem will be more deeply investigated or solved (most issues are resolved eventually), but you can be happy that you have done your part and likely saved someone (maybe many people) time and trouble. If you want other opinions before opening an issue, you could ask about it on the python-list mailing list/newsgroup.
Python indeed isn't 100% crash-proof, especially when you are using external libraries, which Tkinter is.
There is even a page dedicated to it: http://wiki.python.org/moin/CrashingPython
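One small thing that helps on newer Pythons (3.3 and later) is the faulthandler module: it cannot prevent a crash, but it dumps the Python traceback when the interpreter dies from a low-level fault, which narrows down where to look:

import faulthandler
faulthandler.enable()   # print the Python traceback on SIGSEGV, SIGABRT, etc.

# ... rest of the Tkinter program ...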