debugging: how to check where my Python program is hanging? - python

A fairly large Python program I wrote runs, but sometimes, after running for minutes or hours, at a moment that is not easily reproducible, it hangs and outputs nothing to the screen.
I have no idea what it is doing at that moment, and in what part of code it is.
How can I run this in a debugger or something to see what lines of code the program is executing at the moment it hangs?
It's too large to put "print" statements all over the place.
I did:
python -m trace --trace /usr/local/bin/my_program.py
but that gives me so much output that I can't really see anything, just millions of lines scrolling on the screen.
Best would be if I could send some signal to the program with "kill -SIGUSR1" or something, and at that moment the program would drop into a debugger and show me the line it stopped at and possibly allow me to step through the program then.
I've tried:
pdb /usr/local/bin/my_program.py
and then:
(Pdb) cont
but what do I do to see where I am when it hangs?
It doesn't throw an exception; it just seems to be waiting for something, possibly in an infinite loop.
One more detail: when the program hangs and I press ^C, the program then continues normally, without throwing any exception and without giving me any hint on the screen as to why it stopped.
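
One way to get roughly the kill -SIGUSR1 behaviour described above is to install a signal handler that starts pdb on the current frame. This is only a minimal sketch, assuming a Unix system; the handler name is made up, and the handler has to be added to the program before it hangs:

import pdb
import signal

def debug_signal_handler(signum, frame):
    # Drop into the debugger at whatever line was executing when the signal arrived
    pdb.Pdb().set_trace(frame)

# Register the handler early, e.g. near the top of the script
signal.signal(signal.SIGUSR1, debug_signal_handler)

After that, kill -SIGUSR1 <pid> from another terminal should land you at a (Pdb) prompt showing the current line, from which you can step or inspect the stack with "where" (the process needs a terminal attached for the interactive prompt to work).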

This could be useful to you. I usually do
>>> import pdb
>>> import program2debug
>>> pdb.run('program2debug.test()')
I usually add a -v option to my programs, which enables tons of print statements explaining what I'm doing in detail. When you write a program in the future, consider doing the same before it gets thousands of lines big.
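
A minimal sketch of that kind of -v flag, using argparse (the option name follows the answer above; the helper and the messages are made up for illustration):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true",
                    help="explain in detail what the program is doing")
args = parser.parse_args()

def vprint(*parts):
    # Progress messages only appear when -v was given
    if args.verbose:
        print(*parts)

vprint("reading configuration ...")
vprint("entering main processing loop ...")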

You could try running it in debug mode in an IDE like PyDev (Eclipse) or PyCharm. You can pause the program at any moment and see its current execution point.

No program is ever too big to put print statements all over the place. You need to read up on the logging module and insert lots of logging.debug() statements. This is just a better form of print statement that outputs to a file, and can be turned off easily in production software. But years from now, when you need to modify the code, you can easily turn it all back on and get the benefit of the insight of the original programmer.
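
For example, a minimal setup along those lines could look like this (the file name and message are placeholders); raising the level to logging.WARNING effectively switches the debug output off again:

import logging

# Write debug messages to a file instead of the screen
logging.basicConfig(filename="my_program.log",
                    level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")

logging.debug("starting analysis of item %d", 42)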

Related

Python flush printing many times

I am running some Python (PyTorch) code through Slurm. This is my first time using Slurm. I have a lot of print statements in my code for status updates, but they aren't printing to the output file that I specify. I think the issue is that Python buffers its output. However, when I use the -u flag, or set flush=True in some of the print statements, it prints the same thing many times, which is very confusing, and I am very unsure why this is happening.
Any suggestions? Because I can't really debug my code without it. Thanks!
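
For reference, the in-code side of the buffering workaround mentioned above looks like this (the message text is made up, and this does not by itself explain the duplicated lines):

import sys

print("epoch finished, saving checkpoint", flush=True)  # push this line out immediately
sys.stdout.flush()  # or flush everything written so far in one go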

Implementing a dead man's switch for running code

I have some Python code that's a bit buggy. It runs in an infinite loop and I expect it to print something about once every 50 milliseconds, but sometimes it hangs and stops printing or even outright segfaults. Unfortunately, the problem seems to be fairly rare, so I've had trouble nailing down the exact cause of the issue. I want to have the code up and running while I debug the problem, so while I try to figure it out, I'd like to create a dead man's switch that runs my code, but stops if the code doesn't print anything in a certain time frame (say, 5 seconds) or exits and finally executes a command to notify me that something went wrong (e.g. 'spd-say terminated').
I put this into the terminal for my first attempt at this:
python alert.py; spd-say terminated;
Unfortunately, it didn't seem to work - at this point I realized that the code was not only crashing but also hanging (and I'm also not sure whether this would even work if the code crashes). Unfortunately, I'm not very familiar with bash yet (I assume that's what I'm using when I run stuff in the terminal), and I'm not sure how I could set up something like what I want. I'm also open to using other things besides bash to do this if it would be particularly difficult to do for some reason.
What would be the best way to implement what I want to do?
You could run two Python programs with a pipe between them. On one side you have your buggy script writing something to the pipe at least once every 5 seconds. On the receiving end of the pipe you have a very simple script that checks how long it has been since it last received anything; if that gap ever exceeds 5 seconds, it runs your notification command.
This way you decouple your watchdog from your buggy script.
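
A rough sketch of the receiving end of such a pipeline, assuming it is wired up as python buggy.py | python watchdog.py and that the notification is the spd-say command from the question (the script names are made up):

import os
import select
import subprocess
import sys

TIMEOUT = 5  # seconds of silence allowed before the switch fires
fd = sys.stdin.fileno()

while True:
    readable, _, _ = select.select([fd], [], [], TIMEOUT)
    if not readable:
        break                     # nothing arrived for TIMEOUT seconds: assume a hang
    if os.read(fd, 4096) == b"":
        break                     # end of file: the producer exited or crashed

subprocess.run(["spd-say", "terminated"])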

Debug a Python program which seems paused for no reason

I am writing a Python program to analyze log files. So basically I have about 30000 medium-size log files and my Python script is designed to perform some simple (line-by-line) analysis of each log file. Roughly it takes less than 5 seconds to process one file.
So once I set up the processing, I just left it there, and after about 14 hours when I came back, my Python script had simply paused right after analyzing one log file; it seems it hadn't written the analysis output for that file to the file system, and that's it. No further progress.
I checked the memory usage and it seems fine (less than 1 GB). I also tried writing to the file system (a touch test), and that also works as normal. So my question is: how should I proceed to debug the issue? Could anyone share some thoughts on that? I hope this is not too general. Thanks.
You may use the trace module (Trace or track Python statement execution) and/or the pdb module (The Python Debugger).
Try this tool https://github.com/khamidou/lptrace with the command:
sudo python lptrace -p <process_id>
It will print every Python function your program invokes and may help you understand where your program is stuck or whether it is in an infinite loop.
If it does not output anything, your program is probably stuck, so try
pstack <process_id>
to check the stack trace and find out where it is stuck. The output of pstack shows C frames, but I believe you can still find something useful there to solve your problem.
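
A related stdlib option, offered here as an extra suggestion rather than part of the answer above: if you can restart the script with a couple of lines added, the faulthandler module will dump the Python stack of every thread whenever it receives a chosen signal:

import faulthandler
import signal

# On SIGUSR1, print the current Python traceback of all threads to stderr
faulthandler.register(signal.SIGUSR1, all_threads=True)

Afterwards, kill -USR1 <process_id> shows where each thread is at that moment, without stopping the program.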

Simultaneous Python and C++ run with read and write files

So this one is a doozie, and a little too specific to find an answer online.
I am writing to a file in C++ and reading that file in Python at the same time to move a robot. Or trying to.
When I try running both programs at the same time, the C++ one runs first and then the Python one runs.
Here's the command I use:
./ColorFollow & python fileToHex.py
This happens even if I switch the order of commands.
Even if I run them in different terminals (which is the same thing, just covering all bases).
Both the Python and C++ code read / write in 'infinite' loops, so these two should run until I say stop.
The code works fine; when the Python script finally runs the robot moves as intended. It's just that the code doesn't run at the same time.
Is there a way to make this happen, or is this impossible?
If you need more information, lemme know, but the code is pretty much what you'd expect it to be.
If you are using Linux, & sends the first command to the background, so in this case ColorFollow and fileToHex.py will run as separate, concurrent processes.
At the same time, the composition ./ColorFollow | python fileToHex.py looks interesting, because you redirect the stdout of ColorFollow to the stdin of fileToHex.py - this can synchronize the scripts: ColorFollow prints some code string upon exit, and fileToHex.py reads it and exits as well.
I would create some empty file like /var/run/ColorFollow.flag and write 1 to it when one of the processes exits. Not a pipe, because we do not care which process starts first. So, if the next loop step of ColorFollow sees 1 in the file, it deletes the file and exits (which means fileToHex has already exited). The same goes for fileToHex: check for the flag file on each loop step and, if it exists, delete it and exit.
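
On the Python side, the per-loop check described above might look roughly like this (the path comes from the answer; the rest is a sketch of one possible arrangement):

import os

FLAG = "/var/run/ColorFollow.flag"
other_exited = False

try:
    while True:
        # ... normal per-iteration work: read the file, move the robot ...
        if os.path.exists(FLAG):
            os.remove(FLAG)      # ColorFollow exited first and raised the flag
            other_exited = True
            break
finally:
    if not other_exited:
        # We are stopping first, so raise the flag for ColorFollow to see
        with open(FLAG, "w") as f:
            f.write("1")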

Large dataset - no error - but it won't run - Python memory issue?

So I am trying to run various large images, which get put into an array using numpy so that I can then do some calculations. The calculations are done per image, and the opening and closing of each image is done in a loop. I have reached a point of frustration because I have no errors in the code (well, none to my knowledge, nor any that Python is complaining about), and as a matter of fact my code runs for one loop iteration and then simply does not run for the second, third, or subsequent iterations.
I get no errors! No memory error, no syntax error, no nothing. I have used Spyder and even IDLE, and it simply runs all the calculations, sometimes only for one image, sometimes for two, then it just quits the loop (again WITH NO ERROR) as if it had completed running for all images (when it has only run for one or two).
I am assuming it's a memory error? I mean, it runs one loop, sometimes two, but never the rest, so...
I have attempted to clear the tracebacks using this:
sys.exc_clear()
sys.exc_traceback = sys.last_traceback = None
I have also tried to delete each variable when I am done with it,
i.e. del variable
However, nothing seems to fix it.
Any ideas of what could be wrong would be appreciated!
The exit code of the python process should reveal the reason for the process exiting. In the event of an adverse condition, the exit code will be something other than 0. If you are running in a Bash shell or similar, you can run "echo $?" in your shell after running Python to see its exit status.
If the exit status is indeed 0, try putting some print statements in your code to trace the execution of your program. In any case, you would do well to post your code for better feedback.
Good luck!
