How do you debug pydoit? - python

I have a python doit script that is getting stuck on one step but doesn't throw an error. It would sit all day if I let it. I've checked all the inputs and they look exactly the same as the last time I ran it. How do I debug? I tried using pdb, but maybe I don't know how to use it; I googled and couldn't find example code. I can't post my code since it's confidential. Just a general how-to on debugging in doit would help me greatly. I use Python 2.7, and yes, eventually I'll have to update to 3, but for now I'm using 2.7. (Sorry, I have had quite a few people ask why I continue with 2.7; no time right now to update all my scripts, there are over 200.)

https://pydoit.org/tools.html#set-trace
doit provides a set_trace() function that calls PDB's set_trace and makes sure stdout output is printed on the terminal.
Not your case, but doit also provides a command-line option --pdb that automatically drops into PDB when an unhandled exception occurs.
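For example, a minimal sketch of dropping into the debugger from inside the stuck task (the task and function names here are made up):
from doit.tools import set_trace

def process_data():
    set_trace()  # execution pauses here; pdb commands work normally on the terminal
    # ... the step that appears to hang ...
    return True

def task_process():
    return {'actions': [process_data]}
Then run the task as usual (doit process in this sketch) and step through it from the pdb prompt to see where it blocks.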

Related

jit-lock-function enters infinite loop in inferior Python

Since the last update, whenever I try M-x run-python the Python shell doesn't start properly. I need to kill the process that is starting it, otherwise Emacs is locked. If I then try to type into the shell, the typing works, but as soon as I need to evaluate something, the results will not show up unless I interrupt the process with C-g. The message buffer shows this:
Error during redisplay: (jit-lock-function 468) signaled (quit)
Is this a known problem? Where should I look for the source of the problem?
A bug report is probably most helpful:
M-x report-emacs-bug

unpredictable behaviour with python subprocess calls

I'm writing a python script that performs a series of operations in a loop, by making subprocess calls, like so:
os.system('./svm_learn -z p -t 2 trial-input model')
os.system('./svm_classify test-input model pred')
os.system('python read-svm-rank.py')
score = os.popen('python scorer.py -g gold-test -i out').readline()
When I make the calls individually one after the other in the shell they work fine. But within the script they always break. I've traced the source of the error, and it seems that the output files are getting truncated towards the end (leading me to believe that later calls are being made before the previous ones have completed).
I tried with subprocess.Popen and then using the wait() method of the Popen object, but to no avail. The script still breaks.
Any ideas what's going on here?
I'd probably first rewrite a little to use the subprocess module instead of the os module.
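For example, a sketch of that rewrite using the same commands from the question. check_call waits for each command to finish and raises CalledProcessError on a non-zero exit status, so a failing step surfaces instead of silently producing truncated output:
import subprocess

subprocess.check_call(['./svm_learn', '-z', 'p', '-t', '2', 'trial-input', 'model'])
subprocess.check_call(['./svm_classify', 'test-input', 'model', 'pred'])
subprocess.check_call(['python', 'read-svm-rank.py'])
# check_output (Python 2.7+) waits for the scorer and returns its stdout
score = subprocess.check_output(
    ['python', 'scorer.py', '-g', 'gold-test', '-i', 'out']).splitlines()[0]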
Then I'd probably scrutinize what's going wrong by studying a system call trace:
http://stromberg.dnsalias.org/~strombrg/debugging-with-syscall-tracers.html
Hopefully there'll be an "E" error code near the end of the file that'll tell you what error is being encountered.
Another option would be to comment out subsets of your subprocesses (assuming the n+1th doesn't depend heavily on the output of the nth), to pin down which one of them is having problems. After that, you could sprinkle some extra error reporting in the offending script to see what it's doing.
But if you're not put off by C-ish syscall traces, that might be easier.

Python script drops into pdb without reason

I have a python function that I'm calling from inside an IPython session.
In a very specific situation, in which a conditional in a certain line comes out as True, the script consistently drops into a pdb debug mode.
There is no trace or any other indication of a problem with the code, and as soon as I type c to continue, the code continues perfectly well.
The script doesn't include any import pdb, let alone a set_trace()...
Any ideas what could account for this?
Depending on your IPython config, it automatically goes into PDB if an exception is raised.
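For instance, you can check or toggle that behaviour from the IPython prompt with the %pdb magic (this is IPython's own setting, not something in your script):
%pdb        # with no argument, toggles automatic pdb on uncaught exceptions
%pdb on     # always drop into pdb when an exception is raised
%pdb off    # turn it off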
Seems like there was an import pdb; pdb.set_trace() line in the code after all, which I missed due to source control issues.

debugging: how to check where my Python program is hanging?

A fairly large Python program I wrote runs, but sometimes, after running for minutes or hours, at a moment that is not easily reproducible, it hangs and outputs nothing to the screen.
I have no idea what it is doing at that moment, and in what part of code it is.
How can I run this in a debugger or something to see what lines of code the program is executing at the moment it hangs?
It's too large to put "print" statements all over the place.
I did:
python -m trace --trace /usr/local/bin/my_program.py
but that gives me so much output that I can't really see anything, just millions of lines scrolling on the screen.
Best would be if I could send some signal to the program with "kill -SIGUSR1" or something, and at that moment the program would drop into a debugger and show me the line it stopped at, and possibly allow me to step through the program from there (a sketch of this idea appears below).
I've tried:
pdb /usr/local/bin/my_program.py
and then:
(Pdb) cont
but what do I do to see where I am when it hangs?
It doesn't throw an exception; it just seems like it waits for something, possibly in an infinite loop.
One more detail: when the program hangs and I press ^C (not sure if that is necessary), the program continues normally, without throwing any exception and without giving me any hint on the screen as to why it stopped.
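A sketch of the "kill -SIGUSR1" idea described in the question, using the standard signal and pdb modules (the handler name is made up):
import pdb
import signal

def drop_into_debugger(sig, frame):
    # Start pdb at whatever frame was executing when the signal arrived,
    # so you can see where the program is stuck and step from there.
    pdb.Pdb().set_trace(frame)

signal.signal(signal.SIGUSR1, drop_into_debugger)
With that handler installed near the top of the program, running kill -SIGUSR1 <pid> from another terminal drops the hung process into pdb on its controlling terminal.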
This could be useful to you. I usually do
>>> import pdb
>>> import program2debug
>>> pdb.run('program2debug.test()')
I usually add a -v option to my programs, which enables tons of print statements explaining what I'm doing in detail. When you write a program in the future, consider doing the same before it gets thousands of lines big.
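A minimal sketch of such a flag, using argparse (the flag wiring and messages are illustrative):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-v', '--verbose', action='store_true',
                    help='print detailed progress information')
args = parser.parse_args()

def vprint(msg):
    # Only print progress details when -v was given on the command line.
    if args.verbose:
        print(msg)

vprint('loading input files...')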
You could try running it in debug mode in an IDE like pydev (eclipse) or pycharm. You can break the program at any moment and get to its current execution point.
No program is ever too big to put print statements all over the place. You need to read up on the logging module and insert lots of logging.debug() statements. This is just a better form of print statement that outputs to a file, and can be turned off easily in production software. But years from now, when you need to modify the code, you can easily turn it all back on and get the benefit of the insight of the original programmer.
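A sketch of that logging setup (the file name and format are just examples):
import logging

# Send debug-level messages to a file; raise the level to WARNING later
# to silence them in production without touching the call sites.
logging.basicConfig(filename='my_program.log',
                    level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')

logging.debug('entering the main processing loop')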

Using a debugger and curses at the same time?

I'm calling python -m pdb myapp.py; when an exception fires, I'd normally be thrown back into the pdb interpreter to investigate the problem. However, this exception is being thrown after I've called through curses.wrapper() and entered curses mode, rendering the pdb interpreter useless. How can I work around this?
James' answer is a good one and I've upvoted it, but I'd also consider trying to split the logic and presentation layers of my program. Keep the curses part a thin layer on top of a library, and write a simple driver that invokes the correct routines to recreate the error. Then you can dive in and do what's necessary.
Another way I can think of is to create a function called debug or something that throws you back into the regular screen and invokes pdb. Then stick it just before the code that raises the exception and run your program. Something like
def debug(stdscr):
    # Restore the terminal to a normal state before handing control to pdb.
    curses.nocbreak()
    stdscr.keypad(0)
    curses.echo()
    curses.endwin()
    import pdb; pdb.set_trace()
Apparently, this is similar to what is done with the curses.wrapper function. It's mentioned briefly at http://www.amk.ca/python/howto/curses/.
I'm not very familiar with Python, so this may not be exactly what you want, but apparently winpdb can attach to a script, just like gdb can attach to a running process (IIUC).
http://winpdb.org/docs/launch-time/
Don't be misled by the name; it is platform-independent.
Use pyclewn. You can use pyclewn with vim, or use pdb-clone, the core of pyclewn.
It's good; it works like gdb and can do remote debugging.
