I'd like to run some Python code in debugger mode in PyCharm. My code includes an API function call, and for some reason, that single function call takes forever in debugger mode.
I really don't care about debugging that specific function, and having the debugger skip over it (running it as it would in regular run mode) is fine. However, I'd like to run the rest of my code in debug mode.
Is this doable in PyCharm or is there any Python workaround?
# some code to be run in debugger mode, e.g.
func_a(obj_a)  # this function modifies obj_a

# some API function call, super slow in debugger mode. can I just run this part in run mode? e.g.
obj_b = api_func(obj_a)

# rest of the code to be run in debugger mode, e.g.
func_c(obj_b)
Potentially you could use sys.gettrace and sys.settrace to remove the debugger while your API call runs, though it's not recommended, and PyCharm will complain at you if you do:
PYDEV DEBUGGER WARNING:
sys.settrace() should not be used when the debugger is being used.
This may cause the debugger to stop working correctly.
If this is needed, please check:
http://pydev.blogspot.com/2007/06/why-cant-pydev-debugger-work-with.html
to see how to restore the debug tracing back correctly.
In your case, you'd do something like this:
import sys

# some code to be run in debugger mode, e.g.
func_a(obj_a)  # this function modifies obj_a

# Remove the trace function (but keep a reference to it).
_trace_func = sys.gettrace()
sys.settrace(None)
try:
    # The API call now runs without the debugger's tracing overhead.
    obj_b = api_func(obj_a)
finally:
    # Put the trace function back, even if the API call raises.
    sys.settrace(_trace_func)

# rest of the code to be run in debugger mode, e.g.
func_c(obj_b)
I would strongly recommend keeping the code you run while the debugger is disabled as short as possible.
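If you need this in more than one place, it's tidier to wrap the save/restore logic in a context manager. This is just a sketch of the same sys.gettrace/sys.settrace trick (debugger_disabled is a name I've made up, not a PyCharm feature); the try/finally guarantees the trace function is reattached even if the call raises:

import sys
from contextlib import contextmanager

@contextmanager
def debugger_disabled():
    # Detach the debugger's trace function for the duration of the block.
    trace_func = sys.gettrace()
    sys.settrace(None)
    try:
        yield
    finally:
        # Always reattach, even if the block raised an exception.
        sys.settrace(trace_func)

# Usage:
with debugger_disabled():
    obj_b = api_func(obj_a)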
You can right-click on a breakpoint and set a condition, so it only triggers when that condition evaluates to true.
Related
When using pdb to debug Python code, I often wish that commands such as next, return, and until would show the time it takes to run until the next time pdb breaks.
Is it possible to change a setting or write a plugin to make this happen?
I am new to Python and am unsure of how the breakpoint method works. Does it open the debugger for the IDE or some built-in debugger?
Additionally, I was wondering how that debugger would be operated.
For example, I use Spyder. Does that mean that if I use the breakpoint() method, Spyder's debugger will open, which I could control through the Debugger dropdown menu, or would some other debugger open?
I would also like to know how this function works in conjunction with the breakpointhook() method.
No, the debugger will not open automatically as a consequence of setting a breakpoint.
You first set a breakpoint (or several of them), and then manually launch the debugger.
After this, the debugger executes your code as usual, but stops when it reaches a breakpoint. It does not execute the instruction at the breakpoint itself; it pauses just before it, giving you an opportunity to perform debugging tasks such as:
inspect variable values,
set variables manually to other values,
continue performing instructions step by step (i.e. only the next instruction),
continue performing instructions to the next breakpoint,
prematurely stop debugging your program.
This is the common scenario for all debuggers of all programming languages (and their IDEs).
In IDEs, launching a debugger will
enable or reveal debugging commands in the menu system,
show a toolbar for them, and
enable hotkeys for them.
Without at least one breakpoint set, most debuggers run the whole program without pausing (as if it were launched without a debugger), so you will have no opportunity to perform any debugging task.
(Some IDEs have an option to launch a debugger in the "first instruction, then a pause" mode, so you need not set breakpoints in advance in this case.)
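As a concrete illustration, here is roughly what that workflow looks like in pdb, Python's default command-line debugger (myscript.py and some_var are placeholders; IDE debuggers expose these same actions as menu items, toolbar buttons, and hotkeys):

$ python -m pdb myscript.py
(Pdb) b 12           # set a breakpoint at line 12
(Pdb) c              # run until the breakpoint is reached
(Pdb) p some_var     # inspect a variable's value
(Pdb) !some_var = 0  # set a variable manually to another value
(Pdb) n              # execute only the next instruction (step over)
(Pdb) s              # step into the next instruction
(Pdb) c              # continue to the next breakpoint
(Pdb) q              # prematurely stop debugging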
Yes, the breakpoint() built-in function (introduced in Python 3.7) stops execution of your program, enters debugging mode, and you may use Spyder's debugger drop-down menu.
(It isn't Spyder's own debugger, only its drop-down menu; the debugger used is still pdb, i.e. the default Python debugger.)
The connection between the breakpoint() built-in function and the breakpointhook() function (from the built-in sys module) is very straightforward: the first one directly calls the second one.
The natural question is why we need two functions with exactly the same behavior.
The answer lies in the design: the behavior of breakpoint() may be changed indirectly, by changing the behavior of the breakpointhook() function.
For example, IDE creators may change the behavior of breakpointhook() so that it launches their own debugger instead of pdb.
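A minimal sketch of that mechanism (my_debugger_hook is a made-up name; a real IDE would start its own debugger inside the hook):

import sys

def my_debugger_hook(*args, **kwargs):
    # breakpoint() forwards its arguments here; a real IDE would
    # launch its debugger at this point. We just log the call.
    print("breakpoint() was hit!")

sys.breakpointhook = my_debugger_hook

breakpoint()  # now prints the message instead of entering pdb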
The default behavior of the breakpoint() builtin is to open the pdb debugger at that point.
That is, by default the line
breakpoint()
should behave identically to
import pdb; pdb.set_trace()
The behavior can be customized (e.g. to open a different debugger) by modifying sys.breakpointhook. Generally the only time you would do this is if you were implementing a debugger or something that functioned like a debugger. If you're running code from an IDE, the IDE itself should modify sys.breakpointhook so that it opens the IDE debugger. (I don't know if all Python IDEs actually do this, but they should.)
For more information, including the rationale of why this function was added, see the PEP 553 proposal. The actual implementation was landed into Python 3.7.
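PEP 553 also added the PYTHONBREAKPOINT environment variable, which the default sys.breakpointhook consults on each call (a custom hook is free to ignore it). For example:

import os

# Disable all breakpoint() calls, e.g. for a production run
# (only honored by the default hook):
os.environ["PYTHONBREAKPOINT"] = "0"
breakpoint()  # now a no-op

# Or point it at any importable callable to swap in another debugger:
os.environ["PYTHONBREAKPOINT"] = "pdb.set_trace"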
I'm using Pycharm and playing with the profiler it has built in. I've keyed in on some areas where my code can be optimized but I was wondering if there was a way to step through the code and see how long each line took to execute as I stepped through without having to rerun all my code in the profiler.
I think the closest you could do is put a breakpoint,
then open up the debugger and enter console mode,
and execute the statement like this:
import time; started = time.time(); my_function(); print("Took %0.2fs" % (time.time() - started))
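If you want per-function detail rather than a single wall-clock number, you could also profile just that one call from the same console; here my_function stands in for whatever you're measuring:

import cProfile

# Profile only this call; output is sorted by cumulative time.
cProfile.run("my_function()", sort="cumulative")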
I have a caller.py which repeatedly calls routines from some_c_thing.so, which was created from some_c_thing.c. When I run it, it segfaults - is there a way for me to detect which line of c code is segfaulting?
This might work:
make sure the native library is compiled with debug symbols (-g switch for gcc).
Run python under gdb and let it crash:
gdb --args python caller.py
run # tell gdb to run the program
# script runs and crashes
bt # print backtrace, which should show the crashing line
If the crash happens in the native library's code, this should reveal the offending line.
If the native library merely corrupts something or violates a postcondition, and the crash happens later in the Python interpreter's code, then this will not help. In that case your options are: code review; adding debug prints (first log entry and exit of each C function to find the last C function called before the crash, then add finer-grained logging for variable values, etc.); and finally using a debugger with the usual techniques (breakpoints, stepping, watches...).
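On the Python side, the standard library's faulthandler module can at least pinpoint which line of caller.py triggered the crashing call (it cannot show the C line, but it narrows the search):

import faulthandler

# On a segfault, dump the Python-level traceback to stderr before dying.
faulthandler.enable()

The same can be enabled without editing the script by running python -X faulthandler caller.py.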
Take Python and the .so file(s) out of the equation. See what params are being passed, if any, and call the routines from a debugger capable of stepping through C code and binaries.
There are articles describing a simple command-line C debugging process, in case you're not familiar with debugging C, as well as guides for debugging C with NetBeans and Eclipse.
This could help: gdb: break in shared library loaded by python (might also turn out to be a dupe)
A segfault often means the number or types of the arguments you passed to that C function (in the .so) don't match what it expects. If they aren't aligned with the C signature, you usually get a segfault.
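For instance, if the library is loaded with ctypes (an assumption; the question doesn't say how it's loaded, and some_routine is a made-up name), declaring the expected C signature lets ctypes raise a Python error on mismatched arguments instead of silently passing garbage to C:

import ctypes

lib = ctypes.CDLL("./some_c_thing.so")

# Declare the C signature; ctypes will then raise ArgumentError on
# mismatched arguments rather than corrupting memory and segfaulting.
lib.some_routine.argtypes = [ctypes.c_int, ctypes.c_char_p]
lib.some_routine.restype = ctypes.c_int

result = lib.some_routine(42, b"hello")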
I have a python function that I'm calling from inside an iPython session.
In a very specific situation, in which a conditional on a certain line comes out as True, the script consistently drops into pdb debug mode.
There is no trace or any other indication of a problem with the code, and as soon as I type c to continue, the code continues perfectly well.
The script doesn't include any import pdb, let alone a set_trace()...
Any ideas what could account for this?
Depending on your IPython config, it automatically drops into pdb when an exception is raised.
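You can check and change that behavior with the %pdb magic from within the IPython session:

%pdb       # toggle automatic pdb on uncaught exceptions
%pdb on    # turn it on explicitly
%pdb off   # turn it off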
Seems like there was a import pdb; pdb.set_trace() line in the code after all, which I missed due to source control issues.