I am new to python and am unsure of how the breakpoint method works. Does it open the debugger for the IDE or some built-in debugger?
Additionally, I was wondering how that debugger would be able to be operated.
For example, I use Spyder - does that mean that if I use the breakpoint() method, Spyder's debugger will open, through which I could use the Debugger drop-down menu, or would some other debugger open?
I would also like to know how this function works in conjunction with the breakpointhook() method.
No, the debugger will not open automatically as a consequence of setting a breakpoint.
So you first set a breakpoint (or several), and then manually launch a debugger.
After this, the debugger will execute your code as usual, but will stop when it reaches a breakpoint - it will not execute the instruction at the breakpoint itself. It pauses just before it, giving you an opportunity to perform debugging tasks, such as:
inspect variable values,
set variables manually to other values,
continue executing instructions step by step (i.e. only the next instruction),
continue performing instructions to the next breakpoint,
prematurely stop debugging your program.
This is the common scenario for all debuggers of all programming languages (and their IDEs).
For IDEs, launching a debugger will
enable or reveal debugging commands in their menu system,
show a toolbar for them, and
enable hot keys for them.
Without setting at least one breakpoint, most debuggers perform the whole program without a pause (as launching it without a debugger), so you will have no opportunity to perform any debugging task.
(Some IDEs have an option to launch a debugger in the "first instruction, then a pause" mode, so you need not set breakpoints in advance in this case.)
Yes, the breakpoint() built-in function (introduced in Python 3.7) stops execution of your program, enters debugging mode, and you may use Spyder's Debugger drop-down menu.
(It isn't Spyder's own debugger, only its drop-down menu; the debugger used will still be pdb, i.e. the default Python debugger.)
The connection between the breakpoint() built-in function and the breakpointhook() function (from the built-in sys module) is very straightforward - the first one directly calls the second one.
The natural question is: why do we need two functions with exactly the same behavior?
The answer is in the design - the breakpoint() function may be changed indirectly, by changing the behavior of the breakpointhook() function.
For example, IDE creators may change the behavior of the breakpointhook() function so that it launches their own debugger instead of pdb.
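A minimal sketch of that mechanism (not Spyder's actual hook): replace sys.breakpointhook with a function that merely logs the call site instead of launching pdb. IDEs install something similar that starts their own debugger.

```python
import sys

# Hypothetical replacement hook: log where breakpoint() was hit
# instead of launching a debugger.
def logging_hook(*args, **kwargs):
    caller = sys._getframe(1)  # the frame that called breakpoint()
    print(f"breakpoint() hit at line {caller.f_lineno}")

sys.breakpointhook = logging_hook

x = 41 + 1
breakpoint()  # now calls logging_hook; execution simply continues afterwards
print("execution continues:", x)

sys.breakpointhook = sys.__breakpointhook__  # restore the default (pdb)
```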
The default behavior of the breakpoint() builtin is to open the pdb debugger at that point.
That is, by default the line
breakpoint()
should behave identically to
import pdb; pdb.set_trace()
The behavior can be customized (e.g. to open a different debugger) by modifying sys.breakpointhook. Generally the only time you would do this is if you were implementing a debugger or something that functioned like a debugger. If you're running code from an IDE, the IDE itself should modify sys.breakpointhook so that it opens the IDE debugger. (I don't know if all Python IDEs actually do this, but they should.)
For more information, including the rationale of why this function was added, see the PEP 553 proposal. The actual implementation was landed into Python 3.7.
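PEP 553 also defines the PYTHONBREAKPOINT environment variable, which controls what breakpoint() does without touching the code. A quick demonstration, run in a subprocess so the variable only affects the child script:

```python
import os
import subprocess
import sys

# PYTHONBREAKPOINT=0             -> breakpoint() becomes a no-op
# PYTHONBREAKPOINT=some.callable -> breakpoint() calls that instead of pdb
script = "breakpoint(); print('finished without entering pdb')"

env = dict(os.environ, PYTHONBREAKPOINT="0")
result = subprocess.run([sys.executable, "-c", script],
                        env=env, capture_output=True, text=True)
print(result.stdout.strip())  # finished without entering pdb
```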
Related
I'd like to run some Python code in debugger mode in PyCharm. My code includes an API function call, and for some reason that single function call takes forever in debugger mode.
I really do not care about debugging that specific function, and having the debugger skip over that function (only running it in regular mode) is fine. However, I'd like to be able to run the rest of my code in debug mode.
Is this doable in PyCharm, or is there any Python workaround?
# some code to be run in debugger mode, e.g.
func_a(obj_a)  # this function modifies obj_a
# some API function call, super slow in debugger mode. can I just run this part in run mode? e.g.
obj_b = api_func(obj_a)
# rest of the code to be run in debugger mode e.g.
func_c(obj_b)
Potentially you could use sys.gettrace and sys.settrace to remove the debugger while your API call runs, though it's not recommended, and PyCharm will complain at you if you do:
PYDEV DEBUGGER WARNING:
sys.settrace() should not be used when the debugger is being used.
This may cause the debugger to stop working correctly.
If this is needed, please check:
http://pydev.blogspot.com/2007/06/why-cant-pydev-debugger-work-with.html
to see how to restore the debug tracing back correctly.
In your case, you'd do something like this:
import sys

# some code to be run in debugger mode, e.g.
func_a(obj_a)  # this function modifies obj_a

# Remove the trace function (but keep a reference to it).
_trace_func = sys.gettrace()
sys.settrace(None)

# The slow API call now runs without debug tracing.
obj_b = api_func(obj_a)

# Put the trace function back.
sys.settrace(_trace_func)

# rest of the code to be run in debugger mode, e.g.
func_c(obj_b)
I would strongly recommend keeping the code you run while the debugger is disabled as short as possible.
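If you need this in more than one place, the save/restore pair can be wrapped in a context manager so the trace function is restored even if the API call raises (a sketch; api_func and obj_a are the hypothetical names from the question):

```python
import sys
from contextlib import contextmanager

@contextmanager
def debugger_disabled():
    # Save the active trace function (None when no debugger is attached),
    # remove it for the duration of the block, then restore it.
    saved = sys.gettrace()
    sys.settrace(None)
    try:
        yield
    finally:
        sys.settrace(saved)

# Hypothetical usage around the slow call:
# with debugger_disabled():
#     obj_b = api_func(obj_a)
```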
In PyCharm you can also right-click on a breakpoint and set a condition on it.
I am using Python C Api to embed a python in our application. Currently when users execute their scripts, we call PyRun_SimpleString(). Which runs fine.
I would like to extend this functionality to allow users to run scripts in "Debug" mode where, like in a typical IDE, they would be allowed to set breakpoints, "watches", and generally step through their script.
I've looked at the API specs, googled for similar functionality, but did not find anything that would help much.
I did play with PyEval_SetTrace(), which returns all the information I need; however, we execute the Python code on the same thread as our main application, and I have not found a way to "pause" Python execution when the trace callback hits a line number that contains a user-set breakpoint - and then resume the execution at a later point.
I also see that there are various "Frame" functions like PyEval_EvalFrame() but not a whole lot of places that demo the proper usage. Perhaps these are the functions that I should be using?
Any help would be much appreciated!
PyEval_SetTrace() is exactly the API that you need to use. Not sure why you need some additional way to "pause" the execution; when your callback has been called, the execution is already paused and will not resume until you return from the callback.
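The same property can be seen from pure Python with sys.settrace, which is the Python-level counterpart of PyEval_SetTrace: while the trace callback runs, the traced code is paused and only resumes when the callback returns.

```python
import sys

paused_at = []

def tracer(frame, event, arg):
    if event == "line":
        # The traced function is effectively paused right here; a debugger
        # would inspect variables or wait for user input before returning.
        paused_at.append(frame.f_lineno)
    return tracer

def traced():
    a = 1
    b = a + 1
    return b

sys.settrace(tracer)
result = traced()
sys.settrace(None)
print(result)               # 2
print(len(paused_at) >= 2)  # True: the callback ran before each line
```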
This happens especially when the code runs for a while (roughly 10 minutes) before hitting the breakpoint.
The Python debugger then always shows me this kind of error: "timeout waiting for response on 113".
(I circled the errors in red in the screenshot.)
I use PyCharm as my Python IDE - is this just a PyCharm issue, or a Python debugger issue?
And if PyCharm is not recommended, can anyone suggest a better IDE that can debug efficiently?
I had a similar thing happen to me a few months ago, it turned out I had a really slow operation within a __repr__() for a variable I had on the stack. When PyCharm hits a breakpoint it grabs all of the variables in the current scope and calls __repr__ on them. Here's an amusement that demonstrates this issue:
import time

class Foo(object):
    def __repr__(self):
        time.sleep(100)
        return "look at me"

if __name__ == '__main__':
    a = Foo()
    print("set your breakpoint here")
PyCharm will also call __getattribute__('__class__'). If you have a __getattribute__ that's misbehaving that could trip you up as well.
This may not be what's happening to you but perhaps worth considering.
As you are on Windows, for debugging this kind of thing (and most others) I use the good old PythonWin IDE:
This IDE + debugger runs in the same process as the code being debugged!
This way you are in direct touch with the real objects, like pdb in a simple interactive shell, but with a usable GUI - a big advantage most of the time. And there are no issues with transferring vast objects via repr/pickle between processes, no delays, and no timeout issues.
If a step takes a long time, PythonWin will also simply wait and not respond until it finishes (unless you issue a break signal/KeyboardInterrupt via the PythonWin system tray icon).
The interactive shell of PythonWin is also fully usable during debugging - with the namespace of the current frame.
It's an old question, but this reply may be helpful.
Delete the .idea folder from the project root directory. This cleans up PyCharm's database, and the debugger will stop timing out. It worked for me on Windows.
I have a caller.py which repeatedly calls routines from some_c_thing.so, which was created from some_c_thing.c. When I run it, it segfaults - is there a way for me to detect which line of c code is segfaulting?
This might work:
make sure the native library is compiled with debug symbols (-g switch for gcc).
Run python under gdb and let it crash:
gdb --args python caller.py
run # tell gdb to run the program
# script runs and crashes
bt # print backtrace, which should show the crashing line
If the crash happens in the native library code, then this should reveal the line.
If the native library code just corrupts something or violates some postcondition, and the crash happens in the Python interpreter's code, then this will not be helpful. In that case your options are code review, adding debug prints (the first step would be to log entry and exit of each C function to detect the last C function called before the crash, then adding more fine-grained logging for variable values etc.), and finally using a debugger to see what happens, with the usual debugger techniques (breakpoints, stepping, watches...).
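From the Python side, the standard library's faulthandler module (Python 3.3+) is also worth enabling: it dumps the Python-level traceback when the process receives SIGSEGV, which narrows down which call into the native library crashed. A minimal sketch (the `some_c_thing` names are the hypothetical module from the question):

```python
import faulthandler

# Install handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL.
# Equivalent to running: python -X faulthandler caller.py
faulthandler.enable()
print(faulthandler.is_enabled())  # True

# ...then call into the native library as usual, e.g.:
# import some_c_thing
# some_c_thing.some_routine()
```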
Take Python and the .so file(s) out of the equation. See what params are being passed, if any, and call the routines from a debugger capable of stepping through C code and binaries.
Here is a link to an article describing a simple C debugging process, in case you're not familiar with debugging C (command line interface). Here is another link on using NetBeans to debug C. Also using Eclipse...
This could help: gdb: break in shared library loaded by python (might also turn out to be a dupe)
Segfault... Check whether the number of arguments and the types of the arguments you pass to that C function (in the .so) are correct. If the call signature does not match, a segfault is the usual result.
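If you are calling the .so via ctypes, declaring argtypes and restype catches exactly this class of bug: ctypes then converts and checks arguments instead of guessing. A sketch using the standard C math library (the library name lookup is platform-specific; your own .so works the same way via ctypes.CDLL("./some_c_thing.so")):

```python
import ctypes
from ctypes.util import find_library

# Load libm and declare the exact C signature of pow():
#     double pow(double x, double y);
libm = ctypes.CDLL(find_library("m"))
libm.pow.argtypes = (ctypes.c_double, ctypes.c_double)
libm.pow.restype = ctypes.c_double

print(libm.pow(2.0, 10.0))  # 1024.0
```

Without the argtypes/restype declarations, ctypes would pass and interpret the values incorrectly, which is a common source of crashes like this.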
Is it possible to run a small set of code automatically after a script has run?
I am asking because, for some reason, if I add this set of code into the main script, it works, but it displays a list of errors (the objects are already there, yet it states that it cannot find them, or something of that sort).
I realized that after running my script, Maya seems to 'load' its own refresh setup, along with some plugins made by my company. If I run the small set of code after my main script execution and after the Maya/plugin 'refresher', it works with no problem. I would like to make the process as automated as possible, all within one script if that is possible.
So is this doable? Some sort of delayed execution method?
FYI, the main script's execution time depends on the number of elements in the scene - the more there are, the longer it takes.
Maya has a command, maya.cmds.evalDeferred, that is meant for this purpose. It waits until no more Maya processing is pending and then evaluates itself.
You can also use maya.cmds.scriptJob for the same purpose.
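A sketch of the evalDeferred approach (the maya.cmds import only works inside Maya's Python; the fallback branch is just so the snippet runs anywhere for illustration):

```python
try:
    import maya.cmds as cmds  # only available inside Maya's Python
except ImportError:
    cmds = None

def after_everything_settles():
    # The code that previously failed when run inline goes here; by the
    # time it runs, Maya's own refresh and the studio plugins have loaded.
    print("deferred step running")

if cmds is not None:
    # evalDeferred accepts a callable (or a string) and runs it once
    # Maya's event loop is idle.
    cmds.evalDeferred(after_everything_settles)
else:
    after_everything_settles()  # outside Maya, call directly
```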
Note: while eval is generally considered dangerous and insecure, in a Maya context it's really normal. Mainly because everything in Maya is inherently insecure, as nearly all GUI items are just eval commands that the user may modify. So the second you let anybody use your Maya shell, your security is breached.