I'm using Python to control GDB via batch commands. Here's how I'm calling GDB:
$ gdb --batch --command=cmd.gdb myprogram
The cmd.gdb listing just contains the line calling the Python script
source cmd.py
And the cmd.py script tries to create a breakpoint and attach a command list:
bp = gdb.Breakpoint("myFunc()") # break at function in myprogram
gdb.execute("commands " + str(bp.number))
# then what? I'd like to at least execute a "continue" on reaching breakpoint...
gdb.execute("run")
The problem is I'm at a loss as to how to attach any GDB commands to the breakpoint from the Python script. Is there a way to do this, or am I missing some much easier and more obvious facility for automatically executing breakpoint-specific commands?
As of GDB 7.7.1, gdb.Breakpoint's stop method can be used:
gdb.execute('file a.out', to_string=True)

class MyBreakpoint(gdb.Breakpoint):
    def stop(self):
        gdb.write('MyBreakpoint\n')
        # Continue automatically.
        return False
        # Alternatively, return True to actually stop.

MyBreakpoint('main')
gdb.execute('run')
Documented at: https://sourceware.org/gdb/onlinedocs/gdb/Breakpoints-In-Python.html#Breakpoints-In-Python
See also: How to script gdb (with python)? Example add breakpoints, run, what breakpoint did we hit?
I think this is probably a better way to do it than using GDB's "command list" facility.
bp1 = gdb.Breakpoint("myFunc()")

# Define handler routines
def stopHandler(stopEvent):
    for b in stopEvent.breakpoints:
        if b == bp1:
            print("myFunc() breakpoint")
        else:
            print("Unknown breakpoint")
    gdb.execute("continue")

# Register event handlers
gdb.events.stop.connect(stopHandler)
gdb.execute("run")
You could probably also subclass gdb.Breakpoint to add a "handle" routine instead of doing the equality check inside the loop.
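A sketch of that subclassing idea (HandledBreakpoint and the fallback base class are hypothetical names; only gdb.Breakpoint and its stop() hook are real GDB API, and the gdb module exists only inside a GDB session, so a stub base class stands in elsewhere):

```python
# Sketch: a gdb.Breakpoint subclass that carries its own handler, so the
# stop logic doesn't need equality checks against each breakpoint.
try:
    import gdb
    _Base = gdb.Breakpoint
except ImportError:
    # Stand-in so the dispatch logic can be exercised outside GDB.
    class _Base:
        def __init__(self, spec):
            self.spec = spec

class HandledBreakpoint(_Base):
    def __init__(self, spec, handler):
        super().__init__(spec)
        self.handler = handler

    def stop(self):
        # GDB calls stop() when the breakpoint is hit; returning False
        # continues execution automatically, True actually stops.
        return self.handler(self)

# Inside a GDB session you might write, e.g.:
# HandledBreakpoint("myFunc()", lambda bp: False)  # log-and-continue
```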
#!/usr/bin/env python3
import click

@click.command()
@click.option('--desiredrange', '-r', default=1)
def main(desiredrange):
    print(desiredrange)
    print('abcd')

main()
print('!!!!')
Running the above code gives me the following in the terminal:
1
abcd
But I do not get
!!!!
This scenario holds for any variation of the code that includes ANYTHING after the call to main(): the script exits after executing that function. It is also true for other functions, i.e. if I placed the print('!!!!') inside another function and called that function after main(), the script would still exit after main(), ignoring the second function.
If I removed 'click', such that the code looks like:
#!/usr/bin/env python3
def main():
    print(1)
    print('abcd')

main()
print('!!!!')
I will get the following printed to terminal:
1
abcd
!!!!
I can execute other functions after main(), as well as other statements. I run this script from the terminal using ./script.py (after chmod +x script.py). I also get no errors in either scenario.
Why is this?
The function named main that you defined isn't actually the one called directly by the line main(). Instead, the two decorators create a new value that wraps the function. This callable (not necessarily a function, but a callable instance of click.core.Command; I haven't dug into the code to see exactly what happens) raises SystemExit in some way, so your script exits before the "new" main actually returns.
You can confirm this by explicitly catching SystemExit raised by main and ignoring it. This allows the rest of your script to execute.
try:
    main()
except SystemExit:
    pass
print('!!!!')
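The effect can be reproduced without click at all; here fake_main is a stand-in I've made up that raises SystemExit the way click's standalone mode does:

```python
# Stand-in for a click-decorated command: instead of returning, it ends
# by raising SystemExit, just as click's standalone mode does.
def fake_main():
    print('abcd')
    raise SystemExit(0)

try:
    fake_main()
except SystemExit:
    pass            # swallow the exit so the script keeps going
print('!!!!')
```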
Remember that decorator syntax is just a shortcut for function application. Without the syntax, you can rewrite your code as
import click

def main(desiredrange):
    print(desiredrange)
    print('abcd')

x = main
main = click.command()(click.option('--desiredrange', '-r', default=1)(main))
y = main
assert x is not y

main()
print('!!!!')
Since the assertion passes, this confirms that the value bound to main is not your original function, but something else.
The click module runs your command in standalone mode, which invokes the command and then shuts down the Python interpreter rather than returning.
You can call your main with standalone_mode=False and it will return, so execution continues with your additional statements:
#!/usr/bin/env python3
import click

@click.command()
@click.option('--desiredrange', '-r', default=1)
def main(desiredrange):
    print(desiredrange)
    print('abcd')

main(standalone_mode=False)
print('!!!!')
> python source.py -r 2
2
abcd
!!!!
Also check the documentation for click.BaseCommand:
standalone_mode – the default behavior is to invoke the script in standalone mode. Click will then handle exceptions and convert them into error messages and the function will never return but shut down the interpreter. If this is set to False they will be propagated to the caller and the return value of this function is the return value of invoke().
Main question
My understanding is that atexit runs registered functions in reverse order at normal interpreter termination, for example when the interpreter quits after the script finishes. So if I have a file called register-arguments.py as follows:
def first_registered():
    print('first registered')

def second_registered():
    print('second registered')

import atexit
atexit.register(first_registered)
atexit.register(second_registered)
Running python register-arguments.py will trigger the following steps:
1. The interpreter is started
2. The functions are registered, which generates no output in the terminal
3. The interpreter terminates, and the registered functions are called in reverse order
The output is as follows:
second registered
first registered
Which makes sense. However, if I try to debug one of these functions with Python's native debugger pdb, here's what I get:
$ python -m pdb .\register-arguments.py
> ...\atexit-notes\register-arguments.py(1)<module>()
-> def first_registered():
(Pdb) b 2
Breakpoint 1 at c:\users\amine.aboufirass\desktop\temp\atexit-notes\register-arguments.py:2
(Pdb) c
As you can see, the program starts and stops at line 1 of the source file. I then intentionally place a breakpoint at line 2 and use the c command to continue execution.
I expected the debugger to stop inside the registered function when it is called at interpreter exit. However, that doesn't happen, and I get the following message:
The program finished and will be restarted.
How do I debug functions registered by atexit using pdb in Python?
On pdb.set_trace
It seems that explicitly adding a trace inside the source code does work:
import pdb

def first_registered(arg):
    pdb.set_trace()
    print(f'first registered, arg={arg}')

def second_registered():
    print('second registered')

import atexit
atexit.register(first_registered, arg="test")
atexit.register(second_registered)
Running python register-arguments.py will land you on the line where the breakpoint is explicitly added, and the value of the variable arg at that breakpoint is indeed 'test'.
I have a strong preference for not editing the source code, so this unfortunately won't work for me; I'm curious whether there's a way to do it without touching the source.
I am running an interactive Python program over ssh, but I'm not always actually interacting with it (stdout and stdin are piped). I need to print a string to signal to pexpect on my local machine that a breakpoint has been called and it should enter interactive mode.
On my local machine I have:
p: pexpect.spawn
i = p.expect([pexpect.EOF, '__INTERACT__'])
if i == 1:
    p.interact()
The reason I do this is because I cannot simply interact() the whole time it is running. This interferes with stdout on the local process.
So on the remote process, I breakpoint with
print('__INTERACT__')
breakpoint()
This works, however I want to have single-line breakpoints. So I tried:
def remote_breakpoint():
    print('__INTERACT__')
    import pdb; pdb.set_trace()

sys.breakpointhook = remote_breakpoint
This allows me to just write breakpoint(). It also allows me to disable the extra print statement when running locally:
def remote_breakpoint():
    if platform.system() != 'Darwin':
        print('__INTERACT__')
    import pdb; pdb.set_trace()
It works, but now pdb starts inside the remote_breakpoint function and I have to hit r (return) to get out every time. How can I tell pdb.set_trace to start one frame up the stack, or make it wait to start the interactive session until remote_breakpoint has returned?
Even if I just call remote_breakpoint() (instead of using sys.breakpointhook), I still have the same problem.
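One possible approach (a sketch, not from the thread): bdb.Bdb.set_trace, which pdb.Pdb inherits, accepts an explicit frame argument, so you can hand it the caller's frame via sys._getframe(1) (CPython-specific):

```python
import pdb
import sys

def remote_breakpoint():
    # Signal the local pexpect listener first.
    print('__INTERACT__')
    # Start pdb in the *caller's* frame rather than in this helper:
    # Pdb.set_trace (inherited from bdb.Bdb) takes an optional frame,
    # and sys._getframe(1) is the frame that called remote_breakpoint.
    pdb.Pdb().set_trace(sys._getframe(1))

sys.breakpointhook = remote_breakpoint
```

With this in place, breakpoint() should drop you at the call site instead of inside the hook, so no r is needed.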
I have a shell script which in turn runs a Python script. I have to exit from the main shell script when an exception is caught in Python. Can anyone suggest a way to achieve this?
In Python you can set the process's exit status using sys.exit(). Typically you exit with 0 when execution completed successfully, and with some non-zero number otherwise.
So something like this in your Python will work:
import sys

try:
    ....
except:
    sys.exit(1)
And then, as others have said, you need to make sure your bash script catches the error, either by checking the exit status explicitly (e.g. via $?) or by using set -e.
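To illustrate the round trip without a real shell script, this sketch has a parent process (standing in for the shell) read the child's exit status; the inline child script is illustrative:

```python
import subprocess
import sys

# Child: raises an exception and exits 1 from its except block,
# following the pattern above.
child = (
    "import sys\n"
    "try:\n"
    "    raise ValueError('boom')\n"
    "except Exception:\n"
    "    sys.exit(1)\n"
)
result = subprocess.run([sys.executable, "-c", child])
# A shell would see the same value in $? (or abort under set -e).
print("child exit status:", result.returncode)  # → 1
```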
I have a script that I want to exit early under some condition:
if not "id" in dir():
    print "id not set, cannot continue"
    # exit here!

# otherwise continue with the rest of the script...
print "alright..."
[ more code ]
I run this script using execfile("foo.py") from the Python interactive prompt and I would like the script to exit going back to interactive interpreter. How do I do this? If I use sys.exit(), the Python interpreter exits completely.
In the interactive interpreter, catch the SystemExit raised by sys.exit and ignore it:
try:
    execfile("mymodule.py")
except SystemExit:
    pass
Put your code block in a method and return from that method, like so:
def do_the_thing():
    if not "id" in dir():
        print "id not set, cannot continue"
        return
        # exit here!
    # otherwise continue with the rest of the script...
    print "alright..."
    # [ more code ]

# Call the method
do_the_thing()
Also, unless there is a good reason to use execfile(), this method should probably be put in a module, where it can be called from another Python script by importing it:
import mymodule
mymodule.do_the_thing()
I'm a little obsessed with IPython for interactive work, but check out the tutorial on shell embedding for a more robust solution than this one (which is your most direct route).
Instead of using execfile, you should make the script importable (__name__ == '__main__' protection, separated into functions, etc.), and then call the functions from the interpreter.
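That suggestion could look like the sketch below (mymodule.py and do_the_thing are names carried over from the answer above; passing id as a parameter instead of probing dir() is my own substitution):

```python
# mymodule.py -- importable version of the script
def do_the_thing(id=None):
    # Returning replaces the "exit here!" of the original script.
    if id is None:
        print("id not set, cannot continue")
        return
    # otherwise continue with the rest of the script...
    print("alright...")

# Runs only when executed as a script, not when imported.
if __name__ == "__main__":
    do_the_thing()
```

From the interactive interpreter you would then do: import mymodule; mymodule.do_the_thing(id=42).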