I've got a Python script which has been running on a Linux server for hours, crunching some numbers for me. I'd like to check its progress by seeing which line is being executed right now. If it were a C or C++ program I would just attach to the process with gdb -p <pid> and examine the stack trace with where. I can do the same with the Python interpreter process, of course, but the Python script's line doesn't show up in the stack trace.
So, how can I find out which line of the Python script is being executed currently?
You can add a signal handler to the Python script that sends this information to the terminal, or to a file, then hit ^C in the terminal to send the signal to the process.
import signal

def print_linenum(signum, frame):
    print "Currently at line", frame.f_lineno

signal.signal(signal.SIGINT, print_linenum)
You could also pick some other signal and send it with the kill command if you need ^C to remain able to interrupt the script, or set a signal.alarm() to print the information periodically, e.g. once a second.
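For example, a minimal sketch of the periodic variant, assuming SIGALRM is otherwise unused in the script:

import signal

def print_linenum(signum, frame):
    print "Currently at line", frame.f_lineno
    signal.alarm(1)  # re-arm so the report repeats every second

signal.signal(signal.SIGALRM, print_linenum)
signal.alarm(1)  # first report in one second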
You could print out other things from the stack frame if you like; there's a lot there. See the attributes of frame objects in this table.
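If you want the whole stack rather than a single line number, here's a sketch using the standard traceback module, bound to SIGUSR1 so ^C stays free (trigger it with kill -USR1 <pid>):

import signal
import traceback

def print_stack(signum, frame):
    traceback.print_stack(frame)  # print every frame, innermost last

signal.signal(signal.SIGUSR1, print_stack)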
Related
I want to run a program from inside my Python script and get the PID of the process I launched. I tried:
p = subprocess.Popen(nameofmyprocess)
pid = p.pid
But the problem is that it doesn't wait for the called program to finish. When I looked at the documentation, I concluded I should use subprocess.run() instead; however, it doesn't have a .pid attribute the way Popen does.
Is there another alternative?
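For context, a minimal sketch of the Popen pattern the edit below builds on: .pid is available immediately, and .wait() or .poll() lets you block on, or check for, completion (nameofmyprocess is the question's placeholder):

import subprocess

p = subprocess.Popen(nameofmyprocess)  # child starts; this call does not block
pid = p.pid                            # the PID is available right away
p.wait()                               # blocks until the child exits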
Edit:
I should have mentioned this in the original question. My program includes a server part: it opens a socket and listens on it. I have a client side which connects to that socket and sends a message. My end goal is to write a script that runs my program and gets its PID so I can pass it to some functions I wrote that monitor the program: they show me its memory usage, the socket it has opened, its file descriptors... basically information about what my program does.
As suggested, I used subprocess.Popen.poll, so now my code looks like this:
import subprocess
import time

p = subprocess.Popen(nameofmyprocess)
pid = p.pid
print pid
while p.poll() is None:
    time.sleep(20)
    myFunc(pid)
    myFunc2(pid)
However, when I run this script and then run my client, the client can't connect to my server program. It says "Connection failed". I'm pretty sure the program is running, though, because the PID I print is displayed when I use ps aux in another terminal.
I've been ripping my hair out over this. I've searched the internet and can't seem to find a solution to my problem. I'm trying to auto-test some code using the gdb module from Python. I can run basic commands and things work, except for stopping a process that's running in the background. Currently I continue my program in the background after a breakpoint with this:
gdb.execute("c&")
I then interact with the running program, reading different constant values and getting responses from it.
Next I need to get a chunk of memory so I run these commands:
gdb.execute("interrupt") #Pause execution
gdb.execute("dump binary memory montiormem.bin 0x0 (&__etext + 4)") #dump memory to file
But when I run the memory dump I get an error saying the command can't be run while the target is running. After the error, the interrupt command is run and the target is paused; from the gdb console window I can then run the memory dump.
I found a similar issue from a while ago that seems to not have been answered here.
I'm using Python 2.7.
I also found this link, which seems to describe the issue, but there's no indication whether the fix is in my build of gdb (which seems unlikely).
I had the same problem. From what I can tell from googling, it is a current limitation of gdb: interrupt simply doesn't work in batch mode (when specifying commands with --ex, or -x file, or on stdin, or when sourcing from a file); gdb runs the following commands before actually stopping the execution (inserting a delay doesn't help). Building on @dwjbosman's solution, here's a compact version suitable for feeding to gdb with --ex arguments, for example:
python import threading, gdb
python threading.Timer(1.0, lambda: gdb.post_event(lambda: gdb.execute("interrupt"))).start()
cont
thread apply all bt full # or whatever you wanted to do
It schedules an interrupt after 1 second and resumes the program, then you can do whatever you wanted to do after the pause right in the main script.
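For instance, the full invocation might look something like this (the target PID and the choice of --batch are assumptions, not from the original answer):

gdb --batch -p <pid> \
    --ex 'python import threading, gdb' \
    --ex 'python threading.Timer(1.0, lambda: gdb.post_event(lambda: gdb.execute("interrupt"))).start()' \
    --ex 'cont' \
    --ex 'thread apply all bt full'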
I had the same problem, but found that none of the other answers here really work if you are trying to script everything from python. The issue that I ran into was that when I called gdb.execute('continue'), no code in any other python thread would execute. This appears to be because gdb does not release the python GIL while the continue command is waiting for the program to be interrupted.
What I found that actually worked for me was this:
import time
import gdb

def delayed_interrupt():
    time.sleep(1)
    gdb.execute('interrupt')

gdb.post_event(delayed_interrupt)
gdb.execute('continue')
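Presumably this works because the posted callable is run on gdb's own main thread once continue starts pumping gdb's event loop, so the interrupt is issued from inside gdb itself rather than from a Python thread blocked on the GIL.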
I just ran into this same issue while writing some automated testing scripts. What I've noticed is that the 'interrupt' command doesn't stop the application until after the current script has exited.
Unfortunately, this means that you would need to segment your scripts anytime you are causing an interrupt.
Script 1:
gdb.execute('c&')
gdb.execute('interrupt')
Script 2:
gdb.execute("dump binary memory montiormem.bin 0x0 (&__etext + 4)")
I used multithreading to get around this issue:
import threading
import time
import gdb

def post(cmd):
    def _callable():
        print("exec " + cmd, flush=True)
        gdb.execute(cmd)
    print("schedule " + cmd, flush=True)
    gdb.post_event(_callable)

class ScriptThread(threading.Thread):
    def run(self):
        while True:
            post("echo hello\n")
            time.sleep(1)

x = ScriptThread()
x.start()
Save this as "test_script.py"
Use the script as follows:
gdb
> source test_script.py
Note that you can also pipe "source test_script.py" into gdb, but you need to keep the pipe open.
Once the thread is started, GDB will wait for the thread to end and will process any commands you send to it via the post_event function. Even "interrupt"!
Hello minds of stackoverflow,
I've run into a perplexing bug. I have a Python script that creates a new thread that ssh's into a remote machine and starts a process. However, this process does not return on its own (and I want it to keep running throughout the duration of my script). In order to force the thread to return, at the end of my script I ssh into the machine again and kill -9 the process. This is working well, except for the fact that it breaks the terminal.
To start the thread I run the following code:
t = threading.Thread(target=run_vUE_rfal, args=(vAP.IP, vUE.IP))
t.start()
The function run_vUE_rfal is as follows:
cmd = "sudo ssh -ti ~/.ssh/my_key.pem user#%s 'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'" % (vUE_IP, vAP_IP, vUE_IP)
output = commands.getstatusoutput(cmd)
return
It seems that when the command is run, it somehow breaks my terminal. It is broken in that instead of starting a new line for each print, it appends the width of my terminal in whitespace to the end of each line and prints everything as seemingly one long string. Also, I am unable to see my keyboard input in that terminal, but it is still successfully read. My terminal looks something like this:
normal formatted output
normal formatted output
running vUE-rfal
print1
print2
print3_extra_long
print4
If I replace the body of the run_vUE_rfal function with some simple prints, the terminal does not break. I have many other ssh's and telnets in this script that work fine. However, this is the only one I'm running in a separate thread, as it is the only one that does not return. I need to maintain the ability to kill the process on the remote machine when my script is finished.
Any explanations to the cause and idea for a fix are much appreciated.
Thanks in advance.
It seems the process you control is changing the terminal settings. These changes bypass stderr and stdout, for good reasons: e.g. ssh itself needs this to ask users for passwords even when its output is being redirected.
A way to solve this could be to use the Python module pexpect (a third-party library) to launch your process, as it will create its own fake tty that you don't care about.
BTW, to "repair" your terminal, use the reset command. As you already noticed, you can enter commands. reset will set the terminal to default settings.
I am working on a Python script to launch a server, perhaps in the background or in a different process, and then do some further processing before killing the launched server. Once the rest of the processing is over, the script should kill the server it launched.
For example:
server_cmd = 'launch_server.exe -source '+ inputfile
print server_cmd
cmd_pid = subprocess.Popen(server_cmd).pid
...
...
... #Continue doing some processing
cmd_pid.terminate() # Once the processing is done, terminate the server
Somehow the script does not continue after launching the server, as the server may be running in an infinite loop listening for requests. Is there a good way to send this process to the background so that it doesn't expect command-line input?
I am using Python 2.7.8
It's odd that your script does not continue after launching the server command. In the subprocess module, Popen starts another child process while the parent process (your script) moves on.
However, there's already a bug in your code: cmd_pid is an int object and does not have a terminate method. You should call terminate on the subprocess.Popen object.
Making a small change resolved the problem:
server_proc = subprocess.Popen(server_cmd, stdout=subprocess.PIPE)
server_proc.terminate()
Thanks Xu for the correction on terminate.
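Putting the pieces together, the whole launch-process-kill flow might look like this (a sketch: launch_server.exe comes from the question, and the inputfile value is a placeholder):

import subprocess

inputfile = 'input.dat'  # placeholder
server_cmd = 'launch_server.exe -source ' + inputfile
server_proc = subprocess.Popen(server_cmd)  # keep the Popen object, not just its .pid

# ... continue doing some processing while the server runs ...

server_proc.terminate()  # once the processing is done, stop the server
server_proc.wait()       # reap the child so it doesn't linger as a zombie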
I'm using gdb 7.4.1 on an embedded PowerPC target to perform some analysis on my multi-threaded C++ program that uses pthreads. My end goal is to script gdb with Python to automate some common analysis functions. The problem is that I'm finding a discrepancy in behavior when I run commands individually vs. in a gdb user-defined command (or when invoking the same commands via a Python script).
edit: I found this reference to a very similar problem on the main gdb mailing list. Although I don't completely follow Pedro's response about the limitation of async mode, I think he's implying that in async mode, the relative timing of user-defined command sequences cannot be trusted. This is what I found empirically.
In both scenarios, I perform the following start-up steps: loading my program, setting its args, turning on asynchronous and non-stop debugging modes, then running the program in the background:
(gdb) file myprogram
(gdb) set args --interface=eth0 --try-count=0
(gdb) set target-async on
(gdb) set pagination off
(gdb) set non-stop on
(gdb) run &
At this point, if I manually issue interrupt and then info threads, I see the list of all threads running except one that got stopped. Then I can continue & and repeat to my heart's content; it works consistently. When stopped, I can inspect that thread's stack frames and all is well.
However, if instead I put these commands into a user-defined gdb command:
(gdb) define foo
(gdb) interrupt
(gdb) info threads
(gdb) continue &
(gdb) end
(gdb) foo
Cannot execute this command while the selected thread is running.
Then the thread list printed by foo indicates that no threads were stopped, and the continue & command returns Cannot execute this command while the selected thread is running. I thought this was a problem inherent to asynchronous gdb commanding, so I inserted an absurdly long wait after the interrupt command and got the same behavior:
(gdb) define foo
(gdb) interrupt
(gdb) shell sleep 5
(gdb) info threads
(gdb) continue &
(gdb) end
(gdb) foo
Cannot execute this command while the selected thread is running.
With or without the sleep command, I can always issue the manual CLI commands and the threads get stopped correctly.
Similarly, I get the same results when sourcing a Python script to do the thread perusal:
import gdb, time

gdb.execute("file myprogram")
gdb.execute("set args --interface=eth0 --try-count=0")
gdb.execute("set target-async on")
gdb.execute("set pagination off")
gdb.execute("set non-stop on")
gdb.execute("run &")
time.sleep(5)
gdb.execute("interrupt")

# here, I inspect threads via the gdb module interface;
# in practice, they're always all running because the program never got interrupted
for thread in gdb.selected_inferior().threads():
    print thread.is_running(),

gdb.execute("continue &")
I get the same result even if I specify from_tty=True in the gdb.execute calls. Also, if I use continue -a, it suppresses the error string but does not help otherwise, because the interrupt call still doesn't work.
So... is this:
Cockpit error? Is there something that I'm omitting or doing incorrectly, given what I'm trying to accomplish? Should this work, or do I have to use GDB/MI to asynchronously "drive" gdb like this?
A timing problem? Maybe invoking shell sleep (or Python time.sleep()) doesn't do what I assume it does in this context.
A problem with my usage of pthreads? I have assumed it isn't, since the manual gdb commands always work correctly.
A gdb problem?
Thanks.
I think this is most likely a gdb problem. I don't know enough about the inferior-control stuff to be more confident. I do know that inferior control generally has not been wired up to Python...
One thing worth trying is having a separate Python thread that does the wait, then sends an "interrupt" command to the main gdb thread using gdb.post_event.
Then, instead of synchronously examining the threads or doing work after the "interrupt", use the gdb.events.stop event source to trigger your actions, as in the sketch below.
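A minimal sketch of that structure, using only the documented gdb.post_event and gdb.events.stop APIs (the inspect_threads name and the 5-second delay are illustrative assumptions):

import threading
import time
import gdb

def inspect_threads(event):
    # runs on gdb's main thread each time the inferior stops
    for thread in gdb.selected_inferior().threads():
        print("thread %s running: %s" % (thread.num, thread.is_running()))
    gdb.execute("continue &")

gdb.events.stop.connect(inspect_threads)

def interrupt_later():
    time.sleep(5)  # safe to sleep here: this is not gdb's main thread
    gdb.post_event(lambda: gdb.execute("interrupt"))

threading.Thread(target=interrupt_later).start()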
Please file bugs liberally about holes in the Python API.
I came across your post today and tested this gdb-python combination. I found a way to make it work.
Set an environment variable in your terminal, e.g. setenv MY_ENV 1.
In your C/C++ source code, add a few lines near the start that check for the variable with getenv("MY_ENV") and then print out the process id with getpid().
Set the env in the terminal, then run the program. The program will stop and show its process id on the screen, say 1234.
Write a separate gdb_monitor.py for gdb, e.g.:
import gdb

# Your code
gdb.execute("set target-async on")
gdb.execute("set pagination off")
gdb.execute("set non-stop on")
gdb.execute("set args --interface=eth0 --try-count=0")

# Add these lines
pid = input("Python script is running, attach the pid here:")
gdb.execute("attach {}".format(pid))
while 1:
    print("I am running gdb in async mode, add the break condition here")
Now in gdb, you can source the Python script and enter the pid when prompted.