I want to run a program from inside my Python script and get the PID of the process I launched. I tried:
p = subprocess.Popen(nameofmyprocess)
pid = p.pid
The problem is that it doesn't wait for the called program to finish. When I looked at the documentation, I concluded I should use subprocess.run() instead; however, it doesn't have a .pid attribute like Popen.
Is there another alternative?
Edit:
I should have mentioned this in the original question. My program includes a server part that opens a socket and listens on it. I have a client side that connects to that socket and sends a message. My end goal is to write a script that runs my program and gets its PID to pass to some functions I wrote that monitor my program: they show me its memory usage, the sockets it has opened, its file descriptors... basically information about what my program does.
As suggested, I used subprocess.Popen.poll, so now my code looks like this:
import subprocess
import time

p = subprocess.Popen(nameofmyprocess)
pid = p.pid
print pid
while p.poll() is None:
    time.sleep(20)
    myFunc(pid)
    myFunc2(pid)
However, when I run this script and run my client, it can't connect to my server program. It says "Connection failed". I'm pretty sure the program is running though because the PID I print is displayed when I use the command ps aux on another terminal.
I've been ripping my hair out over this. I've searched the internet and can't seem to find a solution to my problem. I'm trying to auto-test some code using the gdb module from Python. I can run basic commands and things are working, except for stopping a process that's running in the background. Currently I continue my program in the background after a breakpoint with this:
gdb.execute("c&")
I then interact with the running program, reading different constant values and getting responses from it.
Next I need to get a chunk of memory, so I run these commands:
gdb.execute("interrupt") #Pause execution
gdb.execute("dump binary memory montiormem.bin 0x0 (&__etext + 4)") #dump memory to file
But when I run the memory dump I get an error saying the command can't be run while the target is running. After the error, the interrupt command runs and the target is paused; at that point I can run the memory dump from the gdb console window.
I found a similar issue from a while ago that seems to be unanswered here.
I'm using Python 2.7.
I also found this link, which seems to describe the issue, but there's no indication whether it applies to my build of gdb (which seems unlikely).
I had the same problem. From what I can tell from googling, it is a current limitation of gdb: interrupt simply doesn't work in batch mode (when specifying commands with --ex, or -x file, or on stdin, or sourcing from a file); it runs the following commands before actually stopping the execution (inserting a delay doesn't help). Building on @dwjbosman's solution, here's a compact version suitable for feeding to gdb with --ex arguments, for example:
python import threading, gdb
python threading.Timer(1.0, lambda: gdb.post_event(lambda: gdb.execute("interrupt"))).start()
cont
thread apply all bt full # or whatever you wanted to do
It schedules an interrupt after 1 second and resumes the program, then you can do whatever you wanted to do after the pause right in the main script.
I had the same problem, but found that none of the other answers here really work if you are trying to script everything from python. The issue that I ran into was that when I called gdb.execute('continue'), no code in any other python thread would execute. This appears to be because gdb does not release the python GIL while the continue command is waiting for the program to be interrupted.
What I found that actually worked for me was this:
import time
import gdb

def delayed_interrupt():
    time.sleep(1)
    gdb.execute('interrupt')

# queue the interrupt on gdb's event loop, then resume the inferior
gdb.post_event(delayed_interrupt)
gdb.execute('continue')
I just ran into this same issue while writing some automated testing scripts. What I've noticed is that the 'interrupt' command doesn't stop the application until after the current script has exited.
Unfortunately, this means that you would need to segment your scripts anytime you are causing an interrupt.
Script 1:
gdb.execute('c&')
gdb.execute('interrupt')
Script 2:
gdb.execute("dump binary memory montiormem.bin 0x0 (&__etext + 4)")
I used multithreading to get around this issue:
import threading
import time
import gdb

def post(cmd):
    def _callable():
        print("exec " + cmd, flush=True)
        gdb.execute(cmd)
    print("schedule " + cmd, flush=True)
    gdb.post_event(_callable)

class ScriptThread(threading.Thread):
    def run(self):
        while True:
            post("echo hello\n")
            time.sleep(1)

x = ScriptThread()
x.start()
Save this as "test_script.py"
Use the script as follows:
gdb
> source test_script.py
Note that you can also pipe "source test_script.py" in, but you need to keep the pipe open.
Once the thread is started, GDB will wait for the thread to end and will process any commands you send to it via the "post_event" function, even "interrupt"!
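For the memory-dump problem in the question, a rough, untested sketch of using this helper from the worker thread might look like the following (the one-second sleep is a crude way to let the target actually stop before the dump command runs):
post("interrupt")
time.sleep(1)  # crude wait for the inferior to actually stop
post("dump binary memory montiormem.bin 0x0 (&__etext + 4)")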
Hello minds of stackoverflow,
I've run into a perplexing bug. I have a Python script that creates a new thread which ssh's into a remote machine and starts a process. However, this process does not return on its own (and I want it to keep running throughout the duration of my script). In order to force the thread to return, at the end of my script I ssh into the machine again and kill -9 the process. This is working well, except for the fact that it breaks the terminal.
To start the thread I run the following code:
t = threading.Thread(target=run_vUE_rfal, args=(vAP.IP, vUE.IP))
t.start()
The function run_vUE_rfal is as follows:
cmd = "sudo ssh -ti ~/.ssh/my_key.pem user#%s 'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'" % (vUE_IP, vAP_IP, vUE_IP)
output = commands.getstatusoutput(cmd)
return
It seems that when the command is run, it somehow breaks my terminal. It is broken in that instead of starting a new line for each print, it pads each line with whitespace to the full width of my terminal and prints everything as seemingly one long string. Also, I am unable to see my keyboard input in that terminal, but it is still successfully read. My terminal looks something like this:
normal formatted output
normal formatted output
running vUE-rfal
print1
print2
print3_extra_long
print4
If I replace the body of the run_vUE_rfal function with some simple prints, the terminal does not break. I have many other ssh's and telnets in this script that work fine. However, this is the only one I'm running in a separate thread as it is the only one that does not return. I need to maintain the ability to close the process of the remote machine when my script is finished.
Any explanations to the cause and idea for a fix are much appreciated.
Thanks in advance.
It seems the process you control is changing terminal settings. Such changes bypass stderr and stdout, for good reasons: ssh itself needs this to ask users for passwords even when its output is being redirected.
A way to solve this could be to use the Python module pexpect (a third-party library) to launch your process, as it creates its own fake tty that you don't care about.
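As an illustrative sketch only (assuming the vUE-rfal command string from your question, not tested against that setup), the pexpect version might look roughly like this:
import pexpect

def run_vUE_rfal(vAP_IP, vUE_IP):
    cmd = ("sudo ssh -ti ~/.ssh/my_key.pem user@%s "
           "'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'"
           % (vUE_IP, vAP_IP, vUE_IP))
    # pexpect.spawn allocates its own pseudo-terminal, so the remote process
    # can change terminal settings without touching the terminal you type in.
    child = pexpect.spawn("/bin/bash", ["-c", cmd])
    child.expect(pexpect.EOF, timeout=None)  # block until the process exits
    return child.before  # whatever output it produced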
BTW, to "repair" your terminal, use the reset command. As you already noticed, you can enter commands. reset will set the terminal to default settings.
I am working on a Python script that launches a server, maybe in the background or in a different process, and then does some further processing. Once the rest of the processing is over, the script should kill the launched server.
For example:
server_cmd = 'launch_server.exe -source '+ inputfile
print server_cmd
cmd_pid = subprocess.Popen(server_cmd).pid
...
...
... #Continue doing some processing
cmd_pid.terminate() # Once the processing is done, terminate the server
Somehow the script does not continue after launching the server, as the server may be running in an infinite loop listening for requests. Is there a good way to send this process to the background so that it doesn't wait for command-line input?
I am using Python 2.7.8
It's odd that your script does not continue after launching the server command: in the subprocess module, Popen starts a child process while the parent process (your script) moves on.
However, there is already a bug in your code: cmd_pid is an int and does not have a terminate method. You should call terminate on the subprocess.Popen object instead.
Making a small change resolved the problem:
server_proc = subprocess.Popen(server_cmd, stdout=subprocess.PIPE)
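pid = server_proc.pid  # the pid is still available here if the rest of the script needs it
# ... continue with the rest of the processing, then stop the server ...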
server_proc.terminate()
Thanks Xu for the correction on terminate.
I'm working on testing a corosync cluster with Python. I'm trying to fail the interface that has the floating IP to ensure the resource migrates over to another node.
The dilemma is that my command does execute on the remote machine, but my test code hangs forever waiting for a reply it will never get, since the node gets rebooted because of the injected failure.
ssh = SSHClient(self.get_ms_ip(ms),
                self.get_ms_user(ms),
                self.get_ms_password(ms))
ssh.connect()
self.logger.info("Failing FIP eth now on %s" % ms)
ssh.exec_command(cmd, timeout=1)
# Execution never reaches this comment.
In Python, how can I send the command and just continue on without waiting for any return? I've tried wrapping my ssh.exec_command with subprocess.Popen, as suggested here: Run Process and Don't Wait, but that didn't yield anything different.
You don't want a subprocess, you want a thread. Spawn a thread that runs the exec_command call and you'll be able to continue with your code.
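A minimal sketch of that suggestion, assuming the ssh object and cmd from your snippet already exist:
import threading

def fail_fip():
    # exec_command blocks waiting for a reply that never comes,
    # so run it off the main thread
    ssh.exec_command(cmd, timeout=1)

t = threading.Thread(target=fail_fip)
t.daemon = True  # don't let the hung call keep the test run alive
t.start()
# the main test code continues immediately here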
Did you try nohup?
ssh.exec_command('nohup %s &'%cmd, timeout=1)
Python doesn't handle threads nicely here; you can't manually exit a thread. I ended up making a worker method that creates the ssh connection and runs exec_command, and running it as a separate multiprocessing.Process.
This way I was able to clean up properly after a test before the next test ran (as part of Python's unit test framework).
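A rough sketch of that approach; SSHClient is the class from the question, and ms_ip, ms_user, ms_password, cmd stand in for the values returned by the get_ms_* helpers:
import multiprocessing

def ssh_worker(ip, user, password, command):
    # runs in its own process, so it can be terminated cleanly during teardown
    ssh = SSHClient(ip, user, password)
    ssh.connect()
    ssh.exec_command(command, timeout=1)  # may never return; that is fine here

proc = multiprocessing.Process(target=ssh_worker,
                               args=(ms_ip, ms_user, ms_password, cmd))
proc.start()
# ... run the rest of the test ...
proc.terminate()  # unlike a thread, a Process can be killed before the next test
proc.join()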
I've got a Python script which has been running on a Linux server for hours, crunching some numbers for me. I'd like to check its progress, so I'd like to see which line is being executed right now. If it were a C or C++ program, I would just attach to the process with gdb -p <pid> and examine the stack trace with where. Of course, I can do the same with the Python interpreter process, but I can't see the Python script's line in the stack trace.
So, how can I find out which line of the Python script is being executed currently?
You can add a signal handler to the Python script that sends this information to the terminal, or to a file, then hit ^C in the terminal to send the signal to the process.
import signal

def print_linenum(signum, frame):
    print "Currently at line", frame.f_lineno

signal.signal(signal.SIGINT, print_linenum)
If you need ^C to remain able to interrupt the script, you could use some other signal and send it with the kill command, or set a signal.alarm() to print the information periodically, e.g. once a second.
You could print out other things from the stack frame if you like; there's a lot there. See the attributes of frame objects in this table.
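A small sketch of the signal.alarm() variant, re-arming the alarm inside the handler so the report repeats roughly once a second, and pulling the file name off the frame as well:
import signal

def print_location(signum, frame):
    print "Currently at", frame.f_code.co_filename, "line", frame.f_lineno
    signal.alarm(1)  # re-arm so the report repeats

signal.signal(signal.SIGALRM, print_location)
signal.alarm(1)  # first report in one second

Note that SIGALRM can interrupt blocking system calls, so this fits a pure number-crunching loop better than I/O-heavy code.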