I am trying to run subprocess Popen in a thread in Python. The command passed to Popen is expected to run continuously to collect logs. But when a condition is met outside the thread, I want to stop the Popen subprocess so that the corresponding thread also finishes. Below is a representative sample:
import threading
import subprocess

class MyClass(threading.Thread):
    def __init__(self):
        super(MyClass, self).__init__()

    def run(self):
        self.proc = subprocess.Popen("while true; do foo; sleep 2; done", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = self.proc.communicate()

myclass = MyClass()
myclass.start()
myclass.proc.kill()
print("Done")
But the above code gets stuck forever. What is the correct way to stop the running Popen subprocess and have the thread finish as well?
All you really need to do is have the thread, i.e. the MyClass instance, do the "killing" itself: instantiate MyClass with an event that it waits on, and when the event is set, kill the process. As I do not know what foo is, I have substituted a simple echo hello for it and set text=True on the Popen call so that the output is Unicode rather than bytes:
import threading
import subprocess
import time

class MyClass(threading.Thread):
    def __init__(self, evt):
        super(MyClass, self).__init__()
        self.evt = evt

    def run(self):
        proc = subprocess.Popen("while true; do echo hello; sleep 2; done", shell=True, text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # wait for the "kill" order:
        self.evt.wait()
        proc.kill()
        stdout, stderr = proc.communicate()
        # for demo purposes let's see what has been output:
        print(stdout, end='')

evt = threading.Event()
myclass = MyClass(evt)
myclass.start()
# let the process run for 5 seconds and then give the kill order:
time.sleep(5)
print("Killing process:")
evt.set()
myclass.join()
print("Done")
Prints:
Killing process:
hello
hello
hello
Done
Add the parameter preexec_fn=os.setsid to your Popen call.
Then use os.killpg(os.getpgid(proc.pid), signal.SIGKILL) to kill the subprocess and anything it spawned by process group.
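A minimal POSIX-only sketch of this approach (the while loop stands in for the continuously running command):

```python
import os
import signal
import subprocess
import time

# Start the shell in its own session/process group (POSIX only), so that
# killing the group also kills any children the shell spawned.
proc = subprocess.Popen(
    "while true; do sleep 1; done",
    shell=True,
    preexec_fn=os.setsid,
)
time.sleep(1)
# After setsid the child is its own group leader, so os.getpgid(proc.pid)
# equals proc.pid; killpg takes down the whole group.
os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
proc.wait()
print(proc.returncode)  # negative: terminated by a signal
```

This matters with shell=True, because proc.kill() alone would kill only the shell, not the commands the shell started.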
I have a Python script which starts multiple commands using subprocess.Popen. I added a signal handler which is called if a child exits. I want to check which child terminated. I can do this by iterating over all children:
#!/usr/bin/env python
import subprocess
import signal

procs = []

def signal_handler(signum, frame):
    for proc in procs:
        proc.poll()
        if proc.returncode is not None:
            print "%s returned %s" % (proc.pid, proc.returncode)
            procs.remove(proc)

def main():
    signal.signal(signal.SIGCHLD, signal_handler)
    procs.append(subprocess.Popen(["/bin/sleep", "2"]))
    procs.append(subprocess.Popen(["/bin/sleep", "5"]))
    # wait so the main process does not terminate immediately
    procs[1].wait()

if __name__ == "__main__":
    main()
I would like to avoid querying all subprocesses. Is there a way to determine in the signal handler which child terminated?
You could achieve a similar result using multiprocessing. You could use the threading package instead if you didn't want to spawn the extra processes. It has pretty much the exact same interface. Basically, each subprocess call happens in a new process, which then launches your sleep processes.
import subprocess
import multiprocessing

def callback(result):
    # do something with result
    pid, returncode = result
    print pid, returncode

def call_process(cmd):
    p = subprocess.Popen(cmd)
    p.wait()
    return p.pid, p.returncode

def main():
    pool = multiprocessing.Pool()
    pool.apply_async(call_process, [["/bin/sleep", "2"]], callback=callback)
    pool.apply_async(call_process, [["/bin/sleep", "5"]], callback=callback)
    pool.close()
    pool.join()

main()
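Since the threading-based variant mentioned above has pretty much the same interface, here is a hedged Python 3 sketch using multiprocessing.pool.ThreadPool, which avoids spawning the extra processes:

```python
import subprocess
from multiprocessing.pool import ThreadPool

def call_process(cmd):
    # run the command from a worker thread and report (pid, returncode)
    p = subprocess.Popen(cmd)
    p.wait()
    return p.pid, p.returncode

pool = ThreadPool()
results = [pool.apply_async(call_process, [["/bin/sleep", str(n)]])
           for n in (1, 2)]
pool.close()
pool.join()
for r in results:
    pid, returncode = r.get()
    print(pid, returncode)
```

Each worker thread blocks in wait() on its own child, so no SIGCHLD handler or polling loop is needed to learn which child finished.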
I have created a script which should run a command and kill it after 15 seconds:
import logging
import subprocess
import time
import os
import sys
import signal

#cmd = "ping 192.168.1.1 -t"
cmd = "C:\\MyAPP\MyExe.exe -t 80 -I C:\MyApp\Temp -M Documents"
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
for line in proc.stdout:
    print(line.decode("utf-8"), end='')
time.sleep(15)
os.kill(proc.pid, signal.SIGTERM)
#proc.kill() #Tried this too but no luck
This does not terminate my subprocess. However, if I comment out the logging-to-stdout part, i.e.
for line in proc.stdout:
    print(line.decode("utf-8"), end='')
the subprocess is killed.
I have tried proc.kill() and CTRL_C_EVENT too, but no luck.
Any help would be highly appreciated. Please see me as a novice to Python.
To terminate the subprocess in 15 seconds while printing its output line-by-line:
#!/usr/bin/env python
from __future__ import print_function
from threading import Timer
from subprocess import Popen, PIPE, STDOUT

# start process
cmd = r"C:\MyAPP\MyExe.exe -t 80 -I C:\MyApp\Temp -M Documents"
process = Popen(cmd, stdout=PIPE, stderr=STDOUT,
                bufsize=1, universal_newlines=True)

# terminate process in 15 seconds
timer = Timer(15, terminate, args=[process])
timer.start()

# print output
for line in iter(process.stdout.readline, ''):
    print(line, end='')
process.stdout.close()
process.wait()  # wait for the child process to finish
timer.cancel()
Notice, you don't need shell=True here. You could define terminate() as:
def terminate(process):
    if process.poll() is None:
        try:
            process.terminate()
        except EnvironmentError:
            pass  # ignore
If you want to kill the whole process tree then define terminate() as:
from subprocess import call

def terminate(process):
    if process.poll() is None:
        call('taskkill /F /T /PID ' + str(process.pid))
- Use raw-string literals for Windows paths: r"..."; otherwise you should escape all backslashes in the string literal
- Drop shell=True. It creates an additional process for no reason here
- universal_newlines=True enables text mode (on Python 3, bytes are decoded into Unicode text using the locale's preferred encoding automatically)
- iter(process.stdout.readline, '') is necessary for compatibility with Python 2 (otherwise the data may be printed with a delay due to the read-ahead buffer bug)
- Use process.terminate() instead of process.send_signal(signal.SIGTERM) or os.kill(proc.pid, signal.SIGTERM)
- taskkill allows you to kill a process tree on Windows
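The Timer pattern above can be exercised end-to-end with a portable stand-in command (a POSIX sketch: sleep plays the role of the long-running exe, and the timeout is shortened to 1 second):

```python
from threading import Timer
from subprocess import Popen

def terminate(process):
    # same terminate() as above: only act if the process is still running
    if process.poll() is None:
        try:
            process.terminate()
        except EnvironmentError:
            pass  # ignore

process = Popen(["sleep", "60"])   # stand-in for the long-running exe
timer = Timer(1, terminate, args=[process])
timer.start()
process.wait()                     # returns once terminate() fires
timer.cancel()
print(process.returncode)          # negative on POSIX: killed by SIGTERM
```

The main thread stays free to consume the process's output while the Timer thread enforces the deadline.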
The problem is that reading from stdout is blocking. You need to either read the subprocess's output or run the timer on a separate thread.
from subprocess import Popen, PIPE
from threading import Thread
from time import sleep

class ProcKiller(Thread):
    def __init__(self, proc, time_limit):
        super(ProcKiller, self).__init__()
        self.proc = proc
        self.time_limit = time_limit

    def run(self):
        sleep(self.time_limit)
        self.proc.kill()

p = Popen('while true; do echo hi; sleep 1; done', shell=True)
t = ProcKiller(p, 5)
t.start()
p.communicate()
EDITED to reflect the changes suggested in the comments:
from subprocess import Popen, PIPE
from threading import Thread
from time import sleep
from signal import SIGTERM
import os

class ProcKiller(Thread):
    def __init__(self, proc, time_limit):
        super(ProcKiller, self).__init__()
        self.proc = proc
        self.time_limit = time_limit

    def run(self):
        sleep(self.time_limit)
        os.kill(self.proc.pid, SIGTERM)

p = Popen('while true; do echo hi; sleep 1; done', shell=True)
t = ProcKiller(p, 5)
t.start()
p.communicate()
import subprocess

def adbshell(command, serial=None, adbpath='adb'):
    args = [adbpath]
    if serial is not None:
        args.extend(['-s', serial])
    args.extend(['shell', command])
    return subprocess.check_output(args)

def pmpath(serial=None, adbpath='adb'):
    return adbshell('am instrument -e class............', serial=serial, adbpath=adbpath)
I have to run this test for a specific time period, and then exit if it is not working. How do I provide a timeout?
It depends on which Python version you are running.
Python 3.3 onwards:
subprocess.check_output() provides a timeout param. Check the signature here
subprocess.check_output(args, *, stdin=None, stderr=None, shell=False, universal_newlines=False, timeout=None)
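As a short sketch of the 3.3+ path: when the timeout expires, the child process is killed and waited for, and TimeoutExpired is raised (sleep stands in for the adb command here):

```python
import subprocess

try:
    # the child is killed and reaped when the timeout expires
    subprocess.check_output(["sleep", "5"], timeout=1)
    timed_out = False
except subprocess.TimeoutExpired:
    timed_out = True
print(timed_out)
```

So on 3.3+ no extra thread is needed; wrapping the adbshell() call in this try/except is enough.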
Below Python 3.3:
You can use the threading module. Something like:
def run(args, timeout):
    def target():
        print 'Start thread'
        subprocess.check_output(args)
        print 'End thread'

    thread = threading.Thread(target=target)
    thread.start()  # Start executing the target()
    thread.join(timeout)  # Join the thread after the specified timeout
Note - I haven't tested the code above with threading and check_output(). Normally I use subprocess.Popen(), which offers more flexibility and handles almost all scenarios. Check the doc
The Popen constructor provides more flexibility, as it can be used to check the exit status of the subprocess call.
Popen.poll returns None if the process has not terminated yet. Hence, call the subprocess, sleep for the required timeout, then poll.
Consider a simple test.py which is the subprocess called from the main program:
import time

for i in range(10):
    print i
    time.sleep(2)
test.py is called from another program using subprocess.Popen:
from subprocess import Popen, PIPE
import time

cmd = Popen(['python', 'test.py'], stdout=PIPE)
print cmd.poll()
time.sleep(2)
if cmd.poll() is None:
    print "killing"
    cmd.terminate()
time.sleep(2)
This:
- provides a timeout of 2 seconds so that the program can execute
- checks the exit status of the process using Popen.poll
- if that is None, the process has not terminated, so it kills the process
I'm struggling to find the best approach to running multiple OS commands in parallel and capturing their output. The OS command is a semi-long-running proprietary utility written in C (running on Solaris/Linux hosts, using Python 2.4). From a high level, this script will pull jobs from a job queue and instantiate a class for each job; the class then spawns the OS utility with the provided arguments. There is actually going to be a lot more to this class, but I am focusing on the overall architecture of the script; the omitted code is trivial in this context.
There are actually 2 points where I need the output from this OS command.
When the command is first executed it returns a jobid, which I need to capture. The command then blocks until complete. I then need to capture the return code of this command.
What I really want to do (I think) is define a class which spawns a thread and then executes Popen().
Here is what I have now:
#!/usr/bin/python
import sys, subprocess, threading

cmd = "/path/to/utility -x arg1 -y arg2"

class Command(object):
    def __init__(self, cmd):
        self.cmd = cmd
        self.process = None
        self.returncode = None
        self.jobid = None

    def __call__(self):
        print "Starting job..."
        self.process = subprocess.Popen(self.cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        out, err = self.process.communicate()
        self.jobid = out.split()[10]

    def alive(self):
        if self.process.poll():
            return True
        else:
            return False

    def getJobID(self):
        return self.jobid

job = Command(cmd)
jobt = threading.Thread(target=job, args=[])
jobt.start()

# if job.alive():
#     print "Job is still alive."
#     # do something
# else:
#     print "Job is not alive."
#     # do something else

sys.exit(0)
The problem here is that using p.communicate() causes the entire thread to block, so I can't get the jobid at the point I want to.
Also, if I uncomment the if statement, it complains that there is no method alive().
I've tried various variations of this, like creating the thread inside of the call method of the class but that seemed like I was going down the wrong road.
I also tried specifying the class name as the target argument when spawning the thread:
jobt = threading.Thread(target=Command, args=[cmd])
jobt.start()
With every approach I have used, I kept hitting roadblocks.
Thanks for any suggestions.
So after trying dano's idea, I now have this:
class Command(threading.Thread):
    def __init__(self, cmd):
        super(Command, self).__init__()
        self.cmd = cmd
        self.process = None
        self.returncode = None
        self.jobid = None

    def run(self):
        print "Starting job..."
        self.process = subprocess.Popen(self.cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=0, shell=False)
        print "Getting job id..."
        out = self.process.stdout.readline()
        print "out=" + out
        self.returncode = self.process.wait()

    def alive(self):
        if self.process.poll():
            return True
        else:
            return False

    def getJobID(self):
        return self.jobid

job = Command(cmd)
job.start()
Which yields the following output:
Starting job...
Getting job id...
At this point it hangs until the OS command completes.
Here is an example of running this command manually. The first two lines of output return immediately.
$ /path/to/my/command -x arg1 -y arg2
Info: job request 1 (somestring) submitted; job id is 729.
Info: waiting for job completion
# here it hangs until the job is complete
Info: job 729 completed successfully
Thanks again for the help.
I think you could simplify things by having Command inherit from threading.Thread:
import sys
import subprocess
import threading

cmd = "/path/to/utility -x arg1 -y arg2"

class Command(threading.Thread):
    def __init__(self, cmd):
        super(Command, self).__init__()
        self.cmd = cmd
        self.process = None
        self.returncode = None
        self.jobid = None

    def run(self):
        print "Starting job..."
        self.process = subprocess.Popen(self.cmd, stdout=subprocess.PIPE,
                                        stderr=subprocess.PIPE, shell=True)
        out, err = self.process.communicate()
        self.jobid = out.split()[10]

    def alive(self):
        if self.process.poll():
            return True
        else:
            return False

    def getJobID(self):
        return self.jobid

job = Command(cmd)
job.start()
if job.alive():
    print "Job is still alive."
else:
    print "Job is not alive."
sys.exit(0)
You can't use self.process.communicate() to get the job id prior to the command actually exiting, because communicate() will block until the program completes. Instead, you'd need to read directly from the process's stdout:
self.process = subprocess.Popen(self.cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, bufsize=0, shell=True)
out = self.process.stdout.readline()
self.jobid = out.split()[10]
Note that bufsize=0 is added to try to avoid the subprocess buffering its output, which could make readline block.
Then you can call communicate or wait to wait for the process to end:
self.returncode = self.process.wait()
I'm trying to launch an rsync using the subprocess module and Popen inside of a thread. After I call rsync, I need to read the output as well. I'm using the communicate method to read the output. The code runs fine when I do not use a thread; when I use a thread, it appears to hang on the communicate call. Another thing I've noticed is that when I set shell=False, I get nothing back from communicate() when running in a thread.
You didn't supply any code for us to look at, but here's a sample that does something similar to what you describe:
import threading
import subprocess

class MyClass(threading.Thread):
    def __init__(self):
        self.stdout = None
        self.stderr = None
        threading.Thread.__init__(self)

    def run(self):
        p = subprocess.Popen('rsync -av /etc/passwd /tmp'.split(),
                             shell=False,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        self.stdout, self.stderr = p.communicate()

myclass = MyClass()
myclass.start()
myclass.join()
print myclass.stdout
Here's a great implementation not using threads:
constantly-print-subprocess-output-while-process-is-running
import subprocess

def execute(command):
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = ''
    # Poll process for new output until finished
    for line in iter(process.stdout.readline, ""):
        print line,
        output += line
    process.wait()
    exitCode = process.returncode
    if exitCode == 0:
        return output
    else:
        raise Exception(command, exitCode, output)

execute('ping localhost')  # with shell=True, pass the command as a single string
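For completeness, a hedged Python 3 sketch of the same helper (text=True replaces the manual byte handling, and iterating over process.stdout streams lines as they arrive):

```python
import subprocess

def execute(command):
    # text=True makes stdout yield str lines instead of bytes
    process = subprocess.Popen(command, shell=True, text=True,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
    output = ''
    for line in process.stdout:   # print output while the process runs
        print(line, end='')
        output += line
    process.wait()
    if process.returncode != 0:
        raise Exception(command, process.returncode, output)
    return output

out = execute('echo hello')
```

A non-zero exit status still raises, so callers can distinguish success (the captured output is returned) from failure.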