I am using subprocess to execute a python script. I can successfully run the script.
import subprocess
import time
proc = subprocess.Popen(['python', 'test.py', ''], shell=True)
proc.communicate()
time.sleep(10)
proc.terminate()
test.py
import time
while True:
    print("I am inside test.py")
    time.sleep(1)
I can see the message "I am inside test.py" printed every second. However, I am not able to terminate the script with proc.terminate() while it is still running.
Could someone kindly help me out?
proc.communicate() waits for the process to finish unless you include a timeout. Your process doesn't exit and you don't have a timeout, so communicate() never returns. You could verify that with a print after the call. Instead, add a timeout and catch the resulting exception:
import subprocess
import time

proc = subprocess.Popen(['python', 'test.py', ''], shell=True)
try:
    proc.communicate(timeout=10)
except subprocess.TimeoutExpired:
    proc.terminate()
First things first: don't pass the arguments to subprocess.Popen() as a list if you're using shell=True! Change the command to a string, "python test.py".
Popen.communicate(input=None, timeout=None) is a blocking method: it interacts with the process, waits for it to terminate and sets the returncode attribute.
Since your test.py runs an infinite while loop, it will never return!
You have two options to time out the process proc that you have spawned:
Assign the timeout keyword argument in the communicate() method, e.g. communicate(timeout=5) to time the process out after 5 seconds. If the process proc does not terminate after timeout seconds, a TimeoutExpired exception will be raised. Catching this exception and retrying communication will not lose any output (in your case you don't need the child's output, but I mention it in the example below). ATTENTION: the child process is not killed if the timeout expires, so in order to clean up properly a well-behaved application should kill the child process (proc) and finish communication.
Use the poll() method and do the timing yourself in the calling code.
communicate with timeout
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
poll with time.sleep
proc = subprocess.Popen('python test.py', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still sleeping')
    time.sleep(1)
    t -= 1
proc.kill()
I wrote some code to run a script (via a subprocess) and kill the child process after a certain timeout. I'm running a script called "runtime_hang_script.sh" that just contains "./runtime_hang," which runs an infinite loop. I'm also redirecting stdout to a pipe -- I plan to write it to both sys.stdout and to a file (aka I'm trying to implement tee). However, my code hangs after the subprocess times out. Note that this ONLY hangs when running "sh runtime_hang_script.sh" and not "./runtime_hang." Also, this doesn't hang when I try piping directly to a file or when I don't read from the pipe.
I've tried other implementations of creating a timed subprocess, but I keep on getting the same issue. I've even tried raising a signal at the end of the problem -- for some reason, the signal is raised earlier than anticipated, so this doesn't work either. Any help would be appreciated. Thanks in advance!
import subprocess
import sys
import threading

process = None

def run():
    global process
    timeout_secs = 5
    args = ['sh', 'runtime_hang_script.sh']

    sys.stdout.flush()
    process = subprocess.Popen(args, stdout=subprocess.PIPE, bufsize=1)
    with process.stdout:
        for line in iter(process.stdout.readline, b''):
            sys.stdout.write(line.decode('utf-8'))
            sys.stdout.flush()
    process.wait()

proc_thread = threading.Thread(target=run)
proc_thread.start()
proc_thread.join(5)
print(proc_thread.is_alive())
if proc_thread.is_alive():
    process.kill()
Assuming you are using Python 3.3 or newer, you can use the timeout argument of Popen.communicate() to implement your 5-second timeout:
import subprocess
import sys

timeout_secs = 5
args = ['sh', 'runtime_hang_script.sh']

process = subprocess.Popen(args, stdout=subprocess.PIPE, bufsize=1)
try:
    print("Waiting for data from child process...")
    (stdoutData, stderrData) = process.communicate(None, timeout_secs)
    print("From child process: stdoutData=[%s] stderrData=[%s]" % (stdoutData, stderrData))
except subprocess.TimeoutExpired:
    print("Oops, child process took too long! Now it has to die")
    process.kill()

print("Waiting for child process to exit...")
process.wait()
print("Child process exited.")
Note that spawning a child thread isn't necessary with this approach, since the timeout can work directly from the main thread.
I'm launching a subprocess with the following command:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
However, when I try to kill using:
p.terminate()
or
p.kill()
The command keeps running in the background, so I was wondering how I can actually terminate the process.
Note that when I run the command with:
p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
It does terminate successfully when issuing the p.terminate().
Use a process group so as to enable sending a signal to all the processes in the group. For that, you should attach a session id to the parent process of the spawned/child processes, which is a shell in your case. This will make it the group leader of those processes. So now, when a signal is sent to the process group leader, it's transmitted to all of the child processes of this group.
Here's the code:
import os
import signal
import subprocess
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(os.getpgid(pro.pid), signal.SIGTERM)  # Send the signal to all processes in the group
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
p.kill()
p.kill() ends up killing the shell process and cmd is still running.
I found a convenient fix for this:
p = subprocess.Popen("exec " + cmd, stdout=subprocess.PIPE, shell=True)
This will cause cmd to inherit the shell process, instead of having the shell launch a child process, which does not get killed. p.pid will be the id of your cmd process then.
p.kill() should work.
I don't know what effect this will have on your pipe though.
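For example, a minimal sketch of the pattern on a POSIX shell (sleep 100 here is just a stand-in for your cmd):

import subprocess

# "exec" makes the command replace the shell, so p.pid is the command's own pid
p = subprocess.Popen("exec sleep 100", stdout=subprocess.PIPE, shell=True)
p.kill()   # now reaches the command itself
p.wait()   # reap it so it doesn't linger as a zombie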
If you can use psutil, then this works perfectly:
import subprocess
import psutil

def kill(proc_pid):
    process = psutil.Process(proc_pid)
    for proc in process.children(recursive=True):
        proc.kill()
    process.kill()

proc = subprocess.Popen(["infinite_app", "param"], shell=True)
try:
    proc.wait(timeout=3)
except subprocess.TimeoutExpired:
    kill(proc.pid)
I could do it using
from subprocess import Popen
process = Popen(command, shell=True)
Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
It killed cmd.exe and the program that I gave the command for.
(On Windows)
When shell=True the shell is the child process, and the commands are its children. So any SIGTERM or SIGKILL will kill the shell but not its child processes, and I don't remember a good way to do it.
The best way I can think of is to use shell=False, otherwise when you kill the parent shell process, it will leave a defunct shell process.
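For example, a minimal sketch of the shell=False variant (ping here is only a placeholder for your command):

import shlex
import subprocess

cmd = "ping 127.0.0.1"
# with shell=False the process you start is the command itself,
# so terminate()/kill() reach it directly and no shell is left behind
p = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
p.terminate()
p.wait()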
None of these answers worked for me, so I'm leaving the code that did work. In my case, even after killing the process with .kill() and getting a .poll() return code, the process didn't terminate.
Following the subprocess.Popen documentation:
"...in order to cleanup properly a well-behaved application should kill the child process and finish communication..."
proc = subprocess.Popen(...)
try:
    outs, errs = proc.communicate(timeout=15)
except TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
In my case I was missing the proc.communicate() call after calling proc.kill(). This cleans up the process's stdin, stdout ... and does terminate the process.
As Sai said, the shell is the child, so signals are intercepted by it -- best way I've found is to use shell=False and use shlex to split the command line:
import shlex

if isinstance(command, unicode):
    cmd = command.encode('utf8')
else:
    cmd = command
args = shlex.split(cmd)

p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Then p.kill() and p.terminate() should work how you expect.
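As a usage sketch with the p created above (the timeout argument of wait() needs Python 3.3 or newer):

p.terminate()                    # send SIGTERM first
try:
    p.wait(timeout=5)            # give it a few seconds to exit cleanly
except subprocess.TimeoutExpired:
    p.kill()                     # escalate to SIGKILL if SIGTERM is ignored
    p.wait()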
Send the signal to all the processes in the group:
self.proc = Popen(commands,
                  stdout=PIPE,
                  stderr=STDOUT,
                  universal_newlines=True,
                  preexec_fn=os.setsid)

os.killpg(os.getpgid(self.proc.pid), signal.SIGHUP)
os.killpg(os.getpgid(self.proc.pid), signal.SIGTERM)
There is a very simple way for Python 3.5 or newer (actually tested on Python 3.8):
import subprocess, signal, time
p = subprocess.Popen(['cmd'], shell=True)
time.sleep(5) #Wait 5 secs before killing
p.send_signal(signal.CTRL_C_EVENT)
Then, your code may crash at some point if you have keyboard input detection, or something like that. In this case, on the line of code/function where the error is raised, just use:
try:
    FailingCode  # here goes the code which is raising KeyboardInterrupt
except KeyboardInterrupt:
    pass
What this code does is just send a "CTRL+C" signal to the running process, which will cause the process to get killed.
Solution that worked for me
if os.name == 'nt':  # windows
    subprocess.Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
else:
    os.kill(process.pid, signal.SIGTERM)
Full blown solution that will kill the running process (including its subtree) when a timeout is reached or when specific conditions are met via a callback function.
Works on both Windows and Linux, from Python 2.7 up to 3.10 as of this writing.
Install with pip install command_runner
Example for timeout:
from command_runner import command_runner
# Kills ping after 2 seconds
exit_code, output = command_runner('ping 127.0.0.1', shell=True, timeout=2)
Example for specific condition:
Here we'll stop ping if the current system time's seconds digit is > 5:
from time import time
from command_runner import command_runner
def my_condition():
    # Arbitrary condition for demo
    return int(str(int(time()))[-1]) > 5
# Calls my_condition() every second (check_interval) and kills ping if my_condition() returns True
exit_code, output = command_runner('ping 127.0.0.1', shell=True, stop_on=my_condition, check_interval=1)
I would like to repeatedly execute a subprocess as fast as possible. However, sometimes the process will take too long, so I want to kill it.
I use signal.signal(...) like below:
ppid = pipeexe.pid
signal.signal(signal.SIGALRM, stop_handler)
signal.alarm(1)
.....

def stop_handler(signal, frame):
    print 'Stop test'+testdir+'for time out'
    if(pipeexe.poll()==None and hasattr(signal, "SIGKILL")):
        os.kill(ppid, signal.SIGKILL)
        return False
but sometimes this code will stop the next round from executing.
Stop test/home/lu/workspace/152/treefit/test2for time out
/bin/sh: /home/lu/workspace/153/squib_driver: not found ---this is the next execution; the program wrongly stops it.
Does anyone know how to solve this? I want to stop it exactly on time, not by using time.sleep(n), which always waits the full n seconds. I don't want that; the subprocess should be able to finish in less than 1 second.
You could do something like this:
import subprocess as sub
import threading

class RunCmd(threading.Thread):
    def __init__(self, cmd, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.timeout = timeout

    def run(self):
        self.p = sub.Popen(self.cmd)
        self.p.wait()

    def Run(self):
        self.start()
        self.join(self.timeout)

        if self.is_alive():
            self.p.terminate()  # use self.p.kill() if the process needs a kill -9
            self.join()

RunCmd(["./someProg", "arg1"], 60).Run()
The idea is that you create a thread that runs the command, and kill the command if it runs longer than some suitable timeout, in this case 60 seconds.
Here is something I wrote as a watchdog for subprocess execution. I use it now a lot, but I'm not so experienced so maybe there are some flaws in it:
import subprocess
import time

def subprocess_execute(command, time_out=60):
    """executing the command with a watchdog"""

    # launching the command
    c = subprocess.Popen(command)

    # now waiting for the command to complete
    t = 0
    while t < time_out and c.poll() is None:
        time.sleep(1)  # (comment 1)
        t += 1

    # there are two possibilities for the while to have stopped:
    if c.poll() is None:
        # in the case the process did not complete, we kill it
        c.terminate()
        # and fill the return code with some error value
        returncode = -1  # (comment 2)
    else:
        # in the case the process completed normally
        returncode = c.poll()

    return returncode
Usage:
returncode = subprocess_execute(['java', '-jar', 'some.jar'])
Comments:
here, the watchdog timeout is in seconds, but it's easy to change it to whatever is needed by changing the time.sleep() value. The time_out will have to be documented accordingly;
depending on what is needed, it may be more suitable here to raise some exception.
Documentation: I struggled a bit with the documentation of the subprocess module to understand that subprocess.Popen is not blocking; the process is executed in parallel (maybe I do not use the correct word here, but I think it's understandable).
But as what I wrote is linear in its execution, I really have to wait for the command to complete, with a timeout to keep bugs in the command from stalling the nightly execution of the script.
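To illustrate that non-blocking behaviour, a tiny sketch (sleep is just a placeholder command):

import subprocess

c = subprocess.Popen(['sleep', '5'])  # returns immediately; the command runs in parallel
print(c.poll())                       # None: still running
c.wait()                              # this is the call that actually blocks
print(c.poll())                       # 0: finished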
I guess this is a common synchronization problem in event-oriented programming with threads and processes.
If you should always have only one subprocess running, make sure the current subprocess is killed before running the next one. Otherwise the signal handler may get a reference to the last subprocess run and ignore the older one.
Suppose subprocess A is running. Before the alarm signal is handled, subprocess B is launched. Just after that, your alarm signal handler attempts to kill a subprocess. As the current PID (or the current subprocess pipe object) was set to B's when launching the subprocess, B gets killed and A keeps running.
Is my guess correct?
To make your code easier to understand, I would include the part that creates a new subprocess just after the part that kills the current subprocess. That would make clear there is only one subprocess running at any time. The signal handler could do both the subprocess killing and launching, as if it was the iteration block that runs in a loop, in this case event-driven with the alarm signal every 1 second.
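A minimal sketch of that structure (the names are illustrative, not taken from the question's code):

import subprocess

current = None

def start_next(cmd):
    global current
    # make sure the previous subprocess is gone before launching the next one
    if current is not None and current.poll() is None:
        current.kill()
        current.wait()
    current = subprocess.Popen(cmd)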
Here's what I use:
import os
import signal
import subprocess
import threading

class KillerThread(threading.Thread):
    def __init__(self, pid, timeout, event):
        threading.Thread.__init__(self)
        self.pid = pid
        self.timeout = timeout
        self.event = event
        self.setDaemon(True)

    def run(self):
        self.event.wait(self.timeout)
        if not self.event.isSet():
            try:
                os.kill(self.pid, signal.SIGKILL)
            except OSError:
                # This is raised if the process has already completed
                pass

def runTimed(dt, dir, args, kwargs):
    event = threading.Event()
    cwd = os.getcwd()
    os.chdir(dir)
    proc = subprocess.Popen(args, **kwargs)
    os.chdir(cwd)

    killer = KillerThread(proc.pid, dt, event)
    killer.start()

    (stdout, stderr) = proc.communicate()
    event.set()

    return (stdout, stderr, proc.returncode)
A bit more complex, I added an answer to solve a similar problem: Capturing stdout, feeding stdin, and being able to terminate after some time of inactivity and/or after some overall runtime.
I run a subprocess using:
p = subprocess.Popen("subprocess",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
This subprocess could either exit immediately with an error on stderr, or keep running. I want to detect either of these conditions - the latter by waiting for several seconds.
I tried this:
SECONDS_TO_WAIT = 10
select.select([],
              [p.stdout, p.stderr],
              [p.stdout, p.stderr],
              SECONDS_TO_WAIT)
but it just returns:
([],[],[])
on either condition. What can I do?
Have you tried using the Popen.poll() method? You could just do this:
p = subprocess.Popen("subprocess",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
time.sleep(SECONDS_TO_WAIT)
retcode = p.poll()
if retcode is not None:
# process has terminated
This will cause you to always wait 10 seconds, but if the failure case is rare this would be amortized over all the success cases.
Edit:
How about:
t_nought = time.time()
seconds_passed = 0

while p.poll() is None and seconds_passed < 10:
    seconds_passed = time.time() - t_nought

if seconds_passed >= 10:
    # TIMED OUT
This has the ugliness of being a busy wait, but I think it accomplishes what you want.
Additionally looking at the select call documentation again I think you may want to change it as follows:
SECONDS_TO_WAIT = 10
select.select([p.stderr],
              [],
              [p.stdout, p.stderr],
              SECONDS_TO_WAIT)
Since you would typically want to read from stderr, you want to know when it has something available to read (i.e. the failure case).
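For example, the result could be interpreted roughly like this (a sketch, assuming a Unix platform where pipe objects can be passed to select):

readable, _, _ = select.select([p.stderr], [], [], SECONDS_TO_WAIT)
if readable:
    # stderr has data: the subprocess most likely failed
    error_output = p.stderr.read()
else:
    # nothing on stderr within the timeout: assume it is still running normally
    pass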
I hope this helps.
This is what I came up with. It works whether or not you need to time out the process, but with a semi-busy loop.
import os
import signal
import subprocess
import time

def runCmd(cmd, timeout=None):
    '''
    Will execute a command, read the output and return it back.

    @param cmd: command to execute
    @param timeout: process timeout in seconds
    @return: a tuple of three: first stdout, then stderr, then exit code
    @raise OSError: on missing command or if a timeout was reached
    '''
    ph_out = None  # process output
    ph_err = None  # stderr
    ph_ret = None  # return code

    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    # if timeout is not set wait for process to complete
    if not timeout:
        ph_ret = p.wait()
    else:
        fin_time = time.time() + timeout
        while p.poll() is None and fin_time > time.time():
            time.sleep(1)

        # if timeout reached, raise an exception
        if fin_time < time.time():
            # starting with 2.6 subprocess has a kill() method which is preferable
            # p.kill()
            os.kill(p.pid, signal.SIGKILL)
            raise OSError("Process timeout has been reached")

        ph_ret = p.returncode

    ph_out, ph_err = p.communicate()

    return (ph_out, ph_err, ph_ret)
Here is a nice example:
from threading import Timer
from subprocess import Popen, PIPE
proc = Popen("ping 127.0.0.1", shell=True)
t = Timer(60, proc.kill)
t.start()
proc.wait()
Using select and sleeping doesn't really make much sense. select (or any kernel polling mechanism) is inherently useful for asynchronous programming, but your example is synchronous. So either rewrite your code to use the normal blocking fashion or consider using Twisted:
from twisted.internet.utils import getProcessOutputAndValue
from twisted.internet import reactor

def stop(r):
    reactor.stop()

def eb(reason):
    reason.printTraceback()

def cb(result):
    stdout, stderr, exitcode = result
    # do something

getProcessOutputAndValue('/bin/someproc', []
    ).addCallback(cb).addErrback(eb).addBoth(stop)

reactor.run()
Incidentally, there is a safer way of doing this with Twisted by writing your own ProcessProtocol:
http://twistedmatrix.com/projects/core/documentation/howto/process.html
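As a rough, untested sketch of what a ProcessProtocol-based version could look like (method names as described in the howto above; details may need adjusting):

from twisted.internet import protocol, reactor

class ProcHandler(protocol.ProcessProtocol):
    def __init__(self):
        self.out = b''
        self.err = b''

    def outReceived(self, data):
        self.out += data

    def errReceived(self, data):
        self.err += data

    def processEnded(self, reason):
        # called once the child has exited, however that happened
        reactor.stop()

handler = ProcHandler()
reactor.spawnProcess(handler, '/bin/someproc', ['/bin/someproc'])
# optional timeout: send SIGKILL if it runs too long
reactor.callLater(10, lambda: handler.transport.signalProcess('KILL'))
reactor.run()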
Python 3.3
import subprocess as sp

try:
    sp.check_call(["/subprocess"], timeout=10,
                  stdin=sp.DEVNULL, stdout=sp.DEVNULL, stderr=sp.DEVNULL)
except sp.TimeoutExpired:
    # timeout (the subprocess is killed at this point)
    pass
except sp.CalledProcessError:
    # subprocess failed before timeout
    pass
else:
    # subprocess ended successfully before timeout
    pass
See TimeoutExpired docs.
If, as you said in the comments above, you're just tweaking the output each time and re-running the command, would something like the following work?
from threading import Timer
import subprocess

WAIT_TIME = 10.0

def check_cmd(cmd):
    p = subprocess.Popen(cmd,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)

    def _check():
        if p.poll() != 0:
            print cmd + " did not quit within the given time period."

    # check whether the given process has exited WAIT_TIME
    # seconds from now
    Timer(WAIT_TIME, _check).start()

check_cmd('echo')
check_cmd('python')
The code above, when run, outputs:
python did not quit within the given time period.
The only downside of the above code that I can think of is the potentially overlapping processes as you keep running check_cmd.
This is a paraphrase of Evan's answer, but it takes into account the following:
Explicitly cancelling the Timer object: if the Timer interval is long and the process exits of its "own will", this could hang your script :(
There is an intrinsic race in the Timer approach (the timer may attempt to kill the process just after the process has died, and on Windows this will raise an exception).
import os
from subprocess import Popen
from threading import Timer

DEVNULL = open(os.devnull, "wb")
process = Popen("c:/myExe.exe", stdout=DEVNULL)  # no need for stdout

def kill_process():
    """Kill process helper"""
    try:
        process.kill()
    except OSError:
        pass  # Swallow the error

timer = Timer(timeout_in_sec, kill_process)
timer.start()
process.wait()
timer.cancel()