Hanging parent process after running a timed-out subprocess and piping results - python

I wrote some code to run a script (via a subprocess) and kill the child process after a certain timeout. I'm running a script called "runtime_hang_script.sh" that just contains "./runtime_hang," which runs an infinite loop. I'm also redirecting stdout to a pipe -- I plan to write it to both sys.stdout and to a file (i.e., I'm trying to implement tee). However, my code hangs after the subprocess times out. Note that this ONLY hangs when running "sh runtime_hang_script.sh" and not "./runtime_hang." Also, this doesn't hang when I try piping directly to a file or when I don't read from the pipe.
I've tried other implementations of creating a timed subprocess, but I keep getting the same issue. I've even tried raising a signal at the end of the program -- for some reason, the signal is raised earlier than anticipated, so this doesn't work either. Any help would be appreciated. Thanks in advance!
import subprocess
import sys
import threading

process = None

def run():
    global process
    timeout_secs = 5
    args = ['sh', 'runtime_hang_script.sh']
    sys.stdout.flush()
    process = subprocess.Popen(args, stdout=subprocess.PIPE, bufsize=1)
    with process.stdout:
        for line in iter(process.stdout.readline, b''):
            sys.stdout.write(line.decode('utf-8'))
            sys.stdout.flush()
    process.wait()

proc_thread = threading.Thread(target=run)
proc_thread.start()
proc_thread.join(5)
print(proc_thread.is_alive())
if proc_thread.is_alive():
    process.kill()

Assuming you are using Python 3.3 or newer, you can use the timeout argument of Popen.communicate() to implement your 5-second timeout:
import subprocess

timeout_secs = 5
args = ['sh', 'runtime_hang_script.sh']

process = subprocess.Popen(args, stdout=subprocess.PIPE, bufsize=1)
try:
    print("Waiting for data from child process...")
    (stdoutData, stderrData) = process.communicate(None, timeout_secs)
    print("From child process: stdoutData=[%s] stderrData=[%s]" % (stdoutData, stderrData))
except subprocess.TimeoutExpired:
    print("Oops, child process took too long! Now it has to die")
    process.kill()
    print("Waiting for child process to exit...")
    process.wait()
print("Child process exited.")
Note that spawning a child thread isn't necessary with this approach, since the timeout can work directly from the main thread.
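If you still want the tee behaviour from the question (output to both sys.stdout and a file), you can do it after communicate() returns, without any reader thread. A minimal sketch building on the answer above; "output.log" is a placeholder filename:
import subprocess
import sys

args = ['sh', 'runtime_hang_script.sh']
process = subprocess.Popen(args, stdout=subprocess.PIPE)
try:
    stdout_data, _ = process.communicate(timeout=5)
except subprocess.TimeoutExpired:
    process.kill()
    stdout_data, _ = process.communicate()  # collect whatever was produced

# Tee: write the captured output to both sys.stdout and a log file
text = stdout_data.decode('utf-8')
sys.stdout.write(text)
with open('output.log', 'w') as f:
    f.write(text)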

Related

How to run an EXE program in the background and get the output in Python

I want to run an exe program in the background
Let's say the program is httpd.exe
I can run it, but when I want to get the output it gets stuck, because there is no output if it starts successfully. But if there is an error, it's OK.
Here is the code I'm using:
import asyncio
import os

async def run(cmd):
    proc = await asyncio.create_subprocess_exec(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)
    stdout, stderr = await proc.communicate()
    return (proc, stdout, stderr)

os.chdir('c:\\apache\\bin')
process, stdout, stderr = asyncio.run(run('httpd.exe'))
print(stdout, stderr)
I tried to make the following code as general as possible:
I make no assumptions as to whether the program being run writes its output to stdout alone or stderr alone. So I capture both outputs by starting two threads, one for each stream, which write the output to a common queue that can be read in real time. When end-of-stream is encountered on each of stdout and stderr, the threads write a special None record to the queue to indicate end of stream. So the reader of the queue knows that after seeing two such "end of stream" indicators there will be no more lines written to the queue and that the process has effectively ended.
The call to subprocess.Popen can be made with the argument shell=True so that this can also run built-in shell commands, and to make the specification of the command easier (it can then be a single string rather than a list of strings).
The function run_cmd returns the created process and the queue. You just have to loop reading lines from the queue until two None records are seen. Once that occurs, you can then just wait for the process to complete, which should be immediate.
If you know that the process you are starting writes its output only to stdout or only to stderr (or if you only want to capture one of these outputs), then you can modify the program to start only one thread, specify the subprocess.PIPE value for only one of the outputs, and have the loop that reads lines from the queue look for only one None end-of-stream indicator (a sketch of this variant appears after the code below).
The threads are daemon threads, so if you wish to terminate based on output from the process before all the end-of-stream records have been detected, the threads will automatically be terminated along with the main process.
run_apache, which runs Apache as a subprocess, is itself a daemon thread. If it detects any output from Apache, it sets an event that has been passed to it. The main thread that starts run_apache can periodically test this event, wait on this event, wait for the run_apache thread to end (which will only occur when Apache ends), or terminate Apache via the global variable proc.
import os
import subprocess
import threading
import queue

def read_stream(f, q):
    for line in iter(f.readline, ''):
        q.put(line)
    q.put(None)  # signal no more data from stdout or stderr

def run_cmd(command, run_in_shell=True):
    """
    Run command as a subprocess. If run_in_shell is True, then
    command is a string, else it is a list of strings.
    """
    proc = subprocess.Popen(command, shell=run_in_shell,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            text=True)
    q = queue.Queue()
    threading.Thread(target=read_stream, args=(proc.stdout, q), daemon=True).start()
    threading.Thread(target=read_stream, args=(proc.stderr, q), daemon=True).start()
    return proc, q

def run_apache(event):
    global proc
    os.chdir('c:\\apache\\bin')
    proc, q = run_cmd(['httpd.exe'], False)
    seen_None_count = 0
    while seen_None_count < 2:
        line = q.get()
        if line is None:
            # end of stream from either stdout or stderr
            seen_None_count += 1
        else:
            event.set()  # seen an output line
            print(line, end='')
    # wait for process to terminate, which should be immediate:
    proc.wait()

# This event will be set if Apache writes any output:
event = threading.Event()
t = threading.Thread(target=run_apache, args=(event,), daemon=True)
t.start()

# The main thread runs and can test the event at any time to see if
# Apache has produced any output:
if event.is_set():
    ...

# The main thread can wait for the run_apache thread to terminate normally,
# which will occur when Apache terminates:
t.join()
# or the main thread can kill Apache via the global variable proc:
proc.terminate()  # no need for t.join() since run_apache is a daemon thread
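As mentioned above, if you know the process writes only to stdout, one reader thread and a single end-of-stream marker suffice. A minimal sketch of that variant (the function name is mine):
import subprocess
import threading
import queue

def run_cmd_stdout_only(command, run_in_shell=True):
    # Single-stream variant of run_cmd: one reader thread and a single
    # None end-of-stream marker are enough when only stdout is captured.
    proc = subprocess.Popen(command, shell=run_in_shell,
                            stdout=subprocess.PIPE, text=True)
    q = queue.Queue()

    def read_stdout():
        for line in iter(proc.stdout.readline, ''):
            q.put(line)
        q.put(None)  # single end-of-stream marker

    threading.Thread(target=read_stdout, daemon=True).start()
    return proc, q

# Usage: read until the single None marker is seen, then reap the process.
proc, q = run_cmd_stdout_only('echo hello')
while True:
    line = q.get()
    if line is None:
        break
    print(line, end='')
proc.wait()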

Terminating a running script with subprocess

I am using subprocess to execute a python script. I can successfully run the script.
import subprocess
import time
proc = subprocess.Popen(['python', 'test.py', ''], shell=True)
proc.communicate()
time.sleep(10)
proc.terminate()
test.py
import time

while True:
    print("I am inside test.py")
    time.sleep(1)
I can see the message I am inside test.py printed every second. However, I am not able to terminate the script while it is still running by proc.terminate().
Could someone kindly help me out?
proc.communicate waits for the process to finish unless you include a timeout. Your process doesn't exit and you don't have a timeout, so communicate never returns. You could verify that with a print after the call. Instead, add a timeout and catch the exception it raises:
import subprocess

proc = subprocess.Popen(['python', 'test.py', ''], shell=True)
try:
    proc.communicate(timeout=10)
except subprocess.TimeoutExpired:
    proc.terminate()
First things first: don't pass the arguments to subprocess.Popen() as a list if you're using shell=True! Change the command to a string: "python test.py".
Popen.communicate(input=None, timeout=None) is a blocking method: it interacts with the process and waits for it to terminate, setting the returncode attribute.
Since your test.py runs an infinite while loop, it will never return!
You have 2 options to time out the process proc that you have spawned:
1. Pass the timeout keyword argument, e.g. communicate(timeout=5) to time the process out after 5 seconds. If proc does not terminate after timeout seconds, a TimeoutExpired exception will be raised. Catching this exception and retrying communication will not lose any output (in your case you don't need the child's output, but I will show it in the example below). ATTENTION: the child process is not killed if the timeout expires, so in order to clean up properly a well-behaved application should kill the child process (proc) and finish communication.
2. Use the poll method and do the timing in your calling code.
communicate with timeout
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
poll with time.sleep
import subprocess
import time

proc = subprocess.Popen('python test.py', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still sleeping')
    time.sleep(1)
    t -= 1
proc.kill()

How to kill a parallel process (started by subprocess.Popen) and its subprocesses? [duplicate]

I'm launching a subprocess with the following command:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
However, when I try to kill using:
p.terminate()
or
p.kill()
The command keeps running in the background, so I was wondering how I can actually terminate the process.
Note that when I run the command with:
p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
It does terminate successfully when issuing the p.terminate().
Use a process group to enable sending a signal to all the processes in the group. For that, you should attach a session id to the parent process of the spawned/child processes, which is a shell in your case. This will make it the group leader of the processes. So now, when a signal is sent to the process group leader, it's transmitted to all of the child processes of this group.
Here's the code:
import os
import signal
import subprocess

# os.setsid() is passed as the preexec_fn argument so that it runs
# after fork() and before exec() of the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(os.getpgid(pro.pid), signal.SIGTERM)  # send the signal to all processes in the group
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
p.kill()
p.kill() ends up killing the shell process and cmd is still running.
I found a convenient fix for this:
p = subprocess.Popen("exec " + cmd, stdout=subprocess.PIPE, shell=True)
This will cause cmd to take over the shell's process (the shell exec's into it) instead of the shell launching a child process that does not get killed. p.pid will then be the PID of your cmd process.
p.kill() should work.
I don't know what effect this will have on your pipe though.
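To see the effect of the exec trick, here is a small self-contained demo; "sleep 60" stands in for your cmd, and this assumes a POSIX shell:
import subprocess
import time

# The shell replaces itself with the command via exec, so p.pid is the
# command's own PID and p.kill() reaches it directly.
p = subprocess.Popen("exec sleep 60", stdout=subprocess.PIPE, shell=True)
time.sleep(1)
p.kill()
p.wait()
print("return code:", p.returncode)  # negative signal number on POSIX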
If you can use psutil, then this works perfectly:
import subprocess
import psutil
def kill(proc_pid):
    process = psutil.Process(proc_pid)
    for proc in process.children(recursive=True):
        proc.kill()
    process.kill()

# with shell=True, pass the command as a single string
proc = subprocess.Popen("infinite_app param", shell=True)
try:
    proc.wait(timeout=3)
except subprocess.TimeoutExpired:
    kill(proc.pid)
I could do it using:
from subprocess import Popen

process = Popen(command, shell=True)
Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
It killed the cmd.exe and the program that I gave the command for.
(On Windows)
When shell=True, the shell is the child process, and the commands are its children. So any SIGTERM or SIGKILL will kill the shell but not its child processes, and I don't remember a good way to do it.
The best way I can think of is to use shell=False; otherwise, when you kill the parent shell process, it will leave a defunct shell process behind.
None of these answers worked for me, so I'm leaving the code that did work. In my case, even after killing the process with .kill() and getting a .poll() return code, the process didn't terminate.
Following the subprocess.Popen documentation:
"...in order to cleanup properly a well-behaved application should kill the child process and finish communication..."
proc = subprocess.Popen(...)
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
In my case I was missing the proc.communicate() after calling proc.kill(). This cleans up the process's stdin, stdout ... and does terminate the process.
As Sai said, the shell is the child, so signals are intercepted by it -- the best way I've found is to use shell=False and use shlex to split the command line:
import shlex
import subprocess

# (Python 2) encode unicode command lines before splitting
if isinstance(command, unicode):
    command = command.encode('utf8')
args = shlex.split(command)

p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Then p.kill() and p.terminate() should work how you expect.
Send the signal to all the processes in the group:
import os
import signal
from subprocess import Popen, PIPE, STDOUT

self.proc = Popen(commands,
                  stdout=PIPE,
                  stderr=STDOUT,
                  universal_newlines=True,
                  preexec_fn=os.setsid)

os.killpg(os.getpgid(self.proc.pid), signal.SIGHUP)
os.killpg(os.getpgid(self.proc.pid), signal.SIGTERM)
There is a very simple way for Python 3.5 or newer (actually tested on Python 3.8):
import subprocess, signal, time

p = subprocess.Popen(['cmd'], shell=True)
time.sleep(5)  # wait 5 secs before killing
p.send_signal(signal.CTRL_C_EVENT)
Then, your code may crash at some point if you have keyboard input detection, or something like that. In this case, on the line of code/function where the error is raised, just use:
try:
    FailingCode  # here goes the code which is raising KeyboardInterrupt
except KeyboardInterrupt:
    pass
What this code does is simply send a "CTRL+C" signal to the running process, which will cause the process to get killed.
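One caveat worth noting (from the subprocess docs): CTRL_C_EVENT and CTRL_BREAK_EVENT only reach children started with creationflags including CREATE_NEW_PROCESS_GROUP, and CTRL_BREAK_EVENT is generally the more reliable of the two to deliver this way. A hedged Windows-only variant:
import subprocess, signal, time

# Start the child in its own process group so console events can be
# delivered to it specifically.
p = subprocess.Popen(['cmd'], shell=True,
                     creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
time.sleep(5)
p.send_signal(signal.CTRL_BREAK_EVENT)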
Solution that worked for me:
import os
import signal
import subprocess

if os.name == 'nt':  # Windows
    subprocess.Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
else:
    os.kill(process.pid, signal.SIGTERM)
Full-blown solution that will kill the running process (including its subtree) when a timeout is reached or when a specific condition is met via a callback function.
Works on both Windows and Linux, from Python 2.7 up to 3.10 as of this writing.
Install with pip install command_runner
Example for timeout:
from command_runner import command_runner
# Kills ping after 2 seconds
exit_code, output = command_runner('ping 127.0.0.1', shell=True, timeout=2)
Example for specific condition:
Here we'll stop ping if the current system time's seconds digit is > 5:
from time import time
from command_runner import command_runner

def my_condition():
    # Arbitrary condition for demo
    return int(str(int(time()))[-1]) > 5

# Calls my_condition() every second (check_interval) and kills ping
# if my_condition() returns True
exit_code, output = command_runner('ping 127.0.0.1', shell=True,
                                   stop_on=my_condition, check_interval=1)

start a service with Popen: command not stopping

I'm trying to make a backup script in Python that starts and stops a service with Popen...
Stopping the service works, but starting the service, while it works, blocks the rest of the execution; the script just stays there. Why?
It seems to be somehow linked with the httpd service... :-(
the program config element is like "service;httpd;start" or "/etc/init.d/myprog;start"
class execute(actions):
    def __init__(self, config, section, logger):
        self.name = "execute"
        actions.__init__(self, config, section, logger)

    def process(self):
        try:
            program = self.config.get(self.section, "program").split(";")
            self.logger.debug("program=%s" % program)
            p = subprocess.Popen(program, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            stdout, stderr = p.communicate()
            if stdout:
                self.logger.info(stdout)
            if stderr:
                self.logger.error(stderr)
            return p.returncode
        except Exception:
            self.logger.exception(Exception)
You have to open stdin as a pipe as well, and then close it (if you use read() and write() instead of communicate()).
p = subprocess.Popen(..., stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.stdin.close()
print("Stdout:", p.stdout.read())
print("Stderr:", p.stderr.read())
If that doesn't work, and you don't really need any checks, just close all pipes right after the call to Popen; this causes the program to run detached from the pipes.
Warning: this will make the program run like a daemon if it doesn't terminate on its own.
After doing this you may call wait() to see whether it blocks as well, and use exit codes to check for errors. There aren't many possible outcomes: essentially just service started or not. Sometimes the script even reports that the service is running, but then the service crashes.
To check whether the service script is still running, without blocking, use:
if p.poll() is None:
    print("Still running")
Otherwise, poll() returns the exit code.
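A minimal sketch of the close-all-pipes approach described above; the service command here is a placeholder:
import subprocess

p = subprocess.Popen(["service", "httpd", "start"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
# Close all pipes immediately so the child runs detached from them
p.stdin.close()
p.stdout.close()
p.stderr.close()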
This works neatly for starting and stopping a service:
from subprocess import Popen, PIPE

service = "brltty"
p = Popen(["service", service, "start"], stdin=PIPE, stdout=PIPE, stderr=PIPE)
# Note: passing a sequence (rather than a string) implies shell=False
stdout, stderr = p.communicate()
print("Stdout:", stdout)
print("Stderr:", stderr)
Don't forget to change start to stop :D :D :D
The call to p.communicate() waits for the process to terminate.
Refer to: subprocess documentation
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
The optional input argument should be a string to be sent to the child
process, or None, if no data should be sent to the child.
You can try to use p.poll() instead. This method doesn't wait for a process to terminate.
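For example, a non-blocking polling loop might look like this (a sketch; the command and the 10-second budget are placeholders):
import subprocess
import time

p = subprocess.Popen(["service", "httpd", "start"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
for _ in range(10):
    if p.poll() is not None:   # poll() returns the exit code once finished
        print("exited with", p.returncode)
        break
    time.sleep(1)
else:
    print("still running after 10 seconds")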

How to kill a python child process created with subprocess.check_output() when the parent dies?

I am running, on a Linux machine, a Python script which creates a child process using subprocess.check_output(), as follows:
subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)
The problem is that even if the parent process dies, the child is still running.
Is there any way I can kill the child process as well when the parent dies?
Yes, you can achieve this by two methods. Both of them require you to use Popen instead of check_output. The first is a simpler method, using try..finally, as follows:
import subprocess
from contextlib import contextmanager

@contextmanager
def run_and_terminate_process(*args, **kwargs):
    try:
        p = subprocess.Popen(*args, **kwargs)
        yield p
    finally:
        p.terminate()  # send sigterm, or ...
        p.kill()       # send sigkill

def main():
    with run_and_terminate_process(args) as running_proc:
        # Your code here, such as running_proc.stdout.readline()
        ...
This will catch sigint (keyboard interrupt) and sigterm, but not sigkill (if you kill your script with -9).
The other method is a bit more complex, and uses ctypes' prctl PR_SET_PDEATHSIG. The system will send a signal to the child once the parent exits for any reason (even sigkill).
import signal
import subprocess
import ctypes

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)  # PR_SET_PDEATHSIG is option 1
    return callable

p = subprocess.Popen(args, preexec_fn=set_pdeathsig(signal.SIGTERM))
Your problem is with using subprocess.check_output -- you are correct, you can't get the child PID using that interface. Use Popen instead:
from subprocess import Popen, PIPE

proc = Popen(["ls", "-l"], stdout=PIPE, stderr=PIPE)

# Here you can get the PID
global child_pid
child_pid = proc.pid

# Now we can wait for the child to complete
(output, error) = proc.communicate()
if error:
    print("error:", error)
print("output:", output)
To make sure you kill the child on exit:
import os
import signal
import atexit

def kill_child():
    if child_pid is None:
        pass
    else:
        os.kill(child_pid, signal.SIGTERM)

atexit.register(kill_child)
Don't know the specifics, but the best way is still to catch errors (and perhaps even all errors) with signal and terminate any remaining processes there.
import signal
import sys
import subprocess

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

a = subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)
while 1:
    pass  # press Ctrl-C (breaks the application and is caught by signal_handler())
This is just a mockup; you'd need to catch more than just SIGINT, but the idea might get you started, and you'd still need to keep track of spawned processes somehow.
http://docs.python.org/2/library/os.html#os.kill
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.pid
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.kill
I'd recommend rewriting a personalized version of check_output, because as I just realized check_output is really just for simple debugging etc., since you can't interact with it much during execution.
Rewrite of check_output:
from subprocess import Popen, PIPE, STDOUT
from time import sleep, time
def checkOutput(cmd):
    a = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    print(a.pid)
    start = time()
    while a.poll() is None and time() - start <= 30:  # 30 sec grace period
        sleep(0.25)
    if a.poll() is None:
        print('Still running, killing')
        a.kill()
    else:
        print('exit code:', a.poll())
    output = a.stdout.read()
    a.stdout.close()
    a.stdin.close()
    return output
And do whatever you'd like with it; perhaps store the active executions in a temporary variable and kill them upon exit with signal or other means of intercepting errors/shutdowns of the main loop.
In the end, you still need to catch terminations in the main application in order to safely kill any children; the best way to approach this is with try & except or signal.
As of Python 3.2 there is a ridiculously simple way to do this:
from subprocess import Popen
with Popen(["sleep", "60"]) as process:
print(f"Just launched server with PID {process.pid}")
I think this will be best for most use cases because it's simple and portable, and it avoids any dependence on global state.
If this solution isn't powerful enough, then I would recommend checking out the other answers and discussion on this question or on Python: how to kill child process(es) when parent dies?, as there are a lot of neat ways to approach the problem that provide different trade-offs around portability, resilience, and simplicity. 😊
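One caveat with this pattern: the with-block waits for the child on exit rather than killing it, so stop the child explicitly if you don't want to block. A minimal sketch:
from subprocess import Popen

with Popen(["sleep", "60"]) as process:
    print(f"Just launched child with PID {process.pid}")
    process.terminate()  # without this, leaving the block waits ~60 s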
Manually you could do this:
ps aux | grep <process name>
get the PID (second column) and
kill -9 <PID>
-9 is to force killing it
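Roughly the same thing can be scripted with pkill, which matches and signals processes in one step ("runtime_hang" is a placeholder name; -f matches against the full command line):
import subprocess

# Force-kill every process whose command line matches the pattern
subprocess.run(["pkill", "-9", "-f", "runtime_hang"])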
