Terminate subprocess running in thread on program exit - python

Based on the accepted answer to this question: python-subprocess-callback-when-cmd-exits, I am running a subprocess in a separate thread, and after the completion of the subprocess a callable is executed. All good, but the problem is that even if the thread is run as a daemon, the subprocess continues to run after the program exits normally or is killed by kill -9, Ctrl + C, etc...
Below is a very simplified example (runs on 2.7):
import threading
import subprocess
import time
import sys

def on_exit(pid):
    print 'Process with pid %s ended' % pid

def popen_with_callback(cmd):

    def run_in_thread(command):
        proc = subprocess.Popen(
            command,
            shell=False
        )
        proc.wait()
        on_exit(proc.pid)
        return

    thread = threading.Thread(target=run_in_thread, args=([cmd]))
    thread.daemon = True
    thread.start()
    return thread

if __name__ == '__main__':
    popen_with_callback(
        [
            "bash",
            "-c",
            "for ((i=0;i<%s;i=i+1)); do echo $i; sleep 1; done" % sys.argv[1]
        ])
    time.sleep(5)
    print 'program ended'
If the main thread lasts longer than the subprocess everything is fine:
(venv)~/Desktop|➤➤ python testing_threads.py 3
> 0
> 1
> 2
> Process with pid 26303 ended
> program ended
If the main thread lasts less than the subprocess, the subprocess continues to run until it eventually hangs:
(venv)~/Desktop|➤➤ python testing_threads.py 8
> 0
> 1
> 2
> 3
> 4
> program ended
(venv)~/Desktop|➤➤ 5
> 6
> 7
# hanging from now on
How to terminate the subprocess if the main program is finished or killed? I tried to use atexit.register(os.kill(proc.pid, signal.SIGTERM)) just before proc.wait but it actually executes when the thread running the subprocess exits, not when the main thread exits.
I was also thinking of polling for the parent pid, but I am not sure how to implement it because of the proc.wait situation.
Ideal outcome would be:
(venv)~/Desktop|➤➤ python testing_threads.py 8
> 0
> 1
> 2
> 3
> 4
> program ended
> Process with pid 1234 ended

Use the Thread.join method, which blocks the main thread until this thread exits:
if __name__ == '__main__':
    popen_with_callback(
        [
            "bash",
            "-c",
            "for ((i=0;i<%s;i=i+1)); do echo $i; sleep 1; done" % sys.argv[1]
        ]).join()
    print 'program ended'

I just got an ugly but effective method.
Just set a global variable to hold the proc = subprocess.Popen() object,
and you can kill the proc whenever you like:
my_proc = None

def popen_with_callback(cmd):

    def run_in_thread(command):
        global my_proc
        proc = subprocess.Popen(command, shell=False)
        my_proc = proc
        proc.wait()
        on_exit(proc.pid)
        return
    ...
Then you can kill the proc wherever you want in the program.
Just do:
my_proc.kill()
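To tie that to program exit specifically, a minimal sketch (assuming the global my_proc pattern above) could register an atexit handler. Note that atexit.register expects a callable, so the atexit.register(os.kill(proc.pid, signal.SIGTERM)) from the question calls os.kill immediately and registers its return value instead:
import atexit

def _kill_child():
    # Best-effort cleanup on normal interpreter exit; atexit handlers do not
    # run if the program is killed with kill -9.
    if my_proc is not None and my_proc.poll() is None:
        my_proc.kill()

atexit.register(_kill_child)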

Related

Run subprocess in thread and stop it

I am trying to run one Python file from another, in a thread, using subprocess. I am able to run the file, but not able to stop it.
What I want is to run test.py from my main.py in a thread and to stop it by entering stop in the console.
test.py
import time

while True:
    print("Hello from test.py")
    time.sleep(5)
main.py
import subprocess
import _thread

processes = []

def add_process(file_name):
    p = subprocess.Popen(["python", file_name], shell=True)
    processes.append(p)
    print("Waiting the process...")
    p.wait()

while True:
    try:
        # user input
        ui = input("> ")
        if ui == "start":
            _thread.start_new_thread(add_process, ("test.py",))
            print("Process started.")
        elif ui == "stop":
            if len(processes) > 0:
                processes[-1].kill()
                print("Process killed.")
        else:
            pass
    except KeyboardInterrupt:
        print("Exiting Program!")
        break
Output
C:\users\me>python main2.py
> start
Process started.
> Waiting the process...
Hello from test.py, after 0 seconds
Hello from test.py, after 4 seconds
> stop
Process killed.
> Hello from test.py, after 8 seconds
Hello from test.py, after 12 seconds
> stopHello from test.py, after 16 seconds
Process killed.
> Hello from test.py, after 20 seconds
>
The program is still running even after I stop it with the kill function; I have tried terminate as well. How can I stop it? Or is there an alternative to the subprocess module for doing this?
I suspect you have just started multiple processes. What happens if you replace your stop code with this:
for proc in processes:
    proc.kill()
This should kill them all.
Note that you are on Windows, and on Windows terminate() is an alias for kill(). It is not a good idea to rely on killing processes gracelessly anyway; if you need to stop your test.py, you would be better off having some means of communicating with it and having it exit gracefully.
Incidentally, you don't want shell=True, and you're better off with threading than with _thread; a sketch of those changes follows.
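A minimal sketch of those suggestions, assuming Python 3 and the test.py from the question (threading instead of _thread, no shell=True, and killing every tracked process on stop):
import subprocess
import threading

processes = []

def add_process(file_name):
    # No shell=True: pass the argument list directly so p is the actual
    # Python process rather than an intermediate shell.
    p = subprocess.Popen(["python", file_name])
    processes.append(p)
    print("Waiting for the process...")
    p.wait()

while True:
    try:
        ui = input("> ")
        if ui == "start":
            threading.Thread(target=add_process, args=("test.py",), daemon=True).start()
            print("Process started.")
        elif ui == "stop":
            # Kill every process started so far, not just the last one.
            for proc in processes:
                if proc.poll() is None:
                    proc.kill()
            print("Processes killed.")
    except KeyboardInterrupt:
        print("Exiting Program!")
        break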

how to get rid of warning `RuntimeWarning: A loop is being detached from a child watcher with pending handlers`

If I use asyncio to spawn a subprocess that runs another Python script, there is a warning at the end: RuntimeWarning: A loop is being detached from a child watcher with pending handlers, if the subprocess is terminated by terminate().
For example a very simple dummy:
import datetime
import time
import os

if __name__ == '__main__':
    for i in range(3):
        msg = 'pid({}) {}: continue'.format(os.getpid(), datetime.datetime.now())
        print(msg)
        time.sleep(1.0)
Then I spawn the dummy:
import asyncio
from asyncio import subprocess

T = 3  # if T=5, which allows the subprocess to finish, then there is no such warning.

async def handle_proc():
    p = None
    try:
        p = await subprocess.create_subprocess_exec(
            'python3', 'dummy.py',
            # 'dummy.sh'
        )
        await asyncio.sleep(T)
    finally:
        if p and p.returncode is None:
            p.terminate()
        print('handle_proc Done!')

if __name__ == '__main__':
    asyncio.run(handle_proc())
There will be the warning if T is short enough for p.terminate() to get called.
If I spawn a bash script instead, there is no warning:
#!/usr/bin/env bash
set -e

N=10
T=1
for i in $(seq 1 $N)
do
    echo "loop=$i/$N, sleep $T"
    >&2 echo "msg in stderror: ($i/$N,$T)"
    sleep $T
done
What did I do wrong here?
To get rid of this warning, you need to await p.wait() after p.terminate().
The reason you need to do this is that p.terminate() only sends signal.SIGTERM to the process. The process might still do some cleanup work after receiving the signal. If you don't wait for the process to finish terminating, your script finishes before the process and the process is cut off prematurely.
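Applied to the snippet from the question, a minimal sketch of the fix (dummy.py is the script from the question):
import asyncio
from asyncio import subprocess

T = 3

async def handle_proc():
    p = None
    try:
        p = await subprocess.create_subprocess_exec('python3', 'dummy.py')
        await asyncio.sleep(T)
    finally:
        if p and p.returncode is None:
            p.terminate()
            # Reap the terminated child so the loop's child watcher has no
            # pending handlers left when asyncio.run() closes the loop.
            await p.wait()
        print('handle_proc Done!')

if __name__ == '__main__':
    asyncio.run(handle_proc())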

subprocess.popen detached from master (Linux)

I am trying to open a subprocess but have it be detached from the parent script that called it. Right now, if I call subprocess.Popen and the parent script crashes, the subprocess dies as well.
I know there are a couple of options for Windows, but I have not found anything for *nix.
I also don't need to call this using subprocess. All I need is to be able to call another process detached and get its pid.
With Linux, it's no issue at all. Just Popen(). For example, here is a little dying_demon.py:
#!/usr/bin/python -u
from time import sleep
from subprocess import Popen

print Popen(["python", "-u", "child.py"]).pid

i = 0
while True:
    i += 1
    print "demon: %d" % i
    sleep(1)
    if i == 3:
        i = hurz  # exception
spinning off a child.py:
#!/usr/bin/python -u
from time import sleep

i = 0
while True:
    i += 1
    print "child: %d" % i
    sleep(1)
    if i == 20:
        break
The child continues to count (to the console) while the demon dies from the exception.
I think this should do the trick: https://www.python.org/dev/peps/pep-3143/#reference-implementation
You can create a daemon which will call your subprocess, passing detach_process=True.
This might do what you want:
import os
import subprocess

def cmd_detach(*command, **kwargs) -> subprocess.CompletedProcess:
    # https://stackoverflow.com/questions/62521658/python-subprocess-detach-a-process
    # if using with ffmpeg remember to run it with `-nostdin`
    stdout = os.open(os.devnull, os.O_WRONLY)
    stderr = os.open(os.devnull, os.O_WRONLY)
    stdin = os.open(os.devnull, os.O_RDONLY)
    # conform() is a helper from the original answer (not shown here); from
    # context it normalizes the argument tuple into a list of strings.
    command = conform(command)
    if command[0] in ["fish", "bash"]:
        import shlex
        command = command[0:2] + [shlex.join(command[2:])]
    subprocess.Popen(command, stdin=stdin, stdout=stdout, stderr=stderr,
                     close_fds=True, start_new_session=True, **kwargs)
    return subprocess.CompletedProcess(command, 0, "Detached command is async")
On Windows you might need
CREATE_NEW_PROCESS_GROUP = 0x00000200
DETACHED_PROCESS = 0x00000008
creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP
instead of start_new_session=True
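As a hedged sketch of that Windows variant (child.py is borrowed from the earlier answer; the flag values are the documented Win32 constants):
import subprocess

CREATE_NEW_PROCESS_GROUP = 0x00000200
DETACHED_PROCESS = 0x00000008

# On Windows, detach via creation flags instead of start_new_session=True.
subprocess.Popen(
    ["python", "-u", "child.py"],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    close_fds=True,
    creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
)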
I managed to get it working by doing the following using python-daemon:
process = subprocess.Popen(["python", "-u", "Child.py"])
time.sleep(2)
process.kill()
Then in Child.py:
with daemon.DaemonContext():
    print("Child Started")
    time.sleep(30)
    print("Done")
    exit()
I do process.kill() because otherwise it creates a defunct python process. The main problem I have now is that the PID that Popen returns does not match the final PID of the process. I can get around this by adding a function in Child.py that updates a database with its PID.
Let me know if there is something I am missing or if this is an OK method of doing this.
Fork the subprocess and run it under nohup; a minimal sketch follows.
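A sketch of that suggestion under Python 3, reusing child.py from the first answer:
import subprocess

# nohup makes the child ignore SIGHUP; start_new_session detaches it from the
# parent's process group and controlling terminal.
proc = subprocess.Popen(
    ["nohup", "python", "-u", "child.py"],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)
print("detached child pid:", proc.pid)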

.kill() in python subprocess kills parent rather than child process

I'm attempting to open a child process of a second python script in the function below. It seems to open the process fine, but when I attempt to end the process, the parent process terminates and the child process persists. Any recommendations on why this may be happening?
def thermostat(input):
    global ThermostatRunning
    global proc
    print("Thermostat Function input: %s" % input)
    if input == 'stop' and ThermostatRunning == 1:
        print("test")
        proc.kill()
        print proc.poll()
        dev2(0)  # ensure heater is off
        return('Thermostat turned off')
    elif input == 'stop' and ThermostatRunning == 0:
        status = "Thermostat already stopped"
        print(status)
        return(status)
    if input.isdigit() == 0:
        return("Thermostat input is not a number or -stop-")
    if ThermostatRunning == 1:
        print("test2")
        proc.kill()
        print("test3")
    proc = subprocess.Popen('python thermostat.py -t %s' % input, shell=True)  #, preexec_fn=os.setsid)
    ThermostatRunning = 1
    #for line in proc.stdout.readlines():
    #    print (line)
    status = "Thermostat started with set temperature: %s" % input
    print(status)
    return(status)
The only other issue that may be pertinent is that this is a flask script. I'm not sure if that changes anything.
When you create the subprocess with shell=True, you actually spawn a shell that in turn spawns the thermostat.py process, so when you call proc.kill() you only kill the shell, not its child. You want to make your subprocess a process group leader so you can kill them all at once.
Uncomment the setsid in the Popen call and kill the whole process group like so:
os.killpg(proc.pid, signal.SIGTERM)
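Put together, a minimal sketch of that fix (the set temperature here is a stand-in value; in the question it comes from the function argument):
import os
import signal
import subprocess

temperature = "70"  # stand-in value; in the question this comes from the function argument

# preexec_fn=os.setsid makes the spawned shell a session/process group leader,
# so the shell and the thermostat.py it starts can be signalled together.
proc = subprocess.Popen('python thermostat.py -t %s' % temperature,
                        shell=True, preexec_fn=os.setsid)

# Later, to stop the whole group:
os.killpg(proc.pid, signal.SIGTERM)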

python subprocess poll() is not returning None even if Popen is still running

I have a Python script that executes Linux commands with a timeout, using a while loop and sleep, like below:
fout = tempfile.TemporaryFile()
try:
    p = subprocess.Popen(["/bin/bash", "-c", options.command], bufsize=-1,
                         shell=False, preexec_fn=os.setsid,
                         stdin=subprocess.PIPE, stdout=fout, stderr=subprocess.PIPE)
except:
    sys.exit(UNEXPECTED_ERROR)
if options.timeout:
    print "options.timeout = %s" % options.timeout
    elapsed = 0
    time.sleep(0.1)  # This sleep is for the delay between Popen and poll() functions
    while p.poll() is None:
        time.sleep(1)
        elapsed = elapsed + 1
        print "elapsed = %s" % elapsed
        if elapsed >= options.timeout:
            # TIMEDOUT
            # kill all processes that are in the same child process group
            # which kills the process tree
            pgid = os.getpgid(p.pid)
            os.killpg(pgid, signal.SIGKILL)
            p.wait()
            fout.close()
            sys.exit(TIMEOUT_ERROR)
            break
else:
    p.wait()
fout.seek(0)  # rewind to the beginning of the file
print fout.read(),
fout.close()
sys.exit(p.returncode)
$ time myScript -c "cat file2" 2>&1 -t 5
options.timeout = 5
elapsed = 1
real 0m11.811s
user 0m0.046s
sys 0m1.153s
My question is: in the above case, even though the timeout is 5 seconds, cat continues until it finishes. Am I missing something here? Please help.
It works as expected on Ubuntu:
$ /usr/bin/ssh root@localhost -t 'sync && echo 3 > /proc/sys/vm/drop_caches'
$ /usr/bin/time python2.4 myscript.py 'cat big_file'
timeout
done
0.01user 0.63system 0:05.16elapsed 12%CPU
$ /usr/bin/ssh root@localhost -t 'sync && echo 3 > /proc/sys/vm/drop_caches'
$ /usr/bin/time cat big_file >/dev/null
0.02user 0.82system 0:09.93elapsed 8%CPU
It also works with a shell command:
$ /usr/bin/time python2.4 myscript.py 'while : ; do sleep 1; done'
timeout
done
0.02user 0.00system 0:05.03elapsed 0%CPU
Assumptions:
you can't use time.time() due to the possibility of a system clock change
time.clock() doesn't measure children's times on Linux
we can't emulate time.monotonic() from Python 3.3 in pure Python, because ctypes is not available on Python 2.4
it is acceptable to survive hibernation, e.g., 2 seconds before hibernation + 3 seconds after the computer wakes up, whenever it happens, if the timeout is 5 seconds.
#!/usr/bin/env python2.4
import os
import signal
import sys
import tempfile
import time
from subprocess import Popen

class TimeoutExpired(Exception):
    pass

def wait(process, timeout, _sleep_time=.1):
    for _ in xrange(int(timeout * 1. / _sleep_time + .5)):
        time.sleep(_sleep_time)  # NOTE: assume it doesn't wake up earlier
        if process.poll() is not None:
            return process.wait()
    raise TimeoutExpired  # NOTE: timeout precision is not very good

f = tempfile.TemporaryFile()
p = Popen(["/bin/bash", "-c", sys.argv[1]], stdout=f, preexec_fn=os.setsid,
          close_fds=True)
try:
    wait(p, timeout=5)
except TimeoutExpired:
    print >>sys.stderr, "timeout"
    os.killpg(os.getpgid(p.pid), signal.SIGKILL)
    p.wait()
else:
    f.seek(0)
    for line in f:
        print line,
f.close()  # delete it
print >>sys.stderr, "done"
Apart from the main issue, there are problems I see in your code: you call Popen() with stdin=subprocess.PIPE and stderr=subprocess.PIPE, but you never handle these pipes. With a command like cat file2 this should be fine, but it can lead to problems.
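If you don't actually need those pipes, a small sketch of how to avoid them (stderr folded into the same temporary file as stdout, stdin fed from /dev/null; "cat file2" stands in for options.command from the question):
import os
import subprocess
import tempfile

fout = tempfile.TemporaryFile()
# Merge stderr into the same temporary file as stdout and feed stdin from
# /dev/null, so no pipe is left unread and the child cannot block on one.
p = subprocess.Popen(["/bin/bash", "-c", "cat file2"],
                     stdin=open(os.devnull, "rb"),
                     stdout=fout,
                     stderr=subprocess.STDOUT,
                     preexec_fn=os.setsid)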
I can spot a potential misbehaviour: you might have mixed up indentation (as in the 1st version of your question). Assume you have the following:
while p.poll() is None:
    time.sleep(1)
    elapsed = elapsed + 1
    print "elapsed = %s" % elapsed
    if elapsed >= options.timeout:
        # TIMEDOUT
        # kill all processes that are in the same child process group
        # which kills the process tree
        pgid = os.getpgid(p.pid)
        os.killpg(pgid, signal.SIGKILL)
    p.wait()
    fout.close()
    sys.exit(TIMEOUT_ERROR)
    break
You don't reach the timeout threshold, and nevertheless p.wait() is called because of the bad indentation: it sits inside the while loop but outside the if block, so it runs on the very first iteration and blocks until cat finishes. Don't mix up tabs and spaces; PEP 8 suggests using spaces only and an indentation depth of 4 columns.
