How do I send a Ctrl-C to a process, or kill a process together with its child processes?
Example of my code (Python 2.7):
# --*-- coding: utf-8 --*--
import subprocess
import os
import signal
proc = subprocess.Popen(['ping localhost'],shell=True,stdout=subprocess.PIPE)
print proc.pid
a = raw_input()
os.killpg(proc.pid, signal.SIGTERM)
I see the following processes when I run the program:
user 16078 0.0 0.0 4476 916 pts/6 S+ 14:41 0:00 /bin/sh -c ping localhost
user 16079 0.0 0.0 8628 1908 pts/6 S+ 14:41 0:00 ping localhost
Program output:
16078
After raw_input:
Traceback (most recent call last):
File "subproc2.py", line 10, in <module>
os.killpg(proc.pid, signal.SIGTERM)
OSError: [Errno 3] No such process
I want to kill both pid 16078 and pid 16079.
How would I do this, and what is the bug in my program? Appreciate the help.
os.killpg expects a process group id, not a process id, and the shell that Popen starts here is not the leader of its own process group - that is why os.killpg(proc.pid, ...) fails with "No such process". If you want to kill all processes in your own process group (note that this includes the script itself), pass your own pid as the group id:
os.killpg(os.getpid(), signal.SIGTERM)
If you want to kill only the immediate child process, use os.kill:
os.kill(proc.pid, signal.SIGTERM)
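A common way to kill the shell and the ping it spawns together is to start the child in its own session, so it leads a fresh process group you can signal as a whole. A minimal sketch (Python 3 shown; on Python 2.7 the equivalent of start_new_session=True is preexec_fn=os.setsid, and "sleep 30" stands in for "ping localhost"):

```python
import os
import signal
import subprocess
import time

# Run the shell in a new session: the shell and everything it spawns
# then share a process group whose id equals the shell's pid.
proc = subprocess.Popen('sleep 30', shell=True, stdout=subprocess.PIPE,
                        start_new_session=True)
time.sleep(0.2)  # give the child a moment to start
# Signal the whole group: this reaches both sh and its child.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()  # reap the shell; returncode is -SIGTERM
```

This avoids the "No such process" error because the group id passed to os.killpg now actually exists.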
Related
I have a script "run.py" that must print "Hello", launch another script "run2.py", and then terminate (do not wait for run2.py to return).
run2.py is not in the local directory and is only required to print "Hello again".
How can I do this?
# run_path = "C:/Program Files (x86)/xxx/run.py"
# run2_path = "//network_share/folder/run2.py"
**run.py**
import os
print("Hello")
# What do I do here?
# os.execl("//network_share/folder/run2.py")
exit()
**run2.py**
print("Hello again")
This seems to work for a script I have in the same folder I'm running this one in.
This should verify that the first script finishes and doesn't linger while the second script runs in its own process. On some systems, depending on their configuration, the child process will terminate when the parent does - but not in this case.
I put more time into this post to add code that shows how the child can check whether the parent process is still running, which is a good way for the child to decide when to exit. It also shows how to pass parameters to the child process.
# launch.py
import subprocess as sp
import os

if __name__ == '__main__':
    sp.Popen(['ps'])  # Print out running processes.
    print("launch.py's process id is %s." % os.getpid())
    # Give the child process this one's process ID in the parameters.
    sp.Popen(['python3', 'runinproc.py', str(os.getpid())])
    # ^^^ This line answers the main question of how to kick off a
    # child Python script.
    print("exiting launch.py")
Other script.
# runinproc.py
import time
import subprocess as sp
import sys
import os

def is_launcher_running():
    try:
        # This only checks the status of the process. It doesn't
        # kill it, or otherwise affect it.
        os.kill(int(sys.argv[1]), 0)
    except OSError:
        return False
    else:
        return True

if __name__ == '__main__':
    print("runinproc.py was launched by process ID %s" % sys.argv[1])
    for i in range(100):
        if is_launcher_running():
            # Is launch.py still running?
            print("[[ launch.py is still running... ]]")
            sp.Popen(['ps'])  # Print out the running processes.
        print("going to sleep for 2 seconds...")
        time.sleep(2)
Bash output:
Todds-iMac:pyexperiments todd$ python3 launch.py
launch.py process id is 40975.
exiting launch.py
Todds-iMac:pyexperiments todd$ runinproc.py was launched by process ID 40975
going to sleep for 2 seconds...
PID TTY TIME CMD
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
Note that the first call to the shell, the ps from launch.py, is executed after launch.py has already exited - that's why launch.py doesn't show up in the printed process list.
subprocess is your friend, but if you need to not wait for the command to finish, check out the P_NOWAIT-replacing example code in https://docs.python.org/3/library/subprocess.html
E.g.:
pid = Popen(["/bin/mycmd", "myarg"]).pid
I don't think .communicate() is what you need this time around - isn't it for waiting?
The cleanest way to do this (since both scripts are written in pure Python) is to import the other script as a module and execute its content, placed within a function:
run.py
import os
import sys
sys.path.append("//network_share/folder/")
import run2
print("Hello")
run2.main()
exit()
run2.py
def main():
    print("Hello again")
I have 3 files:
sleeper.py
import subprocess
print('start sleeper')
subprocess.run(['sleep', '10'])
print('end sleeper')
waker.py
import subprocess
print('The waker begins')
try:
    subprocess.run(['python3', 'sleeper.py'], timeout=5)
except subprocess.TimeoutExpired:
    pass
print('The waker ends')
awake.py
import subprocess
print('Awake begin')
try:
    subprocess.run(['python3', 'waker.py'], timeout=2.5)
except subprocess.TimeoutExpired:
    pass
print('Awake end')
and I run python3 awake.py.
but get the following output:
Awake begin
The waker begins
start sleeper
Awake end
end sleeper
Actually, to be more accurate: I get the first 3 lines printed immediately, then 2.5 seconds later the 4th line prints and I get my bash prompt, then 7.5 seconds later "end sleeper" appears on my bash prompt.
How do I make it so that killing a subprocess via timeout also kills the subprocesses started by that subprocess?
run should terminate the child process when the timeout expires - but does it terminate that child's children as well? It doesn't seem so in your case. A workaround is to use Popen, poll for the timeout yourself, and kill the process and its children.
It seems you cannot have it both ways: if you use run, then by the time you get a TimeoutExpired the process is already killed, so you have lost track of its children.
proc = subprocess.Popen(args, stderr=errFile, stdout=outFile, universal_newlines=False)
wait_remaining_sec = 2.5
while proc.poll() is None and wait_remaining_sec > 0:
    time.sleep(0.5)
    wait_remaining_sec -= 0.5
if proc.poll() is None:
    # process is still there: terminate it and its subprocesses
    import psutil
    parent = psutil.Process(proc.pid)
    for child in parent.children(recursive=True):
        child.kill()
    parent.kill()
The poll loop is better than a bare time.sleep(2.5) call because if the process ends before the timeout, you don't want to keep waiting; this way you wait at most an extra 0.5 s beyond the process's actual lifetime.
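If you would rather avoid the psutil dependency, a POSIX-only alternative is to start the child in its own session and signal the whole process group on timeout. A sketch, using sh -c 'sleep 30' as a stand-in for the waker/sleeper chain (the shell's sleep plays the role of the grandchild that subprocess.run's timeout leaves behind):

```python
import os
import signal
import subprocess

# start_new_session=True puts the shell in a new process group, so the
# shell and its "sleep 30" child can be killed with a single killpg.
proc = subprocess.Popen(['sh', '-c', 'sleep 30'], start_new_session=True)
try:
    proc.wait(timeout=0.5)
except subprocess.TimeoutExpired:
    # Kill the shell and everything it spawned in one shot.
    os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
    proc.wait()  # reap; returncode is -SIGKILL
```

Unlike the run-based version, the Popen handle stays valid after the timeout, so you still hold the group id you need.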
References:
Using module 'subprocess' with timeout
how to kill process and child processes from python?
I have a python script: zombie.py
from multiprocessing import Process
from time import sleep
import atexit

def foo():
    while True:
        sleep(10)

@atexit.register
def stop_foo():
    p.terminate()
    p.join()

if __name__ == '__main__':
    p = Process(target=foo)
    p.start()
    while True:
        sleep(10)
When I run this with python zombie.py & and kill the parent process with kill -2, stop_foo() is correctly called and both processes terminate.
Now, suppose I have a bash script zombie.sh:
#!/bin/sh
python zombie.py &
echo "done"
And I run ./zombie.sh from the command line.
Now, stop_foo() never gets called when the parent gets killed. If I run kill -2 on the parent process, nothing happens. kill -15 or kill -9 both just kill the parent process, but not the child:
[foo@bar ~]$ ./zombie.sh
done
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27220 1 0 17:57 pts/3 00:00:00 python zombie.py
foo 27221 27220 0 17:57 pts/3 00:00:00 python zombie.py
[foo@bar ~]$ kill -2 27220
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27220 1 0 17:57 pts/3 00:00:00 python zombie.py
foo 27221 27220 0 17:57 pts/3 00:00:00 python zombie.py
[foo@bar ~]$ kill 27220
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27221 1 0 17:57 pts/3 00:00:00 python zombie.py
What is going on here? How can I make sure the child process dies with the parent?
Neither atexit nor p.daemon = True will truly ensure that the child process dies with its parent: receiving a SIGTERM does not trigger the atexit routines.
To make sure the child gets killed when its parent dies, you have to install a signal handler in the parent. That way you can react to most signals (SIGQUIT, SIGINT, SIGHUP, SIGTERM, ...) but not to SIGKILL; there is simply no way to react to that signal from within the process that receives it.
Install a signal handler for all useful signals, and in that handler kill the child process.
Update: This solution doesn't work for processes killed by a signal.
Your child process is not a zombie. It is alive.
If you want the child process to be killed when its parent exits normally then set p.daemon = True before p.start(). From the docs:
When a process exits, it attempts to terminate all of its daemonic child processes.
Looking at the source code, it is clear that multiprocessing uses atexit callback to kill its daemonic children i.e., it won't work if the parent is killed by a signal. For example:
#!/usr/bin/env python
import logging
import os
import signal
import sys
from multiprocessing import Process, log_to_stderr
from threading import Timer
from time import sleep

def foo():
    while True:
        sleep(1)

if __name__ == '__main__':
    log_to_stderr().setLevel(logging.DEBUG)
    p = Process(target=foo)
    p.daemon = True
    p.start()
    # either kill itself or exit normally in 5 seconds
    if '--kill' in sys.argv:
        Timer(5, os.kill, [os.getpid(), signal.SIGTERM]).start()
    else:  # exit normally
        sleep(5)
Output
$ python kill-orphan.py
[INFO/Process-1] child process calling self.run()
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[INFO/MainProcess] calling terminate() for daemon Process-1
[INFO/MainProcess] calling join() for process Process-1
[DEBUG/MainProcess] running the remaining "atexit" finalizers
Notice "calling terminate() for daemon" line.
Output (with --kill)
$ python kill-orphan.py --kill
[INFO/Process-1] child process calling self.run()
The log shows that if the parent is killed by a signal then "atexit" callback is not called (and ps shows that the child is alive in this case). See also Multiprocess Daemon Not Terminating on Parent Exit.
I am using this code
p1 = Popen(['rtmpdump'] + cmd_args.split(' '), stdout=PIPE)
p2 = Popen(player_cmd.split(' '), stdin=p1.stdout, stderr=PIPE)
p2.wait()
# try to kill rtmpdump
# FIXME: why is this not working ?
try:
    p2.stdin.close()
    p1.stdout.close()
    p1.kill()
except AttributeError:
    # if we use python 2.5
    from signal import SIGTERM, SIGKILL
    from os import kill
    kill(p1.pid, SIGKILL)
When p1 terminates, p2 is terminated too. The problem is the other direction:
if I manually close p2 (it's mplayer), rtmpdump (p1) is still running.
I tried various things like the code above, but I still can't kill it.
I also tried adding close_fds=True.
So maybe rtmpdump still tries to write to stdout - but why would that cause kill() to fail?
full source code: http://github.com/solsticedhiver/arte-7.py
Here is the fix: call wait() after kill() to actually reap the zombie process.
# kill the zombie rtmpdump
try:
    p1.kill()
    p1.wait()
except AttributeError:
    # if we use python 2.5
    from signal import SIGKILL
    from os import kill, waitpid
    kill(p1.pid, SIGKILL)
    waitpid(p1.pid, 0)
I want a script to start a new process, such that the new process continues running after the initial script exits. I expected that I could use multiprocessing.Process to start a new process, and set daemon=True so that the main script may exit while the created process continues running.
But it seems that the second process is silently terminated when the main script exits. Is this expected behavior, or am I doing something wrong?
From the Python docs:
When a process exits, it attempts to
terminate all of its daemonic child
processes.
This is the expected behavior.
If you are on a unix system, you could use os.fork:
import os
import time

pid = os.fork()
if pid:
    # parent
    while True:
        print("I'm the parent")
        time.sleep(0.5)
else:
    # child
    while True:
        print("I'm just a child")
        time.sleep(0.5)
Running this creates two processes. You can kill the parent without killing the child.
For example, when you run script you'll see something like:
% script.py
I'm the parent
I'm just a child
I'm the parent
I'm just a child
...
Stop the script with ctrl-Z:
^Z
[1]+ Stopped script.py
Find the process ID number for the parent. It will be the smaller of the two process ID numbers since the parent came first:
% ps axuw | grep script.py
unutbu 6826 0.1 0.1 33792 6388 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6827 0.0 0.1 33792 4352 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6832 0.0 0.0 17472 952 pts/24 S+ 15:09 0:00 grep --color=auto script.py
Kill the parent process:
% kill 6826
Restore script.py to the foreground:
% fg
script.py
Terminated
You'll see the child process is still running:
% I'm just a child
I'm just a child
I'm just a child
...
Kill the child (in a new terminal) with
% kill 6827
Simply use the subprocess module:
import subprocess
subprocess.Popen(["sleep", "60"])
Here is a related question on SO, where one of the answers gives a nice solution to this problem:
"spawning process from python"
If you are on a unix system (using docs):
#!/usr/bin/env python3
import os
import sys
import time
import subprocess
import multiprocessing
from multiprocessing import Process

def to_use_in_separate_process(*args):
    print(args)
    # check args before using them:
    if len(args) > 1:
        subprocess.call((args[0], args[1]))
    print('subprocess called')

def main(apathtofile):
    print('checking os')
    if os.name == 'posix':
        print('os is posix')
        multiprocessing.get_context('fork')
        p = Process(target=to_use_in_separate_process,
                    args=('xdg-open', apathtofile))
        p.run()
    print('exiting def main')

if __name__ == '__main__':
    # parameter [1] must be some file that can be opened by xdg-open that this
    # program uses.
    if len(sys.argv) > 1:
        main(sys.argv[1])
        print('we can exit now.')
    else:
        print('no parameters...')
    print('mother program will end now!')
    sys.exit(0)
In Ubuntu the following commands keep working even after the Python app exits.
url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)
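A rough subprocess equivalent of that os.system call: launch the command chain in its own session so it keeps running after the Python process exits. Here "sleep 60" stands in for the mpv-then-zenity chain, which may not be installed everywhere:

```python
import subprocess

# start_new_session=True detaches the shell from this process, so it is
# not killed along with the interpreter; the trailing "&" in the
# os.system version served the same purpose.
proc = subprocess.Popen("sleep 60", shell=True, start_new_session=True)
print(proc.poll() is None)  # True: the command is running, detached
```

Unlike os.system, this also hands back a Popen object in case you later change your mind and want to terminate the detached command.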