Python: how to kill child process(es) when parent dies?

The child process is started with
subprocess.Popen(arg)
Is there a way to ensure it is killed when the parent terminates abnormally? I need this to work on both Windows and Linux. I am aware of this solution for Linux.
Edit:
the requirement of starting the child process with subprocess.Popen(arg) can be relaxed if a solution exists that uses a different method of starting the process.

Heh, I was just researching this myself yesterday! Assuming you can't alter the child program:
On Linux, prctl(PR_SET_PDEATHSIG, ...) is probably the only reliable choice. (If it's absolutely necessary that the child process be killed, then you might want to set the death signal to SIGKILL instead of SIGTERM; the code you linked to uses SIGTERM, but the child does have the option of ignoring SIGTERM if it wants to.)
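For reference, here is a minimal sketch of that prctl approach (my own untested illustration, assuming Linux with glibc; PR_SET_PDEATHSIG is option 1 in <sys/prctl.h>, and preexec_fn runs in the child between fork() and exec()):
import ctypes
import signal
import subprocess

PR_SET_PDEATHSIG = 1  # from <sys/prctl.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def set_death_signal():
    # Runs in the child just before exec(): ask the kernel to send
    # SIGKILL to this process when its parent dies.
    libc.prctl(PR_SET_PDEATHSIG, signal.SIGKILL)

child = subprocess.Popen(["sleep", "1000"], preexec_fn=set_death_signal)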
On Windows, the most reliable option is to use a Job object. The idea is that you create a "Job" (a kind of container for processes), then you place the child process into the Job, and you set the magic option that says "when no-one holds a 'handle' for this Job, then kill the processes that are in it". By default, the only 'handle' to the Job is the one that your parent process holds, and when the parent process dies, the OS will go through and close all its handles, and then notice that this means there are no open handles for the Job. So then it kills the child, as requested. (If you have multiple child processes, you can assign them all to the same Job.) This answer has sample code for doing this, using the win32api module. That code uses CreateProcess to launch the child instead of subprocess.Popen, because it needs a "process handle" for the spawned child, and CreateProcess returns this by default. If you'd rather use subprocess.Popen, then here's an (untested) copy of the code from that answer that uses subprocess.Popen and OpenProcess instead of CreateProcess:
import subprocess

import win32api
import win32con
import win32job

# Create an anonymous job object and configure it so that every process
# in it is killed when the last handle to the job is closed.
hJob = win32job.CreateJobObject(None, "")
extended_info = win32job.QueryInformationJobObject(
    hJob, win32job.JobObjectExtendedLimitInformation)
extended_info['BasicLimitInformation']['LimitFlags'] = \
    win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(
    hJob, win32job.JobObjectExtendedLimitInformation, extended_info)

child = subprocess.Popen(...)
# Convert the process id to a process handle:
perms = win32con.PROCESS_TERMINATE | win32con.PROCESS_SET_QUOTA
hProcess = win32api.OpenProcess(perms, False, child.pid)
win32job.AssignProcessToJobObject(hJob, hProcess)
Technically, there's a tiny race condition here: the child could die in between the Popen and OpenProcess calls. You can decide whether you want to worry about that.
One downside to using a job object is that when running on Vista or Win7, if your program is launched from the Windows shell (i.e., by clicking on an icon), then there will probably already be a job object assigned, and trying to create a new job object will fail. Win8 fixes this (by allowing job objects to be nested), and if your program is run from the command line then it should be fine on older versions too.
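As an aside, if the parent is already inside a job, one possible workaround (an untested sketch; CREATE_BREAKAWAY_FROM_JOB only works when the enclosing job allows breakaway, and notepad.exe is just a stand-in child) is to spawn the child with the breakaway flag before assigning it to your own job:
import subprocess

CREATE_BREAKAWAY_FROM_JOB = 0x01000000  # from winbase.h

child = subprocess.Popen(["notepad.exe"],
                         creationflags=CREATE_BREAKAWAY_FROM_JOB)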
If you can modify the child (e.g., when using multiprocessing), then probably the best option is to somehow pass the parent's PID to the child (e.g. as a command line argument, or in the args= argument to multiprocessing.Process), and then:
On POSIX: Spawn a thread in the child that just calls os.getppid() occasionally, and if the return value ever stops matching the pid passed in from the parent, then call os._exit(). (This approach is portable to all Unixes, including OS X, while the prctl trick is Linux-specific.)
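A rough sketch of that polling thread (my own untested illustration; parent_pid stands for the PID that the parent passed in):
import os
import threading
import time

def watch_parent(parent_pid, poll_interval=1.0):
    # When the parent dies, the child is re-parented (to init or a
    # subreaper), so os.getppid() stops matching parent_pid.
    def poll():
        while True:
            if os.getppid() != parent_pid:
                os._exit(1)
            time.sleep(poll_interval)
    threading.Thread(target=poll, daemon=True).start()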
On Windows: Spawn a thread in the child that uses OpenProcess and os.waitpid. Example using ctypes:
import os

from ctypes import WinDLL, WinError
from ctypes.wintypes import DWORD, BOOL, HANDLE

# Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx
SYNCHRONIZE = 0x00100000

kernel32 = WinDLL("kernel32.dll")
kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD)
kernel32.OpenProcess.restype = HANDLE

# parent_pid was passed in from the parent (e.g. on the command line).
parent_handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid)
# Block until the parent exits. On Windows, os.waitpid takes a process
# handle rather than a pid.
os.waitpid(parent_handle, 0)
os._exit(0)
This avoids any of the possible issues with job objects that I mentioned.
If you want to be really, really sure, then you can combine all these solutions.
Hope that helps!

The Popen object offers the terminate and kill methods.
https://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate
These send the SIGTERM and SIGKILL signals for you.
You can do something akin to the below:
from subprocess import Popen

p = None
try:
    p = Popen(arg)
    # some code here
except Exception as ex:
    print 'Parent program has exited with the below error:\n{0}'.format(ex)
    if p:
        p.terminate()
UPDATE:
You are correct--the above code will not protect against a hard crash or someone killing your process. In that case you can try wrapping the child process in a class and employing a polling model to watch the parent process.
Be aware that psutil is non-standard (a third-party package).
import os
import psutil
from multiprocessing import Process
from time import sleep

class MyProcessAbstraction(object):
    def __init__(self, parent_pid, command):
        """
        :type parent_pid: int
        :type command: str
        """
        self._child = None
        self._cmd = command
        self._parent = psutil.Process(pid=parent_pid)

    def run_child(self):
        """
        Start a child process by running self._cmd.
        Wait until the parent process (self._parent) has died, then kill the
        child.
        """
        print '---- Running command: "%s" ----' % self._cmd
        self._child = psutil.Popen(self._cmd)
        try:
            # psutil >= 2.0 exposes status() as a method; in older
            # versions it was a plain attribute.
            while self._parent.status() == psutil.STATUS_RUNNING:
                sleep(1)
        except psutil.NoSuchProcess:
            pass
        finally:
            print '---- Terminating child PID %s ----' % self._child.pid
            self._child.terminate()

if __name__ == "__main__":
    parent = os.getpid()
    child = MyProcessAbstraction(parent, 'ping -t localhost')
    child_proc = Process(target=child.run_child)
    child_proc.daemon = True
    child_proc.start()
    print '---- Try killing PID: %s ----' % parent
    while True:
        sleep(1)
In this example I run 'ping -t localhost' because that will run forever. If you kill the parent process, the child process (the ping command) will also be killed.

Since, from what I can tell, the PR_SET_PDEATHSIG solution can result in a deadlock when any threads are running in the parent process, I didn't want to use it and figured out another way. I created a separate auto-terminate process that detects when its parent process is done and kills the other subprocess that is its target.
To accomplish this, you need to pip install psutil, and then write code similar to the following:
import subprocess
import sys
from subprocess import Popen

def start_auto_cleanup_subprocess(target_pid):
    cleanup_script = f"""
import os
import psutil
import signal
from time import sleep

try:
    # Block until stdin is closed, which means the parent process
    # has terminated.
    input()
except Exception:
    # Should be an EOFError, but if any other exception happens,
    # assume we should respond in the same way.
    pass

if not psutil.pid_exists({target_pid}):
    # Target process has already exited, so nothing to do.
    exit()

os.kill({target_pid}, signal.SIGTERM)
for count in range(10):
    if not psutil.pid_exists({target_pid}):
        # Target process no longer running.
        exit()
    sleep(1)

os.kill({target_pid}, signal.SIGKILL)
# Don't bother waiting to see if this works since if it doesn't,
# there is nothing else we can do.
"""
    return Popen(
        [
            sys.executable,  # Python executable
            '-c', cleanup_script,
        ],
        stdin=subprocess.PIPE,
    )
This is similar to https://stackoverflow.com/a/23436111/396373, which I had failed to notice, but I think the approach I came up with is easier to use because the process that is the target of cleanup is created directly by the parent. Also note that it is not necessary to poll the status of the parent. It is still necessary to use psutil and to poll the status of the target subprocess during the termination sequence if, as in this example, you want to terminate, monitor, and then kill if terminate didn't work expeditiously.
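A hedged usage sketch (worker.py is a hypothetical script; the watcher's stdin pipe closes when this parent dies, which triggers the cleanup):
import subprocess
import sys

worker = subprocess.Popen([sys.executable, 'worker.py'])  # hypothetical worker
watcher = start_auto_cleanup_subprocess(worker.pid)
# ... the parent does its work; if it dies for any reason, the watcher
# sees EOF on stdin and terminates (then, if needed, kills) the worker.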

Hook the exit of your process using SetConsoleCtrlHandler, and kill the subprocess from the handler. I think I do a bit of overkill there, but it works :)
import os
import shlex
import subprocess

import psutil
import win32api

def kill_proc_tree(pid, including_parent=True):
    parent = psutil.Process(pid)
    children = parent.children(recursive=True)
    for child in children:
        child.kill()
    gone, still_alive = psutil.wait_procs(children, timeout=5)
    if including_parent:
        parent.kill()
        parent.wait(5)

def func(ctrl_type):
    print("killed")
    if anotherproc:
        kill_proc_tree(anotherproc.pid)
    kill_proc_tree(os.getpid())

win32api.SetConsoleCtrlHandler(func, True)

PROCESSTORUN = "your process"
anotherproc = None
cmdline = f'/c start /wait "{PROCESSTORUN}" '
# Note: posix must be the boolean False; the original posix="false" was a
# truthy string, which silently enabled POSIX-style splitting.
anotherproc = subprocess.Popen(executable='C:\\Windows\\system32\\cmd.EXE',
                               args=shlex.split(cmdline, posix=False))

# ...
# run program
# ...
Took kill_proc_tree from:
subprocess: deleting child processes in Windows

Related

Python Kill all subprocesses if one of them is finished

I have Python code that runs other scripts in multiple instances using subprocess.Popen and waits for them to finish with Popen.wait().
Everything works fine; however, I want to kill all subprocesses if one of them is terminated. Here is the code that I use to run multiple instances with the Python subprocess package:
import ctypes
import os
import signal
import subprocess

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)
    return callable

if __name__ == "__main__":
    procs = []
    for i in range((os.cpu_count() * 2) - 1):
        proc = subprocess.Popen(['python', "pythonscript_i_need_to_run/"],
                                preexec_fn=set_pdeathsig(signal.SIGTERM))
        procs.append(proc)
    procs.append(subprocess.Popen(["python", "other_pythonscript_i_need_to_run"],
                                  preexec_fn=set_pdeathsig(signal.SIGTERM)))
    for proc in procs:
        proc.wait()
The set_pdeathsig function is for killing the children if the parent is killed. Long story short, I need to kill all the children if one of them is killed. How can I do it?
*** NOTE ***
When I try to kill the parent when one child is dead with
os.kill(os.getppid(), signal.SIGTERM), it doesn't kill the original parent script. I also tried to kill by group PID, but that didn't work either.
Unix and Unix-like operating systems have a SIGCHLD signal, which is sent by the OS kernel to a parent process when one of its child processes terminates. If you have no handler for this signal, SIGCHLD is ignored by default. But if you register a handler function for it, you are telling the kernel: "hey, I have a handler function; when a child process terminates, please run it".
In your case, you have many child processes; if one of them is killed, or finishes its execution (via the exit() syscall), the kernel will send a SIGCHLD signal to the parent process, which is your shared code.
We have a handler for the SIGCHLD signal, the chld_handler() function. When one of the child processes terminates, SIGCHLD is sent to the parent process and chld_handler is triggered by the OS kernel. (This is called signal catching.)
The call signal.signal(signal.SIGCHLD, chld_handler) tells the kernel "I have a handler function for SIGCHLD; don't ignore it when a child terminates". Inside chld_handler, which runs when SIGCHLD is delivered, we call signal.signal(signal.SIGCHLD, signal.SIG_IGN) to tell the kernel "I no longer have a handler function; ignore SIGCHLD". We do that because we don't need it anymore: we are about to kill the other children by looping over procs and calling p.terminate(), and we don't want those terminations to re-trigger the handler.
The complete code would look like this:
import ctypes
import os
import signal
import subprocess

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)
    return callable

def chld_handler(sig, frame):
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)
    print("one of the children died")
    for p in procs:
        p.terminate()

signal.signal(signal.SIGCHLD, chld_handler)

if __name__ == "__main__":
    procs = []
    for i in range((os.cpu_count() * 2) - 1):
        proc = subprocess.Popen(['python', "pythonscript_i_need_to_run/"],
                                preexec_fn=set_pdeathsig(signal.SIGTERM))
        procs.append(proc)
    procs.append(subprocess.Popen(["python", "other_pythonscript_i_need_to_run"],
                                  preexec_fn=set_pdeathsig(signal.SIGTERM)))
    for proc in procs:
        proc.wait()
There is much more detail to the SIGCHLD signal, the Python signal library, and zombie processes; I haven't covered everything here because there is so much of it, and I'm not an expert on all the deep details.
I hope the information above gives you some insight. If you think I am wrong somewhere, please correct me.
Signal delivery (in Python, that means using user-defined signal.signal() handlers) is sometimes race-prone. It's easy to code a solution that works most of the time but may still miss a signal that arrives just before or just after you are prepared to deal with it.
(For reliable delivery as an I/O event, the venerable self-pipe trick may be implemented in python.)
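Here is a minimal self-pipe sketch in Python (my own illustration, not from the linked material; signal.set_wakeup_fd requires the descriptor to be non-blocking, and a Python-level handler must be installed for the signal to be delivered at all):
import os
import signal

rfd, wfd = os.pipe()
os.set_blocking(wfd, False)
signal.set_wakeup_fd(wfd)

# Even a no-op handler is needed so the signal is actually delivered
# and a wakeup byte gets written to the pipe.
signal.signal(signal.SIGCHLD, lambda sig, frame: None)

# Now select.select([rfd], [], []) wakes when SIGCHLD arrives;
# read from rfd to drain the wakeup bytes.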
Signal acceptance is another approach, in which you SIG_BLOCK a signal to hold it pending when generated, and then accept it with the signal module's sigwait(), sigwaitinfo(), or sigtimedwait() when you're ready to do so. There's no chance of missing the signal here, but you must remember that basic UNIX signals do not queue up: only one signal of each type will be held pending for acceptance regardless of how many times that signal was generated.
For your problem, that would look something like this, assuming your implementation supports signal.pthread_sigmask():
def main():
    signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGCHLD])
    # ... launch children ...
    signal.sigwait([signal.SIGCHLD])
    # OK, at least one child terminated
    # ... terminate other children ...

Have subprocess.Popen only wait on its child process to return, but not any grandchildren

I have a python script that does this:
p = subprocess.Popen(pythonscript.py, stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
theStdin=request.input.encode('utf-8')
(outputhere,errorshere) = p.communicate(input=theStdin)
It works as expected; it waits for the subprocess to finish via p.communicate(). However, within pythonscript.py I want to "fire and forget" a "grandchild" process. I'm currently doing this by overriding the join function:
class EverLastingProcess(Process):
    def join(self, *args, **kwargs):
        pass  # Overrides join so that it doesn't block. Otherwise the parent waits.

    def __del__(self):
        pass
And starting it like this:
p = EverLastingProcess(target=nameOfMyFunction, args=(arg1, etc,), daemon=False)
p.start()
This also works fine when I just run pythonscript.py in a bash terminal or a bash script: control returns and a response comes back while the child process started by EverLastingProcess keeps going. However, when I run pythonscript.py with Popen as shown above, it looks from the timings as though Popen is waiting on the grandchild to finish.
How can I make it so that the Popen only waits on the child process, and not any grandchild processes?
The other solution posted here (using the overridden join method with the shell=True addition, shown in the last answer below) stopped working when we upgraded our Python recently.
There are many references on the internet about the pieces and parts of this, but it took me some doing to come up with a useful solution to the entire problem.
The following solution has been tested in Python 3.9.5 and 3.9.7.
Problem Synopsis
The names of the scripts match those in the code example below.
A top-level program (grandparent.py):
- Uses subprocess.run or subprocess.Popen to call a program (parent.py).
- Checks the return value from parent.py for sanity.
- Collects stdout and stderr from the main process, parent.py.
- Does not want to wait around for the grandchild to complete.
The called program (parent.py):
- Might do some stuff first.
- Spawns a very long process (the grandchild - "longProcess" in the code below).
- Might do a little more work.
- Returns its results and exits while the grandchild (longProcess) continues doing what it does.
Solution Synopsis
The important part isn't so much what happens with subprocess. Instead, the method for creating the grandchild/longProcess is the critical part. It is necessary to ensure that the grandchild is truly emancipated from parent.py.
Subprocess only needs to be used in a way that captures output.
The longProcess (grandchild) needs the following to happen:
- It should be started using multiprocessing.
- It needs multiprocessing's 'daemon' set to False.
- It should also be invoked using the double-fork procedure.
In the double-fork, extra work needs to be done to ensure that the process is truly separate from parent.py. Specifically:
- Move the execution away from the environment of parent.py.
- Use file handling to ensure that the grandchild no longer uses the file handles (stdin, stdout, stderr) inherited from parent.py.
Example Code
grandparent.py - calls parent.py using subprocess.run()
#!/usr/bin/env python3
import subprocess
p = subprocess.run(["/usr/bin/python3", "/path/to/parent.py"], capture_output=True)
## Comment the following if you don't need reassurance
print("The return code is: " + str(p.returncode))
print("The standard out is: ")
print(p.stdout)
print("The standard error is: ")
print(p.stderr)
parent.py - starts the longProcess/grandchild and exits, leaving the grandchild running. After 10 seconds, the grandchild will write timing info to /tmp/timelog.
#!/usr/bin/env python3
import time
def longProcess():
    time.sleep(10)
    fo = open("/tmp/timelog", "w")
    fo.write("I slept! The time now is: " + time.asctime(time.localtime()) + "\n")
    fo.close()
import os, sys

def spawnDaemon(func):
    # do the UNIX double-fork magic, see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:  # parent process
            return
    except OSError as e:
        print("fork #1 failed. See next. ")
        print(e)
        sys.exit(1)

    # Decouple from the parent environment.
    os.chdir("/")
    os.setsid()
    os.umask(0)

    # do second fork
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent
            sys.exit(0)
    except OSError as e:
        print("fork #2 failed. See next. ")
        print(e)
        sys.exit(1)

    # Redirect standard file descriptors.
    # Here, they are reassigned to /dev/null, but they could go elsewhere.
    sys.stdout.flush()
    sys.stderr.flush()
    si = open('/dev/null', 'r')
    so = open('/dev/null', 'a+')
    se = open('/dev/null', 'a+')
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())

    # Run your daemon
    func()
    # Ensure that the daemon exits when complete
    os._exit(os.EX_OK)
import multiprocessing

daemonicGrandchild = multiprocessing.Process(target=spawnDaemon, args=(longProcess,))
daemonicGrandchild.daemon = False
daemonicGrandchild.start()
print("have started the daemon")  # This will get captured as stdout by grandparent.py
References
The code above was mainly inspired by the following two resources.
This reference is succinct about the use of the double-fork but does not include the file handling we need in this situation.
This reference contains the needed file handling, but does many other things that we do not need.
Edit: the below stopped working after a Python upgrade; see the accepted answer from Lachele.
Working answer from a colleague, change to shell=True like this:
p = subprocess.Popen(pythonscript.py, stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)
I've tested this, and the grandchild subprocesses stay alive after the child process returns, without it waiting for them to finish.

Killing parent process from a child process with Python on Linux

In my (very simplified) scenario, in Python 2.7, I have 2 processes:
The parent process, which does some tasks.
The child process, which needs to kill the parent process after X time.
Creation of child process:
killer = multiprocessing.Process(...)
killer.start()
The child process executes the following code after X time (simplified version of the code):
process = psutil.Process(parent_pid)
...
if time_elapsed:
    while True:
        process.kill()
        if not process.is_alive:
            exit()
The problem is that it's leaving the parent as a zombie process, and the child is never exiting because the parent is still alive.
The same code works as expected in Windows.
All the solutions that I saw were talking about the parent process waiting for the child to finish by calling killer.join(), but in my case, the parent is the one who does the task, and it shouldn't wait for its child.
What is the best way to deal with a scenario like that?
You could use os.getppid() to retrieve the parent's PID, and kill it with os.kill().
E.g. os.kill(os.getppid(), signal.SIGKILL)
See https://docs.python.org/2/library/os.html and https://docs.python.org/2/library/signal.html#module-signal for reference.
A minimal working example:
Parent:
import subprocess32 as subprocess
subprocess.run(['python', 'ch.py'])
Child:
import os
import signal
os.kill(os.getppid(), signal.SIGTERM)

Kill a chain of sub processes on KeyboardInterrupt

I've encountered a strange problem while writing a script to start my local JBoss instance.
My code looks something like this:
with open("/var/run/jboss/jboss.pid", "wb") as f:
process = subprocess.Popen(["/opt/jboss/bin/standalone.sh", "-b=0.0.0.0"])
f.write(str(process.pid))
try:
process.wait()
except KeyboardInterrupt:
process.kill()
It should be fairly simple to understand: write the PID to a file while the process is running, and once I get a KeyboardInterrupt, kill the child process.
The problem is that JBoss keeps running in the background after I send the kill signal, as it seems that the signal doesn't propagate down to the Java process started by standalone.sh.
I like the idea of using Python to write system management scripts, but there are a lot of weird edge cases like this where if I would have written it in Bash, everything would have just worked™.
How can I kill the entire subprocess tree when I get a KeyboardInterrupt?
You can do this using the psutil library:
import psutil

# ...
proc = psutil.Process(process.pid)
for child in proc.children(recursive=True):
    child.kill()
proc.kill()
As far as I know the subprocess module does not offer any API function to retrieve the children spawned by subprocesses, nor does the os module.
A better way of killing the processes would probably be the following:
proc = psutil.Process(process.pid)
procs = proc.children(recursive=True)
procs.append(proc)
for proc in procs:
    proc.terminate()
gone, alive = psutil.wait_procs(procs, timeout=1)
for p in alive:
    p.kill()
This gives the processes a chance to terminate correctly, and when the timeout ends the remaining processes are killed.
Note that psutil also provides a Popen class that has the same interface as subprocess.Popen plus all the extra functionality of psutil.Process. You may want to simply use that instead of subprocess.Popen. It is also safer, because psutil checks that PIDs don't get reused when a process terminates, while subprocess doesn't.
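For example, a small sketch (assuming a POSIX system with a sleep binary):
import psutil

proc = psutil.Popen(["sleep", "60"])  # drop-in replacement for subprocess.Popen
print(proc.status())                  # psutil.Process API on the same object
proc.terminate()
proc.wait(timeout=5)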

Python kill all processes owned by user

I need to make a function that can kill all processes owned by a user and later start a few more.
My main problem is that I cannot figure out how to check whether all the processes were killed and, if some are still running, how to retry killing them 1-2 times before returning an error. I want to use only Python code.
Here is my code:
import os
import pwd

def pkill(user):
    pids = []
    user_pids = []
    uid = pwd.getpwnam(user).pw_uid

    # get all PIDs
    for i in os.listdir('/proc'):
        if i.isdigit():
            pids.append(i)

    # test if the PID is owned by the user
    for i in pids:
        puid = os.stat(os.path.join('/proc', i)).st_uid
        if puid == uid:
            user_pids.append(i)
    # print len(user_pids)

    # check if the PID still exists and kill it
    for i in user_pids:
        if os.path.exists(os.path.join('/proc', i)):
            try:
                os.kill(int(i), 15)
            except OSError:
                pass  # body was cut off in the question; the process may already be gone
Thank you
The standard way to check whether a process is running on Linux (it's also POSIX compatible) is kill -0 PID. Here you can simply call os.kill with 0 as the signal: if the process is dead, it raises an exception; if it's alive, it does not.
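A small sketch of that check (my own illustration; the errno values are the standard POSIX ones):
import errno
import os

def pid_running(pid):
    # Signal 0 performs kill()'s existence and permission checks
    # without actually sending a signal.
    try:
        os.kill(pid, 0)
    except OSError as e:
        if e.errno == errno.ESRCH:  # no such process
            return False
        if e.errno == errno.EPERM:  # exists, but owned by someone else
            return True
        raise
    return True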
Can't you do the same thing you did to find the processes? After the kill pass, that same /proc scan should return zero PIDs.
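Building on that comment, a hedged retry sketch that re-runs the same /proc scan to verify (my own illustration reusing the scan logic from the question; pids_owned_by and kill_and_verify are names I made up):
import os
import pwd
import time

def pids_owned_by(user):
    # The same /proc scan as in pkill() above.
    uid = pwd.getpwnam(user).pw_uid
    return [i for i in os.listdir('/proc')
            if i.isdigit()
            and os.stat(os.path.join('/proc', i)).st_uid == uid]

def kill_and_verify(user, retries=2):
    for attempt in range(retries + 1):
        for pid in pids_owned_by(user):
            try:
                os.kill(int(pid), 15)
            except OSError:
                pass  # process already gone
        time.sleep(1)
        if not pids_owned_by(user):
            return True
    return False  # some processes survived; report an error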
