I'm trying to use the pathos library to replace the built-in multiprocessing library, but am having difficulty using either pipes or queues on Windows. Here's a representative example:
from pathos.helpers import mp
#import multiprocessing as mp
import sys

def f(pipe, queue):
    if sys.gettrace():
        print 'Debug mode. About to crash'
    else:
        print 'Execute mode'
    pipe.send('pipe')
    queue.put('queue')

if __name__ == '__main__':
    mp.freeze_support()
    to_child, to_self = mp.Pipe()
    queue = mp.Queue()
    p = mp.Process(target=f, args=(to_child, queue))
    p.start()
    p.join()
pipe.send('pipe') raises IOError: (6, 'The handle is invalid') and queue.put('queue') raises WindowsError: [Error 5] Access is denied. Both work correctly using the vanilla multiprocessing module.
Am I doing something wrong?
Edit:
This crash only occurs when I'm trying to debug child processes (I use WingIDE). I can accurately predict the crash by checking sys.gettrace(), as above.
It turns out the issue arises when Wing IDE is set to debug child processes. As I understand it, Wing enables child process debugging by inserting itself between parent and child processes (which means that 'child' processes are now actually the grandchildren of the parent process). On Windows, this is made possible by monkeypatching multiprocessing so that Popen sets the 'inheritable' flag to True when calling duplicate() on handles.
Without the inheritable setting, grandchild processes cannot access the handles, and hence the exceptions above are raised.
It seems likely that a similar monkey patch could be applied to pathos's helpers.mp module to allow wing to debug child processes with pathos.
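For illustration only, here is a heavily hedged sketch of what such a monkeypatch might look like. It assumes that pathos's helpers.mp resolves to the multiprocess package and that, on Python 2 under Windows, multiprocess mirrors the stdlib's multiprocessing.forking module and its duplicate() function; those module and function names are assumptions and the snippet is untested:

import multiprocess.forking as forking  # assumed to mirror py2 multiprocessing.forking

_orig_duplicate = forking.duplicate

def _inheritable_duplicate(handle, target_process=None, inheritable=False):
    # Always duplicate handles as inheritable so that grandchild processes
    # (inserted by the debugger) can still use them.
    return _orig_duplicate(handle, target_process, True)

forking.duplicate = _inheritable_duplicate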
Related
I have Python code that runs other scripts in multiple instances using subprocess.Popen and waits for them to finish with Popen.wait().
Everything works fine; however, I want to kill all subprocesses if one of them is terminated. Here is the code I use to run the multiple instances with the subprocess package:
import ctypes
import os
import signal
import subprocess

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)
    return callable

if __name__ == "__main__":
    procs = []
    for i in range((os.cpu_count() * 2) - 1):
        proc = subprocess.Popen(['python', "pythonscript_i_need_to_run/"], preexec_fn=set_pdeathsig(signal.SIGTERM))
        procs.append(proc)
    procs.append(subprocess.Popen(["python", "other_pythonscript_i_need_to_run"], preexec_fn=set_pdeathsig(signal.SIGTERM)))
    for proc in procs:
        proc.wait()
The set_pdeathsig function is for killing the children if the parent is killed. Long story short, I need to kill all children if one of them is killed. How can I do it?
*** NOTE ***
When I try to kill the parent when one child is dead with
os.kill(os.getppid(), signal.SIGTERM), it doesn't kill the original parent script. I also tried killing by group PID, but that didn't work either.
Unix and Unix-like operating systems have a SIGCHLD signal that is sent by the OS kernel. This signal is delivered to the parent process when a child process terminates. If you have no handler for this signal, SIGCHLD is ignored by default. But if you install a handler function for it, you tell the kernel: "hey, I have a handler function; when a child process terminates, please run it."
In your case, you have many child processes. If one of them is killed or finishes its execution (via the exit() syscall), the kernel sends a SIGCHLD signal to the parent process, which is the code you shared.
We have a handler for the SIGCHLD signal, the chld_handler() function. When one of the child processes terminates, SIGCHLD is sent to the parent process and the kernel triggers chld_handler. (This is called signal catching.)
With the call signal.signal(signal.SIGCHLD, chld_handler) we tell the kernel, "I have a handler function for SIGCHLD; don't ignore it when a child terminates." Inside chld_handler, which runs when SIGCHLD is delivered, we call signal.signal(signal.SIGCHLD, signal.SIG_IGN) to tell the kernel, "I no longer have a handler; ignore SIGCHLD." We do that because we don't need the signal anymore: the handler goes on to kill the remaining children by looping over procs and calling p.terminate(), and those terminations should not retrigger the handler.
The full code would look like this:
import ctypes
import os
import signal
import subprocess

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)
    return callable

def chld_handler(sig, frame):
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)
    print("one of the childs dead")
    for p in procs:
        p.terminate()

signal.signal(signal.SIGCHLD, chld_handler)

if __name__ == "__main__":
    procs = []
    for i in range((os.cpu_count() * 2) - 1):
        proc = subprocess.Popen(['python', "pythonscript_i_need_to_run/"], preexec_fn=set_pdeathsig(signal.SIGTERM))
        procs.append(proc)
    procs.append(subprocess.Popen(["python", "other_pythonscript_i_need_to_run"], preexec_fn=set_pdeathsig(signal.SIGTERM)))
    for proc in procs:
        proc.wait()
There is much more detail to the SIGCHLD signal, the Python signal library, and zombie processes; I'm not covering everything here because there are many details and I'm not an expert on all of them.
I hope the information above gives you some insight. If you think I'm wrong somewhere, please correct me.
Signal delivery (in Python, that means user-defined signal.signal() handlers) is sometimes race-prone. It's easy to code a solution that works most of the time but may still miss a signal that arrives just before or just after you are prepared to deal with it.
(For reliable delivery as an I/O event, the venerable self-pipe trick may be implemented in Python.)
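A minimal sketch of the self-pipe trick, assuming Python 3 on POSIX: the signal handler only writes a byte, and the main loop sees the child's death as an ordinary readable event:

import os
import select
import signal

r, w = os.pipe()
os.set_blocking(w, False)              # the handler must never block

def on_sigchld(signum, frame):
    try:
        os.write(w, b'\x00')           # just flag the event
    except BlockingIOError:
        pass                           # a wake-up byte is already pending

signal.signal(signal.SIGCHLD, on_sigchld)

# ... launch children ...

select.select([r], [], [])             # wakes once SIGCHLD has been delivered
os.read(r, 4096)                       # drain the pipe
# ... terminate other children ...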
Signal acceptance is another approach, in which you SIG_BLOCK a signal to hold it pending when generated, and then accept it with the signal module's sigwait(), sigwaitinfo(), or sigtimedwait() when you're ready to do so. There's no chance of missing the signal here, but you must remember that basic UNIX signals do not queue up: only one signal of each type will be held pending for acceptance regardless of how many times that signal was generated.
For your problem, that would look something like this, assuming your implementation supported signal.pthread_sigmask():
def main():
    signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGCHLD])
    ... launch children ...
    signal.sigwait([signal.SIGCHLD])
    # OK, at least one child terminated
    ... terminate other children ...
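Filled in as a runnable sketch (the sleep commands stand in for your real children):

import signal
import subprocess

def main():
    # Block SIGCHLD before launching so it is held pending rather than missed.
    signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGCHLD])

    procs = [subprocess.Popen(cmd) for cmd in (["sleep", "2"], ["sleep", "60"])]

    signal.sigwait([signal.SIGCHLD])
    # OK, at least one child terminated; shut the rest down.
    for p in procs:
        p.terminate()
    for p in procs:
        p.wait()

if __name__ == "__main__":
    main()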
I'm using multiprocessing in a larger code base where some of the import statements have side effects. How can I run a function in a background process without having it inherit global imports?
# helper.py:
print('This message should only print once!')

# main.py:
import multiprocessing as mp

import helper  # This prints the message.

def worker():
    pass  # Unfortunately this also prints the message again.

if __name__ == '__main__':
    mp.set_start_method('spawn')
    process = mp.Process(target=worker)
    process.start()
    process.join()
Background: Importing TensorFlow initializes CUDA, which reserves some amount of GPU memory. As a result, spawning too many processes leads to a CUDA OOM error, even though the processes don't use TensorFlow.
Similar question without an answer:
How to avoid double imports with the Python multiprocessing module?
Is there a resource that explains exactly what the multiprocessing module does when starting an mp.Process?
Super quick version (using the spawn context, not fork):
Some stuff (a pair of pipes for communication, cleanup callbacks, etc.) is prepared, then a new process is created with fork()/exec() (on Windows it's CreateProcessW()). The new Python interpreter is called with a startup script, spawn_main(), and is passed the communication pipe file descriptors via a crafted command string and the -c switch. The startup script cleans up the environment a little bit, then unpickles the Process object from its communication pipe. Finally it calls the run method of the process object.
So what about importing modules?
Pickle semantics handle some of it, but __main__ and sys.modules need some TLC, which is handled here (during the "cleans up the environment" bit).
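If you want to see the crafted command line for yourself, here is a small sketch; the exact output varies by platform and Python version, and pipe_handle=7 is just a made-up value:

from multiprocessing import spawn

# Prints something along the lines of:
# ['/usr/bin/python3', '-c',
#  'from multiprocessing.spawn import spawn_main; spawn_main(pipe_handle=7)',
#  '--multiprocessing-fork']
print(spawn.get_command_line(pipe_handle=7))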
# helper.py:
print('This message should only print once!')

# main.py:
import multiprocessing as mp

def worker():
    pass

def main():
    # Importing the module only locally so that the background
    # worker won't import it again.
    import helper
    mp.set_start_method('spawn')
    process = mp.Process(target=worker)
    process.start()
    process.join()

if __name__ == '__main__':
    main()
I've run into a strange problem while writing a script to start my local JBoss instance.
My code looks something like this:
with open("/var/run/jboss/jboss.pid", "wb") as f:
process = subprocess.Popen(["/opt/jboss/bin/standalone.sh", "-b=0.0.0.0"])
f.write(str(process.pid))
try:
process.wait()
except KeyboardInterrupt:
process.kill()
It should be fairly simple to understand: write the PID to a file while it's running, and once I get a KeyboardInterrupt, kill the child process.
The problem is that JBoss keeps running in the background after I send the kill signal, as it seems that the signal doesn't propagate down to the Java process started by standalone.sh.
I like the idea of using Python to write system management scripts, but there are a lot of weird edge cases like this where if I would have written it in Bash, everything would have just worked™.
How can I kill the entire subprocess tree when I get a KeyboardInterrupt?
You can do this using the psutil library:
import psutil

#..
proc = psutil.Process(process.pid)
for child in proc.children(recursive=True):
    child.kill()
proc.kill()
As far as I know the subprocess module does not offer any API function to retrieve the children spawned by subprocesses, nor does the os module.
A better way of killing the processes would probably be the following:
proc = psutil.Process(process.pid)
procs = proc.children(recursive=True)
procs.append(proc)

for proc in procs:
    proc.terminate()
gone, alive = psutil.wait_procs(procs, timeout=1)
for p in alive:
    p.kill()
This gives the processes a chance to terminate correctly; when the timeout expires, the remaining processes are killed.
Note that psutil also provides a Popen class that has the same interface as subprocess.Popen plus all the extra functionality of psutil.Process. You may want to simply use that instead of subprocess.Popen. It is also safer, because psutil checks that PIDs don't get reused when a process terminates, while subprocess doesn't.
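A small sketch of that (the ping command is just a placeholder):

import psutil

proc = psutil.Popen(["ping", "localhost"])   # same constructor as subprocess.Popen
print(proc.name(), proc.status())            # plus the psutil.Process API
proc.terminate()
proc.wait(timeout=5)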
I have this code:
import os

pid = os.fork()
if pid == 0:
    os.environ['HOME'] = "rep1"
    external_function()
else:
    os.environ['HOME'] = "rep2"
    external_function()
and this code:
import os
from multiprocessing import Process, Pipe

def f(conn):
    os.environ['HOME'] = "rep1"
    external_function()
    conn.send(some_data)
    conn.close()

if __name__ == '__main__':
    os.environ['HOME'] = "rep2"
    external_function()
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print parent_conn.recv()
    p.join()
external_function initializes an external program by creating the necessary sub-directories under the directory found in the environment variable HOME. This function does its work only once in each process.
With the first example, which uses os.fork(), the directories are created as expected. But with the second example, which uses multiprocessing, only the directories in rep2 get created.
Why isn't the second example creating directories in both rep1 and rep2?
The answer you are looking for is addressed in detail here. There is also an explanation of the differences between operating systems.
One big issue is that the fork system call does not exist on Windows. Therefore, when running a Windows OS you cannot use this method. multiprocessing is a higher-level interface for executing a part of the currently running program. Therefore, it creates, as forking does, a copy of your process's current state. That is to say, it takes care of the forking of your program for you.
Therefore, where available, you can consider fork() a lower-level interface to forking a program, and the multiprocessing library a higher-level interface to forking.
To answer your question directly: there must be some side effect of external_function that makes the code produce different results when the calls run in series than when they run at the same time. This is due to how you set up your code, and to the lack of differences between os.fork and multiprocessing.Process on systems where os.fork is supported.
The only real difference between os.fork and multiprocessing.Process is portability and library overhead, since os.fork is not supported on Windows, and the multiprocessing framework is included to make multiprocessing.Process work. This is because os.fork is called by multiprocessing.Process, as this answer backs up.
The important distinction, then, is that os.fork copies everything in the current process using Unix's forking, which means that at the time of forking both processes are identical apart from their PIDs. On Windows, this is emulated by rerunning all the setup code before the if __name__ == '__main__': guard, which is roughly the same as creating a subprocess with the subprocess library.
For you, the two code snippets you provide do fairly different things, because in the second one you call external_function in the main process before you start the new process, making the two calls run in series (though in different processes). Also, the pipe is unnecessary, as it replicates no functionality from the first snippet.
On Unix, the code snippets:
import os

pid = os.fork()
if pid == 0:
    os.environ['HOME'] = "rep1"
    external_function()
else:
    os.environ['HOME'] = "rep2"
    external_function()
and:
import os
from multiprocessing import Process

def f():
    os.environ['HOME'] = "rep1"
    external_function()

if __name__ == '__main__':
    p = Process(target=f)
    p.start()
    os.environ['HOME'] = "rep2"
    external_function()
    p.join()
should do exactly the same thing, but with a little extra overhead from the included multiprocessing library.
Without further information, we can't figure out what the issue is. If you can provide code that demonstrates the issue, that would help us help you.
The child process is started with
subprocess.Popen(arg)
Is there a way to ensure it is killed when parent terminates abnormally? I need this to work both on Windows and Linux. I am aware of this solution for Linux.
Edit:
The requirement of starting the child process with subprocess.Popen(arg) can be relaxed if a solution exists that uses a different method of starting a process.
Heh, I was just researching this myself yesterday! Assuming you can't alter the child program:
On Linux, prctl(PR_SET_PDEATHSIG, ...) is probably the only reliable choice. (If it's absolutely necessary that the child process be killed, then you might want to set the death signal to SIGKILL instead of SIGTERM; the code you linked to uses SIGTERM, but the child does have the option of ignoring SIGTERM if it wants to.)
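For instance, a sketch of the prctl approach with SIGKILL as the death signal (assuming Linux; the sleep command is just a placeholder child):

import ctypes
import signal
import subprocess

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PR_SET_PDEATHSIG = 1   # prctl option number on Linux

def _die_with_parent():
    # Runs in the child between fork() and exec(): ask the kernel to send
    # SIGKILL to this process as soon as the parent dies.
    libc.prctl(PR_SET_PDEATHSIG, signal.SIGKILL)

child = subprocess.Popen(["sleep", "600"], preexec_fn=_die_with_parent)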
On Windows, the most reliable option is to use a Job object. The idea is that you create a "Job" (a kind of container for processes), then you place the child process into the Job, and you set the magic option that says "when no one holds a 'handle' for this Job, then kill the processes that are in it". By default, the only 'handle' to the Job is the one that your parent process holds, and when the parent process dies, the OS will go through and close all its handles, and then notice that this means there are no open handles for the Job. So then it kills the child, as requested. (If you have multiple child processes, you can assign them all to the same Job.) This answer has sample code for doing this, using the win32api module. That code uses CreateProcess to launch the child, instead of subprocess.Popen. The reason is that they need to get a "process handle" for the spawned child, and CreateProcess returns this by default. If you'd rather use subprocess.Popen, then here's an (untested) copy of the code from that answer that uses subprocess.Popen and OpenProcess instead of CreateProcess:
import subprocess
import win32api
import win32con
import win32job
hJob = win32job.CreateJobObject(None, "")
extended_info = win32job.QueryInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation)
extended_info['BasicLimitInformation']['LimitFlags'] = win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation, extended_info)
child = subprocess.Popen(...)
# Convert process id to process handle:
perms = win32con.PROCESS_TERMINATE | win32con.PROCESS_SET_QUOTA
hProcess = win32api.OpenProcess(perms, False, child.pid)
win32job.AssignProcessToJobObject(hJob, hProcess)
Technically, there's a tiny race condition here in case the child dies between the Popen and OpenProcess calls; you can decide whether you want to worry about that.
One downside to using a job object is that when running on Vista or Win7, if your program is launched from the Windows shell (i.e., by clicking on an icon), then there will probably already be a job object assigned and trying to create a new job object will fail. Win8 fixes this (by allowing job objects to be nested), or if your program is run from the command line then it should be fine.
If you can modify the child (e.g., like when using multiprocessing), then probably the best option is to somehow pass the parent's PID to the child (e.g. as a command line argument, or in the args= argument to multiprocessing.Process), and then:
On POSIX: Spawn a thread in the child that just calls os.getppid() occasionally, and if the return value ever stops matching the PID passed in from the parent, then call os._exit(). (This approach is portable to all Unixes, including OS X, while the prctl trick is Linux-specific. A sketch of this appears after the Windows example below.)
On Windows: Spawn a thread in the child that uses OpenProcess and os.waitpid. Example using ctypes:
import os
from ctypes import WinDLL, WinError
from ctypes.wintypes import DWORD, BOOL, HANDLE

# Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx
SYNCHRONIZE = 0x00100000

kernel32 = WinDLL("kernel32.dll")
kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD)
kernel32.OpenProcess.restype = HANDLE

# parent_pid is the parent's PID, passed in from the parent (see above)
parent_handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid)

# Block until parent exits
os.waitpid(parent_handle, 0)
os._exit(0)
This avoids any of the possible issues with job objects that I mentioned.
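For the POSIX option above (the getppid-polling thread), a minimal sketch might look like this, assuming parent_pid has been passed in from the parent:

import os
import threading
import time

def _watch_parent(parent_pid, interval=1.0):
    # When the original parent dies, the child is reparented and
    # os.getppid() stops matching the PID we were given.
    while os.getppid() == parent_pid:
        time.sleep(interval)
    os._exit(1)

# In the child, right after start-up:
threading.Thread(target=_watch_parent, args=(parent_pid,), daemon=True).start()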
If you want to be really, really sure, then you can combine all these solutions.
Hope that helps!
The Popen object offers the terminate and kill methods.
https://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate
These send the SIGTERM and SIGKILL signals for you.
You can do something akin to the below:
from subprocess import Popen

p = None
try:
    p = Popen(arg)
    # some code here
except Exception as ex:
    print 'Parent program has exited with the below error:\n{0}'.format(ex)
    if p:
        p.terminate()
UPDATE:
You are correct: the above code will not protect against hard crashes or someone killing your process. In that case you can try wrapping the child process in a class and employing a polling model to watch the parent process.
Be aware that psutil is not part of the standard library.
import os
import psutil
from multiprocessing import Process
from time import sleep

class MyProcessAbstraction(object):
    def __init__(self, parent_pid, command):
        """
        #type parent_pid: int
        #type command: str
        """
        self._child = None
        self._cmd = command
        self._parent = psutil.Process(pid=parent_pid)

    def run_child(self):
        """
        Start a child process by running self._cmd.
        Wait until the parent process (self._parent) has died, then kill the
        child.
        """
        print '---- Running command: "%s" ----' % self._cmd
        self._child = psutil.Popen(self._cmd)
        try:
            while self._parent.status == psutil.STATUS_RUNNING:
                sleep(1)
        except psutil.NoSuchProcess:
            pass
        finally:
            print '---- Terminating child PID %s ----' % self._child.pid
            self._child.terminate()

if __name__ == "__main__":
    parent = os.getpid()
    child = MyProcessAbstraction(parent, 'ping -t localhost')
    child_proc = Process(target=child.run_child)
    child_proc.daemon = True
    child_proc.start()

    print '---- Try killing PID: %s ----' % parent
    while True:
        sleep(1)
In this example I run 'ping -t localhost' because that will run forever. If you kill the parent process, the child process (the ping command) will also be killed.
Since, from what I can tell, the PR_SET_PDEATHSIG solution can result in a deadlock when any threads are running in the parent process, I didn't want to use that and figured out another way. I created a separate auto-terminate process that detects when its parent process is done and kills the other subprocess that is its target.
To accomplish this, you need to pip install psutil, and then write code similar to the following:
import subprocess
import sys
from subprocess import Popen

def start_auto_cleanup_subprocess(target_pid):
    cleanup_script = f"""
import os
import psutil
import signal
from time import sleep

try:
    # Block until stdin is closed which means the parent process
    # has terminated.
    input()
except Exception:
    # Should be an EOFError, but if any other exception happens,
    # assume we should respond in the same way.
    pass

if not psutil.pid_exists({target_pid}):
    # Target process has already exited, so nothing to do.
    exit()

os.kill({target_pid}, signal.SIGTERM)
for count in range(10):
    if not psutil.pid_exists({target_pid}):
        # Target process no longer running.
        exit()
    sleep(1)

os.kill({target_pid}, signal.SIGKILL)
# Don't bother waiting to see if this works since if it doesn't,
# there is nothing else we can do.
"""
    return Popen(
        [
            sys.executable,  # Python executable
            '-c', cleanup_script
        ],
        stdin=subprocess.PIPE
    )
This is similar to https://stackoverflow.com/a/23436111/396373, which I had failed to notice, but I think the way I came up with is easier for me to use because the process that is the target of the cleanup is created directly by the parent. Also note that it is not necessary to poll the status of the parent, though it is still necessary to use psutil and to poll the status of the target subprocess during the termination sequence if you want, as in this example, to terminate, monitor, and then kill if terminate didn't work expeditiously.
Hook the exit of your process using SetConsoleCtrlHandler, and kill the subprocess from the handler. I think I overdo it a bit there, but it works :)
import psutil, os
import subprocess

def kill_proc_tree(pid, including_parent=True):
    parent = psutil.Process(pid)
    children = parent.children(recursive=True)
    for child in children:
        child.kill()
    gone, still_alive = psutil.wait_procs(children, timeout=5)
    if including_parent:
        parent.kill()
        parent.wait(5)

def func(x):
    print("killed")
    if anotherproc:
        kill_proc_tree(anotherproc.pid)
    kill_proc_tree(os.getpid())

import win32api, shlex
win32api.SetConsoleCtrlHandler(func, True)

PROCESSTORUN = "your process"
anotherproc = None
cmdline = f"/c start /wait \"{PROCESSTORUN}\" "
anotherproc = subprocess.Popen(executable='C:\\Windows\\system32\\cmd.EXE', args=shlex.split(cmdline, posix="false"))
...
run program
...
Took kill_proc_tree from:
subprocess: deleting child processes in Windows