Python Multiprocessing Documentation Example

I'm trying to learn Python multiprocessing.
I'm working from http://docs.python.org/2/library/multiprocessing.html, specifically the example under "To show the individual process IDs involved, here is an expanded example:"
from multiprocessing import Process
import os

def info(title):
    print title
    print 'module name:', __name__
    if hasattr(os, 'getppid'):  # only available on Unix
        print 'parent process:', os.getppid()
    print 'process id:', os.getpid()

def f(name):
    info('function f')
    print 'hello', name

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
What exactly am I looking at? I see that f(name) is called after info('main line') finishes, but a plain synchronous call would behave that way anyway. I can also see that the PID of the process running info('main line') shows up as the parent PID of the process running f(name), but I'm not sure what is "multiprocessing" about that.
Also, about join(): "Block the calling thread until the process whose join() method is called terminates". I'm not clear on what the calling thread would be here. In this example, what exactly is join() blocking?

How multiprocessing works, in a nutshell:
Process() spawns (fork or similar on Unix-like systems) a copy of the original program (on Windows, which lacks a real fork, this is tricky and requires the special care that the module documentation notes).
The copy communicates with the original to figure out that (a) it's a copy and (b) it should go off and invoke the target= function (see below).
At this point, the original and copy are now different and independent, and can run simultaneously.
Since these are independent processes, they now have independent Global Interpreter Locks (in CPython) so both can use up to 100% of a CPU on a multi-cpu box, as long as they don't contend for other lower-level (OS) resources. That's the "multiprocessing" part.
Of course, at some point you have to send data back and forth between these supposedly-independent processes, e.g., to send results from one (or many) worker process(es) back to a "main" process. (There is the occasional exception where everyone's completely independent, but it's rare ... plus there's the whole start-up sequence itself, kicked off by p.start().) So each created Process instance—p, in the above example—has a communications channel to its parent creator and vice versa (it's a symmetric connection). The multiprocessing module uses the pickle module to turn data into strings—the same strings you can stash in files with pickle.dump—and sends the data across the channel, "downwards" to workers to send arguments and such, and "upwards" from workers to send back results.
Eventually, once you're all done with getting results, the worker finishes (by returning from the target= function) and tells the parent it's done. To make sure everything gets closed and cleaned up, the parent should call p.join() to wait for the worker's "I'm done" message (actually an OS-level exit on Unix-ish systems).
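To make the join() question concrete, here is a minimal sketch (my own, not from the original answer) of what gets blocked: the parent's main thread stops at p.join() until the worker process has exited.

from multiprocessing import Process
import time

def worker():
    time.sleep(2)   # stand-in for two seconds of real work

if __name__ == '__main__':
    p = Process(target=worker)
    start = time.time()
    p.start()    # returns immediately; the worker runs on its own
    p.join()     # the parent (the calling thread) blocks here until the worker exits
    print('waited %.1f seconds' % (time.time() - start))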
The example is a little bit silly since the two printed messages take basically no time at all, so running them "at the same time" has no measurable gain. But suppose instead of just printing hello, f were to calculate the first 100,000 digits of π (3.14159...). You could then spawn another Process, p2 with a different target g that calculates the first 100,000 digits of e (2.71828...). These would run independently. The parent could then call p.join() and p2.join() to wait for both to complete (or spawn yet more workers to do more work and occupy more CPUs, or even go off and do its own work for a while first).
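As a rough sketch of that scenario (the workloads below are small stand-ins I picked, not the original 100,000-digit computations), two workers can run at the same time and hand their results back through a Queue:

from multiprocessing import Process, Queue

def f(q):
    # stand-in for "compute digits of pi": a Leibniz partial sum
    q.put(('pi', 4 * sum((-1) ** k / (2.0 * k + 1) for k in range(1000000))))

def g(q):
    # stand-in for "compute digits of e": a partial sum of 1/k!
    total, term = 1.0, 1.0
    for k in range(1, 20):
        term /= k
        total += term
    q.put(('e', total))

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p2 = Process(target=g, args=(q,))
    p.start()
    p2.start()         # both workers now run independently of the parent
    print(q.get())     # results arrive in whichever order the workers finish
    print(q.get())
    p.join()
    p2.join()          # the parent waits for both workers to exit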

Related

How do Python pipes still work across spawned processes?

I'm trying to understand why the following code works:
import multiprocessing

def send_message(conn):
    # Send a message through the pipe
    conn.send("Hello, world!")

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    # Create a pipe
    parent_conn, child_conn = multiprocessing.Pipe()
    # Create a child process
    p = multiprocessing.Process(target=send_message, args=(child_conn,))
    p.start()
    # Wait for the child process to finish
    p.join()
    # Read the message from the pipe
    message = parent_conn.recv()
    print(message)
As I understand it, Python pipes are just regular OS pipes, i.e. file descriptors.
When a new process is created via spawn, I would expect all of the parent's file descriptors to be lost (in contrast to a regular fork).
In that case, how is it possible that the Python pipe is still "connected" to its parent process?
The documentation does not suggest that it will lose all the file descriptors - only that "unnecessary file descriptors and handles from the parent process will not be inherited". To figure out how this is achieved exactly in CPython, first we need to see what exactly happens when p.start() is called in the example code.
When the process is started, the Process instance's underlying Popen helper is used; for 'spawn' it is the version provided by popen_spawn_posix. As part of the startup sequence, it gathers the data required to start the child process, including which function to call and its arguments, and serializes that data with a specialized pickler.
The Connection object (which Pipe is built on) defines a reduction hook that marks the relevant file descriptor for duplication. That hook is invoked during that pickling step and ends up calling duplicate_for_child on the 'spawn' version of Popen, so any connection objects you pass (in your case, args=(child_conn,)) have their file descriptors handed through to the actual start function, spawnv_passfds, and the child process gets access to them.
I will note that I have glossed over various other details, but if you wish, you can always attach a debugger and trace through the startup sequence, which is what I did to derive this answer.
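As a small demonstration (my own sketch, not from the original answer) that the Connection passed in args= still works under 'spawn': the child's descriptor is a duplicate arranged by the machinery described above, so the message makes it back even though the processes share no memory.

import multiprocessing
import os

def send_message(conn):
    conn.send("hello from pid %d (fd %d)" % (os.getpid(), conn.fileno()))
    conn.close()

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    parent_conn, child_conn = multiprocessing.Pipe()
    print("parent pid %d, child end fd %d" % (os.getpid(), child_conn.fileno()))
    p = multiprocessing.Process(target=send_message, args=(child_conn,))
    p.start()
    print(parent_conn.recv())   # the message arrives despite the 'spawn' start method
    p.join()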

Difference in behavior between os.fork and multiprocessing.Process

I have this code :
import os

pid = os.fork()
if pid == 0:
    os.environ['HOME'] = "rep1"
    external_function()
else:
    os.environ['HOME'] = "rep2"
    external_function()
and this code :
import os
from multiprocessing import Process, Pipe

def f(conn):
    os.environ['HOME'] = "rep1"
    external_function()
    conn.send(some_data)
    conn.close()

if __name__ == '__main__':
    os.environ['HOME'] = "rep2"
    external_function()
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print parent_conn.recv()
    p.join()
external_function initializes an external program by creating the necessary sub-directories under the directory named in the HOME environment variable. It does this work only once per process.
With the first example, which uses os.fork(), the directories are created as expected. But with the second example, which uses multiprocessing, only the directories under rep2 get created.
Why doesn't the second example create directories under both rep1 and rep2?
The answer you are looking for is addressed in detail here, along with an explanation of the differences between operating systems.
One big issue is that the fork system call does not exist on Windows, so on Windows you cannot use os.fork. multiprocessing is a higher-level interface for running part of the currently running program in a separate process. Like forking, it creates a copy of your process's current state; in other words, it takes care of the forking (or its platform-specific equivalent) for you.
So, where fork() is available, you can consider it the lower-level interface and the multiprocessing library the higher-level interface to the same idea.
To answer your question directly: there must be some side effect of external_function that produces different results when the two calls run one after the other rather than at the same time. That comes from how the code is set up, not from any difference between os.fork and multiprocessing.Process on systems where os.fork is supported.
The only real differences between os.fork and multiprocessing.Process are portability and library overhead: os.fork is not supported on Windows, and the multiprocessing framework is there to make multiprocessing.Process work everywhere. On Unix (with the default 'fork' start method), multiprocessing.Process ends up calling os.fork, as this answer backs up.
The important distinction, then, is that os.fork copies everything in the current process using Unix's fork, so at the moment of forking both processes are identical apart from their PIDs. On Windows, this is emulated by re-running all the setup code above the if __name__ == '__main__': guard, which is roughly the same as creating a subprocess with the subprocess library. A small demonstration of that distinction follows.
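The sketch below is my own (assuming Python 3.4+ on a Unix-like system, where both the 'fork' and 'spawn' start methods are available): a child created with 'fork' sees the parent's current in-memory state, while one created with 'spawn' re-imports the module and sees only what top-level code sets up.

import multiprocessing as mp
import os

STATE = "initial"   # module-level state, set at import time

def show(label):
    print(label, os.getpid(), STATE)

if __name__ == '__main__':
    STATE = "changed in parent"   # only the parent's main block runs this
    for method in ("fork", "spawn"):
        p = mp.get_context(method).Process(target=show, args=(method,))
        p.start()
        p.join()
    # 'fork' prints "changed in parent" (memory copy of the parent);
    # 'spawn' prints "initial" (fresh interpreter that re-imported this module).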
In your case, the two snippets you posted do fairly different things, because in the second one you call external_function in the main block before you start the new process, so the two calls run one after the other, just in different processes. The pipe is also unnecessary there, since it reproduces no functionality from the first snippet.
In Unix, the code snippets:
import os

pid = os.fork()
if pid == 0:
    os.environ['HOME'] = "rep1"
    external_function()
else:
    os.environ['HOME'] = "rep2"
    external_function()
and:
import os
from multiprocessing import Process

def f():
    os.environ['HOME'] = "rep1"
    external_function()

if __name__ == '__main__':
    p = Process(target=f)
    p.start()
    os.environ['HOME'] = "rep2"
    external_function()
    p.join()
should do exactly the same thing, but with a little extra overhead from the included multiprocessing library.
Without further information, we can't figure out what the issue is. If you can provide code that demonstrates the issue, that would help us help you.

Python Multiprocessing - sending inputs to child processes

I am using the multiprocessing module in Python to launch a few processes in parallel. These processes are independent of each other; they generate their own output and write the results to different files. Each process calls an external tool using the subprocess.call method.
It was working fine until I discovered an issue in the external tool: under some error condition it drops into a 'prompt' mode and waits for user input. In my Python script I use the join method to wait until all the processes finish their tasks, which makes the whole thing wait on this erroneous subprocess call. I could put a timeout on each process, but I do not know in advance how long each one will run, so that option is ruled out.
How do I figure out whether any child process is waiting for user input, and how do I send an 'exit' command to it? Any pointers or suggestions to relevant Python modules will be really appreciated.
My code here:
import subprocess
import sys
import os
import multiprocessing

def write_script(fname, e):
    f = open(fname, 'w')
    f.write("Some useful command calling the external tool")
    f.close()
    subprocess.call(['chmod', '+x', os.path.abspath(fname)])
    return os.path.abspath(fname)

def run_use(mname, script):
    print "ssh "+mname+" "+script
    subprocess.call(['ssh', mname, script])

if __name__ == '__main__':
    dict1 = {}
    dict1['mod1'] = ['pp1', 'ext2', 'les3', 'pw4']
    dict1['mod2'] = ['aaa', 'bbb', 'ccc', 'ddd']
    machines = ['machine1', 'machine2', 'machine3', 'machine4']
    log_file.write(str(dict1.keys()))   # log_file is assumed to be opened elsewhere
    for key in dict1.keys():
        arr = []
        for mod in dict1[key]:
            d = {}
            arr.append(mod)
            if (mod == dict1[key][-1]) | (len(arr) % 4 == 0):
                for i in range(0, len(arr)):
                    e = arr.pop()
                    script = write_script(e + "_temp.sh", e)
                    d[i] = multiprocessing.Process(target=run_use, args=(machines[i], script,))
                    d[i].daemon = True
                for pp in d:
                    d[pp].start()
                for pp in d:
                    d[pp].join()
Since you're writing a shell script to run your subcommands, can you simply tell them to read input from /dev/null?
#!/bin/bash
# ...
my_other_command -a -b arg1 arg2 < /dev/null
# ...
This may stop them blocking on input and is a really simple solution; an equivalent redirection done from the Python side is sketched below. If this doesn't work for you, read on for some other options.
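For reference, here is a sketch of the same idea applied when launching the command from Python (my own variant, not part of the original answer; run_use mirrors the question's function):

import os
import subprocess

def run_use(mname, script):
    # Redirect the child's stdin from /dev/null so a stray prompt reads
    # end-of-file instead of hanging. (On Python 3.3+ you could pass
    # stdin=subprocess.DEVNULL instead of opening os.devnull yourself.)
    with open(os.devnull, 'rb') as devnull:
        subprocess.call(['ssh', mname, script], stdin=devnull)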
The subprocess.call() function is simply shorthand for constructing a subprocess.Popen instance and then calling the wait() method on it. So, your worker processes could instead create their own subprocess.Popen instances and poll them with the poll() method (in a loop with a suitable delay) instead of wait(). This leaves them free to remain in communication with the main process so you can, for example, allow the main process to tell the child process to terminate the Popen instance with the terminate() or kill() methods and then itself exit.
So, the question is how does the child process tell whether the subprocess is awaiting user input, and that's a trickier question. I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt, assuming that it always uses some string that you can look for. Alternatively, if the subprocess is expected to generate output continually then you could simply look for any output and if a configured amount of time goes past without any output then you declare that process dead and terminate it as detailed above.
Since you're reading the output, actually you don't need poll() or wait() - the process closing its output file descriptor is good enough to know that it's terminated in this case.
Here's an example of a modified run_use() method which watches the output of the subprocess:
def run_use(mname, script):
    print "ssh "+mname+" "+script
    proc = subprocess.Popen(['ssh', mname, script], stdout=subprocess.PIPE)
    for line in proc.stdout:
        if "UserPrompt>>>" in line:
            proc.terminate()
            break
In this example we assume that the process either gets hung up on UserPrompt>>> (replace with the appropriate string) or terminates naturally. If it were to get stuck in an infinite loop, for example, your script would still never terminate; the only real remedy for that is an overall timeout, which you didn't seem keen on. Hopefully your subprocess won't misbehave in that way, however.
Finally, if you don't know in advance what prompt your process will present, your job is rather harder. Effectively you're asking to monitor an external process and know when it's blocked reading on a file descriptor, and I don't believe there's a particularly clean solution to that. You could consider running the process under strace or similar, but that's quite an awful hack and I really wouldn't recommend it. Tools like strace are great for manual diagnostics, but they really shouldn't be part of a production setup.
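For completeness, here is a rough sketch (my own, not from the original answer; Unix only, since it uses select on a pipe, and the 60-second idle timeout is an arbitrary assumption) of the "no output for a while means it is stuck" variant mentioned above:

import os
import select
import subprocess

def run_with_idle_timeout(cmd, idle_timeout=60):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    fd = proc.stdout.fileno()
    while True:
        ready, _, _ = select.select([fd], [], [], idle_timeout)
        if not ready:
            # No output for idle_timeout seconds: assume it is stuck at a prompt.
            proc.terminate()
            break
        chunk = os.read(fd, 4096)
        if not chunk:
            # End of file: the tool closed its stdout, so it has finished.
            break
    proc.wait()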

How do I pass a value from a child script to a parent script while both are running at the same time?

Return a value early to a calling process in Python?
Hello, I want to ask: is there a way to have one script call another script, have both scripts running at the same time, and have the child script send a value back to the parent script long before the child script is done running (WITHOUT exiting the child script early)? I'm looking for a solution in Python, but any information or clues would help, thank you.
One way I can think of is to print the value I want to send back to standard output and have the parent script redirect or capture it somehow, but there must be a better solution, because what if the child script prints other things? (Then the parent has to isolate that exact part of the output with something like the Unix head and tail commands, and what if you don't want to use standard output at all?)
I have searched for answers on this, but I cannot find any.
You could use multiprocessing to launch the child script from the parent script.
An mp.Queue can be used to communicate output from the child script back to the parent. Here is a simple example:
parent.py:
import multiprocessing as mp
import child

if __name__ == '__main__':
    queue = mp.Queue()
    proc = mp.Process(target=child.main, args=(queue,))
    proc.daemon = True
    # This launches the child process, calling child.main()
    proc.start()
    for i in range(10):
        result = queue.get()   # Get results from child.main
        print(result)
child.py:
import time

def main(queue=None):
    for i in range(10):
        # do a computation
        result = i
        if queue:
            # Put a result in the queue for the parent to get
            queue.put(result)
        time.sleep(.5)

if __name__ == '__main__':
    # We reach here only when child.py is run as a script
    # (as opposed to child being imported as a module).
    main()
Note that the result passed through the queue must be picklable.
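As an aside (not part of the original answer), a quick way to check whether a value can travel through the queue is to try pickling it directly:

import pickle

pickle.dumps({'status': 'ok', 'value': 42})   # plain data pickles fine
try:
    pickle.dumps(lambda x: x)                  # lambdas are not picklable
except Exception as exc:
    print("not picklable:", exc)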
It is probably best to use the multiprocessing module, which is designed for exactly this purpose.

python: what happens when an object is passed in multiprocessing.Process?

from multiprocessing import Process

p = Process(target=f, args=(myObject,))
p.start()
p.join()
From experimentation, inside the function f() I can access myObject fine and its members appear to be intact, even though we're presumably in a different process. Printing id(myObject) in the current function and in f() returns the same number.
Is Python secretly performing IPC when myObject is accessed inside f()?
As Winston wrote: on Unix the process is forked, and the forked process is basically a full copy of the parent process (which is why the id is identical).
What actually happens depends on whether you are running Unix or Windows.
On *nix, fork() is used, which creates a complete copy of your process.
On Windows, I believe the object is pickled (see the pickle module) and sent to the child over an IPC channel.
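Here is a minimal sketch (my own, assuming Python 3.4+ on a Unix system with the 'fork' start method) that makes the copy semantics visible: id() can match because the child's address space is a duplicate of the parent's, but a mutation made in the child is not seen by the parent.

import multiprocessing as mp

def f(obj):
    print("child id:", id(obj))
    obj.append("changed in child")   # mutates only the child's copy

if __name__ == '__main__':
    mp.set_start_method('fork')      # explicit for the demonstration (Unix only)
    my_list = ["original"]
    print("parent id:", id(my_list))
    p = mp.Process(target=f, args=(my_list,))
    p.start()
    p.join()
    print("parent still sees:", my_list)   # ['original']; the child's change is invisible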
