I am new to the multiprocessing module in Python and work with Jupyter notebooks. I have tried the following code snippet from PMOTW:
import multiprocessing

def worker():
    """worker function"""
    print('Worker')
    return

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
When I run this as is, there is no output.
I have also tried creating a module called worker.py and then importing that to run the code:
import multiprocessing
from worker import worker

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
There is still no output in that case. In the console, I see the following error (repeated multiple times):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
AttributeError: Can't get attribute 'worker' on <module '__main__' (built-in)>
However, I get the expected output when the code is saved as a Python script and executed.
What can I do to run this code directly from the notebook without creating a separate script?
I'm relatively new to parallel computing, so I may be wrong about some technicalities. My understanding is this:
Jupyter notebooks don't work with multiprocessing because the module pickles (serialises) data to send to processes.
multiprocess is a fork of multiprocessing that uses dill instead of pickle to serialise data, which allows it to work from within Jupyter notebooks. The API is identical, so the only thing you need to do is change
import multiprocessing
to...
import multiprocess
You can install multiprocess very easily with a simple
pip install multiprocess
You will, however, find that your processes will still not print to the output (although in JupyterLab they will print to the terminal the notebook server is running in). I stumbled upon this post trying to work around this and will edit this answer when I find out how.
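For instance, here is a minimal sketch of the original example rewritten with multiprocess (assuming the package has been installed with pip as above); as noted, the children's output may appear in the terminal running the notebook server rather than in the cell:

import multiprocess

def worker():
    """worker function"""
    print('Worker')

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        # Identical API to multiprocessing.Process; only the import changed.
        p = multiprocess.Process(target=worker)
        jobs.append(p)
        p.start()
    for p in jobs:
        p.join()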
I'm not an expert in either multiprocessing or ipykernel (which is used by Jupyter notebooks), but since nobody seems to have given an answer, I will tell you what I guessed. I hope somebody complements this later on.
I guess your Jupyter notebook server is running on a Windows host. In multiprocessing there are three different start methods. Let's focus on spawn, which is the default on Windows, and fork, the default on Unix.
Here is a quick overview.
spawn
(CPython) interactive shell - always raises an error
run as a script - okay only if you nest the multiprocessing code in if __name__ == '__main__'
fork
always okay
For example,
import multiprocessing

def worker():
    """worker function"""
    print('Worker')
    return

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
This code works when it's saved and run as a script, but raises an error when entered in a Python interactive shell. Looking at the implementation of the IPython kernel, my guess is that it uses some kind of interactive shell, and so doesn't go well with spawn (but please don't trust me).
As a side note, I will give you a general idea of how spawn and fork are different. In multiprocessing, each subprocess runs a separate Python interpreter. In particular, with spawn, a child process starts a new interpreter and imports the necessary modules from scratch. It's hard to import code from an interactive shell, so it may raise an error.
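To illustrate that re-import behaviour, here is a minimal sketch (the file name spawn_demo.py is just an illustration; save it and run it as a script, not in a shell):

# spawn_demo.py
import multiprocessing as mp
import os

# With the spawn start method, this module-level line runs in the parent
# and again in every spawned child when it re-imports the main module.
print('module-level code runs in pid %d' % os.getpid())

def worker():
    print('worker ran in pid %d' % os.getpid())

if __name__ == '__main__':
    mp.set_start_method('spawn')
    p = mp.Process(target=worker)
    p.start()
    p.join()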
fork is different. With fork, a child process copies the main process, including most of the running state of the Python interpreter, and then continues execution. This code will help you understand the concept.
import os

main_pid = os.getpid()
os.fork()
print("Hello world(%d)" % os.getpid())  # printed twice: Hello world(id1) and Hello world(id2)
if os.getpid() == main_pid:
    print("Hello world(main process)")  # printed once: Hello world(main process)
Much like you, I encountered the attribute error. The problem seems to be related to how Jupyter handles multiprocessing. The fastest result I got was to follow the Multi-processing example.
So the ThreadPool took care of my issue.
from multiprocessing.pool import ThreadPool as Pool

def worker(i):
    """worker function"""
    print('Worker\n')
    return

pool = Pool(4)
for result in pool.map(worker, range(5)):
    pass  # or print diagnostics
This works for me on macOS (I cannot make it work on Windows):
import multiprocessing as mp

mp_start_count = 0

if __name__ == '__main__':
    if mp_start_count == 0:
        mp.set_start_method('fork')
        mp_start_count += 1
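With the start method forced to 'fork' as above, a function defined directly in a notebook cell can then be used as a Process target; a minimal sketch (macOS/Linux only, and it assumes the cell above has already run):

import multiprocessing as mp

def worker():
    print('Worker')

if __name__ == '__main__':
    # Relies on mp.set_start_method('fork') having been called already.
    p = mp.Process(target=worker)
    p.start()
    p.join()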
Save the function to a separate Python file, then import the function back in. It should work fine that way.
Related
I am new to multiprocessing and exploring how to use it. Following the Python documentation example, I tried it with one of my functions, but my Jupyter notebook gave me an error. What could be the reason?
import concurrent.futures

def dummy(x):
    return x**(1/200)

def main():
    with concurrent.futures.ProcessPoolExecutor() as executer:
        x = [1, 2, 3, 4, 5, 6]
        future = executer.map(dummy, x)
        for result in future:
            print(result)

if __name__ == '__main__':
    main()
and the error is shown below. How can I fix this? I have the latest version of Python.
BrokenProcessPool: A process in the process pool was terminated
abruptly while the future was running or pending.
It looks like the problem was that I was using a Jupyter notebook. I just read this line from the documentation: "The main module must be importable by worker subprocesses. This means that ProcessPoolExecutor will not work in the interactive interpreter".
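A minimal sketch of the usual workaround suggested by that quote: move the function into a separate importable file (the name dummy_module.py here is just an illustration) and import it into the notebook:

# dummy_module.py (saved next to the notebook):
#     def dummy(x):
#         return x**(1/200)

import concurrent.futures
from dummy_module import dummy

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor() as executor:
        # The workers can import dummy_module, so the function unpickles cleanly.
        for result in executor.map(dummy, [1, 2, 3, 4, 5, 6]):
            print(result)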
I'm using multiprocessing in a larger code base where some of the import statements have side effects. How can I run a function in a background process without having it inherit global imports?
# helper.py:
print('This message should only print once!')

# main.py:
import multiprocessing as mp
import helper  # This prints the message.

def worker():
    pass  # Unfortunately this also prints the message again.

if __name__ == '__main__':
    mp.set_start_method('spawn')
    process = mp.Process(target=worker)
    process.start()
    process.join()
Background: Importing TensorFlow initializes CUDA, which reserves some amount of GPU memory. As a result, spawning too many processes leads to a CUDA OOM error, even though the processes don't use TensorFlow.
Similar question without an answer:
How to avoid double imports with the Python multiprocessing module?
Is there a resource that explains exactly what the multiprocessing module does when starting an mp.Process?
Super quick version (using the spawn context, not fork):
Some stuff (a pair of pipes for communication, cleanup callbacks, etc.) is prepared, then a new process is created with fork()/exec(). On Windows it's CreateProcessW(). The new Python interpreter is called with a startup script spawn_main() and is passed the communication pipe file descriptors via a crafted command string and the -c switch. The startup script cleans up the environment a little bit, then unpickles the Process object from its communication pipe. Finally, it calls the run method of the process object.
So what about importing of modules?
Pickle semantics handle some of it, but __main__ and sys.modules need some tlc, which is handled here (during the "cleans up the environment" bit).
# helper.py:
print('This message should only print once!')

# main.py:
import multiprocessing as mp

def worker():
    pass

def main():
    # Importing the module only locally so that the background
    # worker won't import it again.
    import helper
    mp.set_start_method('spawn')
    process = mp.Process(target=worker)
    process.start()
    process.join()

if __name__ == '__main__':
    main()
A basic example of the multiprocessing Process class runs when executed from a file, but not from IDLE. Why is that, and can it be done?
from multiprocessing import Process

def f(name):
    print('hello', name)

p = Process(target=f, args=('bob',))
p.start()
p.join()
Yes. The following works, in that function f is run in a separate (third) process.
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
However, to see the print output, at least on Windows, one must start IDLE from a console like so.
C:\Users\Terry>python -m idlelib
hello bob
(Use idlelib.idle on 2.x.) The reason is that IDLE runs user code in a separate process. Currently the connection between the IDLE process and the user code process is via a socket. The fork done by multiprocessing does not duplicate or inherit the socket connection. When IDLE is started via an icon or Explorer (in Windows), there is nowhere for the print output to go. When started from a console with python (rather than pythonw), output goes to the console, as above.
I was trying to run a piece of code. This code is all about multiprocessing. It works fine from the command prompt and generates some output. But when I try to run the code in PyScripter, it just says that the script ran OK; it doesn't generate any output, nor does it display any error message. It doesn't even crash. It would be really helpful if anyone could help me find an interpreter where this multiprocessing code works fine.
Here is the piece of code:
from multiprocessing import Process

def wait():
    print "wait"
    clean()

def clean():
    print "clean"

def main():
    p = Process(target=wait)
    p.start()
    p.join()

if _name_ == '_main_':
    main()
The normal interpreter works just fine with multiprocessing on Windows 7 for me. (Your IDE might not like multiprocessing.)
You just have to do
if __name__ == '__main__':
    main()
with 2 underscores (__) each instead of 1 (_).
Also - if you don't have an actual reason not to use it, multiprocessing.Pool is much easier to use than multiprocessing.Process in most cases. Have a look at https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.pool
An implementation with a Pool would be
import multiprocessing

def wait():
    print "wait"
    clean()

def clean():
    print "clean"

def main():
    p = multiprocessing.Pool()
    p.apply_async(wait)
    p.close()
    p.join()

if __name__ == '__main__':
    main()
but which method of Pool to use strongly depends on what you actually want to do.
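For instance, if you want one result back per input item, Pool.map is usually the simplest choice; a minimal sketch (Python 3 syntax, unlike the Python 2 examples above, and the square function is just an illustration):

import multiprocessing

def square(x):
    return x * x

if __name__ == '__main__':
    with multiprocessing.Pool() as pool:
        # Distributes the inputs across the worker processes and
        # collects the results in order.
        print(pool.map(square, range(10)))  # [0, 1, 4, ..., 81]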
I've noticed strange behaviour when running some python code that made use of the multiprocessing library. This is all under Windows and likely a Windows thing, but maybe someone could explain what's happening.
If I create a simple python script and create a pool like so:
import multiprocessing

pool = multiprocessing.Pool()
print "made a pool"

while True:
    pass
when I run the script I see "made a pool" printed 8 times, which would be the default number of processes created by Pool() as I have 8 cores on my machine.
When I change the script to be like so:
import multiprocessing

def run():
    pool = multiprocessing.Pool()
    print "made a pool"

    while True:
        pass

if __name__ == '__main__':
    run()
I see "made a pool" printed once - which is what I would have expected in both cases.
I guess I would normally run any code using the multiprocessing library from a function, but got caught out by this while playing with some code in a single python file. Anyone know why it happens?