In a standalone Python script, I have a functioning GLib mainloop.
# myloop.py
import gobject as GLib

def main():
    mainloop = GLib.MainLoop()
    # do some DBus-related stuff
    mainloop.run()

if __name__ == '__main__':
    main()
I now need to run this from another Python project,
where multiple processes are used to implement different features.
from multiprocessing import Process
from myloop import main as myloop_main

def do_something():
    # do something else
    pass

if __name__ == '__main__':
    something_proc = Process(target=do_something)
    myloop_proc = Process(target=myloop_main)
    something_proc.start()
    myloop_proc.start()
    something_proc.join()
    myloop_proc.join()
This runs without error, but the main loop is not really running, and the feature implemented in the main loop does not work.
How do I run GLib's main loop in a process?
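For what it's worth, here is a minimal sketch of a GLib main loop running inside a multiprocessing.Process. It assumes PyGObject is installed; run_loop and the 2-second timeout are illustrative only, and the loop quits itself so the sketch terminates on its own:

# sketch: GLib main loop inside a child process (assumes PyGObject)
from multiprocessing import Process
from gi.repository import GLib

def run_loop():
    # Create the MainLoop inside the child process, so it belongs to
    # the process that actually runs it.
    loop = GLib.MainLoop()
    # Quit after 2 seconds just so this sketch exits by itself.
    GLib.timeout_add_seconds(2, loop.quit)
    loop.run()
    print('main loop exited cleanly')

if __name__ == '__main__':
    p = Process(target=run_loop)
    p.start()
    p.join()

As with the examples below, the process-starting code sits under the if __name__ == '__main__': guard so it is not re-executed when child processes import the module.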
For some reason, I cannot do parallel processing in Python. For example, running the code below gives me a runtime error:
import multiprocessing as mp
import time

def sleep_for_a_bit(seconds):
    print(f'Sleeping {seconds} second(s)')
    time.sleep(seconds)
    print("Done Sleeping")

p1 = mp.Process(target=sleep_for_a_bit, args=[1])
p2 = mp.Process(target=sleep_for_a_bit, args=[1])

if __name__ == '__main__':
    mp.freeze_support()
    p1.start()
    p2.start()
    finish = time.perf_counter()
    print("finish running after seconds : ", finish)
This is the error message:
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I have a Windows desktop and it actually ran for me (maybe I was lucky). But in general, on platforms such as Windows that use spawn to create new processes, you should move any code that you do not want your newly created processes to execute out of global scope. Such processes are created by launching a new Python interpreter and re-executing the program from the top, so any code not contained within an if __name__ == '__main__': block will run again in every child. So my best suggestion is to try the following (I have made a few corrections to the code):
import multiprocessing as mp
import time

def sleep_for_a_bit(seconds):
    print(f'Sleeping {seconds} second(s)')
    time.sleep(seconds)
    print("Done Sleeping")

if __name__ == '__main__':
    mp.freeze_support()  # not required unless you are creating an .exe file
    p1 = mp.Process(target=sleep_for_a_bit, args=[1])
    p2 = mp.Process(target=sleep_for_a_bit, args=[1])
    start = time.perf_counter()
    p1.start()
    p2.start()
    p1.join()  # wait for process to finish
    p2.join()  # wait for process to finish
    finish = time.perf_counter()
    # perf_counter() is only meaningful when you take the difference between readings:
    print("finish running after seconds : ", finish - start)
Prints:
Sleeping 1 second(s)
Sleeping 1 second(s)
Done Sleeping
Done Sleeping
finish running after seconds : 1.0933153999999998
I am trying to understand how multiprocessing works in Python. Here is a simple piece of code that does not call the function as I expected it would.
import time
import multiprocessing

def do_something():
    print('Sleep')
    time.sleep(1)
    print('Wake up')

start = time.perf_counter()

p1 = multiprocessing.Process(target=do_something)
p2 = multiprocessing.Process(target=do_something)
p1.start()
p2.start()
p1.join()
p2.join()

finish = time.perf_counter()
print(f'Finished in {round(finish-start, 2)} second(s)')
In a Jupyter Notebook, after executing this I get the following output:
Finished in 0.2 second(s)
I thought it would be something like this:
Sleep
Sleep
Wake up
Wake up
Finished in 0.2 second(s)
What am I missing?
You should check the "Programming guidelines" (https://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming) to figure out why you need the
if __name__ == '__main__':
guard in your scripts that use multiprocessing. Since you don't have that in your notebook, it won't work properly.
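A common workaround for notebooks is to move the target function into a separate, importable module. A minimal sketch (worker.py is an illustrative file name, assumed to sit next to the notebook):

# worker.py -- save next to the notebook (file name is illustrative)
import time

def do_something():
    print('Sleep')
    time.sleep(1)
    print('Wake up')

Then, in a notebook cell:

import multiprocessing
import time
from worker import do_something

start = time.perf_counter()
p1 = multiprocessing.Process(target=do_something)
p2 = multiprocessing.Process(target=do_something)
p1.start()
p2.start()
p1.join()
p2.join()
print(f'Finished in {round(time.perf_counter() - start, 2)} second(s)')

Note that on start methods that spawn a fresh interpreter, the children's print output may appear in the terminal running Jupyter rather than in the notebook cell.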
I have code like this. It is clear that 'finished' has been printed, but join still blocks. Why does this happen?
from multiprocessing import Process

class MyProcess(Process):
    def run(self):
        ## do something
        print 'finished'

processes = []
for i in range(3):
    p = MyProcess()
    p.start()
    processes.append(p)

for p in processes:
    p.join()
You should add the line if __name__ == '__main__': for things to work properly.
Explanation:
Your main script is re-imported by each spawned child process, so the top-level lines of your script execute more than once: once in your original run and once when each child imports the module. Without the guard, the process-starting code itself runs again inside every child.
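You can watch the re-import happen with a small sketch (the script name is illustrative; forcing the spawn start method makes the behaviour visible on any OS):

# reimport_demo.py -- shows that top-level code runs again in each child
import multiprocessing as mp
import os

print('top-level code, pid %d, __name__ = %r' % (os.getpid(), __name__))

def work():
    pass

if __name__ == '__main__':
    mp.set_start_method('spawn', force=True)
    p = mp.Process(target=work)
    p.start()
    p.join()

With spawn, the top-level print fires twice: once in the parent, where __name__ is '__main__', and once in the child, where the main module is re-imported as '__mp_main__'.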
Here is the runtime error you get if you don't include if __name__ == '__main__':
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Working code in Python 3.6:
from multiprocessing import Process

class MyProcess(Process):
    def run(self):
        ## do something
        print('finished')

processes = []

if __name__ == '__main__':
    for i in range(3):
        p = MyProcess()
        p.start()
        processes.append(p)

    for p in processes:
        p.join()

    print('we are done here .......')
output:
finished
finished
finished
we are done here .......
join would not block once the task is finished. Also note that if your original code said for i in 3: rather than for i in range(3):, it was invalid: an integer is not iterable.
I am trying to learn how to use multiprocessing, but I can't get it to work. Here is the code, right out of the documentation:
from multiprocessing import Process

def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
It should output:
>>> 'hello bob'
but instead i get
>>>
There are no errors or other messages; it just sits there. I am running it in IDLE from a saved .py file on a Windows 7 machine with the 32-bit version of Python 2.7.
My guess is that you are using IDLE to try to run this script. Unfortunately, this example will not run correctly in IDLE. Note the comment at the beginning of the docs:
Note Functionality within this package requires that the main
module be importable by the children. This is covered in Programming
guidelines however it is worth pointing out here. This means that some
examples, such as the multiprocessing.Pool examples will not work in
the interactive interpreter.
The __main__ module is not importable by children in IDLE, even if you run the script as a file with IDLE (which is commonly done with F5).
The problem is not IDLE. The problem is trying to print to sys.stdout in a process that has no sys.stdout. That is why Spyder has the same problem. Any GUI program on Windows is likely to have the same problem.
On Windows, at least, GUI programs are usually run in a process without stdin, stdout, or stderr streams. Windows expects GUI programs to interact with users through widgets that paint pixels on the screen (the G in Graphical) and receive key and mouse events from the Windows event system. That is what the IDLE GUI does, using the tkinter wrapper of the tcl tk GUI framework.
When IDLE runs user code in a subprocess, idlelib.run runs first, and it replaces the None standard streams with objects that interact with IDLE itself through a socket. Then it exec()s the user code. When the user code runs multiprocessing, multiprocessing starts further processes that have no std streams, and they never get them.
The solution is to start IDLE in a console: python -m idlelib.idle (the .idle is not needed on 3.x). Processes started in a console get std streams connected to the console, and so do further subprocesses. The real stdout (as opposed to sys.stdout) of all the processes is the console. If one runs the third example in the doc,
from multiprocessing import Process
import os

def info(title):
    print(title)
    print('module name:', __name__)
    print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    print('hello', name)

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
then the 'main line' block goes to the IDLE shell and the 'function f' block goes to the console.
This result shows that Justin Barber's claim that the user file run by IDLE cannot be imported into processes started by multiprocessing is not correct.
EDIT: Python saves the original stdout of a process in sys.__stdout__. Here is the result in IDLE's shell when IDLE is started normally on Windows, as a pure GUI process.
>>> sys.__stdout__
>>>
Here is the result when IDLE is started from CommandPrompt.
>>> import sys
>>> sys.__stdout__
<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
>>> sys.__stdout__.fileno()
1
The standard file numbers for stdin, stdout, and stderr are 0, 1, 2. Run a file with
from multiprocessing import Process
import sys

def f(name):
    print('hello', name)
    print(sys.__stdout__)
    print(sys.__stdout__.fileno())

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
in IDLE started from the console, and the output is the same. It works.
I've marked the changes needed to make your sample run using comments:
from multiprocessing import Process

def f(name):
    print 'hello', name  # indent this line

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()`  # remove the ` (grave accent)
result:
from multiprocessing import Process

def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
Output from my laptop after saving it as ex1.py:
reuts@reuts-K53SD:~/python_examples$ cat ex1.py
#!/usr/bin/env python
from multiprocessing import Process

def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
reuts@reuts-K53SD:~/python_examples$ python ex1.py
hello bob
I had the issue that multiprocessing did not work in Spyder, and I always landed here. I solved it by using threading instead of multiprocessing, as described here: https://pymotw.com/2/threading/
import threading

def worker(num):
    """thread worker function"""
    print('Worker: %s' % num)
    return

threads = []
for i in range(5):
    t = threading.Thread(target=worker, args=(i,))
    threads.append(t)
    t.start()
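Worth noting: threads sidestep the __main__-guard problem entirely because no new interpreter is launched, but because of the GIL this mainly helps with I/O-bound work (like the sleeps in these examples), not CPU-bound work.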
Most likely your main process exits before sys.stdout is flushed. Try this:
from multiprocessing import Process
import sys

def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
    # make sure all output has been processed before we exit
    sys.stdout.flush()
If this doesn't work, try adding time.sleep(1) as the last statement.
Try using this code (from the standard manual). It works for me on Windows. The other examples did not work for me either :)
import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.Queue()
    p = mp.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()
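One caveat from the multiprocessing docs: set_start_method() should be used at most once in a program. If you only want a particular start method for part of a program, multiprocessing.get_context() returns a context object with the same API (a minimal sketch):

import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    # get_context() pins one start method for these objects without
    # changing the global default, unlike set_start_method().
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()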