Related to Python Multiprocessing error: AttributeError: module '__main__' has no attribute '__spec__', but arising from different circumstances.
I'm encountering an issue in Python 3.7.4 when I try to run multiprocessing code with pdb. The issue replicates with the basic multiprocessing example from https://docs.python.org/3.6/library/multiprocessing.html:
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
This runs fine (outputs [1, 4, 9]) when run directly from Python via python.exe testcase.py. However, it does not work under pdb; python.exe -m pdb testcase.py fails with an error:
Traceback (most recent call last):
File "c:\python37\lib\pdb.py", line 1697, in main
pdb._runscript(mainpyfile)
File "c:\python37\lib\pdb.py", line 1566, in _runscript
self.run(statement)
File "c:\python37\lib\bdb.py", line 585, in run
exec(cmd, globals, locals)
File "<string>", line 1, in <module>
File "c:\users\max\desktop\projects\errortest.py", line 1, in <module>
from multiprocessing import Pool
File "c:\python37\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())
File "c:\python37\lib\multiprocessing\pool.py", line 176, in __init__
self._repopulate_pool()
File "c:\python37\lib\multiprocessing\pool.py", line 241, in _repopulate_pool
w.start()
File "c:\python37\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "c:\python37\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "c:\python37\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "c:\python37\lib\multiprocessing\spawn.py", line 172, in get_preparation_data
main_mod_name = getattr(main_module.__spec__, "name", None)
AttributeError: module '__main__' has no attribute '__spec__'
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> c:\python37\lib\multiprocessing\spawn.py(172)get_preparation_data()
-> main_mod_name = getattr(main_module.__spec__, "name", None)
I hesitate to think that I've found a bug in a pair of modules that have been important parts of Python for over a decade. Is something incorrect here?
This is a limitation of multiprocessing on Windows. This question contains a good explanation of why this is so. A quick Google search suggests that the pudb debugger may be able to help with debugging multiprocessing code, but I have not used it before.
The following is from the Python docs:
Functionality within this package requires that the main module be importable by the children. This is covered in Programming guidelines however it is worth pointing out here.
This means that some examples, such as the multiprocessing.pool.Pool examples will not work in the interactive interpreter.
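If you just need to get past this under the debugger, a workaround that is sometimes suggested for this exact AttributeError (for example for IPython on Windows) is to give __main__ a __spec__ attribute yourself before multiprocessing reads it; get_preparation_data() only calls getattr(main_module.__spec__, "name", None), so None is sufficient. A minimal sketch, which I have not verified under pdb specifically:
import sys

# pdb's exec environment leaves __main__ without a __spec__ attribute;
# setting it to None lets getattr(None, "name", None) return None instead
# of raising AttributeError in spawn.get_preparation_data().
if not hasattr(sys.modules['__main__'], '__spec__'):
    sys.modules['__main__'].__spec__ = None

from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))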
Related
I'm using AlphaPose from GitHub and I'd like to run the script scripts/demo_inference.py from another script I created in the AlphaPose root called run.py. In run.py I imported demo_inference.py as ap using this function:
import os
import importlib.util

def import_module_by_path(path):
    name = os.path.splitext(os.path.basename(path))[0]
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod
and
ap = import_module_by_path('./scripts/demo_inference.py')
Then, in demo_inference.py I substituted
if __name__ == "__main__":
with
def startAlphapose():
and in run.py I wrote
ap.StartAlphapose().
Now I got this error:
Load SE Resnet...
Loading YOLO model..
Process Process-3:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/vislab/guerri/alphagastnet/insieme/alphapose/utils/detector.py", line 251, in image_postprocess
(orig_img, im_name, boxes, scores, ids, inps, cropped_boxes) = self.wait_and_get(self.det_queue)
File "/home/vislab/guerri/alphagastnet/insieme/alphapose/utils/detector.py", line 121, in wait_and_get
return queue.get()
File "/usr/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/home/vislab/guerri/alphagastnet/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 284, in rebuild_storage_fd
fd = df.detach()
File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/usr/lib/python3.6/multiprocessing/connection.py", line 487, in Client
c = SocketClient(address)
File "/usr/lib/python3.6/multiprocessing/connection.py", line 614, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
What does it mean?
We were running into this same problem in our cluster.
When using multiprocessing in PyTorch (typically to run multiple DataLoader workers), the subprocesses create sockets in the /tmp directory to communicate with each other. These sockets are all saved in folders named pymp-###### and look like 0-byte files. Deleting these files or folders while your PyTorch scripts are still running will cause the above error.
In our case, the problem was a buggy maintenance script that was erasing files out of the /tmp folder while they were still needed. It's possible there are other ways to trigger this error. But you should start by looking for those sockets and making sure they aren't getting erased by accident.
If that doesn't solve it, take a look at your /var/log/syslog file at the exact time when the error occurred. You'll very likely find the cause of it there.
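As a quick diagnostic, you can watch those folders while a job is running; a small sketch, assuming the default /tmp location:
import glob
import os

# List multiprocessing's temporary socket folders (pymp-*) under /tmp.
# If entries vanish while your PyTorch job is still alive, something is
# cleaning /tmp out from under it.
for d in glob.glob('/tmp/pymp-*'):
    print(d, os.listdir(d))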
I got an error for this code:
from pathos.multiprocessing import ProcessingPool

def diePlz(im):
    print('Whoopdepoop!')

def caller():
    im = 1
    pool = ProcessingPool()
    pool.map(diePlz,[im,im,im,im])

if __name__=='__main__':
    caller()
when I ran it with cProfile (python3 -m cProfile testProfiler.py):
multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/rohit/.local/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/rohit/.local/lib/python3.6/site-packages/multiprocess/pool.py", line 44, in mapstar
return list(map(*args))
File "/home/rohit/.local/lib/python3.6/site-packages/pathos/helpers/mp_helper.py", line 15, in <lambda>
func = lambda args: f(*args)
File "testProfiler.py", line 3, in diePlz
print('Whoopdepoop!')
NameError: name 'print' is not defined
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/lib/python3.6/cProfile.py", line 160, in <module>
main()
File "/usr/lib/python3.6/cProfile.py", line 153, in main
runctx(code, globs, None, options.outfile, options.sort)
File "/usr/lib/python3.6/cProfile.py", line 20, in runctx
filename, sort)
File "/usr/lib/python3.6/profile.py", line 64, in runctx
prof.runctx(statement, globals, locals)
File "/usr/lib/python3.6/cProfile.py", line 100, in runctx
exec(cmd, globals, locals)
File "testProfiler.py", line 11, in <module>
caller()
File "testProfiler.py", line 8, in caller
pool.map(diePlz,[im,im,im,im])
File "/home/rohit/.local/lib/python3.6/site-packages/pathos/multiprocessing.py", line 137, in map
return _pool.map(star(f), zip(*args)) # chunksize
File "/home/rohit/.local/lib/python3.6/site-packages/multiprocess/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/rohit/.local/lib/python3.6/site-packages/multiprocess/pool.py", line 644, in get
raise self._value
NameError: name 'print' is not defined
But when I ran it without cProfile:
$ python3 testProfiler.py
Whoopdepoop!
Whoopdepoop!
Whoopdepoop!
Whoopdepoop!
The code that I've provided is a minimal working example of the problem. There is a much larger codebase that I want to debug, but I am not able to do so because cProfile keeps raising weird errors.
In this case, the important line is
NameError: name 'print' is not defined
which means python3 is not able to resolve the builtin print itself. In my code, it was not able to recognize range.
So, I realize this is a long time after the original post, but I have this exact same issue.
In my case I was getting the exact same error as the original post - python builtin functions such as print() or len() resulted in errors like this:
NameError: name 'len' is not defined
I'm currently running multiprocess version 0.70.11.1 and dill version 0.3.3 (components of pathos that make process-based parallelism work).
Based on an issue comment (https://github.com/uqfoundation/pathos/issues/129#issuecomment-536081859), one of the package authors recommends trying:
import dill
dill.settings['recurse'] = True
At least in my case, the above fixed the error!
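For reference, here is the earlier minimal example with the setting applied; a sketch of the fix as I used it, assuming the same pathos/dill versions:
import dill
dill.settings['recurse'] = True  # let dill trace and serialize the globals a function needs

from pathos.multiprocessing import ProcessingPool

def diePlz(im):
    print('Whoopdepoop!')

def caller():
    im = 1
    pool = ProcessingPool()
    pool.map(diePlz, [im, im, im, im])

if __name__ == '__main__':
    caller()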
I have this simple program:
from PIL import Image
import pyscreenshot as ImageGrab
print "hi"
im=ImageGrab.grab()
im.show()
This works perfectly fine on Ubuntu, but it gives the following error on Windows:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python27\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\Python27\lib\multiprocessing\forking.py", line 509, in prepare
'__parents_main__', file, path_name, etc
File "C:\Users\Administrator\Downloads\sample.py", line 5, in <module>
im=ImageGrab.grab()
File "C:\Python27\lib\site-packages\pyscreenshot\__init__.py", line 46, in gra
b
return _grab(to_file=False, childprocess=childprocess, backend=backend, bbox
=bbox)
File "C:\Python27\lib\site-packages\pyscreenshot\__init__.py", line 29, in _gr
ab
return run_in_childprocess(_grab_simple, imcodec.codec, to_file, backend, bb
ox, filename)
File "C:\Python27\lib\site-packages\pyscreenshot\procutil.py", line 28, in run
_in_childprocess
p.start()
File "C:\Python27\lib\multiprocessing\process.py", line 130, in start
self._popen = Popen(self)
File "C:\Python27\lib\multiprocessing\forking.py", line 258, in __init__
cmd = get_command_line() + [rhandle]
File "C:\Python27\lib\multiprocessing\forking.py", line 358, in get_command_li
ne
is not going to be frozen to produce a Windows executable.''')
RuntimeError:
Attempt to start a new process before the current process
has finished its bootstrapping phase.
This probably means that you are on Windows and you have
forgotten to use the proper idiom in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.
There is no multiprocessing in my code. I saw some other answers, but they did not help.
Can someone please suggest a possible problem here?
There's a known problem with the multiprocessing module on Windows (to elaborate on roganosh's remark): use of the multiprocessing module must happen inside a function or inside the __main__ section (after all the imports are initialized), not at the top level of the script, because of the way Windows spawns the Python executable (hence the "bootstrapping phase" error). There is no such issue on Linux. This looks very much like the same issue as RuntimeError on windows trying python multiprocessing.
Try changing your code into this:
from PIL import Image
import pyscreenshot as ImageGrab

if __name__ == "__main__":
    im=ImageGrab.grab()
    im.show()
The traceback indicates that there is multiprocessing being used in the background, not explicitly in your own code. Specifically, it is being called by pyscreenshot\procutil.py. The relevant lines of the traceback:
File "C:\Python27\lib\site-packages\pyscreenshot\procutil.py", line 28, in run
_in_childprocess
p.start()
File "C:\Python27\lib\multiprocessing\process.py", line 130, in start
self._popen = Popen(self)
Since the issue is in the library, there would be nothing you could do except modify the library yourself. However, this page says that pyscreenshot is a "Replacement for the ImageGrab Module, which works on Windows only". So instead, you can use PIL's own ImageGrab module, which does exactly the same thing but is only compatible with Windows and macOS (see here).
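Something along these lines should then work in-process (a sketch, assuming a PIL/Pillow build that ships the ImageGrab module, i.e. Windows or macOS):
# PIL's own ImageGrab takes the screenshot in the current process,
# so the Windows multiprocessing bootstrapping problem never arises.
from PIL import ImageGrab

im = ImageGrab.grab()
im.show()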
I'm trying to launch a function in a new thread because it does something that is not related to the main program.
I tried to do this with multiprocessing module as:
import multiprocessing
import time

def mp_worker(a):
    #time.sleep(a)
    print('a:' + str(a))
    return

for k in range(5):
    p = multiprocessing.Process(target=mp_worker, args=(k,))
    p.start()
    print('keep going :' + str(k))
But I get a bunch of errors:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
prepare(preparation_data)
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 226, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 278, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Anaconda3\lib\runpy.py", line 254, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Maxime\PycharmProjects\NeuralMassModelSofware_Pyqt5\dj.py", line 14, in <module>
p.start()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 212, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "C:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 34, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 144, in get_preparation_data
_check_not_importing_main()
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Does anyone know how I can launch whatever function I want in a new thread and be sure that the main program keeps running normally? I'm a little lost with multiprocessing, I admit :)
I aim to launch some displays (graphical or printed, I don't know yet) for the user in new threads without interrupting the main program.
What you are doing isn't multithreading, it's multiprocessing.
Change
multiprocessing.Process(...
to
threading.Thread(...
Python » 3.6.1 Documentation: threading.Thread
SO Q&A: multiprocessing-vs-threading-python
Besides this, you can overcome the error by adding a time.sleep(0.2) after start(...
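A minimal sketch of the threading version of your loop (the join() calls at the end are my addition, so the main program can wait for the workers if it wants to):
import threading

def mp_worker(a):
    print('a:' + str(a))

threads = []
for k in range(5):
    t = threading.Thread(target=mp_worker, args=(k,))
    t.start()
    threads.append(t)
    print('keep going :' + str(k))

for t in threads:
    t.join()  # optional: block until every worker thread has finished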
This is what I am getting when running the code you have provided:
python3.6 -u "multiprocessingError_Cg.py"
keep going :0
a:0
keep going :1
keep going :2
a:1
a:2
keep going :3
keep going :4
a:3
a:4
>Exit code: 0
So the answer to your question is that you have to look for the cause of the trouble with multiprocessing elsewhere, not in the section of code listed in your question.
Take a close look at what was provided in the answers and comments here - maybe it could also help in your case?
Here is what was said in the comments to one of the answers there:
Dave: Would this work if beBusyFor (in your case mp_worker) uses multiprocessing internally?
#Dave just try it out and report back here if it worked. I haven't tested such a case yet, so my opinion that it should work (leaving some mess of processes with no parents) doesn't matter here and therefore shouldn't be taken into consideration.
Dave: No cigar ending child processes unfortunately :(
There is the following simple code:
from multiprocessing import Process, freeze_support

def foo():
    print 'hello'

if __name__ == '__main__':
    freeze_support()
    p = Process(target=foo)
    p.start()
It works fine on Linux, or on Windows with Python 3.3, but fails on Windows with Python 2.7.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\Python27\lib\multiprocessing\forking.py", line 346, in main
prepare(preparation_data)
File "c:\Python27\lib\multiprocessing\forking.py", line 454, in prepare
assert main_name not in sys.modules, main_name
AssertionError: thread
Generally speaking, all the multiprocessing examples I tried fail on that setup. Why?
This is a known bug:
http://bugs.python.org/issue10845
Not sure if the fix will ever get backported to 2.7.x.
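One thing worth checking, though: the assertion message is the name of the offending module (thread here), which suggests the failing script was saved under a name that shadows a standard-library module multiprocessing has already imported; renaming the script usually makes the assert go away. A hypothetical pre-flight check (the helper below is my own, not part of multiprocessing):
import os
import sys

# Python 2's forking.prepare() asserts that the main script's module name
# is not already in sys.modules. A script saved as thread.py makes that
# name 'thread', which collides with the stdlib module and trips
# "AssertionError: thread".
def main_name_collides():
    main_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
    return main_name in sys.modules

if __name__ == '__main__':
    if main_name_collides():
        print('rename this script: its name shadows an already-imported module')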