Python multiprocessing on Python 2.6, Win32 (XP)

I tried to copy an example from this multiprocessing lecture by Jesse Noller (recommended in another SO post): http://pycon.blip.tv/file/1947354?filename=Pycon-IntroductionToMultiprocessingInPython630.mp4
But for some reason I'm getting an error, as though it's ignoring my function definitions.
I'm on Windows XP (win32), which I know has restrictions with regard to the multiprocessing library in 2.6 that require everything to be picklable.
from multiprocessing import Process
import time

def sleeper(wait):
    print 'Sleeping for %d seconds' % (wait,)
    time.sleep(wait)
    print 'Sleeping complete'

def doIT():
    p = Process(target=sleeper, args=(9,))
    p.start()
    time.sleep(5)
    p.join()

if __name__ == '__main__':
    doIT()
Output:
Evaluating mypikklez.py
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python26\lib\multiprocessing\forking.py", line 342, in main
    self = load(from_parent)
  File "C:\Python26\lib\pickle.py", line 1370, in load
    return Unpickler(file).load()
  File "C:\Python26\lib\pickle.py", line 858, in load
    dispatch[key](self)
  File "C:\Python26\lib\pickle.py", line 1090, in load_global
    klass = self.find_class(module, name)
  File "C:\Python26\lib\pickle.py", line 1126, in find_class
    klass = getattr(mod, name)
AttributeError: 'module' object has no attribute 'sleeper'
The error causing the issue is: AttributeError: 'module' object has no attribute 'sleeper'
As simple a function as it is, I can't understand what the hold-up would be.
This is just for self-teaching purposes of basic concepts. I'm not trying to pre-optimize any real world issue.
Thanks.

It seems from the traceback that you are running the code directly in the Python interpreter (REPL).
Don't do that. Save the code in a file and run the file instead, with the command:
python myfile.py
That will solve your issue.
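For context, here is roughly why the REPL fails on Windows (a simplified sketch, not the actual multiprocessing internals): the child process starts fresh and has to re-import your main module in order to look up the target function by name.

# Roughly what the child process does when unpickling the target
# (a simplified sketch, not the real multiprocessing code):
import __main__                       # with a script, the child re-imports your .py file
func = getattr(__main__, 'sleeper')   # in a REPL there is no file to re-import, so the
                                      # child's __main__ has no 'sleeper' attribute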
As an unrelated note, this line is wrong:
print 'Sleeping for ' + wait + ' seconds'
It should be:
print 'Sleeping for %d seconds' % (wait,)
You can't concatenate str and int objects (Python is strongly typed).
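To illustrate (Python 2 syntax, matching the question), either convert explicitly or use string formatting:

wait = 9
print 'Sleeping for ' + str(wait) + ' seconds'   # explicit conversion
print 'Sleeping for %d seconds' % (wait,)        # printf-style formatting
print 'Sleeping for {0} seconds'.format(wait)    # str.format (available since 2.6)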

Related

multiprocessing pickling error: _pickle.PicklingError: Can't pickle <function myProcess at 0x02B2D420>: it's not the same object as __main__.myProcess

I'm reading and applying code from a Python book, and I can't get multiprocessing to work in the simple example below:
import multiprocessing

def myProcess():
    print("Currently Executing Child Process")
    print("This process has its own instance of the GIL")

print("Executing Main Process")
print("Creating Child Process")
myProcess = multiprocessing.Process(target=myProcess)
myProcess.start()
myProcess.join()
print("Child Process has terminated, terminating main process")
My platform is Windows 10, 64-bit, and adding if __name__ == "__main__": doesn't work in this case. What's wrong here? This code should work in Python 3.5 and above; the version I use is 3.7. Full error message below:
C:\Users\Xian\AppData\Local\Programs\Python\Python37-32\python.exe "C:/OneDrive/Utilizing sub-process.py"
Traceback (most recent call last):
  File "C:/OneDrive/Utilizing sub-process.py", line 25, in <module>
    myProcess.start()
  File "C:\Users\Xian\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\Xian\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Xian\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Xian\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Xian\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function myProcess at 0x02B2D420>: it's not the same object as __main__.myProcess
Try this:

import multiprocessing

def test():
    # note: the "fork" start method is only available on Unix;
    # on Windows the only available start method is "spawn"
    multiprocessing.set_start_method("fork")
    p = multiprocessing.Process(target=xxx)  # xxx: your target function
    p.start()

See the Python docs on multiprocessing contexts and start methods.
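On Windows, though, the root cause here is visible in the error message: the Process object is assigned to the name myProcess, shadowing the myProcess function, so __main__.myProcess no longer refers to the function being pickled. A sketch of the original snippet with the two necessary changes (rename the Process object; guard the launch code):

import multiprocessing

def myProcess():
    print("Currently Executing Child Process")
    print("This process has its own instance of the GIL")

if __name__ == '__main__':
    print("Executing Main Process")
    print("Creating Child Process")
    # use a distinct name so the myProcess function is not shadowed; the
    # child unpickles the target by looking up myProcess in __main__
    proc = multiprocessing.Process(target=myProcess)
    proc.start()
    proc.join()
    print("Child Process has terminated, terminating main process")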

can't pickle _thread.RLock objects when using a webservice

I am using Python 3.6.
I am trying to use multiprocessing from inside a class method, shown below by the name SubmitJobsUsingMultiProcessing(), which in turn calls another class method.
I keep running into this error: TypeError: can't pickle _thread.RLock objects.
I have no idea what this means. I have a suspicion that the line below that establishes a connection to a webserver API might be responsible, but I am all at sea as to why.
I am not a proper programmer (I code as part of a portfolio modeling team), so if this is an obvious question please pardon my ignorance, and many thanks in advance.
import multiprocessing as mp, functools
import pickle, sys, time
from collections import OrderedDict

# (methods of a larger class, abridged; Client is the webservice client class used elsewhere)
def SubmitJobsUsingMultiProcessing(self, PartitionsOfAnalysisDates, PickleTheJobIdsDict=True):
    if (self.ExportSetResult == "SUCCESS"):
        NumPools = mp.cpu_count()
        PoolObj = mp.Pool(NumPools)
        userId, clientId, password, expSetName = self.userId, self.clientId, self.password, self.expSetName
        PartialFunctor = functools.partial(self.SubmitJobsAsOfDate, userId=userId, clientId=clientId, password=password, expSetName=expSetName)
        Result = PoolObj.map(self.SubmitJobsAsOfDate, PartitionsOfAnalysisDates)
        BatchJobIDs = OrderedDict((key, val) for Dct in Result for key, val in Dct.items())
        f_pickle = open(self.JobIdPickleFileName, 'wb')
        pickle.dump(BatchJobIDs, f_pickle, -1)
        f_pickle.close()

def SubmitJobsAsOfDate(self, ListOfDatesForBatchJobs, userId, clientId, password, expSetName):
    client = Client(self.url, proxy=self.proxysettings)
    if (self.ExportSetResult != "SUCCESS"):
        print("The export set creation was not successful...exiting")
        sys.exit()
    BatchJobIDs = OrderedDict()
    NumJobsSubmitted = 0
    CurrentProcessID = mp.current_process()
    for AnalysisDate in ListOfDatesForBatchJobs:
        jobName = "Foo_" + str(AnalysisDate)
        print('Sending job from process : ', CurrentProcessID, ' : ', jobName)
        jobId = client.service.SubmitExportJob(userId, clientId, password, expSetName, AnalysisDate, jobName, False)
        BatchJobIDs[AnalysisDate] = jobId
        NumJobsSubmitted += 1
        # Sleep for 30 secs every 100 jobs
        if (NumJobsSubmitted % 100 == 0):
            print('100 jobs have been submitted thus far from process : ', CurrentProcessID, '---Sleeping for 30 secs to avoid the SSL time out error')
            time.sleep(30)
    self.BatchJobIDs = BatchJobIDs
    return BatchJobIDs
Below is the trace:
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\pydevd.py", line 1599, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\pydevd.py", line 1026, in run
    pydev_imports.execfile(file, globals, locals) # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/trpff85/PycharmProjects/QuantEcon/BDTAPIMultiProcUsingPathos.py", line 289, in <module>
    BDTProcessObj.SubmitJobsUsingMultiProcessing(Partitions)
  File "C:/Users/trpff85/PycharmProjects/QuantEcon/BDTAPIMultiProcUsingPathos.py", line 190, in SubmitJobsUsingMultiProcessing
    Result = PoolObj.map(self.SubmitJobsAsOfDate, PartitionsOfAnalysisDates)
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 644, in get
    raise self._value
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 424, in _handle_tasks
    put(task)
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
I am struggling with a similar problem. There was a bug in <=3.5 whereby _thread.RLock objects did not raise an error when pickled (they cannot be pickled). For the Pool object to work, a function and its arguments must be passed to it from the main process, and this relies on pickling (pickling is a means of serialising objects). In my case the RLock object is somewhere in the logging module. I suspect your code will work fine on 3.5. Good luck. See this bug resolution.
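A common workaround (a sketch with illustrative names, not the asker's real API) is to map over a module-level function that receives only plain, picklable data, and to create any unpicklable resource, such as the webservice client, inside the worker so it never crosses the process boundary:

import functools
from multiprocessing import Pool

def submit_batch(dates, userId, clientId):
    # Module-level worker: everything passed in is plain, picklable data.
    # Unpicklable resources (client connections, locks, loggers) should be
    # created here, inside the child process.
    results = {}
    for d in dates:
        results[d] = "job-%s-%s-%s" % (d, userId, clientId)  # stand-in for the real submit call
    return results

if __name__ == '__main__':
    partitions = [["2017-01-01", "2017-01-02"], ["2017-01-03"]]
    worker = functools.partial(submit_batch, userId="u1", clientId="c1")
    pool = Pool()
    try:
        print(pool.map(worker, partitions))
    finally:
        pool.close()
        pool.join()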

python multiprocessing returning error 'module' object has no attribute 'myfunc'

First off, I am very new to multiprocessing, and I can't seem to make a very simple and straightforward example work. This is the example I am working with:
import multiprocessing

def worker():
    """worker function"""
    print 'Worker'
    return

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
Every time I run the code, I get this error multiple times:
C:\Anaconda2\lib\site-packages\IPython\utils\traitlets.py:5: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package.
  warn("IPython.utils.traitlets has moved to a top-level traitlets package.")
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Anaconda2\lib\multiprocessing\forking.py", line 381, in main
    self = load(from_parent)
  File "C:\Anaconda2\lib\pickle.py", line 1384, in load
    return Unpickler(file).load()
  File "C:\Anaconda2\lib\pickle.py", line 864, in load
    dispatch[key](self)
  File "C:\Anaconda2\lib\pickle.py", line 1096, in load_global
    klass = self.find_class(module, name)
  File "C:\Anaconda2\lib\pickle.py", line 1132, in find_class
    klass = getattr(mod, name)
AttributeError: 'module' object has no attribute 'worker'
I know this question is very vague, but if anyone could point me in the right direction I would appreciate it.
I am on Windows, running Python 2.7 under Anaconda; the code is exactly as above, nothing more, nothing less. I run it directly in the console in the IDE.
EDIT: It looks like when I run the code directly in the command prompt it works just fine, but running it in the console in Anaconda doesn't. Does anybody know why?
Anaconda doesn't like multiprocessing, as explained in this answer.
From the answer:
This is because multiprocessing does not work well in the interactive interpreter. The main reason is that there is no fork() function on Windows. It is explained on their web page itself.
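If you do want to experiment from an interactive console, a common workaround (a sketch, with a hypothetical module name) is to put the worker in its own file, so that child processes can import the target by name instead of relying on __main__:

# workers.py (hypothetical helper module)
def worker():
    """worker function"""
    print('Worker')

# in the interactive session, once workers.py is on the path:
import multiprocessing
import workers

jobs = []
for i in range(5):
    p = multiprocessing.Process(target=workers.worker)
    jobs.append(p)
    p.start()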
Thank you!

starting a process with eval

**exe.py**
def createProcess(f):
    try:
        from multiprocessing import Process
        newProcess = Process(target=f)
        newProcess.start()
        newProcess.join()
    except:
        print "Error creating process"

def lala():
    print "success creating process"

print "trying to make a process"
from multiprocessing import Process
newProcess = Process(target=lala)
newProcess.start()
**main.py**
if __name__ == '__main__':
    f = open("exe.py", "r")
    b = f.read()
    f.close()
    o = compile(b, "exe.py", "exec")
    eval(o)
I get the following error:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Program Files\Opsware\agent\lcpython15\lib\multiprocessing\forking.py", line 374, in main
    self = load(from_parent)
  File "C:\Program Files\Opsware\agent\lcpython15\lib\pickle.py", line 1378, in load
    return Unpickler(file).load()
  File "C:\Program Files\Opsware\agent\lcpython15\lib\pickle.py", line 858, in load
    dispatch[key](self)
  File "C:\Program Files\Opsware\agent\lcpython15\lib\pickle.py", line 1090, in load_global
    klass = self.find_class(module, name)
  File "C:\Program Files\Opsware\agent\lcpython15\lib\pickle.py", line 1126, in find_class
    klass = getattr(mod, name)
AttributeError: 'module' object has no attribute 'lala'
Later edit: I changed exe.py to:
def lala2():
    f = open("C:\\work\\asdfas", "w")
    f.write("dsdfg\r\n")
    f.close()
    print "success creating process"

if __name__ == '__main__':
    print "trying to make a process"
    from multiprocessing import Process, freeze_support
    freeze_support()
    import pickle
    l = pickle.dumps(lala2)
    pickle.loads(l)()
    newProcess = Process(target=pickle.loads(l))
    newProcess.daemon = True
    newProcess.start()
    if newProcess.is_alive() == True:
        print "alive"
    else:
        print "not alive"
    import time
    time.sleep(12)
This should make it importable, and the pickle test shows that my function is picklable. Any suggestions on why it behaves this way?
You're on Windows. Unfortunately, on Windows it is not possible to use a dynamic code object as a target for multiprocessing, because the implementation must be able to import the main module (the technical reason being that Windows lacks a native fork() equivalent). See the multiprocessing programming guidelines for Windows for details on the applicable restrictions.
The solution is to write the dynamic code to a file and then import it, so that the target function lives in an importable module.
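For illustration, a minimal sketch of that approach (module and file names are hypothetical):

# main.py -- write the dynamic code to a real module file, then import it,
# so the child process can re-import the target by name on Windows
CODE = '''
def lala():
    print "success creating process"
'''

if __name__ == '__main__':
    with open("exe_module.py", "w") as f:    # hypothetical module name
        f.write(CODE)
    import exe_module                        # the target now lives in an importable module
    from multiprocessing import Process
    newProcess = Process(target=exe_module.lala)
    newProcess.start()
    newProcess.join()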

Cannot open pipe descriptors created by threads in Python

I have just begun to study Python's os.pipe().
I tried to wrap the pipe descriptors in a file object and read it line by line.
import os, time, threading

def child():
    while True:
        time.sleep(1)
        msg = ('Spam\n').encode()
        os.write(pipeout, msg)

def parent():
    while True:
        a = os.fdopen(pipein)
        line = a.readline()[:-1]
        print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))

pipein, pipeout = os.pipe()
threading.Thread(target=child, args=()).start()
parent()
When I run the script, I get the following results: it works for the first iteration and then shows these error messages:
Parent 621 got [Spam] at 1376785841.4
Traceback (most recent call last):
  File "/Users/miteji/pipe-thread.py", line 43, in <module>
    parent()
  File "/Users/miteji/pipe-thread.py", line 36, in parent
    line = a.readline()[:-1]
IOError: [Errno 9] Bad file descriptor
>>> Exception in thread Thread-1:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 504, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/Users/miteji/pipe-thread.py", line 30, in child
    os.write(pipeout, msg)
OSError: [Errno 32] Broken pipe
However, when I changed

a = os.fdopen(pipein)
line = a.readline()[:-1]

to

line = os.read(pipein, 32)

the script works fine.
So why can't the os.fdopen method be used here? Why does the pipe break? Thank you all!
The problem lies in the placement of os.fdopen here:
def parent():
    while True:
        a = os.fdopen(pipein)
        line = a.readline()[:-1]
        print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))
Each trip through the loop, you call os.fdopen() again, even if you already did so on an earlier trip.
The first time, there was no earlier os.fdopen(), so all is well. But the second time, this rebinds a to the new result, abandoning the earlier os.fdopen() file object.
When the earlier file object is abandoned, it becomes eligible for garbage collection. CPython notices immediately (due to reference counting) and collects it. Destroying the file object closes its underlying file descriptor (as if os.close() had been called on it). That, in turn, breaks the pipe.
To fix the immediate problem, then, make sure you only open the pipe once, outside the loop.
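A minimal fixed sketch, hoisting the single os.fdopen() call out of the loop (and passing the descriptors explicitly, which also makes the data flow clearer):

import os, time, threading

def child(pipeout):
    while True:
        time.sleep(1)
        os.write(pipeout, 'Spam\n'.encode())

def parent(pipein):
    reader = os.fdopen(pipein)           # wrap the read end exactly once
    while True:
        # the file object stays referenced, so its descriptor is
        # never closed behind our back
        line = reader.readline()[:-1]
        print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))

if __name__ == '__main__':
    pipein, pipeout = os.pipe()
    threading.Thread(target=child, args=(pipeout,)).start()
    parent(pipein)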
