NotImplementedError: X is not picklable

Using Python 3.x, I am trying to iterate over a dictionary of datasets (NetCDF4 datasets). They are just files...
I want to examine each dataset on a separate process:
def DoProcessWork(datasetId, dataset):
    parameter = dataset.variables["so2"]
    print(parameter[0,0,0,0])

if __name__ == '__main__':
    mp.set_start_method('spawn')
    processes = []
    for key, dataset in datasets.items():
        p = mp.Process(target=DoProcessWork, args=(key, dataset,))
        p.start()
        processes.append(p)
When I run my program, I get an error saying the dataset is not picklable:
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "netCDF4\_netCDF4.pyx", line 1992, in netCDF4._netCDF4.Dataset.__reduce__ (netCDF4\_netCDF4.c:16805)
NotImplementedError: Dataset is not picklable
What am I doing wrong? How can I fix this?
Could it be that the file is opened in one process, and I am getting the error because I am trying to pass data loaded in one process to another process?

multiprocessing needs to serialize (pickle) the arguments in order to pass them to the new process that will run DoProcessWork. In your case the dataset object is the problem; see the list of what can be pickled.
A possible workaround is to use multiprocessing with another function that receives something picklable (for example the file path), opens the dataset itself, and then calls DoProcessWork on it.
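A minimal sketch of that workaround, assuming you keep a dict of file paths instead of open Dataset objects (the dataset_paths mapping and the worker function below are illustrative names, not from the original code):
import multiprocessing as mp
import netCDF4

def DoProcessWork(datasetId, dataset):
    parameter = dataset.variables["so2"]
    print(parameter[0, 0, 0, 0])

def worker(datasetId, path):
    # Open the file inside the child process; only the path string is pickled.
    dataset = netCDF4.Dataset(path)
    try:
        DoProcessWork(datasetId, dataset)
    finally:
        dataset.close()

if __name__ == '__main__':
    mp.set_start_method('spawn')
    dataset_paths = {"so2_run": "so2_run.nc"}  # hypothetical id -> .nc file mapping
    processes = []
    for key, path in dataset_paths.items():
        p = mp.Process(target=worker, args=(key, path))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()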

Related

Multiprocessing, file not found

I'm using AlphaPose from GitHub and I'd like to run the script script/demo_inference.py from another script I created in the AlphaPose root called run.py. In run.py I imported demo_inference.py as ap using this function:
def import_module_by_path(path):
    name = os.path.splitext(os.path.basename(path))[0]
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod
and
ap = import_module_by_path('./scripts/demo_inference.py')
Then, in demo_inference.py I substituted
if __name__ == "__main__":
with
def startAlphapose():
and in run.py I wrote
ap.StartAlphapose().
Now I got this error:
Load SE Resnet...
Loading YOLO model..
Process Process-3:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/vislab/guerri/alphagastnet/insieme/alphapose/utils/detector.py", line 251, in image_postprocess
(orig_img, im_name, boxes, scores, ids, inps, cropped_boxes) = self.wait_and_get(self.det_queue)
File "/home/vislab/guerri/alphagastnet/insieme/alphapose/utils/detector.py", line 121, in wait_and_get
return queue.get()
File "/usr/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/home/vislab/guerri/alphagastnet/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 284, in rebuild_storage_fd
fd = df.detach()
File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/usr/lib/python3.6/multiprocessing/connection.py", line 487, in Client
c = SocketClient(address)
File "/usr/lib/python3.6/multiprocessing/connection.py", line 614, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
What does it mean?
We were running into this same problem on our cluster.
When using multiprocessing in PyTorch (typically to run multiple DataLoader workers), the subprocesses create sockets in the /tmp directory to communicate with each other. These sockets are all saved in folders named pymp-###### and look like 0-byte files. Deleting these files or folders while your PyTorch scripts are still running will cause the above error.
In our case, the problem was a buggy maintenance script that was erasing files from the /tmp folder while they were still needed. It's possible there are other ways to trigger this error, but you should start by looking for those sockets and making sure they aren't getting erased by accident.
If that doesn't solve it, take a look at your /var/log/syslog file at the exact time when the error occurred. You'll very likely find the cause of it there.
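If you want to verify that those sockets are present while your job runs, a quick check along these lines can help (plain Python, nothing AlphaPose-specific; it simply lists whatever lives under /tmp/pymp-*):
import glob
import os

# List the multiprocessing resource-sharer sockets created under /tmp/pymp-*.
for folder in glob.glob('/tmp/pymp-*'):
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        print(path, os.path.getsize(path), 'bytes')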

How to release the GIL for a thread in Python, using Numba?

I want to make a program that consists of 2 parts: one receives data and the other writes it to a file. I thought it would be better if I could use 2 threads (and possibly 2 CPU cores) to do the jobs separately. I found this: https://numba.pydata.org/numba-doc/dev/user/jit.html#compilation-options and it allows you to release the GIL. I wonder if it suits my purpose and if I can adopt it for this kind of job. This is what I tried:
import threading
import time
import os
import queue
import numba
import numpy as np
condition = threading.Condition()
q_text = queue.Queue()
@numba.jit(nopython=True, nogil=True)
def consumer():
    t = threading.currentThread()
    with condition:
        while True:
            str_test = q_text.get()
            with open('hello.txt', 'a') as f:
                f.write(str_test)
            condition.wait()

def sender():
    with condition:
        condition.notifyAll()

def add_q(arr="hi\n"):
    q_text.put(arr)
    sender()

c1 = threading.Thread(name='c1', target=consumer)
c1.start()

add_q()
It works fine without numba, but when I apply it to consumer, it gives me an error:
Exception in thread c1:
Traceback (most recent call last):
File "d:\python36-32\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "d:\python36-32\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "d:\python36-32\lib\site-packages\numba\dispatcher.py", line 368, in _compile_for_args
raise e
File "d:\python36-32\lib\site-packages\numba\dispatcher.py", line 325, in _compile_for_args
return self.compile(tuple(argtypes))
File "d:\python36-32\lib\site-packages\numba\dispatcher.py", line 653, in compile
cres = self._compiler.compile(args, return_type)
File "d:\python36-32\lib\site-packages\numba\dispatcher.py", line 83, in compile
pipeline_class=self.pipeline_class)
File "d:\python36-32\lib\site-packages\numba\compiler.py", line 873, in compile_extra
return pipeline.compile_extra(func)
File "d:\python36-32\lib\site-packages\numba\compiler.py", line 367, in compile_extra
return self._compile_bytecode()
File "d:\python36-32\lib\site-packages\numba\compiler.py", line 804, in _compile_bytecode
return self._compile_core()
File "d:\python36-32\lib\site-packages\numba\compiler.py", line 791, in _compile_core
res = pm.run(self.status)
File "d:\python36-32\lib\site-packages\numba\compiler.py", line 253, in run
raise patched_exception
File "d:\python36-32\lib\site-packages\numba\compiler.py", line 245, in run
stage()
File "d:\python36-32\lib\site-packages\numba\compiler.py", line 381, in stage_analyze_bytecode
func_ir = translate_stage(self.func_id, self.bc)
File "d:\python36-32\lib\site-packages\numba\compiler.py", line 937, in translate_stage
return interp.interpret(bytecode)
File "d:\python36-32\lib\site-packages\numba\interpreter.py", line 92, in interpret
self.cfa.run()
File "d:\python36-32\lib\site-packages\numba\controlflow.py", line 515, in run
assert not inst.is_jump, inst
AssertionError: Failed at nopython (analyzing bytecode)
SETUP_WITH(arg=60, lineno=17)
There was no error if I excluded condition (threading.Condition) from consumer, so maybe it's because the JIT can't handle it? I'd like to know if I can adapt numba to this kind of purpose and how to fix this problem (if it's possible).
You can't use the threading module within a Numba function, and opening/writing a file isn't supported either. Numba is great when you need computational performance; your example is purely I/O, which is not a use case for Numba.
The only way Numba would add something is if you apply a function to your str_test data. Compiling that function with nogil=True would allow multi-threading. But again, that's only worth it if that function is computationally expensive compared to the I/O.
You could look into an async solution, which is more appropriate for I/O-bound work.
See this example from the Numba documentation for a case where threading improves performance:
https://numba.pydata.org/numba-doc/dev/user/examples.html#multi-threading
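For completeness, a rough sketch of the pattern that link demonstrates, with a made-up numerical function (heavy_sum below is purely illustrative, not the file-writing consumer from the question):
import threading

import numba
import numpy as np

@numba.jit(nopython=True, nogil=True)
def heavy_sum(arr):
    # Pure numerical work; with nogil=True the compiled code releases the GIL,
    # so several threads can run it in parallel on separate cores.
    total = 0.0
    for x in arr:
        total += x * x
    return total

data = [np.random.rand(5_000_000) for _ in range(4)]
results = [0.0] * len(data)

def run(i):
    results[i] = heavy_sum(data[i])

threads = [threading.Thread(target=run, args=(i,)) for i in range(len(data))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)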

can't pickle _thread.RLock objects when using a webservice

I am using Python 3.6.
I am trying to use multiprocessing from inside a class method shown below by the name SubmitJobsUsingMultiProcessing(), which in turn calls another class method.
I keep running into this error: TypeError: can't pickle _thread.RLock objects.
I have no idea what this means. I have a suspicion that the line below trying to establish a connection to a webserver API might be responsible, but I am all at sea to understand why.
I am not a proper programmer (I code as part of a portfolio modeling team), so if this is an obvious question please pardon my ignorance, and many thanks in advance.
import multiprocessing as mp,functools
def SubmitJobsUsingMultiProcessing(self, PartitionsOfAnalysisDates, PickleTheJobIdsDict=True):
    if (self.ExportSetResult == "SUCCESS"):
        NumPools = mp.cpu_count()
        PoolObj = mp.Pool(NumPools)
        userId, clientId, password, expSetName = self.userId, self.clientId, self.password, self.expSetName
        PartialFunctor = functools.partial(self.SubmitJobsAsOfDate, userId=userId, clientId=clientId, password=password, expSetName=expSetName)
        Result = PoolObj.map(self.SubmitJobsAsOfDate, PartitionsOfAnalysisDates)
        BatchJobIDs = OrderedDict((key, val) for Dct in Result for key, val in Dct.items())
        f_pickle = open(self.JobIdPickleFileName, 'wb')
        pickle.dump(BatchJobIDs, f_pickle, -1)
        f_pickle.close()

def SubmitJobsAsOfDate(self, ListOfDatesForBatchJobs, userId, clientId, password, expSetName):
    client = Client(self.url, proxy=self.proxysettings)
    if (self.ExportSetResult != "SUCCESS"):
        print("The export set creation was not successful...exiting")
        sys.exit()
    BatchJobIDs = OrderedDict()
    NumJobsSubmitted = 0
    CurrentProcessID = mp.current_process()
    for AnalysisDate in ListOfDatesForBatchJobs:
        jobName = "Foo_" + str(AnalysisDate)
        print('Sending job from process : ', CurrentProcessID, ' : ', jobName)
        jobId = client.service.SubmitExportJob(userId, clientId, password, expSetName, AnalysisDate, jobName, False)
        BatchJobIDs[AnalysisDate] = jobId
        NumJobsSubmitted += 1
        'Sleep for 30 secs every 100 jobs'
        if (NumJobsSubmitted % 100 == 0):
            print('100 jobs have been submitted thus far from process : ', CurrentProcessID, '---Sleeping for 30 secs to avoid the SSL time out error')
            time.sleep(30)
    self.BatchJobIDs = BatchJobIDs
    return BatchJobIDs
Below is the trace:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\pydevd.py", line 1599, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\pydevd.py", line 1026, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/trpff85/PycharmProjects/QuantEcon/BDTAPIMultiProcUsingPathos.py", line 289, in <module>
BDTProcessObj.SubmitJobsUsingMultiProcessing(Partitions)
File "C:/Users/trpff85/PycharmProjects/QuantEcon/BDTAPIMultiProcUsingPathos.py", line 190, in SubmitJobsUsingMultiProcessing
Result = PoolObj.map(self.SubmitJobsAsOfDate, PartitionsOfAnalysisDates)
File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 644, in get
raise self._value
File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 424, in _handle_tasks
put(task)
File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
I am struggling with a similar problem. There was a bug in <=3.5 whereby _thread.RLock objects did not raise an error when pickled (they cannot be). For the Pool object to work, a function and its arguments must be passed to it from the main process, and this relies on pickling (pickling is a means of serialising objects). In my case the RLock object is somewhere in the logging module. I suspect your code will work fine on 3.5. Good luck. See this bug resolution.
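A sketch of a common workaround, under the assumption that the unpicklable lock lives on self (for example inside the web-service client or a logger attached to it): map over a plain module-level function with only picklable arguments, and build the client inside each worker. make_client, the URL, credentials and date partitions below are placeholders, not part of the original code.
import functools
import multiprocessing as mp
from collections import OrderedDict

def make_client(url):
    # Placeholder: construct your real web-service client here (e.g. a suds Client).
    raise NotImplementedError("plug in your real client")

def SubmitJobsAsOfDate(ListOfDatesForBatchJobs, url, userId, clientId, password, expSetName):
    client = make_client(url)  # created inside the worker, so it is never pickled
    BatchJobIDs = OrderedDict()
    for AnalysisDate in ListOfDatesForBatchJobs:
        jobName = "Foo_" + str(AnalysisDate)
        BatchJobIDs[AnalysisDate] = client.service.SubmitExportJob(
            userId, clientId, password, expSetName, AnalysisDate, jobName, False)
    return BatchJobIDs

if __name__ == '__main__':
    PartialFunctor = functools.partial(
        SubmitJobsAsOfDate,
        url="http://example.com/service?wsdl",  # hypothetical endpoint
        userId="u", clientId="c", password="p", expSetName="e")
    partitions = [["2018-01-01"], ["2018-01-02"]]  # made-up date partitions
    with mp.Pool(mp.cpu_count()) as PoolObj:
        Result = PoolObj.map(PartialFunctor, partitions)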

How to launch a function in a new thread without locking the main program in Python?

I'm trying to launch a function in a new thread because the function does something not related to the main program.
I tried to do this with multiprocessing module as:
import multiprocessing
import time
def mp_worker(a):
    #time.sleep(a)
    print('a:' + str(a))
    return

for k in range(5):
    p = multiprocessing.Process(target=mp_worker, args=(k,))
    p.start()
    print('keep going :' + str(k))
But I get a bunch of errors:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
prepare(preparation_data)
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 226, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 278, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Anaconda3\lib\runpy.py", line 254, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Maxime\PycharmProjects\NeuralMassModelSofware_Pyqt5\dj.py", line 14, in <module>
p.start()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 212, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "C:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 34, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 144, in get_preparation_data
_check_not_importing_main()
File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Does anyone know how I can launch whatever function I want in a new thread and be sure that the main program keeps running normally? I'm a little lost with multiprocessing, I admit that :)
I aim to launch some displays (graphical or print, I don't know yet) for the user in new threads without interrupting the main program.
What you are doing isn't multithreading, it's multiprocessing.
Change
multiprocessing.Process(...
to
threading.Thread(...
Python » 3.6.1 Documentation: threading.Thread
SO Q&A: multiprocessing-vs-threading-python
Besides this, you can work around the error by adding a time.sleep(0.2) after start(...
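A sketch of the thread-based version; if you stay with multiprocessing instead, the same loop needs to live under the if __name__ == '__main__': guard the traceback is asking for.
import threading

def mp_worker(a):
    print('a:' + str(a))

if __name__ == '__main__':
    threads = []
    for k in range(5):
        t = threading.Thread(target=mp_worker, args=(k,))
        t.start()
        threads.append(t)
        print('keep going :' + str(k))
    for t in threads:
        t.join()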
This is what I am getting when running the code you have provided:
python3.6 -u "multiprocessingError_Cg.py"
keep going :0
a:0
keep going :1
keep going :2
a:1
a:2
keep going :3
keep going :4
a:3
a:4
>Exit code: 0
So the answer to your question is that you have to look for the cause of the trouble with multiprocessing elsewhere, not in the section of code listed in your question.
Take a close look at what was provided in the answers and comments here - maybe it could also help in your case?
Here is what was said in the comments to one of the answers there:
Dave: Would this work if beBusyFor (in your case mp_worker) uses multiprocessing internally?
@Dave just try it out and report back here if it worked. I haven't tested such a case yet, so my opinion that it should work, leaving behind some mess of processes with no parents, doesn't matter here and therefore shouldn't be taken into consideration.
Dave: No cigar ending child processes unfortunately :(

Python multiprocessing claims too many open files when no files are even opened

I'm attempting to speed up an algorithm that makes use of a gigantic matrix. I've parallelised it to operate on rows, and put the data matrix in shared memory so the system doesn't get clogged. However, instead of working smoothly as I'd hoped, it now throws a weird error about files, which I don't comprehend since I don't even open files in it.
Here is a mock-up of roughly what's going on in the program proper, with the 1000-iteration for loop being representative of what's happening in the algorithm too:
import multiprocessing
import ctypes
import numpy as np
shared_array_base = multiprocessing.Array(ctypes.c_double, 10*10)
shared_array = np.ctypeslib.as_array(shared_array_base.get_obj())
shared_array = shared_array.reshape(10, 10)

def my_func(i, shared_array):
    shared_array[i,:] = i

def pool_init(_shared_array, _constans):
    global shared_array, constans
    shared_array = _shared_array
    constans = _constans

def pool_my_func(i):
    my_func(i, shared_array)

if __name__ == '__main__':
    for i in np.arange(1000):
        pool = multiprocessing.Pool(8, pool_init, (shared_array, 4))
        pool.map(pool_my_func, range(10))
    print(shared_array)
And this throws this error (I'm on OSX):
Traceback (most recent call last):
File "weird.py", line 24, in <module>
pool = multiprocessing.Pool(8, pool_init, (shared_array, 4))
File "//anaconda/lib/python3.4/multiprocessing/context.py", line 118, in Pool
context=self.get_context())
File "//anaconda/lib/python3.4/multiprocessing/pool.py", line 168, in __init__
self._repopulate_pool()
File "//anaconda/lib/python3.4/multiprocessing/pool.py", line 233, in _repopulate_pool
w.start()
File "//anaconda/lib/python3.4/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "//anaconda/lib/python3.4/multiprocessing/context.py", line 267, in _Popen
return Popen(process_obj)
File "//anaconda/lib/python3.4/multiprocessing/popen_fork.py", line 21, in __init__
self._launch(process_obj)
File "//anaconda/lib/python3.4/multiprocessing/popen_fork.py", line 69, in _launch
parent_r, child_w = os.pipe()
OSError: [Errno 24] Too many open files
I'm quite stumped. I don't even open files in here. All I want to do is pass shared_array to the individual processes in a manner that won't clog the system memory; I don't even need to modify it within the parallelised processes if that helps.
Also, in case it matters, the exact error thrown by the proper code itself is a little different:
Traceback (most recent call last):
File "tcap.py", line 206, in <module>
File "tcap.py", line 202, in main
File "tcap.py", line 181, in tcap_cluster
File "tcap.py", line 133, in ap_step
File "//anaconda/lib/python3.4/multiprocessing/context.py", line 118, in Pool
File "//anaconda/lib/python3.4/multiprocessing/pool.py", line 168, in __init__
File "//anaconda/lib/python3.4/multiprocessing/pool.py", line 233, in _repopulate_pool
File "//anaconda/lib/python3.4/multiprocessing/process.py", line 105, in start
File "//anaconda/lib/python3.4/multiprocessing/context.py", line 267, in _Popen
File "//anaconda/lib/python3.4/multiprocessing/popen_fork.py", line 21, in __init__
File "//anaconda/lib/python3.4/multiprocessing/popen_fork.py", line 69, in _launch
OSError: [Errno 24] Too many open files
So yeah, I have no idea how to proceed. Any help would be appreciated. Thanks in advance!
You're trying to create 1000 process pools, which are not reclaimed (for some reason); these have consumed all available file descriptors in your main process for the pipes that are used for communicating between the main process and its children.
Perhaps you'd want to use:
pool = multiprocessing.Pool(8, pool_init, (shared_array, 4))
for _ in range(1000):
    pool.map(pool_my_func, range(10))
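Equivalently, since the pool is meant to be reused, a with block (Python 3.3+) makes sure its pipes are cleaned up when you're done - a sketch using the same names as the question:
if __name__ == '__main__':
    # One pool, reused for every iteration, then torn down on exit from the with block.
    with multiprocessing.Pool(8, pool_init, (shared_array, 4)) as pool:
        for _ in range(1000):
            pool.map(pool_my_func, range(10))
    print(shared_array)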
There is a limit on the number of open file descriptors imposed by the OS. Check your current limit with:
ulimit -n
For me it was 1024; I raised it to 4096 and it worked:
ulimit -n 4096
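You can also inspect (and, on Unix, raise up to the hard limit) the same limit from inside Python with the resource module - a minimal sketch:
import resource

# Current soft/hard limits on open file descriptors (Unix only).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('soft:', soft, 'hard:', hard)

# Raise the soft limit for this process, capped at the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))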
