Return value from spawned multiprocessing.Process [duplicate] - Python
In the example code below, I'd like to get the return value of the function worker. How can I go about doing this? Where is this value stored?
Example Code:
import multiprocessing

def worker(procnum):
    '''worker function'''
    print(str(procnum) + ' represent!')
    return procnum

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker, args=(i,))
        jobs.append(p)
        p.start()
    for proc in jobs:
        proc.join()
    print(jobs)
Output:
0 represent!
1 represent!
2 represent!
3 represent!
4 represent!
[<Process(Process-1, stopped)>, <Process(Process-2, stopped)>, <Process(Process-3, stopped)>, <Process(Process-4, stopped)>, <Process(Process-5, stopped)>]
I can't seem to find the relevant attribute in the objects stored in jobs.
Use a shared variable to communicate. For example, like this:
import multiprocessing

def worker(procnum, return_dict):
    """worker function"""
    print(str(procnum) + " represent!")
    return_dict[procnum] = procnum

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    return_dict = manager.dict()
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker, args=(i, return_dict))
        jobs.append(p)
        p.start()
    for proc in jobs:
        proc.join()
    print(return_dict.values())
I think the approach suggested by @sega_sai is the better one. But it really needs a code example, so here goes:
import multiprocessing
from os import getpid

def worker(procnum):
    print('I am number %d in process %d' % (procnum, getpid()))
    return getpid()

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=3)
    print(pool.map(worker, range(5)))
Which will print the return values:
I am number 0 in process 19139
I am number 1 in process 19138
I am number 2 in process 19140
I am number 3 in process 19139
I am number 4 in process 19140
[19139, 19138, 19140, 19139, 19140]
If you are familiar with map() (the Python 2 built-in), this should not be too challenging. Otherwise, have a look at sega_sai's link.
Note how little code is needed. (Also note how processes are reused.)
For anyone else who is looking for how to get a value from a Process using a Queue:
import multiprocessing

ret = {'foo': False}

def worker(queue):
    ret = queue.get()
    ret['foo'] = True
    queue.put(ret)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    queue.put(ret)
    p = multiprocessing.Process(target=worker, args=(queue,))
    p.start()
    p.join()
    print(queue.get())  # Prints {'foo': True}
Note that on Windows or in a Jupyter Notebook, with multiprocessing you have to save this as a file and execute the file. If you run it in an interactive prompt you will see an error like this:
AttributeError: Can't get attribute 'worker' on <module '__main__' (built-in)>
For some reason, I couldn't find a general example of how to do this with Queue anywhere (even Python's doc examples don't spawn multiple processes), so here's what I got working after like 10 tries:
from multiprocessing import Process, Queue

def add_helper(queue, arg1, arg2):  # the func called in child processes
    ret = arg1 + arg2
    queue.put(ret)

def multi_add():  # spawns child processes
    q = Queue()
    processes = []
    rets = []
    for _ in range(100):
        p = Process(target=add_helper, args=(q, 1, 2))
        processes.append(p)
        p.start()
    for p in processes:
        ret = q.get()  # will block
        rets.append(ret)
    for p in processes:
        p.join()
    return rets
Queue is a blocking, thread-safe queue that you can use to store the return values from the child processes. So you have to pass the queue to each process. Something less obvious here is that you have to get() from the queue before you join the Processes or else the queue fills up and blocks everything.
Update for those who are object-oriented (tested in Python 3.4):
from multiprocessing import Process, Queue

class Multiprocessor:

    def __init__(self):
        self.processes = []
        self.queue = Queue()

    @staticmethod
    def _wrapper(func, queue, args, kwargs):
        ret = func(*args, **kwargs)
        queue.put(ret)

    def run(self, func, *args, **kwargs):
        args2 = [func, self.queue, args, kwargs]
        p = Process(target=self._wrapper, args=args2)
        self.processes.append(p)
        p.start()

    def wait(self):
        rets = []
        for p in self.processes:
            ret = self.queue.get()
            rets.append(ret)
        for p in self.processes:
            p.join()
        return rets

# tester
if __name__ == "__main__":
    mp = Multiprocessor()
    num_proc = 64
    for _ in range(num_proc):  # queue up multiple tasks running `sum`
        mp.run(sum, [1, 2, 3, 4, 5])
    ret = mp.wait()  # get all results
    print(ret)
    assert len(ret) == num_proc and all(r == 15 for r in ret)
This example shows how to use a list of multiprocessing.Pipe instances to return strings from an arbitrary number of processes:
import multiprocessing

def worker(procnum, send_end):
    '''worker function'''
    result = str(procnum) + ' represent!'
    print(result)
    send_end.send(result)

def main():
    jobs = []
    pipe_list = []
    for i in range(5):
        recv_end, send_end = multiprocessing.Pipe(False)
        p = multiprocessing.Process(target=worker, args=(i, send_end))
        jobs.append(p)
        pipe_list.append(recv_end)
        p.start()
    for proc in jobs:
        proc.join()
    result_list = [x.recv() for x in pipe_list]
    print(result_list)

if __name__ == '__main__':
    main()
Output:
0 represent!
1 represent!
2 represent!
3 represent!
4 represent!
['0 represent!', '1 represent!', '2 represent!', '3 represent!', '4 represent!']
This solution uses fewer resources than a multiprocessing.Queue, which uses
- a Pipe
- at least one Lock
- a buffer
- a thread
or a multiprocessing.SimpleQueue, which uses
- a Pipe
- at least one Lock
It is very instructive to look at the source for each of these types.
It seems that you should use the multiprocessing.Pool class instead, and use its methods .apply(), .apply_async(), or .map() (a short sketch follows the link):
http://docs.python.org/library/multiprocessing.html?highlight=pool#multiprocessing.pool.AsyncResult
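For illustration, a minimal sketch of that approach using apply_async() and AsyncResult.get() (this example is not from the original answer; the worker is just a stand-in):
import multiprocessing
from os import getpid

def worker(procnum):
    return (procnum, getpid())

if __name__ == '__main__':
    with multiprocessing.Pool(processes=3) as pool:
        # each apply_async() returns an AsyncResult; .get() blocks until the value is ready
        async_results = [pool.apply_async(worker, (i,)) for i in range(5)]
        print([res.get() for res in async_results])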
You can use sys.exit() to set the exit code of a child process. The code can then be read from the exitcode attribute of the Process object:
import multiprocessing
import sys

def worker(procnum):
    print(str(procnum) + ' represent!')
    sys.exit(procnum)

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker, args=(i,))
        jobs.append(p)
        p.start()
    result = []
    for proc in jobs:
        proc.join()
        result.append(proc.exitcode)
    print(result)
Output:
0 represent!
1 represent!
2 represent!
3 represent!
4 represent!
[0, 1, 2, 3, 4]
The pebble package has a nice abstraction leveraging multiprocessing.Pipe which makes this quite straightforward:
from pebble import concurrent

@concurrent.process
def function(arg, kwarg=0):
    return arg + kwarg

future = function(1, kwarg=1)
print(future.result())
Example from: https://pythonhosted.org/Pebble/#concurrent-decorators
Thought I'd simplify the simplest examples copied from above, working for me on Py3.6. Simplest is multiprocessing.Pool:
import multiprocessing
import time

def worker(x):
    time.sleep(1)
    return x

pool = multiprocessing.Pool()
print(pool.map(worker, range(10)))
You can set the number of processes in the pool with, e.g., Pool(processes=5). However, it defaults to the CPU count, so leave it blank for CPU-bound tasks. (I/O-bound tasks often suit threads anyway, as the threads are mostly waiting and so can share a CPU core.) Pool also applies a chunking optimization; a small sketch of those knobs follows.
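As a small illustration (not part of the original answer; the worker, pool size, and chunk size are arbitrary):
import multiprocessing
import time

def worker(x):
    time.sleep(0.1)
    return x * x

if __name__ == '__main__':
    with multiprocessing.Pool(processes=4) as pool:        # explicit worker count
        print(pool.map(worker, range(20), chunksize=5))    # hand each worker batches of 5 items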
(Note that the worker function cannot be nested within a method. I initially defined my worker method inside the method that makes the call to pool.map, to keep it all self-contained, but then the processes couldn't import it and threw "AttributeError: Can't pickle local object 'outer_method.<locals>.inner_method'". More here. It can be inside a class, though; a short sketch of that follows.)
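For instance, here is a minimal sketch, not from the original answer, of a worker defined inside a top-level class (the class and method names are made up):
import multiprocessing

class Tasks:
    @staticmethod
    def work(x):  # importable by the worker processes because Tasks is defined at module level
        return x * x

if __name__ == '__main__':
    with multiprocessing.Pool() as pool:
        print(pool.map(Tasks.work, range(5)))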
(I appreciate that the original question specified printing 'represent!' rather than time.sleep(), but without the sleep I thought some code was running concurrently when it wasn't.)
Py3's ProcessPoolExecutor is also two lines (its .map returns an iterator, so you need the list()):
from concurrent.futures import ProcessPoolExecutor

with ProcessPoolExecutor() as executor:
    print(list(executor.map(worker, range(10))))
With plain Processes:
import multiprocessing
import time

def worker(x, queue):
    time.sleep(1)
    queue.put(x)

queue = multiprocessing.SimpleQueue()
tasks = range(10)

for task in tasks:
    multiprocessing.Process(target=worker, args=(task, queue)).start()
for _ in tasks:
    print(queue.get())
Use SimpleQueue if all you need is put and get. The first loop starts all the processes before the second loop makes the blocking queue.get() calls. I don't think there's any reason to call p.join() as well.
If you are using Python 3, you can use concurrent.futures.ProcessPoolExecutor as a convenient abstraction:
from concurrent.futures import ProcessPoolExecutor

def worker(procnum):
    '''worker function'''
    print(str(procnum) + ' represent!')
    return procnum

if __name__ == '__main__':
    with ProcessPoolExecutor() as executor:
        print(list(executor.map(worker, range(5))))
Output:
0 represent!
1 represent!
2 represent!
3 represent!
4 represent!
[0, 1, 2, 3, 4]
A simple solution:
import multiprocessing

output = []
data = range(10)

def f(x):
    return x ** 2

def handler():
    p = multiprocessing.Pool(64)
    r = p.map(f, data)
    return r

if __name__ == '__main__':
    output.append(handler())
    print(output[0])
Output:
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
You can use ProcessPoolExecutor to get a return value from a function as shown below:
from concurrent.futures import ProcessPoolExecutor

def test(num1, num2):
    return num1 + num2

if __name__ == '__main__':
    with ProcessPoolExecutor() as executor:
        future = executor.submit(test, 2, 3)
        print(future.result())  # 5
I modified vartec's answer a bit since I needed to get the error codes from the functions. (Thanks vartec!!! It's an awesome trick.)
This can also be done with a manager.list, but I think it is better to have it in a dict and store a list within it. That way we keep the function and its result together, since we can't be sure of the order in which the list will be populated.
from multiprocessing import Process
import time
import datetime
import multiprocessing

def func1(fn, m_list):
    print('func1: starting')
    time.sleep(1)
    m_list[fn] = "this is the first function"
    print('func1: finishing')
    # return "func1"  # no need for a return since Process doesn't give it back anyway =(

def func2(fn, m_list):
    print('func2: starting')
    time.sleep(3)
    m_list[fn] = "this is function 2"
    print('func2: finishing')
    # return "func2"

def func3(fn, m_list):
    print('func3: starting')
    time.sleep(9)
    # if it fails it never populates the dict, so only the exitcode tells you something went wrong
    raise ValueError("failed here")
    # alternatively, catch the error so that something still lands in the manager dict:
    # try:
    #     raise ValueError("failed here")
    #     m_list[fn] = "this is third"
    # except ValueError:
    #     m_list[fn] = "this is third and it failed horribly"
    # print('func3: finishing')
    # return "func3"

def runInParallel(*fns):  # * is to accept any number of functions
    start_time = datetime.datetime.now()
    proc = []
    manager = multiprocessing.Manager()
    m_list = manager.dict()
    for fn in fns:
        p = Process(target=fn, name=fn.__name__, args=(fn, m_list))
        p.start()
        proc.append(p)
    for p in proc:
        p.join()  # a timeout could be passed here
    print(datetime.datetime.now() - start_time)
    return m_list, proc

if __name__ == '__main__':
    manager, proc = runInParallel(func1, func2, func3)
    # here you can check what failed, via the exit codes
    for i in proc:
        print(i.name, i.exitcode)  # name was set when the Process was created
    # this will only show the functions that worked and were able to populate the manager dict
    for i, j in manager.items():
        # print(dir(i))  # things you can do with the function object used as the key
        print(i, j)
Related
Python multiprocessing with Queue (split loads dynamically)
I am trying to use multiprocessing to process a very large number of files. I tried to put the list of files into a queue and make 3 workers split the load with a common Queue data type. However this does not seem to work. Probably I am misunderstanding the queue in the multiprocessing package. Below is the example source code:

import multiprocessing
from multiprocessing import Queue

def worker(i, qu):
    """worker function"""
    while ~qu.empty():
        val = qu.get()
        print('Worker:', i, 'start with file:', val)
        j = 1
        for k in range(i * 10000, (i + 1) * 10000):  # some time consuming process
            for j in range(i * 10000, (i + 1) * 10000):
                j = j + k
        print('Worker:', i, 'end with file:', val)

if __name__ == '__main__':
    jobs = []
    qu = Queue()
    for j in range(100, 110):  # file numbers are from 100 to 110
        qu.put(j)
    for i in range(3):  # 3 processes
        p = multiprocessing.Process(target=worker, args=(i, qu))
        jobs.append(p)
        p.start()
    p.join()

Thanks for the comments. I came to realize that using Pool is the best solution.

import multiprocessing
import time

def worker(val):
    """worker function"""
    print('Worker: start with file:', val)
    time.sleep(1.1)
    print('Worker: end with file:', val)

if __name__ == '__main__':
    file_list = range(100, 110)
    p = multiprocessing.Pool(2)
    p.map(worker, file_list)
A few issues: 1) you are joining only on the 3rd process; 2) why not use multiprocessing.Pool?; 3) there is a race condition on qu.get(). For 1 & 3:

import multiprocessing
from multiprocessing import Queue
import queue

def worker(i, qu):
    """worker function"""
    while 1:
        try:
            val = qu.get(timeout=1)
        except queue.Empty:
            break  # yay, no race condition
        print('Worker:', i, 'start with file:', val)
        j = 1
        for k in range(i * 10000, (i + 1) * 10000):  # some time consuming process
            for j in range(i * 10000, (i + 1) * 10000):
                j = j + k
        print('Worker:', i, 'end with file:', val)

if __name__ == '__main__':
    jobs = []
    qu = Queue()
    for j in range(100, 110):  # file numbers are from 100 to 110
        qu.put(j)
    for i in range(3):  # 3 processes
        p = multiprocessing.Process(target=worker, args=(i, qu))
        jobs.append(p)
        p.start()
    for p in jobs:  # <--- join on all processes
        p.join()

For 2), on how to use the Pool, see: https://docs.python.org/2/library/multiprocessing.html
You are joining only the last of your created processes. That means if the first or the second process is still working while the third is finished, your main process shuts down and kills the remaining processes before they are finished. You should join them all in order to wait until they are finished:

for p in jobs:
    p.join()

Another thing: you should consider using qu.get_nowait() in order to get rid of the race condition between qu.empty() and qu.get(). For example:

try:
    while 1:
        message = self.queue.get_nowait()
        """ do something fancy here """
except queue.Empty:
    pass

I hope that helps.
Python Multiprocessing Pipe "Deadlock"
I'm facing problems with the following example code:

from multiprocessing import Lock, Process, Queue, current_process

def worker(work_queue, done_queue):
    for item in iter(work_queue.get, 'STOP'):
        print("adding ", item, "to done queue")
        # this works: done_queue.put(item*10)
        done_queue.put(item*1000)  # this doesn't!
    return True

def main():
    workers = 4
    work_queue = Queue()
    done_queue = Queue()
    processes = []
    for x in range(10):
        work_queue.put("hi" + str(x))
    for w in range(workers):
        p = Process(target=worker, args=(work_queue, done_queue))
        p.start()
        processes.append(p)
        work_queue.put('STOP')
    for p in processes:
        p.join()
    done_queue.put('STOP')
    for item in iter(done_queue.get, 'STOP'):
        print(item)

if __name__ == '__main__':
    main()

When the done queue becomes big enough (a limit of about 64k, I think), the whole thing freezes without any further notice. What is the general approach for such a situation when the queue becomes too big? Is there some way to remove elements on the fly once they are processed? The Python docs recommend removing the p.join(), but in a real application I cannot estimate when the processes have finished. Is there a simple solution for this problem besides infinite looping and using .get_nowait()?
This works for me with 3.4.0alpha4, 3.3, 3.2, 3.1 and 2.6. It tracebacks with 2.7 and 3.0. I pylint'd it, BTW.

#!/usr/local/cpython-3.3/bin/python
'''SSCCE for a queue deadlock'''
import sys
import multiprocessing

def worker(workerno, work_queue, done_queue):
    '''Worker function'''
    #reps = 10     # this worked for the OP
    #reps = 1000   # this worked for me
    reps = 10000   # this didn't
    for item in iter(work_queue.get, 'STOP'):
        print("adding", item, "to done queue")
        #this works: done_queue.put(item*10)
        for thing in item * reps:
            #print('workerno: {}, adding thing {}'.format(workerno, thing))
            done_queue.put(thing)
    done_queue.put('STOP')
    print('workerno: {0}, exited loop'.format(workerno))
    return True

def main():
    '''main function'''
    workers = 4
    work_queue = multiprocessing.Queue(maxsize=0)
    done_queue = multiprocessing.Queue(maxsize=0)
    processes = []
    for integer in range(10):
        work_queue.put("hi" + str(integer))
    for workerno in range(workers):
        dummy = workerno
        process = multiprocessing.Process(target=worker, args=(workerno, work_queue, done_queue))
        process.start()
        processes.append(process)
        work_queue.put('STOP')
    itemno = 0
    stops = 0
    while True:
        item = done_queue.get()
        itemno += 1
        sys.stdout.write('itemno {0}\r'.format(itemno))
        if item == 'STOP':
            stops += 1
            if stops == workers:
                break
    print('exited done_queue empty loop')
    for workerno, process in enumerate(processes):
        print('attempting process.join() of workerno {0}'.format(workerno))
        process.join()
    done_queue.put('STOP')

if __name__ == '__main__':
    main()

HTH
How to communicate between processes in real time?
I have two processes and the data of one process has to be communicated to the other. I wrote a basic queue in order to communicate in real time, but it doesn't serve the purpose. The following is example code:

from multiprocessing import Process, Pipe, Queue

a, b = Pipe()
q = Queue()

def f(name):
    i = 0
    while i < 4:
        q.put(i)
        i += 1

def t():
    print(q.get())

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
    p1 = Process(target=t, args=(''))
    p1.start()
    p1.join()

The expected output was 0 1 2 3 4, but I only get 0. How can I resolve this?
Try with this version:

def t():
    while True:
        try:
            print(q.get(timeout=1))
        except:
            break
You're only calling get() once. It returns one item at a time. (As an aside, your function f is very non-Pythonic; try:

def f(name):
    for i in range(4):
        q.put(i)

) You're also using q as a global...
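As a minimal sketch (not from the original answer) of passing the queue to the child explicitly rather than relying on a global:

from multiprocessing import Process, Queue

def f(name, q):  # the queue arrives as an argument instead of being a global
    for i in range(4):
        q.put(i)

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=('bob', q))
    p.start()
    for _ in range(4):  # drain the queue before joining
        print(q.get())
    p.join()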
How do you pass a Queue reference to a function managed by pool.map_async()?
I want a long-running process to return its progress over a Queue (or something similar), which I will feed to a progress-bar dialog. I also need the result when the process is completed. A test example here fails with RuntimeError: Queue objects should only be shared between processes through inheritance.

import multiprocessing, time

def task(args):
    count = args[0]
    queue = args[1]
    for i in range(count):
        queue.put("%d mississippi" % i)
    return "Done"

def main():
    q = multiprocessing.Queue()
    pool = multiprocessing.Pool()
    result = pool.map_async(task, [(x, q) for x in range(10)])
    time.sleep(1)
    while not q.empty():
        print(q.get())
    print(result.get())

if __name__ == "__main__":
    main()

I've been able to get this to work using individual Process objects (where I am allowed to pass a Queue reference), but then I don't have a pool to manage the many processes I want to launch. Any advice on a better pattern for this?
The following code seems to work:

import multiprocessing, time

def task(args):
    count = args[0]
    queue = args[1]
    for i in range(count):
        queue.put("%d mississippi" % i)
    return "Done"

def main():
    manager = multiprocessing.Manager()
    q = manager.Queue()
    pool = multiprocessing.Pool()
    result = pool.map_async(task, [(x, q) for x in range(10)])
    time.sleep(1)
    while not q.empty():
        print(q.get())
    print(result.get())

if __name__ == "__main__":
    main()

Note that the Queue comes from manager.Queue() rather than multiprocessing.Queue(). Thanks Alex for pointing me in this direction.
Making q global works...:

import multiprocessing, time

q = multiprocessing.Queue()

def task(count):
    for i in range(count):
        q.put("%d mississippi" % i)
    return "Done"

def main():
    pool = multiprocessing.Pool()
    result = pool.map_async(task, range(10))
    time.sleep(1)
    while not q.empty():
        print(q.get())
    print(result.get())

if __name__ == "__main__":
    main()

If you need multiple queues, e.g. to avoid mixing up the progress of the various pool processes, a global list of queues should work (of course, each process will then need to know what index in the list to use, but that's OK to pass as an argument ;-). A sketch of that idea follows.
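A rough sketch of that multiple-queue idea (not from the original answer; the names and counts are made up, and like the answer above it relies on the children inheriting the globals, so it assumes the fork start method):

import multiprocessing

NUM_TASKS = 3
queues = [multiprocessing.Queue() for _ in range(NUM_TASKS)]  # one progress queue per task

def task(idx):
    for i in range(4):
        queues[idx].put('task %d: step %d' % (idx, i))  # report progress on "our" queue
    return 'task %d done' % idx

def main():
    pool = multiprocessing.Pool()
    result = pool.map_async(task, range(NUM_TASKS))
    print(result.get())  # the return values, once all tasks have finished
    for idx, q in enumerate(queues):
        for _ in range(4):  # each task put exactly 4 progress messages
            print(q.get())

if __name__ == '__main__':
    main()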