So far, I have only used multiprocessing and multithreading on functions that return a result at the end. I know about multiprocessing.Queue and multiprocessing.Queue.get(), but I just don't understand how I could apply them to a data loader.
I'm struggling with the following task:
def data_loader():
    for _ in range(10**6):
        # calculates for some seconds
        yield result

for data in data_loader():
    train_AI(data)
    # Here an AI is being trained for another few seconds
So my question is: Is there any easy way to have my existing data_loader calculate (pre-buffer) its next yield while the AI is being trained on the GPU?
Or would I have to completely restructure this, with an external iterator that calls an inner smaller data_loader that returns a single batch each time it's called?
Yeah, you can use a multiprocessing Queue:
from multiprocessing import Process, Queue
from time import sleep

FINISHED_LOADING_DATA = 'LAST ONE'  # just make sure it's not something that can be returned by some_function()

def some_function():
    print('getting data')
    sleep(0.5)
    return 'some_result'

def train_AI(x):
    print('training AI')
    sleep(2)

q = Queue()
final_results = []

def data_loader(q):
    for _ in range(10):
        result = some_function()
        q.put(result)
    q.put(FINISHED_LOADING_DATA)

def train_if_data_available():
    while True:
        data = q.get()
        if data == FINISHED_LOADING_DATA:
            return 'DONE'
        train_AI(data)

t = Process(target=data_loader, args=(q,))
t.daemon = True
t.start()

train_if_data_available()
I'm trying to implement a function that takes 2 functions as arguments, runs both, returns the value of the function that returns first and kills the slower function before it finishes its execution.
My problem is that when I try to empty the Queue object I use to collect the return values, I get stuck.
Is there a more 'correct' way to handle this scenario or even an existing module? If not, can anyone explain what I'm doing wrong?
Here is my code (the implementation of the above function is 'run_both()'):
import multiprocessing as mp
from time import sleep

Q = mp.Queue()

def dump_queue(queue):
    result = []
    for i in iter(queue.get, 'STOP'):
        result.append(i)
    return result

def rabbit(x):
    sleep(10)
    Q.put(x)

def turtle(x):
    sleep(30)
    Q.put(x)

def run_both(a, b):
    a.start()
    b.start()
    while a.is_alive() and b.is_alive():
        sleep(1)
    if a.is_alive():
        a.terminate()
    else:
        b.terminate()
    a.join()
    b.join()
    return dump_queue(Q)

p1 = mp.Process(target=rabbit, args=(1,))
p1 = mp.Process(target=turtle, args=(2,))
run_both(p1, p2)
Here's an example to call 2 or more functions with multiprocessing and return the fastest result. There are a few important things to note however.
Running multiprocessing code in IDLE sometimes causes problems. This example works, but I did run into that issue trying to solve this.
Multiprocessing code should start from inside an if __name__ == '__main__' clause, or else it will be run again if the main module is re-imported by another process. Read the multiprocessing doc page for more info.
The result queue is passed directly to each process that uses it. When you use the queue by referencing a global name in the module, the code fails on Windows because a new instance of the queue is used by each process. Read more here: Multiprocessing Queue.get() hangs
I have also added a bit of a feature here to know which process' result was actually used.
import multiprocessing as mp
import time
import random

def task(value):
    # our dummy task is to sleep for a random amount of time and
    # return the given arg value
    time.sleep(random.random())
    return value

def process(q, idx, fn, args):
    # simply call function fn with args, and push its result in the queue with its index
    q.put([fn(*args), idx])

def fastest(calls):
    queue = mp.Queue()
    # we must pass the queue directly to each process that may use it
    # or else on Windows, each process will have its own copy of the queue
    # making it useless
    procs = []

    # create a 'mp.Process' that calls our 'process' for each call and start it
    for idx, call in enumerate(calls):
        fn = call[0]
        args = call[1:]
        p = mp.Process(target=process, args=(queue, idx, fn, args))
        procs.append(p)
        p.start()

    # wait for the queue to have something
    result, idx = queue.get()

    for proc in procs:  # kill all processes that may still be running
        proc.terminate()
        # proc may be using queue, so queue may be corrupted.
        # https://docs.python.org/3.8/library/multiprocessing.html?highlight=queue#multiprocessing.Process.terminate
        # we no longer need queue though so this is fine

    return result, idx

if __name__ == '__main__':
    from datetime import datetime

    start = datetime.now()
    print(start)

    # to be compatible with 'fastest', each call is a list with the first
    # element being callable, followed by args to be passed
    calls = [
        [task, 1],
        [task, 'hello'],
        [task, [1, 2, 3]]
    ]

    val, idx = fastest(calls)

    end = datetime.now()
    print(end)
    print('elapsed time:', end - start)
    print('returned value:', val)
    print('from call at index', idx)
Example output:
2019-12-21 04:01:09.525575
2019-12-21 04:01:10.171891
elapsed time: 0:00:00.646316
returned value: hello
from call at index 1
Apart from the typo on the penultimate line which should read:
p2 = mp.Process(target=turtle, args=(2,)) # not p1
the simplest change you can make to get the program to work is to add:
Q.put('STOP')
to the end of turtle() and rabbit().
You also don't really need to keep looping to watch whether the processes are alive: by definition, if you just read the message queue and receive STOP, one of them has finished. So you could replace run_both() with:
def run_both(a, b):
    a.start()
    b.start()
    result = dump_queue(Q)
    a.terminate()
    b.terminate()
    return result
You may also need to think about what happens if both processes put messages in the queue at much the same time: they could get mixed up. Maybe consider using 2 queues, or joining all the results up into a single message rather than appending multiple values together from queue.get().
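For illustration, here is a minimal sketch of the single-message idea; the worker function and its arguments are hypothetical, not from the original code. Each process puts exactly one tuple that identifies itself, so results from the two processes cannot interleave:

from multiprocessing import Process, Queue
from time import sleep

# hypothetical worker: puts ONE combined message (who, payload) instead of
# several separate values, so nothing from the two processes can get mixed up
def worker(q, name, delay, x):
    sleep(delay)
    q.put((name, x))

if __name__ == '__main__':
    q = Queue()
    a = Process(target=worker, args=(q, 'rabbit', 1, 1))
    b = Process(target=worker, args=(q, 'turtle', 3, 2))
    a.start()
    b.start()
    name, value = q.get()   # the first combined message wins
    a.terminate()
    b.terminate()
    print(name, value)      # e.g. rabbit 1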
The following code starts a few threads and prints the result after they are all done:
import threading

results = [None] * 5
threads = [None] * 5

def worker(i):
    results[i] = i

for i in range(5):
    threads[i] = threading.Thread(target=worker, args=(i,))
    threads[i].start()

# here I would like to use the results of the threads which are finished
# while others still run

for i in range(5):
    threads[i].join()

# here I have the results but only when all threads are done
print(results)
As mentioned in the code, I would like to use the results of the threads which are finished while others are still running. What is the correct way to do that?
Should I simply start a new thread with a while True: loop that continuously checks for new entries in results, or is there a built-in mechanism for such operations (as part of the threading.Thread call which would point to a callback when the thread is done)?
Since you're using Python 3, concurrent.futures is a better fit than threading:
import concurrent.futures

results = [None] * 5

def worker(i):
    results[i] = i

with concurrent.futures.ThreadPoolExecutor(5) as pool:
    futmap = {pool.submit(worker, i): i for i in range(len(results))}
    for fut in concurrent.futures.as_completed(futmap):
        print("doing more stuff with", futmap[fut])
I have a Python iterator that performs a time-consuming task on each iteration. It would be nice if the return values of the iterator could be precomputed in the background, such that when the iterator is called, the result can be yielded right away.
e.g.
import numpy as np

def sample_iterator():
    while True:
        x = np.random.rand(int(1e8)).mean()
        yield x
Here is an iterator (precomputing_iterator) that takes another iterator (sample_iterator) as input and precomputes its return values. When precomputing_iterator is created, the precomputation of sample_iterator's return values starts right away. The return values are stored on a multiprocessing.Queue object. If there are values on the queue, precomputing_iterator can yield them right away.
from multiprocessing import Process, Queue
import numpy as np
import time

def sample_iterator():
    while True:
        x = np.random.rand(int(1e8)).mean()
        yield x

def precomputing_iterator(iterator, maxsize=5):
    def enqueue(q):
        while True:
            q.put(next(iterator))

    q = Queue(maxsize=maxsize)
    p = Process(target=enqueue, args=(q,))
    p.start()

    while True:
        yield q.get()

i1 = sample_iterator()
i2 = precomputing_iterator(i1)

t = time.time()
next(i2)
print("execution time:", time.time() - t)

time.sleep(3)

t = time.time()
next(i2)
print("execution time:", time.time() - t)
Here, for me, the first execution time is 1.4 seconds (the queue is empty, so no return values are precomputed yet). The second execution time is 0.00031 seconds (the precomputed result is just returned).
To make my code more "pythonic" and faster, I use multiprocessing and a map function to send it a) the function and b) the range of iterations.
The implemented solution (i.e., calling tqdm directly on the range: tqdm.tqdm(range(0, 30))) does not work with multiprocessing (as formulated in the code below).
The progress bar is displayed from 0 to 100% (when Python reads the code?) but it does not indicate the actual progress of the map function.
How can one display a progress bar that indicates at which step the 'map' function is?
from multiprocessing import Pool
import tqdm
import time

def _foo(my_number):
    square = my_number * my_number
    time.sleep(1)
    return square

if __name__ == '__main__':
    p = Pool(2)
    r = p.map(_foo, tqdm.tqdm(range(0, 30)))
    p.close()
    p.join()
Any help or suggestions are welcome...
Use imap instead of map, which returns an iterator of the processed values.
from multiprocessing import Pool
import tqdm
import time

def _foo(my_number):
    square = my_number * my_number
    time.sleep(1)
    return square

if __name__ == '__main__':
    with Pool(2) as p:
        r = list(tqdm.tqdm(p.imap(_foo, range(30)), total=30))
Sorry for being late but if all you need is a concurrent map, I added this functionality in tqdm>=4.42.0:
from tqdm.contrib.concurrent import process_map  # or thread_map
import time

def _foo(my_number):
    square = my_number * my_number
    time.sleep(1)
    return square

if __name__ == '__main__':
    r = process_map(_foo, range(0, 30), max_workers=2)
References: https://tqdm.github.io/docs/contrib.concurrent/ and https://github.com/tqdm/tqdm/blob/master/examples/parallel_bars.py
It supports max_workers and chunksize and you can also easily switch from process_map to thread_map.
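For example, here is a sketch of the same call with an explicit chunksize, plus the thread-based variant; the parameter values are arbitrary:

from tqdm.contrib.concurrent import process_map, thread_map
import time

def _foo(my_number):
    time.sleep(1)
    return my_number * my_number

if __name__ == '__main__':
    # larger chunks reduce inter-process overhead for cheap tasks
    r1 = process_map(_foo, range(0, 30), max_workers=2, chunksize=5)
    # thread_map has the same interface but uses threads instead of processes
    r2 = thread_map(_foo, range(0, 30), max_workers=2)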
Solution found. Be careful! Due to multiprocessing, the estimation time (iteration per loop, total time, etc.) could be unstable, but the progress bar works perfectly.
Note: Context manager for Pool is only available in Python 3.3+.
from multiprocessing import Pool
import time
from tqdm import *

def _foo(my_number):
    square = my_number * my_number
    time.sleep(1)
    return square

if __name__ == '__main__':
    with Pool(processes=2) as p:
        max_ = 30
        with tqdm(total=max_) as pbar:
            for _ in p.imap_unordered(_foo, range(0, max_)):
                pbar.update()
You can use p_tqdm instead.
https://github.com/swansonk14/p_tqdm
from p_tqdm import p_map
import time

def _foo(my_number):
    square = my_number * my_number
    time.sleep(1)
    return square

if __name__ == '__main__':
    r = p_map(_foo, list(range(0, 30)))
Based on the answer of Xavi Martínez, I wrote the function imap_unordered_bar. It can be used in the same way as imap_unordered, with the only difference that a progress bar is shown.
from multiprocessing import Pool
import time
from tqdm import *

def imap_unordered_bar(func, args, n_processes=2):
    p = Pool(n_processes)
    res_list = []
    with tqdm(total=len(args)) as pbar:
        for res in p.imap_unordered(func, args):
            pbar.update()
            res_list.append(res)
    p.close()
    p.join()
    return res_list

def _foo(my_number):
    square = my_number * my_number
    time.sleep(1)
    return square

if __name__ == '__main__':
    result = imap_unordered_bar(_foo, range(5))
import multiprocessing as mp
import tqdm

iterable = ...
num_cpu = mp.cpu_count() - 2  # don't use all CPUs

def func(x):
    # your logic
    ...

if __name__ == '__main__':
    with mp.Pool(num_cpu) as p:
        list(tqdm.tqdm(p.imap(func, iterable), total=len(iterable)))
For a progress bar with apply_async, we can use the following code, as suggested in:
https://github.com/tqdm/tqdm/issues/484
import time
import random
from multiprocessing import Pool
from tqdm import tqdm

def myfunc(a):
    time.sleep(random.random())
    return a ** 2

pool = Pool(2)
pbar = tqdm(total=100)

def update(*a):
    pbar.update()

for i in range(pbar.total):
    pool.apply_async(myfunc, args=(i,), callback=update)

pool.close()
pool.join()
Here is my take for when you need to get results back from your parallel functions. This function does a few things (there is another post of mine that explains it further), but the key point is that there is a tasks-pending queue and a tasks-completed queue. As workers finish each task from the pending queue, they add the results to the completed queue. You can wrap the check on the tasks-completed queue with the tqdm progress bar. I am not putting the implementation of the do_work() function here; it is not relevant, as the message is to monitor the tasks-completed queue and update the progress bar every time a result comes in (a sketch of a possible do_work() follows the function below).
import multiprocessing as mp
import pickle
import psutil
from tqdm import tqdm

def par_proc(job_list, num_cpus=None, verbose=False):
    # Get the number of cores
    if not num_cpus:
        num_cpus = psutil.cpu_count(logical=False)

    print('* Parallel processing')
    print('* Running on {} cores'.format(num_cpus))

    # Set-up the queues for sending and receiving data to/from the workers
    tasks_pending = mp.Queue()
    tasks_completed = mp.Queue()

    # Gather processes and results here
    processes = []
    results = []

    # Count tasks
    num_tasks = 0

    # Add the tasks to the queue
    for job in job_list:
        for task in job['tasks']:
            expanded_job = {}
            num_tasks = num_tasks + 1
            expanded_job.update({'func': pickle.dumps(job['func'])})
            expanded_job.update({'task': task})
            tasks_pending.put(expanded_job)

    # Set the number of workers here
    num_workers = min(num_cpus, num_tasks)

    # We need as many sentinels as there are worker processes so that ALL processes exit when there is no more
    # work left to be done.
    for c in range(num_workers):
        tasks_pending.put(SENTINEL)

    print('* Number of tasks: {}'.format(num_tasks))

    # Set-up and start the workers
    for c in range(num_workers):
        p = mp.Process(target=do_work, args=(tasks_pending, tasks_completed, verbose))
        p.name = 'worker' + str(c)
        processes.append(p)
        p.start()

    # Gather the results
    completed_tasks_counter = 0

    with tqdm(total=num_tasks) as bar:
        while completed_tasks_counter < num_tasks:
            results.append(tasks_completed.get())
            completed_tasks_counter = completed_tasks_counter + 1
            bar.update(1)  # advance the bar by one completed task

    for p in processes:
        p.join()

    return results
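For completeness, here is a minimal, hypothetical sketch of the missing pieces (SENTINEL and do_work()); this is my assumption of how they might look, not the original implementation. Each worker unpickles the function, applies it to its task, pushes the result onto the completed queue, and exits when it pulls the sentinel:

import pickle

SENTINEL = 'STOP'  # hypothetical marker; any value that real tasks never produce

def do_work(tasks_pending, tasks_completed, verbose=False):
    # keep pulling tasks until the sentinel is seen
    while True:
        expanded_job = tasks_pending.get()
        if expanded_job == SENTINEL:
            break
        func = pickle.loads(expanded_job['func'])
        task = expanded_job['task']
        if verbose:
            print('processing task:', task)
        tasks_completed.put(func(task))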
Based on "user17242583" answer, I created the following function. It should be as fast as Pool.map and the results are always ordered. Plus, you can pass as many parameters to your function as you want and not just a single iterable.
from multiprocessing import Pool
from functools import partial
from tqdm import tqdm

def imap_tqdm(function, iterable, processes, chunksize=1, desc=None, disable=False, **kwargs):
    """
    Run a function in parallel with a tqdm progress bar and an arbitrary number of arguments.
    Results are always ordered and the performance should be the same as of Pool.map.
    :param function: The function that should be parallelized.
    :param iterable: The iterable passed to the function.
    :param processes: The number of processes used for the parallelization.
    :param chunksize: The iterable is chopped into chunks of this size, which are submitted to the process pool as separate tasks.
    :param desc: The description displayed by tqdm in the progress bar.
    :param disable: Disables the tqdm progress bar.
    :param kwargs: Any additional arguments that should be passed to the function.
    """
    if kwargs:
        function_wrapper = partial(_wrapper, function=function, **kwargs)
    else:
        function_wrapper = partial(_wrapper, function=function)

    results = [None] * len(iterable)
    with Pool(processes=processes) as p:
        with tqdm(desc=desc, total=len(iterable), disable=disable) as pbar:
            for i, result in p.imap_unordered(function_wrapper, enumerate(iterable), chunksize=chunksize):
                results[i] = result
                pbar.update()
    return results

def _wrapper(enum_iterable, function, **kwargs):
    i = enum_iterable[0]
    result = function(enum_iterable[1], **kwargs)
    return i, result
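For example, a usage sketch, assuming the imap_tqdm definition above is in the same module; the function _foo and its exponent keyword are made up for illustration:

import time

def _foo(my_number, exponent=2):
    time.sleep(1)
    return my_number ** exponent

if __name__ == '__main__':
    # extra keyword arguments (here: exponent) are forwarded to _foo
    r = imap_tqdm(_foo, list(range(30)), processes=2, desc='squaring', exponent=2)
    print(r)  # results come back in input order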
This approach is simple and it works.
from multiprocessing.pool import ThreadPool
import time
from tqdm import tqdm

def job():
    time.sleep(1)
    pbar.update()

pool = ThreadPool(5)
with tqdm(total=100) as pbar:
    for i in range(100):
        pool.apply_async(job)
    pool.close()
    pool.join()
I have a list of input data and would like to process it in parallel, but processing each takes time as network io is involved. CPU usage is not a problem.
I would not like to have the overhead of additional processes, since I have a lot of things to process at a time and do not want to set up inter-process communication.
# the parallel execution equivalent of this?
import time

input_data = [1, 2, 3, 4, 5, 6, 7]
input_processor = time.sleep
results = map(input_processor, input_data)
The code I am using makes use of twisted.internet.defer so a solution involving that is fine as well.
You can easily define Worker threads that work in parallel till a queue is empty.
from threading import Thread
from collections import deque
import time

# Create a new class that inherits from Thread
class Worker(Thread):
    def __init__(self, inqueue, outqueue, func):
        '''
        A worker that calls func on objects in inqueue and
        pushes the result into outqueue

        runs until inqueue is empty
        '''
        self.inqueue = inqueue
        self.outqueue = outqueue
        self.func = func
        super().__init__()

    # override the run method, this is started when
    # you call worker.start()
    def run(self):
        while self.inqueue:
            data = self.inqueue.popleft()
            print('start')
            result = self.func(data)
            self.outqueue.append(result)
            print('finished')

def test(x):
    time.sleep(x)
    return 2 * x

if __name__ == '__main__':
    data = 12 * [1, ]
    queue = deque(data)
    result = deque()

    # create 3 workers working on the same input
    workers = [Worker(queue, result, test) for _ in range(3)]

    # start the workers
    for worker in workers:
        worker.start()

    # wait till all workers are finished
    for worker in workers:
        worker.join()

    print(result)
As expected, this runs in about 4 seconds.
One could also write a simple Pool class to get rid of the noise in the main function:
from threading import Thread
from collections import deque
import time

class Pool():
    def __init__(self, n_threads):
        self.n_threads = n_threads

    def map(self, func, data):
        inqueue = deque(data)
        result = deque()
        workers = [Worker(inqueue, result, func) for i in range(self.n_threads)]
        for worker in workers:
            worker.start()
        for worker in workers:
            worker.join()
        return list(result)

class Worker(Thread):
    def __init__(self, inqueue, outqueue, func):
        '''
        A worker that calls func on objects in inqueue and
        pushes the result into outqueue

        runs until inqueue is empty
        '''
        self.inqueue = inqueue
        self.outqueue = outqueue
        self.func = func
        super().__init__()

    # override the run method, this is started when
    # you call worker.start()
    def run(self):
        while self.inqueue:
            data = self.inqueue.popleft()
            print('start')
            result = self.func(data)
            self.outqueue.append(result)
            print('finished')

def test(x):
    time.sleep(x)
    return 2 * x

if __name__ == '__main__':
    data = 12 * [1, ]
    pool = Pool(6)
    result = pool.map(test, data)
    print(result)
You can use the multiprocessing module. Without knowing more about how you want it to process, you can use a pool of workers:
import multiprocessing as mp
import time

input_processor = time.sleep
core_num = mp.cpu_count()

pool = mp.Pool(processes=core_num)
# apply_async takes the function and its arguments separately
results = [pool.apply_async(input_processor, (i,)) for i in range(1, 7 + 1)]
result_final = [p.get() for p in results]

for n, res in enumerate(result_final, start=1):
    print(n, res)
The above keeps track of the order in which each task is done. It also does not allow the processes to talk to each other.
Edited:
To call this as a function, you should pass in the input data and the number of processors:
import numpy as np

def parallel_map(processor_count, input_data):
    pool = mp.Pool(processes=processor_count)
    results = [pool.apply_async(input_processor, (i,)) for i in input_data]
    result_final = np.array([p.get() for p in results])
    result_data = np.vstack((input_data, result_final))
    return result_data
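A quick usage sketch under the same assumptions as above; slow_square is a made-up stand-in for a real workload, since the module-level input_processor (time.sleep) returns None:

import time

def slow_square(x):  # hypothetical stand-in for a slow, network-bound task
    time.sleep(1)
    return x * x

input_processor = slow_square  # rebind the module-level name used by parallel_map

if __name__ == '__main__':
    data = [1, 2, 3, 4, 5, 6, 7]
    print(parallel_map(4, data))
    # first row: the inputs, second row: the corresponding results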
I assume you are using Twisted. In that case, you can launch multiple deferreds and wait for the completion of all of them using DeferredList:
http://twistedmatrix.com/documents/15.4.0/core/howto/defer.html#deferredlist
If input_processor is a non-blocking call (returns deferred):
from twisted.internet import defer

def main():
    input_data = [1, 2, 3, 4, 5, 6, 7]
    input_processor = asyn_function

    requests = []
    for entry in input_data:
        requests.append(defer.maybeDeferred(input_processor, entry))

    deferredList = defer.DeferredList(requests, consumeErrors=True)
    deferredList.addCallback(gotResults)
    return deferredList

def gotResults(results):
    for (success, value) in results:
        if success:
            print('Success:', value)
        else:
            print('Failure:', value.getErrorMessage())
In case input_processor is a long/blocking function, you can use deferToThread instead of maybeDeferred:
from twisted.internet import defer, threads

def main():
    input_data = [1, 2, 3, 4, 5, 6, 7]
    input_processor = syn_function

    requests = []
    for entry in input_data:
        requests.append(threads.deferToThread(input_processor, entry))

    deferredList = defer.DeferredList(requests, consumeErrors=True)
    deferredList.addCallback(gotResults)
    return deferredList