I'm writing my first multiprocessing script, which has to be converted to an executable file afterwards. I'd like to have an overview of how many files of a list have already been processed, but if I use tqdm for this, my executable file gets extremely large. So I'm looking for another way to get an impression of how long the task will still take. It doesn't matter whether it is a progress bar or just console output like "10 of 120 files done". Does anybody have a hint how to do this? I have to pass multiple arguments a, b, c, d, e to the multiprocessing tool, so I use "partial" in addition. I then get one return value for each processed file. This is my code as it works without showing a progress status:
import multiprocessing
from functools import partial

pool = multiprocessing.Pool(multiprocessing.cpu_count())
prod_x = partial(doSomething, a=a, b=b, c=c, d=0, e=e)
totalResult = list(pool.imap_unordered(prod_x, listOfFiles))
The doSomething function calculates something, and this is done for each file. The variable totalResult is a list of all the returned values.
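For reference, since imap_unordered yields each result as soon as it is ready, simply counting inside the consuming loop already gives an "x of y files done" readout. A minimal sketch along those lines (doSomething, the arguments a to e, and listOfFiles are the names from the snippet above, not defined here):

import multiprocessing
from functools import partial

if __name__ == "__main__":
    pool = multiprocessing.Pool(multiprocessing.cpu_count())
    prod_x = partial(doSomething, a=a, b=b, c=c, d=0, e=e)

    totalResult = []
    # enumerate gives a running count of finished files as results arrive
    for done, result in enumerate(pool.imap_unordered(prod_x, listOfFiles), start=1):
        totalResult.append(result)
        print("{} of {} files done".format(done, len(listOfFiles)))

    pool.close()
    pool.join()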
The most straightforward way to handle this is probably to use pool.apply_async to dispatch your jobs. You then need to define a callback that is executed each time a job is done.
If you want to inform the user about how many jobs have been executed so far, the callback needs some "memory" of the number of jobs already executed. This can either be a global variable or a class, which I find preferable.
Combining these points, a solution could look something like:
import multiprocessing
import time


class ProgressUpdater:
    def __init__(self, num_items):
        self.num_items = num_items
        self.num_processed = 0

    def update(self, data):
        # Called in the parent process each time a job finishes.
        self.num_processed += 1
        print(f"Done processing {self.num_processed} of {self.num_items} inputs")


def func(item):
    time.sleep(item // 10)
    return item // 2


if __name__ == "__main__":
    item_list = [3, 5, 7, 32, 6, 21, 12, 1, 7]
    progress_updater = ProgressUpdater(len(item_list))

    with multiprocessing.Pool(3) as pool:
        result_objects = [
            pool.apply_async(func, (item,), callback=progress_updater.update)
            for item in item_list
        ]
        results = [result_object.get() for result_object in result_objects]
        pool.close()
        pool.join()
    print(results)
Now, to fit your needs, you need to massage it slightly, by using your partial function etc.
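A rough sketch of that last step, reusing the question's partial and the ProgressUpdater class from above (doSomething, its arguments, and listOfFiles are the question's names, not defined here):

from functools import partial
import multiprocessing

if __name__ == "__main__":
    prod_x = partial(doSomething, a=a, b=b, c=c, d=0, e=e)
    progress_updater = ProgressUpdater(len(listOfFiles))

    with multiprocessing.Pool(multiprocessing.cpu_count()) as pool:
        result_objects = [
            pool.apply_async(prod_x, (filename,), callback=progress_updater.update)
            for filename in listOfFiles
        ]
        totalResult = [r.get() for r in result_objects]
        pool.close()
        pool.join()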
The result array is displayed as empty after trying to append values into it. I have even declared the result as global inside the function. Any suggestions?
I tried this:
import multiprocessing

res = []
inputData = [a, b, c, d]

def function(data):
    values = [some_Number_1, some_Number_2]
    return values

def parallel_run(function, inputData):
    cpu_no = 4
    if len(inputData) < cpu_no:
        cpu_no = len(inputData)
    p = multiprocessing.Pool(cpu_no)
    global resultsAr
    resultsAr = p.map(function, inputData, chunksize=1)
    p.close()
    p.join()

print('res = ', res)
This happens since you're misunderstanding the basic point of multiprocessing: the child process spawned by multiprocessing.Process is separate from the parent process, and thus any modifications to data (including global variables) in the child process(es) are not propagated into the parent.
You will need to use multiprocessing-specific data types (queues and pipes), or the higher-level APIs provided by e.g. multiprocessing.Pool, to get data out of the child process(es).
For your application, the high-level recipe would be
import multiprocessing

def square(v):
    return v * v

def main():
    arr = [1, 2, 3, 4, 5]
    with multiprocessing.Pool() as p:
        squared = p.map(square, arr)
    print(squared)

if __name__ == "__main__":
    main()
However, you'll likely find that this is massively slower than not using multiprocessing at all, due to the overhead involved in such a small task.
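If you do want the lower-level route mentioned above, a minimal sketch with a multiprocessing.Queue might look like this (same toy squaring task; this is just one of several possible patterns):

import multiprocessing

def square_worker(values, queue):
    # put each result on the queue so the parent process can read it
    for v in values:
        queue.put(v * v)

if __name__ == "__main__":
    arr = [1, 2, 3, 4, 5]
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=square_worker, args=(arr, queue))
    proc.start()
    squared = [queue.get() for _ in arr]   # drain the queue before joining
    proc.join()
    print(squared)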
Welcome to StackOverflow, Suyash!
The problem is that multiprocessing.Process is, as its name says, a separate process. You can imagine it almost as if you were running your script again from the terminal, with very little connection to the parent script.
Therefore, it has its own copy of the result array, which it modifies and prints.
The result in the "main" process is unmodified.
To convince yourself of this, try to print id(res) in both __main__ and in square(). You'll see they are different.
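A small sketch to convince yourself that the child's modifications never reach the parent (the id() check suggested above can be added in the same two places):

import multiprocessing

res = []

def square(v):
    res.append(v * v)          # modifies the child's copy of res only
    print("in child:", res)
    return v * v

if __name__ == "__main__":
    with multiprocessing.Pool(2) as p:
        squared = p.map(square, [1, 2, 3])
    print("in parent, res is still:", res)        # [] -- unchanged
    print("values returned through the pool:", squared)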
I want to use Ray to parallelize some computations in python. As part of this, I want a method which takes the desired number of worker processes as an argument.
The introductory articles on Ray that I can find say to specify the number of processes at the top level, which is different from what I want. Is it possible to specify it similarly to how one would when instantiating e.g. a multiprocessing Pool object, as illustrated below?
Example using multiprocessing:
import multiprocessing as mp

def f(x):
    return 2 * x

def compute_results(x, n_jobs=4):
    with mp.Pool(n_jobs) as pool:
        res = pool.map(f, x)
    return res

data = [1, 2, 3]
results = compute_results(data, n_jobs=4)
Example using Ray:

import ray

# Tutorials say to designate the number of cores already here
@ray.remote(num_cpus=4)
def f(x):
    return 2 * x

def compute_results(x):
    result_ids = [f.remote(val) for val in x]
    res = ray.get(result_ids)
    return res
If you run f.remote() four times then Ray will create four worker processes to run it.
Btw, you can use multiprocessing.Pool with Ray: https://docs.ray.io/en/latest/ray-more-libs/multiprocessing.html
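A minimal sketch of that last suggestion, assuming the Pool from the linked Ray docs behaves as a drop-in replacement for multiprocessing.Pool (including the usual map/close/join methods):

from ray.util.multiprocessing import Pool  # Ray-backed drop-in for multiprocessing.Pool

def f(x):
    return 2 * x

def compute_results(x, n_jobs=4):
    pool = Pool(n_jobs)          # n_jobs Ray worker processes back this pool
    res = pool.map(f, x)
    pool.close()
    pool.join()
    return res

if __name__ == "__main__":
    print(compute_results([1, 2, 3], n_jobs=4))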
Suppose I have two independent functions. I'd like to call them concurrently, using python's concurrent.futures.ThreadPoolExecutor. Is there a way to call them using Executor and ensure they are returned in order of submission?
I understand this is possible with Executor.map, but I am looking to parallelize two separate functions, not one function with an iterable input.
I have example code below, but it doesn't guarantee that fn_a will return first (by design of the wait function).
from concurrent.futures import ThreadPoolExecutor, wait
import time

def fn_a():
    t_sleep = 0.5
    print("fn_a: Wait {} seconds".format(t_sleep))
    time.sleep(t_sleep)
    ret = t_sleep * 5  # Do unique work
    return "fn_a: return {}".format(ret)

def fn_b():
    t_sleep = 1.0
    print("fn_b: Wait {} seconds".format(t_sleep))
    time.sleep(t_sleep)
    ret = t_sleep * 10  # Do unique work
    return "fn_b: return {}".format(ret)

if __name__ == "__main__":
    with ThreadPoolExecutor() as executor:
        futures = []
        futures.append(executor.submit(fn_a))
        futures.append(executor.submit(fn_b))

        complete_futures, incomplete_futures = wait(futures)
        for f in complete_futures:
            print(f.result())
I'm also interested in knowing if there is a way to do this with joblib
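Regarding the joblib part: a minimal sketch; joblib's Parallel returns the results in the order the delayed calls were submitted (fn_a and fn_b are the functions from the code above):

from joblib import Parallel, delayed

# each delayed(...) wraps one zero-argument call; results come back in submission order
results = Parallel(n_jobs=2, prefer="threads")(
    delayed(fn)() for fn in (fn_a, fn_b)
)
print(results)   # fn_a's result first, then fn_b's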
I think I found a reasonable option using lambda and partials. The partials allow me to pass arguments to some functions in the parallelized iterable, but not others.
from functools import partial
import concurrent.futures

fns = [partial(fn_a), partial(fn_b)]

data = []
with concurrent.futures.ThreadPoolExecutor() as executor:
    for result in executor.map(lambda x: x(), fns):
        data.append(result)
Since it is using executor.map, it returns in order.
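As a side note, the original submit-based version can also preserve submission order by iterating the futures list directly instead of the set returned by wait. A short sketch, reusing fn_a and fn_b from the question:

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor() as executor:
    futures = [executor.submit(fn_a), executor.submit(fn_b)]
    # iterating the list (rather than the completed set from wait) keeps submission order;
    # each .result() blocks until that particular future has finished
    for future in futures:
        print(future.result())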
I am new to Python and have tried a lot of approaches to multiprocessing in Python without much benefit.
I have a task of implementing 3 methods x, y and z. What I have tried till now is:
def foo():
    # iterate over the lines in a text file
    for line in lines:
        call_method_x()                  # result from method x, say x1
        call_method_y()                  # this uses x1; result from method y, say y1
        for i in range(4):
            multiprocessing.Process(target=call_method_z())   # this uses y1
I used multiprocessing here on method_z as it is the most CPU-intensive.
I tried it another way:
def foo():
    call_method_x()
    call_method_y()
    call_method_z()

def main():
    import concurrent.futures
    with concurrent.futures.ProcessPoolExecutor() as executor:
        executor.map(foo())
Which one seems more appropriate? I checked the execution time, but there was not much of a difference. The thing is that method_x(), then method_y(), and then method_z() have to run in that order, since each uses the output of the previous one. Both of these ways work, but there is no significant difference from using multiprocessing in either of them.
Please let me know if I am missing something here.
You can use multiprocessing.Pool from Python, something like:

from multiprocessing import Pool

def method_x(line):
    # do something with the line
    return line

def method_y(line):
    x1 = method_x(line)
    # do something with x1
    return x1

def method_z(line):
    y1 = method_y(line)
    # do something with y1
    return y1

def call_home(data):
    p = Pool(6)
    results = p.map(method_z, data)
    p.close()
    p.join()
    return results

if __name__ == "__main__":
    with open(<path-to-file>) as f:
        data = f.readlines()
    results = call_home(data)
First you read all the lines into the variable data. Then you start 6 processes and let each line be processed by any of the 6 processes.
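If you prefer the concurrent.futures API from the second attempt in the question, roughly the same recipe would look like this (a sketch, with method_z as defined above):

import concurrent.futures

def process_lines(lines, max_workers=6):
    # each worker runs the full x -> y -> z chain for one line
    with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(method_z, lines))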
I have a dataset df of trader transactions.
I have 2 levels of for loops as follows:
smartTrader = []

for asset in range(len(Assets)):
    df = df[df['Assets'] == asset]
    # I have some more calculations here
    for trader in range(len(df['TraderID'])):
        # I have some calculations here; if the trader is successful, I add his ID
        # to the list as follows
        smartTrader.append(df['TraderID'][trader])
    # some more calculations here which are related to the first for loop
I would like to parallelise the calculations for each asset in Assets, and I also want to parallelise the calculations for each trader for every asset. After ALL these calculations are done, I want to do additional analysis based on the list of smartTrader.
This is my first attempt at parallel processing, so please be patient with me, and I appreciate your help.
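For reference, one way to sidestep nested parallelism entirely is to flatten the two loops into independent (asset, trader) tasks and hand them all to a single pool. A rough sketch under assumed placeholders: is_successful and traders_for are hypothetical helpers standing in for your calculations, while Assets and smartTrader are the names from the question:

import multiprocessing

def check_trader(task):
    asset, trader_id = task
    # placeholder: return the ID if this trader is "successful" for this asset
    return trader_id if is_successful(asset, trader_id) else None

if __name__ == "__main__":
    tasks = [(asset, trader_id)
             for asset in Assets
             for trader_id in traders_for(asset)]
    with multiprocessing.Pool() as pool:
        results = pool.map(check_trader, tasks)
    smartTrader = [tid for tid in results if tid is not None]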
If you use pathos, which provides a fork of multiprocessing, you can easily nest parallel maps. pathos is built for easily testing combinations of nested parallel maps, which are direct translations of nested for loops.
It provides a selection of maps that are blocking, non-blocking, iterative, asynchronous, serial, parallel, and distributed.
>>> from pathos.pools import ProcessPool, ThreadPool
>>> amap = ProcessPool().amap
>>> tmap = ThreadPool().map
>>> from math import sin, cos
>>> print(amap(tmap, [sin,cos], [range(10),range(10)]).get())
[[0.0, 0.8414709848078965, 0.9092974268256817, 0.1411200080598672, -0.7568024953079282, -0.9589242746631385, -0.27941549819892586, 0.6569865987187891, 0.9893582466233818, 0.4121184852417566], [1.0, 0.5403023058681398, -0.4161468365471424, -0.9899924966004454, -0.6536436208636119, 0.2836621854632263, 0.9601702866503661, 0.7539022543433046, -0.14550003380861354, -0.9111302618846769]]
This example uses a processing pool and a thread pool: the thread map call is blocking, while the process map call is asynchronous (note the get at the end of the last line).
Get pathos here: https://github.com/uqfoundation
or with:
$ pip install git+https://github.com/uqfoundation/pathos.git#master
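Adapted to the asset/trader structure of the question, the nested maps might look roughly like this (a sketch; process_asset and process_trader are placeholders for your actual calculations):

from pathos.pools import ProcessPool, ThreadPool

def process_trader(trader_id):
    # placeholder for the per-trader calculation
    return trader_id

def process_asset(asset):
    traders = range(5)                      # placeholder for the traders of this asset
    # inner, blocking thread map over the traders of one asset
    return ThreadPool().map(process_trader, traders)

# outer, asynchronous process map over all assets
results = ProcessPool().amap(process_asset, range(10)).get()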
Nested parallelism can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.
Assume you want to parallelize the following nested program
def inner_calculation(asset, trader):
return trader
def outer_calculation(asset):
return asset, [inner_calculation(asset, trader) for trader in range(5)]
inner_results = []
outer_results = []
for asset in range(10):
outer_result, inner_result = outer_calculation(asset)
outer_results.append(outer_result)
inner_results.append(inner_result)
# Then you can filter inner_results to get the final output.
Below is the Ray code parallelizing the above code:
Use the @ray.remote decorator for each function that we want to execute concurrently in its own process. A remote function returns a future (i.e., an identifier of the result) rather than the result itself.
When invoking a remote function f(), use the remote modifier, i.e., f.remote().
Use the ids_to_vals() helper function to convert a nested list of ids to values.
Note that the program structure is identical. You only need to add remote and then convert the futures (ids) returned by the remote functions to values using the ids_to_vals() helper function.
import ray
ray.init()

# Define inner calculation as a remote function.
@ray.remote
def inner_calculation(asset, trader):
    return trader

# Define outer calculation to be executed as a remote function.
@ray.remote(num_return_vals=2)
def outer_calculation(asset):
    return asset, [inner_calculation.remote(asset, trader) for trader in range(5)]

# Helper to convert a nested list of object ids to a nested list of corresponding objects.
def ids_to_vals(ids):
    if isinstance(ids, ray.ObjectID):
        ids = ray.get(ids)
    if isinstance(ids, ray.ObjectID):
        return ids_to_vals(ids)
    if isinstance(ids, list):
        results = []
        for id in ids:
            results.append(ids_to_vals(id))
        return results
    return ids

outer_result_ids = []
inner_result_ids = []
for asset in range(10):
    outer_result_id, inner_result_id = outer_calculation.remote(asset)
    outer_result_ids.append(outer_result_id)
    inner_result_ids.append(inner_result_id)

outer_results = ids_to_vals(outer_result_ids)
inner_results = ids_to_vals(inner_result_ids)
There are a number of advantages of using Ray over the multiprocessing module. In particular, the same code will run on a single machine as well as on a cluster of machines. For more advantages of Ray see this related post.
Probably threading, from the standard Python library, is the most convenient approach:
import threading

def worker(id):
    # Do your calculations here
    return

threads = []
for asset in range(len(Assets)):
    df = df[df['Assets'] == asset]
    for trader in range(len(df['TraderID'])):
        t = threading.Thread(target=worker, args=(trader,))
        threads.append(t)
        t.start()

# add a semaphore here if you need to synchronize results for all traders
Instead of using a for loop, use map:

smartTrader = []

m = map(calculations_as_a_function,
        [df[df['Assets'] == asset] for asset in range(len(Assets))])

smartTrader.extend(m)
From then on, you can try different parallel map implementations, such as multiprocessing's or stackless'.
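For instance, a minimal sketch of swapping in multiprocessing's map (assuming calculations_as_a_function is a module-level function and the per-asset DataFrames are picklable; the names come from the snippet above):

import multiprocessing

if __name__ == "__main__":
    asset_frames = [df[df['Assets'] == asset] for asset in range(len(Assets))]
    with multiprocessing.Pool() as pool:
        smartTrader = pool.map(calculations_as_a_function, asset_frames)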