python multiprocess very slow - python

I'm having trouble using Python multiprocessing.
I'm trying with a minimal version of the code:
import os
os.environ["OMP_NUM_THREADS"] = "1"         # just in case the system uses multithreading somehow
os.environ["OPENBLAS_NUM_THREADS"] = "1"    # just in case the system uses multithreading somehow
os.environ["MKL_NUM_THREADS"] = "1"         # just in case the system uses multithreading somehow
os.environ["VECLIB_MAXIMUM_THREADS"] = "1"  # just in case the system uses multithreading somehow
os.environ["NUMEXPR_NUM_THREADS"] = "1"     # just in case the system uses multithreading somehow

import numpy as np
from datetime import datetime as dt
from multiprocessing import Pool
from pandas import DataFrame as DF

def trytrytryshare(times):
    i = 0
    for j in range(times):
        i += 1
    return

def trymultishare(thread=70, times=10):
    st = dt.now()
    args_l = [(times,) for i in range(thread)]
    print(st)
    p = Pool(thread)
    for i in range(len(args_l)):
        p.apply_async(func=trytrytryshare, args=(args_l[i]))
    p.close()
    p.join()
    timecost = (dt.now() - st).total_seconds()
    print('%d threads finished in %f secs' % (thread, timecost))
    return timecost

if __name__ == '__main__':
    res = DF(columns=['thread', 'timecost'])
    n = 0
    for j in range(5):
        for i in range(1, 8, 3):
            timecost = trymultishare(thread=i, times=int(1e8))
            res.loc[n] = [i, timecost]
            n += 1
        timecost = trymultishare(thread=70, times=int(1e8))
        res.loc[n] = [70, timecost]
        n += 1
    res_sum = res.groupby('thread').mean()
    res_sum['decay'] = res_sum.loc[1, 'timecost'] / res_sum['timecost']
On my own computer (8 cores):
On my server (80 cores; I'm the only one using it):
I tried again, making each single job run longer.
The decay is really bad...
Any idea how to "fix" this, or is this just what I can get when using multiprocessing?
Thanks

The way you're timing apply_async is flawed. You won't know when the subprocesses have completed unless you wait for their results.
It's a good idea to work out an optimum process pool size based on the number of CPUs. The code that follows isn't necessarily the best for all cases, but it's what I use.
You shouldn't set the pool size to the number of processes you intend to run. That's the whole point of using a pool.
So here's a simpler example of how you could test subprocess performance.
from multiprocessing import Pool
from time import perf_counter
from os import cpu_count

def process(n):
    r = 0
    for _ in range(n):
        r += 1
    return r

POOL = max(cpu_count() - 2, 1)
N = 1_000_000

def main(procs):
    # no need for the pool size to be bigger than the number of processes to be run
    poolsize = min(POOL, procs)
    with Pool(poolsize) as pool:
        _start = perf_counter()
        for result in [pool.apply_async(process, (N,)) for _ in range(procs)]:
            result.wait()  # wait for async processes to terminate
        _end = perf_counter()
    print(f'Duration for {procs} processes with pool size of {poolsize} = {_end-_start:.2f}s')

if __name__ == '__main__':
    print(f'CPU count = {cpu_count()}')
    for procs in range(10, 101, 10):
        main(procs)
Output:
CPU count = 20
Duration for 10 processes with pool size of 10 = 0.12s
Duration for 20 processes with pool size of 18 = 0.19s
Duration for 30 processes with pool size of 18 = 0.18s
Duration for 40 processes with pool size of 18 = 0.28s
Duration for 50 processes with pool size of 18 = 0.30s
Duration for 60 processes with pool size of 18 = 0.39s
Duration for 70 processes with pool size of 18 = 0.42s
Duration for 80 processes with pool size of 18 = 0.45s
Duration for 90 processes with pool size of 18 = 0.54s
Duration for 100 processes with pool size of 18 = 0.59s

My guess is that you're observing the cost of spawning new processes, since apply_async returns immediately. It's much cheaper to spawn one process in the case of thread == 1 than to spawn 70 processes (your last case, with the worst decay).
The fact that the server with 80 cores performs better than your laptop with 8 cores could be due to the server having better hardware in general (better heat removal, faster CPUs, etc.) or running a different OS. Benchmarking across different machines is non-trivial.
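One rough way to see this (a sketch, not the original code) is to time the Pool construction separately from the work itself; the exact split depends on the platform and start method:
import multiprocessing
from time import perf_counter

def busy(n):
    i = 0
    for _ in range(n):
        i += 1
    return i

if __name__ == '__main__':
    t0 = perf_counter()
    with multiprocessing.Pool(70) as pool:         # creating 70 workers has its own cost
        t1 = perf_counter()
        async_results = [pool.apply_async(busy, (10**7,)) for _ in range(70)]
        _ = [r.get() for r in async_results]       # block until the actual work is done
        t2 = perf_counter()
    print('pool start-up: %.2fs, work: %.2fs' % (t1 - t0, t2 - t1))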

Related

Why is the parallel version of my code slower than the serial one?

I am trying to run a model multiple times, which is time-consuming, so I'm trying to make it parallel. However, it ends up being slower: the parallel version takes 40 seconds while the serial one takes 34 seconds.
# !pip install --target=$nb_path transformers
from transformers import pipeline

oracle = pipeline(model="deepset/roberta-base-squad2")
question = 'When did the first extension of the Athens Tram take place?'

print(data)
print("Data size is: ", len(data))

parallel = True
if parallel == False:
    counter = 0
    l = len(data)
    cr = []
    for words in data:
        counter += 1
        print(counter, " out of ", l)
        cr.append(oracle(question=question, context=words))
elif parallel == True:
    from multiprocessing import Process, Queue
    import multiprocessing

    no_CPU = multiprocessing.cpu_count()
    print("Number of cpu : ", no_CPU)
    l = len(data)

    def answer_question(data, no_CPU, sub_no):
        cr_process = []
        counter_process = 0
        for words in data:
            counter_process += 1
            l_data = len(data)
            # print("n is", no_CPU)
            # print("l is", l_data)
            print(counter_process, " out of ", l_data, "in subprocess number", sub_no)
            cr_process.append(oracle(question=question, context=words))
        # Q.put(cr_process)
        cr.append(cr_process)

    n = no_CPU   # number of subprocesses
    m = l // n   # number of data items the n-1 first subprocesses will handle
    res = l % n  # number of extra data samples the last subprocess has
    # print(m)
    # print(res)
    procs = []
    # instantiating processes with arguments
    for x in range(n - 1):
        # print(x*m)
        # print((x+1)*m)
        proc = Process(target=answer_question, args=(data[x*m:(x+1)*m], n, x+1,))
        procs.append(proc)
        proc.start()
    proc = Process(target=answer_question, args=(data[(n-1)*m:n*m+res], n, n,))
    procs.append(proc)
    proc.start()
    # complete the processes
    for proc in procs:
        proc.join()
A sample of the data variable can be found here (to not flood the question). The parallel argument controls the serial and the parallel version. So my question is: why does this happen, and how do I make the parallel version faster? I use Google Colab, so it has 2 CPU cores available; that's what multiprocessing.cpu_count() says, at least.
Your pipeline is already running on multiple CPUs even when run as one process; the transformers code is optimized to run multi-threaded.
When, on top of that, you create multiple processes, you lose time building the processes and switching between them.
To verify this, look at your CPU utilization while running the so-called "single process" version: you should already see all cores at max, so creating extra parallel processes is not going to save you any time.
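One way to check this, assuming the pipeline runs on a PyTorch backend (an assumption; the backend isn't stated in the question), is to look at and pin the intra-op thread count:
import torch
from transformers import pipeline

print("intra-op threads:", torch.get_num_threads())  # usually > 1 by default

# Pinning PyTorch to a single thread shows how much of the "serial" version's
# speed actually came from implicit multi-threading inside the model.
torch.set_num_threads(1)
oracle = pipeline(model="deepset/roberta-base-squad2")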

Parallelization with ray not working as expected

I am a beginner with parallel processing and am currently experimenting with a simple program to understand how Ray works.
import numpy as np
import time
from pprint import pprint
import ray

ray.init(num_cpus=4)  # Specify this system has 4 CPUs.

data_rows = 800
data_cols = 10000
batch_size = int(data_rows/4)

# Prepare data
np.random.RandomState(100)
arr = np.random.randint(0, 100, size=[data_rows, data_cols])
data = arr.tolist()

# Solution without parallelization
def howmany_within_range(row, minimum, maximum):
    """Returns how many numbers lie within `maximum` and `minimum` in a given `row`"""
    count = 0
    for n in row:
        if minimum <= n <= maximum:
            count = count + 1
    return count

results = []
start = time.time()
for row in data:
    results.append(howmany_within_range(row, minimum=75, maximum=100))
end = time.time()
print("Without parallelization")
print("-----------------------")
pprint(results[:5])
print("Total time: ", end-start, "sec")

# Parallelization with ray
results = []
y = []
z = []
w = []

@ray.remote
def solve(data, minimum, maximum):
    count = 0
    count_row = 0
    for i in data:
        for n in i:
            if minimum <= n <= maximum:
                count = count + 1
        count_row = count
        count = 0
    return count_row

start = time.time()
results = ray.get([solve.remote(data[i:i+1], 75, 100) for i in range(0, batch_size)])
y = ray.get([solve.remote(data[i:i+1], 75, 100) for i in range(1*batch_size, 2*batch_size)])
z = ray.get([solve.remote(data[i:i+1], 75, 100) for i in range(2*batch_size, 3*batch_size)])
w = ray.get([solve.remote(data[i:i+1], 75, 100) for i in range(3*batch_size, 4*batch_size)])
end = time.time()

results += y+z+w
print("With parallelization")
print("--------------------")
print(results[:5])
print("Total time: ", end-start, "sec")
I am getting much slower performance with Ray:
$ python3 raytest.py
Without parallelization
-----------------------
[2501, 2543, 2530, 2410, 2467]
Total time: 0.5162293910980225 sec
(solve pid=26294)
With parallelization
--------------------
[2501, 2543, 2530, 2410, 2467]
Total time: 1.1760196685791016 sec
In fact, if I scale up the input data, I get messages in the terminal with the pid of the function and the program stalls.
Essentially, I'm trying to split the computation into batches of rows and assign each batch to a CPU core. What am I doing wrong?
There are two main problems when it comes to multiprocessing (in your code):
there's an overhead associated with spawning the new processes to do your work;
there's an overhead associated with transferring data between different processes.
In order to spawn a new process, a new instance of the Python interpreter is created and initialized (due to the GIL). Also, when you transfer data between processes, that data has to be serialized/deserialized at the sender/receiver, which in your program happens twice (once from the main process to the workers, and again from the workers back to the main process). In short, your program is spending all its time paying this overhead instead of doing the actual computation.
If you want to benefit from multiprocessing in Python, you should have more computation done in the workers with as little data transfer as possible. The way I usually decide whether multiprocessing is a good idea is to check whether the task would take more than 5 seconds to complete on a single CPU.
Another good idea for reducing data transfer is slicing your array into chunks (multiple rows) instead of passing a single row per function call, as each row has to be serialized separately, which adds extra overhead; a sketch follows.
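A minimal sketch of that chunking idea, adapted to the question's data shapes (the function and variable names here are illustrative, not the original code):
import numpy as np
import ray

ray.init(num_cpus=4)

@ray.remote
def count_in_range_chunk(rows, minimum, maximum):
    # One task processes a whole chunk of rows, so serialization happens once per chunk.
    return [sum(minimum <= n <= maximum for n in row) for row in rows]

data = np.random.randint(0, 100, size=[800, 10000]).tolist()
chunks = [data[i:i + 200] for i in range(0, len(data), 200)]  # 4 tasks instead of 800

per_chunk = ray.get([count_in_range_chunk.remote(chunk, 75, 100) for chunk in chunks])
results = [count for chunk_counts in per_chunk for count in chunk_counts]
print(results[:5])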

Increase number of CPUs (ncores) has negative impact on multiprocessing pool

I have the following code and I want to spread the task across multiple processes. After experimenting, I realized that increasing the number of CPU cores negatively impacts the execution time.
I have 8 cores on my machine
Case 1: without using multiprocessing
Execution time: 106 minutes
Case 2: with multiprocessing using ncores = 4
Execution time: 37 minutes
Case 3: with multiprocessing using ncores = 7
Execution time: 40 minutes
The code:
import time
import functools
import multiprocessing as mp

def _fun(i, args1=10):
    # Sort matrix W
    # For loop 1 on matrix M
    # For loop 2 on matrix Y
    return value

def run1(ncores=mp.cpu_count()):
    ncores = ncores - 4  # use 4 and 1 to have ncores = 4 and 7
    _f = functools.partial(_fun, args1=x)
    with mp.Pool(ncores) as pool:
        result = pool.map(_f, range(n))
    return [t for t in result]

start = time.time()
list1 = run1()
end = time.time()
print('time {0} minutes '.format((end - start)/60))
My question: what is the best practice for using multiprocessing? My understanding was that the more CPU cores we use, the faster it gets.
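One practical approach is simply to sweep the pool size on a representative workload and pick the fastest; a sketch (with a stand-in function, since _fun is not shown) might look like this:
import time
import multiprocessing as mp

def workload(i):
    # Stand-in for _fun: some CPU-bound work per item.
    return sum(j * j for j in range(200000))

if __name__ == '__main__':
    n = 200
    for ncores in (1, 2, 4, 6, 7, 8):
        start = time.time()
        with mp.Pool(ncores) as pool:
            pool.map(workload, range(n))
        print('ncores=%d: %.2f s' % (ncores, time.time() - start))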

Parallelizing through Multi-threading and Multi-processing taking significantly more time than serial

I'm trying to learn how to do parallel programming in Python. I wrote a simple integer-square function and then ran it in serial, multi-threaded, and multi-process modes:
import time
import multiprocessing, threading
import random

def calc_square(numbers):
    sq = 0
    for n in numbers:
        sq = n*n

def splita(list, n):
    a = [[] for i in range(n)]
    counter = 0
    for i in range(0, len(list)):
        a[counter].append(list[i])
        if len(a[counter]) == len(list)/n:
            counter = counter + 1
            continue
    return a

if __name__ == "__main__":
    random.seed(1)
    arr = [random.randint(1, 11) for i in xrange(1000000)]
    print "init completed"

    start_time2 = time.time()
    calc_square(arr)
    end_time2 = time.time()
    print "serial: " + str(end_time2 - start_time2)

    newarr = splita(arr, 8)
    print 'split complete'

    start_time = time.time()
    for i in range(8):
        t1 = threading.Thread(target=calc_square, args=(newarr[i],))
        t1.start()
        t1.join()
    end_time = time.time()
    print "mt: " + str(end_time - start_time)

    start_time = time.time()
    for i in range(8):
        p1 = multiprocessing.Process(target=calc_square, args=(newarr[i],))
        p1.start()
        p1.join()
    end_time = time.time()
    print "mp: " + str(end_time - start_time)
Output:
init completed
serial: 0.0640001296997
split complete
mt: 0.0599999427795
mp: 2.97099995613
However, as you can see, something weird happened: mt takes about the same time as serial, and mp actually takes significantly longer (almost 50 times longer).
What am I doing wrong? Could someone point me in the right direction for learning parallel programming in Python?
Edit 01
Looking at the comments, I see that perhaps the function not returning anything seems pointless. The reason I'm even trying this is because previously I tried the following add function:
def addi(numbers):
    sq = 0
    for n in numbers:
        sq = sq + n
    return sq
I tried returning the sum of each part and then adding those partial sums serially, so at least I could see some performance improvement over a pure serial implementation. However, I couldn't figure out how to store and use the returned values, and that's why I'm trying something even simpler: just dividing up the array and running a simple function on it. A sketch of one way to collect return values follows below.
Thanks!
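As for the return-value point above, here is a minimal sketch (Python 3, names illustrative, not the original code) of collecting results with a Pool rather than bare Process objects:
import multiprocessing
import random

def addi(numbers):
    sq = 0
    for n in numbers:
        sq = sq + n
    return sq

if __name__ == "__main__":
    random.seed(1)
    arr = [random.randint(1, 11) for _ in range(1000000)]
    chunks = [arr[i::8] for i in range(8)]        # split into 8 roughly equal parts
    with multiprocessing.Pool(8) as pool:
        partial_sums = pool.map(addi, chunks)     # one return value per chunk
    print("total:", sum(partial_sums))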
I think that multiprocessing takes quite a long time to create and start each process. I have changed the program to make arr 10 times the size, changed the way the processes are started, and there is a slight speed-up:
(Also note this is Python 3.)
import time
import multiprocessing, threading
from multiprocessing import Queue
import random

def calc_square_q(numbers, q):
    while q.empty():
        pass
    return calc_square(numbers)

if __name__ == "__main__":
    random.seed(1)  # note how big arr is now vvvvvvv
    arr = [random.randint(1, 11) for i in range(10000000)]
    print("init completed")

    # ...
    # other stuff as before
    # ...

    processes = []
    q = Queue()
    for arrs in newarr:
        processes.append(multiprocessing.Process(target=calc_square_q, args=(arrs, q)))

    print('start processes')
    for p in processes:
        p.start()  # even tho' each process is started it waits...

    print('join processes')
    q.put(None)  # ... for q to become not empty.
    start_time = time.time()
    for p in processes:
        p.join()
    end_time = time.time()
    print("mp: " + str(end_time - start_time))
Also notice above how I create and start the processes in two different loops, and then finally join with the processes in a third loop.
Output:
init completed
serial: 0.53214430809021
split complete
start threads
mt: 0.5551605224609375
start processes
join processes
mp: 0.2800724506378174
Another factor of 10 increase in size of arr:
init completed
serial: 5.8455305099487305
split complete
start threads
mt: 5.411392450332642
start processes
join processes
mp: 1.9705185890197754
And yes, I've also tried this in python 2.7, although Threads seemed slower.

multithreading using pool.map takes longer time than normal single process

I want to parallelize a task using Python, so I read about pool.map, where data is divided into multiple chunks and each chunk is processed by a process (thread).
I have a huge dictionary (2 million words) and a text file of sentences; the idea is to divide the sentences into words, match each word against the existing dictionary, and do further processing based on the result. Before doing that, I wrote a dummy program to check the functionality of pool.map, but it is not working as expected (i.e. a single process takes less time than multiple processes). (I am using "process" and "thread" interchangeably because I think every thread is nothing but a process here.)
from multiprocessing.dummy import Pool as ThreadPool  # import assumed; not shown in the original snippet
import time

def add_1(x):
    return (x*x+x)

def main():
    iter = 10000000
    num = [i for i in xrange(iter)]
    threads = 4
    pool = ThreadPool(threads)
    start = time.time()
    results = pool.map(add_1, num, iter/threads)
    pool.close()
    pool.join()
    end = time.time()
    print('Total Time Taken = %f') % (end-start)
Total Time Taken:
Thread 1 Total Time Taken = 2.252361
Thread 2 Total Time Taken = 2.560798
Thread 3 Total Time Taken = 2.938640
Thread 4 Total Time Taken = 3.048179
Just using pool = ThreadPool()
def main():
    num = [i for i in xrange(iter)]
    #pool = ThreadPool(threads)
    pool = ThreadPool()
    start = time.time()
    #results = pool.map(add_1,num,iter/threads)
    results = pool.map(add_1, num)
    pool.close()
    pool.join()
    end = time.time()
    print('Total Time Taken = %f') % (end-start)
Total Time Taken = 3.031125
Normal for loop execution:
def main():
    iter = 10000000
    start = time.time()
    for k in xrange(iter):
        add_1(k)
    end = time.time()
    print('Total Time normally = %f') % (end-start)
Total Time normally = 1.087591
Config:
I am using Python 2.7.6.
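For comparison only (not part of the original post), here is a sketch of the same benchmark with a process-based multiprocessing.Pool, which is not serialized by the GIL for CPU-bound work the way a thread pool is (Python 3 syntax):
from multiprocessing import Pool
import time

def add_1(x):
    return x * x + x

if __name__ == '__main__':
    n = 10000000
    num = list(range(n))
    pool = Pool(4)
    start = time.time()
    results = pool.map(add_1, num, n // 4)  # chunksize as in the original call
    pool.close()
    pool.join()
    end = time.time()
    print('Total Time Taken = %f' % (end - start))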
