Using multiprocessing, I tried to parallelize a function, but I see no performance improvement:
from MMTK import *
from MMTK.Trajectory import Trajectory, TrajectoryOutput, SnapshotGenerator
from MMTK.Proteins import Protein, PeptideChain
import numpy as np
import time
filename = 'traj_prot_nojump.nc'
trajectory = Trajectory(None, filename)
def calpha_2dmap_mult(trajectory=trajectory, t=range(0, len(trajectory))):
    dist = []
    universe = trajectory.universe
    proteins = universe.objectList(Protein)
    chain = proteins[0][0]
    traj = trajectory[t]
    dt = 1000  # calculate distances every 1000 steps
    for n, step in enumerate(traj):
        if n % dt == 0:
            universe.setConfiguration(step['configuration'])
            for i in np.arange(len(chain)-1):
                for j in np.arange(len(chain)-1):
                    dist.append(universe.distance(chain[i].peptide.C_alpha,
                                                  chain[j].peptide.C_alpha))
    return dist
c0 = time.time()
dist1 = calpha_2dmap_mult(trajectory, range(0,11001))
c1 = time.time() - c0
print(c1)
# Multiprocessing
from multiprocessing import Pool, cpu_count
pool = Pool(processes=4)
c0 = time.time()
dist_pool = [pool.apply(calpha_2dmap_mult, args=(trajectory, t))
             for t in [range(0, 2001), range(3000, 5001),
                       range(6000, 8001), range(9000, 11001)]]
c1 = time.time() - c0
print(c1)
The time spent calculating the distances is essentially the 'same' without multiprocessing (70.1 s) as with it (70.2 s)! I was not necessarily expecting a factor-of-4 improvement, but I was at least expecting some improvement!
Does someone know what I did wrong?
Pool.apply is a blocking operation:
[Pool.apply is the] equivalent of the apply() built-in function. It blocks until the result is ready, so apply_async() is better suited for performing work in parallel ..
In this case Pool.map is likely more appropriate for collecting the results; the map itself blocks but the sequence elements / transformations are processed in parallel.
In addition to using partial application (or a manual realization of such), also consider expanding the data itself. It's the same cat in a different skin.
data = ((trajectory, r) for r in [range(0,2001), ..])
result = pool.map(.., data)
This can in turn be expanded:
def apply_data(d):
    return calpha_2dmap_mult(*d)
result = pool.map(apply_data, data)
The function (or a simple argument-expanding proxy of it) will need to be written to accept a single argument, but all the data is now mapped as a single unit.
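Putting those pieces together for the question above, a minimal sketch might look like this (assuming the trajectory object can be pickled to the workers and reusing the chunk ranges from the question; apply_data is just the argument-expanding proxy described above):
from multiprocessing import Pool

def apply_data(d):
    # argument-expanding proxy: unpack (trajectory, t) and call the real function
    return calpha_2dmap_mult(*d)

if __name__ == '__main__':
    chunks = [range(0, 2001), range(3000, 5001),
              range(6000, 8001), range(9000, 11001)]
    data = [(trajectory, t) for t in chunks]
    with Pool(processes=4) as pool:
        # map blocks until every result is in, but the chunks run in parallel
        dist_pool = pool.map(apply_data, data)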
Related
Below is my task code. In this case e0=15, but I would like to solve this problem for a set of e0 values (e0 = 7, 10, 15, 20, 28). I have a multi-core processor and I would like to distribute the calculations for each parameter value e0 to a separate core.
How do I do these calculations in parallel in Python?
import sympy as sp
import scipy as sc
import numpy as np
e0=15
einf=15
def Psi(r,n):
    return 2*np.exp(-r/n)*np.sqrt(sc.special.factorial(n)/sc.special.factorial(-1+n))*sc.special.hyp1f1(1-n, 2, 2*r/n)/n**2

def PsiSymb(n):
    r = sp.symbols('r')
    y1 = 2*sp.exp(-r/n)*np.sqrt(sc.special.factorial(n)/sc.special.factorial(-1+n))/n**2
    y2 = sp.simplify(sp.functions.special.hyper.hyper([1-n], [2], 2*r/n))
    y = y1*y2
    return y

def LaplacianPsi(n):
    r = sp.symbols('r')
    ydiff = 2/r*PsiSymb(n).diff(r)+PsiSymb(n).diff(r,2)
    ydiffnum = sp.lambdify(r, ydiff, "numpy")
    return ydiffnum

def k(n1,n2):
    yint = sc.integrate.quad(lambda r: -0.5*Psi(r,n2)*LaplacianPsi(n1)(r)*r**2, 0, np.inf)
    return yint[0]

def p(n1,n2):
    potC = sc.integrate.quad(lambda r: Psi(r,n2)*(-1/r)*Psi(r,n1)*(r**2), 0, np.inf)
    potB1 = sc.integrate.quad(lambda r: Psi(r,n2)*(1/einf-1/e0)*((einf/e0)**(3/5))*(-e0/(2*r))*(np.exp(-r*2.23))*Psi(r,n1)*(r**2), 0, np.inf)
    potB2 = sc.integrate.quad(lambda r: Psi(r,n2)*(1/einf-1/e0)*((einf/e0)**(3/5))*(-e0/(2*r))*(np.exp(-r*2.4))*Psi(r,n1)*(r**2), 0, np.inf)
    pot = potC[0]+potB1[0]+potB2[0]
    return pot

def en(n1,n2):
    return k(n1,n2)+p(n1,n2)

nmax = 3
EnM = [[0]*nmax for i in range(nmax)]
for n1 in range(nmax):
    for n2 in range(nmax):
        EnM[n2][n1] = en(n1+1,n2+1)
EnEig=sc.linalg.eigvalsh(EnM)
EnB=min(EnEig)
print(EnB)
There is no need to use multiple cores for this computation. The bottleneck is the LaplacianPsi function, which recomputes the same thing over and over. You can use memoization to fix this. Here is an example:
import functools
@functools.cache
def LaplacianPsi(n):
    r = sp.symbols('r')
    ydiff = 2/r*PsiSymb(n).diff(r)+PsiSymb(n).diff(r,2)
    ydiffnum = sp.lambdify(r, ydiff, "numpy")
    return ydiffnum
# The rest is the same
The code can be further optimized: sc.special.factorial(n) / sc.special.factorial(-1+n) is actually just n, and np.sqrt is inefficient on scalars, so that whole factor should be replaced with math.sqrt(n). This results in a code taking only 0.057 seconds, as opposed to 16.5 seconds for the initial implementation on my machine. This means the new implementation is about 290 times faster while producing the same result!
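For illustration, here is a sketch of Psi after that simplification (assuming integer n >= 1, since factorial(n)/factorial(n-1) == n):
import math

def Psi(r, n):
    # sqrt(factorial(n)/factorial(n-1)) reduces to sqrt(n); math.sqrt avoids np.sqrt's scalar overhead
    return 2*np.exp(-r/n)*math.sqrt(n)*sc.special.hyp1f1(1-n, 2, 2*r/n)/n**2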
Directly using many cores would just have wasted more resources for a slower result. You can still try to use more cores with the faster implementation provided, though it might not be significantly faster.
I am trying to learn the multiprocessing library in Python 3.9. One thing I compared was the performance of a repeated computation on a dataset of 220500 samples, using the multiprocessing library and then using for loops.
Throughout my tests I am consistently getting better performance using for loops. Here is the code for the test I am running. I am computing the FFT of a signal with 220500 samples. My experiment involves running this computation a certain number of times in each test; I am testing with the number of tasks set to 10, 100, and 1000 respectively.
import time
import numpy as np
from scipy.signal import get_window
from scipy.fftpack import fft
import multiprocessing
from itertools import product
def make_signal():
    # moved this code into a function to make the multiprocessing portion of the code clearer
    DUR = 5
    FREQ_HZ = 10
    Fs = 44100
    # precompute the size
    N = DUR * Fs
    # get a windowing function
    w = get_window('hanning', N)
    t = np.linspace(0, DUR, N)
    x = np.zeros_like(t)
    b = 2*np.pi*FREQ_HZ*t
    for i in range(50):
        x += np.sin(b*i)
    return x*w, Fs

def fft_(x, Fs):
    yfft = fft(x)[:x.size//2]
    xfft = np.linspace(0, Fs//2, yfft.size)
    return 2/yfft.size * np.abs(yfft), xfft

if __name__ == "__main__":
    # grab the raw sample data which will be computed by the fft function
    x = make_signal()
    # the signal in x has 220500 samples
    # create 3 different tests, each with the number of tasks below
    # array([  10,  100, 1000])
    tests_sweep = np.logspace(1, 3, 3, dtype=int)

    # sweep through the tests
    for iteration, test_num in enumerate(tests_sweep):
        # create a list with one entry per task for this iteration
        fft_processes = []
        for i in range(test_num):
            fft_processes.append(x)

        start = time.time()
        # repeat the computation test_num times (e.g. 10, 100, 1000)
        with multiprocessing.Pool() as pool:
            results = pool.starmap(fft_, fft_processes)
        end = time.time()
        print(f'{iteration}: Multiprocessing method with {test_num} processes took: {end - start:.2f} sec')

        start = time.time()
        for fft_processes in fft_processes:
            # repeat the computation the same number of times using a for loop
            fft_(*fft_processes)
        end = time.time()
        print(f'{iteration}: For-loop method with {test_num} processes took: {end - start:.2f} sec')
        print('----------')
Here are the results of my test.
0: Multiprocessing method with 10 processes took: 0.84 sec
0: For-loop method with 10 processes took: 0.05 sec
----------
1: Multiprocessing method with 100 processes took: 1.46 sec
1: For-loop method with 100 processes took: 0.45 sec
----------
2: Multiprocessing method with 1000 processes took: 6.70 sec
2: For-loop method with 1000 processes took: 4.21 sec
----------
Why is the for-loop method considerably faster? Am I using the multiprocessing library correctly? Thanks.
There is a nontrivial amount of overhead to starting a new process. In addition, the data has to be copied from one process to another (again with some overhead compared to a normal memory copy).
Another aspect is that you should limit the number of processes to the number of cores you have. Going over that will make you incur process-switching costs as well.
This, coupled with the fact that you have little computation per process, makes the switch not worthwhile.
I think if you make the signal significantly longer (10x or 100x) you should start seeing some benefits from using multiple cores.
Also check whether the operations you are running already use some parallelism. They might be implemented with threads, which are significantly cheaper than processes (but historically didn't work well in Python, due to the GIL).
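As a rough, hedged sketch of the suggestions above (reusing make_signal and fft_ from the question; the task count and chunksize are illustrative), capping the pool at the core count and batching many small tasks per inter-process message looks like this:
import multiprocessing

if __name__ == "__main__":
    x = make_signal()                      # (signal, Fs) tuple, as in the question
    tasks = [x] * 1000                     # many repetitions of the same small job
    n_workers = multiprocessing.cpu_count()
    with multiprocessing.Pool(processes=n_workers) as pool:
        # chunksize packs many tasks into one message,
        # so each worker receives a large piece of work at a time
        results = pool.starmap(fft_, tasks, chunksize=len(tasks) // n_workers)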
I have been trying to exploit Numba to speed up large array calculations. I have been measuring the calculation speed in GFLOPS, and it consistently falls far short of my expectations for my CPU.
My processor is i9-9900k, which according to float32 benchmarks should be capable of over 200 GFLOPS. In my tests I have never exceeded about 50 GFLOPS. This is running on all 8 cores.
On a single core I achieve about 17 GFLOPS, which (I believe) is 50% of the theoretical performance. I'm not sure if this is improvable, but the fact that it doesn't extend well to multi-core is a problem.
I am trying to learn this because I am planning to write some image processing code that desperately needs every speed boost possible. I also feel I should understand this first, before I dip my toes into GPU computing.
Here is some example code with a few of my attempts at writing fast functions. The operation I am testing is multiplying an array by a float32 and then summing the whole array, i.e. a MAC operation.
How can I get better results?
import os
# os.environ["NUMBA_ENABLE_AVX"] = "1"
import numpy as np
import timeit
from timeit import default_timer as timer
import numba
# numba.config.NUMBA_ENABLE_AVX = 1
# numba.config.LOOP_VECTORIZE = 1
# numba.config.DUMP_ASSEMBLY = 1
from numba import float32, float64
from numba import jit, njit, prange
from numba import vectorize
from numba import cuda
lengthY = 16 # 2D array Y axis
lengthX = 2**16 # X axis
totalops = lengthY * lengthX * 2 # MAC operation has 2 operations
iters = 100
doParallel = True
@njit(fastmath=True, parallel=doParallel)
def MAC_numpy(testarray):
    output = (float)(0.0)
    multconst = (float)(.99)
    output = np.sum(np.multiply(testarray, multconst))
    return output

@njit(fastmath=True, parallel=doParallel)
def MAC_01(testarray):
    lengthX = testarray.shape[1]
    lengthY = testarray.shape[0]
    output = (float)(0.0)
    multconst = (float)(.99)
    for y in prange(lengthY):
        for x in prange(lengthX):
            output += multconst*testarray[y,x]
    return output

@njit(fastmath=True, parallel=doParallel)
def MAC_04(testarray):
    lengthX = testarray.shape[1]
    lengthY = testarray.shape[0]
    output = (float)(0.0)
    multconst = (float)(.99)
    for y in prange(lengthY):
        for x in prange(int(lengthX/4)):
            xn = x*4
            output += multconst*testarray[y,xn] + multconst*testarray[y,xn+1] + multconst*testarray[y,xn+2] + multconst*testarray[y,xn+3]
    return output
# ======================================= TESTS =======================================
testarray = np.random.rand(lengthY, lengthX)
# ==== MAC_numpy ====
time = 1000
for n in range(iters):
    start = timer()
    output = MAC_numpy(testarray)
    end = timer()
    if (end-start) < time:  # get shortest time
        time = end-start
print("\nMAC_numpy")
print("output = %f" % (output))
print(type(output))
print("fastest time = %16.10f us" % (time*10**6))
print("Compute Rate = %f GFLOPS" % ((totalops/time)/10**9))
# ==== MAC_01 ====
time = 1000
lengthX = testarray.shape[1]
lengthY = testarray.shape[0]
for n in range(iters):
    start = timer()
    output = MAC_01(testarray)
    end = timer()
    if (end-start) < time:  # get shortest time
        time = end-start
print("\nMAC_01")
print("output = %f" % (output))
print(type(output))
print("fastest time = %16.10f us" % (time*10**6))
print("Compute Rate = %f GFLOPS" % ((totalops/time)/10**9))
# ==== MAC_04 ====
time = 1000
for n in range(iters):
    start = timer()
    output = MAC_04(testarray)
    end = timer()
    if (end-start) < time:  # get shortest time
        time = end-start
print("\nMAC_04")
print("output = %f" % (output))
print(type(output))
print("fastest time = %16.10f us" % (time*10**6))
print("Compute Rate = %f GFLOPS" % ((totalops/time)/10**9))
Q : How can I get better results?
1st : Learn how to avoid doing useless work - you can straight away eliminate HALF of the FLOPs, not to mention also avoiding half of all the RAM I/Os, each one costing roughly +100~350 [ns] per write-back.
Due to the distributive nature of MUL and ADD, ( a*C + b*C ) == ( a + b )*C, it is better to first np.sum( A ) and only then MUL the sum by the (float) constant.
# output = np.sum(np.multiply(testarray, multconst))  # AWFULLY INEFFICIENT
output = np.sum( testarray ) * multconst
2nd : Learn how to best align data with the order of processing (cache-line re-use gets you ~100x faster access to already pre-fetched data). Vectorised code that is not aligned with these already pre-fetched data blocks just pays the RAM-access latencies many times over, instead of smartly re-using the data blocks it has already paid for. Designing work-units aligned with this principle means a few SLOCs more, but the rewards are worth it - a ~100x speedup essentially for free, just from not writing badly or naively designed looping iterators.
3rd : Learn how to efficiently harness vectorised (block-directed) operations inside numpy or numba code-blocks, and avoid pressing numba to spend time auto-analysing the call signatures (you pay that auto-analysis cost per call, even though you designed the code and know exactly which data types will go there, so why pay for it each time a numba block gets called?). A short, hedged sketch follows after these points.
4th : Learn where the extended Amdahl's Law - with all the relevant add-on costs and processing atomicity put into the game - supports your wish for speedups, so that you never pay more than you get back (at the very least, justify the add-on costs). Paying extra costs without getting any reward is possible, yet it has no beneficial impact on your code's performance (rather the opposite).
5th : Learn when and how manually created inline(s) may save your code, once steps 1-4 are well learnt and routinely exercised with proper craftsmanship (using popular COTS frameworks is fine, yet these may deliver results after a few days of work, while a hand-crafted, single-purpose, smartly designed assembly code was able to get the same results in about 12 minutes(!) rather than several days, without any GPU/CPU tricks - yes, that much faster - just by not doing a single step more than what was needed for the numerical processing of the large matrix data).
Did I mention that float32 may surprise you by being processed slower on small scales than float64, while on larger data scales of ~ n [GB] the RAM I/O times grow more slowly thanks to more efficient float32 pre-fetches? That never happens here, because a float64 array is what actually gets processed - unless one explicitly instructs the constructor(s) to down-convert the default data type, like this:
np.random.rand( lengthY, lengthX ).astype( dtype = np.float32 )
>>> np.random.rand( 10, 2 ).dtype
dtype('float64')
Avoiding extensive memory allocations is another performance trick, supported in numpy call-signatures. Using this option for large arrays will save you a lot of extra time otherwise wasted on mem-allocs for large interim arrays. Re-using already pre-allocated memory zones and wisely controlled gc-policing are further marks of a professional focused on low-latency & design-for-performance.
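As a hedged sketch pulling the 1st and 3rd points together (an eagerly declared float32 signature plus a single fused pass; the function name and the parallel-reduction layout are illustrative, not the only correct way to write this):
import numpy as np
from numba import njit, prange, float32

@njit(float32(float32[:, :], float32), fastmath=True, parallel=True)
def mac_sum(a, multconst):
    # sum first, multiply the constant in exactly once at the end (1st point)
    total = np.float32(0.0)
    for y in prange(a.shape[0]):
        row = np.float32(0.0)
        for x in range(a.shape[1]):
            row += a[y, x]
        total += row
    return total * multconst

testarray = np.random.rand(16, 2**16).astype(np.float32)   # explicit down-convert, as noted above
out = mac_sum(testarray, np.float32(0.99))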
Ok, here is my problem: I have a nested for loop in my program which runs on a single core. Since the program spends over 99% of its run time in this nested for loop, I would like to parallelize it. Right now I have to wait 9 days for the computation to finish. I tried to implement a parallel for loop using the multiprocessing library, but I can only find very basic examples and cannot transfer them to my problem. Here are the nested loops with random data:
import numpy as np
dist_n = 100
nrm = np.linspace(1,10,dist_n)
data_Y = 11000
data_I = 90000
I = np.random.randn(data_I, 1000)
Y = np.random.randn(data_Y, 1000)
dist = np.zeros((data_I, dist_n))
for t in range(data_Y):
    for i in range(data_I):
        d = np.abs(I[i] - Y[t])
        for p in range(dist_n):
            dist[i,p] = np.sum(d**nrm[p])/nrm[p]
print(dist)
Please give me some advice on how to make it parallel.
There's a small overhead to starting a process (50 ms+ depending on data size), so it's generally best to multiprocess the largest block of code possible. From your comment it sounds like each loop over t is independent, so we should be free to parallelize this.
When Python creates a new process you get a copy of the main process, so all your global data is available, but when each process writes to that data it writes to its own local copy. This means dist[i,p] won't be available to the main process unless you explicitly pass it back with a return (which has some overhead). In your situation, if each process writes dist[i,p] to a file then you should be fine; just don't try to write to the same file unless you implement some type of mutex access control.
#!/usr/bin/python
import time
import multiprocessing as mp
import numpy as np
data_Y = 11 #11000
data_I = 90 #90000
dist_n = 100
nrm = np.linspace(1,10,dist_n)
I = np.random.randn(data_I, 1000)
Y = np.random.randn(data_Y, 1000)
dist = np.zeros((data_I, dist_n))
def worker(t):
    st = time.time()
    for i in range(data_I):
        d = np.abs(I[i] - Y[t])
        for p in range(dist_n):
            dist[i,p] = np.sum(d**nrm[p])/nrm[p]
    # Here - each worker opens a different file and writes to it
    print('Worker time %4.3f ms' % (1000.*(time.time()-st)))

if 1:  # single process
    st = time.time()
    for x in map(worker, range(data_Y)):
        pass
    print('Single-process total time is %4.3f seconds' % (time.time()-st))
    print()

if 1:  # multi-process
    pool = mp.Pool(28)  # try 2X num procs and inc/dec until cpu maxed
    st = time.time()
    for x in pool.imap_unordered(worker, range(data_Y)):
        pass
    print('Multiprocess total time is %4.3f seconds' % (time.time()-st))
    print()
If you increase data_Y/data_I back to their full sizes, the speed-up should grow, up to the theoretical limit.
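If you do need the dist values back in the parent process rather than written to files, a hedged variant of the worker (worker_rows is an illustrative name) could return each task's block and let the parent collect the results, at the cost of pickling them back:
def worker_rows(t):
    rows = np.zeros((data_I, dist_n))
    for i in range(data_I):
        d = np.abs(I[i] - Y[t])
        for p in range(dist_n):
            rows[i, p] = np.sum(d**nrm[p]) / nrm[p]
    # returning the block incurs the pickling overhead mentioned above
    return t, rows

if __name__ == '__main__':
    with mp.Pool() as pool:
        results = dict(pool.imap_unordered(worker_rows, range(data_Y)))  # {t: rows}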
Let's use, for example, numpy.sin()
The following code will return the value of the sine for each value of the array a:
import numpy
a = numpy.arange( 1000000 )
result = numpy.sin( a )
But my machine has 32 cores, so I'd like to make use of them. (The overhead might not be worthwhile for something like numpy.sin() but the function I actually want to use is quite a bit more complicated, and I will be working with a huge amount of data.)
Is this the best (read: smartest or fastest) method:
from multiprocessing import Pool
if __name__ == '__main__':
    pool = Pool()
    result = pool.map( numpy.sin, a )
or is there a better way to do this?
There is a better way: numexpr
Slightly reworded from their main page:
It's a multi-threaded VM written in C that analyzes expressions, rewrites them more efficiently, and compiles them on the fly into code that achieves near-optimal parallel performance for both memory-bound and CPU-bound operations.
For example, on my 4-core machine, evaluating a sine is just slightly less than 4 times faster than numpy:
In [1]: import numpy as np
In [2]: import numexpr as ne
In [3]: a = np.arange(1000000)
In [4]: timeit ne.evaluate('sin(a)')
100 loops, best of 3: 15.6 ms per loop
In [5]: timeit np.sin(a)
10 loops, best of 3: 54 ms per loop
Documentation, including the supported functions, is available here. You'll have to check, or give us more information, to see whether your more complicated function can be evaluated by numexpr.
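A hedged usage sketch (set_num_threads is numexpr's knob for its worker-thread count; pinning it to 32 just mirrors the core count mentioned in the question):
import numpy as np
import numexpr as ne

ne.set_num_threads(32)                    # use all 32 cores
a = np.arange(1000000, dtype=np.float64)
result = ne.evaluate('sin(a)')            # evaluated in parallel across the threads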
Well, here is an interesting note. Run the following commands:
import numpy
from multiprocessing import Pool
a = numpy.arange(1000000)
pool = Pool(processes = 5)
result = pool.map(numpy.sin, a)
UnpicklingError: NEWOBJ class argument has NULL tp_new
I wasn't expecting that. So what's going on? Well:
>>> help(numpy.sin)
Help on ufunc object:
sin = class ufunc(__builtin__.object)
| Functions that operate element by element on whole arrays.
|
| To see the documentation for a specific ufunc, use np.info(). For
| example, np.info(np.sin). Because ufuncs are written in C
| (for speed) and linked into Python with NumPy's ufunc facility,
| Python's help() function finds this page whenever help() is called
| on a ufunc.
Yep, numpy.sin is implemented in C, and as such you can't really use it directly with multiprocessing.
So we have to wrap it with another function.
perf:
import time
import numpy
from multiprocessing import Pool
def numpy_sin(value):
    return numpy.sin(value)
a = numpy.arange(1000000)
pool = Pool(processes = 5)
start = time.time()
result = numpy.sin(a)
end = time.time()
print 'Singled threaded %f' % (end - start)
start = time.time()
result = pool.map(numpy_sin, a)
pool.close()
pool.join()
end = time.time()
print 'Multithreaded %f' % (end - start)
$ python perf.py
Singled threaded 0.032201
Multithreaded 10.550432
Wow, I wasn't expecting that either. Well, there are a couple of issues: for starters, we are calling a Python function, even if it's just a wrapper, versus a pure C function, and there is also the overhead of copying the values. multiprocessing by default doesn't share data, so each value needs to be copied back and forth.
Do note that if we properly segment our data:
import time
import numpy
from multiprocessing import Pool
def numpy_sin(value):
    return numpy.sin(value)
a = [numpy.arange(100000) for _ in xrange(10)]
pool = Pool(processes = 5)
start = time.time()
result = numpy.sin(a)
end = time.time()
print 'Singled threaded %f' % (end - start)
start = time.time()
result = pool.map(numpy_sin, a)
pool.close()
pool.join()
end = time.time()
print 'Multithreaded %f' % (end - start)
$ python perf.py
Singled threaded 0.150192
Multithreaded 0.055083
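For what it's worth, a more idiomatic way to do the same segmentation (a hedged, Python 3 style sketch; the chunk count of 10 is arbitrary) is to let numpy split the array and reassemble the mapped pieces:
import numpy as np
from multiprocessing import Pool

def numpy_sin(chunk):
    return np.sin(chunk)

if __name__ == '__main__':
    a = np.arange(1000000)
    chunks = np.array_split(a, 10)   # a handful of large chunks, not a million tiny tasks
    with Pool(processes=5) as pool:
        result = np.concatenate(pool.map(numpy_sin, chunks))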
So what can we take from this? multiprocessing is great, but we should always test and compare it; sometimes it's faster and sometimes it's slower, depending on how it's used.
Granted, you are not using numpy.sin but another function; I would still recommend you first verify that multiprocessing will indeed speed up the computation - the overhead of copying values back and forth may hurt you.
Either way, I also believe that using pool.map is the best, safest way to parallelize code.
I hope this helps.
SciPy actually has a pretty good writeup on this subject here.