I have a distributed dask cluster setup and I have used it to load and transform a bunch of data. Works like a charm.
I want to use it to do some processing in parallel. Here's my function:
el = 5000
n_using = 26
n_across = 6

mat = np.random.random((el, n_using, n_across))
idx = np.tril_indices(n_across*2, -n_across)

def get_vals(c1, m, el, idx):
    m1 = m[c1, :, :]
    corr_vals = np.zeros((el, (n_across//2)*(n_across+1)))
    for c2 in range(c1+1, el):
        corr = np.corrcoef(m1.T, m[c2, :, :].T)
        corr_vals[c2] = corr[idx]
    return corr_vals

lazy_get_val = dask.delayed(get_vals, pure=True)
Here is a single processor version of what I'm trying to do:
arrays = [get_vals(c1, mat, el, idx) for c1 in range(el)]
all_corr = np.stack(arrays, axis=0)
Works fine but takes a few hours.
Here's my go at doing this in dask:
lazy_list = [lazy_get_val(c1, mat, el, idx) for c1 in range(el)]
arrays = [da.from_delayed(lazy_item, dtype=float, shape=(el, 21)) for lazy_item in lazy_list]
all_corr = da.stack(arrays, axis=0)
Even if I run all_corr[1].compute(), it just sits there and doesn't respond. When I interrupt the kernel, it seems to be stuck in distributed/utils.py:
~/.../lib/python3.6/site-packages/distributed/utils.py in sync(loop, func, *args, **kwargs)
249 else:
250 while not e.is_set():
--> 251 e.wait(10)
252 if error[0]:
253 six.reraise(*error[0])
Any suggestions on debugging this?
Other things:
If I run it with a smaller mat (el = 1000), it runs fine.
If I make el = 5000, it hangs.
If I interrupt the kernel and run it again with el = 1000, it hangs.
After adding imports to the example, I ran it and found that building the graph was very slow. This can be improved by avoiding placing numpy arrays directly in delayed calls, as follows:
# mat = np.random.random((el,n_using,n_across))
# idx = np.tril_indices(n_across*2, -n_across)
mat = dask.delayed(np.random.random)((el,n_using,n_across))
idx = dask.delayed(np.tril_indices)(n_across*2, -n_across)
Or by removing the pure=True keyword argument to dask.delayed (when you set pure=True, it has to hash the contents of all inputs to get a unique key for them, and you're doing this 5000 times). I found this out by profiling your code with the %snakeviz magic in IPython.
I then ran all_corr[1].compute() and it was fine. I then ran all_corr.compute() and it seemed like it would progress to completion, but wasn't very fast. I suspect that either your tasks are too small so that there is too much overhead, or that your code is spending too much time in Python for loops and so is running into GIL issues. Not sure which.
The next thing I would recommend trying would be the dask.distributed scheduler, which would handle the GIL issue better but exacerbate the overhead issue. Seeing how that performs would probably help isolate the issue.
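For what it's worth, here is a rough sketch of how this could be wired up on the distributed scheduler, scattering mat once so it is neither hashed nor embedded in the graph 5000 times. This is an assumption-laden sketch, not the asker's code: it assumes get_vals from the question is defined in scope, and Client() here starts a local cluster (connect to your existing cluster address instead if you have one).
import numpy as np
import dask
import dask.array as da
from dask.distributed import Client

client = Client()                                 # or Client("scheduler-address:8786") for an existing cluster

el, n_using, n_across = 5000, 26, 6
mat = np.random.random((el, n_using, n_across))
idx = np.tril_indices(n_across * 2, -n_across)

mat_future = client.scatter(mat, broadcast=True)  # ship the array to the workers once

lazy_get_val = dask.delayed(get_vals)             # get_vals as defined in the question; no pure=True
lazy_list = [lazy_get_val(c1, mat_future, el, idx) for c1 in range(el)]
arrays = [da.from_delayed(item, dtype=float, shape=(el, 21)) for item in lazy_list]
all_corr = da.stack(arrays, axis=0)
result = all_corr.compute()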
Related
Below is my task code. In this case e0 = 15, but I would like to solve this problem for a set of e0 values (e0 = 7, 10, 15, 20, 28). I have a multi-core processor and I would like to distribute the calculations of this task for each parameter e0 to a separate core.
How to do parallel calculations for this task in Python?
import sympy as sp
import scipy as sc
import scipy.special
import scipy.integrate
import scipy.linalg
import numpy as np

e0 = 15
einf = 15

def Psi(r, n):
    return 2*np.exp(-r/n)*np.sqrt(sc.special.factorial(n)/sc.special.factorial(-1+n))*sc.special.hyp1f1(1-n, 2, 2*r/n)/n**2

def PsiSymb(n):
    r = sp.symbols('r')
    y1 = 2*sp.exp(-r/n)*np.sqrt(sc.special.factorial(n)/sc.special.factorial(-1+n))/n**2
    y2 = sp.simplify(sp.functions.special.hyper.hyper([1-n], [2], 2*r/n))
    y = y1*y2
    return y

def LaplacianPsi(n):
    r = sp.symbols('r')
    ydiff = 2/r*PsiSymb(n).diff(r) + PsiSymb(n).diff(r, 2)
    ydiffnum = sp.lambdify(r, ydiff, "numpy")
    return ydiffnum

def k(n1, n2):
    yint = sc.integrate.quad(lambda r: -0.5*Psi(r, n2)*LaplacianPsi(n1)(r)*r**2, 0, np.inf)
    return yint[0]

def p(n1, n2):
    potC = sc.integrate.quad(lambda r: Psi(r, n2)*(-1/r)*Psi(r, n1)*(r**2), 0, np.inf)
    potB1 = sc.integrate.quad(lambda r: Psi(r, n2)*(1/einf-1/e0)*((einf/e0)**(3/5))*(-e0/(2*r))*(np.exp(-r*2.23))*Psi(r, n1)*(r**2), 0, np.inf)
    potB2 = sc.integrate.quad(lambda r: Psi(r, n2)*(1/einf-1/e0)*((einf/e0)**(3/5))*(-e0/(2*r))*(np.exp(-r*2.4))*Psi(r, n1)*(r**2), 0, np.inf)
    pot = potC[0] + potB1[0] + potB2[0]
    return pot

def en(n1, n2):
    return k(n1, n2) + p(n1, n2)

nmax = 3
EnM = [[0]*nmax for i in range(nmax)]
for n1 in range(nmax):
    for n2 in range(nmax):
        EnM[n2][n1] = en(n1+1, n2+1)

EnEig = sc.linalg.eigvalsh(EnM)
EnB = min(EnEig)
print(EnB)
You do not need multiple cores for this computation. Indeed, the bottleneck is the LaplacianPsi function, which recomputes the same thing over and over. You can use memoization to fix this. Here is an example:
import functools

@functools.cache
def LaplacianPsi(n):
    r = sp.symbols('r')
    ydiff = 2/r*PsiSymb(n).diff(r) + PsiSymb(n).diff(r, 2)
    ydiffnum = sp.lambdify(r, ydiff, "numpy")
    return ydiffnum
# The rest is the same
The code can be further optimized: sc.special.factorial(n) / sc.special.factorial(-1+n) is actually just n, and np.sqrt is inefficient on scalars, so it should be replaced with math.sqrt(n). This results in code taking only 0.057 seconds as opposed to 16.5 seconds for the initial implementation on my machine. This means the new implementation is about 290 times faster while producing the same result!
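Concretely, the simplified Psi would look something like this (a sketch under the assumption that n is a positive integer, so the factorial ratio reduces to n):
import math
import numpy as np
import scipy as sc
import scipy.special

def Psi(r, n):
    # sc.special.factorial(n) / sc.special.factorial(n - 1) == n for integer n >= 1,
    # and math.sqrt is cheaper than np.sqrt on a scalar
    return 2*np.exp(-r/n)*math.sqrt(n)*sc.special.hyp1f1(1-n, 2, 2*r/n)/n**2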
Directly using many cores would just have wasted more resources for a slower result. You can still try to use more cores with the faster implementation provided, though it might not be significantly faster.
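If you still want one process per e0 value, a minimal sketch with multiprocessing could look like the following. Note the assumptions: the functions from the question (en and the ones it calls) are defined at module level, p() reads the module-level e0 as in the original script, and the question's top-level computation is moved under the __main__ guard so child processes do not re-run it.
from multiprocessing import Pool
import scipy as sc
import scipy.linalg

def compute_EnB(e0_value):
    global e0
    e0 = e0_value                              # p() reads the module-level e0
    nmax = 3
    EnM = [[en(n1 + 1, n2 + 1) for n1 in range(nmax)] for n2 in range(nmax)]
    return min(sc.linalg.eigvalsh(EnM))

if __name__ == "__main__":
    e0_values = [7, 10, 15, 20, 28]
    with Pool(processes=len(e0_values)) as pool:   # one worker process per e0
        print(dict(zip(e0_values, pool.map(compute_EnB, e0_values))))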
I have a function that contains a for-loop with numerous actions that I am executing on a dataframe with shape (68k, 10) in a Jupyter notebook on Windows. My machine has an 11th-gen Intel i7-1165G7 2.8 GHz processor (1.5 years old) with 16 GB of RAM.
The execution takes 2928 seconds, as evidenced by the following cProfile results. I am including the top tasks by execution time:
On my colleague's 4-year old Mac, this same function takes 191 seconds:
The built-in method nt.stat, which takes 812 seconds on my machine, doesn't even appear in my colleague's profiling results. I have also tried this on a third colleague's older Windows machine, and it takes longer than on mine.
Why is there such a big difference in the execution time?
If helpful, I can try to include the entire cProfile results.
Update:
I created a reproducible example and ran on both machines.
Here is the code:
import numpy as np
import pandas as pd
import cProfile
df_shape = [100000, 8]
df = pd.DataFrame(np.random.uniform(0, 0.3, df_shape), index = pd.date_range('2022-01-01', periods=df_shape[0], freq='H'))
df.columns = ['loss_' + str(i) for i in range(df_shape[1])]
var_incoming = np.random.normal(550, 80, df.shape[0])
def test_func(df_, vars):
    a_dct = {}
    wf_dct = {}
    res_lst = []
    wf_res_lst = []
    for i in range(df_.shape[0]):
        y = vars[i]
        for j, z in df_.iloc[i, :].items():
            a = z * y
            a_dct[j] = a
            y = y * (1 - z)
            wf_dct[j] = y
        temp1 = pd.DataFrame(dict(a_dct), index=[df_.index[i]])
        temp2 = pd.DataFrame(dict(wf_dct), index=[df_.index[i]])
        res_lst.append(temp1)
        wf_res_lst.append(temp2)
    res = pd.concat(res_lst, axis=0)
    wf = pd.concat(wf_res_lst, axis=0)
    return res, wf
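The profiling call itself is not shown above; presumably it is something along these lines (treat the exact invocation as an assumption):
cProfile.run('res, wf = test_func(df, var_incoming)', sort='cumtime')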
The commands performed seem to be largely the same across the machines. The difference in execution is now smaller, and unfortunately, I no longer see the nt.stat command that was previously taking very long.
Colleague's machine:
I have a function compiled with Numba that splits an array based on an index; this returns an irregular (variable-length) list of numpy arrays, which is then padded to form a 2D array.
Problem
The compiled function 'nb_array2mat' should be much faster than the pure Python 'array2mat', but it is not.
Additionally, is this possible using numpy?
length of the array and index
1456391 95007
times:
numba: 1.3438396453857422
python: 1.1407015323638916
I think I am not using Numba compilation in the proper manner. Any help would be great.
EDIT
Using dummy data as edited in the code section, I now get a speed-up. Why does it not work with the actual data?
length of the array and index
1456391 95007
times:
numba: 0.012002706527709961
python: 0.13403034210205078
Code
idx_split: https://drive.google.com/file/d/1hSduTs1_s3seEFAiyk_n5yk36ZBl0AXW/view?usp=sharing
dist_min_orto: https://drive.google.com/file/d/1fwarVmBa0NGbWPifBEezTzjEZSrHncSN/view?usp=sharing
import time
import numba
import numpy as np
from numba.pycc import CC

cc = CC('compile_func')
cc.verbose = True

@numba.njit(parallel=True, fastmath=True)
@cc.export('nb_array2mat', 'f8[:,:](f8[:], i4[:])')
def array2mat(arr, idx):
    # split arr by idx indexes
    out = []
    s = 0
    for n in numba.prange(len(idx)):
        e = idx[n]
        out.append(arr[s:e])
        s = e
    # create a 2d array with arr values, padding empty values with fill_value=1000000.0
    _len = [len(_i) for _i in out]
    cols = max(_len)
    rows = len(out)
    mat = np.full(shape=(rows, cols), fill_value=1000000.0)
    for row in numba.prange(rows):
        len_col = len(out[row])
        mat[row, :len_col] = out[row]
    return mat

if __name__ == "__main__":
    cc.compile()

# PYTHON FUNC
def array2mat(arr, idx):
    # split arr by idx indexes
    out = []
    s = 0
    for n in range(len(idx)):
        e = idx[n]
        out.append(arr[s:e])
        s = e
    # create a 2d array with arr values, padding empty values with fill_value=1000000.0
    _len = [len(_i) for _i in out]
    cols = max(_len)
    rows = len(out)
    mat = np.full(shape=(rows, cols), fill_value=1000000.0)
    for row in range(rows):
        len_col = len(out[row])
        mat[row, :len_col] = out[row]
    return mat

import compile_func

# ACTUAL DATA
arr = np.load('dist_min_orto.npy').astype(float)
idx = np.load('idx_split.npy').astype(int)

# DUMMY DATA
arr = np.random.randint(50, size=1456391).astype(float)
idx = np.cumsum(np.random.randint(5, size=95007).astype(int))
print(len(arr), len(idx))

# NUMBA FUNC
t0 = time.time()
print(compile_func.nb_array2mat(arr, idx))
print(time.time() - t0)

# PYTHON FUNC
t0 = time.time()
print(array2mat(arr, idx))
print(time.time() - t0)
You cannot use numba.prange on the first loop since out is shared between threads and is also read/written by them. This causes a race condition. Numba assumes that there are no dependencies between iterations, and it is your responsibility to guarantee this. The simplest solution is not to use a parallel loop here.
Additionally, the second loop is mainly memory-bound, so I do not expect a big speed-up from using multiple threads, since RAM is a shared resource with limited throughput (a few threads are often enough to saturate it, especially on a PC where sometimes one thread is enough).
Fortunately, you do not need to create the temporary out list, just the end offsets, from which len_col can be computed in the parallel loop. The maximum number of columns can be computed on the fly in the first loop. The first loop should execute very quickly compared to the second loop. Filling a big, newly allocated matrix is often faster in parallel on Linux since page faults can be handled in parallel; AFAIK, on Windows this is less true (certainly because page faults scale more poorly there). Parallel filling also helps here because the range 0:len_col is variable, so the time to fill each row varies, causing some threads to finish after others (the slowest thread bounds the execution). Furthermore, this is generally much faster on NUMA machines since each NUMA node can write to its own memory.
Note that AOT compilation does not support automatic parallel execution. To quote a Numba developer:
From discussion in today's triage meeting, related to #7696: this is not likely to be supported as AOT code doesn't require Numba to be installed - this would mean a great deal of work and issues to overcome for packaging the code for the threading layers.
The same thing applies to fastmath, though it is likely to be added in an upcoming release based on the current work.
Note that JIT compilation and AOT compilation are two separate processes. Thus, the parameters of njit are not shared with cc.export, and the signature is not shared with njit. This means the njit-decorated function would be compiled during its first execution due to lazy compilation. That being said, the function is redefined later in the script, so the njit decorator is simply useless here (overwritten).
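To illustrate the lazy/eager distinction with a minimal, generic example (not the question's function):
import numba

@numba.njit                 # lazy: compiled on the first call, for the argument types it sees
def add_lazy(x):
    return x + 1

@numba.njit('f8(f8)')       # eager: compiled at decoration time for the given signature
def add_eager(x):
    return x + 1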
Here is the resulting code (using only the JIT implementation with an eager compilation instead of the AOT one):
import time
import numba
import numpy as np

@numba.njit('f8[:,:](f8[:], i4[:])', fastmath=True)
def nb_array2mat(arr, idx):
    # split arr by idx indexes
    s = 0
    ends = np.empty(len(idx), dtype=np.int_)
    cols = 0
    for n in range(len(idx)):
        e = idx[n]
        ends[n] = e
        len_col = e - s
        cols = max(cols, len_col)
        s = e
    # create a 2d array with arr values, padding empty values with fill_value=1000000.0
    rows = len(idx)
    mat = np.empty(shape=(rows, cols))
    for row in numba.prange(rows):
        s = ends[row-1] if row >= 1 else 0
        e = ends[row]
        len_col = e - s
        mat[row, 0:len_col] = arr[s:e]
        mat[row, len_col:cols] = 1000000.0
    return mat
# PYTHON FUNC
def array2mat(arr, idx):
    # split arr by idx indexes
    out = []
    s = 0
    for n in range(len(idx)):
        e = idx[n]
        out.append(arr[s:e])
        s = e
    # create a 2d array with arr values, padding empty values with fill_value=1000000.0
    _len = [len(_i) for _i in out]
    cols = max(_len)
    rows = len(out)
    mat = np.full(shape=(rows, cols), fill_value=1000000.0)
    for row in range(rows):
        len_col = len(out[row])
        mat[row, :len_col] = out[row]
    return mat

# ACTUAL DATA
arr = np.load('dist_min_orto.npy').astype(np.float64)
idx = np.load('idx_split.npy').astype(np.int32)

# NUMBA FUNC
t0 = time.time()
print(nb_array2mat(arr, idx))
print(time.time() - t0)

# PYTHON FUNC
t0 = time.time()
print(array2mat(arr, idx))
print(time.time() - t0)
On my machine, the new Numba code is slightly faster: it takes 0.358 seconds for the Numba implementation and 0.418 seconds for the Python implementation. In fact, a sequential Numba version is even slightly faster on my machine, taking 0.344 seconds.
Note that the shape of the output matrix is (95007, 5469). Thus, the matrix takes 3.87 GiB in memory. You should check that you have enough memory to store it. In fact, the Python implementation takes about 7.5 GiB on my machine (possibly because the GC/default allocator does not release the memory directly). If you do not have enough memory, the system can fall back to the very slow swap memory (which uses your storage device). Moreover, x86-64 processors use a write-allocate cache policy, causing written cache lines to actually be read by default. Non-temporal writes can be used to avoid this on a big matrix; unfortunately, neither Numpy nor Numba uses them on my machine, which means half the RAM throughput is wasted. Not to mention that page faults are pretty expensive: in the sequential case, 60% of the time of the Numpy implementation is spent in page faults. The Numba code spends almost all its time writing to memory and performing page faults. There is a related open Numba issue about this.
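For reference, the quoted size follows directly from the shape (float64 is 8 bytes per element):
rows, cols = 95007, 5469
print(rows * cols * 8 / 2**30)   # ~3.87 (GiB)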
Based on @Jérôme Richard's answer, I wrote the same function. The improvement was in the way the mat numpy array is created: as the previous answer stated, np.full takes a lot longer to operate on an array of this size, so the solution was to initialize it with np.empty instead.
The improvement between Python and Numba is not large, but the way the big mat array is allocated has a large impact on processing time.
1456391 95007
python: 0.29506611824035645
numba: 0.1800403594970703
Code
@cc.export('nb_array2mat', 'f8[:,:](f8[:], i4[:])')
def nb_array2mat(arr, idx):
    s = 0
    _len = np.empty(len(idx), dtype=np.int_)
    _len[0] = idx[0]
    _len[1:] = idx[1:] - idx[:-1]
    # create a 2d array
    cols = int(np.max(_len))
    rows = len(idx)
    mat = np.empty(shape=(rows, cols), dtype=np.float_)
    for row in range(len(idx)):
        e = idx[row]
        len_col = _len[row]
        mat[row, :len_col] = arr[s:e]
        s = e
    return mat
Calling MATLAB from Python is bound to give some performance reduction that I could avoid by rewriting (a lot of) code in Python. However, this isn't a realistic option for me, but it annoys me that a huge loss of efficiency lies in the simple conversion from a numpy array to a MATLAB double.
I'm talking about the following conversion from data1 to data1m, where
data1 = np.random.uniform(low = 0.0, high = 30000.0, size = (1000000,))
data1m = matlab.double(list(data1))
Here matlab.double comes from MathWorks' own MATLAB package/engine. The second line of code takes 20 s on my system, which just seems like too much for a conversion that doesn't really do anything other than making the numbers 'edible' for MATLAB.
So basically I'm looking for a trick opposite to the one given here that works for converting MATLAB output back to Python.
Passing numpy arrays efficiently
Take a look at the file mlarray_sequence.py in the folder PYTHONPATH\Lib\site-packages\matlab\_internal. There you will find the construction of the MATLAB array object. The performance problem comes from copying data with loops within the generic_flattening function.
To avoid this behavior we will edit the file a bit. This fix should work on complex and non-complex datatypes.
Make a backup of the original file in case something goes wrong.
Add import numpy as np to the other imports at the beginning of the file
In line 38 you should find:
init_dims = _get_size(initializer)
Replace this with:
try:
    init_dims = initializer.shape
except:
    init_dims = _get_size(initializer)
In line 48 you should find:
if is_complex:
    complex_array = flat(self, initializer,
                         init_dims, typecode)
    self._real = complex_array['real']
    self._imag = complex_array['imag']
else:
    self._data = flat(self, initializer, init_dims, typecode)
Replace this with:
if is_complex:
    try:
        self._real = array.array(typecode, np.ravel(initializer, order='F').real)
        self._imag = array.array(typecode, np.ravel(initializer, order='F').imag)
    except:
        complex_array = flat(self, initializer, init_dims, typecode)
        self._real = complex_array['real']
        self._imag = complex_array['imag']
else:
    try:
        self._data = array.array(typecode, np.ravel(initializer, order='F'))
    except:
        self._data = flat(self, initializer, init_dims, typecode)
Now you can pass a numpy array directly to the MATLAB array creation method.
data1 = np.random.uniform(low=0.0, high=30000.0, size=(1000000,))

# faster
data1m = matlab.double(data1)
# or slower method
data1m = matlab.double(data1.tolist())

data2 = np.random.uniform(low=0.0, high=30000.0, size=(1000000,)).astype(np.complex128)

# faster
data1m = matlab.double(data2, is_complex=True)
# or slower method
data1m = matlab.double(data2.tolist(), is_complex=True)
The performance in MATLAB array creation increases by a factor of 15 and the interface is easier to use now.
While awaiting better suggestions, I'll post the best trick I've come up with so far. It comes down to saving the data with scipy.io.savemat and then loading that file in MATLAB.
This is not the prettiest hack and it requires some care to ensure different processes relying on the same script don't end up writing and loading each other's .mat files, but the performance gain is worth it for me.
As a test case I wrote two simple, almost identical MATLAB functions that require 2 numpy arrays (I tested with length 1000000) and one int as input.
function d = test(x, y, fs_signal)
d = sum((x + y))./double(fs_signal);
function d = test2(path)
load(path)
d = sum((x + y))./double(fs_signal);
The function test requires conversion, while test2 requires saving.
Testing test: converting the two numpy arrays takes circa 40 s on my system. The total time to prepare for and run test comes down to 170 s.
Testing test2: saving the arrays and the int takes circa 0.35 s on my system. Surprisingly, loading the .mat file in MATLAB is extremely efficient (or, more surprisingly, it is extremely inefficient at dealing with its doubles)... The total time to prepare for and run test2 comes down to 0.38 s.
That's a performance gain of almost 450x...
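For reference, a minimal sketch of the Python side of this trick. The variable names, the temporary file path, and the engine invocation are assumptions; test2 is the MATLAB function shown above, which loads the .mat file and computes d.
import numpy as np
import scipy.io
import matlab.engine

x = np.random.uniform(0.0, 30000.0, size=(1000000,))
y = np.random.uniform(0.0, 30000.0, size=(1000000,))
fs_signal = 100

# save the inputs once instead of converting them element by element
scipy.io.savemat('engine_input.mat', {'x': x, 'y': y, 'fs_signal': fs_signal})

eng = matlab.engine.start_matlab()
d = eng.test2('engine_input.mat')   # test2 does load(path) and returns d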
My situation was a bit different (Python script called from MATLAB), but for me converting the ndarray into an array.array massively sped up the process. Basically it is very similar to Alexandre Chabot's solution but without the need to alter any files:
# untested, i.e. only deduced from my "matlab calls python" situation
import numpy
import array
data1 = numpy.random.uniform(low = 0.0, high = 30000.0, size = (1000000,))
ar = array.array('d',data1.flatten('F').tolist())
p = matlab.double(ar)
C = matlab.reshape(p,data1.shape) #this part I am definitely not sure about if it will work like that
At least when done from MATLAB, the combination of array.array and double is relatively fast. Tested with MATLAB 2016b + Python 3.5.4 64-bit.