What I found was a small code snippet that used to be a double for loop, and I managed to bring it down to a single for loop with vectorization. Doing this resulted in a drastic time improvement, so I am wondering whether it is possible to get rid of the second for loop here via vectorization as well, and whether that would improve performance.
import numpy as np
from timeit import default_timer as timer
nlin, npix = 478, 480
bb = np.random.rand(nlin,npix)
slope = -8
fac = 4
offset= 0
barray = np.zeros([2, 2259])
timex = timer()
for y in range(nlin):
    for x in range(npix):
        ling = int(np.ceil((x - y/slope)*fac) + 1 - offset)
        barray[0, ling] += 1
        barray[1, ling] += bb[y, x]
newVar = np.copy(barray)
print(timer() - timex)
So ling can be taken out of the loops by creating the following matrix:
nlinrange = np.arange(nlin)
npixrange = np.arange(npix)
lingMat = (np.ceil((np.vstack(npixrange) - nlinrange/slope)*fac) + 1 - offset).astype(int)
which satisfies lingMat[x, y] == ling for the loop indices x and y. This gives a first step of the vectorization.
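If you want to finish the vectorization in pure NumPy, one option (my addition, not part of the original post) is np.bincount, which produces the per-bin counts and the per-bin sums of bb in one shot each. A sketch under the same sizes as above:
import numpy as np

nlin, npix = 478, 480
slope, fac, offset = -8, 4, 0
bb = np.random.rand(nlin, npix)

# same index matrix as above, but shaped (nlin, npix) so it lines up with bb
y_idx = np.arange(nlin)
x_idx = np.arange(npix)
ling_mat = (np.ceil((x_idx - y_idx[:, None]/slope)*fac) + 1 - offset).astype(int)

barray = np.zeros((2, 2259))
barray[0] = np.bincount(ling_mat.ravel(), minlength=2259)                      # counts per bin
barray[1] = np.bincount(ling_mat.ravel(), weights=bb.ravel(), minlength=2259)  # sums of bb per bin
This reproduces the loop's barray because every index in ling_mat is non-negative and below 2259.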
In terms of vectorization, you could potentially use something based on np.add.at:
def yaco_addat(bb, slope, fac, offset):
    barray = np.zeros((2, 2259), dtype=np.float64)
    nlin_range = np.arange(nlin)
    npix_range = np.arange(npix)
    ling_mat = (np.ceil((npix_range - nlin_range[:, None]/slope)*fac) + 1 - offset).astype(int)
    np.add.at(barray[0, :], ling_mat, 1)
    np.add.at(barray[1, :], ling_mat, bb)
    return barray
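The reason for np.add.at rather than a plain fancy-indexed +=: with repeated indices, buffered in-place addition applies each duplicate index only once, whereas np.add.at accumulates every occurrence. A tiny illustration with made-up values:
a = np.zeros(3)
a[[0, 0, 1]] += 1            # -> [1., 1., 0.]  duplicate index 0 counted only once
b = np.zeros(3)
np.add.at(b, [0, 0, 1], 1)   # -> [2., 1., 0.]  duplicates accumulated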
However, I would suggest optimizing this directly with numba, using the @jit decorator with the option nopython=True, which gives you:
import numpy as np
from numba import jit
nlin, npix = 478, 480
bb = np.random.rand(nlin,npix)
slope = -8
fac = 4
offset= 0
def yaco_plain(bb, slope, fac, offset):
    barray = np.zeros((2, 2259), dtype=np.float64)
    for y in range(nlin):
        for x in range(npix):
            ling = int(np.ceil((x - y/slope)*fac) + 1 - offset)
            barray[0, ling] += 1
            barray[1, ling] += bb[y, x]
    return barray
@jit(nopython=True)
def yaco_jit(bb, slope, fac, offset):
    barray = np.zeros((2, 2259), dtype=np.float64)
    for y in range(nlin):
        for x in range(npix):
            ling = int(np.ceil((x - y/slope)*fac) + 1 - offset)
            barray[0, ling] += 1
            barray[1, ling] += bb[y, x]
    return barray
Let's check the outputs
np.allclose(yaco_plain(bb,slope,fac,offset),yaco_addat(bb,slope,fac,offset))
>>> True
np.allclose(yaco_plain(bb,slope,fac,offset),yaco_jit(bb,slope,fac,offset))
>>> True
and now time these
%timeit yaco_plain(bb,slope,fac,offset)
>>> 648 ms ± 4.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit yaco_addat(bb,slope,fac,offset)
>>> 27.2 ms ± 92.3 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit yaco_jit(bb,slope,fac,offset)
>>> 505 µs ± 995 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
This results in an optimized function that is far quicker than the initial two-loop version and about 53x faster than the np.add.at one. Hope this helps.
I have a little bit of code that I have to vectorize to make it faster. I'm not very experienced with Python, and I suspect the for loop is not very efficient.
Is there any way to reduce the time?
import numpy as np
import time
start = time.time()
N = 10000000 #9 seconds
#N = 100000000 #93 seconds
alpha = np.linspace(0.00000000000001, np.pi/2, N)
tmp = 2.47*np.sin(alpha)
for i in range(N):
    if abs(tmp[i]) > 1.0:
        tmp[i] = 1.0*np.sign(tmp[i])
beta = np.arcsin(tmp)
end = time.time()
print("Executed time: ",round(end-start,1),"Seconds")
I have read about some numpy functions, but I haven't found a solution for this.
Clip the array:
tmp = np.clip(2.47 * np.sin(alpha), -1.0, 1.0)
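Put into the context of the question's snippet, that looks roughly like this (a sketch; the timing code is omitted):
import numpy as np

N = 10000000
alpha = np.linspace(0.00000000000001, np.pi/2, N)
tmp = np.clip(2.47*np.sin(alpha), -1.0, 1.0)  # replaces the element-wise loop
beta = np.arcsin(tmp)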
Instead of using a loop with a condition, you can select the values by computing a mask. Here is an example:
N = 10000000
alpha = np.linspace(0.00000000000001, np.pi/2, N)
tmp = 2.47*np.sin(alpha)
indices = np.abs(tmp) > 1.0
tmp[indices] = np.sign(tmp[indices])
beta = np.arcsin(tmp)
Results on my setup:
before: 5.66 s ± 30.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each),
after: 182 ms ± 877 µs per loop (mean ± std. dev. of 7 runs, 10 loops each).
Say I have an array of distances x=[1,2,1,3,3,2,1,5,1,1].
I want to get the indices from x where cumsum reaches 10, in this case, idx=[4,9].
So the cumsum restarts after the condition is met.
I can do it with a loop, but loops are slow for large arrays and I was wondering if I could do it in a vectorized way.
A fun method
sumlm = np.frompyfunc(lambda a, b: a + b if a < 10 else b, 2, 1)
newx = sumlm.accumulate(x, dtype=object)
newx
array([1, 3, 4, 7, 10, 2, 3, 8, 9, 10], dtype=object)
np.nonzero(newx==10)
(array([4, 9]),)
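Wrapped into a reusable helper (my addition; the target parameter and the >= comparison are mine, the latter so it still works if a step overshoots the target):
import numpy as np

def cumsum_reset_indices(x, target):
    # running sum that resets once the previous accumulated value reached the target
    acc = np.frompyfunc(lambda a, b: a + b if a < target else b, 2, 1)
    running = acc.accumulate(np.asarray(x), dtype=object)
    return np.nonzero(running >= target)[0]

cumsum_reset_indices([1, 2, 1, 3, 3, 2, 1, 5, 1, 1], 10)
# array([4, 9])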
Here's one with numba and array-initialization -
import numpy as np
from numba import njit

@njit
def cumsum_breach_numba2(x, target, result):
    total = 0
    iterID = 0
    for i, x_i in enumerate(x):
        total += x_i
        if total >= target:
            result[iterID] = i
            iterID += 1
            total = 0
    return iterID

def cumsum_breach_array_init(x, target):
    x = np.asarray(x)
    result = np.empty(len(x), dtype=np.uint64)
    idx = cumsum_breach_numba2(x, target, result)
    return result[:idx]
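For reference, on the small example from the question this returns the expected breach indices (the dtype is uint64 because of the preallocated result array):
x_demo = [1, 2, 1, 3, 3, 2, 1, 5, 1, 1]
cumsum_breach_array_init(x_demo, 10)
# array([4, 9], dtype=uint64)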
Timings
Including @piRSquared's solutions and using the benchmarking setup from the same post -
In [58]: np.random.seed([3, 1415])
...: x = np.random.randint(100, size=1000000).tolist()
# @piRSquared soln1
In [59]: %timeit list(cumsum_breach(x, 10))
10 loops, best of 3: 73.2 ms per loop
# @piRSquared soln2
In [60]: %timeit cumsum_breach_numba(np.asarray(x), 10)
10 loops, best of 3: 69.2 ms per loop
# From this post
In [61]: %timeit cumsum_breach_array_init(x, 10)
10 loops, best of 3: 39.1 ms per loop
Numba : Appending vs. array-initialization
For a closer look at how the array-initialization helps, which seems to be the big difference between the two numba implementations, let's time them on the array data, since the array creation was itself heavy on runtime and both depend on it -
In [62]: x = np.array(x)
In [63]: %timeit cumsum_breach_numba(x, 10)# with appending
10 loops, best of 3: 31.5 ms per loop
In [64]: %timeit cumsum_breach_array_init(x, 10)
1000 loops, best of 3: 1.8 ms per loop
To force the output to have its own memory space, we can make a copy. It won't change things in a big way though -
In [65]: %timeit cumsum_breach_array_init(x, 10).copy()
100 loops, best of 3: 2.67 ms per loop
Loops are not always bad (especially when you need one). Also, there is no tool or algorithm that will make this quicker than O(n). So let's just make a good loop.
Generator Function
def cumsum_breach(x, target):
    total = 0
    for i, y in enumerate(x):
        total += y
        if total >= target:
            yield i
            total = 0
list(cumsum_breach(x, 10))
[4, 9]
Just In Time compiling with Numba
Numba is a third party library that needs to be installed.
Numba can be persnickety about what features are supported. But this works.
Also, as pointed out by Divakar, Numba performs better with arrays
from numba import njit
@njit
def cumsum_breach_numba(x, target):
    total = 0
    result = []
    for i, y in enumerate(x):
        total += y
        if total >= target:
            result.append(i)
            total = 0
    return result
cumsum_breach_numba(x, 10)
Testing the Two
Because I felt like it ¯\_(ツ)_/¯
Setup
np.random.seed([3, 1415])
x0 = np.random.randint(100, size=1_000_000)
x1 = x0.tolist()
Accuracy
i0 = cumsum_breach_numba(x0, 200_000)
i1 = list(cumsum_breach(x1, 200_000))
assert i0 == i1
Time
%timeit cumsum_breach_numba(x0, 200_000)
%timeit list(cumsum_breach(x1, 200_000))
582 µs ± 40.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
64.3 ms ± 5.66 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Numba was on the order of 100 times faster.
For a truer apples-to-apples test, I convert the list to a NumPy array
%timeit cumsum_breach_numba(np.array(x1), 200_000)
%timeit list(cumsum_breach(x1, 200_000))
43.1 ms ± 202 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
62.8 ms ± 327 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Which brings them to about even.
I seek a single-precision rfft to accelerate computation; scipy.fftpack.rfft does this, but returns a real array that packs the real and imaginary components into the same axis, requiring a post-processing step. I implemented the function below to obtain the standard complex array, but NumPy's rfft ends up being faster for 2D inputs (though slower for 1D). Memory is also a concern: float64 runs out of memory.
Does scipy or another library have a single precision rfft implementation that returns the standard complex array? (else, can below be done faster?)
import numpy as np
from numpy.fft import rfft
from scipy.fftpack import rfft as srfft
def rfft_sp(x):  # assumes len(x) is even
    xf = np.zeros((len(x)//2 + 1, x.shape[1]), dtype='complex64')
    h = srfft(x, axis=0)
    xf[0] = h[0]
    xf[1:] = h[1::2]
    xf[:1].imag = 0
    xf[-1:].imag = 0
    xf[1:-1].imag = h[2::2]
    return xf
x = np.random.randn(500, 100000).astype('float32')
%timeit rfft_sp(x)
%timeit rfft(x, axis=0)
>>> 565 ms ± 15.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> 517 ms ± 22.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
On the machine I tested on, using scipy.fft.rfft and casting to complex64 is faster than your implementation:
import numpy as np
from numpy.fft import rfft
from scipy.fft import rfft as srfft
from scipy.fftpack import rfft as srfft2
def rfft_sp(x):  # assumes len(x) is even
    xf = np.zeros((len(x)//2 + 1, x.shape[1]), dtype='complex64')
    h = srfft2(x, axis=0)
    xf[0] = h[0]
    xf[1:] = h[1::2]
    xf[:1].imag = 0
    xf[-1:].imag = 0
    xf[1:-1].imag = h[2::2]
    return xf

def rfft_cast(x):
    h = srfft(x, axis=0)
    return h.astype('complex64')
x = np.random.randn(500, 100000).astype('float32')
%timeit rfft(x, axis=0)
%timeit rfft_sp(x)
%timeit rfft_cast(x)
produces:
1.81 s ± 144 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
2.89 s ± 7.58 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
2.24 s ± 9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
scipy.fft works with single precision.
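A quick way to check that (a sketch, assuming SciPy >= 1.4, where the scipy.fft module is available): single-precision input already comes back as complex64, with no promotion to double precision.
import numpy as np
from scipy.fft import rfft

x32 = np.random.randn(500, 1000).astype('float32')
print(rfft(x32, axis=0).dtype)   # complex64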
I want to add all nonzero elements from a numpy array arr to a list out_list. Previous research suggests that for numpy arrays, using np.nonzero is most efficient. (My own benchmark below actually suggests it can be slightly improved using np.delete).
However, in my case I want my output to be a list, because I am combining many arrays for which I don't know the number of nonzero elements (so I can't effectively preallocate a numpy array for them). Hence, I was wondering whether there are some synergies that can be exploited to speed up the process. While my naive list comprehension approach is much slower than the pure numpy approach, I got some promising results combining list comprehension with numba.
Here's what I found so far:
import numpy as np
n = 60_000 # size of array
nz = 0.3 # fraction of zero elements
arr = (np.random.random_sample(n) - nz).clip(min=0)
# method 1
def add_to_list1(arr, out):
    out.extend(list(arr[np.nonzero(arr)]))

# method 2
def add_to_list2(arr, out):
    out.extend(list(np.delete(arr, arr == 0)))

# method 3
def add_to_list3(arr, out):
    out += [x for x in arr if x != 0]

# method 4 (not sure how to get numba to accept an empty list as argument)
from numba import njit

@njit
def add_to_list4(arr):
    return [x for x in arr if x != 0]
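Regarding the parenthetical in method 4: one way (my addition, a sketch) to pass a container into an njit function and fill it in place is numba's typed list; an empty plain Python list can't be passed because numba has no element type to infer from it (and reflected lists are deprecated anyway):
from numba import njit, types
from numba.typed import List

@njit
def add_to_typed_list(arr, out):
    # appends the nonzero elements of arr to the typed list `out` in place
    for x in arr:
        if x != 0:
            out.append(x)

out_typed = List.empty_list(types.float64)
add_to_typed_list(arr, out_typed)
The typed list can be turned back into a plain list with list(out_typed) if needed.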
out_list = []
%timeit add_to_list1(arr, out_list)
out_list = []
%timeit add_to_list2(arr, out_list)
out_list = []
%timeit add_to_list3(arr, out_list)
_ = add_to_list4(arr) # call once to compile
out_list = []
%timeit out_list.extend(add_to_list4(arr))
Yielding the following results:
2.51 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.19 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
15.6 ms ± 183 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.63 ms ± 158 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Not surprisingly, numba outperforms all other methods. Among the rest, method 2 (using np.delete) is the best. Am I missing any obvious alternative that exploits the fact that I am converting to a list afterwards? Can you think of anything to further speed up the process?
Edit 1:
Performance of .tolist():
# method 5
def add_to_list5(arr, out):
    out += arr[arr != 0].tolist()

# method 6
def add_to_list6(arr, out):
    out += np.delete(arr, arr == 0).tolist()

# method 7
def add_to_list7(arr, out):
    out += arr[arr.astype(bool)].tolist()
Timings are on par with numba:
1.62 ms ± 118 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.65 ms ± 104 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.78 ms ± 119 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Edit 2:
Here's some benchmarking using Mad Physicist's suggestion to use np.concatenate to construct a numpy array instead.
# construct numpy array using np.concatenate
import time

out_list = []
t = time.perf_counter()
for i in range(100):
    out_list.append(arr[arr != 0])
result = np.concatenate(out_list)
print(f"Time elapsed: {time.perf_counter() - t:.4f}s")

# compare with best list-based method
out_list = []
t = time.perf_counter()
for i in range(100):
    out_list += arr[arr != 0].tolist()
print(f"Time elapsed: {time.perf_counter() - t:.4f}s")
Concatenating numpy arrays indeed yields another significant speed-up, although it is not directly comparable since the output is a numpy array instead of a list. So which is best will depend on the precise use case.
Time elapsed: 0.0400s
Time elapsed: 0.1430s
TLDR;
1/ using arr[arr != 0] is the fastest of all the indexing options
2/ using .tolist() instead of list(.) speeds things up by a factor of 1.3-1.5
3/ with the gains of 1/ and 2/ combined, the speed is on par with numba
4/ if having a numpy array instead of a list is acceptable, then using np.concatenate yields another gain in speed by a factor of ~3.5 compared to the best alternative
I submit that the method of choice, if you are indeed looking for a list output, is:
def f(arr, out_list):
    out_list += arr[arr != 0].tolist()
It seems to beat all the other methods mentioned so far in the OP's question or in other responses (at the time of this writing).
If, however, you are looking for a result as a numpy array, then following @MadPhysicist's version (slightly modified to use arr[arr != 0] instead of np.nonzero()) is almost 6x faster; see the end of this post.
Side note: I would avoid using %timeit out_list.extend(some_list): it keeps adding to out_list during the many loops of timeit. Example:
out_list = []
%timeit out_list.extend([1,2,3])
and now:
>>> len(out_list)
243333333 # yikes
Timings
On 60K items on my machine, I see:
out_list = []
a = %timeit -o out_list + arr[arr != 0].tolist()
b = %timeit -o out_list + arr[np.nonzero(arr)].tolist()
c = %timeit -o out_list + list(arr[np.nonzero(arr)])
Yields:
1.23 ms ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.53 ms ± 2.53 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
4.29 ms ± 3.02 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
And:
>>> c.average / a.average
3.476
>>> b.average / a.average
1.244
For a numpy array result instead
Following @MadPhysicist, you can get some extra boost by not turning the arrays into lists, but using np.concatenate() instead:
def all_nonzero(arr_iter):
    """return non zero elements of all arrays as a np.array"""
    return np.concatenate([a[a != 0] for a in arr_iter])

def all_nonzero_list(arr_iter):
    """return non zero elements of all arrays as a list"""
    out_list = []
    for a in arr_iter:
        out_list += a[a != 0].tolist()
    return out_list
from itertools import repeat
ta = %timeit -o all_nonzero(repeat(arr, 100))
tl = %timeit -o all_nonzero_list(repeat(arr, 100))
Yields:
39.7 ms ± 107 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
227 ms ± 680 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
and
>>> tl.average / ta.average
5.75
Instead of extending a list by all of the elements of a new array, append the array itself. This will make for much fewer and smaller reallocations. You can also pre-allocate a list of Nones up-front or even use an object array, if you have an upper bound on the number of arrays you will process.
When you're done, call np.concatenate on the list.
So instead of this:
L = []
for i in range(10):
    arr = (np.random.random_sample(n) - nz).clip(min=0)
    L.extend(arr[np.nonzero(arr)])
result = np.array(L)
Try this:
L = []
for i in range(10):
    arr = (np.random.random_sample(n) - nz).clip(min=0)
    L.append(arr[np.nonzero(arr)])
result = np.concatenate(L)
Since you're keeping arrays around, the final concatenation will be a series of buffer copies (which is fast), rather than a bunch of Python-to-NumPy type conversions (which won't be). The exact method you choose for deletion is of course still up to the result of your benchmark.
Also, here's another method to add to your benchmark:
def add_to_list5(arr, out):
    out.extend(list(arr[arr.astype(bool)]))
I don't expect this to be overwhelmingly fast, but it's interesting to see how masking stacks up next to indexing.
I would like to fill a numpy array with values using a function. I want the array to start with one initial value and be filled to a given length, using each previous value in the array as the input to the function.
Each array value f[i] should be f[i-1]*x**(y/z).
After a bit of work, I have got to:
import numpy as np
f = np.zeros([31,1])
f[0] = 20
fun = lambda i, j: i*2**(1/3)
f[1:] = np.fromfunction(np.vectorize(fun), (len(f)-1,1), dtype = int)
This fills an array with
[firstvalue=20, 0, 1*2**(1/3), 2*2**(1/3), ...]
I have arrived here having read
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.fromfunction.html
Use of numpy fromfunction
Most efficient way to map function over numpy array
Fastest way to populate a matrix with a function on pairs of elements in two numpy vectors?
How do I create a numpy array using a function?
But I'm just not getting how to translate it to my function.
Except for the initial 20, this produces the same values as
np.arange(31)*2**(1/3)
Your iterative version (slightly modified)
def foo0(n):
    f = np.zeros(n)
    f[0] = 20
    for i in range(1, n):
        f[i] = f[i-1]*2**(1/3)
    return f
An alternative:
def foo1(n):
    g = [20]
    for i in range(n-1):
        g.append(g[-1]*2**(1/3))
    return np.array(g)
They produce the same thing:
In [25]: np.allclose(foo0(31), foo1(31))
Out[25]: True
Mine is a bit faster:
In [26]: timeit foo0(100)
35 µs ± 75 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [27]: timeit foo1(100)
23.6 µs ± 83.6 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
But we don't need to evaluate 2**(1/3) every time
def foo2(n):
    g = [20]
    const = 2**(1/3)
    for i in range(n-1):
        g.append(g[-1]*const)
    return np.array(g)
That gives minor time savings. But this is just multiplying each entry by the same constant, so we can use cumprod for a bigger saving:
def foo3(n):
    g = np.ones(n)*(2**(1/3))
    g[0] = 20
    return np.cumprod(g)
In [37]: timeit foo3(31)
14.9 µs ± 14.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [40]: np.allclose(foo0(31), foo3(31))
Out[40]: True
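For the general recurrence in the question, f[i] = f[i-1]*x**(y/z), the same cumprod idea might look like this (a sketch; the function name and parameterization are mine):
import numpy as np

def fill_recursive(n, start, x, y, z):
    # f[0] = start, f[i] = f[i-1] * x**(y/z)
    ratios = np.full(n, x**(y/z))
    ratios[0] = start
    return np.cumprod(ratios)

np.allclose(fill_recursive(31, 20, 2, 1, 3), foo3(31))
# True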