Cython numpy array indexer speed improvement

I wrote the following code in pure Python; a description of what it does is in the docstring:
import numpy as np
from scipy.ndimage.measurements import find_objects
import itertools

def alt_indexer(arr):
    """
    Returns a dictionary with the elements of arr as keys
    and the corresponding slice as value.

    Note:
        This function assumes arr is sorted.

    Example:
        >>> arr = [0,0,3,2,1,2,3]
        >>> loc = alt_indexer(arr)
        >>> loc
        {0: (slice(0L, 2L, None),),
         1: (slice(2L, 3L, None),),
         2: (slice(3L, 5L, None),),
         3: (slice(5L, 7L, None),)}
        >>> arr = sorted(arr)
        >>> arr[loc[3][0]]
        [3, 3]
        >>> arr[loc[2][0]]
        [2, 2]
    """
    unique, counts = np.unique(arr, return_counts=True)
    labels = np.arange(1, len(unique) + 1)
    labels = np.repeat(labels, counts)
    slicearr = find_objects(labels)
    index_dict = dict(itertools.izip(unique, slicearr))
    return index_dict
Since I will be indexing very large arrays, I wanted to speed up the operation with Cython; here is the equivalent implementation:
import numpy as np
cimport numpy as np

def _indexer(arr):
    cdef tuple unique_counts = np.unique(arr, return_counts=True)
    cdef np.ndarray[np.int32_t, ndim=1] unique = unique_counts[0]
    cdef np.ndarray[np.int32_t, ndim=1] counts = unique_counts[1].astype(int)

    cdef int start = 0
    cdef int end
    cdef int i
    cdef dict d = {}

    for i in xrange(len(counts)):
        if i > 0:
            start = counts[i-1] + start
        end = counts[i] + start
        d[unique[i]] = slice(start, end)

    return d
Benchmarks
I compared the time it took to complete both operations:
In [26]: import numpy as np
In [27]: rr=np.random.randint(0,1000,1000000)
In [28]: %timeit _indexer(rr)
10 loops, best of 3: 40.5 ms per loop
In [29]: %timeit alt_indexer(rr) #pure python
10 loops, best of 3: 51.4 ms per loop
As you can see, the speed improvement is minimal. I do realize my code was already partly optimized, since I used numpy.
Is there a bottleneck that I am not aware of?
Should I not use np.unique and write my own implementation instead?
Thanks.

If arr contains non-negative, not-too-large and heavily repeated int numbers, here's an alternative approach using np.bincount to simulate the same behavior as np.unique(arr, return_counts=True) -
def unique_counts(arr):
    counts = np.bincount(arr)
    mask = counts != 0
    unique = np.nonzero(mask)[0]
    return unique, counts[mask]
Runtime test
Case #1 :
In [83]: arr = np.random.randint(0,100,(1000)) # Input array
In [84]: unique, counts = np.unique(arr, return_counts=True)
...: unique1, counts1 = unique_counts(arr)
...:
In [85]: np.allclose(unique,unique1)
Out[85]: True
In [86]: np.allclose(counts,counts1)
Out[86]: True
In [87]: %timeit np.unique(arr, return_counts=True)
10000 loops, best of 3: 53.2 µs per loop
In [88]: %timeit unique_counts(arr)
100000 loops, best of 3: 10.2 µs per loop
Case #2:
In [89]: arr = np.random.randint(0,1000,(10000)) # Input array
In [90]: %timeit np.unique(arr, return_counts=True)
1000 loops, best of 3: 713 µs per loop
In [91]: %timeit unique_counts(arr)
10000 loops, best of 3: 39.1 µs per loop
Case #3: Let's run a case where unique has some missing numbers in the min-to-max range and verify the results against the np.unique version as a sanity check. There aren't many repeated numbers in this case, so the bincount approach isn't expected to win on performance.
In [98]: arr = np.random.randint(0,10000,(1000)) # Input array
In [99]: unique, counts = np.unique(arr, return_counts=True)
...: unique1, counts1 = unique_counts(arr)
...:
In [100]: np.allclose(unique,unique1)
Out[100]: True
In [101]: np.allclose(counts,counts1)
Out[101]: True
In [102]: %timeit np.unique(arr, return_counts=True)
10000 loops, best of 3: 61.9 µs per loop
In [103]: %timeit unique_counts(arr)
10000 loops, best of 3: 71.8 µs per loop
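To tie this back to the original slice-dictionary task: once the unique values and counts are known, the slices themselves follow from a cumulative sum of the counts, with no Cython and no find_objects needed. A minimal sketch under the same assumption of non-negative ints (the helper name bincount_indexer is mine, not from the original post); it returns plain slices, like the Cython _indexer does -

def bincount_indexer(arr):
    # Per-value counts; values that never occur get a zero count
    counts = np.bincount(arr)
    mask = counts != 0
    unique = np.nonzero(mask)[0]
    counts = counts[mask]
    # Slice boundaries come straight from the cumulative counts
    ends = counts.cumsum()
    starts = ends - counts
    return {u: slice(s, e) for u, s, e in zip(unique, starts, ends)}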

Related

Multiple cumulative sum within a numpy array

I'm something of a newbie in numpy, so I'm sorry if this question has already been asked. I'm looking for a vectorized solution that runs multiple cumulative sums of different sizes within a one-dimensional numpy array.
my_vector=np.array([1,2,3,4,5])
size_of_groups=np.array([3,2])
I would like something like
np.cumsum.group(my_vector,size_of_groups)
[1,3,6,4,9]
I do not want a solution with loops, only numpy functions or operations.
Not sure about numpy, but pandas can do this pretty easily with a groupby + cumsum (s.index.isin(size_of_groups.cumsum()) flags the start of each new group, and the outer .cumsum() on those flags produces the group labels):
import pandas as pd
s = pd.Series(my_vector)
s.groupby(s.index.isin(size_of_groups.cumsum()).cumsum()).cumsum()
0 1
1 3
2 6
3 4
4 9
dtype: int64
Here's a vectorized solution -
def intervaled_cumsum(ar, sizes):
    # Make a copy to be used as output array
    out = ar.copy()

    # Get cumulative values of array
    arc = ar.cumsum()

    # Get cumsummed indices used to place differentiated values into
    # the input array's copy
    idx = sizes.cumsum()

    # Place differentiated values that, when cumulatively summed later on,
    # give us the desired intervaled cumsum
    out[idx[0]] = ar[idx[0]] - arc[idx[0]-1]
    out[idx[1:-1]] = ar[idx[1:-1]] - np.diff(arc[idx[:-1]-1])
    return out.cumsum()
Sample run -
In [114]: ar = np.array([1,2,3,4,5,6,7,8,9,10,11,12])
...: sizes = np.array([3,2,2,3,2])
In [115]: intervaled_cumsum(ar, sizes)
Out[115]: array([ 1, 3, 6, 4, 9, 6, 13, 8, 17, 27, 11, 23])
Benchmarking
Other approach(es) -
# @cᴏʟᴅsᴘᴇᴇᴅ's solution
import pandas as pd

def pandas_soln(my_vector, sizes):
    s = pd.Series(my_vector)
    return s.groupby(s.index.isin(sizes.cumsum()).cumsum()).cumsum().values
The given sample used two intervals, of lengths 3 and 2. Keeping that, we simply give it a larger number of groups for timing purposes.
Timings -
In [146]: N = 10000 # number of groups
...: np.random.seed(0)
...: sizes = np.random.randint(2,4,(N))
...: ar = np.random.randint(0,N,sizes.sum())
In [147]: %timeit intervaled_cumsum(ar, sizes)
...: %timeit pandas_soln(ar, sizes)
10000 loops, best of 3: 178 µs per loop
1000 loops, best of 3: 1.82 ms per loop
In [148]: N = 100000 # number of groups
...: np.random.seed(0)
...: sizes = np.random.randint(2,4,(N))
...: ar = np.random.randint(0,N,sizes.sum())
In [149]: %timeit intervaled_cumsum(ar, sizes)
...: %timeit pandas_soln(ar, sizes)
100 loops, best of 3: 3.91 ms per loop
100 loops, best of 3: 17.3 ms per loop
In [150]: N = 1000000 # number of groups
...: np.random.seed(0)
...: sizes = np.random.randint(2,4,(N))
...: ar = np.random.randint(0,N,sizes.sum())
In [151]: %timeit intervaled_cumsum(ar, sizes)
...: %timeit pandas_soln(ar, sizes)
10 loops, best of 3: 31.6 ms per loop
1 loop, best of 3: 357 ms per loop
Here is an unconventional solution: express the intervaled cumsum as a lower-bidiagonal linear system whose sub-diagonal is set to zero at each group boundary, so the running sum resets there. Not very fast, though (even a bit slower than pandas).
>>> from scipy import linalg
>>>
>>> N = len(my_vector)
>>> D = np.repeat((*zip((1,-1)),), N, axis=1)
>>> D[1, np.cumsum(size_of_groups) - 1] = 0
>>>
>>> linalg.solve_banded((1, 0), D, my_vector)
array([1., 3., 6., 4., 9.])

Vectorizing nearest neighbor computation

I have the following function which is returning an array calculating the nearest neighbor:
def p_batch(U,X,Y):
    return [nearest(u,X,Y) for u in U]
I would like to replace the for loop using numpy. I've been looking into numpy.vectorize() as this seems to be the right approach, but I can't get it to work. This is what I've tried so far:
def n_batch(U,X,Y):
    vbatch = np.vectorize(nearest)
    return vbatch(U,X,Y)
Can anyone give me a hint where I went wrong?
Edit:
Implementation of nearest:
def nearest(u,X,Y):
    return Y[np.argmin(np.sqrt(np.sum(np.square(np.subtract(u,X)),axis=1)))]
Setup for U, X, Y (with M=20, N=100, d=50):
U = numpy.random.mtrand.RandomState(123).uniform(0,1,[M,d])
X = numpy.random.mtrand.RandomState(456).uniform(0,1,[N,d])
Y = numpy.random.mtrand.RandomState(789).randint(0,2,[N])
Approach #1
You could use Scipy's cdist to generate all those Euclidean distances and then simply use argmin and index into Y -
from scipy.spatial.distance import cdist
out = Y[cdist(U,X).argmin(1)]
Sample run -
In [76]: M,N,d = 5,6,3
...: U = np.random.mtrand.RandomState(123).uniform(0,1,[M,d])
...: X = np.random.mtrand.RandomState(456).uniform(0,1,[N,d])
...: Y = np.random.mtrand.RandomState(789).randint(0,2,[N])
...:
# Using a loop comprehension to verify values
In [77]: [nearest(U[i], X,Y) for i in range(len(U))]
Out[77]: [1, 0, 0, 1, 1]
In [78]: Y[cdist(U,X).argmin(1)]
Out[78]: array([1, 0, 0, 1, 1])
Approach #2
Another way is with sklearn's pairwise_distances_argmin, which gives us those argmin indices directly -
from sklearn.metrics import pairwise
Y[pairwise.pairwise_distances_argmin(U,X)]
Runtime test with M=20,N=100,d=50 -
In [90]: M,N,d = 20,100,50
...: U = np.random.mtrand.RandomState(123).uniform(0,1,[M,d])
...: X = np.random.mtrand.RandomState(456).uniform(0,1,[N,d])
...: Y = np.random.mtrand.RandomState(789).randint(0,2,[N])
...:
Testing between cdist and pairwise_distances_argmin -
In [91]: %timeit cdist(U,X).argmin(1)
10000 loops, best of 3: 55.2 µs per loop
In [92]: %timeit pairwise.pairwise_distances_argmin(U,X)
10000 loops, best of 3: 90.6 µs per loop
Timings against loopy version -
In [93]: %timeit [nearest(U[i], X,Y) for i in range(len(U))]
1000 loops, best of 3: 298 µs per loop
In [94]: %timeit Y[cdist(U,X).argmin(1)]
10000 loops, best of 3: 55.6 µs per loop
In [95]: %timeit Y[pairwise.pairwise_distances_argmin(U,X)]
10000 loops, best of 3: 91.1 µs per loop
In [96]: 298.0/55.6 # Speedup with cdist over loopy one
Out[96]: 5.359712230215827

Efficiently count zero elements in numpy array?

I need to count the number of zero elements in numpy arrays. I'm aware of the numpy.count_nonzero function, but there appears to be no analog for counting zero elements.
My arrays are not very large (typically less than 1E5 elements) but the operation is performed several millions of times.
Of course I could use len(arr) - np.count_nonzero(arr), but I wonder if there's a more efficient way to do it.
Here's a MWE of how I do it currently:
import numpy as np
import timeit

arrs = []
for _ in range(1000):
    arrs.append(np.random.randint(-5, 5, 10000))

def func1():
    for arr in arrs:
        zero_els = len(arr) - np.count_nonzero(arr)

print(timeit.timeit(func1, number=10))
An approach about 2x faster is to just use np.count_nonzero(), but with the condition you need:
In [3]: arr
Out[3]:
array([[1, 2, 0, 3],
[3, 9, 0, 4]])
In [4]: np.count_nonzero(arr==0)
Out[4]: 2
In [5]: def func_cnt():
   ...:     for arr in arrs:
   ...:         zero_els = np.count_nonzero(arr==0)
   ...:         # this counts the number of zero elements
You can also use np.where(), but it's slower than np.count_nonzero():
In [6]: np.where(arr == 0)
Out[6]: (array([0, 1]), array([2, 2]))
In [7]: len(np.where(arr == 0)[0])
Out[7]: 2
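For completeness, the func_where used in the timings below isn't shown above; a plausible definition, mirroring func1 and func_cnt (this is my reconstruction, not from the original post), would be:

def func_where():
    for arr in arrs:
        zero_els = len(np.where(arr == 0)[0])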
Efficiency: (in descending order)
In [8]: %timeit func_cnt()
10 loops, best of 3: 29.2 ms per loop
In [9]: %timeit func1()
10 loops, best of 3: 46.5 ms per loop
In [10]: %timeit func_where()
10 loops, best of 3: 61.2 ms per loop
More speedups with accelerators
It is now possible to achieve a speed boost of more than three orders of magnitude with the help of JAX, if you have access to accelerators (GPU/TPU). Another advantage of using JAX is that the NumPy code needs very little modification to make it JAX compatible. Below is a reproducible example:
In [1]: import jax.numpy as jnp
In [2]: from jax import jit

# set up inputs
In [3]: arrs = []
In [4]: for _ in range(1000):
   ...:     arrs.append(np.random.randint(-5, 5, 10000))

# JIT'd function that performs the counting task
In [5]: @jit
   ...: def func_cnt():
   ...:     for arr in arrs:
   ...:         zero_els = jnp.count_nonzero(arr==0)

# efficiency test
In [8]: %timeit func_cnt()
15.6 µs ± 391 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Numpy Vectorization of sliding-window operation

I have the following numpy arrays:
arr_1 = np.array([[1,2],[3,4],[5,6]]) # 3 X 2
arr_2 = np.array([[0.5,0.6],[0.7,0.8],[0.9,1.0],[1.1,1.2],[1.3,1.4]]) # 5 X 2
arr_1 is clearly a 3 X 2 array, whereas arr_2 is a 5 X 2 array.
Now without looping, I want to element-wise multiply arr_1 and arr_2 so that I apply a sliding window technique (window size 3) to arr_2.
Example:
Multiplication 1: np.multiply(arr_1,arr_2[:3,:])
Multiplication 2: np.multiply(arr_1,arr_2[1:4,:])
Multiplication 3: np.multiply(arr_1,arr_2[2:5,:])
I want to do this in some sort of a matrix multiplication form to make it faster than my current solution which is of the form:
for i in range(3):
    np.multiply(arr_1, arr_2[i:i+3,:])
So if the number of rows in arr_2 is large (of the order of tens of thousands), this solution doesn't really scale well.
Any help would be much appreciated.
We can use NumPy broadcasting to create those sliding windowed indices in a vectorized manner. Then, we can simply index into arr_2 with those to create a 3D array and perform element-wise multiplication with 2D array arr_1, which in turn will bring on broadcasting again.
So, we would have a vectorized implementation like so -
W = arr_1.shape[0] # Window size
idx = np.arange(arr_2.shape[0]-W+1)[:,None] + np.arange(W)
out = arr_1*arr_2[idx]
Runtime test and verify results -
In [143]: # Input arrays
     ...: arr_1 = np.random.rand(3,2)
     ...: arr_2 = np.random.rand(10000,2)
     ...:
     ...: def org_app(arr_1,arr_2):
     ...:     W = arr_1.shape[0] # Window size
     ...:     L = arr_2.shape[0]-W+1
     ...:     out = np.empty((L,W,arr_1.shape[1]))
     ...:     for i in range(L):
     ...:         out[i] = np.multiply(arr_1,arr_2[i:i+W,:])
     ...:     return out
     ...:
     ...: def vectorized_app(arr_1,arr_2):
     ...:     W = arr_1.shape[0] # Window size
     ...:     idx = np.arange(arr_2.shape[0]-W+1)[:,None] + np.arange(W)
     ...:     return arr_1*arr_2[idx]
     ...:
In [144]: np.allclose(org_app(arr_1,arr_2),vectorized_app(arr_1,arr_2))
Out[144]: True
In [145]: %timeit org_app(arr_1,arr_2)
10 loops, best of 3: 47.3 ms per loop
In [146]: %timeit vectorized_app(arr_1,arr_2)
1000 loops, best of 3: 1.21 ms per loop
This is a nice case to test the speed of as_strided and Divakar's broadcasting.
In [281]: %%timeit
     ...: out = np.empty((L,W,arr1.shape[1]))
     ...: for i in range(L):
     ...:     out[i] = np.multiply(arr1,arr2[i:i+W,:])
     ...:
10 loops, best of 3: 48.9 ms per loop
In [282]: %%timeit
     ...: idx = np.arange(L)[:,None]+np.arange(W)
     ...: out = arr1*arr2[idx]
     ...:
100 loops, best of 3: 2.18 ms per loop
In [283]: %%timeit
     ...: arr3 = as_strided(arr2, shape=(L,W,2), strides=(16,16,8))
     ...: out = arr1*arr3
     ...:
1000 loops, best of 3: 805 µs per loop
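Note that the strides (16,16,8) above are hard-coded for a C-contiguous float64 array with 2 columns. A more portable sketch would read the strides off arr2 itself (the helper name strided_windows and the writeable=False flag are my additions, not from the original answer):

from numpy.lib.stride_tricks import as_strided

def strided_windows(arr2, W):
    L = arr2.shape[0] - W + 1
    s0, s1 = arr2.strides
    # Consecutive windows start one row apart, so the first two axes both step by one row
    return as_strided(arr2, shape=(L, W, arr2.shape[1]),
                      strides=(s0, s0, s1), writeable=False)

out = arr1 * strided_windows(arr2, W)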
See Create Numpy array without enumerating array for more of a comparison of these methods.

Elementwise multiplication of arrays of different shapes in python

Say I have two arrays a and b,
a.shape = (5,2,3)
b.shape = (2,3)
then c = a * b will give me an array c of shape (5,2,3) with c[i,j,k] = a[i,j,k]*b[j,k].
Now the situation is,
a.shape = (5,2,3)
b.shape = (2,3,8)
and I want c to have a shape (5,2,3,8) with c[i,j,k,l] = a[i,j,k]*b[j,k,l].
How to do this efficiently? My a and b are actually quite large.
This should work:
a[..., numpy.newaxis] * b[numpy.newaxis, ...]
Usage:
In : a = numpy.random.randn(5,2,3)
In : b = numpy.random.randn(2,3,8)
In : c = a[..., numpy.newaxis]*b[numpy.newaxis, ...]
In : c.shape
Out: (5, 2, 3, 8)
Ref: Array Broadcasting in numpy
I think the following should work:
import numpy as np
a = np.random.normal(size=(5,2,3))
b = np.random.normal(size=(2,3,8))
c = np.einsum('ijk,jkl->ijkl',a,b)
and:
In [5]: c.shape
Out[5]: (5, 2, 3, 8)
In [6]: a[0,0,1]*b[0,1,2]
Out[6]: -0.041308376453821738
In [7]: c[0,0,1,2]
Out[7]: -0.041308376453821738
np.einsum can be a bit tricky to use, but is quite powerful for these sorts of indexing problems:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html
Also note that this requires numpy >= v1.6.0
I'm not sure about efficiency for your particular problem, but if it doesn't perform as well as needed, definitely look into using Cython with explicit for loops, possibly parallelized with prange.
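A minimal sketch of what that Cython route could look like, assuming float64 inputs with the shapes from the question (the function name outer_mul is mine; this illustrates the suggestion rather than reproducing any posted code):

# cython: boundscheck=False, wraparound=False
import numpy as np
cimport numpy as np
from cython.parallel import prange

def outer_mul(double[:, :, :] a, double[:, :, :] b):
    cdef Py_ssize_t i, j, k, l
    cdef Py_ssize_t I = a.shape[0], J = a.shape[1]
    cdef Py_ssize_t K = a.shape[2], L = b.shape[2]
    out = np.empty((I, J, K, L))
    cdef double[:, :, :, :] c = out
    # Each i is independent, so the outer loop can run in parallel without the GIL
    for i in prange(I, nogil=True):
        for j in range(J):
            for k in range(K):
                for l in range(L):
                    c[i, j, k, l] = a[i, j, k] * b[j, k, l]
    return out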
UPDATE
In [18]: %timeit np.einsum('ijk,jkl->ijkl',a,b)
100000 loops, best of 3: 4.78 us per loop
In [19]: %timeit a[..., np.newaxis]*b[np.newaxis, ...]
100000 loops, best of 3: 12.2 us per loop
In [20]: a = np.random.normal(size=(50,20,30))
In [21]: b = np.random.normal(size=(20,30,80))
In [22]: %timeit np.einsum('ijk,jkl->ijkl',a,b)
100 loops, best of 3: 16.6 ms per loop
In [23]: %timeit a[..., np.newaxis]*b[np.newaxis, ...]
100 loops, best of 3: 16.6 ms per loop
In [2]: a = np.random.normal(size=(500,20,30))
In [3]: b = np.random.normal(size=(20,30,800))
In [4]: %timeit np.einsum('ijk,jkl->ijkl',a,b)
1 loops, best of 3: 3.31 s per loop
In [5]: %timeit a[..., np.newaxis]*b[np.newaxis, ...]
1 loops, best of 3: 2.6 s per loop
