Why is numpy's einsum faster than numpy's built in functions? - python

Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note that the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; those differences would show up in microseconds, not milliseconds:
arr_1D=np.arange(500,dtype=np.double)
large_arr_1D=np.arange(100000,dtype=np.double)
arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)
First let's look at the np.sum function:
np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D))
True
%timeit np.sum(arr_3D)
10 loops, best of 3: 142 ms per loop
%timeit np.einsum('ijk->', arr_3D)
10 loops, best of 3: 70.2 ms per loop
Powers:
np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D))
True
%timeit arr_3D*arr_3D*arr_3D
1 loops, best of 3: 1.32 s per loop
%timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
1 loops, best of 3: 694 ms per loop
Outer product:
np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D))
True
%timeit np.outer(arr_1D, arr_1D)
1000 loops, best of 3: 411 us per loop
%timeit np.einsum('i,k->ik', arr_1D, arr_1D)
1000 loops, best of 3: 245 us per loop
All of the above are roughly twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect a speedup in an operation like this:
np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D))
True
%timeit np.sum(arr_2D*arr_3D)
1 loops, best of 3: 813 ms per loop
%timeit np.einsum('ij,oij->', arr_2D, arr_3D)
10 loops, best of 3: 85.1 ms per loop
Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent?
The DGEMM case for completeness:
np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D))
True
%timeit np.einsum('ij,jk',arr_2D,arr_2D)
10 loops, best of 3: 56.1 ms per loop
%timeit np.dot(arr_2D,arr_2D)
100 loops, best of 3: 5.17 ms per loop
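(Side note: on newer NumPy releases, np.einsum has an optimize flag that can route a contraction like this through BLAS-backed code, which is what the dgemm_einsum_optimize kernel in the perfplot update further down does. A minimal sketch, assuming NumPy >= 1.12:)
import numpy as np

arr_2D = np.arange(500**2, dtype=np.double).reshape(500, 500)
# With optimize=True, einsum may hand 'ij,jk' to a BLAS-backed path instead of
# its own loops, so the timing should land much closer to np.dot.
assert np.allclose(np.dot(arr_2D, arr_2D),
                   np.einsum('ij,jk', arr_2D, arr_2D, optimize=True))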
The leading theory is from @seberg's comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited evidence can be found by changing the dtype of the input arrays and observing the speed difference, and by the fact that not everyone observes the same trends in timings.
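A minimal sketch of that dtype probe (smaller arrays than above so it runs quickly; the exact numbers are machine-dependent):
import timeit
import numpy as np

# If SIMD use is the cause, the sum/einsum gap should shift with the element type.
for dt in (np.float64, np.float32):
    arr = np.arange(200**3, dtype=dt).reshape(200, 200, 200)
    t_sum = timeit.timeit(lambda: np.sum(arr), number=20)
    t_ein = timeit.timeit(lambda: np.einsum('ijk->', arr), number=20)
    print('%s  sum: %.4f s  einsum: %.4f s' % (dt.__name__, t_sum, t_ein))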

First off, there's been a lot of past discussion about this on the numpy list. For example, see:
http://numpy-discussion.10968.n7.nabble.com/poor-performance-of-sum-with-sub-machine-word-integer-types-td41.html
http://numpy-discussion.10968.n7.nabble.com/odd-performance-of-sum-td3332.html
Some of it boils down to the fact that einsum is new, and is presumably trying to be better about cache alignment and other memory access issues, while many of the older numpy functions focus on an easily portable implementation over a heavily optimized one. I'm just speculating there, though.
However, some of what you're doing isn't quite an "apples-to-apples" comparison.
In addition to what @Jamie already said, sum uses a more appropriate accumulator for arrays: it is more careful about checking the type of the input and picking an appropriate accumulator. For example, consider the following:
In [1]: x = 255 * np.ones(100, dtype=np.uint8)
In [2]: x
Out[2]:
array([255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255], dtype=uint8)
Note that the sum is correct:
In [3]: x.sum()
Out[3]: 25500
While einsum will give the wrong result:
In [4]: np.einsum('i->', x)
Out[4]: 156
But if we use a less limited dtype, we'll still get the result you'd expect:
In [5]: y = 255 * np.ones(100)
In [6]: np.einsum('i->', y)
Out[6]: 25500.0
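If you do want einsum with a narrow integer type, a hedged workaround is to force a wider accumulation type, either by casting the input or via einsum's dtype argument:
np.einsum('i->', x.astype(np.int64))   # 25500
np.einsum('i->', x, dtype=np.int64)    # should also give 25500; dtype sets the calculation type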

Now that numpy 1.8 is released, where according to the docs all ufuncs should use SSE2, I wanted to double-check that Seberg's comment about SSE2 was valid.
To perform the test, a fresh Python 2.7 install was created; numpy 1.7 and 1.8 were compiled with icc using standard options on an AMD Opteron core running Ubuntu.
This is the test run both before and after the 1.8 upgrade:
import numpy as np
import timeit

arr_1D=np.arange(5000,dtype=np.double)
arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)

print 'Summation test:'
print timeit.timeit('np.sum(arr_3D)',
                    'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                    number=5)/5
print timeit.timeit('np.einsum("ijk->", arr_3D)',
                    'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                    number=5)/5
print '----------------------\n'

print 'Power test:'
print timeit.timeit('arr_3D*arr_3D*arr_3D',
                    'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                    number=5)/5
print timeit.timeit('np.einsum("ijk,ijk,ijk->ijk", arr_3D, arr_3D, arr_3D)',
                    'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                    number=5)/5
print '----------------------\n'

print 'Outer test:'
print timeit.timeit('np.outer(arr_1D, arr_1D)',
                    'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                    number=5)/5
print timeit.timeit('np.einsum("i,k->ik", arr_1D, arr_1D)',
                    'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                    number=5)/5
print '----------------------\n'

print 'Einsum test:'
print timeit.timeit('np.sum(arr_2D*arr_3D)',
                    'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                    number=5)/5
print timeit.timeit('np.einsum("ij,oij->", arr_2D, arr_3D)',
                    'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                    number=5)/5
print '----------------------\n'
Numpy 1.7.1:
Summation test:
0.172988510132
0.0934836149216
----------------------
Power test:
1.93524689674
0.839519000053
----------------------
Outer test:
0.130380821228
0.121401786804
----------------------
Einsum test:
0.979052495956
0.126066613197
Numpy 1.8:
Summation test:
0.116551589966
0.0920487880707
----------------------
Power test:
1.23683619499
0.815982818604
----------------------
Outer test:
0.131808176041
0.127472200394
----------------------
Einsum test:
0.781750011444
0.129271841049
I think this is fairly conclusive evidence that SSE plays a large role in the timing differences. It should be noted that, repeating these tests, the timings vary by only ~0.003 s. The remaining difference should be covered in the other answers to this question.

I think these timings explain what's going on:
a = np.arange(1000, dtype=np.double)
%timeit np.einsum('i->', a)
100000 loops, best of 3: 3.32 us per loop
%timeit np.sum(a)
100000 loops, best of 3: 6.84 us per loop
a = np.arange(10000, dtype=np.double)
%timeit np.einsum('i->', a)
100000 loops, best of 3: 12.6 us per loop
%timeit np.sum(a)
100000 loops, best of 3: 16.5 us per loop
a = np.arange(100000, dtype=np.double)
%timeit np.einsum('i->', a)
10000 loops, best of 3: 103 us per loop
%timeit np.sum(a)
10000 loops, best of 3: 109 us per loop
So you basically have an almost constant ~3 µs overhead when calling np.sum compared to np.einsum; they run at essentially the same speed, but one takes a little longer to get going. Why could that be? My money is on the following:
a = np.arange(1000, dtype=object)
%timeit np.einsum('i->', a)
Traceback (most recent call last):
...
TypeError: invalid data type for einsum
%timeit np.sum(a)
10000 loops, best of 3: 20.3 us per loop
Not sure what is going on exactly, but it seems that np.einsum skips some checks that extract type-specific functions to do the multiplications and additions, and goes directly with * and + for standard C types only.
The multidimensional cases are not different:
n = 10; a = np.arange(n**3, dtype=np.double).reshape(n, n, n)
%timeit np.einsum('ijk->', a)
100000 loops, best of 3: 3.79 us per loop
%timeit np.sum(a)
100000 loops, best of 3: 7.33 us per loop
n = 100; a = np.arange(n**3, dtype=np.double).reshape(n, n, n)
%timeit np.einsum('ijk->', a)
1000 loops, best of 3: 1.2 ms per loop
%timeit np.sum(a)
1000 loops, best of 3: 1.23 ms per loop
So it's a mostly constant overhead, not a faster inner loop once they get going.

An update for numpy 1.21.2: Numpy's native functions are faster than einsum in almost all cases. Only einsum's outer variant and the sum23 test are faster than the non-einsum versions.
If you can use numpy's native functions, do that.
(Images created with perfplot, a project of mine.)
Code to reproduce the plots:
import numpy
import perfplot

def setup1(n):
    return numpy.arange(n, dtype=numpy.double)

def setup2(n):
    return numpy.arange(n ** 2, dtype=numpy.double).reshape(n, n)

def setup3(n):
    return numpy.arange(n ** 3, dtype=numpy.double).reshape(n, n, n)

def setup23(n):
    return (
        numpy.arange(n ** 2, dtype=numpy.double).reshape(n, n),
        numpy.arange(n ** 3, dtype=numpy.double).reshape(n, n, n),
    )

def numpy_sum(a):
    return numpy.sum(a)

def einsum_sum(a):
    return numpy.einsum("ijk->", a)

perfplot.save(
    "sum.png",
    setup=setup3,
    kernels=[numpy_sum, einsum_sum],
    n_range=[2 ** k for k in range(10)],
)

def numpy_power(a):
    return a * a * a

def einsum_power(a):
    return numpy.einsum("ijk,ijk,ijk->ijk", a, a, a)

perfplot.save(
    "power.png",
    setup=setup3,
    kernels=[numpy_power, einsum_power],
    n_range=[2 ** k for k in range(9)],
)

def numpy_outer(a):
    return numpy.outer(a, a)

def einsum_outer(a):
    return numpy.einsum("i,k->ik", a, a)

perfplot.save(
    "outer.png",
    setup=setup1,
    kernels=[numpy_outer, einsum_outer],
    n_range=[2 ** k for k in range(13)],
)

def dgemm_numpy(a):
    return numpy.dot(a, a)

def dgemm_einsum(a):
    return numpy.einsum("ij,jk", a, a)

def dgemm_einsum_optimize(a):
    return numpy.einsum("ij,jk", a, a, optimize=True)

perfplot.save(
    "dgemm.png",
    setup=setup2,
    kernels=[dgemm_numpy, dgemm_einsum],
    n_range=[2 ** k for k in range(13)],
)

def dot_numpy(a):
    return numpy.dot(a, a)

def dot_einsum(a):
    return numpy.einsum("i,i->", a, a)

perfplot.save(
    "dot.png",
    setup=setup1,
    kernels=[dot_numpy, dot_einsum],
    n_range=[2 ** k for k in range(20)],
)

def sum23_numpy(data):
    a, b = data
    return numpy.sum(a * b)

def sum23_einsum(data):
    a, b = data
    return numpy.einsum("ij,oij->", a, b)

perfplot.save(
    "sum23.png",
    setup=setup23,
    kernels=[sum23_numpy, sum23_einsum],
    n_range=[2 ** k for k in range(10)],
)

Related

Use NumPy to apply a fixed palette to an image?

I have a NumPy image in RGB bytes, let's say it's this 2x3 image:
img = np.array([[[  0, 255,   0], [255, 255, 255]],
                [[255,   0, 255], [  0, 255, 255]],
                [[255,   0, 255], [  0,   0,   0]]])
I also have a palette that covers every color used in the image. Let's say it's this palette:
palette = np.array([[255,   0, 255],
                    [  0, 255,   0],
                    [  0, 255, 255],
                    [  0,   0,   0],
                    [255, 255, 255]])
Is there some combination of indexing the image against the palette (or vice versa) that will give me a paletted image equivalent to this?
img_p = np.array([[1, 4],
                  [0, 2],
                  [0, 3]])
For comparison, I know the reverse is pretty simple. palette[img_p] will give a result equivalent to img. I'm trying to figure out if there's a similar approach in the opposite direction that will let NumPy do all the heavy lifting.
I know I can just iterate over all the image pixels individually and build my own paletted image. I'm hoping there's a more elegant option.
Okay, so I implemented the various solutions below and ran them over a moderate test set: 20 images, each one 2000x2000 pixels, with a 32-element palette of three-byte colors. Pixels were given random palette indexes. All algorithms were run over the same images.
Timing results:
mostly empty lookup array - 0.89 seconds
np.searchsorted approach - 3.20 seconds
Pandas lookup, single integer - 38.7 seconds
Using == and then aggregating the boolean results - 66.4 seconds
inverting the palette into a dict and using np.apply_along_axis() - Probably ~500 seconds, based on a smaller test set
Pandas lookup with a MultiIndex - Probably ~3000 seconds, based on a smaller test set
Given that the lookup array has a significant memory penalty (and a prohibitive one if there's an alpha channel), I'm going to go with the np.searchsorted approach. The lookup array is significantly faster if you want to spend the RAM on it.
Edit: Here is a faster way that uses np.searchsorted.
def rev_lookup_by_sort(img, palette):
    # collapse each RGB triple into a single integer, then look it up by sorting
    M = (1 + palette.max())**np.arange(3)
    p1d, ix = np.unique(palette @ M, return_index=True)
    return ix[np.searchsorted(p1d, img @ M)]
Correctness (by equivalence to rev_lookup_by_dict() in the original answer below):
np.array_equal(
rev_lookup_by_sort(img, palette),
rev_lookup_by_dict(img, palette),
)
Speedup (for a 1000 x 1000 image and a 1000 colors palette):
orig = %timeit -o rev_lookup_by_dict(img, palette)
# 2.47 s ± 10.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
v2 = %timeit -o rev_lookup_by_sort(img, palette)
# 71.8 ms ± 93.7 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> orig.average / v2.average
34.46
So that answer using np.searchsorted is 30x faster at that size.
Original answer
An initial shot gives a slowish version (hopefully we can do better). It uses a dict, where keys are colors as tuples.
def rev_lookup_by_dict(img, palette):
    d = {tuple(v): k for k, v in enumerate(palette)}
    def func(pix):
        return d.get(tuple(pix), -1)
    return np.apply_along_axis(func, -1, img)
img_p = rev_lookup_by_dict(img, palette)
Notice that "color not found" is expressed as -1 in img_p.
On your (modified) data:
>>> img_p
array([[1, 4],
       [0, 2],
       [0, 3]])
Larger example:
# setup
from math import isqrt
w, h = 1000, 1000
s = isqrt(w * h)
palette = np.random.randint(0, 256, (s, 3))
img = palette[np.random.randint(0, s, (w, h))]
Test:
img_p = rev_lookup_by_dict(img, palette)
>>> np.array_equal(palette[img_p], img)
True
Timing:
%timeit rev_lookup_by_dict(img, palette)
# 2.48 s ± 16.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
That's quite awful, but hopefully we can do better.
Faster than a dictionary, but with a 64 MB lookup array.
d = np.zeros((256,256,256), np.int32) # 64 MB!
d[tuple(palette.T)] = np.arange(len(palette))
img_p = d[tuple(img.reshape(-1,3).T)].reshape(*img.shape[:2])
# %%timeit 10 loops, best of 5: 25.8 ms per loop (1000 x 1000)
np.testing.assert_equal(img, palette[img_p])
If you can use Pandas in addition to NumPy, you can use a Pandas MultiIndex as a sort of sparse array:
import pandas as pd

inverse_palette = pd.Series(np.arange(len(palette)),
                            index=pd.MultiIndex.from_arrays(palette.T)).sort_index()
img_p = np.apply_along_axis(lambda px: inverse_palette[tuple(px)], 2, img)
That's really slow, though. You can do a bit better by converting the colors into integers first:
def collapse_bytes(array):
    result = np.zeros(array.shape[:-1], np.uint32)
    for i in range(array.shape[-1]):
        result = result * 256 + array[...,i]
    return result

inverse_palette = pd.Series(np.arange(len(palette)),
                            index=collapse_bytes(palette)).sort_index()
img_p = inverse_palette[collapse_bytes(img).flat].to_numpy()\
                       .reshape(img.shape[:-1])

numpy - efficiently copy values from matrix to matrix using some precalculated map

I have an input matrix A of size I*J
And an output matrix B of size N*M
And some precalculated map of size N*M*2 that dictates for each coordinate in B, which coordinate in A to take. The map has no specific rule or linearity that I can use. Just a map that seems random.
The matrices are pretty big (~5000*~3000) so creating a mapping matrix is out of the question (5000*3000*5000*3000)
I managed to do it using a simple map and loop:
for i in range(N):
    for j in range(M):
        B[i, j] = A[mapping[i, j, 0], mapping[i, j, 1]]
And I managed to do it using indexing:
B[coords_y, coords_x] = A[some_mapping[:, 0], some_mapping[:, 1]]
# Where coords_x, coords_y are defined as all of the coordinates:
# [[0,0],[0,1]..[0,M-1],[1,0],[1,1]...[N-1,M-1]]
This works much better, but still kind of slow.
I have infinite time in advance to calculate the mapping or any other utility calculation. But after these precalculations, this mapping should happen as fast as possible.
Currently, the only other option that I see is just to reimplement this in C or something faster...
(Just to make it clear if someone is curious, I'm creating an image out of some other, differently shaped and oriented image with some encoding. But its mapping is very complicated and not something simple or linear that can be exploited.)
If you have infinite time for precomputing, you can get a slight speedup by going to flat indexing:
map_f = np.ravel_multi_index((*np.moveaxis(mapping, 2, 0),), A.shape)
Then simply do:
A.ravel()[map_f]
Please note that this speedup is on top of the large speedup we get from fancy indexing. For example:
>>> A = np.random.random((5000, 3000))
>>> mapping = np.random.randint(0, 15000, (5000, 3000, 2)) % [5000, 3000]
>>>
>>> map_f = np.ravel_multi_index((*np.moveaxis(mapping, 2, 0),), A.shape)
>>>
>>> np.all(A.ravel()[map_f] == A[mapping[..., 0], mapping[..., 1]])
True
>>>
>>> timeit('A[mapping[:, :, 0], mapping[:, :, 1]]', globals=globals(), number=10)
4.101239089999581
>>> timeit('A.ravel()[map_f]', globals=globals(), number=10)
2.7831342950012186
If we were to compare to the original loopy code, the speedup would be more like ~40x.
Finally, note that this solution not only avoids the additional dependency and potential installation nightmare that is numba, but is also simpler, shorter and faster:
numba:
precomp: 132.957 ms
main 238.359 ms
flat indexing:
precomp: 76.223 ms
main: 219.910 ms
Code:
import numpy as np
from numba import jit

@jit
def fast(A, B, mapping):
    N, M = B.shape
    for i in range(N):
        for j in range(M):
            B[i, j] = A[mapping[i, j, 0], mapping[i, j, 1]]
    return B
from timeit import timeit
A = np.random.random((5000, 3000))
mapping = np.random.randint(0, 15000, (5000, 3000, 2)) % [5000, 3000]
a = np.random.random((5, 3))
m = np.random.randint(0, 15, (5, 3, 2)) % [5, 3]
print('numba:')
print(f"precomp: {timeit('b = fast(a, np.empty_like(a), m)', globals=globals(), number=1)*1000:10.3f} ms")
print(f"main {timeit('B = fast(A, np.empty_like(A), mapping)', globals=globals(), number=10)*100:10.3f} ms")
print('\nflat indexing:')
print(f"precomp: {timeit('map_f = np.ravel_multi_index((*np.moveaxis(mapping, 2, 0),), A.shape)', globals=globals(), number=10)*100:10.3f} ms")
map_f = np.ravel_multi_index((*np.moveaxis(mapping, 2, 0),), A.shape)
print(f"main: {timeit('B = A.ravel()[map_f]', globals=globals(), number=10)*100:10.3f} ms")
One very nice solution to these types of performance-critical problems is to keep it simple and utilize one of the high-performance packages. The easiest might be Numba, which provides the jit decorator that compiles array- and loop-heavy code to optimized LLVM. Below is a full example:
from time import time
import numpy as np
from numba import jit
# Function doing the computation
def normal(A, B, mapping):
    N, M = B.shape
    for i in range(N):
        for j in range(M):
            B[i, j] = A[mapping[i, j, 0], mapping[i, j, 1]]
    return B

# The same exact function, but with the Numba jit decorator
@jit
def fast(A, B, mapping):
    N, M = B.shape
    for i in range(N):
        for j in range(M):
            B[i, j] = A[mapping[i, j, 0], mapping[i, j, 1]]
    return B

# Create sample data
def create_sample_data(I, J, N, M):
    A = np.random.random((I, J))
    B = np.empty((N, M))
    mapping = np.asarray(np.stack((
        np.random.random((N, M))*I,
        np.random.random((N, M))*J,
    ), axis=2), dtype=int)
    return A, B, mapping
A, B, mapping = create_sample_data(500, 600, 700, 800)
# Run normally
t0 = time()
B = normal(A, B, mapping)
t1 = time()
print('normal took', t1 - t0, 'seconds')
# Run using Numba.
# First we should run the function with smaller arrays,
# just to compile the code.
fast(*create_sample_data(5, 6, 7, 8))
# Now, run with real data
t0 = time()
B = fast(A, B, mapping)
t1 = time()
print('fast took', t1 - t0, 'seconds')
This uses your own looping solution, which is inherently slow using standard Python, but as fast as C when using Numba. On my machine the normal function executes in 0.270 seconds, while the fast function executes in 0.00248 seconds. That is, Numba gives us a 109x speedup (!) pretty much for free.
Note that the fast Numba function is called twice, first with small input arrays and only then with the real data. This is a critical step which is often neglected. Without it, you will find that the performance increase is not nearly as good, as the first call is used to compile the code. The types and dimensions of the input arrays should be the same in this initial call, but the size in each dimension is not important.
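If the warm-up call is a nuisance, a hedged alternative (not from the original answer) is to give the decorator an explicit signature so Numba compiles eagerly at definition time; the signature string below assumes float64 data and an int64 mapping array and would need adjusting otherwise:
from numba import jit

# Eager compilation: with an explicit signature the compile happens here,
# not on the first call. fast_eager is a hypothetical name.
@jit("float64[:, :](float64[:, :], float64[:, :], int64[:, :, :])", nopython=True)
def fast_eager(A, B, mapping):
    N, M = B.shape
    for i in range(N):
        for j in range(M):
            B[i, j] = A[mapping[i, j, 0], mapping[i, j, 1]]
    return B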
I created B outside of the function(s) and passed it as an argument (to be "filled with values"). You might just as well allocate B inside of the function; Numba does not care.
The easiest way to get Numba is probably via the Anaconda distribution.
One option would be to use numba, which can often provide substantial improvements in this kind of simple algorithmic code.
import numpy as np
from numba import njit
I, J = 5000, 5000
N, M = 3000, 3000
A = np.random.randint(0, 10, [I, J])
B = np.random.randint(0, 10, [N, M])
mapping = np.dstack([np.random.randint(0, I - 1, (N, M)),
                     np.random.randint(0, J - 1, (N, M))])
B0 = B.copy()
def orig(A, B, mapping):
    for i in range(N):
        for j in range(M):
            B[i, j] = A[mapping[i, j, 0], mapping[i, j, 1]]

new = njit(orig)
which gives us matching results:
In [313]: Bold = B0.copy()
In [314]: orig(A, Bold, mapping)
In [315]: Bnew = B0.copy()
In [316]: new(A, Bnew, mapping)
In [317]: (Bold == Bnew).all()
Out[317]: True
and is much faster:
In [320]: %time orig(A, B0.copy(), mapping)
Wall time: 6.11 s
In [321]: %time new(A, B0.copy(), mapping)
Wall time: 257 ms
and faster still after the first call, when it has to do its jit work:
In [322]: %time new(A, B0.copy(), mapping)
Wall time: 171 ms
In [323]: %time new(A, B0.copy(), mapping)
Wall time: 163 ms
for a 30x improvement from adding two lines of code.
The most straightforward optimization you can do is drop the native python loops and use fancy numpy indexing. You already have the array to do that:
import numpy as np
A = np.random.rand(2000,3000)
B = np.empty((2500,3500)) # just for shape, really
# this is the same as your original, but with random indices
mapping = np.stack([np.random.randint(0, A.shape[0] - 1, B.shape),
                    np.random.randint(0, A.shape[1] - 1, B.shape)],
                   axis=-1)

# your loopy original
def loopy(A, B, mapping):
    B = B.copy()
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            B[i, j] = A[mapping[i, j, 0], mapping[i, j, 1]]
    return B

# vectorization with fancy indexing
def fancy(A, mapping):
    return A[mapping[...,0], mapping[...,1]]
Note that the fancy advanced-indexing function doesn't need preallocation of a B array, as a new array is constructed by the indexing operation.
There's a slight variation of the fancy indexing version which could be marginally more efficient: put the last dimension of mapping first; this way both indexing arrays are contiguous blocks of memory. It turns out from my timing test that this happens to be slower in the above setup. Anyway:
mapping_T = mapping.transpose(2, 0, 1).copy()  # but it's actually `mapping` without axis=-1 kwarg
# has shape (2, N, M)

def fancy_T(A, mapping_T):
    return A[tuple(mapping_T)]
As Paul Panzer noted in a comment, just calling .transpose on mapping will not create a copy, but rather implement the transpose using stride tricks. In order to end up with a contiguous array (which is the point of the optimization) we need to force the creation of a copy.
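A quick hedged check of that point:
# .transpose alone only changes strides; the .copy() is what materialises a
# contiguous block for each index array.
view = mapping.transpose(2, 0, 1)
print(view.flags['C_CONTIGUOUS'])        # False: still a strided view of `mapping`
print(mapping_T.flags['C_CONTIGUOUS'])   # True: the explicit copy is contiguous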
I get the following timings in ipython:
# loopy(A, B, mapping)
6.63 s ± 141 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# fancy(A, mapping)
250 ms ± 3.79 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# fancy_T(A, mapping_T)
277 ms ± 1.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
To be honest I don't understand why the original array order is faster compared to the transposed, but there's that.

Performance of 2 vector-matrix dot product

The question is focused more on the performance of the calculation.
I have two matrices of vectors. This means each has a depth dimension of 3 for the X, Y, Z components. Each element of one matrix has to form a dot product with the element at the same position in the other matrix.
A simple and inefficient version would be this one:
import math
import numpy as np

a = np.random.uniform(low=-1.0, high=1.0, size=(1000,1000,3))
b = np.random.uniform(low=-1.0, high=1.0, size=(1000,1000,3))
c = np.zeros((1000,1000))
numRow,numCol,numDepth = np.shape(a)

for idRow in range(numRow):
    for idCol in range(numCol):
        # Angle in radians
        c[idRow,idCol] = math.acos(a[idRow,idCol,0]*b[idRow,idCol,0] +
                                   a[idRow,idCol,1]*b[idRow,idCol,1] +
                                   a[idRow,idCol,2]*b[idRow,idCol,2])
However, numpy functions can speed up the calculation, as in the following version, which makes the code much faster:
# Angle in radians
d = np.arccos(np.multiply(a[:,:,0],b[:,:,0]) + np.multiply(a[:,:,1],b[:,:,1]) + np.multiply(a[:,:,2],b[:,:,2]))
However, I would like to know if there is other syntax that improves on the one above, maybe with other functions, indexing, ...
The first version takes 4.658 s while the second takes 0.354 s.
You can do this with np.einsum, which multiplies and then sums over any axes:
np.arccos(np.einsum('ijk,ijk->ij', a, b))
The more straightforward way to do what you posted in the question is to use np.sum, where you sum along the last axis (-1):
np.arccos(np.sum(a*b, -1))
They all give the same answer but einsum is the fastest and sum is next:
In [36]: timeit np.arccos(np.einsum('ijk,ijk->ij', a, b))
10000 loops, best of 3: 20.4 µs per loop
In [37]: timeit e = np.arccos(np.sum(a*b, -1))
10000 loops, best of 3: 29.8 µs per loop
In [38]: %%timeit
....: d = np.arccos(np.multiply(a[:,:,0],b[:,:,0]) +
....: np.multiply(a[:,:,1],b[:,:,1]) +
....: np.multiply(a[:,:,2],b[:,:,2]))
....:
10000 loops, best of 3: 34.6 µs per loop
The Pythran compiler can further optimize your original expression by:
Removing temporary arrays
Using SIMD instructions
Using multithreading
As showcased by this example:
$ cat cross.py
#pythran export cross(float[][][], float[][][])
import numpy as np
def cross(a, b):
    return np.arccos(np.multiply(a[:, :, 0], b[:, :, 0]) +
                     np.multiply(a[:, :, 1], b[:, :, 1]) +
                     np.multiply(a[:, :, 2], b[:, :, 2]))
$ python -m timeit -s 'import numpy as np; a = np.random.uniform(low=-1.0, high=1.0, size=(1000, 1000, 3)); b = np.random.uniform(low=-1.0, high=1.0, size=(1000, 1000, 3)); c = np.zeros((1000, 1000)); from cross import cross' 'cross(a,b)'
10 loops, best of 3: 35.4 msec per loop
$ pythran cross.py -DUSE_BOOST_SIMD -fopenmp -march=native
$ python -m timeit -s 'import numpy as np; a = np.random.uniform(low=-1.0, high=1.0, size=(1000, 1000, 3)); b = np.random.uniform(low=-1.0, high=1.0, size=(1000, 1000, 3)); c = np.zeros((1000, 1000)); from cross import cross' 'cross(a,b)'
100 loops, best of 3: 11.8 msec per loop

Apply a function to each row of a ndarray

I have this function to calculate squared Mahalanobis distance of vector x to mean:
def mahalanobis_sqdist(x, mean, Sigma):
    '''
    Calculates squared Mahalanobis Distance of vector x
    to distribution's mean
    '''
    Sigma_inv = np.linalg.inv(Sigma)
    xdiff = x - mean
    sqmdist = np.dot(np.dot(xdiff, Sigma_inv), xdiff)
    return sqmdist
I have a numpy array with a shape of (25, 4). I want to apply that function to all 25 rows of my array without a for loop. Basically, how can I write the vectorized form of this loop:
for r in d1:
    mahalanobis_sqdist(r[0:4], mean1, Sig1)
where mean1 and Sig1 are :
>>> mean1
array([ 5.028, 3.48 , 1.46 , 0.248])
>>> Sig1 = np.cov(d1[0:25, 0:4].T)
>>> Sig1
array([[ 0.16043333, 0.11808333, 0.02408333, 0.01943333],
       [ 0.11808333, 0.13583333, 0.00625   , 0.02225   ],
       [ 0.02408333, 0.00625   , 0.03916667, 0.00658333],
       [ 0.01943333, 0.02225   , 0.00658333, 0.01093333]])
I have tried the following but it didn't work:
>>> vecdist = np.vectorize(mahalanobis_sqdist)
>>> vecdist(d1, mean1, Sig1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1862, in __call__
theout = self.thefunc(*newargs)
File "<stdin>", line 6, in mahalanobis_sqdist
File "/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 445, in inv
return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
IndexError: tuple index out of range
To apply a function to each row of an array, you could use:
np.apply_along_axis(mahalanobis_sqdist, 1, d1, mean1, Sig1)
In this case, however, there is a better way. You don't have to apply a function to each row. Instead, you can apply NumPy operations to the entire d1 array to calculate the same result. np.einsum can replace the for-loop and the two calls to np.dot:
def mahalanobis_sqdist2(d, mean, Sigma):
    Sigma_inv = np.linalg.inv(Sigma)
    xdiff = d - mean
    return np.einsum('ij,im,mj->i', xdiff, xdiff, Sigma_inv)
Here are some benchmarks:
import numpy as np
np.random.seed(1)
def mahalanobis_sqdist(x, mean, Sigma):
    '''
    Calculates squared Mahalanobis Distance of vector x
    to distribution's mean
    '''
    Sigma_inv = np.linalg.inv(Sigma)
    xdiff = x - mean
    sqmdist = np.dot(np.dot(xdiff, Sigma_inv), xdiff)
    return sqmdist

def mahalanobis_sqdist2(d, mean, Sigma):
    Sigma_inv = np.linalg.inv(Sigma)
    xdiff = d - mean
    return np.einsum('ij,im,mj->i', xdiff, xdiff, Sigma_inv)

def using_loop(d1, mean, Sigma):
    expected = []
    for r in d1:
        expected.append(mahalanobis_sqdist(r[0:4], mean, Sigma))
    return np.array(expected)
d1 = np.random.random((25,4))
mean1 = np.array([ 5.028, 3.48 , 1.46 , 0.248])
Sig1 = np.cov(d1[0:25, 0:4].T)
expected = using_loop(d1, mean1, Sig1)
result = np.apply_along_axis(mahalanobis_sqdist, 1, d1, mean1, Sig1)
result2 = mahalanobis_sqdist2(d1, mean1, Sig1)
assert np.allclose(expected, result)
assert np.allclose(expected, result2)
In [92]: %timeit mahalanobis_sqdist2(d1, mean1, Sig1)
10000 loops, best of 3: 31.1 µs per loop
In [94]: %timeit using_loop(d1, mean1, Sig1)
1000 loops, best of 3: 569 µs per loop
In [91]: %timeit np.apply_along_axis(mahalanobis_sqdist, 1, d1, mean1, Sig1)
1000 loops, best of 3: 806 µs per loop
Thus mahalanobis_sqdist2 is about 18x faster than a for-loop, and 26x faster than using np.apply_along_axis.
Note that np.apply_along_axis, np.vectorize, np.frompyfunc are Python utility functions. Under the hood they use for- or while-loops. There is no real "vectorization" going on here. They can provide syntactic assistance, but don't expect them to make your code perform any better than a for-loop you write yourself.
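As a hedged illustration of that point, np.apply_along_axis behaves like an explicit Python loop over the rows:
# Both of these walk the rows in Python; neither pushes the work into C.
result_loop = np.array([mahalanobis_sqdist(row, mean1, Sig1) for row in d1])
result_apply = np.apply_along_axis(mahalanobis_sqdist, 1, d1, mean1, Sig1)
assert np.allclose(result_loop, result_apply)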
The answer by @unutbu works very nicely for applying any function to the rows of an array.
In this particular case, there are some mathematical symmetries you can use that will speed things up considerably if you are working with large arrays.
Here is a modified version of your function:
def mahalanobis_sqdist3(x, mean, Sigma):
    Sigma_inv = np.linalg.inv(Sigma)
    xdiff = x - mean
    return (xdiff.dot(Sigma_inv)*xdiff).sum(axis=-1)
If you end up using any sort of large Sigma, I would recommend that you cache Sigma_inv and pass that in as an argument to your function instead.
Since it is 4x4 in this example, this doesn't matter.
I'll show how to deal with large Sigma anyway for anyone else who comes across this.
If you aren't going to be using the same Sigma repeatedly, you won't be able to cache it, so, instead of inverting the matrix, you could use a different method to solve the linear system.
Here I'll use the LU decomposition built in to SciPy.
This only improves the time if the number of columns of x is large relative to its number of rows.
Here is a function that shows that approach:
from scipy.linalg import lu_factor, lu_solve

def mahalanobis_sqdist4(x, mean, Sigma):
    xdiff = x - mean
    Sigma_inv = lu_factor(Sigma)
    return (xdiff.T*lu_solve(Sigma_inv, xdiff.T)).sum(axis=0)
Here are some timings.
I'll include the version with einsum as mentioned in the other answer.
import numpy as np
Sig1 = np.array([[ 0.16043333, 0.11808333, 0.02408333, 0.01943333],
                 [ 0.11808333, 0.13583333, 0.00625   , 0.02225   ],
                 [ 0.02408333, 0.00625   , 0.03916667, 0.00658333],
                 [ 0.01943333, 0.02225   , 0.00658333, 0.01093333]])
mean1 = np.array([ 5.028, 3.48 , 1.46 , 0.248])
x = np.random.rand(25, 4)
%timeit np.apply_along_axis(mahalanobis_sqdist, 1, x, mean1, Sig1)
%timeit mahalanobis_sqdist2(x, mean1, Sig1)
%timeit mahalanobis_sqdist3(x, mean1, Sig1)
%timeit mahalanobis_sqdist4(x, mean1, Sig1)
giving:
1000 loops, best of 3: 973 µs per loop
10000 loops, best of 3: 36.2 µs per loop
10000 loops, best of 3: 40.8 µs per loop
10000 loops, best of 3: 83.2 µs per loop
However, changing the sizes of the arrays involved changes the timing results.
For example, letting x = np.random.rand(2500, 4), the timings are:
10 loops, best of 3: 95 ms per loop
1000 loops, best of 3: 355 µs per loop
10000 loops, best of 3: 131 µs per loop
1000 loops, best of 3: 337 µs per loop
And letting x = np.random.rand(1000, 1000), Sigma1 = np.random.rand(1000, 1000), and mean1 = np.random.rand(1000), the timings are:
1 loops, best of 3: 1min 24s per loop
1 loops, best of 3: 2.39 s per loop
10 loops, best of 3: 155 ms per loop
10 loops, best of 3: 99.9 ms per loop
Edit: I noticed that one of the other answers used the Cholesky decomposition.
Given that Sigma is symmetric and positive definite, we can actually do better than my above results.
There are some good routines from BLAS and LAPACK available through SciPy that can work with symmetric positive-definite matrices.
Here are two faster versions.
from scipy.linalg.fblas import dsymm

def mahalanobis_sqdist5(x, mean, Sigma):
    xdiff = x - mean
    Sigma_inv = np.linalg.inv(Sigma)
    return np.einsum('...i,...i->...', dsymm(1., Sigma_inv, xdiff.T).T, xdiff)

from scipy.linalg.flapack import dposv

def mahalanobis_sqdist6(x, mean, Sigma):
    xdiff = x - mean
    return np.einsum('...i,...i->...', xdiff, dposv(Sigma, xdiff.T)[1].T)
The first one still inverts Sigma.
If you pre-compute the inverse and reuse it, it is much faster (the 1000x1000 case takes 35.6ms on my machine with the pre-computed inverse).
I also used einsum to take the product then sum along the last axis.
This ended up being marginally faster than doing something like (A * B).sum(axis=-1).
These two functions give the following timings:
First test case:
10000 loops, best of 3: 55.3 µs per loop
100000 loops, best of 3: 14.2 µs per loop
Second test case:
10000 loops, best of 3: 121 µs per loop
10000 loops, best of 3: 79 µs per loop
Third test case:
10 loops, best of 3: 92.5 ms per loop
10 loops, best of 3: 48.2 ms per loop
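For reference, a hedged sketch of the pre-computed-inverse variant mentioned above (the function name is hypothetical, not from the original answer):
# Same as mahalanobis_sqdist5, but the expensive inverse is computed once
# outside the function and reused across calls.
def mahalanobis_sqdist5_pre(x, mean, Sigma_inv):
    xdiff = x - mean
    return np.einsum('...i,...i->...', dsymm(1., Sigma_inv, xdiff.T).T, xdiff)

# Sigma_inv = np.linalg.inv(Sig1)   # do this once, then reuse it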
Just saw a really nice comment on reddit that might speed things up even a little more:
This is not surprising to anyone who uses numpy regularly. For loops
in python are horribly slow. Actually, einsum is pretty slow too.
Here's a version that is faster if you have lots of vectors (500
vectors in 4 dimensions is enough to make this version faster than
einsum on my machine):
def no_einsum(d, mean, Sigma):
    L_inv = np.linalg.inv(np.linalg.cholesky(Sigma))
    xdiff = d - mean
    return np.sum(np.dot(xdiff, L_inv.T)**2, axis=1)
If your points are also high dimensional then computing the inverse is
slow (and generally a bad idea anyway) and you can save time by
solving the system directly (500 vectors in 250 dimensions is enough
to make this version the fastest on my machine):
def no_einsum_solve(d, mean, Sigma):
    L = np.linalg.cholesky(Sigma)
    xdiff = d - mean
    return np.sum(np.linalg.solve(L, xdiff.T)**2, axis=0)
The problem is that np.vectorize vectorizes over all arguments, but you need to vectorize only over the first one. You need to use the excluded keyword argument of np.vectorize:
np.vectorize(mahalanobis_sqdist, excluded=[1, 2])

Fastest way to count array values above a threshold in numpy

I have a numpy array containing 10^8 floats and want to count how many of them are >= a given threshold. Speed is crucial because the operation has to be done on large numbers of such arrays. The contestants so far are
np.sum(myarray >= thresh)
np.size(np.where(np.reshape(myarray,-1) >= thresh))
The answers at Count all values in a matrix greater than a value suggest that np.where() would be faster, but I've found inconsistent timing results. What I mean by this is for some realizations and Boolean conditions np.size(np.where(cond)) is faster than np.sum(cond), but for some it is slower.
Specifically, if a large fraction of entries fulfil the condition then np.sum(cond) is significantly faster but if a small fraction (maybe less than a tenth) do then np.size(np.where(cond)) wins.
The question breaks down into 2 parts:
Any other suggestions?
Does it make sense that the time taken by np.size(np.where(cond)) increases with the number of entries for which cond is true?
Using cython might be a decent alternative.
import numpy as np
cimport numpy as np
cimport cython
from cython.parallel import prange

DTYPE_f64 = np.float64
ctypedef np.float64_t DTYPE_f64_t

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cdef int count_above_cython(DTYPE_f64_t [:] arr_view, DTYPE_f64_t thresh) nogil:
    cdef int length, i, total
    total = 0
    length = arr_view.shape[0]
    for i in prange(length):
        if arr_view[i] >= thresh:
            total += 1
    return total

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def count_above(np.ndarray arr, DTYPE_f64_t thresh):
    cdef DTYPE_f64_t [:] arr_view = arr.ravel()
    cdef int total
    with nogil:
        total = count_above_cython(arr_view, thresh)
    return total
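For completeness, a hedged sketch of how the extension above might be built, assuming the Cython code is saved as count_above.pyx (the file and module names are assumptions); the OpenMP flags are needed for the prange loop and are shown for gcc:
# setup.py -- build in place with:  python setup.py build_ext --inplace
from setuptools import setup, Extension
from Cython.Build import cythonize
import numpy as np

ext = Extension(
    "count_above",
    ["count_above.pyx"],
    include_dirs=[np.get_include()],   # for "cimport numpy"
    extra_compile_args=["-fopenmp"],   # prange needs OpenMP
    extra_link_args=["-fopenmp"],
)
setup(ext_modules=cythonize([ext]))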
Timing of different proposed methods.
myarr = np.random.random((1000,1000))
thresh = 0.33
In [6]: %timeit count_above(myarr, thresh)
1000 loops, best of 3: 693 µs per loop
In [9]: %timeit np.count_nonzero(myarr >= thresh)
100 loops, best of 3: 4.45 ms per loop
In [11]: %timeit np.sum(myarr >= thresh)
100 loops, best of 3: 4.86 ms per loop
In [12]: %timeit np.size(np.where(np.reshape(myarr,-1) >= thresh))
10 loops, best of 3: 61.6 ms per loop
With a larger array:
In [13]: myarr = np.random.random(10**8)
In [14]: %timeit count_above(myarr, thresh)
10 loops, best of 3: 63.4 ms per loop
In [15]: %timeit np.count_nonzero(myarr >= thresh)
1 loops, best of 3: 473 ms per loop
In [16]: %timeit np.sum(myarr >= thresh)
1 loops, best of 3: 511 ms per loop
In [17]: %timeit np.size(np.where(np.reshape(myarr,-1) >= thresh))
1 loops, best of 3: 6.07 s per loop
