How is numpy multi_dot slower than numpy.dot?

I'm trying to optimize some code that performs lots of sequential matrix operations.
I figured numpy.linalg.multi_dot (docs here) would perform all the operations in C or BLAS, and thus be way faster than chaining calls like arr1.dot(arr2).dot(arr3) and so on.
I was really surprised when I ran this code in a notebook:
v1 = np.random.rand(2,2)
v2 = np.random.rand(2,2)
%%timeit
v1.dot(v2.dot(v1.dot(v2)))
The slowest run took 9.01 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 3.14 µs per loop
%%timeit
np.linalg.multi_dot([v1,v2,v1,v2])
The slowest run took 4.67 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 32.9 µs per loop
So the same operation turns out to be about 10x slower using multi_dot.
My questions are:
Am I missing something? Does this make any sense?
Is there another way to optimize sequential matrix operations?
Should I expect the same behavior using Cython?

It's because your test matrices are too small and too regular; the overhead of figuring out the fastest evaluation order may outweigh the potential performance gain.
Using the example from the documentation:
import numpy as np
from numpy.linalg import multi_dot
# Prepare some data
A = np.random.rand(10000, 100)
B = np.random.rand(100, 1000)
C = np.random.rand(1000, 5)
D = np.random.rand(5, 333)
%timeit -n 10 multi_dot([A, B, C, D])
%timeit -n 10 np.dot(np.dot(np.dot(A, B), C), D)
%timeit -n 10 A.dot(B).dot(C).dot(D)
Result:
10 loops, best of 3: 12 ms per loop
10 loops, best of 3: 62.7 ms per loop
10 loops, best of 3: 59 ms per loop
multi_dot improves performance by choosing the multiplication order that requires the fewest scalar multiplications.
In the above case, the default left-to-right order ((AB)C)D is re-evaluated as A((BC)D), so the 10000x100 @ 100x1000 multiplication is replaced by a 10000x100 @ 100x333 one, cutting down at least 2/3 of the scalar multiplications.
You can verify this by testing
%timeit -n 10 np.dot(A, np.dot(np.dot(B, C), D))
10 loops, best of 3: 19.2 ms per loop
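To see why the reordering helps, you can count the scalar multiplications each parenthesization needs; multiplying an m-by-k matrix by a k-by-n one costs roughly m*k*n of them. A rough back-of-the-envelope sketch:
# Approximate scalar-multiplication counts for the shapes above:
# A: 10000x100, B: 100x1000, C: 1000x5, D: 5x333
left_to_right = 10000*100*1000 + 10000*1000*5 + 10000*5*333   # ((AB)C)D, ~1.07e9
reordered     = 100*1000*5 + 100*5*333 + 10000*100*333        # A((BC)D), ~3.34e8
print(left_to_right / reordered)                              # roughly 3x fewer multiplications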

Related

In NumPy, larger arrays are created more quickly?

Is this a cache thing, as timeit suggests?
In [55]: timeit a = zeros((10000, 400))
100 loops, best of 3: 3.11 ms per loop
In [56]: timeit a = zeros((10000, 500))
The slowest run took 13.43 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 3.43 µs per loop
I tried to fool it, but it didn't work:
In [58]: timeit a = zeros((10000, 500+random.randint(100)))
The slowest run took 13.31 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 4.35 µs per loop
The reason is not caching but that numpy just creates a placeholder instead of the full array. This can be easily verified by monitoring your RAM usage when you do something like this:
a = np.zeros((20000, 20000), np.float64)
This doesn't allocate 20k*20k*8 bytes ~ 3 GB on my computer (but it might be OS-dependent, because np.zeros uses the C function calloc). But be careful, because most operations on this array (for example a += 5) will immediately allocate that memory! Make sure you use a size that is appropriate for your RAM, so that you'll notice the RAM increase without exhausting it.
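If you'd rather check this programmatically than watch a system monitor, something like the sketch below works (it assumes the third-party psutil package is installed; the exact numbers are OS-dependent):
import os
import numpy as np
import psutil  # third-party: pip install psutil

proc = psutil.Process(os.getpid())
rss_gb = lambda: proc.memory_info().rss / 1e9  # resident set size in GB

before = rss_gb()
a = np.zeros((20000, 20000), np.float64)  # nominally ~3.2 GB
after_alloc = rss_gb()
a += 5                                    # touching the pages forces real allocation
after_touch = rss_gb()
print(before, after_alloc, after_touch)
# Typically after_alloc is close to before, while after_touch jumps by ~3 GB.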
In the end this just postpones the allocation of the array; as soon as you operate on it, the combined timing of allocation and operation should be as expected (linear in the number of elements). It seems you're using IPython, so you can use the cell magic %%timeit:
%%timeit
a = np.zeros((10000, 400))
a += 10
# => 10 loops, best of 3: 30.3 ms per loop
%%timeit
a = np.zeros((10000, 800))
a += 10
# => 10 loops, best of 3: 60.2 ms per loop

Numpy: get the column and row index of the minimum value of a 2D array

For example,
x = array([[1,2,3],[3,2,5],[9,0,2]])
some_func(x) gives (2,1)
I know one can do it by a custom function:
def find_min_idx(x):
    k = x.argmin()
    ncol = x.shape[1]
    return k // ncol, k % ncol  # integer division and remainder of the flat index
However, I am wondering if there's a numpy built-in function that does this faster.
Thanks.
EDIT: thanks for the answers. I tested their speeds as follows:
%timeit np.unravel_index(x.argmin(), x.shape)
#100000 loops, best of 3: 4.67 µs per loop
%timeit np.where(x==x.min())
#100000 loops, best of 3: 12.7 µs per loop
%timeit find_min_idx(x) # this is using the custom function above
#100000 loops, best of 3: 2.44 µs per loop
It seems the custom function is actually faster than unravel_index() and where(). unravel_index() does something similar to the custom function, plus the overhead of checking extra arguments. where() is capable of returning multiple indices but is significantly slower for my purpose. Perhaps pure python code is not that slow for doing just two simple arithmetic operations, and the custom-function approach is about as fast as one can get.
You may use np.where:
In [9]: np.where(x == np.min(x))
Out[9]: (array([2]), array([1]))
Also, as @senderle mentioned in a comment, to get the index pair as an array you can use np.argwhere:
In [21]: np.argwhere(x == np.min(x))
Out[21]: array([[2, 1]])
Update:
As the OP's timings show, and since it is now clear that argmin is what's wanted (no duplicated minima, etc.), one way that may slightly improve on the OP's original approach is to use divmod:
divmod(x.argmin(), x.shape[1])
Time them and you will find a bit of extra speed; not much, but still an improvement.
%timeit find_min_idx(x)
1000000 loops, best of 3: 1.1 µs per loop
%timeit divmod(x.argmin(), x.shape[1])
1000000 loops, best of 3: 1.04 µs per loop
If you are really concerned about performance, you may take a look at Cython.
You can use np.unravel_index
print(np.unravel_index(x.argmin(), x.shape))
(2, 1)
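For completeness, a quick sanity check (a small sketch) that the three approaches agree on the example array:
import numpy as np

x = np.array([[1, 2, 3], [3, 2, 5], [9, 0, 2]])

idx_unravel = np.unravel_index(x.argmin(), x.shape)             # (2, 1)
idx_divmod  = divmod(x.argmin(), x.shape[1])                    # (2, 1)
idx_where   = tuple(int(i[0]) for i in np.where(x == x.min()))  # (2, 1)

assert idx_unravel == idx_divmod == idx_where == (2, 1)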

Fastest way to replace values in a numpy array with a list

I want to read a list into a numpy array. This list is being replaced in every iteration of a loop and further operations are done on the array. These operations include element-wise subtraction from another numpy array for a distance measure, and checking a threshold condition in this distance using the numpy.all() function. Currently I am using np.array( list ) each time to convert the list to an array:
#!/usr/bin/python
import numpy as np
a = [1.33,2.555,3.444,5.666,4.555,6.777,8.888]
%timeit b = np.array(a)
100000 loops, best of 3: 4.83 us per loop
Is it possible to do anything better than this, if I know the size of the list and it is invariable? Even small improvements are welcome, as I run this a very large number of times.
I've tried %timeit(np.take(a,range(len(a)),out=b)) which takes much longer: 100000 loops, best of 3: 16.8 us per loop
As you "know the size of the list and it is invariable", you can set up an array first:
b = np.zeros((7,))
This then works faster:
%timeit b[:] = a
1000000 loops, best of 3: 1.41 µs per loop
vs
%timeit b = np.array(a)
1000000 loops, best of 3: 1.67 µs per loop
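In other words, allocate the buffer once outside the loop and refill it on each iteration. A hedged sketch of how that might look in the distance-check loop described in the question (ref, threshold and the per-iteration list are placeholders for your own data):
import numpy as np

n = 7                        # known, fixed length of the incoming list
b = np.empty(n)              # allocate once, reuse every iteration
ref = np.random.rand(n)      # placeholder for the array you compare against
threshold = 0.5              # placeholder threshold

for _ in range(1000):
    a = list(np.random.rand(n))     # stand-in for the list produced each iteration
    b[:] = a                        # refill the preallocated array, no new allocation
    if np.all(np.abs(b - ref) < threshold):
        pass                        # ...do whatever the threshold condition triggers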

Fast indexed dot-product for numpy/scipy

I'm using numpy to do linear algebra. I want to do fast subset-indexed dot and other linear operations.
When dealing with big matrices, a slicing solution like A[:,subset].dot(x[subset]) may take longer than doing the multiplication on the full matrix.
A = np.random.randn(1000,10000)
x = np.random.randn(10000,1)
subset = np.sort(np.random.randint(0,10000,500))
Timings show that sub-indexing can be faster when the columns form one contiguous block.
%timeit A.dot(x)
100 loops, best of 3: 4.19 ms per loop
%timeit A[:,subset].dot(x[subset])
100 loops, best of 3: 7.36 ms per loop
%timeit A[:,:500].dot(x[:500])
1000 loops, best of 3: 1.75 ms per loop
Still, the acceleration is not what I would expect (20x faster!).
Does anyone know of a library/module that allows these kinds of fast operations through numpy or scipy?
For now I'm using Cython to code a fast column-indexed dot product through the CBLAS library. But for more complex operations (pseudo-inverse, or sub-indexed least-squares solving) I'm not sure I can reach a good acceleration.
Thanks!
Well, this is faster.
%timeit A.dot(x)
#4.67 ms
%%timeit
y = np.zeros_like(x)
y[subset] = x[subset]
d = A.dot(y)
#4.77ms
%timeit c = A[:,subset].dot(x[subset])
#7.21ms
And you have np.all(d == c) == True.
Notice that how fast this is depends on the input. With subset = array([1, 2, 3]) the timing of my solution stays pretty much the same, while the last solution drops to 46 µs.
Basically this will be faster if the size of subset is not much smaller than the size of x.
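One way to package the idea is a small helper that falls back to plain fancy-index slicing when the subset is small; the 10% cutoff below is purely illustrative and should be tuned on your own matrices:
import numpy as np

def subset_dot(A, x, subset, cutoff=0.1):
    # Compute A[:, subset] @ x[subset].
    # For small subsets, fancy-index slicing tends to win; otherwise build a
    # zero-padded copy of x and do one full-size BLAS dot (the trick above).
    if len(subset) < cutoff * A.shape[1]:
        return A[:, subset].dot(x[subset])
    y = np.zeros_like(x)
    y[subset] = x[subset]
    return A.dot(y)

A = np.random.randn(1000, 10000)
x = np.random.randn(10000, 1)
subset = np.sort(np.random.randint(0, 10000, 500))
d = subset_dot(A, x, subset)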

Are Numpy functions slow?

Numpy is supposed to be fast. However, when comparing Numpy ufuncs with standard Python functions I find that the latter are much faster.
For example,
aa = np.arange(1000000, dtype = float)
%timeit np.mean(aa) # 1000 loops, best of 3: 1.15 ms per loop
%timeit aa.mean # 10000000 loops, best of 3: 69.5 ns per loop
I got similar results with other Numpy functions like max, power. I was under the impression that Numpy has an overhead that makes it slower for small arrays but would be faster for large arrays. In the code above aa is not small: it has 1 million elements. Am I missing something?
Of course, Numpy is fast, only the functions seem to be slow:
bb = range(1000000)
%timeit mean(bb) # 1 loops, best of 3: 551 ms per loop
%timeit mean(list(bb)) # 10 loops, best of 3: 136 ms per loop
Others have already pointed out that your comparison is not a real comparison (you are not calling the function, and both are numpy anyway).
But to give an answer to the question "Are numpy functions slow?": generally speaking, no, numpy functions are not slow (or at least not slower than plain python functions). Of course there are some side notes to make:
'Slow' depends of course on what you compare with, and it can always be faster. With things like cython, numexpr, numba, calling C code, ... it is in many cases certainly possible to get faster results.
Numpy has a certain overhead, which can be significant in some cases. For example, as you already mentioned, numpy can be slower on small arrays and for scalar math. For a comparison on this, see e.g. Are NumPy's math functions faster than Python's?
To make the comparison you wanted to make:
In [1]: import numpy as np
In [2]: aa = np.arange(1000000)
In [3]: bb = range(1000000)
For the mean (note: there was no mean function in the python standard library before statistics.mean was added in Python 3.4; see Calculating arithmetic mean (average) in Python):
In [4]: %timeit np.mean(aa)
100 loops, best of 3: 2.07 ms per loop
In [5]: %timeit float(sum(bb))/len(bb)
10 loops, best of 3: 69.5 ms per loop
For max, numpy vs plain python:
In [6]: %timeit np.max(aa)
1000 loops, best of 3: 1.52 ms per loop
In [7]: %timeit max(bb)
10 loops, best of 3: 31.2 ms per loop
As a final note, in the above comparison I used a numpy array (aa) for the numpy functions and a list (bb) for the plain python functions. If you were to use a list with the numpy functions, in this case it would again be slower:
In [10]: %timeit np.max(bb)
10 loops, best of 3: 115 ms per loop
because the list is first converted to an array (which consumes most of the time). So, if you want to rely on numpy in your application, it is important to use numpy arrays to store your data (or, if you have a list, convert it to an array so that this conversion only has to be done once).
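For example, converting once up front and reusing the array (a small sketch):
import numpy as np

bb = range(1000000)
bb_arr = np.asarray(bb)   # pay the list/range-to-array conversion cost once

# repeated numpy calls now work on the array directly
m = np.max(bb_arr)
avg = np.mean(bb_arr)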
You're not calling aa.mean. Put the function call parentheses on the end, to actually call it, and the speed difference will nearly vanish. (Both np.mean(aa) and aa.mean() are NumPy; neither uses Python builtins to do the math.)
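A minimal way to convince yourself of that (a small sketch; exact timings will differ on your machine):
import numpy as np

aa = np.arange(1000000, dtype=float)

print(aa.mean)    # <built-in method mean of numpy.ndarray ...> -- only looks up the bound method
print(aa.mean())  # 499999.5 -- actually computes the mean

# %timeit aa.mean    # nanoseconds: attribute lookup only, no math
# %timeit aa.mean()  # milliseconds: a real reduction over 1e6 elements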
