Speed up numpy matrix multiplication using cython - python

I am computing a matrix multiplication a few thousand times during my algorithm. Therefore, I compute:
import numpy as np
import time

def mat_mul(mat1, mat2, mat3, mat4):
    return np.dot(np.transpose(mat1), np.multiply(np.diag(mat2)[:, None], mat3)) + mat4

n = 2000
mat1 = np.random.rand(n, n)
mat2 = np.diag(np.random.rand(n))
mat3 = np.random.rand(n, n)
mat4 = np.random.rand(n, n)

t0 = time.time()
cov_11 = mat_mul(mat1, mat2, mat1, mat4)
t1 = time.time()
print('time ', t1 - t0, 's')
The matrices are of size (2000, 2000) with n = 2000, and mat2 only has entries along its diagonal; the remaining entries are zero.
On my machine I get the following:
time 0.3473696708679199 s
How can I speed this up?
Thanks.

The NumPy implementation can be optimized a bit by reducing the number of temporary arrays and reusing them as much as possible (i.e. multiple times). Indeed, while matrix multiplications are generally heavily optimized by BLAS implementations, filling/copying (newly allocated) arrays adds non-negligible overhead.
Here is the implementation:
def mat_mul_opt(mat1, mat2, mat3, mat4):
    n = mat1.shape[0]
    tmp1 = np.empty((n, n))
    tmp2 = np.empty((n, n))
    vect = np.diag(mat2)[:, None]
    np.dot(np.transpose(mat1), np.multiply(vect, mat3, out=tmp1), out=tmp2)
    np.add(mat4, tmp2, out=tmp1)
    return tmp1
The code can be optimized further if it is fine to mutate input matrices or if you can pre-allocate tmp1 and tmp2 outside the function once (and then reuse them multiple times). Here is an example:
def mat_mul_opt2(mat1, mat2, mat3, mat4, tmp1, tmp2):
    vect = np.diag(mat2)[:, None]
    np.dot(np.transpose(mat1), np.multiply(vect, mat3, out=tmp1), out=tmp2)
    np.add(mat4, tmp2, out=tmp1)
    return tmp1
Here are performance results on my i5-9600KF processor (6 cores):
mat_mul: 103.6 ms
mat_mul_opt: 96.7 ms
mat_mul_opt2: 83.5 ms
np.dot time only: 74.4 ms (a practical lower bound)
Optimal lower bound: 55 ms (quite optimistic)
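For completeness, here is a minimal sketch (not from the original answer) of how the pre-allocated variant could be driven by the caller; num_iterations is a stand-in name:
tmp1 = np.empty((n, n))  # allocated once, outside the hot loop
tmp2 = np.empty((n, n))
for _ in range(num_iterations):
    # note: the returned array aliases tmp1, so copy it if it must survive the next call
    cov_11 = mat_mul_opt2(mat1, mat2, mat1, mat4, tmp1, tmp2)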

Cython is not going to speed this up, simply because NumPy uses other tricks to speed things up, like threading and SIMD; anyone who tries to implement such a function with Cython alone is going to end up with much worse performance.
Only two things are possible:
use a GPU-based version of numpy (CuPy); see the sketch after this list
use a different, more optimized backend for numpy if you aren't already using the best one (e.g. Intel MKL)
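As a sketch of the first option (assuming a CUDA-capable GPU with cupy installed; mat_mul_gpu is a hypothetical name):
import cupy as cp

def mat_mul_gpu(mat1, mat2, mat3, mat4):
    # copy the host (numpy) arrays to the GPU
    m1, m2, m3, m4 = (cp.asarray(m) for m in (mat1, mat2, mat3, mat4))
    # same formula as mat_mul, executed on the GPU
    res = cp.dot(m1.T, cp.diag(m2)[:, None] * m3) + m4
    return cp.asnumpy(res)  # copy the result back to host memory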

Related

Why is looping through pytorch tensors so slow (compared to Numpy)?

I've been working with image transformations recently and ran into a situation where I have a large array (shape 100,000 x 3) where each row represents a point in 3D space, like:
pnt = [x y z]
All I'm trying to do is iterate through each point and matrix-multiply it with a matrix T (shape 3 x 3).
Test with Numpy:
def transform(pnt_cloud, T):
    arr = np.zeros(pnt_cloud.shape[0])  # output buffer for the depth values
    i = 0
    for pnt in pnt_cloud:
        xyz_pnt = np.dot(T, pnt)
        if xyz_pnt[0] > 0:
            arr[i] = xyz_pnt[0]
        i += 1
    return arr
Calling this code and measuring the runtime (using %time) gives the output:
Out[190]: CPU times: user 670 ms, sys: 7.91 ms, total: 678 ms
Wall time: 674 ms
Test with PyTorch tensor:
import torch

tensor_cld = torch.tensor(pnt_cloud)
tensor_T = torch.tensor(T)

def transform(pnt_cloud, T):
    depth_array = torch.tensor(np.zeros(pnt_cloud.shape[0]))
    i = 0
    for pnt in pnt_cloud:
        xyz_pnt = torch.matmul(T, pnt)
        if xyz_pnt[0] > 0:
            depth_array[i] = xyz_pnt[0]
        i += 1
    return depth_array
Calling this code and measuring the runtime (using %time) gives the output:
Out[199]: CPU times: user 6.15 s, sys: 28.1 ms, total: 6.18 s
Wall time: 6.09 s
NOTE: Doing the same with torch.jit only reduces the runtime by about 2 s.
I would have thought that PyTorch tensor computations would be much faster due to the way PyTorch breaks its code down in the compiling stage. What am I missing here?
Would there be any faster way to do this other than using Numba?
Why are you using a for loop??
Why do you compute a 3x3 dot product and only use the first element of the result??
You can do all the math in a single matmul:
with torch.no_grad():
    depth_array = torch.matmul(pnt_cloud, T[:1, :].T)  # nx3 dot 3x1 -> nx1
    # since you only want non-negative results
    depth_array = torch.clamp(depth_array, min=0)
Since you want to compare the runtime to numpy, you should disable gradient tracking, as done with torch.no_grad() above.
For the speed, I got this reply from the PyTorch forums:
operations of 1-3 elements are generally rather expensive in PyTorch as the overhead of Tensor creation becomes significant (this includes setting single elements); I think this is the main thing here. This is also the reason why the JIT doesn't help a whole lot (it only takes away the Python overhead) and NumPy shines (where e.g. the assignment to depth_array[i] is just a memory write).
the matmul itself might differ in speed if you have different BLAS backends for it in PyTorch vs. NumPy.
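For reference, here is a minimal NumPy sketch of the same single-matmul approach (assuming pnt_cloud is an (N, 3) float array and T is 3 x 3):
import numpy as np

# only the first row of T matters, since only xyz_pnt[0] is kept
depth_array = pnt_cloud @ T[0]             # (N, 3) dot (3,) -> (N,)
depth_array = np.maximum(depth_array, 0)   # zero out the negative entries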

Large scale matrix multiplication using Numpy

I am facing a problem where I need to perform matrix multiplication between two large matrices, A [400000 x 70000] and B [70000 x 1000]. The two matrices are dense and have no special structure that I can utilize.
Currently my implementation divides A into multiple chunks of rows, say sub_A [2000 x 70000], and performs sub_A * B. I noticed that a lot of time is spent on I/O, i.e. reading in sub_A: reading the matrix takes about 500 seconds, while the computation takes about 300 seconds.
Will using PyTables here be useful to improve the I/O efficiency? Is there any library that would help improve the time efficiency?
Here is the code:
def sim_phe_g(geno, betas, chunk_size):
    num_indv = geno.row_count
    num_snps = geno.col_count
    num_settings = betas.shape[1]
    phe_g = np.zeros([num_indv, num_settings])
    # divide individuals into chunks
    for i in range(0, num_indv, chunk_size):
        sub_geno = geno[i : i + chunk_size, :]
        sub_geno = sub_geno.read().val
        phe_g[i : i + chunk_size, :] = np.dot(sub_geno, betas)
    return phe_g
geno is of size [400000 x 70000] and betas is of size [70000 x 1000]. geno here is a large matrix that is stored on disk. The statement sub_geno = sub_geno.read().val loads a chunk of the genotype into memory, and this statement costs a lot of time.
Also, I divide the big matrix into chunks because of a 32 GB memory limit.
Try using TensorFlow for GPU optimization; it's very good for matrix multiplication, as it allows you to parallelize each operation.
If applicable, try using TensorFlow for large matrix multiplication: as you can see from this article, TensorFlow performs significantly better for large matrices under many circumstances, most likely because it is primarily built for the very purpose of handling large matrices efficiently.
For more details on the specific use of matrix multiplication, kindly refer to the documentation.
I tested it on a (1000, 1000) matrix multiplication:
numpy.matmul: 60 ms ± 5.35 ms
tensorflow.matmul: 42.5 ms ± 2.47 ms
100 runs were conducted for each; the mean ± std. dev. is shown.
P.S. Only TensorFlow's CPU version was used.
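As a minimal sketch of this suggestion (assuming TensorFlow 2.x; the shapes are small stand-ins for sub_geno and betas):
import numpy as np
import tensorflow as tf

sub_geno = np.random.rand(2000, 7000).astype(np.float32)  # stand-in chunk of geno
betas = np.random.rand(7000, 100).astype(np.float32)      # stand-in betas

# tf.matmul runs on the GPU automatically when one is available
chunk_result = tf.matmul(sub_geno, betas).numpy()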

Faster way of finding least-square solution for large matrix

I want to find the least-squares solution of a matrix, and I am using numpy's linalg.lstsq function:
weights = np.linalg.lstsq(semivariance, prediction, rcond=None)
The dimensions of my variables are:
semivariance is a float array of size 5030 x 5030
prediction is a 1D array of length 5030
The problem I have is that it takes approximately 80 s to return the value of weights, and I have to repeat the calculation about 10,000 times, so the computation time is prohibitive.
Is there a faster way/pythonic function to do this?
@Brenlla appears to be right: even if you perform least squares by solving the normal equations (effectively applying the Moore-Penrose pseudo-inverse), it is significantly faster than np.linalg.lstsq:
import numpy as np
import time

semivariance = np.random.uniform(0, 100, [5030, 5030]).astype(np.float64)
prediction = np.random.uniform(0, 100, [5030, 1]).astype(np.float64)

start = time.time()
weights_lstsq = np.linalg.lstsq(semivariance, prediction, rcond=None)
print("Took: {}".format(time.time() - start))
>>> Took: 34.65818190574646

start = time.time()
weights_pseudo = np.linalg.solve(semivariance.T.dot(semivariance), semivariance.T.dot(prediction))
print("Took: {}".format(time.time() - start))
>>> Took: 2.0434153079986572

np.allclose(weights_lstsq[0], weights_pseudo)
>>> True
The above is not on your exact matrices, but the concept likely transfers. np.linalg.lstsq solves ax = b for x by minimizing || b - ax ||^2 as an optimization problem. That style of approach is usually faster on extremely large matrices, which is why linear models are often solved using gradient descent in neural networks, but in this case the matrices just aren't large enough for the performance benefit.
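If the repeated solves all share the same semivariance matrix and only prediction changes (an assumption; the question doesn't say), and semivariance has full rank so that semivariance.T @ semivariance is positive definite, you can go further: factor the normal-equations matrix once and reuse the factorization for every right-hand side. A minimal sketch (solve_weights is a hypothetical helper):
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = semivariance                      # (5030, 5030), fixed across repetitions
AtA_factor = cho_factor(A.T.dot(A))   # Cholesky factorization, computed once

def solve_weights(prediction):
    # each call is now two triangular solves instead of a full lstsq
    return cho_solve(AtA_factor, A.T.dot(prediction))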

K-Means: assign clusters to new data points

I've implemented a k-means clustering algorithm in python, and now I want to label new data with the clusters I got from my algorithm. My approach is to iterate through every data point and every centroid to find the minimum distance and the centroid associated with it. But I wonder whether there are simpler or shorter ways to do it.
def assign_cluster(clusterDict, data):
    clusterList = []
    label = []
    cen = list(clusterDict.values())
    for i in range(len(data)):
        for j in range(len(cen)):
            # if cen[j] has the minimum distance with data[i]
            # then clusterList[i] = cen[j]
Here clusterDict is a dictionary whose keys are labels, [0, 1, 2, ....], and whose values are the coordinates of the centroids.
Can someone help me implement this?
This is a good use case for numba, because it lets you express this as a simple double loop without a big performance penalty, which in turn allows you to avoid the excessive extra memory of using np.tile to replicate the data across a third dimension just to do it in a vectorized manner.
Borrowing the standard vectorized numpy implementation from the other answer, I have these two implementations:
import numba
import numpy as np

def kmeans_assignment(centroids, points):
    num_centroids, dim = centroids.shape
    num_points, _ = points.shape
    # Tile and reshape both arrays into `[num_points, num_centroids, dim]`.
    centroids = np.tile(centroids, [num_points, 1]).reshape([num_points, num_centroids, dim])
    points = np.tile(points, [1, num_centroids]).reshape([num_points, num_centroids, dim])
    # Compute all distances (for all points and all centroids) at once and
    # select the min centroid for each point.
    distances = np.sum(np.square(centroids - points), axis=2)
    return np.argmin(distances, axis=1)
@numba.jit
def kmeans_assignment2(centroids, points):
    P, C = points.shape[0], centroids.shape[0]
    distances = np.zeros((P, C), dtype=np.float32)
    for p in range(P):
        for c in range(C):
            distances[p, c] = np.sum(np.square(centroids[c] - points[p]))
    return np.argmin(distances, axis=1)
Then for some sample data, I did a few timing experiments:
In [12]: points = np.random.rand(10000, 50)
In [13]: centroids = np.random.rand(30, 50)
In [14]: %timeit kmeans_assignment(centroids, points)
196 ms ± 6.78 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit kmeans_assignment2(centroids, points)
127 ms ± 12.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I won't go so far as to say that the numba version is certainly faster than the np.tile version, but clearly it's very close while not incurring the extra memory cost of np.tile.
In fact, I noticed on my laptop that when I make the shapes larger and use (10000, 1000) for the shape of points and (200, 1000) for the shape of centroids, the np.tile version raises a MemoryError, while the numba function runs in under 5 seconds with no memory error.
Separately, I actually noticed a slowdown when using numba.jit on the first version (with np.tile), which is likely due to the extra array creation inside the jitted function, combined with the fact that there's not much numba can optimize when you're already calling all vectorized functions.
And I also did not notice any significant improvement in the second version when trying to shorten the code by using broadcasting. E.g. shortening the double loop to be
for p in range(P):
    distances[p, :] = np.sum(np.square(centroids - points[p, :]), axis=1)
did not really help anything (and would use more memory when repeatedly broadcasting points[p, :] across all of centroids).
This is one of the really nice benefits of numba. You really can write the algorithms in a very straightforward, loop-based way that comports with standard descriptions of algorithms, and it allows a finer degree of control over how the syntax unpacks into memory consumption or broadcasting... all without giving up runtime performance.
An efficient way to perform the assignment phase is by doing a vectorized computation. This approach assumes that you start with two 2D arrays, points and centroids, with the same number of columns (the dimensionality of the space) but possibly different numbers of rows. By using tiling (np.tile) we can compute the distance matrix in a batch, then select the closest cluster for each point.
Here's the code:
def kmeans_assignment(centroids, points):
    num_centroids, dim = centroids.shape
    num_points, _ = points.shape
    # Tile and reshape both arrays into `[num_points, num_centroids, dim]`.
    centroids = np.tile(centroids, [num_points, 1]).reshape([num_points, num_centroids, dim])
    points = np.tile(points, [1, num_centroids]).reshape([num_points, num_centroids, dim])
    # Compute all distances (for all points and all centroids) at once and
    # select the min centroid for each point.
    distances = np.sum(np.square(centroids - points), axis=2)
    return np.argmin(distances, axis=1)
See this GitHub gist for a complete runnable example.
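As a quick usage sketch (the shapes are arbitrary, just to show the calling convention):
points = np.random.rand(1000, 2)    # 1000 points in 2D
centroids = np.random.rand(5, 2)    # 5 centroids in the same space
labels = kmeans_assignment(centroids, points)  # (1000,) array of centroid indices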

Computing the spectral norms of ~1m Hermitian matrices: `numpy.linalg.norm` is too slow

I would like to calculate the spectral norms of N 8x8 Hermitian matrices, with N being close to 1E6. As an example, take these 1 million random complex 8x8 matrices:
import numpy as np
array = np.random.rand(8, 8, 10**6) + 1j*np.random.rand(8, 8, 10**6)
It currently takes me almost 10 seconds using numpy.linalg.norm:
np.linalg.norm(array, ord=2, axis=(0,1))
I tried using the Cython code below, but this gave me only a negligible performance improvement:
import numpy as np
cimport numpy as np
cimport cython

np.import_array()
DTYPE = np.complex64

@cython.boundscheck(False)
@cython.wraparound(False)
def function(np.ndarray[np.complex64_t, ndim=3] Array):
    assert Array.dtype == DTYPE
    cdef int shape0 = Array.shape[2]
    cdef np.ndarray[np.float32_t, ndim=1] normarray = np.zeros(shape0, dtype=np.float32)
    normarray = np.linalg.norm(Array, ord=2, axis=(0, 1))
    return normarray
I also tried numba and some other scipy functions (such as scipy.linalg.svdvals) to calculate the singular values of these matrices. Everything is still too slow.
Is it not possible to make this any faster? Is numpy already optimized to the extent that no speed gains are possible by using Cython or numba? Or is my code highly inefficient and I am doing something fundamentally wrong?
I noticed that only two of my CPU cores are 100% utilized while doing the calculation. With that in mind, I looked at these previous StackOverflow questions:
why isn't numpy.mean multithreaded?
Why does multiprocessing use only a single core after I import numpy?
multithreaded blas in python/numpy (didn't help)
and several others, but unfortunately I still don't have a solution.
I considered splitting my array into smaller chunks, and processing these in parallel (perhaps on the GPU using CUDA). Is there a way within numpy/Python to do this? I don't yet know where the bottleneck is in my code, i.e. whether it is CPU or memory-bound, or perhaps something different.
Digging into the code for np.linalg.norm, I've deduced that, for these parameters, it finds the maximum of the matrix singular values over the N dimension.
First generate a small sample array. Make N the first dimension to eliminate a rollaxis operation:
In [268]: N=10; A1 = np.random.rand(N,8,8)+1j*np.random.rand(N,8,8)
In [269]: np.linalg.norm(A1,ord=2,axis=(1,2))
Out[269]:
array([ 5.87718306, 5.54662999, 6.15018125, 5.869058 , 5.80882818,
5.86060462, 6.04997992, 5.85681085, 5.71243196, 5.58533323])
the equivalent operation:
In [270]: np.amax(np.linalg.svd(A1,compute_uv=0),axis=-1)
Out[270]:
array([ 5.87718306, 5.54662999, 6.15018125, 5.869058 , 5.80882818,
5.86060462, 6.04997992, 5.85681085, 5.71243196, 5.58533323])
same values, and same time:
In [271]: timeit np.linalg.norm(A1,ord=2,axis=(1,2))
1000 loops, best of 3: 398 µs per loop
In [272]: timeit np.amax(np.linalg.svd(A1,compute_uv=0),axis=-1)
1000 loops, best of 3: 389 µs per loop
And most of the time spent in svd, which produces an (N,8) array:
In [273]: timeit np.linalg.svd(A1,compute_uv=0)
1000 loops, best of 3: 366 µs per loop
So if you want to speed up the norm, you have to look further into speeding up this svd. svd uses np.linalg._umath_linalg functions, which live in a compiled .so file.
The C code is in https://github.com/numpy/numpy/blob/97c35365beda55c6dead8c50df785eb857f843f0/numpy/linalg/umath_linalg.c.src
It sure looks like this is the fastest you'll get. There's no Python-level loop. Any looping is in that C code, or in the LAPACK functions it calls.
np.linalg.norm(A, ord=2) computes the spectral norm by finding the largest singular value using SVD. However, since your 8x8 submatrices are Hermitian, their largest singular values will be equal to the maximum of their absolute eigenvalues (see here):
import numpy as np

def random_symmetric(N, k):
    A = np.random.randn(N, k, k)
    A += A.transpose(0, 2, 1)
    return A

N = 100000
k = 8
A = random_symmetric(N, k)

norm1 = np.abs(np.linalg.eigvalsh(A)).max(1)
norm2 = np.linalg.norm(A, ord=2, axis=(1, 2))
print(np.allclose(norm1, norm2))
# True
Eigendecomposition on a Hermitian matrix is quite a bit faster than SVD:
In [1]: %%timeit A = random_symmetric(N, k)
np.linalg.norm(A, ord=2, axis=(1, 2))
....:
1 loops, best of 3: 1.54 s per loop
In [2]: %%timeit A = random_symmetric(N, k)
np.abs(np.linalg.eigvalsh(A)).max(1)
....:
1 loops, best of 3: 757 ms per loop
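The same eigvalsh approach carries over directly to the complex Hermitian matrices from the question; here is a minimal sketch (random_hermitian is a hypothetical helper):
import numpy as np

def random_hermitian(N, k):
    A = np.random.randn(N, k, k) + 1j * np.random.randn(N, k, k)
    return A + A.conj().transpose(0, 2, 1)  # A + A^H is Hermitian

A = random_hermitian(100000, 8)
# spectral norm of a Hermitian matrix = largest absolute eigenvalue
norms = np.abs(np.linalg.eigvalsh(A)).max(axis=1)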
