Scipy spsolve is order of magnitude slower than matlab mldivide - python

Ok, so I have a linear system. A is sparse 29791 by 29791 with 202771 stored elements. B is 29791 by 1 with 4561 stored elements.
I have tried solving this system by storing A as csr_matrix and as csc_matrix, and B as a regular numpy array. Both csr and csc matrices take about a minute to solve. Then, I tried saving the triplets (row, column, data) in csv format, and importing this in matlab and then doing:
Asp = sparse(data(:, 1), data(:, 2), data(:, 3))
P = Asp\b
This takes around a second, if not less. Why is matlab nearly 2 orders of magnitude faster than scipy, and how can I speed this computation up? Even matlab mldivide takes less than 2 seconds.
A = csc_matrix((data, (np.asarray(rows), np.asarray(cols))), shape=A_shape)
P = spsolve(A, csc_matrix(B))
I would expect scipy's spsolve to be at most twice as slow as MATLAB's backslash, but it's not.
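Two things that are commonly worth trying here, sketched below (not verified against this exact system) under the assumption that data, rows, cols, A_shape and B are as in the snippet above: pass the right-hand side as a dense 1-D array instead of a sparse matrix, and try either a different fill-reducing ordering, the UMFPACK backend (requires scikit-umfpack), or an explicit LU factorization that can be reused.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve, splu

A = csc_matrix((data, (np.asarray(rows), np.asarray(cols))), shape=A_shape)
b = np.asarray(B).ravel()  # dense 1-D right-hand side instead of csc_matrix(B)

# SuperLU backend (the default); a different fill-reducing ordering sometimes helps
P = spsolve(A, b, permc_spec='MMD_AT_PLUS_A')

# with scikit-umfpack installed, spsolve can dispatch to UMFPACK instead
# P = spsolve(A, b, use_umfpack=True)

# or factor once with SuperLU and reuse the factorization
lu = splu(A)
P = lu.solve(b)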

Related

NumPy array row differences

I have a NumPy array vectors = np.random.randn(rows, cols). I want to find differences between its rows according to some other array diffs, which is sparse and "2-hot": it contains a 1 in the column corresponding to the first row of vectors and a -1 in the column corresponding to the second row. Perhaps an example will make it clearer:
diffs = np.array([[ 1,  0, -1],
                  [ 1, -1,  0]])
then I can compute the row differences by simply diffs @ vectors.
Unfortunately this is slow for diffs of 10_000x1000 and vectors 1000x15_000. I can get a speedup by using scipy.sparse: sparse.csr_matrix(diffs) @ vectors, but even this is 300ms.
Possibly this is simply as fast as it gets, but part of me wonders whether matrix multiplication is the wisest approach for this task.
What's more, I need to take the absolute value afterwards, so really I'm doing np.abs(sparse.csr_matrix(diffs) @ vectors), which adds ~200ms for a grand total of ~500ms.
I can compute the row differences by simply diffs @ vectors.
This is very inefficient. A matrix multiplication of an (n,m) matrix by an (m,k) matrix runs in O(n*m*k). In your case there are only two non-zero values per row, and you do not actually need to multiply by 1 or -1. Your problem can be computed in O(n*k) time (i.e. m times faster).
Unfortunately this is slow for diffs of 10_000x1000 and vectors 1000x15_000. I can get a speedup by using scipy.sparse.
The thing is that the input data representation is inefficient. When diff is an array of size (10_000,1000), it is not reasonable to use a dense matrix that is ~1000 times bigger than needed, nor a sparse matrix that is not optimized for rows with only two non-zero values (especially 1 and -1). Instead, store the positions of the non-zero values in a 2D array called sel_rows of shape (2,n), where the first row contains the locations of the 1s and the second row contains the locations of the -1s in the diff 2D array. You can then extract the needed rows of vectors with, for example, vectors[sel_rows[0]], and perform the final operation as vectors[sel_rows[0,:]] - vectors[sel_rows[1,:]] (a small sketch follows). This approach should be drastically faster than a dense matrix product and may be a bit faster than a sparse one, depending on the target machine.
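A minimal sketch of that idea, assuming each row of diffs contains exactly one 1 and one -1 (toy sizes so it runs instantly):
import numpy as np

diffs = np.array([[ 1,  0, -1],
                  [ 1, -1,  0]])
vectors = np.random.randn(3, 4)

# positions of the +1 and of the -1 in each row of diffs
sel_rows = np.vstack([np.argmax(diffs == 1, axis=1),
                      np.argmax(diffs == -1, axis=1)])

out = vectors[sel_rows[0]] - vectors[sel_rows[1]]
assert np.allclose(out, diffs @ vectors)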
While the above solution is simple, it creates multiple temporary arrays that are not cache-friendly, and the output array alone takes 10_000 * 15_000 * 8 = 1.1 GiB (which is quite huge). You can use Numba to remove the temporary arrays and improve performance, and multiple threads can improve it even further. Here is some untested code:
import numba as nb
import numpy as np

@nb.njit('(int_[:,::1], float64[:,::1])', parallel=True)
def compute(diffs, vectors):
    # here `diffs` holds the (n, 2) index pairs (i.e. sel_rows.T), not the 2-hot matrix
    n, k = diffs.shape[0], vectors.shape[1]
    assert diffs.shape[1] == 2
    res = np.empty((n, k))
    for i in nb.prange(n):
        a, b = diffs[i]
        for j in range(k):
            # compute np.abs here if needed, to avoid
            # creating new temporary arrays
            res[i, j] = vectors[a, j] - vectors[b, j]
    return res
The above code should be nearly optimal: it should be memory-bound and able to saturate the memory bandwidth. Note that writing such a huge output array takes time, as does reading the input array (twice). On x86-64 platforms, a basic implementation should move about 4.4 GiB of data from/to RAM, so on a mainstream PC with 20 GiB/s of memory bandwidth this takes about 220 ms. Seen that way, the sparse matrix result was actually not so bad for a sequential implementation.
If this is not enough for you, you can use single-precision floating-point numbers instead of double precision (twice as fast). You could also use a low-level C/C++ implementation to reduce the memory bandwidth used (thanks to non-temporal instructions; ~30% faster). There is not much more that can be done.

Faster way of finding least-square solution for large matrix

I want to find the least-squares solution of a matrix, and I am using the numpy linalg.lstsq function:
weights = np.linalg.lstsq(semivariance, prediction, rcond=None)
The dimensions of my variables are:
semivariance is a float array of size 5030 x 5030
prediction is a 1D array of length 5030
The problem is that it takes approximately 80 s to return the value of weights, and I have to repeat the calculation of weights about 10000 times, so the total computation time becomes huge.
Is there a faster way/pythonic function to do this?
@Brenlla appears to be right: even if you perform the least squares by explicitly solving the normal equations (i.e. via the Moore-Penrose pseudo-inverse), it is significantly faster than np.linalg.lstsq:
import numpy as np
import time
semivariance=np.random.uniform(0,100,[5030,5030]).astype(np.float64)
prediction=np.random.uniform(0,100,[5030,1]).astype(np.float64)
start=time.time()
weights_lstsq = np.linalg.lstsq(semivariance, prediction, rcond=None)
print("Took: {}".format(time.time()-start))
>>> Took: 34.65818190574646
start=time.time()
weights_pseudo = np.linalg.solve(semivariance.T.dot(semivariance),semivariance.T.dot(prediction))
print("Took: {}".format(time.time()-start))
>>> Took: 2.0434153079986572
np.allclose(weights_lstsq[0],weights_pseudo)
>>> True
The above is not on your exact matrices, but the concept likely transfers. np.linalg.lstsq solves the optimisation problem of minimizing ||b - a x||^2 for x in ax=b. That kind of approach is usually only worthwhile on extremely large problems (which is why linear models in neural networks are often fitted with gradient descent), but in this case the matrices just aren't large enough for the performance benefit.
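As a quick sanity check of the pseudo-inverse remark, here is a minimal sketch on small random matrices (toy sizes, not your data): the normal-equations solve above, the explicit pinv, and lstsq all agree.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))   # toy overdetermined system
b = rng.standard_normal((200, 1))

x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations, as above
x_pinv = np.linalg.pinv(A) @ b                 # Moore-Penrose pseudo-inverse
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

print(np.allclose(x_normal, x_pinv), np.allclose(x_normal, x_lstsq))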

Resources for doing an n dimensional FFT of a huge multi dimensional array

I have a 50-dimensional array whose dimensions are 255 x 255 x 255 x ... (50 times) ... x 255, so it holds a total of 255^50 floating-point numbers. It is out of scope to even think of fitting it in RAM. Moreover, I need to take a 50-dimensional Fast Fourier Transform (DFT) of this array. I can't do it in Python on an ordinary PC, and I can't even imagine doing it on a GPU, so I am guessing I have to use hard-disk storage, but even that is too huge. I don't need this in real time; I can afford even days for it to run. I have no clue what sort of machine I need, or whether it is even possible. I would appreciate your advice: supercomputers, grids, or anything else, even if it is costly. I am not worried about the investment.
If you found enough universes to save your data in, here is what you could do:
The Fourier Transform is separable, that means that calculating the DFT of each axis one after the other will give you the same result as if you calculated the n-dimensional DFT:
for i in range(C.ndim):
    C[...] = numpy.fft.fft(C, axis=i)
Double checking if the value is correct using a 2D tensor (because we have a 2D FFT numpy.fft.fft2 to compare against):
import numpy
A = numpy.random.rand(*[16] * 2)
B = numpy.fft.fft2(A)
C = A.astype(numpy.complex128)  # output array for the separable FFT
for i in range(C.ndim):
    C[...] = numpy.fft.fft(C, axis=i)
numpy.allclose(C, B)  # True
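For scale: 255^50 is roughly 2 x 10^120 values, so even at one byte per value the full array would need on the order of 10^120 bytes, vastly more storage than exists or could plausibly be built. The separable per-axis trick above only becomes useful once the problem is shrunk to something that actually fits in memory or on disk.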

How can I generate a large identity matrix in python without running into "memory full"?

I am using the code below:
n = 40000
numpy.matlib.identity(n)
You can do this with scipy using sparse matrix representation:
import numpy as np
from scipy.sparse import identity
n = 30000
a = np.identity(n)
print(a.nbytes)
b = identity(n)
print(b.data.nbytes)
The difference is huge (quadratic): 7200000000 vs 240000.
You can also try to decrease the size by providing an appropriate dtype, e.g. a = np.identity(n, dtype='int8'), but this only reduces the size linearly (by a factor of at most 8, going from 8-byte float64 to a 1-byte dtype).
In the same way you can do b = identity(n, dtype='int8', format='dia'), which reduces the size even further, to 30000 bytes.
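A quick check of those numbers (a minimal sketch):
from scipy.sparse import identity
b8 = identity(30000, dtype='int8', format='dia')
print(b8.data.nbytes)  # 30000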
But the most important question is what you are planning to do with this matrix (I highly doubt you just want to create it). Some operations do not support sparse matrices, and then you either have to buy more memory or come up with smart linear-algebra tricks to operate on parts of the matrices, store intermediate results on disk, and merge them together.

Python - matrix multiplication with sparse result

Suppose I have two dense matrices U (10000x50) and V(50x10000), and one sparse matrix A(10000x10000). Each element in A is either 1 or 0. I hope to find A*(UV), noting that '*' is element-wise multiplication. To solve the problem, Scipy/numpy will calculate a dense matrix UV first. But UV is dense and large (10000x10000) so it's very slow.
Because I only need a few elements of UV indicated by A, it should save a lot of time if only necessary elements are calculated instead of calculating all elements then filtering using A. Is there a way to instruct scipy to do this?
BTW, I used Matlab to solve this problem and Matlab is smart enough to find what I'm trying to do and works efficiently.
Update:
I found Matlab calculated UV fully as scipy does. My scipy installation is simply too slow...
Here's a test script and possible speedup. The basic idea is to use the nonzero coordinates of A to select rows and columns of U and V, and then use einsum to perform a subset of the possible dot products.
import numpy as np
from scipy import sparse
#M,N,d = 10,5,.1
#M,N,d = 1000,50,.1
M,N,d = 5000,50,.01 # about the limit for my memory
A=sparse.rand(M,M,d)
A.data[:] = 1 # a sparse 0,1 array
U=(np.arange(M*N)/(M*N)).reshape(M,N)
V=(np.arange(M*N)/(M*N)).reshape(N,M)
A1=A.multiply(U.dot(V)) # the direct solution
A2=np.einsum('ij,ik,kj->ij',A.A,U,V)
print(np.allclose(A1,A2))
def foo(A, U, V):
    # use the nonzero coordinates of A to select elements of U and V
    A3 = A.copy()
    U1 = U[A.row, :]
    V1 = V[:, A.col]
    A3.data[:] = np.einsum('ij,ji->i', U1, V1)
    return A3
A3 = foo(A,U,V)
print(np.allclose(A1,A3.A))
The 3 solutions match. For large arrays, foo is about 2x faster than the direct solution. For small sizes the pure einsum is competitive, but it bogs down for large arrays.
The use of dot in foo would have computed too many products, ij,jk->ik as opposed to ij,ji->i.
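In other words, for each stored (row, col) pair of A, only the single dot product of U[row, :] with V[:, col] is evaluated. A loop-based restatement of foo (a hypothetical helper, much slower than the einsum version, shown only to make the indexing explicit):
def foo_explicit(A, U, V):
    # A must be in COO format so that A.row/A.col line up with A.data
    A3 = A.copy()
    A3.data[:] = [U[r, :].dot(V[:, c]) for r, c in zip(A.row, A.col)]
    return A3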
