Fast subsequent multiplication of many matrices in Python

I have to generate a matrix (a propagator in physics) by the ordered multiplication of many other matrices. Each matrix is about (30, 30) in size, with all real entries (floats), but it is not symmetric. The number of matrices to multiply varies between 1e3 and 1e5. Each matrix differs only slightly from the previous one, but they do not commute (and at the end I need the product of all of them, in order). Each matrix corresponds to a certain time slice, so I know how to generate each of them independently, wherever it falls in the multiplication sequence. In the end I have to produce many such propagators, so any performance enhancement is welcome.
What is the fastest way to implement such a matrix multiplication in Python?
In particular:
How should I structure it? Are there fast axes, or preferable dimensions for the rows/columns of the matrices?
Assuming memory is not a problem, is it better to allocate and build all matrices before multiplying, or to generate each one per time step? To store each matrix in a dedicated variable before multiplying, or to generate it when needed and multiply directly?
Do function-call overheads accumulate noticeably when generating the matrices?
Since I know how to build each matrix, should the work be parallelized? For example, create batch sequences from the start and from the end of the overall sequence, multiply them in parallel, and at the end multiply the results in the proper order?
Is it preferable to use a module other than numpy? Could Numba be useful, or some other efficient way to compile to C in place, or optimized external libraries? (Please give a reference if so; I don't have experience with that.)
Thanks in advance.

I don't think the matrix multiplication itself will take much time, so I would do it in a single loop. The assembling of the matrices is probably the costly part here.
If you had bigger matrices, a map-reduce approach could be helpful: split the sequence of matrices into contiguous chunks, multiply within each chunk, and then multiply the resulting chunk products in the original order (the order matters, since your matrices do not commute).
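A minimal sketch of that idea, with make_matrix as a hypothetical stand-in for however you actually build each time slice (the independent chunk_product calls are the part you could hand to a process pool):
import numpy as np

def make_matrix(t, n=30):
    # Hypothetical stand-in for however you actually build the slice at time t.
    return np.eye(n) + 1e-3 * np.random.default_rng(t).standard_normal((n, n))

def chunk_product(ts):
    """Ordered product of the matrices for the time indices in ts."""
    P = make_matrix(ts[0])
    for t in ts[1:]:
        P = P @ make_matrix(t)
    return P

def ordered_map_reduce(n_steps, chunk=1000):
    # Chunks must stay contiguous and their products must be combined in the
    # original order, because the matrices do not commute. The chunk_product
    # calls are independent of each other and could run in parallel.
    chunks = [range(i, min(i + chunk, n_steps)) for i in range(0, n_steps, chunk)]
    partials = [chunk_product(c) for c in chunks]
    P = partials[0]
    for Q in partials[1:]:
        P = P @ Q
    return P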
Numpy is perfectly fine for problems like this, as it is heavily optimized (and largely implemented in C).
Just measure how much time the matrix multiplication takes versus the assembling. The result should indicate where you need to optimize.
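A quick way to measure that split, again with a hypothetical make_matrix in place of your real generator:
import time
import numpy as np

def make_matrix(t, n=30):
    # Hypothetical stand-in for your real generator.
    return np.eye(n) + 1e-3 * np.sin(t) * np.ones((n, n))

n_steps = 10_000
t0 = time.perf_counter()
mats = [make_matrix(t) for t in range(n_steps)]   # assembly
t1 = time.perf_counter()
P = np.eye(30)
for M in mats:                                    # ordered multiplication
    P = P @ M
t2 = time.perf_counter()
print(f"assembly: {t1 - t0:.3f} s, multiplication: {t2 - t1:.3f} s")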

Related

Computing top eigenvalues, operator norm of sparse matrix

I have a large sparse square non-normal matrix: 73080 rows, but only 6 nonzero entries per row (all equal to 1.0). I'd like to compute the two largest eigenvalues, as well as the operator (2-)norm, ideally with Python. The natural way for me to store this matrix is with scipy's csr_matrix, especially since I'll be multiplying it with other sparse matrices. However, I don't see a good way to compute the relevant statistics: scipy.sparse.linalg's norm method doesn't implement the 2-norm, converting to a dense matrix seems like a bad idea, and scipy.sparse.linalg.eigs seems to run extremely, maybe prohibitively, slowly, and in any event computes a lot of data that I just don't need. I suppose I could subtract off the spectral projector corresponding to the top eigenvalue, but then I'd still need to know the top eigenvalue of the new matrix, which I'd like to do with an out-of-the-box method if at all possible; in any event this wouldn't continue to work after multiplying with other large sparse matrices.
However, these kinds of computations seem to be doable: the top of page 6 of this paper seems to have data on the eigenvalues of ~10000-row matrices. If this is not feasible in Python, is there another way I should try to do this? Thanks in advance.
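For reference, scipy.sparse.linalg.eigs can be restricted to the k largest-magnitude eigenvalues, and the operator 2-norm is the largest singular value, which scipy.sparse.linalg.svds computes without densifying the matrix. A rough sketch on a random stand-in matrix of the stated size (not the asker's actual matrix; a looser tol can speed up convergence at the cost of accuracy):
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Random stand-in with roughly 6 ones per row, NOT the actual matrix.
n = 73080
A = sp.random(n, n, density=6.0 / n, format='csr', random_state=0)
A.data[:] = 1.0

# Two eigenvalues of largest magnitude (ARPACK; works for non-symmetric matrices).
top_two = spla.eigs(A, k=2, which='LM', return_eigenvectors=False, tol=1e-6)

# Operator (2-)norm = largest singular value, computed without densifying A.
two_norm = spla.svds(A, k=1, return_singular_vectors=False)[0]
print(top_two, two_norm)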

How sparse should a numpy vector be to run faster

I saw a post on Stack Overflow where someone showed that the CSR representation of a vector/matrix was slower than the usual dense format for various numpy computations. The speed seems to depend on the computation and on how sparse the vectors or matrices are.
I have lots of sparse vectors (on average 66% of the entries are 0) whose dot products I would like to take. Note that all elements in my vectors are either 0 or 1. Which representation is better for this (e.g. csr vs. a normal dense vector) in terms of computational speed? Does it depend on how sparse my vector is? If so, is there a certain sparsity (%) beyond which one is better than the other?
Any help with this issue is much appreciated! Thanks in advance!
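One way to get a concrete answer for a given sparsity level is simply to time both representations on synthetic 0/1 vectors; a sketch along those lines (the sizes and repetition counts here are arbitrary):
import timeit
import numpy as np
import scipy.sparse as sp

# Arbitrary size; adjust n and zero_frac to match your own vectors.
n, zero_frac = 10_000, 0.66
rng = np.random.default_rng(0)
a = (rng.random(n) > zero_frac).astype(np.float64)
b = (rng.random(n) > zero_frac).astype(np.float64)
a_s, b_s = sp.csr_matrix(a), sp.csr_matrix(b)

dense_t = timeit.timeit(lambda: a @ b, number=1000)
sparse_t = timeit.timeit(lambda: (a_s @ b_s.T)[0, 0], number=1000)
print(f"dense: {dense_t:.4f} s   csr: {sparse_t:.4f} s")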

Vectorized matrix multiplication in Python

Maybe it's ill-advised to do this in the first place, but I'm trying to multiply a (k,k) matrix by a (k,1) random vector, and I want to do this M times. I want to do it in one calculation, so having a (k,M) matrix and multiplying each column by my (k,k) matrix, similar to how you would multiply a scalar with a vector. Is this possible without a loop?
Not in pure Python. The numpy package is universally used for numerical computation in Python. It provides several ways of doing this kind of vectorized matrix multiplication, of which the most common is probably numpy.matmul():
https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html
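For example (shapes chosen arbitrarily here), stacking the M vectors as the columns of a (k, M) array lets a single matmul do all M products at once:
import numpy as np

k, M = 5, 1000                # arbitrary sizes for illustration
A = np.random.rand(k, k)      # the (k, k) matrix
X = np.random.rand(k, M)      # the M random vectors stacked as columns
Y = A @ X                     # column j of Y equals A @ X[:, j]; no Python loop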

Efficient way to solve matrix equation in Python

Right now I am using numpy.linalg.solve to solve my matrix equation, but the fact that I am using it on a 5000*17956 matrix makes it really time consuming. It runs really slowly and it has taken me more than an hour to solve. The running time for solving a matrix equation is probably O(n^3), but I never thought it would be that slow. Is there any way to solve it faster in Python?
My code is something like the following, to solve for a in the equation B.T * U.T = (B.T * B) a, where m is the number of test cases (in my case over 5000), B is an m*17956 data matrix, and u is 1*m.
C = 0.005 # hyperparameter term for regulization
I = np.identity(17956) # 17956*17956 identity matrix
rhs = np.dot(B.T, U.T) # (17956*m) * (m*1) = 17956*1
lhs = np.dot(B.T, B)+C*I # (17956*m) * (m*17956) = 17956*17956
a = np.linalg.solve(lhs, rhs) # B.T u = B.T B a, solve for a (17956*1)
Update (2 July 2018): The updated question asks about the impact of a regularization term and the type of data in the matrices. In general, this can make a large impact in terms of the datatypes a particular CPU is most optimized for (as a rough rule of thumb, AMD is better with vectorized integer math and Intel is better with vectorized floating point math when all other things are held equal), and the presence of a large number of zero values can allow for the use of sparse matrix libraries. In this particular case though, the changes on the main diagonal (well under 1% of all the values in consideration) will have a negligible impact in terms of runtime.
TLDR;
An hour is reasonable (a cubic regression suggests that this would take around 83 minutes on my machine -- a low-end chromebook).
The pre-processing to generate lhs and rhs account for almost none of that time.
You won't be able to solve that exact problem much faster than with numpy.linalg.solve.
If m is small as you suggest and if B is invertible, you can instead solve the equation U.T=Ba in a minute or less.
If this is part of a larger problem, this costly intermediate step might be possible to simplify away by reworking the mathematics.
Performance bottlenecks really should be addressed with profiling to figure out which step is causing the issues.
Since this comes from real-world data, you might be able to get away with fewer features (either directly or through a reduction step like PCA, NMF, or LLE), depending on the end goal.
As mentioned in another answer, if the matrix is sufficiently sparse you can get away with sparse linear algebra routines to great effect (many natural language processing data sources are like this).
Since the output is a 1D vector, I would use np.dot(U, B).T instead of np.dot(B.T, U.T). Transposes are neat that way. This avoids doing the transpose on a big matrix like B, though since you have a cubic operation as the dominant step this doesn't matter much for your problem.
Depending on whether you need the original data anymore and whether the matrices involved have any other special properties, you might be able to fiddle with the parameters in scipy.linalg.solve instead for a gain (a sketch of this follows below).
I've had mixed success replacing large matrix equations with block matrix equations falling back on numpy routines. That approach typically saves 5-20% over numpy approaches and takes 1% or so off scipy approaches on my system. I haven't fully explored the reason for the discrepancy.
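For instance, the regularized normal equations produce a symmetric positive definite left-hand side, so scipy.linalg.solve can be told as much via assume_a='pos'. A sketch with reduced, placeholder sizes (the real problem uses m = 5000 and 17956 features):
import numpy as np
from scipy.linalg import solve

# Placeholder sizes so the sketch runs quickly.
m, n, C = 500, 1500, 0.005
rng = np.random.default_rng(0)
B = rng.random((m, n))
U = rng.random(m)

lhs = B.T @ B + C * np.identity(n)    # symmetric positive definite (ridge term C > 0)
rhs = B.T @ U
a = solve(lhs, rhs, assume_a='pos')   # lets LAPACK pick a Cholesky-based routine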
Assuming your matrix is sparse, the scipy.sparse.linalg module will be useful; see the documentation for the module as a whole and for spsolve in particular.
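A minimal sketch of that sparse route, assuming B really is sparse and using placeholder sizes and density so it runs quickly:
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Placeholder sizes and density; substitute your real (sparse) B and U.
m, n, C = 2000, 3000, 0.005
rng = np.random.default_rng(0)
B = sp.random(m, n, density=0.001, format='csr', random_state=0)
U = rng.random(m)

lhs = (B.T @ B + C * sp.identity(n)).tocsc()  # n x n, kept sparse
rhs = B.T @ U                                 # dense length-n vector
a = spsolve(lhs, rhs)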

What's the most efficient way to sum up an ndarray in numpy while minimizing floating point inaccuracy?

I have a big matrix with values that vary greatly in orders of magnitude. To calculate the sum as accurately as possible, my approach would be to reshape the ndarray into a 1-dimensional array, sort it, and then add it up, starting with the smallest entries. Is there a better / more efficient way to do this?
I think that, given floating point precision problems, the best known algorithm for your task is Kahan summation. For practical purposes, Kahan summation has an error bound that is independent of the number of summands, while naive summation has an error bound that grows linearly with the number of summands.
NumPy does not use Kahan summation, and there is no easy way of implementing it without a big performance tradeoff. But it uses the next best thing, pairwise summation, where error grows, under some reasonable assumptions, like the square root of the logarithm of the number of summands.
So it is very likely that NumPy on its own is already able to provide sufficiently good precision for your problem. To validate this, I would run a few sample cases through Kahan summation (the pseudocode in the Wikipedia link above can be trivially converted to Python), take that as the golden, best-possible result, and compare it against:
Calling np.sum on your matrix as is.
Calling np.sum on your matrix after reshaping to 1D, which may give better results if your matrix is not contiguous in memory.
Calling np.sum on a sorted version of the 1D array.
For most cases these last three options should behave similarly, but the only way to know is to actually test it.
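If you want to run that comparison, here is a sketch with a plain-Python Kahan summation used as the reference and synthetic data spanning many orders of magnitude (the data generation is just an arbitrary stand-in):
import numpy as np

def kahan_sum(x):
    """Compensated (Kahan) summation over a 1-D array."""
    s = 0.0
    c = 0.0                       # running compensation for lost low-order bits
    for v in x:
        y = v - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

# Synthetic data spanning many orders of magnitude (arbitrary stand-in).
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000) * 10.0 ** rng.integers(-8, 8, size=100_000)

print(kahan_sum(x))                  # compensated reference value
print(np.sum(x))                     # np.sum on the array as is (pairwise summation)
print(np.sum(np.sort(x.ravel())))    # after flattening and sorting ascending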
