Suppose I have the following:
A complex-valued sparse matrix Q (and its conjugate transpose is Q^H)
A complex-valued diagonal matrix D
A real-valued sparse vector v
and I wish to compute as efficiently as possible, using scipy sparse matrices, the product
Q @ (D @ (Q^H @ v)),
where the @s are matrix-vector multiplications and the bracketed order is chosen to avoid matrix-matrix products. I am new to sparse matrices, and the available types confuse me. How would I best compute this in scipy? In particular, I'm wondering:
Is it inefficient to mix matrix types? (e.g. would it be inefficient if Q were a CSC matrix and v were a CSR matrix - is there any general requirement for them all to be the same?)
For each of Q, D, and v, which scipy sparse representation would be best to do efficient matrix-vector multiplication?
If I were to port the code to run on my GPU using cupy, are there any additional factors I'd need to consider for efficiency?
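For concreteness, here is a minimal sketch of how the triple product can be assembled; the sizes, densities, and the choice of CSR for Q are illustrative assumptions rather than a definitive recommendation:

import numpy as np
import scipy.sparse as sp

# Illustrative sizes and data; Q, D and v stand in for the operators in the question.
n = 1000
rng = np.random.default_rng(0)
Q = sp.random(n, n, density=0.01, format="csr").astype(np.complex128)
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # diagonal of D as a 1-D array
D = sp.diags(d)                                             # DIA format suits a diagonal matrix
v = np.zeros(n)
v[rng.choice(n, size=10, replace=False)] = 1.0              # v kept as a dense 1-D array

# Bracketed order: three matrix-vector products, no matrix-matrix product.
result = Q @ (D @ (Q.conj().T @ v))

One thing worth noting: the intermediate and final products are typically dense vectors, so keeping v as an ordinary dense array (rather than a sparse type) is often the simpler choice.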
Related
The bottleneck of some code I have is:
for _ in range(n):
    W = np.dot(A, W)
where n can vary, A is a fixed size MxM matrix, W is Mx1.
Is there a good way to optimize this?
Numpy Solution
Since np.dot is just matrix multiplication for your shapes, what you want is A^n · W, with ^ denoting repeated matrix multiplication ("matrix_power") and · ordinary matrix multiplication. So you can rewrite your code as
np.linalg.matrix_power(A, n) @ W
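As a quick sanity check on made-up data (sizes here are arbitrary), the rewritten form matches the loop:

import numpy as np

M, n = 4, 6
rng = np.random.default_rng(0)
A = rng.standard_normal((M, M))
W = rng.standard_normal((M, 1))

W_loop = W.copy()
for _ in range(n):
    W_loop = np.dot(A, W_loop)

W_pow = np.linalg.matrix_power(A, n) @ W
assert np.allclose(W_loop, W_pow)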
Linear Algebra Solution
You can do even better with linear algebra. Assume for the moment that W is an eigenvector of A, i.e. A·W = a·W with a just a number; then it follows that A^n·W = a^n·W. Now you might object that W is usually not an eigenvector. But since matrix multiplication is linear, the same argument works whenever W can be written as a linear combination of eigenvectors, and there is a generalisation of the idea for the case where it cannot. If you want to read more about this, look up diagonalization and the Jordan normal form.
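A minimal sketch of that idea, assuming A is diagonalizable (all names and sizes here are illustrative): expand W in the eigenbasis, raise each eigenvalue to the n-th power, and map back.

import numpy as np

M, n = 4, 10
rng = np.random.default_rng(1)
A = rng.standard_normal((M, M))
W = rng.standard_normal((M, 1))

vals, V = np.linalg.eig(A)        # A = V @ diag(vals) @ inv(V)
c = np.linalg.solve(V, W)         # coefficients of W in the eigenbasis
W_n = (V * vals**n) @ c           # A^n @ W = V @ diag(vals**n) @ c

assert np.allclose(W_n, np.linalg.matrix_power(A, n) @ W)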
This question is the same as this one, but for a sparse matrix (scipy.sparse). The solution given to the linked question uses indexing schemes that are incompatible with sparse matrices.
For context I am constructing a Jacobian for a large discretized PDE, so the B matrix in this case contains various relevant partial terms while A will be the complete Jacobian I need to invert for a Newton's method approximation. On a large grid A will be far too large to fit in memory, so I want to use sparse matrices.
I would like to construct an array with the following structure:
A[i,j,i,j] = B[i,j], with all other entries zero: A[i,j,l,k] = 0 for (i,j) ≠ (l,k).
I.e., given that B is already constructed, how can I create the matrix A, preferably in a vectorized manner?
Explicitly, let B = [[1,2],[3,4]]
Then:
A[1,1,:,:]=[[1,0],[0,0]]
A[1,2,:,:]=[[0,2],[0,0]]
A[2,1,:,:]=[[0,0],[3,0]]
A[2,2,:,:]=[[0,0],[0,4]]
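One possible reading of this structure (a sketch, not necessarily the construction intended): if the index pair (i, j) is flattened into a single index, A is just a diagonal sparse matrix with the entries of B on its diagonal, which is also the 2-D form a sparse linear solver would need for the Newton step.

import numpy as np
import scipy.sparse as sp

B = np.array([[1, 2],
              [3, 4]])

# Flattening (i, j) -> i * ncols + j turns A[i, j, l, k] = B[i, j] * delta_il * delta_jk
# into an ordinary diagonal matrix with B's entries on the diagonal.
A_flat = sp.diags(B.ravel())                    # sparse 4x4 diagonal matrix

# Sanity check on the tiny example (0-based indices here, unlike the 1-based example above).
A = A_flat.toarray().reshape(B.shape + B.shape)
print(A[0, 1])                                  # [[0 2] [0 0]]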
I have noticed that scipy.sparse.linalg.splu() does not allow me to decompose a sparse matrix A into the correct L and U matrices that I can call separately. The command "merely" lets me decompose the matrix and reconstruct it later using the permutation matrices. However, for my code I need to decompose a sparse matrix A into sparse L and U factors and then be able to call them separately (without permutation matrices etc.), and this does not seem possible with scipy.sparse.linalg.splu(). I could use scipy.linalg.lu(), but that cannot be applied to a matrix A in sparse format. Are there any other methods for obtaining the correct L and U decomposition matrices from a sparse matrix A? Thanks in advance.
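For what it's worth, the SuperLU object returned by splu() does expose the factors: .L and .U are sparse matrices that can be used separately, and .perm_r / .perm_c are the row and column permutations. The sketch below, modelled on the example in the SciPy documentation, extracts them and checks the documented relation Pr @ A @ Pc = L @ U.

import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small illustrative matrix, not from the question.
A = csc_matrix([[1, 2, 0, 4],
                [1, 0, 0, 1],
                [1, 0, 2, 1],
                [2, 2, 1, 0.]])

lu = splu(A)
L, U = lu.L, lu.U                  # sparse triangular factors, usable on their own

# Permutation matrices rebuilt from the index arrays.
n = A.shape[0]
Pr = csc_matrix((np.ones(n), (lu.perm_r, np.arange(n))))
Pc = csc_matrix((np.ones(n), (np.arange(n), lu.perm_c)))

# The factorization satisfies Pr @ A @ Pc = L @ U, i.e. A = Pr.T @ L @ U @ Pc.T
assert np.allclose((Pr.T @ (L @ U) @ Pc.T).toarray(), A.toarray())

If permutation-free factors are genuinely required, splu() also accepts permc_spec="NATURAL" and diag_pivot_thresh=0 to discourage reordering and pivoting, but whether that is numerically safe depends on the matrix.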
I'm using numpy and scipy. I have a large sparse matrix and I want to find the largest eigenvalue of the sparse matrix. How can I do that?
I use scipy.sparse.linalg.eigsh for symmetric sparse matrices passing which="LM":
eigvals, eigvecs = eigsh(A, k=10, which='LM', sigma=1.)
but you should definitely read the documentation.
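For the plain largest-eigenvalue case of a symmetric (or Hermitian) sparse matrix, with no shift-invert via sigma, a minimal sketch might look like the following; for non-symmetric matrices, scipy.sparse.linalg.eigs is the analogous routine.

import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Symmetric sparse test matrix (illustrative only).
X = sp.random(1000, 1000, density=0.001, format="csr")
A = X + X.T

# k=1, which='LA' returns the largest (algebraic) eigenvalue;
# which='LM' returns the eigenvalue of largest magnitude.
vals, vecs = eigsh(A, k=1, which='LA')
print(vals[0])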
I have a matrix B that is square and dense, and a matrix A that is rectangular and sparse.
Is there a way to efficiently compute the product B^-1 * A?
So far, I use (in numpy)
tmp = np.linalg.inv(B)
return tmp * A
which, I believe, makes use of A's sparsity. I was thinking about using the sparse method
scipy.sparse.linalg.spsolve, but this requires B, and not A, to be sparse.
Is there another way to speed things up?
Since the matrix to be inverted is dense, spsolve is not the tool you want. In addition, it is bad numerical practice to calculate the inverse of a matrix and multiply it by another - you are much better off using LU decomposition, which is supported by scipy.
Another point is that unless you are using the matrix class (I think the ndarray class is better, but that is largely a matter of taste), you need to use dot instead of the multiplication operator. And if you want to efficiently multiply a sparse matrix by a dense matrix, you need to use the dot method of the sparse matrix. Unfortunately this only works if the first matrix is sparse, so you need the trick Anycorn suggested of taking transposes to swap the order of operations.
Here is a lazy implementation which doesn't use the LU decomposition, but which should otherwise be efficient:
import scipy.linalg
B_inv = scipy.linalg.inv(B)
# transpose trick: C = B^-1 A computed as (A^T (B^-1)^T)^T, so the sparse matrix comes first in dot
C = (A.transpose().dot(B_inv.transpose())).transpose()
Doing it properly with the LU decomposition involves finding a way to efficiently multiply a triangular matrix by a sparse matrix, which currently eludes me.
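One way to sidestep that, assuming it is acceptable to densify the right-hand side (the product B^-1 * A is dense in general anyway), is to factor B once with scipy.linalg.lu_factor and then solve against all columns of A at once; this is a sketch under that assumption, not a drop-in replacement.

import numpy as np
import scipy.sparse as sp
from scipy.linalg import lu_factor, lu_solve

# Illustrative shapes: B dense and square, A sparse and rectangular.
rng = np.random.default_rng(0)
B = rng.standard_normal((500, 500))
A = sp.random(500, 2000, density=0.01, format="csc")

lu, piv = lu_factor(B)                  # factor B once
C = lu_solve((lu, piv), A.toarray())    # solves B @ C = A, i.e. C = B^{-1} @ A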