I have derived d sparse matrices m[d] of size (n, n) each, and I would like to stack them along a new axis in order to build a sparse matrix of size (n, n, d).
I tried building this stacked matrix with np.stack([m[i] for i in range(d)], axis=-1), but this yields a numpy.ndarray of size d rather than a sparse matrix (and in that form I can't use scipy.sparse.save_npz, which is what I ultimately want to do). scipy.sparse only comes with vstack and hstack, neither of which suits my need here.
Is there a way to build such a matrix?
Is there a way to build a sparse matrix with more than two axis at all?
Notes:
All d sparse matrices have the same number of stored elements, and these elements have the same coordinates within the matrix, so stacking them should be straightforward.
To give some context, I encountered this problem while trying to compute the gradient of a function f defined on a mesh surface. The function associates each vertex i of the mesh with a vector f(i) of size d. All edges (i, j) can be stored in a sparse matrix of size (n, n). Finally, each matrix m[k] contains the gradient along the k-th dimension for each edge (i, j) of the mesh.
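One workaround (a sketch of my own, not part of the original post): since scipy.sparse has no 3-D format but all the matrices share one sparsity pattern, you can store the (row, col) indices once, stack only the data arrays into an (nnz, d) block, and save everything with np.savez instead of save_npz. The names base, mats, and the file path below are illustrative.

```python
import os
import tempfile

import numpy as np
from scipy import sparse

n, d = 5, 3
# toy matrices sharing one sparsity pattern (stand-ins for the real m[i])
base = sparse.random(n, n, density=0.2, format='coo', random_state=0)
mats = [sparse.coo_matrix((base.data * (i + 1), (base.row, base.col)), shape=(n, n))
        for i in range(d)]

# the (n, n, d) stack is fully described by one shared (row, col) index
# pair plus an (nnz, d) block of values
data = np.stack([m.data for m in mats], axis=-1)   # shape (nnz, d)

path = os.path.join(tempfile.mkdtemp(), 'stacked.npz')
np.savez(path, row=base.row, col=base.col, data=data, shape=(n, n, d))

# reload and rebuild any slice k as an ordinary 2-D sparse matrix
with np.load(path) as f:
    m1 = sparse.coo_matrix((f['data'][:, 1], (f['row'], f['col'])), shape=(n, n))
```

The pydata `sparse` package (separate from scipy.sparse) also provides genuinely n-dimensional COO arrays, which may be worth a look if you need to operate on the stack rather than just store it.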
I have a Python program that uses iterative methods to solve linear problems where the input data matrix is sparse.
My question is: which of the two compressed scipy.sparse formats, CSC or CSR, performs the matrix-vector product A @ x faster, and why? In other words, if I want to multiply a matrix from the left onto a vector, is it more efficient to store it in CSC or CSR format?
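A quick sanity check (an illustrative sketch; the size and density are arbitrary) that both formats give the same product. CSR is generally the better fit for A @ x: it walks the rows of A sequentially and writes each output entry once, whereas CSC traverses columns and must scatter partial sums into the result vector.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
A_csr = sparse.random(2000, 2000, density=0.01, format='csr', random_state=0)
A_csc = A_csr.tocsc()
x = rng.standard_normal(2000)

y_csr = A_csr @ x   # row-major traversal: one sequential pass over the data
y_csc = A_csc @ x   # column-major traversal: scattered accumulation into y
```

Both results are identical up to floating-point rounding; only the access pattern (and hence the speed) differs.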
This question is the same as this one, but for a sparse matrix (scipy.sparse). The solution given to the linked question used indexing schemes that are incompatible with sparse matrices.
For context I am constructing a Jacobian for a large discretized PDE, so the B matrix in this case contains various relevant partial terms while A will be the complete Jacobian I need to invert for a Newton's method approximation. On a large grid A will be far too large to fit in memory, so I want to use sparse matrices.
I would like to construct an array with the following structure:
A[i,j,i,j] = B[i,j], with all other entries zero: A[i,j,l,k] = 0 for (i,j) != (l,k).
That is, given the matrix B, how can I create the matrix A, preferably in a vectorized manner?
Explicitly, let B = [[1,2],[3,4]]
Then, with 0-based Python indexing:
A[0,0,:,:] = [[1,0],[0,0]]
A[0,1,:,:] = [[0,2],[0,0]]
A[1,0,:,:] = [[0,0],[3,0]]
A[1,1,:,:] = [[0,0],[0,4]]
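A sketch of both a dense vectorized construction and a sparse equivalent (variable names are mine): flattening the (i, j) and (l, k) index pairs shows that A is just a diagonal matrix with B.ravel() on the diagonal, which scipy.sparse can represent directly, keeping memory proportional to the number of entries of B.

```python
import numpy as np
from scipy import sparse

B = np.array([[1.0, 2.0], [3.0, 4.0]])
n, m = B.shape

# dense, vectorized: set A[i, j, i, j] = B[i, j] via index arrays
A = np.zeros((n, m, n, m))
i, j = np.indices(B.shape)
A[i, j, i, j] = B

# sparse equivalent: after flattening both index pairs, A is a plain
# diagonal matrix of shape (n*m, n*m)
A_sp = sparse.diags(B.ravel())
```

Keeping A in the flattened (n*m, n*m) diagonal form is also the shape a sparse linear solver expects for the Newton step, so the 4-D view may never need to be materialized at all.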
My Python code generates the upper triangle of a matrix in coordinate (COO) format (row, col, value), storing only the non-zero elements to reduce memory usage. However, the matrix is sparse and symmetric, and I wish to obtain its eigenvalues and eigenvectors using eigsh from scipy.sparse.linalg. How can I do this?
I would prefer to convert the COO matrix directly to CSR and pass it to eigsh. If that is not possible, I would like to make the matrix symmetric first and then convert it to CSR format.
COO and CSR matrix wiki:
https://en.wikipedia.org/wiki/Sparse_matrix
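One way to do this (a sketch with made-up toy data standing in for the generated triplets): build the upper-triangular matrix from the COO arrays, symmetrize it with M = U + U.T - diag(U) so the diagonal isn't counted twice, and pass the result to eigsh.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

# toy upper-triangle COO triplets (row, col, value), diagonal included
row = np.array([0, 0, 1, 2])
col = np.array([0, 1, 1, 2])
val = np.array([4.0, 1.0, 3.0, 2.0])

U = sparse.coo_matrix((val, (row, col)), shape=(3, 3)).tocsr()
# mirror the strict upper triangle; subtract the diagonal so it isn't doubled
M = U + U.T - sparse.diags(U.diagonal())

vals, vecs = eigsh(M, k=2, which='LM')
```

eigsh never checks symmetry (it simply assumes it), so passing the unsymmetrized U would silently give wrong results; the explicit symmetrization step is what makes this safe.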
I'm using numpy and scipy. I have a large sparse matrix and I want to find the largest eigenvalue of the sparse matrix. How can I do that?
I use scipy.sparse.linalg.eigsh, which handles symmetric (or Hermitian) sparse matrices. For the largest-magnitude eigenvalues, pass which='LM':
eigvals, eigvecs = eigsh(A, k=10, which='LM')
Beware that adding a shift such as sigma=1. switches ARPACK into shift-invert mode, which instead returns the eigenvalues closest to sigma. Either way, you should definitely read the documentation.
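As a concrete sketch for the largest-eigenvalue case (toy matrix of my own with a known spectrum): with no sigma, which='LA' asks ARPACK for the algebraically largest eigenvalue, while which='LM' would select by absolute value, so the two can differ when large negative eigenvalues are present.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

# toy symmetric matrix with known spectrum {1, 2, 5, -3}
A = sparse.diags([1.0, 2.0, 5.0, -3.0]).tocsr()

# no sigma: plain ARPACK iteration; 'LA' = largest algebraic eigenvalue
(largest,), _ = eigsh(A, k=1, which='LA')
```

Here both 'LA' and 'LM' happen to pick out 5, but for a spectrum like {1, 2, -7} they would disagree, so choose the criterion that matches what "largest" means for your problem.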