I'm using numpy and scipy. I have a large sparse matrix and I want to find its largest eigenvalue. How can I do that?
I use scipy.sparse.linalg.eigsh for symmetric sparse matrices, passing which='LM':
from scipy.sparse.linalg import eigsh
eigvals, eigvecs = eigsh(A, k=10, which='LM', sigma=1.)
but you should definitely read the documentation.
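As a self-contained sketch (the random matrix below is only a stand-in for your own A): note that when sigma is supplied, eigsh runs in shift-invert mode and which='LM' then selects the eigenvalues nearest sigma, so for the single largest-magnitude eigenvalue you can drop sigma and use k=1:
from scipy import sparse
from scipy.sparse.linalg import eigsh

# Random symmetric sparse matrix as a stand-in for your data.
M = sparse.random(2000, 2000, density=0.001, format='csr', random_state=42)
A = (M + M.T) * 0.5

# k=1, which='LM' and no sigma: the single largest-magnitude eigenvalue.
vals, vecs = eigsh(A, k=1, which='LM')
print(vals[0])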
Suppose I have the following:
A complex-valued sparse matrix Q (and its conjugate transpose is Q^H)
A complex-valued diagonal matrix D
A real-valued sparse vector v
and I wish to compute as efficiently as possible, using scipy sparse matrices, the product
Q @ (D @ (Q^H @ v)),
where each @ is a matrix-vector product and the bracketing is chosen to avoid any matrix-matrix multiplication. I am new to using sparse matrices, and the available types confuse me. How would I best compute this in scipy? In particular, I'm wondering:
Is it inefficient to mix matrix types? (e.g. would it be inefficient if Q were a CSC matrix and v were a CSR matrix - is there any general requirement for them all to be the same?)
For each of Q, D, and v, which scipy sparse representation would be best for efficient matrix-vector multiplication?
If I were to port the code to run on my GPU using cupy, are there any additional factors I'd need to consider for efficiency?
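One way this could be set up (a sketch under a few assumptions: Q kept in CSR, the diagonal of D held as a plain 1-D array rather than a sparse matrix, and v as a dense NumPy vector, since a dense right-hand side usually makes the matrix-vector products fastest; the sizes and random data below are hypothetical):
import numpy as np
from scipy import sparse

n = 5000
rng = np.random.default_rng(0)

# Q: complex sparse matrix in CSR format (fast row-oriented mat-vec).
Q = (sparse.random(n, n, density=0.001, format='csr', random_state=1)
     + 1j * sparse.random(n, n, density=0.001, format='csr', random_state=2)).tocsr()
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # diagonal entries of D
v = rng.standard_normal(n)                                  # dense real vector

# Q^H @ v: .conj().T gives the conjugate transpose; the result is a dense vector.
x = Q.conj().T @ v
# D @ x: multiplying by a diagonal matrix is just elementwise scaling.
x = d * x
# Final product.
result = Q @ x
Mixing CSR and CSC is allowed (scipy converts where needed), but each conversion costs time, so it pays to build Q once in the format you intend to keep. For the GPU question, cupyx.scipy.sparse provides a csr_matrix with the same @-based products, so the same ordering should carry over; the main extra consideration is keeping the arrays on the device to avoid host-device transfers between steps.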
My Python code generates the upper triangle of a matrix in coordinate (COO) format (row, col, mat_element) and stores only the non-zero elements to decrease memory usage. However, it is a sparse symmetric matrix and I wish to obtain its eigenvalues and eigenvectors using 'eigsh' from 'scipy.sparse.linalg'. How can I do this?
I would prefer to convert the COO matrix to CSR directly and pass it as input to 'eigsh'. But if that is not possible, I would like to make the matrix symmetric first and then convert it to CSR format.
COO and CSR matrix wiki:
https://en.wikipedia.org/wiki/Sparse_matrix
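A sketch of one way to do it (the random upper-triangular data below is only a stand-in for the (row, col, mat_element) triplets your code produces): build the COO matrix, symmetrize it, convert to CSR, and call eigsh. eigsh will also accept the COO matrix directly, but CSR gives faster matrix-vector products:
from scipy import sparse
from scipy.sparse.linalg import eigsh

# Stand-in for your generated (row, col, value) triplets of the upper triangle.
n = 500
upper = sparse.triu(sparse.random(n, n, density=0.01, random_state=0)).tocoo()
row, col, data = upper.row, upper.col, upper.data

# Upper triangle as a COO matrix, converted straight to CSR.
A_upper = sparse.coo_matrix((data, (row, col)), shape=(n, n)).tocsr()

# Symmetrize: add the transpose and subtract the diagonal counted twice.
A = A_upper + A_upper.T - sparse.diags(A_upper.diagonal())

eigenvalues, eigenvectors = eigsh(A, k=6, which='LM')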
I am trying to find 100 eigenvalues and eigenvectors of a huge sparse matrix of size 409600x409600. I am using scipy.sparse.linalg.eigs for this and it's taking ages to find the result, whereas eigs in MATLAB solves it within 10 minutes. Any suggestions on how to speed it up?
Python:
import scipy.sparse.linalg
eigenvalues, eigenvectors = scipy.sparse.linalg.eigs(Laplacian, k=100, which='SM')
MATLAB:
eigCnt = 100;
[eigenvectors, eigenvalues] = eigs(Laplacian, eigCnt, 'SM');
where Laplacian is a sparse matrix of size 409600x409600 with 10418204 nonzero entries.
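Two things commonly suggested (a sketch, assuming your Laplacian is symmetric, as a graph Laplacian is): use eigsh rather than eigs, and use shift-invert mode instead of which='SM', since ARPACK converges very slowly for the smallest eigenvalues but quickly for eigenvalues near a shift. Note that shift-invert requires a sparse factorization of the 409600x409600 matrix, which costs time and memory up front, and that a Laplacian is typically singular, so the shift is moved slightly away from zero here:
from scipy.sparse.linalg import eigsh

# With sigma set, which='LM' in shift-invert mode returns the eigenvalues
# nearest sigma, i.e. the smallest ones, and converges far faster than
# which='SM'. The small negative shift keeps the factorized matrix
# (Laplacian - sigma*I) nonsingular.
eigenvalues, eigenvectors = eigsh(Laplacian, k=100, sigma=-1e-5, which='LM')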
I have noticed that scipy.sparse.linalg.splu() does not let me decompose a sparse matrix A into L and U matrices that I can call separately. The command "merely" allows me to decompose the matrix and reconstruct it later using the permutation matrices. However, for my code I need to decompose a sparse matrix A into sparse L and U factors and then be able to use those L and U matrices separately (without permutation matrices etc.). This does not work with the scipy.sparse.linalg.splu() command. I could use scipy.linalg.lu(), but I cannot apply that to a matrix A in sparse format. Are there any other methods for obtaining the correct L and U decomposition matrices from a sparse matrix A? Thanks in advance.
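For what it's worth, the SuperLU object returned by splu() does expose the factors: lu.L and lu.U are sparse (CSC) matrices you can use on their own, and lu.perm_r / lu.perm_c are the row and column permutations, with Pr @ A @ Pc = L @ U. A small sketch on a toy matrix:
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

A = sparse.csc_matrix(np.array([[4.0, 1.0, 0.0],
                                [1.0, 3.0, 2.0],
                                [0.0, 2.0, 5.0]]))
lu = splu(A)

L = lu.L   # sparse lower-triangular factor (unit diagonal)
U = lu.U   # sparse upper-triangular factor

# Build the permutation matrices and verify Pr @ A @ Pc = L @ U.
n = A.shape[0]
Pr = sparse.csc_matrix((np.ones(n), (lu.perm_r, np.arange(n))), shape=(n, n))
Pc = sparse.csc_matrix((np.ones(n), (np.arange(n), lu.perm_c)), shape=(n, n))
print(np.allclose((Pr @ A @ Pc).toarray(), (L @ U).toarray()))
If you really need A ≈ L @ U with no permutations at all, you can often suppress them for diagonally dominant or symmetric positive definite matrices by calling splu(A, permc_spec='NATURAL', diag_pivot_thresh=0), but that is only safe when no pivoting is actually required.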
I have a large sparse matrix, implemented as a lil sparse matrix from SciPy. I just want a statistic for how sparse the matrix is once it's populated. Is there a method to find this out?
m.nnz gives the number of stored non-zero elements of the matrix m. The total number of elements is m.shape[0] * m.shape[1] (note that for a scipy sparse matrix, m.size also reports the number of stored values, not the full element count), so the density is m.nnz divided by that product.
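For example, a short sketch with made-up dimensions:
from scipy.sparse import lil_matrix

m = lil_matrix((1000, 1000))
m[0, 0] = 1.0
m[10, 20] = 2.0

total = m.shape[0] * m.shape[1]
print(f"{m.nnz} stored nonzeros out of {total} elements -> density {m.nnz / total:.2e}")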