Rank of sparse matrix in Python

I have a sparse CSR matrix, sparse.csr_matrix(A), for which I would like to compute the matrix rank.
There are two options I am aware of. I could convert it to a dense NumPy matrix or array (.todense() or .toarray()) and then use np.linalg.matrix_rank(A), but that defeats the purpose of using the sparse format, since my matrices are extremely large. The other option is to compute an SVD of the matrix (sparse matrix SVD in Python) and deduce the rank from the singular values.
Are there any other options? Is there currently a standard, most efficient way to compute the rank of a sparse matrix? I am relatively new to doing linear algebra in Python, so any alternatives and suggestions with that in mind would be most helpful.

I have been using the .todense() method and NumPy's np.linalg.matrix_rank to calculate the answer.
It has given me satisfactory answers so far.
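If the dense route becomes too expensive, the SVD-based option from the question can be done without densifying: compute the leading singular values with scipy.sparse.linalg.svds and count those above a tolerance. A minimal sketch (the random matrix is a hypothetical stand-in for your data; since svds only returns k < min(A.shape) singular values, this detects rank only up to k):

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(1000, 500, density=0.01, format="csr", random_state=0)

# As many singular values as you can afford; k must be < min(A.shape).
k = 50
s = svds(A, k=k, return_singular_vectors=False)

# Count singular values above a tolerance, mirroring matrix_rank's default.
tol = max(A.shape) * np.finfo(float).eps * s.max()
rank_estimate = int((s > tol).sum())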

Related

Fastest way to compute matrix multiplied with its transpose (AA^T) in Python

What is the fastest way to multiply a matrix with its transpose (AA^T) in Python?
I don't think NumPy/SciPy take into account the symmetries involved when using e.g. np.dot or np.matmul. The resulting matrix is always symmetric, so I can imagine there is a faster solution.
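One route worth knowing: the BLAS routine syrk is designed for exactly this product and computes only one triangle of the symmetric result. A sketch using SciPy's low-level BLAS wrapper for float64 (ssyrk is the float32 variant); by default the routine fills the upper triangle and leaves the rest zero:

import numpy as np
from scipy.linalg.blas import dsyrk

A = np.random.rand(500, 300)

# Computes 1.0 * A @ A.T, writing only the upper triangle of the result.
C_upper = dsyrk(alpha=1.0, a=A)

# Mirror the strict upper triangle if the full symmetric matrix is needed.
C = C_upper + np.triu(C_upper, 1).T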

Compute Cholesky decomposition of Sparse Matrix in Python

I'm trying to implement Reinsch's algorithm (p. 4).
Since the working matrices are sparse, I'm using the scipy.sparse module, but as you can see, Reinsch's algorithm needs the Cholesky decomposition of a sparse matrix (let's call it my_matrix) in order to solve a certain system, and I couldn't find anything related to this.
Of course, within the same algorithm I can solve the sparse system using, for instance, scipy.sparse.linalg.spsolve, and then at the end of the algorithm use something like:

R = numpy.linalg.cholesky(my_matrix.A)

But in my application my_matrix is usually about 800×800, so this last step is very inefficient.
So, my question is: where can I find such a decomposition?
Thanks in advance.
For a fast decomposition, you can try CHOLMOD via scikit-sparse:

from scikits.sparse.cholmod import cholesky
factor = cholesky(A.tocsc())  # CHOLMOD expects a sparse (CSC) matrix
x = factor(b)

A is your sparse, symmetric, positive-definite matrix. Note that CHOLMOD factors the sparse matrix directly, so there is no need to convert it to a dense NumPy array; and since your matrix is not huge (about 800×800), even a dense fallback such as numpy.linalg.cholesky would not be a problem.
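If installing scikit-sparse is not an option, plain SciPy can factor the matrix once and reuse the factorization for every solve via a sparse LU. This is not a Cholesky factorization, but it plays the same role in the solve step. A sketch, using a hypothetical SPD test matrix in place of my_matrix:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

# Hypothetical SPD matrix: a shifted 1-D Laplacian, in CSC form as splu requires.
n = 800
my_matrix = sparse.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = splu(my_matrix)   # factor once
b = np.ones(n)
x = lu.solve(b)        # then solve cheaply for each right-hand side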

Performing Decomposition on Sparse Matrices in Python

I'm trying to decompose signals into components (matrix factorization) in a large sparse matrix in Python using the sklearn library.
I made use of scipy's scipy.sparse.csc_matrix to construct my matrix of data. However, I'm unable to perform any analysis such as factor analysis or independent component analysis. The only thing I'm able to do is use sklearn's TruncatedSVD or scipy's scipy.sparse.linalg.svds to perform PCA.
Does anyone know of any workarounds for doing ICA or FA on a sparse matrix in Python? Any help would be much appreciated! Thanks.
Given

M = UΣV^T

the drawback with SVD is that the matrices U and V^T are dense. It doesn't really matter that the input matrix is sparse; U and V^T will be dense anyway. The computational complexity of SVD is O(n^2·m) or O(m^2·n), where n is the number of rows and m the number of columns of the input matrix M, depending on which dimension is larger.
It is worth mentioning that SVD gives you the optimal low-rank approximation; if you can live with a somewhat larger loss, measured in the Frobenius norm, you might want to consider the CUR decomposition, which scales to larger datasets at O(n·m) cost:

M ≈ CUR

where C and R are now sparse matrices, since they consist of actual columns and rows of M.
If you want to look at a Python implementation, take a look at pymf. But be a bit careful with that exact implementation, since at this point in time there seems to be an open issue with it.
Even if the input matrix is sparse, the output will not be sparse. If the system cannot hold a dense matrix, it will not be able to hold the result either.
It is usually best practice to build the matrix in coo_matrix form and then convert it with .tocsc() to manipulate it.
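A minimal sketch of that workflow with hypothetical toy data: assemble the entries in COO form, convert to CSC, and run sklearn's TruncatedSVD, which accepts sparse input directly (the reduced output is dense, as discussed above):

import numpy as np
from scipy import sparse
from sklearn.decomposition import TruncatedSVD

# Assemble coordinates and values, build in COO form, then convert to CSC.
rows = np.array([0, 1, 2, 2])
cols = np.array([1, 0, 2, 3])
vals = np.array([1.0, 2.0, 3.0, 4.0])
M = sparse.coo_matrix((vals, (rows, cols)), shape=(3, 4)).tocsc()

# TruncatedSVD works on the sparse matrix directly; the output is dense.
svd = TruncatedSVD(n_components=2, random_state=0)
reduced = svd.fit_transform(M)  # dense ndarray of shape (3, 2)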

Scipy or pandas for sparse matrix computations?

I have to compute massive numbers of similarity computations between vectors in a sparse matrix. What is currently the better tool for this task, scipy-sparse or pandas?
After some research I found that both pandas and SciPy have structures to represent sparse matrices efficiently in memory. But neither has out-of-the-box support for computing similarities between vectors, such as cosine, adjusted cosine, or Euclidean. SciPy supports these only for dense matrices; for sparse matrices, SciPy supports dot products and other basic linear-algebra operations.
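One workaround: cosine similarity is just a dot product of L2-normalized rows, so it can be computed with sparse operations alone (scikit-learn's sklearn.metrics.pairwise.cosine_similarity also accepts sparse input). A sketch on hypothetical data:

from scipy import sparse
from sklearn.preprocessing import normalize

# Hypothetical sparse data: each row is a vector to compare.
X = sparse.random(1000, 5000, density=0.001, format="csr", random_state=0)

# Cosine similarity = dot products of L2-normalized rows; the computation
# stays sparse through the final product.
Xn = normalize(X, norm="l2", axis=1)
S = Xn @ Xn.T  # sparse matrix of pairwise cosine similarities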

PCA on large Sparse matrix using Correlation matrix

I have a large (500k by 500k) sparse matrix. I would like to get its principal components (in fact, even computing just the largest PC would be fine). Randomized PCA works great, except that it essentially finds the eigenvectors of the covariance matrix instead of the correlation matrix. Any ideas of a package that will find PCA using the correlation matrix of a large, sparse matrix? Preferably in Python, though MATLAB and R work too.
(For reference, a similar question was asked here, but the methods there refer to the covariance matrix.)
Are they not the same thing? As far as I understand it, the correlation matrix is just the covariance matrix with each entry normalised by the product of the two variables' standard deviations. And, if I recall correctly, isn't there a scaling ambiguity in PCA anyway?
Have you tried the irlba package in R? "The IRLBA package is the R language implementation of the method. With it, you can compute partial SVDs and principal component analyses of very large scale data. The package works well with sparse matrices and with other matrix classes like those provided by the Bigmemory package." You can check here for details.
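In Python, one way to do this without ever densifying is to apply the standardization implicitly: wrap the centered-and-scaled data in a scipy LinearOperator and hand it to eigsh, so the dense standardized matrix is never formed. A sketch with hypothetical shapes (rows are observations, columns are variables; all names are placeholders):

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import LinearOperator, eigsh

X = sparse.random(100000, 2000, density=0.001, format="csr", random_state=0)
n, m = X.shape

# Column means and standard deviations, computed without densifying X.
mu = np.asarray(X.mean(axis=0)).ravel()
var = np.asarray(X.power(2).mean(axis=0)).ravel() - mu**2
sd = np.sqrt(np.maximum(var, 1e-12))  # guard against zero-variance columns

# The correlation matrix is C = Z^T Z / n with Z = (X - mu) / sd.
# Apply C to a vector without forming Z explicitly.
def corr_matvec(v):
    w = v / sd
    y = X @ w - mu @ w                 # y = Z @ v; mu @ w broadcasts as a scalar
    return (X.T @ y - mu * y.sum()) / sd / n

C = LinearOperator((m, m), matvec=corr_matvec, dtype=np.float64)

# The largest eigenpair of C gives the leading principal component.
vals, vecs = eigsh(C, k=1, which="LA")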
