Sparse circulant construction in Python

scipy.linalg.circulant provides a nice way of turning a 1-D numpy array x into a circulant matrix C. Is there any stock way of doing the same that preserves a sparse representation if x is a scipy.sparse.csr_matrix?
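As far as I know there is no stock sparse counterpart in scipy, but one is easy to sketch by hand: each nonzero x[j] contributes a single wrapped diagonal, so a sparse x with k nonzeros yields a circulant matrix with only n*k stored entries. The `sparse_circulant` helper below is hypothetical (not part of scipy); it follows scipy.linalg.circulant's convention C[i, j] = x[(i - j) mod n].

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import circulant

def sparse_circulant(x):
    """Build a sparse circulant matrix from a 1 x n sparse input x.

    Each nonzero x[0, j] contributes one wrapped diagonal, so the result
    has n * nnz(x) stored entries.  Hypothetical helper, not scipy API.
    """
    x = sp.coo_matrix(x)
    n = x.shape[1]
    if x.nnz == 0:
        return sp.csr_matrix((n, n))
    rows, cols, data = [], [], []
    i = np.arange(n)
    for j, v in zip(x.col, x.data):
        # scipy.linalg.circulant places x[j] at (i, (i - j) mod n)
        rows.append(i)
        cols.append((i - j) % n)
        data.append(np.full(n, v))
    return sp.csr_matrix(
        (np.concatenate(data), (np.concatenate(rows), np.concatenate(cols))),
        shape=(n, n),
    )

# sanity check against the dense construction
x = np.zeros(6)
x[1] = 2.0
x[4] = -1.0
C_dense = circulant(x)
C_sparse = sparse_circulant(sp.csr_matrix(x))
assert np.allclose(C_sparse.toarray(), C_dense)
```

For a length-n vector with k nonzeros this stores n*k values instead of n², so it only pays off when x is genuinely sparse.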

Related

How do I do this calculation most efficiently with scipy sparse matrices?

Suppose I have the following:
A complex-valued sparse matrix Q (and its conjugate transpose is Q^H)
A complex-valued diagonal matrix D
A real-valued sparse vector v
and I wish to compute as efficiently as possible, using scipy sparse matrices, the product
Q @ (D @ (Q^H @ v)),
where @ denotes matrix-vector multiplication and the bracketing is chosen to avoid matrix-matrix multiplication. I am new to sparse matrices, and the available types confuse me. How would I best compute this in scipy? In particular, I'm wondering:
Is it inefficient to mix matrix types? (e.g. would it be inefficient if Q were a CSC matrix and v were a CSR matrix - is there any general requirement for them all to be the same?)
For each of Q, D, and v, which scipy sparse representation would be best to do efficient matrix-vector multiplication?
If I were to port the code to run on my GPU using cupy, are there any additional factors I'd need to consider for efficiency?
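As a concrete sketch of the computation (with hypothetical random data standing in for Q, D, and v): CSR is a reasonable default format for the matvecs, scipy.sparse.diags handles D, and v can stay a plain dense ndarray, since every intermediate product is dense anyway. Often you can skip building D entirely and scale elementwise instead:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n = 200

# hypothetical data: a random complex sparse Q, diagonal D, dense v
Q = (sp.random(n, n, density=0.05, random_state=rng)
     + 1j * sp.random(n, n, density=0.05, random_state=rng)).tocsr()
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
D = sp.diags(d)                # diagonal matrix; matvec with it is trivial
v = rng.standard_normal(n)     # keep v as a plain dense ndarray

# bracketed so that only matrix-vector products occur
result = Q @ (D @ (Q.conj().T @ v))

# equivalent and often cheaper: scale elementwise instead of building D
result2 = Q @ (d * (Q.conj().T @ v))
assert np.allclose(result, result2)
```

Note that Q.conj().T of a CSR matrix comes back in CSC form, which is fine: a CSC matvec is exactly as cheap as a CSR matvec of the transpose, so no conversion is needed.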

CSC vs. CSR matrix vector multiplication

I have a Python program that solves a linear problem with iterative methods, where the input data matrix is sparse.
My question is: which of the two compressed scipy.sparse formats, CSC or CSR, performs the matrix-vector multiplication A @ x faster, and why? In other words, if I want to multiply a vector by a matrix from the left, is it more efficient to have the matrix compressed in CSC or CSR format?
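For what it's worth, the usual recommendation is CSR for A @ x (each output entry is one contiguous pass over a row's stored values) and CSC for A.T @ x; a CSC matvec instead scatters contributions column by column into the output. Actual timings depend on the matrix, so it's worth benchmarking on your own data. A quick sanity check that both formats compute the same product:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
A = sp.random(1000, 1000, density=0.01, format='csr', random_state=rng)
x = rng.standard_normal(1000)

y_csr = A @ x            # CSR: one contiguous pass per row of A
y_csc = A.tocsc() @ x    # CSC: column-wise scatter into y, usually slower here
assert np.allclose(y_csr, y_csc)
```

The result is identical either way; only the access pattern (and hence the speed) differs.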

Eigenvalues and Eigenvectors of large sparse matrix in Python Scipy

My Python code generates the upper triangle of a matrix in coordinate (COO) format (row, col, mat_element) and stores only the non-zero elements to decrease memory usage. However, it is a sparse symmetric matrix and I wish to obtain its eigenvalues and eigenvectors using eigsh from scipy.sparse.linalg. How can I do this?
I would prefer to convert the COO matrix directly to CSR and give it as input to eigsh. But if that is not possible, I would like to make the matrix symmetric first and then convert it to CSR format.
COO and CSR matrix wiki:
https://en.wikipedia.org/wiki/Sparse_matrix
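A small sketch of the COO-to-eigsh route, using made-up data: symmetrize by adding the transpose of the strict upper triangle, convert to CSR, and call eigsh (note that k must be strictly less than the matrix dimension).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# hypothetical upper-triangle COO data for a small symmetric matrix
rows = np.array([0, 0, 1, 2])
cols = np.array([0, 2, 1, 2])
vals = np.array([4.0, 1.0, 3.0, 2.0])
n = 3

U = sp.coo_matrix((vals, (rows, cols)), shape=(n, n))
# symmetrize: add the transpose of the STRICT upper triangle
# (k=1 excludes the diagonal so it isn't doubled), then convert to CSR
A = (U + sp.triu(U, k=1).T).tocsr()
assert np.allclose(A.toarray(), A.toarray().T)

# smallest-algebraic eigenpair
w, v = eigsh(A, k=1, which='SA')
```

The same pattern scales to large matrices; the only per-matrix care needed is excluding the diagonal from the transposed part so diagonal entries are not counted twice.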

mmap sparse vector in python

I'm looking for simple sparse vector implementation that can be mapped into memory, similarly to numpy.memmap.
Unfortunately, the numpy implementation deals only with dense (full) vectors. Example usage:
vec = SparseVector('/tmp/file.dat') # SparseVector is the class I'm looking for
vec[10] = 10
vec[50] = 21
for key in vec:
    print(vec[key])  # 10, 21
I found the scipy classes representing sparse matrices, however the 2 dimensions are clumsy to use, as I'd need to make a matrix with only one row and then use vec[0, i].
Any suggestions?
Someone else was just asking about 1d sparse vectors, only they wanted to take advantage of the scipy.sparse method of handling duplicate indices.
is there something like coo_matrix but for sparse vectors?
As shown there, a coo_matrix actually consists of 3 numpy arrays, data, row, col. Other formats rearrange the values in other ways, lil for example has 2 nested lists, one for the data, another for the coordinates. dok is a regular dictionary, with (i,j) tuples as keys.
In theory then a sparse vector will require 2 arrays. Or as your example shows it could be a simple dictionary.
So you could implement a mmap sparse vector by using two mmap arrays. As far as I know there isn't a mmap version of the scipy sparse matrices, though it's not something I've looked for.
But what functionality do you want? What dimension? So large that a dense version would not fit in regular memory? Are you doing math with it? Or just data lookup?
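To make the two-memmap idea concrete, here is a minimal hypothetical MmapSparseVector sketch: one memmap for indices, one for values, fixed capacity, linear-scan lookup, no scipy involvement. A real implementation would need file growth, faster lookup (e.g. sorted indices), and persistence of the entry count:

```python
import os
import tempfile

import numpy as np

class MmapSparseVector:
    """File-backed sparse vector sketch: two fixed-capacity memmaps,
    one for indices and one for values.  Hypothetical class, not scipy API.
    No capacity growth or fast lookup -- illustration only.
    """
    def __init__(self, path, capacity=1024):
        self.idx = np.memmap(path + '.idx', dtype=np.int64,
                             mode='w+', shape=(capacity,))
        self.val = np.memmap(path + '.val', dtype=np.float64,
                             mode='w+', shape=(capacity,))
        self.n = 0          # number of stored entries

    def __setitem__(self, key, value):
        hit = np.flatnonzero(self.idx[:self.n] == key)
        if hit.size:                       # overwrite existing entry
            self.val[hit[0]] = value
        else:                              # append a new entry
            self.idx[self.n] = key
            self.val[self.n] = value
            self.n += 1

    def __getitem__(self, key):
        hit = np.flatnonzero(self.idx[:self.n] == key)
        return float(self.val[hit[0]]) if hit.size else 0.0

    def __iter__(self):
        return iter(self.idx[:self.n].tolist())

# usage mirroring the question (a temp dir instead of /tmp/file.dat)
path = os.path.join(tempfile.mkdtemp(), 'vec')
vec = MmapSparseVector(path)
vec[10] = 10
vec[50] = 21
assert [vec[k] for k in vec] == [10.0, 21.0]
```

Because both backing arrays are memmaps, the data lives on disk and the same files can be reopened later (with mode='r+') to recover the vector.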

Efficient numpy / lapack routine for product of inverse and sparse matrix?

I have a matrix B that is square and dense, and a matrix A that is rectangular and sparse.
Is there a way to efficiently compute the product B^-1 * A?
So far, I use (in numpy)
tmp = np.linalg.inv(B)
return tmp * A
which, I believe, makes use of A's sparsity. I was thinking about using the sparse method
scipy.sparse.linalg.spsolve, but this requires B, and not A, to be sparse.
Is there another way to speed things up?
Since the matrix to be inverted is dense, spsolve is not the tool you want. In addition, it is bad numerical practice to calculate the inverse of a matrix and multiply it by another - you are much better off using LU decomposition, which is supported by scipy.
Another point is that unless you are using the matrix class (I think the ndarray class is better; this is something of a question of taste), you need to use dot instead of the multiplication operator. And if you want to efficiently multiply a sparse matrix by a dense matrix, you need to use the dot method of the sparse matrix. Unfortunately this only works if the first matrix is sparse, so you need to use the trick Anycorn suggested of taking the transpose to swap the order of operations.
Here is a lazy implementation which doesn't use the LU decomposition, but which should otherwise be efficient:
B_inv = scipy.linalg.inv(B)
C = (A.transpose().dot(B_inv.transpose())).transpose()
Doing it properly with the LU decomposition involves finding a way to efficiently multiply a triangular matrix by a sparse matrix, which currently eludes me.
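A lazy LU-based sketch, with made-up B and A: factor B once with scipy.linalg.lu_factor, then back-substitute against all of A's columns at once with lu_solve. This densifies A, so it does not exploit A's sparsity inside the triangular solves, but it does avoid forming B^-1 explicitly, which is the main numerical concern:

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
n = 100
# hypothetical data: a well-conditioned dense B and a tall sparse A
B = rng.standard_normal((n, n)) + n * np.eye(n)
A = sp.random(n, 40, density=0.05, format='csc', random_state=rng)

# factor B once; the factorization is reused for every column of A
lu, piv = lu_factor(B)
X = lu_solve((lu, piv), A.toarray())   # solves B X = A (A densified here)

# same result as the explicit-inverse version, without ever forming B^-1
assert np.allclose(X, np.linalg.inv(B) @ A.toarray())
```

If A has many more columns than B has rows, the one-time O(n^3) factorization dominates and each extra column costs only an O(n^2) pair of triangular solves.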
