This question is the same as this one, but for a sparse matrix (scipy.sparse). The solution given to the linked question used indexing schemes that are incompatible with sparse matrices.
For context, I am constructing a Jacobian for a large discretized PDE, so the B matrix in this case contains the various relevant partial-derivative terms, while A will be the complete Jacobian I need to invert for a Newton's method approximation. On a large grid A will be far too large to fit in memory, so I want to use sparse matrices.
I would like to construct an array with the following structure:
A[i,j,i,j] = B[i,j], with all other entries 0: A[i,j,l,k] = 0 for (i,j) != (l,k)
That is, if I already have the B matrix constructed, how can I create the matrix A, preferably in a vectorized manner?
Explicitly, let B = [[1,2],[3,4]]
Then:
A[1,1,:,:]=[[1,0],[0,0]]
A[1,2,:,:]=[[0,2],[0,0]]
A[2,1,:,:]=[[0,0],[3,0]]
A[2,2,:,:]=[[0,0],[0,4]]
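A minimal sketch of one possible construction (my own assumption, not taken from the linked question): since scipy.sparse only handles two-dimensional matrices, A can be stored in its flattened (n*m, n*m) form, which is simply a diagonal matrix holding B's entries:

import numpy as np
from scipy import sparse

B = np.array([[1, 2], [3, 4]])
n, m = B.shape
# Flattened view: A[i, j, l, k] maps to row i*m + j and column l*m + k,
# so the nonzero entries B[i, j] all sit on the main diagonal.
A_flat = sparse.diags(B.ravel(), format='csr')   # shape (n*m, n*m)

The flattened form can then be handed directly to a sparse solver (e.g. scipy.sparse.linalg.spsolve) with the Newton residual flattened the same way.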
I have a fairly large (1600, 1600) sparse matrix B and I need to find its kernel (note: I know it to be one-dimensional, i.e. just one vector).
One approach is to "brute-force" it by using scipy.linalg.null_space(). That is a fine way, I would say, but for larger matrices it may not be the most efficient. Therefore, I tried to implement my own method using scipy's sparse module and the SVD:
import numpy as np
from scipy.sparse import csr_matrix, linalg

def kernel(matrix):
    # Convert the matrix to compressed sparse row (CSR) format
    matrix = csr_matrix(matrix)
    # Compute a truncated singular value decomposition (SVD) of the matrix
    U, s, vh = linalg.svds(matrix, which='SM')
    # Get the index of the right singular vector with the lowest singular value
    ind = np.where(s == np.min(s))[0][0]
    return vh[ind].T
The problem I am having is that, while the above code does return a vector (which is nice), it is far from being in the kernel of B: the norm of B times that vector is not only much larger than zero, it also changes from run to run.
Has anyone encountered this problem before? Does anyone know how to fix it?
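One possible workaround, sketched under my own assumptions (this is not the poster's code): svds with which='SM' is known to struggle with the smallest singular values, so instead look for the eigenvector of B.T @ B whose eigenvalue is closest to zero, using shift-invert with a small negative shift so the shifted matrix stays nonsingular. For a one-dimensional kernel, this eigenvector spans it.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

def kernel_shift_invert(B):
    B = csr_matrix(B)
    # B.T @ B is symmetric positive semi-definite and shares B's null space
    # (use B.conj().T instead for a complex-valued B).
    M = (B.T @ B).tocsc()
    # Shift-invert around a small negative sigma: the eigenvalue closest to
    # sigma is the (near-)zero eigenvalue, and its eigenvector is the kernel vector.
    vals, vecs = eigsh(M, k=1, sigma=-1e-8, which='LM')
    v = vecs[:, 0]
    return v / np.linalg.norm(v)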
I would like to generate invertible matrices (specifically those from GL(n), the general linear group of size n) using Tensorflow and/or Numpy for use with my neural network.
How can this be done and what would be the best way of doing so?
I understand there is a way to generate symmetric invertible matrices by computing (A + A.T)/2 for arbitrary square matrices A; however, I would like mine to not just be symmetric.
I happened to find one way which I believe can generate a large variety of random invertible matrices, using diagonal dominance.
The theorem is that, given an n x n matrix, if for every row the absolute value of the diagonal element is larger than the sum of the absolute values of the other elements in that row, then the matrix is invertible. (Here is the corresponding Wikipedia article: https://en.wikipedia.org/wiki/Diagonally_dominant_matrix)
Therefore the following code snippet generates an arbitrary invertible matrix.
import numpy as np

n = 5  # size of the invertible matrix I wish to generate
m = np.random.rand(n, n)
# Fill the diagonal with each row's absolute sum: the matrix becomes
# strictly diagonally dominant and is therefore invertible.
mx = np.sum(np.abs(m), axis=1)
np.fill_diagonal(m, mx)
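A quick, purely illustrative sanity check (my addition, not part of the original answer): a strictly diagonally dominant matrix is nonsingular, so inversion should always succeed.

print(np.linalg.det(m) != 0)               # True: the matrix is nonsingular
m_inv = np.linalg.inv(m)
print(np.allclose(m @ m_inv, np.eye(n)))   # True up to floating-point error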
Suppose I have the following:
A complex-valued sparse matrix Q (and its conjugate transpose is Q^H)
A complex-valued diagonal matrix D
A real-valued sparse vector v
and I wish to compute as efficiently as possible, using scipy sparse matrices, the product
Q @ (D @ (Q^H @ v)),
where @ denotes matrix-vector multiplication and the bracketed order is chosen to avoid matrix-matrix multiplication. I am new to using sparse matrices, and the available types confuse me. How would I best compute this in scipy? In particular, I'm wondering:
Is it inefficient to mix matrix types? (e.g. would it be inefficient if Q were a CSC matrix and v were a CSR matrix - is there any general requirement for them all to be the same?)
For each of Q, D, and v, which scipy sparse representation would be best to do efficient matrix-vector multiplication?
If I were to port the code to run on my GPU using cupy, are there any additional factors I'd need to consider for efficiency?
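For what it's worth, here is a minimal sketch of how the product could be laid out (the sizes, density, and variable names below are my own assumptions, not part of the question): keep Q in CSR or CSC, store D as a plain 1-D array of its diagonal, and keep v dense once the first product has been formed, so each step is a single sparse matrix-vector product or an elementwise scaling.

import numpy as np
from scipy import sparse

n, k = 1000, 50                                  # assumed sizes
Q = sparse.random(n, k, density=0.01, format='csr').astype(np.complex128)
d = np.random.rand(k) + 1j * np.random.rand(k)   # diagonal entries of D
v = np.random.rand(n)                            # dense 1-D right-hand vector

w = Q.conj().T @ v    # Q^H @ v   -> dense length-k vector
w = d * w             # D @ (...) as an elementwise scaling, no (k, k) matrix needed
result = Q @ w        # Q @ (...) -> dense length-n vector

Storing D as its diagonal avoids building a (k, k) matrix at all, and mixing CSR and CSC between the two sparse products is not a problem for correctness.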
I have derived d sparse matrices m[d] of size (n, n) each, and I would like to stack them along a new axis in order to build a sparse matrix of size (n, n, d).
I tried building this stacked matrix with np.stack([m[i] for i in range(d)], axis=-1) but this yields a numpy.ndarray of size d and not a sparse matrix (in such format, I can't use scipy.sparse.save_npz, which is what I ultimately want to do). scipy.sparse only comes with vstack and hstack, neither of which suits my need here.
Is there a way to build such a matrix?
Is there a way to build a sparse matrix with more than two axes at all?
Notes:
All sparse matrices m[d] have the same number of stored elements, and these elements have the same coordinates in the matrix, so stacking them should be straightforward.
To give some context, I encountered this problem trying to compute the gradient of a function f defined on a mesh surface. This function associates each vertex i of the mesh surface with a vector f(i) of size d. All edges (i,j) can be stored in a sparse matrix of size (n, n). Finally, each matrix m[d] contains the gradient along the dth dimension for each edge (i, j) of the mesh.
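One possible workaround, sketched under my own assumptions (not the poster's code): since scipy.sparse only supports two axes, flatten each (n, n) matrix into an (n*n, 1) column and hstack the d columns into a single (n*n, d) sparse matrix, which scipy.sparse.save_npz can store; reshape the columns back to (n, n) after loading.

import numpy as np
from scipy import sparse

def stack_sparse(mats):
    n = mats[0].shape[0]
    # Flatten each (n, n) matrix to a column and place the d columns side by side.
    columns = [m.reshape((n * n, 1)) for m in mats]
    return sparse.hstack(columns).tocsr()

# stacked = stack_sparse([m[k] for k in range(d)])
# sparse.save_npz('gradients.npz', stacked)      # file name is illustrative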
I have a matrix B that is square and dense, and a matrix A that is rectangular and sparse.
Is there a way to efficiently compute the product B^-1 * A?
So far, I use (in numpy)
tmp = numpy.linalg.inv(B)
return tmp * A
which, I believe, makes use of A's sparsity. I was thinking about using the sparse method
scipy.sparse.linalg.spsolve, but this requires B, and not A, to be sparse.
Is there another way to speed things up?
Since the matrix to be inverted is dense, spsolve is not the tool you want. In addition, it is bad numerical practice to calculate the inverse of a matrix and multiply it by another - you are much better off using LU decomposition, which is supported by scipy.
Another point is that unless you are using the matrix class (I think that the ndarray class is better, this is something of a question of taste), you need to use dot instead of the multiplication operator. And if you want to efficiently multiply a sparse matrix by a dense matrix, you need to use the dot method of the sparse matrix. Unfortunately this only works if the first matrix is sparse, so you need to use the trick which Anycorn suggested of taking the transpose to swap the order of operations.
Here is a lazy implementation which doesn't use the LU decomposition, but which should otherwise be efficient:
import scipy.linalg

B_inv = scipy.linalg.inv(B)
C = (A.transpose().dot(B_inv.transpose())).transpose()
Doing it properly with the LU decomposition involves finding a way to efficiently multiply a triangular matrix by a sparse matrix, which currently eludes me.
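For completeness, here is a rough sketch of the LU route mentioned above (my own addition, not the original answer's code): factor B once with lu_factor, then solve B @ C = A with lu_solve. lu_solve needs a dense right-hand side, so A is densified here, which gives up A's sparsity but still avoids forming B's inverse explicitly.

import numpy as np
import scipy.linalg
from scipy import sparse

def solve_B_inv_times_A(B, A):
    # One LU factorization of the dense square matrix B...
    lu, piv = scipy.linalg.lu_factor(B)
    # ...then triangular solves against every column of A at once.
    A_dense = A.toarray() if sparse.issparse(A) else np.asarray(A)
    return scipy.linalg.lu_solve((lu, piv), A_dense)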