Defining matrix functions on higher dimensional tensors? Specifically a matrix logarithm - python

PyTorch does not have a native matrix logarithm. I would like to create an efficient logarithm using scipy.linalg.logm. The problem is that logm can only be applied to a single square matrix, not a batched tensor. I would like to compute the log of a tensor of shape (64, 22, 3, 3), where the log is applied to each (3, 3) matrix block. Is there a fast and efficient way to apply logm to the entire tensor?
I currently am resorting to list comprehension which is incredibly slow:
import torch
from scipy.linalg import logm

x = torch.rand(64, 22, 3, 3)
# logm handles one 2D matrix at a time and returns a NumPy array
Log_x = torch.stack([torch.from_numpy(logm(x[i, j].numpy())) for i in range(x.shape[0]) for j in range(x.shape[1])])
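If the (3, 3) blocks are symmetric positive-definite (for example, covariance-like matrices), one batched alternative is to take the matrix log through an eigendecomposition, which PyTorch can apply to the whole tensor at once. This is only a sketch under that assumption, not a general replacement for logm:

import torch

def batched_logm_spd(x):
    # Assumes each trailing (3, 3) block is symmetric positive-definite,
    # so log(A) = V diag(log(w)) V^T with a real eigendecomposition.
    w, v = torch.linalg.eigh(x)
    return v @ torch.diag_embed(torch.log(w)) @ v.transpose(-2, -1)

x = torch.rand(64, 22, 3, 3)
x = x @ x.transpose(-2, -1) + torch.eye(3)   # make the blocks SPD for the example
Log_x = batched_logm_spd(x)                  # shape (64, 22, 3, 3)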

Related

Diagonal of sparse 4D matrix

This question is the same as this one, but for a sparse matrix (scipy.sparse). The solution given to the linked question uses indexing schemes that are incompatible with sparse matrices.
For context, I am constructing a Jacobian for a large discretized PDE, so the B matrix in this case contains the relevant partial terms, while A will be the complete Jacobian I need to invert for a Newton's-method approximation. On a large grid A will be far too large to fit in memory, so I want to use sparse matrices.
I would like to construct an array with the following structure:
A[i, j, i, j] = B[i, j], with all other entries 0: A[i, j, k, l] = 0 whenever (k, l) != (i, j)
I.e., given the constructed B matrix, how can I create the matrix A, preferably in a vectorized manner?
Explicitly, let B = [[1,2],[3,4]]
Then:
A[1,1,:,:]=[[1,0],[0,0]]
A[1,2,:,:]=[[0,2],[0,0]]
A[2,1,:,:]=[[0,0],[3,0]]
A[2,2,:,:]=[[0,0],[0,4]]
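Since A[i, j, k, l] is nonzero only when (k, l) == (i, j), flattening each index pair (i, j) into a single index of size n*m turns A into an ordinary diagonal sparse matrix with B.ravel() on the diagonal. A minimal sketch of that idea (using 0-based indexing, so the blocks above become A[0,0,:,:] and so on):

import numpy as np
import scipy.sparse as sp

B = np.array([[1, 2], [3, 4]], dtype=float)
n, m = B.shape

# Flatten the (i, j) and (k, l) index pairs: A becomes an (n*m, n*m) diagonal matrix.
A_flat = sp.diags(B.ravel(), format="csr")

# Any block A[i, j, :, :] can be recovered as a dense 2D array if needed:
i, j = 0, 1
block = A_flat[i * m + j].toarray().reshape(n, m)
print(block)   # [[0. 2.], [0. 0.]]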

Covariance Matrix from 2D vectors - Tensorflow, Numpy

I'm trying to generate a kernel function for a GP using only matrix operations (no loops).
Vectors were no problem, taking advantage of broadcasting:
import numpy as np

def kernel(A, B):
    return 1 / np.exp(np.linalg.norm(A - B.T)) ** 2
Here A and B are both [n, 1] vectors, but with [n, m]-shaped matrices it just doesn't work (I also tried reshaping to [1, n, m]).
I'm interested in computing a matrix X where every (i, j)-th element is defined by A_i - B_j.
I'm working in NumPy for now, but my final objective is to implement this in TensorFlow.
Thanks in Advance.
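A sketch of the broadcasting pattern for the [n, m] case, assuming A is (n, d), B is (m, d), and the kernel depends on the pairwise row differences A_i - B_j (the squared-exponential form below is an assumption):

import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Insert new axes so that the subtraction broadcasts over all (i, j) pairs.
    diff = A[:, None, :] - B[None, :, :]      # shape (n, m, d)
    sq_dist = np.sum(diff ** 2, axis=-1)      # shape (n, m)
    return np.exp(-sq_dist / (2 * length_scale ** 2))

A = np.random.rand(5, 3)
B = np.random.rand(7, 3)
K = rbf_kernel(A, B)   # (5, 7), K[i, j] depends only on A[i] - B[j]

The same indexing pattern carries over to TensorFlow with tf.expand_dims and tf.reduce_sum.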

Fast sequential lists for tensorflow?

I have an array A of matrices (or a 3-dim tensor) and I want to do the following:
Denote each matrix by a number, so A is [1, 2, 3, 4, ...], and say we have a window of length 3. I want to pass the 4-dim array [[1,2,3], [2,3,4], [3,4,5], ...] as input to a TensorFlow graph. What's the most efficient way of doing this? (It's a bit like a convolution with a constant kernel, but without summing over the resulting matrices.)
At the moment this is what I'm doing:
input_NN = [data[t:t + window] for t in range(my_range)]  # slice out each window of consecutive matrices
and then I pass it to a TF placeholder.
Shall I think of a better way of doing it in numpy and pass the result to a placeholder or is there a fast way of doing this in TensorFlow by passing A directly?
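In NumPy (1.20 or later) the overlapping windows can be built as views without copying the data; a minimal sketch with made-up shapes, assuming A has shape (T, h, w) and window = 3:

import numpy as np

A = np.arange(10 * 4 * 4, dtype=np.float32).reshape(10, 4, 4)
window = 3

# Overlapping windows along axis 0, as strided views into A (no copy).
views = np.lib.stride_tricks.sliding_window_view(A, window, axis=0)  # (T - 2, 4, 4, 3)
input_NN = np.moveaxis(views, -1, 1)                                 # (T - 2, 3, 4, 4)
print(input_NN.shape)   # (8, 3, 4, 4)

The resulting array can then be fed to the placeholder as before.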

Sparse-Dense multiplication in Python

I am using Python 3.23 and I want to multiply a sparse VECTOR with a dense MATRIX. The idea of first unfolding the sparse vector into a dense one and then multiplying is of course silly from any standpoint except memory management before the actual unfolding; it will be more expensive with all the zeros in there...
Also, does anyone know of a good way for SciPy to keep one-dimensional matrices in sparse mode? The only one (admittedly) I have used is the classical notation of three vectors (x, y, value), so I have had to use np.ones(len(...)) to get it to work.
Well.. comments welcome!
Store the vector using the Scipy sparse matrix classes:
import numpy as np
from scipy.sparse import csr_matrix

x = csr_matrix(np.random.rand(1000) > 0.99).T
print(x.shape)   # (1000, 1)
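The multiplication itself can then let the sparse operand drive the product; a minimal sketch (the dense matrix M and its shape are assumptions):

import numpy as np
from scipy.sparse import csr_matrix

x = csr_matrix(np.random.rand(1000) > 0.99, dtype=float).T   # sparse column vector, (1000, 1)
M = np.random.rand(1000, 50)                                  # dense matrix, (1000, 50)

# The sparse matrix's dot method only touches the nonzero entries of x.
y = x.T.dot(M)
print(y.shape)   # (1, 50)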

Efficient numpy / lapack routine for product of inverse and sparse matrix?

I have a matrix B that is square and dense, and a matrix A that is rectangular and sparse.
Is there a way to efficiently compute the product B^-1 * A?
So far, I use (in numpy)
tmp = numpy.linalg.inv(B)
return tmp * A
which, I believe, makes use of A's sparsity. I was thinking about using the sparse method scipy.sparse.linalg.spsolve, but this requires B, and not A, to be sparse.
Is there another way to speed things up?
Since the matrix to be inverted is dense, spsolve is not the tool you want. In addition, it is bad numerical practice to calculate the inverse of a matrix and multiply it by another - you are much better off using LU decomposition, which is supported by scipy.
Another point is that unless you are using the matrix class (I think the ndarray class is better, but this is something of a question of taste), you need to use dot instead of the multiplication operator. And if you want to efficiently multiply a sparse matrix by a dense matrix, you need to use the dot method of the sparse matrix. Unfortunately this only works if the first matrix is sparse, so you need to use the trick which Anycorn suggested of taking the transpose to swap the order of operations.
Here is a lazy implementation which doesn't use the LU decomposition, but which should otherwise be efficient:
import scipy.linalg

B_inv = scipy.linalg.inv(B)
# C = B^-1 * A, computed as (A^T * B_inv^T)^T so the sparse matrix's dot method is used
C = (A.transpose().dot(B_inv.transpose())).transpose()
Doing it properly with the LU decomposition involves finding a way to efficiently multiply a triangular matrix by a sparse matrix, which currently eludes me.
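For the LU route, a straightforward (if partly dense) sketch is to factor B once and solve against A as the right-hand side; this gives up A's sparsity on the right, which is usually acceptable since B^-1 * A is generally dense anyway. The shapes and the densification of A below are assumptions:

import numpy as np
import scipy.linalg
import scipy.sparse as sp

n, k = 500, 200
B = np.random.rand(n, n) + n * np.eye(n)         # dense, well-conditioned for the example
A = sp.random(n, k, density=0.01, format="csc")  # sparse right-hand side

lu, piv = scipy.linalg.lu_factor(B)
C = scipy.linalg.lu_solve((lu, piv), A.toarray())   # solves B C = A without forming B^-1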
