Is there a way to calculate the determinant of a complex matrix in PyTorch?
torch.det is not implemented for 'ComplexFloat'
Unfortunately, it's not currently implemented. One way would be to implement your own version, or to simply use np.linalg.det.
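For instance, the numpy fallback can be as simple as this sketch (note the round-trip through numpy detaches the computation from autograd):
import numpy as np
import torch

A = torch.view_as_complex(torch.randn(3, 3, 2))
det = torch.tensor(np.linalg.det(A.numpy()))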
Here is a short function I wrote that computes the determinant of a complex matrix using the LU decomposition:
import torch

def complex_det(A):
    def complex_diag(A):
        return torch.view_as_complex(torch.stack((A.real.diag(), A.imag.diag()), dim=1))
    # Perform LU decomposition of matrix A:
    A_LU, pivots = A.lu()
    P, A_L, A_U = torch.lu_unpack(A_LU, pivots)
    # The determinant of a product of matrices is the product of their determinants:
    det = (torch.prod(complex_diag(A_L))
           * torch.prod(complex_diag(A_U))
           * torch.det(P.real))  # det(P) is +-1; could probably be computed more efficiently, e.g. using Sylvester's determinant identity
    return det

# Test it:
A = torch.view_as_complex(torch.randn(3, 3, 2))
complex_det(A)
As of version 1.8, PyTorch has native support for numpy-style linear algebra through the torch.linalg module. In particular, torch.linalg.det supports the cfloat and cdouble complex dtypes:
torch.linalg.det(input)
Computes the determinant of a square matrix input, or of each square matrix in a batched input.
This function supports float, double, cfloat and cdouble dtypes.
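For example (assuming PyTorch >= 1.8):
import torch

A = torch.view_as_complex(torch.randn(3, 3, 2))
det = torch.linalg.det(A)  # works directly on cfloat tensors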
If I want to calculate the k smallest eigenvalues of the matrix product AA', with A of size 300K by 512 and "'" denoting the transpose, doing so the traditional way would be infeasible. Matlab, however, provides a nice facility: you can pass a function handle that performs the product, Afun = @(x) A*(A'*x);, to the eigs function. Then, to find the 6 smallest eigenvalues/eigenvectors, we call d = eigs(Afun,300000,6,'smallestabs'), where the second input is the size of the matrix AA'. Is there a function in Python that performs something similar?
To my knowledge, there is no such functionality in numpy. However, I don't see anything preventing you from simply using numpy.linalg.eigvals to retrieve an array of all the matrix's eigenvalues, and then finding the N smallest with a sort:
import numpy as np

A = ...  # your matrix here
eigvals = np.linalg.eigvals(A)
eigvals.sort()
smallest_6_eigvals = eigvals[:6]
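That said, scipy (as opposed to numpy) can get quite close to Matlab's Afun mechanism: scipy.sparse.linalg.eigsh accepts a LinearOperator, so the product AA' never has to be formed explicitly. A sketch with a smaller stand-in matrix (note that convergence for the smallest eigenvalues of such a rank-deficient product may be slow):
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

A = np.random.rand(3000, 512)  # stand-in for the 300K-by-512 matrix

def matvec(x):
    return A @ (A.T @ x)  # applies AA' to x without ever building AA'

op = LinearOperator((A.shape[0], A.shape[0]), matvec=matvec, dtype=A.dtype)
d = eigsh(op, k=6, which='SM', return_eigenvectors=False)  # 6 smallest in magnitude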
I'm trying to compute a dot product expression that is supposed to be symmetric, but it turns out that it just isn't.
B is a 4D array whose last two dimensions I must transpose to obtain B^t.
D is a 2D array. (It's an expression of the stiffness matrix familiar to finite element method programmers.)
numpy.dot combined with numpy.transpose, and as a second alternative numpy.einsum (the idea came from this topic: Numpy Matrix Multiplication U*B*U.T Results in Non-symmetric Matrix), have already been tried, and the problem persists.
By the end of the calculations the product B^tDB is obtained, and when I check whether it really is symmetric by subtracting its transpose (B^tDB)^t, there is still a residue.
The dot product or the Einstein summation is used only over the dimensions of interest (the last ones).
The question is: How can these residues be eliminated?
You need to use arbitrary precision floating point math. Here's how you can combine numpy and the mpmath package to define an arbitrary precision version of matrix multiplication (i.e. the np.dot method):
from mpmath import mp, mpf
import numpy as np

# dps stands for "decimal places". Larger values
# mean higher precision, but slower computation
mp.dps = 75

def tompf(arr):
    """Convert any numpy array to one of arbitrary precision mpmath.mpf floats"""
    if arr.size and not isinstance(arr.flat[0], mpf):
        return np.array([mpf(x) for x in arr.flat]).reshape(*arr.shape)
    else:
        return arr

def dotmpf(arr0, arr1):
    """An arbitrary precision version of np.dot"""
    return tompf(arr0).dot(tompf(arr1))
As an example, if you then set up the B, B^t, and D matrices like so:
bshape = (8,8,8,8)
dshape = (8,8)
B = np.random.rand(*bshape)
BT = np.swapaxes(B, -2, -1)
d = np.random.rand(*dshape)
D = d.dot(d.T)
then B^tDB - (B^tDB)^t will always have a non-zero value if you calculate it using the standard matrix multiplication method from numpy:
M = np.dot(np.dot(B, D), BT)
np.sum(M - M.T)
but if you use the arbitrary precision version given above it won't have a residue:
M = dotmpf(dotmpf(B, D), BT)
np.sum(M - M.T)
Watch out though. Calculations using arbitrary precision math run much slower than those done using standard floating point numbers.
I am trying to diagonalise a simple sparse Hermitian matrix using Python's scipy.sparse.linalg.eigsh function; although the documentation says it supports Hermitian matrices, the Python wrapper file arpack.py says it does not:
# The entry points to ARPACK are
# - (s,d)seupd : single and double precision symmetric matrix
# - (s,d,c,z)neupd: single,double,complex,double complex general matrix
# This wrapper puts the *neupd (general matrix) interfaces in eigs()
# and the *seupd (symmetric matrix) in eigsh().
# There is no Hermetian complex/double complex interface.
# To find eigenvalues of a Hermetian matrix you
# must use eigs() and not eigsh()
# It might be desirable to handle the Hermetian case differently
# and, for example, return real eigenvalues.
Indeed, the above comments from the wrapper are vindicated when I try to run the following code to diagonalise a sparse complex Hermitian matrix using eigsh:
import math
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

sign = lambda x: math.copysign(1, x)

S = lil_matrix((5, 5), dtype=complex)
for i in range(5):
    for j in range(5):
        S[i, j] = sign(i - j) * 1j

eigval = eigsh(-S, k=1, M=None, sigma=None, which='LM', v0=None,
               ncv=None, tol=0, return_eigenvectors=False)
Then the following error comes up:
ValueError: Input matrix is not real-valued.
Of course, if I use a regular numpy matrix for S and the numpy.linalg.eigh function, then all the eigenvalues are computed. But
I do not want all the eigenvalues, and
I need my matrix to be sparse.
Does anyone know, then, what the point of having the sparse eigsh is for sparse complex Hermitian matrices if it cannot be used on them?
Note: I understand that the matrix in the example here is not necessarily sparse by some definition of sparsity but it's just used for illustrative purposes.
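For what it's worth, the workaround the wrapper comment itself suggests is to call eigs() instead. A minimal sketch using the matrix S above (the eigenvalues of a Hermitian matrix are real, so the imaginary parts returned by eigs() should be numerically negligible):
from scipy.sparse.linalg import eigs

eigval = eigs(-S, k=1, which='LM', return_eigenvectors=False)
print(eigval.real)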
When I use the linear algebra module in scipy to calculate the matrix logarithm of a Hermitian matrix, the matrix it outputs isn't Hermitian. I first define a vector using:
n = np.random.uniform(size=3) + 1j*np.random.uniform(size=3)
Then I define the corresponding Hermitian matrix:
N = np.outer(n, n.conj())
However, linalg.logm(N) doesn't return a Hermitian matrix. Why is this happening?
All but one of the eigenvalues of the random matrix are zero. Since a function of a matrix can be written as the same function applied to its eigenvalues, I can see why the logarithm has a problem here: log(0) is not defined. Perhaps the function doesn't detect this and just returns garbage.
I guess that you just need to make sure that your random Hermitian matrix has nonzero eigenvalues.
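A quick way to test this hypothesis, and a possible workaround if a small spectral shift is acceptable for your application (eps and the shift are illustrative assumptions, not part of the original question):
import numpy as np
from scipy.linalg import logm

n = np.random.uniform(size=3) + 1j*np.random.uniform(size=3)
N = np.outer(n, n.conj())
print(np.linalg.eigvalsh(N))  # one positive eigenvalue, the rest ~0

# Shift the spectrum away from zero so the logarithm is well defined:
eps = 1e-9
L = logm(N + eps*np.eye(3))
print(np.max(np.abs(L - L.conj().T)))  # deviation from Hermitian should now be small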
I have a matrix B that is square and dense, and a matrix A that is rectangular and sparse.
Is there a way to efficiently compute the product B^-1 * A?
So far, I use (in numpy)
tmp = B.inv()
return tmp * A
which, I believe, makes use of A's sparsity. I was thinking about using the sparse method
scipy.sparse.linalg.spsolve, but this requires B, and not A, to be sparse.
Is there another way to speed things up?
Since the matrix to be inverted is dense, spsolve is not the tool you want. In addition, it is bad numerical practice to calculate the inverse of a matrix and multiply it by another; you are much better off using the LU decomposition, which is supported by scipy.
Another point is that unless you are using the matrix class (I think the ndarray class is better, but this is something of a question of taste), you need to use dot instead of the multiplication operator. And if you want to efficiently multiply a sparse matrix by a dense matrix, you need to use the dot method of the sparse matrix. Unfortunately this only works if the first matrix is sparse, so you have to use the trick which Anycorn suggested of taking transposes to swap the order of operations.
Here is a lazy implementation which doesn't use the LU decomposition, but which should otherwise be efficient:
import scipy.linalg

B_inv = scipy.linalg.inv(B)
C = (A.transpose().dot(B_inv.transpose())).transpose()
Doing it properly with the LU decomposition involves finding a way to efficiently multiply a triangular matrix by a sparse matrix, which currently eludes me.
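For the record, here is a sketch of what the LU route can look like: factor B once with scipy.linalg.lu_factor, then solve B*C = A. Note that lu_solve wants a dense right-hand side, so A's sparsity is only exploited up to the .toarray() conversion (the shapes and density below are illustrative assumptions):
import numpy as np
import scipy.linalg
import scipy.sparse

B = np.random.rand(500, 500)
A = scipy.sparse.random(500, 100, density=0.01, format='csc')

lu, piv = scipy.linalg.lu_factor(B)                # factor B once
C = scipy.linalg.lu_solve((lu, piv), A.toarray())  # C = B^-1 * A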