I have a problem when I generate random matrices and turn them into positive semidefinite ones. If we take a random matrix Q and multiply it by its transpose, the result should be positive semidefinite (no negative eigenvalues). However, when I print some of the eigenvalues, I see negative ones. Is there something wrong with my code? I also want to save the largest eigenvalue of each matrix into a vector. My Q matrices are integer matrices, and from what I've seen the eigenvalues are real numbers with the complex part always 0. I also get a warning. Let me show you my code first.
# here I create a random tensor with N matrices of size n by n
Q = torch.randint(0, 10, size=(N, n, n))
# here I initialize my vector for the max eigenvalues
max = np.zeros(N)
# here I loop over the tensor and multiply each Q matrix with its transpose
for i in range(0, N):
    Q[i] = Q[i]*Q[i].t()
    # here I find the eigenvalues and save the max into my vector max
    val, vec = lg.eig(Q[i])
    max[i] = np.amax(val)
The warning I get is
ComplexWarning: Casting complex values to real discards the imaginary part
and I print my eigenvalues from the console with the command
lg.eig(Q[0])
However, I see for example
(array([120.20198423+0.j, -1.93985888+0.j, 34.73787466+0.j])
which has a negative value.
Your example is a random integer matrix, but your warning message implies Q contains complex values.
As such, in order to create a positive semidefinite matrix you must multiply Q by its conjugate transpose torch.conj(Q).t() (this is equivalent to the transpose across the reals).
Also, you are computing an element-wise product by using *; for matrix multiplication use @, torch.mm or torch.matmul.
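For reference, here is a minimal sketch of the corrected loop. The sizes, the max_eig name, and the switch from lg.eig to np.linalg.eigvalsh (which is meant for symmetric matrices and returns real, sorted eigenvalues) are my own choices, not from your post:

import torch
import numpy as np

N, n = 5, 4                               # hypothetical sizes for illustration
Q = torch.randint(0, 10, size=(N, n, n))
max_eig = np.zeros(N)                     # renamed so it does not shadow the built-in max

for i in range(N):
    A = torch.matmul(Q[i], Q[i].t())      # matrix product, not the element-wise *
    vals = np.linalg.eigvalsh(A.numpy())  # real eigenvalues, ascending order
    max_eig[i] = vals[-1]                 # largest eigenvalue, no ComplexWarning

With the matrix product, the eigenvalues of each A are all non-negative (up to rounding), so the negative values disappear.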
I've been working with matrices in python recently, and when I use np.linalg.pinv(matrix) I get negative values even though all the values in matrix are positive. For example,
matrix = np.array([[1,2,3],[4,5,6],[7,8,9]])
print(np.linalg.pinv(matrix))
outputs
[[-6.38888889e-01 -1.66666667e-01 3.05555556e-01]
[-5.55555556e-02 3.36727575e-17 5.55555556e-02]
[ 5.27777778e-01 1.66666667e-01 -1.94444444e-01]]
Why would the sign change if it's just an inverse?
I've got a 2x2 matrix defined by the variables J00, J01, J10, J11 coming in from other inputs. Since the matrix is small, I was able to compute the spectral norm by first computing the trace and determinant
J_T = tf.reduce_sum([J00, J11])
J_ad = tf.reduce_prod([J00, J11])
J_cb = tf.reduce_prod([J01, J10])
J_det = tf.reduce_sum([J_ad, -J_cb])
and then solving the quadratic
L1 = J_T/2.0 + tf.sqrt(J_T**2/4.0 - J_det)
L2 = J_T/2.0 - tf.sqrt(J_T**2/4.0 - J_det)
spectral_norm = tf.maximum(L1, L2)
This works, but it looks rather ugly and it isn't generalizable to larger matrices. Is there a cleaner way (maybe a method call that I'm missing) to compute spectral_norm?
The spectral norm of a matrix J equals the largest singular value of the matrix.
Therefore you can use tf.svd() to perform the singular value decomposition, and take the largest singular value:
spectral_norm = tf.svd(J,compute_uv=False)[...,0]
where J is your matrix.
Notes:
I use compute_uv=False since we are interested only in singular values, not singular vectors.
J does not need to be square.
This solution works also for the case where J has any number of batch dimensions (as long as the two last dimensions are the matrix dimensions).
The ellipsis ... operation works as in NumPy.
I take the 0 index because we are interested only in the largest singular value.
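For a concrete picture, here is a small usage sketch. I am assuming a current TensorFlow where the call is spelled tf.linalg.svd; the batch of random matrices is only an illustration:

import tensorflow as tf

J = tf.random.normal([3, 4, 5])           # batch of three 4x5 matrices; J need not be square
s = tf.linalg.svd(J, compute_uv=False)    # singular values, sorted in descending order
spectral_norm = s[..., 0]                 # shape [3]: one spectral norm per matrix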
So I would like to generate a 50 X 50 covariance matrix for a random variable X given the following conditions:
one variance is 10 times larger than the others
the parameters of X are only slightly correlated
Is there a way of doing this in Python/R etc? Or is there a covariance matrix that you can think of that might satisfy these requirements?
Thank you for your help!
OK, you only need one matrix and randomness isn't important. Here's a way to construct a matrix according to your description. Start with an identity matrix 50 by 50. Assign 10 to the first (upper left) element. Assign a small number (I don't know what's appropriate for your problem, maybe 0.1? 0.01? It's up to you) to all the other elements. Now take that matrix and square it (i.e. compute transpose(X) . X where X is your matrix). Presto! You've squared the eigenvalues so now you have a covariance matrix.
If the small element is small enough, X is already positive definite. But squaring guarantees it (assuming there are no zero eigenvalues, which you can verify by computing the determinant -- if the determinant is nonzero then there are no zero eigenvalues).
I assume you can find Python functions for these operations.
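Here is a rough NumPy sketch of that recipe; the 0.01 used for the small off-diagonal entries is only a placeholder you would tune yourself:

import numpy as np

n = 50
small = 0.01                  # placeholder for the "slight correlation"

X = np.full((n, n), small)    # small number everywhere...
np.fill_diagonal(X, 1.0)      # ...ones on the diagonal
X[0, 0] = 10.0                # one entry much larger than the rest

cov = X.T @ X                 # squaring guarantees positive (semi)definiteness

# sanity checks: symmetric, and all eigenvalues strictly positive
assert np.allclose(cov, cov.T)
assert np.all(np.linalg.eigvalsh(cov) > 0)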
I have an n x n matrix C and use inv from numpy.linalg to take the inverse to get Cinverse. My C matrix has elements of order 10**4 but my Cinverse matrix has elements of order 10**12 and higher (not sure if that's correct). When I do numpy.dot(C, Cinverse), I do not get the identity matrix. Why is this?
I have a vector x which I multiply by itself to get a matrix.
x = np.array([121.41191662, 74.22830468, 73.23156336, 75.48354975, 79.89580817])
c = np.outer(x, x)
this is a 5x5 matrix.
then I get its inverse by
from numpy.linalg import inv
cinverse=inv(c)
then I want to see if I can get identity matrix back.
identity = np.dot(c, cinverse)
However, I do not get the identity matrix. cinverse has very large matrix elements
around 10**13 and higher while c has matrix elements around 10,000.
The outer product of two vectors (be they the same or not) is not invertible. Since it is just a stack of scaled copies of the same vector, its rank is one. Rank-deficient matrices cannot be inverted.
I'm surprised that numpy is not raising an exception or at least giving a warning.
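If you want to see the problem before calling inv, a quick check like this (matrix_rank and cond are my additions, not from the question) makes the rank deficiency explicit:

import numpy as np

x = np.array([121.41191662, 74.22830468, 73.23156336, 75.48354975, 79.89580817])
c = np.outer(x, x)               # every row is a scaled copy of x

print(np.linalg.matrix_rank(c))  # 1, not 5 -> c is singular
print(np.linalg.cond(c))         # enormous (effectively infinite) condition number

inv() may not raise here because floating-point rounding hides the exact singularity, but the "inverse" it returns is meaningless, which is why its entries are huge and c times it is nowhere near the identity.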
So here is some code that generates the inverse matrix, and I will comment about it afterwards.
import numpy as np
x = np.random.rand(5,5)*10000 # makes a 5x5 matrix with elements around 10000
xinv = np.linalg.inv(x)
iden = np.dot(x, xinv)
Now the first line of your iden matrix probably looks something like this:
[ 1.00000000e+00, -2.05382445e-16, -5.61067365e-16, 1.99719718e-15, -2.12322957e-16]
Notice that the first element is exactly 1, as it should be, but the others are not exactly 0. However, they are essentially zero and should be regarded as zero to machine precision.
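If you want to check "essentially the identity" programmatically rather than by eye, something like this works (again just a sketch):

import numpy as np

x = np.random.rand(5, 5) * 10000
iden = np.dot(x, np.linalg.inv(x))
# entries like -2e-16 are rounding noise; compare with a tolerance, not exact equality
print(np.allclose(iden, np.eye(5)))   # True for any reasonably conditioned random x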
Ok, so I'm doing the power method in python.
Basically, the equation revolves around multiplying a matrix A by a vector (y) like this:
for i in range(0, 100):
    y = mult(matrix, y)
    y = scalarMult(y, 1.0/y[0][0])
Then you multiply the vector y by 1/(the first element in y). Now, if the matrix is sparse or has a zero in just the right spot, you will get a zero for the first element in y and the division blows up. None of my googling has turned up a modification of the power method that avoids this.
For those interested, I'm trying to solve for the eigenvalues of a matrix; and my code works as long as there aren't too many zeros.
Instead of dividing by the first element of the vector, you can divide by one of its norms.
For example, if you use the 2-norm, the length of the vector will always be 1.
norm = sum(e**2 for e in y)**0.5
The norm of a vector is zero only when the vector is 0 (has all elements 0), so division by 0 should not happen.
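Here is a compact sketch of power iteration normalising by the 2-norm; the NumPy usage and the 2x2 example matrix are my own, not from the question:

import numpy as np

def power_iteration(A, iters=100):
    # returns an estimate of the dominant eigenvalue and its eigenvector
    y = np.random.rand(A.shape[0])
    for _ in range(iters):
        y = A @ y
        y = y / np.linalg.norm(y)   # the 2-norm is zero only for the zero vector
    return y @ A @ y, y             # Rayleigh quotient gives the eigenvalue estimate

A = np.array([[0.0, 2.0],
              [2.0, 3.0]])          # eigenvalues are 4 and -1
val, vec = power_iteration(A)
print(val)                          # ~4.0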