Decomposition of matrix exponential in Python: eigenvector matrix doesn't match

I am trying to reproduce the following example, where the decomposition is T x Lambda x T^{-1} (not named in the image, but visible), Lambda being a diagonal matrix of the exponentiated eigenvalues and T, I think, an eigenvector matrix. I cannot work out how to calculate the matrix T in Python.
My code attempt is shown below. The eigenvectors vecs should correspond to T. What am I doing wrong? And is there a more appropriate function than eigs?
import numpy as np
from scipy.sparse.linalg import eigs
from scipy.linalg import expm,inv
A = np.array([[5,1],[-2,2]])
eA = expm(A)
vals, vecs = eigs(A, k=2)
print(vals) #eigenvalues match Lambda in example
print(vecs) #eigenvectors don't match T in example
print(inv(vecs))
Returns:
[4.+0.j 3.+0.j]
[[ 0.70710678 -0.4472136 ]
[-0.70710678 0.89442719]]
[[2.82842712 1.41421356]
[2.23606798 2.23606798]]
whereas in the image T and T^{-1} are matrices of positive/negative 1's and 2's.
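For comparison, here is a minimal sketch using np.linalg.eig instead of the sparse ARPACK wrapper (eigs is intended for extracting a few eigenpairs of large sparse matrices). Note that eigenvectors are only defined up to a scalar factor, so the unit-norm columns returned by NumPy/SciPy are just scaled versions of the 1's-and-2's columns in the image, and they still reconstruct e^A:
import numpy as np
from scipy.linalg import expm
A = np.array([[5, 1], [-2, 2]])
vals, vecs = np.linalg.eig(A)           # dense solver; fine for a 2x2 matrix
T = vecs                                # columns are the (normalized) eigenvectors
Lam = np.diag(np.exp(vals))             # exponentials of the eigenvalues on the diagonal
print(T @ Lam @ np.linalg.inv(T))       # same matrix as expm(A)
print(np.allclose(T @ Lam @ np.linalg.inv(T), expm(A)))  # True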

Related

On not getting an identity matrix with dot product of matrix and its inverse while using numpy.linalg.inv()

import numpy as np
from numpy.linalg import inv, qr
X = np.random.randn(5, 5)
mat = X.T.dot(X)             # symmetric matrix
inv(mat)
mat.dot(inv(mat))            # should be (close to) the identity
The dot product of a matrix and its inverse should be the identity matrix, but here the output is:
array([[ 1.00000000e+00, 6.70961522e-16, 3.98202719e-16,
-2.04084178e-15, 3.07963387e-16],
[-6.46120445e-15, 1.00000000e+00, 4.44698794e-16,
1.40254635e-15, 2.71601492e-16],
[ 3.00736839e-15, -5.65091222e-16, 1.00000000e+00,
1.63129995e-16, -6.43576692e-17],
[ 1.01120865e-14, -1.23622826e-15, -6.99882344e-16,
1.00000000e+00, -1.13627444e-16],
[-6.31447442e-15, 2.46897480e-15, 9.95010178e-16,
-2.81959392e-15, 1.00000000e+00]])
Please explain.
That is due to rounding in the algorithm. However, I've found that if you diagonalize the matrix and compute the dot product with the inverse, you do end up with the identity matrix exactly. This might be due to a different algorithm being used to calculate the inverse of a diagonal matrix.
import numpy as np
m = np.random.randn(5, 5)
print(np.linalg.det(m))
e = np.linalg.eig(m)[0]                  # eigenvalues of m
mdiag = np.eye(5) * e                    # diagonal matrix with the eigenvalues on its diagonal
print(mdiag.dot(np.linalg.inv(mdiag)))
This method seems to always work for 3x3 matrices but sometimes fails for bigger matrices, since an imaginary part of the order of 1e-17 is left over.
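As a quick numerical check (a minimal sketch, not tied to the diagonalization trick above): the product is the identity up to floating-point round-off, which np.allclose confirms:
import numpy as np
from numpy.linalg import inv
X = np.random.randn(5, 5)
mat = X.T.dot(X)
prod = mat.dot(inv(mat))
print(np.allclose(prod, np.eye(5)))      # True: identity up to round-off
print(np.abs(prod - np.eye(5)).max())    # tiny, on the order of 1e-15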

Orthogonal Matrix filled according to function

I want to create an orthogonal matrix in Python to visualize the decline of a signal according to the distance from the source and the angle to the source.
For simplicity we can describe the decline:
NewValue = cos(angle) * (StartingValue - a * distance)
I found that Scipy.stats has ortho_group, which can be used to create random orthogonal matrices:
import numpy as np
from scipy.stats import ortho_group
x = ortho_group.rvs(3)
np.dot(x, x.T)   # should be (numerically) the identity
# returns:
array([[ 1.00000000e+00, 1.13231364e-17, -2.86852790e-16],
[ 1.13231364e-17, 1.00000000e+00, -1.46845020e-16],
[ -2.86852790e-16, -1.46845020e-16, 1.00000000e+00]])
import scipy.linalg
np.fabs(scipy.linalg.det(x))
# returns:
1.0
Since a random matrix isn't really useful, I keep wondering how I can create an orthogonal matrix whose values are set according to my function.
A second challenge I'm encountering is how to limit the matrix to a range of angles of 0-45°.
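For what it's worth, a minimal sketch of evaluating the stated decline formula on a (distance, angle) grid, with the angles restricted to 0-45°. StartingValue and a are placeholder values, and the result is a plain grid of signal values rather than an orthogonal matrix in the linear-algebra sense:
import numpy as np
starting_value = 100.0                           # placeholder parameters, not from the question
a = 0.5
distances = np.linspace(0, 50, 6)                # rows: distance from the source
angles = np.deg2rad(np.linspace(0, 45, 6))       # columns: angle, limited to 0-45 degrees
D, Theta = np.meshgrid(distances, angles, indexing="ij")
signal = np.cos(Theta) * (starting_value - a * D)
print(signal.shape)                              # (6, 6) grid of signal values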

Scipy eigsh returning wrong results for complex input matrix

I am trying to find the eigenvalues and eigenvectors of a complex matrix with scipy.sparse.linalg.eigsh using its shift-invert mode. With just real numbers in the matrix I get the same result as from the scipy.linalg.eigh solver, but when I add the imaginary parts the eigenvalues diverge. A tiny example:
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.linalg import eigsh
n = 10
X = np.random.random((n, n)) - 0.5 + (np.random.random((n, n)) - 0.5) * 1j
X = np.dot(X, X.T) # create a symmetric matrix
evals_all, evecs_all = eigh(X)
evals_small, evecs_small = eigsh(X, 3, sigma=0, which='LM')
print(sorted(evals_all, key=abs))
print(sorted(evals_small, key=abs))
The prints in this case are for example
[0.041577858515751132, -0.084104744918533481, -0.58668240775486691, 0.63845672501004724, -1.2311727737115068, 1.5193345703630159, -1.8652302423152105, 1.9970059660853923, -2.6414593461321654, 2.8624290667460293]
[-0.017278543470343462, -0.32684893256215408, 0.34551438015659475]
whereas in the real case, the first three eigenvalues are identical.
I am aware that I'm passing a dense matrix to the sparse solver, but this is just intended as an example.
I am probably missing something obvious somewhere, but I'd be happy about some hints where to look. Thank you!
scipy does not check whether your input is Hermitian.
Doing the check as proposed in the link:
if not np.allclose(X, np.asmatrix(X).H):
    raise ValueError('expected symmetric or Hermitian matrix')
outputs:
ValueError: expected symmetric or Hermitian matrix
I think this is also indicated by those negative eigenvalues you see (but complex-based math is really not my speciality...).
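A minimal sketch of the corresponding fix, assuming the intent was a Hermitian test matrix: build it with X @ X.conj().T instead of X @ X.T, and the two solvers agree again:
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.linalg import eigsh
n = 10
X = np.random.random((n, n)) - 0.5 + (np.random.random((n, n)) - 0.5) * 1j
H = np.dot(X, X.conj().T)                 # Hermitian (H == H.conj().T), unlike X @ X.T
evals_all, evecs_all = eigh(H)
evals_small, evecs_small = eigsh(H, 3, sigma=0, which='LM')
print(sorted(evals_all, key=abs)[:3])     # three eigenvalues closest to zero...
print(sorted(evals_small, key=abs))       # ...now match the shift-invert result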

Performing SVD by sklearn.decomposition.PCA, how can I get the U, S, V from this?

I perform SVD with sklearn.decomposition.PCA
From the equation of the SVD
A = U x S x V_t
V_t = transpose of V
(Sorry I can't paste the original equation)
If I want the matrix U, S, and V, how can I get it if I use the sklearn.decomposition.PCA ?
First of all, depending on the size of your matrix, the sklearn implementation of PCA will not always compute the full SVD decomposition. The following is taken from PCA's GitHub repository:
svd_solver : string {'auto', 'full', 'arpack', 'randomized'}
    auto :
        the solver is selected by a default policy based on `X.shape` and
        `n_components`: if the input data is larger than 500x500 and the
        number of components to extract is lower than 80% of the smallest
        dimension of the data, then the more efficient 'randomized'
        method is enabled. Otherwise the exact full SVD is computed and
        optionally truncated afterwards.
    full :
        run exact full SVD calling the standard LAPACK solver via
        `scipy.linalg.svd` and select the components by postprocessing
    arpack :
        run SVD truncated to n_components calling ARPACK solver via
        `scipy.sparse.linalg.svds`. It requires strictly
        0 < n_components < X.shape[1]
    randomized :
        run randomized SVD by the method of Halko et al.
In addition, it also performs some manipulations on the data (see here).
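So, if you need the exact decomposition regardless of the matrix size, you can force the LAPACK path explicitly (a small usage sketch):
import numpy as np
from sklearn.decomposition import PCA
X = np.random.randn(100, 10)
pca = PCA(n_components=3, svd_solver='full')   # exact full SVD via scipy.linalg.svd
pca.fit(X)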
Now, if you want to get U, S, V that are used in sklearn.decomposition.PCA you can use pca._fit(X).
For example:
import numpy as np
from sklearn.decomposition import PCA
X = np.array([[1, 2], [3, 5], [8, 10], [-1, 1], [5, 6]])
pca = PCA(n_components=2)
pca._fit(X)
returns
(array([[ -3.55731195e-01, 5.05615563e-01],
[ 2.88830295e-04, -3.68261259e-01],
[ 7.10884729e-01, -2.74708608e-01],
[ -5.68187889e-01, -4.43103380e-01],
[ 2.12745524e-01, 5.80457684e-01]]),
array([ 9.950385 , 0.76800941]),
array([[ 0.69988535, 0.71425521],
[ 0.71425521, -0.69988535]]))
However, if you just want the SVD decomposition of the original data, I would suggest using scipy.linalg.svd.
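A minimal sketch of that route (note that PCA centers the data first, so centering before the SVD makes the result comparable to the _fit output above, up to sign flips of the singular vectors):
import numpy as np
from scipy.linalg import svd
X = np.array([[1, 2], [3, 5], [8, 10], [-1, 1], [5, 6]])
Xc = X - X.mean(axis=0)                  # center the data, as PCA does internally
U, S, Vt = svd(Xc, full_matrices=False)
print(U.shape, S.shape, Vt.shape)        # (5, 2) (2,) (2, 2)
print(S)                                 # should match the singular values shown above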

Finding and utilizing eigenvalues and eigenvectors from PCA in scikit-learn

I have been using the PCA implementation in scikit-learn. However, I want to find the eigenvalues and eigenvectors that result after we fit the training dataset. There is no mention of either in the docs.
Secondly, can these eigenvalues and eigenvectors themselves be utilized as features for classification purposes?
I am assuming here that by eigenvectors you mean the eigenvectors of the covariance matrix.
Let's say you have n data points in a p-dimensional space, and X is a p x n matrix of your points; then the directions of the principal components are the eigenvectors of the covariance matrix XX^T. You can obtain the directions of these eigenvectors from sklearn by accessing the components_ attribute of the PCA object. This can be done as follows:
from sklearn.decomposition import PCA
import numpy as np
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca = PCA()
pca.fit(X)
print(pca.components_)
This gives an output like
[[ 0.83849224 0.54491354]
[ 0.54491354 -0.83849224]]
where every row is a principal component in the p-dimensional space (2 in this toy example). Each of these rows is an eigenvector of the centered covariance matrix XX^T.
As far as the eigenvalues go, there is no straightforward way to get them from the PCA object. The PCA object does have an attribute called explained_variance_ratio_ which gives the percentage of the variance explained by each component. These numbers are proportional to the eigenvalues. In the case of our toy example, we get the following if we print the explained_variance_ratio_ attribute:
[ 0.99244289 0.00755711]
This means that the ratio of the eigenvalue of the first principal component to the eigenvalue of the second principal component is 0.99244289:0.00755711.
If the basic mathematics of PCA is clear to you, then a better way to get the eigenvectors and eigenvalues is to use numpy.linalg.eig on the centered covariance matrix. If your data matrix is a p x n matrix X (p features, n points), then you can use the following code:
import numpy as np
# Center the data (X is p x n: features in rows, points in columns)
centered_matrix = X - X.mean(axis=1)[:, np.newaxis]
# Scatter/covariance matrix of the centered data (p x p)
cov = np.dot(centered_matrix, centered_matrix.T)
eigvals, eigvecs = np.linalg.eig(cov)
Coming to your second question: these eigenvalues and eigenvectors cannot themselves be used for classification. For classification you need features for each data point. The eigenvectors and eigenvalues you generate are derived from the entire covariance matrix XX^T. For dimensionality reduction you could use the projections of your original points (in the p-dimensional space) onto the principal components obtained from PCA. However, this is not always useful either, because PCA does not take the labels of your training data into account. I would recommend looking into LDA for supervised problems.
Hope that helps.
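As a side-by-side check (a small sketch, assuming sklearn >= 0.18, where explained_variance_ is available): the eigenvalues of the scatter matrix of the centered data, divided by n - 1, equal pca.explained_variance_:
import numpy as np
from sklearn.decomposition import PCA
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=float)
pca = PCA()
pca.fit(X)
Xc = X - X.mean(axis=0)                          # here the points are in rows (n x p)
eigvals = np.linalg.eigvalsh(Xc.T @ Xc)[::-1]    # scatter-matrix eigenvalues, descending
print(eigvals / (X.shape[0] - 1))                # sample-covariance eigenvalues
print(pca.explained_variance_)                   # same values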
The docs say that explained_variance_ will give you
"The amount of variance explained by each of the selected components. Equal to the n_components largest eigenvalues of the covariance matrix of X." (new in version 0.18).
Seems a little questionable since the first and second sentences do not seem to agree.
sklearn PCA documentation
