Linear Algebra in Python: Calculating Eigenvectors for a 3x3 Matrix

I am using Python to derive the eigenvectors associated with the eigenvalues of a 3x3 matrix. My code returns the correct eigenvalues but the wrong eigenvectors.
import numpy as np

A = np.array([[-2, -4, 2],
              [-2, 1, 2],
              [4, 2, 5]])
print(A)
print('-------------------------------------------------------')
eigenvalues, eigenvectors = np.linalg.eig(A)  # must use this line of code exactly
print(f'eigenvalues of matrix A are: {eigenvalues}')
print('-------------------------------------------------------')
print(f'eigenvectors of matrix A are: {eigenvectors}')
For example, the eigenvector associated with the eigenvalue 6 should be [1, 6, 16], not what the code outputs.

The output is correct, and you can verify it with the eigenvector/eigenvalue condition for the second eigenvalue and eigenvector:
A·u = λ·u
where u is the eigenvector and λ (lambda) is its eigenvalue.
So we multiply the eigenvector v[:,1] by A and check that the result is the same as multiplying that eigenvector by its eigenvalue w[1].
>>> import numpy as np
>>> w, v = np.linalg.eig(A)
# w contains the eigenvalues.
# v contains the corresponding eigenvectors, one eigenvector per column.
# The eigenvectors are normalized so their Euclidean norms are 1.
>>> u = v[:, 1]
>>> print(u)
[ 0.53452248 -0.80178373 -0.26726124]
>>> lam = w[1]
>>> lam
3.0
>>> print(np.dot(A,u))
[ 1.60356745 -2.40535118 -0.80178373]
>>> print(lam*u)
[ 1.60356745 -2.40535118 -0.80178373]

The key point, as noted above, is the normalization that NumPy applies when it returns the eigenvectors: the library always reports unit-norm columns, so it is not going to display something like
# when λ = 6
x_dash = np.array([[ 1*x],
                   [ 6*x],
                   [16*x]])
# for an arbitrary nonzero x of your choosing
So in your case you expect [1, 6, 16] as the eigenvector for the eigenvalue 6, and that is fine; don't panic. The whole vector has simply been multiplied by a constant that comes from the normalization, which here happens to be 0.05842062:
1  * 0.05842062 = 0.05842062
6  * 0.05842062 = 0.35052372
16 * 0.05842062 = 0.93472992
and that is exactly the column you get from np.linalg.eig.
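If you want the integer form back, you can undo the normalization yourself; a minimal sketch (it divides the returned column by its first component, which assumes that component is nonzero):
i = np.argmin(np.abs(eigenvalues - 6))   # column whose eigenvalue is (numerically) 6
u = eigenvectors[:, i]
print(u / u[0])                          # ~ [ 1.  6. 16.]
print(np.linalg.norm(u))                 # ~ 1.0 -- NumPy returns unit-norm columns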

Related

How to generate multivariate Normal distribution from a standard normal value?

I need to generate a multivariate normal distribution using only a generator of standard normal values, without the SciPy or NumPy generators.
I need to generate the following
This is my attempt:
V = np.array([[1, 2],
              [2, 5]])
B = np.linalg.cholesky(V)
A = np.array([1, 2])
# norm() returns one number from the standard normal distribution
n1 = np.array([norm() for _ in range(40)])
n2 = np.array([norm() for _ in range(40)])
np.array([n1, n2]).T.dot(B) + A
Here, I used Cholesky decomposition as in this post
However, I reckon this is not correct.
Your code is almost correct, but if you apply numpy's cov function you can check that your numbers don't have the desired covariance property:
res = np.array([n1, n2]).T.dot(B) + A
np.cov(res.T).round()
# returns ~
# array([[5., 2.],
#        [2., 1.]])
Note that the (1,1) and (2,2) elements are swapped compared to the desired covariance matrix.
To leverage numpy's CPU-vectorized matrix multiplication, you use numpy's dot function. You properly arranged the N 2D input vectors Z into an Nx2 array, np.array([n1, n2]).T. But, as in the Cholesky decomposition and variance question you linked, the Z values have to be multiplied by B from the left, and this is where the problem lies: np.array([n1, n2]).T.dot(B) multiplies the (array of) Z by B from the right, not from the left. To compute the left product by B row by row, you need to use dot(B.T), since (B·z)ᵀ = zᵀ·Bᵀ.
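A quick sanity check of that identity, reusing the names from the question:
Z = np.array([n1, n2])                          # shape (2, N); columns are the z samples
print(np.allclose(B.dot(Z).T, Z.T.dot(B.T)))    # True: right-multiplying by B.T equals left-multiplying by B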
The example below also shows that the covariance matrix then has the right form:
import random
import numpy as np

random.seed(0)
N = 10000
V = np.array([[1, 2],
              [2, 5]])
B = np.linalg.cholesky(V)
A = np.array([1, 2])
# random.gauss(0, 1) returns one sample from the standard normal distribution
n1 = np.array([random.gauss(0, 1) for _ in range(N)])
n2 = np.array([random.gauss(0, 1) for _ in range(N)])
res = np.array([n1, n2]).T.dot(B.T) + A
np.cov(res.T).round()
# returns ~ array([[1., 2.],
#                  [2., 5.]])
In the figure below, the random points are plotted together with the eigenvectors of the covariance matrix, each scaled by the square root of its eigenvalue, as on Wikipedia.
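The figure itself is not included in this text; a rough sketch of how such a plot could be produced (matplotlib is assumed here and is not part of the original answer):
import matplotlib.pyplot as plt

evals, evecs = np.linalg.eig(V)              # eigen-decomposition of the target covariance
plt.scatter(res[:, 0], res[:, 1], s=2, alpha=0.3)
for lam_i, vec in zip(evals, evecs.T):
    # arrow from the mean A along each eigenvector, scaled by sqrt(eigenvalue)
    plt.quiver(*A, *(np.sqrt(lam_i) * vec),
               angles='xy', scale_units='xy', scale=1, color='red')
plt.axis('equal')
plt.show()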

How to find the two largest eigenvectors in Python?

I can find eigenvectors of a matrix in Python as follows:
from numpy import linalg as LA
w, v = LA.eig(np.diag((1, 2, 3)))
But how do I find the two largest eigenvectors for a larger matrix of size 100x200?
Eigenvalue decomposition is not defined for a non-square matrix. The closest operation is the singular value decomposition (SVD). SVD and EIG are related for a non-square matrix A in that the singular values of A are the square roots of the eigenvalues of the transpose of the matrix times the matrix itself:
B = A.T @ A
singular_values(A)**2 ~= eigenvalues(B)
So one potential answer to your question is:
import numpy as np
A = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]])
B = np.matmul(np.transpose(A), A)
u,s,v = np.linalg.svd(A)
V, D = np.linalg.eig(B)
print(f'Compare s*s to V {s*s - V}')
While s does not directly contain the eigenvalues of A, s*s gives the eigenvalues of B = A.T @ A (note that np.linalg.eig does not sort its output, so you may need to reorder before comparing).
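If the goal is just the two dominant directions of the 100x200 matrix, here is a hedged sketch (assuming "largest eigenvectors" means the eigenvectors of A.T @ A with the largest eigenvalues, i.e. the leading right-singular vectors):
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 200))

# np.linalg.svd returns the singular values sorted in descending order
u, s, vh = np.linalg.svd(A, full_matrices=False)

top2_vectors = vh[:2].T   # shape (200, 2): eigenvectors of A.T @ A with the largest eigenvalues
top2_values = s[:2] ** 2  # the corresponding eigenvalues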

Eigenvector of matrix computed by Python does not appear to be an eigenvector

Apologies in advance, Python is not my strong suit.
The eigenvector corresponding to the real eigenvalue of this matrix (as computed by Python) does not appear to be an eigenvector, whereas the eigenvector as computed by Wolfram Alpha appears to work. (My colleague confirmed that the same pathology appears to be the case when performing the calculation in R, though I do not have the transcript.) Code snippet:
>>> import numpy as np
>>> in_matrix = np.array([[0.904, 0.012, 0.427], [-0.0032, 0.99975, -0.02207], [-0.4271, 0.0186, 0.904]])
>>> evals, evecs = np.linalg.eig(in_matrix)
>>> print evals
[ 0.90388357+0.42760138j 0.90388357-0.42760138j 0.99998285+0.j]
>>> print evecs[2]
[ 0.70696571+0.j 0.70696571-0.j 0.01741090+0.j]
>>> print in_matrix.dot(evecs[2])
[ 0.65501505+0.j 0.70414242+0.j -0.27305604+0.j]
Note that multiplying evecs[2] by the in_matrix variable produces a new vector which is NOT a multiple of evecs[2] (the eigenvalue is ~1, so the product should equal the vector).
Plugging the same matrix into Wolfram Alpha produces the eigenvector (-0.0474067, -0.998724, 0.0174109) for the real eigenvalue. Multiplying in_matrix by this eigenvector does produce the same vector, as expected.
>>> wolfram_vec = np.array([-0.0474067, -0.998724, 0.0174109])
>>> print in_matrix.dot(wolfram_vec)
[-0.04740589 -0.99870688 0.01741059]
The Wolfram (correct) eigenvector corresponds to a negative Y axis, whereas numpy gives approximately (1/sqrt(2), 1/sqrt(2), 0).
Bottom line: eigenvector from numpy is not an eigenvector, but Wolfram Alpha eigenvector is correct (and looks it). Can anyone shed any light onto this?
This has been tested on a standard installation of Python 2.7.10 on Mac OS X and on a customised installation of Python 2.7.8 on Centos 6.8.
Quoting the docs:
v : (..., M, M) array
The normalized (unit "length") eigenvectors, such that the
column ``v[:,i]`` is the eigenvector corresponding to the
eigenvalue ``w[i]``.
You need to extract the columns, evecs[:, i], not the rows, evecs[i].
In [30]: evecs[:, 2]
Out[30]: array([-0.04740673+0.j, -0.99872392+0.j, 0.01741090+0.j])
which you may recognize the same as the Wolfram vector. All three eigenvectors are correct:
In [31]: in_matrix.dot(evecs[:, 0]) - evals[0] * evecs[:, 0]
Out[31]:
array([  5.55111512e-17 +1.11022302e-16j,
        -7.11236625e-17 +1.38777878e-17j,
         2.22044605e-16 -1.66533454e-16j])

In [32]: in_matrix.dot(evecs[:, 1]) - evals[1] * evecs[:, 1]
Out[32]:
array([  5.55111512e-17 -1.11022302e-16j,
        -7.11236625e-17 -1.38777878e-17j,
         2.22044605e-16 +1.66533454e-16j])

In [33]: in_matrix.dot(evecs[:, 2]) - evals[2] * evecs[:, 2]
Out[33]: array([  3.46944695e-17+0.j,   4.44089210e-16+0.j,   3.15719673e-16+0.j])
where each result is [0, 0, 0] to within the expected precision.
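As an extra check (not in the original answer), you can verify all three eigenpairs in one line, since the columns of evecs satisfy A·v = λ·v:
print(np.allclose(in_matrix.dot(evecs), evecs * evals))   # True: each column satisfies A v = lambda v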

Python, simultaneous pseudo-inversion of many 3x3, singular, symmetric, matrices

I have a 3D image with dimensions rows x cols x deps. For every voxel in the image, I have computed a 3x3 real symmetric matrix. They are stored in the array D, which therefore has shape (rows, cols, deps, 6).
D stores the 6 unique elements of the 3x3 symmetric matrix for every voxel in my image. I need to find the Moore-Penrose pseudo inverse of all row*cols*deps matrices simultaneously/in vectorized code (looping through every image voxel and inverting is far too slow in Python).
Some of these 3x3 symmetric matrices are non-singular, and I can find their inverses, in vectorized code, using the analytical formula for the true inverse of a non-singular 3x3 symmetric matrix, and I've done that.
However, for those matrices that ARE singular (and there are sure to be some) I need the Moore-Penrose pseudo inverse. I could derive an analytical formula for the MP of a real, singular, symmetric 3x3 matrix, but it's a really nasty/lengthy formula, and would therefore involve a VERY large number of (element-wise) matrix arithmetic and quite a bit of confusing looking code.
Hence, I would like to know if there is a way to simultaneously find the MP pseudo inverse for all these matrices at once numerically. Is there a way to do this?
Gratefully,
GF
NumPy 1.8 included linear algebra gufuncs, which do exactly what you are after. While np.linalg.pinv is not gufunc-ed, np.linalg.svd is, and behind the scenes that is the function that gets called. So you can define your own gu_pinv function, based on the source code of the original function, as follows:
def gu_pinv(a, rcond=1e-15):
    a = np.asarray(a)
    swap = np.arange(a.ndim)
    swap[[-2, -1]] = swap[[-1, -2]]
    u, s, v = np.linalg.svd(a)
    cutoff = np.maximum.reduce(s, axis=-1, keepdims=True) * rcond
    mask = s > cutoff
    s[mask] = 1. / s[mask]
    s[~mask] = 0
    return np.einsum('...uv,...vw->...uw',
                     np.transpose(v, swap) * s[..., None, :],
                     np.transpose(u, swap))
And you can now do things like:
a = np.random.rand(50, 40, 30, 6)
b = np.empty(a.shape[:-1] + (3, 3), dtype=a.dtype)
# Expand the unique items into a full symmetrical matrix
b[..., 0, :] = a[..., :3]
b[..., 1:, 0] = a[..., 1:3]
b[..., 1, 1:] = a[..., 3:5]
b[..., 2, 1:] = a[..., 4:]
# make matrix at [1, 2, 3] singular
b[1, 2, 3, 2] = b[1, 2, 3, 0] + b[1, 2, 3, 1]
# Find all the pseudo-inverses
pi = gu_pinv(b)
And of course the results are correct, both for singular and non-singular matrices:
>>> np.allclose(pi[0, 0, 0], np.linalg.pinv(b[0, 0, 0]))
True
>>> np.allclose(pi[1, 2, 3], np.linalg.pinv(b[1, 2, 3]))
True
And for this example, with 50 * 40 * 30 = 60,000 pseudo-inverses calculated:
In [2]: %timeit pi = gu_pinv(b)
1 loops, best of 3: 422 ms per loop
Which is really not that bad, although it is noticeably (4x) slower than simply calling np.linalg.inv, but this of course fails to properly handle the singular arrays:
In [8]: %timeit np.linalg.inv(b)
10 loops, best of 3: 98.8 ms per loop
EDIT: See @Jaime's answer. Only the discussion in the comments to this answer is useful now, and only for the specific problem at hand.
You can do this matrix by matrix, using scipy, that provides pinv (link) to calculate the Moore-Penrose pseudo inverse. An example follows:
from scipy.linalg import det, eig, pinv
from numpy.random import randint

# generate a random singular matrix M first
while True:
    M = randint(0, 10, 9).reshape(3, 3)
    if det(M) == 0:
        break
M = M.astype(float)
# this is the method you need
MPpseudoinverse = pinv(M)
This does not exploit the fact that the matrix is symmetric, though. You may also want to try the version of pinv exposed by numpy, which is supposedly faster, and different. See this post.
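As a side note (an assumption about newer versions, not part of the original answers): in a reasonably recent NumPy (1.17+), np.linalg.pinv itself broadcasts over stacked matrices and has a hermitian flag that exploits symmetry, so a sketch like the following may replace the custom wrapper:
import numpy as np

stack = np.random.rand(50, 40, 30, 3, 3)
stack = (stack + np.swapaxes(stack, -1, -2)) / 2   # make each 3x3 block symmetric
pinvs = np.linalg.pinv(stack, hermitian=True)      # pseudo-inverse of every block at once
print(pinvs.shape)                                 # (50, 40, 30, 3, 3)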

Inverse of a matrix using numpy

I'd like to use numpy to calculate the inverse, but I'm getting an error:
'numpy.ndarray' object has no attribute 'I'
To calculate inverse of a matrix in numpy, say matrix M, it should be simply:
print M.I
Here's the code:
x = numpy.empty((3,3), dtype=int)
for comb in combinations_with_replacement(range(10), 9):
    x.flat[:] = comb
    print x.I
I'm presuming this error occurs because x is now flat, so the 'I' attribute is not available. Is there a workaround for this?
My goal is to print the INVERSE MATRIX of every possible numerical matrix combination.
The I attribute only exists on matrix objects, not ndarrays. You can use numpy.linalg.inv to invert arrays:
inverse = numpy.linalg.inv(x)
Note that the way you're generating matrices, not all of them will be invertible. You will either need to change the way you're generating matrices, or skip the ones that aren't invertible.
try:
    inverse = numpy.linalg.inv(x)
except numpy.linalg.LinAlgError:
    # Not invertible. Skip this one.
    pass
else:
    # Continue with what you were doing.
    pass
Also, if you want to go through all 3x3 matrices with elements drawn from [0, 10), you want the following:
for comb in itertools.product(range(10), repeat=9):
rather than combinations_with_replacement, or you'll skip matrices like
numpy.array([[0, 1, 0],
             [0, 0, 0],
             [0, 0, 0]])
Another way to do this is to use the numpy matrix class (rather than a numpy array) and the I attribute. For example:
>>> m = np.matrix([[2,3],[4,5]])
>>> m.I
matrix([[-2.5,  1.5],
        [ 2. , -1. ]])
Inverse of a matrix using python and numpy:
>>> import numpy as np
>>> b = np.array([[2,3],[4,5]])
>>> np.linalg.inv(b)
array([[-2.5,  1.5],
       [ 2. , -1. ]])
Not all matrices can be inverted. For example, singular matrices are not invertible:
>>> import numpy as np
>>> b = np.array([[2,3],[4,6]])
>>> np.linalg.inv(b)
LinAlgError: Singular matrix
Solution to the singular matrix problem:
try/except the singular-matrix exception and keep going until you find a transform that meets your prior criteria AND is also invertible.
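A minimal sketch of that loop (generate_candidate is a hypothetical stand-in for however you produce candidate transforms):
import numpy as np

while True:
    M = generate_candidate()          # hypothetical: yields the next candidate transform
    try:
        M_inv = np.linalg.inv(M)
        break                         # found an invertible transform
    except np.linalg.LinAlgError:
        continue                      # singular; try the next candidate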
What about inv?
e.g.:
from numpy.linalg import inv
my_inverse_array = inv(my_array)
IDK if anyone already mentioned this, but I want to point out that matrix_object.I and np.linalg.inv(matrix_object) don't give an exact inverse. This has given me a lot of grief. It's true that for a matrix object m, np.dot(m, m.I) comes out as an identity matrix, but np.dot(m.I, m) is not exactly the identity; the same goes for np.linalg.inv(m). (In exact arithmetic both products would be the identity; the discrepancy is floating-point round-off.)
Be careful with that.
