Clarification on computation of eigenvectors using NumPy - python

I am reviewing some linear algebra and working through a few implementations in Python. Right now I am on a problem about finding the eigenvectors of a matrix A.
A = [[ 1,  2, -2],
     [-2,  5, -2],
     [-6,  6, -3]]
When I solve this problem by hand, I get eigenvalues 3 and -3, with 3 having multiplicity 2. My eigenvectors are [[1/3], [1/3], [1]] for -3, and [[1], [1], [0]], [[-1], [0], [1]] for 3.
Trying my implementation in NumPy:
import numpy as np
A = [[ 1,  2, -2],
     [-2,  5, -2],
     [-6,  6, -3]]
np.linalg.eig(A)
which gives the output
(array([ 3., -3.,  3.]), array([[ 0.53452248, -0.30151134, -0.05332571],
       [-0.26726124, -0.30151134, -0.73225996],
       [-0.80178373, -0.90453403, -0.67893425]]))
The eigenvalues are what I would expect, but the eigenvectors are confusing to me. From what I've read, I understand they are the columns and they are normalized, i.e., norm(e1) = 1. Also, numerically they seem to be correct in the sense they satisfy the Ax = lambda*x equation.
Furthermore, when I do the implementation in SymPy, I get the expected result.
from sympy.matrices import Matrix, eye, zeros, ones, diag, GramSchmidt
A = Matrix([[ 1,  2, -2],
            [-2,  5, -2],
            [-6,  6, -3]])
A.eigenvects()
Output:
[(-3, 1, [Matrix([
  [1/3],
  [1/3],
  [  1]])]),
 (3, 2, [Matrix([
  [1],
  [1],
  [0]]),
  Matrix([
  [-1],
  [ 0],
  [ 1]])])]
Can anyone shed some light on the differences and on what is going on with NumPy? Is it solving numerically, so these aren't exactly the eigenvectors I computed by hand, but they still seem to be eigenvectors in the sense that they satisfy the conditions within a certain level of numerical precision? Thank you.

Here, the eigenvalue 3 has geometric multiplicity 2 (the rank of the matrix (A - 3 I) is 1) and there are infinitely many ways to choose the two basis vectors (eigenvectors) for this eigenspace.
In the case of a normal matrix A, numpy.linalg.eig will return an array whose columns are eigenvectors forming an orthonormal basis of the whole space, and in computing practice those eigenvectors are unique up to permutation and the orientation (sign) of each column. In the non-normal case (as here) there is no such unique choice, only a unique partition of the whole space into the (eigen)subspaces associated with each eigenvalue.
You can consider the output eigenvectors for the eigenvalue 3 (namely, the 0th and 2nd columns of the returned eigenvector array) an arbitrary basis of that eigenspace, chosen so that it satisfies the eigenvalue equation.
The implementation should be a wrapper around the underlying ?GEEV routine of the LAPACK API. Apart from the ordering of the eigenvalues (complex conjugate pairs appear consecutively), the only constraint on the output eigenvectors seems to be:
Each eigenvector is scaled so that the Euclidean norm is 1 and the largest component is real.
So there's still lots of arbitrariness and I wouldn't count on a particular output.
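To make that concrete, here is a small sketch that checks each returned column against A x = lambda x and verifies that the hand-computed vectors for lambda = 3 lie in the span of NumPy's columns 0 and 2:

import numpy as np

A = np.array([[ 1,  2, -2],
              [-2,  5, -2],
              [-6,  6, -3]])

w, v = np.linalg.eig(A)

# Every returned column satisfies A x = lambda x up to floating-point error.
for lam, col in zip(w, v.T):
    print(np.allclose(A @ col, lam * col))      # True, True, True

# Columns 0 and 2 both belong to lambda = 3; the hand-computed vectors
# [1, 1, 0] and [-1, 0, 1] are just a different basis of that 2-dimensional
# eigenspace, so they are linear combinations of those two columns.
basis = v[:, [0, 2]]
coeffs, *_ = np.linalg.lstsq(basis, np.array([1., 1., 0.]), rcond=None)
print(np.allclose(basis @ coeffs, [1, 1, 0]))   # True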

Related

diagonalize multiple vectors using numpy

Say I have a matrix of shape (2, 3). I need to diagonalize each 3-element vector into a matrix of shape (3, 3), for all 2 vectors at once; that is, I need to return a result of shape (2, 3, 3). How can I do that elegantly with NumPy?
given data = np.array([[1, 2, 3], [4, 5, 6]])
I want the result
[[[1, 0, 0],
  [0, 2, 0],
  [0, 0, 3]],
 [[4, 0, 0],
  [0, 5, 0],
  [0, 0, 6]]]
Thanks
tl;dr, my one-liner: mydiag=np.vectorize(np.diag, signature='(n)->(n,n)')
I suppose here that by "diagonalize" you mean "apply np.diag".
Which, as a teacher of linear algebra, tickles me a bit, since "diagonalizing" has a specific meaning that is not this one (it means computing eigenvectors and eigenvalues and, from there, writing M = PΛP⁻¹, which you cannot do from the inputs you have).
So, I suppose that if the input matrix is
[[1, 2, 3],
 [9, 8, 7]]
the output you want is
[[[1, 0, 0],
  [0, 2, 0],
  [0, 0, 3]],
 [[9, 0, 0],
  [0, 8, 0],
  [0, 0, 7]]]
If not, you can ignore this answer [Edit: in the meantime, you explained exactly that, so you may continue reading].
There are many ways to do that.
My one-liner would be
mydiag = np.vectorize(np.diag, signature='(n)->(n,n)')
which builds a new function that does what you want (it interprets the input as a list of 1D arrays, calls np.diag on each of them to get a 2D array, and puts the 2D arrays together into one numpy array, thus getting a 3D array).
Then you just call mydiag(M), as in the sketch below.
One advantage of vectorize is that it uses numpy broadcasting. In other words, the loops are executed in C, not in Python. In yet other words, it is faster. Well, it is supposed to be (on small matrices it is in fact slower than Michael's method in the comments; on large matrices it has exactly the same speed, which is frustrating, since the einsum doc itself specifies that it sacrifices broadcasting).
Plus, it is a one-liner, which has no interest other than bragging on forums. But well, here we are.
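A minimal usage sketch, with the data array from the question:

import numpy as np

data = np.array([[1, 2, 3],
                 [4, 5, 6]])

# build the vectorized version of np.diag once, then apply it to all rows at once
mydiag = np.vectorize(np.diag, signature='(n)->(n,n)')

out = mydiag(data)
print(out.shape)  # (2, 3, 3)
print(out)        # the stack of the two diagonal matrices from the question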
Here is one way with indexing:
# allocate zeros of shape data.shape + (n,), then fill each diagonal via fancy indexing
out = np.zeros(data.shape + (data.shape[-1],), dtype=data.dtype)
x, y = np.indices(data.shape).reshape(2, -1)
out[x, y, y] = data.ravel()
output:
array([[[1, 0, 0],
        [0, 2, 0],
        [0, 0, 3]],

       [[4, 0, 0],
        [0, 5, 0],
        [0, 0, 6]]])
We use array indexing to grab precisely those elements that lie on the diagonals. Note that array indexing broadcasts the index arrays against each other, so index1 holds the index of the array (which matrix) and index2 holds the index of the diagonal element within it.
index1 = np.arange(2)[:, None] # 2 is the number of arrays
index2 = np.arange(3)[None, :] # 3 is the square size of each matrix
result = np.zeros((2, 3, 3))
result[index1, index2, index2] = data
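For completeness, a quick sketch (reusing the data array from the question) checking that this broadcasting-index version produces the same stack of diagonal matrices as applying np.diag row by row:

import numpy as np

data = np.array([[1, 2, 3],
                 [4, 5, 6]])

index1 = np.arange(2)[:, None]   # which of the 2 matrices
index2 = np.arange(3)[None, :]   # position along each diagonal
result = np.zeros((2, 3, 3), dtype=data.dtype)
result[index1, index2, index2] = data

# compare against applying np.diag row by row
expected = np.array([np.diag(row) for row in data])
print(np.array_equal(result, expected))  # True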

What is the appropriate way to find linearly independent column vectors in a square matrix?

I want to find the linearly independent column vectors of a square matrix, but without using sympy. The reason I don't want to use sympy is that it is very slow for large matrices.
For example,
import numpy as np
import sympy

matrix = np.array([[2, 3, 5], [-1, -4, -10], [1, -2, -8]])
_, ind_col_idx = sympy.Matrix(matrix.T).rref()
print(ind_col_idx)             # (0, 1)
print(matrix[ind_col_idx, :])  # array([[  2,   3,   5], [ -1,  -4, -10]])
In similar Stack Overflow questions, the recommended method for finding linearly independent columns in a square matrix is to use the eigenvalues: if an eigenvalue is zero, its corresponding eigenvector is linearly dependent. But I don't understand the relationship between eigenvalues and linear independence.
Can you explain the relationship between eigenvalues and independence?
Also, determining linearly independent columns using the eigenvalues does not seem to be correct:
matrix = np.array([[2, 3, 5], [-1, -4, -10], [1, -2, -8]])
lambdas, V = np.linalg.eig(matrix)
# lambdas: [-1.12449980e+01,  1.24499800e+00, -1.54101807e-15]
ind_col_idx = np.where(lambdas == 0)[0]  # empty array
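For what it's worth, the empty result above is floating point at work: the third eigenvalue is mathematically zero (the matrix has rank 2), but numerically it comes out as about -1.5e-15, so the exact comparison == 0 never matches. A minimal sketch with a tolerance-based comparison (the atol value is an arbitrary choice for this example) does find it:

import numpy as np

matrix = np.array([[2, 3, 5], [-1, -4, -10], [1, -2, -8]])
lambdas, V = np.linalg.eig(matrix)

# compare against zero with a tolerance instead of exact equality
near_zero = np.where(np.isclose(lambdas, 0, atol=1e-10))[0]
print(near_zero)  # [2], the eigenvalue that is zero up to rounding error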

Linear Algebra in Python: Calculating Eigenvectors for 3x3 Matrix

I am using Python to derive the eigenvectors associated with the eigenvalues in a 3x3 matrix. My code is returning correct eigenvalues but wrong eigenvectors.
import numpy as np

A = np.array([[-2, -4, 2],
              [-2,  1, 2],
              [ 4,  2, 5]])
print (A)
print ('-------------------------------------------------------')
eigenvalues, eigenvectors = np.linalg.eig(A) # must use this line of code exactly
print(f'eigenvalues of matrix A are:{eigenvalues}')
print ('-------------------------------------------------------')
print(f'eigenvectors of matrix A are:{eigenvectors}')
For example, the eigenvector associated with the value 6 should be [1, 6, 16], not what the code outputs.
It is correct, and you can check it with the eigenvector/eigenvalue condition A·u = λ·u for the second eigenvalue and eigenvector, where u is the eigenvector and λ is its eigenvalue.
So we multiply the eigenvector v[:,1] by A and check that the result is the same as multiplying that eigenvector by its eigenvalue w[1].
>>> import numpy as np
>>> w, v = np.linalg.eig(A)
# w contains the eigenvalues.
# v contains the corresponding eigenvectors, one eigenvector per column.
# The eigenvectors are normalized so their Euclidean norms are 1.
>>> u = v[:,1]
>>> print(u)
[ 0.53452248 -0.80178373 -0.26726124]
>>> lam = w[1]
>>> lam
3.0
>>> print(np.dot(A, u))
[ 1.60356745 -2.40535118 -0.80178373]
>>> print(lam*u)
[ 1.60356745 -2.40535118 -0.80178373]
The key here is, as said above, the normalization process that the NumPy library applies when it displays the array of eigenvectors numerically. It is not programmed to display something like
# when λ = 6
x_dash = np.array([[ 1*x],
                   [ 6*x],
                   [16*x]])
# and then let you replace x with your lucky number to save yourself some effort in the math manipulation
So in your case you expect [1, 6, 16] as the eigenvector for the eigenvalue 6, and that is OK, don't panic. You just have to recognize that the whole vector was multiplied by a constant coming from the normalization, which here is 1/√(1² + 6² + 16²) = 1/√293 ≈ 0.05842062:
1 * 0.05842062 = 0.05842062
6 * 0.05842062 = 0.35052372
16 * 0.05842062 = 0.93472992
which is exactly what you get from the np.linalg library.
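As a quick sketch of that point (the position of the eigenvalue 6 in the returned array is not guaranteed, so it is located rather than hard-coded), rescaling NumPy's unit-norm eigenvector so that its first component is 1 recovers the hand-computed [1, 6, 16]:

import numpy as np

A = np.array([[-2, -4, 2],
              [-2,  1, 2],
              [ 4,  2, 5]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# locate the column belonging to the eigenvalue 6
idx = np.argmin(np.abs(eigenvalues - 6))
v6 = eigenvectors[:, idx]

print(v6 / v6[0])  # approximately [ 1.  6. 16.]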

NumPy: why does np.linalg.eig and np.linalg.svd give different V values of SVD?

I am learning SVD by following this MIT course.
the Matrix is constructed as
C = np.matrix([[5,5],[-1,7]])
C
matrix([[ 5,  5],
        [-1,  7]])
The lecturer gives a V that is close to what I get from the eigenvectors of C.T*C:
w, v = np.linalg.eig(C.T*C)
v
matrix([[-0.9486833 , -0.31622777],
        [ 0.31622777, -0.9486833 ]])
but np.linalg.svd(C) gives a different output
u, s, vh = np.linalg.svd(C)
vh
matrix([[ 0.31622777,  0.9486833 ],
        [ 0.9486833 , -0.31622777]])
It seems that vh exchanges the rows of V; is that acceptable?
Did I do and understand this correctly?
For linalg.eig your Eigenvalues are stored in w. These are:
>>> w
array([20., 80.])
For your singular value decomposition you can get your Eigenvalues by squaring your singular values (C is invertible so everything is easy here):
>>> s**2
array([80., 20.])
As you can see their order is flipped.
From the linalg.eig documentation:
The eigenvalues are not necessarily ordered
From the linalg.svd documentation:
Vector(s) with the singular values, within each vector sorted in descending order. ...
In general, routines that give you Eigenvalues and Eigenvectors do not necessarily "sort" them the way you might want, so it is always important to make sure you have the Eigenvector for the Eigenvalue you want. If you need them sorted (e.g. by Eigenvalue magnitude) you can always do this yourself (see here: sort eigenvalues and associated eigenvectors after using numpy.linalg.eig in python).
Finally note that the rows in vh contain the Eigenvectors, whereas in v it's the columns.
So that means that e.g.:
>>> v[:,0].flatten()
matrix([[-0.9486833 ,  0.31622777]])
>>> vh[1].flatten()
matrix([[ 0.9486833 , -0.31622777]])
both give you the Eigenvector for the Eigenvalue 20 (they differ only by sign, which does not matter for an Eigenvector).
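A small sketch of that sorting step, assuming you only care about matching the two factorizations up to sign (and using plain arrays rather than np.matrix):

import numpy as np

C = np.array([[ 5, 5],
              [-1, 7]])

w, v = np.linalg.eig(C.T @ C)
u, s, vh = np.linalg.svd(C)

# reorder the eig output by descending eigenvalue so it lines up with the SVD
order = np.argsort(w)[::-1]
w_sorted = w[order]       # [80., 20.]
v_sorted = v[:, order]    # columns reordered to match the rows of vh (up to sign)

print(np.allclose(w_sorted, s**2))                  # True
print(np.allclose(np.abs(v_sorted.T), np.abs(vh)))  # True; only signs may differ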

Condition number of a matrix using numpy

[python 2.7 and numpy v1.11.1] I am looking at matrix condition numbers and am trying to compute the condition number for a matrix without using the function np.linalg.cond().
Based on numpy's documentation, the definition of a matrix's condition number is, "the norm of x times the norm of the inverse of x."
||X|| * ||X^-1||
for the matrix
a = np.matrix([[1, 1, 1],
               [2, 2, 1],
               [3, 3, 0]])
print np.linalg.cond(a)
1.84814479698e+16
print np.linalg.norm(a) * np.linalg.norm(np.linalg.inv(a))
2.027453660713377e+17
Where is the mistake in my computation?
Thanks!
You are computing the condition number using the Frobenius-norm definition. That norm is an optional parameter of np.linalg.cond:
print(np.linalg.norm(a)*np.linalg.norm(np.linalg.inv(a)))
print(np.linalg.cond(a, p='fro'))
Produces
2.02745366071e+17
2.02745366071e+17
np.linalg.norm uses the Frobenius norm for matrices by default, whereas np.linalg.cond uses the 2-norm (the In/Out values below come from a different, non-singular example matrix, not the singular a above):
In [347]: np.linalg.cond(a)
Out[347]: 38.198730775206172
In [348]: np.linalg.norm(a,2)*np.linalg.norm(np.linalg.inv(a),2)
Out[348]: 38.198730775206243
In [349]: np.linalg.norm(a)*np.linalg.norm(np.linalg.inv(a))
Out[349]: 39.29814570824248
NumPy's cond() is currently buggy. There will come a time when we fix it, but for now, if you are doing this for linear-equation solutions you can use SciPy's linalg.solve, which will either raise an error for exact singularity or emit a warning if the reciprocal condition number is below a threshold, and do nothing if the array is invertible.
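As a rough sketch of that suggestion, for the exactly singular matrix from the question SciPy's solver raises an error instead of silently returning a huge-norm "solution" (for merely ill-conditioned inputs it emits a LinAlgWarning instead):

import numpy as np
from scipy import linalg

a = np.array([[1., 1., 1.],
              [2., 2., 1.],
              [3., 3., 0.]])
b = np.ones(3)

# the first two columns of a are identical, so a is exactly singular
try:
    linalg.solve(a, b)
except linalg.LinAlgError as err:
    print("singular matrix:", err)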
