Matlab VS Python - eig(A,B) VS sc.linalg.eig(A,B)

I have the following matrices sigma and sigmad:
sigma:
1.9958 0.7250
0.7250 1.3167
sigmad:
4.8889 1.1944
1.1944 4.2361
If I try to solve the generalized eigenvalue problem in python I obtain:
d,V = sc.linalg.eig(matrix(sigmad),matrix(sigma))
V:
-1 -0.5614
-0.4352 1
If I try to solve the generalized eigenvalue problem in matlab I obtain:
[V,d]=eig(sigmad,sigma)
V:
-0.5897 -0.5278
-0.2564 0.9400
But the d's do coincide.

Any (nonzero) scalar multiple of an eigenvector will also be an eigenvector; only the direction is meaningful, not the overall normalization. Different routines use different conventions -- often you'll see the magnitude set to 1, or the maximum value set to 1 or -1 -- and some routines don't even bother being internally consistent for performance reasons. Your two different results are multiples of each other:
In [227]: sc = array([[-1., -0.5614], [-0.4352, 1. ]])
In [228]: ml = array([[-.5897, -0.5278], [-0.2564, 0.94]])
In [229]: sc/ml
Out[229]:
array([[ 1.69577751, 1.06366048],
[ 1.69734789, 1.06382979]])
and so they're actually the same eigenvectors. Think of the matrix as an operator which changes a vector: the eigenvectors are the special directions where a vector pointing that way won't be twisted by the matrix, and the eigenvalues are the factors measuring how much the matrix expands or contracts the vector.
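If you want to compare the two results directly, one option is to normalize each eigenvector before comparing. Here is a minimal sketch using the matrices from the question (scipy.linalg.eig returns the eigenvectors as the columns of V):
import numpy as np
from scipy import linalg

sigma = np.array([[1.9958, 0.7250],
                  [0.7250, 1.3167]])
sigmad = np.array([[4.8889, 1.1944],
                   [1.1944, 4.2361]])

# Generalized eigenvalue problem: sigmad * v = d * sigma * v
d, V = linalg.eig(sigmad, sigma)

# Normalize each eigenvector (column) to unit Euclidean norm, so results
# from different libraries agree up to a sign flip.
V_unit = V / np.linalg.norm(V, axis=0)
print(d.real)
print(V_unit)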

Related

numpy and solving symmetric systems

Suppose I have a symmetric matrix A and a vector b and want to find A^(-1) b. Now, this is well-known to be doable in time O(N^2) (where N is the dimension of the vector/matrix), and I believe that in MATLAB this can be done as A\b. But all I can find in python is numpy.linalg.solve() which will do Gaussian elimination, which is O(N^3). I must not be looking in the right place...
scipy.linalg.solve has an argument to make it assume a symmetric matrix:
x = scipy.linalg.solve(A, b, assume_a="sym")
If you know your matrix is not just symmetric but positive definite, you can pass the stronger assumption assume_a="pos" instead.
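A minimal sketch (made-up 2x2 system) showing both assumptions:
import numpy as np
from scipy import linalg

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric and, here, also positive definite
b = np.array([1.0, 2.0])

x_sym = linalg.solve(A, b, assume_a="sym")   # symmetric solver
x_pos = linalg.solve(A, b, assume_a="pos")   # Cholesky-based solver, requires positive definiteness
print(x_sym, x_pos)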

LinAlgError: Last 2 dimensions of the array must be square

I need to solve a set of simultaneous equations of the form Ax = B for x. I've used the numpy.linalg.solve function, inputting A and B, but I get the error 'LinAlgError: Last 2 dimensions of the array must be square'. How do I fix this?
Here's my code:
A = matrix([[v1x, v2x], [v1y, v2y], [v1z, v2z]])
print A
B = [(p2x-p1x-nmag[0]), (p2y-p1y-nmag[1]), (p2z-p1z-nmag[2])]
print B
x = numpy.linalg.solve(A, B)
The values of the matrix/vector are calculated earlier in the code and this works fine, but the values are:
A =
(-0.56666301, -0.52472909)
(0.44034147, 0.46768087)
(0.69641397, 0.71129036)
B =
(-0.38038602567630364, -24.092279373295057, 0.0)
x should have the form (x1,x2,0)
In case you still haven't found an answer, or in case someone in the future has this question.
To solve Ax=b:
numpy.linalg.solve uses LAPACK gesv. As mentioned in the documentation of LAPACK, gesv requires A to be square:
LA_GESV computes the solution to a real or complex linear system of equations AX = B, where A is a square matrix and X and B are rectangular matrices or vectors. Gaussian elimination with row interchanges is used to factor A as A = P*L*U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. The factored form of A is then used to solve the above system.
If the A matrix is not square, it means that you have either more variables than equations or the other way around. In these situations there can be no solution or infinitely many solutions. What determines the solution space is the rank of the matrix compared to the number of columns. Therefore, you first have to check the rank of the matrix.
That being said, you can use another method to solve your system of linear equations. I suggest having a look at factorization methods like LU, QR or even SVD. In LAPACK you can use getrs; in Python you can do different things:
first do a factorization such as QR and then feed the resulting matrices to a method like scipy.linalg.solve_triangular
solve the least-squares problem using numpy.linalg.lstsq
Also have a look here where a simple example is formulated and solved.
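For example, a minimal least-squares sketch using the 3x2 system from the question:
import numpy as np

A = np.array([[-0.56666301, -0.52472909],
              [ 0.44034147,  0.46768087],
              [ 0.69641397,  0.71129036]])
B = np.array([-0.38038602567630364, -24.092279373295057, 0.0])

# Least-squares solution for a non-square A: finds the x minimizing ||Ax - B||
x, residuals, rank, sv = np.linalg.lstsq(A, B, rcond=None)
print(x)   # the two unknowns; the question's third component is fixed at 0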
A square matrix is a matrix with the same number of rows and columns; the matrix you have here is 3 by 2. Padding it with a column of zeros would make it square but also singular, so numpy.linalg.solve would still fail; for a non-square system, use a least-squares solver such as numpy.linalg.lstsq instead.

numpy.linalg.inv returns inverse for a singular matrix

The matrix below is singular, and AFAIK attempting to invert it should result in
numpy.linalg.linalg.LinAlgError: Singular matrix
but instead, I do get some output matrix. Note that the output matrix is a nonsensical result, because it has a row of 0's (which is impossible, since the inverse of a matrix must itself be invertible)!
Am I missing something here related to floating point precision, or the computation of a pseudoinverse as opposed to a true inverse?
$ np.__version__
'1.13.1'
$ np.linalg.inv(np.array([[2,7,7],[7,7,7],[8,7,7]]))
array([[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 3.43131400e+15, -2.05878840e+16, 1.71565700e+16],
[ -3.43131400e+15, 2.05878840e+16, -1.71565700e+16]])
Behind the scenes, NumPy and SciPy (and many other packages) fall back to LAPACK implementations (or C translations) of linear equation solvers (in this case GESV).
Since GESV first performs an LU decomposition and then checks the diagonal of the U matrix for exact zeros, it is very difficult to hit perfect zeros in the decomposition. That's why you don't get a singular matrix error.
Apart from that, you should never invert a matrix if you are just going to multiply it with other matrices; solve AX = B instead.
In SciPy since version 0.19, scipy.linalg.solve uses the "expert" driver GESVX (instead of GESV), which also reports back the reciprocal condition number, and a warning is emitted. This is similar to Matlab's behavior in case the singularity is missed.
In [7]: sp.linalg.solve(np.array([[2,7,7],[7,7,7],[8,7,7]]), np.eye(3))
...\lib\site-packages\scipy\linalg\basic.py:223: RuntimeWarning: scipy.linalg.solve
Ill-conditioned matrix detected. Result is not guaranteed to be accurate.
Reciprocal condition number: 1.1564823173178713e-18
' condition number: {}'.format(rcond), RuntimeWarning)
Out[7]:
array([[ 0.00000000e+00, -1.00000000e+00, 1.50000000e+00],
[ 3.43131400e+15, -2.05878840e+16, 1.71565700e+16],
[ -3.43131400e+15, 2.05878840e+16, -1.71565700e+16]])
One note from the numpy team:
The de-facto convention in the field is that errors in matrix
inversion are mostly silently ignored --- it is assumed that the user
knows if this is something that needs to be checked for (implying that
a more controlled approximate inversion method needs to be used ---
the regularization is problem-dependent).
https://github.com/numpy/numpy/issues/2074
Seems to give an error on 1.13.0 however
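If you need to check for (near-)singularity yourself, one option (a sketch, not the only possibility) is to look at the rank or condition number before trusting inv:
import numpy as np

A = np.array([[2., 7., 7.],
              [7., 7., 7.],
              [8., 7., 7.]])

print(np.linalg.matrix_rank(A))   # 2 here, since two columns coincide, so A is singular
print(np.linalg.cond(A))          # enormous (or inf) for a (near-)singular matrix

# Hypothetical guard: fall back to the pseudoinverse when A is ill-conditioned
if np.linalg.cond(A) > 1 / np.finfo(A.dtype).eps:
    A_inv = np.linalg.pinv(A)
else:
    A_inv = np.linalg.inv(A)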

Analytical solution for Linear Regression using Python vs. Julia

Using an example from Andrew Ng's class (finding parameters for linear regression using the normal equation):
With Python:
X = np.array([[1, 2104, 5, 1, 45], [1, 1416, 3, 2, 40], [1, 1534, 3, 2, 30], [1, 852, 2, 1, 36]])
y = np.array([[460], [232], [315], [178]])
θ = ((np.linalg.inv(X.T.dot(X))).dot(X.T)).dot(y)
print(θ)
Result:
[[ 7.49398438e+02]
[ 1.65405273e-01]
[ -4.68750000e+00]
[ -4.79453125e+01]
[ -5.34570312e+00]]
With Julia:
X = [1 2104 5 1 45; 1 1416 3 2 40; 1 1534 3 2 30; 1 852 2 1 36]
y = [460; 232; 315; 178]
θ = ((X' * X)^-1) * X' * y
Result:
5-element Array{Float64,1}:
207.867
0.0693359
134.906
-77.0156
-7.81836
Furthermore, when I multiply X by Julia's θ (but not Python's), I get numbers close to y.
I can't figure out what I am doing wrong. Thanks!
Using X^-1 vs the pseudo inverse
pinv(X), which corresponds to the pseudoinverse, is more broadly applicable than inv(X), which is what X^-1 amounts to. Neither Julia nor Python does well using inv here, but in this case Julia apparently does better.
but if you change the expression to
julia> z=pinv(X'*X)*X'*y
5-element Array{Float64,1}:
188.4
0.386625
-56.1382
-92.9673
-3.73782
you can verify that X*z = y
julia> X*z
4-element Array{Float64,1}:
460.0
232.0
315.0
178.0
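For reference, the same pseudoinverse idea in Python is a one-liner with numpy.linalg.pinv; a sketch mirroring the Julia expression above:
import numpy as np

X = np.array([[1, 2104, 5, 1, 45],
              [1, 1416, 3, 2, 40],
              [1, 1534, 3, 2, 30],
              [1,  852, 2, 1, 36]], dtype=float)
y = np.array([460., 232., 315., 178.])

z = np.linalg.pinv(X.T @ X) @ X.T @ y   # pseudoinverse of the normal matrix, as in the Julia code
z2 = np.linalg.pinv(X) @ y              # or skip the normal equations entirely
print(X @ z2)                           # should reproduce y closely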
A more numerically robust approach in Python, without having to do the matrix algebra yourself, is to use numpy.linalg.lstsq to do the regression:
In [29]: np.linalg.lstsq(X, y)
Out[29]:
(array([[ 188.40031942],
[ 0.3866255 ],
[ -56.13824955],
[ -92.9672536 ],
[ -3.73781915]]),
array([], dtype=float64),
4,
array([ 3.08487554e+03, 1.88409728e+01, 1.37100414e+00,
1.97618336e-01]))
(Compare the solution vector with #waTeim's answer in Julia).
You can see the source of the ill-conditioning by printing the matrix inverse you're calculating:
In [30]: np.linalg.inv(X.T.dot(X))
Out[30]:
array([[ -4.12181049e+13, 1.93633440e+11, -8.76643127e+13,
-3.06844458e+13, 2.28487459e+12],
[ 1.93633440e+11, -9.09646601e+08, 4.11827338e+11,
1.44148665e+11, -1.07338299e+10],
[ -8.76643127e+13, 4.11827338e+11, -1.86447963e+14,
-6.52609055e+13, 4.85956259e+12],
[ -3.06844458e+13, 1.44148665e+11, -6.52609055e+13,
-2.28427584e+13, 1.70095424e+12],
[ 2.28487459e+12, -1.07338299e+10, 4.85956259e+12,
1.70095424e+12, -1.26659193e+11]])
Eeep!
Taking the dot product of this with X.T leads to a catastrophic loss of precision.
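To see how much worse the normal matrix X.T.dot(X) is than X itself, a small sketch comparing their singular values (forming X'X roughly squares the condition number, and here it even makes the matrix exactly singular, since X has rank at most 4):
import numpy as np

X = np.array([[1, 2104, 5, 1, 45],
              [1, 1416, 3, 2, 40],
              [1, 1534, 3, 2, 30],
              [1,  852, 2, 1, 36]], dtype=float)

s = np.linalg.svd(X, compute_uv=False)          # the 4 singular values of X
sn = np.linalg.svd(X.T @ X, compute_uv=False)   # singular values of the 5x5 normal matrix

print(s[0] / s[-1])     # condition number of X (already large)
print(sn[0] / sn[-1])   # far larger: X'X is rank-deficient, so its smallest singular value is ~0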
Notice that X is a 4x5 matrix, or in statistical terms, that you have fewer observations than parameters to estimate. Therefore the least squares problem has infinitely many solutions, with the sum of the squared errors exactly equal to zero. In this case the normal equations don't help you much, because the matrix X'X is singular. Instead, you should just find a solution to X*b=y.
Most numerical linear algebra systems are based on the FORTRAN package LAPACK, which uses a pivoted QR factorization for solving the problem X*b=y. Since there are infinitely many solutions, LAPACK picks the solution with the smallest norm. In Julia, you can get this solution simply by writing
float(X)\y
(Unfortunately, the float part is necessary right now, but that will change.)
In exact arithmetic, you should get the same solution as the one above with either of your proposed methods, but the floating point representation of your problem introduces small rounding errors, and these errors will affect the calculated solution. The effect of the rounding errors on the solution is much larger when using the normal equations than when using the QR factorization directly on X.
This also holds true in the usual case where X has more rows than columns, so it is often recommended that you avoid the normal equations when solving least squares problems. However, when X has many more rows than columns, the matrix X'X is relatively small. In this case, it will be much faster to solve the problem with the normal equations instead of using the QR factorization. In many statistical problems the extra numerical error is extremely small compared to the statistical error, so the loss of precision due to the normal equations can simply be ignored.
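To make that trade-off concrete, here is a small sketch (made-up tall data) comparing the normal equations with a QR/SVD-based least-squares solve in Python:
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 5))     # many more rows than columns
beta = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ beta + 0.01 * rng.standard_normal(10000)

# Normal equations: cheap, since X.T @ X is only 5x5, but this squares the condition number of X
b_normal = np.linalg.solve(X.T @ X, X.T @ y)

# QR/SVD-based least squares: works directly on X, more accurate when X is ill-conditioned
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.max(np.abs(b_normal - b_lstsq)))   # tiny here, because this random X is well conditioned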

Quickly and efficiently calculating an eigenvector for known eigenvalue

Short version of my question:
What would be the optimal way of calculating an eigenvector for a matrix A, if we already know the eigenvalue belonging to the eigenvector?
Longer explanation:
I have a large stochastic matrix A which, because it is stochastic, has a non-negative left eigenvector x (such that A^Tx=x).
I'm looking for quick and efficient methods of numerically calculating this vector. (Preferably in MATLAB or numpy/scipy; since both of these wrap around ARPACK/LAPACK, either one would be fine.)
I know that 1 is the largest eigenvalue of A, so I know that calling something like this Python code:
from scipy.sparse.linalg import eigs
vals, vecs = eigs(A, k=1)
will result in vals = 1 and vecs equalling the vector I need.
However, the thing that bothers me here is that calculating eigenvalues is, in general, a more difficult operation than solving a linear system. If a matrix M has eigenvalue l, then finding the corresponding eigenvector is a matter of solving the equation (M - l*I)*x = 0, which is, in theory at least, a simpler operation than calculating an eigenvalue, since we are only solving a linear system, or more specifically, finding the nullspace of a matrix.
However, I find that all methods of nullspace calculation in MATLAB rely on svd calculation, a process I cannot afford to perform on a matrix of my size. I also cannot call solvers on the linear equation, because they all only find one solution, and that solution is 0 (which, yes, is a solution, but not the one I need).
Is there any way to avoid calls to eigs-like function to solve my problem more quickly than by calculating the largest eigenvalue and accompanying eigenvector?
Here's one approach using Matlab:
Let x denote the (row) left† eigenvector associated to eigenvalue 1. It satisfies the system of linear equations (or matrix equation) xA = x, or x(A−I)=0.
To avoid the all-zeros solution to that system of equations, remove the first equation and arbitrarily set the first entry of x to 1 in the remaining equations.
Solve those remaining equations (with x1 = 1) to obtain the other entries of x.
Example using Matlab:
>> A = [.6 .1 .3
.2 .7 .1
.5 .1 .4]; %// example stochastic matrix
>> x = [1, -A(1, 2:end)/(A(2:end, 2:end)-eye(size(A,1)-1))]
x =
1.000000000000000 0.529411764705882 0.588235294117647
>> x*A %// check
ans =
1.000000000000000 0.529411764705882 0.588235294117647
Note that the code -A(1, 2:end)/(A(2:end, 2:end)-eye(size(A,1)-1)) is step 3.
In your formulation you define x to be a (column) right eigenvector of A^T (such that A^T x = x). This is just x.' from the above code:
>> x = x.'
x =
1.000000000000000
0.529411764705882
0.588235294117647
>> A.'*x %// check
ans =
1.000000000000000
0.529411764705882
0.588235294117647
You can of course normalize the eigenvector to sum 1:
>> x = x/sum(x)
x =
0.472222222222222
0.250000000000000
0.277777777777778
>> A.'*x %'// check
ans =
0.472222222222222
0.250000000000000
0.277777777777778
† Following the usual convention. Equivalently, this corresponds to a right eigenvector of the transposed matrix.
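Since the question also mentions numpy/scipy, here is a sketch of the same approach translated to Python (using the example matrix above; Matlab's right division b/M becomes a transposed solve):
import numpy as np

A = np.array([[.6, .1, .3],
              [.2, .7, .1],
              [.5, .1, .4]])   # example stochastic matrix from above

n = A.shape[0]
# Fix x[0] = 1 and solve the remaining n-1 equations of x(A - I) = 0,
# i.e. z (A[1:, 1:] - I) = -A[0, 1:] for the row vector z.
z = np.linalg.solve((A[1:, 1:] - np.eye(n - 1)).T, -A[0, 1:])
x = np.concatenate(([1.0], z))
x = x / x.sum()      # normalize to sum 1
print(x)             # the stationary distribution
print(A.T @ x)       # check: should equal x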
