Why can't scipy compute all eigenvalues? - python

The documentation of scipy.sparse.linalg.eigs reads
The number of eigenvalues and eigenvectors desired. k must be smaller than N-1. It is not possible to compute all eigenvectors of a matrix.
Why can it not compute all eigenvalues? This is possible for standard (non-sparse) matrices.

The docs also say:
This function is a wrapper to the ARPACK SNEUPD, DNEUPD, CNEUPD, ZNEUPD, functions which use the Implicitly Restarted Arnoldi Method to find the eigenvalues and eigenvectors.
Digging into the ARPACK docs, the description of the algorithm says:
The algorithm is capable of computing a few (k) eigenvalues with user specified features such as largest real part or largest magnitude using n*O(k) storage.
So this is a limitation of the algorithm, as explained by Wikipedia:
The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods which must complete to give any useful results.
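To make this concrete, the usual pattern is to ask ARPACK for only the few eigenvalues you actually need, and to fall back to a dense LAPACK routine when you really want the full spectrum and the matrix fits in memory. A minimal sketch (the random test matrix and its size are made up for illustration):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

A = sp.random(1000, 1000, density=0.01, format='csc', random_state=0)

# A few eigenvalues via ARPACK; k must satisfy k < N - 1 for eigs
vals_few = spla.eigs(A, k=6, which='LM', return_eigenvectors=False)

# All eigenvalues: only feasible if the densified matrix fits in memory
vals_all = np.linalg.eigvals(A.toarray())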

Related

Difference in results with sparse solver

I'm solving a non-linear elliptic PDE via linearization plus iteration and a finite difference method: basically it comes down to solving a matrix equation Ax = b, where A is a banded matrix. Due to the large size of A (typically ~8 billion elements) I have been using a sparse solver (scipy.sparse.linalg.spsolve) to do this. In my code, I compute a residual value that measures the deviation from the true non-linear solution, and it decreases with successive iterations. It turns out that there is a difference between the values the sparse solver produces and what scipy.linalg.solve does.
Output of normal solver:
Output of sparse solver:
The only difference in my code is the replacement of the solver. I don't think this is down to floating-point errors, since the error creeps up to the 2nd decimal place (in the last iteration; but the order of magnitude also decreases, so I'm not sure). Any insights into why this might be happening? The final solution does not seem to be affected qualitatively, but I wonder whether this can create problems.
(No code has been included since the difference is only there in the creation of a sparse matrix and the sparse solver. However, if you feel you need to check some part of it, please ask me to include code accordingly)
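Since no code was posted, the following is only a hedged sketch of how the two solvers can be compared on one and the same system via the residual ||Ax - b||; the tridiagonal test matrix below just stands in for the banded PDE matrix:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import scipy.linalg as la

n = 2000
main = 2.0 + np.random.rand(n)            # hypothetical banded (tridiagonal) system
off = -np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format='csc')
b = np.random.rand(n)

x_sparse = spla.spsolve(A, b)
x_dense = la.solve(A.toarray(), b)

# Both residuals should sit near machine precision; the two solutions can
# still differ slightly because the factorizations (and pivoting) differ.
print(np.linalg.norm(A @ x_sparse - b), np.linalg.norm(A @ x_dense - b))
print(np.max(np.abs(x_sparse - x_dense)))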

Eigenvector ambiguity - How to enforce a specific sign convention

I am writing a program in Python that uses numpy.linalg.eigh to diagonalize a Hermitian matrix (a Hamiltonian). I diagonalize many such matrices and use the resultant eigenvector matrices for multiple unitary transformations of some other matrix. By "eigenvector matrix", I mean a matrix whose columns are the eigenvectors of the original matrix.
Unfortunately, I am hitting a potential problem because of the eigenvector sign ambiguity (i.e., eigenvectors are only defined up to a constant and normalization still does not fix the sign of an eigenvector). Specifically, the result I am calculating depends on the interference patterns produced by the successive unitary transformations. Thus, I anticipate that the sign ambiguity will become a problem.
My question:
What is the best way (or the industry standard) to enforce a particular sign convention for the eigenvectors?
I have thought of/come across the following:
Ensure the first coefficient of each eigenvector is positive. Problem: some of these coefficients are zero or within numerical error of zero.
Ensure the first coefficient of largest magnitude is positive. Problem: some of the eigenvectors have multiple coefficients with the same magnitude within numerical error. Numerical error then "randomly" determines which coefficient is "bigger."
Ensure the sum of the coefficients is positive. Problem: some coefficients are equal in magnitude but opposite in sign, leaving the sign still ambiguous/determined by numerical error. (I also see other problems with this approach).
Add a small number (such as 1E-16) to the eigenvector, ensure that the first coefficient is positive, then subtract the number. Problem: Maybe none important for me, but this makes me uneasy as I am not sure what problems it may cause.
(Inspired by Eigenshuffle and Sign correction in SVD and PCA) Pick a reference vector and ensure that the dot product of every eigenvector with this vector is positive. Problem: How to pick the vector? A random vector increases the likelihood that no eigenvectors are orthogonal to it (within numerical error), but there is no guarantee. Alternatively, one could choose a set of random vectors (all with positive coefficients) to increase the likelihood that the vector space is "spanned" well-enough.
I have tried to find the "standard" convention, but I have had a hard time finding anything particularly useful, particularly in Python. There is a solution for SVD (Sign correction in SVD and PCA), but I don't have any data vectors to compare to. There is Eigenshuffle (which is for Matlab, and I am using Python), but my matrices are not usually successive small modifications of each other (though some are).
I am leaning toward solution 5, as it seems pretty intuitive; we are simply ensuring that all eigenvectors are in the same high-dimensional "quadrant". Also, having two or three random reference vectors with positive coefficients should cover almost all eigenvectors with very high probability, assuming the dimensionality of the system is not too big (my system has a dimensionality of 9).
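For what it's worth, option 5 is short to implement. A minimal sketch, assuming the eigenvectors come from numpy.linalg.eigh and sit in the columns of vecs; the reference vector (all ones plus a small fixed perturbation) and the helper name fix_signs are just illustrative choices:

import numpy as np

def fix_signs(vecs, ref=None):
    # Flip each column of vecs so that its dot product with ref is positive.
    n = vecs.shape[0]
    if ref is None:
        rng = np.random.default_rng(0)            # fixed seed -> reproducible convention
        ref = np.ones(n) + 0.1 * rng.random(n)    # positive entries, unlikely to be orthogonal to any eigenvector
    signs = np.sign(vecs.T @ ref)
    signs[signs == 0] = 1.0                       # leave exactly-orthogonal columns untouched
    return vecs * signs                           # broadcasts the sign over each column

H = np.diag([1.0, 2.0, 3.0])                      # toy Hermitian matrix
w, v = np.linalg.eigh(H)
v_fixed = fix_signs(v)

Note that for a complex Hermitian matrix the ambiguity is a full phase factor rather than just a sign, so the same idea would have to be applied to the phase of the dot product instead.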

Using scipy.sparse.linalg.eigs to find when the eigenvalue real part crosses zero

I am using scipy.sparse.linalg.eigs to calculate the eigenvalues of a large sparse matrix, which is the Jacobian of a vector function (the Jacobian size is 1200x1200). The method raises ArpackNoConvergence every once in a while, and I think it happens especially when the real parts of the eigenvalues become small in magnitude (but are still negative). How can I set this method up so that it can calculate those eigenvalues without crashing?
My current setup is:
eigs = sparse.linalg.eigs(jacobian(state), k=1, which='LR', return_eigenvectors=False)[0]
What I would like to achieve is to find when the real part of one of the eigenvalues crosses zero (and thus the state is unstable linearly).
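Two things that tend to help here, sketched below under the assumption that jacobian(state) returns a scipy sparse matrix: give ARPACK more room to converge (larger ncv and maxiter), and, since the interesting eigenvalue is the one approaching zero, fall back to shift-invert about sigma=0 so that eigenvalues near zero converge quickly. The helper name leading_real_part is made up for illustration:

import scipy.sparse.linalg as spla

def leading_real_part(J):
    try:
        # Rightmost eigenvalue, with a larger Krylov subspace and more iterations
        vals = spla.eigs(J, k=1, which='LR', ncv=40, maxiter=5000,
                         return_eigenvectors=False)
    except spla.ArpackNoConvergence as err:
        if len(err.eigenvalues) > 0:
            return err.eigenvalues[0].real        # use whatever did converge
        # Fall back to shift-invert about 0: eigenvalues closest to zero,
        # which is what matters right at the stability crossing
        vals = spla.eigs(J, k=1, sigma=0.0, which='LM',
                         return_eigenvectors=False)
    return vals[0].real

# if leading_real_part(jacobian(state)) > 0: the state is linearly unstable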

The difference between C++ (LAPACK, sgels) and Python (Numpy, lstsq) results

I am comparing the numerical results of C++ and Python computations. In C++, I make use of LAPACK's sgels function to compute the coefficients of a linear regression problem. In Python, I use Numpy's linalg.lstsq function for a similar task.
What is the mathematical difference between the methods used by sgels and linalg.lstsq?
What is the expected error (e.g. 6 significant digits) when comparing the results (i.e. the regression coefficients) numerically?
FYI: I am by no means a C++ or Python expert, which makes it difficult to understand what is going on inside the functions.
Taking a look at numpy's source, in the file linalg.py, lstsq relies on LAPACK's zgelsd() for complex input and dgelsd() for real input. Here are the differences from sgels():
dgelsd() works in double precision while sgels() works in single precision, so there is a difference in precision to begin with.
dgels() makes use of the QR factorization of the matrix A and assumes that A has full rank. The condition number of the matrix must be reasonable to get a meaningful result; see this course for the logic of the method. On the other hand, dgelsd() makes use of the singular value decomposition of A. In particular, A can be rank deficient, and small singular values are discarded depending on the additional argument rcond (or on machine precision). Notice that numpy's default value for rcond is -1: a negative value means machine precision is used. See this course for the logic.
According to LAPACK's benchmark, one can expect dgels() to be about 5 times faster than dgelsd().
You may see significant differences between the results of sgels() and dgelsd() if the matrix is ill-conditioned. Indeed, there is a bound on the error of the linear regression which depends on the algorithm and on the value of rcond that is used. See LAPACK's user guide, "Error Bounds for Linear Least Squares Problems", for estimates of the errors and "Further Details: Error Bounds for Linear Least Squares Problems" for technical details.
In conclusion, sgels() and dgels() can be used if the measurements in b are accurate and easily related to the explanatory variables. For instance, if sensors are placed at the exits of exhaust pipes, it's easy to guess which motors are running. But sometimes the linear link between the sources and the measurements is not precisely known (uncertainty in the terms of A), or discriminating polluters on the basis of the measurements becomes more difficult (some polluters are far from the set of sensors and A is ill-conditioned). In this kind of situation, dgelsd() and tuning the rcond argument can help. Whenever in doubt, use dgelsd() and estimate the error on the estimated x according to LAPACK's user guide.
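As a concrete illustration of the rcond point in SciPy terms: scipy.linalg.lstsq lets you pick the LAPACK driver, so a QR-based solve (gelsy, the pivoted cousin of gels) can be compared with the SVD-based gelsd on the same, deliberately ill-conditioned system. The Hilbert-matrix example below is made up for illustration:

import numpy as np
from scipy.linalg import lstsq, hilbert

A = hilbert(10)[:, :6]                  # a notoriously ill-conditioned design matrix
x_true = np.arange(1.0, 7.0)
b = A @ x_true

x_qr, *_ = lstsq(A, b, lapack_driver='gelsy')               # QR with column pivoting
x_svd, *_ = lstsq(A, b, lapack_driver='gelsd', cond=1e-10)  # SVD; singular values below cond are cut

print(np.max(np.abs(x_qr - x_true)), np.max(np.abs(x_svd - x_true)))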

Alternative to numpy's linalg.eig?

I have written a simple PCA code that calculates the covariance matrix and then uses linalg.eig on that covariance matrix to find the principal components. When I use scikit's PCA for three principal components, I get almost the equivalent result. My PCA function outputs the third column of the transformed data with flipped signs compared to what scikit's PCA function does. Now I think there is a higher probability that scikit's built-in PCA is correct than that my code is correct. I have noticed that the third principal component/eigenvector has flipped signs in my case. So if scikit's third eigenvector is (a,-b,-c,-d), then mine is (-a,b,c,d). I might be a bit shabby at linear algebra, but I assume those are different results. The way I arrive at my eigenvectors is by computing the eigenvectors and eigenvalues of the covariance matrix using linalg.eig. I would gladly try to find the eigenvectors by hand, but doing that for a 4x4 matrix (I am using the iris data set) is not fun.
Iris data set has 4 dimensions, so at most I can run PCA for 4 components. When I run for one component, the results are equivalent. When I run for 2, also equivalent. For three, as I said, my function outputs flipped signs in the third column. When I run for four, again signs are flipped in the third column and all other columns are fine. I am afraid I cannot provide the code for this. This is a project, kind of.
This is expected behaviour, and it is even stated in the documentation of sklearn's PCA:
Due to implementation subtleties of the Singular Value Decomposition (SVD), which is used in this implementation, running fit twice on the same matrix can lead to principal components with signs flipped (change in direction). For this reason, it is important to always use the same estimator object to transform data in a consistent fashion.
and it is quite obviously correct from a mathematical perspective: if v is an eigenvector of A with eigenvalue k, then
Av = kv
and thus also
A(-v) = -(Av) = -(kv) = k(-v)
So if scikit's third eigenvector is (a,-b,-c,-d) then mine is (-a,b,c,d).
That's completely normal. If v is an eigenvector of a matrix, then -v is an eigenvector with the same eigenvalue.
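A tiny sketch of that point, plus the usual fix when comparing two implementations: verify that -v satisfies the same eigenvalue equation, and normalise the sign of each component (here by making its largest-magnitude entry positive) before comparing. The helper name canonical_signs and the 2x2 matrix are just illustrative:

import numpy as np

def canonical_signs(components):
    # Flip each row so that its largest-magnitude entry is positive
    # (rows = components, as in sklearn's components_ array).
    idx = np.argmax(np.abs(components), axis=1)
    signs = np.sign(components[np.arange(components.shape[0]), idx])
    return components * signs[:, None]

A = np.array([[2.0, 1.0], [1.0, 2.0]])
w, v = np.linalg.eig(A)                 # columns of v are eigenvectors
print(np.allclose(A @ (-v), -v * w))    # -v satisfies A(-v) = k(-v) too -> True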
