I was wondering if there is a Python package, numpy or otherwise, that has a function that computes the first eigenvalue and eigenvector of a small matrix, say 2x2. I could use the linalg package in numpy as follows.
import numpy as np

def whatever():
    A = np.random.rand(2, 2)
    evals, evecs = np.linalg.eig(A)
    # Assume that the eigenvalues are ordered from large to small and that the
    # eigenvectors are ordered accordingly.
    return evals[0], evecs[:, 0]
But this takes a really long time. I suspect that it's because numpy computes eigenvectors through some sort of iterative process. So I was wondering if there were a much faster algorithm that only returns the first (largest) eigenvalue and eigenvector, since I only need the first.
For 2x2 matrices I could of course write such a function myself and compute the eigenvalue and eigenvector analytically, but then there are problems with floating-point computation; for example, when I divide a very large number by a very small one, I get infinity or NaN. Does anyone know anything about this? Please help! Thank you in advance!
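For what it's worth, here is a minimal sketch of the analytic 2x2 route alluded to above (my own illustration, assuming the matrix has real eigenvalues, e.g. because it is symmetric); it sidesteps the large-by-small division by building the eigenvector directly from the matrix entries and normalizing:
import numpy as np

def largest_eigenpair_2x2(A):
    a, b = A[0, 0], A[0, 1]
    c, d = A[1, 0], A[1, 1]
    tr, det = a + d, a * d - b * c
    disc = np.sqrt(max(tr * tr - 4.0 * det, 0.0))   # real-eigenvalue assumption
    lam = (tr + disc) / 2.0                         # largest eigenvalue
    # (A - lam*I) v = 0: use whichever off-diagonal entry is larger to avoid cancellation.
    if abs(c) >= abs(b):
        v = np.array([lam - d, c])
    else:
        v = np.array([b, lam - a])
    if np.allclose(v, 0.0):                         # degenerate/diagonal case
        v = np.array([0.0, 1.0]) if abs(d - lam) <= abs(a - lam) else np.array([1.0, 0.0])
    return lam, v / np.linalg.norm(v)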
Use this: http://docs.scipy.org/doc/scipy/reference/sparse.linalg.html
http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html#scipy.sparse.linalg.eigs
Find k eigenvalues and eigenvectors of the square matrix A.
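For example, a minimal sketch asking ARPACK for just the largest-magnitude eigenpair (note that eigs requires k < N - 1, so it cannot actually be applied to a 2x2 matrix; the sketch uses a larger random matrix):
import numpy as np
from scipy.sparse.linalg import eigs

A = np.random.rand(100, 100)
# k=1, which='LM': only the eigenvalue of largest magnitude and its eigenvector.
vals, vecs = eigs(A, k=1, which='LM')
largest_eval = vals[0]
largest_evec = vecs[:, 0]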
According to the docs:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html
and also in my own experience, numpy.linalg.eig(A) does NOT sort the eigenvalues or eigenvectors in any particular order, which is what the OP and subsequent answers seem to be assuming. I suggest something like:
rearrangedEvalsVecs = sorted(zip(evals, evecs.T),
                             key=lambda x: x[0].real, reverse=True)
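An equivalent, fully vectorized alternative (my addition, not part of the original answer) uses np.argsort to reorder both arrays at once:
import numpy as np

evals, evecs = np.linalg.eig(A)        # A as in the question
order = np.argsort(evals.real)[::-1]   # indices sorted by real part, largest first
evals_sorted = evals[order]
evecs_sorted = evecs[:, order]         # reorder columns to match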
There doesn't appear to be a numpy equivalent of Matlab's eigs(A,B,k) for finding the k largest eigenvectors.
If you're interested, Enthought has compiled a table showing the differences between Matlab and numpy. That should be helpful for answering questions like this one: Link
One other thought: for 2x2 matrices, I don't think eigs(A,B,1) would help anyway. The effort involved in computing the first eigenpair leaves the matrix transformed to the point where the second emerges almost directly, so there is only a benefit for 3x3 and larger.
Related
I am working with a 4x4 matrix which, in general, has complex-valued elements. I am trying to determine if there exists a non-real eigenvalue for this matrix; I do not necessarily care what the eigenvalue is. My current algorithm for the numpy array A (which is pre-defined by me) is as follows:
import scipy.linalg as SciLA
import numpy as np
import mpmath as mp
w1 = SciLA.eigvals(A)
w2 = [mp.chop(x, tol=1e-14) for x in w1]
imag_list = [np.imag(x) for x in w2]
imag_num = np.sign(len([x for x in imag_list if x != 0]))
Using %timeit, the code takes around 1.43 ms per loop (after testing over 1000 loops) for a simple 4x4 matrix. However, I feel that there should be a simpler way of just checking if a certain matrix has complex eigenvalues. I also need the code to go faster, as I am looping over many 4x4 matrices. Any suggestions for possible packages or mathematical/numerical techniques to aid in simplifying the code and/or speeding it up would be greatly appreciated.
As per my comment above, I am going to assume that you are looking to see if any of the values are non-real, i.e. have a non-zero imaginary part. This isn't strictly a solution, but I'm guessing it'll be close enough for what you want:
The trace of a matrix is the sum of its eigenvalues. If this trace is non-real, certainly at least one of these eigenvalues must be non-real. So just check if this is the case, and if so you can be sure that there is a non-real eigenvalue.
This condition isn't perfect, of course: one can easily find matrices with a real trace but some non-real eigenvalues. Therefore, if the trace is real, you should fall back to computing the eigenvalues explicitly to figure out whether they are all real or not. However, for most applications most matrices will have a non-real trace, so your execution time should be much shorter, since all you need to compute is the trace.
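A minimal sketch of that two-stage check (using the 1e-14 tolerance from the question):
import numpy as np
import scipy.linalg as SciLA

def has_nonreal_eigenvalue(A, tol=1e-14):
    # Cheap test first: a non-real trace guarantees at least one non-real eigenvalue.
    if abs(np.trace(A).imag) > tol:
        return True
    # A real trace is inconclusive, so fall back to computing the eigenvalues.
    return not np.allclose(SciLA.eigvals(A).imag, 0, atol=tol)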
I think what slows your code down is that you are building three lists from the result and using Python loops to do it. Use numpy's vectorized operations instead:
# This will tell you if all the eigenvalues are (nearly) real
np.allclose(SciLA.eigvals(A).imag, 0)
I have a large sparse square non-normal matrix: 73080 rows, but only 6 nonzero entries per row (all equal to 1). I'd like to compute the two largest eigenvalues, as well as the operator (2) norm, ideally with Python. The natural way for me to store this matrix is with scipy's csr_matrix, especially since I'll be multiplying it with other sparse matrices.
However, I don't see a good way to compute the relevant statistics: scipy.sparse.linalg's norm method doesn't have the 2-norm implemented, converting to a dense matrix seems like it would be a bad idea, and running scipy.sparse.linalg.eigs seems to run extremely, maybe prohibitively, slowly, and in any event it computes lots of data that I just don't need. I suppose I could subtract off the spectral projector corresponding to the top eigenvalue, but then I'd still need to know the top eigenvalue of the new matrix, which I'd like to do with an out-of-the-box method if at all possible, and in any event this wouldn't continue to work after multiplying with other large sparse matrices.
However, these kinds of computations seem to be doable: the top of page 6 of this paper seems to have data on the eigenvalues of ~10000-row matrices. If this is not feasible in Python, is there another way I should try to do this? Thanks in advance.
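For what it's worth, here is a minimal sketch of how this is usually attempted with scipy's sparse machinery (my own illustration on a random stand-in matrix, not a claim that it will be fast on the actual 73080x73080 matrix): eigs with k=2 for the two largest-magnitude eigenvalues, and svds with k=1 for the 2-norm, which equals the largest singular value.
import scipy.sparse as sp
from scipy.sparse.linalg import eigs, svds

n = 73080
A = sp.random(n, n, density=6.0 / n, format='csr')   # stand-in for the real matrix

# Two eigenvalues of largest magnitude; ARPACK only needs matrix-vector products.
top_two = eigs(A, k=2, which='LM', return_eigenvectors=False)

# Operator 2-norm = largest singular value.
two_norm = svds(A, k=1, return_singular_vectors=False)[0]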
I'm facing the inversion of a 6x6 matrix P which can also be represented as a symmetric 2x2 block matrix, P = [[P11, P12], [P21, P22]]. Each of the Pij sub-matrices is a 3x3 matrix; P12 and P21 are equal, so that P is symmetric. I would like to exploit this structure to compute the inverse of P efficiently. Until now I have been using the inv() function from Scipy directly on P, but having profiled my code, and considering that I have to invert this type of matrix thousands of times, I would like a better way. Looking online I found a block-inversion formula based on the Schur complement.
I'm wondering whether using this strategy would be more computationally efficient than inverting the 6x6 matrix after assembling it. Since the blocks are only 3x3, I could also use closed-form formulas for the inverses of the blocks and then plug them into the Schur-complement formula.
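A minimal sketch of the Schur-complement route, writing P = [[A, B], [B.T, C]] with 3x3 blocks (the block names are mine, not from the original post):
import numpy as np

def invert_symmetric_block(A, B, C):
    # P = [[A, B], [B.T, C]], with A and C symmetric and invertible 3x3 blocks.
    Ainv = np.linalg.inv(A)
    S = C - B.T @ Ainv @ B               # Schur complement of A in P
    Sinv = np.linalg.inv(S)
    AinvB = Ainv @ B
    top_left = Ainv + AinvB @ Sinv @ AinvB.T
    top_right = -AinvB @ Sinv
    return np.block([[top_left, top_right],
                     [top_right.T, Sinv]])
Whether this beats a single 6x6 inversion is worth timing: the blockwise version does several small matrix products and two 3x3 inversions, and for matrices this small the Python call overhead often dominates, so the gain may be modest.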
I am trying to get accustomed to doing singular value decomposition with numpy. I decided to do the SVD on a matrix from an example to understand how it works. I am following this pdf, where A = [[3, 2, 2], [2, 3, -2]]. When I run the svd, however, I get something different for the matrices U and V than what is provided in the pdf. It is the same matrix, except the signs have been flipped. Now, since the matrices are both linear operators and the signs have been flipped on both, it is technically still correct; the flipping cancels out. But why is it this way?
Remember that the columns of U and V are eigenvectors (of AAᵀ and AᵀA, respectively). A scaled eigenvector is still an eigenvector, so as long as you get some scalar multiple of the solution given in the PDF, it is perfectly acceptable. You know the implementation is correct if the singular values are the same; judging from your post, since you didn't comment on them, I'm assuming they are. The singular values need to be the same, but the singular vectors can differ.
In your case, the scaling factor is -1, and the results are still valid singular vectors for the same singular values. The reason the signs differ is most likely down to how the SVD is computed: finding the left and right eigenvectors directly would be computationally expensive, so numerical shortcuts are used to arrive at an equivalent solution, and that may leave the vectors with a different sign than you expect.
I'd finally like to point you to this Cross Validated post that discusses the different algorithms used to compute the SVD. numpy.linalg.svd examines the properties of the input matrix and chooses a suitable algorithm:
https://stats.stackexchange.com/questions/66034/what-are-efficient-algorithms-to-compute-singular-value-decomposition-svd
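A minimal sketch reproducing the sign ambiguity on the matrix from the question (the exact signs you see may depend on your numpy/LAPACK build):
import numpy as np

A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])

U, s, Vt = np.linalg.svd(A)
print(s)    # the singular values should match the PDF exactly
print(U)    # columns may differ from the PDF by a sign
print(Vt)   # the corresponding rows flip sign together with the columns of U

# Flipping the sign of a column of U together with the matching row of Vt
# leaves the product unchanged, which is why both answers are valid.
S = np.zeros_like(A)
S[:, :len(s)] = np.diag(s)
print(np.allclose(A, U @ S @ Vt))   # True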
I'm trying to implement Reinsch's Algorithm (pp 4).
Since the working matrices are sparse, I'm using the scipy.sparse module, but as you can see, Reinsch's algorithm needs the Cholesky decomposition of a sparse matrix (let's call it my_matrix) in order to solve a certain system, and I couldn't find anything related to this.
Of course, in the same algorithm I can solve the sparse system using, for instance, scipy.sparse.linalg.spsolve, and then at the end of the algorithm use something like:
R = numpy.linalg.cholesky(my_matrix.A)
But in my application my_matrix is usually about 800x800, so this dense factorization is very inefficient.
So, my question is: where can I find such a decomposition?
Thanks in advance.
For a fast sparse Cholesky decomposition, you can try the CHOLMOD bindings from scikit-sparse:
from scikits.sparse.cholmod import cholesky

factor = cholesky(A.tocsc())   # CHOLMOD wants a sparse matrix in CSC format
x = factor(b)                  # solves A x = b using the cached factorization
Here A is your sparse, symmetric, positive-definite matrix. Since your matrix is only about 800x800, this factorization is cheap, and working on the sparse matrix directly avoids forming a dense 800x800 array.