The bottleneck of some code I have is:
for _ in range(n):
    W = np.dot(A, W)
where n can vary, A is a fixed size MxM matrix, W is Mx1.
Is there a good way to optimize this?
Numpy Solution
Since np.dot is just matrix multiplication for your shapes, you can write what you want as A^n * W, where ^ denotes repeated matrix multiplication ("matrix_power") and * denotes matrix multiplication. So you can rewrite your code as
np.linalg.matrix_power(A, n) @ W
Linear Algebra Solution
You can do even better with linear algebra. Assume for the moment that W is an eigenvector of A, i.e. that A*W = a*W with a just a number; then it follows that A^n*W = a^n*W. Now you might think: fine, but what if W is not an eigenvector? Since matrix multiplication is linear, it works just as well if W can be written as a linear combination of eigenvectors, and there is even a generalisation of this idea in case W cannot be written as a linear combination of eigenvectors. If you want to read more about this, look up diagonalization and the Jordan normal form.
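As a concrete illustration, here is a minimal sketch of the diagonalization idea in numpy, assuming A is diagonalizable; A, W and n below are just random stand-ins for the arrays in the question:

import numpy as np

rng = np.random.default_rng(0)
M = 4
A = rng.standard_normal((M, M))   # stand-in for the fixed MxM matrix
W = rng.standard_normal((M, 1))   # stand-in for the Mx1 vector
n = 20

vals, vecs = np.linalg.eig(A)     # A = vecs @ diag(vals) @ inv(vecs)
c = np.linalg.solve(vecs, W)      # coordinates of W in the eigenbasis
W_n = (vecs * vals**n) @ c        # equals A^n @ W up to round-off

# compare against the matrix_power route; take the real part because
# eig may return complex conjugate pairs even for a real A
print(np.allclose(W_n.real, np.linalg.matrix_power(A, n) @ W))

The decomposition is computed once, so varying n afterwards only costs a couple of matrix-vector products.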
Related
How can I create a matrix P consisting of the three eigenvector columns by using a doubly nested loop?
from sympy.matrices import Matrix, zeros
from sympy import pprint
A = Matrix([[6,2,6], [2,6,6], [6,6,2]])
ew_A = A.eigenvals()
ev_A = A.eigenvects()
pprint(ew_A)
pprint(ev_A)
# Matrix P
(n,m) = A.shape
P = TODO  # initialising
# filling matrix P with the eigenvector columns
for i in TODO:
    for j in TODO:
        P[:, i+j] = TODO
# Calculating the diagonal matrix
D = P**-1 * A * P
Thanks so much in advance.
Finding the eigenvalues of a matrix, or diagonalizing it, is equivalent to finding the zeros of a polynomial with a degree equal to the size of the matrix. So in your case diagonalizing a 3x3 matrix is equivalent to finding the zeros of a 3rd degree polynomial. Maybe there is a simple algorithm for that, but mathematicians always go for the general case.
And in the general case you can show that there is no finite closed-form solution in radicals for the zeros of a 5th-or-higher degree polynomial (that is the Abel-Ruffini theorem, proved with Galois theory), so there is also no simple "triple loop" algorithm for matrices of size 5x5 and higher. Eigenvalue software works by iterative approximation, so that is a "while" loop around some finite loops.
This means that your question has no answer in the general case. In the 3x3 case maybe, but even that is not going to be terribly simple.
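For the 3x3 matrix in the question you can simply let sympy do the root finding. Here is a minimal sketch of the double loop, reusing the names from the question; note that sympy's built-in A.diagonalize() produces the same P and D in one call:

from sympy.matrices import Matrix, zeros
from sympy import pprint

A = Matrix([[6, 2, 6], [2, 6, 6], [6, 6, 2]])
(n, m) = A.shape

# eigenvects() yields (eigenvalue, multiplicity, [basis vectors]) tuples;
# stack the basis vectors as the columns of P
P = zeros(n, m)
col = 0
for val, mult, vects in A.eigenvects():
    for v in vects:
        P[:, col] = v
        col += 1

D = P**-1 * A * P   # diagonal matrix of the eigenvalues
pprint(D)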
I'm trying to solve a classic eigenvalue problem in Python: u*F*A + E*A = 0, where u is an eigenvalue of the problem, F and E are (20x20) matrices and A is an eigenvector.
So first I tried to use numpy.linalg.eig(-F^-1 E) to compute the eigenvalues u of the problem. The eigenvalues come in complex conjugate pairs. Then I tried to compute numpy.linalg.det(F^-1 E + u*Id) to check the eigenvalues. It should return 0, or something really close to 0, but the result is around 10e50, which makes no sense.
Then I decided to use scipy.linalg.eigvals(-E, F) to avoid inverting F, and to check the results I computed numpy.linalg.det(E + u*F); once again the result is around 10e50 and not close to zero.
I don't understand where the problem comes from. I tried with a smaller problem:
-For a (4x4) matrix the determinant is around 10e-8
-For a (6x6) matrix the determinant is around 10e-4
-For a (8x8) matrix the determinant is around 10e4
I figure this might come from the size of the matrix but (8x8) is not that big and the determinant is already a lot bigger than 0.
Also, in the (20x20) matrix some terms are around 10e7 while others are around 1; might that cause a problem?
Thank you for your help, I really don't know how to make this work.
Edit1: Added the E and F matrices in (20x20) dimension
Edit2: I computed the rank of the matrix F^-1E+u*Id and it's equal to 19 while the matrix size is (20x20) so I think that the eigenvalues are correct. I guess that the problem comes from the determinant calculation.
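One thing worth noting: det(E + u*F) is essentially the product of the other 19 eigenvalues of E + u*F times the tiny one that should be zero, so with entries around 10e7 even a small rounding error in u gets multiplied up to something enormous. A better-scaled check is the relative residual of each eigenpair. A minimal sketch, with random matrices standing in for the real E and F:

import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
E = rng.standard_normal((20, 20))   # stand-in for the real E
F = rng.standard_normal((20, 20))   # stand-in for the real F

# generalized eigenpairs of (E + u F) A = 0, without inverting F
u, V = eig(-E, F)

# check each eigenpair via its relative residual instead of a determinant
res = [np.linalg.norm((E + ui * F) @ vi) /
       ((np.linalg.norm(E) + abs(ui) * np.linalg.norm(F)) * np.linalg.norm(vi))
       for ui, vi in zip(u, V.T)]
print(max(res))   # should be near machine precision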
I have a matrix Q.
Q.shape is (2, 2, n, n), where n is k**2, k > 1.
So Q is a block matrix:
Q[0,0].shape is (n,n)
Q[0,1].shape is (n,n)
Q[1,0].shape is (n,n)
Q[1,1].shape is (n,n)
The matrices nested inside Q are sparse.
I have the eigenvalue problem Q * v = lambda * v, where v is a vector of vectors with shape (2, 1, n, 1).
The matrices are sparse because they are extremely large (k is supposed to be bigger than 1000).
The only solution I see for the moment is to use scipy.sparse.linalg.eigs, but Q has to be square.
What if I use scipy.sparse.hstack and scipy.sparse.vstack to transform Q from shape (2, 2, n, n) to Qs of shape (2n, 2n), with the vector v becoming vs of shape (2n, 1)?
1. Will the eigenvalue problem still be valid as Qs * vs = lambda * vs?
If yes, then I can use this:
w, v = scipy.sparse.linalg.eigs(Qs, k=1, sigma=1.4, which='SM')
2. I am also looking for eigenvalue solvers that could be faster than eigs from scipy.sparse.linalg for an extremely large matrix Qs.
What can you advise for an HPC Slurm-based setup with GPUs?
There is Intel Python; it's the only speed-up I have found.
The CUDA sparse library cuSOLVER seems good, but I didn't find examples for Slurm.
I hope I have described this understandably.
This is also my first question with rendered matrices.
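Regarding point 1: as long as Q acts on v blockwise, i.e. (Q*v)[i] = Q[i,0]*v[0] + Q[i,1]*v[1], the flattened problem Qs * vs = lambda * vs is equivalent, and scipy.sparse.bmat does the nested hstack/vstack in one call. A minimal sketch with small random sparse blocks standing in for Q[i, j]:

import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 16   # stand-in; the real n = k**2 is much larger
blocks = [[sp.random(n, n, density=0.2, format='csr') for _ in range(2)]
          for _ in range(2)]

# assemble the (2n, 2n) matrix; equivalent to nested hstack/vstack
Qs = sp.bmat(blocks, format='csr')

# Qs @ vs = lambda * vs; vs[:n] and vs[n:] are the two halves of v
w, vs = eigs(Qs, k=1, sigma=1.4)   # shift-invert: eigenvalue nearest sigma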
Suppose I have a symmetric matrix A and a vector b and want to find A^(-1) b. Now, this is well-known to be doable in time O(N^2) (where N is the dimension of the vector/matrix), and I believe that in MATLAB this can be done as A\b. But all I can find in Python is numpy.linalg.solve(), which will do Gaussian elimination, which is O(N^3). I must not be looking in the right place...
scipy.linalg.solve has an argument to make it assume a symmetric matrix:
x = scipy.linalg.solve(A, b, assume_a="sym")
If you know your matrix is not just symmetric but positive definite, you can pass this stronger assumption instead as "pos".
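A minimal sketch of both variants, with a random symmetric positive-definite matrix standing in for A:

import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(0)
M = rng.standard_normal((500, 500))
A = M @ M.T + 500 * np.eye(500)      # symmetric positive definite
b = rng.standard_normal(500)

x_sym = solve(A, b, assume_a="sym")  # symmetric (indefinite) solver, LDL^T
x_pos = solve(A, b, assume_a="pos")  # Cholesky-based solver
print(np.allclose(x_sym, x_pos))

Note that both calls still perform an O(N^3) factorization, just with smaller constants than the general solver; the O(N^2) cost is what you pay per additional right-hand side once a factorization (e.g. from scipy.linalg.cho_factor) is reused with cho_solve.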
I have a matrix B that is square and dense, and a matrix A that is rectangular and sparse.
Is there a way to efficiently compute the product B^-1 * A?
So far, I use (in numpy)
tmp = B.inv()
return tmp * A
which, I believe, makes use of A's sparsity. I was thinking about using the sparse method
scipy.sparse.linalg.spsolve, but this requires B, and not A, to be sparse.
Is there another way to speed things up?
Since the matrix to be inverted is dense, spsolve is not the tool you want. In addition, it is bad numerical practice to calculate the inverse of a matrix and multiply it by another - you are much better off using LU decomposition, which is supported by scipy.
Another point is that unless you are using the matrix class (I think that the ndarray class is better, this is something of a question of taste), you need to use dot instead of the multiplication operator. And if you want to efficiently multiply a sparse matrix by a dense matrix, you need to use the dot method of the sparse matrix. Unfortunately this only works if the first matrix is sparse, so you need to use the trick which Anycorn suggested of taking the transpose to swap the order of operations.
Here is a lazy implementation which doesn't use the LU decomposition, but which should otherwise be efficient:
import scipy.linalg

B_inv = scipy.linalg.inv(B)
C = (A.transpose().dot(B_inv.transpose())).transpose()  # sparse A.dot comes first
Doing it properly with the LU decomposition involves finding a way to efficiently multiply a triangular matrix by a sparse matrix, which currently eludes me.
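For completeness, here is a rough sketch of the LU route with scipy.linalg.lu_factor and lu_solve, using random stand-ins for B and A. It never forms B^-1, and it sidesteps the triangular-times-sparse problem by densifying A only as the right-hand side, which is fine when A has a modest number of columns:

import numpy as np
import scipy.sparse as sp
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))                  # dense, square
A = sp.random(200, 50, density=0.05, format='csc')   # sparse, rectangular

lu, piv = lu_factor(B)                 # factor B once
C = lu_solve((lu, piv), A.toarray())   # C = B^-1 @ A
print(C.shape)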