I am looking to solve a problem of the type: Aw = xBw where x is a scalar (eigenvalue), w is an eigenvector, and A and B are symmetric, square numpy matrices of equal dimension. I should be able to find d x/w pairs if A and B are d x d. How would I solve this in numpy? I searched the scipy docs but couldn't find anything like what I wanted.
For real symmetric or complex Hermitian dense matrices, you can use scipy.linalg.eigh() to solve a generalized eigenvalue problem. To avoid extracting all the eigenvalues you can specify only the desired ones by using subset_by_index:
from scipy.linalg import eigh
eigvals, eigvecs = eigh(A, B, eigvals_only=False, subset_by_index=[0, 2])  # three smallest eigenpairs; subset_by_index is a two-element [lo, hi] inclusive range
One could use eigvals_only=True to obtain only the eigenvalues.
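As a minimal runnable sketch (the matrices below are made up for illustration; eigh requires B to be positive definite):
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d = 5
A = rng.random((d, d)); A = A + A.T                   # real symmetric
B = rng.random((d, d)); B = B @ B.T + d * np.eye(d)   # symmetric positive definite

# the three smallest generalized eigenvalues and their eigenvectors
eigvals, eigvecs = eigh(A, B, subset_by_index=[0, 2])
print(eigvals)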
Have you seen scipy.linalg.eig? From the documentation:
Solve an ordinary or generalized eigenvalue problem of a square matrix.
This method has an optional parameter b:
scipy.linalg.eig(a, b=None, ...
b : (M, M) array_like, optional
Right-hand side matrix in a generalized eigenvalue problem.
Default is None, identity matrix is assumed.
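For example, a short sketch with made-up symmetric matrices (note that eig returns the eigenvalues as complex numbers even when they are real):
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
d = 4
A = rng.random((d, d)); A = A + A.T                   # symmetric A
B = rng.random((d, d)); B = B @ B.T + d * np.eye(d)   # positive definite B

w, v = eig(A, b=B)   # generalized problem: A v[:, i] = w[i] * B v[:, i]
print(w)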
Suppose I have a symmetric matrix A and a vector b and want to find A^(-1) b. Now, a symmetric N x N system is well known to be solvable more cheaply than a general one (roughly half the flops of general Gaussian elimination), and I believe that in MATLAB this can be done as A\b. But all I can find in Python is numpy.linalg.solve(), which will do general Gaussian elimination, which is O(N^3). I must not be looking in the right place...
scipy.linalg.solve has an argument to make it assume a symmetric matrix:
x = scipy.linalg.solve(A, b, assume_a="sym")
If you know your matrix is not just symmetric but positive definite you can give this stronger assumption instead, as "pos".
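A minimal sketch of both assumptions (the matrices here are hypothetical):
import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n)); A = A + A.T   # symmetric
b = rng.random(n)

x_sym = solve(A, b, assume_a="sym")   # symmetric indefinite solver (LDL^T)
print(np.allclose(A @ x_sym, b))

A_pos = A @ A.T + n * np.eye(n)       # symmetric positive definite
x_pos = solve(A_pos, b, assume_a="pos")   # Cholesky-based solver
print(np.allclose(A_pos @ x_pos, b))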
I need to solve a set of simultaneous equations of the form Ax = B for x. I've used the numpy.linalg.solve function, inputting A and B, but I get the error 'LinAlgError: Last 2 dimensions of the array must be square'. How do I fix this?
Here's my code:
A = matrix([[v1x, v2x], [v1y, v2y], [v1z, v2z]])
print(A)
B = [(p2x-p1x-nmag[0]), (p2y-p1y-nmag[1]), (p2z-p1z-nmag[2])]
print(B)
x = numpy.linalg.solve(A, B)
The values of the matrix/vector are calculated earlier in the code and this works fine, but the values are:
A =
[[-0.56666301, -0.52472909],
 [ 0.44034147,  0.46768087],
 [ 0.69641397,  0.71129036]]
B =
[-0.38038602567630364, -24.092279373295057, 0.0]
x should have the form (x1,x2,0)
In case you still haven't found an answer, or in case someone in the future has this question.
To solve Ax=b:
numpy.linalg.solve uses LAPACK gesv. As mentioned in the documentation of LAPACK, gesv requires A to be square:
LA_GESV computes the solution to a real or complex linear system of equations AX = B, where A is a square matrix and X and B are rectangular matrices or vectors. Gaussian elimination with row interchanges is used to factor A as A = P*L*U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. The factored form of A is then used to solve the above system.
If the matrix A is not square, then you have either more variables than equations or the other way around. In these situations there can be no solution or infinitely many solutions. What determines the solution space is the rank of the matrix compared to the number of columns. Therefore, you first have to check the rank of the matrix.
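For instance, a quick check with the A from the question above:
import numpy as np

A = np.array([[-0.56666301, -0.52472909],
              [ 0.44034147,  0.46768087],
              [ 0.69641397,  0.71129036]])
# rank 2 equals the number of columns, so a unique least-squares solution exists
print(np.linalg.matrix_rank(A))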
That being said, you can use another method to solve your system of linear equations. I suggest having a look at factorization methods like LU, QR, or even SVD. In LAPACK you can use getrs; in Python you can do different things (see the sketch after this list):
- first do a factorization like QR and then feed the resulting matrices to a method like scipy.linalg.solve_triangular
- solve the least-squares problem using numpy.linalg.lstsq
Also have a look here where a simple example is formulated and solved.
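As a minimal sketch of both routes, using the 3 x 2 system from the question:
import numpy as np
from scipy.linalg import qr, solve_triangular

A = np.array([[-0.56666301, -0.52472909],
              [ 0.44034147,  0.46768087],
              [ 0.69641397,  0.71129036]])
b = np.array([-0.38038602567630364, -24.092279373295057, 0.0])

# route 1: least squares directly
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

# route 2: economic QR factorization, then a triangular solve
Q, R = qr(A, mode='economic')        # A = Q R with Q (3 x 2), R (2 x 2)
x_qr = solve_triangular(R, Q.T @ b)  # R is upper triangular

print(np.allclose(x_lstsq, x_qr))    # both give the same least-squares solution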
A square matrix is a matrix with the same number of rows and columns. The matrix you are passing is 3 by 2. Add a column of zeroes to fix this problem.
I am trying to find the eigenvalues and eigenvectors of a complex matrix with scipy.sparse.linalg.eigsh using its shift-invert mode. With just real numbers in the matrix I get the same result as the scipy.linalg.eigh solver, but when I add the imaginary parts the eigenvalues diverge. A tiny example:
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.linalg import eigsh
n = 10
X = np.random.random((n, n)) - 0.5 + (np.random.random((n, n)) - 0.5) * 1j
X = np.dot(X, X.T) # create a symmetric matrix
evals_all, evecs_all = eigh(X)
evals_small, evecs_small = eigsh(X, 3, sigma=0, which='LM')
print(sorted(evals_all, key=abs))
print(sorted(evals_small, key=abs))
The prints in this case are for example
[0.041577858515751132, -0.084104744918533481, -0.58668240775486691, 0.63845672501004724, -1.2311727737115068, 1.5193345703630159, -1.8652302423152105, 1.9970059660853923, -2.6414593461321654, 2.8624290667460293]
[-0.017278543470343462, -0.32684893256215408, 0.34551438015659475]
whereas in the real case, the first three eigenvalues are identical.
I am aware that I'm passing a dense matrix to the sparse solver, but this is just intended as an example.
I am probably missing something obvious somewhere, but I'd be happy about some hints where to look. Thank you!
scipy does not check whether your input is Hermitian.
Adding a check like the one proposed in the link:
if not np.allclose(X, np.asmatrix(X).H):
    raise ValueError('expected symmetric or Hermitian matrix')
outputs:
ValueError: expected symmetric or Hermitian matrix
I think this is also indicated by those negative eigenvalues you see (but complex-based math is really not my speciality...).
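A sketch of the fix, assuming a Hermitian matrix was actually intended: build X from X times its conjugate transpose instead of the plain transpose:
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.linalg import eigsh

n = 10
X = np.random.random((n, n)) - 0.5 + (np.random.random((n, n)) - 0.5) * 1j
X = np.dot(X, X.conj().T)  # X @ X^H is Hermitian (and positive semi-definite)
evals_all, evecs_all = eigh(X)
evals_small, evecs_small = eigsh(X, 3, sigma=0, which='LM')
print(sorted(evals_all, key=abs)[:3])
print(sorted(evals_small, key=abs))  # now the two solvers agree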
I solved a least-squares problem (Ax = b for A) using pinv in numpy, (pinv, lstsq) in scipy, and "/" in MATLAB. I got some different answers, so what is the difference between them?
Problem: Ax = b, solving for A.
Methods:
- pinv in numpy and scipy: A = b * pinv(x)
- lstsq in scipy: A.T = lstsq(x.T, b.T), i.e. the problem is recast as x.T * A.T = b.T
- / in MATLAB: A = b / x
matlab: https://cn.mathworks.com/help/matlab/ref/mrdivide.html
"x = B/A solves the system of linear equations xA = B for x. The matrices A and B must contain the same number of columns. If A is a rectangular m-by-n matrix with m ~= n, and B is a matrix with n columns, then x = B/A returns a least-squares solution of the system of equations xA = B. "
scipy.linalg.lstsq: Compute least-squares solution to equation Ax = b.
scipy.linalg.pinv: Compute the (Moore-Penrose) pseudo-inverse of a matrix. Calculate a generalized inverse of a matrix using a least-squares solver.
numpy.linalg.pinv: Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all large singular values.
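The two Python routes coincide when x has full row rank; a hypothetical sketch:
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(0)
x = rng.random((3, 5))   # full row rank (almost surely)
b = rng.random((2, 5))

A_pinv = b @ np.linalg.pinv(x)   # A = b * pinv(x)
A_lstsq = lstsq(x.T, b.T)[0].T   # solve x.T A.T = b.T, then transpose back
print(np.allclose(A_pinv, A_lstsq))  # True: both give the least-squares solution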
I would like to solve a sparse linear system of equations A x = b, where A is an (M x M) array, b is an (M x N) array, and x is an (M x N) array.
I can solve this in three ways, using:
scipy.linalg.solve(A.toarray(), b.toarray()),
scipy.sparse.linalg.spsolve(A, b),
scipy.sparse.linalg.splu(A).solve(b.toarray()) # returns a dense array
I wish to solve the problem using the iterative scipy.sparse.linalg methods:
scipy.sparse.linalg.cg,
scipy.sparse.linalg.bicg,
...
However, these methods support only a right-hand side b with shape (M,) or (M, 1). Any ideas on how to extend them to an (M x N) array b?
A key difference between iterative solvers and direct solvers is that direct solvers can solve for multiple right-hand sides more efficiently by reusing a factorization (usually either Cholesky or LU), while iterative solvers can't. This means that for direct solvers there is a computational advantage to solving for multiple columns simultaneously.
For iterative solvers, on the other hand, there's no computational gain to be had in simultaneously solving multiple columns, and this is probably why matrix solutions are not supported natively in the API of cg, bicg, etc.
Because of this, a direct solution like scipy.sparse.linalg.spsolve will probably be optimal for your case. If for some reason you still desire an iterative solution, I'd just create a simple convenience function like this:
import numpy as np
from scipy.sparse.linalg import bicg

def bicg_solve(M, B):
    # run bicg once per column of B and stack the solutions
    X, info = zip(*(bicg(M, b) for b in B.T))
    return np.transpose(X), info
Then you can create some data and call it as follows:
import numpy as np
from scipy.sparse import csc_matrix
# create some matrices
M = csc_matrix(np.random.rand(5, 5))
B = np.random.rand(5, 4)
X, info = bicg_solve(M, B)
print(X.shape)
# (5, 4)
Any iterative solver API that accepts a matrix on the right-hand side will essentially just be a wrapper for something like this.