How to implement an ILU preconditioner in scipy? - python

For the iterative solvers in scipy.sparse.linalg, such as bicg, gmres, etc., there is an option to supply a preconditioner for the matrix A. However, the documentation is not very clear about what I should pass as the preconditioner. If I use ilu = sp.sparse.linalg.spilu(A), ilu is not a matrix but an object that encapsulates many things.
Someone asked a similar question here for Python 2.7, but it doesn't work for me (Python 3.7, scipy version 1.1.0).
So my question is: how do I incorporate an incomplete LU preconditioner into those iterative algorithms?

As a preconditioner, bicg and gmres accept a
sparse matrix,
dense matrix, or
linear operator.
In your case, the preconditioner comes from a factorization, so it has to be passed as a linear operator. You therefore want to explicitly define a linear operator from the ILU factorization you obtained via spilu, along these lines:
sA_iLU = sparse.linalg.spilu(sA)
M = sparse.linalg.LinearOperator((nrows,ncols), sA_iLU.solve)
Here, sA is a sparse matrix in CSC format, and M is the preconditioner linear operator that you supply to the iterative solver.
A complete example based on the question you mentioned:
import numpy as np
from scipy import sparse
from scipy.sparse import linalg

A = np.array([[ 0.4445,  0.4444, -0.2222],
              [ 0.4444,  0.4445, -0.2222],
              [-0.2222, -0.2222,  0.1112]])
sA = sparse.csc_matrix(A)
b = np.array([[ 0.6667],
              [ 0.6667],
              [-0.3332]])

# build the ILU preconditioner as a linear operator
sA_iLU = sparse.linalg.spilu(sA)
M = sparse.linalg.LinearOperator((3, 3), sA_iLU.solve)

# gmres returns the solution and an exit code (0 == converged)
x, exit_code = sparse.linalg.gmres(A, b, M=M)
print(x, exit_code)
Notes:
I am actually using a dense matrix as an example, while it would make more sense to start from a representative sparse matrix in your case.
The size of the linear operator M is hardcoded.
The ILU has not been configured in any way; the defaults are used (but see the sketch after these notes).
This is pretty much what was suggested in the comments to the aforementioned answer; however, I had to make small tweaks for Python 3 compatibility.
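If the default incomplete factorization turns out to be too crude or too dense for your problem, spilu accepts tuning parameters such as drop_tol and fill_factor; a minimal sketch (the numeric values are purely illustrative):
sA_iLU = sparse.linalg.spilu(sA, drop_tol=1e-4, fill_factor=10)
M = sparse.linalg.LinearOperator(sA.shape, sA_iLU.solve)  # shape taken from sA rather than hardcoded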

Related

python equivalent for `eigs` in matlab with a matrix function

If I want to calculate the k smallest eigenvalues of the matrix product AA' with A of size 300K by 512, where ' denotes the transpose, it would be infeasible to do it the traditional way. Matlab, however, provides a nice facility: you can pass a function handle that performs the product, Afun = @(x) A*(A'*x);, to the eigs function. Then, to find the smallest 6 eigenvalues/eigenvectors we call d = eigs(Afun,300000,6,'smallestabs'), where the second input is the size of the matrix AA'. Is there a function in Python that performs a similar thing?
To my knowledge, there is no such functionality in numpy. However, I don't see anything stopping you from simply retrieving all the eigenvalues with numpy.linalg.eigvals and then finding the N smallest with a sort:
import numpy as np

A = ...  # your matrix, as a dense ndarray
eigvals = np.linalg.eigvals(A)
eigvals.sort()  # sorts in place, in ascending order
smallest_6_eigvals = eigvals[:6]
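For completeness: while numpy itself has no matrix-free eigensolver, scipy.sparse.linalg.eigsh does accept a LinearOperator built from exactly this kind of function handle. A minimal sketch, with a small random A standing in for the 300K-by-512 matrix (shrunk so that AA' is full rank and the example converges quickly):
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

A = np.random.rand(200, 512)  # stand-in for the 300K-by-512 matrix
n = A.shape[0]

# the counterpart of Matlab's Afun = @(x) A*(A'*x)
op = LinearOperator((n, n), matvec=lambda x: A @ (A.T @ x), dtype=A.dtype)

# 'SM' plays the role of 'smallestabs'; without shift-invert, ARPACK
# can converge slowly towards the smallest eigenvalues
d = eigsh(op, k=6, which='SM', return_eigenvectors=False)
One caveat for the actual 300K-by-512 case: AA' has rank at most 512, so its smallest eigenvalues are exactly zero; the nonzero ones are more cheaply obtained from the 512-by-512 matrix A'A, which shares its nonzero eigenvalues with AA'.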

Use of scipy sparse in ode solver

I am trying to solve the differential equation system
x' = Ax with x(0) = f(x)
in Python, where A is indeed a complex sparse matrix.
For now I have been solving the system using the scipy.integrate.complex_ode class in the following way:
def to_solver_function(time, vector):
    sendoff = np.dot(A, np.transpose(vector))
    return sendoff

solver = complex_ode(to_solver_function)
solver.set_initial_value(f(x), 0)
solution = [f(x)]
for time in time_grid:
    next_point = solver.integrate(time)
    solution.append(next_point)
This has been working OK, but I need to "tell the solver" that my matrix is sparse. I figured out that I should use
Asparse = sparse.lil_matrix(A)
but how do I change my solver to work with this?
How large and sparse is A?
It looks like A is just a constant in this function:
def to_solver_function(time, vector):
    sendoff = np.dot(A, np.transpose(vector))
    return sendoff
Is vector 1d? If so, np.transpose(vector) does nothing.
For calculation purposes you want
Asparse = sparse.csr_matrix(A)
Does np.dot(Asparse, vector) work? np.dot is supposed to be sparse aware. If not, try Asparse*vector. This probably produces a dense matrix, so you may need (Asparse*vector).A1 to produce a 1d array.
But check the timings: Asparse needs to be quite large and very sparse to perform faster than dense A in a dot product.
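Putting this together, a minimal sketch of the sparse version (assuming A, f, x, and time_grid exist as in the question):
from scipy import sparse
from scipy.integrate import complex_ode

Asparse = sparse.csr_matrix(A)  # convert once, outside the RHS function

def to_solver_function(time, vector):
    # a sparse matrix times a 1d ndarray returns a 1d ndarray
    return Asparse.dot(vector)

solver = complex_ode(to_solver_function)
solver.set_initial_value(f(x), 0)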

Python: scipy.sparse.linalg.eigsh for complex Hermitian matrices

I am trying to diagonalise a simple sparse Hermitian matrix using Python's scipy.sparse.linalg.eigsh function; although the documentation says it supports Hermitian matrices, the Python wrapper file arpack.py says it does not:
# The entry points to ARPACK are
# - (s,d)seupd : single and double precision symmetric matrix
# - (s,d,c,z)neupd: single,double,complex,double complex general matrix
# This wrapper puts the *neupd (general matrix) interfaces in eigs()
# and the *seupd (symmetric matrix) in eigsh().
# There is no Hermetian complex/double complex interface.
# To find eigenvalues of a Hermetian matrix you
# must use eigs() and not eigsh()
# It might be desirable to handle the Hermetian case differently
# and, for example, return real eigenvalues.
Indeed the above comments from the wrapper are vindicated when I try to run the following code for diagonalising a sparse complex Hermitian matrix using eigsh:
from scipy.sparse import *
from scipy.sparse.linalg import *
import math

sign = lambda x: math.copysign(1, x)

S = lil_matrix((5,5), dtype=complex)
for i in range(5):
    for j in range(5):
        S[i,j] = sign(i-j)*1j

eigval = eigsh(-S, k=1, M=None, sigma=None, which='LM', v0=None, ncv=None, tol=0, return_eigenvectors=False)
Then the following error comes up:
ValueError: Input matrix is not real-valued.
Of course, if I use a regular numpy matrix for S and the numpy.linalg.eigh function, then all the eigenvalues are computed. But
I do not want all the eigenvalues, and
I need my matrix to be sparse.
Does anyone know what the point is, then, of having the sparse eigsh for complex Hermitian matrices if it cannot be used on them?
Note: I understand that the matrix in the example here is not necessarily sparse by some definition of sparsity but it's just used for illustrative purposes.
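Following the wrapper's own advice, one workaround is to call eigs() instead: it uses the general-matrix (*neupd) interface, so complex input is accepted, and for a genuinely Hermitian matrix the returned eigenvalues have negligible imaginary parts. A sketch (the diagonal is skipped here, since a Hermitian matrix needs a real diagonal, whereas the construction above puts 1j there):
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigs
import math

sign = lambda x: math.copysign(1, x)

S = lil_matrix((5, 5), dtype=complex)
for i in range(5):
    for j in range(5):
        if i != j:
            S[i, j] = sign(i - j) * 1j

eigval = eigs(S.tocsc(), k=1, which='LM', return_eigenvectors=False)
print(eigval.real)  # imaginary parts are ~0 for a Hermitian input
Also, newer SciPy releases document eigsh as accepting complex Hermitian matrices directly, so it is worth checking the behaviour of your installed version before resorting to this workaround.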

Efficient numpy / lapack routine for product of inverse and sparse matrix?

I have a matrix B that is square and dense, and a matrix A that is rectangular and sparse.
Is there a way to efficiently compute the product B^-1 * A?
So far, I use (in numpy)
tmp = B.inv()
return tmp * A
which, I believe, makes use of A's sparsity. I was thinking about using the sparse method
scipy.sparse.linalg.spsolve, but this requires B, and not A, to be sparse.
Is there another way to speed things up?
Since the matrix to be inverted is dense, spsolve is not the tool you want. In addition, it is bad numerical practice to calculate the inverse of a matrix and multiply it by another - you are much better off using LU decomposition, which is supported by scipy.
Another point is that unless you are using the matrix class (I think that the ndarray class is better; this is something of a question of taste), you need to use dot instead of the multiplication operator. And if you want to efficiently multiply a sparse matrix by a dense matrix, you need to call the dot method of the sparse matrix. Unfortunately this only works if the first matrix is sparse, so you need to use the trick which Anycorn suggested of taking transposes to swap the order of operations.
Here is a lazy implementation which doesn't use the LU decomposition, but which should otherwise be efficient:
import scipy.linalg

B_inv = scipy.linalg.inv(B)
# A is sparse, so A.dot(dense) is the efficient order; transposing twice
# swaps the operands back: (A^T B_inv^T)^T == B_inv A
C = (A.transpose().dot(B_inv.transpose())).transpose()
Doing it properly with the LU decomposition involves finding a way to efficiently multiply a triangular matrix by a sparse matrix, which currently eludes me.
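In the meantime, scipy.linalg.lu_factor and lu_solve let you apply B^-1 without ever forming it, at the cost of densifying A for the solve; a sketch with stand-in shapes:
import numpy as np
from scipy import sparse
from scipy.linalg import lu_factor, lu_solve

B = np.random.rand(500, 500)                            # dense and square (stand-in)
A = sparse.random(500, 50, density=0.01, format='csc')  # sparse and rectangular (stand-in)

lu, piv = lu_factor(B)                # factor B once
C = lu_solve((lu, piv), A.toarray())  # solves B C = A, i.e. C = B^-1 * A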

Python Inverse of a Matrix

How do I get the inverse of a matrix in Python? I've implemented it myself, but it's pure Python, and I suspect there are faster modules out there to do it.
You should have a look at numpy if you do matrix manipulation. This is a module mainly written in C, which will be much faster than programming in pure Python. Here is an example of how to invert a matrix, and do other matrix manipulation.
from numpy import matrix
from numpy import linalg

A = matrix( [[1,2,3],[11,12,13],[21,22,23]] )  # Creates a matrix.
x = matrix( [[1],[2],[3]] )                    # Creates a matrix (like a column vector).
y = matrix( [[1,2,3]] )                        # Creates a matrix (like a row vector).
print(A.T)                 # Transpose of A.
print(A*x)                 # Matrix multiplication of A and x.
print(A.I)                 # Inverse of A.
print(linalg.solve(A, x))  # Solve the linear equation system.
You can also have a look at the array module, which is a much more efficient implementation of lists when you have to deal with only one data type.
Make sure you really need to invert the matrix. This is often unnecessary and can be numerically unstable. When most people ask how to invert a matrix, they really want to know how to solve Ax = b, where A is a matrix and x and b are vectors. It's more efficient and more accurate to use code that solves Ax = b for x directly than to calculate A's inverse and then multiply it by b. Even if you need to solve Ax = b for many values of b, it's not a good idea to invert A. If you have to solve the system for multiple b values, save the Cholesky factorization of A, but don't invert it; a sketch follows below the link.
See Don't invert that matrix.
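A minimal sketch of that advice, assuming A is symmetric positive definite (which the Cholesky factorization requires); the matrices here are stand-ins:
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # symmetric positive definite stand-in
c = cho_factor(A)           # factor once

for b in (np.array([1.0, 2.0]), np.array([3.0, 4.0])):
    x = cho_solve(c, b)     # reuse the factorization for each right-hand side
    print(x)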
It is a pity that the chosen matrix, repeated here again, is singular (consecutive rows differ by the constant vector [10, 10, 10]):
A = matrix( [[1,2,3],[11,12,13],[21,22,23]])
By definition, the inverse of A, when multiplied by the matrix A itself, must give the identity matrix. The A chosen in the much-praised explanation does not do that. In fact, just looking at the inverse gives a clue that the inversion did not work correctly: the magnitudes of the individual terms are very, very big compared with those of the original A matrix.
It is remarkable how often humans, when picking an example matrix, manage to pick a singular one!
I did have a problem with the solution, so I looked into it further. On the Ubuntu/Kubuntu platform, the Debian numpy package does not include the matrix and linalg sub-packages, so in addition to importing numpy, scipy needs to be imported as well.
If the diagonal terms of A are multiplied by a large enough factor, say 2, the matrix will most likely cease to be singular or near-singular. So
A = matrix( [[2,2,3],[11,24,13],[21,22,46]])
becomes neither singular nor nearly singular, and the example gives meaningful results. When dealing with floating-point numbers one must be watchful for the effects of unavoidable round-off errors.
For those like me, who were looking for a pure Python solution without pandas or numpy involved, check out the following GitHub project: https://github.com/ThomIves/MatrixInverse.
It generously provides a very good explanation of what the process looks like "behind the scenes". The author has nicely described the step-by-step approach and presented some practical examples, all easy to follow.
This is just a little code snippet from there to illustrate the approach very briefly (AM is the source matrix, IM is the identity matrix of the same size):
def invert_matrix(AM, IM):
    # Gauss-Jordan elimination without pivoting: row-reduce AM to the
    # identity while applying the same operations to IM.
    for fd in range(len(AM)):
        # normalise the focus-diagonal row
        fdScaler = 1.0 / AM[fd][fd]
        for j in range(len(AM)):
            AM[fd][j] *= fdScaler
            IM[fd][j] *= fdScaler
        # eliminate the focus column from every other row
        for i in list(range(len(AM)))[0:fd] + list(range(len(AM)))[fd+1:]:
            crScaler = AM[i][fd]
            for j in range(len(AM)):
                AM[i][j] = AM[i][j] - crScaler * AM[fd][j]
                IM[i][j] = IM[i][j] - crScaler * IM[fd][j]
    return IM
But please do follow the entire thing, you'll learn a lot more than just copy-pasting this code! There's a Jupyter notebook as well, btw.
Hope that helps someone, I personally found it extremely useful for my very particular task (Absorbing Markov Chain) where I wasn't able to use any non-standard packages.
You could calculate the determinant of the matrix (which is recursive)
and then form the adjugate matrix; the inverse is the adjugate divided by the determinant.
Here is a short tutorial.
This only works for square matrices - but only square matrices have inverses anyway. A rough sketch of the idea follows below.
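A numpy sketch of the adjugate route (fine for small matrices, but hopeless for large ones because of the per-entry determinants); inverse_via_adjugate is just an illustrative name:
import numpy as np

def inverse_via_adjugate(A):
    n = A.shape[0]
    cof = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # cofactor: signed determinant of the (i, j) minor
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / np.linalg.det(A)  # adjugate = transpose of the cofactor matrix

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(inverse_via_adjugate(A) @ A)   # ~ identity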
Another way of computing the inverse involves Gram-Schmidt orthogonalization: it produces the QR decomposition A = QR, and since the transpose of the orthogonal factor Q is its inverse, A^-1 = R^-1 Q^T, as sketched below.
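A sketch of the QR route (np.linalg.qr uses Householder reflections rather than Gram-Schmidt, but yields an equivalent factorization):
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[2.0, 1.0], [1.0, 3.0]])
Q, R = np.linalg.qr(A)            # A = QR, with Q orthogonal and R upper triangular
A_inv = solve_triangular(R, Q.T)  # A^-1 = R^-1 Q^T
print(A_inv @ A)                  # ~ identity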
Numpy will be suitable for most people, but you can also do matrices in Sympy
Try running these commands at http://live.sympy.org/
from sympy import Matrix  # pre-imported for you on live.sympy.org

M = Matrix([[1, 3], [-2, 3]])
M
M**-1
For fun, try M**(1/2)
If you hate numpy, get out RPy and your local copy of R, and use it instead.
(I would also echo the advice to make sure you really need to invert the matrix. numpy's linalg.solve and R's solve() function, for example, don't actually do a full inversion, since it is unnecessary.)
