I am trying to obtain the left inverse of a non-square matrix in python using either numpy or scipy.
How can I translate the following Matlab code to Python?
>> A = [0,1; 0,1; 1,0]
A =
0 1
0 1
1 0
>> y = [2;2;1]
y =
2
2
1
>> A\y
ans =
1.0000
2.0000
Is there a numpy or scipy equivalent of the left inverse \ operator in Matlab?
Use linalg.lstsq(A, y) since A is not square. See here for details. You could use linalg.solve(A, y) if A were square, but not in your case.
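For example, with the A and y from the question, a minimal sketch using numpy's lstsq:
import numpy as np

A = np.array([[0., 1.], [0., 1.], [1., 0.]])
y = np.array([2., 2., 1.])
x, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(x)  # [1. 2.]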
Here is a method that will work with sparse matrices (which, from your comments, is what you want); it uses the leastsq function from the optimize package.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.optimize import leastsq

A = csr_matrix([[0., 1.], [0., 1.], [1., 0.]])
b = np.array([[2.], [2.], [1.]])

def myfunc(x):
    x.shape = (2, 1)          # leastsq passes a flat vector; reshape for the matrix product
    return (A * x - b)[:, 0]  # return the residual as a 1-D array

print(leastsq(myfunc, np.random.rand(2))[0])
generates
[ 1. 2.]
It is kind of ugly because of how I had to reshape things to match what leastsq wanted. Maybe someone else knows how to make this a little tidier.
I have also tried to get something working with the functions in scipy.sparse.linalg using LinearOperators, but to no avail. The problem is that all of those functions are made to handle square systems only. If anyone finds a way to do it that way, I would like to know as well.
For those who wish to solve large sparse least squares problems:
I have added the LSQR algorithm to SciPy. With the next release, you'll be able to do:
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr
import numpy as np
A = csr_matrix([[0., 1], [0, 1], [1, 0]])
b = np.array([[2.], [2.], [1.]])
lsqr(A, b)
which returns a tuple whose first element is the answer [ 1., 2.].
If you'd like to use this new functionality without upgrading SciPy, you may download lsqr.py from the code repository at
http://projects.scipy.org/scipy/browser/trunk/scipy/sparse/linalg/isolve/lsqr.py
You can also use the pseudo-inverse function pinv in numpy/scipy, as an alternative to the other answers.
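A quick sketch of that approach with the matrices from the question:
import numpy as np

A = np.array([[0., 1.], [0., 1.], [1., 0.]])
y = np.array([2., 2., 1.])
x = np.linalg.pinv(A).dot(y)  # pinv gives the Moore-Penrose pseudo-inverse
print(x)  # [1. 2.]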
You can calculate the left inverse using matrix calculations:
import numpy as np
linv_A = np.linalg.solve(A.T.dot(A), A.T)
(Why? Because $(A^T A)^{-1} (A^T A) = I$, so $A_{\text{linv}} = (A^T A)^{-1} A^T$ satisfies $A_{\text{linv}} A = I$.)
Test:
np.set_printoptions(suppress=True, precision=3)
np.random.seed(123)
A = np.random.randn(3, 2)
print('A\n', A)
A_linv = np.linalg.solve(A.T.dot(A), A.T)
print('A_linv.dot(A)\n', A_linv.dot(A))
Result:
A
[[-1.086 0.997]
[ 0.283 -1.506]
[-0.579 1.651]]
A_linv.dot(A)
[[ 1. -0.]
[ 0. 1.]]
I haven't tested it, but according to this web page it is:
linalg.solve(A, y)
(Note, though, that solve requires a square A; for the non-square A in this question you need lstsq or lsqr as described above.)
You can use lsqr from scipy.sparse.linalg to solve sparse matrix systems with least squares.
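For example, a minimal sketch with the same matrices as above:
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

A = csr_matrix([[0., 1.], [0., 1.], [1., 0.]])
b = [2., 2., 1.]
x = lsqr(A, b)[0]  # the first element of the returned tuple is the solution
print(x)  # [1. 2.]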
Related
For the iterative solvers in scipy.sparse.linalg such as bicg, gmres, etc., there is an option to add a preconditioner for the matrix A. However, the documentation is not very clear about what I should give as the preconditioner. If I use ilu = sp.sparse.linalg.spilu(A), ilu is not a matrix but an object that encompasses many things.
Someone asked a similar question here for Python 2.7, but it doesn't work for me (Python 3.7, scipy version 1.1.0).
So my question is: how do I incorporate an incomplete LU preconditioner into those iterative algorithms?
As a preconditioner, bicg or gmres accept:
a sparse matrix
a dense matrix
a linear operator
In your case, the preconditioner comes from a factorization, so it has to be passed as a linear operator.
Thus, you may want to explicitly define a linear operator from the ILU factorization you obtained via spilu. Something along these lines:
sA_iLU = sparse.linalg.spilu(sA)
M = sparse.linalg.LinearOperator((nrows,ncols), sA_iLU.solve)
Here, sA is a sparse matrix in a CSC format, and M will now be the preconditioner linear operator that you will supply to the iterative solver.
A complete example based on the question you mentioned:
import numpy as np
from scipy import sparse
from scipy.sparse import linalg
A = np.array([[ 0.4445, 0.4444, -0.2222],
[ 0.4444, 0.4445, -0.2222],
[-0.2222, -0.2222, 0.1112]])
sA = sparse.csc_matrix(A)
b = np.array([[ 0.6667],
[ 0.6667],
[-0.3332]])
sA_iLU = sparse.linalg.spilu(sA)
M = sparse.linalg.LinearOperator((3, 3), sA_iLU.solve)
x, info = sparse.linalg.gmres(A, b, M=M)  # gmres returns (solution, exit code)
print(x)
Notes:
I am actually using a dense matrix as an example, while it would make more sense to start from a representative sparse matrix in your case.
the size of the linear operator M is hardcoded.
the ILU has not been configured in any way; the defaults are used.
this is pretty much what was suggested in the comments to the aforementioned answer; however, I had to make small tweaks to get it Python 3-compatible.
So I am trying to implement a simple optimization code in Python using the CVXPY package (an optimization problem with a linear matrix inequality constraint). The code is shown below.
I have tried running the code using Python 3.6.
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt
import control as cs
gamma = cp.Variable()
MAT1 = np.array([[1, gamma], [1+gamma, 3]])
constraints_2 = [MAT1 >> 0]
prob = cp.Problem(cp.Minimize(gamma),constraints_2)
prob.solve()
Every time I try to run this code, I get the following error:
"Non-square matrix in positive definite constraint."
But the matrix is clearly square! So I don't know what is happening.
Any ideas?
Your help is much appreciated!
MAT1 is a numpy array; you'll need to make it a cvxpy Variable to use the semidefinite constraint. Try this:
MAT1 = cp.Variable((2, 2))
constraints_2 = [MAT1 >> 0, MAT1[0, 0] == 1, MAT1[1, 0] == 1 + MAT1[0, 1], MAT1[1, 1] == 3]
prob = cp.Problem(cp.Minimize(MAT1[0, 1]), constraints_2)
gamma is then about -2.73
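To read the value off, solve the problem and inspect the result (prob and MAT1 as defined above):
prob.solve()
print(prob.value)   # optimal value of the objective, i.e. gamma, about -2.73
print(MAT1.value)   # the optimal matrix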
There is a technical and a conceptual problem here.
Technical problem
The problem is that your MAT1 is a plain numpy array holding cvxpy objects, not a cvxpy expression, so cvxpy cannot interpret the constraint.
You could have written
MAT1 = cvxpy.vstack([cvxpy.hstack([1, gamma]), cvxpy.hstack([1 + gamma, 3])])
This way cvxpy will accept your code, but it will not give the correct answer.
Conceptual problem
SDP problems are convex only for symmetric matrices; what cvxpy does is make the matrix symmetric (apparently by adding it to its transpose). The solution given is the minimum gamma such that [[1, 0.5+gamma], [0.5+gamma, 3]] >> 0.
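If you want that symmetrized problem written out explicitly, here is a sketch using cvxpy.bmat (one way among several of assembling a symmetric cvxpy expression):
import cvxpy as cp

gamma = cp.Variable()
# the symmetrized matrix described above
M = cp.bmat([[1, 0.5 + gamma], [0.5 + gamma, 3]])
prob = cp.Problem(cp.Minimize(gamma), [M >> 0])
prob.solve()
print(gamma.value)  # the minimum gamma for which M is positive semidefinite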
I would like to generate a matrix M, whose elements M(i,j) are from a standard normal distribution. One trivial way of doing it is,
import numpy as np
A = [ [np.random.normal() for i in range(3)] for j in range(3) ]
A = np.array(A)
print(A)
[[-0.12409887 0.86569787 -1.62461893]
[ 0.30234536 0.47554092 -1.41780764]
[ 0.44443707 -0.76518672 -1.40276347]]
But, I was playing around with numpy and came across another "solution":
import numpy as np
import numpy.matlib as npm
A = np.random.normal(npm.zeros((3, 3)), npm.ones((3, 3)))
print(A)
[[ 1.36542538 -0.40676747 0.51832243]
[ 1.94915748 -0.86427391 -0.47288974]
[ 1.9303462 -1.26666448 -0.50629403]]
I read the documentation for numpy.random.normal, and it doesn't clarify how this function works when a matrix is passed instead of a single value. I suspected that in the second "solution" I might be drawing from a multivariate normal distribution. But this can't be true, because the input arguments both have the same dimensions (the covariance should be a matrix and the mean a vector). I am not sure what is being generated by the second code.
The intended way to do what you want is
A = np.random.normal(0, 1, (3, 3))
The third argument is the optional size parameter, which tells numpy what shape you want returned (3 by 3 in this case).
Your second way works too, because the documentation states
If size is None (default), a single value is returned if loc and scale are both scalars. Otherwise, np.broadcast(loc, scale).size samples are drawn.
So there is no multivariate distribution and no correlation.
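In other words, loc and scale are broadcast against each other and every entry is drawn independently. A small sketch (shapes chosen for illustration) showing the two forms agree:
import numpy as np

A = np.random.normal(loc=np.zeros((3, 3)), scale=np.ones((3, 3)))  # broadcast form
B = np.random.normal(0, 1, size=(3, 3))                            # size form, same distribution
print(A.shape, B.shape)  # (3, 3) (3, 3)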
I have a function in python (using scipy and numpy also) defined as
import numpy as np
from scipy import integrate
LCDMf = lambda x: 1.0/np.sqrt(0.3*(1+x)**3+0.7)
I would like to integrate it from 0 to each element in a numpy array, say z = np.arange(0, 100).
I know I can write a loop, iterating over each element like
an = integrate.quad(LCDMf, 0, z[i])
But I was wondering if there is a faster, more efficient (and simpler) way to do this for each numpy element.
You could rephrase the problem as an ODE.
The odeint function can then be used to compute F(z) for a series of z.
>>> scipy.integrate.odeint(lambda y, t: LCDMf(t), 0, [0, 1, 2, 5, 8])
array([[ 0. ], # integrate until z = 0 (must exist, to provide initial value)
[ 0.77142712], # integrate until z = 1
[ 1.20947123], # integrate until z = 2
[ 1.81550912], # integrate until z = 5
[ 2.0881925 ]]) # integrate until z = 8
After tinkering with np.vectorize I found the following solution. Simple, elegant, and it works!
import numpy as np
from scipy import integrate

LCDMf = lambda x: 1.0/np.sqrt(0.3*(1+x)**3+0.7)

def LCDMfint(z):
    return integrate.quad(LCDMf, 0, z)[0]  # quad returns (value, abserr); keep the value

LCDMfint = np.vectorize(LCDMfint)
z = np.arange(0, 100)
an = LCDMfint(z)
print(an)
This method works with unsorted float arrays or anything else we throw at it, and it doesn't need any initial conditions as the odeint method does.
I hope this helps someone somewhere too... Thanks all for your inputs.
It can definitely be done more efficiently. In the end what you've got is a series of calculations:
Integrate LCDMf from 0 to 1.
Integrate LCDMf from 0 to 2.
Integrate LCDMf from 0 to 3.
So the very first thing you can do is change it to integrate distinct subintervals and then sum them (after all, what else is integration!):
Integrate LCDMf from 0 to 1.
Integrate LCDMf from 1 to 2, add result from step 1.
Integrate LCDMf from 2 to 3, add result from step 2.
Next you can dive into LCDMf, which is defined this way:
1.0/np.sqrt(0.3*(1+x)**3+0.7)
You can use NumPy broadcasting to evaluate this function at many points instantly:
dx = 0.0001
x = np.arange(0, 100, dx)
y = LCDMf(x)
That should be quite fast, and gives you one million points on the curve. Now you can integrate it using scipy.integrate.trapz() or one of the related functions. Call this with the y and dx already computed, using the workflow above where you integrate each interval and then use cumsum() to get your final result. The only function you need to call in a loop then is the integrator itself.
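A sketch of that workflow with scipy.integrate.cumtrapz, which does the per-interval integration and the running sum in one call (dx, x, and y as above):
import numpy as np
from scipy.integrate import cumtrapz

LCDMf = lambda x: 1.0/np.sqrt(0.3*(1+x)**3+0.7)
dx = 0.0001
x = np.arange(0, 100, dx)
y = LCDMf(x)

F = cumtrapz(y, dx=dx, initial=0)  # F[i] approximates the integral from 0 to x[i]
print(F[int(1/dx)])  # integral from 0 to 1; compare with the odeint result above (~0.7714)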
I need to compute $Y_{ij} = \sum_k (x_{ik} - x_{jk})^2$ in numpy, where $x_i$ and $x_j$ are rows of a matrix $X$. Right now I am using a loop, which is very slow. Is there any native numpy function that allows such a computation, like einsum:
n = X.shape[0]
Y = np.zeros((n, n))
for i in range(n):
    x = (X - X[i])**2
    x = np.sum(x, axis=1)
    Y[i] = x
return Y
BTW, I am very confused by einsum. Is there any good introductory material for it? The manual page on numpy was not very clear to me.
Approach #1
You can use broadcasting as a vectorized approach -
import numpy as np
Y = np.sum((X - X[:, None, :])**2, axis=2)
This should be efficient with relatively smaller input arrays.
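Since the question mentions einsum: the same squared distances follow from the expansion $\|x_i - x_j\|^2 = \|x_i\|^2 + \|x_j\|^2 - 2\, x_i \cdot x_j$. A sketch (X here is random example data; substitute your own):
import numpy as np

X = np.random.rand(5, 3)
sq = np.einsum('ij,ij->i', X, X)                 # row-wise squared norms ||x_i||^2
Y = sq[:, None] + sq[None, :] - 2 * X.dot(X.T)   # ||x_i||^2 + ||x_j||^2 - 2 x_i . x_j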
Approach #2
It seems like you are performing Euclidean distance calculations and want the squared distances. So, you can use distance.cdist like so -
import numpy as np
from scipy.spatial import distance
Y = distance.cdist(X, X, 'sqeuclidean')
This should be efficient with large input arrays.