Problem when trying to calculate the Jacobian matrix of a nonlinear system of equations - python

I found code for computing the Jacobian matrix here and tried it on a non-linear system of equations.
import autograd.numpy as np
from autograd import jacobian

x = np.array([1, 2], dtype=float)

def fs(x, y):
    return np.array([x + 2*y - 2, x**2 + 4*y**2 - 4])

jacobian_cost = jacobian(fs)
jacobian_cost(x[0], x[1])
But instead of outputting a result like this
array([[ 1.,  2.],
       [ 2., 16.]])
it outputs this
array([1., 2.])
I'm curious what's wrong with this code; maybe I'm missing something.

You need to vectorize your function, and call it in a vectorized manner. So define
def fs(x):
    return np.stack([x[0, :] + 2*x[1, :] - 2, x[0, :]**2 + 4*x[1, :]**2 - 4])
and then call
x = np.array([[1.0], [2.0]])  # shape (2, 1): row 0 holds x, row 1 holds y
output = jacobian(fs)(x)
But note that the autograd library is quite outdated, has sparse documentation, and is no longer actively maintained. I'd recommend using PyTorch or JAX instead.
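If you go the JAX route, here is a minimal sketch of the same computation (my own, not from the original answer); jax.jacfwd builds the forward-mode Jacobian of a function of a single vector argument:
import jax.numpy as jnp
from jax import jacfwd

def fs(v):
    # v = [x, y], same system of equations as in the question
    x, y = v[0], v[1]
    return jnp.array([x + 2*y - 2, x**2 + 4*y**2 - 4])

J = jacfwd(fs)(jnp.array([1.0, 2.0]))  # Jacobian w.r.t. the whole input vector
print(J)  # [[ 1.  2.]
          #  [ 2. 16.]]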

Related

Implement LMI constraint with CVXPY

So I am trying to implement a simple optimization code in Python using the CVXPY package (an optimization problem with a linear matrix inequality constraint). The code is shown below.
I have tried running the code using Python 3.6.
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt
import control as cs
gamma = cp.Variable()
MAT1 = np.array([[1, gamma], [1+gamma, 3]])
constraints_2 = [MAT1 >> 0]
prob = cp.Problem(cp.Minimize(gamma),constraints_2)
prob.solve()
Every time I try to run this code, I get the following error:
"Non-square matrix in positive definite constraint."
But the matrix is clearly square! So I don't know what is happening.
Any ideas?
Your help is much appreciated!
MAT1 is a numpy array; you'll need to make it a cvxpy Variable to use the semidefinite constraint. Try this:
MAT1 = cp.Variable((2, 2))
constraints_2 = [MAT1 >> 0, MAT1[0, 0] == 1, MAT1[1, 0] == 1 + MAT1[0, 1], MAT1[1, 1] == 3]
prob = cp.Problem(cp.Minimize(MAT1[0, 1]), constraints_2)
gamma is then about -2.73
There are two problems here, one technical and one conceptual.
Technical problem
The problem is that your MAT1 is a plain numpy object array holding cvxpy expressions, not a cvxpy expression itself.
You could have written
MAT1 = cvxpy.vstack([cvxpy.hstack([1, gamma]), cvxpy.hstack([1 + gamma, 3])])
or, more concisely,
MAT1 = cvxpy.bmat([[1, gamma], [1 + gamma, 3]])
This way cvxpy will accept your code, but it will not give the answer you expect.
Conceptual problem
SDP constraints are convex only for symmetric matrices, so cvxpy will make the matrix symmetric (apparently by averaging it with its transpose). The solution given is the minimum gamma such that [[1, 0.5+gamma], [0.5+gamma, 3]] >> 0.
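That claim can be checked by hand: a 2x2 symmetric matrix [[1, a], [a, 3]] is PSD exactly when its determinant 3 - a**2 is nonnegative. Here is a minimal sketch of the explicitly symmetric formulation (my own, assuming a cvxpy version that accepts this bmat expression in a PSD constraint):
import cvxpy as cp
import numpy as np

gamma = cp.Variable()
# Explicitly symmetric version of the matrix cvxpy effectively works with
M = cp.bmat([[1, 0.5 + gamma],
             [0.5 + gamma, 3]])
prob = cp.Problem(cp.Minimize(gamma), [M >> 0])
prob.solve()
# With a = 0.5 + gamma, PSD requires 3 - a**2 >= 0,
# so the closed-form optimum is gamma = -0.5 - sqrt(3)
print(gamma.value, -0.5 - np.sqrt(3))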

SAS Proc Corr with Weighting in Python

I have a SAS script that uses the "proc corr" procedure, along with weighting in order to create a weighted correlation matrix. I am now trying to reproduce this function in python, but I haven't found a good way of including the weighting in the output matrix.
While looking for a solution, I've found a few scripts and functions that calculate weighted correlation coefficients for two columns/variables (examples here) using a weights array, but I am trying to create a weighted correlation matrix with many more variables. I've tried using these functions by looping through variable combinations, but they run orders of magnitude slower than the SAS procedure.
I was wondering if there was an efficient way to create a weighted correlation matrix in python that works similarly to the SAS code, or at least returns equivalent results without looping through all variable combinations.
numpy's np.cov takes two different kinds of weight parameters, integer frequency weights (fweights) and observation importance weights (aweights) - I don't have SAS to check against, but it is likely a similar approach.
https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html#numpy.cov
Once you have a covariance matrix, it can be converted to a correlation matrix using a formula like this
https://en.wikipedia.org/wiki/Covariance_matrix#Correlation_matrix
Complete example
import numpy as np

x = np.array([1., 1.1, 1.2, 0.9])
y = np.array([2., 2.05, 2.02, 2.8])

np.cov(x, y)
Out[49]:
array([[ 0.01666667, -0.03816667],
       [-0.03816667,  0.151225  ]])

cov = np.cov(x, y, fweights=[10, 1, 1, 1])
cov
Out[51]:
array([[ 0.00474359, -0.00703205],
       [-0.00703205,  0.04872308]])

def cov_to_corr(cov):
    """ based on https://en.wikipedia.org/wiki/Covariance_matrix#Correlation_matrix """
    D = np.sqrt(np.diag(np.diag(cov)))
    Dinv = np.linalg.inv(D)
    return Dinv @ cov @ Dinv  # the @ operator requires Python 3.5+; use np.dot otherwise

cov_to_corr(cov)
Out[53]:
array([[ 1.        , -0.46255259],
       [-0.46255259,  1.        ]])
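To get the full matrix for several variables at once, pass the whole data array to np.cov. A sketch with made-up data (my own; it assumes one aweights-style weight per observation and reuses the cov_to_corr helper above):
import numpy as np

np.random.seed(0)
data = np.random.randn(100, 5)             # 100 observations of 5 variables
w = np.random.uniform(0.1, 1.0, size=100)  # one weight per observation

cov = np.cov(data, rowvar=False, aweights=w)  # 5x5 weighted covariance
corr = cov_to_corr(cov)                       # 5x5 weighted correlation matrix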

Integrate a function with each element of numpy arrays as limit of integration

I have a function in python (using scipy and numpy also) defined as
import numpy as np
from scipy import integrate
LCDMf = lambda x: 1.0/np.sqrt(0.3*(1+x)**3+0.7)
I would like to integrate it from 0 to each element in a numpy array, say z = np.arange(0, 100)
I know I can write a loop iterating through each element, like
an = integrate.quad(LCDMf, 0, z[i])
But, I was wondering if there is a faster, efficient (and simpler) way to do this with each numpy element.
You could rephrase the problem as an ODE: the antiderivative F(z) of LCDMf satisfies dF/dz = LCDMf(z) with F(0) = 0.
The odeint function can then be used to compute F(z) for a series of z.
>>> scipy.integrate.odeint(lambda y, t: LCDMf(t), 0, [0, 1, 2, 5, 8])
array([[ 0.        ],   # integrate until z = 0 (must exist, to provide the initial value)
       [ 0.77142712],   # integrate until z = 1
       [ 1.20947123],   # integrate until z = 2
       [ 1.81550912],   # integrate until z = 5
       [ 2.0881925 ]])  # integrate until z = 8
After tinkering with np.vectorize I found the following solution. Simple, elegant, and it works!
import numpy as np
from scipy import integrate

LCDMf = lambda x: 1.0/np.sqrt(0.3*(1+x)**3 + 0.7)

def LCDMfint(z):
    return integrate.quad(LCDMf, 0, z)

LCDMfint = np.vectorize(LCDMfint)
z = np.arange(0, 100)
an = LCDMfint(z)
print(an[0])
This method works with unsorted float arrays or anything we throw at it, and doesn't need any initial conditions as in the odeint method.
I hope this helps someone somewhere too... Thanks all for your inputs.
It can definitely be done more efficiently. In the end what you've got is a series of calculations:
Integrate LCDMf from 0 to 1.
Integrate LCDMf from 0 to 2.
Integrate LCDMf from 0 to 3.
So the very first thing you can do is integrate the distinct subintervals and then sum them (after all, integrals over adjacent intervals add up):
Integrate LCDMf from 0 to 1.
Integrate LCDMf from 1 to 2, add result from step 1.
Integrate LCDMf from 2 to 3, add result from step 2.
Next you can dive into LCDMf, which is defined this way:
1.0/np.sqrt(0.3*(1+x)**3+0.7)
You can use NumPy broadcasting to evaluate this function at many points instantly:
dx = 0.0001
x = np.arange(0, 100, dx)
y = LCDMf(x)
That should be quite fast, and gives you one million points on the curve. Now you can integrate it using scipy.integrate.trapz() or one of the related functions. Call this with the y and dx already computed, using the workflow above where you integrate each interval and then use cumsum() to get your final result. The only function you need to call in a loop then is the integrator itself.
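A minimal sketch of that workflow (my own; it uses scipy.integrate.cumtrapz, renamed cumulative_trapezoid in newer SciPy, which does the per-interval integration and the cumulative sum in one call):
import numpy as np
from scipy.integrate import cumtrapz  # cumulative_trapezoid in newer SciPy

LCDMf = lambda x: 1.0/np.sqrt(0.3*(1+x)**3 + 0.7)

dx = 0.0001
x = np.arange(0, 100 + dx, dx)
F = cumtrapz(LCDMf(x), dx=dx, initial=0)  # F[i] approximates the integral from 0 to x[i]

z = np.arange(0, 100)
an = np.interp(z, x, F)  # cumulative integral evaluated at each z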

Use Python SciPy to compute the Rodrigues formula P_n(x) (Legendre polynomials)

I'm trying to use Python to calculate the Rodrigues formula, P_n(x).
http://en.wikipedia.org/wiki/Rodrigues%27_formula
That is, I would like a function which takes in two input parameters, n and x, and returns the output of this formula.
However, I don't think SciPy has this function yet. SciPy does offer a Legendre module:
http://docs.scipy.org/doc/numpy/reference/routines.polynomials.legendre.html
I don't think any of these is the Rodrigues formula. Am I wrong?
Is there a standard way SciPy offers to do this?
EDIT: I would like the input parameters to be arrays, not just single input values.
If you simply want P_n(x), then you can create a suitable object representing the P_n polynomial using scipy.special.legendre and call it with your values of x:
In [1]: from scipy.special import legendre
In [2]: n = 3
In [3]: Pn = legendre(n)
In [4]: Pn(2.5)
Out[4]: 35.3125 # P_3(2.5)
The object Pn is, in a sense, the "output" of the Rodrigues formula: it is a polynomial of the required order, which can be evaluated at a provided value of x. If you want a single function that takes n and x, you can use eval_legendre:
In [5]: from scipy.special import eval_legendre
In [6]: eval_legendre(3, 2.5)
Out[6]: 35.3125
As noted in the docs, this is the recommended way for large-ish n (e.g. n > 20), since building a polynomial object with all the explicit coefficients does not handle rounding errors and numerical stability as well.
EDIT: Both approaches work with arrays (at least for the x argument). For example:
In [7]: x = np.array([0, 1, 2, 5, 10])
In [8]: Pn(x)
Out[8]:
array([ 0.00000000e+00,  1.00000000e+00,  1.70000000e+01,
        3.05000000e+02,  2.48500000e+03])
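eval_legendre is a ufunc, so it broadcasts over n as well. A small sketch of my own (the values are P_0 through P_3 evaluated at x = 2.5):
In [9]: eval_legendre(np.arange(4), 2.5)
Out[9]: array([ 1.    ,  2.5   ,  8.875 , 35.3125])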

Left inverse in numpy or scipy?

I am trying to obtain the left inverse of a non-square matrix in python using either numpy or scipy.
How can I translate the following Matlab code to Python?
>> A = [0,1; 0,1; 1,0]
A =
0 1
0 1
1 0
>> y = [2;2;1]
y =
2
2
1
>> A\y
ans =
1.0000
2.0000
Is there a numpy or scipy equivalent of the left inverse \ operator in Matlab?
Use linalg.lstsq(A,y) since A is not square. See here for details. You can use linalg.solve(A,y) if A is square, but not in your case.
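A quick sketch of what that looks like for the matrix in the question (rcond=None just silences a deprecation warning in newer numpy):
import numpy as np

A = np.array([[0., 1.], [0., 1.], [1., 0.]])
y = np.array([2., 2., 1.])
x, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(x)  # [1. 2.], matching Matlab's A\y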
Here is a method that will work with sparse matrices (which, from your comments, is what you want); it uses the leastsq function from the optimize package.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.optimize import leastsq

A = csr_matrix([[0., 1.], [0., 1.], [1., 0.]])
b = np.array([[2.], [2.], [1.]])

def myfunc(x):
    x.shape = (2, 1)        # leastsq passes a flat array; reshape it for the sparse product
    return (A*x - b)[:, 0]  # return the residual as a 1-D array

print(leastsq(myfunc, np.random.rand(2))[0])
generates
[ 1. 2.]
It is kind of ugly because of how I had to get the shapes to match up according to what leastsq wanted. Maybe someone else knows how to make this a little more tidy.
I have also tried to get something to work with the functions in scipy.sparse.linalg by using LinearOperators, but to no avail. The problem is that all of those functions are made to handle square systems only. If anyone finds a way to do it that way, I would like to know as well.
For those who wish to solve large sparse least squares problems:
I have added the LSQR algorithm to SciPy. With the next release, you'll be able to do:
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr
import numpy as np
A = csr_matrix([[0., 1], [0, 1], [1, 0]])
b = np.array([[2.], [2.], [1.]])
lsqr(A, b)
which returns the answer [1, 2].
If you'd like to use this new functionality without upgrading SciPy, you may download lsqr.py from the code repository at
http://projects.scipy.org/scipy/browser/trunk/scipy/sparse/linalg/isolve/lsqr.py
You can also look at the pseudo-inverse function pinv in numpy/scipy, as an alternative to the other answers.
You can calculate the left inverse using matrix calculations:
import numpy as np
linv_A = np.linalg.solve(A.T.dot(A), A.T)
(Why? Because (AᵀA)⁻¹Aᵀ A = (AᵀA)⁻¹(AᵀA) = I, so (AᵀA)⁻¹Aᵀ is a left inverse whenever AᵀA is invertible, i.e. whenever A has full column rank.)
Test:
np.set_printoptions(suppress=True, precision=3)
np.random.seed(123)
A = np.random.randn(3, 2)
print('A\n', A)
A_linv = np.linalg.solve(A.T.dot(A), A.T)
print('A_linv.dot(A)\n', A_linv.dot(A))
Result:
A
 [[-1.086  0.997]
  [ 0.283 -1.506]
  [-0.579  1.651]]
A_linv.dot(A)
 [[ 1. -0.]
  [ 0.  1.]]
I haven't tested it, but according to this web page it is:
linalg.solve(A,y)
(Note that, as pointed out above, linalg.solve only applies to square A, so it won't work for this problem.)
You can use lsqr from scipy.sparse.linalg to solve sparse matrix systems with least squares.
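For example (a minimal sketch; lsqr wants a 1-D right-hand side and returns the solution as the first element of a tuple of diagnostics):
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

A = csr_matrix([[0., 1.], [0., 1.], [1., 0.]])
b = np.array([2., 2., 1.])  # 1-D right-hand side
x = lsqr(A, b)[0]           # remaining tuple entries are solver diagnostics
print(x)                    # approximately [1. 2.]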
