I'm trying to solve a system of differential equations with scipy.integrate.solve_ivp. The system depends on a real independent variable t and the dependent variables, cn(t)-s are complex in general. The catch is, the solver always gets stuck, no matter the dimension of the system (determined by n_max). Here's the setup:
import numpy as np
from scipy.integrate import solve_ivp

# Constants
from scipy.constants import hbar as h_
n_max = 2
t_max = 1
# The derivative function
def dcdt(t, c):
    return (-1.0j/h_)*((V_mat*np.exp(1.0j*w_mat*t)) @ c)
# Initial conditions
c_0 = np.zeros(n_max, dtype = complex)
c_0[0] = 1.0
# Solving the deal
t = np.linspace(0, t_max, 10)
c = solve_ivp(dcdt, [0, t_max], c_0, t_eval = t)
And there it goes, doesn't ever stop running.
Here are sample matrices V_mat and w_mat:
>>> V_mat
array([[1.0000000e-09, 1.8008153e-56],
       [1.8008153e-56, 1.0000000e-09]])
>>> w_mat
array([[      0.        , -156123.07053024],
       [ 156123.07053024,       0.        ]])
As you will notice, V_mat and w_mat are 2-D square matrices of dimension n_max.
Is the problem tied to large/very small values in the matrices? Or is it something to do with complex values?
As I had already guessed, the problem is tied to large values in the differential equations I'm trying to solve, in particular, due to -1.0j/h_ in
def dcdt(t, c):
    return (-1.0j/h_)*((V_mat*np.exp(1.0j*w_mat*t)) @ c)
where h_ = 1.054e-34 is the reduced Planck constant. Rescaling the equation so that h_ no longer appears explicitly fixes the problem.
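For illustration, a minimal sketch of such a rescaling (V0 is a characteristic energy scale I introduce here only for the sketch; the length of the dimensionless time interval has to be chosen to match the physics of interest):
import numpy as np
from scipy.integrate import solve_ivp

V0 = 1e-9                   # assumed characteristic energy scale (J)
V_bar = V_mat / V0          # dimensionless coupling matrix, entries of order 1
w_bar = w_mat * h_ / V0     # transition frequencies in units of V0/h_

def dcdt_bar(tau, c):       # tau = V0*t/h_ is the dimensionless time
    return -1.0j * ((V_bar * np.exp(1.0j * w_bar * tau)) @ c)

sol = solve_ivp(dcdt_bar, [0, 10.0], c_0, t_eval=np.linspace(0, 10.0, 10))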
I am trying to solve a set of differential equations, but I have been having difficulty making this work. My differential equations contain an "i" subscript that represents numbers from 1 to n. I tried implementing a for loop as follows, but I keep getting the index error shown below. I have tried changing the initial conditions (y0) and other values, but nothing seems to work. In this code, I am using solve_ivp. The code is as follows:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import solve_ivp
def testmodel(t, y):
    X = y[0]
    Y = y[1]
    J = y[2]
    Q = y[3]
    a = 3
    S = 0.4
    K = 0.8
    L = 2.3
    n = 100
    for i in range(1,n+1):
        dXdt[i] = K**a+(Q[i]**a) - S*X[i]
        dYdt[i] = (K*X[i])-(L*Y[i])
        dJdt[i] = S*Y[i]-(K*Q[i])
        dQdt[i] = K*X[i]/L+J[i]
    return dXdt, dYdt, dJdt, dQdt
t_span= np.array([0, 120])
times = np.linspace(t_span[0], t_span[1], 1000)
y0 = 0,0,0,0
soln = solve_ivp(testmodel, t_span, y0, t_eval=times,
                 vectorized=True)
t = soln.t
X = soln.y[0]
Y = soln.y[1]
J = soln.y[2]
Q = soln.y[3]
plt.plot(t, X,linewidth=2, color='red')
plt.show()
The error I get is
IndexError                                Traceback (most recent call last)
<ipython-input-107-3a0cfa6e42ed> in testmodel(t, y)
     15     n = 100
     16     for i in range(1,n+1):
---> 17         dXdt[i] = K**a+(Q[i]**a) - S*X[i]
IndexError: index 1 is out of bounds for axis 0 with size 1
I have scoured the web for a solution to this, but I have been unable to apply any of them to this problem. I am not sure what I am doing wrong or what to actually change.
I have tried removing the "vectorized=True" argument, but then I get an error stating that I cannot index scalar variables. This is confusing because I do not think these values should be scalars. How do I resolve this problem? My ultimate goal is to plot these differential equations. Thank you in advance.
It is nice that you provide the standard solver with a vectorized ODE function for multi-point evaluations. But the default method is the explicit RK45, and explicit methods do not use Jacobian matrices, so there is no need for multi-point evaluations to form difference quotients for the partial derivatives.
In essence, the component arrays always have size 1, as the evaluation is at a single point; for instance, Q is an array of length 1, and the only valid index is 0. Remember that in Python, as in most general-purpose programming languages, array indices start at 0; it is mainly some CAS-style script languages that use the "more mathematical" 1 as index start. (Setting n=100 and ignoring the length of the arrays provided by the solver is wrong as well.)
You can avoid all that and shorten your routine by taking into account that the standard arithmetic operations are applied element-wise for numpy arrays, so
def testmodel(t, y):
    X, Y, J, Q = y
    a = 3; S = 0.4; K = 0.8; L = 2.3
    dXdt = K**a + Q**a - S*X
    dYdt = K*X - L*Y
    dJdt = S*Y - K*Q
    dQdt = K*X/L + J
    return dXdt, dYdt, dJdt, dQdt
Modifying your code for multiple compartments with the same dynamics
You need to pass the solver a flat vector of the state. The first design decision is how the compartments and their components are arranged in the flat vector. One variant that is most compatible with the existing code is to cluster the same components together. Then in the ODE function the first operation is to separate out these clusters.
X,Y,J,Q = y.reshape([4,-1])
This splits the input vector into 4 pieces of equal length. At the end you need to reverse this split so that the derivatives are again in a flat vector.
return np.concatenate([dXdt, dYdt, dJdt, dQdt])
Everything else remains the same, apart from the initial vector, which needs to have 4 segments of length N containing the data for the compartments. Here that could just be
y0 = np.zeros(4*N)
If the initial data is from any other source, and given in records per compartment, you might have to transpose the resulting array before flattening it.
Note that this construction is not vectorized, so leave that option at its default of False.
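Assembled, the modified routine might look like the following sketch (N and the way I slice soln.y afterwards are illustrative choices):
import numpy as np
from scipy.integrate import solve_ivp

N = 100   # number of compartments

def testmodel(t, y):
    X, Y, J, Q = y.reshape([4, -1])
    a = 3; S = 0.4; K = 0.8; L = 2.3
    dXdt = K**a + Q**a - S*X
    dYdt = K*X - L*Y
    dJdt = S*Y - K*Q
    dQdt = K*X/L + J
    return np.concatenate([dXdt, dYdt, dJdt, dQdt])

t_span = np.array([0, 120])
times = np.linspace(t_span[0], t_span[1], 1000)
y0 = np.zeros(4*N)
soln = solve_ivp(testmodel, t_span, y0, t_eval=times)   # note: no vectorized=True
X = soln.y[:N]   # rows 0..N-1 hold the N X-compartments over time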
For uniform interaction patterns like in a circle I recommend the use of numpy.roll to continue to avoid the use of explicit loops. For an interaction pattern that looks like a network one can use connectivity matrices and masks like in Using python built-in functions for coupled ODEs
I have data that I want to fit with polynomials. I have 200,000 data points, so I want an efficient algorithm. I want to use the numpy.polynomial package so that I can try different families and degrees of polynomials. Is there some way I can formulate this as a system of equations like Ax=b? Is there a better way to solve this than with scipy.minimize?
import numpy as np
from scipy.optimize import minimize as mini
x1 = np.random.random(2000)
x2 = np.random.random(2000)
y = 20 * np.sin(x1) + x2 - np.sin(30 * x1 - x2 / 10)
def fitness(x, degree=5):
    poly1 = np.polynomial.polynomial.polyval(x1, x[:degree])
    poly2 = np.polynomial.polynomial.polyval(x2, x[degree:])
    return np.sum((y - (poly1 + poly2)) ** 2)
# It seems like I should be able to solve this as a system of equations
# x = np.linalg.solve(np.concatenate([x1, x2]), y)
# minimize the sum of the squared residuals to find the optimal polynomial coefficients
x = mini(fitness, np.ones(10))
print(fitness(x.x))
Your intuition is right. You can solve this as a system of equations of the form Ax = b.
However:
The system is overdetermined and you want the least-squares solution, so you need to use np.linalg.lstsq instead of np.linalg.solve.
You can't use polyval because you need to separate the coefficients and powers of the independent variable.
This is how to construct the system of equations and solve it:
A = np.stack([x1**0, x1**1, x1**2, x1**3, x1**4, x2**0, x2**1, x2**2, x2**3, x2**4]).T
xx = np.linalg.lstsq(A, y)[0]
print(fitness(xx)) # test the result with original fitness function
Of course you can generalize over the degree:
A = np.stack([x1**p for p in range(degree)] + [x2**p for p in range(degree)]).T
With the example data, the least squares solution runs much faster than the minimize solution (800µs vs 35ms on my laptop). However, A can become quite large, so if memory is an issue minimize might still be an option.
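If you want to check the timing comparison on your own machine, a rough measurement could look like this (a sketch; absolute numbers will of course differ):
import time

start = time.perf_counter()
xx = np.linalg.lstsq(A, y, rcond=None)[0]
print("lstsq:    %.2f ms" % ((time.perf_counter() - start) * 1e3))

start = time.perf_counter()
res = mini(fitness, np.ones(10))
print("minimize: %.2f ms" % ((time.perf_counter() - start) * 1e3))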
Update:
Without any knowledge about the internals of the polynomial function things become tricky, but it is possible to separate terms and coefficients. Here is a somewhat ugly way to construct the system matrix A from a function like polyval:
def construct_A(valfunc, degree):
    columns1 = []
    columns2 = []
    for p in range(degree):
        c = np.zeros(degree)
        c[p] = 1
        columns1.append(valfunc(x1, c))
        columns2.append(valfunc(x2, c))
    return np.stack(columns1 + columns2).T
A = construct_A(np.polynomial.polynomial.polyval, 5)
xx = np.linalg.lstsq(A, y)[0]
print(fitness(xx)) # test the result with original fitness function
I am now trying to learn the ADMM algorithm (Boyd 2010) for LASSO regression.
I found a very good example on this page.
The MATLAB code is shown there.
I tried to convert it into Python so that I could develop a better understanding.
Here is the code:
import scipy.io as io
import scipy.sparse as sp
import scipy.linalg as la
import numpy as np
def l1_norm(x):
    return np.sum(np.abs(x))

def l2_norm(x):
    return np.dot(x.ravel().T, x.ravel())

def fast_threshold(x, threshold):
    return np.multiply(np.sign(x), np.fmax(abs(x) - threshold, 0))

def lasso_admm(X, A, gamma):
    c = X.shape[1]
    r = A.shape[1]
    C = io.loadmat("C.mat")["C"]
    L = np.zeros(X.shape)
    rho = 1e-4
    maxIter = 200
    I = sp.eye(r)
    maxRho = 5
    cost = []
    for n in range(maxIter):
        B = la.solve(np.dot(A.T, A) + rho * I, np.dot(A.T, X) + rho * C - L)
        C = fast_threshold(B + L / rho, gamma / rho)
        L = L + rho * (B - C)
        rho = min(maxRho, rho * 1.1)
        cost.append(0.5 * l2_norm(X - np.dot(A, B)) + gamma * l1_norm(B))
    cost = np.array(cost).ravel()
    return B, cost
data = io.loadmat("lasso.mat")
A = data["A"]
X = data["X"]
B, cost = lasso_admm(X, A, gamma)
I have found that the loss function does not converge after 100+ iterations, and matrix B does not tend to be sparse; the MATLAB code, on the other hand, works in different situations.
I have checked with different input data and compared against the MATLAB outputs, yet I still could not find any hints.
Could anybody give it a try?
Thank you in advance.
My gut feeling as to why this is not working to your expectations is your la.solve() call. la.solve() assumes that the matrix is square and invertible (i.e. full rank). When you use \ in MATLAB, what MATLAB does under the hood is: if the matrix is full rank, the exact solution is found; however, should the matrix not be this way (i.e. overdetermined or underdetermined), the system is solved by least squares instead. I would suggest you modify that call so that you're using lstsq instead of solve. As such, simply replace your la.solve() call with this:
sol = la.lstsq(np.dot(A.T, A) + rho * I, np.dot(A.T, X) + rho * C - L)
B = sol[0]
Note that lstsq returns a whole bunch of outputs in a 4-element tuple, in addition to the solution. The solution of the system is in the first element of this tuple, which is why I did B = sol[0]. What is also returned are the sums of residues (second element), the rank (third element) and the singular values of the matrix you are trying to invert when solving (fourth element).
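For clarity, the four outputs can also be unpacked by name in one go; only the first one feeds back into the ADMM update:
B, residues, rank, sing_vals = la.lstsq(np.dot(A.T, A) + rho * I,
                                        np.dot(A.T, X) + rho * C - L)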
Also some peculiarities that I have noticed:
One thing that may or may not matter is the random generation of numbers. MATLAB and Python NumPy generate random numbers differently, so this may or may not affect your solution.
In MATLAB, Simon Lucey's code initializes L to be a zero matrix such that L = zeros(size(X));. However, in your Python code, you initialize L to be this way: L = np.zeros(C.shape);. You are using different variables to ascertain the shape of L. Obviously, the
code wouldn't work if there was a dimension mismatch, but that's another thing that's different. Not sure if this will affect your solution either.
So far I haven't found anything out of the ordinary, so try that fix and let me know.
Update: I have modified the Optimize, Eigen and Solve methods to reflect changes. All now return the "same" vector, allowing for machine precision. I am still stumped on the Eigen method, specifically how/why I select the slice of the eigenvector; it does not make sense. It was just trial and error until the normal matched the other solutions. If anyone can correct/explain what I really should do, or why what I have done works, I would appreciate it.
Thanks Alexander Kramer for explaining why I take a slice; I am only allowed to select one correct answer.
I have a depth image. I want to calculate a crude surface normal for a pixel in the depth image. I consider the surrounding pixels, in the simplest case a 3x3 matrix, fit a plane to these points, and calculate the normal unit vector to this plane.
Sounds easy, but I thought it best to verify the plane fitting algorithms first. Searching SO and various other sites I see methods using least squares, singular value decomposition, eigenvectors/eigenvalues, etc.
Although I don't fully understand the maths, I have been able to get the various fragments/examples to work. The problem I am having is that I am getting different answers for each method. I was expecting the various answers to be similar (not exact), but they seem significantly different. Perhaps some methods are not suited to my data, but I am not sure why I am getting different results. Any ideas why?
Here is the Updated output of the code:
LTSQ: [ -8.10792259e-17 7.07106781e-01 -7.07106781e-01]
SVD: [ 0. 0.70710678 -0.70710678]
Eigen: [ 0. 0.70710678 -0.70710678]
Solve: [ 0. 0.70710678 0.70710678]
Optim: [ -1.56069661e-09 7.07106781e-01 7.07106782e-01]
The following code implements five different methods to calculate the surface normal of a plane. The algorithms/code were sourced from various forums on the internet.
import numpy as np
import scipy.optimize
def fitPLaneLTSQ(XYZ):
    # Fits a plane to a point cloud,
    # where Z = aX + bY + c        ---- Eqn #1
    # Rearranging Eqn #1: aX + bY - Z + c = 0
    # gives normal (a, b, -1)
    # Normal = (a, b, -1)
    [rows, cols] = XYZ.shape
    G = np.ones((rows, 3))
    G[:, 0] = XYZ[:, 0]  # X
    G[:, 1] = XYZ[:, 1]  # Y
    Z = XYZ[:, 2]
    (a, b, c), resid, rank, s = np.linalg.lstsq(G, Z)
    normal = (a, b, -1)
    nn = np.linalg.norm(normal)
    normal = normal / nn
    return normal
def fitPlaneSVD(XYZ):
    [rows, cols] = XYZ.shape
    # Set up constraint equations of the form AB = 0,
    # where B is a column vector of the plane coefficients
    # in the form b(1)*X + b(2)*Y + b(3)*Z + b(4) = 0.
    p = np.ones((rows, 1))
    AB = np.hstack([XYZ, p])
    [u, d, v] = np.linalg.svd(AB, 0)
    B = v[3, :]  # Solution is the last row of v (numpy's svd returns V transposed).
    nn = np.linalg.norm(B[0:3])
    B = B / nn
    return B[0:3]
def fitPlaneEigen(XYZ):
    # Works, in this case, but don't understand!
    average = sum(XYZ)/XYZ.shape[0]
    covariant = np.cov(XYZ - average)
    eigenvalues, eigenvectors = np.linalg.eig(covariant)
    want_max = eigenvectors[:, eigenvalues.argmax()]
    (c, a, b) = want_max[3:6]  # Do not understand! Why 3:6? Why (c,a,b)?
    normal = np.array([a, b, c])
    nn = np.linalg.norm(normal)
    return normal / nn
def fitPlaneSolve(XYZ):
    X = XYZ[:, 0]
    Y = XYZ[:, 1]
    Z = XYZ[:, 2]
    npts = len(X)
    A = np.array([[sum(X*X), sum(X*Y), sum(X)],
                  [sum(X*Y), sum(Y*Y), sum(Y)],
                  [sum(X),   sum(Y),   npts]])
    B = np.array([[sum(X*Z), sum(Y*Z), sum(Z)]])
    normal = np.linalg.solve(A, B.T)
    nn = np.linalg.norm(normal)
    normal = normal / nn
    return normal.ravel()
def fitPlaneOptimize(XYZ):
    def residiuals(parameter, f, x, y):
        return [(f[i] - model(parameter, x[i], y[i])) for i in range(len(f))]

    def model(parameter, x, y):
        a, b, c = parameter
        return a*x + b*y + c

    X = XYZ[:, 0]
    Y = XYZ[:, 1]
    Z = XYZ[:, 2]
    p0 = [1., 1., 1.]  # initial guess
    result = scipy.optimize.leastsq(residiuals, p0, args=(Z, X, Y))[0]
    normal = result[0:3]
    nn = np.linalg.norm(normal)
    normal = normal / nn
    return normal
if __name__ == "__main__":
    XYZ = np.array([
        [0, 0, 1],
        [0, 1, 2],
        [0, 2, 3],
        [1, 0, 1],
        [1, 1, 2],
        [1, 2, 3],
        [2, 0, 1],
        [2, 1, 2],
        [2, 2, 3]
    ])
    print("Solve: ", fitPlaneSolve(XYZ))
    print("Optim: ", fitPlaneOptimize(XYZ))
    print("SVD:   ", fitPlaneSVD(XYZ))
    print("LTSQ:  ", fitPLaneLTSQ(XYZ))
    print("Eigen: ", fitPlaneEigen(XYZ))
Optimize
The normal vector of a plane a*x + b*y + c*z = 0 equals (a, b, c).
The optimize method finds values for a and b such that a*x + b*y ~ z (~ denotes "approximates"); it omits the value of c from the normal calculation entirely. I don't have numpy installed on this machine, but I expect that changing the model to (a*x + b*y)/c should fix this method. It will not give the same result as the other methods for all data sets, though: this method will always assume a plane that goes through the origin.
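For comparison, here is a sketch of a different fix (my own, not the through-origin change suggested above): keep the offset c in the model but build the normal as (a, b, -1), the same convention fitPLaneLTSQ uses.
def fitPlaneOptimizeFixed(XYZ):
    # fit z = a*x + b*y + c, then use (a, b, -1) as the (unnormalised) normal
    def residuals(parameter, f, x, y):
        a, b, c = parameter
        return f - (a*x + b*y + c)
    X, Y, Z = XYZ[:, 0], XYZ[:, 1], XYZ[:, 2]
    (a, b, c), _ = scipy.optimize.leastsq(residuals, [1.0, 1.0, 1.0], args=(Z, X, Y))
    normal = np.array([a, b, -1.0])
    return normal / np.linalg.norm(normal)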
SVD and LTSQ
produce the same results. (The difference is about the size of machine precision).
Eigen
The wrong eigenvector is chosen. The eigenvector corresponding to the greatest eigenvalue (lambda = 1.50) is x=[0, sqrt(2)/2, sqrt(2)/2] just as in the SVD and LTSQ.
Solve
I have no clue how this is supposed to work.
The normal vector of the plane in the Eigen solution is the eigenvector for the smallest eigenvalue. Some eigendecomposition routines sort the eigenvalues and eigenvectors, others don't, so in some implementations it is sufficient to take the first (or last) eigenvector for the normal, while in others you have to sort them first. The majority of SVD implementations, on the other hand, provide sorted singular values, so there it is simply the first (or last) vector.
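For reference, a minimal sketch of that recipe (with my own variable names), using the 3x3 covariance of the coordinates and the eigenvector belonging to the smallest eigenvalue:
import numpy as np

def fit_plane_eigen(XYZ):
    centered = XYZ - XYZ.mean(axis=0)
    cov = np.cov(centered, rowvar=False)             # 3x3 covariance of x, y, z
    eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh returns eigenvalues in ascending order
    normal = eigenvectors[:, 0]                      # eigenvector of the smallest eigenvalue
    return normal / np.linalg.norm(normal)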
I have a university project in which we are asked to simulate a satellite approach to Mars using ODEs and SciPy's odeint function.
I managed to simulate it in 2D by turning the second-order ODE into two first-order ODEs. However, I am stuck on the time span: my code uses SI units and therefore runs in seconds, and with the linspace range I can use it does not even simulate one complete orbit.
I tried converting the variables and constants to hours and kilometres, but now the code keeps giving errors.
I followed this method:
http://bulldog2.redlands.edu/facultyfolder/deweerd/tutorials/Tutorial-ODEs.pdf
And the code is:
import numpy
import scipy
from numpy import array, linspace
from scipy.integrate import odeint
def deriv_x(x, t):
    return array([x[1], -55.3E10/(x[0])**2])   # 55.3E10 is the value for G*M in km and hours

xinit = array([0, 5251])   # this is the velocity for an orbit of period 24 hours
t = linspace(0, 24.0, 100)
x = odeint(deriv_x, xinit, t)

def deriv_y(y, t):
    return array([y[1], -55.3E10/(y[0])**2])

yinit = array([20056, 0])  # this is the radius for an orbit of period 24 hours
t = linspace(0, 24.0, 100)
y = odeint(deriv_y, yinit, t)
I don't know how to copy/paste the error code from PyLab so I took a PrintScreen of the error:
Second error with t=linspace(0.01,24.0,100) and xinit=array([0.001,5251]):
If anyone has any suggestions on how to improve the code I will be very grateful.
Thank you very much!
odeint(deriv_x, xinit, t)
uses xinit as the initial value for x, and this value is used when evaluating deriv_x.
deriv_x(xinit, t)
raises a divide-by-zero error since x[0] = xinit[0] equals 0, and deriv_x divides by x[0].
It looks like you are trying to solve the second-order ODE
r'' = -C * rhat / |r|**2
where rhat is the unit vector in the radial direction.
You appear to be separating the x and y coordinates into separate second-order ODEs:
x'' = -C / x**2    and    y'' = -C / y**2
with initial conditions x0 = 0 and y0 = 20056.
This is very problematic. Among the problems is that when x0 = 0, x'' blows up. The original second-order ODE for r'' does not have this problem -- the denominator does not blow up when x0 = 0 because y0 = 20056, and so r0 = (x**2+y**2)**(1/2) is far from zero.
Conclusion: Your method of separating the r'' ODE into two ODEs for x'' and y'' is incorrect.
Try searching for a different way to solve the r'' ODE.
Hint:
What if your "state" vector is z = [x, y, x', y']?
Can you write down a first-order ODE for z' in terms of x, y, x' and y'?
Can you solve it with one call to integrate.odeint?
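To sketch where that hint leads (using the GM value and the radius/velocity from the question; this is only an illustration, not the only way to structure it):
import numpy as np
from scipy.integrate import odeint

GM = 55.3E10   # G*M in km^3/h^2, taken from the question

def deriv(z, t):
    x, y, vx, vy = z
    r3 = (x**2 + y**2)**1.5
    return [vx, vy, -GM*x/r3, -GM*y/r3]

zinit = [20056.0, 0.0, 0.0, 5251.0]   # start on the x-axis, moving in +y
t = np.linspace(0.0, 24.0, 100)
z = odeint(deriv, zinit, t)           # columns of z are x, y, x', y'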