I am trying to write a very simple gravity simulation of a mass orbiting the origin. I have used scipy.integrate.odeint to integrate the differential equations.
The problem is that I get the following error message:
ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
Besides that, something is clearly going wrong: the equations are not being integrated correctly and the motion is wrong. Below is a plot of the motion for initial conditions that should give circular motion around the origin:
This is the code:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
G=1
m=1
def f_grav(y, t):
    x1, x2, v1, v2 = y
    m = t
    dydt = [v1, v2, -x1*G*m/(x1**2+x2**2)**(3/2), -x2*G*m/(x1**2+x2**2)**(3/2)]
    return dydt
t = np.linspace(0, 100, 1001)
init = [0, 1, 1, 0]
ans = odeint(f_grav, init, t)
print(ans)
x = []
y = []
for i in range(100):
    x.append(ans[i][0])
    y.append(ans[i][1])
plt.plot(x, y)
plt.show()
Note that I have used this function before, and writing almost identical code for an SHM differential equation obtains correct results. Changing the numbers in t does not help. Does anyone have any idea as to why this may be failing so badly?
The incorrect motion is probably due to numerical instability. Either way, from the documentation of odeint:
Note: For new code, use scipy.integrate.solve_ivp to solve a differential equation.
solve_ivp only takes the integration boundaries and chooses the evaluation points itself, so that the integration method stays stable for the equation. You can also choose the integration method.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
G=1
m=1
def f_grav(t, y):
    x1, x2, v1, v2 = y
    m = t  # note: this keeps the questioner's bug (mass set to time); see the next answer
    dydt = [v1, v2, -x1*G*m/(x1**2+x2**2)**(3/2), -x2*G*m/(x1**2+x2**2)**(3/2)]
    return dydt
domain = (0, 100)
init = [0, 1, 1, 0]
ans = solve_ivp(fun=f_grav, t_span=domain, y0=init)
plt.plot(ans['y'][0], ans['y'][1])
plt.show()
With that I'm not getting any warnings, and the simulation looks better (note that for solve_ivp the function must take its parameters in the order (t, y)).
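For instance (my addition, not from the original answer), the integration method and tolerances can be selected via keyword arguments; 'DOP853' requires a reasonably recent SciPy:

# e.g. a higher-order explicit Runge-Kutta method with tighter tolerances
ans = solve_ivp(fun=f_grav, t_span=domain, y0=init, method='DOP853', rtol=1e-9)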
As has been worked out in the comments, the error was that the mass was set to the time (m = t). This growing mass contradicts the physics of the situation, but on the other hand it explains the inward spiral, as energy and momentum are conserved.
The corrected and slightly streamlined code
G = 1
m = 1

def f_grav(y, t):
    x1, x2, v1, v2 = y
    r = np.hypot(x1, x2)
    F = G*m/r**3
    return [v1, v2, -x1*F, -x2*F]
t = np.linspace(0, 100, 1001)
init = [0, 1, 1, 0]
ans = odeint(f_grav, init, t)
print(ans)
x,y,_,_ = ans.T
plt.plot(0, 0, 'oy', ms=8)
plt.plot(x, y)
plt.axis('equal')
plt.show()
gives (with initial velocity (vx, 0) = (1, 0)) a visually perfect circle as the orbit.
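As a quick numerical check (my addition, reusing the x and y arrays from the code above), one can print how far the orbital radius drifts from 1:

# sanity check: for a circular orbit the radius should stay close to 1
r = np.hypot(x, y)
print("max radius deviation:", np.max(np.abs(r - 1)))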
There might be two main issues: you give the input parameters as integers instead of floats; in addition, you do not set the tolerance for the accuracy of the integration, and the output is not 'dense'. Something like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
G=1.
m=1.
def f_grav(t, w):
    x, y, z, vx, vy, vz = w
    fg = -G*m/(x**2. + y**2. + z**2.)**1.5
    dwdt = [vx, vy, vz, fg*x, fg*y, fg*z]
    return dwdt
n = 100  # total number of orbits
tmin = 0.
tmax = float(n)*2.*np.pi
domain = (tmin, tmax)
t_eval = np.linspace(tmin, tmax, int(tmax*1000))
init = [0., 1., 0., 1., 0., 0.]
ans = solve_ivp(fun=f_grav, t_span=domain, t_eval=t_eval, y0=init, dense_output=True, rtol=1.e-10)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.plot(ans['y'][0], ans['y'][1])
plt.show()
This should work properly; at least it does on my laptop.
I hope this is helpful.
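Since dense_output=True was passed, the returned object also carries a continuous interpolant ans.sol that can be evaluated at arbitrary times; a minimal sketch (my addition, reusing ans, tmin, and tmax from above):

# the dense output is a callable returning the states at arbitrary times
t_fine = np.linspace(tmin, tmax, 7)
print(ans.sol(t_fine)[0])  # x-coordinates at those times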
Related
I have written this code to model the motion of a spring pendulum
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt
def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415+x)*(dydt)**2 - 50/1.006*x + 9.81*cos(y)
    dy2dt2 = (-9.81*1.006*sin(y) - 2*(dxdt)*(dydt))/(0.415+x)
    return np.array([x, y, dx2dt2, dy2dt2])
init = array([0,pi/18,0,0])
time = np.linspace(0.0,10.0,1000)
sol = odeint(deriv,init,time)
def plot(h, t):
    n, u, x, y = h
    n = (0.4+x)*sin(y)
    u = (0.4+x)*cos(y)
    return np.array([n, u, x, y])
init2 = array([0.069459271,0.393923101,0,pi/18])
time2 = np.linspace(0.0,10.0,1000)
sol2 = odeint(plot,init2,time2)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(sol2[:,0], sol2[:, 1], label = 'hi')
plt.legend()
plt.show()
where x and y are two variables, and I'm trying to convert x and y to the rectangular coordinates n (x-axis) and u (y-axis), and then graph n against u with n on the x-axis and u on the y-axis. However, when I graph the code above it gives me:
Instead, I should be getting an image somewhat similar to this:
The first part of the code, from "def deriv(z,t):" to "sol = odeint(deriv...)", is where the values of x and y are generated; using those, I can then turn them into rectangular coordinates and graph them. How do I change my code to do this? I'm new to Python, so I might not understand some of the terminology. Thank you!
The first part of your code should give you the expected result, but there is a mistake in the implementation of the ODE.
The function you pass to odeint should return an array containing the solutions of a 1st-order differential equations system.
In your case, with state vector $(x, y, \dot{x}, \dot{y})$, what you are solving is

$\frac{dx}{dt} = x, \quad \frac{dy}{dt} = y, \quad \frac{d\dot{x}}{dt} = \ddot{x}, \quad \frac{d\dot{y}}{dt} = \ddot{y}$

while instead you should be solving

$\frac{dx}{dt} = \dot{x}, \quad \frac{dy}{dt} = \dot{y}, \quad \frac{d\dot{x}}{dt} = \ddot{x}, \quad \frac{d\dot{y}}{dt} = \ddot{y}$
In order to do so, change your code to this:
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt
def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415 + x) * (dydt)**2 - 50 / 1.006 * x + 9.81 * cos(y)
    dy2dt2 = (-9.81 * 1.006 * sin(y) - 2 * (dxdt) * (dydt)) / (0.415 + x)
    return np.array([dxdt, dydt, dx2dt2, dy2dt2])
init = array([0, pi / 18, 0, 0])
time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)
plt.plot(sol[:, 0], sol[:, 1], label='hi')
plt.show()
The second part of the code looks like you are trying to do a change of coordinates. I'm not sure why you solve the ODE again instead of just doing this:
x = sol[:, 0]
y = sol[:, 1]

def plot(h):
    x, y = h
    n = (0.4 + x) * sin(y)
    u = (0.4 + x) * cos(y)
    return np.array([n, u])

n, u = plot((x, y))
As of now, what you are doing there is solving this system:

$\dot{n} = (0.4 + x)\sin(y), \quad \dot{u} = (0.4 + x)\cos(y), \quad \dot{x} = x, \quad \dot{y} = y$

which leads to $x = x_0 e^t$ and $y = y_0 e^t$, and hence $\dot{n} = (0.4 + x_0 e^t)\sin(y_0 e^t)$ and $\dot{u} = (0.4 + x_0 e^t)\cos(y_0 e^t)$.
Without going too much into the details, with some intuition you can see that this will lead to an attractor: the derivatives of n and u start to switch sign faster and with greater magnitude at an exponential rate, so n and u collapse onto an attractor, as shown by your plot.
If you are actually trying to solve another differential equation I would need to see it in order to help you further
This is what happens if you do the transformation and set the final time to 1000:
I was wondering if there's a function in Python that would do the same job as scipy.linalg.lstsq but uses “least absolute deviations” regression instead of “least squares” regression (OLS). I want to use the L1 norm, instead of the L2 norm.
In fact, I have 3d points for which I want the best-fit plane. The common approach is the least squares method, as in this GitHub link. But it's known that this doesn't always give the best fit, especially when there are outliers in the data, and it's better to compute the least absolute deviations. The difference between the two methods is explained more here.
It won't be solved by functions such as MAD, since it's an Ax = b matrix equation and requires iteration to minimize the result. Does anyone know of a relevant function in Python, probably in a linear algebra package, that computes a “least absolute deviations” regression?
This is not so difficult to roll yourself, using scipy.optimize.minimize and a custom cost_function.
Let us first import the necessities,
from scipy.optimize import minimize
import numpy as np
import matplotlib.pyplot as plt
And define a custom cost function (and a convenience wrapper for obtaining the fitted values),
def fit(X, params):
    return X.dot(params)

def cost_function(params, X, y):
    return np.sum(np.abs(y - fit(X, params)))
Then, if you have some X (design matrix) and y (observations), we can do the following,
output = minimize(cost_function, x0, args=(X, y))
y_hat = fit(X, output.x)
where x0 is some suitable initial guess for the optimal parameters (you could take @JamesPhillips' advice here and use the fitted parameters from an OLS approach).
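For instance, a minimal sketch of obtaining such an initial guess via ordinary least squares (my addition, reusing the design matrix X and observations y defined below):

# OLS solution as a starting point for the L1 fit
x0, *_ = np.linalg.lstsq(X, y, rcond=None)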
In any case, when test-running with a somewhat contrived example,
X = np.asarray([np.ones((100,)), np.arange(0, 100)]).T
y = 10 + 5 * np.arange(0, 100) + 25 * np.random.random((100,))
I find,
fun: 629.4950595335436
hess_inv: array([[ 9.35213468e-03, -1.66803210e-04],
[ -1.66803210e-04, 1.24831279e-05]])
jac: array([ 0.00000000e+00, -1.52587891e-05])
message: 'Optimization terminated successfully.'
nfev: 144
nit: 11
njev: 36
status: 0
success: True
x: array([ 19.71326758, 5.07035192])
And,
fig = plt.figure()
ax = plt.axes()
ax.plot(y, 'o', color='black')
ax.plot(y_hat, 'o', color='blue')
plt.show()
With the fitted values in blue, and the data in black.
You can solve your problem using the scipy.optimize.minimize function. You have to define the function you want to fit (in our case a plane of the form Z = aX + bY + c) and the error function (the L1 norm), then run the minimizer with some starting value.
import numpy as np
import scipy.linalg
from scipy.optimize import minimize
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
def fit(X, params):
    # 3d plane Z = aX + bY + c
    return X.dot(params[:2]) + params[2]

def cost_function(params, X, y):
    # L1 norm
    return np.sum(np.abs(y - fit(X, params)))
We generate 3d points:
# Generating 3-dim points
mean = np.array([0.0,0.0,0.0])
cov = np.array([[1.0,-0.5,0.8], [-0.5,1.1,0.0], [0.8,0.0,1.0]])
data = np.random.multivariate_normal(mean, cov, 50)
Lastly, we run the minimizer:
output = minimize(cost_function, [0.5,0.5,0.5], args=(np.c_[data[:,0], data[:,1]], data[:, 2]))
y_hat = fit(np.c_[data[:,0], data[:,1]], output.x)
X,Y = np.meshgrid(np.arange(min(data[:,0]), max(data[:,0]), 0.5), np.arange(min(data[:,1]), max(data[:,1]), 0.5))
XX = X.flatten()
YY = Y.flatten()
# evaluate the fitted plane on the grid
Z = output.x[0]*X + output.x[1]*Y + output.x[2]
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, alpha=0.2)
ax.scatter(data[:,0], data[:,1], data[:,2], c='r')
plt.show()
Note: I used the code from the previous answer and the code from the GitHub link as a starting point.
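As a small follow-up (my addition, reusing output, data, and fit from above), one can inspect the fitted coefficients and the size of the residuals:

# print the fitted plane and the mean absolute residual
a, b, c = output.x
residuals = data[:, 2] - fit(np.c_[data[:, 0], data[:, 1]], output.x)
print("plane: Z = %.3f X + %.3f Y + %.3f" % (a, b, c))
print("mean absolute residual:", np.abs(residuals).mean())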
I am totally new to Python, and am trying to integrate the following coupled ODEs:

$\dot{x} = -2x - y^2$

$\dot{y} = -y - x^2$

This results in an array of all zeros, though. What am I doing wrong? It is mostly copied code, and it worked fine with another, uncoupled ODE.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import ode
def fun(t, z):
    """
    Right hand side of the differential equations
    dx/dt = -2x - y**2
    dy/dt = -y - x**2
    """
    x, y = z
    f = [-2*x - y**2, -y - x**2]
    return f
# Create an `ode` instance to solve the system of differential
# equations defined by `fun`, and set the solver method to 'dop853'.
solver = ode(fun)
solver.set_integrator('dopri5')
# Set the initial value z(0) = z0.
t0 = 0.0
z0 = [0, 0]
solver.set_initial_value(z0, t0)
# Create the array `t` of time values at which to compute
# the solution, and create an array to hold the solution.
# Put the initial value in the solution array.
t1 = 2.5
N = 75
t = np.linspace(t0, t1, N)
sol = np.empty((N, 2))
sol[0] = z0
# Repeatedly call the `integrate` method to advance the
# solution to time t[k], and save the solution in sol[k].
k = 1
while solver.successful() and solver.t < t1:
    solver.integrate(t[k])
    sol[k] = solver.y
    k += 1
# Plot the solution...
plt.plot(t, sol[:,0], label='x')
plt.plot(t, sol[:,1], label='y')
plt.xlabel('t')
plt.grid(True)
plt.legend()
plt.show()
Your initial state (z0) is [0, 0]. The time derivative (fun) at this initial state is also [0, 0]. Hence, for this initial condition, [0, 0] is the correct solution for all times.
If you change your initial condition to some other value, you should observe a more interesting result.
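For example (my addition; the values are arbitrary), restarting the question's solver from a small nonzero state produces a non-trivial trajectory:

# any nonzero initial state gives a non-trivial solution
z0 = [0.5, -0.3]
solver.set_initial_value(z0, t0)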
I've implemented a solution to the following system of equations
dx/dt = -t*y(t) - x(t)
dy/dt = 2*x(t) - y(t)^3
y(0) = x(0) = 1.
0 <= t <= 20
firstly in Mathematica and afterwards in Python.
My code in Mathematica:
s = NDSolve[
{x'[t] == -t*y[t] - x[t], y'[t] == 2 x[t] - y[t]^3, x[0] == y[0] == 1},
{x, y}, {t, 20}]
ParametricPlot[Evaluate[{x[t], y[t]} /. s], {t, 0, 20}]
From that I get the following plot (Plot1).
Later on I coded the same into python:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
g = lambda t: t

def f(z, t):
    xi = z[0]
    yi = z[1]
    gi = z[2]
    f1 = -gi*yi - xi
    f2 = 2*xi - yi**3
    return [f1, f2]
# Initial Conditions
x0 = 1.
y0 = 1.
g0 = g(0)
z0 = [x0,y0,g0]
t= np.linspace(0,20.,1000)
# Solve the ODEs
soln = odeint(f,z0,t)
x = soln[:,0]
y = soln[:,1]
plt.plot(x,y)
plt.show()
And this is the plot I get (Plot2).
If one plots the Mathematica solution again over a smaller range:
ParametricPlot[Evaluate[{x[t], y[t]} /. s], {t, 0, 6}]
one gets a result similar to the Python solution; only the axes will be scaled differently.
Why is there such a big difference in the plots? What am I doing wrong?
I suspect that my Python implementation of the model is wrong, especially where f1 is calculated. Or maybe the plot() function isn't suitable for plotting parametric equations, as in this case.
Thanks.
ps: sorry for making your life hard by not slapping the images inside the text; I don't have enough reputation yet.
You're using t as the third component of your state vector, not as a separate parameter. The t in f(z,t) is never used; instead, you use z[2], which will not equal the range of t you define later (t = np.linspace(0, 20., 1000)). The lambda function g won't help here: you only use it once to set g0, but never afterwards.
Simplify your code, and remove that third parameter from your input vector (as well as the lambda function). For example:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def f(z, t):
    xi = z[0]
    yi = z[1]
    f1 = -t*yi - xi
    f2 = 2*xi - yi**3
    return [f1, f2]
# Initial Conditions
x0 = 1.
y0 = 1.
#t= np.linspace(0,20.,1000)
t = np.linspace(0, 10., 100)
# Solve the ODEs
soln = odeint(f,[x0,y0],t)
x = soln[:,0]
y = soln[:,1]
ax = plt.axes()
#plt.plot(x,y)
plt.plot(t,x)
# Put those axes at their 0 value position
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
#plt.axis([-0.085, 0.085, -0.05, 0.07])
plt.show()
I have commented out the actual plot you want, and instead I plot x versus t (what you have in the comments), since I feel that makes it easier to see that things are correct now. The figure I get:
I am having some trouble translating my MATLAB code into Python with SciPy and NumPy. I am stuck on how to find optimal parameter values (k0 and k1) for my system of ODEs so that it fits my ten observed data points; I currently have an initial guess for k0 and k1. In MATLAB I can use fminsearch, a function that takes the system of ODEs, the observed data points, and the initial values, and then calculates a new pair of parameters k0 and k1 that fit the observed data. I have included my code below; can you help me implement some kind of fminsearch to find the optimal parameter values k0 and k1 that fit my data? I want to add whatever code is needed to my lsqtest.py file.
I have three .py files - ode.py, lsq.py, and lsqtest.py
ode.py:
def f(y, t, k):
    return (-k[0]*y[0],
            k[0]*y[0] - k[1]*y[1],
            k[1]*y[1])
lsq.py:
import pylab as py
import numpy as np
from scipy import integrate
from scipy import optimize
import ode
def lsq(teta, y0, data):
    # INPUT:  teta, the unknowns k0, k1
    #         data, observed data points
    #         y0, initial values needed by the ODE
    # OUTPUT: lsq value
    t = np.linspace(0, 9, 10)
    y_obs = data  # data points
    k = [0, 0]
    k[0] = teta[0]
    k[1] = teta[1]
    # call the ODE solver to get the states (the ODE system is in ode.py);
    # at each row (time point), r has the values of the components [A, B, C]
    r = integrate.odeint(ode.f, y0, t, args=(k,))
    y_cal = r[:, 1]  # separate the measured B
    # compute the expression to be minimized:
    return sum((y_obs - y_cal)**2)
lsqtest.py:
import pylab as py
import numpy as np
from scipy import integrate
from scipy import optimize
import lsq
if __name__ == '__main__':
    teta = [0.2, 0.3]  # guess for parameter values k0 and k1
    y0 = [1, 0, 0]  # initial conditions for the system
    y = [0.000, 0.416, 0.489, 0.595, 0.506, 0.493, 0.458, 0.394, 0.335, 0.309]  # observed data points
    data = y
    resid = lsq.lsq(teta, y0, data)
    print(resid)
For this kind of fitting task you could use the package lmfit. The outcome of the fit would look like this; as you can see, the data are reproduced very well:
For now, I fixed the initial concentrations; you could also set them as variables if you like (just remove vary=False in the code below). The parameters you obtain are:
[[Variables]]
x10: 5 (fixed)
x20: 0 (fixed)
x30: 0 (fixed)
k0: 0.12183301 +/- 0.005909 (4.85%) (init= 0.2)
k1: 0.77583946 +/- 0.026639 (3.43%) (init= 0.3)
[[Correlations]] (unreported correlations are < 0.100)
C(k0, k1) = 0.809
The code that reproduces the plot looks like this (some explanation can be found in the inline comments):
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from lmfit import minimize, Parameters, Parameter, report_fit
def f(y, t, paras):
    """
    Your system of differential equations
    """
    x1 = y[0]
    x2 = y[1]
    x3 = y[2]
    try:
        k0 = paras['k0'].value
        k1 = paras['k1'].value
    except KeyError:
        k0, k1 = paras
    # the model equations
    f0 = -k0 * x1
    f1 = k0 * x1 - k1 * x2
    f2 = k1 * x2
    return [f0, f1, f2]

def g(t, x0, paras):
    """
    Solution to the ODE x'(t) = f(t,x,k) with initial condition x(0) = x0
    """
    x = odeint(f, x0, t, args=(paras,))
    return x

def residual(paras, t, data):
    """
    compute the residual between actual data and fitted data
    """
    x0 = paras['x10'].value, paras['x20'].value, paras['x30'].value
    model = g(t, x0, paras)
    # you only have data for one of your variables
    x2_model = model[:, 1]
    return (x2_model - data).ravel()
# initial conditions
x10 = 5.
x20 = 0
x30 = 0
y0 = [x10, x20, x30]
# measured data
t_measured = np.linspace(0, 9, 10)
x2_measured = np.array([0.000, 0.416, 0.489, 0.595, 0.506, 0.493, 0.458, 0.394, 0.335, 0.309])
plt.figure()
plt.scatter(t_measured, x2_measured, marker='o', color='b', label='measured data', s=75)
# set parameters including bounds; you can also fix parameters (use vary=False)
params = Parameters()
params.add('x10', value=x10, vary=False)
params.add('x20', value=x20, vary=False)
params.add('x30', value=x30, vary=False)
params.add('k0', value=0.2, min=0.0001, max=2.)
params.add('k1', value=0.3, min=0.0001, max=2.)
# fit model
result = minimize(residual, params, args=(t_measured, x2_measured), method='leastsq') # leastsq nelder
# check results of the fit
data_fitted = g(np.linspace(0., 9., 100), y0, result.params)
# plot fitted data
plt.plot(np.linspace(0., 9., 100), data_fitted[:, 1], '-', linewidth=2, color='red', label='fitted data')
plt.legend()
plt.xlim([0, max(t_measured)])
plt.ylim([0, 1.1 * max(data_fitted[:, 1])])
# display fitted statistics
report_fit(result)
plt.show()
If you have data for additional variables, you can simply update the function residual.
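For example, a hedged sketch of what residual might look like if two components were measured (data_x1 and data_x2 are hypothetical names, not from the original post):

def residual(paras, t, data_x1, data_x2):
    # stack the residuals of both observed components
    x0 = paras['x10'].value, paras['x20'].value, paras['x30'].value
    model = g(t, x0, paras)
    res_x1 = model[:, 0] - data_x1
    res_x2 = model[:, 1] - data_x2
    return np.concatenate((res_x1, res_x2)).ravel()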
The following worked for me:
import pylab as pp
import numpy as np
from scipy import integrate, interpolate
from scipy import optimize
##initialize the data
x_data = np.linspace(0,9,10)
y_data = np.array([0.000,0.416,0.489,0.595,0.506,0.493,0.458,0.394,0.335,0.309])
def f(y, t, k):
    """define the ODE system in terms of
    dependent variable y,
    independent variable t, and
    optional parameters, in this case a single variable k"""
    return (-k[0]*y[0],
            k[0]*y[0] - k[1]*y[1],
            k[1]*y[1])

def my_ls_func(x, teta):
    """definition of function for LS fit
    x gives evaluation points,
    teta is an array of parameters to be varied for fit"""
    # create an alias to f which passes the optional params
    f2 = lambda y, t: f(y, t, teta)
    # calculate ode solution, return values for each entry of "x"
    r = integrate.odeint(f2, y0, x)
    # in this case, we only need one of the dependent variable values
    return r[:, 1]

def f_resid(p):
    """function to pass to optimize.leastsq
    The routine will square and sum the values returned by
    this function"""
    return y_data - my_ls_func(x_data, p)
# solve the system - the solution is in variable c
guess = [0.2, 0.3]  # initial guess for params
y0 = [1, 0, 0]  # initial conditions for ODEs
(c, kvg) = optimize.leastsq(f_resid, guess)  # get params
print("parameter values are ", c)
# fit ODE results to interpolating spline just for fun
xeval=np.linspace(min(x_data), max(x_data),30)
gls = interpolate.UnivariateSpline(xeval, my_ls_func(xeval,c), k=3, s=0)
#pick a few more points for a very smooth curve, then plot
# data and curve fit
xeval=np.linspace(min(x_data), max(x_data),200)
#Plot of the data as red dots and fit as blue line
pp.plot(x_data, y_data,'.r',xeval,gls(xeval),'-b')
pp.xlabel('xlabel',{"fontsize":16})
pp.ylabel("ylabel",{"fontsize":16})
pp.legend(('data','fit'),loc=0)
pp.show()
Look at the scipy.optimize module. Its minimize function with method='Nelder-Mead' is quite similar to fminsearch; both use a simplex algorithm for the optimization.
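A minimal sketch (my addition, reusing f_resid from the answer above; minimize needs a scalar cost, so the residuals are squared and summed first):

from scipy.optimize import minimize

# scalar cost: sum of squared residuals
cost = lambda p: np.sum(f_resid(p)**2)
res = minimize(cost, [0.2, 0.3], method='Nelder-Mead')
print(res.x)  # fitted k0, k1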
# cleaned up a bit to get my head around it - thanks for sharing
import pylab as pp
import numpy as np
from scipy import integrate, optimize
class Parameterize_ODE():
    def __init__(self):
        self.X = np.linspace(0, 9, 10)
        self.y = np.array([0.000, 0.416, 0.489, 0.595, 0.506, 0.493, 0.458, 0.394, 0.335, 0.309])
        self.y0 = [1, 0, 0]  # initial conditions for the ODEs

    def ode(self, y, X, p):
        return (-p[0]*y[0],
                p[0]*y[0] - p[1]*y[1],
                p[1]*y[1])

    def model(self, X, p):
        return integrate.odeint(self.ode, self.y0, X, args=(p,))

    def f_resid(self, p):
        return self.y - self.model(self.X, p)[:, 1]

    def optim(self, p_guess):
        return optimize.leastsq(self.f_resid, p_guess)  # fit params

po = Parameterize_ODE()
p_guess = [0.2, 0.3]
c, kvg = po.optim(p_guess)

# --- show ---
print("parameter values are ", c, kvg)
x = np.linspace(min(po.X), max(po.X), 2000)
pp.plot(po.X, po.y, '.r', x, po.model(x, c)[:, 1], '-b')
pp.xlabel('X', {"fontsize": 16})
pp.ylabel("y", {"fontsize": 16})
pp.legend(('data', 'fit'), loc=0)
pp.show()