Finding zero values with odeint - python

How can I find the point where the first derivative of my equation equals 0 using scipy.integrate.odeint?
I set up the function below, which gets the answer, but I'm not sure about its accuracy, and it can't be the most efficient way to do this.
Basically, I am using this function to find the time at which a projectile with some initial velocity stops moving. With systems of ODEs, is there a better way to solve for this answer?
import numpy as np
from scipy import integrate

def find_nearest(array, value):
    # index of the array element closest to `value`
    idx = (np.abs(array - value)).argmin()
    return array[idx], idx

def deriv(x, t):
    # This function sets up the following relations:
    # dx/dt = v , dv/dt = -(Cp/m)*(4+v^2)
    return np.array([x[1], -(0.005/0.1) * (4 + (x[1]**2))])

def findzero(start, stop, v0):
    time = np.linspace(start, stop, 100000)
    # xinit holds the initial values of the equation
    xinit = np.array([0.0, v0])
    x = integrate.odeint(deriv, xinit, time)
    # find the velocity value nearest to 0
    value, num = find_nearest(x[:, 1], 0.0001)
    print('closest value', value)
    print('reaches zero at time', time[num])
    return time[num]

# from 0 to 20 seconds with initial velocity of 100 m/s
b = findzero(0.0, 20.0, 100.0)

In general, a good approach to solve this sort of problem is to rewrite your equations so that velocity is the independent variable and time and distance are the dependent variables. Then, you simply have to integrate the equations from v=v0 to v=0.
However, in the example you give it is not even necessary to resort to scipy.integrate at all. The equations can be easily solved with pencil and paper (separation of variables followed by a standard integral). The result is
t = (m/(2 Cp)) arctan(v0/2)
where v0 is the initial velocity and the result of arctan must be taken in radians.
For an initial velocity of 100 m/s, the answer is 15.5079899282 seconds.
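A quick Python check of this closed form, using the question's m = 0.1 and Cp = 0.005:
import math
m, Cp, v0 = 0.1, 0.005, 100.0
t_stop = (m / (2 * Cp)) * math.atan(v0 / 2)
print(t_stop)  # 15.5079899282...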

I would use something like scipy.optimize.fsolve() to find the roots of the derivative. Using this, one can work backwards to find the time taken to reach a root.
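A minimal sketch of that approach, under the question's setup (the same deriv() and initial conditions): integrate once with odeint, interpolate the velocity column, and hand the interpolant to fsolve. The 10 s starting guess is arbitrary, just somewhere inside the integration interval.
import numpy as np
from scipy import integrate, interpolate, optimize

def deriv(x, t):
    return np.array([x[1], -(0.005/0.1) * (4 + x[1]**2)])

time = np.linspace(0.0, 20.0, 100000)
x = integrate.odeint(deriv, [0.0, 100.0], time)

# interpolate the velocity column, then solve v(t) = 0 for t
v = interpolate.interp1d(time, x[:, 1])
t_zero = optimize.fsolve(v, 10.0)[0]
print(t_zero)  # close to the analytic 15.5079899282 s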

Related

TDOA calculation: scipy minimize sometimes overshoots

I am trying to use TDOA to find the sound source location in a coordinate system.
We have a cost function that looks like this:
def costfunction(point, meas_list, dims=2):
    #### Cost function to be fed into the scipy.optimize.minimize function.
    #### For a candidate point, calculates the time differences and compares
    #### them to the data.
    error = 0
    for m in meas_list:
        actualdiff = m.delta
        calcdiff = (
            m.soundpointB.dist(point, dims=dims)
            - m.soundpointA.dist(point, dims=dims)
        )
        error += (actualdiff - calcdiff) ** 2
    return error
soundpointB.dist() calculates the distance between a sensor and a candidate point.
Our sensor locations are
b = [-2, 2, 0]
c = [2, -2, 0]
d = [2, 2, 0]
e = [0, 0, 2]
We simulate a random source with x and y in the range -50 to 50 and z in the range 0 to -20. The simulation generates the time difference from the source to each sensor, and the cost function computes the mean squared error. Then we use scipy minimize to find the best solution:
startpoint = np.array([0, 0, 0])
dims = 3
result = minimize(
    fun=costfunction,
    x0=startpoint,
    args=(meas_list, dims),
    method='BFGS',
    options={
        'gtol': 1e-06,
        'return_all': True,
        'norm': inf},
    jac='2-point')
Most of the calculations are good: the percentage error between the actual position and the TDOA-calculated position is within ±10%.
However, for some locations, such as [-30.0, 0.5, -6.0], the TDOA calculation returns [-325.07, 5.49, -64.73].
When we check all the iterates from scipy minimize, it seems it had already found the location but then overshot.
Does anyone know how to make this better, or is there something that needs to be corrected?
BFGS will not guarantee anything but a local minimum. If you know that your source will always lie within known bounds (here x and y in [-50, 50] and z in [-20, 0]), then I would suggest:
As a first step, try a bounded local minimization, using L-BFGS-B, SLSQP, etc., and give your variables those bounds (see the sketch below).
If the above is not satisfactory enough, bring out the big artillery and try global optimization algorithms such as SHGO, DifferentialEvolution, and DualAnnealing. They are all available in SciPy; they might be slower, but you'll be much more confident in your results.
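A minimal sketch of the first suggestion, assuming the question's costfunction and meas_list are in scope; the bounds mirror the simulation ranges stated above, and the start point is arbitrary but inside the bounds.
from scipy.optimize import minimize

bounds = [(-50, 50), (-50, 50), (-20, 0)]  # x, y, z ranges of the simulated sources
result = minimize(
    fun=costfunction,
    x0=[0.0, 0.0, -1.0],
    args=(meas_list, 3),
    method='L-BFGS-B',
    bounds=bounds)
print(result.x)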

ValueError: diff requires input that is at least one dimensional

I am studying how to solve differential equations in Python with odeint, and as a test I am trying to solve the following ODE (the example comes from https://apmonitor.com/pdc/index.php/Main/SolveDifferentialEquations):
# first import the necessary libraries
import numpy as np
from scipy.integrate import odeint

# function that returns dy/dt
def model(y, t):
    k = 0.3
    dydt = -k*y
    return dydt

# Initial condition
y0 = 5.0

# Time points
t = np.linspace(0, 20)

# Solve ODE
def y(t):
    return odeint(model, y0, t)
If I plot the results with matplotlib, or more simply call print(y(t)), everything works perfectly. But if I try to compute the value of the function for a fixed value of time, for instance t1 = t[2] (= 0.8163), I get the error
t1 = t[2]
print(y(t1))
ValueError: diff requires input that is at least one dimensional
Why can I only compute the value of y(t) for the interval t = np.linspace(0, 20) but not for a single number in that interval? Is there some way to fix this?
Thank you very much.
The odeint function solves your differential equation numerically. To do that you need to specify the points where you want your solution to be evaluated. These points also influence the accuracy of the solution. Generally, the more points you give to odeint, the better the result (when solving the same time interval).
This means that there is no way for odeint to know what accuracy you want if you only supply a single time at which you want to evaluate the function. Instead you always need to supply a range of times (like you did with np.linspace). odeint then returns the value of the solution at all these times.
y(t) is an array of values of your solution and the third value in the array corresponds to the solution at the third time in t:
The solution evaluated at t[0] is y(t)[0] = y0
The solution evaluated at t[1] is y(t)[1]
The solution evaluated at t[2] is y(t)[2]
...
So instead of
print(y(t[2]))
you need to use
print(y(t)[2])

Physics bullet with drag in Python immediately goes below zero

I am coding a program to find how far a bullet shoots at a 20 degree angle with various parameters, with and without drag. For some reason my program immediately calculates the y value of the bullet as being below zero, and I think it may be a problem with the physics equations I have used. I don't have the strongest background in physics, so after trying my best to figure out where I went wrong, I figured I would ask for help. In the graph below I am attempting to display the different distances associated with different timesteps.
import math
import matplotlib.pyplot as plt

rho = 1.2754
Af = math.pi*(16*0.0254)**2/4
g = -9.8 #m/s^2
V = 250.0 #m/s

#equation of motion
def update(r, V, a, dt):
    return r + V*dt + .5*a*dt*dt, V + a*dt

#drag force from velocity vector, returns x and y components
def drag(Vx, Vy):
    Vmag = math.sqrt(Vx**2 + Vy**2)
    Ff = .5*Cd*rho*Af*Vmag**2
    return -Ff * Vx/Vmag, -Ff * Vy/Vmag

def getTrajectory(dt, th):
    #initialize problem [time,x,y,Vx,Vy]
    state = [[0.0, 0.0, 0.0, V*math.cos(th), V*math.sin(th)]]
    while state[-1][2] >= 0: # while y > 0
        time = state[-1][0] + dt
        Fd = drag(state[-1][3], state[-1][4]) # Vx,Vy
        ax = Fd[0]/m # new x acceleration
        ay = (Fd[1]/m) + g # new y acceleration
        nextX, nextVx = update(state[-1][1], state[-1][3], ax, dt) #x,Vx,acceleration x, dt
        nextY, nextVy = update(state[-1][2], state[-1][4], ay, dt) #y,Vy,acceleration y, dt
        state.append([time, nextX, nextY, nextVx, nextVy])
        # linear interpolation
        dtf = -state[-1][2]*(state[-2][0]-state[-1][0])/(state[-2][2]-state[-1][2])
        xf, vxf = update(state[-1][1], state[-1][3], ax, dtf)
        print(xf, vxf)
        yf, vyf = update(state[-1][2], state[-1][4], ay, dtf)
        print(yf, vyf)
        state[-1] = [time+dtf, xf, yf, vxf, vyf]
        return state

delt = 0.01 #timestep
th = 20.0 * math.pi/180.0 #changing to radians
m = 2400 #kg
Cd = 1.0

tsteps = [0.001, 0.01, 0.1, 1.0, 3.0] # trying different timesteps to find the optimal one
xdistance = [getTrajectory(tstep, th)[-1][1] for tstep in tsteps]
plt.plot(tsteps, xdistance)
plt.show()
A print of the state variable which contains all of the values of time,x,y,Vx, and Vy after one timestep yields:
[[0.0, 0.0, 0.0, 234.9231551964771, 85.50503583141717],
 [0.0, 0.0, -9.540979117872439e-18, 234.9231551964771, 85.50503583141717]]
The first array holds the initial conditions, with Vx and Vy resolved from the 20 degree angle. The second array should have the bullet flying off with positive x and y, but for some reason the y value is calculated as -9.54e-18, which quits the getTrajectory method, since it is only supposed to continue while y > 0.
In order to simplify my problem, I think it may help to look specifically at my update function.
def update(initial, V, a, dt): # current x or y, velocity in x or y, acceleration in x or y, timestep
    return initial + V*dt + .5*a*dt*dt, V + a*dt
This function is supposed to return an updated x,Vx or an updated y,Vy depending on what is passed. I think this may be where the problem is in the physics equations but I'm not completely sure.
I think I see what's going on: in your while loop, you first make a forward time step to compute the kinematic variables at, say, t = 0.001 from their values at t = 0, and then you set dtf to a negative value and make a backward time step to compute new values at t = 0 from the values you just computed at t = 0.001 (or whatever). I'm not sure why you do that, but at any rate your update() function is using the Euler method, which is numerically unstable: as you take more and more steps, it often diverges more and more from the true solution in somewhat unpredictable ways. In this case, that turns up as a negative y position, instead of y being set back to 0 as you might expect.
No method of numerical integration is going to give you perfect results, so you can always expect a little bit of inaccuracy, which may be positive or negative. Just because something is a little bit negative isn't necessarily cause for concern. (But it could be, depending on the situation.) That being said, you can get better results by using a more robust method of integration, like Runge-Kutta 4th-order (a.k.a. RK4) or a symplectic integrator. But those are somewhat advanced techniques, so I wouldn't worry about them too much for now.
As for the problem at hand, I wonder if you might have a sign error somewhere in your equations, possibly in the line that sets dtf. I can't say for sure, though, because I really don't understand how getTrajectory() is supposed to work.
This seems to be an indentation error. The part that determines the location of the zero crossing of y, starting with the linear interpolation, is surely not meant to be part of the integration loop, and especially not the return statement. Repairing this indentation level gives the output
(3745.3646817336303, 205.84399509886356)
(6.278677479133594e-07, -81.7511764639779)
(3745.2091465034314, 205.8432621147464)
(8.637877372419503e-05, -81.75094436152747)
(3743.6311931495325, 205.83605594288176)
(0.01056407140434586, -81.74756057157798)
(3727.977248738945, 205.75827166721402)
(0.1316764390634169, -81.72155620161116)
(3669.3437231150106, 205.73704583954634)
(10.273469073526238, -80.52193274789204)
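For reference, here is a sketch of the function with that indentation repaired, reusing the question's update() and drag() helpers and module-level constants (m, g, V); nothing changes except that the interpolation block and the return now run once, after the loop.
def getTrajectory(dt, th):
    # state rows are [time, x, y, Vx, Vy]
    state = [[0.0, 0.0, 0.0, V*math.cos(th), V*math.sin(th)]]
    while state[-1][2] >= 0:  # step forward while y >= 0
        time = state[-1][0] + dt
        Fd = drag(state[-1][3], state[-1][4])
        ax = Fd[0]/m
        ay = Fd[1]/m + g
        nextX, nextVx = update(state[-1][1], state[-1][3], ax, dt)
        nextY, nextVy = update(state[-1][2], state[-1][4], ay, dt)
        state.append([time, nextX, nextY, nextVx, nextVy])
    # linear interpolation back to y = 0, executed once after the loop
    dtf = -state[-1][2]*(state[-2][0]-state[-1][0])/(state[-2][2]-state[-1][2])
    xf, vxf = update(state[-1][1], state[-1][3], ax, dtf)
    yf, vyf = update(state[-1][2], state[-1][4], ay, dtf)
    state[-1] = [time+dtf, xf, yf, vxf, vyf]
    return state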

Solve ODEs with discontinuous input/forcing data

I'm trying to solve a system of coupled, first-order ODEs in Python. I'm new to this, but the Zombie Apocalypse example from SciPy.org has been a great help so far.
An important difference in my case is that the input data used to "drive" my system of ODEs changes abruptly at various time points and I'm not sure how best to deal with this. The code below is the simplest example I can think of to illustrate my problem. I appreciate this example has a straightforward analytical solution, but my actual system of ODEs is more complicated, which is why I'm trying to understand the basics of numerical methods.
Simplified example
Consider a bucket with a hole in the bottom (this kind of "linear reservoir" is the basic building block of many hydrological models). The input flow rate to the bucket is R and the output from the hole is Q. Q is assumed to be proportional to the volume of water in the bucket, V. The constant of proportionality is usually written as 1/T, where T is the "residence time" of the store. This gives a simple ODE of the form
dQ/dt = (R - Q)/T
In reality, R is an observed time series of daily rainfall totals. Within each day, the rainfall rate is assumed to be constant, but between days the rate changes abruptly (i.e. R is a discontinuous function of time). I'm trying to understand the implications of this for solving my ODEs.
Strategy 1
The most obvious strategy (to me at least) is to apply SciPy's odeint function separately within each rainfall time step. This means I can treat R as a constant. Something like this:
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sn
from scipy.integrate import odeint

np.random.seed(seed=17)

def f(y, t, R_t):
    """ Function to integrate.
    """
    # Unpack parameters
    Q_t = y[0]
    # ODE to solve
    dQ_dt = (R_t - Q_t)/T
    return dQ_dt

# #############################################################################
# User input
T = 10      # Time constant (days)
Q0 = 0.     # Initial condition for outflow rate (mm/day)
days = 300  # Number of days to simulate
# #############################################################################

# Create a fake daily time series for R
# Generate random values from a uniform dist
df = pd.DataFrame({'R':np.random.uniform(low=0, high=5, size=days+20)},
                  index=range(days+20))

# Smooth with a moving window to make more sensible
df['R'] = df['R'].rolling(window=20).mean()

# Chop off the NoData at the start due to moving window
df = df[20:].reset_index(drop=True)

# List to store results
Q_vals = []

# Vector of initial conditions
y0 = [Q0, ]

# Loop over each day in the R dataset
for step in range(days):
    # We want to find the value of Q at the end of this time step
    t = [0, 1]
    # Get R for this step
    R_t = float(df['R'].iloc[step])
    # Solve the ODEs
    soln = odeint(f, y0, t, args=(R_t,))
    # Extract flow at end of step from soln
    Q = float(soln[1])
    # Append result
    Q_vals.append(Q)
    # Update initial condition for next step
    y0 = [Q, ]

# Add results to df
df['Q'] = Q_vals
Strategy 2
The second approach involves simply feeding everything to odeint and letting it deal with the discontinuities. Using the same parameters and R values as above:
def f(y, t):
    """ Function to integrate.
    """
    # Unpack incremental values for S and D
    Q_t = y[0]
    # Get the value of R at this t (the last daily value at or before t)
    idx = df.index.searchsorted(t, side='right') - 1
    R_t = float(df['R'].iloc[idx])
    # ODE to solve
    dQ_dt = (R_t - Q_t)/T
    return dQ_dt

# Vector of initial parameter values
y0 = [Q0, ]

# Time grid
t = np.arange(0, days, 1)

# solve the ODEs
soln = odeint(f, y0, t)

# Add result to df
df['Q'] = soln[:, 0]
Both of these approaches give identical answers. However, the second strategy, although more compact in terms of code, is much slower than the first. I guess this has something to do with the discontinuities in R causing problems for odeint?
My questions
Is strategy 1 the best approach here, or is there a better way?
Is strategy 2 a bad idea and why is it so slow?
Thank you!
1.) Yes
2.) Yes
Reason for both: Runge-Kutta solvers expect ODE functions that have an order of differentiability at least as high as the order of the solver. This is needed so that the Taylor expansion which gives the expected error term exists. It means that even the order-1 Euler method expects a differentiable ODE function. Thus no jumps are allowed; kinks can be tolerated at order 1, but not in higher-order solvers.
This is especially true for implementations with automatic step size adaptations. Whenever a point is approached where the differentiation order is not satisfied, the solver sees a stiff system and drives the step-size toward 0, which leads to a slowdown of the solver.
You can combine strategies 1 and 2 if you use a solver with a fixed step size, where the step size is a fraction of 1 day. Then the sampling points at the day boundaries serve as (implicit) restart points with the new constant, as sketched below.
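A minimal sketch of that combination, assuming the daily series df['R'], the constant T, and the initial value Q0 from the question. The classic fixed-step RK4 update below uses a step size that divides one day exactly, so every day boundary falls on a grid point and acts as an implicit restart with the new constant R.
import numpy as np

def rk4_fixed(R_series, T, Q0, substeps=10):
    h = 1.0 / substeps          # fixed step size: a fraction of one day
    Q = Q0
    out = []
    for R in R_series:          # R is constant within each day
        f = lambda q: (R - q) / T
        for _ in range(substeps):
            k1 = f(Q)
            k2 = f(Q + 0.5*h*k1)
            k3 = f(Q + 0.5*h*k2)
            k4 = f(Q + h*k3)
            Q += (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)
        out.append(Q)           # value at the end of the day
    return np.array(out)

# Q_fixed = rk4_fixed(df['R'].values, T, Q0)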

On ordinary differential equations (ODE) and optimization, in Python

I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
dt/100 = 1/(y * (1-y)) dy
The lhs integrates trivially to t/100; the rhs is slightly more complicated. We can always decompose a product like this in the denominator into a sum of simpler fractions (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs over a common denominator and comparing the constant and first-order y terms on both sides. In this case it is simple: A = B = 1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to -ln(1-y). Our integrated equation therefore looks like
t/100 + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
t/100 + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (It is clear that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes, so assume it does not.) Plug in the values for y and t at the beginning:
0/100 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1), and then using y = 0.8 gives a value for t. Note that because of the logarithm and the factor 100 multiplying the log term, y will reach 0.8 within a relatively short range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, and then you can plot the function as well.
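As a quick worked check of the formula, assume y(0) = 0.5, so C = ln(0.5/0.5) = 0; then t = 100 * ln(0.8/(1-0.8)):
import math
t = 100 * math.log(0.8 / (1 - 0.8))  # = 100 * ln(4)
print(t)  # about 138.63, comfortably inside 0 < t < 3000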
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to try numerically. Initially we only know the value of the function at time zero, y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
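A minimal sketch of the Euler scheme described above, assuming y(0) = 0.5 and the target value 0.8, including the final linear interpolation between the last two points:
dt = 0.1
t, y = 0.0, 0.5
t_prev, y_prev = t, y
while y < 0.8 and t < 3000:
    t_prev, y_prev = t, y
    y = y + 0.01 * y * (1 - y) * dt  # Euler step: y(t+dt) = y(t) + dy/dt * dt
    t = t + dt

# interpolate between the last two points for a more precise crossing time
t_hit = t_prev + (0.8 - y_prev) * (t - t_prev) / (y - y_prev)
print(t_hit)  # close to the analytic value of about 138.6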
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)[0]
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)  # time grid - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5])  # initial value (y0 = 1 is a fixed point of the ODE, so start elsewhere)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts match those of ys). (This may not work first time because I am constructing it from memory.)
This might involve using the scipy interpolate function on the ys array, so that you get a result at any time t, as sketched below.
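A sketch of that last step, assuming the ts and ys arrays from above: build an interpolant of the solution and bracket the crossing of 0.8 with a scalar root finder.
from scipy.interpolate import interp1d
from scipy.optimize import brentq

y_of_t = interp1d(ts, ys[:, 0])
# find the time at which the interpolated solution first equals 0.8
t_hit = brentq(lambda t: float(y_of_t(t)) - 0.8, ts[0], ts[-1])
print(t_hit)  # about 138.6 for y0 = 0.5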
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; Odeint on the scipy website has examples for systems such as coupled springs that can be solved for, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. These exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
