I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically, so I want to know how to solve it numerically. I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write a quotient like this as a sum of simpler quotients times some constants (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to give -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve t for an exact value of y, we first need to work out the value of C. We do this using the initial conditions. It is clear that if y starts at 1, dy/dt = 0 and the value of y never changes. Thus plug in the values for y and t at the beginning
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This will give a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Note that because of the logarithm, y will reach 0.8 within a relatively modest range of t values (of order a few hundred here, given the 0.01 factor), unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
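For example, a quick numerical check of the closed form (assuming an illustrative starting value y(0) = 0.5, for which C = ln(1) = 0):
import math

y0, y_target = 0.5, 0.8
C = math.log(y0 / (1 - y0))                           # C = 0 for y0 = 0.5
t = (math.log(y_target / (1 - y_target)) - C) / 0.01
print(t)   # 100 * ln(4), roughly 138.6, well inside 0 < t < 3000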
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to integrate numerically. Initially we only know the value of the function at time zero, y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
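A minimal sketch of that procedure for the example equation (the initial value and step size here are illustrative choices):
def deriv(t, y):
    return 0.01 * y * (1 - y)

t, y, dt = 0.0, 0.5, 0.01    # assumed start value y(0) = 0.5 and step size
while t < 3000:
    t_prev, y_prev = t, y
    y += deriv(t, y) * dt    # Euler step: treat the gradient as constant over dt
    t += dt
    if y >= 0.8:
        # interpolate between the last two positions for a more precise t
        t_cross = t_prev + (0.8 - y_prev) * (t - t_prev) / (y - y_prev)
        print(t_cross)       # roughly 138.6 for these choices
        break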
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, a family of integrators which remove progressively higher orders of the error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about problems like numerical stability or have an in-depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an integrator class which you should be able to use straight away. For instance:
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)   # y(0) = 0.5 at t = 0

t = 0.1   # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)[0]
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)  # time grid - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5])  # initial value (y0 = 1 is a fixed point of this ODE, so y would never move)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts match ys). (This may not work first time because I am constructing from memory.)
This might involve using interpolation (e.g. scipy.interpolate or numpy.interp) on the ys array to get the time t at which y reaches your target value.
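For instance, a sketch of that lookup with numpy.interp, reusing ts and ys from above (this assumes a starting value below the target, e.g. 0.5, and relies on y being monotonic, which holds for this logistic equation):
# interpolate t as a function of y to read off when y first reaches 0.8
t_hit = np.interp(0.8, ys[:, 0], ts)
print(t_hit)   # roughly 138.6 for y(0) = 0.5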
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint page on the scipy website has examples of systems such as coupled springs that can be solved, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but they have not yet been wrapped in python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
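For completeness: in recent SciPy versions (1.0 and later) this capability is exposed directly, since scipy.integrate.solve_ivp accepts an events argument and does the root finding for you. A minimal sketch on the toy problem from this question:
from scipy.integrate import solve_ivp

def deriv(t, y):
    return 0.01 * y * (1 - y)

def hit_target(t, y):
    return y[0] - 0.8        # solve_ivp locates the roots of this function

hit_target.terminal = True   # stop the integration at the first crossing

sol = solve_ivp(deriv, (0, 3000), [0.5], events=hit_target)
print(sol.t_events[0])       # time(s) at which y crosses 0.8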
I would like to compute the Buchstab function numerically. It is defined by the delay differential equation u*w'(u) = w(u-1) - w(u) for u > 2, with history w(u) = 1/u for 1 <= u <= 2.
How can I compute this numerically efficiently?
To get a general feeling of how DDE integration works, I'll give some code, based on the low-order Heun method (to avoid uninteresting details while still being marginally useful).
In the numerical integration, the previous values are treated as a function of time like any other time-dependent term. As there is no functional expression for them, the solution so far is used as a function table for interpolation. The interpolation error order should be as high as the error order of the ODE integrator; this is easy to arrange for low-order methods, but requires extra effort for higher-order methods. The solve_ivp stepper classes provide such a "dense output" interpolation per step that can be assembled into a function for the currently existing integration interval.
So after the theory, the practice. Select step size h = 0.05 and convert the given history function into the start of the solution function table:
import numpy as np
import matplotlib.pyplot as plt

h = 0.05
u = 1
u_arr = []
w_arr = []
while u < 2 + 0.5*h:
    u_arr.append(u)
    w_arr.append(1/u)
    u += h
Then solve the equation; for the delayed value, use interpolation in the function table, here with numpy.interp. There are other functions with more options in scipy.interpolate.
Note that h needs to be smaller than the smallest delay, so that the delayed values come from a previous step. That is the case here.
u = u_arr[-1]
w = w_arr[-1]
while u < 4:
    k1 = (-w + np.interp(u-1, u_arr, w_arr))/u      # slope at the current point
    us, ws = u+h, w+h*k1                            # Euler predictor
    k2 = (-ws + np.interp(us-1, u_arr, w_arr))/us   # slope at the predicted point
    u, w = us, w + 0.5*h*(k1+k2)                    # Heun (trapezoidal) corrector
    u_arr.append(u)
    w_arr.append(w)                                 # store the corrected value, not the predictor ws
Now the numerical approximation can be further processed, for instance plotted.
plt.plot(u_arr,w_arr); plt.grid(); plt.show()
I need to find the minimum distance from a point (X, Y) to a curve defined by four coefficients C0, C1, C2, C3 as y = C0 + C1*x + C2*x^2 + C3*x^3.
I have used a numerical approach: np.linspace and np.polyval to generate discrete (X, Y) points on the curve, then shapely's Point, MultiPoint and nearest_points to find the nearest point, and finally np.linalg.norm to find the distance.
This is a numerical approach by discretizing the curve.
My question is how can I find the distance by analytical methods and code it?
Problem definition
For the sake of simplicity let's use P for the point and Px and Py for the coordinates. Let's call the function f(x).
Another way to look at your problem is that you're trying to find the x that minimizes the distance between P and the point (x, f(x)).
The problem can then be formulated as a minimization problem.
Find x that minimizes (x-Px)² + (f(x)-Py)²
(Note that we can drop the square root that should be there, because the square root is a monotonic function and doesn't change the optima. Some details here.)
Analytical solution
The fully analytical way to solve this would be a pen-and-paper approach. You can expand the equation and compute the derivative, then see where it cancels out to find the extrema. (This will be a lengthy process to do analytically; @Yves Daoust addresses it in his answer. Either do that or use a numerical solver for this part, for example a version of Newton's method.) Then check whether each extremum is a maximum or a minimum by evaluating the function there and sampling a few points around it to see how the function is evolving. From this you can find the global minimum, and that gives you the x you're looking for. But developing this is probably content better suited for the mathematics site.
Optimization approach
So instead I'm gonna suggest a solution that uses numerical minimization without a sampling approach. You can use the minimize function from scipy to solve the minimization problem.
from scipy.optimize import minimize

# Define the curve
# (x**2 is used instead of math.pow, which chokes on the arrays scipy passes in)
C0 = -1
C1 = 5
C2 = -5
C3 = 6
f = lambda x: C0 + C1*x + C2*x**2 + C3*x**3

# Define the function to minimize
p_x = 12
p_y = -7
min_f = lambda x: (x - p_x)**2 + (f(x) - p_y)**2

# Minimize
min_res = minimize(min_f, 0)  # the starting point doesn't really matter here

# Show result
print("Closest point is x=", min_res.x[0], " y=", f(min_res.x[0]))
Here I used your function with dummy values but you could use any function you want with this approach.
You need to differentiate (x - X)² + (C0 + C1 x + C2 x² + C3 x³ - Y)² and find the roots of the derivative. But this derivative is a quintic polynomial (fifth degree) with general coefficients, so the Abel-Ruffini theorem fully applies, meaning that there is no solution in radicals.
There is a known solution anyway, by reducing the equation (via a lengthy substitution process) to the form x^5 - x + t = 0 known as the Bring–Jerrard normal form, and getting the solutions (called ultraradicals) by means of the elliptic functions of Hermite or evaluation of the roots by Taylor.
Personal note:
This approach is virtually foolish, as there exist ready-made numerical polynomial root-finders, and the ultraradical function is not easy to evaluate.
Anyway, looking at the plot of x^5 - x, one can see that any horizontal line intersects it once or three times, and finding an interval with a change of sign is easy. With that, you can obtain an accurate root by dichotomy (and far from the extrema, Newton will easily converge).
After having found this root, you can deflate the polynomial to a quartic, for which explicit formulas by radicals are known.
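For reference, a sketch of the ready-made route with NumPy's polynomial root-finder (the coefficients and point are the dummy values from the optimization answer above):
import numpy as np

# squared distance d(x) = (x - X)^2 + (f(x) - Y)^2; its derivative is the quintic
X, Y = 12.0, -7.0
f = np.polynomial.Polynomial([-1.0, 5.0, -5.0, 6.0])   # C0, C1, C2, C3
x = np.polynomial.Polynomial([0.0, 1.0])

d = (x - X)**2 + (f - Y)**2
crit = d.deriv().roots()
real = crit[np.isreal(crit)].real   # the one or three real critical points
best = min(real, key=d)             # global minimum of the squared distance
print(best, f(best))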
So I'm trying to solve for the eigenvalues of bound state solutions to the finite square well potential, which involves solving the equation tan(z) = sqrt((z0/z)^2 - 1).
I was able to solve it graphically by plotting the two functions and figuring out where they intersect, then using the intersection as an initial guess in scipy.optimize.root. However, this seems like a fairly cumbersome process. From what I understand, root-finding methods usually require an initial guess and find the root closest to that guess.
What I'm wondering is whether there is a way to find all the roots (or all local minima) in a given range in Python without necessarily providing an initial guess. From the web searches I've done, there seem to be some methods in Mathematica, but in general finding all roots within a good tolerance is impossible except for specific functions. Does my equation happen to fall under one of those specific situations? Below is a graph of the two functions and the code I used with scipy.optimize.root:
import numpy as np
import scipy.optimize as opt

def func(z, z0):
    return np.tan(z) - np.sqrt((z0/z)**2 - 1)

# finding z0
a = 3
V0 = 3
h_bar = 1
m = 1
z0 = a*np.sqrt(2*m*V0)/h_bar

# calculating roots
res = opt.root(func, [1.4, 4.0, 6.2], args=(z0,))
res.x

# output
array([1.38165158, 4.11759958, 6.70483966])
There is no general way.
In this specific case, all roots are well localized due to the periodicity of the tangent function, so you can simply run root-finding, e.g. brentq, on each of these intervals.
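A sketch of that interval-by-interval approach with brentq, using the same z0 as in the question (the small offsets keep each bracket away from the tangent's poles and inside the domain of the square root):
import numpy as np
from scipy.optimize import brentq

a, V0, h_bar, m = 3, 3, 1, 1
z0 = a*np.sqrt(2*m*V0)/h_bar

def func(z):
    return np.tan(z) - np.sqrt((z0/z)**2 - 1)

eps = 1e-9
roots = []
k = 0
while k*np.pi < z0:
    lo = k*np.pi + eps                       # just right of tan's zero
    hi = min(k*np.pi + np.pi/2, z0) - eps    # left of the pole and of z0
    if lo < hi and func(lo)*func(hi) < 0:
        roots.append(brentq(func, lo, hi))
    k += 1
print(roots)   # should recover approximately [1.38, 4.12, 6.70]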
This is more of a hack than an answer, but one may make the root-finding process a bit less tedious by providing a grid of initial guesses and taking the set of solutions found across all the attempts.
Using sympy (whose defaults in nsolve may provide a more robust solver) you could do
from sympy.solvers import nsolve
from sympy import tan, sqrt, symbols
import numpy as np
a = 3
V0 = 3
h_bar = 1
m = 1
z0 = a*np.sqrt(2*m*V0)/h_bar
z = symbols('z')
expr = tan(z) - sqrt((z0/z)**2 - 1)
# try out a grid of initial conditions and get set of found solutions
# this may fail with some other choice of interval
initial_guesses = np.linspace(0.2, 7, 100)
# candidate solutions
set([nsolve(expr, z, guess) for guess in initial_guesses])
I'm trying to solve a system of coupled, first-order ODEs in Python. I'm new to this, but the Zombie Apocalypse example from SciPy.org has been a great help so far.
An important difference in my case is that the input data used to "drive" my system of ODEs changes abruptly at various time points and I'm not sure how best to deal with this. The code below is the simplest example I can think of to illustrate my problem. I appreciate this example has a straightforward analytical solution, but my actual system of ODEs is more complicated, which is why I'm trying to understand the basics of numerical methods.
Simplified example
Consider a bucket with a hole in the bottom (this kind of "linear reservoir" is the basic building block of many hydrological models). The input flow rate to the bucket is R and the output from the hole is Q. Q is assumed to be proportional to the volume of water in the bucket, V. The constant of proportionality is usually written as 1/T, where T is the "residence time" of the store. This gives a simple ODE of the form dQ/dt = (R - Q)/T.
In reality, R is an observed time series of daily rainfall totals. Within each day, the rainfall rate is assumed to be constant, but between days the rate changes abruptly (i.e. R is a discontinuous function of time). I'm trying to understand the implications of this for solving my ODEs.
Strategy 1
The most obvious strategy (to me at least) is to apply SciPy's odeint function separately within each rainfall time step. This means I can treat R as a constant. Something like this:
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sn
from scipy.integrate import odeint

np.random.seed(seed=17)

def f(y, t, R_t):
    """ Function to integrate.
    """
    # Unpack parameters
    Q_t = y[0]

    # ODE to solve
    dQ_dt = (R_t - Q_t)/T

    return dQ_dt

# #############################################################################
# User input
T = 10       # Time constant (days)
Q0 = 0.      # Initial condition for outflow rate (mm/day)
days = 300   # Number of days to simulate
# #############################################################################

# Create a fake daily time series for R
# Generate random values from uniform dist
df = pd.DataFrame({'R':np.random.uniform(low=0, high=5, size=days+20)},
                  index=range(days+20))

# Smooth with a moving window to make more sensible
# (pd.rolling_mean in older pandas versions)
df['R'] = df['R'].rolling(window=20).mean()

# Chop off the NoData at the start due to moving window
df = df[20:].reset_index(drop=True)

# List to store results
Q_vals = []

# Vector of initial conditions
y0 = [Q0, ]

# Loop over each day in the R dataset
for step in range(days):
    # We want to find the value of Q at the end of this time step
    t = [0, 1]

    # Get R for this step (df.ix in older pandas versions)
    R_t = float(df['R'].iloc[step])

    # Solve the ODEs
    soln = odeint(f, y0, t, args=(R_t,))

    # Extract flow at end of step from soln
    Q = float(soln[1])

    # Append result
    Q_vals.append(Q)

    # Update initial condition for next step
    y0 = [Q, ]

# Add results to df
df['Q'] = Q_vals
Strategy 2
The second approach involves simply feeding everything to odeint and letting it deal with the discontinuities. Using the same parameters and R values as above:
def f(y, t):
    """ Function to integrate.
    """
    # Unpack incremental values for S and D
    Q_t = y[0]

    # Get the value for R at this t
    # (df.index.get_loc(t, method='ffill') in older pandas versions)
    idx = df.index.get_indexer([t], method='ffill')[0]
    R_t = float(df['R'].iloc[idx])

    # ODE to solve
    dQ_dt = (R_t - Q_t)/T

    return dQ_dt

# Vector of initial parameter values
y0 = [Q0, ]

# Time grid
t = np.arange(0, days, 1)

# Solve the ODEs
soln = odeint(f, y0, t)

# Add result to df
df['Q'] = soln[:, 0]
Both of these approaches give identical answers, which look like this:
However, the second strategy, although more compact in terms of code, is much slower than the first. I guess this is something to do with the discontinuities in R causing problems for odeint?
My questions
Is strategy 1 the best approach here, or is there a better way?
Is strategy 2 a bad idea and why is it so slow?
Thank you!
1.) Yes
2.) Yes
Reason for both: Runge-Kutta solvers expect ODE functions that have an order of differentiability at least as high as the order of the solver. This is needed so that the Taylor expansion which gives the expected error term exists. Which means that even the order 1 Euler method expects a differentiable ODE function. Thus no jumps are allowed, kinks can be tolerated in order 1, but not in higher order solvers.
This is especially true for implementations with automatic step size adaptations. Whenever a point is approached where the differentiation order is not satisfied, the solver sees a stiff system and drives the step-size toward 0, which leads to a slowdown of the solver.
You can combine strategies 1 and 2 if you use a solver with fixed step size and a step size that is a fraction of 1 day. Then the sampling points at the day turns serve as (implicit) restart points with the new constant.
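A sketch of that combined approach, as a hand-rolled fixed-step classical RK4 (names are illustrative): each day boundary lands exactly on a grid point, so every jump in R acts as a clean restart with a new constant.
import numpy as np

def rk4_fixed(R_series, T, Q0, n_per_day=10):
    h = 1.0 / n_per_day
    Q = Q0
    Q_out = []
    for R_t in R_series:                 # R is constant within each day
        f = lambda Q: (R_t - Q) / T      # right-hand side for this day
        for _ in range(n_per_day):
            k1 = f(Q)
            k2 = f(Q + 0.5*h*k1)
            k3 = f(Q + 0.5*h*k2)
            k4 = f(Q + h*k3)
            Q += (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)
        Q_out.append(Q)                  # value at the end of each day
    return np.array(Q_out)
Called as rk4_fixed(df['R'].values, T, Q0), this should reproduce the strategy 1 results without the adaptive step-size control collapsing at the jumps.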
I'm trying to obtain the function expected_W, or H, that is the result of an integration: expected_W(p, theta) = H(p, theta) = ∫∫ w(p, theta, eps, beta) f(beta | theta) q(eps) d(eps) d(beta)
where:
theta is a vector with two elements: theta_0 and theta_1
f(beta | theta) is a normal density for beta with mean theta_0 and variance theta_1
q(epsilon) is a normal density for epsilon with mean zero and variance sigma_epsilon (set to 1 by default).
w(p, theta, eps, beta) is a function I take as input, so I cannot predict exactly how it looks. It will likely be non-linear, but not particularly nasty.
This is the way I implement the problem. I'm sure the wrapper functions I make are a mess, so I'd be happy to receive any help on that too.
from __future__ import division
from scipy import integrate
from scipy.stats import norm
import math
import numpy as np

def exp_w(w_B, sigma_eps = 1, **kwargs):
    '''
    Integrates the w_B function

    Input:
    + w_B : the function to be integrated.
    + sigma_eps : variance of the epsilon term. Set to 1 by default
    '''
    # The integrand function gives everything under the integral:
    #   w(B(p, \theta, \epsilon, \beta)) f(\beta | \theta) q(\epsilon)
    def integrand(eps, beta, p, theta_0, theta_1, sigma_eps=sigma_eps):
        q_e = norm.pdf(eps, loc=0, scale=math.sqrt(sigma_eps))
        f_beta = norm.pdf(beta, loc=theta_0, scale=math.sqrt(theta_1))
        return w_B(p=p,
                   theta_0=theta_0, theta_1=theta_1,
                   eps=eps, beta=beta) * q_e * f_beta

    # Limits of integration. Using limited support for now.
    eps_inf = lambda beta: -10   # otherwise: -np.inf
    eps_sup = lambda beta: 10    # otherwise: np.inf
    beta_inf = -10
    beta_sup = 10

    def integrated_f(p, theta_0, theta_1):
        return integrate.dblquad(integrand, beta_inf, beta_sup,
                                 eps_inf, eps_sup,
                                 args=(p, theta_0, theta_1))

    # This integrated_f is the H referenced at the top of the question
    return integrated_f
I tested this function with a simple w function for which I know the analytic solution (this won't usually be the case).
def test_exp_w():
    def w_B(p, theta_0, theta_1, eps, beta):
        return 3*(p*eps + p*(theta_0 + theta_1) - beta)

    # Function that I get
    integrated = exp_w(w_B, sigma_eps=1)

    # Function that I should get
    def exp_result(p, theta_0, theta_1):
        return 3*p*(theta_0 + theta_1) - 3*theta_0

    args = np.random.rand(3)
    d_args = {'p': args[0], 'theta_0': args[1], 'theta_1': args[2]}

    if not np.allclose(integrated(**d_args)[0], exp_result(**d_args)):
        raise Exception("Integration procedure isn't working!")
Hence, my implementation seems to be working, but it's very slow for my purpose. I need to repeat this process tens or hundreds of thousands of times (this is a step in a value function iteration; I can give more info if people think it's relevant).
With scipy version 0.14.0 and numpy version 1.8.1, this integral takes 15 seconds to compute.
Does anybody have any suggestion on how to go about this?
To start with, it probably would help to get bounded domains of integration, but I haven't figured out how to do that, or whether the gaussian quadrature in SciPy takes care of it in a good way (does it use Gauss-Hermite?).
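(For reference, a sketch of the Gauss-Hermite idea I have in mind: a change of variables folds both gaussian densities into the quadrature weights. It assumes w_B accepts array arguments.)
import numpy as np
from numpy.polynomial.hermite import hermgauss

def exp_w_gh(w_B, p, theta_0, theta_1, sigma_eps=1.0, n=30):
    # nodes and weights for integrals of the form int exp(-x^2) g(x) dx
    x, wx = hermgauss(n)
    beta = theta_0 + np.sqrt(2*theta_1)*x    # substitution for f(beta | theta)
    eps = np.sqrt(2*sigma_eps)*x             # substitution for q(eps)
    B, E = np.meshgrid(beta, eps)
    W = np.outer(wx, wx) / np.pi             # the two 1/sqrt(pi) normalizations
    return np.sum(W * w_B(p=p, theta_0=theta_0, theta_1=theta_1, eps=E, beta=B))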
Thanks for your time.
---- Edit: adding profiling times -----
%lprun results show that most of the time is spent in _distn_infrastructure.py:1529(pdf) and _continuous_distns.py:97(_norm_pdf), each with a whopping 83244 calls.
The time taken to integrate your function sounds very long if the function is not a nasty one.
First thing I suggest you do is to profile where the time is spent. Is it spent in dblquad or elsewhere? How many calls are made to w_B during the integration? If the time is spent in dblquad and the number of calls is very high, could you use looser tolerances in the integration?
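For example, looser tolerances can be passed straight to dblquad, whose defaults are epsabs = epsrel ≈ 1.49e-8 (this snippet reuses the names from the question's code):
res, err = integrate.dblquad(integrand, beta_inf, beta_sup,
                             eps_inf, eps_sup,
                             args=(p, theta_0, theta_1),
                             epsabs=1e-4, epsrel=1e-4)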
It seems that the multiplication by the gaussians actually lets you tighten the integration limits a great deal, as most of the energy of a gaussian lies within a very small area. You might want to try to calculate reasonable, tighter bounds. You have already limited the area to -10..10; is there any significant performance change between -100..100, -10..10, and -1..1?
If you know your functions are relatively smooth, then there is a Mickey-Mouse version of the integration:
determine reasonable upper and lower limits in both axes (by the gaussians)
calculate a reasonable grid density (e.g. 100 points in each direction)
calculate the w_B for each of these points (and this will be much faster, if it is possible to require a vectorized version of w_B)
sum it all together
This is very low-tech but also very fast. Whether or not it gives you results which are good enough for the outer iteration is an interesting question. It just might.
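A sketch of that recipe (names are illustrative; it assumes a vectorized w_B so that step 3 is a single call):
import numpy as np
from scipy.stats import norm

def exp_w_grid(w_B, p, theta_0, theta_1, sigma_eps=1.0, n=100, width=5.0):
    # steps 1-2: grids covering +-width standard deviations of each gaussian
    eps = np.linspace(-width*np.sqrt(sigma_eps), width*np.sqrt(sigma_eps), n)
    beta = np.linspace(theta_0 - width*np.sqrt(theta_1),
                       theta_0 + width*np.sqrt(theta_1), n)
    E, B = np.meshgrid(eps, beta)
    # step 3: evaluate the integrand on the whole grid at once
    weights = norm.pdf(E, 0, np.sqrt(sigma_eps)) * norm.pdf(B, theta_0, np.sqrt(theta_1))
    values = w_B(p=p, theta_0=theta_0, theta_1=theta_1, eps=E, beta=B)
    # step 4: sum it all together (a plain Riemann sum)
    return np.sum(values * weights) * (eps[1] - eps[0]) * (beta[1] - beta[0])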