I have solved a single second-order differential equation with two boundary conditions using solve_bvp. However, now I am trying to solve a system of two second-order differential equations:
U'' + a*B' = 0
B'' + b*U' = 0
with the boundary conditions U(+/-0.5) = +/-0.01 and B(+/-0.5) = 0. I have split this into a system of first-order ordinary differential equations and I am trying to use solve_bvp to solve them numerically. However, I am just getting arrays full of zeros for my solution. I believe I am implementing the boundary conditions wrong. It is not clear to me from the documentation how to handle more than two equations. My attempt is below:
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
%matplotlib inline

alpha = 1E-8
zeta = 8E-3
C_k = 0.05
sigma = 0.01

def fun(x, y):
    return np.vstack((y[1], -((alpha)/(C_k*sigma))*y[2], y[2], -(1/(C_k*zeta))*y[1]))

def bc(ya, yb):
    return np.array([ya[0]+0.001, yb[0]-0.001, ya[0]-0, yb[0]-0])

x = np.linspace(-0.5, 0.5, 5000)
y = np.zeros((4, x.size))
print(y)

sol = solve_bvp(fun, bc, x, y)
print(sol)
print(sol)
In my question I have just relabeled the coefficients as a and b; they are parameters that I input. I have the analytic solution for this set of equations, so I know a non-trivial one exists. Any help would be greatly appreciated.
It usually helps a lot to state, at least once in a comment or by assignment to specifically named variables, how you intend to compose the state vector.
By the form of the derivative return vector, I would think you intend
U, U', B, B'
which means that U=y[0], U'=y[1] and B=y[2], B'=y[3], so that your derivatives vector should correctly be
return y[1], -((alpha)/(C_k*sigma))*y[3], y[3], -(1/(C_k*zeta))*y[1]
and the boundary conditions
return ya[0]+0.001, yb[0]-0.001, ya[2]-0, yb[2]-0
Your boundary conditions in particular should make the algorithm fail in the first step, because they produce a singular Jacobian; always check the .success and .message fields of the solution structure.
Note that by default the absolute and relative tolerance of the experimental solve_bvp is 1e-3, and the number of mesh nodes is capped by the max_nodes argument.
Setting the initial number of nodes to 50 (5000 is far too many; the solver refines the mesh where necessary) and the tolerance to 1e-6, I get solution plots that visibly satisfy the boundary conditions.
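For reference, a minimal sketch that puts these corrections together; the parameter values are the ones from the question, and the node count and tolerance are the settings mentioned above:

import numpy as np
from scipy.integrate import solve_bvp

alpha, zeta, C_k, sigma = 1E-8, 8E-3, 0.05, 0.01

def fun(x, y):
    # state vector y = [U, U', B, B']
    return np.vstack((y[1], -(alpha/(C_k*sigma))*y[3], y[3], -(1/(C_k*zeta))*y[1]))

def bc(ya, yb):
    # U(-0.5) = -0.001, U(0.5) = 0.001, B(+/-0.5) = 0
    return np.array([ya[0]+0.001, yb[0]-0.001, ya[2], yb[2]])

x = np.linspace(-0.5, 0.5, 50)
y = np.zeros((4, x.size))
sol = solve_bvp(fun, bc, x, y, tol=1e-6)
print(sol.success, sol.message)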
I'm new to Python and trying to create basic trajectory plots of a 2D system. This is what I'm working with right now. It plots forward trajectories for a number of points.
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy.integrate import odeint
def dx_dt(x, t):
    return [x[0]*2, x[1]]

xmin = ymin = -10
xmax = ymax = 10
plt.figure()
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
ts = np.linspace(0, 1, 20)
ic = np.linspace(xmin, xmax, 11)
for r in ic:
    for s in ic:
        x0 = [r, s]
        xs = sp.integrate.odeint(dx_dt, x0, ts)
        plt.plot(xs[:,0], xs[:,1], "g")
plt.savefig("flowlines.jpg")
The issue is that odeint does not work with certain systems; instead it prints the following warning half a dozen times
ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
and subsequently crashing.
I've been playing around with the specific systems of differential equations, and I think I've found the specific problem that's arising. Something like
def dx_dt(x, t):
    return [x[1], x[0]**2 + x[1]]
works, while
def dx_dt(x, t):
    return [x[0], x[1]**2 + x[0]]
does not. It seems that it does not like any specific initial conditions where x' =f(x,y) if f(x,y) contains anything involving x other than addition and scalar multiplication. It can do anything it wants with y. Hence, the first one runs just fine, but the second one fails, because y' = y^2+x involves a power of y.
I have no idea how to proceed from here.
Your suspicion that the behavior of the solver depends on the algebraic form of the equations is reasonable. It is likely that the systems for which you are experiencing a problem are ones where the solution "blows up" in finite time. That is, one or both components go to infinity as t approaches a finite value. This can easily happen when the right-hand side depends on powers of the variables.
A simple one-dimensional system that exhibits this behavior is
dx/dt = x**2, x(0) = x0
which has the explicit solution
x(t) = x0/(1 - x0*t)
You can verify that it solves the differential equation and initial condition, at least in a neighborhood of t=0. As t → 1/x0 (assuming x0 ≠ 0), the solution diverges to infinity. For example, here's a plot of the solution with x(0) = 0.5:
The red vertical dashed line indicates where the solution blows up.
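Since the figure itself is not reproduced here, a short sketch of how such a plot can be generated from the analytic solution (purely illustrative):

import numpy as np
import matplotlib.pyplot as plt

x0 = 0.5
t = np.linspace(0, 1.95, 400)                  # stop just short of the blow-up time t = 1/x0 = 2
plt.plot(t, x0/(1 - x0*t), label="x(t) = x0/(1 - x0*t)")
plt.axvline(1/x0, color="red", linestyle="--", label="blow-up at t = 1/x0")
plt.xlabel("t")
plt.ylabel("x(t)")
plt.legend()
plt.show()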
Equations like this violate the mathematical assumptions on which ODE solvers such as odeint are based. Solvers will generally fail when they encounter such a point, typically because they vainly try to take smaller and smaller steps to bring down the estimated local error as they approach the point where the solution blows up:
In [54]: from scipy.integrate import odeint
In [55]: def sys(x, t):
...: return x**2
...:
In [56]: odeint(sys, 0.5, [0, 2.5])
[...]/lib/python3.9/site-packages/scipy/integrate/odepack.py:247:
ODEintWarning: Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
Out[56]:
array([[5.00000000e-01],
[4.82437777e+08]])
The solution is the same as the punch line to the old joke:
Patient: Doctor, it hurts when I do this...
Doctor: Then don't do that!
I am studying how to solve differential equations in Python with odeint, and as a test I am trying to solve the following ODE (the example comes from https://apmonitor.com/pdc/index.php/Main/SolveDifferentialEquations):
# first import the necessary libraries
import numpy as np
from scipy.integrate import odeint
# function that returns dy/dt
def model(y, t):
    k = 0.3
    dydt = -k*y
    return dydt

# Initial condition
y0 = 5.0

# Time points
t = np.linspace(0, 20)

# Solve ODE
def y(t):
    return odeint(model, y0, t)
If I plot the results with matplotlib, or more simply run print(y(t)), then this works perfectly! But if I try to compute the value of the function for a fixed value of time, for instance t1 = t[2] ( = 0.8163 ), I get the error
t1 = t[2]
print(y(t1))
ValueError("diff requires input that is at least one dimensional")
Why can I only compute y(t) for the whole interval t = np.linspace(0,20), but not for a single number in this interval? Is there some way to fix this?
Thank you very much.
The odeint function solves your differential equation numerically. To do that you need to specify the points where you want your solution to be evaluated. These points also influence the accuracy of the solution. Generally, the more points you give to odeint the better the result (when solving over the same time interval).
This means that there is no way for odeint to know what accuracy you want if you only supply a single time at which you want to evaluate the function. Instead you always need to supply a range of times (like you did with np.linspace). odeint then returns the value of the solution at all these times.
y(t) is an array of values of your solution and the third value in the array corresponds to the solution at the third time in t:
The solution evaluated at t[0] is y(t)[0] = y0
The solution evaluated at t[1] is y(t)[1]
The solution evaluated at t[2] is y(t)[2]
...
So instead of
print(y(t[2]))
you need to use
print(y(t)[2])
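If you really do want the solution at an arbitrary time that is not one of the sample points, one option (a sketch, not the only way) is to interpolate the array that odeint returns:

import numpy as np
from scipy.integrate import odeint

def model(y, t):
    return -0.3*y

t = np.linspace(0, 20)
ys = odeint(model, 5.0, t)[:, 0]   # flatten the (50, 1) result to a 1-D array

print(ys[2])                       # the solution at t[2], as explained above
print(np.interp(0.5, t, ys))       # interpolated value at the arbitrary time t = 0.5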
I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
dt/100 = 1/(y * (1-y)) dy
The lhs integrates trivially to t/100; the rhs is slightly more complicated. We can always write a product of two simple fractions as a sum of those fractions times some constants (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values of A and B can be worked out by putting the rhs over a common denominator and comparing the constant and first-order y terms on both sides. In this case it is simple: A = B = 1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second can be integrated with the change of variables u = 1-y to give -ln(1-y). Our integrated equation therefore looks like:
t/100 + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
t/100 + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial condition (note that if y starts at exactly 0 or 1, dy/dt = 0 and y never changes, so those degenerate cases are excluded). Plug in the values of y and t at the start:
0/100 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Note that because t enters only through 100 * ln( y/(1-y) ), y will reach 0.8 within a moderate range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, and then you can plot the function as well.
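As a quick numeric check of this result (assuming, for illustration, y(0) = 0.5):

import numpy as np

y0, y_target = 0.5, 0.8
C = np.log(y0/(1 - y0))                          # from the initial condition
t = 100*(np.log(y_target/(1 - y_target)) - C)    # solve t/100 + C = ln(y/(1-y)) for t
print(t)                                         # about 138.6, well inside 0 < t < 3000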
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to try a numerical approach. Initially we only know the value of the function at zero time, y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
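To make that concrete, here is a bare-bones Euler integration of dy/dt = 0.01*y*(1-y) that stops when y crosses 0.8 and interpolates the crossing time (assuming y(0) = 0.5 for illustration):

def f(y):
    return 0.01*y*(1 - y)

t, y, dt = 0.0, 0.5, 0.01
while y < 0.8 and t < 3000:
    t_prev, y_prev = t, y
    y = y + f(y)*dt            # Euler step: y(t+dt) ~ y(t) + dy/dt * dt
    t = t + dt

# linear interpolation between the last two steps for the crossing time
t_cross = t_prev + (0.8 - y_prev)/(y - y_prev)*dt
print(t_cross)                 # close to the analytic value of about 138.6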
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error arises at each integration step, and over time these errors build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations of the gradient. Check out, for example, the Runge-Kutta methods, a family of integrators that remove progressively higher orders of the error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately, of course, somebody else has done the hard work here, and you don't have to worry too much about problems like numerical stability or have an in-depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straight away. For instance
from scipy.integrate import ode
def deriv(t, y):
    return 0.01 * y * (1 - y)
my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)
t = 0.1 # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)[0]   # integrate returns an array; take the scalar value
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint
ts = np.arange(0, 3000, 1)  # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.01])  # initial value (must not be exactly 0 or 1, or y stays constant)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts and ys match). (This may not work first time because I am constructing it from memory.)
This might involve using the scipy interpolation functions on the ys array, so that you can read off the time t at which y reaches 0.8.
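For example, a self-contained sketch (the solution grows monotonically from y(0) = 0.01 towards 1, so t can be interpolated as a function of y):

import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)
ys = odeint(lambda y, t: 0.01*y*(1 - y), 0.01, ts)[:, 0]

t_cross = np.interp(0.8, ys, ts)   # time at which y first reaches 0.8
print(t_cross)                     # roughly 598 for y(0) = 0.01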
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint documentation on the SciPy website has examples for systems such as coupled springs, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but they have not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
I have a university project in which we are asked to simulate a satellite approach to Mars using ODE's and SciPy's odeint function.
I managed to simulate it in 2D by turning the second-order ODE into two first-order ODEs. However, I am stuck on the time limitation: because my code uses SI units and therefore runs in seconds, the linspace range does not even cover one complete orbit.
I tried converting the variables and constants to hours and kilometers but now the code keeps giving errors.
I followed this method:
http://bulldog2.redlands.edu/facultyfolder/deweerd/tutorials/Tutorial-ODEs.pdf
And the code is:
import numpy
import scipy
from numpy import array, linspace
from scipy.integrate import odeint

def deriv_x(x, t):
    return array([ x[1], -55.3E10/(x[0])**2 ])  # 55.3E10 is the value for G*M in km and hours

xinit = array([0, 5251])  # this is the velocity for an orbit of period 24 hours
t = linspace(0, 24.0, 100)
x = odeint(deriv_x, xinit, t)

def deriv_y(y, t):
    return array([ y[1], -55.3E10/(y[0])**2 ])

yinit = array([20056, 0])  # this is the radius for an orbit of period 24 hours
t = linspace(0, 24.0, 100)
y = odeint(deriv_y, yinit, t)
I don't know how to copy/paste the error output from PyLab, so I took a screenshot of the error. A second error occurs with t=linspace(0.01,24.0,100) and xinit=array([0.001,5251]).
If anyone has any suggestions on how to improve the code I will be very grateful.
Thank you very much!
odeint(deriv_x, xinit, t)
uses xinit as the initial value for x. This value of x is used when evaluating deriv_x.
deriv_x(xinit, t)
raises a divide-by-zero error since x[0] = xinit[0] equals 0, and deriv_x divides by x[0].
It looks like you are trying to solve the second-order ODE
r'' = - C * rhat / |r|**2
where rhat is the unit vector in the radial direction.
You appear to be separating the x and y coordinates into separate second-order ODES:
x'' = - C / x**2 and y'' = - C / y**2
with initial conditions x0 = 0 and y0 = 20056.
This is very problematic. Among the problems is that when x0 = 0, x'' blows up. The original second-order ODE for r'' does not have this problem: the denominator does not vanish when x0 = 0, because y0 = 20056, and so r0 = (x0**2+y0**2)**(1/2) is far from zero.
Conclusion: Your method of separating the r'' ODE into two ODEs for x'' and y'' is incorrect.
Try searching for a different way to solve the r'' ODE.
Hint:
What if your "state" vector is z = [x, y, x', y']?
Can you write down a first-order ODE for z' in terms of x, y, x' and y'?
Can you solve it with one call to integrate.odeint? (A minimal sketch of this idea follows below.)
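A minimal sketch of that idea, using the GM value (55.3E10 km^3/h^2) and the 24-hour-orbit numbers from the question; variable names are illustrative:

import numpy as np
from scipy.integrate import odeint

GM = 55.3E10  # gravitational parameter in km^3/h^2, taken from the question

def deriv(z, t):
    x, y, vx, vy = z
    r3 = (x**2 + y**2)**1.5            # |r|**3, so that r'' = -GM * r / |r|**3
    return [vx, vy, -GM*x/r3, -GM*y/r3]

z0 = [20056.0, 0.0, 0.0, 5251.0]       # start on the x-axis with velocity along y
t = np.linspace(0, 24.0, 1000)         # one 24-hour period
z = odeint(deriv, z0, t)
x, y = z[:, 0], z[:, 1]                # positions, ready for plotting the orbit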
The leastsq function in SciPy fits a curve to some data, and this method assumes that the Y values in the data depend on some X argument. It minimizes the distance between the curve and the data points along the Y axis only (dy).
But what if I need to minimize the distance along both axes (dy and dx)?
Is there some way to implement this calculation?
Here is a code sample using the one-axis calculation:
import numpy as np
from scipy.optimize import leastsq

xData = [some data...]
yData = [some data...]

def mFunc(p, x, y):
    return y - (p[0]*x**p[1])  # takes into account only the y axis

plsq, pcov = leastsq(mFunc, [1, 1], args=(xData, yData))
print(plsq)
I recently tried the scipy.odr library and it returns the proper results only for linear functions. For other functions like y = a*x^b it returns wrong results. This is how I use it:
from scipy.odr import Model, Data, ODR

def f(p, x):
    return p[0]*x**p[1]

myModel = Model(f)
myData = Data(xData, yData)
myOdr = ODR(myData, myModel, beta0=[1, 1])
myOdr.set_job(fit_type=0)  # if fit_type=2 is set, it returns the same as leastsq
out = myOdr.run()
out.pprint()
This returns wrong results, not what I want, and for some input data the results are not even close to the real values.
Maybe there is some special way of using it; what am I doing wrong?
I've found the solution. SciPy's ODRPACK works normally, but it needs a good initial guess to give correct results. So I divided the process into two steps.
First step: find the initial guess using the ordinary least squares method.
Second step: substitute this initial guess into ODR as the beta0 parameter.
And it works very well with acceptable speed.
Thank you guys, your advice directed me to the right solution.
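For illustration, a sketch of that two-step procedure with synthetic data for the model y = a*x**b (curve_fit is used here for the ordinary least squares step; all data and names are made up):

import numpy as np
from scipy.optimize import curve_fit
from scipy.odr import Model, RealData, ODR

def power_law(x, a, b):
    return a * x**b

xData = np.linspace(1, 10, 50)
yData = power_law(xData, 2.5, 1.3) + np.random.normal(scale=0.5, size=50)

# Step 1: ordinary least squares fit to get a starting point
p0, _ = curve_fit(power_law, xData, yData, p0=[1, 1])

# Step 2: orthogonal distance regression seeded with that estimate
def odr_model(p, x):
    return p[0] * x**p[1]

out = ODR(RealData(xData, yData), Model(odr_model), beta0=p0).run()
print(out.beta)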
scipy.odr implements the Orthogonal Distance Regression. See the instructions for basic use in the docstring and documentation.
If/when you are able to invert the function described by p, you may just include x - pinverted(y) in mFunc, combined I guess as sqrt(dx^2 + dy^2), so (pseudo code)
return sqrt( (y - p[0]*x**p[1])**2 + (x - pinverted(y))**2 )
for example, for
y = k*x + m, with p = [m, k],
the inverse is x = (y - m)/k, i.e. pinv = [-m/k, 1/k], and the residual becomes
return sqrt( (y - (p[0] + x*p[1]))**2 + (x - (pinv[0] + y*pinv[1]))**2 )
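A concrete version of this pseudo code for the linear case, assuming made-up data, might look like:

import numpy as np
from scipy.optimize import leastsq

xData = np.linspace(0, 10, 30)
yData = 2.0*xData + 1.0 + np.random.normal(scale=0.3, size=30)

def residual(p, x, y):
    m, k = p
    dy = y - (m + k*x)        # vertical distance to the line
    dx = x - (y - m)/k        # horizontal distance via the inverted function
    return np.sqrt(dy**2 + dx**2)

p_fit, ier = leastsq(residual, [1.0, 1.0], args=(xData, yData))
print(p_fit)                  # [m, k]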
But what you ask for is problematic in some cases. For example, if a polynomial (or your x^b) curve has a minimum value ym at some xm and you have a data point whose y value is lower than ym, what kind of distance do you want to return? There is not always a solution.
You can use the ONLS package in R.