Using scipy solve_ivp for equation with initial conditions - python

I am having a hard time getting used to scipy's solve_ivp. So let's say we have an ordinary linear differential equation of second order, a spring for example (y'' = -k**2*y). The conditions are: at time 0 the spring is at position 0 and its speed is v0. How can I use these initial conditions to solve it?
y'' = -k**2*y  # First this needs to be rewritten as a first-order system

import numpy as np
from scipy.integrate import solve_ivp

def function1(t, y, k):  # original function
    return y[1], -k**2*y[1]

function2 = lambda t, y: function1(t, y, k=10)  # function with only t and y

t = np.linspace(0, 100, 1000)
solution = solve_ivp(function2, (0, 100), (0, 0), t_eval=t)
solution.y[0]

If you want to encode
y'' = -k**2*y
as a first-order system, you should use
def function1(t, y, k):  # original function
    return y[1], -k**2*y[0]
The code in the question encodes y'' = -k**2*y'.
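For completeness, here is a minimal runnable sketch of the corrected setup, assuming k = 10 as in the question and an arbitrarily chosen v0 = 1 (the args keyword of solve_ivp needs SciPy >= 1.4):

import numpy as np
from scipy.integrate import solve_ivp

def spring(t, y, k):
    # y[0] is the position, y[1] the velocity
    return [y[1], -k**2 * y[0]]

k, v0 = 10, 1.0    # v0 = 1 chosen arbitrarily for illustration
t = np.linspace(0, 10, 1000)
sol = solve_ivp(spring, (0, 10), [0, v0], args=(k,), t_eval=t)
# sol.y[0] is the position; analytically it is (v0/k)*sin(k*t)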

Related

Solving a boundary value problem (diffusion-reaction equation) with scipy solve_bvp

I'm struggling to solve the following 2nd order boundary value problem:
y'' + 2/x*y' + k**2.0*F(y) = 0
y(x=1)=1, y'(x=0)=0
F(y) = -y or F(y) = -y*exp(AB*(1-y)/(1+B*(1-y)))
I somehow fail to set the boundary conditions right. I defined the function for F(y)=y and boundary conditions the following way:
def fun(x, y, p):
    k = p[0]
    return np.vstack((y[1], -2.0/x*y[1] + k**2.0*y[0]))

def bc(ya, yb, p):
    return np.array([ya[0], yb[0], ya[1]])

y[0,:] = 1
y[1,0] = 0
sol = solve_bvp(fun, bc, x, y, p=[40])
The results I get are definitely wrong, and changing the initial conditions doesn't make things better. I think my problem is somehow related to the zero-gradient boundary condition at x=0. Does anybody know what I'm doing wrong?
EDIT:
Here is an MWE, which should give a constant value of 1 for k=0.01; for k=5, the value at x=0 should be approx. 0.06:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_bvp

def fun(x, y, p):
    k = p[0]
    return np.vstack((y[1], -2.0/x*y[1] + k**2.0*y[0]))

def bc(ya, yb, p):
    return np.array([ya[0], yb[0]-1.0, yb[1]])

x = np.linspace(1e-3, 1, 100)
y = np.zeros((2, x.size))
y[0,:] = 1
sol = solve_bvp(fun, bc, x, y, p=[1000])
x_plot = np.linspace(0, 1, 100)
y_plot = sol.sol(x_plot)[0]
plt.figure()
plt.plot(x_plot, y_plot)
Consider the case F(y)=y. Then it is easy to see that the basis solutions of this linear ODE are sin(k*x)/x and cos(k*x)/x. Similarly, for F(y)=-y one gets sinh(k*x)/x and cosh(k*x)/x. This means that most solutions have a singularity at x=0. Such a singularity is almost impossible for standard ODE solvers to handle out of the box: one has to help the solver at the singularity, while the normal procedure applies again at some distance from it.
What you can do is to analyze the situation at x=0 and move away a little bit.
Taking the limit x -> 0 in the ODE (using y'(0)=0, so that y'(x)/x -> y''(0) as a limit of difference quotients) gives
y''(0) + 2*y''(0) + k**2*F(y(0)) = 0,
that is, y''(0) = -k**2/3*F(y(0)), which determines the quadratic Taylor polynomial. Thus solve the problem on [a, 1] with the ODE solver, using the continuation to the singularity y(x) = y(0) - k**2/6*F(y(0))*x**2 on [0, a].
The boundary condition at x=a is easiest to establish by treating y0=y(0) as a parameter. The ODE and BC functions then are
def ode(x, y, p):
    return [y[1], -2*y[1]/x - k**2*F(y[0])]

def bc(ya, yb, p):
    y0 = p[0]
    y2 = -k**2*F(y0)/6    # y''(0)/2 from the Taylor polynomial above
    return [ya[0] - y0 - y2*a**2, ya[1] - 2*y2*a, yb[0] - 1]
In the cases discussed in the question, this gives
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_bvp

a = 1e-2

def F(y): return -y

for k in [0.01, 5]:
    res = solve_bvp(ode, bc, [a, 1], [[1, 1], [0, 0]], p=[1], tol=1e-5)
    print(f'k={k}: {res.message}, y0={res.p[0]}, theory: {k/np.sinh(k)}')
    if res.success:
        y0 = res.p[0]
        x = np.linspace(a, 1, 61)
        plt.plot(x, res.sol(x)[0])
        plt.plot([0], [y0], 'o', res.x, res.y[0], '+', ms=4)
        plt.title(f'k={k}')
        plt.grid()
        plt.show()
with the result
k=0.01: The algorithm converged to the desired accuracy., y0=0.9999833335277726, theory: 0.9999833335277757
k=5: The algorithm converged to the desired accuracy., y0=0.06738256929427147, theory: 0.06738252915294544

Why does decreasing the range of np.linspace increase the accuracy of numerical integration?

I was reading a tutorial on simple numerical integration (https://helloacm.com/how-to-compute-numerical-integration-in-numpy-python/), which seems to suggest that decreasing the range of the x values used in your function returns a more accurate numerical answer. The code they use is
import numpy as np

def integrate(f, a, b, N):
    x = np.linspace(a, b, N)
    fx = f(x)
    area = np.sum(fx)*(b-a)/N
    return area

integrate(np.sin, 0, np.pi/2, 100)
This returns a value of 0.99783321217729803.
However, when they modify the integration method to:

def integrate(f, a, b, N):
    x = np.linspace(a+(b-a)/(2*N), b-(b-a)/(2*N), N)
    fx = f(x)
    area = np.sum(fx)*(b-a)/N
    return area

integrate(np.sin, 0, np.pi/2, 100)
This returns a more accurate value of 1.0000102809119051. Why is this the case?
Two things:
The step width in your first integrate is not (b-a) / N, but (b-a) / (N-1).
In your first method, the error is dominated by the half-bar overshoot at the left and right ends, i.e., the terms (b-a)/(N-1)/2 * f(a) and (b-a)/(N-1)/2 * f(b). If you subtract those two, you get an accuracy comparable to your second method.
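A quick sketch of that correction (the name integrate_trap is just for illustration): keeping the true step width (b-a)/(N-1) and subtracting the two half-bars turns the sum into the trapezoidal rule:

import numpy as np

def integrate_trap(f, a, b, N):
    x = np.linspace(a, b, N)
    fx = f(x)
    h = (b - a) / (N - 1)              # the actual spacing of linspace
    # drop the half-bar overshoot at each end
    return (np.sum(fx) - 0.5 * (fx[0] + fx[-1])) * h

print(integrate_trap(np.sin, 0, np.pi/2, 100))   # close to 1, comparable to the midpoint version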

solving differential equation with step function

I am trying to solve this differential equation as part of my assignment. I am not able to understand how I can put the condition for u in the code. In the code shown below, I arbitrarily set u = 5.
2*dx(t)/dt = -x(t) + u(t)
5*dy(t)/dt = -y(t) + x(t)
u = 2*S(t-5)
x(0) = 0
y(0) = 0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
def model(x, t, u):
    dxdt = (-x + u)/2
    return dxdt

def model2(y, x, t):
    dydt = -(y + x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
y = odeint(model2, y0, t, args=(u,))
plt.plot(t, x, 'r-')
plt.plot(t, y, 'b*')
plt.show()
I do not know the SciPy Library very well, but regarding the example in the documentation I would try something like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT regarding the input u as step function, where K
    is the height of the step and PT is the delay of the step.
    """
    x1, x2 = x                 # Split the state into two variables
    u = K if t >= PT else 0    # This is the system input
    # Here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx

x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of it. You can define a step function with a simple lambda and then simply capture it from the outer scope without even passing it to your function. Because sometimes that won't be the case, we'll be explicit and pass it.
Your next problem is the order of arguments in the function to integrate. As per the docs, odeint expects the signature (y, t, ...): first the state, then the time, then the other args arguments. So for the first part we get:
u = lambda t: 2 if t > 5 else 0

def model(x, t, u):
    dxdt = (-x + u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
Moving to the next part, the trouble is that you can't feed x as an arg to model2, because it's a vector of values of x(t) at particular times, so y+x doesn't make sense in the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires interpolating the x values at the specific time values the solver asks for (which scipy can handle, no problem):
from scipy.interpolate import interp1d

# flatten because the shapes are (N, 1) and (N,); extrapolate because
# odeint will evaluate slightly out of bounds
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y, t, x):
    dydt = (-y + x(t))/5   # (-y + x)/5, matching 5*dy/dt = -y + x
    return dydt

y = odeint(model2, y0, t, args=(xfunc,))
With that you get the expected delayed step responses for x and y.
Sven's answer is more idiomatic for vector programming like scipy/numpy, but I hope my answer provides a clearer path from what you know already to a working solution.
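For convenience, here are the pieces above assembled into one self-contained script (same logic, just put together with the required imports):

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.interpolate import interp1d

u = lambda t: 2 if t > 5 else 0      # the step input 2*S(t-5)

def model(x, t, u):
    return (-x + u(t))/2

def model2(y, t, x):
    return (-y + x(t))/5

t = np.linspace(0, 40, 200)
x = odeint(model, 0, t, args=(u,))
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")
y = odeint(model2, 0, t, args=(xfunc,))

plt.plot(t, x, 'r-', t, y, 'b*')
plt.show()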

What am I doing wrong in this Dopri5 implementation

I am totally new to Python and am trying to integrate the following ODE system:

dx/dt = -2*x - y**2
dy/dt = -y - x**2

This results in an array in which everything is 0, though. What am I doing wrong? It is mostly copied code, and with another, uncoupled ODE it worked fine.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import ode
def fun(t, z):
    """
    Right-hand side of the differential equations
    dx/dt = -2*x - y**2
    dy/dt = -y - x**2
    """
    x, y = z
    f = [-2*x - y**2, -y - x**2]
    return f
# Create an `ode` instance to solve the system of differential
# equations defined by `fun`, and set the solver method to 'dop853'.
solver = ode(fun)
solver.set_integrator('dopri5')
# Set the initial value z(0) = z0.
t0 = 0.0
z0 = [0, 0]
solver.set_initial_value(z0, t0)
# Create the array `t` of time values at which to compute
# the solution, and create an array to hold the solution.
# Put the initial value in the solution array.
t1 = 2.5
N = 75
t = np.linspace(t0, t1, N)
sol = np.empty((N, 2))
sol[0] = z0
# Repeatedly call the `integrate` method to advance the
# solution to time t[k], and save the solution in sol[k].
k = 1
while solver.successful() and solver.t < t1:
    solver.integrate(t[k])
    sol[k] = solver.y
    k += 1
# Plot the solution...
plt.plot(t, sol[:,0], label='x')
plt.plot(t, sol[:,1], label='y')
plt.xlabel('t')
plt.grid(True)
plt.legend()
plt.show()
Your initial state (z0) is [0, 0]. The time derivative (fun) at this initial state is also [0, 0]. Hence, for this initial condition, [0, 0] is the correct solution for all times.
If you change your initial condition to some other value, you should observe a more interesting result.
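A minimal check of this (the nonzero start [0.1, 0.2] is chosen arbitrarily):

from scipy.integrate import ode

def fun(t, z):
    x, y = z
    return [-2*x - y**2, -y - x**2]

print(fun(0.0, [0.0, 0.0]))      # zero derivative: the origin is an equilibrium

solver = ode(fun)
solver.set_integrator('dopri5')
solver.set_initial_value([0.1, 0.2], 0.0)
print(solver.integrate(2.5))     # now a nontrivial state at t = 2.5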

Solving a system of odes (with changing constant!) using scipy.integrate.odeint?

I currently have a system of odes with a time-dependent constant. E.g.
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x*y + c*y - z
    return [dx_dt, dy_dt, dz_dt]
The constants are "a", "b" and "c". I currently have a list of "a" values, one per time step, which I would like to insert at every time step when using the scipy ODE solver... is this possible?
Thanks!
Yes, this is possible. In the case where a is constant, I guess you called scipy.integrate.odeint(fun, u0, t, args) where fun is defined as in your question, u0 = [x0, y0, z0] is the initial condition, t is a sequence of time points for which to solve for the ODE and args = (a, b, c) are the extra arguments to pass to fun.
In the case where a depends on time, you simply have to reconsider a as a function, for example (given a constant a0):
def a(t):
return a0 * t
Then you will have to modify fun, which computes the derivative at each time step, to take the previous change into account:
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a(t) * x + y * z   # A change on this line: a -> a(t)
    dy_dt = b * (y - z)
    dz_dt = -x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]
Finally, note that u0, t and args remain unchanged, and you can again call scipy.integrate.odeint(fun, u0, t, args).
A word about the correctness of this approach: the accuracy of the numerical integration is affected. I don't know precisely how (no theoretical guarantees), but here is a simple example which works:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import scipy.integrate

tmax = 10.0

def a(t):
    if t < tmax / 2.0:
        return ((tmax / 2.0) - t) / (tmax / 2.0)
    else:
        return 1.0

def func(x, t, a):
    return -(x - a(t))

x0 = 0.8
t = np.linspace(0.0, tmax, 1000)
args = (a,)
y = sp.integrate.odeint(func, x0, t, args)
fig = plt.figure()
ax = fig.add_subplot(111)
h1, = ax.plot(t, y)
h2, = ax.plot(t, [a(s) for s in t])
ax.legend([h1, h2], ["y", "a"])
ax.set_xlabel("t")
ax.grid()
plt.show()
I hope this helps.
No, that is not possible in the literal sense of
"I currently have a list of "a"s for every time-step which I would like to insert at every time-step"
as the solver has adaptive step size control, that is, it will use internal time steps that you have no control over, and each time step uses several evaluations of the function. Thus there is no connection between the solver time steps and the data time steps.
In the extended sense that the given data defines a piecewise constant step function however, there are several approaches to get to a solution.
You can integrate from jump point to jump point, using the ODE function with the constant parameter for this time segment. After that use numpy array operations like concatenate to assemble the full solution.
You can use interpolation functions like numpy.interp or scipy.interpolate.interp1d. The first gives a piecewise linear interpolation, which may not be desired here. The second returns a function object that can be configured as a "zero-order hold", i.e. a piecewise constant step function (see the sketch after this list).
You could implement your own logic to go from the time t to the correct values of those parameters. This mostly applies if there is some structure to the data, for instance, if they have the form f(int(t/h)).
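A minimal sketch of the interpolation approach, with made-up sample data for a (kind='previous' in interp1d gives the zero-order hold, assuming a SciPy version that supports it):

import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import odeint

# made-up per-time-step values for the parameter a
t_data = np.linspace(0.0, 10.0, 11)
a_data = np.linspace(1.0, 0.0, 11)

# zero-order hold: a(t) returns the last tabulated value before t
a_func = interp1d(t_data, a_data, kind='previous',
                  bounds_error=False, fill_value=(a_data[0], a_data[-1]))

def func(x, t, a):
    return -(x - a(t))

sol = odeint(func, 0.8, np.linspace(0.0, 10.0, 200), args=(a_func,))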
Note that the approximation order of the numerical integration is not only bounded by the order of the RK (solve_ivp) or multi-step (odeint) method, but also by the differentiability order of (parts of) the differential equation. If the ODE is much less smooth than the order of the method, the implicit assumptions of the step size control mechanism are violated, which may result in a very small step size and a huge number of integration steps.
I also encountered a similar problem. In my case, the parameters a, b, and c are not a direct function of time but are determined by x, y, and z at that time. So I have to get x, y, z at time t, and calculate a, b, c for the integration of x, y, z at t+dt. It turns out that if I change the dt value, the whole integration result changes dramatically, even to something unreasonable.
