I'm struggling to solve the following 2nd order boundary value problem:
y'' + 2/x*y' + k**2.0*F(y) = 0
y(x=1)=1, y'(x=0)=0
F(y) = -y or F(y) = -y*exp(AB*(1-y)/(1+B*(1-y)))
I somehow fail to set the boundary conditions right. I defined the function for F(y)=y and boundary conditions the following way:
def fun(x, y, p):
    k = p[0]
    return np.vstack((y[1], -2.0/x*y[1] + k**2.0*y[0]))

def bc(ya, yb, p):
    return np.array([ya[0], yb[0], ya[1]])

y[0, :] = 1
y[1, 0] = 0

sol = solve_bvp(fun, bc, x, y, p=[40])
The results I get are definitely wrong, and changing the initial conditions doesn't make things better. I think my problem is somehow related to the zero-gradient boundary condition at x=0. Does anybody know what I'm doing wrong?
EDIT:
Here is a MWE, which should give a constant value of 1 for k=0.01, but for k=5 the value at x=0 should be approximately 0.06:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_bvp

def fun(x, y, p):
    k = p[0]
    return np.vstack((y[1], -2.0/x*y[1] + k**2.0*y[0]))

def bc(ya, yb, p):
    return np.array([ya[0], yb[0] - 1.0, yb[1]])

x = np.linspace(1e-3, 1, 100)
y = np.zeros((2, x.size))
y[0, :] = 1

sol = solve_bvp(fun, bc, x, y, p=[1000])

x_plot = np.linspace(0, 1, 100)
y_plot = sol.sol(x_plot)[0]
plt.figure()
plt.plot(x_plot, y_plot)
Consider the case F(y)=y. Then it is easy to see that the basis solutions of this linear ODE are sin(k*x)/x and cos(k*x)/x. Similarly, for F(y)=-y one gets sinh(k*x)/x and cosh(k*x)/x. This means that most solutions have a singularity at x=0. Such a singularity is almost impossible to handle out-of-the-box for standard ODE solvers; one has to help the solver at the singularity, while the normal procedure applies again at some distance from it.
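A quick way to convince yourself of this (a sketch using sympy, which is not needed for the actual computation) is to check that sin(k*x)/x indeed solves the case F(y)=y:

# Sanity-check sketch: verify that y = sin(k*x)/x satisfies y'' + 2/x*y' + k**2*y = 0,
# i.e. the case F(y) = y. Requires sympy.
import sympy as sp

x, k = sp.symbols('x k', positive=True)
y = sp.sin(k*x)/x
residual = sp.diff(y, x, 2) + 2/x*sp.diff(y, x) + k**2*y
print(sp.simplify(residual))   # prints 0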
What you can do is analyze the situation at x=0 and start the numerical solution a little bit away from it.
Since y'(0)=0, the difference quotient y'(x)/x = (y'(x)-y'(0))/x tends to y''(0) as x -> 0, so taking the limit in the ODE gives
y''(0) + 2*y''(0) + k**2*F(y(0)) = 0,
so y''(0) = -k**2*F(y(0))/3, which yields a quadratic Taylor polynomial for y near 0. Thus solve the problem on [a, 1] with the ODE solver, using the continuation to the singularity y(x) = y(0) - k**2/6*F(y(0))*x**2 on [0, a].
The boundary conditions at x=a are easiest to establish by treating y0=y(0) as an unknown parameter. The ODE and BC functions then are
def ode(x, y, p):
    return [y[1], -2*y[1]/x - k**2*F(y[0])]

def bc(ya, yb, p):
    y0 = p[0]                       # the unknown parameter y0 = y(0)
    y2 = -k**2*F(y0)/6              # y''(0)/2 from the Taylor polynomial above
    return [ya[0] - y0 - y2*a**2, ya[1] - 2*y2*a, yb[0] - 1]
In the cases discussed in the question, this gives
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_bvp

a = 1e-2

def F(y): return -y

for k in [0.01, 5]:
    res = solve_bvp(ode, bc, [a, 1], [[1, 1], [0, 0]], p=[1], tol=1e-5)
    print(f'k={k}: {res.message}, y0={res.p[0]}, theory: {k/np.sinh(k)}')
    if res.success:
        y0 = res.p[0]
        x = np.linspace(a, 1, 61)
        plt.plot(x, res.sol(x)[0])
        plt.plot([0], [y0], 'o', res.x, res.y[0], '+', ms=4)
        plt.title(f'k={k}'); plt.grid(); plt.show()
with the result
k=0.01: The algorithm converged to the desired accuracy., y0=0.9999833335277726, theory: 0.9999833335277757
k=5: The algorithm converged to the desired accuracy., y0=0.06738256929427147, theory: 0.06738252915294544
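If you also want values on [0, a], here is a sketch of how to stitch the quadratic continuation to the numerical solution; it reuses a, k, F, y0 and res from the last loop iteration above:

# Sketch: evaluate on all of [0, 1] by combining the Taylor continuation on [0, a]
# with the solve_bvp interpolant on [a, 1].
x_full = np.linspace(0, 1, 201)
y_full = np.where(x_full < a,
                  y0 - k**2*F(y0)/6*x_full**2,          # quadratic continuation near x=0
                  res.sol(np.clip(x_full, a, 1))[0])    # numerical solution on [a, 1]
plt.plot(x_full, y_full)
plt.show()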
Related
I was trying to solve the attitude kinematics ODEs in Python using the solve_ivp function, but the problem is that one of the parameters, the angular velocity omega, changes over time, and I would like to take this into account. I have previously computed omega from another ODE, and now I would like to use that result as input for this ODE.
This is what I did:
def fun2(time, euler):
    omegax = 5.2928e-10; omegay = -2.5347e-11; omegaz = 2.6609e-6
    dot1 = (omegax*np.sin(euler[2]) + omegay*np.cos(euler[2]))/np.sin(euler[1])
    dot2 = omegax*np.cos(euler[2]) - omegay*np.sin(euler[2])
    dot3 = omegaz - (omegax*np.sin(euler[2]) + omegay*np.cos(euler[2]))/np.tan(euler[1])
    return np.array([dot1, dot2, dot3])

angles = integrate.solve_ivp(fun2, tspan, euler0, t_eval=t, method='RK45',
                             dense_output=True, rtol=1e-13, atol=1e-22)
Here I used a constant omega to run the code, but I would like it to change over time. The omega I obtained from the other ODE (also using solve_ivp) is in the form of a matrix, with omegax, omegay and omegaz in the 1st, 2nd and 3rd columns respectively, and 1000000 rows.
One thing I tried was to solve the previous ODE inside fun2, like this:
def fun2(time, euler):
    x_t = integrate.solve_ivp(fun, tspan, omega0, t_eval=t, method='RK45',
                              dense_output=True, rtol=1e-13, atol=1e-22)
    omegax = x_t.y[0]; omegay = x_t.y[1]; omegaz = x_t.y[2]
    dot1 = (omegax*np.sin(euler[2]) + omegay*np.cos(euler[2]))/np.sin(euler[1])
    dot2 = omegax*np.cos(euler[2]) - omegay*np.sin(euler[2])
    dot3 = omegaz - (omegax*np.sin(euler[2]) + omegay*np.cos(euler[2]))/np.tan(euler[1])
    return np.array([dot1, dot2, dot3])

angles = integrate.solve_ivp(fun2, tspan, euler0, t_eval=t, method='RK45',
                             dense_output=True, rtol=1e-13, atol=1e-22)
Unfortunately it did not work; I got the error message "operands could not be broadcast together with shapes (3,1000000) (3,)" and now I'm stuck. Can anyone help me, please?
You should solve the coupled system as a coupled system:
def fun2(t, euleromega):
    euler, omega = euleromega[:3], euleromega[3:]
    dotomega = fun(t, omega)
    ...
    return np.concatenate([[dot1, dot2, dot3], dotomega])
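For illustration, here is a sketch of how the pieces could fit together; it assumes fun(t, omega) is your existing omega ODE, reuses euler0, omega0, tspan and t from your code, and copies the Euler-rate lines from your fun2:

# Sketch of the coupled system: the state stacks the three Euler angles and the three omegas.
def fun2(t, euleromega):
    euler, omega = euleromega[:3], euleromega[3:]
    omegax, omegay, omegaz = omega
    dotomega = fun(t, omega)   # right-hand side of your omega ODE
    dot1 = (omegax*np.sin(euler[2]) + omegay*np.cos(euler[2]))/np.sin(euler[1])
    dot2 = omegax*np.cos(euler[2]) - omegay*np.sin(euler[2])
    dot3 = omegaz - (omegax*np.sin(euler[2]) + omegay*np.cos(euler[2]))/np.tan(euler[1])
    return np.concatenate([[dot1, dot2, dot3], dotomega])

state0 = np.concatenate([euler0, omega0])   # stack both sets of initial conditions
sol = integrate.solve_ivp(fun2, tspan, state0, t_eval=t, method='RK45',
                          dense_output=True, rtol=1e-13, atol=1e-22)
euler_sol, omega_sol = sol.y[:3], sol.y[3:]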
Here is my code.
import numpy as np
from scipy.integrate import odeint

# Constants
R0 = 1.475
gamma = 2.
ScaleMeVfm3toEskm3 = 8.92*np.power(10., -7.)

def EOSe(p):
    return np.power((p/450.785), (1./gamma))

def M(r, p):
    return (4./3.)*np.pi*np.power(r, 3.)*p

# function that returns dz/dr
def model(z, r):
    p, m = z
    dpdr = -((R0*EOSe(p)*m)/(np.power(r, 2.)))*(1 + (p/EOSe(p)))*(1 + ((4*np.pi*(np.power(r, 3))*p)/(m)))*((1 - ((2*R0)*m)/(r))**(-1.))
    dmdr = 4.*np.pi*(r**2.)*EOSe(p)
    dzdr = [dpdr, dmdr]
    return dzdr

# initial conditions
r0 = 10.**-12.
p0 = 10**-6.
z0 = [p0, M(r0, p0)]

# radius
r = np.linspace(r0, 15, 100000)

# solve ODE
z = odeint(model, z0, r)
The result z[:,0] keeps decreasing, as I expected, but what I want is only the positive values. If you run the code and try print(z[69306]), it shows [2.89636405e-11 5.46983202e-01]; that is the last point, where I want odeint to stop the integration.
Of course, the provided code shows
RuntimeWarning: invalid value encountered in power
return np.power((p/450.785),(1./gamma))
because p starts becoming negative. For any further points, odeint yields the result [nan nan].
However, I could use np.nanmin() to find the minimum of z[:,0] that is not nan. But I have a set of p0 values for my work, so I need to call odeint in a loop like

P = np.linspace(10**-8., 10**-2., 10000)
for p0 in P:
    # the code for solving the ODE provided above

which takes more time.
I think it would reduce the execution time if I could just stop the integration before z[:,0] becomes negative.
Here is the modified code using solve_ivp:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pylab as plt

# Constants
R0 = 1.475
gamma = 2.

def EOSe(p):
    return np.power(np.abs(p)/450.785, 1./gamma)

def M(r, p):
    return (4./3.)*np.pi*np.power(r, 3.)*p

# function that returns dz/dr
# note: the argument order is reversed compared to `odeint`
def model(r, z):
    p, m = z
    dpdr = -R0*EOSe(p)*m/r**2*(1 + p/EOSe(p))*(1 + 4*np.pi*r**3*p/m)*(1 - 2*R0*m/r)**(-1)
    dmdr = 4*np.pi * r**2 * EOSe(p)
    dzdr = [dpdr, dmdr]
    return dzdr

# initial conditions
r0 = 1e-3
r_max = 50
p0 = 1e-6
z0 = [p0, M(r0, p0)]

# Define the event function
# from the doc: "The solver will find an accurate value
# of t at which event(t, y(t)) = 0 using a root-finding algorithm."
def stop_condition(r, z):
    return z[0]

stop_condition.terminal = True

# solve ODE
r_span = (r0, r_max)
sol = solve_ivp(model, r_span, z0,
                events=stop_condition)

print(sol.message)
print('last p, m = ', sol.y[:, -1], 'for r_event=', sol.t_events[0][0])

r_sol = sol.t
p_sol = sol.y[0, :]
m_sol = sol.y[1, :]

# Graph
plt.subplot(2, 1, 1)
plt.plot(r_sol, p_sol, '.-b')
plt.xlabel('r'); plt.ylabel('p')
plt.subplot(2, 1, 2)
plt.plot(r_sol, m_sol, '.-r')
plt.xlabel('r'); plt.ylabel('m')
Actually, using events in this case does not prevent the warning caused by negative p, because the solver is going to evaluate the model for p < 0 anyway. A solution is to take the absolute value of p inside the square root (as in the code above). Using np.sign(p)*np.power(np.abs(p)/450.785, 1./gamma) gives an interesting result too.
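For completeness, here is the signed variant mentioned above written out as a drop-in replacement for EOSe (a sketch; it extends the equation of state as an odd function for p < 0):

def EOSe(p):
    # odd extension: keeps the sign of p instead of folding negative p back to positive
    return np.sign(p)*np.power(np.abs(p)/450.785, 1./gamma)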
I cannot write a program that solves a 2nd order differential equation based on the code I wrote for y'=y.
I know that I should turn the 2nd order differential equation into a system of two first-order differential equations, but I don't know how to do that in Python.
P.S.: I have to use the code below; it's homework.
Please forgive my mistakes, it's my first question. Thanks in advance.
from pylab import *

xd = []; yd = []

def F(x, y):
    return y

def rk4(x0, y0, h, N):
    xd.append(x0)
    yd.append(y0)
    for i in range(1, N+1):
        k1 = F(x0, y0)
        k2 = F(x0 + h/2, y0 + h/2*k1)
        k3 = F(x0 + h/2, y0 + h/2*k2)
        k4 = F(x0 + h, y0 + h*k3)
        k = 1/6*(k1 + 2*k2 + 2*k3 + k4)
        y = y0 + h*k
        x = x0 + h
        yd.append(y)
        xd.append(x)
        y0 = y
        x0 = x
    return xd, yd

x0 = 0
y0 = 1
h = 0.1
N = 10

x, y = rk4(x0, y0, h, N)
print("x=", x)
print("y=", y)
plot(x, y)
show()
You can basically reformulate any scalar ODE (Ordinary Differential Equation) of order n in Cauchy form into an ODE of order 1. The only thing you "pay" for this operation is that the new ODE's variable is a vector instead of a scalar function.
Let me give you an example with an ODE of order 2. Suppose your ODE is: y'' = F(x,y, y'). Then you can replace it by [y, y']' = [y', F(x,y,y')], where the derivative of a vector has to be understood component-wise.
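In code the substitution can be sketched like this, where F(x, y, yp) is a placeholder for whatever right-hand side your equation has:

# Sketch: rewrite y'' = F(x, y, y') as a first-order system in the state z = [y, y'].
def first_order_system(x, z):
    y, yp = z                    # z[0] = y, z[1] = y'
    return [yp, F(x, y, yp)]     # [y, y']' = [y', F(x, y, y')]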
Let's go back to your code; instead of using Runge-Kutta of order 4 to approximate the solution of your ODE, we will apply a simple Euler scheme.
from pylab import *
import matplotlib.pyplot as plt

# We are approximating the solution of y' = f(x,y) for x in [x_0, x_1]
# satisfying the Cauchy condition y(x_0) = y0.
def f(x, y0):
    return y0
# here f defines the equation y' = y

def explicit_euler(x0, x1, y0, N):
    # The following formula relates h and N
    h = (x1 - x0)/(N+1)
    xd = list()
    yd = list()
    xd.append(x0)
    yd.append(y0)
    for i in range(1, N+1):
        # We use the explicit Euler scheme y_{i+1} = y_i + h * f(x_i, y_i)
        y = yd[-1] + h * f(xd[-1], yd[-1])
        # you can replace the above scheme by any other (RK4 for example!)
        x = xd[-1] + h
        yd.append(y)
        xd.append(x)
    return xd, yd

N = 250
x1 = 5
x0 = 0
y0 = 1
# the only function which satisfies y(0) = 1 and y' = y is y(x) = exp(x)
xd, yd = explicit_euler(x0, x1, y0, N)
plt.plot(xd, yd)
plt.show()
# this plot has the right shape!
Note that you can replace the Euler scheme by R-K 4 which has better stability and convergence properties.
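For instance, the Euler update inside the loop could be swapped for one classical RK4 step, reusing the formulas from your own rk4 (a sketch):

# Sketch: one classical Runge-Kutta 4 step for y' = f(x, y).
def rk4_step(f, x_i, y_i, h):
    k1 = f(x_i, y_i)
    k2 = f(x_i + h/2, y_i + h/2*k1)
    k3 = f(x_i + h/2, y_i + h/2*k2)
    k4 = f(x_i + h, y_i + h*k3)
    return y_i + h/6*(k1 + 2*k2 + 2*k3 + k4)

# inside explicit_euler, the update would then become:
#     y = rk4_step(f, xd[-1], yd[-1], h)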
Now, suppose that you want to solve a second order ODE, say y'' = -y with initial conditions y(0) = 1 and y'(0) = 0. Then you have to transform the scalar function y into a vector of size 2, as explained above and in the comments in the code below.
from pylab import *
import matplotlib.pyplot as plt
import numpy as np

# We are approximating the solution of y'' = f(x,y,y') for x in [x_0, x_1]
# satisfying the Cauchy condition of order 2:
# y(x_0) = y0 and y'(x_0) = y1
def f(x, y_d_0, y_d_1):
    return -y_d_0
# here f defines the equation y'' = -y

def explicit_euler(x0, x1, y0, y1, N):
    # The following formula relates h and N
    h = (x1 - x0)/(N+1)
    xd = list()
    yd = list()
    xd.append(x0)
    # to allow group operations in R^2, we use the numpy library
    yd.append(np.array([y0, y1]))
    for i in range(1, N+1):
        # We use the explicit Euler scheme y_{i+1} = y_i + h * f(x_i, y_i)
        # remember that now, yd is a list of vectors
        # the equivalent order 1 equation is [y, y']' = [y', f(x,y,y')]
        y = yd[-1] + h * np.array([yd[-1][1], f(xd[-1], yd[-1][0], yd[-1][1])])  # vector of dimension 2
        # print(y)  # uncomment to trace the iterates
        # you can replace the above scheme by any other (RK4 for example!)
        x = xd[-1] + h  # scalar
        yd.append(y)
        xd.append(x)
    return xd, yd

x0 = 0
x1 = 30
y0 = 1
y1 = 0
# the only function satisfying y(0) = 1, y'(0) = 0 and y'' = -y is y(x) = cos(x)
N = 5000
xd, yd = explicit_euler(x0, x1, y0, y1, N)
# I only want the first variable of yd
yd_1 = list(map(lambda y: y[0], yd))
plt.plot(xd, yd_1)
plt.show()
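As a quick check (a sketch reusing xd and yd_1 from above), you can overlay the exact solution cos(x) mentioned in the comment:

# Sketch: compare the explicit Euler approximation with the exact solution cos(x).
xd_arr = np.array(xd)
plt.plot(xd_arr, yd_1, label='explicit Euler')
plt.plot(xd_arr, np.cos(xd_arr), '--', label='cos(x) (exact)')
plt.legend()
plt.show()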
I am having a hard time getting used to scipy's solve_ivp. So let's say we have an ordinary linear differential equation of second order, a spring for example (y'' = -k**2*y). The conditions are that when the spring is at position 0 (time 0), the speed is v0. How can I use the initial conditions to solve it?
y'' = -k**2*y # First this needs to be modified into first order equation
def function1(t, y, k):  # original function
    return y[1], -k**2*y[1]

function2 = lambda t, y: function1(t, y, k=10)  # function with only t and y

t = np.linspace(0, 100, 1000)
solution = solve_ivp(function2, (0, 100), (0, 0), t_eval=t)
solution.y[0]
If you want to encode
y'' = -k**2*y
as a first-order system, you should use
def function1(t, y, k):  # original function
    return y[1], -k**2*y[0]
The code in the question encodes y'' = -k**2*y'.
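To also answer the initial-condition part: the second component of the state is the velocity y', so "position 0, speed v0 at time 0" goes in as (0, v0). A minimal sketch (v0 = 1.0 is just an example value, not from the question):

# Minimal sketch: corrected spring system with initial conditions y(0) = 0, y'(0) = v0.
import numpy as np
from scipy.integrate import solve_ivp

k = 10
v0 = 1.0   # example initial speed; replace with your own value

def function1(t, y, k):
    return y[1], -k**2*y[0]   # state y = [position, velocity]

t = np.linspace(0, 100, 1000)
solution = solve_ivp(lambda t, y: function1(t, y, k), (0, 100), (0, v0), t_eval=t)
position = solution.y[0]      # y(t); solution.y[1] is the velocity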
I am trying to solve this differential equation as part of my assignment. I am not able to understand how I can put the condition for u into the code. In the code shown below, I arbitrarily provided
u = 5.
2*dx(t)/dt = -x(t) + u(t)
5*dy(t)/dt = -y(t) + x(t)
u = 2*S(t-5)
x(0)=0
y(0)=0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def model(x, t, u):
    dxdt = (-x + u)/2
    return dxdt

def model2(y, x, t):
    dydt = -(y + x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0, 40)

x = odeint(model, x0, t, args=(u,))
y = odeint(model2, y0, t, args=(u,))

plt.plot(t, x, 'r-')
plt.plot(t, y, 'b*')
plt.show()
I do not know the SciPy Library very well, but regarding the example in the documentation I would try something like this:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT describing the input u as a step function, where K
    is the height of the step and PT is its delay.
    """
    x1, x2 = x  # Split the state into two variables
    u = K if t >= PT else 0  # This is the system input
    # Here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx

x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0, 40)

x = odeint(model, x0, t, args=(K, PT))

plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of it. You can define the step function with a simple lambda and then simply capture it from the outer scope without even passing it to your function. Because sometimes that won't be the case, we'll be explicit and pass it.
Your next problem is the order of arguments in the function you integrate. As per the docs, the signature is (y, t, ...): first the state, then the time, then the extra args arguments. So for the first part we get:
u = lambda t: 2 if t > 5 else 0

def model(x, t, u):
    dxdt = (-x + u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
Moving on to the next part: the trouble is that you can't feed x as an argument to the y equation, because x is a vector of values of x(t) at particular times, so y+x doesn't make sense inside the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires interpolating the x values at the specific time values the solver asks for (which scipy can handle, no problem):
from scipy.interpolate import interp1d

# flatten because the shapes are off; extrapolate because odeint will go out of bounds
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y, t, x):
    dydt = -(y + x(t))/5
    return dydt

y = odeint(model2, y0, t, args=(xfunc,))
Then you get the resulting x(t) and y(t) curves. Sven's answer is more idiomatic for vector programming with scipy/numpy, but I hope my answer provides a clearer path from what you already know to a working solution.