scipy odeint with complex initial values - python

I need to solve a complex-domain-defined ODE system, with complex initial values.
scipy.integrate.odeint does not work on complex systems.
I read about splitting my system into real and imaginary parts and solving them separately, but my ODE system's RHS involves products between the dependent variables themselves and their complex conjugates.
How do I do that? Here is my code. I tried breaking the RHS into Re and Im parts, but I don't think the solution is the same as if I hadn't broken it up, because of the internal products between complex numbers.
In my script, u1 is a (very) long complex function, say u1(Lm) = f_real(Lm) + 1j*f_imag(Lm).
from numpy import *
from scipy import integrate
def cj(z): return z.conjugate()
def dydt(y, t=0):
    # Notation
    # Dependent Variables
    theta1 = y[0]
    theta3 = y[1]
    Lm = y[2]
    u11 = u1(Lm)
    u13 = u1(3*Lm)
    zeta1 = -2*E*u11*theta1
    zeta3 = -2*E*3*u13*theta3
    # Coefficients
    A0 = theta1*cj(zeta1) + 3*theta3*cj(zeta3)
    A2 = -zeta1*theta1 + 3*cj(zeta1)*theta3 + zeta3*cj(theta1)
    A4 = -theta1*zeta3 - 3*zeta1*theta3
    A6 = -3*theta3*zeta3
    A = - (A2/2 + A4/4 + A6/6)
    # RHS vector components
    dy1dt = Lm**2 * (theta1*(A - cj(A)) - cj(theta1)*A2/2
                     - 3/2*theta3*cj(A2)
                     - 3/4*cj(theta3)*A4
                     - zeta1)
    dy2dt = Lm**2 * (3*theta3*(A - cj(A)) - theta1*A2/2
                     - cj(theta1)*A4/4
                     - 1/2*cj(theta3)*A6
                     - 3*zeta3)
    dy3dt = Lm**3 * (A0 + cj(A0))
    return array([dy1dt, dy2dt, dy3dt])
t = linspace(0, 10000, 100) # Integration time-step
ry0 = array([0.001, 0, 0.1]) # Re(initial condition)
iy0 = array([0.0, 0.0, 0.0]) # Im(initial condition)
y0 = ry0 + 1j*iy0 # Complex Initial Condition
def rdydt(y, t=0):  # Re(RHS)
    return dydt(y, t).real
def idydt(y, t=0):  # Im(RHS)
    return dydt(y, t).imag
ry, rinfodict = integrate.odeint(rdydt, y0, t, full_output=True)
iy, iinfodict = integrate.odeint(idydt, y0, t, full_output=True)
The error I get is this:
TypeError: array cannot be safely cast to required type
odepack.error: Result from function call is not a proper array of floats.

As you've discovered, odeint does not handle complex-valued differential equations, but there is scipy.integrate.complex_ode. complex_ode is a convenience class that takes care of converting a system of n complex equations into a system of 2*n real equations. (Note the discrepancy in the signatures of the functions used to define the equations: odeint expects f(y, t, *args), while ode and complex_ode expect f(t, y, *args).)
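For example, a minimal complex_ode sketch for a made-up scalar equation dz/dt = -1j*z (the equation is only for illustration) could look like this:

import numpy as np
from scipy.integrate import complex_ode

def f(t, z):
    # toy right-hand side, written in the f(t, z) form that ode/complex_ode expect
    return -1j * z

solver = complex_ode(f)
solver.set_initial_value([1.0 + 0.5j], 0.0)

t_end, dt = 10.0, 0.1
ts, zs = [0.0], [solver.y.copy()]
while solver.successful() and solver.t < t_end:
    solver.integrate(solver.t + dt)
    ts.append(solver.t)
    zs.append(solver.y.copy())
zs = np.array(zs)   # complex samples of z(t)

Under the hood, complex_ode interleaves the real and imaginary parts and hands a real system to a real-valued integrator.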
A similar convenience function can be created for odeint. In the following code, odeintz is a function that handles the conversion of a complex system into a real system and solves it with odeint. The code includes an example of solving a complex system. It also shows how that system can be converted "by hand" to a real system and solved with odeint. But for a large system, that is a tedious and error-prone process; using a complex solver is certainly the saner approach.
import numpy as np
from scipy.integrate import odeint
def odeintz(func, z0, t, **kwargs):
    """An odeint-like function for complex valued differential equations."""

    # Disallow Jacobian-related arguments.
    _unsupported_odeint_args = ['Dfun', 'col_deriv', 'ml', 'mu']
    bad_args = [arg for arg in kwargs if arg in _unsupported_odeint_args]
    if len(bad_args) > 0:
        raise ValueError("The odeint argument %r is not supported by "
                         "odeintz." % (bad_args[0],))

    # Make sure z0 is a numpy array of type np.complex128.
    z0 = np.array(z0, dtype=np.complex128, ndmin=1)

    def realfunc(x, t, *args):
        z = x.view(np.complex128)
        dzdt = func(z, t, *args)
        # func might return a python list, so convert its return
        # value to an array with type np.complex128, and then return
        # a np.float64 view of that array.
        return np.asarray(dzdt, dtype=np.complex128).view(np.float64)

    result = odeint(realfunc, z0.view(np.float64), t, **kwargs)

    if kwargs.get('full_output', False):
        z = result[0].view(np.complex128)
        infodict = result[1]
        return z, infodict
    else:
        z = result.view(np.complex128)
        return z
if __name__ == "__main__":

    # Generate a solution to:
    #     dz1/dt = -z1 * (K - z2)
    #     dz2/dt = L - z2
    # K and L are fixed parameters.  z1(t) and z2(t) are complex-
    # valued functions of t.

    # Define the right-hand-side of the differential equation.
    def zfunc(z, t, K, L):
        z1, z2 = z
        return [-z1 * (K - z2), L - z2]

    # Set up the inputs and call odeintz to solve the system.
    z0 = np.array([1+2j, 3+4j])
    t = np.linspace(0, 4, 101)
    K = 3
    L = 1
    z, infodict = odeintz(zfunc, z0, t, args=(K, L), full_output=True)

    # For comparison, here is how the complex system can be converted
    # to a real system.  The real and imaginary parts are used to
    # write a system of four coupled equations.  The formulas for
    # the complex right-hand-sides are
    #   -z1 * (K - z2) = -(x1 + i*y1) * (K - (x2 + i*y2))
    #                  = (-x1 - i*y1) * ((K - x2) + i*(-y2))
    #                  = -x1 * (K - x2) - y1*y2 + i*(-y1*(K - x2) + x1*y2)
    # and
    #   L - z2 = L - (x2 + i*y2)
    #          = (L - x2) + i*(-y2)
    def func(r, t, K, L):
        x1, y1, x2, y2 = r
        dx1dt = -x1 * (K - x2) - y1*y2
        dy1dt = -y1 * (K - x2) + x1*y2
        dx2dt = L - x2
        dy2dt = -y2
        return [dx1dt, dy1dt, dx2dt, dy2dt]

    # Use regular odeint to solve the real system.
    r, infodict = odeint(func, z0.view(np.float64), t, args=(K, L), full_output=True)

    # Compare the two solutions.  They should be the same.  (As usual for
    # floating point calculations, there could be a small difference.)
    delta_max = np.abs(z.view(np.float64) - r).max()
    print("Maximum difference between the complex and real versions is", delta_max)

    # Plot the real and imaginary parts of the complex solution.
    import matplotlib.pyplot as plt

    plt.clf()
    plt.plot(t, z[:, 0].real, label='z1.real')
    plt.plot(t, z[:, 0].imag, label='z1.imag')
    plt.plot(t, z[:, 1].real, label='z2.real')
    plt.plot(t, z[:, 1].imag, label='z2.imag')
    plt.xlabel('t')
    plt.grid(True)
    plt.legend(loc='best')
    plt.show()
Here's the plot generated by the script:
Update
This code has been significantly expanded into a function called odeintw that handles complex variables and matrix equations. The new function can be found on github: https://github.com/WarrenWeckesser/odeintw
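Usage is intended to mirror odeint; here is a minimal sketch reusing the example above (check the odeintw README for the exact interface):

import numpy as np
from odeintw import odeintw

def zfunc(z, t, K, L):
    z1, z2 = z
    return [-z1 * (K - z2), L - z2]

z0 = np.array([1+2j, 3+4j])
t = np.linspace(0, 4, 101)
# complex-valued solution with shape (len(t), 2)
z = odeintw(zfunc, z0, t, args=(3, 1))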

I think I found a solution by myself. I'm posting it in case anybody else finds it useful.
It appears that odeint cannot deal with complex numbers, but scipy.integrate.ode does, by making use of the 'zvode' integrator.
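For reference, here is a minimal sketch of how 'zvode' might be wired up for the system in the question, assuming dydt, y0 and t are defined as above (note that ode calls f(t, y), so the RHS is wrapped):

import numpy as np
from scipy.integrate import ode

def f(t, y):
    # scipy.integrate.ode calls f(t, y); dydt above takes (y, t)
    return dydt(y, t)

solver = ode(f)
solver.set_integrator('zvode', method='bdf')   # complex-valued variable-coefficient solver
solver.set_initial_value(y0, t[0])

ys = [y0]
for t_next in t[1:]:
    solver.integrate(t_next)
    ys.append(solver.y)
ys = np.array(ys)   # complex solution at the times in t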

Related

Applying Forward Euler Method to a Three-Box Model System of ODEs

I am trying to model a system of coupled ODEs which represent a three-box ocean model of phosphorous concentration (y) in the low-latitude surface ocean (Box 1), high-latitude deep ocean (Box 2), and deep ocean (Box 3). The ODEs are given below:
dy1dt = (Q/V1)*(y3-y1) - y1/tau # dP/dt in Box 1
dy2dt = (Q/V2)*(y1-y2) + (qh/V2)*(y3-y2) - (fh/V2) # dP/dt in Box 2
dy3dt = (Q/V3)*(y2-y3) + (qh/V3)*(y2-y3) + (V1*y1)/(V3*tau) + (fh/V3) # dP/dt in Box 3
The constants and box volumes are given by:
import numpy as np

### Define Constants
tau = 86400 # s
VT = 1.37e18 # m3
Q = 25e6 # m3/s
qh = 38e6 # m3/s
fh = 0.0022 # mol/m3
avp = 0.00215 # mol/m3
### Calculate Surface Area of Ocean
r = 6.4e6 # m
earth = 4*np.pi*(r**2) # m2
ocean = .70*earth # m2
### Calculate Volumes for Each Box
V1 = .85*100*ocean # m3
V2 = .15*250*ocean # m3
V3 = VT-V1-V2 # m3
This can be put into matrix form dy/dt = Ay + f, where y = [y1, y2, y3]. I have provided the matrices and initial conditions below:
A = np.array([[(-Q/V1)-(1/tau), 0,               (Q/V1)],
              [(Q/V2),          (-Q/V2)-(qh/V2), (qh/V2)],
              [(V1/(V3*tau)),   ((Q+qh)/V3),     ((-Q-qh)/V3)]])
f = np.array([[0], [-fh/V2], [fh/V3]])
y1 = y2 = y3 = 0.00215 # mol/m3
I am having trouble adapting the forward Euler method to apply to a system of linear ODEs rather than just one. This is what I have come up with so far (it runs with no issues but the results are wrong, if that makes sense; I think it has something to do with the initial conditions?):
### Define a Function for the Euler-Forward Scheme
import numpy as np
def ForwardEuler(t0, y0, N, dt):
    N = 100
    dt = 0.1
    # Create empty 2D arrays for t and y
    t = np.zeros([N+1, 3, 3])  # steps, # variables, # solutions
    y = np.zeros([N+1, 3, 3])
    # Assign each ODE to a vector element
    y1 = y[0]
    y2 = y[1]
    y3 = y[2]
    # Set initial conditions for each solution
    t[0, 0] = t0[0]
    y[0, 0] = y0[0]
    t[0, 1] = t0[1]
    y[0, 1] = y0[1]
    t[0, 2] = t0[2]
    y[0, 2] = y0[2]
    for i in trange(int(N)):
        t[i+1] = t[i] + t[i]*dt
        y1[i+1] = y1[i] + ((Q/V1)*(y3[i]-y1[i]) - (y1[i]/tau))*dt
        y2[i+1] = y2[i] + ((Q/V2)*(y1[i]-y2[i]) + (qh/V2)*(y3[i]-y2[i]) - (fh/V2))*dt
        y3[i+1] = y3[i] + ((Q/V3)*(y2[i]-y3[i]) + (qh/V3)*(y2[i]-y3[i]) + (V1*y1[i])/(V3*tau) + (fh/V3))*dt
    return t, y1, y2, y3
Any help on this is greatly appreciated. I have not found any online resources that go through the forward Euler method for a system of 3 ODEs, and I am at a loss. I am happy to explain further if there are more questions.
As Lutz Lehmann pointed out, you need to set the ODE system up as a single right-hand-side function. You could define the whole ODE system inside a function as follows:
import numpy as np
def fun(t, RHS):
    # unpack the current values of the dependent variables
    y1 = RHS[0]
    y2 = RHS[1]
    y3 = RHS[2]
    # calculate the rate of change of each variable
    dy1dt = (Q/V1)*(y3-y1) - y1/tau
    dy2dt = (Q/V2)*(y1-y2) + (qh/V2)*(y3-y2) - (fh/V2)
    dy3dt = (Q/V3)*(y2-y3) + (qh/V3)*(y2-y3) + (V1*y1)/(V3*tau) + (fh/V3)
    # left-hand side of the ODE system
    LHS = np.zeros([3,])
    LHS[0] = dy1dt
    LHS[1] = dy2dt
    LHS[2] = dy3dt
    return LHS
In the above function, we receive the time t as an argument and the current values of y1, y2, and y3 as a list in the variable RHS, which is then unpacked into the respective variables. Afterward, the rate equation for each variable is evaluated. Finally, the calculated rates are returned as an array in the variable LHS.
Now, we can define a simple Euler Forward method to solve this ODE system as follows:
def ForwardEuler(fun, t0, y0, N, dt):
    t = np.arange(t0, N+dt, dt)
    y = np.zeros([len(t), len(y0)])
    y[0] = y0
    for i in range(len(t)-1):
        y[i+1, :] = y[i, :] + fun(t[i], y[i, :]) * dt
    return t, y
Here, we create a time range from 0 to 100 with a step size of 0.1. Afterward, we create an array of zeros with the shape (len(t), len(y0)) which is in this case (1001,3). We need to do this because we want to solve fun for the range of t (1001) and the RHS variable of fun has a shape of (3,) ([y1, y2, y3]). So for each and every point in t, we will solve for the three variables of RHS, which will be returned as LHS.
In the end, we can solve this ODE system as follows:
dt = 0.1
N = 100
y0 = [0.00215, 0.00215, 0.00215]
t0 = 0
t,y = ForwardEuler(fun,t0,y0,N,dt)
Solution using scipy.integrate
As Lutz Lehmann also pointed out, you can use scipy.integrate for this purpose as well, which is far easier. Here you can use the above-defined fun and simply solve the ODE as follows:
import numpy as np
from scipy.integrate import odeint
dt = 0.1
N = 100
t = np.linspace(0,N,int(N/dt))
y0 = [0.00215, 0.00215, 0.00215]
res = odeint(fun, y0, t, tfirst=True)
print(res)
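Since the question already puts the system in matrix form dy/dt = Ay + f, the same right-hand side can be written more compactly. A sketch, assuming the arrays A and f are defined exactly as in the question:

def fun_matrix(t, y):
    # dy/dt = A @ y + f, with the 3x3 matrix A and the 3x1 column f from the question
    return A @ y + f.flatten()

t, y = ForwardEuler(fun_matrix, t0, y0, N, dt)   # or: odeint(fun_matrix, y0, t, tfirst=True)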

Solving system of 1-D partial differential equations numerically in python (py-pde lib or any other)

I have the following system of partial differential equations:
Where
c0 - constant,
r - independent spatial variable,
t - time variable,
f(r, t) - 1st unknown function,
f_t(r, t) - 2nd unknown function (it just represents the first-order derivative of f over t),
f_r(r, t) - first-order derivative of f over r,
f_rr(r, t) - second-order derivative of f over r
The unknowns f(r, t) and f_t(r, t) must be found within the variable ranges r0..r1 and t0..t1 (r0 = 5*10^-4, r1 = 1, t0 = 0, t1 = 5*10^-5).
Also there are the following initial conditions:
f(r, t0) = 0
f_t(r, t0) = 0
And the boundary condition:
f_t(r0, t) = ...<a function which is defined by set of points>...
# e.g.: scipy.interpolate.interp1d(array_of_t_values, array_of_F_values, kind='linear')
How do I define all this and solve it numerically with py-pde (or any other suitable library) in Python? I also need to draw the resulting surfaces of f(r, t) and f_t(r, t).
Below I tried to modify an existing py-pde example, but I have no idea how to define the derivatives:
from pde import FieldCollection, PDEBase, UnitGrid

class MyPDE(PDEBase):
    def __init__(self, c0=1500):
        self.c0 = c0

    def evolution_rate(self, state, t=0):
        f, f_t = state
        f = f_t
        f_t = (f.diff(r, 2) + 1/r * f.diff(r)) / self.c0
        return FieldCollection([f, f_t])

grid = UnitGrid([32, 32])
state = FieldCollection.scalar_random_uniform(2, grid)
eq = MyPDE()
result = eq.solve(state, t_range=100, dt=0.01)
result.plot()

Using scipy.quad with iε trick: Bad results

In order to circumvent the Cauchy principal value, I tried to compute the integral using a small shift iε into the complex plane to evade the pole. However, as can be inferred from the figure below, the result is pretty bad. The code for this result is shown below. Do you have ideas how to improve this method? Why is it not working? I have already tried changing ε or the limits of the integral.
Edit: I included the method "cauchy" with the principal value, which does not seem to work at all.
import matplotlib.pyplot as plt
from scipy.integrate import quad
import numpy as np
def cquad(func, a, b, **kwargs):
    real_integral = quad(lambda x: np.real(func(x)), a, b, limit=200, **kwargs)
    imag_integral = quad(lambda x: np.imag(func(x)), a, b, limit=200, **kwargs)
    return (real_integral[0] + 1j*imag_integral[0], real_integral[1:], imag_integral[1:])

def k_(a):
    ϵ = 1e-32
    return cquad(lambda x: np.exp(-1j*x)/(x**2 - a**2 - 1j*ϵ), -np.inf, np.inf)[0]

def k2_(a):
    return cquad(lambda x: np.exp(-1j*x)/(x**2 - a**2), -1e6, 1e6, weight='cauchy', wvar=a)[0]
k = np.vectorize(k_)
k2 = np.vectorize(k2_)
fig, ax = plt.subplots()
a = np.linspace(-10,10,300)
ax.plot(a,np.real(k(a)),".-",label = "numerical result")
ax.plot(a,np.real(k2(a)),".-",label = "numerical result (cauchy)")
ax.plot(a, - np.pi*np.sin(a)/a,"-",label="analytical result")
ax.set_ylim(-5,5)
ax.set_ylabel("f(x)")
ax.set_xlabel("x")
ax.set_title(r"$\mathcal{P}\int_{-\infty}^{\infty} \frac{e^{-i y}}{y^2 - x^2}\mathrm{d}y = -\frac{\pi\sin(x)}{x}$")
plt.legend()
plt.savefig("./bad_result.png")
plt.show()
The main problem is that the integrand has poles at both x = a and x = -a. ev-br's post shows how to deal with a pole at x = a. All that's needed then is to find a way to massage the integral into a form that avoids integrating through the other pole at x = -a. Taking advantage of evenness allows us to "fold the integral over", so instead of having two poles we just need to deal with one pole at x = a.
The real part of
np.exp(-1j*x) / (x**2 - a**2) = (np.cos(x) - 1j * np.sin(x)) / (x**2 - a**2)
is an even function of x, so integrating the real part from x = -infinity to infinity would equal twice the integral from x = 0 to infinity. The imaginary part of the integrand is an odd function of x: the integral from x = -infinity to infinity equals the integral from x = -infinity to 0 plus the integral from x = 0 to infinity, and these two parts cancel each other out since the (imaginary) integrand is odd. So the integral of the imaginary part equals 0.
Finally, using ev-br's suggestion, since
1 / (x**2 - a**2) = 1 / ((x - a)(x + a))
using weight='cauchy', wvar=a implicitly weights the integrand by 1 / (x - a) thus allowing us to reduce the explicit integrand to
np.cos(x) / (x + a)
Since the integrand is an even function of a, we can assume without loss of generality that a is positive:
a = abs(a)
Now integrating from x = 0 to infinity avoids the pole at x = -a.
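Putting those steps together, the quantity actually being computed is

$$\mathcal{P}\int_{-\infty}^{\infty} \frac{e^{-ix}}{x^2 - a^2}\,dx \;=\; 2\,\mathcal{P}\int_{0}^{\infty} \frac{\cos x}{x^2 - a^2}\,dx \;=\; 2\,\mathcal{P}\int_{0}^{\infty} \frac{\cos x/(x + a)}{x - a}\,dx,$$

which is exactly the form that quad handles with weight='cauchy', wvar=a.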
import matplotlib.pyplot as plt
from scipy.integrate import quad
import numpy as np
def cquad(func, a, b, **kwargs):
    real_integral = quad(lambda x: np.real(func(x)), a, b, limit=200, **kwargs)
    imag_integral = quad(lambda x: np.imag(func(x)), a, b, limit=200, **kwargs)
    return (real_integral[0] + 1j*imag_integral[0], real_integral[1:], imag_integral[1:])

def k2_(a):
    a = abs(a)
    # return 2*(cquad(lambda x: np.exp(-1j*x)/(x + a), 0, 1e6, weight='cauchy', wvar=a)[0])  # also works
    # return 2*(cquad(lambda x: np.cos(x)/(x + a), 0, 1e6, weight='cauchy', wvar=a)[0])  # also works, but not necessary
    return 2*quad(lambda x: np.cos(x)/(x + a), 0, 1e6, limit=200, weight='cauchy', wvar=a)[0]
k2 = np.vectorize(k2_)
fig, ax = plt.subplots()
a = np.linspace(-10, 10, 300)
ax.plot(a, np.real(k2(a)), ".-", label="numerical result (cauchy)")
ax.plot(a, - np.pi*np.sin(a)/a, "-", label="analytical result")
ax.set_ylim(-5, 5)
ax.set_ylabel("f(x)")
ax.set_xlabel("x")
ax.set_title(
    r"$\mathcal{P}\int_{-\infty}^{\infty} \frac{e^{-i y}}{y^2 - x^2}\mathrm{d}y = -\frac{\pi\sin(x)}{x}$")
plt.legend()
plt.show()
You can instead use the weight="cauchy" argument to quad.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html
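In a minimal sketch (the integrand and the pole location below are arbitrary, just to show the pattern), quad with weight='cauchy' and wvar=c computes the principal value of f(x)/(x - c) over [a, b]:

import numpy as np
from scipy.integrate import quad

# principal value of cos(x)/(x - 2) over [0, 50]; the pole at x = 2 is handled by the weight
val, err = quad(np.cos, 0, 50, weight='cauchy', wvar=2.0)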

How to solve a 9-equation system of nonlinear DEs with Python?

I'm desperately trying to solve (and display the graph of) a system of nine nonlinear differential equations which models the path of a boomerang. The system is the following:
All the letters on the left-hand side are variables; the others are either constants or known functions of v_G and w_z.
I have tried scipy.odeint with no conclusive results (I had this issue, but the workaround did not work).
I am beginning to think that the problem is linked to the fact that these equations are nonlinear, or that the functions in the denominators might cause a singularity that the scipy solver is simply unable to handle. However, I am not familiar with that sort of mathematics.
What possibilities do I have, Python-wise, to solve this set of equations?
EDIT: Sorry if I was not clear enough. Since it models the path of a boomerang, my goal is not to solve this system analytically (i.e. I don't care about the mathematical expression of each function), but rather to get the values of each function over a specific time range (say, from t1 = 0 s to t2 = 15 s with an interval of 0.01 s between values) in order to display the graph of each function and the path of the boomerang's center of mass (X, Y, Z are its coordinates).
Here is the code I tried :
import scipy.integrate as spi
import numpy as np
#Constants
I3 = 10**-3
lamb = 1
L = 5*10**-1
mu = I3
m = 0.1
Cz = 0.5
rho = 1.2
S = 0.03*0.4
Kz = 1/2*rho*S*Cz
g = 9.81
#Initial conditions
omega0 = 20*np.pi
V0 = 25
Psi0 = 0
theta0 = np.pi/2
phi0 = 0
psi0 = -np.pi/9
X0 = 0
Y0 = 0
Z0 = 1.8
INPUT = (omega0, V0, Psi0, theta0, phi0, psi0, X0, Y0, Z0) #initial conditions
def diff_eqs(t, INP):
    '''The main set of equations'''
    Y = np.zeros((9))
    Y[0] = (1/I3) * (Kz*L*(INP[1]**2+(L*INP[0])**2))
    Y[1] = -(lamb/m)*INP[1]
    Y[2] = -(1/(m * INP[1])) * ( Kz*L*(INP[1]**2+(L*INP[0])**2) + m*g) + (mu/I3)/INP[0]
    Y[3] = (1/(I3*INP[0]))*(-mu*INP[0]*np.sin(INP[6]))
    Y[4] = (1/(I3*INP[0]*np.sin(INP[3]))) * (mu*INP[0]*np.cos(INP[5]))
    Y[5] = -np.cos(INP[3])*Y[4]
    Y[6] = INP[1]*(-np.cos(INP[5])*np.cos(INP[4]) + np.sin(INP[5])*np.sin(INP[4])*np.cos(INP[3]))
    Y[7] = INP[1]*(-np.cos(INP[5])*np.sin(INP[4]) - np.sin(INP[5])*np.cos(INP[4])*np.cos(INP[3]))
    Y[8] = INP[1]*(-np.sin(INP[5])*np.sin(INP[3]))
    return Y  # For odeint
t_start = 0.0
t_end = 20
t_step = 0.01
t_range = np.arange(t_start, t_end, t_step)
RES = spi.odeint(diff_eqs, INPUT, t_range)
However, I keep getting the same problem as shown here, and especially the error message:
Excess work done on this call (perhaps wrong Dfun type)
I am not quite sure what it means, but it looks like the solver has trouble solving the system. In any case, when I try to display the 3D path from the XYZ coordinates, I just get 3 or 4 points where there should be something like 2000.
So my questions are:
- Am I doing something wrong in my code?
- If not, is there another, maybe more sophisticated, tool to solve this system?
- If not, is it even possible to get what I want from this system of ODEs?
Thanks in advance
There are several issues:
- if I copy the code, it does not run
- the workaround you mention does not work with odeint; the given solution uses ode
- the scipy reference for odeint says: "For new code, use scipy.integrate.solve_ivp to solve a differential equation."
- the call RES = spi.odeint(diff_eqs, INPUT, t_range) is not consistent with the function head def diff_eqs(t, INP): odeint expects the state first, i.e. f(y, t), so either swap the arguments in the function definition or pass tfirst=True to odeint.
There are some issues with the mathematical formulas too:
- have a look at the 3rd formula in your picture: it has no derivative (tendency) term, it starts with a zero. What does that mean?
- it's hard to check whether you have translated the formulas correctly into code, since the code does not follow the formulas strictly.
Below I tried a solution with scipy's solve_ivp. In case A I'm able to run a pendulum, but in case B no meaningful solution for the boomerang can be found. So check the maths; I guess there is some error in the mathematical expressions.
For the graphics, use pandas to plot all variables together (see the code below).
import scipy.integrate as spi
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def diff_eqs_boomerang(t, Y):
    INP = Y
    dY = np.zeros((9))
    dY[0] = (1/I3) * (Kz*L*(INP[1]**2+(L*INP[0])**2))
    dY[1] = -(lamb/m)*INP[1]
    dY[2] = -(1/(m * INP[1])) * ( Kz*L*(INP[1]**2+(L*INP[0])**2) + m*g) + (mu/I3)/INP[0]
    dY[3] = (1/(I3*INP[0]))*(-mu*INP[0]*np.sin(INP[6]))
    dY[4] = (1/(I3*INP[0]*np.sin(INP[3]))) * (mu*INP[0]*np.cos(INP[5]))
    dY[5] = -np.cos(INP[3])*INP[4]
    dY[6] = INP[1]*(-np.cos(INP[5])*np.cos(INP[4]) + np.sin(INP[5])*np.sin(INP[4])*np.cos(INP[3]))
    dY[7] = INP[1]*(-np.cos(INP[5])*np.sin(INP[4]) - np.sin(INP[5])*np.cos(INP[4])*np.cos(INP[3]))
    dY[8] = INP[1]*(-np.sin(INP[5])*np.sin(INP[3]))
    return dY

def diff_eqs_pendulum(t, Y):
    dY = np.zeros((3))
    dY[0] = Y[1]
    dY[1] = -Y[0]
    dY[2] = Y[0]*Y[1]
    return dY
t_start, t_end = 0.0, 12.0
case = 'A'
if case == 'A':    # pendulum
    Y = np.array([0.1, 1.0, 0.0])
    Yres = spi.solve_ivp(diff_eqs_pendulum, [t_start, t_end], Y, method='RK45', max_step=0.01)

if case == 'B':    # boomerang
    Y = np.array([omega0, V0, Psi0, theta0, phi0, psi0, X0, Y0, Z0])
    print('Y initial:'); print(Y); print()
    Yres = spi.solve_ivp(diff_eqs_boomerang, [t_start, t_end], Y, method='RK45', max_step=0.01)
#---- graphics ---------------------
yy = pd.DataFrame(Yres.y).T
tt = np.linspace(t_start, t_end, yy.shape[0])

with plt.style.context('fivethirtyeight'):
    plt.figure(1, figsize=(20, 5))
    plt.plot(tt, yy, lw=8, alpha=0.5)
    plt.grid(axis='y')
    for j in range(3):
        plt.fill_between(tt, yy[j], 0, alpha=0.2, label='y['+str(j)+']')
    plt.legend(prop={'size': 20})

Solve an implicit ODE (differential algebraic equation DAE)

I'm trying to solve a second order ODE using odeint from scipy. The issue I'm having is the function is implicitly coupled to the second order term, as seen in the simplified snippet (please ignore the pretend physics of the example):
import numpy as np
from scipy.integrate import odeint
def integral(y, t, F_l, mass):
    dydt = np.zeros_like(y)
    x, v = y
    F_r = (((1-a)/3)**2 + (2*(1+a)/3)**2) * v  # 'a' implicit
    a = (F_l - F_r)/mass
    dydt = [v, a]
    return dydt
y0 = [0,5]
time = np.linspace(0.,10.,21)
F_lon = 100.
mass = 1000.
dydt = odeint(integral, y0, time, args=(F_lon,mass))
In this case I realise it is possible to algebraically solve for the implicit variable; however, in my actual scenario there is a lot of logic between F_r and the evaluation of a, and algebraic manipulation fails.
I believe the DAE could be solved using MATLAB's ode15i function, but I'm trying to avoid that scenario if at all possible.
My question is: is there a way to solve implicit ODE functions (DAEs) in Python (SciPy preferably)? And is there a better way to pose the problem above to do so?
As a last resort, it may be acceptable to pass a from the previous time-step. How could I pass dydt[1] back into the function after each time-step?
Quite old, but worth updating so it may be useful for anyone who stumbles upon this question. There are quite a few packages currently available in Python that can solve implicit ODEs.
GEKKO (https://github.com/BYU-PRISM/GEKKO) is one such package; it specializes in dynamic optimization for mixed-integer, nonlinear optimization problems, but it can also be used as a general-purpose DAE solver.
The above "pretend physics" problem can be solved in GEKKO as follows.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

m = GEKKO()
m.time = np.linspace(0, 100, 101)
F_l = m.Param(value=1000)
mass = m.Param(value=1000)
m.options.IMODE = 4   # dynamic simulation
m.options.NODES = 3
F_r = m.Var(value=0)
x = m.Var(value=0)
v = m.Var(value=0, lb=0)
a = m.Var(value=5, lb=0)
m.Equation(x.dt() == v)
m.Equation(v.dt() == a)
# the implicit algebraic constraint, written exactly as in the question
m.Equation(F_r == (((1-a)/3)**2 + (2*(1+a)/3)**2) * v)
m.Equation(a == (F_l - F_r)/mass)
m.solve(disp=False)
plt.plot(m.time, x.value)
plt.show()
If algebraic manipulation fails, you can go for a numerical solution of your constraint, running for example fsolve at each timestep:
import sys
from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import fsolve

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 10.
mass = 1000.

def F_r(a, v):
    return (((1 - a) / 3) ** 2 + (2 * (1 + a) / 3) ** 2) * v

def constraint(a, v):
    return (F_lon - F_r(a, v)) / mass - a

def integral(y, _):
    v = y[1]
    # solve the algebraic constraint numerically at this timestep
    a, _, ier, mesg = fsolve(constraint, 0, args=(v,), full_output=True)
    if ier != 1:
        print("I couldn't solve the algebraic constraint, error:\n\n", mesg)
        sys.stdout.flush()
    return [v, a]

dydt = odeint(integral, y0, time)
Clearly this will slow down your time integration. Always check that fsolve finds a good solution, and flush the output so that you can realize it as it happens and stop the simulation.
About how to "cache" the value of a variable at a previous timestep: you can exploit the fact that default arguments are evaluated only once, at function definition.
from numpy import linspace
from scipy.integrate import odeint

# you can choose a better guess using fsolve instead of 0
def integral(y, _, F_l, M, cache=[0]):
    v, preva = y[1], cache[0]
    # use the value of 'a' from the previous timestep
    F_r = (((1 - preva) / 3) ** 2 + (2 * (1 + preva) / 3) ** 2) * v
    # calculate the new value
    a = (F_l - F_r) / M
    cache[0] = a
    return [v, a]

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
Notice that in order for the trick to work the cache parameter must be mutable, and that's why I use a list. See this link if you are not familiar with how default arguments work.
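A tiny standalone illustration of that behaviour, separate from the ODE code:

def counter(step, cache=[0]):
    # the list is created once, when the function is defined,
    # so its contents persist between calls
    cache[0] += step
    return cache[0]

print(counter(1))  # 1
print(counter(1))  # 2 -- the same list was reused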
Notice that the two codes DO NOT produce the same result, and you should be very careful using the value at the previous timestep, both for numerical stability and precision. The second is clearly much faster though.
