I am having some trouble with this question. I am given this system of equations
dx/dt = -y - z
dy/dt = x + a*y
dz/dt = b + z*(x - c)
and default values a=0.1, b=0.1, c=14 and also the Runge-Kutta algorithm:
def rk4(f, xvinit, Tmax, N):
    T = np.linspace(0, Tmax, N+1)
    xv = np.zeros((len(T), len(xvinit)))
    xv[0] = xvinit
    h = Tmax / N
    for i in range(N):
        k1 = f(xv[i])
        k2 = f(xv[i] + h/2.0*k1)
        k3 = f(xv[i] + h/2.0*k2)
        k4 = f(xv[i] + h*k3)
        xv[i+1] = xv[i] + h/6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return T, xv
I need to solve this system from t = 0 to t = 100 in time steps of 0.1, using the initial conditions (x0, y0, z0) = (0, 0, 0) at t = 0.
I'm not really sure where to begin. I've tried defining a function for the oscillator:
def roessler(xyx, a=0.1, b=0.1, c=14):
    xyx=(x,y,x)
    dxdt=-y-z
    dydt=x+a*y
    dzdt=b+z*(x-c)
    return dxdt, dydt, dzdt
which returns the right-hand side of the system with the default parameter values. I've then tried to solve it by replacing f with roessler and filling in xvinit, Tmax and N with the values I'm given, but it's not working.
Any help is appreciated. Sorry if some of this is formatted wrong, I'm new here.
Well, you almost got it already. Changing your roessler function to the following
def roessler(xyx, a=0.1, b=0.1, c=14):
    x, y, z = xyx
    dxdt = -y - z
    dydt = x + a*y
    dzdt = b + z*(x - c)
    return np.array([dxdt, dydt, dzdt])
and then calling
T, sol = rk4(roessler, np.array([0, 0, 0]), 100, 1000)
makes it work.
Setting aside the typo in the first line of your roessler function, the key to solving this is to understand that you have a system of differential equations, i.e., you need to work with vectors. While you already had the input as a vector, you also need to make the output of roessler a vector (here a NumPy array) and pass in the initial value with the appropriate shape.
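If you then want to visualize the attractor, a minimal plotting sketch could look like the following (this assumes matplotlib is available; on older matplotlib versions you may additionally need from mpl_toolkits.mplot3d import Axes3D for the 3D projection):

import matplotlib.pyplot as plt

T, sol = rk4(roessler, np.array([0.0, 0.0, 0.0]), 100, 1000)  # h = 100/1000 = 0.1

# sol has shape (N+1, 3); the columns are x, y, z
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(sol[:, 0], sol[:, 1], sol[:, 2])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()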
Aiming to solve this system of coupled differential equations:
$\frac{dx}{dt} = -y $
$\frac{dy}{dt} = x $
following the below implicit evolution scheme:
$$ y(t_{n+1}) = y(t_{n}) + \frac{\Delta t}{2}(x_{old}(t_{n+1}) + x(t_{n})) $$
$$ x(t_{n+1}) = x(t_{n}) - \frac{\Delta t}{2}(y_{old}(t_{n+1}) + y(t_{n})) $$
My code is as follows
# coupled ODEs to be solved
def f(x, y):
    return -y

def g(x, y):
    return x

# implicit evolution scheme
def Imp(f, g, x0, y0, tend, N):
    t = np.linspace(0.0, tend, N+1)
    dt = 0.1
    x1 = np.zeros((N+1, ))
    y2 = np.zeros((N+1, ))
    xold = np.zeros((N+1, ))
    yold = np.zeros((N+1, ))
    xxold = np.zeros((N+1, ))
    yyold = np.zeros((N+1, ))
    x1[0] = x0
    y2[0] = y0
    for n in range(0, N):
        xold = f(t[n+1], x1[n])
        xxold = f(t[n+1], x1[n+1] + xold)
        yold = g(t[n], y2[n])
        yyold = g(t[n], y2[n+1] + yold)
        y2[n+1] = y2[n] + (x1[n] + xxold)*0.5*dt
        x1[n+1] = x1[n] - (y2[n] + yyold)*0.5*dt
    return t, x1, y2

t, y1, y2 = Imp(f, g, np.sqrt(2), 0.0, 100, 1000)
plt.plot(y1, y2)
I was expecting the output (phase plot) to be a full circle, as reported in the literature, but instead I got a spiral, which was not expected (I would have embedded the picture, but my low reputation did not allow it; please run the code to see the output).
Does anyone know how I could fix my Imp routine? Thanks.
Please study again how the implicit Euler or trapezoidal method works, for scalar equations and for systems. Then think hard about the interfaces of your functions: where the x and y belong, and why there is no t in the declarations of f and g.
However, from what you describe, you are not implementing an implicit method but the explicit 2nd-order Heun method. In an implicit method you would solve the implicit equation until sufficient "numerical" convergence is reached.
The explicit Heun loop could look like
for n in range(0, N):
    xold = x[n] + f(x[n], y[n])*dt
    yold = y[n] + g(x[n], y[n])*dt
    xnew = x[n] + f(xold, yold)*dt
    ynew = y[n] + g(xold, yold)*dt
    x[n+1] = 0.5*(xold + xnew)
    y[n+1] = 0.5*(yold + ynew)
But as said, this is an explicit method with a fixed number of explicit steps, not an implicit method using an implicit equation solving strategy.
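For contrast, a minimal sketch of a genuinely implicit trapezoidal step, using a small fixed number of fixed-point iterations per step to solve the implicit equations (the names f, g, x, y, dt, N are the ones from the snippets above; the choice of 5 iterations is arbitrary):

for n in range(0, N):
    # explicit Euler predictor as the starting guess
    xnew = x[n] + f(x[n], y[n])*dt
    ynew = y[n] + g(x[n], y[n])*dt
    # fixed-point iteration on the trapezoidal equations
    #   x[n+1] = x[n] + dt/2*(f(x[n], y[n]) + f(x[n+1], y[n+1]))
    #   y[n+1] = y[n] + dt/2*(g(x[n], y[n]) + g(x[n+1], y[n+1]))
    for _ in range(5):
        xnew = x[n] + 0.5*dt*(f(x[n], y[n]) + f(xnew, ynew))
        ynew = y[n] + 0.5*dt*(g(x[n], y[n]) + g(xnew, ynew))
    x[n+1] = xnew
    y[n+1] = ynew

For the linear oscillator the trapezoidal rule keeps the phase plot on a circle instead of spiraling (up to the remaining fixed-point iteration error), which is the behavior reported in the literature.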
I'm desperately trying to solve (and display the graph of) a system of nine nonlinear differential equations which models the path of a boomerang. The system is the following:
All the letters on the left-hand side are variables; the others are either constants or known functions depending on v_G and w_z.
I have tried scipy.odeint with no conclusive results (I had this issue, but the workaround did not work).
I am beginning to think that the problem is linked to the fact that these equations are nonlinear, or that a function in a denominator might cause a singularity that the scipy solver is simply unable to handle. However, I am not familiar with that sort of mathematics.
What possibilities, Python-wise, do I have to solve this set of equations?
EDIT: Sorry if I was not clear enough. Since it models the path of a boomerang, my goal is not to solve this system analytically (i.e. I don't care about the mathematical expression of each function), but rather to get the values of each function over a specific time range (say, from t1 = 0 s to t2 = 15 s with an interval of 0.01 s between values) in order to display the graph of each function and the path of the boomerang's center of mass (X, Y, Z are its coordinates).
Here is the code I tried:
import scipy.integrate as spi
import numpy as np

# Constants
I3 = 10**-3
lamb = 1
L = 5*10**-1
mu = I3
m = 0.1
Cz = 0.5
rho = 1.2
S = 0.03*0.4
Kz = 1/2*rho*S*Cz
g = 9.81

# Initial conditions
omega0 = 20*np.pi
V0 = 25
Psi0 = 0
theta0 = np.pi/2
phi0 = 0
psi0 = -np.pi/9
X0 = 0
Y0 = 0
Z0 = 1.8

INPUT = (omega0, V0, Psi0, theta0, phi0, psi0, X0, Y0, Z0)  # initial conditions

def diff_eqs(t, INP):
    '''The main set of equations'''
    Y = np.zeros((9))
    Y[0] = (1/I3) * (Kz*L*(INP[1]**2 + (L*INP[0])**2))
    Y[1] = -(lamb/m)*INP[1]
    Y[2] = -(1/(m*INP[1])) * (Kz*L*(INP[1]**2 + (L*INP[0])**2) + m*g) + (mu/I3)/INP[0]
    Y[3] = (1/(I3*INP[0])) * (-mu*INP[0]*np.sin(INP[6]))
    Y[4] = (1/(I3*INP[0]*np.sin(INP[3]))) * (mu*INP[0]*np.cos(INP[5]))
    Y[5] = -np.cos(INP[3])*Y[4]
    Y[6] = INP[1]*(-np.cos(INP[5])*np.cos(INP[4]) + np.sin(INP[5])*np.sin(INP[4])*np.cos(INP[3]))
    Y[7] = INP[1]*(-np.cos(INP[5])*np.sin(INP[4]) - np.sin(INP[5])*np.cos(INP[4])*np.cos(INP[3]))
    Y[8] = INP[1]*(-np.sin(INP[5])*np.sin(INP[3]))
    return Y  # For odeint

t_start = 0.0
t_end = 20
t_step = 0.01
t_range = np.arange(t_start, t_end, t_step)
RES = spi.odeint(diff_eqs, INPUT, t_range)
However, I keep getting the same problem as shown here, and especially the error message:
Excess work done on this call (perhaps wrong Dfun type)
I am not quite sure what it means, but it looks like the solver has trouble solving the system. In any case, when I try to display the 3D path from the X, Y, Z coordinates, I just get 3 or 4 points where there should be something like 2000.
So my questions are:
- Am I doing something wrong in my code?
- If not, is there another, perhaps more sophisticated, tool to solve this system?
- If not, is it even possible to get what I want from this system of ODEs?
Thanks in advance
There are several issues:
- If I copy the code, it does not run.
- The workaround you mention does not work with odeint; the given solution uses ode.
- The scipy reference for odeint says: "For new code, use scipy.integrate.solve_ivp to solve a differential equation."
- The call RES = spi.odeint(diff_eqs, INPUT, t_range) has to be consistent with the function head def diff_eqs(t, INP): odeint calls the function as func(y, t), so mind the argument order.
There are also some issues with the mathematical formulas:
- Have a look at the 3rd formula in your picture. It has no tendency term; it starts with a zero. What does that mean?
- It's hard to check whether you have translated the formulas correctly into code, since the code does not follow the formulas strictly.
Below I tried a solution with scipy's solve_ivp. In case A I'm able to run a pendulum, but in case B no meaningful solution for the boomerang can be found. So check the maths; I guess there is some error in the mathematical expressions.
For the graphics, use pandas to plot all variables together (see the code below).
import scipy.integrate as spi
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Constants and initial conditions (I3, Kz, omega0, ...) are taken from the question's code.

def diff_eqs_boomerang(t, Y):
    INP = Y
    dY = np.zeros((9))
    dY[0] = (1/I3) * (Kz*L*(INP[1]**2 + (L*INP[0])**2))
    dY[1] = -(lamb/m)*INP[1]
    dY[2] = -(1/(m*INP[1])) * (Kz*L*(INP[1]**2 + (L*INP[0])**2) + m*g) + (mu/I3)/INP[0]
    dY[3] = (1/(I3*INP[0])) * (-mu*INP[0]*np.sin(INP[6]))
    dY[4] = (1/(I3*INP[0]*np.sin(INP[3]))) * (mu*INP[0]*np.cos(INP[5]))
    dY[5] = -np.cos(INP[3])*INP[4]
    dY[6] = INP[1]*(-np.cos(INP[5])*np.cos(INP[4]) + np.sin(INP[5])*np.sin(INP[4])*np.cos(INP[3]))
    dY[7] = INP[1]*(-np.cos(INP[5])*np.sin(INP[4]) - np.sin(INP[5])*np.cos(INP[4])*np.cos(INP[3]))
    dY[8] = INP[1]*(-np.sin(INP[5])*np.sin(INP[3]))
    return dY

def diff_eqs_pendulum(t, Y):
    dY = np.zeros((3))
    dY[0] = Y[1]
    dY[1] = -Y[0]
    dY[2] = Y[0]*Y[1]
    return dY

t_start, t_end = 0.0, 12.0

case = 'A'
if case == 'A':    # pendulum
    Y = np.array([0.1, 1.0, 0.0])
    Yres = spi.solve_ivp(diff_eqs_pendulum, [t_start, t_end], Y, method='RK45', max_step=0.01)
if case == 'B':    # boomerang
    Y = np.array([omega0, V0, Psi0, theta0, phi0, psi0, X0, Y0, Z0])
    print('Y initial:'); print(Y); print()
    Yres = spi.solve_ivp(diff_eqs_boomerang, [t_start, t_end], Y, method='RK45', max_step=0.01)

#---- graphics ---------------------
yy = pd.DataFrame(Yres.y).T
tt = np.linspace(t_start, t_end, yy.shape[0])
with plt.style.context('fivethirtyeight'):
    plt.figure(1, figsize=(20, 5))
    plt.plot(tt, yy, lw=8, alpha=0.5)
    plt.grid(axis='y')
    for j in range(3):
        plt.fill_between(tt, yy[j], 0, alpha=0.2, label='y['+str(j)+']')
    plt.legend(prop={'size': 20})
I am trying to solve a system of ODEs of the form du/dt = F(u) in Python, and I suspect I may have made a fairly dumb mistake somewhere.
Technically F(u) is actually the second derivative of u with respect to another variable y, but in practice we can treat it as a system with some right-hand-side function.
import math
import numpy as np
from scipy.integrate import ode

# Settings
minx = -20
h = float(1)
w = float(10)
U0 = float(10)
Nt = 10
Ny = 10
tmax = 10
v = float(1)

# Arrays
y = np.linspace(0, h, Ny)
t = np.linspace(0, tmax, Nt)

# Variables from arrays
dt = t[1] - t[0]
p = [0]*(Nt)
delta = y[1] - y[0]

def zo(y):
    return math.cos(y/(2*math.pi))

z0 = [zo(i) for i in y]

def df(t, v1):
    output = np.zeros(len(y))
    it = 1
    output[0] = math.cos(w*t)
    output[len(y)-1] = math.cos(w*t)
    while it < len(y)-1:
        output[it] = (v1[it-1] + v1[it+1] - 2*v1[it]) * (v/((delta)**2))
        it += 1
    return output

r = ode(df).set_integrator('zvode', method='bdf', order=15)
r.set_initial_value(z0, 0)

it = 0
while r.successful() and r.t < tmax:
    p[it] = r.integrate(r.t + dt)
    it += 1

print(z0 - p[0])
print(p[1])
Now the problem is twofold:
- First of all, the initial "condition", i.e. p[0], seems to be off. (That may just be because of the way the ode function works, though, so I don't know if that is normal.)
- Second, p[1] and all p's after that are just 0, so for some reason the ode function fails immediately (you can check that by changing the values to 1 when initializing p).
Except that I know this method should work; it is the "equivalent" of ode45 in MATLAB, after all, and that definitely works.
Why did you choose a complex solver with an implicit backward differentiation formula of rather high order if you wanted to use the Dormand-Prince RK45 method, i.e., dopri5?
Please also correct the loop indentation in df. Why not a for loop over range(1, len(y)-1)?
As it currently stands, p[0] contains the solution point after the first step, at t = 1*dt. You would have to explicitly assign p[0] = z0 and start with it = 1 to get the full solution path in p. Check the length of p; it could be that you need Nt+1.
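A minimal sketch of what the corrected driver could look like under those suggestions, reusing df, z0, dt, tmax and Nt from the question and switching to the real-valued 'dopri5' integrator (an illustration, not a drop-in fix for the rest of the script):

r = ode(df).set_integrator('dopri5')   # explicit Dormand-Prince RK45, real-valued
r.set_initial_value(z0, 0)

p = np.zeros((Nt + 1, len(z0)))        # one row per time point, including t = 0
p[0] = z0                              # keep the initial condition in the output
it = 1
while r.successful() and r.t < tmax and it <= Nt:
    p[it] = r.integrate(r.t + dt)
    it += 1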
I'd appreciate it if someone more experienced in implementation could help me spot the logical flaw in my current code. For the past couple of hours I've been stuck with the implementation and with testing various step sizes for the following RK4 function to solve the Lotka-Volterra differential equations.
I did my best to ensure the readability of the code and to comment the crucial steps, so the code below should be clear.
import matplotlib.pyplot as plt
import numpy as np

def model(state, t):
    """
    A function that creates a 1x2 array containing the Lotka-Volterra differential equations.

    Parameter assignment/convention:
    a   natural growth rate of the prey
    b   chance of being eaten by a predator
    c   dying rate of the predators per week
    d   chance of catching a prey
    """
    x, y = state  # corresponds to the initial conditions
                  # consider it as a vector too
    a = 0.08
    b = 0.002
    c = 0.2
    d = 0.0004
    return np.array([x*(a - b*y), -y*(c - d*x)])  # corresponds to [dx/dt, dy/dt]

def rk4(f, x0, t):
    """
    4th-order Runge-Kutta method implementation to solve x' = f(x,t) with x(t[0]) = x0.

    INPUT:
        f  - function of x and t equal to dx/dt.
        x0 - the initial condition(s).
             Specifies the value of x at t = t[0] (initial).
             Can be a scalar or a vector (NumPy array).
             Example: [x0, y0] = [500, 20]
        t  - a time vector (array) at which the values of the solution are computed.
             t[0] is considered the initial time point.
             The step size h depends on the time vector; choosing more points
             results in a smaller step size.

    OUTPUT:
        x  - an array containing the solution evaluated at each point in the t array.
    """
    n = len(t)
    x = np.array([x0] * n)  # creating an array of length n
    for i in range(n - 1):
        h = t[i+1] - t[i]  # step size, dependent on the time vector
        # starting below - the implementation of the RK4 algorithm:
        # for further information visit http://en.wikipedia.org/wiki/Runge-Kutta_methods
        # k1 is the increment based on the slope at the beginning of the interval (same as Euler)
        # k2 is the increment based on the slope at the midpoint of the interval
        # k3 is AGAIN the increment based on the slope at the midpoint
        # k4 is the increment based on the slope at the end of the interval
        k1 = f(x[i], t[i])
        k2 = f(x[i] + 0.5*h*k1, t[i] + 0.5*h)
        k3 = f(x[i] + 0.5*h*k2, t[i] + 0.5*h)
        k4 = f(x[i] + h*k3, t[i] + h)
        # finally computing the weighted average and storing it in the x-array
        t[i+1] = t[i] + h
        x[i+1] = x[i] + h*((k1 + 2.0*(k2 + k3) + k4)/6.0)
    return x

################################################################
# just the graphical output

# initial conditions for the system
x0 = 500
y0 = 20

# vector of times
t = np.linspace(0, 200, 150)

result = rk4(model, [x0, y0], t)

plt.plot(t, result)
plt.xlabel('Time')
plt.ylabel('Population Size')
plt.legend(('x (prey)', 'y (predator)'))
plt.title('Lotka-Volterra Model')
plt.show()
The current output looks okay-ish on a small interval and then goes berserk. Oddly enough, the code seems to perform better when I choose a larger step size rather than a small one, which suggests that my implementation must be wrong, or maybe my model is off. I couldn't spot the error myself.
Output (wrong):
And this is the desired output, which can easily be obtained by using one of scipy's integration modules. Note that on the time interval [0, 50] the simulation seems correct; then it gets worse with every step.
Unfortunately, you fell into the same trap I've occasionally fallen into: your initial x0 array contains integers, and thus all resulting x[i] values will be converted to integers after each calculation.
Why is that? Because int is the type of your initial conditions:
x0 = 500
y0 = 20
The solution is, of course, to explicitly make them floats:
x0 = 500.
y0 = 20.
So why does scipy do it correctly when you feed it integer starting values? It probably converts them to float before starting the actual calculation. You could, for example, do
x = np.array( [ x0 ] * n, dtype=np.float64)
and then you're still safe to use integer initial conditions without problems.
At least this way, the conversion is done inside the function once and for all, and if you ever use it again in half a year (or someone else uses it), you can't fall into that trap again.
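To see the trap in isolation, here is a tiny illustration (with made-up numbers) of how a NumPy integer array silently truncates the fractional part of every update, which is exactly what happens to the RK4 increments:

import numpy as np

x_int = np.array([500, 20])           # integer dtype, like the original [x0, y0]
x_int[0] = x_int[0] + 0.7             # the fractional part is silently dropped
print(x_int[0])                       # -> 500

x_float = np.array([500.0, 20.0])     # float dtype keeps the increment
x_float[0] = x_float[0] + 0.7
print(x_float[0])                     # -> 500.7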
I need to solve a complex-domain-defined ODE system with complex initial values.
scipy.integrate.odeint does not work on complex systems.
I read about splitting my system into real and imaginary parts and solving them separately, but my ODE system's RHS involves products between the dependent variables themselves and their complex conjugates.
How do I do that? Here is my code; I tried breaking the RHS into Re and Im parts, but I don't think the solution is the same as if I hadn't broken it, because of the products between complex numbers.
In my script u1 is a (very) long complex function, say u1(Lm) = f_real(Lm) + 1j*f_imag(Lm).
from numpy import *
from scipy import integrate

def cj(z): return z.conjugate()

def dydt(y, t=0):
    # Notation
    # Dependent Variables
    theta1 = y[0]
    theta3 = y[1]
    Lm = y[2]

    u11 = u1(Lm)
    u13 = u1(3*Lm)
    zeta1 = -2*E*u11*theta1
    zeta3 = -2*E*3*u13*theta3

    # Coefficients
    A0 = theta1*cj(zeta1) + 3*theta3*cj(zeta3)
    A2 = -zeta1*theta1 + 3*cj(zeta1)*theta3 + zeta3*cj(theta1)
    A4 = -theta1*zeta3 - 3*zeta1*theta3
    A6 = -3*theta3*zeta3
    A = -(A2/2 + A4/4 + A6/6)

    # RHS vector components
    dy1dt = Lm**2 * (theta1*(A - cj(A)) - cj(theta1)*A2/2
                     - 3/2*theta3*cj(A2)
                     - 3/4*cj(theta3)*A4
                     - zeta1)
    dy2dt = Lm**2 * (3*theta3*(A - cj(A)) - theta1*A2/2
                     - cj(theta1)*A4/4
                     - 1/2*cj(theta3)*A6
                     - 3*zeta3)
    dy3dt = Lm**3 * (A0 + cj(A0))
    return array([dy1dt, dy2dt, dy3dt])

t = linspace(0, 10000, 100)     # Integration time-step
ry0 = array([0.001, 0, 0.1])    # Re(initial condition)
iy0 = array([0.0, 0.0, 0.0])    # Im(initial condition)
y0 = ry0 + 1j*iy0               # Complex Initial Condition

def rdydt(y, t=0):  # Re(RHS)
    return dydt(y, t).real

def idydt(y, t=0):  # Im(RHS)
    return dydt(y, t).imag

ry, rinfodict = integrate.odeint(rdydt, y0, t, full_output=True)
iy, iinfodict = integrate.odeint(idydt, y0, t, full_output=True)
The error I get is this:
TypeError: array cannot be safely cast to required type
odepack.error: Result from function call is not a proper array of floats.
As you've discovered, odeint does not handle complex-valued differential equations, but there is scipy.integrate.complex_ode. complex_ode is a convenience function that takes care of converting a system of n complex equations into a system of 2*n real equations. (Note the discrepancy in the signatures of the functions used to define the equations for odeint and ode: odeint expects f(y, t, *args), while ode (and complex_ode) expects f(t, y, *args).)
A similar convenience function can be created for odeint. In the following code, odeintz is a function that handles the conversion of a complex system into a real system and solves it with odeint. The code includes an example of solving a complex system. It also shows how that system can be converted "by hand" to a real system and solved with odeint. But for a large system, that is a tedious and error-prone process; using a complex solver is certainly a saner approach.
import numpy as np
from scipy.integrate import odeint

def odeintz(func, z0, t, **kwargs):
    """An odeint-like function for complex valued differential equations."""

    # Disallow Jacobian-related arguments.
    _unsupported_odeint_args = ['Dfun', 'col_deriv', 'ml', 'mu']
    bad_args = [arg for arg in kwargs if arg in _unsupported_odeint_args]
    if len(bad_args) > 0:
        raise ValueError("The odeint argument %r is not supported by "
                         "odeintz." % (bad_args[0],))

    # Make sure z0 is a numpy array of type np.complex128.
    z0 = np.array(z0, dtype=np.complex128, ndmin=1)

    def realfunc(x, t, *args):
        z = x.view(np.complex128)
        dzdt = func(z, t, *args)
        # func might return a python list, so convert its return
        # value to an array with type np.complex128, and then return
        # a np.float64 view of that array.
        return np.asarray(dzdt, dtype=np.complex128).view(np.float64)

    result = odeint(realfunc, z0.view(np.float64), t, **kwargs)

    if kwargs.get('full_output', False):
        z = result[0].view(np.complex128)
        infodict = result[1]
        return z, infodict
    else:
        z = result.view(np.complex128)
        return z


if __name__ == "__main__":
    # Generate a solution to:
    #     dz1/dt = -z1 * (K - z2)
    #     dz2/dt = L - z2
    # K and L are fixed parameters.  z1(t) and z2(t) are complex-
    # valued functions of t.

    # Define the right-hand side of the differential equation.
    def zfunc(z, t, K, L):
        z1, z2 = z
        return [-z1 * (K - z2), L - z2]

    # Set up the inputs and call odeintz to solve the system.
    z0 = np.array([1+2j, 3+4j])
    t = np.linspace(0, 4, 101)
    K = 3
    L = 1
    z, infodict = odeintz(zfunc, z0, t, args=(K, L), full_output=True)

    # For comparison, here is how the complex system can be converted
    # to a real system.  The real and imaginary parts are used to
    # write a system of four coupled equations.  The formulas for
    # the complex right-hand sides are
    #   -z1 * (K - z2) = -(x1 + i*y1) * (K - (x2 + i*y2))
    #                  = (-x1 - i*y1) * ((K - x2) + i*(-y2))
    #                  = -x1 * (K - x2) - y1*y2 + i*(-y1*(K - x2) + x1*y2)
    # and
    #   L - z2 = L - (x2 + i*y2)
    #          = (L - x2) + i*(-y2)
    def func(r, t, K, L):
        x1, y1, x2, y2 = r
        dx1dt = -x1 * (K - x2) - y1*y2
        dy1dt = -y1 * (K - x2) + x1*y2
        dx2dt = L - x2
        dy2dt = -y2
        return [dx1dt, dy1dt, dx2dt, dy2dt]

    # Use regular odeint to solve the real system.
    r, infodict = odeint(func, z0.view(np.float64), t, args=(K, L), full_output=True)

    # Compare the two solutions.  They should be the same.  (As usual for
    # floating point calculations, there could be a small difference.)
    delta_max = np.abs(z.view(np.float64) - r).max()
    print("Maximum difference between the complex and real versions is", delta_max)

    # Plot the real and imaginary parts of the complex solution.
    import matplotlib.pyplot as plt

    plt.clf()
    plt.plot(t, z[:, 0].real, label='z1.real')
    plt.plot(t, z[:, 0].imag, label='z1.imag')
    plt.plot(t, z[:, 1].real, label='z2.real')
    plt.plot(t, z[:, 1].imag, label='z2.imag')
    plt.xlabel('t')
    plt.grid(True)
    plt.legend(loc='best')
    plt.show()
Here's the plot generated by the script:
Update
This code has been significantly expanded into a function called odeintw that handles complex variables and matrix equations. The new function can be found on github: https://github.com/WarrenWeckesser/odeintw
I think I found a solution by myself. I'm posting it in case anybody else finds it useful.
It appears that odeint cannot deal with complex numbers, but scipy.integrate.ode does, by making use of the 'zvode' integration method.
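A minimal sketch of that approach, assuming a right-hand side written in the f(t, y) argument order that ode expects (the toy RHS, time span and step size here are placeholders, not the system from the question):

import numpy as np
from scipy.integrate import ode

def dydt(t, y):
    # toy complex-valued right-hand side, just to show the mechanics
    return 1j * y

y0 = np.array([1.0 + 0.5j])

r = ode(dydt).set_integrator('zvode', method='bdf')
r.set_initial_value(y0, 0.0)

dt, tmax = 0.1, 10.0
ts, ys = [0.0], [y0]
while r.successful() and r.t < tmax:
    ys.append(r.integrate(r.t + dt))
    ts.append(r.t)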