How can I use Cython well to solve a differential equation faster?

I would like to lower the time SciPy's odeint takes for solving a differential equation.
To practice, I used the example covered in Python in Scientific Computations as a template. Because odeint takes a function f as an argument, I wrote this function as a statically typed Cython version and hoped the running time of odeint would decrease significantly.
The function f is contained in a file called ode.pyx, as follows:
import numpy as np
cimport numpy as np
from libc.math cimport sin, cos

def f(y, t, params):
    cdef double theta = y[0], omega = y[1]
    cdef double Q = params[0], d = params[1], Omega = params[2]
    cdef double derivs[2]
    derivs[0] = omega
    derivs[1] = -omega/Q + np.sin(theta) + d*np.cos(Omega*t)
    return derivs

def fCMath(y, double t, params):
    cdef double theta = y[0], omega = y[1]
    cdef double Q = params[0], d = params[1], Omega = params[2]
    cdef double derivs[2]
    derivs[0] = omega
    derivs[1] = -omega/Q + sin(theta) + d*cos(Omega*t)
    return derivs
I then create a file setup.py to compile the function:
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules=cythonize('ode.pyx'))
The script that solves the differential equation (and also contains the Python version of f) is called solveODE.py and looks like this:
import ode
import numpy as np
from scipy.integrate import odeint
import time

def f(y, t, params):
    theta, omega = y
    Q, d, Omega = params
    derivs = [omega,
              -omega/Q + np.sin(theta) + d*np.cos(Omega*t)]
    return derivs

params = np.array([2.0, 1.5, 0.65])
y0 = np.array([0.0, 0.0])
t = np.arange(0., 200., 0.05)

start_time = time.time()
odeint(f, y0, t, args=(params,))
print("The Python Code took: %.6s seconds" % (time.time() - start_time))

start_time = time.time()
odeint(ode.f, y0, t, args=(params,))
print("The Cython Code took: %.6s seconds ---" % (time.time() - start_time))

start_time = time.time()
odeint(ode.fCMath, y0, t, args=(params,))
print("The Cython Code incorporating two of DavidW's suggestions took: %.6s seconds ---" % (time.time() - start_time))
I then run:
python setup.py build_ext --inplace
python solveODE.py
in the terminal.
The time for the Python version is approximately 0.055 seconds, whilst the Cython version takes roughly 0.04 seconds.
Does somebody have a recommendation for improving my attempt to solve the differential equation with Cython, preferably without tinkering with the odeint routine itself?
Edit:
I incorporated DavidW's suggestions in the two files ode.pyx and solveODE.py. It took only roughly 0.015 seconds to run the code with these suggestions.

The easiest change to make (which will probably gain you a lot) is to use the C math library's sin and cos for operations on single numbers instead of the NumPy versions. The call into NumPy and the time it spends working out that the argument isn't an array is fairly costly.
from libc.math cimport sin, cos
# later
-omega/Q + sin(theta) + d*cos(Omega*t)
I'd be tempted to assign a type to the input t (none of the other inputs are easily typed without changing the interface):
def f(y, double t, params):
I think I'd also just return a list like you do in your Python version. I don't think you gain a lot by using a C array.
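Putting both suggestions together, a minimal sketch of the combined version might look like this (an illustration of the advice above, not DavidW's exact code):
from libc.math cimport sin, cos

def f(y, double t, params):
    # statically typed locals; y and params stay untyped to keep the interface
    cdef double theta = y[0], omega = y[1]
    cdef double Q = params[0], d = params[1], Omega = params[2]
    # return a plain Python list, as in the pure-Python version
    return [omega, -omega/Q + sin(theta) + d*cos(Omega*t)]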

tl;dr: use numba.jit for a 3x speedup...
I don't have much experience with Cython, but my machine seems to get similar computation times for your strictly Python version, so we should be able to compare roughly apples to apples. I used numba to compile the function f (which I rewrote slightly to make it play nicer with the compiler).
import numba
import numpy as np

def f(y, t, params):
    return np.array([y[1], -y[1]/params[0] + np.sin(y[0]) + params[1]*np.cos(params[2]*t)])

numba_f = numba.jit(f)
dropping in numba_f in place of your ode.f gives me this output...
The Python Code took: 0.0468 seconds
The Numba Code took: 0.0155 seconds
I then wondered if I could duplicate odeint and also compile with numba to speed things up even further... (I could not)
Here is my Runge-Kutta numerical differential equation integrator:
# function f is provided inline (not as an arg)
def runge_kutta(y0, steps, dt, args=()):
    # improvement on Euler's method. *note: time given as number of steps and dt
    Y = np.empty([steps, y0.shape[0]])
    Y[0] = y0
    t = 0
    for n in range(steps - 1):
        # calculate coefficients
        k1 = f(Y[n], t, args)                           # (Euler's method coefficient) beginning of interval
        k2 = f(Y[n] + (dt * k1 / 2), t + (dt/2), args)  # interval midpoint A
        k3 = f(Y[n] + (dt * k2 / 2), t + (dt/2), args)  # interval midpoint B
        k4 = f(Y[n] + dt * k3, t + dt, args)            # interval end point
        Y[n + 1] = Y[n] + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)  # calculate Y(n+1)
        t += dt                                         # calculate t(n+1)
    return Y
Naive looping functions are typically the fastest once compiled, although this could probably be restructured for a little better speed. I should note that this gives a different answer than odeint, deviating by as much as 0.001 after around 2000 steps, and being completely different after 3000. For the numba version of the integrator, I simply replaced f with numba_f and added the compilation with @numba.jit as a decorator. In this case, as expected, the pure Python version is very slow, but the numba version is not any faster than numba with odeint (again, ymmv).
Using the custom integrator:
The Python Code took: 0.2340 seconds
The Numba Code took: 0.0156 seconds
Here's an example of compiling ahead of time. I don't have the necessary toolchain on this computer to compile, and I don't have admin rights to install it, so this gives me an error that I don't have the required compiler, but it should work otherwise.
import numpy as np
from numba.pycc import CC

cc = CC('diffeq')

@cc.export('func', 'f8[:](f8[:], f8, f8[:])')
def func(y, t, params):
    return np.array([y[1], -y[1]/params[0] + np.sin(y[0]) + params[1]*np.cos(params[2]*t)])

cc.compile()
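Once the compilation succeeds, the exported function should be importable from the generated diffeq module and usable as a drop-in derivative for odeint. A hedged sketch (I could not run this myself, as noted above; y0, t, and params are the values from solveODE.py):
import diffeq
usol = odeint(diffeq.func, y0, t, args=(params,))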

If others answer this question using other modules, I might as well chime in:
I am the author of JiTCODE, which accepts an ODE written in SymPy symbols and then converts this ODE to C code for a Python module, compiles this C code, loads the result and uses this as a derivative for SciPy’s ODE. Your example translated to JiTCODE looks like this:
from jitcode import jitcode, provide_basic_symbols
import numpy as np
from sympy import sin, cos
import time

Q = 2.0
d = 1.5
Ω = 0.65

t, y = provide_basic_symbols()

f = [
    y(1),
    -y(1)/Q + sin(y(0)) + d*cos(Ω*t)
]

initial_state = np.array([0.0, 0.0])

ODE = jitcode(f)
ODE.set_integrator("lsoda")
ODE.set_initial_value(initial_state, 0.0)

start_time = time.time()
data = np.vstack([ODE.integrate(T) for T in np.arange(0.05, 200., 0.05)])
end_time = time.time()
print("JiTCODE took: %.6s seconds" % (end_time - start_time))
This takes 0.11 seconds, which is horribly slow compared to the solutions based on odeint, but that is not due to the actual integration, rather to the way the results are handled: while odeint efficiently creates an array internally, this is done via Python here. Depending on what you do, this may be a crucial disadvantage, but it quickly becomes irrelevant for a coarser sampling or larger differential equations.
So, let’s remove the data collection and just look at the integration, by replacing the last lines with the following:
ODE = jitcode(f)
ODE.set_integrator("lsoda", max_step=0.05, nsteps=1e10)
ODE.set_initial_value(initial_state, 0.0)

start_time = time.time()
ODE.integrate(200.0)
end_time = time.time()
print("JiTCODE took: %.6s seconds" % (end_time - start_time))
Note that I set max_step=0.05 to force the integrator to make at least as many steps as in your example and ensure that the only difference is that the results of the integration are not stored to some array. This runs in 0.010 seconds.

NumbaLSODA takes 0.00088 seconds (17x faster than Cython).
from NumbaLSODA import lsoda_sig, lsoda
import numba as nb
import numpy as np
import time

@nb.cfunc(lsoda_sig)
def f(t, y_, dy, p_):
    p = nb.carray(p_, (3,))
    y = nb.carray(y_, (2,))
    theta, omega = y
    Q, d, Omega = p
    dy[0] = omega
    dy[1] = -omega/Q + np.sin(theta) + d*np.cos(Omega*t)

funcptr = f.address  # address of the ODE function

y0 = np.array([0.0, 0.0])
data = np.array([2.0, 1.5, 0.65])
t = np.arange(0., 200., 0.05)

start_time = time.time()
usol, success = lsoda(funcptr, y0, t, data=data)
print("NumbaLSODA took: %.8s seconds ---" % (time.time() - start_time))
Result:
NumbaLSODA took: 0.000880 seconds ---

Related

Python - Using a Kronecker Delta with ODEINT

I'm trying to plot the output from an ODE using a Kronecker delta function which should only become 'active' at a specific time t = t1.
This should give a sawtooth-like response, where the initial value decays exponentially until t = t1, where it rises again instantly before decaying once more.
However, when I plot this, it looks like the solver is seeing the Kronecker delta function as zero for all times t. Is there any way to do this in Python?
from scipy import KroneckerDelta
import scipy.integrate as sp
import matplotlib.pyplot as plt
import numpy as np

def dy_dt(y, t):
    dy_dt = 500*KroneckerDelta(t, t1) - 2*y
    return dy_dt

t1 = 4
y0 = 500
t = np.arange(0, 10, 0.1)

y = sp.odeint(dy_dt, y0, t)
plt.plot(t, y)
In the case of a simple Kronecker delta using time, you can run the ODE in pieces like so:
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import numpy as np

def dy_dt(y, t):
    return -2*y

t_delta = 4
tend = 10
y0 = [500]

t1 = np.linspace(0, t_delta, 50)
y1 = odeint(dy_dt, y0, t1)

y0 = y1[-1] + 500  # execute Kronecker delta
t2 = np.linspace(t_delta, tend, 50)
y2 = odeint(dy_dt, y0, t2)

t = np.append(t1, t2)
y = np.append(y1, y2)
plt.plot(t, y)
Another option for complicated situations is to use the events functionality of solve_ivp, as in the sketch below.
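A minimal sketch of that events-based approach (illustrative, not from the original answer; the event function and the restart logic are assumptions):
from scipy.integrate import solve_ivp

def rhs(t, y):
    return -2*y

def hit_t1(t, y):      # event fires when t crosses t_delta = 4
    return t - 4
hit_t1.terminal = True  # stop the integration at the event

sol1 = solve_ivp(rhs, (0, 10), [500], events=hit_t1, max_step=0.1)
y_jump = sol1.y[:, -1] + 500                          # apply the jump
sol2 = solve_ivp(rhs, (sol1.t[-1], 10), y_jump, max_step=0.1)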
I think the problem could be internal rounding errors, because 0.1 cannot be represented exactly as a Python float. I would try:
import math

def dy_dt(y, t):
    if math.isclose(t, t1):
        return 500 - 2*y
    else:
        return -2*y
Also, the documentation of odeint suggests using the args parameter instead of global variables to give your derivative function access to additional arguments, and replacing np.arange by np.linspace:
import scipy.integrate as sp
import matplotlib.pyplot as plt
import numpy as np
import math

def dy_dt(y, t, t1):
    if math.isclose(t, t1):
        return 500 - 2*y
    else:
        return -2*y

t1 = 4
y0 = 500
t = np.linspace(0, 10, num=101)

y = sp.odeint(dy_dt, y0, t, args=(t1,))
plt.plot(t, y)
I did not test the code, so tell me if there is anything wrong with it.
EDIT:
When testing my code I took a look at the t values for which dy_dt is evaluated. I noticed that odeint does not only use the t values that were specified, but alters them slightly:
...
3.6636447422787928
3.743098503914526
3.822552265550259
3.902006027185992
3.991829287543431
4.08165254790087
4.171475808258308
...
Now using my method, we get
math.isclose(3.991829287543431, 4) # False
because the default tolerance is set to a relative error of at most 1e-9, so odeint "misses" the bump of the derivative at t = 4. Luckily, we can fix that by specifying a larger tolerance:
def dy_dt(y, t, t1):
    if math.isclose(t, t1, abs_tol=0.01):
        return 500 - 2*y
    else:
        return -2*y
Now dy_dt is very high for all values between 3.99 and 4.01. It is possible to make this range smaller if the num argument of linspace is increased.
TL;DR
Your problem is not a Python problem but a problem of numerically solving a differential equation: you need to alter your derivative over an interval of sufficient length, otherwise the solver will likely miss the interesting spot. A Kronecker delta does not work with numeric approaches to solving ODEs.
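One way to do that, sketched here as an illustration (an assumption-laden addition, not part of the answer above): replace the delta with a narrow smooth pulse whose time-integral equals the desired jump, and cap the solver's step size so it cannot step over the pulse:
import numpy as np
from scipy.integrate import odeint

width = 0.01  # pulse width; the jump is spread over roughly this interval

def dy_dt(y, t, t1):
    # Gaussian pulse with area 500, centred at t1
    pulse = 500/(width*np.sqrt(2*np.pi)) * np.exp(-(t - t1)**2/(2*width**2))
    return pulse - 2*y

t = np.linspace(0, 10, 2001)
y = odeint(dy_dt, 500, t, args=(4,), hmax=width)  # hmax keeps steps smaller than the pulse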

Trying to optimize parameters in the Lugre Dynamic Friction model

I have data collected in a CSV of every output of the friction model. The model imagines the contact between two surfaces as one-dimensional bristles that react to being bent like springs. The force of friction is modeled as:
FL(V,Z) = sig0*Z + sig1*DZ/Dt + sig2*V
where V is the velocity of the surface, Z is the deflection of the bristles, and DZ/Dt is the rate of deflection, which is equal to:
DZ/Dt = V + abs(V)*Z/(Fc + (Fs - Fc)*exp(-(V^2/Vs^2)))
      = V + abs(V)*Z/G(V)
      = V + H(V)*Z
where Fc is the friction of the object in motion (a constant), Fs is the force required to get the object into motion (a constant > Fc), and Vs is the speed required to transition between the two domains (a constant I've experimentally derived). The velocity and position of the block are provided in the CSV, as well as the force of friction, all with respect to time. I have also created an easily integrable (trigonometric) approximation of the velocity as a function of time.
On to the problem: the code throws a fit with the way I'm trying to pass lists into the functions (I think).
The function that passes the parameters seems to work (it was taken from a different file that simply plots the data); however, it fails when I try to numerically integrate DZ/Dt and fit the sig parameters to the imported friction data.
What I imported
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from scipy import optimize
import pylab as pp
from math import sin, pi, exp, fabs, pow
Parameters
Fc=2.7 #N
Fs=8.2 #N
Vs=.34 #mm/s
Initial conditions
ITime=Time[0]
Iz=[0,0,0]
Building the friction model
def velocity(time):
    V = -13/2*1/60*pi*sin(1/60*pi*time + pi)
    return V

def g(v, vs, fc, fs, sig0):
    G = (1/sig0)*(fc + (fs - fc)*exp(-pow(v, 2)/pow(vs, 2)))
    return G

def h(v, vg):
    H = fabs(v)/vg
    return H

def findz(z, time, sig):
    Vx = velocity(time)
    VG = g(Vx, Vs, Fc, Fs, sig)
    HVx = h(Vx, VG)
    dzdt = Vx + HVx*z
    return dzdt

def friction(time, sig, iz):
    dz = lambda z, time: findz(z, time, sig)
    z = odeint(dz, iz, time)
    return sig[0]*z + sig[1]*findz(z, time, sig[0]) + sig[2]*velocity(Time)
This should return the difference between the constructed function and the data, and yield a list containing the optimized parameters:
def residual(sig):
    return Ff - friction(Time, sig, Iz)

SigG = [4, 20, 1]
SigVal = optimize.leastsq(residual, SigG)
print "parameter values are ", SigVal
This returns
line 56, in velocity
V=-13/2*1/60*pi*sin(1/60*pi*time+pi)
TypeError: can't multiply sequence by non-int of type 'float'
Is this to do with the fact that I am passing lists?
As I mentioned in my comment, velocity() is the cause of the error, most probably because it expects a single time value, whereas you pass a whole list/array (with multiple values) to it when you call it in friction().
Using some chosen values, and after shortening your code and passing ITime instead of Time, the code runs correctly, but it is left to you to judge whether this is analytically what you wanted to achieve. Below is my code:
import numpy as np
from scipy import optimize
from scipy.integrate import odeint
from math import sin, pi, exp, fabs

# Parameters
Fc = 2.7  # N
Fs = 8.2  # N
Vs = .34  # mm/s

# define test values for Ff and Time
Ff = np.array([100, 50, 50])
Time = np.array([10, 20, 30])

# Initial conditions
ITime = Time[0]
Iz = np.array([0, 0, 0])

# Building the friction model
V = lambda t: (-13 / 2) * (1 / (60 * pi * sin(1 / 60 * pi * t + pi)))
G = lambda v, vs, fc, fs, sig0: (1 / sig0) * (fc + (fs - fc) * exp(-v**2 / vs**2))
H = lambda v, vg: fabs(v) / vg
dzdt = lambda z, t, sig: V(t) + H(V(t), G(V(t), Vs, Fc, Fs, sig)) * z

def friction(t, sig, iz):
    dz = lambda z, t: dzdt(z, t, sig)
    z = odeint(dz, iz, t)
    return sig[0]*z + sig[1]*dzdt(z, t, sig[0]) + sig[2]*V(t)

# Should return the difference between the constructed function and the data
# and yield a list containing the optimized parameters
def residual(sig):
    return Ff - friction(ITime, sig, Iz)[0]

SigG = np.array([4, 20, 1])
SigVal = optimize.leastsq(residual, SigG, full_output=False)

print("parameter values are ", SigVal)
Output:
parameter values are (array([ 4. , 3251.47271228, -2284.82881887]), 1)

Solve an implicit ODE (differential algebraic equation DAE)

I'm trying to solve a second-order ODE using odeint from scipy. The issue I'm having is that the function is implicitly coupled to the second-order term, as seen in the simplified snippet (please ignore the pretend physics of the example):
import numpy as np
from scipy.integrate import odeint

def integral(y, t, F_l, mass):
    dydt = np.zeros_like(y)
    x, v = y
    F_r = (((1 - a)/3)**2 + (2*(1 + a)/3)**2) * v  # 'a' is implicit here
    a = (F_l - F_r)/mass
    dydt = [v, a]
    return dydt

y0 = [0, 5]
time = np.linspace(0., 10., 21)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
In this case I realise it is possible to algebraically solve for the implicit variable; however, in my actual scenario there is a lot of logic between F_r and the evaluation of a, and algebraic manipulation fails.
I believe the DAE could be solved using MATLAB's ode15i function, but I'm trying to avoid that scenario if at all possible.
My question is: is there a way to solve implicit ODE functions (DAEs) in Python (SciPy preferably)? And is there a better way to pose the problem above to do so?
As a last resort, it may be acceptable to pass a from the previous time-step. How could I pass dydt[1] back into the function after each time-step?
Quite old, but worth updating so it may be useful for anyone who stumbles upon this question. There are quite a few packages currently available in Python that can solve implicit ODEs.
GEKKO (https://github.com/BYU-PRISM/GEKKO) is one of them. It specializes in dynamic optimization for mixed-integer, nonlinear optimization problems, but can also be used as a general-purpose DAE solver.
The above "pretend physics" problem can be solved in GEKKO as follows:
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

m = GEKKO()
m.time = np.linspace(0, 100, 101)

F_l = m.Param(value=1000)
mass = m.Param(value=1000)

m.options.IMODE = 4
m.options.NODES = 3

F_r = m.Var(value=0)
x = m.Var(value=0)
v = m.Var(value=0, lb=0)
a = m.Var(value=5, lb=0)

m.Equation(x.dt() == v)
m.Equation(v.dt() == a)
m.Equation(F_r == (((1 - a)/3)**2 + (2*(1 + a)/3)**2 * v))
m.Equation(a == (1000 - F_l)/mass)

m.solve(disp=False)
plt.plot(m.time, x.value)  # plot the solved trajectory
If algebraic manipulation fails, you can go for a numerical solution of your constraint, running for example fsolve at each timestep:
import sys
from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import fsolve

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 10.
mass = 1000.

def F_r(a, v):
    return (((1 - a) / 3) ** 2 + (2 * (1 + a) / 3) ** 2) * v

def constraint(a, v):
    return (F_lon - F_r(a, v)) / mass - a

def integral(y, _):
    v = y[1]
    a, _, ier, mesg = fsolve(constraint, 0, args=[v, ], full_output=True)
    if ier != 1:
        print "I couldn't solve the algebraic constraint, error:\n\n", mesg
        sys.stdout.flush()
    return [v, a]

dydt = odeint(integral, y0, time)
Clearly this will slow down your time integration. Always check that fsolve finds a good solution, and flush the output so that you can realize it as it happens and stop the simulation.
About how to "cache" the value of a variable at a previous timestep: you can exploit the fact that default arguments are evaluated only once, at the function definition,
from numpy import linspace
from scipy.integrate import odeint

# you can choose a better guess using fsolve instead of 0
def integral(y, _, F_l, M, cache=[0]):
    v, preva = y[1], cache[0]
    # use the value of 'a' from the previous timestep
    F_r = (((1 - preva) / 3) ** 2 + (2 * (1 + preva) / 3) ** 2) * v
    # calculate the new value
    a = (F_l - F_r) / M
    cache[0] = a
    return [v, a]

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
Notice that in order for the trick to work the cache parameter must be mutable, and that's why I use a list. See this link if you are not familiar with how default arguments work.
Notice that the two codes DO NOT produce the same result, and you should be very careful using the value at the previous timestep, both for numerical stability and precision. The second is clearly much faster though.

scipy odeint with complex initial values

I need to solve a complex-domain-defined ODE system with complex initial values.
scipy.integrate.odeint does not work on complex systems.
I read about splitting my system into real and imaginary parts and solving them separately, but my ODE system's RHS involves products between the dependent variables themselves and their complex conjugates. How do I do that? Here is my code; I tried breaking the RHS into Re and Im parts, but I don't think the solution is the same as if I hadn't broken it, because of the internal products between complex numbers.
In my script, u1 is a (very) long complex function, say u1(Lm) = f_real(Lm) + 1j*f_imag(Lm).
from numpy import *
from scipy import integrate

def cj(z): return z.conjugate()

def dydt(y, t=0):
    # Notation
    # Dependent Variables
    theta1 = y[0]
    theta3 = y[1]
    Lm = y[2]

    u11 = u1(Lm)
    u13 = u1(3*Lm)
    zeta1 = -2*E*u11*theta1
    zeta3 = -2*E*3*u13*theta3

    # Coefficients
    A0 = theta1*cj(zeta1) + 3*theta3*cj(zeta3)
    A2 = -zeta1*theta1 + 3*cj(zeta1)*theta3 + zeta3*cj(theta1)
    A4 = -theta1*zeta3 - 3*zeta1*theta3
    A6 = -3*theta3*zeta3
    A = - (A2/2 + A4/4 + A6/6)

    # RHS vector components
    dy1dt = Lm**2 * (theta1*(A - cj(A)) - cj(theta1)*A2/2
                     - 3/2*theta3*cj(A2)
                     - 3/4*cj(theta3)*A4
                     - zeta1)
    dy2dt = Lm**2 * (3*theta3*(A - cj(A)) - theta1*A2/2
                     - cj(theta1)*A4/4
                     - 1/2*cj(theta3)*A6
                     - 3*zeta3)
    dy3dt = Lm**3 * (A0 + cj(A0))
    return array([dy1dt, dy2dt, dy3dt])

t = linspace(0, 10000, 100)   # Integration time-step
ry0 = array([0.001, 0, 0.1])  # Re(initial condition)
iy0 = array([0.0, 0.0, 0.0])  # Im(initial condition)
y0 = ry0 + 1j*iy0             # Complex initial condition

def rdydt(y, t=0):  # Re(RHS)
    return dydt(y, t).real

def idydt(y, t=0):  # Im(RHS)
    return dydt(y, t).imag

ry, rinfodict = integrate.odeint(rdydt, y0, t, full_output=True)
iy, iinfodict = integrate.odeint(idydt, y0, t, full_output=True)
The error I get is this:
TypeError: array cannot be safely cast to required type
odepack.error: Result from function call is not a proper array of floats.
As you've discovered, odeint does not handle complex-valued differential equations, but there is scipy.integrate.complex_ode. complex_ode is a convenience function that takes care of converting the system of n complex equations into a system of 2*n real equations. (Note the discrepancy in the signatures of the functions used to define the equations for odeint and ode: odeint expects f(y, t, *args), while ode (and complex_ode) expects f(t, y, *args).)
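For illustration, here is a minimal complex_ode sketch with a small made-up system (it demonstrates the interface only, not the system from the question; note the f(t, y) argument order):
from scipy.integrate import complex_ode

def rhs(t, z):  # note: t comes first for ode/complex_ode
    return [-z[0]*(3 - z[1]), 1 - z[1]]

solver = complex_ode(rhs)
solver.set_initial_value([1 + 2j, 3 + 4j], 0.0)
while solver.successful() and solver.t < 4.0:
    solver.integrate(solver.t + 0.04)
print(solver.t, solver.y)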
A similar convenience function can be created for odeint. In the following code, odeintz is a function that handles the conversion of a complex system into a real one and solves it with odeint. The code includes an example of solving a complex system. It also shows how that system can be converted "by hand" to a real system and solved with odeint. But for a large system, that is a tedious and error-prone process; using a complex solver is certainly a saner approach.
import numpy as np
from scipy.integrate import odeint


def odeintz(func, z0, t, **kwargs):
    """An odeint-like function for complex valued differential equations."""

    # Disallow Jacobian-related arguments.
    _unsupported_odeint_args = ['Dfun', 'col_deriv', 'ml', 'mu']
    bad_args = [arg for arg in kwargs if arg in _unsupported_odeint_args]
    if len(bad_args) > 0:
        raise ValueError("The odeint argument %r is not supported by "
                         "odeintz." % (bad_args[0],))

    # Make sure z0 is a numpy array of type np.complex128.
    z0 = np.array(z0, dtype=np.complex128, ndmin=1)

    def realfunc(x, t, *args):
        z = x.view(np.complex128)
        dzdt = func(z, t, *args)
        # func might return a python list, so convert its return
        # value to an array with type np.complex128, and then return
        # a np.float64 view of that array.
        return np.asarray(dzdt, dtype=np.complex128).view(np.float64)

    result = odeint(realfunc, z0.view(np.float64), t, **kwargs)

    if kwargs.get('full_output', False):
        z = result[0].view(np.complex128)
        infodict = result[1]
        return z, infodict
    else:
        z = result.view(np.complex128)
        return z


if __name__ == "__main__":
    # Generate a solution to:
    #     dz1/dt = -z1 * (K - z2)
    #     dz2/dt = L - z2
    # K and L are fixed parameters.  z1(t) and z2(t) are complex-
    # valued functions of t.

    # Define the right-hand-side of the differential equation.
    def zfunc(z, t, K, L):
        z1, z2 = z
        return [-z1 * (K - z2), L - z2]

    # Set up the inputs and call odeintz to solve the system.
    z0 = np.array([1+2j, 3+4j])
    t = np.linspace(0, 4, 101)
    K = 3
    L = 1
    z, infodict = odeintz(zfunc, z0, t, args=(K, L), full_output=True)

    # For comparison, here is how the complex system can be converted
    # to a real system.  The real and imaginary parts are used to
    # write a system of four coupled equations.  The formulas for
    # the complex right-hand-sides are
    #   -z1 * (K - z2) = -(x1 + i*y1) * (K - (x2 + i*y2))
    #                  = (-x1 - i*y1) * (K - x2 + i(-y2))
    #                  = -x1 * (K - x2) - y1*y2 + i*(-y1*(K - x2) + x1*y2)
    # and
    #   L - z2 = L - (x2 + i*y2)
    #          = (L - x2) + i*(-y2)
    def func(r, t, K, L):
        x1, y1, x2, y2 = r
        dx1dt = -x1 * (K - x2) - y1*y2
        dy1dt = -y1 * (K - x2) + x1*y2
        dx2dt = L - x2
        dy2dt = -y2
        return [dx1dt, dy1dt, dx2dt, dy2dt]

    # Use regular odeint to solve the real system.
    r, infodict = odeint(func, z0.view(np.float64), t, args=(K, L), full_output=True)

    # Compare the two solutions.  They should be the same.  (As usual for
    # floating point calculations, there could be a small difference.)
    delta_max = np.abs(z.view(np.float64) - r).max()
    print "Maximum difference between the complex and real versions is", delta_max

    # Plot the real and imaginary parts of the complex solution.
    import matplotlib.pyplot as plt

    plt.clf()
    plt.plot(t, z[:, 0].real, label='z1.real')
    plt.plot(t, z[:, 0].imag, label='z1.imag')
    plt.plot(t, z[:, 1].real, label='z2.real')
    plt.plot(t, z[:, 1].imag, label='z2.imag')
    plt.xlabel('t')
    plt.grid(True)
    plt.legend(loc='best')
    plt.show()
Update:
This code has been significantly expanded into a function called odeintw that handles complex variables and matrix equations. The new function can be found on GitHub: https://github.com/WarrenWeckesser/odeintw
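Usage of odeintw is assumed to mirror odeint (a hedged sketch; check the repository for the actual signature):
from odeintw import odeintw
z = odeintw(zfunc, z0, t, args=(K, L))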
I think I found a solution by myself. I'm posting it in case anybody finds it useful. It appears that odeint cannot deal with complex numbers; however, scipy.integrate.ode does, by making use of the 'zvode' integration method.
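A minimal sketch of that approach (hedged: the right-hand side below is a placeholder with the (t, y) argument order that ode expects, not the full system above):
from scipy.integrate import ode

def rhs(t, y):
    return [1j*y[0]]  # placeholder complex right-hand side

solver = ode(rhs)
solver.set_integrator('zvode', method='bdf')
solver.set_initial_value([0.001 + 0j], 0.0)
while solver.successful() and solver.t < 100.0:
    solver.integrate(solver.t + 1.0)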

How do I speed up a Python nested loop?

I'm trying to calculate the gravity effect of a buried object by calculating the effect on each side of the body, then summing up the contributions to get one measurement at one station, and repeating for a number of stations. The code is as follows (the body is a square and the code calculates clockwise around it; that's why it goes from -x back to -x coordinates):
import time
import scipy as si  # 'si' is the scipy alias used below

x = si.arange(-30.0, 30.0, 0.5)

#-9.79742526 9.78716693 22.32153704 27.07382349 2138.27146193
xcorn = (-9.79742526, 9.78716693, 9.78716693, -9.79742526, -9.79742526)
zcorn = (22.32153704, 22.32153704, 27.07382349, 27.07382349, 22.32153704)

gamma = (6.672*(10**-11))  #'N m^2 / Kg^2'
rho = 2138.27146193        #'Kg / m^3'

grav = []
iter_time = []

def procedure():
    for i in si.arange(len(x)):  # cycles position
        t0 = time.clock()
        sum_lines = 0.0
        for n in si.arange(len(xcorn)-1):  # cycles corners
            x1 = xcorn[n] - x[i]
            x2 = xcorn[n+1] - x[i]
            z1 = zcorn[n] - 0.0  # just depth to corner since all observations are on the surface
            z2 = zcorn[n+1] - 0.0
            r1 = ((z1**2) + (x1**2))**0.5
            r2 = ((z2**2) + (x2**2))**0.5
            O1 = si.arctan2(z1, x1)
            O2 = si.arctan2(z2, x2)
            denom = z2 - z1
            if denom == 0.0:
                denom = 1.0e-6
            alpha = (x2 - x1)/denom
            beta = ((x1*z2) - (x2*z1))/denom
            factor = (beta/(1.0 + (alpha**2)))
            term1 = si.log(r2/r1)  # natural log
            term2 = alpha*(O2 - O1)
            sum_lines = sum_lines + (factor*(term1 - term2))
        sum_lines = sum_lines*2*gamma*rho
        grav.append(sum_lines)
        t1 = time.clock()
        dt = t1 - t0
        iter_time.append(dt)
Any help in speeding this loop up would be appreciated. Thanks.
Your xcorn and zcorn values repeat, so consider caching the result of some of the computations.
Take a look at the timeit and profile modules to get more information about what is taking the most computational time.
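For example, a quick timeit check might look like this (a sketch, assuming the code above lives in the current module):
import timeit
print(timeit.timeit('procedure()', setup='from __main__ import procedure', number=10))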
It is very inefficient to access individual elements of a numpy array in a Python loop. For example, this Python loop:
for i in xrange(0, len(a), 2):
    a[i] = i
would be much slower than:
a[::2] = np.arange(0, len(a), 2)
You could use a better algorithm (lower time complexity) or use vector operations on numpy arrays as in the example above. But the quickest way might be just to compile the code using Cython:
#cython: boundscheck=False, wraparound=False
# procedure_module.pyx
import numpy as np
cimport numpy as np

ctypedef np.float64_t dtype_t

def procedure(np.ndarray[dtype_t,ndim=1] x,
              np.ndarray[dtype_t,ndim=1] xcorn):
    cdef:
        Py_ssize_t i, j
        dtype_t x1, x2, z1, z2, r1, r2, O1, O2
        np.ndarray[dtype_t,ndim=1] grav = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(xcorn.shape[0]-1):
            x1 = xcorn[j]-x[i]
            x2 = xcorn[j+1]-x[i]
            ...
        grav[i] = ...
    return grav
It is not necessary to define all types but if you need a significant speed up compared to Python you should define at least types of arrays and loop indexes.
You could use cProfile (Cython supports it) instead of manual calls to time.clock().
To call procedure():
#!/usr/bin/env python
import pyximport; pyximport.install()  # pip install cython
import numpy as np
from procedure_module import procedure

x = np.arange(-30.0, 30.0, 0.5)
xcorn = np.array((-9.79742526, 9.78716693, 9.78716693, -9.79742526, -9.79742526))
grav = procedure(x, xcorn)
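And a sketch of profiling the compiled function with cProfile instead of time.clock(), reusing x and xcorn from the script above:
import cProfile
cProfile.run('procedure(x, xcorn)', sort='cumulative')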
