Is it possible for scipy.integrate.odeint to output internal calculations - python

I need to get internal calculations out of a call to odeint. I normally recalculate the values after the integration is finished, but I would prefer to do all of the calculations inside the function passed to odeint.
My problems are not computationally heavy, so taking a little extra performance hit in doing calculations inside of the ode solver is acceptable.
from scipy.integrate import odeint
import numpy as np

def eom(y, t):
    internal_calc = y/(t+1)
    xdot = y
    return xdot, internal_calc

if __name__ == '__main__':
    t = np.linspace(0, 5, 100)
    y0 = 1.0  # the initial condition
    output, internal_calc = odeint(eom, y0, t)
This code doesn't run, but hopefully shows what I am after. I want to get the 'internal_calc' value out of the eom function for each pass through the integrator.
I've looked around for options, but one of the best Python programmers I know told me to write my own integrator so I can do what I want.
Before I do that, I thought I would ask whether anyone else has a method for getting values out of the odeint solver.

It is possible; you just can't use the return values of your eom function, so you need some other way of smuggling data out of eom. There are many, many different ways to do this. The simplest is probably to just use a global variable:
import numpy as np
import scipy.integrate as spi

count = 0

def pend(t, y):
    global count
    theta, omega = y
    dydt = [omega, -0.25*omega - 5*np.sin(theta)]
    count += 1  # smuggle information out through the global
    return dydt

sol = spi.solve_ivp(pend, [0, 10], [np.pi - 0.1, 0.0])
print(count)
Output:
182
Also, note that I used solve_ivp instead of odeint in the code above. The odeint docs say that new code should use solve_ivp rather than the older odeint.
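For reference, the two call signatures are transposed, which is easy to trip over when porting: odeint expects fun(y, t) plus an array of output times, while solve_ivp expects fun(t, y) plus a (t0, tf) span. A minimal side-by-side sketch of the same decay ODE:

import numpy as np
import scipy.integrate as spi

sol_old = spi.odeint(lambda y, t: -y, 1.0, np.linspace(0, 5, 50))  # fun(y, t)
sol_new = spi.solve_ivp(lambda t, y: -y, (0, 5), [1.0])            # fun(t, y)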
If it were my own code, I'd probably accomplish the task by passing an accumulator object into a partial version of my function:
class Acc:
    def __init__(self):
        self.x = 0

    def __str__(self):
        return str(self.x)

def pend_partial(acc):
    def pend(t, y):
        theta, omega = y
        dydt = [omega, -0.25*omega - 5*np.sin(theta)]
        acc.x += 1  # record into the accumulator closed over by pend
        return dydt
    return pend

count = Acc()
sol = spi.solve_ivp(pend_partial(count), [0, 10], [np.pi - 0.1, 0.0])
print(count)
Output:
182
However, if you're just writing a short script or something, you should probably just use the simpler global approach. This is a pretty good use case for it.
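Applied to the original eom, the same smuggling idea can capture internal_calc on every call. One caveat worth stating: the solver evaluates eom at its own internal (sometimes rejected) steps, not just at the requested output times, so the log will not line up one-to-one with t. A minimal sketch using a module-level list as the accumulator:

import numpy as np
from scipy.integrate import odeint

internal_log = []  # collects (t, internal_calc) for every evaluation of eom

def eom(y, t):
    internal_calc = y/(t+1)
    internal_log.append((t, internal_calc))
    return y  # xdot = y

t = np.linspace(0, 5, 100)
output = odeint(eom, 1.0, t)
print(len(internal_log))  # usually more entries than len(t)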

Related

Solving nonlinear least-squares with function returning both value and jacobian

I am trying to speed up the solving of a nonlinear least-squares problem in Python. I can compute both the function value and the Jacobian in one forward pass, (val, jac) = fun(x). A solver like scipy.optimize.least_squares only accepts two separate functions, fun and jac, which for my problem means that the function value has to be computed twice per iteration (once in fun, and once in jac).
Is there a trick for avoiding solving the primal problem twice?
The more general function scipy.optimize.minimize supports the above style with the jac=True keyword, but it's slow for my problem.
I think the best approach would be to use the MemoizeJac decorator. This is exactly what is done under the hood of scipy.optimize.minimize for jac=True:
from scipy.optimize import least_squares
from scipy.optimize._optimize import MemoizeJac

def fun_and_jac(x):
    return x**2 - 5 * x + 3, 2 * x - 5

fun = MemoizeJac(fun_and_jac)
jac = fun.derivative

res = least_squares(fun, x0=0, jac=jac)
print(res)
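For the curious, MemoizeJac is essentially a small caching wrapper. The sketch below is a simplified illustration of the idea, not the actual scipy source: the combined function runs once per point, and the cached Jacobian is served when the solver asks for the derivative at the same x.

import numpy as np

class MemoizeJacSketch:
    """Evaluate fun_and_jac once per point; cache the Jacobian for reuse."""
    def __init__(self, fun):
        self.fun = fun
        self.x = None

    def _compute_if_needed(self, x, *args):
        if self.x is None or not np.array_equal(np.asarray(x), self.x):
            self.x = np.asarray(x).copy()
            self.value, self.jac_value = self.fun(x, *args)

    def __call__(self, x, *args):
        self._compute_if_needed(x, *args)
        return self.value

    def derivative(self, x, *args):
        self._compute_if_needed(x, *args)
        return self.jac_value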
You can do a bit of a hack:
val_cache = {}
jac_cache = {}

# note: this assumes the solver passes hashable arguments; for numpy
# arrays you would need to build a hashable key, e.g. x.tobytes()
def val_fun(*args):
    try:
        return val_cache.pop(args)
    except KeyError:
        val, jac = fun(*args)
        jac_cache[args] = jac
        return val

def jac_fun(*args):
    try:
        return jac_cache.pop(args)
    except KeyError:
        val, jac = fun(*args)
        val_cache[args] = val
        return jac
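A quick sanity check of the hack with plain float arguments (as noted in the comment above, the raw numpy arrays that least_squares passes would first need converting to a hashable key):

def fun(x):
    return x**2 - 5*x + 3, 2*x - 5

print(val_fun(3.0))  # computes (val, jac), caches jac, returns val: -3.0
print(jac_fun(3.0))  # popped straight from jac_cache, no recomputation: 1.0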
From the documentation of scipy.optimize.minimize:
If jac is a Boolean and is True, fun is assumed to return a tuple (f, g) containing the objective function and the gradient.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html?highlight=minimize
So you can simply do it like this:
from scipy.optimize import minimize

def function(x):
    '''Function that returns both fun and jac'''
    return x**2 - 5 * x + 3, 2 * x - 5

print(minimize(function, 0, jac=True))
Edit: rereading your question, it seems this option also works for least_squares, though it is undocumented there.
This works as well:
from scipy.optimize import least_squares

def function(x):
    '''Function that returns both fun and jac'''
    return x**2 - 5 * x + 3, 2 * x - 5

print(least_squares(function, 0, jac=True))

How to properly implement scipy.integrate.Radau?

(I'd appreciate even a link to an example; so far I haven't found any.)
I am trying to use Radau from scipy.integrate to solve a second order differential equation. For now I am trying just a simple example, so that I can understand how it works (unsuccessfully so far).
Let's say that I have the following equation:
d^2y/dt^2 = C,
which means that y = C*t^2/2 + B*t + A.
Let's say, for example, y(1) = 1 and C = 2. Let's say that I want to find the value of y at t = 10.
This is my code:
from scipy.integrate import Radau
import numpy as np

C = 2.0
y = np.zeros(2)

def fun(t, y):
    #y[0] = C*t
    y[1] = C
    return y

t0 = 1.0
y0 = np.array([1., 1.])
t_bound = 10.0

eq = Radau(fun, t0, y0, t_bound)
print(eq.n)
while True:
    print(eq.t)
    print(eq.y)
    print(eq.status)
    if eq.status == 'finished':
        break
    eq.step()
The output is wrong. (If I uncomment the one commented line in the fun definition, it also gives a wrong answer. But I shouldn't even have to tell the solver that, right? I usually don't know this value.)
I think my biggest problem is that I am not really sure what should be passed as fun. The documentation says it should be the right-hand side of the system, so I thought that the first derivative should be in y[0], the second derivative in y[1], etc.
What am I doing wrong? How should this be implemented?
At the moment you are solving
y0' = y0, y0(1) = 1
y1' = 2, y1(1) = 1
which has the solutions y0(t) = exp(t-1) and y1(t) = 2*t - 1, which is certainly not what you want. You want the first order system
y0' = y1
y1' = C
so you need
def fun(t, y): return [y[1], C]
Then the solution is y1(t) = C*t + B = 2*t - 1 and y0(t) = 0.5*C*t^2 + B*t + A = t^2 - t + 1, and the integration ends correctly with eq.y = [91. 19.].
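For completeness, here is a corrected version of the full driver script from the question (the same stepping loop, with the fixed fun; the loop condition also guards against a 'failed' status):

from scipy.integrate import Radau
import numpy as np

C = 2.0

def fun(t, y):
    # first order system: y0' = y1, y1' = C
    return [y[1], C]

t0 = 1.0
y0 = np.array([1., 1.])  # y(1) = 1 and y'(1) = 1
t_bound = 10.0

eq = Radau(fun, t0, y0, t_bound)
while eq.status == 'running':
    eq.step()
print(eq.t, eq.y)  # ends at t = 10.0 with eq.y = [91. 19.]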

Python - Using a Kronecker Delta with ODEINT

I'm trying to plot the output from an ODE using a Kronecker delta function which should only become 'active' at a specific time t = t1.
This should give a sawtooth-like response where the initial value decays exponentially until t = t1, where it rises again instantly before decaying once more.
However, when I plot this it looks like the solver sees the Kronecker delta function as zero for all time t. Is there any way to do this in Python?
from scipy import KroneckerDelta  # note: scipy has no KroneckerDelta; it lives in sympy
import scipy.integrate as sp
import matplotlib.pyplot as plt
import numpy as np

def dy_dt(y, t):
    dy_dt = 500*KroneckerDelta(t, t1) - 2*y
    return dy_dt

t1 = 4
y0 = 500
t = np.arange(0, 10, 0.1)

y = sp.odeint(dy_dt, y0, t)
plt.plot(t, y)
In the case of a simple Kronecker delta in time, you can run the ODE in pieces, like so:
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import numpy as np

def dy_dt(y, t):
    return -2*y

t_delta = 4
tend = 10
y0 = [500]

# integrate up to the jump...
t1 = np.linspace(0, t_delta, 50)
y1 = odeint(dy_dt, y0, t1)

# ...execute the Kronecker delta...
y0 = y1[-1] + 500

# ...then integrate the rest of the way
t2 = np.linspace(t_delta, tend, 50)
y2 = odeint(dy_dt, y0, t2)

t = np.append(t1, t2)
y = np.append(y1, y2)
plt.plot(t, y)
Another option for complicated situations is to use the events functionality of solve_ivp.
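A minimal sketch of that events-based approach, assuming the same decay-plus-jump model as above (note solve_ivp uses the fun(t, y) signature): a terminal event stops the integration at t1, the jump is applied to the final state, and a second call continues from there.

import numpy as np
from scipy.integrate import solve_ivp

def dy_dt(t, y):
    return -2*y

t1 = 4

def hit_t1(t, y):
    return t - t1          # event fires when this crosses zero
hit_t1.terminal = True     # stop the integration there

sol1 = solve_ivp(dy_dt, [0, 10], [500], events=hit_t1)
y_jump = sol1.y[:, -1] + 500                       # apply the delta at t = t1
sol2 = solve_ivp(dy_dt, [sol1.t[-1], 10], y_jump)  # continue to the end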
I think the problem could be internal rounding errors, because 0.1 cannot be represented exactly as a Python float. I would try
import math

def dy_dt(y, t):
    if math.isclose(t, t1):
        return 500 - 2*y
    else:
        return -2*y
Also, the documentation of odeint suggests using the args parameter instead of global variables to give your derivative function access to additional arguments, and replacing np.arange with np.linspace:
import scipy.integrate as sp
import matplotlib.pyplot as plt
import numpy as np
import math

def dy_dt(y, t, t1):
    if math.isclose(t, t1):
        return 500 - 2*y
    else:
        return -2*y

t1 = 4
y0 = 500
t = np.linspace(0, 10, num=101)

y = sp.odeint(dy_dt, y0, t, args=(t1,))
plt.plot(t, y)
I did not test the code so tell me if there is anything wrong with it.
EDIT:
When testing my code I took a look at the t values for which dy_dt is evaluated. I noticed that odeint does not only use the t values that were specified, but alters them slightly:
...
3.6636447422787928
3.743098503914526
3.822552265550259
3.902006027185992
3.991829287543431
4.08165254790087
4.171475808258308
...
Now using my method, we get
math.isclose(3.991829287543431, 4) # False
because the default tolerance is a relative error of at most 10^(-9), so the odeint function "misses" the bump in the derivative at 4. Luckily, we can fix that by giving isclose a larger absolute tolerance:
def dy_dt(y, t, t1):
    if math.isclose(t, t1, abs_tol=0.01):
        return 500 - 2*y
    else:
        return -2*y
Now dy_dt is very high for all values between 3.99 and 4.01. It is possible to make this range smaller if the num argument of linspace is increased.
TL;DR
Your problem is not a Python problem but a general problem of numerically solving differential equations: you need to alter your derivative over an interval of sufficient length, otherwise the solver will likely miss the interesting spot. A Kronecker delta does not work with numerical approaches to solving ODEs.

Solving a differential equation with a step function

I am trying to solve this differential equation as part of my assignment. I am not able to understand how I can put the condition for u in the code. In the code shown below, I arbitrarily provided u = 5.
2*dx(t)/dt = -x(t) + u(t)
5*dy(t)/dt = -y(t) + x(t)
u = 2*S(t-5)
x(0) = 0
y(0) = 0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def model(x, t, u):
    dxdt = (-x + u)/2
    return dxdt

def model2(y, x, t):
    dydt = -(y + x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0, 40)

x = odeint(model, x0, t, args=(u,))
y = odeint(model2, y0, t, args=(u,))
plt.plot(t, x, 'r-')
plt.plot(t, y, 'b*')
plt.show()
I do not know the SciPy library very well, but looking at the example in the documentation, I would try something like this:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT regarding the input u as a step function, where K
    is the height of the step and PT is the delay of the step.
    """
    x1, x2 = x               # split the state into two variables
    u = K if t >= PT else 0  # this is the system input
    # here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx

x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0, 40)

x = odeint(model, x0, t, args=(K, PT))

plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of them. You can define a step function with a simple lambda and then simply capture it from the outer scope without even passing it to your function. Because sometimes that won't be the case, we'll be explicit and pass it.
Your next problem is the order of arguments in the function you integrate. As per the docs it is (y, t, ...): first the state, then the time, then the other args arguments. So for the first part we get:
import numpy as np
from scipy.integrate import odeint

u = lambda t: 2 if t > 5 else 0

def model(x, t, u):
    dxdt = (-x + u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0, 40)

x = odeint(model, x0, t, args=(u,))
Moving to the next part: the trouble is, you can't feed x as an arg to y, because it's a vector of x(t) values at particular times, so y + x doesn't make sense inside the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires interpolating the x values at the specific times the solver asks for (which scipy can handle, no problem):
from scipy.interpolate import interp1d

# flatten because the shapes are off; extrapolate because odeint will go out of bounds
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y, t, x):
    dydt = (-y + x(t))/5  # sign per the stated equation, 5*dy/dt = -y + x
    return dydt

y = odeint(model2, y0, t, args=(xfunc,))
Then you get the expected plot. Sven's answer is more idiomatic for vector programming with scipy/numpy, but I hope my answer provides a clearer path from what you already know to a working solution.

Solve an implicit ODE (differential algebraic equation DAE)

I'm trying to solve a second order ODE using odeint from scipy. The issue I'm having is that the function is implicitly coupled to the second order term, as seen in the simplified snippet below (please ignore the pretend physics of the example):
import numpy as np
from scipy.integrate import odeint

def integral(y, t, F_l, mass):
    dydt = np.zeros_like(y)
    x, v = y
    F_r = (((1-a)/3)**2 + (2*(1+a)/3)**2) * v  # 'a' implicit
    a = (F_l - F_r)/mass
    dydt = [v, a]
    return dydt

y0 = [0, 5]
time = np.linspace(0., 10., 21)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
In this case I realise it is possible to solve algebraically for the implicit variable, but in my actual scenario there is a lot of logic between F_r and the evaluation of a, and algebraic manipulation fails.
I believe the DAE could be solved using MATLAB's ode15i function, but I'm trying to avoid that scenario if at all possible.
My question is: is there a way to solve implicit ODE functions (DAEs) in Python (preferably with scipy)? And is there a better way to pose the problem above to do so?
As a last resort, it may be acceptable to pass a from the previous time-step. How could I pass dydt[1] back into the function after each time-step?
Quite old, but worth updating so it may be useful for anyone who stumbles upon this question. There are quite a few packages currently available in Python that can solve implicit ODEs.
GEKKO (https://github.com/BYU-PRISM/GEKKO) is one such package; it specializes in dynamic optimization for mixed-integer, nonlinear optimization problems, but can also be used as a general-purpose DAE solver.
The above "pretend physics" problem can be solved in GEKKO as follows.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

m = GEKKO()
m.time = np.linspace(0, 100, 101)

F_l = m.Param(value=1000)
mass = m.Param(value=1000)

m.options.IMODE = 4  # dynamic simulation
m.options.NODES = 3

F_r = m.Var(value=0)
x = m.Var(value=0)
v = m.Var(value=0, lb=0)
a = m.Var(value=5, lb=0)

m.Equation(x.dt() == v)
m.Equation(v.dt() == a)
m.Equation(F_r == (((1-a)/3)**2 + (2*(1+a)/3)**2) * v)
m.Equation(a == (F_l - F_r)/mass)

m.solve(disp=False)
plt.plot(m.time, x.value)
If algebraic manipulation fails, you can go for a numerical solution of your constraint, running for example fsolve at each timestep:
import sys
from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import fsolve

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 10.
mass = 1000.

def F_r(a, v):
    return (((1 - a) / 3) ** 2 + (2 * (1 + a) / 3) ** 2) * v

def constraint(a, v):
    return (F_lon - F_r(a, v)) / mass - a

def integral(y, _):
    v = y[1]
    a, info, ier, mesg = fsolve(constraint, 0, args=(v,), full_output=True)
    if ier != 1:
        print("I couldn't solve the algebraic constraint, error:\n\n", mesg)
        sys.stdout.flush()
    return [v, a[0]]  # fsolve returns an array; unpack the scalar root

dydt = odeint(integral, y0, time)
Clearly this will slow down your time integration. Always check that fsolve finds a good solution, and flush the output so that you can notice a failure as it happens and stop the simulation.
As for how to "cache" the value of a variable at a previous timestep, you can exploit the fact that default arguments are evaluated only once, at function definition:
from numpy import linspace
from scipy.integrate import odeint

# you could get a better first guess for 'a' by running fsolve instead of
# starting the cache at 0
def integral(y, _, F_l, M, cache=[0]):
    v, preva = y[1], cache[0]
    # use the value of 'a' from the previous timestep
    F_r = (((1 - preva) / 3) ** 2 + (2 * (1 + preva) / 3) ** 2) * v
    # calculate the new value
    a = (F_l - F_r) / M
    cache[0] = a
    return [v, a]

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
Notice that for the trick to work the cache parameter must be mutable, which is why I use a list. Read up on how Python evaluates default arguments if this is unfamiliar.
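A two-line demonstration of why this works (the default list is created once, at definition time, and shared across calls):

def f(x, cache=[0]):
    cache[0] += x  # mutates the one shared default list
    return cache[0]

print(f(1))  # 1
print(f(1))  # 2 -- the same list persisted between calls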
Notice that the two code samples DO NOT produce the same result, and you should be very careful using the value from the previous timestep, both for numerical stability and for precision. The second is clearly much faster, though.
