Finding equilibria of an ODE as a function of initial conditions - Python

Let us assume I have an ODE x'(t) = f(x(t)) with solution x(t) = ϕ(x(0), t) for an initial condition x(0). Now I want to compute numerically the equilibria as a function of the initial condition: eq(x0) := ϕ(x0, ∞). The ODEs are such that these equilibria exist unambiguously for every initial condition (including eq = ∞).
My poor man's approach would be to integrate the ODE up to a late time and fetch that value (for brevity I do not show the plotting):
import numpy as np
from scipy.integrate import odeint

# ODE
def func(X, t):
    return [X[2]**2 * (X[0] - X[1]),
            X[2]**3 * (X[0] + 3 * X[1]),
            -X[2]**2]

# Forming a grid
n = 15
x0 = x1 = np.linspace(0, 1, n)
x0_, x1_ = np.meshgrid(x0, x1)
eq = np.zeros([n, n, 3])
t = np.linspace(0, 100, 1000)
x2 = 1
for i in range(n):
    for j in range(n):
        X = odeint(func, [x0_[j, i], x1_[j, i], x2], t)
        eq[j, i, :] = X[-1, :]
(Plot of the naive example above omitted.)
The problem with that approach is that you can never be sure whether it has converged. I know that I could just find the roots of f(x), but that would not yield the equilibria as a function of their initial conditions (you could trace them back, but since this function is not injective, you would not obtain values for all initial conditions). What I need is an ODE solver that integrates until an equilibrium is reached (or stops integrating if the solution runs beyond some limit). Do you have any ideas?
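One possible way to get that behaviour (a sketch, not part of the original question) is to use scipy.integrate.solve_ivp with terminal events: one event stops the integration once the right-hand side f(x) becomes small, another stops it if the trajectory runs away towards infinity. The tolerance, escape radius and time horizon below are arbitrary illustrative choices.

import numpy as np
from scipy.integrate import solve_ivp

def func(t, X):  # solve_ivp expects the signature f(t, X)
    return [X[2]**2 * (X[0] - X[1]),
            X[2]**3 * (X[0] + 3 * X[1]),
            -X[2]**2]

def near_equilibrium(t, X, tol=1e-8):
    # crosses zero (and terminates) once |f(X)| drops below tol
    return np.linalg.norm(func(t, X)) - tol
near_equilibrium.terminal = True
near_equilibrium.direction = -1

def escaped(t, X, limit=1e6):
    # terminates if the trajectory runs away towards infinity
    return np.linalg.norm(X) - limit
escaped.terminal = True
escaped.direction = 1

def eq_of(x0, x1, x2=1.0, t_max=1e4):
    sol = solve_ivp(func, [0, t_max], [x0, x1, x2],
                    events=[near_equilibrium, escaped],
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]  # last computed state (equilibrium or escape point)

print(eq_of(0.5, 0.5))

After the call, sol.status and sol.t_events tell you whether and which event terminated the run, so the grid loop above can record both the equilibrium and how it was reached.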


Using python built-in functions for coupled ODEs

THIS PART IS JUST BACKGROUND IF YOU NEED IT
I am developing a numerical solver for the Second-Order Kuramoto Model. The functions I use to find the derivatives of theta and omega are given below.
# n-dimensional change in theta
def d_theta(omega):
    return omega

# n-dimensional change in omega
def d_omega(K, A, P, alpha, mask, n):
    def layer1(theta, omega):
        T = theta[:, None] - theta
        A[mask] = K[mask] * np.sin(T[mask])
        return -alpha * omega + P - A.sum(1)
    return layer1
These equations return vectors.
QUESTION 1
I know how to use odeint in two dimensions (y, t). For my research I want to use a built-in Python function that works in higher dimensions.
QUESTION 2
I do not necessarily want to stop after a predetermined amount of time. I have other stopping conditions in the code below that will indicate whether the system of equations converges to the steady state. How do I incorporate these into a built-in Python solver?
WHAT I CURRENTLY HAVE
This is the code I am currently using to solve the system. I just implemented RK4 with constant time stepping in a loop.
# This function randomly samples initial values in the domain and returns whether the solution converged
# Inputs:
#   f          change in theta (d_theta)
#   g          change in omega (d_omega)
#   tol_ss     tolerance on the norm of omega for the steady state
#   tol_step   tolerance on the change per step below which the solution is said to converge
#   h          size of the time step
#   max_iter   maximum number of steps Runge-Kutta will perform before giving up
#   max_laps   maximum number of laps the solution can do before giving up
#   fixed_t    vector of fixed points of theta
#   fixed_o    vector of fixed points of omega
#   n          number of dimensions
#   theta      initial theta vector
#   omega      initial omega vector
# Outputs:
#   converges  True if the nodes restabilize, False otherwise
def kuramoto_rk4_wss(f, g, tol_ss, tol_step, h, max_iter, max_laps, fixed_o, fixed_t, n):
    def layer1(theta, omega):
        lap = np.zeros(n, dtype=int)
        converges = False
        i = 0
        tau = 2 * np.pi
        while i < max_iter:  # perform RK4 with constant time step
            p_omega = omega
            p_theta = theta
            T1 = h * f(omega)
            O1 = h * g(theta, omega)
            T2 = h * f(omega + O1/2)
            O2 = h * g(theta + T1/2, omega + O1/2)
            T3 = h * f(omega + O2/2)
            O3 = h * g(theta + T2/2, omega + O2/2)
            T4 = h * f(omega + O3)
            O4 = h * g(theta + T3, omega + O3)
            theta = theta + (T1 + 2*T2 + 2*T3 + T4)/6  # take theta time step
            mask2 = np.array(np.where(np.logical_or(theta > tau, theta < 0)))  # find which nodes left [0, 2pi]
            lap[mask2] = lap[mask2] + 1               # increment the lap counter
            theta[mask2] = np.mod(theta[mask2], tau)  # take the modulus
            omega = omega + (O1 + 2*O2 + 2*O3 + O4)/6
            if max_laps in lap:        # if any generator rotates this many times it probably won't converge
                break
            elif np.any(omega > 12):   # if any of the generators is rotating this fast, it probably won't converge
                break
            elif (np.linalg.norm(omega) < tol_ss and              # the nodes are sufficiently close to the equilibrium
                  np.linalg.norm(omega - p_omega) < tol_step and  # change in omega is small
                  np.linalg.norm(theta - p_theta) < tol_step):    # change in theta is small
                converges = True
                break
            i = i + 1
        return converges
    return layer1
Thanks for your help!
You can wrap your existing functions into a function accepted by odeint (option tfirst=True) and solve_ivp as
def odesys(t, u):
    theta, omega = u[:n], u[n:]           # or: theta, omega = u.reshape(2, -1)
    return [*f(omega), *g(theta, omega)]  # or: np.concatenate([f(omega), g(theta, omega)])

u0 = [*theta0, *omega0]
t = np.linspace(t0, tf, timesteps + 1)
u = odeint(odesys, u0, t, tfirst=True)
# or
res = solve_ivp(odesys, [t0, tf], u0, t_eval=t)
The scipy methods pass numpy arrays and convert the return value into one, so you do not have to worry about that inside the ODE function. The variants in the comments use explicit numpy functions.
While solve_ivp does have event handling, using it for a systematic collection of events is rather cumbersome. It would be easier to advance the solution by some fixed time span, do the normalization and termination detection, and then repeat.
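A minimal sketch of that chunked loop, reusing the odesys wrapper above (the chunk length, time horizon, tolerances and the runaway check are illustrative assumptions, not part of the original answer):

import numpy as np
from scipy.integrate import solve_ivp

def integrate_until_steady(odesys, theta0, omega0, dt=1.0, t_max=1000.0,
                           tol_ss=1e-6, tol_step=1e-9):
    n = len(theta0)
    u = np.concatenate([theta0, omega0])
    t = 0.0
    while t < t_max:
        u_prev = u.copy()
        sol = solve_ivp(odesys, [t, t + dt], u)  # advance one fixed chunk
        u = sol.y[:, -1]
        t += dt
        u[:n] = np.mod(u[:n], 2 * np.pi)         # normalize the angles
        if np.any(np.abs(u[n:]) > 12):           # runaway frequencies: give up (mirrors the check in the question)
            return False, t, u
        # termination detection: small angular velocities and small change in omega
        if (np.linalg.norm(u[n:]) < tol_ss and
                np.linalg.norm(u[n:] - u_prev[n:]) < tol_step):
            return True, t, u
    return False, t, u

# converged, t_end, u_end = integrate_until_steady(odesys, theta0, omega0)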
If you later want to increase efficiency somewhat, use the stepper classes behind solve_ivp directly.
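For reference, a sketch of driving one of those stepper classes (RK45 here) directly, with odesys, u0 and n as defined above; the time bound and steady-state test are again only illustrative:

import numpy as np
from scipy.integrate import RK45

solver = RK45(odesys, 0.0, u0, t_bound=1000.0)  # t_bound is an arbitrary safety horizon
while solver.status == 'running':
    solver.step()                               # one adaptive internal step
    theta, omega = solver.y[:n], solver.y[n:]
    if np.linalg.norm(omega) < 1e-6:            # illustrative steady-state test
        break
u_final = solver.y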

solving differential equation with step function

I am trying to solve this differential equation as part of my assignment. I am not able to understand how to put the condition for u in the code. In the code shown below, I arbitrarily set u = 5.
2 dx(t)/dt = −x(t) + u(t)
5 dy(t)/dt = −y(t) + x(t)
u = 2 S(t−5)
x(0) = 0
y(0) = 0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x, t, u):
    dxdt = (-x + u)/2
    return dxdt

def model2(y, x, t):
    dydt = -(y + x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
y = odeint(model2, y0, t, args=(u,))
plt.plot(t, x, 'r-')
plt.plot(t, y, 'b*')
plt.show()
I do not know the SciPy Library very well, but regarding the example in the documentation I would try something like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT describing the input u as a step function, where K
    is the height of the step and PT is the delay of the step.
    """
    x1, x2 = x                # split the state into two variables
    u = K if t >= PT else 0   # this is the system input
    # here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx

x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of them. You can define the step function with a simple lambda and capture it from the outer scope without even passing it to your function; but because that won't always be the case, we'll be explicit and pass it.
Your next problem is the order of arguments of the function to integrate. As per the docs, it is (y, t, ...): first the state, then the time, then the extra args arguments. So for the first part we get:
u = lambda t: 2 if t > 5 else 0

def model(x, t, u):
    dxdt = (-x + u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
Moving on to the next part: the trouble is that you can't pass the array x as an argument when integrating y, because it is a vector of values of x(t) at particular times, so y + x doesn't make sense inside the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires interpolating the x values at the specific time values the solver asks for (which scipy can handle, no problem):
from scipy.interpolate import interp1d

# flatten because the shapes are (N, 1); extrapolate because odeint may step slightly out of bounds
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y, t, x):
    dydt = -(y + x(t))/5
    return dydt

y = odeint(model2, y0, t, args=(xfunc,))
Then you get the resulting plot (omitted here).
Sven's answer is more idiomatic for vector programming with scipy/numpy, but I hope my answer provides a clearer path from what you already know to a working solution.

Solving a system of odes (with changing constant!) using scipy.integrate.odeint?

I currently have a system of odes with a time-dependent constant. E.g.
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]
The constants are "a", "b" and "c". I currently have a list of values of "a", one for every time step, which I would like to use at the corresponding time steps when using the scipy ODE solver. Is this possible?
Thanks!
Yes, this is possible. In the case where a is constant, I guess you called scipy.integrate.odeint(fun, u0, t, args), where fun is defined as in your question, u0 = [x0, y0, z0] is the initial condition, t is a sequence of time points at which to solve the ODE, and args = (a, b, c) are the extra arguments to pass to fun.
In the case where a depends on time, you simply have to reconsider a as a function, for example (given a constant a0):
def a(t):
    return a0 * t
Then you will have to modify fun which computes the derivative at each time step to take the previous change into account:
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a(t) * x + y * z  # a change on this line: a -> a(t)
    dy_dt = b * (y - z)
    dz_dt = -x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]
Finally, note that u0, t and args remain unchanged and you can again call scipy.integrate.odeint(fun, u0, t, args).
A word about the correctness of this approach: the accuracy of the numerical integration is affected, though I don't know precisely how (I have no theoretical guarantees), but here is a simple example which works:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import scipy.integrate

tmax = 10.0

def a(t):
    if t < tmax / 2.0:
        return ((tmax / 2.0) - t) / (tmax / 2.0)
    else:
        return 1.0

def func(x, t, a):
    return -(x - a(t))

x0 = 0.8
t = np.linspace(0.0, tmax, 1000)
args = (a,)
y = sp.integrate.odeint(func, x0, t, args)

fig = plt.figure()
ax = fig.add_subplot(111)
h1, = ax.plot(t, y)
h2, = ax.plot(t, [a(s) for s in t])
ax.legend([h1, h2], ["y", "a"])
ax.set_xlabel("t")
ax.grid()
plt.show()
I hope this helps.
No, that is not possible in the literal sense of
"I currently have a list of "a"s for every time-step which I would like to insert at every time-step"
as the solver has adaptive step size control, that is, it will use internal time steps that you have no control over, and each time step uses several evaluations of the function. Thus there is no connection between the solver time steps and the data time steps.
In the extended sense that the given data defines a piecewise constant step function, however, there are several approaches to get to a solution.
You can integrate from jump point to jump point, using the ODE function with the constant parameter for this time segment. After that use numpy array operations like concatenate to assemble the full solution.
You can use interpolation functions like numpy.interp or scipy.interpolate.interp1d. The first gives a piecewise linear interpolation, which may not be desired here. The second returns a function object that can be configured to be a "zero-order hold", i.e. a piecewise constant step function (a sketch follows after this list).
You could implement your own logic to go from the time t to the correct values of those parameters. This mostly applies if there is some structure to the data, for instance, if they have the form f(int(t/h)).
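A minimal sketch of the zero-order-hold option mentioned above (the sample data for a and the values of b and c are invented purely for illustration):

import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

# made-up piecewise-constant data for a(t), one value per data time step
t_data = np.linspace(0.0, 10.0, 11)
a_data = np.array([1.0, 0.8, 1.2, 0.5, 0.9, 1.1, 0.7, 1.3, 0.6, 1.0, 0.8])

# kind='previous' gives a piecewise constant ("zero-order hold") function of t
a_of_t = interp1d(t_data, a_data, kind='previous',
                  bounds_error=False, fill_value=(a_data[0], a_data[-1]))

def fun(u, t, b, c):
    x, y, z = u
    return [a_of_t(t) * x + y * z,
            b * (y - z),
            -x * y + c * y - z]

u0 = [1.0, 1.0, 1.0]
t = np.linspace(0.0, 10.0, 1000)
sol = odeint(fun, u0, t, args=(10.0, 2.0))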
Note that the approximation order of the numerical integration is not only bounded by the order of the RK (solve_ivp) or multi-step (odeint) method, but also by the differentiability order of (the parts of) the differential equation. If the ODE is much less smooth than the order of the method, the implicit assumptions of the step size control mechanism are violated, which may result in a very small step size and a huge number of integration steps.
I also encountered a similar problem. In my case, the parameters a, b and c are not direct functions of time, but are determined by x, y and z at that time. So I have to get x, y, z at time t and calculate a, b, c for the integration step towards x, y, z at t + dt. It turns out that if I change the dt value, the whole integration result changes dramatically, even to something unreasonable.

Euler's method in python

I'm trying to implement Euler's method to approximate the value of e in Python. This is what I have so far:
def Euler(f, t0, y0, h, N):
    t = t0 + arange(N+1)*h
    y = zeros(N+1)
    y[0] = y0
    for n in range(N):
        y[n+1] = y[n] + h*f(t[n], y[n])
    f = (1+(1/N))^N
    return y
However, when I try to call the function, I get the error "ValueError: shape <= 0". I suspect this has something to do with how I defined f? I tried inputting f directly when euler is called, but gave me errors related to variables not being defined. I also tried defining f as its own function, which gave me a division by 0 error.
def f(N):
    for n in range(N):
        return (1+(1/n))^n
(not sure if N was the appropriate variable to use here...)
The formula you are trying to use is not Euler's method, but rather the definition of e as a limit (wiki):
$e = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^n$
Euler's method is used to solve first order differential equations.
Here are two guides that show how to implement Euler's method to solve a simple test function: beginner's guide and numerical ODE guide.
To answer the title of this post, rather than the question you are asking, I've used Euler's method to solve usual exponential decay:
$\frac{dN}{dt} = -\lambda N$
Which has the solution,
$N(t) = N_0 e^{-\lambda t}$
Code:
import numpy as np
import matplotlib.pyplot as plt

# Concentration over time
N = lambda t: N0 * np.exp(-k * t)

# dN/dt
def dx_dt(x):
    return -k * x

k = .5
h = 0.001
N0 = 100.

t = np.arange(0, 10, h)
y = np.zeros(len(t))

y[0] = N0
for i in range(1, len(t)):
    # Euler's method
    y[i] = y[i-1] + dx_dt(y[i-1]) * h

max_error = abs(y - N(t)).max()
print("Max difference between the exact solution and Euler's approximation with step size h=0.001:")
print('{0:.15}'.format(max_error))
Output:
Max difference between the exact solution and Euler's approximation with step size h=0.001:
0.00919890254720457
Are you sure you are not trying to implement Newton's method? Newton's method is used to approximate roots.
In case you decide to go with Newton's method, here is a slightly changed version of your code that approximates the square root of 2. You can replace f(x) and fp(x) with the function and its derivative that you want to use in your approximation.
import numpy as np

def f(x):
    return x**2 - 2

def fp(x):
    return 2*x

def Newton(f, y0, N):
    y = np.zeros(N+1)
    y[0] = y0
    for n in range(N):
        y[n+1] = y[n] - f(y[n])/fp(y[n])
    return y

print(Newton(f, 1, 10))
gives
[ 1. 1.5 1.41666667 1.41421569 1.41421356 1.41421356
1.41421356 1.41421356 1.41421356 1.41421356 1.41421356]
which are the initial value and the first ten iterations to the square-root of two.
Besides this, a big problem was the use of ^ instead of ** for powers; ^ is legal Python, but it is a completely different (bitwise XOR) operation.
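A quick illustration of the difference (not from the original answer):

print(3**2)  # 9 -> exponentiation
print(3^2)   # 1 -> bitwise XOR of 0b11 and 0b10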

Solve an implicit ODE (differential algebraic equation DAE)

I'm trying to solve a second-order ODE using odeint from scipy. The issue I'm having is that the function is implicitly coupled to the second-order term, as seen in the simplified snippet (please ignore the pretend physics of the example):
import numpy as np
from scipy.integrate import odeint

def integral(y, t, F_l, mass):
    dydt = np.zeros_like(y)
    x, v = y
    F_r = (((1-a)/3)**2 + (2*(1+a)/3)**2) * v  # 'a' implicit
    a = (F_l - F_r)/mass
    dydt = [v, a]
    return dydt

y0 = [0, 5]
time = np.linspace(0., 10., 21)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
In this case I realise it is possible to solve algebraically for the implicit variable, but in my actual scenario there is a lot of logic between F_r and the evaluation of a, and algebraic manipulation fails.
I believe the DAE could be solved using MATLAB's ode15i function, but I'm trying to avoid that scenario if at all possible.
My question is: is there a way to solve implicit ODE functions (DAEs) in Python (scipy preferably)? And is there a better way to pose the problem above to do so?
As a last resort, it may be acceptable to pass a from the previous time-step. How could I pass dydt[1] back into the function after each time-step?
Quite old, but worth updating so it may be useful for anyone who stumbles upon this question. There are quite a few packages currently available in Python that can solve implicit ODEs.
GEKKO (https://github.com/BYU-PRISM/GEKKO) is one of those packages; it specializes in dynamic optimization for mixed-integer, nonlinear optimization problems, but can also be used as a general-purpose DAE solver.
The above "pretend physics" problem can be solved in GEKKO as follows.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

m = GEKKO()
m.time = np.linspace(0, 100, 101)

F_l = m.Param(value=1000)
mass = m.Param(value=1000)

m.options.IMODE = 4   # dynamic simulation
m.options.NODES = 3

F_r = m.Var(value=0)
x = m.Var(value=0)
v = m.Var(value=0, lb=0)
a = m.Var(value=5, lb=0)

m.Equation(x.dt() == v)
m.Equation(v.dt() == a)
# the two algebraic relations, mirroring F_r and a from the question
m.Equation(F_r == (((1-a)/3)**2 + (2*(1+a)/3)**2) * v)
m.Equation(a == (F_l - F_r)/mass)

m.solve(disp=False)

plt.plot(m.time, x.value)
plt.show()
If algebraic manipulation fails, you can go for a numerical solution of your constraint, running for example fsolve at each timestep:
import sys
from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import fsolve

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 10.
mass = 1000.

def F_r(a, v):
    return (((1 - a) / 3) ** 2 + (2 * (1 + a) / 3) ** 2) * v

def constraint(a, v):
    return (F_lon - F_r(a, v)) / mass - a

def integral(y, _):
    v = y[1]
    a, _, ier, mesg = fsolve(constraint, 0, args=[v, ], full_output=True)
    if ier != 1:
        print("I couldn't solve the algebraic constraint, error:\n\n", mesg)
        sys.stdout.flush()
    return [v, a]

dydt = odeint(integral, y0, time)
Clearly this will slow down your time integration. Always check that fsolve finds a good solution, and flush the output so that you can notice the problem as it happens and stop the simulation.
About how to "cache" the value of a variable at a previous timestep: you can exploit the fact that default arguments are evaluated only once, at function definition,
from numpy import linspace
from scipy.integrate import odeint

# you can choose a better initial guess using fsolve instead of 0
def integral(y, _, F_l, M, cache=[0]):
    v, preva = y[1], cache[0]
    # use the value of 'a' from the previous timestep
    F_r = (((1 - preva) / 3) ** 2 + (2 * (1 + preva) / 3) ** 2) * v
    # calculate the new value
    a = (F_l - F_r) / M
    cache[0] = a
    return [v, a]

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
Notice that in order for the trick to work the cache parameter must be mutable, and that's why I use a list. See this link if you are not familiar with how default arguments work.
Notice that the two codes DO NOT produce the same result, and you should be very careful using the value at the previous timestep, both for numerical stability and precision. The second is clearly much faster though.
