Find the steady state of a set of differential equations - Python

Let us assume I have a set of differential equations that I integrate with scipy's odeint. My goal is to find the steady state (I chose initial conditions such that this state exists). At the moment I have implemented something like:
cond = True
while cond:
    x = integrate(interval=[0, t], steps=200)
    if var(x[-22::]) < maxvar:
        cond = False
        return mean(x)
    else:
        t *= 2
Do you have a more efficient approach?

If you are using odeint, then you already have your differential equations written as a function f(x, t) (or possibly f(x, t, *args)). If your system is autonomous (i.e. f does not actually depend on t), you can find an equilibrium by solving f(x, 0) == 0 for x. You can use, for example, scipy.optimize.fsolve to solve for the equilibrium.
The following is an example. It uses the "Coupled Spring Mass System" example from the scipy cookbook. scipy.optimize.fsolve is used to find the equilibrium solution x1 = 0.5, y1 = 0, x2 = 1.5, y2 = 0.
from scipy.optimize import fsolve

def vectorfield(w, t, p):
    """
    Defines the differential equations for the coupled spring-mass system.

    Arguments:
        w : vector of the state variables:
                w = [x1, y1, x2, y2]
        t : time
        p : vector of the parameters:
                p = [m1, m2, k1, k2, L1, L2, b1, b2]
    """
    x1, y1, x2, y2 = w
    m1, m2, k1, k2, L1, L2, b1, b2 = p
    # Create f = (x1', y1', x2', y2'):
    f = [y1,
         (-b1 * y1 - k1 * (x1 - L1) + k2 * (x2 - x1 - L2)) / m1,
         y2,
         (-b2 * y2 - k2 * (x2 - x1 - L2)) / m2]
    return f
if __name__ == "__main__":
    # Parameter values
    # Masses:
    m1 = 1.0
    m2 = 1.5
    # Spring constants
    k1 = 8.0
    k2 = 40.0
    # Natural lengths
    L1 = 0.5
    L2 = 1.0
    # Friction coefficients
    b1 = 0.8
    b2 = 0.5

    # Pack up the parameters:
    p = [m1, m2, k1, k2, L1, L2, b1, b2]

    # Initial guess to pass to fsolve. The second and fourth components
    # are the velocities of the masses, and we know they will be 0 at
    # equilibrium. For the positions x1 and x2, we'll try 1 for both.
    # A better guess could be obtained by solving the ODEs for some time
    # interval, and using the last point of that solution.
    w0 = [1.0, 0, 1.0, 0]

    # Find the equilibrium.
    eq = fsolve(vectorfield, w0, args=(0, p))
    print("Equilibrium: x1 = {0:.1f} y1 = {1:.1f} x2 = {2:.1f} y2 = {3:.1f}".format(*eq))
The output is:
Equilibrium: x1 = 0.5 y1 = 0.0 x2 = 1.5 y2 = 0.0
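One caveat worth adding (my note, not part of the original answer): fsolve only finds a root of f; it does not tell you whether nearby trajectories actually settle onto that root. A quick check, sketched below for the same script, is to look at the eigenvalues of the Jacobian of the vector field at the equilibrium, approximated here with scipy.optimize.approx_fprime.
import numpy as np
from scipy.optimize import approx_fprime

# finite-difference Jacobian of the vector field at the equilibrium,
# built one row (one component of f) at a time
jac = np.array([approx_fprime(eq, lambda w, i=i: vectorfield(w, 0, p)[i], 1e-8)
                for i in range(len(eq))])

# if every eigenvalue has a negative real part, the equilibrium is
# locally attracting, so the integration would converge to it
print("eigenvalues:", np.linalg.eigvals(jac))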

Referring to your comments, I don't see any better way, at least for the case where you know approximately where the system will settle (this is fairly easy to predict in some systems, like a pendulum or a charging capacitor) and you know that it will settle. The fastest way I know is to check whether
abs(p[0] * x[i] + p[1] * x[i-1] + ... + p[n] * x[i-n] - mean(x[i-n:i])) < epsilon
The difficulty is determining the size of epsilon and the filter parameters p[0:n] that tune this detection.
Clearly you are already using a version of this window method, with
epsilon = maxvar
p[0:n] = 1
n = 22
and have optimized it by dropping the filter parameters and simply using the variance.
For many systems (where settling looks like a first-order differential equation) the filter parameters turn out to look like this:
p[0] = n/2
p[n] = n/2
p[1:n-1] = 0
meaning you can make things go faster by replacing the variance calculation with this simple test:
if abs(x[-22] - x[-1]) < epsilon:  # the window's endpoints agree, so treat the state as settled
This will not detect settling correctly if small disturbances are still present; since we don't know your system, it is difficult to say more...
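To make this concrete, here is a minimal sketch of the integrate-and-double loop from the question with the variance check swapped for the endpoint test (my own code, under the question's assumptions; rhs and w0 are placeholder names for the odeint right-hand side and initial state):
import numpy as np
from scipy.integrate import odeint

def find_steady_state(rhs, w0, eps=1e-6, t_end=10.0, steps=200, max_doublings=20):
    """Integrate on [0, t_end], doubling t_end until the last window is flat."""
    for _ in range(max_doublings):
        t = np.linspace(0.0, t_end, steps)
        x = odeint(rhs, w0, t)
        # endpoint test: settled if the first and last samples of the
        # 22-sample window agree to within eps
        if np.all(np.abs(x[-1] - x[-22]) < eps):
            return x[-22:].mean(axis=0)
        t_end *= 2.0
    raise RuntimeError("no steady state detected within the allowed time")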

Related

Find the value of a variable that maximizes a multivariable function

I have a function like this:
def objective(x, y, z, q):
    theta1 = f(x, y)
    theta2 = f(x, z)
    rho = q - (theta1 + theta2)
    return rho * x
and I would like to find the value of x that maximizes the objective function, knowing that x must lie between two bounds b1 and b2.
My first guess was to iterate from b1 through b2 to find which value of x maximizes the objective function, but I guess there are more efficient ways to do this.
What is the most efficient way to find the value of x that maximizes the objective function in this case?
I know I could use scipy.optimize.minimize on the negated objective, but I can't find a way to use it to optimize with respect to just x.
Assuming that x is a scalar variable and you already know the values of y, z and q, you could do something like this:
from scipy.optimize import minimize

# bound: b1 <= x <= b2
bounds = [(b1, b2)]

# given values for y, z, and q
y0 = 1.0
z0 = 1.0
q0 = 1.0

# (feasible) initial guess: the midpoint of the interval
x0 = (b1 + b2) / 2.0

res = minimize(lambda x: -1.0 * objective(x, y0, z0, q0), x0=x0, bounds=bounds)
Note that maximizing the function obj(x) is equivalent to minimizing -1.0*obj(x).
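Since x is a single bounded scalar, scipy.optimize.minimize_scalar is another option (my addition, not part of the original answer); with method='bounded' it needs no initial guess at all:
from scipy.optimize import minimize_scalar

# minimize the negated objective over [b1, b2]; no x0 is required
res = minimize_scalar(lambda x: -objective(x, y0, z0, q0),
                      bounds=(b1, b2), method='bounded')
best_x = res.x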

Custom jvp of Newton Solver in Python's JAX

I'm having a lot of trouble writing a custom Jacobian-vector product in JAX for a fixed-point solver. I'm trying to solve F(x, y) = 0 where F(x, y) = f(x_1, y) - x_2 for some function f (writing x = (x_1, x_2)). I'm using a basic Newton root-finding algorithm to do this, and would like to also write a custom_jvp to compute the derivatives with respect to x_1 and x_2 without unrolling the loop involved in the iteration.
However, though there is sample code of this in the JAX doc for custom vjp, I cannot seem to implement this custom jvp!
In particular I have:
@ft.partial(custom_jvp, nondiff_argnums=(0, 1, 4, 5))
def Inverse(F, guess, x1, x2, max_iter=50, threshold=1e-12):
    y = guess
    iter = 0
    while iter < max_iter:
        value = lambda a: F(x1, a) - x2
        if jnp.abs(value(y)) < threshold:
            break
        if jnp.abs(grad(value)(y)) < 0.01 * threshold:
            break
        y -= value(y) / grad(value)(y)
    return y
@Inverse.defjvp
def _inverse_jvp(f, guess, primal, tangent):
    x1, x2 = primal
    x1_grad, x2_grad = tangent
    primal_out = Root(f, x1, x2, guess)
    value_x1 = lambda s: f(s, guess) - x2
    value_x2 = lambda s: f(x1, guess) - x2
    value_y = lambda s: f(x1, s) - x2
    tangent_out = (-grad(value_x1)(x1) / grad(value_y)(guess) * x1_grad,
                   -grad(value_x2)(x2) / grad(value_y)(guess) * x2_grad)
    return primal_out, tangent_out
and I'm trying to call
jax.jvp(Root, f, 1., (x1, x2), (x1_grad, x2_grad))
I've found that this doesn't work, or at least I'm not sure how to pass the function f into jvp, and if I wrap this with something like
def wrap(f, y):
    return lambda x1, x2: Root(f, y, x1, x2)
then I get an error saying I'm passing in too many arguments.
I imagine something like this is not meant to be complicated...
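For reference, here is a minimal sketch of the usual implicit-differentiation pattern for a scalar Newton solve with custom_jvp (my own illustration with hypothetical names newton_solve and f, not the poster's Inverse/Root): at a root y of f(x, y) = 0, the implicit function theorem gives dy/dx = -f_x / f_y, so the tangent can be computed without differentiating through the loop.
import jax
from functools import partial

@partial(jax.custom_jvp, nondiff_argnums=(0,))
def newton_solve(f, x, y0):
    # solve f(x, y) = 0 for y by plain Newton iteration; a fixed
    # iteration count keeps the loop trivially traceable
    y = y0
    for _ in range(50):
        y = y - f(x, y) / jax.grad(f, argnums=1)(x, y)
    return y

@newton_solve.defjvp
def newton_solve_jvp(f, primals, tangents):
    x, y0 = primals
    x_dot, _ = tangents
    y = newton_solve(f, x, y0)
    # implicit function theorem at the root: dy/dx = -f_x / f_y
    f_x = jax.grad(f, argnums=0)(x, y)
    f_y = jax.grad(f, argnums=1)(x, y)
    return y, -f_x / f_y * x_dot

# usage: the root of f(x, y) = y**2 - x is y = sqrt(x), so dy/dx = 1/(2*sqrt(x))
f = lambda x, y: y ** 2 - x
y, y_dot = jax.jvp(lambda x: newton_solve(f, x, 1.0), (4.0,), (1.0,))
print(y, y_dot)  # approximately 2.0 and 0.25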

Limit the output of ODEINT Python

I'm trying to define 'hard' limits on the return values of SciPy's odeint function, but am unsure if the function allows for this. I've modified this example from the SciPy Cookbook so that the coupled two-body system can collide.
[Linked images, not reproduced here: "Graph Collision" and "Imaginary Stop" plots of the two displacements.]
Specifically, the spring constant and mass have been changed to be weaker and lighter. You'll also notice the 'if' statement inside the function, which tries to limit the travel of m1 to no further than the initial position of x2. Sorry, I don't have enough reputation points to post the graphs, but you'll clearly see the two masses can occupy the same space during portions of the solution.
# Use ODEINT to solve the differential equations defined by the vector field
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import numpy as np

def vectorfield(w, t, p):
    """
    Defines the differential equations for the coupled spring-mass system.

    Arguments:
        w : vector of the state variables:
                w = [x1, y1, x2, y2]
        t : time
        p : vector of the parameters:
                p = [m1, m2, k1, k2, L1, L2, b1, b2]
    """
    x1, y1, x2, y2 = w
    m1, m2, k1, k2, L1, L2, b1, b2 = p
    # Create f = (x1', y1', x2', y2'):
    f = [y1,
         (-b1 * y1 - k1 * (x1 - L1) + k2 * (x2 - x1 - L2)) / m1,
         y2,
         (-b2 * y2 - k2 * (x2 - x1 - L2)) / m2]
    # Attempt to keep m1 from travelling past x2.  Note that this block
    # has no effect: '==' tests equality, it does not assign.
    if y1 > x2:
        y1 == x2
    else:
        y1 == y1
    return f

# Parameter values
# Masses:
m1 = 0.5
m2 = 1.5
# Spring constants
k1 = 0.1
k2 = 40.0
# Natural lengths
L1 = 0.5
L2 = 1.0
# Friction coefficients
b1 = 0.8
b2 = 0.5

# Initial conditions
# x1 and x2 are the initial displacements; y1 and y2 are the initial velocities
x1 = 0.5
y1 = 0.0
x2 = 4.25
y2 = 0.0

# ODE solver parameters
abserr = 1.0e-8
relerr = 1.0e-6
stoptime = 5.0
numpoints = 2500

# Create the time samples for the output of the ODE solver.
# I use a large number of points, only because I want to make
# a plot of the solution that looks nice.
t = [stoptime * float(i) / (numpoints - 1) for i in range(numpoints)]

# Pack up the parameters and initial conditions:
p = [m1, m2, k1, k2, L1, L2, b1, b2]
w0 = [x1, y1, x2, y2]

# Call the ODE solver.
wsol = odeint(vectorfield, w0, t, args=(p,),
              atol=abserr, rtol=relerr)

plt.plot(t, wsol[:, 0], 'b', label='x1(t)')
plt.plot(t, wsol[:, 2], 'g', label='x2(t)')
plt.legend()
plt.show()
I'm unsure if my variables are not being set up correctly, or if the odeint function cannot accept these types of 'limits'. Ideally, it would be nice if an 'upper boundary' could be set: if the displacement of the mass went beyond that value, it could then be set equal to a maximum limit of sorts.
Thank you in advance for the help.
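One possible approach, sketched here as a suggestion rather than a confirmed fix: odeint itself has no notion of state constraints, but scipy.integrate.solve_ivp supports terminal events, so the integration can be stopped exactly when the masses meet and restarted with whatever collision rule is appropriate. This sketch reuses vectorfield, w0, p, stoptime, abserr, and relerr from the code above.
from scipy.integrate import solve_ivp

def rhs(t, w, p):
    # solve_ivp expects f(t, w); odeint's vectorfield uses f(w, t)
    return vectorfield(w, t, p)

def contact(t, w, p):
    x1, y1, x2, y2 = w
    return x2 - x1            # zero exactly when the two masses touch

contact.terminal = True       # stop the integration at the event
contact.direction = -1        # trigger only while the gap is shrinking

sol = solve_ivp(rhs, [0, stoptime], w0, args=(p,), events=contact,
                atol=abserr, rtol=relerr)
# sol.t_events[0] holds the collision time (if any); integration can be
# restarted from sol.y[:, -1] with post-collision velocities.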

RK4 implementation to solve Lotka Volterra Diff-EQ

I am trying to understand what's wrong with the code I have butchered together. The code below is one of many implementations I have written today to solve the Lotka-Volterra differential equations (a system of two equations); it is the one that I have brought closest to the desired result.
import matplotlib.pyplot as plt
import numpy as np
from pylab import *

def rk4(f, x0, t):
    """
    4th-order Runge-Kutta method implementation to solve x' = f(x,t) with x(t[0]) = x0.

    USE:
        x = rk4(f, x0, t)
    INPUT:
        f  - function of x and t equal to dx/dt.
        x0 - the initial condition(s).
             Specifies the value of x at t = t[0] (initial).
             Can be a scalar or a vector (NumPy array).
             Example: [x0, y0] = [500, 20]
        t  - a time vector (array) at which the values of the solution are computed.
             t[0] is considered the initial time point.
             h = t[i+1] - t[i] determines the step size h as suggested by the algorithm.
             Example: t = np.linspace(0, 500, 200) creates 200 time points between 0 and 500;
             increasing the number of points in the interval automatically decreases the step size.
    OUTPUT:
        x - an array containing the solution evaluated at each point in the t array.
    """
    n = len(t)
    x = np.array([x0] * n)  # creating an array of length n
    for i in range(n - 1):
        h = t[i+1] - t[i]  # step size, dependent on the time vector
        # The RK4 algorithm starts below; for further information visit
        # http://en.wikipedia.org/wiki/Runge-Kutta_methods
        # k1 is the increment based on the slope at the beginning of the interval (same as Euler)
        # k2 is the increment based on the slope at the midpoint of the interval (with x + 0.5 * k1)
        # k3 is AGAIN the increment based on the slope at the midpoint (with x + 0.5 * k2)
        # k4 is the increment based on the slope at the end of the interval
        k1 = f(x[i], t[i])
        k2 = f(x[i] + 0.5 * k1, t[i] + 0.5 * h)
        k3 = f(x[i] + 0.5 * k2, t[i] + 0.5 * h)
        k4 = f(x[i] + h * k3, t[i] + h)
        # finally computing the weighted average and storing it in the x-array
        x[i+1] = x[i] + h * ((k1 + 2.0 * (k2 + k3) + k4) / 6.0)
    return x
# model
def model(state, t):
    """
    A function that returns an array containing the Lotka-Volterra differential equations.

    Parameter assignment convention:
        a - natural growth rate of the prey
        b - chance of being eaten by a predator
        c - dying rate of the predators per week
        d - chance of catching a prey
    """
    x, y = state  # corresponds to the initial conditions; consider it a vector too
    a = 0.08
    b = 0.002
    c = 0.2
    d = 0.0004
    return np.array([x * (a - b * y), -y * (c - d * x)])  # corresponds to [dx/dt, dy/dt]
################################################################

# initial conditions for the system
x0 = 500
y0 = 20

# vector of times
t = np.linspace(0, 500, 1000)

result = rk4(model, [x0, y0], t)
print(result)

plt.plot(t, result)
plt.xlabel('Time')
plt.ylabel('Population Size')
plt.legend(('x (prey)', 'y (predator)'))
plt.title('Lotka-Volterra Model')
plt.show()
The above code produces an incorrect plot (the image is not reproduced here); however, if I move the from pylab import * line to right above the initial conditions, I get the correct output.
Why does this happen, and how can I fix it?
pylab defines its own implementation of rk4, which it takes from matplotlib:
In [1]: import pylab
In [2]: pylab.rk4
Out[2]: <function matplotlib.mlab.rk4>
When you do a wildcard import like from pylab import *, you override any local functions with the same name.
In particular, here it replaces your own rk4 implementation (i.e., the code you've written is never used).
This is why you should never do a wildcard import like that. pylab is particularly problematic, in that it defines several functions (such as any and all) which have completely different outputs than the Python builtins for certain inputs.
Anyway, the root cause of your problem seems to be that your RK4 implementation is incorrect.
You need to use the step size in your calculation of k_n.
For example, here's a small snippet of my own RK4 implementation (which, I'll admit, is tuned for speed rather than readability):
while not target(xs):
    ...
    # Do RK4
    self.f(xs, self.k1)
    self.f(xs + halfh * self.k1, self.k2)
    self.f(xs + halfh * self.k2, self.k3)
    self.f(xs + self.h * self.k3, self.k4)
    xs += sixthh * (self.k1 + self.k2 + self.k2 + self.k3
                    + self.k3 + self.k4)
You'll note that the entire state vector is multiplied by h, not just the time component.
Try fixing that up in your own code and seeing if the result is the same.
(In my opinion, the habit of the wiki article and similar references of treating time as a special case is a cause of a lot of these problems. Your time vector, ts, is just another state variable, one whose derivative happens to be t' = 1.)
So for your own code, I believe, but haven't tested, that something like this should work:
k1 = f(x[i], t[i])
k2 = f(x[i] + 0.5 * h * k1, t[i] + 0.5 * h)  ## changed to use h
k3 = f(x[i] + 0.5 * h * k2, t[i] + 0.5 * h)  ## changed to use h
k4 = f(x[i] + h * k3, t[i] + h)
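One further pitfall worth flagging (my observation, not raised in the answers): with integer initial conditions like [500, 20], the line x = np.array([x0] * n) allocates an integer array, so every RK4 update is silently truncated when it is stored. Forcing a float dtype avoids that:
# allocate the solution array as floats so stored increments are not truncated
x = np.array([x0] * n, dtype=float)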
Try
import pylab
help(pylab.rk4)
You'll find a long explanation of the pylab.rk4 command.
This is why it's not a good idea to use from x import *. It's much better to do import pylab as py; then this won't be an issue.
Be aware that even after moving your import command, any later call to rk4 will resolve to pylab's implementation rather than yours.

Estimating parameter values using optimize.curve_fit

I am attempting to estimate the parameters of the non-linear equation:
y(x1, x2) = x1 / (A + B*x1 + C*x2)
using the method outlined in the answer to this question, but can find no documentation on how to pass multiple independent variables to the curve_fit function appropriately.
Specifically, I am attempting to estimate plant biomass (y) on the basis of the density of the plant (x1) and the density of a competitor (x2). I have three exponential equations (of the form y = a * [1 - exp(-b * x1)]) for the relationship between plant density and plant biomass, with different parameter values for three competitor densities:
For x2 == 146: y = 1697 * [1 - exp(-0.010 * x1)]
For x2 == 112: y = 1994 * [1 - exp(-0.023 * x1)]
For x2 == 127: y = 1022 * [1 - exp(-0.008 * x1)]
I would therefore like to write code along the lines of:
def model_func(self, x_vals, A, B, C):
    return x_vals[0] / (A + B * x_vals[0] + C * x_vals[1])

def fit_nonlinear(self, d, y):
    opt_parms, parm_cov = sp.optimize.curve_fit(self.model_func, [x1, x2], y,
                                                p0=(0.2, 0.004, 0.007), maxfev=10000)
    A, B, C = opt_parms
    return A, B, C
However, I have not found any documentation on how to format the argument y (passed to fit_nonlinear) to capture the two-dimensional nature of the x_vals (the documentation for curve_fit states that y should be an N-length sequence). Is what I am attempting possible with curve_fit?
Based on your comment above, you want to think in terms of the flat versions of the matrices: if you take the same element from both the X1 and X2 matrices, that pair of values has a corresponding y-value. Here's a minimal example:
import numpy as np
import scipy as sp
import scipy.optimize

x1 = np.linspace(-1, 1)
x2 = np.linspace(-1, 1)
X1, X2 = np.meshgrid(x1, x2)

def func(X, A, B, C):
    X1, X2 = X
    return X1 / (A + B * X1 + C * X2)

# generate some noisy points corresponding to a set of parameter values
p_ref = [0.15, 0.001, 0.05]
Yref = func([X1, X2], *p_ref)
std = Yref.std()
Y = Yref + np.random.normal(scale=0.1 * std, size=Yref.shape)

# fit a curve to the noisy points
p0 = (0.2, 0.004, 0.007)
p, cov = sp.optimize.curve_fit(func, [X1.flat, X2.flat], Y.flat, p0=p0)

# if the parameters from the fit are close to the ones used
# to generate the noisy points, we succeeded
print(p_ref)
print(p)
