I am trying to understand what's wrong with the code I have cobbled together. The code below is one of many implementations I have written today to solve the Lotka-Volterra differential equations (a system of two ODEs), and it is the one I have brought closest to the desired result.
import matplotlib.pyplot as plt
import numpy as np
from pylab import *
def rk4( f, x0, t ):
    """
    4th order Runge-Kutta method implementation to solve x' = f(x,t) with x(t[0]) = x0.
    USE:
        x = rk4(f, x0, t)
    INPUT:
        f  - function of x and t equal to dx/dt.
        x0 - the initial condition(s).
             Specifies the value of x at t = t[0] (initial).
             Can be a scalar or a vector (NumPy array).
             Example: [x0, y0] = [500, 20]
        t  - a time vector (array) at which the values of the solution are computed.
             t[0] is considered the initial time point.
             h = t[i+1] - t[i] determines the step size h, as suggested by the algorithm.
             Example: t = np.linspace( 0, 500, 200 ) creates 200 time points between 0 and 500;
             increasing the number of points in the interval automatically decreases the step size.
    OUTPUT:
        x - An array containing the solution evaluated at each point in the t array.
    """
    n = len( t )
    x = np.array( [ x0 ] * n )  # creating an array of length n
    for i in range( n - 1 ):
        h = t[i+1] - t[i]  # step size, dependent on the time vector
        # Starting below - the implementation of the RK4 algorithm;
        # for further information visit http://en.wikipedia.org/wiki/Runge-Kutta_methods
        # k1 is the increment based on the slope at the beginning of the interval (same as Euler)
        # k2 is the increment based on the slope at the midpoint of the interval (with x + 0.5 * k1)
        # k3 is AGAIN the increment based on the slope at the midpoint (with x + 0.5 * k2)
        # k4 is the increment based on the slope at the end of the interval
        k1 = f( x[i], t[i] )
        k2 = f( x[i] + 0.5 * k1, t[i] + 0.5 * h )
        k3 = f( x[i] + 0.5 * k2, t[i] + 0.5 * h )
        k4 = f( x[i] + h * k3, t[i] + h )
        # finally computing the weighted average and storing it in the x-array
        x[i+1] = x[i] + h * ( ( k1 + 2.0 * ( k2 + k3 ) + k4 ) / 6.0 )
    return x
# model
def model(state, t):
    """
    A function that returns an array containing the Lotka-Volterra differential equations.
    Parameter assignment convention:
        a  natural growth rate of the prey
        b  chance of being eaten by a predator
        c  dying rate of the predators per week
        d  chance of catching a prey
    """
    x, y = state  # the state vector; initially this holds the initial conditions
    a = 0.08
    b = 0.002
    c = 0.2
    d = 0.0004
    return np.array([ x*(a - b*y), -y*(c - d*x) ])  # corresponds to [dx/dt, dy/dt]
################################################################
# initial conditions for the system
x0 = 500
y0 = 20
# vector of times
t = np.linspace( 0, 500, 1000 )
result = rk4( model, [x0,y0], t )
print(result)
plt.plot(t,result)
plt.xlabel('Time')
plt.ylabel('Population Size')
plt.legend(('x (prey)','y (predator)'))
plt.title('Lotka-Volterra Model')
plt.show()
The above code produces incorrect output (plot omitted).
However, if I move the from pylab import * line to right above the initial conditions, I get the correct output.
Why does this happen, and how can I fix it?
pylab defines its own implementation of rk4, which it takes from matplotlib:
In [1]: import pylab
In [2]: pylab.rk4
Out[2]: <function matplotlib.mlab.rk4>
When you do a wildcard import like from pylab import *, you will override any local functions with the same name.
In particular, here you're redefining your own rk4 implementation (ie, the code you've written is never used).
This is why you should never do a wildcard import like that. pylab is particularly problematic, in that it defines several functions (such as any and all) which, for certain inputs, return completely different results than the Python builtins.
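To see the effect in isolation, here's a minimal sketch (using plot as the clashing name, since recent matplotlib versions no longer ship mlab.rk4):

def plot(data):
    """My own helper, about to be clobbered."""
    return "my plot"

from pylab import *   # silently rebinds plot to matplotlib's plot
print(plot)           # matplotlib's function, not yours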
Anyway, the root cause of your problem seems to be that your RK4 implementation is incorrect.
You need to use the step size in your calculation of k_n.
For example, here's a small snippet of my own RK4 implementation (which, I'll admit, is tuned for speed rather than readability):
while not target(xs):
    ...
    # Do RK4
    self.f(xs, self.k1)
    self.f(xs + halfh*self.k1, self.k2)
    self.f(xs + halfh*self.k2, self.k3)
    self.f(xs + self.h*self.k3, self.k4)
    xs += sixthh*(self.k1 + self.k2 + self.k2 + self.k3 + self.k3
                  + self.k4)
You'll note that the entire state vector is multiplied by h, not just the time component.
Try fixing that up in your own code and seeing if the result is the same.
(In my opinion, the habit of Wikipedia and other references of treating time as a special case is the cause of a lot of these problems. Your time variable t is simply a special state whose derivative is t' = 1, as the sketch below illustrates.)
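As an aside, here is a sketch of that idea applied to the model from the question (my own illustration, not part of the original code): fold t into the state vector, and every RK stage becomes the uniform form f(x + coeff * h * k) with no separate time argument.

import numpy as np

def model_aug(s):
    # time rides along as an extra state variable with derivative dt/dt = 1
    x, y, t = s
    a, b, c, d = 0.08, 0.002, 0.2, 0.0004
    return np.array([x*(a - b*y), -y*(c - d*x), 1.0])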
So for your own code, I believe, but haven't tested, that something like this should work:
k1 = f( x[i], t[i] )
k2 = f( x[i] + 0.5 * h * k1, t[i] + 0.5 * h ) ## changed to use h
k3 = f( x[i] + 0.5 * h * k2, t[i] + 0.5 * h ) ## changed to use h
k4 = f( x[i] + h * k3, t[i] + h )
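If it helps, here is a quick cross-check one might run after applying that fix (a sketch; it assumes scipy is available, and uses float initial conditions because np.array([x0] * n) with integer x0 silently truncates every stored step):

import numpy as np
from scipy.integrate import odeint

t = np.linspace(0, 500, 1000)
x_fixed = rk4(model, [500.0, 20.0], t)    # the corrected rk4 from above
x_ref = odeint(model, [500.0, 20.0], t)   # scipy's reference integrator
print(np.max(np.abs(x_fixed - x_ref)))    # should be small at this step size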
Try
import pylab
help(pylab.rk4)
You'll find a long explanation of the pylab.rk4 command.
This is why it's not a good idea to use from x import *. It's much better to do import pylab as py and then this won't be an issue.
Be aware that even after moving your import command, any later call to rk4 will resolve to pylab's implementation rather than the one you wrote.
Background
I'm attempting to write a Python implementation of this answer over on Math SE. You may find the following background useful.
Problem
I have an experimental setup consisting of three (3) receivers, with known locations [xi, yi, zi], and a transmitter with unknown location [x,y,z] emitting a signal at known velocity v. This signal arrives at the receivers at known times ti. The time of emission, t, is unknown.
I wish to find the angle of arrival (i.e. the transmitter's polar coordinates theta and phi), given only this information.
Solution
It is not possible to locate the transmitter exactly with only three (3) receivers, except in a handful of unique cases (there are several great answers across Math SE explaining why this is the case). In general, at least four (and, in practice, >>4) receivers are required to uniquely determine the rectangular coordinates of the transmitter.
The direction to the transmitter, however, may be "reliably" estimated. Let vi be the position vector of receiver i, ti the time of signal arrival at receiver i, and n the unit vector pointing in the (approximate) direction of the transmitter. We obtain the following equations:
<n, vj - vi> = v(ti - tj)
(where < > denotes the scalar product)
...for all pairs of indices i,j. Together with |n| = 1, the system has 2 solutions in general, symmetric by reflection in the plane through vi/vj/vk. We may then determine phi and theta by simply writing n in polar coordinates.
Implementation
I've attempted to write a python implementation of the above solution, using scipy's fsolve.
from dataclasses import dataclass
import scipy.optimize
import random
import math

c = 299792

@dataclass
class Vertexer:
    roc: list

    def fun(self, var, dat):
        (x, y, z) = var
        eqn_0 = (x * (self.roc[0][0] - self.roc[1][0])) + (y * (self.roc[0][1] - self.roc[1][1])) + (z * (self.roc[0][2] - self.roc[1][2])) - c * (dat[1] - dat[0])
        eqn_1 = (x * (self.roc[0][0] - self.roc[2][0])) + (y * (self.roc[0][1] - self.roc[2][1])) + (z * (self.roc[0][2] - self.roc[2][2])) - c * (dat[2] - dat[0])
        eqn_2 = (x * (self.roc[1][0] - self.roc[2][0])) + (y * (self.roc[1][1] - self.roc[2][1])) + (z * (self.roc[1][2] - self.roc[2][2])) - c * (dat[2] - dat[1])
        norm = math.sqrt(x**2 + y**2 + z**2) - 1
        return [eqn_0, eqn_1, eqn_2, norm]

    def find(self, dat):
        result = scipy.optimize.fsolve(self.fun, (0, 0, 0), args=dat)
        print('Solution ', result)
# Crude code to simulate a source, receivers at random locations
x0 = random.randrange(0,50); y0 = random.randrange(0,50); z0 = random.randrange(0,50)
x1 = random.randrange(0,50); x2 = random.randrange(0,50); x3 = random.randrange(0,50);
y1 = random.randrange(0,50); y2 = random.randrange(0,50); y3 = random.randrange(0,50);
z1 = random.randrange(0,50); z2 = random.randrange(0,50); z3 = random.randrange(0,50);
t1 = math.sqrt((x0-x1)**2 + (y0-y1)**2 + (z0-z1)**2)/c
t2 = math.sqrt((x0-x2)**2 + (y0-y2)**2 + (z0-z2)**2)/c
t3 = math.sqrt((x0-x3)**2 + (y0-y3)**2 + (z0-z3)**2)/c
print('Actual coordinates ', x0,y0,z0)
myVertexer = Vertexer([[x1,y1,z1], [x2,y2,z2], [x3,y3,z3]])
myVertexer.find([t1,t2,t3])
Unfortunately, I have far more experience solving such problems in C/C++ using GSL, and have limited experience working with scipy and the like. I'm getting the error:
TypeError: fsolve: there is a mismatch between the input and output shape of the 'func' argument 'fun'.Shape should be (3,) but it is (4,).
...which seems to suggest that fsolve expects a square system.
How may I solve this rectangular system? I can't seem to find anything useful in the scipy docs.
If necessary, I'm open to using other (Python) libraries.
As you already mentioned, fsolve expects a system with N variables and N equations, i.e. it finds a root of a function F: R^N -> R^N. Since you have four equations, you simply need to add a fourth variable. Note also that fsolve is a legacy function, and it's recommended to use root instead. Last but not least, note that sqrt(x^2 + y^2 + z^2) = 1 is equivalent to x^2 + y^2 + z^2 = 1, and the latter is much less susceptible to rounding errors caused by the finite differences used when approximating the Jacobian of F.
Long story short, your class should look like this:
from dataclasses import dataclass
from scipy.optimize import root

c = 299792  # as before

@dataclass
class Vertexer:
    roc: list

    def fun(self, var, dat):
        x, y, z, *_ = var
        eqn_0 = (x * (self.roc[0][0] - self.roc[1][0])) + (y * (self.roc[0][1] - self.roc[1][1])) + (z * (self.roc[0][2] - self.roc[1][2])) - c * (dat[1] - dat[0])
        eqn_1 = (x * (self.roc[0][0] - self.roc[2][0])) + (y * (self.roc[0][1] - self.roc[2][1])) + (z * (self.roc[0][2] - self.roc[2][2])) - c * (dat[2] - dat[0])
        eqn_2 = (x * (self.roc[1][0] - self.roc[2][0])) + (y * (self.roc[1][1] - self.roc[2][1])) + (z * (self.roc[1][2] - self.roc[2][2])) - c * (dat[2] - dat[1])
        norm = x**2 + y**2 + z**2 - 1
        return [eqn_0, eqn_1, eqn_2, norm]

    def find(self, dat):
        result = root(self.fun, (0, 0, 0, 0), args=dat)
        if result.success:
            print('Solution ', result.x[:3])
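The question's final goal is the angles; a minimal sketch of that last conversion (the angle conventions are an assumption on my part: theta measured from the +z axis, phi as the azimuth in the x-y plane):

import math

def to_spherical(n):
    x, y, z = n
    r = math.sqrt(x**2 + y**2 + z**2)  # ~1 for the (approximately) unit vector from root
    theta = math.acos(z / r)           # polar angle from +z
    phi = math.atan2(y, x)             # azimuth in the x-y plane
    return theta, phi

Keep in mind the reflection ambiguity noted above: the mirrored solution yields a second (theta, phi) pair.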
I have defined the following functions in Python:
from math import *
import numpy as np
import cmath

def BSM_CF(u, s0, T, r, sigma):
    realp = -0.5*u**2*sigma**2*T
    imagp = u*(s0+(r-0.5*sigma**2)*T)
    zc = complex(realp, imagp)
    return cmath.exp(zc)

def BSM_characteristic_function(v, x0, T, r, sigma):
    cf_value = np.exp(((x0 / T + r - 0.5 * sigma ** 2) * 1j * v -
                       0.5 * sigma ** 2 * v ** 2) * T)
    return cf_value
Parameters:
alpha = 1.5
K = 90
S0 = 100
T = 1
r = 0.05
sigma = 0.2
k = np.log(K / S0)
s0 = np.log(S0 / S0)
g = 1 # factor to increase accuracy
N = 2 ** 2
eta = 0.15
eps = (2*np.pi)/(N*eta)
b = 0.5 * N * eps - k
u = np.arange(1, N + 1, 1)
vo = eta * (u - 1)
v = vo - (alpha + 1) * 1j
BSMCF = BSM_characteristic_function(v, s0, T, r, sigma)
BSMCF_v2 = BSM_CF(0, s0, T, r, sigma)
print(BSMCF)
print(BSMCF_v2)
Both are supposed to be the same function, but I get different results. How can I fix BSM_CF so that it gives the same result as BSM_characteristic_function? The idea is to get an array of 4 values, as BSM_characteristic_function returns.
Your calls are not identical. You are passing v in the first call and 0 in the second call. If I pass 0 for both, the results are identical. If I pass v, it complains because you can't call complex on a vector.
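For completeness, a sketch of a vectorized variant that accepts the array v (my own suggestion, not tested against your full script: the algebra is unchanged, only complex/cmath is replaced by numpy's elementwise complex arithmetic):

import numpy as np

def BSM_CF_vec(u, s0, T, r, sigma):
    # names kept to mirror BSM_CF; when u is complex these are not
    # literally the real and imaginary parts
    realp = -0.5 * u**2 * sigma**2 * T
    imagp = u * (s0 + (r - 0.5 * sigma**2) * T)
    return np.exp(realp + 1j * imagp)  # np.exp works elementwise on complex arrays

With u = v this should reproduce BSM_characteristic_function, since (x0 / T + r - 0.5 * sigma ** 2) * T = x0 + (r - 0.5 * sigma ** 2) * T.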
Numeric computation is not always identical to symbolic algebra. In the first formula you use complex computation as an alternative, which can introduce rounding errors in the complex part. I came across such mistakes quite often when I used Mathematica, which loves to transform a real formula into a complex one before doing the numeric computation.
I'm trying to solve a 1D wave equation for the pile with periodic BC (periodic load).
I'm pretty sure about my discretization formulas. The only thing I'm not sure about is the periodic BC and the time t in it, i.e. sin(omega*t).
When I set it up as it is right now, it gives me a weird displacement profile. However, if I set it up as sin(omega*1) or sin(omega*2), etc., it resembles a sine wave, but that basically means sin(omega*t), i.e. sin(2*pi*f*t), equals 0 whenever t is an integer.
I coded everything up in a Jupyter Notebook together with the visualization part, but the solution is nowhere near a propagating sine wave.
Here is the relevant Python code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
def oned_wave(L, dz, T, p0, Ep, ro, f):
    """Solve u_tt = c^2 * u_xx on (0,L)x(0,T]. Assume C = 1."""
    p = p0
    E = Ep
    omega = 2*np.pi*f
    c = np.sqrt(E/ro)
    C = 1  # Courant number
    Nz = int(round(L/dz))
    z = np.linspace(0, L, Nz+1)      # mesh points in space
    dt = dz/c                        # time step based on Courant number
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt+1)  # mesh points in time
    C2 = C**2                        # help variable in the scheme
    # Make sure dz and dt are compatible with z and t
    dz = z[1] - z[0]
    dt = t[1] - t[0]
    w = np.zeros(Nz+1)      # solution array at the new time level
    w_n = np.zeros(Nz+1)    # solution at 1 time level back
    w_nm1 = np.zeros(Nz+1)  # solution at 2 time levels back
    # Set initial condition into w_n
    for i in range(0, Nz+1):
        w_n[i] = 0
    result_matrix = w_n[:]  # solution matrix where each row is the displacement at a given time step
    # Special formula for the first time step
    for i in range(1, Nz):
        w[i] = 0.5*C2 * w_n[i-1] + (1 - C2) * w_n[i] + 0.5*C2 * w_n[i+1]
    # Set BC
    w[0] = (1 - C2) * w_n[i] + C2 * w_n[i+1] - C2*dz*((p*np.sin(omega*dt))/E)  # this is where, I think, the mistake is: sin(omega*t)
    w[Nz] = 0
    result_matrix = np.vstack((result_matrix, w))  # append a row to the solution matrix
    w_nm1[:] = w_n; w_n[:] = w  # switch variables before the next step
    for n in range(1, Nt):
        # Update all inner points at time t[n+1]
        for i in range(1, Nz):
            w[i] = - w_nm1[i] + C2 * w_n[i-1] + 2*(1 - C2) * w_n[i] + C2 * w_n[i+1]
        # Set BC
        w[0] = - w_nm1[i] + 2*(1 - C2) * w_n[i] + 2*C2 * w_n[i+1] - 2*dz*((p*np.sin(omega*(dt*n)))/E)  # this is where, I think, the mistake is: sin(omega*t)
        w[Nz] = 0
        result_matrix = np.vstack((result_matrix, w))  # append a row to the solution matrix
        w_nm1[:] = w_n; w_n[:] = w  # switch variables before the next step
    return result_matrix
My Jupyter Notebook document.
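For reference, a minimal driver for the function above (a sketch; the parameter values are illustrative assumptions, not the ones from my notebook):

import numpy as np
import matplotlib.pyplot as plt

# illustrative values: 10 m pile, steel-like properties, 200 Hz load
u = oned_wave(L=10.0, dz=0.05, T=0.01, p0=1e3, Ep=2.1e11, ro=7850.0, f=200.0)

plt.imshow(u, aspect='auto', cmap='RdBu')  # rows = time levels, columns = depth nodes
plt.xlabel('node'); plt.ylabel('time level')
plt.colorbar(label='displacement')
plt.show()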
I am trying to solve an ODE using the 4th-order Runge-Kutta method. The problem is that the tmax defined in my code does not seem to affect the time range of the plot: changing tmax only changes the shape of the curve, while the time range instead depends on nT.
import numpy as np
from scipy.integrate import ode

# generated electricity
g = 1.0
# rate constants
k_diss = 0.1
k_em = .01

# 4th order RK
def RK4(func, yi, ti, dt):
    k1 = dt*func(yi, ti)
    k2 = dt*func(yi + 0.5 * k1, ti + 0.5 * dt)
    k3 = dt*func(yi + 0.5 * k2, ti + 0.5 * dt)
    k4 = dt*func(yi + k3, ti + dt)
    return yi + 1./6*(k1 + 2*k2 + 2*k3 + k4)

# variation of cEX
def df(t, c, k_diss, k_em):
    return np.array([g - 2*k_diss*c[0] - k_em*c[0]])

# time
tmax = 30.0
nT = 500
T = np.linspace(0, tmax, nT)
dT = tmax/(nT-1)

# initial condition
C0, T0 = 1.0, 0.0

# storage
C = np.zeros(nT)
C[0] = C0

# ODE solver
r = ode(df).set_integrator('dopri5')
r.set_initial_value(C0, T0).set_f_params(k_diss, k_em)

# loop over time
iT = 1
for i in range(1, nT):
    C[iT] = r.integrate(r.t + dT)
    iT += 1
On top of that, the plot should be decreasing (see below), not increasing. I have based my code on the following (where tmax does work the way it is supposed to):
def RK4(func, yi, ti, dt):
    k1 = dt*func(yi, ti)
    k2 = dt*func(yi+0.5*k1, ti+0.5*dt)
    k3 = dt*func(yi+0.5*k2, ti+0.5*dt)
    k4 = dt*func(yi+k3, ti+dt)
    return yi + 1./6*(k1 + 2*k2 + 2*k3 + k4)

# function that defines the concentration of CA
def df(c, t):
    return np.array([-k*c[0], k*c[0]])  # return the value

# time argument
tmax, nT = 30.0, 50
T = np.linspace(0, tmax, nT)
dT = tmax/(nT-1)

# rate constant
k = 0.2

# declare the array to store
RK = np.zeros((2, nT))

# initial condition
RK[0, 0] = 1.
for iT in range(1, nT):
    RK[:, iT] = RK4(df, RK[:, iT-1], T[iT-1], dT)
The plot (image not included; it shows an increasing curve):
I changed the end of the first script to
# loop over time
for iT in range(1, nT):
    C[iT] = r.integrate(T[iT])

import matplotlib.pyplot as plt
plt.plot(T, C)
plt.xlabel("time"); plt.ylabel("cEX")
plt.show()
The first change is because it is actually your intent that C[i] be the value at time T[i]; since there is a way to express this consistently, why not use it.
Since you did not include your plot commands, one can only guess your error. In all probability, you did not pass the time array to the plot command. If you write only plt.plot(C), the horizontal axis is filled with the integer indices of C, since the plot routine is given no knowledge of any other time scale.
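To see the difference directly, a small sketch with stand-in data (the exponential shape is just an assumption for illustration):

import numpy as np
import matplotlib.pyplot as plt

T = np.linspace(0, 30.0, 500)
C = 1 - np.exp(-0.2 * T)                      # stand-in data, not the real solution
plt.plot(C, label='plt.plot(C): x-axis is the index 0..499')
plt.plot(T, C, label='plt.plot(T, C): x-axis is time 0..30')
plt.legend(); plt.show()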
Let us assume I have a set of differential equations to be integrated with scipy odeint. Now my goal is to find the steady-state (I chose initial conditions such that this state exists). At the moment I have implemented something like
cond = True
while cond:
    x = integrate(interval=[0, t], steps=200)
    if var(x[-22:]) < maxvar:
        cond = False
        return mean(x)
    else:
        t *= 2
Do you have a more efficient approach?
If you are using odeint, then you already have your differential equations written as a function f(x, t) (or possibly f(x, t, *args)). If your system is autonomous (i.e. f does not actually depend on t), you can find an equilibrium by solving f(x, 0) == 0 for x. You can use, for example, scipy.optimize.fsolve to solve for the equilibrium.
The following is an example. It uses the "Coupled Spring Mass System" example from the scipy cookbook. scipy.optimize.fsolve is used to find the equilibrium solution x1 = 0.5, y1 = 0, x2 = 1.5, y2 = 0.
from scipy.optimize import fsolve

def vectorfield(w, t, p):
    """
    Defines the differential equations for the coupled spring-mass system.
    Arguments:
        w : vector of the state variables:
            w = [x1, y1, x2, y2]
        t : time
        p : vector of the parameters:
            p = [m1, m2, k1, k2, L1, L2, b1, b2]
    """
    x1, y1, x2, y2 = w
    m1, m2, k1, k2, L1, L2, b1, b2 = p
    # Create f = (x1', y1', x2', y2'):
    f = [y1,
         (-b1 * y1 - k1 * (x1 - L1) + k2 * (x2 - x1 - L2)) / m1,
         y2,
         (-b2 * y2 - k2 * (x2 - x1 - L2)) / m2]
    return f

if __name__ == "__main__":
    # Parameter values
    # Masses:
    m1 = 1.0
    m2 = 1.5
    # Spring constants
    k1 = 8.0
    k2 = 40.0
    # Natural lengths
    L1 = 0.5
    L2 = 1.0
    # Friction coefficients
    b1 = 0.8
    b2 = 0.5

    # Pack up the parameters and initial conditions:
    p = [m1, m2, k1, k2, L1, L2, b1, b2]

    # Initial guess to pass to fsolve. The second and fourth components
    # are the velocities of the masses, and we know they will be 0 at
    # equilibrium. For the positions x1 and x2, we'll try 1 for both.
    # A better guess could be obtained by solving the ODEs for some time
    # interval, and using the last point of that solution.
    w0 = [1.0, 0, 1.0, 0]

    # Find the equilibrium
    eq = fsolve(vectorfield, w0, args=(0, p))
    print("Equilibrium: x1 = {0:.1f}  y1 = {1:.1f}  x2 = {2:.1f}  y2 = {3:.1f}".format(*eq))
The output is:
Equilibrium: x1 = 0.5 y1 = 0.0 x2 = 1.5 y2 = 0.0
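A quick way to confirm this (a sketch one could append inside the main block): plug the equilibrium back into the vector field and check that the residual is essentially zero.

    # residual of the ODE right-hand side at the reported equilibrium
    print(vectorfield(eq, 0, p))  # expect values close to [0, 0, 0, 0]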
Referring to your comments, I don't see any better way. In the case where you know approximately where the system will settle (this is fairly easy to predict in some systems, like a pendulum or a charging capacitor), and you know that it will settle, the fastest way I know is to check if

abs(p[0] * x[i] + p[1] * x[i-1] + ... + p[n] * x[i-n] - mean(x[i-n:i])) < epsilon

The difficulty is determining the size of epsilon and the parameters p[0:n] to tune this detection.
Clearly you are already using this window method:

epsilon = maxvar
p[0:n] = 1
n = 22

and have optimized it by removing the parameters for the filter and simply using the variance.
For many systems (where settling looks like a first-order differential equation) the filter parameters turn out to look like this:

p[0] = n/2
p[n] = n/2
p[1:n-1] = 0

meaning you can make things go faster if you replace the calculation of the state variance with this simple test comparing the two ends of the window:

if abs(x[-22] - x[-1]) > epsilon:

This will not detect settling correctly if small disturbances are still present; since we don't know your system, it is difficult to say more.
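For what it's worth, a sketch of that window test as a reusable helper (the window length and epsilon are assumptions to be tuned per system):

import numpy as np

def settled(x, window=22, epsilon=1e-6):
    """True when the newest sample differs little from one `window` steps back."""
    return len(x) > window and abs(x[-window] - x[-1]) < epsilon

# demonstration on a synthetic settling trajectory (assumed data):
x = 1.0 - np.exp(-np.linspace(0, 20, 400))
print(settled(x, epsilon=1e-3))        # True: the tail is flat
print(settled(x[:100], epsilon=1e-3))  # False: still rising there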