Implement a system of stochastic ODEs using Python

I want to add noise to a system of ODEs from Ramos et al. (2021) (a kind of SIR model; the full system of equations is given in the paper).
I implemented the Milstein scheme for the relevant equations:
# Create the Brownian motion increments, one array per state variable
np.random.seed(1)
dS = np.sqrt(dt) * np.random.randn(tmax)
dE = np.sqrt(dt) * np.random.randn(tmax)
dI = np.sqrt(dt) * np.random.randn(tmax)
dIu = np.sqrt(dt) * np.random.randn(tmax)
dDu = np.sqrt(dt) * np.random.randn(tmax)
dHR = np.sqrt(dt) * np.random.randn(tmax)
dHD = np.sqrt(dt) * np.random.randn(tmax)
dB = [dS, dE, dI, dIu, dDu, dHR, dHD]
sigma = [0.5, 0, 0, 0, 0, 0, 0]
# Body of the systemf function: it evaluates the right-hand side of the
# system at time t for each variant i
neweS = 0  # accumulates the new exposures over all variants
for i in range(nvariants):
    newe = S[0]*(mbetae[i]*E[i] + mbetai[i]*I[i] + mbetaiu[i]*Iu[i] + mbetahr[i]*HR[i] + mbetahd[i]*HD[i])/totalpop
    newi = gammae * E[i]
    newhid = gammai * I[i]
    newhiu = gammaiu * Iu[i]
    newr = gammahr * HR[i]
    newd = gammahd * HD[i]
    newq = gammaq * Q[i]
    neweS = neweS + newe
    fE[i] = newe - newi
    fI[i] = newi - newhid
    fIu[i] = (1 - theta[i] - omegau)*newhid - newhiu
    fHR[i] = p[i]*(theta[i] - fatrate[i])*newhid - newr
    fHD[i] = fatrate[i]*newhid - newd
    fQ[i] = (1 - p[i])*(theta[i] - fatrate[i])*newhid + newr - newq
fS[0] = -neweS - vjRK[int(mt.floor(t))]
return [fS, fE, fI, fIu, fHR, fHD, fQ]
for t in range(delayini, tmax - 1):
    fsyseval = systemf(t, states[t], beta[2*t], gamma[2*t], frate[2*t], theta[2*t], p[2*t], omegau[2*t], vjsum)
    # Run the Milstein scheme for every state and variant
    for s in range(numstates):
        for i in range(nvariants):
            states[t+1][s][i] = (states[t][s][i]
                                 + fsyseval[s][i]*dt
                                 + sigma[s]*dB[s][t]*states[t][s][i]
                                 + 0.5*sigma[s]**2 * states[t][s][i] * (dB[s][t]**2 - dt))
The problem is that when I plot the results for each variable (susceptible, infected, ...), the output is very strange and has nothing to do with the deterministic model (I see no fluctuations, and the shape is not even close to the deterministic one), which seems illogical. So I suspect I implemented the stochastic scheme incorrectly and missed something.
Now I want to know whether my implementation of the stochasticity is correct (and if so, why the results show no fluctuations despite the high noise level).
If not, how can I add the stochastic part correctly?
Thanks in advance for your help.
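For reference, here is a minimal, self-contained Milstein sketch for geometric Brownian motion, dX = mu*X*dt + sigma*X*dB, which has the same multiplicative-noise form as the update above (all parameter values are illustrative, not taken from the model):

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)
mu, sigma = 0.05, 0.5
dt, tmax = 0.01, 1000
dB = np.sqrt(dt) * np.random.randn(tmax)   # Brownian increments
X = np.empty(tmax)
X[0] = 100.0
for t in range(tmax - 1):
    drift = mu * X[t] * dt
    diffusion = sigma * X[t] * dB[t]
    milstein = 0.5 * sigma**2 * X[t] * (dB[t]**2 - dt)   # Milstein correction
    X[t + 1] = X[t] + drift + diffusion + milstein
plt.plot(X)
plt.show()

A path produced this way fluctuates visibly with sigma = 0.5, so the correction term itself does not suppress noise; note also that in the snippet above sigma is non-zero only for the first state, so only S receives noise directly.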

Runtime error: Factor is exactly singular

I am trying to implement a two-temperature model, given by the following equations:
C_e ∂T_e/∂t = ∇·[k_e ∇T_e] − G(T_e − T_ph) + A(r,t)
C_ph ∂T_ph/∂t = ∇·[k_ph ∇T_ph] + G(T_e − T_ph)
Code
from fipy.tools import numerix
import scipy
import fipy
import numpy as np
from fipy import CylindricalGrid1D
from fipy import Variable, CellVariable, TransientTerm, DiffusionTerm, Viewer, LinearLUSolver, LinearPCGSolver, \
LinearGMRESSolver, ImplicitDiffusionTerm, Grid1D
FIPY_SOLVERS = scipy
## Mesh
nr = 50
dr = 1e-7
# r = nr * dr
mesh = CylindricalGrid1D(nr=nr, dr=dr, origin=0)
x = mesh.cellCenters[0]
# Variables
T_e = CellVariable(name="electronTemp", mesh=mesh,hasOld=True)
T_e.setValue(300)
T_ph = CellVariable(name="phononTemp", mesh=mesh, hasOld=True)
T_ph.setValue(300)
G = CellVariable(name="EPC", mesh=mesh)
t = Variable()
# Material parameters
C_e = CellVariable(name="C_e", mesh=mesh)
k_e = CellVariable(name="k_e", mesh=mesh)
C_ph = CellVariable(name="C_ph", mesh=mesh)
k_ph = CellVariable(name="k_ph", mesh=mesh)
C_e = 4.15303 - (4.06897 * numerix.exp(T_e / -85120.8644))
C_ph = 4.10446 - 3.886 * numerix.exp(-T_ph / 373.8)
k_e = 0.1549 * T_e**-0.052
k_ph =1.24 + 16.29 * numerix.exp(-T_ph / 151.57)
G = numerix.exp(21.87 + 10.062 * numerix.log(numerix.log(T_e )- 5.4))
# Boundary conditions
T_e.constrain(300, where=x > 4.5e-6)
T_ph.constrain(300, where=x > 4.5e-6)
# Source: A(r,t) = a * D(r) * tau**-1 * exp(-t/tau),  D(r) = S_e * exp(-r**2/sig**2) / sqrt(2*pi*sig**2)
sig = 1.0e-6
tau = 1e-15
S_e = 35
d_r = (S_e * 1.6e-9 * numerix.exp(-x**2 /sig**2)) / (numerix.sqrt(2. * 3.14 * sig**2))
A_t = numerix.exp(-t/tau)
a = (numerix.sqrt(2. * 3.14)) / (3.14 * sig)
A_r = a * d_r * tau**-1 * A_t
eq0 = (TransientTerm(var=T_e, coeff=C_e) == DiffusionTerm(var=T_e, coeff=k_e) - G*(T_e - T_ph) + A_r)
eq1 = (TransientTerm(var=T_ph, coeff=C_ph) == DiffusionTerm(var=T_ph, coeff=k_ph) + G*(T_e - T_ph))
eq = eq0 & eq1
dt = 1e-18
steps = 7000
elapsed = 0.
vi = Viewer((T_e, T_ph), datamin=0., datamax=2e4)
for step in range(steps):
    T_e.updateOld()
    T_ph.updateOld()
    vi.plot()
    res = 1e100
    dt *= 1.1
    while res > 1:
        res = eq.sweep(dt=dt)
        print(t, res)
    t.setValue(t + dt)
Problem
The code works fine with a very small dt = 1e-18, but I need to run it until an elapsed time of 1e-10.
With this time step it would take a very long time, and when setting dt *= 1.1 the residuals at some point start to increase and then I get the following runtime error:
factor is exactly singular
Even with a very small increment, dt *= 1.005, the same issue pops up.
Using dt *= 1.001 runs the code for quite a long time, and then the residual gets stuck at a certain value.
Questions
Is there any error in the FiPy formalism of the equations?
What causes the error?
Is the error because of the time step increase? If yes, how can I increase my time step?
I've made a few more changes to the code that can get you to an elapsed time of 1e-10. The main changes are:
Using ImplicitSourceTerm for the terms with G. This stabilizes the solution.
Applying underRelaxation=0.5 in the sweep step. This slows down the updates in the sweep loop so the feedback loop is damped.
Removing FIPY_SOLVERS = scipy. This isn't doing anything. FIPY_SOLVERS is an environment variable that you set outside of the Python environment.
The way the boundary conditions were applied seemed strange, so I applied them in a more canonical way.
The sweep loop is fixed at 10 sweeps to reach a steady state quickly. Note that as the solution gets close to a stable steady state, the residual won't necessarily keep improving. You probably want to go back to residual checks if you need an accurate transient.
from fipy.tools import numerix
import scipy
import fipy
import numpy as np
from fipy import CylindricalGrid1D
from fipy import Variable, CellVariable, TransientTerm, DiffusionTerm, Viewer, LinearLUSolver, LinearPCGSolver, \
LinearGMRESSolver, ImplicitDiffusionTerm, Grid1D, ImplicitSourceTerm
## Mesh
nr = 50
dr = 1e-7
# r = nr * dr
mesh = CylindricalGrid1D(nr=nr, dr=dr, origin=0)
x = mesh.cellCenters[0]
# Variables
T_e = CellVariable(name="electronTemp", mesh=mesh,hasOld=True)
T_e.setValue(300)
T_ph = CellVariable(name="phononTemp", mesh=mesh, hasOld=True)
T_ph.setValue(300)
G = CellVariable(name="EPC", mesh=mesh)
t = Variable()
# Material parameters
C_e = CellVariable(name="C_e", mesh=mesh)
k_e = CellVariable(name="k_e", mesh=mesh)
C_ph = CellVariable(name="C_ph", mesh=mesh)
k_ph = CellVariable(name="k_ph", mesh=mesh)
C_e = 4.15303 - (4.06897 * numerix.exp(T_e / -85120.8644))
C_ph = 4.10446 - 3.886 * numerix.exp(-T_ph / 373.8)
k_e = 0.1549 * T_e**-0.052
k_ph =1.24 + 16.29 * numerix.exp(-T_ph / 151.57)
G = numerix.exp(21.87 + 10.062 * numerix.log(numerix.log(T_e )- 5.4))
# Boundary conditions
T_e.constrain(300, where=mesh.facesRight)
T_ph.constrain(300, where=mesh.facesRight)
# Source: A(r,t) = a * D(r) * tau**-1 * exp(-t/tau),  D(r) = S_e * exp(-r**2/sig**2) / sqrt(2*pi*sig**2)
sig = 1.0e-6
tau = 1e-15
S_e = 35
d_r = (S_e * 1.6e-9 * numerix.exp(-x**2 /sig**2)) / (numerix.sqrt(2. * 3.14 * sig**2))
A_t = numerix.exp(-t/tau)
a = (numerix.sqrt(2. * 3.14)) / (3.14 * sig)
A_r = a * d_r * tau**-1 * A_t
eq0 = (TransientTerm(var=T_e, coeff=C_e) ==
       DiffusionTerm(var=T_e, coeff=k_e) -
       ImplicitSourceTerm(var=T_e, coeff=G) +
       ImplicitSourceTerm(var=T_ph, coeff=G) +
       A_r)
eq1 = (TransientTerm(var=T_ph, coeff=C_ph) ==
       DiffusionTerm(var=T_ph, coeff=k_ph) +
       ImplicitSourceTerm(var=T_e, coeff=G) -
       ImplicitSourceTerm(var=T_ph, coeff=G))
eq = eq0 & eq1
dt = 1e-18
steps = 7000
elapsed = 0.
vi = Viewer((T_e, T_ph), datamin=0., datamax=2e4)
for step in range(steps):
    T_e.updateOld()
    T_ph.updateOld()
    vi.plot()
    res = 1e100
    dt *= 1.1
    count = 0
    while count < 10:
        res = eq.sweep(dt=dt, underRelaxation=0.5)
        print(t, res)
        count += 1
    print('elapsed:', t.value)
    t.setValue(t + dt)
Regarding your questions:
Is there any error in the FiPy formalism of the equations?
Actually, no. There is nothing wrong with the formalism, but it's better to use ImplicitSourceTerm.
What causes the error?
There are two sources of instability in this system. The source terms inside the equations, when written explicitly, are unstable above a certain time step; using an ImplicitSourceTerm removes this instability. There is also some sort of instability in the coupling of the equations; I think that using under-relaxation helps with that.
Is the error because of the time step increase? If yes, how can I increase my time step?
Explained above.
In addition to @wd15's answer:
Your equations are extremely non-linear. You will likely benefit from Newton iterations to get decent convergence.
As @TimRoberts said, geometrically increasing the time step without bound is probably not a good idea.
I've recently posted a package called steppyngstounes that takes care of adapting timesteps. Although a standalone package, it's intended to work with FiPy. For example, you could change your solve loop to this:
from steppyngstounes import FixedStepper, PIDStepper

T_e.updateOld()
T_ph.updateOld()

for checkpoint in FixedStepper(start=0, stop=1e-10, size=1e-12):
    for step in PIDStepper(start=checkpoint.begin,
                           stop=checkpoint.end,
                           size=dt):
        res = 1e100
        for sweep in range(10):
            res = eq.sweep(dt=dt, underRelaxation=0.5)
            print(t, sweep, res)
        if step.succeeded(error=res / 1000):
            T_e.updateOld()
            T_ph.updateOld()
            t.value = step.end
        else:
            T_e.value = T_e.old
            T_ph.value = T_ph.old
        print('elapsed:', t.value)
    # the last step might have been smaller than possible,
    # if it was near the end of the checkpoint range
    dt = step.want
    _ = checkpoint.succeeded()
    vi.plot()
This code will update the viewer every 1e-12 time units and adaptively make its way between those checkpoints. There are other steppers in the package that would facilitate taking geometrically or exponentially increasing checkpoints, if that keeps things more interesting.
You could probably get better overall performance by sweeping fewer times and letting the adapter take much smaller time steps in the beginning. I found that no time step was small enough to get the initial residual lower than 777.9. After the first couple of steps, the error metric could probably be much more aggressive, giving more accurate results.

Solving an integral equation with uncertainties, using the fsolve and uncertainties packages in Python

I have some variables that are uncertain:
w_m = u.ufloat(0.1430, 0.0011)
z_rec = u.ufloat(1089.92, 0.25)
theta_srec = u.ufloat(0.0104110, 0.0000031)
r_srec = u.ufloat(144.43, 0.26)
and some constant values
c = 299792.458 # speed of light in [km/s]
N_eff = 3.046
w_r = 2.469 * 10**(-5) * (1 + (7/8)*(4/11)**(4/3) * N_eff)
My problem is that I need to solve an integral defined by this function:
def D_zrec(z):
    return (c/100) / sqrt(w_m * (1+z)**3 + w_r * (1+z)**4 + (h**2 - w_m - w_r))
This function is integrated over z, but it also contains an unknown h that we need to find, with its corresponding uncertainty. So I need to write code that finds h.
Here is my full code
from numpy import sqrt, vectorize
from scipy.integrate import quad
import uncertainties as u
from uncertainties.umath import *
from scipy.optimize import fsolve
#### Important Parameters #####
c = 299792.458 # speed of light in [km/s]
N_eff = 3.046
w_r = 2.469 * 10**(-5) * (1 + (7/8)*(4/11)**(4/3) * N_eff)
w_m = u.ufloat(0.1430, 0.0011)
z_rec = u.ufloat(1089.92, 0.25)
theta_srec = u.ufloat(0.0104110, 0.0000031)
r_srec = u.ufloat(144.43, 0.26)
D_zrec_true = r_srec / theta_srec
@u.wrap
def D_zrec_finder(h, w_m, z_rec, D_zrec_true):
    def D_zrec(z):
        return (c/100) / sqrt(w_m * (1+z)**3 + w_r * (1+z)**4 + (h**2 - w_m - w_r))
    result, error = quad(D_zrec, 0, z_rec)
    return D_zrec_true - result

def h0_finder(w_m, z_rec, D_zrec_true):
    vfunc = vectorize(D_zrec_finder)
    sol = fsolve(vfunc, u.ufloat(0.6728, 0.01), args=(w_m, z_rec, D_zrec_true))[0]
    return sol
print(h0_finder(w_m, z_rec, D_zrec_true))
So, to summarize: I have an integral D_zrec that is a function of z, but it also contains an unknown number h that we need to find by using fsolve.
I have found 3 sites that might be useful. Please look at them if you want to help:
https://kitchingroup.cheme.cmu.edu/blog/2013/03/07/Another-approach-to-error-propagation/
https://kitchingroup.cheme.cmu.edu/blog/2013/07/10/Uncertainty-in-an-integral-equation/
https://kitchingroup.cheme.cmu.edu/blog/2013/01/23/Solving-integral-equations-with-fsolve/
I have looked at them while writing my code, but no luck.
Thanks for the help
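Following the kitchingroup posts linked above, one possible approach (a sketch, not a definitive answer) is to keep plain floats inside the quad/fsolve pipeline and let uncertainties.wrap propagate the errors around the whole solve; the 0.67 starting guess and the inner structure are illustrative:

import numpy as np
import uncertainties as u
from scipy.integrate import quad
from scipy.optimize import fsolve

c = 299792.458  # speed of light in [km/s]
N_eff = 3.046
w_r = 2.469e-5 * (1 + (7/8) * (4/11)**(4/3) * N_eff)

@u.wrap
def h0_finder(w_m, z_rec, D_zrec_true):
    # Everything in here sees plain floats; u.wrap differentiates
    # numerically to propagate the input uncertainties.
    def residual(h):
        h = h[0]  # fsolve passes a length-1 array
        D_zrec = lambda z: (c/100) / np.sqrt(
            w_m*(1 + z)**3 + w_r*(1 + z)**4 + (h**2 - w_m - w_r))
        result, _ = quad(D_zrec, 0, z_rec)
        return D_zrec_true - result
    return fsolve(residual, 0.67)[0]

w_m = u.ufloat(0.1430, 0.0011)
z_rec = u.ufloat(1089.92, 0.25)
theta_srec = u.ufloat(0.0104110, 0.0000031)
r_srec = u.ufloat(144.43, 0.26)
print(h0_finder(w_m, z_rec, r_srec / theta_srec))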

How to implement exponential smoothing manually with Python?

This is my first question here and I'm also new to Python (without a CS background, I must add)!
I'm trying to implement triple exponential smoothing to make predictions. My data is AIS data, and I'm focusing on the SOG (Speed Over Ground) values specifically. The mathematical approach I'm following is the triple exponential smoothing model.
I've still only covered the basics of Python and I'm struggling to figure out the iteration part. What I expect, however, is to read data from a CSV (which includes Time and SOG) and forecast the speed values, so I can compare the predicted and real values.
Here is the example/test data table that I'm using at the moment.
I tried coding the equation part (shown below), and I know it is beyond sloppy, but I didn't want to come here without anything.
alpha = 0.9
m = 3  # forecast horizon (steps ahead)

def test(x_current, ssv_previous, dsv_previous, tsv_previous):
    # ssv = single smoothing value (s'(t-1) and s'(t))
    ssv_current = (alpha * x_current) + ((1 - alpha) * ssv_previous)
    # dsv = double smoothing value (s''(t-1) and s''(t))
    dsv_current = (alpha * ssv_current) + ((1 - alpha) * dsv_previous)
    # tsv = triple smoothing value (s'''(t-1) and s'''(t))
    tsv_current = (alpha * dsv_current) + ((1 - alpha) * tsv_previous)
    at = (3 * ssv_current) - (3 * dsv_current) + tsv_current
    bt = ((alpha ** 2) / (2 * ((1 - alpha) ** 2))) * (((6 - 5 * alpha) * ssv_current)
         - ((10 - 8 * alpha) * dsv_current) + ((4 - 3 * alpha) * tsv_current))
    ct = ((alpha ** 2) / ((1 - alpha) ** 2)) * (ssv_current - (2 * dsv_current) + tsv_current)
    ft = at + (m * bt) + (0.5 * (m ** 2) * ct)  # m-th predicted value at time t
    return ft, ssv_current, dsv_current, tsv_current
I know both my question and my piece of code seem rough, but I look forward to learning from this community. I've only worked with MATLAB before, and any tip here would really help me.
TIA!
EDIT: I realized my post does not convey what I really want. Basically, I want the code to read through the speed values one by one, iterate through them, and print the predicted values.
A very basic iterator would be
import csv

datafile = open('datafile.csv', 'r')
csv_file = csv.reader(datafile)
for row in csv_file:
    print(row)
Each 'row' item would have the data
Refer: CSV Library reference
You could do the same with pandas as well.
import pandas as pd
df = pd.read_csv('datafile.csv')
Now you don't need to iterate. Just do calculations using entire columns at once and pandas will produce the results.
e.g.
df['total'] = df['a'] + df['b']
Just like that
Refer: Pandas
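Putting the two parts together, here is a minimal sketch of the pass described in the EDIT; the 'SOG' column name, the choice to seed all three smoothing values with the first observation, and alpha = 0.9 are assumptions, not requirements:

import pandas as pd

alpha = 0.9
m = 3  # forecast horizon (steps ahead)

df = pd.read_csv('datafile.csv')   # assumes a column named 'SOG'
sog = df['SOG'].tolist()

# seed the single, double and triple smoothing values
ssv = dsv = tsv = sog[0]
for x in sog[1:]:
    ssv = alpha * x + (1 - alpha) * ssv
    dsv = alpha * ssv + (1 - alpha) * dsv
    tsv = alpha * dsv + (1 - alpha) * tsv
    at = 3 * ssv - 3 * dsv + tsv
    bt = (alpha**2 / (2 * (1 - alpha)**2)) * ((6 - 5*alpha) * ssv
         - (10 - 8*alpha) * dsv + (4 - 3*alpha) * tsv)
    ct = (alpha**2 / (1 - alpha)**2) * (ssv - 2 * dsv + tsv)
    print(x, at + m * bt + 0.5 * m**2 * ct)   # observed value, m-step forecast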

Solving differential equation with ODEINT in python

I have a coupled system of differential equations that I've already solved with Euler in Excel. Now I want to make it more precise with an ODE solver in Python.
However, there must be a mistake in my code because the curves look different than in Excel. I don't expect the curves to reach 1 and 0 in the end.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# define reactor
def reactor(x, z):
    n_a = x[0]
    n_b = x[1]
    n_c = x[2]
    dn_adz = A * (-1) * B * (n_a/(n_a + n_b + n_c)) / (1 + C * (n_c/(n_a + n_b + n_c)))
    dn_bdz = A * (1) * B * (n_a/(n_a + n_b + n_c)) / (1 + C * (n_c/(n_a + n_b + n_c)))
    dn_cdz = A * (1) * B * (n_a/(n_a + n_b + n_c)) / (1 + C * (n_c/(n_a + n_b + n_c)))
    dxdz = [dn_adz, dn_bdz, dn_cdz]
    return dxdz
# initial conditions
n_a0 = 0.5775
n_b0 = 0.0
n_c0 = 0.0
x0 = [n_a0, n_b0, n_c0]
# parameters
A = 0.12
B = 3.1e-9
C = 4.02e15
# number of steps
n = 100
# z step interval (m)
z = np.linspace(0,0.0274,n)
# solve ODEs
x = odeint(reactor,x0,z)
# Plot the results
plt.plot(z,x[:,0],'b-')
plt.plot(z,x[:,1],'r--')
plt.plot(z,x[:,2],'k:')
plt.show()
Is it a problem that the initial condition stays constant and does not change from step to step?
Should it be like in Excel with Euler, where the next step uses the conditions/values of the previous step?
From the structure of the right-hand sides you get constant combinations of the state variables: n_a + n_b = n_a0 + n_b0 and n_a + n_c = n_a0 + n_c0. This means that the dynamics reduce to the one-dimensional dynamics of n_a.
By the first equation, the derivative of n_a is negative for positive n_a, so the solution falls towards n_a = 0. By the conserved quantities, n_b converges to n_a0 + n_b0 and n_c converges to n_a0 + n_c0.
It is unclear how you get convergence towards 1 in some components, as that is not supported by the initial conditions. Apart from that, the described odeint result fits this qualitative behavior.
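As a quick numerical check of those invariants, the following lines (run after the script above, which already has x, n_a0, n_b0 and n_c0 in scope) should both print True:

import numpy as np
print(np.allclose(x[:, 0] + x[:, 1], n_a0 + n_b0))  # n_a + n_b is conserved
print(np.allclose(x[:, 0] + x[:, 2], n_a0 + n_c0))  # n_a + n_c is conserved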

Adam Optimizer in Style Transfer

I am trying to work through the exercise questions in this style transfer tutorial. Does anyone know how to replace the basic gradient descent with the Adam optimizer?
I think this code may be the place to change. Thank you very much for your help.
# Reduce the dimensionality of the gradient.
grad = np.squeeze(grad)
# Scale the step-size according to the gradient-values.
step_size_scaled = step_size / (np.std(grad) + 1e-8)
# Update the image by following the gradient.
mixed_image -= grad * step_size_scaled
Referring to slides 36 and 37 of the Stanford CS231n slides,
first_moment = 0
second_moment = 0
must be declared above the for i in range(num_iterations): line present in that GitHub file. Also, initialize the beta1 and beta2 variables based on your requirements. Then you can replace your code block with the following:
# Reduce the dimensionality of the gradient.
grad = np.squeeze(grad)
# Calculate moments
first_moment = beta1 * first_moment + (1 - beta1) * grad
second_moment = beta2 * second_moment + (1 - beta2) * grad * grad
# Bias correction steps
first_unbias = first_moment / (1 - beta1 ** i)
second_unbias = second_moment / (1 - beta2 ** i)
# Update the image by following the gradient (Adam update)
mixed_image -= step_size * first_unbias / (tf.sqrt(second_unbias) + 1e-8)
I initialized beta1 and beta2 like this:
beta1 = tf.Variable(0, name='beta1')
beta2 = tf.Variable(0, name='beta2')
session.run([beta1.initializer, beta2.initializer])
However, something goes wrong: 'Tensor' object has no attribute 'sqrt'.
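That error is typical of mixing TensorFlow ops with NumPy arrays: after np.squeeze, grad and both moment estimates are NumPy arrays, so np.sqrt should be used rather than tf.sqrt, and beta1/beta2 can simply be Python floats. A sketch of the pure-NumPy update, assuming grad, mixed_image, step_size and the loop index i come from the tutorial's loop (0.9 and 0.999 are the usual Adam defaults, not values from the tutorial):

import numpy as np

beta1, beta2, eps = 0.9, 0.999, 1e-8
first_moment = np.zeros_like(mixed_image)
second_moment = np.zeros_like(mixed_image)

# inside `for i in range(num_iterations):`
grad = np.squeeze(grad)
first_moment = beta1 * first_moment + (1 - beta1) * grad
second_moment = beta2 * second_moment + (1 - beta2) * grad * grad
t = i + 1                     # start at 1 so (1 - beta**t) is never zero
first_unbias = first_moment / (1 - beta1 ** t)
second_unbias = second_moment / (1 - beta2 ** t)
mixed_image -= step_size * first_unbias / (np.sqrt(second_unbias) + eps)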
