Solver Issue No Algorithm Found - python

Trying to replace the function U2(x,y,z) with specified values of x, y, z. I am not sure how to do that with SymPy, because the values are NumPy ranges such as x = np.arange(-h, h, 0.001), as seen in the code below.
Below you will find my implementation with SymPy. Additionally, I am using PyCharm.
This implementation is based on Dr. Annabestani and Dr. Naghavi's paper: A 3D analytical ion transport model for ionic polymer metal composite actuators in large bending deformations
import sympy as sp
import numpy as np
h = 0.1 # [mm] half of thickness
W: float = 6 # [mm] width
L: float = 28 # [mm] length
F: float = 96458 # [C/mol] Faraday's constant
k_e = 1.34E-6 # [F/m]
Y = 5.71E8 # [Pa]
d = 1.03E-11 # [m^2/s] diffusivity coefficient
T = 293 # [K]
C_minus = 1200 # [mol/m^3] Cation concentration
C_plus = 1200 # [mol/m^3] anion concentration
R = 8.3143 # [J/mol*K] Gas constant
Vol = 2*h*W*L
#dVol = diff(Vol,x) + diff(Vol, y) + diff(Vol, z) # change in Volume
theta = 1 / W
x, y, z, m, n, p, t = sp.symbols('x y z m n p t')
V_1 = 0.5 * sp.sin(2 * sp.pi * t) # Voltage as a function of time
k_f = 0.5
t_f = 44
k_g = 4.5
t_g = 0.07
B_mnp = 0.003
b_mnp: float = B_mnp
gamma_hat_2 = 0.04
gamma_hat_5 = 0.03
gamma_hat_6 = 5E-3
r_M = 0.15 # membrane resistance
r_ew = 0.175 # transverse resistance of electrode
r_el = 0.11 # longitudinal resistance of electrode
mu = 2.4
sigma_not = 0.1
a_L: float = 1.0 # distributed surface attenuation
r_hat = sp.sqrt(r_M ** 2 + r_ew ** 2 + r_el ** 2)
lambda_1 = 0.0001
dVol = 1
K = (F ** 2 * C_minus * d * (1 - C_minus * dVol)) / (R * T * k_e) # also K = a
K_hat = (K-lambda_1)/d
gamma_1 = 1.0
gamma_2 = 1.0
gamma_3 = 1.0
gamma_4 = 1.0
gamma_5 = 1.0
gamma_6 = 1.0
gamma_7 = 1.0
small_gamma_1 = 1.0
small_gamma_2 = 1.0
small_gamma_3 = 1.0
psi = gamma_1*x + gamma_2*y + gamma_3*z + gamma_4*x*y + gamma_5*x*z + gamma_6*y*z + gamma_7*x*y*z + (small_gamma_1/2)*x**2 + (small_gamma_2/2)*y**2 + (small_gamma_3/2)*x*z**2
psi_hat_part = ((sp.sin(((m + 1) * sp.pi) / 2 * h)) * x) * ((sp.sin(((n + 1) * sp.pi) / W)) * y) * ((sp.sin(((p + 1) * sp.pi) / L)) * z)
psi_hat = psi * psi_hat_part # Eqn. 19
print(psi_hat)
x1: float = -h
x2: float = h
y1: float = 0
y2: float = W
z1: float = 0
z2: float = L
I = psi_hat.integrate((x, x1, x2), (y, y1, y2), (z, z1, z2)) # Integration for a_mnp Eqn. 18
A_mnp = ((8 * K_hat) / (2 * h * W * L)) * I
Partial = A_mnp * ((sp.sin(((m + 1) * sp.pi) / 2 * h)) * x) * ((sp.sin(((n + 1) * sp.pi) / W)) * y) * ((sp.sin(((p + 1) * sp.pi) / L)) * z)
start = Partial.integrate((p, 0 , 10E9), (n, 0, 10E9), (m, 0, 10E9)) #when using infinity it goes weird, also integrating leads to higher thresholds than summation
a_mnp_denom = (((sp.sin(((m + 1) * sp.pi) / 2 * h)) ** 2) * ((sp.sin(((n + 1) * sp.pi) / W)) ** 2) * (
(sp.sin(((p + 1) * sp.pi) / L)) ** 2) + K_hat)
a_mnp = A_mnp / a_mnp_denom # Eqn. 18
U2 = sp.Function("U2")
U2 = a_mnp * ((sp.sin(((m + 1) * sp.pi) / 2 * h)) * x) * ((sp.sin(((n + 1) * sp.pi) / W)) * y) * (
(sp.sin(((p + 1) * sp.pi) / L)) * z) # Eqn. 13
x = np.arange(-h, h, 0.001)
y = np.arange(-h, h, 0.001)
z = np.arange(-h, h, 0.001)
f= sp.subs((U2), (x ,y ,z))
I currently get the error message: ValueError: subs accepts either 1 or 2 arguments. So that means I can't use the subs() method this way, and replace() also doesn't work too well. Are there any other methods one can use?
Any help would be greatly appreciated, thank you!

Oscar is right: you are trying to deal with too much of the problem at once. That aside, NumPy and SymPy do not work the way you think they do. What were you hoping to see when you replaced 3 variables, each with a range?
You cannot replace a SymPy variable/Symbol with a NumPy arange object, but you can replace a Symbol with a single value:
>>> from sympy.abc import x, y
>>> a = 1.0
>>> u = x + y + a
>>> u.subs(x, 1)
y + 2.0
>>> u.subs([(x,1), (y,2)])
4.0
You might iterate over the arange values, creating values of f and then doing something with each value:
f = lambda v: u.subs(dict(zip((x, y), v)))
for xi in range(1, 3):  # replace range with your arange call
    for yi in range(-4, -2):
        fi = f((xi, yi))
        print(xi, yi, fi)
Be careful about iterating with x or y as your loop variable, however, since that will lose the binding of the Symbol to that name:
for x in range(2):
    print(u.subs(x, x))  # no change, and x is no longer a Symbol, it is now an int
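If the goal is to evaluate the expression numerically over whole NumPy arrays rather than one point at a time, sympy.lambdify is usually a better fit than subs. A minimal sketch (the expression here is just a stand-in, and lambdify only helps once no unevaluated symbolic pieces such as the m, n, p terms remain):
import numpy as np
import sympy as sp

x, y, z = sp.symbols('x y z')
expr = x**2 + y*z                          # stand-in for the fully evaluated U2 expression

f = sp.lambdify((x, y, z), expr, 'numpy')  # compiles the expression into a NumPy-aware function

xs = np.arange(-0.1, 0.1, 0.001)
ys = np.arange(-0.1, 0.1, 0.001)
zs = np.arange(-0.1, 0.1, 0.001)
values = f(xs, ys, zs)                     # evaluates elementwise over the three arrays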


Is it possible to create a graph using MatPlotLib with data from a scientific model?

To preface, I am completely new to Python and Matplotlib, and I am working on a school project to calculate the trajectory of a satellite around the Earth. I was wondering if it is possible to grab the data from every point the model calculates and put it into the graph, plotting time on the x-axis and height on the y-axis. I tried to plot the two values directly, but nothing appears on the graph. Is it possible to put every value from the calculations into the graph, or will I need a direct formula for the line to plot it?
The air resistance also changes at certain heights, which makes it harder to derive a direct formula. Here's the code; I am aware it is terribly formatted, but this being my first project, all I need from it is a t,h-graph. Thanks in advance for the help!
from typing import Any, Union
import matplotlib.pyplot as plt
import numpy as np
data = []
t = 0
dt = 10
A = 10
Cw = 2.7
h = 300000
h0 = h
G = 6.67384 * 10 ** -11
M = 5.972 * 10 ** 24
m = 209.4 * A ** (3 / 2)
r = 6371000 + h
v: Union[float, Any] = (G * M / r) ** .5
vx = v
vy = 0
Px = 0
Py = r
while 0 < h < h0 + 100000:
    # for _ in range(2592000):
    t = t + dt
    r = (Px ** 2 + Py ** 2) ** .5
    Fg = G * M * m / r ** 2
    Fgx = -Fg * Px / r
    Fgy = -Fg * Py / r
    h = r - 6371000
    if h > 600000:
        z = 1.607 * 10 ** -11 * 0.991169 ** (h / 1000)
    else:
        if h > 139000:
            z = 3.848 * 10 ** -8 * 0.978294 ** (h / 1000)
        else:
            z = 1.225 * 0.863697 ** (h / 1000)
    v = (vx ** 2 + vy ** 2) ** .5
    Fwl = 0.5 * z * Cw * A * v ** 2
    Fwlx = -Fwl * vx / v
    Fwly = -Fwl * vy / v
    ax = (Fgx + Fwlx) / m
    ay = (Fgy + Fwly) / m
    vx = vx + ax * dt
    vy = vy + ay * dt
    Px = Px + vx * dt
    Py = Py + vy * dt
data += [[h, t]]
print(h)
print(t)
fig, ax = plt.subplots()
ax.plot(t, h)
plt.show()
As suggested by Michael Szczesny in the comments, you should indent data += [[h, t]] so that it sits within the while loop.
Then I suggest you plot with:
ax.plot(*zip(*data))
in order to separate h from t and get the h(t) curve.
Another suggestion is to flip the order in which you store h and t in data:
data += [[t, h]]
in this way you will get a plot of the height with respect to time (and not the inverse).
Complete Code
from typing import Any, Union
import matplotlib.pyplot as plt
data = []
t = 0
dt = 10
A = 10
Cw = 2.7
h = 300000
h0 = h
G = 6.67384 * 10 ** -11
M = 5.972 * 10 ** 24
m = 209.4 * A ** (3 / 2)
r = 6371000 + h
v: Union[float, Any] = (G * M / r) ** .5
vx = v
vy = 0
Px = 0
Py = r
while 0 < h < h0 + 100000:
    # for _ in range(2592000):
    t = t + dt
    r = (Px ** 2 + Py ** 2) ** .5
    Fg = G * M * m / r ** 2
    Fgx = -Fg * Px / r
    Fgy = -Fg * Py / r
    h = r - 6371000
    if h > 600000:
        z = 1.607 * 10 ** -11 * 0.991169 ** (h / 1000)
    else:
        if h > 139000:
            z = 3.848 * 10 ** -8 * 0.978294 ** (h / 1000)
        else:
            z = 1.225 * 0.863697 ** (h / 1000)
    v = (vx ** 2 + vy ** 2) ** .5
    Fwl = 0.5 * z * Cw * A * v ** 2
    Fwlx = -Fwl * vx / v
    Fwly = -Fwl * vy / v
    ax = (Fgx + Fwlx) / m
    ay = (Fgy + Fwly) / m
    vx = vx + ax * dt
    vy = vy + ay * dt
    Px = Px + vx * dt
    Py = Py + vy * dt
    data += [[h, t]]
fig, ax = plt.subplots()
ax.plot(*zip(*data))
plt.show()
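A short note on the ax.plot(*zip(*data)) idiom, since it can look cryptic: zip(*data) transposes the list of [h, t] pairs into one tuple of all h values and one tuple of all t values, which are then passed to plot as the two coordinate sequences. An equivalent sketch with NumPy (reusing the data list from the code above) would be:
import numpy as np
import matplotlib.pyplot as plt

arr = np.array(data)           # shape (n_samples, 2); column 0 is h, column 1 is t
fig, ax = plt.subplots()
ax.plot(arr[:, 1], arr[:, 0])  # time on the x-axis, height on the y-axis
ax.set_xlabel('t [s]')
ax.set_ylabel('h [m]')
plt.show()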

Solving a double integral with scipy dblquad

I am trying to solve the double integral of this complicated function. The function contains 3 symbolic variables which are defined with sympy.symbols. My goal is to integrate the function with respect to only two of the variables. sym.integrate ran for 2 hours with no result to show. I tried numerical integration with scipy.integrate.dblquad, but I am having trouble, which I suspect is due to the third symbolic variable. Is there a way to do this?
Problem summarized.
sym.symbols('x y z')
My_function(x,y,z)
Integrate My_function with respect to x and y (both from 0 to inf, i.e. a definite integral).
Thank you in advance
import numpy as np
import sympy as sym
from numpy import pi  # pi was not defined in the snippet; assuming the numeric constant
h, t, w, r, q = sym.symbols('h t w r q') # Define symbols
wn = 2.0
alpha = 1.76
beta = 1.59
a0 = 0.7
a1 = 0.282
a2 = 0.167
b0 = 0.07
b1 = 0.3449
b2 = -0.2073
psi = 0.05
F_H = 1.0 - sym.exp(-(h / alpha) ** beta)
mu_h = a0 + a1 * h ** a2
sig_h = b0 + b1 * sym.exp(b2 * h)
F_TIH = (1 / 2) * (1 + sym.erf((sym.log(t) - mu_h) / (sig_h * sym.sqrt(2))))
f_h = sym.diff(F_H, h)
f_tzIhs = sym.diff(F_TIH, t)
f_S = f_h * f_tzIhs
H = (1.0 - (w / wn) ** 2.0 + 1.0j * 2.0 * psi * w / wn) ** (-1.0)
S_eta_h_t = h ** 2.0 * t / (8.0 * pi ** 2.0) * (w * t / (2.0 * pi)) ** (-5.0) * sym.exp(-1.0 / pi * (w * t / (2.0 * pi)) ** (-4.0))
S_RIS_hu_tu = abs(H) ** 2.0 * S_eta_h_t
m0_s = sym.integrate((w ** 0 * S_RIS_hu_tu), (w, 0, np.inf))
m0_s.doit()
m2_s = sym.integrate((w ** 2 * S_RIS_hu_tu), (w, 0, np.inf))
m2_s.doit()
v_rIs = 1 / (2 * pi) * sym.sqrt(m2_s / m0_s) * sym.exp(-r ** 2 / (2 * m0_s))
fun = v_rIs * f_S
# The integral I am trying to solve is a function of h,t and r.
integ_ht = sym.integrate(fun,(h,0,np.inf),(t,0,np.inf))
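No answer was recorded for this one in the thread, but a common workaround (sketched here under the assumption that the integrand can be fully lambdified, i.e. it no longer contains unevaluated sym.integrate pieces) is to hand the h- and t-integration to scipy.integrate.dblquad and treat r as a numeric parameter passed through args:
import numpy as np
import sympy as sym
from scipy.integrate import dblquad

h, t, r = sym.symbols('h t r', positive=True)
# toy integrand standing in for 'fun' above
fun = sym.exp(-h - t) * sym.exp(-r**2 * h * t)

# dblquad expects f(y, x, *args) and integrates y first, so order the arguments as (t, h, r)
f_num = sym.lambdify((t, h, r), fun, 'numpy')

r_value = 0.5  # fix the third variable numerically
result, err = dblquad(f_num, 0, np.inf, 0, np.inf, args=(r_value,))
print(result, err)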

Grid Search over function

The function HH_model(I, area_factor) returns the number of spikes triggered over n runs. Assuming 1000 runs, if max(v[:] - v_Rest) > 60 occurs 157 times, then HH_model(I, area_factor) returns 157.
Now I know value pairs from another model - the x-values are related to the stimulus I, while the y-values are the numbers of spikes.
I have written these values as a comment under the code. I want to choose my input parameters I and area_factor so that the error with respect to the data is as small as possible. I have no idea how to approach this optimization.
import matplotlib.pyplot as py
import numpy as np
import scipy.optimize as optimize

# HH parameters
v_Rest = -65  # in mV
gNa = 1200    # in mS/cm^2
gK = 360      # in mS/cm^2
gL = 0.3*10   # in mS/cm^2
vNa = 115     # in mV
vK = -12      # in mV
vL = 10.6     # in mV
# Number of runs
runs = 1000
c = 1         # in uF/cm^2
ROOT = False

def HH_model(I, area_factor):
    count = 0
    t_end = 10      # in ms
    delay = 0.1     # in ms
    duration = 0.1  # in ms
    dt = 0.0025     # in ms
    area_factor = area_factor
    # geometry
    d = 2    # diameter in um
    r = d/2  # Radius in um
    l = 10   # Length of the compartment in um
    A = (1*10**(-8))*area_factor  # surface [cm^2]
    I = I
    C = c*A  # uF
    for j in range(0, runs):
        # Introduction of equations and channels
        def alphaM(v): return 12 * ((2.5 - 0.1 * (v)) / (np.exp(2.5 - 0.1 * (v)) - 1))
        def betaM(v): return 12 * (4 * np.exp(-(v) / 18))
        def betaH(v): return 12 * (1 / (np.exp(3 - 0.1 * (v)) + 1))
        def alphaH(v): return 12 * (0.07 * np.exp(-(v) / 20))
        def alphaN(v): return 12 * ((1 - 0.1 * (v)) / (10 * (np.exp(1 - 0.1 * (v)) - 1)))
        def betaN(v): return 12 * (0.125 * np.exp(-(v) / 80))
        # compute the timesteps
        t_steps = t_end/dt + 1
        # Compute the initial values
        v0 = 0
        m0 = alphaM(v0)/(alphaM(v0)+betaM(v0))
        h0 = alphaH(v0)/(alphaH(v0)+betaH(v0))
        n0 = alphaN(v0)/(alphaN(v0)+betaN(v0))
        # Allocate memory for v, m, h, n
        v = np.zeros((int(t_steps), 1))
        m = np.zeros((int(t_steps), 1))
        h = np.zeros((int(t_steps), 1))
        n = np.zeros((int(t_steps), 1))
        # Set Initial values
        v[:, 0] = v0
        m[:, 0] = m0
        h[:, 0] = h0
        n[:, 0] = n0
        ### Noise component
        knoise = 0.0005  # uA/(mS)^1/2
        ### --------- Step3: SOLVE
        for i in range(0, int(t_steps)-1, 1):
            # Get current states
            vT = v[i]
            mT = m[i]
            hT = h[i]
            nT = n[i]
            # Stimulus current
            IStim = 0
            if delay / dt <= i <= (delay + duration) / dt:
                IStim = I  # in uA
            else:
                IStim = 0
            # Compute change of m, h and n
            m[i + 1] = (mT + dt * alphaM(vT)) / (1 + dt * (alphaM(vT) + betaM(vT)))
            h[i + 1] = (hT + dt * alphaH(vT)) / (1 + dt * (alphaH(vT) + betaH(vT)))
            n[i + 1] = (nT + dt * alphaN(vT)) / (1 + dt * (alphaN(vT) + betaN(vT)))
            # Ionic currents
            iNa = gNa * m[i + 1] ** 3. * h[i + 1] * (vT - vNa)
            iK = gK * n[i + 1] ** 4. * (vT - vK)
            iL = gL * (vT - vL)
            Inoise = (np.random.normal(0, 1) * knoise * np.sqrt(gNa * A))
            IIon = ((iNa + iK + iL) * A) + Inoise
            # Compute change of voltage
            v[i + 1] = (vT + ((-IIon + IStim) / C) * dt)[0]  # in ((uA / cm^2) / (uF / cm^2)) * ms == mV
        # adjust the voltage to the resting potential
        v = v + v_Rest
        # test if there was a spike
        if max(v[:] - v_Rest) > 60:
            count += 1
    return count

# some datapoints from another model out of 1000 runs. ydata means therefore 'count' out of 1000 runs.
# xdata = np.array([0.92*I,0.925*I,0.9535*I,0.975*I,0.9789*I,I,1.02*I,1.043*I,1.06*I,1.078*I,1.09*I])
# ydata = np.array([150,170,269,360,377,500,583,690,761,827,840])
EDIT:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import minimize

# HH parameters
v_Rest = -65  # in mV
gNa = 120     # in mS/cm^2
gK = 36       # in mS/cm^2
gL = 0.3      # in mS/cm^2
vNa = 115     # in mV
vK = -12      # in mV
vL = 10.6     # in mV
# Number of runs
runs = 1000
c = 1         # in uF/cm^2

def HH_model(x, I, area_factor):
    count = 0
    t_end = 10      # in ms
    delay = 0.1     # in ms
    duration = 0.1  # in ms
    dt = 0.0025     # in ms
    area_factor = area_factor
    # geometry
    d = 2    # diameter in um
    r = d/2  # Radius in um
    l = 10   # Length of the compartment in um
    A = (1*10**(-8))*area_factor  # surface [cm^2]
    I = I*x
    C = c*A  # uF
    for j in range(0, runs):
        # Introduction of equations and channels
        def alphaM(v): return 12 * ((2.5 - 0.1 * (v)) / (np.exp(2.5 - 0.1 * (v)) - 1))
        def betaM(v): return 12 * (4 * np.exp(-(v) / 18))
        def betaH(v): return 12 * (1 / (np.exp(3 - 0.1 * (v)) + 1))
        def alphaH(v): return 12 * (0.07 * np.exp(-(v) / 20))
        def alphaN(v): return 12 * ((1 - 0.1 * (v)) / (10 * (np.exp(1 - 0.1 * (v)) - 1)))
        def betaN(v): return 12 * (0.125 * np.exp(-(v) / 80))
        # compute the timesteps
        t_steps = t_end/dt + 1
        # Compute the initial values
        v0 = 0
        m0 = alphaM(v0)/(alphaM(v0)+betaM(v0))
        h0 = alphaH(v0)/(alphaH(v0)+betaH(v0))
        n0 = alphaN(v0)/(alphaN(v0)+betaN(v0))
        # Allocate memory for v, m, h, n
        v = np.zeros((int(t_steps), 1))
        m = np.zeros((int(t_steps), 1))
        h = np.zeros((int(t_steps), 1))
        n = np.zeros((int(t_steps), 1))
        # Set Initial values
        v[:, 0] = v0
        m[:, 0] = m0
        h[:, 0] = h0
        n[:, 0] = n0
        ### Noise component
        knoise = 0.0005  # uA/(mS)^1/2
        ### --------- Step3: SOLVE
        for i in range(0, int(t_steps)-1, 1):
            # Get current states
            vT = v[i]
            mT = m[i]
            hT = h[i]
            nT = n[i]
            # Stimulus current
            IStim = 0
            if delay / dt <= i <= (delay + duration) / dt:
                IStim = I  # in uA
            else:
                IStim = 0
            # Compute change of m, h and n
            m[i + 1] = (mT + dt * alphaM(vT)) / (1 + dt * (alphaM(vT) + betaM(vT)))
            h[i + 1] = (hT + dt * alphaH(vT)) / (1 + dt * (alphaH(vT) + betaH(vT)))
            n[i + 1] = (nT + dt * alphaN(vT)) / (1 + dt * (alphaN(vT) + betaN(vT)))
            # Ionic currents
            iNa = gNa * m[i + 1] ** 3. * h[i + 1] * (vT - vNa)
            iK = gK * n[i + 1] ** 4. * (vT - vK)
            iL = gL * (vT - vL)
            Inoise = (np.random.normal(0, 1) * knoise * np.sqrt(gNa * A))
            IIon = ((iNa + iK + iL) * A) + Inoise
            # Compute change of voltage
            v[i + 1] = (vT + ((-IIon + IStim) / C) * dt)[0]  # in ((uA / cm^2) / (uF / cm^2)) * ms == mV
        # adjust the voltage to the resting potential
        v = v + v_Rest
        # test if there was a spike
        if max(v[:] - v_Rest) > 60:
            count += 1
    return count

def loss(parameters, model, x_ref, y_ref):
    # unpack multiple parameters
    I, area_factor = parameters
    # compute prediction
    y_predicted = np.array([model(x, I, area_factor) for x in x_ref])
    # compute error and use it as loss
    mse = ((y_ref - y_predicted) ** 2).mean()
    return mse

# some datapoints from another model out of 1000 runs. ydata means therefore 'count' out of 1000 runs.
xdata = np.array([0.92, 0.925, 0.9535, 0.975, 0.9789, 1])
ydata = np.array([150, 170, 269, 360, 377, 500])
y_data_scaled = ydata / runs
y_predicted = np.array([HH_model(x, I=10**(-3), area_factor=1) for x in xdata])
parameters = (10**(-3), 1)
mse0 = loss(parameters, HH_model, xdata, y_data_scaled)
# compute the parameters that minimize the loss (alias, the error between the data and the predictions of the model)
optimum = minimize(loss, x0=np.array([10**(-3), 1]), args=(HH_model, xdata, y_data_scaled))
# compute the predictions with the optimized parameters
I = optimum['x'][0]
area_factor = optimum['x'][1]
y_predicted_opt = np.array([HH_model(x, I, area_factor) for x in xdata])
# plot the raw data, the model with handcrafted guess and the model with optimized parameters
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('input')
ax.set_ylabel('output predictions')
ax.plot(xdata, y_data_scaled, marker='o')
ax.plot(xdata, y_predicted, marker='*')
ax.plot(xdata, y_predicted_opt, marker='v')
ax.legend([
    "raw data points",
    "initial guess",
    "predictions with optimized parameters"
])
I started using your function, but then I noticed it was very slow to execute, so I decided to show the process with a toy (linear) model. The process remains the same.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize

def loss(parameters, model, x_ref, y_ref):
    # unpack multiple parameters
    m, q = parameters
    # compute prediction
    y_predicted = np.array([model(x, m, q) for x in x_ref])
    # compute error and use it as loss
    mse = ((y_ref - y_predicted) ** 2).mean()
    return mse

# load a dataset to fit a model
x_data = np.array([0.92, 0.925, 0.9535, 0.975, 0.9789, 1, 1.02, 1.043, 1.06, 1.078, 1.09])
y_data = np.array([150, 170, 269, 360, 377, 500, 583, 690, 761, 827, 840])
# normalise the data - input is already normalised
y_data_scaled = y_data / 1000
# create a model (linear, as an example) using handcrafted parameters, ex: (1, 1)
linear_fun = lambda x, m, q: m * x + q
y_predicted = np.array([linear_fun(x, m=1, q=1) for x in x_data])
# create a function that, given a model (linear_fun), a dataset (x, y) and the parameters, computes the error
parameters = (1, 1)
mse0 = loss(parameters, linear_fun, x_data, y_data_scaled)
# compute the parameters that minimize the loss (alias, the error between the data and the predictions of the model)
optimum = minimize(loss, x0=np.array([1, 1]), args=(linear_fun, x_data, y_data_scaled))
# compute the predictions with the optimized parameters
m = optimum['x'][0]
q = optimum['x'][1]
y_predicted_opt = np.array([linear_fun(x, m, q) for x in x_data])
# plot the raw data, the model with handcrafted guess and the model with optimized parameters
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('input')
ax.set_ylabel('output predictions')
ax.plot(x_data, y_data_scaled, marker='o')
ax.plot(x_data, y_predicted, marker='*')
ax.plot(x_data, y_predicted_opt, marker='v')
ax.legend([
    "raw data points",
    "initial guess",
    "predictions with optimized parameters"
])
# Note 1: good practice is to validate your model with a different set of data
# from the one you used to find the parameters;
# here, however, only the optimization procedure is shown.
# Note 2: in your case you should use HH_model instead of linear_fun,
# and I and area_factor instead of m and q.
Output: (plot of the raw data, the initial guess, and the optimized predictions omitted)
-- EDIT: To use the HH_model:
I went deeper into your code, tried a few values for area and stimulus, and executed a single run of HH_model without applying the threshold. Then I checked the predicted voltage dynamics (v): the sequence always diverges (all values become nan after a few steps).
If you have an initial guess for stimulus and area that makes the code work, great. If you have no idea of the order of magnitude of these parameters, the only solution I see is a grid search over them - just to find this initial guess. However, it might take a very long time with no guarantee of success.
Given that the code is based on a physical model, I would suggest to:
1 - find reasonable values with pen and paper.
2 - check that the simulation works for these values.
3 - then run the optimizer to find the minimum.
Or, worst case scenario, reverse engineer the code and find the values that make the equations converge.
Here is the refactored code:
import math
import numpy as np

# HH parameters
v_Rest = -65   # in mV
gNa = 1200     # in mS/cm^2
gK = 360       # in mS/cm^2
gL = 0.3 * 10  # in mS/cm^2
vNa = 115      # in mV
vK = -12       # in mV
vL = 10.6      # in mV
# Number of runs
c = 1          # in uF/cm^2

# Introduction of equations and channels
def alphaM(v):
    return 12 * ((2.5 - 0.1 * (v)) / (np.exp(2.5 - 0.1 * (v)) - 1))

def betaM(v):
    return 12 * (4 * np.exp(-(v) / 18))

def betaH(v):
    return 12 * (1 / (np.exp(3 - 0.1 * (v)) + 1))

def alphaH(v):
    return 12 * (0.07 * np.exp(-(v) / 20))

def alphaN(v):
    return 12 * ((1 - 0.1 * (v)) / (10 * (np.exp(1 - 0.1 * (v)) - 1)))

def betaN(v):
    return 12 * (0.125 * np.exp(-(v) / 80))

def predict_voltage(A, C, delay, dt, duration, stimulus, t_end):
    # compute the timesteps
    t_steps = t_end / dt + 1
    # Compute the initial values
    v0 = 0
    m0 = alphaM(v0) / (alphaM(v0) + betaM(v0))
    h0 = alphaH(v0) / (alphaH(v0) + betaH(v0))
    n0 = alphaN(v0) / (alphaN(v0) + betaN(v0))
    # Allocate memory for v, m, h, n
    v = np.zeros((int(t_steps), 1))
    m = np.zeros((int(t_steps), 1))
    h = np.zeros((int(t_steps), 1))
    n = np.zeros((int(t_steps), 1))
    # Set Initial values
    v[:, 0] = v0
    m[:, 0] = m0
    h[:, 0] = h0
    n[:, 0] = n0
    # Noise component
    knoise = 0.0005  # uA/(mS)^1/2
    for i in range(0, int(t_steps) - 1, 1):
        # Get current states
        vT = v[i]
        mT = m[i]
        hT = h[i]
        nT = n[i]
        # Stimulus current
        if delay / dt <= i <= (delay + duration) / dt:
            IStim = stimulus  # in uA
        else:
            IStim = 0
        # Compute change of m, h and n
        m[i + 1] = (mT + dt * alphaM(vT)) / (1 + dt * (alphaM(vT) + betaM(vT)))
        h[i + 1] = (hT + dt * alphaH(vT)) / (1 + dt * (alphaH(vT) + betaH(vT)))
        n[i + 1] = (nT + dt * alphaN(vT)) / (1 + dt * (alphaN(vT) + betaN(vT)))
        # Ionic currents
        iNa = gNa * m[i + 1] ** 3. * h[i + 1] * (vT - vNa)
        iK = gK * n[i + 1] ** 4. * (vT - vK)
        iL = gL * (vT - vL)
        Inoise = (np.random.normal(0, 1) * knoise * np.sqrt(gNa * A))
        IIon = ((iNa + iK + iL) * A) + Inoise
        # Compute change of voltage
        v[i + 1] = (vT + ((-IIon + IStim) / C) * dt)[0]  # in ((uA / cm^2) / (uF / cm^2)) * ms == mV
        # stop simulation if it diverges
        if math.isnan(v[i + 1]):
            return [None]
    # adjust the voltage to the resting potential
    v = v + v_Rest
    return v

def HH_model(stimulus, area_factor, runs=1000):
    count = 0
    t_end = 10      # in ms
    delay = 0.1     # in ms
    duration = 0.1  # in ms
    dt = 0.0025     # in ms
    area_factor = area_factor
    # geometry
    d = 2      # diameter in um
    r = d / 2  # Radius in um
    l = 10     # Length of the compartment in um
    A = (1 * 10 ** (-8)) * area_factor  # surface [cm^2]
    stimulus = stimulus
    C = c * A  # uF
    for j in range(0, runs):
        v = predict_voltage(A, C, delay, dt, duration, stimulus, t_end)
        if max(v[:] - v_Rest) > 60:
            count += 1
    return count
And the attempt to run one simulation:
import time
from ex_21.equations import c, predict_voltage
area_factor = 0.1
stimulus = 70
# input signal
count = 0
t_end = 10 # in ms
delay = 0.1 # in ms
duration = 0.1 # in ms
dt = 0.0025 # in ms
# geometry
d = 2 # diameter in um
r = d / 2 # Radius in um
l = 10 # Length of the compartment in um
A = (1 * 10 ** (-8)) * area_factor # surface [cm^2]
C = c * A # uF
start = time.time()
voltage_dynamic = predict_voltage(A, C, delay, dt, duration, stimulus, t_end)
elapse = time.time() - start
print(voltage_dynamic)
Output:
[None]
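For what it's worth, the grid search mentioned above is not shown in the thread; a rough sketch of it, reusing HH_model from the refactored code, could look like this (the search ranges and the reduced number of runs per grid point are purely illustrative):
import itertools
import numpy as np

# reference data from the question: x scales the stimulus, y is spikes out of 1000 runs
xdata = np.array([0.92, 0.925, 0.9535, 0.975, 0.9789, 1])
ydata = np.array([150, 170, 269, 360, 377, 500]) / 1000

stimulus_grid = np.logspace(-4, 2, 7)     # illustrative range of stimulus amplitudes
area_factor_grid = np.logspace(-2, 1, 4)  # illustrative range of area factors

best = (np.inf, None, None)
for stimulus, area_factor in itertools.product(stimulus_grid, area_factor_grid):
    try:
        # few runs per grid point keeps the search affordable; refine later with more runs
        y_pred = np.array([HH_model(x * stimulus, area_factor, runs=20) / 20 for x in xdata])
    except TypeError:
        continue  # predict_voltage returned [None] (diverging simulation), skip this point
    mse = ((ydata - y_pred) ** 2).mean()
    if mse < best[0]:
        best = (mse, stimulus, area_factor)

print("best (mse, stimulus, area_factor):", best)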

Why Won't This Python Code match the Formula for a European Call Option?

import math
import numpy as np
S0 = 100.; K = 100.; T = 1.0; r = 0.05; sigma = 0.2
M = 100; dt = T / M; I = 500000
S = np.zeros((M + 1, I))
S[0] = S0
for t in range(1, M + 1):
    z = np.random.standard_normal(I)
    S[t] = S[t - 1] * np.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
C0 = math.exp(-r * T) * np.sum(np.maximum(S[-1] - K, 0)) / I
print("European Option Value is ", C0)
It gives a value of around 10.45 as you increase the number of simulations, but using the Black-Scholes formula the value should be around 10.09. Does anybody know why the code isn't giving a number closer to the formula value?
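No answer was recorded here, but for comparison the closed-form Black-Scholes price can be computed directly (standard formula, written out here rather than taken from the thread), which makes it easy to check which number the Monte Carlo estimate should approach:
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S0, K, T, r, sigma):
    # standard Black-Scholes formula for a European call
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

print(bs_call(100., 100., 1.0, 0.05, 0.2))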

Converting and solving a stiff ODE system in Python

I have a stiff system of differential equations given as first-order ODEs. This system is written in Maple. The default method used by Maple is the Rosenbrock method. Now my task is to solve these equations with Python tools.
1) I do not know how to write the equations in Python code.
2) I do not know how to solve the equations with numpy, scipy, matplotlib or PyDSTool. For the PyDSTool library I did not find any examples at all, although I have read that it is well suited for stiff systems.
Code:
import math
import numpy
import scipy
import matplotlib
varepsilon = pow(10, -2); j = -2.5*pow(10, -2); e = 3.0; tau = 0.3; delta = 2.0
u0 = -math.sqrt(-1 + math.sqrt(varepsilon ** 2 + 12) / varepsilon) * math.sqrt(2) / 6
u = -math.sqrt(-1 + math.sqrt(varepsilon ** 2 + 12) / varepsilon) * math.sqrt(2) * (1 + delta) / 6
v = 1 / (1 - 2 / e) * math.sqrt(j ** 2 + (1 - 2 / e) * (e ** 2 * u ** 2 + 1))
y8 = lambda y1,y5,y7: 1 / (1 - 2 / y1) * math.sqrt(y5 ** 2 + (1 - 2 / y1) * (1 + y1 ** 2 * y7 ** 2))
E0 = lambda y1,y8: (1 - 2 / y1) * y8
Phi0 = lambda y1,y7: y1 ** 2 * y7
y08 = y8(y1=e, y5=j, y7=u0);
E = E0(y1=e, y8=y08); Phi = Phi0(y1=e, y7=u0)
# initial values
z01 = e; z03 = 0; z04 = 0; z05 = j; z07 = u0; z08 = y08;
p1 = -z1(x)*z5(x)/(z1(x)-2);
p3 = -z1(x)^2*z7(x);
p4 = z8(x)*(1-2/z1(x));
Q1 = -z5(x)^2/(z1(x)*(z1(x)-2))+(z8(x)^2/z1(x)^3-z7(x)^2)*(z1(x)-2);
Q3 = 2*z5(x)*z7(x)/z1(x);
Q4 = 2*z5(x)*z8(x)/(z1(x)*(z1(x)-2));
c1 = z1(x)*z7(x)*varepsilon;
c3 = -z1(x)*z5(x)*varepsilon;
C = z7(x)*varepsilon/z1(x)-z8(x)*(1-2/z1(x));
d1 = -z1(x)*z8(x)*varepsilon;
d3 = z1(x)*z5(x)*varepsilon;
B = z1(x)^2*z7(x)-z8(x)*varepsilon*(1-2/z1(x));
Omega = 1/(c1*d3*p3+c3*d1*p4-c3*d3*p1);
# differential equations
diff(z1(x), x) = z5(x);
diff(z3(x), x) = z7(x);
diff(z4(x), x) = z8(x);
diff(z5(x), x) = Omega*(-Q1*c1*d3*p3 - Q1*c3*d1*p4 + Q1*c3*d3*p1 + B*c3*p4 + C*d3*p3 + E*d3*p3 - Phi*c3*p4);
diff(z7(x), x) = -Omega*(Q3*c1*d3*p3 + Q3*c3*d1*p4 - Q3*c3*d3*p1 + B*c1*p4 - C*d1*p4 + C*d3*p1 - E*d1*p4 + E*d3*p1 - Phi*c1*p4);
diff(z8(x), x) = Omega*(-Q4*c1*d3*p3 - Q4*c3*d1*p4 + Q4*c3*d3*p1 + B*c1*p3 - B*c3*p1 - C*d1*p3 - E*d1*p3 - Phi*c1*p3 + Phi*c3*p1);
#features to be found and built curve
{z1(x), z3(x), z4(x), z5(x), z7(x), z8(x)}
After searching around the Internet, I found something that works in principle:
import math
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
from scipy.signal import argrelextrema
from mpmath import mp, mpf
mp.dps = 50

varepsilon = pow(10, -2); j = 2.5*pow(10, -4); e = 3.0; tau = 0.5; delta = 2.0
u0 = -math.sqrt(-1 + math.sqrt(varepsilon ** 2 + 12) / varepsilon) * math.sqrt(2) / 6
u = -math.sqrt(-1 + math.sqrt(varepsilon ** 2 + 12) / varepsilon) * math.sqrt(2) * (1 + delta) / 6
v = 1 / (1 - 2 / e) * math.sqrt(j ** 2 + (1 - 2 / e) * (e ** 2 * u ** 2 + 1))
y8 = lambda y1, y5, y7: 1 / (1 - 2 / y1) * math.sqrt(y5 ** 2 + (1 - 2 / y1) * (1 + y1 ** 2 * y7 ** 2))
E0 = lambda y1, y8: (1 - 2 / y1) * y8
Phi0 = lambda y1, y7: y1 ** 2 * y7
y08 = y8(y1=e, y5=j, y7=u0)
E = E0(y1=e, y8=y08); Phi = Phi0(y1=e, y7=u0)
# initial values
z01 = e; z03 = 0.0; z04 = 0.0; z05 = j; z07 = u0; z08 = y08

def model(x, z, varepsilon, E, Phi):
    z1, z3, z4, z5, z7, z8 = z[0], z[1], z[2], z[3], z[4], z[5]
    p1 = -z1*z5/(z1 - 2)
    p3 = -pow(z1, 2)*z7
    p4 = z8*(1 - 2/z1)
    Q1 = -pow(z5, 2)/(z1*(z1 - 2)) + (pow(z8, 2)/pow(z1, 3) - pow(z7, 2))*(z1 - 2)
    Q3 = 2*z5*z7/z1
    Q4 = 2*z5*z8/(z1*(z1 - 2))
    c1 = z1*z7*varepsilon
    c3 = -z1*z5*varepsilon
    C = z7*varepsilon/z1 - z8*(1 - 2/z1)
    d1 = -z1*z8*varepsilon
    d3 = z1*z5*varepsilon
    B = pow(z1, 2)*z7 - z8*varepsilon*(1 - 2/z1)
    Omega = 1/(c1*d3*p3 + c3*d1*p4 - c3*d3*p1)
    # differential equations
    dz1dx = z5
    dz3dx = z7
    dz4dx = z8
    dz5dx = Omega*(-Q1*c1*d3*p3 - Q1*c3*d1*p4 + Q1*c3*d3*p1 + B*c3*p4 + C*d3*p3 + E*d3*p3 - Phi*c3*p4)
    dz7dx = -Omega*(Q3*c1*d3*p3 + Q3*c3*d1*p4 - Q3*c3*d3*p1 + B*c1*p4 - C*d1*p4 + C*d3*p1 - E*d1*p4 + E*d3*p1 - Phi*c1*p4)
    dz8dx = Omega*(-Q4*c1*d3*p3 - Q4*c3*d1*p4 + Q4*c3*d3*p1 + B*c1*p3 - B*c3*p1 - C*d1*p3 - E*d1*p3 - Phi*c1*p3 + Phi*c3*p1)
    dzdx = [dz1dx, dz3dx, dz4dx, dz5dx, dz7dx, dz8dx]
    return dzdx

z0 = [z01, z03, z04, z05, z07, z08]

if __name__ == '__main__':
    # Start by specifying the integrator:
    # use ``vode`` with "backward differentiation formula"
    r = integrate.ode(model).set_integrator('vode', method='bdf')
    r.set_f_params(varepsilon, E, Phi)
    # Set the time range
    t_start = 0.0
    t_final = 0.1
    delta_t = 0.00001
    # Number of time steps: 1 extra for initial condition
    num_steps = np.floor((t_final - t_start)/delta_t) + 1
    r.set_initial_value(z0, t_start)
    t = np.zeros((int(num_steps), 1), dtype=np.float64)
    Z = np.zeros((int(num_steps), 6), dtype=np.float64)
    t[0] = t_start
    Z[0] = z0
    k = 1
    while r.successful() and k < num_steps:
        r.integrate(r.t + delta_t)
        # Store the results to plot later
        t[k] = r.t
        Z[k] = r.y
        k += 1
    # All done! Plot the trajectories:
    Z1, Z3, Z4, Z5, Z7, Z8 = Z[:, 0], Z[:, 1], Z[:, 2], Z[:, 3], Z[:, 4], Z[:, 5]
    plt.plot(t, Z1, 'r-', label=r'$r(s)$')
    plt.grid('on')
    plt.ylabel(r'$r$')
    plt.xlabel('proper time s')
    plt.legend(loc='best')
    plt.show()
    plt.plot(t, Z5, 'r-', label=r'$\frac{dr}{ds}$')
    plt.grid('on')
    plt.ylabel(r'$\frac{dr}{ds}$')
    plt.xlabel('proper time s')
    plt.legend(loc='best')
    plt.show()
    plt.plot(t, Z7, 'r-', label=r'$\frac{d\phi}{ds}$')
    plt.grid('on')
    plt.xlabel('proper time s')
    plt.ylabel(r'$\frac{d\phi}{ds}$')
    plt.legend(loc='upper center')
    plt.show()
However, when reviewing the solutions obtained with scipy, I encountered the problem that the solutions obtained by scipy and Maple are inconsistent. The essence of the problem is that the solutions oscillate rapidly, and Maple captures these oscillations with high precision using Rosenbrock's method, while Python has trouble with this using backward differentiation formulas:
r = integrate.ode(model).set_integrator('vode', method='bdf')
http://www.scholarpedia.org/article/Backward_differentiation_formulas
I tried all the integration modes: "vode", "zvode", "lsoda", "dopri5" and "dop853", and I found that "vode" was the best suited; however, it still does not meet my needs...
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html
So this method captures the oscillations in the range j ~ 10^{-5}-10^{-3}, while Maple shows good results for any j.
I present the results obtained by scipy for j ~ 10^{-2}:
(scipy plots omitted)
and the results obtained by Maple for j ~ 10^{-2}:
(Maple plots omitted)
It is important that the oscillations are physical solutions! That is, Python captures the oscillations badly for j ~ 10^{-2}. Can anyone tell me what I'm doing wrong? And how can I look at the absolute error of the integration?
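(Not from the original thread.) One further thing worth trying is SciPy's newer solve_ivp interface, which exposes the implicit Radau method (an implicit solver aimed at stiff problems) and lets you tighten the local error directly through rtol and atol; a sketch reusing model and z0 from the code above (the args keyword needs SciPy >= 1.4):
import numpy as np
from scipy.integrate import solve_ivp

# Radau is an implicit method intended for stiff problems;
# rtol/atol bound the local error the solver may commit per step.
sol = solve_ivp(model, (0.0, 0.1), z0,
                method='Radau',
                args=(varepsilon, E, Phi),
                rtol=1e-10, atol=1e-12,
                dense_output=True)

print(sol.status, sol.message)
ts = np.linspace(0.0, 0.1, 10001)
r_of_s = sol.sol(ts)[0]   # z1(x), i.e. r(s), sampled on a fine grid for plotting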
