I'm using a system of ODEs to model coffee bean roasting for a class assignment. The equations are below.
The parameters (other than X_b and T_b) are all constants.
When I try to use odeint to solve this system, it gives constant T_b and X_b profiles, which doesn't make sense physically.
Below is the code I'm using:
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
# Write function for bean temperature T_b differential equation
def deriv(X, t):
    T_b, X_b = X
    dX_b = (-4.32*10**9*X_b**2)/(l_b**2)*np.exp(-9889/T_b)
    dT_b = ((h_gb*A_gb*(T_gi - T_b))+(m_b*A_arh*np.exp(-H_a/R_g/T_b))+
            (m_b*lam*dX_b))/(m_b*(1.099+0.0070*T_b+5*X_b)*1000)
    return [dT_b, dX_b]
# Establish initial conditions
t = 0 #seconds
T_b = 298 # degrees K
X_b = 0.1 # mass fraction of moisture
# Set time step
dt = 1 # second
# Establish location to store data
history = [[t,T_b, X_b]]
# Use odeint to solve DE
while t < 600:
    T_b, X_b = odeint(deriv, [T_b, X_b], [t+dt])[-1]
    t += dt
    history.append([t, T_b, X_b])
# Plot Results
def plot_history(history, labels):
    """Plots a simulation history."""
    history = np.array(history)
    t = history[:,0]
    n = len(labels) - 1
    plt.figure(figsize=(8,1.95*n))
    for k in range(0,n):
        plt.subplot(n, 1, k+1)
        plt.plot(t, history[:,k+1])
        plt.title(labels[k+1])
        plt.xlabel(labels[0])
        plt.grid()
    plt.tight_layout()
plot_history(history, ['t (s)','Bean Temperature $T_b$ (K)', 'Bean Moisture Content $X_b$'])
plt.show()
Do you have any ideas why the integration step isn't working?
Thank You!!
You're repeatedly solving the system of equations for only a single timepoint.
From the odeint documentation, the odeint command takes an argument t which is:
A sequence of time points for which to solve for y. The initial value point should be the first element of this sequence.
Since you pass [t+dt] to odeint, there is only a single time point, so you get back only a single value, which is simply your initial condition.
The correct way to use odeint is similar to the following:
output = odeint(deriv, [T_b, X_b], np.linspace(0,600,600))
Here output, again according to the documentation is:
Array containing the value of y for each desired time in t, with the initial value y0 in the first row.
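If you want to keep the explicit time-stepping loop from the question (for example, to record the state every second), a minimal sketch of the fix is to hand odeint a two-point time vector on each step and keep the last row; this assumes deriv and all the constants it uses are defined as in the original code:

t, dt = 0.0, 1.0
T_b, X_b = 298.0, 0.1  # initial bean temperature (K) and moisture fraction
history = [[t, T_b, X_b]]
while t < 600:
    # integrate over [t, t + dt] and keep only the state at t + dt
    T_b, X_b = odeint(deriv, [T_b, X_b], [t, t + dt])[-1]
    t += dt
    history.append([t, T_b, X_b])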
I am encountering a strange problem with scipy.integrate.ode. Here is a minimal working example:
import sys
import time
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from scipy.integrate import ode, complex_ode
def fun(t):
    return np.exp( -t**2 / 2. )

def ode_fun(t, y):
    a, b = y
    f = fun(t)
    c = np.conjugate(b)
    dt_a = -2j*f*c + 2j*f*b
    dt_b = 1j*f*a
    return [dt_a, dt_b]
t_range = np.linspace(-10., 10., 10000)
init_cond = [-1, 0]
trajectory = np.empty((len(t_range), len(init_cond)), dtype=np.complex128)
### setup ###
r = ode(ode_fun).set_integrator('zvode', method='adams', with_jacobian=False)
r.set_initial_value(init_cond, t_range[0])
dt = t_range[1] - t_range[0]
### integration ###
for i, t_i in enumerate(t_range):
    trajectory[i,:] = r.integrate(r.t+dt)
a_traj = trajectory[:,0]
b_traj = trajectory[:,1]
fun_traj = fun(t_range)
### plot ###
plt.figure(figsize=(10,5))
plt.subplot(121, title='ODE solution')
plt.plot(t_range, np.real(a_traj))
plt.subplot(122, title='Input')
plt.plot(t_range, fun_traj)
plt.show()
This code works correctly and produces the output figure below (the ODE depends explicitly on the input variable; the right panel shows the input, the left panel the ODE solution for the first variable).
So in principle my code is working. What is strange is that if I simply replace the integration range
t_range = np.linspace(-10., 10., 10000)
by
t_range = np.linspace(-20., 20., 10000)
I get the output
So somehow the integrator just gave up on the integration and left my solution as a constant. Why does this happen? How can I fix it?
Some things I've tested: it is clearly not a resolution problem; the integration steps are already really small. Instead, it seems that the integrator does not even bother calling the ODE function anymore after a few steps. I've verified that by including a print statement in ode_fun().
My current suspicion is that the integrator decided that my function is constant after it did not change significantly during the first few integration steps. Do I maybe have to set some tolerance levels somewhere?
Any help appreciated!
"My current suspicion is that the integrator decided that my function is constant after it did not change significantly during the first few integration steps." Your suspicion is correct.
ODE solvers typically have an internal step size that is adaptively adjusted based on error estimates computed by the solver. These step sizes are independent of the times for which the output is requested; the output at the requested times is computed using interpolation of the solution at the points computed at the internal steps.
When you start your solver at t = -20, apparently the input changes so slowly that the solver's internal step size becomes large enough that by the time the solver gets near t = 0, the solver skips right over the input pulse.
You can limit the internal step size with the option max_step of the set_integrator method. If I set max_step to 2.0 (for example),
r = ode(ode_fun).set_integrator('zvode', method='adams', with_jacobian=False,
                                max_step=2.0)
I get the output that you expect.
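For reference (not part of the original answer), the newer solve_ivp interface exposes the same control through a max_step argument. Here is a minimal sketch reusing ode_fun, t_range and init_cond from the question; choosing the explicit 'RK45' method is my assumption, since the explicit Runge-Kutta solvers accept a complex-valued state:

from scipy.integrate import solve_ivp

# cap the internal step so the solver cannot step right over the pulse near t = 0
sol = solve_ivp(ode_fun, (t_range[0], t_range[-1]),
                np.asarray(init_cond, dtype=np.complex128),
                method='RK45', t_eval=t_range, max_step=2.0)
a_traj = sol.y[0]  # solution for the first variable at the requested times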
TL;DR: I've been implementing a Python program to numerically solve the natural convection equations, written in terms of a particular similarity variable, using Runge-Kutta 4 and the shooting method. However, I don't get the right solutions when I plot them. Did I make a mistake somewhere?
Hi!
Starting from a special case of natural convection, we get these similarity equations (as implemented in the code below):
f''' + 3*f*f'' - 2*(f')^2 + theta = 0
theta'' + 3*Pr*f*theta' = 0
The first describes the fluid flow, the second describes the heat flow.
Pr is the Prandtl number, a dimensionless number used in fluid dynamics.
These equations are subject to boundary conditions such that the temperature near the plate is greater than the temperature outside the boundary layer, and such that the fluid velocity is 0 far away from the boundary layer.
I've been trying to solve these numerically with Runge-Kutta 4 and the shooting method, which transforms the boundary value problem into an initial value problem. The shooting method is implemented with Newton's method.
However, I don't get the right solutions.
As you can see in the following, the temperature (in red) is increasing as we are moving away from the plate whereas it should decrease exponentially.
The fluid velocity (in blue) is more consistent, although I think it should rise faster and then fall off faster; here the curve is smoother.
Now, the fact is that we have a system of 2 coupled ODEs. However, right now I'm only trying to find one of the two initial values (e.g. f''(0) = a, trying to find a) such that we have a solution to the boundary value problem (shooting method). Once it is found, I suppose we have the solution for the whole problem.
I guess I should handle the two together (f''(0) = a ; theta'(0) = b), but I don't know how to manage these two in parallel.
Last thing to mention: if I instead try to find the initial value of theta' (so theta'(0)), I don't get the right heat profile.
Here is the code :
"""
The goal is to resolve a 3rd order non-linear ODE for the blasius problem.
It's made of 2 equations (flow / heat)
f''' = 3ff'' - 2(f')^2 + theta
3 Pr f theta' + theta'' = 0
RK4 + Shooting Method
"""
import numpy as np
import math
from scipy.integrate import odeint
from scipy.optimize import newton
from edo_solver.plot import plot
from constants import PRECISION
def blasius_edo(y, t, prandtl):
    f = y[0:3]
    theta = y[3:5]
    return np.array([
        # flow edo
        f[1], # f' = df/dn
        f[2], # f'' = d^2f/dn^2
        - 3 * f[0] * f[2] + (2 * math.pow(f[1], 2)) - theta[0], # f''' = - 3ff'' + 2(f')^2 - theta,
        # heat edo
        theta[1], # theta' = dtheta/dn
        - 3 * prandtl * f[0] * theta[1], # theta'' = - 3 Pr f theta'
    ])
def rk4(eta_range, shoot):
    prandtl = 0.01
    # initial values
    f_init = [0, 0, shoot] # f(0), f'(0), f''(0)
    theta_init = [1, shoot] # theta(0), theta'(0)
    ci = f_init + theta_init # concatenate two ci
    # note: tuple with single argument must have "," at the end of the tuple
    return odeint(func=blasius_edo, y0=ci, t=eta_range, args=(prandtl,))
"""
if we have :
f'(t_0) = fprime_t0 ; f'(eta -> infty) = fprime_inf
we can transform it into :
f'(t_0) = fprime_t0 ; f''(t_0) = a
we define the function F(a) = f'(infty ; a) - fprime_inf
if F(a) has a root in "a",
then the solutions to the initial value problem with f''(t_0) = a
is also the solution the boundary problem with f'(eta -> infty) = fprime_inf
our goal is to find the root, we have the root...we have the solution.
it can be done with bissection method or newton method.
"""
def shooting(eta_range):
    # boundary value
    fprimeinf = 0 # f'(eta -> infty) = 0
    # initial guess
    # as far as I understand
    # it has to be the good guess
    # otherwise the result can be completely wrong
    initial_guess = 10 # guess for f''(0)
    # define our function to optimize
    # our goal is to take big eta because eta should approach infty
    # [-1, 1] : last row, second column => f'(eta_final) ~ f'(eta -> infty)
    fun = lambda initial_guess: rk4(eta_range, initial_guess)[-1, 1] - fprimeinf
    # newton method resolve the ODE system until eta_final
    # then adjust the shoot and resolve again until we have a correct shoot
    shoot = newton(func=fun, x0=initial_guess)
    # resolve our system of ODE with the good "a"
    y = rk4(eta_range, shoot)
    return y
def compute_blasius_edo(title, eta_final):
    ETA_0 = 0
    ETA_INTERVAL = 0.1
    ETA_FINAL = eta_final
    # default values
    title = title
    x_label = "$\eta$"
    y_label = "profil de vitesse $(f'(\eta))$ / profil de température $(\\theta)$"
    legends = ["$f'(\eta)$", "$\\theta$"]
    eta_range = np.arange(ETA_0, ETA_FINAL + ETA_INTERVAL, ETA_INTERVAL)
    # shoot
    y_set = shooting(eta_range)
    plot(eta_range, y_set, title, legends, x_label, y_label)

compute_blasius_edo(
    title="Convection naturelle - Solution de similitude",
    eta_final=10
)
I could be completely off base here, but I wrote something similar to solve 1D fluid-reaction-heat equations. Try using solve_ivp with the 'Radau' solver method; it helps with more difficult systems.
Also, maybe try converting your system of ODEs to a system of first-order ODEs, as that may help.
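As a sketch of that suggestion (the initial values below are placeholders, not a working shoot), solve_ivp with Radau could be called on the system from the question like this; note that solve_ivp expects fun(t, y), the reverse of the odeint argument order:

from scipy.integrate import solve_ivp

ci = [0, 0, 0.5, 1, -0.5]  # illustrative f(0), f'(0), f''(0), theta(0), theta'(0)
sol = solve_ivp(lambda eta, y: blasius_edo(y, eta, 0.01),
                (0, 10), ci, method='Radau', dense_output=True)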
You are implementing the additional but wrong boundary condition f''(0) = theta'(0), as both slots get the same initial value in the shooting method. You need to hold them separate, giving 2 free variables and thus the need for a 2-dimensional Newton method or any other solver for non-scalar functions.
You could just as well use the solve_bvp routine with a sensible initial guess.
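To make that concrete, here is a minimal sketch of a two-variable shoot, reusing blasius_edo and odeint from the question. scipy.optimize.fsolve stands in for the multidimensional Newton solver, and the far-field targets f'(eta_final) = 0 and theta(eta_final) = 0 as well as the starting guesses are my assumptions, not values from the original post:

from scipy.optimize import fsolve

def residuals(guess, eta_range, prandtl=0.01):
    a, b = guess  # a = f''(0), b = theta'(0), kept as separate unknowns
    ci = [0, 0, a, 1, b]  # f(0), f'(0), f''(0), theta(0), theta'(0)
    y = odeint(blasius_edo, ci, eta_range, args=(prandtl,))
    # enforce f'(eta_final) ~ 0 and theta(eta_final) ~ 0
    return [y[-1, 1], y[-1, 3]]

eta_range = np.arange(0, 10.1, 0.1)
a0, b0 = fsolve(residuals, x0=[0.5, -0.5], args=(eta_range,))
y_set = odeint(blasius_edo, [0, 0, a0, 1, b0], eta_range, args=(0.01,))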
I am trying to fit the parameters of a differential equation system with the help of this post: http://adventuresinpython.blogspot.com/2012/08/fitting-differential-equation-system-to.html, but the fitted parameters remain the same, independently of what initial values are chosen.
#Zombie Display
# zombie apocalypse modeling
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy import integrate
from scipy.optimize import fmin
#=====================================================
#Notice we must import the Model Definition
#The Model
#=======================================================
def eq(par,initial_cond,start_t,end_t,incr):
    #-time-grid-----------------------------------
    t = np.linspace(start_t, end_t,incr)
    #differential-eq-system----------------------
    def funct(phi,t):
        S=phi[0]
        E=phi[1]
        I=phi[2]
        R=phi[3]
        dS_dt = mu*(N-S)-beta*S*I/N-nu*S
        dE_dt = beta*S*I/N-(mu+sigma)*E
        dI_dt = sigma*E-(mu+gamma)*I
        dR_dt = gamma*I-mu*R+nu*S
        dphi_dt = [dS_dt,dE_dt,dI_dt,dR_dt]
        return dphi_dt
    #integrate------------------------------------
    ds = integrate.odeint(funct,initial_cond,t)
    return (ds[:,0],ds[:,1],ds[:,2],ds[:,3],t)
#=======================================================
#=====================================================
#1.Get Data
#====================================================
Td=np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])#time
Zd=np.array([274,326,547,639,2000,2700,4400,6000,7700,9700,11200,14300,17200,19200,20000])#zombie pop
#====================================================
#2.Set up Info for Model System
#===================================================
# model parameters
#----------------------------------------------------
beta=1.4
gamma=0.06
sigma=0.15
mu=0
nu=0
rates=(beta,gamma,sigma,mu,nu)
# model initial conditions
#---------------------------------------------------
N=11*pow(10,6)
S0 = N-1 # initial population
E0 = 105.1 # initial zombie population
I0 = 27.679 # initial death population
R0= 2.0
y0 = [S0, E0, I0, R0] # initial condition vector
# model steps
#---------------------------------------------------
start_time=0.0
end_time=140
intervals=5.0
mt=np.linspace(start_time,end_time,intervals)
# model index to compare to data
#----------------------------------------------------
findindex=lambda x:np.where(mt>=x)[0][0]
mindex=list(map(findindex,Td))
#=======================================================
#3.Score Fit of System
#=========================================================
def score(parms):
    #a.Get Solution to system
    F0,F1,F2,F3,T=eq(parms,y0,start_time,end_time,intervals)
    #b.Pick of Model Points to Compare
    Zm=F1[mindex]
    #c.Score Difference between model and data points
    ss=lambda data,model:((data-model)**2).sum()
    return ss(Zd,Zm)
#========================================================
#=========================================================
#4.Optimize Fit
#=======================================================
fit_score=score(rates)
answ=fmin(score,(rates),full_output=1,maxiter=1000000)
bestrates=answ[0]
bestscore=answ[1]
beta, gamma, sigma, mu, nu=answ[0]
newrates=(beta,gamma,sigma,mu,nu)
#=======================================================
#5.Generate Solution to System
#=======================================================
F0,F1,F2,F3,T=eq(newrates,y0,start_time,end_time,intervals)
Zm=F1[mindex]
Tm=T[mindex]
#======================================================
#6. Plot Solution to System
#=========================================================
plt.figure()
plt.plot(T,F1,'b-',Tm,Zm,'ro',Td,Zd,'go')
plt.xlabel('days')
plt.ylabel('population')
title='Zombie Apocalypse Fit Score: '+str(bestscore)
plt.title(title)
plt.show()
I know that this is huge, but if somebody is an expert in parameter estimation for differential equation systems, I would be glad to hear any information. Thanks a lot!
You do not see any change in the parameters because you do not use them; the score function is constant with respect to them. While you pass parms on to eq, inside eq you never unpack the par argument.
Add in eq as first line
beta,gamma,sigma,mu,nu = par
and the minimization algorithm does something non-trivial. It is still possible that no solution will be found in reasonable time. Set maxiter to something more reasonable. If the method then fails, it may also be a matter of the problem formulation. Perhaps the initial conditions also need to be optimization variables, or the ODE system is not really suitable.
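Applied to the function from the question, the corrected eq would read as follows (the same body as before with only the unpacking line added at the top; N, np and integrate come from the original script):

def eq(par, initial_cond, start_t, end_t, incr):
    # unpack the parameters being optimized so they are actually used below
    beta, gamma, sigma, mu, nu = par
    #-time-grid-----------------------------------
    t = np.linspace(start_t, end_t, incr)
    #differential-eq-system----------------------
    def funct(phi, t):
        S, E, I, R = phi
        dS_dt = mu*(N-S)-beta*S*I/N-nu*S
        dE_dt = beta*S*I/N-(mu+sigma)*E
        dI_dt = sigma*E-(mu+gamma)*I
        dR_dt = gamma*I-mu*R+nu*S
        return [dS_dt, dE_dt, dI_dt, dR_dt]
    #integrate------------------------------------
    ds = integrate.odeint(funct, initial_cond, t)
    return (ds[:,0], ds[:,1], ds[:,2], ds[:,3], t)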
I am trying to solve an equation in Python. Basically what I want to do is to solve the equation:
(1/x^2)*d(Gam*dL/dx)/dx + (a^2*x^2/Gam - m^2)*L = 0
This is the Klein-Gordon equation for a massive scalar field in a Schwarzschild spacetime. It is assumed that we know m, and Gam = x^2 - 2*x. The initial/boundary conditions that I know are L(2+epsilon) = 1 and L(infty) = 0. Notice that the asymptotic behavior of the equation is
L(x-->infty)-->Exp[(m^2-a^2)*x]/x and Exp[-(m^2-a^2)*x]/x
Then, if a^2>m^2 we will have oscillatory solutions, while if a^2 < m^2 we will have a divergent and a decay solution.
What I am interested in is the decaying solution. However, when I transform the above equation into a system of first-order differential equations and use the shooting method to find the "a" that gives me the behavior I am interested in, I always get a divergent solution. I suppose that this happens because odeint always finds the divergent asymptotic solution. Is there a way to avoid this, or to tell odeint that I am interested in the decaying solution? If not, do you know a way I could solve this problem, maybe using another method for my system of differential equations? If yes, which method?
Basically what I am doing is adding a new equation for "a",
(d^2a/dx^2 = 0, da/dx(2+epsilon) = 0, a(2+epsilon) = a_0)
in order to treat "a" as a constant. Then I consider different values of "a_0" and ask whether my boundary conditions are fulfilled.
Thanks for your time. Regards,
Luis P.
I am incorporating the value at infinity by considering the asymptotic behavior, which means that I have a relation between the field and its derivative. I will post the code, in case it is helpful:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from math import *
from scipy.integrate import ode
These are the initial conditions for Schwarzschild. The field is invariant under rescaling, so I can use $L(2+\epsilon)=1$:
def init_sch(u_sch):
    om = u_sch[0]
    return np.array([1,0,om,0]) #conditions near the horizon, [L_c,dL/dx,a,da/dx]
This is our system of equations:
def F_sch(IC,r,rho_c,m,lam,l,j=0,mu=0):
    L = IC[0]
    ph = IC[1]
    om = IC[2]
    b = IC[3]
    Gam_sch = r**2.-2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph+L*(l*(l+1.))-om**2.*r**4.*L/Gam_sch+(m**2.+lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.
    return [dR_dr,dph_dr,dom_dr,db_dr]
Then I try different values of "om" and ask whether my boundary conditions are fulfilled. p_sch are the parameters of my model. In general what I want to do is a little more complicated and will need more parameters than the purely massive case; however, I need to start with the easiest case, which is what I am asking about here.
p_sch = (1,1,0,0) #[rho_c,m,lam,l], lam and l are for a more complicated case
ep = 0.2
ep_r = 0.01
r_end = 500
n_r = 500000
n_omega = 1000
omega = np.linspace(p_sch[1]-ep,p_sch[1],n_omega)
r = np.linspace(2+ep_r,r_end,n_r)
tol = 0.01
a = 0
for j in range(len(omega)):
    print('trying with $omega =$', omega[j])
    omeg = [omega[j]]
    ini = init_sch(omeg)
    Y = odeint(F_sch, ini, r, p_sch, mxstep=50000000)
    print(Y[-1,0])
    # Here I ask if my asymptotic behavior is fulfilled or not. This should basically be my value at infinity
    if abs(Y[-1,0]*((p_sch[1]**2.-Y[-1,2]**2.)**(1/2.)+1./(r[-1]))+Y[-1,1]) < tol:
        print(j, 'times iterations in omega')
        print("R'(inf)) = ", Y[-1,0])
        print("\omega", omega[j])
        omega_1 = [omega[j]]
        a = 10
        break
    if a > 1:
        break
Basically what I want to do here is to solve the system of equations for different initial conditions and find a value of "a" (or "om" in the code) that comes close to satisfying my boundary conditions. I need this because afterwards I can feed such an initial guess to a secant method and try to find a better value of "a". However, whenever I run this code I get divergent solutions, which is of course a behavior I am not interested in. I am trying the same with scipy.integrate.solve_bvp, but when I run the following code:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from math import *
from scipy.integrate import solve_bvp
def bc(ya,yb,p_sch):
    m = p_sch[1]
    om = p_sch[4]
    tol_s = p_sch[5]
    r_end = p_sch[6]
    return np.array([ya[0]-1,yb[0]-tol_s,ya[1],yb[1]+((m**2-yb[2]**2)**(1/2)+1/r_end)*yb[0],ya[2]-om,yb[2]-om,ya[3],yb[3]])
def fun(r,y,p_sch):
    rho_c = p_sch[0]
    m = p_sch[1]
    lam = p_sch[2]
    l = p_sch[3]
    L = y[0]
    ph = y[1]
    om = y[2]
    b = y[3]
    Gam_sch = r**2.-2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph+L*(l*(l+1.))-om**2.*r**4.*L/Gam_sch+(m**2.+lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.*y[3]
    return np.vstack((dR_dr,dph_dr,dom_dr,db_dr))
eps_r=0.01
r_end = 500
n_r = 50000
r = np.linspace(2+eps_r,r_end,n_r)
y = np.zeros((4,r.size))
y[0]=1
tol_s = 0.0001
p_sch= (1,1,0,0,0.8,tol_s,r_end)
sol = solve_bvp(fun,bc, r, y, p_sch)
I am obtaining this error: ValueError: bc return is expected to have shape (11,), but actually has (8,).
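For context on that error (this note is not part of the original post): solve_bvp treats its p argument as a vector of unknown parameters to be solved for, and requires bc to return n + k residuals, where n is the number of ODEs and k the number of parameters. With n = 4 equations and the 7-element p_sch tuple it therefore expects 11 residuals, while bc returns 8. One way around it, sketched below under the assumption that only om is genuinely unknown, is to drop the artificial (om, b) equations, pass om alone as p, and keep three residuals adapted from the ones already in bc (which residuals to keep is a modeling choice, not something stated in the post):

def fun2(r, y, p, m=1.0, lam=0.0, l=0.0):
    # y = (L, dL/dr); p = (om,) is the single unknown parameter
    L, ph = y
    om = p[0]
    Gam = r**2 - 2.0*r
    dL_dr = ph
    dph_dr = (1.0/Gam)*(2.0*(1.0 - r)*ph + L*l*(l + 1.0)
                        - om**2*r**4*L/Gam + (m**2 + lam*L**2)*r**2*L)
    return np.vstack((dL_dr, dph_dr))

def bc2(ya, yb, p, m=1.0, r_end=500.0, tol_s=0.0001):
    om = p[0]  # assumed to stay below m (decaying regime), as in the scan above
    # n + k = 2 + 1 = 3 residuals: L = 1 near the horizon, L ~ tol_s far away,
    # and the asymptotic relation between L and L' used in the question
    return np.array([ya[0] - 1.0,
                     yb[0] - tol_s,
                     yb[1] + ((m**2 - om**2)**0.5 + 1.0/r_end)*yb[0]])

r = np.linspace(2 + 0.01, 500, 5000)
y_guess = np.zeros((2, r.size))
y_guess[0] = np.exp(-(r - r[0])/50.0)  # rough decaying initial guess
sol = solve_bvp(fun2, bc2, r, y_guess, p=[0.8])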
This is quite a specific problem that I was hoping the community could help me out with. Thanks in advance.
So I have 2 sets of data: one is experimental and the other is based on an equation. I am trying to fit my data points to this curve and hence obtain the missing variables I am interested in, namely a and b in the Ebfit function.
Here is the code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as spys
from scipy.optimize import curve_fit
time = [60,220,520,1840]
Moment = [0.64227262,0.468318916,0.197100772,0.104512508]
Temperature = 25 # Bake temperature in degrees C
Nb = len(Moment) # Number of bake measurements
Baketime_a = time #[s]
N_Device = 10000 # No. of devices considered in the array
T_ambient = 273 + Temperature
kt = 0.0256*(T_ambient/298) # In units of eV
f0 = 1e9 # Attempt frequency
def Ebfit(x,a,b):
    Eb_mean = a*(0.0256/kt) # Eb at bake temperature
    Eb_sigma = b*Eb_mean
    Foursigma = 4*Eb_sigma
    Eb_a = np.linspace(Eb_mean-Foursigma,Eb_mean+Foursigma,N_Device)
    dEb = Eb_a[1] - Eb_a[0]
    pdfEb_a = spys.norm.pdf(Eb_a,Eb_mean,Eb_sigma)
    ## Retention Time
    DMom = np.zeros(len(x),float)
    tau = (1/f0)*np.exp(Eb_a)
    for bb in range(len(x)):
        DMom[bb] = (1 - 2*(sum(pdfEb_a*(1 - np.exp(np.divide(-x[bb],tau))))*dEb))
    return DMom
a = 30
b = 0.10
params,extras = curve_fit(Ebfit,time,Moment)
x_new = list(range(0,2000,1))
y_new = Ebfit(x_new,params[0],params[1])
plt.plot(time,Moment, 'o', label = 'data points')
plt.plot(x_new,y_new, label = 'fitted curve')
plt.legend()
The main problem I am having is that the fit does not work when I use a large number of points. In the above code, when I use the 4 points (time & moment), it works fine.
I get the following values for a and b.
array([ 29.11832766, 0.13918353])
The expected range for a is (23-50) and for b is (0.06-0.15), so these values are acceptable. This is the corresponding plot:
However, the problem appears when I use my actual experimental normalized data with about 500 points.
EDIT: This data:
Normalized Data
https://www.dropbox.com/s/64zke4wckxc1r75/Normalized%20Data.csv?dl=0
Raw Data
https://www.dropbox.com/s/ojgse5ibp59r8nw/Data1.csv?dl=0
I get the following values and plot for a and b, which are out of the acceptable range:
array([-13.76687781, -12.90494196])
I know these values are wrong, and if I were to do it manually (slowly adjusting values to obtain the proper fit) they would be around a = 30.1 and b = 0.09. When plotted it looks as such:
I have tried changing the initial guess values for a & b, other sets of experimental data as well and other suggestions in similar threads. None seem to work for me. Any help you can provide is appreciated. Thanks.
ADDITIONAL INFORMATION
The model I am trying to fit the data to comes from the following equation:
where Dmom = 1 - 2*Psw
a is the Eb value while b is the sigma value, where Eb takes a range of values determined by the probability density function over four times sigma (i.e. Foursigma). This distribution is then summed up and used in the final equation.
It looks like you do need to play around with the initial guesses for a and b after all. Perhaps the function you're fitting is not very well behaved, which is why it's so prone to fail for initial guesses away from the global minimum. That being said, here's a working example of how to fit your data:
import pandas as pd
data_df = pd.read_csv('data.csv')
time = data_df['Time since start, Time [s]'].values
moment = data_df['Signal X direction, Moment [emu]'].values
params, extras = curve_fit(Ebfit, time, moment, p0=[40, 0.3])
Yields the values of a and b of:
In [6]: params
Out[6]: array([ 30.47553689, 0.08839412])
Which results in a nicely aligned fit of a function.
x_big = np.linspace(1, 1800, 2000)
y_big = Ebfit(x_big, params[0], params[1])
plt.plot(time, moment, 'o', alpha=0.5, label='all points')
plt.plot(x_big, y_big, label = 'fitted curve')
plt.legend()
plt.show()
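Since the question already gives acceptable ranges for a (23-50) and b (0.06-0.15), one optional refinement (not part of the original answer) is to pass those ranges as bounds to curve_fit, which keeps the optimizer away from the negative values seen earlier; a sketch reusing the variables above, with the initial guess adjusted so it lies inside the bounds:

# constrain a to [23, 50] and b to [0.06, 0.15] during the fit
params, extras = curve_fit(Ebfit, time, moment, p0=[40, 0.1],
                           bounds=([23, 0.06], [50, 0.15]))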