I am working on a simulation of a system described by coupled differential equations. My main aim is to solve the mass balance at steady state and feed the steady-state solution as the initial condition for the dynamic simulation.
There are three state variables: Ss, Xs and Xbh. The rate equations look like this:
r1 = µH*(Ss/(Ks+Ss))*(So/(Koh+So))*Xbh + Kh*((Xs/Xbh)/(Xs/Xbh+Kx))*(So/(Koh+So))*Xbh
r2 = (1-fp)*bH*Xbh - Kh*((Xs/Xbh)/(Xs/Xbh+Kx))*(So/(Koh+So))*Xbh
r3 = µH*(Ss/(Ks+Ss))*(So/(Koh+So))*Xbh - bH*Xbh
And the main differential equations, derived from the mass balance for a CSTR, are:
dSs/dt = Q*(Ss_in - Ss) + r1*V
dXs/dt = Q*(Xs_in - Xs) + r2*V
dXbh/dt = Q*(Xbh_in - Xbh) + r3*V
Here is my code so far:
import numpy as np
from scipy.optimize import fsolve
parameter=dict()
parameter['u_h']=6.0
parameter['k_oh']=0.20
parameter['k_s']=20.0
parameter['k_h']=3.0
parameter['k_x']=0.03
parameter['Y_h']=0.67
parameter['f_p']=0.08
parameter['b_h']=0.62
Bulk_DO=2.0 #mg/L
#influent components:
infcomp=[56.53,182.9,16.625] #mgCOD/l
Q=684000 #L/hr
V=1040000 #l
def steady(z, *args):
    Ss = z[0]
    Xs = z[1]
    Xbh = z[2]

    def monod(My_S, My_K):
        return My_S/(My_S + My_K)

    #Conversion rates
    #Conversion of Ss
    r1 = ((-1/parameter['Y_h'])*parameter['u_h']*monod(Ss, parameter['k_s'])
          + parameter['k_h']*monod(Xs/Xbh, parameter['k_x'])*monod(Bulk_DO, parameter['k_oh'])) \
         *Xbh*monod(Bulk_DO, parameter['k_oh'])
    #Conversion of Xs
    r2 = ((1-parameter['f_p'])*parameter['b_h'] - parameter['k_h']*monod(Xs/Xbh, parameter['k_x']))*Xbh
    #Conversion of Xbh
    r3 = (parameter['u_h']*monod(Ss, parameter['k_s'])*monod(Bulk_DO, parameter['k_oh']) - parameter['b_h'])*Xbh

    f = np.zeros(3)
    f[0] = Q*(infcomp[0] - Ss) + r1*V
    f[1] = Q*(infcomp[1] - Xs) + r2*V
    f[2] = Q*(infcomp[2] - Xbh) + r3*V
    return f
initial_guess=(0.1,0.1,0.1)
soln=fsolve(steady,initial_guess,args=parameter)
print (soln)
How can I plot the steady-state condition like this? (figure: steady state plot)
The solution is also not what I expect, since the equations imply a reduction in Ss and Xs and an increase in Xbh with time. Also, one of the solution values is negative, which is physically impossible.
Any suggestions would be highly appreciated. Thanks in advance!
Here is a fix for the negative values in your solution: instead of fsolve, use least_squares, which lets you put bounds on the possible values.
At the top, import:
from scipy.optimize import least_squares
And replace the fsolve statement with:
soln = least_squares(steady, initial_guess, bounds=[(0, 0, 0), (np.inf, np.inf, np.inf)], args=(parameter,))
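Note that, unlike fsolve, least_squares returns an OptimizeResult, so the steady-state concentrations end up in soln.x rather than in soln itself. For the plotting part of the question, here is a minimal sketch (my own illustration, not tested against your parameter set) of combining the steady-state result with a dynamic simulation via scipy.integrate.solve_ivp: it reuses your steady() residuals divided by V as the right-hand side (the usual CSTR form; drop the division by V if you really intend the balances exactly as written above), starts from the influent concentrations, and plots the approach to the computed steady state.

```python
# Sketch only: dynamic simulation plotted against the steady-state solution.
# Assumes the dynamic balances are dC/dt = (Q*(C_in - C) + r*V)/V, i.e. the
# steady() residuals divided by V.
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

steady_state = soln.x                 # least_squares packs the solution in .x

def dynamic(t, z):
    return steady(z, parameter)/V     # mass-balance residuals per unit volume

t_eval = np.linspace(0, 50, 500)      # hours; arbitrary horizon for the plot
dyn = solve_ivp(dynamic, (0, 50), infcomp, t_eval=t_eval)

for label, row in zip(['Ss', 'Xs', 'Xbh'], dyn.y):
    plt.plot(dyn.t, row, label=label)
for val in steady_state:
    plt.axhline(val, color='grey', ls='--', lw=0.8)   # steady-state levels
plt.xlabel('time (h)')
plt.ylabel('concentration (mg COD/L)')
plt.legend()
plt.show()
```

Starting from the influent composition is just a choice to make the transient visible; starting from soln.x itself would give flat lines at the steady state.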
During a project I've been working on, I collected experimental temperature data as a function of time from a metal casting at 4 locations in the system. This temperature profile is complex, with a period of initial cooling, followed by an enormous, sudden rise in temperature as the molten alloy arrives, and then a final period of cooling.
To understand the temperature environment between the measurement locations, I'm trying to use Python to solve the heat equation, which requires a combination of symbolic derivatives and integrals (for which I've used SymPy) and numerical calculations (for which I've used lambdify and NumPy).
The issue comes when I want to use the collected temperature data as boundary conditions in the calculation. I've used SciPy to interpolate between the data points to obtain a complete temperature data set for all times (and a new spline representing the derivative of the original spline), but I cannot get this interpolation into a form that SymPy will accept for the derivatives and integrals in the calculation.
Any advice or suggestions?
########################################################################################################
The code I've used for the opening step of defining the interpolation is detailed below. I appreciate it would be more efficiently written with arrays, but I find it easier to see it all when it's written out longhand (I generally simplify later if needed):
Note: The time (the x-axis parameter in the interpolation) is a strictly increasing parameter
import numpy as np #import the relevant packages/items
import pandas as pd
from scipy import signal
import sympy as sp
from scipy.interpolate import UnivariateSpline
x=sp.Symbol("x", real=True, positive=True) #define the sympy symbols
L=sp.Symbol("L", real=True, positive=True)
filename='.../Temperature Data for Code.csv'
data = np.array(pd.read_csv(filename,skiprows=1, header=None)) #reading in the datasets from file
Time_exp=np.array(data[:,0]) #assign the time (x-axis)
T_Alloy_centre_orig=np.array(data[:,1]) +273 #assign 4 temperature (y-axis) and convert to K
T_Alloy_edge_orig=np.array(data[:,2]) +273
T_Inner_orig=np.array(data[:,9]) +273
T_Outer_orig=np.array(data[:,11]) +273
T_Alloy_centre=T_Alloy_centre_orig #create copy of the original data before manipulation
T_Alloy_edge=T_Alloy_edge_orig
T_Inner=T_Inner_orig
T_Outer=T_Outer_orig
T_Alloy_centre=signal.savgol_filter(T_Alloy_centre,3,1) #basic filter to smooth experimental noise
T_Alloy_edge=signal.savgol_filter(T_Alloy_edge,3,1)
T_Inner=signal.savgol_filter(T_Inner,3,1)
T_Outer=signal.savgol_filter(T_Outer,3,1)
T_Alloy_centre_xt = UnivariateSpline(Time_exp, T_Alloy_centre,k=3) #perform spline interpolation
T_Alloy_edge_xt = UnivariateSpline(Time_exp, T_Alloy_edge,k=3)
T_Inner_xt = UnivariateSpline(Time_exp, T_Inner,k=3)
T_Outer_xt = UnivariateSpline(Time_exp, T_Outer,k=3)
diff_T_Alloy_centre_xt = T_Alloy_centre_xt.derivative(n=1) #new spline for derivative of previous
diff_T_Alloy_edge_xt = T_Alloy_edge_xt.derivative(n=1)
diff_T_Inner_xt = T_Inner_xt.derivative(n=1)
diff_T_Outer_xt = T_Outer_xt.derivative(n=1)
#############################################################################
Here is where the speculation begins. I've tried several things to convert the resulting interpolation into something that SymPy can use, but without success.
First, I tried a sympy implemented_function approach and also just using a lambda, although this just gave a string as far as I can tell:
T_Alloy_centre_f = implemented_function('T_Alloy_centre_f', lambda t: T_Alloy_centre_xt)
T_Alloy_centre_f = lambda t: T_Alloy_centre_xt
Secondly, I tried the interpolation functions available within SymPy (interpolating_spline), but this ran for 15 minutes without producing a result for even one of the 4 measurements. It would be convenient if this worked, as it stays entirely within SymPy, but the calculation time is extreme, possibly because the data is not smooth, featuring a sudden, massive increase in temperature on arrival of the molten alloy.
T_Alloy_centre_xt = sp.interpolating_spline(3, x, Time_exp, T_Alloy_centre)
Finally, I pulled the spline coefficients and knots out of the interpolation with the aim of building the function manually before converting, but I could not come up with a convenient way of constructing the piecewise function. Nor did the previous implemented_function approach seem to work here either.
spline_coeffs = T_Alloy_centre_xt.get_coeffs()
spline_knots = T_Alloy_centre_xt.get_knots()
I'm not sure how to proceed from here. I need something from this interpolation that can be passed through sp.diff and sp.integrate.
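One route I have sketched but not verified on the real data (spline_to_sympy and t_sym below are names I have made up, and I use splrep rather than the UnivariateSpline object above, so the smoothing defaults may differ) is to expand the fitted spline into explicit piecewise polynomials with scipy.interpolate.PPoly and rebuild them as a sympy Piecewise, which sp.diff and sp.integrate can then handle:

```python
# Untested sketch: turn a SciPy spline fit into a sympy Piecewise polynomial.
from scipy.interpolate import splrep, PPoly

t_sym = sp.Symbol("t_sym", real=True)                 # hypothetical time symbol

def spline_to_sympy(times, values, sym, k=3):
    """Fit a spline to (times, values) and return it as a sympy Piecewise in sym."""
    tck = splrep(times, values, k=k)                  # (knots, coefficients, degree)
    pp = PPoly.from_spline(tck)                       # breakpoints pp.x, coeffs pp.c
    order = pp.c.shape[0] - 1
    pieces = []
    for i in range(len(pp.x) - 1):
        a, b = float(pp.x[i]), float(pp.x[i + 1])
        if a == b:                                    # skip repeated boundary knots
            continue
        # polynomial on [a, b]: sum_j c[j, i] * (sym - a)**(order - j)
        poly = sum(float(pp.c[j, i])*(sym - a)**(order - j) for j in range(order + 1))
        pieces.append((sp.expand(poly), (sym >= a) & (sym <= b)))
    return sp.Piecewise(*pieces, (sp.nan, True))

# Hypothetical usage with the data above:
# T_Inner_f = spline_to_sympy(Time_exp, T_Inner, t_sym)
# dT_Inner = sp.diff(T_Inner_f, t_sym)                # ordinary symbolic expression
```

The resulting Piecewise has one polynomial per knot interval, so sp.integrate can still be slow if the spline has many knots; a stronger smoothing (fewer knots) keeps it manageable.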
###########################################################################
#if I can get past the above conversion, the next step in the code is evaluating the derivative at a specified value and performing an integral as below:
F_A=-1*sp.diff(T_Inner_f, t) - x/L*(sp.diff(T_Outer_f,t)-sp.diff(T_Inner_f,t))
f_n_A=(2/L)*sp.integrate(F_A*sp.sin(lamda*x),(x,0,L))
Any assistance would be hugely appreciated.
I have solved a single second-order differential equation with two boundary conditions using solve_bvp. However, now I am trying to solve a system of two second-order differential equations:
U'' + a*B' = 0
B'' + b*U' = 0
with the boundary conditions U(+/-0.5) = +/-0.01 and B(+/-0.5) = 0. I have split this into a system of first-order ordinary differential equations and I am trying to use solve_bvp to solve them numerically. However, I am just getting arrays full of zeros for my solution. I believe I am implementing the boundary conditions wrong. It is not clear to me from the documentation how to handle more than two equations. My attempt is below:
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
%matplotlib inline

alpha = 1E-8
zeta = 8E-3
C_k = 0.05
sigma = 0.01

def fun(x, y):
    return np.vstack((y[1], -((alpha)/(C_k*sigma))*y[2], y[2], -(1/(C_k*zeta))*y[1]))

def bc(ya, yb):
    return np.array([ya[0]+0.001, yb[0]-0.001, ya[0]-0, yb[0]-0])

x = np.linspace(-0.5, 0.5, 5000)
y = np.zeros((4, x.size))
print(y)

sol = solve_bvp(fun, bc, x, y)
print(sol)
In my question I have just relabeled a and b, but they're just parameters that I input. I have the analytic solution for this set of equations so I know one exists that is non-trivial. Any help would be greatly appreciated.
It usually helps a lot if you state at least once, in a comment or by assigning to specifically named variables, how you intend to compose the state vector.
By the form of the derivative return vector, I would think you intend
U, U', B, B'
which means that U=y[0], U'=y[1], B=y[2], B'=y[3], so that your derivatives vector should correctly be
return y[1], -((alpha)/(C_k*sigma))*y[3], y[3], -(1/(C_k*zeta))*y[1]
and the boundary conditions
return ya[0]+0.001, yb[0]-0.001, ya[2]-0, yb[2]-0
Your boundary conditions in particular should trip up the algorithm in the first step because of a singular Jacobian; always check the .success field and the .message field of the solution structure.
Note that by default the tolerance of solve_bvp is tol=1e-3, and the number of mesh nodes is limited by max_nodes=1000.
Setting the initial node number to 50 (5000 is far too many; the solver refines the mesh where necessary) and the tolerance to 1e-6, I get solution plots that visibly satisfy the boundary conditions.
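For concreteness, a minimal sketch of the corrected script (with the state ordered as U, U', B, B' as above; the 50-node initial mesh and tol=1e-6 are the choices just mentioned) could look like this:

```python
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt

alpha = 1E-8
zeta = 8E-3
C_k = 0.05
sigma = 0.01

def fun(x, y):
    # state vector: y[0]=U, y[1]=U', y[2]=B, y[3]=B'
    return np.vstack((y[1],
                      -(alpha/(C_k*sigma))*y[3],
                      y[3],
                      -(1/(C_k*zeta))*y[1]))

def bc(ya, yb):
    # U(-0.5) = -0.001, U(+0.5) = +0.001, B(+/-0.5) = 0
    return np.array([ya[0]+0.001, yb[0]-0.001, ya[2], yb[2]])

x = np.linspace(-0.5, 0.5, 50)          # coarse initial mesh; the solver refines it
y = np.zeros((4, x.size))               # trivial initial guess

sol = solve_bvp(fun, bc, x, y, tol=1e-6)
print(sol.status, sol.message)          # always check whether the solver converged

x_plot = np.linspace(-0.5, 0.5, 500)
plt.plot(x_plot, sol.sol(x_plot)[0], label="U")
plt.plot(x_plot, sol.sol(x_plot)[2], label="B")
plt.xlabel("x")
plt.legend()
plt.show()
```

sol.sol is the interpolant returned by solve_bvp, which is convenient for plotting on a finer grid than the final mesh.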
I am studying how to solve differential equations in Python with odeint and, as a test, I am trying to solve the following ODE (the example comes from https://apmonitor.com/pdc/index.php/Main/SolveDifferentialEquations):
# first import the necessary libraries
import numpy as np
from scipy.integrate import odeint

# function that returns dy/dt
def model(y, t):
    k = 0.3
    dydt = -k*y
    return dydt

# Initial condition
y0 = 5.0

# Time points
t = np.linspace(0, 20)

# Solve ODE
def y(t):
    return odeint(model, y0, t)
So if I plot the results with matplotlib, or more simply run print(y(t)), everything works perfectly! But if I try to compute the value of the function for a fixed time, for instance t1 = t[2] (= 0.8163...), I get the error
t1 = t[2]
print(y(t1))
ValueError("diff requires input that is at least one dimensional")
Why can I only compute y(t) for an interval t = np.linspace(0,20) and not for a single number in this interval? Is there some way to fix this?
Thank you very much.
The odeint function solves your differential equation numerically. To do that you need to specify the points where you want your solution to be evaluated. These points also influence the accuracy of the solution. Generally, the more points you give to odeint, the better the result (when solving over the same time interval).
This means that there is no way for odeint to know what accuracy you want if you only supply a single time at which you want to evaluate the function. Instead you always need to supply a range of times (like you did with np.linspace). odeint then returns the value of the solution at all these times.
y(t) is an array of values of your solution and the third value in the array corresponds to the solution at the third time in t:
The solution evaluated at t[0] is y(t)[0] = y0
The solution evaluated at t[1] is y(t)[1]
The solution evaluated at t[2] is y(t)[2]
...
So instead of
print(y(t[2]))
you need to use
print(y(t)[2])
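Put together, the usual pattern is a single odeint call over the whole time grid followed by indexing the returned array (a small sketch based on the example above, without the extra function wrapper):

```python
import numpy as np
from scipy.integrate import odeint

def model(y, t):
    k = 0.3
    return -k*y

y0 = 5.0
t = np.linspace(0, 20)        # all times at which the solution is wanted

sol = odeint(model, y0, t)    # sol[i] is the solution at time t[i]
print(t[2], sol[2])           # solution value at the third time point
```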
So, I'm trying to write code that solves what we called the differential equation of an orbit in the Kepler potential V(r) = -1/r.
When you do the math you get a differential equation that looks like this:
d^2u/d(fi)^2 + u - m/M^2 = 0
where u = 1/r,
and we are ultimately looking for r(fi).
Now I tried to solve it numerically. First I set du/dfi = y (introducing the derivative as a second state variable), then defined a function (I took some arbitrary M and m):
def func(y, fi):
    m = 4
    M = 5
    return [y[1], m/M**2 - y[0]]
I imported odeint (from scipy.integrate import odeint) and then ran:
ts = np.linspace(0,15,150)
ys = odeint(func, y0, ts)
Now this gets me an array of 150 arrays of two numbers each, and I don't really understand what the first number means and what the second number means. Is it
ys = [fi, u(fi)]
or something else?
The state of your first-order system is [value, derivative], i.e. [u, du/dfi]. The result of the integration is a list of such state pairs, one row per value of fi in ts.
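So under that interpretation, column 0 of ys is u(fi) and column 1 is du/dfi, and the orbit radius is recovered as r = 1/u. A small sketch (the initial state y0 = [0.2, 0.0] is an arbitrary example I picked so that u stays positive):

```python
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def func(y, fi):
    m = 4
    M = 5
    return [y[1], m/M**2 - y[0]]

y0 = [0.2, 0.0]               # example initial state [u(0), du/dfi(0)]
ts = np.linspace(0, 15, 150)
ys = odeint(func, y0, ts)

u = ys[:, 0]                  # first column: u(fi)
r = 1.0/u                     # radius of the orbit, r(fi) = 1/u

plt.plot(ts, r)
plt.xlabel("fi")
plt.ylabel("r(fi)")
plt.show()
```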
I'm trying to solve a system of coupled, first-order ODEs in Python. I'm new to this, but the Zombie Apocalypse example from SciPy.org has been a great help so far.
An important difference in my case is that the input data used to "drive" my system of ODEs changes abruptly at various time points and I'm not sure how best to deal with this. The code below is the simplest example I can think of to illustrate my problem. I appreciate this example has a straightforward analytical solution, but my actual system of ODEs is more complicated, which is why I'm trying to understand the basics of numerical methods.
Simplified example
Consider a bucket with a hole in the bottom (this kind of "linear reservoir" is the basic building block of many hydrological models). The input flow rate to the bucket is R and the output from the hole is Q. Q is assumed to be proportional to the volume of water in the bucket, V. The constant of proportionality is usually written as 1/T, where T is the "residence time" of the store. This gives a simple ODE of the form dQ/dt = (R - Q)/T.
In reality, R is an observed time series of daily rainfall totals. Within each day, the rainfall rate is assumed to be constant, but between days the rate changes abruptly (i.e. R is a discontinuous function of time). I'm trying to understand the implications of this for solving my ODEs.
Strategy 1
The most obvious strategy (to me at least) is to apply SciPy's odeint function separately within each rainfall time step. This means I can treat R as a constant. Something like this:
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sn
from scipy.integrate import odeint

np.random.seed(seed=17)

def f(y, t, R_t):
    """ Function to integrate.
    """
    # Unpack parameters
    Q_t = y[0]

    # ODE to solve
    dQ_dt = (R_t - Q_t)/T

    return dQ_dt

# #############################################################################
# User input
T = 10      # Time constant (days)
Q0 = 0.     # Initial condition for outflow rate (mm/day)
days = 300  # Number of days to simulate
# #############################################################################

# Create a fake daily time series for R
# Generate random values from uniform dist
df = pd.DataFrame({'R':np.random.uniform(low=0, high=5, size=days+20)},
                  index=range(days+20))

# Smooth with a moving window to make more sensible
df['R'] = df['R'].rolling(window=20).mean()

# Chop off the NoData at the start due to moving window
df = df[20:].reset_index(drop=True)

# List to store results
Q_vals = []

# Vector of initial conditions
y0 = [Q0, ]

# Loop over each day in the R dataset
for step in range(days):
    # We want to find the value of Q at the end of this time step
    t = [0, 1]

    # Get R for this step
    R_t = float(df['R'].iloc[step])

    # Solve the ODEs
    soln = odeint(f, y0, t, args=(R_t,))

    # Extract flow at end of step from soln
    Q = soln[1, 0]

    # Append result
    Q_vals.append(Q)

    # Update initial condition for next step
    y0 = [Q, ]

# Add results to df
df['Q'] = Q_vals
Strategy 2
The second approach involves simply feeding everything to odeint and letting it deal with the discontinuities. Using the same parameters and R values as above:
def f(y, t):
    """ Function to integrate.
    """
    # Unpack incremental values for S and D
    Q_t = y[0]

    # Get the value for R at this t
    idx = df.index.get_indexer([t], method='ffill')[0]
    R_t = df['R'].iloc[idx]

    # ODE to solve
    dQ_dt = (R_t - Q_t)/T

    return dQ_dt

# Vector of initial parameter values
y0 = [Q0, ]

# Time grid
t = np.arange(0, days, 1)

# solve the ODEs
soln = odeint(f, y0, t)

# Add result to df
df['Q'] = soln[:, 0]
Both of these approaches give identical answers.
However, the second strategy, although more compact in terms of code, is much slower than the first. I guess this has something to do with the discontinuities in R causing problems for odeint?
My questions
Is strategy 1 the best approach here, or is there a better way?
Is strategy 2 a bad idea and why is it so slow?
Thank you!
1.) Yes
2.) Yes
Reason for both: Runge-Kutta solvers expect ODE functions that have an order of differentiability at least as high as the order of the solver. This is needed so that the Taylor expansion which gives the expected error term exists. This means that even the order-1 Euler method expects a differentiable ODE function. Thus no jumps are allowed; kinks can be tolerated at order 1, but not by higher-order solvers.
This is especially true for implementations with automatic step size adaptations. Whenever a point is approached where the differentiation order is not satisfied, the solver sees a stiff system and drives the step-size toward 0, which leads to a slowdown of the solver.
You can combine strategies 1 and 2 if you use a solver with a fixed step size, where the step size is a fraction of 1 day. Then the sampling points at the day boundaries serve as (implicit) restart points with the new constant, as in the sketch below.
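As a minimal sketch of that combined approach (my own illustration under the same model as the question; rk4_step and simulate are made-up helper names, and 10 sub-steps per day is an arbitrary choice):

```python
# Fixed-step classical RK4 whose step is a fraction of a day, so every jump
# in R lands exactly on a step boundary.
import numpy as np

def rk4_step(f, y, t, h, *args):
    """One classical Runge-Kutta step of size h."""
    k1 = f(y, t, *args)
    k2 = f(y + 0.5*h*k1, t + 0.5*h, *args)
    k3 = f(y + 0.5*h*k2, t + 0.5*h, *args)
    k4 = f(y + h*k3, t + h, *args)
    return y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

def simulate(R_series, Q0, T, substeps=10):
    """March through the daily R series with `substeps` fixed RK4 steps per day."""
    h = 1.0/substeps
    Q = Q0
    Q_end_of_day = []
    for R_t in R_series:                   # R is constant within each day
        for i in range(substeps):
            Q = rk4_step(lambda y, t, R: (R - y)/T, Q, i*h, h, R_t)
        Q_end_of_day.append(Q)
    return np.array(Q_end_of_day)

# Hypothetical usage with the DataFrame from the question:
# df['Q'] = simulate(df['R'].values[:days], Q0=0.0, T=10.0)
```

Because the step size is fixed and divides one day exactly, there is no step-size controller to get trapped at the discontinuities, and each day effectively restarts the integration with its new constant inflow.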