Implementing a macroeconomic model in Python's GEKKO - python

This question is focused partly on economic optimisation and partly on Python implementation, but maybe someone in the community can help. I'm trying to implement a standard continuous-time macroeconomic savings model in Python's GEKKO platform, but I haven't been able to get it to solve. I've taken the economic example provided in GEKKO's documentation and adapted it to the basic savings decision model, but things are not quite working out. The model maximises the sum of utility from consumption, where consumption + investment = output, i.e. max integral(U(y - i)), with output y = k^ALPHA and investment i = dk/dt + delta*k.
Can anyone tell me why my code won't solve? Is the platform even capable of solving such a model? I haven't seen many examples of economists using this platform to solve models, but I'm not sure whether that is because the platform is unsuited to them or for some other reason. It's a great platform and I'm really keen to make it work if possible. Thank you in advance.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO()
n=501
m.time = np.linspace(0,10,n)
ALPHA,DELTA = 0.333,0.99
i = m.MV(value=0)
i.STATUS = 1
i.DCOST = 0
x = m.Var(value=20,lb=0) # capital stock
m.Equation(x.dt() == i-DELTA*x)
J = m.Var(value=0) # objective (cumulative utility)
Jf = m.FV() # final objective
Jf.STATUS = 1
m.Connection(Jf,J,pos2='end')
m.Equation(J.dt() == m.log(x**ALPHA-i))
m.Obj(-Jf) # maximize total utility
m.options.IMODE = 6 # optimal control
m.options.NODES = 3 # collocation nodes
m.options.SOLVER = 3 # solver (IPOPT)
m.solve(disp=True) # Solve

You are getting NaN in the equation dJ/dt = ln(x**ALPHA-i) because x**ALPHA - i can become non-positive, where the log is undefined. When you include the bounds 0 <= i <= 1, the solver finds a solution.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO()
n=501
m.time = np.linspace(0,10,n)
ALPHA,DELTA = 0.333,0.99
i = m.MV(value=0,lb=0,ub=1)
i.STATUS = 1
i.DCOST = 0
x = m.Var(value=20,lb=0) # capital stock
m.Equation(x.dt() == i-DELTA*x)
J = m.Var(value=0) # objective (cumulative utility)
Jf = m.FV() # final objective
Jf.STATUS = 1
m.Connection(Jf,J,pos2='end')
m.Equation(J.dt() == m.log(x**ALPHA-i))
m.Obj(-Jf) # maximize total utility
m.options.IMODE = 6 # optimal control
m.options.NODES = 3 # collocation nodes
m.options.SOLVER = 3 # solver (IPOPT)
m.solve(disp=True) # Solve
plt.subplot(2,1,1)
plt.plot(m.time,x.value)
plt.ylabel('x')
plt.subplot(2,1,2)
plt.plot(m.time,i.value)
plt.ylabel('i')
plt.show()
Instead of m.Obj() (minimize) you can use the newer functions m.Minimize() or m.Maximize() to clarify the objective function intent. For example, you could switch to m.Maximize(Jf) to make it more readable.
There are also a couple of other examples that may help with integral objectives (see solution 2) and economic dynamic optimization.
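For reference, here is a rough sketch of the integral-objective form with m.integral() (if available in your GEKKO version); it replaces the J, Jf, and m.Connection lines in the script above. The final-time indicator parameter final is an assumption used to pick out the value of the integral at the end of the horizon:
# same m, m.time, n, ALPHA, x, and i as in the script above
p = np.zeros(n)
p[-1] = 1.0
final = m.Param(value=p)   # equals 1 only at the final time point
m.Maximize(final*m.integral(m.log(x**ALPHA - i)))   # maximize the integral of utility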

Related

How to write conditions and Tolerance in GEKKO?

I need help implementing the following conditions in GEKKO Python.
In MATLAB, I have the following conditions:
if t<15
    x1 = 1e-7;
else
    x1 = 0;
end
In Python I have written the code as:
m.time = np.linspace(0,60)
t = m.Var(0)
m.Equation(t.dt()==1)
x1 = m.if2(t-15,1e-7,0)
But that didn't work. Basically, x1 is my input, and I want x1 to be available for the first 15 minutes only; after that it should be 0. Please let me know the solution to this.
2. effect = min((0.2*x17+0.8*x19)/APequil, 1)
in MATLAB.
In Python I have used the following:
effect = m.min2((0.2*x17+0.8*x19)/APequil, 1)
Please check whether this is okay, as removing min2 does not affect my solution.
In MATLAB I have used
options = odeset('InitialStep',0.0001,'RelTol',1e-09)
How do I use the same settings in GEKKO Python? I have a successful solution in MATLAB, but the same output is not achieved in Python; I think it may be due to this tolerance setting.
Use a list of values to give different values based on the time or position.
x1 = m.Param([1e-7 if i<15 else 0 for i in range(101)])
Use a slack variable s to clip the value of effect at an upper bound of 1. This is more efficient than using the if2() or if3() function.
effect = m.Var(ub=1)
s = m.Var(lb=0)
m.Minimize(s)
m.Equation(effect==x1*3e7-s)
The tolerance can be set with m.options.RTOL (equation residual tolerance) and m.options.OTOL (objective tolerance). Here is an example:
import numpy as np
from gekko import GEKKO
m = GEKKO(remote=False)
t = np.linspace(0,100,101); m.time = t
x1 = m.Param([1e-7 if i<15 else 0 for i in range(101)])
effect = m.Var(ub=1)
s = m.Var(lb=0)
m.Minimize(s)
m.Equation(effect==x1*3e7-s)
m.options.IMODE=6
m.options.RTOL = 1e-6
m.options.OTOL = 1e-6
m.solve()
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(t,x1,'k--',label='x1')
plt.legend()
plt.subplot(2,1,2)
plt.plot(t,effect,'r--',label='Effect')
plt.plot(t,s,'b.-',label='Slack')
plt.legend(); plt.xlabel('Time')
plt.show()
There are additional examples and documentation that can also help.

Why don't the parameters change when I try to implement parameter estimation in Python?

I am trying to do this with the help of this post: http://adventuresinpython.blogspot.com/2012/08/fitting-differential-equation-system-to.html, but the parameters remain the same regardless of which initial values are chosen.
#Zombie Display
# zombie apocalypse modeling
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy import integrate
from scipy.optimize import fmin
#=====================================================
#Notice we must import the Model Definition
#The Model
#=======================================================
def eq(par,initial_cond,start_t,end_t,incr):
    #-time-grid-----------------------------------
    t = np.linspace(start_t, end_t,incr)
    #differential-eq-system----------------------
    def funct(phi,t):
        S=phi[0]
        E=phi[1]
        I=phi[2]
        R=phi[3]
        dS_dt = mu*(N-S)-beta*S*I/N-nu*S
        dE_dt = beta*S*I/N-(mu+sigma)*E
        dI_dt = sigma*E-(mu+gamma)*I
        dR_dt = gamma*I-mu*R+nu*S
        dphi_dt = [dS_dt,dE_dt,dI_dt,dR_dt]
        return dphi_dt
    #integrate------------------------------------
    ds = integrate.odeint(funct,initial_cond,t)
    return (ds[:,0],ds[:,1],ds[:,2],ds[:,3],t)
#=======================================================
#=====================================================
#1.Get Data
#====================================================
Td=np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])#time
Zd=np.array([274,326,547,639,2000,2700,4400,6000,7700,9700,11200,14300,17200,19200,20000])#zombie pop
#====================================================
#2.Set up Info for Model System
#===================================================
# model parameters
#----------------------------------------------------
beta=1.4
gamma=0.06
sigma=0.15
mu=0
nu=0
rates=(beta,gamma,sigma,mu,nu)
# model initial conditions
#---------------------------------------------------
N=11*pow(10,6)
S0 = N-1 # initial population
E0 = 105.1 # initial zombie population
I0 = 27.679 # initial death population
R0= 2.0
y0 = [S0, E0, I0, R0] # initial condition vector
# model steps
#---------------------------------------------------
start_time=0.0
end_time=140
intervals=5.0
mt=np.linspace(start_time,end_time,intervals)
# model index to compare to data
#----------------------------------------------------
findindex=lambda x:np.where(mt>=x)[0][0]
mindex=list(map(findindex,Td))
#=======================================================
#3.Score Fit of System
#=========================================================
def score(parms):
    #a.Get Solution to system
    F0,F1,F2,F3,T=eq(parms,y0,start_time,end_time,intervals)
    #b.Pick off Model Points to Compare
    Zm=F1[mindex]
    #c.Score Difference between model and data points
    ss=lambda data,model:((data-model)**2).sum()
    return ss(Zd,Zm)
#========================================================
#=========================================================
#4.Optimize Fit
#=======================================================
fit_score=score(rates)
answ=fmin(score,(rates),full_output=1,maxiter=1000000)
bestrates=answ[0]
bestscore=answ[1]
beta, gamma, sigma, mu, nu=answ[0]
newrates=(beta,gamma,sigma,mu,nu)
#=======================================================
#5.Generate Solution to System
#=======================================================
F0,F1,F2,F3,T=eq(newrates,y0,start_time,end_time,intervals)
Zm=F1[mindex]
Tm=T[mindex]
#======================================================
#6. Plot Solution to System
#=========================================================
plt.figure()
plt.plot(T,F1,'b-',Tm,Zm,'ro',Td,Zd,'go')
plt.xlabel('days')
plt.ylabel('population')
title='Zombie Apocalypse Fit Score: '+str(bestscore)
plt.title(title)
plt.show()
I know this is huge, but if somebody is an expert in parameter estimation for differential equation systems, I would be glad to hear any information. Thanks a lot!
You do not see any change in the parameters because you never use them: the score function is constant. While you pass parms on to eq, inside eq you never unpack the par argument.
Add this as the first line of eq:
beta,gamma,sigma,mu,nu = par
and the minimization algorithm does something non-trivial. It is still possible that no solution will be found in reasonable time. Set maxiter to something more reasonable. If the method then fails, it may also be a matter of the problem formulation. Perhaps the initial conditions also need to be optimization variables, or the ODE system is not really suitable.
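For concreteness, here is a sketch of the corrected eq (only the unpacking line and its comment are new; the rest is the code from the question, which uses numpy, scipy.integrate and the global N defined later in the script), together with an illustrative, smaller maxiter value (2000 is an arbitrary choice):
def eq(par,initial_cond,start_t,end_t,incr):
    # unpack the parameter vector so the values proposed by fmin reach the ODE right-hand side
    beta,gamma,sigma,mu,nu = par
    #-time-grid-----------------------------------
    t = np.linspace(start_t, end_t,incr)
    #differential-eq-system----------------------
    def funct(phi,t):
        S=phi[0]
        E=phi[1]
        I=phi[2]
        R=phi[3]
        dS_dt = mu*(N-S)-beta*S*I/N-nu*S
        dE_dt = beta*S*I/N-(mu+sigma)*E
        dI_dt = sigma*E-(mu+gamma)*I
        dR_dt = gamma*I-mu*R+nu*S
        dphi_dt = [dS_dt,dE_dt,dI_dt,dR_dt]
        return dphi_dt
    #integrate------------------------------------
    ds = integrate.odeint(funct,initial_cond,t)
    return (ds[:,0],ds[:,1],ds[:,2],ds[:,3],t)
# an illustrative, more modest iteration limit for fmin
answ = fmin(score,rates,full_output=1,maxiter=2000)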

Using m.CV vs m.Var

I'm optimizing a tubular column design using GEKKO Python. I experimented with using the variable types m.SV and m.CV in place of m.Var, and there was no apparent effect on the solver or the results. What purpose do these different variable types serve?
I've included my model below.
from gekko import GEKKO
m = GEKKO()
#%% Constants
pi = m.Const(3.14159,'pi')
P = 2300 # compressive load (kg_f)
o_y = 450 # yield stress (kg_f/cm^2)
E = 0.65e6 # elasticity (kg_f/cm^2)
p = 0.0020 # weight density (kg_f/cm^3)
l = 300 # length of the column (cm)
#%% Variables
d = m.CV(value=8.0,lb=2.0,ub=14.0) # mean diameter (cm)
t = m.SV(value=0.3,lb=0.2,ub=0.8) # thickness (cm)
cost = m.Var()
#%% Intermediates
d_i = m.Intermediate(d - t)
d_o = m.Intermediate(d + t)
W = m.Intermediate(p*l*pi*(d_o**2 - d_i**2)/4) # weight (kgf)
o_i = m.Intermediate(P/(pi*d*t)) # induced stress
# second moment of area of the cross section of the column
I = m.Intermediate((pi/64)*(d_o**4 - d_i**4))
# buckling stress (Euler buckling load/cross-sectional area)
o_b = m.Intermediate((pi**2*E*I/l**2)*(1/(pi*d*t)))
#%% Equations
m.Equations([
    o_i - o_y <= 0,
    o_i - o_b <= 0,
    cost == 5*W + 2*d
])
#%% Objective
m.Obj(cost)
#%% Solve and print solution
m.options.SOLVER = 1
m.solve()
print('Optimal cost: ' + str(cost[0]))
print('Optimal mean diameter: ' + str(d[0]))
print('Optimal thickness: ' + str(t[0]))
Variables
Variables are values that are adjusted by the solver to satisfy an equation or determine the best outcome among many options. There is typically at least one variable for every equation. To avoid over-specification, a simulation often has equal numbers of equations and variables. For optimization problems, there are typically more variables than equations. The extra variables are changed to minimize or maximize an objective function. There is more information on these objects in the Gekko documentation and APMonitor documentation.
x = m.Var(5) # declare a variable with initial condition
There are also "special" types of variables that perform certain functions. For example, additional equations are added to the model for variables that have a measurement for data reconciliation. To avoid adding these extra equations for all variables, the measurement equations are only added for those designated as Controlled Variables (CVs). State Variables (SVs) may also be measured are typically designated as such just for monitoring purposes.
State Variables (SVs)
States are model variables that may be measured or are of special interest for observation. For time-varying simulations, the SVs change over the time horizon to satisfy equation feasibility.
x = m.SV() # state variable
Controlled Variables (CVs)
Controlled variables are model variables that are included in the objective of a controller or optimizer. These variables are controlled to a range, maximized, or minimized. Controlled variables may also be measured values that are included for data reconciliation. For time-varying simulations, the CVs change over the time horizon to satisfy the equations and minimize the objective function.
x = m.CV() # controlled variable
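As a rough sketch of the measurement mechanism described above (the one-equation model and the numbers are illustrative assumptions, not part of the question), a CV with FSTATUS=1 and a MEAS value adds a reconciliation objective that an estimated FV can be adjusted to satisfy:
from gekko import GEKKO
m = GEKKO(remote=False)
u = m.FV(value=2.0)      # unknown gain, estimated from data
u.STATUS = 1             # allow the solver to adjust it
y = m.CV(value=1.0)      # CV: measurement equations are added for this variable
y.FSTATUS = 1            # use the measurement
y.MEAS = 1.2             # measured value to reconcile against
m.Equation(y == 0.5*u)
m.options.IMODE = 2      # steady-state estimation mode
m.solve(disp=False)
print(u.value[0], y.value[0])   # u is adjusted so that y moves toward the measurement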
Example Application
There is documentation for options for the different variable and parameter types (FV, MV, SV, CV). Below is a Model Predictive Control Application that shows the use of a Manipulated Variable and Controlled Variable.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO()
m.time = np.linspace(0,20,41)
# Parameters
mass = 500
b = m.Param(value=50)
K = m.Param(value=0.8)
# Manipulated variable
p = m.MV(value=0, lb=0, ub=100)
p.STATUS = 1 # allow optimizer to change
p.DCOST = 0.1 # smooth out gas pedal movement
p.DMAX = 20 # slow down change of gas pedal
# Controlled Variable
v = m.CV(value=0)
v.STATUS = 1 # add the SP to the objective
m.options.CV_TYPE = 2 # squared error
v.SP = 40 # set point
v.TR_INIT = 1 # set point trajectory
v.TAU = 5 # time constant of trajectory
# Process model
m.Equation(mass*v.dt() == -v*b + K*b*p)
m.options.IMODE = 6 # control
m.solve(disp=False)
# get additional solution information
import json
with open(m.path+'//results.json') as f:
    results = json.load(f)
plt.figure()
plt.subplot(2,1,1)
plt.plot(m.time,p.value,'b-',label='MV Optimized')
plt.legend()
plt.ylabel('Input')
plt.subplot(2,1,2)
plt.plot(m.time,results['v1.tr'],'k-',label='Reference Trajectory')
plt.plot(m.time,v.value,'r--',label='CV Response')
plt.ylabel('Output')
plt.xlabel('Time')
plt.legend(loc='best')
plt.show()

Avoiding divergent solutions with odeint? shooting method

I am trying to solve an equation in Python. Basically what I want to do is to solve the equation:
(1/x^2)*d(Gam*dL/dx)/dx + (a^2*x^2/Gam - m^2)*L = 0
This is the Klein-Gordon equation for a massive scalar field in a Schwarzschild spacetime. We suppose that m is known and Gam = x^2 - 2*x. The initial/boundary conditions that I know are L(2+epsilon) = 1 and L(infty) = 0. Notice that the asymptotic behaviour of the equation is
L(x-->infty) --> Exp[sqrt(m^2-a^2)*x]/x and Exp[-sqrt(m^2-a^2)*x]/x
Then, if a^2 > m^2 we will have oscillatory solutions, while if a^2 < m^2 we will have a divergent and a decaying solution.
What I am interested in is the decaying solution. However, when I transform the equation above into a system of first-order differential equations and use the shooting method to find the value of "a" that gives the behaviour I am interested in, I always end up with a divergent solution. I suppose this happens because odeint always finds the divergent asymptotic solution. Is there a way to avoid this, or to tell odeint that I am interested in the decaying solution? If not, do you know a way I could solve this problem, maybe using another method for solving my system of differential equations? If so, which method?
Basically what I am doing is adding a new system of equations for "a",
d^2a/dx^2 = 0, da/dx(2+epsilon) = 0, a(2+epsilon) = a_0,
so that "a" is treated as a constant. Then I consider different values of "a_0" and check whether my boundary conditions are fulfilled.
Thanks for your time. Regards,
Luis P.
I am incorporating the value at infinity by using the asymptotic behaviour, which gives a relation between the field and its derivative. I will post the code in case it is helpful:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from math import *
from scipy.integrate import ode
These are the initial conditions for Schwarzschild. The field is invariant under rescaling, so I can use $L(2+\epsilon)=1$:
def init_sch(u_sch):
    om = u_sch[0]
    return np.array([1,0,om,0]) #conditions near the horizon, [L_c,dL/dx,a,da/dx]
This is our system of equations:
def F_sch(IC,r,rho_c,m,lam,l,j=0,mu=0):
    L = IC[0]
    ph = IC[1]
    om = IC[2]
    b = IC[3]
    Gam_sch=r**2.-2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph+L*(l*(l+1.))-om**2.*r**4.*L/Gam_sch+(m**2.+lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.
    return [dR_dr,dph_dr,dom_dr,db_dr]
Then I try different values of "om" and check whether my boundary conditions are fulfilled. p_sch contains the parameters of my model. In general what I want to do is a little more complicated and will need more parameters than the purely massive case; however, I need to start with the simplest case, which is what I am asking about here.
p_sch = (1,1,0,0) #[rho_c,m,lam,l], lam and l are for a more complicated case
ep = 0.2
ep_r = 0.01
r_end = 500
n_r = 500000
n_omega = 1000
omega = np.linspace(p_sch[1]-ep,p_sch[1],n_omega)
r = np.linspace(2+ep_r,r_end,n_r)
tol = 0.01
a = 0
for j in range(len(omega)):
    print('trying with $omega =$',omega[j])
    omeg = [omega[j]]
    ini = init_sch(omeg)
    Y = odeint(F_sch,ini,r,p_sch,mxstep=50000000)
    print(Y[-1,0])
    #Here I ask if my asymptotic behavior is fulfilled or not. This should be basically my value at infinity
    if abs(Y[-1,0]*((p_sch[1]**2.-Y[-1,2]**2.)**(1/2.)+1./(r[-1]))+Y[-1,1]) < tol:
        print(j,'times iterations in omega')
        print("R'(inf)) = ", Y[-1,0])
        print("\omega",omega[j])
        omega_1 = [omega[j]]
        a = 10
        break
    if a > 1:
        break
Basically what I want to do here is solve the system of equations for different initial conditions and find a value of "a" (or "om" in the code) that comes close to satisfying my boundary conditions. I need this because I can then pass that value as an initial guess to a secant method and try to find a better value of "a". However, whenever I run this code I get divergent solutions, which is of course the behaviour I am not interested in. I am trying the same thing with scipy.integrate.solve_bvp, but when I run the following code:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from math import *
from scipy.integrate import solve_bvp
def bc(ya,yb,p_sch):
    m = p_sch[1]
    om = p_sch[4]
    tol_s = p_sch[5]
    r_end = p_sch[6]
    return np.array([ya[0]-1,yb[0]-tol_s,ya[1],yb[1]+((m**2-yb[2]**2)**(1/2)+1/r_end)*yb[0],ya[2]-om,yb[2]-om,ya[3],yb[3]])
def fun(r,y,p_sch):
    rho_c = p_sch[0]
    m = p_sch[1]
    lam = p_sch[2]
    l = p_sch[3]
    L = y[0]
    ph = y[1]
    om = y[2]
    b = y[3]
    Gam_sch=r**2.-2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph+L*(l*(l+1.))-om**2.*r**4.*L/Gam_sch+(m**2.+lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.*y[3]
    return np.vstack((dR_dr,dph_dr,dom_dr,db_dr))
eps_r=0.01
r_end = 500
n_r = 50000
r = np.linspace(2+eps_r,r_end,n_r)
y = np.zeros((4,r.size))
y[0]=1
tol_s = 0.0001
p_sch= (1,1,0,0,0.8,tol_s,r_end)
sol = solve_bvp(fun,bc, r, y, p_sch)
I am obtaining this error: ValueError: bc return is expected to have shape (11,), but actually has (8,).

solving system of coupled odes with odeint

I'm using a system of ODEs to model coffee bean roasting for a class assignment. The equations are implemented in the code below.
The parameters (other than X_b and T_b) are all constants.
When I try to use odeint to solve this system, it gives constant T_b and X_b profiles (which conceptually doesn't make sense).
Below is the code I'm using
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
# Write function for bean temperature T_b differential equation
def deriv(X,t):
    T_b, X_b = X
    dX_b = (-4.32*10**9*X_b**2)/(l_b**2)*np.exp(-9889/T_b)
    dT_b = ((h_gb*A_gb*(T_gi - T_b))+(m_b*A_arh*np.exp(-H_a/R_g/T_b))+
            (m_b*lam*dX_b))/(m_b*(1.099+0.0070*T_b+5*X_b)*1000)
    return [dT_b, dX_b]
# Establish initial conditions
t = 0 #seconds
T_b = 298 # degrees K
X_b = 0.1 # mass fraction of moisture
# Set time step
dt = 1 # second
# Establish location to store data
history = [[t,T_b, X_b]]
# Use odeint to solve DE
while t < 600:
    T_b, X_b = odeint(deriv, [T_b, X_b], [t+dt])[-1]
    t += dt
    history.append([t,T_b, X_b])
# Plot Results
def plot_history(history, labels):
    """Plots a simulation history."""
    history = np.array(history)
    t = history[:,0]
    n = len(labels) - 1
    plt.figure(figsize=(8,1.95*n))
    for k in range(0,n):
        plt.subplot(n, 1, k+1)
        plt.plot(t, history[:,k+1])
        plt.title(labels[k+1])
        plt.xlabel(labels[0])
        plt.grid()
    plt.tight_layout()
plot_history(history, ['t (s)','Bean Temperature $T_b$ (K)', 'Bean Moisture Content $X_b$'])
plt.show()
Do you have any ideas why the integration step isn't working?
Thank You!!
You're repeatedly solving the system of equations for only a single timepoint.
From the odeint documentation, the odeint command takes an argument t which is:
A sequence of time points for which to solve for y. The initial value point should be the first element of this sequence.
Since you pass [t+dt] to odeint, there is only a single timepoint so you get back only a single value which is simply your initial condition.
The correct way to use odeint is similar to the following:
output = odeint(deriv, [T_b, X_b], np.linspace(0,600,600))
Here output, again according to the documentation, is:
Array containing the value of y for each desired time in t, with the initial value y0 in the first row.
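Applied to the model in the question, a minimal sketch looks something like this (it assumes the constants l_b, h_gb, A_gb, T_gi, m_b, A_arh, H_a, R_g and lam are defined as in the full assignment, and reuses deriv and plot_history from the question):
t = np.linspace(0, 600, 601)            # one-second grid over the 600 s roast
sol = odeint(deriv, [298, 0.1], t)      # columns are T_b and X_b
history = np.column_stack((t, sol))     # same layout as the history list in the question
plot_history(history, ['t (s)','Bean Temperature $T_b$ (K)', 'Bean Moisture Content $X_b$'])
plt.show()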
