I have a fitting task where I am using GEKKO.
There are a lot of variables, arrays of variables, some variables that must contain arrays, and so on.
I didn't have success with the fitting, so I need to verify, step by step, all the parameters I am providing to GEKKO and all the calculated intermediate values.
Is there a way to print out the values of each variable for debugging purposes?
Or to view the values of the variables in line-by-line execution?
For example, I have an array stored as a variable ro:
phi = model.Intermediate( c * ro) # phase shift
where c is some constant defined somewhere above in the model definition.
How can I view the values inside phi that will be used in the next steps?
I need to view/save all the values of all variables/constants/intermediates used during model creation, before attempting to solve. Is that possible?
Turn up the DIAGLEVEL to 2 or higher to produce diagnostic files in the run directory m.path.
from gekko import GEKKO
m = GEKKO(remote=False)
c = 2
x = m.Param(3,name='x')
ro = m.Var(value=4,lb=0,ub=10,name='ro')
y = m.Var()
phi = m.Intermediate(c*ro,name='phi')
m.Equation(y==phi**2+x)
m.Maximize(y)
m.options.SOLVER = 1
m.options.DIAGLEVEL=2
m.open_folder()
m.solve()
Here is a summary of the diagnostic files that are produced:

Variables, Equations, Jacobian, Lagrange Multipliers, Objective:
  apm_eqn.txt
  apm_jac.txt
  apm_jac_fv.txt
  apm_lam.txt
  apm_lbt.txt
  apm_obj.txt
  apm_obj_grad.txt
  apm_var.txt

Solver Output and Options:
  APOPT.out
  apopt_current_options.opt

Model File:
  gk_model0.apm

Data File:
  gk_model0.csv

Options Files:
  gk_model0.dbs
  options.json

Specification File for FV, MV, SV, CV:
  gk_model0.info

Inputs to the Model:
  dbs_read.rpt
  input_defaults.dbs
  input_gk_model0.dbs
  input_measurements.dbs
  input_overrides.dbs
  measurements.dbs

Results:
  rto.t0
  results.csv
  results.json
  gk_model0_r_2022y12m04d08h12m28.509s.t0

Initialization Steps Before Solve:
  rto_1.t0
  rto_2.t0
  rto_3.t0
  rto_3_eqn.txt
  rto_3_eqn_var.txt
  rto_3_var.t0

Reports After Solve:
  rto_4.t0
  rto_4_eqn.txt
  rto_4_eqn_var.txt
  rto_4_var.t0
The files of interest for you are likely the rto* initialization files. The name changes based on the IMODE that you run; for your application, a Model Parameter Update with IMODE=2, the prefix is mpu*.
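As a complement, once m.solve() has run, the values of every named variable and intermediate can also be inspected directly in Python, either through the .value attribute (e.g. print(phi.value)) or by loading results.json from the run folder. A small sketch (my own helper, not part of GEKKO):
import json, os
# dump everything that GEKKO wrote to results.json in the run folder m.path
with open(os.path.join(m.path, 'results.json')) as f:
    results = json.load(f)
for name, vals in results.items():
    print(name, vals)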
I have written the following two functions to calibrate a model:
The main function is:
def function_Price(para,y,t,T,tau,N,C):
    # y = price array
    # C = auto- and cross-correlation array
    # a = parameters that need to be calibrated
    a = para[0:]
    temp = 0
    for j in range(N):
        price_j = a[j]*C[j]*P[t:T-tau,j]
        temp = temp + price_j
    Price = temp
    return Price
The objective function is:
def GError_function_Price(para,y,k,t,T,tau,N,C):
    # k is the price that needs to be fitted
    return sum((function_Price(para,y,t,T,tau,N,C)-k[t+tau:T]) ** 2)
Now, I am calling these two functions to do the optimization of the model:
import numpy as np
from scipy.optimize import minimize
# Prices (example)
y = np.array([[1,2,3,4,5,4], [4,5,6,7,8,9], [6,7,8,7,8,6], [13,14,15,11,12,19]])
# Correlation (example)
Corr = np.array([[1,2,3,4,5,4], [4,5,6,7,8,9], [6,7,8,7,8,6], [13,14,15,11,12,19],[1,2,3,4,5,4],[6,7,8,7,8,6]])
# Define
tau = 1
Size = y.shape
N = Size[1]
T = Size[0]
t = 0
# Initial values
para = np.zeros(N)
# Bounds
B = np.zeros(shape=(N,2))
for n in range(N):
    B[n][0] = float('-inf')
    B[n][1] = float('inf')
# Calibration
A = np.zeros(shape=(N,N))
for i in range(N):
    k = y[:,i]   # fitted one
    C = Corr[i,:]
    parag = minimize(GError_function_Price(para,y,k,t,T,tau,N,C), para, method='SLSQP', bounds=B)
    A[i,:] = parag.x
Once I run the model, it should produce an N-by-N array of optimized parameter values. But except for the first column, the rest stays at zero. Something is wrong.
Can you help me fix the problem, please?
I know how to do it in Matlab.
The following is the Matlab code:
The main function:
function Price=function_Price(para,P,t,T,tau,N,C)
    a=para(:,:);
    temp=0;
    for j=1:N
        price_j = a(j).*C(j).*P(t:T-tau,j);
        temp=temp+price_j;
    end
    Price=temp;
end
The objective function:
function gerr=GError_function_Price(para,P,Y,t,T,tau,N,C)
    gerr=sum((function_Price(para,P,t,T,tau,N,C)-Y(t+tau:T)).^2);
end
Now, I call these two functions in the following way:
P = [1,2,3,4,5,4;4,5,6,7,8,9;6,7,8,7,8,6;13,14,15,11,12,19];
AutoAndCrossCorr= [1,2,3,4,5,4;4,5,6,7,8,9;6,7,8,7,8,6;13,14,15,11,12,19;1,2,3,4,5,4;6,7,8,7,8,6];
tau=1;
Size = size(P);
N =6;
T =4;
t=1;
for i=1:N
    Y=P(:,i); % fitted one
    C=AutoAndCrossCorr(i,:);
    para=zeros(1,N);
    lb = repmat(-inf,N,1);
    ub = repmat(inf,N,1);
    parag=fminsearchbnd(@(para)abs(GError_function_Price(para,P,Y,t,T,tau,N,C)),para,lb,ub);
    a(i,:)=parag;
end
The problem seems to be that you're passing the result of a function call to minimize, rather than the function itself. The extra arguments are passed via the args parameter. So instead of:
minimize(GError_function_Price(para,y,k,t,T,tau,N,C),para,method='SLSQP',bounds=B)
the following should work:
minimize(GError_function_Price,para,args=(y,k,t,T,tau,N,C),method='SLSQP',bounds=B)
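To illustrate the difference on a small self-contained problem (my own example, not the asker's data): calling the function evaluates it once and hands minimize a plain float, which it cannot call, whereas passing the function object plus args lets minimize re-evaluate it at every trial point.
import numpy as np
from scipy.optimize import minimize

def sse(p, x, y):
    # sum of squared errors for a straight-line fit y = p[0]*x + p[1]
    return np.sum((p[0]*x + p[1] - y)**2)

x = np.array([0., 1., 2., 3.])
y = np.array([1., 3., 5., 7.])
p0 = np.zeros(2)

# wrong: sse(...) is evaluated immediately and a float is passed to minimize
# res = minimize(sse(p0, x, y), p0)   # TypeError: the float is not callable

# right: pass the function itself; extra arguments go through args
res = minimize(sse, p0, args=(x, y), method='SLSQP')
print(res.x)   # approximately [2., 1.]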
I'm trying to use statsmodels glm with constraints and am unable to get (what I think should be) a simple requirement to work...
import numpy as np
import pandas as pd
from statsmodels.tools import add_constant
from statsmodels.formula.api import glm
test = np.array([[10,29,30],[32,26,23],[34,39,46]])
exog = add_constant(test)
y = np.array([11,23,27])
formula = 'y ~ x + q'
formula_df = pd.DataFrame({'y' : y,'x':test[:,0],'q':test[:,1],'r':test[:,2],'int':exog[:,3]})
glmm = glm(formula=formula,data=formula_df)
glmm_results = glmm.fit_constrained(constraints='q = 0')
print(glmm_results.summary())
From the docs for fit_constrained:
The constraints are of the form R params = q where R is the constraint_matrix and q is the vector of constraint_values.
The estimation creates a new model with transformed design matrix, exog, and converts the results back to the original parameterization.
Parameters
constraints : formula expression or tuple
    If it is a tuple, then the constraint needs to be given by two arrays (constraint_matrix, constraint_value), i.e. (R, q). Otherwise, the constraints can be given as strings or a list of strings. See t_test for details.
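For reference, the same equality constraint can also be supplied in the (R, q) matrix form described above. A minimal sketch (the parameter order ['Intercept', 'x', 'q'] is an assumption; check glmm.exog_names for the actual order):
import numpy as np
R = np.array([[0., 0., 1.]])   # selects the coefficient on q
q = np.array([0.])             # ... and constrains it to equal 0
glmm_results = glmm.fit_constrained(constraints=(R, q))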
However, if I try to simply ensure that q is > 0 by changing the constraint to:
glm_results = glmm.fit_constrained(constraints='q > 0')
then I get an unrecognized token error:
PatsyError: unrecognized token in constraint
q > 0
^
I've tried '>>' as well. There isn't much documentation I can find for statsmodels and writing constraints beyond what I copy/pasted above. How do I get this to work?
An alternate question would be: how do I write the four simplest types of constraints on coefficients in the (R, q) format (x = 0, x < 0, x > 0, a < x < b)?
I'm a newbie with Gekko and want to use it for my linear programming problems.
I have variable names, costs, and minimum and maximum bounds in separate dictionaries (my_vars, Cost, Min, and Max) with the variable names as their keys. The objective is to minimize total cost while determining the amounts of the variables that satisfy the constraints.
I did as below:
LP = GEKKO(remote=False)
vars = LP.Array(LP.Var, (len(my_vars)))
i = 0
for xi in vars:
    xi.lower = Min[list(my_vars)[i]]
    xi.upper = Max[list(my_vars)[i]]
    i += 1
Here I'd like to use the variables' original names instead of xi; is there any way?
It continues as:
LP.Minimize(sum(float(Cost[list(my_vars)[i]])*vars[i] for i in range(len(my_vars))))
LP.Equation(sum(vars) == 100)
I also have the constraints' left-hand-side (LHS) coefficients and right-hand-side (RHS) numbers in two pandas DataFrames, and would like to construct the equations using a for loop. I don't know how to do this.
Here is one way to use your dictionary values to construct the problem:
from gekko import GEKKO
# stored as list
my_vars = ['x1','x2']
# stored as dictionaries
Cost = {'x1':100,'x2':125}
Min = {'x1':0,'x2':0}
Max = {'x1':70,'x2':40}
LP = GEKKO(remote=False)
va = LP.Array(LP.Var, (len(my_vars))) # array
vd = {} # dictionary
for i,xi in enumerate(my_vars):
    vd[xi] = va[i]
    vd[xi].lower = Min[xi]
    vd[xi].upper = Max[xi]
# Cost function
LP.Minimize(LP.sum([Cost[xi]*vd[xi] for xi in my_vars]))
# Summation as an array
LP.Equation(LP.sum(va)==100)
# This also works as a dictionary
LP.Equation(LP.sum([vd[xi] for xi in my_vars])==100)
LP.solve(disp=True)
for xi in my_vars:
    print(xi,vd[xi].value[0])
print ('Cost: ' + str(LP.options.OBJFCNVAL))
This produces a solution:
EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is 10750.00174236579
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 0.012199999999999996 sec
Objective : 10750.00174236579
Successful solution
---------------------------------------------------
x1 69.999932174
x2 30.0000682
Cost: 10750.001742
Here are a few examples of efficient linear programming with Gekko by exploiting problem sparsity.
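The question also asks about building constraints from pandas DataFrames of LHS coefficients and RHS values. Continuing the example above, a minimal sketch (the DataFrame layout, the names lhs and rhs, and the <= relation are assumptions):
import pandas as pd
# hypothetical data: one row per constraint, one column per variable
lhs = pd.DataFrame({'x1':[1.0, 2.0], 'x2':[1.0, 3.0]})
rhs = pd.Series([100.0, 250.0])
for r in range(len(lhs)):
    LP.Equation(LP.sum([lhs.iloc[r][xi]*vd[xi] for xi in my_vars]) <= rhs[r])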
I'm relatively new to Python, and am encountering some issues in writing a piece of code that generates and then solves a system of differential equations.
My approach to doing this was to create a set of variables and coefficients, (x0, x1, ..., xn) and (c0, c1, ..., cn) respectively, in a list with the function var(). Then the equations are constructed in EOM1(). The initial conditions, along with the set of equations, are all put together in EOM2() and solved using odeint.
Currently the code below runs, albeit not efficiently; I believe the reason is that odeint runs through all the code on every iteration (that's something else I need to fix, but it isn't the main problem!).
import sympy as sy
from scipy.integrate import odeint
n=2
cn0list = [0.01, 0.05]
xn0list = [0.01, 0.01]
def var():
    xnlist=[]
    cnlist=[]
    for i in range(n+1):
        xnlist.append('x{0}'.format(i))
        cnlist.append('c{0}'.format(i))
    return xnlist, cnlist

def EOM1():
    drdtlist=[]
    for i in range(n):
        cn1=sy.Symbol(var()[1][i])
        xn0=sy.Symbol(var()[0][i])
        xn1=sy.Symbol(var()[0][i+1])
        eom=cn1*xn0*(1.0-xn1)-cn1*xn1-xn1
        drdtlist.append(eom)
    xi=sy.Symbol(var()[0][0])
    xf=sy.Symbol(var()[0][n])
    drdtlist[n-1]=drdtlist[n-1].subs(xf,xi)
    return drdtlist

def EOM2(xn, t, cn):
    x0, x1 = xn
    c0, c1 = cn
    f = EOM1()
    output = []
    for part in f:
        output.append(part.evalf(subs={'x0':x0, 'x1':x1, 'c0':c0, 'c1':c1}))
    return output
abserr = 1.0e-6
relerr = 1.0e-4
stoptime = 10.0
numpoints = 20
t = [stoptime * float(i) / (numpoints - 1) for i in range(numpoints)]
wsol = odeint(EOM2, xn0list, t, args=(cn0list,), atol=abserr, rtol=relerr)
My problem is that I had difficulty getting Python to treat the variables generated by Sympy appropriately. I got around this with the line
output.append(part.evalf(subs={'x0':x0, 'x1':x1, 'c0':c0, 'c1':c1}))
in EOM2(). Unfortunately, I do not know how to generalize this line to a list of variables from x0 to xn, and from c0 to cn. The same applies to the earlier line in EOM2(),
x0, x1 = xn
c0, c1 = cn
In other words, if I set n to an arbitrary number, is there a way for Python to interpret each element as it does with the ones I manually entered above? I have tried the following:
output.append(part.evalf(subs={'x{0}'.format(j):var(n)[0][j], 'c{0}'.format(j):var(n)[1][j]}))
yet this yields the error that led me to use evalf in the first place,
TypeError: can't convert expression to float
Is there any way to do what I want: generate a set of n equations which are then solved with odeint?
Instead of using evalf you want to look into using sympy.lambdify to generate a callback for use with SciPy. You will need to create a function with the expected signature of odeint, e.g.:
import sympy as sym
from scipy.integrate import odeint
# rhs, t, y0, tout and param_values are placeholders defined elsewhere
y, params = sym.symbols('y:3'), sym.symbols('kf kb')
ydot = rhs(y, p=params)
f = sym.lambdify((y, t) + params, ydot)
yout = odeint(f, y0, tout, param_values)
We gave a tutorial on (among other things) how to use lambdify with odeint at the SciPy 2017 conference; the material is available here: http://www.sympy.org/scipy-2017-codegen-tutorial/
If you are open to using an external library to handle the function signatures of external solvers, you may be interested in a library I've authored: pyodesys
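A minimal sketch of how this could look for the system in the question (my own restructuring, assuming n = 2 and the cn0list/xn0list values above; not taken from the linked tutorial):
import numpy as np
import sympy as sy
from scipy.integrate import odeint

n = 2
cn0list = [0.01, 0.05]
xn0list = [0.01, 0.01]

# symbols created with the colon notation instead of string lists
x = sy.symbols('x0:{0}'.format(n+1))
c = sy.symbols('c0:{0}'.format(n))

# same equations as EOM1, built once, with x_n substituted by x_0
exprs = [c[i]*x[i]*(1.0-x[i+1]) - c[i]*x[i+1] - x[i+1] for i in range(n)]
exprs[n-1] = exprs[n-1].subs(x[n], x[0])

# one fast numerical callback instead of calling evalf at every step
f = sy.lambdify((x[:n], c), exprs)

def rhs(xn, t, cn):
    return f(xn, cn)

t = np.linspace(0.0, 10.0, 20)
wsol = odeint(rhs, xn0list, t, args=(cn0list,), atol=1.0e-6, rtol=1.0e-4)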
If I understand correctly, you want to make an arbitrary number of substitutions in a SymPy expression. This is how it can be done:
n = 10
syms = sy.symbols('x0:{}'.format(n)) # an array of n symbols
expr = sum(syms) # some expression with those symbols
floats = [1/(j+1) for j in range(n)] # numbers to put in
expr.subs({symbol: value for symbol, value in zip(syms, floats)})
The result of subs is a float in this case (no evalf needed).
Note that the function symbols can directly create any number of symbols for you, via the colon notation. No need for a loop.
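Applied to the EOM2 function above, the hard-coded unpacking and substitution can be generalized to any n by building the dictionary in a comprehension; a sketch that still uses evalf as in the question (lambdify, as in the other answer, will be much faster):
def EOM2(xn, t, cn):
    # map every 'x{i}' and 'c{i}' name to its current numeric value
    vals = {'x{0}'.format(i): xn[i] for i in range(n)}
    vals.update({'c{0}'.format(i): cn[i] for i in range(n)})
    return [float(part.evalf(subs=vals)) for part in EOM1()]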