I'm new to gekko and want to use it for my linear programming problems.
I have variable names, costs, and minimum and maximum bounds in separate dictionaries (my_vars, Cost, Min and Max), keyed by variable name. The objective is to minimize total cost while determining the amount of each variable that satisfies the constraints.
I did the following:
LP = GEKKO(remote=False)
vars = LP.Array(LP.Var, (len(my_vars)))
i = 0
for xi in vars:
    xi.lower = Min[list(my_vars)[i]]
    xi.upper = Max[list(my_vars)[i]]
    i += 1
Here I'd like to use the original variable names instead of xi. Is there a way to do that?
It continues as:
LP.Minimize(sum(float(Cost[list(my_vars)[i]])*vars[i] for i in range(len(my_vars))))
LP.Equation(sum(vars) == 100)
I also have the constraints' left-hand-side (LHS) coefficients and right-hand-side (RHS) values in two pandas DataFrames, and I'd like to construct those equations with a for loop.
How can I do this?
Here is one way to use your dictionary values to construct the problem:
from gekko import GEKKO
# stored as list
my_vars = ['x1','x2']
# stored as dictionaries
Cost = {'x1':100,'x2':125}
Min = {'x1':0,'x2':0}
Max = {'x1':70,'x2':40}
LP = GEKKO(remote=False)
va = LP.Array(LP.Var, (len(my_vars))) # array
vd = {} # dictionary
for i,xi in enumerate(my_vars):
    vd[xi] = va[i]
    vd[xi].lower = Min[xi]
    vd[xi].upper = Max[xi]
# Cost function
LP.Minimize(LP.sum([Cost[xi]*vd[xi] for xi in my_vars]))
# Summation as an array
LP.Equation(LP.sum(va)==100)
# This also works as a dictionary
LP.Equation(LP.sum([vd[xi] for xi in my_vars])==100)
LP.solve(disp=True)
for xi in my_vars:
    print(xi, vd[xi].value[0])
print('Cost: ' + str(LP.options.OBJFCNVAL))
This produces a solution:
EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is 10750.00174236579
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 0.012199999999999996 sec
Objective : 10750.00174236579
Successful solution
---------------------------------------------------
x1 69.999932174
x2 30.0000682
Cost: 10750.001742
Here are a few examples of efficient linear programming with Gekko by exploiting problem sparsity.
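For the part of the question about building constraints from pandas DataFrames, a plain loop over the DataFrame rows works with m.Equation and m.sum. This is only a sketch; the layout (one coefficient column per variable in the LHS frame, a single 'b' column in the RHS frame) is an assumption:

import pandas as pd

# hypothetical layout: one row per constraint, one coefficient column per variable
lhs = pd.DataFrame({'x1': [1, 1], 'x2': [1, 0]})
rhs = pd.DataFrame({'b': [100, 70]})

for k in range(len(lhs)):
    LP.Equation(LP.sum([lhs.loc[k, xi]*vd[xi] for xi in my_vars]) <= rhs.loc[k, 'b'])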
Objective: To add boundary/initial conditions (BCs/ICs) to a system of ODEs
I have used the method of lines to convert a system of PDEs into a system of ODEs. The actual ODEs involve a lot of variables, so I will present simplified versions here.
For the purpose of this question, all I am interested in is the structure of the code that sets up the BCs/ICs, so I have removed any correlations and just put in constants.
For reference, the actual system is the flow of a gas through a packed bed. For terminology, I am calling the initial conditions the constant conditions along the bed initially (the bed temperature, the fluid temperature, the density of the fluid in the bed), and the boundary conditions are the mass flow into the bed and the temperature of the flow into the bed.
I have two functions. The first, df, sets up the non-IC/BC ODEs (I am generally happy with the structure of this one):
def df(t, x, *constants):
    ## the time derivative of x
    a, b, c, d = constants  ## unpack constants
    ## setting up solution array
    x = x.reshape(2, number_of_nodes)
    dxdt = np.zeros_like(x)
    ## internal nodes
    for j in range(2, 2*n, 2):  ## start at index 2 (to avoid ICs/BCs), go to 2*n and increment by 2
        ## equation 1
        dxdt[0][j] = a*(x[1][j-1] - b*x[1][j])
        ## equation 2
        dxdt[1][j] = c*(x[1][j-1] - d*x[1][j])
    return dxdt.reshape(-1)
And initialize_system, which is supposed to set up the BCs/ICs (which is currently incorrect):
def initialize_system(*constants):
    # sets the appropriate initial/boundary conditions
    a, b, c, d = constants  # unpack constants
    x0 = np.zeros((2, number_of_nodes))
    y01 = 1
    y02 = 1
    # set up initial boundary values
    # eq 1
    x0[0][0] = a*(y01 - b*x0[1][0])
    # eq 2
    x0[1][0] = c*(y02 - d*x0[1][0])
    for j in range(2, 2*n, 2):
        # eq 1
        x0[0][j] = a*(y0CO2 - b*x0[1][j])
        # eq 2
        x0[1][j] = b*(y0H2O - d*x0[1][j])
    return x0.reshape(-1)
My question is:
How can I correctly set up the ICs/BCs in initialize_system here?
Edit: There are very few examples of implementing the method of lines in Python on the internet, so I would also highly appreciate any resources on this.
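For reference, a common method-of-lines pattern (a sketch only, which may not match your physics) is that initialize_system returns the state values themselves at t = 0, not their time derivatives, while the inlet boundary values enter df through the first node's derivative. The values and names below are hypothetical:

def initialize_system():
    # ICs are the state values at t = 0 (uniform profiles here), not derivatives
    x0 = np.zeros((2, number_of_nodes))
    x0[0, :] = 1.0   # hypothetical initial value of the first state along the bed
    x0[1, :] = 1.0   # hypothetical initial value of the second state along the bed
    return x0.reshape(-1)

# inside df(): the first node uses the inlet (boundary) values in place of a
# neighbouring node, so the BCs live in the derivative function rather than in x0
# dxdt[0][0] = a*(inlet_1 - b*x[1][0])
# dxdt[1][0] = c*(inlet_2 - d*x[1][0])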
I am using GEKKO to solve an MINLP problem in Python that involves a co-simulation with PowerFactory, a power system simulation software. The MINLP objective function needs to call PowerFactory and execute particular tasks before returning values to the problem definition. In the equality constraint definition I also need to use two pandas DataFrames (tables of 10000x2 cells) to look up values corresponding to the values of the decision variables. But I get errors whenever I write loops, or pandas loc/iloc calls, inside the objective/constraint functions, because of the nature of GEKKO variables in these function calls. Any help in this regard would be much appreciated. The structure of the code is given below:
import powerfactory as pf
from gekko import GEKKO
import pandas as pd
# Execute the PF setup commands for Python
# Pandas dataframe in question
df1 = pd.read_csv("table1.csv")
df2 = pd.read_csv("table2.csv")
def constraint1(X):
    P = 0
    for i in range(len(X)):
        V = X[i] * 1000
        D = round(V, 1)
        P += df1.loc[D, "ColumnName"]
    return P

def objective(X):
    for i in range(len(X)):
        V = X[i] * 1000
        D = round(V, 1)
        I = df2.loc[D, "ColumnName"]
        # A list with the different values of 'I' is generated for passing to PF
    # Some PowerFactory based commands below, involving inputs, and extracting results
    # from PF, using the list generated from the values of 'I'. PF returns some result
    # values to the code
    return results
# MINLP problem definition:
m = GEKKO(remote=False)
x = m.Array(m.Var, nInv, value=1.0, lb=0.533, ub=1.0)
m.Equation(constraint1(x) == 30)
m.Minimize(objective(x))
m.options.SOLVER = 3
m.solve(disp=False)
# Command for exporting the results to a .txt file
In another problem formulation, I am trying to run MINLP optimization problems within the objective and constraint functions as a nested optimization problem. However, I am running into errors there as well. The structure of the code is as follows:
import powerfactory as pf
from gekko import GEKKO
# Execute the PF setup commands for Python
def constraint1(X):
    P = 0
    for i in range(len(X)):
        V = X[i] * 1000
        # 2nd MINLP problem: finds 'I' from the value of 'V', a single element
        # Calculate 'Pcal' from 'I'
        P += Pcal
    return P

def objective(X):
    Iset = []
    for i in range(len(X)):
        V = X[i] * 1000
        # 3rd MINLP problem: finds 'I' from the value of 'V'
        # Append 'I' to 'Iset'
    # 'Iset' list passed on to PF
    # Some PowerFactory based commands below, involving inputs, and extracting results
    # from PF, using the 'Iset' list. PF returns some result values to the code
    return results
# Main MINLP problem definition:
m = GEKKO(remote=False)
x = m.Array(m.Var, nInv, value=1.0, lb=0.533, ub=1.0)
m.Equation(constraint1(x) == 30)
m.Minimize(objective(x))
m.options.SOLVER = 3
m.solve(disp=False)
# Command for exporting the results to a .txt file
Gekko does not have call-backs to external functions. This is because the solvers are gradient-based, and a precursor step is automatic differentiation to obtain exact 1st and 2nd derivatives in sparse form. As with coupling Gekko and CoolProp, one option is to build an approximation of your external (in this case PowerFactory) model that the optimizer can use. Two options are the cubic spline (cspline) or the basis spline (bspline).
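For instance, if the df1 lookup maps a scalar input to a scalar output, the table can be pre-fit with a cubic spline so the solver sees a smooth, differentiable function instead of a pandas lookup. This is only a sketch; the column names 'V' and 'ColumnName' are assumptions about the CSV layout:

from gekko import GEKKO
import pandas as pd

df1 = pd.read_csv("table1.csv")     # assumed columns: 'V' (input), 'ColumnName' (output)

m = GEKKO(remote=False)
x = m.Var(value=1.0, lb=0.533, ub=1.0)
V = m.Intermediate(x * 1000)        # scaled input used to look up the table
P = m.Var()                         # spline output

# cubic spline object: P = f(V), built from the tabulated data
m.cspline(V, P, df1['V'].values, df1['ColumnName'].values)

m.Equation(P == 30)                 # constraint written on the spline output
m.Minimize(x)                       # placeholder objective
m.solve(disp=False)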
If you can't use these approximations, then I recommend switching to a different platform for the optimization problem. Perhaps you could try scipy.optimize.minimize, which can obtain gradients by finite difference, and add branch and bound to handle the mixed-integer portion.
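If you go that route, here is a rough sketch of the continuous relaxation with scipy.optimize.minimize, reusing the plain-Python constraint1 and objective from the question (which may then freely contain loops, pandas lookups, and PowerFactory calls, since gradients are estimated by finite differences):

import numpy as np
from scipy.optimize import minimize

x0 = np.ones(nInv)                    # nInv as defined in the question
bnds = [(0.533, 1.0)] * nInv
cons = ({'type': 'eq', 'fun': lambda X: constraint1(X) - 30},)

res = minimize(objective, x0, method='SLSQP',
               bounds=bnds, constraints=cons)
print(res.x, res.fun)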
TL;DR: I've been implementing a Python program to numerically solve the natural convection equations, written in terms of a particular similarity variable, using Runge-Kutta 4 and the shooting method. However, I don't get the right solutions when I plot them. Did I make a mistake somewhere?
Hi!
Starting from a special case of natural convection, we get these similarity equations (written here as they are implemented in the code below):
f''' + 3 f f'' - 2 (f')^2 + theta = 0
theta'' + 3 Pr f theta' = 0
The first describes the fluid flow, the second the heat flow. "Pr" is the Prandtl number, a dimensionless number used in fluid dynamics.
These equations are subject to boundary values such that the temperature near the plate is greater than the temperature outside the boundary layer (theta(0) = 1, theta -> 0 far from the plate) and such that the fluid velocity is zero far away from the boundary layer (f(0) = f'(0) = 0, f'(eta -> infinity) = 0).
I've been trying to solve these numerically with Runge-Kutta 4 and the shooting method, which transforms the boundary value problem into an initial value problem. The shooting method is implemented with Newton's method.
However, I don't get the right solutions.
As you can see in the plot, the temperature (in red) increases as we move away from the plate, whereas it should decrease exponentially.
The fluid velocity (in blue) is more consistent, although I think it should rise and fall faster; here the curve is smoother.
Now, the fact is that we have a system of 2 coupled ODEs. However, right now I'm only trying to find one of the two initial values (e.g. f''(0) = a, trying to find a) such that we have a solution to the boundary value problem (shooting method). Once found, I assumed we would have the solution for the whole problem.
I guess I should maybe manage the two (f''(0) = a; theta'(0) = b), but I don't know how to handle these two in parallel.
Last thing to mention: if I instead shoot for the initial value of theta' (so theta'(0)), I don't get the right heat profile.
Here is the code:
"""
The goal is to resolve a 3rd order non-linear ODE for the blasius problem.
It's made of 2 equations (flow / heat)
f''' = 3ff'' - 2(f')^2 + theta
3 Pr f theta' + theta'' = 0
RK4 + Shooting Method
"""
import numpy as np
import math
from scipy.integrate import odeint
from scipy.optimize import newton
from edo_solver.plot import plot
from constants import PRECISION
def blasius_edo(y, t, prandtl):
    f = y[0:3]
    theta = y[3:5]
    return np.array([
        # flow edo
        f[1],  # f' = df/dn
        f[2],  # f'' = d^2f/dn^2
        - 3 * f[0] * f[2] + (2 * math.pow(f[1], 2)) - theta[0],  # f''' = -3ff'' + 2(f')^2 - theta
        # heat edo
        theta[1],  # theta' = dtheta/dn
        - 3 * prandtl * f[0] * theta[1],  # theta'' = -3 Pr f theta'
    ])
def rk4(eta_range, shoot):
    prandtl = 0.01
    # initial values
    f_init = [0, 0, shoot]    # f(0), f'(0), f''(0)
    theta_init = [1, shoot]   # theta(0), theta'(0)
    ci = f_init + theta_init  # concatenate the two sets of initial conditions
    # note: a tuple with a single argument must have "," at the end of the tuple
    return odeint(func=blasius_edo, y0=ci, t=eta_range, args=(prandtl,))
"""
if we have :
f'(t_0) = fprime_t0 ; f'(eta -> infty) = fprime_inf
we can transform it into :
f'(t_0) = fprime_t0 ; f''(t_0) = a
we define the function F(a) = f'(infty ; a) - fprime_inf
if F(a) has a root in "a",
then the solutions to the initial value problem with f''(t_0) = a
is also the solution the boundary problem with f'(eta -> infty) = fprime_inf
our goal is to find the root, we have the root...we have the solution.
it can be done with bissection method or newton method.
"""
def shooting(eta_range):
    # boundary value
    fprimeinf = 0  # f'(eta -> infty) = 0
    # initial guess
    # as far as I understand, it has to be a good guess,
    # otherwise the result can be completely wrong
    initial_guess = 10  # guess for f''(0)
    # define our function to optimize
    # our goal is to take a big eta because eta should approach infty
    # [-1, 1] : last row, second column => f'(eta_final) ~ f'(eta -> infty)
    fun = lambda initial_guess: rk4(eta_range, initial_guess)[-1, 1] - fprimeinf
    # the newton method solves the ODE system until eta_final,
    # then adjusts the shoot and solves again until we have a correct shoot
    shoot = newton(func=fun, x0=initial_guess)
    # solve our system of ODEs with the good "a"
    y = rk4(eta_range, shoot)
    return y
def compute_blasius_edo(title, eta_final):
    ETA_0 = 0
    ETA_INTERVAL = 0.1
    ETA_FINAL = eta_final
    # default values
    title = title
    x_label = "$\eta$"
    y_label = "profil de vitesse $(f'(\eta))$ / profil de température $(\\theta)$"
    legends = ["$f'(\eta)$", "$\\theta$"]
    eta_range = np.arange(ETA_0, ETA_FINAL + ETA_INTERVAL, ETA_INTERVAL)
    # shoot
    y_set = shooting(eta_range)
    plot(eta_range, y_set, title, legends, x_label, y_label)

compute_blasius_edo(
    title="Convection naturelle - Solution de similitude",
    eta_final=10
)
I could be completely off base here, but I wrote something similar to solve 1D fluid-reaction-heat equations. Try using solve_ivp with the 'Radau' solver method; it helps with more difficult (stiff) systems.
Also, maybe try converting your system of ODEs to a system of first-order ODEs, as that may help.
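A minimal sketch of the solve_ivp suggestion, reusing blasius_edo from the question (note that solve_ivp expects fun(t, y), the reverse of odeint's default argument order):

from scipy.integrate import solve_ivp
import numpy as np

prandtl = 0.01
shoot = 10                          # current guess for the shot initial slope(s)
ci = [0, 0, shoot, 1, shoot]        # f(0), f'(0), f''(0), theta(0), theta'(0), as in rk4()
eta_range = np.arange(0, 10.1, 0.1)

sol = solve_ivp(lambda t, y: blasius_edo(y, t, prandtl),
                t_span=(eta_range[0], eta_range[-1]),
                y0=ci, method='Radau', t_eval=eta_range)
fprime = sol.y[1]                   # f'(eta)
theta = sol.y[3]                    # theta(eta)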
You are implementing the additional but wrong boundary condition f''(0) = theta'(0), as both slots get the same initial value in the shooting method. You need to hold them separate, giving 2 free variables and thus the need for a 2-dimensional Newton method or any other solver for non-scalar functions.
You could just as well use the solve_bvp routine with a sensible initial guess.
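As a sketch of the two-parameter shooting described above, scipy.optimize.fsolve can adjust f''(0) and theta'(0) together, reusing blasius_edo and odeint from the question; the far-field targets f'(inf) = 0 and theta(inf) = 0 are the assumed boundary conditions:

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import fsolve

def residuals(guess, eta_range, prandtl=0.01):
    a, b = guess                                   # a = f''(0), b = theta'(0), held separately
    ci = [0, 0, a, 1, b]                           # f(0), f'(0), f''(0), theta(0), theta'(0)
    y = odeint(blasius_edo, ci, eta_range, args=(prandtl,))
    # two residuals: f'(eta_final) and theta(eta_final) should both vanish
    return [y[-1, 1], y[-1, 3]]

eta_range = np.arange(0, 10.1, 0.1)
a0, b0 = fsolve(residuals, x0=[0.5, -0.5], args=(eta_range,))
y = odeint(blasius_edo, [0, 0, a0, 1, b0], eta_range, args=(0.01,))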
I'm optimizing a tubular column design using Gekko in Python. I experimented with the code, using the different variable types m.SV and m.CV in place of m.Var, and there was no apparent effect on the solver or the results. What purpose do these different variable types serve?
I've included my model below.
m = GEKKO()
#%% Constants
pi = m.Const(3.14159,'pi')
P = 2300 # compressive load (kg_f)
o_y = 450 # yield stress (kg_f/cm^2)
E = 0.65e6 # elasticity (kg_f/cm^2)
p = 0.0020 # weight density (kg_f/cm^3)
l = 300 # length of the column (cm)
#%% Variables
d = m.CV(value=8.0,lb=2.0,ub=14.0) # mean diameter (cm)
t = m.SV(value=0.3,lb=0.2,ub=0.8) # thickness (cm)
cost = m.Var()
#%% Intermediates
d_i = m.Intermediate(d - t)
d_o = m.Intermediate(d + t)
W = m.Intermediate(p*l*pi*(d_o**2 - d_i**2)/4) # weight (kgf)
o_i = m.Intermediate(P/(pi*d*t)) # induced stress
# second moment of area of the cross section of the column
I = m.Intermediate((pi/64)*(d_o**4 - d_i**4))
# buckling stress (Euler buckling load/cross-sectional area)
o_b = m.Intermediate((pi**2*E*I/l**2)*(1/(pi*d*t)))
#%% Equations
m.Equations([
o_i - o_y <= 0,
o_i - o_b <= 0,
cost == 5*W + 2*d
])
#%% Objective
m.Obj(cost)
#%% Solve and print solution
m.options.SOLVER = 1
m.solve()
print('Optimal cost: ' + str(cost[0]))
print('Optimal mean diameter: ' + str(d[0]))
print('Optimal thickness: ' + str(t[0]))
Variables
Variables are values that are adjusted by the solver to satisfy an equation or determine the best outcome among many options. There is typically at least one variable for every equation. To avoid over-specification, a simulation often has equal numbers of equations and variables. For optimization problems, there are typically more variables than equations. The extra variables are changed to minimize or maximize an objective function. There is more information on these objects in the Gekko documentation and APMonitor documentation.
x = m.Var(5) # declare a variable with initial condition
There are also "special" types of variables that perform certain functions. For example, additional equations are added to the model for variables that have a measurement for data reconciliation. To avoid adding these extra equations for all variables, the measurement equations are only added for those designated as Controlled Variables (CVs). State Variables (SVs) may also be measured are typically designated as such just for monitoring purposes.
State Variables (SVs)
States are model variables that may be measured or are of special interest for observation. For time-varying simulations, the SVs change over the time horizon to satisfy equation feasibility.
x = m.SV() # state variable
Controlled Variables (CVs)
Controlled variables are model variables that are included in the objective of a controller or optimizer. These variables are controlled to a range, maximized, or minimized. Controlled variables may also be measured values that are included for data reconciliation. For time-varying simulations, the CVs change over the time horizon to satisfy the equations and minimize the objective function.
x = m.CV() # controlled variable
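For example, the measurement and set-point machinery is activated through CV options; a minimal sketch of the commonly used ones (values are illustrative):

from gekko import GEKKO

m = GEKKO(remote=False)
x = m.CV(value=2.0)
x.STATUS = 1    # include this CV in the objective (drive it to a set point or range)
x.FSTATUS = 1   # use the measurement, which adds the data-reconciliation equation
x.MEAS = 2.5    # measured value fed into the model
x.SP = 3.0      # set point used when the CV is controlled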
Example Application
There is documentation for options for the different variable and parameter types (FV, MV, SV, CV). Below is a Model Predictive Control Application that shows the use of a Manipulated Variable and Controlled Variable.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO()
m.time = np.linspace(0,20,41)
# Parameters
mass = 500
b = m.Param(value=50)
K = m.Param(value=0.8)
# Manipulated variable
p = m.MV(value=0, lb=0, ub=100)
p.STATUS = 1 # allow optimizer to change
p.DCOST = 0.1 # smooth out gas pedal movement
p.DMAX = 20 # slow down change of gas pedal
# Controlled Variable
v = m.CV(value=0)
v.STATUS = 1 # add the SP to the objective
m.options.CV_TYPE = 2 # squared error
v.SP = 40 # set point
v.TR_INIT = 1 # set point trajectory
v.TAU = 5 # time constant of trajectory
# Process model
m.Equation(mass*v.dt() == -v*b + K*b*p)
m.options.IMODE = 6 # control
m.solve(disp=False)
# get additional solution information
import json
with open(m.path+'//results.json') as f:
results = json.load(f)
plt.figure()
plt.subplot(2,1,1)
plt.plot(m.time,p.value,'b-',label='MV Optimized')
plt.legend()
plt.ylabel('Input')
plt.subplot(2,1,2)
plt.plot(m.time,results['v1.tr'],'k-',label='Reference Trajectory')
plt.plot(m.time,v.value,'r--',label='CV Response')
plt.ylabel('Output')
plt.xlabel('Time')
plt.legend(loc='best')
plt.show()
I'm trying to solve a differential equation numerically, and am writing code that should give me an array of the solution at each time point.
import numpy as np
import matplotlib.pylab as plt
pi=np.pi
sin=np.sin
cos=np.cos
sqrt=np.sqrt
alpha=pi/4
g=9.80665
y0=0.0
theta0=0.0
sina = sin(alpha)**2
second_term = g*sin(alpha)*cos(alpha)
x0 = float(raw_input('What is the initial x in meters?'))
x_vel0 = float(raw_input('What is the initial velocity in the x direction in m/s?'))
y_vel0 = float(raw_input('what is the initial velocity in the y direction in m/s?'))
t_f = int(raw_input('What is the maximum time in seconds?'))
r0 = x0
vtan = sqrt(x_vel0**2+y_vel0**2)
dt = 1000
n = range(0,t_f)
r_n = r0*(n*dt)
r_nm1 = r0((n-1)*dt)
F_r = ((vtan**2)/r_n)*sina-second_term
r_np1 = 2*r_n - r_nm1 + dt**2 * F_r
data = [r0]
for time in n:
    data.append(float(r_np1))
print data
I'm not sure how to make the equation solve for r_np1 at each time in the range n. I'm still new to Python and would like some help understanding how to do something like this.
First issue is:
n = range(0,t_f)
r_n = r0*(n*dt)
Here you define n as a list and try to multiply the list n with the integer dt. This will not work. Pure Python is NOT a vectorized language like NumPy or Matlab where you can do vector multiplication like this. You could make this line work with
n = np.arange(0,t_f)
r_n = r0*(n*dt),
but you don't have to. Instead, you should move everything inside the for loop so the calculation is done at each timestep. As written, you do the calculation once and then append that same single result t_f times to the data list.
Of course, you have to leave your initial conditions (which are a key part of ODE solving) OUTSIDE of the loop, because they only affect the first step of the solution, not all of them.
So:
# Initial conditions
r0 = x0
data = [r0]
# Loop along timesteps
for n in range(t_f):
    # calculations performed at each timestep
    vtan = sqrt(x_vel0**2+y_vel0**2)
    dt = 1000
    r_n = r0*(n*dt)
    r_nm1 = r0*((n-1)*dt)
    F_r = ((vtan**2)/r_n)*sina-second_term
    r_np1 = 2*r_n - r_nm1 + dt**2 * F_r
    # append result to output list
    data.append(float(r_np1))
# do something with output list
print data
plt.plot(data)
plt.show()
I did not add any piece of code, only rearranged your lines. Notice that the part:
n = range(0,t_f)
for time in n:
Can be simplified to:
for time in range(0,t_f):
However, you use n as a time variable in the calculation (previously - and wrongly - defined as a list instead of a single number). Thus you can write:
for n in range(0,t_f):
Note 1: I do not know if this code is right mathematically, as I don't even know the equation you're solving. The code runs now and provides a result - you have to check if the result is good.
Note 2: Pure Python is not the best tool for this purpose. You should try some highly optimized built-ins of SciPy for ODE solving, as you have already got hints in the comments.
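As a sketch of that suggestion: assuming the equation being integrated is r'' = (vtan^2/r)*sin^2(alpha) - g*sin(alpha)*cos(alpha) (the F_r expression above) with an assumed initial radial velocity of zero, scipy.integrate.solve_ivp handles the time stepping directly:

from scipy.integrate import solve_ivp

def rhs(t, y):
    r, v = y                                    # radial position and radial velocity
    F_r = (vtan**2 / r)*sina - second_term      # same force expression as above
    return [v, F_r]

sol = solve_ivp(rhs, (0, t_f), [r0, 0.0])       # [initial position, assumed zero velocity]
plt.plot(sol.t, sol.y[0])
plt.show()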