Step by step guide on how to run Gekko optimization locally - python

I am new to programming and my first Python project is nonlinear programming. I am using the Gekko Optimization Suite and I have everything running properly, but need guidance on how exactly to run it locally. Below is the explanation and code provided by the documentation, but I could use some help on how exactly to do it myself and what exactly it all means. Please act as if you are explaining to a small child or a golden retriever.
The run directory m.path contains the model file gk0_model.apm and
other files required to run the optimization problem either remotely
(default) or locally (m=GEKKO(remote=False)). Use m.open_folder() to
open the run directory. The run directory also contains diagnostic
files such as infeasibilities.txt, which is produced if the solver fails
to find a solution. The default run directory can be changed:
from gekko import GEKKO
import numpy as np
import os
# create and change run directory
rd=r'.\RunDir'
if not os.path.isdir(os.path.abspath(rd)):
    os.mkdir(os.path.abspath(rd))
m = GEKKO(remote=False) # solve locally
m.path = os.path.abspath(rd) # change run directory

Local Solve
m=GEKKO(remote=False)
The only option needed to run Gekko locally is remote=False. With remote=False, no Internet connection is required, and there is no need to change the run directory: a default directory at m.path is created in a temporary folder to store the files that are compiled to byte code. This folder can be accessed with m.open_folder().
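For orientation, here is a bare-bones sketch of that workflow; the tiny model is only a placeholder so there is something to solve, and the expected result is noted in the comments:
from gekko import GEKKO

m = GEKKO(remote=False)   # solve on this machine; no Internet connection needed
x = m.Var(value=1)        # placeholder variable with an initial guess of 1
m.Equation(x**2 == 4)     # placeholder equation
m.solve(disp=False)       # model files are written to the temporary run directory m.path
print(x.value[0])         # expected: 2.0 (the root nearest the initial guess)
# m.open_folder()         # optional: open m.path to inspect gk0_model.apm and the other files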
Local Solve with Intranet Server
m=GEKKO(remote=True,server='http://10.0.0.10')
There is an APMonitor server option (see Windows APMonitor Server or Linux APMonitor Server) for remote=True and server=http://10.0.0.10 (change this to the IP of your local Intranet server). The server acts as a local compute engine that runs control and optimization problems for microprocessors. It is useful for compute architectures that do not have sufficient memory or CPU power to solve the optimization problems themselves, but where it is desirable to keep the solution local. This is an edge-compute option for completing the solution within the required cycle time (e.g. for a Model Predictive Controller). Some organizations use this option to have multiple clients connect to one compute engine; that compute server can then be upgraded so that all of the clients automatically use the updated version.
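A minimal client-side sketch, assuming an APMonitor server is actually reachable at the placeholder address used above (substitute the IP of your own Intranet server):
from gekko import GEKKO

# the model is built locally but compiled and solved on the Intranet APMonitor server
m = GEKKO(remote=True, server='http://10.0.0.10')  # placeholder IP from the text above
x = m.Var(value=1, lb=0, ub=10)
m.Minimize((x - 3)**2)   # toy objective, just to have something to solve
m.solve(disp=False)
print(x.value[0])        # expected: 3.0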
Remote Solve
m=GEKKO(remote=True)
A public server is available as the default server option. With remote=True (the default), Gekko sends the optimization problem to a remote server and then returns the solution. The public server runs a Linux APMonitor Server, but with additional solver options that are not included in the local options due to distribution constraints.
Example Gekko and Scipy Optimize Minimize Solutions
GEKKO is a Python package for machine learning and optimization of mixed-integer and differential algebraic equations (see documentation). It is coupled with large-scale solvers for linear, quadratic, nonlinear, and mixed integer programming (LP, QP, NLP, MILP, MINLP). Modes of operation include parameter regression, data reconciliation, real-time optimization, dynamic simulation, and nonlinear predictive control. GEKKO is an object-oriented Python library to facilitate local execution of APMonitor. Below is a simple optimization example with remote=False (local solution mode). There are local options for MacOS, Windows, Linux, and Linux ARM. Other architectures need to use the remote=True option.
Python Gekko
from gekko import GEKKO
m = GEKKO(remote=False)
x = m.Array(m.Var,4,value=1,lb=1,ub=5)
x1,x2,x3,x4 = x
# change initial values
x2.value = 5; x3.value = 5
m.Equation(x1*x2*x3*x4>=25)
m.Equation(x1**2+x2**2+x3**2+x4**2==40)
m.Minimize(x1*x4*(x1+x2+x3)+x3)
m.solve()
print('x: ', x)
print('Objective: ',m.options.OBJFCNVAL)
Scipy Optimize Minimize
import numpy as np
from scipy.optimize import minimize
def objective(x):
    return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]

def constraint1(x):
    return x[0]*x[1]*x[2]*x[3]-25.0

def constraint2(x):
    sum_eq = 40.0
    for i in range(4):
        sum_eq = sum_eq - x[i]**2
    return sum_eq
# initial guesses
n = 4
x0 = np.zeros(n)
x0[0] = 1.0
x0[1] = 5.0
x0[2] = 5.0
x0[3] = 1.0
# show initial objective
print('Initial Objective: ' + str(objective(x0)))
# optimize
b = (1.0,5.0)
bnds = (b, b, b, b)
con1 = {'type': 'ineq', 'fun': constraint1}
con2 = {'type': 'eq', 'fun': constraint2}
cons = ([con1,con2])
solution = minimize(objective,x0,method='SLSQP',\
bounds=bnds,constraints=cons)
x = solution.x
# show final objective
print('Final Objective: ' + str(objective(x)))
# print solution
print('Solution')
print('x1 = ' + str(x[0]))
print('x2 = ' + str(x[1]))
print('x3 = ' + str(x[2]))
print('x4 = ' + str(x[3]))
Additional Examples
There are many other optimization packages in Python, as well as additional Gekko tutorials and benchmark problems. One more example is a Mixed Integer Linear Programming (MILP) solution.
from gekko import GEKKO
m = GEKKO(remote=False)
x,y = m.Array(m.Var,2,integer=True,lb=0)
m.Maximize(y)
m.Equations([-x+y<=1,
3*x+2*y<=12,
2*x+3*y<=12])
m.options.SOLVER = 1
m.solve()
print('Objective: ', -m.options.OBJFCNVAL)
print('x: ', x.value[0])
print('y: ', y.value[0])
The APOPT solver is a Mixed Integer Nonlinear Programming (MINLP) solver (that also solves MILP problems) and is included as a local solver for MacOS, Linux, and Windows.
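For instance, a small MINLP sketch (a made-up toy model, not taken from the examples above) that combines an integer variable with a nonlinear objective and selects APOPT locally:
from gekko import GEKKO

m = GEKKO(remote=False)
x = m.Var(lb=0, ub=10, integer=True)   # integer decision variable
y = m.Var(lb=0, ub=10)                 # continuous decision variable
m.Equation(x + y >= 3.7)
m.Minimize(x**2 + y**2)                # nonlinear objective makes this an MINLP
m.options.SOLVER = 1                   # 1 = APOPT, the local MINLP solver
m.solve(disp=False)
print('x:', x.value[0], 'y:', y.value[0])   # expected: x=2, y=1.7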

Related

GEKKO "Memory allocation failed"

I'm trying to use GEKKO to solve quite a large optimization problem locally (with remote=False).
When running the code, I get the error:
Error: At line 463 of file custom.f90
Traceback: not available, compile with -ftrace=frame or -ftrace=full
Operating system error: Not enough memory resources are available to process this command.
Memory allocation failed
So that hints that the operating system doesn't let GEKKO use enough memory.
However, I'm using a 32 GB RAM machine with nearly 25 GB free, while the model probably doesn't even need 10 GB.
I've tried using m.options.MAX_MEMORY = 10, but this doesn't seem to matter.
Any thoughts on how to allow it to allocate more memory?
Here is some (simplified) code that triggers this error:
from gekko import GEKKO
quantiles = [(x+1)*.01 for x in range(300)]
#Initialize Model
m = GEKKO(remote=False)
#Set global options
m.options.IMODE = 3 #steady state optimization
m.options.SOLVER=3
m.options.MAX_ITER=100000
m.options.MAX_MEMORY = 10
m.options.REDUCE=10
#initialize variables
Est_array = m.Array(m.Var,(2, 16),value=1,lb=0,ub=48)
P_ij_t = m.Array(m.Var,(4, 16, 300), lb=0, ub=1)
Exp_ij_t = m.Array(m.Var,(4, 16, 300),value=1,lb=-36,ub=36)
C_t = m.Array(m.Var,300,lb=0,ub=5)
#Equations
for h in range(16):
    for q in range(300):
        m.Equation(m.sum([P_ij_t[i,h,q] for i in range(3)]) == 1)
for (q,t) in enumerate(quantiles):
    m.Equation(C_t[q] == (m.sum([P_ij_t[i+2,h,q]*(Est_array[i,h]-t)**2 for i in range(2) for h in range(16)]) +
                          m.sum([P_ij_t[i,h,q]*(Est_array[1-i,15-h]-t)**2 for i in range(2) for h in range(16)])))
#Objective
m.Minimize(C_t[0])
#Solve simulation
#m.open_folder()
m.solve()
#Results
print('C = ' + str(C_t[0].value[0]))
(All of the m.options.* parameters are things that I tried to get the solver to run, but none seem to help with the memory allocation problem).
With Gekko v1.0.2 and remote=False, the Windows binary is 32-bit, while the Linux, MacOS, and ARM Linux binaries are 64-bit executables. With remote=True, the problem runs on a Linux server that has 64 GB of RAM and uses a 64-bit executable. The model is hitting a memory limit with the local Windows binary, which can address at most about 4 GB of RAM because it is a 32-bit executable; the 64-bit executables can address roughly 16 billion GB, so there is effectively no limit. A 64-bit local Windows executable is planned for a future release. For those who need to solve large problems on a local network with a Windows Gekko client, a Linux VM or an APMonitor Linux server (such as the example host IP 10.0.0.10) are options.
m = GEKKO(remote=True, server='https://10.0.0.10')
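For clarity, both remote alternatives are just a change to the constructor (the addresses are placeholders, and the public server is used automatically when no server argument is given); no other changes to the model code are needed:
from gekko import GEKKO

# Option 1: the public server (64-bit Linux executable) - the default remote mode
m = GEKKO(remote=True)

# Option 2: an APMonitor server on the local network, as in the line above
# m = GEKKO(remote=True, server='http://10.0.0.10')   # placeholder IP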

DCPError in CVXPY when multiplying a scalar parameter with a scalar variable

When a scalar parameter is multiplied with a variable, CVXPY outputs the DCPError
DCPError: Product of two non-constant expressions is not DCP.
but
problem.is_dcp() returns True
Here is the code to reproduce this error.
import cvxpy as cp
import numpy as np
lam = cp.Variable(pos=True)
lam_min = cp.Parameter(pos=True, value=1.0)
g = cp.Parameter(pos=True, value=1.0)
constraints = [
lam >= lam_min,
]
objective_fn = g*lam
problem = cp.Problem(cp.Minimize(objective_fn), constraints)
print(problem.is_dcp())
problem.solve()
It is solving the following simple linear problem in one dimension
minimize g*lam
subject to lam >= lam_min
The question is
Why does the solver say that the problem is DCP and at the same time output a DCPError?
Update following sascha's comment
I was running the script from Spyder on Ubuntu. Following sascha's comment that the snippet was generating no error for them, I ran the script inside a terminal and indeed there is no error!
Why is there this error when the script is run within Spyder but not in a terminal?

Implementation of MINLP solver "apopt" in Pyomo

I have a mixed-integer nonlinear problem in Pyomo with an objective function and several constraints consisting of non-linear terms and binary variables.
The popular solver "ipopt" finds a solution, but it treats the binary variables as continuous variables.
opt=SolverFactory("ipopt")
results=opt.solve(instance)
results.write()
instance.load(results)
I have now tried, somewhat desperately, two solvers that can handle mixed-integer non-linear problems.
First I tried the MindtPy solver (https://pyomo.readthedocs.io/en/stable/contributed_packages/mindtpy.html), unfortunately without success:
I always get the error message "type NoneType doesn't define round method". This surprises me, because the ipopt solver finds a solution without problems, and the MindtPy solver is a combination of a linear solver and a non-linear solver, so it should actually be able to handle this.
opt=SolverFactory('mindtpy').solve(instance, mip_solver="glpk", nlp_solver="ipopt", tee=True)
results=opt.solve(instance)
results.write()
instance.load(results)
Then I tried the APOPT solver. It has to be downloaded separately from https://github.com/APMonitor/apopt and all of its files placed into the working directory.
Then I tried to execute the following code, unfortunately without success:
opt=SolverFactory("apopt.py")
results=opt.solve(instance)
results.write()
instance.load(results)
I always get the following error message: "Error message: [WinError 193] %1 is not a valid Win32 application". This is probably because my Python interpreter expects an apopt.exe, since I am on a Windows machine. Attempts such as converting the .py to an .exe file have failed. Also, specifying SolverFactory(..., executable="C\Users\Python...\\apopt.py") separately did not work.
Does anyone have an idea how to get the solver "apopt" and/or the solver "Mindtpy" to work and can do something with the error messages?
Thank you very much in advance!
Edit:
Here is an exemplary and simple concrete model. I have tried to translate it into easier code. As I've already said, the ipopt solver finds a solution:
model = pyo.ConcreteModel()
model.x = pyo.Var([1,2,3,4], domain=pyo.NonNegativeReals)
model.x = pyo.Var([5], domain=pyo.Binary)
model.OBJ = pyo.Objective(expr = 2*model.x[1] + 3*model.x[2] + 3*model.x[3] + 4*model.x[4])
model.Constraint1 = pyo.Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1)
model.Constraint2 = pyo.Constraint(expr = 3*model.x[3] + 4*model.x[4] >= 1)
model.Constraint3 =pyo.Constraint(expr = 1000*cos(model.x[3]) < 1000)
model. Constraint4=pyo.Constraint(expr = 1000*sin(model.x[4]) < 1000)
model.Constraint5=pyo.Constraint(expr = model.x[2] <= 10000*(1-model.x[5])
model.Constraint6= pyo.Constraint (expr=model.x[2] <= 10000*(model.x[5]))
Try adding the path to apopt.py to the PATH variable. The apopt.py program acts like an executable: it takes the model.nl file as an argument and produces a sol solution file that is then processed to retrieve the solution. Unlike other solvers in AMPL or Pyomo, APOPT computes remotely on a public server. Here are additional instructions on running APOPT.
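A hedged sketch of that call pattern in Pyomo, assuming apopt.py is on the system PATH and marked executable; the toy model here is made up purely so the snippet is self-contained, and whether Pyomo resolves the script this way depends on the platform, as the WinError above shows:
import pyomo.environ as pyo

# toy model, just so there is something to solve
model = pyo.ConcreteModel()
model.x = pyo.Var(domain=pyo.NonNegativeIntegers, bounds=(0, 10))
model.obj = pyo.Objective(expr=(model.x - 2.6)**2, sense=pyo.minimize)

# Pyomo writes a model .nl file; apopt.py sends it to the APOPT server and
# returns a .sol file, which Pyomo then reads to recover the solution
opt = pyo.SolverFactory('apopt.py')
results = opt.solve(model, tee=True)
results.write()
print(pyo.value(model.x))   # expected: 3 (the integer nearest 2.6)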
APOPT Solver
APOPT (for Advanced Process OPTimizer) is a software package for solving large-scale optimization problems of any of these forms:
Linear programming (LP)
Quadratic programming (QP)
Quadratically constrained quadratic program (QCQP)
Nonlinear programming (NLP)
Mixed integer programming (MIP)
Mixed integer linear programming (MILP)
Mixed integer nonlinear programming (MINLP)
Applications of APOPT include chemical reactors, friction stir welding, prevention of hydrate formation in deep-sea pipelines, computational biology, solid oxide fuel cells, and flight controls for Unmanned Aerial Vehicles (UAVs). APOPT is supported in AMPL, APMonitor, Gekko, and Pyomo.
The APOPT online solver for mixed-integer nonlinear programming reads output from AMPL, Pyomo, or other NL file writers. Similar to other solvers, the apopt.py script reads the model (NL) file and produces a solution (sol) file. It sends the NL file to a remote server, computes a solution remotely, and retrieves the solution (sol) file through an Internet connection. It communicates with the server http://byu.apopt.com that hosts the APOPT solver. Contact support#apmonitor.com for support, especially if there is a feature request or a concern about a problem solution.
Instructions for usage:
Place apopt.py in an appropriate folder in the system path (e.g. Linux, /usr/bin/)
Set appropriate permissions to make the script executable (e.g. chmod 775 apopt.py)
In AMPL, Pyomo, or another NL file writer, set the solver option to apopt.py
Test installation by running apopt.py -test
Visit apopt.com for additional information and solver option help
Information on the APOPT solver with references can be found at the Wikipedia article for APOPT. APOPT has integration with Gekko and can run locally with m=GEKKO(remote=False).
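As a sketch of the Gekko route, here is a hypothetical translation of the small model from the question above (with the strict inequalities written as non-strict), solved locally with APOPT:
from gekko import GEKKO

m = GEKKO(remote=False)
x1, x2, x3, x4 = [m.Var(lb=0) for _ in range(4)]
y = m.Var(lb=0, ub=1, integer=True)          # the binary variable
m.Minimize(2*x1 + 3*x2 + 3*x3 + 4*x4)
m.Equations([3*x1 + 4*x2 >= 1,
             3*x3 + 4*x4 >= 1,
             1000*m.cos(x3) <= 1000,
             1000*m.sin(x4) <= 1000,
             x2 <= 10000*(1 - y),
             x2 <= 10000*y])
m.options.SOLVER = 1                         # APOPT handles the binary variable locally
m.solve(disp=False)
print(m.options.OBJFCNVAL)                   # expected: about 1.67, matching the MindtPy log below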
"type NoneType doesn't define round method"
You should (almost) never use a round() function in your MINLP model. It is not needed either. Instead, use an integer variable, like in:
x-0.5 <= y <= x+0.5
x continuous variable
y integer variable
The reason why round() is really, really bad is that it is non-differentiable and not continuous. Almost all NLP and MINLP solvers assume smooth functions (sometimes it is useful to read the documentation).
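A minimal sketch of that linking constraint in Pyomo; the model and objective here are made up purely to illustrate the construct, and the MindtPy call matches the one shown below:
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0, 10))                          # continuous quantity
m.y = pyo.Var(domain=pyo.Integers, bounds=(0, 10))     # "rounded" value as an integer variable
# y is constrained to be an integer within 0.5 of x: the effect of rounding, without round()
m.lo = pyo.Constraint(expr=m.x - 0.5 <= m.y)
m.hi = pyo.Constraint(expr=m.y <= m.x + 0.5)
m.obj = pyo.Objective(expr=(m.x - 3.3)**2, sense=pyo.minimize)   # toy objective
pyo.SolverFactory('mindtpy').solve(m, mip_solver='cbc', nlp_solver='ipopt')
print(pyo.value(m.x), pyo.value(m.y))                  # expected: x = 3.3, y = 3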
After fixing your model (quite a few problems with it), I could not reproduce the error message about round().
D:\tmp>type pyom1.py
import pyomo.environ as pyo
model = pyo.ConcreteModel()
model.x = pyo.Var([1,2,3,4], domain=pyo.NonNegativeReals)
model.y = pyo.Var(domain=pyo.Binary)
model.OBJ = pyo.Objective(expr = 2*model.x[1] + 3*model.x[2] + 3*model.x[3] + 4*model.x[4])
model.Constraint1 = pyo.Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1)
model.Constraint2 = pyo.Constraint(expr = 3*model.x[3] + 4*model.x[4] >= 1)
model.Constraint3 = pyo.Constraint(expr = 1000*pyo.cos(model.x[3]) <= 1000)
model.Constraint4 = pyo.Constraint(expr = 1000*pyo.sin(model.x[4]) <= 1000)
model.Constraint5 = pyo.Constraint(expr = model.x[2] <= 10000*(1-model.y))
model.Constraint6 = pyo.Constraint (expr=model.x[2] <= 10000*(model.y))
pyo.SolverFactory('mindtpy').solve(model, mip_solver='cbc', nlp_solver='ipopt', tee=True)
D:\tmp>python.exe pyom1.py
INFO: ---Starting MindtPy---
INFO: Original model has 6 constraints (2 nonlinear) and 0 disjunctions, with
5 variables, of which 1 are binary, 0 are integer, and 4 are continuous.
INFO: rNLP is the initial strategy being used.
INFO: NLP 1: Solve relaxed integrality
INFO: NLP 1: OBJ: 1.666666661289117 LB: -inf UB: inf
INFO: ---MindtPy Master Iteration 0---
INFO: MIP 1: Solve master problem.
INFO: MIP 1: OBJ: 1.6666666499999998 LB: 1.6666666499999998 UB: inf
INFO: NLP 2: Solve subproblem for fixed binaries.
INFO: NLP 2: OBJ: 1.6666666716089886 LB: 1.6666666499999998 UB:
1.6666666716089886
INFO: MindtPy exiting on bound convergence. LB: 1.6666666499999998 + (tol
0.0001) >= UB: 1.6666666716089886
D:\tmp>

How to solve a non-linear constrained problem in Python using l1 minimization

I am currently working on an optimization problem which involves a non-linear constraint. The problem is as follows:
I need to perform either of the two minimizations shown in the image in Python. I found the scipy library, which has an optimize.minimize() function, but I am unable to fit in the non-linear constraint using scipy.optimize.NonlinearConstraint. Can anyone guide me on how to solve this? Is there also a way to solve it using a homotopy function from any of the libraries? I have tried (adding the constraint for the alternative formulation) as:
con = lambda A,x,y : np.matmul(A,x) - y
nlc = NonlinearConstraint(con, 0, epsilon)
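For reference, scipy expects the constraint callable to be a function of the decision vector alone, with the data captured from the enclosing scope. Below is a minimal sketch of the ||Ax - y||_2 <= epsilon formulation with made-up data (A, y, epsilon, and the starting point are placeholders, not the original problem); note that the non-smooth l1 objective is exactly why a dedicated solver such as spgl1, used in the answer below, tends to work better than scipy's smooth NLP methods:
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# made-up problem data for illustration only
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))
y = A @ np.concatenate(([1.0, -2.0, 0.5], np.zeros(47)))
epsilon = 1e-3

# the constraint callable takes only x; A, y, epsilon come from the enclosing scope
nlc = NonlinearConstraint(lambda x: np.linalg.norm(A @ x - y), 0, epsilon)

x0 = np.linalg.pinv(A) @ y                       # pseudoinverse start, as in the spgl1 call below
res = minimize(lambda x: np.linalg.norm(x, 1),   # l1 objective (non-smooth)
               x0, method='trust-constr', constraints=[nlc])
print(res.status, round(res.fun, 4))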
So, I finally could solve the above optimization by using the spgl1 solver in Python. It can be used as shown below:
from spgl1 import spgl1, spg_bp, spg_bpdn
#x01 = psueodinverse of A multiplied by y
x,resid,grad,info = spgl1(A, y, tau = tau, sigma = epsilon, x0 = x01, iter_lim = 150)
One can read more about the above solver at its GitHub or PyPI pages.

Long Vector Linear Programming in R?

Hello and thanks in advance. Fresh off the heels of this question, I acquired some more RAM and now have enough memory to fit all the matrices I need to run a linear programming solver. Now the problem is that none of the linear programming packages in R seem to support long vectors (i.e. large matrices).
I've tried functions Rsymphony_solve_LP, Rglpk_solve_LP and lp from packages Rsymphony, Rglpk, and lpSolve respectively. All report a similar error to the one below:
Error in rbind(const.mat, const.dir.num, const.rhs) :
long vectors not supported yet: bind.c:1544
I also have my code below in case that helps... The constraint matrix mat is my big matrix (7062 rows by 364520 columns) created using the package bigmemory. When I run the line below, the matrix is pulled into memory and then after a while the errors show.
Rsymph <- Rsymphony_solve_LP(obj,mat[1:nrow(mat),1:ncol(mat)],dir,rhs,types=types,max=max, write_lp=T)
I'm guessing this is a hard-coded error in each of the three functions? Is there currently a linear programming solver in R or even Python that supports long vectors? Should I contact the package maintainers or just edit the code myself? Thanks!
The package lpSolveAPI can solve long-vector linear programming problems. You first have to declare a linear programming object and then add the constraints:
library(lpSolveAPI)
#Generate Linear Programming Object
lprec <- make.lp(nrow = 0,          # Number of Constraints
                 ncol = ncol(mat))  # Number of Decision Variables
#Set Objective Function to Minimize
set.objfn(lprec, obj)
#Add Constraints
#Note Direction and RHS is included along with Constraint Value
for (i in 1:nrow(mat)) {
  add.constraint(lprec, mat[i, ], dir[i], rhs[i])
  print(i)
}
#Set Decision Variable Type
set.type(lprec, c(1:ncol(mat)), type = c("binary"))
#Solve Model
solve(lprec)
#Obtain Solutions
get.total.iter(lprec)
get.objective(lprec)
get.variables(lprec)
There's a good introduction to this package here.
