I currently have a solved (integer) LP problem which contains, amongst others, the following mathematical constraint, written here as pseudocode.
Packages_T1 + Packages_T2 + Packages_T3 + RPackages = 25
It represents three package trucks (T1, T2 and T3) to each of which packages can be assigned plus a residual/spilled package variable which is used in the objective function. The current value of 25 represents the total package demand.
Let's say I want to re-solve this problem but change the current demand of 25 packages to 35 packages. When I warm start from the previous solution with 25 packages, CPLEX errors out, stating that the provided solution is infeasible, which makes perfect sense. However, it subsequently fails to repair the previous solution, even though the most straightforward way to do this would be to "up" the RPackages variable for each of these constraints.
My question is whether there is any possibility to still use the information from the previously solved problem as a warm start to the new one. Is there a way to, for example, drop all RPackages from the solution and have them recalculated to fit the new constraint right-hand side? A "last resort" effort I thought of would be to manually recalculate all these RPackages values myself and replace them in the old solution but a more automated solution to this problem would be preferred. I am using the standard CPLEX Python API for reference.
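For concreteness, the manual "last resort" repair I have in mind would look something like the sketch below (the helper `repair_start` and the numeric values are just illustrative, not part of my actual code):

```python
# Manual warm-start repair: keep the truck assignments from the old solution
# and recompute the residual variable so the constraint holds for the new
# right-hand side. Variable names follow the pseudocode constraint above.
def repair_start(assigned, new_demand):
    """assigned: dict mapping truck variable name -> value from the old solution."""
    spill = new_demand - sum(assigned.values())
    if spill < 0:
        raise ValueError("truck assignments already exceed the new demand")
    start = dict(assigned)
    start["RPackages"] = spill
    return start

old = {"Packages_T1": 10, "Packages_T2": 8, "Packages_T3": 7}  # sums to 25
print(repair_start(old, 35))  # RPackages becomes 10
```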
Thank you in advance.
Even if the warm start is not feasible, CPLEX can use some of its information.
Let me use the zoo example from
https://www.linkedin.com/pulse/making-optimization-simple-python-alex-fleischer/
from docplex.mp.model import Model
mdl = Model(name='buses')
nbbus40 = mdl.integer_var(name='nbBus40')
nbbus30 = mdl.integer_var(name='nbBus30')
mdl.add_constraint(nbbus40*40 + nbbus30*30 >= 300, 'kids')
mdl.minimize(nbbus40*500 + nbbus30*400)
warmstart=mdl.new_solution()
warmstart.add_var_value(nbbus40,4)
warmstart.add_var_value(nbbus30,0)
mdl.add_mip_start(warmstart)
sol=mdl.solve(log_output=True)
for v in mdl.iter_integer_vars():
    print(v, " = ", v.solution_value)
gives
Warning: No solution found from 1 MIP starts.
Retaining values of one MIP start for possible repair.
I have a mixed-integer nonlinear problem in Pyomo, with an objective function and several constraints consisting of nonlinear terms and binary variables.
The popular solver "ipopt" finds a solution, but it treats the binary variables as continuous variables.
opt=SolverFactory("ipopt")
results=opt.solve(instance)
results.write()
instance.load(results)
I have already tried, rather desperately, two solvers that can handle mixed-integer nonlinear problems.
1) First I tried the MindtPy solver (https://pyomo.readthedocs.io/en/stable/contributed_packages/mindtpy.html), unfortunately without success:
I always get the error message "type NoneType doesn't define round method". This surprises me, because the ipopt solver finds a solution without problems, and MindtPy combines a linear solver with a nonlinear solver, so it should actually be able to handle this.
opt=SolverFactory('mindtpy').solve(instance, mip_solver="glpk", nlp_solver="ipopt", tee=True)
results=opt.solve(instance)
results.write()
instance.load(results)
2) Then I tried the APOPT solver. You have to download it separately from https://github.com/APMonitor/apopt and put all of its files into the working directory.
Then I tried to execute the following code, unfortunately without success:
opt=SolverFactory("apopt.py")
results=opt.solve(instance)
results.write()
instance.load(results)
I always get the following error message: "Error message: [WinError 193] %1 is not a valid Win32 application". This is probably related to the fact that my Python interpreter requires an apopt.exe, since I am on a Windows machine. Attempts such as converting the .py file to an .exe file have failed. Specifying the executable separately via SolverFactory(..., executable="C:\\Users\\Python...\\apopt.py") did not work either.
Does anyone have an idea how to get the solver "apopt" and/or the solver "Mindtpy" to work and can do something with the error messages?
Thank you very much in advance!
Edit:
Here is an exemplary and simple concrete model. I have tried to translate it into easier code. As I've already said, the ipopt solver finds a solution:
model = pyo.ConcreteModel()
model.x = pyo.Var([1,2,3,4], domain=pyo.NonNegativeReals)
model.x = pyo.Var([5], domain=pyo.Binary)
model.OBJ = pyo.Objective(expr = 2*model.x[1] + 3*model.x[2] + 3*model.x[3] + 4*model.x[4])
model.Constraint1 = pyo.Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1)
model.Constraint2 = pyo.Constraint(expr = 3*model.x[3] + 4*model.x[4] >= 1)
model.Constraint3 =pyo.Constraint(expr = 1000*cos(model.x[3]) < 1000)
model. Constraint4=pyo.Constraint(expr = 1000*sin(model.x[4]) < 1000)
model.Constraint5=pyo.Constraint(expr = model.x[2] <= 10000*(1-model.x[5])
model.Constraint6= pyo.Constraint (expr=model.x[2] <= 10000*(model.x[5]))
Try adding the folder containing apopt.py to the PATH variable. The apopt.py script acts like an executable: it takes the model .nl file as an argument and produces a .sol solution file that is then processed to retrieve the solution. Unlike most other solvers used from AMPL or Pyomo, APOPT computes remotely on a public server. Here are additional instructions on running APOPT.
APOPT Solver
APOPT (for Advanced Process OPTimizer) is a software package for solving large-scale optimization problems of any of these forms:
Linear programming (LP)
Quadratic programming (QP)
Quadratically constrained quadratic program (QCQP)
Nonlinear programming (NLP)
Mixed integer programming (MIP)
Mixed integer linear programming (MILP)
Mixed integer nonlinear programming (MINLP)
Applications of APOPT include chemical reactors, friction stir welding, prevention of hydrate formation in deep-sea pipelines, computational biology, solid oxide fuel cells, and flight controls for Unmanned Aerial Vehicles (UAVs). APOPT is supported in AMPL, APMonitor, Gekko, and Pyomo.
APOPT is an online solver for mixed-integer nonlinear programming that reads output from AMPL, Pyomo, or other NL file writers. Like other solvers, the script reads the model (.nl) file and produces a solution (.sol) file: it sends the .nl file to a remote server, computes a solution remotely, and retrieves the .sol file over an internet connection. It communicates with the server http://byu.apopt.com that hosts the APOPT solver. Contact support@apmonitor.com for support, especially if there is a feature request or a concern about a problem solution.
Instructions for usage:
Place apopt.py in an appropriate folder in the system path (e.g. Linux, /usr/bin/)
Set appropriate permissions to make the script executable (e.g. chmod 775 apopt.py)
In AMPL, Pyomo, or another NL file writer, set the solver option to apopt.py
Test installation by running apopt.py -test
Visit apopt.com for additional information and solver option help
Information on the APOPT solver with references can be found at the Wikipedia article for APOPT. APOPT has integration with Gekko and can run locally with m=GEKKO(remote=False).
"type NoneType doesn't define round method"
You should (almost) never use a round() function in your MINLP model. It is not needed either. Instead, use an integer variable, like in:
x-0.5 <= y <= x+0.5
x continuous variable
y integer variable
The reason round() is really, really bad is that it is non-differentiable and discontinuous. Almost all NLP and MINLP solvers assume smooth functions (sometimes it is useful to read the documentation).
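A quick pure-Python check of why the constraint pair works: for a continuous x, the integers y allowed by x - 0.5 <= y <= x + 0.5 collapse to round(x), up to ties at .5 (the helper below is just for illustration):

```python
import math

# Enumerate the integers y satisfying x - 0.5 <= y <= x + 0.5. Away from ties
# at .5 there is exactly one such integer, namely round(x), so the constraint
# pair encodes rounding smoothly instead of via the non-differentiable round().
def integers_in_band(x):
    lo, hi = math.ceil(x - 0.5), math.floor(x + 0.5)
    return list(range(lo, hi + 1))

print(integers_in_band(2.3))  # [2]
print(integers_in_band(2.7))  # [3]
```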
After fixing your model (quite a few problems with it), I could not reproduce the error message about round().
D:\tmp>type pyom1.py
import pyomo.environ as pyo
model = pyo.ConcreteModel()
model.x = pyo.Var([1,2,3,4], domain=pyo.NonNegativeReals)
model.y = pyo.Var(domain=pyo.Binary)
model.OBJ = pyo.Objective(expr = 2*model.x[1] + 3*model.x[2] + 3*model.x[3] + 4*model.x[4])
model.Constraint1 = pyo.Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1)
model.Constraint2 = pyo.Constraint(expr = 3*model.x[3] + 4*model.x[4] >= 1)
model.Constraint3 = pyo.Constraint(expr = 1000*pyo.cos(model.x[3]) <= 1000)
model.Constraint4 = pyo.Constraint(expr = 1000*pyo.sin(model.x[4]) <= 1000)
model.Constraint5 = pyo.Constraint(expr = model.x[2] <= 10000*(1-model.y))
model.Constraint6 = pyo.Constraint (expr=model.x[2] <= 10000*(model.y))
pyo.SolverFactory('mindtpy').solve(model, mip_solver='cbc', nlp_solver='ipopt', tee=True)
D:\tmp>python.exe pyom1.py
INFO: ---Starting MindtPy---
INFO: Original model has 6 constraints (2 nonlinear) and 0 disjunctions, with
5 variables, of which 1 are binary, 0 are integer, and 4 are continuous.
INFO: rNLP is the initial strategy being used.
INFO: NLP 1: Solve relaxed integrality
INFO: NLP 1: OBJ: 1.666666661289117 LB: -inf UB: inf
INFO: ---MindtPy Master Iteration 0---
INFO: MIP 1: Solve master problem.
INFO: MIP 1: OBJ: 1.6666666499999998 LB: 1.6666666499999998 UB: inf
INFO: NLP 2: Solve subproblem for fixed binaries.
INFO: NLP 2: OBJ: 1.6666666716089886 LB: 1.6666666499999998 UB:
1.6666666716089886
INFO: MindtPy exiting on bound convergence. LB: 1.6666666499999998 + (tol
0.0001) >= UB: 1.6666666716089886
D:\tmp>
I have some code that generates 150 different lineups. I want CPLEX to produce something as close to a greedy solution as possible. I read that if you make the epgap large enough, it will mimic a greedy approach. Is this true? And if so, what should I set the epgap to?
import pulp
from pulp import *
from pulp.solvers import CPLEX_PY
from pydfs_lineup_optimizer import get_optimizer, Site, Sport,CSVLineupExporter
from pydfs_lineup_optimizer.solvers.pulp_solver import PuLPSolver
import time
start_time = time.time()
class CustomPuLPSolver(PuLPSolver):
    LP_SOLVER = pulp.CPLEX_PY(msg=0, epgap=.1)
optimizer = get_optimizer(Site.FANDUEL, Sport.BASEBALL, solver=CustomPuLPSolver)
optimizer.load_players_from_csv("/Users/austi/Desktop/MLB/PLAYERS_LIST.csv")
optimizer.restrict_positions_for_opposing_team(['P'], ['1B','C','2B','3B','SS','OF','UTIL'])
optimizer.set_spacing_for_positions(['SS','C','1B','3B','OF','2B'], 4)
optimizer.set_team_stacking([4])
optimizer.set_max_repeating_players(7)
lineups = list(optimizer.optimize(n=150))
for lineup in lineups:
    print(lineup)
exporter = CSVLineupExporter(lineups)
exporter.export('MLB_result.csv')
print(round(((time.time() - start_time)/60)), "minutes run time")
No, this is not true (where did you read that?). Setting parameter epgap to N tells CPLEX to stop as soon as the relative difference between the best known feasible solution and the lower bound (for minimization problems) on an optimal solution falls below N.
This does not say anything about how the best known feasible solution was found. It could have come from any heuristic or even from an integral node.
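For intuition, the quantity that epgap bounds is (roughly) the following relative gap; with an incumbent of 105 and a best bound of 100 in a minimization, the gap is about 4.8%, so epgap=0.05 would allow CPLEX to stop there:

```python
# Rough sketch of the relative MIP gap that epgap bounds. CPLEX uses
# |best bound - incumbent| / (1e-10 + |incumbent|), or something very close.
def rel_gap(incumbent, best_bound):
    return abs(incumbent - best_bound) / (1e-10 + abs(incumbent))

print(rel_gap(105.0, 100.0))  # about 0.0476
```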
If you explicitly need a greedy solution then you have two options:
Compute that greedy solution yourself
Modify your model so that the only feasible solution is the greedy solution.
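A minimal sketch of the first option: build the greedy lineup yourself, then (if desired) hand it to the solver as a warm start. All numbers and the bare salary-cap rule below are made up; a real lineup would have position constraints on top:

```python
# Greedy lineup: repeatedly take the player with the highest value-per-salary
# ratio that still fits under the salary cap, until the lineup is full.
def greedy_lineup(players, cap, size):
    """players: list of (name, value, salary) tuples."""
    chosen, spent = [], 0
    for name, value, salary in sorted(players, key=lambda p: -p[1] / p[2]):
        if len(chosen) < size and spent + salary <= cap:
            chosen.append(name)
            spent += salary
    return chosen

players = [('A', 30, 9000), ('B', 25, 6000), ('C', 20, 5000), ('D', 10, 4000)]
print(greedy_lineup(players, cap=15000, size=2))  # ['B', 'C']
```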
I'm fairly new to this, so I'm just going to shoot and hope I'm as precise as possible and you'll think it warrants an answer.
I'm trying to optimize (minimize) a cost/quantity model, where both are continuous variables. Global cost should be minimized, but is dependent on total quantity, which is dependent on specific cost.
My code looks like this so far:
# create model
m = Model('Szenario1')
# create variables
X_WP = {}
X_IWP = {}
P_WP = {}
P_IWP = {}
for year in df1.index:
    X_WP[year] = m.addVar(vtype=GRB.CONTINUOUS, name="Wärmepumpe%d" % year)
    X_IWP[year] = m.addVar(vtype=GRB.CONTINUOUS, name="Industrielle Wärmepumpe%d" % year)
    # Price in year i = Base.price * ((Sum of newly installed capacity + sum of historical capacity)^(math.log(LearningRate)/math.log(2)))
    P_WP[year] = P_WP0 * (quicksum(X_WP[year] for year in df1.index) ** learning_factor)
    P_IWP[year] = m.addVar(vtype=GRB.CONTINUOUS, name="Preis Industrielle Wärmepumpe%d" % year)
X_WP[2016] = 0
X_IWP[2016] = 0
# Constraints and Objectives
for year in df1.index:
    m.addConstr((X_WP[year]*VLST_WP + X_IWP[year]*VLST_IWP == Wärmemenge[year]), name="Demand(%d)" % year)
obj = quicksum(
    ((X_WP[year] - X_WP[year-1])*P_WP[year] + X_WP[year]*Strompreis_WP*VLST_WP) +
    ((X_IWP[year] - X_IWP[year-1])*P_IWP[year] + X_IWP[year]*Strompreis_EHK*VLST_IWP)
    for year in Wärmemenge.index)
m.setObjective(obj, GRB.MINIMIZE)
m.update()
m.optimize()
X is quantity and P is price. WP and IWP are two different technologies (more will be added later). Since X and P are multiplied, the problem is nonlinear, and so far I haven't found a way to feed Gurobi an objective that it can handle.
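For reference, here is how the learning-curve price from the commented formula behaves numerically, detached from the model (a learning rate of 0.9 is just an example value):

```python
import math

# Learning-curve price: base price times cumulative capacity raised to
# log(learning_rate)/log(2). Doubling capacity then multiplies the price by
# the learning rate, e.g. a 10% reduction for a 0.9 learning rate.
def price(base_price, cumulative_capacity, learning_rate):
    learning_factor = math.log(learning_rate) / math.log(2)
    return base_price * cumulative_capacity ** learning_factor

print(price(100.0, 1.0, 0.9))  # 100.0
print(price(100.0, 2.0, 0.9))  # about 90.0
```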
My research online and on Stack Overflow basically led me to the conclusion that I can either linearize the problem and solve it with Gurobi, find another solver that can handle this MINLP, or reformulate my objective in a way Gurobi can solve. Since I've already made myself familiar with Gurobi, that would be my preferred choice.
Any advice on what's best at this point?
Would be highly appreciated!
I'd suggest rewriting your Python code using Pyomo.
This is a general-purpose optimization modeling framework for Python which can construct valid inputs for Gurobi as well as a number of other optimization tools.
In particular, it will allow you to use Ipopt as a backend, which does solve (at least some) nonlinear problems. Even if Ipopt cannot solve your nonlinear problem, using Pyomo will allow you to test that quickly and then easily move back to a linearized representation in Gurobi if things don't work out.
I am working on a very stiff, Michaelis-Menten-type system (already implemented and published in MATLAB, where ode15s solves it easily). I translated it to Python, but no solver can get beyond step 2 of the integration.
I have tried:
#time
t_start = -10
t_end = 12
t_step = 0.01
# Number of time steps: 1 extra for initial condition
num_steps = np.floor((t_end - t_start)/t_step) + 1
[...]
#integration
r = integrate.ode(JahanModel_ODE).set_integrator('lsoda', atol=1e-8,rtol=1e-6)
and also for the integrator:
r = integrate.ode(JahanModel_ODE).set_integrator('vode',method='bdf', order=5)
with different tolerances.
All I get is
UserWarning: lsoda: Excess work done on this call (perhaps wrong Dfun
type). 'Unexpected istate=%s' % istate))
or
UserWarning: vode: Excess work done on this call. (Perhaps wrong MF.)
'Unexpected istate=%s' % istate))
I also tried different values for t_step.
There already seemed to be a satisfying answer here: Integrate stiff ODEs with Python, but the links are not working anymore and googling suggested that lsoda is already superior to LSODE.
EDIT: Here is the complete code, without the plotting instances. Gistlink
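As a sanity check independent of the model itself, a tiny stiff test equation run through SciPy's newer solve_ivp interface with an implicit method (assuming SciPy >= 1.0) looks like this; if this also stalls, the environment rather than the model would be suspect:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic stiff test problem y' = -1000*(y - cos(t)): after a very fast
# transient the solution hugs cos(t). Implicit methods such as 'Radau' or
# 'BDF' handle the stiffness easily at these tolerances.
def rhs(t, y):
    return -1000.0 * (y - np.cos(t))

sol = solve_ivp(rhs, (0.0, 1.0), [0.0], method="Radau", atol=1e-8, rtol=1e-6)
print(sol.success, float(sol.y[0, -1]))  # True, close to cos(1)
```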
I am using the PuLP linear programming module for Python to solve a linear problem.
I set up the problem, the constraints, and I use the default solver provided with PuLP which is CBC (the solver executable on my mac is called cbc-osx-64 for obvious reasons). When running this executable:
Welcome to the CBC MILP Solver
Version: 2.7.6
Build Date: Mar 3 2013
Revision Number: 1770
OK, I run the solver via PuLP and get a solution. When verifying that the constraints are satisfied I get a difference between the solution and what I requested (for some of the constraints, not all), which is less than 1e-6 but greater than 1e-7 (1.6e-7, e.g.).
Of course it makes sense to have a constraint tolerance; that is fine. But I need to be able to control it, and I would think this should be a very central and important parameter in any LP task.
So let us look at the "help" from the CBC solver (run the executable and type "?"), these are the arguments I can change:
Commands are:
Double parameters:
dualB(ound) dualT(olerance) primalT(olerance) primalW(eight) zeroT(olerance)
Branch and Cut double parameters:
allow(ableGap) cuto(ff) inc(rement) integerT(olerance) preT(olerance)
pumpC(utoff) ratio(Gap) sec(onds)
Integer parameters:
force(Solution) idiot(Crash) maxF(actor) maxIt(erations) output(Format)
slog(Level) sprint(Crash)
Branch and Cut integer parameters:
cutD(epth) cutL(ength) depth(MiniBab) hot(StartMaxIts) log(Level) maxN(odes)
maxS(olutions) passC(uts) passF(easibilityPump) passT(reeCuts) pumpT(une)
strat(egy) strong(Branching) trust(PseudoCosts)
Keyword parameters:
allC(ommands) chol(esky) crash cross(over) direction error(sAllowed)
fact(orization) keepN(ames) mess(ages) perturb(ation) presolve
printi(ngOptions) scal(ing) timeM(ode)
Branch and Cut keyword parameters:
clique(Cuts) combine(Solutions) combine2(Solutions) cost(Strategy) cplex(Use)
cuts(OnOff) Dins DivingS(ome) DivingC(oefficient) DivingF(ractional)
DivingG(uided) DivingL(ineSearch) DivingP(seudoCost) DivingV(ectorLength)
feas(ibilityPump) flow(CoverCuts) gomory(Cuts) greedy(Heuristic)
heur(isticsOnOff) knapsack(Cuts) lagomory(Cuts) lift(AndProjectCuts)
local(TreeSearch) mixed(IntegerRoundingCuts) node(Strategy)
pivotAndC(omplement) pivotAndF(ix) preprocess probing(Cuts)
rand(omizedRounding) reduce(AndSplitCuts) residual(CapacityCuts) Rens Rins
round(ingHeuristic) sos(Options) two(MirCuts) Vnd(VariableNeighborhoodSearch)
Actions or string parameters:
allS(lack) barr(ier) basisI(n) basisO(ut) directory dualS(implex)
either(Simplex) end exit export gsolu(tion) help import initialS(olve)
max(imize) min(imize) para(metrics) primalS(implex) printM(ask) quit
saveS(olution) solu(tion) stat(istics) stop
Branch and Cut actions:
branch(AndCut) doH(euristic) prio(rityIn) solv(e)
These parameters currently have the following values:
dualTolerance has value 1e-07
primalTolerance has value 1e-07
zeroTolerance has value 1e-20
allowableGap has value 0
integerTolerance has value 1e-06
preTolerance has value 1e-08
ratioGap has value 0
The only parameter which could be associated with the constraint tolerance and also consistent with my observations is the "integerTolerance".
So, I changed this tolerance to 1e-8 but got the same result (that is, the solution differed from the ground truth by more than 1e-7).
Questions:
Can anyone shed some light on this? In particular, is there a way to set the constraint tolerance (the difference between a found solution and what we requested)?
If not for CBC, do you know of any other solver (GLPK, Gurobi, etc.) where this quantity can be set?
Thanks.
At least in the latest PuLP version you can set it directly via a parameter.
https://pythonhosted.org/PuLP/solvers.html
The parameter fracGap should do the trick, and does for me.
I can't give you an exact answer, but I'd try primal or dual tolerance. Integer tolerance doesn't make sense for constraints to me.
Do you know how to change these options via the Python interface? I would like to experiment with this, but I don't want to call the command-line tool, and I haven't been able to pass options to the solver.