Use updated values from Pyomo model for warmstarts - python

I am working with some MIP models that only have binary variables as integer variables which represent optional elements in a network. Instead of solving the complete MIP I want to try out an algorithm where I do the following steps:
1. Solve the problem with all binaries fixed to zero.
2. Solve the problem with all binaries fixed to one.
3. Use the difference in objectives as a budget: remove binaries whose objective contribution is too expensive from the pool of options and solve again; repeat this step until either no binaries remain or no binary is removed during an iteration.
4. Solve the reduced MIP, in which some of the binaries are fixed to zero, reducing the number of effective binary variables.
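The elimination step (step 3) can be sketched in plain Python, with the solve() calls abstracted away; all names here are hypothetical and not taken from my actual model:

```python
def prune_options(contributions, budget):
    """Iteratively drop optional elements whose objective contribution
    exceeds the remaining budget, until a pass removes nothing.

    contributions: dict mapping option name -> objective contribution
    budget: difference between the all-ones and all-zeros objectives
    """
    pool = dict(contributions)
    while True:
        too_expensive = [name for name, cost in pool.items() if cost > budget]
        if not too_expensive:          # fixpoint: nothing removed this pass
            return pool
        for name in too_expensive:
            del pool[name]             # fix this binary to zero permanently
        # in the real algorithm a solve() would refresh the contributions here

# Example: with a budget of 10, options costing more than 10 are dropped.
remaining = prune_options({'a': 4, 'b': 12, 'c': 7, 'd': 25}, budget=10)
print(sorted(remaining))  # ['a', 'c']
```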
From my understanding, this algorithm would benefit from warmstarts: the last solution, including the subsequent variable fixings after the solve() calls, provides a feasible solution and bound for the model. I also use the deactivate() and activate() methods to change objectives and to remove a constraint during steps 2 and 3. For that constraint I also wrote code that resets the variables to a feasible solution after reactivating it.
When executing
results = optimizer.solve(self.pyM, warmstart=True, tee=True)
using Gurobi, it seems that Gurobi doesn't use the current variable values from the Pyomo model but only the values from the last solve() results, without the changes made afterwards (fixing variables to one/zero, adjusting values for the constraint).
I assume this because, if I don't reactivate the constraint and run a model where no binaries can be removed, the log reports a working MIP start, while when I do reactivate it I get the following output:
Read MIP start from file /tmp/tmp3lq2wbki.gurobi.mst
[some parameters settings and model characteristics]
User MIP start did not produce a new incumbent solution
User MIP start violates constraint c_e_x96944_ by 0.450000000
regardless of whether I comment out the code that adjusts the values or not. I also expect that code snippet to work properly, as I tested it separately and checked with the display() method that the value of the constraint body lies between both bounds. In step 2, only the "Read MIP start" line from above appears in the log, with no statement about what happened to it.
Is it possible to tell Pyomo to use the values from the Pyomo model instead, or to update the .mst file with the updated Pyomo model values?
I found the GurobiPersistent class:
https://pyomo.readthedocs.io/en/stable/library_reference/solvers/gurobi_persistent.html
I tried
import pyomo.solvers.plugins.solvers.gurobi_persistent as gupyomo
[...]
test = gupyomo.GurobiPersistent(model=self.pyM)
for variable in adjustedVariables:
    test.update_var(variable)
test.update()
but this neither produced output/errors nor changed the behaviour, so I assume it is either not the correct approach or I used it incorrectly.
Additional infos:
gurobi 9.0.2
pyomo 5.7.1
If specific parts of the code would be helpful I can provide them, but I wasn't sure which parts are relevant to the question.

So what worked for me instead was to use
optimizer.set_instance(self.pyM)
at the start of my code and to call
optimizer.update_var(var[index])
whenever I change something, e.g. after fixing the variable var[index], instead of the code above.

Related

IPOPT in Pyomo does not work from source, only with executable

I have a complicated non-linear problem that I cannot copy here, where I have used IPOPT.exe so far (v3.9.1) in a Pyomo environment without a problem. Lately I have noticed that the value of the objective function varies greatly depending on the settings:
if I decrease the tolerance, the value gets more accurate at first (for some digits), but then it jumps to another value, which is 10% bigger than the previous one
my optimization problem is the same for every run except the value of one parameter (the hour I am evaluating the function in), yet IPOPT can sometimes find the solution in 2 seconds, and sometimes I get an infeasible result or the maximum number of iterations is reached (the parameter changes within a range that should not cause any anomaly in the system)
since it is a nasty problem, I expected that IPOPT randomly "jumps into" the problem, but that is not the case: it returns the same exit status and value even when trying the problem several times (with for attempt: - try: - etc.)
the result also differs depending on whether I use the "exact" or "limited-memory" Hessian approximation
Since I do not like this behaviour, I wanted to use the multistart option, but I cannot define an "executable" path with that option, so I figured I would switch from the executable file to an installed solver.
I installed cyipopt and ipopt so that I would have everything; it works fine on another sample example, but not on my problem. It returns ERROR: Solver (ipopt) returned non-zero return code (3221225501) without any feedback, and I cannot figure out the cause.
So the questions would be:
how do I know which options to set and what their proper values are?
why do I always get the same result, even if I start the process completely anew?
can I define an executable with the multistart option? And will it change anything, or should I focus more on initializing the variables to better values?
what is the easiest way to foster convergence?
I know it is not one issue but they belong together in the same context. Thanks for the answers in advance!

Using ipopt within Pyomo, how can I query whether the current solution is feasible?

I have an ipopt model that sometimes suffers from small numerical issues. I have a correction algorithm that can fix them, but may cause small violations of other inequalities. I need a way to determine whether the current solution is feasible without manually querying each variable bound and constraint. How can I do this within pyomo?
I know there's a way to log infeasible constraints to standard output, but this does not help me. I need my code to react dynamically to these infeasibilities, such as running a few more iterations post-correction.
More info:
I have a (relatively) small but highly nonlinear problem modeled in Pyomo. The model sometimes suffers from numerical issues with ipopt when certain variables are near zero. (I'm modeling unit vectors from full vectors via max*uvec=vec with some magnitude constraint, which becomes really messy when the magnitude of a vector is near-zero.) Fortunately, it is possible to compute everything in the model from a few key driving variables, so that small numerical infeasibilities in definition-type constraints (e.g. defining unit vectors) are easily resolved, but such resolution may cause small violations of the main problem constraints.
Some things I've tried:
The log_infeasible_constraints function from pyomo.util.infeasible: it only prints to standard output, and I cannot find any documentation for the function (to see whether there are flags allowing it to be used for my needs). (The function returns None, so I can't, e.g., simply check the length of a returned string.)
Update: I found the source code at https://github.com/Pyomo/pyomo/blob/master/pyomo/util/infeasible.py, which could be salvaged to create a (clunky) solution. However, I suspect this could still miss some things (e.g. other tolerance criteria) that could cause ipopt to consider a solution infeasible.
is_feasible = (results.solver.status == SolverStatus.ok) #(where opt=SolverFactory('ipopt'), and results=opt.solve(model))
This only works immediately after the solve. If the solver runs out of iterations or variables are changed post-solve, this gives no indication of current model feasibility.
(Currently, I blindly run the correction step after each solve since I can't query model feasibility.)

How to use the LazyConstraintCallback in the CPLEX Python API to solve a MILP problem through Benders' decomposition

I have a MILP problem with binary and continuous variables. I want to use the Benders' decomposition algorithm to solve it. I am following this presentation: http://www.iems.ucf.edu/qzheng/grpmbr/seminar/Yuping_Intro_to_BendersDecomp.pdf
I separate my problem into a master problem composed of the discrete variables and a slave problem that has the continuous variables.
I am using the CPLEX Python API to solve it, based on the ATSP example: https://github.com/cswaroop/cplex-samples/blob/master/bendersatsp.py
I create the master problem and the dual of the slave as in this example.
I use the BendersLazyConsCallback, but I am not sure I understand it entirely; please help me through it.
The separate function is called when the current master solution is obtained; then the dual objective function is updated and the dual problem is re-solved.
If the dual is unbounded then it adds the ray to the master problem constraints, e.g., self.add(constraint = workerLP.cutLhs, sense = "L", rhs = workerLP.cutRhs), which happens in the BendersLazyConsCallback class.
But the example does not include the code when the dual is optimal.
So, when the dual is optimal, I add a similar call and add the constraint to the master problem based on the dual solution.
However, if I try to print the master problem constraints, e.g., problem.linear_constraints.get_rows(), I do not see the newly included constraints. It seems that the self.add(constraint = workerLP.cutLhs, sense = "L", rhs = workerLP.cutRhs) command does not push this to the master constraints but keeps it as a member of the LazyConstraintCallback class. Is this correct? How can I see that these new constraints are actually added?
Also, how does the algorithm stop? In the traditional Benders' algorithm, the lower and upper bounds of the problem are updated based on the dual and master solutions, and when they are equal we have convergence.
In the ATSP example, I don't see where this is happening. When exactly is the BendersLazyConsCallback triggered and how does it know when to stop?
The example that you linked to above is from CPLEX 12.6. This is quite old, by now (currently, the latest version is 12.9). If you haven't updated to a more recent version yet, it would probably be a good idea to do so. One reason to update is that with CPLEX 12.7, there is support for automatic Benders' decomposition. With CPLEX 12.8, there is a new generic callback (and a new bendersatsp2.py example to demonstrate it).
With that said, I'll try to answer some of your other questions.
First, as described here, if you write out the model, it will not include the lazy constraints that you have added dynamically in the callback. You can print them out in the callback yourself easily enough (e.g., print("LC:", workerLP.cutLhs, "G", workerLP.cutRhs)). You can determine whether your constraints have been applied by the presence of a message at the end of the engine log, like:
User cuts applied: 3
As to your final questions about how lazy constraints work, please see the section on legacy callbacks in the CPLEX User's Manual. Also, there is the section on termination conditions of the MIP optimizer. The lazy constraint callback is invoked whenever CPLEX finds an integer feasible solution (see this thread on the IBM developerWorks forum for more on that).
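On the stopping question specifically: in the callback version, CPLEX's own branch-and-bound supplies the lower bound and the incumbents supply the upper bound, so there is no explicit Benders loop to see — CPLEX stops when its MIP gap criteria are met. The hand-rolled loop from the presentation looks schematically like this toy example (not CPLEX API; the "slave" is solved in closed form, with Q(y) = max(0, 3 - 2y) standing in for the dual subproblem, and the binary master solved by enumeration):

```python
def subproblem(y):
    """'Slave' solved in closed form: Q(y) = max(0, 3 - 2y).
    Returns its value and a subgradient (the dual information that
    builds the optimality cut theta >= Q(yk) + g*(y - yk))."""
    val = max(0.0, 3.0 - 2.0 * y)
    g = -2.0 if val > 0 else 0.0
    return val, g

def benders(c=1.0, tol=1e-9, max_iter=20):
    cuts = []                       # each cut: (value, subgradient, point)
    ub = float('inf')
    for _ in range(max_iter):
        # master: enumerate the binary y (a MILP solve in general);
        # theta defaults to 0, valid here because Q(y) >= 0
        best = None
        for y in (0.0, 1.0):
            theta = max([v + g * (y - yk) for v, g, yk in cuts], default=0.0)
            obj = c * y + theta
            if best is None or obj < best[0]:
                best = (obj, y)
        lb, y = best                # master optimum -> lower bound
        val, g = subproblem(y)
        ub = min(ub, c * y + val)   # best incumbent -> upper bound
        if ub - lb <= tol:          # bounds meet: converged
            return y, lb, ub
        cuts.append((val, g, y))    # add the optimality cut
    return y, lb, ub

y, lb, ub = benders()
print(y, lb, ub)   # converges to y = 1.0 with lb == ub == 2.0
```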

Python external constraint function in MINLP

Is it possible to add an external custom function as a dynamic constraint in Mixed Integer Nonlinear Programming libraries in Python? I am working with Boolean variables and NumPy matrices (size m x n), where I want to minimize the sum of total values requested (e.g. tot_vals = 2, 3, ..., n). Therefore I want to add some "spatial" constraints; I have created the functions (based on Boolean indexing) and am trying to use them in my optimization procedure. In CVXPY it fails, as I can only add CVXPY-formatted constraints (as far as I know); PuLP fails, as it works only for LP problems. Maybe Pyomo, OpenOpt, or PySCIPOpt would be a choice?
Thank you in advance for your help
With PySCIPOpt this is possible. You would need to create a custom constraint handler that checks the current LP solution for feasibility and possibly adds valid inequalities to cut off the infeasibility in the next node.
One example of this procedure is the TSP implementation in PySCIPOpt. This is also explained in some more detail in this tutorial article about PySCIPOpt.
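The external check itself can stay a plain Python function that the constraint handler's feasibility callback delegates to. As a hypothetical example of the kind of "spatial" constraint the question describes (the no-two-adjacent rule here is invented for illustration):

```python
def no_adjacent_selected(grid):
    """Hypothetical 'spatial' constraint over a 0/1 matrix: no two
    selected cells may be horizontally or vertically adjacent.
    This is the kind of check a PySCIPOpt constraint handler would
    run against a candidate solution."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            if c + 1 < cols and grid[r][c + 1]:
                return False       # two selected cells side by side
            if r + 1 < rows and grid[r + 1][c]:
                return False       # two selected cells stacked vertically
    return True

print(no_adjacent_selected([[1, 0], [0, 1]]))  # True: only diagonal contact
print(no_adjacent_selected([[1, 1], [0, 0]]))  # False: adjacent in a row
```

When the check returns False, the handler would declare the solution infeasible and, ideally, add an inequality excluding that selection pattern.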

Incorrect value of objective function in simple example solved with pyomo

I have recently started to use pyomo for my research, and I'm studying its use with the book "Pyomo-Optimization modelling in Python".
As my research has to do with heat exchanger networks I am currently trying to build and solve a very simple problem before expanding into more complex and meaningful ones.
Here is the model I input into pyomo.
from coopr.pyomo import *
import math

model = AbstractModel()

Tcin1 = 300
Thin1 = 500
mc = 135
mh = 128
Cpc = 3.1
Cph = 2.2

model.Thout1 = Var(initialize=480, within=PositiveReals)
model.Tcout1 = Var(initialize=310, within=PositiveReals)
model.Q = Var(initialize=2000, within=PositiveReals)

def HeatEx(model):
    return ((Thin1 - model.Tcout1) - (model.Thout1 - Tcin1)) / \
           (math.log(Thin1 - model.Tcout1) - math.log(model.Thout1 - Tcin1))

model.obj = Objective(rule=HeatEx, sense=minimize)
model.con1 = Constraint(expr=(mc*Cpc*(Thin1 - model.Thout1) ==
                              mh*Cph*(model.Tcout1 - Tcin1)))
model.con2 = Constraint(expr=(model.Q == mc*Cpc*(Thin1 - model.Thout1)))
model.con3 = Constraint(expr=(model.Tcout1 == 310))
I've been running it through the terminal using the ipopt solver as pyomo --solver=ipopt --summary NoFouling.py.
My problem is that I get an incorrect value for the objective. It says the objective is -60.5025857388 (with variable Thout1 = 493.271206691), which is incorrect. In an attempt to see what the problem is, I replaced model.Thout1 in the objective function with the value 493.271206691, re-ran the model, and obtained the correct objective value, which is 191.630949982. This is very strange, because all the variable values coming out of Pyomo are correct even when the objective function value is wrong. In brief, if I take those values that seemingly give a wrong result and calculate the function from them manually, I get the correct result.
What is the cause of this difference? How can I resolve this problem?
For the record I'm running Python2.7 via Enthought Canopy, on a computer running CentOS 6.5. I also have to confess that I'm a bit new to both python and using a linux system. I have searched through the internet for pyomo answers, but this one seems to be too specific and I have found nothing really useful.
Many thanks
In Python 2.7, the default behaviour of '/' with two integer operands is integer (floor) division.
I'm assuming you want floating point division in your objective function, if so add the following line at the beginning of your script
from __future__ import division
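To see the difference the import makes: with it (and in Python 3, where it is a no-op), '/' is always true division and '//' is the explicit floor-division operator, whereas plain Python 2 floors '/' between integers:

```python
from __future__ import division  # no-op on Python 3, fixes '/' on Python 2

print(7 / 2)    # 3.5  (true division; would be 3 in plain Python 2)
print(7 // 2)   # 3    (floor division, same in both versions)
```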
