Python external constraint function in MINLP

Is it possible to add an external custom function as a dynamic constraint in Mixed Integer Nonlinear Programming libraries in Python? I am working with Boolean variables and NumPy matrices (size m x n), where I want to minimize the sum of total values requested (e.g. tot_vals = 2, 3, ..., n). I therefore want to add some "spatial" constraints; I have created the functions (based on Boolean indexing) and am trying to integrate them into my optimization procedure. In CVXPY this fails, as I can only add constraints in CVXPY's own format (as far as I know), and PuLP fails because it only handles LP problems. Could Pyomo, OpenOpt or PySCIPOpt be an option?
Thank you in advance for your help

With PySCIPOpt this is possible. You would need to create a custom constraint handler which checks the current LP solution for feasibility and maybe adds valid inequalities to circumvent the infeasibility in the next node.
One example of this procedure is the TSP implementation in PySCIPOpt. This is also explained in some more detail in this tutorial article about PySCIPOpt.
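A minimal sketch of such a constraint handler, modeled on the structure of that TSP example, is shown below. The "no two horizontally adjacent cells" rule, the class name and the priorities are purely illustrative placeholders, and the callback signatures follow recent PySCIPOpt releases (older ones pass conslock without the locktype argument):

from pyscipopt import Model, Conshdlr, SCIP_RESULT, quicksum

class NoAdjacentConshdlr(Conshdlr):
    """Forbids selecting two horizontally adjacent cells (illustrative rule)."""

    def __init__(self, x):
        super().__init__()
        self.x = x                                  # dict {(i, j): binary SCIP variable}

    def _find_violation(self, solution):
        # Boolean-indexing style check on the candidate solution values
        for (i, j), var in self.x.items():
            if (i, j + 1) in self.x:
                left = self.model.getSolVal(solution, var)
                right = self.model.getSolVal(solution, self.x[i, j + 1])
                if left + right > 1.5:              # both cells selected
                    return (i, j)
        return None

    def conscheck(self, constraints, solution, checkintegrality,
                  checklprows, printreason, completely):
        # reject candidate solutions that violate the custom rule
        if self._find_violation(solution) is not None:
            return {"result": SCIP_RESULT.INFEASIBLE}
        return {"result": SCIP_RESULT.FEASIBLE}

    def consenfolp(self, constraints, nusefulconss, solinfeasible):
        # None = current LP solution; add a valid inequality if it violates the rule
        viol = self._find_violation(None)
        if viol is not None:
            i, j = viol
            self.model.addCons(self.x[i, j] + self.x[i, j + 1] <= 1)
            return {"result": SCIP_RESULT.CONSADDED}
        return {"result": SCIP_RESULT.FEASIBLE}

    def conslock(self, constraint, locktype, nlockspos, nlocksneg):
        pass

m, n = 3, 4
model = Model()
x = {(i, j): model.addVar(vtype="B", name=f"x_{i}_{j}") for i in range(m) for j in range(n)}
model.addCons(quicksum(x.values()) >= 3)            # e.g. at least tot_vals cells requested
model.setObjective(quicksum(x.values()), "minimize")
model.includeConshdlr(NoAdjacentConshdlr(x), "spatial", "no adjacent selections",
                      enfopriority=-10, chckpriority=-10, needscons=False)
model.optimize()

With a negative enforcement priority the handler is only asked about LP solutions that are already integral, which keeps the NumPy/Boolean-indexing check simple.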

Related

Use updated values from Pyomo model for warmstarts

I am working with some MIP models whose only integer variables are binaries, representing optional elements in a network. Instead of solving the complete MIP, I want to try an algorithm with the following steps:
Solve the problem with all binaries fixed to zero
Solve the problem with all binaries fixed to one
Use the difference in objective as budget to remove some of the binaries with too expensive objective contribution from the pool of options and solve again; repeat this step until either no binaries remain or no binary is removed during the iteration
Solve the reduced MIP where some of the binaries are set to zero which reduces the number of practical binary variables
From my understanding, this algorithm would benefit from warmstarts, since the last solution, including the changes from variable fixings after the solve() calls, provides a feasible solution and bound for the model. I also use the deactivate() and activate() methods to change objectives and remove a constraint during steps 2 and 3. For the constraint, I also wrote code that sets the variables to a feasible solution after reactivating it.
When executing
results = optimizer.solve(self.pyM, warmstart=True, tee=True)
using Gurobi, it seems that Gurobi does not use the current variable values in the Pyomo model but only the values from the last solve() results, without the changes made afterwards (fixing variables to one/zero, adjusting values for the constraint).
I assume this because, if I do not reactivate the constraint and run a model in which no binaries can be removed, the log reports a working MIP start, whereas when I do reactivate it I always get the following output:
Read MIP start from file /tmp/tmp3lq2wbki.gurobi.mst
[some parameters settings and model characteristics]
User MIP start did not produce a new incumbent solution
User MIP start violates constraint c_e_x96944_ by 0.450000000
regardless of whether I comment out the code that adjusts the values or not. I also expect that code snippet to work properly, as I tested it separately and checked with the help of the display() method that the value of the constraint body lies between both bounds. In step 2 only the "Read MIP start" line from above appears in the log, with no statement about what happened to it.
Is it possible to tell Pyomo to use the values from the Pyomo model instead, or to update the .mst file with the updated Pyomo model values?
I found the GurobiPersistent class:
https://pyomo.readthedocs.io/en/stable/library_reference/solvers/gurobi_persistent.html
I tried
import pyomo.solvers.plugins.solvers.gurobi_persistent as gupyomo
[...]
test = gupyomo.GurobiPersistent(model = self.pyM)
for variable in adjustedVariables:
    test.update_var(variable)
test.update()
but this neither produced output/errors nor changed the behaviour. Therefore I assume this is either not the correct way or I used it wrong.
Additional info:
Gurobi 9.0.2
Pyomo 5.7.1
If specific parts of the code would be helpful I can provide them, but I was not sure which parts are relevant to the question.
What seemed to work for me, instead of the code above, was calling optimizer.set_instance(self.pyM) at the start of my code and then optimizer.update_var(var[index]) whenever I change something, such as fixing the variable var[index].
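Put together, the pattern looks roughly like the sketch below; the toy model is purely illustrative, and the warmstart=True keyword is assumed to be accepted by the persistent interface in the same way as by the direct Gurobi interface:

import pyomo.environ as pyo

# illustrative toy model
m = pyo.ConcreteModel()
m.y = pyo.Var([1, 2, 3], domain=pyo.Binary)
m.c = pyo.Constraint(expr=m.y[1] + m.y[2] + m.y[3] >= 1)
m.obj = pyo.Objective(expr=2 * m.y[1] + 3 * m.y[2] + 4 * m.y[3])

optimizer = pyo.SolverFactory('gurobi_persistent')
optimizer.set_instance(m)              # attach the Pyomo model to Gurobi once

optimizer.solve(tee=True)              # first solve

m.y[2].fix(0)                          # change the model afterwards
optimizer.update_var(m.y[2])           # push the change to Gurobi

# re-solve; the values currently loaded in the Pyomo model serve as the MIP start
optimizer.solve(tee=True, warmstart=True)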

Using ipopt within Pyomo, how can I query whether the current solution is feasible?

I have an ipopt model that sometimes suffers from small numerical issues. I have a correction algorithm that can fix them, but may cause small violations of other inequalities. I need a way to determine whether the current solution is feasible without manually querying each variable bound and constraint. How can I do this within pyomo?
I know there's a way to log infeasible constraints to standard output, but this does not help me. I need my code to react dynamically to these infeasibilities, such as running a few more iterations post-correction.
More info:
I have a (relatively) small but highly nonlinear problem modeled in Pyomo. The model sometimes suffers from numerical issues with ipopt when certain variables are near zero. (I'm modeling unit vectors from full vectors via max*uvec=vec with some magnitude constraint, which becomes really messy when the magnitude of a vector is near-zero.) Fortunately, it is possible to compute everything in the model from a few key driving variables, so that small numerical infeasibilities in definition-type constraints (e.g. defining unit vectors) are easily resolved, but such resolution may cause small violations of the main problem constraints.
Some things I've tried:
The log_infeasible_constraints function from pyomo.util.infeasible: it only prints to standard output, and I cannot find any documentation on the function (to see whether there are flags that would let me use it for my needs). (The function returns None, so I can't, e.g., simply check the length of a returned string.)
Update: I found the source code at https://github.com/Pyomo/pyomo/blob/master/pyomo/util/infeasible.py, which could be salvaged to create a (clunky) solution. However, I suspect this could still miss some things (e.g. other tolerance criteria) that could cause ipopt to consider a solution infeasible.
is_feasible = (results.solver.status == SolverStatus.ok) #(where opt=SolverFactory('ipopt'), and results=opt.solve(model))
This only works immediately after the solve. If the solver runs out of iterations or variables are changed post-solve, this gives no indication of current model feasibility.
(Currently, I blindly run the correction step after each solve since I can't query model feasibility.)
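A clunky workaround along the lines of the salvaged source above might look like the following sketch; it simply re-evaluates constraint bodies and variable bounds at the values currently stored in the model, with a plain absolute tolerance that may not match Ipopt's own feasibility criteria:

import pyomo.environ as pyo

def currently_feasible(model, tol=1e-6):
    # check all active constraints at the values currently stored in the model
    for con in model.component_data_objects(pyo.Constraint, active=True):
        body = pyo.value(con.body)
        if con.lower is not None and body < pyo.value(con.lower) - tol:
            return False
        if con.upper is not None and body > pyo.value(con.upper) + tol:
            return False
    # check variable bounds as well
    for var in model.component_data_objects(pyo.Var):
        if var.value is None:
            continue
        if var.lb is not None and var.value < var.lb - tol:
            return False
        if var.ub is not None and var.value > var.ub + tol:
            return False
    return True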

CPLEX 12.9: Strong branching is not available for mixed-integer problems?

I want to get strong branching scores with CPLEX and Python, and as a first step I simply tried to use "cplex.advanced.strong_branching" on a very simple MILP problem (my code followed the example usage of this function exactly). However, it raised "CPLEX Error 1017: Not available for mixed-integer problems", which confused me, because strong branching (SB) is a traditional branch-and-bound technique. When I used it on an LP problem it worked well.
The error seems to be raised from "CPXXstrongbranch", a base C/C++ API, which also made me wonder how CPLEX can make SB decisions when I set the branching strategy parameter to SB. A related question: I know the Python API does not have the important "CPXgetcallbacknodelp" function, so how can "cplex.advanced.strong_branching" work? Could that be the reason for this error?
I don't fully understand how "CPXstrongbranch" works in C, so the following information may be incorrect: I tried to use "CPXstrongbranch" in the user-set branch callback of the example "adlpex1.c", and the same error was raised; that stopped me from using "ctypes" to reach the "CPXgetcallbacknodelp" function.
Could it be a version problem? Does CPLEX block access to SB? I have read a paper that relied on SB scores in CPLEX 12.6.1 via the C API. Or have I just made a mistake?
My question is whether CPLEX can perform SB on a MILP and deliver its results to the user.
cplex.advanced.strong_branching does not carry out any branching. The doc is a bit confusing here. What this function does is compute the strong branching scores for the variables you pass (or all variables if you don't pass a list).
This requires an LP because usually in a MIP search tree you call this function with the current LP relaxation.
If you want to use CPLEX with strong branching then set the variable selection parameter to "strong branching":
import cplex

with cplex.Cplex() as cpx:
    cpx.parameters.mip.strategy.variableselect.set(
        cpx.parameters.mip.strategy.variableselect.values.strong_branching)
    cpx.solve()
The strong_branching function is only needed if you want to implement your own branching algorithm.
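If you do want the scores themselves, a rough, untested sketch of the usual workflow is below; the exact location and argument order of the method (here assumed to be solution.advanced.strong_branching(variables, itlim)) should be checked against your CPLEX version's documentation:

import cplex

# "model.lp" is a hypothetical file containing the MILP
cpx = cplex.Cplex("model.lp")
cpx.set_problem_type(cpx.problem_type.LP)   # work with the LP relaxation
cpx.solve()

candidates = list(range(cpx.variables.get_num()))
# one (down, up) objective estimate per candidate, with a limit of 50 dual iterations each
scores = cpx.solution.advanced.strong_branching(candidates, 50)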

How to use the LazyConstraintCallback in the CPLEX Python API to solve a MILP problem through Benders' decomposition

I have a MILP problem with binary and continuous variables. I want to use the Benders' decomposition algorithm to solve it. I am following this presentation: http://www.iems.ucf.edu/qzheng/grpmbr/seminar/Yuping_Intro_to_BendersDecomp.pdf
I separate my problem into a master problem composed of the discrete variables and a slave problem that has the continuous variables.
I am using the CPLEX Python API to solve it, based on the ATSP example: https://github.com/cswaroop/cplex-samples/blob/master/bendersatsp.py
I create the master problem and the dual of the slave as in this example.
I use the BendersLazyConsCallback, but I am not sure I understand it entirely. Please help me through.
The function separate is called when the current master solution is obtained, then the dual objective function is updated, and the dual problem is re-solved.
If the dual is unbounded then it adds the ray to the master problem constraints, e.g., self.add(constraint = workerLP.cutLhs, sense = "L", rhs = workerLP.cutRhs), which happens in the BendersLazyConsCallback class.
But the example does not include the code when the dual is optimal.
So, when the dual is optimal, I add a similar call and add the constraint to the master problem based on the dual solution.
However, if I try to print the master problem constraints, e.g., problem.linear_constraints.get_rows(), I do not see the newly included constraints. It seems that the self.add(constraint = workerLP.cutLhs, sense = "L", rhs = workerLP.cutRhs) command does not push this to the master constraints but keeps it as a member of the LazyConstraintCallback class. Is this correct? How can I see that these new constraints are actually added?
Also, how does the algorithm stop? In the traditional Benders' algorithm the lower and upper bounds of the problem are updated based on the dual and master solutions, and when they are equal we have convergence.
In the ATSP example, I don't see where this is happening. When exactly is the BendersLazyConsCallback triggered and how does it know when to stop?
The example that you linked to above is from CPLEX 12.6. This is quite old, by now (currently, the latest version is 12.9). If you haven't updated to a more recent version yet, it would probably be a good idea to do so. One reason to update is that with CPLEX 12.7, there is support for automatic Benders' decomposition. With CPLEX 12.8, there is a new generic callback (and a new bendersatsp2.py example to demonstrate it).
With that said, I'll try to answer some of your other questions.
First, as described here, if you write out the model, it will not include the lazy constraints that you have added dynamically in the callback. You can print them out in the callback yourself easily enough (e.g., print("LC:", workerLP.cutLhs, "G", workerLP.cutRhs)). You can determine whether your constraints have been applied by the presence of a message at the end of the engine log, like:
User cuts applied: 3
As to your final questions about how lazy constraints work, please see the section on legacy callbacks in the CPLEX User's Manual. Also, there is the section on termination conditions of the MIP optimizer. The lazy constraint callback is invoked whenever CPLEX finds an integer feasible solution (see this thread on the IBM developerWorks forum for more on that).
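For reference, here is a stripped-down sketch of such a callback, following the structure of the bendersatsp.py example; workerLP, separate, cutLhs and cutRhs are that example's names and are assumptions here:

import cplex
from cplex.callbacks import LazyConstraintCallback

class BendersLazyConsCallback(LazyConstraintCallback):
    def __call__(self):
        # called whenever CPLEX finds an integer-feasible master solution
        x_vals = self.get_values(self.x_indices)
        # re-solve the dual subproblem; True means a ray or violated cut was found
        if self.workerLP.separate(x_vals):
            # printing here is the easiest way to see what actually gets added
            print("LC:", self.workerLP.cutLhs, "L", self.workerLP.cutRhs)
            self.add(constraint=self.workerLP.cutLhs,
                     sense="L", rhs=self.workerLP.cutRhs)

# registration on the master problem (sketch):
# cpx = cplex.Cplex()
# ... build the master ...
# cb = cpx.register_callback(BendersLazyConsCallback)
# cb.x_indices, cb.workerLP = master_x_indices, workerLP
# cpx.solve()

Note that there is no explicit bound comparison inside the callback: CPLEX's ordinary MIP termination criteria decide when to stop, and the callback is simply invoked for every candidate incumbent.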

AMPL vs GAMS MINLP portfolio optimization syntax

I am looking for a MINLP optimizer to solve a portfolio optimization problem that minimizes x'.S.x, where x is a vector and S is a given matrix. There are integer constraints that the elements of x depend on, e.g. x[i] = g[i]*K[i], where g[i] is an integer and K is a given vector; thus we need to find the g[i] while minimizing the objective.
I am considering using AMPL or GAMS. The main program is in Python. I am not sure these are the best MINLP tools out there, but there seem to be examples on both websites. For the matrix multiplication in the minimization objective, it is not clear to me whether there is a simple way of writing this in AMPL or whether I need to write it as an algebraic expansion. Can you provide an example of the x'.S.x operation in the AMPL language?
As for GAMS, I see the package is free only for a limited number of variables. Therefore I was considering AMPL, but for smaller problems GAMS might be the solution if I cannot figure out the AMPL notation for matrix-vector multiplications.
The AMPL syntax is very straightforward:
sum{i in I, j in I} x[i]*S[i,j]*x[j]
Note that many portfolio models do not need a full-blown MINLP solver but can be solved with the quadratic (and SOCP) capabilities present in systems like Cplex and Gurobi. Your question is somewhat difficult to parse (at least for me) but I believe your model falls in this category.
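For instance, if the g[i] are the only integer decisions, a mixed-integer QP formulation can be sketched directly from Python with CVXPY and an MIQP-capable solver behind it (Gurobi or CPLEX); the data values and the budget constraint below are placeholders:

import cvxpy as cp
import numpy as np

n = 5
S = np.eye(n)                        # covariance matrix (assumed PSD)
K = np.full(n, 0.2)                  # given lot-size vector

g = cp.Variable(n, integer=True)     # the integers to determine
x = cp.multiply(K, g)                # x[i] = g[i] * K[i]

prob = cp.Problem(cp.Minimize(cp.quad_form(x, S)),
                  [cp.sum(x) == 1, g >= 0])   # illustrative budget constraint
prob.solve(solver=cp.GUROBI)         # requires an MIQP-capable solver
print(g.value, prob.value)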
