I am using Gurobi to solve a MIP problem, and I want to see how its MIPGap changes with respect to the Runtime. Is there an efficient way to collect these two quantities? The callback does not provide MIPGap, and I don't know how to retrieve this attribute before the model terminates (reaches an optimal solution).
Also, how can I output the MIP solving details (how it gets around the perceived non-convexity, etc.) after it completes preprocessing?
Below is the code I am currently using for the gap and runtime:
def mycallback(m, where):
    if where == GRB.Callback.MIPSOL:
        runtime = m.cbGet(GRB.Callback.RUNTIME)
        best = m.cbGet(GRB.Callback.MIPSOL_OBJBST)   # incumbent objective
        bound = m.cbGet(GRB.Callback.MIPSOL_OBJBND)  # best known bound
        gap = abs(bound - best) / abs(best) * 100

model.optimize(mycallback)
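For reference, Gurobi's MIPGap attribute is defined as |ObjBound - ObjVal| / |ObjVal|, so the (runtime, gap) pairs can simply be appended to a list from inside the callback. A minimal pure-Python sketch of that bookkeeping (no Gurobi required; the helper names are made up for illustration):

```python
# Pure-Python sketch (no Gurobi required) of collecting (runtime, gap) pairs.
# Gurobi defines MIPGap as |ObjBound - ObjVal| / |ObjVal|; the helper names
# below are made up for illustration.
def mip_gap(obj_best, obj_bound):
    if obj_best == 0:
        return float("inf")   # relative gap is undefined for a zero incumbent
    return abs(obj_bound - obj_best) / abs(obj_best)

history = []   # (runtime, gap) pairs, appended from inside the callback

def record(runtime, obj_best, obj_bound):
    history.append((runtime, mip_gap(obj_best, obj_bound)))

record(1.5, 100.0, 90.0)   # e.g. values obtained via cbGet in a MIPSOL callback
print(history)
```

After the solve finishes, `history` holds the gap-versus-runtime curve and can be plotted or written to a file.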
Thank you very much!
Best,
Grace
I am trying to run a differential equation system solver with GEKKO in Python 3 (via Jupyter notebook).
For bad initial parameters it of course immediately halts and says that no solution was found, or it immediately concludes with no changes made to the parameters/functions.
For other initial parameters the CPU (APM.exe in Task Manager) is busy for about 30 seconds (depending on the problem size), then it drops back down to 0% usage. But some calculation is still running and produces no output. The only way to stop it (other than killing Python completely) is to kill APM.exe. I never get a solution.
When I do this, the disp=True output tells me it has done a certain number of iterations (the same number for the same initial parameters) and that:
No such file or directory: 'C:\\Users\\MYUSERNAME\\AppData\\Local\\Temp\\tmpfetrvwwwgk_model7\\options.json'
The code is very lengthy and depends on loading data from a file, so I cannot really provide a MWE.
Any ideas?
Here are a few suggestions on troubleshooting your application:
Sequential Simulation: If you are simulating a system with IMODE=4 then I would recommend that you switch to m.options.IMODE=7 and leave m.solve(disp=True) to see where the model is failing.
Change Solver: Sometimes it helps to switch solvers as well with m.options.SOLVER=1.
Short Time Step Solution: You can also try using a shorter time horizon initially with m.time=[0,0.0001] to see if your model can converge for even a short time step.
Coldstart: For optimization problems, you can try m.options.COLDSTART=2 to help identify equations or constraints that lead to an infeasible solution. It breaks the problem down into successive pieces that it can solve and tries to find the place that is causing convergence problems.
There are additional tutorials on simulation with Gekko and a troubleshooting guide (see #18) that helps you dig further into the application. When there are problems with convergence it is typically from one of the following issues:
Divide by zero: If you have an equation such as m.Equation(x.dt()==(x+1)/y) then replace it with m.Equation(y*x.dt()==x+1).
Non-continuously differentiable functions: Replace m.abs(x) with m.abs2(x) or m.abs3(x) that do not have a problem with x=0.
Infeasible: Remove any upper or lower bounds on variables.
Too Many Time Points: Try fewer time points or a shorter horizon initially to verify the model.
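As a side note, the divide-by-zero rewrite above can be checked without Gekko: in plain Python, the quotient form of the residual blows up at y = 0 while the product form stays finite (the function names here are illustrative only):

```python
# Residuals of the two equation forms from the divide-by-zero tip above.
# Quotient form: dx/dt - (x + 1)/y  -> undefined at y = 0
# Product form:  y*dx/dt - (x + 1)  -> well-defined everywhere
def quotient_residual(dxdt, x, y):
    return dxdt - (x + 1) / y   # raises ZeroDivisionError when y == 0

def product_residual(dxdt, x, y):
    return y * dxdt - (x + 1)

print(product_residual(2.0, 1.0, 0.0))  # finite even at y = 0
```

A gradient-based solver faces the same issue internally, which is why the reformulated equation tends to converge more reliably.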
Let us know if any of this helps.
I want to get strong branching scores with CPLEX and Python, and as a first step I tried to use "cplex.advanced.strong_branching" to solve a very simple MILP problem (my code followed the example usage of this function exactly). However, it reported "CPLEX Error 1017: Not available for mixed-integer problems", which confused me, because SB is a traditional branch-and-bound technique. When I used it on an LP problem it worked well.
The error seems to be raised from "CPXXstrongbranch", a function in the underlying C API, which also made me wonder how CPLEX can make SB decisions when I set the branching strategy parameter to SB. A related question: I know the Python API doesn't have the important "CPXgetcallbacknodelp" function, so how can "cplex.advanced.strong_branching" work? Could that be the reason for this error?
I don't fully understand how "CPXstrongbranch" works in C, so the following information may be incorrect: I tried to use "CPXstrongbranch" in the user-set branch callback of the example "adlpex1.c", and the same error was raised; this also stopped me from using "ctypes" to reach the "CPXgetcallbacknodelp" function.
Could it be a version problem? Does CPLEX block access to SB? I ask because I have read a paper that relied on SB scores in CPLEX 12.6.1 via the C API. Or perhaps I have just made a mistake somewhere.
My question is whether CPLEX can do SB and deliver its results to the user for MILP problems.
cplex.advanced.strong_branching does not carry out any branching. The doc is a bit confusing here. What this function does is compute the strong branching scores for the variables you pass (or for all variables if you don't pass a list).
This requires an LP because, in a MIP search tree, you would usually call this function on the current LP relaxation.
If you want to use CPLEX with strong branching then set the variable selection parameter to "strong branching":
with cplex.Cplex() as cpx:
    cpx.parameters.mip.strategy.variableselect.set(
        cpx.parameters.mip.strategy.variableselect.values.strong_branching)
    cpx.solve()
The strong_branching function is only needed if you want to implement your own branching algorithm.
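If you do implement your own branching rule, a strong-branching score is typically some combination of the LP objective degradations from the tentative down- and up-branches. Here is a pure-Python illustration of the widely used product rule (this is not CPLEX's internal formula; all numbers and variable names below are made up):

```python
# Illustrative only: the "product rule" for combining the two LP objective
# degradations measured by strong branching. This is not CPLEX's internal
# formula; the candidate data below is made up.
def sb_score(delta_down, delta_up, eps=1e-6):
    # eps keeps a zero degradation from wiping out the whole product
    return max(delta_down, eps) * max(delta_up, eps)

# Hypothetical (down, up) degradations for three fractional variables:
candidates = {"x1": (0.5, 4.0), "x2": (2.0, 2.5), "x3": (0.1, 9.0)}
best = max(candidates, key=lambda v: sb_score(*candidates[v]))
print(best, sb_score(*candidates[best]))
```

The variable with the highest score is the one you would branch on at the current node.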
I have a MILP problem with binary and continuous variables. I want to use the Benders' decomposition algorithm to solve it. I am following this presentation: http://www.iems.ucf.edu/qzheng/grpmbr/seminar/Yuping_Intro_to_BendersDecomp.pdf
I separate my problem into a master problem composed of the discrete variables and a slave problem that has the continuous variables.
I am using the CPLEX Python API to solve it, based on the ATSP example: https://github.com/cswaroop/cplex-samples/blob/master/bendersatsp.py
I create the master problem and the dual of the slave as in this example.
I use the BendersLazyConsCallback, but I am not sure I understand it entirely. Please help me through.
The function separate is called when the current master solution is obtained, then the dual objective function is updated, and the dual problem is re-solved.
If the dual is unbounded then it adds the ray to the master problem constraints, e.g., self.add(constraint = workerLP.cutLhs, sense = "L", rhs = workerLP.cutRhs), which happens in the BendersLazyConsCallback class.
But the example does not include the code when the dual is optimal.
So, when the dual is optimal, I add a similar call and add the constraint to the master problem based on the dual solution.
However, if I try to print the master problem constraints, e.g., problem.linear_constraints.get_rows(), I do not see the newly included constraints. It seems that the self.add(constraint = workerLP.cutLhs, sense = "L", rhs = workerLP.cutRhs) command does not push this to the master constraints but keeps it as a member of the LazyConstraintCallback class. Is this correct? How can I see that these new constraints are actually added?
Also, how does the algorithm stop? In the traditional Benders' algorithm, the lower and upper bounds of the problem are updated based on the dual and master solutions, and when they are equal we have convergence.
In the ATSP example, I don't see where this is happening. When exactly is the BendersLazyConsCallback triggered and how does it know when to stop?
The example that you linked to above is from CPLEX 12.6. This is quite old, by now (currently, the latest version is 12.9). If you haven't updated to a more recent version yet, it would probably be a good idea to do so. One reason to update is that with CPLEX 12.7, there is support for automatic Benders' decomposition. With CPLEX 12.8, there is a new generic callback (and a new bendersatsp2.py example to demonstrate it).
With that said, I'll try to answer some of your other questions.
First, as described here, if you write out the model, it will not include the lazy constraints that you have added dynamically in the callback. You can easily print them out in the callback yourself (e.g., print("LC:", workerLP.cutLhs, "G", workerLP.cutRhs)). You can determine whether your constraints have been applied by the presence of a message at the end of the engine log, like:
User cuts applied: 3
As to your final questions about how lazy constraints work, please see the section on legacy callbacks in the CPLEX User's Manual. Also, there is the section on termination conditions of the MIP optimizer. The lazy constraint callback is invoked whenever CPLEX finds an integer feasible solution (see this thread on the IBM developerWorks forum for more on that).
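As to the stopping question: the classical bound-based test (stop when the master's lower bound meets the incumbent upper bound) can be sketched on a toy problem in plain Python. This is illustrative only, not CPLEX code; the problem data and helper names are made up:

```python
# Toy Benders loop (pure Python, illustrative only; not CPLEX code) showing
# the classical stopping test: iterate until the master lower bound meets
# the incumbent upper bound.
# Problem: min y + x  s.t.  x + y >= 1,  y in {0, 1},  x >= 0.
# Subproblem in x for fixed y: min x s.t. x >= 1 - y, so x* = max(0, 1 - y)
# with optimal dual u* = 1 if y < 1 else 0, giving the cut theta >= u*(1 - y).

def solve_master(cuts):
    # Enumerate y in {0, 1}; theta is the tightest value allowed by the cuts.
    best = None
    for y in (0, 1):
        theta = max([u * (1 - y) for u in cuts], default=0.0)
        if best is None or y + theta < best[0]:
            best = (y + theta, y, theta)
    return best  # (lower bound, y, theta)

def solve_subproblem(y):
    x = max(0.0, 1 - y)          # primal optimum of the subproblem
    u = 1.0 if y < 1 else 0.0    # optimal dual multiplier
    return x, u

cuts, ub = [], float("inf")
while True:
    lb, y, theta = solve_master(cuts)
    x, u = solve_subproblem(y)
    ub = min(ub, y + x)          # any feasible (y, x) yields an upper bound
    if ub - lb <= 1e-9:          # bounds meet: converged
        break
    cuts.append(u)               # add a Benders optimality cut

print(lb, ub, y, x)
```

When Benders cuts are added through a lazy constraint callback instead, CPLEX itself maintains these bounds inside its branch-and-cut tree, which is why the example has no explicit convergence loop.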
I have tried to solve several similar optimization problems using Gurobi and Pyomo. The problems differ just by a few parameters. Each problem is run three times, each time with the same constraints but a different objective function.
There was one problem that Gurobi could solve for only two of the objective functions I had specified. For the third objective function it couldn't solve the problem, declaring it 'infeasible'. But this didn't make sense to me, as the feasible region was exactly the same as when the other two objective functions were used.
I turned my computer off and on, and tried to solve the problem again on both PyCharm and Spyder. Gurobi was still telling me the problem was infeasible. However, I was sure the problem was perfectly feasible based on my observations of the other similar problems. So I tried to solve the exact same problem on another computer. This time it worked!...
I thus have partially solved my problem. I have the results I need but since I don't understand what's going on with Gurobi on my computer, I may be facing the same kind of problem in the future.
Does anyone have any idea about this "bug"?
You should verify that Gurobi is reporting the status "infeasible" and not "infeasible or unbounded". You can check the return status or the log, or turn off presolve dual reductions (parameter DualReductions=0). If you have large coefficients, see the previous question about integer programming and leaky abstractions. You can check one measure of the numerical difficulty of your problem with the attribute KappaExact. As #LarrySnyter610 suggested in the comments, Gurobi might also be running out of time.
If the objective is really the only thing that changes, you can likely resolve all these difficulties and get a performance boost by solving the problems consecutively. That way, Gurobi can use the solution from the previous objective as a warm start.
Suppose you have 2 objectives, obj1 and obj2, subject to the same set of constraints. You can solve them consecutively by:
m.setObjective(obj1)
m.optimize()
if m.SolCount > 0:
    solution1 = m.getAttr('X', m.getVars())
m.setObjective(obj2)
m.optimize()
if m.SolCount > 0:
    solution2 = m.getAttr('X', m.getVars())
I am applying optimization tools to solve a practical production planning problem.
My current problem is planning for a factory that produces various items in a single production flow. In each stage, there are a few parallel machines, as in the graph below.
I have written a mathematical MILP model and tried to solve it with CPLEX, but the model is too large for CPLEX to handle by itself.
Currently I plan to use a Genetic Algorithm to solve it, but I don't know where to start.
I have some knowledge of the Python language. Please advise me on how to start dealing with this problem.
Does someone have a similar solved problem with code that I could use as a reference?
Before you start writing code (or using someone else's) you must understand the theory behind the scenes.
What are the main entities of your Production System?
You need to formulate your optimization objective, the optimal schedule, in terms of an optimization problem.
One example that comes to mind:
https://github.com/jpuigcerver/jsp-ga
Take a look at this thesis
http://lancet.mit.edu/~mbwall/phd/thesis/thesis.pdf
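To make the starting point concrete, here is a minimal, self-contained genetic algorithm that assigns jobs to parallel machines to minimize the makespan. It is an illustrative sketch only (made-up data, truncation selection, one-point crossover), not a full job-shop solver like the linked repository:

```python
import random

# Minimal genetic algorithm (illustrative sketch, made-up data) assigning
# jobs with given processing times to identical parallel machines so that
# the makespan (maximum machine load) is minimized.
random.seed(0)                      # reproducible runs
TIMES = [4, 7, 2, 9, 5, 3, 8, 6]    # job processing times
MACHINES = 3

def makespan(assign):
    load = [0] * MACHINES
    for job, mach in enumerate(assign):
        load[mach] += TIMES[job]
    return max(load)

def crossover(a, b):
    cut = random.randrange(1, len(a))   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [random.randrange(MACHINES) if random.random() < rate else g
            for g in ind]

pop = [[random.randrange(MACHINES) for _ in TIMES] for _ in range(30)]
for _ in range(100):
    pop.sort(key=makespan)
    parents = pop[:10]                  # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=makespan)
print(makespan(best))   # lower bound here is ceil(44 / 3) = 15
```

From here you would replace the job list with your real processing times and extend the chromosome encoding to cover the multiple stages of your production flow.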