I have tried to solve several similar optimization problems using Gurobi and Pyomo. The problems differ just by a few parameters. Each problem is run three times, each time with the same constraints but a different objective function.
There was one problem that Gurobi could solve for only two of the objective functions I had specified. For the third objective function, it couldn't solve the problem and reported it as 'infeasible'. But this didn't make sense to me, as the feasible domain was exactly the same as when the other two objective functions were used.
I turned my computer off and on, and tried to solve the problem again on both PyCharm and Spyder. Gurobi was still telling me the problem was infeasible. However, I was sure the problem was perfectly feasible based on my observations of the other similar problems. So I tried to solve the exact same problem on another computer. This time it worked!...
I have thus partially solved my problem. I have the results I need, but since I don't understand what's going on with Gurobi on my computer, I may face the same kind of problem in the future.
Does anyone have any idea about this "bug"?
You should verify that Gurobi is reporting the status "infeasible" and not "infeasible or unbounded". You can check the return status, inspect the log, or turn off presolve dual reductions (the DualReductions parameter). If you have large coefficients, see the previous question about integer programming and leaky abstractions. You can check one measure of the numerical difficulty of your problem with the KappaExact attribute. As #LarrySnyter610 suggested in the comments, Gurobi might also be running out of time.
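A minimal sketch of those checks, assuming a gurobipy model m (how the model is built or loaded is up to you; KappaExact is only defined when a simplex basis is available, so treat that last part as optional):
import gurobipy as gp
from gurobipy import GRB

m = gp.read('model.lp')  # hypothetical: load or build your model as usual
m.optimize()

if m.Status == GRB.INF_OR_UNBD:
    # Disable dual reductions so Gurobi can tell the two cases apart.
    m.Params.DualReductions = 0
    m.optimize()

if m.Status == GRB.INFEASIBLE:
    m.computeIIS()            # irreducible inconsistent subsystem
    m.write('model.ilp')      # inspect which constraints conflict
elif m.Status == GRB.OPTIMAL:
    print('Condition number estimate:', m.KappaExact)  # needs a simplex basis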
If the objective is really the only thing that changes, you can likely resolve all these difficulties and get a performance boost by solving the problems consecutively. That way, Gurobi can use the solution from the previous objective as a warm start.
Suppose you have two objectives, obj1 and obj2, over the same set of constraints. You can solve them consecutively by
m.setObjective(obj1)
m.optimize()
if m.SolCount > 0:
    solution1 = m.getAttr('X', m.getVars())

m.setObjective(obj2)
m.optimize()
if m.SolCount > 0:
    solution2 = m.getAttr('X', m.getVars())
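If you want to be explicit about the warm start rather than relying on Gurobi keeping the incumbent, you can seed the second solve through the Start attribute (a small optional sketch; Gurobi normally reuses the previous solution automatically when only the objective changes):
for v, val in zip(m.getVars(), solution1):
    v.Start = val     # MIP start used by the next m.optimize() call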
Related
I am currently trying to solve a difficult integer linear program using CPLEX with PuLP on Python.
I set a relative gap for the solver so that I get a solution in a shorter time. Here is the line that solves my model:
solver = pulp.CPLEX_CMD(path=path_to_cplex, gapRel=0.003)
model.solve(solver)
model is my pulp.LpProblem
My problem is that in certain cases the solver returns an infeasible error when I set a gap, so the solution is None. In this case, is there a way to get the last relaxed solution found by PuLP, so that I can at least use a "partially feasible" solution?
Thanks in advance.
I checked the documentation of CPLEX_CMD in PuLP to see if there was an option to do what I want, but I did not find anything matching.
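For reference, a minimal sketch of how the outcome and variable values can be inspected in PuLP after a solve (whether an incumbent survives an "infeasible" or aborted run depends on the solver interface and PuLP version, so this is only the general pattern):
import pulp

model.solve(solver)                      # model and solver as defined above
print('status:', pulp.LpStatus[model.status])
# Newer PuLP versions also expose model.sol_status, which distinguishes an
# optimal solution from a merely feasible incumbent.
if model.status == pulp.LpStatusOptimal:
    values = {v.name: v.varValue for v in model.variables()}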
I have a complicated non-linear problem I cannot copy here, for which I have so far used IPOPT.exe (v3.9.1) in a Pyomo environment without a problem. Lately I noticed that the value of the objective function varies considerably depending on the settings:
if I decrease the tolerance, the value gets more accurate at first (for some digits), but then it jumps to another value, which is 10% bigger than the previous one
my optimization problem is the same for every run, except for the value of one parameter (the hour I am evaluating the function in), yet IPOPT can sometimes find the solution in 2 seconds, while other times I get an infeasible result or the maximum number of iterations is reached (the parameter change is within a range that should not cause any anomaly in the system)
since it is a nasty problem, I expected that IPOPT randomly "jumps into" the problem, but that is not the case: it returns the same exit status and value even when I retry the problem several times (with for attempt: - try: - etc.)
also, the result differs depending on whether I use the "exact" or "limited memory" Hessian approximation
since I do not like this behavior, I wanted to use the multistart option, but I cannot define an "executable" path with that option, so I figured I would switch from the executable file to an installed solver.
I installed cyipopt and ipopt to have everything; it works fine with another sample example but not with my problem. It returns ERROR: Solver (ipopt) returned non-zero return code (3221225501) without any feedback, and I cannot figure out the cause.
So the questions would be:
how do I know which options to set and what their proper values are?
why do I always get the same result even when I start the process completely fresh?
can I define an executable with the multistart option? And will it change anything, or should I focus more on initializing the variables to better values?
what is the easiest way to foster convergence?
I know these are not a single issue, but they belong together in the same context. Thanks for the answers in advance!
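As a reference for the multistart part, this is the general Pyomo pattern I would expect (the executable path is hypothetical, and the exact keyword arguments accepted by the multistart wrapper vary with the Pyomo version, so treat this as a sketch rather than a confirmed interface):
from pyomo.environ import SolverFactory

# Plain IPOPT with an explicit executable path (hypothetical path):
opt = SolverFactory('ipopt', executable=r'C:\solvers\ipopt\bin\ipopt.exe')
results = opt.solve(model, tee=True)

# Pyomo's multistart wrapper (pyomo.contrib.multistart) re-solves the model
# from several perturbed initial points with the named sub-solver:
ms = SolverFactory('multistart')
results = ms.solve(model, solver='ipopt')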
I have an ipopt model that sometimes suffers from small numerical issues. I have a correction algorithm that can fix them, but may cause small violations of other inequalities. I need a way to determine whether the current solution is feasible without manually querying each variable bound and constraint. How can I do this within pyomo?
I know there's a way to log infeasible constraints to standard output, but this does not help me. I need my code to react dynamically to these infeasibilities, such as running a few more iterations post-correction.
More info:
I have a (relatively) small but highly nonlinear problem modeled in Pyomo. The model sometimes suffers from numerical issues with ipopt when certain variables are near zero. (I'm modeling unit vectors from full vectors via max*uvec=vec with some magnitude constraint, which becomes really messy when the magnitude of a vector is near-zero.) Fortunately, it is possible to compute everything in the model from a few key driving variables, so that small numerical infeasibilities in definition-type constraints (e.g. defining unit vectors) are easily resolved, but such resolution may cause small violations of the main problem constraints.
Some things I've tried:
The log_infeasible_constraints function from pyomo.util.infeasible: it only prints to standard output, and I cannot find any documentation on the function (to see whether there are flags allowing it to be used for my needs). (The function returns None, so I can't, e.g., simply check the length of a returned string.)
Update: I found the source code at https://github.com/Pyomo/pyomo/blob/master/pyomo/util/infeasible.py, which could be salvaged to create a (clunky) solution. However, I suspect this could still miss some things (e.g. other tolerance criteria) that could cause ipopt to consider a solution infeasible.
is_feasible = (results.solver.status == SolverStatus.ok) #(where opt=SolverFactory('ipopt'), and results=opt.solve(model))
This only works immediately after the solve. If the solver runs out of iterations or variables are changed post-solve, this gives no indication of current model feasibility.
(Currently, I blindly run the correction step after each solve since I can't query model feasibility.)
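For what it's worth, a minimal sketch of a manual check along the lines of that salvaged source (the tolerance and the set of checks are assumptions on my part and may not exactly match IPOPT's own feasibility criteria):
from pyomo.environ import Constraint, Var, value

def is_model_feasible(model, tol=1e-6):
    # Constraint bodies within their bounds, up to the assumed tolerance.
    for con in model.component_data_objects(Constraint, active=True):
        body = value(con.body)
        if con.lower is not None and body < value(con.lower) - tol:
            return False
        if con.upper is not None and body > value(con.upper) + tol:
            return False
    # Variable values within their bounds.
    for var in model.component_data_objects(Var):
        if var.value is None:
            continue
        if var.lb is not None and var.value < var.lb - tol:
            return False
        if var.ub is not None and var.value > var.ub + tol:
            return False
    return True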
I am using Gurobi to solve a MIP problem, and I want to see how its MIPGap changes with respect to the Runtime. Is there an efficient way to collect these two pieces of information? The callback code does not provide MIPGap, and I don't know how to retrieve this attribute before the model terminates (reaches the optimal solution).
Also, how could we output the MIP solving details (how it gets around the perceived non-convexity, etc.) after it completes preprocessing?
Below is the code that I am currently using for the gap and runtime:
from gurobipy import GRB

gap_log = []  # (runtime, gap %) pairs collected while the solver runs

def mycallback(m, where):
    if where == GRB.Callback.MIPSOL:      # a new incumbent was found
        runtime = m.cbGet(GRB.Callback.RUNTIME)
        best = m.cbGet(GRB.Callback.MIPSOL_OBJBST)
        bound = m.cbGet(GRB.Callback.MIPSOL_OBJBND)
        gap = abs(bound - best) / abs(best) * 100
        gap_log.append((runtime, gap))

model.optimize(mycallback)
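For more frequent samples (not only when a new incumbent is found), the same quantities are available in the where == GRB.Callback.MIP event; a small variant of the callback above (the function name is just illustrative):
def mycallback_mip(m, where):
    if where == GRB.Callback.MIP:
        best = m.cbGet(GRB.Callback.MIP_OBJBST)
        bound = m.cbGet(GRB.Callback.MIP_OBJBND)
        if abs(best) < GRB.INFINITY and abs(best) > 1e-10:  # skip until an incumbent exists
            gap = abs(bound - best) / abs(best) * 100
            gap_log.append((m.cbGet(GRB.Callback.RUNTIME), gap))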
Thank you very much!
Best,
Grace
I am trying to run a differential equation system solver with GEKKO in Python 3 (via Jupyter notebook).
For bad initial parameters it of course immediately halts and says the solution was not found, or immediately finishes with no changes made to the parameters/functions.
For other initial parameters the CPU (APM.exe in Task Manager) is busy for 30 seconds (depending on problem size), then it goes back down to 0% usage. But some calculation is still running and does not produce output. The only way to stop it (other than killing Python completely) is to stop APM.exe. I have not gotten a solution.
When doing this, the disp=True output tells me it has done a certain number of iterations (the same number for the same initial parameters) and that:
No such file or directory: 'C:\\Users\\MYUSERNAME\\AppData\\Local\\Temp\\tmpfetrvwwwgk_model7\\options.json'
The code is very lengthy and depends on loading data from a file, so I cannot really provide an MWE.
Any ideas?
Here are a few suggestions on troubleshooting your application:
Sequential Simulation: If you are simulating a system with IMODE=4, then I would recommend switching to m.options.IMODE=7 and keeping m.solve(disp=True) to see where the model is failing.
Change Solver: Sometimes it helps to switch solvers as well with m.options.SOLVER=1.
Short Time Step Solution: You can also try using a shorter time horizon initially with m.time=[0,0.0001] to see if your model can converge for even a short time step.
Coldstart: For optimization problems, you can try m.options.COLDSTART=2 to help identify equations or constraints that lead to an infeasible solution. It breaks the problem down into successive pieces that it can solve and tries to find the place that is causing convergence problems.
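Putting those settings together, a minimal runnable sketch (the model here is just a stand-in; substitute your own variables and equations):
from gekko import GEKKO

m = GEKKO(remote=False)          # local mode; remote=True uses the hosted server
m.time = [0, 0.0001]             # short horizon first, extend once it converges
x = m.Var(value=1)               # placeholder model: dx/dt = -x + 1
m.Equation(x.dt() == -x + 1)
m.options.IMODE = 7              # sequential simulation to locate failures
m.options.SOLVER = 1             # switch solver (APOPT) if the default struggles
# m.options.COLDSTART = 2        # for optimization problems, as noted above
m.solve(disp=True)
print(x.value)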
There are additional tutorials on simulation with Gekko and a troubleshooting guide (see #18) that helps you dig further into the application. When there are problems with convergence it is typically from one of the following issues:
Divide by zero: If you have an equation such as m.Equation(x.dt()==(x+1)/y) then replace it with m.Equation(y*x.dt()==x+1).
Non-continuously differentiable functions: Replace m.abs(x) with m.abs2(x) or m.abs3(x) that do not have a problem with x=0.
Infeasible: Remove any upper or lower bounds on variables.
Too Many Time Points: Try fewer time points or a shorter horizon initially to verify the model.
Let us know if any of this helps.