If we need dual solutions from an LP solve, is it necessary to switch off presolving and disable propagation? Is there some other (more relaxed) setting that achieves the same goal?
Also, can we get the entire dual solution vector instead of calling getDualsolLinear for every constraint?
Thanks!
Yes, I think you need to turn them off: SCIP does not offer dual postsolving (it is itself not an LP solver), so you will not be able to retrieve dual solutions for constraints that get removed or changed during presolving or propagation.
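Concretely, in PySCIPOpt (which I assume you are using, given getDualsolLinear) that would look something like this minimal sketch:

from pyscipopt import Model, SCIP_PARAMSETTING

model = Model()
# ... build your model ...
model.setPresolve(SCIP_PARAMSETTING.OFF)    # switch off presolving
model.setHeuristics(SCIP_PARAMSETTING.OFF)  # often disabled alongside, though optional here
model.disablePropagation()                  # switch off propagation
model.optimize()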
If you really need presolving, I suggest using an LP solver directly (e.g. SoPlex or HiGHS), although I'm not sure about the availability of a Python interface in those cases.
There is no method that returns the entire solution vector, though it would of course be easy to implement a small wrapper yourself that calls getDualsolLinear on every constraint.
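For example, a minimal sketch of such a wrapper (assuming conss contains only linear constraints, since getDualsolLinear is only valid for those):

def get_dual_vector(model, conss):
    # collect the dual multiplier of each linear constraint, in order
    return [model.getDualsolLinear(c) for c in conss]

duals = get_dual_vector(model, model.getConss())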
I am currently trying to solve a difficult integer linear program using CPLEX with PuLP in Python.
I set a relative gap on the solver so that I get a solution in a shorter time. Here are the lines that solve my model:
solver = pulp.CPLEX_CMD(path=path_to_cplex, gapRel=0.003)
model.solve(solver)
model is my pulp.LpProblem.
My problem is that in certain cases the solver returns an infeasible status when I set a gap, so the solution is None. In this case, is there a way to get the last relaxed solution found by the solver, so that I can at least use a "partially feasible" solution?
Thanks in advance.
I checked the documentation of CPLEX_CMD in PuLP to see if there was an option that does what I want, but I did not find anything matching.
The question is as above. After reading the documentation, I can change the integrator itself (RK45, RK23, DOP853, etc.), but I cannot find information on the order of these integrators, or on ways to limit the integrator to first order.
How can this be done? Do I have to use a particular ODE solver method that is by default 1st order, or can I edit any method to be 1st order?
For many integrators, the order is a fixed property. There are some methods – let's call them meta-integrators – that switch between different integrators, but they are still limited to the orders of those integrators. Thus, you cannot simply control the order of the integrator and leave everything else the same.
If you really want a first-order method, it's easy to implement the Euler method – unless you want step-size adaptation.
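For illustration, a minimal fixed-step Euler sketch (my own helper, not part of SciPy; no error control):

import numpy as np

def euler(f, t_span, y0, n_steps):
    # explicit (first-order) Euler with a fixed step size
    t0, t1 = t_span
    h = (t1 - t0) / n_steps
    t = np.linspace(t0, t1, n_steps + 1)
    y = np.empty((n_steps + 1, np.size(y0)))
    y[0] = y0
    for i in range(n_steps):
        y[i + 1] = y[i] + h * np.asarray(f(t[i], y[i]))
    return t, y

# example: dy/dt = -y, y(0) = 1
t, y = euler(lambda t, y: -y, (0.0, 5.0), [1.0], 500)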
Mind that the order of an integrator denotes how its error behaves for small step sizes. In this respect, a higher order is not something that should cause a problem per se. I would therefore find it remarkable if using a first-order method solved a problem. Sometimes individual methods have problems, or the problem is stiff, but in those cases the solution is to use another solver (one designed for stiff problems), not a first-order solver. If you consistently observe a result across all solvers, it is by far more likely that this is your true result, or that you made a mistake defining your derivative or similar.
I have an ipopt model that sometimes suffers from small numerical issues. I have a correction algorithm that can fix them, but may cause small violations of other inequalities. I need a way to determine whether the current solution is feasible without manually querying each variable bound and constraint. How can I do this within pyomo?
I know there's a way to log infeasible constraints to standard output, but this does not help me. I need my code to react dynamically to these infeasibilities, such as running a few more iterations post-correction.
More info:
I have a (relatively) small but highly nonlinear problem modeled in Pyomo. The model sometimes suffers from numerical issues with ipopt when certain variables are near zero. (I'm modeling unit vectors from full vectors via max*uvec=vec with some magnitude constraint, which becomes really messy when the magnitude of a vector is near-zero.) Fortunately, it is possible to compute everything in the model from a few key driving variables, so that small numerical infeasibilities in definition-type constraints (e.g. defining unit vectors) are easily resolved, but such resolution may cause small violations of the main problem constraints.
Some things I've tried:
The log_infeasible_constraints function from pyomo.util.infeasible: it only prints to standard output, and I cannot find any documentation on the function (to see if there are flags allowing it to be used for my needs). (The function returns None, so I can't, e.g., simply check the length of a returned string.)
Update: I found the source code at https://github.com/Pyomo/pyomo/blob/master/pyomo/util/infeasible.py, which could be salvaged to create a (clunky) solution; a sketch is at the end of this question. However, I suspect this could still miss some things (e.g. other tolerance criteria) that could cause ipopt to consider a solution infeasible.
is_feasible = results.solver.status == SolverStatus.ok  # where opt = SolverFactory('ipopt') and results = opt.solve(model)
This only works immediately after the solve. If the solver runs out of iterations or variables are changed post-solve, this gives no indication of current model feasibility.
(Currently, I blindly run the correction step after each solve since I can't query model feasibility.)
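For reference, roughly the salvaged check I have in mind (a sketch based on that source; I am not sure it matches ipopt's internal feasibility test exactly):

import pyomo.environ as pyo

def max_violation(model):
    # largest violation of any active constraint or variable bound,
    # evaluated at the current variable values
    worst = 0.0
    for con in model.component_data_objects(pyo.Constraint, active=True):
        body = pyo.value(con.body)
        if con.lower is not None:
            worst = max(worst, pyo.value(con.lower) - body)
        if con.upper is not None:
            worst = max(worst, body - pyo.value(con.upper))
    for var in model.component_data_objects(pyo.Var):
        if var.value is None:
            continue
        if var.lb is not None:
            worst = max(worst, var.lb - var.value)
        if var.ub is not None:
            worst = max(worst, var.value - var.ub)
    return worst

# treat the model as feasible if max_violation(model) <= some tolerance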
I have a MILP problem with binary and continuous variables. I want to use the Benders' decomposition algorithm to solve it. I am following this presentation: http://www.iems.ucf.edu/qzheng/grpmbr/seminar/Yuping_Intro_to_BendersDecomp.pdf
I separate my problem into a master problem composed of the discrete variables and a slave problem that has the continuous variables.
I am using the CPLEX Python API to solve it, based on the ATSP example: https://github.com/cswaroop/cplex-samples/blob/master/bendersatsp.py
I create the master problem and the dual of the slave as in this example.
I use the BendersLazyConsCallback, but I am not sure I understand it entirely. Please help me through it.
The function separate is called when a new master solution is obtained; it updates the dual objective function and re-solves the dual problem.
If the dual is unbounded, it adds the ray to the master problem constraints, e.g., self.add(constraint=workerLP.cutLhs, sense="L", rhs=workerLP.cutRhs), which happens in the BendersLazyConsCallback class.
But the example does not include the code for the case when the dual is optimal.
So, when the dual is optimal, I add a similar call and add the constraint to the master problem based on the dual solution.
However, if I try to print the master problem constraints, e.g., problem.linear_constraints.get_rows(), I do not see the newly included constraints. It seems that the self.add(constraint=workerLP.cutLhs, sense="L", rhs=workerLP.cutRhs) call does not push the cut to the master constraints but keeps it as a member of the LazyConstraintCallback class. Is this correct? How can I see that these new constraints are actually added?
Also, how does the algorithm stop? In the traditional Benders' algorithm, the lower and upper bounds of the problem are updated based on the dual and master solutions, and when they are equal we have convergence.
In the ATSP example, I don't see where this is happening. When exactly is the BendersLazyConsCallback triggered and how does it know when to stop?
The example that you linked to above is from CPLEX 12.6. This is quite old by now (currently, the latest version is 12.9). If you haven't updated to a more recent version yet, it would probably be a good idea to do so. One reason to update is that CPLEX 12.7 added support for automatic Benders' decomposition, and CPLEX 12.8 added a new generic callback (and a new bendersatsp2.py example to demonstrate it).
With that said, I'll try to answer some of your other questions.
First, as described here, if you write out the model, it will not include the lazy constraints that you have added dynamically in the callback. You can print them out in the callback yourself easily enough (e.g., print("LC:", workerLP.cutLhs, "G", workerLP.cutRhs)). You can tell whether your constraints have been applied by the presence of a message at the end of the engine log, like:
User cuts applied: 3
As to your final questions about how lazy constraints work, please see the section on legacy callbacks in the CPLEX User's Manual. There is also a section on the termination conditions of the MIP optimizer. The lazy constraint callback is invoked whenever CPLEX finds an integer-feasible solution (see this thread on the IBM developerWorks forum for more on that).
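For what it's worth, here is a rough sketch of how the cut-adding branch sits in the legacy callback (names such as workerLP, separate, cutLhs, and cutRhs follow the bendersatsp.py example; the exact separate/status handling is my assumption, not the example's code):

from cplex.callbacks import LazyConstraintCallback

class BendersLazyConsCallback(LazyConstraintCallback):
    def __call__(self):
        # invoked each time CPLEX finds an integer-feasible master solution
        x = self.get_values()
        # re-solve the dual subproblem; assume separate() returns True
        # when it has built a cut (from a ray if unbounded, or from the
        # optimal dual solution if a violated optimality cut exists)
        if self.workerLP.separate(x):
            # print the cut yourself; it will not appear in
            # problem.linear_constraints.get_rows(), but it is applied
            print("cut:", self.workerLP.cutLhs, "L", self.workerLP.cutRhs)
            self.add(constraint=self.workerLP.cutLhs, sense="L",
                     rhs=self.workerLP.cutRhs)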
I'm looking for a good library that will integrate stiff ODEs in Python. The issue is, scipy's odeint gives me good solutions sometimes, but the slightest change in the initial conditions causes it to fall down and give up. The same problem is solved quite happily by MATLAB's stiff solvers (ode15s and ode23s), but I can't use it (even from Python, because none of the Python bindings for the MATLAB C API implement callbacks, and I need to pass a function to the ODE solver). I'm trying PyGSL, but it's horrendously complex. Any suggestions would be greatly appreciated.
EDIT: The specific problem I'm having with PyGSL is choosing the right step function. There are several of them, but no direct analogues to ode15s or ode23s (a BDF formula and a modified Rosenbrock method, if that makes sense). So what is a good step function to choose for a stiff system? I have to solve this system for a really long time to ensure that it reaches steady state, and the GSL solvers either choose a minuscule time step or one that's too large.
If you can solve your problem with Matlab's ode15s, you should be able to solve it with the vode solver of scipy. To simulate ode15s, I use the following settings:
import scipy.integrate

ode15s = scipy.integrate.ode(f)
# note: VODE caps the BDF order at 5 (ode15s is also variable order 1-5,
# despite the name), so anything higher is silently reduced to 5
ode15s.set_integrator('vode', method='bdf', order=5, nsteps=3000)
ode15s.set_initial_value(u0, t0)
and then you can happily solve your problem with ode15s.integrate(t_final). It should work pretty well on a stiff problem.
Python can call C. The industry standard is LSODE in ODEPACK. It is public domain, and you can download the C version. These solvers are extremely tricky, so it's best to use some well-tested code.
Added: Be sure you really have a stiff system, i.e. that the rates (eigenvalues) differ by more than 2 or 3 orders of magnitude. Also, if the system is stiff but you are only looking for a steady-state solution, these solvers give you the option of solving some of the equations algebraically. Otherwise, a good Runge-Kutta solver like DVERK will be a good, and much simpler, solution.
Added here because it would not fit in a comment: This is from the DLSODE header doc:
C T :INOUT Value of the independent variable. On return it
C will be the current value of t (normally TOUT).
C
C TOUT :IN Next point where output is desired (.NE. T).
Also, yes, Michaelis-Menten kinetics is nonlinear. Aitken acceleration works with it, though. (If you want a short explanation: first consider the simple case of Y being a scalar. You run the system to get 3 Y(T) points, fit an exponential curve through them (simple algebra), then set Y to the asymptote and repeat. Now just generalize to Y being a vector: assume the 3 points are in a plane - it's OK if they're not.) Besides, unless you have a forcing function (like a constant IV drip), the MM elimination will decay away and the system will approach linearity. A sketch of the Aitken step is below. Hope that helps.
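The sketch (my own NumPy helper; componentwise, with a guard for components that have already converged):

import numpy as np

def aitken_extrapolate(y0, y1, y2, eps=1e-12):
    # Aitken delta-squared: fit an exponential approach through three
    # successive states and jump to its asymptote, per component
    d1 = y1 - y0
    d2 = y2 - y1
    denom = d2 - d1
    y_star = y2.copy()
    safe = np.abs(denom) > eps  # skip already-converged components
    y_star[safe] = y2[safe] - d2[safe] ** 2 / denom[safe]
    return y_star

# usage: integrate to get y(T), y(2T), y(3T), extrapolate, reset, repeat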
PyDSTool wraps the Radau solver, which is an excellent implicit stiff integrator. It has more setup than odeint, but a lot less than PyGSL. The greatest benefit is that your RHS function is specified as a string (typically, although you can build a system using symbolic manipulations) and is converted into C, so there are no slow Python callbacks and the whole thing is very fast.
I am currently studying ODEs and their solvers a bit, so your question is very interesting to me...
From what I have heard and read, for stiff problems the right way to go is to choose an implicit method as the step function (correct me if I am wrong, I am still learning the mysteries of ODE solvers). I cannot cite where I read this, because I don't remember, but here is a thread from gsl-help where a similar question was asked.
So, in short, it seems the bsimp method is worth a shot, although it requires a Jacobian function. If you cannot calculate the Jacobian, I would try rk2imp, rk4imp, or any of the gear methods.