GEKKO optimisation gets stuck (APM.exe) - python

I am trying to run a differential equation system solver with GEKKO in Python 3 (via Jupyter notebook).
For bad initial parameters it immediately halts and reports that no solution was found, or it concludes immediately without any change to the parameters/functions.
For other initial parameters the CPU (APM.exe in the task manager) is busy for about 30 seconds (depending on problem size), then usage drops back to 0%. But some calculation is apparently still running and never produces output. The only way to stop it (other than killing Python completely) is to kill APM.exe. I never get a solution.
When I do this, the disp=True output tells me that it completed a certain number of iterations (the same number for the same initial parameters) and that:
No such file or directory: 'C:\\Users\\MYUSERNAME\\AppData\\Local\\Temp\\tmpfetrvwwwgk_model7\\options.json'
The code is lengthy and depends on data loaded from file, so I cannot really provide an MWE.
Any ideas?

Here are a few suggestions on troubleshooting your application:
Sequential Simulation: If you are simulating a system with IMODE=4, I recommend switching to m.options.IMODE=7 and keeping m.solve(disp=True) to see where the model is failing.
Change Solver: Sometimes it also helps to switch solvers, with m.options.SOLVER=1.
Short Time Step Solution: You can also try a shorter time horizon initially, with m.time=[0,0.0001], to see if your model can converge for even a single short time step.
Coldstart: For optimization problems, you can try m.options.COLDSTART=2 to help identify equations or constraints that lead to an infeasible solution. It breaks the problem into successive pieces that it can solve and tries to find the place that causes the convergence problems. (A sketch combining these settings follows this list.)
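A minimal sketch combining these settings, with a one-state placeholder model standing in for the real ODE system:

    from gekko import GEKKO

    m = GEKKO(remote=False)    # local mode; use remote=True if no local APM
    m.time = [0, 0.0001]       # short horizon first; extend once it converges

    x = m.Var(value=1.0)       # placeholder state for the real system
    m.Equation(x.dt() == -x)

    m.options.IMODE = 7        # sequential simulation instead of IMODE=4
    m.options.SOLVER = 1       # APOPT; SOLVER=3 selects IPOPT
    m.options.COLDSTART = 2    # relevant for optimization problems; harmless here
    m.solve(disp=True)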
There are additional tutorials on simulation with Gekko and a troubleshooting guide (see #18) that help you dig further into the application. When there are problems with convergence, they typically come from one of the following issues:
Divide by zero: If you have an equation such as m.Equation(x.dt()==(x+1)/y), replace it with m.Equation(y*x.dt()==x+1) (see the sketch after this list).
Non-continuously differentiable functions: Replace m.abs(x) with m.abs2(x) or m.abs3(x), which do not have a problem at x=0.
Infeasible: Remove any upper or lower bounds on variables.
Too Many Time Points: Try fewer time points or a shorter horizon initially to verify the model.
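A sketch of the divide-by-zero reformulation on a toy two-state system (the dynamics are placeholders):

    from gekko import GEKKO

    m = GEKKO(remote=False)
    m.time = [0, 0.25, 0.5, 0.75, 1.0]

    x = m.Var(value=0.0)
    y = m.Var(value=1.0)

    # instead of m.Equation(x.dt() == (x + 1) / y), multiply through by y
    # so the solver never divides by a variable that can pass through zero
    m.Equation(y * x.dt() == x + 1)
    m.Equation(y.dt() == -0.1 * y)  # placeholder dynamics so y is determined

    m.options.IMODE = 4             # dynamic simulation
    m.solve(disp=True)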
Let us know if any of this helps.

Related

Optuna CMA-ES IPOP doesn't restart

I am using optuna to maximize my criteria.
More specifically, I am using the CMA-ES sampler with the "ipop" restart strategy.
sampler = optuna.samplers.CmaEsSampler(restart_strategy="ipop", inc_popsize=2)
As far as I understand it, CMA-ES is supposed to restart with an increased population size when an optimum is reached. But in my case there seems to be no restart.
I want to maximize my output. I get close to the maximum around trial ~1/2k, but nothing happens after that: CMA-ES only does exploitation, where I expect exploration.
Even though I do not fully understand the restart conditions, I would expect at least one restart with that many trials.
(see the paper: http://www.cmap.polytechnique.fr/~nikolaus.hansen/cec2005ipopcmaes.pdf)
So my question is: why is Optuna not restarting?
I have seen a note on the Optuna website that describes the restart in terms of a local minimum. I hope that is just a typo and this also works for a local maximum.
Thanks in advance for your feedback.
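For reference, a minimal self-contained version of this setup (the objective is a toy stand-in for the real criterion):

    import optuna

    # toy objective standing in for the real criterion (maximization)
    def objective(trial):
        x = trial.suggest_float("x", -10.0, 10.0)
        return -(x - 2.0) ** 2

    sampler = optuna.samplers.CmaEsSampler(
        restart_strategy="ipop",   # IPOP-CMA-ES: restart with a larger population
        inc_popsize=2,             # population multiplier at each restart
    )
    study = optuna.create_study(direction="maximize", sampler=sampler)
    study.optimize(objective, n_trials=3000)
    print(study.best_value, study.best_params)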

SciPy: status of solve_ivp during integration

I am running a long ODE integration in Python using scipy.integrate.solve_ivp. Is it possible to access the status of the integration, or to check which integration step the routine is on, while it is running? My integration is taking longer than expected, and I would like to know whether the integrator is stuck at some step or whether the individual steps just take really long.
For future tasks: if I split the integration with solve_ivp into sub-intervals in order to print status messages in between, could this interfere with the step-size adaptivity of certain solvers?
Thanks for any feedback!
There was a GitHub pull request to add a verbose option to solve_ivp, but it has not been merged yet. You can either implement it yourself by modifying SciPy's solve_ivp function (which should be easy), or simply print the time t that the solver passes to your ODE function. That is what I do. If your system is not too small, you do not lose much time on the printing.
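A minimal sketch of that approach; the one-state system and the report interval are placeholders:

    from scipy.integrate import solve_ivp

    def rhs(t, y, _state={"next_report": 0.0}):
        # crude progress report roughly once per time unit; note that the
        # solver may evaluate at non-monotonic t (e.g. on rejected steps)
        if t >= _state["next_report"]:
            print(f"solver at t = {t:.3f}")
            _state["next_report"] = t + 1.0
        return [-0.5 * y[0]]    # placeholder dynamics

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0], method="RK45")
    print(sol.status, sol.t[-1])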
Splitting the integration the way you suggest can work; however, if you split every few time steps, you will lose time because the solver restarts at each split. The impact is larger with implicit algorithms, as they recompute the Jacobian of your system at each start.

IPOPT in Pyomo does not work from source, only with the executable

I have a complicated nonlinear problem that I cannot copy here, for which I have so far used IPOPT.exe (v3.9.1) in a Pyomo environment without problems. Lately I have noticed that the value of the objective function varies strongly with the settings:
if I decrease the tolerance, the value gets more accurate at first (by a few digits), but then it jumps to another value that is 10% larger than the previous one
my optimization problem is the same for every run except for the value of one parameter (the hour at which I am evaluating the function); still, IPOPT sometimes finds the solution in 2 seconds, and sometimes I get an infeasible result or the maximum iteration count is reached (the parameter varies within a range that should not cause any anomaly in the system)
since it is a nasty problem, I expected IPOPT to enter the problem with some randomness, but that is not the case: it returns the same exit status and value even when I retry the problem several times (with for attempt: - try: - etc.)
the result also differs depending on whether I use the "exact" or "limited memory" Hessian approximation
since I do not like this behavior, I wanted to use the multistart option, but I cannot define an "executable" path with that option, so I decided to switch from the executable file to an installed solver
I installed cyipopt and ipopt to have everything, and it works fine on another sample example but not on my problem. It returns: ERROR: Solver (ipopt) returned non-zero return code (3221225501), without any further feedback. I cannot figure out the cause.
So the questions would be:
how do I know which options to set and what their proper values are?
why do I always get the same result, even if I start the process completely anew?
can I define an executable with the multistart option? And will it change anything, or should I rather focus on initializing the variables to better values?
what is the easiest way to foster convergence?
I know these are not a single issue, but they belong together in the same context. Thanks for the answers in advance!
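For reference, a sketch of how a specific IPOPT executable and standard options can be passed through Pyomo (the toy model, the path, and the option values are placeholders, not recommendations):

    import pyomo.environ as pyo

    # toy model standing in for the real nonlinear problem
    model = pyo.ConcreteModel()
    model.x = pyo.Var(initialize=1.0)
    model.obj = pyo.Objective(expr=(model.x - 2.0) ** 2)

    # point Pyomo at a specific IPOPT executable (placeholder path)
    opt = pyo.SolverFactory('ipopt', executable=r'C:\solvers\ipopt.exe')

    # standard IPOPT options; values are illustrative
    opt.options['tol'] = 1e-8
    opt.options['max_iter'] = 3000
    opt.options['hessian_approximation'] = 'limited-memory'  # or 'exact'

    results = opt.solve(model, tee=True)
    print(results.solver.status, results.solver.termination_condition)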

Using ipopt within Pyomo, how can I query whether the current solution is feasible?

I have an ipopt model that sometimes suffers from small numerical issues. I have a correction algorithm that can fix them, but may cause small violations of other inequalities. I need a way to determine whether the current solution is feasible without manually querying each variable bound and constraint. How can I do this within pyomo?
I know there's a way to log infeasible constraints to standard output, but this does not help me. I need my code to react dynamically to these infeasibilities, such as running a few more iterations post-correction.
More info:
I have a (relatively) small but highly nonlinear problem modeled in Pyomo. The model sometimes suffers from numerical issues with ipopt when certain variables are near zero. (I'm modeling unit vectors from full vectors via max*uvec=vec with some magnitude constraint, which becomes really messy when the magnitude of a vector is near zero.) Fortunately, it is possible to compute everything in the model from a few key driving variables, so small numerical infeasibilities in definition-type constraints (e.g. those defining unit vectors) are easily resolved, but such a resolution may cause small violations of the main problem constraints.
Some things I've tried:
The log_infeasible_constraints function from pyomo.util.infeasible: it only prints to standard output, and I cannot find any documentation on the function (to see whether there are flags that would allow it to be used for my needs). (The function returns None, so I can't, e.g., simply check the length of a returned string.)
Update: I found the source code at https://github.com/Pyomo/pyomo/blob/master/pyomo/util/infeasible.py, which could be salvaged to create a (clunky) solution. However, I suspect this could still miss some things (e.g. other tolerance criteria) that could cause ipopt to consider a solution infeasible.
is_feasible = (results.solver.status == SolverStatus.ok)  # where opt = SolverFactory('ipopt') and results = opt.solve(model)
This only works immediately after the solve. If the solver runs out of iterations or variables are changed post-solve, this gives no indication of current model feasibility.
(Currently, I blindly run the correction step after each solve since I can't query model feasibility.)
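In the meantime, a manual check can be pieced together from the model itself. A sketch, assuming a single feasibility tolerance (this will not reproduce every tolerance ipopt applies internally):

    import pyomo.environ as pyo

    def solution_is_feasible(model, tol=1e-6):
        # check every active constraint body against its bounds
        for con in model.component_data_objects(pyo.Constraint, active=True):
            body = pyo.value(con.body)
            if con.lower is not None and body < pyo.value(con.lower) - tol:
                return False
            if con.upper is not None and body > pyo.value(con.upper) + tol:
                return False
        # check variable bounds
        for var in model.component_data_objects(pyo.Var):
            if var.value is None:
                continue
            if var.lb is not None and var.value < var.lb - tol:
                return False
            if var.ub is not None and var.value > var.ub + tol:
                return False
        return True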

fmin_bfgs does not complete

I'm trying to minimize a function with lots of parameters (a little over 7000) using fmin_bfgs() or fmin_l_bfgs_b(). When I enter the command
opt_pars = fmin_l_bfgs_b(obj_f, pars, approx_grad=1)
(where obj_f is the function I'm trying to minimize and pars is the vector of initial parameters) the function just runs forever until Python tells me that it has to terminate the program. There is never any output. I tried adding the argument maxfun=2 to see if it was getting anywhere at all, and the same thing happened (it ran forever, then Python terminated the program).
I'm just trying to figure out what could be going wrong with the function. It seems like it may be caught in a while loop or something. Has anyone encountered this problem? If not, I could also use some general debugging advice (I'm relatively new to Python) on how to monitor what the function is doing.
And finally, maybe someone can recommend a different function or package for the task I'm attempting. I'm trying to fit a lasso-regularized Poisson regression to sparse data with about 12 million observations of 7000 variables.
PS Sorry for not including the negative log-likelihood function I'm trying to minimize, but it would be completely uninterpretable.
Thanks a lot for any help!
Zach
Since you don't provide gradients to fmin_bfgs and fmin_l_bfgs_b, your objective function is evaluated len(x) (here over 7000) times each time a gradient is needed. If the objective function is slow to evaluate, that adds up.
The maxfun option apparently doesn't count the gradient estimation, so it's possible that it's not actually an infinite loop; it just takes a very long time.
What do you mean by "python tells me that it has to terminate the program"?
Please try in any case to provide a reproducible test case here. It doesn't matter if the objective function is incomprehensible; what matters is that interested people can reproduce the condition you encounter.
I don't see an infinite-loop problem on my system, even with 7000 parameters. However, the function evaluation count was about 200000 for a simple 7000-parameter problem with l_bfgs_b and no gradient provided. Profile your code to see what such evaluation counts would mean for you. With a gradient provided, the count was 35 (plus 35 gradient evaluations), so providing a gradient may help. (If the function is complicated, automatic differentiation may still work; there are libraries for that in Python.)
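A sketch of the difference, with a toy quadratic standing in for the (unshown) negative log-likelihood:

    import numpy as np
    from scipy.optimize import fmin_l_bfgs_b

    def obj_f(x):                   # toy objective: sum((x - 1)^2)
        return np.sum((x - 1.0) ** 2)

    def grad_f(x):                  # its analytic gradient: 2*(x - 1)
        return 2.0 * (x - 1.0)

    pars = np.zeros(7000)

    # without fprime, each gradient costs len(x)+1 objective evaluations;
    # with fprime, one gradient call replaces that whole sweep
    opt_pars, fval, info = fmin_l_bfgs_b(obj_f, pars, fprime=grad_f)
    print(info['funcalls'], fval)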
For other optimization libraries for Python, see http://scipy.org/Topical_Software (I can't say which are the best ones, though; ipopt or coin-or could be worth a try).
For reference: the L-BFGS-B implementation in SciPy is this one (written by people who are supposed to know what they are doing):
http://users.eecs.northwestern.edu/~nocedal/lbfgsb.html
***
You can debug what is going on, e.g., by using the Python debugger pdb (python -m pdb your_script.py), or just by inserting print statements inside your script.
Also try to Google "debug python" and "profile python" ;)
