Same optimization code, different results on different computers - python

I am running nested optimization code.
sp.optimize.minimize(fun=A, x0=D, method="SLSQP", bounds=(E), constraints=({'type':'eq','fun':constrains}), options={'disp': True, 'maxiter':100, 'ftol':1e-05})
sp.optimize.minimize(fun=B, x0=C, method="Nelder-Mead", options={'disp': True})
The first minimization is part of the function B, so it effectively runs inside the second minimization.
The whole optimization is driven by the data; no random numbers are involved.
I run exactly the same code on two different computers and get totally different results.
The two machines have different versions of Anaconda installed, but
scipy, numpy, and all the other packages used are the same versions.
I don't think the OS should matter, but one machine runs Windows 10 (64-bit) and the other Windows 8.1 (64-bit).
I am trying to figure out what might be causing this.
Even though I did not set every option explicitly, if two computers run the same code, shouldn't the results be the same?
Or are there options in sp.optimize whose default values differ from computer to computer?
PS: I was looking at the option "eps". Is it possible that the default value of "eps" differs between these computers?

You should never expect numerical methods to behave identically on different devices, or even across different runs of the same code on the same device. Due to the finite precision of the machine you can never compute the "real" result, only a numerical approximation, and over a long optimization run these small differences can accumulate.
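As a minimal illustration, floating-point arithmetic is not even associative, so any difference in operation order (different BLAS builds, different CPUs) changes the last bits of a result:
import numpy as np

x = np.float64(0.1)
a = (x + 0.2) + 0.3   # summation order 1
b = x + (0.2 + 0.3)   # summation order 2
print(a == b)         # False: the two orders differ in the last bits
print(a - b)          # ~1e-16 here, but such differences compound over many iterations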
Furthermore, some optimization methods use randomness internally to avoid getting stuck in local minima: they add a small, almost vanishing noise to the previously calculated solution so the algorithm converges toward the global minimum rather than stalling in a local minimum or at a saddle point.
Can you plot the landscape of the function you want to minimize? This can help you analyze the problem: if both results (one per machine) are local minima, the behaviour is explained by the description above.
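For a two-parameter objective, a sketch along these lines works (the function here is only a placeholder for your own):
import numpy as np
import matplotlib.pyplot as plt

def func(x, y):
    # placeholder objective; substitute your function B here
    return np.sin(x) * np.cos(y) + 0.1 * (x**2 + y**2)

xs = np.linspace(-5, 5, 200)
ys = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(xs, ys)
Z = func(X, Y)

plt.contourf(X, Y, Z, levels=50)
plt.colorbar(label="objective value")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Objective landscape")
plt.show()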
If that is not the case, check the version of scipy installed on each machine. You might also be implicitly using single-precision floats on one device and doubles on the other.
There are many possible explanations for this (at first glance) strange numerical behaviour; you will have to give us more details to narrow it down.
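As a concrete starting point, you can diff the numerical stacks on both machines and pin the finite-difference step "eps" (the option the question asks about) so neither run relies on a default; the toy objective below only shows where the option goes:
import numpy as np
import scipy as sp
import scipy.optimize

print(sp.__version__, np.__version__)
np.show_config()   # shows which BLAS/LAPACK each install actually links against

# SLSQP's default eps derives from machine epsilon, which is identical on any
# IEEE-754 double platform, but pinning it explicitly removes the doubt.
res = sp.optimize.minimize(fun=sp.optimize.rosen, x0=np.array([1.3, 0.7]),
                           method="SLSQP",
                           options={'disp': True, 'maxiter': 100,
                                    'ftol': 1e-05, 'eps': 1.5e-08})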

I found that whether SciPy allows the minimum and maximum bounds to be equal depends on the version. For example, in SciPy version 1.5.4, a parameter with equal min and max bounds sends that term's Jacobian to nan, which brings the minimization to a premature stop.
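A minimal reproduction of the effect looks like this; whether you see nan or a clean stop depends on the SciPy version:
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

bounds = [(-5.0, 5.0), (3.0, 3.0)]   # second parameter pinned: min == max
res = minimize(f, x0=np.array([0.0, 3.0]), method="SLSQP", bounds=bounds)
print(res.status, res.message)
print(res.x, res.jac)   # on affected versions the pinned term's gradient is nan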

Related

Does numpy.random.seed make results fixed on different computers?

I know that when you use numpy.random.seed(0) you get the same result on your own computer every time. I am wondering if it is also true for different computers and different installations of numpy.
It all depends on the type of algorithm implemented internally by the random function. NumPy uses a pseudo-random number generator (PRNG) algorithm, which means that if you provide the same seed (as the starting input), you will get the same output, and if you change the seed, you will get a different output. This kind of algorithm is not system-dependent.
A true random number generator (TRNG), by contrast, relies on specialized hardware that physically measures something unpredictable in the environment, such as light, temperature, electrical noise, or radioactive decay. If a module implements that kind of algorithm, it will be system-dependent.
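Concretely, NumPy's legacy generator is the Mersenne Twister PRNG, so the same seed yields the same stream on any machine and OS:
import numpy as np

np.random.seed(0)
print(np.random.rand(3))
# [0.5488135  0.71518937 0.60276338] on any platform and any install,
# because Mersenne Twister is fully deterministic given the seed.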

IPOPT in Pyomo does not work from source only with executable

I have a complicated non-linear problem that I cannot copy here, where I have so far used IPOPT.exe (v3.9.1) in a Pyomo environment without a problem. Lately I have found that the value of the objective function varies greatly with the settings:
if I decrease the tolerance, the value gets more accurate at first (for some digits), but then it jumps to another value that is 10% bigger than the previous one
my optimization problem is the same for every run except for the value of one parameter (the hour I am evaluating the function at); still, IPOPT sometimes finds the solution in 2 seconds, and sometimes I get an infeasible result or the maximum iteration count is reached (the parameter changes within a range that should not cause any anomaly in the system)
since it is a nasty problem, I expected IPOPT to "jump into" the problem randomly, but that is not the case: it returns the same exit status and value even when I retry the problem several times (with for attempt: / try: / etc.)
the result also differs depending on whether I use the "exact" or "limited-memory" Hessian approximation
Since I do not like this behavior, I wanted to use the multistart option, but I cannot define an "executable" path with that option, so I figured I would switch from the executable file to an installed solver.
I installed cyipopt and ipopt to have everything; it works fine on another sample example but not on my problem. It returns: ERROR: Solver (ipopt) returned non-zero return code (3221225501), without any further feedback, and I cannot figure out the cause.
So the questions would be:
how do I know which options to set and what their proper values are?
why do I always get the same result even when I start the process completely fresh?
can I define an executable with the multistart option? And will it change anything, or should I focus more on initializing the variables to better values?
what is the easiest way to foster convergence?
I know these are not a single issue, but they belong together in the same context. Thanks in advance for the answers!
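On the executable and multistart questions: below is a hedged sketch of the two invocation styles (the executable path is illustrative, and whether the multistart wrapper can forward an executable path depends on your Pyomo version):
from pyomo.environ import SolverFactory

# Direct solve with an explicit executable and standard Ipopt options:
opt = SolverFactory('ipopt', executable=r'C:\solvers\ipopt.exe')
opt.options['tol'] = 1e-8
opt.options['max_iter'] = 3000
opt.options['hessian_approximation'] = 'limited-memory'  # or 'exact'
# results = opt.solve(model, tee=True)

# Multistart meta-solver from pyomo.contrib.multistart: re-solves from
# perturbed initial points. It builds the subordinate solver itself, so
# 'ipopt' must be resolvable (e.g. on PATH); whether an executable path
# can be forwarded to it depends on the Pyomo version.
ms = SolverFactory('multistart')
# results = ms.solve(model, solver='ipopt', iterations=10)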

Using ipopt within Pyomo, how can I query whether the current solution is feasible?

I have an ipopt model that sometimes suffers from small numerical issues. I have a correction algorithm that can fix them, but may cause small violations of other inequalities. I need a way to determine whether the current solution is feasible without manually querying each variable bound and constraint. How can I do this within pyomo?
I know there's a way to log infeasible constraints to standard output, but this does not help me. I need my code to react dynamically to these infeasibilities, such as running a few more iterations post-correction.
More info:
I have a (relatively) small but highly nonlinear problem modeled in Pyomo. The model sometimes suffers from numerical issues with ipopt when certain variables are near zero. (I'm modeling unit vectors from full vectors via max*uvec=vec with some magnitude constraint, which becomes really messy when the magnitude of a vector is near-zero.) Fortunately, it is possible to compute everything in the model from a few key driving variables, so that small numerical infeasibilities in definition-type constraints (e.g. defining unit vectors) are easily resolved, but such resolution may cause small violations of the main problem constraints.
Some things I've tried:
The log_infeasible_constraints function from pyomo.util.infeasible: it only prints to standard output, and I cannot find any documentation on the function (to see if there are flags allowing it to be used for my needs). (The function returns None, so I can't, e.g., simply check the length of a returned string.)
Update: I found the source code at https://github.com/Pyomo/pyomo/blob/master/pyomo/util/infeasible.py, which could be salvaged to create a (clunky) solution. However, I suspect this could still miss some things (e.g. other tolerance criteria) that could cause ipopt to consider a solution infeasible.
is_feasible = (results.solver.status == SolverStatus.ok) #(where opt=SolverFactory('ipopt'), and results=opt.solve(model))
This only works immediately after the solve. If the solver runs out of iterations or variables are changed post-solve, this gives no indication of current model feasibility.
(Currently, I blindly run the correction step after each solve since I can't query model feasibility.)
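One workable approach is to re-check feasibility manually. Below is a sketch that walks all active constraints and variable bounds against a tolerance; the tolerance value is an assumption and may not match Ipopt's own acceptance test (as the question notes):
from pyomo.environ import Constraint, Var, value

def model_is_feasible(model, tol=1e-6):
    # Walk every active constraint and compare its body against its bounds.
    for con in model.component_data_objects(Constraint, active=True):
        body = value(con.body)
        if con.lower is not None and body < value(con.lower) - tol:
            return False
        if con.upper is not None and body > value(con.upper) + tol:
            return False
    # Variable bounds count toward feasibility as well.
    for var in model.component_data_objects(Var):
        if var.value is None:
            continue
        if var.lb is not None and var.value < var.lb - tol:
            return False
        if var.ub is not None and var.value > var.ub + tol:
            return False
    return True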

Gurobi generates wrong solutions when dealing with small numbers

I am using Gurobi through the Python interface to solve a mathematical programming model for a class of single-machine scheduling problems that contains both binary and continuous variables. In some cases, when dealing with small numbers, the solution generated by Gurobi is not valid.
The solution is valid from Gurobi's perspective because binary variables are allowed to take values like 0.9999912 or 0.000000002 within its integrality tolerance. As a result, the model produces a solution in which two jobs occupy the machine at the same time, which is invalid. Although the amount of time that the two jobs overlap is very small (for example, 0.004 time units), it makes the solution incorrect.
I would like to know if I can modify the parameters in a way that resolves the issue.
Please see the link below, Modelling and Algorithms section, Question 26; you will find the answer there. http://www.gurobi.com/support/faqs#modeling-and-algorithms
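For reference, the kind of parameter changes that FAQ suggests can be applied from gurobipy like this (the values shown are the tightest allowed, not necessarily what your model needs; rescaling your time units so overlaps are not of order 1e-3 also helps):
import gurobipy as gp

m = gp.Model()
m.Params.IntFeasTol = 1e-9       # how far a "binary" may stray from 0/1 (default 1e-5)
m.Params.FeasibilityTol = 1e-9   # constraint satisfaction tolerance (default 1e-6)
m.Params.NumericFocus = 3        # spend extra effort on numerical robustness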

Comparing fsolve results in python and matlab

I have a follow-up question to the post written a couple of days ago; thank you for the previous feedback:
Finding complex roots from set of non-linear equations in python
I have now set up the system of non-linear equations in Python so that fsolve handles the real and imaginary parts independently. However, Python's fsolve still has problems converging to the correct solution. I use exactly the same inputs as in Matlab, and after double-checking, the set of equations is exactly the same as well. Matlab, no matter how I set the initial values, always converges to the correct solution. With Python, however, every initial condition produces a different result, and never the correct one. After a fraction of a second, the following warning appears in Python:
/opt/local/Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages/scipy/optimize/minpack.py:227:
RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
I was wondering whether there are known differences between fsolve in Python and Matlab, and whether there are known ways to improve its performance in Python.
Thank you very much
I don't think you should rely on the fact that the names are the same. I see from your other question that you are telling Matlab's fsolve to use the 'levenberg-marquardt' algorithm rather than the default. Python's scipy.optimize.fsolve uses MINPACK's hybrd algorithms. Levenberg-Marquardt finds roots approximately by minimizing the sum of squares of the function and is quite robust, but it is not a true root-finding method like the default 'trust-region-dogleg' algorithm. I don't know how the hybrd schemes work, but they claim to be a modification of Powell's method.
If you want something similar to what you're doing in Matlab, I'd look for an optimization scheme that implements Levenberg-Marquardt, such as scipy.optimize.root, which you were also using in your previous question. Is there a reason why you're not using that?
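For comparison, here is a sketch of calling the Levenberg-Marquardt backend through scipy.optimize.root on a toy residual system (substitute your real/imaginary-split equations):
import numpy as np
from scipy.optimize import root

def residuals(x):
    # Toy 2x2 nonlinear system standing in for the real equations.
    return [x[0]**2 + x[1] - 2.0,
            x[0] - x[1]**3]

sol = root(residuals, x0=[0.5, 0.5], method='lm')
print(sol.success, sol.x)   # for this toy system the root is (1, 1)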
