I am using the PyOpt module to solve a convex optimization problem.
The optimization always returns a result, and the value it converges to looks like it is minimizing my target function, but different runs of my code give different solutions.
My problem is convex but not strictly convex, so I'd expect multiple solutions to exist; but since the starting point of my algorithm is essentially the same for both runs, I was wondering whether this could be due to some random procedure in the algorithm I am using.
I am using the SLSQP algorithm; does anybody know whether it uses any random procedure?
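To illustrate what I mean by multiple solutions, here is a toy convex-but-not-strictly-convex problem (written with scipy's SLSQP rather than PyOpt, and with a made-up objective) where different starting points land on different minimizers that share the same objective value:

import numpy as np
from scipy.optimize import minimize

# Convex but not strictly convex: every point on the line x0 + x1 = 1 is a minimizer.
def f(x):
    return (x[0] + x[1] - 1.0) ** 2

for start in ([0.0, 0.0], [5.0, -3.0]):
    res = minimize(f, np.array(start), method='SLSQP')
    print(start, '->', res.x, res.fun)  # different minimizers, (near-)zero objective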
To solve the traveling salesman problem with simulated annealing, we need to start with an initial solution, a random permutation of the cities. This is the order in which the salesman is supposed to visit them. Then, we switch to a "neighboring solution" by swapping two cities. And then there are details around when to switch to the neighboring solution, etc.
I could implement all of this from scratch, but I wanted to see if I could use the built-in SciPy annealing solver. Looking at the documentation, it appears the old anneal method has been deprecated. The new method that is supposed to have replaced it is basinhopping. But looking through the documentation and source code, it seems geared towards optimizing a function of an array whose entries may take any float values (with local and global optima). That's what all the examples in the documentation and code comments are aimed at. I can't imagine how I would use any of these built-in routines to solve the traveling salesman problem itself, since the array there is a permutation array: if you just perturb its values by some floating-point numbers, you won't get a valid solution.
So, the conclusion seems to be that those standard routines are inapplicable to a combinatorial optimization problem like the traveling salesman problem? Or am I missing something?
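To make the from-scratch route concrete, here is a minimal sketch of simulated annealing on a permutation, assuming Euclidean city coordinates; the names (tour_length, anneal_tsp), the cooling schedule, and the iteration count are just illustrative choices, not anything from SciPy:

import math
import random

def tour_length(tour, coords):
    # Total length of the closed tour for (x, y) city coordinates.
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(coords, n_iter=50000, t_start=10.0, t_end=1e-3, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(coords)))
    rng.shuffle(tour)                                   # random initial permutation
    cur_len = tour_length(tour, coords)
    best, best_len = tour[:], cur_len
    for k in range(n_iter):
        t = t_start * (t_end / t_start) ** (k / n_iter)  # geometric cooling
        i, j = rng.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]              # neighbor: swap two cities
        new_len = tour_length(tour, coords)
        if new_len < cur_len or rng.random() < math.exp((cur_len - new_len) / t):
            cur_len = new_len                            # accept the move
            if new_len < best_len:
                best, best_len = tour[:], new_len
        else:
            tour[i], tour[j] = tour[j], tour[i]          # reject: undo the swap
    return best, best_len

For example, anneal_tsp([(0, 0), (1, 0), (1, 1), (0, 1)]) should settle on the square's perimeter ordering.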
I have an ipopt model that sometimes suffers from small numerical issues. I have a correction algorithm that can fix them, but it may cause small violations of other inequalities. I need a way to determine whether the current solution is feasible without manually querying each variable bound and constraint. How can I do this within Pyomo?
I know there's a way to log infeasible constraints to standard output, but this does not help me. I need my code to react dynamically to these infeasibilities, such as running a few more iterations post-correction.
More info:
I have a (relatively) small but highly nonlinear problem modeled in Pyomo. The model sometimes suffers from numerical issues with ipopt when certain variables are near zero. (I'm modeling unit vectors from full vectors via max*uvec=vec with some magnitude constraint, which becomes really messy when the magnitude of a vector is near-zero.) Fortunately, it is possible to compute everything in the model from a few key driving variables, so that small numerical infeasibilities in definition-type constraints (e.g. defining unit vectors) are easily resolved, but such resolution may cause small violations of the main problem constraints.
Some things I've tried:
The log_infeasible_constraints function from pyomo.util.infeasible: it only prints to standard output, and I cannot find any documentation on the function (to see if there are flags allowing it to be used for my needs). (The function returns None, so I can't, e.g., simply check the length of a returned string.)
Update: I found the source code at https://github.com/Pyomo/pyomo/blob/master/pyomo/util/infeasible.py, which could be salvaged to create a (clunky) manual check, roughly along the lines of the sketch after this list. However, I suspect this could still miss some things (e.g. other tolerance criteria) that could cause ipopt to consider a solution infeasible.
is_feasible = (results.solver.status == SolverStatus.ok)  # where opt = SolverFactory('ipopt') and results = opt.solve(model)
This only works immediately after the solve. If the solver runs out of iterations or variables are changed post-solve, this gives no indication of current model feasibility.
(Currently, I blindly run the correction step after each solve since I can't query model feasibility.)
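For reference, the kind of manual check I would have to write (a rough sketch only; the absolute tolerance is hand-picked and will not necessarily match ipopt's own feasibility criteria) walks the active constraints and variable bounds using pyomo's value():

from pyomo.environ import Constraint, Var, value

def model_is_feasible(model, tol=1e-6):
    # True if all active constraints and variable bounds hold at the
    # current variable values, within an absolute tolerance.
    for con in model.component_data_objects(Constraint, active=True):
        body = value(con.body)
        if con.lower is not None and body < value(con.lower) - tol:
            return False
        if con.upper is not None and body > value(con.upper) + tol:
            return False
    for var in model.component_data_objects(Var):
        if var.value is None:
            continue
        if var.lb is not None and var.value < var.lb - tol:
            return False
        if var.ub is not None and var.value > var.ub + tol:
            return False
    return True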
I have written a constrained optimization function using scipy.optimize.minimize. However, when I run the exact same optimization problem multiple times I get variable results.
I know that when generating (pseudo)random numbers, for example, it is possible to provide a seed so that the same numbers are generated in every simulation.
Is there a similar provision for running a constrained optimization problem in scipy?
Or, to put it another way, is there any random component to the scipy.optimize.minimize function?
I am using the SLSQP method and I am running the optimization with the same starting point.
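For reference, this is the kind of repeat-run comparison I mean, with a made-up convex objective and a single linear inequality constraint standing in for my real problem; as far as I know SLSQP has no random component, so two runs from the same x0 should match:

import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

constraints = ({'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 0.5},)
x0 = np.array([0.0, 0.0])

res1 = minimize(objective, x0, method='SLSQP', constraints=constraints)
res2 = minimize(objective, x0, method='SLSQP', constraints=constraints)

# If this prints False, the variability comes from somewhere other than
# SLSQP itself (e.g. a changing objective, bounds, or starting point).
print(np.allclose(res1.x, res2.x))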
I have a theoretical question. I am using scipy.optimize.minimize as my optimisation method, specifically the Nelder-Mead method, and I have noticed that the results change between runs. I am using it to minimise the sum of squared differences (SSD) between two images. Running it at different times returns different results, and sometimes the error differs by more than 50%, e.g. the SSD is 150 in the first optimisation and 450 in the second.
Theoretically, this could be down to the heuristic search strategy of the algorithm and its derivative-free nature.
Is such a big difference normal?
I apologise for the theoretical question, but I am very curious to know/discuss.
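For illustration, a multi-start loop like the following reduces the dependence on any single starting point; the toy 1-D "image", objective, and parameter range below are made up and would be replaced by the real SSD between the two images:

import numpy as np
from scipy.optimize import minimize

# Toy stand-in for an image-registration SSD: shift a 1-D "image" and compare.
reference = np.sin(np.linspace(0, 4 * np.pi, 200))

def ssd(shift):
    moved = np.sin(np.linspace(0, 4 * np.pi, 200) + shift[0])
    return np.sum((reference - moved) ** 2)

# Multi-start Nelder-Mead: several starting points, keep the best result.
rng = np.random.default_rng(0)
best = None
for _ in range(10):
    x0 = rng.uniform(-np.pi, np.pi, size=1)
    res = minimize(ssd, x0, method='Nelder-Mead',
                   options={'xatol': 1e-8, 'fatol': 1e-8})
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)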
I have a follow-up question to a post written a couple of days ago; thank you for the previous feedback:
Finding complex roots from set of non-linear equations in python
I have now set up the set of non-linear equations in Python so that fsolve handles the real and imaginary parts independently. However, there are still problems with Python's fsolve converging to the correct solution. I have exactly the same inputs that are used in Matlab, and after double-checking, the set of equations is exactly the same as well. Matlab, no matter how I set the initial values, always converges to the correct solution. With Python, however, every initial condition produces a different result, and never the correct one. After a fraction of a second, the following warning appears in Python:
/opt/local/Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages/scipy/optimize/minpack.py:227:
RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
I was wondering if there are any known differences between fsolve in Python and Matlab, and if there are any known ways to improve its performance in Python.
Thank you very much
I don't think that you should rely on the fact that the names are the same. I see from your other question that you are specifying that Matlab's fsolve use the 'levenberg-marquardt' algorithm rather than the default. Python's scipy.optimize.fsolve uses MINPACK's hybrd algorithms. Levenberg-Marquardt finds roots approximately by minimizing the sum of squares of the function and is quite robust. It is not a true root-finding method like the default 'trust-region-dogleg' algorithm. I don't know how the hybrd schemes work, but they claim to be a modification of Powell's method.
If you want something similar to what you're doing in Matlab, I'd look for an optimization scheme that implements Levenberg-Marquardt, such as scipy.optimize.root, which you were also using in your previous question. Is there a reason why you're not using that?
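A minimal sketch of that route, with a made-up two-equation system standing in for yours:

import numpy as np
from scipy.optimize import root

# Placeholder system: substitute the real/imaginary-part equations here.
def equations(x):
    return [x[0] ** 2 + x[1] ** 2 - 4.0,
            np.exp(x[0]) + x[1] - 1.0]

sol = root(equations, x0=[1.0, -1.0], method='lm')  # analogous to Matlab's 'levenberg-marquardt'
print(sol.success, sol.x)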