I have an optimization problem that I could set up either as a rather complex univariate problem with a non-linear objective function, or as a multivariate problem with a linear objective function.
Is it possible to generalize, from a programming perspective, whether one is more efficient than the other? That is, even without going into the details of the functions and constraints, are solvers more efficient with functionally complex univariate problems than with simple multivariate problems, or vice versa?
I tried to Google the problem, but the results I read all looked at examples that didn't even require numerical solving (e.g. this).
Once I've figured out the best approach, I'll implement it in Python.
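For concreteness, the two set-ups would look roughly like this, with trivial stand-in functions in place of my actual objective and constraints:

```python
from scipy.optimize import minimize_scalar, linprog

# Option (a): univariate problem with a non-linear objective (stand-in function).
def f(t):
    return (t - 2.0) ** 2 + 0.5 * abs(t) ** 1.5

res_a = minimize_scalar(f, bounds=(-10, 10), method="bounded")

# Option (b): multivariate problem with a linear objective and linear constraints:
# minimize c @ x  subject to  A_ub @ x <= b_ub  and  x >= 0.
res_b = linprog(c=[1.0, 2.0], A_ub=[[-1.0, 1.0]], b_ub=[1.0],
                bounds=[(0, None), (0, None)])

print(res_a.x)
print(res_b.x)
```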
I have a Python function for which I want to find the global minimum.
I am looking for a resource on how to choose the appropriate algorithm for it.
When looking at the scipy.optimize.minimize documentation, I only find some rather unspecific statements, like:
has proven good performance even for non-smooth optimizations
Suitable for large-scale problems
recommended for medium and large-scale problems
Apart from that, I am unsure whether basinhopping or even Bayesian optimization might be better for my function. For the latter, I am unsure whether it is only meant for machine-learning cost functions or can be used generally.
I am looking for a cheat sheet like [image], but just for minimization problems.
Does something like that already exist?
If not, how would I choose the most appropriate algorithm based on my constraints (which are: the function is time-consuming to compute, there are 4 input variables, and the boundaries are known)?
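For reference, this is the kind of call I am making at the moment, with a cheap stand-in for my actual (expensive) objective:

```python
import numpy as np
from scipy.optimize import minimize

# Cheap stand-in for my real objective, which is time-consuming to compute.
def objective(x):
    return np.sum((x - np.array([1.0, -2.0, 0.5, 3.0])) ** 2)

bounds = [(-5.0, 5.0)] * 4   # 4 input variables, known boundaries

res = minimize(objective, x0=np.zeros(4), method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun)
```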
How can I code a bi-level optimization problem using scipy.optimize.minimize that consists of the following two levels, with (1) being the upper-level problem, which is subject to (2), the lower-level problem:
(1) minimize a linear programming problem, subject to
(2) the minimizer of a quadratic programming problem
I know how to write a single objective function for quadratic programming as a typical scipy.optimize.minimize routine (level 2 by itself), but how can I add an upper-level minimization problem that has its own objective function? Can scipy handle a minimization problem that is subject to a lower minimization problem? A rough outline of the code using minimize and linprog would help.
I don't think the scipy solvers are well suited to this.
A standard approach for bilevel problems is to form the KKT conditions of the inner problem and add these as constraints to the outer problem. The complementarity conditions make the problem hard. You need a global NLP solver or a MIP solver to solve this thing to proven optimality. Here are some formulations for linear bilevel problems. If the inner problem is a (convex) QP problem, things become a little bit more difficult, but the same idea can be applied (form KKT conditions of the inner problem).
For more information see Bard, Practical Bilevel Optimization: Algorithms and Applications, Springer, 1998.
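To make the KKT idea concrete, here is a minimal sketch for a made-up toy instance (all matrices below are invented for illustration). The inner QP, minimize 0.5*x'Qx + c'x subject to Ax <= b, is replaced by its stationarity, primal feasibility, and complementarity conditions, which become constraints of the outer problem. As noted above, the complementarity constraint is non-convex, so a local solver like SLSQP is only a heuristic here:

```python
import numpy as np
from scipy.optimize import minimize

# Invented toy data.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # inner QP: min 0.5 x'Qx + c'x
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])               # inner constraint: A x <= b
b = np.array([3.0])
d = np.array([1.0, -1.0])                # outer (linear) objective: min d'x

n, m = 2, 1                              # variables x, multipliers lam
x_of = lambda z: z[:n]
lam_of = lambda z: z[n:]

outer_obj = lambda z: d @ x_of(z)

cons = [
    # Stationarity of the inner QP: Qx + c + A'lam = 0
    {"type": "eq",   "fun": lambda z: Q @ x_of(z) + c + A.T @ lam_of(z)},
    # Primal feasibility of the inner QP: b - Ax >= 0
    {"type": "ineq", "fun": lambda z: b - A @ x_of(z)},
    # Complementarity: lam'(b - Ax) = 0 (this is the non-convex part)
    {"type": "eq",   "fun": lambda z: lam_of(z) @ (b - A @ x_of(z))},
]
bounds = [(None, None)] * n + [(0.0, None)] * m   # lam >= 0

res = minimize(outer_obj, np.zeros(n + m), method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x[:n], res.fun)
```

A global NLP solver, or a MIP reformulation of the complementarity conditions with big-M constraints, would be needed to certify optimality.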
I want to build a model that describes a curve fitting the data shown in the scatterplot. I thought it would be straightforward using sklearn, but the choice and application of the different methods gets rather confusing.
Which algorithms would you use to tackle this problem?
This is really a question for CrossValidated rather than a Python question.
Your data seems to strongly indicate a simple underlying model which is linear until the very end, when it perhaps becomes polynomial.
As a first step, if possible, I would investigate this phenomenon. It's unusual. Perhaps there's something wrong with the data source. But maybe not. For example, a physical phenomenon with two distinct phases might produce data like these.
As to models, I would suggest natural cubic splines for this data. They are simple and involve cutting the data into windows, each of which you fit with a cubic polynomial (a special case of which is a line).
You might also consider smoothing splines and local regression.
For information on these, see the free online textbook, An Introduction to Statistical Learning.
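As a quick illustration (with fabricated data standing in for your scatterplot), a smoothing spline takes only a few lines with scipy; if you want to stay within sklearn, its SplineTransformer can play a similar role inside a pipeline:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import UnivariateSpline

# Fabricated data mimicking the scatterplot: linear, with a bend near the end.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = 2.0 * x + np.where(x > 8.0, (x - 8.0) ** 3, 0.0) + rng.normal(0.0, 1.0, x.size)

# k=3 gives cubic pieces; s trades smoothness against fidelity (larger s = smoother).
spline = UnivariateSpline(x, y, k=3, s=len(x))

xs = np.linspace(x.min(), x.max(), 400)
plt.scatter(x, y, s=10, alpha=0.4)
plt.plot(xs, spline(xs), color="red")
plt.show()
```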
I have a follow-up question to the post written a couple of days ago; thank you for the previous feedback:
Finding complex roots from set of non-linear equations in python
I have now set up the set of non-linear equations in Python so that fsolve handles the real and imaginary parts independently. However, there are still problems with Python's fsolve converging to the correct solution. I have exactly the same inputs that are used in Matlab, and after double-checking, the set of equations is exactly the same as well. Matlab, no matter how I set the initial values, always converges to the correct solution. With Python, however, every initial condition produces a different result, and never the correct one. After a fraction of a second, the following warning appears:
/opt/local/Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages/scipy/optimize/minpack.py:227:
RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
I was wondering whether there are known differences between fsolve in Python and Matlab, and whether there are known ways to improve its convergence in Python.
Thank you very much
I don't think that you should rely on the fact that the names are the same. I see from your other question that you are specifying that Matlab's fsolve use the 'levenberg-marquardt' algorithm rather than the default. Python's scipy.optimize.fsolve uses MINPACK's hybrd algorithms. Levenberg-Marquardt finds roots approximately by minimizing the sum of squares of the function and is quite robust. It is not a true root-finding method like the default 'trust-region-dogleg' algorithm. I don't know how the hybrd schemes work, but they claim to be a modification of Powell's method.
If you want something similar to what you're doing in Matlab, I'd look for an optimization scheme that implements Levenberg-Marquardt, such as scipy.optimize.root with method='lm', which you were also using in your previous question. Is there a reason why you're not using that?
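For example, something along these lines (with a toy system standing in for your equations) selects Levenberg-Marquardt explicitly:

```python
import numpy as np
from scipy.optimize import root

# Toy system standing in for the real/imaginary parts of your equations.
def equations(z):
    x, y = z
    return [x ** 2 + y ** 2 - 4.0,
            x * y - 1.0]

sol = root(equations, x0=[1.0, 0.5], method="lm")  # Levenberg-Marquardt, as in Matlab
print(sol.x, sol.success)
```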
I'm trying to implement the well-known multilateration algorithm to pinpoint a radiation emission source, given a set of arrival times at various detectors. I have the necessary data, but I'm still having trouble implementing this calculation; I am relatively new to Python.
I know that, if I were doing this by hand, I would use matrices and carry out elementary row operations (EROs) to find my 3 unknowns (x, y, z), but I'm not sure how to code this. Is there a way to have Python perform EROs, or is there a better way to carry out my computation?
Depending on your needs, you could try:
NumPy if you're interested in numerical solutions. As far as I remember, it can solve linear systems of equations (see the sketch below). I don't know how it deals with non-linear systems.
SymPy for symbolic math. It solves linear equations symbolically ... according to its main page.
The two above are "generic" math packages. I doubt you will (easily) find any dedicated (and maintained) library for your specific need. There was already a question on that topic here: Multilateration of GPS Coordinates
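For the linear case, a minimal NumPy sketch looks like this (the matrix and right-hand side are made up; in multilateration you would build them from differences of the range equations for pairs of detectors):

```python
import numpy as np

# Hypothetical 3x3 system A @ p = b for the unknown position p = (x, y, z).
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])
b = np.array([8.0, 13.0, 4.0])

p = np.linalg.solve(A, b)   # performs the elimination (your "EROs") for you
print(p)
```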