Are SciPy's solver methods interior-point and revised simplex really slow? - python

I'm working on some LP problems and I'm testing the SciPy solvers
https://docs.scipy.org/doc/scipy/reference/optimize.linprog-interior-point.html
and
https://docs.scipy.org/doc/scipy/reference/optimize.linprog-revised_simplex.html
I know that there are the newer highs methods
https://docs.scipy.org/doc/scipy/reference/optimize.linprog-highs.html#optimize-linprog-highs
but I want to use the old ones as I want to test a "normal" simplex implementation...
I'm solving a 200x200 problem, with a constraint matrix of shape 400 x 40,000.
The revised simplex method took 16 minutes to solve it... IP took about 10 minutes.
This is unbelievably slow?!
Am I doing something wrong, or are those old methods really so bad?
If someone knows a well-performing Python implementation of the simplex, IP, or dual simplex, feel free to share... I want to see how they perform in comparison to SciPy.
Cheers
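For reference, this is roughly the benchmark I have in mind; a minimal sketch with a random dense LP of about that size (note the legacy method names 'interior-point' and 'revised simplex' were deprecated and later removed in recent SciPy releases, so this needs an older SciPy):

import time
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 400, 40_000  # roughly the size described above; shrink n for a quick test
A_ub = rng.random((m, n))
b_ub = A_ub @ rng.random(n)   # keeps x = 0 feasible under the default bounds
c = rng.standard_normal(n)    # mixed signs so the optimum is not trivially x = 0

for method in ('highs', 'interior-point', 'revised simplex'):
    t0 = time.perf_counter()
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method=method)
    print(method, 'status:', res.status, f'{time.perf_counter() - t0:.1f}s')
    # expect the legacy methods to take minutes at this size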

Related

Can I use the Scipy simulated annealing library to solve the traveling salesman problem?

To solve the traveling salesman problem with simulated annealing, we need to start with an initial solution, a random permutation of the cities. This is the order in which the salesman is supposed to visit them. Then, we switch to a "neighboring solution" by swapping two cities. And then there are details around when to switch to the neighboring solution, etc.
I could implement all of this from scratch. But I wanted to see if I could use the built-in SciPy annealing solver. Looking at the documentation, it looks like the old anneal method has been deprecated. The new method that is supposed to have replaced it is basinhopping. But looking through the documentation and source code, it seems these routines are geared towards optimizing a function of some array where any float values of that array are permissible (and then there are local and global optima). That's what all the examples in the documentation and code comments are geared towards. I can't imagine how I would use any of these built-in routines to solve the famous traveling salesman problem itself, since the array there is a permutation array. If you just perturb its values by some floating point numbers, you won't get a valid solution.
So, the conclusion seems to be that those standard routines are inapplicable to a combinatorial optimization problem like the traveling salesman? Or am I missing something?
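For concreteness, the from-scratch loop described above is only a few lines; a minimal sketch, assuming dist is a precomputed n x n distance matrix (the swap move and geometric cooling are illustrative choices, not SciPy API):

import numpy as np

def tsp_simulated_annealing(dist, n_iter=100_000, t0=10.0, cooling=0.9995, seed=None):
    # the state is a permutation of the cities; a move swaps two of them,
    # which is exactly what basinhopping's float perturbations cannot do
    rng = np.random.default_rng(seed)
    n = len(dist)

    def tour_length(t):
        return dist[t, np.roll(t, -1)].sum()

    tour = rng.permutation(n)
    cur_len = tour_length(tour)
    best, best_len = tour.copy(), cur_len
    temp = t0
    for _ in range(n_iter):
        i, j = rng.integers(n, size=2)
        cand = tour.copy()
        cand[i], cand[j] = cand[j], cand[i]
        cand_len = tour_length(cand)
        # always accept downhill moves; accept uphill ones with Boltzmann probability
        if cand_len < cur_len or rng.random() < np.exp((cur_len - cand_len) / temp):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour.copy(), cur_len
        temp *= cooling
    return best, best_len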

Global optimisation of function with many parameters (~40) and local minima in Python

Does anyone have some experience with global optimization problems for complex objective functions with multiple local minima and many parameters? For me, CPU time is less of a constraint but actually finding the global minimum is the most important. So far I have tried Scipy 'dual_annealing' (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html) but after running on 32 cores for 24 hours no global minimum was found (perhaps I need to fine-tune this minimizer a bit more or provide stronger bounds?). Are there any other minimization routines that could be more suited to this application?
Many thanks in advance.
Hyperparameter optimizer frameworks that you can try:
Optuna
Microsoft NNI
Nevergrad
With 40+ parameters it is going to be a very tough one, especially if you want to be reasonably certain that you have located the global minimum (not that any algorithm will be able to tell you that anyway)… I had good results with Dual Annealing from SciPy, albeit on lower-dimensional problems.
But still, 24 hours? How expensive is your objective function to evaluate? Do you have the analytical gradient available? I guess the answer is “no”, but if you do there may be quite a few promising techniques to try. Are you specifying bounds for your parameters?
Have you looked at NLopt (https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/)? In the past I had surprisingly good results on larger problems using the Controlled Random Search (CRS) algorithm, but your mileage may vary.
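For what it's worth, here is the minimal dual_annealing pattern I would start from, with explicit bounds (the 40-D Rastrigin objective is just a stand-in with many local minima; substitute your own function and bounds):

import numpy as np
from scipy.optimize import dual_annealing

def rastrigin(x):  # stand-in objective riddled with local minima
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 40  # tight, honest bounds matter a lot in 40-D
result = dual_annealing(rastrigin, bounds=bounds, maxiter=2000, seed=0)
print(result.fun, result.x)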

Gekko(python) for lap time optimization

I was wondering if it was a good idea to use Gekko to solve a lap time optimization:
finding the optimal path on a track to minimize total time by controlling the steering angle and the power output.
I'm fairly new to optimal control problems, so if you had pointers on how to start, that would be great.
Thanks
The bicycle optimal trajectory problem is possible in Gekko. I recommend that you start by working out simple benchmark problems and then take a staged approach (1D to 3D) to building your application. Also, if the authors are willing to share their model, it is often easier to replicate and extend their results. Here are some links to help you get started or see what is possible with a complex trajectory optimization problem (HALE aircraft).
Example problems
Introductory Optimal Control Benchmark Problems with the minimized final time.
Energy optimization for HALE aircraft trajectory optimization (source code).
Inverted Pendulum
There is also the machine learning and dynamic optimization course that is freely available online if you need additional help getting started.
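To give a flavour of the staged approach, here is a minimal free-final-time sketch in Gekko: a double integrator driven to x = 1 at rest in minimum time. The time-scaling trick with tf is a standard Gekko idiom; all the variable names here are illustrative, not from any lap-time paper.

import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)
nt = 101
m.time = np.linspace(0, 1, nt)      # scaled time tau in [0, 1]

tf = m.FV(value=1.0, lb=0.1, ub=100.0)  # free final time
tf.STATUS = 1

x = m.Var(value=0)                  # position
v = m.Var(value=0)                  # velocity
u = m.MV(value=0, lb=-1, ub=1)      # control (stand-in for steering/power)
u.STATUS = 1

# d/dtau = tf * d/dt turns the free-final-time problem into a fixed horizon
m.Equation(x.dt() == tf * v)
m.Equation(v.dt() == tf * u)

p = np.zeros(nt); p[-1] = 1.0
final = m.Param(value=p)            # active only at the last node
m.Equation(final * (x - 1.0) == 0)  # end at x = 1
m.Equation(final * v == 0)          # ... at rest

m.Minimize(tf)
m.options.IMODE = 6                 # simultaneous dynamic optimization
m.solve(disp=False)
print('minimum time:', tf.value[0])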

Method of moments in scipy?

Following from this question, is there a way to use any method other than MLE (maximum-likelihood estimation) for fitting a continuous distribution in scipy? I think that my data may be resulting in the MLE method diverging, so I want to try using the method of moments instead, but I can't find out how to do it in scipy. Specifically, I'm expecting to find something like
scipy.stats.genextreme.fit(data, method=method_of_moments)
Does anyone know if this is possible, and if so how to do it?
A few things to mention:
1) scipy does not have support for GMM (the generalized method of moments). There is some support for GMM via statsmodels (http://statsmodels.sourceforge.net/stable/gmm.html); you can also access many R routines via Rpy2 (and R is bound to have every flavour of GMM ever invented): http://rpy.sourceforge.net/rpy2/doc-2.1/html/index.html
2) Regarding stability of convergence: if this is the issue, then probably your problem is not with the objective being maximised (e.g. likelihood, as opposed to a generalised moment) but with the optimiser. Gradient optimisers can be really fussy (or rather, the problems we give them are not really suited for gradient optimisation, leading to poor convergence).
If statsmodels and Rpy2 do not give you the routine you need, it is perhaps a good idea to write your moment computation out verbosely and see how you can maximise it yourself - perhaps a custom-made little tool would work well for you?
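To make the "custom-made little tool" idea concrete, here is a minimal method-of-moments sketch for genextreme that matches the first three sample moments with a root finder. The synthetic data and starting guess are purely illustrative, and the fit only makes sense for shape values where these moments exist.

import numpy as np
from scipy import optimize, stats

# synthetic sample, only so the sketch is runnable; use your own data
data = stats.genextreme.rvs(c=0.1, loc=2.0, scale=1.5, size=5000, random_state=0)
m1, m2, m3 = data.mean(), data.var(), stats.skew(data)

def moment_residuals(params):
    c, loc, scale = params
    mean, var, skew = stats.genextreme.stats(c, loc=loc, scale=scale, moments='mvs')
    return [mean - m1, var - m2, skew - m3]

sol = optimize.root(moment_residuals, x0=[0.0, m1, np.sqrt(m2)])
print(sol.x)  # method-of-moments estimates of (c, loc, scale)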

Integrate stiff ODEs with Python

I'm looking for a good library that will integrate stiff ODEs in Python. The issue is, scipy's odeint gives me good solutions sometimes, but the slightest change in the initial conditions causes it to fall down and give up. The same problem is solved quite happily by MATLAB's stiff solvers (ode15s and ode23s), but I can't use it (even from Python, because none of the Python bindings for the MATLAB C API implement callbacks, and I need to pass a function to the ODE solver). I'm trying PyGSL, but it's horrendously complex. Any suggestions would be greatly appreciated.
EDIT: The specific problem I'm having with PyGSL is choosing the right step function. There are several of them, but no direct analogues to ode15s or ode23s (a BDF formula and modified Rosenbrock, if that makes sense). So what is a good step function to choose for a stiff system? I have to solve this system for a really long time to ensure that it reaches steady state, and the GSL solvers either choose a minuscule time step or one that's too large.
If you can solve your problem with Matlab's ode15s, you should be able to solve it with the vode solver of scipy. To simulate ode15s, I use the following settings:
import scipy.integrate

# f is the RHS dy/dt = f(t, y); u0 and t0 are the initial state and time
ode15s = scipy.integrate.ode(f)
ode15s.set_integrator('vode', method='bdf', order=5, nsteps=3000)  # BDF order is capped at 5
ode15s.set_initial_value(u0, t0)
and then you can happily solve your problem with ode15s.integrate(t_final). It should work pretty well on a stiff problem.
Python can call C. The industry standard is LSODE in ODEPACK. It is public-domain. You can download the C version. These solvers are extremely tricky, so it's best to use some well-tested code.
Added: Be sure you really have a stiff system, i.e. if the rates (eigenvalues) differ by more than 2 or 3 orders of magnitude. Also, if the system is stiff, but you are only looking for a steady-state solution, these solvers give you the option of solving some of the equations algebraically. Otherwise, a good Runge-Kutta solver like DVERK will be a good, and much simpler, solution.
Added here because it would not fit in a comment: This is from the DLSODE header doc:
C T :INOUT Value of the independent variable. On return it
C will be the current value of t (normally TOUT).
C
C TOUT :IN Next point where output is desired (.NE. T).
Also, yes, Michaelis-Menten kinetics is nonlinear. Aitken acceleration works with it, though. (If you want a short explanation: first consider the simple case of Y being a scalar. You run the system to get 3 Y(T) points. Fit an exponential curve through them (simple algebra). Then set Y to the asymptote and repeat. Now just generalize to Y being a vector. Assume the 3 points are in a plane - it's OK if they're not.) Besides, unless you have a forcing function (like a constant IV drip), the MM elimination will decay away and the system will approach linearity. Hope that helps.
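Here is a minimal sketch of that acceleration step, applied componentwise to three successive state vectors (the convergence guard is an illustrative detail):

import numpy as np

def aitken_step(y0, y1, y2, eps=1e-12):
    # Aitken delta-squared: the asymptote of the exponential through
    # three successive iterates, computed componentwise
    d = y2 - 2.0 * y1 + y0
    safe = np.where(np.abs(d) > eps, d, 1.0)       # avoid 0/0 where already converged
    accel = y2 - (y2 - y1) ** 2 / safe
    return np.where(np.abs(d) > eps, accel, y2)

# usage: integrate to get y(T), y(2T), y(3T), then restart from aitken_step(...)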
PyDSTool wraps the Radau solver, which is an excellent implicit stiff integrator. This has more setup than odeint, but a lot less than PyGSL. The greatest benefit is that your RHS function is specified as a string (typically, although you can build a system using symbolic manipulations) and is converted into C, so there are no slow Python callbacks and the whole thing will be very fast.
I am currently studying a bit of ODE and its solvers, so your question is very interesting to me...
From what I have heard and read, for stiff problems the right way to go is to choose an implicit method as a step function (correct me if I am wrong, I am still learning the mysteries of ODE solvers). I cannot cite where I read this, because I don't remember, but here is a thread from gsl-help where a similar question was asked.
So, in short, it seems like the bsimp method is worth a shot, although it requires a Jacobian function. If you cannot calculate the Jacobian, I would try rk2imp, rk4imp, or any of the Gear methods.
