Bi-level optimization using Python scipy.optimize

How can I code a bi-level optimization problem using scipy.optimize.minimize that consists of the following two levels, with (1) the upper-level problem subject to (2) the lower-level problem:

(1) minimize a linear objective (upper level), subject to
(2) minimize a quadratic objective (lower level)

I know how to write a single quadratic program as a typical scipy.optimize.minimize routine (step 2 by itself), but how can I also add an upper-level minimization problem with its own objective function? Can scipy handle a minimization problem that is itself subject to a lower minimization problem? A rough outline of the code using minimize and linprog would help.

I don't think scipy's solvers are well suited for this.
A standard approach for bilevel problems is to form the KKT conditions of the inner problem and add them as constraints to the outer problem. The complementarity conditions are what make the resulting single-level problem hard: they are nonconvex, so you need a global NLP solver or a MIP solver to solve it to proven optimality. Here are some formulations for linear bilevel problems. If the inner problem is a (convex) QP, things become a little more difficult, but the same idea can be applied (form the KKT conditions of the inner problem).
For more information see Bard, Practical Bilevel Optimization: Algorithms And Applications, Springer, 1998.
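To make this concrete, here is a minimal sketch of the KKT single-level reformulation, solved with scipy.optimize.minimize. All of the data (c, d, Q, A, b), the shapes, and the box bounds on x are made up for illustration; the inner problem is min_y 0.5*y@Q@y + (A@x + b)@y subject to y >= 0, with Q positive definite so its KKT conditions are necessary and sufficient. Note that linprog cannot be used directly here, because the complementarity terms make the single-level problem nonlinear, and SLSQP is only a local solver, so this yields at best a local solution; proven optimality needs a global NLP or MIP solver as described above.

import numpy as np
from scipy.optimize import minimize

# Hypothetical data, purely for illustration
nx, ny = 2, 2
c = np.array([1.0, -1.0])      # upper-level cost on x
d = np.array([2.0, 1.0])       # upper-level cost on y
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])     # positive definite -> inner QP is convex
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([-1.0, -1.0])

# Single-level variables: z = (x, y, mu), mu = multipliers of y >= 0
def split(z):
    return z[:nx], z[nx:nx + ny], z[nx + ny:]

def outer_obj(z):
    x, y, _ = split(z)
    return c @ x + d @ y

def stationarity(z):           # KKT: Q y + A x + b - mu = 0
    x, y, mu = split(z)
    return Q @ y + A @ x + b - mu

def complementarity(z):        # KKT: mu_i * y_i = 0 (the nonconvex part)
    _, y, mu = split(z)
    return mu * y

cons = [{'type': 'eq', 'fun': stationarity},
        {'type': 'eq', 'fun': complementarity}]
bounds = [(0, 1)] * nx + [(0, None)] * ny + [(0, None)] * ny  # y, mu >= 0

z0 = np.full(nx + 2 * ny, 0.5)
res = minimize(outer_obj, z0, method='SLSQP', bounds=bounds, constraints=cons)
x_opt, y_opt, _ = split(res.x)
print(res.success, x_opt, y_opt)

In practice the equalities mu_i * y_i = 0 are often relaxed to mu_i * y_i <= eps for a small eps, since exact complementarity violates the usual constraint qualifications and makes local solvers struggle.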

Related

How to efficiently formalize an optimization problem for python

I have an optimization problem that I could set up either as a rather complex univariate problem with a non-linear objective function, or as a multivariate problem with a linear objective function.
Is it possible to generalize, from a programming perspective, whether one is more efficient than the other? I.e. even without going into the details of the functions and constraints, are solvers more efficient with functionally complex univariate problems than with simple multivariate problems, or vice versa?
I tried to google the problem, but the results I read all looked at examples that didn't even require numerical solving (e.g. this).
Once I've figured out the best approach I'll implement it in Python.

Non convex quadratic optimization solvers in python

Let me start by saying that I am by no means an expert in optimization, so any suggestion would be greatly appreciated. I have a non-convex quadratic optimization problem for which I need a solver. I am used to working with the modelling language CVXPY and some of its default solvers (ECOS, CVXOPT, ...), so something similar to it would be great. I cannot use CVXPY on this problem because it is only suitable for convex optimization problems.
After googling, I saw some people recommending Pyomo, saying it can deal with non-convex problems, but my problem requires matrix algebra, and as far as I know Pyomo does not include matrix algebra capabilities. So I need a non-convex modelling language, or a solver for non-convex quadratic problems.
The type of problem I am actually interested in solving has the following structure:
where w is the vector variable to be optimized, X is a known data matrix and t is a prespecified parameter value.

Solving first-order ODE, which contains another ODE (odeint / solve_ivp in Python)

I'm trying to set up a fast numerical solver in Python for a differential problem of the form:
where r is some constant.
I want to integrate A over some time period t of interest. However, this is complicated by the fact that the dA/dt equation includes another variable, B, which is itself described by an ODE, dB/dt. B is actually a vector, but I've simplified the expression to try to highlight my problem more clearly.
I currently have a solution using a manual Euler method: i.e. compute dB/dt, then step B = B_previous + dB/dt * dt with a fixed time step size dt. However, this is slow and unreliable. I imagine it would be far better to use the built-in ODE solvers in SciPy, but I'm not sure this is possible given the coupled nature of the problem I'm trying to solve?
Is this possible using SciPy's odeint or solve_ivp? If so, can anyone suggest any pointers please! Thanks.
What you have is a coupled system of differential equations, which is standard to solve using Runge-Kutta, Euler, and many other methods. You can use this example to guide you in writing your Python code:
https://scipy-cookbook.readthedocs.io/items/CoupledSpringMassSystem.html
Keep in mind that not all equations can be solved with odeint. If your ODE is "stiff" then you will have to choose your algorithm carefully. Stiffness is not precisely defined, but stiff systems usually arise when you have large or non-integral powers of the dependent variable in your ODE.
The first step in solving a coupled ODE, though, is to use the standard methods. If they don't work, then look into something else.
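As a concrete illustration, here is a minimal solve_ivp sketch for a coupled system of this shape. The right-hand sides in the question were shown as an image, so the particular forms of dA/dt and dB/dt below are invented purely to show how the scalar A and the vector B are packed into one state vector:

import numpy as np
from scipy.integrate import solve_ivp

r = 0.5  # the constant from the question (value made up)

def rhs(t, y):
    A, B = y[0], y[1:]            # state = [A, B_1, ..., B_n]
    dA = -r * A + B.sum()         # dA/dt depends on B (illustrative form)
    dB = -B                       # dB/dt is its own ODE (illustrative form)
    return np.concatenate(([dA], dB))

y0 = np.concatenate(([1.0], [0.5, 0.25]))      # initial A and B
sol = solve_ivp(rhs, (0.0, 10.0), y0, method='LSODA')
print(sol.y[0, -1])                            # A at the final time

LSODA switches automatically between stiff and non-stiff methods, which is a reasonable default when you don't yet know whether the system is stiff.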

Algorithm suggestions for solving Euler-Lagrange equations numerically

For my coursework I've been told to write an algorithm in Python to solve the Euler-Lagrange equations with Dirichlet boundary conditions.
For reasons I don't understand, we have so far only studied finite difference methods (very briefly) in lectures. I was wondering if I should start with that approach, or would you recommend another method that is not too complicated but would be more efficient?
Thanks a lot!
Shooting methods, when they work, are fast and accurate. There are some problems for which the BVP is well-posed but the IVP used in the shooting method is not (e.g. the IVP has infinite solutions for some values of z).
That's chapter 13, page 13 of your lecture notes.
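For reference, here is a minimal shooting sketch for a second-order BVP of the kind the Euler-Lagrange equation produces, y'' = f(x, y, y') with Dirichlet conditions y(a) = alpha and y(b) = beta. The particular f, interval, and root bracket are invented for illustration, and, per the caveat in the quote, the root-find can fail when the IVP is ill-posed for some slopes z:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

a, b, alpha, beta = 0.0, 1.0, 0.0, 1.0

def f(x, y, yp):
    return -y                     # e.g. y'' = -y (placeholder equation)

def shoot(z):
    # Integrate the IVP with initial slope z; return the miss at x = b
    sol = solve_ivp(lambda x, s: [s[1], f(x, s[0], s[1])], (a, b), [alpha, z])
    return sol.y[0, -1] - beta

z_star = brentq(shoot, -10.0, 10.0)   # bracket chosen by inspection
print("initial slope:", z_star)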

Integrate stiff ODEs with Python

I'm looking for a good library that will integrate stiff ODEs in Python. The issue is, scipy's odeint gives me good solutions sometimes, but the slightest change in the initial conditions causes it to fall down and give up. The same problem is solved quite happily by MATLAB's stiff solvers (ode15s and ode23s), but I can't use it (even from Python, because none of the Python bindings for the MATLAB C API implement callbacks, and I need to pass a function to the ODE solver). I'm trying PyGSL, but it's horrendously complex. Any suggestions would be greatly appreciated.
EDIT: The specific problem I'm having with PyGSL is choosing the right step function. There are several of them, but no direct analogues to ode15s or ode23s (a BDF formula and a modified Rosenbrock method, if that makes sense). So what is a good step function to choose for a stiff system? I have to solve this system for a really long time to ensure that it reaches steady state, and the GSL solvers either choose a minuscule time step or one that's too large.
If you can solve your problem with Matlab's ode15s, you should be able to solve it with the vode solver of scipy. To simulate ode15s, I use the following settings:
import scipy.integrate

ode15s = scipy.integrate.ode(f)
# vode caps the BDF order at 5 (ode15s is also variable order 1-5)
ode15s.set_integrator('vode', method='bdf', order=5, nsteps=3000)
ode15s.set_initial_value(u0, t0)
and then you can happily solve your problem with ode15s.integrate(t_final). It should work pretty well on a stiff problem.
Python can call C. The industry standard is LSODE in ODEPACK. It is public-domain. You can download the C version. These solvers are extremely tricky, so it's best to use some well-tested code.
Added: Be sure you really have a stiff system, i.e. if the rates (eigenvalues) differ by more than 2 or 3 orders of magnitude. Also, if the system is stiff, but you are only looking for a steady-state solution, these solvers give you the option of solving some of the equations algebraically. Otherwise, a good Runge-Kutta solver like DVERK will be a good, and much simpler, solution.
Added here because it would not fit in a comment: This is from the DLSODE header doc:
C T :INOUT Value of the independent variable. On return it
C will be the current value of t (normally TOUT).
C
C TOUT :IN Next point where output is desired (.NE. T).
Also, yes, Michaelis-Menten kinetics is nonlinear. Aitken acceleration works with it, though. (For a short explanation, first consider the simple case where Y is a scalar. You run the system long enough to get three Y(T) points. Fit an exponential curve through them (simple algebra), then set Y to its asymptote and repeat. Now generalize to Y being a vector: assume the three points lie in a plane; it's OK if they don't.) Besides, unless you have a forcing function (like a constant IV drip), the MM elimination will decay away and the system will approach linearity. Hope that helps.
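A minimal sketch of that vector Aitken step, assuming each component of Y decays roughly exponentially toward its asymptote (the function name and the eps guard are mine):

import numpy as np

def aitken_asymptote(y0, y1, y2, eps=1e-12):
    # Aitken delta-squared extrapolation, applied componentwise: from three
    # successive (equally spaced in time) states, estimate the asymptote.
    denom = y2 - 2.0 * y1 + y0
    safe = np.abs(denom) > eps        # skip components that have converged
    y_inf = y2.copy()
    y_inf[safe] -= (y2 - y1)[safe] ** 2 / denom[safe]
    return y_inf

In the scheme described above you would integrate a short stretch, record three equally spaced states, jump to aitken_asymptote(y0, y1, y2), restart the integration from there, and repeat until the jump becomes negligible.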
PyDSTool wraps the Radau solver, which is an excellent implicit stiff integrator. It has more setup than odeint, but a lot less than PyGSL. The greatest benefit is that your RHS function is specified as a string (typically, although you can build a system using symbolic manipulations) and converted to C, so there are no slow Python callbacks and the whole thing will be very fast.
I am currently studying a bit of ODEs and their solvers, so your question is very interesting to me...
From what I have heard and read, for stiff problems the right way to go is to choose an implicit method as the step function (correct me if I am wrong, I am still learning the mysteries of ODE solvers). I can't cite where I read this because I don't remember, but here is a thread from gsl-help where a similar question was asked.
So, in short, it seems like the bsimp method is worth a shot, although it requires a Jacobian function. If you cannot calculate the Jacobian, I would try rk2imp, rk4imp, or any of the Gear methods.
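To illustrate the point about implicit methods and Jacobians without the GSL API (this uses scipy's Radau as a stand-in for bsimp, not PyGSL itself), here is the classic Robertson stiff benchmark with an analytic Jacobian supplied:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):                    # Robertson chemical kinetics (stiff)
    return np.array([-0.04 * y[0] + 1e4 * y[1] * y[2],
                      0.04 * y[0] - 1e4 * y[1] * y[2] - 3e7 * y[1] ** 2,
                      3e7 * y[1] ** 2])

def jac(t, y):                    # analytic Jacobian of rhs
    return np.array([[-0.04, 1e4 * y[2], 1e4 * y[1]],
                     [0.04, -1e4 * y[2] - 6e7 * y[1], -1e4 * y[1]],
                     [0.0, 6e7 * y[1], 0.0]])

sol = solve_ivp(rhs, (0.0, 1e5), [1.0, 0.0, 0.0],
                method='Radau', jac=jac, rtol=1e-6, atol=1e-10)
print(sol.y[:, -1])

Supplying the Jacobian is what lets the implicit method take large steps on a stiff problem; without it, the solver has to approximate it by finite differences at extra cost.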
