minimise function with constraints on non-optimisation variables - python

I need to optimise a function subject to a constraint on a variable that is itself calculated by solving a system of equations.
The optimisation parameters are the input variables to the system of equations, and one of the calculated variables has a constraint.
Here is an extremely simplified example:
def opt(x):
    x1, x2, x3 = x
    z1 = x1 + x2 + x3
    z2 = z1**2
    ...
    z100 = f(x1, x2, x3, z1, ..., z99)
    return some_objective_function

minimise opt(x)
s.t. z100 < a
I am familiar with scipy.optimize.minimize, but I cannot set a constraint on z100, and it is extremely difficult to derive a closed-form expression for z100 in terms of x1, x2, x3 alone.

This is very broad, but some remarks:
"it is extremely difficult to derive a closed-form expression for z100 in terms of x1, x2, x3 alone"
Where is the problem? Just use the args argument in the call to minimize and you can pass in whatever objects you need.
"I am familiar with scipy.optimize.minimize, but I cannot set a constraint on z100"
This somewhat implies that z100 depends on the optimization variables (which makes it a constraint on optimization variables, contrary to the title; otherwise there would be no reason to constrain it at all, since we only modify the optimization variables and no a-priori constraint logic would be affected).
Just introduce an auxiliary variable y0 and the constraints:
y0 < a
y0 == f(...)
But: this might conflict with the solver's smoothness assumptions (depending on f, of course), so you should probably think your problem over again (and there is nothing more we can do here, as the structure of the problem is not posted).
Also: equality constraints can sometimes be a hassle in terms of numerical trouble.
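A minimal sketch of that auxiliary-variable idea with scipy.optimize.minimize and SLSQP; compute_z100, the objective, and a are all stand-ins invented for illustration:

import numpy as np
from scipy.optimize import minimize

a = 10.0  # upper bound on the computed variable

def compute_z100(x):
    """Stand-in for the long chain z1, z2, ..., z100."""
    x1, x2, x3 = x
    z1 = x1 + x2 + x3
    z2 = z1**2
    return z2 + x1 * x3  # pretend this is z100

def objective(v):
    return np.sum(v[:3]**2)  # v = [x1, x2, x3, y0]

constraints = [
    # equality: the auxiliary variable y0 must equal the computed z100
    {"type": "eq", "fun": lambda v: v[3] - compute_z100(v[:3])},
    # inequality (scipy's convention is fun(v) >= 0): a - y0 >= 0
    {"type": "ineq", "fun": lambda v: a - v[3]},
]

v0 = np.array([1.0, 1.0, 1.0, 0.0])  # initial guess for [x1, x2, x3, y0]
res = minimize(objective, v0, method="SLSQP", constraints=constraints)
print(res.x)

With scipy specifically you could even skip y0 and put compute_z100(x) straight into the inequality constraint; the auxiliary variable mainly matters for solvers that expect constraints in a standard form.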

Related

How to create interaction variable between only 1 variable with all other variables

When using the SciKit learn PolynomialFeatures package it is possible to do the following:
Take features: x1, x2, x3
Create interaction variables: [x1, x2, x3, x1x2, x2x3, x1x3] using PolynomialFeatures(interaction_only=True)
My problem is that I only want the interaction terms between x1 and all other terms meaning:
[x1, x2, x3, x1x2, x1x3], I do not want x2x3.
Is it possible to do this?
You could just make them yourself, no? PolynomialFeatures doesn't do anything particularly innovative.
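A rough sketch of doing that by hand with numpy (the column order and example data are assumptions):

import numpy as np

X = np.random.rand(5, 3)              # columns: x1, x2, x3
interactions = X[:, [0]] * X[:, 1:]   # x1*x2 and x1*x3, but not x2*x3
X_new = np.hstack([X, interactions])  # [x1, x2, x3, x1*x2, x1*x3]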

Differential equations with coupled derivatives in python

I am trying to solve a set of differential equations using sympy and scipy, but cannot figure out how to bring them into the appropriate form. In brief, I have two coupled second-order differential equations that I can rewrite as a system of four first-order differential equations of the form:
dot(x1) = x2
dot(x2) = f1(x1, x2, x3, x4, dot(x4))
dot(x3) = x4
dot(x4) = f2(x1, x2, x3, x4, dot(x2))
My problem is the coupling of the first derivatives. In Matlab, I can use a time- and state-dependent mass matrix to solve this, but in Python I have only seen approaches with a time-invariant mass matrix. Any advice would be much appreciated.
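Not from the original thread, but one common workaround when the coupled derivatives enter linearly: write the system as M(t, x) * xdot = F(t, x) and solve for xdot inside the right-hand-side callback handed to scipy.integrate.solve_ivp. The coupling terms below are invented for illustration:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    x1, x2, x3, x4 = x
    # State-dependent mass matrix: the off-diagonal entries couple
    # dot(x2) with dot(x4) (toy values, not the asker's equations).
    M = np.array([
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.5 * x1],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.3 * x3, 0.0, 1.0],
    ])
    F = np.array([x2, -x1, x4, -x3])
    return np.linalg.solve(M, F)  # solve M * xdot = F at every step

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.5, 0.0], rtol=1e-8)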

Bounded sympy symbol variables

I would like to solve a multivariate optimization problem by converting it to a system of non-linear equations and solving with sympy's solve function, like so:
xopt = sympy.solve(gradf, x, set=True)
Trouble is, this particular equation has an infinite set of solutions, and calling solve just freezes my computer.
If I could set lower and upper bounds for my symbolic variables, i.e. introduce a set of constraints:
l1 <= x1 <= u1, l2 <= x2 <= u2, ..., ln <= xn <= un
... I could limit the set of solutions to a finite one, but I'm having a hard time finding out how to do this with sympy's API. Can anyone help out?
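As far as I know, sympy.solve has no notion of bounds on the solution set (for a single variable, sympy.solveset does accept a domain such as an Interval). One pragmatic sidestep, sketched here with a placeholder objective, is to return to the optimization view and hand the box constraints to a numerical optimizer:

import numpy as np
from scipy.optimize import minimize

def f(x):
    x1, x2 = x
    return (x1 - 1)**2 + (x2 + 2)**2  # placeholder objective

bounds = [(0.0, 2.0), (-3.0, 0.0)]    # l1 <= x1 <= u1, l2 <= x2 <= u2
res = minimize(f, x0=np.array([0.5, -1.0]), bounds=bounds, method="L-BFGS-B")
print(res.x)  # approximately [1.0, -2.0]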

Multiprocessing nested numerical integrals in python

I'm working with nested numerical integrals in Python where the limits of each layer depend on the next layer out. The overall structure of my code looks like:
import numpy as np
import scipy.integrate as si

def func(x1, x2, x3, x4):
    return x1**2 - x2**3 + x3*x2 - x4*x3**3

def int1():
    """Integrates `int2` over x1."""
    a1, b1 = -1, 3
    def int2(x1):
        """Integrates `int3` over x2 at the given x1."""
        b2 = 1 - np.abs(x1)
        a2 = -np.abs(x1**3)
        def int3(x2):
            a3 = x2
            b3 = -a3
            def int4(x3):
                partial_func = lambda x4: func(x1, x2, x3, x4)
                a4 = 1 + np.abs(x3)
                b4 = -a4
                return si.quad(partial_func, a4, b4)[0]
            return si.quad(int4, a3, b3)[0]
        return si.quad(int3, a2, b2)[0]
    return si.quad(int2, a1, b1)[0]

result = int1()  # -22576720.048151683
In the full version of my code, the integral and the limits are complex and it takes several hours to run, which is inconvenient. Each integral seems like it could be easily parallelized though: it seems like I should be able to use multiprocessing to distribute the integration to multiple CPUs and speed up the run time.
Referring to some other posts on Stack Overflow, I tried the following:
from functools import partial
from multiprocessing import Pool

def testfunc(intfunc, fmin, fmax):
    return si.quad(intfunc, fmin, fmax, epsabs=10**-40)[0]

pool = Pool()
result = pool.map(partial(partial(testfunc, intfunc=int4), fmin=a3), [b3])
But I got an error that the local object can't be pickled.
Another resource I came across was at http://catherineh.github.io/programming/2016/10/04/parallel-integration-for-mere-mortals
But I need a function where I can pass the limits through as inputs as well (hence my use of partials).
Does anyone know how to resolve these issues? Some version of pool.map that could handle multiple inputs would be great, but if there's something wrong with my use of partial, that would be great to find out too.
Thanks in advance and let me know if there's anything here that can be cleared up!
This answer probably isn't satisfactory, but hopefully it'll give some insight into which field the question falls into.
To reiterate, the original problem is to compute the quadruple integral
integrate(
    integrate(
        integrate(
            integrate(
                f(x1, x2, x3, x4),
                [1 + abs(x3), -1 - abs(x3)]
            ),
            [x2, -x2]
        ),
        [-abs(x1**3), 1 - abs(x1)]
    ),
    [-1, 3]
)
Mathematically, one could formulate this as
integrate(f(x1, x2, x3, x4), Omega)
where Omega is a four-dimensional domain defined by the integral limits above. Had the domain been in one, two, or three dimensions, then the answer to your question would be clear:
Discretize your complex domain into lines, triangles, or tetrahedra (those are the simplices in dimensions 1, 2, 3, respectively) (using one of many mesh tools), and then
use numerical quadrature on each of the lines/triangles/tetrahedra (e.g., from here).
Unfortunately, I'm not aware of any tool that discretizes a four-dimensional domain into 4-simplices, nor of quadrature rules for 4-simplices (except perhaps the vertex and midpoint rules). Both, however, would be possible to create in general; in particular, a bunch of quadrature rules should be easy to come up with.
For the sake of completeness, let me mention that there is at least one class of domains for which integration rules exist in any dimension: the hypercube.
Update:
After much testing and restructuring, it seems that the best way to take care of this is not to nest the functions or the definitions, but rather to make use of the args parameter in the scipy.integrate.quad function to pass external variables through to inner integrations.
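A sketch of what that restructuring can look like (my own reconstruction, not the exact production code): each layer becomes a module-level function, and the outer variables are threaded through quad's args tuple. Since nothing is a closure anymore, every function can be pickled, so the outermost layer can be split across a multiprocessing pool:

import numpy as np
import scipy.integrate as si

def func(x4, x3, x2, x1):  # the innermost variable comes first for quad
    return x1**2 - x2**3 + x3*x2 - x4*x3**3

def int4(x3, x2, x1):
    a4 = 1 + np.abs(x3)
    return si.quad(func, a4, -a4, args=(x3, x2, x1))[0]

def int3(x2, x1):
    return si.quad(int4, x2, -x2, args=(x2, x1))[0]

def int2(x1):
    return si.quad(int3, -np.abs(x1**3), 1 - np.abs(x1), args=(x1,))[0]

result = si.quad(int2, -1, 3)[0]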
Many thanks to those who commented!

How to setup a constraint that depends on interim minimized function?

I am trying to set up a constraint which depends on the minimized function value.
The problem I have is of the following nature:
fmin = minimize(d1*x1 + d2*x2 + ... + d5*x5)
where I want to optimize subject to the following constraints:
x1 + x2 + x3 + x4 + x5 = 1
0.003 < x1, ..., x5 < 0.05
d1*x1/fmin = y1
(d2*x2 + d3*x4)/fmin = y2
(d4*x4 + d5*x5)/fmin = y3
Here y1, ..., yn are scalar constants.
The problem I am having is that I don't know how to set up A_ub or A_eq in linprog so that b_ub = y1*fmin for d1*x1, for example.
So somehow I need to define:
d1*x1/fmin = y1
as one of the constraints.
Here the coefficient vector is (d1, ..., dn), and the solution must also satisfy, for example, the constraint d1*x1/minimized(d1*x1 + ... + dn*xn) = y1.
How should I set this up? What kind of optimizer should I use?
I can do this very easily using the Excel solver, but now I want to code it in Python. I am trying scipy's linprog, but I am not sure whether this is a linear programming problem or whether I need a different approach. I cannot think of a way to set up the constraints in linprog for this problem. Can anyone help me?
Assuming that d1, ..., dn are scalar constants too, then for instance the constraint
d1*x1/fmin==y1
can be rewritten as
d1*x1==y1*d1*x1+y1*d2*x2+...+y1*dn*xn
This can be normalized to
(d1-y1*d1)*x1 - y1*d2*x2 - y1*d3*x3 - ... - y1*dn*xn == 0
which can be used as input for a linear solver.
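For instance, with made-up values for d and y1 (and placeholder box bounds), that normalized row can be stacked into A_eq for scipy.optimize.linprog:

import numpy as np
from scipy.optimize import linprog

d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # illustrative coefficients
y1 = 0.1                                  # illustrative target share

# (d1 - y1*d1)*x1 - y1*d2*x2 - ... - y1*dn*xn == 0
row = -y1 * d
row[0] += d[0]

A_eq = np.vstack([np.ones(5), row])  # x1+...+x5 == 1, plus the share row
b_eq = np.array([1.0, 0.0])
bounds = [(0.0, 1.0)] * 5            # placeholder bounds, substitute your own

res = linprog(c=d, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)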
