I've been searching the web a little and did not find any solution to this question, so I decided to formally ask it here.
I have a function that has multiple outputs:
f(input1, input2, ..., inputn) = output1, output2, output3
My wish is to minimize output1 while constraining the other outputs, for instance:
output2 >= 0 and output3 <= 7 (*constraints*)
My idea was to have the following program structure:
def function(input1, input2, ..., inputn):
    # very secret part of code
    return (output1, output2, output3)

inputs = [1, 31, 22]
_res = optimize.minimize(function, inputs, method='nelder-mead',
                         options=options, *constraints*)
Now I believe that the answer should be in the documentation of the Scipy minimize function.
The fact is that, from what I've understood, I can only constrain my function inputs and not some of the outputs of my function. Am I missing something?
Here is an example of optimization with constraints:
from scipy.optimize import minimize

def f(xy):
    x, y = xy
    return x**2 + y**2

def constraint1(xy):
    x, y = xy
    return x - 1

def constraint2(xy):
    x, y = xy
    return y - 2

constraints = ({'type': 'ineq', 'fun': constraint1},
               {'type': 'ineq', 'fun': constraint2})

xy_start = [0, 0]
res = minimize(f, xy_start, method='SLSQP', constraints=constraints)
which gives:
fun: 4.999999999999997
jac: array([2., 4.])
message: 'Optimization terminated successfully.'
nfev: 13
nit: 3
njev: 3
status: 0
success: True
x: array([1., 2.])
The idea is that the function you want to minimize must indeed be scalar: the returned value has to be only output1. Additional functions have to be defined for each constraint: constraint1(inputs) -> output2, constraint2(inputs) -> output3, etc.
Maybe you could separate the 'very secret part' of your code into different functions, which will be used both by the objective function and by the constraint functions.
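For instance, here is a minimal sketch of that idea; secret_computation and the toy formulas inside it are placeholders I made up, not your actual code:

from scipy.optimize import minimize

def secret_computation(inputs):
    # placeholder for the 'very secret part of code': computes all three outputs once
    output1 = sum(v**2 for v in inputs)
    output2 = inputs[0] - 1
    output3 = 7 - inputs[1]
    return output1, output2, output3

def objective(inputs):
    return secret_computation(inputs)[0]                              # minimize output1 only

constraints = (
    {'type': 'ineq', 'fun': lambda v: secret_computation(v)[1]},      # output2 >= 0
    {'type': 'ineq', 'fun': lambda v: 7 - secret_computation(v)[2]},  # output3 <= 7
)

res = minimize(objective, [1, 31, 22], method='SLSQP', constraints=constraints)
print(res.x, res.fun)

Note that in scipy.optimize.minimize only methods such as SLSQP and COBYLA honour constraints; Nelder-Mead ignores them, which is why the sketch uses SLSQP.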
I am trying to solve a couple of minimization problems using Python, but the setup with constraints is difficult for me to understand. I have:
minimize: x+y+2z^2
subject to: x = 1 and x^2+y^2 = 1
This is obviously very easy, and I know the solution is x=1, y=0, z=0. I tried to use scipy.optimize.minimize with L-BFGS-B but had issues.
I also have:
minimize: 2x1^2+x2^2
subject to: x1+x2=1
I need to use a gradient-based optimizer, so I chose COBYLA from scipy.optimize, but had issues using an equality constraint, as it only takes inequality constraints. The code for this is:
from scipy.optimize import minimize

def objective(x):
    x1 = x[0]
    x2 = x[1]
    return 2*(x1**2) + x2

def constraint1(x):
    return x[0] + x[1] - 1

# Try an initial condition of x1 = 0.3 and x2 = 0.7
# (our initial condition already satisfies the constraint)
x0 = [0.3, 0.7]
print(objective(x0))

xnew = [0.25, 0.75]
print(objective(xnew))

# Since we have already calculated on paper, we know that x1 and x2 fall between 0 and 1,
# so we can set the bounds for both variables as being between 0 and 1
b = (0, 1)
bounds = (b, b)

# Let's make note of the type of constraint we have for our optimizer
con1 = {'type': 'eq', 'fun': constraint1}
cons = [con1]

sol_gradient = minimize(objective, x0, method='COBYLA', bounds=bounds, constraints=cons)
Then I get an error about using equality constraints with this optimizer.
A few things:
Your objective function does not match the description you have provided. Should it be 2*(x1**2) + x2**2?
From the scipy.optimize.minimize docs you can see that COBYLA does not support eq as a constraint type. From the page:
Note that COBYLA only supports inequality constraints.
Since you said you want to use a gradient-based optimizer, one option could be to use the Sequential Least Squares Programming (SLSQP) optimizer.
Below is the code replacing 'COBYLA' with 'SLSQP' and changing the objective function according to point 1:
from scipy.optimize import minimize

def objective(x):
    x1 = x[0]
    x2 = x[1]
    return 2*(x1**2) + x2**2

def constraint1(x):
    return x[0] + x[1] - 1

# Try an initial condition of x1 = 0.3 and x2 = 0.7
# (our initial condition already satisfies the constraint)
x0 = [0.3, 0.7]
print(objective(x0))

xnew = [0.25, 0.75]
print(objective(xnew))

# Since we have already calculated on paper, we know that x1 and x2 fall between 0 and 1,
# so we can set the bounds for both variables as being between 0 and 1
b = (0, 1)
bounds = (b, b)

# Let's make note of the type of constraint we have for our optimizer
con1 = {'type': 'eq', 'fun': constraint1}
cons = [con1]

sol_gradient = minimize(objective, x0, method='SLSQP', bounds=bounds, constraints=cons)
print(sol_gradient)
Which gives the final answer as:
fun: 0.6666666666666665
jac: array([1.33333336, 1.33333335])
message: 'Optimization terminated successfully'
nfev: 7
nit: 2
njev: 2
status: 0
success: True
x: array([0.33333333, 0.66666667])
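(As an aside, not part of the answer above: if staying with COBYLA were really required, the equality x1 + x2 = 1 could in principle be approximated by a pair of opposing inequalities. A hedged sketch, reusing objective and minimize from the code above; COBYLA may still need tolerance tuning to satisfy both sides tightly:)

cons_cobyla = [{'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 1},   # x1 + x2 >= 1
               {'type': 'ineq', 'fun': lambda x: 1 - x[0] - x[1]}]   # x1 + x2 <= 1
sol_cobyla = minimize(objective, [0.3, 0.7], method='COBYLA', constraints=cons_cobyla)
print(sol_cobyla)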
I want to optimise my portfolio using Markowitz theory (risk minimisation by the Markowitz method for a given income = 15%) and scipy.optimize.minimize.
I have a risk function:
def objective(x):
    x1 = x[0]; x2 = x[1]; x3 = x[2]; x4 = x[3]
    return 1547.87020*x1**2 + 125.26258*x1*x2 + 1194.3433*x1*x3 + 63.6533*x1*x4 \
         + 27.3176649*x2**2 + 163.28848*x2*x3 + 4.829816*x2*x4 \
         + 392.11819*x3**2 + 56.50518*x3*x4 \
         + 34.484063*x4**2
The sum of the parts of the stocks (in %) must equal 1:
def constraint1(x):
    return (x[0] + x[1] + x[2] + x[3] - 1.0)
And the income function with its restriction:
def constraint2(x):
    return (-1.37458*x[0] + 0.92042*x[1] + 5.06189*x[2] + 0.35974*x[3] - 15.0)
And I test it using:
x0 = [0, 1, 1, 0]  # initial value
b = (0.0, 1.0)
bnds = (b, b, b, b)
con1 = {'type': 'ineq', 'fun': constraint1}
con2 = {'type': 'eq', 'fun': constraint2}
cons = [con1, con2]
sol = minimize(objective, x0, method='SLSQP',
               bounds=bnds, constraints=cons)
And my result is:
fun: 678.5433939
jac: array([1383.25920868, 222.75363159, 1004.03005219, 130.30312347])
message: 'Positive directional derivative for linesearch'
nfev: 216
nit: 20
njev: 16
status: 8
success: False
x: array([0., 1., 1., 1.])
But how? The sum of the parts of the portfolio can't be more than 1 (now the parts of stock2 = stock3 = stock4 = 100%). That's constraint1. Where is the problem?
The output says "success: False"
So it is telling you that it failed to find a solution to the problem.
Also, why did you put
con1={'type':'ineq','fun':constraint1}
Don't you want
con1={'type':'eq','fun':constraint1}
I got success using method='BFGS'
Your code is returning values that do not respect your constraint because of a wrong definition of the first constraint: an 'ineq' constraint of the form a - b >= 0 means a >= b, so the order in the inequality matters and in your case the 1 has to come first. On top of that, your x0 must also respect your constraints, and sum([0, 1, 1, 0]) = 2 > 1.
I slightly improved your code and fixed the aforementioned issues, but I still think that you need to review your second constraint:
import numpy as np
from scipy.optimize import minimize

def objective(x):
    x1, x2, x3, x4 = x[0], x[1], x[2], x[3]
    coefficients = np.array([1547.87020, 125.26258, 1194.3433, 63.6533, 27.3176649,
                             163.28848, 4.829816, 392.11819, 56.50518, 34.484063])
    xs = np.array([x1**2, x1*x2, x1*x3, x1*x4, x2**2, x2*x3, x2*x4, x3**2, x3*x4, x4**2])
    return np.dot(xs, coefficients)

const1 = lambda x: 1 - sum(x)
const2 = lambda x: np.dot(np.array([-1.37458, 0.92042, 5.06189, 0.35974]), x) - 15.0

x0 = [0, 0, 0, 0]  # initial value
b = (0.0, 1.0)
bnds = (b, b, b, b)
cons = [{'type': 'ineq', 'fun': const1}, {'type': 'eq', 'fun': const2}]

# minimize
sol = minimize(objective,
               x0,
               method='SLSQP',
               bounds=bnds,
               constraints=cons)
print(sol)
output:
fun: 392.1181900000138
jac: array([1194.34332275, 163.28847885, 784.23638535, 56.50518036])
message: 'Positive directional derivative for linesearch'
nfev: 92
nit: 11
njev: 7
status: 8
success: False
x: array([0.00000000e+00, 5.56638069e-14, 1.00000000e+00, 8.29371293e-14])
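(One likely reason for the failure, which is my reading rather than part of the answer above: with 0 <= x_i <= 1 and sum(x) <= 1, the income expression can never reach 15, so the equality constraint is infeasible and SLSQP cannot succeed. A quick check:)

import numpy as np

# the largest achievable income puts all weight on the single best coefficient
r = np.array([-1.37458, 0.92042, 5.06189, 0.35974])
print(r.max())   # ~5.06, far below the required 15.0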
I have a least squares minimization problem subject to inequality constraints which I am trying to solve using scipy.optimize.minimize. It seems that there are two options for inequality constraints: COBYLA and SLSQP.
I first tried SLSQP since it allows for explicit partial derivatives of the function to be minimized. Depending on the scaling of the problem, it fails with the error:
Positive directional derivative for linesearch (Exit mode 8)
whenever interval or more general inequality constraints are imposed.
This has been observed previously e.g., here. Manual scaling of the function to be minimized (along with the associated partial derivatives) seems to get rid of the problem, but I cannot achieve the same effect by changing ftol in the options.
Overall, this whole thing is causing me to have doubts about the routine working in a robust manner. Here's a simplified example:
import numpy as np
import scipy.optimize as sp_optimize

def cost(x, A, y):
    e = y - A.dot(x)
    rss = np.sum(e ** 2)
    return rss

def cost_deriv(x, A, y):
    e = y - A.dot(x)
    deriv0 = -2 * e.dot(A[:, 0])
    deriv1 = -2 * e.dot(A[:, 1])
    deriv = np.array([deriv0, deriv1])
    return deriv

A = np.ones((10, 2)); A[:, 0] = np.linspace(-5, 5, 10)
x_true = np.array([2, 2/20])
y = A.dot(x_true)
x_guess = x_true / 2

prm_bounds = ((0, 3), (0, 1))

cons_SLSQP = ({'type': 'ineq', 'fun': lambda x: np.array([x[0] - x[1]]),
               'jac': lambda x: np.array([1.0, -1.0])})

# works correctly
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv,
                                     bounds=prm_bounds, method='SLSQP',
                                     constraints=cons_SLSQP, options={'disp': True})
print(min_res_SLSQP)

# fails
A = 100 * A
y = A.dot(x_true)
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv,
                                     bounds=prm_bounds, method='SLSQP',
                                     constraints=cons_SLSQP, options={'disp': True})
print(min_res_SLSQP)

# works if bounds and inequality constraints are removed
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv,
                                     method='SLSQP', options={'disp': True})
print(min_res_SLSQP)
How should ftol be set to avoid failure? More generally, can a similar problem arise with COBYLA? Is COBYLA a better choice for this type of inequality constrained least squares optimization problem?
Using a square root in the cost function was found to improve performance. However, for a non-linear re-parametrization of the problem (simpler, but closer to what I need to do in practice), it fails again. Here are the details:
import numpy as np
import scipy.optimize as sp_optimize

def cost(x, y, g):
    e = ((y - x[1]) / x[0]) - g
    rss = np.sqrt(np.sum(e ** 2))
    return rss

def cost_deriv(x, y, g):
    e = ((y - x[1]) / x[0]) - g
    factor = 0.5 / np.sqrt(e.dot(e))
    deriv0 = -2 * factor * e.dot(y - x[1]) / (x[0]**2)
    deriv1 = -2 * factor * np.sum(e) / x[0]
    deriv = np.array([deriv0, deriv1])
    return deriv

x_true = np.array([1/300, .1])
N = 20
t = 20 * np.arange(N)
g = 100 * np.cos(2 * np.pi * 1e-3 * (t - t[-1] / 2))
y = g * x_true[0] + x_true[1]

x_guess = x_true / 2
prm_bounds = ((1e-4, 1e-2), (0, .4))

# check derivatives
delta = 1e-9
C0 = cost(x_guess, y, g)
C1 = cost(x_guess + np.array([delta, 0]), y, g)
approx_deriv0 = (C1 - C0) / delta
C1 = cost(x_guess + np.array([0, delta]), y, g)
approx_deriv1 = (C1 - C0) / delta
approx_deriv = np.array([approx_deriv0, approx_deriv1])
deriv = cost_deriv(x_guess, y, g)

# fails
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(y, g), jac=cost_deriv,
                                     bounds=prm_bounds, method='SLSQP', options={'disp': True})
print(min_res_SLSQP)
Instead of minimizing np.sum(e ** 2), minimize np.sqrt(np.sum(e ** 2)), or better (in terms of calculation): np.linalg.norm(e)!
This modification:
does not change your solution in regards to x
will need post-processing if the original objective is needed (probably not)
is much more robust
With this change, all cases work, even using numerical differentiation (I was too lazy to modify the gradient, which needs to reflect this!).
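(For reference, a hedged sketch of what the norm-based objective and a matching analytic gradient could look like for the linear example from the question, with A and y passed in as before; for f(x) = ||y - Ax|| the gradient is -A^T e / ||e||. This is my addition, not part of the runs below, which used numerical differentiation:)

import numpy as np

def cost_norm(x, A, y):
    # square root of the original residual sum of squares
    return np.linalg.norm(y - A.dot(x))

def cost_norm_deriv(x, A, y):
    e = y - A.dot(x)
    nrm = np.linalg.norm(e)
    # gradient of ||e||; guard against an exact fit where the norm is zero
    return -A.T.dot(e) / nrm if nrm > 0 else np.zeros_like(x)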
Example output (the number of function evaluations gives away the numerical differentiation):
Optimization terminated successfully. (Exit mode 0)
Current function value: 3.815547437029837e-06
Iterations: 16
Function evaluations: 88
Gradient evaluations: 16
fun: 3.815547437029837e-06
jac: array([-6.09663382, -2.48862544])
message: 'Optimization terminated successfully.'
nfev: 88
nit: 16
njev: 16
status: 0
success: True
x: array([ 2.00000037, 0.10000018])
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.0002354577991007501
Iterations: 23
Function evaluations: 114
Gradient evaluations: 23
fun: 0.0002354577991007501
jac: array([ 435.97259208, 288.7483819 ])
message: 'Optimization terminated successfully.'
nfev: 114
nit: 23
njev: 23
status: 0
success: True
x: array([ 1.99999977, 0.10000014])
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.0003392807206384532
Iterations: 21
Function evaluations: 112
Gradient evaluations: 21
fun: 0.0003392807206384532
jac: array([ 996.57340243, 51.19298764])
message: 'Optimization terminated successfully.'
nfev: 112
nit: 21
njev: 21
status: 0
success: True
x: array([ 2.00000008, 0.10000104])
While there are probably some problems with SLSQP, it's still one of the most tested and robust codes given its broad application spectrum!
I would also expect SLSQP to be much better here compared to COBYLA, as the latter is based heavily on linearizations. (But just take that as a guess; it's easy to try given the minimize interface!)
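(A sketch of what that quick try could look like, reusing cost, x_guess, A and y from the question's example; COBYLA does not use the analytic gradient, and the bounds are re-expressed as inequality constraints since older SciPy versions do not accept bounds for COBYLA:)

cons_COBYLA = (
    {'type': 'ineq', 'fun': lambda x: x[0] - x[1]},   # x[0] >= x[1]
    {'type': 'ineq', 'fun': lambda x: x[0]},          # 0 <= x[0]
    {'type': 'ineq', 'fun': lambda x: 3 - x[0]},      # x[0] <= 3
    {'type': 'ineq', 'fun': lambda x: x[1]},          # 0 <= x[1]
    {'type': 'ineq', 'fun': lambda x: 1 - x[1]},      # x[1] <= 1
)
min_res_COBYLA = sp_optimize.minimize(cost, x_guess, args=(A, y),
                                      method='COBYLA', constraints=cons_COBYLA)
print(min_res_COBYLA)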
Alternative
In general, an interior-point based solver for convex Quadratic Programming will be the best approach here. But for this, you need to leave scipy. (Or maybe an SOCP solver would be better; I'm not sure.)
cvxpy brings a nice modelling system and a good open-source solver (ECOS; although it is technically a conic solver, and therefore more general and less robust, it should still beat SLSQP).
Using cvxpy and ECOS, this looks like:
import numpy as np
import cvxpy as cvx

""" Problem data """
A = np.ones((10, 2)); A[:, 0] = np.linspace(-5, 5, 10)
x_true = np.array([2, 2/20])
y = A.dot(x_true)
x_guess = x_true / 2
prm_bounds = ((0, 3), (0, 1))

# problematic case
A = 100 * A
y = A.dot(x_true)

""" Solve """
x = cvx.Variable(len(x_true))

constraints = [x[0] >= x[1]]
for ind, (lb, ub) in enumerate(prm_bounds):  # inefficient -> a matrix-based expression would be better!
    constraints.append(x[ind] >= lb)
    constraints.append(x[ind] <= ub)

objective = cvx.Minimize(cvx.norm(A*x - y))
problem = cvx.Problem(objective, constraints)
problem.solve(solver=cvx.ECOS, verbose=False)
print(problem.status)
print(problem.value)
print(x.value.T)
# optimal
# -6.67593652593801e-10
# [[ 2. 0.1]]
Consider the following (convex) optimization problem:
minimize 0.5 * y.T * y
s.t. A*x - b == y
where the optimization (vector) variables are x and y and A, b are a matrix and vector, respectively, of appropriate dimensions.
The code below finds a solution easily using the SLSQP method from Scipy:
import numpy as np
from scipy.optimize import minimize

# problem dimensions:
n = 10  # arbitrary integer set by user
m = 2 * n

# generate parameters A, b:
np.random.seed(123)  # for reproducibility of results
A = np.random.randn(m, n)
b = np.random.randn(m)

# objective function:
def obj(z):
    vy = z[n:]
    return 0.5 * vy.dot(vy)

# constraint function:
def cons(z):
    vx = z[:n]
    vy = z[n:]
    return A.dot(vx) - b - vy

# constraints input for SLSQP:
cons = ({'type': 'eq', 'fun': cons})

# generate a random initial estimate:
z0 = np.random.randn(n + m)

sol = minimize(obj, x0=z0, constraints=cons, method='SLSQP', options={'disp': True})
Optimization terminated successfully. (Exit mode 0)
Current function value: 2.12236220865
Iterations: 6
Function evaluations: 192
Gradient evaluations: 6
Note that the constraint function is a convenient 'array-output' function.
Now, instead of an array-output function for the constraint, one could in principle use an equivalent set of 'scalar-output' constraint functions (actually, the scipy.optimize documentation discusses only this type of constraint functions as input to minimize).
Here is the equivalent constraint set followed by the output of minimize (same A, b, and initial value as the above listing):
# this is the i-th element of cons(z):
def cons_i(z, i):
    vx = z[:n]
    vy = z[n:]
    return A[i].dot(vx) - b[i] - vy[i]

# list of scalar-output constraints input for SLSQP:
cons_per_i = [{'type': 'eq', 'fun': lambda z: cons_i(z, i)} for i in np.arange(m)]

sol2 = minimize(obj, x0=z0, constraints=cons_per_i, method='SLSQP', options={'disp': True})
Singular matrix C in LSQ subproblem (Exit mode 6)
Current function value: 6.87999270692
Iterations: 1
Function evaluations: 32
Gradient evaluations: 1
Evidently, the algorithm fails (the returned objective value is actually the objective value for the given initialization), which I find a bit weird. Note that running [cons_per_i[i]['fun'](sol.x) for i in np.arange(m)] shows that sol.x, obtained using the array-output constraint formulation, satisfies all scalar-output constraints of cons_per_i as expected (within numerical tolerance).
I would appreciate if anyone has some explanation for this issue.
You've run into the "late binding closures" gotcha. All the calls to cons_i are being made with the second argument equal to 19.
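(A minimal, self-contained illustration of the gotcha itself, separate from the original problem:)

# Each lambda looks up i when it is *called*, not when it is defined,
# so all of them see the loop variable's final value.
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])     # [2, 2, 2], not [0, 1, 2]

# Binding i as a default argument captures its value at definition time:
funcs = [lambda i=i: i for i in range(3)]
print([f() for f in funcs])     # [0, 1, 2]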
A fix is to use the args element of the dictionary that defines each constraint instead of the lambda-function closures:
cons_per_i = [{'type':'eq', 'fun': cons_i, 'args': (i,)} for i in np.arange(m)]
With this, the minimization works:
In [417]: sol2 = minimize(obj, x0 = z0, constraints = cons_per_i, method = 'SLSQP', options={'disp': True})
Optimization terminated successfully. (Exit mode 0)
Current function value: 2.1223622086
Iterations: 6
Function evaluations: 192
Gradient evaluations: 6
You could also use the suggestion made in the linked article, which is to use a lambda expression with a second argument that has the desired default value:
cons_per_i = [{'type':'eq', 'fun': lambda z, i=i: cons_i(z, i)} for i in np.arange(m)]
I am trying to maximize the following function using Python's scipy.optimize. However, after lots of trying, it doesn't seem to work. The function and my code are pasted below. Thanks for helping!
Problem
Maximize [sum (x_i / y_i)**gamma]**(1/gamma)
subject to the constraint sum x_i = 1; x_i is in the interval (0,1).
x is a vector of choice variables; y is a vector of parameters; gamma is a parameter. The xs must sum to one. And each x must be in the interval (0,1).
Code
import math
import numpy as np
from scipy.optimize import minimize

def objective_function(x, y):
    sum_contributions = 0
    gamma = 0.2
    for count in range(len(x)):
        sum_contributions += (x[count] / y[count]) ** gamma
    value = math.pow(sum_contributions, 1 / gamma)
    return -value

cons = ({'type': 'eq', 'fun': lambda x: np.array([sum(x) - 1])})
y = [0.5, 0.3, 0.2]
initial_x = [0.2, 0.3, 0.5]
opt = minimize(objective_function, initial_x, args=(y,), method='SLSQP',
               constraints=cons, bounds=[(0, 1)] * len(initial_x))
Sometimes a numerical optimizer doesn't work for whatever reason. We can parametrize the problem slightly differently and it will just work (and might even work faster).
For example, for bounds of (0, 1), we can have a transform function such that values in (-inf, +inf), after being transformed, will end up in (0, 1).
We can do a similar trick with the equality constraint. For example, we can reduce the dimension from 3 to 2, since the last element of x has to be 1 minus the sum of the others.
If it still won't work, we can switch to an optimizer that does not require derivative information, such as Nelder-Mead.
And there is also the Lagrange multiplier approach.
In [111]:
def trans_x(x):
    x1 = x**2 / (1 + x**2)
    z = np.hstack((x1, 1 - sum(x1)))
    return z

def F(x, y, gamma=0.2):
    z = trans_x(x)
    return -(((z/y)**gamma).sum())**(1./gamma)
In [112]:
opt = minimize(F, np.array([0., 1.]), args=(np.array(y),),
method='Nelder-Mead')
opt
Out[112]:
status: 0
nfev: 96
success: True
fun: -265.27701747828007
x: array([ 0.6463264, 0.7094782])
message: 'Optimization terminated successfully.'
nit: 52
The result is:
In [113]:
trans_x(opt.x)
Out[113]:
array([ 0.29465097, 0.33482303, 0.37052601])
And we can visualize it with:
In [114]:
import matplotlib.pyplot as plt

x1 = np.linspace(0, 1)
y1 = np.linspace(0, 1)
X, Y = np.meshgrid(x1, y1)
Z = np.array([F(item, y) for item
              in np.vstack((X.ravel(), Y.ravel())).T]).reshape((len(x1), -1), order='F')
Z = np.fliplr(Z)
Z = np.flipud(Z)
plt.contourf(X, Y, Z, 50)
plt.colorbar()
Even though this question is a bit dated, I wanted to add an alternative solution which might be useful for others stumbling upon it in the future.
It turns out your problem is solvable analytically. You can start by writing down the Lagrangian of the (equality-constrained) optimization problem:
L = \sum_i (x_i/y_i)^\gamma - \lambda (\sum x_i - 1)
The optimal solution is found by setting the first derivative of this Lagrangian to zero:
0 = \partial L / \partial x_i = \gamma x_i^{\gamma-1} / y_i^\gamma - \lambda
=> x_i \propto y_i^{\gamma/(\gamma - 1)}
Using this insight, the optimization problem can be solved simply and efficiently:
In [4]:
def analytical(y, gamma=0.2):
    y = np.asarray(y, dtype=float)   # the y from the question is a plain list
    x = y**(gamma/(gamma-1.0))
    x /= np.sum(x)
    return x

xanalytical = analytical(y)
xanalytical, objective_function(xanalytical, y)
Out [4]:
(array([ 0.29466774, 0.33480719, 0.37052507]), -265.27701765929692)
CT Zhu's solution is elegant but it might violate the positivity constraint on the third coordinate. For gamma = 0.2 this does not seem to be a problem in practice, but for different gammas you easily run into trouble:
In [5]:
y = [0.2, 0.1, 0.8]
opt = minimize(F, np.array([0., 1.]), args=(np.array(y), 2.0),
method='Nelder-Mead')
trans_x(opt.x), opt.fun
Out [5]:
(array([ 1., 1., -1.]), -11.249999999999998)
For other optimization problems with the same probability simplex constraints as your problem, but for which there is no analytical solution, it might be worth looking into projected gradient methods or similar. These methods leverage the fact that there is a fast algorithm for projecting an arbitrary point onto this set; see https://en.wikipedia.org/wiki/Simplex#Projection_onto_the_standard_simplex.
(To see the complete code and a better rendering of the equations take a look at the Jupyter notebook http://nbviewer.jupyter.org/github/andim/pysnippets/blob/master/optimization-simplex-constraints.ipynb)
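(To make that concrete, here is a short sketch of the standard sort-based Euclidean projection onto the probability simplex described at the Wikipedia link above; it is an illustration I am adding, not code from the notebook:)

import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto {x : x >= 0, sum(x) = 1}
    v = np.asarray(v, dtype=float)
    n = v.size
    u = np.sort(v)[::-1]                 # sort in descending order
    css = np.cumsum(u)
    # largest index rho with u[rho] + (1 - css[rho]) / (rho + 1) > 0
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

print(project_to_simplex([0.5, 1.2, -0.3]))   # -> [0.15, 0.85, 0.], non-negative and sums to 1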