Lagrange multipliers numerical approach - python

I have the following function that I'm trying to optimize subject to a constraint:
def damage(a, e, cr, cd):
    return 100*(1+a)*(1+e)*(1+cr*cd)

def constraint(a, e, cr, cd):
    return (100/0.466)*(a+e) + (100/0.622)*(2*cr+cd)
When solving for the Lagrangian by hand I get this output:
import numpy as np
import sympy as smp
a, e, c, d, l = smp.symbols('a e c d l')
eq1 = smp.Eq(1/(1+a), (100/46.6)*l)
eq2 = smp.Eq(1/(1+e), (100/46.6)*l)
eq3 = smp.Eq(d/(1+c*d), (100/62.2)*2*l)
eq4 = smp.Eq(c/(1+c*d), (100/62.2)*l)
eq5 = smp.Eq((100/46.6)*(a+e)+(100/62.2)*(2*c+d) - 300, 0)
solution = np.array(smp.solve([eq1, eq2, eq3, eq4, eq5], [a, e, c, d, l]))
print(solution[0]/100)
print('Constraint', '{:,.0f}'.format(constraint(*(solution/100)[0][:-1])))
print('Max damage', '{:,.0f}'.format(float(round(damage(*(solution/100)[0][:-1])))))
[0.344658405015485 0.344658405015485 0.236481193219279 0.472962386438559 0.000131394038153319]
Constraint 300
Max damage 201
To solve this numerically, I reformulated the problem by splitting the primary constraint into several smaller ones: I stated the required relationships between the variables explicitly and constrained only one of the variables, which then determines the values of all the others.
# We first convert this into a minimization problem.
from scipy import optimize
def damage_min(x):
    return -100*(1+x[0])*(1+x[1])*(1+x[2]*x[3])
# next we define the constraints (each equal to 0)
def constraints(x):
    c1 = x[0] - x[1]
    c2 = 2*x[2] - x[3]
    c3 = x[0]/x[3] - 0.466/0.622
    c4 = x[3] - 0.466
    return np.array([c1, c2, c3, c4])

cons = ({'type': 'eq',
         'fun': constraints})
# We solve the minimization problem
x_initial = np.array([34.4658405015485, 34.4658405015485, 23.6481193219279, 47.2962386438559])
solution = optimize.minimize(damage_min, x_initial, constraints=cons)
print(solution.x)
print('Constraint', '{:,.0f}'.format(constraint(*(solution.x))))
print('Max damage', '{:,.0f}'.format(float(round(damage(*(solution.x))))))
[0.3491254 0.3491254 0.233 0.466 ]
Constraint 300
Max damage 202
My question is as follows. How can I recreate the optimal results above by numerically optimizing a single function, e.g., the Lagrangian? When I try to put both functions into a single function, I get this output.
const = 300
def lagrangian(a, e, cr, cd, lam):
    return -damage(a, e, cr, cd) + lam*(round(constraint(a, e, cr, cd)) - const)

def vector_lagrangian(x):
    return lagrangian(x[0], x[1], x[2], x[3], x[4])
x_initial = np.array([32.4658405015485, 34.4658405015485, 23.6481193219279, 47.2962386438559, 1])
solution = optimize.minimize(vector_lagrangian, x_initial)
fun: -2.140132414183526e+37
hess_inv: array([[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 1]])
jac: array([0., 0., 0., 0., 0.])
message: 'Optimization terminated successfully.'
nfev: 119
nit: 1
njev: 17
status: 0
success: True
x: array([ 6.90178344e+08, 6.51257507e+08, 9.75839219e+08, 4.87919645e+08,
-5.08835272e+06])
Constraint 680,080,111,963
The constraint, in this case, isn't satisfied, and the solver converges on a local minimum. Why is this the case? Is the problem caused by the solver, by the specific function being optimized, or is there some other reason?

As already mentioned in the comments, your math is wrong since minimizing the Lagrangian doesn't yield a local minimum of the corresponding optimization problem. Assuming f : R^n -> R and g : R^n -> R^m are both differentiable functions and you want to solve the optimization problem
min f(x) s.t. g(x) = 0
then the first-order necessary optimality condition (FOC) is
∇L(x, λ) = ∇f(x) + ∇g(x)^T * λ = 0
g(x) = 0
where L is the Lagrangian, ∇f the objective gradient and ∇g the Jacobian of the function g. Consequently, you need to find a root of the function H(x, λ) = (∇f(x) + ∇g(x)^T λ, g(x)) to solve the FOC, which can be done by means of scipy.optimize.root:
from scipy.optimize import minimize, root
from scipy.optimize._numdiff import approx_derivative
def damage_min(x):
    return -100*(1+x[0])*(1+x[1])*(1+x[2]*x[3])

def constraints(x):
    c1 = x[0] - x[1]
    c2 = 2*x[2] - x[3]
    c3 = x[0]/x[3] - 0.466/0.622
    c4 = x[3] - 0.466
    return np.array([c1, c2, c3, c4])

def f_grad(x):
    return approx_derivative(damage_min, x)

def g_jac(x):
    return approx_derivative(constraints, x)

def H(z, f_grad, g, g_jac):
    # z stacks the primal variables x and the multipliers λ
    g_evaluated = g(z)                      # the constraints only use the first four entries of z
    x, λ = np.split(z, (-g_evaluated.size, ))
    eq1 = f_grad(x) + g_jac(x).T @ λ        # stationarity: ∇f(x) + ∇g(x)^T λ
    eq2 = g_evaluated                       # feasibility: g(x) = 0
    return np.array([*eq1, *eq2])
# res.x contains the solution
res = root(lambda z: H(z, f_grad, constraints, g_jac), x0=np.ones(8))
which yields the solution (consisting of x and the Lagrange multipliers λ):
array([ 3.49125402e-01, 3.49125402e-01, 2.33000000e-01, 4.66000000e-01,
-1.49561074e+02, 4.24092469e+01, 1.39390921e+02, 3.08919653e+02])
A few notes:
In general, it's highly recommended to provide exact derivatives instead of approximating them by finite differences by means of approx_derivative (a sketch for this specific problem follows below).
If you really want to solve a minimization problem, you can solve the FOC by minimizing the Euclidean norm of the function H, which is essentially what the root method does under the hood.
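For instance, here is a minimal sketch of exact derivatives for this specific problem (the expressions are simply the hand-computed gradient of damage_min and the Jacobian of constraints, and could be passed in place of f_grad and g_jac above):
import numpy as np

def f_grad_exact(x):
    # gradient of damage_min(x) = -100*(1+x0)*(1+x1)*(1+x2*x3)
    return np.array([
        -100*(1 + x[1])*(1 + x[2]*x[3]),
        -100*(1 + x[0])*(1 + x[2]*x[3]),
        -100*(1 + x[0])*(1 + x[1])*x[3],
        -100*(1 + x[0])*(1 + x[1])*x[2],
    ])

def g_jac_exact(x):
    # Jacobian of the four equality constraints defined above
    return np.array([
        [1.0,      -1.0, 0.0,  0.0],              # x0 - x1
        [0.0,       0.0, 2.0, -1.0],              # 2*x2 - x3
        [1.0/x[3],  0.0, 0.0, -x[0]/x[3]**2],     # x0/x3 - 0.466/0.622
        [0.0,       0.0, 0.0,  1.0],              # x3 - 0.466
    ])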

Related

GEKKO gives "TypeError: must be real number, not GK_Operators" although I have tried several recommended approaches

I am trying to solve an NLP with GEKKO; however, I ran into a few problems while implementing the Python code. The model I am trying to solve is fairly simple: find the point with the minimum loss-function value in a 3D convex set.
def calculateLossFunction(h, x, y, lmbd, n):
    sum = 0
    x_star = np.dot(np.transpose(lmbd), x)
    y_star = np.dot(np.transpose(lmbd), y)
    for i in range(n):
        RNJ = math.sqrt((x_star - x[i]) ** 2 + (y_star - y[i]) ** 2)
        P = 1 / (math.degrees(math.atan(h[i] / RNJ)))
        sum += A * P + B
    return sum
This is the objective function for my problem, and I use it as follows:
m = GEKKO(remote=True)
eq = m.Param()
H = [500, 1500, 2500]
locations = np.array([[1, 2],
                      [2, 3],
                      [3, 1]])
XN = locations[:, 0]
YN = locations[:, 1]
n = len(locations)
lambdas = m.Array(m.Var,n,lb=0, ub = 1, value = 0)
lambdas[0].value = 1
m.Minimize(calculateLossFunction(H, XN, YN, lambdas, n))
m.Equation(sum(lambdas) == 1)
m.solve(disp=True) # solve on public server
#Results
print('')
print('Results')
print('x1: ' + str(lambdas[0].value))
print('x2: ' + str(lambdas[1].value))
print('x3: ' + str(lambdas[2].value))
The thing is, although I've checked similar problems raised on Stack Overflow and tried to mimic the recommended solutions, I cannot figure out what is wrong, because the above code gives the following error.
Traceback (most recent call last):
m.Minimize(calculateLossFunction(H, XN, YN, lambdas, n))
File "C:\Users\admin\PycharmProjects\nonlinear.py", line 13, in calculateLossFunction
RNJ = math.sqrt((x_star - x[i]) ** 2 + (y_star - y[i]) ** 2)
TypeError: must be real number, not GK_Operators
I've also read the documentation but couldn't find any solution.
Thanks in advance for your answers.
Use the gekko functions m.sqrt() and m.atan() instead of math.sqrt() and math.atan(). The TypeError: must be real number, not GK_Operators is from the math function. There is no math.degrees() equivalent in gekko, so use 360.0/(2.0*np.pi) for the conversion. Gekko uses gradient-based optimizers that require overloading of the operators and functions for automatic differentiation to provide exact 1st and 2nd derivatives of constraints and objectives. Some functions are compatible such as np.dot() while others do not return a symbolic solution, such as math.sqrt(). Here is a complete problem that solves successfully:
from gekko import GEKKO
import numpy as np
A = 1.0; B=0.0
def calculateLossFunction(h, x, y, lmbd, n):
    sum = 0
    x_star = np.dot(np.transpose(lmbd), x)
    y_star = np.dot(np.transpose(lmbd), y)
    for i in range(n):
        RNJ = m.sqrt((x_star - x[i]) ** 2 + (y_star - y[i]) ** 2)
        P = 1 / (360.0*(m.atan(h[i] / RNJ)/(2.0*np.pi)))
        sum += A * P + B
    return sum
m = GEKKO(remote=True)
eq = m.Param()
H = [500, 1500, 2500]
locations = np.array([[1, 2],
                      [2, 3],
                      [3, 1]])
XN = locations[:, 0]
YN = locations[:, 1]
n = len(locations)
lambdas = m.Array(m.Var,n,lb=0, ub = 1, value = 0)
lambdas[0].value = 1
m.Minimize(calculateLossFunction(H, XN, YN, lambdas, n))
m.Equation(sum(lambdas) == 1)
m.solve(disp=True) # solve on public server
print('Results')
print('x1: ' + str(lambdas[0].value))
print('x2: ' + str(lambdas[1].value))
print('x3: ' + str(lambdas[2].value))
Solution with sample A=1.0 and B=0.0 values:
Results
x1: [0.99999702144]
x2: [1.9787728836e-06]
x3: [9.9978717975e-07]
and the solver output:
Number of Iterations....: 113
(scaled) (unscaled)
Objective...............: 3.3346336759950239e-02 3.3346336759950239e-02
Dual infeasibility......: 8.4348140936638533e-07 8.4348140936638533e-07
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 1.0000010522025397e-11 1.0000010522025397e-11
Overall NLP error.......: 8.4348140936638533e-07 8.4348140936638533e-07
Number of objective function evaluations = 1237
Number of objective gradient evaluations = 114
Number of equality constraint evaluations = 1237
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 114
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 113
Total CPU secs in IPOPT (w/o function evaluations) = 0.067
Total CPU secs in NLP function evaluations = 0.034
EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is 3.334633675995024E-002
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 0.131699999998091 sec
Objective : 3.334633675995024E-002
Successful solution
---------------------------------------------------
Trigonometric functions sometimes need constraints on the variables to ensure that a NaN value is not returned or to make a solution unique (e.g., cos(np.pi) and cos(-np.pi) give the same value).
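For illustration, a minimal sketch (not from the original answer) of how variable bounds in Gekko can keep such expressions in a safe and unique range:
from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
# a lower bound keeps the sqrt argument strictly positive, so no NaN
r = m.Var(value=1.0, lb=1e-3)
# restricting the angle to [0, pi] makes the minimizer unique
theta = m.Var(value=0.5, lb=0, ub=np.pi)
m.Minimize(m.sqrt(r) + m.cos(theta))
m.solve(disp=False)
print(r.value, theta.value)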

Is it normal in scipy.optimize?

I want to optimise my portfolio using Markowitz theory (risk minimization by the Markowitz method for a given income = 15%) and scipy.optimize.minimize.
I have a risk function:
def objective(x):
    x1 = x[0]; x2 = x[1]; x3 = x[2]; x4 = x[3]
    return 1547.87020*x1**2 + 125.26258*x1*x2 + 1194.3433*x1*x3 + 63.6533*x1*x4 \
           + 27.3176649*x2**2 + 163.28848*x2*x3 + 4.829816*x2*x4 \
           + 392.11819*x3**2 + 56.50518*x3*x4 \
           + 34.484063*x4**2
The sum of the portfolio weights (in %) must equal 1:
def constraint1(x):
    return (x[0]+x[1]+x[2]+x[3]-1.0)
The income function with its restriction:
def constraint2(x):
    return (-1.37458*x[0] + 0.92042*x[1] + 5.06189*x[2] + 0.35974*x[3] - 15.0)
And I test it using:
x0=[0,1,1,0] #Initial value
b=(0.0,1.0)
bnds=(b,b,b,b)
con1={'type':'ineq','fun':constraint1}
con2={'type':'eq','fun':constraint2}
cons=[con1,con2]
sol=minimize(objective,x0,method='SLSQP',\
bounds=bnds,constraints=cons)
And my result is:
fun: 678.5433939
jac: array([1383.25920868, 222.75363159, 1004.03005219, 130.30312347])
message: 'Positive directional derivative for linesearch'
nfev: 216
nit: 20
njev: 16
status: 8
success: False
x: array([0., 1., 1., 1.])
But how? The sum of the portfolio weights can't be more than 1 (here stock 2 = stock 3 = stock 4 = 100%). That's constraint1. Where is the problem?
The output says "success: False"
So it is telling you that it failed to find a solution to the problem.
Also, why did you put
con1={'type':'ineq','fun':constraint1}
Don't you want
con1={'type':'eq','fun':constraint1}
I got success using method='BFGS'
Your code is returning values that do not respect your constraint because of an incorrect definition of the first constraint: an 'ineq' constraint means fun(x) >= 0, so a - b >= 0 enforces a >= b and the order in the inequality matters; here you want 1 - sum(x) >= 0. On the other hand, your x0 must also respect your constraints, and sum([0, 1, 1, 0]) = 2 > 1.
I slightly improved your code and fixed the aforementioned issues, but I still think that you need to review your second constraint:
import numpy as np
from scipy.optimize import minimize
def objective(x):
    x1, x2, x3, x4 = x[0], x[1], x[2], x[3]
    coefficients = np.array([1547.87020, 125.26258, 1194.3433, 63.6533, 27.3176649, 163.28848, 4.829816, 392.11819, 56.50518, 34.484063])
    xs = np.array([x1**2, x1*x2, x1*x3, x1*x4, x2**2, x2*x3, x2*x4, x3**2, x3*x4, x4**2])
    return np.dot(xs, coefficients)
const1 = lambda x: 1 - sum(x)
const2 = lambda x: np.dot(np.array([-1.37458, 0.92042, 5.06189, 0.35974]), x) - 15.0
x0 = [0, 0, 0, 0] #Initial value
b = (0.0, 1.0)
bnds = (b, b, b, b)
cons = [{'type':'ineq','fun':const1}, {'type':'eq', 'fun':const2}]
# minimize
sol = minimize(objective,
x0,
method = 'SLSQP',
bounds = bnds,
constraints = cons)
print(sol)
output:
fun: 392.1181900000138
jac: array([1194.34332275, 163.28847885, 784.23638535, 56.50518036])
message: 'Positive directional derivative for linesearch'
nfev: 92
nit: 11
njev: 7
status: 8
success: False
x: array([0.00000000e+00, 5.56638069e-14, 1.00000000e+00, 8.29371293e-14])
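A quick check (not part of the original answer) makes the issue with the second constraint concrete: with weights bounded by 0 <= x_i <= 1 and sum(x) <= 1, the largest achievable value of the income expression is far below the required 15.0, so the equality constraint cannot be satisfied and SLSQP reports failure.
import numpy as np

# best possible income under the bounds: put all weight on the single best asset
returns = np.array([-1.37458, 0.92042, 5.06189, 0.35974])
print(returns.max())   # 5.06189 < 15.0, so constraint2 == 0 is infeasible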

Is there any quadratic programming function that can have both lower and upper bounds - Python

Normally I have been using GNU Octave to solve quadratic programming problems.
I solve problems like
minimize 1/2 x'Qx + c'x
subject to
A*x <= b
lb <= x <= ub
where lb and ub are the lower and upper bounds, i.e., limits on x.
My Octave code looks like this when I solve it, just one simple line:
U = quadprog(Q, c, A, b, [], [], lb, ub);
The square brackets [] are empty because I don't need the equality constraints
Aeq*x = beq.
So my question is:
Is there an easy-to-use quadratic programming solver in Python for solving problems of the form
minimize 1/2 x'Qx + c'x
subject to
A*x <= b
lb <= x <= ub
or subject to
b_lb <= A*x <= b_ub
lb <= x <= ub
You can write your own solver based on scipy.optimize; here is a small example of how to code your custom Python quadprog():
# python3
import numpy as np
from scipy import optimize
class quadprog(object):
    def __init__(self, H, f, A, b, x0, lb, ub):
        self.H = H
        self.f = f
        self.A = A
        self.b = b
        self.x0 = x0
        self.bnds = tuple([(lb, ub) for x in x0])
        # call solver
        self.result = self.solver()

    def objective_function(self, x):
        return 0.5*np.dot(np.dot(x.T, self.H), x) + np.dot(self.f.T, x)

    def solver(self):
        cons = ({'type': 'ineq', 'fun': lambda x: self.b - np.dot(self.A, x)})
        optimum = optimize.minimize(self.objective_function,
                                    x0 = self.x0.T,
                                    bounds = self.bnds,
                                    constraints = cons,
                                    tol = 10**-3)
        return optimum
Here is how to use this, using the same variables from the first example provided in matlab-quadprog:
# init vars
H = np.array([[ 1, -1],
              [-1,  2]])
f = np.array([-2, -6]).T
A = np.array([[ 1, 1],
              [-1, 2],
              [ 2, 1]])
b = np.array([2, 2, 3]).T
x0 = np.array([1, 2])
lb = 0
ub = 2
# call custom quadprog
quadprog = quadprog(H, f, A, b, x0, lb, ub)
print(quadprog.result)
The output of this short snippet is:
fun: -8.222222222222083
jac: array([-2.66666675, -4. ])
message: 'Optimization terminated successfully.'
nfev: 8
nit: 2
njev: 2
status: 0
success: True
x: array([0.66666667, 1.33333333])
For more information on how to use scipy.optimize.minimize please refer to the docs.
If you need a general quadratic programming solver like quadprog, I would suggest the open-source software cvxopt as noted in one of the comments. This is robust and really state-of-the-art. The main contributor is a major expert in the field and the co-author of a classic book on Convex Optimization.
The function you want to use is cvxopt.solvers.qp. A simple wrapper to use it in Numpy like quadprog is the following. Note that bounds can be included as a special case of inequality constraints.
import numpy as np
from cvxopt import matrix, solvers
def quadprog(P, q, G=None, h=None, A=None, b=None, options=None):
    """
    Quadratic programming problem with both linear equalities and inequalities

        Minimize     0.5 * x @ P @ x + q @ x
        Subject to   G @ x <= h
        and          A @ x == b
    """
    P, q = matrix(P), matrix(q)
    if G is not None:
        G, h = matrix(G), matrix(h)
    if A is not None:
        A, b = matrix(A), matrix(b)
    sol = solvers.qp(P, q, G, h, A, b, options=options)
    return np.array(sol['x']).ravel()
cvxopt used to be difficult to install, but is nowadays also included in the Anaconda distribution and can be installed (even on Windows) with conda install cvxopt.
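As a side note (an illustration, not part of the wrapper above): two-sided constraints b_lb <= A @ x <= b_ub and bounds lb <= x <= ub can be folded into the single G @ x <= h form by stacking:
import numpy as np

def stack_two_sided(A, b_lb, b_ub, lb, ub):
    # G = [A; -A; I; -I],  h = [b_ub; -b_lb; ub; -lb]
    # (lb and ub are arrays of the same length as x)
    n = A.shape[1]
    I = np.eye(n)
    G = np.vstack([A, -A, I, -I])
    h = np.concatenate([b_ub, -b_lb, ub, -lb])
    return G, h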
If instead, you are interested in the more specific case of linear least-squares optimisation with bounds, which is a subset of the general quadratic programming, namely
Minimize || A @ x - b ||
subject to lb <= x <= ub
Then Scipy has the specific function scipy.optimize.lsq_linear(A, b, bounds).
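For example, a small self-contained sketch with made-up data:
import numpy as np
from scipy.optimize import lsq_linear

# minimize ||A @ x - b|| subject to 0 <= x <= 2
A = np.array([[ 1.0, -1.0],
              [-1.0,  2.0],
              [ 2.0,  1.0]])
b = np.array([2.0, 2.0, 3.0])
res = lsq_linear(A, b, bounds=(0.0, 2.0))
print(res.x)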
Note that the accepted answer is a very inefficient approach and should not be recommended. It makes no use of the crucial fact that the function you want to optimize is quadratic but instead uses a generic nonlinear optimization program and does not even specify the analytic gradient.
You could use the solve_qp function from qpsolvers. It solves quadratic programs in the following form:
minimize_x    1/2 x' P x + q' x
subject to    G x <= h
              A x == b
              lb <= x <= ub
The function wraps the many QP solvers available in Python (full list here) via its solver keyword argument. Make sure to try different solvers to find the one that fits your problem best.
Here is a snippet for solving a small problem:
from numpy import array, dot
from qpsolvers import solve_qp
M = array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
P = dot(M.T, M) # this is a positive definite matrix
q = dot(array([3., 2., 3.]), M)
G = array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = array([3., 2., -2.])
A = array([1., 1., 1.])
b = array([1.])
x = solve_qp(P, q, G, h, A, b, solver="osqp")
print(f"QP solution: x = {x}")
And if you are interested in linear least-squares with linear or box (bounds) constraints, there is also a solve_ls function. Here is a short tutorial on solving such problems.

Python's scipy.optimize.minimize with SLSQP fails with "Positive directional derivative for linesearch"

I have a least squares minimization problem subject to inequality constraints which I am trying to solve using scipy.optimize.minimize. It seems that there are two options for inequality constraints: COBYLA and SLSQP.
I first tried SLSQP since it allows for explicit partial derivatives of the function to be minimized. Depending on the scaling of the problem, it fails with the error:
Positive directional derivative for linesearch (Exit mode 8)
whenever interval or more general inequality constraints are imposed.
This has been observed previously e.g., here. Manual scaling of the function to be minimized (along with the associated partial derivatives) seems to get rid of the problem, but I cannot achieve the same effect by changing ftol in the options.
Overall, this whole thing is causing me to have doubts about the routine working in a robust manner. Here's a simplified example:
import numpy as np
import scipy.optimize as sp_optimize
def cost(x, A, y):
    e = y - A.dot(x)
    rss = np.sum(e ** 2)
    return rss

def cost_deriv(x, A, y):
    e = y - A.dot(x)
    deriv0 = -2 * e.dot(A[:,0])
    deriv1 = -2 * e.dot(A[:,1])
    deriv = np.array([deriv0, deriv1])
    return deriv
A = np.ones((10,2)); A[:,0] = np.linspace(-5,5, 10)
x_true = np.array([2, 2/20])
y = A.dot(x_true)
x_guess = x_true / 2
prm_bounds = ((0, 3), (0,1))
cons_SLSQP = ({'type': 'ineq', 'fun' : lambda x: np.array([x[0] - x[1]]),
'jac' : lambda x: np.array([1.0, -1.0])})
# works correctly
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv, bounds=prm_bounds, method='SLSQP', constraints=cons_SLSQP, options={'disp': True})
print(min_res_SLSQP)
# fails
A = 100 * A
y = A.dot(x_true)
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv, bounds=prm_bounds, method='SLSQP', constraints=cons_SLSQP, options={'disp': True})
print(min_res_SLSQP)
# works if bounds and inequality constraints removed
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv,
method='SLSQP', options={'disp': True})
print(min_res_SLSQP)
How should ftol be set to avoid failure? More generally, can a similar problem arise with COBYLA? Is COBYLA a better choice for this type of inequality constrained least squares optimization problem?
Using a square root in the cost function was found to improve performance. However, for a non-linear re-parameterization of the problem (simpler, but closer to what I need to do in practice), it fails again. Here are the details:
import numpy as np
import scipy.optimize as sp_optimize
def cost(x, y, g):
    e = ((y - x[1]) / x[0]) - g
    rss = np.sqrt(np.sum(e ** 2))
    return rss

def cost_deriv(x, y, g):
    e = ((y - x[1]) / x[0]) - g
    factor = 0.5 / np.sqrt(e.dot(e))
    deriv0 = -2 * factor * e.dot(y - x[1]) / (x[0]**2)
    deriv1 = -2 * factor * np.sum(e) / x[0]
    deriv = np.array([deriv0, deriv1])
    return deriv
x_true = np.array([1/300, .1])
N = 20
t = 20 * np.arange(N)
g = 100 * np.cos(2 * np.pi * 1e-3 * (t - t[-1] / 2))
y = g * x_true[0] + x_true[1]
x_guess = x_true / 2
prm_bounds = ((1e-4, 1e-2), (0, .4))
# check derivatives
delta = 1e-9
C0 = cost(x_guess, y, g)
C1 = cost(x_guess + np.array([delta, 0]), y, g)
approx_deriv0 = (C1 - C0) / delta
C1 = cost(x_guess + np.array([0, delta]), y, g)
approx_deriv1 = (C1 - C0) / delta
approx_deriv = np.array([approx_deriv0, approx_deriv1])
deriv = cost_deriv(x_guess, y, g)
# fails
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(y, g), jac=cost_deriv,
bounds=prm_bounds, method='SLSQP', options={'disp': True})
print(min_res_SLSQP)
Instead of minimizing np.sum(e ** 2), minimize sqrt(np.sum(e ** 2)), or better (in terms of calculation): np.linalg.norm(e)!
This modification:
- does not change your solution in regards to x
- will need post-processing if the original objective is needed (probably not)
- is much more robust
With this change, all cases work, even using numerical differentiation (I was too lazy to modify the gradient, which needs to reflect this!).
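For reference, a sketch (not in the original answer) of what the norm-based cost and a matching analytic gradient could look like for the linear model of the first example; the gradient of ||e|| with e = y - A @ x is -A.T @ e / ||e||:
import numpy as np

def cost_norm(x, A, y):
    # norm-based objective: ||y - A @ x||_2
    return np.linalg.norm(y - A.dot(x))

def cost_norm_deriv(x, A, y):
    e = y - A.dot(x)
    return -A.T.dot(e) / np.linalg.norm(e)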
Example output (number of func-evals gives away num-diff):
Optimization terminated successfully. (Exit mode 0)
Current function value: 3.815547437029837e-06
Iterations: 16
Function evaluations: 88
Gradient evaluations: 16
fun: 3.815547437029837e-06
jac: array([-6.09663382, -2.48862544])
message: 'Optimization terminated successfully.'
nfev: 88
nit: 16
njev: 16
status: 0
success: True
x: array([ 2.00000037, 0.10000018])
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.0002354577991007501
Iterations: 23
Function evaluations: 114
Gradient evaluations: 23
fun: 0.0002354577991007501
jac: array([ 435.97259208, 288.7483819 ])
message: 'Optimization terminated successfully.'
nfev: 114
nit: 23
njev: 23
status: 0
success: True
x: array([ 1.99999977, 0.10000014])
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.0003392807206384532
Iterations: 21
Function evaluations: 112
Gradient evaluations: 21
fun: 0.0003392807206384532
jac: array([ 996.57340243, 51.19298764])
message: 'Optimization terminated successfully.'
nfev: 112
nit: 21
njev: 21
status: 0
success: True
x: array([ 2.00000008, 0.10000104])
While there are probably some problems with SLSQP, it's still one of the most tested and robust codes given its broad application spectrum!
I would also expect SLSQP to be much better here compared to COBYLA, as the latter is based heavily on linearizations. (but just take it as a guess; it's easy to try given the minimize-interface!)
Alternative
In general, an interior-point based solver for convex quadratic programming will be the best approach here. But for this, you need to leave scipy (or maybe an SOCP solver would be better; I'm not sure).
cvxpy brings a nice modelling system and a good open-source solver (ECOS; although technically a conic solver -> more general and less robust, but it should beat SLSQP).
Using cvxpy and ECOS, this looks like:
import numpy as np
import cvxpy as cvx
""" Problem data """
A = np.ones((10,2)); A[:,0] = np.linspace(-5,5, 10)
x_true = np.array([2, 2/20])
y = A.dot(x_true)
x_guess = x_true / 2
prm_bounds = ((0, 3), (0,1))
# problematic case
A = 100 * A
y = A.dot(x_true)
""" Solve """
x = cvx.Variable(len(x_true))
constraints = [x[0] >= x[1]]
for ind, (lb, ub) in enumerate(prm_bounds):  # inefficient -> a matrix-based expression is better!
    constraints.append(x[ind] >= lb)
    constraints.append(x[ind] <= ub)
objective = cvx.Minimize(cvx.norm(A*x - y))
problem = cvx.Problem(objective, constraints)
problem.solve(solver=cvx.ECOS, verbose=False)
print(problem.status)
print(problem.value)
print(x.value.T)
# optimal
# -6.67593652593801e-10
# [[ 2. 0.1]]

Minimization in Python to find shortest path between two points

I'm trying to find the shortest path between two points, (0,0) and (1000,-100). The path is to be defined by a 7th order polynomial function:
p(x) = a0 + a1*x + a2*x^2 + ... + a7*x^7
To do so I tried to minimize the function that calculates the total path length from the polynomial function:
length = ∫ from 0 to 1000 of sqrt(1 + (dp(x)/dx)^2) dx
Obviously the correct solution is a straight line; however, later on I want to add constraints to the problem. This was supposed to be a first approach.
The code I implemented was:
import numpy as np
import matplotlib.pyplot as plt
import math
import sys
import scipy.integrate
import scipy.optimize
def path_tracer(a, x):
    return a[0] + a[1]*x + a[2]*x**2 + a[3]*x**3 + a[4]*x**4 + a[5]*x**5 + a[6]*x**6 + a[7]*x**7

def lof(a):
    upper_lim = a[8]
    L = lambda x: np.sqrt(1 + (a[1] + 2*a[2]*x + 3*a[3]*x**2 + 4*a[4]*x**3 + 5*a[5]*x**4 + 6*a[6]*x**5 + 7*a[7]*x**6)**2)
    length_of_path = scipy.integrate.quad(L, 0, upper_lim)
    return length_of_path[0]
a = np.array([-4E-11, -.4146,.0003,-7e-8,0,0,0,0,1000]) # [polynomial parameters, x end point]
xx = np.linspace(0,1200,1200)
y = [path_tracer(a,x) for x in xx]
cons = ({'type': 'eq', 'fun': lambda x:path_tracer(a,a[8])+50})
c = scipy.optimize.minimize(lof, a, constraints = cons)
print(c)
When I ran it, however, the minimization routine failed and returned the initial parameters unchanged. The output is:
fun: 1022.9651540965604
jac: array([ 0.00000000e+00, -1.78130722e+02, -1.17327499e+05,
-7.62458172e+07, 9.42803815e+11, 9.99924786e+14,
9.99999921e+17, 1.00000000e+21, 1.00029755e+00])
message: 'Singular matrix C in LSQ subproblem'
nfev: 11
nit: 1
njev: 1
status: 6
success: False
x: array([ -4.00000000e-11, -4.14600000e-01, 3.00000000e-04,
-7.00000000e-08, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 1.00000000e+03])
Am I doing something wrong, or is the routine just not appropriate for solving this kind of problem? If so, is there an alternative in Python?
You can use this routine, but there are some problems with your approach:
- The domain of the polynomial should be normalized to something reasonable, like [0, 1]. This makes the optimization much easier. You can revert this after you are done with the optimization (a sketch of this mapping follows at the end of this answer).
- You could simplify the code by using polyval and related functions.
- The optimal solution to this is quite obviously -0.1*x, so I'm not sure why you feel the need to optimize.
A solution that works is
import numpy as np
import scipy.optimize
x = np.linspace(0, 1, 1000)
def obj_fun(p):
    deriv = np.polyval(np.polyder(p), x)
    return np.sum(np.sqrt(1 + deriv ** 2))
cons = ({'type': 'eq', 'fun': lambda p: np.polyval(p, [0, 1]) - [0, -100]})
p0 = np.zeros(8)
c = scipy.optimize.minimize(obj_fun, p0, constraints = cons)
Where we can plot the result
import matplotlib.pyplot as plt
plt.plot(np.polyval(c.x, x), label='result')
plt.plot(-100 * x, label='optimal')
plt.legend()
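If the normalization mentioned above is used, here is a small sketch (an assumption on my part, not in the original answer) of how to map the optimized coefficients c.x, defined on u = x/1000 in [0, 1], back to the physical domain [0, 1000]:
import numpy as np

# np.polyval expects highest-degree-first coefficients
p_unit = c.x                                   # polynomial in u = x / 1000
k = np.arange(len(p_unit) - 1, -1, -1)         # powers, highest first
p_x = p_unit / 1000.0 ** k                     # coefficients in terms of x
# np.polyval(p_x, 1000.0) now equals np.polyval(p_unit, 1.0)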
