programming with scipy.optimize.linprog - variable coefficients - python

I am trying to optimize a cost function with scipy.optimize.linprog, where the cost coefficients are a function of the variables; e.g.
Cost = c1 * x1 + c2 * x2 # (x1,x2 are the variables)
for example
if x1 = 1, c1 = 0.5
if x1 = 2, c1 = 1.25
etc.
Just to clarify:
we are looking for the minimum cost over the variables xi; i = 1, 2, 3, ...
xi are positive integers.
However, the cost coefficient per xi is a function of the value of xi.
cost is x1*f1(x1) + x2*f2(x2) + ... + c0
fi is a "rate" table; e.g. f1(0) = 0; f1(1) = 2.00; f1(2) = 3.00, etc.
The xi are under constraints: they can't be negative and can't exceed qi =>
0 <= xi <= qi
fi() values are calculated for each possible value of xi.
I hope this clarifies the model.
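To make the model concrete, here is a minimal sketch (the rate-table numbers below are made up purely for illustration) of how the cost would be evaluated for a given integer vector x:
import numpy as np

# hypothetical rate tables: row i holds f_i(0), f_i(1), ... up to q_i
rate_table = np.array([[0.0, 2.0, 3.0],
                       [0.0, 1.5, 2.5]])
c0 = 10.0  # fixed cost offset

def cost(x):
    # x_i * f_i(x_i) summed over all variables, plus the constant c0
    return sum(xi * rate_table[i, xi] for i, xi in enumerate(x)) + c0

print(cost([1, 2]))  # 1*2.0 + 2*2.5 + 10.0 = 17.0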

Here is some prototype code to show that your problem is quite hard (regarding both formulation and performance; the former is visible in the code).
The implementation uses cvxpy for modelling (convex programming only) and is based on a mixed-integer approach.
Code
import numpy as np
from cvxpy import *  # note: old cvxpy (< 1.0) API: Bool, Int, sum_entries, mul_elemwise

"""
x0 == 0 -> f(x) = 0
x0 == 1 -> f(x) = 1
...
x1 == 0 -> f(x) = 1
x1 == 1 -> f(x) = 4
...
"""
rate_table = np.array([[0, 1, 3, 5], [1, 4, 5, 6], [1.3, 1.7, 2.25, 3.0]])
bounds_x = (0, 3)  # inclusive; bounds are needed for linearization!

# Vars
# ----
n_vars = len(rate_table)
n_values_per_var = [len(x) for x in rate_table]

I = Bool(n_vars, n_values_per_var[0])  # simplified assumption: rate-table sizes equal
X = Int(n_vars)
X_ = Variable(n_vars, n_values_per_var[0])  # X_ = mul_elemwise(I*X) broadcasted

# Constraints
# -----------
constraints = []

# X is bounded
constraints.append(X >= bounds_x[0])
constraints.append(X <= bounds_x[1])

# only one value in rate-table active (often formulated with SOS-type-1 constraints)
for i in range(n_vars):
    constraints.append(sum_entries(I[i, :]) <= 1)

# linearization of product of BIN * INT (INT needs to be bounded!)
# based on Erwin's answer here:
# https://www.or-exchange.org/questions/10775/how-to-linearize-product-of-binary-integer-and-integer-variables
for i in range(n_values_per_var[0]):
    constraints.append(bounds_x[0] * I[:, i] <= X_[:, i])
    constraints.append(X_[:, i] <= bounds_x[1] * I[:, i])
    constraints.append(X - bounds_x[1]*(1-I[:, i]) <= X_[:, i])
    constraints.append(X_[:, i] <= X - bounds_x[0]*(1-I[:, i]))

# fix the choices -> if table-entry x is used -> the integer needs to equal x
# assumptions:
# - table defined for each int
help_vec = np.arange(n_values_per_var[0])
constraints.append(I * help_vec == X)

# ONLY FOR DEBUGGING -> make the simple max-each-X solution infeasible
constraints.append(sum_entries(mul_elemwise([1, 3, 2], square(X))) <= 15)

# Objective
# ---------
objective = Maximize(sum_entries(mul_elemwise(rate_table, X_)))

# Problem & Solve
# ---------------
problem = Problem(objective, constraints)
problem.solve()  # choose another solver if needed, e.g. commercial ones like Gurobi, Cplex
print('Max-objective: ', problem.value)
print('X:\n' + str(X.value))
Output
('Max-objective: ', 20.70000000000001)
X:
[[ 3.]
[ 1.]
[ 1.]]
Idea
Transform the objective max: x0*f(x0) + x1*f(x1) + ...
into: x0*f(x0==0) + x0*f(x0==1) + ... + x1*f(x1==0) + x1*f(x1==1) + ...
Introduce binary variables to formulate:
f(x0==0) as I[0,0]*table[0,0]
f(x1==2) as I[1,2]*table[1,2]
Add constraints so that I has at most one nonzero entry for each variable x_i (only one of the expanded objective components will be active).
Linearize the product x0*f(x0==0) == x0*I[0,0]*table[0,0] (integer * binary * constant).
Fix the table lookup: using the table entry with index x (for x0) should force x0 == x.
Assuming there are no gaps in the table, this can be formulated as I * help_vec == X, where help_vec == vector(lower_bound, ..., upper_bound).
cvxpy automatically (by construction) proves that our formulation is convex, which is needed by most solvers (and is in general not easy to recognize).
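As a quick sanity check on that linearization (a brute-force sketch, not part of the solver code), one can enumerate every combination of a binary b and a bounded integer x and confirm that the four inequalities admit exactly y = b*x:
# Brute-force check of the BIN * INT linearization y = b*x with L <= x <= U
L, U = 0, 3
for b in (0, 1):
    for x in range(L, U + 1):
        feasible_y = [y for y in range(L, U + 1)
                      if L * b <= y <= U * b
                      and x - U * (1 - b) <= y <= x - L * (1 - b)]
        assert feasible_y == [b * x], (b, x, feasible_y)
print("the only feasible y is always b*x")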
Just for fun: a bigger problem and a commercial solver
Input generated by:
def gen_random_growing_table(size):
    return np.cumsum(np.random.randint(1, 10, size))

SIZE = 100
VARS = 100
rate_table = np.array([gen_random_growing_table(SIZE) for v in range(VARS)])
bounds_x = (0, SIZE-1)  # inclusive; bounds are needed for linearization!
...
...
constraints.append(sum_entries(square(X)) <= 150)
Output:
Explored 19484 nodes (182729 simplex iterations) in 129.83 seconds
Thread count was 4 (of 4 available processors)
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (1.5231e-05) exceeds tolerance
Best objective -1.594000000000e+03, best bound -1.594000000000e+03, gap 0.0%
('Max-objective: ', 1594.0000000000005)

Related

How to use scipy `minimize` on a difference between two vectors?

I have two vectors w1 and w2 (each of length 100), and I want to minimize the sum of their absolute differences, i.e.
import numpy as np

def diff(w: np.ndarray) -> float:
    """Get the sum of absolute differences in the vector w.

    Args:
        w: A flattened vector of length 200, with the first 100 elements
            pertaining to w1, and the last 100 elements pertaining to w2.

    Returns:
        Sum of absolute differences.
    """
    return np.sum(np.absolute(w[:100] - w[-100:]))
I need to write diff() as taking only one argument, since scipy.optimize.minimize requires the array passed to the x0 argument to be 1-dimensional.
As for constraints, I have
w1 is fixed and not allowed to change
w2 is allowed to change
The sum of absolute values of w2 is between 0.1 and 1.1: 0.1 <= sum(abs(w2)) <= 1.1
|w2_i| < 0.01 for any element i in w2
I am confused as to how to code these constraints using the Bounds and LinearConstraint objects. What I've tried so far is the following
from scipy.optimize import minimize, Bounds, LinearConstraint

bounds = Bounds(lb=[-0.01] * 200, ub=[0.01] * 200)  # constraint #4
lc = LinearConstraint([[1] * 200], [0.1], [1.1])  # constraint #3
res = minimize(
    fun=diff,
    method='trust-constr',
    x0=w,  # my flattened vector containing w1 in the first 100 elements, and w2 in the last 100
    bounds=bounds,
    constraints=(lc)
)
My logic for the bounds variable comes from constraint #4, and the lc variable comes from constraint #3. However, I know I've coded this wrong, because the lower and upper bounds have length 200, which seems to indicate they are applied to both w1 and w2, whereas I only want to apply the constraints to w2 (I get the error ValueError: operands could not be broadcast together with shapes (200,) (100,) if I try to change the length of the arrays in Bounds from 200 to 100).
The shapes and argument types for LinearConstraint are especially confusing to me, but I did try to follow the scipy example.
This current implementation never seems to finish; it just hangs forever.
How do I properly implement bounds and LinearConstraint so that it satisfies my constraints list above, if that is even possible?
Your problem can easily be formulated as a linear optimization problem (LP). You only need to reformulate all absolute values of the optimization variables.
Changing the notation slightly (x is now the optimization variable w2 and w is just your given vector w1), your problem reads as
min  |w_1 - x_1| + ... + |w_N - x_N|
s.t. lb <= |x_1| + ... + |x_N| <= ub    (3)
     |x_i| <= 0.01 - eps                (4)  (models the strict inequality)
where eps is just a sufficiently small number in order to model the strict inequality.
Let's consider the constraint (3). Here, we add additional positive variables z and define z_i = |x_i|. Then, we replace all absolute values |x_i| by z_i and impose the constraints -x_i <= z_i <= x_i which model the relationship z_i = |x_i|. Similarly, you can proceed with the objective and the constraint (4). The latter is by the way trivial and equivalent to -(0.01 - eps) <= x_i <= 0.01 - eps.
In the end, your optimization problem should read (assuming that all your w_i are positive):
min  u_1 + ... + u_N
s.t. lb <= z_1 + ... + z_N <= ub
     -x <= z <= x
     -0.01 + eps <= x <= 0.01 - eps
     -(w - x) <= u <= w - x
     0 <= z
     0 <= u
with 3*N optimization variables x_1, ..., x_N, u_1, ..., u_N, z_1, ..., z_N. It isn't hard to write these constraints as one matrix-vector product A_ineq * (x, z, u)^T <= b_ineq. Then, you can solve the problem with scipy.optimize.linprog as follows:
import numpy as np
from scipy.optimize import linprog

n = 100
w = np.abs(np.random.randn(n))
eps = 1e-10
lb = 0.1
ub = 1.1

# linear constraints: A_ub * (x, z, u)^T <= b_ub
A_ineq = np.block([
    [np.zeros(n), np.ones(n), np.zeros(n)],
    [np.zeros(n), -np.ones(n), np.zeros(n)],
    [-np.eye(n), np.eye(n), np.zeros((n, n))],
    [-np.eye(n), -np.eye(n), np.zeros((n, n))],
    [ np.eye(n), np.zeros((n, n)), -np.eye(n)],
    [ np.eye(n), np.zeros((n, n)), np.eye(n)],
])
b_ineq = np.hstack((ub, -lb, np.zeros(n), np.zeros(n), w, w))
# bounds: lower <= (x, z, u)^T <= upper
lower = np.hstack(((-0.01 + eps) * np.ones(n), np.zeros(n), np.zeros(n)))
upper = np.hstack((( 0.01 - eps) * np.ones(n), np.inf*np.ones(n), np.inf*np.ones(n)))
bounds = [(l, u) for (l, u) in zip(lower, upper)]
# objective: c^T * (x, z, u)
c = np.hstack((np.zeros(n), np.zeros(n), np.ones(n)))
# solve the problem
res = linprog(c, A_ub=A_ineq, b_ub=b_ineq, method="highs")
# your solution
x = res.x[:n]
print(res.message)
print(x)
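A quick sanity check on the returned solution (assuming the solve succeeded):
# verify the original constraints on w2 (here x) and report the objective
print(0.1 <= np.abs(x).sum() <= 1.1)   # constraint #3
print(np.all(np.abs(x) < 0.01))        # constraint #4
print("objective:", np.abs(w - x).sum())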
Some notes in arbitrary order:
It's highly recommended to solve linear optimization problems with linprog instead of minimize. The former provides an interface to HiGHS, a high-performance LP solver that outperforms all the algorithms under the hood of minimize. However, it's also worth mentioning that minimize is meant for nonlinear optimization problems.
In case your values w are not all positive, we need to change the formulation.
You can (and perhaps should, for clarity) use the args argument in minimize, and provide the fixed vector as an extra argument to your function.
If you set up your equation as follows:
def diff(w2, w1):
    return np.sum(np.absolute(w1 - w2))
and your constraints with
bounds = Bounds(lb=[-0.01] * 100, ub=[0.01] * 100) # constraint #4
lc = LinearConstraint([[1] * 100], [0.1], [1.1]) # constraint #3
and then do
res = minimize(
    fun=diff,
    method='trust-constr',
    x0=w1,
    args=(w2,),
    bounds=bounds,
    constraints=[lc]
)
Then:
print(res.success, res.status, res.nit, np.abs(res.x).sum(), all(np.abs(res.x) < 0.01))
yields (for me at least)
(True, 1, 17, 0.9841520351691752, True)
which seems to be what you want.
Note that my test inputs are:
w1 = (np.arange(100) - 50) / 1000
w2 = np.ones(100, dtype=float)
which may or may not be favourable to the fitting procedure.

GEKKO gives TypeError: must be real number, not GK_Operators, even though I have tried several recommended approaches

I am trying to solve an NLP with GEKKO; however, I have a few problems implementing the Python code. The model I am trying to solve is pretty trivial: I am trying to find the point with the minimum loss-function value in a 3D convex set.
import math
import numpy as np

def calculateLossFunction(h, x, y, lmbd, n):
    sum = 0
    x_star = np.dot(np.transpose(lmbd), x)
    y_star = np.dot(np.transpose(lmbd), y)
    for i in range(n):
        RNJ = math.sqrt((x_star - x[i]) ** 2 + (y_star - y[i]) ** 2)
        P = 1 / (math.degrees(math.atan(h[i] / RNJ)))
        sum += A * P + B
    return sum
This is the objective function for my problem and I am using this as follows
from gekko import GEKKO

m = GEKKO(remote=True)
eq = m.Param()
H = [500, 1500, 2500]
locations = np.array([[1, 2],
                      [2, 3],
                      [3, 1]])
XN = locations[:, 0]
YN = locations[:, 1]
n = len(locations)
lambdas = m.Array(m.Var, n, lb=0, ub=1, value=0)
lambdas[0].value = 1
m.Minimize(calculateLossFunction(H, XN, YN, lambdas, n))
m.Equation(sum(lambdas) == 1)
m.solve(disp=True)  # solve on public server

# Results
print('')
print('Results')
print('x1: ' + str(lambdas[0].value))
print('x2: ' + str(lambdas[1].value))
print('x3: ' + str(lambdas[2].value))
The thing is, although I've checked similar problems raised on Stack Overflow and tried to mimic the recommended solutions, I cannot figure out what is wrong: the above code gives the following error.
Traceback (most recent call last):
m.Minimize(calculateLossFunction(H, XN, YN, lambdas, n))
File "C:\Users\admin\PycharmProjects\nonlinear.py", line 13, in calculateLossFunction
RNJ = math.sqrt((x_star - x[i]) ** 2 + (y_star - y[i]) ** 2)
TypeError: must be real number, not GK_Operators
I've also read the documentation but couldn't find any solution.
Thanks in advance for your answers.
Use the gekko functions m.sqrt() and m.atan() instead of math.sqrt() and math.atan(). The TypeError: must be real number, not GK_Operators is from the math function. There is no math.degrees() equivalent in gekko, so use 360.0/(2.0*np.pi) for the conversion. Gekko uses gradient-based optimizers that require overloading of the operators and functions for automatic differentiation to provide exact 1st and 2nd derivatives of constraints and objectives. Some functions are compatible such as np.dot() while others do not return a symbolic solution, such as math.sqrt(). Here is a complete problem that solves successfully:
from gekko import GEKKO
import numpy as np

A = 1.0; B = 0.0

def calculateLossFunction(h, x, y, lmbd, n):
    sum = 0
    x_star = np.dot(np.transpose(lmbd), x)
    y_star = np.dot(np.transpose(lmbd), y)
    for i in range(n):
        RNJ = m.sqrt((x_star - x[i]) ** 2 + (y_star - y[i]) ** 2)
        P = 1 / (360.0*(m.atan(h[i] / RNJ)/(2.0*np.pi)))
        sum += A * P + B
    return sum

m = GEKKO(remote=True)
eq = m.Param()
H = [500, 1500, 2500]
locations = np.array([[1, 2],
                      [2, 3],
                      [3, 1]])
XN = locations[:, 0]
YN = locations[:, 1]
n = len(locations)
lambdas = m.Array(m.Var, n, lb=0, ub=1, value=0)
lambdas[0].value = 1
m.Minimize(calculateLossFunction(H, XN, YN, lambdas, n))
m.Equation(sum(lambdas) == 1)
m.solve(disp=True)  # solve on public server

print('Results')
print('x1: ' + str(lambdas[0].value))
print('x2: ' + str(lambdas[1].value))
print('x3: ' + str(lambdas[2].value))
Solution with sample A=1.0 and B=0.0 values:
Results
x1: [0.99999702144]
x2: [1.9787728836e-06]
x3: [9.9978717975e-07]
and the solver output:
Number of Iterations....: 113
(scaled) (unscaled)
Objective...............: 3.3346336759950239e-02 3.3346336759950239e-02
Dual infeasibility......: 8.4348140936638533e-07 8.4348140936638533e-07
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 1.0000010522025397e-11 1.0000010522025397e-11
Overall NLP error.......: 8.4348140936638533e-07 8.4348140936638533e-07
Number of objective function evaluations = 1237
Number of objective gradient evaluations = 114
Number of equality constraint evaluations = 1237
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 114
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 113
Total CPU secs in IPOPT (w/o function evaluations) = 0.067
Total CPU secs in NLP function evaluations = 0.034
EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is 3.334633675995024E-002
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 0.131699999998091 sec
Objective : 3.334633675995024E-002
Successful solution
---------------------------------------------------
Trigonometric functions sometimes need constraints on the variables to ensure that a NaN value is not returned, or to make a solution unique (such as cos(np.pi) and cos(-np.pi)).
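As an illustration of that last point, bounding the variable is usually enough; here is a minimal sketch (assuming m.acos, which is only defined on [-1, 1]):
from gekko import GEKKO

m = GEKKO(remote=True)
# the bounds keep the acos argument inside [-1, 1], so the solver
# never evaluates the function where it would return NaN
x = m.Var(value=0.5, lb=-1, ub=1)
m.Minimize(m.acos(x))
m.solve(disp=False)
print(x.value)  # expected near 1, where acos(x) reaches its minimum of 0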

Calculating the value of the objective function for a manual input in Gurobi

I have a complex model and I want to calculate the objective function value for different options (not just the optimal solution).
I created the toy example below:
import gurobipy as gp
from gurobipy import GRB
import numpy as np

m = gp.Model()
X = {}
for i in range(5):
    X[i] = m.addVar(vtype=GRB.INTEGER, name=f"x[{i}]")
obj_func = 0
for i in range(5):
    np.random.seed(i)
    obj_func += np.random.rand() * X[i]
m.setObjective(obj_func, GRB.MINIMIZE)
m.addConstr(2 * X[1] + 5 * X[3] <= 10, name='const1')
m.addConstr(X[0] + 3 * X[4] >= 4, name='const2')
m.addConstr(3 * X[1] - X[4] <= 3, name='const3')
m.optimize()

for x_index, x in X.items():
    if round(x.X) > 0:
        print(x_index, x.X)
# 0 1.0
# 4 1.0
How can I calculate the objective function for the manual input of X = [0,1,1,0,0] or X = [1,0,0,0,1]? I want the output to return the objective function value if this input is a feasible solution, and a warning otherwise. I could hard-code the problem, but I would rather extract the objective coefficients directly from m (the model) and multiply them with the new X input.
A pretty simple idea: you could fix all the variables in the model by setting their lower and upper bounds to your given input and then solve the model. Afterwards, the model.objVal attribute contains the objective value for your given input solution. A straightforward implementation looks like this:
import numpy as np

given_sols = np.array([[0, 1, 1, 0, 0], [1, 0, 0, 0, 1]])

# Evaluate the objective for multiple given solutions
def get_obj_vals(model, given_sols):
    obj_vals = np.nan * np.zeros(given_sols.shape[0])
    # for each given solution..
    for k, x in enumerate(given_sols):
        # reset the model
        model.reset()
        # fix the lower/upper bounds
        for i, var in enumerate(model.getVars()):
            var.lb = x[i]
            var.ub = x[i]
        # solve the problem
        model.optimize()
        if model.Status == GRB.Status.OPTIMAL:
            obj_vals[k] = model.objVal
    return obj_vals
Here, nan in obj_vals[i] means that given_sols[i] is not feasible.
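For example, with the toy model m and given_sols from above:
obj_vals = get_obj_vals(m, given_sols)
print(obj_vals)
# [0,1,1,0,0] violates const2 (x[0] + 3*x[4] >= 4), so its entry is nan;
# [1,0,0,0,1] is feasible, so its entry holds the objective value.
Note that this permanently overwrites the original variable bounds; if you want to keep optimizing the original model afterwards, save the bounds first and restore them.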

Linear inequality constraint not working in Drake

I am learning how to use Drake to solving optimization problems.
The problem was to find the optimal length and width of a fence: the fence must have a perimeter less than or equal to 40. The code below only works when the perimeter constraint is an equality constraint. It should also work as an inequality constraint, but my optimal solution results in x = [nan nan]. Does anyone know why this is the case?
from pydrake.solvers.mathematicalprogram import MathematicalProgram, Solve
import numpy as np
import matplotlib.pyplot as plt
prog = MathematicalProgram()
#add two decision variables
x = prog.NewContinuousVariables(2, "x")
# adds objective function where
#
#   min 0.5 xt * Q * x + bt * x
#
#   Q = [ 0, -1,
#        -1,  0]
#
#   bt = [0,
#         0]
#
Q = [[0, -1], [-1, 0]]
b = [[0], [0]]
prog.AddQuadraticCost(Q, b, vars=[x[0], x[1]])
# Adds the linear constraints.
prog.AddLinearEqualityConstraint(2*x[0] + 2*x[1] == 40)
#prog.AddLinearConstraint(2*x[0] + 2*x[1] <= 40)
prog.AddLinearConstraint(0*x[0] + -1*x[1] <= 0)
prog.AddLinearConstraint(-1*x[0] + 0*x[1] <= 0)
# Solve the program.
result = Solve(prog)
print(f"optimal solution x: {result.GetSolution(x)}")
I get [nan, nan] for both inequality constraint and equality constraint.
As Russ mentioned, the problem is the cost being non-convex, so Drake invoked the wrong solver. For the moment, I would suggest explicitly designating a solver. You could do
from pydrake.solvers.ipopt_solver import IpoptSolver
from pydrake.solvers.mathematicalprogram import MathematicalProgram, Solve
import numpy as np
import matplotlib.pyplot as plt
prog = MathematicalProgram()
#add two decision variables
x = prog.NewContinuousVariables(2, "x")
# adds objective function where
#
#   min 0.5 xt * Q * x + bt * x
#
#   Q = [ 0, -1,
#        -1,  0]
#
#   bt = [0,
#         0]
#
Q = [[0, -1], [-1, 0]]
b = [[0], [0]]
prog.AddQuadraticCost(Q, b, vars=[x[0], x[1]])
# Adds the linear constraints.
prog.AddLinearEqualityConstraint(2*x[0] + 2*x[1] == 40)
#prog.AddLinearConstraint(2*x[0] + 2*x[1] <= 40)
prog.AddLinearConstraint(0*x[0] + -1*x[1] <= 0)
prog.AddLinearConstraint(-1*x[0] + 0*x[1] <= 0)
# Solve the program.
solver = IpoptSolver()
result = solver.Solve(prog)
print(f"optimal solution x: {result.GetSolution(x)}")
I will work on a fix on the Drake side, to make sure it invokes the right solver when you have a non-convex quadratic cost.
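To double-check which solver actually ran and whether it succeeded, the MathematicalProgramResult can report it, e.g.:
print(result.is_success())
print(result.get_solver_id().name())  # e.g. "IPOPT" when IpoptSolver was used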

How to add several constraints to differential_evolution?

I have the same problem as in this question, but I want to add not just one but several constraints to the optimization problem.
So e.g. I want to maximize x1 + 5 * x2 with the constraints that the sum of x1 and x2 is smaller than 5 and x2 is smaller than 3 (needless to say, the actual problem is far more complicated and cannot just be thrown into scipy.optimize.minimize like this one; it just serves to illustrate the problem...).
I can do an ugly hack like this:
from scipy.optimize import differential_evolution
import numpy as np

def simple_test(x, more_constraints):
    # check whether all constraints evaluate to True
    if all(map(eval, more_constraints)):
        return -1 * (x[0] + 5 * x[1])
    # if not all constraints evaluate to True, return a positive number
    return 10

bounds = [(0., 5.), (0., 5.)]
additional_constraints = ['x[0] + x[1] <= 5.', 'x[1] <= 3']
result = differential_evolution(simple_test, bounds, args=(additional_constraints, ), tol=1e-6)
print(result.x, result.fun, sum(result.x))
This will print
[ 1.99999986 3. ] -16.9999998396 4.99999985882
as one would expect.
Is there a better/more straightforward way to add several constraints than using the rather 'dangerous' eval?
An example is something like this:
additional_constraints = [lambda x: x[0] + x[1] <= 5., lambda x: x[1] <= 3]

def simple_test(x, more_constraints):
    # check whether all constraints evaluate to True
    if all(constraint(x) for constraint in more_constraints):
        return -1 * (x[0] + 5 * x[1])
    # if not all constraints evaluate to True, return a positive number
    return 10
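It is called the same way as the eval variant, e.g.:
bounds = [(0., 5.), (0., 5.)]
result = differential_evolution(simple_test, bounds,
                                args=(additional_constraints, ), tol=1e-6)
print(result.x, result.fun, sum(result.x))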
There is a proper solution to the problem described in the question: enforcing multiple nonlinear constraints with scipy.optimize.differential_evolution via the scipy.optimize.NonlinearConstraint class.
Below I give a non-trivial example of optimizing the classic Rosenbrock function inside a region defined by the intersection of two circles.
import numpy as np
from scipy import optimize

# Rosenbrock function
def fun(x):
    return 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2

# Function defining the nonlinear constraints:
# 1) x^2 + (y - 3)^2 < 4
# 2) (x - 1)^2 + (y + 1)^2 < 13
def constr_fun(x):
    r1 = x[0]**2 + (x[1] - 3)**2
    r2 = (x[0] - 1)**2 + (x[1] + 1)**2
    return r1, r2

# No lower limit on constr_fun
lb = [-np.inf, -np.inf]
# Upper limit on constr_fun
ub = [4, 13]

# Bounds are irrelevant for this problem, but are needed
# for differential_evolution to compute the starting points
bounds = [[-2.2, 1.5], [-0.5, 2.2]]
nlc = optimize.NonlinearConstraint(constr_fun, lb, ub)
sol = optimize.differential_evolution(fun, bounds, constraints=nlc)

# Accurate solution by Mathematica
true = [1.174907377273171, 1.381484428610871]
print(f"nfev = {sol.nfev}")
print(f"x = {sol.x}")
print(f"err = {sol.x - true}\n")
This prints the following with default parameters:
nfev = 636
x = [1.17490808 1.38148613]
err = [7.06260962e-07 1.70116282e-06]
Here is a visualization of the function (contours) and the feasible region defined by the nonlinear constraints (shading inside the green line). The constrained global minimum is indicated by the yellow dot, while the magenta one shows the unconstrained global minimum.
This constrained problem has an obvious local minimum at (x, y) ~ (-1.2, 1.4) on the boundary of the feasible region which will make local optimizers fail to converge to the global minimum for many starting locations. However, differential_evolution consistently finds the global minimum as expected.
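To see that in practice, here is a sketch reusing fun, constr_fun, lb and ub from above, starting a local gradient-based method near that local minimum:
from scipy.optimize import minimize, NonlinearConstraint

nlc_local = NonlinearConstraint(constr_fun, lb, ub)
res = minimize(fun, x0=[-1.2, 1.4], method="trust-constr", constraints=nlc_local)
print(res.x)  # expected to remain near (-1.2, 1.4) instead of the global minimum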
