Scipy optimize.minimize exits successfully when constraints aren't satisfied - python

I've been using scipy.optimize.minimize (docs) and noticed some strange behavior when I define a problem with constraints that are impossible to satisfy. Here's an example:
from scipy import optimize
# minimize f(x) = x^2 - 4x
def f(x):
    return x**2 - 4*x

def x_constraint(x, sign, value):
    return sign*(x - value)
# subject to x >= 5 and x<=0 (not possible)
constraints = []
constraints.append({'type': 'ineq', 'fun': x_constraint, 'args': [1, 5]})
constraints.append({'type': 'ineq', 'fun': x_constraint, 'args': [-1, 0]})
optimize.minimize(f, x0=3, constraints=constraints)
Resulting output:
fun: -3.0
jac: array([ 2.])
message: 'Optimization terminated successfully.'
nfev: 3
nit: 5
njev: 1
status: 0
success: True
x: array([ 3.])
There is no solution to this problem that satisfies the constraints; however, minimize() returns successfully, using the initial condition as the optimal solution.
Is this behavior intended? If so, is there a way to force failure if the optimal solution doesn't satisfy the constraints?

This appears to be a bug. I added a comment with a variation of your example to the issue on GitHub.
If you use a different method, such as COBYLA, the function correctly fails to find a solution:
In [10]: optimize.minimize(f, x0=3, constraints=constraints, method='COBYLA')
Out[10]:
fun: -3.75
maxcv: 2.5
message: 'Did not converge to a solution satisfying the constraints. See `maxcv` for magnitude of violation.'
nfev: 7
status: 4
success: False
x: array(2.5)
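If you need to stay with the default SLSQP in the meantime, one workaround is to re-check the constraints yourself after the solver returns and downgrade the result to a failure when they are violated. A minimal sketch (minimize_checked is a hypothetical helper, not part of SciPy):
import numpy as np
from scipy import optimize

def minimize_checked(fun, x0, constraints, tol=1e-8, **kwargs):
    # Run minimize, then re-evaluate every constraint at the returned point
    # and flag the result as failed if any of them is violated.
    res = optimize.minimize(fun, x0, constraints=constraints, **kwargs)
    for con in constraints:
        val = np.atleast_1d(con['fun'](res.x, *con.get('args', ())))
        if con['type'] == 'ineq' and np.any(val < -tol):
            res.success = False
            res.message = 'Inequality constraint violated at returned point.'
        elif con['type'] == 'eq' and np.any(np.abs(val) > tol):
            res.success = False
            res.message = 'Equality constraint violated at returned point.'
    return res

# With the example above, the "successful" SLSQP answer is now rejected:
# minimize_checked(f, 3, constraints).success  ->  False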

Related

Issue with scipy minimizer and equation

I am trying to minimize a simple equation with the scipy minimizer but, weirdly, it seems the minimizer does not even try and sends me back a really bad result.
The equation has two different variables that I'd like to optimize so that the formula is minimized; here is the code I use:
from scipy.stats import poisson
import scipy.optimize
def objective_function(guess):
    x = guess[0]
    y = guess[1]
    return poisson.pmf(1, x) * poisson.pmf(2, y) - 1/9.4 + poisson.pmf(1, x) * poisson.pmf(3, y) - 1/14
initialGuess = [0.0, 0.0]
scipy.optimize.minimize(objective_function, initialGuess)
and here is the result I get from the minimizer:
fun: -0.1778115501519757
hess_inv: array([[1, 0],
[0, 1]])
jac: array([0., 0.])
message: 'Optimization terminated successfully.'
nfev: 3
nit: 0
njev: 1
status: 0
success: True
x: array([0., 0.])
Trying on my side, I can clearly see that it is not even close to the best answer, as [1, 1.5] will, for example, return -0.03.
Is there a big thing I am missing with the optimizer from scipy?
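For reference, the two points can be compared directly by reusing objective_function from the question; because poisson.pmf(k, 0) is 0 for k > 0, the point [0, 0] evaluates to -(1/9.4 + 1/14) ≈ -0.178, which is lower than the roughly -0.04 at [1, 1.5], so the reported minimum really is the smaller of the two values:
print(objective_function([0.0, 0.0]))  # about -0.178 = -(1/9.4 + 1/14)
print(objective_function([1.0, 1.5]))  # about -0.04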

Unable to use Scipy.optimize.minimize 'SLSQP' due to 'Positive directional derivative for linesearch' or 'Constraint inconsistent'

I am struggling to solve a simple optimisation problem using scipy.optimize.minimize with 'SLSQP'.
Apologies in advance, I am not an expert, but from reading through some of the posts it appears the message 'Positive directional derivative for linesearch' is thrown when the SLSQP solver couldn't find a minimum (fast enough).
I'd appreciate it if someone could advise whether I am using the wrong solver for this problem, and whether there is any way I can get around the issue.
import numpy as np
import scipy.optimize as sciopt
target=142
V=np.array([173.3, 5678.8,67898.98, 67898.0, 678987.0, 9876.87, 7659.9 ])
C=np.array([0.1,0.2,0.56,0.56,0.22,0.35,0.21])
L=np.array([1,1,0,0,0,0,1])
init_wts=np.array([0,0,0,0,0,0,0])
def min_cost(wts):
    return np.dot(C, np.multiply(wts, V))

def constraint1(wts):
    return np.dot(wts, V) - target

def constraint2(wts):
    return 0.2*target - np.dot(L, np.multiply(wts, V))
cons1 = ({'type': 'eq','fun': constraint1})
cons2 = ({'type': 'ineq','fun': constraint2})
bnds = tuple((0,1) for wts in range(len(V)))
sol =sciopt.minimize(min_cost, init_wts, method ='SLSQP', bounds=bnds, constraints=[cons1,cons2])
print(sol)
This produces following result:
fun: 0.0
jac: array([1.73300000e+01, 1.13576000e+03, 3.80234288e+04, 3.80228800e+04,
1.49377140e+05, 3.45690450e+03, 1.60857900e+03])
message: 'Positive directional derivative for linesearch'
nfev: 9
nit: 5
njev: 1
status: 8
success: False
x: array([0., 0., 0., 0., 0., 0., 0.])
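A quick diagnostic, reusing the functions defined above, is to plug the returned point back into the constraints; it shows the equality constraint is nowhere near satisfied at the all-zero weights the solver stopped at:
print(constraint1(sol.x))  # dot(wts, V) - target = -142.0, equality constraint badly violated
print(constraint2(sol.x))  # 0.2*target - dot(L, wts*V) = 28.4, inequality satisfied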

Constrained Optimization Problem : Python

I am sure there must be a simple solution that keeps evading me.
I have a function
f = a*x + b*y + c*z
and a constraint
l*x + m*y + n*z = B
I need to find the (x, y, z) that maximizes f subject to the constraint.
I also need
x, y, z >= 0
I remember having seen a solution like this.
This example uses
a,b,c=2,4,10 and l,m,n=1,2,4 and B=5
Ideally, this should give me x=1, y=0, z=1, such that f=12.
import numpy as np
from scipy.optimize import minimize
def objective(x, sign=-1.0):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    return sign*((2*x1) + (4*x2) + (10*x3))

def constraint1(x, sign=1.0):
    return sign*(1*x[0] + 2*x[1] + 4*x[2] - 5)
x0=[0,0,0]
b1 = (0,None)
b2 = (0,None)
b3=(0,None)
bnds= (b1,b2,b3)
con1 = {'type': 'ineq', 'fun': constraint1}
cons = [con1]
sol = minimize (objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
print(sol)
This is generating a bizarre solution. What am I missing?
The problem as you originally stated it, without integer constraints, can be solved simply and efficiently by linprog:
import scipy.optimize
c = [-2, -4, -10]
A_eq = [[1, 2, 4]]
b_eq = 5
# bounds are for non-negative values by default
scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
I would recommend against using more general-purpose solvers for narrow problems like this, as you will often encounter worse performance and sometimes unexpected results.
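For completeness, a sketch of reading the result back from linprog (x and fun are standard fields of the returned object):
res = scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
print(res.x)     # optimal point, here [0., 0., 1.25]
print(-res.fun)  # maximum of the original (un-negated) objective: 12.5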
You need to change your constraint to an 'equality constraint'. Also, your problem didn't specify that integer answers were required, so there is a better non-integer answer to this knapsack problem. (I don't have much experience with scipy.optimize and I'm not sure whether it can handle integer LP problems.)
In [13]: con1 = {'type': 'eq', 'fun': constraint1}
In [14]: cons = [con1,]
In [15]: sol = minimize (objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
In [16]: print(sol)
fun: -12.5
jac: array([ -2., -4., -10.])
message: 'Optimization terminated successfully.'
nfev: 10
nit: 2
njev: 2
status: 0
success: True
x: array([0. , 0. , 1.25])
As Jeff noted, scipy.optimize does not handle integer programming problems.
You can try using PuLP instead for integer optimization problems:
from pulp import *
prob = LpProblem("F Problem", LpMaximize)
# a,b,c=2,4,10 and l,m,n=1,2,4 and B=5
a,b,c=2,4,10
l,m,n=1,2,4
B=5
# x,y,z>=0
x = LpVariable("x",0,None,LpInteger)
y = LpVariable("y",0,None,LpInteger)
z = LpVariable("z",0,None,LpInteger)
# f=ax+by+c*z
prob += a*x + b*y + c*z, "Objective Function f"
# lx+my+n*z=B
prob += l*x + m*y + n*z == B, "Constraint B"
# solve
prob.solve()
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    print(v.name, "=", v.varValue)
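To also read back the objective value after solving, PuLP's value helper can be used:
print("f =", value(prob.objective))  # should print 12 for the data above (x=1, y=0, z=1)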
See the PuLP documentation for more details.

Solve a nonlinear equation system with constraints on the variables

Here is a hypothetical example of solving a nonlinear equation system with fsolve:
from scipy.optimize import fsolve
import math
def equations(p):
    x, y = p
    return (x + y**2 - 4, math.exp(x) + x*y - 3)
x, y = fsolve(equations, (1, 1))
print(equations((x, y)))
Is it somehow possible to solve it using scipy.optimize.brentq with some interval, e.g. [-1,1]? How does the unpacking work in that case?
As sascha suggested, constrained optimization is the easiest way to proceed. The least_squares method is convenient here: you can directly pass your equations to it, and it will minimize the sum of squares of its components.
from scipy.optimize import least_squares
res = least_squares(equations, (1, 1), bounds = ((-1, -1), (2, 2)))
The structure of bounds is ((min_first_var, min_second_var), (max_first_var, max_second_var)), or similarly for more variables.
The resulting object has a bunch of fields, shown below. The most relevant ones are: res.cost is essentially zero, which means a root was found; and res.x says what the root is: [ 0.62034453, 1.83838393]
active_mask: array([0, 0])
cost: 1.1745369255773682e-16
fun: array([ -1.47918522e-08, 4.01353883e-09])
grad: array([ 5.00239352e-11, -5.18964300e-08])
jac: array([[ 1. , 3.67676787],
[ 3.69795254, 0.62034452]])
message: '`gtol` termination condition is satisfied.'
nfev: 7
njev: 7
optimality: 8.3872972696740977e-09
status: 1
success: True
x: array([ 0.62034453, 1.83838393])
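Plugging res.x back into equations from the question confirms a root was found:
print(equations(res.x))  # both components are on the order of 1e-8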

Trying to understand how scipy.optimize can be used?

I am trying to use the scipy.optimize.minimize function. First, I am trying something very simple.
I define:
lik1 = lambda n,k,p: math.log(stats.binom.pmf(k,n,p))
I am trying to see if minimize will give me the correct MLE, which is, k/n == p.
Then I try:
optimize.minimize(lik1, 0.5, args=(10,2))
where I am assuming n == 10 and k == 2 and my guess for p (the argument x0) is 0.5. I get the following result:
fun: nan
hess_inv: array([[1]])
jac: array([ nan])
message: 'Desired error not necessarily achieved due to precision loss.'
nfev: 3
nit: 0
njev: 1
status: 2
success: False
x: array([ 0.5])
What am I doing wrong?
A few changes are needed:
1. Select a more appropriate minimization method for this problem. The minimize function defaults to the BFGS method when no constraints or bounds are provided, which is a method for unconstrained optimization. It fails because it tries to evaluate the function for values of p > 1. You could provide some reasonable bounds, or I've found here that using the TNC method works in this instance.
2. The order of the function arguments should be (p, n, k).
3. You want to maximize the log, or equivalently minimize the negative of the log.
Code:
import scipy as sp
import scipy.stats
import scipy.optimize
lik1 = lambda p, n, k: -sp.log(sp.stats.binom.pmf(k, n, p))
res = sp.optimize.minimize(lik1, 0.5, args=(10, 2), method='TNC')
print(res)
Output:
fun: array([ 1.19736175])
jac: array([ 1.22124533e-05])
message: 'Converged (|f_n-f_(n-1)| ~= 0)'
nfev: 10
nit: 4
status: 1
success: True
x: array([ 0.20000019])
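As a sanity check, the analytic MLE for a binomial is k/n, which matches the reported x:
print(2 / 10)    # 0.2 = k/n, the analytic MLE
print(res.x[0])  # ~0.20000019 from the TNC run above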
