Issue with scipy minimizer and equation - python

I am trying to minimize a simple equation with the scipy minimizer but, weirdly, it seems the minimizer does not even try and sends me back a really bad result.
The equation has two variables that I'd like to optimize so that the formula is minimized. Here is the code I use:
from scipy.stats import poisson
import scipy.optimize

def objective_function(guess):
    x = guess[0]
    y = guess[1]
    return (poisson.pmf(1, x) * poisson.pmf(2, y) - 1/9.4
            + poisson.pmf(1, x) * poisson.pmf(3, y) - 1/14)

initialGuess = [0.0, 0.0]
scipy.optimize.minimize(objective_function, initialGuess)
and here is the result I get from the minimizer:
fun: -0.1778115501519757
hess_inv: array([[1, 0],
[0, 1]])
jac: array([0., 0.])
message: 'Optimization terminated successfully.'
nfev: 3
nit: 0
njev: 1
status: 0
success: True
x: array([0., 0.])
Testing on my side, I can clearly see that this is not even close to the best answer; [1, 1.5], for example, returns about -0.03.
Is there something big I am missing about the optimizer from scipy?
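For reference, here is a quick check (a sketch, not part of the original question) that evaluates the objective, as defined above, at the optimizer's answer and at the point suggested in the question:

# Compare the objective value at both points directly.
print(objective_function([0.0, 0.0]))   # the point the optimizer returned
print(objective_function([1.0, 1.5]))   # the point suggested in the question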

Related

Unable to use Scipy.optimize.minimize 'SLSQP' due to 'Positive directional derivative for linesearch' or 'Constraint inconsistent'

I am struggling to solve a simple optimisation problem using scipy.optimize.minimize with 'SLSQP'.
Apologies in advance, I am not an expert, but from reading through some of the posts, it seems the message 'Positive directional derivative for linesearch' is thrown when the SLSQP solver couldn't find a minimum (fast enough).
I'd appreciate it if someone could advise whether I am using the wrong solver for this problem, and whether there is any way I can get around the issue.
import numpy as np
import scipy.optimize as sciopt

target = 142
V = np.array([173.3, 5678.8, 67898.98, 67898.0, 678987.0, 9876.87, 7659.9])
C = np.array([0.1, 0.2, 0.56, 0.56, 0.22, 0.35, 0.21])
L = np.array([1, 1, 0, 0, 0, 0, 1])
init_wts = np.array([0, 0, 0, 0, 0, 0, 0])

def min_cost(wts):
    return np.dot(C, np.multiply(wts, V))

def constraint1(wts):
    return np.dot(wts, V) - target

def constraint2(wts):
    return 0.2 * target - np.dot(L, np.multiply(wts, V))

cons1 = {'type': 'eq', 'fun': constraint1}
cons2 = {'type': 'ineq', 'fun': constraint2}
bnds = tuple((0, 1) for _ in range(len(V)))
sol = sciopt.minimize(min_cost, init_wts, method='SLSQP',
                      bounds=bnds, constraints=[cons1, cons2])
print(sol)
This produces the following result:
fun: 0.0
jac: array([1.73300000e+01, 1.13576000e+03, 3.80234288e+04, 3.80228800e+04,
1.49377140e+05, 3.45690450e+03, 1.60857900e+03])
message: 'Positive directional derivative for linesearch'
nfev: 9
nit: 5
njev: 1
status: 8
success: False
x: array([0., 0., 0., 0., 0., 0., 0.])
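One way to see what is going wrong (a sketch, not part of the original question) is to check the constraints at the returned point, and to retry from a small nonzero floating-point starting guess, which may help SLSQP move off a degenerate initial point:

# How badly does the returned point violate the constraints?
print(constraint1(sol.x))   # equality residual: should be ~0 when feasible
print(constraint2(sol.x))   # inequality: should be >= 0 when feasible

# Retry from a small nonzero starting point (floats, not ints).
init_wts = np.full(len(V), 0.01)
sol = sciopt.minimize(min_cost, init_wts, method='SLSQP',
                      bounds=bnds, constraints=[cons1, cons2])
print(sol.status, sol.x)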

Scipy optimize.minimize exits successfully when constraints aren't satisfied

I've been using scipy.optimize.minimize (docs)
and noticed some strange behavior when I define a problem with impossible-to-satisfy constraints. Here's an example:
from scipy import optimize

# minimize f(x) = x^2 - 4x
def f(x):
    return x**2 - 4*x

def x_constraint(x, sign, value):
    return sign*(x - value)

# subject to x >= 5 and x <= 0 (not possible)
constraints = []
constraints.append({'type': 'ineq', 'fun': x_constraint, 'args': [1, 5]})
constraints.append({'type': 'ineq', 'fun': x_constraint, 'args': [-1, 0]})

optimize.minimize(f, x0=3, constraints=constraints)
Resulting output:
fun: -3.0
jac: array([ 2.])
message: 'Optimization terminated successfully.'
nfev: 3
nit: 5
njev: 1
status: 0
success: True
x: array([ 3.])
There is no solution to this problem that satisfies the constraints; however, minimize() returns successfully, using the initial condition as the optimal solution.
Is this behavior intended? If so, is there a way to force failure if the optimal solution doesn't satisfy the constraints?
This appears to be a bug. I added a comment with a variation of your example to the issue on github.
If you use a different method, such as COBYLA, the function correctly fails to find a solution:
In [10]: optimize.minimize(f, x0=3, constraints=constraints, method='COBYLA')
Out[10]:
fun: -3.75
maxcv: 2.5
message: 'Did not converge to a solution satisfying the constraints. See `maxcv` for magnitude of violation.'
nfev: 7
status: 4
success: False
x: array(2.5)
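Until the bug is fixed, one workaround (a sketch, not part of the original answer) is to re-check the constraints yourself whenever the default solver reports success:

res = optimize.minimize(f, x0=3, constraints=constraints)

# For 'ineq' constraints, fun(x) must be >= 0 (up to a small tolerance).
x = float(res.x)
feasible = all(c['fun'](x, *c.get('args', ())) >= -1e-8 for c in constraints)
if res.success and not feasible:
    print("Solver reported success, but the constraints are violated.")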

Solve a nonlinear equation system with constraints on the variables

Some hypothetical example solving a nonlinear equation system with fsolve:
from scipy.optimize import fsolve
import math

def equations(p):
    x, y = p
    return (x + y**2 - 4, math.exp(x) + x*y - 3)

x, y = fsolve(equations, (1, 1))
print(equations((x, y)))
Is it somehow possible to solve it using scipy.optimize.brentq with some interval, e.g. [-1,1]? How does the unpacking work in that case?
As sascha suggested, constrained optimization is the easiest way to proceed. The least_squares method is convenient here: you can directly pass your equations to it, and it will minimize the sum of squares of its components.
from scipy.optimize import least_squares
res = least_squares(equations, (1, 1), bounds = ((-1, -1), (2, 2)))
The structure of bounds is ((min_first_var, min_second_var), (max_first_var, max_second_var)), or similarly for more variables.
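For instance (an illustrative sketch with a hypothetical three-variable system equations3, not from the original answer), the call would look like:

# equations3 is a hypothetical function returning three residuals in (x, y, z).
res3 = least_squares(equations3, (1, 1, 1),
                     bounds=((-1, -1, 0), (2, 2, 5)))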
The object returned for the original two-variable problem has a bunch of fields, shown below. The most relevant ones are: res.cost is essentially zero, which means a root was found; and res.x says what the root is: [0.62034453, 1.83838393].
active_mask: array([0, 0])
cost: 1.1745369255773682e-16
fun: array([ -1.47918522e-08, 4.01353883e-09])
grad: array([ 5.00239352e-11, -5.18964300e-08])
jac: array([[ 1. , 3.67676787],
[ 3.69795254, 0.62034452]])
message: '`gtol` termination condition is satisfied.'
nfev: 7
njev: 7
optimality: 8.3872972696740977e-09
status: 1
success: True
x: array([ 0.62034453, 1.83838393])
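As a quick sanity check (a sketch, not part of the original answer), you can substitute the reported root back into the system:

import numpy as np

print(equations(res.x))        # both residuals ~0
print(np.sqrt(2 * res.cost))   # residual norm; cost is half the sum of squares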

Trying to understand how scipy.optimize can be used?

I am trying to use the scipy.optimize.minimize function. First, I am trying something very simple.
I define:
import math
from scipy import optimize, stats

lik1 = lambda n, k, p: math.log(stats.binom.pmf(k, n, p))
I am trying to see if minimize will give me the correct MLE, which is p == k/n.
Then I try:
optimize.minimize(lik1, 0.5, args=(10,2))
where I am assuming n == 10 and k == 2, and my guess for p (the argument x0) is 0.5. I get the following failed result:
fun: nan
hess_inv: array([[1]])
jac: array([ nan])
message: 'Desired error not necessarily achieved due to precision loss.'
nfev: 3
nit: 0
njev: 1
status: 2
success: False
x: array([ 0.5])
What am I doing wrong?
A few changes:
1. Select a more appropriate minimization method for this problem. minimize defaults to the BFGS method when no constraints or bounds are provided, which is a method for unconstrained optimization; it fails here because it tries to evaluate the function for values of p > 1. You could provide some reasonable bounds (see the sketch after the output below), or use the TNC method, which I've found works in this instance.
2. The order of the function arguments should be (p, n, k), since minimize optimizes over the first argument.
3. You want to maximize the log-likelihood, or equivalently minimize the negative of the log.
Code:
import numpy as np
import scipy as sp
import scipy.stats
import scipy.optimize

# Negative log-likelihood of one binomial observation, as a function of p.
# (np.log rather than sp.log: recent scipy no longer re-exports numpy functions.)
lik1 = lambda p, n, k: -np.log(sp.stats.binom.pmf(k, n, p))
res = sp.optimize.minimize(lik1, 0.5, args=(10, 2), method='TNC')
print(res)
Output:
fun: array([ 1.19736175])
jac: array([ 1.22124533e-05])
message: 'Converged (|f_n-f_(n-1)| ~= 0)'
nfev: 10
nit: 4
status: 1
success: True
x: array([ 0.20000019])
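Alternatively (a sketch, not part of the original answer), you can keep the default solver and just supply bounds on p as suggested above; minimize then selects a bound-aware method (L-BFGS-B) automatically:

# Keep p strictly inside (0, 1) so the log-likelihood stays finite.
res_b = sp.optimize.minimize(lik1, 0.5, args=(10, 2),
                             bounds=[(1e-6, 1 - 1e-6)])
print(res_b.x)   # ~0.2, i.e. k/n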

linear programming slack output more than input

VERSIONS:
SciPy: 0.16
PROBLEM
I'm trying to optimize the benefits function (code below), but the slack output doesn't appear correct (the values circled in red in the linked table) compared with what the result should be.
The last two values are similar, but one (120) is lost, and I don't know why.
In [1]:
import numpy as np
In [2]:
from scipy.optimize import linprog
In [3]:
A = np.array([[1, 0], [0, 1], [1, 2]])
In [4]:
# available resources ("dispo")
b = [60, 50, 120]
bounds = ([1, None], [1, None])
In [5]:
c = np.array([80, 120])
In [10]:
sol = linprog(-c, A, b, bounds=bounds)
In [17]:
sol
Out[17]:
status: 0
slack: array([ 0., 20., 0., 59., 29.])
nit: 5
success: True
fun: -8400.0
message: 'Optimization terminated successfully.'
x: array([ 60., 30.])
For better context link to gist
You are looking at the wrong place in your table. linprog computes sol.x as the values in the "Producción" (production) row. It does not return the values in the column you circled, but you can easily compute them yourself, as the sketch below shows.
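For example (a sketch; the circled table itself is only in the linked gist):

used = A.dot(sol.x)   # resources consumed in each constraint row
print(used)           # [60., 30., 120.] -- the 120 is here, not in the slack
print(b - used)       # [0., 20., 0.] -- matches the first three slack values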
