I am trying to use the scipy.optimize.minimize function. First, I am trying something very simple.
I define:
import math
from scipy import stats, optimize
lik1 = lambda n, k, p: math.log(stats.binom.pmf(k, n, p))
I am trying to see if minimize will give me the correct MLE, which is p == k/n.
Then I try:
optimize.minimize(lik1, 0.5, args=(10,2))
where I am assuming n == 10 and k == 2 and my guess for p (the argument x0) is 0.5. I get the following error:
fun: nan
hess_inv: array([[1]])
jac: array([ nan])
message: 'Desired error not necessarily achieved due to precision loss.'
nfev: 3
nit: 0
njev: 1
status: 2
success: False
x: array([ 0.5])
What am I doing wrong?
A few changes are needed:
Select a more appropriate minimization method for this problem. When no constraints or bounds are provided, minimize defaults to the BFGS method, which is a method for unconstrained optimization. It fails here because it tries to evaluate the function for values of p > 1, where the pmf is zero and the log is undefined. You could provide some reasonable bounds, or use the TNC method, which works in this instance.
The order of the function arguments should be (p, n, k): minimize varies the first argument, and the values in args are passed after it.
You want to maximize the log-likelihood, or equivalently minimize its negative, since minimize only minimizes.
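As a sanity check, the closed-form MLE drops straight out of the log-likelihood:
log L(p) = log C(n, k) + k*log(p) + (n - k)*log(1 - p)
d(log L)/dp = k/p - (n - k)/(1 - p) = 0  =>  p = k/n
so for n = 10 and k = 2 the optimizer should return p = 0.2.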
Code:
import numpy as np
import scipy.stats
import scipy.optimize

# negative log-likelihood as a function of p; n and k are passed via args
lik1 = lambda p, n, k: -np.log(scipy.stats.binom.pmf(k, n, p))
res = scipy.optimize.minimize(lik1, 0.5, args=(10, 2), method='TNC')
print(res)
Output:
fun: array([ 1.19736175])
jac: array([ 1.22124533e-05])
message: 'Converged (|f_n-f_(n-1)| ~= 0)'
nfev: 10
nit: 4
status: 1
success: True
x: array([ 0.20000019])
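The bounds route mentioned above works too; here is a minimal sketch (the choice of L-BFGS-B and the epsilon keeping p strictly inside (0, 1) are my own assumptions, not part of the original answer):
# bound p away from 0 and 1 so that log(pmf) stays finite
res = scipy.optimize.minimize(lik1, 0.5, args=(10, 2),
                              method='L-BFGS-B', bounds=[(1e-6, 1 - 1e-6)])
print(res.x)  # should again be close to k/n = 0.2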
I've been trying to find an efficient way of maximizing the following monster function in four variables, but the program is taking ages to run and I'm not even sure the results are correct. Can anyone help me code it better in Python?
Here's the function, a multivariate Gaussian likelihood:
P(Y|a) = (2*pi)^(-N/2) * det(C)^(-1/2) * exp(-0.5 * Y' * inv(C) * Y)
where
a = [p, q, r, s] and C is the N x N covariance matrix with entries C[i,j] = R(p, q, r, (j-i)*tstep) + s*I(i,j).
Y is the measured data sampled at 30 points.
Here's my code.
import numpy as np
import scipy.optimize as spo

Y = Y_t  # Y_t is a predefined column vector with 30 entries.
tstep = 0.05  # in s
N = 30
cov = np.zeros([30, 30])

def R(p, q, r, t):
    om_D = p*np.sqrt(1-q**2)
    return np.pi*r*(np.exp(-q*p*abs(t)))*(np.cos(om_D*t)+(q/(np.sqrt(1-q**2)))*(np.sin(om_D*abs(t))))/(2*q*(p**3))

def I(m, p):
    if m == p:
        return 1
    else:
        return 0

def func(a):
    a1 = a[0]  # natural angular frequency, bounds=[3,20]
    a2 = a[1]  # damping ratio, bounds=[0,1]
    a3 = a[2]  # psd of forcing signal, bounds=[300,600]
    a4 = a[3]  # variance of noise, bounds=[0,0.0001] in m
    # assuming a uniform prior for a, we only have to maximise the likelihood function
    for i in range(30):
        for j in range(30):
            cov[i, j] += R(a1, a2, a3, (j-i)*tstep) + a4*I(i, j)
    P = ((2*np.pi)**(-N/2)) * ((np.linalg.det(cov))**(-0.5)) * np.exp((-0.5)*np.linalg.multi_dot([np.transpose(Y), np.linalg.inv(cov), Y]))
    return (-1)*P[0]

a_start = [5, 0.05, 100, 0.00001]
bnds = ((5, 20), (0, 1), (300, 600), (0, 0.0001))
result = spo.differential_evolution(func, bounds=bnds)
print(result.x)
There is an issue with the cov initialization, which is why it does not converge: cov is created once at module level and accumulated with += inside func, so every evaluation adds onto the matrix left over from the previous one. It has to be re-initialized inside func. There is also an issue with the bounds on the damping ratio: they were (0, 1) and are now (0.0001, 0.999), because a ratio of exactly 0 or 1 causes a division by zero in R(). The fixed code and its output are below, after a small demonstration of the accumulation bug.
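A minimal sketch of the accumulation bug in isolation (a toy example, not the original function):
import numpy as np

cov = np.zeros([2, 2])

def evaluate():
    cov[0, 0] += 1.0  # the global matrix keeps its values between calls
    return cov[0, 0]

print(evaluate())  # 1.0
print(evaluate())  # 2.0 -- every evaluation builds on the previous one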
Code
import time
import numpy as np
from scipy.optimize import differential_evolution

Y = [[-0.00445551], [-0.01164452], [-0.02171495], [-0.03475491], [-0.00770873], [ 0.0492236 ],
     [ 0.07264838], [ 0.03066707], [-0.02457141], [-0.04065968], [-0.01135125], [ 0.02677074], [ 0.06517749],
     [ 0.09611112], [ 0.12300657], [ 0.0923581 ], [ 0.03982604], [-0.01473844], [-0.09024497], [-0.14304097],
     [-0.17447606], [-0.16926952], [-0.12006193], [-0.00120763], [ 0.11006087], [ 0.19978283], [ 0.24388584],
     [ 0.18768875], [ 0.12844553], [ 0.03099409]]  # Y_t is a predefined column vector with 30 entries.
tstep = 0.05  # in s
N = 30

def R(p, q, r, t):
    om_D = p*np.sqrt(1-q**2)
    return np.pi*r*(np.exp(-q*p*abs(t)))*(np.cos(om_D*t)+(q/(np.sqrt(1-q**2)))*(np.sin(om_D*abs(t))))/(2*q*(p**3))

def I(m, p):
    if m == p:
        return 1
    else:
        return 0

def func(a):
    cov = np.zeros([N, N])  # re-initialized on every call, fixing the accumulation bug
    a1 = a[0]  # natural angular frequency, bounds=[3,20]
    a2 = a[1]  # damping ratio, bounds=[0,1]
    a3 = a[2]  # psd of forcing signal, bounds=[300,600]
    a4 = a[3]  # variance of noise, bounds=[0,0.0001] in m
    # assuming a uniform prior for a, we only have to maximise the likelihood function
    for i in range(N):
        for j in range(N):
            cov[i, j] += R(a1, a2, a3, (j-i)*tstep) + a4*I(i, j)
    P = ((2*np.pi)**(-N/2)) * ((np.linalg.det(cov))**(-0.5)) * np.exp((-0.5)*np.linalg.multi_dot([np.transpose(Y), np.linalg.inv(cov), Y]))
    return (-1)*P[0]

if __name__ == '__main__':
    t0 = time.perf_counter()
    a_start = [5, 0.05, 350, 0.00001]
    bnds = ((5, 20), (0.0001, 0.999), (300, 600), (0, 0.0001))
    result = differential_evolution(func, x0=a_start, bounds=bnds, maxiter=1000)
    print(result)
    print(f'elapse: {time.perf_counter() - t0:0.0f}s')
Output
fun: array([-2.76736878e+11])
jac: array([-2.91459845e+11, -4.55652161e+12, 1.27377279e+10, 3.34234132e+14])
message: 'Optimization terminated successfully.'
nfev: 3430
nit: 56
success: True
x: array([ 20. , 0.999, 300. , 0. ])
elapse: 55s
SciPy's minimize is very fast on this problem.
Changes:
from scipy.optimize import minimize
result = minimize(func, x0=a_start, bounds=bnds, options={'maxiter': 100, 'disp': True})
Output:
fun: array([-2.76736878e+11])
hess_inv: <4x4 LbfgsInvHessProduct with dtype=float64>
jac: array([-2.91459845e+11, -4.55652161e+12, 1.27377279e+10, 3.34234132e+14])
message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 30
nit: 4
njev: 6
status: 0
success: True
x: array([ 20. , 0.999, 300. , 0. ])
elapse: 0.5s
Optuna
Optuna, after 1000 trials, arrives at essentially the same optimum.
The value is positive here because I use the maximize direction; with SciPy's DE and minimize, the values have to be negated. A sketch of the setup is below, followed by the results.
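A minimal sketch of an Optuna setup along these lines (the use of suggest_float and the exact search configuration behind the quoted numbers are assumptions):
import optuna

def objective(trial):
    a1 = trial.suggest_float('a1', 5, 20)          # natural angular frequency
    a2 = trial.suggest_float('a2', 0.0001, 0.999)  # damping ratio
    a3 = trial.suggest_float('a3', 300, 600)       # psd of forcing signal
    a4 = trial.suggest_float('a4', 0, 0.0001)      # variance of noise
    return -func([a1, a2, a3, a4]).item()          # positive likelihood, to be maximized

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=1000)
print('best param:', study.best_params)
print('best value:', study.best_value)
print('best trial num:', study.best_trial.number)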
best param: {'a1': 20, 'a2': 0.9989999999999999, 'a3': 300, 'a4': 0.0}
best value: 276736878140.3103
best trial num: 73
elapse: 22s
I am trying to minimize a simple equation with the SciPy minimizer but, weirdly, it seems the minimizer does not even try and sends me back a really bad result.
The equation has two different variables that I'd like to optimize so that the formula is minimized. Here is the code I use:
from scipy.stats import poisson
import scipy.optimize

def objective_function(guess):
    x = guess[0]
    y = guess[1]
    return poisson.pmf(1, x) * poisson.pmf(2, y) - 1/9.4 + poisson.pmf(1, x) * poisson.pmf(3, y) - 1/14

initialGuess = [0.0, 0.0]
scipy.optimize.minimize(objective_function, initialGuess)
and here is the result I get from the minimizer:
fun: -0.1778115501519757
hess_inv: array([[1, 0],
[0, 1]])
jac: array([0., 0.])
message: 'Optimization terminated successfully.'
nfev: 3
nit: 0
njev: 1
status: 0
success: True
x: array([0., 0.])
Trying on my side, I can clearly see that this is not even close to the best answer; [1, 1.5], for example, returns -0.03.
Is there a big thing I am missing with the optimizer from scipy?
I am sure there must be a simple solution that keeps evading me.
I have a function
f = a*x + b*y + c*z
and a constraint
l*x + m*y + n*z = B
I need to find the (x, y, z) that maximizes f subject to the constraint. I also need
x, y, z >= 0
I remember having seen a solution like this.
This example uses
a, b, c = 2, 4, 10 and l, m, n = 1, 2, 4 and B = 5
Ideally, this should give me x=1, y=0, z=1, so that f=12.
import numpy as np
from scipy.optimize import minimize

def objective(x, sign=-1.0):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    return sign*((2*x1) + (4*x2) + (10*x3))

def constraint1(x, sign=1.0):
    return sign*(1*x[0] + 2*x[1] + 4*x[2] - 5)

x0 = [0, 0, 0]
b1 = (0, None)
b2 = (0, None)
b3 = (0, None)
bnds = (b1, b2, b3)
con1 = {'type': 'ineq', 'fun': constraint1}
cons = [con1]
sol = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)
print(sol)
This generates a bizarre solution. What am I missing?
The problem as you originally stated it, without integer constraints, can be solved simply and efficiently by linprog:
import scipy.optimize

c = [-2, -4, -10]   # negated coefficients, since linprog minimizes and we want to maximize
A_eq = [[1, 2, 4]]
b_eq = [5]
# bounds default to non-negative values
res = scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
print(res.x, -res.fun)  # should give x = [0, 0, 1.25] and objective 12.5
I would recommend against using more general purpose solvers to solve narrow problems like this as you will often encounter worse performance and sometimes unexpected results.
You need to change your constraint to an equality constraint. Also, your problem didn't specify that integer answers were required, so there is a better non-integer answer to this knapsack problem. (I don't have much experience with scipy.optimize, and I'm not sure whether it can handle integer LP problems.)
In [13]: con1 = {'type': 'eq', 'fun': constraint1}
In [14]: cons = [con1,]
In [15]: sol = minimize (objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
In [16]: print(sol)
fun: -12.5
jac: array([ -2., -4., -10.])
message: 'Optimization terminated successfully.'
nfev: 10
nit: 2
njev: 2
status: 0
success: True
x: array([0. , 0. , 1.25])
Like Jeff said, scipy.optimize does not handle integer programming problems.
You can try using PuLP instead for Integer Optimization problems:
from pulp import *

prob = LpProblem("F Problem", LpMaximize)

# a,b,c=2,4,10 and l,m,n=1,2,4 and B=5
a, b, c = 2, 4, 10
l, m, n = 1, 2, 4
B = 5

# x,y,z >= 0
x = LpVariable("x", 0, None, LpInteger)
y = LpVariable("y", 0, None, LpInteger)
z = LpVariable("z", 0, None, LpInteger)

# f = a*x + b*y + c*z
prob += a*x + b*y + c*z, "Objective Function f"

# l*x + m*y + n*z = B
prob += l*x + m*y + n*z == B, "Constraint B"

# solve
prob.solve()
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    print(v.name, "=", v.varValue)
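For these coefficients the integer optimum is x=1, y=0, z=1, giving f=12, which matches the expected answer in the question.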
I've been using scipy.optimize.minimize (docs) and noticed some strange behavior when I define a problem with impossible-to-satisfy constraints. Here's an example:
from scipy import optimize

# minimize f(x) = x^2 - 4x
def f(x):
    return x**2 - 4*x

def x_constraint(x, sign, value):
    return sign*(x - value)

# subject to x >= 5 and x <= 0 (not possible)
constraints = []
constraints.append({'type': 'ineq', 'fun': x_constraint, 'args': [1, 5]})
constraints.append({'type': 'ineq', 'fun': x_constraint, 'args': [-1, 0]})

optimize.minimize(f, x0=3, constraints=constraints)
Resulting output:
fun: -3.0
jac: array([ 2.])
message: 'Optimization terminated successfully.'
nfev: 3
nit: 5
njev: 1
status: 0
success: True
x: array([ 3.])
There is no solution to this problem that satisfies the constraints; however, minimize() returns successfully, using the initial condition as the optimal solution.
Is this behavior intended? If so, is there a way to force failure if the optimal solution doesn't satisfy the constraints?
This appears to be a bug. I added a comment with a variation of your example to the issue on GitHub.
If you use a different method, such as COBYLA, the function correctly fails to find a solution:
In [10]: optimize.minimize(f, x0=3, constraints=constraints, method='COBYLA')
Out[10]:
fun: -3.75
maxcv: 2.5
message: 'Did not converge to a solution satisfying the constraints. See `maxcv` for magnitude of violation.'
nfev: 7
status: 4
success: False
x: array(2.5)
A hypothetical example solving a nonlinear equation system with fsolve:
from scipy.optimize import fsolve
import math

def equations(p):
    x, y = p
    return (x + y**2 - 4, math.exp(x) + x*y - 3)

x, y = fsolve(equations, (1, 1))
print(equations((x, y)))
Is it somehow possible to solve it using scipy.optimize.brentq with some interval, e.g. [-1,1]? How does the unpacking work in that case?
As sascha suggested, constrained optimization is the easiest way to proceed. The least_squares method is convenient here: you can directly pass your equations to it, and it will minimize the sum of squares of its components.
from scipy.optimize import least_squares
res = least_squares(equations, (1, 1), bounds = ((-1, -1), (2, 2)))
The structure of bounds is ((min_first_var, min_second_var), (max_first_var, max_second_var)), or similarly for more variables.
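For instance, bounding the first variable to [-1, 1] and the second to [0, 2] would be bounds=((-1, 0), (1, 2)); a hypothetical third variable bounded to [5, 10] would extend this to bounds=((-1, 0, 5), (1, 2, 10)).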
The resulting object has a bunch of fields, shown below. The most relevant ones are: res.cost is essentially zero, which means a root was found; and res.x says what the root is: [ 0.62034453, 1.83838393]
active_mask: array([0, 0])
cost: 1.1745369255773682e-16
fun: array([ -1.47918522e-08, 4.01353883e-09])
grad: array([ 5.00239352e-11, -5.18964300e-08])
jac: array([[ 1. , 3.67676787],
[ 3.69795254, 0.62034452]])
message: '`gtol` termination condition is satisfied.'
nfev: 7
njev: 7
optimality: 8.3872972696740977e-09
status: 1
success: True
x: array([ 0.62034453, 1.83838393])