Optimize non-linear function with two variables in Python

I am trying to optimize the following function:
f(x; a+, a-, b) = a+ * (1/(1 + exp(-b*x)) - 1/2)   if x >= 0
                = a- * (1/(1 + exp(-b*x)) - 1/2)   if x < 0
constraints: a+ * b <= 4,  a- * b <= 4
             a+ / 2 <= max(f(x) for x > 0)
             a- / 2 <= -min(f(x) for x < 0)
I have tried scipy's minimize with the bounds [(0, 2), (0, 2), (1, None)] and the constraints defined above, but it is not producing the right results, especially when I pass beta as an args entry in the constraints.
import numpy as np
import pandas as pd
from scipy.optimize import minimize

# dfs2 is an existing DataFrame with columns 'annualH_chg' and 'annualR_chg'
init_params = [0.0, 0.0, 20.0]
bnds = [(0.0, 2.0), (0.0, 2.0), (1.0, None)]

S_curve = pd.DataFrame()
S_curve['year'] = dfs2.index
S_curve['H_Change'] = np.array(dfs2.loc[:, 'annualH_chg'])
S_curve['R_Change'] = np.array(dfs2.loc[:, 'annualR_chg'])
S_curve['Weight'] = 1
S_curve.reset_index(drop=True)

weighted = S_curve[S_curve.Weight != 0]
minimum = -S_curve['R_Change'].min()
maximum = S_curve['R_Change'].max()
beta = init_params[2]

def constraint1(up, beta):
    return 4.0 - (up * beta)

def constraint2(down, beta):
    return 4.0 - (down * beta)

def constraint3(up, maximum):
    return maximum - up / 2.0

def constraint4(down, minimum):
    return minimum - down / 2.0

cons = [{'type': 'ineq', 'fun': constraint1, 'args': (beta,)},
        {'type': 'ineq', 'fun': constraint2, 'args': (beta,)},
        {'type': 'ineq', 'fun': constraint3, 'args': (maximum,)},
        {'type': 'ineq', 'fun': constraint4, 'args': (minimum,)}]

# func is the objective (the piecewise function above), defined elsewhere
soln = minimize(func, init_params, bounds=bnds, constraints=cons, method='SLSQP')
I expect the first and second constraints to be satisfied, with beta (b) treated as a variable rather than a constant.
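For reference, here is a minimal sketch (not from the original post) of how the constraints can be written so that SLSQP passes the current parameter vector to each of them, with beta read from that vector; func is assumed to be the objective defined elsewhere:
# sketch: each constraint receives the full parameter vector [a_plus, a_minus, beta]
def constraint1(params):
    up, down, beta = params
    return 4.0 - up * beta

def constraint2(params):
    up, down, beta = params
    return 4.0 - down * beta

def constraint3(params, maximum):
    up, down, beta = params
    return maximum - up / 2.0

def constraint4(params, minimum):
    up, down, beta = params
    return minimum - down / 2.0

cons = [{'type': 'ineq', 'fun': constraint1},
        {'type': 'ineq', 'fun': constraint2},
        {'type': 'ineq', 'fun': constraint3, 'args': (maximum,)},
        {'type': 'ineq', 'fun': constraint4, 'args': (minimum,)}]

soln = minimize(func, init_params, bounds=bnds, constraints=cons, method='SLSQP')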

Related

How to make an optimizer find the correct minimum of a function?

I've seen multiple SO answers such as SO1, SO2, SO3, SO4, SO5 but can't seem to get the answer I want.
I have a cost function that changes with a parameter (c_vir). To be precise, I optimize over ln(c_vir) so that the minimizer doesn't wander off into unexpected regions.
I get correct answers when the minimum is much deeper than the rest of the cost function, but the minimizers don't seem to converge when the function is nearly flat.
I use scipy's local and global minimizers, and also iminuit, to minimize my cost function.
Here is a graph that plots the cost function (using a brute-force for loop) and finds the minimum of the cost curve. It also shows the two minimizers and where they lie relative to the actual minimum.
Here is one of the cost functions using scipy's basinhopping method and iminuit's simplex method.
Here is the same cost function, but now using scipy's SHGO method and iminuit's simplex method.
Here is the syntax for my minimizers:
optres = iminuit.minimize(cost, [np.log(5)],
                          args=(den, eps, self.Mvir, self.Rvir, mask, cost_func),
                          method='simplex',
                          bounds=(np.log(1e-5), np.log(50)),
                          tol=1e-2,
                          options={'stra': 2, 'maxfun': 500})

optres = so.basinhopping(cost, np.log(5), stepsize=1,
                         minimizer_kwargs={"method": "Nelder-Mead",
                                           "args": (den, eps, self.Mvir, self.Rvir, mask, cost_func)})

optres = so.shgo(cost, bounds=[(np.log(1e-2), np.log(50))],
                 args=(den, eps, self.Mvir, self.Rvir, mask, cost_func),
                 sampling_method='sobol',
                 minimizer_kwargs={'method': 'Nelder-Mead'})
Changing the initial guess from np.log(5) to np.log(10) yields the same results, and the bounds only seem to constrain the more extreme cost functions, not these almost-flat ones.
Underlying Cost Function
import numpy as np
from numba import jit, njit
from typing import Tuple

# RADIUS is a module-level array of radii defined elsewhere

@jit
def cost(lncvir, obs, epsilon, M, Rvir, mask, func="gaussian"):  # theta is Rs, M, Rvir
    Rs = Rvir / np.exp(lncvir)
    # if lncvir < 0:
    #     return np.inf
    # Rs = Rvir / lncvir
    _, model = rho_r(Rs, M, Rvir, mask)
    Cost = chisq(obs, model, epsilon, func)
    return Cost

@njit(fastmath=True)
def chisq(obs: np.ndarray, model: np.ndarray, epsilon: float, func: str = "gaussian"):
    residual = obs - model
    # residual ** 2 * cinv for every bin
    if func == "gaussian":
        return np.sum(np.square(residual) / np.square(epsilon * obs))
    elif func == "lorentz":
        temp = np.square(residual) / np.square(epsilon * obs)
        return np.sum(np.log(1 + temp))
    elif func == 'abs':
        return np.sum(np.abs(residual) / (epsilon * obs))

@njit(fastmath=True)
def rho_o(M: float, Rvir: float, Rs: float):
    c = Rvir / Rs
    ln_term = np.log(1.0 + c) - (c / (1.0 + c))
    rho_not = M / (4.0 * np.pi * (Rs**3.0) * ln_term)
    return rho_not

@njit()
def rho_r(Rs: float, M: float, Rvir: float, mask: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    r = RADIUS[mask]
    term = r / Rs
    rho_not = rho_o(M, Rvir, Rs)
    return r, rho_not / (term * ((1.0 + term)**2.0))
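Since the cost depends on a single parameter ln(c_vir), one option worth trying (not shown in the original post) is a coarse grid scan followed by a bounded one-dimensional polish; the sketch below uses a toy, nearly flat cost rather than the rho_r/chisq pipeline above:
import numpy as np
from scipy.optimize import minimize_scalar

def toy_cost(lncvir):
    # hypothetical, nearly flat bowl with a shallow minimum near lncvir = 1.0
    return 1e-3 * (lncvir - 1.0)**2 + 0.5

# coarse grid scan to locate the basin
grid = np.linspace(np.log(1e-2), np.log(50), 512)
x_best = grid[np.argmin([toy_cost(x) for x in grid])]

# polish with a bounded 1-D minimizer around the grid minimum
res = minimize_scalar(toy_cost, bounds=(x_best - 0.5, x_best + 0.5),
                      method='bounded', options={'xatol': 1e-6})
print(res.x, res.fun)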

Problem minimizing a constrained function in Python with scipy.optimize.minimize

I'm trying to minimize a constrained function of several variables using the algorithm scipy.optimize.minimize. The function involves the minimization of 3*N parameters, where N is an input. More specifically, my minimization parameters are given in three arrays H = H[0],H[1],...,H[N-1], a = a[0],a[1],...,a[N-1] and b = b[0],b[1],...,b[N-1], which I concatenated into a single array named mins, with len(mins) = 3*N.
Those parameters are also subjected to constraints as follows:
0 <= H and sum(H) = 0.5
0 <= a <= Pi/2
0 <= b <= Pi/2
So, my code for the constraints reads as:
import numpy as np

# constraints on H:
def Hlhs(mins):  # left-hand side
    return np.diag(np.ones(N)) @ mins.reshape(3, N)[0]
def Hrhs(mins):  # right-hand side
    return np.sum(mins.reshape(3, N)[0]) - 0.5
con1H = {'type': 'ineq', 'fun': lambda H: Hlhs(H)}
con2H = {'type': 'eq', 'fun': lambda H: Hrhs(H)}

# constraints on a:
def alhs(mins):
    return np.diag(np.ones(N)) @ mins.reshape(3, N)[1]
def arhs(mins):
    return -np.diag(np.ones(N)) @ mins.reshape(3, N)[1] + np.ones(N) * np.pi / 2
con1a = {'type': 'ineq', 'fun': lambda a: alhs(a)}
con2a = {'type': 'ineq', 'fun': lambda a: arhs(a)}

# constraints on b:
def blhs(mins):
    return np.diag(np.ones(N)) @ mins.reshape(3, N)[2]
def brhs(mins):
    return -np.diag(np.ones(N)) @ mins.reshape(3, N)[2] + np.ones(N) * np.pi / 2
con1b = {'type': 'ineq', 'fun': lambda b: blhs(b)}
con2b = {'type': 'ineq', 'fun': lambda b: brhs(b)}
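For reference, a sketch (not from the original post) of the same feasible set expressed with scipy's Bounds and LinearConstraint objects, with mins ordered as [H, a, b] and N = 3 as in the question's example:
import numpy as np
from scipy.optimize import Bounds, LinearConstraint

N = 3
# 0 <= H (no upper bound), 0 <= a <= pi/2, 0 <= b <= pi/2
lb = np.zeros(3*N)
ub = np.concatenate([np.full(N, np.inf), np.full(N, np.pi/2), np.full(N, np.pi/2)])
bounds = Bounds(lb, ub)

# sum(H) == 0.5: one linear equality on the first N entries of mins
A_eq = np.concatenate([np.ones(N), np.zeros(2*N)]).reshape(1, -1)
sum_H = LinearConstraint(A_eq, 0.5, 0.5)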
My function to be minimized, with the other parameters (and adopting N = 3), is given by (I'm sorry if it is too long):
gamma = 17
C = 85
T = 0
Hf = 0.5
Li = 2
Bi = 1
N = 3

def FUN(mins):
    H, a, b = mins.reshape(3, N)
    S1 = 0; S2 = 0
    B = np.zeros(N); L = np.zeros(N)
    for i in range(N):
        sbi = Bi; sli = Li
        for j in range(i + 1):
            sbi += 2*H[j]*np.tan(b[j])
            sli += 2*H[j]*np.tan(a[j])
        B[i] = sbi
        L[i] = sli
    for i in range(N):
        S1 += (C*(1 - np.sin(a[i])) + T*np.sin(a[i])) * (Bi*H[i] + H[i]**2*np.tan(b[i]))/np.cos(a[i]) + \
              (C*(1 - np.sin(b[i])) + T*np.sin(b[i])) * (Li*H[i] + H[i]**2*np.tan(a[i]))/np.cos(b[i])
    S2 += (gamma*H[0]/12)*(Bi*Li + 4*(B[0] - H[0]*np.tan(b[0]))*(L[0] - H[0]*np.tan(a[0])) + B[0]*L[0])
    j = 1
    while j < N:
        S2 += (gamma*H[j]/12)*(B[j-1]*L[j-1] + 4*(B[j] - H[j]*np.tan(b[j]))*(L[j] - H[j]*np.tan(a[j])) + B[j]*L[j])
        j += 1
    F = 2*(S1 + S2)
    return F
And, finally, adopting an initial guess of 0 for all values, the minimization is given by:
import scipy.optimize
x0 = np.zeros(3*N)
res = scipy.optimize.minimize(FUN, x0, constraints=(con1H, con2H, con1a, con2a, con1b, con2b), tol=1e-25)
My problems are:
a) Looking at the result res, some values turn out negative even though I have constraints requiring them to be positive. The minimization reports success: False with the message Positive directional derivative for linesearch. Also, the result is very far from the expected minimum.
b) Adopting method='trust-constr' I get a value closer to what I was expecting, but still with success: False and the message The maximum number of function evaluations is exceeded. Is there any way to improve this?
I know that there is a minimum very close to these values:
H = [0.2,0.15,0.15]
a = [1.0053,1.0053,1.2566]
b = [1.0681,1.1310,1.3195]
where the value of the function is 123.45. I've checked the function several times and it seems to be working properly. Can anyone help me find where my problem is? I've tried changing xtol and maxiter, but with no success.
Here are a few hints:
Your initial point x0 is not feasible since it doesn't satisfy the constraint sum(H) = 0.5. Providing a feasible initial point should fix your first problem.
Except for the constraint sum(H) = 0.5, all constraints are simple bounds on the variables. In general, it's recommended to pass variable bounds via the bounds parameter of minimize. You can simply define and pass all the bounds like this
from scipy.optimize import minimize
import numpy as np

# ..your variables and functions..

bounds = [(0, None)]*N + [(0, np.pi/2)]*2*N
x0 = np.zeros(3*N)
x0[0] = 0.5
res = minimize(FUN, x0, constraints=(con2H,), bounds=bounds,
               method="trust-constr", options={'maxiter': 20000})
where each tuple contains the lower and upper bound for each variable.
Unfortunately, 'trust-constr' still has trouble converging to a local minimizer. In this case, you can either try other initial points or use the state-of-the-art open-source solver Ipopt instead. The Cython wrapper cyipopt provides an interface similar to scipy's:
from cyipopt import minimize_ipopt
# rest as above
res = minimize_ipopt(FUN, x0, constraints=(con2H,), bounds=bounds)
This gives me a solution with objective value 122.9.
Last but not least, it's always a good idea to provide exact gradients, Jacobians, and Hessians.
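For instance, a minimal sketch (toy objective, not the FUN above) of passing an exact gradient to minimize via the jac argument:
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sum((x - 1.0)**2)

def grad_f(x):
    # analytic gradient of f
    return 2.0 * (x - 1.0)

res = minimize(f, x0=np.zeros(5), jac=grad_f, method='SLSQP')
print(res.x)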

Scipy minimize name 'init_weigths' is not defined

Does anyone know why I get this undefined-name error in the minimize call?
The variable init_weights is an array of floats defined and filled before calling the minimize function. However, minimize doesn't seem to read it.
import numpy as np
from scipy.optimize import minimize

# calculate annualized portfolio return (based on weights)
def port_ret(weights):
    return ret.dot(weights.T).mean() * 252

# calculate annualized portfolio volatility (based on weights)
def port_vol(weights):
    return ret.dot(weights.T).std() * np.sqrt(252)

# define function to be minimized (sco only supports minimize, not maximize)
# -> maximize sharpe ratio == minimize sharpe ratio * (-1)
def min_func_sharpe(weights):
    return ((rf - port_ret(weights)) / port_vol(weights)) * -1  # sharpe ratio * (-1)

num_stocks = float(len(stocks.columns))
num_stock = len(stocks.columns)
init_weights = []
ueight = float(1 / num_stocks)
for i in range(num_stock):
    init_weights.append(ueight)

# bounds: all weights shall be between 0 and 1 -> can be changed
bnds = tuple((0, 1) for i in range(num_stock))
# constraint: weights must sum up to 1 -> sum of weights - 1 = 0
cons = ({"type": "eq", "fun": lambda x: np.sum(x) - 1})
# run optimization based on function to be minimized, starting with equal weights
# and based on respective bounds and constraints
opts = minimize(fun=min_func_sharpe, x0=init_weigths, method="SLSQP",
                bounds=bnds, constraints=cons)
I had to convert the plain Python list into a NumPy array before passing it to minimize:
eweights = np.array(init_weights)
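A minimal sketch of the corrected call under that fix; note the spelling init_weights (the NameError in the title comes from the misspelled init_weigths in the original x0 argument):
eweights = np.array(init_weights)
opts = minimize(fun=min_func_sharpe, x0=eweights, method="SLSQP",
                bounds=bnds, constraints=cons)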

How to return the final values of the Lagrange multipliers and penalty parameter for the Augmented Lagrangian method (LD_AUGLAG) in nlopt for Python?

I am looking to use the Augmented Lagrangian method (LD_AUGLAG) in NLOPT in Python to solve a subproblem for another optimisation strategy. However, to do so, I need to know the final values of the Lagrange multipliers and the penalty parameter at termination.
I have looked through the return options available with the Python version of NLOPT, but have been unable to find a return option for these values.
import nlopt
import numpy as np

def myfunc(x, grad):
    d = x.size
    val = 0.0
    for i in range(d):
        val += 0.5 * (x[i]**4 - 16.0*x[i]**2 + 5.0*x[i])
    return val

def mycons(x, grad):
    val = np.dot(x, x) - 30.0
    return val

n = 2
x0 = np.zeros(n)

local_opt = nlopt.opt(nlopt.LN_BOBYQA, n)
local_opt.set_ftol_rel(1e-10)

opt = nlopt.opt(nlopt.LD_AUGLAG, n)
opt.set_local_optimizer(local_opt)
opt.add_inequality_constraint(mycons, 1e-08)
opt.set_min_objective(myfunc)

x1 = opt.optimize(x0)
print(opt.last_optimum_value())
I was hoping the opt object would offer a way to return the values of the Lagrange multipliers and the penalty parameter used at termination. However, there does not appear to be such an option within NLOPT.
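For context, a sketch of one common textbook form of the augmented Lagrangian for a single inequality constraint c(x) <= 0 (this is not NLOPT's internal code); lam and mu below are the multiplier estimate and penalty parameter the question asks about:
def augmented_objective(f, c, lam, mu):
    # each outer AUGLAG iteration minimizes roughly this function of x
    # (up to a constant term that does not depend on x)
    def L(x):
        return f(x) + 0.5 * mu * max(0.0, c(x) + lam / mu)**2
    return L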

Scipy.optimize.minimize SLSQP with linear constraints fails

Consider the following (convex) optimization problem:
minimize 0.5 * y.T * y
s.t. A*x - b == y
where the optimization (vector) variables are x and y and A, b are a matrix and vector, respectively, of appropriate dimensions.
The code below finds a solution easily using the SLSQP method from Scipy:
import numpy as np
from scipy.optimize import minimize

# problem dimensions:
n = 10  # arbitrary integer set by user
m = 2 * n

# generate parameters A, b:
np.random.seed(123)  # for reproducibility of results
A = np.random.randn(m, n)
b = np.random.randn(m)

# objective function:
def obj(z):
    vy = z[n:]
    return 0.5 * vy.dot(vy)

# constraint function:
def cons(z):
    vx = z[:n]
    vy = z[n:]
    return A.dot(vx) - b - vy

# constraints input for SLSQP:
cons = ({'type': 'eq', 'fun': cons})

# generate a random initial estimate:
z0 = np.random.randn(n + m)

sol = minimize(obj, x0=z0, constraints=cons, method='SLSQP', options={'disp': True})
Optimization terminated successfully. (Exit mode 0)
Current function value: 2.12236220865
Iterations: 6
Function evaluations: 192
Gradient evaluations: 6
Note that the constraint function is a convenient 'array-output' function.
Now, instead of an array-output function for the constraint, one could in principle use an equivalent set of 'scalar-output' constraint functions (actually, the scipy.optimize documentation discusses only this type of constraint functions as input to minimize).
Here is the equivalent constraint set followed by the output of minimize (same A, b, and initial value as the above listing):
# this is the i-th element of cons(z):
def cons_i(z, i):
    vx = z[:n]
    vy = z[n:]
    return A[i].dot(vx) - b[i] - vy[i]

# list of scalar-output constraints input for SLSQP:
cons_per_i = [{'type': 'eq', 'fun': lambda z: cons_i(z, i)} for i in np.arange(m)]

sol2 = minimize(obj, x0=z0, constraints=cons_per_i, method='SLSQP', options={'disp': True})
Singular matrix C in LSQ subproblem (Exit mode 6)
Current function value: 6.87999270692
Iterations: 1
Function evaluations: 32
Gradient evaluations: 1
Evidently, the algorithm fails (the returned objective value is actually the objective value at the given initialization), which I find a bit odd. Note that running [cons_per_i[i]['fun'](sol.x) for i in np.arange(m)] shows that sol.x, obtained using the array-output constraint formulation, satisfies all the scalar-output constraints of cons_per_i, as expected (within numerical tolerance).
I would appreciate it if anyone has an explanation for this issue.
You've run into the "late binding closures" gotcha. All the calls to cons_i are being made with the second argument equal to 19.
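A small self-contained illustration of that behaviour (not part of the original answer): each lambda looks up i when it is called, not when it is defined, so every closure sees the final loop value.
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])        # prints [2, 2, 2]

funcs_fixed = [lambda i=i: i for i in range(3)]
print([f() for f in funcs_fixed])  # prints [0, 1, 2]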
A fix is to use the args element of the dictionary that defines each constraint instead of the lambda closures:
cons_per_i = [{'type':'eq', 'fun': cons_i, 'args': (i,)} for i in np.arange(m)]
With this, the minimization works:
In [417]: sol2 = minimize(obj, x0 = z0, constraints = cons_per_i, method = 'SLSQP', options={'disp': True})
Optimization terminated successfully. (Exit mode 0)
Current function value: 2.1223622086
Iterations: 6
Function evaluations: 192
Gradient evaluations: 6
You could also use the suggestion made in the linked article, which is to use a lambda expression with a second argument that has the desired default value:
cons_per_i = [{'type':'eq', 'fun': lambda z, i=i: cons_i(z, i)} for i in np.arange(m)]
