How to generate constraints dynamically in scipy.optimize?

I am trying to model the following problem using scipy.optimize.minimize.
What I'm trying to optimize is this function together with its constraints:
Here V is a list of variables whose length equals the size of Omega.
What I did so far is the following:
import numpy as np
from scipy.linalg import norm
from scipy.optimize import minimize

# set of concepts
M = ['linear algebra', 'seq2seq', 'artificial neural network', 'pointer networks']
# subchapters
S1 = ['linear algebra', 'differential calculus', 'backpropagation']
S2 = ['linear algebra', 'seq2seq', 'artificial neural network']
S3 = ['linear algebra', 'seq2seq', 'artificial neural network', 'pointer networks']
# vector representation of the subchapters in the concept space
x = [[1, 1, 0, 0],
     [1, 1, 1, 0],
     [1, 1, 1, 1]]
# set of prerequisite relations among subchapters (entered manually for testing)
Omega = [(1, 2), (2, 3), (1, 3)]
# number of concepts
m = len(M)
# theta and lambda constants (set manually)
theta = 2
lamda = 1
# matrix A is an m x m matrix, where m is the number of concepts;
# it represents the prerequisite relations among concepts
# A is generated randomly
np.random.seed(43)
#A = np.zeros((m,m), dtype=int)
A = np.random.randint(2, size=(m, m))
# define the slack variable V as an array of size len(Omega)
V = np.empty(len(Omega), dtype=float)
bnds = []
# bounds -1 and 1, create the array
# -1 <= a[i][j] <= 1
bnds_a_s_t = [bnds.append((-1, 1)) for _ in range(np.size(A))]
# bounds for the slack variable V, V is positive
bnds_V_i_j = [bnds.append((0, np.inf)) for _ in range(np.size(V))]

# constraints
cons = []

# equality constraint
# a[s][t] + a[t][s] = 0
def equality_constraint(X):
    A_no_flatten = X[:len(X)-len(Omega)]
    # reconstruct matrix A
    A = np.reshape(A_no_flatten, (m, m))
    for s in range(m):
        for t in range(m):
            r = A[s][t] + A[t][s]
            # r = 0 constraint
            con = {'type': 'eq', 'fun': lambda X: r}
            cons.append(con)

# inequality constraint
# x[i].T @ (C[i][j] * A) @ x[j]
def inequality_constraint(X, x):
    for couple in Omega:
        # get the i and j
        i = couple[0]
        j = couple[1]
        # initialize C to 1s
        C = np.full((m, m), 1, dtype=int)
        # take all elements from X except the last len(Omega) elements
        A_no_flatten = X[:len(X)-len(Omega)]
        # reconstruct list V
        V = X[-len(Omega):]
        # index for V
        f = 0
        # reconstruct matrix A
        A = np.reshape(A_no_flatten, (m, m))
        # construct C[i][j]
        for s in range(m):
            for t in range(m):
                if x[i][t] > 0 or x[j][s] > 0:
                    C[s][t] = 0
                else:
                    C[s][t] = 1
        first = x[i].T
        second = C * A
        third = x[j]
        first_sec = first @ second
        res = first_sec @ third
        ineq_con = {'type': 'ineq', 'fun': lambda X: res - theta + V[f]}
        f += 1
        cons.append(ineq_con)

# arguments passed to the function (here we pass the x matrix);
# they are used in the constraints and in the objective function.
# The objective minimizes over A and V, i.e. matrix A and the slack variable V.
arguments = (x,)

# objective function
def objective(X, x):
    A_no_flatten = X[:len(X)-len(Omega)]
    # reconstruct list V
    V = X[-len(Omega):]
    # reconstruct matrix A
    A = np.reshape(A_no_flatten, (m, m))
    # sum of squares of V
    sum_square = 0.0
    for it in V:
        sum_square += it**2
    # sum of squares of V, times lambda
    sum_square_lambda = sum_square * lamda
    return norm(A, 1) + sum_square_lambda

# initial point: pass p0, which is A.flatten() and V combined; inside the
# objective function we recreate them. The first part is A (all except the
# last s items, where s is the size of V), and V is the rest.
B = A.flatten()
p0 = np.append(B, V)

# scipy minimize
sol = minimize(objective, x0=p0, args=arguments, bounds=bnds, constraints=cons)
print(sol.x)
What I get is the following:
[-7.73458566e-010 0.00000000e+000 4.99999997e-001 1.00000000e+000
1.00000000e+000 0.00000000e+000 -5.00000003e-001 1.00000000e+000
1.00000000e+000 1.00000000e+000 4.99999997e-001 -7.73458566e-010
-7.73458566e-010 0.00000000e+000 4.99999997e-001 -7.73458566e-010
6.01347002e-154 1.07176259e-311 0.00000000e+000]
This doesn't respect the constraints and is not what I expected.
What I don't know is whether it is correct to add constraints like that, because I never actually seem to call the constraint functions; I need to add them in a loop, and each constraint function depends on X, which is the list being minimized.
When I print the cons list it is empty, as expected, but I haven't found another way to add the constraint a[s][t] + a[t][s] = 0 and the other one. I don't know if my approach is correct. Thank you for your help in advance, much appreciated.

This isn't a complete answer, but it may get you started. As already mentioned in the comments, your list of constraints cons is empty when passed to the minimize method. So let's consider your first equality constraint. There are a few problems:
Each time you call the function equality_constraint, you'd append new constraints to your list cons.
Passing each constraint A[s][t] + A[t][s] == 0 as a scalar function is quite cumbersome. Instead, you could use a single vector-valued function:
def constraint1(X):
    A_no_flatten = X[:len(X)-len(Omega)]
    A = np.reshape(A_no_flatten, (m, m))
    return A.flatten() + A.T.flatten()
As the name indicates, the .flatten() method flattens the matrix to a vector and A.T is just the transpose of A. Now you can add this constraint:
cons = []
cons.append({'type': 'eq', 'fun': constraint1})
Proceed similarly for the other constraint.
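For instance, the inequality constraints for all pairs in Omega could be collected into one vector-valued function. The following is only a sketch: it assumes the 0/1 structure of x shown in the question (so the double loop over C collapses to an outer product) and that the pairs in Omega are 1-based indices; both assumptions should be checked against the intended model:
import numpy as np

def constraint2(X):
    # reconstruct A and V from the flat optimization vector
    A = np.reshape(X[:len(X)-len(Omega)], (m, m))
    V = X[-len(Omega):]
    xa = np.asarray(x)                  # subchapter vectors as a numpy array
    vals = []
    for f, (i, j) in enumerate(Omega):
        i, j = i - 1, j - 1             # assumption: Omega holds 1-based indices
        # C[s][t] = 1 only where x[i][t] == 0 and x[j][s] == 0, which for
        # 0/1 vectors matches the double loop in the question
        C = np.outer(1 - xa[j], 1 - xa[i])
        vals.append(xa[i] @ (C * A) @ xa[j] - theta + V[f])
    return np.array(vals)

cons.append({'type': 'ineq', 'fun': constraint2})
Since scipy interprets 'ineq' as fun(X) >= 0, each component encodes x[i].T @ (C * A) @ x[j] - theta + V[f] >= 0, and all pairs are handled by a single constraint dict instead of one dict per pair.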

Related

Problem minimizing a constrained function in Python with scipy.optimize.minimize

I'm trying to minimize a constrained function of several variables using scipy.optimize.minimize. The function involves the minimization of 3*N parameters, where N is an input. More specifically, my minimization parameters are given in three arrays H = H[0],H[1],...,H[N-1], a = a[0],a[1],...,a[N-1] and b = b[0],b[1],...,b[N-1], which I concatenated into a single array named mins, with len(mins) = 3*N.
Those parameters are also subject to the following constraints:
0 <= H and sum(H) = 0.5
0 <= a <= Pi/2
0 <= b <= Pi/2
So, my code for the constraints reads as:
import numpy as np

# constraints on H:
def Hlhs(mins):  # left hand side
    return np.diag(np.ones(N)) @ mins.reshape(3,N)[0]
def Hrhs(mins):  # right hand side
    return np.sum(mins.reshape(3,N)[0]) - 0.5
con1H = {'type': 'ineq', 'fun': lambda H: Hlhs(H)}
con2H = {'type': 'eq', 'fun': lambda H: Hrhs(H)}

# constraints on a:
def alhs(mins):
    return np.diag(np.ones(N)) @ mins.reshape(3,N)[1]
def arhs(mins):
    return -np.diag(np.ones(N)) @ mins.reshape(3,N)[1] + (np.ones(N))*np.pi/2
con1a = {'type': 'ineq', 'fun': lambda a: alhs(a)}
con2a = {'type': 'ineq', 'fun': lambda a: arhs(a)}

# constraints on b:
def blhs(mins):
    return np.diag(np.ones(N)) @ mins.reshape(3,N)[2]
def brhs(mins):
    return -np.diag(np.ones(N)) @ mins.reshape(3,N)[2] + (np.ones(N))*np.pi/2
con1b = {'type': 'ineq', 'fun': lambda b: blhs(b)}
con2b = {'type': 'ineq', 'fun': lambda b: brhs(b)}
My function to be minimized, with the other parameters (and adopting N=3), is given by (sorry it is a bit long):
gamma = 17
C = 85
T = 0
Hf = 0.5
Li = 2
Bi = 1
N = 3

def FUN(mins):
    H, a, b = mins.reshape(3,N)
    S1 = 0; S2 = 0
    B = np.zeros(N); L = np.zeros(N)
    for i in range(N):
        sbi = Bi; sli = Li
        for j in range(i+1):
            sbi += 2*H[j]*np.tan(b[j])
            sli += 2*H[j]*np.tan(a[j])
        B[i] = sbi
        L[i] = sli
    for i in range(N):
        S1 += (C*(1-np.sin(a[i])) + T*np.sin(a[i])) * (Bi*H[i]+H[i]**2*np.tan(b[i]))/np.cos(a[i]) + \
              (C*(1-np.sin(b[i])) + T*np.sin(b[i])) * (Li*H[i]+H[i]**2*np.tan(a[i]))/np.cos(b[i])
    S2 += (gamma*H[0]/12)*(Bi*Li + 4*(B[0]-H[0]*np.tan(b[0]))*(L[0]-H[0]*np.tan(a[0])) + B[0]*L[0])
    j = 1
    while j < N:
        S2 += (gamma*H[j]/12)*(B[j-1]*L[j-1] + 4*(B[j]-H[j]*np.tan(b[j]))*(L[j]-H[j]*np.tan(a[j])) + B[j]*L[j])
        j += 1
    F = 2*(S1+S2)
    return F
And finally, adopting an initial guess of 0 for all values, the minimization is given by:
import scipy.optimize

x0 = np.zeros(3*N)
res = scipy.optimize.minimize(FUN, x0, constraints=(con1H,con2H,con1a,con2a,con1b,con2b), tol=1e-25)
My problems are:
a) Observing the result res, some values turn negative even though I have constraints requiring them to be positive. The success flag of the minimization was False, and the message was: Positive directional derivative for linesearch. Also, the result is very far from the expected minimum.
b) Adopting method='trust-constr', I got a value closer to what I was expecting, but success was again False, with the message "The maximum number of function evaluations is exceeded." Is there any way to improve this?
I know that there is a minimum very close to these values:
H = [0.2,0.15,0.15]
a = [1.0053,1.0053,1.2566]
b = [1.0681,1.1310,1.3195]
where the value of the function is 123.45. I've checked the function several times and it seems to be working properly. Can anyone help me find where my problem is? I've tried changing xtol and maxiter, but with no success.
Here are a few hints:
Your initial point x0 is not feasible since it doesn't satisfy the constraint sum(H) = 0.5. Providing a feasible initial point should fix your first problem.
Except for the constraint sum(H) = 0.5, all constraints are simple bounds on the variables. In general, it's recommended to pass variable bounds via the bounds parameter of minimize. You can simply define and pass all the bounds like this
from scipy.optimize import minimize
import numpy as np
# ..your variables and functions ..
bounds = [(0, None)]*N + [(0, np.pi/2)]*2*N
x0 = np.zeros(3*N)
x0[0] = 0.5
res = minimize(FUN, x0, constraints=(con2H,), bounds=bounds,
               method="trust-constr", options={'maxiter': 20000})
where each tuple contains the lower and upper bound for each variable.
Unfortunately, 'trust-constr' still has trouble converging to a local minimizer. In this case, you can either try other initial points or use the state-of-the-art open source solver Ipopt instead. The Cython wrapper cyipopt provides an interface similar to scipy's:
from cyipopt import minimize_ipopt
# rest as above
res = minimize_ipopt(FUN, x0, constraints=(con2H,), bounds=bounds)
This gives me a solution with objective value 122.9.
Last but not least, it's always a good idea to provide exact gradients, Jacobians, and Hessians.
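For example, here is a sketch of how an exact gradient could be supplied via automatic differentiation. It assumes the third-party autograd package is installed and that FUN is rewritten against autograd.numpy; neither is part of the original code:
import autograd.numpy as anp   # drop-in numpy replacement that autograd can trace
from autograd import grad

# ... FUN defined as above, but using anp instead of np ...

FUN_grad = grad(FUN)           # callable returning the exact gradient of FUN

res = minimize(FUN, x0, jac=FUN_grad, constraints=(con2H,), bounds=bounds,
               method="trust-constr", options={'maxiter': 20000})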

Gekko optimization constraint with self-defined function

I am currently trying to solve a mixed-integer nonlinear problem with Gekko, using its branch-and-bound implementation coupled with its warm-start method to speed up and improve convergence compared to vanilla branch and bound.
The algorithm finds a solution after a short amount of time. Nevertheless, the solution violates a constraint, which I might have defined wrongly: I have a Gekko array variable x and need another Gekko array variable "indices_open" that stores every index of x where x == 1. This "indices_open" goes into another self-defined function that expects "indices_open" as a numpy array and does not accept a list or a Gekko array of Gekko intermediate variables. The self-defined function returns a numpy array. This final array is then used in m.Equations, so I cast it to a Gekko variable array.
Needless to say, something went wrong: the current solution violates the inequality constraint, while the equality constraint is met. While analyzing the result, I came to the conclusion that "indices_open" does not seem to be updated in each iteration.
Here is my attempt so far:
from gekko import GEKKO
import numpy as np
import time

m = GEKKO()
m.options.SOLVER = 1  # APOPT is an MINLP solver
# optional solver settings with APOPT
m.solver_options = ['minlp_maximum_iterations 500', \
                    # minlp iterations with integer solution
                    'minlp_max_iter_with_int_sol 10', \
                    # treat minlp as nlp
                    'minlp_as_nlp 0', \
                    # nlp sub-problem max iterations
                    'nlp_maximum_iterations 50', \
                    # 1 = depth first, 2 = breadth first
                    'minlp_branch_method 1', \
                    # maximum deviation from whole number
                    'minlp_integer_tol 0.05', \
                    # convergence tolerance
                    'minlp_gap_tol 0.01']

# declare x
x = m.Array(m.Var, (65), lb=0, ub=1, integer=True)
for i, xi in enumerate(x[0:65]):
    xi.value = np.random.choice(np.arange(0, 2), 1, p=[0.4, 0.6])[0]

# constraints
m = ineq_constraint_new(x, m)
m = eq_constraint_new(x, m)
# target
m = objective(x, m)

# start
start_time = time.time()
#m.solve(disp=False)
m.solve()
print('Solution: ' + str(x))
print('Objective: ' + str(m.options.objfcnval))
# save x
m.x = [x[j].value[0] for j in range(65)]

def eq_constraint_new(x, m):
    mask = np.isin(list_unique, specific_value)
    indices_fixed = np.nonzero(mask)[0]
    m.Equations([x[j] == 1 for j in indices_fixed])
    return m

def ineq_constraint_new(x, m):
    indices_open = [j for j in range(65) if x[j].value == 1]
    # does not work:
    #indices_open_banks = [m.Intermediate(j) for j in range(65) if x[j].value == 1]
    array_perc, _, _, _ = self_defined_f(indices_open, some_value)
    # convert to gekko variables
    gekko_vec_perc_upper_bound = m.Array(m.Var, (65))
    for i, xi in enumerate(gekko_vec_perc_upper_bound[0:65]):
        xi.value = some_array[i]
    gekko_arr_perc = m.Array(m.Var, (65))
    for i, xi in enumerate(gekko_arr_perc[0:65]):
        xi.value = arr_perc[i]
    diff = gekko_vec_perc_upper_bound - gekko_arr_perc
    m.Equations([diff[j] >= 0 for j in range(65)])
    return m

def objective(x, m):
    indices_open = [j for j in range(65) if x[j].value == 1]
    _, arr_2, arr_3, arr_4 = self_defined_f(indices_open, some_value)
    # intermediates for the objective
    res_dist = [None] * self.ds.n_banks
    res_wand = [None] * self.ds.n_banks
    res_wand_er = [None] * self.ds.n_banks
    x_closed = np.array([1]*len(x)) - x
    for j in range(self.ds.n_banks):
        res_dist[j] = m.Intermediate(arr_2[j] * some_factor)
        res_wand[j] = m.Intermediate(arr_3[j] * some_factor)
        res_wand_er[j] = m.Intermediate(arr_4[j] * some_factor)
    res_sach = some_factor * (some_vector * x_closed)
    # the pieces are added together
    m.Minimize(sum(res_dist))
    m.Minimize(sum(res_wand))
    m.Minimize(sum(res_wand_er))
    m.Maximize(sum(res_sach))
    return m
There is an undefined function, self_defined_f (called as _, arr_2, arr_3, arr_4 = self_defined_f(indices_open, some_value)), that prevents the code from running. From a quick scan of the code, an expression like:
indices_open = [j for j in range(65) if x[j].value == 1]
is not allowed, because Gekko requires that all equations be defined before the m.solve() command; a callback to a Python function is not allowed. Instead, binary variables should be used to turn something on or off in the optimization problem. This can be done with equations involving a binary variable b:
b = m.Var(lb=0,ub=1,integer=True)
m.Equation(x*(1-b)<=0)
m.Equation(x*b>=0)
This makes the value of b equal to 0 if x is less than zero and equal to 1 if x is greater than zero. There is a tutorial on if3() functions in the APMonitor documentation that may also be useful.
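As a minimal, self-contained sketch of that switching construction (a toy model with illustrative names, not the original problem):
from gekko import GEKKO

m = GEKKO(remote=False)
m.options.SOLVER = 1                 # APOPT, since b is an integer variable

x = m.Var(value=-2, lb=-5, ub=5)     # continuous decision variable
b = m.Var(lb=0, ub=1, integer=True)  # binary switch

# force b = 0 when x < 0 and b = 1 when x > 0
m.Equation(x*(1-b) <= 0)
m.Equation(x*b >= 0)

m.Minimize((x - 1)**2)               # optimum at x = 1, so b should end up 1
m.solve(disp=False)
print(x.value[0], b.value[0])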

How to create bounds in scipy.optimize so the optimizer can't make the sum of the vector exceed 1?

I have a function that takes params as a vector.
I need to restrict every vector variable to be non-negative, and the vector should sum to 1.
I've tried to find something in Google and the scipy docs. No luck so far.
def portfolio_optimization(weight_vector):
    return np.sqrt(cov_table.dot(weight_vector).sum())
The bounds I need to apply:
sum(weight_vector) = 1
0 < weight_vector[i] < 1
The first condition is a constraint (sum(w) = 1); for the second you can use bounds. Here is a small example of how to use scipy.optimize.minimize with a weight vector of 4 elements:
import numpy as np
from scipy.optimize import minimize

# objective function
func = lambda w: np.sqrt(cov_table.dot(w).sum())

# constraint: sum(weights) = 1
fconst = lambda w: 1 - sum(w)
cons = ({'type': 'eq', 'fun': fconst})

# initial weights
w0 = [0, 0, 0, 0]

# define bounds
b = (0.0, 1.0)
bnds = (b, b, b, b)

# minimize
sol = minimize(func, w0, bounds=bnds, constraints=cons)
print(sol)
Don't forget to assign a value to cov_table for the code to work.
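For instance, a purely illustrative cov_table (not part of the original answer) that makes the snippet runnable end to end:
import numpy as np

np.random.seed(0)
returns = np.random.randn(250, 4)          # hypothetical daily returns for 4 assets
cov_table = np.cov(returns, rowvar=False)  # 4x4 sample covariance matrix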

L1 convex optimization with equality constraints in python

I need to minimize L_1(x) subject to Mx = y.
x is a vector with dimension b, y is a vector with dimension a, and M is a matrix with dimensions (a,b).
After some reading I determined to use scipy.optimize.minimize:
import numpy as np
from scipy.optimize import minimize
def objective(x):  # L1-norm objective function
    return np.linalg.norm(x, ord=1)

constraints = []  # list of all constraint functions
for i in range(a):
    def con(x, y=y, i=i):
        return np.matmul(M[i], x) - y[i]
    constraints.append(con)

# make constraints into a tuple of dicts as required by scipy
cons = tuple({'type': 'eq', 'fun': c} for c in constraints)

# perform the minimization with sequential least squares programming
opt = minimize(objective, x0=x0, constraints=cons, method='SLSQP',
               options={'disp': True})
First,
what can I use for x0? x is unknown, and I need an x0 that satisfies the constraint M*x0 = y. How can I find an initial guess satisfying the constraint? M is a matrix of independent Gaussian variables (~N(0,1)), if that helps.
Second,
Is there a problem with the way I've set this up? When I use the true x (which I happen to know in the development phase) for x0, I expect it to return x = x0 quickly. Instead, it returns a zero vector x = [0,0,0...,0]. This behavior is unexpected.
Edit:
Here is a solution using cvxpy, solving min L_1(x) subject to Mx = y:
import cvxpy as cvx

x = cvx.Variable(b)                       # b is dim of x
objective = cvx.Minimize(cvx.norm(x, 1))  # L_1 norm objective function
constraints = [M*x == y]                  # y is dim a and M is dim a by b
prob = cvx.Problem(objective, constraints)
result = prob.solve(verbose=False)

# then clean up and chop the 1e-12 values out of the solution
x = np.array(x.value)                     # extract array from variable
x = np.array([a for b in x for a in b])   # unpack the extra brackets
x[np.abs(x) < 1e-9] = 0                   # chop small numbers to 0
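As for the x0 question above, one standard option (a sketch, assuming M has full row rank with a < b, which holds almost surely for the Gaussian M described) is the minimum-norm least-squares solution, which satisfies Mx = y exactly in that case:
import numpy as np

# minimum-norm solution of the underdetermined system M x = y
x0, *_ = np.linalg.lstsq(M, y, rcond=None)
# equivalently: x0 = np.linalg.pinv(M) @ y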

How to create an array that can be accessed according to its indices in Numpy?

I am trying to solve the following problem via a Finite Difference Approximation in Python using NumPy:
$u_t = k \, u_{xx}$, on $0 < x < L$ and $t > 0$;
$u(0,t) = u(L,t) = 0$;
$u(x,0) = f(x)$.
I take $u(x,0) = f(x) = x^2$ for my problem.
Programming is not my forte so I need help with the implementation of my code. Here is my code (I'm sorry it is a bit messy, but not too bad I hope):
## This program is to implement a Finite Difference method approximation
## to solve the Heat Equation, u_t = k * u_xx,
## in 1D w/out sources & on a finite interval 0 < x < L. The PDE
## is subject to B.C: u(0,t) = u(L,t) = 0,
## and the I.C: u(x,0) = f(x).
import numpy as np
import matplotlib.pyplot as plt

# definition of initial condition function
def f(x):
    return x^2

# parameters
L = 1
T = 10
N = 10
M = 100
s = 0.25

# uniform mesh
x_init = 0
x_end = L
dx = float(x_end - x_init) / N
#x = np.zeros(N+1)
x = np.arange(x_init, x_end, dx)
x[0] = x_init

# time discretization
t_init = 0
t_end = T
dt = float(t_end - t_init) / M
#t = np.zeros(M+1)
t = np.arange(t_init, t_end, dt)
t[0] = t_init

# Boundary Conditions
for m in xrange(0, M):
    t[m] = m * dt

# Initial Conditions
for j in xrange(0, N):
    x[j] = j * dx

# definition of solution to u_t = k * u_xx
u = np.zeros((N+1, M+1))  # NxM array to store values of the solution

# finite difference scheme
for j in xrange(0, N-1):
    u[j][0] = x**2  # initial condition

for m in xrange(0, M):
    for j in xrange(1, N-1):
        if j == 1:
            u[j-1][m] = 0  # Boundary condition
        else:
            u[j][m+1] = u[j][m] + s * ( u[j+1][m] -  # FDM scheme
                2 * u[j][m] + u[j-1][m] )
    else:
        if j == N-1:
            u[j+1][m] = 0  # Boundary Condition

print u, t, x
#plt.plot(t, u)
#plt.show()
So the first issue I am having: I am trying to create an array/matrix to store the values of the solution. I wanted it to be an N x M matrix, but in my code I made it (N+1) x (M+1) because I kept getting an error that the index was going out of bounds. Anyway, how can I make such a matrix using numpy so as not to needlessly take up memory with an (N+1) x (M+1) matrix filled with zeros?
Second, how can I "access" such an array? The real solution u(x,t) is approximated by u(x[j], t[m]), where j is the jth spatial value and m is the mth time value. The finite difference scheme is given by:
u(x[j],t[m+1]) = u(x[j],t[m]) + s * ( u(x[j+1],t[m]) - 2 * u(x[j],t[m]) + u(x[j-1],t[m]) )
(See here for the formulation)
I want to be able to implement the initial condition u(x[j],t[0]) = x**2 for all values of j = 0,...,N-1. I also need to implement the boundary conditions u(x[0],t[m]) = 0 = u(x[N],t[m]) for all values of t = 0,...,M. Is the nested loop I created the best way to do this? Originally I tried implementing the I.C. and B.C. under the two for loops I used to calculate the values of x and t (in my code I still have comments where I tried to do this).
I think I am just not using the right notation, but I cannot find anywhere in the NumPy documentation how to index such an array so as to iterate through each value in the proposed scheme. Can anyone shed some light on what I am doing wrong?
Any help is very greatly appreciated. This is not homework, but rather an attempt to understand how to program the FDM for the Heat Equation, because later I will use similar methods to solve the Black-Scholes PDE.
EDIT: So when I run my code on line 60 (the last "else" that I use) I get an error that says invalid syntax, and on line 51 (u[j][0] = x**2 #initial condition) I get an error that reads "setting an array element with a sequence." What does that mean?
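A minimal sketch of the indexing pattern being asked about (assuming Python 3 and a mesh of N+1 points, so x has length N+1): in u[j, m] the first index is space and the second is time, the initial condition can be set with a whole-column slice, and the "setting an array element with a sequence" error on line 51 comes from assigning the entire array x**2 to the single element u[j][0]:
import numpy as np

L, T, N, M, s = 1, 10, 10, 100, 0.25
x = np.linspace(0, L, N+1)   # N+1 spatial nodes, x[0] = 0 and x[N] = L
t = np.linspace(0, T, M+1)   # M+1 time levels

u = np.zeros((N+1, M+1))     # u[j, m] approximates u(x[j], t[m])
u[:, 0] = x**2               # initial condition u(x,0) = x^2 (note ** rather than ^)
u[0, :] = 0                  # boundary condition u(0,t) = 0
u[N, :] = 0                  # boundary condition u(L,t) = 0

for m in range(M):           # march forward in time
    for j in range(1, N):    # update interior nodes only
        u[j, m+1] = u[j, m] + s * (u[j+1, m] - 2*u[j, m] + u[j-1, m])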
