scipy minimize SLSQP - 'Singular matrix C in LSQ subproblem' - python

I'm trying to solve a pretty basic optimization problem using SciPy. The problem is constrained, has variable bounds, and I'm pretty sure it's linear.
When I run the following code the execution fails with the error message 'Singular matrix C in LSQ subproblem'. Does anyone know what the problem might be? Thanks in advance.
Edit: here is a short description of what the code should do.
I define a 'demand' vector at the beginning of the code. This vector describes the demand for a certain product, indexed over some period of time. What I want to figure out is how to place a set of orders so as to fill this demand under some constraints. These constraints are:
We must have the items in stock if there is demand at a certain point in time (index in demand).
We cannot place an additional order until 4 'time units' after an order has been placed.
We cannot place an order in the last 4 time units.
This is my code:
from scipy.optimize import minimize
import numpy as np
demand = np.array([5, 10, 10, 7, 3, 7, 1, 0, 0, 0, 8])
orders = np.array([0.] * len(demand))
def objective(orders):
    return np.sum(orders)

def items_in_stock(orders):
    stock = 0
    for i in range(len(orders)):
        stock += orders[i]
        stock -= demand[i]
        if stock < 0.:
            return -1.
    return 0.

def four_weeks_order_distance(orders):
    for i in range(len(orders)):
        if orders[i] != 0.:
            num_orders = (orders[i+1:i+5] != 0.).any()
            if num_orders:
                return -1.
    return 0.

def four_weeks_from_end(orders):
    if orders[-4:].any():
        return -1.
    else:
        return 0.
con1 = {'type': 'eq', 'fun': items_in_stock}
con2 = {'type': 'eq', 'fun': four_weeks_order_distance}
con3 = {'type': 'eq', 'fun': four_weeks_from_end}
cons = [con1, con2, con3]
b = [(0, 100)]
bnds = b * len(orders)
x0 = orders
x0[0] = 10.
minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)

Though I am not an operations researcher, I believe the problem is that the constraints you implemented are not continuous. I made small changes so that the constraints are now continuous in nature.
from scipy.optimize import minimize
import numpy as np
demand = np.array([5, 10, 10, 7, 3, 7, 1, 0, 0, 0, 8])
orders = np.array([0.] * len(demand))
def objective(orders):
    return np.sum(orders)

def items_in_stock(orders):
    """Inequality constraint: the idea is to keep the balance of stock and demand.
    Cumulated stock should be greater than demand; demand should never exceed the stock.
    """
    stock = 0
    stock_penalty = 0
    for i in range(len(orders)):
        stock += orders[i]
        stock -= demand[i]
        if stock < 0:
            stock_penalty -= abs(stock)
    return stock_penalty

def four_weeks_order_distance(orders):
    """Equality constraint: an order can't be placed until four weeks after any other order.
    """
    violation_count = 0
    for i in range(len(orders) - 6):
        if orders[i] != 0.:
            num_orders = orders[i + 1: i + 5].sum()
            violation_count -= num_orders
    return violation_count

def four_weeks_from_end(orders):
    """Equality constraint: no orders in the last 4 weeks.
    """
    return orders[-4:].sum()
con1 = {'type': 'ineq', 'fun': items_in_stock} # Forces value to be greater than zero.
con2 = {'type': 'eq', 'fun': four_weeks_order_distance} # Forces value to be zero.
con3 = {'type': 'eq', 'fun': four_weeks_from_end} # Forces value to be zero.
cons = [con1, con2, con3]
b = [(0, 100)]
bnds = b * len(orders)
x0 = orders
x0[0] = 10.
res = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons,
               options={'eps': 1})
Results
status: 0
success: True
njev: 22
nfev: 370
fun: 51.000002688311334
x: array([ 5.10000027e+01, 1.81989405e-15, -6.66999371e-16,
1.70908182e-18, 2.03187432e-16, 1.19349893e-16,
1.25059614e-16, 4.55582386e-17, 6.60988392e-18,
3.37907550e-17, -5.72760251e-18])
message: 'Optimization terminated successfully.'
jac: array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.])
nit: 23
[ round(l, 2) for l in res.x ]
[51.0, 0.0, -0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.0]
So, the solution suggests placing the entire order in the first week:
It avoids the out-of-stock situation.
A single order trivially satisfies the rule of no further orders within four weeks of an order.
No orders are placed in the last 4 weeks.
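As a quick sanity check (a small addition, assuming res, demand, and the constraint functions above are still in scope), you can evaluate each constraint at the rounded solution:
solution = np.round(res.x, 2)                 # [51, 0, 0, ..., 0]
print(items_in_stock(solution))               # 0 -> stock never goes negative
print(four_weeks_order_distance(solution))    # 0 -> no order within 4 weeks of another
print(four_weeks_from_end(solution))          # 0 -> nothing ordered in the last 4 weeks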

Related

Computationally efficient way of maximizing a complicated multivariable function

I've been trying to find an efficient way of maximizing the following monster function in four variables but the program is taking ages to run and I'm not even sure if the results are correct. Can anyone help me code it better in Python?
Here's the function (originally shown as an image; as implemented in func below it is P(a) = (2*pi)^(-N/2) * det(C)^(-1/2) * exp(-0.5 * Y^T C^-1 Y)), where a = [p, q, r, s] are the parameters that build the covariance matrix C, and Y is the measured data sampled at 30 points.
Here's my code.
import numpy as np
import math
import scipy.optimize as spo

Y = Y_t  # Y_t is a predefined column vector with 30 entries.
tstep = 0.05  # in s
N = 30
cov = np.zeros([30, 30])

def R(p, q, r, t):
    om_D = p*np.sqrt(1-q**2)
    return np.pi*r*(np.exp(-q*p*abs(t)))*(np.cos(om_D*t)+(q/(np.sqrt(1-q**2)))*(np.sin(om_D*abs(t))))/(2*q*(p**3))

def I(m, p):
    if m == p:
        return 1
    else:
        return 0

def func(a):
    a1 = a[0]  # natural angular frequency, bounds=[3, 20]
    a2 = a[1]  # damping ratio, bounds=[0, 1]
    a3 = a[2]  # psd of forcing signal, bounds=[300, 600]
    a4 = a[3]  # variance of noise, bounds=[0, 0.0001] in m
    # assuming a uniform prior for a, we only have to maximise the likelihood function
    for i in range(30):
        for j in range(30):
            cov[i, j] += R(a1, a2, a3, (j-i)*tstep) + a4*I(i, j)
    P = ((2*np.pi)**(-N/2)) * ((np.linalg.det(cov))**(-0.5)) * np.exp((-0.5)*np.linalg.multi_dot([np.transpose(Y), np.linalg.inv(cov), Y]))
    return (-1)*P[0]

a_start = [5, 0.05, 100, 0.00001]
bnds = ((5, 20), (0, 1), (300, 600), (0, 0.0001))
result = spo.differential_evolution(func, bounds=bnds)
print(result.x)
There is an issue in the cov initialization; that is why it does not converge. There is also an issue with the bound for the damping ratio: it was (0, 1) and is now (0.0001, 0.999), because the ratio must not be 0 or 1, otherwise there is a division-by-zero error in R(). The code is fixed now; see also the output.
Code
import time
import numpy as np
from scipy.optimize import differential_evolution

Y = [[-0.00445551], [-0.01164452], [-0.02171495], [-0.03475491], [-0.00770873], [ 0.0492236 ],
     [ 0.07264838], [ 0.03066707], [-0.02457141], [-0.04065968], [-0.01135125], [ 0.02677074], [ 0.06517749],
     [ 0.09611112], [ 0.12300657], [ 0.0923581 ], [ 0.03982604], [-0.01473844], [-0.09024497], [-0.14304097],
     [-0.17447606], [-0.16926952], [-0.12006193], [-0.00120763], [ 0.11006087], [ 0.19978283], [ 0.24388584],
     [ 0.18768875], [ 0.12844553], [ 0.03099409]]  # Y_t is a predefined column vector with 30 entries.
tstep = 0.05  # in s
N = 30

def R(p, q, r, t):
    om_D = p*np.sqrt(1-q**2)
    return np.pi*r*(np.exp(-q*p*abs(t)))*(np.cos(om_D*t)+(q/(np.sqrt(1-q**2)))*(np.sin(om_D*abs(t))))/(2*q*(p**3))

def I(m, p):
    if m == p:
        return 1
    else:
        return 0

def func(a):
    cov = np.zeros([N, N])
    a1 = a[0]  # natural angular frequency, bounds=[3, 20]
    a2 = a[1]  # damping ratio, bounds=[0, 1]
    a3 = a[2]  # psd of forcing signal, bounds=[300, 600]
    a4 = a[3]  # variance of noise, bounds=[0, 0.0001] in m
    # assuming a uniform prior for a, we only have to maximise the likelihood function
    for i in range(N):
        for j in range(N):
            cov[i, j] += R(a1, a2, a3, (j-i)*tstep) + a4*I(i, j)
    P = ((2*np.pi)**(-N/2)) * ((np.linalg.det(cov))**(-0.5)) * np.exp((-0.5)*np.linalg.multi_dot([np.transpose(Y), np.linalg.inv(cov), Y]))
    return (-1)*P[0]

if __name__ == '__main__':
    t0 = time.perf_counter()
    a_start = [5, 0.05, 350, 0.00001]
    bnds = ((5, 20), (0.0001, 0.999), (300, 600), (0, 0.0001))
    result = differential_evolution(func, x0=a_start, bounds=bnds, maxiter=1000)
    print(result)
    print(f'elapse: {time.perf_counter() - t0:0.0f}s')
Output
fun: array([-2.76736878e+11])
jac: array([-2.91459845e+11, -4.55652161e+12, 1.27377279e+10, 3.34234132e+14])
message: 'Optimization terminated successfully.'
nfev: 3430
nit: 56
success: True
x: array([ 20. , 0.999, 300. , 0. ])
elapse: 55s
Scipy minimize is very fast
Changes:
from scipy.optimize import minimize
result = minimize(func, x0=a_start, bounds=bnds, options={'maxiter': 100, 'disp': True})
Output:
fun: array([-2.76736878e+11])
hess_inv: <4x4 LbfgsInvHessProduct with dtype=float64>
jac: array([-2.91459845e+11, -4.55652161e+12, 1.27377279e+10, 3.34234132e+14])
message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 30
nit: 4
njev: 6
status: 0
success: True
x: array([ 20. , 0.999, 300. , 0. ])
elapse: 0.5s
Optuna
Optuna after 1000 trials is right there too.
The value is positive here because I use the maximize direction; in both SciPy's differential evolution and minimize, the values have to be negated.
best param: {'a1': 20, 'a2': 0.9989999999999999, 'a3': 300, 'a4': 0.0}
best value: 276736878140.3103
best trial num: 73
elapse: 22s
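The Optuna run itself isn't shown above; a minimal sketch that should reproduce it might look like the following (the search ranges mirror the SciPy bounds; using integer suggestions for a1 and a3 is an assumption based on the printed best params):
import optuna

def optuna_objective(trial):
    # ranges mirror the scipy bounds above; integer suggestions for a1/a3 are an assumption
    a1 = trial.suggest_int('a1', 5, 20)
    a2 = trial.suggest_float('a2', 0.0001, 0.999)
    a3 = trial.suggest_int('a3', 300, 600)
    a4 = trial.suggest_float('a4', 0.0, 0.0001)
    return float(-func([a1, a2, a3, a4]))  # negate: the study maximizes

study = optuna.create_study(direction='maximize')
study.optimize(optuna_objective, n_trials=1000)
print('best param:', study.best_params)
print('best value:', study.best_value)
print('best trial num:', study.best_trial.number)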

Scipy error Inequality constraints incompatible (Exit mode 4)

I'm solving an optimization problem to do a constrained nonlinear regression using experimental data. I use scipy minimize and it works with the original data, but it doesn't work when I do a simple data transformation. For the transformed data I use the Excel Solver solution for the same problem as the initial condition, so it should work, but I can't figure out why it does not. Any help is appreciated. Thanks in advance.
Here is the code with the original data (works) and the transformation (does not work):
import numpy as np
from scipy.optimize import Bounds, minimize
def yp(x, time, mode='fit'):
    y1 = x[0] + x[1]*time
    y2 = x[2] + (x[3] - x[2])*np.exp(-x[5]*(time - x[4])/(x[3] - x[2]))
    comparison = time < x[4]
    yp = y1*comparison + y2*(~comparison)
    if mode == 'fit':
        return yp
    elif mode == 'calc':
        return y1, y2
    else:
        print('Unsupported mode, returning default behavior for fitting data')
        return yp

def objective(x, time, y):
    ypred = yp(x, time)
    z = sum((ypred - y)**2)
    return z
#***********************
#Original data
#***********************
data_x = np.array([0,30,60,90,120,150,180,210,240,270,300,330,360,420,480,540,600,720,840])
data_y = np.array([11.06468023,10.03242418,9.771158736,8.873720137,8.618127786,
8.397702515,7.581607582,7.131636821,6.537043245,6.358885017,
5.898468977,5.25275811,4.983989976,4.141791045,2.602472349,
2.07395813,1.078129376,0.551764193,0.480052971])
x0 = [11.5, -0.0211, 0.6, 3.26, 400, 0.01919]
lbound = [9, -0.1, 0.3, 1, 200, 0]
ubound = [14, -1e-5, 1, 4, 800, 0.1]
bounds = Bounds(lbound,ubound)
constraint = dict(type='ineq',
                  fun=lambda x: 0.1 - abs(x[0] + x[1]*x[4] - x[3]))

res = minimize(fun=objective,
               x0=x0,
               args=(data_x, data_y),
               method='SLSQP',
               constraints=constraint,
               options={'disp': True},
               bounds=bounds)
print(res)
Optimization terminated successfully (Exit mode 0)
Current function value: 0.6681037696841838
Iterations: 20
Function evaluations: 149
Gradient evaluations: 20
fun: 0.6681037696841838
jac: array([ 1.19826198e-03, 5.93313336e-01, 9.38165262e-02, 6.77183270e-04,
1.15633011e-05, -3.35602835e-02])
message: 'Optimization terminated successfully'
nfev: 149
nit: 20
njev: 20
status: 0
success: True
x: array([ 1.06185481e+01, -1.59476490e-02, 3.00000000e-01, 3.86162000e+00,
4.29964822e+02, 2.80661182e-02])
#***********************
#Transformed data
#***********************
data_y_rel = data_y/data_y[0]
x0_rel = [1, -0.00207571, 0.03, 0.359269446, 313.497571, 0.001970666]
lbound_rel = [1, -0.1, 0.03, 0.1, 200, 0]
ubound_rel = [1, -1e-5, 0.1, 0.4, 800, 0.1]
bounds_rel = Bounds(lbound_rel,ubound_rel)
constraint_rel = dict(type='ineq',
                      fun=lambda x: 0.01 - abs(x[0] + x[1]*x[4] - x[3]))

res_rel = minimize(fun=objective,
                   x0=x0_rel,
                   args=(data_x, data_y_rel),
                   method='SLSQP',
                   constraints=constraint_rel,
                   options={'disp': True},
                   bounds=bounds_rel)
print(res_rel)
Inequality constraints incompatible (Exit mode 4)
Current function value: 0.1593965203706159
Iterations: 1
Function evaluations: 7
Gradient evaluations: 1
fun: 0.1593965203706159
jac: array([ nan, -2.88985475e+02, -1.53672213e-01, -1.13128023e+00,
-1.58630125e-03, 5.45240970e+01])
message: 'Inequality constraints incompatible'
nfev: 7
nit: 1
njev: 1
status: 4
success: False
x: array([ 1.00000000e+00, -2.07571000e-03, 3.00000000e-02, 3.59269446e-01,
3.13497571e+02, 1.97066600e-03])
C:\Users\username\Anaconda3\lib\site-packages\scipy\optimize\_numdiff.py:519: RuntimeWarning: invalid value encountered in true_divide
J_transposed[i] = df / dx
Changing the method from 'SLSQP' to 'trust-constr' worked for me. 'COBYLA' is also an option.
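For reference, a minimal sketch of that swap (reusing objective, x0_rel, data_x, data_y_rel and bounds_rel from the question), with the inequality rewritten in trust-constr's native NonlinearConstraint form:
from scipy.optimize import NonlinearConstraint

# same condition as constraint_rel, expressed as 0 <= fun(x) <= inf
nlc = NonlinearConstraint(lambda x: 0.01 - abs(x[0] + x[1]*x[4] - x[3]), 0, np.inf)

res_rel = minimize(fun=objective,
                   x0=x0_rel,
                   args=(data_x, data_y_rel),
                   method='trust-constr',
                   constraints=nlc,
                   bounds=bounds_rel)
print(res_rel.x, res_rel.fun)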

Get different results from Pulp and Linprog

I am new to linear programming and am trying both PuLP and SciPy's linprog. Each gives me different results.
I think it might be because linprog uses an interior-point method whereas PuLP probably uses simplex. If so, is there a way to get PuLP to produce the same result as linprog?
import pulp
from pulp import *
from scipy.optimize import linprog
# Pulp
# Upper bounds
r = {1: 11, 2: 11, 3: 7, 4: 11, 5: 7}
# Create the model
model = LpProblem(name="small-problem", sense=LpMaximize)
# Define the decision variables
x = {i: LpVariable(name=f"x{i}", lowBound=0, upBound=r[i]) for i in range(1, 6)}
# Add constraints
model += (lpSum(x.values()) <= 35, "headroom")
# Set the objective
model += lpSum([7 * x[1], 7 * x[2], 11 * x[3], 7 * x[4], 11 * x[5]])
# Solve the optimization problem
status = model.solve()
# Get the results
print(f"status: {model.status}, {LpStatus[model.status]}")
print(f"objective: {model.objective.value()}")
for var in x.values():
    print(f"{var.name}: {var.value()}")

for name, constraint in model.constraints.items():
    print(f"{name}: {constraint.value()}")
# linprog
c = [-7, -7, -11, -7, -11]
bounds = [(0, 11), (0, 11), (0, 7), (0, 11), (0, 7)]
A_ub = [[1, 1, 1, 1, 1]]
B_ub = [[35]]
res = linprog(c, A_ub=A_ub, b_ub=B_ub, bounds=bounds)
print(res)
Output from code above:
status: 1, Optimal
objective: 301.0
x1: 10.0
x2: 0.0
x3: 7.0
x4: 11.0
x5: 7.0
headroom: 0.0
con: array([], dtype=float64)
fun: -300.9999999581466
message: 'Optimization terminated successfully.'
nit: 4
slack: array([4.60956784e-09])
status: 0
success: True
x: array([7., 7., 7., 7., 7.])
Bonus question: How would I formulate a problem where I want to maximize the values of the x[i]'s given some constraints? Above I am trying to maximise the sum of the x[i]'s, but I'm wondering if there is a better way.
As Erwin Kalvelagen has already pointed out in the comments, not all LPs have a unique solution. In your case you have two groups of variables, {x1, x2, x4} and {x3, x5}, that have the same coefficients in all occurrences.
It is optimal to use the maximal possible value for x3 and x5, and whatever is still available towards 35 in your constraint is distributed among x1, x2, x4 arbitrarily (it makes no difference for the objective).
Note that your PuLP solution is a basic solution while your SciPy solution is not. And yes, this is likely because the two use different algorithms to solve the problem.
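If you specifically want linprog to return a vertex (basic) solution like PuLP does, you can pick a simplex-based method. A small sketch, assuming a SciPy version recent enough to have the HiGHS dual simplex backend (it may still land on a different vertex than PuLP, since any optimal vertex is equally valid):
res = linprog(c, A_ub=A_ub, b_ub=B_ub, bounds=bounds, method='highs-ds')
print(res.x, -res.fun)  # a basic solution; the optimal objective is still 301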

Constrained Optimization Problem : Python

I am sure there must be a simple solution that keeps evading me.
I have a function
f = a*x + b*y + c*z
and a constraint
l*x + m*y + n*z = B
I need to find the (x, y, z) that maximizes f subject to the constraint. I also need
x, y, z >= 0
I remember having seen a solution like this. This example uses
a, b, c = 2, 4, 10 and l, m, n = 1, 2, 4 and B = 5
Ideally, this should give me x=1, y=0, z=1, such that f=12.
import numpy as np
from scipy.optimize import minimize
def objective(x, sign=-1.0):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    return sign*((2*x1) + (4*x2) + (10*x3))

def constraint1(x, sign=1.0):
    return sign*(1*x[0] + 2*x[1] + 4*x[2] - 5)
x0=[0,0,0]
b1 = (0,None)
b2 = (0,None)
b3=(0,None)
bnds= (b1,b2,b3)
con1 = {'type': 'ineq', 'fun': constraint1}
cons = [con1]
sol = minimize (objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
print(sol)
This generates a bizarre solution. What am I missing?
The problem as you stated originally without integer constraints can be solved simply and efficiently by linprog:
import scipy.optimize
c = [-2, -4, -10]
A_eq = [[1, 2, 4]]
b_eq = 5
# bounds are for non-negative values by default
scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
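Since linprog only minimizes, the coefficients above are negated; a quick sketch of reading the result back in the original maximization terms:
res = scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
print(res.x)     # expected to be approximately [0, 0, 1.25]
print(-res.fun)  # expected to be approximately 12.5, the maximized f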
I would recommend against using more general purpose solvers to solve narrow problems like this as you will often encounter worse performance and sometimes unexpected results.
You need to change your constraint to an 'equality constraint'. Also, your problem didn't specify that integer answers were required, so there is a better non-integer answer to this knapsack problem. (I don't have much experience with scipy.optimize and I'm not sure whether it can handle integer LP problems.)
In [13]: con1 = {'type': 'eq', 'fun': constraint1}
In [14]: cons = [con1,]
In [15]: sol = minimize (objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
In [16]: print(sol)
fun: -12.5
jac: array([ -2., -4., -10.])
message: 'Optimization terminated successfully.'
nfev: 10
nit: 2
njev: 2
status: 0
success: True
x: array([0. , 0. , 1.25])
As Jeff noted, scipy.optimize's minimize doesn't enforce integer constraints.
You can try using PuLP instead for integer optimization problems:
from pulp import *
prob = LpProblem("F Problem", LpMaximize)
# a,b,c=2,4,10 and l,m,n=1,2,4 and B=5
a,b,c=2,4,10
l,m,n=1,2,4
B=5
# x,y,z>=0
x = LpVariable("x",0,None,LpInteger)
y = LpVariable("y",0,None,LpInteger)
z = LpVariable("z",0,None,LpInteger)
# f=ax+by+c*z
prob += a*x + b*y + c*z, "Objective Function f"
# lx+my+n*z=B
prob += l*x + m*y + n*z == B, "Constraint B"
# solve
prob.solve()
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    print(v.name, "=", v.varValue)
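This should recover the integer solution the question expects (x = 1, y = 0, z = 1). To also print the objective value, a small addition:
print("f =", value(prob.objective))  # expected: 12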
More details are in the PuLP documentation.

linspace that would always include the final point?

For an arbitrary pair of 2D points in the plane, I want to break the connecting vector into parts specified by a precision factor. However, I want it to always include the start and end point. As an extra feature, I expect that segmenting from the end of the vector to the beginning gives me the same segmentation as going from the beginning to the end (after flipping it, of course). As far as I can see, numpy.linspace naturally satisfies this condition except when the precision is so big that the result consists of only one point. Is there a built-in function to take care of this situation, or any hints on how I could correct this behaviour?
import numpy as np
alpha = np.array([0,0])
beta = np.array([1,1])
alpha_beta_dist = np.linalg.norm(beta - alpha)
for i in range(10):
    precision = np.random.random(1)
    traversal = np.linspace(0.0, 1.0, num=int(alpha_beta_dist / float(precision)))
    traversal2 = np.fliplr([np.linspace(1.0, 0.0, num=int(alpha_beta_dist / float(precision)))])
    traversal2 = traversal2[0]
    if (traversal != traversal2).all():
        print('precision: ', precision)
        print('traversal: ', traversal)
        print('traversal2: ', traversal2[0])
Make sure num is at least 2:
traversal = np.linspace(0.0, 1.0,
                        num=max(int(alpha_beta_dist / float(precision)), 2))
np.linspace will return both endpoints (by default) unless num is less than 2:
In [23]: np.linspace(0, 1, num=0)
Out[23]: array([], dtype=float64)
In [24]: np.linspace(0, 1, num=1)
Out[24]: array([ 0.])
In [25]: np.linspace(0, 1, num=2)
Out[25]: array([ 0., 1.])
