I am new to linear programming and am trying both PuLP and SciPy's linprog. Each gives me different results.
I think it might be because linprog uses an interior-point method whereas PuLP probably uses simplex. If so, is there a way to get PuLP to produce the same result as linprog?
from pulp import *
from scipy.optimize import linprog
# Pulp
# Upper bounds
r = {1: 11, 2: 11, 3: 7, 4: 11, 5: 7}
# Create the model
model = LpProblem(name="small-problem", sense=LpMaximize)
# Define the decision variables
x = {i: LpVariable(name=f"x{i}", lowBound=0, upBound=r[i]) for i in range(1, 6)}
# Add constraints
model += (lpSum(x.values()) <= 35, "headroom")
# Set the objective
model += lpSum([7 * x[1], 7 * x[2], 11 * x[3], 7 * x[4], 11 * x[5]])
# Solve the optimization problem
status = model.solve()
# Get the results
print(f"status: {model.status}, {LpStatus[model.status]}")
print(f"objective: {model.objective.value()}")
for var in x.values():
    print(f"{var.name}: {var.value()}")
for name, constraint in model.constraints.items():
    print(f"{name}: {constraint.value()}")
# linprog
c = [-7, -7, -11, -7, -11]
bounds = [(0, 11), (0, 11), (0, 7), (0, 11), (0, 7)]
A_ub = [[1, 1, 1, 1, 1]]
b_ub = [35]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res)
Output from code above:
status: 1, Optimal
objective: 301.0
x1: 10.0
x2: 0.0
x3: 7.0
x4: 11.0
x5: 7.0
headroom: 0.0
con: array([], dtype=float64)
fun: -300.9999999581466
message: 'Optimization terminated successfully.'
nit: 4
slack: array([4.60956784e-09])
status: 0
success: True
x: array([7., 7., 7., 7., 7.])
Bonus question: how would I formulate a problem where I want to maximize the values of the x[i]'s given some constraints? Above I am maximizing the sum of the x[i]'s, but I wonder if there is a better way.
As @Erwin Kalvelagen has already pointed out in the comments, not all LPs have a unique solution. In your case you have two groups of variables, {x1, x2, x4} and {x3, x5}, that have the same coefficients everywhere they occur.
It is therefore optimal to use the maximal possible value for x3 and x5, and whatever is still available towards the 35 in your constraint is distributed among x1, x2, x4 arbitrarily (it makes no difference to the objective).
Note that your pulp solution is a basic solution while your scipy solution is not. And yes, this likely is because the two use different algorithms to solve the problem.
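If you want linprog to return a vertex (basic) solution like PuLP does, one option is to select a simplex-type method explicitly. A minimal sketch, reusing the c, A_ub and bounds from your code (method names vary by SciPy version; 'highs' is the default in recent releases and uses a dual-simplex solver, while older versions accepted 'revised simplex'):
res = linprog(c, A_ub=A_ub, b_ub=[35], bounds=bounds, method='highs')
print(res.x)  # a basic (vertex) solution, comparable to the PuLP result
Even then, two simplex-based solvers may legitimately return different optimal vertices when the optimum is not unique.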
Related
Let's assume I have 100 different kinds of items, each with a name and a physical weight.
I know the names of all 100 items but the weight of only 80 of them.
When I ship items, I pack them in groups of 10 and sum the weights of those items.
Because some items are missing their weight, the sum is inaccurate when I'm about to ship.
I have different shipments with missing weights:
Shipment 1
Item Name    Item Weight
Item 2       10
Item 27      20
Item 42      20
Item 71      -
Item 77      -
Total weight: 75
Shipment 2
Item Name    Item Weight
Item 2       10
Item 27      20
Item 42      20
Item 71      -
Item 92      -
Total weight: 90
Shipment 3
Item Name    Item Weight
Item 2       10
Item 27      20
Item 42      20
Item 55      35
Item 77      -
Total weight: 100
Since some of the shipments share the same items with missing weights, and I have each shipment's total weight, is there a way with machine learning to determine the weights of these items without unpacking the entire shipment?
Or would it just be, in this case, a 100x3 matrix with a lot of empty values?
At this point I'm not really sure whether I should use some type of regression to solve this, or whether it's just a matrix that would expand a lot if I had n more items to ship.
I also wondered if this was some type of knapsack problem, but I hope someone can point me in the right direction.
Forget about machine learning. This is a simple system of linear equations. Subtracting the known item weights from each shipment's total leaves:
w_71 + w_77 = 25
w_71 + w_92 = 40
w_77 = 15
You can solve it with sympy.solvers.solveset.linsolve, scipy.optimize.linprog, scipy.linalg.lstsq, or numpy.linalg.lstsq.
sympy.linsolve is perhaps the easiest to understand if you are not familiar with matrices; however, if the system is underdetermined, then instead of returning a particular solution, sympy.linsolve returns the general solution in parametric form.
scipy.lstsq and numpy.lstsq expect the problem in matrix form. If there is more than one possible solution, they return the most "average" one. However, they cannot take a positivity constraint into account: they might return a solution where one of the variables is negative. You can sometimes work around this by adding an equation to the system that manually forces a variable to be positive, then solving again.
scipy.linprog also expects the problem in matrix form; in addition, it expects you to specify a linear objective function, used to choose which particular solution is "best" when there is more than one. linprog treats all variables as nonnegative by default, and lets you specify explicit bounds for the variables yourself. It also allows inequality constraints in addition to the equations, if you wish.
Using sympy.solvers.solveset.linsolve
from sympy.solvers.solveset import linsolve
from sympy import symbols
w71, w77, w92 = symbols('w71 w77 w92')
eqs = [w71+w77-25, w71+w92-40, w77-15]
solution = linsolve(eqs, [w71, w77, w92])
# solution = {(10, 15, 30)}
In your example, there is only one possible solution, so linsolve returned that solution: w71 = 10, w77 = 15, w92 = 30.
However, in case there is more than one possible solution, linsolve will return a parametric form for the general solution:
x,y,z = symbols('x y z')
eqs = [x+y-10, y+z-20]
solution = linsolve(eqs, [x, y, z])
# solution = {(z - 10, 20 - z, z)}
Here there is an infinity of possible solutions. linsolve is telling us that we can pick any value for z, and then we'll get the corresponding x and y as x = z - 10 and y = 20 - z.
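If you then need one concrete solution from that parametric family, you can substitute a value for the free parameter; a small sketch (the value 12 is arbitrary):
particular = solution.subs(z, 12)
# particular = {(2, 8, 12)}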
Using numpy.linalg.lstsq
lstsq expects the system of equations to be given in matrix form. If there is more than one possible solution, it will return the most "average" solution. For instance, if the system of equations is simply x + y = 10, then lstsq will return the particular solution x = 5, y = 5 and ignore more "extreme" solutions such as x = 10, y = 0.
from numpy.linalg import lstsq
# w_71 + w_77 = 25
# w_71 + w_92 = 40
# w_77 = 15
A = [[1, 1, 0], [1, 0, 1], [0, 1, 0]]
b = [25, 40, 15]
solution = lstsq(A, b, rcond=None)
solution[0]
# array([10., 15., 30.])
Here lstsq found the unique solution: w71 = 10, w77 = 15, w92 = 30.
# x + y = 10
# y + z = 20
A = [[1, 1, 0], [0, 1, 1]]
b = [10, 20]
solution = lstsq(A, b, rcond=None)
solution[0]
# array([-3.55271368e-15, 1.00000000e+01, 1.00000000e+01])
Here lstsq had to choose a particular solution, and chose the one it considered most "average", x = 0, y = 10, z = 10. You might want to round the solution to integers.
One drawback of lstsq is that it doesn't take into account your non-negativity constraint. That is, it might return a solution where one of the variables is negative:
# x + y = 2
# y + z = 20
A = [[1, 1, 0], [0, 1, 1]]
b = [2, 20]
solution = lstsq(A, b, rcond=None)
solution[0]
# array([-5.33333333, 7.33333333, 12.66666667])
See how lstsq ignored the possible positive solution x = 1, y = 1, z = 18 and instead returned the solution it considered most "average", x = -5.33, y = 7.33, z = 12.67.
One way to fix this is to add an equation yourself to force the offending variable to be positive. For instance, here we noticed that lstsq wanted x to be negative, so we can manually force x to be equal to 1 instead, and solve again:
# x + y = 2
# y + z = 20
# x = 1
A = [[1, 1, 0], [0, 1, 1], [1, 0, 0]]
b = [2, 20, 1]
solution = lstsq(A, b, rcond=None)
solution[0]
# array([ 1., 1., 19.])
Now that we manually forced x to be 1, lstsq found solution x=1, y=1, z=19 which we're more happy with.
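As a side note beyond the original answer: SciPy also has a dedicated non-negative least-squares routine, scipy.optimize.nnls, which enforces x >= 0 directly instead of requiring you to patch the system by hand:
import numpy as np
from scipy.optimize import nnls

# x + y = 2
# y + z = 20
A = np.array([[1, 1, 0], [0, 1, 1]], dtype=float)
b = np.array([2, 20], dtype=float)
x, rnorm = nnls(A, b)
# x is elementwise non-negative; rnorm should be ~0 here because
# exact non-negative solutions exist (e.g. x = 0, y = 2, z = 18).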
Using scipy.optimize.linprog
The particularity of linprog is that it expects you to specify the "objective" used to choose a particular solution, in case there is more than one possible solution.
Also, linprog allows you to specify bounds for the variables. The default is that all variables are nonnegative, which is what you want.
from scipy.optimize import linprog
# w_71 + w_77 = 25
# w_71 + w_92 = 40
# w_77 = 15
A = [[1, 1, 0], [1, 0, 1], [0, 1, 0]]
b = [25, 40, 15]
c = [1, 1, 1] # coefficients for objective: minimise w71 + w77 + w92.
solution = linprog(c, A_eq = A, b_eq = b)
solution.x
# array([10., 15., 30.])
I am sure there must be a simple solution that keeps evading me.
I have a function
f = a*x + b*y + c*z
and a constraint
l*x + m*y + n*z = B
Need to find the (x,y,z), that maximizes f subject to the constraint.
I also need
x,y,z>=0
I remember having seen a solution like this.
This example uses
a, b, c = 2, 4, 10; l, m, n = 1, 2, 4; and B = 5.
Ideally, this should give me x = 1, y = 0, z = 1, so that f = 12.
import numpy as np
from scipy.optimize import minimize
def objective(x, sign=-1.0):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    return sign*((2*x1) + (4*x2) + (10*x3))
def constraint1(x, sign=1.0):
    return sign*(1*x[0] + 2*x[1] + 4*x[2] - 5)
x0 = [0, 0, 0]
b1 = (0, None)
b2 = (0, None)
b3 = (0, None)
bnds = (b1, b2, b3)
con1 = {'type': 'ineq', 'fun': constraint1}
cons = [con1]
sol = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)
print(sol)
This generates a bizarre solution. What am I missing?
The problem as you stated originally without integer constraints can be solved simply and efficiently by linprog:
import scipy.optimize
c = [-2, -4, -10]
A_eq = [[1, 2, 4]]
b_eq = [5]
# bounds are for non-negative values by default
scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
I would recommend against using more general purpose solvers to solve narrow problems like this as you will often encounter worse performance and sometimes unexpected results.
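For reference, capturing and inspecting the result (exact output may differ slightly by SciPy version; remember the objective was negated to turn maximization into minimization):
res = scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
print(res.x)     # approximately [0., 0., 1.25]
print(-res.fun)  # approximately 12.5, matching the SLSQP result below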
You need to change your constraint to an equality constraint. Also, your problem didn't specify that integer answers were required, so there is a better non-integer answer to this knapsack problem. (I don't have much experience with scipy.optimize, and I'm not sure whether it can handle integer LP problems.)
In [13]: con1 = {'type': 'eq', 'fun': constraint1}
In [14]: cons = [con1,]
In [15]: sol = minimize (objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
In [16]: print(sol)
fun: -12.5
jac: array([ -2., -4., -10.])
message: 'Optimization terminated successfully.'
nfev: 10
nit: 2
njev: 2
status: 0
success: True
x: array([0. , 0. , 1.25])
As Jeff said, scipy.optimize does not handle integer programming problems.
You can try using PuLP instead for Integer Optimization problems:
from pulp import *
prob = LpProblem("F_Problem", LpMaximize)  # avoid spaces in the name; PuLP warns about them
# a,b,c=2,4,10 and l,m,n=1,2,4 and B=5
a,b,c=2,4,10
l,m,n=1,2,4
B=5
# x,y,z>=0
x = LpVariable("x",0,None,LpInteger)
y = LpVariable("y",0,None,LpInteger)
z = LpVariable("z",0,None,LpInteger)
# f=ax+by+c*z
prob += a*x + b*y + c*z, "Objective Function f"
# lx+my+n*z=B
prob += l*x + m*y + n*z == B, "Constraint B"
# solve
prob.solve()
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    print(v.name, "=", v.varValue)
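For these coefficients the solver should report x = 1, y = 0, z = 1. If you also want the objective value (12 here), PuLP's value helper works on the objective expression:
print("f =", value(prob.objective))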
I am looking to implement constraints on my optimization variable:
X=variable(2)
How can I specify constraints on the components of X, like "X[i] <= 1" for example? I tried that but it doesn't seem to work. I did not find anything in the cvxpy documentation on this specific case, although it seems pretty basic...
I tried this simple example :
import cvxpy
X=variable(2)
constraints = [x[0] <= 5,
x[1] <= 5]
obj=Maximize(x[0]+x[1])
Pb=Problem(obj, constraints)
but cvxpy does not find any solution.
Thanks!
The documentation shows an example of this on the main page. You specify the constraints when you create the Problem. Here's a simple example:
import cvxpy
x = cvxpy.Variable(5)
constraints = [x[3] >= 3, x >= 0]
problem = cvxpy.Problem(cvxpy.Minimize(cvxpy.sum_squares(x)), constraints)
problem.solve()
x.value
Which outputs:
array([-0., -0., -0., 3., -0.])
The exact problem you described yields the expected solution:
import cvxpy as cvx
x = cvx.Variable(2)
constraints = [x[0] <= 5, x[1] <= 5]
obj = cvx.Maximize(x[0] + x[1])
prob = cvx.Problem(obj, constraints)
prob.solve()  # returns the optimal objective value: 10.0
x.value       # array([5., 5.])
I want to minimize the following LPP:
c = 60x + 40y + 50z
subject to
20x + 10y + 10z >= 350,
10x + 10y + 20z >= 400, x, y, z >= 0
My code snippet is the following (I'm using the scipy package for the first time):
from scipy.optimize import linprog
c = [60, 40, 50]
A = [[20,10], [10,10],[10,20]]
b = [350,400]
res = linprog(c, A, b)
print(res)
The output is: (screenshot of the PyCharm output, not reproduced here)
1. Can someone explain the parameters of the linprog function in detail, especially how the bounds are handled?
2. Have I written the parameters correctly?
I am naive about LPP basics, and I think I am misunderstanding the parameters.
linprog expects A to have one row per inequality and one column per variable, not the other way around. Try this:
from scipy.optimize import linprog
c = [60, 40, 50]
A = [[20, 10, 10], [10, 10, 20]]
b = [350, 400]
res = linprog(c, A, b)
print(res)
Output:
fun: -0.0
message: 'Optimization terminated successfully.'
nit: 0
slack: array([ 350., 400.])
status: 0
success: True
x: array([ 0., 0., 0.])
The message is telling you that your A_ub matrix has incorrect dimensions. It is currently a 3x2 matrix, which cannot left-multiply your 3x1 optimization variable x. You need to write:
A = [[20, 10, 10], [10, 10, 20]]
which is a 2x3 matrix and can left-multiply x.
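One caveat neither answer spells out: linprog expresses inequalities as A_ub @ x <= b_ub, so your >= constraints must also be negated, otherwise the all-zero solution shown above is "optimal" for a different problem than the one you stated. A sketch:
from scipy.optimize import linprog

c = [60, 40, 50]
# 20x + 10y + 10z >= 350  becomes  -20x - 10y - 10z <= -350
# 10x + 10y + 20z >= 400  becomes  -10x - 10y - 20z <= -400
A_ub = [[-20, -10, -10], [-10, -10, -20]]
b_ub = [-350, -400]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)  # should give roughly x = [10., 0., 15.] at cost 1350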
Some hypothetical example solving a nonlinear equation system with fsolve:
from scipy.optimize import fsolve
import math
def equations(p):
    x, y = p
    return (x + y**2 - 4, math.exp(x) + x*y - 3)
x, y = fsolve(equations, (1, 1))
print(equations((x, y)))
Is it somehow possible to solve it using scipy.optimize.brentq with some interval, e.g. [-1,1]? How does the unpacking work in that case?
As sascha suggested, constrained optimization is the easiest way to proceed. The least_squares method is convenient here: you can directly pass your equations to it, and it will minimize the sum of squares of its components.
from scipy.optimize import least_squares
res = least_squares(equations, (1, 1), bounds = ((-1, -1), (2, 2)))
The structure of bounds is ((min_first_var, min_second_var), (max_first_var, max_second_var)), or similarly for more variables.
The resulting object has a bunch of fields, shown below. The most relevant ones are: res.cost is essentially zero, which means a root was found; and res.x says what the root is: [ 0.62034453, 1.83838393]
active_mask: array([0, 0])
cost: 1.1745369255773682e-16
fun: array([ -1.47918522e-08, 4.01353883e-09])
grad: array([ 5.00239352e-11, -5.18964300e-08])
jac: array([[ 1. , 3.67676787],
[ 3.69795254, 0.62034452]])
message: '`gtol` termination condition is satisfied.'
nfev: 7
njev: 7
optimality: 8.3872972696740977e-09
status: 1
success: True
x: array([ 0.62034453, 1.83838393])
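To answer the literal brentq question: brentq is strictly one-dimensional, so it only applies if you first eliminate one variable by hand. A sketch, assuming we take the positive branch y = sqrt(4 - x) from the first equation and substitute it into the second:
import math
from scipy.optimize import brentq

def g(x):
    # exp(x) + x*y - 3 with y = sqrt(4 - x) substituted in
    return math.exp(x) + x * math.sqrt(4 - x) - 3

# g(0) < 0 and g(1) > 0, so the interval [0, 1] brackets a root.
x = brentq(g, 0, 1)
y = math.sqrt(4 - x)
# (x, y) is approximately (0.6203, 1.8384), matching the least_squares result.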