Updating the RHS and LHS of specific constraints in Gurobi and Python

Using Gurobi and Python I am trying to solve a water balance (similar to the classic transportation problem) linear programming problem of the form:
minimize c'x subject to:
Ax = b
lb <= x <= ub
A and L are sparse CSR scipy matrices; c, b, lb and ub are vectors.
The problem has to be solved for a number of steps, and some elements get new values at each step. Specifically, A is fixed while all other elements receive new values every step. The following snippet works perfectly and is the basis I have used so far (ignore the "self", as the model is embedded in a solver class, while "water_network" is the graph object holding the values and properties for each step):
### Snippet 1: Formulating/initializing the problem
# unitC is the c vector
# Bounds holds both lb and ub values for each x
self.model = gurobipy.Model()
rows, cols = len(self.water_network.node_list), len(self.water_network.edge_name_list)
self.x1 = []
for j in range(cols):
    self.x1.append(self.model.addVar(lb=self.water_network.Bounds[j, 0],
                                     ub=self.water_network.Bounds[j, 1],
                                     obj=self.water_network.unitC[j]))
self.model.update()
self.EqualityConstraintA = []
for i in range(rows):
    start = self.water_network.A_sparse.indptr[i]
    end = self.water_network.A_sparse.indptr[i + 1]
    variables = [self.x1[j] for j in self.water_network.A_sparse.indices[start:end]]
    coeff = self.water_network.A_sparse.data[start:end]
    expr = gurobipy.LinExpr(coeff, variables)
    self.EqualityConstraintA.append(self.model.addConstr(lhs=expr, sense=gurobipy.GRB.EQUAL,
                                                         rhs=self.water_network.b[i], name='A' + str(i)))
self.model.update()
self.model.ModelSense = 1
self.model.optimize()
The following simple snippet is used to update the problem at each step. Note that I use the getConstrs() function:
#### Snippet 2: Updating the constraints, working ok for every step.
self.model.setAttr("LB",self.model.getVars(), self.water_network.Bounds[:,0])
self.model.setAttr("UB", self.model.getVars(), self.water_network.Bounds[:,1])
self.model.setAttr("OBJ", self.model.getVars(), self.water_network.unitC)
self.model.setAttr("RHS", self.model.getConstrs(),self.water_network.b)
The problem arose when a new set of constraints had to be added to the problem, of the form Lx = 0, where L is a sparse matrix that is updated at every step. In the formulation I now add the following just after Snippet 1:
self.EqualityConstraintL = []
leakrows = len(self.water_network.ZeroVector)
for i in range(leakrows):
    start = self.water_network.L_sparse.indptr[i]
    end = self.water_network.L_sparse.indptr[i + 1]
    variables = [self.x1[j] for j in self.water_network.L_sparse.indices[start:end]]
    coeff = self.water_network.L_sparse.data[start:end]
    expr = gurobipy.LinExpr(coeff, variables)
    self.EqualityConstraintL.append(self.model.addConstr(lhs=expr, sense=gurobipy.GRB.EQUAL,
                                                         rhs=self.water_network.ZeroVector[i], name='L' + str(i)))
However, I can no longer use getConstrs() to update all constraints at once, as some need only the RHS changed and others need only the LHS changed. So I did the following for the update (Snippet 3):
self.model.setAttr("LB",self.model.getVars(), self.water_network.Bounds[:,0])
self.model.setAttr("UB", self.model.getVars(), self.water_network.Bounds[:,1])
self.model.setAttr("OBJ", self.model.getVars(), self.water_network.unitC)
# Update A rhs...
for i in range(len(self.water_network.edge_name_list)):
self.model.setAttr("RHS", self.model.getConstrs()[i],self.water_network.b[i])
# Update L expr...
x1=self.model.getVars()
n=len(self.water_network.node_list) # because there are n rows in the A constrains, and L constraints are added after
# Now i rebuild the LHS expressions
for i in range(len(self.water_network.ZeroVector)):
start = self.water_network.L_sparse.indptr[i]
end=self.water_network.L_sparse.indptr[i+1]
variables=[x1[j] for j in self.water_network.L_sparse.indices[start:end]]
coeff=self.water_network.L_sparse.data[start:end]
expr = gurobipy.LinExpr(coeff, variables)
self.model.setAttr("LHS",self.model.getConstrs()[n+i],expr)
self.model.update()
self.model.optimize()
When I run the problem, it initializes fine, but at the second step it returns this error:
File "model.pxi", line 1709, in gurobipy.Model.setAttr
TypeError: object of type 'Constr' has no len()
and the offending line is:
self.model.setAttr("RHS", self.model.getConstrs()[i],self.water_network.b[i])
Two questions: 1) Why is that happening? Replacing getConstrs()[i] with getConstrByName('A'+str(i)) also fails with the exact same error. How can I update the RHS/LHS of a specific constraint?
2) Is there a way to more efficiently update the RHS of the constraints contained in the self.EqualityConstraintA list, and then the LHS of the constraints contained in the self.EqualityConstraintL list?
Many thanks in advance!
Di

The setAttr function on the model object is for:
- setting attributes globally on the model
- setting attributes for a list of variables
- setting attributes for a list of constraints
The individual constraint and variable objects have their own setAttr functions to set attributes on single variables and constraints. In your case,
for i in range(len(self.water_network.edge_name_list)):
    self.model.getConstrs()[i].setAttr('RHS', self.water_network.b[i])
This could be replaced by the more Pythonic (and likely more efficient):
m = self.model
constrs = m.getConstrs()[:len(self.water_network.edge_name_list)]
m.setAttr('RHS', constrs, self.water_network.b)
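For the second question in the post, the constraint lists you already keep (self.EqualityConstraintA and self.EqualityConstraintL) can be passed to these calls directly, which avoids slicing getConstrs() and relying on constraint order. A rough sketch of my own (not part of the original answer), assuming the sparsity pattern of L does not shrink between steps and using gurobipy's Model.chgCoeff to change individual LHS coefficients:
m = self.model
# Batch-update the RHS of the A constraints via the stored list
m.setAttr('RHS', self.EqualityConstraintA, self.water_network.b)

# Update the LHS coefficients of the L constraints in place
# instead of rebuilding whole LinExpr objects every step
L = self.water_network.L_sparse
x1 = m.getVars()
for i, constr in enumerate(self.EqualityConstraintL):
    start, end = L.indptr[i], L.indptr[i + 1]
    for j, value in zip(L.indices[start:end], L.data[start:end]):
        m.chgCoeff(constr, x1[j], value)  # change one coefficient in this constraint
m.update()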

Related

Variable definition as constraint in pyomo

This question is related to my previous question found here. I have managed to solve this problem (big thanks to #AirSquid!). My objective function is something like:
So the avgPrice_n variable is indexed by n. However, it is actually defined as
Meaning that it is indexed by n and i.
So at the moment my objective function is very messy, as I have three sums. It looks something like this (I expanded the brackets in the objective function and added each component separately, so the avgPrice_n*demand_n part looks like):
expr += sum(sum(sum((1/12)*model.c[i]*model.allocation[i,n] for i in model.MP[t]) for t in model.M)*model.demand_n[n] for n in model.N)
And while this works, debugging was quite difficult because the terms are very long. So instead of using the actual definition of avgPrice_n, I was wondering if it would be possible to create an avgPrice_n variable, use it in the objective function, and then add a constraint where I define avgPrice_n as I showed above.
The issue I am having is that I created my decision variable, x_{i,n}, as a variable, but apparently I can't create avgPrice_n as a variable indexed by x_{i,n}, as this results in a TypeError: Cannot apply a Set operator to an indexed Var component (allocation) error.
So as of now my decision variable looks like:
model.x = Var(model.NP_flat, domain = NonNegativeReals)
And I tried to create:
model.avg_Price = Var(model.x, domain = NonNegativeReals)
Which resulted in the above error. Any ideas or suggestions would be much appreciated!
You have a couple of options. Realize that you do not need the model.avg_price variable: you can construct the value from other variables, and introducing it would force you to add constraints to pin down its value, etc., and pollute your model.
The basic building blocks in the model are Pyomo expressions, so you could write a little "helper function" that builds an expression (the cost function shown below, which depends on n). Such a function is not defined within the model; it just pops out an expression, which is totally legal. You can also break up large expressions into smaller ones (like other_stuff below) and then combine them in the objective (or wherever needed); this gives you the opportunity to evaluate them independently. I've made several models whose objective has a "cost" component and a "penalty" component split into two expressions, so that after solving they can be inspected independently.
My suggestion (if you don't like the triple sum in your current model) is to make an avg_cost(n) function that builds the expression, similar to the nonsensical function below, and use it as a substitute for a new variable.
Note: the initialization of the variables here is generally unnecessary; I did it only to "simulate solving", otherwise their values would be None...
Code:
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.N = pyo.Set(initialize=[0, 1, 2])
m.x = pyo.Var(m.N, initialize=2.0)

def cost(n):
    return m.x[n] + 2 * m.x[n + 1]

m.other_stuff = 3 * m.x[1] + 4 * m.x[2]
m.costs = sum(cost(n) for n in {0, 1})
m.obj_expr = m.costs + m.other_stuff
m.obj = pyo.Objective(expr=m.obj_expr)

# inspect cost at a particular value of n...
print(cost(1))
print(pyo.value(cost(1)))

# inspect the pyomo expressions "other_stuff" and total costs...
print(m.other_stuff)
print(pyo.value(m.other_stuff))
print(m.costs)
print(pyo.value(m.costs))

# inspect the objective... which can be accessed by pprint() and display()
m.obj.pprint()
m.obj.display()
Output:
x[1] + 2*x[2]
6.0
3*x[1] + 4*x[2]
14.0
12.0
obj : Size=1, Index=None, Active=True
Key : Active : Sense : Expression
None : True : minimize : x[0] + 2*x[1] + x[1] + 2*x[2] + 3*x[1] + 4*x[2]
obj : Size=1, Index=None, Active=True
Key : Active : Value
None : True : 26.0

A strange result (because I am new to Python) when I update a self variable. Why does this happen?

I have been stuck on this for a week now and I don't know why it is happening, so I want to present my problem and see if you have a solution.
I have this code. My purpose is to update the variables A and B using a loop: first all calculations are executed with the A and B I pass when I instantiate the class, then the code computes gamma and xi, and finally I want to compute a new A and B.
In my class I first create the functions that compute the values needed for gamma and xi. Then I create the functions that compute gamma and xi; inside them I call the previous functions via self.fun.
Then I create the function shown here to compute the new A and B values, and I put it in a loop because I want to iterate until convergence.
But...
The method computes A correctly, but when it computes B it uses the new A. That is, when it computes A, the old A and B are used to compute gamma and xi (and then A); when it computes B, the new A and B are used to compute gamma and xi, whereas I want it to use the same values that were used when computing A.
def update(self):
    for n in range(self.iter):
        gamma = self.gamma()
        xi = self.xi()
        # new trans matrix
        new_A = []
        for i in range(len(self.A)):
            temp = []
            for j in range(len(self.A[i])):
                numerator = 0
                denominator = 0
                for r in range(len(self.seq)):
                    for t in range(len(xi[r])):
                        numerator += xi[r][t][i][j]
                        denominator += gamma[r][t][i]
                aij = numerator / denominator
                temp.append(aij)
            new_A.append(temp)
        self.A = new_A
        emission_signals = unique(self.seq[0])
        emission_matrix = []
        for k in range(len(emission_signals)):
            emission_matrix.append([])
            for i in range(len(self.A)):
                gamma_vec = []
                gamma_num = []
                for r in range(len(self.seq)):
                    for t in range(len(self.seq[r])):
                        gamma_append = gamma[r][t][i]
                        gamma_vec.append(gamma_append)
                        if self.seq[r][t] == emission_signals[k]:
                            gamma_num_append = gamma[r][t][i]
                            gamma_num.append(gamma_num_append)
                bik = sum(gamma_num) / sum(gamma_vec)
                emission_matrix[k].append(bik)
        new_B = {}
        keys = emission_signals
        for i in range(len(keys)):
            new_B[keys[i]] = emission_matrix[i]
        self.A = new_A
        self.B = new_B
    return {'A': self.A, 'B': self.B}
I don't know if I'm explaining it well, but this is my problem.
I hope you can help me!
Thank you!
It's difficult without seeing the rest of your code, but it looks like you are referencing the mutable objects A and B (a list and a dict) in multiple places and editing them from different places.
Just take into account that in Python, every time you assign a mutable object to a variable you are only creating a new reference to the same object, not a copy of it. So every edit you make on that object will be reflected in all its references.
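A minimal sketch (my own, not from the original answer) of what that means in practice:
a = [[1, 2], [3, 4]]
b = a                  # b is a second reference to the same list, not a copy
b[0][0] = 99
print(a[0][0])         # 99 -- the edit made through b is visible through a

import copy
c = copy.deepcopy(a)   # an independent copy
c[0][0] = 0
print(a[0][0])         # still 99 -- a is unaffected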
OK, the problem was this:
print(self.update().get('A'))
print(self.update().get('B'))
I changed it in the following way and now it works, but I do not understand the logic behind this.
a = self.update()
print('\n---New transition matrix... ---')
print(a.get('A'))
print('\n---New Emission matrix... ---')
print(a.get('B'))
Maybe in the previous version I was calling update() twice and then the calculations went wrong; I don't know.

Distance objective optimisation

I'm building a reoptimisation model and I would like to include a constraint in order to reduce the distance between the initial solution and the reoptimised solution. I'm doing staff scheduling, and to do so I want to penalise each assignment in the reoptimised solution that differs from the initial solution.
Before I start: I'm new to optimisation models and the way I built the constraint may be wrong.
#1 Extract the data from the initial solution of my main variable
ModelX_DictExtVal = model.x.extract_values()
#2 Create a new binary variable which activates when the main variable ModelX_DictExtVal[n,s,d] of the initial
#  solution is 1 (employee n works day d and shift s) and the value of model.x[n,s,d] in the reoptimized solution is different.
model.alpha_distance = Var(model.N_S_D, within=Binary)
#3 Model a constraint to activate my variable.
def constraint_distance(model, n, s, d):
    v = ModelX_DictExtVal[n, s, d]
    if v == 1 and ModelX_DictExtVal[n, s, d] != model.x[n, s, d]:
        return model.alpha_distance[n, s, d] == 1
    elif v == 0:
        return model.alpha_distance[n, s, d] == 0
model.constraint_distance = Constraint(model.N_S_D, rule=constraint_distance)
#4 Penalize in my objective function every time the variable is equal to one
ObjFunction = Objective(expr=sum(model.alpha_distance[n, s, d] * WeightDistance
                                 for n in model.N for s in model.S for d in model.D))
Issue: I'm not sure about what I'm doing in part 3, and I get an error when v == 1:
ERROR: Rule failed when generating expression for constraint
constraint_distance with index (0, 'E', 6): ValueError: Constraint
'constraint_distance[0,E,6]': rule returned None
Since I am reusing the same model for re-optimisation, I am also wondering whether the model keeps the value of the initial solution of model.x[n,s,d] when doing the comparison ModelX_DictExtVal[n,s,d] != model.x[n,s,d] during the re-optimisation phase, instead of the new assignments...
You are right to suspect part 3. :)
So you have some "initial values" that could be either the original schedule (before optimizing) or some other preliminary optimization. And your decision variable is binary, indexed by [n,s,d] if I understand your question.
In your constraint you cannot employ an if-else structure based on a comparison test of your decision variable. The value of that variable is unknown at the time the constraint is built, right?
You are on the right track, though. So, what you really want to do is to have your alpha_distance (or penalty) variable capture any changes, indicating 1 where there is a change. That is an absolute value operation, but can be captured with 2 constraints. Consider (in pseudocode):
penalty = |x.new - x.old| # is what you want
So introduce 2 constraints (indexed fully by [n,s,d]):
penalty >= x.new - x.old
penalty >= x.old - x.new
Then, as you are doing now, include the penalty in your objective, optionally multiplied by a weight.
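A minimal Pyomo sketch of those two constraints (my own illustration, reusing the names from your snippets; it relies on the objective being minimised, which pushes alpha_distance down to the absolute difference):
def penalty_up_rule(model, n, s, d):
    # new assignment minus old assignment (the old value is a plain number)
    return model.alpha_distance[n, s, d] >= model.x[n, s, d] - ModelX_DictExtVal[n, s, d]

def penalty_down_rule(model, n, s, d):
    return model.alpha_distance[n, s, d] >= ModelX_DictExtVal[n, s, d] - model.x[n, s, d]

model.penalty_up = Constraint(model.N_S_D, rule=penalty_up_rule)
model.penalty_down = Constraint(model.N_S_D, rule=penalty_down_rule)

# Attach the weighted penalty to the model as (part of) the objective;
# note the Objective should be a model attribute, unlike the unattached ObjFunction above.
model.obj = Objective(expr=sum(model.alpha_distance[n, s, d] * WeightDistance
                               for n in model.N for s in model.S for d in model.D))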
Comment back if that doesn't make sense...

Pyomo | Creating simple model with indexed set

I am having trouble creating a simple model in pyomo. I want to define the following abstract model:
An attempt at creating an abstract model
I define
m.V = pyo.Set()
m.C = pyo.Set() # I first wanted to make this an indexed set in m.V, but this does not work as I cannot create variables with indexed sets (in next line)
m.Components = pyo.Var(m.V*m.C, domain=Binary)
Now I have no idea how to add the constraint. Just adding
def constr(m, v):
    return sum([m.Components[v, c] for c in m.C]) == 2
m.Constraint = Constraint(m.V, rule=constr)
will lead to the model also summing over components in m.C that should not fall under m.V (e.g. if I pass m.V = ['Cars', 'Boats'] and one of the 'Boats' components I want to pass is 'New sails', the above constraint will also put a constraint on m.Components['Cars', 'New sails'], which does not make much sense).
Trying to work out a concrete example
Now if I try to work through this problem in a concrete way and follow e.g. Variable indexed by an indexed Set with Pyomo, I still get an issue with the constraint. E.g. say I want to create a model that has this structure:
set_dict = {'Car': ['New wheels', 'New gearbox', 'New seats'], 'Boat': ['New seats', 'New sail', 'New rudder']}
I then create these sets and variables:
m.V = pyo.Set(initialize=['Car', 'Boat'])
m.C = pyo.Set(initialize=['New wheels', 'New gearbox', 'New seats', 'New sail', 'New rudder'])
m.VxC = pyo.Set(m.V*m.C, within=set_dict)
m.Components = pyo.Var(m.VxC, domain=Binary)
But now I still don't see a way to add the constraint in a Pyomo-native way. I cannot define a function that sums just over m.C, as then it will again sum over values that are not allowed (e.g., as above, 'New sail' for the 'Car' vehicle type). It seems the only way to do this is to refer back to set_dict and loop & sum over that?
I need to create an abstract model, so I want to be able to write out this model in a pyomo native way, not relying on additional dictionaries and other objects to pass the right dimensions/sets into the model.
Any idea how I could do this?
You didn't say what form your data is in, but some variation of the code below should work. I'm not a huge fan of AbstractModels, but each data format has some accommodation for building sparse sets, which is what you want in order to represent the legal combinations of V x C.
By adding a membership test within your constraint(s), you can still sum across either V or C as needed.
import pyomo.environ as pyo

m = pyo.AbstractModel()

### SETS
m.V = pyo.Set()
m.C = pyo.Set()
m.VC = pyo.Set(within=m.V * m.C)

### VARS
m.select = pyo.Var(m.VC, domain=pyo.Binary)

### CONSTRAINTS
def constr(m, v):
    return sum(m.select[v, c] for c in m.C if (v, c) in m.VC) == 2

m.Constraint = pyo.Constraint(m.V, rule=constr)
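As a usage sketch (my addition, using the example data from the question), the sparse set VC would then be populated when the instance is created, in the AbstractModel data-dictionary format:
# Only the (vehicle, component) pairs listed under 'VC' are legal combinations.
data = {None: {
    'V': {None: ['Car', 'Boat']},
    'C': {None: ['New wheels', 'New gearbox', 'New seats', 'New sail', 'New rudder']},
    'VC': {None: [('Car', 'New wheels'), ('Car', 'New gearbox'), ('Car', 'New seats'),
                  ('Boat', 'New seats'), ('Boat', 'New sail'), ('Boat', 'New rudder')]},
}}

instance = m.create_instance(data)
instance.pprint()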

Nurse rostering using ortools constraint solver

I went through the tutorial from Google and I seem to understand most of the code. My problem is that it chooses solutions based only on hard constraints. Most papers also use soft constraints, each with its own coefficient; the sum of all constraint violations, each multiplied by its coefficient, gives the cost of the roster, and the goal is to minimize this value. My question is: how can I add this to the code?
# Create the decision builder.
db = solver.Phase(shifts_flat, solver.CHOOSE_FIRST_UNBOUND,
                  solver.ASSIGN_MIN_VALUE)
# Create the solution collector.
solution = solver.Assignment()
solution.Add(shifts_flat)
collector = solver.AllSolutionCollector(solution)
solver.Solve(db, [collector])
I'm not sure what the decision builder does (or what its parameters are), nor what solver.Assignment() and solver.AllSolutionCollector(solution) do.
The only thing I found is this, but I'm not sure how to use it (maybe call solver.Minimize(cost, ?) instead of the assignment?).
If you look at:
https://github.com/google/or-tools/blob/stable/examples/python/shift_scheduling_sat.py
The data defines employee requests:
https://github.com/google/or-tools/blob/stable/examples/python/shift_scheduling_sat.py#L219
The model directly creates one bool var for each tuple (employee, day, shift).
Thus adding that to the objective is straightforward:
# Employee requests
for e, s, d, w in requests:
    obj_bool_vars.append(work[e, s, d])
    obj_bool_coeffs.append(w)
This is used in the minimize code:
# Objective
model.Minimize(
    sum(obj_bool_vars[i] * obj_bool_coeffs[i]
        for i in range(len(obj_bool_vars)))
    + sum(obj_int_vars[i] * obj_int_coeffs[i]
          for i in range(len(obj_int_vars))))
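For completeness, if one wanted to stay with the original CP solver from the question rather than CP-SAT, the usual pattern is roughly the following (a sketch under my own assumptions: coefficients and violation_vars stand for the soft-constraint weights and violation variables, which are not shown in the question, so names and details may differ from your code):
# Build an integer cost variable from the weighted soft-constraint violation terms
cost = solver.Sum([c * v for c, v in zip(coefficients, violation_vars)]).Var()
objective = solver.Minimize(cost, 1)  # look for solutions that improve the cost by at least 1

db = solver.Phase(shifts_flat, solver.CHOOSE_FIRST_UNBOUND, solver.ASSIGN_MIN_VALUE)
collector = solver.LastSolutionCollector()  # keep only the best solution found
collector.Add(shifts_flat)
collector.AddObjective(cost)
solver.Solve(db, [objective, collector])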
