How do I convert an LpSum to an LpConstraint?

In PuLP I have constraints of the form
lpSum([decision_vars[an_id][item] for item in a_vector]) == count_req[an_id], f'constraint_{an_id}'
and I want to convert this to use LpConstraint as a stepping stone to making the constraint elastic, i.e. LpConstraint(...).makeElasticSubProblem(...):
LpConstraint(
    e=pl.lpSum([decision_vars[an_id][item] for item in a_vector]),
    sense=LpConstraintEQ,
    rhs=count_req[an_id],
    name=f'constraint_{an_id}'
)
Are these equivalent?
Is there some cleaner example or documentation for converting an lpSum to an LpConstraint?
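For context, the full elastic step I have in mind looks roughly like this, based on the PuLP docs (the penalty and proportionFreeBound values are placeholders, and prob is the enclosing LpProblem):
import pulp as pl

constraint = pl.LpConstraint(
    e=pl.lpSum([decision_vars[an_id][item] for item in a_vector]),
    sense=pl.LpConstraintEQ,
    rhs=count_req[an_id],
    name=f'constraint_{an_id}',
)
# makeElasticSubProblem penalizes violations instead of forbidding them;
# proportionFreeBound gives a penalty-free band around the target
elastic = constraint.makeElasticSubProblem(penalty=100, proportionFreeBound=0.02)
prob.extend(elastic)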

It's maybe not the answer you're looking for, but I recommend against using PuLP. cvxpy is easier to learn and use (free variables instead of variables bound to models) and allows you to easily extend your models to nonlinear convex systems, which gives you much more bang for your buck in terms of effort put into learning versus capability obtained.
For instance, in cvxpy, your problem might be cast as:
from collections import defaultdict
import cvxpy as cp

decision_vars = defaultdict(dict)
for an_id in ids:
    for item in a_vector:
        decision_vars[an_id][item] = cp.Variable(pos=True)
constraints = []
for an_id in ids:
    my_sum = sum(x for x in decision_vars[an_id].values())
    constraints.append(count_req[an_id] * lower_proportion <= my_sum)
    constraints.append(my_sum <= count_req[an_id] * upper_proportion)

problem = cp.Problem(cp.Minimize(OBJECTIVE_HERE), constraints)
optimal_value = problem.solve()

for an_id in ids:
    for item in a_vector:
        print(f"{an_id} {item} {decision_vars[an_id][item].value}")
Note that the use of free variables means that we can use standard Python constructs like dictionaries, sums, and comparison operators to build our problem. Sure, you have to construct the elastic constraint manually, but that's not challenging.
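For instance, a minimal sketch of making the count constraint elastic by hand, with a penalized slack variable (penalty_weight is a placeholder):
slack = {an_id: cp.Variable(pos=True) for an_id in ids}
constraints = []
for an_id in ids:
    my_sum = sum(decision_vars[an_id].values())
    # |my_sum - count_req[an_id]| <= slack[an_id]
    constraints.append(my_sum - count_req[an_id] <= slack[an_id])
    constraints.append(count_req[an_id] - my_sum <= slack[an_id])

# pay penalty_weight per unit of deviation instead of enforcing equality
problem = cp.Problem(cp.Minimize(OBJECTIVE_HERE + penalty_weight * sum(slack.values())),
                     constraints)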

Apriori Results in Python

I am trying to run an apriori algorithm in Python. My specific problem is that when I use the apriori function, I specify min_length as 2. However, when I print the rules, I get rules that contain only 1 item. I am wondering why apriori does not filter out itemsets with fewer than 2 items, because I specified that I only want rules with at least 2 items in the itemset.
from apyori import apriori

# store the transactions
transactions = []
total_transactions = 0
with open('browsing.txt', 'r') as file:
    for transaction in file:
        total_transactions += 1
        items = []
        for item in transaction.split():
            items.append(item)
        transactions.append(items)

support_threshold = (100 / total_transactions)
print(support_threshold)
minimum_support = 100
frequent_items = apriori(transactions, min_length=2, min_support=support_threshold)
association_results = list(frequent_items)
print(association_results[0])
print(association_results[1])
My results:
RelationRecord(items=frozenset({'DAI11223'}), support=0.004983762579981351, ordered_statistics=[OrderedStatistic(items_base=frozenset(), items_add=frozenset({'DAI11223'}), confidence=0.004983762579981351, lift=1.0)])
RelationRecord(items=frozenset({'DAI11778'}), support=0.0037619369152117293, ordered_statistics=[OrderedStatistic(items_base=frozenset(), items_add=frozenset({'DAI11778'}), confidence=0.0037619369152117293, lift=1.0)])
A look into the code (https://github.com/ymoch/apyori/blob/master/apyori.py) reveals that there is no min_length keyword (only max_length). The way apyori is implemented, it does not raise any warning or error when it is passed keyword arguments that are not used.
Why not filter the result afterwards? In Python 3, filter returns an iterator, so wrap it in list if you still want to index into the results:
association_results = list(filter(lambda x: len(x.items) > 1, association_results))
A limitation of the first approach is that the data needs to be converted into a list format; in real life a store can have many thousands of SKUs, in which case this becomes computationally expensive.
The apyori package is also outdated; there has been no update in the past few years.
Its results come in an awkward format that needs extra processing to present properly.
mlxtend uses a two-step approach: it generates frequent itemsets and then builds association rules over them. Check here for more info.
mlxtend is properly maintained and has community support.
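For reference, a minimal sketch of the mlxtend two-step approach; it assumes the transactions list and support_threshold built in the question, and the confidence threshold is an illustrative choice:
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# one-hot encode the list-of-lists transactions into a boolean DataFrame
te = TransactionEncoder()
onehot = te.fit(transactions).transform(transactions)
df = pd.DataFrame(onehot, columns=te.columns_)

# step 1: frequent itemsets; step 2: association rules on top of them
frequent = apriori(df, min_support=support_threshold, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.5)

# every rule has a non-empty antecedent and consequent, so each rule
# already involves at least two items
print(rules.head())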

Nurse rostering using ortools constraint solver

I went through the tutorial from Google and I seem to understand most of the code. My problem is that they choose solutions based only on hard constraints. Most papers also use soft constraints, and every constraint has its coefficient. The sum of all constraint violations, each multiplied by its coefficient, gives the cost of the roster, so the goal is to minimize this value. My question is: how can I add this to the code?
# Create the decision builder.
db = solver.Phase(shifts_flat, solver.CHOOSE_FIRST_UNBOUND,
                  solver.ASSIGN_MIN_VALUE)
# Create the solution collector.
solution = solver.Assignment()
solution.Add(shifts_flat)
collector = solver.AllSolutionCollector(solution)
solver.Solve(db, [collector])
I'm not sure what the decision builder does (or what its parameters mean), nor what solver.Assignment() or solver.AllSolutionCollector(solution) do.
The only thing I found is this, but I'm not sure how to use it. (Maybe call solver.Minimize(cost, ?) instead of the assignment?)
If you look at:
https://github.com/google/or-tools/blob/stable/examples/python/shift_scheduling_sat.py
The data defines employee requests:
https://github.com/google/or-tools/blob/stable/examples/python/shift_scheduling_sat.py#L219
The model directly creates one bool var for each tuple (employee, day, shift).
Thus adding that to the objective is straightforward:
# Employee requests
for e, s, d, w in requests:
    obj_bool_vars.append(work[e, s, d])
    obj_bool_coeffs.append(w)
This is used in the minimize code:
# Objective
model.Minimize(
    sum(obj_bool_vars[i] * obj_bool_coeffs[i]
        for i in range(len(obj_bool_vars))) +
    sum(obj_int_vars[i] * obj_int_coeffs[i]
        for i in range(len(obj_int_vars))))
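Pieced together as a minimal runnable sketch of the same pattern (num_employees, num_shifts, num_days, and the sample requests are illustrative placeholders; a negative weight rewards granting a request):
from ortools.sat.python import cp_model

num_employees, num_shifts, num_days = 4, 3, 7
requests = [(0, 1, 2, -2), (1, 0, 3, -1)]  # (employee, shift, day, weight)

model = cp_model.CpModel()
work = {}
for e in range(num_employees):
    for s in range(num_shifts):
        for d in range(num_days):
            work[e, s, d] = model.NewBoolVar(f'work_{e}_{s}_{d}')

# hard constraint: each employee works exactly one shift per day
for e in range(num_employees):
    for d in range(num_days):
        model.Add(sum(work[e, s, d] for s in range(num_shifts)) == 1)

# soft constraints: collect weighted literals for the objective
obj_bool_vars = []
obj_bool_coeffs = []
for e, s, d, w in requests:
    obj_bool_vars.append(work[e, s, d])
    obj_bool_coeffs.append(w)

model.Minimize(sum(v * c for v, c in zip(obj_bool_vars, obj_bool_coeffs)))
solver = cp_model.CpSolver()
status = solver.Solve(model)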

Can I provide a solver in Google's ortools package with a BFS to start?

I'm solving a very large LP, one which doesn't have 0 as a basic feasible solution (BFS). I'm wondering if, by passing the solver a basic feasible solution, I can speed up the process. I'm looking for something along the lines of solver.setBasicFeasibleSolution(). I'll formulate a toy instance below (with far fewer constraints) and show you what I mean.
from ortools.linear_solver import pywraplp

def main():
    # Instantiate solver
    solver = pywraplp.Solver('Toy',
                             pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)

    # Variables
    x = solver.NumVar(-1, solver.infinity(), 'x')
    y = solver.NumVar(-1, solver.infinity(), 'y')
    z = solver.NumVar(-1, solver.infinity(), 'z')

    # Constraint 1: x + y >= 10.
    constraint1 = solver.Constraint(10, solver.infinity())
    constraint1.SetCoefficient(x, 1)
    constraint1.SetCoefficient(y, 1)

    # Constraint 2: x + z >= 5.
    constraint2 = solver.Constraint(5, solver.infinity())
    constraint2.SetCoefficient(x, 1)
    constraint2.SetCoefficient(z, 1)

    # Constraint 3: y + z >= 15.
    constraint3 = solver.Constraint(15, solver.infinity())
    constraint3.SetCoefficient(y, 1)
    constraint3.SetCoefficient(z, 1)

    # Objective function: min 2x + 3y + 4z.
    objective = solver.Objective()
    objective.SetCoefficient(x, 2)
    objective.SetCoefficient(y, 3)
    objective.SetCoefficient(z, 4)
    objective.SetMinimization()

    # What I want:
    """
    solver.setBasicFeasibleSolution({x: 10, y: 5, z: 15})
    """

    solver.Solve()
    for val in [x, y, z]:
        print(val.solution_value())
I'm hoping something like this will speed things up, in case the solver otherwise has to use two-phase simplex or the big-M method to find an initial BFS.
Also, if anyone can point me to Python API docs (not Google-provided examples), that would be really helpful. I'm looking to understand what objects are available in ortools' solvers, what their methods are, and what their return values and patterns are; sort of like the C++ docs.
Of course, other resources are also welcome.
Crawling the docs, this seems to be the API documentation for the C++-based solver, and the swig-based Python binding is mentioned.
Within it, you will find MPSolver, which has this:
SetStartingLpBasis
Return type: void
Arguments: const std::vector<MPSolver::BasisStatus>& variable_statuses, const std::vector<MPSolver::BasisStatus>& constraint_statuses
Advanced usage: Incrementality. This function takes a starting basis to be used in the next LP Solve() call. The statuses of a current solution can be retrieved via the basis_status() function of a MPVariable or a MPConstraint. WARNING: With Glop, you should disable presolve when using this because this information will not be modified in sync with the presolve and will likely not mean much on the presolved problem.
The warning somewhat makes me wonder if this will work out for you (in terms of saving time).
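If you want to experiment with it anyway, here is an untested sketch of the round trip from Python, assuming the swig binding exposes the same names (variables(), constraints(), basis_status(), SetStartingLpBasis()) as the C++ API quoted above; verify against your ortools version:
solver.Solve()  # initial solve to obtain a basis

# assumed binding: basis_status() on variables and constraints, as in C++
var_statuses = [v.basis_status() for v in solver.variables()]
con_statuses = [c.basis_status() for c in solver.constraints()]

# ... modify the model incrementally here ...

solver.SetStartingLpBasis(var_statuses, con_statuses)
solver.Solve()  # warm-started solve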
If you don't have a good reason to stick with GLOP (it looks interesting!), use COIN-OR's Clp. Its docs are in a depressing state, but imho it's the best open-source LP solver, and it includes some interesting crash procedures; I think it's even interfaced within ortools. (See the Mittelmann benchmarks, where it even beats CPLEX, though in terms of a scientific evaluation that only shows it's very competitive!)
Or, if your problem is very large and you don't need simplex-like basic solutions, go for an interior-point method (Clp has one; I have no information about its quality).

Get coefficients of a linear pyomo constraint

I would like to obtain the coefficients of a linear constraint c of a pyomo model m.
For instance, for
from pyomo.environ import ConcreteModel, Var, Constraint, Integers

m = ConcreteModel()
m.x_1 = Var()
m.x_2 = Var()
m.x_3 = Var(within=Integers)
m.x_4 = Var(within=Integers)
m.c = Constraint(expr=2*m.x_1 + 5*m.x_2 + m.x_4 <= 2)
I would like to get the array c_coef = [2,5,0,1].
The answer to this question explains how to obtain all variables occurring in a linear constraint, and I can easily use this to create the zero coefficients for variables which don't occur in the constraint. However, I am struggling with the nonzero coefficients. My current approach uses the private attribute _coef, that is, c_nzcoef = m.c.body._coef, which I probably should not use.
What would be the proper way to obtain the nonzero coefficients?
The easiest way to get the coefficients for a linear expression is to make use of the "Canonical Representation" data structure:
from pyomo.repn import generate_canonical_repn

# verify that the expression is linear
if m.c.body.polynomial_degree() == 1:
    repn = generate_canonical_repn(m.c.body)
    for i, coefficient in enumerate(repn.linear or []):
        var = repn.variables[i]
        print(var, coefficient)
This should be valid for any version of Pyomo from 4.0 through at least 5.3.
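Building on that, a sketch of assembling the zero-padded array [2, 5, 0, 1] from the question, under the same canonical-repn assumptions:
from pyomo.repn import generate_canonical_repn

repn = generate_canonical_repn(m.c.body)
# map each variable (by identity) to its nonzero coefficient
coef_by_var = {id(v): c for v, c in zip(repn.variables, repn.linear or [])}
all_vars = [m.x_1, m.x_2, m.x_3, m.x_4]
c_coef = [coef_by_var.get(id(v), 0) for v in all_vars]
print(c_coef)  # [2, 5, 0, 1]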

Maya Scale Constraint

I have two lists,
mainCTRL = ['nurbsCircle1','nurbsCircle2','nurbsCircle3']
grpCTRL = ['group1','group2','group3']
and for each object in mainCTRL, I am trying to apply a scale constraint to the object at the same position in grpCTRL. The constraints should be applied in order, like so:
'nurbsCircle1' should apply a scale constraint to 'group1'
'nurbsCircle2' should apply a scale constraint to 'group2'
'nurbsCircle3' should apply a scale constraint to 'group3'
How can I do this? How can I tell Python to apply this command for each nurbsCircle to its grpCTRL?
cmds.scaleConstraint('eachNurbsCircle', 'eachGrp')
I am a newbie to python and learning things as I go. Any help is really appreciated.
Thank you very much :)
This is relatively easy with Python's built-in zip function.
import maya.cmds as cmds

mainCTRL = ['nurbsCircle1','nurbsCircle2','nurbsCircle3']
grpCTRL = ['group1','group2','group3']

for ctrl, grp in zip(mainCTRL, grpCTRL):
    cmds.scaleConstraint(ctrl, grp)
You can print list(zip(mainCTRL, grpCTRL)) to see what it actually returns (in Python 3, zip yields an iterator rather than a list).
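For example:
print(list(zip(mainCTRL, grpCTRL)))
# [('nurbsCircle1', 'group1'), ('nurbsCircle2', 'group2'), ('nurbsCircle3', 'group3')]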
