Add constraint in GurobiPy using conditional decision variable - python

I am new to optimisation and have a fairly basic query.
I have a model with two decision variables x and y that vary in time. I'd like to add a conditional constraint on y at time t that depends on x[t-1], so I've implemented the following code:
for t in model.timesteps:
    if t > 1:
        if model.x[t-1] <= 1:
            model.addConstr(model.y[t] >= 100)
        elif model.x[t-1] <= 0.5:
            model.addConstr(model.y[t] >= 50)
        elif model.x[t-1] <= 0.3:
            model.addConstr(model.y[t] >= 20)
However, the above code produces the error:
File "tempconstr.pxi", line 44, in gurobipy.TempConstr.bool
gurobipy.GurobiError: Constraint has no bool value (are you trying "lb <= expr <= ub"?)
Having done a little reading on previous related queries on this page, I believe I might need to use a binary indicator variable to implement the above. However, I'm not certain whether this would solve the issue.
Could anyone point me in the right direction here please?
Many thanks in advance!

First, I assume the order of your conditions is wrong: since x[t-1] <= 1 is tested first, the two elif branches can never fire. Presumably you intended the right-hand side to be 20 for 0 ≤ x[t-1] ≤ 0.3, 50 for 0.3 < x[t-1] ≤ 0.5 and 100 for 0.5 < x[t-1] ≤ 1.0.
The bigger issue is that you are mixing Python programming with MIP modeling. What you need is to convert that logic into a MIP model. There are several ways to do this. One is to use a piecewise-linear constraint to represent the right-hand-side values of the y[t] constraints. However, I prefer to model this explicitly. There are a few similar options; here is one I think is easy to understand: add binary variables z[0], z[1] and z[2] to represent the 3 ranges of x[t-1]. This gives the following code:
for t in model.timesteps:
    if t > 1:
        z = model.addVars(3, vtype='B', name="z_%s" % str(t))   # one binary per range
        model.addConstr(x[t-1] <= z.prod([0.3, 0.5, 1.0]))      # x[t-1] may not exceed the selected range's upper bound
        model.addConstr(y[t] >= z.prod([20, 50, 100]))          # y[t] gets the selected range's lower bound
        model.addConstr(z.sum() == 1)                           # exactly one range is selected
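As an aside, since you mention indicator variables: Gurobi also has general indicator constraints (Model.addGenConstrIndicator), which express the same logic more literally. A rough sketch of that alternative, reusing x, y and model from above (the three ranges are the same assumed thresholds):

from gurobipy import GRB

ranges = [(0.3, 20), (0.5, 50), (1.0, 100)]    # (upper bound on x[t-1], lower bound on y[t])

for t in model.timesteps:
    if t > 1:
        z = model.addVars(3, vtype=GRB.BINARY, name="zind_%s" % str(t))
        model.addConstr(z.sum() == 1)          # exactly one range is selected
        for k, (xub, ylb) in enumerate(ranges):
            # if z[k] is selected, x[t-1] must fit that range and y[t] must meet its bound
            model.addGenConstrIndicator(z[k], True, x[t-1] <= xub)
            model.addGenConstrIndicator(z[k], True, y[t] >= ylb)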

How do I evaluate this equation in z3 for python

I'm trying to evaluate a simple absolute value inequality like this using z3.
x = Int("x")
y = Int("y")
def abs(x):
    return If(x >= 0, x, -x)
solve(abs(x / 1000 - y / 1000) < .01, y == 1000)
The output is no solution every time. I know this is mathematically possible, I just can't figure out how z3 does stuff like this.
This is a common gotcha in z3py bindings. Constants are "promoted" to fit into the right type, following the usual Python methodology. But more often than not, it ends up doing the wrong conversion, and you end up with a very confusing situation.
Since your variables x and y are Int values, the comparison against .01 forces that constant to be 0 to fit the types, and that's definitely not what you wanted to say. The general advice is simply not to mix-and-match arithmetic like this: Cast this as a problem over real-values, not integers. (In general SMTLib doesn't allow mixing-and-matching types in numbers, though z3py does. I think that's misguided, but that's a different discussion.)
To address your issue, the simplest thing to do would be to wrap 0.01 into a real-constant, making the z3py bindings interpret it correctly. So, you'll have:
from z3 import *

x = Int("x")
y = Int("y")
def abs(x):
    return If(x >= 0, x, -x)
solve(abs(x / 1000 - y / 1000) < RealVal(.01), y == 1000)
Note the use of RealVal. This returns:
[x = 1000, y = 1000]
I guess this is what you are after.
But in general I'd recommend against relying on conversions like this. Instead, be very explicit yourself and cast this as a problem, for instance, over Real values. Note that your division / 1000 is also interpreted in this equation as integer division, i.e., one that produces an integer result. So I'm guessing this isn't really what you want either. But I hope this gets you started on the right path.
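For reference, here is a rough sketch of that all-Real formulation (same names as before; with Real variables the division is exact rational division and 0.01 stays 0.01):

from z3 import *

x = Real("x")
y = Real("y")

def abs(v):
    return If(v >= 0, v, -v)   # absolute value as an If-expression

# 0.01 is compared against a Real expression, so it is no longer truncated to 0
solve(abs(x / 1000 - y / 1000) < 0.01, y == 1000)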
Int('a') < 0.01 is turned (rightly or wrongly) into Int('a') < 0 and clearly the absolute value can never be smaller than 0.
I believe you want Int('a') <= 0 here.
Examples:
solve(Int('a') < 0.01, Int('a') > -1)
no solution
solve(Int('a') <= 0.01, Int('a') > -1)
[a = 0]
Int('a') < 0.01
a < 0

Add step size to a linear optimization

I'm working on a blending problem similar to the PuLP example.
I have this constraint to make sure the quantity produced is the desired one:
prob += lpSum([KG[i] * deposit_vars[i] for i in deposit]) == 64, "KGRequirement"
But I also need to add another constraint for a minimum value different from zero. This is because it is not convenient to take, for example, 0.002 kg of one ingredient; I have to take either 0 or at least 2 kg, hence valid values are e.g. 0, 2, 2.3, 6, 3.23.
I tried to make it this way:
for i in deposit:
    prob += (KG[i] * deposit_vars[i] == 0) or (TM[i] * deposit_vars[i] >= 30)
But that is not working and it just makes the problem infeasible.
EDIT
This is my current code:
import pulp
from pulp import *
import pandas as pd
food = ["f1","f2","f3","f4"]
KG = [10,20,50,80]
Protein = [18,12,16,18]
Grass = [13,14,13,16]
price_per_kg = [15,11,10,22]
## protein,carbohydrates,kg
df = pd.DataFrame({"tkid":food,"KG":KG,"Protein":Protein,"Grass":Grass,"value":price_per_kg})
deposit = df["tkid"].values.tolist()
factor_volumen = 1
costs = dict((k,v) for k,v in zip(df["tkid"],df["value"]))
Protein = dict((k,v) for k,v in zip(df["tkid"],df["Protein"]))
Grass = dict((k,v) for k,v in zip(df["tkid"],df["Grass"]))
KG = dict((k,v) for k,v in zip(df["tkid"],df["KG"]))
prob = LpProblem("The Whiskas Problem", LpMinimize)
deposit_vars = LpVariable.dicts("Ingr",deposit,0)
prob += lpSum([costs[i]*deposit_vars[i] for i in deposit]), "Total Cost of Ingredients per can"
#prob += lpSum([deposit_vars[i] for i in deposit]) == 1.0, "PercentagesSum"
prob += lpSum([Protein[i] *KG[i] * deposit_vars[i] for i in deposit]) >= 17.2*14, "ProteinRequirement"
prob += lpSum([Grass[i] *KG[i] * deposit_vars[i] for i in deposit]) >= 12.8*14, "FatRequirement"
prob += lpSum([KG[i] * deposit_vars[i] for i in deposit]) == 14, "KGRequirement"
prob += lpSum([KG[i] * deposit_vars[i] for i in deposit]) <= 80, "KGRequirement1"
prob.writeLP("WhiskasModel.lp")
prob.solve()
# The status of the solution is printed to the screen
print ("Status:", LpStatus[prob.status])
# Each of the variables is printed with it's resolved optimum value
for v in prob.variables():
    print (v.name, "=", v.varValue)
# The optimised objective function value is printed to the screen
print ("Total Cost of Ingredients per can = ", value(prob.objective))
The new constraint I want to add is in this part:
prob += lpSum([KG[i] * deposit_vars[i] for i in deposit]) <= 80, "KGRequirement1"
where I want the product KG[i] * deposit_vars[i] to be either 0 or between a and b.
In the traditional linear programming formulation, all variables, objective function(s), and constraints need to be continuous. What you are asking is how to make this variable a discrete variable, i.e. it can only accept values a, b, ... and not anything in between. When you have a combination of continuous and discrete variables, that is called a mixed integer problem (MIP). See the PuLP documentation that reflects this explanation. I suggest you carefully read the blending problem's mentions of "integers"; they are scattered about the page. According to PuLP's documentation, it can solve MIP problems by calling an external MIP solver, some of which are already included.
Without a minimum working example, it is a little tricky to explain how to implement this. One way to do it would be to specify the variable(s) as integer, with the values they can take given as a dict. Leaving the default solver, COIN-OR's CBC solver, will then solve the MIP. Meanwhile, here are a couple of resources for you to move forward:
https://www.toptal.com/algorithms/mixed-integer-programming#example-problem-scheduling
Note how it uses the CBC solver, which is the default solver, to solve this problem.
http://yetanothermathprogrammingconsultant.blogspot.com/2018/08/scheduling-easy-mip.html
A more explicit example of how they set up their integer variables and call the CBC solver.
'or' is not something you can use in an LP / MIP model directly. Remember, an LP/MIP consists of a linear objective and linear constraints.
To model x=0 or x≥L you can use so-called semi-continuous variables. Most advanced solvers support them. I don't believe PuLP supports this, however. As a workaround you can also use a binary variable δ:
δ*L ≤ x ≤ δ*U
where U is an upper bound on x. It is easy to see this works:
δ = 0 ⇒ x = 0
δ = 1 ⇒ L ≤ x ≤ U
Semi-continuous variables don't require these constraints. Just tell the solver variable x is semi-continuous with bounds [L,U] (or just L if there is no upper bound).
The constraint
a*x=0 or L ≤ a*x ≤ U
can be rewritten as
δ*L ≤ x*a ≤ δ*U
with δ a binary variable.
This is a fairly standard formulation. Semi-continuous variables are often used in finance (portfolio models) to prevent small allocations.
All of this keeps the model perfectly linear (not quadratic), so one can use a standard MIP solver and a standard LP/MIP modeling tool such as PuLP.
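Concretely, in the PuLP code above this workaround could look something like the following sketch (use[i] is a new binary variable; the lot-size bounds L = 2 and U = 80 are just illustrative assumptions):

from pulp import LpVariable, LpBinary

L, U = 2, 80   # assumed minimum lot size and a valid upper bound on KG[i] * deposit_vars[i]

use = LpVariable.dicts("Use", deposit, cat=LpBinary)   # 1 if ingredient i is used at all

for i in deposit:
    # use[i] = 0 forces KG[i]*deposit_vars[i] = 0; use[i] = 1 forces L <= KG[i]*deposit_vars[i] <= U
    prob += KG[i] * deposit_vars[i] >= L * use[i], "MinLot_%s" % i
    prob += KG[i] * deposit_vars[i] <= U * use[i], "MaxLot_%s" % i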

Comparing float variables precisely?

I have a code looking like this:
for i in range(1, 256):
    if ((((i-1) * (1 / float(256))) <= proba) and (proba <= (i * (1 / float(256))))):
        problist[i] += 1
With proba being a float between 0 and 1 (mostly 0.625 or 0.5).
I want to assign proba, which is calculated beforehand, to a specific interval. The problem is that Python seems to assign one value to more than one interval due to rounding errors.
Is there another way to compare these two float numbers being more precise?
Has nothing to do with rounding errors. There aren't any. But if you have intervals [0.49609375, 0.5] and [0.5, 0.50390625], then 0.5 truly is in both of them. Use half-open intervals instead, i.e., change one of those <= to <.
Btw, it would be simpler and faster to calculate the interval number directly by multiplying by 256:
problist[min(int(proba * 256) + 1, 256)] += 1
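Putting it together, a small sketch (assuming problist is sized so that indices 1..256 are valid, as the line above implies):

# bucket i covers [(i-1)/256, i/256); the min() keeps proba == 1.0 in bucket 256
i = min(int(proba * 256) + 1, 256)
problist[i] += 1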

(python) solving transcendental equation

I need to solve the following equation:
0 = -1 / (x**0.5) - 2 * log((alpha * x**0.5) + beta)
alpha and beta are given; I just need to iterate over x to a certain extent.
I'm not a great Python programmer, but I'd like to implement this one.
How might this be possible?
Best regards
The smartest thing to do would be to use a solve function like Stanislav recommended. You can't just iterate over values of x until the equation reaches 0, because of floating-point arithmetic: you would have to .floor or .ceil your value to avoid an infinite loop. An example of this would be something like:
x = 0
while True:
    x += 0.1
    print(x)
    if x == 10:
        break
Here you'd think that x eventually reaches 10 when it adds 0.1 to 9.9, but because 0.1 cannot be represented exactly in floating point, the equality never holds and the loop continues forever. Now, I don't know if your values are integers or floats, but what I'm getting at is: don't iterate. Use an existing solver library.
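For example, here is a rough sketch using SciPy's bracketing root finder scipy.optimize.brentq; the alpha, beta values and the bracket are assumptions chosen so that a root exists:

import math
from scipy.optimize import brentq

alpha, beta = 0.5, 0.1   # hypothetical constants; substitute your given values

def f(x):
    # the question's equation rearranged to f(x) = 0
    return -1 / math.sqrt(x) - 2 * math.log(alpha * math.sqrt(x) + beta)

root = brentq(f, 0.5, 2.0)   # brentq needs a bracket [a, b] where f changes sign
print(root)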

Adding lazy constraint in python-Gurobi interface

I am trying to add some lazy constraints to the first stage of a stochastic programming problem. For example, the optimal solution shows me that locations 16 and 20 are chosen together, which I don't want, so I want to add a lazy constraint as follows:
First Stage
x1 + x2 + ... + x40 = 5
z_i,l <= x_i   for i = 1,...,40 and l = 1,2
Second Stage
....
def mycallback(model, where):
    if where == GRB.Callback.MIPSOL:
        sol = model.cbGetSolution([model._vars[s] for s in range(1, 40)])
        if sol[16] + sol[20] == 2:
            Exp = LinExpr([(1, model._vars[16]), (1, model._vars[20])])
            model.cbLazy(Exp <= 1)
model._vars = x
model.optimize(mycallback)
But after running this function, locations 16 and 20 are still in the optimal solution. Could you please let me know how I should attack this issue?
In your code, the test
if sol[16] + sol[20] == 2:
is comparing the sum of two floating-point numbers with an integer using equality. Even if you declare decision variables to be integer, the solution values are floating-point numbers, and they don't even need to have exactly integer values. Gurobi has a parameter IntFeasTol, which determines how far a value can be from 0 or 1 and still be considered binary. The default is 1e-5, so 0.999991 would be considered an integer. Your check should be something like:
if sol[16] + sol[20] > 1.5:
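Applied to your callback, a sketch of the corrected version could look like this (it also sets the LazyConstraints parameter, which Gurobi requires before cbLazy has any effect):

from gurobipy import GRB, LinExpr

def mycallback(model, where):
    if where == GRB.Callback.MIPSOL:
        sol = model.cbGetSolution([model._vars[s] for s in range(1, 40)])
        # solution values are floats, so use a tolerance instead of == 2
        if sol[16] + sol[20] > 1.5:
            expr = LinExpr([(1, model._vars[16]), (1, model._vars[20])])
            model.cbLazy(expr <= 1)

model._vars = x
model.Params.LazyConstraints = 1   # enable lazy constraints added via cbLazy
model.optimize(mycallback)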
