Using BigM equations to minimize a variable - python

I am trying to enforce the following relationship:
P_ch_max = min(22, E_need*m_int), where E_need*m_int can be larger or smaller than 22 depending on the data.
I am using the following constraints in Pyomo to do it:
m.C6 = ConstraintList()
for t in m.ts2:
    m.C6.add(expr=m.char_power - m.var_E_need[0, t]*m_int <= 100*m.Y[t])
    m.C6.add(expr=m.var_E_need[0, t]*m_int - m.char_power <= 100*(1 - m.Y[t]))
    m.C6.add(expr=m.var_P_ch_max[0, t] <= m.var_E_need[0, t]*m_int)
    m.C6.add(expr=m.var_P_ch_max[0, t] <= m.char_power)
    m.C6.add(expr=m.var_P_ch_max[0, t] >= m.var_E_need[0, t]*m_int - 100*(1 - m.Y[t]))
    m.C6.add(expr=m.var_P_ch_max[0, t] >= m.char_power - 100*m.Y[t])
Here m.char_power = 22, m.Y is a binary variable, and 100 is my big-M value in this case.
When I substitute the values of Y manually, these equations make sense:
When Y=0 I get P_ch_max <= 22 and P_ch_max >= 22, which forces P_ch_max == 22.
When Y=1 I get P_ch_max <= E_need*m_int and P_ch_max >= E_need*m_int, which forces P_ch_max == E_need*m_int.
However, when I run the code in Pyomo it reports that the model is infeasible or unbounded, and I don't understand why. Is there another way to do this, or can you tell me what I am doing wrong?

It's pretty difficult to unwind your equations and figure out why the model is infeasible, but you can do a couple of things rather quickly to tackle this.
First, you can start to "comment out" constraints and see if you can get the model breathing (such that the solver doesn't complain about infeasibility), and then investigate from there.
Second, if you think you know a feasible solution (as you suggest in your comment about plugging in), then just plug in your values by assigning them and then display your model and it should stand out very quickly which constraints are violated. For example:
import pyomo.environ as pyo
m = pyo.ConcreteModel()
m.x = pyo.Var()
m.c1 = pyo.Constraint(expr=m.x >= 5)
m.x = 4
m.display()
Yields:
Model unknown

  Variables:
    x : Size=1, Index=None
        Key  : Lower : Value : Upper : Fixed : Stale : Domain
        None :  None :     4 :  None : False : False :  Reals

  Objectives:
    None

  Constraints:
    c1 : Size=1
        Key  : Lower : Body : Upper
        None :   5.0 :    4 :  None

[Finished in 210ms]
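Applied to your min(22, E_need*m_int) construction, that plug-in check might look something like this (a toy single-period sketch with made-up numbers, not your real model): fix Y and P_ch_max to the values you believe are feasible, then display and look for any constraint whose Body falls outside its [Lower, Upper] range.
import pyomo.environ as pyo

# toy stand-ins for a single time step (hypothetical numbers)
e_need_m_int = 30.0   # pretend E_need*m_int = 30
char_power = 22.0
M = 100.0

m = pyo.ConcreteModel()
m.Y = pyo.Var(domain=pyo.Binary)
m.P_ch_max = pyo.Var()

m.C6 = pyo.ConstraintList()
m.C6.add(char_power - e_need_m_int <= M * m.Y)
m.C6.add(e_need_m_int - char_power <= M * (1 - m.Y))
m.C6.add(m.P_ch_max <= e_need_m_int)
m.C6.add(m.P_ch_max <= char_power)
m.C6.add(m.P_ch_max >= e_need_m_int - M * (1 - m.Y))
m.C6.add(m.P_ch_max >= char_power - M * m.Y)

# plug in the solution you expect: min(22, 30) = 22, which needs Y = 0
m.Y = 0
m.P_ch_max = 22.0
m.display()   # any violated constraint shows a Body outside its [Lower, Upper]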

Definition of binary variable in Pyomo is not working

I'm new to Pyomo (and optimization) and am trying to reproduce a simple approach (see comment from Fengyuan-Shi on https://github.com/Pyomo/pyomo/issues/821) to create a maximum constraint using the Big M method and binary variables. The code returns the correct answer, but the variables u_1 and u_2, which are supposed to be binary (taking values of 0 or 1 only) are actually taking values between 0 and 1. Can anyone see what I'm doing wrong?
import pyomo.environ as pyomo

m = pyomo.ConcreteModel()

m.x = pyomo.Param(initialize=5)
m.y = pyomo.Param(initialize=9)

m.z = pyomo.Var(domain=pyomo.NonNegativeReals)
m.u_1 = pyomo.Var(domain=pyomo.Binary)
m.u_2 = pyomo.Var(domain=pyomo.Binary)

m.M = pyomo.Param(initialize=1e3)  # Big M

m.o = pyomo.Objective(expr=m.z + 8)

m.cons = pyomo.ConstraintList()
# ensure z is the maximum of x and y, per comment from Fengyuan Shi on https://github.com/Pyomo/pyomo/issues/821
m.cons.add(m.x <= m.z)
m.cons.add(m.y <= m.z)
m.cons.add(m.x >= m.z - m.M*(1 - m.u_1))
m.cons.add(m.y >= m.z - m.M*(1 - m.u_2))
m.cons.add(m.u_1 + m.u_2 >= 1)

m.pprint()

solver = pyomo.SolverFactory('ipopt')
status = solver.solve(m)
print("Status = %s" % status.solver.termination_condition)

for v in m.component_objects(pyomo.Var, active=True):
    print("Variable component object", v, v.value)
When the code is run, the output is:
Variable component object z 8.999999912504697 (correct, the maximum of x = 5 and y=9)
Variable component object u_1 0.5250908817936364 (expected this to be either 0 or 1)
Variable component object u_2 0.5274112114061761 (expected this to be either 0 or 1)
Your construct appears correct. You need to use a different solver.
ipopt is typically used for non-linear problems and it does not support integer requirements (which includes binary assignment). Specifically, it only supports continuous variables.
The problem you have is completely linear, so you should be using a linear solver that supports MIP formulations. Your problem is a "Mixed Integer Program" because of the binary requirements. I'd suggest cbc or glpk, both of which are freeware.
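For example, assuming glpk (or cbc) is installed and on your path, the only change needed is the solver name; the rest of the model stays the same:
solver = pyomo.SolverFactory('glpk')   # or pyomo.SolverFactory('cbc')
status = solver.solve(m)
print("Status = %s" % status.solver.termination_condition)

for v in m.component_objects(pyomo.Var, active=True):
    # with a MIP solver, u_1 and u_2 should come back as exactly 0/1 and z as 9
    print("Variable component object", v, v.value)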

Python 1 of 2 stop conditions gets ignored in a loop

I am writing this simple function to use power iteration for the dominant eigenvalue. I want to put in two stop conditions: one for the number of iterations and one for a precision threshold. But the error calculation does not work.
What am I doing wrong here, in principle?
# power iteration, vanilla
import numpy as np

A = np.random.uniform(low=-5.0, high=10.0, size=[3, 3])

def power_iteration(A, maxiter, threshold):
    b0 = np.random.rand(A.shape[1])
    it = 0
    error = 0
    while True:
        for i in range(maxiter):
            b1 = np.dot(A, b0)
            b1norm = np.linalg.norm(b1)
            error = np.linalg.norm(b1 - b0)
            b0 = b1 / b1norm
            domeig = (b0 @ A @ b0) / np.dot(b0, b0)
        if error < threshold:
            break
        elif it > maxiter:
            break
        else:
            error = 0
        it = it + 1
    return b0, domeig, it, error
result = power_iteration(A, 10, 0.1)
result
The output shows a correct dominant eigenvalue of ~9 and the corresponding eigenvector (I checked with numpy).
But the error is off. There is no way the length of the difference vector is 8, considering the result is very close to the actual eigenvector.
How I want to calculate the error is as the norm of the difference between the current eigenvector and the previous one (b0). I start with error = 0 because the first iteration is guaranteed to give a big difference if b0 is chosen at random.
(array([ 0.06009408, 0.95411524, -0.2933476 ]),
9.001665234545708,
11,
8.001665234545815)
I tried to make the loop stop on two conditions, but one of them gets ignored.
It seems to work much better like this:
def power_it(matrix, iterations, threshold):
    domeigenvector = np.random.rand(matrix.shape[1])
    counter = np.random.rand(matrix.shape[1])
    it = 0
    error = 0
    for i in range(iterations):
        k1 = np.dot(matrix, domeigenvector)               # use the matrix argument, not the global A
        k1norm = np.linalg.norm(k1)
        domeigenvector = k1 / k1norm
        error = np.linalg.norm(domeigenvector - counter)  # difference between consecutive normalized vectors
        counter = domeigenvector
        domeigenvalue = (domeigenvector @ matrix @ domeigenvector) / np.dot(domeigenvector, domeigenvector)
        it = it + 1
        if error < threshold:
            break
    return domeigenvalue, domeigenvector, it
I can now use Schur deflation to calculate the rest of the eigenpairs.
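As a quick sanity check (my own addition, not part of the original answer), you can compare the result against numpy's dense eigensolver; note that for a random non-symmetric matrix the dominant eigenvalue can in principle be complex, in which case plain power iteration will not converge:
import numpy as np

A = np.random.uniform(low=-5.0, high=10.0, size=[3, 3])
domeigenvalue, domeigenvector, it = power_it(A, iterations=100, threshold=1e-8)

# reference: the eigenvalue of largest magnitude according to numpy
ref = max(np.linalg.eigvals(A), key=abs)
print(domeigenvalue)
print(ref)
print(it)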

How to understand this Pyomo Constraint (rewriting the constraint)

I am new to Pyomo.
I wrote this constraint in Pyomo, as shown below:
model.amount_of_energy_con = pe.ConstraintList()
for t in model.time:
    for b in model.boats:
        for s in model.chargers:
            lhs = model.charge_energy[b, t, s]
            rhs = model.c_rating[s] * model.boat_battery_capacity * model.boats_availability[b][t] * model.charging[b, t, s]
            model.amount_of_energy_con.add(expr=(lhs <= rhs))
For the constraint above, I expected the model object to contain something like this:
Key : Lower : Body : Upper : Active
1 : -Inf : charge_energy[1,0,SC] : 15.75*charging[1,0,SC] : True
2 : -Inf : charge_energy[1,0,FC] : 126*charging[1,0,FC] : True
But this is what I actually get from model.amount_of_energy_con.pprint():
Key : Lower : Body : Upper : Active
1 : -Inf : charge_energy[1,0,SC] - 15.75*charging[1,0,SC] : 0.0 : True
2 : -Inf : charge_energy[1,0,FC] - 126*charging[1,0,FC] : 0.0 : True
Note: a lower bound of 0 was already added when setting up the model.charge_energy variable with model.charge_energy = pe.Var(model.boats, model.time, model.chargers, bounds=(0, None)), and I still don't understand why my Lower is -Inf.
What am I doing wrong?
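For what it's worth, a minimal sketch (toy names and numbers, not the original data) that reproduces the same display: when both sides of the inequality contain variables, Pyomo moves everything into the Body and stores the constraint as Body <= 0.0, so the displayed Lower is -Inf and the variable's own bounds=(0, None) do not show up in the constraint row.
import pyomo.environ as pe

model = pe.ConcreteModel()
model.charge_energy = pe.Var(bounds=(0, None))
model.charging = pe.Var(domain=pe.Binary)

model.amount_of_energy_con = pe.ConstraintList()
model.amount_of_energy_con.add(expr=(model.charge_energy <= 15.75 * model.charging))

model.amount_of_energy_con.pprint()
# Key : Lower : Body                           : Upper : Active
#   1 :  -Inf : charge_energy - 15.75*charging :   0.0 :   True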

(Python) Markov, Chebyshev, Chernoff upper bound functions

I'm stuck with one task on my learning path.
For the binomial distribution X ∼ B(n, p) with mean μ = np and variance σ² = np(1−p), we would like to upper bound the probability P(X ≥ c⋅μ) for c ≥ 1.
Three bounds are introduced:
Markov: P(X ≥ a⋅μ) ≤ 1/a
Chebyshev: P(X ≥ μ + k⋅σ) ≤ 1/k²
Chernoff: P(X ≥ (1+d)⋅μ) ≤ exp(−d²⋅μ/(2+d))
The task is to write three functions, one for each inequality. They must take n, p, and c as inputs and return the upper bounds for P(X ≥ c⋅np) given by the Markov, Chebyshev, and Chernoff inequalities above as outputs.
And there is an example of IO:
Code:
print Markov(100.,0.2,1.5)
print Chebyshev(100.,0.2,1.5)
print Chernoff(100.,0.2,1.5)
Output
0.6666666666666666
0.16
0.1353352832366127
I'm completely stuck. I just can't figure out how to plug in all that math into functions (or how to think algorithmically here). If someone could help me out, that would be of great help!
p.s. no libraries are allowed by the task conditions except math.exp
Ok, let's look at what's given:
Input and derived values:
n = 100
p = 0.2
c = 1.5
m = n*p = 100 * 0.2 = 20
s2 = n*p*(1-p) = 16
s = sqrt(s2) = sqrt(16) = 4
You have multiple inequalities of the form P(X>=a*m) and you need to provide bounds for the term P(X>=c*m), so you need to think how a relates to c in all cases.
Markov inequality: P(X>=a*m) <= 1/a
You're asked to implement Markov(n,p,c) that will return the upper bound for P(X>=c*m). Since from
P(X>=a*m)
= P(X>=c*m)
it's clear that a == c, you get 1/a = 1/c. Well, that's just
def Markov(n, p, c):
    return 1.0/c
>>> Markov(100,0.2,1.5)
0.6666666666666666
That was easy, wasn't it?
Chernoff inequality states that P(X>=(1+d)*m) <= exp(-d**2/(2+d)*m)
First, let's verify that if
P(X>=(1+d)*m)
= P(X>=c *m)
then
1+d = c
d = c-1
This gives us everything we need to calculate the upper bound:
import math

def Chernoff(n, p, c):
    d = c - 1
    m = n*p
    return math.exp(-d**2/(2+d)*m)
>>> Chernoff(100,0.2,1.5)
0.1353352832366127
Chebyshev inequality bounds P(X>=m+k*s) by 1/k**2
So again, if
P(X>=c*m)
= P(X>=m+k*s)
then
c*m = m+k*s
m*(c-1) = k*s
k = m*(c-1)/s
Then it's straightforward to implement:
def Chebyshev(n, p, c):
    m = n*p
    s = math.sqrt(n*p*(1-p))
    k = m*(c-1)/s
    return 1/k**2
>>> Chebyshev(100,0.2,1.5)
0.16
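Putting it together (with import math at the top and Python 3 print syntax), the three functions reproduce the expected example output:
print(Markov(100., 0.2, 1.5))     # 0.6666666666666666
print(Chebyshev(100., 0.2, 1.5))  # 0.16
print(Chernoff(100., 0.2, 1.5))   # 0.1353352832366127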

To restrict parameter values strictly within bounds

I am trying to optimize a function using the L-BFGS-B bound-constrained optimization routine in scipy.
But the optimization routine passes values to the function which are not within the bounds.
My full code looks like this:
def humpy(aParams):
    aParams = numpy.asarray(aParams)
    print aParams
    ####
    # connect to some other software for simulation
    # data[1] & data[2] are read
    ##### objective function
    val = sum(0.5*(data[1] - data[2])**2)
    print val
    return val
####
def approx_fprime():
    ####

Initial = numpy.asarray([10.0, 15.0, 50.0, 10.0])
interval = [(5.0, 60000.0), (10.0, 50000.0), (26.0, 100000.0), (8.0, 50000.0)]

opt = optimize.fmin_l_bfgs_b(humpy, Initial, fprime=approx_fprime, bounds=interval,
                             pgtol=1.0000000000001e-05, iprint=1, maxfun=50000)

print 'optimized parameters', opt[0]
print 'Optimized function value', opt[1]
####### the end ####
Based on the initial values (Initial) and bounds (interval), optimize.fmin_l_bfgs_b() will pass values to my software for simulation, but the values passed should be within the bounds. That's not the case; see below the values passed at various iterations:
iter 1 = [ 10.23534209 15.1717302 50.5117245 10.28731118]
iter 2 = [ 10.23534209 15.1717302 50.01160842 10.39018429]
[ 11.17671043 15.85865102 50.05804208 11.43655591]
[ 11.17671043 15.85865102 50.05804208 11.43655591]
[ 11.28847754 15.85865102 50.05804208 11.43655591]
[ 11.17671043 16.01723753 50.05804208 11.43655591]
[ 11.17671043 15.85865102 50.5586225 11.43655591]
...............
...............
...............
[ 49.84670071 -4.4139714 62.2536381 23.3155698847]
At this iteration -4.4139714 is passed as my 2nd parameter, but it should stay within (10.0, 50000.0); where -4.4139714 comes from I don't know.
Where should I change the code so that it only passes values that are within the bounds?
You are trying to do bitwise exclusive or (the ^ operator) on floats, which makes no sense, so I don't think your code is actually the code you have problems with. However, I changed the ^ to ** assuming that was what you meant, and had no problems. The code worked fine for me with that change. The parameters are restricted exactly as defined.
Python 2.5.
Are you asking about doing something like this?
def humpy(aParams):
    aParams = numpy.asarray(aParams)
    x = aParams[0]
    y = aParams[1]
    z = aParams[2]
    u = aParams[3]
    v = aParams[4]
    assert 2 <= x <= 50000
    assert 1 <= y <= 35000
    assert 1 <= z <= 45000
    assert 2 <= u <= 50000
    assert 2 <= v <= 60000
    val = (100.0*((y - x**2.0)**2.0 + (z - y**2.0)**2.0 + (u - z**2.0)**2.0 + (v - u**2.0)**2.0)
           + (1 - x)**2.0 + (1 - y)**2.0 + (1 - z)**2.0 + (1 - u)**2.0)
    return val
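For reference, a minimal self-contained sketch (my own, with a simple smooth objective standing in for the external simulation and approx_grad=True instead of a hand-written fprime) showing fmin_l_bfgs_b returning a solution inside the given bounds:
import numpy
from scipy import optimize

def toy_objective(aParams):
    # stand-in for the external simulation: a smooth Rosenbrock-like function
    x, y, z, u = aParams
    return 100.0*((y - x**2)**2 + (z - y**2)**2 + (u - z**2)**2) + (1 - x)**2

Initial = numpy.asarray([10.0, 15.0, 50.0, 10.0])
interval = [(5.0, 60000.0), (10.0, 50000.0), (26.0, 100000.0), (8.0, 50000.0)]

xopt, fval, info = optimize.fmin_l_bfgs_b(toy_objective, Initial,
                                          approx_grad=True,  # let SciPy estimate the gradient
                                          bounds=interval,
                                          pgtol=1e-05, maxfun=50000)
print(xopt)   # the returned point respects each (lower, upper) pair
print(fval)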
