SCIP - Integral separation - python

I developed a SCIP MIP model that repeatedly solves the LP relaxation and branches on 0-1 variables. However, it is quite inefficient, because I have not figured out how to use the relevant SCIP callbacks.
Here is my code:
from operator import itemgetter

isMIP = False
while True:
    model.optimize()
    if isMIP:
        print("Optimal value:", model.getObjVal())
        break
    else:
        print("Intermediate value:", model.getObjVal())
        x, y, u = model.data
        fracvars = []
        for j in y:
            w = model.getVal(y[j])
            if 0.001 < w < 0.999:
                fracvars.append((j, abs(w - 0.5)))
        if fracvars:
            # pick the most fractional variable: smallest distance from 0.5
            min_var, min_value = min(fracvars, key=itemgetter(1))
            model.freeTransform()
            model.chgVarType(y[min_var], "I")  # the very inefficient part...
            print("Integer constraint on y[%s]" % min_var)
        else:
            isMIP = True
Could anyone help me speed up the code? Many thanks.
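As a side note, the selection step (picking the variable whose LP value is closest to 0.5) can be exercised on its own with a single min and a key function; no full sort is needed. A minimal sketch with made-up data:

```python
from operator import itemgetter

# (index, |value - 0.5|) pairs for fractional variables, data invented for illustration
fracvars = [(3, 0.42), (7, 0.05), (9, 0.31)]

# the most fractional variable is the one closest to 0.5,
# i.e. the pair with the smallest recorded distance
min_var, min_value = min(fracvars, key=itemgetter(1))
print(min_var)  # 7
```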

Please see http://scip.zib.de/doc-5.0.1/html/BRANCH.php for how to write a branching rule and http://scip.zib.de/doc-5.0.1/html/SEPA.php for cutting plane separators (I am still not sure what you want to do exactly...). This is the description for C plugins, but the equivalents should exist in PySCIPOpt or should be easy to add if you know what you need.

Related

Improving an algorithm for peak to peak measurements

I've implemented an algorithm to detect the negative part of a given peak. The main problem is that it is easily affected by outliers. Any recommendations on how to improve it?
def stepdown(self):
    peak_location_y = self.spec[self.peak_idx]
    peak_indices = self.peak_idx
    spec_y = self.spec
    neg_peak = []
    for peak_index, peak_y in zip(peak_indices, peak_location_y):
        i = 1
        tmp = [0]
        try:
            # walk right while the signal keeps decreasing and stays below the peak
            while (
                peak_y >= spec_y[peak_index + i]
                and spec_y[peak_index + i - 1] >= spec_y[peak_index + i]
            ):
                tmp.append(peak_index + i)
                i += 1
        except IndexError:
            print("Index Error")
        # append outside the try block so peaks at the edge still get an entry
        neg_peak.append(tmp[-1])
    return neg_peak
I know the quality of the code is horrible. I'm just prototyping.
Here are two examples, one where it works correctly and one where it fails.
The upper part of the figure is negative peaks detected by the algorithm, and the lower part is positive peaks.
It would be great if you provided the rest of the class, e.g. what is self.spec?
Also, you could provide some detail on how you intend the algorithm to work.
Since you have tagged scipy, I recommend checking out scipy.signal.find_peaks if you haven't already.
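As a quick illustration of that suggestion (on synthetic data, not the asker's spectrum), scipy.signal.find_peaks returns the indices of local maxima, and negating the signal finds the troughs:

```python
import numpy as np
from scipy.signal import find_peaks

spec = np.array([0.0, 1.0, 0.2, 2.0, 0.1, 3.0, 0.0])

pos_peaks, _ = find_peaks(spec)    # local maxima
neg_peaks, _ = find_peaks(-spec)   # local minima (the "negative" peaks)

print(pos_peaks)  # [1 3 5]
print(neg_peaks)  # [2 4]
```

find_peaks also takes height, distance, and prominence arguments, which are the usual way to make the detection robust to outliers.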

Is there a convenient way to express the continuous relaxation of a MIP using cvxpy?

I want to solve a (convex) mixed integer program as well as its continuous relaxation using cvxpy. Is there a way to use the same implementation of the objective and the constraints for both calculations?
As an example, take a look at the MIP example problem from the cvxpy website with some added constraint 'x[0]>=2':
import cvxpy as cp
import numpy as np

np.random.seed(0)
m, n = 40, 25
A = np.random.rand(m, n)
b = np.random.randn(m)

# Construct a CVXPY problem
x = cp.Variable(n, integer=True)  # x is an integer variable
obj = cp.sum_squares(A @ x - b)
objective = cp.Minimize(obj)
constraint = [x[0] >= 2]
prob = cp.Problem(objective, constraint)
prob.solve()
print("The optimal value is", prob.value)
print("A solution x is")
print(x.value)

x = cp.Variable(n)  # Now, x is no longer an integer variable but continuous
obj = cp.sum_squares(A @ x - b)  # I want to leave out this line (1)
constraint = [x[0] >= 2]  # I want to leave out this line (2)
objective = cp.Minimize(obj)
prob = cp.Problem(objective, constraint)
prob.solve()
print("The optimal value is", prob.value)
print("A solution x is")
print(x.value)
When leaving out line (2), the problem is solved without the constraint. When leaving out line (1), the mixed integer problem is solved (so, changing 'x' to a continuous variable did not have any effect).
I want to avoid reimplementing the objective function and constraints because a missed copy and paste may lead to weird, hard-to-find errors.
Thank you for your help!
Edit: Thank you, Sascha, for your reply. You are right, outsourcing the model building solves the problem. So
import cvxpy as cp
import numpy as np

class ModelBuilder:
    m, n = 40, 25
    A = np.random.rand(m, n)
    b = np.random.randn(m)

    def __init__(self, solve_continuous):
        np.random.seed(0)
        if solve_continuous:
            self.x = cp.Variable(self.n)
        else:
            self.x = cp.Variable(self.n, integer=True)

    @staticmethod
    def constraint_func(x):
        return [x[0] >= 2]

    def objective_func(self, x):
        return cp.sum_squares(self.A @ x - self.b)

    def build_problem(self):
        objective = cp.Minimize(self.objective_func(self.x))
        constraint = self.constraint_func(self.x)
        return cp.Problem(objective, constraint)

# Construct and solve the mixed-integer problem
build_cont_model = False
MIP_Model = ModelBuilder(build_cont_model)
MIP_problem = MIP_Model.build_problem()
MIP_problem.solve()
print("The optimal value is", MIP_problem.value)
print("A solution x is")
print(MIP_Model.x.value)

# Construct and solve the continuous problem
build_cont_model = True
Cont_Model = ModelBuilder(build_cont_model)
Cont_problem = Cont_Model.build_problem()
Cont_problem.solve()
print("The optimal value is", Cont_problem.value)
print("A solution x is")
print(Cont_Model.x.value)
works just as expected. Since I did not have this simple idea, it shows me that I do not yet understand the concept of applying a cvxpy.Variable to an expression.
In my first attempt, I defined the variable x and used it when defining obj. Then, I rebound the name x (one line before (1)). I thought that obj was linked to x by a pointer or something similar, so that it would change its behavior as well. Apparently, this is not the case.
Do you know any resources that could help me understand this behavior? Or is it obvious to anyone that is familiar with Python? Then, where could I learn about it?
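The behavior here is ordinary Python name binding rather than anything cvxpy-specific: an expression holds a reference to the Variable object it was built from, and reassigning the name x merely points that name at a new object without touching the old one. A minimal stdlib-only illustration:

```python
# Rebinding a name does not change objects built from its old value.
a = [1, 2, 3]
total = sum(a)      # total is computed from the list a refers to right now
a = [10, 20, 30]    # rebinds the name a; the earlier result is unaffected
print(total)        # still 6
```

cvxpy expressions behave the same way: cp.sum_squares(A @ x - b) stores a reference to the original Variable object, so rebinding the Python name x to a new Variable leaves the stored expression untouched.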

overflow in exp using python scipy.optimize.basinhopping

I am using scipy.optimize.basinhopping to fit a simple exponential function (a*exp(-b*time)) to real data. I try to give appropriate initial guesses (for a and b), but for some of the values basinhopping tries, an "overflow in exp" warning occurs. I know this is because the argument passed to exp is very large; the result is then completely wrong.
Is there any way to tell the code to skip the guesses that trigger this error, to prevent wrong results in the output?
+ time goes from 0 to something around 1e+06
Thanks for your care and help.
Here is my code. After running it, I get an overflow error for some values of bk, so the resulting value of ret is far, far from the correct answer. :(
def model(bk):
    s = 0
    realData = data()
    modelData = []
    modelData.append(realData[0])
    for time in range(len(realData) - 1):
        x = realData[0] * np.exp((bk[0] * np.exp(bk[1] * time)) * time)
        y = 1 - realData[0] + x
        i = x / y
        modelData.append(i)
        s += np.abs(i - realData[time])
    return s

def optimize():
    bk0 = [1, -1]
    minimizer_kwargs = {"method": "BFGS"}
    ret = basinhopping(model, bk0, minimizer_kwargs=minimizer_kwargs, niter=100)
    print(ret)

optimize()
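One common workaround (not from the original post; the name safe_exp and the bound are illustrative) is to clip the argument before calling np.exp. float64 overflows for arguments above roughly 709, so clipping keeps the objective finite and lets the optimizer move away from bad guesses instead of returning inf:

```python
import numpy as np

def safe_exp(z, max_arg=700.0):
    # np.exp overflows float64 for arguments above ~709.78;
    # clipping keeps the objective finite so the optimizer can recover
    return np.exp(np.clip(z, -max_arg, max_arg))

print(safe_exp(1000.0))  # large but finite, no overflow warning
print(safe_exp(0.0))     # 1.0, unchanged for ordinary arguments
```

An alternative is to bound b itself via the minimizer's bounds or basinhopping's accept_test so that extreme exponents are never proposed in the first place.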

Optimality cut in callback, cplex,python

I have an optimization model that can be decomposed: the master problem is a MIP and the subproblem is an LP. I was inspired by the IBM example (bendersatsp) to implement my Benders decomposition using a callback. The only difference is that I need to add optimality cuts rather than feasibility cuts, which I did based on earlier posts here. According to Benders' algorithm, before adding an optimality cut I need to check the stopping condition as well as optimality: if y is optimal and c'y > z, an optimality cut is added, until c'y = z.
My problem is that when I do not use the stopping condition, the algorithm finds the optimal solution for some instances but not for others. If I add the stopping condition, it cannot find the optimal solution at all. Could you please help me figure this out? I appreciate it.
Please find the relevant parts of my code (the callback and the separation function) below.
class BendersLazyConsCallback(LazyConstraintCallback):

    def __call__(self):
        v = self.v
        u = self.u
        z = self.z
        T = self.T
        workerLP = self.workerLP
        boxty = len(u)
        ite = len(z)
        sol1 = self.get_values(u)
        sol3 = []
        for i in range(1, ite + 1):
            sol3.append(self.get_values(v[i - 1]))
        sol4 = self.get_values(T)
        # Benders' cut separation
        if workerLP.separate(sol3, sol1, sol4, v, u, T):
            self.add(constraint=workerLP.cutLhs, sense="L", rhs=workerLP.cutRhs)

def separate(self, vSol, uSol, TSol, v, u, T):
    …………
    …………
    cpx.solve()
    print("v:", v)  # v is a continuous decision variable in the master problem
    print("u:", u)  # u is an integer decision variable in the master problem
    print("T:", T)  # T is the dummy variable added to the master problem (the z of the model in the link)
    print("w:", w)  # w is the c'y of the model in the link
    violatedCutFound = False
    if cpx.solution.get_status() == cpx.solution.status.optimal and w > T:
        print("optimum")
        cutVarsList = []
        cutCoefsList = []
        for i in items:
            for k in boxtypes:
                cutVarsList.append(v[i - 1][k - 1])
        for k in boxtypes:
            cutVarsList.append(u[k - 1])
        cutVarsList.extend(T)
        cutCoefsList = d
        cutCoefsList.append(-1)
        cutLhs = cplex.SparsePair(ind=cutVarsList, val=cutCoefsList)
        self.cutLhs = cutLhs
        self.cutRhs = cutRhs
        violatedCutFound = True
        print("violatedCutFound")
    return violatedCutFound
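One thing worth checking (this is an assumption on my part, not something the post confirms): comparing c'y > z exactly on floating-point values can make the stopping condition misbehave, either looping on numerically identical cuts or stopping one cut too early. The usual remedy is a tolerance-based check; a sketch with hypothetical names:

```python
def violates_optimality(w, z, tol=1e-6):
    # add an optimality cut only when the subproblem value w exceeds the
    # master's estimate z by more than a relative tolerance; an exact
    # float comparison can loop forever or terminate prematurely
    return w > z + tol * max(1.0, abs(z))

print(violates_optimality(10.1, 10.0))        # real violation: add a cut
print(violates_optimality(10.0000001, 10.0))  # numerical noise: stop
```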

Low Autocorrelation Binary Sequence problem? Python troubleshooting

I'm trying to model this problem (for details on it, http://www.mpi-hd.mpg.de/personalhomes/bauke/LABS/index.php)
I've seen that the proven minimum for a sequence of 10 digits is 13. However, my application seems to be getting 12 quite frequently. This implies some kind of error in my program. Is there an obvious error in the way I've modeled those summations in this code?
def evaluate(self):
    self.fitness = 10000000000  # horrible practice, I know..
    h = 0
    for g in range(1, len(self.chromosome) - 1):
        c = self.evaluateHelper(g)
        h += c**2
    self.fitness = h

def evaluateHelper(self, g):
    """
    Helper for evaluate function. The c sub g function.
    """
    totalSum = 0
    for i in range(len(self.chromosome) - g - 1):
        product = self.chromosome[i] * self.chromosome[(i + g) % (len(self.chromosome))]
        totalSum += product
    return totalSum
I can't spot any obvious bug offhand, but you're making things really complicated, so maybe a bug's lurking and hiding somewhere. What about
def evaluateHelper(self, g):
    return sum(a * b for a, b in zip(self.chromosome, self.chromosome[g:]))
This should return the same values you're computing in that subtle loop (where I think the % len... part is provably redundant). Similarly, the evaluate method seems ripe for a similar one-liner. But, anyway...
There's a potential off-by-one issue: the formulas in the article you point to are summing for g from 1 to N-1 included -- you're using range(1, len(...)-1), whereby N-1 is excluded. Could that be the root of the problem you observe?
Your bug was here:
for i in range(len(self.chromosome) - g - 1):
The maximum value for i will be len(self.chromosome) - g - 2, because range is exclusive. Thus, you don't consider the last pair. It's basically the same as your other bug, just in a different place.
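Putting both off-by-one fixes together (the upper bounds of the ranges in evaluate and in evaluateHelper), a corrected standalone version might look like this; labs_energy is a made-up name for illustration:

```python
def labs_energy(chromosome):
    """LABS energy E(S) = sum over g = 1..N-1 of C_g(S)^2, where
    C_g(S) = sum over i = 0..N-g-1 of s_i * s_{i+g} (full ranges)."""
    n = len(chromosome)
    energy = 0
    for g in range(1, n):  # g runs up to N-1 inclusive
        c_g = sum(chromosome[i] * chromosome[i + g] for i in range(n - g))
        energy += c_g ** 2
    return energy

print(labs_energy([1, 1, -1]))  # 1
```

For [1, 1, -1]: C_1 = 1*1 + 1*(-1) = 0 and C_2 = 1*(-1) = -1, so E = 0 + 1 = 1. With the truncated ranges of the original code, some of these products would simply be skipped, which explains energies below the proven minimum.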
