Two Dimensional Bin Packing - python

I am using the following integer programming model for solving the two-dimensional bin packing problem. The model below illustrates the one-dimensional version; the code I have written incorporates the constraints for the additional dimension.
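For reference, the standard one-dimensional formulation (items i = 1..n with weight w_i, bins j = 1..m with capacity c_j, y_j = 1 if bin j is used, x_ij = 1 if item i is placed in bin j) is:

\min \sum_{j=1}^{m} y_j
\text{s.t.} \quad \sum_{j=1}^{m} x_{ij} = 1, \qquad i = 1,\dots,n
\qquad \sum_{i=1}^{n} w_i \, x_{ij} \le c_j \, y_j, \qquad j = 1,\dots,m
\qquad x_{ij},\, y_j \in \{0,1\}

The code below simply adds a second capacity constraint per bin for the height dimension.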
I am using Python PuLP to solve the optimization problem. The code is as follows:
from pulp import *

#knapsack problem
def knapsolve(item):
    prob = LpProblem('BinPacking', LpMinimize)
    ys = [LpVariable("y{0}".format(i+1), cat="Binary") for i in range(item.bins)]
    xs = [LpVariable("x{0}{1}".format(i+1, j+1), cat="Binary")
          for i in range(item.items) for j in range(item.bins)]

    #minimize objective
    nbins = sum(ys)
    prob += nbins
    print(nbins)

    #constraints
    t = nbins >= 1
    print(t)
    prob += t

    for i in range(item.items):
        con1 = sum(xs[(i + j*item.bins)] for j in range(item.bins))
        t = con1 == 1
        prob += t
        print(t)

    for k in range(item.bins):
        x = xs[k*item.bins : (k+1)*item.bins]
        con1 = sum([x1*w for x1, w in zip(x, item.itemweight)])
        t = con1 <= item.binweight[k] * ys[k]
        #t = con1 <= item.binweight[k]
        prob += t
        print(t)

    for k in range(item.bins):
        x = xs[k*item.bins : (k+1)*item.bins]
        con1 = sum([x1*w for x1, w in zip(x, item.itemheight)])
        t = con1 <= item.binheight[k] * ys[k]
        #t = con1 <= item.binheight[k]
        prob += t
        print(t)

    status = prob.solve()
    print(LpStatus[status])
    print("Objective value:", value(prob.objective))
    print('\nThe values of the variables : \n')
    for v in prob.variables():
        print(v.name, "=", v.varValue)
    return

class Item:
    #bins, binweight, items, itemweight, itemheight, binheight
    bins = 5
    items = 5
    binweight = [2,3,2,5,3]
    itemweight = [1,2,2,1,3]
    itemheight = [2,1,4,5,3]
    binheight = [4,9,10,8,10]

item = Item()
knapsolve(item)
It produces the following output:
y1 + y2 + y3 + y4 + y5
y1 + y2 + y3 + y4 + y5 >= 1
x11 + x21 + x31 + x41 + x51 = 1
x12 + x22 + x32 + x42 + x52 = 1
x13 + x23 + x33 + x43 + x53 = 1
x14 + x24 + x34 + x44 + x54 = 1
x15 + x25 + x35 + x45 + x55 = 1
x11 + 2*x12 + 2*x13 + x14 + 3*x15 - 2*y1 <= 0
x21 + 2*x22 + 2*x23 + x24 + 3*x25 - 3*y2 <= 0
x31 + 2*x32 + 2*x33 + x34 + 3*x35 - 2*y3 <= 0
x41 + 2*x42 + 2*x43 + x44 + 3*x45 - 5*y4 <= 0
x51 + 2*x52 + 2*x53 + x54 + 3*x55 - 3*y5 <= 0
2*x11 + x12 + 4*x13 + 5*x14 + 3*x15 - 4*y1 <= 0
2*x21 + x22 + 4*x23 + 5*x24 + 3*x25 - 9*y2 <= 0
2*x31 + x32 + 4*x33 + 5*x34 + 3*x35 - 10*y3 <= 0
2*x41 + x42 + 4*x43 + 5*x44 + 3*x45 - 8*y4 <= 0
2*x51 + x52 + 4*x53 + 5*x54 + 3*x55 - 10*y5 <= 0
Optimal
Objective value: 3.0
The values of the variables :
x11 = 0.0
x12 = 0.0
x13 = 0.0
x14 = 0.0
x15 = 0.0
x21 = 0.0
x22 = 0.0
x23 = 1.0
x24 = 0.0
x25 = 0.0
x31 = 0.0
x32 = 0.0
x33 = 0.0
x34 = 0.0
x35 = 0.0
x41 = 0.0
x42 = 1.0
x43 = 0.0
x44 = 0.0
x45 = 1.0
x51 = 1.0
x52 = 0.0
x53 = 0.0
x54 = 1.0
x55 = 0.0
y1 = 0.0
y2 = 1.0
y3 = 0.0
y4 = 1.0
y5 = 1.0
The sample input data that has been hard-coded should produce 1 bin as the output, that is, only one y variable should have the value 1. However, this is not the case. Are the equations modeled properly? Is there another way to specify the constraints?

The mathematical model for the standard bin-packing problem uses x(bins,items), while the Python model seems to use a mix of x(bins,items) and x(items,bins). The assignment to xs uses x(items,bins), but the construct xs[(i + j*item.bins)] implies x(bins,items). This is easily seen by inspecting the output: x11 + x21 + x31 + x41 + x51 = 1 indicates x(bins,items). This type of modeling with explicit index calculations is rather unreliable in practice. This is a toy model, but for real models the lack of type checking can be very dangerous.
Different bin weights and heights should be no problem.
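One way to avoid the index arithmetic altogether is to key the variables by explicit (item, bin) pairs. Here is a minimal sketch of the same model using LpVariable.dicts (it assumes the same Item fields as in the question and is not a drop-in replacement for the posted code):

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

def knapsolve(item):
    prob = LpProblem('BinPacking', LpMinimize)
    # y[j] = 1 if bin j is used; x[i, j] = 1 if item i goes into bin j
    y = LpVariable.dicts('y', range(item.bins), cat='Binary')
    x = LpVariable.dicts('x', [(i, j) for i in range(item.items)
                               for j in range(item.bins)], cat='Binary')
    # minimize the number of bins used
    prob += lpSum(y[j] for j in range(item.bins))
    # each item goes into exactly one bin
    for i in range(item.items):
        prob += lpSum(x[i, j] for j in range(item.bins)) == 1
    # capacity constraints, one per dimension, only active if the bin is used
    for j in range(item.bins):
        prob += lpSum(item.itemweight[i] * x[i, j]
                      for i in range(item.items)) <= item.binweight[j] * y[j]
        prob += lpSum(item.itemheight[i] * x[i, j]
                      for i in range(item.items)) <= item.binheight[j] * y[j]
    prob.solve()
    return prob

With tuple keys there is no way to confuse the item index with the bin index, which removes the whole class of bugs discussed above.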
Also given your data
binweight = [2,3,2,5,3]
itemweight = [1,2,2,1,3]
itemheight = [2,1,4,5,3]
binheight = [4,9,10,8,10]
I don't believe this can be handled by just 1 bin as you claim; you need 3 bins for this (bins 2, 4 and 5). (You are lucky here: although there are bugs in the Python code, you still get a good solution.)

find centerline of the longest side of a rectangle python

I am trying to put together a simple piece of code without any Python libraries.
The aim is to find the centerline of a rectangle (specifically, for the longest side).
This is my code so far (it is looking for the center coordinates).
I want to be able to find the centerline no matter the orientation of the rectangle. How do you recommend I do it? I'm open to any advice.
import matplotlib.pyplot as plt

x = [4,4,-4,-4]
y = [8,-5,-5,8]

x1,y1 = x[0],y[0]
x2,y2 = x[1],y[1]
x3,y3 = x[2],y[2]
x4,y4 = x[3],y[3]

ch_y1 = y1-y2
ch_y2 = y1-y4
ch_x1 = x1-x2
ch_x2 = x1-x4

l1 = ((x1-x2)**2+(y1-y2)**2)**0.5
l4 = ((x1-x4)**2+(y1-y4)**2)**0.5
l2 = ((x2-x3)**2+(y2-y3)**2)**0.5
l3 = ((x3-x4)**2+(y3-y4)**2)**0.5

if l2 > l1:
    if abs(1-l1/l3) < 0.3:
        if x1 > x2:
            x11 = x1 - l2/2
        if x1 < x2:
            x11 = x1 + l2/2
        if x1 == x2:
            x11 = x1
        if y1 > y2:
            y11 = y1 - l2/2
        if y1 < y2:
            y11 = y1 + l2/2
        if y1 == y2:
            y11 = y1
        if x2 > x3:
            x12 = x2 - l3/2
        if x2 < x3:
            x12 = x2 + l3/2
        if x2 == x3:
            x12 = x2
        if y2 > y3:
            y12 = y2 - l3/2
        if y2 < y3:
            y12 = y2 + l3/2
        if y2 == y3:
            y12 = y2

if l1 > l2:
    x1 = x1
    x2 = x3
    x3 = x4
    y1 = y1
    y2 = y3
    y3 = y4
    if abs(1-l2/l4) < 0.3:
        if x1 > x2:
            x11 = x1 - l2/2
        if x1 < x2:
            x11 = x1 + l2/2
        if x1 == x2:
            x11 = x1
        if y1 > y2:
            y11 = y1 - l2/2
        if y1 < y2:
            y11 = y1 + l2/2
        if y1 == y2:
            y11 = y1
        if x2 > x3:
            x12 = x2 - l4/2
        if x2 < x3:
            x12 = x2 + l4/2
        if x2 == x3:
            x12 = x2
        if y2 > y3:
            y12 = y2 - l4/2
        if y2 < y3:
            y12 = y2 + l4/2
        if y2 == y3:
            y12 = y2

plt.plot(x,y)
plt.plot([x11,x12],[y11,y12])
That is way too complicated. You can just draw the lines going through the center of the segments like so (note that I added a copy of x[0] and y[0] as x[4] and y[4] for simplicity):
import matplotlib.pyplot as plt

x = [4,4,-4,-4]
y = [8,-5,-5,8]
x = x + [x[0]]
y = y + [y[0]]

plt.plot(x,y)
for i in range(2):
    plt.plot([(x[i]+x[i+1])/2, (x[i+2]+x[i+3])/2],
             [(y[i]+y[i+1])/2, (y[i+2]+y[i+3])/2])
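If only the midline of the longest side is wanted, one option is to compare two adjacent edge lengths and join the midpoints of the shorter pair of opposite edges. A small sketch of that idea (the function name and structure are mine; it assumes the four corners describe a rectangle given in order):

import math

def longest_side_centerline(x, y):
    # x, y hold the 4 corner coordinates in order; close the polygon first
    xs = x + [x[0]]
    ys = y + [y[0]]
    def length(i):
        return math.hypot(xs[i+1] - xs[i], ys[i+1] - ys[i])
    # the centerline along the longest sides joins the midpoints of the two short edges
    i = 1 if length(0) >= length(1) else 0
    p = ((xs[i] + xs[i+1]) / 2, (ys[i] + ys[i+1]) / 2)
    q = ((xs[i+2] + xs[i+3]) / 2, (ys[i+2] + ys[i+3]) / 2)
    return p, q

print(longest_side_centerline([4, 4, -4, -4], [8, -5, -5, 8]))

For the rectangle in the question this returns the segment from (0, -5) to (0, 8), i.e. the centerline running along the long sides, and it works for any orientation because only edge lengths and midpoints are used.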

Neural Network in Python using just numpy

I am trying to code two neural networks. The architecture of the first network consists of an input layer, one hidden layer, and an output layer. The input layer is R^2 so it accepts two inputs (x1, x2), the hidden layer has two neurons, and the output layer has a single neuron. All the neurons use the rectified linear unit (ReLU) activation function. The only difference between the first and second neural network is that the second has four neurons in the hidden layer. Otherwise they are identical.
I finished the code for the first network and was able to run it and plot results. I am mainly looking to get the neural network to learn how to separate two clusters in my data set. I generate 2000 points to form one cluster, and then another 2000 for the next cluster. The output of the neural network will ideally find a separating plane (really multiple planes) to separate the two clusters. I have set up my plot to trigger when the error from the testing phase is less than 0.05. I should also explain that I am trying to find the ideal learning rate and epoch count for training, so I have a few loops to iterate through different learning rates (alpha) and epochs.
My first network works fine, but when I add 2 neurons, for some reason my network error and parameters (weights and biases) get all wonky. I can't get the 4-neuron network to reach an error below 0.4. I think it has something to do with the error and weights. I have been running the network with print statements to see what's happening to the weights, and noticed they don't update well because the error during training gets stuck at 0 and so the weights never update, but I am not 100% sure that this always happens.
If anyone has clues as to why my weights and error are not updating properly, I would greatly appreciate it. If you run the code you will see, when you plot the two clusters, that the output of the neural network does not create a colored separation between the clusters. The code for the working two-neuron architecture is the same; just remove the additional 2 neurons from the code.
Here is the code for the network:
import numpy as np
import random
import gc
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter

nData = 2000   #2000 points used on each cluster for 4000 points total
nTrain = 1000  #Used for training loop and to create clusters
nEpoch = 1     #Initial epoch value
nTest = 2000   #Used for testing loop
#alpha = 0.001

#Initializing 2D array for x which will carry the x1 and x2 values
#Also creating the radius and theta values for the cluster data
std = 0.5
x = np.zeros((2*nData,2))
t = np.zeros((2*nData))
r = np.random.normal(0,std,2*nData)
theta = 2*np.pi*np.random.rand(2*nData)

#w11f and w12f are used to plot the value of weights w11 and w12 as they update
w11f = np.zeros(nEpoch*nTrain)
w12f = np.zeros(nEpoch*nTrain)

#Creating cluster 1 and target data
h = -6 + 12*np.random.rand(nData)
v = 5 + (h**2)/6
x[0:nData,0] = h + r[0:nData]*np.cos(theta[0:nData])
x[0:nData,1] = v + r[0:nData]*np.sin(theta[0:nData])
t[0:nData] = 0

#Creating cluster 2 and target data
h = -5 + 10*np.random.rand(nData)
v = 10 + (h**2)/4
x[nData:2*nData,0] = h + r[nData:2*nData]*np.cos(theta[nData:2*nData])
x[nData:2*nData,1] = v + r[nData:2*nData]*np.sin(theta[nData:2*nData])
t[nData:2*nData] = 1

#Normalization
x[:,0] = 1 + 0.1*x[:,0]
x[:,1] = 1 + 0.1*x[:,1]

#Parameter Initialization
w11 = 0.5 - np.random.rand()
w12 = 0.5 - np.random.rand()
w21 = 0.5 - np.random.rand()
w22 = 0.5 - np.random.rand()
w31 = 0.5 - np.random.rand()
w32 = 0.5 - np.random.rand()
w41 = 0.5 - np.random.rand()
w42 = 0.5 - np.random.rand()
b4 = 0.5 - np.random.rand()
b3 = 0.5 - np.random.rand()
b2 = 0.5 - np.random.rand()
b1 = 0.5 - np.random.rand()
ww1 = 0.5 - np.random.rand()
ww2 = 0.5 - np.random.rand()
ww3 = 0.5 - np.random.rand()
ww4 = 0.5 - np.random.rand()
bb = 0.5 - np.random.rand()

#Creating a list from 0 to 3999
a = range(0,2*nData)

#Creating a 3D array (tensor) to store all the error values at the end of each 50-iteration loop
er_List = np.zeros((14,50,6))

#Creating the final array to store the counter of successful errors (errors under 0.05 in value).
#The rows represent the alpha values from 0.001 to 0.05 and the columns represent each epoch from 1 to 6.
#This way you can view the 2D array and see which alpha and epoch give the most successes for the lowest error.
nSuccess_Array = np.zeros((14,6))

#Part B - Creating nested loops to train for multiple alpha and epoch value pairs
#Training
for l in range(0,14): #loop for alpha values
    alpha = [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009, 0.01, 0.02, 0.03, 0.04, 0.05]
    nEpoch = 1
    for n in range(0,6): #loop for incrementing epoch values
        nSuccess = 0
        #Initialize these again so the size updates as the epoch changes
        w11f = np.zeros(nEpoch*nTrain)
        w12f = np.zeros(nEpoch*nTrain)
        for j in range(0,50):
            #Initialize the parameters again so they are random every 50 iterations (for each new epoch value)
            w11 = 0.5 - np.random.rand()
            w12 = 0.5 - np.random.rand()
            w21 = 0.5 - np.random.rand()
            w22 = 0.5 - np.random.rand()
            w31 = 0.5 - np.random.rand()
            w32 = 0.5 - np.random.rand()
            w41 = 0.5 - np.random.rand()
            w42 = 0.5 - np.random.rand()
            b4 = 0.5 - np.random.rand()
            b3 = 0.5 - np.random.rand()
            b2 = 0.5 - np.random.rand()
            b1 = 0.5 - np.random.rand()
            ww1 = 0.5 - np.random.rand()
            ww2 = 0.5 - np.random.rand()
            ww3 = 0.5 - np.random.rand()
            ww4 = 0.5 - np.random.rand()
            bb = 0.5 - np.random.rand()

            sp = random.sample(a,nTrain + nTest)
            p = 0

            for epoch in range(0,nEpoch):
                for i in range(0,nTrain):
                    #Neuron dot product
                    y1 = b1 + w11*x[sp[i],0] + w12*x[sp[i],1]
                    y2 = b2 + w21*x[sp[i],0] + w22*x[sp[i],1]
                    y3 = b3 + w31*x[sp[i],0] + w32*x[sp[i],1]
                    y4 = b4 + w41*x[sp[i],0] + w42*x[sp[i],1]

                    #Neuron activation function ReLU
                    dxx1 = y1 > 0
                    xx1 = y1*dxx1
                    dxx2 = y2 > 0
                    xx2 = y2*dxx2
                    dxx3 = y3 > 0
                    xx3 = y3*dxx3
                    dxx4 = y4 > 0
                    xx4 = y4*dxx4

                    #Output of neural network before activation function
                    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
                    yy = yy > 0  #activation function

                    e = t[sp[i]] - yy  #error calculation

                    #Updating parameters
                    ww1 = ww1 + alpha[l]*e*xx1
                    ww2 = ww2 + alpha[l]*e*xx2
                    ww3 = ww3 + alpha[l]*e*xx3
                    ww4 = ww4 + alpha[l]*e*xx4
                    bb = bb + alpha[l]*e
                    w11 = w11 + alpha[l]*e*ww1*dxx1*x[sp[i],0]
                    w12 = w12 + alpha[l]*e*ww1*dxx1*x[sp[i],1]
                    w21 = w21 + alpha[l]*e*ww2*dxx2*x[sp[i],0]
                    w22 = w22 + alpha[l]*e*ww2*dxx2*x[sp[i],1]
                    w31 = w31 + alpha[l]*e*ww3*dxx3*x[sp[i],0]
                    w32 = w32 + alpha[l]*e*ww3*dxx3*x[sp[i],1]
                    w41 = w41 + alpha[l]*e*ww4*dxx4*x[sp[i],0]
                    w42 = w42 + alpha[l]*e*ww4*dxx4*x[sp[i],1]
                    b1 = b1 + alpha[l]*e*ww1*dxx1
                    b2 = b2 + alpha[l]*e*ww2*dxx2
                    b3 = b3 + alpha[l]*e*ww3*dxx3
                    b4 = b4 + alpha[l]*e*ww4*dxx4

                    w11f[p] = w11
                    w12f[p] = w12
                    p = p + 1

            er = 0
            #Testing
            for k in range(nTrain,nTrain + nTest):
                y1 = b1 + w11*x[sp[i],0] + w12*x[sp[i],1]
                y2 = b2 + w21*x[sp[i],0] + w22*x[sp[i],1]
                y3 = b3 + w31*x[sp[i],0] + w32*x[sp[i],1]
                y4 = b4 + w41*x[sp[i],0] + w42*x[sp[i],1]
                dxx1 = y1 > 0
                xx1 = y1*dxx1
                dxx2 = y2 > 0
                xx2 = y2*dxx2
                dxx3 = y3 > 0
                xx3 = y3*dxx3
                dxx4 = y4 > 0
                xx4 = y4*dxx4
                yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
                yy = yy > 0
                e = abs(t[sp[k]] - yy)
                er = er + e  #Accumulates error
            er = er/nTest  #Calculates average error
            er_List[l,j,n] = er
            if er_List[l,j,n] < 0.05:
                nSuccess = nSuccess + 1

        #Part C - Creating an array that contains the success values of each
        #alpha and epoch value pair
        nSuccess_Array[l,n] = nSuccess  #Array that contains the success counts
        if nEpoch < 6:
            nEpoch = nEpoch + 1

print(er)

#Plotting
if er < 0.5:
    plt.figure(1)
    plt.scatter(x[0:nData,0],x[0:nData,1])
    plt.scatter(x[nData:2*nData,0],x[nData:2*nData,1])

    X = np.arange(0.25,1.75,0.02)
    Y = np.arange(1.25,2.75,0.02)
    X, Y = np.meshgrid(X,Y)
    y1 = b1 + w11*X + w12*Y
    y2 = b2 + w21*X + w22*Y
    y3 = b3 + w31*X + w32*Y
    y4 = b4 + w41*X + w42*Y
    dxx1 = y1 > 0
    xx1 = y1*dxx1
    dxx2 = y2 > 0
    xx2 = y2*dxx2
    dxx3 = y3 > 0
    xx3 = y3*dxx3
    dxx4 = y4 > 0
    xx4 = y4*dxx4
    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
    Z = yy > 0
    plt.scatter(X,Y,c=Z+1,alpha=0.3)

    plt.figure(2)
    f = np.arange(0,nEpoch*nTrain,1)
    plt.plot(f,w11f)

    plt.figure(3)
    plt.plot(f,w12f)

    plt.figure(4)
    ax = plt.axes(projection='3d')
    ax.scatter(x[0:nData,0],x[0:nData,1],0,s=30)
    ax.scatter(x[nData:2*nData,0],x[nData:2*nData,1],1,s=30)

    #Plotting the separating planes
    X = np.arange(0.25,1.75,0.02)
    Y = np.arange(1.25,2.75,0.02)
    X, Y = np.meshgrid(X,Y)
    y1 = b1 + w11*X + w12*Y
    y2 = b2 + w21*X + w22*Y
    y3 = b3 + w31*X + w32*Y
    y4 = b4 + w41*X + w42*Y
    dxx1 = y1 > 0
    xx1 = y1*dxx1
    dxx2 = y2 > 0
    xx2 = y2*dxx2
    dxx3 = y3 > 0
    xx3 = y3*dxx3
    dxx4 = y4 > 0
    xx4 = y4*dxx4
    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
    Z = yy > 0
    ax.plot_surface(X,Y,Z,rstride=1, cstride=1,cmap='viridis',alpha=0.5)

    plt.figure(5)
    ax = plt.axes(projection='3d')
    X = np.arange(0,5,0.02)
    Y = np.arange(0,5,0.02)
    X, Y = np.meshgrid(X,Y)
    y1 = b1 + w11*X + w12*Y
    y2 = b2 + w21*X + w22*Y
    y3 = b3 + w31*X + w32*Y
    y4 = b4 + w41*X + w42*Y
    dxx1 = y1 > 0
    xx1 = y1*dxx1
    dxx2 = y2 > 0
    xx2 = y2*dxx2
    dxx3 = y3 > 0
    xx3 = y3*dxx3
    dxx4 = y4 > 0
    xx4 = y4*dxx4
    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
    ax.plot_surface(X, Y, yy, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
Yes, you can do it using np.matmul (a @ b) and calculating the gradients manually. Check out the Fastai v3 course, part 2: https://course.fast.ai/videos/?lesson=8. Jeremy Howard manipulates PyTorch tensors there, but you can do the same in NumPy.
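For illustration, here is a minimal sketch of that idea. It is not the poster's exact architecture or update rule: a 2-4-1 network with ReLU hidden units, a sigmoid output, a squared-error loss, and hand-derived gradients applied with plain NumPy matrix multiplication. The data and hyperparameters are made up.

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy data: two 2-D Gaussian clusters labelled 0 and 1
n = 500
x = np.vstack([rng.normal([0, 0], 0.5, (n, 2)),
               rng.normal([2, 2], 0.5, (n, 2))])
t = np.hstack([np.zeros(n), np.ones(n)])

# parameters: hidden weights W1 (2x4) and biases b1 (4), output weights W2 (4x1) and bias b2
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = 0.0
lr = 0.1

for epoch in range(200):
    # forward pass
    z1 = x @ W1 + b1              # (N, 4)
    a1 = relu(z1)
    z2 = a1 @ W2 + b2             # (N, 1)
    y = sigmoid(z2).ravel()
    # backward pass (squared error), derived by hand
    dz2 = (y - t) * y * (1 - y)   # dL/dz2, shape (N,)
    dW2 = a1.T @ dz2[:, None] / len(t)
    db2 = dz2.mean()
    da1 = dz2[:, None] @ W2.T     # (N, 4)
    dz1 = da1 * (z1 > 0)          # ReLU derivative
    dW1 = x.T @ dz1 / len(t)
    db1 = dz1.mean(axis=0)
    # gradient step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean error:", np.abs((y > 0.5).astype(float) - t).mean())

Working with whole weight matrices like this also makes it trivial to go from 4 hidden neurons to n: only the shapes of W1, b1 and W2 change.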

How to store values found within loops?

I am trying to run multiple nested loops and then retrieve an array of values based on a condition within the loop. I have 14 alpha values I need to test, and each one needs to be tested for epochs of 1, 2, 3, 4, 5 and 6. As I test each alpha value for all 6 epochs, I want to record the number of times the error is less than 0.05. At the end I want a 2D array where the rows represent the 14 different alpha values and each column is a different epoch value.
I want to know if there is a better way of doing this than using a 3D array (tensor) with NumPy. Using this method gives me a lot of issues when I try to scale this project.
For people who are interested, this is just a neural network with 2 inputs, a single hidden layer, and one output neuron, which I am using to teach myself about back propagation. The code I submitted is for 2 neurons, but I am trying to scale this to 4 right now and eventually n neurons. Storing the error values and counting the successes in a 2D array, where I could see which alpha and epoch pair produces the best results, would be very helpful.
I have already completed this task before using this code:
import numpy as np
import random

# nTrain, nTest, x, t, a, er_List and nSuccess_Array are assumed to be defined
# as in the network setup shown in the previous question.

for l in range(0,14):
    alpha = [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009, 0.01, 0.02, 0.03, 0.04, 0.05]
    nEpoch = 1
    for n in range(0,6):
        nSuccess = 0
        w11f = np.zeros(nEpoch*nTrain)
        w12f = np.zeros(nEpoch*nTrain)
        for j in range(0,50):
            w11 = 0.5 - np.random.rand()
            w12 = 0.5 - np.random.rand()
            w21 = 0.5 - np.random.rand()
            w22 = 0.5 - np.random.rand()
            w31 = 0.5 - np.random.rand()
            w32 = 0.5 - np.random.rand()
            w41 = 0.5 - np.random.rand()
            w42 = 0.5 - np.random.rand()
            b4 = 0.5 - np.random.rand()
            b3 = 0.5 - np.random.rand()
            b2 = 0.5 - np.random.rand()
            b1 = 0.5 - np.random.rand()
            ww1 = 0.5 - np.random.rand()
            ww2 = 0.5 - np.random.rand()
            ww3 = 0.5 - np.random.rand()
            ww4 = 0.5 - np.random.rand()
            bb = 0.5 - np.random.rand()

            sp = random.sample(a,nTrain + nTest)
            p = 0

            for epoch in range(0,nEpoch):
                for i in range(0,nTrain):
                    y1 = b1 + w11*x[sp[i],0] + w12*x[sp[i],1]
                    y2 = b2 + w21*x[sp[i],0] + w22*x[sp[i],1]
                    y3 = b3 + w31*x[sp[i],0] + w32*x[sp[i],1]
                    y4 = b4 + w41*x[sp[i],0] + w42*x[sp[i],1]
                    dxx1 = y1 > 0
                    xx1 = y1*dxx1
                    dxx2 = y2 > 0
                    xx2 = y2*dxx2
                    dxx3 = y3 > 0
                    xx3 = y3*dxx3
                    dxx4 = y4 > 0
                    xx4 = y4*dxx4
                    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
                    yy = yy > 0
                    e = t[sp[i]] - yy

                    #Updating parameters
                    ww1 = ww1 + alpha[l]*e*xx1
                    ww2 = ww2 + alpha[l]*e*xx2
                    ww3 = ww3 + alpha[l]*e*xx3
                    ww4 = ww4 + alpha[l]*e*xx4
                    bb = bb + alpha[l]*e
                    w11 = w11 + alpha[l]*e*ww1*dxx1*x[sp[i],0]
                    w12 = w12 + alpha[l]*e*ww1*dxx1*x[sp[i],1]
                    w21 = w21 + alpha[l]*e*ww2*dxx2*x[sp[i],0]
                    w22 = w22 + alpha[l]*e*ww2*dxx2*x[sp[i],1]
                    w31 = w31 + alpha[l]*e*ww3*dxx3*x[sp[i],0]
                    w32 = w32 + alpha[l]*e*ww3*dxx3*x[sp[i],1]
                    w41 = w41 + alpha[l]*e*ww4*dxx4*x[sp[i],0]
                    w42 = w42 + alpha[l]*e*ww4*dxx4*x[sp[i],1]
                    b1 = b1 + alpha[l]*e*ww1*dxx1
                    b2 = b2 + alpha[l]*e*ww2*dxx2
                    b3 = b3 + alpha[l]*e*ww3*dxx3
                    b4 = b4 + alpha[l]*e*ww4*dxx4

                    w11f[p] = w11
                    w12f[p] = w12
                    p = p + 1

            er = 0
            for k in range(nTrain,nTrain + nTest):
                y1 = b1 + w11*x[sp[i],0] + w12*x[sp[i],1]
                y2 = b2 + w21*x[sp[i],0] + w22*x[sp[i],1]
                y3 = b3 + w31*x[sp[i],0] + w32*x[sp[i],1]
                y4 = b4 + w41*x[sp[i],0] + w42*x[sp[i],1]
                dxx1 = y1 > 0
                xx1 = y1*dxx1
                dxx2 = y2 > 0
                xx2 = y2*dxx2
                dxx3 = y3 > 0
                xx3 = y3*dxx3
                dxx4 = y4 > 0
                xx4 = y4*dxx4
                yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
                yy = yy > 0
                e = abs(t[sp[k]] - yy)
                er = er + e  #Accumulates error
            er = er/nTest  #Calculates average error
            er_List[l,j,n] = er
            if er_List[l,j,n] < 0.05:
                nSuccess = nSuccess + 1

        #Part C - Creating an array that contains the success values of each
        #alpha and epoch value pair
        nSuccess_Array[l,n] = nSuccess  #Array that contains the success counts
        if nEpoch < 6:
            nEpoch += 1
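For reference, one common alternative to pre-allocating a 14x50x6 error tensor is to collect one record per run and aggregate afterwards with pandas (which the question mentions is already in use). A minimal sketch, where train_and_test is a placeholder for the training and testing loops above:

import numpy as np
import pandas as pd

alphas = [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007,
          0.008, 0.009, 0.01, 0.02, 0.03, 0.04, 0.05]

def train_and_test(alpha, n_epoch):
    # placeholder: should return the average test error of one random restart
    return np.random.rand()

rows = []
for alpha in alphas:
    for n_epoch in range(1, 7):
        for run in range(50):
            er = train_and_test(alpha, n_epoch)
            rows.append({'alpha': alpha, 'epoch': n_epoch, 'run': run, 'error': er})

results = pd.DataFrame(rows)
# count of runs with error < 0.05, one row per alpha and one column per epoch
success = (results.assign(success=results['error'] < 0.05)
                  .pivot_table(index='alpha', columns='epoch',
                               values='success', aggfunc='sum'))
print(success)

Because every run is kept as a labelled row, adding more hyperparameters (for example the number of hidden neurons) only means adding another column, rather than another axis to a NumPy array.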

Python pulp constraint - Doubling the weight of any one variable which contributes the most

I am trying to use http://www.philipkalinda.com/ds9.html to set up a constrained optimisation.
prob = pulp.LpProblem('FantasyTeam', pulp.LpMaximize)
decision_variables = []
res = self.team_df

# Set up the LP
for rownum, row in res.iterrows():
    variable = str('x' + str(rownum))
    variable = pulp.LpVariable(str(variable), lowBound=0, upBound=1, cat='Integer')  #make variables binary
    decision_variables.append(variable)

print("Total number of decision_variables: " + str(len(decision_variables)))

total_points = ""
for rownum, row in res.iterrows():
    for i, player in enumerate(decision_variables):
        if rownum == i:
            formula = row['TotalPoint'] * player
            total_points += formula

prob += total_points
print("Optimization function: " + str(total_points))
The above, however, creates an optimisation where, if the points earned by x1 are X1, by x2 are X2, ..., and by xn are Xn, it maximises x1*X1 + x2*X2 + ... + xn*Xn. Here Xi is the points earned by the variable xi. However, in my case, I need to double the points for the variable that earns the most points. How do I set this up?
Maximize
OBJ: 38.1 x0 + 52.5 x1 + 31.3 x10 + 7.8 x11 + 42.7 x12 + 42.3 x13 + 4.7 x14
+ 49.5 x15 + 21.2 x16 + 11.8 x17 + 1.4 x18 + 3.2 x2 + 20.8 x3 + 1.2 x4
+ 24 x5 + 25.9 x6 + 27.8 x7 + 6.2 x8 + 41 x9
When I maximise the plain sum, x1 gets dropped, but when I maximise with the top scorer taking double points, it should be selected.
Here are the constraints I am using:
Subject To
_C1: 10.5 x0 + 21.5 x1 + 17 x10 + 7.5 x11 + 11.5 x12 + 12 x13 + 7 x14 + 19 x15
+ 10.5 x16 + 5.5 x17 + 6.5 x18 + 6.5 x2 + 9.5 x3 + 9 x4 + 12 x5 + 12 x6
+ 9.5 x7 + 7 x8 + 14 x9 <= 100
_C10: x12 + x2 + x6 >= 1
_C11: x10 + x11 + x17 + x3 <= 4
_C12: x10 + x11 + x17 + x3 >= 1
_C13: x0 + x10 + x11 + x12 + x13 + x14 + x15 + x18 + x2 <= 5
_C14: x0 + x10 + x11 + x12 + x13 + x14 + x15 + x18 + x2 >= 3
_C15: x1 + x16 + x17 + x3 + x4 + x5 + x6 + x7 + x8 + x9 <= 5
_C16: x1 + x16 + x17 + x3 + x4 + x5 + x6 + x7 + x8 + x9 >= 3
_C2: x0 + x1 + x10 + x11 + x12 + x13 + x14 + x15 + x16 + x17 + x18 + x2 + x3
+ x4 + x5 + x6 + x7 + x8 + x9 = 8
_C3: x0 + x14 + x16 + x5 <= 4
_C4: x0 + x14 + x16 + x5 >= 1
_C5: x15 + x18 + x4 + x7 + x8 <= 4
_C6: x15 + x18 + x4 + x7 + x8 >= 1
_C7: x1 + x13 + x9 <= 4
_C8: x1 + x13 + x9 >= 1
_C9: x12 + x2 + x6 <= 4
Naturally, maximising A + B + C + D doesn't maximise max(2A+B+C+D, A+2B+C+D, A+B+2C+D, A+B+C+2D)
I'm going to answer the question I think you're asking and you can correct me if I'm wrong. My understanding of your question is:
I have a series of binary variables x0...xN, and if a variable is included it receives some points. If it is not included it receives no points.
There are some constraints which apply to the selection
If (and only if) a variable is selected and if (and only if) it is the chosen variable which receives the highest number of points, then that particular variable gets double points
The objective is to maximize the total points including the doubling of the highest scoring one.
Assuming that's your question, here is a dummy example that does that. Basically we add an auxiliary binary variable for each variable, which is true iff (if and only if) that variable scores the most points:
from pulp import *

n_vars = 4
idxs = range(n_vars)
points = [2.0, 3.0, 4.0, 5.0]

prob = LpProblem('FantasyTeam', LpMaximize)

# Variables
x = LpVariable.dicts('x', idxs, cat='Binary')
x_highest_score = LpVariable.dicts('x_highest_score', idxs, cat='Binary')

# Objective
prob += lpSum([points[i]*(x[i] + x_highest_score[i]) for i in idxs])

# Constraints
# Exactly one item has the highest score:
prob += lpSum([x_highest_score[i] for i in idxs]) == 1

# If a score is to be highest, it has to be chosen
for i in idxs:
    prob += x_highest_score[i] <= x[i]

# And some selection constraints:
prob += x[0] + x[1] + x[2] + 1.5*x[3] <= 3
prob += x[0] + x[2] + 3*x[3] <= 3
prob += x[0] + x[1] + x[2] + 2*x[3] <= 3
# etc...

# Solve problem
prob.solve()

# Get solution
x_soln = [x[i].varValue for i in idxs]
x_highest_soln = [x_highest_score[i].varValue for i in idxs]

# And print the outputs
print("Status: ", LpStatus[prob.status])
print("Total points: ", value(prob.objective))
print("x = ", x_soln)
print("x_highest_soln = ", x_highest_soln)
This should return the following:
Status: Optimal
Total points: 13.0
x = [0.0, 1.0, 0.0, 1.0]
x_highest_soln = [0.0, 0.0, 0.0, 1.0]
If you turn off the double-points option by changing the constraint to the following:
prob += lpSum([x_highest_score[i] for i in idxs]) == 0
i.e. none scores highest, you'll find a different set of choices is made.

solve linear equations given variables and uncertainties: scipy-optimize?

I'd like to minimize a set of equations where the variables are known with their uncertainties. In essence I'd like to test the hypothesis that the given measured variables conform to the formula constraints given by the equations. This seems like something I should be able to do with scipy-optimize. For example I have three equations:
8 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4
4 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4
1 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4
And four measured unknowns with their 1-sigma uncertainty:
x1 = 0.246 ± 0.007
x2 = 0.749 ± 0.010
x3 = 1.738 ± 0.009
x4 = 2.248 ± 0.007
Looking for any pointers in the right direction.
This is my approach. Assuming x1-x4 are approximately normally distributed around each mean (1-sigma uncertainty), the problem turns into one of minimizing the sum of squared errors, with 3 linear constraint functions. Therefore, we can attack it using scipy.optimize.fmin_slsqp().
In [19]:
import numpy as np
import scipy.optimize as so

def eq_f1(x):
    return (x*np.array([0.5, 1.0, 1.5, 2.0])).sum() - 8

def eq_f2(x):
    return (x*np.array([0.0, 0.0, 1.0, 1.0])).sum() - 4

def eq_f3(x):
    return (x*np.array([1.0, 1.0, 0.0, 0.0])).sum() - 1

def error_f(x):
    error = (x - np.array([0.246, 0.749, 1.738, 2.248])) / np.array([0.007, 0.010, 0.009, 0.007])
    return (error*error).sum()

In [20]:
so.fmin_slsqp(error_f, np.array([0.246, 0.749, 1.738, 2.248]), eqcons=[eq_f1, eq_f2, eq_f3])

Optimization terminated successfully. (Exit mode 0)
            Current function value: 2.17576389592
            Iterations: 4
            Function evaluations: 32
            Gradient evaluations: 4
Out[20]:
array([ 0.25056582,  0.74943418,  1.74943418,  2.25056582])
It appears to me that I have a very similar problem. I am relatively new to Python and have used it mostly to sort and reduce data with pandas.
I have a set of linear equations, where I want to find the best-fit parameters. However, the dataset has known uncertainties that need to be considered (given in parentheses).
x1*99(1) + x2*45(1) = 52(0.2)
x1*1(0.5) + x2*16(1) = 15(0.1)
Moreover there are constraints:
x1 >= 0
x2 >= 0
x1 + x2 = 1
My approach would be to treat the equations as constraints and minimize the sum of the residuals, as has been shown in the example above.
Solving this without uncertainties is not the issue. I am asking for a hint on how to account for the uncertainties while finding the best-fit parameters.
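For what it's worth, one possible sketch of that idea (not a definitive treatment; the particular error-propagation weighting is an assumption) is to divide each residual by its propagated uncertainty and minimize the weighted sum of squares under the constraints with scipy.optimize.minimize:

import numpy as np
from scipy.optimize import minimize

A     = np.array([[99.0, 45.0], [1.0, 16.0]])   # coefficients
A_sig = np.array([[1.0, 1.0], [0.5, 1.0]])      # their 1-sigma uncertainties
b     = np.array([52.0, 15.0])                  # right-hand sides
b_sig = np.array([0.2, 0.1])                    # their 1-sigma uncertainties

def objective(p):
    resid = A @ p - b
    # first-order propagation of coefficient and right-hand-side errors to each residual
    sigma = np.sqrt((A_sig**2) @ (p**2) + b_sig**2)
    return np.sum((resid / sigma)**2)

cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},)   # x1 + x2 = 1
bounds = [(0.0, None), (0.0, None)]                        # x1, x2 >= 0
res = minimize(objective, x0=np.array([0.5, 0.5]), method='SLSQP',
               bounds=bounds, constraints=cons)
print(res.x, res.fun)

The weighting makes an equation with a tight uncertainty count more heavily than a loose one, which is the usual chi-square style treatment; other choices are possible.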
As given, the problem has no solution. This is because if the inputs x1, x2, x3 and x4 are gaussian, then the outputs:
y1 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4 - 8.0
y2 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4 - 4.0
y3 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4 - 1.0
are also gaussian.
Assuming that x1, x2, x3 and x4 are independent random variables, this is easy to see with OpenTURNS:
import openturns as ot
x1 = ot.Normal(0.246, 0.007)
x2 = ot.Normal(0.749, 0.010)
x3 = ot.Normal(1.738, 0.009)
x4 = ot.Normal(2.248, 0.007)
y1 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4 - 8.0
y2 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4 - 4.0
y3 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4 - 1.0
The following script produces the graph:
graph1 = y1.drawPDF()
graph1.setLegends(["y1"])
graph2 = y2.drawPDF()
graph2.setLegends(["y2"])
graph3 = y3.drawPDF()
graph3.setLegends(["y3"])
graph1.add(graph2)
graph1.add(graph3)
graph1.setColors(["dodgerblue3",
"darkorange1",
"forestgreen"])
graph1.setXTitle("Y")
The previous script produces a plot of the probability density functions of y1, y2 and y3 (figure not reproduced here).
Given where 0.0 falls in these distributions, I would say that solving the equations exactly is mathematically impossible, although the data are physically consistent with them.
Actually, I guess that the gaussian distributions you gave for x1, ..., x4 are estimated from data. So I would rather reformulate the problem as follows:
Given a sample of observed values of x1, x2, x3, x4, what are the values of e1, e2, e3 such that:
y1 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4 - 8 + e1 = 0
y2 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4 - 4 + e2 = 0
y3 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4 - 1 + e3 = 0
This turns the problem into an inversion problem, which can be solved by calibrating e1, e2, e3. Furthermore, given the finite sample size of x1, ..., x4, we might want to produce the distribution of e1, e2, e3. This can be done by bootstrapping the input/output pairs (x, y): the distribution of e1, e2, e3 then reflects the variability of these parameters depending on the sample at hand.
First, we have to generate a sample from the distribution (I suppose that you have this sample, but did not publish it so far):
distribution = ot.ComposedDistribution([x1, x2, x3, x4])
sampleSize = 10
xobs = distribution.getSample(sampleSize)
Then we define the model:
formulas = [
"y1 := 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4 + e1 - 8.0",
"y2 := 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4 + e2 - 4.0",
"y3 := 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4 + e3 - 1.0"
]
program = ";".join(formulas)
g = ot.SymbolicFunction(["x1", "x2", "x3", "x4", "e1", "e2", "e3"],
["y1", "y2", "y3"],
program)
And set the observed outputs, which is a sample of zeros:
yobs = ot.Sample(sampleSize, 3)
We start with initial values equal to zero, and define the function to calibrate:
e1Initial = 0.0
e2Initial = 0.0
e3Initial = 0.0
thetaPrior = ot.Point([e1Initial,e2Initial,e3Initial])
calibratedIndices = [4, 5, 6]
mycf = ot.ParametricFunction(g, calibratedIndices, thetaPrior)
Then we can calibrate the model:
algo = ot.NonLinearLeastSquaresCalibration(mycf, xobs, yobs, thetaPrior)
algo.run()
calibrationResult = algo.getResult()
print(calibrationResult.getParameterMAP())
This prints:
[0.0265988,0.0153057,0.00495758]
This means that the errors e1, e2, e3 are rather small.
We can compute a confidence interval:
thetaPosterior = calibrationResult.getParameterPosterior()
print(thetaPosterior.computeBilateralConfidenceIntervalWithMarginalProbability(0.95)[0])
This prints:
[0.0110046, 0.0404756]
[0.00921992, 0.0210059]
[-0.00601084, 0.0156665]
The third parameter e3 might be zero, but neither e1 nor e2 can be.
Finally, we can get the distribution of the errors:
thetaPosterior = calibrationResult.getParameterPosterior()
and draw it:
graph1 = thetaPosterior.getMarginal(0).drawPDF()
graph2 = thetaPosterior.getMarginal(1).drawPDF()
graph3 = thetaPosterior.getMarginal(2).drawPDF()
graph1.add(graph2)
graph1.add(graph3)
graph1.setColors(["dodgerblue3",
"darkorange1",
"forestgreen"])
graph1
This produces a plot of the marginal PDFs of e1, e2 and e3 (figure not reproduced here).
This shows that e3 might be zero given the variability in the observed inputs x1, ..., x4, but e1 and e2 cannot be zero. The conclusion for this sample is that the third equation is approximately solved by the observed values of x1, ..., x4, but not the first two equations.
