I am trying to run multiple nested loops and then retrieve an array of values based on a condition within the loop. I have 14 alpha values to test, and each one needs to be tested for epoch counts of 1, 2, 3, 4, 5, and 6. As I test each alpha value over all 6 epoch settings I want to record the number of times the error is less than 0.05. At the end I want a 2D array where the rows represent the 14 different alpha values and each column is a different epoch value.
I want to know if there is a better way of doing this than using a tensor with NumPy. Using this method gives me a lot of issues when I try to scale this project.
For people who are interested, this is just a 2-input, single-hidden-layer neural network with one output, which I built to teach myself about back propagation. The code I submitted is for 2 neurons, but I am trying to scale this to 4 right now and eventually to n neurons. Storing the error values, calculating the successes, and outputting them in a 2D array where I could see which alpha and epoch pair produces the best results would be very helpful.
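For example, a stripped-down version of the bookkeeping I have in mind (the names here are just placeholders, not my real code):

import numpy as np
alphas = [0.001, 0.002, 0.003]  # 14 values in the real run
epochs = range(1, 7)            # epoch counts 1 through 6
nSuccess_Array = np.zeros((len(alphas), len(epochs)))
for l, alpha in enumerate(alphas):
    for n, nEpoch in enumerate(epochs):
        nSuccess = 0
        # ... train 50 randomly initialized networks for nEpoch epochs,
        # incrementing nSuccess whenever the test error is below 0.05 ...
        nSuccess_Array[l, n] = nSuccess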
I have already completed this task before, using this code:
import numpy as np
import random
#(nTrain, nTest, x, t, a, er_List and nSuccess_Array are set up exactly as in the full listing further down)
for l in range(0,14):
    alpha = [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009, 0.01, 0.02, 0.03, 0.04, 0.05]
    nEpoch = 1
    for n in range(0,6):
        nSuccess = 0
        w11f = np.zeros(nEpoch*nTrain)
        w12f = np.zeros(nEpoch*nTrain)
        for j in range(0,50):
            w11 = 0.5 - np.random.rand()
            w12 = 0.5 - np.random.rand()
            w21 = 0.5 - np.random.rand()
            w22 = 0.5 - np.random.rand()
            w31 = 0.5 - np.random.rand()
            w32 = 0.5 - np.random.rand()
            w41 = 0.5 - np.random.rand()
            w42 = 0.5 - np.random.rand()
            b4 = 0.5 - np.random.rand()
            b3 = 0.5 - np.random.rand()
            b2 = 0.5 - np.random.rand()
            b1 = 0.5 - np.random.rand()
            ww1 = 0.5 - np.random.rand()
            ww2 = 0.5 - np.random.rand()
            ww3 = 0.5 - np.random.rand()
            ww4 = 0.5 - np.random.rand()
            bb = 0.5 - np.random.rand()
            sp = random.sample(a, nTrain + nTest)
            p = 0
            for epoch in range(0,nEpoch):
                for i in range(0,nTrain):
                    y1 = b1 + w11*x[sp[i],0] + w12*x[sp[i],1]
                    y2 = b2 + w21*x[sp[i],0] + w22*x[sp[i],1]
                    y3 = b3 + w31*x[sp[i],0] + w32*x[sp[i],1]
                    y4 = b4 + w41*x[sp[i],0] + w42*x[sp[i],1]
                    dxx1 = y1 > 0
                    xx1 = y1*dxx1
                    dxx2 = y2 > 0
                    xx2 = y2*dxx2
                    dxx3 = y3 > 0
                    xx3 = y3*dxx3
                    dxx4 = y4 > 0
                    xx4 = y4*dxx4
                    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
                    yy = yy > 0
                    e = t[sp[i]] - yy
                    #Updating parameters
                    ww1 = ww1 + alpha[l]*e*xx1
                    ww2 = ww2 + alpha[l]*e*xx2
                    ww3 = ww3 + alpha[l]*e*xx3
                    ww4 = ww4 + alpha[l]*e*xx4
                    bb = bb + alpha[l]*e
                    w11 = w11 + alpha[l]*e*ww1*dxx1*x[sp[i],0]
                    w12 = w12 + alpha[l]*e*ww1*dxx1*x[sp[i],1]
                    w21 = w21 + alpha[l]*e*ww2*dxx2*x[sp[i],0]
                    w22 = w22 + alpha[l]*e*ww2*dxx2*x[sp[i],1]
                    w31 = w31 + alpha[l]*e*ww3*dxx3*x[sp[i],0]
                    w32 = w32 + alpha[l]*e*ww3*dxx3*x[sp[i],1]
                    w41 = w41 + alpha[l]*e*ww4*dxx4*x[sp[i],0]
                    w42 = w42 + alpha[l]*e*ww4*dxx4*x[sp[i],1]
                    b1 = b1 + alpha[l]*e*ww1*dxx1
                    b2 = b2 + alpha[l]*e*ww2*dxx2
                    b3 = b3 + alpha[l]*e*ww3*dxx3
                    b4 = b4 + alpha[l]*e*ww4*dxx4
                    w11f[p] = w11
                    w12f[p] = w12
                    p = p + 1
            er = 0
            for k in range(nTrain, nTrain + nTest):
                y1 = b1 + w11*x[sp[i],0] + w12*x[sp[i],1]
                y2 = b2 + w21*x[sp[i],0] + w22*x[sp[i],1]
                y3 = b3 + w31*x[sp[i],0] + w32*x[sp[i],1]
                y4 = b4 + w41*x[sp[i],0] + w42*x[sp[i],1]
                dxx1 = y1 > 0
                xx1 = y1*dxx1
                dxx2 = y2 > 0
                xx2 = y2*dxx2
                dxx3 = y3 > 0
                xx3 = y3*dxx3
                dxx4 = y4 > 0
                xx4 = y4*dxx4
                yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
                yy = yy > 0
                e = abs(t[sp[k]] - yy)
                er = er + e #Accumulates error
            er = er/nTest #Calculates average error
            er_List[l,j,n] = er
            if er_List[l,j,n] < 0.05:
                nSuccess = nSuccess + 1
        #Part C - Creating an array that contains the success values of each
        #alpha and epoch value pair
        nSuccess_Array[l,n] = nSuccess #Array that contains the success counts
        if nEpoch < 6:
            nEpoch += 1
This code I wrote in SymPy runs slowly. I want to rewrite it with symengine. How can I translate it? I had some difficulty with the solve commands. Can you help me?
Edit: Here is my code:
import sympy as sy
import time
#import numpy as np
import math as mat
from sympy import Eq
testere_capi=197
dis_sayisi=78
ic_acisi = 16
sirt_acisi = 8
derinlik_carpani = 0.4
kucuk_daire_carpani = 0.25
buyuk_daire_carpani = 0.8
son_dogrunun_carpani = 0.06
tas_kalinligi = 2.0
T = ((testere_capi * mat.pi) / dis_sayisi) # hatve (tooth pitch)
H = T * derinlik_carpani # derinlik (depth)
x = sy.symbols("x")
y = sy.symbols("y")
D2 = sy.Eq(y, H)
a8 = 0
b8 = 0
m_d1 = mat.tan(mat.radians(90 - ic_acisi))
D1 = sy.Eq(y - b8, m_d1 * (x - a8))
S1 = sy.solve((D2, D1), (x, y))
a1 = S1[x]
b1 = S1[y]
b2 = T * kucuk_daire_carpani
a2 = b2 / mat.tan(mat.radians(90 - ic_acisi) / 2)
r1 = T * kucuk_daire_carpani
D3 = sy.Eq((x - a2) ** 2 + (y - b2) ** 2, r1 ** 2)
D1 = sy.expand(D1)
D3 = sy.expand(D3)
S7 = sy.solve((D1,D3),(x,y))
a7 = S7[0][0]
b7 = S7[0][1]
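Incidentally, both solves here have closed forms, so one way to avoid the slow symbolic calls entirely is to compute the intersections numerically. A minimal sketch (my own rewrite reusing the values defined above; pick whichever root of the quadratic corresponds to S7[0]):

import math
m = math.tan(math.radians(90 - ic_acisi))  # slope of D1, which passes through the origin
# D1 and D2: y = H and y = m*x give x = H/m
a1, b1 = H / m, H
# D1 and D3: substitute y = m*x into (x - a2)**2 + (y - b2)**2 = r1**2
A = 1 + m**2
B = -2 * (a2 + m * b2)
C = a2**2 + b2**2 - r1**2
disc = B**2 - 4*A*C  # assumed non-negative, i.e. the line meets the circle
x7 = (-B - math.sqrt(disc)) / (2*A)
a7, b7 = x7, m * x7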
I am getting the wrong output for this equation. Can someone review my code?
from math import cos
from math import sin
from math import pi
a0 = int(input("a0:"))
b0 = int(input("b0:"))
N = int(input("N:"))
L = int(input("L:"))
X = int(input("X:"))
n = 0
an = a0
bn = b0
y = 0
for i in range(N):
    an = an + 10 # since the first value would be an = a0 + 10, we can just add 10 each time through the loop
    bn = bn * 10
    y = an * cos((n*pi*X/(L))) + bn*(sin(n*pi*X/(L)))
    print(y)
y= an * cos((n*pi*X/(L))) + bn*(sin(n*pi*X/(L)))
There's never any change in the n variable! Given that n starts (and remains) at 0, you always calculate
an * cos(0) + bn * sin(0) == an * 1 + bn * 0 == an
Furthermore you need to add the result to the y variable, not just assign it. And you need to prime the y variable with a0.
an = a0
bn = b0
c = 0
d = pi * X / L # precalculating for efficiency
y = a0
for i in range(N):
    an = an + 10
    bn = bn * 10
    c = c + d
    y = y + an * cos(c) + bn * sin(c)
    print(y)
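In other words, the loop now computes the partial sum

y = a0 + sum over n = 1..N of [ an*cos(n*pi*X/L) + bn*sin(n*pi*X/L) ]

with an = a0 + 10*n and bn = b0 * 10**n, which is what the original code was aiming for.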
I am trying to code two neural networks. The architecture of the first network consists of an input layer, one hidden layer, and an output layer. The input layer is R^2, so it accepts two inputs (x1, x2); the hidden layer has two neurons; and the output layer has a single neuron. All the neurons use the rectified linear unit (ReLU) activation function. The only difference between the first and second neural network is that the second has four neurons in the hidden layer. Otherwise they are identical.
I finished the code for the first network and was able to run it and plot results. I am mainly looking to get the neural network to learn how to separate two clusters in my data set. I generate 2000 points to form one cluster, and then another 2000 for the second cluster. The output of the neural network will ideally find a separating plane (really multiple planes) to separate the two clusters. I have set up my plot to draw only when the error from the testing phase is less than 0.05. I should also explain that I am trying to find the ideal learning rate and epoch count for training, so I have a few loops to iterate through different learning rates (alpha) and epochs.
My first network works fine, but when I add 2 neurons, for some reason my network error and parameters (weights and biases) get all wonky. I can't get the 4-neuron network to reach an error below 0.4. I think it has something to do with the error and weights. I have been running the network with print statements to see what's happening to the weights, and noticed they don't update well because the error during training gets stuck at 0, so the weights never update; but I am not 100% sure this always happens.
If anyone has clues as to why my weights and error are not updating properly, I would greatly appreciate it. If you run the code you will see that when you plot the two clusters, the output of the neural network does not create a colored separation between the clusters. The code for the working two-neuron architecture is the same; just remove the additional 2 neurons from the code.
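In equation form, the updates applied in the code below are (ReLU hidden units xxk = max(yk, 0), step-function output):

e = t - step(bb + sum_k wwk*xxk)
wwk <- wwk + alpha*e*xxk,    bb <- bb + alpha*e
wkj <- wkj + alpha*e*wwk*[yk > 0]*xj,    bk <- bk + alpha*e*wwk*[yk > 0]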
Here is the code for the network:
import numpy as np
import random
import gc
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
nData = 2000 #2000 points used on each cluster for 4000 points total
nTrain = 1000 #Used for training loop and to create clusters
nEpoch = 1 #Initial epoch value
nTest = 2000 #Used for testing loop
#alpha = 0.001
#Initializing 2D array for x which will carry the x1 and x2 values
#Also creating the radius and theta values for the cluster data
std = 0.5
x = np.zeros((2*nData,2))
t = np.zeros((2*nData))
r = np.random.normal(0, std, 2*nData)
theta = 2*np.pi*np.random.rand(2*nData)
#w11f and w12f are used to plot the value of weights w11 and w12 as they update
w11f = np.zeros(nEpoch*nTrain)
w12f = np.zeros(nEpoch*nTrain)
#Creating cluster 1 and target data
h = -6 + 12*np.random.rand(nData)
v = 5 + (h**2)/6
x[0:nData,0] = h + r[0:nData]*np.cos(theta[0:nData])
x[0:nData,1] = v + r[0:nData]*np.sin(theta[0:nData])
t[0:nData] = 0
#Creating cluster 2 and target data
h = -5 + 10*np.random.rand(nData)
v = 10 + (h**2)/4
x[nData:2*nData,0] = h + r[nData:2*nData]*np.cos(theta[nData:2*nData])
x[nData:2*nData,1] = v + r[nData:2*nData]*np.sin(theta[nData:2*nData])
t[nData:2*nData] = 1
#Normalization
x[:,0] = 1 + 0.1*x[:,0]
x[:,1] = 1 + 0.1*x[:,1]
#Parameter Initialization
w11 = 0.5 - np.random.rand()
w12 = 0.5 - np.random.rand()
w21 = 0.5 - np.random.rand()
w22 = 0.5 - np.random.rand()
w31 = 0.5 - np.random.rand()
w32 = 0.5 - np.random.rand()
w41 = 0.5 - np.random.rand()
w42 = 0.5 - np.random.rand()
b4 = 0.5 - np.random.rand()
b3 = 0.5 - np.random.rand()
b2 = 0.5 - np.random.rand()
b1 = 0.5 - np.random.rand()
ww1 = 0.5 - np.random.rand()
ww2 = 0.5 - np.random.rand()
ww3 = 0.5 - np.random.rand()
ww4 = 0.5 - np.random.rand()
bb = 0.5 - np.random.rand()
#Creating a list from 0 to 3999
a = range(0,2*nData)
#Creating a 3D array (tensor) to store all the error values at the end of each 50 iteration loop
er_List = np.zeros((14,50,6))
#Creating the final array to store the counter of successful error. These are errors under 0.05 in value
#the rows represent the alpha values from 0.001 to 0.05 and the columns represent each epoch from 1 to 6. This way you can view the 2D array and see which alpha and epoch give the most successes for the lowest error.
nSuccess_Array = np.zeros((14,6))
#Part B - Creating nested loops to train for multiple alpha and epoch value
#pairs
#Training
for l in range(0,14): #loop for alpha values
    alpha = [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009, 0.01, 0.02, 0.03, 0.04, 0.05]
    nEpoch = 1
    for n in range(0,6): #loop for incrementing epoch values
        nSuccess = 0
        #Initialize these again so the size updates as the epoch changes
        w11f = np.zeros(nEpoch*nTrain)
        w12f = np.zeros(nEpoch*nTrain)
        for j in range(0,50):
            #Initialize the parameters again so they are random every 50 iterations (for each new epoch value)
            w11 = 0.5 - np.random.rand()
            w12 = 0.5 - np.random.rand()
            w21 = 0.5 - np.random.rand()
            w22 = 0.5 - np.random.rand()
            w31 = 0.5 - np.random.rand()
            w32 = 0.5 - np.random.rand()
            w41 = 0.5 - np.random.rand()
            w42 = 0.5 - np.random.rand()
            b4 = 0.5 - np.random.rand()
            b3 = 0.5 - np.random.rand()
            b2 = 0.5 - np.random.rand()
            b1 = 0.5 - np.random.rand()
            ww1 = 0.5 - np.random.rand()
            ww2 = 0.5 - np.random.rand()
            ww3 = 0.5 - np.random.rand()
            ww4 = 0.5 - np.random.rand()
            bb = 0.5 - np.random.rand()
            sp = random.sample(a, nTrain + nTest)
            p = 0
            for epoch in range(0,nEpoch):
                for i in range(0,nTrain):
                    #Neuron dot product
                    y1 = b1 + w11*x[sp[i],0] + w12*x[sp[i],1]
                    y2 = b2 + w21*x[sp[i],0] + w22*x[sp[i],1]
                    y3 = b3 + w31*x[sp[i],0] + w32*x[sp[i],1]
                    y4 = b4 + w41*x[sp[i],0] + w42*x[sp[i],1]
                    #Neuron activation function ReLU
                    dxx1 = y1 > 0
                    xx1 = y1*dxx1
                    dxx2 = y2 > 0
                    xx2 = y2*dxx2
                    dxx3 = y3 > 0
                    xx3 = y3*dxx3
                    dxx4 = y4 > 0
                    xx4 = y4*dxx4
                    #Output of neural network before activation function
                    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
                    yy = yy > 0 #activation function
                    e = t[sp[i]] - yy #error calculation
                    #Updating parameters
                    ww1 = ww1 + alpha[l]*e*xx1
                    ww2 = ww2 + alpha[l]*e*xx2
                    ww3 = ww3 + alpha[l]*e*xx3
                    ww4 = ww4 + alpha[l]*e*xx4
                    bb = bb + alpha[l]*e
                    w11 = w11 + alpha[l]*e*ww1*dxx1*x[sp[i],0]
                    w12 = w12 + alpha[l]*e*ww1*dxx1*x[sp[i],1]
                    w21 = w21 + alpha[l]*e*ww2*dxx2*x[sp[i],0]
                    w22 = w22 + alpha[l]*e*ww2*dxx2*x[sp[i],1]
                    w31 = w31 + alpha[l]*e*ww3*dxx3*x[sp[i],0]
                    w32 = w32 + alpha[l]*e*ww3*dxx3*x[sp[i],1]
                    w41 = w41 + alpha[l]*e*ww4*dxx4*x[sp[i],0]
                    w42 = w42 + alpha[l]*e*ww4*dxx4*x[sp[i],1]
                    b1 = b1 + alpha[l]*e*ww1*dxx1
                    b2 = b2 + alpha[l]*e*ww2*dxx2
                    b3 = b3 + alpha[l]*e*ww3*dxx3
                    b4 = b4 + alpha[l]*e*ww4*dxx4
                    w11f[p] = w11
                    w12f[p] = w12
                    p = p + 1
            er = 0
            #Testing
            for k in range(nTrain, nTrain + nTest):
                y1 = b1 + w11*x[sp[i],0] + w12*x[sp[i],1]
                y2 = b2 + w21*x[sp[i],0] + w22*x[sp[i],1]
                y3 = b3 + w31*x[sp[i],0] + w32*x[sp[i],1]
                y4 = b4 + w41*x[sp[i],0] + w42*x[sp[i],1]
                dxx1 = y1 > 0
                xx1 = y1*dxx1
                dxx2 = y2 > 0
                xx2 = y2*dxx2
                dxx3 = y3 > 0
                xx3 = y3*dxx3
                dxx4 = y4 > 0
                xx4 = y4*dxx4
                yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
                yy = yy > 0
                e = abs(t[sp[k]] - yy)
                er = er + e #Accumulates error
            er = er/nTest #Calculates average error
            er_List[l,j,n] = er
            if er_List[l,j,n] < 0.05:
                nSuccess = nSuccess + 1
        #Part C - Creating an array that contains the success values of each
        #alpha and epoch value pair
        nSuccess_Array[l,n] = nSuccess #Array that contains the success counts
        if nEpoch < 6:
            nEpoch = nEpoch + 1
print(er)
#Plotting
if er < 0.5:
    plt.figure(1)
    plt.scatter(x[0:nData,0], x[0:nData,1])
    plt.scatter(x[nData:2*nData,0], x[nData:2*nData,1])
    X = np.arange(0.25, 1.75, 0.02)
    Y = np.arange(1.25, 2.75, 0.02)
    X, Y = np.meshgrid(X, Y)
    y1 = b1 + w11*X + w12*Y
    y2 = b2 + w21*X + w22*Y
    y3 = b3 + w31*X + w32*Y
    y4 = b4 + w41*X + w42*Y
    dxx1 = y1 > 0
    xx1 = y1*dxx1
    dxx2 = y2 > 0
    xx2 = y2*dxx2
    dxx3 = y3 > 0
    xx3 = y3*dxx3
    dxx4 = y4 > 0
    xx4 = y4*dxx4
    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
    Z = yy > 0
    plt.scatter(X, Y, c=Z+1, alpha=0.3)
    plt.figure(2)
    f = np.arange(0, nEpoch*nTrain, 1)
    plt.plot(f, w11f)
    plt.figure(3)
    plt.plot(f, w12f)
    plt.figure(4)
    ax = plt.axes(projection='3d')
    ax.scatter(x[0:nData,0], x[0:nData,1], 0, s=30)
    ax.scatter(x[nData:2*nData,0], x[nData:2*nData,1], 1, s=30)
    #Plotting the separating planes
    X = np.arange(0.25, 1.75, 0.02)
    Y = np.arange(1.25, 2.75, 0.02)
    X, Y = np.meshgrid(X, Y)
    y1 = b1 + w11*X + w12*Y
    y2 = b2 + w21*X + w22*Y
    y3 = b3 + w31*X + w32*Y
    y4 = b4 + w41*X + w42*Y
    dxx1 = y1 > 0
    xx1 = y1*dxx1
    dxx2 = y2 > 0
    xx2 = y2*dxx2
    dxx3 = y3 > 0
    xx3 = y3*dxx3
    dxx4 = y4 > 0
    xx4 = y4*dxx4
    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
    Z = yy > 0
    ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis', alpha=0.5)
    plt.figure(5)
    ax = plt.axes(projection='3d')
    X = np.arange(0, 5, 0.02)
    Y = np.arange(0, 5, 0.02)
    X, Y = np.meshgrid(X, Y)
    y1 = b1 + w11*X + w12*Y
    y2 = b2 + w21*X + w22*Y
    y3 = b3 + w31*X + w32*Y
    y4 = b4 + w41*X + w42*Y
    dxx1 = y1 > 0
    xx1 = y1*dxx1
    dxx2 = y2 > 0
    xx2 = y2*dxx2
    dxx3 = y3 > 0
    xx3 = y3*dxx3
    dxx4 = y4 > 0
    xx4 = y4*dxx4
    yy = bb + ww1*xx1 + ww2*xx2 + ww3*xx3 + ww4*xx4
    ax.plot_surface(X, Y, yy, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
Yes, you can do it using np.matmul (a @ b) and calculating gradients manually. Check out the Fastai v3 course, part 2: https://course.fast.ai/videos/?lesson=8. Jeremy Howard manipulates PyTorch tensors, but you can do the same in NumPy.
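For illustration, here is a minimal sketch of that approach for a 2-4-1 ReLU network, with toy data and hand-computed gradients (the data, names, and the linear output unit are my own choices, not the question's exact code):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 2))                    # toy inputs
t = (X[:, 0] + X[:, 1] > 1.0).astype(float)  # toy targets

W1 = rng.random((2, 4)) - 0.5                # replaces w11..w42
b1 = rng.random(4) - 0.5                     # replaces b1..b4
W2 = rng.random((4, 1)) - 0.5                # replaces ww1..ww4
b2 = rng.random(1) - 0.5                     # replaces bb

alpha = 0.01
for xi, ti in zip(X, t):
    h = xi @ W1 + b1                         # hidden pre-activation
    dh = (h > 0).astype(float)               # ReLU derivative mask
    a = h * dh                               # ReLU activation
    y = float(a @ W2 + b2)                   # linear output (a hard step here has zero gradient almost everywhere)
    e = ti - y
    # chain rule, all gradients computed before any weights change
    gW2 = e * a[:, None]
    gb2 = e
    gW1 = e * np.outer(xi, W2[:, 0] * dh)
    gb1 = e * W2[:, 0] * dh
    W2 += alpha * gW2
    b2 += alpha * gb2
    W1 += alpha * gW1
    b1 += alpha * gb1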
I am having issues calculating a function, even though the function itself is pretty straightforward.
I have the following dataframe:
import pandas as pd
import numpy as np
import math as m
from scipy.stats import norm
dff = pd.DataFrame({'SKU': ['001', '002', '003','004','005'],
'revenue_contribution_in_percentage': [0.2, 0.2, 0.3,0.1,0.2],
'BuyPrice' : [7.78,9.96,38.87,6.91,14.04],
'SellPrice' : [7.9725,12.25,43,7.1,19.6],
'margin' : [0.9725,2.2908,5.8305,0.2764,5.1948],
'Avg_per_week' : [71.95,75.65,105.7,85.95,66.1],
'StockOnHand' : [260,180,260,205,180],
'StockOnOrder': [0,0,0,0,0],
'Supplier' : ['ABC', 'ABC', 'ABC','ABC','ABC'],
'SupplierLeadTime': [12,12,12,12,12],
'cumul_value':[0.20,0.4,0.6,0.8,1],
'class_mention':['A','A','B','D','C'],
'std_week':[21.585,26.4775,21.14,31.802, 26.44],
'review_time' : [5,5,5,5,5],
'holding_cost': [0.35, 0.35, 0.35,0.35,0.35],
'aggregate_order_placement_cost': [1000, 1000,1000,1000,1000],
'periods' : [7,7,7,7,7]})
dff['holding_cost'] = 0.35
dff1 = dff.sort_values(['Supplier'])
df2 = pd.DataFrame(dff1)
df2['forecast_dts'] = 5
df2['sigma_rtlt'] = 0.5
I need to pass some of these parameters into the function:

def calc_invUnitNormalLossApprox():
    a0 = -5.3925569
    a1 = 5.6211054
    a2 = -3.883683
    a3 = 1.0897299
    b0 = 1
    b1 = -0.72496485
    b2 = 0.507326622
    b3 = 0.0669136868
    b4 = -0.00329129114
    z = np.sqrt(np.log(25 /
        (norm.pdf((df2['forecast_dts'])*(1-0.98)/df2['sigma_rtlt']) -
         ((df2['forecast_dts']*(1-0.98)/df2['sigma_rtlt'])) * (1-norm.cdf(df2['forecast_dts']*(1-0.98)/df2['sigma_rtlt']))) ^ 2))
    num = (a0 + a1 * z + a2 * z ^ 2 + a3 * z ^ 3)
    den = (b0 + b1 * z + b2 * z ^ 2 + b3 * z ^ 3 + b4 * z ^ 4)
    k = num / den
    return k
but then calculating
calc = calc_invUnitNormalLossApprox()*df2['sigma_rtlt']
returns the error:
File "/usr/local/lib/python3.7/site-packages/pandas/core/ops/__init__.py", line 1280, in na_op
dtype=x.dtype, typ=type(y).__name__
TypeError: cannot compare a dtyped [float64] array with a scalar of type [bool]
At this point I am not sure what is going on, especially because I know the formula itself is correct. I am assuming there is something wrong with my use of norm.pdf and norm.cdf, but I couldn't figure it out.
Any help would be really appreciated.
With the ^ operator you are doing a bitwise XOR; I think you need to use the ** operator instead.
This code works
def calc():
    a0 = -5.3925569
    a1 = 5.6211054
    a2 = -3.883683
    a3 = 1.0897299
    b0 = 1
    b1 = -0.72496485
    b2 = 0.507326622
    b3 = 0.0669136868
    b4 = -0.00329129114
    z = np.sqrt(np.log(25 /
        (norm.pdf((df2['forecast_dts'])*(1-0.98)/df2['sigma_rtlt']) -
         ((df2['forecast_dts']*(1-0.98)/df2['sigma_rtlt'])) * (1-norm.cdf(df2['forecast_dts']*(1-0.98)/df2['sigma_rtlt']))) ** 2))
    num = (a0 + a1 * z + a2 * z ** 2 + a3 * z ** 3)
    den = (b0 + b1 * z + b2 * z ** 2 + b3 * z ** 3 + b4 * z ** 4)
    k = num / den
    return k
Note: I have changed the ^ operator to **.
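The difference is easy to check:

print(2 ** 3)  # exponentiation -> 8
print(2 ^ 3)   # bitwise XOR of 0b10 and 0b11 -> 1

On float-valued pandas Series, ^ is not defined at all, which is what triggers the TypeError in the question.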
I am using a standard integer programming model for solving the two-dimensional bin packing problem; the base model is the one-dimensional version, and the code I have written incorporates the constraints for the additional dimension.
I am using Python PuLP for solving the optimization problem. The code is as follows:
from pulp import *

#knapsack problem
def knapsolve(item):
    prob = LpProblem('BinPacking', LpMinimize)
    ys = [LpVariable("y{0}".format(i+1), cat="Binary") for i in range(item.bins)]
    xs = [LpVariable("x{0}{1}".format(i+1, j+1), cat="Binary")
          for i in range(item.items) for j in range(item.bins)]
    #minimize objective
    nbins = sum(ys)
    prob += nbins
    print(nbins)
    #constraints
    t = nbins >= 1
    print(t)
    prob += t
    for i in range(item.items):
        con1 = sum(xs[(i + j*item.bins)] for j in range(item.bins))
        t = con1 == 1
        prob += t
        print(t)
    for k in range(item.bins):
        x = xs[k*item.bins : (k+1)*item.bins]
        con1 = sum([x1*w for x1, w in zip(x, item.itemweight)])
        t = con1 <= item.binweight[k] * ys[k]
        #t = con1 <= item.binweight[k]
        prob += t
        print(t)
    for k in range(item.bins):
        x = xs[k*item.bins : (k+1)*item.bins]
        con1 = sum([x1*w for x1, w in zip(x, item.itemheight)])
        t = con1 <= item.binheight[k] * ys[k]
        #t = con1 <= item.binheight[k]
        prob += t
        print(t)
    status = prob.solve()
    print(LpStatus[status])
    print("Objective value:", value(prob.objective))
    print('\nThe values of the variables :\n')
    for v in prob.variables():
        print(v.name, "=", v.varValue)
    return

class Item:
    #bins, binweight, items, weight, itemheight, binheight
    bins = 5
    items = 5
    binweight = [2,3,2,5,3]
    itemweight = [1,2,2,1,3]
    itemheight = [2,1,4,5,3]
    binheight = [4,9,10,8,10]

item = Item()
knapsolve(item)
It produces the following output:
y1 + y2 + y3 + y4 + y5
y1 + y2 + y3 + y4 + y5 >= 1
x11 + x21 + x31 + x41 + x51 = 1
x12 + x22 + x32 + x42 + x52 = 1
x13 + x23 + x33 + x43 + x53 = 1
x14 + x24 + x34 + x44 + x54 = 1
x15 + x25 + x35 + x45 + x55 = 1
x11 + 2*x12 + 2*x13 + x14 + 3*x15 - 2*y1 <= 0
x21 + 2*x22 + 2*x23 + x24 + 3*x25 - 3*y2 <= 0
x31 + 2*x32 + 2*x33 + x34 + 3*x35 - 2*y3 <= 0
x41 + 2*x42 + 2*x43 + x44 + 3*x45 - 5*y4 <= 0
x51 + 2*x52 + 2*x53 + x54 + 3*x55 - 3*y5 <= 0
2*x11 + x12 + 4*x13 + 5*x14 + 3*x15 - 4*y1 <= 0
2*x21 + x22 + 4*x23 + 5*x24 + 3*x25 - 9*y2 <= 0
2*x31 + x32 + 4*x33 + 5*x34 + 3*x35 - 10*y3 <= 0
2*x41 + x42 + 4*x43 + 5*x44 + 3*x45 - 8*y4 <= 0
2*x51 + x52 + 4*x53 + 5*x54 + 3*x55 - 10*y5 <= 0
Optimal
Objective value: 3.0
The values of the variables :
x11 = 0.0
x12 = 0.0
x13 = 0.0
x14 = 0.0
x15 = 0.0
x21 = 0.0
x22 = 0.0
x23 = 1.0
x24 = 0.0
x25 = 0.0
x31 = 0.0
x32 = 0.0
x33 = 0.0
x34 = 0.0
x35 = 0.0
x41 = 0.0
x42 = 1.0
x43 = 0.0
x44 = 0.0
x45 = 1.0
x51 = 1.0
x52 = 0.0
x53 = 0.0
x54 = 1.0
x55 = 0.0
y1 = 0.0
y2 = 1.0
y3 = 0.0
y4 = 1.0
y5 = 1.0
The hard-coded sample input data should produce 1 bin as the output; that is, one y variable should have the value 1. However, this is not the case. Are the equations modeled properly? Is there another way to specify the constraints?
The mathematical model for the standard bin-packing problem uses x(bins,items), while in the Python model you seem to use a mix of x(bins,items) and x(items,bins). The assignment to xs uses x(items,bins), but the construct xs[(i + j*item.bins)] implies x(bins,items). This is easily seen by inspecting the output: x11 + x21 + x31 + x41 + x51 = 1 indicates x(bins,items). This type of modeling with explicit index calculations is rather error-prone in practice. This is a toy model, but for real models the lack of type checking can be very dangerous.
Different bin weights and heights should be no problem.
Also given your data
binweight = [2,3,2,5,3]
itemweight = [1,2,2,1,3]
itemheight = [2,1,4,5,3]
binheight = [4,9,10,8,10]
I don't believe this can be handled by just 1 bin as you claim; you need 3 bins for this (bins 2, 4 and 5). (You are lucky here: although there are actually bugs in the Python code, you get good solutions.)
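For what it's worth, here is a sketch of one way to avoid the explicit index arithmetic, using LpVariable.dicts keyed by (item, bin) pairs (my own rewrite of the model above, not the poster's code):

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value

items = range(5)
bins = range(5)
itemweight = [1, 2, 2, 1, 3]
itemheight = [2, 1, 4, 5, 3]
binweight = [2, 3, 2, 5, 3]
binheight = [4, 9, 10, 8, 10]

prob = LpProblem("BinPacking", LpMinimize)
y = LpVariable.dicts("y", bins, cat="Binary")
x = LpVariable.dicts("x", [(i, b) for i in items for b in bins], cat="Binary")

prob += lpSum(y[b] for b in bins)                  # minimize bins used
for i in items:                                    # each item goes in exactly one bin
    prob += lpSum(x[i, b] for b in bins) == 1
for b in bins:                                     # capacity in each dimension, per bin
    prob += lpSum(itemweight[i] * x[i, b] for i in items) <= binweight[b] * y[b]
    prob += lpSum(itemheight[i] * x[i, b] for i in items) <= binheight[b] * y[b]

prob.solve()
print(LpStatus[prob.status], value(prob.objective))

Because each variable is addressed by an explicit (item, bin) key rather than a computed offset, a transposed index becomes a KeyError instead of a silently wrong model.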