I have built a NN with the following architecture:
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
(1901, 456, 3) (476, 456, 3) (1901, 3, 3) (476, 3, 3)
model = Sequential()
model.add(Flatten(input_shape=(456,3)))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(3 * 3))
model.add(Reshape((3, 3)))
model.compile('adam', 'mse')
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=100)
Now I want to replace this architecture with an analogous CNN that does the same thing, but whenever I try to implement it I run into problems with the dimensions of the different layers. The error always looks like this:
ValueError: Error when checking input: expected conv2d_3_input to have 4 dimensions, but got array with shape (x, x, x)
The dataset remains the same; only the NN architecture changes. This is my first approach:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=(1901, 456, 3)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
Can someone help me convert my first NN into a CNN?
Your network is well defined; the error you're getting occurs during the fit operation. Why is that the case?
Well, Conv2D expects 4D input, as you can see here: doc
X_train's shape must then be (samples, channels, rows, cols).
When you set input_shape=(1901, 456, 3), you didn't need to include the number of samples; input_shape never contains the samples dimension.
But during the fit operation the data itself must be shaped as (samples, channels, rows, cols).
And now you can see the problem: with that input_shape it looks as if you only have one image. You could feed it by reshaping it with:
X_train = X_train.reshape((1, 1901, 456, 3))
But that seems odd: you'd be feeding only one image to your network.
Edit: after clarification in the comments, Conv1D is a better fit for this kind of data. Here is how to do it:
model = Sequential()
model.add(Conv1D(32, kernel_size=3,
                 activation='relu',
                 input_shape=(456, 3)))
model.add(Conv1D(64, 3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3 * 3))  # linear output, mirroring the original dense model trained with MSE
model.add(Reshape((3, 3)))
Now the architecture works and there is no problem compiling the NN:
batch_size = 128
epochs = 12
model.compile(
    optimizer='rmsprop',
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=['mse'],
)
model.fit(X_test, Y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
But when trying to fit I get the following error:
ValueError: Input arrays should have the same number of samples as target
arrays. Found 476 input samples and 1901 target samples.
What am I missing here?
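For reference, the split printed at the top had X_train with 1901 samples and X_test with 476, which are exactly the two numbers in the error, so the inputs and targets passed to fit come from different splits. A minimal sketch with matching splits, assuming the intent was to train on the training set and validate on the test set:

model.fit(X_train, Y_train,                    # both from the training split (1901 samples)
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(X_test, Y_test))    # both from the test split (476 samples)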
This is my model:
def make_model():
    model = Sequential()
    model.add(Conv2D(kernel_size=(3, 3), filters=16, input_shape=(32, 32, 1), padding='same'))
    model.add(LeakyReLU(0.1))
    model.add(Conv2D(kernel_size=(3, 3), filters=32, padding='same'))
    model.add(LeakyReLU(0.1))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Conv2D(kernel_size=(3, 3), filters=32, padding='same'))
    model.add(LeakyReLU(0.1))
    model.add(Conv2D(kernel_size=(3, 3), filters=64, padding='same'))
    model.add(LeakyReLU(0.1))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(256))
    model.add(LeakyReLU(0.1))
    model.add(Dropout(0.5))
    model.add(Dense(10))
    model.add(Activation('softmax'))
    return model
Compile part:
INIT_LR = 5e-3 # initial learning rate
BATCH_SIZE = 32
EPOCHS = 10
from tensorflow.keras import backend as K
K.clear_session()
model = make_model()
model.compile(
    loss='categorical_crossentropy',  # we train 10-way classification
    optimizer=tf.keras.optimizers.Adamax(lr=INIT_LR),  # for SGD
    metrics=['accuracy']  # report accuracy during training
)
def lr_scheduler(epoch):
    return INIT_LR * 0.9 ** epoch

# callback for printing of actual learning rate used by optimizer
class LrHistory(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs={}):
        print("Learning rate:", K.get_value(model.optimizer.lr))
Fitting:
model.fit(
    X_train.reshape(-1, 32, 32, 1), y_train,  # prepared data
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    callbacks=[keras.callbacks.LearningRateScheduler(lr_scheduler),
               LrHistory(),
               tfa.callbacks.TQDMProgressBar()],
    validation_data=(X_test, y_test),
    shuffle=True,
    verbose=0,
    initial_epoch=None or 0
)
My Data_trainX and Data_trainy shapes were shown as screenshots (not reproduced here). My input shape is compatible with the model's Conv2D layer's input shape.
I've looked at other questions about this and applied those solutions, but it didn't work.
Everything seems correct to me. Where am I going wrong?
While you are reshaping the training data X_train to fit the model specifications, you are not doing this with the validation data X_test.
Reshape X_test as well and it should work fine:
model.fit(
    X_train.reshape(-1, 32, 32, 1), y_train,
    ...
    validation_data=(X_test.reshape(-1, 32, 32, 1), y_test),  # <-- apply changes here
    ...
)
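As an alternative (a matter of taste, not required by the fix above), you could reshape both arrays once up front so every later call already sees 4D data; a minimal sketch:

X_train = X_train.reshape(-1, 32, 32, 1)  # add the channel axis once
X_test = X_test.reshape(-1, 32, 32, 1)

model.fit(X_train, y_train,
          batch_size=BATCH_SIZE,
          epochs=EPOCHS,
          validation_data=(X_test, y_test))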
This code was given to us by a teacher, so it should work right off the bat. However, I can't get it to run.
K.image_data_format()
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 25
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
print(model.summary())
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
I receive the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 32 but received input with shape [None, 32, 32, 3]
I'm just thrown off since no one else had this issue with the given code. I did have to change the first line from
K.set_image_dim_ordering('th')
to
K.image_data_format
because I was told that set_image_dim_ordering is not a known function of keras.backend.
Any ideas here? Could my change have introduced this error?
Update: if the data is in channels_last format, then change the input shape from input_shape=(3, 32, 32) to input_shape=(32, 32, 3) in
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
I found a reference to set_image_dim_ordering for Keras 1.2.2 here:
https://faroit.com/keras-docs/1.2.2/backend/
There, it mentions:
For 2D data (e.g. image), "tf" assumes (rows, cols, channels) while
"th" assumes (channels, rows, cols).
Since you are receiving input shape [None, 32, 32, 3], your data is in the tf format of (rows, cols, channels). Providing th therefore makes the input shapes mismatch: th corresponds to the Theano ordering rather than the TensorFlow one.
I could not find any reference to set_image_dim_ordering or image_dim_ordering in the latest tf.keras backend. Instead, the methods below get and set the 'channels_first' or 'channels_last' format:
https://www.tensorflow.org/api_docs/python/tf/keras/backend/image_data_format
https://www.tensorflow.org/api_docs/python/tf/keras/backend/set_image_data_format
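A minimal sketch of using those two functions with the tf.keras backend (just to check and set the ordering; not part of the original assignment code):

from tensorflow.keras import backend as K

print(K.image_data_format())              # typically 'channels_last' by default
K.set_image_data_format('channels_last')  # make the ordering explicit

# With channels_last data the first Conv2D layer would then take
# input_shape=(32, 32, 3) instead of (3, 32, 32).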
I am learning Keras through audio classification. I am implementing, with modifications, the code from https://github.com/deepsound-project/genre-recognition/blob/master/train_model.py.
The shapes of the dataset are:
X_train shape = (800, 32, 1)
y_train shape = (800, 10)
X_test shape = (200, 32, 1)
y_test shape = (200, 10)
The model
model = Sequential()
model.add(Conv1D(filters=256, kernel_size=5, input_shape=(32,1), activation="relu"))
model.add(BatchNormalization(momentum=0.9))
model.add(MaxPooling1D(2))
model.add(Dropout(0.5))
model.add(Conv1D(filters=256, kernel_size=5, activation="relu"))
model.add(BatchNormalization(momentum=0.9))
model.add(MaxPooling1D(2))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation="relu", ))
model.add(Dense(10, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer=Adam(lr=0.001),
    metrics=['accuracy'],
)
model.summary()
red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=2, factor=0.5, min_delta=0.01)
check = ModelCheckpoint(filepath=r'/content/drive/My Drive/Colab Notebooks/gen/cnn.hdf5', verbose=1, save_best_only=True)
History = model.fit(X_train,
                    y_train,
                    epochs=100,
                    # batch_size=512,
                    validation_data=(X_test, y_test),
                    verbose=2,
                    callbacks=[check, red_lr],
                    shuffle=True)
The accuracy and loss graphs were attached as images (not reproduced here).
I do not understand why the val_acc stays around 70%. I tried modifying the model architecture, including the optimizer, but there was no improvement.
Also, is it a problem to have a large gap between loss and val_loss?
How can I push the accuracy above 80%? Any help is appreciated.
Thank you
I found it: I used the concatenate function from Keras to concatenate all the convolution layers, and it gives the best performance.
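Since that answer is terse, here is a minimal sketch of what concatenating convolution layers can look like with the Keras functional API; the number of branches and the kernel sizes are hypothetical, not the poster's actual model:

from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(32, 1))
branches = []
for k in (3, 5, 7):  # parallel Conv1D branches with different kernel sizes
    x = layers.Conv1D(128, k, padding='same', activation='relu')(inputs)
    x = layers.MaxPooling1D(2)(x)
    branches.append(x)

x = layers.Concatenate()(branches)  # merge the branch outputs along the channel axis
x = layers.Flatten()(x)
x = layers.Dense(128, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)

model = Model(inputs, outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])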
I was trying to run a 10-fold cross-validation on my dataset. I had reshaped my data before training as follows:
data = data.reshape(500,1,1028,1)
data_y = np_utils.to_categorical(data_y, 3)
After this I defined my model:
for train, test in kf.split(data):
    fold += 1
    print("Fold #{}".format(fold))

    x_train = data[train]
    y_train = data_y[train]
    x_test = data[test]
    y_test = data_y[test]
    print(x_train.shape)

    model.add(Conv2D(32, (1, 3), input_shape=(1, 1028, 1)))
    model.add(BatchNormalization(axis=-1))
    model.add(Activation('relu'))
    #model.add(MaxPooling2D(pool_size=(1,2)))
    model.add(Conv2D(34, (1, 4)))
    model.add(BatchNormalization(axis=-1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(1, 2)))
    model.add(Conv2D(64, (1, 3)))
    model.add(BatchNormalization(axis=-1))
    model.add(Activation('relu'))
    #model.add(MaxPooling2D(pool_size=(1,2)))
    model.add(Conv2D(64, (1, 4)))
    model.add(BatchNormalization(axis=-1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(1, 2)))
    model.add(Flatten())

    # fully connected for new model
    model.add(Dense(550))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.25))
    model.add(Dense(250))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.25))
    model.add(Dense(100))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.25))
    model.add(Dense(25))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.25))
    model.add(Dense(3))
    model.add(Activation('softmax'))

    model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
    model.fit(x_train.reshape(450, 1, 1028, 1), y_train,
              batch_size=5,
              epochs=1,
              verbose=1,
              validation_data=(x_test, y_test))

    pred = model.predict(x_test)
    oos_y.append(y_test)
    pred = np.argmax(pred, axis=1)  # raw probabilities to chosen class (highest probability)
    oos_pred.append(pred)

    # Measure this fold's accuracy
    y_compare = np.argmax(y_test, axis=1)  # for accuracy calculation
    score = metrics.accuracy_score(y_compare, pred)
    print("Fold score (accuracy): {}".format(score))
The problem is that when I run my code, it runs properly for fold 1, but for fold 2 it gives me the following error:
ValueError: Input 0 is incompatible with layer conv2d_5: expected ndim=4, found ndim=2
When I checked the dimensions of x_train, they were (450, 1, 1028, 1).
I am not sure what the error is.
You are adding model layers inside the loop over and over. The error is produced when, on the second iteration of the loop, you try to add a convolutional layer after the softmax activation layer (the last layer from the first iteration). After careful inspection I've come to the following solution to your question.
First split the dataset into train and test
for train_index, test_index in kf.split(data):
    X_train, X_test = data[train_index], data[test_index]
    y_train, y_test = data_y[train_index], data_y[test_index]
Then add the layers to the model outside of the loop:
model.add(Conv2D(32, (1, 3),input_shape=(1,1028,1)))
model.add(BatchNormalization(axis=-1))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(1,2)))
model.add(Conv2D(34, (1, 4)))
model.add(BatchNormalization(axis=-1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(1,2)))
# ... The rest of the code
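To make the suggested structure concrete, here is a minimal sketch of how the pieces could fit together, with the model built and compiled once outside the loop and only the per-fold split and training inside it (the compile/fit arguments are taken from the question):

# ... model.add(...) layers as above, then:
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])

fold = 0
for train_index, test_index in kf.split(data):
    fold += 1
    print("Fold #{}".format(fold))
    X_train, X_test = data[train_index], data[test_index]
    y_train, y_test = data_y[train_index], data_y[test_index]

    # Train and evaluate on this fold's split.
    model.fit(X_train, y_train,
              batch_size=5,
              epochs=1,
              verbose=1,
              validation_data=(X_test, y_test))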
I'm experimenting with geometric shape classification. My dataset consists of 100x100 px thresholded black-and-white images of squares, circles and triangles: 3000 in total, 1000 per shape. They look like this (example images not reproduced here).
But I got them as a CSV file, where each row is the one-dimensional representation of an image and the last column is the label.
I used the MLP from sklearn to build a classifier. It performed well: almost 99% accuracy.
df = pd.read_csv("img_data.csv", sep=";")
df = df.sample(frac=1) # shuffling the whole dataset
X = df.drop('label', axis=1) # Because 'label' is the column of label
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
clf = MLPClassifier(solver='adam', activation="relu", alpha=1e-5, hidden_layer_sizes=(1000,), random_state=1, verbose=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print('accuracy',accuracy_score(y_test, y_pred))
Then I wanted to try a CNN. For that I used Keras with the TensorFlow backend. But accuracy here couldn't get above 92%, even after 20 epochs. Here's my code:
df = pd.read_csv("img_data.csv", sep=";")
df = df.sample(frac=1) # shuffling the whole dataset
X = df.drop('label', axis=1) # Because 'label' is the column of label
y = df['label']
X=X.as_matrix()
X = np.reshape(X, (-1, 100, 100, 1)) #made 1d to 2d
a = list(y)
label_binarizer = sklearn.preprocessing.LabelBinarizer()
label_binarizer.fit(range(max(a)))
y = label_binarizer.transform(a) # encoding one hot for labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
model = Sequential()
model.add(Conv2D(32, 3, activation='relu', input_shape=[100, 100, 1]))
model.add(MaxPool2D())
model.add(BatchNormalization())
model.add(Conv2D(64, 3, activation='relu'))
model.add(MaxPool2D())
model.add(BatchNormalization())
model.add(Conv2D(128, 3, activation='relu'))
model.add(MaxPool2D())
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
epochs = 20
model.fit(X_train, y_train,
validation_data=(X_test, y_test),
epochs=epochs, batch_size=64, verbose=1)
This seems to be a very simple problem. There is very little structure inside the data, so I think you could try to reduce the depth of the neural network by removing the last convolution and max-pooling layers. Instead, increase the number of nodes in the fully connected layer, like this:
model = Sequential()
model.add(Conv2D(32, 3, activation='relu', input_shape=[100, 100, 1]))
model.add(MaxPool2D())
model.add(BatchNormalization())
model.add(Conv2D(64, 3, activation='relu'))
model.add(MaxPool2D())
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(1000, activation='relu'))
model.add(Dense(3, activation='softmax'))
You could also try some image augmentation techniques, like shifting and rotating, to enlarge your dataset; a sketch follows below. Then I would expect the convnet to outperform the standard MLP.
Best
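A minimal sketch of the augmentation idea mentioned above (the rotation and shift ranges are assumptions to tune, not values from the answer):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Small geometric perturbations keep the shapes recognizable while adding variety.
datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1)

# With tf.keras 2.x, model.fit accepts the generator directly
# (older standalone Keras used fit_generator instead).
model.fit(datagen.flow(X_train, y_train, batch_size=64),
          validation_data=(X_test, y_test),
          epochs=epochs)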