CNN with keras, accuracy remains constant and does not improve - python

I am trying to train my model by fine-tuning a pretrained model (VGGFace). My model has 12 classes, with 1774 training images and 313 validation images; each class has around 150 images.
So far I have been able to achieve a maximum accuracy of around 80% with the following script in Keras:
from keras import models, optimizers
from keras.models import Model
from keras.layers import Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras_vggface.vggface import VGGFace

img_width, img_height = 224, 224
vggface = VGGFace(model='resnet50', include_top=False, input_shape=(img_width, img_height, 3))
last_layer = vggface.get_layer('avg_pool').output
x = Flatten(name='flatten')(last_layer)
out = Dense(12, activation='softmax', name='classifier')(x)
custom_vgg_model = Model(vggface.input, out)
# Wrap the functional model in a Sequential container (redundant but harmless)
model = models.Sequential()
model.add(custom_vgg_model)
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1./255)
# Change the batchsize according to your system RAM
train_batchsize = 16
val_batchsize = 16
train_generator = train_datagen.flow_from_directory(
    train_data_path,
    target_size=(img_width, img_height),
    batch_size=train_batchsize,
    class_mode='categorical')
validation_generator = validation_datagen.flow_from_directory(
    validation_data_path,
    target_size=(img_width, img_height),
    batch_size=val_batchsize,
    class_mode='categorical',
    shuffle=True)
# Compile the model
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(lr=1e-3),
              metrics=['acc'])
# Train the model (integer division so the step counts are whole numbers)
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // validation_generator.batch_size,
    verbose=1)
So far I have tried:
- Increasing the number of epochs up to 100.
- Changing the learning rate.
- Changing the optimizer to RMSprop, but that gave even worse results.
I was also advised to add more FC layers with dropout and batch normalization. I did that, but the validation accuracy stayed around 79%. Here's my change for that:
vggface = VGGFace(model='resnet50', include_top=False, input_shape=(img_width, img_height, 3))
#vgg_model = VGGFace(include_top=False, input_shape=(224, 224, 3))
last_layer = vggface.get_layer('avg_pool').output
x = Flatten(name='flatten')(last_layer)
xx = Dense(256, activation = 'relu')(x)
x1 = BatchNormalization()(xx)
x2 = Dropout(0.3)(xx)
y = Dense(256, activation = 'relu')(x2)
yy = BatchNormalization()(y)
y1 = Dropout(0.3)(y)
z = Dense(256, activation = 'relu')(y1)
zz = BatchNormalization()(z)
z1 = Dropout(0.6)(zz)
x3 = Dense(12, activation='softmax', name='classifier')(z1)
custom_vgg_model = Model(vggface.input, x3)
I have now made my final activation softmax, as suggested by SymbolixAU. The validation accuracy is still around 81%, while the training accuracy gets close to 99%.
What am I doing wrong?

Be careful with your connections. In the first two blocks the BatchNormalization is not connected to the Dropout. Change the input of the first two Dropout layers:
xx = Dense(256, activation = 'relu')(x)
x1 = BatchNormalization()(xx)
x2 = Dropout(0.3)(x1)
y = Dense(256, activation = 'relu')(x2)
yy = BatchNormalization()(y)
y1 = Dropout(0.3)(yy)
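Putting this fix together with the question's third block, the fully wired head looks like this (layer sizes and dropout rates are the question's own):
x = Flatten(name='flatten')(last_layer)
xx = Dense(256, activation='relu')(x)
x1 = BatchNormalization()(xx)
x2 = Dropout(0.3)(x1)   # now fed by the BatchNormalization output
y = Dense(256, activation='relu')(x2)
yy = BatchNormalization()(y)
y1 = Dropout(0.3)(yy)   # likewise
z = Dense(256, activation='relu')(y1)
zz = BatchNormalization()(z)
z1 = Dropout(0.6)(zz)
x3 = Dense(12, activation='softmax', name='classifier')(z1)
custom_vgg_model = Model(vggface.input, x3)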
The values you give imply that your network is overfitting. The batch normalization, or adding more dropout, might help. But given the small number of images, you should really use image augmentation to increase the effective number of training images.

Related

Model Accuracy is High but Val_Accuracy is low

I'm trying to improve my validation accuracy, which is very low. I have tried changing the batch_size and the number of images used for validation and training, and added extra dense layers, but none of it has worked. The dataset I'm using had not been split into training and validation sets, which I have done using partitioning. I have given the values for the samples, as you can see below, and have tried to increase VALIDATION_SAMPLES, but when I do, my cluster keeps crashing.
TRAINING_SAMPLES = 10000
VALIDATION_SAMPLES = 2000
TEST_SAMPLES = 2000
IMG_WIDTH = 178
IMG_HEIGHT = 218
BATCH_SIZE = 32
NUM_EPOCHS = 20
import cv2
import numpy as np
import pandas as pd
from keras.utils import np_utils

def generate_df(partition, attr, num_samples):
    df_ = df_par_attr[(df_par_attr['partition'] == partition)
                      & (df_par_attr[attr] == 0)].sample(int(num_samples/2))
    df_ = pd.concat([df_,
                     df_par_attr[(df_par_attr['partition'] == partition)
                                 & (df_par_attr[attr] == 1)].sample(int(num_samples/2))])
    # for Training and Validation
    if partition != 2:
        x_ = np.array([load_reshape_img(images_folder + fname) for fname in df_.index])
        x_ = x_.reshape(x_.shape[0], 218, 178, 3)
        y_ = np_utils.to_categorical(df_[attr], 2)
    # for Test
    else:
        x_ = []
        y_ = []
        for index, target in df_.iterrows():
            im = cv2.imread(images_folder + index)
            im = cv2.resize(cv2.cvtColor(im, cv2.COLOR_BGR2RGB),
                            (IMG_WIDTH, IMG_HEIGHT)).astype(np.float32) / 255.0
            im = np.expand_dims(im, axis=0)
            x_.append(im)
            y_.append(target[attr])
    return x_, y_
My training model is built after the partitioning, as you can see below:
# Train data
x_train, y_train = generate_df(0, 'Male', TRAINING_SAMPLES)
# Train - Data Preparation - Data Augmentation with generators
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    rotation_range=30,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)
train_datagen.fit(x_train)
train_generator = train_datagen.flow(
    x_train, y_train,
    batch_size=BATCH_SIZE,
)
The same also goes for the validation:
# Validation Data
x_valid, y_valid = generate_df(1, 'Male', VALIDATION_SAMPLES)
# Validation - Data Preparation - Data Augmentation with generators
valid_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
)
valid_datagen.fit(x_valid)
validation_generator = valid_datagen.flow(
    x_valid, y_valid,
)
I tried playing around with the layers, but was told that it wouldn't really affect the val_accuracy:
x = inc_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation="relu")(x)
x = Dropout(0.5)(x)
x = Dense(256, activation="relu")(x)
predictions = Dense(2, activation="softmax")(x)
I tried using the 'adam' optimizer, but it made no difference compared to SGD:
model_.compile(optimizer=SGD(lr=0.0001, momentum=0.9),
               loss='categorical_crossentropy',
               metrics=['accuracy'])
hist = model_.fit_generator(train_generator,
                            validation_data=(x_valid, y_valid),
                            steps_per_epoch=TRAINING_SAMPLES // BATCH_SIZE,
                            epochs=NUM_EPOCHS,
                            callbacks=[checkpointer],
                            verbose=1)
Whoever told you that modifying the model won't affect validation accuracy is, in most cases, dead wrong. The problem with your model is that it is not deep enough to extract the features of the images. Below is code I have used on hundreds of models; it has proved very accurate with respect to achieving low training and validation loss and avoiding overfitting.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense, Activation, Dropout, Conv2D, MaxPooling2D, BatchNormalization, Flatten
from tensorflow.keras.optimizers import Adam, Adamax
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model, load_model

def make_model(img_size, class_count, lr=.001, trainable=True):
    img_shape = (img_size[0], img_size[1], 3)
    model_name = 'EfficientNetB3'
    base_model = tf.keras.applications.efficientnet.EfficientNetB3(
        include_top=False, weights="imagenet", input_shape=img_shape, pooling='max')
    base_model.trainable = trainable
    x = base_model.output
    x = keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
    x = Dense(256, kernel_regularizer=regularizers.l2(l=0.016),
              activity_regularizer=regularizers.l1(0.006),
              bias_regularizer=regularizers.l1(0.006), activation='relu')(x)
    x = Dropout(rate=.45, seed=123)(x)
    output = Dense(class_count, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=output)
    model.compile(Adamax(learning_rate=lr), loss='categorical_crossentropy', metrics=['accuracy'])
    return model, base_model  # return the base_model so the callback can control its training state
TRAINING_SAMPLES = 10000
VALIDATION_SAMPLES = 2000
TEST_SAMPLES = 2000
IMG_WIDTH = 178
IMG_HEIGHT = 218
BATCH_SIZE = 32
NUM_EPOCHS = 20
img_size=(IMG_HEIGHT,IMG_WIDTH)
class_count=2
model, base_model=make_model(img_size, class_count, lr=.001, trainable=True)
I also recommend that you use two Keras callbacks. One controls the learning rate (see the ReduceLROnPlateau documentation); the other controls early stopping and restores the model weights from the epoch with the lowest validation loss (see the EarlyStopping documentation).
My recommended code for these callbacks is shown below:
rlronp=tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2,verbose=1)
estop=tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=4, verbose=1,restore_best_weights=True)
callbacks=[rlronp, estop]
Put the above code before calling model.fit, and in model.fit set the parameter callbacks=callbacks.
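For instance, a minimal sketch of the fit call with these callbacks attached, reusing the question's data (an assumption; adapt the names to your own code):
history = model.fit(train_generator,
                    validation_data=(x_valid, y_valid),
                    epochs=NUM_EPOCHS,
                    callbacks=callbacks,  # rlronp adjusts the LR; estop stops early and restores the best weights
                    verbose=1)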

Low accuracy after testing hyperparameters

I am using the VGG19 pre-trained model with ImageNet weights to do transfer learning on 4 classes with Keras. However, I do not know if there really is a difference between these 4 classes; I'd like to find out. The goal is to discover whether these classes make sense or whether there is no real difference between these image classes.
These classes are made up of abstract paintings from the same individual.
I tried different models with different hyperparameters (Adam/SGD, learning rate, dropout, L2 regularization, FC layer size, batch size, unfreezing, and also weighted classes, as the data is a little unbalanced):
batch_size = 32
unfreeze = 17
dropout = 0.2
fc = 256
lr = 1e-4
l2_reg = 0.1
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode='nearest'
)
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory(
    'C:/Users/train',
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    'C:/Users/test',
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical')
base_model = VGG19(
    weights="imagenet",
    input_shape=(224, 224, 3),
    include_top=False,
)
last_layer = base_model.get_layer('block5_pool')
last_output = last_layer.output
x = Flatten()(last_output)
x = GlobalMaxPooling2D()(last_output)
x = Dense(fc)(x)
x = Activation('relu')(x)
x = BatchNormalization()(x)
x = Dropout(dropout)(x)
x = Dense(fc, activation='relu', kernel_regularizer = regularizers.l2(l2=l2_reg))(x)
x = layers.Dense(4, activation='softmax')(x)
model = Model(base_model.input, x)
for layer in model.layers:
    layer.trainable = False
for layer in model.layers[unfreeze:]:
    layer.trainable = True
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(learning_rate=lr),
              metrics=['accuracy'])
class_weights = class_weight.compute_class_weight('balanced',
                                                  np.unique(train_generator.classes),
                                                  train_generator.classes)
class_weights_dict = dict(enumerate(class_weights))
history = model.fit(train_generator, epochs=epochs, validation_data=validation_generator,
                    validation_steps=392//batch_size,
                    steps_per_epoch=907//batch_size,
                    class_weight=class_weights_dict)  # pass the computed weights, otherwise they have no effect
plot_model_history(history)
I also did feature extraction at every layer and fed the extracted features to an SVM (one per layer); the accuracy of these SVMs was about 40%, which is higher than this model's (30 to 33%). So I may be wrong, but I think this model could achieve a higher accuracy.
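For reference, a minimal sketch of that per-layer feature-extraction-plus-SVM probe, assuming the base_model and train_generator defined above (the probed layer name and SVC settings are illustrative, not the exact ones used):
import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.models import Model

# Probe one layer: truncate the network at that layer and use its output as features
feat_model = Model(base_model.input, base_model.get_layer('block5_pool').output)
X, y = [], []
for _ in range(len(train_generator)):       # one pass over the training data
    imgs, labels = next(train_generator)
    feats = feat_model.predict(imgs)
    X.append(feats.reshape(len(imgs), -1))  # flatten the feature maps per image
    y.append(labels.argmax(axis=1))         # one-hot -> class index
X, y = np.concatenate(X), np.concatenate(y)
svm = SVC(kernel='linear').fit(X, y)        # in practice, evaluate on held-out features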
I have a few questions about my model.
First, is my code correct, or am I doing something wrong?
If the validation accuracy for a 4-class classification task is ~30% (assuming the data are balanced or weighted), how likely is it that other hyperparameters could improve it to something significantly better?
What else can I try to get better accuracy?
When and how can I conclude that these classes do not make sense?

Tensorflow: loss and accuracy stay flat training CNN on image classification

I copied/pasted this TensorFlow tutorial into a Jupyter notebook. (As of this writing they have changed the tutorial to the flower dataset instead of the dogs-vs-cats one, but the question still applies.)
https://www.tensorflow.org/tutorials/images/classification
The first part (without augmentation) runs fine and I get similar results.
But with data augmentation, my loss and accuracy stay flat across all epochs. I've already checked these posts on SO:
Keras accuracy does not change
How to fix flatlined accuracy and NaN loss in tensorflow image classification
Tensorflow: loss decreasing, but accuracy stable
None of them applied: the dataset is a standard one, so I don't have a corrupted-data problem, and I printed a couple of the augmented images and they look fine (see below).
I've tried adding more fully connected layers to increase the model capacity, and dropout to limit overfitting, but nothing changed; the loss and accuracy curves stay flat.
Any ideas as to why? Have I missed something in the code?
I know training a DL model is a lot of trial and error, but I'm sure there must be some logic or intuition beyond randomly turning the knobs until something happens.
Thanks!
Source Data:
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
Params:
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
Preprocessing stage:
image_gen = ImageDataGenerator(rescale=1./255,
                               rotation_range=20,
                               width_shift_range=0.15,
                               height_shift_range=0.15,
                               horizontal_flip=True,
                               zoom_range=0.2)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][i] for i in range(5)]
plotImages(augmented_images)
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
                                                 directory=validation_dir,
                                                 target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                 class_mode='binary')
Model:
model_new = Sequential([
    Conv2D(16, 2, padding='same', activation='relu',
           input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 2, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 2, padding='same', activation='relu'),
    MaxPooling2D(),
    Dropout(0.2),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])
model_new.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])
model_new.summary()
history = model_new.fit(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)
As suggested by @today, class_mode='binary' was missing from the training data generator: with the default class_mode='categorical', the generator yields one-hot labels of shape (batch, 2), which don't match the model's single-logit output trained with BinaryCrossentropy(from_logits=True).
Now the model is able to train properly.
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_HEIGHT, IMG_WIDTH),
                                               class_mode='binary')

Deploying a CNN: High training and test accuracy but low prediction accuracy

Just starting out in ML; I created my first CNN to detect the orientation of an image of a face. I got the training and testing accuracy up to around 96-99% over 2 different sets of 1000 pictures (128x128 RGB). However, when I predict an image from the test set on its own, the model rarely predicts correctly. I think there must be a difference between the way I load data into the model during training/testing and the way I do it for prediction. Here is how I load the data into the model to train and test:
datagen = ImageDataGenerator()
train_it = datagen.flow_from_directory('twoThousandTransformed/', class_mode='categorical', batch_size=32, color_mode="rgb", target_size=(64,64))
val_it = datagen.flow_from_directory('validation/', class_mode='categorical', batch_size=32, color_mode="rgb", target_size=(64,64))
test_it = datagen.flow_from_directory('test/', class_mode='categorical', batch_size=32, color_mode='rgb', target_size=(64,64))
And here is how I load an image to make a prediction:
image_path='inputPicture/02001.png'
image = tf.keras.preprocessing.image.load_img(image_path)
input_arr = keras.preprocessing.image.img_to_array(image)
reshaped_image = np.resize(input_arr, (64,64,3))
input_arr = np.array([reshaped_image])
predictions = model.predict(input_arr)
print(predictions)
classes = np.argmax(predictions, axis = 1)
print(classes)
There must be some difference between the way the ImageDataGenerator handles the images and the way I am doing it for prediction. Can y'all help a noobie out? Thanks!
Edit: Below is my model
imageInput = Input(shape=(64,64,3))
conv1 = Conv2D(128, kernel_size=16, activation='relu')(imageInput)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(64, kernel_size=12, activation='relu')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(64, kernel_size=4, activation='relu')(pool2)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
flat = Flatten()(pool3)
hidden1 = Dense(16, activation='relu')(flat)
hidden2 = Dense(16, activation='relu')(hidden1)
hidden3 = Dense(10, activation='relu')(hidden2)
output = Dense(4, activation='softmax')(hidden3)
model = Model(inputs=imageInput, outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(train_it, steps_per_epoch=16, validation_data=val_it, validation_steps=8, epochs=25)
print('here we go!')
_, accuracy = model.evaluate(test_it)
print('Accuracy: %.2f' % (accuracy*100))
One thing you can try is to replicate the chosen image so it resembles the batch size with which you trained the model. Also, given such high training accuracy, your model is probably overfitting, so try adding dropout or reducing the number of layers in your network if the first suggestion doesn't work out.
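A minimal sketch of the replication idea, assuming the batch size of 32 used in training and the input_arr from the question's prediction code:
import numpy as np

# Tile the single preprocessed image into a batch of 32 identical samples
batch = np.repeat(input_arr, 32, axis=0)   # shape: (32, 64, 64, 3)
predictions = model.predict(batch)
print(np.argmax(predictions[0]))           # every row holds the same prediction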

how to fix overfitting, or where is the fault in my code

I'm using the pre-trained VGG16 model for a 100-class classification problem. The dataset is Tiny ImageNet; each class has 500 images, and I randomly chose 100 classes from Tiny ImageNet for my training (400) and validation (100) data. So I changed the input_shape of VGG16 for the 32x32 size.
The results always look like overfitting: training accuracy is high, but val_acc stays stuck at almost 40%.
I have used dropout, L2 regularization, data augmentation, ..., but val_acc is still stuck at almost 40%.
What can I do about the overfitting, or how should I correct my code?
Thanks
img_width, img_height = 32, 32
epochs = 50
learning_rate = 1e-4
steps_per_epoch = 2500
train_path = './training_set_100A/'
valid_path = './testing_set_100A/'
test_path = './testing_set_100A/'
class_num = 100
train_batches = ImageDataGenerator(rescale=1. / 255,
                                   rotation_range=20, zoom_range=0.15,
                                   width_shift_range=0.2, height_shift_range=0.2,
                                   shear_range=0.15,
                                   horizontal_flip=True, fill_mode="nearest"
                                   ).flow_from_directory(
    train_path, target_size=(img_width, img_height),
    batch_size=32, shuffle=True)
valid_batches = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    valid_path, target_size=(img_width, img_height),
    batch_size=10, shuffle=False)
test_batches = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    test_path, target_size=(img_width, img_height),
    batch_size=10, shuffle=False)
seqmodel = Sequential()
VGG16Model = VGG16(weights='imagenet', include_top=False)
input = Input(shape=(img_width, img_height, 3), name='image_input')
output_vgg16_conv = VGG16Model(input)
x = Flatten()(output_vgg16_conv)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(class_num, activation='softmax')(x)
funcmodel = Model([input], [x])
funcmodel.summary()
funcmodel.compile(optimizer=SGD(lr=learning_rate, momentum=0.9),
                  loss='categorical_crossentropy', metrics=['accuracy'])
train_history = funcmodel.fit_generator(train_batches,
                                        steps_per_epoch=steps_per_epoch, validation_data=valid_batches,
                                        validation_steps=1000, epochs=epochs, verbose=1)
It seems you followed examples of implementing this from other sites, but your training set is too small to train the 2 new Dense layers of 4096 units each.
You have to either lower the size of your layers (see the sketch below) or add a lot more samples, e.g. 20,000 instead of 500.
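For instance, a sketch of a smaller head, reusing the question's variables (the 256-unit size is an illustrative choice, not a prescribed value):
x = Flatten()(output_vgg16_conv)
x = Dense(256, activation='relu')(x)   # far fewer parameters than two 4096-unit layers
x = Dropout(0.5)(x)
x = Dense(class_num, activation='softmax')(x)
funcmodel = Model([input], [x])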
1) 50 epochs may be too many; try running fewer epochs.
2) Check your validation accuracy after every epoch.
3) VGG is too deep for your small (32x32) image data. Try building your own network with fewer parameters, or try LeNet (see the sketch below).
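A minimal sketch of such a LeNet-style network for 32x32 inputs (the filter counts and layer sizes are the classic LeNet-5 choices, adapted here as an assumption):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.optimizers import SGD

lenet = Sequential([
    Conv2D(6, (5, 5), activation='relu', input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(16, (5, 5), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(120, activation='relu'),
    Dense(84, activation='relu'),
    Dense(class_num, activation='softmax')   # class_num = 100, as above
])
lenet.compile(optimizer=SGD(lr=learning_rate, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])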
