How Can I Increase My CNN Model's Accuracy - python

I built a CNN model that classifies facial moods as happy, sad, energetic, and neutral faces. I used the VGG16 pre-trained model and froze all of its layers. After 50 epochs of training, my model's test accuracy is 0.65 and its validation loss is about 0.8.
My train folder has 16,000 (4 x 4,000) RGB images, my validation folder has 2,000 (4 x 500), and my test folder has 4,000 (4 x 1,000).
1) What is your suggestion to increase the model's accuracy?
2) I have tried to do some predictions with my model, and the predicted class is always the same. What could cause this problem?
What I Have Tried So Far
Added a dropout layer (0.5)
Added a Dense(256, relu) layer before the last layer
Shuffled the training and validation data
Decreased the learning rate to 1e-5
But I could not increase the validation and test accuracy.
My Code
import tensorflow

train_src = "/content/drive/MyDrive/Affectnet/train_class/"
val_src = "/content/drive/MyDrive/Affectnet/val_class/"
test_src = "/content/drive/MyDrive/Affectnet/test_classs/"

train_datagen = tensorflow.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)
train_generator = train_datagen.flow_from_directory(
    train_src,
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical',
    shuffle=True
)
validation_datagen = tensorflow.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255
)
validation_generator = validation_datagen.flow_from_directory(
    val_src,
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical',
    shuffle=True
)

# VGG16 convolutional base with frozen weights.
conv_base = tensorflow.keras.applications.VGG16(weights='imagenet',
                                                include_top=False,
                                                input_shape=(224, 224, 3))
for layer in conv_base.layers:
    layer.trainable = False

model = tensorflow.keras.models.Sequential()
# VGG16 is added as the convolutional base.
model.add(conv_base)
# Feature maps are flattened into a vector.
model.add(tensorflow.keras.layers.Flatten())
# Our classification head is added.
model.add(tensorflow.keras.layers.Dropout(0.5))
model.add(tensorflow.keras.layers.Dense(256, activation='relu'))
model.add(tensorflow.keras.layers.Dense(4, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=tensorflow.keras.optimizers.Adam(learning_rate=1e-5),
              metrics=['acc'])

# Note: fit_generator is deprecated in recent TF; model.fit accepts generators directly.
history = model.fit_generator(
    train_generator,
    epochs=50,
    steps_per_epoch=100,
    validation_data=validation_generator,
    validation_steps=5,
    workers=8
)
[Plot: training and validation loss and accuracy curves]

Well, a few things. For the training set you say you have 16,000 images. However, with a batch size of 32 and steps_per_epoch=100, in any given epoch you are only training on 3,200 images. Similarly, you have 2,000 validation images, but with a batch size of 32 and validation_steps=5 you are only validating on 5 x 32 = 160 images.
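For reference, a minimal sketch of step counts that actually cover the full datasets, derived from the generators above (flow_from_directory iterators expose a .samples attribute):

import math

# Cover every image once per epoch: ceil(samples / batch_size).
steps_per_epoch = math.ceil(train_generator.samples / 32)        # 16,000 / 32 = 500, not 100
validation_steps = math.ceil(validation_generator.samples / 32)  # 2,000 / 32 = 63, not 5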
Now VGG is an OK model, but I don't use it because it is very large, which increases the training time significantly, and there are other transfer-learning models that are smaller and even more accurate. I suggest you try EfficientNetB3. Use the code
conv_base = tensorflow.keras.applications.EfficientNetB3(weights='imagenet',
                                                         include_top=False,
                                                         input_shape=(224, 224, 3),
                                                         pooling='max')
With pooling='max' you can eliminate the Flatten layer. Also, EfficientNet models expect pixels in the range 0 to 255, so remove the rescale=1./255 in your generators.
The next thing to do is to use an adjustable learning rate. This can be done with Keras callbacks; see the Keras callbacks documentation, in particular the ReduceLROnPlateau callback. Set it up to monitor validation loss. My suggested code for that is below:
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=1, verbose=1)
I also recommend you use the EarlyStopping callback; see its documentation in the Keras docs. My recommended code for that is shown below:
estop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=4, verbose=1,
                                         restore_best_weights=True)
Now in model.fit include
callbacks=[rlronp, estop]
Set your learning rate to 0.001 and set epochs=50. The estop callback, if tripped, will return your model loaded with the weights from the epoch with the lowest validation loss. I notice you have the code
for layer in conv_base.layers:
layer.trainable = False
I know the tutorials tell you to do that, but I get better results leaving it trainable, and I have done this on hundreds of models.
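Putting these suggestions together, a minimal sketch of the revised setup (the head and the image counts are carried over from the question; the hyperparameter values are the ones suggested above, not tuned results):

import math
import tensorflow as tf

# NOTE: the generators should drop rescale=1./255, since EfficientNet
# expects raw 0-255 pixel values.
conv_base = tf.keras.applications.EfficientNetB3(weights='imagenet',
                                                 include_top=False,
                                                 input_shape=(224, 224, 3),
                                                 pooling='max')
# Left trainable on purpose, per the advice above.

model = tf.keras.models.Sequential([
    conv_base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(4, activation='softmax'),
])
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics=['acc'])

rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                                              patience=1, verbose=1)
estop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4,
                                         verbose=1, restore_best_weights=True)

history = model.fit(train_generator,
                    epochs=50,
                    steps_per_epoch=math.ceil(16000 / 32),   # cover all 16,000 images
                    validation_data=validation_generator,
                    validation_steps=math.ceil(2000 / 32),   # cover all 2,000 images
                    callbacks=[rlronp, estop])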

Related

How to reduce validation loss and improve the test result in a CNN model

How may I improve the validation accuracy? Besides that, my test accuracy is also low. I am trying to do categorical image classification on pictures of weed detection in an agricultural field.
Dataset: 5,539 images in 12 classes, split into 70% training (3,870 images), 15% validation (837 images), and 15% test (832 images).
# Data augmentation by applying Augmentor
import Augmentor

train_aug = Augmentor.Pipeline(source_directory="/content/dataset/train",
                               output_directory="/content/dataset/train")
# Defining augmentation parameters and generating 17600 samples
train_aug.flip_left_right(probability=0.4)
train_aug.flip_top_bottom(probability=0.8)
train_aug.rotate(probability=0.5, max_left_rotation=5, max_right_rotation=10)
train_aug.skew(0.4, 0.5)
train_aug.zoom(probability=0.2, min_factor=1.1, max_factor=1.5)
train_aug.sample(17600)
import tensorflow as tf

def cnn_model():
    Model = tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(224, 224, 3)),
        tf.keras.layers.MaxPool2D(2, 2),
        tf.keras.layers.Conv2D(filters=96, kernel_size=(3, 3), activation='relu'),
        tf.keras.layers.MaxPool2D(2, 2),
        tf.keras.layers.Conv2D(filters=150, kernel_size=(3, 3), activation='relu'),
        tf.keras.layers.MaxPool2D(2, 2),
        tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
        tf.keras.layers.MaxPool2D(2, 2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(416, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(12, activation='softmax'),
    ])
    Model.summary()
    return Model

Model = cnn_model()

from tensorflow.keras.callbacks import ModelCheckpoint
# filepath, train_data, valid_data, BATCH_SIZE and NO_OF_EPOCHS are defined elsewhere.
checkpoint = ModelCheckpoint(filepath, monitor='accuracy', verbose=1,
                             save_best_only=False, save_weights_only=True, mode='auto')
callbacks_list = [checkpoint]

# compile() modifies the model in place and returns None, so don't assign its result.
Model.compile(tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])
History = Model.fit_generator(generator=train_data, steps_per_epoch=3333 // BATCH_SIZE,
                              epochs=NO_OF_EPOCHS, validation_data=valid_data,
                              validation_steps=1, callbacks=callbacks_list)
How may I increase my validation accuracy when my training accuracy is 98% and validation accuracy is 71%?
I would adjust the filter counts to 32, then 64, 128, 256. Then I would replace the Flatten layer with
tf.keras.layers.GlobalAveragePooling2D()
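A hedged sketch of the conv stack with those two changes applied (the single Dense(256) head is my own assumption, not from the original code):

import tensorflow as tf

def cnn_model():
    # Filters double at each block: 32 -> 64 -> 128 -> 256.
    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
        tf.keras.layers.MaxPool2D(2, 2),
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
        tf.keras.layers.MaxPool2D(2, 2),
        tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
        tf.keras.layers.MaxPool2D(2, 2),
        tf.keras.layers.Conv2D(256, (3, 3), activation='relu'),
        tf.keras.layers.MaxPool2D(2, 2),
        # GlobalAveragePooling2D replaces Flatten and greatly shrinks the dense head.
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(12, activation='softmax'),
    ])
    return model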
I would also remove the checkpoint callback and replace it with
es = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                      verbose=1, restore_best_weights=True)
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=1, verbose=1)
callback_list = [es, rlronp]
The EarlyStopping callback will monitor validation loss; if it fails to reduce for 3 consecutive epochs, it will halt training and restore the weights from the best epoch to the model. The ReduceLROnPlateau callback will monitor validation loss and reduce the learning rate by a factor of 0.5 if the loss does not reduce at the end of an epoch. Run this, and if it does not do much better, you can try a class_weight dictionary to compensate for the class imbalance. To build the dictionary, find the class that has the HIGHEST number of samples. Then the weight for each class is
weight for class = highest number of samples / samples in class
So create a dictionary of the form {class integer: weight}, as sketched below. Be careful to keep the order of the classes correct.
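A minimal sketch of building that dictionary by counting files per class folder (the helper function is illustrative and the path is taken from the question; the class order must match the generator's alphabetical class_indices):

import os

def make_class_weights(train_dir):
    # Folders sorted alphabetically, matching flow_from_directory's class order.
    classes = sorted(os.listdir(train_dir))
    counts = [len(os.listdir(os.path.join(train_dir, c))) for c in classes]
    highest = max(counts)
    # weight for class = highest number of samples / samples in class
    return {i: highest / count for i, count in enumerate(counts)}

class_weight = make_class_weights("/content/dataset/train")
# Then pass class_weight=class_weight to Model.fit / fit_generator.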
Also, to help with the imbalance you can try image augmentation. If you use ImageDataGenerator.flow_from_directory to read in your data, you can have the generator provide augmentation such as horizontal flips. If not, you can use the Keras augmentation layers directly in your model; see the Keras preprocessing-layers documentation and the sketch below.
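For the in-model route, a hedged sketch using the Keras preprocessing layers (available as tf.keras.layers.Random* in TF 2.6+; in earlier 2.x versions they live under tf.keras.layers.experimental.preprocessing):

import tensorflow as tf

# Augmentation runs only during training; at inference these layers are no-ops.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.2),
])
# Place `augment` as the first layer of the model, right after the input.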

Why is my validation accuracy stuck around 65% and how do I increase it?

I'm making an image classification CNN with 5 classes, each having 693 images with a width and height of 224 px, using VGG16, but my validation accuracy gets stuck around 60%-65% after 15-20 epochs.
I'm already using some data augmentation, batch normalization, and dropout, and I have frozen the first 5 layers, but I can't seem to increase my accuracy beyond 65%.
These are my own layers:
from keras import applications, optimizers
from keras.models import Sequential, Model
from keras.layers import Flatten, Dense, Dropout, BatchNormalization

img_rows, img_cols, img_channel = 224, 224, 3

base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(img_rows, img_cols, img_channel))
for layer in base_model.layers[:5]:
    layer.trainable = False

add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dropout(0.5))
add_model.add(Dense(512, activation='relu'))
add_model.add(BatchNormalization())
add_model.add(Dropout(0.5))
add_model.add(Dense(5, activation='softmax'))

model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizers.Adam(lr=0.0001),
              metrics=['accuracy'])
model.summary()
And this is my dataset with my model:
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

batch_size = 64
epochs = 25

# x_train, y_train, x_test, y_test are loaded elsewhere.
train_datagen = ImageDataGenerator(
    rotation_range=30,
    width_shift_range=.1,
    height_shift_range=.1,
    horizontal_flip=True)
train_datagen.fit(x_train)

history = model.fit_generator(
    train_datagen.flow(x_train, y_train, batch_size=batch_size),
    steps_per_epoch=x_train.shape[0] // batch_size,
    epochs=epochs,
    validation_data=(x_test, y_test),
    callbacks=[ModelCheckpoint('VGG16-transferlearning.model', monitor='val_acc', save_best_only=True)]
)
I want to get higher accuracy, because what I get now is just not enough, so any help or suggestions would be appreciated.
A few things you can try:
Reduce your batch size.
Choose another optimizer: RMSprop, SGD, etc.
Increase the learning rate, then use the ReduceLROnPlateau callback to decay it; see the sketch below.
But, as usual, it depends on the data you are using. Is it well balanced?
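A minimal sketch of the last two suggestions combined (the optimizer choice and the exact values are illustrative assumptions, not tuned settings):

from keras import optimizers
from keras.callbacks import ReduceLROnPlateau

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-3),  # or optimizers.SGD(lr=1e-2, momentum=0.9)
              metrics=['accuracy'])

# Halve the learning rate whenever validation loss stops improving.
rlronp = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, verbose=1)
# Append rlronp to the callbacks list passed to fit_generator.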

Keras VGG16 with flow_from_directory val_acc not rising

I use Keras and import the VGG16 network with ImageNet weights to classify male/female photos.
The structure of the directories is:
split_1/train/male/*.jpg
split_1/train/female/*.jpg
split_1/val/female/*.jpg
split_1/val/male/*.jpg
I tried most of the solutions I found on the internet, but none of them worked:
changing batch_size
changing optimizers
changing class_mode/loss function
setting every layer to trainable
copying every layer from VGG to my sequential
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras import applications
[...]
img_width, img_height = 224, 224
top_model_weights_path = "%s_retry2.h5" % split
train_data_dir = "%s/train" % split
validation_data_dir = "%s/val" % split
batch_size = 48
nb_train_samples = 4000
nb_validation_samples = (299 // batch_size) * batch_size
epochs = 5

def train_top_model():
    datagen = ImageDataGenerator(
        horizontal_flip=True,
        shear_range=0.2,
        rescale=1. / 255)
    vdatagen = ImageDataGenerator(rescale=1. / 255)
    traingen = datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='categorical',
        follow_links=True,
        shuffle=True)
    valgen = vdatagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='categorical',
        follow_links=True,
        shuffle=True)

    vgg_model = applications.VGG16(input_shape=(224, 224, 3), weights="imagenet", include_top=False)
    model = Sequential()
    model.add(vgg_model)
    model.add(Flatten())
    model.add(Dense(2, activation='softmax'))
    model.compile(optimizer="rmsprop", loss='categorical_crossentropy', metrics=['accuracy'])

    history = model.fit_generator(traingen,
                                  epochs=epochs,
                                  steps_per_epoch=nb_train_samples // batch_size,
                                  validation_data=valgen,
                                  validation_steps=nb_validation_samples // batch_size)
It reports the actual number of images, so it is finding the jpgs properly.
Validation accuracy stays "random" (~50%) during the entire training.
Try reducing the learning rate; it may be that your model is overshooting the minima every time and hence is not able to converge.
If no amount of hyperparameter tuning works, then you need to fix your data, but I think male/female classification data shouldn't be that tough for a CNN model with pre-trained weights to learn.
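For example, a hedged sketch of recompiling with an explicit, smaller learning rate instead of rmsprop's default 0.001 (the value is illustrative):

from keras import optimizers

model.compile(optimizer=optimizers.RMSprop(lr=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])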
How many samples do you have per class?
It seems that you don't have enough data to fine-tune the huge number of parameters VGG16 has (about 138 million if you make all the layers trainable).
I suggest:
1. For a gender classification problem, try using an official dataset such as IMDB-WIKI.
2. If you want to use your own data, first collect more labeled samples and then augment all of them.
3. Finally, use a state-of-the-art CNN architecture such as Xception (you can load ImageNet pre-trained weights for Xception in Keras), freeze the first 20 layers, and fine-tune the others, as sketched below.
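A hedged sketch of suggestion 3 (the classification head, optimizer, and input size are my own assumptions; note that Xception's ImageNet weights also expect Xception's own preprocess_input):

from keras import applications, optimizers
from keras.models import Model
from keras.layers import GlobalAveragePooling2D, Dense

# Xception base with ImageNet weights; freeze the first 20 layers, fine-tune the rest.
base = applications.Xception(weights='imagenet', include_top=False,
                             input_shape=(299, 299, 3))
for layer in base.layers[:20]:
    layer.trainable = False

x = GlobalAveragePooling2D()(base.output)
predictions = Dense(2, activation='softmax')(x)  # male/female
model = Model(inputs=base.input, outputs=predictions)
model.compile(optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])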

Keras: Transfer Learning - Image scaling worsens performance of the model significantly

I am working on an image classification problem with Keras and TensorFlow. I am using the VGG16 model with ImageNet weights, and I am importing my data using the ImageDataGenerator from Keras.
Now I've been reading that one should always rescale the images using 1./255 for efficient training. However, once I implement the scaling, my model performs significantly worse than before. Changing the learning rate and batch size didn't help either.
Now I am questioning whether this is possible or whether my model has some error. I am using standard .jpg image files.
import os
from os import listdir
from keras.preprocessing.image import ImageDataGenerator

IMAGE_SIZE = 224
BATCH_SIZE = 32
num_classes = 27

main_path = "C:/Users/abc/data"
final_path = os.path.join(main_path, "ML_DATA")
labels = listdir(final_path)  # the original's `gesamt_path` is presumably this directory

data_generator = ImageDataGenerator(rescale=1./255,  # rescaling done here
                                    validation_split=0.20)
train_generator = data_generator.flow_from_directory(final_path, target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                     shuffle=True, seed=13,
                                                     class_mode='categorical', batch_size=BATCH_SIZE,
                                                     subset="training")
validation_generator = data_generator.flow_from_directory(final_path, target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                          shuffle=False, seed=13,
                                                          class_mode='categorical', batch_size=BATCH_SIZE,
                                                          subset="validation")
Model definition and training
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

vgg16_model = keras.applications.vgg16.VGG16(weights='imagenet', include_top=True)
model = Sequential()
for layer in vgg16_model.layers[:-1]:   # drop VGG16's own 1000-way classifier
    model.add(layer)
for layer in model.layers:
    layer.trainable = False
model.add(Dense(num_classes, activation='softmax'))
model.compile(Adam(lr=.001), loss='categorical_crossentropy', metrics=['accuracy'])

# tbCallBack and earlystopCallback are defined elsewhere.
history = model.fit_generator(train_generator,
                              validation_data=validation_generator,
                              epochs=85, verbose=1,
                              callbacks=[tbCallBack, earlystopCallback])
It could be that the ImageNet weights are not compatible with your new image dimensions.
I see that your only trainable layer is the very last one, a Dense layer, which doesn't know anything about image dimensions. My suggestion is to also let the first few convolutional layers be trainable, so that those layers can adapt to the rescaling.
Working with ResNet and ImageNet weights, I improved my results using:
ImageDataGenerator(preprocessing_function=preprocess_input)
With rescaling I obtained worse results too.
This information was useful to me:
https://github.com/matterport/Mask_RCNN/issues/231
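Applied to the VGG16 setup from the question, that looks like the sketch below; preprocess_input applies the preprocessing the ImageNet weights were actually trained with (for VGG16, caffe-style channel reordering and mean subtraction) in place of the 1./255 rescaling:

from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input

# Model-specific preprocessing instead of rescale=1./255.
data_generator = ImageDataGenerator(preprocessing_function=preprocess_input,
                                    validation_split=0.20)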

How to use InceptionV3 bottlenecks as input in Keras 2.0

I want to use bottlenecks for transfer learning using InceptionV3 in Keras.
I've used some of the tips on creating, loading and using bottlenecks from
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
My problem is that I don't know how to use a bottleneck (numpy array) as input to an InceptionV3 with a new top layer.
I get the following error:
ValueError: Error when checking input: expected input_3 to have shape
(None, None, None, 3) but got array with shape (248, 8, 8, 2048)
248 refers to the total number of images in this case.
I know that this line is wrong, but I don't know how to correct it:
model = Model(inputs=base_model.input, outputs=predictions)
What is the correct way to input the bottleneck into InceptionV3?
Creating the InceptionV3 bottlenecks:
import math
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import InceptionV3
from keras.utils import to_categorical

def create_bottlenecks():
    datagen = ImageDataGenerator(rescale=1. / 255)
    model = InceptionV3(include_top=False, weights='imagenet')

    # Generate bottlenecks for all training images
    generator = datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    nb_train_samples = len(generator.filenames)
    bottlenecks_train = model.predict_generator(
        generator, int(math.ceil(nb_train_samples / float(batch_size))), verbose=1)
    # np.save needs a binary-mode file handle in Python 3.
    np.save(open(train_bottlenecks_file, 'wb'), bottlenecks_train)

    # Generate bottlenecks for all validation images
    generator = datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    nb_validation_samples = len(generator.filenames)
    bottlenecks_validation = model.predict_generator(
        generator, int(math.ceil(nb_validation_samples / float(batch_size))), verbose=1)
    np.save(open(validation_bottlenecks_file, 'wb'), bottlenecks_validation)
Loading the bottlenecks:
def load_bottlenecks(src_dir, bottleneck_file):
    datagen = ImageDataGenerator(rescale=1. / 255)
    generator = datagen.flow_from_directory(
        src_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='categorical',
        shuffle=False)
    num_classes = len(generator.class_indices)

    # load the bottleneck features saved earlier
    bottleneck_data = np.load(bottleneck_file)
    # get the class labels for the training data, in the original order
    bottleneck_class_labels = generator.classes
    # convert the training labels to categorical vectors
    bottleneck_class_labels = to_categorical(bottleneck_class_labels, num_classes=num_classes)
    return bottleneck_data, bottleneck_class_labels
Starting training:
def start_training():
    global nb_train_samples, nb_validation_samples
    create_bottlenecks()
    train_data, train_labels = load_bottlenecks(train_data_dir, train_bottlenecks_file)
    validation_data, validation_labels = load_bottlenecks(validation_data_dir, validation_bottlenecks_file)
    nb_train_samples = len(train_data)
    nb_validation_samples = len(validation_data)

    base_model = InceptionV3(weights='imagenet', include_top=False)

    # add a global spatial average pooling layer
    x = base_model.output
    x = GlobalAveragePooling2D()(x)
    # let's add a fully-connected layer
    x = Dense(1024, activation='relu')(x)
    # and a logistic layer -- let's say we have 2 classes
    predictions = Dense(2, activation='softmax')(x)

    # What is the correct input? Obviously not base_model.input.
    model = Model(inputs=base_model.input, outputs=predictions)

    # first: train only the top layers (which were randomly initialized)
    # i.e. freeze all convolutional InceptionV3 layers
    for layer in base_model.layers:
        layer.trainable = False

    model.compile(optimizer=optimizers.SGD(lr=0.01, momentum=0.9),
                  loss='categorical_crossentropy', metrics=['accuracy'])

    # train the model on the new data for a few epochs
    history = model.fit(train_data, train_labels,
                        epochs=epochs,
                        batch_size=batch_size,
                        validation_data=(validation_data, validation_labels))
Any help would be appreciated!
This error happens when you try to train your model with input data whose shape differs from the shape your model supports.
Your model supports (None, None, None, 3), meaning:
Any number of images
Any height
Any width
3 channels
So, you must make sure that train_data (and validation_data) matches this shape.
The error is telling you that train_data.shape = (248, 8, 8, 2048).
I see that train_data comes from load_bottlenecks. Is it really supposed to come from there? What is the training data supposed to be? An image? Something else? What is a bottleneck?
Your model starts with the Inception model, and the Inception model takes images.
But if the bottlenecks are already outputs of the Inception model, and you want to feed only bottlenecks, then the Inception model should not participate at all.
Start from:
inputTensor = Input((8,8,2048)) #Use (None,None,2048) if bottlenecks vary in size
x = GlobalAveragePooling2D()(inputTensor)
.....
Create the model with:
model = Model(inputTensor, predictions)
The idea is:
Inception model: Image -> Inception -> Bottlenecks
Your model: Bottlenecks -> Model -> Labels
The combination of the two models is only necessary when you don't have the bottlenecks preloaded, but you have your own images for which you want to predict the bottlenecks first. (Of course you can work with separate models as well)
Then you're going to input only images (the bottlenecks will be created by Inception and passed to your model, everything internally):
Combined model: Image -> Inception ->(bottlenecks)-> Model -> Labels
For that:
inputImage = Input((None,None,3))
bottleNecks = base_model(inputImage)
predictions = model(bottleNecks)
fullModel = Model(inputImage, predictions)
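For completeness, a minimal sketch of the bottleneck-only route end to end, reusing the shapes and hyperparameters from the code above:

from keras.layers import Input, GlobalAveragePooling2D, Dense
from keras.models import Model
from keras import optimizers

inputTensor = Input((8, 8, 2048))          # shape of the saved InceptionV3 bottlenecks
x = GlobalAveragePooling2D()(inputTensor)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputTensor, predictions)

model.compile(optimizer=optimizers.SGD(lr=0.01, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])
# Train directly on the precomputed bottleneck arrays from load_bottlenecks().
model.fit(train_data, train_labels,
          epochs=epochs, batch_size=batch_size,
          validation_data=(validation_data, validation_labels))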
