Epoch does not start while training CNN with keras VGGFace Framework - python

I am trying to use a VGG Face implementation with the Keras framework on my own dataset, which consists of 12 classes of face images. I have applied augmentation to some classes that had very little data in the training set.
After fine-tuning with resnet50, when I try to train my model it gets stuck at the first epoch, i.e., it does not start training but keeps displaying Epoch 1/50.
Here's what it looks like:
Layer (type) Output Shape Param #
=================================================================
model_1 (Model) (None, 12) 23585740
=================================================================
Total params: 23,585,740
Trainable params: 23,532,620
Non-trainable params: 53,120
_________________________________________________________________
Found 1774 images belonging to 12 classes.
Found 313 images belonging to 12 classes.
Epoch 1/50
Here's my code:
train_data_path = 'dataset_cfps/train'
validation_data_path = 'dataset_cfps/validation'

# Parameters
img_width, img_height = 224, 224

vggface = VGGFace(model='resnet50', include_top=False, input_shape=(img_width, img_height, 3))
# vgg_model = VGGFace(include_top=False, input_shape=(224, 224, 3))

last_layer = vggface.get_layer('avg_pool').output
x = Flatten(name='flatten')(last_layer)
out = Dense(12, activation='sigmoid', name='classifier')(x)
custom_vgg_model = Model(vggface.input, out)

# Create the model
model = models.Sequential()

# Add the convolutional base model
model.add(custom_vgg_model)

# Add new layers
# model.add(layers.Flatten())
# model.add(layers.Dense(1024, activation='relu'))
# model.add(BatchNormalization())
# model.add(layers.Dropout(0.5))
# model.add(layers.Dense(12, activation='sigmoid'))

# Show a summary of the model. Check the number of trainable parameters
model.summary()

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

validation_datagen = ImageDataGenerator(rescale=1./255)

train_batchsize = 16
val_batchsize = 16

train_generator = train_datagen.flow_from_directory(
    train_data_path,
    target_size=(img_width, img_height),
    batch_size=train_batchsize,
    class_mode='categorical')

validation_generator = validation_datagen.flow_from_directory(
    validation_data_path,
    target_size=(img_width, img_height),
    batch_size=val_batchsize,
    class_mode='categorical',
    shuffle=True)

# Compile the model
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(lr=1e-3),
              metrics=['acc'])

# Train the model
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples / train_generator.batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples / validation_generator.batch_size,
    verbose=1)

# Save the model
model.save('facenet_resnet.h5')
Does anyone know what the problem could be? And how could I make my model better, if there is something I could do? Feel free to suggest improvements.

Waiting did not solve it; I solved it by restarting the whole program.

Just wait a few hours (depending on your GPU). Eventually it will report the loss and val_loss for each epoch.
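If the progress bar never advances at all, two quick checks are worth trying. This is a minimal diagnostic sketch, not from the original answers; the single-batch pull and the math.ceil rounding are assumptions: pull one batch to confirm the input pipeline works, and pass integer step counts to fit_generator.

import math

# Pull one batch directly from the generator; if this hangs, the problem
# is in the data pipeline, not the model.
x_batch, y_batch = next(train_generator)
print(x_batch.shape, y_batch.shape)  # expected: (16, 224, 224, 3) (16, 12)

# fit_generator expects integer step counts; 1774 / 16 is fractional,
# so round up explicitly.
steps_per_epoch = math.ceil(train_generator.samples / train_generator.batch_size)
validation_steps = math.ceil(validation_generator.samples / validation_generator.batch_size)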

Related

How can I solve Value error in resnet 50 implementation?

I am implementing ResNet-50 on Kaggle and I am getting a value error. Kindly help me out.
train_dir = '../input/project/data/train'
test_dir = '../input/project/data/test'

train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    color_mode='grayscale',
    target_size=(28, 28),
    class_mode='binary',
    batch_size=32,
)
test_generator = test_datagen.flow_from_directory(
    test_dir,
    color_mode='grayscale',
    target_size=(28, 28),
    class_mode='binary',
    batch_size=32,
    shuffle='False',
)

model = Sequential()
model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path,
                   input_tensor=Input(shape=(224, 224, 3))))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(2048, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1024, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(2, activation='sigmoid'))
model.layers[0].trainable = False
I am training a binary classifier and I am getting the error below
ValueError: Cannot assign to variable conv3_block1_0_conv/kernel:0 due to variable shape (1, 1, 256, 512) and value shape (512, 128, 1, 1) are incompatible
You have given input_tensor=Input(shape=(224,224,3)) while defining the ResNet50 base model, but you are giving target_size=(28,28) in your train_generator and test_generator. The image shape that ResNet50 receives, i.e. target_size, is different from what it expects, i.e. input_tensor. Change your target_size to match the shape given in input_tensor. Also, ResNet50 expects color_mode to be 'rgb' rather than 'grayscale'.
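A minimal sketch of that fix, reusing only the paths and batch size from the question; everything else follows from the answer above:

# Feed ResNet50 RGB images at the size declared in input_tensor.
train_generator = train_datagen.flow_from_directory(
    train_dir,
    color_mode='rgb',           # ResNet50 expects 3-channel input
    target_size=(224, 224),     # must match Input(shape=(224, 224, 3))
    class_mode='binary',
    batch_size=32,
)
test_generator = test_datagen.flow_from_directory(
    test_dir,
    color_mode='rgb',
    target_size=(224, 224),
    class_mode='binary',
    batch_size=32,
    shuffle=False,
)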
This is because of the weights you are using (weights=resnet_weights_path). You have to use the most recently trained model, and the input image size can be anything allowed by the pre-trained model's guidelines.
The following worked for me:
n_h, n_w, n_c = (256, 256, 3)
weights_path = '../input/d/aeryss/keras-pretrained-models/ResNet50_NoTop_ImageNet.h5'
ResNet50 = keras.applications.ResNet50(weights=weights_path, include_top=False, input_shape=(n_h, n_w, n_c))
It looks like you are using pre-trained weights with your model. You should use the skip_mismatch=True keyword of the model.load_weights function. After creating the model variable, add the following line:
model.load_weights(weights, by_name=True, skip_mismatch=True)
where weights is the path to your pre-trained weights. This will ignore any layers whose shapes do not match between your model and the pre-trained weights.

Simple tf.keras Resnet50 model not converging

I'm using the ResNet50V2 model from keras.applications for image classification, but I have had persistent problems trying to get the model to converge to any meaningful accuracy. Previously, I developed this same model with the same data in Matlab and reached around 75% accuracy, but now the training just hovers around 30% accuracy and the loss does not drop. I suspect there is a really simple mistake somewhere, but I can't find it.
import tensorflow as tf

train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./224,
    validation_split=0.2)

train_generator = train_datagen.flow_from_directory(main_dir,
                                                    class_mode='categorical',
                                                    batch_size=32,
                                                    target_size=(224, 224),
                                                    shuffle=True,
                                                    subset='training')
validation_generator = train_datagen.flow_from_directory(main_dir,
                                                         target_size=(224, 224),
                                                         batch_size=32,
                                                         class_mode='categorical',
                                                         shuffle=True,
                                                         subset='validation')

IMG_SHAPE = (224, 224, 3)

base_model = tf.keras.applications.ResNet50V2(
    input_shape=IMG_SHAPE,
    include_top=False,
    weights='imagenet')

maxpool_layer = tf.keras.layers.GlobalMaxPooling2D()
prediction_layer = tf.keras.layers.Dense(4, activation='softmax')

model = tf.keras.Sequential([
    base_model,
    maxpool_layer,
    prediction_layer
])

opt = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=opt,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // 32,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // 32,
    epochs=20)
Since your last layer contains a softmax activation, your loss should not use from_logits=True. You would only need from_logits=True if the last layer had no softmax and produced raw logits. This is because categorical cross-entropy handles probability outputs differently from logits.
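A minimal sketch of the two consistent combinations, keeping the rest of the model from the question unchanged:

# Option 1: keep the softmax output and tell the loss it receives probabilities.
prediction_layer = tf.keras.layers.Dense(4, activation='softmax')
loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False)

# Option 2: output raw logits (no activation) and keep from_logits=True.
prediction_layer = tf.keras.layers.Dense(4)
loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

model.compile(optimizer=opt, loss=loss, metrics=['accuracy'])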

Why is my validation accuracy stuck around 65% and how do I increase it?

I'm making an image classification CNN with 5 classes, each having 693 images of width and height 224 px, using VGG16, but my validation accuracy gets stuck around 60%-65% after 15-20 epochs.
I'm already using some data augmentation, batch normalization, and dropout, and I have frozen the first 5 layers, but I can't seem to push my accuracy above 65%.
These are my own layers:
img_rows, img_cols, img_channel = 224, 224, 3

base_model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_rows, img_cols, img_channel))

for layer in base_model.layers[:5]:
    layer.trainable = False

add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dropout(0.5))
add_model.add(Dense(512, activation='relu'))
add_model.add(BatchNormalization())
add_model.add(Dropout(0.5))
add_model.add(Dense(5, activation='softmax'))

model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.0001),
              metrics=['accuracy'])
model.summary()
And this is how I train the model on my dataset:
batch_size = 64
epochs = 25

train_datagen = ImageDataGenerator(
    rotation_range=30,
    width_shift_range=.1,
    height_shift_range=.1,
    horizontal_flip=True)
train_datagen.fit(x_train)

history = model.fit_generator(
    train_datagen.flow(x_train, y_train, batch_size=batch_size),
    steps_per_epoch=x_train.shape[0] // batch_size,
    epochs=epochs,
    validation_data=(x_test, y_test),
    callbacks=[ModelCheckpoint('VGG16-transferlearning.model', monitor='val_acc', save_best_only=True)]
)
I want to get a higher accuracy because what I get now is just not enough, so any help or suggestions would be appreciated.
A few things you can try:
Reduce your batch size.
Choose another optimizer: RMSprop, SGD, etc.
Start from a higher learning rate than the default and then use the ReduceLROnPlateau callback to lower it when validation stops improving (see the sketch below).
But, as usual, it depends on the data you are using. Are the classes well balanced?
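A minimal sketch of the ReduceLROnPlateau suggestion, reusing the training call from the question; the monitor, factor, and patience values are illustrative, not taken from the original answer:

from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

# Lower the learning rate by a factor of 5 whenever val_loss has not
# improved for 3 epochs (values are illustrative).
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3,
                              min_lr=1e-6, verbose=1)

history = model.fit_generator(
    train_datagen.flow(x_train, y_train, batch_size=batch_size),
    steps_per_epoch=x_train.shape[0] // batch_size,
    epochs=epochs,
    validation_data=(x_test, y_test),
    callbacks=[reduce_lr,
               ModelCheckpoint('VGG16-transferlearning.model', monitor='val_acc',
                               save_best_only=True)]
)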

Not able to load weights after fine tuning the model with VGG16

I have loaded the weights from VGG16 and added them to my Sequential model. I want to train the lower layers of VGG16 by freezing the top layers (fine-tuning).
Everything was fine: I was able to build the model and predict on new images. But now I want to load the saved model, which I am unable to do.
This is what I have tried:
model1 = applications.VGG16(weights='imagenet',
                            include_top=False, input_shape=(img_width, img_height, 3))

train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(train_data_dir,
                                                    target_size=(img_width, img_height),
                                                    batch_size=size_batch,
                                                    class_mode='binary',
                                                    shuffle=False)
# repeat with the validation data
test_generator = test_datagen.flow_from_directory(validation_data_dir,
                                                  target_size=(img_width, img_height),
                                                  batch_size=size_batch,
                                                  class_mode='binary',
                                                  shuffle=False)

model = Sequential()
model.add(Flatten(input_shape=model1.output_shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))

new_model = Sequential()
for l in model1.layers:
    new_model.add(l)
new_model.add(model)

for layer in new_model.layers[:25]:
    layer.trainable = False

new_model.compile(optimizer=optimizers.SGD(lr=1e-3, momentum=0.9),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

checkpoint = ModelCheckpoint(fine_tuned_model_path, monitor='val_acc',
                             verbose=1, save_best_only=True,
                             save_weights_only=False, mode='auto')

# fine-tune the model
fit = new_model.fit_generator(train_generator,
                              steps_per_epoch=33,
                              nb_epoch=1,
                              validation_data=test_generator,
                              verbose=1, callbacks=[checkpoint])
I then tried to load the model:
load_model("C:/Users/hi/POC/Fine_Tune/model.h5")
This is the error I am receiving:
ValueError: You are trying to load a weight file containing 14 layers
into a model with 1 layers.
According to Keras issue 8898, this error can be avoided by editing the Keras code keras/applications/vgg16.py so that the line that used to read
model.load_weights(weights_path)
now reads
model.load_weights(weights_path, by_name=True)
I have found this to work for ImageNet weights with other applications models as well, e.g. NASNet.
I don't see why you had to define a new model and load the previous layers of VGG16 into your new model. The best workaround I would advise is to freeze the layers of the VGG16 architecture you want and keep the ones you want trainable, as you did in the last for loop. This ultimately lets you remove the two for loops you have embedded inside.
# load the base model as you did, without the top layer
model1 = applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))

# to see the structure of your architecture
model1.summary()

# freeze the layers you do not want to train in your architecture
for layer in model1.layers[:25]:
    layer.trainable = False

# the rest is the same from here on, excluding the two for loops,
# which you need to remove as they are no longer required.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
# etc...
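To round this off, a hedged sketch of how the head from the question could sit directly on the frozen base and then be saved and reloaded as one model; the file name is illustrative, not from the original answer:

from keras.models import Sequential, load_model
from keras.layers import Flatten, Dense, Dropout

new_model = Sequential()
new_model.add(model1)                     # frozen VGG16 base from above
new_model.add(Flatten())
new_model.add(Dense(256, activation='relu'))
new_model.add(Dropout(0.2))
new_model.add(Dense(1, activation='sigmoid'))

# ... compile and fit as in the question, then save and reload the whole
# model (architecture + weights) in one file:
new_model.save('fine_tuned_vgg16.h5')     # illustrative file name
restored = load_model('fine_tuned_vgg16.h5')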

A huge time to download the weights of deep networks

Hi, I feel that there is something wrong with the way my code is running. I'm trying to load VGG and ResNet models for deep learning. This is the code I used:
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense

# path to the model weights files.
weights_path = '../keras/examples/vgg16_weights.h5'
top_model_weights_path = 'fc_model.h5'
# dimensions of our images.
img_width, img_height = 150, 150
train_data_dir = 'cats_and_dogs_small/train'
validation_data_dir = 'cats_and_dogs_small/validation'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16

# build the VGG16 network
model = applications.VGG16(weights='imagenet', include_top=False)
print('Model loaded.')

# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))

# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)

# add the model on top of the convolutional base
model.add(top_model)

# set the first 25 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:25]:
    layer.trainable = False

# compile the model with a SGD/momentum optimizer
# and a very slow learning rate.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')

# fine-tune the model
model.fit_generator(
    train_generator,
    samples_per_epoch=nb_train_samples,
    epochs=epochs,
    validation_data=validation_generator,
    nb_val_samples=nb_validation_samples)
At the line model = applications.VGG16(weights='imagenet', include_top=False), the program starts to download the weights and shows a download progress bar (progress output omitted). The estimated time to finish is around 5-6 days, and the download also gets stuck in the middle. Is there a simple way I can avoid this whole process, for example by downloading the weights manually? Is there something I'm missing?
Please help.
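One commonly used workaround, sketched here under the assumption that the weights file can be fetched once by other means (the local file name below is illustrative), is to rely on the Keras cache directory or to pass a local path directly:

from keras import applications

# Option 1: copy the manually downloaded no-top VGG16 weights file into
# ~/.keras/models/ under the file name Keras prints in its download URL;
# the next call will then find it in the cache and skip the download.
model = applications.VGG16(weights='imagenet', include_top=False)

# Option 2: pass the path of the downloaded file explicitly
# (the weights argument also accepts a path to a weights file).
local_weights = 'vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'  # illustrative
model = applications.VGG16(weights=local_weights, include_top=False)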
