Training a convolutional neural network from scratch on my own dataset with Keras and TensorFlow.
learning rate = 0.0001,
5 classes to classify,
no dropout used,
dataset checked twice, no wrong labels found
Model:
from keras import layers, models, optimizers

model = models.Sequential()
model.add(layers.Conv2D(16, (2, 2), activation='relu', input_shape=(75, 75, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(16, (2, 2), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(32, (2, 2), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(5, activation='sigmoid'))
model.compile(optimizer=optimizers.Adam(lr=0.0001),
              loss='categorical_crossentropy',
              metrics=['acc'])
history = model.fit_generator(train_generator,
                              steps_per_epoch=100,
                              epochs=50,
                              validation_data=val_generator,
                              validation_steps=25)
Every time the model reaches epoch 25-35 (80-90% accuracy), this happens:
Epoch 31/50
100/100 [==============================] - 3s 34ms/step - loss: 0.3524 - acc: 0.8558 - val_loss: 0.4151 - val_acc: 0.7992
Epoch 32/50
100/100 [==============================] - 3s 34ms/step - loss: 0.3393 - acc: 0.8700 - val_loss: 0.4384 - val_acc: 0.7951
Epoch 33/50
100/100 [==============================] - 3s 34ms/step - loss: 0.3321 - acc: 0.8702 - val_loss: 0.4993 - val_acc: 0.7620
Epoch 34/50
100/100 [==============================] - 3s 33ms/step - loss: 1.5444 - acc: 0.3302 - val_loss: 1.6062 - val_acc: 0.1704
Epoch 35/50
100/100 [==============================] - 3s 34ms/step - loss: 1.6094 - acc: 0.2935 - val_loss: 1.6062 - val_acc: 0.1724
There are similar questions with answers; mostly they recommend lowering the learning rate, but that doesn't help at all.
UPD: almost all of the weights and biases in the network became NaN. The network somehow died inside.
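A quick way to confirm this (a minimal diagnostic sketch; it assumes the trained model object from above is in scope) is to scan the weight tensors for NaNs:

import numpy as np

# List every weight/bias tensor in the model that contains a NaN.
nan_tensors = [w.name for layer in model.layers
               for w, values in zip(layer.weights, layer.get_weights())
               if np.isnan(values).any()]
print("Tensors containing NaN:", nan_tensors)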
Solution in this case:
I changed the sigmoid function in the last layer to a softmax function and the drops are gone.
Why did this work?
The sigmoid activation function is used for binary (two-class) classification.
For multiclass problems we should use the softmax function, the extension of the sigmoid to more than two classes.
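In code, the fix is a change to the output layer and nothing else; a minimal sketch of the corrected head (same sizes as the model above):

model.add(layers.Dense(5, activation='softmax'))  # one probability per class, summing to 1
model.compile(optimizer=optimizers.Adam(lr=0.0001),
              loss='categorical_crossentropy',  # the loss designed for a softmax output
              metrics=['acc'])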
More information: Sigmoid vs Softmax
Special thanks to @desertnaut and @Shubham Panchal for pointing out the error.
I want to use a ResNet50 for a regression task, and I use a custom loss for training. I want to use checkpoints to save the best model, i.e. the one with the minimum loss on the test data. The code for training the model is as follows:
import tensorflow as tf
from tensorflow import keras

input_shape = (32, 32, 1)
inputs = keras.Input(shape=input_shape)
outputs = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_tensor=None,
    input_shape=input_shape, pooling='max'
)(inputs)
outputs = keras.layers.Dense(1, activation=None)(outputs)
model = keras.Model(inputs, outputs)

# EWC_loss, fisher_1 and prior_weights_1 are defined elsewhere in my code.
model.compile(optimizer='adam',
              loss=EWC_loss(model, fisher_1, prior_weights_1, Lambda=1),
              metrics='mse')

checkpoint_filepath_3 = 'F:/NTU_PyCode/CL_regression_mnist/saved_resnet/resnet50_task2_epoch=5(1).h5'
model_checkpoint_callback_2 = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath_3,
    save_weights_only=True,
    monitor='val_loss',
    mode='min',
    save_best_only=True)

model.fit(x_train_2, y_train_2, batch_size=32, shuffle=True,
          validation_data=(x_test_2, y_test_2), epochs=5,
          callbacks=[model_checkpoint_callback_2])
And here are the training results. My expectation was that the model's weights after the 3rd epoch would be saved to checkpoint_filepath_3, because that epoch has the minimum val_loss (val_mse is not at its minimum there because the custom loss involves other terms).
2/1875 [..............................] - ETA: 1:07 - loss: 8.4497 - mse: 8.4489WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0239s vs `on_train_batch_end` time: 0.0449s). Check your callbacks.
1875/1875 [==============================] - 136s 73ms/step - loss: 2.6100 - mse: 2.5062 - val_loss: 5.5797 - val_mse: 5.4108
Epoch 2/5
1875/1875 [==============================] - 129s 69ms/step - loss: 1.2896 - mse: 1.1265 - val_loss: 1.6604 - val_mse: 1.4745
Epoch 3/5
1875/1875 [==============================] - 128s 68ms/step - loss: 0.9861 - mse: 0.7998 - val_loss: 1.4171 - val_mse: 1.2161
Epoch 4/5
1875/1875 [==============================] - 128s 68ms/step - loss: 1.1695 - mse: 0.8958 - val_loss: 1.4705 - val_mse: 1.2034
Epoch 5/5
1875/1875 [==============================] - 129s 69ms/step - loss: 1.0095 - mse: 0.7305 - val_loss: 11.7203 - val_mse: 11.4236
But when I load the weights and use the evaluate function on the same test data, the problem appears. The loss here is not the custom loss, but the metric is still MSE, so I assumed the MSE reported by evaluate would match the val_mse of the 3rd epoch in fit. But the MSEs are very different!
model.compile(optimizer='adam',
              loss=tf.keras.losses.mse,
              metrics='mse')
print("EWC model on Task 2")
model.load_weights(checkpoint_filepath_3)
model.evaluate(x_test_2, y_test_2)
EWC model on Task 2
313/313 [==============================] - 4s 13ms/step - loss: 9.1384 - mse: 9.1384
What causes this phenomenon? Were the weights not saved to the checkpoint? Or is it some other issue? Thank you in advance~
After more experiments, I found a puzzling phenomenon. If I run the training and evaluation code together, the results are correct! The results for 2 epochs of training followed by evaluation are shown below, and we can see the MSEs are the same.
2/1875 [..............................] - ETA: 59s - loss: 15.2813 - mse: 15.2805WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0190s vs `on_train_batch_end` time: 0.0439s). Check your callbacks.
1875/1875 [==============================] - 137s 73ms/step - loss: 2.0093 - mse: 1.9253 - val_loss: 1.8885 - val_mse: 1.7217
Epoch 2/2
1875/1875 [==============================] - 129s 69ms/step - loss: 1.1946 - mse: 1.0230 - val_loss: 1.1102 - val_mse: 0.9254
EWC model on Task 2
313/313 [==============================] - 4s 13ms/step - loss: 0.9254 - mse: 0.9254
But if I train and evaluate separately (run the training code first, then just load the saved weights into the model and evaluate), the results are different.
EWC model on Task 2
313/313 [==============================] - 4s 14ms/step - loss: 9.0702 - mse: 9.0702
Why is that? It's really confusing. Is there any difference between training and evaluating in one run versus separately?
I don't understand the details, but when you use a model checkpoint to save the weights only (or even the whole model), model.load_weights performs a complex restoration process that is described here. When you recompile before loading the weights, that restoration process apparently gets messed up. I did find a note that says changing model.compile can cause the restoration to fail.
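Following that note, a workaround is to restore the weights before recompiling (a minimal sketch using the variables from the question; I haven't verified this against the restoration internals):

# Rebuild the model exactly as it was built for training, then
# load the checkpointed weights BEFORE recompiling.
model.load_weights(checkpoint_filepath_3)

# Only now recompile for evaluation with the plain MSE loss.
model.compile(optimizer='adam',
              loss=tf.keras.losses.mse,
              metrics='mse')
model.evaluate(x_test_2, y_test_2)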
I tried to train a CNN to classify 9 classes of images, with 1000 training images per class. I tried training VGG16 and VGG19; both can achieve a validation accuracy of 90%. But when I tried to train an InceptionResNetV2 model, it seemed to get stuck at around 20-30%. Below are my code for InceptionResNetV2 and the training log. What can I do to improve the training?
import tensorflow as tf
from tensorflow.keras.layers import Flatten, Dense, LeakyReLU, Dropout, BatchNormalization
from tensorflow.keras import regularizers

base_model = tf.keras.applications.InceptionResNetV2(input_shape=(IMG_HEIGHT, IMG_WIDTH, 3),
                                                     weights='imagenet', include_top=False)
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    Flatten(),
    Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
    LeakyReLU(alpha=0.4),
    Dropout(0.5),
    BatchNormalization(),
    Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
    LeakyReLU(alpha=0.4),
    Dense(9, activation='softmax')])

optimizer_model = tf.keras.optimizers.Adam(learning_rate=0.0001, name='Adam', decay=0.00001)
loss_model = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
model.compile(optimizer_model, loss="categorical_crossentropy", metrics=['accuracy'])
Epoch 1/10
899/899 [==============================] - 255s 283ms/step - loss: 4.3396 - acc: 0.3548 - val_loss: 4.2744 - val_acc: 0.3874
Epoch 2/10
899/899 [==============================] - 231s 257ms/step - loss: 3.5856 - acc: 0.4695 - val_loss: 3.9151 - val_acc: 0.3816
Epoch 3/10
899/899 [==============================] - 225s 250ms/step - loss: 3.1451 - acc: 0.4959 - val_loss: 4.8801 - val_acc: 0.2425
Epoch 4/10
899/899 [==============================] - 227s 252ms/step - loss: 2.7771 - acc: 0.5124 - val_loss: 3.7167 - val_acc: 0.3023
Epoch 5/10
899/899 [==============================] - 231s 257ms/step - loss: 2.4993 - acc: 0.5260 - val_loss: 3.7276 - val_acc: 0.3770
Epoch 6/10
899/899 [==============================] - 227s 252ms/step - loss: 2.3148 - acc: 0.5251 - val_loss: 3.7677 - val_acc: 0.3115
Epoch 7/10
899/899 [==============================] - 234s 260ms/step - loss: 2.1381 - acc: 0.5379 - val_loss: 3.4867 - val_acc: 0.2862
Epoch 8/10
899/899 [==============================] - 230s 256ms/step - loss: 2.0091 - acc: 0.5367 - val_loss: 4.1032 - val_acc: 0.3080
Epoch 9/10
899/899 [==============================] - 225s 251ms/step - loss: 1.9155 - acc: 0.5399 - val_loss: 4.1270 - val_acc: 0.2954
Epoch 10/10
899/899 [==============================] - 232s 258ms/step - loss: 1.8349 - acc: 0.5508 - val_loss: 4.3918 - val_acc: 0.2276
VGG-16/19 has a depth of 23/26 layers, whereas InceptionResNetV2 has a depth of 572 layers. Also, there is minimal domain similarity between medical images and the ImageNet dataset. In VGG, due to the low depth, the features you're getting are not that complex, and the network can classify them on the basis of the Dense-layer features. In the InceptionResNetV2 network, however, being so much deeper, the output of the convolutional base is far more complex (visualize it as object-like features, but tuned to the ImageNet dataset), so the features obtained from these layers fail to connect to the Dense-layer features, and hence the overfitting. I think you get my point.
Check out my answer to a very similar question of yours at this link: Link. It will help improve your accuracy.
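One common adjustment in this situation (a sketch based on general transfer-learning practice, not the linked answer's code) is to replace Flatten with global average pooling and shrink the head, so the deep base's feature maps are summarized before reaching the Dense layers:

import tensorflow as tf

base_model = tf.keras.applications.InceptionResNetV2(
    input_shape=(IMG_HEIGHT, IMG_WIDTH, 3), weights='imagenet', include_top=False)
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),  # summarize each feature map instead of flattening
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(9, activation='softmax')])

model.compile(tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])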
I am working on abusive and violent content detection. When I train my model, the training log is as follows:
Train on 9087 samples, validate on 2125 samples
Epoch 1/5
9087/9087 [==============================] - 33s 4ms/step - loss: 0.3193 - accuracy: 0.8603 - val_loss: 0.2314 - val_accuracy: 0.9322
Epoch 2/5
9087/9087 [==============================] - 33s 4ms/step - loss: 0.1787 - accuracy: 0.9440 - val_loss: 0.2039 - val_accuracy: 0.9356
Epoch 3/5
9087/9087 [==============================] - 32s 4ms/step - loss: 0.1148 - accuracy: 0.9637 - val_loss: 0.2569 - val_accuracy: 0.9180
Epoch 4/5
9087/9087 [==============================] - 33s 4ms/step - loss: 0.0805 - accuracy: 0.9738 - val_loss: 0.3409 - val_accuracy: 0.9047
Epoch 5/5
9087/9087 [==============================] - 36s 4ms/step - loss: 0.0599 - accuracy: 0.9795 - val_loss: 0.3661 - val_accuracy: 0.9082
You can see it in this graph [graph of the training vs. validation loss and accuracy curves; image not reproduced]. The training loss decreases and training accuracy increases, while the validation loss increases and validation accuracy decreases.
The code for the model:
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Dropout
from keras import optimizers

model = Sequential()
model.add(Embedding(8941, 256, input_length=20))
model.add(LSTM(32, dropout=0.1, recurrent_dropout=0.1))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(4, activation='sigmoid'))
model.summary()

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.Adam(lr=0.001),
              metrics=['accuracy'])
history = model.fit(x, x_test,
                    batch_size=batch_size,
                    epochs=5,
                    verbose=1,
                    validation_data=(y, y_test))
Help would be appreciated.
It actually depends on your data, but it seems like the model overfits the train set very quickly (after the second epoch).
Try:
Reduce your learning rate
Increase your batch size
Add regularization
Increase your dropout rate
Furthermore, it seems you are using binary_crossentropy while your model outputs a 4-length vector for each sample (model.add(Dense(4, activation='sigmoid'))); this might cause problems too.
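If the four classes are mutually exclusive (an assumption; the question doesn't say whether a sample can carry several labels at once), the usual pairing is a softmax output with categorical cross-entropy. A minimal sketch of that change:

model.add(Dense(4, activation='softmax'))       # one probability per class, summing to 1
model.compile(loss='categorical_crossentropy',  # expects one-hot encoded labels
              optimizer=optimizers.Adam(lr=0.001),
              metrics=['accuracy'])

If samples can genuinely carry several labels, the existing sigmoid plus binary_crossentropy pairing is correct, but the reported "accuracy" should then be read as per-label binary accuracy.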
I am trying to train my model using transfer learning. For this I am using the VGG16 model with the top layers stripped and the first 2 layers frozen, to keep the ImageNet initial weights. For fine-tuning I am using learning rate 0.0001, activation softmax, dropout 0.5, loss categorical crossentropy, optimizer SGD, and 46 classes.
I am just unable to understand the behavior during training. Train loss and accuracy are both fine (loss decreasing, accuracy increasing). Val loss is decreasing and val accuracy is increasing as well, BUT they are always higher than the train loss and accuracy.
Assuming it was overfitting, I made the model less complex, increased the dropout rate, and added more samples to the validation data, but nothing seemed to work. I am a newbie, so any kind of help is appreciated.
Epoch 1/50
26137/26137 [==============================] - 7446s 285ms/step - loss: 1.1200 - accuracy: 0.3810 - val_loss: 3.1219 - val_accuracy: 0.4467
Epoch 2/50
26137/26137 [==============================] - 7435s 284ms/step - loss: 0.9944 - accuracy: 0.4353 - val_loss: 2.9348 - val_accuracy: 0.4694
Epoch 3/50
26137/26137 [==============================] - 7532s 288ms/step - loss: 0.9561 - accuracy: 0.4530 - val_loss: 1.6025 - val_accuracy: 0.4780
Epoch 4/50
26137/26137 [==============================] - 7436s 284ms/step - loss: 0.9343 - accuracy: 0.4631 - val_loss: 1.3032 - val_accuracy: 0.4860
Epoch 5/50
26137/26137 [==============================] - 7358s 282ms/step - loss: 0.9185 - accuracy: 0.4703 - val_loss: 1.4461 - val_accuracy: 0.4847
Epoch 6/50
26137/26137 [==============================] - 7396s 283ms/step - loss: 0.9083 - accuracy: 0.4748 - val_loss: 1.4093 - val_accuracy: 0.4908
Epoch 7/50
26137/26137 [==============================] - 7424s 284ms/step - loss: 0.8993 - accuracy: 0.4789 - val_loss: 1.4617 - val_accuracy: 0.4939
Epoch 8/50
26137/26137 [==============================] - 7433s 284ms/step - loss: 0.8925 - accuracy: 0.4822 - val_loss: 1.4257 - val_accuracy: 0.4978
Epoch 9/50
26137/26137 [==============================] - 7445s 285ms/step - loss: 0.8868 - accuracy: 0.4851 - val_loss: 1.5568 - val_accuracy: 0.4953
Epoch 10/50
26137/26137 [==============================] - 7387s 283ms/step - loss: 0.8816 - accuracy: 0.4874 - val_loss: 1.4534 - val_accuracy: 0.4970
Epoch 11/50
26137/26137 [==============================] - 7374s 282ms/step - loss: 0.8779 - accuracy: 0.4894 - val_loss: 1.4605 - val_accuracy: 0.4912
Epoch 12/50
26137/26137 [==============================] - 7411s 284ms/step - loss: 0.8733 - accuracy: 0.4915 - val_loss: 1.4694 - val_accuracy: 0.5030
Yes, you are facing an over-fitting issue. To mitigate it, you can try to implement the steps below.
1. Shuffle the data by using shuffle=True in VGG16_model.fit. Code is shown below:
history = VGG16_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
                          validation_data=(x_validation, y_validation), shuffle=True)
2. Use early stopping. Code is shown below:
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)
3. Use regularization. Code for regularization is shown below (you can try l1 regularization or l1_l2 regularization as well):
from tensorflow.keras.regularizers import l2
Regularizer = l2(0.001)
VGG16_model.add(Conv2D(96, (11, 11), input_shape=(227, 227, 3), strides=(4, 4), padding='valid',
                       activation='relu', data_format='channels_last',
                       activity_regularizer=Regularizer, kernel_regularizer=Regularizer))
VGG16_model.add(Dense(units=2, activation='sigmoid',
                      activity_regularizer=Regularizer, kernel_regularizer=Regularizer))
4. You can try using BatchNormalization.
5. Perform image data augmentation using ImageDataGenerator; a sketch follows this list. Refer to this link for more info about that.
6. If the pixels are not normalized, dividing the pixel values by 255 also helps.
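For item 5, a minimal augmentation sketch (the parameter values and directory paths are illustrative placeholders, not taken from the question):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment and rescale the training images; only rescale the validation images.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=20,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

# 'data/train' and 'data/val' are placeholder directories with one subfolder per class.
train_generator = train_datagen.flow_from_directory('data/train', target_size=(224, 224),
                                                    batch_size=32, class_mode='categorical')
val_generator = val_datagen.flow_from_directory('data/val', target_size=(224, 224),
                                                batch_size=32, class_mode='categorical')

history = VGG16_model.fit(train_generator, epochs=epochs, verbose=1,
                          validation_data=val_generator,
                          callbacks=[callback])  # the EarlyStopping callback from item 2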
I want to retrain Google's Inception v3 for my own problem with Keras and ImageNet weights. The problem is that when you load the Inception v3 imagenet weights, you must specify that the number of classes is 1000, as follows:
base_network = inception_v3.InceptionV3(include_top=False, weights='imagenet', classes=1000)
If I set classes to the number of classes my dataset actually has (not 1000), it raises an error saying that when using imagenet weights you must set classes to 1000.
In order to customize the top layer of Inception, I've read that you can use a bottleneck. This is nothing more than skipping the standard Inception top layer and writing your own, so I can use the include_top=False parameter and program my own top layer.
If I do so as follows:
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

x = base_network.output
x = GlobalAveragePooling2D()(x)
x = Dense(128, activation='relu')(x)
predictions = Dense(globals_params.num_classes, activation='softmax')(x)
network = Model(inputs=base_network.input, outputs=predictions)
It works (it trains), but the validation loss and accuracy never improve, as you can see (obviously, the Inception layers are set trainable=False in order to keep the imagenet weights).
...
Epoch 00071: acc did not improve
Epoch 72/300
2741/2741 [==============================] - 12s 4ms/step - loss: 0.0471 - acc: 0.9810 - val_loss: 8.5221 - val_acc: 0.4643
Epoch 00072: acc did not improve
Epoch 73/300
2741/2741 [==============================] - 12s 4ms/step - loss: 0.0354 - acc: 0.9872 - val_loss: 8.4629 - val_acc: 0.4718
Epoch 00073: acc did not improve
Epoch 74/300
2741/2741 [==============================] - 12s 4ms/step - loss: 0.0277 - acc: 0.9891 - val_loss: 8.2515 - val_acc: 0.4881
Epoch 00074: acc did not improve
Epoch 75/300
2741/2741 [==============================] - 12s 4ms/step - loss: 0.0330 - acc: 0.9880 - val_loss: 8.5953 - val_acc: 0.4618
Epoch 00075: acc did not improve
Epoch 76/300
2741/2741 [==============================] - 12s 4ms/step - loss: 0.0402 - acc: 0.9854 - val_loss: 8.3820 - val_acc: 0.4793
Epoch 00076: acc did not improve
Epoch 77/300
2741/2741 [==============================] - 12s 4ms/step - loss: 0.0337 - acc: 0.9880 - val_loss: 8.1831 - val_acc: 0.4906
Epoch 00077: acc did not improve
Epoch 78/300
2741/2741 [==============================] - 12s 4ms/step - loss: 0.0381 - acc: 0.9858 - val_loss: 8.4118 - val_acc: 0.4756
...
The question is: how can I program a top layer for Inception that lets me train on my own dataset and actually improves my validation accuracy? I've looked on every site on the internet and didn't find anything.
Thanks in advance!