Loss and accuracy do not improve - python

I'm trying to train a product detection model with approximately 100,000 training images and 10,000 test images. However, no matter which optimizer I use (I have tried Adam, and SGD with multiple learning rates), my loss and accuracy do not improve. Below is my code.
First I read the training images:
for x in train_data.category.tolist():
    # Category folders below 10 are zero-padded, e.g. "07"
    if x < 10:
        x = "0" + str(x)
    else:
        x = str(x)
    path = os.path.join(train_DATADIR, x)
    # idx is assumed to start at 0 before this loop
    img_array = cv2.imread(os.path.join(path, str(train_data.filename[idx])), cv2.IMREAD_GRAYSCALE)
    new_array = cv2.resize(img_array, (100, 100))
    train_images.append(new_array)
    idx += 1
    print(f'{idx}/105392 - {(idx/105392)*100:.2f}%')
narray = np.array(train_images)
Then I save the train_images data to a binary file:
np.save(DIR_PATH + 'train_images_bitmap.npy', narray)
Then I divide train_images by 255.0 to normalize the pixel values:
train_images = train_images / 255.0
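For that line to work, train_images presumably has to be the NumPy array reloaded from the .npy file, since dividing a plain Python list by 255.0 raises a TypeError. A sketch of the assumed reload step:

# Assumed step: reload the saved array so the division above is elementwise.
train_images = np.load(DIR_PATH + 'train_images_bitmap.npy')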
Then I declare my model with a 100x100 input, since the images were resized to 100x100:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(100, 100)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(42)  # logits for 42 classes
])
Then I compile the model. I tried Adam, and SGD with learning rates from 0.001 up to 0.2:
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
Next I fit the model with a checkpoint callback:
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, monitor='val_acc',
                                                 mode='max', save_best_only=True,
                                                 save_weights_only=True, verbose=1)
# Note: the callback has to be passed to fit(), and monitoring 'val_acc'
# also requires validation data to be supplied.
model.fit(train_images, train_labels, epochs=2000, callbacks=[cp_callback])
But the output over the epochs was not improving. How can I improve the loss and accuracy? Below is the epoch output:
Epoch 6/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0249
Epoch 7/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0248
Epoch 8/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7209 - accuracy: 0.0255
Epoch 9/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7209 - accuracy: 0.0251
Epoch 10/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0254
Epoch 11/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7209 - accuracy: 0.0254
Epoch 12/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0243
Epoch 13/2000
3294/3294 [==============================] - 12s 3ms/step - loss: 3.7210 - accuracy: 0.0238
Epoch 14/2000
3294/3294 [==============================] - 11s 3ms/step - loss: 3.7210 - accuracy: 0.0251
Epoch 15/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7209 - accuracy: 0.0253
Epoch 16/2000
3294/3294 [==============================] - 11s 3ms/step - loss: 3.7210 - accuracy: 0.0243
Epoch 17/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0247
Epoch 18/2000
3294/3294 [==============================] - 12s 3ms/step - loss: 3.7210 - accuracy: 0.0247

I don't think the choice of optimizer is the main problem. With only a little experience on the matter, I can only suggest a few things:
For images, I would try using a 2D convolution layer before the dense layer.
Try adding a dropout layer to reduce the possibility of overfitting.
The first layer is 100*100, and a reduction to 128 is perhaps too aggressive (I don't know, but that's at least my intuition). Try increasing from 128 to a larger number, or even add an intermediate layer :)
Perhaps something like:
model = Sequential()
model.add(Conv2D(...))
model.add(MaxPooling2D(...))
model.add(Dropout(...))
model.add(Flatten())
model.add(Dense(...))
model.compile(...)
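For concreteness, here is one runnable way that sketch could be filled in, assuming the 100x100 grayscale inputs from the question; the filter counts, kernel sizes, and dropout rate are illustrative guesses, not tuned values:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.losses import SparseCategoricalCrossentropy

model = Sequential()
# Grayscale input needs an explicit channel axis, so reshape the data to
# (N, 100, 100, 1), e.g. train_images = train_images[..., np.newaxis]
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(42))  # logits for the 42 classes, matching the original setup
model.compile(optimizer='adam',
              loss=SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])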

Tensorflow tf.data.Dataset.cache seems do not take the expected effect

I am trying to improve my model training performance following the Better performance with the tf.data API guideline. However, I have observed that the performance using .cache() is almost the same as, or even worse than, the same settings without .cache().
datafile_list = load_my_files()
RAW_BYTES = 403*4
BATCH_SIZE = 32
raw_dataset = tf.data.FixedLengthRecordDataset(filenames=datafile_list, record_bytes=RAW_BYTES, num_parallel_reads=10, buffer_size=1024*RAW_BYTES)
raw_dataset = raw_dataset.map(tf.autograph.experimental.do_not_convert(decode_and_prepare),
                              num_parallel_calls=tf.data.AUTOTUNE)
raw_dataset = raw_dataset.cache()
raw_dataset = raw_dataset.shuffle(buffer_size=1024)
raw_dataset = raw_dataset.batch(BATCH_SIZE)
raw_dataset = raw_dataset.prefetch(tf.data.AUTOTUNE)
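One way to check whether the cache is being hit at all, independent of the model, is to time two full passes over the dataset; once the cache is finalized, the second pass should be much faster. A small sketch (the helper function is mine):

import time

def time_one_pass(ds):
    # Walk the dataset once, discarding elements, and return wall-clock seconds.
    start = time.perf_counter()
    for _ in ds:
        pass
    return time.perf_counter() - start

print('pass 1 (fills the cache):', time_one_pass(raw_dataset))
print('pass 2 (should read from the cache):', time_one_pass(raw_dataset))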
The data in datafile_list holds 9.92 GB, which easily fits in the system's total physical RAM (100 GB). System swap is disabled.
By training the model using the dataset:
model = build_model()
model.fit(raw_dataset, epochs=5, verbose=2)
results in:
Epoch 1/5
206247/206247 - 126s - loss: 0.0043 - mae: 0.0494 - mse: 0.0043
Epoch 2/5
206247/206247 - 125s - loss: 0.0029 - mae: 0.0415 - mse: 0.0029
Epoch 3/5
206247/206247 - 129s - loss: 0.0027 - mae: 0.0397 - mse: 0.0027
Epoch 4/5
206247/206247 - 125s - loss: 0.0025 - mae: 0.0386 - mse: 0.0025
Epoch 5/5
206247/206247 - 125s - loss: 0.0024 - mae: 0.0379 - mse: 0.0024
This result is frustrating. By the docs:
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
And from this guide:
When iterating over this dataset, the second iteration will be much faster than the first one thanks to the caching.
However, the elapsed time for all epochs is almost the same. In addition, both CPU and GPU usage during training are very low (see the attachments below).
By commenting out the line raw_dataset = raw_dataset.cache(), the results show no notable difference:
Epoch 1/5
206067/206067 - 129s - loss: 0.0042 - mae: 0.0492 - mse: 0.0042
Epoch 2/5
206067/206067 - 127s - loss: 0.0028 - mae: 0.0412 - mse: 0.0028
Epoch 3/5
206067/206067 - 134s - loss: 0.0026 - mae: 0.0393 - mse: 0.0026
Epoch 4/5
206067/206067 - 127s - loss: 0.0024 - mae: 0.0383 - mse: 0.0024
Epoch 5/5
206067/206067 - 126s - loss: 0.0023 - mae: 0.0376 - mse: 0.0023
Based on the docs, my expectation was that using cache would result in much faster training. I would like to know what I am doing wrong.
Attachments
GPU usage during training using cache:
GPU usage during training WITHOUT cache:
System Stats (Memory, CPU etc) during training using cache:
System Stats (Memory, CPU etc) during training WITHOUT cache:
Just a small observation using Google Colab. According to the docs:
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
And
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
I did notice a few differences when using caching and iterating over the dataset beforehand. Here is an example.
Prepare data:
import random
import struct
import tensorflow as tf
import numpy as np

RAW_N = 2 + 20*20 + 1

bytess = random.sample(range(1, 5000), RAW_N*4)
with open('mydata.bin', 'wb') as f:
    f.write(struct.pack('1612i', *bytess))

def decode_and_prepare(register):
    register = tf.io.decode_raw(register, out_type=tf.float32)
    inputs = register[2:402]
    label = tf.random.uniform(()) + register[402:]
    return inputs, label

raw_dataset = tf.data.FixedLengthRecordDataset(filenames=['/content/mydata.bin']*7000, record_bytes=RAW_N*4)
raw_dataset = raw_dataset.map(decode_and_prepare)
Training the model without caching or prior iteration:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).batch(32).prefetch(tf.data.AUTOTUNE)
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 4s 3ms/step - loss: 0.1425
Epoch 2/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41be037d0>
Training the model with caching but without prior iteration:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).cache().batch(32).prefetch(tf.data.AUTOTUNE)
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 4s 2ms/step - loss: 0.1428
Epoch 2/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 2s 3ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41fa87810>
Training the model with caching and prior iteration:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).cache().batch(32).prefetch(tf.data.AUTOTUNE)
_ = list(train_ds.as_numpy_iterator()) # iterate dataset beforehand
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 3s 3ms/step - loss: 0.1427
Epoch 2/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41ac9c850>
Conclusion: the caching and the prior iteration of the dataset do seem to have an effect on training time, but keep in mind that only 7000 files were used in this example.
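One detail worth flagging: the snippets above call shuffle before cache, while the second docs note quoted earlier recommends calling shuffle after cache to randomize the iteration order. A reordered sketch of the same pipeline:

# Cache the deterministic map output, then shuffle so each epoch still gets a
# fresh order while elements are served from the cache instead of recomputed.
train_ds = (raw_dataset
            .cache()
            .shuffle(buffer_size=total_data_entries)
            .batch(32)
            .prefetch(tf.data.AUTOTUNE))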

Why does the error of my NN not converge to zero when my input reveals the result

Why does the error of my NN not converge to zero when my input reveals the result? I always set input[2] to the correct result, so the NN should set all weights to 0 except that one.
from random import random
import numpy
from keras.models import Sequential
from keras.layers import Dense
from tensorflow import keras

datax = []
datay = []
for i in range(100000):
    input = []
    for j in range(1000):
        input.append(random())
    yval = random()
    # should be found out by the nn that input[2] is always the correct output
    input[2] = yval
    datax.append(input)
    datay.append(yval)
datax = numpy.array(datax)
datay = numpy.array(datay)

model = Sequential()
model.add(Dense(10))
model.add(Dense(10))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer=keras.optimizers.Adam())
model.fit(datax, datay, epochs=100, batch_size=32, verbose=1)
It oscillates around 1e-05 but never gets much better than that:
Epoch 33/100
3125/3125 [==============================] - 4s 1ms/step - loss: 1.2802e-04
Epoch 34/100
3125/3125 [==============================] - 4s 1ms/step - loss: 3.7720e-05
Epoch 35/100
3125/3125 [==============================] - 4s 1ms/step - loss: 4.0858e-05
Epoch 36/100
3125/3125 [==============================] - 4s 1ms/step - loss: 8.5453e-05
Epoch 37/100
3125/3125 [==============================] - 5s 1ms/step - loss: 5.5722e-05
Epoch 38/100
3125/3125 [==============================] - 5s 1ms/step - loss: 3.6459e-05
Epoch 39/100
3125/3125 [==============================] - 5s 1ms/step - loss: 1.3339e-05
Epoch 40/100
3125/3125 [==============================] - 5s 1ms/step - loss: 5.8943e-05
...
Epoch 100/100
3125/3125 [==============================] - 4s 1ms/step - loss: 1.5929e-05
The gradient descent step is computed as the gradient multiplied by the learning rate, so in theory you cannot reach the exact minimum of the loss function.
Try a decaying learning rate, though (decaying towards zero). If you are lucky, I think it could work because of the discrete nature of floating-point types.
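A sketch of what that could look like in Keras, using the built-in ExponentialDecay schedule; the decay constants here are illustrative, not tuned:

# Shrink the learning rate by 10% every 3125 steps (one epoch at batch_size=32).
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=3125, decay_rate=0.9)
model.compile(loss='mean_squared_error',
              optimizer=keras.optimizers.Adam(learning_rate=lr_schedule))
model.fit(datax, datay, epochs=100, batch_size=32, verbose=1)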

Regarding the accuracy of the Siamese CNN

# We have 2 inputs, 1 for each picture
left_input = Input(img_size)
right_input = Input(img_size)
# We will use 2 instances of 1 network for this task
convnet = MobileNetV2(weights='imagenet', include_top=False, input_shape=img_size,input_tensor=None)
convnet.trainable=True
x=convnet.output
x=tf.keras.layers.GlobalAveragePooling2D()(x)
x=Dense(320,activation='relu')(x)
x=Dropout(0.2)(x)
preds = Dense(101, activation='sigmoid')(x) # Apply sigmoid
convnet = Model(inputs=convnet.input, outputs=preds)
# Connect each 'leg' of the network to each input
# Remember, they have the same weights
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)
# Getting the L1 Distance between the 2 encodings
L1_layer = Lambda(lambda tensor:K.abs(tensor[0] - tensor[1]))
# Add the distance function to the network
L1_distance = L1_layer([encoded_l, encoded_r])
prediction = Dense(1,activation='sigmoid')(L1_distance)
siamese_net = Model(inputs=[left_input,right_input],outputs=prediction)
optimizer = Adam(lr, decay=2.5e-4)
# TODO: get the layerwise learning rates and momentum annealing scheme described in the paper working
siamese_net.compile(loss=keras.losses.binary_crossentropy,optimizer=optimizer,metrics=['accuracy'])
siamese_net.summary()
and the result of training is as follows:
Epoch 1/10
126/126 [==============================] - 169s 1s/step - loss: 0.5683 - accuracy: 0.6840 - val_loss: 0.4644 - val_accuracy: 0.8044
Epoch 2/10
126/126 [==============================] - 163s 1s/step - loss: 0.2032 - accuracy: 0.9795 - val_loss: 0.2117 - val_accuracy: 0.9681
Epoch 3/10
126/126 [==============================] - 163s 1s/step - loss: 0.1110 - accuracy: 0.9925 - val_loss: 0.1448 - val_accuracy: 0.9840
Epoch 4/10
126/126 [==============================] - 164s 1s/step - loss: 0.0844 - accuracy: 0.9950 - val_loss: 0.1384 - val_accuracy: 0.9820
Epoch 5/10
126/126 [==============================] - 163s 1s/step - loss: 0.0634 - accuracy: 0.9990 - val_loss: 0.0829 - val_accuracy: 1.0000
Epoch 6/10
126/126 [==============================] - 165s 1s/step - loss: 0.0526 - accuracy: 0.9995 - val_loss: 0.0729 - val_accuracy: 1.0000
Epoch 7/10
126/126 [==============================] - 164s 1s/step - loss: 0.0465 - accuracy: 0.9995 - val_loss: 0.0641 - val_accuracy: 1.0000
Epoch 8/10
126/126 [==============================] - 163s 1s/step - loss: 0.0463 - accuracy: 0.9985 - val_loss: 0.0595 - val_accuracy: 1.0000
The model predicts with good accuracy when I compare two dissimilar images, and it also predicts really well for images of the same class.
But when I compare image1 with image1 itself, it predicts that they are similar with a probability of only 0.5.
In another case, if I compare image1 with image2 (where image1 and image2 belong to the same class), it predicts correctly with a probability of 0.8.
When I compare individual images it predicts correctly; I have tried different alternatives, but nothing worked.
May I know what the error might be?
The L1 distance between two equal vectors is always zero.
When you pass the same image, the encodings generated are equal (encoded_l is equal to encoded_r). Hence, the input to your final sigmoid layer is a zero vector.
And, sigmoid(0) = 0.5.
This is the reason providing identical inputs to your model gives 0.5 as the output.
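A tiny numerical check of that explanation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Identical images -> identical encodings -> the L1 distance is the zero
# vector, so the final layer's pre-activation is (approximately) zero.
print(sigmoid(0.0))  # 0.5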

InceptionResNetV2 validation accuracy stuck around 20% to 30%

I tried to train a CNN to classify 9 classes of images. Each class has 1000 images for training. I tried training on VGG16 and VGG19; both can achieve a validation accuracy of 90%. But when I tried to train on the InceptionResNetV2 model, the accuracy seems stuck around 20% to 30%. Below is my code for InceptionResNetV2 and the training. What can I do to improve the training?
base_model = tf.keras.applications.InceptionResNetV2(input_shape=(IMG_HEIGHT, IMG_WIDTH, 3),
                                                     weights='imagenet', include_top=False)
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    Flatten(),
    Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
    LeakyReLU(alpha=0.4),
    Dropout(0.5),
    BatchNormalization(),
    Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
    LeakyReLU(alpha=0.4),
    Dense(9, activation='softmax')
])
optimizer_model = tf.keras.optimizers.Adam(learning_rate=0.0001, name='Adam', decay=0.00001)
loss_model = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
model.compile(optimizer_model, loss="categorical_crossentropy", metrics=['accuracy'])
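As a side note, loss_model above is constructed with from_logits=True but never passed to compile, and the final Dense layer already applies a softmax; a consistent version of the last two lines (my correction, not the original code) would be:

# from_logits=False because Dense(9, activation='softmax') already outputs probabilities.
loss_model = tf.keras.losses.CategoricalCrossentropy(from_logits=False)
model.compile(optimizer_model, loss=loss_model, metrics=['accuracy'])

The training output: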
Epoch 1/10
899/899 [==============================] - 255s 283ms/step - loss: 4.3396 - acc: 0.3548 - val_loss: 4.2744 - val_acc: 0.3874
Epoch 2/10
899/899 [==============================] - 231s 257ms/step - loss: 3.5856 - acc: 0.4695 - val_loss: 3.9151 - val_acc: 0.3816
Epoch 3/10
899/899 [==============================] - 225s 250ms/step - loss: 3.1451 - acc: 0.4959 - val_loss: 4.8801 - val_acc: 0.2425
Epoch 4/10
899/899 [==============================] - 227s 252ms/step - loss: 2.7771 - acc: 0.5124 - val_loss: 3.7167 - val_acc: 0.3023
Epoch 5/10
899/899 [==============================] - 231s 257ms/step - loss: 2.4993 - acc: 0.5260 - val_loss: 3.7276 - val_acc: 0.3770
Epoch 6/10
899/899 [==============================] - 227s 252ms/step - loss: 2.3148 - acc: 0.5251 - val_loss: 3.7677 - val_acc: 0.3115
Epoch 7/10
899/899 [==============================] - 234s 260ms/step - loss: 2.1381 - acc: 0.5379 - val_loss: 3.4867 - val_acc: 0.2862
Epoch 8/10
899/899 [==============================] - 230s 256ms/step - loss: 2.0091 - acc: 0.5367 - val_loss: 4.1032 - val_acc: 0.3080
Epoch 9/10
899/899 [==============================] - 225s 251ms/step - loss: 1.9155 - acc: 0.5399 - val_loss: 4.1270 - val_acc: 0.2954
Epoch 10/10
899/899 [==============================] - 232s 258ms/step - loss: 1.8349 - acc: 0.5508 - val_loss: 4.3918 - val_acc: 0.2276
VGG-16/19 has a depth of 23/26 layers, whereas InceptionResNetV2 has a depth of 572 layers. Now, there is minimal domain similarity between medical images and the ImageNet dataset. In VGG, due to the low depth, the features you get are not that complex, and the network is able to classify on the basis of the dense-layer features. In the IRV2 network, however, because it is so much deeper, the output of the final layers is more complex (think of object-like features specific to the ImageNet dataset), so the features obtained from those layers fail to connect to the dense-layer features, and hence the model overfits. I think you get my point.
Check out my answer to a very similar question of yours at this link: Link. It will help improve your accuracy.

Validation loss and validation accuracy are both higher than training loss and accuracy, and fluctuating

I am trying to train my model using transfer learning. For this I am using the VGG16 model, stripped the top layers, and froze the first 2 layers to keep the ImageNet initial weights. For fine-tuning I am using learning rate 0.0001, activation softmax, dropout 0.5, loss categorical crossentropy, optimizer SGD, with 46 classes.
I am just unable to understand the behavior during training. Train loss and accuracy are both fine (loss is decreasing, accuracy is increasing). Validation loss is decreasing and validation accuracy is increasing as well, BUT they are always higher than the train loss and accuracy.
Assuming it's overfitting, I made the model less complex, increased the dropout rate, and added more samples to the validation data, but nothing seemed to work. I am a newbie, so any kind of help is appreciated.
Epoch 1/50
26137/26137 [==============================] - 7446s 285ms/step - loss: 1.1200 - accuracy: 0.3810 - val_loss: 3.1219 - val_accuracy: 0.4467
Epoch 2/50
26137/26137 [==============================] - 7435s 284ms/step - loss: 0.9944 - accuracy: 0.4353 - val_loss: 2.9348 - val_accuracy: 0.4694
Epoch 3/50
26137/26137 [==============================] - 7532s 288ms/step - loss: 0.9561 - accuracy: 0.4530 - val_loss: 1.6025 - val_accuracy: 0.4780
Epoch 4/50
26137/26137 [==============================] - 7436s 284ms/step - loss: 0.9343 - accuracy: 0.4631 - val_loss: 1.3032 - val_accuracy: 0.4860
Epoch 5/50
26137/26137 [==============================] - 7358s 282ms/step - loss: 0.9185 - accuracy: 0.4703 - val_loss: 1.4461 - val_accuracy: 0.4847
Epoch 6/50
26137/26137 [==============================] - 7396s 283ms/step - loss: 0.9083 - accuracy: 0.4748 - val_loss: 1.4093 - val_accuracy: 0.4908
Epoch 7/50
26137/26137 [==============================] - 7424s 284ms/step - loss: 0.8993 - accuracy: 0.4789 - val_loss: 1.4617 - val_accuracy: 0.4939
Epoch 8/50
26137/26137 [==============================] - 7433s 284ms/step - loss: 0.8925 - accuracy: 0.4822 - val_loss: 1.4257 - val_accuracy: 0.4978
Epoch 9/50
26137/26137 [==============================] - 7445s 285ms/step - loss: 0.8868 - accuracy: 0.4851 - val_loss: 1.5568 - val_accuracy: 0.4953
Epoch 10/50
26137/26137 [==============================] - 7387s 283ms/step - loss: 0.8816 - accuracy: 0.4874 - val_loss: 1.4534 - val_accuracy: 0.4970
Epoch 11/50
26137/26137 [==============================] - 7374s 282ms/step - loss: 0.8779 - accuracy: 0.4894 - val_loss: 1.4605 - val_accuracy: 0.4912
Epoch 12/50
26137/26137 [==============================] - 7411s 284ms/step - loss: 0.8733 - accuracy: 0.4915 - val_loss: 1.4694 - val_accuracy: 0.5030
Yes, you are facing an over-fitting issue. To mitigate it, you can try to implement the steps below.
1. Shuffle the data by using shuffle=True in VGG16_model.fit. Code is shown below:
history = VGG16_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
                          validation_data=(x_validation, y_validation), shuffle=True)
2. Use early stopping. Code is shown below (see the sketch after this list for wiring it into fit):
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)
3. Use regularization. Code for regularization is shown below (you can try l1 regularization or l1_l2 regularization as well):
from tensorflow.keras.regularizers import l2

Regularizer = l2(0.001)
VGG16_model.add(Conv2D(96, (11, 11), input_shape=(227, 227, 3), strides=(4, 4), padding='valid',
                       activation='relu', data_format='channels_last',
                       activity_regularizer=Regularizer, kernel_regularizer=Regularizer))
VGG16_model.add(Dense(units=2, activation='sigmoid',
                      activity_regularizer=Regularizer, kernel_regularizer=Regularizer))
4. You can try using BatchNormalization.
5. Perform image data augmentation using ImageDataGenerator. Refer to this link for more info about that.
6. If the pixels are not normalized, dividing the pixel values by 255 also helps.
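To tie points 2, 5, and 6 together, here is a minimal sketch; the variable names follow the snippets above, and the augmentation ranges are illustrative, not tuned:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment and normalize the training images on the fly (points 5 and 6).
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=15,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   horizontal_flip=True)
train_generator = train_datagen.flow(x_train, y_train, batch_size=batch_size)

# Wire the EarlyStopping callback from point 2 into fit; the validation
# images are divided by 255 here on the assumption they are raw 0-255 pixels.
history = VGG16_model.fit(train_generator, epochs=epochs,
                          validation_data=(x_validation / 255.0, y_validation),
                          callbacks=[callback])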