Human Gender Classification - train and validation accuracy not moving - Python

I have 0.3 million images in my training set (Male/Female) and around ~50K images in my test set (Male/Female). I am using the code below; I have also tried adding a few more layers and more units. I am also doing data augmentation and the other things suggested in the Keras docs.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense

targetSize = 64
classifier = Sequential()
classifier.add(Conv2D(filters=32, kernel_size=(3, 3),
                      input_shape=(targetSize, targetSize, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Flatten())
classifier.add(Dropout(rate=0.6))
classifier.add(Dense(units=64, activation='relu'))
classifier.add(Dropout(rate=0.5))
classifier.add(Dense(units=64, activation='relu'))
classifier.add(Dropout(rate=0.2))
classifier.add(Dense(units=1, activation='sigmoid'))
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   height_shift_range=0.2,
                                   width_shift_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory('<train_folder_loc>',
                                                 target_size=(img_size, img_size),
                                                 batch_size=batch_size_train,
                                                 class_mode='binary')
test_set = test_datagen.flow_from_directory('<test_folder_loc>',
                                            target_size=(img_size, img_size),
                                            batch_size=batch_size_test,
                                            class_mode='binary')

classifier.fit_generator(training_set,
                         steps_per_epoch=<train_image_count>/batch_size_train,
                         epochs=n_epoch,
                         validation_data=test_set,
                         validation_steps=<test_image_count>/batch_size_test,
                         use_multiprocessing=True,
                         workers=<mycpu>)
But across the many combinations I have tried, I get results like the ones below: the training and validation accuracy barely move. I trained for up to 100 epochs and it stays almost the same.
Epoch 1/25
11112/11111 [==============================] - 156s 14ms/step - loss: 0.5628 - acc: 0.7403 - val_loss: 0.6001 - val_acc: 0.6967
Epoch 2/25
11112/11111 [==============================] - 156s 14ms/step - loss: 0.5516 - acc: 0.7403 - val_loss: 0.6096 - val_acc: 0.6968
Epoch 3/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5472 - acc: 0.7404 - val_loss: 0.5837 - val_acc: 0.6967
Epoch 4/25
11112/11111 [==============================] - 155s 14ms/step - loss: 0.5437 - acc: 0.7408 - val_loss: 0.5850 - val_acc: 0.6978
Epoch 5/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5409 - acc: 0.7417 - val_loss: 0.5844 - val_acc: 0.6991
Epoch 6/25
11112/11111 [==============================] - 155s 14ms/step - loss: 0.5386 - acc: 0.7420 - val_loss: 0.5828 - val_acc: 0.7011
Epoch 7/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5372 - acc: 0.7427 - val_loss: 0.5856 - val_acc: 0.6984
Epoch 8/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5347 - acc: 0.7437 - val_loss: 0.5847 - val_acc: 0.7017
Epoch 9/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5331 - acc: 0.7444 - val_loss: 0.5770 - val_acc: 0.7017
Epoch 10/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5323 - acc: 0.7443 - val_loss: 0.5803 - val_acc: 0.7037
Epoch 11/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5309 - acc: 0.7453 - val_loss: 0.5877 - val_acc: 0.7018
Epoch 12/25
11112/11111 [==============================] - 155s 14ms/step - loss: 0.5294 - acc: 0.7454 - val_loss: 0.5774 - val_acc: 0.7037
Epoch 13/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5282 - acc: 0.7464 - val_loss: 0.5807 - val_acc: 0.7024
Epoch 14/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5276 - acc: 0.7467 - val_loss: 0.5815 - val_acc: 0.7033
Epoch 15/25
11112/11111 [==============================] - 156s 14ms/step - loss: 0.5269 - acc: 0.7474 - val_loss: 0.5753 - val_acc: 0.7038
Epoch 16/25
11112/11111 [==============================] - 154s 14ms/step - loss: 0.5263 - acc: 0.7477 - val_loss: 0.5825 - val_acc: 0.7039
Epoch 17/25
11112/11111 [==============================] - 155s 14ms/step - loss: 0.5249 - acc: 0.7485 - val_loss: 0.5821 - val_acc: 0.7037
I need your suggestions on this, or any snippet worth trying.

Make sure you are overfitting on a small sample before trying to extend the network.
I would remove some/all of the Dropout layers and see if it improves performance. I think 3 Dropout layers is quite high.
Try reducing the learning rate.
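For the last point, a minimal sketch of what lowering the learning rate looks like (my example, assuming the standalone Keras Adam optimizer used elsewhere in this thread; its default learning rate is 1e-3, so this is a 10x reduction):

from keras.optimizers import Adam

# assumption: 1e-4 instead of Adam's default learning rate of 1e-3
classifier.compile(optimizer=Adam(lr=1e-4),
                   loss='binary_crossentropy',
                   metrics=['accuracy'])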

Try to understand some of the basic principles of CNNs and how they are constructed; implement a simple one that works before putting in arbitrary parameters of your own.
For example, the number of filters in successive convolutions typically increases in powers of two (e.g. 32, 64, 128, etc.). Your use of dropout is also questionable: 0.6 is very high, and stacking three dropout layers the way you have doesn't make much sense.
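As a concrete sketch of that principle (an illustration, not a tuned model, reusing the layer imports from the question): the filters double at each stage and there is a single dropout before the output:

classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Conv2D(64, (3, 3), activation='relu'))     # filters double: 32 -> 64
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Conv2D(128, (3, 3), activation='relu'))    # and again: 64 -> 128
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Flatten())
classifier.add(Dense(128, activation='relu'))
classifier.add(Dropout(0.5))                              # one dropout layer, not three stacked
classifier.add(Dense(1, activation='sigmoid'))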

Hmm, if you look at it closely, it's not that it's not moving; it is moving a bit. There are times when a model only gets better up to a certain point, no matter how long you train it or how many more layers you add. When that happens, it all comes down to the data. I think it would be best to determine what is hindering your model from improving. Also, my friend, training a good model doesn't happen overnight, especially with real-world data, much less with data as complex as images of humans.
If you are following a tutorial that achieved a better score than yours, you could check the versions of the packages they are using, the data they have, and the steps they took, and then, most importantly, re-run the model. There are cases where a model gets different scores on different training runs.
I suggest you try playing with the layers more, or even use a different type of neural network. Failing that, try playing with your data more. 300k images is a lot, but image classification can still be really hard.
Finally, you could look into transfer learning with TensorFlow. You can read about it there; it works by retraining pre-trained image-recognition models. Keras has a tutorial on transfer learning too.
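A minimal transfer-learning sketch along those lines (my example, assuming tensorflow.keras, MobileNetV2 and 224x224 inputs; not taken from the tutorials themselves):

import tensorflow as tf

# pretrained ImageNet backbone with its classifier head removed
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False,
                                         weights='imagenet')
base.trainable = False  # freeze the backbone, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')  # binary male/female output
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])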

Related

TensorFlow model has zero accuracy

I am currently training a model using the Cars196 dataset from Stanford. However, even with the dataset correctly imported and recognized by TensorFlow, my accuracy is still 0. I used a similar approach to train the model on other datasets, and there it works. Did I do anything wrong?
Here is my code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import csv
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Flatten, Dense

car_dir = './src/'
test_dir = './src/cars_test/'
train_dir = './src/cars_train/'
train_labels_file = './src/labels-train.csv'
test_labels_file = './src/labels-test.csv'
IMG_SIZE = (150, 150)

def read_labels(label_file: str):
    pathAndClass = list()
    with open(label_file) as csv_file:
        reader = csv.reader(csv_file)
        next(reader)  # skip first row
        for row in reader:
            pathAndClass.append([row[5].lower(), row[4]])
    return pd.DataFrame(pathAndClass, columns=['path', 'class'])

pathAndClass = read_labels(train_labels_file)
n_classes = np.size(np.unique(pathAndClass['class']))
pathAndClass['path'] = pathAndClass['path'].astype(str)
pathAndClass['class'] = pathAndClass['class'].astype(str)

data_gen = ImageDataGenerator(rescale=1.0/255.0, validation_split=0.25)
BATCH_SIZE = 32

index_list = []
for i in range(0, n_classes):
    index_list.append(str(i))

train_flow = data_gen.flow_from_dataframe(
    dataframe=pathAndClass,
    x_col='path',
    y_col='class',
    directory=train_dir,
    subset="training",
    seed=42,
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    shuffle=True,
    classes=index_list,
    class_mode='categorical')

valid_flow = data_gen.flow_from_dataframe(
    dataframe=pathAndClass,
    x_col='path',
    y_col='class',
    directory=train_dir,
    subset="validation",
    seed=42,
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    shuffle=True,
    classes=index_list,
    class_mode='categorical')

model_nn = Sequential()
model_nn.add(Flatten(input_shape=(150, 150, 3)))
model_nn.add(Dense(300, activation="relu"))
model_nn.add(Dense(n_classes, activation="softmax"))
model_nn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model_nn.summary())

training = model_nn.fit(
    train_flow,
    steps_per_epoch=train_flow.n // train_flow.batch_size,
    epochs=10,
    validation_data=valid_flow,
    validation_steps=valid_flow.n // valid_flow.batch_size)

print(model_nn.evaluate(train_flow))
plt.plot(training.history['accuracy'])
plt.plot(training.history['val_accuracy'])
plt.plot(training.history['loss'])
plt.plot(training.history['val_loss'])
plt.title('Model accuracy/loss')
plt.ylabel('accuracy/loss')
plt.xlabel('epoch')
plt.legend(['accuracy', 'val_accuracy', 'loss', 'val_loss'])
plt.show()
The output I got:
Found 6078 validated image filenames belonging to 196 classes.
Found 2026 validated image filenames belonging to 196 classes.
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_1 (Flatten) (None, 67500) 0
_________________________________________________________________
dense_2 (Dense) (None, 300) 20250300
_________________________________________________________________
dense_3 (Dense) (None, 196) 58996
=================================================================
Total params: 20,309,296
Trainable params: 20,309,296
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/10
189/189 [==============================] - 68s 361ms/step - loss: 9.6809 - accuracy: 0.0036 - val_loss: 5.2785 - val_accuracy: 0.0030
Epoch 2/10
189/189 [==============================] - 58s 307ms/step - loss: 5.2770 - accuracy: 0.0055 - val_loss: 5.2785 - val_accuracy: 0.0089
Epoch 3/10
189/189 [==============================] - 58s 307ms/step - loss: 5.2743 - accuracy: 0.0083 - val_loss: 5.2793 - val_accuracy: 0.0104
Epoch 4/10
189/189 [==============================] - 58s 306ms/step - loss: 5.2728 - accuracy: 0.0089 - val_loss: 5.2800 - val_accuracy: 0.0089
Epoch 5/10
189/189 [==============================] - 58s 307ms/step - loss: 5.2710 - accuracy: 0.0084 - val_loss: 5.2806 - val_accuracy: 0.0089
Epoch 6/10
189/189 [==============================] - 57s 305ms/step - loss: 5.2698 - accuracy: 0.0086 - val_loss: 5.2815 - val_accuracy: 0.0089
Epoch 7/10
189/189 [==============================] - 58s 307ms/step - loss: 5.2695 - accuracy: 0.0083 - val_loss: 5.2822 - val_accuracy: 0.0089
Epoch 8/10
189/189 [==============================] - 58s 310ms/step - loss: 5.2681 - accuracy: 0.0086 - val_loss: 5.2834 - val_accuracy: 0.0089
Epoch 9/10
189/189 [==============================] - 58s 306ms/step - loss: 5.2679 - accuracy: 0.0083 - val_loss: 5.2840 - val_accuracy: 0.0089
Epoch 10/10
189/189 [==============================] - 58s 308ms/step - loss: 5.2669 - accuracy: 0.0083 - val_loss: 5.2848 - val_accuracy: 0.0089
1578/Unknown - 339s 215ms/step - loss: 5.2657 - accuracy: 0.0085
Update 1
I increased the number of training steps by decreasing the batch size to 8 and tried to train the model again. However, the accuracy is still nearly 0.
Epoch 1/10
759/759 [==============================] - 112s 147ms/step - loss: 7.6876 - accuracy: 0.0051 - val_loss: 5.2779 - val_accuracy: 0.0089
Epoch 2/10
759/759 [==============================] - 112s 148ms/step - loss: 5.2728 - accuracy: 0.0086 - val_loss: 5.2792 - val_accuracy: 0.0089
Epoch 3/10
759/759 [==============================] - 112s 148ms/step - loss: 5.2695 - accuracy: 0.0087 - val_loss: 5.2808 - val_accuracy: 0.0089
Epoch 4/10
759/759 [==============================] - 109s 143ms/step - loss: 5.2671 - accuracy: 0.0087 - val_loss: 5.2828 - val_accuracy: 0.0089
Epoch 5/10
759/759 [==============================] - 111s 146ms/step - loss: 5.2661 - accuracy: 0.0086 - val_loss: 5.2844 - val_accuracy: 0.0089
Epoch 6/10
759/759 [==============================] - 114s 151ms/step - loss: 5.2648 - accuracy: 0.0089 - val_loss: 5.2862 - val_accuracy: 0.0089
Epoch 7/10
759/759 [==============================] - 118s 156ms/step - loss: 5.2646 - accuracy: 0.0086 - val_loss: 5.2881 - val_accuracy: 0.0089
Epoch 8/10
759/759 [==============================] - 117s 155ms/step - loss: 5.2639 - accuracy: 0.0087 - val_loss: 5.2891 - val_accuracy: 0.0089
Epoch 9/10
759/759 [==============================] - 115s 151ms/step - loss: 5.2635 - accuracy: 0.0087 - val_loss: 5.2903 - val_accuracy: 0.0089
Epoch 10/10
759/759 [==============================] - 112s 147ms/step - loss: 5.2634 - accuracy: 0.0086 - val_loss: 5.2915 - val_accuracy: 0.0089
2390/Unknown - 141s 59ms/step - loss: 5.2611 - accuracy: 0.0088
Indeed, the last dataset I used had fewer classes but more samples. Maybe there is another model that fits my dataset; any suggestions?
For computer vision problems, you want to look at Convolutional Neural Networks. If you're unfamiliar with them, they learn to identify features in images; examples would be edges and textures in early layers, and then wheels, windows, doors, etc. in later layers.
For this problem, I would suggest using an existing, pretrained network such as MobileNet V2 or InceptionNetV3 as a backbone, and then building your own classifier on top. This tutorial on the Tensorflow website will get you started https://www.tensorflow.org/tutorials/images/transfer_learning#create_the_base_model_from_the_pre-trained_convnets
Here's an excerpt from this tutorial:
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
Then adding your model code from above, you could try:
model = tf.keras.Sequential([
    base_model,
    Flatten(),
    Dense(300, activation="relu"),
    Dense(n_classes, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
This is the model I've used on similar datasets and got reasonable accuracy with it:
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
model = tf.keras.Sequential([
    base_model,
    GlobalAveragePooling2D(),
    Dense(n_classes, activation='softmax')
])
In your current model, you are not extracting any features from the images. A single hidden layer with 300 neurons is nowhere near enough to learn the features in images and give meaningful results.
You also need to check your input image size: MobileNet V2 works well with 224x224 colour images.
As per the other comments, you will need to use the full dataset; you are not going to get any meaningful results with a few hundred images.
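For the image-size point, the change is just in the constants that feed the generator and the backbone (a sketch using the names from the question):

IMG_SIZE = (224, 224)        # MobileNet V2's native resolution
IMG_SHAPE = IMG_SIZE + (3,)  # colour images, so 3 channels
# pass target_size=IMG_SIZE to flow_from_dataframe and
# input_shape=IMG_SHAPE to the MobileNetV2 base model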
I would suggest that ~6000 training samples for almost 200 classes is simply way too little for the model to work well.
The model did ~2000 weight updates (about 200 in each epoch), which is far too few for it to learn to distinguish between ~200 classes.
Maybe you had fewer classes and more training data in the other training sets?

Cubic equation gets high loss [closed]

I'm trying to learn some machine learning, and after looking up some tutorials I managed to train a linear regression and a second-degree equation with acceptable precision. I then decided to step it up a notch and try y = x^3 + 9x^2.
Until now everything had worked fine, but with this new set my loss stays above 100k the whole time and predictions are off by about ±100.
Here is a list of the things I tried:
Increase the number of layers
Increase the number of neurons
Increase the number of layers and neurons
Vary the batch size
Increase and decrease the learning rate
Divide the number of epochs by 3 and train it 3 times, feeding it a random data set each time
Remove the kernel_regularizer (I still have to understand what this does)
None of these solutions worked; each time the loss was above 100k. Moreover, I noticed that it's not a steady decrease: the resulting loss looks pretty random, going from 100k to 800k, down again to 400k, then up to 1 million and down again... you can only see that the average loss is going down, but it's still hard to tell in that randomness.
Some examples:
Epoch 832/10000
32/32 [==============================] - 0s 3ms/step - loss: 757260.0625 - val_loss: 624795.0000
Epoch 833/10000
32/32 [==============================] - 0s 3ms/step - loss: 784539.6250 - val_loss: 257286.3906
Epoch 834/10000
32/32 [==============================] - 0s 3ms/step - loss: 481110.4688 - val_loss: 246353.5469
Epoch 835/10000
32/32 [==============================] - 0s 3ms/step - loss: 383954.2812 - val_loss: 508324.5312
Epoch 836/10000
32/32 [==============================] - 0s 3ms/step - loss: 516217.7188 - val_loss: 543258.3750
Epoch 837/10000
32/32 [==============================] - 0s 3ms/step - loss: 1042559.3125 - val_loss: 1702137.1250
Epoch 838/10000
32/32 [==============================] - 0s 3ms/step - loss: 3192045.2500 - val_loss: 1154483.5000
Epoch 839/10000
32/32 [==============================] - 0s 3ms/step - loss: 1195508.7500 - val_loss: 4658847.0000
Epoch 840/10000
32/32 [==============================] - 0s 3ms/step - loss: 1251505.8750 - val_loss: 275300.7188
Epoch 841/10000
32/32 [==============================] - 0s 3ms/step - loss: 294105.2188 - val_loss: 330317.0000
Epoch 842/10000
32/32 [==============================] - 0s 3ms/step - loss: 528083.4375 - val_loss: 4624526.0000
Epoch 843/10000
32/32 [==============================] - 0s 4ms/step - loss: 3371695.2500 - val_loss: 2008547.0000
Epoch 844/10000
32/32 [==============================] - 0s 3ms/step - loss: 723132.8125 - val_loss: 884099.5625
Epoch 845/10000
32/32 [==============================] - 0s 3ms/step - loss: 635335.8750 - val_loss: 372132.1562
Epoch 846/10000
32/32 [==============================] - 0s 3ms/step - loss: 424794.2812 - val_loss: 349575.8438
Epoch 847/10000
32/32 [==============================] - 0s 3ms/step - loss: 266175.3125 - val_loss: 247624.6719
Epoch 848/10000
32/32 [==============================] - 0s 3ms/step - loss: 387106.7500 - val_loss: 1091736.7500
This was my original (and cleaner) code:
import tensorflow as tf
import numpy as np
from tensorflow import keras

model = tf.keras.Sequential([
    keras.layers.Dense(units=8, activation='relu', input_shape=[1],
                       kernel_regularizer=keras.regularizers.l2(0.001)),
    keras.layers.Dense(units=8, activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)),
    keras.layers.Dense(units=8, activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)),
    keras.layers.Dense(units=8, activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)),
    keras.layers.Dense(units=1)])

lr = 1e-1
decay = lr / 10000
optimizer = keras.optimizers.Adam(lr=lr, decay=decay)
model.compile(optimizer=optimizer, loss='mean_squared_error')

xs = np.random.random((10000, 1)) * 100 - 50
ys = xs**3 + 9 * xs**2
model.fit(xs, ys, epochs=10000, batch_size=256, validation_split=0.2)
print(model.predict([10.0]))

resp = input('Want to save model? y/n: ')
if resp == 'y':
    model.save('zig-zag')
I also found this question, where the reported solution was to use relu, but I already had that implemented, and copying the code didn't work either.
Am I missing something? What, and why?
For numerical reasons, neural networks often don't play nicely with very large, essentially unbounded numbers. So just reducing the range of values for x from -50..50 to -5..5 will let your model train.
In your case you also want to remove the l2 regularizer, since you can't really overfit here, and definitely not use a decay of 1e-5. I gave it a go with lr=1e-2 and decay=lr/2:
Epoch 1000/1000
32/32 [==============================] - 0s 2ms/step - loss: 0.1471 - val_loss: 0.1370
Full code:
import tensorflow as tf
import numpy as np
from tensorflow import keras

model = tf.keras.Sequential([
    keras.layers.Dense(units=8, activation='relu', input_shape=[1]),
    keras.layers.Dense(units=8, activation='relu'),
    keras.layers.Dense(units=8, activation='relu'),
    keras.layers.Dense(units=8, activation='relu'),
    keras.layers.Dense(units=1)])

lr = 1e-2
decay = lr / 2
optimizer = keras.optimizers.Adam(lr=lr, decay=decay)
model.compile(optimizer=optimizer, loss='mean_squared_error')

xs = np.random.random((10000, 1)) * 10 - 5
ys = xs**3 + 9 * xs**2
print(np.shape(xs))
print(np.shape(ys))
model.fit(xs, ys, epochs=1000, batch_size=256, validation_split=0.2)
print(model.predict([4.0]))

Regarding the accuracy of the Siamese CNN

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Input, Dense, Dropout, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# We have 2 inputs, 1 for each picture
left_input = Input(img_size)
right_input = Input(img_size)

# We will use 2 instances of 1 network for this task
convnet = MobileNetV2(weights='imagenet', include_top=False,
                      input_shape=img_size, input_tensor=None)
convnet.trainable = True
x = convnet.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = Dense(320, activation='relu')(x)
x = Dropout(0.2)(x)
preds = Dense(101, activation='sigmoid')(x)  # apply sigmoid
convnet = Model(inputs=convnet.input, outputs=preds)

# Connect each 'leg' of the network to each input
# Remember, they have the same weights
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)

# Getting the L1 distance between the 2 encodings
L1_layer = Lambda(lambda tensors: K.abs(tensors[0] - tensors[1]))

# Add the distance function to the network
L1_distance = L1_layer([encoded_l, encoded_r])
prediction = Dense(1, activation='sigmoid')(L1_distance)

siamese_net = Model(inputs=[left_input, right_input], outputs=prediction)
optimizer = Adam(lr, decay=2.5e-4)
# TODO: get layerwise learning rates and momentum annealing scheme described in paper working
siamese_net.compile(loss=keras.losses.binary_crossentropy, optimizer=optimizer, metrics=['accuracy'])
siamese_net.summary()
And the result of training is as follows:
Epoch 1/10
126/126 [==============================] - 169s 1s/step - loss: 0.5683 - accuracy: 0.6840 - val_loss: 0.4644 - val_accuracy: 0.8044
Epoch 2/10
126/126 [==============================] - 163s 1s/step - loss: 0.2032 - accuracy: 0.9795 - val_loss: 0.2117 - val_accuracy: 0.9681
Epoch 3/10
126/126 [==============================] - 163s 1s/step - loss: 0.1110 - accuracy: 0.9925 - val_loss: 0.1448 - val_accuracy: 0.9840
Epoch 4/10
126/126 [==============================] - 164s 1s/step - loss: 0.0844 - accuracy: 0.9950 - val_loss: 0.1384 - val_accuracy: 0.9820
Epoch 5/10
126/126 [==============================] - 163s 1s/step - loss: 0.0634 - accuracy: 0.9990 - val_loss: 0.0829 - val_accuracy: 1.0000
Epoch 6/10
126/126 [==============================] - 165s 1s/step - loss: 0.0526 - accuracy: 0.9995 - val_loss: 0.0729 - val_accuracy: 1.0000
Epoch 7/10
126/126 [==============================] - 164s 1s/step - loss: 0.0465 - accuracy: 0.9995 - val_loss: 0.0641 - val_accuracy: 1.0000
Epoch 8/10
126/126 [==============================] - 163s 1s/step - loss: 0.0463 - accuracy: 0.9985 - val_loss: 0.0595 - val_accuracy: 1.0000
The model predicts with good accuracy when I compare two dissimilar images, and it also predicts really well on images of the same class.
But when I compare image1 with image1 itself, it predicts that they are similar with a probability of only 0.5.
In the other case, if I compare image1 with image2 (where image1 and image2 belong to the same class), it predicts correctly, with a probability of 0.8.
When I compare individual images it predicts correctly; I have tried different alternatives, but nothing worked out.
May I know what the error might be?
The L1 distance between two equal vectors is always zero.
When you pass the same image, the encodings generated are equal (encoded_l is equal to encoded_r). Hence, the input to your final sigmoid layer is a zero vector.
And, sigmoid(0) = 0.5.
This is the reason providing identical inputs to your model gives 0.5 as the output.
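A quick numeric check of that reasoning (ignoring the final Dense layer's bias, which would shift the zero input slightly):

import numpy as np

# identical inputs -> identical encodings -> L1 distance is the zero vector
enc = np.array([0.31, -1.2, 0.77])
print(np.abs(enc - enc))           # [0. 0. 0.]

# and sigmoid(0) is exactly one half
print(1.0 / (1.0 + np.exp(-0.0)))  # 0.5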

InceptionResNetV2 validation accuracy stuck around 20% to 30%

I tried to train a CNN to classify 9 classes of image, with 1000 training images per class. I tried training on VGG16 and VGG19; both can achieve a validation accuracy of 90%. But when I tried to train on the InceptionResNetV2 model, the validation accuracy seems stuck around 20% to 30%. Below is my code for InceptionResNetV2 and the training. What can I do to improve the training?
import tensorflow as tf
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Flatten, Dense, LeakyReLU, Dropout, BatchNormalization

base_model = tf.keras.applications.InceptionResNetV2(input_shape=(IMG_HEIGHT, IMG_WIDTH, 3),
                                                     weights='imagenet',
                                                     include_top=False)
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    Flatten(),
    Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
    LeakyReLU(alpha=0.4),
    Dropout(0.5),
    BatchNormalization(),
    Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
    LeakyReLU(alpha=0.4),
    Dense(9, activation='softmax')])

optimizer_model = tf.keras.optimizers.Adam(learning_rate=0.0001, name='Adam', decay=0.00001)
loss_model = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
model.compile(optimizer_model, loss='categorical_crossentropy', metrics=['accuracy'])
Epoch 1/10
899/899 [==============================] - 255s 283ms/step - loss: 4.3396 - acc: 0.3548 - val_loss: 4.2744 - val_acc: 0.3874
Epoch 2/10
899/899 [==============================] - 231s 257ms/step - loss: 3.5856 - acc: 0.4695 - val_loss: 3.9151 - val_acc: 0.3816
Epoch 3/10
899/899 [==============================] - 225s 250ms/step - loss: 3.1451 - acc: 0.4959 - val_loss: 4.8801 - val_acc: 0.2425
Epoch 4/10
899/899 [==============================] - 227s 252ms/step - loss: 2.7771 - acc: 0.5124 - val_loss: 3.7167 - val_acc: 0.3023
Epoch 5/10
899/899 [==============================] - 231s 257ms/step - loss: 2.4993 - acc: 0.5260 - val_loss: 3.7276 - val_acc: 0.3770
Epoch 6/10
899/899 [==============================] - 227s 252ms/step - loss: 2.3148 - acc: 0.5251 - val_loss: 3.7677 - val_acc: 0.3115
Epoch 7/10
899/899 [==============================] - 234s 260ms/step - loss: 2.1381 - acc: 0.5379 - val_loss: 3.4867 - val_acc: 0.2862
Epoch 8/10
899/899 [==============================] - 230s 256ms/step - loss: 2.0091 - acc: 0.5367 - val_loss: 4.1032 - val_acc: 0.3080
Epoch 9/10
899/899 [==============================] - 225s 251ms/step - loss: 1.9155 - acc: 0.5399 - val_loss: 4.1270 - val_acc: 0.2954
Epoch 10/10
899/899 [==============================] - 232s 258ms/step - loss: 1.8349 - acc: 0.5508 - val_loss: 4.3918 - val_acc: 0.2276
VGG-16/19 has a depth of 23/26 layers, whereas InceptionResNetV2 has a depth of 572 layers. There is minimal domain similarity between medical images and the ImageNet dataset. In VGG, because of the low depth, the features you get are not that complex, and the network can classify on the basis of the Dense-layer features. In the IRV2 network, however, since it is so much deeper, the output of the final convolutional block is far more complex (think object-like features, but for the ImageNet dataset); the features obtained from those layers don't connect well to the Dense-layer features, and hence the model overfits. I think you get my point.
Check out my answer to a very similar question of yours at this link: Link. It will help improve your accuracy.
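One concrete thing to try along those lines (my suggestion here, not necessarily what the linked answer proposes): replace the Flatten + wide Dense head with global average pooling, which drastically reduces the number of features feeding the classifier and with it the tendency to overfit:

model = tf.keras.Sequential([
    base_model,                                # the frozen InceptionResNetV2 from the question
    tf.keras.layers.GlobalAveragePooling2D(),  # ~1536 features instead of a huge Flatten()
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(9, activation='softmax')])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])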

Validation loss and validation accuracy are both higher than the training loss and accuracy, and fluctuate

I am trying to train my model using transfer learning. For this I am using the VGG16 model, stripped the top layers, and froze the first 2 layers to use the ImageNet initial weights. For fine-tuning I am using a learning rate of 0.0001, softmax activation, dropout 0.5, categorical cross-entropy loss, the SGD optimizer, and 46 classes.
I am just unable to understand the behaviour while training. The training loss and accuracy are both fine (loss is decreasing, accuracy is increasing). The validation loss is decreasing and validation accuracy is increasing as well, BUT they are always higher than the training loss and accuracy.
Assuming it is overfitting, I made the model less complex, increased the dropout rate, and added more samples to the validation data, but nothing seemed to work. I am a newbie, so any kind of help is appreciated.
Epoch 1/50
26137/26137 [==============================] - 7446s 285ms/step - loss: 1.1200 - accuracy: 0.3810 - val_loss: 3.1219 - val_accuracy: 0.4467
Epoch 2/50
26137/26137 [==============================] - 7435s 284ms/step - loss: 0.9944 - accuracy: 0.4353 - val_loss: 2.9348 - val_accuracy: 0.4694
Epoch 3/50
26137/26137 [==============================] - 7532s 288ms/step - loss: 0.9561 - accuracy: 0.4530 - val_loss: 1.6025 - val_accuracy: 0.4780
Epoch 4/50
26137/26137 [==============================] - 7436s 284ms/step - loss: 0.9343 - accuracy: 0.4631 - val_loss: 1.3032 - val_accuracy: 0.4860
Epoch 5/50
26137/26137 [==============================] - 7358s 282ms/step - loss: 0.9185 - accuracy: 0.4703 - val_loss: 1.4461 - val_accuracy: 0.4847
Epoch 6/50
26137/26137 [==============================] - 7396s 283ms/step - loss: 0.9083 - accuracy: 0.4748 - val_loss: 1.4093 - val_accuracy: 0.4908
Epoch 7/50
26137/26137 [==============================] - 7424s 284ms/step - loss: 0.8993 - accuracy: 0.4789 - val_loss: 1.4617 - val_accuracy: 0.4939
Epoch 8/50
26137/26137 [==============================] - 7433s 284ms/step - loss: 0.8925 - accuracy: 0.4822 - val_loss: 1.4257 - val_accuracy: 0.4978
Epoch 9/50
26137/26137 [==============================] - 7445s 285ms/step - loss: 0.8868 - accuracy: 0.4851 - val_loss: 1.5568 - val_accuracy: 0.4953
Epoch 10/50
26137/26137 [==============================] - 7387s 283ms/step - loss: 0.8816 - accuracy: 0.4874 - val_loss: 1.4534 - val_accuracy: 0.4970
Epoch 11/50
26137/26137 [==============================] - 7374s 282ms/step - loss: 0.8779 - accuracy: 0.4894 - val_loss: 1.4605 - val_accuracy: 0.4912
Epoch 12/50
26137/26137 [==============================] - 7411s 284ms/step - loss: 0.8733 - accuracy: 0.4915 - val_loss: 1.4694 - val_accuracy: 0.5030
Yes, you are facing an over-fitting issue. To mitigate it, you can try to implement the steps below; a sketch combining several of them follows the list.
1. Shuffle the data, by using shuffle=True in VGG16_model.fit. Code is shown below:
history = VGG16_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
                          validation_data=(x_validation, y_validation), shuffle=True)
2. Use early stopping. Code is shown below:
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)
3. Use regularization. Code for regularization is shown below (you can try l1 regularization or l1_l2 regularization as well):
from tensorflow.keras.regularizers import l2
Regularizer = l2(0.001)
VGG16_model.add(Conv2D(96, (11, 11), input_shape=(227, 227, 3), strides=(4, 4), padding='valid',
                       activation='relu', data_format='channels_last',
                       activity_regularizer=Regularizer, kernel_regularizer=Regularizer))
VGG16_model.add(Dense(units=2, activation='sigmoid',
                      activity_regularizer=Regularizer, kernel_regularizer=Regularizer))
4. You can try using BatchNormalization.
5. Perform image data augmentation using ImageDataGenerator. Refer to this link for more info about that.
6. If the pixels are not normalized, dividing the pixel values by 255 also helps.
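Putting points 1, 2, 5 and 6 together, a sketch of how they combine in one training call (the train/validation arrays and the batch_size/epochs variables are placeholders for your own):

from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# points 5 and 6: augmentation plus pixel normalization
train_datagen = ImageDataGenerator(rescale=1./255, horizontal_flip=True, zoom_range=0.2)
train_flow = train_datagen.flow(x_train, y_train, batch_size=batch_size, shuffle=True)  # point 1

early_stop = EarlyStopping(monitor='val_loss', patience=15, restore_best_weights=True)  # point 2

history = VGG16_model.fit(train_flow,
                          epochs=epochs,
                          validation_data=(x_validation / 255.0, y_validation),
                          callbacks=[early_stop])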
