I'm trying to do a simple classification of road pictures (one-way / two-way) with a CNN. My dataset consists of ~4k images of class 1 and ~4k of class 2, so the classes should be balanced; each class is stored in a different folder.
But the metrics make sudden "jumps". I tried different input shapes, different optimizers ('adam', 'rmsprop'), and different batch sizes (10, 16, 20), and I get the same result. Does anyone know what causes this behavior?
code:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense, Dropout
from keras.preprocessing.image import ImageDataGenerator

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(300, 300, 3)))  # note: no activation, so this conv is linear
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))  # linear as well
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
batch_size = 10

train_datagen = ImageDataGenerator(
    featurewise_std_normalization=True,  # NB: has no effect unless datagen.fit(sample_images) is called first (Keras warns about this at runtime)
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

test_datagen = ImageDataGenerator(featurewise_std_normalization=True,
                                  rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(300, 300),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    'data/validation',
    target_size=(300, 300),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=2000 // batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=800 // batch_size)
When I run this code, I obtain this result:
Epoch 1/50
125/125 [==============================] - 253s 2s/step - loss: 0.8142 - acc: 0.5450 - val_loss: 0.4937 - val_acc: 0.8662
Epoch 2/50
125/125 [==============================] - 254s 2s/step - loss: 0.6748 - acc: 0.5980 - val_loss: 0.5782 - val_acc: 0.7859
Epoch 3/50
125/125 [==============================] - 255s 2s/step - loss: 0.6679 - acc: 0.6580 - val_loss: 0.5068 - val_acc: 0.8562
Epoch 4/50
125/125 [==============================] - 255s 2s/step - loss: 0.6438 - acc: 0.6780 - val_loss: 0.5018 - val_acc: 0.8766
Epoch 5/50
125/125 [==============================] - 257s 2s/step - loss: 0.6427 - acc: 0.7245 - val_loss: 0.3760 - val_acc: 0.9213
Epoch 6/50
125/125 [==============================] - 256s 2s/step - loss: 0.5635 - acc: 0.7435 - val_loss: 0.6140 - val_acc: 0.6398
Epoch 7/50
125/125 [==============================] - 254s 2s/step - loss: 0.6226 - acc: 0.7320 - val_loss: 0.1852 - val_acc: 0.9433
Epoch 8/50
125/125 [==============================] - 252s 2s/step - loss: 0.4858 - acc: 0.7765 - val_loss: 0.1617 - val_acc: 0.9437
Epoch 9/50
125/125 [==============================] - 253s 2s/step - loss: 0.4433 - acc: 0.8120 - val_loss: 0.5577 - val_acc: 0.6788
Epoch 10/50
125/125 [==============================] - 252s 2s/step - loss: 0.4621 - acc: 0.7935 - val_loss: 0.1000 - val_acc: 0.9762
Epoch 11/50
125/125 [==============================] - 254s 2s/step - loss: 0.4572 - acc: 0.8035 - val_loss: 0.3797 - val_acc: 0.8161
Epoch 12/50
125/125 [==============================] - 257s 2s/step - loss: 0.4707 - acc: 0.8105 - val_loss: 0.0903 - val_acc: 0.9761
Epoch 13/50
125/125 [==============================] - 254s 2s/step - loss: 0.4134 - acc: 0.8390 - val_loss: 0.1587 - val_acc: 0.9437
Epoch 14/50
125/125 [==============================] - 252s 2s/step - loss: 0.4023 - acc: 0.8355 - val_loss: 0.1149 - val_acc: 0.9584
Epoch 15/50
125/125 [==============================] - 253s 2s/step - loss: 0.4286 - acc: 0.8255 - val_loss: 0.0897 - val_acc: 0.9700
Epoch 16/50
125/125 [==============================] - 253s 2s/step - loss: 0.4665 - acc: 0.8140 - val_loss: 0.6411 - val_acc: 0.8136
Epoch 17/50
125/125 [==============================] - 252s 2s/step - loss: 0.4010 - acc: 0.8315 - val_loss: 0.1205 - val_acc: 0.9736
Epoch 18/50
125/125 [==============================] - 253s 2s/step - loss: 0.3790 - acc: 0.8550 - val_loss: 0.0993 - val_acc: 0.9613
Epoch 19/50
125/125 [==============================] - 251s 2s/step - loss: 0.3717 - acc: 0.8620 - val_loss: 0.1154 - val_acc: 0.9748
Epoch 20/50
125/125 [==============================] - 250s 2s/step - loss: 0.4434 - acc: 0.8405 - val_loss: 0.1251 - val_acc: 0.9537
Epoch 21/50
125/125 [==============================] - 253s 2s/step - loss: 0.4535 - acc: 0.7545 - val_loss: 0.6766 - val_acc: 0.3640
Epoch 22/50
125/125 [==============================] - 252s 2s/step - loss: 0.7482 - acc: 0.7140 - val_loss: 0.4803 - val_acc: 0.7950
Epoch 23/50
125/125 [==============================] - 252s 2s/step - loss: 0.3712 - acc: 0.8585 - val_loss: 0.1056 - val_acc: 0.9685
Epoch 24/50
125/125 [==============================] - 251s 2s/step - loss: 0.3836 - acc: 0.8545 - val_loss: 0.1267 - val_acc: 0.9673
Epoch 25/50
125/125 [==============================] - 250s 2s/step - loss: 0.3879 - acc: 0.8805 - val_loss: 0.8669 - val_acc: 0.8100
Epoch 26/50
125/125 [==============================] - 250s 2s/step - loss: 0.3735 - acc: 0.8825 - val_loss: 0.1472 - val_acc: 0.9685
Epoch 27/50
125/125 [==============================] - 250s 2s/step - loss: 0.4577 - acc: 0.8620 - val_loss: 0.3285 - val_acc: 0.8925
Epoch 28/50
125/125 [==============================] - 252s 2s/step - loss: 0.3805 - acc: 0.8875 - val_loss: 0.3930 - val_acc: 0.7821
Epoch 29/50
125/125 [==============================] - 250s 2s/step - loss: 0.3565 - acc: 0.8930 - val_loss: 0.1087 - val_acc: 0.9647
Epoch 30/50
125/125 [==============================] - 250s 2s/step - loss: 0.4680 - acc: 0.8845 - val_loss: 0.1012 - val_acc: 0.9688
Epoch 31/50
125/125 [==============================] - 250s 2s/step - loss: 0.3293 - acc: 0.9080 - val_loss: 0.0700 - val_acc: 0.9811
Epoch 32/50
125/125 [==============================] - 250s 2s/step - loss: 0.4197 - acc: 0.9060 - val_loss: 0.1464 - val_acc: 0.9700
Epoch 33/50
125/125 [==============================] - 251s 2s/step - loss: 0.3656 - acc: 0.9005 - val_loss: 8.8236 - val_acc: 0.4307
Epoch 34/50
125/125 [==============================] - 249s 2s/step - loss: 0.4593 - acc: 0.9015 - val_loss: 4.3916 - val_acc: 0.6826
Epoch 35/50
125/125 [==============================] - 250s 2s/step - loss: 0.4824 - acc: 0.8605 - val_loss: 0.0748 - val_acc: 0.9850
Epoch 36/50
125/125 [==============================] - 250s 2s/step - loss: 0.4629 - acc: 0.8875 - val_loss: 0.2257 - val_acc: 0.8728
Epoch 37/50
125/125 [==============================] - 250s 2s/step - loss: 0.3708 - acc: 0.9075 - val_loss: 0.1196 - val_acc: 0.9537
Epoch 38/50
125/125 [==============================] - 250s 2s/step - loss: 0.9151 - acc: 0.8605 - val_loss: 0.1266 - val_acc: 0.9559
Epoch 39/50
125/125 [==============================] - 250s 2s/step - loss: 0.3700 - acc: 0.9035 - val_loss: 0.1038 - val_acc: 0.9812
Epoch 40/50
125/125 [==============================] - 249s 2s/step - loss: 0.5900 - acc: 0.8625 - val_loss: 0.0838 - val_acc: 0.9887
Epoch 41/50
125/125 [==============================] - 250s 2s/step - loss: 0.4409 - acc: 0.9065 - val_loss: 0.0828 - val_acc: 0.9773
Epoch 42/50
125/125 [==============================] - 250s 2s/step - loss: 0.3415 - acc: 0.9115 - val_loss: 0.8084 - val_acc: 0.8788
Epoch 43/50
125/125 [==============================] - 250s 2s/step - loss: 0.5181 - acc: 0.8440 - val_loss: 0.0998 - val_acc: 0.9786
Epoch 44/50
125/125 [==============================] - 249s 2s/step - loss: 0.3270 - acc: 0.8970 - val_loss: 0.1155 - val_acc: 0.9625
Epoch 45/50
125/125 [==============================] - 250s 2s/step - loss: 0.3810 - acc: 0.9125 - val_loss: 0.2881 - val_acc: 0.9484
Epoch 46/50
125/125 [==============================] - 249s 2s/step - loss: 0.3499 - acc: 0.9220 - val_loss: 0.3109 - val_acc: 0.8564
Epoch 47/50
125/125 [==============================] - 250s 2s/step - loss: 0.3505 - acc: 0.9160 - val_loss: 0.0861 - val_acc: 0.9788
Epoch 48/50
125/125 [==============================] - 250s 2s/step - loss: 0.3073 - acc: 0.9225 - val_loss: 0.0999 - val_acc: 0.9874
Epoch 49/50
125/125 [==============================] - 250s 2s/step - loss: 0.4418 - acc: 0.9000 - val_loss: 0.0301 - val_acc: 0.9925
Epoch 50/50
125/125 [==============================] - 250s 2s/step - loss: 0.3501 - acc: 0.9190 - val_loss: 0.0351 - val_acc: 0.9861
Is this overfitting, or just the position of the randomly initialized parameters on the loss surface? I will try to find other pictures to build a new validation dataset...
"each class is stored in different folder"
So do you mean 1 class is inside 'train' folder,
and another class is inside 'validate' folder?
try setting batch size to 32
and size of training vs validate sets to the ratio of 0.8 vs 0.2
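As a minimal sketch of that split (scikit-learn on placeholder arrays, since the real image folders aren't shown here; a stratified split keeps the two classes balanced on both sides):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-in for a labelled dataset (real road images
# would be (300, 300, 3) arrays instead of 8-value rows).
X = np.random.rand(100, 8)     # 100 samples, 8 features each
y = np.array([0, 1] * 50)      # two balanced classes

# 0.8 / 0.2 train/validation split, stratified so both
# classes stay balanced on each side of the split.
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```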
EDIT
I found a link that you may refer to:
https://stats.stackexchange.com/questions/187335/validation-error-less-than-training-error
EDIT
Try getting more samples.
If it is difficult to get more samples,
try creating new ones by modifying the existing samples.
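A minimal numpy sketch of that idea, with arbitrary flip/shift choices (Keras' ImageDataGenerator does the same kind of thing on the fly during training):

```python
import numpy as np

def augment(image, rng):
    """Make a modified copy of an existing sample:
    a random horizontal flip plus a small horizontal shift."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]            # horizontal flip
    shift = int(rng.integers(-2, 3))  # shift by up to 2 pixels
    # circular shift as a simple stand-in for a width shift
    out = np.roll(out, shift, axis=1)
    return out

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)            # stand-in for one road photo
extra = [augment(img, rng) for _ in range(4)]  # 4 "new" samples from 1
```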
Related
I'm new to this technology, so I was trying to build a model on an image dataset.
I have used this architecture:
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Conv2D(filters=6, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 1)))
model.add(layers.AveragePooling2D())
model.add(layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu'))
model.add(layers.AveragePooling2D())
model.add(layers.Flatten())
model.add(layers.Dense(units=120, activation='relu'))
model.add(layers.Dense(units=84, activation='relu'))
model.add(layers.Dense(units=1, activation='sigmoid'))
The accuracy and loss seem pretty good, but not the validation accuracy:
Epoch 1/50
10/10 [==============================] - 17s 2s/step - loss: 20.8554 - accuracy: 0.5170 -
val_loss: 0.8757 - val_accuracy: 0.5946
Epoch 2/50
10/10 [==============================] - 14s 1s/step - loss: 1.5565 - accuracy: 0.5612 -
val_loss: 0.8725 - val_accuracy: 0.5811
Epoch 3/50
10/10 [==============================] - 14s 1s/step - loss: 0.8374 - accuracy: 0.6293 -
val_loss: 0.8483 - val_accuracy: 0.5405
Epoch 4/50
10/10 [==============================] - 14s 1s/step - loss: 1.0340 - accuracy: 0.5748 -
val_loss: 1.6252 - val_accuracy: 0.5135
Epoch 5/50
10/10 [==============================] - 14s 1s/step - loss: 1.1054 - accuracy: 0.5816 -
val_loss: 0.7324 - val_accuracy: 0.6486
Epoch 6/50
10/10 [==============================] - 15s 1s/step - loss: 0.5942 - accuracy: 0.7041 -
val_loss: 0.7412 - val_accuracy: 0.6351
Epoch 7/50
10/10 [==============================] - 15s 2s/step - loss: 0.6041 - accuracy: 0.6939 -
val_loss: 0.6918 - val_accuracy: 0.6622
Epoch 8/50
10/10 [==============================] - 14s 1s/step - loss: 0.4944 - accuracy: 0.7687 -
val_loss: 0.7083 - val_accuracy: 0.6216
Epoch 9/50
10/10 [==============================] - 14s 1s/step - loss: 0.5231 - accuracy: 0.7007 -
val_loss: 1.0332 - val_accuracy: 0.5270
Epoch 10/50
10/10 [==============================] - 14s 1s/step - loss: 0.5133 - accuracy: 0.7313 -
val_loss: 0.6859 - val_accuracy: 0.5811
Epoch 11/50
10/10 [==============================] - 14s 1s/step - loss: 0.6177 - accuracy: 0.6735 -
val_loss: 1.0781 - val_accuracy: 0.5135
Epoch 12/50
10/10 [==============================] - 14s 1s/step - loss: 0.9852 - accuracy: 0.6701 -
val_loss: 3.0853 - val_accuracy: 0.4865
Epoch 13/50
10/10 [==============================] - 13s 1s/step - loss: 1.0099 - accuracy: 0.6259 -
val_loss: 1.8193 - val_accuracy: 0.5000
Epoch 14/50
10/10 [==============================] - 13s 1s/step - loss: 0.7179 - accuracy: 0.7041 -
val_loss: 1.5659 - val_accuracy: 0.5135
Epoch 15/50
10/10 [==============================] - 14s 1s/step - loss: 0.4575 - accuracy: 0.7857 -
val_loss: 0.6865 - val_accuracy: 0.5946
Epoch 16/50
10/10 [==============================] - 14s 1s/step - loss: 0.6540 - accuracy: 0.7177 -
val_loss: 1.7108 - val_accuracy: 0.5405
Epoch 17/50
10/10 [==============================] - 13s 1s/step - loss: 1.3617 - accuracy: 0.6156 -
val_loss: 1.1215 - val_accuracy: 0.5811
Epoch 18/50
10/10 [==============================] - 14s 1s/step - loss: 0.6983 - accuracy: 0.7245 -
val_loss: 2.1121 - val_accuracy: 0.5135
Epoch 19/50
10/10 [==============================] - 15s 1s/step - loss: 0.6669 - accuracy: 0.7415 -
val_loss: 0.8061 - val_accuracy: 0.6216
Epoch 20/50
10/10 [==============================] - 14s 1s/step - loss: 0.3853 - accuracy: 0.8129 -
val_loss: 0.7368 - val_accuracy: 0.6757
Epoch 21/50
10/10 [==============================] - 13s 1s/step - loss: 0.5672 - accuracy: 0.7347 -
val_loss: 1.4207 - val_accuracy: 0.5270
Epoch 22/50
10/10 [==============================] - 14s 1s/step - loss: 0.4770 - accuracy: 0.7551 -
val_loss: 1.6060 - val_accuracy: 0.5135
Epoch 23/50
10/10 [==============================] - 14s 1s/step - loss: 0.7212 - accuracy: 0.7041 -
val_loss: 1.1835 - val_accuracy: 0.5811
Epoch 24/50
10/10 [==============================] - 14s 1s/step - loss: 0.5231 - accuracy: 0.7483 -
val_loss: 0.6802 - val_accuracy: 0.7027
Epoch 25/50
10/10 [==============================] - 13s 1s/step - loss: 0.3185 - accuracy: 0.8367 -
val_loss: 0.6644 - val_accuracy: 0.7027
Epoch 26/50
10/10 [==============================] - 14s 1s/step - loss: 0.2500 - accuracy: 0.8912 -
val_loss: 0.8569 - val_accuracy: 0.6486
Epoch 27/50
10/10 [==============================] - 14s 1s/step - loss: 0.2279 - accuracy: 0.9082 -
val_loss: 0.7515 - val_accuracy: 0.7162
Epoch 28/50
10/10 [==============================] - 14s 1s/step - loss: 0.2349 - accuracy: 0.9082 -
val_loss: 0.9439 - val_accuracy: 0.5811
Epoch 29/50
10/10 [==============================] - 13s 1s/step - loss: 0.2051 - accuracy: 0.9184 -
val_loss: 0.7895 - val_accuracy: 0.7027
Epoch 30/50
10/10 [==============================] - 14s 1s/step - loss: 0.1236 - accuracy: 0.9592 -
val_loss: 0.7387 - val_accuracy: 0.7297
Epoch 31/50
10/10 [==============================] - 14s 1s/step - loss: 0.1370 - accuracy: 0.9524 -
val_loss: 0.7387 - val_accuracy: 0.7297
Epoch 32/50
10/10 [==============================] - 14s 1s/step - loss: 0.0980 - accuracy: 0.9796 -
val_loss: 0.6901 - val_accuracy: 0.7162
Epoch 33/50
10/10 [==============================] - 14s 1s/step - loss: 0.0989 - accuracy: 0.9762 -
val_loss: 0.7754 - val_accuracy: 0.7162
Epoch 34/50
10/10 [==============================] - 14s 1s/step - loss: 0.1195 - accuracy: 0.9592 -
val_loss: 0.6639 - val_accuracy: 0.6622
Epoch 35/50
10/10 [==============================] - 14s 1s/step - loss: 0.0805 - accuracy: 0.9898 -
val_loss: 0.7666 - val_accuracy: 0.7162
Epoch 36/50
10/10 [==============================] - 14s 1s/step - loss: 0.0649 - accuracy: 0.9966 -
val_loss: 0.7543 - val_accuracy: 0.7162
Epoch 37/50
10/10 [==============================] - 14s 1s/step - loss: 0.0604 - accuracy: 0.9898 -
val_loss: 0.7472 - val_accuracy: 0.7297
Epoch 38/50
10/10 [==============================] - 14s 1s/step - loss: 0.0538 - accuracy: 1.0000 -
val_loss: 0.7287 - val_accuracy: 0.7432
Epoch 39/50
10/10 [==============================] - 13s 1s/step - loss: 0.0430 - accuracy: 0.9966 -
val_loss: 0.8989 - val_accuracy: 0.6622
Epoch 40/50
10/10 [==============================] - 14s 1s/step - loss: 0.0386 - accuracy: 1.0000 -
val_loss: 0.6951 - val_accuracy: 0.6892
Epoch 41/50
10/10 [==============================] - 13s 1s/step - loss: 0.0379 - accuracy: 1.0000 -
val_loss: 0.8485 - val_accuracy: 0.6892
Epoch 42/50
10/10 [==============================] - 14s 1s/step - loss: 0.0276 - accuracy: 1.0000 -
val_loss: 0.9726 - val_accuracy: 0.6486
Epoch 43/50
10/10 [==============================] - 13s 1s/step - loss: 0.0329 - accuracy: 1.0000 -
val_loss: 0.7336 - val_accuracy: 0.7568
Epoch 44/50
10/10 [==============================] - 14s 1s/step - loss: 0.0226 - accuracy: 1.0000 -
val_loss: 0.8846 - val_accuracy: 0.6892
Epoch 45/50
10/10 [==============================] - 13s 1s/step - loss: 0.0249 - accuracy: 1.0000 -
val_loss: 0.9542 - val_accuracy: 0.6892
Epoch 46/50
10/10 [==============================] - 14s 1s/step - loss: 0.0171 - accuracy: 1.0000 -
val_loss: 0.8792 - val_accuracy: 0.6892
Epoch 47/50
10/10 [==============================] - 15s 1s/step - loss: 0.0122 - accuracy: 1.0000 -
val_loss: 0.8564 - val_accuracy: 0.7162
Epoch 48/50
10/10 [==============================] - 13s 1s/step - loss: 0.0114 - accuracy: 1.0000 -
val_loss: 0.8900 - val_accuracy: 0.7027
Epoch 49/50
10/10 [==============================] - 13s 1s/step - loss: 0.0084 - accuracy: 1.0000 -
val_loss: 0.8981 - val_accuracy: 0.7027
I tried changing the parameters too, yet no result. It would be helpful if I could find out what's wrong with the val_accuracy. Thanks in advance.
You are using a small dataset, especially the test dataset used for validation. Try adding more data to train the model and for validation; then you should see a difference in val_accuracy. You can also try adding more layers to the model.
There are other methods available, such as data augmentation, dropout, and regularizers, which increase the accuracy of the model by avoiding overfitting.
Please follow this reference to overcome the overfitting problem and to best train your model.
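As an illustration of the dropout technique mentioned above, here is a minimal numpy sketch of inverted dropout (the variant Keras' Dropout layer applies during training):

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero a random fraction `rate` of units during
    training and rescale the survivors by 1/(1-rate), so the expected
    activation matches what the layer outputs at inference time."""
    if not training or rate == 0.0:
        return activations
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
a = np.ones(1000)
d = dropout(a, rate=0.5, rng=rng)  # roughly half the units zeroed
# mean of d stays close to a.mean() == 1.0
```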
I am trying to run an autoencoder for dimensionality reduction on a Fraud Detection dataset (https://www.kaggle.com/kartik2112/fraud-detection?select=fraudTest.csv) and am getting very high loss values on each iteration. Below is the autoencoder code.
from keras.models import Model
from keras.layers import Input, Dense
from keras import regularizers
from keras.callbacks import ModelCheckpoint, TensorBoard

# X_train / X_test come from the preprocessing not shown here
nb_epoch = 100
batch_size = 128
input_dim = X_train.shape[1]
encoding_dim = 14
hidden_dim = int(encoding_dim / 2)
learning_rate = 1e-7  # note: used below as the L1 regularization factor, not as an optimizer learning rate

input_layer = Input(shape=(input_dim,))
encoder = Dense(encoding_dim, activation="tanh",
                activity_regularizer=regularizers.l1(learning_rate))(input_layer)
encoder = Dense(hidden_dim, activation="relu")(encoder)
decoder = Dense(hidden_dim, activation='tanh')(encoder)
decoder = Dense(input_dim, activation='relu')(decoder)
autoencoder = Model(inputs=input_layer, outputs=decoder)
autoencoder.compile(metrics=['accuracy'],
                    loss='mean_squared_error',
                    optimizer='adam')

cp = ModelCheckpoint(filepath="autoencoder_fraud.h5",
                     save_best_only=True,
                     verbose=0)
tb = TensorBoard(log_dir='./logs',
                 histogram_freq=0,
                 write_graph=True,
                 write_images=True)

history = autoencoder.fit(X_train, X_train,
                          epochs=nb_epoch,
                          batch_size=batch_size,
                          shuffle=True,
                          validation_data=(X_test, X_test),
                          verbose=1,
                          callbacks=[cp, tb]).history
Here is a snippet of the loss values:
Epoch 1/100
10131/10131 [==============================] - 32s 3ms/step - loss: 52445827358.6230 - accuracy: 0.3389 - val_loss: 9625651200.0000 - val_accuracy: 0.5083
Epoch 2/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52393605025.8066 - accuracy: 0.5083 - val_loss: 9621398528.0000 - val_accuracy: 0.5083
Epoch 3/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52486496629.1354 - accuracy: 0.5082 - val_loss: 9617147904.0000 - val_accuracy: 0.5083
Epoch 4/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52514002255.9432 - accuracy: 0.5070 - val_loss: 9612887040.0000 - val_accuracy: 0.5083
Epoch 5/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52436489238.6388 - accuracy: 0.5076 - val_loss: 9608664064.0000 - val_accuracy: 0.5083
Epoch 6/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52430005774.7556 - accuracy: 0.5081 - val_loss: 9604417536.0000 - val_accuracy: 0.5083
Epoch 7/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52474495714.5898 - accuracy: 0.5079 - val_loss: 9600195584.0000 - val_accuracy: 0.5083
Epoch 8/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52423052560.0695 - accuracy: 0.5076 - val_loss: 9595947008.0000 - val_accuracy: 0.5083
Epoch 9/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52442358260.0742 - accuracy: 0.5072 - val_loss: 9591708672.0000 - val_accuracy: 0.5083
Epoch 10/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52402494704.5369 - accuracy: 0.5089 - val_loss: 9587487744.0000 - val_accuracy: 0.5083
Epoch 11/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52396583628.3553 - accuracy: 0.5081 - val_loss: 9583238144.0000 - val_accuracy: 0.5083
Epoch 12/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52349824708.2700 - accuracy: 0.5076 - val_loss: 9579020288.0000 - val_accuracy: 0.5083
Epoch 13/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52332072133.6850 - accuracy: 0.5083 - val_loss: 9574786048.0000 - val_accuracy: 0.5083
Epoch 14/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52353680011.6731 - accuracy: 0.5086 - val_loss: 9570555904.0000 - val_accuracy: 0.5083
Epoch 15/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52347432594.5456 - accuracy: 0.5088 - val_loss: 9566344192.0000 - val_accuracy: 0.5083
Epoch 16/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52327825554.3435 - accuracy: 0.5076 - val_loss: 9562103808.0000 - val_accuracy: 0.5083
Epoch 17/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52347251610.1255 - accuracy: 0.5080 - val_loss: 9557892096.0000 - val_accuracy: 0.5083
Epoch 18/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52292632667.3636 - accuracy: 0.5079 - val_loss: 9553654784.0000 - val_accuracy: 0.5083
Epoch 19/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52354135093.7671 - accuracy: 0.5083 - val_loss: 9549425664.0000 - val_accuracy: 0.5083
Epoch 20/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52295668148.2006 - accuracy: 0.5086 - val_loss: 9545219072.0000 - val_accuracy: 0.5083
Epoch 21/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52314219115.3320 - accuracy: 0.5079 - val_loss: 9540980736.0000 - val_accuracy: 0.5083
Epoch 22/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52328022934.0829 - accuracy: 0.5079 - val_loss: 9536788480.0000 - val_accuracy: 0.5083
Epoch 23/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52268139834.5172 - accuracy: 0.5074 - val_loss: 9532554240.0000 - val_accuracy: 0.5083
Epoch 24/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52308370726.3040 - accuracy: 0.5077 - val_loss: 9528341504.0000 - val_accuracy: 0.5083
Epoch 25/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52224468101.4070 - accuracy: 0.5081 - val_loss: 9524126720.0000 - val_accuracy: 0.5083
Epoch 26/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52200100823.1694 - accuracy: 0.5080 - val_loss: 9519915008.0000 - val_accuracy: 0.5083
Any advice/solution will be highly appreciated. Thank you.
I have scaled the numerical data using StandardScaler and encoded the
categorical data using LabelEncoder.
First of all, check which numerical data you scaled.
I think you wrongly scaled cc_num, because cc_num is a categorical column.
Fixing this should solve your problem with the high loss, but it doesn't mean your model will be good.
You should first take a careful look at the features and try to find useful relationships between the label and the features (data preprocessing / feature engineering).
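For example, a small sketch of that distinction on toy data (column names taken from the Kaggle dataset; the values and the encoding shortcut are illustrative):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Toy frame mimicking two columns of the fraud data: `amt` is truly
# numeric, while `cc_num` is a card identifier, i.e. categorical.
df = pd.DataFrame({
    'amt': [12.5, 830.0, 4.2, 99.9],
    'cc_num': [4212345678901234, 4212345678905678,
               4212345678901234, 4212345678909999],
})

# Scale only the genuinely numeric column...
df['amt'] = StandardScaler().fit_transform(df[['amt']]).ravel()
# ...and integer-encode the identifier (roughly what LabelEncoder
# does) instead of scaling it.
df['cc_num'] = df['cc_num'].astype('category').cat.codes
```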
I have this CNN model with 3 blocks of the VGG architecture:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.regularizers import L2, L1, L1L2
from keras.optimizers import SGD, Adam, Adagrad, RMSprop
from keras.models import load_model, Model
import numpy as np
import keras as k
# Load and split the data
(train_images, train_labels),(test_images, test_labels) = datasets.cifar10.load_data()
#Normalize Data
train_images = train_images / 255.0
test_images = test_images / 255.0
# Convert labels to one-hot encoding
num_classes = 10
train_labels = k.utils.to_categorical(train_labels, num_classes)
test_labels = k.utils.to_categorical(test_labels, num_classes)
# Data Augmentation
datagen = ImageDataGenerator(
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,)
datagen.fit(train_images)
reg = None
num_filters = 32
ac = 'relu'
adm = Adam(lr=0.001, decay=0, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
sgd = SGD(lr=0.01, momentum=0.9)
rms = RMSprop(lr=0.0001, decay=1e-6)
agr = Adagrad(learning_rate=0.0001, initial_accumulator_value=0.1, epsilon=1e-08)
opt = adm  # selected optimizer (note: model.compile below passes adm directly)
drop_dense = 0.5
drop_conv = 0.2
model = models.Sequential()
model.add(layers.Conv2D(num_filters, (3, 3), activation=ac, kernel_regularizer=reg, input_shape=(32, 32, 3),padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(drop_conv))
model.add(layers.Conv2D(2*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(2*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(2 * drop_conv))
model.add(layers.Conv2D(4*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(4*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(3 * drop_conv))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation=ac,kernel_regularizer=reg))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(drop_dense))
model.add(layers.Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'],optimizer=adm)
model.summary()
history=model.fit_generator(datagen.flow(train_images, train_labels, batch_size=256),
steps_per_epoch = len(train_images) / 256, epochs=200,
validation_data=(test_images, test_labels))
loss, accuracy = model.evaluate(test_images, test_labels)
print("Accuracy is : ", accuracy * 100)
print("Loss is : ", loss)
N = 200
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), history.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), history.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), history.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), history.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="upper left")
plt.show()
model.save("model_test_9.h5")  # serialize weights to HDF5
FileLink(r'model_test_9.h5')   # FileLink comes from IPython.display
# ADM Improve Dropout dataaugment
output :
Epoch 40/200
195/195 [==============================] - 21s 107ms/step - loss: 0.4334 - accuracy: 0.8507 - val_loss: 0.5041 - val_accuracy: 0.8357
Epoch 41/200
195/195 [==============================] - 21s 107ms/step - loss: 0.4289 - accuracy: 0.8522 - val_loss: 0.5354 - val_accuracy: 0.8284
Epoch 42/200
195/195 [==============================] - 21s 110ms/step - loss: 0.4333 - accuracy: 0.8490 - val_loss: 0.4560 - val_accuracy: 0.8499
Epoch 43/200
195/195 [==============================] - 21s 110ms/step - loss: 0.4198 - accuracy: 0.8555 - val_loss: 0.4817 - val_accuracy: 0.8429
Epoch 44/200
195/195 [==============================] - 21s 107ms/step - loss: 0.4130 - accuracy: 0.8556 - val_loss: 0.4768 - val_accuracy: 0.8407
Epoch 45/200
195/195 [==============================] - 21s 109ms/step - loss: 0.4180 - accuracy: 0.8544 - val_loss: 0.4526 - val_accuracy: 0.8483
Epoch 46/200
195/195 [==============================] - 21s 108ms/step - loss: 0.4113 - accuracy: 0.8565 - val_loss: 0.4129 - val_accuracy: 0.8618
Epoch 47/200
195/195 [==============================] - 21s 108ms/step - loss: 0.4078 - accuracy: 0.8584 - val_loss: 0.4108 - val_accuracy: 0.8659
Epoch 48/200
195/195 [==============================] - 21s 109ms/step - loss: 0.4184 - accuracy: 0.8538 - val_loss: 0.4370 - val_accuracy: 0.8557
Epoch 49/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3926 - accuracy: 0.8641 - val_loss: 0.3817 - val_accuracy: 0.8685
Epoch 50/200
195/195 [==============================] - 21s 109ms/step - loss: 0.4044 - accuracy: 0.8587 - val_loss: 0.4225 - val_accuracy: 0.8571
Epoch 51/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3919 - accuracy: 0.8640 - val_loss: 0.4101 - val_accuracy: 0.8625
Epoch 52/200
195/195 [==============================] - 21s 106ms/step - loss: 0.4035 - accuracy: 0.8623 - val_loss: 0.4341 - val_accuracy: 0.8561
Epoch 53/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3963 - accuracy: 0.8619 - val_loss: 0.4180 - val_accuracy: 0.8576
Epoch 54/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3901 - accuracy: 0.8635 - val_loss: 0.3744 - val_accuracy: 0.8712
Epoch 55/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3917 - accuracy: 0.8640 - val_loss: 0.3751 - val_accuracy: 0.8736
Epoch 56/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3795 - accuracy: 0.8679 - val_loss: 0.4697 - val_accuracy: 0.8445
Epoch 57/200
195/195 [==============================] - 22s 111ms/step - loss: 0.3844 - accuracy: 0.8656 - val_loss: 0.4058 - val_accuracy: 0.8620
Epoch 58/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3864 - accuracy: 0.8656 - val_loss: 0.4226 - val_accuracy: 0.8588
Epoch 59/200
195/195 [==============================] - 22s 110ms/step - loss: 0.3821 - accuracy: 0.8684 - val_loss: 0.3986 - val_accuracy: 0.8666
Epoch 60/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3728 - accuracy: 0.8708 - val_loss: 0.4196 - val_accuracy: 0.8638
Epoch 61/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3724 - accuracy: 0.8699 - val_loss: 0.3928 - val_accuracy: 0.8654
Epoch 62/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3724 - accuracy: 0.8712 - val_loss: 0.3615 - val_accuracy: 0.8782
Epoch 63/200
195/195 [==============================] - 22s 110ms/step - loss: 0.3758 - accuracy: 0.8691 - val_loss: 0.3976 - val_accuracy: 0.8707
Epoch 64/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3698 - accuracy: 0.8714 - val_loss: 0.4429 - val_accuracy: 0.8554
Epoch 65/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3570 - accuracy: 0.8750 - val_loss: 0.3702 - val_accuracy: 0.8740
Epoch 66/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3588 - accuracy: 0.8751 - val_loss: 0.3885 - val_accuracy: 0.8717
Epoch 67/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3597 - accuracy: 0.8749 - val_loss: 0.3781 - val_accuracy: 0.8777
Epoch 68/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3590 - accuracy: 0.8756 - val_loss: 0.4230 - val_accuracy: 0.8613
Epoch 69/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3540 - accuracy: 0.8756 - val_loss: 0.3972 - val_accuracy: 0.8694
Epoch 70/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3588 - accuracy: 0.8729 - val_loss: 0.4242 - val_accuracy: 0.8598
Epoch 71/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3608 - accuracy: 0.8748 - val_loss: 0.3887 - val_accuracy: 0.8683
Epoch 72/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3511 - accuracy: 0.8783 - val_loss: 0.3912 - val_accuracy: 0.8716
Epoch 73/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3516 - accuracy: 0.8769 - val_loss: 0.4673 - val_accuracy: 0.8515
Epoch 74/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3484 - accuracy: 0.8787 - val_loss: 0.3990 - val_accuracy: 0.8664
Epoch 75/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3506 - accuracy: 0.8780 - val_loss: 0.3869 - val_accuracy: 0.8666
Epoch 76/200
195/195 [==============================] - 20s 105ms/step - loss: 0.3484 - accuracy: 0.8795 - val_loss: 0.3447 - val_accuracy: 0.8853
Epoch 77/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3493 - accuracy: 0.8774 - val_loss: 0.3644 - val_accuracy: 0.8794
Epoch 78/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3443 - accuracy: 0.8813 - val_loss: 0.4117 - val_accuracy: 0.8665
Epoch 79/200
195/195 [==============================] - 20s 104ms/step - loss: 0.3436 - accuracy: 0.8796 - val_loss: 0.3695 - val_accuracy: 0.8758
Epoch 80/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3487 - accuracy: 0.8788 - val_loss: 0.3583 - val_accuracy: 0.8789
Epoch 92/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3320 - accuracy: 0.8834 - val_loss: 0.3658 - val_accuracy: 0.8794
Epoch 93/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3251 - accuracy: 0.8858 - val_loss: 0.4003 - val_accuracy: 0.8646
Epoch 94/200
195/195 [==============================] - 20s 103ms/step - loss: 0.3202 - accuracy: 0.8894 - val_loss: 0.3943 - val_accuracy: 0.8695
Epoch 95/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3238 - accuracy: 0.8887 - val_loss: 0.3232 - val_accuracy: 0.8931
Epoch 96/200
195/195 [==============================] - 21s 105ms/step - loss: 0.3236 - accuracy: 0.8881 - val_loss: 0.3659 - val_accuracy: 0.8777
Epoch 97/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3116 - accuracy: 0.8912 - val_loss: 0.4218 - val_accuracy: 0.8634
Epoch 98/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3189 - accuracy: 0.8893 - val_loss: 0.3783 - val_accuracy: 0.8740
Epoch 99/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3260 - accuracy: 0.8845 - val_loss: 0.3418 - val_accuracy: 0.8875
Epoch 100/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3143 - accuracy: 0.8893 - val_loss: 0.3974 - val_accuracy: 0.8671
Epoch 101/200
195/195 [==============================] - 20s 105ms/step - loss: 0.3209 - accuracy: 0.8898 - val_loss: 0.3688 - val_accuracy: 0.8780
Epoch 102/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3205 - accuracy: 0.8885 - val_loss: 0.3689 - val_accuracy: 0.8791
Epoch 103/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3157 - accuracy: 0.8884 - val_loss: 0.3420 - val_accuracy: 0.8857
Epoch 104/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3163 - accuracy: 0.8878 - val_loss: 0.3580 - val_accuracy: 0.8821
Epoch 105/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3105 - accuracy: 0.8915 - val_loss: 0.3696 - val_accuracy: 0.8800
Epoch 106/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3127 - accuracy: 0.8893 - val_loss: 0.3701 - val_accuracy: 0.8799
Epoch 107/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3087 - accuracy: 0.8917 - val_loss: 0.3604 - val_accuracy: 0.8831
Epoch 108/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3097 - accuracy: 0.8916 - val_loss: 0.3311 - val_accuracy: 0.8923
Epoch 109/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3096 - accuracy: 0.8907 - val_loss: 0.3421 - val_accuracy: 0.8880
Epoch 110/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3082 - accuracy: 0.8925 - val_loss: 0.3207 - val_accuracy: 0.8933
Epoch 111/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2997 - accuracy: 0.8967 - val_loss: 0.3400 - val_accuracy: 0.8858
Epoch 112/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3026 - accuracy: 0.8929 - val_loss: 0.3821 - val_accuracy: 0.8769
Epoch 113/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2996 - accuracy: 0.8940 - val_loss: 0.3453 - val_accuracy: 0.8861
Epoch 114/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3033 - accuracy: 0.8935 - val_loss: 0.3850 - val_accuracy: 0.8733
Epoch 115/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3046 - accuracy: 0.8942 - val_loss: 0.3396 - val_accuracy: 0.8880
Epoch 116/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2998 - accuracy: 0.8946 - val_loss: 0.3496 - val_accuracy: 0.8826
Epoch 117/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3100 - accuracy: 0.8914 - val_loss: 0.4213 - val_accuracy: 0.8632
Epoch 118/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3099 - accuracy: 0.8905 - val_loss: 0.3623 - val_accuracy: 0.8787
Epoch 119/200
195/195 [==============================] - 22s 110ms/step - loss: 0.3096 - accuracy: 0.8929 - val_loss: 0.3523 - val_accuracy: 0.8841
Epoch 120/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2990 - accuracy: 0.8952 - val_loss: 0.3645 - val_accuracy: 0.8803
Epoch 121/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2986 - accuracy: 0.8940 - val_loss: 0.3947 - val_accuracy: 0.8701
Epoch 122/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3002 - accuracy: 0.8934 - val_loss: 0.3854 - val_accuracy: 0.8746
Epoch 123/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2957 - accuracy: 0.8962 - val_loss: 0.3649 - val_accuracy: 0.8787
Epoch 124/200
195/195 [==============================] - 22s 110ms/step - loss: 0.2926 - accuracy: 0.8967 - val_loss: 0.3245 - val_accuracy: 0.8948
Epoch 125/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3024 - accuracy: 0.8933 - val_loss: 0.3376 - val_accuracy: 0.8896
Epoch 126/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2904 - accuracy: 0.8984 - val_loss: 0.3394 - val_accuracy: 0.8867
Epoch 127/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2974 - accuracy: 0.8974 - val_loss: 0.3591 - val_accuracy: 0.8842
Epoch 128/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2942 - accuracy: 0.8978 - val_loss: 0.3455 - val_accuracy: 0.8848
Epoch 129/200
195/195 [==============================] - 21s 106ms/step - loss: 0.2940 - accuracy: 0.8970 - val_loss: 0.3400 - val_accuracy: 0.8883
Epoch 130/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2973 - accuracy: 0.8973 - val_loss: 0.3286 - val_accuracy: 0.8905
Epoch 131/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2903 - accuracy: 0.8948 - val_loss: 0.4064 - val_accuracy: 0.8707
Epoch 132/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2962 - accuracy: 0.8963 - val_loss: 0.3689 - val_accuracy: 0.8773
Epoch 133/200
195/195 [==============================] - 22s 111ms/step - loss: 0.2918 - accuracy: 0.8971 - val_loss: 0.3666 - val_accuracy: 0.8808
Epoch 134/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2894 - accuracy: 0.8991 - val_loss: 0.3306 - val_accuracy: 0.8918
Epoch 135/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2809 - accuracy: 0.9020 - val_loss: 0.3157 - val_accuracy: 0.8940
Epoch 136/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2878 - accuracy: 0.8996 - val_loss: 0.3568 - val_accuracy: 0.8847
Epoch 137/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2903 - accuracy: 0.8981 - val_loss: 0.3422 - val_accuracy: 0.8914
Epoch 138/200
195/195 [==============================] - 20s 104ms/step - loss: 0.2841 - accuracy: 0.8986 - val_loss: 0.3276 - val_accuracy: 0.8910
Epoch 139/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2892 - accuracy: 0.8994 - val_loss: 0.3350 - val_accuracy: 0.8909
Epoch 140/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2863 - accuracy: 0.9000 - val_loss: 0.3634 - val_accuracy: 0.8817
Epoch 141/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2884 - accuracy: 0.8983 - val_loss: 0.3368 - val_accuracy: 0.8903
Epoch 142/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2903 - accuracy: 0.8988 - val_loss: 0.3643 - val_accuracy: 0.8820
Epoch 143/200
195/195 [==============================] - 21s 105ms/step - loss: 0.2818 - accuracy: 0.8997 - val_loss: 0.3178 - val_accuracy: 0.8933
Epoch 144/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2713 - accuracy: 0.9042 - val_loss: 0.3584 - val_accuracy: 0.8840
Epoch 145/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2907 - accuracy: 0.8990 - val_loss: 0.3286 - val_accuracy: 0.8921
Epoch 146/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2745 - accuracy: 0.9045 - val_loss: 0.3450 - val_accuracy: 0.8890
Epoch 147/200
195/195 [==============================] - 22s 110ms/step - loss: 0.2816 - accuracy: 0.9028 - val_loss: 0.3895 - val_accuracy: 0.8715
Epoch 148/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2777 - accuracy: 0.9041 - val_loss: 0.3372 - val_accuracy: 0.8896
Epoch 149/200
195/195 [==============================] - 21s 105ms/step - loss: 0.2700 - accuracy: 0.9070 - val_loss: 0.3615 - val_accuracy: 0.8803
Epoch 150/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2741 - accuracy: 0.9033 - val_loss: 0.3605 - val_accuracy: 0.8813
Epoch 151/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2890 - accuracy: 0.8979 - val_loss: 0.3490 - val_accuracy: 0.8854
Epoch 152/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2784 - accuracy: 0.9008 - val_loss: 0.3543 - val_accuracy: 0.8838
Epoch 153/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2803 - accuracy: 0.9014 - val_loss: 0.3356 - val_accuracy: 0.8876
Epoch 154/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2719 - accuracy: 0.9031 - val_loss: 0.3338 - val_accuracy: 0.8894
Epoch 155/200
195/195 [==============================] - 21s 106ms/step - loss: 0.2830 - accuracy: 0.9019 - val_loss: 0.3505 - val_accuracy: 0.8893
Epoch 156/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2830 - accuracy: 0.9002 - val_loss: 0.3173 - val_accuracy: 0.8983
Epoch 157/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2764 - accuracy: 0.9015 - val_loss: 0.3789 - val_accuracy: 0.8765
Epoch 158/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2742 - accuracy: 0.9040 - val_loss: 0.3245 - val_accuracy: 0.8941
Epoch 159/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2801 - accuracy: 0.9014 - val_loss: 0.3342 - val_accuracy: 0.8905
Epoch 160/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2640 - accuracy: 0.9064 - val_loss: 0.3632 - val_accuracy: 0.8818
Epoch 161/200
195/195 [==============================] - 21s 106ms/step - loss: 0.2754 - accuracy: 0.9026 - val_loss: 0.3204 - val_accuracy: 0.8936
Epoch 162/200
195/195 [==============================] - 22s 111ms/step - loss: 0.2745 - accuracy: 0.9040 - val_loss: 0.3921 - val_accuracy: 0.8769
Epoch 163/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2731 - accuracy: 0.9031 - val_loss: 0.3234 - val_accuracy: 0.8939
Epoch 164/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2699 - accuracy: 0.9062 - val_loss: 0.3466 - val_accuracy: 0.8873
Epoch 165/200
195/195 [==============================] - 22s 111ms/step - loss: 0.2866 - accuracy: 0.9002 - val_loss: 0.3669 - val_accuracy: 0.8820
Epoch 166/200
195/195 [==============================] - 22s 111ms/step - loss: 0.2657 - accuracy: 0.9058 - val_loss: 0.3208 - val_accuracy: 0.8930
Epoch 167/200
195/195 [==============================] - 20s 105ms/step - loss: 0.2769 - accuracy: 0.9014 - val_loss: 0.3339 - val_accuracy: 0.8912
Epoch 168/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2739 - accuracy: 0.9037 - val_loss: 0.3357 - val_accuracy: 0.8885
Epoch 169/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2739 - accuracy: 0.9059 - val_loss: 0.4047 - val_accuracy: 0.8727
Epoch 170/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2666 - accuracy: 0.9063 - val_loss: 0.3386 - val_accuracy: 0.8904
Epoch 171/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2660 - accuracy: 0.9073 - val_loss: 0.3169 - val_accuracy: 0.8945
Epoch 172/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2692 - accuracy: 0.9054 - val_loss: 0.3413 - val_accuracy: 0.8859
Epoch 173/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2672 - accuracy: 0.9050 - val_loss: 0.3230 - val_accuracy: 0.8930
Epoch 174/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2776 - accuracy: 0.9026 - val_loss: 0.3204 - val_accuracy: 0.8966
Epoch 175/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2646 - accuracy: 0.9073 - val_loss: 0.3433 - val_accuracy: 0.8937
Epoch 176/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2670 - accuracy: 0.9057 - val_loss: 0.3301 - val_accuracy: 0.8927
Epoch 177/200
195/195 [==============================] - 22s 111ms/step - loss: 0.2697 - accuracy: 0.9046 - val_loss: 0.3110 - val_accuracy: 0.8979
Epoch 178/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2711 - accuracy: 0.9043 - val_loss: 0.3240 - val_accuracy: 0.8944
Epoch 179/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2628 - accuracy: 0.9072 - val_loss: 0.3265 - val_accuracy: 0.8931
Epoch 180/200
195/195 [==============================] - 22s 110ms/step - loss: 0.2642 - accuracy: 0.9070 - val_loss: 0.3192 - val_accuracy: 0.8954
Epoch 181/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2626 - accuracy: 0.9067 - val_loss: 0.3404 - val_accuracy: 0.8875
Epoch 182/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2635 - accuracy: 0.9080 - val_loss: 0.3463 - val_accuracy: 0.8874
Epoch 183/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2630 - accuracy: 0.9075 - val_loss: 0.3342 - val_accuracy: 0.8909
Epoch 184/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2666 - accuracy: 0.9036 - val_loss: 0.2964 - val_accuracy: 0.9011
Epoch 185/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2671 - accuracy: 0.9067 - val_loss: 0.3400 - val_accuracy: 0.8905
Epoch 186/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2625 - accuracy: 0.9084 - val_loss: 0.3446 - val_accuracy: 0.8889
Epoch 187/200
195/195 [==============================] - 22s 110ms/step - loss: 0.2606 - accuracy: 0.9097 - val_loss: 0.3242 - val_accuracy: 0.8955
Epoch 188/200
195/195 [==============================] - 22s 111ms/step - loss: 0.2588 - accuracy: 0.9094 - val_loss: 0.3240 - val_accuracy: 0.8958
Epoch 189/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2649 - accuracy: 0.9070 - val_loss: 0.3216 - val_accuracy: 0.8980
Epoch 190/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2587 - accuracy: 0.9077 - val_loss: 0.3403 - val_accuracy: 0.8891
Epoch 191/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2678 - accuracy: 0.9033 - val_loss: 0.3099 - val_accuracy: 0.9008
Epoch 192/200
195/195 [==============================] - 22s 110ms/step - loss: 0.2538 - accuracy: 0.9094 - val_loss: 0.3170 - val_accuracy: 0.8968
Epoch 193/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2613 - accuracy: 0.9076 - val_loss: 0.2916 - val_accuracy: 0.9046
Epoch 194/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2651 - accuracy: 0.9077 - val_loss: 0.3159 - val_accuracy: 0.8968
Epoch 195/200
195/195 [==============================] - 22s 110ms/step - loss: 0.2576 - accuracy: 0.9097 - val_loss: 0.3446 - val_accuracy: 0.8901
Epoch 196/200
195/195 [==============================] - 21s 109ms/step - loss: 0.2554 - accuracy: 0.9094 - val_loss: 0.3227 - val_accuracy: 0.8978
Epoch 197/200
195/195 [==============================] - 21s 108ms/step - loss: 0.2620 - accuracy: 0.9090 - val_loss: 0.3174 - val_accuracy: 0.8958
Epoch 198/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2583 - accuracy: 0.9082 - val_loss: 0.3186 - val_accuracy: 0.8964
Epoch 199/200
195/195 [==============================] - 21s 107ms/step - loss: 0.2546 - accuracy: 0.9103 - val_loss: 0.3183 - val_accuracy: 0.8968
Epoch 200/200
195/195 [==============================] - 21s 110ms/step - loss: 0.2544 - accuracy: 0.9082 - val_loss: 0.3327 - val_accuracy: 0.8948
313/313 [==============================] - 1s 3ms/step - loss: 0.3327 - accuracy: 0.8948
Accuracy is : 89.48000073432922
Loss is : 0.3326900005340576
(I cut the first 40 epochs because the post body is limited to 30,000 characters; during epochs 1-40 the metrics kept improving, but very slowly.)
I have tried with 100 epochs, which gives me ~88% validation accuracy.
In this code I added another 100 epochs, and it gives me only a 1% improvement (~89%).
My questions are:
Is my model overfitting?
Why does my model train so slowly?
Can my model improve if I add more epochs?
How can I increase the accuracy and decrease the loss? It seems stagnant to me.
Model Performance plot here
No - your model is not overfitting, because the validation loss is not increasing.
Your plots look fine. It is expected that the training process slows down as the loss decreases.
Yes, but it doesn't make much sense. If you train almost any model indefinitely, its performance will keep improving marginally - e.g. you might get 89.5% accuracy (barely better than 89.48%) if you train it for a year.
Try decaying the learning rate with different schedules.
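Learning-rate decay can be sketched in plain Python before wiring it into Keras. The function names below are illustrative, but the formulas follow the common exponential and step decay definitions (the same shapes that `tf.keras.optimizers.schedules.ExponentialDecay` and a `LearningRateScheduler` callback implement):

```python
def exponential_decay(initial_lr, decay_rate, decay_steps, step):
    # Smooth decay: lr = initial_lr * decay_rate ** (step / decay_steps)
    return initial_lr * decay_rate ** (step / decay_steps)

def step_decay(initial_lr, drop, epochs_per_drop, epoch):
    # Discrete decay: multiply lr by `drop` every `epochs_per_drop` epochs
    return initial_lr * drop ** (epoch // epochs_per_drop)

# In tf.keras such a schedule can be attached via a callback, e.g.:
# tf.keras.callbacks.LearningRateScheduler(
#     lambda epoch: step_decay(1e-3, 0.5, 20, epoch))
```

Halving the rate every few dozen epochs often helps the loss keep moving once training stagnates the way it does above.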
I want to make a neural network for a classification problem. The training set has 225 inputs of 48 dimensions each, with a validation set of 50. The code is:
# building the model
def build_model():
    model = Sequential()
    model.add(Dense(128, activation="relu"))
    model.add(Dropout(0.5))
    model.add(Dense(64, activation="relu"))
    model.add(Dropout(0.2))
    model.add(Dense(32, activation="relu"))
    model.add(Dropout(0.1))
    model.add(Dense(16, activation="softmax"))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001),
        loss=['mean_squared_error'],
        metrics=['accuracy']
    )
    return model

model = build_model()

# train the model
history = model.fit(
    x_train,
    y_train,
    epochs=10,
    batch_size=32,
    validation_data=(x_val, y_val),
    callbacks=[ProgbarLogger(count_mode='steps', stateful_metrics=None)]
)
but I get weird logs as output, such as:
Train on 225 samples, validate on 50 samples
Epoch 1/10
Epoch 1/10
8/8 [==============================] - 2s 247ms/step - loss: 0.6517 - accuracy: 0.0178 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 2s 9ms/sample - loss: 0.7298 - accuracy: 0.0178 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 2/10
Epoch 2/10
8/8 [==============================] - 0s 10ms/step - loss: 0.6516 - accuracy: 0.0222 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 359us/sample - loss: 0.7297 - accuracy: 0.0222 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 3/10
Epoch 3/10
8/8 [==============================] - 0s 9ms/step - loss: 0.6517 - accuracy: 0.0222 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 341us/sample - loss: 0.7297 - accuracy: 0.0222 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 4/10
Epoch 4/10
8/8 [==============================] - 0s 10ms/step - loss: 0.6515 - accuracy: 0.0178 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 346us/sample - loss: 0.7296 - accuracy: 0.0178 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 5/10
Epoch 5/10
8/8 [==============================] - 0s 9ms/step - loss: 0.7457 - accuracy: 0.0133 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 319us/sample - loss: 0.7295 - accuracy: 0.0133 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 6/10
Epoch 6/10
8/8 [==============================] - 0s 10ms/step - loss: 1.0552 - accuracy: 0.0133 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 359us/sample - loss: 0.7296 - accuracy: 0.0133 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 7/10
Epoch 7/10
8/8 [==============================] - 0s 9ms/step - loss: 0.6514 - accuracy: 0.0089 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 328us/sample - loss: 0.7295 - accuracy: 0.0089 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 8/10
Epoch 8/10
8/8 [==============================] - 0s 9ms/step - loss: 0.6516 - accuracy: 0.0133 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 324us/sample - loss: 0.7295 - accuracy: 0.0133 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 9/10
Epoch 9/10
8/8 [==============================] - 0s 10ms/step - loss: 0.6515 - accuracy: 0.0222 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 355us/sample - loss: 0.7296 - accuracy: 0.0222 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Epoch 10/10
Epoch 10/10
8/8 [==============================] - 0s 9ms/step - loss: 0.6515 - accuracy: 0.0267 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
225/225 [==============================] - 0s 328us/sample - loss: 0.7295 - accuracy: 0.0267 - val_loss: 0.2842 - val_accuracy: 0.0000e+00
Can you help me understand this behavior of the losses and the accuracy? Shouldn't the loss be lower?
I am building a DNN with Keras to classify between background and signal events (HEP). Nevertheless, the loss and the accuracy are not changing.
I already tried changing the parameters on the optimizer, normalizing the data, changing the number of layers, neurons, epochs, initializing the weights, etc.
Here's the model:
epochs = 20
num_features = 2
num_classes = 2
batch_size = 32

# model
print("\n Building model...")
model = Sequential()
model.add(Dropout(0.2))
model.add(Dense(128, input_shape=(2,), activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation=tf.nn.softmax))

print("\n Compiling model...")
opt = adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0,
           amsgrad=False)

# compile model
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy'])

print("\n Fitting model...")
history = model.fit(x_train, y_train, epochs=epochs,
                    batch_size=batch_size, validation_data=(x_test, y_test))
I'm expecting a change in the loss but it won't decrease from 0.69-ish.
The epoch output reads:
Building model...
Compiling model...
Fitting model...
Train on 18400 samples, validate on 4600 samples
Epoch 1/20
18400/18400 [==============================] - 1s 71us/step - loss: 0.6939 - acc: 0.4965 - val_loss: 0.6933 - val_acc: 0.5000
Epoch 2/20
18400/18400 [==============================] - 1s 60us/step - loss: 0.6935 - acc: 0.5045 - val_loss: 0.6933 - val_acc: 0.5000
Epoch 3/20
18400/18400 [==============================] - 1s 69us/step - loss: 0.6937 - acc: 0.4993 - val_loss: 0.6934 - val_acc: 0.5000
Epoch 4/20
18400/18400 [==============================] - 1s 65us/step - loss: 0.6939 - acc: 0.4984 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 5/20
18400/18400 [==============================] - 1s 58us/step - loss: 0.6936 - acc: 0.5000 - val_loss: 0.6936 - val_acc: 0.5000
Epoch 6/20
18400/18400 [==============================] - 1s 57us/step - loss: 0.6937 - acc: 0.4913 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 7/20
18400/18400 [==============================] - 1s 58us/step - loss: 0.6935 - acc: 0.5008 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 8/20
18400/18400 [==============================] - 1s 63us/step - loss: 0.6936 - acc: 0.5013 - val_loss: 0.6936 - val_acc: 0.5000
Epoch 9/20
18400/18400 [==============================] - 1s 67us/step - loss: 0.6936 - acc: 0.4924 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 10/20
18400/18400 [==============================] - 1s 61us/step - loss: 0.6933 - acc: 0.5067 - val_loss: 0.6934 - val_acc: 0.5000
Epoch 11/20
18400/18400 [==============================] - 1s 64us/step - loss: 0.6938 - acc: 0.4972 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 12/20
18400/18400 [==============================] - 1s 64us/step - loss: 0.6936 - acc: 0.4991 - val_loss: 0.6934 - val_acc: 0.5000
Epoch 13/20
18400/18400 [==============================] - 1s 70us/step - loss: 0.6937 - acc: 0.4960 - val_loss: 0.6935 - val_acc: 0.5000
Epoch 14/20
18400/18400 [==============================] - 1s 63us/step - loss: 0.6935 - acc: 0.4992 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 15/20
18400/18400 [==============================] - 1s 61us/step - loss: 0.6937 - acc: 0.4940 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 16/20
18400/18400 [==============================] - 1s 68us/step - loss: 0.6933 - acc: 0.5067 - val_loss: 0.6936 - val_acc: 0.5000
Epoch 17/20
18400/18400 [==============================] - 1s 58us/step - loss: 0.6938 - acc: 0.4997 - val_loss: 0.6935 - val_acc: 0.5000
Epoch 18/20
18400/18400 [==============================] - 1s 56us/step - loss: 0.6936 - acc: 0.4972 - val_loss: 0.6941 - val_acc: 0.5000
Epoch 19/20
18400/18400 [==============================] - 1s 57us/step - loss: 0.6934 - acc: 0.5061 - val_loss: 0.6954 - val_acc: 0.5000
Epoch 20/20
18400/18400 [==============================] - 1s 58us/step - loss: 0.6936 - acc: 0.5037 - val_loss: 0.6939 - val_acc: 0.5000
Update: My data preparation contains this:
np.random.shuffle(x_train)
np.random.shuffle(y_train)
np.random.shuffle(x_test)
np.random.shuffle(y_test)
And I'm thinking it's changing the class of each data point, because the shuffles are done separately.
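That diagnosis is very likely correct: calling np.random.shuffle on each array independently destroys the pairing between samples and labels, leaving the network with effectively random targets (hence ~0.69 loss, i.e. ln 2, and 50% accuracy). One way to shuffle features and labels in unison is a single shared permutation (a minimal sketch with toy data; the variable names mirror the ones above but the values are made up):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

x_train = np.arange(10).reshape(5, 2)  # 5 toy samples, 2 features each
y_train = np.array([0, 1, 0, 1, 1])    # label for each sample

# Draw ONE permutation and index both arrays with it, so every
# label stays attached to its original sample.
perm = rng.permutation(len(x_train))
x_train, y_train = x_train[perm], y_train[perm]
```

The same pattern applies to x_test/y_test; alternatively, sklearn.utils.shuffle(x_train, y_train) shuffles several arrays consistently in one call.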