I've fitted a model, and this is the plot I get with the following code:
hist = model.fit(
    xs, ys, epochs=300, batch_size=100, validation_split=0.1,
    callbacks=[K.callbacks.EarlyStopping(patience=30)]
)
plt.figure(dpi=200)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.legend(["loss", "val_loss"])
However, the training logs are the following:
...
Epoch 200/300
9/9 [==============================] - 3s 384ms/step - loss: 514.2175 - val_loss: 584.2152
Epoch 201/300
9/9 [==============================] - 3s 385ms/step - loss: 510.9814 - val_loss: 581.8872
Epoch 202/300
9/9 [==============================] - 3s 391ms/step - loss: 518.9771 - val_loss: 582.4727
Epoch 203/300
9/9 [==============================] - 3s 383ms/step - loss: 521.8132 - val_loss: 582.9196
Epoch 204/300
9/9 [==============================] - 4s 393ms/step - loss: 516.8439 - val_loss: 584.0792
Epoch 205/300
9/9 [==============================] - 3s 391ms/step - loss: 513.7325 - val_loss: 582.6438
Epoch 206/300
9/9 [==============================] - 3s 390ms/step - loss: 514.4469 - val_loss: 583.5629
Epoch 207/300
9/9 [==============================] - 3s 391ms/step - loss: 522.0557 - val_loss: 581.7162
Epoch 208/300
9/9 [==============================] - 3s 389ms/step - loss: 518.6336 - val_loss: 582.8070
Epoch 209/300
9/9 [==============================] - 3s 391ms/step - loss: 518.0827 - val_loss: 582.4284
Epoch 210/300
9/9 [==============================] - 3s 389ms/step - loss: 514.1886 - val_loss: 582.4635
Epoch 211/300
9/9 [==============================] - 3s 390ms/step - loss: 514.4373 - val_loss: 582.1906
Epoch 212/300
9/9 [==============================] - 3s 391ms/step - loss: 514.9708 - val_loss: 582.1699
Epoch 213/300
9/9 [==============================] - 3s 388ms/step - loss: 521.1622 - val_loss: 582.3545
Epoch 214/300
9/9 [==============================] - 3s 388ms/step - loss: 513.5198 - val_loss: 582.7703
Epoch 215/300
9/9 [==============================] - 3s 392ms/step - loss: 514.6642 - val_loss: 582.3327
Epoch 216/300
9/9 [==============================] - 3s 392ms/step - loss: 514.0926 - val_loss: 583.9896
Epoch 217/300
9/9 [==============================] - 3s 385ms/step - loss: 520.9324 - val_loss: 583.9265
Epoch 218/300
9/9 [==============================] - 4s 397ms/step - loss: 510.2536 - val_loss: 584.6587
Epoch 219/300
9/9 [==============================] - 4s 394ms/step - loss: 515.7706 - val_loss: 583.2895
Epoch 220/300
9/9 [==============================] - 3s 391ms/step - loss: 520.9758 - val_loss: 582.2515
Epoch 221/300
9/9 [==============================] - 3s 386ms/step - loss: 517.8850 - val_loss: 582.1981
Epoch 222/300
9/9 [==============================] - 4s 395ms/step - loss: 514.2051 - val_loss: 583.0013
Epoch 223/300
9/9 [==============================] - 4s 402ms/step - loss: 509.3330 - val_loss: 583.7137
Epoch 224/300
9/9 [==============================] - 3s 388ms/step - loss: 516.6832 - val_loss: 582.0773
Epoch 225/300
9/9 [==============================] - 3s 387ms/step - loss: 515.5243 - val_loss: 582.2585
Epoch 226/300
9/9 [==============================] - 3s 389ms/step - loss: 517.6601 - val_loss: 582.3940
Epoch 227/300
9/9 [==============================] - 3s 388ms/step - loss: 515.7537 - val_loss: 582.3862
Epoch 228/300
9/9 [==============================] - 4s 394ms/step - loss: 516.1107 - val_loss: 582.7234
Epoch 229/300
9/9 [==============================] - 3s 389ms/step - loss: 517.5703 - val_loss: 583.3829
Epoch 230/300
9/9 [==============================] - 3s 388ms/step - loss: 516.7491 - val_loss: 583.4712
Epoch 231/300
9/9 [==============================] - 4s 392ms/step - loss: 520.6753 - val_loss: 583.2650
Epoch 232/300
9/9 [==============================] - 3s 388ms/step - loss: 516.1927 - val_loss: 581.9255
Epoch 233/300
9/9 [==============================] - 4s 393ms/step - loss: 512.5476 - val_loss: 583.1275
Epoch 234/300
9/9 [==============================] - 4s 392ms/step - loss: 513.5744 - val_loss: 583.0643
Epoch 235/300
9/9 [==============================] - 3s 385ms/step - loss: 520.2017 - val_loss: 582.6875
Epoch 236/300
9/9 [==============================] - 3s 386ms/step - loss: 518.7263 - val_loss: 583.0582
Epoch 237/300
9/9 [==============================] - 3s 382ms/step - loss: 521.4882 - val_loss: 582.8977
In fact, if I extract the training loss with a regex and plot it, I get the following plot:
What am I missing about History?...
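A minimal sanity check, assuming hist is the History object returned by the fit call above (the slice index 200 is arbitrary): print what History actually stored, then re-plot only the tail, since the very large losses of the first epochs can squash the later plateau into a flat-looking line.
import matplotlib.pyplot as plt

# Diagnostic sketch: compare what History stored with the console logs.
print(len(hist.history["loss"]), "epochs recorded")
print("last 5 loss values:", hist.history["loss"][-5:])
print("last 5 val_loss values:", hist.history["val_loss"][-5:])

# Re-plot only the tail so the early, much larger losses don't flatten the curve.
plt.figure(dpi=200)
plt.plot(range(200, len(hist.history["loss"])), hist.history["loss"][200:])
plt.plot(range(200, len(hist.history["val_loss"])), hist.history["val_loss"][200:])
plt.legend(["loss", "val_loss"])
plt.show()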
Related
I am trying to create a custom loss function, but as soon as I try to create a copy of the y_pred (model predictions) tensor, the loss function stops working.
This function works:
def custom_loss(y_true, y_pred):
    y_true = tf.cast(y_true, dtype=y_pred.dtype)
    loss = binary_crossentropy(y_true, y_pred)
    return loss
The output is
Epoch 1/10
26/26 [==============================] - 5s 169ms/step - loss: 56.1577 - accuracy: 0.7867 - val_loss: 14.7032 - val_accuracy: 0.9185
Epoch 2/10
26/26 [==============================] - 4s 159ms/step - loss: 18.6890 - accuracy: 0.8762 - val_loss: 9.4140 - val_accuracy: 0.9185
Epoch 3/10
26/26 [==============================] - 4s 158ms/step - loss: 13.7425 - accuracy: 0.8437 - val_loss: 7.7499 - val_accuracy: 0.9185
Epoch 4/10
26/26 [==============================] - 4s 159ms/step - loss: 10.5267 - accuracy: 0.8510 - val_loss: 6.1037 - val_accuracy: 0.9185
Epoch 5/10
26/26 [==============================] - 4s 160ms/step - loss: 7.5695 - accuracy: 0.8544 - val_loss: 3.9937 - val_accuracy: 0.9185
Epoch 6/10
26/26 [==============================] - 4s 159ms/step - loss: 5.1320 - accuracy: 0.8538 - val_loss: 2.6940 - val_accuracy: 0.9185
Epoch 7/10
26/26 [==============================] - 4s 160ms/step - loss: 3.3265 - accuracy: 0.8557 - val_loss: 1.6613 - val_accuracy: 0.9185
Epoch 8/10
26/26 [==============================] - 4s 160ms/step - loss: 2.1421 - accuracy: 0.8538 - val_loss: 1.0443 - val_accuracy: 0.9185
Epoch 9/10
26/26 [==============================] - 4s 160ms/step - loss: 1.3384 - accuracy: 0.8601 - val_loss: 0.5159 - val_accuracy: 0.9184
Epoch 10/10
26/26 [==============================] - 4s 173ms/step - loss: 0.6041 - accuracy: 0.8895 - val_loss: 0.3164 - val_accuracy: 0.9185
testing
**********Testing model**********
training AUC : 0.6204090733263475
testing AUC: 0.6196677312833667
But this one does not work:
def custom_loss(y_true, y_pred):
    y_true = tf.cast(y_true, dtype=y_pred.dtype)
    y_p = tf.identity(y_pred)
    loss = binary_crossentropy(y_true, y_p)
    return loss
I am getting this output
Epoch 1/10
26/26 [==============================] - 11s 179ms/step - loss: 1.3587 - accuracy: 0.9106 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 2/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 3/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 4/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 5/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 6/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 7/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 8/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 9/10
26/26 [==============================] - 4s 160ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 10/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
testing
**********Testing model**********
training AUC : 0.5
testing AUC : 0.5
Is there a problem with tf.identity() that is causing the issue? Or is there another way to copy tensors that I should be using?
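For what it's worth, tf.identity does not change a tensor's values, which is easy to verify in eager mode with made-up numbers; this is only a sanity-check sketch, not the training setup:
import tensorflow as tf
from tensorflow.keras.losses import binary_crossentropy

y_true = tf.constant([[1.0], [0.0], [1.0]])
y_pred = tf.constant([[0.9], [0.2], [0.6]])

direct = binary_crossentropy(y_true, y_pred)
via_copy = binary_crossentropy(y_true, tf.identity(y_pred))

# Both print the same per-sample losses; if the copied version still trains
# differently, the cause is likely in how the graph is built, not the values.
print(direct.numpy())
print(via_copy.numpy())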
I am building a text-to-speech ML model in Keras. Training runs fine until the loss (MSE) reaches 200 or 250, and then it stops improving.
My code:
model = Sequential([
    layers.Embedding(4220, 8, input_length=70),
    layers.Conv1D(32, 3, activation='relu'),
    # layers.MaxPooling1D(2),
    layers.Bidirectional(layers.GRU(32, return_sequences=True)),
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),
    layers.Bidirectional(layers.GRU(128, return_sequences=True)),
    layers.Bidirectional(layers.GRU(512, return_sequences=True)),
    layers.Flatten(),
    layers.Dense(70, activation='relu'),
    layers.Dense(700, activation='relu'),
    layers.Dense(82688, activation='linear')
])
My loss and val_loss (plot not shown).
My epoch history:
Epoch 1/600
100/100 [==============================] - 184s 2s/step - loss: 1277.8810 - val_loss: 1351.1093
Epoch 2/600
100/100 [==============================] - ETA: 0s - loss: 1270.0303
Epoch 3/600
100/100 [==============================] - 167s 2s/step - loss: 1259.7238 - val_loss: 1325.2167
Epoch 4/600
100/100 [==============================] - 166s 2s/step - loss: 1248.1462 - val_loss: 1323.8337
Epoch 5/600
100/100 [==============================] - 167s 2s/step - loss: 1243.7775 - val_loss: 1314.3406
Epoch 6/600
100/100 [==============================] - 167s 2s/step - loss: 1238.6522 - val_loss: 1309.3848
Epoch 7/600
100/100 [==============================] - 167s 2s/step - loss: 1236.4279 - val_loss: 1310.0287
Epoch 8/600
100/100 [==============================] - 166s 2s/step - loss: 1233.0273 - val_loss: 1302.7568
Epoch 9/600
100/100 [==============================] - 167s 2s/step - loss: 1230.3721 - val_loss: 1301.9248
Epoch 10/600
100/100 [==============================] - 167s 2s/step - loss: 1222.4075 - val_loss: 1291.3480
Epoch 11/600
100/100 [==============================] - 171s 2s/step - loss: 1212.7744 - val_loss: 1283.5021
Epoch 12/600
100/100 [==============================] - 167s 2s/step - loss: 1202.2744 - val_loss: 1267.7415
Epoch 13/600
100/100 [==============================] - 166s 2s/step - loss: 1192.2128 - val_loss: 1266.8981
Epoch 14/600
100/100 [==============================] - 166s 2s/step - loss: 1180.1565 - val_loss: 1245.8889
Epoch 15/600
100/100 [==============================] - 166s 2s/step - loss: 1160.5439 - val_loss: 1239.8575
Epoch 16/600
100/100 [==============================] - 167s 2s/step - loss: 1145.3345 - val_loss: 1230.9562
Epoch 17/600
100/100 [==============================] - 170s 2s/step - loss: 1123.7307 - val_loss: 1210.9500
Epoch 18/600
100/100 [==============================] - 169s 2s/step - loss: 1093.4578 - val_loss: 1175.7172
Epoch 19/600
100/100 [==============================] - 174s 2s/step - loss: 1075.5237 - val_loss: 1146.8014
Epoch 20/600
100/100 [==============================] - 171s 2s/step - loss: 1038.1080 - val_loss: 1102.3202
Epoch 21/600
100/100 [==============================] - 170s 2s/step - loss: 1007.4012 - val_loss: 1080.9078
Epoch 22/600
100/100 [==============================] - 169s 2s/step - loss: 964.5367 - val_loss: 1040.2711
Epoch 23/600
100/100 [==============================] - 169s 2s/step - loss: 935.2628 - val_loss: 1001.4338
Epoch 24/600
100/100 [==============================] - 170s 2s/step - loss: 905.0437 - val_loss: 964.1932
Epoch 25/600
100/100 [==============================] - 174s 2s/step - loss: 855.8116 - val_loss: 925.6329
Epoch 26/600
100/100 [==============================] - 171s 2s/step - loss: 841.3853 - val_loss: 906.4941
Epoch 27/600
100/100 [==============================] - 169s 2s/step - loss: 799.1079 - val_loss: 871.1532
Epoch 28/600
100/100 [==============================] - ETA: 0s - loss: 773.6056
Epoch 29/600
100/100 [==============================] - 173s 2s/step - loss: 729.3723 - val_loss: 786.2578
Epoch 30/600
100/100 [==============================] - 171s 2s/step - loss: 700.6778 - val_loss: 746.0270
Epoch 31/600
100/100 [==============================] - 169s 2s/step - loss: 668.5749 - val_loss: 722.8984
Epoch 32/600
100/100 [==============================] - 171s 2s/step - loss: 663.7871 - val_loss: 700.9728
Epoch 33/600
100/100 [==============================] - 170s 2s/step - loss: 631.1932 - val_loss: 693.3641
Epoch 34/600
100/100 [==============================] - 169s 2s/step - loss: 605.1631 - val_loss: 632.8056
Epoch 35/600
100/100 [==============================] - 168s 2s/step - loss: 569.0238 - val_loss: 629.2961
Epoch 36/600
100/100 [==============================] - 169s 2s/step - loss: 558.2274 - val_loss: 605.8413
Epoch 37/600
100/100 [==============================] - 169s 2s/step - loss: 542.5693 - val_loss: 601.4443
Epoch 38/600
100/100 [==============================] - 168s 2s/step - loss: 536.8228 - val_loss: 573.8625
Epoch 39/600
100/100 [==============================] - 168s 2s/step - loss: 521.8893 - val_loss: 581.7361
Epoch 40/600
100/100 [==============================] - 168s 2s/step - loss: 513.2167 - val_loss: 552.5815
Epoch 41/600
100/100 [==============================] - 169s 2s/step - loss: 487.2066 - val_loss: 542.0209
Epoch 42/600
100/100 [==============================] - 168s 2s/step - loss: 476.7976 - val_loss: 526.0634
Epoch 43/600
100/100 [==============================] - 168s 2s/step - loss: 463.9364 - val_loss: 513.9921
Epoch 44/600
100/100 [==============================] - ETA: 0s - loss: 455.5290
Epoch 45/600
100/100 [==============================] - 168s 2s/step - loss: 450.6151 - val_loss: 499.5727
Epoch 46/600
100/100 [==============================] - 168s 2s/step - loss: 442.5982 - val_loss: 517.9942
Epoch 47/600
100/100 [==============================] - 168s 2s/step - loss: 432.7837 - val_loss: 486.6371
Epoch 48/600
100/100 [==============================] - 168s 2s/step - loss: 435.1079 - val_loss: 468.2446
Epoch 49/600
100/100 [==============================] - 168s 2s/step - loss: 418.9065 - val_loss: 457.4920
Epoch 50/600
100/100 [==============================] - ETA: 0s - loss: 403.9354
Epoch 51/600
100/100 [==============================] - 169s 2s/step - loss: 396.1986 - val_loss: 451.8055
Epoch 52/600
100/100 [==============================] - 168s 2s/step - loss: 395.5801 - val_loss: 444.5719
Epoch 53/600
100/100 [==============================] - 169s 2s/step - loss: 391.9252 - val_loss: 440.7292
Epoch 54/600
100/100 [==============================] - 168s 2s/step - loss: 390.3062 - val_loss: 435.1925
Epoch 55/600
100/100 [==============================] - 169s 2s/step - loss: 384.1246 - val_loss: 426.5699
Epoch 56/600
100/100 [==============================] - 168s 2s/step - loss: 378.4158 - val_loss: 416.9171
Epoch 57/600
100/100 [==============================] - 169s 2s/step - loss: 374.4364 - val_loss: 415.8269
Epoch 58/600
100/100 [==============================] - 169s 2s/step - loss: 373.1714 - val_loss: 412.0515
Epoch 59/600
100/100 [==============================] - ETA: 0s - loss: 372.6999
Epoch 60/600
100/100 [==============================] - 168s 2s/step - loss: 366.2562 - val_loss: 411.2773
Epoch 61/600
100/100 [==============================] - 169s 2s/step - loss: 356.3777 - val_loss: 400.4987
Epoch 62/600
100/100 [==============================] - 167s 2s/step - loss: 347.1350 - val_loss: 393.9872
Epoch 63/600
100/100 [==============================] - 167s 2s/step - loss: 348.1526 - val_loss: 404.4293
Epoch 64/600
100/100 [==============================] - 167s 2s/step - loss: 351.8647 - val_loss: 394.9397
Epoch 65/600
100/100 [==============================] - 168s 2s/step - loss: 350.8889 - val_loss: 391.0047
Epoch 66/600
100/100 [==============================] - 170s 2s/step - loss: 345.6855 - val_loss: 384.2527
Epoch 67/600
100/100 [==============================] - 170s 2s/step - loss: 340.5289 - val_loss: 384.8631
Epoch 68/600
100/100 [==============================] - 170s 2s/step - loss: 335.2769 - val_loss: 383.4778
Epoch 69/600
100/100 [==============================] - 169s 2s/step - loss: 333.9134 - val_loss: 372.5128
Epoch 70/600
100/100 [==============================] - 174s 2s/step - loss: 331.4970 - val_loss: 379.6816
Epoch 71/600
100/100 [==============================] - 169s 2s/step - loss: 328.9393 - val_loss: 367.0433
Epoch 72/600
100/100 [==============================] - 168s 2s/step - loss: 324.4344 - val_loss: 366.4271
Epoch 73/600
100/100 [==============================] - 169s 2s/step - loss: 326.3188 - val_loss: 367.3526
Epoch 74/600
100/100 [==============================] - 169s 2s/step - loss: 323.1003 - val_loss: 366.2170
Epoch 75/600
100/100 [==============================] - 170s 2s/step - loss: 319.6701 - val_loss: 363.6429
Epoch 76/600
100/100 [==============================] - 169s 2s/step - loss: 317.2768 - val_loss: 362.6992
Epoch 77/600
100/100 [==============================] - 168s 2s/step - loss: 327.8936 - val_loss: 371.5877
Epoch 78/600
100/100 [==============================] - 170s 2s/step - loss: 327.2147 - val_loss: 372.3321
Epoch 79/600
100/100 [==============================] - 169s 2s/step - loss: 323.5416 - val_loss: 370.9175
Epoch 80/600
100/100 [==============================] - 169s 2s/step - loss: 314.2548 - val_loss: 350.6062
Epoch 81/600
100/100 [==============================] - 169s 2s/step - loss: 306.0438 - val_loss: 343.2444
Epoch 82/600
100/100 [==============================] - 169s 2s/step - loss: 297.7496 - val_loss: 339.3951
Epoch 83/600
100/100 [==============================] - 169s 2s/step - loss: 297.1666 - val_loss: 341.4118
Epoch 84/600
100/100 [==============================] - 168s 2s/step - loss: 304.3513 - val_loss: 346.7999
Epoch 85/600
100/100 [==============================] - 169s 2s/step - loss: 298.1828 - val_loss: 334.8227
Epoch 86/600
100/100 [==============================] - 168s 2s/step - loss: 291.7958 - val_loss: 331.8147
Epoch 87/600
100/100 [==============================] - 168s 2s/step - loss: 293.4404 - val_loss: 332.5406
Epoch 88/600
100/100 [==============================] - 169s 2s/step - loss: 291.5654 - val_loss: 332.8572
Epoch 89/600
100/100 [==============================] - 169s 2s/step - loss: 292.3975 - val_loss: 330.1451
Epoch 90/600
100/100 [==============================] - 169s 2s/step - loss: 292.7254 - val_loss: 332.5945
Epoch 91/600
100/100 [==============================] - 169s 2s/step - loss: 293.6047 - val_loss: 343.6122
Epoch 92/600
100/100 [==============================] - 168s 2s/step - loss: 290.7727 - val_loss: 335.6723
Epoch 93/600
100/100 [==============================] - 169s 2s/step - loss: 290.4878 - val_loss: 327.3437
Epoch 94/600
100/100 [==============================] - 169s 2s/step - loss: 290.0660 - val_loss: 326.8318
Epoch 95/600
100/100 [==============================] - 171s 2s/step - loss: 286.1216 - val_loss: 321.0185
Epoch 96/600
100/100 [==============================] - 170s 2s/step - loss: 279.6049 - val_loss: 316.1211
Epoch 97/600
100/100 [==============================] - 171s 2s/step - loss: 275.9084 - val_loss: 310.0760
Epoch 98/600
100/100 [==============================] - 171s 2s/step - loss: 274.8510 - val_loss: 310.8442
Epoch 99/600
100/100 [==============================] - 177s 2s/step - loss: 273.8518 - val_loss: 315.4663
Epoch 100/600
100/100 [==============================] - 174s 2s/step - loss: 277.0237 - val_loss: 316.6178
Epoch 101/600
100/100 [==============================] - 173s 2s/step - loss: 279.0191 - val_loss: 317.5232
Epoch 102/600
100/100 [==============================] - 173s 2s/step - loss: 278.4234 - val_loss: 315.9617
Epoch 103/600
100/100 [==============================] - ETA: 0s - loss: 278.2243
Epoch 104/600
100/100 [==============================] - 172s 2s/step - loss: 281.0945 - val_loss: 321.7167
Epoch 105/600
100/100 [==============================] - 173s 2s/step - loss: 276.5663 - val_loss: 310.6582
Epoch 106/600
100/100 [==============================] - 173s 2s/step - loss: 273.8687 - val_loss: 315.6835
Epoch 107/600
100/100 [==============================] - 172s 2s/step - loss: 276.4300 - val_loss: 314.7690
Epoch 00107: early stopping
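One thing that sometimes helps when MSE stalls like this is lowering the learning rate on a plateau instead of stopping outright. A minimal sketch, assuming the usual x_train/y_train names and the model defined above:
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    # Halve the learning rate whenever val_loss stops improving for 10 epochs.
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=10,
                      min_lr=1e-6, verbose=1),
    # Still stop eventually, but keep the best weights seen so far.
    EarlyStopping(monitor='val_loss', patience=40, restore_best_weights=True),
]

history = model.fit(x_train, y_train, validation_split=0.1,
                    epochs=600, callbacks=callbacks)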
After finally fixing all the errors this code gave me, I have stumbled upon a new problem, and this time it is the working model that supplies it. Here is the code I have created. This is now the third deep learning program I have made, and I am having a lot of fun with it; however, because I am a beginner in Python in general, some ideas are hard to grasp.
import pandas as pd
from sklearn.model_selection import train_test_split
import keras as kr
from keras import Sequential
from keras.layers import Dense
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf
from sklearn.preprocessing import LabelEncoder
from pandas import DataFrame
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.InteractiveSession(config=config)
pd.set_option('display.max_columns', None)
headers = ['id', 'rated', 'created_at', 'last_move_at', 'turns', 'victory_status', 'winner', 'increment_code',
'white_id', 'white_rating', 'black_id', 'black_rating', 'moves', 'opening_eco', 'opening_name',
'opening_ply']
data = pd.read_csv(r'C:\games.csv', header=None, names=headers)
dataset = DataFrame(data)
dd = dataset.drop([0])
df = dd.drop(columns=['id', 'rated', 'opening_name', 'created_at', 'last_move_at', 'increment_code', 'white_id',
                      'black_id', 'opening_ply', 'turns', 'victory_status', 'moves', 'opening_eco'])
df['winner'] = df['winner'].map({'black': 0, 'white': 1})
y = df['winner']
encoder = LabelEncoder()
encoder.fit(y)
encoded_y = encoder.transform(y)
X = df.drop('winner', axis=1)
X = X.astype("float32")
X_train, X_test, y_train, y_test = train_test_split(X, encoded_y, test_size=0.2)
sc = MinMaxScaler()
scaled_X_train = sc.fit_transform(X_train)
scaled_X_test = sc.transform(X_test)  # reuse the scaler fitted on the training data
model = Sequential()
model.add(Dense(2, input_dim=2, activation='relu'))
model.add(tf.keras.Input(shape=(12, 2)))
model.add(Dense(4, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss=kr.losses.binary_crossentropy, optimizer='adam',
metrics=['accuracy'])
history = model.fit(scaled_X_train, y_train, batch_size=50, epochs=100,
                    verbose=1, validation_data=(scaled_X_test, y_test))
print(history.history)
score = model.evaluate(scaled_X_train, y_train, verbose=1)
My code seems to work fine for the first few epochs, with the accuracy increasing. After that, however, the accuracy doesn't seem to make any progress anymore and lands on a modest value of around 0.610, as seen below. With no idea how to get this higher, I have come to you with the question: how do I fix this?
Epoch 1/100
321/321 [==============================] - 0s 2ms/step - loss: 0.6386 - accuracy: 0.5463 - val_loss: 0.6208 - val_accuracy: 0.5783
Epoch 2/100
321/321 [==============================] - 0s 925us/step - loss: 0.6098 - accuracy: 0.6091 - val_loss: 0.6078 - val_accuracy: 0.5960
Epoch 3/100
321/321 [==============================] - 0s 973us/step - loss: 0.6055 - accuracy: 0.6102 - val_loss: 0.6177 - val_accuracy: 0.5833
Epoch 4/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6042 - accuracy: 0.6129 - val_loss: 0.6138 - val_accuracy: 0.5850
Epoch 5/100
321/321 [==============================] - 0s 973us/step - loss: 0.6041 - accuracy: 0.6106 - val_loss: 0.6233 - val_accuracy: 0.5763
Epoch 6/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6046 - accuracy: 0.6097 - val_loss: 0.6276 - val_accuracy: 0.5733
Epoch 7/100
321/321 [==============================] - 0s 973us/step - loss: 0.6033 - accuracy: 0.6086 - val_loss: 0.6238 - val_accuracy: 0.5733
Epoch 8/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6116 - val_loss: 0.6202 - val_accuracy: 0.5770
Epoch 9/100
321/321 [==============================] - 0s 973us/step - loss: 0.6030 - accuracy: 0.6091 - val_loss: 0.6210 - val_accuracy: 0.5738
Epoch 10/100
321/321 [==============================] - 0s 973us/step - loss: 0.6028 - accuracy: 0.6098 - val_loss: 0.6033 - val_accuracy: 0.5932
Epoch 11/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6094 - val_loss: 0.6166 - val_accuracy: 0.5780
Epoch 12/100
321/321 [==============================] - 0s 925us/step - loss: 0.6025 - accuracy: 0.6104 - val_loss: 0.6026 - val_accuracy: 0.5947
Epoch 13/100
321/321 [==============================] - 0s 925us/step - loss: 0.6021 - accuracy: 0.6099 - val_loss: 0.6243 - val_accuracy: 0.5733
Epoch 14/100
321/321 [==============================] - 0s 876us/step - loss: 0.6027 - accuracy: 0.6098 - val_loss: 0.6176 - val_accuracy: 0.5775
Epoch 15/100
321/321 [==============================] - 0s 925us/step - loss: 0.6029 - accuracy: 0.6091 - val_loss: 0.6286 - val_accuracy: 0.5690
Epoch 16/100
321/321 [==============================] - 0s 876us/step - loss: 0.6025 - accuracy: 0.6083 - val_loss: 0.6104 - val_accuracy: 0.5840
Epoch 17/100
321/321 [==============================] - 0s 876us/step - loss: 0.6021 - accuracy: 0.6102 - val_loss: 0.6039 - val_accuracy: 0.5897
Epoch 18/100
321/321 [==============================] - 0s 973us/step - loss: 0.6021 - accuracy: 0.6113 - val_loss: 0.6046 - val_accuracy: 0.5887
Epoch 19/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6083 - val_loss: 0.6074 - val_accuracy: 0.5860
Epoch 20/100
321/321 [==============================] - 0s 971us/step - loss: 0.6021 - accuracy: 0.6089 - val_loss: 0.6194 - val_accuracy: 0.5738
Epoch 21/100
321/321 [==============================] - 0s 876us/step - loss: 0.6025 - accuracy: 0.6099 - val_loss: 0.6093 - val_accuracy: 0.5857
Epoch 22/100
321/321 [==============================] - 0s 925us/step - loss: 0.6020 - accuracy: 0.6097 - val_loss: 0.6154 - val_accuracy: 0.5773
Epoch 23/100
321/321 [==============================] - 0s 973us/step - loss: 0.6027 - accuracy: 0.6104 - val_loss: 0.6044 - val_accuracy: 0.5895
Epoch 24/100
321/321 [==============================] - 0s 973us/step - loss: 0.6015 - accuracy: 0.6112 - val_loss: 0.6305 - val_accuracy: 0.5710
Epoch 25/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6114 - val_loss: 0.6067 - val_accuracy: 0.5867
Epoch 26/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6017 - accuracy: 0.6102 - val_loss: 0.6140 - val_accuracy: 0.5800
Epoch 27/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6075 - val_loss: 0.6190 - val_accuracy: 0.5755
Epoch 28/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6029 - accuracy: 0.6087 - val_loss: 0.6337 - val_accuracy: 0.5666
Epoch 29/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6095 - val_loss: 0.6089 - val_accuracy: 0.5840
Epoch 30/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6026 - accuracy: 0.6106 - val_loss: 0.6273 - val_accuracy: 0.5690
Epoch 31/100
321/321 [==============================] - 0s 925us/step - loss: 0.6020 - accuracy: 0.6083 - val_loss: 0.6146 - val_accuracy: 0.5785
Epoch 32/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6116 - val_loss: 0.6093 - val_accuracy: 0.5837
Epoch 33/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6096 - val_loss: 0.6139 - val_accuracy: 0.5780
Epoch 34/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6087 - val_loss: 0.6090 - val_accuracy: 0.5850
Epoch 35/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6096 - val_loss: 0.6127 - val_accuracy: 0.5810
Epoch 36/100
321/321 [==============================] - 0s 876us/step - loss: 0.6024 - accuracy: 0.6091 - val_loss: 0.6001 - val_accuracy: 0.5975
Epoch 37/100
321/321 [==============================] - 0s 973us/step - loss: 0.6027 - accuracy: 0.6104 - val_loss: 0.6083 - val_accuracy: 0.5862
Epoch 38/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6090 - val_loss: 0.6073 - val_accuracy: 0.5875
Epoch 39/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6109 - val_loss: 0.6149 - val_accuracy: 0.5785
Epoch 40/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6085 - val_loss: 0.6175 - val_accuracy: 0.5758
Epoch 41/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6079 - val_loss: 0.6062 - val_accuracy: 0.5865
Epoch 42/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6097 - val_loss: 0.6060 - val_accuracy: 0.5867
Epoch 43/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6082 - val_loss: 0.6074 - val_accuracy: 0.5862
Epoch 44/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6096 - val_loss: 0.6150 - val_accuracy: 0.5785
Epoch 45/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6014 - accuracy: 0.6112 - val_loss: 0.6241 - val_accuracy: 0.5740
Epoch 46/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6111 - val_loss: 0.6118 - val_accuracy: 0.5815
Epoch 47/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6073 - val_loss: 0.6110 - val_accuracy: 0.5835
Epoch 48/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6074 - val_loss: 0.6107 - val_accuracy: 0.5835
Epoch 49/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6097 - val_loss: 0.6081 - val_accuracy: 0.5862
Epoch 50/100
321/321 [==============================] - 0s 973us/step - loss: 0.6014 - accuracy: 0.6078 - val_loss: 0.6214 - val_accuracy: 0.5770
Epoch 51/100
321/321 [==============================] - 0s 973us/step - loss: 0.6023 - accuracy: 0.6093 - val_loss: 0.6011 - val_accuracy: 0.5952
Epoch 52/100
321/321 [==============================] - 0s 973us/step - loss: 0.6028 - accuracy: 0.6094 - val_loss: 0.6013 - val_accuracy: 0.5950
Epoch 53/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6022 - accuracy: 0.6079 - val_loss: 0.6158 - val_accuracy: 0.5770
Epoch 54/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6103 - val_loss: 0.6080 - val_accuracy: 0.5862
Epoch 55/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6095 - val_loss: 0.6180 - val_accuracy: 0.5775
Epoch 56/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6099 - val_loss: 0.6106 - val_accuracy: 0.5842
Epoch 57/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6078 - val_loss: 0.6232 - val_accuracy: 0.5740
Epoch 58/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6099 - val_loss: 0.6155 - val_accuracy: 0.5788
Epoch 59/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6026 - accuracy: 0.6119 - val_loss: 0.6150 - val_accuracy: 0.5775
Epoch 60/100
321/321 [==============================] - 0s 973us/step - loss: 0.6014 - accuracy: 0.6092 - val_loss: 0.5982 - val_accuracy: 0.6012
Epoch 61/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6087 - val_loss: 0.6022 - val_accuracy: 0.5947
Epoch 62/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6099 - val_loss: 0.6265 - val_accuracy: 0.5735
Epoch 63/100
321/321 [==============================] - 0s 899us/step - loss: 0.6019 - accuracy: 0.6099 - val_loss: 0.6172 - val_accuracy: 0.5775
Epoch 64/100
321/321 [==============================] - 0s 982us/step - loss: 0.6018 - accuracy: 0.6099 - val_loss: 0.6116 - val_accuracy: 0.5815
Epoch 65/100
321/321 [==============================] - 0s 969us/step - loss: 0.6015 - accuracy: 0.6099 - val_loss: 0.6230 - val_accuracy: 0.5738
Epoch 66/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6094 - val_loss: 0.6058 - val_accuracy: 0.5870
Epoch 67/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6103 - val_loss: 0.6250 - val_accuracy: 0.5723
Epoch 68/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6015 - accuracy: 0.6109 - val_loss: 0.6129 - val_accuracy: 0.5790
Epoch 69/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6099 - val_loss: 0.6061 - val_accuracy: 0.5867
Epoch 70/100
321/321 [==============================] - 0s 2ms/step - loss: 0.6031 - accuracy: 0.6084 - val_loss: 0.5999 - val_accuracy: 0.5980
Epoch 71/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6080 - val_loss: 0.6065 - val_accuracy: 0.5862
Epoch 72/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6015 - accuracy: 0.6097 - val_loss: 0.6193 - val_accuracy: 0.5745
Epoch 73/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6024 - accuracy: 0.6081 - val_loss: 0.6183 - val_accuracy: 0.5753
Epoch 74/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6017 - accuracy: 0.6094 - val_loss: 0.6165 - val_accuracy: 0.5778
Epoch 75/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6091 - val_loss: 0.6008 - val_accuracy: 0.5955
Epoch 76/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6094 - val_loss: 0.6235 - val_accuracy: 0.5733
Epoch 77/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6083 - val_loss: 0.6178 - val_accuracy: 0.5773
Epoch 78/100
321/321 [==============================] - 0s 973us/step - loss: 0.6016 - accuracy: 0.6099 - val_loss: 0.6232 - val_accuracy: 0.5715
Epoch 79/100
321/321 [==============================] - 0s 973us/step - loss: 0.6024 - accuracy: 0.6052 - val_loss: 0.6262 - val_accuracy: 0.5705
Epoch 80/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6050 - val_loss: 0.6150 - val_accuracy: 0.5785
Epoch 81/100
321/321 [==============================] - 0s 973us/step - loss: 0.6011 - accuracy: 0.6111 - val_loss: 0.6177 - val_accuracy: 0.5755
Epoch 82/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6087 - val_loss: 0.6124 - val_accuracy: 0.5783
Epoch 83/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6090 - val_loss: 0.6107 - val_accuracy: 0.5833
Epoch 84/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6102 - val_loss: 0.6110 - val_accuracy: 0.5800
Epoch 85/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6094 - val_loss: 0.6077 - val_accuracy: 0.5845
Epoch 86/100
321/321 [==============================] - 0s 973us/step - loss: 0.6016 - accuracy: 0.6069 - val_loss: 0.6109 - val_accuracy: 0.5798
Epoch 87/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6092 - val_loss: 0.6117 - val_accuracy: 0.5798
Epoch 88/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6089 - val_loss: 0.6105 - val_accuracy: 0.5808
Epoch 89/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6063 - val_loss: 0.6190 - val_accuracy: 0.5753
Epoch 90/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6083 - val_loss: 0.6211 - val_accuracy: 0.5740
Epoch 91/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6058 - val_loss: 0.6117 - val_accuracy: 0.5785
Epoch 92/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6077 - val_loss: 0.6200 - val_accuracy: 0.5740
Epoch 93/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6014 - accuracy: 0.6078 - val_loss: 0.6230 - val_accuracy: 0.5735
Epoch 94/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6087 - val_loss: 0.6113 - val_accuracy: 0.5810
Epoch 95/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6086 - val_loss: 0.6203 - val_accuracy: 0.5755
Epoch 96/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6013 - accuracy: 0.6088 - val_loss: 0.6273 - val_accuracy: 0.5693
Epoch 97/100
321/321 [==============================] - 0s 925us/step - loss: 0.6019 - accuracy: 0.6071 - val_loss: 0.6023 - val_accuracy: 0.5927
Epoch 98/100
321/321 [==============================] - 0s 973us/step - loss: 0.6023 - accuracy: 0.6072 - val_loss: 0.6093 - val_accuracy: 0.5810
Epoch 99/100
321/321 [==============================] - 0s 925us/step - loss: 0.6012 - accuracy: 0.6091 - val_loss: 0.6018 - val_accuracy: 0.5937
Epoch 100/100
321/321 [==============================] - 0s 973us/step - loss: 0.6015 - accuracy: 0.6092 - val_loss: 0.6255 - val_accuracy: 0.5710
Either there is a problem in your training data, or the model is too small.
By looking at the loss, which is obviously not changing at all, I'd say the problem is the size of the model. Try adding more neurons to the dense layers.
Your model is not big enough to handle the data, so try increasing its size.
Increasing the model size makes it more vulnerable to overfitting, but adding some Dropout layers mitigates that.
from keras import Sequential
from keras.layers import Dense, Dropout, Input

model = Sequential()
model.add(Input(shape=(2,)))  # two input features: white_rating and black_rating
model.add(Dense(48, activation='relu'))
model.add(Dropout(0.20))
model.add(Dense(32, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dropout(0.15))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
Furthermore, a lower learning rate lets the model reach a better accuracy and loss. You can tweak the learning rate of the Adam optimizer when compiling:
model.compile(keras.optimizers.Adam(lr=0.0002), loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
The default learning rate of the Adam optimizer is 0.001.
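Note that in newer Keras releases the argument is spelled learning_rate rather than lr (lr still works in older versions but is deprecated), so an equivalent compile call would be:
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=0.0002),
              loss='binary_crossentropy',
              metrics=['accuracy'])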
I'm using Keras to train a network for a classification problem in Python. The model I am using is as follows:
filter_size = (2,2)
maxpool_size = (2, 2)
dr = 0.5
inputs = Input((12,8,1), name='main_input')
main_branch = Conv2D(20, kernel_size=filter_size, padding="same", kernel_regularizer=l2(0.0001),bias_regularizer=l2(0.0001))(inputs)
main_branch = BatchNormalization(momentum=0.9)(main_branch)
main_branch = Activation("relu")(main_branch)
main_branch = MaxPooling2D(pool_size=maxpool_size,strides=(1, 1))(main_branch)
main_branch = Conv2D(40, kernel_size=filter_size, padding="same", kernel_regularizer=l2(0.0001),bias_regularizer=l2(0.0001))(main_branch)
main_branch = BatchNormalization(momentum=0.9)(main_branch)
main_branch = Activation("relu")(main_branch)
main_branch = Flatten()(main_branch)
main_branch = Dense(100,kernel_regularizer=l2(0.0001),bias_regularizer=l2(0.0001))(main_branch)
main_branch = Dense(100, kernel_regularizer=l2(0.0001),bias_regularizer=l2(0.0001))(main_branch)
SubArray_branch = Dense(496, activation='softmax', name='SubArray_output')(main_branch)
model = Model(inputs=inputs, outputs=SubArray_branch)
opt = keras.optimizers.Adam(lr=1e-3, epsilon=1e-08, clipnorm=1.0)
model.compile(optimizer=opt,
              loss={'SubArray_output': 'sparse_categorical_crossentropy'},
              metrics=['accuracy'])
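# Note: fit() below receives both validation_data and validation_split;
# when both are given, Keras uses validation_data and ignores validation_split.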
history = model.fit({'main_input': Channel},
                    {'SubArray_output': array_indx},
                    validation_data=(test_Data, test_array),
                    epochs=100, batch_size=128,
                    verbose=1,
                    validation_split=0.2)
When I train this network on my training data, I get a high validation loss compared to the training loss, as you can see below:
471/471 [==============================] - 5s 10ms/step - loss: 0.5723 - accuracy: 0.9010 - val_loss: 20.2040 - val_accuracy: 0.0126
Epoch 33/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5486 - accuracy: 0.9087 - val_loss: 35.2516 - val_accuracy: 0.0037
Epoch 34/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5342 - accuracy: 0.9159 - val_loss: 50.2577 - val_accuracy: 0.0043
Epoch 35/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5345 - accuracy: 0.9132 - val_loss: 26.0221 - val_accuracy: 0.0051
Epoch 36/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5333 - accuracy: 0.9140 - val_loss: 71.2754 - val_accuracy: 0.0043
Epoch 37/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5149 - accuracy: 0.9231 - val_loss: 67.2646 - val_accuracy: 3.3227e-04
Epoch 38/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5269 - accuracy: 0.9162 - val_loss: 17.7448 - val_accuracy: 0.0206
Epoch 39/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5198 - accuracy: 0.9201 - val_loss: 92.7240 - val_accuracy: 0.0015
Epoch 40/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5157 - accuracy: 0.9247 - val_loss: 30.9589 - val_accuracy: 0.0082
Epoch 41/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4961 - accuracy: 0.9316 - val_loss: 20.0444 - val_accuracy: 0.0141
Epoch 42/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5093 - accuracy: 0.9256 - val_loss: 16.7269 - val_accuracy: 0.0172
Epoch 43/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5092 - accuracy: 0.9267 - val_loss: 15.6939 - val_accuracy: 0.0320
Epoch 44/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5104 - accuracy: 0.9270 - val_loss: 103.2581 - val_accuracy: 0.0027
Epoch 45/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5074 - accuracy: 0.9286 - val_loss: 28.3097 - val_accuracy: 0.0154
Epoch 46/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4977 - accuracy: 0.9303 - val_loss: 28.6676 - val_accuracy: 0.0167
Epoch 47/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4823 - accuracy: 0.9375 - val_loss: 47.4671 - val_accuracy: 0.0015
Epoch 48/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5053 - accuracy: 0.9291 - val_loss: 39.3356 - val_accuracy: 0.0082
Epoch 49/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5110 - accuracy: 0.9287 - val_loss: 42.8834 - val_accuracy: 0.0082
Epoch 50/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4895 - accuracy: 0.9366 - val_loss: 11.7254 - val_accuracy: 0.0700
Epoch 51/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4909 - accuracy: 0.9351 - val_loss: 14.5519 - val_accuracy: 0.0276
Epoch 52/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4846 - accuracy: 0.9380 - val_loss: 22.5101 - val_accuracy: 0.0122
Epoch 53/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4991 - accuracy: 0.9315 - val_loss: 16.1494 - val_accuracy: 0.0283
Epoch 54/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4782 - accuracy: 0.9423 - val_loss: 14.8626 - val_accuracy: 0.0551
Epoch 55/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4807 - accuracy: 0.9401 - val_loss: 100.8670 - val_accuracy: 9.9681e-04
Epoch 56/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4759 - accuracy: 0.9420 - val_loss: 34.8571 - val_accuracy: 0.0047
Epoch 57/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4802 - accuracy: 0.9406 - val_loss: 23.2134 - val_accuracy: 0.0524
Epoch 58/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4998 - accuracy: 0.9334 - val_loss: 20.9038 - val_accuracy: 0.0207
Epoch 59/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4813 - accuracy: 0.9400 - val_loss: 19.5474 - val_accuracy: 0.0393
Epoch 60/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4846 - accuracy: 0.9399 - val_loss: 15.1594 - val_accuracy: 0.0439
Epoch 61/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4718 - accuracy: 0.9436 - val_loss: 30.0164 - val_accuracy: 0.0078
Epoch 62/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4897 - accuracy: 0.9375 - val_loss: 60.0498 - val_accuracy: 0.0144
Epoch 63/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4668 - accuracy: 0.9461 - val_loss: 18.8190 - val_accuracy: 0.0298
Epoch 64/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4598 - accuracy: 0.9485 - val_loss: 26.1101 - val_accuracy: 0.0231
Epoch 65/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4672 - accuracy: 0.9442 - val_loss: 108.7207 - val_accuracy: 2.6582e-04
Epoch 66/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4910 - accuracy: 0.9378 - val_loss: 45.6070 - val_accuracy: 0.0052
Epoch 67/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4805 - accuracy: 0.9429 - val_loss: 39.3904 - val_accuracy: 0.0057
Epoch 68/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4682 - accuracy: 0.9451 - val_loss: 21.5525 - val_accuracy: 0.0328
Epoch 69/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4613 - accuracy: 0.9472 - val_loss: 46.7714 - val_accuracy: 0.0027
Epoch 70/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4786 - accuracy: 0.9417 - val_loss: 13.4834 - val_accuracy: 0.0708
Epoch 71/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4756 - accuracy: 0.9442 - val_loss: 41.8796 - val_accuracy: 0.0199
Epoch 72/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4655 - accuracy: 0.9464 - val_loss: 57.7453 - val_accuracy: 0.0017
Epoch 73/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4795 - accuracy: 0.9428 - val_loss: 16.1949 - val_accuracy: 0.0285
Epoch 74/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4755 - accuracy: 0.9440 - val_loss: 68.2349 - val_accuracy: 0.0139
Epoch 75/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4807 - accuracy: 0.9425 - val_loss: 43.4699 - val_accuracy: 0.0233
Epoch 76/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4515 - accuracy: 0.9524 - val_loss: 175.2205 - val_accuracy: 0.0019
Epoch 77/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4715 - accuracy: 0.9467 - val_loss: 92.2833 - val_accuracy: 0.0017
Epoch 78/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4736 - accuracy: 0.9447 - val_loss: 94.7209 - val_accuracy: 0.0059
Epoch 79/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4661 - accuracy: 0.9473 - val_loss: 17.8870 - val_accuracy: 0.0386
Epoch 80/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4614 - accuracy: 0.9492 - val_loss: 28.1883 - val_accuracy: 0.0042
Epoch 81/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4569 - accuracy: 0.9507 - val_loss: 49.2823 - val_accuracy: 0.0032
Epoch 82/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4623 - accuracy: 0.9485 - val_loss: 29.8972 - val_accuracy: 0.0100
Epoch 83/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4799 - accuracy: 0.9429 - val_loss: 109.5044 - val_accuracy: 0.0062
Epoch 84/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4810 - accuracy: 0.9444 - val_loss: 71.2103 - val_accuracy: 0.0051
Epoch 85/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4452 - accuracy: 0.9552 - val_loss: 30.7861 - val_accuracy: 0.0100
Epoch 86/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4805 - accuracy: 0.9423 - val_loss: 48.1887 - val_accuracy: 0.0031
Epoch 87/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4564 - accuracy: 0.9512 - val_loss: 189.6711 - val_accuracy: 1.3291e-04
Epoch 88/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4479 - accuracy: 0.9537 - val_loss: 58.6349 - val_accuracy: 0.0199
Epoch 89/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4667 - accuracy: 0.9476 - val_loss: 95.7323 - val_accuracy: 0.0041
Epoch 90/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4808 - accuracy: 0.9436 - val_loss: 28.7513 - val_accuracy: 0.0191
Epoch 91/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4583 - accuracy: 0.9511 - val_loss: 16.4281 - val_accuracy: 0.0431
Epoch 92/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4458 - accuracy: 0.9541 - val_loss: 15.3890 - val_accuracy: 0.0517
Epoch 93/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4628 - accuracy: 0.9491 - val_loss: 37.3123 - val_accuracy: 0.0024
Epoch 94/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4716 - accuracy: 0.9481 - val_loss: 24.8934 - val_accuracy: 0.0123
Epoch 95/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4646 - accuracy: 0.9469 - val_loss: 54.6682 - val_accuracy: 5.9809e-04
Epoch 96/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4665 - accuracy: 0.9492 - val_loss: 89.1835 - val_accuracy: 0.0064
Epoch 97/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4533 - accuracy: 0.9527 - val_loss: 60.9850 - val_accuracy: 0.0035
Epoch 98/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4597 - accuracy: 0.9491 - val_loss: 41.6088 - val_accuracy: 0.0023
Epoch 99/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4511 - accuracy: 0.9537 - val_loss: 28.2131 - val_accuracy: 0.0025
Epoch 100/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4568 - accuracy: 0.9509 - val_loss: 121.8944 - val_accuracy: 0.0041
I am well aware that the problem I am facing is due to overfitting, but when I train the same network with the same training data in Matlab, the training loss and validation loss stay close to each other. The picture of the training progress in Matlab is linked as:
Training Progress
I would appreciate it if anyone could explain why I can't get the same result in Python. What would you suggest to solve this problem?
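One cheap diagnostic, sketched below with the variable names from the fit call above: model.evaluate runs the network in inference mode, where BatchNormalization uses its moving statistics instead of per-batch statistics, while the loss printed by fit is computed in training mode. If the inference-mode loss is already huge on the training inputs themselves, the gap comes from the BatchNormalization statistics rather than from the validation data.
# Evaluate the *training* data in inference mode and compare with fit's loss.
train_loss, train_acc = model.evaluate(Channel, array_indx, verbose=0)
print('training data, inference mode:', train_loss, train_acc)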
I am using Keras for image demosaicing. I am using different channels, and then I am building an MLP. The code is as follows:
modelRed = Sequential()
modelRed.add(Dense(12 , input_shape=(red_rows,)))
modelRed.add(Activation('relu'))
modelRed.add(Dense(8))
modelRed.add(Activation('relu'))
modelRed.add(Dense(4))
modelRed.add(Activation('relu'))
modelRed.add(Dense(1))
modelRed.add(Activation('relu'))
# for a mean squared error regression problem
modelRed.compile(optimizer='adam', loss='mean_squared_error')
modelRed.fit(X_train_red, Y_train_red, batch_size=batch_size, nb_epoch=nb_epoch, validation_data=(X_test_red, Y_test_red), verbose=1)
The input is 16 raw pixels. But as is visible, the error is not dropping, and it seems I am moving away from the data cloud. What could be the reason for this? Is it that I need to do some pre-processing before feeding the data into the MLP? How do I make sure that I converge? I am using a simple MLP; do you think I should change the model? Should I use a CNN over the Bayer pattern?
Train on 3189384 samples, validate on 1321608 samples
Epoch 1/250
3189384/3189384 [==============================] - 27s - loss: 198.4509 - val_loss: 115.0920
Epoch 2/250
3189384/3189384 [==============================] - 28s - loss: 164.1169 - val_loss: 112.7843
Epoch 3/250
3189384/3189384 [==============================] - 26s - loss: 163.5934 - val_loss: 112.5161
Epoch 4/250
3189384/3189384 [==============================] - 27s - loss: 163.1947 - val_loss: 112.1536
Epoch 5/250
3189384/3189384 [==============================] - 27s - loss: 162.7713 - val_loss: 111.6212
Epoch 6/250
3189384/3189384 [==============================] - 23s - loss: 162.1866 - val_loss: 111.6891
Epoch 7/250
3189384/3189384 [==============================] - 27s - loss: 161.2873 - val_loss: 110.5558
Epoch 8/250
3189384/3189384 [==============================] - 27s - loss: 160.2043 - val_loss: 109.1132
Epoch 9/250
3189384/3189384 [==============================] - 26s - loss: 159.4309 - val_loss: 108.6063
Epoch 10/250
3189384/3189384 [==============================] - 26s - loss: 158.9754 - val_loss: 108.2700
Epoch 11/250
3189384/3189384 [==============================] - 27s - loss: 158.4734 - val_loss: 107.8127
Epoch 12/250
3189384/3189384 [==============================] - 25s - loss: 158.0978 - val_loss: 108.4448
Epoch 13/250
3189384/3189384 [==============================] - 26s - loss: 157.7607 - val_loss: 107.6800
Epoch 14/250
3189384/3189384 [==============================] - 26s - loss: 157.4270 - val_loss: 107.1163
Epoch 15/250
3189384/3189384 [==============================] - 27s - loss: 157.0340 - val_loss: 106.3458
Epoch 16/250
3189384/3189384 [==============================] - 27s - loss: 156.8197 - val_loss: 106.3995
Epoch 17/250
3189384/3189384 [==============================] - 26s - loss: 156.5804 - val_loss: 105.8783
Epoch 18/250
3189384/3189384 [==============================] - 26s - loss: 156.3417 - val_loss: 105.9321
Epoch 19/250
3189384/3189384 [==============================] - 27s - loss: 156.2147 - val_loss: 106.3168
Epoch 20/250
3189384/3189384 [==============================] - 25s - loss: 155.9733 - val_loss: 105.2670
Epoch 21/250
3189384/3189384 [==============================] - 27s - loss: 155.7053 - val_loss: 108.8695
Epoch 22/250
3189384/3189384 [==============================] - 26s - loss: 155.3440 - val_loss: 105.9274
Epoch 23/250
3189384/3189384 [==============================] - 25s - loss: 155.0451 - val_loss: 107.7409
Epoch 24/250
3189384/3189384 [==============================] - 24s - loss: 154.9362 - val_loss: 104.4415
Epoch 25/250
3189384/3189384 [==============================] - 27s - loss: 154.6278 - val_loss: 104.0498
Epoch 26/250
3189384/3189384 [==============================] - 26s - loss: 154.4255 - val_loss: 104.9958
Epoch 27/250
3189384/3189384 [==============================] - 25s - loss: 154.1879 - val_loss: 103.9495
Epoch 28/250
3189384/3189384 [==============================] - 26s - loss: 154.0734 - val_loss: 103.5785
Epoch 29/250
3189384/3189384 [==============================] - 26s - loss: 153.9429 - val_loss: 107.9921
Epoch 30/250
3189384/3189384 [==============================] - 24s - loss: 153.7445 - val_loss: 103.5331
... (epochs 31-119 omitted: loss falls slowly from ~153.6 to ~149.3, val_loss from ~103.8 to ~99.9) ...
Epoch 120/250
3189384/3189384 [==============================] - 19s - loss: 149.3278 - val_loss: 100.0638
Epoch 121/250
3189384/3189384 [==============================] - 20s - loss: 149.3423 - val_loss: 100.7594
Epoch 122/250
3189384/3189384 [==============================] - 20s - loss: 149.3336 - val_loss: 100.1827
Epoch 123/250
3189384/3189384 [==============================] - 19s - loss: 149.3595 - val_loss: 100.6511
Epoch 124/250
3189384/3189384 [==============================] - 19s - loss: 149.2708 - val_loss: 101.1436
Epoch 125/250
3189384/3189384 [==============================] - 20s - loss: 149.3454 - val_loss: 99.6360
... (epochs 126-243 omitted: both curves plateau, loss drifting from ~149.3 to ~148.6 and val_loss hovering mostly between ~98.8 and ~101, with occasional spikes to ~103) ...
Epoch 244/250
3189384/3189384 [==============================] - 19s - loss: 148.6802 - val_loss: 99.0968
Epoch 245/250
3189384/3189384 [==============================] - 21s - loss: 148.6096 - val_loss: 99.0785
Epoch 246/250
3189384/3189384 [==============================] - 20s - loss: 148.6481 - val_loss: 101.0822
Epoch 247/250
3189384/3189384 [==============================] - 21s - loss: 148.6492 - val_loss: 99.1234
Epoch 248/250
3189384/3189384 [==============================] - 20s - loss: 148.6716 - val_loss: 99.0074
Epoch 249/250
3189384/3189384 [==============================] - 20s - loss: 148.5658 - val_loss: 98.9340
Epoch 250/250
3189384/3189384 [==============================] - 19s - loss: 148.6153 - val_loss: 99.0102
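For what it's worth, a long console dump like the one above can be turned back into plottable series for a direct comparison against the plotted history. Below is a minimal sketch (the file name training.log is hypothetical; it assumes the console output was saved to a text file) that pulls the per-epoch loss/val_loss pairs out of these progress lines with a regular expression:

import re

# Hypothetical file holding the console output pasted above.
pattern = re.compile(r"loss: ([\d.]+) - val_loss: ([\d.]+)")

losses, val_losses = [], []
with open("training.log") as f:
    for line in f:
        m = pattern.search(line)
        if m:  # only the per-epoch progress lines match
            losses.append(float(m.group(1)))
            val_losses.append(float(m.group(2)))

print(len(losses), "epochs parsed")
print("final loss:", losses[-1], "final val_loss:", val_losses[-1])

The two lists can then be plotted or tabulated epoch by epoch to check whether the curve being drawn actually matches the values reported in the log.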