I am trying to run an autoencoder for dimensionality reduction on a fraud-detection dataset (https://www.kaggle.com/kartik2112/fraud-detection?select=fraudTest.csv) and am getting very high loss values on every epoch. Below is the autoencoder code.
nb_epoch = 100
batch_size = 128
input_dim = X_train.shape[1]
encoding_dim = 14
hidden_dim = int(encoding_dim / 2)
learning_rate = 1e-7

input_layer = Input(shape=(input_dim,))
encoder = Dense(encoding_dim, activation="tanh",
                activity_regularizer=regularizers.l1(learning_rate))(input_layer)
encoder = Dense(hidden_dim, activation="relu")(encoder)
decoder = Dense(hidden_dim, activation="tanh")(encoder)
decoder = Dense(input_dim, activation="relu")(decoder)

autoencoder = Model(inputs=input_layer, outputs=decoder)
autoencoder.compile(metrics=['accuracy'],
                    loss='mean_squared_error',
                    optimizer='adam')

cp = ModelCheckpoint(filepath="autoencoder_fraud.h5",
                     save_best_only=True,
                     verbose=0)
tb = TensorBoard(log_dir='./logs',
                 histogram_freq=0,
                 write_graph=True,
                 write_images=True)

history = autoencoder.fit(X_train, X_train,
                          epochs=nb_epoch,
                          batch_size=batch_size,
                          shuffle=True,
                          validation_data=(X_test, X_test),
                          verbose=1,
                          callbacks=[cp, tb]).history
Here is a snippet of the loss values:
Epoch 1/100
10131/10131 [==============================] - 32s 3ms/step - loss: 52445827358.6230 - accuracy: 0.3389 - val_loss: 9625651200.0000 - val_accuracy: 0.5083
Epoch 2/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52393605025.8066 - accuracy: 0.5083 - val_loss: 9621398528.0000 - val_accuracy: 0.5083
Epoch 3/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52486496629.1354 - accuracy: 0.5082 - val_loss: 9617147904.0000 - val_accuracy: 0.5083
Epoch 4/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52514002255.9432 - accuracy: 0.5070 - val_loss: 9612887040.0000 - val_accuracy: 0.5083
Epoch 5/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52436489238.6388 - accuracy: 0.5076 - val_loss: 9608664064.0000 - val_accuracy: 0.5083
Epoch 6/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52430005774.7556 - accuracy: 0.5081 - val_loss: 9604417536.0000 - val_accuracy: 0.5083
Epoch 7/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52474495714.5898 - accuracy: 0.5079 - val_loss: 9600195584.0000 - val_accuracy: 0.5083
Epoch 8/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52423052560.0695 - accuracy: 0.5076 - val_loss: 9595947008.0000 - val_accuracy: 0.5083
Epoch 9/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52442358260.0742 - accuracy: 0.5072 - val_loss: 9591708672.0000 - val_accuracy: 0.5083
Epoch 10/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52402494704.5369 - accuracy: 0.5089 - val_loss: 9587487744.0000 - val_accuracy: 0.5083
Epoch 11/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52396583628.3553 - accuracy: 0.5081 - val_loss: 9583238144.0000 - val_accuracy: 0.5083
Epoch 12/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52349824708.2700 - accuracy: 0.5076 - val_loss: 9579020288.0000 - val_accuracy: 0.5083
Epoch 13/100
10131/10131 [==============================] - 31s 3ms/step - loss: 52332072133.6850 - accuracy: 0.5083 - val_loss: 9574786048.0000 - val_accuracy: 0.5083
Epoch 14/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52353680011.6731 - accuracy: 0.5086 - val_loss: 9570555904.0000 - val_accuracy: 0.5083
Epoch 15/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52347432594.5456 - accuracy: 0.5088 - val_loss: 9566344192.0000 - val_accuracy: 0.5083
Epoch 16/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52327825554.3435 - accuracy: 0.5076 - val_loss: 9562103808.0000 - val_accuracy: 0.5083
Epoch 17/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52347251610.1255 - accuracy: 0.5080 - val_loss: 9557892096.0000 - val_accuracy: 0.5083
Epoch 18/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52292632667.3636 - accuracy: 0.5079 - val_loss: 9553654784.0000 - val_accuracy: 0.5083
Epoch 19/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52354135093.7671 - accuracy: 0.5083 - val_loss: 9549425664.0000 - val_accuracy: 0.5083
Epoch 20/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52295668148.2006 - accuracy: 0.5086 - val_loss: 9545219072.0000 - val_accuracy: 0.5083
Epoch 21/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52314219115.3320 - accuracy: 0.5079 - val_loss: 9540980736.0000 - val_accuracy: 0.5083
Epoch 22/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52328022934.0829 - accuracy: 0.5079 - val_loss: 9536788480.0000 - val_accuracy: 0.5083
Epoch 23/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52268139834.5172 - accuracy: 0.5074 - val_loss: 9532554240.0000 - val_accuracy: 0.5083
Epoch 24/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52308370726.3040 - accuracy: 0.5077 - val_loss: 9528341504.0000 - val_accuracy: 0.5083
Epoch 25/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52224468101.4070 - accuracy: 0.5081 - val_loss: 9524126720.0000 - val_accuracy: 0.5083
Epoch 26/100
10131/10131 [==============================] - 30s 3ms/step - loss: 52200100823.1694 - accuracy: 0.5080 - val_loss: 9519915008.0000 - val_accuracy: 0.5083
Any advice or solution would be highly appreciated. Thank you.
I have scaled the numerical data using StandardScaler and encoded the categorical data using LabelEncoder.
First of all, check which numerical columns you scaled.
I suspect you scaled cc_num by mistake: cc_num is a categorical column (a card identifier), not a true numeric feature, and its 13–16 digit values will dominate the reconstruction error.
Fixing that should solve your problem with the high loss, but it doesn't mean your model will be good.
You should first take a careful look at the features and try to find useful relationships between the label and the features (data preprocessing/featurization).
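For instance, a minimal sketch of the fix (column names are assumptions based on the Kaggle dataset): label-encode cc_num as an identifier and standardize only genuinely numeric columns such as the transaction amount.

```python
# Sketch: treat cc_num as a categorical ID rather than a numeric magnitude.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler

df = pd.DataFrame({
    "cc_num": [4613314721966, 340187018810220, 4613314721966],  # huge IDs, not magnitudes
    "amt": [4.97, 107.23, 220.11],
})

# Label-encode the card number; scale only the true numeric column.
df["cc_num"] = LabelEncoder().fit_transform(df["cc_num"])
df[["amt"]] = StandardScaler().fit_transform(df[["amt"]])

print(df["cc_num"].tolist())  # [0, 1, 0] -- small codes instead of 13-16 digit values
```

With the raw card numbers gone, the mean-squared reconstruction error is no longer inflated by features on a ~1e12 scale.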
I am trying to create a custom loss function, but as soon as I try to make a copy of the y_pred (model predictions) tensor, the loss function stops working.
This version works:
def custom_loss(y_true, y_pred):
    y_true = tf.cast(y_true, dtype=y_pred.dtype)
    loss = binary_crossentropy(y_true, y_pred)
    return loss
The output is
Epoch 1/10
26/26 [==============================] - 5s 169ms/step - loss: 56.1577 - accuracy: 0.7867 - val_loss: 14.7032 - val_accuracy: 0.9185
Epoch 2/10
26/26 [==============================] - 4s 159ms/step - loss: 18.6890 - accuracy: 0.8762 - val_loss: 9.4140 - val_accuracy: 0.9185
Epoch 3/10
26/26 [==============================] - 4s 158ms/step - loss: 13.7425 - accuracy: 0.8437 - val_loss: 7.7499 - val_accuracy: 0.9185
Epoch 4/10
26/26 [==============================] - 4s 159ms/step - loss: 10.5267 - accuracy: 0.8510 - val_loss: 6.1037 - val_accuracy: 0.9185
Epoch 5/10
26/26 [==============================] - 4s 160ms/step - loss: 7.5695 - accuracy: 0.8544 - val_loss: 3.9937 - val_accuracy: 0.9185
Epoch 6/10
26/26 [==============================] - 4s 159ms/step - loss: 5.1320 - accuracy: 0.8538 - val_loss: 2.6940 - val_accuracy: 0.9185
Epoch 7/10
26/26 [==============================] - 4s 160ms/step - loss: 3.3265 - accuracy: 0.8557 - val_loss: 1.6613 - val_accuracy: 0.9185
Epoch 8/10
26/26 [==============================] - 4s 160ms/step - loss: 2.1421 - accuracy: 0.8538 - val_loss: 1.0443 - val_accuracy: 0.9185
Epoch 9/10
26/26 [==============================] - 4s 160ms/step - loss: 1.3384 - accuracy: 0.8601 - val_loss: 0.5159 - val_accuracy: 0.9184
Epoch 10/10
26/26 [==============================] - 4s 173ms/step - loss: 0.6041 - accuracy: 0.8895 - val_loss: 0.3164 - val_accuracy: 0.9185
testing
**********Testing model**********
training AUC : 0.6204090733263475
testing AUC: 0.6196677312833667
But this version is not working:
def custom_loss(y_true, y_pred):
    y_true = tf.cast(y_true, dtype=y_pred.dtype)
    y_p = tf.identity(y_pred)
    loss = binary_crossentropy(y_true, y_p)
    return loss
I am getting this output
Epoch 1/10
26/26 [==============================] - 11s 179ms/step - loss: 1.3587 - accuracy: 0.9106 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 2/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 3/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 4/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 5/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 6/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 7/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 8/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 9/10
26/26 [==============================] - 4s 160ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 10/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
testing
**********Testing model**********
training AUC : 0.5
testing AUC : 0.5
Is there a problem with tf.identity() that is causing this issue?
Or is there another way to copy tensors that I should be using?
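As a quick sanity check (a hedged aside, assuming TensorFlow 2.x; this is not the poster's code), gradients do flow through tf.identity, so the stalled training is unlikely to come from the copy operation itself:

```python
# Verify that tf.identity propagates gradients under tf.GradientTape.
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = tf.identity(x) * x  # use the copied tensor in the computation

grad = tape.gradient(y, x)
print(float(grad))  # d/dx (x * x) = 2x = 6.0 -- gradient flows through the copy
```

If this check passes in your environment, the cause of the constant loss is more likely elsewhere in the training setup than in tf.identity.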
I'm new to deep learning, and I was trying to build a model on an image dataset.
I used this architecture:
model = keras.Sequential()
model.add(layers.Conv2D(filters=6, kernel_size=(3, 3), activation='relu', input_shape=(32,32,1)))
model.add(layers.AveragePooling2D())
model.add(layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu'))
model.add(layers.AveragePooling2D())
model.add(layers.Flatten())
model.add(layers.Dense(units=120, activation='relu'))
model.add(layers.Dense(units=84, activation='relu'))
model.add(layers.Dense(units=1, activation='sigmoid'))
The training accuracy and loss look pretty good, but the validation accuracy does not:
Epoch 1/50
10/10 [==============================] - 17s 2s/step - loss: 20.8554 - accuracy: 0.5170 -
val_loss: 0.8757 - val_accuracy: 0.5946
Epoch 2/50
10/10 [==============================] - 14s 1s/step - loss: 1.5565 - accuracy: 0.5612 -
val_loss: 0.8725 - val_accuracy: 0.5811
Epoch 3/50
10/10 [==============================] - 14s 1s/step - loss: 0.8374 - accuracy: 0.6293 -
val_loss: 0.8483 - val_accuracy: 0.5405
Epoch 4/50
10/10 [==============================] - 14s 1s/step - loss: 1.0340 - accuracy: 0.5748 -
val_loss: 1.6252 - val_accuracy: 0.5135
Epoch 5/50
10/10 [==============================] - 14s 1s/step - loss: 1.1054 - accuracy: 0.5816 -
val_loss: 0.7324 - val_accuracy: 0.6486
Epoch 6/50
10/10 [==============================] - 15s 1s/step - loss: 0.5942 - accuracy: 0.7041 -
val_loss: 0.7412 - val_accuracy: 0.6351
Epoch 7/50
10/10 [==============================] - 15s 2s/step - loss: 0.6041 - accuracy: 0.6939 -
val_loss: 0.6918 - val_accuracy: 0.6622
Epoch 8/50
10/10 [==============================] - 14s 1s/step - loss: 0.4944 - accuracy: 0.7687 -
val_loss: 0.7083 - val_accuracy: 0.6216
Epoch 9/50
10/10 [==============================] - 14s 1s/step - loss: 0.5231 - accuracy: 0.7007 -
val_loss: 1.0332 - val_accuracy: 0.5270
Epoch 10/50
10/10 [==============================] - 14s 1s/step - loss: 0.5133 - accuracy: 0.7313 -
val_loss: 0.6859 - val_accuracy: 0.5811
Epoch 11/50
10/10 [==============================] - 14s 1s/step - loss: 0.6177 - accuracy: 0.6735 -
val_loss: 1.0781 - val_accuracy: 0.5135
Epoch 12/50
10/10 [==============================] - 14s 1s/step - loss: 0.9852 - accuracy: 0.6701 -
val_loss: 3.0853 - val_accuracy: 0.4865
Epoch 13/50
10/10 [==============================] - 13s 1s/step - loss: 1.0099 - accuracy: 0.6259 -
val_loss: 1.8193 - val_accuracy: 0.5000
Epoch 14/50
10/10 [==============================] - 13s 1s/step - loss: 0.7179 - accuracy: 0.7041 -
val_loss: 1.5659 - val_accuracy: 0.5135
Epoch 15/50
10/10 [==============================] - 14s 1s/step - loss: 0.4575 - accuracy: 0.7857 -
val_loss: 0.6865 - val_accuracy: 0.5946
Epoch 16/50
10/10 [==============================] - 14s 1s/step - loss: 0.6540 - accuracy: 0.7177 -
val_loss: 1.7108 - val_accuracy: 0.5405
Epoch 17/50
10/10 [==============================] - 13s 1s/step - loss: 1.3617 - accuracy: 0.6156 -
val_loss: 1.1215 - val_accuracy: 0.5811
Epoch 18/50
10/10 [==============================] - 14s 1s/step - loss: 0.6983 - accuracy: 0.7245 -
val_loss: 2.1121 - val_accuracy: 0.5135
Epoch 19/50
10/10 [==============================] - 15s 1s/step - loss: 0.6669 - accuracy: 0.7415 -
val_loss: 0.8061 - val_accuracy: 0.6216
Epoch 20/50
10/10 [==============================] - 14s 1s/step - loss: 0.3853 - accuracy: 0.8129 -
val_loss: 0.7368 - val_accuracy: 0.6757
Epoch 21/50
10/10 [==============================] - 13s 1s/step - loss: 0.5672 - accuracy: 0.7347 -
val_loss: 1.4207 - val_accuracy: 0.5270
Epoch 22/50
10/10 [==============================] - 14s 1s/step - loss: 0.4770 - accuracy: 0.7551 -
val_loss: 1.6060 - val_accuracy: 0.5135
Epoch 23/50
10/10 [==============================] - 14s 1s/step - loss: 0.7212 - accuracy: 0.7041 -
val_loss: 1.1835 - val_accuracy: 0.5811
Epoch 24/50
10/10 [==============================] - 14s 1s/step - loss: 0.5231 - accuracy: 0.7483 -
val_loss: 0.6802 - val_accuracy: 0.7027
Epoch 25/50
10/10 [==============================] - 13s 1s/step - loss: 0.3185 - accuracy: 0.8367 -
val_loss: 0.6644 - val_accuracy: 0.7027
Epoch 26/50
10/10 [==============================] - 14s 1s/step - loss: 0.2500 - accuracy: 0.8912 -
val_loss: 0.8569 - val_accuracy: 0.6486
Epoch 27/50
10/10 [==============================] - 14s 1s/step - loss: 0.2279 - accuracy: 0.9082 -
val_loss: 0.7515 - val_accuracy: 0.7162
Epoch 28/50
10/10 [==============================] - 14s 1s/step - loss: 0.2349 - accuracy: 0.9082 -
val_loss: 0.9439 - val_accuracy: 0.5811
Epoch 29/50
10/10 [==============================] - 13s 1s/step - loss: 0.2051 - accuracy: 0.9184 -
val_loss: 0.7895 - val_accuracy: 0.7027
Epoch 30/50
10/10 [==============================] - 14s 1s/step - loss: 0.1236 - accuracy: 0.9592 -
val_loss: 0.7387 - val_accuracy: 0.7297
Epoch 31/50
10/10 [==============================] - 14s 1s/step - loss: 0.1370 - accuracy: 0.9524 -
val_loss: 0.7387 - val_accuracy: 0.7297
Epoch 32/50
10/10 [==============================] - 14s 1s/step - loss: 0.0980 - accuracy: 0.9796 -
val_loss: 0.6901 - val_accuracy: 0.7162
Epoch 33/50
10/10 [==============================] - 14s 1s/step - loss: 0.0989 - accuracy: 0.9762 -
val_loss: 0.7754 - val_accuracy: 0.7162
Epoch 34/50
10/10 [==============================] - 14s 1s/step - loss: 0.1195 - accuracy: 0.9592 -
val_loss: 0.6639 - val_accuracy: 0.6622
Epoch 35/50
10/10 [==============================] - 14s 1s/step - loss: 0.0805 - accuracy: 0.9898 -
val_loss: 0.7666 - val_accuracy: 0.7162
Epoch 36/50
10/10 [==============================] - 14s 1s/step - loss: 0.0649 - accuracy: 0.9966 -
val_loss: 0.7543 - val_accuracy: 0.7162
Epoch 37/50
10/10 [==============================] - 14s 1s/step - loss: 0.0604 - accuracy: 0.9898 -
val_loss: 0.7472 - val_accuracy: 0.7297
Epoch 38/50
10/10 [==============================] - 14s 1s/step - loss: 0.0538 - accuracy: 1.0000 -
val_loss: 0.7287 - val_accuracy: 0.7432
Epoch 39/50
10/10 [==============================] - 13s 1s/step - loss: 0.0430 - accuracy: 0.9966 -
val_loss: 0.8989 - val_accuracy: 0.6622
Epoch 40/50
10/10 [==============================] - 14s 1s/step - loss: 0.0386 - accuracy: 1.0000 -
val_loss: 0.6951 - val_accuracy: 0.6892
Epoch 41/50
10/10 [==============================] - 13s 1s/step - loss: 0.0379 - accuracy: 1.0000 -
val_loss: 0.8485 - val_accuracy: 0.6892
Epoch 42/50
10/10 [==============================] - 14s 1s/step - loss: 0.0276 - accuracy: 1.0000 -
val_loss: 0.9726 - val_accuracy: 0.6486
Epoch 43/50
10/10 [==============================] - 13s 1s/step - loss: 0.0329 - accuracy: 1.0000 -
val_loss: 0.7336 - val_accuracy: 0.7568
Epoch 44/50
10/10 [==============================] - 14s 1s/step - loss: 0.0226 - accuracy: 1.0000 -
val_loss: 0.8846 - val_accuracy: 0.6892
Epoch 45/50
10/10 [==============================] - 13s 1s/step - loss: 0.0249 - accuracy: 1.0000 -
val_loss: 0.9542 - val_accuracy: 0.6892
Epoch 46/50
10/10 [==============================] - 14s 1s/step - loss: 0.0171 - accuracy: 1.0000 -
val_loss: 0.8792 - val_accuracy: 0.6892
Epoch 47/50
10/10 [==============================] - 15s 1s/step - loss: 0.0122 - accuracy: 1.0000 -
val_loss: 0.8564 - val_accuracy: 0.7162
Epoch 48/50
10/10 [==============================] - 13s 1s/step - loss: 0.0114 - accuracy: 1.0000 -
val_loss: 0.8900 - val_accuracy: 0.7027
Epoch 49/50
10/10 [==============================] - 13s 1s/step - loss: 0.0084 - accuracy: 1.0000 -
val_loss: 0.8981 - val_accuracy: 0.7027
I tried changing the parameters too, but saw no improvement. It would be helpful to know what is wrong with the val_accuracy. Thanks in advance.
You are using a small dataset, especially for validation. Try adding more data for training and validation; then you should see a difference in val_accuracy. You can also try adding more layers to the model.
There are other methods available, such as data augmentation, dropout, and regularizers, that increase the accuracy of the model by avoiding the overfitting problem.
Please follow this reference to overcome the overfitting problem and to train your model well.
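A hedged sketch of the dropout/regularizer suggestion applied to the architecture above (the dropout rate and L2 factor are illustrative, not tuned):

```python
# Same LeNet-style architecture as in the question, with Dropout and L2
# regularization added to narrow the train/validation gap.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Conv2D(6, (3, 3), activation='relu', input_shape=(32, 32, 1)),
    layers.AveragePooling2D(),
    layers.Conv2D(16, (3, 3), activation='relu',
                  kernel_regularizer=regularizers.l2(1e-4)),  # penalize large weights
    layers.AveragePooling2D(),
    layers.Flatten(),
    layers.Dense(120, activation='relu'),
    layers.Dropout(0.5),  # randomly zero half the units during training only
    layers.Dense(84, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
print(model.output_shape)  # (None, 1)
```

Dropout is only active during training, so it hurts training accuracy slightly while usually improving validation accuracy; more data remains the most reliable fix.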
After finally fixing all the errors this code gave me, I have run into a new problem, and this time it comes from a working model. Here is the code I created. This is my third deep-learning script and I am having a lot of fun writing it, but because I am a beginner in Python in general, some ideas are hard to grasp.
import pandas as pd
import tensorflow as tf
import keras as kr
from keras import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.InteractiveSession(config=config)

pd.set_option('display.max_columns', None)

headers = ['id', 'rated', 'created_at', 'last_move_at', 'turns', 'victory_status', 'winner', 'increment_code',
           'white_id', 'white_rating', 'black_id', 'black_rating', 'moves', 'opening_eco', 'opening_name',
           'opening_ply']
data = pd.read_csv(r'C:\games.csv', header=None, names=headers)

# Drop the original header row (the file was read with header=None) and every
# column except the two ratings and the label.
dd = data.drop([0])
df = dd.drop(columns=['id', 'rated', 'created_at', 'last_move_at', 'turns', 'victory_status',
                      'increment_code', 'white_id', 'black_id', 'moves', 'opening_eco',
                      'opening_name', 'opening_ply'])

# Map the label to 0/1 directly; no extra LabelEncoder pass is needed after this.
df['winner'] = df['winner'].map({'black': 0, 'white': 1})
y = df['winner']
X = df.drop('winner', axis=1).astype("float32")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Fit the scaler on the training split only, then reuse it for the test split.
sc = MinMaxScaler()
scaled_X_train = sc.fit_transform(X_train)
scaled_X_test = sc.transform(X_test)

model = Sequential()
model.add(Dense(2, input_dim=2, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss=kr.losses.binary_crossentropy, optimizer='adam',
              metrics=['accuracy'])

history = model.fit(scaled_X_train, y_train, batch_size=50, epochs=100, verbose=1,
                    validation_data=(scaled_X_test, y_test))
print(history.history)
score = model.evaluate(scaled_X_train, y_train, verbose=1)
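One detail worth checking in pipelines like this: the scaler should be fitted on the training split only and then reused for the test split, so both splits live on the same scale. A minimal sketch with made-up rating values:

```python
# Fit MinMaxScaler on train only; apply the SAME statistics to test.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1000.], [1500.], [2000.]])
X_test = np.array([[1250.], [2500.]])  # test values may exceed the training range

sc = MinMaxScaler()
scaled_train = sc.fit_transform(X_train)  # learn min/max from training data
scaled_test = sc.transform(X_test)        # reuse those min/max, do NOT refit

print(scaled_test.ravel())  # [0.25 1.5] -- 2500 maps outside [0, 1], as it should
```

Refitting on the test split silently gives the two splits different coordinate systems, which distorts validation metrics even when training looks fine.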
My code seems to work fine at first: the accuracy increases over the first few epochs. After that, however, it stops making any progress and settles at a modest accuracy of around 0.610, as seen below. With no idea how to push it higher, I have come to ask: how do I fix this?
Epoch 1/100
321/321 [==============================] - 0s 2ms/step - loss: 0.6386 - accuracy: 0.5463 - val_loss: 0.6208 - val_accuracy: 0.5783
Epoch 2/100
321/321 [==============================] - 0s 925us/step - loss: 0.6098 - accuracy: 0.6091 - val_loss: 0.6078 - val_accuracy: 0.5960
Epoch 3/100
321/321 [==============================] - 0s 973us/step - loss: 0.6055 - accuracy: 0.6102 - val_loss: 0.6177 - val_accuracy: 0.5833
Epoch 4/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6042 - accuracy: 0.6129 - val_loss: 0.6138 - val_accuracy: 0.5850
Epoch 5/100
321/321 [==============================] - 0s 973us/step - loss: 0.6041 - accuracy: 0.6106 - val_loss: 0.6233 - val_accuracy: 0.5763
Epoch 6/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6046 - accuracy: 0.6097 - val_loss: 0.6276 - val_accuracy: 0.5733
Epoch 7/100
321/321 [==============================] - 0s 973us/step - loss: 0.6033 - accuracy: 0.6086 - val_loss: 0.6238 - val_accuracy: 0.5733
Epoch 8/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6116 - val_loss: 0.6202 - val_accuracy: 0.5770
Epoch 9/100
321/321 [==============================] - 0s 973us/step - loss: 0.6030 - accuracy: 0.6091 - val_loss: 0.6210 - val_accuracy: 0.5738
Epoch 10/100
321/321 [==============================] - 0s 973us/step - loss: 0.6028 - accuracy: 0.6098 - val_loss: 0.6033 - val_accuracy: 0.5932
Epoch 11/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6094 - val_loss: 0.6166 - val_accuracy: 0.5780
Epoch 12/100
321/321 [==============================] - 0s 925us/step - loss: 0.6025 - accuracy: 0.6104 - val_loss: 0.6026 - val_accuracy: 0.5947
Epoch 13/100
321/321 [==============================] - 0s 925us/step - loss: 0.6021 - accuracy: 0.6099 - val_loss: 0.6243 - val_accuracy: 0.5733
Epoch 14/100
321/321 [==============================] - 0s 876us/step - loss: 0.6027 - accuracy: 0.6098 - val_loss: 0.6176 - val_accuracy: 0.5775
Epoch 15/100
321/321 [==============================] - 0s 925us/step - loss: 0.6029 - accuracy: 0.6091 - val_loss: 0.6286 - val_accuracy: 0.5690
Epoch 16/100
321/321 [==============================] - 0s 876us/step - loss: 0.6025 - accuracy: 0.6083 - val_loss: 0.6104 - val_accuracy: 0.5840
Epoch 17/100
321/321 [==============================] - 0s 876us/step - loss: 0.6021 - accuracy: 0.6102 - val_loss: 0.6039 - val_accuracy: 0.5897
Epoch 18/100
321/321 [==============================] - 0s 973us/step - loss: 0.6021 - accuracy: 0.6113 - val_loss: 0.6046 - val_accuracy: 0.5887
Epoch 19/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6083 - val_loss: 0.6074 - val_accuracy: 0.5860
Epoch 20/100
321/321 [==============================] - 0s 971us/step - loss: 0.6021 - accuracy: 0.6089 - val_loss: 0.6194 - val_accuracy: 0.5738
Epoch 21/100
321/321 [==============================] - 0s 876us/step - loss: 0.6025 - accuracy: 0.6099 - val_loss: 0.6093 - val_accuracy: 0.5857
Epoch 22/100
321/321 [==============================] - 0s 925us/step - loss: 0.6020 - accuracy: 0.6097 - val_loss: 0.6154 - val_accuracy: 0.5773
Epoch 23/100
321/321 [==============================] - 0s 973us/step - loss: 0.6027 - accuracy: 0.6104 - val_loss: 0.6044 - val_accuracy: 0.5895
Epoch 24/100
321/321 [==============================] - 0s 973us/step - loss: 0.6015 - accuracy: 0.6112 - val_loss: 0.6305 - val_accuracy: 0.5710
Epoch 25/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6114 - val_loss: 0.6067 - val_accuracy: 0.5867
Epoch 26/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6017 - accuracy: 0.6102 - val_loss: 0.6140 - val_accuracy: 0.5800
Epoch 27/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6075 - val_loss: 0.6190 - val_accuracy: 0.5755
Epoch 28/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6029 - accuracy: 0.6087 - val_loss: 0.6337 - val_accuracy: 0.5666
Epoch 29/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6095 - val_loss: 0.6089 - val_accuracy: 0.5840
Epoch 30/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6026 - accuracy: 0.6106 - val_loss: 0.6273 - val_accuracy: 0.5690
Epoch 31/100
321/321 [==============================] - 0s 925us/step - loss: 0.6020 - accuracy: 0.6083 - val_loss: 0.6146 - val_accuracy: 0.5785
Epoch 32/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6116 - val_loss: 0.6093 - val_accuracy: 0.5837
Epoch 33/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6096 - val_loss: 0.6139 - val_accuracy: 0.5780
Epoch 34/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6087 - val_loss: 0.6090 - val_accuracy: 0.5850
Epoch 35/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6096 - val_loss: 0.6127 - val_accuracy: 0.5810
Epoch 36/100
321/321 [==============================] - 0s 876us/step - loss: 0.6024 - accuracy: 0.6091 - val_loss: 0.6001 - val_accuracy: 0.5975
Epoch 37/100
321/321 [==============================] - 0s 973us/step - loss: 0.6027 - accuracy: 0.6104 - val_loss: 0.6083 - val_accuracy: 0.5862
Epoch 38/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6090 - val_loss: 0.6073 - val_accuracy: 0.5875
Epoch 39/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6109 - val_loss: 0.6149 - val_accuracy: 0.5785
Epoch 40/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6085 - val_loss: 0.6175 - val_accuracy: 0.5758
Epoch 41/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6079 - val_loss: 0.6062 - val_accuracy: 0.5865
Epoch 42/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6097 - val_loss: 0.6060 - val_accuracy: 0.5867
Epoch 43/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6082 - val_loss: 0.6074 - val_accuracy: 0.5862
Epoch 44/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6096 - val_loss: 0.6150 - val_accuracy: 0.5785
Epoch 45/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6014 - accuracy: 0.6112 - val_loss: 0.6241 - val_accuracy: 0.5740
Epoch 46/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6111 - val_loss: 0.6118 - val_accuracy: 0.5815
Epoch 47/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6073 - val_loss: 0.6110 - val_accuracy: 0.5835
Epoch 48/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6074 - val_loss: 0.6107 - val_accuracy: 0.5835
Epoch 49/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6097 - val_loss: 0.6081 - val_accuracy: 0.5862
Epoch 50/100
321/321 [==============================] - 0s 973us/step - loss: 0.6014 - accuracy: 0.6078 - val_loss: 0.6214 - val_accuracy: 0.5770
Epoch 51/100
321/321 [==============================] - 0s 973us/step - loss: 0.6023 - accuracy: 0.6093 - val_loss: 0.6011 - val_accuracy: 0.5952
Epoch 52/100
321/321 [==============================] - 0s 973us/step - loss: 0.6028 - accuracy: 0.6094 - val_loss: 0.6013 - val_accuracy: 0.5950
Epoch 53/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6022 - accuracy: 0.6079 - val_loss: 0.6158 - val_accuracy: 0.5770
Epoch 54/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6103 - val_loss: 0.6080 - val_accuracy: 0.5862
Epoch 55/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6095 - val_loss: 0.6180 - val_accuracy: 0.5775
Epoch 56/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6099 - val_loss: 0.6106 - val_accuracy: 0.5842
Epoch 57/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6078 - val_loss: 0.6232 - val_accuracy: 0.5740
Epoch 58/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6099 - val_loss: 0.6155 - val_accuracy: 0.5788
Epoch 59/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6026 - accuracy: 0.6119 - val_loss: 0.6150 - val_accuracy: 0.5775
Epoch 60/100
321/321 [==============================] - 0s 973us/step - loss: 0.6014 - accuracy: 0.6092 - val_loss: 0.5982 - val_accuracy: 0.6012
Epoch 61/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6087 - val_loss: 0.6022 - val_accuracy: 0.5947
Epoch 62/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6099 - val_loss: 0.6265 - val_accuracy: 0.5735
Epoch 63/100
321/321 [==============================] - 0s 899us/step - loss: 0.6019 - accuracy: 0.6099 - val_loss: 0.6172 - val_accuracy: 0.5775
Epoch 64/100
321/321 [==============================] - 0s 982us/step - loss: 0.6018 - accuracy: 0.6099 - val_loss: 0.6116 - val_accuracy: 0.5815
Epoch 65/100
321/321 [==============================] - 0s 969us/step - loss: 0.6015 - accuracy: 0.6099 - val_loss: 0.6230 - val_accuracy: 0.5738
Epoch 66/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6094 - val_loss: 0.6058 - val_accuracy: 0.5870
Epoch 67/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6103 - val_loss: 0.6250 - val_accuracy: 0.5723
Epoch 68/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6015 - accuracy: 0.6109 - val_loss: 0.6129 - val_accuracy: 0.5790
Epoch 69/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6099 - val_loss: 0.6061 - val_accuracy: 0.5867
Epoch 70/100
321/321 [==============================] - 0s 2ms/step - loss: 0.6031 - accuracy: 0.6084 - val_loss: 0.5999 - val_accuracy: 0.5980
Epoch 71/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6080 - val_loss: 0.6065 - val_accuracy: 0.5862
Epoch 72/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6015 - accuracy: 0.6097 - val_loss: 0.6193 - val_accuracy: 0.5745
Epoch 73/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6024 - accuracy: 0.6081 - val_loss: 0.6183 - val_accuracy: 0.5753
Epoch 74/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6017 - accuracy: 0.6094 - val_loss: 0.6165 - val_accuracy: 0.5778
Epoch 75/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6091 - val_loss: 0.6008 - val_accuracy: 0.5955
Epoch 76/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6094 - val_loss: 0.6235 - val_accuracy: 0.5733
Epoch 77/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6083 - val_loss: 0.6178 - val_accuracy: 0.5773
Epoch 78/100
321/321 [==============================] - 0s 973us/step - loss: 0.6016 - accuracy: 0.6099 - val_loss: 0.6232 - val_accuracy: 0.5715
Epoch 79/100
321/321 [==============================] - 0s 973us/step - loss: 0.6024 - accuracy: 0.6052 - val_loss: 0.6262 - val_accuracy: 0.5705
Epoch 80/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6050 - val_loss: 0.6150 - val_accuracy: 0.5785
Epoch 81/100
321/321 [==============================] - 0s 973us/step - loss: 0.6011 - accuracy: 0.6111 - val_loss: 0.6177 - val_accuracy: 0.5755
Epoch 82/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6087 - val_loss: 0.6124 - val_accuracy: 0.5783
Epoch 83/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6090 - val_loss: 0.6107 - val_accuracy: 0.5833
Epoch 84/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6102 - val_loss: 0.6110 - val_accuracy: 0.5800
Epoch 85/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6094 - val_loss: 0.6077 - val_accuracy: 0.5845
Epoch 86/100
321/321 [==============================] - 0s 973us/step - loss: 0.6016 - accuracy: 0.6069 - val_loss: 0.6109 - val_accuracy: 0.5798
Epoch 87/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6092 - val_loss: 0.6117 - val_accuracy: 0.5798
Epoch 88/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6089 - val_loss: 0.6105 - val_accuracy: 0.5808
Epoch 89/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6063 - val_loss: 0.6190 - val_accuracy: 0.5753
Epoch 90/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6083 - val_loss: 0.6211 - val_accuracy: 0.5740
Epoch 91/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6058 - val_loss: 0.6117 - val_accuracy: 0.5785
Epoch 92/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6077 - val_loss: 0.6200 - val_accuracy: 0.5740
Epoch 93/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6014 - accuracy: 0.6078 - val_loss: 0.6230 - val_accuracy: 0.5735
Epoch 94/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6087 - val_loss: 0.6113 - val_accuracy: 0.5810
Epoch 95/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6086 - val_loss: 0.6203 - val_accuracy: 0.5755
Epoch 96/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6013 - accuracy: 0.6088 - val_loss: 0.6273 - val_accuracy: 0.5693
Epoch 97/100
321/321 [==============================] - 0s 925us/step - loss: 0.6019 - accuracy: 0.6071 - val_loss: 0.6023 - val_accuracy: 0.5927
Epoch 98/100
321/321 [==============================] - 0s 973us/step - loss: 0.6023 - accuracy: 0.6072 - val_loss: 0.6093 - val_accuracy: 0.5810
Epoch 99/100
321/321 [==============================] - 0s 925us/step - loss: 0.6012 - accuracy: 0.6091 - val_loss: 0.6018 - val_accuracy: 0.5937
Epoch 100/100
321/321 [==============================] - 0s 973us/step - loss: 0.6015 - accuracy: 0.6092 - val_loss: 0.6255 - val_accuracy: 0.5710
Either there is a problem in your training data, or the model is too small.
Judging by the loss, which is barely changing at all, I'd say the problem is model size. Try adding more neurons to the dense layers.
Your model is not big enough to handle the data, so try increasing its size.
A larger model is more prone to overfitting, but adding some Dropout layers helps mitigate that:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

model = Sequential()
model.add(Input(shape=(input_dim,)))  # one flat feature vector per row
model.add(Dense(48, activation='relu'))
model.add(Dropout(0.20))
model.add(Dense(32, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dropout(0.15))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
Furthermore, a lower learning rate can let the model settle at a better loss and accuracy.
You can set the learning rate of the Adam optimizer when compiling (note that the `lr` argument is deprecated in favor of `learning_rate`):
model.compile(keras.optimizers.Adam(learning_rate=0.0002), loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
The default learning rate of the Adam optimizer is 0.001.
I'm using Keras to train a network for a classification problem in Python. The model I am using is as follows:
filter_size = (2, 2)
maxpool_size = (2, 2)
dr = 0.5

inputs = Input((12, 8, 1), name='main_input')
main_branch = Conv2D(20, kernel_size=filter_size, padding="same",
                     kernel_regularizer=l2(0.0001), bias_regularizer=l2(0.0001))(inputs)
main_branch = BatchNormalization(momentum=0.9)(main_branch)
main_branch = Activation("relu")(main_branch)
main_branch = MaxPooling2D(pool_size=maxpool_size, strides=(1, 1))(main_branch)
main_branch = Conv2D(40, kernel_size=filter_size, padding="same",
                     kernel_regularizer=l2(0.0001), bias_regularizer=l2(0.0001))(main_branch)
main_branch = BatchNormalization(momentum=0.9)(main_branch)
main_branch = Activation("relu")(main_branch)
main_branch = Flatten()(main_branch)
main_branch = Dense(100, kernel_regularizer=l2(0.0001), bias_regularizer=l2(0.0001))(main_branch)
main_branch = Dense(100, kernel_regularizer=l2(0.0001), bias_regularizer=l2(0.0001))(main_branch)
SubArray_branch = Dense(496, activation='softmax', name='SubArray_output')(main_branch)

model = Model(inputs=inputs, outputs=SubArray_branch)
opt = keras.optimizers.Adam(learning_rate=1e-3, epsilon=1e-08, clipnorm=1.0)
model.compile(optimizer=opt,
              loss={'SubArray_output': 'sparse_categorical_crossentropy'},
              metrics=['accuracy'])
history = model.fit({'main_input': Channel},
                    {'SubArray_output': array_indx},
                    validation_data=(test_Data, test_array),  # overrides validation_split below
                    epochs=100, batch_size=128,
                    verbose=1,
                    validation_split=0.2)
When I train this network on my training data, I get a high validation loss compared to the training loss, as you can see below:
471/471 [==============================] - 5s 10ms/step - loss: 0.5723 - accuracy: 0.9010 - val_loss: 20.2040 - val_accuracy: 0.0126
Epoch 33/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5486 - accuracy: 0.9087 - val_loss: 35.2516 - val_accuracy: 0.0037
Epoch 34/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5342 - accuracy: 0.9159 - val_loss: 50.2577 - val_accuracy: 0.0043
Epoch 35/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5345 - accuracy: 0.9132 - val_loss: 26.0221 - val_accuracy: 0.0051
Epoch 36/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5333 - accuracy: 0.9140 - val_loss: 71.2754 - val_accuracy: 0.0043
Epoch 37/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5149 - accuracy: 0.9231 - val_loss: 67.2646 - val_accuracy: 3.3227e-04
Epoch 38/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5269 - accuracy: 0.9162 - val_loss: 17.7448 - val_accuracy: 0.0206
Epoch 39/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5198 - accuracy: 0.9201 - val_loss: 92.7240 - val_accuracy: 0.0015
Epoch 40/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5157 - accuracy: 0.9247 - val_loss: 30.9589 - val_accuracy: 0.0082
Epoch 41/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4961 - accuracy: 0.9316 - val_loss: 20.0444 - val_accuracy: 0.0141
Epoch 42/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5093 - accuracy: 0.9256 - val_loss: 16.7269 - val_accuracy: 0.0172
Epoch 43/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5092 - accuracy: 0.9267 - val_loss: 15.6939 - val_accuracy: 0.0320
Epoch 44/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5104 - accuracy: 0.9270 - val_loss: 103.2581 - val_accuracy: 0.0027
Epoch 45/100
471/471 [==============================] - 5s 10ms/step - loss: 0.5074 - accuracy: 0.9286 - val_loss: 28.3097 - val_accuracy: 0.0154
Epoch 46/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4977 - accuracy: 0.9303 - val_loss: 28.6676 - val_accuracy: 0.0167
Epoch 47/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4823 - accuracy: 0.9375 - val_loss: 47.4671 - val_accuracy: 0.0015
Epoch 48/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5053 - accuracy: 0.9291 - val_loss: 39.3356 - val_accuracy: 0.0082
Epoch 49/100
471/471 [==============================] - 5s 11ms/step - loss: 0.5110 - accuracy: 0.9287 - val_loss: 42.8834 - val_accuracy: 0.0082
Epoch 50/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4895 - accuracy: 0.9366 - val_loss: 11.7254 - val_accuracy: 0.0700
Epoch 51/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4909 - accuracy: 0.9351 - val_loss: 14.5519 - val_accuracy: 0.0276
Epoch 52/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4846 - accuracy: 0.9380 - val_loss: 22.5101 - val_accuracy: 0.0122
Epoch 53/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4991 - accuracy: 0.9315 - val_loss: 16.1494 - val_accuracy: 0.0283
Epoch 54/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4782 - accuracy: 0.9423 - val_loss: 14.8626 - val_accuracy: 0.0551
Epoch 55/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4807 - accuracy: 0.9401 - val_loss: 100.8670 - val_accuracy: 9.9681e-04
Epoch 56/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4759 - accuracy: 0.9420 - val_loss: 34.8571 - val_accuracy: 0.0047
Epoch 57/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4802 - accuracy: 0.9406 - val_loss: 23.2134 - val_accuracy: 0.0524
Epoch 58/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4998 - accuracy: 0.9334 - val_loss: 20.9038 - val_accuracy: 0.0207
Epoch 59/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4813 - accuracy: 0.9400 - val_loss: 19.5474 - val_accuracy: 0.0393
Epoch 60/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4846 - accuracy: 0.9399 - val_loss: 15.1594 - val_accuracy: 0.0439
Epoch 61/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4718 - accuracy: 0.9436 - val_loss: 30.0164 - val_accuracy: 0.0078
Epoch 62/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4897 - accuracy: 0.9375 - val_loss: 60.0498 - val_accuracy: 0.0144
Epoch 63/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4668 - accuracy: 0.9461 - val_loss: 18.8190 - val_accuracy: 0.0298
Epoch 64/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4598 - accuracy: 0.9485 - val_loss: 26.1101 - val_accuracy: 0.0231
Epoch 65/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4672 - accuracy: 0.9442 - val_loss: 108.7207 - val_accuracy: 2.6582e-04
Epoch 66/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4910 - accuracy: 0.9378 - val_loss: 45.6070 - val_accuracy: 0.0052
Epoch 67/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4805 - accuracy: 0.9429 - val_loss: 39.3904 - val_accuracy: 0.0057
Epoch 68/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4682 - accuracy: 0.9451 - val_loss: 21.5525 - val_accuracy: 0.0328
Epoch 69/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4613 - accuracy: 0.9472 - val_loss: 46.7714 - val_accuracy: 0.0027
Epoch 70/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4786 - accuracy: 0.9417 - val_loss: 13.4834 - val_accuracy: 0.0708
Epoch 71/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4756 - accuracy: 0.9442 - val_loss: 41.8796 - val_accuracy: 0.0199
Epoch 72/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4655 - accuracy: 0.9464 - val_loss: 57.7453 - val_accuracy: 0.0017
Epoch 73/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4795 - accuracy: 0.9428 - val_loss: 16.1949 - val_accuracy: 0.0285
Epoch 74/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4755 - accuracy: 0.9440 - val_loss: 68.2349 - val_accuracy: 0.0139
Epoch 75/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4807 - accuracy: 0.9425 - val_loss: 43.4699 - val_accuracy: 0.0233
Epoch 76/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4515 - accuracy: 0.9524 - val_loss: 175.2205 - val_accuracy: 0.0019
Epoch 77/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4715 - accuracy: 0.9467 - val_loss: 92.2833 - val_accuracy: 0.0017
Epoch 78/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4736 - accuracy: 0.9447 - val_loss: 94.7209 - val_accuracy: 0.0059
Epoch 79/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4661 - accuracy: 0.9473 - val_loss: 17.8870 - val_accuracy: 0.0386
Epoch 80/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4614 - accuracy: 0.9492 - val_loss: 28.1883 - val_accuracy: 0.0042
Epoch 81/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4569 - accuracy: 0.9507 - val_loss: 49.2823 - val_accuracy: 0.0032
Epoch 82/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4623 - accuracy: 0.9485 - val_loss: 29.8972 - val_accuracy: 0.0100
Epoch 83/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4799 - accuracy: 0.9429 - val_loss: 109.5044 - val_accuracy: 0.0062
Epoch 84/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4810 - accuracy: 0.9444 - val_loss: 71.2103 - val_accuracy: 0.0051
Epoch 85/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4452 - accuracy: 0.9552 - val_loss: 30.7861 - val_accuracy: 0.0100
Epoch 86/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4805 - accuracy: 0.9423 - val_loss: 48.1887 - val_accuracy: 0.0031
Epoch 87/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4564 - accuracy: 0.9512 - val_loss: 189.6711 - val_accuracy: 1.3291e-04
Epoch 88/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4479 - accuracy: 0.9537 - val_loss: 58.6349 - val_accuracy: 0.0199
Epoch 89/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4667 - accuracy: 0.9476 - val_loss: 95.7323 - val_accuracy: 0.0041
Epoch 90/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4808 - accuracy: 0.9436 - val_loss: 28.7513 - val_accuracy: 0.0191
Epoch 91/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4583 - accuracy: 0.9511 - val_loss: 16.4281 - val_accuracy: 0.0431
Epoch 92/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4458 - accuracy: 0.9541 - val_loss: 15.3890 - val_accuracy: 0.0517
Epoch 93/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4628 - accuracy: 0.9491 - val_loss: 37.3123 - val_accuracy: 0.0024
Epoch 94/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4716 - accuracy: 0.9481 - val_loss: 24.8934 - val_accuracy: 0.0123
Epoch 95/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4646 - accuracy: 0.9469 - val_loss: 54.6682 - val_accuracy: 5.9809e-04
Epoch 96/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4665 - accuracy: 0.9492 - val_loss: 89.1835 - val_accuracy: 0.0064
Epoch 97/100
471/471 [==============================] - 5s 10ms/step - loss: 0.4533 - accuracy: 0.9527 - val_loss: 60.9850 - val_accuracy: 0.0035
Epoch 98/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4597 - accuracy: 0.9491 - val_loss: 41.6088 - val_accuracy: 0.0023
Epoch 99/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4511 - accuracy: 0.9537 - val_loss: 28.2131 - val_accuracy: 0.0025
Epoch 100/100
471/471 [==============================] - 5s 11ms/step - loss: 0.4568 - accuracy: 0.9509 - val_loss: 121.8944 - val_accuracy: 0.0041
I am well aware that the problem I am facing is due to overfitting, but when I train the same network with the same training data in MATLAB, the training and validation losses stay close to each other. The MATLAB Training Progress screenshot is linked as:
Training Progress
I would appreciate it if anyone could explain why I can't reproduce the same result in Python. What would you suggest to solve this problem?
I should state that I am not at all familiar with neural networks, and this is the first time I have tried to develop one.
The problem is predicting a week's pollution forecast, based on the previous month.
The unstructured data, with 15 features, looks like:
Start data
The value to be predicted is 'gas', for a total of 168 hours in the following week (the number of hours in a week).
MinMaxScaler(feature_range=(0, 1)) is applied to the data, which is then split into train and test sets. Since only one year of hourly measurements is available, the data is resampled into series of 672 hourly samples, each starting at midnight of a day of the year. From about 8000 hourly readings, this yields about 600 series of 672 samples.
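As a side note, what MinMaxScaler(feature_range=(0, 1)) computes can be sketched in plain numpy (the helper name minmax_fit_transform is my own, illustrative one; the min/max should be fit on the training split only, so that test statistics do not leak into the scaling):

```python
import numpy as np

def minmax_fit_transform(train, test):
    # Rescale each feature column independently to [0, 1],
    # using the training set's per-column min and max.
    lo = train.min(axis=0)
    span = train.max(axis=0) - lo
    span[span == 0] = 1.0  # guard against constant columns
    return (train - lo) / span, (test - lo) / span

train = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
test = np.array([[2.5, 15.0]])
tr, te = minmax_fit_transform(train, test)
print(tr)  # training columns span exactly [0, 1]
print(te)  # [[0.25 0.25]]
```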
The 'date' column is removed from the initial data, and the shapes of train_x and train_y are:
Shape of train_x and train_y
train_x[0] contains the 672 hourly readings for the first 4 weeks of the dataset, with all features including 'gas'.
train_y[0], on the other hand, contains the 168 hourly readings for the week that begins where train_x[0] ends.
Train_x[0], where column 0 is 'gas', and train_y[0] with only the gas column for the week after train_x[0]
TRAIN X SHAPE = (631, 672, 14)
TRAIN Y SHAPE = (631, 168, 1)
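The windowing described above can be sketched roughly as follows (the names to_supervised and gas_col, and the daily step of 24 hours, are my assumptions, not the actual implementation; the exact number of windows depends on the step and the data length, which is why the count below differs from 631):

```python
import numpy as np

def to_supervised(data, n_input=672, n_out=168, step=24, gas_col=0):
    # Slide a window over the hourly data, one window per day:
    # 672 input hours (4 weeks) of all features, then the next
    # 168 hours (1 week) of the 'gas' column as the target.
    X, y = [], []
    for start in range(0, len(data) - n_input - n_out + 1, step):
        end_in = start + n_input
        X.append(data[start:end_in, :])                              # all features
        y.append(data[end_in:end_in + n_out, gas_col:gas_col + 1])   # gas only
    return np.array(X), np.array(y)

hours = np.random.rand(8760, 14)  # one year of hourly, already-scaled readings
train_x, train_y = to_supervised(hours)
print(train_x.shape)  # (331, 672, 14)
print(train_y.shape)  # (331, 168, 1)
```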
After organizing the data in this way (if this is wrong, please let me know), I built the neural network as follows:
train_x, train_y = to_supervised(train, n_input)
train_x = train_x.astype(float)
train_y = train_y.astype(float)

# define parameters
verbose, epochs, batch_size = 1, 200, 50
n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]

# define model
model = Sequential()
opt = optimizers.RMSprop(learning_rate=1e-3)
model.add(layers.GRU(14, activation='relu', input_shape=(n_timesteps, n_features),
                     return_sequences=False, stateful=False))
model.add(layers.Dense(1, activation='relu'))
#model.add(layers.Dense(14, activation='linear'))
model.add(layers.Dense(n_outputs, activation='sigmoid'))
model.summary()
model.compile(loss='mse', optimizer=opt, metrics=['accuracy'])

train_y = np.concatenate(train_y).reshape(len(train_y), 168)

callback_early_stopping = EarlyStopping(monitor='val_loss',
                                        patience=5, verbose=1)
callback_tensorboard = TensorBoard(log_dir='./23_logs/',
                                   histogram_freq=0,
                                   write_graph=False)
callback_reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                                       factor=0.1,
                                       min_lr=1e-4,
                                       patience=0,
                                       verbose=1)
callbacks = [callback_early_stopping,
             callback_tensorboard,
             callback_reduce_lr]

history = model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size,
                    verbose=verbose, shuffle=False,
                    validation_split=0.2, callbacks=callbacks)
When I fit the network I get:
11/11 [==============================] - 5s 305ms/step - loss: 0.1625 - accuracy: 0.0207 - val_loss: 0.1905 - val_accuracy: 0.0157
Epoch 2/200
11/11 [==============================] - 2s 179ms/step - loss: 0.1594 - accuracy: 0.0037 - val_loss: 0.1879 - val_accuracy: 0.0157
Epoch 3/200
11/11 [==============================] - 2s 169ms/step - loss: 0.1571 - accuracy: 0.0040 - val_loss: 0.1855 - val_accuracy: 0.0079
Epoch 4/200
11/11 [==============================] - 2s 165ms/step - loss: 0.1550 - accuracy: 0.0092 - val_loss: 0.1832 - val_accuracy: 0.0079
Epoch 5/200
11/11 [==============================] - 2s 162ms/step - loss: 0.1529 - accuracy: 0.0102 - val_loss: 0.1809 - val_accuracy: 0.0079
Epoch 6/200
11/11 [==============================] - 2s 160ms/step - loss: 0.1508 - accuracy: 0.0085 - val_loss: 0.1786 - val_accuracy: 0.0079
Epoch 7/200
11/11 [==============================] - 2s 160ms/step - loss: 0.1487 - accuracy: 0.0023 - val_loss: 0.1763 - val_accuracy: 0.0079
Epoch 8/200
11/11 [==============================] - 2s 158ms/step - loss: 0.1467 - accuracy: 0.0023 - val_loss: 0.1740 - val_accuracy: 0.0079
Epoch 9/200
11/11 [==============================] - 2s 159ms/step - loss: 0.1446 - accuracy: 0.0034 - val_loss: 0.1718 - val_accuracy: 0.0000e+00
Epoch 10/200
11/11 [==============================] - 2s 160ms/step - loss: 0.1426 - accuracy: 0.0034 - val_loss: 0.1695 - val_accuracy: 0.0000e+00
Epoch 11/200
11/11 [==============================] - 2s 162ms/step - loss: 0.1406 - accuracy: 0.0034 - val_loss: 0.1673 - val_accuracy: 0.0000e+00
Epoch 12/200
11/11 [==============================] - 2s 159ms/step - loss: 0.1387 - accuracy: 0.0034 - val_loss: 0.1651 - val_accuracy: 0.0000e+00
Epoch 13/200
11/11 [==============================] - 2s 159ms/step - loss: 0.1367 - accuracy: 0.0052 - val_loss: 0.1629 - val_accuracy: 0.0000e+00
Epoch 14/200
11/11 [==============================] - 2s 159ms/step - loss: 0.1348 - accuracy: 0.0052 - val_loss: 0.1608 - val_accuracy: 0.0000e+00
Epoch 15/200
11/11 [==============================] - 2s 161ms/step - loss: 0.1328 - accuracy: 0.0052 - val_loss: 0.1586 - val_accuracy: 0.0000e+00
Epoch 16/200
11/11 [==============================] - 2s 162ms/step - loss: 0.1309 - accuracy: 0.0052 - val_loss: 0.1565 - val_accuracy: 0.0000e+00
Epoch 17/200
11/11 [==============================] - 2s 171ms/step - loss: 0.1290 - accuracy: 0.0052 - val_loss: 0.1544 - val_accuracy: 0.0000e+00
Epoch 18/200
11/11 [==============================] - 2s 174ms/step - loss: 0.1271 - accuracy: 0.0052 - val_loss: 0.1523 - val_accuracy: 0.0000e+00
Epoch 19/200
11/11 [==============================] - 2s 161ms/step - loss: 0.1253 - accuracy: 0.0052 - val_loss: 0.1502 - val_accuracy: 0.0000e+00
Epoch 20/200
11/11 [==============================] - 2s 161ms/step - loss: 0.1234 - accuracy: 0.0052 - val_loss: 0.1482 - val_accuracy: 0.0000e+00
Epoch 21/200
11/11 [==============================] - 2s 159ms/step - loss: 0.1216 - accuracy: 0.0052 - val_loss: 0.1461 - val_accuracy: 0.0000e+00
Epoch 22/200
11/11 [==============================] - 2s 164ms/step - loss: 0.1198 - accuracy: 0.0052 - val_loss: 0.1441 - val_accuracy: 0.0000e+00
Epoch 23/200
11/11 [==============================] - 2s 164ms/step - loss: 0.1180 - accuracy: 0.0052 - val_loss: 0.1421 - val_accuracy: 0.0000e+00
Epoch 24/200
11/11 [==============================] - 2s 163ms/step - loss: 0.1162 - accuracy: 0.0052 - val_loss: 0.1401 - val_accuracy: 0.0000e+00
Epoch 25/200
11/11 [==============================] - 2s 167ms/step - loss: 0.1145 - accuracy: 0.0052 - val_loss: 0.1381 - val_accuracy: 0.0000e+00
Epoch 26/200
11/11 [==============================] - 2s 188ms/step - loss: 0.1127 - accuracy: 0.0052 - val_loss: 0.1361 - val_accuracy: 0.0000e+00
Epoch 27/200
11/11 [==============================] - 2s 169ms/step - loss: 0.1110 - accuracy: 0.0052 - val_loss: 0.1342 - val_accuracy: 0.0000e+00
Epoch 28/200
11/11 [==============================] - 2s 189ms/step - loss: 0.1093 - accuracy: 0.0052 - val_loss: 0.1323 - val_accuracy: 0.0000e+00
Epoch 29/200
11/11 [==============================] - 2s 183ms/step - loss: 0.1076 - accuracy: 0.0079 - val_loss: 0.1304 - val_accuracy: 0.0000e+00
Epoch 30/200
11/11 [==============================] - 2s 172ms/step - loss: 0.1059 - accuracy: 0.0079 - val_loss: 0.1285 - val_accuracy: 0.0000e+00
Epoch 31/200
11/11 [==============================] - 2s 164ms/step - loss: 0.1042 - accuracy: 0.0079 - val_loss: 0.1266 - val_accuracy: 0.0000e+00
Epoch 32/200
Accuracy always remains very low, and sometimes (as in this case) val_accuracy drops to 0 and never changes, while loss and val_loss decrease but do not converge well. I realize that I am certainly doing many things wrong, and I cannot work out how to fix it. I have of course tried other hyperparameters, and other networks such as LSTM, but I didn't get satisfactory results.
How can I improve the model so that the accuracy is at least decent? Any advice is welcome, thank you very much!