I am playing around with custom loss functions on Keras models. My "custom" loss seems to fail (in terms of accuracy score), even though I am only using a wrapper that returns an original keras loss.
As a toy example, I am using the "Basic classification" Tensorflow/Keras tutorial that uses a simple NN on the fashion-MNIST data set and I am following the related Keras documentation and this SO post.
This is the model:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
Now, if I leave sparse_categorical_crossentropy as a string argument to compile(), training results in ~87% accuracy, which is fine:
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
But when I create a trivial wrapper function that just calls Keras' cross-entropy, I get ~10% accuracy on both the training and test sets:
from tensorflow.keras import losses

def my_loss(y_true, y_pred):
    return losses.sparse_categorical_crossentropy(y_true, y_pred)

model.compile(optimizer='adam',
              loss=my_loss,
              metrics=['accuracy'])
Epoch 1/10
60000/60000 [==============================] - 3s 51us/sample - loss: 0.5030 - accuracy: 0.1032
Epoch 2/10
60000/60000 [==============================] - 3s 45us/sample - loss: 0.3766 - accuracy: 0.1035
...
Test accuracy: 0.1013
By plotting a few images and checking their predicted labels, the results do not look different between the two cases, but the printed accuracies are very different. So, is it the case that the default metrics do not play nicely with custom losses? Could what I am seeing be the error rather than the accuracy? Am I missing something in the documentation?
Edit: The values of the loss functions in both cases end up roughly the same, so training indeed takes place. The accuracy is the point of failure.
Here's the reason:
When you use the built-in loss by passing loss='sparse_categorical_crossentropy', the 'accuracy' metric is resolved to sparse_categorical_accuracy. But when you use a custom loss function, Keras can no longer infer the label format from the loss, so 'accuracy' is resolved to categorical_accuracy, which does not match your integer labels and therefore reports the ~10% you are seeing.
Example:
model.compile(optimizer='adam',
              loss=losses.sparse_categorical_crossentropy,
              metrics=['categorical_accuracy', 'sparse_categorical_accuracy'])
model.fit(train_images, train_labels, epochs=1)
'''
Train on 60000 samples
60000/60000 [==============================] - 5s 86us/sample - loss: 0.4955 - categorical_accuracy: 0.1045 - sparse_categorical_accuracy: 0.8255
'''
model.compile(optimizer='adam',
              loss=my_loss,
              metrics=['accuracy', 'sparse_categorical_accuracy'])
model.fit(train_images, train_labels, epochs=1)
'''
Train on 60000 samples
60000/60000 [==============================] - 5s 87us/sample - loss: 0.4956 - acc: 0.1043 - sparse_categorical_accuracy: 0.8256
'''
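So, with the custom loss you can get the expected number back simply by naming the metric explicitly instead of relying on the generic 'accuracy' string; a minimal sketch reusing my_loss, model and the data from the question:

# Spell out the metric so Keras does not have to infer the label format
# from the loss function.
model.compile(optimizer='adam',
              loss=my_loss,
              metrics=['sparse_categorical_accuracy'])
model.fit(train_images, train_labels, epochs=10)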
Related
I am trying to train DenseNet121 (among other models) in tensorflow/keras and I need to keep track of accuracy and val_accuracy. However, running this does not log the val_accuracy in the model's history:
clf_model = tf.keras.models.Sequential()
clf_model.add(pre_trained_model)
clf_model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
clf_model.compile(loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = clf_model.fit(train_processed, epochs=10, validation_data=validation_processed)
output (no val_accuracy, I need val_accuracy):
Epoch 1/10
1192/1192 [==============================] - 75s 45ms/step - loss: 2.3908 - accuracy: 0.4374
Epoch 2/10
451/1192 [==========>...................] - ETA: 22s - loss: 1.3556 - accuracy: 0.6217
When I tried to pass val_accuracy to the metrics as follows:
clf_model = tf.keras.models.Sequential()
clf_model.add(pre_trained_model)
clf_model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
clf_model.compile(loss='sparse_categorical_crossentropy', metrics=['accuracy','val_accuracy'])
history = clf_model.fit(train_processed, epochs=10, validation_data=validation_processed)
I get the following error:
ValueError: Unknown metric function: val_accuracy. Please ensure this object is passed to the `custom_objects` argument.
See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
Any idea what I am doing wrong?
Update
It turned out that the dataset I was passing as validation_data was empty, so no validation metrics were ever computed.
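A quick way to catch this up front, assuming validation_processed is a tf.data.Dataset that yields (features, labels) pairs (an assumption; the question does not show how it was built):

# Sanity-check the validation data before fitting: an empty dataset means
# Keras silently skips all val_* metrics.
import tensorflow as tf

print(tf.data.experimental.cardinality(validation_processed).numpy())  # 0 means empty
for features, labels in validation_processed.take(1):
    print('validation batch:', features.shape, labels.shape)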
My training loss starts very low at 0.0181 whereas the validation loss starts at 2.4625, about a 135-fold difference. The validation loss does improve as the model learns, yet it never gets close to the training loss, which becomes extremely small as training runs for more epochs.
The model is trained on ~31K data instances (train + validation) for a binary classification problem. It uses a Conv1D layer feeding into Bidirectional GRU units, whose output goes through two fully connected layers.
Total params: 294,737
Trainable params: 294,417
Non-trainable params: 320
[Figure: model architecture]
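Since the figure is not reproduced here, this is a rough, hypothetical reconstruction of the described architecture (Conv1D into Bidirectional GRU into two fully connected layers, two inputs as in the Model([f_input, rc_input], ...) call further below). The input shape is taken from the (31639, 121, 4) print in the log; the layer widths, kernel size, and weight sharing between the two inputs are assumptions, so the parameter count will not match 294,737:

from tensorflow.keras import layers, Model

f_input = layers.Input(shape=(121, 4), name='f_input')
rc_input = layers.Input(shape=(121, 4), name='rc_input')

conv = layers.Conv1D(64, 12, activation='relu')       # assumed width and kernel size
bigru = layers.Bidirectional(layers.GRU(32))          # assumed number of units

merged = layers.Concatenate()([bigru(conv(f_input)),
                               bigru(conv(rc_input))])
x = layers.Dense(32, activation='relu')(merged)       # first fully connected layer (assumed size)
predict = layers.Dense(1, activation='sigmoid')(x)    # binary output

model = Model(inputs=[f_input, rc_input], outputs=predict)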
Training model....
(31639, 121, 4)
Epoch 1/100
learning rate: 0.01
15/15 [==============================] - 6s 190ms/step - loss: 0.0181 - accuracy: 0.0926 - val_loss: 2.4625 - val_accuracy: 0.0506
Epoch 2/100
learning rate: 0.01
15/15 [==============================] - 2s 148ms/step - loss: 0.0104 - accuracy: 0.0478 - val_loss: 2.1587 - val_accuracy: 0.0506
.....
[Figure: full model loss training history]
I trained it for 100 epochs with several callback modifications, and binary cross-entropy was used as the loss function. The dataset was highly imbalanced with 5% positive labels and 95% negative labels.
model = Model(inputs=[f_input, rc_input], outputs=predict)
adam = Adam(lr=hyperparam_dict['initial_lr'], beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
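Given the 5%/95% split mentioned above, one standard mitigation (not something the post says it tried) is to weight the classes in fit(); a minimal sketch using Keras' class_weight argument, with placeholder array names:

# Weight the positive (minority) class more heavily so both classes
# contribute roughly equally to the loss. The 5/95 split is from the post;
# the weighting scheme and the x_train/y_train/x_val/y_val names are
# placeholders, not the original variables.
class_weight = {0: 1.0 / (2 * 0.95),   # ~0.53 for the negative class
                1: 1.0 / (2 * 0.05)}   # ~10.0 for the positive class
model.fit(x_train, y_train,
          epochs=100,
          class_weight=class_weight,
          validation_data=(x_val, y_val))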
[Figure: model training/validation loss vs. epochs]
[Figure: model training/validation accuracy vs. epochs]
Working on the IMDB dataset in Google Colab, the model accuracy refuses to go above 50%.
The dataset has already been tokenized and cleaned before this.
Any suggestions on how the accuracy can be improved are welcome.
le=LabelEncoder()
df['sentiment']= le.fit_transform(df['sentiment'])
labels=to_categorical(df['sentiment'],num_classes=2) # this is output
max_len = 400
embeddings=256
sequences = tokenizer.texts_to_sequences(df['review'])
sequences_padded=pad_sequences(sequences,maxlen=max_len,padding='post',truncating='post')
num_words = 10000
embeddings = 256
max_len=400
X_train,X_test,y_train,y_test=train_test_split(sequences_padded,labels,test_size=0.20,random_state=42)
model= keras.Sequential()
model.add(Embedding(num_words,embeddings,input_length=max_len))
model.add(Conv1D(256,10,activation='relu'))
model.add(keras.layers.Bidirectional(LSTM(128,return_sequences=True,kernel_regularizer=tf.keras.regularizers.l1(0.01),activity_regularizer=tf.keras.regularizers.l2(0.01))))
model.add(LSTM(64))
model.add(keras.layers.Dropout(0.4))
model.add(Dense(2,activation='softmax'))
model.summary()
history=model.fit(X_train,y_train,epochs=3, batch_size=128, verbose=1)
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
Epoch 1/3
310/310 [==============================] - 157s 398ms/step - loss: 35.7756 - accuracy: 0.5007
Epoch 2/3
310/310 [==============================] - 123s 395ms/step - loss: 1.0212 - accuracy: 0.5003
Epoch 3/3
310/310 [==============================] - 123s 397ms/step - loss: 1.0211 - accuracy: 0.5015
Update:
Model accuracy started improving when I changed from post to pre padding. Any leads on why this happens would be highly appreciated.
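For reference, the change described in the update amounts to switching the padding side in pad_sequences (only padding is mentioned in the update, so truncating is left as before):

sequences_padded = pad_sequences(sequences, maxlen=max_len,
                                 padding='pre', truncating='post')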
You are using binary_crossentropy with a softmax output; use categorical_crossentropy with softmax instead.
Change to:
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
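For completeness, a minimal compile-then-fit sketch with the suggested loss, using the same model and data names as the question (note that compile() must be called before fit(); in the question's listing they appear in the reverse order):

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=3, batch_size=128, verbose=1)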
I'm trying to learn a bit about Tensorflow/Machine Learning. As a starting point, I'm trying to create a model that is trained on a simple 1-D function (y=x^2) and see how it behaves for other inputs outside of the training range.
The problem I'm having is that the training function doesn't really ever improve. I'm sure it's due to a lack of understanding and/or misconfiguration on my part, but there really doesn't seem to be any sort of "baby's first machine learning" out there that deals with a dataset of a known form.
My code is pretty simple, and is borrowed from TensorFlow's introduction notebook here
import tensorflow as tf
import numpy as np
# Load the dataset
x_train = np.linspace(0,10,1000)
y_train = np.power(x_train,2.0)
x_test = np.linspace(8,12,100)
y_test = np.power(x_test,2.0)
# (x_train, y_train), (x_test, y_test) = mnist.load_data()
# x_train, x_test = x_train / 255.0, x_test / 255.0
"""Build the `tf.keras.Sequential` model by stacking layers. Choose an optimizer and loss function for training:"""
from tensorflow.keras import layers
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='mse',
              metrics=['mae'])
"""Train and evaluate the model:"""
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
and I get output like this:
Train on 1000 samples
Epoch 1/5
1000/1000 [==============================] - 0s 489us/sample - loss: 1996.3631 - mae: 33.2543
Epoch 2/5
1000/1000 [==============================] - 0s 36us/sample - loss: 1996.3540 - mae: 33.2543
Epoch 3/5
1000/1000 [==============================] - 0s 36us/sample - loss: 1996.3495 - mae: 33.2543
Epoch 4/5
1000/1000 [==============================] - 0s 33us/sample - loss: 1996.3474 - mae: 33.2543
Epoch 5/5
1000/1000 [==============================] - 0s 38us/sample - loss: 1996.3450 - mae: 33.2543
100/1 - 0s - loss: 15546.3655 - mae: 101.2603
Like I said, I'm positive that this is a misconfiguration/lack of understanding on my part. I really learn best when I can take something this simple and incrementally make it more complex rather than starting on something whose patterns I can't readily identify, but I can't find any tutorials, etc that take this approach. Can anyone recommend either a good tutorial source, or just educate me on what I am doing wrong here?
I think you have a mix of problems here. Let me go through them one by one:
First of all, the problem you want to solve is to learn the function f = x^2, so this is a regression task. For a regression task (and any other task ^_^) you should pay attention to the activation function and to what you are really trying to predict.
You have chosen softmax as the output activation, which does not make sense here at all. I suggest replacing it with a linear activation (if you remove the activation argument completely, Dense is linear by default in TF/Keras).
Also, why does the last layer have 10 units? For each input you want to predict a single value (for an input of 5 you want to predict 25, right?),
so a single-unit Dense layer is enough to produce your value.
Since your network is not big, I would start with SGD as the optimizer, though Adam might work as well. Additionally, for the problem you are trying to solve, I do not believe you really need 128 units in the first hidden layer; you can start with a smaller number and see how it goes. I would begin with 3-4 units.
Long story short, replace your model with these lines and hopefully it starts working:
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1)
])
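Putting the points together, a minimal sketch assuming TF 2.x (the small hidden-layer size, learning rate, and epoch count are illustrative, not tuned; x_train and y_train are as defined in the question):

# Small hidden layer, single linear output unit, SGD to start with.
# Scaling x and y (e.g. dividing by their maxima) generally makes this
# kind of regression more stable, but is omitted to stay close to the question.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(1)                 # linear activation by default
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
              loss='mse',
              metrics=['mae'])
model.fit(x_train, y_train, epochs=100, verbose=0)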
I'm encountering a very strange issue with a Keras model using ImageDataGenerator, fit_generator, and evaluate_generator.
I'm creating the model like so:
classes = <list of classes>
num_classes = len(classes)
pretrained_model = Sequential()
pretrained_model.add(ResNet50(include_top=False, weights='imagenet', pooling='avg'))
pretrained_model.add(Dense(num_classes, activation='softmax'))
pretrained_model.layers[0].trainable = False
pretrained_model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
And I'm training it like this:
idg_final = ImageDataGenerator(
    data_format='channels_last',
    rescale=1./255,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rotation_range=15,
)
traing_gen = idg_final.flow_from_directory('./train', classes=classes, target_size=(224, 224), class_mode='categorical')
pretrained_model.fit_generator(traing_gen, epochs=1, verbose=1)
fit_generator prints loss: 1.0297 - acc: 0.7546.
Then, I am trying to evaluate the model on the exact same data it was trained on.
debug_gen = idg_final.flow_from_directory('./train', target_size=(224, 224), class_mode='categorical', classes=classes, shuffle=True)
print(pretrained_model.evaluate_generator(debug_gen, steps=100))
Which prints [10.278913383483888, 0.0].
Why is the accuracy so different on the same exact data?
Edit: I also wanted to point out that sometimes the accuracy is above 0.0. For example, with a model trained for five epochs, evaluate_generator returns 6% accuracy.
Edit 2: Based on the answers below I made sure to train for more epochs and that the ImageDataGenerator for evaluation did not have random shifts and rotations. I'm still getting very high accuracy during training and extremely low accuracy during evaluation on the same dataset.
I'm training like
idg_final = ImageDataGenerator(
    data_format='channels_last',
    rescale=1./255,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rotation_range=15,
)
traing_gen = idg_final.flow_from_directory('./train', classes=classes, target_size=(224, 224), class_mode='categorical')
pretrained_model.fit_generator(traing_gen, epochs=10, verbose=1)
Which prints the following:
Found 9850 images belonging to 4251 classes.
Epoch 1/10
308/308 [==============================] - 3985s 13s/step - loss: 8.9218 - acc: 0.0860
Epoch 2/10
308/308 [==============================] - 3555s 12s/step - loss: 3.2710 - acc: 0.3403
Epoch 3/10
308/308 [==============================] - 3594s 12s/step - loss: 1.8597 - acc: 0.5836
Epoch 4/10
308/308 [==============================] - 3656s 12s/step - loss: 1.2712 - acc: 0.7058
Epoch 5/10
308/308 [==============================] - 3667s 12s/step - loss: 0.9556 - acc: 0.7795
Epoch 6/10
308/308 [==============================] - 3689s 12s/step - loss: 0.7665 - acc: 0.8207
Epoch 7/10
308/308 [==============================] - 3693s 12s/step - loss: 0.6581 - acc: 0.8498
Epoch 8/10
308/308 [==============================] - 3618s 12s/step - loss: 0.5874 - acc: 0.8636
Epoch 9/10
308/308 [==============================] - 3823s 12s/step - loss: 0.5144 - acc: 0.8797
Epoch 10/10
308/308 [==============================] - 4334s 14s/step - loss: 0.4835 - acc: 0.8854
And I'm evaluating like this on the exact same dataset
idg_debug = ImageDataGenerator(
    data_format='channels_last',
    rescale=1./255,
)
debug_gen = idg_debug.flow_from_directory('./train', target_size=(224, 224), class_mode='categorical', classes=classes)
print(pretrained_model.evaluate_generator(debug_gen))
Which prints the following very low accuracy: [10.743386410747084, 0.0001015228426395939]
The full code is here.
Two things I suspect.
1 - No, your data is not the same.
You're using three types of augmentation in ImageDataGenerator, and it seems no random seed is being set. So the test data is not equal to the training data.
You're also training for only one epoch, which is very little (unless you really have tons of data, but since you're using augmentation, maybe that's not the case). (PS: I don't see a steps_per_epoch argument in your fit_generator call...)
So, if you want to see good results, here are some solutions:
remove the augmentation arguments from the generator for this test (for both the training and the test data); this means removing width_shift_range, height_shift_range and rotation_range;
if not, train for really long, long enough for your model to get used to all kinds of augmented images (five epochs still seems way too little);
or set a random seed and guarantee that the test data is equal to the training data (the seed argument in flow_from_directory; see the sketch after this list).
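A sketch of that third option, reusing the names from the question (the same seed would also go on the training generator so both draw the same random stream):

debug_gen = idg_final.flow_from_directory('./train', classes=classes,
                                          target_size=(224, 224),
                                          class_mode='categorical',
                                          seed=42)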
2 - (This may happen if you're very new to Keras/programming, so please ignore if it's not the case) You might be running the code that defines the model again when testing.
If you run the code that defines the model again, it will replace all your previous training with random weights.
3 - Since we're out of suggestions:
Maybe save the weights instead of saving the model; I usually do this. (For some reason I don't understand, I've never been able to load a whole model saved that way.)
import numpy as np

def createModel():
    ....

model = createModel()
...
model.fit_generator(....)

# get_weights() returns a list of arrays; newer NumPy versions may need
# allow_pickle=True when loading it back with np.load.
np.save('model_weights.npy', model.get_weights())

model = createModel()
model.set_weights(np.load('model_weights.npy'))
...
model.evaluate_generator(...)
Hint:
It's not related to the bug, but make sure that the base model really is layer 0. If I remember correctly, Sequential models have an input layer, and you may actually need to make layer 1 untrainable instead.
Use model.summary() to confirm the number of non-trainable parameters.
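A quick way to check that, using the pretrained_model object from the question:

# Print each layer's index, name, and trainable flag, then compare the
# trainable vs. non-trainable parameter counts in the summary.
for i, layer in enumerate(pretrained_model.layers):
    print(i, layer.name, layer.trainable)
pretrained_model.summary()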