Using TensorFlow Huber loss in Keras

I am trying to use Huber loss in a Keras model (writing a DQN), but I am getting bad results; I think I am doing something wrong. My code is below.
model = Sequential()
model.add(Dense(output_dim=64, activation='relu', input_dim=state_dim))
model.add(Dense(output_dim=number_of_actions, activation='linear'))
loss = tf.losses.huber_loss(delta=1.0)
model.compile(loss=loss, opt='sgd')
return model

I came here with the exact same question. The accepted answer uses logcosh, which may have similar properties, but it isn't exactly Huber loss. Here's how I implemented Huber loss for Keras (note that I'm using Keras from TensorFlow 1.5).
import numpy as np
import tensorflow as tf

'''
' Huber loss.
' https://jaromiru.com/2017/05/27/on-using-huber-loss-in-deep-q-learning/
' https://en.wikipedia.org/wiki/Huber_loss
'''
def huber_loss(y_true, y_pred, clip_delta=1.0):
    error = y_true - y_pred
    cond = tf.keras.backend.abs(error) < clip_delta
    squared_loss = 0.5 * tf.keras.backend.square(error)
    linear_loss = clip_delta * (tf.keras.backend.abs(error) - 0.5 * clip_delta)
    return tf.where(cond, squared_loss, linear_loss)

'''
' Same as above but returns the mean loss.
'''
def huber_loss_mean(y_true, y_pred, clip_delta=1.0):
    return tf.keras.backend.mean(huber_loss(y_true, y_pred, clip_delta))
Depending on whether you want the element-wise loss or the mean loss, use the corresponding function above.
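As a minimal usage sketch (the layer sizes and dimensions here are hypothetical), the mean variant can be passed straight to compile:
import tensorflow as tf

# Hypothetical model; any Keras model works the same way with this loss.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_dim=4),
    tf.keras.layers.Dense(2, activation='linear'),
])
model.compile(optimizer='sgd', loss=huber_loss_mean)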

You can wrap Tensorflow's tf.losses.huber_loss in a custom Keras loss function and then pass it to your model.
The reason for the wrapper is that Keras will only pass y_true and y_pred to the loss function, and you likely also want to use some of the many other parameters of tf.losses.huber_loss. So you'll need some kind of closure like:
def get_huber_loss_fn(**huber_loss_kwargs):
    def custom_huber_loss(y_true, y_pred):
        return tf.losses.huber_loss(y_true, y_pred, **huber_loss_kwargs)
    return custom_huber_loss

# Later...
model.compile(
    loss=get_huber_loss_fn(delta=0.1),
    ...
)

I was looking through the losses in Keras. Apparently logcosh has similar properties to Huber loss. More details on their similarity can be seen here.
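For intuition: log(cosh(x)) behaves like x**2 / 2 for small x and like |x| - log(2) for large x, which is why it resembles Huber loss (though its transition point is fixed rather than tunable). Since logcosh ships as a built-in Keras loss, a minimal sketch of using it:
# 'logcosh' is a built-in Keras loss identifier.
model.compile(optimizer='sgd', loss='logcosh')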

How about:
loss=tf.keras.losses.Huber(delta=100.0)
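This built-in class is available in tf.keras from TF 2.x onward; its default delta is 1.0, so the 100.0 above is just an example value. A minimal compile sketch:
import tensorflow as tf

model.compile(optimizer='sgd', loss=tf.keras.losses.Huber(delta=1.0))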

Related

minimize two loss functions in Keras

I want to minimize two loss functions: mean squared error and KL divergence. Is it possible to implement this in Keras, with something like:
loss = tf.keras.losses.KLDivergence() + tf.keras.losses.MeanSquaredError()
model.compile(optimizer="Adam",
              loss=loss)
This code gives me an error, as I can't sum those objects.
You could define a custom loss like this:
import tensorflow.keras as K

def custom_loss(y_true, y_pred):
    # Call the loss *functions* (not the class objects) and add their results.
    kl = K.losses.kullback_leibler_divergence(y_true, y_pred)
    mse = K.losses.mean_squared_error(y_true, y_pred)
    return kl + mse

model.compile(optimizer='adam',
              loss=custom_loss,
              metrics=['accuracy'])
The reason you can't sum those two directly is that you are trying to add the loss class objects themselves. Instead, you need to call them and sum up their results.
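Equivalently (a sketch assuming TF 2.x), you can instantiate the loss objects once and call the instances, since Keras loss objects are callable:
import tensorflow as tf

kl = tf.keras.losses.KLDivergence()
mse = tf.keras.losses.MeanSquaredError()

def custom_loss(y_true, y_pred):
    # Loss instances are callable on (y_true, y_pred) and each returns a scalar.
    return kl(y_true, y_pred) + mse(y_true, y_pred)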

How can I specify a loss function to be quadratic weighted kappa in Keras?

My understanding is that keras requires loss functions to have the signature:
def custom_loss(y_true, y_pred):
I am trying to use sklearn.metrics.cohen_kappa_score, which takes
(y1, y2, labels=None, weights=None, sample_weight=None)
If I use it as is:
model.compile(loss=metrics.cohen_kappa_score,
              optimizer='adam', metrics=['accuracy'])
Then the weights won't be set. I want to set them to quadratic. Is there some way to pass this through?
There are two steps in implementing a parameterized custom loss function (cohen_kappa_score) in Keras. Since implemented functions already exist for your needs, there is no need to implement the metric yourself. However, according to the documentation, sklearn.metrics.cohen_kappa_score does not support a weighted matrix in this setting.
Therefore, I suggest TensorFlow's implementation of cohen_kappa. Note, however, that using raw TensorFlow ops from Keras is not that easy...
According to this question, control_dependencies can be used to make a TensorFlow metric usable from Keras. Here is an example:
import tensorflow as tf
import keras.backend as K

def _cohen_kappa(y_true, y_pred, num_classes, weights=None,
                 metrics_collections=None, updates_collections=None, name=None):
    kappa, update_op = tf.contrib.metrics.cohen_kappa(
        y_true, y_pred, num_classes, weights,
        metrics_collections, updates_collections, name)
    K.get_session().run(tf.local_variables_initializer())
    with tf.control_dependencies([update_op]):
        kappa = tf.identity(kappa)
    return kappa
Since Keras loss functions take (y_true, y_pred) as parameters, you need a wrapper function that returns another function. Here is some code:
def cohen_kappa_loss(num_classes, weights=None, metrics_collections=None,
                     updates_collections=None, name=None):
    def cohen_kappa(y_true, y_pred):
        # Negated because Keras minimizes the loss, while kappa should be maximized.
        return -_cohen_kappa(y_true, y_pred, num_classes, weights,
                             metrics_collections, updates_collections, name)
    return cohen_kappa
Finally, you can use it as follows in Keras:
# get the loss function and set parameters
model_cohen_kappa = cohen_kappa_loss(num_classes=3, weights=weights)
# compile model
model.compile(loss=model_cohen_kappa,
              optimizer='adam', metrics=['accuracy'])
Regarding using the Cohen kappa metric as a loss function: in general, it is possible to use weighted kappa as a loss function. Here is a paper that uses weighted kappa as a loss function for multi-class classification.
You can define it as a custom loss, and yes, you are right that Keras accepts only two arguments in the loss function. Here is how you can define your loss:
def get_cohen_kappa(weights=None):
    def cohen_kappa_score(y_true, y_pred):
        """
        Define your code here. You can now use `weights` directly
        in this function.
        """
        return score
    return cohen_kappa_score
Now you can pass this function to your model as:
model.compile(loss=get_cohen_kappa(weights=weights),
              optimizer='adam')
model.fit(...)

Loss function Keras out_dim > 1

I have some training data (the table is omitted here), and a Keras model with more than one output dimension. I want to predict A, B and C:
model = Sequential()
model.add(GRU(32, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(3))
model.compile(loss='mean_squared_error', optimizer='adam')
But I want the minimum mean_squared_error on A, i.e. I only want A to be considered in the loss function.
What can I do?
You can define a custom loss function and only compute the mean_squared_error() loss based on the value of A:
from keras import losses

def loss_A(y_true, y_pred):
    # Compute MSE on the first column (A) only.
    return losses.mean_squared_error(y_true[:, 0], y_pred[:, 0])

# ...
model.compile(loss=loss_A, optimizer='adam')
What you need to look into is a custom loss function:
import keras

def only_A_mean_squared(y_true, y_pred):
    return keras.losses.mean_squared_error(y_true[:, 0], y_pred[:, 0])
And in order to use it:
model.compile(loss=only_A_mean_squared, optimizer='adam')
What I am doing above is creating a custom loss function that takes only the first dimension (the 'A' column) and feeds it to the standard Keras mean squared error loss.
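A related sketch (my own generalization, not from the answers above): a per-column weight vector lets each target contribute partially instead of all-or-nothing:
import tensorflow as tf

def column_weighted_mse(col_weights):
    w = tf.constant(col_weights, dtype=tf.float32)
    def loss(y_true, y_pred):
        # Per-column squared error, scaled by the column weights, then averaged.
        return tf.reduce_mean(tf.square(y_true - y_pred) * w)
    return loss

# e.g. only column A (index 0) contributes to training:
# model.compile(loss=column_weighted_mse([1.0, 0.0, 0.0]), optimizer='adam')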

Keras custom metrics with more than two inputs

I have a VAE model that I've broken down into encoder and decoder parts, with a custom loss. A simplified example is below:
input = Input(shape=(self.image_height, self.image_width, self.image_channel))
encoded = build_encoder(input)
decoded = build_decoder(encoded)
model = Model(input, decoded)
The loss (simplified) is
loss = K.mean(decoded[0] + decoded[1] + encoded[0]**2)
model.add_loss(loss)
model.compile(optimizer=self.optimizer)
My main problem is that I want to use Keras' ModelCheckpoint callback, which requires me to specify a metric to monitor. However, everything I have seen online, such as https://keras.io/metrics/#custom_metrics, only takes in y_true and y_pred and computes the metric from those. How would I implement it in my example model, where the loss is calculated from multiple tensors, not only the final output of "decoded"?
Well, apparently you can still use the tensors (Keras layer outputs) without passing them into the custom loss function.
So for my example, the loss can be calculated as
def custom_loss(y_true, y_pred):
    return K.mean(decoded[0] + decoded[1] + encoded[0]**2)

model.compile(optimizer=self.optimizer, loss=custom_loss)
y_true and y_pred are never used, but the tensors that are actually required can still be referenced (as long as they are in the same scope as the custom loss function, of course).
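In more recent tf.keras versions there is also model.add_metric, which registers a tensor-valued metric much like add_loss registers the loss, so ModelCheckpoint can monitor it by name. A sketch under that assumption, reusing the loss tensor from above:
# Expose the tensor as a named metric; ModelCheckpoint can then use
# monitor='vae_loss' (the name here is arbitrary).
model.add_metric(loss, name='vae_loss', aggregation='mean')
model.compile(optimizer=self.optimizer)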

Keras training only specific outputs

I am using Keras with TensorFlow and I have a model with 3 outputs, of which I only want to train 2.
def loss1(y_true, y_pred):
    return calculate_loss1(y_true, y_pred)

def loss2(y_true, y_pred):
    return calculate_loss2(y_true, y_pred)

def loss3(y_true, y_pred):
    # Multiplying by zero keeps the shape but contributes no gradient signal.
    return 0.0 * K.mean(y_pred)

model = Model(input=input, output=[out1, out2, out3])
model.compile(loss=[loss1, loss2, loss3], optimizer=my_optimizer)
I tried to do it with the code above, but I am not sure it does what I want. I think it adds up the losses and trains each output with that sum, whereas I do not wish to train out3 at all (I need out3 because it is used at test time). Could anybody tell me how to achieve this, or reassure me that the code actually does what I want?
You have to create 2 different models, like this:
model1 = Model(input=input, output=[out1, out2])
model2 = Model(input=input, output=[out1, out2, out3])
You compile both but only fit the first. They share their layers, so model2, even though it was never trained itself, will have the weights learned through model1. However, if there is a trainable layer that only feeds out3 and does not lie on the path between the input and out1/out2, that layer won't be trained and will stay at its initial values.
Does that help? :-)
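A minimal sketch of that setup (the data names x_train, y1_train, etc. are hypothetical):
model1.compile(loss=[loss1, loss2], optimizer=my_optimizer)
model2.compile(loss=[loss1, loss2, loss3], optimizer=my_optimizer)

# Train only model1; model2 reuses the weights of the shared layers.
model1.fit(x_train, [y1_train, y2_train], epochs=10)
out1_pred, out2_pred, out3_pred = model2.predict(x_test)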
You can set one of the losses to None:
def loss1(y_true, y_pred):
    return calculate_loss1(y_true, y_pred)

def loss2(y_true, y_pred):
    return calculate_loss2(y_true, y_pred)

model = Model(input=input, output=[out1, out2, out3])
model.compile(loss=[loss1, loss2, None], optimizer=my_optimizer)
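With loss=None on the third output, Keras should exclude out3 from training entirely, so fit only needs targets for the first two outputs. A sketch (the data names below are hypothetical):
# out3 gets no loss and no target, but it is still produced at predict time.
model.fit(x_train, [y1_train, y2_train], epochs=10)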
