Custom loss with external parameters in Keras Tuner - python

My code runs without any problems with Keras Tuner and standard loss functions like 'mse', but I am trying to figure out how to write a custom loss function that accepts an external argument in addition to the true and predicted y, to use inside Keras Tuner for LSTM model selection. I am looking for the easiest and least painful way, and I have not found a working solution in older posts.
One approach I followed is this one. Let's say I have these variables:
# external vector needed in the custom loss function
ex_loss = np.logical_not(klines_backtest.loc[i_sel, ['d']].to_numpy(dtype=np.float32)[:sample_start])
# create data sequences for x and the vector to forecast, y
x_train, y_train = lstm_data_sequence(dataset[:sample_start, :-1], dataset[:sample_start, -1], lstm_sequence)
# concatenate the external vector to y so that y has shape Nx2
y_train = np.vstack((y_train, ex_loss[lstm_sequence:, 0])).T
I have defined the following loss function
def bande_loss(y_true, y_pred):
    mse = K.square(y_pred - y_true[:, 0])
    i_loss = K.equal(y_true[:, 1], 1) and K.greater_equal(y_pred, y_true[:, 0])
    i_loss = K.cast(~i_loss, 'float32')
    return K.mean(mse * i_loss)
Basically, I tried to avoid overriding the loss function by passing the additional variable I need in the loss function (of the same size as y_true) inside y_train, where I expect to have y_true and the corresponding external variable correctly sized for each batch.
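For comparison, here is a minimal sketch (my own, not part of the original code) of what the masked loss above seems intended to compute, written with element-wise TensorFlow ops instead of the Python and keyword, which cannot combine boolean tensors element-wise; the reshape of y_pred is also an assumption, since Dense(1) outputs shape (batch, 1) while the packed y_true columns have shape (batch,):

import tensorflow as tf

def bande_loss_elementwise(y_true, y_pred):
    # assumption: y_pred comes from Dense(1) with shape (batch, 1); flatten it to (batch,)
    y_pred = tf.reshape(y_pred, [-1])
    target = y_true[:, 0]   # forecast target packed in the first column
    flag = y_true[:, 1]     # external flag packed in the second column
    mse = tf.square(y_pred - target)
    # mask out samples where the flag is set AND the prediction is at or above the target
    mask = tf.logical_and(tf.equal(flag, 1.0), tf.greater_equal(y_pred, target))
    keep = tf.cast(tf.logical_not(mask), tf.float32)
    return tf.reduce_mean(mse * keep)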
The LSTM for model selection is
def lstm_model(hp):
    model = Sequential()
    model.add(InputLayer(input_shape=(48*3, 13)))
    num_layers = hp.Int('num_layers', min_value=4, max_value=8, step=2)
    num_units = hp.Choice('units', values=[50, 100, 250, 500])
    n_dropout = hp.Choice('n_dropout', values=[float(0), 0.10, 0.20])
    n_rec_dropout = hp.Choice('n_rec_dropout', values=[float(0), 0.10, 0.20])
    learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4, 1e-5, 1e-6])
    for i in range(num_layers):
        if i < num_layers - 1:
            r_sequence = True
        else:
            r_sequence = False
        model.add(LSTM(
            units=num_units,
            dropout=n_dropout,
            recurrent_dropout=n_rec_dropout,
            return_sequences=r_sequence))
    model.add(Dense(1))
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
        loss=bande_loss,
        metrics=[bande_loss])
    return model
Executing this code
tuner = Hyperband(
    hypermodel=lstm_model,
    objective=Objective("bande_loss", direction="min"),
    max_epochs=50,
    hyperband_iterations=2,
    executions_per_trial=1,
    overwrite=True,
    project_name='hyperband_tuner')
stop_early = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3, verbose=1)
tuner.search(x_train, y_train, epochs=30, validation_split=p_train, callbacks=[stop_early],
             shuffle=False, verbose=1)
I get this error
The second input must be a scalar, but it has shape [32]
[[{{node bande_loss/cond/switch_pred/_2736}}]] [Op:__inference_train_function_45266]
Function call stack:
train_function
Note that 32 is the (default) batch size.
Also running the same code with
def bande_loss(y_true, y_pred):
    mse = K.square(y_pred - y_true[:, 0])
    return K.mean(mse)
seems to work fine while running with
def bande_loss(y_true, y_pred):
    mse = K.square(y_pred - y_true[:, 1])
    return K.mean(mse)
gives me the same error and I cannot understand why.
I also tried the loss function override in this way
def lstm_model(hp):
    model = Sequential()
    model.add(InputLayer(input_shape=(48*3, 13)))
    num_layers = hp.Int('num_layers', min_value=4, max_value=8, step=2)
    num_units = hp.Choice('units', values=[50, 100, 250, 500])
    n_dropout = hp.Choice('n_dropout', values=[float(0), 0.10, 0.20])
    n_rec_dropout = hp.Choice('n_rec_dropout', values=[float(0), 0.10, 0.20])
    learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4, 1e-5, 1e-6])
    for i in range(num_layers):
        if i < num_layers - 1:
            r_sequence = True
        else:
            r_sequence = False
        model.add(LSTM(
            units=num_units,
            dropout=n_dropout,
            recurrent_dropout=n_rec_dropout,
            return_sequences=r_sequence))
    model.add(Dense(1))
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
        loss=bande_loss(ex_loss),
        metrics=[bande_loss(ex_loss)])
    return model
def bande_loss(ex_loss):
    def loss(y_true, y_pred):
        mse = K.square(y_pred - y_true)
        i_loss = K.equal(ex_loss, True) and K.greater_equal(y_pred, y_true)
        i_loss = K.cast(~i_loss, 'float32')
        return K.mean(mse * i_loss)
    return loss
...
# external vector needed in the custom loss function
ex_loss = np.logical_not(klines_backtest.loc[i_sel, ['d']].to_numpy(dtype=np.float32)[:sample_start])
# create data sequences for x and the vector to forecast, y
x_train, y_train = lstm_data_sequence(dataset[:sample_start, :-1], dataset[:sample_start, -1], lstm_sequence)
ex_loss = K.variable(ex_loss[lstm_sequence:], dtype=bool)
tuner = Hyperband(
    hypermodel=lstm_model,
    objective=Objective("bande_loss(ex_loss)", direction="min"),
    max_epochs=50,
    hyperband_iterations=2,
    executions_per_trial=1,
    overwrite=True,
    project_name='hyperband_tuner')
stop_early = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3, verbose=1)
tuner.search(x_train, y_train, epochs=30, validation_split=p_train, callbacks=[stop_early],
             shuffle=False, verbose=1)
But I get this error
tensorflow.python.framework.errors_impl.InvalidArgumentError: The second input must be a scalar, but it has shape [4176]
[[{{node cond/switch_pred/_12}}]] [Op:__inference_train_function_34471]
Function call stack:
train_function
Can anyone help me, or suggest a simpler and more effective way to implement custom loss functions with external parameters inside Keras Tuner?

Related

How to add class_weight to a custom train_step function in a custom keras model?

I am using tensorflow 2.8. I followed a tutorial from tensorflow on how to create your own fit function by overwriting the train_step function in your custom keras model class.
I wanted to add class_weight but in their section "Supporting sample_weight & class_weight" they don't show how to actually use class_weight, only sample_weight.
Is there a way to use class_weight in a custom train_step function?
I also found this Colab notebook in a GitHub issue. However this creates a custom model class but doesn't even use it and is therefore of no help either.
When actually creating the custom model and calling fit() I get the error: TypeError: __call__() got an unexpected keyword argument 'class_weight', when the loss in train_step() is calculated.
Example code (with the error) of what I'm trying to do:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))

# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]

# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)

# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)

# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()

class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        if len(data) == 3:
            x, y, class_weight = data
        else:
            x, y = data
        with tf.GradientTape() as tape:
            logits = self(x, training=True)  # Forward pass
            # Compute the loss value.
            # The loss function is configured in `compile()`.
            loss = self.compiled_loss(
                y,
                logits,
                class_weight=class_weight,
                regularization_losses=self.losses,
            )
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update the metrics.
        # Metrics are configured in `compile()`.
        self.compiled_metrics.update_state(y, y_pred, class_weight=class_weight)
        # Return a dict mapping metric names to current value.
        return {m.name: m.result() for m in self.metrics}

# Construct and compile an instance of CustomModel
model = CustomModel(inputs=inputs, outputs=outputs)
model.compile(optimizer=optimizer, loss=loss_fn, metrics=["accuracy"])

class_weight = {
    0: 1.0,
    1: 1.0,
    2: 1.0,
    3: 1.0,
    4: 1.0,
    # Set weight "2" for class "5",
    # making this class 2x more important
    5: 2.0,
    6: 1.0,
    7: 1.0,
    8: 1.0,
    9: 1.0,
}
model.fit(train_dataset, class_weight=class_weight, epochs=3)
All those examples do not work because loss functions don't take class_weight as an argument, as far as I understand when looking at the documentation.
I tried to fix this by creating my own loss function:
#tf.function
def weighted_sparse_categorical_crossentropy(labels, logits, class_weight=None):
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(labels, logits)
    if class_weight is None:
        if debug:
            print("None weights: NO WEIGHTS SET!")
        return loss
    # get all class weights as list
    if type(class_weight) is not list:
        class_weight = list(class_weight.values())
    class_weights_gathered = tf.gather(class_weight, labels)
    return tf.reduce_mean(class_weights_gathered * loss)
Then using this to compile my model and calling .fit()
model.compile(optimizer=optimizer, loss=weighted_sparse_categorical_crossentropy)
model.fit(X, class_weight=class_weight, epochs=1)
But I still get TypeError: __call__() got an unexpected keyword argument 'class_weight' despite class_weight being clearly an argument in my function.
I also looked at GitHub to see what Tensorflow actually does with the class_weight inside the .fit() function, and it seems to convert it to a sample_weight somehow.
So I'm not sure if what I want is even possible. But then the section in the official tensorflow tutorial would be wrong since there would be no support for class_weight.
Pretty sure the error is here:
loss = self.compiled_loss(
    y,
    logits,
    class_weight=class_weight,
    regularization_losses=self.losses,
)
because class_weight is not recognized... Instead, you should use sample_weight, and from the Keras documentation it seems that this is as simple as unpacking the incoming data, as you are doing:
def train_step(self, data):
    x, y, sample_weight = data
Your doubt about the "missing example" is due to the fact that Keras will automatically transform your class_weight into sample_weight, as you can see from their code:
class M(K.Model):
    def train_step(self, data):
        tf.print(data)
        return {"loss": 0}

model = M()
model.compile(K.optimizers.Adam(), K.losses.SparseCategoricalCrossentropy(), run_eagerly=True)
model.fit(np.array([[-1], [-1], [-1]]), np.array([[0],[0],[1]]), class_weight={0:7, 1:9})
which prints:
([[-1][-1][-1]],
[[0] [1] [0]],
[7 9 7])
where you can clearly see that the first line is x, the second line is y, and the third one contains the class weights associated with y, which you can feed to the loss as usual:
x, y, sample_weight = data
...
loss = self.compiled_loss(
    y,
    logits,
    sample_weight=sample_weight,
    regularization_losses=self.losses,
)
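If you nevertheless want to start from a class_weight dictionary inside a custom train_step, a minimal sketch of the conversion to per-sample weights (my own, assuming integer labels and a dict keyed 0..num_classes-1) could look like this:

import tensorflow as tf

def class_weight_to_sample_weight(y, class_weight):
    # class_weight: dict {class_index: weight}; y: integer labels of shape (batch,) or (batch, 1)
    num_classes = len(class_weight)
    weight_table = tf.constant([class_weight[i] for i in range(num_classes)], dtype=tf.float32)
    # look up the weight of each label in the batch
    return tf.gather(weight_table, tf.cast(tf.reshape(y, [-1]), tf.int32))

# usage inside train_step (sketch):
# sample_weight = class_weight_to_sample_weight(y, {0: 7.0, 1: 9.0})
# loss = self.compiled_loss(y, logits, sample_weight=sample_weight,
#                           regularization_losses=self.losses)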

Can't backward pass two losses in Classification Transformer Model

For my model I'm using a RoBERTa transformer model and the Trainer from the Hugging Face transformers library.
I calculate two losses:
lloss is a cross-entropy loss and dloss calculates the loss between hierarchy levels.
The total loss is the sum of lloss and dloss. (Based on this)
When calling total_loss.backward(), however, I get the error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed
Any idea why that happens? Can I force it to only call backward once? Here is the loss calculation part:
dloss = calculate_dloss(prediction, labels, 3)
lloss = calculate_lloss(prediction, labels, 3)
total_loss = lloss + dloss
total_loss.backward()

def calculate_lloss(predictions, true_labels, total_level):
    '''Calculates the layer loss.
    '''
    loss_fct = nn.CrossEntropyLoss()
    lloss = 0
    for l in range(total_level):
        lloss += loss_fct(predictions[l], true_labels[l])
    return self.alpha * lloss

def calculate_dloss(predictions, true_labels, total_level):
    '''Calculate the dependence loss.
    '''
    dloss = 0
    for l in range(1, total_level):
        current_lvl_pred = torch.argmax(nn.Softmax(dim=1)(predictions[l]), dim=1)
        prev_lvl_pred = torch.argmax(nn.Softmax(dim=1)(predictions[l-1]), dim=1)
        D_l = self.check_hierarchy(current_lvl_pred, prev_lvl_pred, l)  # just a boolean tensor
        l_prev = torch.where(prev_lvl_pred == true_labels[l-1], torch.FloatTensor([0]).to(self.device), torch.FloatTensor([1]).to(self.device))
        l_curr = torch.where(current_lvl_pred == true_labels[l], torch.FloatTensor([0]).to(self.device), torch.FloatTensor([1]).to(self.device))
        dloss += torch.sum(torch.pow(self.p_loss, D_l*l_prev)*torch.pow(self.p_loss, D_l*l_curr) - 1)
    return self.beta * dloss
There is nothing wrong with having a loss that is the sum of two individual losses; here is a small proof of principle adapted from the docs:
import torch
import numpy
from sklearn.datasets import make_blobs

class Feedforward(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Feedforward, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(self.hidden_size, 1)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        hidden = self.fc1(x)
        relu = self.relu(hidden)
        output = self.fc2(relu)
        output = self.sigmoid(output)
        return output

def blob_label(y, label, loc):  # assign labels
    target = numpy.copy(y)
    for l in loc:
        target[y == l] = label
    return target

x_train, y_train = make_blobs(n_samples=40, n_features=2, cluster_std=1.5, shuffle=True)
x_train = torch.FloatTensor(x_train)
y_train = torch.FloatTensor(blob_label(y_train, 0, [0]))
y_train = torch.FloatTensor(blob_label(y_train, 1, [1,2,3]))

x_test, y_test = make_blobs(n_samples=10, n_features=2, cluster_std=1.5, shuffle=True)
x_test = torch.FloatTensor(x_test)
y_test = torch.FloatTensor(blob_label(y_test, 0, [0]))
y_test = torch.FloatTensor(blob_label(y_test, 1, [1,2,3]))

model = Feedforward(2, 10)
criterion = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

model.eval()
y_pred = model(x_test)
before_train = criterion(y_pred.squeeze(), y_test)
print('Test loss before training', before_train.item())

model.train()
epoch = 20
for epoch in range(epoch):
    optimizer.zero_grad()
    # Forward pass
    y_pred = model(x_train)
    # Compute Loss
    lossCE = criterion(y_pred.squeeze(), y_train)
    lossSQD = (y_pred.squeeze() - y_train).pow(2).mean()
    loss = lossCE + lossSQD
    print('Epoch {}: train loss: {}'.format(epoch, loss.item()))
    # Backward pass
    loss.backward()
    optimizer.step()
There must be a real second time that you call backward, directly or indirectly, on some variable that then traverses through your graph. It would be a bit too much to ask for the complete code here; only you can check this, or at least reduce it to a minimal example (while doing so, you might already find the issue). Apart from that, I would start by checking:
Does it already occur in the first iteration of training? If not: are you reusing any calculation results for the second iteration without a detach?
When you do backward on your losses individually lloss.backward() followed by dloss.backward() (this has the same effect as adding them together first as gradients are accumulated): what happens? This will let you track down for which of the two losses the error occurs.
After backward(), your computation graph is freed, so for a second backward pass you need to build a new graph by providing the inputs again. If you want to traverse the same graph again after backward (for some reason), you need to set the retain_graph flag in backward to True. See retain_graph here.
P.S. As the summation of tensors is automatically differentiable, summing the losses does not cause any issue in the backward pass.
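As a small illustration of the retain_graph point above (a sketch of mine, not part of the original answer), calling backward twice on losses that share parts of a graph requires retain_graph=True on the first call:

import torch

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(3, 1, requires_grad=True)

out = x @ w                        # shared part of the graph
loss1 = out.pow(2).mean()
loss2 = out.abs().mean()

loss1.backward(retain_graph=True)  # keep the graph alive for the second backward
loss2.backward()                   # without retain_graph above, this raises the "second time" error

# equivalent, and cheaper, is to sum first and call backward once:
# (loss1 + loss2).backward()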

How to apply a loss metric that will penalize predicting all zeros in multilabel classification problem?

Say I have a classification problem that has 30 potential binary labels. These labels are not mutually exclusive. The labels tend to be sparse: there is, on average, 1 positive label out of all 30, but sometimes more than one. In the following code, how can I penalize the model for predicting all zeros? The accuracy will be high, but recall will be awful!
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

OUTPUT_NODES = 30
np.random.seed(0)

def get_dataset():
    """
    Get a dataset of X and y. This is a learnable problem as there is some signal in the features. 10% of the time, a
    positive-output's index will also have a positive feature for that index
    :return: X and y data for training
    """
    n_observations = 30000
    y = np.random.rand(n_observations, OUTPUT_NODES)
    y = (y <= (1 / OUTPUT_NODES)).astype(int)  # Makes a sparse output where there is roughly 1 positive label: ((1 / OUTPUT_NODES) * OUTPUT_NODES ≈ 1)
    X = np.zeros((n_observations, OUTPUT_NODES))
    for i in range(len(y)):
        for j, feature in enumerate(y[i]):
            if feature == 1:
                X[i][j] = 1 if np.random.rand(1) > 0.9 else 0  # Makes the input features more noisy
                # X[i][j] = 1  # Using this instead will make the model perform very well
    return X, y

def create_model():
    input_layer = Input(shape=(OUTPUT_NODES, ))
    dense1 = Dense(100, activation='relu')(input_layer)
    dense2 = Dense(100, activation='relu')(dense1)
    output_layer = Dense(30, activation='sigmoid')(dense2)
    model = Model(inputs=input_layer, outputs=output_layer)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['Recall'])
    return model

def main():
    X, y = get_dataset()
    model = create_model()
    model.fit(X, y, epochs=10, batch_size=10)
    X_pred = np.random.randint(0, 2, (100, OUTPUT_NODES))
    y_pred = model.predict(X_pred)
    print(X_pred)
    print(y_pred.round(1))

if __name__ == '__main__':
    main()
I believe I read here that I could use:
weighted_cross_entropy_with_logits
to address this issue. How would that affect my final output layer's activation functions? Would I have to have an activation function? How do I specify a penalty for misclassifications of a true positive class?
OK, this is an interesting problem.
First, you need to define a weighted cross-entropy loss wrapper:
def wce_logits(positive_class_weight=1.):
    def mylossw(y_true, logits):
        cross_entropy = tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(logits=logits, labels=tf.cast(y_true, dtype=tf.float32), pos_weight=positive_class_weight))
        return cross_entropy
    return mylossw
The positive_class_weight is applied to the positive class data. You need this wrapper for tf.nn.weighted_cross_entropy_with_logits to get a loss function that takes y_true and y_pred (only) as inputs.
Note that you must cast y_true to float32.
Second, you can not use the predefined Recall, because it does not work with logits. I found a workaround in this discussion
class Recall(tf.keras.metrics.Recall):
    def __init__(self, from_logits=False, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._from_logits = from_logits

    def update_state(self, y_true, y_pred, sample_weight=None):
        if self._from_logits:
            super(Recall, self).update_state(y_true, tf.nn.sigmoid(y_pred), sample_weight)
        else:
            super(Recall, self).update_state(y_true, y_pred, sample_weight)
Finally, you need to remove the sigmoid activation from the last layer as you are using logits
def create_model():
    input_layer = Input(shape=(OUTPUT_NODES, ))
    dense1 = Dense(100, activation='relu')(input_layer)
    dense2 = Dense(100, activation='relu')(dense1)
    output_layer = Dense(30)(dense2)
    model = Model(inputs=input_layer, outputs=output_layer)
    model.compile(optimizer='adam', loss=wce_logits(positive_class_weight=27.), metrics=[Recall(from_logits=True)])
    return model
Note that the positive weight is set to 27 here. You can read a discussion on how to correctly calculate the weight.
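How to choose that weight is not shown here, but a common heuristic (my own sketch, not part of the original answer) is the ratio of negative to positive labels in the training data, which for roughly 1 positive out of 30 labels gives about 29:

import numpy as np

def estimate_pos_weight(y):
    # y: binary label matrix of shape (n_samples, n_labels)
    n_positive = y.sum()
    n_negative = y.size - n_positive
    return n_negative / n_positive

# with ~1 positive per 30 labels this is roughly (30 - 1) / 1 ≈ 29
# pos_weight = estimate_pos_weight(y_train)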

Get Gradients with Keras Tensorflow 2.0

I would like to keep track of the gradients over tensorboard.
However, since session run statements are not a thing anymore and the write_grads argument of tf.keras.callbacks.TensorBoard is deprecated, I would like to know how to keep track of gradients during training with Keras or TensorFlow 2.0.
My current approach is to create a new callback class for this purpose, but without success. Maybe someone else knows how to accomplish this kind of advanced stuff.
The code created for testing is shown below, but it runs into errors regardless of whether I print a gradient value to the console or write it to TensorBoard.
import tensorflow as tf
from tensorflow.python.keras import backend as K

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu', name='dense128'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax', name='dense10')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

class GradientCallback(tf.keras.callbacks.Callback):
    console = True

    def on_epoch_end(self, epoch, logs=None):
        weights = [w for w in self.model.trainable_weights if 'dense' in w.name and 'bias' in w.name]
        loss = self.model.total_loss
        optimizer = self.model.optimizer
        gradients = optimizer.get_gradients(loss, weights)
        for t in gradients:
            if self.console:
                print('Tensor: {}'.format(t.name))
                print('{}\n'.format(K.get_value(t)[:10]))
            else:
                tf.summary.histogram(t.name, data=t)

file_writer = tf.summary.create_file_writer("./metrics")
file_writer.set_as_default()

# write_grads has been removed
tensorboard_cb = tf.keras.callbacks.TensorBoard(histogram_freq=1, write_grads=True)
gradient_cb = GradientCallback()

model.fit(x_train, y_train, epochs=5, callbacks=[gradient_cb, tensorboard_cb])
Printing bias gradients to the console (console parameter = True)
leads to: AttributeError: 'Tensor' object has no attribute 'numpy'
Writing to tensorboard (console parameter = False) creates:
TypeError: Using a tf.Tensor as a Python bool is not allowed. Use if t is not None: instead of if t: to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the
value of a tensor.
To compute the gradients of the loss with respect to the weights, use a GradientTape along these lines (a minimal sketch; x, y, and loss_fn stand for a batch of inputs, the corresponding targets, and the loss function):
with tf.GradientTape() as tape:
    y_pred = model(x, training=True)  # forward pass recorded on the tape
    loss = loss_fn(y, y_pred)         # the loss must be computed inside the tape
grads = tape.gradient(loss, model.trainable_weights)
This is (arguably poorly) documented on GradientTape.
We do not need to tape.watch the variable because trainable parameters are watched by default.
As a function, it can be written as
def gradient(model, x):
    x_tensor = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as t:
        t.watch(x_tensor)
        loss = model(x_tensor)
    return t.gradient(loss, x_tensor).numpy()
Also have a look here: https://github.com/tensorflow/tensorflow/issues/31542#issuecomment-630495970
richardwth wrote a child class of Tensorboard.
I adapted it as follows:
class ExtendedTensorBoard(tf.keras.callbacks.TensorBoard):
    def _log_gradients(self, epoch):
        writer = self._writers['train']
        with writer.as_default(), tf.GradientTape() as g:
            # here we use test data to calculate the gradients
            features, y_true = list(val_dataset.batch(100).take(1))[0]
            y_pred = self.model(features)  # forward-propagation
            loss = self.model.compiled_loss(y_true=y_true, y_pred=y_pred)  # calculate loss
            gradients = g.gradient(loss, self.model.trainable_weights)  # back-propagation

            # In eager mode, grads does not have name, so we get names from model.trainable_weights
            for weights, grads in zip(self.model.trainable_weights, gradients):
                tf.summary.histogram(
                    weights.name.replace(':', '_') + '_grads', data=grads, step=epoch)
        writer.flush()

    def on_epoch_end(self, epoch, logs=None):
        # This function overwrites the on_epoch_end in tf.keras.callbacks.TensorBoard
        # but we do need to run the original on_epoch_end, so here we use the super function.
        super(ExtendedTensorBoard, self).on_epoch_end(epoch, logs=logs)
        if self.histogram_freq and epoch % self.histogram_freq == 0:
            self._log_gradients(epoch)
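For completeness, a minimal usage sketch (mine, assuming a tf.data.Dataset called val_dataset of (features, labels) pairs exists, as referenced inside _log_gradients):

extended_tb = ExtendedTensorBoard(log_dir="./logs", histogram_freq=1)
model.fit(x_train, y_train, epochs=5, callbacks=[extended_tb])
# gradient histograms then appear in TensorBoard alongside the usual weight histograms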

Keras custom loss function: Accessing current input pattern

In Keras (with Tensorflow backend), is the current input pattern available to my custom loss function?
The current input pattern is defined as the input vector used to produce the prediction. For example, consider the following: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, shuffle=False). Then the current input pattern is the current X_train vector associated with the y_train (which is termed y_true in the loss function).
When designing a custom loss function, I intend to optimize/minimize a value that requires access to the current input pattern, not just the current prediction.
I've taken a look through https://github.com/fchollet/keras/blob/master/keras/losses.py
I've also looked through "Cost function that isn't just y_pred, y_true?"
I am also familiar with previous examples to produce a customized loss function:
import keras.backend as K

def customLoss(y_true, y_pred):
    return K.sum(K.log(y_true) - K.log(y_pred))
Presumably (y_true,y_pred) are defined elsewhere. I've taken a look through the source code without success and I'm wondering whether I need to define the current input pattern myself or whether this is already accessible to my loss function.
You can wrap the loss function as an inner function and pass your input tensor to it (as is commonly done when passing additional arguments to the loss function).
def custom_loss_wrapper(input_tensor):
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')
You can verify that input_tensor and the loss value (mostly, the K.mean(input_tensor) part) will change as different X is passed to the model.
X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)
model.test_on_batch(X, y) # => 1.1974642
X *= 1000
model.test_on_batch(X, y) # => 511.15466
You can use add_loss to pass external layers to your loss, in your case the input tensor.
Here is an example:
def CustomLoss(y_true, y_pred, input_tensor):
    return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)

X = np.random.uniform(0, 1, (1000, 10))
y = np.random.randint(0, 2, 1000)

inp = Input(shape=(10,))
hidden = Dense(100, activation='relu')(inp)
out = Dense(1, activation='sigmoid')(hidden)
target = Input((1,))
model = Model([inp, target], out)
model.add_loss(CustomLoss(target, out, inp))
model.compile(loss=None, optimizer='adam')
model.fit(x=[X, y], y=None, epochs=3)
To use the model in inference mode (removing the target from inputs)
final_model = Model(model.input[0], model.output)
final_model.predict(X)
