TensorFlow / Keras custom loss function - Python

I am currently trying to write my own loss function in Keras, which checks whether the values of my prediction exist among the labels, in any order.
Here is a code example written in Python:
def my_loss(y_true, y_pred):
    n_values = 5
    loss = 0
    for i in range(n_values):
        if y_pred[i] not in y_true:
            loss += 1
    return loss
I have no idea how to express this with keras.backend. I cannot even find the docs for Keras functions like backend.sum(...), backend.flatten(), etc.
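A minimal sketch of one way to express this with backend ops, assuming y_true and y_pred are both 1-D per sample and "existing" means an exact match. Note that a hard 0/1 count has zero gradient almost everywhere, so the sketch returns a soft surrogate (each prediction's distance to its nearest label) instead:
from tensorflow.keras import backend as K

def my_loss(y_true, y_pred):
    # Pairwise absolute differences, shape (batch, n_pred, n_true).
    diff = K.abs(K.expand_dims(y_pred, -1) - K.expand_dims(y_true, 1))
    # Distance from each prediction to its nearest label, shape (batch, n_pred).
    min_diff = K.min(diff, axis=-1)
    # A hard count of unmatched predictions would have no useful gradient:
    # return K.sum(K.cast(min_diff > 1e-6, K.floatx()), axis=-1)
    # Differentiable surrogate: total distance to the nearest labels.
    return K.sum(min_diff, axis=-1)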

Related

Is there a way to write up a custom loss function in keras?

Is there a way to write up a custom MSE loss function in keras?
My training sample is cross-sectional data with k x n inputs, and my outputs are a k x 1 vector at time t, where t ranges from t-1 to t-120 (monthly time stamps of cross-sectional data).
I want to write up a custom MSE loss function that essentially puts a lower weight on training samples t-120 and a higher weight on training samples t-1.
Is there a way to do this?
Here is some simple code to write up a custom loss function in keras.
import tensorflow as tf

def my_loss_fn(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)  # Note the `axis=-1`

model.compile(optimizer='adam', loss=my_loss_fn)
You can also pass per-sample weights through the sample_weight argument of model.fit() to give each training sample its own weight.
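For the time-decay weighting specifically, a minimal sketch (assuming the training rows are ordered oldest, t-120, to newest, t-1; x_train, y_train, and model are placeholders):
import numpy as np

# Linearly increasing weights from oldest to newest sample,
# normalized so the average weight is 1 (any monotone scheme works).
weights = np.linspace(0.1, 1.0, num=len(x_train))
weights /= weights.mean()

model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, sample_weight=weights, epochs=10)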

Triplet networks using keras for RNN

I am trying to write a custom loss function for triplet loss (using Keras), which takes three arguments: anchor, positive, and negative. The triplets are generated using a GRU layer, and the arguments for model.fit are provided through data generators.
The problem I am facing while training:
TypeError: Cannot convert a symbolic Keras input/output to a numpy array.
This error may indicate that you're trying to pass a symbolic value to a NumPy
call, which is not supported. Or, you may be trying to pass Keras symbolic
inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically
converting the API call to a lambda layer in the Functional Model.
Implementation of the loss function:
def batch_hard_triplet_loss(self, anchor_embeddings, pos_embeddings, neg_embeddings, margin):
    def loss(y_true, y_pred):
        # distance between the anchor and the positive
        pos_dist = K.sum(K.square(anchor_embeddings - pos_embeddings), axis=-1)
        max_pos_dist = K.max(pos_dist)
        # distance between the anchor and the negative
        neg_dist = K.sum(K.square(anchor_embeddings - neg_embeddings), axis=-1)
        max_neg_dist = K.min(neg_dist)
        # compute loss
        basic_loss = max_pos_dist - max_neg_dist + margin
        tr_loss = K.maximum(basic_loss, 0.0)
        return tr_loss
    return loss
Could this be because Keras expects an array as the returned loss, while I am providing a scalar value?
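No answer is shown here, but this error usually means the loss closure captures symbolic graph tensors (anchor_embeddings and friends) that are not the tensors actually flowing through the compiled training step. One common workaround is to make the model output all three embeddings and slice them from y_pred inside the loss. A sketch, where emb_dim and the concatenated output layout are assumptions:
from tensorflow.keras import backend as K

def batch_hard_triplet_loss(margin=0.2, emb_dim=128):
    def loss(y_true, y_pred):
        # y_pred is assumed to be the three embeddings concatenated
        # along the last axis: [anchor | positive | negative].
        anchor = y_pred[:, :emb_dim]
        positive = y_pred[:, emb_dim:2 * emb_dim]
        negative = y_pred[:, 2 * emb_dim:]
        pos_dist = K.sum(K.square(anchor - positive), axis=-1)
        neg_dist = K.sum(K.square(anchor - negative), axis=-1)
        # Batch-hard: farthest positive vs. closest negative in the batch.
        return K.maximum(K.max(pos_dist) - K.min(neg_dist) + margin, 0.0)
    return loss

# model.compile(optimizer='adam', loss=batch_hard_triplet_loss(margin=0.2))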

Tensorflow Custom Regularization Term comparing the Prediction to the True value

Hello, I need a custom regularization term to add to my (binary cross-entropy) loss function. Can somebody help me with the TensorFlow syntax to implement this?
I simplified everything as much as possible so it is easier to help me.
The model takes a dataset of 10000 binary configurations of shape 18 x 18 as input and outputs a 16 x 16 configuration. The neural network consists of only two convolutional layers.
My model looks like this:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
EPOCHS = 10
model = models.Sequential()
model.add(layers.Conv2D(1,2,activation='relu',input_shape=[18,18,1]))
model.add(layers.Conv2D(1,2,activation='sigmoid',input_shape=[17,17,1]))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),loss=tf.keras.losses.BinaryCrossentropy())
model.fit(initial.reshape(10000,18,18,1),target.reshape(10000,16,16,1),batch_size = 1000, epochs=EPOCHS, verbose=1)
output = model(initial).numpy().reshape(10000,16,16)
Now I wrote a function which I'd like to use as an additional regularization term. It takes the true values and the prediction. Basically, it multiplies every point of both with its 'right' neighbor and then takes the difference. I assumed that the true and prediction tensors are 16 x 16 (and not 10000 x 16 x 16). Is this correct?
def regularization_term(prediction, true):
    order = list(range(1, 4))
    order.append(0)
    deviation = (true * true[:, order]) - (prediction * prediction[:, order])
    deviation = abs(deviation)**2
    return 0.2 * deviation
I would really appreciate some help with adding a function like this as a regularization term to my loss, to help the neural network train better on this 'right neighbor' interaction. I'm really struggling with TensorFlow's customization functionality.
Thank you, much appreciated.
It is quite simple. You need to specify a custom loss in which you add your regularization term. Something like this:
# to minimize!
def regularization_term(true, prediction):
    # Indices of each point's "right" neighbor, wrapping around. The list
    # must cover the full width of the output (16 here), and plain list
    # indexing such as true[:, order] is not supported on symbolic
    # tensors, so tf.gather is used instead.
    order = list(range(1, 16)) + [0]
    deviation = (true * tf.gather(true, order, axis=1)
                 - prediction * tf.gather(prediction, order, axis=1))
    deviation = tf.square(deviation)
    # Reduce to one value per sample so it matches the per-sample
    # cross-entropy below.
    return 0.2 * tf.reduce_mean(deviation, axis=[1, 2, 3])

def my_custom_loss(y_true, y_pred):
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
    return tf.reduce_mean(bce, axis=[1, 2]) + regularization_term(y_true, y_pred)

model.compile(optimizer='Adam', loss=my_custom_loss)
As stated by Keras:
Any callable with the signature loss_fn(y_true, y_pred) that returns
an array of losses (one per sample in the input batch) can be passed to
compile() as a loss. Note that sample weighting is automatically
supported for any such loss.
So be sure to return an array of losses (EDIT: as I can see now, it is also possible to return a simple scalar; it doesn't matter if you use, for example, a reduce function). Basically, y_true and y_pred have the batch size as their first dimension.
Details here: https://keras.io/api/losses/
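A quick shape check of the sketch above (random data, just to confirm the loss returns one value per sample):
import tensorflow as tf

y_true = tf.cast(tf.random.uniform((8, 16, 16, 1)) > 0.5, tf.float32)
y_pred = tf.random.uniform((8, 16, 16, 1))
print(my_custom_loss(y_true, y_pred).shape)  # (8,) -- one loss per sample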

custom class-wise loss function in tensorflow

For my problem, I want to predict customer review scores ranging from 1 to 5.
I thought it would be good to implement this as a regression problem, because a prediction of 1 when the true value is 5 should be a "worse" prediction than a 4.
I also want the model to perform roughly equally well for all review score classes.
Because my dataset is highly unbalanced, I want to create a metric/loss that captures this (similar to F1 for classification).
Therefore I created the following metric (for now just MSE is relevant):
def custom_metric(y_true, y_pred):
    df = pd.DataFrame(np.column_stack([y_pred, y_true]), columns=["Predicted", "Truth"])
    class_mse = 0
    #class_mae = 0
    print("MSE for Classes:")
    for i in df.Truth.unique():
        temp = df[df["Truth"] == i]
        mse = mean_squared_error(temp.Truth, temp.Predicted)
        #mae = mean_absolute_error(temp.Truth, temp.Predicted)
        print("Class {}: {}".format(i, mse))
        class_mse += mse
        #class_mae += mae
    print()
    print("AVG MSE over Classes {}".format(class_mse / len(df.Truth.unique())))
    #print("AVG MAE over Classes {}".format(class_mae / len(df.Truth.unique())))
Now an example prediction:
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error, mean_absolute_error
# sample predictions: "model" messed up at class 2 and 3
y_true = np.array((1,1,1,2,2,2,3,3,3,4,4,4,5,5,5))
y_pred = np.array((1,1,1,2,2,3,5,4,3,4,4,4,5,5,5))
custom_metric(y_true, y_pred)
Now my question: is it possible to create a custom TensorFlow loss function that behaves similarly? I also worked on this implementation, which is not yet ready for TensorFlow but is perhaps closer:
def custom_metric(y_true, y_pred):
    mse_class = 0
    num_classes = len(np.unique(y_true))
    stacked = np.vstack((y_true, y_pred))
    for i in np.unique(stacked[0]):
        y_true_temp = stacked[0][np.where(stacked[0] == i)]
        y_pred_temp = stacked[1][np.where(stacked[0] == i)]
        mse = np.mean(np.square(y_pred_temp - y_true_temp))
        mse_class += mse
    return mse_class / num_classes
But still, I am not sure how to work around the for loop for a TensorFlow-style definition.
Thanks in advance for any help!
The for loop should be replaced entirely by numpy/tensorflow operations on the whole tensor.
A custom metric example would be:
from keras import backend as K

def custom_mean_squared_error(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)
where y_true is the ground-truth label and y_pred are your predictions. You can see there are no explicit for loops.
The motivation for not using for loops is that vectorized operations (available in both numpy and tensorflow) take advantage of modern CPU architectures, turning many iterative operations into matrix ones. Consider that a dot product implemented in numpy takes roughly 30 times less time than a regular for loop in Python.
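To make the class-wise averaging concrete without a Python loop, here is a sketch using segment operations (assuming the true scores are integers from 1 to 5, as in the question):
import tensorflow as tf

def class_wise_mse(y_true, y_pred):
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    sq_err = tf.square(y_pred_f - y_true_f)
    # Map scores 1..5 to segment ids 0..4.
    segments = tf.cast(y_true_f, tf.int32) - 1
    # Mean squared error per class (0 for classes absent from the batch).
    per_class = tf.math.unsorted_segment_mean(sq_err, segments, num_segments=5)
    # Average only over the classes that actually appear in the batch.
    counts = tf.math.unsorted_segment_sum(tf.ones_like(sq_err), segments, 5)
    mask = tf.cast(counts > 0, tf.float32)
    return tf.reduce_sum(per_class * mask) / tf.reduce_sum(mask)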

Keras custom metrics with more than two inputs

I have a VAE model that I've broken down into the encoder and decoder parts and implemented a custom loss. A simplified example is below:
input = Input(shape=(self.image_height, self.image_width, self.image_channel))
encoded = build_encoder(input)
decoded = build_decoder(encoded)
model = Model(input, decoded)
The loss (simplified) is
loss = K.mean(decoded[0] + decoded[1] + encoded[0]**2)
model.add_loss(loss)
model.compile(optimizer=self.optimizer)
My main problem is that I want to use Keras' ModelCheckpoint callback, which would then require me to set custom metrics. However, everything I have seen online is similar to https://keras.io/metrics/#custom_metrics. This only takes in y_true and y_pred and computes the validation metric from there. How would I implement it in my example model, where the loss is calculated from multiple tensors, not only the final output of "decoded"?
Well, apparently you can still use the variables (Keras layers) without passing them into the custom loss function.
So for my example, the loss can be calculated as
def custom_loss(y_true, y_pred):
    return K.mean(decoded[0] + decoded[1] + encoded[0]**2)

model.compile(optimizer=self.optimizer, loss=custom_loss)
y_true and y_pred are never used, but the actual required tensors can still be referenced (as long as they are in the same scope as the custom loss function, of course).
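As a complementary sketch (assuming the encoded/decoded tensors from the model above), newer tf.keras versions also let you register such a quantity with add_metric, so ModelCheckpoint can monitor it by name (e.g. monitor='val_my_metric'):
# Register the quantity as a named metric alongside the add_loss term.
model.add_metric(K.mean(decoded[0] + decoded[1] + encoded[0]**2),
                 name='my_metric')
model.compile(optimizer=self.optimizer)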
