Tensorflow one custom metric for multioutput models - python

I can't find the info in the documentation so I am asking here.
I have a multioutput model with 3 different outputs:
model = tf.keras.Model(inputs=[input], outputs=[output1, output2, output3])
The predicted labels for validation are constructed from these 3 outputs to form only one; it's a post-processing step. The dataset used for training contains those 3 intermediate outputs, but for validation I evaluate on a dataset of final labels instead of the 3 kinds of intermediate data.
I would like to evaluate my model using a custom metric that handles the post-processing and the comparison with the ground truth.
My question is, in the code of the custom metric, will y_pred be a list of the 3 outputs of the model?
class MyCustomMetric(tf.keras.metrics.Metric):
    def __init__(self, name='my_custom_metric', **kwargs):
        super(MyCustomMetric, self).__init__(name=name, **kwargs)

    def update_state(self, y_true, y_pred, sample_weight=None):
        # ? is y_pred a list [batch_output_1, batch_output_2, batch_output_3] ?
        pass

    def result(self):
        pass

# one single metric handling the 3 outputs?
model.compile(optimizer=tf.compat.v1.train.RMSPropOptimizer(0.01),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=[MyCustomMetric()])

With your given model definition, this is a standard multi-output Model.
model = tf.keras.Model(inputs=[input], outputs=[output_1, output_2, output_3])
In general, all (custom) metrics as well as (custom) losses will be called on every output separately (as y_pred)! Within the loss/metric function you will only see the one output together with the one corresponding target tensor.
By passing a list of loss functions (length == number of outputs of your model) you can specify which loss will be used for which output:
model.compile(optimizer=Adam(), loss=[loss_for_output_1, loss_for_output_2, loss_for_output_3], loss_weights=[1, 4, 8])
The total loss (which is the objective function to minimize) will be the additive combination of all losses multiplied with the given loss weights.
It is almost the same for the metrics! Here you can pass (as for the loss) a list (length == number of outputs) of metrics and tell Keras which metric to use for which of your model outputs.
model.compile(optimizer=Adam(), loss='mse', metrics=[metrics_for_output_1, metrics_for_output2, metrics_for_output3])
Here metrics_for_output_X can be either a function or a list of functions, which will all be called with the one corresponding output_X as y_pred.
This is explained in detail in the documentation of multi-output models in Keras. They also show examples for using dictionaries (to map loss/metric functions to a specific output) instead of lists, as sketched below.
https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models
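For illustration, here is a hedged sketch of the dictionary form; the output layer names output_1, output_2, output_3 are assumptions, so replace them with the actual names of your output layers:

import tensorflow as tf

# Hypothetical output names; loss, loss_weights and metrics are all keyed per output.
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss={'output_1': 'categorical_crossentropy',
          'output_2': 'mse',
          'output_3': 'mae'},
    loss_weights={'output_1': 1.0, 'output_2': 4.0, 'output_3': 8.0},
    metrics={'output_1': ['accuracy'],
             'output_2': ['mae'],
             'output_3': ['mse', 'mae']})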
Further information:
If I understand you correctly, you want to train your model using a loss function that compares the three model outputs with three ground-truth values, and you want to do some sort of performance evaluation by comparing a value derived from the three model outputs with a single ground-truth value.
Usually the model gets trained on the same objective it is evaluated on, otherwise you might get poorer results when evaluating your model!
Anyway, for evaluating your model on a single label I suggest you either:
1. (The clean solution)
Rewrite your model and incorporate the post-processing steps. Add all the necessary operations (as layers) and map them to an auxiliary output. For training your model you can set the loss_weight of the auxiliary output to zero.
Merge your datasets so you can feed your model the model input, the intermediate target outputs, as well as the labels.
As explained above, you can now define a metric comparing the auxiliary model output with the given target labels (a sketch follows after this list).
2.
Or you train your model as is and derive the metric, e.g. in a custom Callback, by running your post-processing steps on the three outputs of model.predict(input) (a second sketch follows below).
This makes it necessary to write custom summaries if you want to track those values in TensorBoard! That's why I would not recommend this solution.
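Here is a minimal sketch of option 1, assuming output1, output2, output3 and input are the tensors from your question; the Lambda body, the layer name and MyCustomMetric are placeholders/assumptions for your real post-processing and metric:

import tensorflow as tf

# Placeholder post-processing: replace the lambda body with your real combination step.
aux = tf.keras.layers.Lambda(
    lambda t: t[0] + t[1] + t[2],
    name='aux_output')([output1, output2, output3])

model = tf.keras.Model(inputs=[input], outputs=[output1, output2, output3, aux])

def zero_loss(y_true, y_pred):
    # Contributes nothing to training; the auxiliary output is evaluation-only.
    return 0.0 * tf.reduce_mean(y_pred)

model.compile(
    optimizer='adam',
    loss=[tf.keras.losses.categorical_crossentropy] * 3 + [zero_loss],
    loss_weights=[1.0, 1.0, 1.0, 0.0],
    metrics={'aux_output': [MyCustomMetric()]})

model.fit then takes four targets per sample: the three intermediate targets plus the final label for aux_output.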
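And a sketch of option 2 as a custom Callback; val_inputs, val_labels and post_process are assumptions standing in for your validation data and your post-processing step:

import numpy as np
import tensorflow as tf

class PostProcessedMetric(tf.keras.callbacks.Callback):
    def __init__(self, val_inputs, val_labels, post_process):
        super().__init__()
        self.val_inputs = val_inputs
        self.val_labels = val_labels
        self.post_process = post_process

    def on_epoch_end(self, epoch, logs=None):
        # predict returns a list with one array per model output
        out1, out2, out3 = self.model.predict(self.val_inputs)
        final_pred = self.post_process(out1, out2, out3)
        score = np.mean(final_pred == self.val_labels)  # placeholder comparison
        print(f' - val_post_processed_score: {score:.4f}')
        if logs is not None:
            logs['val_post_processed_score'] = score  # visible to later callbacks

As the answer says, getting this value into TensorBoard still requires writing the summaries yourself.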

Related

Why did I get 2 different losses for sparse_categorical_crossentropy and categorical_crossentropy?

I trained a model for multiclass classification with three classes. In the first approach, I converted the classes into one-hot vectors and trained with categorical crossentropy as the loss function, reaching a loss of 0.07 after 1000 epochs. When I used the same approach but did not convert the classes into one-hot vectors and used sparse_categorical_crossentropy as the loss function, I reached a loss of 0.05 after 1000 epochs. Does this mean that sparse_categorical_crossentropy is better than categorical_crossentropy?
Thank you!
Thank You!
You can't compare two loss functions in terms of raw loss values, since the definition of the loss itself changed; you can only compare the performance on the same test dataset.
In general, use sparse_categorical_crossentropy when your labels are integer class indices of mutually exclusive classes (i.e. each sample belongs to exactly one class) and categorical_crossentropy when your labels are one-hot vectors or soft probabilities (like [0.5, 0.3, 0.2]).
You got different losses because the representation of the labels changed; in Keras, sparse_categorical_crossentropy is defined as categorical crossentropy with integer targets.
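A quick hedged sanity check (the logits and labels below are made up for illustration): with hard labels the two losses compute the same number, only the label representation differs.

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
probs = tf.nn.softmax(logits)

sparse_label = tf.constant([0])                 # integer class index
onehot_label = tf.constant([[1.0, 0.0, 0.0]])   # same label, one-hot encoded

sparse_loss = tf.keras.losses.sparse_categorical_crossentropy(sparse_label, probs)
dense_loss = tf.keras.losses.categorical_crossentropy(onehot_label, probs)

print(float(sparse_loss), float(dense_loss))    # identical values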

Accessing part of y_pred in customized loss function for calculating loss

I want to develop a neural network with three inputs pos, anc, neg and three outputs pos_out, anc_out, neg_out. While calculating the loss in my customized loss function in Keras, I want to access pos_out, anc_out and neg_out inside y_pred. I can access y_pred as a whole, but how do I access the individual parts pos_out, anc_out and neg_out?
I have applied the max function to y_pred and it calculates the max value correctly. If I pass only one output to the Model, as Model(input=[pos,anc,neg], output=pos_out), it also calculates the max value correctly. But when it comes to accessing the max values from pos_out, anc_out and neg_out separately in the customized function, it does not work.
def testmodel(input_shape):
    pos = Input(shape=(14,300))
    anc = Input(shape=(14,300))
    neg = Input(shape=(14,300))

    model = Sequential()
    model.add(Flatten(batch_input_shape=(1,14,300)))

    pos_out = model(pos)
    anc_out = model(anc)
    neg_out = model(neg)

    model = Model(input=[pos,anc,neg], output=[pos_out,anc_out,neg_out])
    return model

def customloss(y_true, y_pred):
    print((K.int_shape(y_pred)[1]))
    #loss = K.max((y_pred))
    loss = K.max[pos_out]
    return loss
You can create a loss function that contains a closure that lets you access the model and thus the targets and the model layer outputs.
class ExampleCustomLoss(object):
    """ The loss function can access model.inputs, model.targets and the outputs
    of specific layers. These are all tensors and will have the expected results
    for the batch.
    """

    def __init__(self, model):
        self.model = model

    def loss(self, y_true, y_pred, **kwargs):
        ...
        return loss

model = Model(..., ...)
loss_calculator = ExampleCustomLoss(model)
model.compile('adam', loss_calculator.loss)
However, it may be simpler to do the inverse. i.e. have a single model output
out = Concatenate(axis=1)([pos_out, anc_out, neg_out])
And then in the loss function slice y_true and y_pred.
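For example, a hedged sketch of that slicing, assuming each flattened output has 14*300 = 4200 units and that a triplet-style objective is what you are after (the margin of 1.0 is a placeholder):

from keras import backend as K

FLAT = 14 * 300  # size of each flattened output, per the question's input shape

def customloss(y_true, y_pred):
    # y_pred is the concatenation [pos_out | anc_out | neg_out] along axis 1
    pos_out = y_pred[:, :FLAT]
    anc_out = y_pred[:, FLAT:2 * FLAT]
    neg_out = y_pred[:, 2 * FLAT:]

    # placeholder triplet-style objective: pull anchor towards positive,
    # push it away from negative by a margin of 1.0
    pos_dist = K.sum(K.square(anc_out - pos_out), axis=1)
    neg_dist = K.sum(K.square(anc_out - neg_out), axis=1)
    return K.mean(K.maximum(pos_dist - neg_dist + 1.0, 0.0))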
From the names of variables, it looks as if you are trying to use a triplet loss. You may find this other question useful:
How to deal with triplet loss when at time of input i have only two files i.e. at time of testing
Your loss function gets 2 arguments: the true label and the model output, and the model output will have the shape that you define when you define the net. Your loss function needs to output a single value measuring the difference between your model's output and the true label during training.
Also please add some trainable layers to your model, because your custom loss function will be useless otherwise.

Custom loss function which depends on another neural network in keras

I have a "How can I do that" question with keras :
Assuming that I have a first neural network, say NNa which has 4 inputs (x,y,z,t) which is already trained.
If I have a second neural network, say NNb, and that its loss function depends on the first neural network.
The custom loss function of NNb customLossNNb calls the prediction of NNa with a fixed grid (x,y,z) and just modify the last variable t.
Here in pseudo-python-code what I would like to do to traine the second NN : NNb:
grid = np.mgrid[0:10:1, 0:10:1, 0:10:1].reshape(3,-1).T

Y[:,0] = time
Y[:,1] = something

def customLossNNb(NNa, grid):
    def diff(y_true, y_pred):
        for ii in range(y_true.shape[0]):
            currentInput = concatenation of grid and y_true[ii,0]
            toto[ii,:] = NNa.predict(currentInput)
            #some stuff with toto
        return #...
    return diff
Then
NNb.compile(loss=customLossNNb(NNa,K.variable(grid)),optimizer='Adam')
NNb.fit(input,Y)
In fact the line that causes me trouble is currentInput = concatenation of grid and y_true[ii,0].
I tried to send the grid to customLossNNb as a tensor with K.variable(grid). But I can't define a new tensor inside the loss function, something like currentY, which has shape (grid.shape[0], 1) and is filled with y[ii,0] (i.e. the current t), to then concatenate grid and currentY and build currentInput.
Any ideas?
Thanks
You can include your custom loss function in the graph using the functional API of Keras. The model in this case can be used as a function, something like this:
for l in NNa.layers:
    l.trainable = False

x = Input(size)
y = NNb(x)
z = NNa(y)
The predict method will not work here, since the loss function has to be part of the graph, and predict returns a np.array.
First, make NNa untrainable. Notice that you should do this recursively if your model has inner models.
def makeUntrainable(layer):
    layer.trainable = False
    if hasattr(layer, 'layers'):
        for l in layer.layers:
            makeUntrainable(l)

makeUntrainable(NNa)
Then you have two options:
Attach NNa to the end of your model (notice that both y_true and y_pred will be changed)
Then change your targets (predict with NNa) for correct results since your model is now expecting the output of NNa, not NNb.
Create a custom loss function that uses NNa inside it, without changing your targets
Option 1 - Attaching models
inputs = NNb.inputs
outputs = NNa(NNb.outputs) # make sure NNb outputs 4 tensors to match NNa's inputs
fullModel = Model(inputs,outputs)
#changing the targets:
newY_train = NNa.predict(oldY_train)
Option 2 - Creating a custom loss
Warning: please test whether NNa's weights are really frozen while training this configuration
from keras.losses import binary_crossentropy

def customLoss(true, pred):
    true = NNa(true)
    pred = NNa(pred)

    # use one of the usual losses or create your own
    return binary_crossentropy(true, pred)

NNb.compile(optimizer=anything, loss=customLoss)
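If you still want the fixed grid from your pseudo-code inside the loss, here is a heavily hedged sketch in the spirit of option 2, assuming one sample per batch as in the question and using a placeholder reduction at the end:

import numpy as np
from keras import backend as K

grid = np.mgrid[0:10:1, 0:10:1, 0:10:1].reshape(3, -1).T.astype('float32')

def customLossNNb(NNa, grid_tensor):
    def diff(y_true, y_pred):
        t = y_true[0, 0]                               # current t (batch size 1 assumed)
        t_col = K.zeros_like(grid_tensor[:, :1]) + t   # column filled with t, shape (n_grid, 1)
        current_input = K.concatenate([grid_tensor, t_col], axis=1)  # (n_grid, 4) for NNa
        toto = NNa(current_input)                      # symbolic call, stays inside the graph
        # ... combine toto with y_true / y_pred as needed; placeholder reduction below ...
        return K.mean(K.square(toto))
    return diff

NNb.compile(optimizer='adam', loss=customLossNNb(NNa, K.constant(grid)))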

Regressor Neural Network built with Keras only ever predicts one value

I'm trying to build a NN with Keras and Tensorflow to predict the final chart position of a song, given a set of 5 features.
After playing around with it for a few days I realised that although my MAE was getting lower, this was because the model had just learned to predict the mean value of my training set for all input, and this was the optimal solution. (This is illustrated in the scatter plot below)
This is a random sample of 50 data points from my testing set vs what the network thinks they should be
At first I realised this was probably because my network was too complicated. I had one input layer with shape (5,) and a single node in the output layer, but then 3 hidden layers with over 32 nodes each.
I then stripped back the excess layers and moved to just a single hidden layer with a couple nodes, as shown here:
self.model = keras.Sequential([
    keras.layers.Dense(4,
                       activation='relu',
                       input_dim=num_features,
                       kernel_initializer='random_uniform',
                       bias_initializer='random_uniform'),
    keras.layers.Dense(1)
])
Training this with a gradient descent optimiser still results in exactly the same prediction being made the whole time.
Then it occurred to me that perhaps the actual problem I'm trying to solve isn't hard enough for the network, that maybe it's linearly separable. Since this would respond better to not having a hidden layer at all, essentially just doing regular linear regression, I tried that. I changed my model to:
inp = keras.Input(shape=(num_features,))
out = keras.layers.Dense(1, activation='relu')(inp)
self.model = keras.Model(inp,out)
This also changed nothing. My MAE and the predicted values stay exactly the same.
I've tried so many different things, different permutations of optimisation functions, learning rates, network configurations, and nothing can help. I'm pretty sure the data is good, but I've included a sample of it just in case.
chartposition,tagcount,dow,artistscore,timeinchart,finalpos
121,3925,5,35128,7,227
131,4453,3,85545,25,130
69,2583,4,17594,24,523
145,1165,3,292874,151,187
96,1679,5,102593,111,540
134,3494,5,1252058,37,370
6,34895,7,6824048,22,5
A sample of my dataset, finalpos is the value I'm trying to predict. Dataset contains ~40,000 records, split 80/20 - training/testing
def __init__(self, validation_split, num_features, should_log):
    self.should_log = should_log
    self.validation_split = validation_split

    inp = keras.Input(shape=(num_features,))
    out = keras.layers.Dense(1, activation='relu')(inp)
    self.model = keras.Model(inp, out)

    optimizer = tf.train.GradientDescentOptimizer(0.01)
    self.model.compile(loss='mae',
                       optimizer=optimizer,
                       metrics=['mae'])

def train(self, data, labels, plot=False):
    early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=20)
    history = self.model.fit(data,
                             labels,
                             epochs=self.epochs,
                             validation_split=self.validation_split,
                             verbose=0,
                             callbacks=[PrintDot(), early_stop])
    if plot: self.plot_history(history)
All code relevant to constructing and training the network.
def normalise_dataset(df, mini, maxi):
    return (df - mini) / (maxi - mini)
Normalisation of the input data. Both my testing and training data are normalised to the max and min of the testing set
Graph of my loss vs validation curves for the one-hidden-layer network with an Adam optimiser, learning rate 0.01.
Same graph but with linear regression and a gradient descent optimiser.
So I am pretty sure that your normalization is the issue: you are not normalizing by feature (as is the de facto industry standard), but across all data.
That means that features with very different orders of magnitude/ranges (in your case, compare timeinchart with artistscore) end up distorted by a single global min/max.
Instead, you might want to normalize using something like scikit-learn's StandardScaler. Not only does this normalize per column (so you can pass all features at once), but it also scales each column to unit variance (which makes an assumption about your data, but can potentially help, too).
To transform your data, use something along these lines
from sklearn.preprocessing import StandardScaler
import numpy as np
raw_data = np.array([[1,40], [2, 80]])
scaler = StandardScaler()
processed_data = scaler.fit_transform(raw_data)
# fit() calculates mean etc, transform() puts it to the new range.
print(processed_data) # returns [[-1, -1], [1,1]]
Note that you have two possibilities to normalize/standardize your data:
either fit the scaler on the full dataset and then split into training/test data afterwards,
or fit it only on the training data and then use the same scaler to transform your test data.
Never fit_transform your test set separately from the training data!
Since you then have potentially different mean/min/max values, you can end up with totally wrong predictions! In a sense, the StandardScaler is your definition of your "data source distribution", which is inherently still the same for your test set, even though the test set might be a subset not exactly following the same properties (due to small sample size etc.).
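A minimal sketch of the second (recommended) option, assuming hypothetical arrays X and y for the features and targets:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # fit the mean/variance on the training data only
X_test = scaler.transform(X_test)        # reuse the same statistics for the test data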
Additionally, you might want to use a more advanced optimizer, like Adam, or specify some momentum (0.9 is a good choice in practice, as a rule of thumb) for your SGD.
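For instance, a hedged sketch assuming a recent tf.keras version (drop it into the compile call from your question):

import tensorflow as tf

# either plain SGD with momentum ...
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
# ... or an adaptive optimizer such as Adam (pick one)
# optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

self.model.compile(loss='mae', optimizer=optimizer, metrics=['mae'])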
Turns out the error was a really stupid and easy-to-miss bug.
When I imported my dataset, I shuffled it; however, I accidentally applied the shuffling only to the labels, not to the dataset as a whole.
As a result, each label was assigned to a completely random feature set, so of course the model didn't know what to do with that.
Thanks to @dennlinger for suggesting that I look in the place where I eventually found this bug.

Custom Loss Function in TensorFlow for weighting training data

I want to weight the training data based on a column in the training data set, thereby giving more importance to certain training items than others. The weighting column should not be included as a feature for the input layer.
The TensorFlow documentation holds an example of how to use the label of the item to assign a custom loss and thereby assign weights:
# Ensures that the loss for examples whose ground truth class is `3` is 5x
# higher than the loss for all other examples.
weight = tf.multiply(4, tf.cast(tf.equal(labels, 3), tf.float32)) + 1
onehot_labels = tf.one_hot(labels, num_classes=5)
tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels, weight=weight)
I am using this in a custom DNN with three hidden layers. In theory I simply need to replace labels in the example above with a tensor containing the weight column.
I am aware that there are several threads that already discuss similar problems, e.g. defined loss function in tensorflow?
For some reason I am running into a lot of problems trying to bring my weight column in. It's probably two easy lines of code, or maybe there is an easier way to achieve the same result.
I believe I found the answer:
weight_tf = tf.range(features.get_shape()[0]-1, features.get_shape()[0])
loss = tf.losses.softmax_cross_entropy(target, logits, weights=weight_tf)
The weight is the last column of the features tensor.
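As a hedged alternative sketch (the variable names are assumptions): split the weight column off the features before it reaches the network and pass it to the loss via its weights argument, which accepts one weight per example:

import tensorflow as tf

# assumed layout: the last column of `features` is the per-example weight
weight_column = features[:, -1]   # shape (batch_size,)
net_inputs = features[:, :-1]     # everything except the weight column feeds the network

# ... build `logits` from net_inputs through the three hidden layers ...

loss = tf.losses.softmax_cross_entropy(onehot_labels=target,
                                       logits=logits,
                                       weights=weight_column)

In tf.keras, the same effect can usually be achieved by passing the weight column as sample_weight to model.fit.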
