Intuition behind categorical cross entropy - python

I'm trying to implement the categorical cross entropy loss function myself, to better understand the intuition behind it.
So far my implementation looks like this:
import numpy as np

# Observations
y_true = np.array([[0, 1, 0], [0, 0, 1]])
y_pred = np.array([[0.05, 0.95, 0.05], [0.1, 0.8, 0.1]])
# Loss calculations
def categorical_loss():
    loss1 = -(0.0 * np.log(0.05) + 1.0 * np.log(0.95) + 0.0 * np.log(0.05))
    loss2 = -(0.0 * np.log(0.1) + 0.0 * np.log(0.8) + 1.0 * np.log(0.1))
    loss = (loss1 + loss2) / 2  # divided by 2 because there are 2 observations
    return loss
# Show loss
print(categorical_loss()) # 1.176939193690798
However, I do not understand how the function should behave to return the correct value when:

- at least one number in y_pred is 0 or 1, because then the log function returns -inf or 0 - and what should the code implementation look like in this case?
- at least one number in y_true is 0, because multiplication by 0 always returns 0, so the corresponding log term (e.g. np.log(0.05)) is discarded - and what should the code implementation look like in this case as well?

Regarding y_pred being 0 or 1, digging into the Keras backend source code for both binary_crossentropy and categorical_crossentropy, we get:
def binary_crossentropy(target, output, from_logits=False):
    if not from_logits:
        output = np.clip(output, 1e-7, 1 - 1e-7)
        output = np.log(output / (1 - output))
    return (target * -np.log(sigmoid(output)) +
            (1 - target) * -np.log(1 - sigmoid(output)))
def categorical_crossentropy(target, output, from_logits=False):
    if from_logits:
        output = softmax(output)
    else:
        output /= output.sum(axis=-1, keepdims=True)
        output = np.clip(output, 1e-7, 1 - 1e-7)
    return np.sum(target * -np.log(output), axis=-1, keepdims=False)
from where you can clearly see that, in both functions, there is a clipping operation of the output (i.e. predictions), in order to avoid infinities from the logarithms:
output = np.clip(output, 1e-7, 1 - 1e-7)
So, here y_pred will never be exactly 0 or 1 in the underlying calculations. The handling is similar in other frameworks.
Regarding y_true being 0, there is not any issue involved - the respective terms are set to 0, as they should be according to the mathematical definition.
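Putting the two points together, here is a minimal vectorized sketch of the loss (my own restatement, assuming plain NumPy) that handles both edge cases: predictions are clipped before the log, and the zero entries of y_true simply zero out their terms:

import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-7):
    # Clip predictions so np.log never sees exactly 0 or 1
    y_pred = np.clip(y_pred, eps, 1 - eps)
    # Per-observation loss: only the terms where y_true == 1 survive
    per_obs = -np.sum(y_true * np.log(y_pred), axis=-1)
    # Average over observations
    return np.mean(per_obs)

y_true = np.array([[0, 1, 0], [0, 0, 1]])
y_pred = np.array([[0.05, 0.95, 0.05], [0.1, 0.8, 0.1]])
print(categorical_cross_entropy(y_true, y_pred))  # ~1.1769, same as above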


Asymmetric Function for Loss

I'm using LightGBM and I need to implement a loss function that, during training, penalizes predictions that are lower than the target. In other words, I assume that underestimates are much worse than overestimates. I've found this suggestion, which does exactly the opposite:
def custom_asymmetric_train(y_true, y_pred):
    residual = (y_true - y_pred).astype("float")
    grad = np.where(residual < 0, -2 * 10.0 * residual, -2 * residual)
    hess = np.where(residual < 0, 2 * 10.0, 2.0)
    return grad, hess
def custom_asymmetric_valid(y_true, y_pred):
    residual = (y_true - y_pred).astype("float")
    loss = np.where(residual < 0, (residual**2) * 10.0, residual**2)
    return "custom_asymmetric_eval", np.mean(loss), False
(Source: https://towardsdatascience.com/custom-loss-functions-for-gradient-boosting-f79c1b40466d)
How can I modify it for my purpose?
I believe this function is where you want to make a change.
def custom_asymmetric_valid(y_true, y_pred):
    residual = (y_true - y_pred).astype("float")
    loss = np.where(residual < 0, (residual**2) * 10.0, residual**2)
    return "custom_asymmetric_eval", np.mean(loss), False
The line where the loss is worked out has a comparison:

loss = np.where(residual < 0, (residual**2)*10.0, residual**2)

When residual is less than 0, the loss is residual^2 * 10, whereas when it is above 0, the loss is just residual^2. So if we change this less-than to a greater-than, it will flip the skew:

loss = np.where(residual > 0, (residual**2)*10.0, residual**2)
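The same flip applies to the training objective, since the gradient and hessian encode the identical asymmetry; a sketch, assuming the same 10x penalty factor (residual > 0 means y_pred < y_true, i.e. an underestimate):

def custom_asymmetric_train(y_true, y_pred):
    residual = (y_true - y_pred).astype("float")
    # penalize underestimates (residual > 0) ten times harder
    grad = np.where(residual > 0, -2 * 10.0 * residual, -2 * residual)
    hess = np.where(residual > 0, 2 * 10.0, 2.0)
    return grad, hess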
I think this would be helpful. It originated from Custom loss function with Keras to penalise more negative prediction:
from keras import backend as K

def customLoss(true, pred):
    diff = pred - true
    greater = K.greater(diff, 0)
    greater = K.cast(greater, K.floatx())  # 0 for lower, 1 for greater
    greater = greater + 1                  # 1 for lower, 2 for greater
    # use some kind of loss here, such as mse or mae, or pick one from keras
    # using mse:
    return K.mean(greater * K.square(diff))

model.compile(optimizer='adam', loss=customLoss)
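Note that this template, as written, up-weights overestimates (diff > 0); since the goal here is the opposite, you would flip the comparison. A sketch of that variant (my adaptation, not from the linked answer):

def customLoss(true, pred):
    diff = pred - true
    lower = K.cast(K.less(diff, 0), K.floatx())  # 1 where pred < true
    weights = lower + 1                          # 2 for underestimates, 1 otherwise
    return K.mean(weights * K.square(diff))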

Keras custom loss function to ignore false negatives of a specific class during semantic segmentation?

See EDIT below; the initial post has almost no meaning now, but the question still remains.

I am developing a neural network to semantically segment imagery. I have worked through various loss functions (categorical cross entropy (CCE), weighted CCE, focal loss, tversky loss, jaccard loss, focal tversky loss, etc.) which attempt to handle highly skewed class representation, though none are producing the desired effect. My advisor mentioned attempting to create a custom loss function which ignores false negatives for a specific class (but still penalizes false positives).

I have a 6-class problem and my network is set up to work with one-hot encoded truth data. As a result my loss function will accept two tensors, y_true and y_pred, of shape (batch, row, col, class) (which is currently (8, 128, 128, 6)). To be able to utilize the losses I have already explored, I would like to alter y_pred to set the predicted value for the specific class (the 0th class) to always be correct. That is, where y_true == class 0, set y_pred == class 0; otherwise do nothing.

I have spent way too much time attempting to create this loss function, as a result of tensorflow tensors being immutable. My first attempt (which I was led to through my experience with numpy):
def weighted_categorical_crossentropy_ignore(weights):
    weights = K.variable(weights)

    def loss(y_true, y_pred):
        y_pred[tf.where(y_true == [1, 0, 0, 0, 0, 0])] = [1, 0, 0, 0, 0, 0]
        # Scale predictions so that the class probs of each sample sum to 1
        y_pred /= K.sum(y_pred, axis=-1, keepdims=True)
        # Clip to prevent NaN's and Inf's
        y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
        loss = y_true * K.log(y_pred) * weights
        loss = -K.sum(loss, -1)
        return loss
    return loss
Though obviously I cannot alter y_pred, so this attempt failed. I ended up creating a few monstrosities attempting to "build" a tensor by iterating over [batch, row, col] and performing comparisons. While these attempts did not technically fail, they never actually began training; I assume computing the loss was taking on the order of minutes.

After many more failed efforts I started attempting to perform the requisite computation in pure numpy in a SSCCE, keeping in mind that I was essentially limited to instantiating "simple" tensors (i.e. ones, zeros) and only performing "simple" operations like element-wise multiplication, addition, and reshaping. Thus I arrived at this SSCCE:
import numpy as np
from tensorflow.keras.utils import to_categorical
# Generate the "images" at random
true_flat = np.argmax(np.random.rand(1, 2, 2, 4), axis=3).astype('int')
true = to_categorical(true_flat, num_classes=4).astype('int')
pred_flat = np.argmax(np.random.rand(1, 2, 2, 4), axis=3).astype('int')
pred = to_categorical(pred_flat, num_classes=4).astype('int')
print('True:\n', true_flat)
print('Pred:\n', pred_flat)
# Create a mask representing an all "class 0" image
class_zero_label = np.array([1, 0, 0, 0])
czl_all = class_zero_label * np.ones(true.shape).astype('int')
# Mask both the truth and pred to locate class 0 pixels
czl_true_locs = czl_all * true
czl_pred_locs = czl_all * pred
# Subtract to create "addition" matrix
a = (czl_true_locs - czl_pred_locs) * czl_true_locs
print('a:\n', a)
# Do this
m = ((a + 1) - (a * 2))
print('m - ', m.shape, ':\n', m)
# Pull the front entry from 'm' and "expand" its value
#x = (m[:, :, :, 0].flatten() * np.ones(pred.shape).astype('int')).T.reshape(pred.shape)
m_front = m[:, :, :, 0]
print('m_front - ', m_front.shape, ':\n', m_front)
#m_flat = m_front.flatten()
m_flat = m_front.reshape(m_front.shape[0], m_front.shape[1]*m_front.shape[2])
print('m_flat - ', m_flat.shape, ':\n', m_flat)
m_expand = m_flat * np.ones(pred.shape).astype('int')
print('m_expand - ', m_expand.shape, ':\n', m_expand)
m_trans = m_expand.T
m_fixT = m_trans.reshape(pred.shape)
print('m_fixT - ', m_fixT.shape, ':\n', m_fixT)
m = m_fixT
print('m:\n', m.shape)
# Perform the math as described
pred = (pred * m) + a
print('Pred:\n', np.argmax(pred, axis=3))
This SSCCE is, well, terrible and complex. Essentially my goal here was to create two matrices, the "addition" and "multiplication" matrices. The multiplication matrix is meant to "zero out" every pixel in the predicted values where the truth value was equal to class 0. That is, no matter the pixel value (i.e. a one-hot encoded vector), zero it out to be equal to [0, 0, 0, 0, 0, 0]. The addition matrix is then meant to add the vector [1, 0, 0, 0, 0, 0] to each of the zeroed-out locations. In the end this would achieve the goal of setting the predicted value of every truly class 0 pixel to correct.

The issue is that this SSCCE does not translate fully to tensorflow operations. The first issue is the generation of the multiplication matrix, which is not defined correctly for when batch_size > 1. I thought, no matter, just to see if it works I will break down, tf.unstack the y_true and y_pred tensors, and iterate over them. That led me to the current instantiation of my loss function:
def weighted_categorical_crossentropy_ignore(weights):
    weights = K.variable(weights)

    def loss(y_true, y_pred):
        y_true_un = tf.unstack(y_true)
        y_pred_un = tf.unstack(y_pred)
        y_pred_new = []
        for i in range(0, y_true.shape[0]):
            yt = y_true_un[i]
            yp = y_pred_un[i]
            # Pred:
            # [[[0 3]   *  [[[1 0]   +  [[[0 1]   =  [[[0 0]
            #   [3 1]]]      [1 1]]]      [0 0]]]      [3 1]]]
            # If we multiply pred by a tensor which zeros out only incorrect class 0 labelling,
            # then add class zero to those zero'd out locations,
            # we can negate the effect of mis-classified class 0 pixels but still punish
            # incorrectly predicted class 0 labels for other classes.
            # Create a mask representing an all "class 0" image
            class_zero_label = K.variable([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
            czl_all = class_zero_label * K.ones(yt.shape)
            # Mask both true and pred to locate class 0 pixels
            czl_true = czl_all * yt
            czl_pred = czl_all * yp
            # Subtract to create "addition matrix"
            a = czl_true - czl_pred
            # Do this.
            m = ((a + 1) - (a * 2.))
            # And this.
            x = K.flatten(m[:, :, 0])
            x = x * K.ones(yp.shape)
            x = K.transpose(x)
            x = K.reshape(x, yp.shape)
            # Voila.
            ypnew = (yp * x) + a
            y_pred_new.append(ypnew)
        y_pred_new = tf.concat(y_pred_new, 0)
        # Continue calculating weighted categorical crossentropy
        # -------------------------------------------------------
        # Scale predictions so that the class probs of each sample sum to 1
        y_pred_new /= K.sum(y_pred_new, axis=-1, keepdims=True)
        # Clip to prevent NaN's and Inf's
        y_pred_new = K.clip(y_pred_new, K.epsilon(), 1 - K.epsilon())
        loss = y_true * K.log(y_pred_new) * weights
        loss = -K.sum(loss, -1)
        return loss
    return loss
The current issue with this loss function lies in the apparent difference in the behavior between numpy and tensorflow when performing the operation
x = K.flatten(m[:, :, 0])
x = x * K.ones(yp.shape)
Which is meant to represent the behavior
m_flat = m_front.flatten()
m_expand = m_flat * np.ones(pred.shape).astype('int')
from the SSCCE.
So at this point I feel like I have delved so far into caveman coding that I can't get out of it. I have to imagine there is some simple way, akin to my initial attempt, to perform the described behavior.
So, I guess my direct question is: how do I implement
y_pred[tf.where(y_true == [1, 0, 0, 0, 0, 0])] = [1, 0, 0, 0, 0, 0]
in a custom tensorflow loss function?
EDIT: After fumbling around quite a bit more, I have finally determined how to call .numpy() on the y_true and y_pred tensors to utilize numpy operations (apparently setting tf.compat.v1.enable_eager_execution at the start of the program "doesn't work"; I had to pass run_eagerly=True to Model().compile(...)).
This has allowed me to implement essentially the first attempt outlined:
def weighted_categorical_crossentropy_ignore(weights):
    weights = K.variable(weights)

    def loss(y_true, y_pred):
        yp = y_pred.numpy()
        yt = y_true.numpy()
        yp[np.nonzero(np.all(yt == [1, 0, 0, 0, 0, 0], axis=3))] = [1, 0, 0, 0, 0, 0]
        # Continue calculating weighted categorical crossentropy
        # -------------------------------------------------------
        # Scale predictions so that the class probs of each sample sum to 1
        yp /= K.sum(yp, axis=-1, keepdims=True)
        # Clip to prevent NaN's and Inf's
        yp = K.clip(yp, K.epsilon(), 1 - K.epsilon())
        loss = y_true * K.log(yp) * weights
        loss = -K.sum(loss, -1)
        return loss
    return loss
Though it seems that by calling y_pred.numpy() (or using it thereafter) I have apparently "destroyed" the path/flow through the network, based on the error when attempting to .fit:
ValueError: No gradients provided for any variable: ['conv3d/kernel:0', <....>
I assume I somehow need to "remarshall" the tensor back to GPU memory? I have tried
yp = tf.convert_to_tensor(yp)
to no avail; the same error. So I guess the same question still stands, but from a different motivation...
EDIT2: Well, it seems from this SO Answer that I can't actually use numpy() to marshall y_true and y_pred into vanilla numpy operations. This necessarily "destroys" the network path, and thus gradients cannot be calculated.

As a result I realized that with run_eagerly=True I can wrap my y_true/y_pred in tf.Variable and perform assignment. So in pure tensorflow I attempted to recreate the same code again:
def weighted_categorical_crossentropy_ignore(weights):
    weights = K.variable(weights)

    def loss(y_true, y_pred):
        # yp = y_pred.numpy().copy()
        # yt = y_true.numpy().copy()
        # yp[np.nonzero(np.all(yt == [1, 0, 0, 0, 0, 0], axis=3))] = [1, 0, 0, 0, 0, 0]
        yp = K.variable(y_pred)
        yt = K.variable(y_true)
        # np.all
        x = K.all(yt == [1, 0, 0, 0, 0, 0], axis=3)
        # np.nonzero
        ne = tf.not_equal(x, tf.constant(False))
        y = tf.where(ne)
        # Perform the desired operation
        yp[y] = [1, 0, 0, 0, 0, 0]
        # Continue calculating weighted categorical crossentropy
        # -------------------------------------------------------
        # Scale predictions so that the class probs of each sample sum to 1
        # yp /= K.sum(...) cannot be used on a tf.Variable; must use yp = yp / ...
        yp = yp / K.sum(yp, axis=-1, keepdims=True)
        # Clip to prevent NaN's and Inf's
        yp = K.clip(yp, K.epsilon(), 1 - K.epsilon())
        loss = y_true * K.log(yp) * weights
        loss = -K.sum(loss, -1)
        return loss
    return loss
But alas, this apparently creates the same issue as when calling .numpy(); no gradients can be computed. So I am again seemingly back at square 1.
EDIT3: Using the solution proposed by gobrewers14 in the answer posted below, but modified based on my knowledge of the problem, I have produced this loss function:
def weighted_categorical_crossentropy_ignore(weights):
    weights = K.variable(weights)

    def loss(y_true, y_pred):
        print('y_true.shape: ', y_true.shape)
        print('y_pred.shape: ', y_pred.shape)
        # Generate modified y_pred where all truly class0 pixels are correct
        y_true_class0_indicies = tf.where(tf.math.equal(y_true, [1., 0., 0., 0., 0., 0.]))
        y_pred_updates = tf.repeat(
            [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0]],
            repeats=y_true_class0_indicies.shape[0],
            axis=0)
        yp = tf.tensor_scatter_nd_update(y_pred, y_true_class0_indicies, y_pred_updates)
        # Continue calculating weighted categorical crossentropy
        # -------------------------------------------------------
        # Scale predictions so that the class probs of each sample sum to 1
        yp /= K.sum(yp, axis=-1, keepdims=True)
        # Clip to prevent NaN's and Inf's
        yp = K.clip(yp, K.epsilon(), 1 - K.epsilon())
        loss = y_true * K.log(yp) * weights
        loss = -K.sum(loss, -1)
        return loss
    return loss
Given that the original answer assumed y_true to be of shape [8, 128, 128] (i.e. a "flat" class representation, versus a one-hot encoded representation [8, 128, 128, 6]), I first print the shapes of the y_true and y_pred input tensors for sanity:
y_true.shape: (8, 128, 128, 6)
y_pred.shape: (8, 128, 128, 6)
For further sanity, the output shape of the network, provided by the tail of model.summary is
conv2d_18 (Conv2D) (None, 128, 128, 6) 1542 dropout_5[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 128, 128, 6) 0 conv2d_18[0][0]
==================================================================================================
Total params: 535,551,494
Trainable params: 535,529,478
Non-trainable params: 22,016
__________________________________________________________________________________________________
I then follow "the pattern" in the proposed solution and replace the original tf.math.equal(y_true, 0) with tf.math.equal(y_true, [1., 0., 0., 0., 0., 0.]) to handle the one-hot encoded case. From my understanding of the proposed solution currently (after ~10min of inspecting it) I assumed this should work. Though when attempting to train a model the following exception is thrown
InvalidArgumentError: Inner dimensions of output shape must match inner dimensions of updates shape. Output: [8,128,128,6] updates: [684584,6] [Op:TensorScatterUpdate]
Thus it seems the production of the (as I have named them) y_pred_updates yields a "collapsed" tensor with "too many" elements. I understand the motivation for the use of tf.repeat, but its specific use seems to be incorrect. Based on what I understand tf.tensor_scatter_nd_update to do, I assume it should produce a tensor of shape (8, 128, 128, 6), and that the problem most likely comes down to the choice of repeats and axis in the call to tf.repeat.
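(An aside from inspecting the error, an assumption on my part rather than something confirmed in the thread: tf.math.equal broadcasts elementwise, so tf.where returns one 4-D index per matching scalar (hence the 684584), each addressing a single float rather than a pixel's 6-vector. Reducing over the channel axis first yields per-pixel 3-D indices that pair with [N, 6] updates:

# indices of pixels (batch, row, col) whose one-hot truth is class 0
pixel_is_class0 = tf.reduce_all(
    tf.math.equal(y_true, [1., 0., 0., 0., 0., 0.]), axis=-1)
y_true_class0_indicies = tf.where(pixel_is_class0)  # shape (N, 3)
y_pred_updates = tf.repeat(
    [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0]],
    repeats=tf.shape(y_true_class0_indicies)[0],
    axis=0)                                          # shape (N, 6)
yp = tf.tensor_scatter_nd_update(y_pred, y_true_class0_indicies, y_pred_updates)
)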
If I understand your question correctly, you are looking for something like this:
import tensorflow as tf
# batch of true labels
y_true = tf.constant([5, 0, 1, 3, 4, 0, 2, 0], dtype=tf.int64)
# batch of class probabilities
y_pred = tf.constant(
    [
        [0.34670502, 0.04551039, 0.14020428, 0.14341979, 0.21430719, 0.10985339],
        [0.25681055, 0.14013883, 0.19890164, 0.11124421, 0.14526634, 0.14763844],
        [0.09199252, 0.21889475, 0.1170236 , 0.1929019 , 0.20311192, 0.17607528],
        [0.3246354 , 0.23257554, 0.15549366, 0.17282239, 0.00000001, 0.11447308],
        [0.16502093, 0.13163856, 0.14371352, 0.19880624, 0.23360236, 0.12721846],
        [0.27362782, 0.21408406, 0.10917682, 0.13135742, 0.10814326, 0.16361059],
        [0.20697299, 0.23721898, 0.06455399, 0.11071447, 0.18990229, 0.19063729],
        [0.10320242, 0.22173141, 0.2547973 , 0.2314068 , 0.07063974, 0.11822232]
    ], dtype=tf.float32)
# find the indices in the batch where the true label is the class 0
indices = tf.where(tf.math.equal(y_true, 0))
# create a tensor with the number of updates you want to replace in `y_pred`
updates = tf.repeat(
    [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0]],
    repeats=indices.shape[0],
    axis=0)
# insert the updates into `y_pred` at the specified indices
modified_y_pred = tf.tensor_scatter_nd_update(y_pred, indices, updates)
print(modified_y_pred)
# tf.Tensor(
# [[0.34670502, 0.04551039, 0.14020428, 0.14341979, 0.21430719, 0.10985339],
# [1.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000],
# [0.09199252, 0.21889475, 0.1170236 , 0.1929019 , 0.20311192, 0.17607528],
# [0.3246354 , 0.23257554, 0.15549366, 0.17282239, 0.00000001, 0.11447308],
# [0.16502093, 0.13163856, 0.14371352, 0.19880624, 0.23360236, 0.12721846],
# [1.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000],
# [0.20697299, 0.23721898, 0.06455399, 0.11071447, 0.18990229, 0.19063729],
# [1.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000]],
# shape=(8, 6), dtype=tf.float32)
This final tensor, modified_y_pred, can be used in differentiation.
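A quick way to convince yourself that gradients still flow through the untouched rows (my sketch, not part of the original answer):

y_pred_var = tf.Variable(y_pred)
with tf.GradientTape() as tape:
    modified = tf.tensor_scatter_nd_update(y_pred_var, indices, updates)
    loss = tf.reduce_sum(modified)
# gradient is 1 for rows left alone, 0 for the scattered (overwritten) rows
print(tape.gradient(loss, y_pred_var))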
EDIT:
It might be easier to do this with masks.
Example:
# these arent normalized to 1 but you get the point
probs = tf.random.normal([2, 4, 4, 6])
# raw labels per pixel
labels = tf.random.uniform(
    shape=[2, 4, 4],
    minval=0,
    maxval=6,
    dtype=tf.int64)
# your labels are already one-hot encoded
labels = tf.one_hot(labels, 6)
# boolean mask where classes are `0`
# converting back to int labels with argmax for purposes of
# using `tf.math.equal`. Matching on `[1, 0, 0, 0, 0, 0]` is
# potentially buggy; matching on an integer is a lot more
# explicit.
mask = tf.math.equal(tf.math.argmax(labels, -1), 0)[..., None]
# flip the mask to zero out the pixels across channels where
# labels are zero
probs *= tf.cast(tf.math.logical_not(mask), tf.float32)
# multiply the mask by the one-hot labels, and add back
# to the already masked probabilities.
probs += labels * tf.cast(mask, tf.float32)
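For completeness, the mask trick can be dropped straight into the weighted CCE from the question; a sketch under the same assumptions (6 classes, one-hot y_true), not taken verbatim from the answer:

def weighted_categorical_crossentropy_ignore(weights):
    weights = tf.constant(weights, dtype=tf.float32)

    def loss(y_true, y_pred):
        # mask of pixels whose true label is class 0, shape (batch, row, col, 1)
        mask = tf.cast(tf.math.equal(tf.math.argmax(y_true, -1), 0)[..., None], tf.float32)
        # force predictions at those pixels to the (correct) one-hot class 0 vector
        yp = y_pred * (1.0 - mask) + y_true * mask
        # usual weighted categorical crossentropy
        yp /= tf.reduce_sum(yp, axis=-1, keepdims=True)
        yp = tf.clip_by_value(yp, 1e-7, 1.0 - 1e-7)
        return -tf.reduce_sum(y_true * tf.math.log(yp) * weights, axis=-1)
    return loss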

Why are gradients incorrect for categorical crossentropy?

After answering this question, there are some interesting but confusing findings I came across in tensorflow 2.0. The gradients of the logits look incorrect to me. Let's say we have logits and labels here:
logits = tf.Variable([[0.8, 0.1, 0.1]], dtype=tf.float32)
labels = tf.constant([[1, 0, 0]], dtype=tf.float32)

with tf.GradientTape(persistent=True) as tape:
    loss = tf.reduce_sum(tf.keras.losses.categorical_crossentropy(
        labels, logits, from_logits=False))
grads = tape.gradient(loss, logits)
print(grads)
Since logits is already a probability distribution, I set from_logits=False in the loss function.

I thought tensorflow would use loss = -sum_i p_i * log(q_i) to calculate the loss, and if we differentiate with respect to q_i, the derivative would be -p_i/q_i. So the expected grads should be [-1.25, 0, 0]. However, tensorflow returns [-0.25, 1, 1].

After reading the source code of tf.categorical_crossentropy, I found that even though we set from_logits=False, it still normalizes the probabilities, which changes the final gradient expression. Specifically, the gradient becomes -p_i/q_i + p_i/sum_j(q_j). If p_i = 1 and sum_j(q_j) = 1, the final gradient gains one. That's why the gradient is -0.25; however, I haven't figured out why the last two gradients would be 1.
To prove that all gradients are increased by 1/sum_j(q_j), I made up logits which are not a probability distribution, and still set from_logits=False:
logits = tf.Variable([[0.5, 0.1, 0.1]], dtype=tf.float32)
labels = tf.constant([[1, 0, 0]], dtype=tf.float32)

with tf.GradientTape(persistent=True) as tape:
    loss = tf.reduce_sum(tf.keras.losses.categorical_crossentropy(
        labels, logits, from_logits=False))
grads = tape.gradient(loss, logits)
print(grads)
The grads returned by tensorflow are [-0.57142866, 1.4285713, 1.4285713], whereas I thought they should be [-2, 0, 0]. This shows that all gradients are increased by 1/(0.5+0.1+0.1). For p_i == 1, the gradient being increased by 1/(0.5+0.1+0.1) makes sense to me. But I don't understand why, for p_i == 0, the gradient is still increased by 1/(0.5+0.1+0.1).
Update: Thanks to @OverLordGoldDragon's kind reminder. After normalizing the probs, the correct gradient formula should be -p_i/q_i + 1/sum_j(q_j), so the behaviors in the question are expected.
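A quick numeric check of the corrected formula against the second example (my own sanity check in plain numpy):

import numpy as np

q = np.array([0.5, 0.1, 0.1])  # logits, treated as unnormalized probs
p = np.array([1.0, 0.0, 0.0])  # one-hot labels
grad = -p / q + 1.0 / q.sum()  # -p_i/q_i + 1/sum_j(q_j), elementwise
print(grad)  # [-0.57142857  1.42857143  1.42857143] -- matches tf.GradientTape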
Categorical crossentropy is tricky, particularly w.r.t. one-hot encodings; the problem arises out of presuming that some predictions are "tossed out" in computing loss or gradient, when looking at how loss is computed:
loss = f(labels * preds) = f([1, 0, 0] * preds)
Why are the gradients incorrect? The above may suggest that preds[1:] don't matter, but note that this isn't actually preds - it's pred_norm, which involves more than a single element of preds. To get a better idea of what's happening, the Numpy backend is helpful; assuming from_logits=False:
losses = []
for label, pred in zip(labels, preds):
    pred_norm = pred / pred.sum(axis=-1, keepdims=True)
    losses.append(np.sum(label * -np.log(pred_norm), axis=-1, keepdims=False))
A more complete explanation of above - here. Below is my derivation of the gradients formula, with examples comparing its Numpy implementation with tf.GradientTape results. To skip the meaty details, scroll to "Main idea".
Formula + Derivation: proof of correctness at the bottom.
"""
grad = -y * sum(p_zeros) / (p_one * sum(pred)) + p_mask / sum(pred)
p_mask = abs(y - 1)
p_zeros = p_mask * pred
y = label: 1D array of length N, one-hot
p = prediction: 1D array of length N, float32 from 0 to 1
p_norm = normalized predictions
p_mask = prediction masks (see below)
"""
What's happening? Begin with a simple example to understand what tf.GradientTape is doing:
w = tf.Variable([0.5, 0.1, 0.1])
with tf.GradientTape(persistent=True) as tape:
    f1 = w[0] + w[1]                   # f = function
    f2 = w[0] / w[1]
    f3 = w[0] / (w[0] + w[1] + w[2])

print(tape.gradient(f1, w))  # [1. 1. 0.]
print(tape.gradient(f2, w))  # [10. -50. 0.]
print(tape.gradient(f3, w))  # [0.40816 -1.02040 -1.02040]
Let w = [w1, w2, w3]. Then:
"""
grad  = [df/dw1, df/dw2, df/dw3]
grad1 = [d(w1 + w2)/dw1, d(w1 + w2)/dw2, d(w1 + w2)/dw3] = [1, 1, 0]
grad2 = [d(w1 / w2)/dw1, d(w1 / w2)/dw2, d(w1 / w2)/dw3] = [1/w2, -w1/w2^2, 0] = [10, -50, 0]
grad3 = [(w2 + w3)/K, -w1/K, -w1/K] = [0.40816, -1.02040, -1.02040] -- K = (w1 + w2 + w3)^2
"""
In other words, tf.GradientTape treats each element of the input tensor it's differentiating against as a variable. With this in mind, it suffices to implement categorical crossentropy via elementary tf functions, then derive its derivative by hand and see if they agree. That's what I've done in the code at the bottom, with the loss better explained in the answer linked above.
Formula explanation:
f3 above is the most insightful, as it's actually pred_norm; all we need now is to add a natural log and handle two separate cases: grads for y == 1, and for y == 0; with a handy Wolfram Alpha, derivatives can be computed in a flash. Adding more variables to the denominator, we can see the following pattern:
d(loss)/d(p_one)     = -sum(p_zeros) / (p_one * sum(pred))
d(loss)/d(p_non_one) = 1 / sum(pred)
where p_one is the pred element where label == 1, p_non_one is any other pred element, and p_zeros is all pred elements except p_one. The code at the bottom is simply an implementation of exactly this, using compact syntax.
Explanation example:
Suppose label = [1, 0, 0]; pred = [.5, .1, .1]. Below is numpy_gradient, step-by-step:
p_mask == [0, 1, 1] # effectively `label` "inverted", to exclude `p_one`
p_one == .5 # pred where `label` == 1
## grad_zeros
p_mask / np.sum(pred) == [0, 1, 1] / (.5 + .1 + .1) = [0, 1/.7, 1/.7]
## grad_one
p_one * np.sum(pred) == .5 * (.5 + .1 + .1) = .5 * .7 = .35
p_mask * pred == [0, 1, 1] * [.5, .1, .1] = [0, .1, .1]
np.sum(p_mask * pred) == .2
label * np.sum(p_mask * pred) == .2 * [1, 0, 0] = [.2, 0, 0]
label * np.sum(p_mask * pred) / (p_one * np.sum(pred))
== [.2, 0, 0] / .35 = 0.57142854
Per above, we can see that the gradient is effectively divided into two computations: grad_one, and grad_zeros.
Main idea: understandably, that's a lot of detail, so here's the main idea: every element of label and pred affects grad, and loss is computed using pred_norm, not pred, and the normalization step is backpropagated. We can run a little visual to confirm this:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

labels = tf.constant([[1, 0, 0]], dtype=tf.float32)
grads = []
for i in np.linspace(0, 1, 100):
    logits = tf.Variable([[0.5, 0.1, i]], dtype=tf.float32)
    with tf.GradientTape(persistent=True) as tape:
        loss = tf.keras.losses.categorical_crossentropy(
            labels, logits, from_logits=False)
    grads.append(tape.gradient(loss, logits))
grads = np.vstack(grads)
plt.plot(grads)
Even though only logits[2] is varied, grads[1] varies exactly the same way. The explanation is clear from grad_zeros above, but more intuitively, categorical crossentropy doesn't care "how wrong" the zero-label predictions are individually, only collectively - because it only semi-directly computes loss from pred[0] (i.e. pred[0] / sum(pred)), which is normalized by all other pred elements. So whether pred[1] == .9 and pred[2] == .2 or vice versa, pred_norm[0] is exactly the same.
Closing note: derived formulas are intended for a 1D case for simplicity, and may not work for N-dimensional labels and preds tensors, but can be easily generalized.
Numpy vs. tf.GradientTape:
def numpy_gradient(label, pred):
    p_mask = np.abs(label - 1)
    p_one = pred[np.where(label == 1)[0][0]]
    return p_mask / np.sum(pred) \
           - label * np.sum(p_mask * pred) / (p_one * np.sum(pred))

def gtape_gradient(label, pred):
    pred = tf.Variable(pred)
    label = tf.Variable(label)
    with tf.GradientTape() as tape:
        loss = -tf.math.log(tf.reduce_sum(label * pred) / tf.reduce_sum(pred))
    return tape.gradient(loss, pred).numpy()

label = np.array([1., 0., 0. ])
pred  = np.array([0.5, 0.1, 0.1])
print(numpy_gradient(label, pred))
print(gtape_gradient(label, pred))
# [-0.57142854  1.4285713   1.4285713 ] <-- 100% agreement
# [-0.57142866  1.4285713   1.4285713 ] <-- 100% agreement

Keras replace log(0) in custom loss function

I am trying to use the Poisson unscaled deviance as a loss function for my neural network, but there's a major flaw with this: y_true can take (and will very often take) the value 0.
The unscaled deviance works like this for the Poisson case:

If y_true = 0, then loss = 2 * d * y_pred
If y_true > 0, then loss = 2 * d * (y_true * log(y_true) - y_true * log(y_pred) - y_true + y_pred)
Note that as soon as log(0) is computed, the loss becomes -inf, so my goal is to prevent this from happening.
I tried using the switch function to solve this, but here's the trick: if I end up with log(0), I don't want to replace it by 0 (with K.zeros()), because that would amount to assuming y_true = 1, since log(1) = 0.
Therefore I want to use a large negative value in this case (-10000 for example), but I don't know how to do this, since K.variable(-10000) gives the error:
ValueError: Rank of `condition` should be less than or equal to rank of `then_expression` and `else_expression`. ndim(condition)=1, ndim(then_expression)=0
Using K.zeros_like(y_true) instead of K.variable(-10000) will work for keras but it is mathematically incorrect and the optimisation doesn't work properly because of this.
I'd like to know how to replace the log by a large negative value in the switch function. Here's my attempt:
def custom_loss3(data, y_pred):
    y_true = data[:, 0]
    d = data[:, 1]
    # condition
    loss_value = KB.switch(
        KB.less_equal(y_true, 0),
        2 * d * y_pred,
        2 * d * (y_true * KB.switch(KB.less_equal(y_true, 0),
                                    KB.variable(-10000), KB.log(y_true))
                 - y_true * KB.switch(KB.less_equal(y_pred, 0.),
                                      KB.variable(-10000), KB.log(y_pred))
                 - y_true + y_pred))
    return loss_value
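The ValueError itself points at a fix: the then/else branches of K.switch must have at least the rank of the condition, and KB.variable(-10000) is a rank-0 scalar. Building the constant with the shape of y_true sidesteps that while keeping the large negative value; a sketch, assuming the same -10000 floor:

def custom_loss3(data, y_pred):
    y_true = data[:, 0]
    d = data[:, 1]
    # same shape/rank as the condition, unlike the rank-0 KB.variable(-10000)
    big_neg = -10000.0 * KB.ones_like(y_true)
    log_true = KB.switch(KB.less_equal(y_true, 0.), big_neg, KB.log(y_true))
    log_pred = KB.switch(KB.less_equal(y_pred, 0.), big_neg, KB.log(y_pred))
    return KB.switch(
        KB.less_equal(y_true, 0.),
        2 * d * y_pred,
        2 * d * (y_true * log_true - y_true * log_pred - y_true + y_pred))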

Weight different misclassifications differently keras

I want my model to increase the loss for a false positive prediction when training by creating a custom loss function.
The class_weight parameter in model.fit() does not work for this issue. class_weight is already set to {0: 1, 1: 23}, as I have skewed training data where there are 23 times as many non-true labels as true labels.
I am not too experienced when working with the keras backend. I have mostly worked with the functional model.
What I want to create is:
def weighted_binary_crossentropy(y_true, y_pred):
    # where y_true == 0 and y_pred == 1:
    #   weight this loss and make it 50 times larger
    # return loss
I can do simple stuff with the tensors, such as getting the mean squared error, but I have no idea how to do logical stuff. I have tried a hacky solution, which doesn't work and feels totally wrong:
def weighted_binary_crossentropy(y_true, y_pred):
    false_positive_weight = 50
    thresh = 0.5
    y_pred_true = K.greater_equal(thresh, y_pred)
    y_not_true = K.less_equal(thresh, y_true)
    false_positive_tensor = K.equal(y_pred_true, y_not_true)
    loss_weights = K.ones_like(y_pred) + false_positive_weight * false_positive_tensor
    return K.binary_crossentropy(y_true, y_pred) * loss_weights
I am using python 3 with keras 2 and tensorflow as backend.
Thanks in advance!
I think you're almost there...
from keras import backend as K
from keras.losses import binary_crossentropy

def weighted_binary_crossentropy(y_true, y_pred):
    false_positive_weight = 50
    thresh = 0.5
    y_pred_true = K.greater_equal(thresh, y_pred)
    y_not_true = K.less_equal(thresh, y_true)
    false_positive_tensor = K.equal(y_pred_true, y_not_true)

    # changing from here

    # first let's transform the bool tensor into numbers -
    # maybe you need float64 depending on your configuration
    false_positive_tensor = K.cast(false_positive_tensor, 'float32')

    # and let's create its complement (the non false positives)
    complement = 1 - false_positive_tensor

    # now we're going to separate two groups
    falsePosGroupTrue = y_true * false_positive_tensor
    falsePosGroupPred = y_pred * false_positive_tensor
    nonFalseGroupTrue = y_true * complement
    nonFalseGroupPred = y_pred * complement

    # let's calculate one crossentropy loss for each group
    # (directly from the keras loss functions imported above)
    falsePosLoss = binary_crossentropy(falsePosGroupTrue, falsePosGroupPred)
    nonFalseLoss = binary_crossentropy(nonFalseGroupTrue, nonFalseGroupPred)

    # return them weighted:
    return (false_positive_weight * falsePosLoss) + nonFalseLoss
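One caveat worth flagging (my note, not from the original answer): as written, y_pred_true is really "predicted negative" (thresh >= y_pred) and y_not_true is really "actually positive" (thresh <= y_true), so K.equal(y_pred_true, y_not_true) marks both false positives and false negatives. If only false positives (y_true == 0, y_pred == 1) should carry the extra weight, a mask like this sketch isolates them:

false_positive_tensor = (K.cast(K.greater_equal(y_pred, thresh), 'float32')
                         * K.cast(K.less(y_true, thresh), 'float32'))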
