Keras: Generating target values based on Faster-RCNN predictions - python

SOLUTION
OK, I did find a solution, though maybe not the best one. I created a new training method that uses TensorFlow's GradientTape directly: get the predictions, generate the targets from them, calculate the loss, and then apply the gradients.
with tf.GradientTape() as tape:
    # Forward pass
    y_preds = self.model(x, training=True)
    # Generate the target values from the predictions
    actual_deltas, actual_objectness = self.generate_target_values(y_preds, labels)
    # Get the loss
    loss = self.model.compiled_loss([actual_deltas, actual_objectness], y_preds,
                                    regularization_losses=self.model.losses)
# Compute gradients
trainable_vars = self.model.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights
self.model.optimizer.apply_gradients(zip(gradients, trainable_vars))
I understand that there are better ways to do this with Keras subclassing, but this did the job.
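For reference, here is a minimal sketch of that subclassing route (supported from TF 2.2 onward, where Model.train_step can be overridden). The generate_target_values helper is assumed to exist on the model, as in the snippet above:

import tensorflow as tf

class RPNTrainer(tf.keras.Model):
    """Sketch only: builds the targets from the predictions inside train_step."""

    def train_step(self, data):
        x, labels = data
        with tf.GradientTape() as tape:
            y_preds = self(x, training=True)
            # Build regression deltas and objectness targets from the predictions
            actual_deltas, actual_objectness = self.generate_target_values(y_preds, labels)
            loss = self.compiled_loss([actual_deltas, actual_objectness], y_preds,
                                      regularization_losses=self.losses)
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        self.compiled_metrics.update_state([actual_deltas, actual_objectness], y_preds)
        return {m.name: m.result() for m in self.metrics}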
ORIGINAL POST
I am currently trying to create a model where the predictions need to be run through a function that compares them to the training labels and returns the target values. How do I train my model so that the predictions get fed to this function and its output is used as the training target?
I am using TensorFlow 2.1.0 and Keras 2.2.4-tf.
Edit:
The model is a modified Faster-RCNN model.
I am trying to add a function which takes the predictions (a 2xN and a 4xN tensor), converts them to bounding boxes, compares them to the ground-truth bounding boxes, and then returns what each of the proposed bounding box values should have been in order to overlap the ground truth properly.
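For illustration only (this is not the asker's code), the comparison step described here usually boils down to a pairwise-IoU match between the proposed boxes and the ground-truth boxes; the matched box then defines the regression targets and objectness label for each proposal:

import tensorflow as tf

def match_proposals_to_gt(proposal_boxes, gt_boxes):
    """Illustrative only: assign each proposal the ground-truth box it overlaps most.

    proposal_boxes: (N, 4) tensor, gt_boxes: (M, 4) tensor, boxes as [x1, y1, x2, y2].
    """
    # Pairwise intersection, broadcast to an (N, M) grid
    x1 = tf.maximum(proposal_boxes[:, None, 0], gt_boxes[None, :, 0])
    y1 = tf.maximum(proposal_boxes[:, None, 1], gt_boxes[None, :, 1])
    x2 = tf.minimum(proposal_boxes[:, None, 2], gt_boxes[None, :, 2])
    y2 = tf.minimum(proposal_boxes[:, None, 3], gt_boxes[None, :, 3])
    inter = tf.maximum(x2 - x1, 0.0) * tf.maximum(y2 - y1, 0.0)
    # Pairwise union and IoU
    area_p = (proposal_boxes[:, 2] - proposal_boxes[:, 0]) * (proposal_boxes[:, 3] - proposal_boxes[:, 1])
    area_g = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
    iou = inter / tf.maximum(area_p[:, None] + area_g[None, :] - inter, 1e-8)
    # Index of the best-overlapping ground-truth box per proposal, plus its IoU
    best_gt = tf.argmax(iou, axis=1)
    return best_gt, tf.reduce_max(iou, axis=1)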


Related

Combining gradients from different "networks" in TensorFlow2

I'm trying to combine a few "networks" into one final loss function. I'm wondering if what I'm doing is "legal"; as of now I can't seem to make this work. I'm using TensorFlow Probability:
The main problem is here:
# Get gradients of the loss wrt the weights.
gradients = tape.gradient(loss, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights])
# Update the weights of our linear layer.
optimizer.apply_gradients(zip(gradients, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights]))
This gives me None gradients and throws on apply_gradients:
AttributeError: 'list' object has no attribute 'device'
Full code:
univariate_gmm = tfp.distributions.MixtureSameFamily(
    mixture_distribution=tfp.distributions.Categorical(probs=phis_true),
    components_distribution=tfp.distributions.Normal(loc=mus_true, scale=sigmas_true)
)
x = univariate_gmm.sample(n_samples, seed=random_seed).numpy()
dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.shuffle(buffer_size=1024).batch(64)

m_phis = keras.layers.Dense(2, activation=tf.nn.softmax)
m_mus = keras.layers.Dense(2)
m_sigmas = keras.layers.Dense(2, activation=tf.nn.softplus)

def neg_log_likelihood(y, phis, mus, sigmas):
    a = tfp.distributions.Normal(loc=mus[0], scale=sigmas[0]).prob(y)
    b = tfp.distributions.Normal(loc=mus[1], scale=sigmas[1]).prob(y)
    c = np.log(phis[0]*a + phis[1]*b)
    return tf.reduce_sum(-c, axis=-1)

# Instantiate a logistic loss function that expects integer targets.
loss_fn = neg_log_likelihood
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)

# Iterate over the batches of the dataset.
for step, y in enumerate(dataset):
    yy = np.expand_dims(y, axis=1)
    # Open a GradientTape.
    with tf.GradientTape() as tape:
        # Forward pass.
        phis = m_phis(yy)
        mus = m_mus(yy)
        sigmas = m_sigmas(yy)
        # Loss value for this batch.
        loss = loss_fn(yy, phis, mus, sigmas)
    # Get gradients of the loss wrt the weights.
    gradients = tape.gradient(loss, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights])
    # Update the weights of our linear layer.
    optimizer.apply_gradients(zip(gradients, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights]))
    # Logging.
    if step % 100 == 0:
        print("Step:", step, "Loss:", float(loss))
There are two separate problems to take into account.
1. Gradients are None:
Typically this happens when non-TensorFlow operations are executed in the code watched by the GradientTape. Concretely, this concerns the computation of np.log in your neg_log_likelihood function. If you replace np.log with tf.math.log, the gradients should compute (a minimal sketch follows below). It is a good habit not to use NumPy inside your "internal" TensorFlow components, since it breaks the gradient chain like this; for most NumPy operations there is a good TensorFlow substitute.
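A minimal sketch of the loss with only that one call swapped, everything else as in the question:

import tensorflow as tf
import tensorflow_probability as tfp

def neg_log_likelihood(y, phis, mus, sigmas):
    a = tfp.distributions.Normal(loc=mus[0], scale=sigmas[0]).prob(y)
    b = tfp.distributions.Normal(loc=mus[1], scale=sigmas[1]).prob(y)
    # tf.math.log keeps the computation on the tape, so gradients can flow
    c = tf.math.log(phis[0] * a + phis[1] * b)
    return tf.reduce_sum(-c, axis=-1)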
2. apply_gradients for multiple trainables:
This mainly has to do with the input that apply_gradients expects. There you have two options:
First option: Call apply_gradients three times, each time with different trainables
optimizer.apply_gradients(zip(m_phis_gradients, m_phis.trainable_weights))
optimizer.apply_gradients(zip(m_mus_gradients, m_mus.trainable_weights))
optimizer.apply_gradients(zip(m_sigmas_gradients, m_sigmas.trainable_weights))
The alternative would be to create one flat list of (gradient, variable) tuples, as indicated in the TensorFlow documentation (quote: "grads_and_vars: List of (gradient, variable) pairs."). Note that the pairs have to be unpacked into a single flat list; passing a list of zip objects directly will not work.
This would mean calling something like
optimizer.apply_gradients(
    [
        *zip(m_phis_gradients, m_phis.trainable_weights),
        *zip(m_mus_gradients, m_mus.trainable_weights),
        *zip(m_sigmas_gradients, m_sigmas.trainable_weights),
    ]
)
Both options require you to split the gradients. You can either do that by computing the gradients and indexing them separately (gradients[0], ...), or you can simply compute the gradients separately. Note that this may require persistent=True in your GradientTape.
# [...]
# Open a GradientTape.
with tf.GradientTape(persistent=True) as tape:
    # Forward pass.
    phis = m_phis(yy)
    mus = m_mus(yy)
    sigmas = m_sigmas(yy)
    # Loss value for this batch.
    loss = loss_fn(yy, phis, mus, sigmas)
# Get gradients of the loss wrt the weights.
m_phis_gradients = tape.gradient(loss, m_phis.trainable_weights)
m_mus_gradients = tape.gradient(loss, m_mus.trainable_weights)
m_sigmas_gradients = tape.gradient(loss, m_sigmas.trainable_weights)
# Update the weights of our linear layer.
optimizer.apply_gradients(
    [
        *zip(m_phis_gradients, m_phis.trainable_weights),
        *zip(m_mus_gradients, m_mus.trainable_weights),
        *zip(m_sigmas_gradients, m_sigmas.trainable_weights),
    ]
)
# [...]

Use Hamming Distance Loss Function with Tensorflow GradientTape: no gradients. Is it not differentiable?

I'm using TensorFlow 2.1 and Python 3, and I am creating my custom training model following the tutorial "Tensorflow - Custom training: walkthrough".
I'm trying to use Hamming Distance on my loss function:
import tensorflow as tf
import tensorflow_addons as tfa
def my_loss_hamming(model, x, y):
    global output
    output = model(x)
    return tfa.metrics.hamming.hamming_loss_fn(y, output, threshold=0.5, mode='multilabel')

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        tape.watch(model.trainable_variables)
        loss_value = my_loss_hamming(model, inputs, targets)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
When I call it:
loss_value, grads = grad(model, feature, label)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
The grads variable is a list of 38 None values.
And I get the error:
No gradients provided for any variable: ['conv1_1/kernel:0', ...]
Is there any way to use Hamming distance that doesn't "interrupt the gradient chain registered by the gradient tape"?
Apologies if I'm saying something obvious, but backpropagation works as a fitting algorithm for neural networks through gradients: for each batch of training data you compute how much the loss function will improve or degrade if you move a particular trainable weight by a very small amount delta.
Hamming loss is by definition not differentiable, so for small movements of the trainable weights you will never see any change in the loss. I imagine it is only meant for final measurements of a trained model's performance rather than for training.
If you want to train a neural net through backpropagation you need to use some differentiable loss, one that can help the model move its weights in the right direction. Sometimes people smooth losses such as the Hamming loss and create differentiable approximations, e.g. something that penalizes predictions close to the target answer less, rather than giving out 1 for everything above the threshold and 0 for everything else (a sketch of one possible surrogate follows below).
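For illustration only, here is one minimal sketch of what such a smoothed surrogate could look like for multilabel targets: it replaces the hard 0/1 disagreement of the Hamming loss with the absolute difference between labels and predicted probabilities, which still gives usable gradients. This is an illustrative stand-in, not the asker's setup or a tensorflow-addons API:

import tensorflow as tf

def soft_hamming_loss(y_true, y_prob):
    """Differentiable stand-in for the multilabel Hamming loss.

    y_true: 0/1 labels, shape (batch, num_labels)
    y_prob: predicted probabilities in [0, 1], same shape
    """
    y_true = tf.cast(y_true, y_prob.dtype)
    # |y_true - y_prob| approaches the 0/1 disagreement as predictions saturate,
    # but varies smoothly in between
    per_label_error = tf.abs(y_true - y_prob)
    return tf.reduce_mean(per_label_error)

As the predictions saturate toward 0 or 1 this approaches the true Hamming loss, but in between it still tells the optimizer which direction to move the weights.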

What's the best way to access single gradients in a batch in TensorFlow?

I'm currently analyzing how gradients develop over the course of training of a CNN using Tensorflow 2.x. What I want to do is compare each gradient in a batch to the gradient resulting for the whole batch. At the moment I use this simple code snippet for each training step:
[...]
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
[...]

# One training step
# x_train is a batch of input data, y_train the corresponding labels
def train_step(model, optimizer, x_train, y_train):
    # Process batch
    with tf.GradientTape() as tape:
        batch_predictions = model(x_train, training=True)
        batch_loss = loss_object(y_train, batch_predictions)
    batch_grads = tape.gradient(batch_loss, model.trainable_variables)
    # Do something with gradient of whole batch
    # ...

    # Process each data point in the current batch
    for index in range(len(x_train)):
        with tf.GradientTape() as single_tape:
            single_prediction = model(x_train[index:index+1], training=True)
            single_loss = loss_object(y_train[index:index+1], single_prediction)
        single_grad = single_tape.gradient(single_loss, model.trainable_variables)
        # Do something with gradient of single data input
        # ...

    # Use batch gradient to update network weights
    optimizer.apply_gradients(zip(batch_grads, model.trainable_variables))
    train_loss(batch_loss)
    train_accuracy(y_train, batch_predictions)
My main problem is that computation time explodes when calculating each gradient individually, although these calculations should already have been done by TensorFlow when calculating the batch's gradient. The reason is that GradientTape as well as compute_gradients always return a single aggregated gradient, no matter whether one or several data points were given, so the computation has to be repeated for each data point.
I know that I could compute the batch's gradient to update the network by using all the single gradients calculated for each data point but this plays only a minor role in saving computation time.
Is there a more efficient way to compute single gradients?
You can use the jacobian method of the gradient tape to get the Jacobian matrix, which will give you the gradients for each individual loss value:
import tensorflow as tf

# Make a random linear problem
tf.random.set_seed(0)
# Random input batch of ten four-vector examples
x = tf.random.uniform((10, 4))
# Random weights
w = tf.random.uniform((4, 2))
# Random batch label
y = tf.random.uniform((10, 2))
with tf.GradientTape() as tape:
    tape.watch(w)
    # Prediction
    p = x @ w
    # Loss
    loss = tf.losses.mean_squared_error(y, p)
# Compute Jacobian
j = tape.jacobian(loss, w)
# The Jacobian gives you the gradient for each loss value
print(j.shape)
# (10, 4, 2)
# Gradient of the loss wrt the weights for the first example
tf.print(j[0])
# [[0.145728424 0.0756840706]
#  [0.103099883 0.0535449386]
#  [0.267220169 0.138780832]
#  [0.280130595 0.145485848]]
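The same idea carries over to a Keras model if the loss is left unreduced, so there is one loss value per example. A sketch under that assumption (the tiny model here is made up purely for illustration):

import tensorflow as tf

# Hypothetical small model and batch, purely for illustration
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation='relu'),
    tf.keras.layers.Dense(1),
])
x_batch = tf.random.uniform((8, 4))
y_batch = tf.random.uniform((8, 1))

with tf.GradientTape() as tape:
    preds = model(x_batch, training=True)
    # Reduction.NONE keeps one loss value per example
    per_example_loss = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)(y_batch, preds)

# One Jacobian per trainable variable, each with a leading batch dimension:
# per-example gradients without a Python loop over the batch
per_example_grads = tape.jacobian(per_example_loss, model.trainable_variables)
for var, jac in zip(model.trainable_variables, per_example_grads):
    print(var.name, jac.shape)  # e.g. (8, 4, 3) for the first kernel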

How to detect vanishing and exploding gradients with Tensorboard?

I have two "sub-questions":
1) How can I detect vanishing or exploding gradients with TensorBoard, given that write_grads=True is currently deprecated in the TensorBoard callback as per "un-deprecate write_grads for fit #31173"?
2) I figured I can probably tell whether my model suffers from vanishing gradients based on the weights' distributions and histograms in the Distributions and Histograms tabs in TensorBoard. My problem is that I have no frame of reference to compare with. Currently my biases seem to be "moving", but I can't tell whether my kernel weights (Conv2D layers) are "moving"/"changing" enough. Can someone give me a rule of thumb to assess this visually in TensorBoard? E.g. if only the bottom 25th percentile of kernel weights is moving, is that good enough or not? Or perhaps someone can post two reference images from TensorBoard of vanishing vs. non-vanishing gradients.
Here are my histograms and distributions. Is it possible to tell whether my model suffers from vanishing gradients? (Some layers omitted for brevity.) Thanks in advance.
I am currently facing the same question and approached the problem similarly using TensorBoard.
Even though write_grads is deprecated, you can still log gradients for each layer of your network by subclassing tf.keras.Model and computing the gradients manually with tf.GradientTape in the train_step method.
Something similar to this is working for me:
import tensorflow as tf
from tensorflow.keras import Model

class TrainWithCustomLogsModel(Model):

    def __init__(self, **kwargs):
        super(TrainWithCustomLogsModel, self).__init__(**kwargs)
        self.step = tf.Variable(0, dtype=tf.int64, trainable=False)

    def train_step(self, data):
        # Get batch images and labels
        x, y = data
        # Compute the batch loss
        with tf.GradientTape() as tape:
            p = self(x, training=True)
            loss = self.compiled_loss(y, p, regularization_losses=self.losses)
        # Compute gradients for each weight of the network.
        # Note that trainable_vars and gradients are lists of tensors
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Log gradients in TensorBoard
        # (train_summary_writer is a tf.summary file writer created elsewhere)
        self.step.assign_add(tf.constant(1, dtype=tf.int64))
        # tf.print(self.step)
        with train_summary_writer.as_default():
            for var, grad in zip(trainable_vars, gradients):
                name = var.name
                var, grad = tf.squeeze(var), tf.squeeze(grad)
                tf.summary.histogram(name, var, step=self.step)
                tf.summary.histogram('Gradients_' + name, grad, step=self.step)
        # Update model's weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        del tape
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, p)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
You should then be able to visualize the distributions of your gradients for any train step of your training, along with the distributions of your kernels' values.
Moreover, it might be worth trying to plot the gradient norm over time instead of the raw single-value distributions (see the sketch below).
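A minimal sketch of that variant, as a hypothetical helper that could be called from train_step right after tape.gradient (the summary writer is again assumed to be created elsewhere):

import tensorflow as tf

def log_gradient_norms(writer, trainable_vars, gradients, step):
    """Hypothetical helper: log one scalar gradient norm per variable."""
    with writer.as_default():
        for var, grad in zip(trainable_vars, gradients):
            if grad is not None:
                # A single norm per variable is easier to track over time
                # than a full histogram
                tf.summary.scalar('GradNorm_' + var.name, tf.norm(grad), step=step)

Called as, e.g., log_gradient_norms(train_summary_writer, trainable_vars, gradients, self.step) inside the train_step above. A norm that collapses toward zero over training is a fairly direct sign of vanishing gradients, while a norm that keeps growing points to exploding ones.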

If one captures gradient with Optimizer, will it calculate twice the gradient?

I recently hit a training performance bottleneck. I always add a lot of histograms to the summary. I want to know whether calculating the gradients first and then minimizing the loss computes the gradients twice. A simplified example:
# layers
...
# optimizer
loss = tf.losses.mean_squared_error(labels=y_true, predictions=logits)
opt = AdamOptimizer(learning_rate)
# collect gradients
gradients = opt.compute_gradients(loss)
# train operation
train_op = opt.minimize(loss)
...
# merge summary
...
Is there a minimize method in the optimizers that uses the gradients directly? Something like opt.minimize(gradients) instead of opt.minimize(loss)?
You could use apply_gradients after calculating the gradients with compute_gradients, as follows:
grads_and_vars = opt.compute_gradients(loss)
train_op = opt.apply_gradients(grads_and_vars)
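Put together with the histogram summaries mentioned in the question, a sketch of how this avoids a second gradient pass (TF1-style graph code to match the question; y_true, logits and learning_rate are assumed to be defined as in the original snippet):

import tensorflow as tf  # TF1.x graph-mode API, matching the question

# y_true, logits and learning_rate are assumed to be defined as in the question
loss = tf.losses.mean_squared_error(labels=y_true, predictions=logits)
opt = tf.train.AdamOptimizer(learning_rate)

# Gradients are computed once ...
grads_and_vars = opt.compute_gradients(loss)

# ... logged as histograms ...
for grad, var in grads_and_vars:
    if grad is not None:
        tf.summary.histogram(var.name.replace(':', '_') + '_gradient', grad)

# ... and reused for the weight update, instead of a second opt.minimize(loss)
train_op = opt.apply_gradients(grads_and_vars)
merged_summary = tf.summary.merge_all()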
