How would you do ReduceLROnPlateau in Tensorflow?

Keras has a callback that reduces the learning rate upon a plateauing of a specified metric, called ReduceLROnPlateau.
How do you create such a feature in native Tensorflow? In a Tensorflow model, is it possible to call on Keras callbacks? Or does it need to be written in native Tensorflow? If so, how would you set the learning rate in the middle of a training session?

I'm afraid tensorflow doesn't support this out-of-the-box (and keras callbacks aren't directly applicable either). TensorFlow does ship a list of supported learning rate scheduling techniques in tf.train: all of them are different algorithms, but they are self-contained, i.e. independent of the training performance.
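For reference, one of those self-contained schedules looks roughly like this (a minimal sketch; the decay values and the loss tensor are assumptions for illustration only):
import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
# decay the learning rate by a factor of 0.96 every 1000 steps,
# purely as a function of global_step -- training performance is never consulted
learning_rate = tf.train.exponential_decay(0.1, global_step,
                                           decay_steps=1000, decay_rate=0.96,
                                           staircase=True)
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)  # 'loss' is assumed to be defined elsewhere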
But the good news is that all optimizers accept a tensor for the learning rate. So you can create a variable or a placeholder for the learning rate and change its value based on validation performance (which you'll also need to compute yourself). Here's an example from this wonderful answer:
learning_rate = tf.placeholder(tf.float32, shape=[])
# ...
train_step = tf.train.GradientDescentOptimizer(
    learning_rate=learning_rate).minimize(mse)
sess = tf.Session()
# Feed different values for learning rate to each training step.
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.01})
sess.run(train_step, feed_dict={learning_rate: 0.01})
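Building on that idea, the plateau logic itself can live entirely on the Python side of the training loop. The sketch below is only an illustration and assumes names that are not in the original answer (a validation loss tensor loss, placeholders x/y, a batch generator, and the hyperparameter values):
lr = 0.1
best_val_loss = float('inf')
wait = 0
patience, factor, min_lr, min_delta = 10, 0.1, 1e-6, 1e-4  # assumed hyperparameters

for epoch in range(num_epochs):
    for batch_x, batch_y in batches:  # assumed data pipeline
        sess.run(train_step, feed_dict={x: batch_x, y: batch_y, learning_rate: lr})
    val_loss = sess.run(loss, feed_dict={x: x_val, y: y_val})  # assumed validation tensors
    if val_loss < best_val_loss - min_delta:
        best_val_loss = val_loss
        wait = 0
    else:
        wait += 1
        if wait >= patience:
            lr = max(lr * factor, min_lr)  # reduce the fed learning rate on plateau
            wait = 0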

Here's a rough (not 1:1) port of Keras's ReduceLROnPlateau that I wrote up. It examines each batch's loss instead of sampling randomly at the end of each epoch, though cooldown and patience are still expressed in epochs. It can be used just like tf.train.exponential_decay(...).
I think there's probably a better way to go about it than simply monitoring the minimum loss value, since the minimum could be an extreme outlier. A metric based on some running average of the loss gradient might be better.
import math

import numpy as np
import tensorflow as tf

def plateau_decay(learning_rate, global_step, loss, data_count, batch_size,
                  factor=0.1, patience=10, min_delta=1e-4, cooldown=0, min_lr=0):
    # use true division so a partial final batch still counts as a step
    steps_per_epoch = math.ceil(data_count / batch_size)
    patient_steps = patience * steps_per_epoch
    cooldown_steps = cooldown * steps_per_epoch

    if not isinstance(learning_rate, tf.Tensor):
        learning_rate = tf.get_variable('learning_rate', initializer=tf.constant(learning_rate),
                                        trainable=False, collections=[tf.GraphKeys.LOCAL_VARIABLES])

    with tf.variable_scope('plateau_decay'):
        step = tf.get_variable('step', trainable=False, initializer=global_step,
                               collections=[tf.GraphKeys.LOCAL_VARIABLES])
        best = tf.get_variable('best', trainable=False, initializer=tf.constant(np.Inf, tf.float32),
                               collections=[tf.GraphKeys.LOCAL_VARIABLES])

        def _update_best():
            with tf.control_dependencies([
                tf.assign(best, loss),
                tf.assign(step, global_step),
                tf.print('Plateau Decay: Updated Best - Step:', global_step, 'Next Decay Step:', global_step + patient_steps, 'Loss:', loss)
            ]):
                return tf.identity(learning_rate)

        def _decay():
            with tf.control_dependencies([
                tf.assign(best, loss),
                tf.assign(learning_rate, tf.maximum(tf.multiply(learning_rate, factor), min_lr)),
                tf.assign(step, global_step + cooldown_steps),
                tf.print('Plateau Decay: Decayed LR - Step:', global_step, 'Next Decay Step:', global_step + cooldown_steps + patient_steps, 'Learning Rate:', learning_rate)
            ]):
                return tf.identity(learning_rate)

        def _no_op():
            return tf.identity(learning_rate)

        met_threshold = tf.less(loss, best - min_delta)
        should_decay = tf.greater_equal(global_step - step, patient_steps)

        return tf.cond(met_threshold, _update_best,
                       lambda: tf.cond(should_decay, _decay, _no_op))
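For completeness, a rough usage sketch (the loss tensor, data size and hyperparameter values below are assumptions) showing where the function slots in, just as tf.train.exponential_decay(...) would:
global_step = tf.train.get_or_create_global_step()
learning_rate = plateau_decay(1e-3, global_step, loss,
                              data_count=50000, batch_size=32,
                              factor=0.1, patience=10)
train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)
# note: the helper stores its state in LOCAL_VARIABLES, so also run
# tf.local_variables_initializer() before training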


Keras loss value significant jump

I am working on a simple neural network in Keras with TensorFlow. There is a significant jump in the loss value from the last mini-batch of epoch L-1 to the first mini-batch of epoch L.
I am aware that the loss should decrease as the number of iterations increases, but such a significant jump in loss after each epoch does look strange. Here is the code snippet:
import tensorflow as tf
from tensorflow.keras import backend as K

initializer = tf.keras.initializers.he_uniform(seed=None)

def my_loss(y_true, y_pred):
    epsilon = 1e-30  # epsilon is added to avoid inf/nan
    y_pred = K.cast(y_pred, K.floatx())
    y_true = K.cast(y_true, K.floatx())
    loss = y_true * K.log(y_pred + epsilon) + (1 - y_true) * K.log(1 - y_pred + epsilon)
    loss = K.mean(loss, axis=-1)
    loss = K.mean(loss)
    loss = -1 * loss
    return loss
inputs = tf.keras.Input(shape=(140,))
x = tf.keras.layers.Dense(1000,kernel_initializer=initializer)(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dense(1000,kernel_initializer=initializer)(x)
x = tf.keras.layers.ReLU()(x)
x = tf.keras.layers.Dense(1000,kernel_initializer=initializer)(x)
x = tf.keras.layers.ReLU()(x)
x = tf.keras.layers.Dense(100, kernel_initializer=initializer)(x)
outputs = tf.keras.activations.sigmoid(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
opt = tf.keras.optimizers.Adam()
recall1 = tf.keras.metrics.Recall(top_k = 8)
c_entropy = tf.keras.losses.BinaryCrossentropy()
model.compile(loss=c_entropy, optimizer= opt , metrics = [recall1,my_loss], run_eagerly=True)
model.fit(X_train_test, Y_train_test, epochs=epochs, batch_size=batch, shuffle=True, verbose = 1)
When I searched online, I found this article, which suggests that Keras calculates a moving average over the mini-batches. I also found elsewhere that the array used for this moving average is reset after each epoch, which is why we obtain a very smooth curve within an epoch but a jump after it.
To avoid the moving average, I implemented my own loss function, which should output the loss of the current mini-batch instead of the moving average over all batches. Since each mini-batch differs from the others, the corresponding losses should also differ, so I was expecting an arbitrary loss value for each mini-batch from my implementation. Instead, I obtain exactly the same values as the loss function built into Keras.
I am unclear about two things:
Is Keras calculating the moving average over the mini-batches, with the array reset after each epoch causing the jump? If not, what is causing the jump in the loss value?
Is my implementation of the per-mini-batch loss correct? If not, how can I obtain the loss value of each mini-batch during training?
Keras does in fact show the moving average instead of the "raw" loss values. The moving-average array is reset after each epoch, which is why we see a huge jump after each epoch. To acquire the raw loss values, one should implement a callback as shown below:
import keras

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        # initialize a list at the beginning of training
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))

mycallback = LossHistory()
Then pass it to model.fit:
model.fit(X, Y, epochs=epochs, batch_size=batch, shuffle=True, verbose=0, callbacks=[mycallback])
print(mycallback.losses)
I tested with the following configuration
Keras 2.3.1
Tensorflow 2.1.0
Python 3.7.9
For some reason, it didn't work with the following configuration
Keras 2.4.3
Tensorflow 2.2.0
Python 3.8.5
To answer the second question: the implementation of the loss function my_loss is correct, and the values obtained are very close to those generated by the built-in tf.keras.losses.BinaryCrossentropy().
In TensorFlow version 2.2 and newer, the loss provided to on_train_batch_end is now the average loss of all batches up until the current batch within the given epoch. This is also the case for additional metrics, and applies to the built-in losses/metrics as well as any custom losses/metrics.
Fortunately, the loss for the current batch can be calculated from the average loss as follows:
from tensorflow.keras.callbacks import Callback

class CustomCallback(Callback):
    '''This callback converts the average loss (default behavior in TF >= 2.2)
    into the loss for only the current batch.
    '''
    def on_epoch_begin(self, epoch, logs={}):
        self.previous_loss_sum = 0

    def on_train_batch_end(self, batch, logs={}):
        # calculate loss of current batch:
        current_loss_sum = (batch + 1) * logs['loss']
        current_loss = current_loss_sum - self.previous_loss_sum
        self.previous_loss_sum = current_loss_sum

        # use current_loss:
        # ...
This code can be added to any custom callback that needs the loss of the current batch instead of the average loss, including the LossHistory callback provided in Doc Jazzy's answer.
Also, if you are using TensorFlow 1 or TensorFlow 2 <= 2.1, do not include this code in your callback, as those versions already provide the current loss rather than the average loss.
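Putting the two answers together, a hedged sketch of a single callback that records the raw per-batch losses under TF >= 2.2 could look like this (the class name is only for illustration):
from tensorflow.keras.callbacks import Callback

class BatchLossHistory(Callback):
    '''Records the raw loss of each batch, undoing the running average
    that Keras reports in TF >= 2.2.'''
    def on_train_begin(self, logs=None):
        self.losses = []

    def on_epoch_begin(self, epoch, logs=None):
        self.previous_loss_sum = 0.0

    def on_train_batch_end(self, batch, logs=None):
        current_loss_sum = (batch + 1) * logs['loss']
        self.losses.append(current_loss_sum - self.previous_loss_sum)
        self.previous_loss_sum = current_loss_sum

# usage: model.fit(X, Y, epochs=epochs, batch_size=batch, callbacks=[BatchLossHistory()])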

How to accumulate gradients in tensorflow 2.0?

I'm training a model with TensorFlow 2.0. The images in my training set are of different resolutions. The model I've built can handle variable resolutions (conv layers followed by global averaging). My training set is very small and I want to use the full training set in a single batch.
Since my images are of different resolutions, I can't use model.fit(). So I'm planning to pass each sample through the network individually, accumulate the errors/gradients, and then apply one optimizer step. I'm able to compute loss values, but I don't know how to accumulate the losses/gradients. How can I accumulate the losses/gradients and then apply a single optimizer step?
Code:
for i in range(num_epochs):
    print(f'Epoch: {i + 1}')
    total_loss = 0
    for j in tqdm(range(num_samples)):
        sample = samples[j]
        with tf.GradientTape() as tape:
            prediction = self.model(sample)
            loss_value = self.loss_function(y_true=labels[j], y_pred=prediction)
        gradients = tape.gradient(loss_value, self.model.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables))
        total_loss += loss_value
    epoch_loss = total_loss / num_samples
    print(f'Epoch loss: {epoch_loss}')
If I understand correctly from this statement:
How can I accumulate the losses/gradients and then apply a single optimizer step?
@Nagabhushan is trying to accumulate gradients and then apply the optimization on the (mean) accumulated gradient. The answer provided by @TensorflowSupport does not answer it.
In order to perform the optimization only once, and accumulate the gradient from several tapes, you can do the following:
for i in range(num_epochs):
    print(f'Epoch: {i + 1}')
    total_loss = 0

    # get trainable variables
    train_vars = self.model.trainable_variables
    # Create empty gradient list (not a tf.Variable list)
    accum_gradient = [tf.zeros_like(this_var) for this_var in train_vars]

    for j in tqdm(range(num_samples)):
        sample = samples[j]
        with tf.GradientTape() as tape:
            prediction = self.model(sample)
            loss_value = self.loss_function(y_true=labels[j], y_pred=prediction)
        total_loss += loss_value

        # get gradients of this tape
        gradients = tape.gradient(loss_value, train_vars)
        # Accumulate the gradients
        accum_gradient = [(accum_grad + grad) for accum_grad, grad in zip(accum_gradient, gradients)]

    # Now, after executing all the tapes you needed, we apply the optimization step
    # (but first we take the average of the gradients)
    accum_gradient = [this_grad / num_samples for this_grad in accum_gradient]
    # apply optimization step
    self.optimizer.apply_gradients(zip(accum_gradient, train_vars))

    epoch_loss = total_loss / num_samples
    print(f'Epoch loss: {epoch_loss}')
Using tf.Variable() should be avoided inside the training loop, since it will produce errors when trying to execute the code as a graph. If you use tf.Variable() inside your training function and then decorate it with @tf.function or apply tf.function(my_train_fcn) to obtain a graph function (i.e. for improved performance), the execution will raise an error.
This happens because tracing a function that creates a tf.Variable results in different behaviour than that observed in eager execution (re-utilization vs. creation, respectively). You can find more info on this in the TensorFlow help pages.
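To make that concrete, here is a hedged sketch of the same accumulation with the accumulator variables created once, outside the decorated step (the names model, loss_function and optimizer are assumptions carried over from the snippets above):
train_vars = model.trainable_variables
# create the accumulators ONCE, outside any tf.function
accum_gradient = [tf.Variable(tf.zeros_like(v), trainable=False) for v in train_vars]

@tf.function
def accumulate_step(sample, label):
    with tf.GradientTape() as tape:
        prediction = model(sample)
        loss_value = loss_function(y_true=label, y_pred=prediction)
    gradients = tape.gradient(loss_value, train_vars)
    for acc, grad in zip(accum_gradient, gradients):
        acc.assign_add(grad)
    return loss_value

def apply_accumulated_gradients(num_samples):
    # average, apply, then reset for the next accumulation round
    optimizer.apply_gradients(
        [(acc / num_samples, var) for acc, var in zip(accum_gradient, train_vars)])
    for acc in accum_gradient:
        acc.assign(tf.zeros_like(acc))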
In line with the Stack Overflow answer and the explanation provided on the TensorFlow website, below is code for accumulating gradients in TensorFlow 2.0:
def train(epochs):
    for epoch in range(epochs):
        for (batch, (images, labels)) in enumerate(dataset):
            with tf.GradientTape() as tape:
                logits = mnist_model(images, training=True)
                tvs = mnist_model.trainable_variables
                accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs]
                zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]
                loss_value = loss_object(labels, logits)
            loss_history.append(loss_value.numpy().mean())
            grads = tape.gradient(loss_value, tvs)
            #print(grads[0].shape)
            #print(accum_vars[0].shape)
            accum_ops = [accum_vars[i].assign_add(grad) for i, grad in enumerate(grads)]
            optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
        print('Epoch {} finished'.format(epoch))

# Call the above function
train(epochs=3)
Complete code can be found in this Github Gist.

TensorFlow 2.0: Eager execution of training either returns bad results or doesn't learn at all

I am experimenting with TensorFlow 2.0 (alpha). I want to implement a simple feed-forward network with two output nodes for binary classification (it's a 2.0 version of this model).
This is a simplified version of the script. After defining a simple Sequential() model, I set:
# import layers + dropout & activation
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.activations import elu, softmax
# Neural Network Architecture
n_input = X_train.shape[1]
n_hidden1 = 15
n_hidden2 = 10
n_output = y_train.shape[1]
model = tf.keras.models.Sequential([
    Dense(n_input, input_shape=(n_input,), activation=elu),  # Input layer
    Dropout(0.2),
    Dense(n_hidden1, activation=elu),  # hidden layer 1
    Dropout(0.2),
    Dense(n_hidden2, activation=elu),  # hidden layer 2
    Dropout(0.2),
    Dense(n_output, activation=softmax)  # Output layer
])
# define loss and accuracy
bce_loss = tf.keras.losses.BinaryCrossentropy()
accuracy = tf.keras.metrics.BinaryAccuracy()
# define optimizer
optimizer = tf.optimizers.Adam(learning_rate = 0.001)
# save training progress in lists
loss_history = []
accuracy_history = []
# loop over 1000 epochs
for epoch in range(1000):
    with tf.GradientTape() as tape:
        # take binary cross-entropy (bce_loss)
        current_loss = bce_loss(model(X_train), y_train)
        # Update weights based on the gradient of the loss function
        gradients = tape.gradient(current_loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    # save in history vectors
    current_loss = current_loss.numpy()
    loss_history.append(current_loss)
    accuracy.update_state(model(X_train), y_train)
    current_accuracy = accuracy.result().numpy()
    accuracy_history.append(current_accuracy)
    # print loss and accuracy scores each 100 epochs
    if (epoch + 1) % 100 == 0:
        print(str(epoch + 1) + '.\tTrain Loss: ' + str(current_loss) + ',\tAccuracy: ' + str(current_accuracy))
    accuracy.reset_states()
print('\nTraining complete.')
Training runs without errors, however strange things happen:
Sometimes, the network doesn't learn anything: all loss and accuracy scores are constant throughout the epochs.
Other times, the network is learning, but very badly: accuracy never went beyond 0.4 (while in TensorFlow 1.x I got an effortless 0.95+). Such low performance suggests to me that something went wrong in the training.
Other times, the accuracy improves very slowly, while the loss remains constant the whole time.
What can cause these problems? Please help me understand my mistakes.
UPDATE:
After some corrections, I can make the network learn. However, its performance is extremely poor. After 1000 epochs, it reaches about 40% accuracy, which clearly means something is still wrong. Any help is appreciated.
tf.GradientTape records every operation that happens inside its scope.
You don't want to record the gradient calculation in the tape; you only want to compute the forward loss.
with tf.GradientTape() as tape:
    # take binary cross-entropy (bce_loss)
    current_loss = bce_loss(model(df), classification)

# End of tape scope

# Update weights based on the gradient of the loss function
gradients = tape.gradient(current_loss, model.trainable_variables)
# The tape is now consumed
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
More importantly, I don't see a loop over the training set, so I suppose the complete code looks like:
for epoch in range(n_epochs):
    for df, classification in dataset:
        # your code that computes loss and trains
Moreover, the usage of the metrics is wrong.
You want to accumulate, i.e. update the internal state of, the accuracy operation at every training step and measure the overall accuracy at the end of every epoch.
Thus you have to:
# Measure the accuracy inside the training loop
accuracy.update_state(model(df), classification)
and call accuracy.result() only at the end of the epoch, when all the accuracy values have been accumulated into the metric.
Remember to call the .reset_states() method to clear the metric's state, resetting it to zero at the end of every epoch.
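Putting these corrections together, a hedged sketch of the full loop might look like the following (the batched dataset is an assumption; note that Keras losses and metrics expect the (y_true, y_pred) argument order):
for epoch in range(n_epochs):
    for X_batch, y_batch in dataset:  # assumed (features, labels) batches
        with tf.GradientTape() as tape:
            current_loss = bce_loss(y_batch, model(X_batch))
        gradients = tape.gradient(current_loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
        accuracy.update_state(y_batch, model(X_batch))
    print('Epoch', epoch + 1, 'accuracy:', accuracy.result().numpy())
    accuracy.reset_states()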

Use custom loss value while training in tensorflow

I would like to train my neural network using a custom loss value of my own. Therefore, I would like to perform a forward pass for one mini-batch to store the activations in memory, and then perform back propagation using my own loss value. This is to be done using TensorFlow.
Finally, I need to do something like:
sess.run(optimizer, feed_dict={x: training_data, loss: my_custom_loss_value})
Is that possible? I am assuming that the optimizer depends on the loss, which by itself depends on the input. Therefore, I want the inputs to be fed into the graph, but I want to use my own value for the loss.
I guess that since the optimizer depends on the activations, they will be evaluated; in other words, the input is going to be fed into the network. Here is an example:
import tensorflow as tf

a = tf.Variable(tf.constant(8.0))
a = tf.Print(input_=a, data=[a], message="a:")
b = tf.Variable(tf.constant(6.0))
b = tf.Print(input_=b, data=[b], message="b:")
c = a * b
optimizer = tf.train.AdadeltaOptimizer(learning_rate=0.1).minimize(c)
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)
    value, _ = sess.run([c, optimizer], feed_dict={c: 1})
    print(value)
Finally, the printed value is 1.0, while the console shows a:[8]b:[6], which means that the inputs got evaluated.
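To make the behaviour a bit more explicit, here is a hedged variation of the same toy graph suggesting that the gradients are still computed from the graph's own inputs rather than from the value fed for c:
import tensorflow as tf

a = tf.Variable(8.0)
b = tf.Variable(6.0)
c = a * b
# d(a*b)/da is b, so the gradient should not depend on the value fed for c
grad_a = tf.gradients(c, a)[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad_a, feed_dict={c: 1.0}))  # expected: 6.0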
Exactly so.
When you train with Gradient Descent or any other optimization algorithm such as AdamOptimizer(), the optimizer minimizes your loss function, which could be a softmax cross-entropy (tf.nn.softmax_cross_entropy_with_logits) for multi-class classification, a squared error loss (tf.losses.mean_squared_error) for regression, or your own custom loss. The loss function is evaluated or computed using the model hypothesis.
So TensorFlow uses this cascade approach to train the model hypothesis by calling tf.Session().run() on the optimizer. See the following rough example in a multi-class classification setting:
batch_size = 128

# build the linear model
hypothesis = tf.add(tf.matmul(input_X, weight), bias)

# softmax cross entropy loss or cost function for logistic regression
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=targets,
                                                              logits=hypothesis))

# optimizer to minimize loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)

# execute in Session
with tf.Session() as sess:
    # initialize all variables
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()

    # Train the model
    for steps in range(1000):
        mini_batch = zip(range(0, X_train.shape[0], batch_size),
                         range(batch_size, X_train.shape[0] + 1, batch_size))

        # train using mini-batches
        for (start, end) in mini_batch:
            sess.run(optimizer, feed_dict={input_X: X_features[start:end],
                                           input_y: y_targets[start:end]})

Online or batch training by default in tensorflow

I have the following question: I'm trying to learn TensorFlow and I still can't find where to set the training as online or batch. For example, if I have the following code to train a neural network:
loss_op = tf.reduce_mean(tf.pow(neural_net(X) - Y, 2))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
If I give all the data at the same time (i.e. batch_x has all the data), does that mean it is training as batch training? Or does the TensorFlow optimizer optimize in a different way behind the scenes? Is it wrong to do a for loop feeding one data sample at a time? Does that count as single-step (online) training? Thank you for your help.
There are mainly three types of gradient descent. Specifically:
Stochastic Gradient Descent
Batch Gradient Descent
Mini-Batch Gradient Descent
Here is a good tutorial (https://machinelearningmastery.com/gentle-introduction-mini-batch-gradient-descent-configure-batch-size/) on the above three methods, with their upsides and downsides.
For your question, the following is a standard sample TensorFlow training loop:
N_EPOCHS = # Need to define here
BATCH_SIZE = # Need to define here

with tf.Session() as sess:
    train_count = len(train_x)

    for i in range(1, N_EPOCHS + 1):
        for start, end in zip(range(0, train_count, BATCH_SIZE),
                              range(BATCH_SIZE, train_count + 1, BATCH_SIZE)):
            sess.run(train_op, feed_dict={X: train_x[start:end],
                                          Y: train_y[start:end]})
Here N_EPOCHS is the number of passes over the whole training dataset, and you can set BATCH_SIZE according to your gradient descent method:
For Stochastic Gradient Descent, BATCH_SIZE = 1.
For Batch Gradient Descent, BATCH_SIZE = training dataset size.
For Mini-Batch Gradient Descent, 1 << BATCH_SIZE << training dataset size.
Among the three methods, the most popular is Mini-Batch Gradient Descent. However, you need to set the BATCH_SIZE parameter according to your requirements. A good default for BATCH_SIZE might be 32.
Hope this helps.
Normally the first dimension of the data placeholders in TensorFlow is the batch size, and TensorFlow doesn't fix the training strategy by default. You can set that first dimension to determine whether training is online (first dimension is 1) or mini-batch (typically tens). For example:
self.enc_batch = tf.placeholder(tf.int32, [hps.batch_size, None], name='enc_batch')
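If you want the same graph to support both settings, a common pattern (sketched here; the feature/output dimensions and variable names are assumptions) is to leave the batch dimension unspecified and decide the batch size only at feed time:
X = tf.placeholder(tf.float32, [None, n_features], name='X')  # n_features assumed
Y = tf.placeholder(tf.float32, [None, n_outputs], name='Y')   # n_outputs assumed

# online (single-sample) step:
sess.run(train_op, feed_dict={X: train_x[i:i + 1], Y: train_y[i:i + 1]})
# full-batch step:
sess.run(train_op, feed_dict={X: train_x, Y: train_y})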
