Online or batch training by default in tensorflow - python

I have the following question: I'm trying to learn TensorFlow and I still can't find where to set the training as online or batch. For example, if I have the following code to train a neural network:
loss_op = tf.reduce_mean(tf.pow(neural_net(X) - Y, 2))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
If I give all the data at the same time (i.e. batch_x contains all the data), does that mean it is training as batch training, or does the TensorFlow optimizer optimize in a different way behind the scenes? Is it wrong if I do a for loop feeding one data sample at a time? Does that count as single-step (online) training? Thank you for your help.

There are mainly three types of gradient descent:
Stochastic Gradient Descent
Batch Gradient Descent
Mini Batch Gradient Descent
Here is a good tutorial (https://machinelearningmastery.com/gentle-introduction-mini-batch-gradient-descent-configure-batch-size/) on the above three methods with their upsides and downsides.
For your question, the following is a standard sample TensorFlow training loop:
N_EPOCHS = ...    # need to define here
BATCH_SIZE = ...  # need to define here

with tf.Session() as sess:
    train_count = len(train_x)
    for i in range(1, N_EPOCHS + 1):
        for start, end in zip(range(0, train_count, BATCH_SIZE),
                              range(BATCH_SIZE, train_count + 1, BATCH_SIZE)):
            sess.run(train_op, feed_dict={X: train_x[start:end],
                                          Y: train_y[start:end]})
Here N_EPOCHS is the number of passes over the whole training dataset, and you can set BATCH_SIZE according to your gradient descent method:
For Stochastic Gradient Descent, BATCH_SIZE = 1.
For Batch Gradient Descent, BATCH_SIZE = training dataset size.
For Mini Batch Gradient Descent, 1 << BATCH_SIZE << training dataset size.
Among the three methods, the most popular is Mini Batch Gradient Descent. However, you need to set the BATCH_SIZE parameter according to your requirements; a good default for BATCH_SIZE is 32.
Hope this helps.

Normally the first dimension of the data placeholders in TensorFlow is the batch_size, and TensorFlow doesn't fix the training strategy by default. You can set that first dimension to determine whether training is online (first dimension is 1) or mini-batch (normally tens of samples). For example:
self.enc_batch = tf.placeholder(tf.int32, [hps.batch_size, None], name='enc_batch')
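To tie the two answers together, here is a minimal sketch (n_features and n_outputs are illustrative sizes, and X, Y, train_op, sess, train_x and train_y are assumed from the question's setup): with None as the first dimension, the batch size is decided purely by what you feed.
import tensorflow as tf

# Placeholders with None as the first dimension accept any batch size.
X = tf.placeholder(tf.float32, [None, n_features], name='X')
Y = tf.placeholder(tf.float32, [None, n_outputs], name='Y')

# Online / single-step training: feed one sample per run call (keep the 2-D shape).
sess.run(train_op, feed_dict={X: train_x[i:i + 1], Y: train_y[i:i + 1]})

# Mini-batch (or full-batch) training: feed a slice of the data per run call.
sess.run(train_op, feed_dict={X: train_x[start:end], Y: train_y[start:end]})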

Related

Tensorflow Custom Training With Phases

I need to create a custom training loop with Tensorflow / Keras (because I want to have more than one optimizer and tell which weights each optimizer should act upon).
Although this tutorial and that one are quite clear on this matter, they miss a very important point: how do I predict during the training phase and how do I predict during the validation phase?
Suppose my model has Dropout layers or BatchNormalization layers. They certainly behave in a completely different way depending on whether they are in training or validation.
How do I adapt these tutorials? This is a dummy example (may contain one or two pieces of pseudocode):
# Iterate over epochs.
for epoch in range(3):
    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        # persistent=True because tape.gradient() is called twice below
        with tf.GradientTape(persistent=True) as tape:
            # model with two outputs
            # IMPORTANT: must be in training phase (use dropouts, calculate batch statistics)
            logits1, logits2 = model(x_batch_train)  # must be "training"
            loss_value1 = loss_fn1(y_batch_train[0], logits1)
            loss_value2 = loss_fn2(y_batch_train[1], logits2)

        grads1 = tape.gradient(loss_value1, model.trainable_weights[selection1])
        grads2 = tape.gradient(loss_value2, model.trainable_weights[selection2])

        optimizer1.apply_gradients(zip(grads1, model.trainable_weights[selection1]))
        optimizer2.apply_gradients(zip(grads2, model.trainable_weights[selection2]))

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        # IMPORTANT: must be validation phase
        # dropouts are off: calculate all neurons and divide value
        # batch norms use previously calculated statistics
        val_logits1, val_logits2 = model(x_batch_val)
        # .... do the evaluations
I think you can just pass a training parameter when you call a tf.keras.Model, and it will be passed down to the layers:
# On training
logits1, logits2 = model(x_batch_train, training=True)
# On evaluation
val_logits1, val_logits2 = model(x_batch_val, training=False)
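As a minimal sketch of why this works (MyDropoutBlock is just an illustrative name, not part of the question): Keras forwards the training argument to any layer whose call() accepts it, which is how Dropout and BatchNormalization switch behaviour.
import tensorflow as tf

class MyDropoutBlock(tf.keras.layers.Layer):
    """Toy layer showing how the training flag is propagated."""
    def __init__(self, rate=0.2):
        super().__init__()
        self.dropout = tf.keras.layers.Dropout(rate)

    def call(self, inputs, training=None):
        # model(x, training=True/False) passes the flag down to layers
        # whose call() signature accepts a `training` argument.
        return self.dropout(inputs, training=training)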

TensorFlow 2.0: Eager execution of training either returns bad results or doesn't learn at all

I am experimenting with TensorFlow 2.0 (alpha). I want to implement a simple feed-forward network with two output nodes for binary classification (it's a 2.0 version of this model).
This is a simplified version of the script. After I defined a simple Sequential() model, I set:
# import layers + dropout & activation
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.activations import elu, softmax
# Neural Network Architecture
n_input = X_train.shape[1]
n_hidden1 = 15
n_hidden2 = 10
n_output = y_train.shape[1]
model = tf.keras.models.Sequential([
    Dense(n_input, input_shape=(n_input,), activation=elu),  # Input layer
    Dropout(0.2),
    Dense(n_hidden1, activation=elu),  # hidden layer 1
    Dropout(0.2),
    Dense(n_hidden2, activation=elu),  # hidden layer 2
    Dropout(0.2),
    Dense(n_output, activation=softmax)  # Output layer
])
# define loss and accuracy
bce_loss = tf.keras.losses.BinaryCrossentropy()
accuracy = tf.keras.metrics.BinaryAccuracy()
# define optimizer
optimizer = tf.optimizers.Adam(learning_rate = 0.001)
# save training progress in lists
loss_history = []
accuracy_history = []
# loop over 1000 epochs
for epoch in range(1000):
    with tf.GradientTape() as tape:
        # take binary cross-entropy (bce_loss)
        current_loss = bce_loss(model(X_train), y_train)
        # Update weights based on the gradient of the loss function
        gradients = tape.gradient(current_loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    # save in history vectors
    current_loss = current_loss.numpy()
    loss_history.append(current_loss)

    accuracy.update_state(model(X_train), y_train)
    current_accuracy = accuracy.result().numpy()
    accuracy_history.append(current_accuracy)

    # print loss and accuracy scores each 100 epochs
    if (epoch + 1) % 100 == 0:
        print(str(epoch + 1) + '.\tTrain Loss: ' + str(current_loss) + ',\tAccuracy: ' + str(current_accuracy))

    accuracy.reset_states()
print('\nTraining complete.')
Training runs without errors, but strange things happen:
Sometimes, the Network doesn't learn anything. All loss and accuracy scores are constant throughout all the epochs.
Other times, the network is learning, but very badly. Accuracy never goes beyond 0.4 (while in TensorFlow 1.x I got an effortless 0.95+). Such low performance suggests to me that something went wrong in the training.
Other times, the accuracy is very slowly improving, while the loss remains constant all the time.
What can cause these problems? Please help me understand my mistakes.
UPDATE:
After some corrections, I can make the network learn. However, its performance is extremely poor. After 1000 epochs, it reaches about 40% accuracy, which clearly means something is still wrong. Any help is appreciated.
The tf.GradientTape is recording every operation that happens inside its scope.
You don't want to record the gradient calculation in the tape; you only want to compute the forward pass of the loss.
with tf.GradientTape() as tape:
    # take binary cross-entropy (bce_loss)
    current_loss = bce_loss(model(df), classification)

# End of tape scope

# Update weights based on the gradient of the loss function
gradients = tape.gradient(current_loss, model.trainable_variables)
# The tape is now consumed
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
More importantly, I don't see a loop over the training set, therefore I suppose the complete code looks like:
for epoch in range(n_epochs):
    for df, classification in dataset:
        # your code that computes the loss and trains
Moreover, the usage of the metric is wrong.
You want to accumulate, i.e. update, the internal state of the accuracy metric at every training step and measure the overall accuracy at the end of every epoch.
Thus you have to:
# Measure the accuracy inside the training loop
accuracy.update_state(model(df), classification)
And call accuracy.result() only at the end of the epoch, when all the accuracy values have been accumulated into the metric.
Remember to call the .reset_states() method to clear the metric's internal state, resetting it to zero at the end of every epoch.
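Putting the pieces together, a minimal sketch of one corrected loop could look like the following (model, bce_loss, accuracy, optimizer and a batched train_dataset of (x, y) pairs are assumed from the question; note that Keras losses and metrics expect arguments in (y_true, y_pred) order):
for epoch in range(n_epochs):
    for x_batch, y_batch in train_dataset:
        with tf.GradientTape() as tape:
            # Only the forward pass and the loss are recorded on the tape.
            predictions = model(x_batch, training=True)
            current_loss = bce_loss(y_batch, predictions)  # (y_true, y_pred)

        # Gradient computation and the weight update happen outside the tape scope.
        gradients = tape.gradient(current_loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

        # Accumulate the metric's internal state at every training step.
        accuracy.update_state(y_batch, predictions)

    # Read the epoch-level accuracy, then reset the metric for the next epoch.
    print('Epoch', epoch + 1, 'accuracy:', accuracy.result().numpy())
    accuracy.reset_states()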

Use custom loss value while training in tensorflow

I would like to train my neural network using a custom loss value of my own. Therefore, I would like to perform feed-forward propagation for one mini-batch to store the activations in memory, and then perform back-propagation using my own loss value. This is to be done using TensorFlow.
Finally, I need to do something like:
sess.run(optimizer, feed_dict={x: training_data, loss: my_custom_loss_value})
Is that possible? I am assuming that the optimizer depends on the loss, which itself depends on the input. Therefore, I want the inputs to be fed into the graph, but I want to use my own value for the loss.
I guess that since the optimizer depends on the activations, they will be evaluated; in other words, the input is going to be fed into the network. Here is an example:
import tensorflow as tf
a = tf.Variable(tf.constant(8.0))
a = tf.Print(input_=a, data=[a], message="a:")
b = tf.Variable(tf.constant(6.0))
b = tf.Print(input_=b, data=[b], message="b:")
c = a * b
optimizer = tf.train.AdadeltaOptimizer(learning_rate=0.1).minimize(c)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    value, _ = sess.run([c, optimizer], feed_dict={c: 1})
    print(value)
Finally, the printed value is 1.0, while the console shows: a:[8]b:[6] which means that the inputs got evaluated.
Exactly so.
When you train with gradient descent or any other optimization algorithm like AdamOptimizer(), the optimizer minimizes your loss function, which could be a softmax cross-entropy (tf.nn.softmax_cross_entropy_with_logits) for multi-class classification, a squared error loss (tf.losses.mean_squared_error) for regression, or your own custom loss. The loss function is evaluated and computed using the model hypothesis.
So TensorFlow uses this cascaded approach to train the model hypothesis by calling tf.Session().run() on the optimizer. See the following rough example in a multi-class classification setting:
batch_size = 128

# build the linear model
hypothesis = tf.add(tf.matmul(input_X, weight), bias)

# softmax cross entropy loss or cost function for logistic regression
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=targets,
                                                              logits=hypothesis))

# optimizer to minimize loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)

# execute in Session
with tf.Session() as sess:
    # initialize all variables
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()

    # Train the model
    for steps in range(1000):
        mini_batch = zip(range(0, X_train.shape[0], batch_size),
                         range(batch_size, X_train.shape[0] + 1, batch_size))

        # train using mini-batches
        for (start, end) in mini_batch:
            sess.run(optimizer, feed_dict={input_X: X_features[start:end],
                                           input_y: y_targets[start:end]})

How would you do ReduceLROnPlateau in Tensorflow?

Keras has a callback that reduces the learning rate upon a plateauing of a specified metric, called ReduceLROnPlateau.
How do you create such a feature in native Tensorflow? In a Tensorflow model, is it possible to call on Keras callbacks? Or does it need to be written in native Tensorflow? If so, how would you set the learning rate in the middle of a training session?
I'm afraid TensorFlow doesn't support this out of the box (and Keras callbacks aren't directly applicable either). Here's the list of supported learning rate scheduling techniques: they are all different algorithms, but they are self-contained, i.e. independent of the training performance.
But the good news is that all optimizers accept a tensor for the learning rate. So you can create a variable or a placeholder for the learning rate and change its value based on validation performance (which you'll also need to calculate yourself). Here's an example from this wonderful answer:
learning_rate = tf.placeholder(tf.float32, shape=[])
# ...
train_step = tf.train.GradientDescentOptimizer(
    learning_rate=learning_rate).minimize(mse)
sess = tf.Session()
# Feed different values for learning rate to each training step.
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.01})
sess.run(train_step, feed_dict={learning_rate: 0.01})
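As a rough sketch of the "change its value based on validation performance" part (purely illustrative: n_epochs, train_batches, val_loss_op and val_feed are assumed names, while learning_rate and train_step come from the snippet above):
current_lr = 0.1
best_val_loss = float('inf')
patience, wait, factor, min_delta = 5, 0, 0.5, 1e-4

for epoch in range(n_epochs):
    for batch_feed in train_batches:               # assumed iterable of feed_dicts
        batch_feed[learning_rate] = current_lr
        sess.run(train_step, feed_dict=batch_feed)

    # Compute validation performance yourself and reduce the LR on a plateau.
    val_loss = sess.run(val_loss_op, feed_dict=val_feed)
    if val_loss < best_val_loss - min_delta:
        best_val_loss, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            current_lr *= factor
            wait = 0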
Here's a not-quite-1:1 conversion of the Keras ReduceLROnPlateau that I wrote up. It examines each batch's loss instead of sampling at the end of each epoch. Cooldown and patience are still in terms of epochs, though. It can be used just like tf.train.exponential_decay(...).
I think there's probably a better way to go about it than simply monitoring the minimum loss value, as the minimum value could be an extreme outlier. A metric in terms of some running average of the loss gradient might be better.
import math
import numpy as np
import tensorflow as tf

def plateau_decay(learning_rate, global_step, loss, data_count, batch_size,
                  factor=0.1, patience=10, min_delta=1e-4, cooldown=0, min_lr=0):
    steps_per_epoch = math.ceil(data_count // batch_size)
    patient_steps = patience * steps_per_epoch
    cooldown_steps = cooldown * steps_per_epoch

    if not isinstance(learning_rate, tf.Tensor):
        learning_rate = tf.get_variable('learning_rate', initializer=tf.constant(learning_rate),
                                        trainable=False, collections=[tf.GraphKeys.LOCAL_VARIABLES])

    with tf.variable_scope('plateau_decay'):
        step = tf.get_variable('step', trainable=False, initializer=global_step,
                               collections=[tf.GraphKeys.LOCAL_VARIABLES])
        best = tf.get_variable('best', trainable=False, initializer=tf.constant(np.Inf, tf.float32),
                               collections=[tf.GraphKeys.LOCAL_VARIABLES])

        def _update_best():
            with tf.control_dependencies([
                tf.assign(best, loss),
                tf.assign(step, global_step),
                tf.print('Plateau Decay: Updated Best - Step:', global_step,
                         'Next Decay Step:', global_step + patient_steps, 'Loss:', loss)
            ]):
                return tf.identity(learning_rate)

        def _decay():
            with tf.control_dependencies([
                tf.assign(best, loss),
                tf.assign(learning_rate, tf.maximum(tf.multiply(learning_rate, factor), min_lr)),
                tf.assign(step, global_step + cooldown_steps),
                tf.print('Plateau Decay: Decayed LR - Step:', global_step,
                         'Next Decay Step:', global_step + cooldown_steps + patient_steps,
                         'Learning Rate:', learning_rate)
            ]):
                return tf.identity(learning_rate)

        def _no_op():
            return tf.identity(learning_rate)

        met_threshold = tf.less(loss, best - min_delta)
        should_decay = tf.greater_equal(global_step - step, patient_steps)

        return tf.cond(met_threshold, _update_best, lambda: tf.cond(should_decay, _decay, _no_op))
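Since it can be used just like tf.train.exponential_decay(...), a hypothetical wiring could look like this (loss_op, N_TRAIN and BATCH_SIZE are assumed to come from your own graph and input pipeline):
global_step = tf.train.get_or_create_global_step()
lr = plateau_decay(0.001, global_step, loss_op,
                   data_count=N_TRAIN, batch_size=BATCH_SIZE,
                   factor=0.1, patience=10)
train_op = tf.train.GradientDescentOptimizer(lr).minimize(loss_op, global_step=global_step)

# The decay state lives in LOCAL_VARIABLES, so initialize those as well:
# sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])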

tensorflow neural network model based on mnist with two outputs

I want to train a neural network with 12 inputs and 2 outputs. Here I have a simple TensorFlow neural network that has two outputs. When I run the code it always consistently gives one output. That is, if the two outputs are labeled 'l1' and 'l2', the model always chooses 'l1' as its output. Is this a problem with my input (that it doesn't vary enough between 'l1' and 'l2') or is this a problem with choosing to use just two outputs? That is my question. If it's the latter, what do I do to remedy this? My model is supposed to detect skin tones in a photo ('l1' = skin tone, 'l2' = not skin tone). I'm not sure this makes sense. It is adapted from the MNIST example, but that code has ten outputs.
def nn_setup(self):
    input_num = 4 * 3
    mid_num = 3
    output_num = 2

    x = tf.placeholder(tf.float32, [None, input_num])
    W_1 = tf.Variable(tf.zeros([input_num, mid_num]))
    b_1 = tf.Variable(tf.zeros([mid_num]))
    y_mid = tf.nn.softmax(tf.matmul(x, W_1) + b_1)

    W_2 = tf.Variable(tf.zeros([mid_num, output_num]))
    b_2 = tf.Variable(tf.zeros([output_num]))
    y = tf.nn.softmax(tf.matmul(y_mid, W_2) + b_2)

    y_ = tf.placeholder(tf.float32, [None, output_num])

    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    init = tf.initialize_all_variables()
    self.sess = tf.Session()
    self.sess.run(init)

    for i in range(1000):
        batch_xs, batch_ys = self.get_nn_next_train()
        self.sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    self.nn_test.images, self.nn_test.labels = self.get_nn_next_test()
    print(self.sess.run(accuracy, feed_dict={x: self.nn_test.images, y_: self.nn_test.labels}))
There are a few "odd" things with your network, such as having softmax in your middle layer.
There are two major issues I can find with your implementation.
1. Weight initialisation
W_1 = tf.Variable(tf.zeros([input_num, mid_num]))
W_2 = tf.Variable(tf.zeros([mid_num, output_num]))
This will initialise the weights to be identical. So they will have identical gradient values, and be changed at each step identically.
Effectively by doing this you have created a network with one neuron in each layer (which is then copied to create the layer matrix that you use).
Use a different initial value; it is usual to take a small random matrix like this:
W_1 = tf.Variable(tf.random_normal([input_num, mid_num], stddev=0.5))
In general you will want a smaller standard deviation the larger your layers are. You don't have to do this for biases as well, but you can if you like.
This won't fix everything with your network, but it should at least start to calculate different values from input data and train a little.
2. Use of cost function
You have used this loss function incorrectly:
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(y, y_))
. . . because softmax_cross_entropy_with_logits is designed to work with the input to softmax, not its output. So your cost function is incorrect. Instead you want to reference y_logits, like this, where you currently calculate y:
y_logits = tf.matmul(y_mid, W_2) + b_2
y = tf.nn.softmax(y_logits)
Then your cross-entropy would be
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(y_logits, y_))
After the hidden layer initialization, you have calculated the softmax of the hidden layer's logits: y_mid = tf.nn.softmax(tf.matmul(x, W_1) + b_1). In a classification problem, softmax should be applied to the values obtained from the output layer. Try something like y_mid = tf.nn.relu(tf.matmul(x, W_1) + b_1) to compute the activations of the hidden layer and see if your classification improves. If that does not solve your problem, check the populations of 'l1' and 'l2' in your training data. If your training data is highly skewed towards 'l1', you will always get 'l1' as the output. You may consider minority oversampling or undersampling techniques to resolve the class imbalance problem.
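For reference, a minimal sketch of the graph with both answers' suggestions applied (random weight initialization, ReLU in the hidden layer, and cross-entropy computed from the logits), keeping the dimensions from the question's nn_setup:
import tensorflow as tf

input_num, mid_num, output_num = 4 * 3, 3, 2

x = tf.placeholder(tf.float32, [None, input_num])
y_ = tf.placeholder(tf.float32, [None, output_num])

# Small random initial weights instead of zeros, so the neurons can diverge.
W_1 = tf.Variable(tf.random_normal([input_num, mid_num], stddev=0.5))
b_1 = tf.Variable(tf.zeros([mid_num]))
y_mid = tf.nn.relu(tf.matmul(x, W_1) + b_1)   # ReLU hidden layer, no softmax here

W_2 = tf.Variable(tf.random_normal([mid_num, output_num], stddev=0.5))
b_2 = tf.Variable(tf.zeros([output_num]))
y_logits = tf.matmul(y_mid, W_2) + b_2        # raw logits for the loss
y = tf.nn.softmax(y_logits)                   # probabilities for prediction

# Cross-entropy computed from the logits, not from the softmax output.
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)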
