How to update weights in Stochastic Weight Averaging (SWA) on tensorflow? - python

I'm confused about how to implement tfa's SWA optimizer. There are two points here:
When you look at the documentation, it points you to this model averaging tutorial. That tutorial uses tfa.callbacks.AverageModelCheckpoint, which allows you to either
assign the moving-average weights to the model and save them, or
keep the old non-averaged weights while the saved model uses the averaged weights.
Having a distinct ModelCheckpoint that can save the moving-average weights (rather than the current weights) makes sense. However, it seems like SWA should be managing the weight averaging itself, which makes me want to set update_weights=False.
Is this correct? The tutorial uses update_weights=True.
There is a note in the documentation about SWA not updating the BN layers. Following the suggestion here, I did this,
# original training
model.fit(...)

# updating weights from final run
optimizer.assign_average_vars(model.variables)

# batch-norm hack: lr=0, as suggested in https://stackoverflow.com/a/64376062/607528
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0),
    loss=loss,
    metrics=metrics)
model.fit(
    data,
    validation_data=None,
    epochs=1,
    callbacks=final_callbacks)
before saving my model.
Is this correct?
Thanks!

The easiest way to deal with the batch norm is the following:
First, loop through all layers in your model and reset the moving mean and moving variance in the batch norm layers (in my example I assume the batch norm layers end with "bn"):
for l in model.layers:
    if l.name.split('_')[-1] == 'bn':  # e.g. conv1_bn
        l.moving_mean.assign(tf.zeros_like(l.moving_mean))
        l.moving_variance.assign(tf.ones_like(l.moving_variance))
After that, run your model for one epoch with training set to True so that the moving mean and variance are updated:
count = 0
for x, _ in dataset_train:
    _ = model(x, training=True)
    count += 1
    if count > steps_per_epoch:
        break

There are two ways of doing this. The first is to manually update the weights before saving, as in this example from the documentation.
import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.Sequential([...])
opt = tfa.optimizers.SWA(
    tf.keras.optimizers.SGD(lr=2.0), 100, 10)
model.compile(opt, ...)
model.fit(x, y, ...)
# Update the weights to their mean before saving
opt.assign_average_vars(model.variables)
model.save('model.h5')
The second option is to update the weights through AverageModelCheckpoint if you set update_weights=True, as the Colab notebook example shows:
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
                                                    update_weights=True)
...
# Build the model
model = create_model(moving_avg_sgd)
# Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
Notice that AverageModelCheckpoint also calls assign_average_vars before saving the model, as you can see in its source code:
def _save_model(self, epoch, logs):
    optimizer = self._get_optimizer()
    assert isinstance(optimizer, AveragedOptimizerWrapper)
    if self.update_weights:
        optimizer.assign_average_vars(self.model.variables)
    return super()._save_model(epoch, logs)
...

Related

PyTorch - Train imbalanced dataset (set weights) for object detection

I am quite new to PyTorch, and I am trying to use an object detection model for transfer learning in order to learn how to detect objects in my new dataset.
Here is how I load the dataset:
train_dataset = MyDataset(train_data_path, 512, 512, train_labels_path, get_train_transform())
train_loader = DataLoader(train_dataset,batch_size=8,shuffle=True,num_workers=4,collate_fn=collate_fn)
valid_dataset = MyDataset(test_data_path, 512, 512, test_labels_path, get_valid_transform())
valid_loader = DataLoader(valid_dataset,batch_size=8, shuffle=False,num_workers=4,collate_fn=collate_fn)
I define the model and optimizer as follows:
# load the Faster R-CNN pre-trained model
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="FasterRCNN_ResNet50_FPN_Weights.COCO_V1")
# get the number of input features
in_features = model.roi_heads.box_predictor.cls_score.in_features
# define a new head for the detector with the required number of classes
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
model = model.to(DEVICE)
# get the model parameters
params = [p for p in model.parameters() if p.requires_grad]
# define the optimizer
# We are using the SGD optimizer with a learning rate of 0.001 and momentum of 0.9.
optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9, weight_decay=0.0005)
I train the model as follows:
def train(train_data_loader, model, optimizer, train_loss_hist):
    global train_itr
    global train_loss_list

    prog_bar = tqdm(train_data_loader, total=len(train_data_loader), position=0, leave=True, ascii=True)

    # Then we have the for loop iterating over the batches.
    for i, data in enumerate(prog_bar):
        optimizer.zero_grad()
        images, targets = data
        images = list(image.to(DEVICE) for image in images)
        targets = [{k: v.to(DEVICE) for k, v in t.items()} for t in targets]

        # Forward pass
        loss_dict = model(images, targets)

        # Then we sum the losses and append the current iteration's loss value to the train_loss_list list.
        losses = sum(loss for loss in loss_dict.values())
        loss_value = losses.item()

        # We also send the current loss value to train_loss_hist of the Averager class.
        train_loss_list.append(loss_value)
        train_loss_hist.send(loss_value)

        # Then we backpropagate the gradients and update the parameters.
        losses.backward()
        optimizer.step()
        train_itr += 1

    return train_loss_list
I adapted this code from one I found, and I am not sure where the loss is defined (I have not defined any loss in the code, so I believe it uses the default loss that was used to train the original object detector). Considering that, how can I update my code to train my network on such an imbalanced dataset?
It seems that you have two questions.
How to deal with the imbalanced dataset.
Note that Faster R-CNN is an anchor-based detector, which means the number of anchors containing an object is extremely small compared to the total number of anchors, so you don't need to deal with the imbalanced dataset yourself. Alternatively, you can use RetinaNet, which introduced the focal loss to improve performance on imbalanced datasets.
Where is the loss function?
torchvision integrates the loss function inside the model object; you can step through the torchvision package with a debugger to see the implementation details. A short sketch of what this looks like follows.
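As a minimal sketch (assuming a torchvision Faster R-CNN like the one in the question; the dummy tensors are illustrative only), calling the model in training mode with images and targets returns a dictionary of losses, which is exactly what the train() function above sums:

import torch
import torchvision

# In train mode, torchvision detection models compute their losses internally
# and return them as a dict; no user-defined loss function is needed.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="FasterRCNN_ResNet50_FPN_Weights.COCO_V1")
model.train()

# Dummy batch: a list of image tensors and a list of target dicts.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[10.0, 10.0, 100.0, 100.0]]),
            "labels": torch.tensor([1])}]

loss_dict = model(images, targets)  # e.g. loss_classifier, loss_box_reg, ...
total_loss = sum(loss for loss in loss_dict.values())
print({k: float(v) for k, v in loss_dict.items()})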

Tensorflow model pruning gives 'nan' for training and validation losses

I'm trying to prune a base model that consists of several layers on top of a VGG network. It also contains a user-defined layer named instance_normalization. For pruning to be successful, I've defined the get_prunable_weights function of this layer as follows:
### defined for model pruning
def get_prunable_weights(self):
    return self.weights
I used the following function to obtain a to-be-pruned model structure using a base model named model:
def define_prune_model(self, model, img_shape, epochs, batch_size, validation_split=0.1):
    num_images = img_shape[0] * (1 - validation_split)
    end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs

    # Define model for pruning.
    pruning_params = {
        'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.5,
                                                                 final_sparsity=0.80,
                                                                 begin_step=0,
                                                                 end_step=end_step)
    }

    model_for_pruning = prune_low_magnitude(model, **pruning_params)

    model_for_pruning.compile(optimizer='adam',
                              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                              metrics=['accuracy'])

    model_for_pruning.summary()
    return model_for_pruning
Then, I wrote the following function to perform training on this pruning model:
def train_prune_model(self, model_for_pruning, train_images, train_labels,
                      epochs, batch_size, validation_split=0.1):
    callbacks = [
        tfmot.sparsity.keras.UpdatePruningStep(),
        tfmot.sparsity.keras.PruningSummaries(log_dir='./models/pruned'),
    ]
    model_for_pruning.fit(train_images, train_labels,
                          batch_size=batch_size, epochs=epochs, validation_split=validation_split,
                          callbacks=callbacks)
    return model_for_pruning
However, when training, the training and validation losses were all nan, and the final model's predictions were all zero, even though the base model passed to define_prune_model had trained and predicted correctly.
How can I solve this? Thank you in advance.
It is difficult to pinpoint the issue without more information. In particular, can you please give more detail (preferably as code) about your custom instance_normalization layer?
Assuming that the code is fine: since you mentioned that the model trains correctly without pruning, could it be that the pruning parameters are too harsh? After all, those options set 50% of the weights to zero right from the first learning step.
Here is what I would try (a sketch of a gentler schedule follows the list):
Experiment with a lower level of sparsity (especially initial_sparsity).
Start applying pruning later in the training (the begin_step argument of the pruning schedule). Some even prefer to train the model once without applying pruning at all, and then re-train with prune_low_magnitude().
Only prune at some steps, giving the model time to recover between prunings (the frequency argument).
Finally, should it still fail, try the usual cures for nan losses: reduce the learning rate, use regularization or gradient clipping, ...
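A minimal sketch of such a gentler schedule, reusing the names from the question (num_images, batch_size, epochs, model); the specific numbers are illustrative, not tuned:

import numpy as np
import tensorflow_model_optimization as tfmot

end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs

pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.10,     # much gentler start than 0.5
        final_sparsity=0.50,       # lower target sparsity
        begin_step=end_step // 4,  # let the model train for a while first
        end_step=end_step,
        frequency=200)             # prune every 200 steps instead of every 100
}
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)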

Access output of intermediate layers in Tensor-flow 2.0 in eager mode

I have a CNN that I built using TensorFlow 2.0, and I need to access the outputs of its intermediate layers. I went over other similar Stack Overflow questions, but all of them had solutions involving the Keras Sequential model.
I have tried using model.layers[index].output, but I get
Layer conv2d has no inbound nodes.
I can post my code here (which is super long), but I am sure that even without it someone can point me to how this can be done using just TensorFlow 2.0 in eager mode.
I stumbled onto this question while looking for an answer, and it took me some time to figure out, as I use the model subclassing API in TF 2.0 by default (as in https://www.tensorflow.org/tutorials/quickstart/advanced).
If somebody is in a similar situation, all you need to do is assign the intermediate output you want as an attribute of the class. Then keep the test_step without the @tf.function decorator and create a decorated copy of it, say val_step, for efficient internal computation of validation performance during training. As a short example, I have modified a few functions of the tutorial from the link accordingly. I'm assuming we need to access the output after flattening.
def call(self, x):
    x = self.conv1(x)
    x = self.flatten(x)
    self.intermediate = x  # assign it as an object attribute for accessing later
    x = self.d1(x)
    return self.d2(x)

# Remove the @tf.function decorator from test_step for prediction
def test_step(images, labels):
    predictions = model(images, training=False)
    t_loss = loss_object(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)
    return

# Create a decorated val_step for the object's internal use during training
@tf.function
def val_step(images, labels):
    return test_step(images, labels)
Now when you run model.predict() after training, using the un-decorated test_step, you can access the intermediate output via model.intermediate, which will be an EagerTensor whose value you get simply with model.intermediate.numpy(). However, if you don't remove the @tf.function decorator from test_step, this attribute holds a symbolic Tensor whose value is not so straightforward to obtain.
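A minimal usage sketch, assuming the subclassed model above and an illustrative batch called sample_images (the name is a placeholder, not from the tutorial):

# Run an eager forward pass so the attribute holds an EagerTensor.
_ = model(sample_images)
flattened_features = model.intermediate.numpy()  # convert to a NumPy array
print(flattened_features.shape)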
Thanks for answering my earlier question. I wrote this simple example to illustrate how what you're trying to do might be done in TensorFlow 2.x, using the MNIST dataset as the example problem.
The gist of the approach:
Build an auxiliary model (aux_model in the example below), which is a so-called "functional model" with multiple outputs. The first output is the output of the original model and will be used for loss calculation and backprop, while the remaining output(s) are the intermediate-layer outputs that you want to access.
Use tf.GradientTape() to write a custom training loop and expose the detailed gradient values on each individual variable of the model. Then you can pick out the gradients that are of interest to you. This requires that you know the ordering of the model's variables. But that should be relatively easy for a sequential model.
import tensorflow as tf

(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
# Scale the pixels to [0, 1] and add a channel dimension to match the input shape.
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0

# This is the original model.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=[28, 28, 1]),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")])

# Make an auxiliary model that exposes the output from the intermediate layer
# of interest, which is the first Dense layer in this case.
aux_model = tf.keras.Model(inputs=model.inputs,
                           outputs=model.outputs + [model.layers[1].output])

# Define a custom training loop using `tf.GradientTape()`, to make it easier
# to access gradients on specific variables (the kernel and bias of the first
# Dense layer in this case).
cce = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.optimizers.Adam()
with tf.GradientTape() as tape:
    # Do a forward pass on the model, retrieving the intermediate layer's output.
    y_pred, intermediate_output = aux_model(x_train)
    print(intermediate_output)  # Now you can access the intermediate layer's output.

    # Compute loss, to enable backprop.
    loss = cce(tf.one_hot(y_train, 10), y_pred)

# Do backprop. `gradients` here are for all variables of the model.
# But we know we want the gradients on the kernel and bias of the first
# Dense layer, which happens to be the first two variables of the model.
gradients = tape.gradient(loss, aux_model.variables)

# This is the gradient on the first Dense layer's kernel.
intermediate_layer_kernel_gradients = gradients[0]
print(intermediate_layer_kernel_gradients)

# This is the gradient on the first Dense layer's bias.
intermediate_layer_bias_gradients = gradients[1]
print(intermediate_layer_bias_gradients)

# Update the variables of the model.
optimizer.apply_gradients(zip(gradients, aux_model.variables))
The most straightforward solution is to build a sub-model that ends at the layer you are interested in:
mid_layer = tf.keras.Model(model.input, model.get_layer("layer_name").output)
You can now treat mid_layer as a model in its own right, for instance:
mid_layer.predict(X)
Oh, also, to get the name of a hidden layer, you can use this:
model.summary()
This will give you some insight into each layer's input/output shapes as well.

Tensorflow Custom Training With Phases

I need to create a custom training loop with Tensorflow / Keras (because I want to have more than one optimizer and tell which weights each optimizer should act upon).
Although this tutorial and that one too are quite clear regarding this matter, they miss a very important point: how do I predict in the training phase and how do I predict in the validation phase?
Suppose my model has Dropout layers or BatchNormalization layers. They certainly work in completely different ways depending on whether they are in training or validation mode.
How do I adapt these tutorials? This is a dummy example (may contain one or two pieces of pseudocode):
# Iterate over epochs.
for epoch in range(3):
    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        # persistent=True because tape.gradient() is called twice below
        with tf.GradientTape(persistent=True) as tape:
            # model with two outputs
            # IMPORTANT: must be in training phase (use dropouts, calculate batch statistics)
            logits1, logits2 = model(x_batch_train)  # must be "training"
            loss_value1 = loss_fn1(y_batch_train[0], logits1)
            loss_value2 = loss_fn2(y_batch_train[1], logits2)

        grads1 = tape.gradient(loss_value1, model.trainable_weights[selection1])
        grads2 = tape.gradient(loss_value2, model.trainable_weights[selection2])
        optimizer1.apply_gradients(zip(grads1, model.trainable_weights[selection1]))
        optimizer2.apply_gradients(zip(grads2, model.trainable_weights[selection2]))

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        # IMPORTANT: must be validation phase
        # dropouts are off: calculate all neurons and divide the value
        # batch norms use previously calculated statistics
        val_logits1, val_logits2 = model(x_batch_val)
        # ... do the evaluations
I think you can just pass a training parameter when you call a tf.keras.Model, and it will be passed down to the layers:
# On training
logits1, logits2 = model(x_batch_train, training=True)
# On evaluation
val_logits1, val_logits2 = model(x_batch_val, training=False)
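A tiny side illustration (an assumed example, not from the answer above) of how the training flag changes a layer's behaviour, here for Dropout:

import tensorflow as tf

layer = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 4))
print(layer(x, training=True))   # some units zeroed, the rest scaled by 1/(1 - rate)
print(layer(x, training=False))  # dropout disabled: the input passes through unchanged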

VGG, perceptual loss in keras

I'm wondering if it's possible to add a custom model to a loss function in keras. For example:
def model_loss(y_true, y_pred):
    inp = Input(shape=(128, 128, 1))
    x = Dense(2)(inp)
    x = Flatten()(x)
    model = Model(inputs=[inp], outputs=[x])

    a = model(y_pred)
    b = model(y_true)

    # calculate MSE
    mse = K.mean(K.square(a - b))
    return mse
This is a simplified example. I'll actually be using a VGG net in the loss, so I'm just trying to understand the mechanics of Keras.
The usual way of doing that is to append your VGG to the end of your model, making sure all its layers have trainable=False before compiling.
Then you recalculate your Y_train.
Suppose you have these models:
mainModel - the one you want to apply the loss function to
lossModel - the one that is part of the loss function you want
Create a new model appending one to another:
from keras.models import Model

lossOut = lossModel(mainModel.output)  # you pass the output of one model to the other
fullModel = Model(mainModel.input, lossOut)  # you create a model for training that follows a certain path in the graph
This model will have the exact same weights as mainModel and lossModel, and training it will affect the other models.
Make sure lossModel is not trainable before compiling:
lossModel.trainable = False
for l in lossModel.layers:
    l.trainable = False

fullModel.compile(loss='mse', optimizer=....)
Now adjust your data for training:
fullYTrain = lossModel.predict(originalYTrain)
And finally do the training:
fullModel.fit(xTrain, fullYTrain, ....)
This is old but I'm going to answer it because no one did directly. You definitely can call another model in a custom loss, and I actually think it's much easier than adding the model to the end of your main model and creating a whole new one and a whole new set of training labels.
Here is an example that calls both a model and an outside function that we define:
def normalize_tensor(in_feat):
    norm_factor = tf.math.sqrt(tf.keras.backend.sum(in_feat**2, axis=-1, keepdims=True))
    return in_feat / (norm_factor + 1e-10)

def VGGLoss(y_true, y_pred):
    true = vgg(preprocess_input(y_true * 255))
    pred = vgg(preprocess_input(y_pred * 255))

    t = normalize_tensor(true)
    p = normalize_tensor(pred)

    vggLoss = tf.math.reduce_mean(tf.math.square(t - p))
    return vggLoss
vgg() just calls the VGG16 model with no head.
preprocess_input is a Keras function that normalizes inputs for the VGG model (here we assume your model outputs an image in the 0-1 range, so we multiply by 255 to get the 0-255 range VGG expects).
normalize_tensor takes the VGG activations and scales them to a magnitude of 1 for each channel; otherwise your loss will be massive.
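A minimal sketch of how vgg could be set up for this loss (the weights, input shape, and the final compile line are assumptions for illustration, not part of the answer above):

import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Headless VGG16 feature extractor, frozen so the perceptual loss does not train it.
# The input shape is illustrative; match it to your own model's output size.
vgg = VGG16(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
vgg.trainable = False

# `your_model` is a placeholder for whatever network you are training with this loss.
your_model.compile(optimizer='adam', loss=VGGLoss)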
