I have a model that differs between training and inference. More precisely, it is an SSD (Single Shot Detector) that requires an additional DetectionOutput layer to be added on top of its training counterpart. In Caffe, one can use the 'include' parameter in the layer definition to turn layers on/off.
But what should I do after having defined and compiled the model, if I wish to run validation after each epoch (inside a callback)?
I cannot add DetectionOutput during training, since its output is not compatible with the input expected by the loss.
I would also like to avoid creating the DetectionOutput layer inside a callback or a custom metric, since it requires sensible hyperparameters and I want to keep the model-creation logic inside a dedicated module.
In the following example code the model is created for inference, so the DetectionOutput layer is present and the evaluation runs just fine:
model, _, _ = build_model(input_shape=(args.input_height, args.input_width, 3),
                          n_classes=num_classes,
                          mode='inference')
model.load_weights(args.model, by_name=True)

evaluation = SSDEvaluation(model=model,
                           evaluator=PascalDetectionEvaluator(categories),
                           data_files=[args.eval_data])
metrics = evaluation.evaluate()
But this callback does not work properly, because during training the model does not have DetectionOutput:
class SSDTensorboard(Callback):
    def __init__(self, evaluator, eval_data):
        self.evaluator = evaluator
        self.eval_data = eval_data

    def on_train_begin(self, logs={}):
        self.metrics = []

    def on_epoch_end(self, epoch, logs={}):
        evaluation = SSDEvaluation(self.model, self.evaluator, self.eval_data)
        metrics = evaluation.evaluate()
        self.metrics.append(metrics)
What would be the proper (Pythonic, idiomatic Keras, etc.) way to run the training as usual, but perform the validation step on the altered model with the same weights? Maybe have a separate model for validation with shared weights?
You should use the headless (without DetectionOutput) model for training, but provide a model with the top layer to the evaluation:
def add_detection_output(model):
    # make validation/inference model here
    ...

evaluation = SSDEvaluation(model=add_detection_output(model),
                           evaluator=PascalDetectionEvaluator(categories),
                           data_files=[args.eval_data])
Avoid using the training model inside the callback; let the evaluation object hold a reference to the validation model:
class SSDTensorboard(Callback):
    def __init__(self, evaluation):
        self.evaluation = evaluation

    def on_epoch_end(self, epoch, logs={}):
        metrics = self.evaluation.evaluate()
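One possible shape for add_detection_output, sketched under the assumption that DetectionOutput is the custom Keras layer from your model module and that its hyperparameters live alongside the rest of the model-creation code, is to wrap the training model's output and return a new Model that reuses the same layers (and therefore the same weights):

from keras.models import Model

def add_detection_output(model, **detection_params):
    # DetectionOutput and its hyperparameters (detection_params) are assumed to
    # come from the same dedicated module that builds the model.
    detections = DetectionOutput(**detection_params)(model.output)
    # The returned model reuses the training model's layers, so the weights are
    # shared and stay current as training progresses.
    return Model(inputs=model.input, outputs=detections)

Since the weights are shared, the evaluation inside the callback always sees the latest training state without any extra weight copying.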
I have some nets, such as the following (augmented) resnet18:
num_classes = 10

resnet = models.resnet18(pretrained=True)
for param in resnet.parameters():
    param.requires_grad = True

num_ftrs = resnet.fc.in_features
resnet.fc = nn.Linear(num_ftrs, num_classes)
I want to use them inside a Lightning module and have it handle all the optimization, device placement (to_device), stages, and so on. In other words, I want to register those modules as sub-modules of my Lightning module.
I also want to be able to access their public members.
class MyLightning(LightningModule):
    def __init__(self, resnet):
        super().__init__()
        self._resnet = resnet
        self._criterion = lambda x: 1.0

    def forward(self, x):
        resnet_out = self._resnet(x)
        loss = self._criterion(resnet_out)
        return loss

my_lightning = MyLightning(resnet)
The above doesn't optimize any parameters.
Trying
def __init__(self, resnet):
    ...
    _layers = list(resnet.children())[:-1]
    self._resnet = nn.Sequential(*_layers)
doesn't take resnet.fc into account, and it also doesn't look like the intended way of nesting models inside PyTorch Lightning.
How do I nest models in PyTorch Lightning and have them fully accessible and handled by the framework?
The training loop and optimization process are handled by the Trainer class. You can use it by initializing a new instance:
>>> trainer = Trainer()
and passing your PyTorch Lightning module to it. This way you can perform fitting, tuning, validating, and testing on that instance, provided a DataLoader or LightningDataModule:
>>> trainer.fit(my_lightning, train_dataloader, val_dataloader)
You will have to implement the following functions on your Lightning module (i.e. in your case MyLightning):
Name                    Description
__init__                define computations here
forward                 use for inference only (separate from training_step)
training_step           the complete training loop
validation_step         the complete validation loop
test_step               the complete test loop
predict_step            the complete prediction loop
configure_optimizers    define optimizers and LR schedulers
(Source: the LightningModule documentation page.)
Keep in mind that a LightningModule is an nn.Module, so whenever you assign an nn.Module as an attribute of a LightningModule in its __init__ function, that module ends up registered as a sub-module of the parent Lightning module.
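Building on that, a minimal sketch of the nested setup might look like the following (the cross-entropy criterion, Adam optimizer, and learning rate are illustrative assumptions, not part of the original question). Because self._resnet is registered automatically, self.parameters() already includes the ResNet's weights, so configure_optimizers can hand them to the optimizer:

import torch
import torch.nn as nn
from pytorch_lightning import LightningModule

class MyLightning(LightningModule):
    def __init__(self, resnet):
        super().__init__()
        # assigning the nn.Module here registers it as a sub-module, so its
        # parameters are moved to the right device and picked up for optimization
        self._resnet = resnet
        self._criterion = nn.CrossEntropyLoss()

    def forward(self, x):
        # inference only: return the raw predictions
        return self._resnet(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self._resnet(x)
        loss = self._criterion(logits, y)
        return loss

    def configure_optimizers(self):
        # self.parameters() includes the nested ResNet's parameters
        return torch.optim.Adam(self.parameters(), lr=1e-3)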
A PyTorch model should inherit from nn.Module, so you should first find resnet18 in torchvision; then you can use resnet18 as-is or revise it yourself.
The original ResNet code is at ...\python\Lib\site-packages\torchvision\models\resnet.py. You import the ResNet network from there, so you can use it directly.
There you will find the original code:
class ResNet(nn.Module):...
https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py#L166
And import it like this:
from torchvision.models import ResNet
Finally, you can inherit from ResNet:
class MyLightning(ResNet):
I want to use a pretrained model as the encoder part of my model. Here is a version of my model:
class MyClass(nn.Module):
    def __init__(self, pretrained=False):
        super(MyClass, self).__init__()
        self.encoder = S3D_featureExtractor_multi_output()
        if pretrained:
            weight_dict = torch.load(os.path.join('models', 'weights.pt'))
            model_dict = self.encoder.state_dict()
            list_weight_dict = list(weight_dict.items())
            list_model_dict = list(model_dict.items())

            for i in range(len(list_model_dict)):
                assert list_model_dict[i][1].shape == list_weight_dict[i][1].shape
                model_dict[list_model_dict[i][0]].copy_(weight_dict[list_weight_dict[i][0]])

            for i in range(len(list_model_dict)):
                assert torch.all(torch.eq(model_dict[list_model_dict[i][0]], weight_dict[list_weight_dict[i][0]].to('cpu')))
            print('Loading finished!')

    def forward(self, x):
        a, b = self.encoder(x)
        return a, b
Because I modified some parts of the code of this pretrained model, based on this post I need to apply strict=False to avoid an error. But given the way I load the pretrained weights, I cannot find a place in the code to apply strict=False. How can I apply it, or how can I change the way the pretrained model is loaded so that strict=False can be applied?
strict=False is an argument you specify when you use the load_state_dict() method. A state_dict is just a Python dictionary that helps you save and load model weights.
(for more details, see https://pytorch.org/tutorials/recipes/recipes/what_is_state_dict.html)
If you use strict=False in load_state_dict, you inform PyTorch that the target model and the original model are not identical, so it loads only the weights of layers that are present in both and ignores the rest.
(see https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict)
So, you will need to specify the strict argument when you load the pretrained model weights. load_state_dict can be called at this step.
If the model for which weights must be loaded is self.encoder, and the state_dict can be retrieved from the file you just loaded, you can simply do this:
loaded_weights = torch.load(os.path.join('models','weights.pt'))
self.encoder.load_state_dict(loaded_weights, strict=False)
for more details and a tutorial, see https://pytorch.org/tutorials/beginner/saving_loading_models.html .
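Applied to the class from the question, a minimal sketch (assuming the keys stored in weights.pt line up with the unchanged parts of the modified encoder) replaces the manual copy loop with a single call:

import os
import torch
import torch.nn as nn

class MyClass(nn.Module):
    def __init__(self, pretrained=False):
        super(MyClass, self).__init__()
        self.encoder = S3D_featureExtractor_multi_output()
        if pretrained:
            loaded_weights = torch.load(os.path.join('models', 'weights.pt'))
            # strict=False: keys present in both the file and the (modified)
            # encoder are loaded, everything else is ignored
            missing, unexpected = self.encoder.load_state_dict(loaded_weights, strict=False)
            print('Loading finished! missing:', missing, 'unexpected:', unexpected)

    def forward(self, x):
        a, b = self.encoder(x)
        return a, b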
I have created and trained a TensorFlow model using the HammingLoss metric from TensorFlow Addons, so it is not a custom metric that I created on my own. I use a callbacks list with ModelCheckpoint() and EarlyStopping to save the best weights of the best model and to stop model training at a given threshold, respectively. When I save the model checkpoint I serialize the whole model structure (similar to model.save()), instead of model.save_weights(), which would have saved only the model weights (more about ModelCheckpoint here).
TL;DR: Here is a colab notebook with the code I post below in case you want to skip this.
The model I have trained is saved on Google Drive at the link here. To load the specific model I use the following code:
neural_network_parameters = {}
#======================================================================
# PARAMETERS THAT DEFINE THE NEURAL NETWORK STRUCTURE =
#======================================================================
neural_network_parameters['model_loss'] = tf.keras.losses.BinaryCrossentropy(from_logits=False, name='binary_crossentropy')
neural_network_parameters['model_metric'] = [tfa.metrics.HammingLoss(mode="multilabel", name="hamming_loss"),
                                             tfa.metrics.F1Score(17, average="micro", name="f1_score_micro"),
                                             tfa.metrics.F1Score(17, average=None, name="f1_score_none"),
                                             tfa.metrics.F1Score(17, average="macro", name="f1_score_macro"),
                                             tfa.metrics.F1Score(17, average="weighted", name="f1_score_weighted")]
"""Initialize the hyper parameters tuning the model using Tensorflow's hyperparameters module"""
HP_HIDDEN_UNITS = hp.HParam('batch_size', hp.Discrete([32]))
HP_EMBEDDING_DIM = hp.HParam('embedding_dim', hp.Discrete([50]))
HP_LEARNING_RATE = hp.HParam('learning_rate', hp.Discrete([0.001])) # Adam default: 0.001, SGD default: 0.01, RMSprop default: 0.001....0.1 to be removed
HP_DECAY_STEPS_MULTIPLIER = hp.HParam('decay_steps_multiplier', hp.Discrete([10]))
METRIC_ACCURACY = "hamming_loss"
dependencies = {
    'hamming_loss': tfa.metrics.HammingLoss(mode="multilabel", name="hamming_loss"),
    'attention': attention(return_sequences=True)
}
def import_trained_keras_model(model_index, method, decay_steps_mode, optimizer_name, hparams):
    """Load the model"""
    training_date = "2021-02-27"
    model_path_structure = f"{folder_path_model_saved}/{initialize_notebbok_variables.saved_model_name}_{hparams[HP_EMBEDDING_DIM]}dim_{hparams[HP_HIDDEN_UNITS]}batchsize_{hparams[HP_LEARNING_RATE]}lr_{hparams[HP_DECAY_STEPS_MULTIPLIER]}decaymultiplier_{training_date}"

    model_imported = load_model(f"{model_path_structure}", custom_objects=dependencies)

    if optimizer_name == "adam":
        optimizer = optimizer_adam_v2(hparams)
    elif optimizer_name == "sgd":
        optimizer = optimizer_sgd_v1(hparams, "step decay")
    else:
        optimizer = optimizer_rmsprop_v1(hparams)

    model_imported.compile(optimizer=optimizer,
                           loss=neural_network_parameters['model_loss'],
                           metrics=neural_network_parameters['model_metric'])

    print(f"Model {model_index} is loaded successfully\n")
    return model_imported
Calling the function to import the trained Keras model:
"""Now that the functions have been created it's time to import each trained classifier from the selected dictionary of hyper parameters, calculate the evaluation metric per model and finally serialize the scores dataframe for later use."""
list_models=[] #a list to store imported models
model_optimizer="adam"
for batch_size in HP_HIDDEN_UNITS.domain.values:
for embedding_dim in HP_EMBEDDING_DIM.domain.values:
for learning_rate in HP_LEARNING_RATE.domain.values:
for decay_steps_multiplier in HP_DECAY_STEPS_MULTIPLIER.domain.values:
hparams = {
HP_HIDDEN_UNITS: batch_size,
HP_EMBEDDING_DIM: embedding_dim,
HP_LEARNING_RATE: learning_rate,
HP_DECAY_STEPS_MULTIPLIER: decay_steps_multiplier
}
print(f"\n{len(list_models)+1}/{(len(HP_HIDDEN_UNITS.domain.values)*len(HP_EMBEDDING_DIM.domain.values)*len(HP_LEARNING_RATE.domain.values)*len(HP_DECAY_STEPS_MULTIPLIER.domain.values))}")
print({h.name: hparams[h] for h in hparams},'\n')
model_object=import_trained_keras_model(len(list_models)+1, "import custom trained model", "on", model_optimizer, hparams)
list_models.append(model_object)
When I call the function I get the following error:
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements get_config and from_config when saving. In addition, please use the custom_objects arg when calling load_model().
It's strange that I get this error, since the metric used to compile the NN is a built-in metric of TensorFlow Addons and NOT some sort of custom metric that I developed myself.
I have also found this thread on GitHub, which was closed without explaining the root of the problem.
[UPDATE]: Found a temporary solution.
I managed to successfully import the model by setting the compile argument to False and re-compiling the imported model inside the function.
So I did something like model_imported = load_model(f"{model_path_structure}", custom_objects=dependencies, compile=False).
This action produced the following result:
WARNING:tensorflow:Unable to restore custom metric. Please ensure that the layer implements get_config and from_config when saving. In addition, please use the custom_objects arg when calling load_model().
Model 1 is loaded successfully.
So TensorFlow still cannot tell that HammingLoss is not a custom metric but rather a metric imported from TensorFlow Addons. However, despite the warning, the model loaded successfully.
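Putting the workaround together, a minimal sketch of the load-then-recompile pattern looks like this (the path, optimizer, custom_objects dictionary, loss, and metric list are the ones from the snippets above, so treat them as placeholders for your own setup):

from tensorflow.keras.models import load_model

# Skip deserializing the compiled loss/metrics, which is what triggers the error.
model_imported = load_model(model_path_structure,
                            custom_objects=dependencies,
                            compile=False)

# Re-attach the optimizer, loss, and metrics explicitly after loading.
model_imported.compile(optimizer=optimizer,
                       loss=neural_network_parameters['model_loss'],
                       metrics=neural_network_parameters['model_metric'])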
How do I save a trained model in PyTorch? I have read that:
torch.save()/torch.load() is for saving/loading a serializable object.
model.state_dict()/model.load_state_dict() is for saving/loading model state.
Found this page on their github repo:
Recommended approach for saving a model
There are two main approaches for serializing and restoring a model.
The first (recommended) saves and loads only the model parameters:
torch.save(the_model.state_dict(), PATH)
Then later:
the_model = TheModelClass(*args, **kwargs)
the_model.load_state_dict(torch.load(PATH))
The second saves and loads the entire model:
torch.save(the_model, PATH)
Then later:
the_model = torch.load(PATH)
However in this case, the serialized data is bound to the specific classes and the exact directory structure used, so it can break in various ways when used in other projects, or after some serious refactors.
See also: Save and Load the Model section from the official PyTorch tutorials.
It depends on what you want to do.
Case # 1: Save the model to use it yourself for inference: You save the model, you restore it, and then you change the model to evaluation mode. This is done because you usually have BatchNorm and Dropout layers that by default are in train mode on construction:
torch.save(model.state_dict(), filepath)
#Later to restore:
model.load_state_dict(torch.load(filepath))
model.eval()
Case # 2: Save model to resume training later: If you need to keep training the model that you are about to save, you need to save more than just the model. You also need to save the state of the optimizer, epochs, score, etc. You would do it like this:
state = {
'epoch': epoch,
'state_dict': model.state_dict(),
'optimizer': optimizer.state_dict(),
...
}
torch.save(state, filepath)
To resume training you would do things like: state = torch.load(filepath), and then, to restore the state of each individual object, something like this:
model.load_state_dict(state['state_dict'])
optimizer.load_state_dict(state['optimizer'])
Since you are resuming training, DO NOT call model.eval() once you restore the states when loading.
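A consolidated sketch of that resume step (start_epoch is just an illustrative name for picking the epoch counter back up):

# Restore everything needed to continue training from the checkpoint.
state = torch.load(filepath)
model.load_state_dict(state['state_dict'])
optimizer.load_state_dict(state['optimizer'])
start_epoch = state['epoch']
model.train()  # stay in train mode; do not switch to eval here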
Case # 3: Model to be used by someone else with no access to your code:
In TensorFlow you can create a .pb file that defines both the architecture and the weights of the model. This is very handy, especially when using TensorFlow Serving. The equivalent way to do this in PyTorch would be:
torch.save(model, filepath)
# Then later:
model = torch.load(filepath)
This way is still not bulletproof, and since PyTorch is still undergoing a lot of changes, I wouldn't recommend it.
The pickle Python library implements binary protocols for serializing and de-serializing a Python object.
When you import torch (or when you use PyTorch) it will import pickle for you and you don't need to call pickle.dump() and pickle.load() directly, which are the methods to save and to load the object.
In fact, torch.save() and torch.load() will wrap pickle.dump() and pickle.load() for you.
The state_dict that the other answer mentioned deserves just a few more notes.
What state_dict do we have inside PyTorch?
There are actually two state_dicts.
A PyTorch model is a torch.nn.Module, which has a model.parameters() call to get the learnable parameters (w and b).
These learnable parameters, once randomly set, will update over time as we learn.
Learnable parameters are the first state_dict.
The second state_dict is the optimizer state dict. You recall that the optimizer is used to improve our learnable parameters. But the optimizer state_dict is fixed. Nothing to learn there.
Because state_dict objects are Python dictionaries, they can be easily saved, updated, altered, and restored, adding a great deal of modularity to PyTorch models and optimizers.
Let's create a super simple model to explain this:
import torch
import torch.optim as optim
model = torch.nn.Linear(5, 2)
# Initialize optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
print("Model's state_dict:")
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())
print("Model weight:")
print(model.weight)
print("Model bias:")
print(model.bias)
print("---")
print("Optimizer's state_dict:")
for var_name in optimizer.state_dict():
    print(var_name, "\t", optimizer.state_dict()[var_name])
This code will output the following:
Model's state_dict:
weight torch.Size([2, 5])
bias torch.Size([2])
Model weight:
Parameter containing:
tensor([[ 0.1328, 0.1360, 0.1553, -0.1838, -0.0316],
[ 0.0479, 0.1760, 0.1712, 0.2244, 0.1408]], requires_grad=True)
Model bias:
Parameter containing:
tensor([ 0.4112, -0.0733], requires_grad=True)
---
Optimizer's state_dict:
state {}
param_groups [{'lr': 0.001, 'momentum': 0.9, 'dampening': 0, 'weight_decay': 0, 'nesterov': False, 'params': [140695321443856, 140695321443928]}]
Note this is a minimal model. You may try adding a stack of sequential layers:
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.Conv2d(A, B, C),
    torch.nn.Linear(H, D_out),
)
Note that only layers with learnable parameters (convolutional layers, linear layers, etc.) and registered buffers (batchnorm layers) have entries in the model's state_dict.
Non-learnable things belong to the optimizer object state_dict, which contains information about the optimizer's state, as well as the hyperparameters used.
The rest of the story is the same: in the inference phase (the phase where we use the model after training) we predict based on the parameters we learned, so for inference we just need to save the parameters with model.state_dict().
torch.save(model.state_dict(), filepath)
And to use later
model.load_state_dict(torch.load(filepath))
model.eval()
Note: Don't forget the last line model.eval() this is crucial after loading the model.
Also, don't try torch.save(model.parameters(), filepath); model.parameters() is just a generator object.
On the other hand, torch.save(model, filepath) saves the model object itself, but keep in mind the model doesn't hold the optimizer's state_dict. Check the other excellent answer by Jadiel de Armas on how to save the optimizer's state dict.
A common PyTorch convention is to save models using either a .pt or .pth file extension.
Save/Load Entire Model
Save:
path = "username/directory/lstmmodelgpu.pth"
torch.save(model, path)
Load:
# the model class must be defined somewhere
model = torch.load(path)
model.eval()
If you want to save the model and resume the training later:
Single GPU:
Save:
state = {
    'epoch': epoch,
    'state_dict': model.state_dict(),
    'optimizer': optimizer.state_dict(),
}
savepath = 'checkpoint.t7'
torch.save(state, savepath)
Load:
checkpoint = torch.load('checkpoint.t7')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
epoch = checkpoint['epoch']
Multiple GPU:
Save:
state = {
    'epoch': epoch,
    'state_dict': model.module.state_dict(),
    'optimizer': optimizer.state_dict(),
}
savepath = 'checkpoint.t7'
torch.save(state, savepath)
Load:
checkpoint = torch.load('checkpoint.t7')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
epoch = checkpoint['epoch']
# Don't call DataParallel before loading the model, otherwise you will get an error
model = nn.DataParallel(model)  # skip this line if you want to load on a single GPU
Saving locally
How you save your model depends on how you want to access it in the future. If you can call a new instance of the model class, then all you need to do is save/load the weights of the model with model.state_dict():
# Save:
torch.save(old_model.state_dict(), PATH)
# Load:
new_model = TheModelClass(*args, **kwargs)
new_model.load_state_dict(torch.load(PATH))
If you cannot for whatever reason (or prefer the simpler syntax), then you can save the entire model (actually a reference to the file(s) defining the model, along with its state_dict) with torch.save():
# Save:
torch.save(old_model, PATH)
# Load:
new_model = torch.load(PATH)
But since this is a reference to the location of the files defining the model class, this code is not portable unless those files are also ported in the same directory structure.
Saving to cloud - TorchHub
If you wish your model to be portable, you can easily allow it to be imported with torch.hub. If you add an appropriately defined hubconf.py file to a github repo, this can be easily called from within PyTorch to enable users to load your model with/without weights:
hubconf.py (github.com/repo_owner/repo_name)
dependencies = ['torch']
from my_module import mymodel as _mymodel
def mymodel(pretrained=False, **kwargs):
    return _mymodel(pretrained=pretrained, **kwargs)
Loading model:
new_model = torch.hub.load('repo_owner/repo_name', 'mymodel')
new_model_pretrained = torch.hub.load('repo_owner/repo_name', 'mymodel', pretrained=True)
pip install pytorch-lightning
Make sure your parent model uses pl.LightningModule instead of nn.Module.
Saving and loading checkpoints using PyTorch Lightning:
import pytorch_lightning as pl

model = MyLightningModule(hparams)
trainer = pl.Trainer()
trainer.fit(model)
trainer.save_checkpoint("example.ckpt")

new_model = MyLightningModule.load_from_checkpoint(checkpoint_path="example.ckpt")
These days everything is written in the official tutorial:
https://pytorch.org/tutorials/beginner/saving_loading_models.html
You have several options on how to save and what to save and all is explained in that tutorial.
I use this approach; I hope it will be useful for you.
num_labels = len(test_label_cols)
robertaclassificationtrain = '/dbfs/FileStore/tables/PM/TC/roberta_model'
robertaclassificationpath = "/dbfs/FileStore/tables/PM/TC/ROBERTACLASSIFICATION"
model = RobertaForSequenceClassification.from_pretrained(robertaclassificationpath,
                                                         num_labels=num_labels)
model.cuda()
model.load_state_dict(torch.load(robertaclassificationtrain))
model.eval()
Here 'roberta_model' is the path where I have already saved my trained model. To save the trained model:
torch.save(model.state_dict(), '/dbfs/FileStore/tables/PM/TC/roberta_model')
That is to say, if I initially have the following operations for training purposes in my graph:
with tf.Graph().as_default() as g:
    images, labels = load_batch(...)

    with slim.arg_scope(...):
        logits, end_points = inceptionResnetV2(images, num_classes..., is_training=True)

    loss = slim.losses.softmax_cross_entropy(logits, labels)
    optimizer = tf.train.AdamOptimizer(learning_rate=0.002)
    train_op = slim.learning.create_train_op(loss, optimizer)

    sv = tf.train.Supervisor(...)
    with sv.managed_session() as sess:
        # perform your regular training loop here with sess.run(train_op)
This allows me to train my model just fine, but I would like to evaluate a small validation dataset every once in a while inside my sess. Would it take too much memory to create a nearly exact replica within the same graph, like:
images_val, labels_val = load_batch(...)

with slim.arg_scope(...):
    logits_val, end_points_val = inceptionResnetV2(images, num_classes..., is_training=False)

predictions = end_points_val['Predictions']
acc, acc_updates = tf.contrib.metrics.streaming_accuracy(predictions, labels_val)
# and then following this, we can run acc_updates in a session to update the accuracy, which we can then print to monitor
My concern is that to evaluate my validation dataset, I need to set the is_training argument to False so that I can disable dropout. But will creating an entire inception-resnet-v2 model from scratch just for validation inside the same graph consume too much memory? Or should I just create an entirely new file that runs the validation on my own?
Ideally, I wanted to have 3 kinds of dataset - a training one, a small validation dataset to test during training, and a final evaluation dataset. This small validation dataset will help me see if my model is overfitting to the training data. If my proposed idea consumes too much memory, however, would it be equivalent to just occasionally monitor the training data score? Is there a better idea to test the validation set while training?
TensorFlow's developers thought about this and made variables ready to be shared.
You can see the documentation here.
Using scopes the right way makes it possible to reuse variables.
One very good example (the context is a language model, but never mind) is TensorFlow PTB Word LM.
The global pseudo-code of this approach is something like:
class Model:
    def __init__(self, params, train=True):
        """ Build the model """
        tf.placeholder( ... )
        tf.get_variable( ... )

def main(_):
    with tf.Graph().as_default() as g:
        with tf.name_scope("Train"):
            with tf.variable_scope("Model", reuse=None):
                train = Model(params, train=True)
        with tf.name_scope("Valid"):
            # Now reuse variables = no memory cost
            with tf.variable_scope("Model", reuse=True):
                # But you can set different parameters
                valid = Model(params, train=False)
        session = tf.Session()
        ...
Thus you can share variables without having the exact same model, as the parameters may change the model itself.
Hope this helps.
pltrdy
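To make the sharing concrete, here is a minimal TF 1.x-style sketch (a toy linear classifier standing in for inceptionResnetV2, with assumed input sizes) where the training and validation towers ask for the same variable names inside the same variable_scope, so the second tower reuses the first tower's weights instead of allocating new ones:

import tensorflow as tf  # TF 1.x style API, as in the question

def build_logits(images, is_training):
    # tf.get_variable returns the existing variable when reuse=True,
    # so both towers end up sharing the same weights
    w = tf.get_variable("w", shape=[784, 10])
    b = tf.get_variable("b", shape=[10])
    logits = tf.matmul(images, w) + b
    if is_training:
        logits = tf.nn.dropout(logits, keep_prob=0.5)  # dropout only at training time
    return logits

with tf.Graph().as_default():
    train_images = tf.placeholder(tf.float32, [None, 784])
    val_images = tf.placeholder(tf.float32, [None, 784])

    with tf.variable_scope("Model", reuse=None):
        train_logits = build_logits(train_images, is_training=True)
    with tf.variable_scope("Model", reuse=True):
        val_logits = build_logits(val_images, is_training=False)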