PyTorch transfer learning with pre-trained ImageNet model - python

I want to create an image classifier using transfer learning on a model already trained on ImageNet.
How do I replace the final layer of a torchvision.models ImageNet classifier with my own custom classifier?

Get a pre-trained ImageNet model (resnet152 is among the most accurate torchvision classification models):
from torchvision import models
# https://pytorch.org/docs/stable/torchvision/models.html
model = models.resnet152(pretrained=True)
Print out its structure so we can compare to the final state:
print(model)
Remove the last module (generally a single fully connected layer) from model:
classifier_name, old_classifier = model._modules.popitem()
Freeze the parameters of the feature detector part of the model so that they are not adjusted by back-propagation:
for param in model.parameters():
    param.requires_grad = False
Create a new classifier:
from collections import OrderedDict
from torch import nn

# hidden_layer_size is your choice; output_layer_size must equal the number of classes
classifier_input_size = old_classifier.in_features
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(classifier_input_size, hidden_layer_size)),
    ('activation', nn.SELU()),
    ('dropout', nn.Dropout(p=0.5)),
    ('fc2', nn.Linear(hidden_layer_size, output_layer_size)),
    ('output', nn.LogSoftmax(dim=1))
]))
The module name for our classifier needs to be the same as the one which was removed. Add our new classifier to the end of the feature detector:
model.add_module(classifier_name, classifier)
Finally, print out the structure of the new network:
print(model)
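To train the new head, a minimal sketch: nn.NLLLoss pairs with the LogSoftmax output, and only the new classifier's parameters are passed to the optimizer so the frozen backbone stays untouched (train_loader and the learning rate are placeholders for your own setup):
import torch
from torch import nn, optim

criterion = nn.NLLLoss()  # matches the LogSoftmax output layer
optimizer = optim.Adam(classifier.parameters(), lr=1e-3)  # only the unfrozen classifier is optimized

model.train()
for images, labels in train_loader:  # train_loader: your own DataLoader of (image, label) batches
    optimizer.zero_grad()
    log_probs = model(images)
    loss = criterion(log_probs, labels)
    loss.backward()
    optimizer.step()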

Related

How to save an embedding layer which was created outside the model?

I created a word embedding layer outside the model and applied it to the input before fitting the model. Now I need to predict new sentences with this model. How can I save the pre-trained embedding layer and apply it to my new sentences?
Code example:
Before input to model and fitting:
embedding_sentence = tf.keras.layers.Embedding(vocab_size, model_dimension, trainable=True)
embedded_sentence = embedding_sentence(vectorized_sentence)
Model fitting:
model = tf.keras.Sequential()
model.add(tf.keras.layers.GlobalAveragePooling1D())
...
Now I need to predict new sentences, how can I apply the trained embedding to them?
The above information is insufficient to answer this question accurately, but I will still give it a try. In TensorFlow, you can use get_weights to get the weights of a pre-trained embedding layer and save them in a NumPy/HDF5 file, which can later be used as an embedding layer in a new architecture.
import numpy as np

weights = embedding_sentence.get_weights()
np.save('embedding_weights.npy', weights)

# Now load the weights into the embedding layer again
new_embedding_sentence = tf.keras.layers.Embedding(vocab_size, model_dimension, trainable=True)
new_embedding_sentence.build((None,))  # required before calling set_weights
new_embedding_sentence.set_weights(weights)

# Note: the new sentence must first be converted to token ids with the same
# vectorizer used during training; the embedding layer cannot take a raw string
new_sentence = "This is a dummy sentence"
new_vectorized_sentence = ...  # token ids produced by your own vectorization step
new_sentence_embedding = new_embedding_sentence(new_vectorized_sentence)
predictions = model(new_sentence_embedding)
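If the embedding is rebuilt in a separate session, the file saved above can be read back from disk before being applied; a minimal sketch, assuming the 'embedding_weights.npy' file from the snippet above:
import numpy as np

# Read back the weights saved with np.save and apply them to the rebuilt layer
weights = list(np.load('embedding_weights.npy'))
new_embedding_sentence.set_weights(weights)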

get outputs of classifier layers from '.pt' pretrained AlexNet model

I have a '.pt' file containing an AlexNet model that was trained on my dataset. How can I get the "out_features" of the classifier layers (layers 1 & 4) after running the model on a different dataset?
I need these features as input for an SVM.
I have tried:
Model(inputs, outputs=model.classifier[1].out_features)
model.classifier[1].out_features(inputs)
model.classifier[1].parameters(torch.tensor(inputs))
but they didn't work
First you have to load the model:
model = torch.load('model.pt')
Then truncate the classifier so it ends at the layer whose output you want:
features = list(model.classifier.children())[:-4]  # keeps layers up to classifier[2], so the output corresponds to classifier[1]
model.classifier = nn.Sequential(*features)
Then you get the features of that layer by applying the model to your inputs:
out = model(inputs)
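Alternatively, forward hooks can capture the outputs of both classifier[1] and classifier[4] in a single pass without modifying the model; a minimal sketch, assuming 'model.pt' holds the trained AlexNet and inputs is a batch of images from the new dataset:
import torch

model = torch.load('model.pt')
model.eval()

captured = {}

def save_output(name):
    def hook(module, hook_inputs, output):
        captured[name] = output.detach()
    return hook

# Capture the outputs of the two fully connected layers that feed the SVM
model.classifier[1].register_forward_hook(save_output('fc1'))
model.classifier[4].register_forward_hook(save_output('fc4'))

with torch.no_grad():
    model(inputs)

svm_features = torch.cat([captured['fc1'], captured['fc4']], dim=1)
The concatenated tensor can then be converted with .numpy() and used as the feature matrix for the SVM.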

Correct way to upload weights at inference for faster-rcnn model?

I was wondering what the correct way is to load saved weights for a trained model at inference time.
As an FYI, I train my model using pretrained coco weights and pretrained imagenet weights:
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True, pretrained_backbone=True)
num_classes = 2
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
model.to(device)
I then save the model's state in a checkpoint dictionary, where it is stored as:
'state_dict': model.state_dict()
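For concreteness, a minimal sketch of how such a checkpoint might be written; the file name and any extra keys are placeholders:
import torch

checkpoint = {'state_dict': model.state_dict()}  # other keys (epoch, optimizer state, metrics) can be added
torch.save(checkpoint, 'best_model.pt')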
Once trained, I load the weights like:
# set up model
model_test = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True, pretrained_backbone=True)
num_classes = 2
# get number of input features for the classifier
in_features = model_test.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model_test.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
model_test.to(device)
bestmodel = get_best_model(args.best)
bestmodel = torch.load(bestmodel)
model_test.load_state_dict(bestmodel['state_dict'])
As you can see, I also load the pretrained weights at inference (test time), and then load the weights from my saved model (bestmodel) on top of them.
I thought that loading my saved weights would override the initial pretrained weights. However, when I set the test model to:
model_test = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=False)
and still load my best model's weights, I get marginally worse performance.
Is there a correct way to load these weights? And if it shouldn't matter, why do I see a difference?

How to develop voting ensembles of pre-trained models in Python?

I'm attempting to create an ensemble of a custom CNN and pre-trained InceptionV3, MobileNetV2 and Xception models for a medical image classification task using Keras with TensorFlow. The code below loads the models after they have been saved:
def load_all_models():
    all_models = []
    model_names = ['model1.h5', 'model2.h5', 'model3.h5']
    for model_name in model_names:
        filename = os.path.join('models', model_name)
        model = tf.keras.models.load_model(filename)
        predictsDepth = np.argmax(model.predict(X_test), axis=1)
        all_models.append(model)
        print('loaded:', filename)
    labels = np.array(labels)
    return all_models

models = load_all_models()
for i, model in enumerate(models):
    for layer in model.layers:
        if layer._name == "Flatten":
            layer._name = "Flatten_{}".format(i)
        layer.trainable = False
After loading the models, I try to build an ensemble model using voting. The code for the voting is given below:
from sklearn.ensemble import RandomForestClassifier , VotingClassifier
model = VotingClassifier(estimators=[('InceptionV3_Accuracy', model1),
                                     ('MobileNetV2_Accuracy', model2),
                                     ('Xception_Accuracy', model3)],
                         voting='hard')
model.fit(X_train, y_train)
When I fit the ensemble, I get an error.
scikit-learn does not ensemble pre-trained models. The scikit-learn voting ensemble (and stacking ensemble) objects require unfitted models ('estimators') as input, plus, for stacking, a final meta-model ('final_estimator'); calling .fit then trains the estimators and the final estimator.
Since you are using already-trained models, you can refer to the article below, which provides two coded examples of how to ensemble trained models.
In each example, the output of each trained model is used as input for the final meta-model.
https://machinelearningmastery.com/stacking-ensemble-for-deep-learning-neural-networks/
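As a concrete alternative, a minimal sketch of hard voting done manually over the trained Keras models loaded above, assuming they share the same input shape and class ordering (X_test is a placeholder for your evaluation images):
import numpy as np

# Per-model class predictions, shape (n_models, n_samples)
member_preds = np.array([np.argmax(m.predict(X_test), axis=1) for m in models])

# Hard voting: the majority class across models for each sample
ensemble_preds = np.apply_along_axis(
    lambda votes: np.bincount(votes).argmax(), axis=0, arr=member_preds)
For soft voting, average each model's predicted probabilities instead and take the argmax of the mean.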

Extending the vocabulary on deployment

During training, I initialize my embedding matrix using the pretrained vectors for the words in the training-set vocabulary.
import torchtext as tt
contexts = tt.data.Field(lower=True, sequential=True, tokenize=tokenizer, use_vocab=True)
contexts.build_vocab(data, vectors="fasttext.en.300d",
                     vectors_cache=config["vectors_cache"])
In my model I pass contexts.vocab as parameter and initialize embeddings:
embedding_dim = vocab.vectors.shape[1]
self.embeddings = nn.Embedding(len(vocab), embedding_dim)
self.embeddings.weight.data.copy_(vocab.vectors)
self.embeddings.weight.requires_grad=False
I train my model and during training I save its 'best' state via torch.save(model, f).
Then I want to test the model / build a demo in a separate file for evaluation. I load the model via torch.load. How do I extend the embedding matrix to contain the test vocabulary? I tried to replace the embedding matrix:
# data is TabularDataset with test data
contexts.build_vocab(data, vectors="fasttext.en.300d",
                     vectors_cache=config["vectors_cache"])
model.embeddings = torch.nn.Embedding(len(contexts.vocab), contexts.vocab.vectors.shape[1])
model.embeddings.weight.data.copy_(contexts.vocab.vectors)
model.embeddings.weight.requires_grad = False
But the results are terrible (almost 0 accuracy). The model was doing well during training. What is the 'correct' way of doing this?
