Is there a way to embed non-tf functions into a tf.Keras model graph as a SavedModel signature? - python

I want to add preprocessing functions and methods to the model graph as a SavedModel signature.
example:
# suppose we have a keras model
# ...

# defining the function I want to add to the model graph
@tf.function
def process(model, img_path):
    # do some preprocessing using different libs and modules...
    outputs = {"preds": model.predict(preprocessed_img)}
    return outputs

# saving the model with a custom signature
tf.saved_model.save(new_model, dst_path,
                    signatures={"process": process})
Alternatively, we could use tf.Module here. However, the problem is that I cannot embed custom functions into the saved model graph.
Is there any way to do that?

I think you slightly misunderstand the purpose of the save_model method in TensorFlow.
As per the documentation, the intent is to have a method which serialises the model's graph so that it can be loaded with load_model afterwards.
The model returned by load_model is a tf.Module with all its methods and attributes. What you want instead is to serialise the whole prediction pipeline.
To be honest, I'm not aware of a good way to do that. However, what you can do is serialise your preprocessing parameters with a different method, for example pickle or one provided by the framework you use, and write a class on top of that which would do the following:
class MyModel:
    def __init__(self, model_path, preprocessing_path):
        self.model = load_model(model_path)
        self.preprocessing = load_preprocessing(preprocessing_path)

    def predict(self, img_path):
        return self.model.predict(self.preprocessing(img_path))
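For completeness, a minimal sketch of what the save_preprocessing/load_preprocessing pair could look like, assuming the preprocessing step is a picklable callable (both helper names are hypothetical, not part of any framework):
import pickle

def save_preprocessing(preprocessing_fn, path):
    # assumes the callable and everything it closes over are picklable
    with open(path, "wb") as f:
        pickle.dump(preprocessing_fn, f)

def load_preprocessing(path):
    with open(path, "rb") as f:
        return pickle.load(f)
Note that pickle ties you to the exact module layout used at save time, so a serialiser provided by your preprocessing framework (e.g. joblib for scikit-learn pipelines) is usually preferable when one exists.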

Related

Tensorflow Keras Model subclassing -- call function

I am experimenting with self-supervised learning using TensorFlow. The example code I'm running can be found on the Keras examples website; this is the link to the NNCLR example, and the GitHub link to download the code can be found here. While I have no issues running the example, I run into issues when I try to save the pretrained or the fine-tuned model using model.save().
The error I'm getting is this:
f"Model {model} cannot be saved either because the input shape is not "
ValueError: Model <__main__.NNCLR object at 0x7f6bc0f39550> cannot be saved either
because the input shape is not available or because the forward pass of the model is
not defined. To define a forward pass, please override `Model.call()`.
To specify an input shape, either call `build(input_shape)` directly, or call the model on actual data using `Model()`, `Model.fit()`, or `Model.predict()`.
If you have a custom training step, please make sure to invoke the forward pass in train step through
`Model.__call__`, i.e. `model(inputs)`, as opposed to `model.call()`.
I am unsure how to override the Model.call() method. I would appreciate some help.
One way to achieve model saving in such cases is to override the save (or save_weights) method of keras.Model. In your case, first initialize the fine-tuning model in the NNCLR class, and then override the save method for it. FYI, this way you may also be able to use the ModelCheckpoint API.
In other words, define the finetuning model in the NNCLR model class and override the save method for it:
class NNCLR(keras.Model):
    def __init__(self, ...):
        super().__init__()
        ...
        self.finetuning_model = keras.Sequential(
            [
                layers.Input(shape=input_shape),
                self.classification_augmenter,
                self.encoder,
                layers.Dense(10),
            ],
            name="finetuning_model",
        )
        ...

    def save(
        self, filepath, overwrite=True, include_optimizer=True,
        save_format=None, signatures=None, options=None
    ):
        self.finetuning_model.save(
            filepath=filepath,
            overwrite=overwrite,
            save_format=save_format,
            options=options,
            include_optimizer=include_optimizer,
            signatures=signatures,
        )
model = NNCLR(...)
model.compile(...)
model.fit(...)
Next, you can do
model.save('finetune_model') # SavedModel format
finetune_model = tf.keras.models.load_model('finetune_model', compile=False)
'''
NNCLR code example: Evaluate sections
"A popular way to evaluate a SSL method in computer vision or
for that fact any other pre-training method as such is to learn
a linear classifier on the frozen features of the trained backbone
model and evaluate the classifier on unseen images."
'''
for layer in finetune_model.layers:
    if not isinstance(layer, layers.Dense):
        layer.trainable = False

finetune_model.summary()  # OK
finetune_model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
)
finetune_model.fit(...)
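Since save is overridden, callbacks that go through model.save should write the fine-tuning sub-model as well. A minimal sketch, where train_ds is a placeholder for your training dataset:
ckpt = keras.callbacks.ModelCheckpoint(
    filepath='ckpt/finetune_model',
    save_weights_only=False,  # makes the callback call model.save, i.e. the override
)
model.fit(train_ds, epochs=10, callbacks=[ckpt])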

How to make pytorch lightning module have injected, nested models?

I have some nets, such as the following (augmented) resnet18:
num_classes = 10
resnet = models.resnet18(pretrained=True)
for param in resnet.parameters():
    param.requires_grad = True
num_ftrs = resnet.fc.in_features
resnet.fc = nn.Linear(num_ftrs, num_classes)
And I want to use them inside a lightning module, and have it handle all optimizations, to_device, stages, and so on. In other words, I want to register those modules with my lightning module.
I also want to be able to access their public members.
class MyLightning(LightningModule):
    def __init__(self, resnet):
        super().__init__()
        self._resnet = resnet
        self._criterion = lambda x: 1.0

    def forward(self, x):
        resnet_out = self._resnet(x)
        loss = self._criterion(resnet_out)
        return loss

my_lightning = MyLightning(resnet)
The above doesn't optimize any parameters.
Trying
def __init__(self, resnet):
    ...
    _layers = list(resnet.children())[:-1]
    self._resnet = nn.Sequential(*_layers)
doesn't take resnet.fc into account, and it doesn't seem like the intended way of nesting models inside PyTorch Lightning.
How to nest models in pytorch lightning, and have them fully accessible and handled by the framework?
The training loop and optimization process are handled by the Trainer class. You can do so by initializing a new instance:
>>> trainer = Trainer()
And wrapping your PyTorch Lightning module with it. This way you can perform fitting, tuning, validating, and testing on that instance, given a DataLoader or LightningDataModule:
>>> trainer.fit(my_lightning, train_dataloader, val_dataloader)
You will have to implement the following functions on your Lightning module (i.e. in your case MyLightning):

__init__: define computations here
forward: use for inference only (separate from training_step)
training_step: the complete training loop
validation_step: the complete validation loop
test_step: the complete test loop
predict_step: the complete prediction loop
configure_optimizers: define optimizers and LR schedulers

Source: the LightningModule documentation page.
Keep in mind a LightningModule is an nn.Module, so whenever you assign an nn.Module as an attribute of a LightningModule in the __init__ function, it will end up being registered as a sub-module of the parent PyTorch Lightning module.
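To make that concrete, here is a minimal sketch of MyLightning with the required hooks filled in; the loss, optimizer choice, and (inputs, labels) batch structure are illustrative assumptions, not part of the original question:
import torch
import torch.nn as nn
from pytorch_lightning import LightningModule

class MyLightning(LightningModule):
    def __init__(self, resnet):
        super().__init__()
        self._resnet = resnet  # registered as a sub-module automatically
        self._criterion = nn.CrossEntropyLoss()  # assumed loss

    def forward(self, x):
        # inference only; training goes through training_step
        return self._resnet(x)

    def training_step(self, batch, batch_idx):
        x, y = batch  # assumes (inputs, labels) batches
        return self._criterion(self._resnet(x), y)

    def configure_optimizers(self):
        # self.parameters() includes resnet's weights because it is registered
        return torch.optim.Adam(self.parameters(), lr=1e-3)
Because self._resnet is a registered sub-module, the Trainer also handles device placement and checkpointing for it, and you can still access its public members via my_lightning._resnet.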
A PyTorch model should inherit from nn.Module, so first find resnet18 in torchvision; then you can use it as-is or revise it yourself.
The original ResNet code is in this path: ...\python\Lib\site-packages\torchvision\models\resnet.py. You import the resnet network from there, so you can use it directly.
There you will find the original code:
class ResNet(nn.Module):...
https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py#L166
And import it like
from torchvision.models import ResNet
Finally, you can inherit from ResNet
class MyLightning(ResNet):
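For reference, a sketch of what inheriting directly from ResNet could look like; in torchvision, resnet18 corresponds to BasicBlock with [2, 2, 2, 2] layers. Note this yields a plain nn.Module rather than a LightningModule, so registering the net as a sub-module (as in the first answer) is usually the better fit:
from torchvision.models import ResNet
from torchvision.models.resnet import BasicBlock

class MyLightning(ResNet):
    def __init__(self, num_classes=10):
        # the resnet18 configuration: BasicBlock with [2, 2, 2, 2] layers
        super().__init__(BasicBlock, [2, 2, 2, 2], num_classes=num_classes)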

Loading a modified pretrained model using strict=False in PyTorch

I want to use a pretrained model as the encoder part in my model. You can find a version of my model:
class MyClass(nn.Module):
    def __init__(self, pretrained=False):
        super(MyClass, self).__init__()
        self.encoder = S3D_featureExtractor_multi_output()
        if pretrained:
            weight_dict = torch.load(os.path.join('models', 'weights.pt'))
            model_dict = self.encoder.state_dict()
            list_weight_dict = list(weight_dict.items())
            list_model_dict = list(model_dict.items())
            for i in range(len(list_model_dict)):
                assert list_model_dict[i][1].shape == list_weight_dict[i][1].shape
                model_dict[list_model_dict[i][0]].copy_(weight_dict[list_weight_dict[i][0]])
            for i in range(len(list_model_dict)):
                assert torch.all(torch.eq(model_dict[list_model_dict[i][0]], weight_dict[list_weight_dict[i][0]].to('cpu')))
            print('Loading finished!')

    def forward(self, x):
        a, b = self.encoder(x)
        return a, b
Because I modified some parts of the code of this pretrained model, based on this post I need to pass strict=False to avoid an error. However, in the way I load the pretrained weights, I cannot find a place in the code to apply strict=False. How can I apply it, or how can I change the loading procedure so that strict=False can be applied?
strict=False is an argument you specify when you call the load_state_dict() method. A state_dict is just a Python dictionary that helps you save and load model weights.
(for more details, see https://pytorch.org/tutorials/recipes/recipes/what_is_state_dict.html)
If you use strict=False in load_state_dict, you inform PyTorch that the target model and the original model are not identical, so it just initialises the weights of layers which are present in both and ignores the rest.
(see https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict)
So, you will need to specify the strict argument when you load the pretrained model weights; load_state_dict can be called at this step. If the model whose weights must be loaded is self.encoder, and the state_dict can be retrieved from the file you just loaded, you can just do this:
loaded_weights = torch.load(os.path.join('models','weights.pt'))
self.encoder.load_state_dict(loaded_weights, strict=False)
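As a sanity check, load_state_dict returns a named tuple listing the keys that did not match, which lets you verify that only the layers you modified were skipped:
result = self.encoder.load_state_dict(loaded_weights, strict=False)
print(result.missing_keys)     # parameters in the model but absent from the checkpoint
print(result.unexpected_keys)  # parameters in the checkpoint but absent from the model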
For more details and a tutorial, see https://pytorch.org/tutorials/beginner/saving_loading_models.html.

Python Vetiver model - use alternative prediction method

I'm trying to use Vetiver to deploy an isolation forest model (for anomaly detection) to an API endpoint.
All is going well by adapting the example here.
However, when deployed, the endpoint uses the model.predict() method by default (which returns 1 for normal or -1 for anomaly).
I want the model to return a score between 0 and 1 as given by the model.score_samples() method.
Does anybody know how I can configure the Vetiver endpoint such that it uses .score_samples() rather than .predict() for scoring?
Thanks
vetiver.predict() primarily acts as a router to an API endpoint, so it does not have to have the same behavior as model.predict(). You can overload what function vetiver.predict() uses on your model by defining a custom handler.
In your case, an implementation might look something like the one below.
from vetiver.handlers.base import VetiverHandler

class ScoreSamplesHandler(VetiverHandler):
    def __init__(self, model, ptype_data):
        super().__init__(model, ptype_data)

    def handler_predict(self, input_data, check_ptype):
        """Define how to make predictions from your model."""
        prediction = self.model.score_samples(input_data)
        return prediction
Initialize your custom handler and pass it into VetiverModel().
custom_model = ScoreSamplesHandler(model, ptype_data)
VetiverModel(custom_model, "model_name")
Then, when you use the vetiver.predict() function, it will return the values for model.score_samples(), rather than model.predict().
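As a rough sketch of the full serving flow (the port is a placeholder, and argument names may differ slightly between vetiver versions, so check the docs for yours):
from vetiver import VetiverModel, VetiverAPI

v = VetiverModel(custom_model, "model_name")
app = VetiverAPI(v)
app.run(port=8080)  # serves an endpoint backed by handler_predict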

How do we create a reusable block that shares its architecture but learns a different set of weights each time it is used in a single Keras model?

I am using tensorflow.keras and want to know if it is possible to create reusable blocks of built-in Keras layers. For example, I would like to repeatedly use the same set of layers (able to learn different weights) at different positions in a model. I would like to use the following block at different points in my model:
keep_prob_ = 0.5
input_features = Input(shape=(29, 1664))
Important_features = SelfAttention(
    activation='tanh',
    kernel_regularizer=tf.keras.regularizers.l2(0.),
    kernel_initializer='glorot_uniform'
)(input_features)
drop3 = tf.keras.layers.Dropout(keep_prob_)(Important_features)
Layer_norm_feat = tf.keras.layers.Add()([input_features, drop3])
Layer_norm = tf.keras.layers.LayerNormalization(axis=-1)(Layer_norm_feat)
ff_out = tf.keras.layers.Dense(Layer_norm.shape[2], activation='relu')(Layer_norm)
ff_out = tf.keras.layers.Dense(Layer_norm.shape[2])(ff_out)
drop4 = tf.keras.layers.Dropout(keep_prob_)(ff_out)
Layer_norm_input = tf.keras.layers.Add()([Layer_norm, drop4])
Attention_block_out = tf.keras.layers.LayerNormalization(axis=-1)(Layer_norm_input)
intraEpoch_att_block = tf.keras.Model(inputs=input_features, outputs=Attention_block_out)
I have read about creating custom layers in Keras, but I did not find the documentation clear enough. I want to reuse a sub-model that can learn a different set of weights each time it is used, within a single functional API model in tensorflow.keras.
Use this code (I removed SelfAttention, so add it back):
import tensorflow as tf

class my_model(tf.keras.layers.Layer):
    def __init__(self):
        super(my_model, self).__init__()
        keep_prob_ = 0.5
        input_features = tf.keras.layers.Input(shape=(29, 1664))
        drop3 = tf.keras.layers.Dropout(keep_prob_)(input_features)
        Layer_norm_feat = tf.keras.layers.Add()([input_features, drop3])
        Layer_norm = tf.keras.layers.LayerNormalization(axis=-1)(Layer_norm_feat)
        ff_out = tf.keras.layers.Dense(Layer_norm.shape[2], activation='relu')(Layer_norm)
        ff_out = tf.keras.layers.Dense(Layer_norm.shape[2])(ff_out)
        drop4 = tf.keras.layers.Dropout(keep_prob_)(ff_out)
        Layer_norm_input = tf.keras.layers.Add()([Layer_norm, drop4])
        Attention_block_out = tf.keras.layers.LayerNormalization(axis=-1)(Layer_norm_input)
        self.intraEpoch_att_block = tf.keras.Model(inputs=input_features, outputs=Attention_block_out)

    def call(self, inp, training=False):
        x = self.intraEpoch_att_block(inp)
        return x
model1 = my_model()
model2 = my_model()
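A quick sketch of reusing the block: each instance holds its own internal tf.keras.Model, so stacking the two instances in one functional model gives the same architecture at two positions with independent weights (the input shape here matches the block's internal Input):
inp = tf.keras.layers.Input(shape=(29, 1664))
x = model1(inp)   # first instance, first set of weights
out = model2(x)   # second instance, separate set of weights
full_model = tf.keras.Model(inputs=inp, outputs=out)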
