Loading a model only once in FastAPI - Python

I have a deep learning model and I want to load it only once, when my API starts.
Right now the model is loaded for every request made to my API service with the test image, and it takes a fair bit of time to produce an output.
I tried loading my model with FastAPI's @app.on_event('startup'):
main.py
@app.on_event('startup')
def init_data():
    print("init call")
    model = load_model(model_path)
    return model
and I want to import this model variable into another Python file, classification.py:
# classification.py
pred = model.predict(image)
I am not sure if this is the correct way to do this. Any help regarding this will be appreciated.
Thanks

What you need is just to encapsulate this in functions and pass the model along.
# main.py
import classification
# Other imports...

model = load_model(model_path)
# Model is global, so it is loaded only once.
# ...

@app.get('/whatever/page')
def predict(image):
    global model  # We make sure model is not local
    # We call classification's predict function
    result = classification.predict(image, model)
    # ...
# classification.py
# import ...

# The function predict takes the model as a param, plus the image to predict.
def predict(image, model):
    pred = model.predict(image)
    # ...
    return pred
You could do this better with typed parameters, etc., but that gives an idea of the data flow. Another approach would be to load the model at the top level of classification.py so you don't need to pass it along; that code is executed only once, when the module is first imported.
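If you prefer to keep the @app.on_event('startup') hook from the question, a minimal sketch (using load_model, model_path and classification from the question; everything else is illustrative, not a definitive implementation) is to store the model on app.state instead of returning it, and read it back inside the endpoint:

# main.py
from fastapi import FastAPI, Request
import classification

app = FastAPI()

@app.on_event('startup')
def init_data():
    # Runs once when the server starts; keep the loaded model on app.state
    print("init call")
    app.state.model = load_model(model_path)

@app.get('/whatever/page')
def predict(image, request: Request):
    # Reuse the already-loaded model on every request
    return classification.predict(image, request.app.state.model)

This avoids both the module-level global and reloading the model per request.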

Related

Is there a way to embed non-TF functions into a tf.Keras model graph as a SavedModel signature?

I want to add preprocessing functions and methods to the model graph as a SavedModel signature.
example:
# suppose we have a keras model
# ...

# defining the function I want to add to the model graph
@tf.function
def process(model, img_path):
    # do some preprocessing using different libs and modules...
    outputs = {"preds": model.predict(preprocessed_img)}
    return outputs

# saving the model with a custom signature
tf.saved_model.save(new_model, dst_path,
                    signatures={"process": process})
Alternatively, we could use tf.Module here. However, the problem is that I cannot embed custom functions into the saved model graph.
Is there any way to do that?
I think you slightly misunderstand the purpose of the save_model method in TensorFlow.
As per the documentation, the intent is to have a method which serialises the model's graph so that it can be loaded with load_model afterwards.
The model returned by load_model is an instance of tf.Module with all its methods and attributes. What you want instead is to serialise the prediction pipeline.
To be honest, I'm not aware of a good way to do that. What you can do, however, is use a different method to serialise your preprocessing parameters, for example pickle or whatever mechanism the framework you use provides, and write a class on top of that which does the following:
class MyModel:
    def __init__(self, model_path, preprocessing_path):
        self.model = load_model(model_path)
        self.preprocessing = load_preprocessing(preprocessing_path)

    def predict(self, img_path):
        return self.model.predict(self.preprocessing(img_path))
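As a rough illustration of the idea above (the answer leaves load_preprocessing unspecified, so this pickle-based helper and the file names are only one possible sketch, not a definitive implementation):

import pickle

def load_preprocessing(preprocessing_path):
    # Restore whatever preprocessing callable/object was pickled at training time
    with open(preprocessing_path, "rb") as f:
        return pickle.load(f)

# Usage: one object bundles preprocessing and model for inference
my_model = MyModel("saved_model_dir", "preprocessing.pkl")
preds = my_model.predict("some_image.jpg")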

Tensorflow Keras Model subclassing -- call function

I am experimenting with self-supervised learning using TensorFlow. The example code I'm running can be found on the Keras examples website. This is the link to the NNCLR example. The GitHub link to download the code can be found here. While I have no issues running the examples, I am running into issues when I try to save the pretrained or the finetuned model using model.save().
The error I'm getting is this:
f"Model {model} cannot be saved either because the input shape is not "
ValueError: Model <__main__.NNCLR object at 0x7f6bc0f39550> cannot be saved either
because the input shape is not available or because the forward pass of the model is
not defined. To define a forward pass, please override `Model.call()`.
To specify an input shape, either call `build(input_shape)` directly, or call the model on actual data using `Model()`, `Model.fit()`, or `Model.predict()`.
If you have a custom training step, please make sure to invoke the forward pass in train step through
`Model.__call__`, i.e. `model(inputs)`, as opposed to `model.call()`.
I am unsure how to override the Model.call() method. I would appreciate some help.
One way to achieve model saving in such cases is to override the save (or save_weights) method of the keras.Model class. In your case, first initialize the finetuning model in the NNCLR class, and next override the save method for it. FYI, this way you may also be able to use the ModelCheckpoint API.
As said, define the finetuning model in the NNCLR model class and override the save method for it.
class NNCLR(keras.Model):
    def __init__(self, ...):
        super().__init__()
        ...
        self.finetuning_model = keras.Sequential(
            [
                layers.Input(shape=input_shape),
                self.classification_augmenter,
                self.encoder,
                layers.Dense(10),
            ],
            name="finetuning_model",
        )
        ...

    def save(
        self, filepath, overwrite=True, include_optimizer=True,
        save_format=None, signatures=None, options=None
    ):
        self.finetuning_model.save(
            filepath=filepath,
            overwrite=overwrite,
            save_format=save_format,
            options=options,
            include_optimizer=include_optimizer,
            signatures=signatures,
        )
model = NNCLR(...)
model.compile(...)
model.fit(...)
Next, you can do
model.save('finetune_model') # SavedModel format
finetune_model = tf.keras.models.load_model('finetune_model', compile=False)

'''
NNCLR code example: Evaluate sections
"A popular way to evaluate a SSL method in computer vision or
for that fact any other pre-training method as such is to learn
a linear classifier on the frozen features of the trained backbone
model and evaluate the classifier on unseen images."
'''
for layer in finetune_model.layers:
    if not isinstance(layer, layers.Dense):
        layer.trainable = False

finetune_model.summary() # OK
finetune_model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
)
finetune_model.fit(...)
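For completeness, the error message itself points at another route: overriding call() so the subclassed model has a defined forward pass. This is not part of the answer above; the following is only a minimal sketch, assuming the finetuning_model attribute defined there:

class NNCLR(keras.Model):
    ...  # __init__ as above, defining self.finetuning_model

    def call(self, inputs):
        # Forward pass used by predict()/save(); delegate to the fine-tuning head
        return self.finetuning_model(inputs)

With a forward pass defined, and the model built (for example after fit() or a call on real data), Keras should no longer complain about a missing forward pass when saving.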

How to make a PyTorch Lightning module have injected, nested models?

I have some nets, such as the following (augmented) resnet18:
num_classes = 10
resnet = models.resnet18(pretrained=True)
for param in resnet.parameters():
    param.requires_grad = True
num_ftrs = resnet.fc.in_features
resnet.fc = nn.Linear(num_ftrs, num_classes)
And I want to use them inside a lightning module, and have it handle all optimizations, to_device, stages and so on. In other words, I want to register those modules for my lightning module.
I also want to be able to access their public members.
class MyLightning(LightningModule):
    def __init__(self, resnet):
        super().__init__()
        self._resnet = resnet
        self._criterion = lambda x: 1.0

    def forward(self, x):
        resnet_out = self._resnet(x)
        loss = self._criterion(resnet_out)
        return loss

my_lightning = MyLightning(resnet)
The above doesn't optimize any parameters.
Trying
def __init__(self, resnet):
    ...
    _layers = list(resnet.children())[:-1]
    self._resnet = nn.Sequential(*_layers)
doesn't take resnet.fc into account, and it also doesn't seem like the intended way of nesting models inside PyTorch Lightning.
How do I nest models in PyTorch Lightning, and have them fully accessible and handled by the framework?
The training loop and optimization process are handled by the Trainer class. You can create one by initializing a new instance:
>>> trainer = Trainer()
and wrapping your PyTorch Lightning module with it. This way you can perform fitting, tuning, validating, and testing on that instance, provided a DataLoader or LightningDataModule:
>>> trainer.fit(my_lightning, train_dataloader, val_dataloader)
You will have to implement the following functions on your Lightning module (i.e. in your case MyLightning):
Name                   Description
__init__               Define computations here
forward                Use for inference only (separate from training_step)
training_step          The complete training loop
validation_step        The complete validation loop
test_step              The complete test loop
predict_step           The complete prediction loop
configure_optimizers   Define optimizers and LR schedulers
Source: the LightningModule documentation page.
Keep in mind that a LightningModule is an nn.Module, so whenever you assign an nn.Module as an attribute of a LightningModule in its __init__ function, that module is registered as a sub-module of the parent PyTorch Lightning module.
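Putting that together, a minimal sketch of what MyLightning could look like with the required hooks implemented (the loss, optimizer, and learning rate below are illustrative choices, not from the original question):

import torch
import torch.nn as nn
from pytorch_lightning import LightningModule, Trainer

class MyLightning(LightningModule):
    def __init__(self, resnet):
        super().__init__()
        # Assigning the nn.Module here registers it (and its parameters)
        # with the LightningModule, so the Trainer handles devices and optimization.
        self._resnet = resnet
        self._criterion = nn.CrossEntropyLoss()

    def forward(self, x):
        # Inference only
        return self._resnet(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self._criterion(self._resnet(x), y)
        return loss

    def configure_optimizers(self):
        # self.parameters() already includes the nested resnet's parameters
        return torch.optim.Adam(self.parameters(), lr=1e-3)

my_lightning = MyLightning(resnet)
# Public members of the nested model stay accessible, e.g. my_lightning._resnet.fc
trainer = Trainer()
trainer.fit(my_lightning, train_dataloader, val_dataloader)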
A PyTorch model should inherit from nn.Module, so you should first find the resnet18 implementation in PyTorch; then you can use resnet18 as is, or revise it yourself.
The original ResNet code lives at ...\python\Lib\site-packages\torchvision\models\resnet.py; you import the ResNet network from there, so you can use it directly.
There you will find the original code:
class ResNet(nn.Module):...
https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py#L166
and import it like
from torchvision.models import ResNet
Finally, you can inherit from ResNet:
class MyLightning(ResNet):
    ...

tensorflow models load successfully individually, but not sequentially? Garbage collection?

Sorry about the vague title but I'm not sure exactly how to describe it.
I am currently running tests on a model written in tensorflow.compat.v1. When it is used for inference, it must be restored as follows:
class Model:
    def __init__(self, filepath):
        ...
        self.sess, self.saver = self.setup_tf()
        self.merge = tf.compat.v1.summary.merge(
            [tf.compat.v1.summary.scalar('loss', self.loss)])

    def setup_tf(self):
        sess = tf.compat.v1.Session()  # TF session
        saver = tf.compat.v1.train.Saver(max_to_keep=1)
        latest_snapshot = tf.train.latest_checkpoint(join("../", self.model_dir))
        saver.restore(sess, latest_snapshot)
        return sess, saver
I have two of these tests, each involving restoring a model and performing some inferences with it. After the first model loads and runs inference successfully, the second test fails to restore its model completely. The specific error here is
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
class TestInference(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test1(self):
        model = Model(filepath)
        model.infer()

    def test2(self):
        model = Model(filepath)
        model.infer()
However, when I run each test individually (commenting the other out), there is no problem restoring the model. Even when I swap the order in which the models are loaded, the first one always succeeds and the second one fails.
I figure that I should be using setUp and tearDown for each test, but I'm confused about what exactly I should be 'tearing down'. Is it garbage collection? I have tried gc.collect(), but the same error occurs.
The testing class is shown above, in case it helps.
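No answer is reproduced here, but one common culprit with tensorflow.compat.v1 is that both tests build their variables into the same default graph, so the second Saver sees graph keys that don't match the checkpoint. The following tearDown is only a sketch of one thing to try (my assumption, not from the original post):

import gc
import unittest
import tensorflow as tf

class TestInference(unittest.TestCase):
    def tearDown(self):
        # Drop the graph state created by the previous test so the next
        # Model starts from a clean default graph.
        tf.compat.v1.reset_default_graph()
        gc.collect()

Closing the Model's session in tearDown (if the test keeps a reference to it) would also be worth trying.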

Loading Gensim FastText Model with Callbacks Fails

After creating a FastText model using Gensim, I want to load it but am running into errors seemingly related to callbacks.
The code used to create the model is
TRAIN_EPOCHS = 30
WINDOW = 5
MIN_COUNT = 50
DIMS = 256

vocab_model = gensim.models.FastText(sentences=model_input,
                                     size=DIMS,
                                     window=WINDOW,
                                     iter=TRAIN_EPOCHS,
                                     workers=6,
                                     min_count=MIN_COUNT,
                                     callbacks=[EpochSaver("./ftchkpts/")])
vocab_model.save('ft_256_min_50_model_30eps')
and the callback EpochSaver is defined as
import os
from gensim.models.callbacks import CallbackAny2Vec

class EpochSaver(CallbackAny2Vec):
    '''Callback to save the model after each epoch and show training parameters.'''
    def __init__(self, savedir):
        self.savedir = savedir
        self.epoch = 0
        os.makedirs(self.savedir, exist_ok=True)

    def on_epoch_end(self, model):
        savepath = os.path.join(self.savedir, f"ft256_{self.epoch}e")
        model.save(savepath)
        print(f"Epoch saved: {self.epoch + 1}")
        if os.path.isfile(os.path.join(self.savedir, f"ft256_{self.epoch-1}e")):
            os.remove(os.path.join(self.savedir, f"ft256_{self.epoch-1}e"))
            print("Previous model deleted")
        self.epoch += 1
Aside from the type of model, this is identical to my process for Word2Vec, which worked without issue. However, when I open another file and try to load the model with
from gensim.models import FastText
vocab = FastText.load(r'vocab/ft_256_min_50_model_30eps')
I'm greeted with the error
AttributeError: Can't get attribute 'EpochSaver' on <module '__main__'>
What can I do to get the vocabulary to load so I can create the embedding layer for my keras model? If it's relevant, this is happening in JupyterLab.
This extra difficulty loading models with custom callbacks is a known, open issue (at least through gensim-3.8.1 and October 2019).
You can see discussions of possible workarounds and fixes there, and the gensim team is considering simply disabling the auto-saving of callbacks altogether, requiring them to be re-specified for each later train()/etc. call that needs them.
You may be able to load existing models saved with your custom callbacks by importing those same callback classes, as the same names, into the code context where you're doing a load().
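A minimal sketch of that workaround, assuming EpochSaver has been moved into an importable module (the module name callbacks_module below is hypothetical): import it at the top level of the notebook or script that performs the load, so the pickled reference to __main__.EpochSaver can be resolved.

from gensim.models import FastText
from callbacks_module import EpochSaver  # hypothetical module holding the original class

# With EpochSaver bound at the top level of the loading context,
# the saved callback reference can be resolved during load().
vocab = FastText.load(r'vocab/ft_256_min_50_model_30eps')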
You could save callback-free versions of your trained models by blanking the model's callbacks property to its empty default value just before you save(), e.g.:
model.callbacks = ()
model.save(save_path)
Then, you wouldn't need to do any special importing of custom classes before a load(). (Of course if you again needed callback functionality on the re-loaded model, they'd then have to be explicitly reestablished after load()).
