How to convert checkpoint to .pb model for model deployment? - python

I have trained a seq2seq language translation model in TensorFlow and saved it as checkpoints, with the following files in my train folder:
translate.ckpt-157450.data-00000-of-00001
translate.ckpt-157450.index
translate.ckpt-157450.meta
checkpoint
Now, I want to convert it to a protobuf file (.pb) for deployment purposes. Here is some code that I am using:
import tensorflow as tf

meta_path = "/home/i9/L-T_Model_Training/01_Apr_model/train/translate.ckpt-157450.meta"

with tf.Session() as sess:
    # Rebuild the graph from the .meta file
    saver = tf.train.import_meta_graph(meta_path)
    # Restore the latest checkpoint found in the current directory
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    # Collect every node name as an "output" node
    output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
    # Freeze the graph: replace variables with constants
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with open("output_graph.pb", "wb") as f:
        f.write(frozen_graph.SerializeToString())
I am running this code inside my train folder.
It shows me an error: ValueError: Can't load save_path when it is None.
I also tried the freeze_graph.py script but could not produce the model.
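Note that this particular ValueError just means tf.train.latest_checkpoint('.') returned None, i.e. it found no checkpoint file in the directory you passed. A minimal fix sketch, assuming the files listed above (the path is taken from your meta_path):

import tensorflow as tf

train_dir = "/home/i9/L-T_Model_Training/01_Apr_model/train"

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(train_dir + "/translate.ckpt-157450.meta")
    # Either point latest_checkpoint at the actual train folder...
    ckpt = tf.train.latest_checkpoint(train_dir)
    # ...or pass the checkpoint prefix explicitly:
    # ckpt = train_dir + "/translate.ckpt-157450"
    saver.restore(sess, ckpt)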

I did this for an NVIDIA/OpenSeq2Seq-trained model; I don't know whether that matches your case.
I created a gist file with the relevant code.
Basically, the sequence I did was:
1. Load the model
2. Call build_trt_forward_pass_graph (it's the only way I could get it working)
3. Get the right output node
4. Fix the batch norm nodes
5. Freeze the graph
6. Save it
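For reference, a rough sketch of steps 3 to 6 in generic TF 1.x terms (the OpenSeq2Seq loading and the build_trt_forward_pass_graph call are omitted, and output_node_name is an assumed placeholder; the gist has the real code):

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Assumes `sess` holds the restored model and you have identified
# the real output node name (the one below is hypothetical).
output_node_name = "ForwardPass/decoder/logits"

gd = sess.graph.as_graph_def()

# Common batch-norm fix: rewrite RefSwitch/AssignSub nodes so the
# frozen graph no longer references variable ops.
for node in gd.node:
    if node.op == 'RefSwitch':
        node.op = 'Switch'
        for i, name in enumerate(node.input):
            if 'moving_' in name:
                node.input[i] = name + '/read'
    elif node.op == 'AssignSub':
        node.op = 'Sub'
        if 'use_locking' in node.attr:
            del node.attr['use_locking']

frozen = graph_util.convert_variables_to_constants(
    sess, gd, [output_node_name])

with open("frozen_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())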
Please let me know if you have other ideas and if you try it, share the results with us.
Regards

Related

How to load a Tensorflow model saved with make_image_classifier tool

I've made a custom image classifier model using a TensorFlow tool called make_image_classifier:
https://github.com/tensorflow/hub/tree/master/tensorflow_hub/tools/make_image_classifier
Now the model is exported as a .pb file along with two folders, assets and variables.
The question is how can I use this custom model to make predictions?
I've gone through the TF documentation and tried many different things over the past few days, but found no solution.
Someone who also found no clear information wrote a guide, but it doesn't work for me either. Step 3 of that guide has all the code required to load the module and classify an image using the custom model. The problem is that I need to know the names of the input and output nodes, and I don't have them. I tried to find them using Netron, but it didn't work.
https://heartbeat.fritz.ai/automl-vision-edge-exporting-and-loading-tensorflow-saved-models-with-python-f4e8ce1b943a
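For reference, a SavedModel records its input and output tensor names in its signatures, which can be printed back with the TF 1.x loader API; a minimal sketch:

import tensorflow as tf

export_path = '/Users/aayusharora/Aftershoot/backend/loadmodel/models/'

with tf.Session(graph=tf.Graph()) as sess:
    # load() returns the MetaGraphDef, whose signatures map logical
    # names to the actual input/output tensor names.
    meta_graph_def = tf.saved_model.loader.load(sess, ['serve'], export_path)
    for name, sig in meta_graph_def.signature_def.items():
        print(name, sig.inputs, sig.outputs)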
import tensorflow as tf

export_path = '/Users/aayusharora/Aftershoot/backend/loadmodel/models/'

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], export_path)

    path = '/Users/aayusharora/Aftershoot/backend/sampleImage.jpg'
    with open(path, "rb") as img_file:
        # The tensor names below are guesses; string feed keys must be
        # tensor names, i.e. include the ':0' suffix.
        y_pred = sess.run('tile:0',
                          feed_dict={'normalised_input_image_tensor:0': [img_file.read()]})
    print(y_pred)
Can someone please give me a clue about how to load a saved model and use it to make predictions?
From Save and load models | TensorFlow Core:
You can reload the saved model with:
new_model = tf.keras.models.load_model('<path-to-export-path>/my_model')
Assuming you have these files together (assets, variables, and the .pb file), which you seem to have:
ls <path-to-export-path>/my_model
assets  saved_model.pb  variables
The new_model should be like the original. To check its architecture:
new_model.summary()
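From there, predictions work like with any Keras model. A minimal sketch, assuming a 224x224 RGB input (check new_model.summary() for the real input shape of your make_image_classifier export):

import numpy as np
import tensorflow as tf

# Load and preprocess one image to the model's input shape (assumed
# 224x224 here; read the actual shape from new_model.summary()).
img = tf.keras.preprocessing.image.load_img('sampleImage.jpg',
                                            target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img) / 255.0
x = np.expand_dims(x, axis=0)  # add the batch dimension

probs = new_model.predict(x)
print(np.argmax(probs, axis=1))  # index of the most likely class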

TensorFlow: Saving py_func to .pb file

I am trying to build a TensorFlow model in which part of the computation is written in ordinary Python code via tf.py_func. The problem is that when I save the model to a .pb file, the file itself is very small and does not include the py_func:0 tensor. When I try to load and run the model from the .pb file, I get: ValueError: callback pyfunc_0 is not found.
Everything works when I don't save and reload it as a .pb file.
Is anyone able to help? This is really important to me and has given me a couple of sleepless nights.
import tensorflow as tf
from keras import backend as K
from keras.callbacks import TensorBoard
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants, tag_constants

model_version = "465555564"
tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0,
                          write_graph=True, write_images=False)

sess = tf.Session()
K.set_session(sess)
K.set_learning_phase(0)

def my_func(x):
    some_function  # placeholder for the actual Python logic

input = tf.placeholder(tf.float32)
y = tf.py_func(my_func, [input], tf.float32)

prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def(
    {"inputs": input}, {"prediction": y})

builder = saved_model_builder.SavedModelBuilder('./' + model_version)
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature,
    },
    legacy_init_op=legacy_init_op)
builder.save()
There is a way to save TF models with tf.py_func, but you have to do it without using a SavedModel.
TF has 2 levels of model saving: checkpoints and SavedModels. See this answer for more details, but to quote it here:
A checkpoint contains the value of (some of the) variables in a TensorFlow model. It is created by a Saver. To use a checkpoint, you need to have a compatible TensorFlow Graph, whose Variables have the same names as the Variables in the checkpoint.
SavedModel is much more comprehensive: It contains a set of Graphs (MetaGraphs, in fact, saving collections and such), as well as a checkpoint which is supposed to be compatible with these Graphs, and any asset files that are needed to run the model (e.g. Vocabulary files). For each MetaGraph it contains, it also stores a set of signatures. Signatures define (named) input and output tensors.
The tf.py_func op cannot be saved with a SavedModel (noted on this page in the docs), which is what you tried to do here. There is a good reason for this. SavedModels are supposed to be totally independent from the original code, able to be loaded in any other language that can deserialize it. This allows the models to be loaded by things like ML Engine, which is probably written in C++ or something like that. The problem is that it cannot serialize arbitrary Python code, so py_func is a no-go.
You can work around this by using checkpoints, as long as you are okay with staying in Python. You will not get the independence that SavedModels provide. You can save a checkpoint after training with a tf.train.Saver, and then in a new Session, re-build the whole graph and load it with that Saver. There is even a way to use that code in ML Engine, which used to be exclusively for SavedModels. You can use custom prediction routines to side-step the need for a SavedModel.
More info on saving/restoring models in the docs.
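A minimal sketch of that checkpoint workaround, assuming a toy graph (the variable w and the doubling my_func are illustrations, not your model). The key point is that the graph-building code, my_func included, is re-run in the restoring process, so the Python callable exists there:

import numpy as np
import tensorflow as tf

def my_func(x):
    # Stand-in for the Python logic you wrap with py_func.
    return (x * 2.0).astype(np.float32)

def build_graph():
    # Rebuilding the graph registers my_func in this process, which is
    # why checkpoints work where a SavedModel cannot.
    w = tf.get_variable("w", initializer=1.0)
    inp = tf.placeholder(tf.float32, name="inputs")
    out = tf.py_func(my_func, [inp * w], tf.float32, name="prediction")
    return inp, out

# --- Save: build, initialize, checkpoint ---
inp, out = build_graph()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "./model.ckpt")

# --- Restore: rebuild the same graph, then load the variables ---
tf.reset_default_graph()
inp, out = build_graph()
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")
    print(sess.run(out, feed_dict={inp: [1.0, 2.0]}))  # -> [2. 4.]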

Tensorflow: Download and run pretrained VGG or ResNet model

Let's start at the beginning. So far, I have created and trained small networks in TensorFlow myself. During training, I save my model and get the following files in my directory:
model.ckpt.meta
model.ckpt.index
model.ckpt.data-00000-of-00001
Later, I load the model saved in network_dir to do some classifications and extract the trainable variables of my model.
saver = tf.train.import_meta_graph(network_dir + ".meta")
variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="NETWORK")
Now I want to work with larger pretrained models like VGG16 or ResNet, loading them with my code just like my own networks above.
On this site, I found many pretrained models:
https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models
I downloaded the VGG16 checkpoint and realized that it contains only the trained parameters.
I would like to know how or where I can get the saved model or graph structure of these pretrained networks. How do I use, for example, the VGG16 checkpoint without the model.ckpt.meta, model.ckpt.index, and model.ckpt.data-00000-of-00001 files?
Next to the weights link, there is a link to the code that defines the model. For instance, for VGG16: Code. Create the model using that code, then restore the variables from the checkpoint:
import tensorflow as tf
from tensorflow.contrib.slim.nets import vgg  # model definition from slim

slim = tf.contrib.slim

image = ...  # Define your input somehow, e.g. with a placeholder

logits, _ = vgg.vgg_16(image)
predictions = tf.argmax(logits, 1)

variables_to_restore = slim.get_variables_to_restore()
saver = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    saver.restore(sess, "/path/to/model.ckpt")
So, the code contained in vgg.py will create all the variables for you. Using the tf-slim helper, you can get the list of variables to restore. Then just follow the usual procedure. There was a similar question on this.
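For instance, the input could be a simple placeholder; 224x224 matches vgg_16's default image size in slim, but verify it against the code you downloaded:

# Assumed input definition; vgg_16's default_image_size is 224.
image = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name="input")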

How to save model in .pb format and then load it for inference in Tensorflow?

I am a TensorFlow newbie trying to run the tutorial code at https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/02_Convolutional_Neural_Network.ipynb
Based on this code, I would like to save the model in .pb format using simple_save and restore it for testing, but I have no idea how to modify this piece of code. I have browsed some web pages but still didn't get the idea. Can anyone help me change this code so that I can save the trained model and then load it for inference? Thank you!
For saving the model, you need two things: the input and output tensor names. In your case, the input tensor is called x and the output tensors are y_pred and y_pred_cls (mentioned in In [29] in the notebook). Here's a simple example to save your model:
import tensorflow as tf

tf.saved_model.simple_save(session,
                           export_dir,
                           inputs={"x": x},
                           outputs={"y_pred": y_pred,
                                    "y_pred_cls": y_pred_cls})
EDIT:
Restoring:
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

restoring_graph = tf.Graph()
with restoring_graph.as_default():
    with tf.Session(graph=restoring_graph) as sess:
        # Restore saved values; simple_save writes the SERVING tag,
        # so load with that tag (not TRAINING).
        tf.saved_model.loader.load(
            sess,
            [tag_constants.SERVING],
            export_dir  # path to the SavedModel
        )
        # Pass inputs to the model and do predictions below
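        # A hedged sketch of that prediction step; "x:0" and "y_pred:0"
        # are assumed tensor names -- look up the real ones in your graph.
        x_t = restoring_graph.get_tensor_by_name("x:0")
        y_t = restoring_graph.get_tensor_by_name("y_pred:0")
        print(sess.run(y_t, feed_dict={x_t: batch_of_images}))  # batch_of_images: your test inputs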

Tensorflow - Deep MNIST Tutorial - Export classifier to C++

I have the trained "Deep MNIST Tutorial" NN, and I know how to test the model with the TensorFlow Python API. Now I want to export the classifier to C++, so I can use it without the TensorFlow API.
I know the trained model's topology, weights, and activation functions. Is there any example of such an implementation? I searched, but only found how to create and train a NN in C++, not classifier examples.
Thanks in advance.
Maybe the following code from the TensorFlow tutorials can help:
TensorFlow C++ Image Recognition Demo
This tutorial code is written in C++ and can run without TensorFlow installed.
One limitation is that it uses a model exported as a "frozen protobuf" .pb file. You can download an Inception V3 pre-trained model as the page describes, or freeze your own model to make one.
If you already saved the model/variables into a checkpoint, the following code would be helpful for freezing your graph:
freeze_graph.py
Or you can add the following code after your training is over to get a frozen model file, my_model.pb:
from tensorflow.python.framework import graph_util
from tensorflow.python.framework.graph_util import remove_training_nodes

# ...some sess.run loop for training

# Freeze: fold the trained variables into constants
output_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ['some_tensor_names_for_output'])
# Strip nodes only needed during training
output_graph_def = remove_training_nodes(output_graph_def)

with open('my_model.pb', 'wb') as f:
    f.write(output_graph_def.SerializeToString())
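Before porting to C++, it can be worth checking that the frozen file loads back in Python; a minimal sketch (the output name mirrors the placeholder above, so substitute your real node name):

import tensorflow as tf

# Load the frozen graph back and check the output node is reachable.
graph_def = tf.GraphDef()
with open('my_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    tf.import_graph_def(graph_def, name='')
    # Append ':0' to the node name to address its output tensor.
    out = g.get_tensor_by_name('some_tensor_names_for_output:0')
    print(out.shape)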
