TensorFlow - Deep MNIST Tutorial - Export classifier to C++ - python

I have the trained "Deep MNIST Tutorial" network, and I know how to test the model with the TensorFlow Python API. Now I want to export the classifier to C++ so I can use it without the TensorFlow API.
I know the trained model's topology, weights, and activation functions. Is there any example of such an implementation? I searched for one, but only found examples of how to create and train a network in C++, not classifier examples.
Thanks in advance.

Maybe the following code from the TensorFlow tutorials can help?
TensorFlow C++ Image Recognition Demo
This tutorial code is written in C++ and can run without a Python TensorFlow installation.
One limitation is that it uses a model exported as a "frozen protobuf" .pb file. You can download an Inception V3 pre-trained model as that page describes, or freeze your own model to make one.
If you have already saved the model/variables into a checkpoint, the following script is helpful for freezing your graph:
freeze_graph.py
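Called from Python rather than from the command line, its entry point looks roughly like this (a sketch; the file names, the checkpoint prefix, and the output node name are assumptions about your setup):
from tensorflow.python.tools import freeze_graph

# All file names and the output node name below are assumptions;
# substitute the ones from your own training run.
freeze_graph.freeze_graph(
    input_graph='graph.pbtxt',           # GraphDef written by tf.train.write_graph
    input_saver='',
    input_binary=False,                  # True if input_graph is a binary .pb
    input_checkpoint='model.ckpt',       # checkpoint prefix
    output_node_names='softmax_tensor',  # comma-separated output node names
    restore_op_name='save/restore_all',
    filename_tensor_name='save/Const:0',
    output_graph='my_model.pb',
    clear_devices=True,
    initializer_nodes='')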
Or you can add the following code after your training is over to get a frozen model file named my_model.pb:
from tensorflow.python.framework import graph_util

# ... some sess.run loop for training

# Replace variables with constants, keeping only the subgraph needed to
# compute the named output tensors.
output_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ['some_tensor_names_for_output'])
# Strip nodes that are only needed during training.
output_graph_def = graph_util.remove_training_nodes(output_graph_def)

with open('my_model.pb', 'wb') as f:
    f.write(output_graph_def.SerializeToString())
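Either way, before moving to C++ it is worth checking in Python that the frozen file is self-contained. A minimal sketch, assuming a Deep MNIST style graph with a flattened 784-float input; the tensor names 'input:0' and 'output:0' are placeholders for your graph's real names:
import numpy as np
import tensorflow as tf

# Parse the frozen GraphDef back from disk.
graph_def = tf.GraphDef()
with open('my_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=graph) as sess:
        # Dummy 28x28 image flattened to 784 floats.
        dummy = np.zeros((1, 784), dtype=np.float32)
        out = sess.run('output:0', feed_dict={'input:0': dummy})
        print(out)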

Related

How to load a TensorFlow model saved with the make_image_classifier tool

I've made a custom image classifier model using a TensorFlow tool called make_image_classifier:
https://github.com/tensorflow/hub/tree/master/tensorflow_hub/tools/make_image_classifier
The model is exported as a .pb file plus two folders, assets and variables.
The question is: how can I use this custom model to make predictions?
I've gone through all the TF documentation and tried many different things over the past few days, but found no solution.
Someone who also found no clear information wrote a guide, but it doesn't work for me either. Its "step 3" contains all the code required to load the module and classify an image using the custom model. The problem is that I need to know the names of the input and output nodes, and I don't have them. I tried to find them using Netron, but it didn't work.
https://heartbeat.fritz.ai/automl-vision-edge-exporting-and-loading-tensorflow-saved-models-with-python-f4e8ce1b943a
import tensorflow as tf

export_path = '/Users/aayusharora/Aftershoot/backend/loadmodel/models/'
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], export_path)
    path = '/Users/aayusharora/Aftershoot/backend/sampleImage.jpg'
    with open(path, "rb") as img_file:
        y_pred = sess.run('tile:0',
                          feed_dict={'normalised_input_image_tensor': [img_file.read()]})
        print(y_pred)
Can someone please give me a clue about how to load a saved model and use it to make predictions?
From Save and load models | TensorFlow Core:
You can reload a saved model:
new_model = tf.keras.models.load_model('<path-to-export-path>/my_model')
Assuming you have these files together (assets, variables, and the .pb file), which you seem to have:
ls <path-to-export-path>/my_model
assets  saved_model.pb  variables
new_model should behave like the original model. To check its architecture:
new_model.summary()
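To actually classify an image with the reloaded model, something like this sketch should work (the 224x224 input size and the [0, 1] scaling are assumptions; match whatever preprocessing the underlying hub module expects):
import numpy as np
import tensorflow as tf

new_model = tf.keras.models.load_model('<path-to-export-path>/my_model')

# Load one image and scale it to [0, 1]; 224x224 RGB is an assumption.
img = tf.keras.preprocessing.image.load_img('sampleImage.jpg',
                                            target_size=(224, 224))
x = np.expand_dims(
    tf.keras.preprocessing.image.img_to_array(img) / 255.0, axis=0)

probs = new_model.predict(x)
print('predicted class index:', np.argmax(probs, axis=-1))
If you still need the raw input and output tensor names, running saved_model_cli show --dir <path-to-export-path>/my_model --all on the command line prints every signature together with its input and output names.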

How to convert checkpoint to .pb model for model deployment?

I have trained a seq2seq language translation model in TensorFlow and saved it as checkpoints, with the following files in my train folder:
translate.ckpt-157450.data-00000-of-00001
translate.ckpt-157450.index
translate.ckpt-157450.meta
checkpoint
Now, I want to convert it to a protobuf file (.pb) for deployment purposes. Here is some code that I am using:
import tensorflow as tf

meta_path = "/home/i9/L-T_Model_Training/01_Apr_model/train/translate.ckpt-157450.meta"

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(meta_path)
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with open("output_graph.pb", "wb") as f:
        f.write(frozen_graph.SerializeToString())
I am running this code inside my train folder.
It shows me an error: ValueError: Can't load save_path when it is None.
I also tried freeze_graph.py script but could not get the model.
I did this for an NVIDIA/OpenSeq2Seq trained model; I don't know whether that matches your case.
I created a gist file with the relevant code.
Basically, the sequence I did was:
Load the model
Call build_trt_forward_pass_graph (it's the only way I could get it working)
Get the right output node
Fix batch norm nodes
Freeze the graph
Save it
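For a plain TensorFlow 1.x checkpoint (skipping the TRT-specific step 2), the freeze part of that sequence looks roughly like the sketch below. Restoring from an explicit checkpoint prefix instead of tf.train.latest_checkpoint('.') also avoids the Can't load save_path when it is None error above, which is raised when no checkpoint is found in the current directory. The output node name here is an assumption; use your model's real prediction op rather than every node in the graph:
import tensorflow as tf

meta_path = 'translate.ckpt-157450.meta'
ckpt_prefix = 'translate.ckpt-157450'        # explicit prefix, no directory scan
output_node_names = ['decoder/predictions']  # assumption: your actual output op

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and load the trained weights.
    saver = tf.train.import_meta_graph(meta_path)
    saver.restore(sess, ckpt_prefix)
    # Bake the weights into the graph as constants.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_node_names)

with open('output_graph.pb', 'wb') as f:
    f.write(frozen.SerializeToString())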
Please let me know if you have other ideas and if you try it, share the results with us.
Regards

TensorFlow: Download and run a pretrained VGG or ResNet model

Let's start at the beginning. So far I have created and trained small networks in TensorFlow myself. During training I save my model and get the following files in my directory:
model.ckpt.meta
model.ckpt.index
model.ckpt.data-00000-of-00001
Later, I load the model saved in network_dir to do some classifications and extract the trainable variables of my model.
saver = tf.train.import_meta_graph(network_dir + ".meta")
variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="NETWORK")
Now I want to work with larger pretrained models like VGG16 or ResNet and want to reuse my code for that. I want to load pretrained models just like my own networks, as shown above.
On this site, I found many pretrained models:
https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models
I downloaded the VGG16 checkpoint and realized that it contains only the trained parameters.
I would like to know how or where I can get the saved model or graph structure of these pretrained networks. How do I use, for example, the VGG16 checkpoint without the model.ckpt.meta, model.ckpt.index, and model.ckpt.data-00000-of-00001 files?
Next to the weights link, there is a link to the code that defines the model. For instance, for VGG16: Code. Create the model using that code and restore the variables from the checkpoint:
import tensorflow as tf
from tensorflow.contrib.slim.nets import vgg

slim = tf.contrib.slim

# Define your input somehow, e.g. with a placeholder
# (224x224 RGB is VGG16's standard input shape).
image = tf.placeholder(tf.float32, [None, 224, 224, 3])
logits, _ = vgg.vgg_16(image)
predictions = tf.argmax(logits, 1)

variables_to_restore = slim.get_variables_to_restore()
saver = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    saver.restore(sess, "/path/to/model.ckpt")
So the code contained in vgg.py will create all the variables for you, and the tf-slim helper gives you the list of variables to restore. Then just follow the usual procedure. There was a similar question on this.
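Here, the usual procedure amounts to restoring the checkpoint and running the prediction op. A minimal end-to-end sketch, assuming the VGG16 checkpoint from the slim model zoo and a dummy 224x224 input batch:
import numpy as np
import tensorflow as tf
from tensorflow.contrib.slim.nets import vgg

slim = tf.contrib.slim

image = tf.placeholder(tf.float32, [None, 224, 224, 3])
logits, _ = vgg.vgg_16(image, is_training=False)
predictions = tf.argmax(logits, 1)

saver = tf.train.Saver(slim.get_variables_to_restore())
with tf.Session() as sess:
    saver.restore(sess, '/path/to/vgg_16.ckpt')
    # Replace the dummy batch with real, preprocessed images.
    dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
    print(sess.run(predictions, feed_dict={image: dummy}))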

Reduce size of TensorFlow SavedModel for Google ML Engine deployment

I have developed and trained a CNN Keras model, and now I want to deploy it to Google Machine Learning Engine so I can execute predictions through their API.
I have converted it to the SavedModel format; export/saved_model.pb is 14 MB and the export/variables/ directory is around 380 MB. Google ML Engine has a 250 MB limit for this data and does not allow deploying a bigger model.
I saw a solution involving https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms, but I still haven't managed to bazel build that project due to unmet Visual Studio dependencies.
Is there any other way to reduce/compress (especially) the variables directory? What I would like is to convert the dtype from int64 to int32, but I don't know the format of the variables.data-00000-of-00001 file.
Thanks a lot!
Here is my Keras-model-to-TensorFlow-SavedModel export code:
import tensorflow as tf
from keras import backend as K
from keras.models import Model, load_model
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants, tag_constants
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def

# reset session
K.clear_session()
sess = tf.Session()
K.set_session(sess)

# disable loading of learning nodes
K.set_learning_phase(0)

# load model
model = load_model('local-activity-recognition-model.h5')
config = model.get_config()
weights = model.get_weights()
new_Model = Model.from_config(config)
new_Model.set_weights(weights)

# export saved model
export_path = './export'
builder = saved_model_builder.SavedModelBuilder(export_path)

signature = predict_signature_def(inputs={'export_input': new_Model.input},
                                  outputs={'export_output': new_Model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tag_constants.SERVING],
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()
You can freeze the graph; that should shrink it down a bit.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph_test.py
If you're building a classifier, you might want to switch to the Inception V3 architecture, which can easily be trained with TensorFlow's retrain code. That architecture is only about 90 MB.
https://www.tensorflow.org/tutorials/image_retraining
https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/label_image.py
Is it possible in your model to set the dtype of the variables when building the graph? Using float32 for training is generally a good idea.
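As a sketch of that idea for a Keras model (assuming the oversized variables were created as float64), you can force float32 before the model is built:
from keras import backend as K

# Make Keras create all variables as float32; if the originals were
# float64, this roughly halves the variables/ directory.
K.set_floatx('float32')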
You can also use the techniques described here, but they take a little more effort.

Export TensorFlow weights to HDF5 file and model to Keras model.json

I recently found this project, which runs inference on Keras models in the browser with GPU support using WebGL. I have a few TensorFlow projects that I would like to run inference on in a browser. Is there a way to export TensorFlow models to an HDF5 file so they can be run with keras-js?
If you are using Keras, you can do something like this:
model.save_weights('my_model.hdf5')
The only way I can see this working is to use a Keras model as an interface to your TensorFlow workflow. If you do that, you can save the model and its weights like this:
# save model architecture as JSON
with open(model_save_filename, "w") as model_save_file:
    model_json = model.to_json()
    model_save_file.write(model_json)

# save model weights
model.save_weights(model_weights_save_filename)
More information on using Keras as an interface to Tensorflow workflows here: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html#using-keras-models-with-tensorflow
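To verify the exported pair of files before handing them to keras-js, a quick round trip in Python (a sketch, with placeholder file names) is worth doing:
from keras.models import model_from_json

# Rebuild the architecture from the JSON, then load the weights back in.
with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('my_model_weights.hdf5')
model.summary()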
