How to convert from .pb to .tflite? - python

I have created an object detection model using PyTorch and converted it from .pth to .onnx and then to .pb, but now I need to convert it into .tflite for an Android app. How do I do that? It's my first time.
input_arrays = [64, 3, 224, 224]
output_arrays = ?
for binary classification.
I built the model in PyTorch, but everything I can find about this conversion is for Keras or TensorFlow...
This is the code I have used to convert it from .pb to .tflite:
converter = lite.TFLiteConverter.from_frozen_graph(
    "model/model.pb", input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
!tflite_convert \
  --output_file=model/model.tflite \
  --graph_def_file=model/model.pb \
  --input_arrays=input_arrays \
  --output_arrays=output_arrays
I think it has something to do with the input and output arrays, but I'm not sure. Is graph_def_file supposed to point to model.pb?

There is no need to specify input and output arrays when using the following code:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Try this out.
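Since your pipeline starts from PyTorch and ONNX, here is a minimal sketch (assuming the onnx and onnx-tf packages are installed; file names are placeholders) of producing the SavedModel directory that from_saved_model expects:
# Sketch, assuming the onnx and onnx-tf packages; "model.onnx" and
# "saved_model_dir" are placeholder names.
import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("model.onnx")      # the ONNX model exported from PyTorch
tf_rep = prepare(onnx_model)              # TensorFlow representation of the graph
tf_rep.export_graph("saved_model_dir")    # newer onnx-tf releases write a SavedModel here
Note that depending on the onnx-tf version, export_graph writes either a SavedModel directory (newer releases) or a single frozen .pb file (older releases); in the latter case you would still need from_frozen_graph with the input and output node names.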

Related

How to load tflite weights in Raspberry Pi

I'm working with neural networks and I need to run them on a Raspberry Pi 2.
When I try to install TensorFlow 2.x it fails, and I can only install TensorFlow 1.14. For this reason I found the tflite library, which in theory gives me a lite version of TF.
Here is an image that shows I can't install it.
First of all, I convert my keras model (model.h5) into .tflite model.
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
Up to here, everything is OK. The problem is when I want to use this model. With TensorFlow I know how to do it:
from tensorflow import keras

def importModel(myPath):
    file = open(myPath + 'model/model.json', 'r')
    model_json = file.read()
    file.close()
    model = keras.models.model_from_json(model_json)
    model.load_weights(myPath + 'model/model.h5')
    return model
But I really don't understand how to do it with tflite. Can somebody help me, please?
You can find this in the official documentation:
import numpy as np
import tensorflow as tf
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
And if you have trouble installing TensorFlow 2.x on your Raspberry Pi, it may be because you are not using the latest version of Python 3.
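If installing full TensorFlow keeps failing on the Pi, here is a minimal sketch using the standalone tflite_runtime package instead (assuming a tflite_runtime wheel is available for your Pi's Python version; its interpreter mirrors tf.lite.Interpreter):
# Sketch using the standalone tflite_runtime package instead of full TensorFlow.
# Assumes a tflite_runtime wheel exists for your Raspberry Pi's Python version.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed random data just to check that inference runs.
input_data = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))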

Can't convert Frozen Inference Graph to .tflite

I am new to the object detection API and TensorFlow in general. I followed this tutorial and in the end I produced a frozen_inference_graph.pb. I want to run this object detection model on my phone, which in my understanding requires me to convert it to .tflite (please lmk if this doesn't make any sense).
When I tried to convert it using this standard code here:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
It throws an error, saying:
ValueError: None is only supported in the 1st dimension. Tensor
'image_tensor' has invalid shape '[None, None, None, 3]'
This is a common error I found on the internet, and after searching through many threads, I tried to give an extra parameter to the code:
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
Now it looks like this:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
This works at first, but throws another error at the end, saying:
Check failed: array.data_type == array.final_data_type Array
"image_tensor" has mis-matching actual and final data types
(data_type=uint8, final_data_type=float). Fatal Error: Aborted
I understand that my input tensor has data type uint8, and I guess this causes the mismatch. My question is: is this the correct way to approach things (I want to run my model on my phone)? If it is, how do I then fix the error?
Thank you very much.
Change your model input (the image_tensor placeholder) to have data type tf.float32.
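If you can re-export the graph, here is a minimal sketch of what that change means in practice (assuming a TF 1.x graph you build or export yourself; the shape is the one from the question):
# Minimal sketch, assuming you control the graph-building/export code (TF 1.x API).
import tensorflow as tf

# Declare the input as float32 instead of uint8; shape taken from the question.
image_tensor = tf.placeholder(tf.float32, shape=[1, 600, 600, 3], name='image_tensor')
# ... build or import the rest of the model on top of image_tensor, then
# re-freeze the graph and run the converter again.
For models exported with the Object Detection API, the export_tflite_ssd_graph.py script used in a later answer produces a graph whose input (normalized_input_image_tensor) is already float32, which sidesteps this error.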

Obtain input_array and output_array items to convert model to tflite format

PS: Please don't point me to converting the Keras model directly to tflite, as my .h5 file fails to convert to .tflite directly. I somehow managed to convert my .h5 file to .pb.
I have followed this Jupyter notebook for face recognition using Keras. I then saved my model to a model.h5 file, then converted it to a frozen graph, model.pb, using this.
Now I want to use my tensorflow file in Android. For this I will need to have Tensorflow Lite, which requires me to convert my model into a .tflite format.
For this, I'm trying to follow the official guidelines for it here. As you can see there, it requires input_array and output_array arrays. How do I obtain details of these things from my model.pb file?
Input arrays and output arrays are the arrays which store the input and output tensors, respectively.
They are intended to inform the TFLiteConverter about the input and output tensors that will be used at the time of inference.
For a Keras model,
The input tensor is the placeholder tensor of the first layer.
input_tensor = model.layers[0].input
The output tensor is usually the output of the last layer (often an activation).
output_tensor = model.layers[LAST_LAYER_INDEX].output
For a Frozen Graph,
import tensorflow as tf
gf = tf.GraphDef()
m_file = open('model.pb','rb')
gf.ParseFromString(m_file.read())
We get the names of the nodes:
for n in gf.node:
    print(n.name)
The op type of each node is available as:
op_type = n.op
The input tensor is typically a Placeholder node; the output tensor is the tensor which you run using session.run().
For conversion, we get:
input_array = [input_tensor]
output_array = [output_tensor]
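Putting this together, here is a minimal sketch (TF 1.x converter API; 'model.pb' and the node-name heuristics are assumptions) that derives the arrays from a frozen graph and feeds them to the converter:
# Minimal sketch, TF 1.x API assumed; 'model.pb' is a placeholder path and the
# "placeholders in, last node out" heuristic is only an assumption.
import tensorflow as tf

gf = tf.GraphDef()
with open('model.pb', 'rb') as f:
    gf.ParseFromString(f.read())

input_arrays = [n.name for n in gf.node if n.op == 'Placeholder']
output_arrays = [gf.node[-1].name]   # often, but not always, the output node

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'model.pb', input_arrays, output_arrays)
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)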

Tensorflow quantization on Windows

I've frozen my model and got a .pb file. Then I quantized my model using TocoConverter on Linux, as it's not supported on Windows, and got quantized_model.tflite. I can load it and get predictions on Linux, but I have issues doing the same on Windows, as my project requires.
I've tried to load it with tf.contrib.lite.Interpreter using this code:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter=tf.contrib.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
# change the following line to feed into your own data.
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'],input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
ImportError: No module named 'tensorflow.contrib.lite.python.interpreter'
But it failed with the "No module named 'tensorflow.contrib.lite.python.interpreter'" error. I always get these errors on Windows when trying to use anything from tf.contrib.lite. Maybe there is a way to load this on Windows? Or can you advise alternative options to quantize a model on Windows?
TOCO is currently not supported in the Windows CMake build; this is what I remember reading somewhere.
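One possible workaround (my suggestion, not something stated in the answer above): keep doing the quantization on Linux, and load the resulting .tflite on Windows with a TensorFlow version where the interpreter is exposed as tf.lite.Interpreter instead of tf.contrib.lite (roughly TF 1.14 and later):
# Sketch: loading the already-converted model on Windows, assuming a TF version
# (around 1.14 or later) where the interpreter is available as tf.lite.Interpreter.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()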

Tensorflow Convert pb file to TFLITE using python

I have a model saved after training as a .pb file, and I want to use TensorFlow Mobile, so it's important to work with a TFLITE file.
The problem is that most of the examples I found after googling for converters are commands for the terminal or cmd.
Can you please share with me an example of converting to tflite files using python code?
You can convert to tflite directly in Python. You have to freeze the graph and use toco_convert. It needs the input and output names and shapes to be determined ahead of calling the API, just as in the command-line case.
An example code snippet, copied from the documentation, where a "frozen" (no variables) graph is defined as part of your code:
import tensorflow as tf
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
val = img + tf.constant([1., 2., 3.]) + tf.constant([1., 4., 4.])
out = tf.identity(val, name="out")
with tf.Session() as sess:
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
    open("test.tflite", "wb").write(tflite_model)
In the example above, there is no freeze-graph step since there are no variables. If you have variables and run toco without freezing the graph first, i.e. without converting those variables to constants, then toco will complain!
If you have frozen graphdef and know the inputs and outputs
Then you don't need the session. You can directly call toco API:
path_to_frozen_graphdef_pb = '...'
input_tensors = [...]
output_tensors = [...]
frozen_graph_def = tf.GraphDef()
with open(path_to_frozen_graphdef_pb, 'rb') as f:
    frozen_graph_def.ParseFromString(f.read())
tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, input_tensors, output_tensors)
If you have non-frozen graphdef and know the inputs and outputs
Then you have to load the session and freeze the graph first before calling toco:
path_to_graphdef_pb = '...'
g = tf.GraphDef()
with open(path_to_graphdef_pb, 'rb') as f:
    g.ParseFromString(f.read())
output_node_names = ["..."]
input_tensors = [..]
output_tensors = [...]
with tf.Session(graph=g) as sess:
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    # Note: here we are passing frozen_graph_def obtained in the previous step to toco.
    tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, input_tensors, output_tensors)
If you don't know inputs / outputs of the graph
This can happen if you did not define the graph yourself, e.g. you downloaded the graph from somewhere or used a high-level API like tf.estimator that hides the graph from you. In this case, you need to load the graph and poke around to figure out the inputs and outputs before calling toco, as in the sketch below. See my answer to this SO question.
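Here is a hedged sketch of that "poking around" (the path and the heuristics are assumptions, not guarantees): placeholders are usually the inputs, and nodes that no other node consumes are candidate outputs:
# Sketch for inspecting an unknown frozen graph; 'model.pb' is a placeholder path
# and the heuristics below are assumptions, not guarantees.
import tensorflow as tf

graph_def = tf.GraphDef()
with open('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Placeholder ops are usually the graph inputs.
placeholders = [n.name for n in graph_def.node if n.op == 'Placeholder']

# Nodes that nothing else consumes are candidate outputs.
consumed = set()
for n in graph_def.node:
    for inp in n.input:
        consumed.add(inp.split(':')[0].lstrip('^'))
candidate_outputs = [n.name for n in graph_def.node if n.name not in consumed]

print('likely inputs :', placeholders)
print('likely outputs:', candidate_outputs)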
Following this TF example, you can pass the "--saved_model_dir" parameter to export saved_model.pb and the variables folder to some directory (a non-existing one) when running the retrain.py script:
python retrain.py ...... --saved_model_dir /home/..../export
In order to convert your model to tflite you need to use the line below:
convert_saved_model.convert(saved_model_dir='/home/.../export',output_arrays="final_result",output_tflite='/home/.../export/graph.tflite')
Note: you need to import convert_saved_model:
from tensorflow.contrib.lite.python import convert_saved_model
Remember you can convert to tflite in two ways, but the easiest way is to export saved_model.pb with its variables, in case you want to avoid using build tools like Bazel.
This is what worked for me (SSD_InceptionV2 model):
After finishing the training, I used model_main.py from the object_detection folder (TF v1.11).
Export the graph for TFLite:
python /tensorflow/models/research/object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path annotations/ssd_inception_v2_coco.config \
    --trained_checkpoint_prefix trained-inference-graphs/inference_graph_v7.pb/model.ckpt \
    --output_directory trained-inference-graphs/inference_graph_v7.pb/tflite \
    --max_detections 3
This generates a .pb file, so you can generate the tflite file from it like this:
tflite_convert \
    --output_file=test.tflite \
    --graph_def_file=tflite_graph.pb \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --input_shape=1,300,300,3 \
    --allow_custom_ops
As for the inputs/outputs, I am not 100% sure how to get them, but this code has helped me before:
import tensorflow as tf

frozen = '/tensorflow/mobilenets/mobilenet_v1_1.0_224.pb'
gf = tf.GraphDef()
gf.ParseFromString(open(frozen, 'rb').read())

# Print candidate input/output nodes by op type.
print([n.name + '=>' + n.op for n in gf.node if n.op in ('Softmax', 'Placeholder')])
print([n.name + '=>' + n.op for n in gf.node if n.op in ('Softmax', 'Mul')])

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    frozen, INPUT_NODE, OUTPUT_NODE)
tflite_model = converter.convert()
open(TFLITE_OUTPUT_FILE, "wb").write(tflite_model)
INPUT_NODE and OUTPUT_NODE are the lists of input and output node names, respectively.
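For instance, once the node names are printed, the two lists (and the output path) might end up looking like this (hypothetical names, shown only to illustrate the format):
# Hypothetical node names, shown only to illustrate the expected format.
INPUT_NODE = ['input']
OUTPUT_NODE = ['MobilenetV1/Predictions/Softmax']
TFLITE_OUTPUT_FILE = 'mobilenet_v1.tflite'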
