How to create a tflite file from saved_model (SSD MobileNet) - python

I want to create an object-detection app based on an ssd_mobilenet model I retrained following a YouTube tutorial.
I chose the model ssd_mobilenet_v2_coco from the TensorFlow Model Zoo. After the retraining process I got a model with the following structure:
- saved_model
  - variables (empty folder)
  - saved_model.pb
- checkpoint
- frozen_inference_graph.pb
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
- pipeline.config
In the same folder, I have a Python script with the following code:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
After running this code, I got the following error:
ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'.
It seems that the image width and height are missing from the model. When I use the model as in the YouTube video, it works.
After lots of research and attempts, I tried other approaches, like running bazel/toco, but nothing helped me create a .tflite file.

As described in the documentation, you can pass different parameters to tf.lite.TFLiteConverter.from_saved_model:
For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter).
You can pass this information as described here. You need to provide the input tensor name and its shape, and also the output tensor name and its shape. For ssd_mobilenet_v2_coco, you need to pin down the input shape the network will be used with, like this:
tf.lite.TFLiteConverter.from_saved_model("saved_model", input_shapes={"image_tensor" : [1,300,300,3]})
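Putting it together, a minimal end-to-end sketch (assuming a TF 1.x converter, since image_tensor comes from the TF1 Object Detection API, and the 300x300 size from the default ssd_mobilenet_v2_coco pipeline.config):

import tensorflow as tf  # TF 1.x

# Pin the batch size and image dimensions so TFLite gets a static input shape.
converter = tf.lite.TFLiteConverter.from_saved_model(
    "saved_model",
    input_shapes={"image_tensor": [1, 300, 300, 3]})
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)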

Related

How to convert a style transfer tensorflow model to mlmodel with flexible input shape?

I have read the Core ML guide, which shows how to convert a .pb model to an .mlmodel using coremltools. However, I get the error below when trying to follow the guide, which means the input shape must be fully specified.
ValueError: "ResizeBilinear" op: the second input, which is the output size, must be known statically
So, does anyone know how to convert the model to an .mlmodel with a flexible input shape?
Here is my code:
import coremltools as ct

def mlmodel_image(pb):
    input_shape = ct.Shape(shape=(1, ct.RangeDim(1, 720), ct.RangeDim(1, 1280), 3))
    model_input = ct.ImageType(shape=input_shape)
    mlmodel = ct.convert(pb, inputs=[model_input], source='TensorFlow')
    mlmodel.save(pb.replace(".pb", "_img.mlmodel"))
    print('------save to ', pb.replace(".pb", "_img.mlmodel"))
Please try my sample:
https://github.com/dhrebeniuk/RealTimeFastStyleTransfer
Also take a look at my article with the attached Google Colab notebook in PyTorch; it contains instructions on how to run style transfer on iOS with maximum performance.
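If a continuous RangeDim keeps tripping the static-size check in ResizeBilinear, one possible workaround is ct.EnumeratedShapes, where every candidate shape is concrete at conversion time. A sketch, assuming coremltools 4+ and a hypothetical model.pb:

import coremltools as ct

# Enumerate a fixed set of shapes instead of a continuous range,
# so ops like ResizeBilinear see static output sizes.
input_shape = ct.EnumeratedShapes(
    shapes=[(1, 360, 640, 3), (1, 720, 1280, 3)],
    default=(1, 720, 1280, 3))
model_input = ct.ImageType(shape=input_shape)
mlmodel = ct.convert('model.pb', inputs=[model_input], source='TensorFlow')
mlmodel.save('model_img.mlmodel')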

ValueError: Input and filter shapes must be int or symbolic in Conv2D node

I was trying to convert my .pb file to a CoreML model using tfcoreml (with the script below) when I got this error: ValueError: Input and filter shapes must be int or symbolic in Conv2D node detector/darknet-53/Conv/Conv2D. How would I resolve this error and convert my model to CoreML successfully?
I opened up my .pb model with Netron and found the layer in question.
Here is the conversion script I am using:
import tfcoreml

tfcoreml.convert(tf_model_path='model.pb',
                 mlmodel_path='model.mlmodel',
                 output_feature_names=['output_boxes'],  # name of the output op
                 input_name_shape_dict={'inputs': [None, 416, 416, 3]},  # map from the placeholder op in the graph to its shape (can have -1s)
                 minimum_ios_deployment_target='13')
From what I can gather it seems to me that I need to change the input types of all Conv2D nodes to get this to work. But I am not an expert by any means so I could be wrong. Is there a way I can fix this model to convert successfully by using a Python script, and if so what would it look like?
EDIT: After changing the None in the input_name_shape_dict I got a different error. This one says: ValueError: Incompatible dimension 3 in Sub operation detector/darknet-53/Conv/BatchNorm/FusedBatchNorm/Sub. So I opened up Netron once again and took a look. Here is what I got; any idea how to fix it?
It seems the script got past the previous error only to get stuck on the next layer. Is my .pb completely useless, or does it just need to be fixed in some places?
Try 1 instead of None in the input_name_shape_dict.
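Applied to the script from the question, only the shape entry changes:

import tfcoreml

tfcoreml.convert(tf_model_path='model.pb',
                 mlmodel_path='model.mlmodel',
                 output_feature_names=['output_boxes'],
                 input_name_shape_dict={'inputs': [1, 416, 416, 3]},  # 1 (batch size) instead of None
                 minimum_ios_deployment_target='13')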

Error while trying to predict on SavedModel using tensorflow 2

I am trying to predict on a SavedModel using the following code:
import os
import numpy as np
import tensorflow as tf

features = np.ones((20, 40, 3), dtype=np.float32)
features = tf.convert_to_tensor(features, dtype=tf.float32)
imported_model = tf.saved_model.load(export_dir=os.path.join(model_path, directory))
import_fn = imported_model.signatures["serving_default"]
import_fn(features)
I get the following error when running it using Tensorflow 2. The model prediction works fine when I use the saved_model_cli.
tensorflow.python.framework.errors_impl.InvalidArgumentError: In[0] is not a matrix. Instead it has shape [20,40,3]
[[node dense/BiasAdd (defined at model_manager.py:54) ]] [Op:__inference_pruned_318590]
The saved_model_cli command is as follows:
saved_model_cli run --dir ./model_dir --tag_set serve --signature_def serving_default --input_exprs 'input=np.ones((20, 40, 3), dtype=np.float32)'
InvalidArgumentError is typically caused by a data type or shape mismatch in the input.
Based on your error, "In[0] is not a matrix. Instead, it has shape [20,40,3]", the dense layer expects a matrix (rank-2) input. You could try to manipulate your input data so it matches the input type and shape the model was originally trained on. You could also inspect how the model treats your input when you use saved_model_cli compared to the Python IDE, as you might be missing some preprocessing steps in the Python IDE that are done when using saved_model_cli.
You can read more about using the SavedModel format in this link.
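As a debugging sketch (assuming TF 2.x; the model directory and the input name 'input' are taken from the saved_model_cli command in the question), you can inspect the signature and pass the tensor by its declared name:

import numpy as np
import tensorflow as tf

imported_model = tf.saved_model.load(export_dir="./model_dir")
import_fn = imported_model.signatures["serving_default"]
# Print the expected input names, shapes and dtypes of the signature.
print(import_fn.structured_input_signature)
features = np.ones((20, 40, 3), dtype=np.float32)
# Pass the input by name, exactly as saved_model_cli does with --input_exprs.
outputs = import_fn(input=tf.constant(features))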

Can't convert Frozen Inference Graph to .tflite

I am new to the Object Detection API and TensorFlow in general. I followed this tutorial and in the end produced a frozen_inference_graph.pb. I want to run this object detection model on my phone, which, in my understanding, requires me to convert it to .tflite (please let me know if this doesn't make sense).
When I tried to convert it using this standard code here:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
It throws an error, saying:
ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'
This is a common error I found on the internet, and after searching through many threads, I tried to give an extra parameter to the code:
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
Now it looks like this:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
This works at first, but throws another error at the end, saying:
Check failed: array.data_type == array.final_data_type Array
"image_tensor" has mis-matching actual and final data types
(data_type=uint8, final_data_type=float). Fatal Error: Aborted
I understand that my input tensor has the data type of uint8 and this causes a mismatch, I guess. My question would be, is this the correct way to approach things? (I want to run my model on my phone). If it is, how do I then fix the error? :/
Thank you very much.
Change your model input (the image_tensor placeholder) to have data type tf.float32.
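For SSD models there is also a common route to a float32 input: the TF1 Object Detection API ships export_tflite_ssd_graph.py, which re-exports the checkpoint with a float32 input named normalized_input_image_tensor. A conversion sketch under that assumption (the 300x300 size comes from the usual SSD pipeline.config, and the postprocessing outputs only exist if the graph was exported with the postprocessing op):

import tensorflow as tf  # TF 1.x

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'tflite_graph.pb',  # output of export_tflite_ssd_graph.py
    input_arrays=['normalized_input_image_tensor'],  # float32 input
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})
converter.allow_custom_ops = True  # the postprocessing op is a TFLite custom op
tflite_model = converter.convert()
open('detect.tflite', 'wb').write(tflite_model)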

Define input and output tensors for tf.lite.TocoConverter

I'm trying to convert this CPM-TF model to TFLite, but to use the TocoConverter, I need to specify input and output tensors.
https://github.com/timctho/convolutional-pose-machines-tensorflow
I ran the included run_freeze_model.py and got the cpm_hand_frozen.pb (GraphDef?) file.
From this post I copied the code snippet for converting the ProtoBuf file with known inputs and outputs. But looking through the model definition code, I have some trouble finding the correct answers for the inputs and outputs.
Tensorflow Convert pb file to TFLITE using python
import tensorflow as tf
import numpy as np
from config import FLAGS

path_to_frozen_graphdef_pb = 'frozen_models/cpm_hand_frozen.pb'

def main(argv):
    input_tensors = [1, FLAGS.input_size, FLAGS.input_size, 3]
    output_tensors = np.zeros(FLAGS.num_of_joints)
    frozen_graph_def = tf.GraphDef()
    with open(path_to_frozen_graphdef_pb, 'rb') as f:
        frozen_graph_def.ParseFromString(f.read())
    tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, input_tensors, output_tensors)

if __name__ == '__main__':
    tf.app.run()
I'm quite new to Tensorflow, but I think the input should be defined as
[1, FLAGS.input_size, FLAGS.input_size, 3]
Found that here: https://github.com/timctho/convolutional-pose-machines-tensorflow/blob/master/models/nets/cpm_hand.py#L23
Not sure what 1 represents, but None does not work and I guess the other parameters are the image size and color channels.
However, with that input, it returns an error:
AttributeError: 'int' object has no attribute 'dtype'
I got no clue on what the output should be, other than it should be an array.
UPDATE 1
Looking through the TF docs, it appears that I need to define the input as a tensor (obvious!).
https://www.tensorflow.org/lite/convert/python_api
input_tensors = tf.placeholder(name="img", dtype=tf.float32, shape=(1,FLAGS.input_size, FLAGS.input_size, 3))
This does not return an error, but I still need to figure out whether the input is correct and what the output should look like.
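To figure out the correct names, one thing that helps is simply listing every node in the frozen graph, e.g. with this sketch:

import tensorflow as tf  # TF 1.x

graph_def = tf.GraphDef()
with open('frozen_models/cpm_hand_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
# Print each node's name and op type; the placeholder is the input,
# the last layer(s) are candidate outputs.
for node in graph_def.node:
    print(node.name, node.op)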
UPDATE 2
Alright, so I finally got it to spit out the tflite model with this code snippet
def tflite_converter():
    graph_def_file = os.path.join('frozen_models', '{}_frozen.pb'.format('cpm_hand'))
    input_arrays = ['input_placeholer']
    output_arrays = [FLAGS.output_node_names]
    converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
    tflite_model = converter.convert()
    open('{}.tflite'.format('cpm_hand'), 'wb').write(tflite_model)
I hope that I did it correctly. I will try to do inference on the model on Android.
I also think there is a misspelling in the input tensor name, input_placeholer. It appears to be corrected in the code itself, but printing out all node names from the pretrained model shows the spelling input_placeholer.
Node names can be seen here: https://github.com/timctho/convolutional-pose-machines-tensorflow/issues/59
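For completeness, a sketch of how the toco_convert route from the first attempt could work (assuming TF 1.x; toco_convert wants actual tensors from the imported graph, not shape lists or numpy arrays, and the misspelled node name is the one the graph really contains):

import tensorflow as tf  # TF 1.x
from config import FLAGS

graph_def = tf.GraphDef()
with open('frozen_models/cpm_hand_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')
graph = tf.get_default_graph()
# ':0' selects the first output tensor of the named op.
input_tensor = graph.get_tensor_by_name('input_placeholer:0')
output_tensor = graph.get_tensor_by_name(FLAGS.output_node_names + ':0')
tflite_model = tf.contrib.lite.toco_convert(graph_def, [input_tensor], [output_tensor])
open('cpm_hand.tflite', 'wb').write(tflite_model)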
My setup is:
Ubuntu 18.04
CUDA 9.1 & cuDNN 7.0
Python 3.6.5
Tensorflow GPU 1.6
Inference works like a charm, so there should be no issue on the setup itself.
