Can't convert Frozen Inference Graph to .tflite - python

I am new to the object detection API and TensorFlow in general. I followed this tutorial and in the end I produced a frozen_inference_graph.pb. I want to run this object detection model on my phone, which in my understanding requires me to convert it to .tflite (please let me know if this doesn't make sense).
When I tried to convert it using this standard code:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
It throws an error, saying:
ValueError: None is only supported in the 1st dimension. Tensor
'image_tensor' has invalid shape '[None, None, None, 3]'
This is a common error found on the internet, and after searching through many threads, I tried passing an extra parameter to the converter:
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
Now it looks like this:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
This works at first, but throws another error at the end, saying:
Check failed: array.data_type == array.final_data_type Array
"image_tensor" has mis-matching actual and final data types
(data_type=uint8, final_data_type=float). Fatal Error: Aborted
I understand that my input tensor has the data type uint8, and I guess this causes the mismatch. My question is: is this the correct way to approach things (I want to run my model on my phone)? If so, how do I fix the error?
Thank you very much.

Change your model input (the image_tensor placeholder) to have data type tf.float32.
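A minimal sketch of that idea, assuming you rebuild or re-export the graph yourself (the shape below is illustrative, not taken from your pipeline):
import tensorflow as tf
# Declare the input as float32 instead of uint8 so the converter sees
# matching actual and final data types for 'image_tensor'.
image_tensor = tf.placeholder(tf.float32, shape=[1, 600, 600, 3], name='image_tensor')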

Related

How to convert a style transfer tensorflow model to mlmodel with flexible input shape?

I have read the Core ML guide, which shows how to convert a .pb model to an .mlmodel using coremltools. However, I get the error below when trying to follow the guide, which suggests the input shape must be fully specified.
ValueError: "ResizeBilinear" op: the second input, which is the output size, must be known statically
So, does anyone know how to convert to an mlmodel with a flexible input shape?
Here is my code:
import coremltools as ct

def mlmodel_image(pb):
    input_shape = ct.Shape(shape=(1, ct.RangeDim(1, 720), ct.RangeDim(1, 1280), 3))
    model_input = ct.ImageType(shape=input_shape)
    mlmodel = ct.convert(pb, inputs=[model_input], source='TensorFlow')
    mlmodel.save(pb.replace(".pb", "_img.mlmodel"))
    print('------save to ', pb.replace(".pb", "_img.mlmodel"))
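For illustration, the function above would then be called with the path to a frozen graph, e.g. (hypothetical file name):
mlmodel_image('style_transfer_frozen.pb')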
Please try my sample:
https://github.com/dhrebeniuk/RealTimeFastStyleTransfer
Also, have a look at my article with the attached Google Colab notebook in PyTorch.
It has instructions on how to run style transfer on iOS with maximum performance.

ONNX Quantized Model Type Error: Type 'tensor(float16)'

I converted an ONNX model from float32 to float16 using this script:
from onnxruntime_tools import optimizer
optimized_model = optimizer.optimize_model("model_fixed.onnx", model_type='bert_tf', num_heads=12, hidden_size=768, opt_level=99)
optimized_model.use_dynamic_axes()
optimized_model.convert_model_float32_to_float16()
optimized_model.save_model_to_file("model_fixed_fp16.onnx")
But at the time of inference I am getting this error.
[ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from
./model_fixed_fp16.onnx failed:This is an invalid model.
Type Error: Type 'tensor(float16)' of input parameter
(conv2d_1/convolution__24:0) of operator (Conv) in node (batch_normalization_1/FusedBatchNormV3_1:0_nchwc) is invalid
I also changed the input dtype to float16 using:
pimage = np.array(np.expand_dims(pimage, axis=0), dtype=np.float16)
but I am still getting the same error. What do I have to do to resolve this?
Can you try running the conversion script here:
https://github.com/microsoft/onnxconverter-common/blob/master/onnxconverter_common/float16.py
If you still get an issue, open an issue on that repo.
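A minimal sketch of using that script as a library, assuming the file names from your snippet and that the onnxconverter-common package is installed:
import onnx
from onnxconverter_common import float16
# Load the float32 model, convert its tensors to float16, and save the result.
model = onnx.load("model_fixed.onnx")
model_fp16 = float16.convert_float_to_float16(model)
onnx.save(model_fp16, "model_fixed_fp16.onnx")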

How to create a tflite file from saved_model (SSD MobileNet)

I want to create an object-detection app based on an ssd_mobilenet model I retrained like the guy on YouTube.
I chose the model ssd_mobilenet_v2_coco from the TensorFlow Model Zoo. After the retraining process, I got a model with the following structure:
- saved_model
  - variables (empty folder)
  - saved_model.pb
- checkpoint
- frozen_inference_graph.pb
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
- pipeline.config
In the same folder, I have the python script with the following code:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
After running this code, I got the following error:
ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'.
It seems that the image width and height are missing in the model. When I use the model like in the YouTube video, it works.
After lots of research and attempts, I tried other ways, like running bazel/toco, but nothing helped me create a .tflite file.
As described in the documentation, you can pass different parameters to tf.lite.TFLiteConverter.from_saved_model.
For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter).
You can pass this information as described here. You need to provide the input tensor name and its shape, as well as the output tensor name and its shape. For ssd_mobilenet_v2_coco, you need to define the input shape on which you want to use the network, like this:
tf.lite.TFLiteConverter.from_saved_model("saved_model", input_shapes={"image_tensor" : [1,300,300,3]})
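A fuller sketch along those lines, assuming the TF 1.x converter API and a 300x300 SSD input (adjust the shape to match your pipeline.config):
import tensorflow as tf
# Fixing the shape removes the None entries in the height/width dimensions
# that caused the ValueError above.
converter = tf.lite.TFLiteConverter.from_saved_model(
    "saved_model",
    input_shapes={"image_tensor": [1, 300, 300, 3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)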

Define input and output tensors for tf.lite.TocoConverter

I'm trying to convert this CPM-TF model to TFLite, but to use the TocoConverter, I need to specify input and output tensors.
https://github.com/timctho/convolutional-pose-machines-tensorflow
I ran the included run_freeze_model.py and got the cpm_hand_frozen.pb (GraphDef?) file.
From this post I copied the code snippet for converting the ProtoBuf file with known inputs and outputs, but looking through the model definition code, I have trouble finding the correct values for the inputs and outputs.
Tensorflow Convert pb file to TFLITE using python
import tensorflow as tf
import numpy as np
from config import FLAGS
path_to_frozen_graphdef_pb = 'frozen_models/cpm_hand_frozen.pb'
def main(argv):
    input_tensors = [1, FLAGS.input_size, FLAGS.input_size, 3]
    output_tensors = np.zeros(FLAGS.num_of_joints)
    frozen_graph_def = tf.GraphDef()
    with open(path_to_frozen_graphdef_pb, 'rb') as f:
        frozen_graph_def.ParseFromString(f.read())
    tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, input_tensors, output_tensors)

if __name__ == '__main__':
    tf.app.run()
I'm quite new to Tensorflow, but I think the input should be defined as
[1, FLAGS.input_size, FLAGS.input_size, 3]
Found that here: https://github.com/timctho/convolutional-pose-machines-tensorflow/blob/master/models/nets/cpm_hand.py#L23
Not sure what 1 represents, but None does not work and I guess the other parameters are the image size and color channels.
However, with that input, it returns an error:
AttributeError: 'int' object has no attribute 'dtype'
I got no clue on what the output should be, other than it should be an array.
UPDATE 1
Looking through the TF docs, it appears that I need to define the input as a tensor (obvious!).
https://www.tensorflow.org/lite/convert/python_api
input_tensors = tf.placeholder(name="img", dtype=tf.float32, shape=(1,FLAGS.input_size, FLAGS.input_size, 3))
This does not return an error, but I still need to figure out if the input is correct and what the output should look like.
UPDATE 2
Alright, so I finally got it to spit out the tflite model with this code snippet
def tflite_converter():
    graph_def_file = os.path.join('frozen_models', '{}_frozen.pb'.format('cpm_hand'))
    input_arrays = ['input_placeholer']
    output_arrays = [FLAGS.output_node_names]
    converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
    tflite_model = converter.convert()
    open('{}.tflite'.format('cpm_hand'), 'wb').write(tflite_model)
I hope that I did it correctly. I will try to do inference on the model on Android.
I also think there is a misspelling in the name of the input tensor, input_placeholder. It appears to be corrected in the code itself, but printing out all node names from the pretrained model shows the spelling input_placeholer.
Node names can be seen here: https://github.com/timctho/convolutional-pose-machines-tensorflow/issues/59
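A minimal sketch for printing those node names yourself (TF 1.x API, path assumed from the snippet above):
import tensorflow as tf
# Load the frozen GraphDef and list every node name, so the exact spelling
# of the input placeholder (e.g. 'input_placeholer') can be confirmed.
graph_def = tf.GraphDef()
with open('frozen_models/cpm_hand_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name)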
My setup is:
Ubuntu 18.04
CUDA 9.1 & cuDNN 7.0
Python 3.6.5
Tensorflow GPU 1.6
Inference works like a charm, so there should be no issue with the setup itself.

Converting Tensorflow model to CoreML model. Shape Translator missing for OP Slice

I'm using tfcoreml in Python to convert my TensorFlow model into Core ML for development on an iOS device using the Core ML libraries.
I'm using the following Python code to try to convert the model to Core ML.
import tfcoreml as tf_converter

tf_converter.convert(tf_model_path='frozen_inference_graph.pb',
                     mlmodel_path='ml_model.mlmodel',
                     output_feature_names=['SemanticPredictions:0'],
                     input_name_shape_dict={'ImageTensor:0': [1, 512, 512, 3]})
This gives the following error:
Shape Translator missing for OP of type Slice.
I read the tfcoreml docs a bit further, and they state that Slice isn't fully supported and that some custom conversion code is required for this to work. The tfcoreml documentation suggests breaking the frozen graph into subgraphs, converting them individually, and then merging them back together after conversion.
I updated my code to use custom layers, but I don't really understand how the custom conversion functions work.
I just need a few pointers on where to look to begin understanding how to write these custom conversion functions, so I can solve my problem converting the TensorFlow model to Core ML.
[EDIT]
I did some more reading of the tfcoreml examples and documentation and have adapted my code to this:
import tfcoreml
from coremltools.proto import NeuralNetwork_pb2

def _convert_slice(**kwargs):
    tf_op = kwargs["op"]
    coreml_nn_builder = kwargs["nn_builder"]
    constant_inputs = kwargs["constant_inputs"]

    params = NeuralNetwork_pb2.CustomLayerParams()
    params.className = 'Slice'
    params.description = "Custom layer that corresponds to the slice TF op"
    # get the value of begin
    begin = constant_inputs.get(tf_op.inputs[1].name, [0, 0, 0, 0])
    size = constant_inputs.get(tf_op.inputs[2].name, [0, 0, 0, 0])
    # add begin and size as two repeated weight fields
    begin_as_weights = params.weights.add()
    begin_as_weights.floatValue.extend(map(float, begin))
    size_as_weights = params.weights.add()
    size_as_weights.floatValue.extend(map(float, size))
    coreml_nn_builder.add_custom(name=tf_op.name,
                                 input_names=[tf_op.inputs[0].name],
                                 output_names=[tf_op.outputs[0].name],
                                 custom_proto_spec=params)

coreml_model = tfcoreml.convert(
    tf_model_path='frozen_inference_graph.pb',
    mlmodel_path='my_model.mlmodel',
    input_name_shape_dict={'ImageTensor:0': [1, 512, 512, 3]},
    output_feature_names=['SemanticPredictions:0'],
    add_custom_layers=True,
    custom_conversion_functions={'Slice': _convert_slice})  # dictionary has op name as the key

print("\n \n ML Model layers info: \n")
# inspect the CoreML model: this should be same as the one we got above
spec = coreml_model.get_spec()
_print_coreml_nn_layer_info(spec)  # helper from the tfcoreml examples (not defined here)
I'm still getting the same error as before
Shape Translator missing for OP of type Slice.
But I did notice that I'm also getting an error/warning pointing at this line:
custom_conversion_functions={'Slice': _convert_slice}) # dictionary has op name as the key
Any assistance would be appreciated.
