I'm trying to convert this CPM-TF model to TFLite, but to use the TocoConverter, I need to specify input and output tensors.
https://github.com/timctho/convolutional-pose-machines-tensorflow
I ran the included run_freeze_model.py and got the cpm_hand_frozen.pb (GraphDef?) file.
From the post "Tensorflow Convert pb file to TFLITE using python" I copied the code snippet for converting the ProtoBuf file with known inputs and outputs. But looking through the model definition code, I have trouble finding the correct values for the inputs and outputs.
import tensorflow as tf
import numpy as np
from config import FLAGS

path_to_frozen_graphdef_pb = 'frozen_models/cpm_hand_frozen.pb'

def main(argv):
    input_tensors = [1, FLAGS.input_size, FLAGS.input_size, 3]
    output_tensors = np.zeros(FLAGS.num_of_joints)
    frozen_graph_def = tf.GraphDef()
    with open(path_to_frozen_graphdef_pb, 'rb') as f:
        frozen_graph_def.ParseFromString(f.read())
    tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, input_tensors, output_tensors)

if __name__ == '__main__':
    tf.app.run()
I'm quite new to Tensorflow, but I think the input should be defined as
[1, FLAGS.input_size, FLAGS.input_size, 3]
Found that here: https://github.com/timctho/convolutional-pose-machines-tensorflow/blob/master/models/nets/cpm_hand.py#L23
Not sure what the 1 represents (the batch size, presumably), but None does not work, and I guess the other parameters are the image size and the color channels.
However, with that input, it returns an error:
AttributeError: 'int' object has no attribute 'dtype'
I have no clue what the output should be, other than that it should be an array.
UPDATE 1
Looking through the TF docs, it appears that I need to define the input as a tensor (obvious!).
https://www.tensorflow.org/lite/convert/python_api
input_tensors = tf.placeholder(name="img", dtype=tf.float32, shape=(1, FLAGS.input_size, FLAGS.input_size, 3))
This does not return an error, but I still need to figure out whether the input is correct and what the output should look like.
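If I understand the API correctly, toco_convert wants actual tensors for the outputs too, fetched from the imported graph rather than a NumPy array. A sketch of what I mean (both node names below are placeholders that would have to be looked up in the real graph):

import tensorflow as tf

frozen_graph_def = tf.GraphDef()
with open('frozen_models/cpm_hand_frozen.pb', 'rb') as f:
    frozen_graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(frozen_graph_def, name='')

# Placeholder names; the real ones have to be read off the graph.
input_tensor = graph.get_tensor_by_name('<input_node>:0')
output_tensor = graph.get_tensor_by_name('<output_node>:0')

tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, [input_tensor], [output_tensor])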
UPDATE 2
Alright, so I finally got it to spit out the tflite model with this code snippet:
import os
import tensorflow as tf
from config import FLAGS

def tflite_converter():
    graph_def_file = os.path.join('frozen_models', '{}_frozen.pb'.format('cpm_hand'))
    # note: 'input_placeholer' (sic) matches the node name in the pretrained graph
    input_arrays = ['input_placeholer']
    output_arrays = [FLAGS.output_node_names]

    converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
    tflite_model = converter.convert()
    open('{}.tflite'.format('cpm_hand'), 'wb').write(tflite_model)
I hope that I did it correctly. I will try to do inference on the model on Android.
I also think there is a misspelling in the input tensor name input_placeholder. It appears to be corrected in the code itself, but printing out all node names from the pretrained model shows the spelling input_placeholer.
Node names can be seen here: https://github.com/timctho/convolutional-pose-machines-tensorflow/issues/59
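For reference, this is roughly how I printed the node names (TF 1.x):

import tensorflow as tf

graph_def = tf.GraphDef()
with open('frozen_models/cpm_hand_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print every node name in the frozen graph to check the exact spellings.
for node in graph_def.node:
    print(node.name)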
My setup is:
Ubuntu 18.04
CUDA 9.1 & cuDNN 7.0
Python 3.6.5
Tensorflow GPU 1.6
Inference works like a charm, so there should be no issue with the setup itself.
Related
I have read the CoreML guide, which shows how to convert a .pb model to an .mlmodel using coremltools. However, I get the error below when trying to follow the guide, which seems to mean the input shape must be static.
ValueError: "ResizeBilinear" op: the second input, which is the output size, must be known statically
So, does anyone know how to convert a model to an mlmodel with a flexible input shape?
Here is my code:
import coremltools as ct
def mlmodel_image(pb):
input_shape = ct.Shape(shape=(1, ct.RangeDim(1, 720), ct.RangeDim(1, 1280), 3))
model_input = ct.ImageType(shape=input_shape)
mlmodel = ct.convert(pb, inputs=[model_input], source='TensorFlow')
mlmodel.save(pb.replace(".pb", "_img.mlmodel"))
print('------save to ', pb.replace(".pb", "_img.mlmodel"))
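One direction I'm considering, though I haven't verified it for this model: convert with a fixed input shape first, so the ResizeBilinear output size is static, and then relax the input size afterwards with coremltools' flexible_shape_utils. Whether ResizeBilinear then accepts other sizes at runtime is exactly what I'm unsure about:

import coremltools as ct
from coremltools.models.neural_network import flexible_shape_utils

# 1. Convert with a fixed shape so every op's output shape is static.
fixed_input = ct.ImageType(shape=ct.Shape(shape=(1, 720, 1280, 3)))
mlmodel = ct.convert('model.pb', inputs=[fixed_input], source='TensorFlow')

# 2. Mark the image input as flexible after conversion.
spec = mlmodel.get_spec()
size_range = flexible_shape_utils.NeuralNetworkImageSizeRange()
size_range.add_height_range((1, 720))
size_range.add_width_range((1, 1280))
flexible_shape_utils.update_image_size_range(
    spec, feature_name=spec.description.input[0].name, size_range=size_range)
ct.models.utils.save_spec(spec, 'model_flexible_img.mlmodel')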
Please try my sample:
https://github.com/dhrebeniuk/RealTimeFastStyleTransfer
Also see my article with an attached Google Colab notebook in PyTorch.
It has instructions on how to run style transfer on iOS with maximum performance.
I trained a model on darknet using the YOLOv3-SPP model. I need to be able to use this model in my iPhone app, so I need to convert it to CoreML. I started by converting the .weights file to a .pb file. Now I am trying to convert it from TensorFlow to CoreML with tfcoreml. However, I cannot seem to determine my input and output tensor names. I tried to use TensorBoard to visualize the model and determine the inputs and outputs, but since I am quite new to TensorFlow I can't figure out what to use. I am using this script to convert the model from TensorFlow to CoreML:
import tfcoreml
import os
import tensorflow as tf

frozen_model_file = os.path.abspath('frozen_darknet_yolov3_model.pb')
input_tensor_shapes = {"input/placeholder:0": [1, 32, 32, 9]}

# Output CoreML model path
coreml_model_file = './model.mlmodel'
output_tensor_names = ['output/prediction:0']

def convert():
    # Read the pb model
    with tf.gfile.GFile(frozen_model_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then, we import the graph_def into a new Graph
    tf.import_graph_def(graph_def, name="")

    # Convert
    tfcoreml.convert(
        tf_model_path=frozen_model_file,
        mlmodel_path=coreml_model_file,
        input_name_shape_dict=input_tensor_shapes,
        output_feature_names=output_tensor_names)

convert()
This is what my TensorBoard graph looks like:

What should I set input_tensor_shapes and output_tensor_names to so that I don't get an error saying that my TensorFlow graph does not contain a tensor with that name?
I suggest using Netron to view the TensorFlow file. It makes the graph much easier to understand.
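If you'd rather stay in Python, a rough heuristic can also surface candidate names (a sketch for TF 1.x: placeholders are usually inputs, and nodes nothing else consumes are usually outputs):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_darknet_yolov3_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Collect every node consumed as an input; '^name' marks control inputs.
consumed = set()
for node in graph_def.node:
    for inp in node.input:
        consumed.add(inp.split(':')[0].lstrip('^'))

inputs = [n.name for n in graph_def.node if n.op == 'Placeholder']
outputs = [n.name for n in graph_def.node
           if n.name not in consumed and n.op != 'Const']
print('likely inputs:', inputs)
print('likely outputs:', outputs)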
I am new to the object detection API and TensorFlow in general. I followed this tutorial and in the end I produced a frozen_inference_graph.pb. I want to run this object detection model on my phone, which in my understanding requires me to convert it to .tflite (please let me know if this doesn't make sense).
When I tried to convert it using this standard code here:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
It throws an error, saying:
ValueError: None is only supported in the 1st dimension. Tensor
'image_tensor' has invalid shape '[None, None, None, 3]'
This is a common error I found on the internet, and after searching through many threads, I tried passing an extra parameter to the converter:
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
Now it looks like this:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
This works at first, but throws another error at the end, saying:
Check failed: array.data_type == array.final_data_type Array
"image_tensor" has mis-matching actual and final data types
(data_type=uint8, final_data_type=float). Fatal Error: Aborted
I understand that my input tensor has the data type uint8, and I guess this causes the mismatch. My question would be: is this the correct way to approach things? (I want to run my model on my phone.) If it is, how do I then fix the error? :/
Thank you very much.
Change your model input (the image_tensor placeholder) to have data type tf.float32.
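A sketch of one way to do that (untested; it remaps the frozen graph's uint8 image_tensor onto a new float32 placeholder via input_map, and assumes the converter supports the resulting Cast op):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('pathtomygraph', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    # New float32 input; cast back to uint8 so the original graph still type-checks.
    float_input = tf.placeholder(tf.float32, shape=[1, 600, 600, 3], name='float_image_tensor')
    tf.import_graph_def(graph_def, input_map={'image_tensor:0': tf.cast(float_input, tf.uint8)}, name='')
    output = graph.get_tensor_by_name('all_class_predictions_with_background:0')

    with tf.Session(graph=graph) as sess:
        converter = tf.lite.TFLiteConverter.from_session(sess, [float_input], [output])
        tflite_model = converter.convert()

open('converted_model.tflite', 'wb').write(tflite_model)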
I'm using TFCoreml in Python to convert my TensorFlow model into CoreML for development on an iOS device using the CoreML libs.
I'm using the following Python code to try to convert the model to CoreML.
import tfcoreml as tf_converter

tf_converter.convert(tf_model_path='frozen_inference_graph.pb',
                     mlmodel_path='ml_model.mlmodel',
                     output_feature_names=['SemanticPredictions:0'],
                     input_name_shape_dict={'ImageTensor:0': [1, 512, 512, 3]})
This gives the following error:
Shape Translator missing for OP of type Slice.
I read the TFCoreml docs a bit further, and they state that Slice isn't fully supported, so some custom conversion code is required for this to work. The TFCoreml documentation suggests breaking the frozen graph into subgraphs, converting them individually, and then merging them back together after conversion.
I updated my code to use custom layers, but I don't really understand how the custom conversion functions work.
I just need a few pointers on where to look to begin understanding how to write these custom conversion methods, so I can solve my problem converting the TensorFlow model to CoreML.
[EDIT]
I did some more reading of the TFCoreml examples and documentation and adapted my solution to this:
import tfcoreml
# NeuralNetwork_pb2 comes from coremltools; the original snippet used it without importing it.
from coremltools.proto import NeuralNetwork_pb2

def _convert_slice(**kwargs):
    tf_op = kwargs["op"]
    coreml_nn_builder = kwargs["nn_builder"]
    constant_inputs = kwargs["constant_inputs"]

    params = NeuralNetwork_pb2.CustomLayerParams()
    params.className = 'Slice'
    params.description = "Custom layer that corresponds to the slice TF op"
    # get the values of begin and size
    begin = constant_inputs.get(tf_op.inputs[1].name, [0, 0, 0, 0])
    size = constant_inputs.get(tf_op.inputs[2].name, [0, 0, 0, 0])
    # add begin and size as two repeated weight fields
    begin_as_weights = params.weights.add()
    begin_as_weights.floatValue.extend(map(float, begin))
    size_as_weights = params.weights.add()
    size_as_weights.floatValue.extend(map(float, size))
    coreml_nn_builder.add_custom(name=tf_op.name,
                                 input_names=[tf_op.inputs[0].name],
                                 output_names=[tf_op.outputs[0].name],
                                 custom_proto_spec=params)

coreml_model = tfcoreml.convert(
    tf_model_path='frozen_inference_graph.pb',
    mlmodel_path='my_model.mlmodel',
    input_name_shape_dict={'ImageTensor:0': [1, 512, 512, 3]},
    output_feature_names=['SemanticPredictions:0'],
    add_custom_layers=True,
    custom_conversion_functions={'Slice': _convert_slice})  # dictionary has op name as the key

print("\n \n ML Model layers info: \n")
# inspect the CoreML model: this should be same as the one we got above
# (_print_coreml_nn_layer_info is a helper from the tfcoreml examples)
spec = coreml_model.get_spec()
_print_coreml_nn_layer_info(spec)
I'm still getting the same error as before
Shape Translator missing for OP of type Slice.
But I did notice I'm also getting this error/warning:
custom_conversion_functions={'Slice': _convert_slice}) # dictionary has op name as the key
Any assistance would be appreciated.
I've frozen my model and got a .pb file. Then I quantized the model using TocoConverter on Linux, since it's not supported on Windows, and got quantized_model.tflite. I can load it and get predictions on Linux, but I have trouble doing the same on Windows, as my project requires.
I've tried to load it using tf.contrib.lite.Interpreter with this code:
import numpy as np
import tensorflow as tf

# Load TFLite model and allocate tensors.
interpreter = tf.contrib.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test model on random input data.
input_shape = input_details[0]['shape']
# Change the following line to feed in your own data.
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
ImportError: No module named 'tensorflow.contrib.lite.python.interpreter'
It failed with the "No module named 'tensorflow.contrib.lite.python.interpreter'" error above. I always get these errors on Windows when trying to use anything from tf.contrib.lite. Maybe there is a way to load this on Windows? Or can you advise alternative options to quantize a model on Windows?
As far as I remember reading somewhere, toco is currently not supported in the Windows cmake build.
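One thing that may be worth trying, although I haven't verified it on Windows myself: in newer TensorFlow releases (1.13+) the interpreter moved out of contrib into tf.lite, so loading the .tflite file no longer depends on tf.contrib:

import numpy as np
import tensorflow as tf  # assumes TF >= 1.13, where tf.lite replaced tf.contrib.lite

interpreter = tf.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

input_data = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))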