I am trying to predict on a SavedModel using the following code:
import os
import numpy as np
import tensorflow as tf

features = np.ones((20, 40, 3), dtype=np.float32)
features = tf.convert_to_tensor(features, dtype=tf.float32)
imported_model = tf.saved_model.load(export_dir=os.path.join(model_path, directory))
import_fn = imported_model.signatures["serving_default"]
import_fn(features)
I get the following error when running it with TensorFlow 2. The model prediction works fine when I use saved_model_cli.
tensorflow.python.framework.errors_impl.InvalidArgumentError: In[0] is not a matrix. Instead it has shape [20,40,3]
[[node dense/BiasAdd (defined at model_manager.py:54) ]] [Op:__inference_pruned_318590]
The saved_model_cli command is as follows:
saved_model_cli run --dir ./model_dir --tag_set serve --signature_def serving_default --input_exprs 'input=np.ones((20, 40, 3), dtype=np.float32)'
An InvalidArgumentError is typically caused by a data type or shape mismatch in the input.
Based on your error, "In[0] is not a matrix. Instead, it has shape [20,40,3]", you could try to manipulate your input data so that it matches the type and shape the model was originally trained on. You could also inspect how the model treats your input when you use saved_model_cli compared to your Python code, as you might be missing a preprocessing step in Python that saved_model_cli performs for you.
You can read more about using the SavedModel format in this link.
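As a starting point, here is a minimal sketch of how you might inspect what the signature expects in TF 2 (it reuses imported_model and features from the question; the reshape at the end is hypothetical and depends on what the signature actually reports):

import tensorflow as tf

# Print the input and output names, shapes, and dtypes the serving signature expects.
import_fn = imported_model.signatures["serving_default"]
print(import_fn.structured_input_signature)
print(import_fn.structured_outputs)

# The error says the dense layer wants a 2-D matrix, so flattening the trailing
# dimensions may help (hypothetical target shape; match it to the signature above):
features = tf.reshape(features, (20, 40 * 3))
import_fn(features)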
Related
I have read the Core ML guide, which shows how to convert a .pb model to an .mlmodel using coremltools. However, I get the error below when trying to follow the guide, which suggests the input shape must be fixed.
ValueError: "ResizeBilinear" op: the second input, which is the output size, must be known statically
So, does anyone know how to convert to an mlmodel with a flexible input shape?
Here is my code:
import coremltools as ct

def mlmodel_image(pb):
    # Flexible input: height 1-720, width 1-1280, RGB.
    input_shape = ct.Shape(shape=(1, ct.RangeDim(1, 720), ct.RangeDim(1, 1280), 3))
    model_input = ct.ImageType(shape=input_shape)
    mlmodel = ct.convert(pb, inputs=[model_input], source='TensorFlow')
    mlmodel.save(pb.replace(".pb", "_img.mlmodel"))
    print('------save to ', pb.replace(".pb", "_img.mlmodel"))
Please try my sample:
https://github.com/dhrebeniuk/RealTimeFastStyleTransfer
Also take a look at my article with an attached Google Colab notebook in PyTorch.
It includes instructions on how to run style transfer on iOS with maximum performance.
I was trying to convert my .pb file to a CoreML model using tfcoreml (with the script below) when I got this error: ValueError: Input and filter shapes must be int or symbolic in Conv2D node detector/darknet-53/Conv/Conv2D. How would I resolve this error and convert my model to CoreML successfully?
I opened up my .pb model with Netron and found the layer in question:
Here is the conversion script I am using:
import tfcoreml

tfcoreml.convert(tf_model_path='model.pb',
                 mlmodel_path='model.mlmodel',
                 output_feature_names=['output_boxes'],  # name of the output op
                 input_name_shape_dict={'inputs': [None, 416, 416, 3]},  # map from the placeholder op in the graph to shape (can have -1s)
                 minimum_ios_deployment_target='13')
From what I can gather it seems to me that I need to change the input types of all Conv2D nodes to get this to work. But I am not an expert by any means so I could be wrong. Is there a way I can fix this model to convert successfully by using a Python script, and if so what would it look like?
EDIT: After changing the None in the input_name_shape_dict, I got a different error. This one says: ValueError: Incompatible dimension 3 in Sub operation detector/darknet-53/Conv/BatchNorm/FusedBatchNorm/Sub. So I opened up Netron once again and took a look. Here is what I got; any idea how to fix it?
It seems the script got past the previous error only to get stuck on the next layer. Is my .pb completely useless, or does it just need to be fixed in some places?
Try 1 instead of None in the input_name_shape_dict:
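Applied to the script in the question, only the batch dimension changes:

import tfcoreml

tfcoreml.convert(tf_model_path='model.pb',
                 mlmodel_path='model.mlmodel',
                 output_feature_names=['output_boxes'],
                 input_name_shape_dict={'inputs': [1, 416, 416, 3]},  # 1 instead of None
                 minimum_ios_deployment_target='13')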
I'm trying to do post-training full 8-bit quantization of a Keras model to compile and deploy to EdgeTPU.
I have a trained Keras model saved as an .h5 file, and am trying to go through the steps specified here: https://coral.withgoogle.com/docs/edgetpu/models-intro/ for deployment to the Coral Dev Board.
I'm following these instructions for quantization: https://www.tensorflow.org/lite/performance/post_training_quantization#full_integer_quantization_of_weights_and_activations
I’m trying to use the following code:
import tensorflow as tf

num_calibration_steps = 100

def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [X_train_quant_conv]

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file('/tmp/classNN_simple.h5')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset_gen
tflite_full_integer_quant_model = converter.convert()
where X_train_quant_conv is a subset of my training data, converted to an np.array of type np.float32.
When running this piece of code, I get the following error:
ValueError: Cannot set tensor: Dimension mismatch
I've tried changing the representative_dataset_gen() function in different ways, but every time I get a new error. I'm not sure what this function should look like. I'm also unsure what value num_calibration_steps should have.
Any suggestions or working examples are very appreciated.
This question is very similar to this answered question: Convert Keras model to quantized Tensorflow Lite model that can be used on Edge TPU
You might want to look at my demo script for quantization on GitHub.
It's just a guess since I can't see what X_train_quant_conv really is, but in my working demo, I yield one image at a time (random data created on the fly, in my case) in representative_dataset_gen(). The image is stored as a batch of size 1 (e.g., the tensor shape is (1, 56, 56, 32) for my 56x56x32 image). There are 32 channels, though there would typically be just 3 for a color image. I think representative_dataset_gen() has to yield a list containing a tensor (or more than one?) whose first dimension has length 1.
import tensorflow as tf

image_shape = (56, 56, 32)

def representative_dataset_gen():
    num_calibration_images = 10
    for i in range(num_calibration_images):
        # Random data stands in for real calibration images; the leading 1 is the batch dimension.
        image = tf.random.normal([1] + list(image_shape))
        yield [image]
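The generator then plugs into the converter just as in the question's script:

converter.representative_dataset = representative_dataset_gen
tflite_full_integer_quant_model = converter.convert()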
I am new to the Object Detection API and TensorFlow in general. I followed this tutorial and in the end produced a frozen_inference_graph.pb. I want to run this object detection model on my phone, which to my understanding requires me to convert it to .tflite (please let me know if this doesn't make sense).
When I tried to convert it using this standard code here:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
It throws an error, saying:
ValueError: None is only supported in the 1st dimension. Tensor
'image_tensor' has invalid shape '[None, None, None, 3]'
This is a common error I found on the internet, and after searching through many threads, I tried to pass an extra parameter:
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
Now it looks like this:
import tensorflow as tf
graph = 'pathtomygraph'
input_arrays = ['image_tensor']
output_arrays = ['all_class_predictions_with_background']
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph, input_arrays, output_arrays, input_shapes={"image_tensor": [1, 600, 600, 3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
This works at first, but throws another error at the end, saying:
Check failed: array.data_type == array.final_data_type Array
"image_tensor" has mis-matching actual and final data types
(data_type=uint8, final_data_type=float). Fatal Error: Aborted
I understand that my input tensor has the data type uint8, and I guess this causes the mismatch. My question would be: is this the correct way to approach things? (I want to run my model on my phone.) If it is, how do I then fix the error?
Thank you very much.
Change your model input (the image_tensor placeholder) to have data type tf.float32.
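One way to do that without retraining is to re-import the frozen graph behind a new float32 placeholder and convert from the live session. This is only a sketch under assumptions: it reuses the path and tensor names from the question, assumes the TF 1.x converter, and the name image_tensor_float is invented here.

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Load the frozen graph from disk.
graph_def = tf.GraphDef()
with tf.io.gfile.GFile('pathtomygraph', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    # New float32 input; cast back to uint8 where the original placeholder was used.
    float_input = tf.placeholder(tf.float32, [1, 600, 600, 3],
                                 name='image_tensor_float')  # hypothetical name
    tf.import_graph_def(graph_def,
                        input_map={'image_tensor:0': tf.cast(float_input, tf.uint8)},
                        name='')
    output = g.get_tensor_by_name('all_class_predictions_with_background:0')

with tf.Session(graph=g) as sess:
    # The converter now sees a float32 input, so the uint8/float mismatch goes away.
    converter = tf.lite.TFLiteConverter.from_session(sess, [float_input], [output])
    tflite_model = converter.convert()
    open('converted_model.tflite', 'wb').write(tflite_model)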
I want to create an object-detection app based on an ssd_mobilenet model I retrained, like the guy on YouTube.
I chose the model ssd_mobilenet_v2_coco from the Tensorflow Model Zoo. After the retraining process I've got the model with the following structure:
- saved_model
  - variables (empty folder)
  - saved_model.pb
- checkpoint
- frozen_inference_graph.pb
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
- pipeline.config
In the same folder, I have the python script with the following code:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
After running this code, I got the following error:
ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'.
It seems that the image width and height are missing from the model. The model works when I use it as in the YouTube video.
After lots of research and attempts, I tried other approaches, like running bazel/toco, but nothing helped me create a tflite file.
As described in the documentation, you can pass different parameters to tf.lite.TFLiteConverter.from_saved_model.
For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter).
You can pass this information as described here. You need to provide the input tensor name and its shape, as well as the output tensor name and its shape. For ssd_mobilenet_v2_coco, you need to specify the input shape you want to use the network with, like this:
converter = tf.lite.TFLiteConverter.from_saved_model(
    "saved_model", input_shapes={"image_tensor": [1, 300, 300, 3]})
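From there the conversion proceeds as in the question's script (a short sketch, assuming the TF 1.x converter, where from_saved_model accepts input_shapes):

tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)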