I'm working with neural networks and I need to use a Raspberry Pi v2.
When I try to install TensorFlow 2.x it fails, and I can only install TensorFlow 1.14. For this reason I found the tflite library, which in theory should help me as a lite version of TF.
Here is an image that shows I can't install it.
First of all, I convert my Keras model (model.h5) into a .tflite model:
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
Up to this point, everything is OK. The problem comes when I want to use this model. With TensorFlow I know how to do it:
from tensorflow import keras
def importModel(myPath):
    with open(myPath + 'model/model.json', 'r') as file:
        model_json = file.read()
    model = keras.models.model_from_json(model_json)
    model.load_weights(myPath + 'model/model.h5')
    return model
But I really don't understand how to do it with tflite. Can somebody help me, please?
You can find this in the official documentation
import numpy as np
import tensorflow as tf
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
And if you have trouble installing TensorFlow 2.x on your Raspberry Pi, it may be because you are not using the latest version of Python 3.
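If installing full TensorFlow keeps failing on the Pi, a lighter option is the standalone tflite-runtime package, which ships only the interpreter. Here is a minimal sketch, assuming the tflite-runtime wheel installs on your Python version; only the import changes compared to the code above:
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Replace the random sample with your real, preprocessed input data.
input_data = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))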
Related
I'm a newbie in Python. I have a model.ckpt file. I want to convert that model to a Core ML model (.mlmodel) for use in an iOS project. I have spent a lot of time researching, but I don't know how to do it. Someone told me to use coremltools, but I could not find a tutorial on how to do that. The code below does not work.
import coremltools

coreml_model = coremltools.converters.keras.convert('./model.ckpt',
                                                    input_names='image',
                                                    image_input_names='image',
                                                    output_names='output',
                                                    class_labels=['1', '2'],
                                                    image_scale=1/255)
coreml_model.save('abc.mlmodel')
From the article:
convert_tf_keras_model
# Tested with TensorFlow 2.6.2
import tensorflow as tf
import coremltools as ct
tf_keras_model = tf.keras.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax),
    ]
)
# Pass in `tf.keras.Model` to the Unified Conversion API
mlmodel = ct.convert(tf_keras_model, convert_to="mlprogram")
# or save the keras model in SavedModel directory format and then convert
tf_keras_model.save('tf_keras_model')
mlmodel = ct.convert('tf_keras_model', convert_to="mlprogram")
# or load the model from a SavedModel and then convert
tf_keras_model = tf.keras.models.load_model('tf_keras_model')
mlmodel = ct.convert(tf_keras_model, convert_to="mlprogram")
# or save the keras model in HDF5 format and then convert
tf_keras_model.save('tf_keras_model.h5')
mlmodel = ct.convert('tf_keras_model.h5', convert_to="mlprogram")
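As a small addition that is not from the article: once ct.convert returns, you can write the result to disk with mlmodel.save; for the mlprogram backend the file must use the .mlpackage extension (the older neuralnetwork backend uses .mlmodel):
# Save the converted mlprogram to disk (.mlpackage is required for mlprograms).
mlmodel.save('model.mlpackage')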
To load TensorFlow saved models, check this.
I am trying to convert my custom-trained SSD MobileNet TF2 Object Detection model to .tflite format (flatbuffer); it will be used with a Raspberry Pi. I've followed the official TensorFlow tutorials for converting my model to a tflite model:
Note: I've used Colab with TensorFlow 2.5-gpu for training and TensorFlow 2.7-nightly for conversion (some GitHub issues related to SSD-to-tflite model conversion mentioned using the nightly version).
1- I started by trying to export the tflite graph using export_tflite_ssd_graph.py with these args:
!python object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path models/myssd_mobile/pipeline.config \
--trained_checkpoint_prefix models/myssd_mobile/ckpt-9.index \
--output_directory exported_models/tflite_model
but it showed the following error:
RuntimeError: tf.placeholder() is not compatible with eager execution.
even after I disabled it by adding tf.disable_eager_execution(), it showed the following error:
NameError: name 'graph_matcher' is not defined
so I realized that it might not have been made for TF2, so I converted the model with export_tflite_graph_tf2.py using the code below, and I got the SavedModel:
!python object_detection/export_tflite_graph_tf2.py \
--pipeline_config_path models/myssd_mobile/pipeline.config \
--trained_checkpoint_dir models/myssd_mobile \
--output_directory exported_models/tflite_model
2- I converted the tflite SavedModel to a .tflite model using the code below, which is taken from the TensorFlow docs:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('exported_models/tflite_model/saved_model')
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
After that, I created the tflite labels.txt manually, and here it is:
t1
t2
t3
t4
t5
t6
t7
t8
t9
t10
t11
then I ran the following script:
TFLite_detection_image.py
but it shows this error:
Traceback (most recent call last):
File "TFLite_detection_image.py", line 157, in <module>
for i in range(len(scores)):
TypeError: object of type 'numpy.float32' has no len()
Where is the mistake?
Thanks in advance
I had the same problem on both SSD MobileNet v2 320x320 and SSD MobileNet V2 FPNLite 640x640. So I figured out that it should not be related to the model itself.
This morning I fixed this error by regenerating all the needed files: the train/test (and val) split of the data, and the TensorFlow files like train.record, test.record, etc.
I also checked the labelmap I used to ensure that it contains all my classes.
After this preprocessing, I exported my model using export_tflite_graph_tf2.py and then ran my TF Lite converter, which is the following:
import tensorflow as tf
import argparse
# Define model and output directory arguments
parser = argparse.ArgumentParser()
parser.add_argument('--model', help='Folder that the saved model is located in',
                    default='exported-models/my_tflite_model/saved_model')
parser.add_argument('--output', help='Folder that the tflite model will be written to',
                    default='exported-models/my_tflite_model')
args = parser.parse_args()
converter = tf.lite.TFLiteConverter.from_saved_model(args.model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
output = args.output + '/model.tflite'
with tf.io.gfile.GFile(output, 'wb') as f:
    f.write(tflite_model)
Note: When I used TF Lite nightly, I also had the error, so I just used this script with TF2.
After that, I can confirm that both models are working on Raspberry Pi 4B+ with the same precision/recall scores I got on GPU.
I had the same error as you, TypeError: object of type 'numpy.float32' has no len(), and the solution was to change the indexing of scores, boxes and classes in the Python code, because that error means scores is a scalar value, not an array. Please refer to this answer.
BTW, this error happened when I used tflite-runtime on a Jetson Nano, but when I ran the code with TF2.5 on a Raspberry Pi it ran without changing anything.
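For reference, here is a minimal sketch of that index change. The exact output ordering is model-dependent, so treat the indices below as assumptions and print output_details first to confirm which output is which:
# Scripts like TFLite_detection_image.py read boxes/classes/scores at output
# indices 0/1/2, but TF2 SSD exports order the detection outputs differently.
output_details = interpreter.get_output_details()
for i, d in enumerate(output_details):
    print(i, d['name'], d['shape'])  # inspect the actual ordering first

boxes = interpreter.get_tensor(output_details[1]['index'])[0]    # assumed index
classes = interpreter.get_tensor(output_details[3]['index'])[0]  # assumed index
scores = interpreter.get_tensor(output_details[0]['index'])[0]   # assumed index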
I want to do quantization-aware training with my Keras model. I have tried the code below. I'm using TensorFlow 1.14.0.
train_graph = tf.Graph()
train_sess = tf.compat.v1.Session(graph=train_graph)
tf.compat.v1.keras.backend.set_session(train_sess)
with train_graph.as_default():
    tf.keras.backend.set_learning_phase(1)
    model = my_keras_model()
    tf.contrib.quantize.create_training_graph(input_graph=train_graph, quant_delay=5)
    train_sess.run(tf.global_variables_initializer())
    model.compile(...)
    model.fit_generator(...)
    saver = tf.compat.v1.train.Saver()
    saver.save(train_sess, checkpoint_path)
It works without errors.
However, the size of the saved model (h5 and ckpt) is exactly the same as that of the model without quantization.
Is this the right way? How can I check whether it is quantized well?
Or is there a better way to quantize?
When you finish the quantization-aware training and save your model to disk, it is actually not yet quantized. In other words, it is "prepared" for quantization, but the weights are still float32. You have to further convert your model to TFLite for it to actually be quantized. You can do so with the following piece of code:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
This will quantize your model with int8 weights and uint8 activations.
Have a look at the official example for further reference.
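To answer the "how can I check" part, here is a minimal sketch, assuming you write the converted flatbuffer to a file named quantized_model.tflite: compare its size on disk with the original h5 and inspect the tensor dtypes through the interpreter.
import os
import tensorflow as tf

with open('quantized_model.tflite', 'wb') as f:
    f.write(quantized_tflite_model)
print('size on disk:', os.path.getsize('quantized_model.tflite'), 'bytes')

interpreter = tf.lite.Interpreter(model_path='quantized_model.tflite')
interpreter.allocate_tensors()
for detail in interpreter.get_tensor_details():
    print(detail['name'], detail['dtype'])  # weight tensors should report int8/uint8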
I have created an object detection model using PyTorch, then converted it from .pth to .onnx and then to .pb, but now I need to convert it into .tflite for an Android app! How do I do it? It's my first time.
input_arrays = [64, 3, 224, 224]
output_arrays = ?
for binary classification.
I have done it from PyTorch, but everything I could find to look at was for Keras or TensorFlow...
This is the code I have used to convert it from .pb to .tflite:
from tensorflow import lite

converter = lite.TFLiteConverter.from_frozen_graph(
    'model/model.pb', input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
!tflite_convert \
  --output_file=model/model.tflite \
  --graph_def_file=model/model.pb \
  --input_arrays=input_arrays \
  --output_arrays=output_arrays
I think it has something to do with the input arrays and output arrays, but I'm not sure about it. Is graph_def_file supposed to be the path to model.pb?
No need to specify input and output arrays when using the following code:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Try this out.
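Since your pipeline already goes through ONNX, one way to get such a SavedModel directory is to export it from the ONNX model instead of the frozen .pb. This is only a sketch: it assumes the onnx and onnx-tf packages, the file names are placeholders, and recent onnx-tf versions write a SavedModel directory from export_graph:
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

onnx_model = onnx.load('model.onnx')    # placeholder: your exported ONNX model
tf_rep = prepare(onnx_model)            # build a TensorFlow representation
tf_rep.export_graph('saved_model_dir')  # writes a SavedModel directory

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
tflite_model = converter.convert()
open('converted_model.tflite', 'wb').write(tflite_model)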
I've frozen my model and got a .pb file. Then I quantized my model using TocoConverter on Linux, as it's not supported on Windows. I got quantized_model.tflite. I can load it and get predictions on Linux, but I have issues making it work on Windows, as my project requires.
I've tried to load it with tf.contrib.lite.Interpreter using this code:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter=tf.contrib.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
# change the following line to feed into your own data.
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'],input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
ImportError: No module named 'tensorflow.contrib.lite.python.interpreter'
But it failed with the "No module named 'tensorflow.contrib.lite.python.interpreter'" error. I always get these errors on Windows when trying to use something from tf.contrib.lite. Maybe there is a way to load this on Windows? Or can you advise alternative options to quantize a model on Windows?
toco is currently not supported in the Windows build for CMake. This is what I remember reading somewhere.
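If upgrading TensorFlow on Windows is an option, a possible workaround is to do the quantization with the Python converter API instead of the toco CLI. This is only a sketch, assuming TF 1.14/1.15 where the converter and interpreter are available under tf.lite outside of contrib; the graph path and tensor names are placeholders:
import tensorflow as tf

# Convert and quantize the frozen graph with the non-contrib API.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='frozen_model.pb',  # placeholder path to your frozen graph
    input_arrays=['input'],            # placeholder input tensor name
    output_arrays=['output'])          # placeholder output tensor name
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()
with open('quantized_model.tflite', 'wb') as f:
    f.write(tflite_model)

# Loading the quantized model also works outside of contrib:
interpreter = tf.lite.Interpreter(model_path='quantized_model.tflite')
interpreter.allocate_tensors()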