Create TFLite file with InceptionV3 and Food101 - python

I'm trying to create a TFLite model to use on Android.
I'm using the InceptionV3 model with the Food101 dataset.
I'm new to this whole ML world.
I found this code from TensorFlow that I'm trying to use:
https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py
I can't figure out where and what to add to convert the model into TFLite.
Please explain checkpoints and frozen graphs to me, and help me create a TFLite model.
Thanks.

You could first export the model to a SavedModel by setting --saved_model_dir,
and then use TFLiteConverter.from_saved_model:
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/lite/TFLiteConverter#from_saved_model
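A minimal, self-contained sketch of that flow (a toy tf.Module stands in here for the retrained InceptionV3 graph; with the real retrain.py, just point from_saved_model at whatever directory you passed as --saved_model_dir):

```python
import tensorflow as tf

# Toy stand-in for the graph retrain.py exports with --saved_model_dir
class Toy(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return {"probs": tf.nn.softmax(x)}

module = Toy()
tf.saved_model.save(module, "saved_model", signatures=module.__call__)

# Convert the SavedModel to a TFLite flatbuffer
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()

# Write the .tflite file for the Android app
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```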

Related

Can you use .tflite model like you would a normal tensorflow model?

I've used TensorFlow Lite to make my own custom android.tflite model as described in this TFLite Colab demo.
I was going to test my object detection model using tflite_support, but this module is not Windows-compatible.
Can I use my .tflite model with "regular" TensorFlow? If so, how do I load the .tflite model? Following this example, we use tf.saved_model.load(model_dir), which I believe will not load our android.tflite file.
Is there a function for loading a .tflite file? Or, while I'm making my model with tensorflow_model_maker, do I need to specify a different format for the model?
Additionally, I see that tf.lite has a way to convert normal TensorFlow models to TFLite models, but I don't see that it has a means of doing the reverse:
tf.lite.TFLiteConverter(funcs, trackable_obj=None)
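For what it's worth, a .tflite file can be run from plain TensorFlow without tflite_support via tf.lite.Interpreter. A minimal sketch (a toy model is converted in-place so the example is self-contained; with your own file you would pass model_path="android.tflite" instead of model_content):

```python
import numpy as np
import tensorflow as tf

# Toy model, converted to TFLite bytes so this sketch is self-contained
class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return x * 2.0

m = Doubler()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [m.__call__.get_concrete_function()], m)
tflite_bytes = converter.convert()

# tf.lite.Interpreter loads a .tflite flatbuffer directly
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run one inference
x = np.random.rand(1, 4).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(output_details[0]["index"])
```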

Splitting an ONNX DNN Model

I'm trying to split DNN models in order to execute part of the network on the edge and the rest in the cloud. Because it has to be cross-platform and work with every framework, I need to do it directly starting from an ONNX model.
I know how to generate an ONNX model starting from TensorFlow/Keras and how to run an ONNX model, but I realized that it is really hard to work on the ONNX file itself, e.g. to visualize it and modify it.
Is there someone who can help me understand how to split an ONNX model, or at least run part of an ONNX model (like from the input to layer N, and from layer N to the output)?
I'm starting from this situation:
import onnx
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from onnx_tf.backend import prepare

# load MobileNetV2 model
model = MobileNetV2()
# export the model as a SavedModel
tf.saved_model.save(model, "saved_model")
# export to .onnx (notebook shell command)
!python -m tf2onnx.convert --saved-model saved_model --output mobilenet_v2.onnx --opset 7
# open the saved ONNX model
print("Import ONNX Model..")
onnx_model = onnx.load("mobilenet_v2.onnx")
tf_rep = prepare(onnx_model, logging_level="WARN", auto_cast=True)
I tried to use sclblonnx, but on models this big (although it's a small model) I can't really print the graph, and when I list the inputs and outputs with list_inputs/list_outputs I don't really get how they are interconnected.
Any help would be greatly appreciated. Thank you in advance.
Per the ONNX Python API, you can split an ONNX model by specifying the input and output tensor names at which to cut.
The first thing you probably need to do is understand the underlying graph of the ONNX model you have.
onnx_graph = onnx_model.graph
Will return the graph object.
After that, you need to understand where you want to separate this graph into two separate graphs (and so run two models).
You can plot the graph with Netron (this is what sclblonnx does) or you can try to look inside manually by looking at
onnx_graph_nodes = onnx_graph.node
Of course looking at the graph inputs(onnx_graph.input) and outputs (onnx_graph.output) is also important.
If you look at the "merge" file from sclblonnx you will see the syntax details for diving into a graph, as well as a "split" function that may help you.

Can Yolo-V3 trained model be converted to TensorFlow model?

I have trained my model for doors in YOLOv3, but now I need it in TensorFlow Lite. I am facing a problem: to train my model for TensorFlow I need annotation files in ".csv" or ".xml", but the ones I have are "*.txt". I did find software to create annotation files manually by drawing rectangles on pictures, but I cannot do that for thousands of images due to time shortage.
Can anyone guide me on how to handle such a situation?
I followed the link below, but the resulting model did not work.
https://medium.com/analytics-vidhya/yolov3-to-tensorflow-lite-conversion-4602cec5c239
I think it would be good to train a TensorFlow implementation on your data; converting the TensorFlow model to TFLite should then be easy.
Here is YOLOv3 in TF: https://github.com/YunYang1994/tensorflow-yolov3
Then use the official TensorFlow tooling to convert to TFLite: https://www.tensorflow.org/lite/convert
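Since that repo's freeze script produces a frozen .pb graph, the TF1-style converter applies. A minimal sketch (a tiny frozen graph is generated in-place so the example runs; the real input/output tensor names for yolov3_coco.pb are different, so check them with Netron first):

```python
import tensorflow as tf

# Build a tiny frozen graph with the TF1 compat API; it stands in for
# the yolov3_coco.pb that the repo's freeze script produces
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 4], name="input")
    y = tf.identity(x * 2.0, name="output")
with open("frozen.pb", "wb") as f:
    f.write(g.as_graph_def().SerializeToString())

# TF1-style converter: name the input and output tensors of the frozen graph
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "frozen.pb", input_arrays=["input"], output_arrays=["output"])
tflite_model = converter.convert()
with open("yolov3.tflite", "wb") as f:
    f.write(tflite_model)
```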

Convert TFLite (TensorFlow) to MLModel (Apple)

I'm trying to convert TFLite Face Mesh model to MLModel (Apple).
TFLite model description:
https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view
TFLite actual .tflite file:
https://github.com/google/mediapipe/blob/master/mediapipe/models/face_landmark.tflite
Looking at Core ML Tools provided by Apple (https://coremltools.readme.io/docs/introductory-quickstart), it seems like it's possible, but all the sample code demonstrates conversion from Keras and not from TFLite (although it's clearly supported):
How does one convert TFLite model to MLModel model?
As far as I know, there is no direct conversion from TFLite to Core ML. Someone could create such a converter but apparently no one has.
Two options:
Do it yourself. There is a Python API to read the TFLite file (flatbuffers) and an API to write Core ML files (NeuralNetworkBuilder in coremltools). Go through the layers of the TFLite model one-by-one, and add them to the NeuralNetworkBuilder, then save as a .mlmodel file.
Let TFLite do this for you. When you use the CoreMLDelegate in TFLite, it actually performs the model conversion on-the-fly and saves a .mlmodel file (or the compiled version, .mlmodelc). Then it uses Core ML to run this model. You can write some code to load the model with TFLite using the CoreMLDelegate, then grab the .mlmodel file that this created from the app bundle and use that.

Having trouble converting tensorflow model to tflite

I have used ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz to train a simple model to detect cars and bikes, but I'm not able to figure out how to convert it to TFLite. I have used export_tflite_ssd_graph.py to create the .pb file, and I have attached the tflite_graph.pb and tflite_graph.pbtxt files here. Please help me out; thanks in advance.
https://drive.google.com/drive/folders/17tGCG2H7NEB7StTPHTBcNuZzJWOjEAtm?usp=sharing
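For reference, the Object Detection API docs pair export_tflite_ssd_graph.py with a tflite_convert invocation along these lines. The input/output tensor names below are the standard ones that script emits, and the quantization flags match a quantized SSD pipeline; treat this as a sketch to adapt, not a guaranteed command for this exact model:

```shell
tflite_convert \
  --graph_def_file=tflite_graph.pb \
  --output_file=detect.tflite \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_dev_values=128 \
  --allow_custom_ops
```

--allow_custom_ops is needed because the detection post-processing op is a TFLite custom op rather than a standard TensorFlow op.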
