Can a YOLOv3-trained model be converted to a TensorFlow model? - python

I have trained a model to detect doors with YOLOv3, but now I need it in TensorFlow Lite. The problem is that training a model for TensorFlow requires annotation files in ".csv" or ".xml", while the ones I have are "*.txt". I found software that lets me create annotation files manually by drawing rectangles on pictures, but I cannot do that for thousands of images because of time constraints.
Can anyone guide me on how to handle this situation?
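For reference (this is not from the original thread), the YOLO .txt annotations can be converted to a CSV programmatically instead of being redrawn by hand. A minimal sketch, assuming the usual YOLO label format of "class x_center y_center width height" with values normalized to [0, 1]; the images/ and labels/ folder layout and file names are placeholders:

import csv
import glob
import os

from PIL import Image

rows = []
for label_path in glob.glob("labels/*.txt"):
    # Assumed layout: labels/xxx.txt pairs with images/xxx.jpg
    image_path = os.path.join("images", os.path.basename(label_path).replace(".txt", ".jpg"))
    img_w, img_h = Image.open(image_path).size
    with open(label_path) as f:
        for line in f:
            cls, xc, yc, w, h = line.split()
            # Denormalize the YOLO centre/size values to pixels
            xc, yc = float(xc) * img_w, float(yc) * img_h
            w, h = float(w) * img_w, float(h) * img_h
            # Convert centre/size to corner coordinates (Pascal-VOC style)
            xmin, ymin = int(xc - w / 2), int(yc - h / 2)
            xmax, ymax = int(xc + w / 2), int(yc + h / 2)
            rows.append([os.path.basename(image_path), img_w, img_h, cls, xmin, ymin, xmax, ymax])

with open("annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "width", "height", "class", "xmin", "ymin", "xmax", "ymax"])
    writer.writerows(rows)

The class column will contain the numeric YOLO class index; mapping it to names requires the classes file from your YOLO setup.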
I have followed the link below, but the resulting model did not work.
https://medium.com/analytics-vidhya/yolov3-to-tensorflow-lite-conversion-4602cec5c239

I think it would be best to train a TensorFlow implementation of YOLOv3 on your data; converting the TensorFlow model to TFLite should then be easy.
Here is YOLOv3 in TensorFlow: https://github.com/YunYang1994/tensorflow-yolov3
Then use the official TensorFlow converter to produce the TFLite model: https://www.tensorflow.org/lite/convert
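For that last step, the official converter is only a few lines. A minimal sketch, assuming the trained YOLOv3 graph has been exported as a TensorFlow SavedModel in saved_model_dir (a placeholder path):

import tensorflow as tf

# Convert an exported SavedModel to a TFLite flatbuffer
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
tflite_model = converter.convert()

with open("yolov3.tflite", "wb") as f:
    f.write(tflite_model)

Depending on which ops the YOLOv3 graph uses, you may also need to set converter.target_spec.supported_ops or apply post-training quantization before calling convert().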

Related

How to create a Keras model from saved weights without a config JSON (Mask-RCNN)

I'm trying to use the TACO dataset GitHub Repository to get a functional neural network, and I downloaded pre-trained weights from here. I understand that the .h5 file contains only the weights and not the architecture of the model itself. I am interested in getting a .hdf5 file that contains both the weights and the model architecture to test on sample images.
I tried the solution shown in the first answer to this question. The code below just prints None.

from tensorflow import keras
import h5py

f = h5py.File('mask_rcnn_taco_0100.h5', 'r')
print(f.attrs.get('model_config'))

I'm able to print a list of keys, values, and items with the following code, but I'm not sure how this translates to the actual model architecture.
print('KEYS-------------------------------------')
print(list(f.keys()))
print('VALUES-------------------------------------')
print(list(f.values()))
print('ITEMS-------------------------------------')
print(list(f.items()))
I think the issue is that I'm missing the config.json file, and I'm not sure where to find that or how to produce it.
A few specific questions:
Is there somewhere I can find a config.json file for a generic Mask-RCNN architecture and somehow apply the pre-trained TACO weights to it?
Is there a way to extract the model architecture from the weights file other than what I've already tried?
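One way to address the second question (a sketch, not from the original thread) is to rebuild the architecture in code and load the weights by name. This assumes the TACO detector follows the Matterport Mask_RCNN layout (the mrcnn package); the class count and file names below are placeholders that must match the repository's own config:

from mrcnn.config import Config
from mrcnn import model as modellib

class TacoInferenceConfig(Config):
    # Illustrative values; take the real ones from the TACO repo's config
    NAME = "taco"
    NUM_CLASSES = 1 + 10   # background + object classes (must match the weights)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = TacoInferenceConfig()

# Rebuild the Mask R-CNN graph in inference mode, then attach the downloaded weights
model = modellib.MaskRCNN(mode="inference", config=config, model_dir="logs")
model.load_weights("mask_rcnn_taco_0100.h5", by_name=True)

# model.keras_model now holds architecture plus weights; model.detect([image]) runs inference

With this approach a separate config.json is not needed: the architecture comes from the mrcnn code and the config class, and the .h5 file only has to supply the weights.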

Splitting an ONNX DNN Model

I'm trying to split DNN models in order to execute part of the network on the edge and the rest in the cloud. Because it has to be cross-platform and work with every framework, I need to do it directly on an ONNX model.
I know how to generate an ONNX model from TensorFlow/Keras and how to run an ONNX model, but I have realized that it is really hard to work with the ONNX file itself, for example visualizing or modifying it.
Can someone help me understand how to split an ONNX model, or at least how to run part of an ONNX model (e.g. from the input to layer N, and from layer N to the output)?
I'm starting from this situation:
# Imports assumed by the snippet below
import tensorflow as tf
import onnx
from tensorflow.keras.applications import MobileNetV2
from onnx_tf.backend import prepare

# load MobileNetV2 model
model = MobileNetV2()
# Export the model as a SavedModel
tf.saved_model.save(model, "saved_model")
# export to .onnx (notebook syntax; drop the "!" when running in a shell)
!python -m tf2onnx.convert --saved-model saved_model --output mobilenet_v2.onnx --opset 7
# open the saved ONNX Model
print("Import ONNX Model..")
onnx_model = onnx.load("mobilenet_v2.onnx")
tf_rep = prepare(onnx_model, logging_level="WARN", auto_cast=True)
I tried to use sclblonnx, but even on a model this size (although it's a small model) I can't really print the graph, and when I list the inputs and outputs with list_inputs/list_outputs I don't understand how they are interconnected.
Any help would be greatly appreciated. Thank you in advance.
From the ONNX Python API, you can split an ONNX model by specifying the input and output tensor names.
The first thing you probably need to do is understand the underlying graph of the ONNX model you have.
onnx_graph = onnx_model.graph
This returns the graph object.
After that, you need to understand where you want to separate this graph into two separate graphs (and so run two models).
You can plot the graph with Netron (this is what sclblonnx does), or you can inspect it manually by looking at
onnx_graph_nodes = onnx_graph.node
Of course, looking at the graph inputs (onnx_graph.input) and outputs (onnx_graph.output) is also important.
If you look at the "merge" file from sclblonnx you will see the syntax details for diving into a graph, as well as a "split" function that may help you.
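As a concrete illustration (not part of the original answer), newer versions of the onnx package also ship onnx.utils.extract_model, which cuts a sub-graph between named tensors. The tensor names below are placeholders; the real ones have to be read from the graph first:

import onnx

# Inspect the graph to find tensor names around the intended split point
onnx_model = onnx.load("mobilenet_v2.onnx")
for node in onnx_model.graph.node[:10]:
    print(node.name, list(node.input), list(node.output))

# Edge part: from the model input up to an intermediate tensor (placeholder names)
onnx.utils.extract_model(
    "mobilenet_v2.onnx", "mobilenet_v2_edge.onnx",
    input_names=["input_1"], output_names=["intermediate_tensor"])

# Cloud part: from that intermediate tensor to the model output
onnx.utils.extract_model(
    "mobilenet_v2.onnx", "mobilenet_v2_cloud.onnx",
    input_names=["intermediate_tensor"], output_names=["predictions"])

Running the edge part and feeding its output into the cloud part then reproduces the full model's result.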

Convert TFLite (TensorFlow) to MLModel (Apple)

I'm trying to convert TFLite Face Mesh model to MLModel (Apple).
TFLite model description:
https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view
TFLite actual .tflite file:
https://github.com/google/mediapipe/blob/master/mediapipe/models/face_landmark.tflite
Looking at CoreMLTools provided by Apple (https://coremltools.readme.io/docs/introductory-quickstart), it seems possible, but all the sample code demonstrates conversion from Keras rather than from TFLite (although TFLite is clearly listed as supported).
How does one convert TFLite model to MLModel model?
As far as I know, there is no direct conversion from TFLite to Core ML. Someone could create such a converter but apparently no one has.
Two options:
Do it yourself. There is a Python API to read the TFLite file (flatbuffers) and an API to write Core ML files (NeuralNetworkBuilder in coremltools). Go through the layers of the TFLite model one by one, add them to the NeuralNetworkBuilder, and then save as a .mlmodel file; a rough sketch of this approach follows after the second option.
Let TFLite do this for you. When you use the CoreMLDelegate in TFLite, it actually performs the model conversion on-the-fly and saves a .mlmodel file (or the compiled version, .mlmodelc). Then it uses Core ML to run this model. You can write some code to load the model with TFLite using the CoreMLDelegate, then grab the .mlmodel file that this created from the app bundle and use that.
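To make the first option more concrete, here is a minimal, heavily simplified sketch of the NeuralNetworkBuilder side in coremltools; the input/output names and shapes are placeholders for the Face Mesh model, and the single RELU layer only stands in for the per-operator translation a real converter would have to do:

import coremltools
import coremltools.models.datatypes as datatypes
from coremltools.models.neural_network import NeuralNetworkBuilder

# Declare the Core ML model's interface (names and shapes here are placeholders)
input_features = [("image", datatypes.Array(3, 192, 192))]
output_features = [("landmarks", datatypes.Array(1404))]
builder = NeuralNetworkBuilder(input_features, output_features)

# A real converter would walk every operator in the .tflite flatbuffer and emit
# the matching Core ML layer here; one activation layer is used as a stand-in
builder.add_activation(name="stub", non_linearity="RELU",
                       input_name="image", output_name="landmarks")

# Wrap the assembled spec and save it as a .mlmodel file
mlmodel = coremltools.models.MLModel(builder.spec)
mlmodel.save("face_landmark.mlmodel")

The hard part is the layer-by-layer mapping itself, which is why option 2 (letting the CoreMLDelegate do the conversion) is usually less work.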

Tensorflow: How to load a pre-trained ResNet model

I want to use a pre-trained ResNet model which Tensorflow provides here.
First I downloaded the code (resnet_v1.py) to reconstruct the model's graph here. The model's weights (resnet_v1_50.ckpt) can be found on the same page here.
The model can be tested using the following script (resnet_v1_test.py) from here. However, I have trouble extracting the right information from resnet_v1_test.py. I don't understand much of what happens in this script. Which functions are essential to pass a random image through the network? How can I access the weights and activations for further work?
What are the next steps from here? I would appreciate any help!
TL;DR: How can I use the resnet_v1_test.py script to perform classification and access weights and activations?
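There is no answer recorded here, but as an illustration, the essential steps with the TF 1.x slim API that resnet_v1.py belongs to are roughly: build the graph with resnet_v1_50, restore resnet_v1_50.ckpt, and run the tensors you care about. A hedged sketch (the end_points key and shapes are assumptions to be checked against the actual graph):

import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.slim.nets import resnet_v1

# Build the ResNet-50 graph; end_points exposes intermediate activations
images = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    logits, end_points = resnet_v1.resnet_v1_50(images, num_classes=1000, is_training=False)

with tf.Session() as sess:
    # Restore the pre-trained weights from the downloaded checkpoint
    saver = tf.train.Saver()
    saver.restore(sess, "resnet_v1_50.ckpt")

    # Pass a random image through the network to get class scores
    random_image = np.random.rand(1, 224, 224, 3).astype(np.float32)
    scores = sess.run(logits, feed_dict={images: random_image})

    # Activations: any entry of end_points; weights: any model variable
    block1 = sess.run(end_points["resnet_v1_50/block1"], feed_dict={images: random_image})
    first_weights = sess.run(slim.get_model_variables()[0])

resnet_v1_test.py wraps these same calls in unit tests, which is why it is hard to read as a usage example.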

Image retraining with Inception only checks my own specific categories, not the original TensorFlow dataset

I have retrained the Inception model on my data set of traffic signs. It works fine, but when I try to check a different image, e.g. a panda, the result is the name of a traffic sign with some probability. I don't understand why it does this. I need both the original TensorFlow dataset categories and my own categories.
My steps:
I installed Python 3.5.2 on Windows 7.
I installed TensorFlow with
pip install tensorflow
I downloaded these two files: retrain.py to train on my data and label_image.py to check an image.
Files downloaded from:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/image_retraining
You have misunderstood the fundamentals of transfer learning with respect to this image retraining program.
In the image retraining program you are referencing, you take the Inception CNN model that has already been pretrained on the ImageNet dataset. You then retrain only the final classification layers on your NEW classes and data.
The transfer learning occurs because you are retaining all the learnt feature extraction filters etc. in the early layers and you are just reclassifying the activations of those layers to new classes based on your new dataset. This means you are replacing the classification part with a new one. AFAIK there is no way to simply add classes to a CNN model via transfer learning because you have already trained a softmax layer (for example) with the classification distribution for each class.
To achieve what you are suggesting, you would need to retrain the final layers of Inception with the original dataset PLUS your additional data. This will take a long time due to the size of ImageNet.
I would re-evaluate whether you actually need to be able to utilise all these classes in your application or whether it is sufficient to just have your traffic signs etc.
You can learn more about the program in the TensorFlow tutorial here.
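To make the head-replacement idea concrete, here is an illustrative Keras sketch of the same principle (not the retrain.py script itself): the pretrained base is frozen and only a new softmax over the new classes is trained, which is why the resulting model can only ever answer with those classes:

import tensorflow as tf

NUM_NEW_CLASSES = 5  # e.g. five traffic-sign classes (placeholder)

# Pretrained feature extractor; the original ImageNet classification head is dropped
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # keep the learnt feature-extraction filters

# New classification head: a softmax over ONLY the new classes
outputs = tf.keras.layers.Dense(NUM_NEW_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# model.fit(traffic_sign_images, traffic_sign_labels, ...)  # trains only the new head

Anything you feed this model afterwards, a panda included, is forced into one of the NUM_NEW_CLASSES outputs, which is exactly the behaviour described in the question.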
