Splitting an ONNX DNN Model - python

I'm trying to split DNN models in order to execute part of the network on the edge and the rest in the cloud. Because it has to be cross-platform and work with every framework, I need to do it directly on an ONNX model.
I know how to generate an ONNX model starting from tensorflow/keras and how to run an ONNX model, but I realized that it is really hard to work with the ONNX file itself, e.g. visualizing and modifying it.
Can someone help me understand how to split an ONNX model, or at least how to run part of one (e.g. from the input to layer N, and from layer N to the output)?
I'm starting from this situation:
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
import onnx
from onnx_tf.backend import prepare

# load MobileNetV2 model
model = MobileNetV2()
# export the model as a TensorFlow SavedModel
tf.saved_model.save(model, "saved_model")
# convert to .onnx
!python -m tf2onnx.convert --saved-model saved_model --output mobilenet_v2.onnx --opset 7
# open the saved ONNX model
print("Import ONNX Model..")
onnx_model = onnx.load("mobilenet_v2.onnx")
tf_rep = prepare(onnx_model, logging_level="WARN", auto_cast=True)
I tried to use sclblonnx, but on a model this big (even though it's a small model) I can't really print the graph, and when I list the inputs and outputs with list_inputs/list_outputs I don't really get how they are interconnected.
Any help would be greatly appreciated. Thank you in advance.

Per the ONNX Python API, you can split an ONNX model by specifying the input and output tensor names of the sub-graph you want to keep.
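For instance, onnx.utils.extract_model does exactly this (available in onnx >= 1.8). A minimal sketch; the tensor names below are hypothetical placeholders, so substitute the ones you find in your own graph:
import onnx

# edge half: from the original input up to an intermediate tensor
onnx.utils.extract_model(
    "mobilenet_v2.onnx",           # source model
    "mobilenet_v2_head.onnx",      # destination for the sub-model
    input_names=["input_1"],       # hypothetical graph input name
    output_names=["block_5_add"],  # hypothetical intermediate tensor
)

# cloud half: from that same intermediate tensor to the final output
onnx.utils.extract_model(
    "mobilenet_v2.onnx",
    "mobilenet_v2_tail.onnx",
    input_names=["block_5_add"],
    output_names=["predictions"],  # hypothetical final output name
)
Running both halves back to back (feeding the head's output tensor into the tail) should reproduce the original model's result.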

The first thing you probably need to do is understand the underlying graph of the ONNX model you have.
onnx_graph = onnx_model.graph
will return the graph object.
After that, you need to understand where you want to separate this graph into two separate graphs (and so run two models).
You can plot the graph with Netron (this is what sclblonnx does), or you can inspect it manually by looking at
onnx_graph_nodes = onnx_graph.node
Of course, looking at the graph inputs (onnx_graph.input) and outputs (onnx_graph.output) is also important.
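For example, a short sketch that dumps how the nodes are wired together; every node lists the tensor names it consumes and produces, which is how you trace connections and pick a split point:
import onnx

onnx_model = onnx.load("mobilenet_v2.onnx")
graph = onnx_model.graph

print("graph inputs: ", [i.name for i in graph.input])
print("graph outputs:", [o.name for o in graph.output])

for node in graph.node:
    print(node.op_type, node.name)
    print("  consumes:", list(node.input))
    print("  produces:", list(node.output))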
If you look at the "merge" file from sclblonnx you will see the syntax details for diving into a graph as well as a "split" function at may help you.

Related

How to create a Keras model from saved weights without a config JSON (Mask-RCNN)

I'm trying to use the TACO dataset GitHub Repository to get a functional neural network, and I downloaded pre-trained weights from here. I understand that the .h5 file contains only the weights and not the architecture of the model itself. I am interested in getting a .hdf5 file that contains both the weights and the model architecture to test on sample images.
I tried the solution shown in the first answer to this question. The code below just prints None.
from tensorflow import keras
import h5py
f = h5py.File('mask_rcnn_taco_0100.h5', 'r')
print(f.attrs.get('model_config'))
I'm able to print a list of keys, values, and items with the following code, but I'm not sure how this translates to the actual model architecture.
print('KEYS-------------------------------------')
print(list(f.keys()))
print('VALUES-------------------------------------')
print(list(f.values()))
print('ITEMS-------------------------------------')
print(list(f.items()))
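For reference, a minimal sketch that walks the whole HDF5 hierarchy with h5py's visititems; it recovers layer names and weight shapes, but not the architecture itself, since this .h5 stores no model_config:
import h5py

def show(name, obj):
    # print every stored weight array with its shape and dtype
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)

with h5py.File('mask_rcnn_taco_0100.h5', 'r') as f:
    f.visititems(show)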
I think the issue is that I'm missing the config.json file, and I'm not sure where to find that or how to produce it.
A few specific questions:
Is there somewhere I can find a config.json file for a generic Mask-RCNN architecture and somehow apply the pre-trained TACO weights to it?
Is there a way to extract the model architecture from the weights file other than what I've already tried?

Can Yolo-V3 trained model be converted to TensorFlow model?

I have trained my model of doors in YOLOv3, but now I need it in TensorFlow Lite. The problem is that to train my model for TensorFlow I need annotation files in ".csv" or ".xml", but the ones I have are "*.txt". I did find software to create annotation files manually by drawing rectangles on pictures, but I cannot do that for thousands of images due to time constraints.
Can anyone guide me how to handle such situation?
I followed the link below, but the resulting model did not work.
https://medium.com/analytics-vidhya/yolov3-to-tensorflow-lite-conversion-4602cec5c239
I think it would be good to train a TensorFlow implementation of YOLOv3 on your data; converting the TensorFlow model to TFLite should then be easy.
Here is YOLOv3 in TensorFlow: https://github.com/YunYang1994/tensorflow-yolov3
Then use the official TensorFlow tooling to convert to TFLite: https://www.tensorflow.org/lite/convert
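The conversion step itself is short. A minimal sketch, assuming the trained YOLOv3 graph has been exported as a SavedModel in a directory called yolov3_saved_model (a hypothetical path):
import tensorflow as tf

# load the SavedModel and convert it to a TFLite flatbuffer
converter = tf.lite.TFLiteConverter.from_saved_model("yolov3_saved_model")
tflite_model = converter.convert()

with open("yolov3.tflite", "wb") as f:
    f.write(tflite_model)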

Tensorflow: How to load a pre-trained ResNet model

I want to use a pre-trained ResNet model which TensorFlow provides here.
First I downloaded the code (resnet_v1.py) to reconstruct the model's graph here. The model's weights (resnet_v1_50.ckpt) can be found on the same page here.
The model can be tested using the following script (resnet_v1_test.py) from here. However, I have trouble extracting the right information from resnet_v1_test.py. I don't understand many things that happen in this script. Which are the essential functions to pass a random image through the network? How can I access the weights and activations for further work?
What are the next steps from here? I would appreciate any help!
TL;DR: How can I use the resnet_v1_test.py script to perform classification and access weights and activations?
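A minimal TF1/slim sketch of the essential calls, assuming the slim "nets" package from the tensorflow/models repository is on the PYTHONPATH and resnet_v1_50.ckpt has been downloaded; end_points holds the intermediate activations:
import numpy as np
import tensorflow as tf
from nets import resnet_v1

slim = tf.contrib.slim

# build the graph for a single 224x224 RGB image
images = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    logits, end_points = resnet_v1.resnet_v1_50(images, num_classes=1000,
                                                is_training=False)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "resnet_v1_50.ckpt")
    out, acts = sess.run([logits, end_points],
                         feed_dict={images: np.random.rand(1, 224, 224, 3)})
    print(np.argmax(out))      # predicted class for the random input
    print(list(acts.keys()))   # names of the accessible activations
The weights themselves can be read via tf.trainable_variables() and sess.run(variable).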

How to Optimize a Trained Frozen Model for Inference?

My goal is to decrease the size and complexity of a pre-trained model (a TensorFlow frozen graph as a Protobuf .pb file) as far as possible, to make inference (in my case real-time object detection using webcams) as fast as possible.
(See my project repo for more information: https://github.com/GustavZ/realtime_object_detection)
Let's take a look at the pre-trained ssd_mobilenet_v1_coco provided by the TensorFlow Object Detection API:
link to ssd_mobilenet graph
Which layers are not necessary for inference (i.e. only needed for the already completed training) and can thus be removed (from the config file, so as to export a new frozen model using the export_inference_graph.py script)?
It would be very nice to get a general answer on how to optimize Models for inference as well as an answer on my special case as I think this could be of interest for others.
EDIT: I now know about the optimize_for_inference.py script provided by TensorFlow, but I have no experience using it. For example, how do I know which are the really necessary input and output nodes, or how do I read them from TensorBoard?
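As a starting point, a small sketch (assuming the frozen graph is saved as frozen_inference_graph.pb, a hypothetical filename): Placeholder ops are the graph's inputs, and nodes whose output feeds nothing else are usually the outputs; those names can then be passed to the optimize_for_inference library directly:
import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# guess the input and output nodes
consumed = {i.split(":")[0].lstrip("^") for n in graph_def.node for i in n.input}
inputs = [n.name for n in graph_def.node if n.op == "Placeholder"]
outputs = [n.name for n in graph_def.node if n.name not in consumed]
print("likely inputs: ", inputs)
print("likely outputs:", outputs)

# strip training-only nodes given those input/output names
opt_graph = optimize_for_inference_lib.optimize_for_inference(
    graph_def, inputs, outputs, tf.float32.as_datatype_enum)
Note that the heuristic is rough; for an SSD detection model the real outputs are the detection tensors (boxes, scores, classes, num_detections), so check the printed guesses against TensorBoard or Netron.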

How do you get the name of the tensorflow output nodes in a Keras Model?

I'm trying to create a pb file from my Keras (tensorflow backend) model so I can build it on iOS. I'm using freeze.py and I need to pass the output nodes. How do I get the names of the output nodes of my Keras model?
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
You can use Keras model.summary() to get the name of the last layer.
If model.outputs is not empty, you can get the node names via:
[node.op.name for node in model.outputs]
You get the session via:
session = keras.backend.get_session()
and convert all training variables to constants via:
min_graph = convert_variables_to_constants(session, session.graph_def, [node.op.name for node in model.outputs])
After that you can write a protobuf file via:
tensorflow.train.write_graph(min_graph, "/logdir/", "file.pb", as_text=True)
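Putting those steps together, a minimal end-to-end sketch, assuming a TF1-era tf.keras setup (where convert_variables_to_constants lives in tensorflow.python.framework.graph_util) and MobileNetV2 standing in for your model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.framework.graph_util import convert_variables_to_constants

model = keras.applications.MobileNetV2()   # your model here
output_names = [node.op.name for node in model.outputs]
print("output nodes:", output_names)

# freeze the variables and write the graph to disk
session = keras.backend.get_session()
min_graph = convert_variables_to_constants(session, session.graph_def, output_names)
tf.train.write_graph(min_graph, "/logdir/", "file.pb", as_text=True)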
If output nodes are not explicitly specified when constructing a model in Keras, you can print them out like this:
[print(n.name) for n in tf.get_default_graph().as_graph_def().node]
Then all you need to do is find the right one, whose name is often similar to that of the activation function. You can then use this string as the value for output_node_names in the freeze_graph function.
You can also use the TensorFlow utility summarize_graph to find possible output nodes. From the official documentation:
Many of the transforms that the tool supports need to know what the input and output layers of the model are. The best source for these is the model training process, where for a classifier the inputs will be the nodes that receive the data from the training set, and the output will be the predictions. If you're unsure, the summarize_graph tool can inspect the model and provide guesses about likely input and output nodes, as well as other information that's useful for debugging.
It just needs the saved graph pb file as the input. Check the documentation for an example.
The output_node_names should contain the names of the graph nodes you intend to use for inference (e.g. softmax). It is used to extract the sub-graph that will be needed for inference.
It may be useful to look at freeze_graph_test.
