Error converting FaceNet model .pb file to TFLite format - python

I'm trying to convert a pre-trained frozen .pb model based on Inception ResNet, which I got from David Sandberg's GitHub, with the TensorFlow Lite Converter on Ubuntu using the following command:
/home/nils/.local/bin/tflite_convert
--output_file=/home/nils/Documents/frozen.tflite
--graph_def_file=/home/nils/Documents/20180402-114759/20180402-114759.pb
--input_arrays=input
--output_arrays=embeddings
--input_shapes=1,160,160,3
However, I get the following error:
2018-12-03 15:03:16.807431: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Traceback (most recent call last):
File "/home/nils/.local/bin/tflite_convert", line 11, in <module>
sys.exit(main())
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
_convert_model(tflite_flags)
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 162, in _convert_model
output_data = converter.convert()
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py", line 453, in convert
**converter_kwargs)
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 342, in toco_convert_impl
input_data.SerializeToString())
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
(stdout, stderr))
RuntimeError: TOCO failed see console for info.
2018-12-03 15:03:26.006252: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: FIFOQueueV2
2018-12-03 15:03:26.006322: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute: batch_join/fifo_queue
2018-12-03 15:03:26.006339: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: QueueDequeueUpToV2
2018-12-03 15:03:26.006352: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute: batch_join
2018-12-03 15:03:27.496676: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 5601 operators, 9399 arrays (0 quantized)
2018-12-03 15:03:28.603936: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 3578 operators, 6254 arrays (0 quantized)
2018-12-03 15:03:29.418074: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 3578 operators, 6254 arrays (0 quantized)
2018-12-03 15:03:29.420354: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:42] Check failed: IsConstantParameterArray(*model, bn_op->inputs[1]) && IsConstantParameterArray(*model, bn_op->inputs[2]) && IsConstantParameterArray(*model, bn_op->inputs[3]) Batch normalization resolution requires that mean, multiplier and offset arrays be constant.
Aborted (core dumped)
None
If I understand this correctly, it might be because of two unsupported ops, QueueDequeueUpToV2 and FIFOQueueV2, but I don't know for sure.
Do you have any idea what the problem might be, or how I can solve this error? What does that error even mean? I want this model to run on a mobile Android device; are there any alternatives?
Versions:
TensorFlow 1.12
Python 3.6.7
Ubuntu 18.04.1 LTS
on a VirtualBox
Thanks in advance!

I have solved this problem here, and I am adding the snippet here too:
I was able to convert the FaceNet .pb to a .tflite model, and the following are the instructions to do so:
We will quantize the pre-trained FaceNet model with 512 embedding size. This model is about 95 MB in size before quantization.
$ ls -l model_pc
total 461248
-rw-rw-r--@ 1 milinddeore staff 95745767 Apr 9 2018 20180402-114759.pb
Create a file inference_graph.py with the following code:
import tensorflow as tf
from src.models import inception_resnet_v1
import sys
import click
from pathlib import Path

@click.command()
@click.argument('training_checkpoint_dir', type=click.Path(exists=True, file_okay=False, resolve_path=True))
@click.argument('eval_checkpoint_dir', type=click.Path(exists=True, file_okay=False, resolve_path=True))
def main(training_checkpoint_dir, eval_checkpoint_dir):
    training_checkpoint = Path(training_checkpoint_dir) / "model-20180402-114759.ckpt-275"
    eval_checkpoint = Path(eval_checkpoint_dir) / "imagenet_facenet.ckpt"
    # Rebuild the network with a plain placeholder input, so the graph
    # contains none of the FIFO queue ops that TOCO cannot convert.
    data_input = tf.placeholder(name='input', dtype=tf.float32, shape=[None, 160, 160, 3])
    output, _ = inception_resnet_v1.inference(data_input, keep_probability=0.8, phase_train=False, bottleneck_layer_size=512)
    label_batch = tf.identity(output, name='label_batch')
    embeddings = tf.identity(output, name='embeddings')
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        saver = tf.train.Saver()
        saver.restore(sess, training_checkpoint.as_posix())
        save_path = saver.save(sess, eval_checkpoint.as_posix())
        print("Model saved in file: %s" % save_path)

if __name__ == "__main__":
    main()
Running this file on the pre-trained model will generate a model for inference. Download the pre-trained model and unzip it to the model_pre_trained/ directory.
Make sure you have Python ≥ 3.4.
python3 inference_graph.py model_pre_trained/ model_inference/
FaceNet provides a freeze_graph.py file, which we will use to freeze the inference model.
python3 src/freeze_graph.py model_inference/ my_facenet.pb
Once the frozen model is generated, it's time to convert it to .tflite:
$ tflite_convert --output_file model_mobile/my_facenet.tflite --graph_def_file my_facenet.pb --input_arrays "input" --input_shapes "1,160,160,3" --output_arrays embeddings --output_format TFLITE --mean_values 128 --std_dev_values 128 --default_ranges_min 0 --default_ranges_max 6 --inference_type QUANTIZED_UINT8 --inference_input_type QUANTIZED_UINT8
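If you prefer the Python API over the CLI, the equivalent quantized conversion should look roughly like this. This is a sketch against the TF 1.12-era API (where the converter still lives under tf.contrib.lite), and the flag-to-attribute mapping here is my assumption:
import tensorflow as tf

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    'my_facenet.pb', input_arrays=['input'], output_arrays=['embeddings'],
    input_shapes={'input': [1, 160, 160, 3]})
# Mirror the CLI flags: fully quantized inference, with fallback ranges
# for ops that carry no recorded min/max.
converter.inference_type = tf.contrib.lite.constants.QUANTIZED_UINT8
converter.inference_input_type = tf.contrib.lite.constants.QUANTIZED_UINT8
converter.quantized_input_stats = {'input': (128, 128)}  # (mean, std_dev)
converter.default_ranges_stats = (0, 6)
with open('model_mobile/my_facenet.tflite', 'wb') as f:
    f.write(converter.convert())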
Let us check the quantized model size:
$ ls -l model_mobile/
total 47232
-rw-r--r--@ 1 milinddeore staff 23667888 Feb 25 13:39 my_facenet.tflite
Interpreter code:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="/Users/milinddeore/facenet/model_mobile/my_facenet.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print('INPUTS: ')
print(input_details)
print('OUTPUTS: ')
print(output_details)
Interpreter output:
$ python inout.py
INPUTS:
[{'index': 451, 'shape': array([ 1, 160, 160, 3], dtype=int32), 'quantization': (0.0078125, 128L), 'name': 'input', 'dtype': <type 'numpy.uint8'>}]
OUTPUTS:
[{'index': 450, 'shape': array([ 1, 512], dtype=int32), 'quantization': (0.0235294122248888, 0L), 'name': 'embeddings', 'dtype': <type 'numpy.uint8'>}]
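Note that the embeddings come back as uint8. Before comparing faces, you would map them back to floats using the (scale, zero_point) pair shown in the output details; a minimal sketch, continuing the interpreter snippet above:
# real_value = scale * (quantized_value - zero_point)
scale, zero_point = output_details[0]['quantization']
embeddings_f32 = scale * (output_data.astype(np.float32) - zero_point)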
Hope this helps!

I had no luck with @milind-deore's suggestions.
The model does reduce to 23 MB, but the embeddings seem to be broken.
I found an alternative way: TF -> Keras -> TF Lite
David Sandberg's FaceNet implementation can be converted to TensorFlow Lite, first converting from TensorFlow to Keras, and then from Keras to TensorFlow Lite.
I created this Google Colab that does the conversion.
Most of the code was taken from here.
What it does is as follows:
Download Hiroki Taniai's Keras FaceNet implementation
Override the inception_resnet_v1.py file with my patched version (which adds an extra layer to the model so it outputs normalized embeddings)
Download Sandberg's pre-trained model (20180402-114759) from here, and unzip it
Extract the tensors from the checkpoint file and write the weights to numpy arrays on disk, mapped to the name of each corresponding layer (a sketch of this step follows the conversion command below)
Create a new Keras model with random weights (Important: using 512 classes)
Write the weights for each corresponding layer, reading from the numpy arrays
Store the model in the Keras .h5 format
Convert Keras to TensorFlow Lite using the command "tflite_convert"
tflite_convert --post_training_quantize --output_file facenet.tflite --keras_model_file /content/keras-facenet/model/keras/model/facenet_keras.h5
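For the weight-extraction step mentioned above, here is a minimal sketch of the idea using tf.train.NewCheckpointReader; the paths are illustrative, and the real layer-name mapping in the Colab is more involved:
import numpy as np
import tensorflow as tf

# Dump every tensor stored in Sandberg's checkpoint to a .npy file.
reader = tf.train.NewCheckpointReader('model-20180402-114759.ckpt-275')
for name in reader.get_variable_to_shape_map():
    # Checkpoint names contain '/', so sanitize them for the filesystem.
    np.save('weights/' + name.replace('/', '_') + '.npy', reader.get_tensor(name))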
Also in my Colab I provide some code to show that the conversion is good, and the TFLite model does work.
distance bill vs bill 0.7266881
distance bill vs larry 1.2134411
So even though I'm not aligning the faces, a threshold of about 1.2 should work well for recognition.
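For reference, the comparison itself is just the Euclidean distance between embedding vectors; a minimal sketch using the threshold above:
import numpy as np

def is_same_person(emb1, emb2, threshold=1.2):
    # The patched model outputs L2-normalized embeddings, so the
    # distance between two of them always falls in [0, 2].
    return np.linalg.norm(emb1 - emb2) < threshold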
Hope it helps!

Related

Getting the error "'str' object has no attribute 'decode'" when trying to use custom weights for image classification [duplicate]

After training, I saved both the whole Keras model and only the weights, using
model.save_weights(MODEL_WEIGHTS) and model.save(MODEL_NAME)
The model and weights were saved successfully, with no error.
I can successfully load the weights simply using model.load_weights, and they are good to go. But when I try to load the saved model via load_model, I get an error.
File "C:/Users/Rizwan/model_testing/model_performance.py", line 46, in <module>
Model2 = load_model('nasnet_RS2.h5',custom_objects={'euc_dist_keras': euc_dist_keras})
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 321, in _deserialize_model
optimizer_weights_group['weight_names']]
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 320, in <listcomp>
n.decode('utf8') for n in
AttributeError: 'str' object has no attribute 'decode'
I never received this error before and used to load models successfully. I am using Keras 2.2.4 with the TensorFlow backend, Python 3.6.
My code for training is:
from keras_preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.models import Model, load_model
from keras.layers import Dense
from keras.applications.nasnet import NASNetMobile
from keras.callbacks import ReduceLROnPlateau, TensorBoard, ModelCheckpoint, EarlyStopping
import pandas as pd

MODEL_NAME = "nasnet_RS2.h5"
MODEL_WEIGHTS = "nasnet_RS2_weights.h5"

def euc_dist_keras(y_true, y_pred):
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))

def main():
    # Here, we initialize the "NASNetMobile" model type and customize the
    # final feature regressor layer.
    # NASNet is a neural network architecture developed by Google.
    # This architecture is specialized for transfer learning, and was
    # discovered via Neural Architecture Search.
    # NASNetMobile is a smaller version of NASNet.
    model = NASNetMobile()
    model = Model(model.input, Dense(1, activation='linear', kernel_initializer='normal')(model.layers[-2].output))
    # model = load_model('current_best.hdf5', custom_objects={'euc_dist_keras': euc_dist_keras})

    # This model will use the "Adam" optimizer.
    model.compile("adam", euc_dist_keras)
    lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.003)
    # This callback will log model stats to Tensorboard.
    tb_callback = TensorBoard()
    # This callback will checkpoint the best model at every epoch.
    mc_callback = ModelCheckpoint(filepath='current_best_mem3.h5', verbose=1, save_best_only=True)
    es_callback = EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=0, mode='auto', baseline=None, restore_best_weights=True)

    # These are the callbacks.
    # callbacks = [lr_callback, tb_callback, mc_callback]
    callbacks = [lr_callback, tb_callback, es_callback]

    # This is the train DataSequence.
    train_pd = pd.read_csv("./train3.txt", delimiter=" ", names=["id", "label"], index_col=None)
    test_pd = pd.read_csv("./val3.txt", delimiter=" ", names=["id", "label"], index_col=None)
    # train_pd = pd.read_csv("./train2.txt", delimiter=" ", header=None, index_col=None)
    # test_pd = pd.read_csv("./val2.txt", delimiter=" ", header=None, index_col=None)
    # model.summary()

    batch_size = 32
    datagen = ImageDataGenerator(rescale=1. / 255)
    train_generator = datagen.flow_from_dataframe(dataframe=train_pd,
                                                  directory="./images", x_col="id", y_col="label",
                                                  has_ext=True,
                                                  class_mode="other", target_size=(224, 224),
                                                  batch_size=batch_size)
    valid_generator = datagen.flow_from_dataframe(dataframe=test_pd, directory="./images", x_col="id", y_col="label",
                                                  has_ext=True, class_mode="other", target_size=(224, 224),
                                                  batch_size=batch_size)
    STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
    STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
    model.fit_generator(generator=train_generator,
                        steps_per_epoch=STEP_SIZE_TRAIN,
                        validation_data=valid_generator,
                        validation_steps=STEP_SIZE_VALID,
                        callbacks=callbacks,
                        epochs=20)

    # We save the model and the weights.
    model.save_weights(MODEL_WEIGHTS)
    model.save(MODEL_NAME)

if __name__ == '__main__':
    # freeze_support() here if program needs to be frozen
    main()
For me, the solution was downgrading the h5py package (in my case to 2.10.0); apparently, putting back only Keras and TensorFlow to the correct versions was not enough.
I downgraded my h5py package with the following command,
pip install 'h5py==2.10.0' --force-reinstall
Restarted my ipython kernel and it worked.
For me, the problem was that the h5py version was newer than in my previous build.
I fixed it by setting it to 2.10.0.
Downgrade h5py package with the following command to resolve the issue,
pip install h5py==2.10.0 --force-reinstall
I had the same problem, and solved it by passing compile=False to load_model (this skips deserializing the optimizer state, which is where the decode fails):
model_ = load_model('path to your model.h5', custom_objects={'Scale': Scale()}, compile=False)
sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)
model_.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
Another fix is to save using the TF format file and not h5py: save_format='tf'. In my case:
model.save_weights("NMT_model_weight.tf",save_format='tf')
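Assuming the same model architecture on the loading side, the matching call would then be:
model.load_weights("NMT_model_weight.tf")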
This is probably due to a model saved from a different version of Keras. I got the same problem when loading a model generated by tensorflow.keras (which is similar to Keras 2.1.6 for TF 1.12, I think) from Keras 2.2.6.
You can load the weights with model.load_weights and re-save the complete model from the Keras version you want to use.
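As a sketch of that round trip (build_model here is a hypothetical stand-in for however you define the architecture in the target Keras version):
model = build_model()                       # hypothetical: your own architecture definition
model.load_weights('old_model_weights.h5')  # weights saved from the other Keras version
model.save('model_resaved.h5')              # re-saved by the Keras version you actually use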
The solution that worked for me was:
pip3 uninstall keras
pip3 uninstall tensorflow
pip3 install --upgrade pip
pip3 install tensorflow
pip3 install keras
I still kept having this error after having tensorflow==2.4.1, h5py==2.10.0, and Python 3.8 in my environment.
What fixed it was downgrading the Python version to 3.6.9.
Downgrading python, tensorflow, keras and h5py resolved the issue.
python -> 3.6.2
pip install tensorflow==1.3.0
pip install keras==2.1.2
pip install 'h5py==2.10.0' --force-reinstall

MaskRCNN TensorFlow Lite Inference Issue. No output from TFLite Model

System information
OS Platform and Distribution: Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-1034-azure x86_64)
TensorFlow installed from: pip install
TensorFlow version: 2.3.0
Command used to run the converter
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.allow_custom_ops = True
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
converter.optimizations = [ tf.lite.Optimize.DEFAULT ]
tflite_model = converter.convert()
link to Jupyter notebook and tflite model
https://drive.google.com/drive/folders/1pTB33fTSo5ENzevobTvuG7hN4YmiCPF_?usp=sharing
Commands used for inference
### Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="model_2.3.tflite")
interpreter.allocate_tensors()
### Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
### Test the model on random input data.
input_data_1 = np.array(np.random.random_sample(input_details[0]['shape']), dtype=np.float32)
input_data_2 = np.array(np.random.random_sample(input_details[1]['shape']), dtype=np.float32)
input_data_3 = np.array(np.random.random_sample(input_details[2]['shape']), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data_1)
interpreter.set_tensor(input_details[1]['index'], input_data_2)
interpreter.set_tensor(input_details[2]['index'], input_data_3)
interpreter.invoke()  # ---> The kernel gets stuck here with no output. I am executing the code from Jupyter.
The output from the converter invocation
No output in Jupyter.
Segmentation fault (core dumped) -- when executed from the command line.
Failure details
Conversion is successful, but there is no output from the model.
Could you please provide some ideas? I am stuck here and don't know how to proceed!

Error converting tensorflow .pb model to .mlmodel

I am trying to convert a TensorFlow graph (.pb file) into a .mlmodel:
import tfcoreml
coreml_model = tfcoreml.convert(
    tf_model_path='optimized_model.pb',
    mlmodel_path='FaceImages.mlmodel',
    output_feature_names=['final_result'],
    input_name_shape_dict={'ResizeBilinear': {'images': None, 'size': {None, None}}},
    minimum_ios_deployment_target='13')
but I am getting following error:
/usr/local/lib/python3.6/dist-packages/coremltools/converters/nnssa/frontend/tensorflow/graphdef_to_ssa.py
in load_tf_graph(graph_file)
21 with tf.io.gfile.GFile(graph_file, "rb") as f:
22 graph_def = tf.compat.v1.GraphDef()
---> 23 graph_def.ParseFromString(f.read())
24
25 # Then, we import the graph_def into a new Graph and returns it
DecodeError: Error parsing message
Could anybody help with this, please?
Here is the colab project where I have attached the tensorflow model and the associated code for conversion
https://colab.research.google.com/drive/1S7nf7pnX15UuswFZaTih5pHhfDFwG5Xa
Have you checked that the version of TensorFlow you are using is compatible with those libraries? This is just a guess, but you could try running
!pip install tensorflow --upgrade
at the top of the notebook to see if it resolves the issue.
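Since ParseFromString is what raises the DecodeError, another quick check is to verify that the .pb really is a serialized frozen GraphDef (and not, say, a SavedModel artifact or a Git LFS pointer file). This sketch just repeats the same read the converter performs, per the traceback above:
import tensorflow as tf

with tf.io.gfile.GFile('optimized_model.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())  # raises DecodeError if not a GraphDef
print('parsed %d nodes' % len(graph_def.node))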

ValueError: Tried to convert 'y' to a tensor and failed. Error: None values not supported

DOESN'T WORK:
from tensorflow.python.keras.layers import Input, Dense
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.optimizers import Nadam
import numpy as np
ipt = Input(shape=(4,))
out = Dense(1, activation='sigmoid')(ipt)
model = Model(ipt, out)
model.compile(optimizer=Nadam(lr=1e-4), loss='binary_crossentropy')
X = np.random.randn(32,4)
Y = np.random.randint(0,2,(32,1))
model.train_on_batch(X,Y)
WORKS: remove .python from the imports above.
What's the deal, and how do I fix it?
ADDITIONAL INFO:
CUDA 10.0.130, cuDNN 7.4.2, Python 3.7.4, Windows 10
tensorflow, tensorflow-gpu v2.0.0, and Keras 2.3.0 via pip, all else via Anaconda 3
Per DEBUG 1, I note that pip installs the r2.0 branch rather than master; manually overwriting the local tensorflow_core.python folder with master's breaks everything - but doing so for a select few files doesn't, though the error persists
DEBUG 1: files difference
This holds for my local installation, rather than TF's GitHub branches master or r2.0; the GitHub files lack api/_v2 for some reason:
from tensorflow import keras
print(keras.__file__)
from tensorflow.python import keras
print(keras.__file__)
[1] D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\api\_v2\keras\__init__.py
[2] D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\__init__.py
Looking into each __init__ for Optimizer:
# [1]
from tensorflow.python.keras.optimizer_v2.optimizer_v2 import OptimizerV2 as Optimizer
# [2]
from tensorflow.python.keras import optimizers
# in python.keras.optimizers.py:
# all imports are from tensorflow.python
class Optimizer(object): # <--- does NOT use optimizer_v2 for Optimizer
This appears to root the problem, as below works:
from tensorflow.python.keras.layers import Input, Dense
from tensorflow.python.keras.models import Model
from tensorflow.keras.optimizers import Nadam
This is strange, however, as the direct import keras doesn't use optimizer_v2 either, though the definition of Optimizer in keras.optimizers does differ.
DEBUG 2: execution difference
Debugging side-by-side, while both use the same training.py, execution diverges fairly quickly:
### TF.KERAS
if self._experimental_run_tf_function: # TRUE
### TF.PYTHON.KERAS
if self._experimental_run_tf_function: # FALSE
The former proceeds to call training_v2_utils.train_on_batch(...) and returns thereafter; the latter calls self._standardize_user_data(...) and others before ultimately failing.
DEBUG 3 (+ solution?): the fail-line
if None in grads: # <-- in traceback
Inserting print(None in grads) right above it yields the exact same error - thus, it appears related to TF2 iterable ops -- this works:
if any([g is None for g in grads]): # <-- works; similar but not equivalent Python logic
Unsure yet if it's a complete fix, still debugging -- update: started a GitHub pull request
Full error trace:
File "<ipython-input-1-2db039c052cf>", line 20, in <module>
model.train_on_batch(X,Y)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 1017, in train_on_batch
self._make_train_function()
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2116, in _make_train_function
params=self._collected_trainable_weights, loss=self.total_loss)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\optimizers.py", line 653, in get_updates
grads = self.get_gradients(loss, params)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\optimizers.py", line 92, in get_gradients
if None in grads:
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 1336, in tensor_equals
return gen_math_ops.equal(self, other)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 3626, in equal
name=name)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 545, in _apply_op_helper
(input_name, err))
ValueError: Tried to convert 'y' to a tensor and failed. Error: None values not supported.
It was a bug, and my pull request fix was approved (but isn't yet merged). In the meantime, you can make the change manually, as here. Also, tf.python.keras isn't always meant to be used, if at all.
UPDATE: the pull request is now merged.
Why it works: None in grads is the same as any(g == None for g in grads); the problem is, g may be a tf.Tensor/tf.Variable whose .__eq__ is defined to operate only on tensors, so is None must be used instead.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import numpy as np
ipt = Input((16,))
out = Dense(16)(ipt)
model = Model(ipt, out)
model.compile('adam', 'mse')
x = y = np.random.randn(32, 16)
model.train_on_batch(x, y)
W = model.optimizer.weights
W[0] == None
>>> ValueError: Attempt to convert a value (None) with an unsupported type
(<class 'NoneType'>) to a Tensor.
Checking source code:
from inspect import getsource
print(getsource(W[0].__eq__))
def __eq__(self, other):
    """Compares two variables element-wise for equality."""
    if ops.Tensor._USE_EQUALITY and ops.executing_eagerly_outside_functions():
        return gen_math_ops.equal(self, other, incompatible_shape_error=False)
    else:
        # In legacy graph mode, tensor equality is object equality
        return self is other
You should probably correct your imports:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Nadam

Issue with converting tensorflow model to Intel Movidius graph

Hello, I ran into a problem when trying to use the Intel Movidius Neural Compute Stick with TensorFlow. I have a Keras model, which I convert to a TensorFlow model. When I convert that to a Movidius graph, I get this error:
Traceback (most recent call last):
File "/usr/local/bin/mvNCCompile", line 118, in
create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
File "/usr/local/bin/mvNCCompile", line 104, in create_graph
net = parse_tensor(args, myriad_config)
File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 290, in parse_tensor
if have_first_input(strip_tensor_id(node.outputs[0].name)):
IndexError: list index out of range
Here is my code:
from keras.models import model_from_json
from keras.models import load_model
from keras import backend as K
import tensorflow as tf
import nn
import os
weights_file = "weights.h5"
sess = K.get_session()
K.set_learning_phase(0)
model = nn.alexnet_model() # get keras model
model.load_weights(weights_file)
saver = tf.train.Saver()
saver.save(sess, "./TF_Model/tf_model") # convert keras to tensorflow model
tf_model_path = "./TF_Model/tf_model"
fw = tf.summary.FileWriter('logs', sess.graph)
fw.close()
os.system('mvNCCompile TF_Model/tf_model.meta -in=conv2d_1_input -on=activation_7/Softmax') # get Movidius graph
Python version: 2.7
OS: Ubuntu 16.04
TensorFlow version: 1.12
As far as I know, the NCSDK compiler does not resolve every part of a normal TensorFlow network, so you have to modify the network and re-save it in an NCS-friendly way in order to successfully make a Movidius graph.
For more information about how to modify a TensorFlow network, have a look at the official guidance.
Hope it will help you.
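As a rough illustration of what "NCS-friendly" can mean for a Keras model - this is a sketch under two assumptions: that the compiler copes better with an inference-only graph whose variables are folded into constants, and that your NCSDK version accepts frozen .pb files:
import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K
import nn

K.set_learning_phase(0)            # build the graph in inference mode
model = nn.alexnet_model()
model.load_weights("weights.h5")

sess = K.get_session()
# Fold variables into constants, keeping only the subgraph that feeds the
# softmax output named in the mvNCCompile call.
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ["activation_7/Softmax"])
with tf.gfile.GFile("tf_model_frozen.pb", "wb") as f:
    f.write(frozen.SerializeToString())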
