Issue with converting tensorflow model to Intel Movidius graph - python

Hello, I ran into a problem when trying to use the Intel Movidius Neural Compute Stick with TensorFlow. I have a Keras model and I convert it to a TensorFlow model. When I convert it to a Movidius graph, I get this error:
Traceback (most recent call last):
File "/usr/local/bin/mvNCCompile", line 118, in
create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
File "/usr/local/bin/mvNCCompile", line 104, in create_graph
net = parse_tensor(args, myriad_config)
File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 290, in parse_tensor
if have_first_input(strip_tensor_id(node.outputs[0].name)):
IndexError: list index out of range
Here is my code:
from keras.models import model_from_json
from keras.models import load_model
from keras import backend as K
import tensorflow as tf
import nn
import os
weights_file = "weights.h5"
sess = K.get_session()
K.set_learning_phase(0)
model = nn.alexnet_model() # get keras model
model.load_weights(weights_file)
saver = tf.train.Saver()
saver.save(sess, "./TF_Model/tf_model") # save the Keras session as a TensorFlow checkpoint
tf_model_path = "./TF_Model/tf_model"
fw = tf.summary.FileWriter('logs', sess.graph)
fw.close()
os.system('mvNCCompile TF_Model/tf_model.meta -in=conv2d_1_input -on=activation_7/Softmax') # get Movidius graph
Python version: 2.7
OS: Ubuntu 16.04
Tensorflow version: 1.12

As far as I know, the NCSDK compiler does not resolve every part of a normal TensorFlow network, so you have to modify the network and re-save it in an NCS-friendly way in order to successfully make a Movidius graph.
For more information about how to modify a TensorFlow network, have a look at the official guidance.
Hope this helps.
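To make that concrete, here is a minimal sketch of an NCS-friendly re-save, reusing the names from the question's code. The key point is that K.set_learning_phase(0) must run before the model is built, so no training-only ops (dropout, batch-norm update ops) end up in the saved graph:
from keras import backend as K
import tensorflow as tf
import nn

K.set_learning_phase(0)            # must come before model construction
model = nn.alexnet_model()         # build the Keras model in inference mode
model.load_weights("weights.h5")

sess = K.get_session()
saver = tf.train.Saver()
saver.save(sess, "./TF_Model/tf_model")   # checkpoint now contains only inference ops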

Related

AttributeError: 'Sequential' object has no attribute '_get_distribution_strategy'

I am following an online course through LinkedIn regarding building models with Keras.
This is my code (which the course claims works):
import pandas as pd
import keras
from keras.models import Sequential
from keras.layers import *
training_data_df = pd.read_csv("sales_data_training_scaled.csv")
X = training_data_df.drop('total_earnings', axis=1).values
Y = training_data_df[['total_earnings']].values
# Define the model
model = Sequential()
model.add(Dense(50, input_dim=9, activation='relu', name='layer_1'))
model.add(Dense(100, activation='relu', name='layer_2'))
model.add(Dense(50, activation='relu', name='layer_3'))
model.add(Dense(1, activation='linear', name='output_layer'))
model.compile(loss='mean_squared_error', optimizer='adam')
# Create a TensorBoard logger
logger = keras.callbacks.TensorBoard(
    log_dir='logs',
    write_graph=True,
    histogram_freq=5
)
# Train the model
model.fit(
    X,
    Y,
    epochs=50,
    shuffle=True,
    verbose=2,
    callbacks=[logger]
)
# Load the separate test data set
test_data_df = pd.read_csv("sales_data_test_scaled.csv")
X_test = test_data_df.drop('total_earnings', axis=1).values
Y_test = test_data_df[['total_earnings']].values
test_error_rate = model.evaluate(X_test, Y_test, verbose=0)
print("The mean squared error (MSE) for the test data set is: {}".format(test_error_rate))
I get the following error when the code above is executed:
Using TensorFlow backend.
2020-01-16 13:58:14.024374: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-16 13:58:14.037202: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fc47b436390 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-16 13:58:14.037211: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
File "/Users/himsaragallage/Documents/Building_Deep_Learning_apps/06/model_logging final.py", line 35, in <module>
callbacks=[logger]
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/training.py", line 1239, in fit
validation_freq=validation_freq)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/training_arrays.py", line 119, in fit_loop
callbacks.set_model(callback_model)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/callbacks/callbacks.py", line 68, in set_model
callback.set_model(model)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/callbacks/tensorboard_v2.py", line 116, in set_model
super(TensorBoard, self).set_model(model)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/callbacks.py", line 1532, in set_model
self.log_dir, self.model._get_distribution_strategy()) # pylint: disable=protected-access
AttributeError: 'Sequential' object has no attribute '_get_distribution_strategy'
Process finished with exit code 1
While I was trying to debug, I found out that this error is caused by the TensorBoard logger; more precisely, by the line callbacks=[logger]. Without that line of code the program runs without any errors, but TensorBoard won't be used.
Please suggest a method by which I can eliminate the error and successfully run the above-mentioned Python script.
I assume you are referring to this LinkedIn Keras course. I faced the same error when I used TensorFlow version 2.1. However, after downgrading the TensorFlow version and making slight modifications to the code, I could invoke TensorBoard.
The working code is shown below:
import pandas as pd
import keras
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import *
training_data_df = pd.read_csv("sales_data_training_scaled.csv")
X = training_data_df.drop('total_earnings', axis=1).values
Y = training_data_df[['total_earnings']].values
# Define the model
model = Sequential()
model.add(Dense(50, input_dim=9, activation='relu', name='layer_1'))
model.add(Dense(100, activation='relu', name='layer_2'))
model.add(Dense(50, activation='relu', name='layer_3'))
model.add(Dense(1, activation='linear', name='output_layer'))
model.compile(loss='mean_squared_error', optimizer='adam')
# Create a TensorBoard logger
logger = tf.keras.callbacks.TensorBoard(
    log_dir='logs',
    write_graph=True,
    histogram_freq=5
)
# Train the model
model.fit(
    X,
    Y,
    epochs=50,
    shuffle=True,
    verbose=2,
    callbacks=[logger]
)
# Load the separate test data set
test_data_df = pd.read_csv("sales_data_test_scaled.csv")
X_test = test_data_df.drop('total_earnings', axis=1).values
Y_test = test_data_df[['total_earnings']].values
test_error_rate = model.evaluate(X_test, Y_test, verbose=0)
print("The mean squared error (MSE) for the test data set is: {}".format(test_error_rate))
You may find this post useful.
So instead of importing from keras, i.e.
from keras.models import Sequential
import from tensorflow:
from tensorflow.keras.models import Sequential
And this of course applies to most other imports as well.
This is just a lucky guess because I can't run your code, but hope it helps!
I would recommend not mixing keras and tf.keras. Those are different projects: keras is the original, multi-backend project, and tf.keras is the version integrated into tensorflow. Keras will stop supporting backends other than tensorflow, so it's advised to switch to it. Check https://keras.io/#multi-backend-keras-and-tfkeras
An easy way to do that is importing keras from tensorflow:
import sys
import tensorflow as tf
import tensorflow.keras as keras
# import keras  # the standalone package, replaced by tf.keras above
import tensorflow.keras.backend as K
from tensorflow.keras.models import Model, Sequential, load_model
from tensorflow.keras.layers import Dense, Embedding, Dropout, Input, Concatenate
print("Python: "+str(sys.version))
print("Tensorflow version: "+tf.__version__)
print("Keras version: "+keras.__version__)
Python: 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0]
Tensorflow version: 2.1.0
Keras version: 2.2.4-tf
It seems that your Python environment is mixing imports from keras and tensorflow.keras. Try using the Sequential module like this:
model = tensorflow.keras.Sequential()
I had the same error; the problem was the TensorFlow version.
!pip install tensorflow==2.3.0
fixed it.
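If you want to confirm which version you ended up with before re-running the script, a quick sanity check (standard API):
import tensorflow as tf
print(tf.__version__)  # should print 2.3.0 after the reinstall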

saved_model from AutoML Vision Edge not loading properly

I've been using AutoML Vision Edge for some image classification tasks, with great results when exporting the models in TFLite format. However, I just tried exporting the saved_model.pb file and running it with TensorFlow 2.0, and I seem to be running into some issues.
Code snippet:
import numpy as np
import tensorflow as tf
import cv2
from tensorflow import keras
my_model = tf.keras.models.load_model('saved_model')
print(my_model)
print(my_model.summary())
'saved_model' is the directory containing my downloaded saved_model.pb file. Here's what I'm seeing:
2019-10-18 23:29:08.801647: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-18 23:29:08.829017: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ffc2d717510 executing computations on platform Host. Devices:
2019-10-18 23:29:08.829038: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
File "classify_in_out_tf2.py", line 81, in
print(my_model.summary())
AttributeError: 'AutoTrackable' object has no attribute 'summary'
I'm not sure if it's an issue with how I'm exporting the model, or with my code to load the model, or if these models aren't compatible with Tensorflow 2.0, or some combination.
Any help would be greatly appreciated!
I've got my saved_model.pb working outside of the Docker container (for object detection, not classification, but they should be similar; change the outputs, and maybe the inputs, for TF 1.14). Here is how:
tensorflow 1.14.0:
image encoded as bytes
import cv2
import tensorflow as tf
img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'directory_of_saved_model')
    graph = tf.get_default_graph()
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'encoded_image_string_tensor:0': inp})
image as numpy array
import cv2
import tensorflow as tf
import numpy as np
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'directory_of_saved_model')
    graph = tf.get_default_graph()
    # Read and preprocess an image.
    img = cv2.imread(filepath)
    # Run the model
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'map/TensorArrayStack/TensorArrayGatherV3:0': img[np.newaxis, :, :, :]})
I used netron to find my input.
tensorflow 2.0:
import cv2
import tensorflow as tf
img = cv2.imread('path_to_image_file')
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:,0].tobytes()]
loaded = tf.saved_model.load(export_dir='directory_of_saved_model')
infer = loaded.signatures["serving_default"]
out = infer(key=tf.constant('something_unique'), image_bytes=tf.constant(inp))
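If you are unsure what the signature actually returns, you can inspect it before hard-coding any key names; this small sketch only uses the infer and out objects from the code above, since the exact output keys depend on the exported model:
print(infer.structured_outputs)       # declared output names and specs
for name, tensor in out.items():
    print(name, tensor.numpy())       # e.g. scores/labels for classification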

ValueError: Tried to convert 'y' to a tensor and failed. Error: None values not supported

DOESN'T WORK:
from tensorflow.python.keras.layers import Input, Dense
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.optimizers import Nadam
import numpy as np
ipt = Input(shape=(4,))
out = Dense(1, activation='sigmoid')(ipt)
model = Model(ipt, out)
model.compile(optimizer=Nadam(lr=1e-4), loss='binary_crossentropy')
X = np.random.randn(32,4)
Y = np.random.randint(0,2,(32,1))
model.train_on_batch(X,Y)
WORKS: remove .python from the imports above.
What's the deal, and how do I fix it?
ADDITIONAL INFO:
CUDA 10.0.130, cuDNN 7.4.2, Python 3.7.4, Windows 10
tensorflow, tensorflow-gpu v2.0.0, and Keras 2.3.0 via pip, all else via Anaconda 3
Per DEBUG 1, I note pip installs the r2.0 branch rather than master; manually overwriting local tensorflow_core.python folder with master's breaks everything - but doing so for a select-few files doesn't, though error persists
DEBUG 1: files difference
This holds for my local installation, rather than TF's Github branches master or r2.0; Github files lack api/_v2 for some reason:
from tensorflow import keras
print(keras.__file__)
from tensorflow.python import keras
print(keras.__file__)
[1] D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\api\_v2\keras\__init__.py
[2] D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\__init__.py
Looking into each __init__ for Optimizer:
# [1]
from tensorflow.python.keras.optimizer_v2.optimizer_v2 import OptimizerV2 as Optimizer
# [2]
from tensorflow.python.keras import optimizers
# in python.keras.optimizers.py:
# all imports are from tensorflow.python
class Optimizer(object): # <--- does NOT use optimizer_v2 for Optimizer
This appears to be the root of the problem, as the following works:
from tensorflow.python.keras.layers import Input, Dense
from tensorflow.python.keras.models import Model
from tensorflow.keras.optimizers import Nadam
This is strange, however, as the direct import keras doesn't use optimizer_v2 either, though the definition of Optimizer in keras.optimizers does differ.
DEBUG 2: execution difference
Debugging side-by-side, while both use the same training.py, execution diverges fairly quickly:
### TF.KERAS
if self._experimental_run_tf_function: # TRUE
### TF.PYTHON.KERAS
if self._experimental_run_tf_function: # FALSE
The former proceeds to call training_v2_utils.train_on_batch(...) and returns thereafter; the latter calls self._standardize_user_data(...) and others before ultimately failing.
DEBUG 3 (+ solution?): the fail-line
if None in grads: # <-- in traceback
Inserting print(None in grads) right above it yields the exact same error - thus, it appears related to TF2 iterable ops -- this works:
if any([g is None for g in grads]): # <-- works; similar but not equivalent Python logic
Unsure yet if it's a complete fix, still debugging -- update: started a Github Pull Request
Full error trace:
File "<ipython-input-1-2db039c052cf>", line 20, in <module>
model.train_on_batch(X,Y)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 1017, in train_on_batch
self._make_train_function()
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2116, in _make_train_function
params=self._collected_trainable_weights, loss=self.total_loss)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\optimizers.py", line 653, in get_updates
grads = self.get_gradients(loss, params)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\optimizers.py", line 92, in get_gradients
if None in grads:
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 1336, in tensor_equals
return gen_math_ops.equal(self, other)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 3626, in equal
name=name)
File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 545, in _apply_op_helper
(input_name, err))
ValueError: Tried to convert 'y' to a tensor and failed. Error: None values not supported.
It was a bug, and my pull request fix was approved (but wasn't merged yet at the time of writing). In the meantime, you can make the change manually, as here. Also, tf.python.keras isn't really meant to be used directly, if at all.
UPDATE: the pull request is now merged.
Why it works: None in grads is the same as any(g == None for g in grads); the problem is that g may be a tf.Tensor or tf.Variable, whose __eq__ is defined to operate on tensors, so is None must be used instead.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import numpy as np
ipt = Input((16,))
out = Dense(16)(ipt)
model = Model(ipt, out)
model.compile('adam', 'mse')
x = y = np.random.randn(32, 16)
model.train_on_batch(x, y)
W = model.optimizer.weights
W[0] == None
>>> ValueError: Attempt to convert a value (None) with an unsupported type
(<class 'NoneType'>) to a Tensor.
Checking source code:
from inspect import getsource
print(getsource(W[0].__eq__))
def __eq__(self, other):
    """Compares two variables element-wise for equality."""
    if ops.Tensor._USE_EQUALITY and ops.executing_eagerly_outside_functions():
        return gen_math_ops.equal(self, other, incompatible_shape_error=False)
    else:
        # In legacy graph mode, tensor equality is object equality
        return self is other
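Given that __eq__ dispatches to a tensor op in eager mode, the safe membership test is identity comparison, which is exactly what the merged fix does:
print(any(w is None for w in W))  # False; no tensor op is built, no error raised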
You should probably correct your imports:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Nadam

Error converting Facenet model .pb file to TFLITE format

I'm trying to convert a pre-trained frozen .pb based on Inception ResNet, which I got from David Sandberg's GitHub, with the TensorFlow Lite converter on Ubuntu, using the following command:
/home/nils/.local/bin/tflite_convert
--output_file=/home/nils/Documents/frozen.tflite
--graph_def_file=/home/nils/Documents/20180402-114759/20180402-114759.pb
--input_arrays=input
--output_arrays=embeddings
--input_shapes=1,160,160,3
However, I get the following error:
2018-12-03 15:03:16.807431: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Traceback (most recent call last):
File "/home/nils/.local/bin/tflite_convert", line 11, in <module>
sys.exit(main())
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
_convert_model(tflite_flags)
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 162, in _convert_model
output_data = converter.convert()
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py", line 453, in convert
**converter_kwargs)
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 342, in toco_convert_impl
input_data.SerializeToString())
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
(stdout, stderr))
RuntimeError: TOCO failed see console for info.
b'2018-12-03 15:03:26.006252: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: FIFOQueueV2\n2018-12-03 15:03:26.006322: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute: batch_join/fifo_queue\n2018-12-03 15:03:26.006339: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: QueueDequeueUpToV2\n2018-12-03 15:03:26.006352: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute: batch_join\n2018-12-03 15:03:27.496676: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 5601 operators, 9399 arrays (0 quantized)\n2018-12-03 15:03:28.603936: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 3578 operators, 6254 arrays (0 quantized)\n2018-12-03 15:03:29.418074: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 3578 operators, 6254 arrays (0 quantized)\n2018-12-03 15:03:29.420354: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:42]
Check failed: IsConstantParameterArray(*model, bn_op->inputs[1]) && IsConstantParameterArray(*model, bn_op->inputs[2]) && IsConstantParameterArray(*model, bn_op->inputs[3]) Batch normalization resolution requires that mean, multiplier and offset arrays be constant.\nAborted (core dumped)\n'
None
If I understand this correctly, it might be because of two unsupported ops, QueueDequeueUpToV2 and FIFOQueueV2, but I don't know for sure.
Do you have any idea what the problem might be, or how I can solve this error? What does that error even mean? I want this model to run on a mobile Android device; are there any alternatives?
Versions:
Tensorflow V1.12
Python 3.6.7
Ubuntu 18.04.1 LTS
on a VirtualBox
Thanks in advance!
I have solved this problem here, and I'm adding the snippet here too:
I was able to convert the FaceNet .pb to a .tflite model; the following are the instructions to do so:
We will quantize the pre-trained FaceNet model with 512 embedding size. This model is about 95 MB in size before quantization.
$ ls -l model_pc
total 461248
-rw-rw-r--# 1 milinddeore staff 95745767 Apr 9 2018 20180402-114759.pb
Create a file inference_graph.py with the following code:
import tensorflow as tf
from src.models import inception_resnet_v1
import sys
import click
from pathlib import Path
@click.command()
@click.argument('training_checkpoint_dir', type=click.Path(exists=True, file_okay=False, resolve_path=True))
@click.argument('eval_checkpoint_dir', type=click.Path(exists=True, file_okay=False, resolve_path=True))
def main(training_checkpoint_dir, eval_checkpoint_dir):
    training_checkpoint = Path(training_checkpoint_dir) / "model-20180402-114759.ckpt-275"
    eval_checkpoint = Path(eval_checkpoint_dir) / "imagenet_facenet.ckpt"
    data_input = tf.placeholder(name='input', dtype=tf.float32, shape=[None, 160, 160, 3])
    output, _ = inception_resnet_v1.inference(data_input, keep_probability=0.8, phase_train=False, bottleneck_layer_size=512)
    label_batch = tf.identity(output, name='label_batch')
    embeddings = tf.identity(output, name='embeddings')
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        saver = tf.train.Saver()
        saver.restore(sess, training_checkpoint.as_posix())
        save_path = saver.save(sess, eval_checkpoint.as_posix())
        print("Model saved in file: %s" % save_path)

if __name__ == "__main__":
    main()
Running this file on the pre-trained model generates a model for inference. Download the pre-trained model and unzip it to the model_pre_trained/ directory.
Make sure you have Python version ≥ 3.4.
python3 inference_graph.py model_pre_trained/ model_inference/
FaceNet provides a freeze_graph.py file, which we will use to freeze the inference model.
python3 src/freeze_graph.py model_inference/ my_facenet.pb
Once the frozen model is generated, it's time to convert it to .tflite:
$ tflite_convert --output_file model_mobile/my_facenet.tflite --graph_def_file my_facenet.pb --input_arrays "input" --input_shapes "1,160,160,3" --output_arrays embeddings --output_format TFLITE --mean_values 128 --std_dev_values 128 --default_ranges_min 0 --default_ranges_max 6 --inference_type QUANTIZED_UINT8 --inference_input_type QUANTIZED_UINT8
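A note on those flags, in case you preprocess inputs on the client side: with mean_values 128 and std_dev_values 128, the converter interprets a uint8 input q as the real value (q - 128) / 128, i.e. raw pixels map to roughly [-1, 1]. A small sketch of that mapping (my own illustration, not part of the original instructions):
import numpy as np

q = np.array([0, 128, 255], dtype=np.uint8)
real = (q.astype(np.float32) - 128) / 128   # -> [-1.0, 0.0, ~0.99]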
Let us check the quantized model size:
$ ls -l model_mobile/
total 47232
-rw-r--r--# 1 milinddeore staff 23667888 Feb 25 13:39 my_facenet.tflite
Interpreter code:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="/Users/milinddeore/facenet/model_mobile/my_facenet.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print('INPUTS: ')
print(input_details)
print('OUTPUTS: ')
print(output_details)
Interpreter output:
$ python inout.py
INPUTS:
[{'index': 451, 'shape': array([ 1, 160, 160, 3], dtype=int32), 'quantization': (0.0078125, 128L), 'name': 'input', 'dtype': <type 'numpy.uint8'>}]
OUTPUTS:
[{'index': 450, 'shape': array([ 1, 512], dtype=int32), 'quantization': (0.0235294122248888, 0L), 'name': 'embeddings', 'dtype': <type 'numpy.uint8'>}]
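Since the outputs are uint8 as well, downstream code usually needs float embeddings back. The quantization tuple above is (scale, zero_point), so dequantizing is one line, reusing the variables from the interpreter code:
scale, zero_point = output_details[0]['quantization']
embeddings_f32 = scale * (output_data.astype(np.float32) - zero_point)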
Hope this helps!
I had no luck with @milind-deore's suggestions.
The model does reduce to 23 MB, but the embeddings seem to be broken.
I found an alternative way: TF -> Keras -> TF Lite
David Sandberg's FaceNet implementation can be converted to TensorFlow Lite, first converting from TensorFlow to Keras, and then from Keras to TensorFlow Lite.
I created this Google Colab that does the conversion.
Most of the code was taken from here.
What it does is as follows:
Downloads Hiroki Taniai's Keras FaceNet implementation
Overrides the inception_resnet_v1.py file with my patched version (which adds an extra layer to the model so it outputs normalized embeddings)
Downloads Sandberg's pre-trained model (20180402-114759) from here and unzips it
Extracts the tensors from the checkpoint file and writes the weights to numpy arrays on disk, mapping the name of each corresponding layer
Creates a new Keras model with random weights (important: using 512 classes)
Writes the weights for each corresponding layer, reading from the numpy arrays
Stores the model in the Keras format .h5
Converts Keras to TensorFlow Lite using the command "tflite_convert"
tflite_convert --post_training_quantize --output_file facenet.tflite --keras_model_file /content/keras-facenet/model/keras/model/facenet_keras.h5
Also in my Colab I provide some code to show that the conversion is good, and the TFLite model does work.
distance bill vs bill 0.7266881
distance bill vs larry 1.2134411
So even though I'm not aligning the faces, a threshold of about 1.2 would be good for recognition.
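For completeness, here is a minimal sketch of the verification step implied by those numbers, assuming emb1 and emb2 are embeddings produced by the TFLite model above:
import numpy as np

def is_same_person(emb1, emb2, threshold=1.2):
    # Euclidean distance between embeddings; below the threshold = same identity
    return np.linalg.norm(emb1 - emb2) < threshold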
Hope it helps!

Failed precondition error when using Tensorflow serving to serve pretrained keras xception model

This is the code that I am using to export the Keras model into TensorFlow Serving format. The exported model loads up successfully in TensorFlow Serving (without any warnings or errors), but when I use my client to make a request to the server, I get a FailedPrecondition error:
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Attempting to use uninitialized value block11_sepconv2_bn/moving_mean
import sys
import os
import tensorflow as tf
from keras import backend as K
from keras.models import Model
from keras.models import load_model
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
config = tf.ConfigProto( device_count = {'GPU': 2 , 'CPU': 12} )
sess = tf.Session(config=config)
K.set_session(sess)
K._LEARNING_PHASE = tf.constant(0)
K.set_learning_phase(0)
xception = load_model('models/xception/model.h5')
config = xception.get_config()
weights = xception.get_weights()
new_xception = Model.from_config(config)
new_xception.set_weights(weights)
export_path = 'prod_models/2'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'images': new_xception.input},
                                  outputs={'scores': new_xception.output})
with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()
Package versions
Python 3.6.3
tensorflow-gpu 1.8.0
Keras 2.1.5
CUDA 9.0.176
I tried to replicate your problem with the following model as I do not have access to the model file that you are using:
from keras.applications.xception import Xception
new_xception = Xception()
I can make requests to this model without issues (python 3.6.4, tf 1.8.0, keras 2.2.0). What version of TF-serving are you using?
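In case it helps with debugging, here is a minimal gRPC client sketch I would use to test such an export; the model name ('xception'), port, and signature name ('predict') are assumptions that must match your serving config and the signature_def_map in the export code:
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'xception'           # assumed serving model name
request.model_spec.signature_name = 'predict'  # matches signature_def_map above
dummy = np.zeros((1, 299, 299, 3), dtype=np.float32)  # Xception's default input size
request.inputs['images'].CopyFrom(tf.make_tensor_proto(dummy))

response = stub.Predict(request, timeout=10.0)
print(response.outputs['scores'])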
