Error converting tensorflow .pb model to .mlmodel - python

I am trying to convert a TensorFlow graph (.pb file) into a .mlmodel:
import tfcoreml
coreml_model = tfcoreml.convert(tf_model_path='optimized_model.pb',
                                mlmodel_path='FaceImages.mlmodel',
                                output_feature_names=['final_result'],
                                input_name_shape_dict={'ResizeBilinear': {'images': None, 'size': {None, None}}},
                                minimum_ios_deployment_target='13')
but I am getting the following error:
/usr/local/lib/python3.6/dist-packages/coremltools/converters/nnssa/frontend/tensorflow/graphdef_to_ssa.py
in load_tf_graph(graph_file)
21 with tf.io.gfile.GFile(graph_file, "rb") as f:
22 graph_def = tf.compat.v1.GraphDef()
---> 23 graph_def.ParseFromString(f.read())
24
25 # Then, we import the graph_def into a new Graph and returns it
DecodeError: Error parsing message
Could anybody help with this, please?
Here is the Colab project where I have attached the TensorFlow model and the associated code for conversion:
https://colab.research.google.com/drive/1S7nf7pnX15UuswFZaTih5pHhfDFwG5Xa

Have you checked that the version of TensorFlow you are using is compatible with those libraries? This is just a guess, but you could try running
!pip install tensorflow --upgrade
at the top of the notebook to see if it resolves the issue.
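Separately, the DecodeError itself means protobuf could not parse the file at all, which usually points at the .pb not being a valid frozen GraphDef (or a corrupted download) rather than at tfcoreml. A minimal sanity check, reusing the file name from the question:
import tensorflow as tf

# Sketch: if ParseFromString raises DecodeError here too, the .pb file itself
# is the problem (wrong format or corrupted), not the tfcoreml conversion.
print(tf.__version__)
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('optimized_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print(len(graph_def.node), 'nodes parsed OK')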

Related

How to upgrade a TF1 GAN notebook on Colab to TF2? It doesn't work because Colab isn't supporting TF1 anymore

I was trying to run this notebook on Colab,
https://colab.research.google.com/github/https-deeplearning-ai/GANs-Public/blob/master/C1W1_(Colab)_Inputs_to_a_pre_trained_GAN.ipynb ,
but first I got this:
ValueError: Tensorflow 1 is unsupported in Colab.
Then I upgraded it using this script:
import tensorflow as tf
!tf_upgrade_v2 \
  --intree stylegan/ \
  --inplace
and I commented out these lines:
%tensorflow_version 1.x
tflib.init_tf()
but then I got this one and couldn't solve it:
AttributeError: Can't get attribute 'Network' on <module 'dnnlib.tflib.network' from '/content/stylegan/dnnlib/tflib/network.py'>
Can somebody help? Here is the notebook code:
# Clone the official StyleGAN repository from GitHub
!git clone https://github.com/NVlabs/stylegan.git
%tensorflow_version 1.x
import os
import pickle
import numpy as np
import PIL.Image
import stylegan
from stylegan import config
from stylegan.dnnlib import tflib
from tensorflow.python.util import module_wrapper
module_wrapper._PER_MODULE_WARNING_LIMIT = 0
# Initialize TensorFlow
tflib.init_tf()
# Go into that cloned directory
path = 'stylegan/'
if "stylegan" not in os.getcwd():
    os.chdir(path)
# Load pre-trained network
# url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ'  # Downloads the pickled model file: karras2019stylegan-ffhq-1024x1024.pkl
url = 'https://bitbucket.org/ezelikman/gans/downloads/karras2019stylegan-ffhq-1024x1024.pkl'
with stylegan.dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f:
    print(f)
    _G, _D, Gs = pickle.load(f)
# Gs.print_layers()  # Print network details
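One way to narrow the AttributeError down (a diagnostic sketch, not a confirmed fix): pickle resolves the Network class by looking it up on the dnnlib.tflib.network module, so check whether the class is still defined there after the tf_upgrade_v2 rewrite. If it is missing, the in-place upgrade of network.py is the likely culprit, and restoring the original file (git checkout -- dnnlib/tflib/network.py) is worth trying.
# Diagnostic: list the public names in the module pickle is resolving against;
# 'Network' should appear in this list for pickle.load to succeed.
import dnnlib.tflib.network as network
print([name for name in dir(network) if not name.startswith('_')])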

Error converting Facenet model .pb file to TFLITE format

I'm trying to convert a pre-trained frozen .pb based on Inception-ResNet that I got from David Sandberg's GitHub with the TensorFlow Lite Converter on Ubuntu, using the following command:
/home/nils/.local/bin/tflite_convert
--output_file=/home/nils/Documents/frozen.tflite
--graph_def_file=/home/nils/Documents/20180402-114759/20180402-114759.pb
--input_arrays=input
--output_arrays=embeddings
--input_shapes=1,160,160,3
However, I get the following error:
2018-12-03 15:03:16.807431: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Traceback (most recent call last):
File "/home/nils/.local/bin/tflite_convert", line 11, in <module>
sys.exit(main())
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
_convert_model(tflite_flags)
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 162, in _convert_model
output_data = converter.convert()
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py", line 453, in convert
**converter_kwargs)
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 342, in toco_convert_impl
input_data.SerializeToString())
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
(stdout, stderr))
RuntimeError: TOCO failed see console for info.
b'2018-12-03 15:03:26.006252: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: FIFOQueueV2
2018-12-03 15:03:26.006322: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute: batch_join/fifo_queue
2018-12-03 15:03:26.006339: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: QueueDequeueUpToV2
2018-12-03 15:03:26.006352: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute: batch_join
2018-12-03 15:03:27.496676: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 5601 operators, 9399 arrays (0 quantized)
2018-12-03 15:03:28.603936: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 3578 operators, 6254 arrays (0 quantized)
2018-12-03 15:03:29.418074: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 3578 operators, 6254 arrays (0 quantized)
2018-12-03 15:03:29.420354: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:42]
Check failed: IsConstantParameterArray(*model, bn_op->inputs[1]) && IsConstantParameterArray(*model, bn_op->inputs[2]) && IsConstantParameterArray(*model, bn_op->inputs[3]) Batch normalization resolution requires that mean, multiplier and offset arrays be constant.
Aborted (core dumped)'
None
If I understand this correctly, this might be because of two unsupported ops, QueueDequeueUpToV2 and FIFOQueueV2, but I don't know for sure.
Do you have any idea what the problem might be, or how I can solve this error? What does that error even mean? I want this model to run on a mobile Android device; are there any alternatives?
Versions:
Tensorflow V1.12
Python 3.6.7
Ubuntu 18.04.1 LTS
on a VirtualBox
Thanks in advance!
I have solved this problem here; adding the snippet here too:
I was able to convert the FaceNet .pb to a .tflite model, and the following are the instructions to do so:
We will quantize the pre-trained FaceNet model with a 512-dimensional embedding. The model is about 95 MB in size before quantization.
$ ls -l model_pc
total 461248
-rw-rw-r--@ 1 milinddeore staff 95745767 Apr 9 2018 20180402-114759.pb
Create a file inference_graph.py with the following code:
import tensorflow as tf
from src.models import inception_resnet_v1
import sys
import click
from pathlib import Path

@click.command()
@click.argument('training_checkpoint_dir', type=click.Path(exists=True, file_okay=False, resolve_path=True))
@click.argument('eval_checkpoint_dir', type=click.Path(exists=True, file_okay=False, resolve_path=True))
def main(training_checkpoint_dir, eval_checkpoint_dir):
    training_checkpoint = Path(training_checkpoint_dir) / "model-20180402-114759.ckpt-275"
    eval_checkpoint = Path(eval_checkpoint_dir) / "imagenet_facenet.ckpt"
    # Build the inference-only graph with a 512-d bottleneck (embedding) layer
    data_input = tf.placeholder(name='input', dtype=tf.float32, shape=[None, 160, 160, 3])
    output, _ = inception_resnet_v1.inference(data_input, keep_probability=0.8, phase_train=False, bottleneck_layer_size=512)
    label_batch = tf.identity(output, name='label_batch')
    embeddings = tf.identity(output, name='embeddings')
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        saver = tf.train.Saver()
        saver.restore(sess, training_checkpoint.as_posix())
        save_path = saver.save(sess, eval_checkpoint.as_posix())
        print("Model saved in file: %s" % save_path)

if __name__ == "__main__":
    main()
Running this file on the pre-trained model will generate a model for inference. Download the pre-trained model and unzip it to the model_pre_trained/ directory.
Make sure you have Python ≥ 3.4.
python3 inference_graph.py model_pre_trained/ model_inference/
FaceNet provides a freeze_graph.py file, which we will use to freeze the inference model.
python3 src/freeze_graph.py model_inference/ my_facenet.pb
Once the frozen model is generated, it is time to convert it to .tflite:
$ tflite_convert --output_file model_mobile/my_facenet.tflite \
      --graph_def_file my_facenet.pb \
      --input_arrays "input" \
      --input_shapes "1,160,160,3" \
      --output_arrays embeddings \
      --output_format TFLITE \
      --mean_values 128 \
      --std_dev_values 128 \
      --default_ranges_min 0 \
      --default_ranges_max 6 \
      --inference_type QUANTIZED_UINT8 \
      --inference_input_type QUANTIZED_UINT8
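For reference, the --mean_values and --std_dev_values flags follow the standard TOCO dequantization convention, real = (quantized - mean_value) / std_dev_value, so mean 128 and std 128 map uint8 inputs [0, 255] to roughly [-1.0, 1.0), matching FaceNet's prewhitened input range. A one-line sketch of that mapping:
def dequantize_input(q, mean=128.0, std=128.0):
    # real = (quantized - mean_value) / std_dev_value
    return (q - mean) / std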
Let us check the quantized model size:
$ ls -l model_mobile/
total 47232
-rw-r--r--@ 1 milinddeore staff 23667888 Feb 25 13:39 my_facenet.tflite
Interpreter code:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="/Users/milinddeore/facenet/model_mobile/my_facenet.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print('INPUTS: ')
print(input_details)
print('OUTPUTS: ')
print(output_details)
Interpreter output:
$ python inout.py
INPUTS:
[{'index': 451, 'shape': array([ 1, 160, 160, 3], dtype=int32), 'quantization': (0.0078125, 128L), 'name': 'input', 'dtype': <type 'numpy.uint8'>}]
OUTPUTS:
[{'index': 450, 'shape': array([ 1, 512], dtype=int32), 'quantization': (0.0235294122248888, 0L), 'name': 'embeddings', 'dtype': <type 'numpy.uint8'>}]
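The 'quantization' tuples in this output are (scale, zero_point), and the TFLite scheme is real = scale * (quantized - zero_point). For the input, scale 0.0078125 (= 1/128) with zero point 128 reproduces exactly the (q - 128) / 128 mapping chosen at conversion time. A small sketch for recovering float embeddings from the uint8 output:
import numpy as np

def dequantize(raw, scale, zero_point):
    # real = scale * (quantized - zero_point), per the TFLite quantization scheme
    return scale * (raw.astype(np.float32) - zero_point)

# e.g. for the embeddings above:
# emb = dequantize(output_data, 0.0235294122248888, 0)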
Hope this helps!
I had no luck with @milind-deore's suggestions.
The model does reduce to 23 MB, but the embeddings seem to be broken.
I found an alternative way: TF -> Keras -> TF Lite
David Sandberg's FaceNet implementation can be converted to TensorFlow Lite by first converting from TensorFlow to Keras, and then from Keras to TensorFlow Lite.
I created this Google Colab that does the conversion.
Most of the code was taken from here.
What it does is as follows:
Download Hiroki Taniai's Keras FaceNet implementation
Override the inception_resnet_v1.py file with my patched version (which adds an extra layer to the model to output normalized embeddings)
Download Sandberg's pre-trained model (20180402-114759) from here, and unzip it
Extract the tensors from the checkpoint file and write the weights to numpy arrays on disk, mapping the name of each corresponding layer (see the sketch after the conversion command below)
Create a new Keras model with random weights (important: using 512 classes)
Write the weights for each corresponding layer by reading from the numpy arrays
Store the model in the Keras .h5 format
Convert Keras to TensorFlow Lite using the "tflite_convert" command.
tflite_convert --post_training_quantize --output_file facenet.tflite --keras_model_file /content/keras-facenet/model/keras/model/facenet_keras.h5
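For reference, here is a minimal sketch of the checkpoint-extraction step mentioned above; the checkpoint path and file naming are my assumptions, not the Colab's actual code:
import numpy as np
import tensorflow as tf

# Read every tensor out of Sandberg's checkpoint and dump it to a .npy file
# keyed by its variable name, for later loading into the matching Keras layers.
reader = tf.train.NewCheckpointReader('model-20180402-114759.ckpt-275')
for name in reader.get_variable_to_shape_map():
    np.save(name.replace('/', '_') + '.npy', reader.get_tensor(name))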
Also in my Colab I provide some code to show that the conversion is good, and the TFLite model does work.
distance bill vs bill 0.7266881
distance bill vs larry 1.2134411
So even though I'm not aligning the faces, a threshold of about 1.2 should work well for recognition.
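For anyone reproducing the check, a minimal sketch of the distance computation, assuming emb1 and emb2 are the 512-d embeddings the converted TFLite model produced for two face crops:
import numpy as np

def embedding_distance(emb1, emb2):
    # Euclidean distance between two (already normalized) embeddings;
    # same person if distance < ~1.2, per the unaligned test above.
    return np.linalg.norm(emb1 - emb2)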
Hope it helps!

Issue with converting tensorflow model to Intel Movidius graph

Hello, I am facing a problem when trying to use the Intel Movidius Neural Compute Stick with TensorFlow. I have a Keras model which I convert to a TensorFlow model. When I convert it to a Movidius graph, I get this error:
Traceback (most recent call last):
File "/usr/local/bin/mvNCCompile", line 118, in
create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
File "/usr/local/bin/mvNCCompile", line 104, in create_graph
net = parse_tensor(args, myriad_config)
File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 290, in parse_tensor
if have_first_input(strip_tensor_id(node.outputs[0].name)):
IndexError: list index out of range
Here is my code:
from keras.models import model_from_json
from keras.models import load_model
from keras import backend as K
import tensorflow as tf
import nn
import os
weights_file = "weights.h5"
sess = K.get_session()
K.set_learning_phase(0)
model = nn.alexnet_model() # get keras model
model.load_weights(weights_file)
saver = tf.train.Saver()
saver.save(sess, "./TF_Model/tf_model") # convert keras to tensorflow model
tf_model_path = "./TF_Model/tf_model"
fw = tf.summary.FileWriter('logs', sess.graph)
fw.close()
os.system('mvNCCompile TF_Model/tf_model.meta -in=conv2d_1_input -on=activation_7/Softmax') # get Movidius graph
Python version: 2.7
OS: Ubuntu 16.04
Tensorflow version: 1.12
As far as I know, the NCSDK compiler does not resolve every part of a normal TensorFlow network, so you have to modify the network and re-save it in an NCS-friendly way in order to successfully make a Movidius graph.
For more information about how to modify a TensorFlow network, have a look at the official guidance.
Hope it will help you.
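One commonly suggested NCS-friendly pattern (an assumption based on that guidance, not a verified fix for this exact model) is to fix the Keras learning phase to inference mode before the model is built, so that no training-only ops end up in the saved graph:
import tensorflow as tf
from keras import backend as K
import nn

K.set_learning_phase(0)  # must come *before* building the model
model = nn.alexnet_model()
model.load_weights("weights.h5")

# Re-save the now inference-only graph for mvNCCompile
saver = tf.train.Saver()
saver.save(K.get_session(), "./TF_Model/tf_model")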

im2txt & TensorFlow 1.4.1

Has anyone here succeeded in running im2txt with TensorFlow 1.4.1?
I'm using this model (https://drive.google.com/file/d/0B_qCJ40uBfjEWVItOTdyNUFOMzg/view) and I get this error:
2018-01-04 00:46:59.268582: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key lstm/basic_lstm_cell/kernel not found in checkpoint
Then I tried the following script to convert the model. The script generated checkpoint, .meta, .data, and .index files.
OLD_CHECKPOINT_FILE = "/tmp/my_checkpoint/model.ckpt-3000000"
NEW_CHECKPOINT_FILE = "/tmp/my_converted_checkpoint/model.ckpt-3000000"

import tensorflow as tf
vars_to_rename = {
    "lstm/BasicLSTMCell/Linear/Matrix": "lstm/basic_lstm_cell/weights",
    "lstm/BasicLSTMCell/Linear/Bias": "lstm/basic_lstm_cell/biases",
}
new_checkpoint_vars = {}
reader = tf.train.NewCheckpointReader(OLD_CHECKPOINT_FILE)
for old_name in reader.get_variable_to_shape_map():
    if old_name in vars_to_rename:
        new_name = vars_to_rename[old_name]
    else:
        new_name = old_name
    new_checkpoint_vars[new_name] = tf.Variable(reader.get_tensor(old_name))

init = tf.global_variables_initializer()
saver = tf.train.Saver(new_checkpoint_vars)
with tf.Session() as sess:
    sess.run(init)
    print("save checkpoint")
    saver.save(sess, NEW_CHECKPOINT_FILE)
Could anyone tell me how I can use those files to run im2txt with TensorFlow 1.4.1? (Actually, I could run im2txt with TensorFlow 0.12.1.)
Env:
Python 3.5.2
Mac OS X version 10.12.6
TensorFlow 1.4.1
Thanks for your help.
I got the same error with the checkpoint file, with TF 1.4.1 and Python 3.5 on macOS 10.13.
Reason: the downloaded checkpoint file was generated using an old version of TensorFlow (Python 2), and the word_count.txt file format differs.
Answers from https://github.com/KranthiGV/Pretrained-Show-and-Tell-model
Changes:
1. Generate a checkpoint file which can be loaded by TF 1.4.1:
OLD_CHECKPOINT_FILE = "model.ckpt-1000000"
NEW_CHECKPOINT_FILE = "model2.ckpt-1000000"

import tensorflow as tf
vars_to_rename = {
    "lstm/basic_lstm_cell/weights": "lstm/basic_lstm_cell/kernel",
    "lstm/basic_lstm_cell/biases": "lstm/basic_lstm_cell/bias",
}
new_checkpoint_vars = {}
reader = tf.train.NewCheckpointReader(OLD_CHECKPOINT_FILE)
for old_name in reader.get_variable_to_shape_map():
    if old_name in vars_to_rename:
        new_name = vars_to_rename[old_name]
    else:
        new_name = old_name
    new_checkpoint_vars[new_name] = tf.Variable(reader.get_tensor(old_name))

init = tf.global_variables_initializer()
saver = tf.train.Saver(new_checkpoint_vars)
with tf.Session() as sess:
    sess.run(init)
    saver.save(sess, NEW_CHECKPOINT_FILE)
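A quick way to confirm the rename worked (a sketch): list the variable names in the new checkpoint and check that kernel/bias appear instead of weights/biases.
import tensorflow as tf

reader = tf.train.NewCheckpointReader("model2.ckpt-1000000")
print(sorted(reader.get_variable_to_shape_map().keys()))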
2. Python 3 file reading problem: in im2txt/run_inference.py, open the file in binary mode:
with tf.gfile.GFile(filename, "rb") as f:
3. The word_count.txt downloaded from that link needs to be replaced with this one:
https://github.com/siavash9000/im2txt_demo/tree/master/im2txt_pretrained
Chunfang's solution works for me, but I wanted to share another approach.
In recent versions of TensorFlow, Google provides an "official" checkpoint_convert.py utility to convert old RNN checkpoints:
python checkpoint_convert.py [--write_v1_checkpoint] \
'/path/to/old_checkpoint' '/path/to/new_checkpoint'

OSError unable to create file - invalid argument

I am using Python and Keras on top of TensorFlow to train my neural networks.
When I switched from Ubuntu 16.04 to Windows 10, my model could no longer be saved when I ran the following:
filepath = "checkpoint-"+str(f)+model_type+"-"+optimizer_name+"-{epoch:02d}-{loss:.3f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
and later on:
model.fit(X, y,
          batch_size=128,
          epochs=1,
          shuffle=False,
          callbacks=callbacks_list)
I get this error:
OSError: Unable to create file (Unable to open file: name = 'checkpoint-<_io.textiowrapper name='data/swing-projects100-raw/many-chunks/log-gamma-f3.txt' mode='a' encoding='cp1252'>2l128-adam-0.001-{epoch:02d}-{loss:.3f}.h5', errno = 22, error message = 'invalid argument', flags = 13, o_flags = 302)
I have Keras 2.0.8 and h5py 2.7.0 installed via conda.
I tried
filepath = "checkpoint-"+str(f)+model_type+"-"+optimizer_name+"-{epoch:02d}-{loss:.3f}.hdf5"
with open(filepath, "w") as f:
    f.write("Test.")
and got a similar error:
OSError: [Errno 22] Invalid argument: "checkpoint-<_io.TextIOWrapper name='data/swing-projects100-raw/many-chunks/log-gamma-f3.txt' mode='a' encoding='cp1252'>2L128-Adam-0.001-{epoch:02d}-{loss:.3f}.hdf5"
When I removed str(f) from the filepath, it worked.
I had assumed f was an integer, but the error message shows that str(f) actually produced a file-object repr (<_io.TextIOWrapper ...>), which contains characters such as <, > and : that Windows forbids in filenames; Ubuntu is more permissive about filename characters, which is why the same code used to work there.
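A defensive sketch (not from the original post) that would avoid this class of error: strip characters that Windows forbids in filenames before interpolating values into a checkpoint path. Note that the {epoch:02d} and {loss:.3f} placeholders are left untouched so ModelCheckpoint can still format them.
import re

def safe_for_filename(value):
    # Replace characters Windows forbids in filenames: < > : " / \ | ? *
    return re.sub(r'[<>:"/\\|?*]', '_', str(value))

filepath = "checkpoint-" + safe_for_filename(f) + model_type + "-" + optimizer_name + "-{epoch:02d}-{loss:.3f}.hdf5"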
I had a similar problem with this code:
agent.save("./saved_models/weights_episode_{}.h5".format(e))
I solved it by manually creating the folder saved_models beforehand; see the sketch below.
e being an integer did not cause any problems in my case.
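The equivalent programmatic fix (a sketch) is to create the directory before saving:
import os

os.makedirs("./saved_models", exist_ok=True)  # no-op if the folder already exists
agent.save("./saved_models/weights_episode_{}.h5".format(e))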
I had a similar problem when using TensorFlow on a remote machine.
The reason in my case was probably 'no permission to modify the file'.
I solved this problem by using a save path like "../model.h5", i.e. a folder where you do have write permission.
Hope this helps someone.
