Exporting trained TensorFlow models to C++ - python

I am trying to export trained TensorFlow models to C++ using freeze_graph.py. As a test, I am exporting the ssd_mobilenet_v1_coco_2017_11_17 model with the following command:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=frozen_inference_graph.pb \
--input_checkpoint=model.ckpt \
--output_graph=/tmp/frozen_graph.pb --output_node_names=softmax
The terminal said that the build was successful, but the script then failed with the following error:
Traceback (most recent call last):
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 350, in <module>
app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 249, in main
FLAGS.saved_model_tags)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 227, in freeze_graph
input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 171, in _parse_input_graph_proto
text_format.Merge(f.read(), input_graph_def)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 525, in Merge
descriptor_pool=descriptor_pool)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 579, in MergeLines
return parser.MergeLines(lines, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 612, in MergeLines
self._ParseOrMerge(lines, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 627, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 671, in _MergeField
name = tokenizer.ConsumeIdentifierOrNumber()
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 1144, in ConsumeIdentifierOrNumber
raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 2:1 : Expected identifier or number, got `.
On running that command again, I got this message:
WARNING: /home/my_username/tensorflow/tensorflow/core/BUILD:1814:1: in includes attribute of cc_library rule //tensorflow/core:framework_headers_lib: '../../external/nsync/public' resolves to 'external/nsync/public' not below the relative path of its package 'tensorflow/core'. This will be an error in the future. Since this rule was created by the macro 'cc_header_only_library', the error might have been caused by the macro implementation in /home/my_username/tensorflow/tensorflow/tensorflow.bzl:1138:30
WARNING: /home/my_username/tensorflow/tensorflow/core/BUILD:1814:1: in includes attribute of cc_library rule //tensorflow/core:framework_headers_lib: '../../external/nsync/public' resolves to 'external/nsync/public' not below the relative path of its package 'tensorflow/core'. This will be an error in the future. Since this rule was created by the macro 'cc_header_only_library', the error might have been caused by the macro implementation in /home/my_username/tensorflow/tensorflow/tensorflow.bzl:1138:30
WARNING: /home/my_username/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': No longer supported. Switch to SavedModel immediately.
WARNING: /home/my_username/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': No longer supported. Switch to SavedModel immediately.
INFO: Analysed target //tensorflow/python/tools:freeze_graph (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/python/tools:freeze_graph up-to-date:
bazel-bin/tensorflow/python/tools/freeze_graph
INFO: Elapsed time: 0.419s, Critical Path: 0.00s
INFO: Build completed successfully, 1 total action
(followed by the same ParseError traceback as shown above)
I am exporting ssd_mobilenet_v1_coco_2017_11_17 just for practice; I intend to export my own trained models and test their output with this program.
I have built TensorFlow 1.5 using Bazel v0.11.1. I validated the installation using the following code snippet provided on the TensorFlow website:
# Python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
I also ran the object detection IPython example notebook, and it worked.
I am using Ubuntu 17.10.1 on a laptop with an Intel Core i5-8250U CPU, 8 GB RAM, a 1 TB HDD and an NVIDIA MX150 (2 GB) GPU. Please help. How do I export a trained model to C++?

In order to export object detection models, I use the export_inference_graph.py script in research/object_detection. Here's an example of running it:
python3 export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path path/to/model.config \
--trained_checkpoint_prefix path/to/model.ckpt-CHECKPOINTNUMBER \
--output_directory path/to/frozen_inference_graph
Then I use the resulting frozen_inference_graph.pb with C++ code that is essentially the same as the label_image example, with minor modifications to run a detection model rather than a classification one.
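Before moving to C++, it can help to sanity-check the exported graph from Python. Below is a minimal sketch, assuming the standard tensor names that the Object Detection API gives its exported graphs (image_tensor, detection_boxes, detection_scores, detection_classes); adjust the path and names to your own export:

import numpy as np
import tensorflow as tf

# Load the frozen GraphDef produced by export_inference_graph.py.
graph_def = tf.GraphDef()
with tf.gfile.GFile('path/to/frozen_inference_graph/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Dummy uint8 batch; image_tensor takes [batch, height, width, 3].
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})
    print(scores.shape)

The C++ side then performs the same two steps, loading the graph with tensorflow::ReadBinaryProto and evaluating it with Session::Run, which is what label_image demonstrates.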

Related

Tensorflow / mobilenet training / ValueError: Unsupported input_reader_config

I'm trying to train MobileNet to recognize custom objects.
I'm following this guide:
https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9
and using a checkpoint and pipeline.config from here:
ssdlite_mobilenet_v2_coco
The Problem
When I start training with the following command:
python object_detection/model_main.py \
--pipeline_config_path=C:\t\models\pipeline.config \
--model_dir=C:\t\models\ \
--num_train_steps=50000 \
--alsologtostderr
I get the following:
C:\tensorflow\models-master\research>path=C:\t\models\pipeline.config \ --model_dir=C:\t\models\ \ --num_train_steps=50000 \ --alsologtostderr
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn.<locals>.model_fn at 0x0000013B6CD26C80>) includes params argument, but params are not passed to Estimator.
Traceback (most recent call last):
File "object_detection/model_main.py", line 101, in <module>
tf.app.run()
File "C:\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "object_detection/model_main.py", line 97, in main
tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
File "C:\Python36\lib\site-packages\tensorflow\python\estimator\training.py", line 447, in train_and_evaluate
return executor.run()
File "C:\Python36\lib\site-packages\tensorflow\python\estimator\training.py", line 531, in run
return self.run_local()
File "C:\Python36\lib\site-packages\tensorflow\python\estimator\training.py", line 681, in run_local
eval_result, export_results = evaluator.evaluate_and_export()
File "C:\Python36\lib\site-packages\tensorflow\python\estimator\training.py", line 886, in evaluate_and_export
hooks=self._eval_spec.hooks)
File "C:\Python36\lib\site-packages\tensorflow\python\estimator\estimator.py", line 453, in evaluate
input_fn, hooks, checkpoint_path)
File "C:\Python36\lib\site-packages\tensorflow\python\estimator\estimator.py", line 1346, in _evaluate_build_graph
model_fn_lib.ModeKeys.EVAL))
File "C:\Python36\lib\site-packages\tensorflow\python\estimator\estimator.py", line 985, in _get_features_and_labels_from_input_fn
result = self._call_input_fn(input_fn, mode)
File "C:\Python36\lib\site-packages\tensorflow\python\estimator\estimator.py", line 1074, in _call_input_fn
return input_fn(**kwargs)
File "C:\Python36\lib\site-packages\object_detection\inputs.py", line 493, in _eval_input_fn
transform_input_data_fn=transform_and_pad_input_data_fn)
File "C:\Python36\lib\site-packages\object_detection\builders\dataset_builder.py", line 150, in build
raise ValueError('Unsupported input_reader_config.')
ValueError: Unsupported input_reader_config.
A comment in "dataset_builder.py" says:
Raises:
ValueError: On invalid input reader proto.
ValueError: If no input paths are specified.
Question:
Is it a problem with the pipeline.config file?
Does it mean that "dataset_builder.py" can't read it?
Or shall I pass some additional input path, as stated in the comment?
If I remember correctly, the cause of the problem was that I hadn't prepared any test data; there was training data only. So I prepared a list of test images with the related XML files and generated a test TFRecord.
Then the error was gone.
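For reference, here is a minimal sketch of writing such a test TFRecord in TF 1.x. The create_tf_example helper and the examples list are hypothetical placeholders for your own image/XML parsing; the Object Detection API expects the usual feature keys (image/encoded, image/object/bbox/..., etc.) inside each Example:

import tensorflow as tf

def create_tf_example(image_path, xml_path):
    # Hypothetical helper: parse the image bytes and the Pascal VOC XML
    # annotation into a tf.train.Example with the feature keys the
    # Object Detection API expects.
    raise NotImplementedError

examples = [('test_img1.jpg', 'test_img1.xml')]  # your test images + annotations
with tf.python_io.TFRecordWriter('test.record') as writer:
    for image_path, xml_path in examples:
        writer.write(create_tf_example(image_path, xml_path).SerializeToString())

The eval_input_reader section of pipeline.config then needs its tf_record_input_reader input_path pointed at this file.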
P.S.
I had a lot of other errors later, but that's another story :)

Tensorflow object detection API using cnn

Traceback (most recent call last):
File "export_inference_graph.py", line 147, in tf.app.run()
File "C:\Users\Ali Salar\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run_sys.exit(main(argv))
File "export_inference_graph.py", line 143, in main
FLAGS.output_directory, input_shape)
File "C:\tensorflow2\models\research\object_detection\exporter.py", line 453, in export_inference_graph
graph_hook_fn=None)
File "C:\tensorflow2\models\research\object_detection\exporter.py", line 421, in _export_inference_graph
placeholder_tensor, outputs)
File "C:\tensorflow2\models\research\object_detection\exporter.py", line 280, in write_saved_model
builder = tf.saved_model.builder.SavedModelBuilder(saved_model_path)
File "C:\Users\Ali Salar\Anaconda3\envs\tensorflow\lib\sit`enter code here`e-packages\tensorflow\python\saved_model\builder_impl.py", line 90, in init
"directory: %s" % export_dir)
AssertionError: Export directory already exists. Please specify a different export directory: inference_graph\saved_model
It's saying that the export directory already exists. Either change the command in your terminal to point to a new directory, or remove everything from the directory you are currently using as the output directory.
python3 export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path YOUR_PIPELINE.CONFIG_FILE_PATH \
--trained_checkpoint_prefix YOUR_MODEL_CHECKPOINT_PATH \
--output_directory exported_graphs/saved_model2
Try changing the output directory, for example to exported_graphs/saved_model2 as above.
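Alternatively, if you want to keep the same path, here is a small sketch that clears the previous export first (out_dir stands for whatever you pass as --output_directory; note that this deletes it):

import os
import shutil

out_dir = 'inference_graph'  # the directory passed as --output_directory
if os.path.exists(out_dir):
    shutil.rmtree(out_dir)  # remove the old export so SavedModelBuilder can recreate it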

Python, theano Runtimeerror: could not initialize elemwise support

Sorry if this happens to be trivial, as I am new to this stuff. I set up Theano to use my GPU for computations on Ubuntu Trusty Tahr. I have an AMD Radeon HD 7670M GPU. When I try to run the test script to check that Theano works with the GPU, I get the following error:
Mapped name None to device opencl0:0: Turks
Traceback (most recent call last):
File "test.py", line 11, in <module>
f = function([], T.exp(x))
File "/home/sachu/git/Theano/theano/compile/function.py", line 322, in function
output_keys=output_keys)
File "/home/sachu/git/Theano/theano/compile/pfunc.py", line 480, in pfunc
output_keys=output_keys)
File "/home/sachu/git/Theano/theano/compile/function_module.py", line 1784, in orig_function
defaults)
File "/home/sachu/git/Theano/theano/compile/function_module.py", line 1648, in create
input_storage=input_storage_lists, storage_map=storage_map)
File "/home/sachu/git/Theano/theano/gof/link.py", line 699, in make_thunk
storage_map=storage_map)[:3]
File "/home/sachu/git/Theano/theano/gof/vm.py", line 1042, in make_all
no_recycling))
File "/home/sachu/git/Theano/theano/gof/op.py", line 975, in make_thunk
no_recycling)
File "/home/sachu/git/Theano/theano/gof/op.py", line 875, in make_c_thunk
output_storage=node_output_storage)
File "/home/sachu/git/Theano/theano/gof/cc.py", line 1189, in make_thunk
keep_lock=keep_lock)
File "/home/sachu/git/Theano/theano/gof/cc.py", line 1130, in __compile__
keep_lock=keep_lock)
File "/home/sachu/git/Theano/theano/gof/cc.py", line 1602, in cthunk_factory
*(in_storage + out_storage + orphd))
RuntimeError: ('The following error happened while compiling the node', GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float64, (False,))>), '\n', 'Could not initialize elemwise support')
The script I ran was the one available on the website: http://deeplearning.net/software/theano/tutorial/using_gpu.html
Is something wrong with the config? I believe all the dependencies are set up properly; I could have made some mistake there, but then I would probably get something other than a RuntimeError. I searched GitHub for information related to this but found nothing, and the same goes for Stack Overflow, hence I am posting here. Any help is appreciated.
Thanks
Additional info: Python 3.4, Theano bleeding-edge version; libgpuarray, clBLAS and OpenBLAS are all built from the git master branch; 64-bit architecture.
Theano support for OpenCL is just not ready yet, and it does not seem to be a priority for the development team to get it working (see this issue). So you will need either some patience or an NVIDIA GPU on which you can run CUDA.
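In the meantime, you can at least confirm that everything apart from the GPU backend works by forcing the tutorial script onto the CPU. A minimal sketch, the only assumption being that the flags are set before theano is imported:

import os
# Must be set before importing theano; device=cpu bypasses the OpenCL path.
os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float64'

import numpy
import theano
import theano.tensor as T

x = theano.shared(numpy.asarray([0.0, 1.0, 2.0]))
f = theano.function([], T.exp(x))
print(f())  # expect [ 1.  2.71828183  7.3890561 ] with no GPU errors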

Not able to run fully_connected_feed.py in Tensorflow

I am following the TensorFlow Mechanics 101 tutorial (version 0.7.0). As per the document, I downloaded the two files (mnist.py and fully_connected_feed.py) and saved them to the same directory on my local machine.
When I run the following command:
$ python /FULL_PATH_TO_fully_connected_feed.py/fully_connected_feed.py
...I get this error: OSError: [Errno 2] No such file or directory: ''. The full output and stack trace are below:
...
...
Step 800: loss = 0.56 (0.005 sec)
Step 900: loss = 0.51 (0.004 sec)
Traceback (most recent call last):
File "./fully_connected_feed.py", line 228, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_app.py", line 30, in run
sys.exit(main(sys.argv))
File "./fully_connected_feed.py", line 224, in main
run_training()
File "./fully_connected_feed.py", line 199, in run_training
saver.save(sess, FLAGS.train_dir, global_step=step)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 970, in save
self.export_meta_graph(meta_graph_file_name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 990, in export_meta_graph
as_text=as_text)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1315, in export_meta_graph
os.path.basename(filename), as_text=as_text)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/training_util.py", line 70, in write_graph
gfile.MakeDirs(logdir)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_gfile.py", line 295, in MakeDirs
os.makedirs(path, mode)
File "/usr/lib/python2.7/os.py", line 160, in makedirs
mkdir(name, mode)
OSError: [Errno 2] No such file or directory: ''
This is a bug in the 0.7.0 release of TensorFlow, which was fixed in a recent commit and will appear in a bugfix release shortly. The issue occurs when the --train_dir flag doesn't contain a directory name component.
In the meantime, you can avoid this issue by passing the flag --train_dir=./ when you run the example.
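That is, something like:
$ python /FULL_PATH_TO_fully_connected_feed.py/fully_connected_feed.py --train_dir=./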
This should be a comment to mrry's post (I'm missing reputation)
Changing line #42 of fully_connected_feed.py to
flags.DEFINE_string('train_dir', './data', 'Directory to put the training data.')
solved the problem for me. I'm also on 0.7.0 and was able to run all other mnist examples.

python theano Optimization failure due to: local_dot_to_dot22

I just pip-installed Theano and tried to run theano.test(). It produced a very long log of errors, of which I copied the first part. I also tried a couple of other examples, and I have seen
"local_dot_to_dot22"
and
"ValueError: invalid token "Files\Enthought\Canopy\App\appdata\canopy1.5.2.2785.win-x86_64\Scripts" in ldflags_str: "-LC:\Program Files\Enthought\Canopy\App\appdata\canopy-1.5.2.2785.win-x86_64\Scripts -lmk2_core -lmk2_intel_thread -lmk2_rt"
several times.
I'm using Python 2.7 (Canopy), SciPy 0.15.1-2 and NumPy 1.9.2-1. I am very new to Theano. I would appreciate it if you could point me in the right direction. Thanks!
EEEEEERROR (theano.gof.opt): Optimization failure due to: local_dot_to_dot22
ERROR:theano.gof.opt:Optimization failure due to: local_dot_to_dot22
ERROR (theano.gof.opt): TRACEBACK:
ERROR:theano.gof.opt:TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "c:\theano\theano\gof\opt.py", line 1737, in process_node
replacements = lopt.transform(node)
File "c:\theano\theano\tensor\blas.py", line 1776, in local_dot_to_dot22
return [_dot22(x.dimshuffle('x', 0), y).dimshuffle(1)]
File "c:\theano\theano\gof\op.py", line 647, in __call__
no_recycling=[])
File "c:\theano\theano\gof\op.py", line 918, in make_thunk
no_recycling)
File "c:\theano\theano\gof\op.py", line 836, in make_c_thunk
output_storage=node_output_storage)
File "c:\theano\theano\gof\cc.py", line 1175, in make_thunk
keep_lock=keep_lock)
File "c:\theano\theano\gof\cc.py", line 1113, in __compile__
keep_lock=keep_lock)
File "c:\theano\theano\gof\cc.py", line 1541, in cthunk_factory
key = self.cmodule_key()
File "c:\theano\theano\gof\cc.py", line 1257, in cmodule_key
compile_args=self.compile_args(),
File "c:\theano\theano\gof\cc.py", line 936, in compile_args
ret += x.c_compile_args()
File "c:\theano\theano\tensor\blas.py", line 652, in c_compile_args
return ldflags(libs=False, flags=True)
File "c:\theano\theano\tensor\blas.py", line 537, in ldflags
include_dir=include_dir)
File "c:\theano\theano\gof\utils.py", line 182, in rval
val = f(*args, **kwargs)
File "c:\theano\theano\tensor\blas.py", line 597, in _ldflags
% (t, ldflags_str))
ValueError: invalid token "Files\Enthought\Canopy\App\appdata\canopy-1.5.2.2785.win-x86_64\Scripts" in ldflags_str: "-LC:\Program Files\Enthought\Canopy\App\appdata\canopy-1.5.2.2785.win-x86_64\Scripts -lmk2_core -lmk2_intel_thread -lmk2_rt"
The problem here is caused by having spaces in your path, i.e. Canopy is installed in C:\Program Files\Enthought\Canopy, but the Theano scripts don't work well with the space between Program and Files. Try uninstalling Canopy and reinstalling in a directory with no space in the path.
You should also follow the other instructions for installing Theano on Windows. Unfortunately it's not as simple as just pip install theano.
In case you don't want to reinstall things (they're heavy programs that, for instance, affect Windows' registry and so on), you can try symbolic links.
A symbolic link will create something similar to a shortcut to a folder, but it is seen as an actual folder by other applications.
So, you can do something like this:
Run cmd as administrator
Use this command: mklink /D "C:\LinkToProgramFiles" "C:\Program Files"
And then you start using "C:\LinkToProgramFiles" in your ldflags variable.
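For example, the [blas] section of your .theanorc could then look like this (a sketch; the flags are simply the ones from the error message, with the path rewritten through the link):
[blas]
ldflags = -LC:\LinkToProgramFiles\Enthought\Canopy\App\appdata\canopy-1.5.2.2785.win-x86_64\Scripts -lmk2_core -lmk2_intel_thread -lmk2_rt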
