Export Tensorflow Estimator - python

I'm trying to build a CNN with Tensorflow (r1.4) based on the tf.estimator API. It's a canned model. The idea is to train and evaluate the network with the estimator in Python, then run prediction in C++ without the estimator by loading a pb file generated after training.
My first question is, is it possible?
If yes: the training part works, and the prediction part also works with a pb file generated without the estimator, but it fails when I load a pb file exported from the estimator.
I got this error: "Data loss: Can't parse saved_model.pb as binary proto"
My Python code to export my model:
feature_spec = {'input_image': parsing_ops.FixedLenFeature(dtype=dtypes.float32, shape=[1, 48 * 48])}
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

input_fn = tf.estimator.inputs.numpy_input_fn(self.eval_features,
                                              self.eval_label,
                                              shuffle=False,
                                              num_epochs=1)
eval_result = self.model.evaluate(input_fn=input_fn, name='eval')

exporter = tf.estimator.FinalExporter('save_model', export_input_fn)
exporter.export(estimator=self.model, export_path=MODEL_DIR,
                checkpoint_path=self.model.latest_checkpoint(),
                eval_result=eval_result,
                is_the_final_export=True)
It doesn't work with tf.estimator.Estimator.export_savedmodel() either.
If anyone knows of a clear tutorial on canned estimators and how to export them, I'm interested.

Please look at this issue on GitHub; it looks like you have the same problem. Apparently (at least when using estimator.export_savedmodel) you should load the graph with LoadSavedModel instead of ReadBinaryProto, because it's not saved as a GraphDef file.
You'll find a bit more instruction on how to use it here:
const string export_dir = ...
SavedModelBundle bundle;
...
LoadSavedModel(session_options, run_options, export_dir, {kSavedModelTagTrain},
               &bundle);
I can't seem to find the SavedModelBundle documentation for C++ to use it afterwards, but it's likely close to the same class in Java, in which case it basically contains the session and the graph you'll be using.
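For reference, a minimal sketch of the export side in Python that produces something LoadSavedModel can consume (assuming `model` is the trained canned estimator, i.e. self.model in the question, and MODEL_DIR is the export base directory):

import tensorflow as tf

feature_spec = {'input_image': tf.FixedLenFeature(dtype=tf.float32, shape=[1, 48 * 48])}
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

# export_savedmodel returns the path of a timestamped directory, e.g. <MODEL_DIR>/1512345678,
# containing saved_model.pb plus a variables/ subfolder; LoadSavedModel should be pointed
# at that directory, not at the .pb file itself.
export_dir = model.export_savedmodel(MODEL_DIR, export_input_fn)

Also note that, as far as I know, an Estimator export is tagged with the "serve" tag, so in C++ you would likely pass kSavedModelTagServe rather than kSavedModelTagTrain.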

Related

EfficientDet mAP evaluation on custom dataset

I'm trying to run 'mAP_evaluation.py' to get a mAP evaluation on my own dataset:
https://github.com/Tessellate-Imaging/Monk_Object_Detection/tree/master/4_efficientdet/lib
but I think the whole Python file is written for the COCO dataset only, and if I use the function evaluate_coco() I don't know how to adapt my dataset to match what the function expects. Please help.
P.S.: I have already trained and exported the EfficientDet model (pth file) and run predictions on test images/videos; I just don't know how to evaluate.
You can fix the issue like this. The CocoDataset constructor is:
def __init__(self, root_dir, img_dir='images', set_dir='train2017', transform=None)
so in mAP_evaluation.py I changed the call to:
dataset_val = CocoDataset("/content/Monk_Object_Detection/4_efficientdet/lib/data/pothole",
                          img_dir='images', set_dir='val2017',
                          transform=transforms.Compose([Normalizer(), Resizer()]))
evaluate_coco(dataset_val, efficientdet)
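As a rough guide (this layout is an assumption based on how standard COCO loaders resolve paths, so verify it against the CocoDataset code in the repo), the root_dir you pass is expected to contain a COCO-style layout with an annotation json named after set_dir and a matching image folder. A quick sanity check:

import os

root_dir = "/content/Monk_Object_Detection/4_efficientdet/lib/data/pothole"

# Expected layout (assumption):
#   <root_dir>/annotations/instances_val2017.json   <- COCO-format annotations
#   <root_dir>/images/val2017/...                   <- the images referenced by that json
print(os.path.exists(os.path.join(root_dir, "annotations", "instances_val2017.json")))
print(os.path.isdir(os.path.join(root_dir, "images", "val2017")))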

How to save a trained Pipeline model into a single tflite file?

I trained a Pipeline model, which uses CountVectorizer, TfidfTransformer, OneVsRestClassifier and also a GridSearchCV.
Now I want to save it into a tflite file, to use it on my Android app.
For a Sequential model (where my tflite file was created successfully), I did:
sequential_model = Sequential()
...
# train and fit the model
...
h5_file = "h5_model.h5"
tflite_file = "tflite_model.tflite"
sequential_model.save(h5_file)
converter = tf.lite.TFLiteConverter.from_keras_model_file(h5_file)
tflite_model = converter.convert()
open(tflite_file, "wb").write(tflite_model)
So far, so good for saving a Sequential model into a tflite file.
However, Pipeline has no "save" attribute, unlike a Sequential model, so I tried saving the Pipeline model with joblib and then with pickle, but neither of them worked.
Let's say that pipeline_model is my trained model (the one described in the first sentence).
pb_file = 'pipeline_model.pb'
# I also tried with other extensions, like h5, hdf5, sav, pkl
joblib.dump(pipeline_model, pb_file)
# or with the pickle equivalent and a pkl extension
# pickle.dump(pipeline_model, open(pb_file, 'wb'))
Now the pb file is created and I want to create a tflite one. Since it's not a Keras model, I can't use from_keras_model_file, so I tried from_saved_model instead.
pb_file = 'pipeline_model.pb'
tflite_file = "tflite_model.tflite"
converter = tf.lite.TFLiteConverter.from_saved_model(pb_file)
tflite_model = converter.convert()
open(tflite_file, "wb").write(tflite_model)
It generates the following error on the converter = ... line:
OSError: SavedModel file does not exist at: pb_file.pb/{saved_model.pbtxt|saved_model.pb}
I tried running it on Kaggle, Colab, PyCharm IDE, with both versions of tensorflow (1 and 2), with different file extensions and nothing seems to work.
I also noticed that TFLiteConverter has the methods from_frozen_graph and from_session, but these two require extra parameters, so I don't think they are the solution.
So, how can I obtain my tflite file from the trained Pipeline model? Please, if you find any solution, tell me the library versions that you used, since there could be a different behaviour on different libs.
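For what it's worth, here is a minimal sketch of what tf.lite.TFLiteConverter.from_saved_model actually expects: a SavedModel directory (containing saved_model.pb and a variables/ folder), not the path of a single .pb file, and the model inside has to be a TensorFlow graph rather than a pickled scikit-learn Pipeline. Illustrated with a toy Keras model under TF 2.x (names are placeholders):

import tensorflow as tf

# Toy Keras model, only to illustrate the input format from_saved_model expects.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

saved_model_dir = "saved_model_dir"          # a directory, not a .pb file path
tf.saved_model.save(model, saved_model_dir)  # writes saved_model.pb + variables/ inside it

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
with open("tflite_model.tflite", "wb") as f:
    f.write(tflite_model)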

Exporting and loading tf.contrib.estimator in Tensorflow 1.3 for prediction in python without using Tensorflow Serving

I am using tf.contrib.learn.DNNLinearCombinedRegressor with wide and deep feature columns. I use an input_fn to send data to this regressor for training. I have both real and categorical features, so my deep feature columns are made of sparse columns with embeddings as well as real-valued columns, while my linear columns are real-valued columns. After training, I can naively access the trained model for prediction by using the estimator.predict_scores() method. However, to my understanding, this recreates the tensorflow graph from the checkpoint file, which is slow for my intended application. I then tried to use estimator.export_savedmodel() to see if I can save and load the graph prior to actual prediction in my application. But I got stuck with it. I am facing the following issues:
To export, I did the following:
feature_spec = tf.feature_column.make_parse_example_spec(deep_feature_cols)
export_input_fn = tf.contrib.learn.utils.build_parsing_serving_input_fn(feature_spec)
servable_model_path = regressor.export_savedmodel(export_dir_base=servable_model_dir,
                                                  serving_input_fn=export_input_fn,
                                                  as_text=True)
(Note that the raw and default parsing functions didn't work for me due to a VarLen feature column -- the embedding column.) Since I construct feature_spec with my deep columns only, I don't understand how the saved model will know what I am using for my linear columns. I don't know how to combine the two types of columns for saving/exporting.
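(A minimal sketch of one way to combine them, assuming make_parse_example_spec accepts any iterable of feature columns and that wide_feature_cols / deep_feature_cols are the two column lists, would be:)

import tensorflow as tf

# Hypothetical column lists: combine wide (linear) and deep columns so the parsing
# spec covers every feature the exported model consumes.
all_cols = list(wide_feature_cols) + list(deep_feature_cols)
feature_spec = tf.feature_column.make_parse_example_spec(all_cols)
export_input_fn = tf.contrib.learn.utils.build_parsing_serving_input_fn(feature_spec)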
While I was able to export the trained regressor model successfully, I couldn't figure out how to use it for predictions. I have seen tutorials for doing this using Tensorflow Serving, such as this one. However, I just want something simple that I can load and use in Python. I am working on an interactive GUI application on Windows, so I don't want to use bazel to build a saved model for Tensorflow Serving. I tried the following from this post, but it didn't work with my input_fn:
from tensorflow.contrib import predictor
predict_fn = predictor.from_saved_model(servable_model_path)
predictions = predict_fn(get_input_fn(X, y, LABEL, num_epochs=1, shuffle=False))
where get_input_fn is my custom input_fn. When I run this code, I get the following error:
AttributeError                            Traceback (most recent call last)
<ipython-input-15-128f4d1ba155> in <module>()
----> 1 predictions = predict_fn(get_input_fn(X, y, LABEL, num_epochs=1,
                                              shuffle=False))

~\AppData\Local\Continuum\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\predictor\predictor.py in __call__(self, input_dict)
     63     """
     64     # TODO(jamieas): make validation optional?
---> 65     input_keys = set(input_dict.keys())
     66     expected_keys = set(self.feed_tensors.keys())
     67     unexpected_keys = input_keys - expected_keys

AttributeError: 'function' object has no attribute 'keys'
Any input is appreciated. I am on Windows 10, using Tensorflow 1.3 with Python 3.5 in an Anaconda virtual environment.
Poked around a bit more: predict_fn expects a dictionary whose keys are exactly the same names as the feature variables your model expects. In my CNN model the input was "inputs", as below:
def _cnn_model_fn(features, labels, mode):
    # Input Layer
    print("features shape", features['inputs'])
Here's what I passed to predict_fn, where myimage is a flattened np.array:
predictions = predict_fn({"inputs": [myimage]})
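One caveat, as an assumption on my part since it depends on how the model was exported: if the SavedModel was exported with a parsing serving input fn (as in the question), the predictor usually expects serialized tf.Example protos rather than raw feature arrays, along these lines (the feature name and receiver key below are illustrative, so check your export's signature):

import tensorflow as tf
from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(servable_model_path)

# Build one serialized tf.Example per row; feature names must match the feature_spec.
example = tf.train.Example(features=tf.train.Features(feature={
    'my_numeric_feature': tf.train.Feature(float_list=tf.train.FloatList(value=[0.5])),
}))

# 'examples' is the usual receiver key for a parsing serving input fn,
# but confirm the exact key from your export's SignatureDef.
predictions = predict_fn({'examples': [example.SerializeToString()]})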

Using saved Tensorflow Estimator with C++ API

I have written the Abalone estimator in Python as described in https://www.tensorflow.org/versions/r0.11/tutorials/estimators/. I wish to save the state of the estimator, then load it in C++ and use it to make predictions.
To save it from Python, I use the model_dir parameter in the tf.contrib.learn.Estimator constructor, which creates a (text) protobuf file and several checkpoint files. I then use the freeze_graph.py tool (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py) to combine the checkpoint and the protobuf file into a standalone GraphDef file.
I load this file using the C++ API, load some input values into a Tensor, then run the session. The input node in the protobuf file is called 'input' and the output node 'output', and both are placeholder nodes.
// ...
std::vector<std::pair<string, tensorflow::Tensor>> inputs =
{
    {"input", inputTensor}
};
std::vector<tensorflow::Tensor> outputs;
status = pSession->Run(inputs, {"output"}, {}, &outputs);
However, since the output node is a placeholder node, this fails because it needs to be fed a value. But you cannot both feed and fetch the same node, so I cannot access the output of the estimator. Why is the output node a placeholder node?
What is the best way to save a trained estimator from Python and load it for prediction in C++?

How to make correct predictions for a jpeg image in cloud-ml

I want to predict a jpeg image in cloud-ml.
My training model is the Inception model, and I would like to send the input to the first layer of the graph: 'DecodeJpeg/contents:0' (where I have to send a jpeg image). I have set this layer as a possible input by adding the following in retrain.py:
inputs = {'image_bytes': 'DecodeJpeg/contents:0'}
tf.add_to_collection('inputs', json.dumps(inputs))
Then I save the results of the training in two files (export and export.meta) with:
saver.save(sess, os.path.join(output_directory,'export'))
and I create a model in cloud-ml using these files.
As suggested in some posts (here, here, and here from the official Google Cloud blog), I'm trying to make the prediction with:
gcloud beta ml predict --json-instances=request.json --model=MODEL
where the instance is the jpeg image encoded in base64 format with:
python -c 'import base64, sys, json; img = base64.b64encode(open(sys.argv[1], "rb").read()); print json.dumps({"key":"0", "image_bytes": {"b64": img}})' image.jpg &> request.json
However the request returns:
error: 'Prediction failed: '
What is the problem with my procedure? Do you have any suggestions?
In particular, from this post I assume that cloud-ml automatically converts the base64 image to jpeg format when it reads a request with image_bytes. Is that correct? If not, how can I do it?
CloudML requires you to feed the graph with a batch of images.
I'm pretty sure this is the issue with re-using retrain.py. See that code's sess.run line; it is feeding a single image at a time. Compare with the batched jpeg placeholder in the flowers sample.
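For illustration, a rough sketch of that batched pattern (the names here are illustrative, not the exact flowers-sample code): a string placeholder of shape [None], so a whole batch of encoded jpegs can be fed and decoded per element with tf.map_fn:

import tensorflow as tf

# Batched jpeg input: one encoded-jpeg string per batch element (shape [None]).
image_bytes = tf.placeholder(tf.string, shape=[None], name='image_bytes')

def _decode_and_resize(encoded):
    image = tf.image.decode_jpeg(encoded, channels=3)
    image = tf.image.resize_bilinear(tf.expand_dims(image, 0), [299, 299])
    return tf.squeeze(image, axis=0)

# Decode every element of the batch; dtype is required because the input is tf.string.
images = tf.map_fn(_decode_and_resize, image_bytes, dtype=tf.float32)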
Note that three slightly different TF graphs need to be constructed: training, evaluation, and prediction. See this recent blog post for details. The training and evaluation graphs directly consume embeddings from preprocessing, so they do not contain an Inception graph. For prediction, we need to take image bytes as input and use Inception to extract embeddings.
For online prediction, you need to export the prediction graph. You should also specify the outputs and a key for the inputs.
To build the prediction graph (the code):
def build_prediction_graph(self):
    """Builds prediction graph and registers appropriate endpoints."""
    tensors = self.build_graph(None, 1, GraphMod.PREDICT)

    keys_placeholder = tf.placeholder(tf.string, shape=[None])
    inputs = {
        'key': keys_placeholder.name,
        'image_bytes': tensors.input_jpeg.name
    }
    tf.add_to_collection('inputs', json.dumps(inputs))

    # To extract the id, we need to add the identity function.
    keys = tf.identity(keys_placeholder)
    outputs = {
        'key': keys.name,
        'prediction': tensors.predictions[0].name,
        'scores': tensors.predictions[1].name
    }
    tf.add_to_collection('outputs', json.dumps(outputs))
To export the prediction graph:
def export(self, last_checkpoint, output_dir):
    # Build and save prediction meta graph and trained variable values.
    with tf.Session(graph=tf.Graph()) as sess:
        self.build_prediction_graph()
        init_op = tf.global_variables_initializer()
        sess.run(init_op)
        self.restore_from_checkpoint(sess, self.inception_checkpoint_file,
                                     last_checkpoint)
        saver = tf.train.Saver()
        saver.export_meta_graph(filename=os.path.join(output_dir, 'export.meta'))
        saver.save(sess, os.path.join(output_dir, 'export'), write_meta_graph=False)
last_checkpoint must point to the latest checkpoint file from training:
self.model.export(tf.train.latest_checkpoint(self.train_path), self.model_path)
In your post, you indicated that your inputs collection has only the "image_bytes" tensor alias. However, in the code where you are forming the request, you are including two inputs: one is "key" and the other is "image_bytes". So, my suggestion would be to remove "key" from the request or add "key" to the inputs collection.
The second issue is that the shape of 'DecodeJpeg/contents:0' is (). For Cloud ML, you need a shape like (None,) so that you can feed a batch in.
There are some suggestions in other answers to your question here on how you might follow the public posts to modify your graph, but offhand I can point out these two issues.
Let us know if you encounter any further issues.
