Restoring the tensorflow model - python

I want to restore a TensorFlow model after it's trained. I know that I can use tf.train.Saver, but the problem is the restoring: I get confused about which names to pass to get_tensor_by_name. Can anybody help me?
This is my graph:
x_hat = tf.placeholder(tf.float32, shape=[None, dim_img], name='input_img')
x = tf.placeholder(tf.float32, shape=[None, dim_img], name='target_img')
# dropout
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# input for PMLR
z_in = tf.placeholder(tf.float32, shape=[None, dim_z], name='latent_variable')
# network architecture
y, z, loss, neg_marginal_likelihood, KL_divergence = vae.autoencoder(
    x_hat, x, dim_img, dim_z, n_hidden, keep_prob)

When you save a model you save two things: 1) the meta graph, which is a representation of the graph (all the TF symbols you've defined); and 2) the checkpoint, which contains the actual Variable values (which are saved and restored by name).
When you restore, you can restore one or both of those components. What you are describing is restoring both the meta graph AND the checkpoint data. In this case you need to look up the various operations and tensors you're interested in by name, which can be confusing (especially if you didn't name your variables well, which you should always do).
# In this method you import the meta graph, then restore the checkpoint
saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')
saver.restore(sess, 'my-save-dir/my-model-10000')
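After the restore, you look up what you need by name. For the graph in the question, a minimal sketch of those lookups (the ':0' suffix selects the first output tensor of the named op, and the names are the ones you passed to the placeholders above):
graph = tf.get_default_graph()
x_hat = graph.get_tensor_by_name('input_img:0')
x = graph.get_tensor_by_name('target_img:0')
keep_prob = graph.get_tensor_by_name('keep_prob:0')
z_in = graph.get_tensor_by_name('latent_variable:0')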
The other option for restoring (which I prefer myself) is to not load the meta graph at all. Instead, just re-run the same code you originally used to create the graph (if you've done things well, this will all be organized in one place). Then you only restore the checkpoint. This approach has the benefit that you can easily keep a reference to all the operations you'll need (such as cost, train_op, placeholders, etc.).
# This method only performs the restore operation,
# assuming the graph has already been constructed
saver.restore(sess, 'my-save-dir/my-model-10000')
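Put together, a minimal sketch of this second approach for the graph in the question. Here build_graph is a hypothetical wrapper around your original graph-construction code, dim_img/dim_z/n_hidden/vae come from the question's scope, and the checkpoint path is assumed to match your saver.save() call:
import tensorflow as tf

def build_graph():
    # hypothetical helper: the same code you used to build the graph for training
    x_hat = tf.placeholder(tf.float32, shape=[None, dim_img], name='input_img')
    x = tf.placeholder(tf.float32, shape=[None, dim_img], name='target_img')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    z_in = tf.placeholder(tf.float32, shape=[None, dim_z], name='latent_variable')
    y, z, loss, neg_ll, kl = vae.autoencoder(x_hat, x, dim_img, dim_z, n_hidden, keep_prob)
    return x_hat, x, keep_prob, z_in, y, z, loss

x_hat, x, keep_prob, z_in, y, z, loss = build_graph()
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'my-save-dir/my-model-10000')
    # you now hold direct references to every tensor you need -- no name lookups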

Related

Is there a way in tensorflow(<2) to use a dynamic input shape with a frozen graph?

I've got a two-part question here. First of all, is it possible to use a dynamic (variable) shape input (image) with a frozen graph? More exactly:
graph = unfreeze(output_graph) #this unfreezes a .pb model. I can add that as well if necessary
x = graph.get_tensor_by_name('prefix/Placeholder:0')
y = graph.get_tensor_by_name('prefix/generator/out:0')
with tf.Session(config=config, graph=graph) as sess:
    # some irrelevant code related to 'image'
    image_out = sess.run(y, feed_dict={x: image})
This throws a very understandable error about how the size of the image doesn't fit what the placeholder is expecting (if indeed it doesn't fit).
This leads me to the second part of my question:
If I use a model that I restore from a checkpoint (graph from the .meta file, variables from the .data file), I can somehow, magically, choose the size of the input tensor like so:
#IMAGE_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH are just the shape of the images I'm doing prediction on, regardless of the training input shape
x_ = tf.placeholder(tf.float32, [None, IMAGE_SIZE])
x_image = tf.reshape(x_, [-1, IMAGE_HEIGHT, IMAGE_WIDTH, 3])
#more stuff..
out = sess.run(y, feed_dict={x_: image})
In conclusion, what I'm really asking is: how is it possible to choose the input shape after training? Shouldn't that be fixed? Even if the variables are still variables, and not constants as they are in the frozen model, they should still have the same shape and number. It makes no sense to me to be able to modify the shape of the layers after training.
Thanks, and sorry if I didn't abide by the posting rules; it's my first time.
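For reference, one hedged sketch of how a frozen graph's fixed-shape placeholder can be swapped for a new one at import time, using the input_map argument of tf.import_graph_def (the same mechanism used in the input-replacement answer further down this page). It assumes the node names inside the .pb are 'Placeholder' and 'generator/out' (the 'prefix/' in the question is added by its unfreeze helper), and output_graph/image come from the question; whether the downstream ops actually tolerate a different shape depends on the network:
import tensorflow as tf

# a new placeholder whose spatial dimensions are left dynamic
new_input = tf.placeholder(tf.float32, shape=[None, None, None, 3], name='new_input')
with tf.gfile.GFile(output_graph, 'rb') as f:  # output_graph is the .pb path from the question
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
# rewire the frozen graph to read from the new placeholder instead of its own
[y] = tf.import_graph_def(graph_def,
                          input_map={'Placeholder:0': new_input},
                          return_elements=['generator/out:0'])
with tf.Session() as sess:
    image_out = sess.run(y, feed_dict={new_input: image})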

How to run prediction (using image as input) for a saved model?

Problem:
I am very new to TensorFlow. My specific question is: what particular arguments should I put inside the sess.run(fetches, feed_dict) function? For instance, how could I find out what the values of the arguments should be?
Steps:
Here is my understanding of the steps after looking at other posts.
Save the trained TensorFlow model; it should consist of 4 files. Below are my outputs:
checkpoint
Inception_resnet_v2.ckpt.data-00000-of-00001
Inception_resnet_v2.ckpt.index
Inception_resnet_v2.ckpt.meta
Resize the input image to whatever format is required by the neural network.
Start a TensorFlow session.
Retrieve the graph and the associated parameters, tensors, etc.
Run prediction on the input image.
Code:
Training code:
https://github.com/taki0112/SENet-Tensorflow/blob/master/SE_Inception_resnet_v2.py
[Solved] Test code:
import tensorflow as tf
import numpy as np
import cv2
labels = ["airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"]
# Load graph and parameters, etc.
sess=tf.Session()
saver = tf.train.import_meta_graph('./model/Inception_resnet_v2.ckpt.meta')
saver.restore(sess, tf.train.latest_checkpoint("./model/"))
graph = tf.get_default_graph()
# Get tensor names
x = graph.get_tensor_by_name("Placeholder:0")
training_flag = graph.get_tensor_by_name("Placeholder_2:0")
op_to_restore = graph.get_tensor_by_name("final_fully_connected/dense/BiasAdd:0")
# Preprocess image input
src = cv2.imread("./input/car3.jpg")
dst = cv2.resize(src, (32, 32), interpolation=cv2.INTER_CUBIC)
b,g,r = cv2.split(dst)
b = (b - np.mean(b)) / np.std(b) * .1
g = (g - np.mean(g)) / np.std(g) * .1
r = (r - np.mean(r)) / np.std(r) * .1
normalized = cv2.merge((b,g,r))
# reshape the normalized image (not the raw resize) into a batch of one
picture = normalized.reshape(1, 32, 32, 3)
feed_dict ={x: picture, training_flag:False}
result_index = sess.run(op_to_restore,feed_dict)
print(result_index)
print (labels[np.argmax(result_index)])
The arguments actually depend on what you're doing, but mostly the first argument (fetches) is the ops/tensors you want evaluated, and feed_dict supplies values for the placeholders. Whenever you are working with TensorFlow, you define a graph which is fed examples (training data) and some hyperparameters like learning rate, global step, etc. It's standard practice to feed all the training data and hyperparameters using placeholders. When you build a network using placeholders and save it, the network is saved; however, the values of the placeholders are not saved.
Let's see a toy example:
import tensorflow as tf
#Prepare to feed input, i.e. feed_dict and placeholders
w1 = tf.placeholder("float", name="w1")
w2 = tf.placeholder("float", name="w2")
b1= tf.Variable(2.0,name="bias")
feed_dict ={w1:4,w2:8}
#Define a test operation that we will restore
w3 = tf.add(w1,w2)
w4 = tf.multiply(w3,b1,name="op_to_restore")
sess = tf.Session()
sess.run(tf.global_variables_initializer())
#Create a saver object which will save all the variables
saver = tf.train.Saver()
#Run the operation by feeding input
print(sess.run(w4, feed_dict))
#Prints 24.0, which is (w1+w2)*b1 = (4+8)*2
#Now, save the graph
saver.save(sess, 'my_test_model',global_step=1000)
Now, when we want to restore it, we not only have to restore the graph and weights, but also prepare a new feed_dict that will feed the new data to the network. We can get references to these saved operations and placeholder variables via the graph.get_tensor_by_name() method. So if you want to train the same model further with new data, you would have to utilize those weights; if, however, you just want to get predictions from the model you trained, you can utilize op_to_restore with a feed_dict of new data. Something like this, if you follow the above example:
import tensorflow as tf
sess=tf.Session()
#First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess,tf.train.latest_checkpoint('./'))
# Now, let's access the saved placeholder variables and
# create a feed_dict to feed new data
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
feed_dict ={w1:13.0,w2:17.0}
#Now, access the op that you want to run.
op_to_restore = graph.get_tensor_by_name("op_to_restore:0")
print(sess.run(op_to_restore, feed_dict))
#This will print 60, which is calculated using the new
#values of w1 and w2 and the saved value of b1.
So, this is how it works. In your case, since you're trying to load the Inception model, your op_to_restore depends on what you're trying to restore; if you could tell us what you're trying to do, it would be easier to suggest something. The other parameter, feed_dict, is just the numpy array of the image pixels that you're trying to classify/predict on.
I took the code from the following article, which will help you as well: http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/
Update: For your particular case, you may like to try the following code to predict the classes in the new images.
import tensorflow as tf
slim = tf.contrib.slim
from inception_resnet_v2 import *
#Well, since you're using resnet_v2, this may be equivalent for you.
checkpoint_file = 'inception_resnet_v2_2016_08_30.ckpt'
sample_images = ['dog.jpg', 'panda.jpg']
#Load the model
sess = tf.Session()
arg_scope = inception_resnet_v2_arg_scope()
#input_tensor was left undefined in the original answer;
#299x299 is the default input size for inception_resnet_v2
input_tensor = tf.placeholder(tf.float32, [None, 299, 299, 3])
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(input_tensor, is_training=False)
#Restore the checkpoint weights into the graph
saver = tf.train.Saver()
saver.restore(sess, checkpoint_file)
#With this, you can fetch the prediction ops like the following
predict_values, logit_values = sess.run([end_points['Predictions'], logits], feed_dict={input_tensor: im})
#Here im is the normalized numpy array of the image pixels.
Furthermore, the following resources may help you even more:
Using pre-trained inception_resnet_v2 with Tensorflow
https://github.com/tensorflow/tensorflow/issues/7172

Tensorflow Performing Feature Extraction (on the whole Dataset) is very time consuming

I want to perform feature extraction on TensorFlow's standard MNIST dataset (before training my neural network). It's a simple tf.matmul(), but it takes about 3 hours to complete. Any tuning tricks or ideas to reduce the time?
The code looks like below
def apply_feature_extraction(data, feature_mapper):
    weights, bias = feature_mapper
    return session.run(tf.add(tf.matmul(data, weights), bias))

batch_x, batch_y = mnist.train.next_batch(batch_size)
transformed_features = apply_feature_extraction(batch_x, my_feature_mapper)
You should not create any operations while executing the graph!
Each time you call apply_feature_extraction you add a new tf.add(tf.matmul(...)) operation to your graph. As a result your graph gets bloated.
First, create a fully defined graph that contains all variables and operations you need and then just execute ops within a tf.Session that are defined in the graph.
In your case that might look like this:
def apply_feature_extraction(data, feature_mapper):
    weights, bias = feature_mapper
    return tf.add(tf.matmul(data, weights), bias)

batch_x, batch_y = mnist.train.next_batch(batch_size)

# define graph
x = tf.placeholder(tf.float32, shape=None, name='input')
transformed_features = apply_feature_extraction(x, my_feature_mapper)

# execute graph
with tf.Session() as sess:
    trans_feat_evaluated = sess.run(transformed_features, feed_dict={x: batch_x})
I just resolved this issue by avoiding feed_dict and moving to the Dataset API:
https://www.tensorflow.org/programmers_guide/datasets
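A minimal sketch of what that can look like here (TF 1.x), assuming images is the numpy array mnist.train.images and np_weights/np_bias are the numpy arrays behind my_feature_mapper (those names are assumptions for illustration):
import numpy as np
import tensorflow as tf

def feature_dataset(images, weights, bias, batch_size=128):
    # build the whole pipeline once; the affine map runs inside the graph
    w = tf.constant(weights, dtype=tf.float32)
    b = tf.constant(bias, dtype=tf.float32)
    dataset = tf.data.Dataset.from_tensor_slices(images.astype(np.float32))
    dataset = dataset.batch(batch_size)
    dataset = dataset.map(lambda x: tf.matmul(x, w) + b)
    return dataset.make_one_shot_iterator().get_next()

next_features = feature_dataset(mnist.train.images, np_weights, np_bias)
with tf.Session() as sess:
    try:
        while True:
            feats = sess.run(next_features)  # one transformed batch per call
    except tf.errors.OutOfRangeError:
        pass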

How to replace the input of a saved graph, e.g. a placeholder by a Dataset iterator?

I have a saved TensorFlow graph that consumes input through a placeholder via the feed_dict param.
sess.run(my_tensor, feed_dict={input_image: image})
Because feeding data with a Dataset iterator is more efficient, I want to load the saved graph, replace the input_image placeholder with an iterator, and run. How can I do that? Is there a better way to do it? An answer with a code example would be highly appreciated.
You can achieve that by serializing your graph and reimporting it using tf.import_graph_def, which has an input_map argument used to plug in inputs at the desired places.
To do that you need at least to know the names of the inputs you replace and of the outputs you wish to execute (x and y respectively in my examples).
import tensorflow as tf
# restore graph (built from scratch here for the example)
x = tf.placeholder(tf.int64, shape=(), name='x')
y = tf.square(x, name='y')
# just for display -- you don't need to create a Session for serialization
with tf.Session() as sess:
    print("with placeholder:")
    for i in range(10):
        print(sess.run(y, {x: i}))
# serialize the graph
graph_def = tf.get_default_graph().as_graph_def()
tf.reset_default_graph()
# build new pipeline
batch = tf.data.Dataset.range(10).make_one_shot_iterator().get_next()
# plug in new pipeline
[y] = tf.import_graph_def(graph_def, input_map={'x:0': batch}, return_elements=['y:0'])
# enjoy Dataset inputs!
with tf.Session() as sess:
    print('with Dataset:')
    try:
        while True:
            print(sess.run(y))
    except tf.errors.OutOfRangeError:
        pass
Note that the placeholder node is still there, as I did not bother to parse graph_def to remove it -- you could remove it as an improvement, although I think it is also OK to leave it.
Depending on how you restore your graph, the input replacement may already be built into the loader, which makes things simpler (no need to go back to a GraphDef). For example, if you load your graph from a .meta file, you can use tf.train.import_meta_graph, which accepts the same input_map argument.
import tensorflow as tf
# build new pipeline
batch = tf.data.Dataset.range(10).make_one_shot_iterator().get_next()
# load your net and plug in new pipeline
# you need to know the name of the tensor where to plug-in your input
restorer = tf.train.import_meta_graph(graph_filepath, input_map={'x:0': batch})
y = tf.get_default_graph().get_tensor_by_name('y:0')
# enjoy Dataset inputs!
with tf.Session() as sess:
    # not needed here, but in practice you would also need to restore weights
    # restorer.restore(sess, weights_filepath)
    print('with Dataset:')
    try:
        while True:
            print(sess.run(y))
    except tf.errors.OutOfRangeError:
        pass

Restored model in tensorflow and predictions

I created a neural network model in TensorFlow.
I saved the model and restored it in another Python file.
The code is below:
def restoreModel():
    prediction = neuralNetworkModel(x)
    tf_p = tensorFlow.nn.softmax(prediction)
    temp = np.array([2,1,541,161124,3,3])
    temp = np.vstack(temp)
    with tensorFlow.Session() as sess:
        new_saver = tensorFlow.train.import_meta_graph('model.ckpt.meta')
        new_saver.restore(sess, tensorFlow.train.latest_checkpoint('./'))
        all_vars = tensorFlow.trainable_variables()
        tensorFlow.initialize_all_variables().run()
        sess.run(tensorFlow.initialize_all_variables())
        predict = sess.run([tf_p], feed_dict={
            tensorFlow.transpose(x): temp,
            y : ***
        })
when "temp" variable in what I want to predict!
X is the vector shape, and I "transposed" it to match the shapes.
I dont understand what I need to write in feed_dict variable.
I am answering late, but maybe it can still be useful. feed_dict is used to give TensorFlow the values you want your placeholders to take. fetches (the first argument of run) is the list of results you want. The keys of feed_dict and the elements of fetches must be either the names of the tensors (I didn't try it though) or variables you can get by:
graph = tf.get_default_graph()
var = graph.get_operation_by_name('name_of_operation').outputs[0]
Maybe graph.get_tensor_by_name('name_of_operation:0') works too; I didn't try.
By default, placeholders are simply named 'Placeholder', 'Placeholder_1', etc., following their order of creation in the graph definition.
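Putting that together, a minimal sketch for this case. The checkpoint paths are taken from the question, while the output op name 'op_to_restore' is a hypothetical placeholder for whatever you named your output; note also that you feed the restored placeholder itself and transpose the data, not the tensor:
import numpy as np
import tensorflow as tf

temp = np.vstack(np.array([2,1,541,161124,3,3]))  # shape (6, 1), as in the question

with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('model.ckpt.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))
    graph = tf.get_default_graph()
    # default placeholder names follow creation order
    x = graph.get_tensor_by_name('Placeholder:0')
    # hypothetical name -- use the name you gave your output op
    prediction = graph.get_tensor_by_name('op_to_restore:0')
    # transpose the numpy data to match the placeholder's shape
    predict = sess.run(prediction, feed_dict={x: temp.T})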
