The context is that I'm trying to incrementally grow an RNN autoencoder by first training a single-cell encoder/decoder and then extending it. I'd like to load the parameters of the preceding cells.
This is a minimal example where I'm investigating how to do this, and it fails with:
TypeError: Cannot interpret feed_dict key as Tensor: The name 'save_1/Const:0' refers to a Tensor which does not exist. The operation, 'save_1/Const', does not exist in the graph.
I've searched and found nothing; this thread and this thread are not the same problem.
MVCE
import tensorflow as tf
import numpy as np

with tf.Session(graph=tf.Graph()) as sess:
    cell1 = tf.nn.rnn_cell.LSTMCell(1, name='lstm_cell1')
    cell = tf.nn.rnn_cell.MultiRNNCell([cell1])
    inputs = tf.random_normal((5, 10, 1))
    rnn1 = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    vars0 = tf.trainable_variables()
    saver = tf.train.Saver(vars0, max_to_keep=1)
    sess.run(tf.global_variables_initializer())
    saver.save(sess, './save0')
    vars0_val = sess.run(vars0)

# creating a new graph/session because it is not given that it'll be in the same session
with tf.Session(graph=tf.Graph()) as sess:
    cell1 = tf.nn.rnn_cell.LSTMCell(1, name='lstm_cell1')
    # one extra cell
    cell2 = tf.nn.rnn_cell.LSTMCell(1, name='lstm_cell2')
    cell = tf.nn.rnn_cell.MultiRNNCell([cell1, cell2])
    inputs = tf.random_normal((5, 10, 1))
    rnn1 = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    sess.run(tf.global_variables_initializer())
    # new saver with the first cell's variables
    saver = tf.train.Saver(vars0, max_to_keep=1)
    # fails
    saver.restore(sess, './save0')
    # should be the same
    vars0_val1 = sess.run(vars0)
    assert np.all(vars0_val1 == vars0_val)
The mistake comes from this line
saver = tf.train.Saver(vars0,max_to_keep=1)
in the second session. vars0 refers to actual tensor objects that existed in the previous graph, not the current one. Saver's var_list requires an actual set of tensors (not strings, which I had assumed would be good enough).
To make it work, the second Saver object should be initialized with the corresponding tensors in the current graph. Something like:
vars0_names = [v.name for v in vars0]
load_vars = [sess.graph.get_tensor_by_name(n) for n in vars0_names]
saver = tf.train.Saver(load_vars,max_to_keep=1)
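An equivalent approach, shown as a sketch below, is to match on variable names against the current graph's variable collection; this assumes the first model's variable names are reproduced exactly because the cells are constructed with the same name arguments:
vars0_names = set(v.name for v in vars0)
# select the current graph's variables whose names match those saved from the first model
load_vars = [v for v in tf.global_variables() if v.name in vars0_names]
saver = tf.train.Saver(load_vars, max_to_keep=1)
saver.restore(sess, './save0')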
Related
Problem:
I am very new to TensorFlow. My specific question is: what particular arguments should I put inside sess.run(fetches, feed_dict)? For instance, how can I find out what the values of the arguments should be?
Steps:
Here is my understanding of the steps after looking at other posts.
Save the trained TensorFlow model; it should consist of 4 files, below are my outputs:
checkpoint
Inception_resnet_v2.ckpt.data-00000-of-00001
Inception_resnet_v2.ckpt.index
Inception_resnet_v2.ckpt.meta
Resize the input image to whatever format is required by the neural network.
Start a TensorFlow session.
Retrieve the graph and the associated parameters and tensors.
Run prediction on the input image.
Code:
Training code:
https://github.com/taki0112/SENet-Tensorflow/blob/master/SE_Inception_resnet_v2.py
[Solved] Test code:
import tensorflow as tf
import numpy as np
import cv2
labels = ["airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"]
# Load graph and parameters, etc.
sess=tf.Session()
saver = tf.train.import_meta_graph('./model/Inception_resnet_v2.ckpt.meta')
saver.restore(sess, tf.train.latest_checkpoint("./model/"))
graph = tf.get_default_graph()
# Get tensor names
x = graph.get_tensor_by_name("Placeholder:0")
training_flag = graph.get_tensor_by_name("Placeholder_2:0")
op_to_restore = graph.get_tensor_by_name("final_fully_connected/dense/BiasAdd:0")
# Preprocess image input
src = cv2.imread("./input/car3.jpg")
dst = cv2.resize(src, (32, 32), interpolation=cv2.INTER_CUBIC)
b, g, r = cv2.split(dst)
# per-channel standardization
b = (b - np.mean(b)) / np.std(b) * .1
g = (g - np.mean(g)) / np.std(g) * .1
r = (r - np.mean(r)) / np.std(r) * .1
normalized = cv2.merge((b, g, r))
# feed the normalized image to the network, not the raw resize
picture = normalized.reshape(1, 32, 32, 3)
feed_dict ={x: picture, training_flag:False}
result_index = sess.run(op_to_restore,feed_dict)
print(result_index)
print (labels[np.argmax(result_index)])
The arguments actually depend on what you're doing, but mostly the first argument is the weights and placeholders. Whenever you work with TensorFlow, you define a graph that is fed examples (training data) and some hyperparameters like the learning rate, global step, etc. It's standard practice to feed all the training data and hyperparameters through placeholders. When you build a network using placeholders and save it, the network is saved, but the values of the placeholders are not.
Let's see a toy example:
import tensorflow as tf
#Prepare to feed input, i.e. feed_dict and placeholders
w1 = tf.placeholder("float", name="w1")
w2 = tf.placeholder("float", name="w2")
b1= tf.Variable(2.0,name="bias")
feed_dict ={w1:4,w2:8}
#Define a test operation that we will restore
w3 = tf.add(w1,w2)
w4 = tf.multiply(w3,b1,name="op_to_restore")
sess = tf.Session()
sess.run(tf.global_variables_initializer())
#Create a saver object which will save all the variables
saver = tf.train.Saver()
#Run the operation by feeding input
print(sess.run(w4, feed_dict))
#Prints 24, which is (w1+w2)*b1
#Now, save the graph
saver.save(sess, 'my_test_model',global_step=1000)
Now, when we want to restore the model, we not only have to restore the graph and weights but also prepare a new feed_dict to feed new data to the network. We can get references to these saved operations and placeholder variables via the graph.get_tensor_by_name() method. So if you want to train the same model further on new data, you would have to utilize those weights; if you just want to get predictions from the model you trained, you can use op_to_restore with a feed_dict containing the new data. Something like this, following the above example:
import tensorflow as tf
sess=tf.Session()
#First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess,tf.train.latest_checkpoint('./'))
# Now, let's access and create placeholders variables and
# create feed-dict to feed new data
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
feed_dict ={w1:13.0,w2:17.0}
#Now, access the op that you want to run.
op_to_restore = graph.get_tensor_by_name("op_to_restore:0")
print(sess.run(op_to_restore, feed_dict))
#This will print 60, which is calculated
#using the new values of w1 and w2 and the saved value of b1.
So this is how it works. In your case, since you're trying to load the Inception model, your op_to_restore depends on what you're trying to restore; if you can tell us what you're trying to do, it's easier to suggest something. As for the other parameter, feed_dict: it's just a numpy array of the pixels of the image you're trying to classify/predict.
I took the code from the following article, which will help you as well: http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/
Update: for your particular case, you may want to try the following code to predict the classes of new images.
import tensorflow as tf
slim = tf.contrib.slim
from inception_resnet_v2 import *
#Well, since you're using resnet_v2, this may be equivalent for you.
checkpoint_file = 'inception_resnet_v2_2016_08_30.ckpt'
sample_images = ['dog.jpg', 'panda.jpg']
#Load the model
sess = tf.Session()
# the input placeholder must exist before the network is built; 299x299 is the
# standard Inception input size (adjust it if your model differs)
input_tensor = tf.placeholder(tf.float32, shape=(None, 299, 299, 3), name='input_image')
arg_scope = inception_resnet_v2_arg_scope()
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(input_tensor, is_training=False)
# restore the pretrained weights into the freshly built graph
saver = tf.train.Saver()
saver.restore(sess, checkpoint_file)
#With this, you can evaluate the prediction ops as follows
predict_values, logit_values = sess.run([end_points['Predictions'], logits], feed_dict={input_tensor: im})
#Here im is the normalized numpy array of the image pixels.
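For completeness, a sketch of how im might be produced; the (x/255 - 0.5) * 2 scaling is the standard Inception preprocessing, but verify it against the pipeline your checkpoint was trained with:
import cv2
import numpy as np

def preprocess_inception(path, size=299):
    # read BGR, convert to RGB, resize to the network's expected input size
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (size, size)).astype(np.float32)
    # standard Inception scaling of pixels to [-1, 1]
    return ((img / 255.0) - 0.5) * 2.0

# stack the sample images into a batch for feed_dict
im = np.stack([preprocess_inception(p) for p in sample_images])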
Furthermore, the following resources may help you even more:
Using pre-trained inception_resnet_v2 with Tensorflow
https://github.com/tensorflow/tensorflow/issues/7172
I have a saved Tensorflow graph that consumes input through a placeholder with a feed_dict param.
sess.run(my_tensor, feed_dict={input_image: image})
Because feeding data with a Dataset iterator is more efficient, I want to load the saved graph, replace the input_image placeholder with an iterator, and run it. How can I do that? Is there a better way to do it? An answer with a code example would be highly appreciated.
You can achieve that by serializing your graph and re-importing it using tf.import_graph_def, which has an input_map argument used to plug in inputs at the desired places.
To do that you at least need to know the names of the inputs you replace and of the outputs you wish to execute (x and y respectively in my examples).
import tensorflow as tf

# restore graph (built from scratch here for the example)
x = tf.placeholder(tf.int64, shape=(), name='x')
y = tf.square(x, name='y')

# just for display -- you don't need to create a Session for serialization
with tf.Session() as sess:
    print("with placeholder:")
    for i in range(10):
        print(sess.run(y, {x: i}))

# serialize the graph
graph_def = tf.get_default_graph().as_graph_def()
tf.reset_default_graph()

# build new pipeline
batch = tf.data.Dataset.range(10).make_one_shot_iterator().get_next()
# plug in new pipeline
[y] = tf.import_graph_def(graph_def, input_map={'x:0': batch}, return_elements=['y:0'])

# enjoy Dataset inputs!
with tf.Session() as sess:
    print('with Dataset:')
    try:
        while True:
            print(sess.run(y))
    except tf.errors.OutOfRangeError:
        pass
Note that the placeholder node is still there, as I did not bother to parse graph_def to remove it -- you could remove it as an improvement, although I think it is also fine to leave it.
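If you do want to strip it, a minimal sketch (assuming the placeholder op is named 'x' and nothing else in the graph consumes it):
# build a pruned GraphDef without the dangling placeholder node
pruned = tf.GraphDef()
pruned.node.extend([n for n in graph_def.node if n.name != 'x'])
graph_def = pruned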
Depending on how you restore your graph, the input replacement may already be built into the loader, which makes things simpler (no need to go back to a GraphDef). For example, if you load your graph from a .meta file, you can use tf.train.import_meta_graph, which accepts the same input_map argument.
import tensorflow as tf

# build new pipeline
batch = tf.data.Dataset.range(10).make_one_shot_iterator().get_next()

# load your net and plug in new pipeline
# you need to know the name of the tensor where to plug in your input
restorer = tf.train.import_meta_graph(graph_filepath, input_map={'x:0': batch})
y = tf.get_default_graph().get_tensor_by_name('y:0')

# enjoy Dataset inputs!
with tf.Session() as sess:
    # not needed here, but in practice you would also need to restore weights
    # restorer.restore(sess, weights_filepath)
    print('with Dataset:')
    try:
        while True:
            print(sess.run(y))
    except tf.errors.OutOfRangeError:
        pass
I have seen variations of this question asked, but I haven't quite found a satisfactory answer yet. Basically, I would like to do the equivalent of Keras's model.to_json(), model.get_weights(), model.from_json(), model.set_weights() in TensorFlow. I think I am getting close, but I am stuck. I'd prefer to get the weights and graph in the same string, but I understand if that isn't possible.
Currently, what I have is:
from google.protobuf import json_format

g = optimizer.minimize(loss_op,
                       global_step=tf.train.get_global_step())
de = g.graph.as_graph_def()
json_string = json_format.MessageToJson(de)
gd = tf.GraphDef()
gd = json_format.Parse(json_string, gd)
That seems to create the graph fine, but obviously the variables, weights, etc. are not included. There is also the meta graph, but the only thing I see is export_meta_graph, which doesn't seem to serialize in the same manner. I saw that MetaGraph has a proto function, but I don't know how to serialize the variables with it.
So, in short: how would you take a TensorFlow model (weights, graph, etc.), serialize it to a string (preferably JSON), then deserialize it and continue training or serve predictions?
Here are things that got me close and that I have tried, but they mostly have the limitation of needing to write to disk, which I can't do in this case:
Gist on GitHub
This is the closest one I found, but the link to serializing a metagraph doesn't exist.
Note that the solution from @Maxim will create new operations in the graph each time it runs.
If you run the function very frequently, this will make your code slower and slower.
Two solutions to work around this problem:
Create the assign operations at the same time as the rest of the graph and reuse them:
assign_placeholders = []
assign_ops = []
for var in tf.trainable_variables():
    assign_placeholder = tf.placeholder(var.dtype, shape=var.get_shape())
    assign_placeholders.append(assign_placeholder)
    assign_ops.append(var.assign(assign_placeholder))
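A sketch of how those pre-built ops could then be reused (assign_placeholders and assign_ops as collected above; no new nodes are added per call):
def set_weights(sess, weights):
    # feed each new value through its pre-built placeholder/assign pair
    feed_dict = {ph: np.asarray(w) for ph, w in zip(assign_placeholders, weights)}
    sess.run(assign_ops, feed_dict=feed_dict)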
Use the load function on the variables. I prefer this one, as it removes the need for the code above:
self.params = tf.trainable_variables()

def get_weights(self):
    values = tf.get_default_session().run(self.params)
    return values

def set_weights(self, weights):
    for i, value in enumerate(weights):
        value = np.asarray(value)
        self.params[i].load(value, self.sess)
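For reference, here is Variable.load on its own, outside any class (a minimal sketch):
import tensorflow as tf
import numpy as np

v = tf.Variable(np.zeros(3), dtype=tf.float32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # load writes the value directly into the variable without adding new ops
    v.load(np.array([1., 2., 3.], dtype=np.float32), sess)
    print(sess.run(v))  # [1. 2. 3.]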
(I can't comment so I put this as an answer instead)
If you want the equivalent of Keras's Model.get_weights() and Model.set_weights(), these methods aren't strongly tied to Keras internals and can easily be extracted.
Original code
Here's what they look like in the Keras source code:
def get_weights(self):
    weights = []
    for layer in self.layers:
        weights += layer.weights
    return K.batch_get_value(weights)  # this is just `get_session().run(weights)`

def set_weights(self, weights):
    tuples = []
    for layer in self.layers:
        num_param = len(layer.weights)
        layer_weights = weights[:num_param]
        for sw, w in zip(layer.weights, layer_weights):
            tuples.append((sw, w))
        weights = weights[num_param:]
    K.batch_set_value(tuples)  # another wrapper over `get_session().run(...)`
Keras's weights is a list of numpy arrays (not JSON). As you can see, it uses the fact that the model architecture is known (self.layers), which allows it to reconstruct the correct mapping from variables to values. The seemingly non-trivial work is done in K.batch_set_value, but in fact it simply prepares assign ops and runs them in a session.
Getting and setting weights in pure tensorflow
import tensorflow as tf
import numpy as np

def tensorflow_get_weights():
    vars = tf.trainable_variables()
    values = tf.get_default_session().run(vars)
    return zip([var.name for var in vars], values)

def tensorflow_set_weights(weights):
    assign_ops = []
    feed_dict = {}
    for var_name, value in weights:
        var = tf.get_default_session().graph.get_tensor_by_name(var_name)
        value = np.asarray(value)
        assign_placeholder = tf.placeholder(var.dtype, shape=value.shape)
        assign_op = tf.assign(var, assign_placeholder)
        assign_ops.append(assign_op)
        feed_dict[assign_placeholder] = value
    tf.get_default_session().run(assign_ops, feed_dict=feed_dict)
Here I assume that you want to serialize/deserialize the whole model (i.e., all trainable variables) in the default session. If this is not the case, the functions above are easily customizable.
Testing
x = tf.placeholder(shape=[None, 5], dtype=tf.float32, name='x')
W = tf.Variable(np.zeros([5, 5]), dtype=tf.float32, name='W')
b = tf.Variable(np.zeros([5]), dtype=tf.float32, name='b')
y = tf.add(tf.matmul(x, W), b)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    # Save the weights
    w = tensorflow_get_weights()
    print(W.eval(), b.eval())
    # Update the model
    session.run([tf.assign(W, np.ones([5, 5])), tf.assign(b, np.ones([5]) * 2)])
    print(W.eval(), b.eval())
    # Restore the weights
    tensorflow_set_weights(w)
    print(W.eval(), b.eval())
If you run this test, you should see the model frozen at zeros, then updated, and then restored back to zeros.
You can use freeze_graph
This script is included in TensorFlow and lets you take a GraphDef proto, a SaverDef proto, and a set of variable values stored in a checkpoint file.
In this way you can output a GraphDef with all of the variable ops converted into const ops containing the values of the variables.
To restore a frozen model you have to reinitialize the graph and remap the inputs from the frozen model; see this example.
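To load a frozen GraphDef back in, something like this sketch works (the file name and output tensor name are placeholders; keeping name='' preserves the original tensor names):
import tensorflow as tf

def load_frozen_graph(path):
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        # import under an empty name scope so tensor names stay unchanged
        tf.import_graph_def(graph_def, name='')
    return graph

graph = load_frozen_graph('frozen_model.pb')
y = graph.get_tensor_by_name('y:0')  # hypothetical output tensor name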
Thanks to Maxim for getting me to the solution. I wanted to post an answer with both the graph and weights converted to JSON, for people who stumble across this problem. To serialize just the graph and not the weights, I created a gist that encapsulates what Maxim wrote here: Tensorflow graph with non json serialized weights
Now to serialize/deserialize both the graph and weights, I created a separate gist here: Tensorflow graph with json serialized weights and graph.
To run through the explanation: I first slightly tweaked the weight functions so that get weights does not return the variable names, and set weights grabs the current trainable variables instead. This is an important caveat, especially if the graph is slightly different from the current trainable variables:
import tensorflow as tf
import numpy as np
from google.protobuf import json_format
import json

def tensorflow_get_weights():
    vs = tf.trainable_variables()
    values = tf.get_default_session().run(vs)
    return values

def tensorflow_set_weights(weights):
    assign_ops = []
    feed_dict = {}
    vs = tf.trainable_variables()
    zipped_values = zip(vs, weights)
    for var, value in zipped_values:
        value = np.asarray(value)
        assign_placeholder = tf.placeholder(var.dtype, shape=value.shape)
        assign_op = var.assign(assign_placeholder)
        assign_ops.append(assign_op)
        feed_dict[assign_placeholder] = value
    tf.get_default_session().run(assign_ops, feed_dict=feed_dict)
Next, I created two utility functions that would convert weights to and from json:
def convert_weights_to_json(weights):
    weights = [w.tolist() for w in weights]
    weights_list = json.dumps(weights)
    return weights_list

def convert_json_to_weights(json_weights):
    loaded_weights = json.loads(json_weights)
    loaded_weights = [np.asarray(x) for x in loaded_weights]
    return loaded_weights
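A quick round-trip check of those helpers (a sketch with made-up arrays):
w = [np.eye(2, dtype=np.float32), np.zeros(3, dtype=np.float32)]
restored = convert_json_to_weights(convert_weights_to_json(w))
# the values survive the JSON round trip (dtypes widen to float64)
assert all(np.allclose(a, b) for a, b in zip(w, restored))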
Then I had a method that ran initially to kick off training. This method initializes the variables, runs the optimization, gets the weights and graph, and converts them to JSON. It looks like:
def run_initial_with_json_weights(opti, feed_dict):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(0, 250):
            sess.run(opti, feed_dict=feed_dict)
        first_weights = tensorflow_get_weights()
        g = tf.get_default_graph().as_graph_def()
        json_string = json_format.MessageToJson(g)
        return json_string, convert_weights_to_json(first_weights)
Now that we have the serialized weights and graph, if we want to continue training and/or make predictions, we can do the following. This method deserializes the GraphDef and weights, runs the optimization, then makes predictions.
def run_serialized(json_graph, json_weights, feed_dict):
    gd = tf.GraphDef()
    gd = json_format.Parse(json_graph, gd)
    weights = convert_json_to_weights(json_weights)
    with tf.Session() as sess:
        tf.import_graph_def(gd)
        sess.run(tf.global_variables_initializer())
        nu_out = tf.get_default_graph().get_tensor_by_name('outer/Sigmoid:0')
        mini = tf.get_default_graph().get_tensor_by_name('mini:0')
        tensorflow_set_weights(weights)
        for i in range(0, 50):
            sess.run(mini, feed_dict=feed_dict)
        predicted = sess.run(nu_out, feed_dict=feed_dict)
        return predicted
A full xor example is in the gist above.
I created a neural network model in TensorFlow.
I saved the model and restored it in another Python file.
The code is below:
def restoreModel():
    prediction = neuralNetworkModel(x)
    tf_p = tensorFlow.nn.softmax(prediction)
    temp = np.array([2, 1, 541, 161124, 3, 3])
    temp = np.vstack(temp)
    with tensorFlow.Session() as sess:
        new_saver = tensorFlow.train.import_meta_graph('model.ckpt.meta')
        new_saver.restore(sess, tensorFlow.train.latest_checkpoint('./'))
        all_vars = tensorFlow.trainable_variables()
        tensorFlow.initialize_all_variables().run()
        sess.run(tensorFlow.initialize_all_variables())
        predict = sess.run([tf_p], feed_dict={
            tensorFlow.transpose(x): temp,
            y: ***
        })
when "temp" variable in what I want to predict!
X is the vector shape, and I "transposed" it to match the shapes.
I dont understand what I need to write in feed_dict variable.
I am answering late, but maybe it can still be useful. feed_dict is used to give TensorFlow the values you want your placeholders to take. fetches (the first argument of run) is the list of results you want. The keys of feed_dict and the elements of fetches must be either the names of the tensors (I didn't try it, though) or tensors that you can get by:
graph = tf.get_default_graph()
var = graph.get_operation_by_name('name_of_operation').outputs[0]
Maybe graph.get_tensor_by_name('name_of_operation:0') works too, I didn't try.
By default, the names of placeholders are simply 'Placeholder', 'Placeholder_1', etc., following the order of creation in the graph definition.
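If you're unsure what your placeholders are called, a quick way to list them (a small sketch):
graph = tf.get_default_graph()
# print the name and shape of every placeholder op in the graph
for op in graph.get_operations():
    if op.type == 'Placeholder':
        print(op.name, op.outputs[0].get_shape())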
I am trying to build the simplest possible LSTM network. I just want it to predict the next value in the sequence np_input_data.
import tensorflow as tf
from tensorflow.python.ops import rnn_cell
import numpy as np

num_steps = 3
num_units = 1
np_input_data = [np.array([[1.], [2.]]), np.array([[2.], [3.]]), np.array([[3.], [4.]])]
batch_size = 2

graph = tf.Graph()
with graph.as_default():
    tf_inputs = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]
    lstm = rnn_cell.BasicLSTMCell(num_units)
    initial_state = state = tf.zeros([batch_size, lstm.state_size])
    loss = 0
    for i in range(num_steps - 1):
        output, state = lstm(tf_inputs[i], state)
        loss += tf.reduce_mean(tf.square(output - tf_inputs[i + 1]))

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    feed_dict = {tf_inputs[i]: np_input_data[i] for i in range(len(np_input_data))}
    loss = session.run(loss, feed_dict=feed_dict)
    print(loss)
The interpreter returns:
ValueError: Variable BasicLSTMCell/Linear/Matrix already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
output, state = lstm(tf_inputs[i], state)
What do I do wrong?
The call to lstm here:
for i in range(num_steps - 1):
    output, state = lstm(tf_inputs[i], state)
will try to create variables with the same name in each iteration unless you tell it otherwise. You can do this using tf.variable_scope:
with tf.variable_scope("myrnn") as scope:
    for i in range(num_steps - 1):
        if i > 0:
            scope.reuse_variables()
        output, state = lstm(tf_inputs[i], state)
The first iteration creates the variables that represent your LSTM parameters, and every subsequent iteration (after the call to reuse_variables) will just look them up in the scope by name.
I ran into a similar issue in TensorFlow v1.0.1 using tf.nn.dynamic_rnn. It turned out the error only arose if I had to re-train or cancel in the middle of training and then restart the training process; basically, the graph was not being reset.
Long story short: throw a tf.reset_default_graph() at the start of your code and it should help, at least when using tf.nn.dynamic_rnn and retraining.
Use tf.nn.rnn or tf.nn.dynamic_rnn, which do this, and a lot of other nice things, for you.
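For the toy problem above, a dynamic_rnn version might look like this sketch (it stacks the per-step inputs into a single [batch, time, 1] tensor; the loss mirrors the original next-step objective):
import tensorflow as tf
import numpy as np

batch_size, num_steps, num_units = 2, 3, 1
inputs = tf.placeholder(tf.float32, [batch_size, num_steps, 1])
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
# dynamic_rnn handles the unrolling and variable reuse internally
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# predict the next value: compare the output at step t with the input at step t+1
loss = tf.reduce_mean(tf.square(outputs[:, :-1, :] - inputs[:, 1:, :]))

# same data as above, reshaped to batch-major [2, 3, 1]
np_input_data = np.array([[[1.], [2.], [3.]], [[2.], [3.], [4.]]])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss, {inputs: np_input_data}))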