Initializing variables, variable scope and import_graph_def in TensorFlow (Python)

I have a number of related questions about TensorFlow behavior when attempting to do graph surgery using import_graph_def.

[Figure: 2 different graph surgeries]

In the image above, the bold red arrows represent 2 different graph surgeries. On the left, there are 2 graphs, g1 and g2, and the surgery consists of replacing a node in graph g2 with a node (and everything below it) from graph g1. How to do that is explained in this post. The surgery on the right, which involves replacing nodes that belong to the same graph, is one I haven't been able to figure out how to perform, or even whether it is possible at all. I ended up with this minimal example:
import numpy as np
import tensorflow as tf

with tf.Graph().as_default() as g1:
    with tf.variable_scope('foo', reuse=tf.AUTO_REUSE):
        x = tf.placeholder(dtype=tf.float64, shape=[2], name='x')
        c = tf.get_variable('c', initializer=tf.cast(1.0, tf.float64))
        y = tf.identity(2*x, 'y')
        z = tf.identity(3*x*c, 'z')

        g1_def = g1.as_graph_def()
        z1, = tf.import_graph_def(g1_def, input_map={'foo/x:0': y},
                                  return_elements=["foo/z:0"], name='z1')
        init_op = tf.global_variables_initializer()
        print(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='foo'))

with tf.Session(graph=g1) as sess:
    sess.run(init_op)
    print(sess.run(z, feed_dict={'foo/x:0': np.array([1.0, 2.0])}))
    print(sess.run(tf.report_uninitialized_variables()))
    # z1 = sess.run(z1, feed_dict={'foo/x:0': np.array([1.0, 2.0])})
This code runs as it is. The 3 prints yield respectively:
[<tf.Variable 'foo/c:0' shape=() dtype=float64_ref>]
[ 3. 6.]
[]
In particular, the last print indicates that there are no uninitialized variables. However, uncommenting the last line yields the error
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value foo/z1/foo/c
Note that if I remove c from the definition of z above, the commented line would also work. However, I would like to understand this error. To begin with, why is the variable reported as foo/z1/foo/c? Why does the scope foo appear twice? Why is nothing reported when I print the uninitialized variables? Why is only foo/c reported when I print the GLOBAL_VARIABLES collection under the scope foo?
PS: I suppose there is a simpler way to ask the question, namely: what is the TensorFlow analogue of
theano.clone(some_tensor, replace={input_var : replace_var})

To begin with, why is the variable reported as foo/z1/foo/c?
Why does the scope foo appear twice?
After you've called tf.import_graph_def(...), the graph is duplicated. The first copy lives under the scope foo. The second subgraph has been imported under the scope foo/z1 (because name='z1', plus foo is preserved from the scope above). So the graph g1 now contains the following tensors:
foo/x
foo/y
foo/c
...
foo/z1/foo/x
foo/z1/foo/y
foo/z1/foo/c
...
The first foo/c is initialized, but the second foo/z1/foo/c is not (see below).
Why is nothing reported when I print the uninitialized variables? Why is only foo/c reported when I print the GLOBAL_VARIABLES collection under the scope foo?
Since report_uninitialized_variables() scans LOCAL_VARIABLES and GLOBAL_VARIABLES by default, this is basically the same question.
And it probably is a bug: the GLOBAL_VARIABLES collection isn't updated after a tf.import_graph_def call. I say probably because GLOBAL_VARIABLES was designed as a mere convenience collection. TensorFlow tries to keep it up to date, but probably doesn't guarantee it always contains all variables. The fact that tf.add_to_collection exists publicly supports this idea -- one can add any value to any collection if they want to. Bottom line: this behavior may or may not change in future versions, but as of 1.5 the client is responsible for updating the global variables after a graph import.
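One way to see the discrepancy is to compare the variable ops that actually exist in the graph with the contents of the collection. A minimal sketch, assuming it is run inside the with tf.Graph().as_default() as g1: block right after the import (the VariableV2 op type corresponds to the classic variables used here; resource variables would show up as VarHandleOp):

# the imported variable op exists in the graph,
# but the GLOBAL_VARIABLES collection only knows about the original one
var_ops = [op.name for op in g1.get_operations()
           if op.type in ('VariableV2', 'VarHandleOp')]
print(var_ops)                                   # ['foo/c', 'foo/z1/foo/c']
print([v.name for v in tf.global_variables()])   # ['foo/c:0'] only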
In particular, the last print indicates that there are no uninitialized variables. However, uncommenting the last line yields the error
To fix this error, you simply need to run the initializer for the z1 subgraph. Like this:
# note that the init op is defined before `g1.as_graph_def()` so that it becomes part of the graph def
init_op = tf.global_variables_initializer()

g1_def = g1.as_graph_def()
z1, = tf.import_graph_def(g1_def, input_map={'foo/x:0': y},
                          return_elements=["foo/z:0"], name='z1')

# find the imported copy of the init op
z1_init_op = tf.get_default_graph().get_operation_by_name('foo/z1/foo/init')

...

sess.run(z1_init_op)
And voilà! You have the duplicated graphs, just like you wanted.
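For completeness, a hedged sketch of the session part, reusing the names from the snippets above (init_op, z1_init_op, z1) and assuming numpy is imported as np:

with tf.Session(graph=g1) as sess:
    sess.run(init_op)      # initializes foo/c
    sess.run(z1_init_op)   # initializes foo/z1/foo/c
    # z1 is the imported foo/z computed on y = 2*x, so this should print roughly [ 6. 12.]
    print(sess.run(z1, feed_dict={'foo/x:0': np.array([1.0, 2.0])}))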

I faced a similar issue but simply running the init operation didn't work.
I fixed it by manually running all "Assign" ops of the global variables of the imported graph.
In my scenario I want to run an encoding op 'z' with input 'patch:0' using two different input tensors.
with tf.Session(graph=tf.get_default_graph()).as_default() as sess:
    g = tf.Graph()
    saved_model = predictor.from_saved_model(args.export_dir, graph=g)  # predictor: presumably tf.contrib.predictor
    variables = g.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)

    fetch_ops = ['z:0', 'init']
    fetch_ops.extend([v.name.split(':')[0] + "/Assign" for v in variables])  # e.g. 'w:0' -> 'w/Assign'

    image_graph = tf.graph_util.import_graph_def(
        g.as_graph_def(),
        input_map={'patch:0': image},
        return_elements=fetch_ops,
        name='image')

    warped_graph = tf.graph_util.import_graph_def(
        g.as_graph_def(),
        input_map={'patch:0': warped_image},
        return_elements=fetch_ops,
        name='warp')

    loss = tf.reduce_sum(tf.math.squared_difference(image_graph[0], warped_graph[0]))

    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0001)
    compute_gradients = optimizer.compute_gradients(
        loss, var_list=[dest_control_point_locations])
    apply_gradients = optimizer.apply_gradients(compute_gradients, global_step=step)

    # run the imported 'init' and 'Assign' ops before using the outputs
    sess.run(image_graph[1:])
    sess.run(warped_graph[1:])
    sess.run(tf.global_variables_initializer())

    gradients = sess.run(compute_gradients)
When I extracted the operation and ran it by feeding my tensors with feed_dict, the gradient computation didn't work; that's why I used tf.graph_util.import_graph_def(...).
Hope this might help anyone facing the same issue.
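A distilled, self-contained sketch of the same pattern, hedged: the graph and names below (patch, w, z) are made up for illustration, and TF 1.x non-resource variables are assumed, so the initializer op of a variable w is named w/Assign.

import tensorflow as tf

# Build a tiny graph whose graph def contains a variable and its initial-value Assign op.
with tf.Graph().as_default() as g:
    x = tf.placeholder(tf.float32, shape=[2], name='patch')
    w = tf.get_variable('w', initializer=tf.constant([2.0, 3.0]))
    z = tf.multiply(x, w, name='z')
    graph_def = g.as_graph_def()
    variables = g.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)

with tf.Graph().as_default():
    image = tf.constant([1.0, 1.0])
    # fetch the output plus each variable's '<name>/Assign' op from the imported copy
    fetches = ['z:0'] + [v.name.split(':')[0] + '/Assign' for v in variables]
    out = tf.import_graph_def(graph_def, input_map={'patch:0': image},
                              return_elements=fetches, name='copy')
    with tf.Session() as sess:
        sess.run(out[1:])        # run the imported Assign ops: initializes the copied variable
        print(sess.run(out[0]))  # [2. 3.]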

Related

What is not feedable in tensorflow?

I've tried the following code, but I can't find anything that is not feedable in TensorFlow. Could anybody show me what is not feedable?
#!/usr/bin/env python
# vim: set noexpandtab tabstop=2 shiftwidth=2 softtabstop=-1 fileencoding=utf-8:
import tensorflow as tf

x = tf.Variable(3)
y = tf.constant(3)
z = tf.add(1, 2)

with tf.Session() as sess:
    print(sess.graph.is_feedable(x))
    print(sess.graph.is_feedable(y))
    print(sess.graph.is_feedable(z))
All tensors are feedable (including constants, as you can see), unless they are explicitly excluded from feeding via the tf.Graph.prevent_feeding method. This method can be called directly or indirectly; for example, that is what the tf.contrib.util.constant_value function does:
NOTE: If constant_value(tensor) returns a non-None result, it will no longer be possible to feed a different value for tensor. This allows the result of this function to influence the graph that is constructed, and permits static shape optimizations.
Sample code:
y = tf.constant(3)
tf.contrib.util.constant_value(y)  # 3

with tf.Session() as sess:
    print(sess.graph.is_feedable(y))  # False!
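For the direct route, a minimal sketch using the tf.Graph.prevent_feeding method mentioned above (TF 1.x; the tensor names here are made up):

import tensorflow as tf

a = tf.placeholder(tf.float32, name='a')
b = a * 2.0

g = tf.get_default_graph()
g.prevent_feeding(b)  # explicitly mark b as not feedable

with tf.Session() as sess:
    print(sess.graph.is_feedable(a))  # True
    print(sess.graph.is_feedable(b))  # False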

Using op inputs when defining custom gradients in TensorFlow

I'm trying to define a gradient method for my custom TF operation. Most of the solutions I have found online seem to be based on a gist by harpone. I'm reluctant to use that approach as it uses py_func, which won't run on a GPU. I found another solution here that uses tf.identity(); it looks more elegant and I think it will run on a GPU. However, I have some problems accessing the inputs of the ops in my custom gradient function. Here's my code:
import tensorflow as tf

@tf.RegisterGradient('MyCustomGradient')
def _custom_gradient(op, gradients):
    x = op.inputs[0]
    return x

def my_op(w):
    return tf.pow(w, 3)

var_foo = tf.Variable(5, dtype=tf.float32)
bar = my_op(var_foo)

g = tf.get_default_graph()
with g.gradient_override_map({'Identity': 'MyCustomGradient'}):
    bar = tf.identity(bar)

g = tf.gradients(bar, var_foo)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(g))
I was expecting _custom_gradient() to return the input to the op (5 in this example), but instead it seems to return the op's output multiplied by the incoming gradient. My custom my_op will contain non-differentiable operations like tf.sign, and I'd like to define my custom gradient based on the inputs. What am I doing wrong?
There is no problem with your code:
Let's first do the forward pass:
var_foo = 5 -> bar = 125 -> tf.identity(bar) = 125
Now let's backpropagate:
The gradient of tf.identity(bar) with respect to its argument bar equals (by your definition) bar itself, that is, 125. The gradient of bar with respect to var_foo equals 3 times the square of var_foo, which is 75. Multiplying the two gives 9375, which is indeed the output of your code.
op.inputs[0] contains the forward-pass value of the op's input. In this case, the input to the identity op is 125 (the output of my_op), not the original 5.
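If the goal is for the custom gradient to actually use the op's input rather than replace the upstream gradient outright, a hedged sketch along these lines may help; the gradient name MySignGradient and the sign-based rule are made up for illustration:

import tensorflow as tf

@tf.RegisterGradient('MySignGradient')
def _my_sign_gradient(op, grad):
    x = op.inputs[0]          # value flowing into tf.identity in the forward pass (w**3 here)
    return grad * tf.sign(x)  # example rule: propagate only the sign of that input

w = tf.Variable(5.0)
g = tf.get_default_graph()
with g.gradient_override_map({'Identity': 'MySignGradient'}):
    out = tf.identity(tf.pow(w, 3))

dw = tf.gradients(out, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # upstream grad (1) * sign(125) = 1, then chained through pow: 3 * w**2 = 75.0
    print(sess.run(dw))  # [75.0]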

TensorFlow (v1.1.0) Multi-RNN BasicLSTMCell Error ('reuse' parameter) Python 3.5

An expansion on: What is the use of a "reuse" parameter of tf.contrib.layers functions?.
The question: Although this issue has been brought up on github and will likely be addressed in another release of TensorFlow, I have found no existing solution for the time being; is there a stop-gap measure that might work in the meantime?
Code:
state_size = 4
def lstm_cell():
if 'reuse' in inspect.getargspec(tf.contrib.rnn.BasicLSTMCell.__init__).args:
return tf.contrib.rnn.BasicLSTMCell(state_size, forget_bias=0.0, state_is_tuple=True, reuse=tf.get_variable_scope().reuse)
else:
return tf.contrib.rnn.BasicLSTMCell(state_size, forget_bias=0.0, state_is_tuple=True)
cell = lstm_cell()
cell = rnn.DropoutWrapper(cell, output_keep_prob=0.5)
cell = rnn.MultiRNNCell([cell] * num_layers, state_is_tuple=True)
states_series, current_state = tf.nn.dynamic_rnn(cell, tf.expand_dims(batchX_placeholder, -1), initial_state=rnn_tuple_state)
states_series = tf.reshape(states_series, [-1, state_size])
The function lstm_cell() is a suggestion from https://github.com/tensorflow/models/blob/master/tutorials/rnn/ptb/ptb_word_lm.py. It reflects the fact that the newest version of TensorFlow includes a 'reuse' parameter for BasicLSTMCell().
In this code, if I set reuse to False, the tf.nn.dynamic_rnn line produces the error:
"ValueError: Variable
rnn/multi_rnn_cell/cell_0/basic_lstm_cell/weights already exists,
disallowed. Did you mean to set reuse=True in VarScope? Originally
defined at:..."
If I set reuse to True, the error is:
"ValueError: Attempt to reuse RNNCell
with a different variable scope than its first
use. First use of cell was with scope
'rnn/multi_rnn_cell/cell_0/basic_lstm_cell', this attempt is with
scope 'rnn/multi_rnn_cell/cell_1/basic_lstm_cell'. Please create a
new instance of the cell if you would like it to use a different set
of weights. If before you were using:
MultiRNNCell([BasicLSTMCell(...)] * num_layers), change to:
MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)]). If
before you were using the same cell instance as both the forward and
reverse cell of a bidirectional RNN, simply create two instances (one
for forward, one for reverse). In May 2017, we will start
transitioning this cell's behavior to use existing stored weights, if
any, when it is called with scope=None (which can lead to silent
model degradation, so this error will remain until then.)"
Lastly, passing scope=None to dynamic_rnn makes no difference either.
Have you considered trying what the reuse=True error is suggesting? A sketch applying it to your code follows the quote.
If before you were using: MultiRNNCell([BasicLSTMCell(...)] * num_layers), change to: MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)]).
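Applied to the snippet in the question, that suggestion would look roughly like this (a hedged sketch; state_size, num_layers, batchX_placeholder and rnn_tuple_state are assumed to be defined as in the question):

def lstm_cell():
    # build a fresh cell (and wrapper) per layer instead of reusing one instance
    cell = tf.contrib.rnn.BasicLSTMCell(state_size, forget_bias=0.0, state_is_tuple=True)
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.5)

cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)],
                                   state_is_tuple=True)
states_series, current_state = tf.nn.dynamic_rnn(
    cell, tf.expand_dims(batchX_placeholder, -1), initial_state=rnn_tuple_state)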
The following code snippet works for me (already answered here):
def lstm_cell():
    cell = tf.contrib.rnn.NASCell(state_size, reuse=tf.get_variable_scope().reuse)
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.8)

rnn_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)],
                                        state_is_tuple=True)
outputs, current_state = tf.nn.dynamic_rnn(rnn_cells, x, initial_state=rnn_tuple_state)

Avoid cluttering the tensorflow graph with assign operations

I have to run something like the following code
import tensorflow as tf

sess = tf.Session()
x = tf.Variable(42.)
for i in range(10000):
    sess.run(x.assign(42.))
    sess.run(x)
    print(i)
several times. The actual code is much more complicated and uses more variables.
The problem is that each call instantiates a new assign op, which makes the graph grow and eventually slows down the computation.
I could use feed_dict= to set the value, but I would like to keep my state in the graph, so that I can easily query it in other places.
Is there some way of avoiding cluttering the current graph in this case?
I think I've found a good solution for this:
I define a placeholder y and create an op that assigns the value of y to x.
I can then use that op repeatedly, using feed_dict={y: value} to assign a new value to x.
This doesn't add another op to the graph.
It turns out that the loop runs much more quickly than before as well.
import tensorflow as tf

sess = tf.Session()
x = tf.Variable(42.)
y = tf.placeholder(dtype=tf.float32)
assign = x.assign(y)

sess.run(tf.initialize_all_variables())
for i in range(10000):
    sess.run(assign, feed_dict={y: i})
    print(i, sess.run(x))
Each time you call sess.run(x.assign(42.)), two things happen: (i) a new assign operation is added to the computational graph sess.graph, and (ii) the newly added operation executes. No wonder the graph gets pretty large if the loop repeats many times. If you define the assignment operation once before execution (asgnmnt_operation in the example below), just a single operation is added to the graph, so the performance is good:
import tensorflow as tf

x = tf.Variable(42.)
c = tf.constant(42.)
asgnmnt_operation = x.assign(c)

sess = tf.Session()
for i in range(10000):
    sess.run(asgnmnt_operation)
    sess.run(x)
    print(i)

How do I get the current value of a Variable?

Suppose we have a variable:
x = tf.Variable(...)
This variable can be updated during the training process using the assign() method.
What is the best way to get the current value of a variable?
I know we could use this:
session.run(x)
But I'm afraid this would trigger a whole chain of operations.
In Theano, you could just do
y = theano.shared(...)
y_vals = y.get_value()
I'm looking for the equivalent thing in TensorFlow.
The only way to get the value of the variable is by running it in a session. In the FAQ it is written that:
A Tensor object is a symbolic handle to the result of an operation, but does not actually hold the values of the operation's output.
So the TF equivalent would be:
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    v = sess.run(x)
    print(v)  # will show you your variable
The part with init = tf.global_variables_initializer() is important and is needed to initialize the variables.
Also, take a look at InteractiveSession if you work in IPython.
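For example, a minimal TF 1.x sketch: with an InteractiveSession installed as the default session, initializer.run() and eval() work without passing a session explicitly.

import tensorflow as tf

sess = tf.InteractiveSession()   # installs itself as the default session
x = tf.Variable([1.0, 2.0])
x.initializer.run()              # runs against the default session
print(x.eval())                  # [1. 2.]
sess.close()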
In general, session.run(x) will evaluate only the nodes that are necessary to compute x and nothing else, so it should be relatively cheap if you want to inspect the value of the variable.
Take a look at this great answer https://stackoverflow.com/a/33610914/5543198 for more context.
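A small hedged check of that claim: an unrelated assign_add op sitting in the graph is not executed when only x is fetched.

import tensorflow as tf

x = tf.Variable([1.0, 2.0])
bump = x.assign_add([10.0, 10.0])   # unrelated op; runs only if explicitly requested
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    print(sess.run(x))   # [1. 2.]  -- fetching x did not trigger the assign_add
    sess.run(bump)
    print(sess.run(x))   # [11. 12.]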
tf.Print can simplify your life!
tf.Print will print the value of the tensor(s) you pass to it at the point in the graph where the tf.Print op was inserted, whenever that part of the graph is evaluated.
So for example:
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
x = tf.Print(x, [x])   # prints the value of x at this point in the graph
x = 2 * x

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
sess.run(x)
[1.0 2.0 ]
because it prints the value of x at the point where the tf.Print op sits. If instead you do
v = x.eval(session=sess)
print(v)
you will get:
[2.0 4.0 ]
because it will give you the final value of x.
Since TensorFlow 2.0 removed sessions and made eager execution the default, if you want to extract values from a tensor (e.g. net), you can call .numpy() on it directly, for example net[tf.newaxis, :, :].numpy() or simply net.numpy().
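A minimal TF 2.x sketch of that, with eager execution on by default:

import tensorflow as tf   # TensorFlow 2.x

x = tf.Variable([1.0, 2.0])
print(x.numpy())          # [1. 2.]

x.assign([3.0, 4.0])
print(x.numpy())          # [3. 4.]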
