tf.print doesn't print with sess on node being evaluated - python

Using tf print documentation
I wrote
print_op = tf.print("tensors:", cut_points[0,0,:], output_stream=sys.stderr)
with tf.control_dependencies([print_op]):
    return cut_points
But there is no output to stderr whatsoever (I see other logs, and the session does evaluate this point).

tf.control_dependencies only affects new operations created within the context. In your snippet, you are not creating any new operation inside the context, so it has no effect. The simplest solution is to use a tf.identity operation, which produces the same result but carries the control dependency:
print_op = tf.print("tensors:", cut_points[0,0,:], output_stream=sys.stderr)
with tf.control_dependencies([print_op]):
    return tf.identity(cut_points)
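As a self-contained sketch (TF 1.x graph mode assumed; the cut_points tensor and the wrapper name are made up for illustration), evaluating the returned tensor in a session makes the print fire:
import sys
import tensorflow as tf  # TF 1.x graph mode assumed
cut_points = tf.random.uniform([2, 3, 4])  # hypothetical stand-in for the real tensor
def with_debug_print(cut_points):
    print_op = tf.print("tensors:", cut_points[0, 0, :], output_stream=sys.stderr)
    with tf.control_dependencies([print_op]):
        # tf.identity is created inside the context, so it carries the dependency
        return tf.identity(cut_points)
out = with_debug_print(cut_points)
with tf.Session() as sess:
    sess.run(out)  # the debug line appears on stderr when `out` is evaluated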

Related

What is the relationship between `tf.function` and `autograph.to_graph` in Tensorflow?

Similar results can be obtained via tf.function and autograph.to_graph.
However, this seems to be version dependent.
For example, the function (taken from the official guide):
def square_if_positive(x):
    if x > 0:
        x = x * x
    else:
        x = 0.0
    return x
Can be evaluated in graph mode using:
autograph.to_graph in TF 1.14
tf_square_if_positive = autograph.to_graph(square_if_positive)
with tf.Graph().as_default():
    g_out = tf_square_if_positive(tf.constant(9.0))
    with tf.Session() as sess:
        print(sess.run(g_out))
tf.function in TF2.0
@tf.function
def square_if_positive(x):
    if x > 0:
        x = x * x
    else:
        x = 0.0
    return x
square_if_positive(tf.constant(9.0))
So:
What is the relationship between tf.function and autograph.to_graph? One can assume tf.function uses autograph.to_graph (as well as autograph.to_code) internally, but this is far from obvious.
Is the autograph.to_graph snippet still supported in TF2.0 (since it requires tf.Session)? It is present in the autograph doc in TF 1.14, but not in the corresponding doc of TF 2.0
I covered and answered all your questions in a three-part article: "Analyzing tf.function to discover AutoGraph strengths and subtleties": part 1, part 2, part 3.
To summarize and answer your questions:
What is the relationship between tf.function and autograph.to_graph?
tf.function uses AutoGraph by default. What happens the first time you invoke a tf.function-decorated function is:
The function body is executed (as in TensorFlow 1.x, i.e. without eager mode) and its execution is traced (now tf.function knows which nodes are present, which branch of the if to keep, and so on).
At the same time, AutoGraph kicks in and tries to convert the Python statements it knows into tf.* calls (while -> tf.while_loop, if -> tf.cond, ...).
Merging the information from points 1 and 2, a new graph is built and, based on the function name and the types of the parameters, it is cached in a map (see the articles for a better understanding).
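As a rough, self-contained illustration of this trace-then-cache behaviour (TF 2.x assumed; the print inside the body is there only to show when tracing happens):
import tensorflow as tf  # TF 2.x assumed
@tf.function
def square_if_positive(x):
    # A Python print runs only while the function is being traced,
    # not on every call, which makes the trace-then-cache behaviour visible.
    print("tracing for input:", x)
    if x > 0:
        x = x * x
    else:
        x = 0.0
    return x
print(square_if_positive(tf.constant(9.0)))   # traces, builds and runs the graph
print(square_if_positive(tf.constant(-3.0)))  # same signature: reuses the cached graph
print(square_if_positive(9.0))                # new Python-value signature: traces again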
Is the autograph.to_graph snippet still supported in TF2.0?
Yes, tf.autograph.to_graph is still present and it creates a session internally for you (in TF2 you don't have to worry about sessions).
At any rate, I suggest you read the three articles linked since they cover in detail this and other peculiarities of tf.function.
@nessuno's answer is excellent and it helped me a lot. Actually, the doc of tf.autograph.to_graph explains the relationship between AutoGraph and tf.function directly:
Unlike tf.function, to_graph is a low-level transpiler that converts Python code to TensorFlow graph code. It does not implement any caching, variable management or create any actual ops, and is best used where greater control over the generated TensorFlow graph is desired. Another difference from tf.function is that to_graph will not wrap the graph into a TensorFlow function or a Python callable. Internally, tf.function uses to_graph.
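A small sketch of the contrast the doc describes (TF 2.x assumed):
import tensorflow as tf  # TF 2.x assumed
def square_if_positive(x):
    if x > 0:
        x = x * x
    else:
        x = 0.0
    return x
# Low level: to_graph just transpiles the Python source and returns a plain
# Python function; it does not build a graph, cache anything, or create ops.
# (It can then be called like the original function, e.g. inside a tf.Graph
# as in the TF 1.14 snippet above.)
converted = tf.autograph.to_graph(square_if_positive)
print(tf.autograph.to_code(square_if_positive))  # inspect the generated source
# High level: tf.function wraps the internally converted code into a callable
# that handles tracing, caching and execution for you.
fn = tf.function(square_if_positive)
print(fn(tf.constant(9.0)))  # tf.Tensor(81.0, shape=(), dtype=float32)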

How to use TensorFlow tf.print with non capital p?

I have some TensorFlow code in a custom loss function.
I'm using tf.Print(node, [debug1, debug2], "print my debugs: ")
It works fine, but TF says tf.Print is deprecated and will be removed once I update TensorFlow, and that I should be using tf.print(), with a lowercase p.
I've tried using tf.print the same way I would tf.Print(), but it's not working: once I fit my model in Keras, I get an error. Unlike tf.Print, tf.print seems to take arbitrary inputs (*args and **kwargs), so what am I supposed to give it? And unlike tf.Print, it does not seem to return something that I can inject into the computational graph.
It's really difficult to search because all the information online is about tf.Print().
Can someone explain how to use tf.print()?
Edit: Example code
def custom_loss(y_true, y_pred):
    loss = K.mean(...)
    print_no_op = tf.Print(loss, [loss, y_true, y_true.shape], "Debug output: ")
    return print_no_op
model.compile(loss=custom_loss)
Both the documentation of tf.print and tf.Print mention that tf.print returns an operation with no output, so it cannot be evaluated to any value. The syntax of tf.print is meant to be more similar to Python's builtin print. In your case, you could use it as follows:
def custom_loss(y_true, y_pred):
    loss = K.mean(...)
    print_op = tf.print("Debug output:", loss, y_true, y_true.shape)
    with tf.control_dependencies([print_op]):
        return K.identity(loss)
Here K.identity creates a new tensor identical to loss but with a control dependency to print_op, so evaluating it will force executing the printing operation. Note that Keras also offers K.print_tensor, although it is less flexible than tf.print.
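For completeness, a rough sketch of the K.print_tensor alternative mentioned above (the loss body here is a placeholder, not the asker's actual loss):
from tensorflow.keras import backend as K
def custom_loss(y_true, y_pred):
    loss = K.mean(K.square(y_pred - y_true))  # placeholder for the actual loss
    # K.print_tensor returns a tensor identical to `loss`; evaluating the
    # returned tensor also prints it, so no explicit control dependency is needed.
    return K.print_tensor(loss, message="Debug output: ")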
Just a little addition to jdehesa's excellent answer:
tf.tuple can be used to couple the print operation with another operation, which will then run with that operation whichever session executes the graph. Here's how that is done:
print_op = tf.print(something_you_want_to_print)
some_tensor_list = tf.tuple([some_tensor], control_inputs=[print_op])
# Use some_tensor_list[0] instead of some_tensor below.
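Applied to the custom loss from the question, a minimal sketch (again with a placeholder loss body) could look like:
import tensorflow as tf
from tensorflow.keras import backend as K
def custom_loss(y_true, y_pred):
    loss = K.mean(K.square(y_pred - y_true))  # placeholder for the actual loss
    print_op = tf.print("Debug output:", loss, y_true, tf.shape(y_true))
    # tf.tuple couples the loss with the print op: evaluating the returned
    # tensor also runs the print.
    return tf.tuple([loss], control_inputs=[print_op])[0]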

what happens when I write a function using tensorflow ops

I write a function using TensorFlow ops. I know that when I run the function, it adds many ops to the graph. But I am confused about how to get access to these ops.
for example:
def assign_weights():
    with tf.name_scope('zheng'):
        v = tf.Variable(0, name='v', dtype=tf.float32)
        b = tf.placeholder(tf.float32, shape=())
        z = tf.assign(v, b)
        return z, b
I can use feed_dict to pass a value to b only if I set b as a return value; otherwise, there is no way to access b. If we want to access many ops in the function scope, we have to return many values, which is very ugly.
I want to know what happens under the hood when I run functions using tensorflow and how to get access of the ops in the function scope.
Thank you!
Obviously, it's true that to access an op (or tensor) we need some reference to it. IMHO, one standard workaround is to build your graph in a class and make certain tensors attributes of the class and access them through the object.
Alternatively, if you're more inclined to the functional approach, a better way than returning all relevant ops and tensors separately would be to return a dict (or namedtuple).
Additionally, there are also specialized functions that return ops by name: e.g. get_operation_by_name.
As an aside to this question, you might also want to try out eager execution, which is imperative.
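A minimal sketch of the class-based approach (illustrative names, TF 1.x graph mode assumed):
import tensorflow as tf  # TF 1.x graph mode assumed
class AssignWeights:
    def __init__(self):
        with tf.name_scope('zheng'):
            self.v = tf.Variable(0.0, name='v', dtype=tf.float32)
            self.b = tf.placeholder(tf.float32, shape=())
            self.z = tf.assign(self.v, self.b)
model = AssignWeights()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Every op/tensor is reachable as an attribute, nothing has to be returned.
    print(sess.run(model.z, feed_dict={model.b: 3.0}))  # -> 3.0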
Three things happen when you call an op function:
create and add a compute node to the default graph
set your inputs as the node's input tensors
set the node's output tensor as the return value
For example, a = tf.add(b, c, name='add'):
adds a node with op Add to the default graph, with name 'add'
sets b and c as the node's input tensors
sets the node's output tensor, with name 'add:0', as the value of a
So you can access nodes via sess.graph; there are many functions to access nodes, e.g. get_operation_by_name.
Also, you can inspect the graph via sess.graph_def, which is the graph serialized with protobuf; you can find the protobuf definitions in the TensorFlow source code under tensorflow/core/framework, in the .proto files there.
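A small sketch of that name-based access, using the a = tf.add(b, c, name='add') example (TF 1.x assumed):
import tensorflow as tf  # TF 1.x graph mode assumed
b = tf.constant(1.0)
c = tf.constant(2.0)
a = tf.add(b, c, name='add')
with tf.Session() as sess:
    op = sess.graph.get_operation_by_name('add')  # the Add node itself
    t = sess.graph.get_tensor_by_name('add:0')    # its first output tensor
    print(op.type)      # -> 'Add'
    print(sess.run(t))  # -> 3.0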

Tensorflow: Is it always more convenient to use InteractiveSession() compared to Session()?

It seems to be more convenient to simply use something like sub.eval() instead of sess.run(sub), so would it be more convenient to always use InteractiveSession()? Are there any tradeoffs if we were to use InteractiveSession() all the time?
So far the only 'disadvantage' I see is that I can't use something like:
with tf.InteractiveSession() as sess:
    result = product.eval() # where product is a simple matmul
    print(result)
sess.close()
Instead, I have to just define sess = tf.InteractiveSession() right away.
From its implementation, InteractiveSession sets itself as the default session, so your subsequent eval() calls can use this session. You should be able to use InteractiveSession in almost all the cases where you would use Session.
One small difference is that you don't need to use InteractiveSession in a with block:
sess = tf.InteractiveSession()
# do your work
sess.close()
So don't forget to close the session after doing your work.
Here is a comparison between session.run() and eval(): In TensorFlow, what is the difference between Session.run() and Tensor.eval()?
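To make the difference concrete, here is a small sketch (TF 1.x assumed) of the same evaluation with and without InteractiveSession:
import tensorflow as tf  # TF 1.x assumed
product = tf.matmul(tf.constant([[2.0]]), tf.constant([[3.0]]))
# Regular Session: eval() needs the session passed explicitly (or use sess.run).
with tf.Session() as sess:
    print(product.eval(session=sess))  # or: sess.run(product)
# InteractiveSession: installs itself as the default session, so bare eval()
# works, but you are responsible for closing it.
sess = tf.InteractiveSession()
print(product.eval())
sess.close()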

Is it possible to modify an existing TensorFlow computation graph?

A TensorFlow graph is usually built gradually from inputs to outputs and then executed. Looking at the Python code, the input lists of operations are immutable, which suggests that the inputs should not be modified. Does that mean that there is no way to update/modify an existing graph?
The TensorFlow tf.Graph class is an append-only data structure, which means that you can add nodes to the graph after executing part of the graph, but you cannot remove or modify existing nodes. Since TensorFlow executes only the necessary subgraph when you call Session.run(), there is no execution-time cost to having redundant nodes in the graph (although they will continue to consume memory).
To remove all nodes in the graph, you can create a session with a new graph:
with tf.Graph().as_default():  # Create a new graph, and make it the default.
    with tf.Session() as sess:  # `sess` will use the new, currently empty, graph.
        pass  # Build graph and execute nodes in here.
Yes, a tf.Graph is built in an append-only fashion, as @mrry puts it.
But there's a workaround:
Conceptually you can modify an existing graph by cloning it and performing the needed modifications along the way. As of r1.1, TensorFlow provides a module named tf.contrib.graph_editor which implements the above idea as a set of convenient functions.
In addition to what @zaxily and @mrry say, I want to provide an example of how to actually modify the graph. In short:
one cannot modify existing operations; all ops are final and non-mutable
one may copy an op, modify its inputs or attributes, and add the new op back to the graph
all downstream ops that depend on the new/copied op have to be recreated. Yes, a significant portion of the graph would be copied, but that is not a problem
The code:
import tensorflow as tf
import copy
import tensorflow.contrib.graph_editor as ge
from copy import deepcopy
a = tf.constant(1)
b = tf.constant(2)
c = a + b
def modify(t):
    # illustrate operation copy & modification
    new_t = deepcopy(t.op.node_def)
    new_t.name = new_t.name + "_but_awesome"
    new_t = tf.Operation(new_t, tf.get_default_graph())
    # we got an op, let's return a tensor
    return new_t.outputs[0]
def update_existing(target, updated):
    # illustrate how to use the new op
    related_ops = ge.get_backward_walk_ops(target, stop_at_ts=updated.keys(), inclusive=True)
    new_ops, mapping = ge.copy_with_input_replacements(related_ops, updated)
    new_op = mapping._transformed_ops[target.op]
    return new_op.outputs[0]
new_a = modify(a)
new_b = modify(b)
injection = new_a + 39  # illustrate how to add another op to the graph
new_c = update_existing(c, {a: injection, b: new_b})
with tf.Session():
    print(c.eval())      # -> 3
    print(new_c.eval())  # -> 42
For TensorFlow v>=2.6, using a Graph directly has been deprecated:
A tf.Graph can be constructed and used directly without a tf.function, as was required in TensorFlow 1, but this is deprecated and it is recommended to use a tf.function instead. If a graph is directly used, other deprecated TensorFlow 1 classes are also required to execute the graph, such as a tf.compat.v1.Session.
That being said, I think your question is still relevant; the kind of problem you are facing might be solved using TensorFlow eager execution. While running TF in eager mode, you can run and modify the computation step by step and test it before building a graph.
TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive python interpreter.
However, be careful: eager mode trades performance/speed for debugging/flexibility, so for production you might consider turning it off.
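As a tiny illustration of that imperative style (TF 2.x, eager execution on by default), results are concrete values you can inspect and modify step by step:
import tensorflow as tf  # TF 2.x, eager execution enabled by default
a = tf.constant(1)
b = tf.constant(2)
c = a + b
print(c.numpy())  # -> 3, a concrete value, no Session required
# "Modifying" the computation is just running different Python code.
a = a + 39
print((a + b).numpy())  # -> 42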
Lastly, there is another feature of TensorFlow that might be relevant for this problem: tensor slicing, tf.slice.
