This is a beginner question on learning TensorFlow. I'm used to playing around with machine learning models in an interactive shell like a Jupyter notebook. I understand TensorFlow adopts a lazy execution style, so I can't easily print tensors to check them.
After some research, I found two workarounds: tf.InteractiveSession() and tf.enable_eager_execution(). From what I understand, both allow me to print variables as I write them. Is this correct? And is one preferable?
When using tf.InteractiveSession() you are still in lazy (graph) execution, so you can't print variable values directly; printing a tensor only shows its symbolic handle.
sess = tf.InteractiveSession()
a = tf.random.uniform(shape=(2,3))
print(a) # <tf.Tensor 'random_uniform:0' shape=(2, 3) dtype=float32>
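That said, tf.InteractiveSession() installs itself as the default session, so you can still fetch values explicitly with Tensor.eval(). A minimal sketch:
sess = tf.InteractiveSession()
a = tf.random.uniform(shape=(2, 3))
print(a.eval())  # evaluates the tensor in the default session and prints the actual values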
When using tf.enable_eager_execution(), you can see variable values directly:
tf.enable_eager_execution()
a = tf.random.uniform(shape=(2,3))
print(a) # prints out value
I want to use tf.print to show a tensor's value, but it produces no output. This is my code; is there something wrong with it?
from __future__ import print_function
import tensorflow as tf
sess = tf.InteractiveSession()
a = tf.constant([1.0, 3.0])
tf.print(a)
From the documentation of tf.Print (which is deprecated in favor of tf.print):
Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators.
This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode:
sess = tf.Session()
with sess.as_default():
    tensor = tf.range(10)
    print_op = tf.print(tensor)
    with tf.control_dependencies([print_op]):
        out = tf.add(tensor, tensor)
    sess.run(out)
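Applied to the snippet from the question, the simplest graph-mode fix is to run the returned print op explicitly. A minimal sketch:
sess = tf.InteractiveSession()
a = tf.constant([1.0, 3.0])
print_op = tf.print(a)
sess.run(print_op)  # executing the op prints [1 3]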
Hence, if you enable eager mode your code will work as you expected; if you want to keep using static-graph mode, you have to use sess.run.
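For completeness, a minimal sketch of the eager variant (eager execution must be enabled at program startup, before any ops are created):
import tensorflow as tf
tf.enable_eager_execution()
a = tf.constant([1.0, 3.0])
tf.print(a)  # in eager mode the op executes immediately and prints [1 3]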
import tensorflow as tf
a = tf.constant([1.0, 3.0])
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(a))
is what I'd do: import TensorFlow, set up your variables, set up and run the initializer for them, and then print the result of the session evaluating the constant.
I am running a neural network program from the shell. It runs correctly, but it prints a lot of information besides my output that I really don't need; the detail is given in the picture attached.
I haven't written anything in my code to print this unnecessary information.
It looks like you might have log_device_placement turned on. You can turn this off by either removing it from the config in tf.Session() entirely or setting it to False.
# Example 1
sess = tf.Session(config=tf.ConfigProto(log_device_placement=False))
# Example 2
sess = tf.Session()
References:
Tensorflow - Using GPUs
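As a side note, if the remaining console noise comes from TensorFlow's own C++ startup logging rather than device placement (an assumption, since the picture isn't reproduced here), a common way to silence it is the TF_CPP_MIN_LOG_LEVEL environment variable, set before importing TensorFlow:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # 0 = all logs, 1 = hide INFO, 2 = hide INFO and WARNING, 3 = errors only
import tensorflow as tf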
Here is a great question on how to find the first occurrence of a NaN in a TensorFlow graph:
Debugging nans in the backward pass
The answer is quite helpful; here is the code from it:
train_op = ...
check_op = tf.add_check_numerics_ops()
sess = tf.Session()
sess.run([train_op, check_op]) # Runs training and checks for NaNs
Apparently, running the training and the numerical check at the same time will produce an error report as soon as a NaN is encountered for the first time.
How do I integrate this into Keras?
In the documentation, I can't find anything that looks like this.
I checked the code, too.
The update step is executed here:
https://github.com/fchollet/keras/blob/master/keras/engine/training.py
There is a function called _make_train_function where an operation to compute the loss and apply updates is created. This is later called to train the network.
I could change the code like this (always assuming that we're running on a tf backend):
check_op = tf.add_check_numerics_ops()
self.train_function = K.function(inputs,
                                 [self.total_loss] + self.metrics_tensors + [check_op],
                                 updates=updates, name='train_function',
                                 **self._function_kwargs)
I'm currently trying to set this up properly and am not sure whether the code above actually works.
Maybe there is an easier way?
I've been running into the exact same problem and found an alternative to the add_check_numerics_ops() function. Instead of going that route, I use the TensorFlow Debugger (tfdbg) to step through my model, following the example in https://www.tensorflow.org/guide/debugger, to figure out exactly where my code produces NaNs. The snippet below replaces the TensorFlow session that Keras is using with a debugging session, allowing you to use tfdbg.
from keras import backend as K
from tensorflow.python import debug as tf_debug

sess = K.get_session()
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
K.set_session(sess)
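With the wrapped session in place, training proceeds as usual and the tfdbg CLI opens on the first run; model, x_train and y_train below are placeholders for your own objects:
model.fit(x_train, y_train, epochs=1)  # the tfdbg CLI starts when this triggers a session run
# At the tfdbg prompt (see the linked guide):
#   run -f has_inf_or_nan    # run until a tensor contains a NaN or Inf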
I am working with TensorFlow 0.12 and am having a problem with casting.
The following snippet of code does a strange thing:
sess = tf.InteractiveSession()
a = tf.constant(1)
b = tf.cast(a, tf.float32)
print b.eval()
I get a value:
6.86574233e-36
I also tried using tf.to_float() and tf.saturate_cast. Both gave the same result.
Please help.
sess = tf.InteractiveSession()
a = tf.constant(1, tf.int64)  # <-- declare the dtype explicitly
b = tf.cast(a, tf.float32)
print b.eval() # 1.0
You need to declare the dtype for your tf.constant: https://www.tensorflow.org/api_docs/python/tf/constant
Since I see that this is still getting some attention, I should mention that newer versions of TensorFlow do not show this behavior; I suggest working with TensorFlow version 1.13 or higher.
I checked the code under both Python 3 and Python 2 for the same TensorFlow version, and it works correctly; in both cases I got the following output (shown here with Python 2 syntax):
print b.eval()
1.0
I would suggest checking the TensorFlow installation or the virtualenv; there is no error in your program.
import tensorflow as tf
sess = tf.InteractiveSession()
a = tf.constant(1)
b = tf.cast(a, tf.float32)
print b.eval()
This is an online environment for TF: https://codeenv.com/env/run/gXGpnR/. To test your code there:
1. click on test_tf.py
2. add your code
3. in the CLI on the left side, type ipython test_tf.py
TensorFlow has two ways to evaluate part of a graph: Session.run on a list of variables, and Tensor.eval. Is there a difference between the two?
If you have a Tensor t, calling t.eval() is equivalent to calling tf.get_default_session().run(t).
You can make a session the default as follows:
t = tf.constant(42.0)
sess = tf.Session()
with sess.as_default():  # or `with sess:` to close on exit
    assert sess is tf.get_default_session()
    assert t.eval() == sess.run(t)
The most important difference is that you can use sess.run() to fetch the values of many tensors in the same step:
t = tf.constant(42.0)
u = tf.constant(37.0)
tu = tf.mul(t, u)
ut = tf.mul(u, t)
with sess.as_default():
    tu.eval()  # runs one step
    ut.eval()  # runs one step
    sess.run([tu, ut])  # evaluates both tensors in a single step
Note that each call to eval and run will execute the whole graph from scratch. To cache the result of a computation, assign it to a tf.Variable.
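For example, a minimal sketch of that caching pattern (the names are illustrative):
expensive = tf.reduce_sum(tf.random_uniform((1000, 1000)))  # stand-in for a costly computation
cache = tf.Variable(0.0)
store = tf.assign(cache, expensive)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(store)  # runs the expensive computation once and stores the result
    sess.run(cache)  # reads the stored value; `expensive` is not re-executed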
The FAQ section on TensorFlow has an answer to exactly this question. I will just go ahead and leave it here:
If t is a Tensor object, t.eval() is shorthand for sess.run(t) (where sess is the current default session). The two following snippets of code are equivalent:
sess = tf.Session()
c = tf.constant(5.0)
print sess.run(c)
c = tf.constant(5.0)
with tf.Session():
    print c.eval()
In the second example, the session acts as a context manager, which has the effect of installing it as the default session for the lifetime of the with block. The context manager approach can lead to more concise code for simple use cases (like unit tests); if your code deals with multiple graphs and sessions, it may be more straightforward to make explicit calls to Session.run().
I'd recommend that you at least skim throughout the whole FAQ, as it might clarify a lot of things.
eval() cannot handle a list object:
tf.reset_default_graph()

a = tf.Variable(0.2, name="a")
b = tf.Variable(0.3, name="b")
z = tf.constant(0.0, name="z0")
for i in range(100):
    z = a * tf.cos(z + i) + z * tf.sin(b - i)
grad = tf.gradients(z, [a, b])

init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    print("z:", z.eval())
    print("grad", grad.eval())  # fails: 'list' object has no attribute 'eval'
but Session.run() can
print("grad", sess.run(grad))
correct me if I am wrong
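As a side note, each element of the returned list is itself a Tensor, so it can be evaluated individually, although every eval() call is a separate graph execution while sess.run(grad) fetches both gradients in one step:
print("grad", [g.eval() for g in grad])  # works, but runs the graph once per element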
The most important thing to remember:
The only way to get a constant or variable (any result) out of TensorFlow is through a session.
Knowing this, everything else is easy: both tf.Session.run() and tf.Tensor.eval() get results from a session, where tf.Tensor.eval() is a shortcut for calling tf.get_default_session().run(t).
I would also point out the method tf.Operation.run(), described here:
After the graph has been launched in a session, an Operation can be executed by passing it to tf.Session.run(). op.run() is a shortcut for calling tf.get_default_session().run(op).
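A minimal sketch of that shortcut:
init = tf.global_variables_initializer()  # an Operation, not a Tensor
with tf.Session():
    init.run()  # same as tf.get_default_session().run(init)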
Tensorflow 2.x Compatible Answer: Converting mrry's code to Tensorflow 2.x (>= 2.0) for the benefit of the community.
!pip install tensorflow==2.1
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
t = tf.constant(42.0)
sess = tf.compat.v1.Session()
with sess.as_default():  # or `with sess:` to close on exit
    assert sess is tf.compat.v1.get_default_session()
    assert t.eval() == sess.run(t)

# The most important difference is that you can use sess.run() to fetch the values of many tensors in the same step:
t = tf.constant(42.0)
u = tf.constant(37.0)
tu = tf.multiply(t, u)
ut = tf.multiply(u, t)
with sess.as_default():
    tu.eval()  # runs one step
    ut.eval()  # runs one step
    sess.run([tu, ut])  # evaluates both tensors in a single step
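For reference, in native TF 2.x (a fresh process, without disabling eager execution) no session is needed at all; values can be read directly:
import tensorflow as tf

t = tf.constant(42.0)
u = tf.constant(37.0)
print(tf.multiply(t, u).numpy())  # 1554.0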
In TensorFlow you create graphs and pass values to them. The graph does all the hard work and generates the output based on the configuration you have made in it.
Now, when you want to pass values to the graph, you first need to create a TensorFlow session:
tf.Session()
Once the session is initialized, you are supposed to use it, because all the variables and settings are now part of it. So, there are two ways to pass external values to the graph so that the graph accepts them. One is to call .run() while you are using the session being executed.
The other way, which is basically a shortcut for this, is to use .eval(). I say shortcut because the full form of .eval() is:
tf.get_default_session().run(values)
You can check that yourself: in place of values.eval(), run tf.get_default_session().run(values). You should get the same behavior.
What eval does is use the default session and then execute run().
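A minimal sketch demonstrating the equivalence:
import tensorflow as tf

values = tf.constant([1.0, 2.0])
with tf.Session():
    print(values.eval())                         # [1. 2.]
    print(tf.get_default_session().run(values))  # identical output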