Tensorflow - casting from int to float strange behavior - python

I am working with TensorFlow 0.12 and am having a problem with casting.
The following snippet of code does a strange thing:
import tensorflow as tf

sess = tf.InteractiveSession()
a = tf.constant(1)
b = tf.cast(a, tf.float32)
print b.eval()
I get a value:
6.86574233e-36
I also tried using tf.to_float() and tf.saturate_cast. Both gave the same result.
Please help.

import tensorflow as tf

sess = tf.InteractiveSession()
a = tf.constant(1, tf.int64)  # <-------- declare the dtype explicitly
b = tf.cast(a, tf.float32)
print b.eval()  # 1.0
You need to declare the dtype for your tf.constant: https://www.tensorflow.org/api_docs/python/tf/constant
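As a quick sanity check (not part of the original answer), you can inspect which dtype tf.constant inferred via the tensor's dtype attribute:
a = tf.constant(1)
print a.dtype  # <dtype: 'int32'>, the default picked for a Python int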

Since I see that this is still getting some attention, I should mention that newer versions of TensorFlow do not show this behavior. I suggest working with TensorFlow 1.13 or higher.
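For reference, a minimal sketch of the same cast on a newer install (assuming TF 2.x, where eager execution is the default):
import tensorflow as tf

a = tf.constant(1)
b = tf.cast(a, tf.float32)
print(b)  # tf.Tensor(1.0, shape=(), dtype=float32)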

I checked the code in Python 3 and Python 2 for the same TensorFlow version, and it works correctly in both cases; with Python 2 I got the following output:
print b.eval()
1.0
I would suggest checking the tensorflow installation or the virtualenv.
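A quick way to confirm which interpreter and TensorFlow build the snippet is actually running against (a small diagnostic sketch):
import sys
import tensorflow as tf

print sys.executable    # path of the Python running this code
print tf.__version__    # installed TensorFlow version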

No error in your program.
import tensorflow as tf
sess = tf.InteractiveSession()
a = tf.constant(1)
b = tf.cast(a, tf.float32)
print b.eval()
This is an online environment for TF: https://codeenv.com/env/run/gXGpnR/
To test your code there:
1. click on test_tf.py
2. add your code
3. in the left-side CLI, type ipython test_tf.py

Related

Upgrading from tensorflow 1.x to 2.0

I am new to tensorflow.
I have tried this simple example:
import tensorflow as tf
sess = tf.Session()
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = x + y
print(sess.run(z, feed_dict={x: 3.0, y: 4.5}))
and got the warning "The name tf.Session is deprecated. Please use tf.compat.v1.Session instead." along with the right answer, 7.5.
After reading here, I understand that the warnings are due to the upgrade from TF 1.x to 2.0. The steps described there are "simple", but they don't give any example.
I have tried:
@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)
print(f1(tf.constant(3.0), tf.constant(4.5)))
Is my code correct (in the sense defined in the link)?
Now I am getting Tensor("PartitionedCall:0", shape=(), dtype=float32) as output. How can I get the actual value?
Your code is indeed correct. The warning you get indicates that, as of TensorFlow 2.0, tf.Session() won't exist in the API. Therefore, if you want your code to be compatible with TensorFlow 2.0, you should use tf.compat.v1.Session instead. So, just change this line:
sess = tf.Session()
To:
sess = tf.compat.v1.Session()
Then, even if you update Tensorflow from 1.xx to 2.xx, your code would execute in the same way. As for the code in Tensorflow 2.0:
@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)
print(f1(tf.constant(3.0), tf.constant(4.5)))
it is fine if you run it in Tensorflow 2.0. If you want to run the same code without installing Tensorflow 2.0, you can do the following:
import tensorflow as tf
tf.enable_eager_execution()
@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)
print(f1(tf.constant(3.0), tf.constant(4.5)).numpy())
The reason is that, starting from TensorFlow 2.0, the default way of executing TensorFlow operations is eager mode. The way to activate eager mode in TensorFlow 1.xx is to enable it right after the import of TensorFlow, as I am doing in the example above.
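One caveat on the tf.compat.v1.Session route: under an actual TensorFlow 2.x install, tf.compat.v1.placeholder only works in graph mode, so a fully migrated version of the original snippet would look roughly like this (a sketch, assuming TF 2.x):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # placeholders require graph mode in TF 2.x
sess = tf.compat.v1.Session()
x = tf.compat.v1.placeholder(tf.float32)
y = tf.compat.v1.placeholder(tf.float32)
z = x + y
print(sess.run(z, feed_dict={x: 3.0, y: 4.5}))  # 7.5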
Your code is correct as per TensorFlow 2.0. TensorFlow 2.0 is even more tightly integrated with NumPy, so if you want to get the result of the operation, you can use the numpy() method:
print(f1(tf.constant(3.0), tf.constant(4.5)).numpy())

tf.InteractiveSession() or tf.enable_eager_execution()

This is a beginner question on learning TensorFlow. I'm used to playing around with machine learning models in an interactive shell like a Jupyter notebook. I understand TensorFlow adopts a lazy execution style, so I can't easily print tensors to check them.
After some research, I found two workarounds: tf.InteractiveSession() or tf.enable_eager_execution(). From what I understand, both allow me to print variables as I write them. Is this correct? And is there a preference?
When using tf.InteractiveSession() you are still in lazy execution, so you can't print variable values directly; you only get to see symbolic tensors.
sess = tf.InteractiveSession()
a = tf.random.uniform(shape=(2,3))
print(a) # <tf.Tensor 'random_uniform:0' shape=(2, 3) dtype=float32>
When using tf.enable_eager_execution() you can get to see variable values.
tf.enable_eager_execution()
a = tf.random.uniform(shape=(2,3))
print(a) # prints out value
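Note that tf.InteractiveSession() does install itself as the default session, so you can still pull a concrete value out explicitly with Tensor.eval(); a small sketch to complement the above:
import tensorflow as tf

sess = tf.InteractiveSession()
a = tf.random.uniform(shape=(2, 3))
print(a.eval())  # evaluates a against the default interactive session
sess.close()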

Getting AttentionWrapper issue in spellchecker program

https://github.com/Currie32/Spell-Checker.
In the code at the above link, I'm getting an error that DynamicAttentionWrapper is not defined. I'm using TensorFlow version 1.2. I haven't been able to overcome this error. Please help me with this.
There is an issue with DynamicAttentionWrapper in your version of TensorFlow: the class was renamed.
Try changing DynamicAttentionWrapper to AttentionWrapper, or downgrade to TensorFlow 1.1.
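A minimal sketch of the rename (dec_cell, attn_mech and rnn_size are placeholder names standing in for the repo's decoder cell, attention mechanism and layer size; the exact keyword arguments may differ between versions):
# TensorFlow 1.1 (old name):
# attn_cell = tf.contrib.seq2seq.DynamicAttentionWrapper(dec_cell, attn_mech, rnn_size)

# TensorFlow 1.2+ (renamed):
attn_cell = tf.contrib.seq2seq.AttentionWrapper(dec_cell, attn_mech,
                                                attention_layer_size=rnn_size)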
For your version of tensorflow, try these changes for initial_state, inference_logits and training_logits:
initial_state = dec_cell.zero_state(batch_size=batch_size, dtype=tf.float32).clone(cell_state=enc_state)

inference_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
                                                           output_time_major=False,
                                                           impute_finished=True,
                                                           maximum_iterations=max_target_length)

training_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
                                                          output_time_major=False,
                                                          impute_finished=True,
                                                          maximum_iterations=max_target_length)

tensorflow sess does not close automatically

I have the following code:
import tensorflow as tf
a = tf.constant([1,2,3])
with tf.Session() as sess:
    print('12')
    print(sess.run(a))
However, I get results
12
[1 2 3]
when I run the code. It seems the sess does not close automatically. Can someone explain how it happens? Thanks.
I can reproduce it with tensorflow 1.7.
This has been fixed in version 1.8 of tensorflow.
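If you want to verify that the with block does close the session, try using it afterwards; in TF 1.x a closed session raises a RuntimeError (a small sketch):
import tensorflow as tf

a = tf.constant([1, 2, 3])
with tf.Session() as sess:
    print(sess.run(a))  # [1 2 3]

# the context manager has closed the session at this point
try:
    sess.run(a)
except RuntimeError as e:
    print(e)  # Attempted to use a closed Session.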

Keras + Tensorflow: Debug NaNs

Here is a great question on how to find the first occurrence of NaN in a tensorflow graph:
Debugging nans in the backward pass
The answer is quite helpful, here is the code from it:
train_op = ...
check_op = tf.add_check_numerics_ops()
sess = tf.Session()
sess.run([train_op, check_op]) # Runs training and checks for NaNs
Apparently, running the training and the numerical check at the same time will result in an error report as soon as a NaN is encountered for the first time.
How do I integrate this into Keras?
In the documentation, I can't find anything that looks like this.
I checked the code, too.
The update step is executed here:
https://github.com/fchollet/keras/blob/master/keras/engine/training.py
There is a function called _make_train_function where an operation to compute the loss and apply updates is created. This is later called to train the network.
I could change the code like this (always assuming that we're running on a tf backend):
check_op = tf.add_check_numerics_ops()
self.train_function = K.function(inputs,
                                 [self.total_loss] + self.metrics_tensors + [check_op],
                                 updates=updates, name='train_function',
                                 **self._function_kwargs)
I'm currently trying to set this up properly and am not sure whether the code above actually works.
Maybe there is an easier way?
I've been running into the exact same problem, and found an alternative to the add_check_numerics_ops() function. Instead of going that route, I use the TensorFlow Debugger to walk through my model, following the example in https://www.tensorflow.org/guide/debugger to figure out exactly where my code produces NaNs. This snippet should work for replacing the TensorFlow session that Keras is using with a debugging session, allowing you to use tfdbg.
from keras import backend as K
from tensorflow.python import debug as tf_debug
sess = K.get_session()
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
K.set_session(sess)
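As a usage note, you can also register the stock NaN/Inf tensor filter on the wrapped session and then, inside the tfdbg CLI, run until it triggers with run -f has_inf_or_nan (this follows the debugger guide linked above):
from keras import backend as K
from tensorflow.python import debug as tf_debug

sess = tf_debug.LocalCLIDebugWrapperSession(K.get_session())
# flag the first tensor that contains a NaN or Inf
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)
K.set_session(sess)
# now call model.fit(...) and, in the tfdbg CLI, type: run -f has_inf_or_nan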
