Printing a TensorFlow object without using session - python

I am receiving different errors when trying to print a TensorFlow object.
import numpy as np
import tensorflow as tf
The versions of both TensorFlow and numpy are 2.6 and 1.19.5, respectively.
print("np version:", np.__version__)
print("tf version:" ,tf.version.VERSION)
print("eager is on? ", tf.executing_eagerly())
#np version: 1.19.5
#tf version: 2.6.0
#eager is on? True
Now, let me create a small array and turn it into a tf object.
arr = [0, 1.2, -0.8]
arr = tf.constant(arr, dtype=tf.float32)
When I use tf.print(arr) or tf.compat.v1.print(arr), nothing happens. And when I call .numpy(), I receive an error.
tf.compat.v1.print(arr.numpy())
AttributeError: 'Tensor' object has no attribute 'numpy'
The only thing that has worked so far is:
with tf.compat.v1.Session() as sess: print(arr.eval())
#[ 0. 1.2 -0.8]
However, I would like to use numpy, since my goal is to print certain features of the network during the training phase. For instance, to print the learning rate, I call with tf.compat.v1.Session() as sess: print(model.optimizer.learning_rate.eval()). Yet this returns another error.
'ExponentialDecay' object has no attribute 'eval'
I was able to use numpy to print everything before; however, I updated both the TensorFlow and numpy packages and am now facing many incompatibilities. The worst thing is that I don't remember which versions I was using.
I followed every step explained in this post: AttributeError: 'Tensor' object has no attribute 'numpy'. It did not help me.

The following code gives me an output:
import numpy as np
import tensorflow as tf
print("np version:", np.__version__)
print("tf version:" ,tf.version.VERSION)
print("eager is on? ", tf.executing_eagerly())
tf.compat.v1.enable_eager_execution()  # tf.enable_eager_execution was removed in TF 2.x; the compat call is a no-op if eager is already on
arr= [0,1.2,-0.8]
arr = tf.constant(arr, dtype = tf.float32)
tf.compat.v1.print(arr.numpy())
Output: array([ 0. , 1.2, -0.8], dtype=float32)
Did you add tf.compat.v1.enable_eager_execution()?
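As a side note on the learning-rate part of the question: in TF 2.x with eager execution on, a learning-rate schedule such as ExponentialDecay is a callable, so you can evaluate it at the optimizer's current step instead of opening a session. A minimal sketch, assuming a compiled Keras model named model:
lr_schedule = model.optimizer.learning_rate            # the ExponentialDecay object
current_lr = lr_schedule(model.optimizer.iterations)   # evaluate the schedule at the current step
print("current learning rate:", current_lr.numpy())
This avoids .eval() entirely, which is why the 'ExponentialDecay' object has no attribute 'eval' error goes away: a schedule is not a tensor, it is a function of the step.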

Related

WARNING:tensorflow:AutoGraph could not transform <function _preprocess at 0x7f8ff80cd160> and will run it as-is

I'm trying to run the code found in https://www.tensorflow.org/probability/examples/Probabilistic_Layers_VAE .
I'm using Python version 3.9 and my TensorFlow version is >2.0. The code is as follows:
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
tfk = tf.keras
tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions
datasets, datasets_info = tfds.load(name='mnist', with_info=True, as_supervised=False)
def _preprocess(sample):
    image = tf.cast(sample['image'], tf.float32) / 255  # Scale to [0, 1]
    image = image < tf.random.uniform(tf.shape(image))  # Gives 0, 1 when compared to a random number
    return image, image

train_dataset = (datasets['train']
                 .map(_preprocess)
                 .batch(256)
                 .prefetch(tf.data.experimental.AUTOTUNE)
                 .shuffle(int(10e3)))
What I get is the following warning:
WARNING:tensorflow:AutoGraph could not transform <function _preprocess at 0x7f8ff80cd160> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
The warning is related to the last part of the code, but I can't tell whether it will affect how the code runs. If it won't, is there a way to consistently suppress such warnings?
This is an API conflict between TensorFlow and Python 3.9. Note that, as of today (2021-04-07), official releases of TensorFlow support only Python versions 3.6 to 3.8. TensorFlow 2.5 should officially support Python 3.9.
You can either:
Downgrade your Python version to 3.8
Downgrade the gast package to 0.3.3, as mentioned in this GitHub issue: Report: AutoGraph could not transform, module 'gast' has no attribute 'Index' #44146
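A third option is to just silence the warning the way the warning message itself suggests: the mapping still runs correctly as-is, so you can opt the function out of AutoGraph conversion. A minimal sketch, assuming TF 2.x and the same _preprocess function as above:
import tensorflow as tf

@tf.autograph.experimental.do_not_convert
def _preprocess(sample):
    image = tf.cast(sample['image'], tf.float32) / 255
    image = image < tf.random.uniform(tf.shape(image))
    return image, image
Alternatively, pinning gast with pip install gast==0.3.3 addresses the root cause, as the linked issue describes.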

Can't convert a tf.data.Dataset object to a numpy iterator

I am using Tensorflow 1.14.0 and tensorflow_datasets 1.2.0
When trying to run the following code
import tensorflow as tf
import tensorflow_datasets as tfds
smallnorb = tfds.load("smallnorb")
smallnorb_train, smallnorb_test = smallnorb["train"], smallnorb["test"]
assert isinstance(smallnorb_train, tf.data.Dataset)
smallnorb_train = smallnorb_train.as_numpy_iterator()
I get the following error
AttributeError: 'DatasetV1Adapter' object has no attribute 'as_numpy_iterator'
According to the tensorflow_datasets docs this should work.
Why won't it? And why am I getting a DatasetV1Adapter object in the first place?
You are using the wrong tensorflow and tensorflow_datasets versions.
Please use 2.x unless you need 1.x for some very specific reason.
This code works if you use tensorflow 2.1.0 and tensorflow_datasets 2.0.0. The proper documentation for the 1.x tf.data.Dataset can be found here, and indeed it has no such method.
As @szymon mentioned, tensorflow-1.14 does not support as_numpy_iterator. You should move your code to tf>=2.0.
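If you do need to stay on TF 1.14, tensorflow_datasets itself offers a workaround: tfds.as_numpy converts a tf.data.Dataset into an iterable of NumPy arrays. A minimal sketch, assuming tensorflow_datasets 1.2.0 still exposes this helper:
import tensorflow_datasets as tfds

smallnorb = tfds.load("smallnorb")
smallnorb_train = tfds.as_numpy(smallnorb["train"])
for example in smallnorb_train:    # each example is a dict of NumPy arrays
    print(example["image"].shape)  # "image" is one of smallnorb's feature keys
    break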
A handy tip I frequently use is firing up a Python REPL and using dir(tf.data.Dataset) to list all the attributes and methods that can be called on that object. You can further use help(tf.data.Dataset.xxx) to see the parameters and return values of a method.
>>> import tensorflow as tf
>>> dir(tf.data.Dataset)
... <output>
>>> help(tf.data.Dataset.from_tensor_slices)
... and so on
If you do the same, you'll find that as_numpy_iterator is not present in the dir(tf.data.Dataset) output, hence the error.

Upgrading from tensorflow 1.x to 2.0

I am new to tensorflow.
I have tried this simple example:
import tensorflow as tf
sess = tf.Session()
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = x + y
print(sess.run(z, feed_dict={x: 3.0, y: 4.5}))
and got the warning "The name tf.Session is deprecated. Please use tf.compat.v1.Session instead." along with the right answer: 7.5.
After reading here, I understand that the warnings are due to upgrading from tf 1.x to 2.0. The steps described are "simple", but they don't give any example.
I have tried:
@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)

print(f1(tf.constant(3.0), tf.constant(4.5)))
Is my code correct (in the sense defined in the link)?
Now I am getting Tensor("PartitionedCall:0", shape=(), dtype=float32) as output. How can I get the actual value?
Your code is indeed correct. The warning you get indicates that, as of Tensorflow 2.0, tf.Session() no longer exists in the API. Therefore, if you want your code to be compatible with Tensorflow 2.0, you should use tf.compat.v1.Session instead. So, just change this line:
sess = tf.Session()
To:
sess = tf.compat.v1.Session()
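Note that tf.placeholder also moved to tf.compat.v1 in TF 2.x, and feed_dict only works with eager execution disabled, so the fully compatible version of the whole snippet would look like this (a sketch, one possible migration path):
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores graph mode so placeholders and feed_dict work under TF 2.x

sess = tf.Session()
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = x + y
print(sess.run(z, feed_dict={x: 3.0, y: 4.5}))  # 7.5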
Then, even if you update Tensorflow from 1.xx to 2.xx, your code would execute in the same way. As for the code in Tensorflow 2.0:
@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)

print(f1(tf.constant(3.0), tf.constant(4.5)))
it is fine if you run it in Tensorflow 2.0. If you want to run the same code, without installing Tensorflow 2.0, you can do the following:
import tensorflow as tf
tf.enable_eager_execution()
@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)

print(f1(tf.constant(3.0), tf.constant(4.5)).numpy())
The reason is that, starting from Tensorflow 2.0, the default way of executing Tensorflow operations is eager mode. The way to activate eager mode in Tensorflow 1.xx is to enable it right after importing Tensorflow, as in the example above.
Your code is correct as per Tensorflow 2.0. Tensorflow 2.0 is even more tightly integrated with numpy, so if you want to get the result of the operation, you can use the numpy() method:
print(f1(tf.constant(3.0), tf.constant(4.5)).numpy())

module 'tensorflow._api.v2.train' has no attribute 'GradientDescentOptimizer'

I used Python 3.7.3 and installed tensorflow 2.0.0-alpha0, but there are some problems, such as:
module 'tensorflow._api.v2.train' has no attribute 'GradientDescentOptimizer'
Here's all my code:
import tensorflow as tf
import numpy as np
x_data = np.random.rand(1, 10).astype(np.float32)
y_data = x_data * 0.1 + 0.3
Weights = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))
y = Weights * x_data + biases
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weights), sess.run(biases))
In TensorFlow 2.0, Keras became the default high-level API, and the optimizers now live in tf.keras.optimizers (also exposed under the alias tf.optimizers). They inherit from the Keras Optimizer class, and the old optimizers from tf.train are not included in TF 2.0. So, to access the equivalent of GradientDescentOptimizer, call tf.optimizers.SGD.
You are using Tensorflow 2.0.
The following code will be helpful:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
This is because you are using TensorFlow version 2.
`tf.train.GradientDescentOptimizer(0.5)`
The above call is for TensorFlow version 1 (e.g. 1.15.0).
You can try pip install tensorflow==1.15.0 to downgrade TensorFlow and use the code as it is.
Otherwise, use TensorFlow version 2 (which you already have) with the following call:
tf.optimizers.SGD(learning_rate=0.001, momentum=0.0, nesterov=False, name='SGD')
For the answer @HoyeolKim gave, it may be necessary to add:
tf.disable_v2_behavior()
As it is suggested in this answer.
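For completeness, here is a sketch of the same linear fit written natively for TF 2.x, using tf.optimizers.SGD together with tf.GradientTape instead of sessions (eager execution is the default, so no Session or global initializer is needed):
import tensorflow as tf
import numpy as np

x_data = np.random.rand(1, 10).astype(np.float32)
y_data = x_data * 0.1 + 0.3

weights = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))
optimizer = tf.optimizers.SGD(0.5)

for step in range(201):
    with tf.GradientTape() as tape:           # record ops for autodiff
        y = weights * x_data + biases
        loss = tf.reduce_mean(tf.square(y - y_data))
    grads = tape.gradient(loss, [weights, biases])
    optimizer.apply_gradients(zip(grads, [weights, biases]))
    if step % 20 == 0:
        print(step, weights.numpy(), biases.numpy())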

Tensorflow Module Import error: AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell'

When attempting to build my RNN, I call tf.nn.rnn_cell and receive the following error:
AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell'
Which is odd, because I'm sure I imported everything correctly:
from __future__ import print_function, division
from tensorflow.contrib import rnn
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
But looking at the docs, things have moved around between tensorflow versions.
What would you all recommend to fix this?
The line I'm getting the error on:
state_per_layer_list = tf.unstack(init_state, axis=0)
rnn_tuple_state = tuple(
    [tf.nn.rnn_cell.LSTMStateTuple(state_per_layer_list[idx][0], state_per_layer_list[idx][1])
     for idx in range(num_layers)]
)
Specifically:
tf.nn.rnn_cell
I'm using Anaconda 3 to manage all of this, so the dependencies should all be taken care of. I have already worked around a rank/shape error with Tensor shapes, which took ages to resolve.
Cheers in advance.
Replace tf.nn.rnn_cell with tf.contrib.rnn.
Since version 1.0, rnn has been implemented as part of the contrib module.
More information can be found here
https://www.tensorflow.org/api_guides/python/contrib.rnn
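Applied to the failing snippet above, the fix is a one-name change. A sketch under TF 1.x, reusing the rnn import from the question and assuming init_state and num_layers are defined as in the original code:
from tensorflow.contrib import rnn

state_per_layer_list = tf.unstack(init_state, axis=0)
rnn_tuple_state = tuple(
    # LSTMStateTuple lives in tf.contrib.rnn in this TF version
    rnn.LSTMStateTuple(state_per_layer_list[idx][0],
                       state_per_layer_list[idx][1])
    for idx in range(num_layers)
)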