I am using Python 3.7.3 with tensorflow 2.0.0-alpha0 installed, but I am running into problems such as:
module 'tensorflow._api.v2.train' has no attribute 'GradientDescentOptimizer'
Here is all my code:
import tensorflow as tf
import numpy as np
x_data=np.random.rand(1,10).astype(np.float32)
y_data=x_data*0.1+0.3
Weights = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))
y=Weights*x_data+biases
loss=tf.reduce_mean(tf.square(y-y_data))
optimizer=tf.train.GradientDescentOptimizer(0.5)
train=optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weights), sess.run(biases))
In TensorFlow 2.0, Keras became the default high-level API, and the optimizers now live in tf.keras.optimizers (also exposed under the alias tf.optimizers); they inherit from the Keras Optimizer class. The old optimizer functions from tf.train are not included in TF 2.0, so to get the equivalent of GradientDescentOptimizer, call tf.optimizers.SGD.
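For reference, here is a minimal sketch of the question's linear fit ported to eager TF 2.x with tf.optimizers.SGD and tf.GradientTape (the variable names mirror the question; this is an illustrative rewrite, not the only way to port the code):
import tensorflow as tf
import numpy as np

x_data = np.random.rand(1, 10).astype(np.float32)
y_data = x_data * 0.1 + 0.3

Weights = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))
optimizer = tf.optimizers.SGD(learning_rate=0.5)  # replaces tf.train.GradientDescentOptimizer(0.5)

for step in range(201):
    with tf.GradientTape() as tape:
        y = Weights * x_data + biases
        loss = tf.reduce_mean(tf.square(y - y_data))
    # Compute gradients of the loss and apply one SGD update step.
    grads = tape.gradient(loss, [Weights, biases])
    optimizer.apply_gradients(zip(grads, [Weights, biases]))
    if step % 20 == 0:
        print(step, Weights.numpy(), biases.numpy())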
You are using Tensorflow 2.0.
The following code will be helpful:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
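With those two lines at the top, the rest of the original script (tf.train.GradientDescentOptimizer, tf.Session, tf.global_variables_initializer) should run unchanged, since the v1 compatibility module keeps the old graph-mode API available.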
This is because you are using TensorFlow version 2.
`tf.train.GradientDescentOptimizer(0.5)`
The above call is for TensorFlow version 1 (e.g. 1.15.0).
You can run pip install tensorflow==1.15.0 to downgrade TensorFlow and use the code as it is.
Otherwise, stay on TensorFlow version 2 (which you already have) and use the following call:
tf.optimizers.SGD(learning_rate=0.01, momentum=0.0, nesterov=False, name='SGD')
For the answer @HoyeolKim gave, it may also be necessary to add:
tf.disable_v2_behavior()
as suggested in this answer.
Related
I'm trying to run the code found in https://www.tensorflow.org/probability/examples/Probabilistic_Layers_VAE .
I'm using Python version 3.9 and my TensorFlow version is >2.0. The code is as follows:
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
tfk = tf.keras
tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions
datasets, datasets_info = tfds.load(name='mnist', with_info=True, as_supervised=False)
def _preprocess(sample):
    image = tf.cast(sample['image'], tf.float32) / 255  # Scale to [0, 1]
    image = image < tf.random.uniform(tf.shape(image))  # Gives 0, 1 when compared to a random number
    return image, image

train_dataset = (datasets['train']
                 .map(_preprocess)
                 .batch(256)
                 .prefetch(tf.data.experimental.AUTOTUNE)
                 .shuffle(int(10e3)))
What I get is the following warning:
WARNING:tensorflow:AutoGraph could not transform <function _preprocess at 0x7f8ff80cd160> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
The warning is related to the last part of the code but I can't tell if it's going to potentially affect how the code runs. If it won't affect it, is there a way to consistently remove such warnings?
This is an API conflict between TensorFlow and Python 3.9. Note that, as of today (2021-04-07), official releases of TensorFlow support only Python versions 3.6 to 3.8. TensorFlow 2.5 should officially support Python 3.9.
You can either:
Downgrade your python version to 3.8
Downgrade the gast package to 0.3.3, as mentioned in this GitHub issue: Report: AutoGraph could not transform, module 'gast' has no attribute 'Index' #44146
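If you only want to silence the warning (as asked), the message itself points to a decorator you can apply to the mapped function. A minimal sketch, reusing the _preprocess body from the question:
import tensorflow.compat.v2 as tf

# Opt this function out of AutoGraph conversion so the warning is not emitted.
# It still works inside tf.data's map() because the body only uses TensorFlow ops.
@tf.autograph.experimental.do_not_convert
def _preprocess(sample):
    image = tf.cast(sample['image'], tf.float32) / 255
    image = image < tf.random.uniform(tf.shape(image))
    return image, image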
I am using Tensorflow 1.14.0 and tensorflow_datasets 1.2.0
When trying to run the following code
import tensorflow as tf
import tensorflow_datasets as tfds
smallnorb = tfds.load("smallnorb")
smallnorb_train, smallnorb_test = smallnorb["train"], smallnorb["test"]
assert isinstance(smallnorb_train, tf.data.Dataset)
smallnorb_train = smallnorb_train.as_numpy_iterator()
I get the following error
AttributeError: 'DatasetV1Adapter' object has no attribute 'as_numpy_iterator'
According to the tensorflow_datasets docs this should work.
Why won't it? And why am I getting a DatasetV1Adapter object in the first place?
You are using the wrong tensorflow and tensorflow_datasets versions.
Please use 2.x unless you need 1.x for some very specific reason.
This code works if you use tensorflow 2.1.0 and tensorflow_datasets 2.0.0. Proper documentation for 1.x of tf.data.Dataset can be found here and it has no such method indeed.
As @szymon mentioned, tensorflow-1.14 does not support as_numpy_iterator. You should move your code to tf>=2.0.
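If upgrading is not an option right away, tensorflow_datasets also provides a module-level helper, tfds.as_numpy, which serves a similar purpose and works in graph mode. A rough sketch against the versions from the question (not verified on exactly 1.14.0 / 1.2.0):
import tensorflow as tf
import tensorflow_datasets as tfds

smallnorb = tfds.load("smallnorb")
smallnorb_train = smallnorb["train"]

# tfds.as_numpy turns the tf.data.Dataset into an iterable of numpy examples,
# roughly what Dataset.as_numpy_iterator() gives you in TF 2.x.
for example in tfds.as_numpy(smallnorb_train.take(1)):
    print({k: getattr(v, "shape", v) for k, v in example.items()})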
A handy tip I frequently use is to fire up a Python REPL in a shell and call dir(tf.data.Dataset) to list all the attributes and methods of that object. You can then use help(tf.data.Dataset.xxx) to see the parameters and return value of a given method.
>>> import tensorflow as tf
>>> dir(tf.data.Dataset)
... <output>
>>> help(tf.data.Dataset.from_tensor_slices)
... and so on
If you do the same, you'll find that as_numpy_iterator won't be present in the dir(tf.data.Dataset) list output, hence the error.
fashion_model.compile(
    loss=keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(),
    metrics=['accuracy']
)
When I execute this code I get the error
module 'tensorflow' has no attribute 'log'
and my tensorflow version is 2.0
Substitute tf.math.log for tf.log in TF 2.0.
If you know the exact line where tf.log is, replace it with tf.math.log.
If not, you can use this guide: Automatically upgrade code to TensorFlow 2.
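That guide is built around the tf_upgrade_v2 command-line script that ships with TensorFlow 2.x and rewrites v1 symbols such as tf.log to their renamed or tf.compat.v1 equivalents in place.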
loss = tf.keras.losses.categorical_crossentropy
I also faced a similar issue; prefixing each Keras object with tensorflow (i.e. going through tf.keras) resolved it.
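In other words, the likely culprit is mixing the standalone keras package with tf.keras. A minimal sketch of the compile call with everything routed through tf.keras (the tiny Sequential model is just a hypothetical stand-in for the question's fashion_model, to keep the snippet self-contained):
import tensorflow as tf

# Hypothetical stand-in for the question's fashion_model.
fashion_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Use tf.keras objects consistently instead of mixing in the standalone keras package.
fashion_model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(),
    metrics=['accuracy'],
)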
It is due to the TensorFlow update.
Just do this:
from tensorflow import keras
Then run your code
I am new to tensorflow.
Have tried this simple example:
import tensorflow as tf
sess = tf.Session()
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = x + y
print(sess.run(z, feed_dict={x: 3.0, y: 4.5}))
and got the warning "The name tf.Session is deprecated. Please use tf.compat.v1.Session instead." together with the right answer, 7.5.
After reading here, I understand that the warnings are due to the upgrade from tf 1.x to 2.0; the steps described are "simple" but they don't give any examples.
I have tried:
@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)

print(f1(tf.constant(3.0), tf.constant(4.5)))
Is my code correct (in the sense defined in the link)?
Now, I am getting Tensor("PartitionedCall:0", shape=(), dtype=float32) as output, how can I get the actual value?
Your code is indeed correct. The warning you get indicates that, as of Tensorflow 2.0, tf.Session() no longer exists in the API. Therefore, if you want your code to be compatible with Tensorflow 2.0, you should use tf.compat.v1.Session instead. So, just change this line:
sess = tf.Session()
To:
sess = tf.compat.v1.Session()
Then, even if you update Tensorflow from 1.xx to 2.xx, your code would execute in the same way. As for the code in Tensorflow 2.0:
@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)

print(f1(tf.constant(3.0), tf.constant(4.5)))
it is fine if you run it in Tensorflow 2.0. If you want to run the same code, without installing Tensorflow 2.0, you can do the following:
import tensorflow as tf
tf.enable_eager_execution()

@tf.function
def f1(x1, y1):
    return tf.math.add(x1, y1)

print(f1(tf.constant(3.0), tf.constant(4.5)).numpy())
The reason is that, starting from Tensorflow 2.0, the default way of executing Tensorflow operations is eager mode. The way to activate eager mode in Tensorflow 1.xx is to enable it right after importing Tensorflow, as in the example above.
Your code is correct as per Tensorflow 2.0. Tensorflow 2.0 is even more tightly integrated with numpy, so if you want to get the result of the operation, you can use the numpy() method:
print(f1(tf.constant(3.0), tf.constant(4.5)).numpy())
Is there a CORRELATE function in TensorFlow (like numpy.correlate)? For example,
numpy.correlate(x,y)
I suggest this doc, although I don't work with this framework myself.
I installed tensorflow-probability using pip install --upgrade tensorflow-probability.
import tensorflow as tf
import tensorflow_probability as tfp

x = tf.random_uniform((5, 2), 2, 3)
y = tf.random_uniform((5, 2), 2, 3)
corr = tfp.stats.correlation(x, y)

with tf.Session() as sess:
    print(sess.run(corr))
[[ 0.32789752 -0.12169117]
[ 0.83670807 -0.09973542]]
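On TF 2.x the same computation runs eagerly, with tf.random_uniform renamed to tf.random.uniform; a small sketch, assuming a current tensorflow-probability release:
import tensorflow as tf
import tensorflow_probability as tfp

# Eager-mode version: no Session needed; the output is a (2, 2) correlation
# matrix, as in the graph-mode run above.
x = tf.random.uniform((5, 2), 2, 3)
y = tf.random.uniform((5, 2), 2, 3)
corr = tfp.stats.correlation(x, y)
print(corr.numpy())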