https://github.com/Currie32/Spell-Checker.
In the code at the above link, I'm getting an error that DynamicAttentionWrapper is not defined. I'm using TensorFlow version 1.2 and haven't been able to get past this error. Please help me with this.
There is an issue with DynamicAttentionWrapper in your version of TensorFlow: the class was renamed to AttentionWrapper in TensorFlow 1.2.
Try changing DynamicAttentionWrapper to AttentionWrapper, or downgrade to TensorFlow 1.1.
For your version of tensorflow, try these changes for initial_state, inference_logits and training_logits:
initial_state = dec_cell.zero_state(batch_size=batch_size, dtype=tf.float32).clone(cell_state=enc_state)

inference_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
                                                           output_time_major=False,
                                                           impute_finished=True,
                                                           maximum_iterations=max_target_length)

training_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
                                                          output_time_major=False,
                                                          impute_finished=True,
                                                          maximum_iterations=max_target_length)
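For reference, here is a minimal sketch of building the decoder cell with AttentionWrapper under TF 1.2 (variable names such as rnn_size, enc_output and inputs_length are assumptions based on typical seq2seq code, not taken from the linked repo):

attn_mech = tf.contrib.seq2seq.BahdanauAttention(num_units=rnn_size,
                                                 memory=enc_output,
                                                 memory_sequence_length=inputs_length)
# TF 1.2 renamed DynamicAttentionWrapper to AttentionWrapper
dec_cell = tf.contrib.seq2seq.AttentionWrapper(dec_cell,
                                               attn_mech,
                                               attention_layer_size=rnn_size)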
I am using QuestionAnsweringModel from SimpleTransformers. When I run my code and check the processes in Windows Task Manager, Python is not using the GPU at all. I have included a code snippet to reproduce the problem. Any help is highly appreciated.
import torch
from simpletransformers.question_answering import QuestionAnsweringModel, QuestionAnsweringArgs

model_type = "bert"
model_name = "bert-base-cased"
model_args = QuestionAnsweringArgs()
train_args = {
    'n_best_size': 1,
    'overwrite_output_dir': True,
    'show_running_loss': True,
    'n_gpu': 3
}
model = QuestionAnsweringModel(model_type, model_name, args=train_args, use_cuda=True)
I have also looked at a previous topic here on the same issue. The recommendation was to update PyTorch, but I have already done that as well.
Update:
I tried to set the device to CUDA manually with the code below, but no luck so far.
model.to(torch.device("cuda:0" if torch.cuda.is_available() else "cpu"))
Thanks!
I'm trying to run the code found at https://www.tensorflow.org/probability/examples/Probabilistic_Layers_VAE.
I'm using Python version 3.9 and my TensorFlow version is >2.0. The code is as follows:
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
tfk = tf.keras
tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions
datasets, datasets_info = tfds.load(name='mnist', with_info=True, as_supervised=False)
def _preprocess(sample):
image = tf.cast(sample['image'], tf.float32) / 255 #Scale to [0, 1]
image = image < tf.random.uniform(tf.shape(image)) #Gives 0, 1 when compared to a random number
return image, image
train_dataset = (datasets['train']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.experimental.AUTOTUNE)
.shuffle(int(10e3)))
What I get is the following warning:
WARNING:tensorflow:AutoGraph could not transform <function _preprocess at 0x7f8ff80cd160> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
The warning relates to the last part of the code, but I can't tell whether it will affect how the code runs. If it won't, is there a way to consistently remove such warnings?
This is an API conflict between TensorFlow and Python 3.9. Note that, as of today (2021-04-07), official releases of TensorFlow support only Python versions 3.6 to 3.8. TensorFlow 2.5 should officially support Python 3.9.
You can either:
Downgrade your Python version to 3.8, or
Downgrade the gast package to 0.3.3, as mentioned in this GitHub issue: Report: AutoGraph could not transform, module 'gast' has no attribute 'Index' #44146
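As for removing the warning itself: per the hint printed in the warning, you can opt the function out of AutoGraph conversion. Since _preprocess uses only TensorFlow ops with no Python control flow, running it unconverted should not change its behavior. A minimal sketch, reusing the function from the question:

@tf.autograph.experimental.do_not_convert
def _preprocess(sample):
    image = tf.cast(sample['image'], tf.float32) / 255  # Scale to [0, 1]
    image = image < tf.random.uniform(tf.shape(image))  # Randomly binarize
    return image, image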
I am working on a task where I have to write code using TensorFlow 2. I have existing code that does the same thing, but in TensorFlow 1.
with tf.variable_scope(name):
    self.obs = tf.placeholder(dtype=tf.float32, shape=[None] + list(ob_space.shape), name='obs')
    with tf.variable_scope('policy_net'):
        layer_1 = tf.layers.dense(inputs=self.obs, units=20, activation=tf.tanh)
        layer_2 = tf.layers.dense(inputs=layer_1, units=20, activation=tf.tanh)
        layer_3 = tf.layers.dense(inputs=layer_2, units=act_space.n, activation=tf.tanh)
        self.act_probs = tf.layers.dense(inputs=layer_3, units=act_space.n, activation=tf.nn.softmax)
I have worked with TF2 directly, but I am having trouble understanding this excerpt. Please help me understand it. Also, how can I rewrite this code for TF2? Kindly suggest an approach or point me to documentation for doing so. I would be really thankful.
There are two ways to migrate your TensorFlow 1 code to TensorFlow 2:
Manually migrate low-level TensorFlow APIs (a sketch of this route follows the list)
Automatically upgrade code to TensorFlow 2
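For the manual route, here is a minimal sketch of how the excerpt above might look in TF2 with Keras layers (assuming ob_space and act_space follow the Gym-style API implied by the original; this is an illustration, not the only way to structure it). Placeholders and variable scopes are gone: you build the layers once and call the model on a batch of observations.

import tensorflow as tf

class PolicyNet(tf.keras.Model):
    def __init__(self, ob_space, act_space, name=None):
        super().__init__(name=name)
        # Same architecture as the TF1 'policy_net' scope
        self.layer_1 = tf.keras.layers.Dense(20, activation='tanh')
        self.layer_2 = tf.keras.layers.Dense(20, activation='tanh')
        self.layer_3 = tf.keras.layers.Dense(act_space.n, activation='tanh')
        self.act_probs = tf.keras.layers.Dense(act_space.n, activation='softmax')

    def call(self, obs):
        # obs replaces the tf.placeholder: pass a [batch, *ob_space.shape] tensor
        x = self.layer_1(obs)
        x = self.layer_2(x)
        x = self.layer_3(x)
        return self.act_probs(x)

The automatic route is the tf_upgrade_v2 script that ships with TensorFlow 2; it rewrites TF1 calls such as tf.placeholder and tf.layers.dense to their tf.compat.v1 equivalents, which run under TF2 while keeping the TF1 style.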
I am using some code from here: https://github.com/monikkinom/ner-lstm with TensorFlow. I think the code was written for an older version of TensorFlow; I am using version 1.0.0. I used tf_upgrade.py to upgrade model.py in that GitHub repo, but I am still getting the error:
output, _, _ = contrib_rnn.bidirectional_rnn(fw_cell, bw_cell,
AttributeError: 'module' object has no attribute 'bidirectional_rnn'
This is after I changed the bidirectional_rnn call to use contrib_rnn, which is:
from tensorflow.contrib.rnn.python.ops import core_rnn as contrib_rnn
The old call was
output, _, _ = tf.nn.bidirectional_rnn(fw_cell, bw_cell,
tf.unpack(tf.transpose(self.input_data, perm=[1, 0, 2])),
dtype=tf.float32, sequence_length=self.length)
which also doesn't work.
I had to change the LSTMCell, DropoutWrapper, etc. to rnn.LSTMCell, etc., but they seem to work fine. It is the bidirectional_rnn that I can't figure out how to change.
In TensorFlow 1.0, you have the choice of two bidirectional RNN functions (a usage sketch follows this list):
tf.nn.bidirectional_dynamic_rnn()
tf.contrib.rnn.static_bidirectional_rnn()
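For example, a minimal sketch of replacing the old call with tf.nn.bidirectional_dynamic_rnn, assuming self.input_data is batch-major ([batch, time, features]) as your transpose suggests; the dynamic version takes the tensor directly, so the tf.transpose/tf.unpack step is no longer needed:

outputs, _ = tf.nn.bidirectional_dynamic_rnn(fw_cell, bw_cell,
                                             inputs=self.input_data,
                                             sequence_length=self.length,
                                             dtype=tf.float32)
# outputs is a (forward, backward) pair; merge them along the feature axis
output = tf.concat(outputs, 2)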
Alternatively, you could reimplement a bidirectional RNN by wrapping two monodirectional RNNs into a single class, with the parameter go_backwards=True set on one of them. That also gives you control over how the outputs are merged. Taking a look at the implementation in https://github.com/fchollet/keras/blob/master/keras/layers/wrappers.py (see the class Bidirectional) could get you started.
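A minimal sketch of that wrapper approach in standalone Keras (the layer size 64 and the inputs tensor are hypothetical, for illustration):

from keras.layers import Bidirectional, LSTM

# One forward and one backward LSTM; per-timestep outputs are concatenated
bi_layer = Bidirectional(LSTM(64, return_sequences=True), merge_mode='concat')
output = bi_layer(inputs)  # inputs: a [batch, time, features] tensor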
I am working with TensorFlow 0.12 and am having a problem with casting.
The following snippet of code does a strange thing:
sess = tf.InteractiveSession()
a = tf.constant(1)
b = tf.cast(a, tf.float32)
print b.eval()
I get a value:
6.86574233e-36
I also tried using tf.to_float() and tf.saturate_cast. Both gave the same result.
Please help.
sess = tf.InteractiveSession()
a = tf.constant(1, tf.int64)  # <-------- declare the dtype explicitly
b = tf.cast(a, tf.float32)
print b.eval()  # 1.0
You need to declare the dtype for your tf.constant: https://www.tensorflow.org/api_docs/python/tf/constant
Since I see that this is still getting some attention, I should mention that newer versions of TensorFlow do not show this behavior; I suggest working with TensorFlow version 1.13 or higher.
I checked the code in Python 3 and Python 2 for the same TensorFlow version, and it seems to work correctly; in both cases I got the following output (shown here for Python 2):
print b.eval()
1.0
I would suggest checking the TensorFlow installation or the virtualenv.
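A quick way to confirm which TensorFlow build a given environment actually uses (a minimal check; it assumes the interpreter you invoke here is the same one that runs your code):

python -c "import tensorflow as tf; print(tf.__version__)"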
No error in your program.
import tensorflow as tf
sess = tf.InteractiveSession()
a = tf.constant(1)
b = tf.cast(a, tf.float32)
print b.eval()
This is an online environment for TF: https://codeenv.com/env/run/gXGpnR/
To test your code there:
click on test_tf.py,
add your code,
then in the left-side CLI, type ipython test_tf.py