Tensorflow's API: seq2seq - python

I have been following the https://github.com/kvfrans/twitch/blob/master/main.py tutorial to create and train a chatbot based on an RNN using TensorFlow. From what I understand, the tutorial was written for an older version of TensorFlow, so some parts are outdated and give me errors like:
Traceback (most recent call last):
File "main.py", line 33, in <module>
outputs, last_state = tf.nn.seq2seq.rnn_decoder(inputs, initialstate, cell, loop_function=None, scope='rnnlm')
AttributeError: 'module' object has no attribute 'seq2seq'
I fixed some of them, but I can't figure out what the alternative to tf.nn.seq2seq.rnn_decoder is, or what the new function's parameters should be. What I have fixed so far:
tf.nn.rnn_cell.BasicLSTMCell(embedsize) changed to tf.contrib.rnn.BasicLSTMCell(embedsize)
tf.nn.rnn_cell.DropoutWrapper(lstm_cell, keep_prob) changed to tf.contrib.rnn.DropoutWrapper(lstm_cell, keep_prob)
tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * numlayers) changed to tf.contrib.rnn.MultiRNNCell([lstm_cell] * numlayers)
Can someone please help me figure out what the replacement for tf.nn.seq2seq.rnn_decoder is?

I think this is the one you need:
tf.contrib.legacy_seq2seq.rnn_decoder
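For reference, here is a minimal sketch of how the pieces fit together in TF 1.x, assuming the tutorial's variable names (embedsize, keep_prob, numlayers, inputs, initialstate); treat it as an illustration rather than a drop-in copy of main.py:
import tensorflow as tf

def make_cell():
    # Newer TF 1.x releases want a distinct cell object per layer,
    # so build one per layer instead of reusing [lstm_cell] * numlayers.
    cell = tf.contrib.rnn.BasicLSTMCell(embedsize)
    return tf.contrib.rnn.DropoutWrapper(cell, keep_prob)

cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(numlayers)])

# The old tf.nn.seq2seq.rnn_decoder moved to tf.contrib.legacy_seq2seq
# with the same signature.
outputs, last_state = tf.contrib.legacy_seq2seq.rnn_decoder(
    inputs, initialstate, cell, loop_function=None, scope='rnnlm')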

Related

Tensorflow Backend - Bug in "model._make_predict_function"

There is a bug when running the TensorFlow code; the error appears like this:
Traceback (most recent call last):
File "app.py", line 76, in <module>
model = deepmoji_emojis(maxlen, PRETRAINED_PATH)
File "/home/lifeofpy/LifeofPy/AI Photographer Project/Text-to-Color/deepmoji/model_def.py", line 35, in deepmoji_emojis
model._make_predict_function()
AttributeError: 'Functional' object has no attribute '_make_predict_function'
and the relevant part of app.py looks like this:
# print('Loading model from {}.'.format(PRETRAINED_PATH))
model = deepmoji_emojis(maxlen, PRETRAINED_PATH)
model.summary()
model._make_predict_function()
I think the error is caused by the call to model._make_predict_function().
I would appreciate any comments on this issue. Thanks!
Use: model.make_predict_function()
instead of: model._make_predict_function()
I tried to find _make_predict_function() with Google, and it seems it was a private function in old standalone Keras (in keras.engine.training.py). Keras is now part of TensorFlow and the function was removed; I can't find _make_predict_function() anywhere in tensorflow.keras's training code.
Some old posts suggest using model.predict() instead of model._make_predict_function() before starting threads, while other posts suggest duplicating the model for every thread. It may be that newer TensorFlow code has resolved the threading problem and this function is simply no longer needed.
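As a small, self-contained sketch of the public replacement (the Sequential model below is only a stand-in for the DeepMoji model from the question):
import numpy as np
import tensorflow as tf

# Illustrative stand-in for the model returned by deepmoji_emojis()
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

# Public replacement in tf.keras (TF 2.x) for the removed private helper
predict_fn = model.make_predict_function()

# In most cases pre-building isn't needed at all: predict() creates the
# function lazily on first call
preds = model.predict(np.zeros((1, 8), dtype="float32"))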

pytorch torch.jit.trace returns function instead of torch.jit.ScriptModule

I need to run a pre-trained PyTorch model (trained in Python) in C++ to make predictions.
To do so, I'm following the instructions on how to load a pytorch model in c++ given here: https://pytorch.org/tutorials/advanced/cpp_export.html
But when I try to get the torch.jit.ScriptModule via tracing as stated in the first step of the tutorial:
traced_script_module = torch.jit.trace(model, (input_tensor_1, input_tensor_2))
Instead of returning a torch.jit.ScriptModule, it returns a function:
print(type(traced_script_module))
<type 'function'>
Which, when I run:
traced_script_module.save("model.pt")
then leads into the following error:
Traceback (most recent call last):
File "serialize_model.py", line 60, in <module>
traced_script_module.save("model.pt")
AttributeError: 'function' object has no attribute 'save'
Any ideas on what I'm doing wrong?
Thanks for asking, Jatentaki. I was using PyTorch 0.4 in Python, and when I updated to 1.0 it worked.
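For anyone else hitting this, a self-contained sketch of tracing on PyTorch >= 1.0 (the tiny two-input module below just stands in for the question's model and input tensors):
import torch
import torch.nn as nn

class TwoInputNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)

    def forward(self, a, b):
        return self.fc(a + b)

model = TwoInputNet().eval()
input_tensor_1 = torch.randn(1, 8)
input_tensor_2 = torch.randn(1, 8)

traced_script_module = torch.jit.trace(model, (input_tensor_1, input_tensor_2))
print(type(traced_script_module))      # a ScriptModule subclass, not a plain function
traced_script_module.save("model.pt")  # now works, ready for torch::jit::load in C++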

Setting up VGG-Face Descriptor in PyTorch

I've been trying to use the VGG-Face descriptor model (http://www.robots.ox.ac.uk/~vgg/software/vgg_face/) for a project of mine. All I want to do is to simply obtain the outputs of the network from an input image.
I haven't used MatConvNet, Caffe, or PyTorch before, so I picked PyTorch at random. It turns out that the model (of class torch.legacy.nn.Sequential.Sequential) was saved with an older version of PyTorch, so the syntax is slightly different from what is in PyTorch's current documentation.
I was able to load the Lua .t7 model like so:
from torch.utils.serialization import load_lua

vgg_net = load_lua('./vgg_face_torch/VGG_FACE.t7', unknown_classes=True)
And loading in the input image:
# load image
image = imread('./ak.png')
# convert to tensor
input = torch.from_numpy(image).float()
Gleefully, I fed the image into the model with much anticipation:
# load into vgg_net
output = vgg_net.forward(input)
However, my hopes of it cooperating at all were quickly dashed when the code failed to run, leaving behind a cryptic error message:
Traceback (most recent call last):
File "~/Documents/python/vgg-face-test/vgg-pytorch.py", line 25, in <module>
output = vgg_net.forward(input)
File "~/.local/lib/python3.6/site-packages/torch/legacy/nn/Module.py", line 33, in forward
return self.updateOutput(input)
File "~/.local/lib/python3.6/site-packages/torch/utils/serialization/read_lua_file.py", line 235, in updateOutput_patch
return obj.updateOutput(*args)
File "~/.local/lib/python3.6/site-packages/torch/legacy/nn/Sequential.py", line 36, in updateOutput
currentOutput = module.updateOutput(currentOutput)
TypeError: 'NoneType' object is not callable
I am absolutely dumbfounded by this.
That is why I sought help on Stack Overflow. I hope someone here can lend me a hand in setting up the model (not necessarily in Torch; any working model will do) so that I can simply get the descriptor for any particular image.
Try output = vgg_net(input), without the explicit forward.
This apparently calls a default method defined on the module, but I'm having trouble understanding why this is necessary.
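I'm not sure exactly how the torch.legacy wrapper dispatches, but in modern torch.nn the difference is that calling the module goes through Module.__call__, which runs any registered hooks before dispatching to forward(). A small illustration (plain torch.nn, not the legacy loader):
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

x = torch.randn(1, 3, 224, 224)   # NCHW with a batch dimension, as conv layers expect

out = net(x)               # preferred: dispatches through __call__ (hooks + forward)
also_out = net.forward(x)  # usually the same output, but skips the hook machinery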

Cannot find Transformer function in module in caffe

Here is the error I get when I run the Transformer function for preprocessing the image.
Traceback (most recent call last):
File "tst.py", line 18, in
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
AttributeError: 'module' object has no attribute 'Transformer'
I actually figured it out.
The io.py file did not contain the Transformer class; maybe it was missing from the modified Caffe I was using. I took the class from the original BVLC Caffe and pasted it into my io.py file.
Link to io.py in BVLC Caffe: https://github.com/BVLC/caffe/blob/master/python/caffe/io.py
This works for me.
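For completeness, here is the usual way the Transformer class is used once it is present in caffe.io (the prototxt, caffemodel, and image paths below are placeholders):
import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255)           # [0, 1] floats -> [0, 255]

image = caffe.io.load_image('example.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
output = net.forward()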

NuPIC one_hot_gym tutorial swarm attribute error

I'm trying to follow the tutorial at:
https://github.com/numenta/nupic/tree/master/examples/opf/clients/hotgym/prediction/one_gym
I'm right at the beginning, but running the very first Python script gave me an error:
Traceback (most recent call last):
File "swarm.py", line 105, in <module>
swarm(INPUT_FILE)
File "swarm.py", line 97, in swarm
modelParams = swarmForBestModelParams(SWARM_DESCRIPTION, name)
File "swarm.py", line 68, in swarmForBestModelParams
modelParams = permutations_runner.runWithConfig(
AttributeError: 'module' object has no attribute 'runWithConfig'
I don't see anyone else complaining about this error, so I'm assuming it's something I'm doing (or overlooking). Can you help me understand what's going on?
The contents of swarm.py are here:
https://github.com/numenta/nupic/blob/master/examples/opf/clients/hotgym/prediction/one_gym/swarm.py
I imagine you probably cloned NuPIC before you watched and ran the tutorial. The tutorial requires you to have the latest codebase. Pull the latest from master, rebuild NuPIC and try it again.
Even if you've updated the codebase and you can see the runWithConfig function, you will still need to re-run the build process as described in the README.md.
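A quick way to check which copy of NuPIC your script is actually importing, and whether it exposes the function (assuming the same import path the tutorial's swarm.py uses):
from nupic.swarming import permutations_runner

print(permutations_runner.__file__)                   # which installed copy is imported
print(hasattr(permutations_runner, 'runWithConfig'))  # False means the build is stale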
