Retrieving last value of LSTM sequence in Tensorflow - python

I have sequences of different lengths that I want to classify using LSTMs in Tensorflow. For the classification I just need the LSTM output of the last timestep of each sequence.
max_length = 10
n_dims = 2
layer_units = 5
input = tf.placeholder(tf.float32, [None, max_length, n_dims])
lengths = tf.placeholder(tf.int32, [None])
cell = tf.nn.rnn_cell.LSTMCell(num_units=layer_units, state_is_tuple=True)
sequence_outputs, last_states = tf.nn.dynamic_rnn(cell, sequence_length=lengths, inputs=input, dtype=tf.float32)
I would like to get, in NumPy notation: output = sequence_outputs[:,lengths]
Is there any way or workaround to get this behaviour in Tensorflow?
---UPDATE---
Following this post How to select rows from a 3-D Tensor in TensorFlow? it seems that it is possible to solve the problem efficiently with tf.gather and some index manipulation. The only requirement is that the batch size must be known in advance. Here is the adaptation of the referenced post to this concrete problem:
max_length = 10
n_dims = 2
layer_units = 5
batch_size = 2
input = tf.placeholder(tf.float32, [batch_size, max_length, n_dims])
lengths = tf.placeholder(tf.int32, [batch_size])
cell = tf.nn.rnn_cell.LSTMCell(num_units=layer_units, state_is_tuple=True)
sequence_outputs, last_states = tf.nn.dynamic_rnn(
    cell, sequence_length=lengths, inputs=input, dtype=tf.float32)
# Code adapted from mrry's answer on Stack Overflow:
# https://stackoverflow.com/questions/36088277/how-to-select-rows-from-a-3-d-tensor-in-tensorflow
rows_per_batch = tf.shape(input)[1]
indices_per_batch = 1
# Offset to add to each row in indices. We use `tf.expand_dims()` to make
# this broadcast appropriately.
offset = tf.range(0, batch_size) * rows_per_batch
# Convert indices and logits into appropriate form for `tf.gather()`.
flattened_indices = lengths - 1 + offset
flattened_sequence_outputs = tf.reshape(
    sequence_outputs, tf.concat(0, [[-1], tf.shape(sequence_outputs)[2:]]))
selected_rows = tf.gather(flattened_sequence_outputs, flattened_indices)
last_output = tf.reshape(
    selected_rows,
    tf.concat(0, [tf.pack([batch_size, indices_per_batch]),
                  tf.shape(sequence_outputs)[2:]]))
petrux's option (Get the last output of a dynamic_rnn in TensorFlow) also seems to work, but building a list within a for loop may be less efficient, although I did not run any benchmark to support this statement.

This could be an answer. I don't think there is anything similar to the NumPy notation you pointed out, but the effect is the same.

Here's a solution, using gather_nd, where batch size does not need to be known ahead of time.
def extract_axis_1(data, ind):
    """
    Get specified elements along the first axis of tensor.
    :param data: TensorFlow tensor that will be subsetted.
    :param ind: Indices to take (one for each element along axis 0 of data).
    :return: Subsetted tensor.
    """
    batch_range = tf.range(tf.shape(data)[0])
    indices = tf.stack([batch_range, ind], axis=1)
    res = tf.gather_nd(data, indices)
    return res
output = extract_axis_1(sequence_outputs, lengths - 1)
Now output is a tensor of dimension [batch_size, num_cells].
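For completeness, here is a minimal usage sketch (untested; it assumes the placeholders input and lengths and the graph built in the question, plus TF 1.x sessions):
import numpy as np
# Toy batch of 2 padded sequences whose true lengths are 3 and 7.
batch_x = np.random.rand(2, max_length, n_dims).astype(np.float32)
batch_lengths = np.array([3, 7], dtype=np.int32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    last_outputs = sess.run(output, feed_dict={input: batch_x, lengths: batch_lengths})
    print(last_outputs.shape)  # (2, layer_units), i.e. (batch_size, num_cells)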

Related

How to get output from randomly sampled k entries from a tensor

I have a Keras/TF problem involving sub-sampling values from a tensor. My model is given below:
x_input = Input((input_size,))
enc1 = Dense(encoder_size[0], activation='relu')(x_input)
drop = Dropout(keep_prob)(enc1)
enc2 = Dense(encoder_size[1], activation='relu')(drop)
drop = Dropout(keep_prob)(enc2)
mu = Dense(latent_dim, activation='linear', name='encoder_mean')(drop)
encoder = Model(x_input,mu)
I want to sample from the input randomly and then get the encoded values of the input. The error I am getting is
ValueError: When feeding symbolic tensors to a model, we expect the tensors to have a static batch size. Got tensor with shape: (None, 13)
which I understand is because predict does not work on placeholders, but I am not sure what to pass to get the output for a placeholder.
# sample input randomly
sample_num = 500
idxs = tf.range(tf.shape(x_input)[0])
ridxs = tf.random_shuffle(idxs)[:sample_num]
sample_input = tf.gather(x_input, ridxs)
# get sample shape
sample_shape = K.shape(sample_input)
# sample from encoded value
sample_encoded = encoder.predict(sample_input) <----- Error
If you look at the predict function's documentation, it does not accept placeholders or tensor nodes as input. You have to pass a NumPy array directly (in your case).
If you wish to perform some special data preprocessing that is not part of your regular model, you have to do it in NumPy and avoid tensor computations for it.
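For instance, a minimal sketch of the NumPy route (the name x_train is purely illustrative and stands for the NumPy array holding your input data):
import numpy as np
sample_num = 500
# Sample row indices in NumPy instead of building TF ops on the symbolic input.
idxs = np.random.permutation(x_train.shape[0])[:sample_num]
sample_input = x_train[idxs]  # plain NumPy array of shape (sample_num, input_size)
sample_encoded = encoder.predict(sample_input)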

How to build an embedding layer in Tensorflow RNN?

I'm building an RNN LSTM network to classify texts based on the writers' age (binary classification - young / adult).
Seems like the network does not learn and suddenly starts overfitting:
Red: train
Blue: validation
One possibility could be that the data representation is not good enough. I just sorted the unique words by their frequency and gave them indices. E.g.:
unknown -> 0
the -> 1
a -> 2
. -> 3
to -> 4
So I'm trying to replace that with word embedding.
I saw a couple of examples but I'm not able to implement it in my code. Most of the examples look like this:
embedding = tf.Variable(tf.random_uniform([vocab_size, hidden_size], -1, 1))
inputs = tf.nn.embedding_lookup(embedding, input_data)
Does this mean we're building a layer that learns the embedding? I thought that one should download a pre-trained Word2Vec or GloVe embedding and just use that.
Anyway let's say I want to build this embedding layer...
If I use these 2 lines in my code I get an error:
TypeError: Value passed to parameter 'indices' has DataType float32 not in list of allowed values: int32, int64
So I guess I have to change the input_data type to int32. So I do that (it's all indices after all), and I get this:
TypeError: inputs must be a sequence
I tried wrapping inputs (argument to tf.contrib.rnn.static_rnn) with a list: [inputs] as suggested in this answer, but that produced another error:
ValueError: Input size (dimension 0 of inputs) must be accessible via
shape inference, but saw value None.
Update:
I was unstacking the tensor x before passing it to embedding_lookup. I moved the unstacking after the embedding.
Updated code:
MIN_TOKENS = 10
MAX_TOKENS = 30
x = tf.placeholder("int32", [None, MAX_TOKENS, 1])
y = tf.placeholder("float", [None, N_CLASSES]) # 0.0 / 1.0
...
seqlen = tf.placeholder(tf.int32, [None]) #list of each sequence length*
embedding = tf.Variable(tf.random_uniform([VOCAB_SIZE, HIDDEN_SIZE], -1, 1))
inputs = tf.nn.embedding_lookup(embedding, x) #x is the text after converting to indices
inputs = tf.unstack(inputs, MAX_POST_LENGTH, 1)
outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, inputs, dtype=tf.float32, sequence_length=seqlen) #---> Produces error
*seqlen: I zero-padded the sequences so all of them have the same length, but since the actual sizes differ, I prepared a list describing the lengths without the padding.
New error:
ValueError: Input 0 of layer basic_lstm_cell_1 is incompatible with
the layer: expected ndim=2, found ndim=3. Full shape received: [None,
1, 64]
64 is the size of each hidden layer.
It's obvious that I have a problem with the dimensions... How can I make the inputs fit the network after embedding?
From the tf.nn.static_rnn documentation, we can see that the inputs argument should be:
A length T list of inputs, each a Tensor of shape [batch_size, input_size]
So your code should be something like:
x = tf.placeholder("int32", [None, MAX_TOKENS])
...
inputs = tf.unstack(inputs, axis=1)
tf.squeeze removes dimensions of size 1 from a tensor. If the end goal is to have the input shape as [None, 64], then adding a line like inputs = tf.squeeze(inputs) would fix your problem.
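Putting it together, a rough sketch of the corrected pipeline (untested; it reuses the hyperparameters and the lstm_cell from the question):
x = tf.placeholder(tf.int32, [None, MAX_TOKENS])  # no trailing dimension of 1
seqlen = tf.placeholder(tf.int32, [None])
embedding = tf.Variable(tf.random_uniform([VOCAB_SIZE, HIDDEN_SIZE], -1, 1))
inputs = tf.nn.embedding_lookup(embedding, x)      # [None, MAX_TOKENS, HIDDEN_SIZE]
inputs = tf.unstack(inputs, MAX_TOKENS, 1)         # list of MAX_TOKENS tensors, each [None, HIDDEN_SIZE]
outputs, states = tf.contrib.rnn.static_rnn(
    lstm_cell, inputs, dtype=tf.float32, sequence_length=seqlen)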

Seq2Seq in TensorFlow without embeddings

I'm trying to create a very basic multivariate time series auto-encoder.
I want to be able to reconstruct the exact two signals I pass in.
Most of the references I'm looking at are using older versions of APIs or use embeddings.
I'm trying to use the latest higher-level APIs, but it's not obvious how to cobble them together.
class Seq2SeqConfig():
    def __init__(self):
        self.sequence_length = 15  # num of time steps
        self.hidden_units = 64     # ?
        self.num_features = 2
        self.batch_size = 10

config = Seq2SeqConfig()
# Expect input as batch major.
encoder_inputs = tf.placeholder(shape=(None, config.sequence_length, config.num_features), dtype=tf.float32)
decoder_inputs = tf.placeholder(shape=(None, config.sequence_length, config.num_features), dtype=tf.float32)
# Convert inputs to time major
encoder_inputs_tm = tf.transpose(encoder_inputs, [1, 0, 2])
decoder_inputs_tm = tf.transpose(decoder_inputs, [1, 0, 2])
# setup encoder
encoder_cell = tf.contrib.rnn.LSTMCell(config.hidden_units)
encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(
    cell=encoder_cell,
    inputs=encoder_inputs_tm,
    dtype=tf.float32,
    time_major=True)
# setup decoder
decoder_cell = tf.contrib.rnn.LSTMCell(config.hidden_units)
# The sequence length is mandatory. Not sure what the expectation is here?
helper = tf.contrib.seq2seq.TrainingHelper(
    decoder_inputs_tm,
    sequence_length=tf.constant(config.sequence_length, dtype=tf.int32, shape=[config.batch_size]),
    time_major=True)
decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, encoder_final_state)
decoder_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, output_time_major=True)
# loss calculation
loss_op = tf.reduce_mean(tf.square(decoder_outputs.rnn_output - decoder_targets_tm))
The loss operation fails because the shapes are different:
decoder_targets is (?, 15, 2) and decoder_outputs.rnn_output is (?, ?, 64).
Question 1:
Am I missing an operation somewhere where I reshape the decoder output?
I loosely followed this TensorFlow tutorial: https://www.tensorflow.org/tutorials/seq2seq
There is a projection_layer operation passed into the basic decoder. Is that its purpose?
projection_layer = layers_core.Dense(tgt_vocab_size, use_bias=False)
I don't see a layers_core.Dense() function anywhere. I assume it's deprecated or internal.
Question 2:
Which helper does one use for Inference when not using embeddings?
Question 3:
What would the ideal size of the hidden units be?
I assume that because we want to reduce the dimensionality of the latent space, it needs to be less than the size of the inputs. How does that translate when you have an input with sequence length = 15 and number of features = 2?
Should the number of hidden units be < 15, < 2, or < 15 * 2?
Figured out the answer to Question 1
from tensorflow.python.layers.core import Dense
output_layer = Dense(config.num_features)
decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, encoder_final_state, output_layer)
Reference: https://github.com/udacity/deep-learning/blob/master/seq2seq/sequence_to_sequence_implementation.ipynb
Other two questions still stand.
Regarding question 3: I suggest you run several training and validation cycles with different hyperparameters to find what works best for your data and requirements. You can take a look at my implementation here (https://github.com/fritzfitzpatrick/tensorflow-seq2seq-generic-example) where I have built a very simple training & validation loop that stops once the validation loss has not gone down for a number of cycles to prevent overfitting.
Regarding question 2: I am still working on a CustomHelper implementation at the moment, and it looks like it is going somewhere. You can find the full sample code here (https://github.com/fritzfitzpatrick/tensorflow-seq2seq-generic-example/blob/master/tensorflow_custom_helper.ipynb).
batch_size = 5
features_dec_inp = 2  # number of features in target sequence
go_token = 2
end_token = 3
sess = tf.InteractiveSession()

def initialize_fn():
    finished = tf.tile([False], [batch_size])
    start_inputs = tf.fill([batch_size, features_dec_inp], go_token)
    return (finished, start_inputs)

def next_inputs_fn(time, outputs, state, sample_ids):
    del time, sample_ids
    # finished needs to update after the last step.
    # One could use conditional logic based on sequence length
    # if the sequence length is known in advance.
    finished = tf.tile([False], [batch_size])
    # Next inputs should be the output of the dense layer,
    # unless the above finished logic returns [True],
    # in which case next inputs can be anything of the right shape.
    next_inputs = tf.fill([batch_size, features_dec_inp], 0.5)
    return (finished, next_inputs, state)

helper = tf.contrib.seq2seq.CustomHelper(
    initialize_fn=initialize_fn,
    sample_fn=tf.identity,
    next_inputs_fn=next_inputs_fn)

print(helper)
Regarding question 1: This is the code that I am using to reduce the dimensionality of my decoder output to the number of features in my target sequence:
train_output_dense = tf.layers.dense(
    train_dec_out_logits,      # [batch_size x seq_length x num_units]
    features_dec_exp_out)      # [batch_size x seq_length x num_target_features]
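With that dense projection in place, the loss from the question can be computed directly. A sketch (assuming decoder_outputs and decoder_targets_tm are the time-major tensors from the question):
decoder_logits = tf.layers.dense(decoder_outputs.rnn_output, config.num_features)  # [seq_length, batch_size, num_features]
loss_op = tf.reduce_mean(tf.square(decoder_logits - decoder_targets_tm))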

How to concatenate an input and a matrix in Keras

I am building a model in Keras. I have an input
X = Input(shape=(input_size, ), name='input_feature')
and a fixed, pre-given NumPy matrix D which is input_size by n.
I want to concatenate X and D before feeding them into the next layer. In other words, I need to concatenate each slice of X with D to generate a new input whose expected size should be (None, input_size, n+1). So what should I do to concatenate them? In my understanding, the batch size is None since it will adapt to the batch size of the input X when we fit data to the model.
Answer if D has shape (batch, input_size, n)
Provided that D is a tensor (it's a tensor if it's an output from some layer):
X = Reshape((input_size,1))(X)
concat = Concatenate()([D,X])
If D is not a tensor:
import keras.backend as K
#create a tensor:
Dval = K.variable(numpyArrayForD)
#create an input for D:
D = Input(tensor=Dval)
#do as in the top of this answer.
If you want to avoid the additional Input (it will not affect the way you train, because of the tensor parameter), you can use a lambda layer:
def concatenation(x):
    D = K.variable(D_df)
    return K.concatenate([x, D])

XD = Lambda(concatenation, output_shape=(input_size, n+1))(X)
Answer if D has shape (input_size, n)
In this case, it's probably better to replicate D many times. You can do this outside of the model, using NumPy functions, before creating the K.variable (see above), like this:
D_df = D_df.reshape((1,input_size,n))
D_df = numpy.repeat(D_df,batch,axis=0)
But this approach requires you to fix the batch size beforehand.
If you want something that adapts to any batch size of X without having to change D first, it's more complicated; see the sketch below.
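One possible sketch of that more involved route (unverified; D_df, input_size and n are assumed from the question): tile D inside a Lambda layer so the batch dimension is read from X at run time.
import keras.backend as K
from keras.layers import Input, Lambda, Reshape

def concat_with_d(x):
    # x arrives with shape (batch, input_size, 1) after the Reshape below.
    d = K.constant(D_df.reshape((1, input_size, n)))  # (1, input_size, n)
    d_tiled = K.tile(d, [K.shape(x)[0], 1, 1])        # (batch, input_size, n)
    return K.concatenate([x, d_tiled], axis=-1)       # (batch, input_size, n+1)

X = Input(shape=(input_size,), name='input_feature')
X_r = Reshape((input_size, 1))(X)
XD = Lambda(concat_with_d, output_shape=(input_size, n + 1))(X_r)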

Tensorflow Grid LSTM RNN TypeError

I'm trying to build an LSTM RNN that handles 3D data in TensorFlow. According to this paper, Grid LSTM RNNs can be n-dimensional. The idea for my network is to have a 3D volume [depth, x, y], and the network should be [depth, x, y, n_hidden], where n_hidden is the number of LSTM cell recursive calls. The idea is that each pixel gets its own "string" of LSTM recursive calls.
The output should be [depth, x, y, n_classes]. I'm doing a binary segmentation -- think foreground and background, so the number of classes is just 2.
# Network Parameters
n_depth = 5
n_input_x = 200 # MNIST data input (img shape: 28*28)
n_input_y = 200
n_hidden = 128 # hidden layer num of features
n_classes = 2
# tf Graph input
x = tf.placeholder("float", [None, n_depth, n_input_x, n_input_y])
y = tf.placeholder("float", [None, n_depth, n_input_x, n_input_y, n_classes])
# Define weights
weights = {}
biases = {}
# Initialize weights
for i in xrange(n_depth * n_input_x * n_input_y):
    weights[i] = tf.Variable(tf.random_normal([n_hidden, n_classes]))
    biases[i] = tf.Variable(tf.random_normal([n_classes]))
def RNN(x, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, n_input_y, n_input_x)
    # Permuting batch_size and n_input_y
    x = tf.reshape(x, [-1, n_input_y, n_depth * n_input_x])
    x = tf.transpose(x, [1, 0, 2])
    # Reshaping to (n_input_y*batch_size, n_input_x)
    x = tf.reshape(x, [-1, n_input_x * n_depth])
    # Split to get a list of 'n_input_y' tensors of shape (batch_size, n_hidden)
    # This input shape is required by the `rnn` function
    x = tf.split(0, n_depth * n_input_x * n_input_y, x)
    # Define a lstm cell with tensorflow
    lstm_cell = grid_rnn_cell.GridRNNCell(n_hidden, input_dims=[n_depth, n_input_x, n_input_y])
    # lstm_cell = rnn_cell.MultiRNNCell([lstm_cell] * 12, state_is_tuple=True)
    # lstm_cell = rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=0.8)
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
    # Linear activation, using rnn inner loop last output
    # pdb.set_trace()
    output = []
    for i in xrange(n_depth * n_input_x * n_input_y):
        # I'll need to do some sort of reshape here on outputs[i]
        output.append(tf.matmul(outputs[i], weights[i]) + biases[i])
    return output
pred = RNN(x, weights, biases)
pred = tf.transpose(tf.pack(pred),[1,0,2])
pred = tf.reshape(pred, [-1, n_depth, n_input_x, n_input_y, n_classes])
# pdb.set_trace()
temp_pred = tf.reshape(pred, [-1, n_classes])
n_input_y = tf.reshape(y, [-1, n_classes])
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(temp_pred, n_input_y))
Currently I'm getting the error: TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
It occurs after the RNN initialization: outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
x of course is of type float32
I am unable to tell what type GridRNNCell returns; any help here? This could be the issue. Should I be defining more arguments to this? input_dims makes sense, but what should output_dims be?
Is this a bug in the contrib code?
GridRNNCell is located in contrib/grid_rnn/python/ops/grid_rnn_cell.py
I was unsure about some of the implementation decisions in the code, so I decided to roll my own. One thing to keep in mind is that this is an implementation of just the cell. It is up to you to build the actual machinery that handles the locations and interactions of the h and m vectors; it isn't as simple as passing in your data and expecting it to traverse the dimensions properly.
So for example, if you are working in two dimensions, start with the top left block, take the incoming x and y vectors, concat them together, then use your cell to compute the output (which includes outgoing vectors for both x and y); and it is up to you to store the output for later use in neighboring blocks. Pass those outputs individually to each corresponding dimension, and in each of those neighboring blocks, concat the incoming vectors (again, for each dimension) and compute the output for the neighboring blocks. To do this, you'll need two for-loops, one for each dimension.
Perhaps the version in contrib will work for this, but there are a couple of problems I have with it (I could be wrong here, but as far as I can tell):
1) The vectors are handled using concat and slice rather than with tuples. This will likely result in slower performance.
2) It looks like the input is projected at each step, which doesn't sit well with me. In the paper they only project into the network for incoming blocks along the edge of the grid and not throughout.
If you look at the code, it is actually very simple. Perhaps reading the paper and making adjustments to the code as needed, or rolling your own, is your best bet. And remember that the cell is only good for performing the recurrence at each step, not for managing the incoming and outgoing h and m vectors.
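To make the traversal concrete, here is a rough NumPy sketch of the two-loop scheme described above (purely illustrative, not the contrib implementation; cell_fn stands in for the actual Grid LSTM cell computation):
import numpy as np

def grid_2d_forward(inputs, cell_fn, hidden_size):
    """Traverse a 2D grid block by block, passing hidden vectors to the
    right (x dimension) and downward (y dimension) neighbours."""
    height, width, _ = inputs.shape
    h_x = np.zeros((height, width + 1, hidden_size))  # vectors travelling along x
    h_y = np.zeros((height + 1, width, hidden_size))  # vectors travelling along y
    outputs = np.zeros((height, width, 2 * hidden_size))
    for i in range(height):                           # one loop per grid dimension
        for j in range(width):
            incoming = np.concatenate([inputs[i, j], h_x[i, j], h_y[i, j]])
            out_x, out_y = cell_fn(incoming)          # the cell's recurrence step
            h_x[i, j + 1] = out_x                     # hand off to the x neighbour
            h_y[i + 1, j] = out_y                     # hand off to the y neighbour
            outputs[i, j] = np.concatenate([out_x, out_y])
    return outputs

# Dummy cell: a random linear map, just to show the plumbing.
hidden = 4
W = np.random.randn(2 * hidden, 3 + 2 * hidden)
cell_fn = lambda v: np.split(np.tanh(W.dot(v)), 2)
out = grid_2d_forward(np.random.randn(5, 6, 3), cell_fn, hidden)
print(out.shape)  # (5, 6, 8)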
Which version of Grid LSTM cells are you using?
If you are using https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/rnn_cell.py
I think you can try to initialize 'feature_size' and 'frequency_skip'.
Also, I think there may exist another bug: feeding a dynamic shape into this version may cause a TypeError.
Yes, dynamic shape was the cause. There is a PR to fix this: https://github.com/tensorflow/tensorflow/pull/4631
@jstaker7: Thank you for trying it out. Re problem 1, the above PR uses tuples for states and outputs; hopefully that addresses the performance issue. GridRNNCell was created a while ago, when all the LSTMCells in TensorFlow used concat/slice instead of tuples.
Re problem 2, GridRNNCell will not project the input if you pass None. A dimension can be both input and recurrent, and when there is no input (inputs = None), it will use the recurrent tensors for computation. We can also use 2 input dimensions, by instantiating the GridRNNCell directly.
Of course, writing a generic class for all cases makes the code look a bit convoluted, and I think it needs better documentation.
Anyway, it will be great if you could share your improvements, or any idea you might have to make it clearer/more useful. It is the nature of an open-source project anyway.
