LSTM cells after convolution - python

I need to implement an LSTM layer after two convolutional layers. Here is my code after the first convolution:
convo_2 = convolutional_layer(convo_1_pooling, shape=[5, 5, 32, 64])
convo_2_pooling = max_pool_2by2(convo_2)
convo_2_flat = tf.reshape(convo_2_pooling, shape=[-1, 64 * 50 * 25])
cell = rnn.LSTMCell(num_units=100, activation=tf.nn.relu)
cell = rnn.OutputProjectionWrapper(cell, output_size=7)
conv_to_rnn = int(convo_2_flat.get_shape()[1])
outputs, states = tf.nn.dynamic_rnn(cell, convo_2_flat, dtype=tf.float32)
I get this error on the last line:
ValueError: Shape (?, 50, 64) must have rank 2
I have to encode the time steps into the convo_2_flat variable, right? How? I really don't know how to do that.
EDIT:
After this reshape:
convo_2_flat = tf.reshape(convo_2_flat, shape=[-1, N_TIME_STEPS, INPUT_SIZE])
where
N_TIME_STEPS = 25
INPUT_SIZE = int(64 * 50 * 25 / N_TIME_STEPS)
I got this error:
InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[5000,7] labels_size=[50,7]
on this line:
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=outputs))
It seems to me that the batch size has changed after the last reshape.
EDIT 2:
Is the code below wrong?
convo_2_shape = convo_2_pooling.get_shape().as_list()
shape_convo_flat = convo_2_shape[1] * convo_2_shape[2] * convo_2_shape[3]
N_TIME_STEPS = convo_2_shape[1]
INPUT_SIZE = tf.cast(shape_convo_flat / N_TIME_STEPS, tf.int32)
convo_2_out = tf.reshape(convo_2_pooling, shape=[-1, shape_convo_flat])
convo_2_out = tf.reshape(convo_2_out, shape=[-1, N_TIME_STEPS, INPUT_SIZE])
I cast INPUT_SIZE that way because otherwise I'd get a float INPUT_SIZE and tf would throw an error.
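(Aside: since convo_2_shape comes from get_shape().as_list(), shape_convo_flat and N_TIME_STEPS are presumably plain Python ints here, so integer division would avoid the float and the cast entirely:)
INPUT_SIZE = shape_convo_flat // N_TIME_STEPS  # stays a Python int, no tf.cast needed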

According to the TensorFlow documentation (https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn),
the input should have the following shape (I use the default time_major=False here),
i.e., [BATCH_SIZE, N_TIME_STEPS, INPUT_SIZE]. Therefore, you can reshape convo_2_flat as follows,
#get the shape of the output of max pooling
shape = convo_2_pooling.get_shape().as_list()
#flat accordingly
convo_2_flat = tf.reshape(convo_2_pooling, [-1, shape[1] * shape[2] * shape[3]])
# Here shape[1] * shape[2] * shape[3] = N_TIME_STEPS * INPUT_SIZE
#reshape according to dynamic_rnn input
convo_2_flat = tf.reshape(convo_2_flat, shape=[-1, N_TIME_STEPS, INPUT_SIZE])
outputs, states = tf.nn.dynamic_rnn(cell, convo_2_flat, dtype=tf.float32)
# get the output of the last time step
val = tf.transpose(outputs, [1, 0, 2])
lstm_last_output = val[-1]
OUTPUT_SIZE = 7  # since you have defined cell = rnn.OutputProjectionWrapper(cell, output_size=7)
W = {
    'output': tf.Variable(tf.random_normal([OUTPUT_SIZE, N_CLASSES]))
}
biases = {
    'output': tf.Variable(tf.random_normal([N_CLASSES]))
}
#Dense Layer
pred_Y = tf.matmul(lstm_last_output, W['output']) + biases['output']
#Softmax Layer (for predictions)
pred_softmax = tf.nn.softmax(pred_Y)
#Note: softmax_cross_entropy_with_logits expects raw logits, so pass pred_Y here, not pred_softmax
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=pred_Y))
Note on the outputs:
According to the documentation, the output of dynamic_rnn has the following shape,
i.e., [BATCH_SIZE, N_TIME_STEPS, OUTPUT_SIZE]. Therefore, you have an output for every time step. In the code above, I only take the output of the last time step. Alternatively, you can consider a different architecture for the rnn output, as described here (How do we use LSTM to classify sequences?).
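As a side note, an equivalent way to pick out the last time step without the transpose is to slice the outputs tensor directly (a minimal sketch, assuming the default time_major=False layout used above):
# outputs: [BATCH_SIZE, N_TIME_STEPS, OUTPUT_SIZE]
lstm_last_output = outputs[:, -1, :]  # [BATCH_SIZE, OUTPUT_SIZE]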
Hope this helps.

Related

Issues with the output size of a Many-to-Many CNN-LSTM in PyTorch

I am trying to build a binary temporal image classifier by combining ResNet18 and an LSTM. However, I have never really used RNNs before and have been struggling to get the correct output shape.
I am using a batch size of 128 and a sequence size of 32. The images are 80x80 grayscale images.
The current model is:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class CNNLSTM(nn.Module):
    def __init__(self):
        super(CNNLSTM, self).__init__()
        self.resnet = models.resnet18(pretrained=False)
        self.resnet.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3)
        self.resnet.fc = nn.Sequential(nn.Linear(in_features=512, out_features=256, bias=True))
        self.lstm = nn.LSTM(input_size=256, hidden_size=256, num_layers=3)
        self.fc1 = nn.Linear(256, 128)
        self.fc2 = nn.Linear(128, 1)

    def forward(self, x_3d):
        # x_3d: torch.Size([128, 32, 1, 80, 80])
        hidden = None
        toret = []
        for t in range(x_3d.size(1)):
            x = self.resnet(x_3d[:, t, :, :, :])
            out, hidden = self.lstm(x.unsqueeze(0), hidden)
            x = self.fc1(out[-1, :, :])
            x = F.relu(x)
            x = self.fc2(x)
            print("x shape: ", x.shape)
            toret.append(x)
        return torch.stack(toret)
This returns a tensor of shape torch.Size([32, 128, 1]), which, as I understand it, means that the nth row represents the nth time step for each element in the batch.
How can I get output of shape 128x1x32 instead?
And is there a better way to do this?
You could permute the dimensions:
a = torch.rand(32, 128, 1)
a = a.permute(1, 2, 0) # these are the indices of the original dimensions
print(a.shape)
>> torch.Size([128, 1, 32])
But you could also set batch_first=True in the LSTM module:
self.lstm = nn.LSTM(input_size=256, hidden_size=256, num_layers=3, batch_first=True)
This will expect that the input to the LSTM has the shape batch-size x seq-len x features and will output a tensor in the same way.
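For example, a quick shape check with batch_first=True (a minimal sketch; the 128/32/256 sizes are just the ones from the question):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=256, hidden_size=256, num_layers=3, batch_first=True)
x = torch.rand(128, 32, 256)  # batch_size x seq_len x features
out, (h, c) = lstm(x)
print(out.shape)  # torch.Size([128, 32, 256]) -- batch dimension first, in and out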

GRU with the same configuration but built in two different ways produces two different outputs in tensorflow

I would like to do some sequence prediction in tensorflow using GRU. So I have created the same model in 2 different ways, as follows:
In model 1 I have 2 GRUs, one after the other; that is, new_state1, the final hidden state of the first GRU, acts as the initial state of the second GRU. Therefore, the model outputs new_state1 and new_state2 consecutively. Note that this is not a 2-layer model, but only 1 layer. As the code below shows, I divided the input and the output into 2 parts, where the first GRU takes the first part and the second GRU takes the second part.
Also, the random seed is set and fixed for both models so that the results are comparable.
Model 1
import tensorflow as tf
import numpy as np

cell_size = 32
seq_length = 1000
time_steps1 = 500
time_steps2 = seq_length - time_steps1

x_t = np.arange(1, seq_length + 1)
x_t_plus_1 = np.arange(2, seq_length + 2)

tf.set_random_seed(123)
m_dtype = tf.float32

input_1 = tf.placeholder(dtype=m_dtype, shape=[None, time_steps1, 1], name="input_1")
input_2 = tf.placeholder(dtype=m_dtype, shape=[None, time_steps2, 1], name="input_2")
labels1 = tf.placeholder(dtype=m_dtype, shape=[None, time_steps1, 1], name="labels_1")
labels2 = tf.placeholder(dtype=m_dtype, shape=[None, time_steps2, 1], name="labels_2")
labels = tf.concat([labels1, labels2], axis=1, name="labels")
initial_state = tf.placeholder(shape=[None, cell_size], dtype=m_dtype, name="initial_state")

def model(input_feat1, input_feat2):
    with tf.variable_scope("GRU"):
        cell1 = tf.nn.rnn_cell.GRUCell(cell_size)
        cell2 = tf.nn.rnn_cell.GRUCell(cell_size)

        with tf.variable_scope("First50"):
            # output1: shape=[1, time_steps1, 32]
            output1, new_state1 = tf.nn.dynamic_rnn(cell1, input_feat1, dtype=m_dtype, initial_state=initial_state)

        with tf.variable_scope("Second50"):
            # output2: shape=[1, time_steps2, 32]
            output2, new_state2 = tf.nn.dynamic_rnn(cell2, input_feat2, dtype=m_dtype, initial_state=new_state1)

        with tf.variable_scope("output"):
            # output shape: [1, time_steps1 + time_steps2, 32] => [1, 1000, 32]
            output = tf.concat([output1, output2], axis=1)
            output = tf.reshape(output, shape=[-1, cell_size])
            output = tf.layers.dense(output, units=1)
            output = tf.reshape(output, shape=[1, time_steps1 + time_steps2, 1])

        with tf.variable_scope("outputs_1_2_reshaped"):
            output1 = tf.slice(input_=output, begin=[0, 0, 0], size=[-1, time_steps1, -1])
            output2 = tf.slice(input_=output, begin=[0, time_steps1, 0], size=[-1, time_steps2, 1])

    print(output.get_shape().as_list(), "1")
    print(output1.get_shape().as_list(), "2")
    print(output2.get_shape().as_list(), "3")

    return output, output1, output2, initial_state, new_state1, new_state2

output, output1, output2, initial_state, new_state1, new_state2 = model(input_1, input_2)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    to_run_list = [new_state1, new_state2]

    in1 = np.reshape(x_t[:time_steps1], newshape=(1, time_steps1, 1))
    in2 = np.reshape(x_t[time_steps1:], newshape=(1, time_steps2, 1))
    l1 = np.reshape(x_t_plus_1[:time_steps1], newshape=(1, time_steps1, 1))
    l2 = np.reshape(x_t_plus_1[time_steps1:], newshape=(1, time_steps2, 1))
    i_s = np.zeros([1, cell_size])

    new_s1, new_s2 = sess.run(to_run_list, feed_dict={input_1: in1,
                                                      input_2: in2,
                                                      labels1: l1,
                                                      labels2: l2,
                                                      initial_state: i_s})

    print(np.shape(new_s1), np.shape(new_s2))
    print(np.mean(new_s1), np.mean(new_s2))
    print(np.sum(new_s1), np.sum(new_s2))
In this model, instead of having 2 different GRUs, I created one, divided the input and the labels into 2 parts as well, and used a for loop to iterate over my input dataset. The final state of each iteration is then fed back into the same model as the initial state.
Note that both model 1 and model 2 start from the very first initial state of zeros.
Model 2
import tensorflow as tf
import numpy as np

cell_size = 32
seq_length = 1000
time_steps = 500

x_t = np.arange(1, seq_length + 1)
x_t_plus_1 = np.arange(2, seq_length + 2)

tf.set_random_seed(123)
m_dtype = tf.float32

inputs = tf.placeholder(dtype=m_dtype, shape=[None, time_steps, 1], name="inputs")
labels = tf.placeholder(dtype=m_dtype, shape=[None, time_steps, 1], name="labels")
initial_state = tf.placeholder(shape=[None, cell_size], dtype=m_dtype, name="initial_state")
grads_initial_state = tf.placeholder(dtype=m_dtype, shape=[None, cell_size], name="prev_grads")
this_is_last_batch = tf.placeholder(dtype=tf.bool, name="this_is_last_batch")

def model(input_feat):
    with tf.variable_scope("GRU"):
        cell = tf.nn.rnn_cell.GRUCell(cell_size)

        with tf.variable_scope("cell"):
            # output: shape=[1, time_steps, 32]
            output, new_state = tf.nn.dynamic_rnn(cell, input_feat, dtype=m_dtype, initial_state=initial_state)

        with tf.variable_scope("output"):
            output = tf.reshape(output, shape=[-1, cell_size])
            output = tf.layers.dense(output, units=1)
            output = tf.reshape(output, shape=[1, time_steps, 1])

    print(output.get_shape().as_list(), "1")

    return output, new_state

output, new_state = model(inputs)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    # 1000 // 500 = 2
    num_iterations = seq_length // time_steps
    print("num_iterations:", num_iterations)

    final_states = []

    for i in range(num_iterations):
        current_xt = x_t[i * time_steps: (i + 1) * time_steps]
        current_xt_plus_1 = x_t_plus_1[i * time_steps: (i + 1) * time_steps]

        in1 = np.reshape(current_xt, newshape=(1, time_steps, 1))
        l1 = np.reshape(current_xt_plus_1, newshape=(1, time_steps, 1))
        i_s = np.zeros([1, cell_size])

        if i == 0:
            new_s = sess.run(new_state, feed_dict={inputs: in1,
                                                   labels: l1,
                                                   initial_state: i_s})
            final_states.append(new_s)
            print("---->", np.mean(final_states[-1]), np.sum(final_states[-1]), i)
        else:
            new_s = sess.run(new_state, feed_dict={inputs: in1,
                                                   labels: l1,
                                                   initial_state: final_states[-1]})
            final_states.append(new_s)
            print("---->", np.mean(final_states[-1]), np.sum(final_states[-1]), i)
Finally, after printing out the statistics of new_state1 and new_state2 in model 1, they were different from new_state in model 2 after each iteration.
I would like to know how to fix this problem and why it is happening.
Edit:
I have figured out that the weight values of the GRU in the two files are different.
Now how can I reproduce the same results in the 2 different files, even after setting the random seed?
Any help is much appreciated!!!
To reproduce the same results in different files, tf.set_random_seed() is not enough. I figured out that we also need to set the seed for the initializers of the GRU cells, as well as for the initializers of the weights in the dense layer at the output (this is at least according to my model); so the definition of the cell is now:
cell1 = tf.nn.rnn_cell.GRUCell(cell_size, kernel_initializer=tf.glorot_normal_initializer(seed=123, dtype=m_dtype))
And for the dense layer:
output = tf.layers.dense(output, units=1, kernel_initializer=tf.glorot_uniform_initializer(seed=123, dtype=m_dtype))
Note that any other initializer could be used, as long as we set the seed and the dtype for it.
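One way to verify that the seeding worked is to read the initialized weights back and compare them across the two scripts (a quick check sketch; the exact variable names depend on your scopes, so inspect them first):
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for v in tf.trainable_variables():
        print(v.name, np.sum(sess.run(v)))  # these sums should match across both files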

Can one obtain variable batch sizes for LSTMs in TensorFlow without using Placeholders?

In my implementation of an LSTM RNN, I used the following line of code:
self.batch_size = tf.shape(x)[0]
where x is a tensor obtained from the Dataset API. Printing x gave the following output:
Tensor("IteratorGetNext:0", shape=(?, 2, 1024), dtype=float32)
The rest of my code is given by
targets = tf.one_hot(y, num_classes)
cell = tf.contrib.rnn.BasicLSTMCell
cells = [cell(num_units=n) for n in num_units]
stacked_rnn_cell = tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=True)
initial_state = stacked_rnn_cell.zero_state(self.batch_size, tf.float32)
...
output, state = tf.nn.dynamic_rnn(
    stacked_rnn_cell, prev_output, initial_state=initial_state, dtype=tf.float32,
    sequence_length=[1024]*self.batch_size)
logits = tf.contrib.layers.fully_connected(output[-1], 24)
xent = tf.nn.softmax_cross_entropy_with_logits_v2(labels=targets, logits=logits)
self.loss = tf.reduce_mean(xent)
self.opt = tf.train.GradientDescentOptimizer(0.01).\
    minimize(self.loss, global_step=global_step)
self.metric_loss, self.update_loss = tf.metrics.mean(self.loss)
self.summary = tf.summary.scalar('Loss', self.update_loss)
I'm met with the error:
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1024,2] vs. shape[1] = [1,128]
1024 is the batch size, 2 is the input size, 128 is the state size.
If I change the first line to
self.batch_size = 1024
or any other constant number, it trains. I'd rather not handle this with placeholders and just infer the value from the data sample so I can keep it general. Any ideas?
Found the solution! The problem line is
output, state = tf.nn.dynamic_rnn(
    stacked_rnn_cell, prev_output, initial_state=initial_state, dtype=tf.float32,
    sequence_length=[1024]*self.batch_size)
If we change it to:
output, state = tf.nn.dynamic_rnn(
    stacked_rnn_cell, prev_output, initial_state=initial_state, dtype=tf.float32,
    sequence_length=tf.tile([1024], [self.batch_size]))
It seems to work as expected.
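The likely reason: self.batch_size is a tensor here, so the Python expression [1024]*self.batch_size does not repeat the list the way it would with a plain int; it is interpreted as an element-wise multiplication, yielding a single-element tensor instead of a length-batch_size vector. tf.tile builds the sequence_length vector inside the graph instead. A minimal sketch of the idea in isolation:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 2, 1024])
batch_size = tf.shape(x)[0]               # dynamic, only known at run time
seq_lens = tf.tile([1024], [batch_size])  # shape [batch_size], every entry 1024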

NaN loss in tensorflow LSTM model

The following network code, which should be your classic simple LSTM language model, starts outputting NaN loss after a while... on my training set it takes a couple of hours, and I couldn't replicate it easily on smaller datasets. But it always happens in serious training.
sparse_softmax_cross_entropy_with_logits should be numerically stable, so it can't be the cause... but other than that, I don't see any other node that could cause an issue in the graph. What could be the problem?
import tensorflow as tf

class MyLM():
    def __init__(self, batch_size, embedding_size, hidden_size, vocab_size):
        self.x = tf.placeholder(tf.int32, [batch_size, None])  # [batch_size, seq_len]
        self.lengths = tf.placeholder(tf.int32, [batch_size])  # [batch_size]

        # remove padding. [batch_size * seq_len] -> [batch_size * sum(lengths)]
        mask = tf.sequence_mask(self.lengths)  # [batch_size, seq_len]
        mask = tf.cast(mask, tf.int32)  # [batch_size, seq_len]
        mask = tf.reshape(mask, [-1])  # [batch_size * seq_len]

        # remove padding + last token. [batch_size * seq_len] -> [batch_size * sum(lengths-1)]
        mask_m1 = tf.cast(tf.sequence_mask(self.lengths - 1, maxlen=tf.reduce_max(self.lengths)), tf.int32)  # [batch_size, seq_len]
        mask_m1 = tf.reshape(mask_m1, [-1])  # [batch_size * seq_len]

        # remove padding + first token. [batch_size * seq_len] -> [batch_size * sum(lengths-1)]
        m1_mask = tf.cast(tf.sequence_mask(self.lengths - 1), tf.int32)  # [batch_size, seq_len-1]
        m1_mask = tf.concat([tf.cast(tf.zeros([batch_size, 1]), tf.int32), m1_mask], axis=1)  # [batch_size, seq_len]
        m1_mask = tf.reshape(m1_mask, [-1])  # [batch_size * seq_len]

        embedding = tf.get_variable("TokenEmbedding", shape=[vocab_size, embedding_size])
        x_embed = tf.nn.embedding_lookup(embedding, self.x)  # [batch_size, seq_len, embedding_size]

        lstm = tf.nn.rnn_cell.LSTMCell(hidden_size, use_peepholes=True)
        # outputs shape: [batch_size, seq_len, hidden_size]
        outputs, final_state = tf.nn.dynamic_rnn(lstm, x_embed, dtype=tf.float32,
                                                 sequence_length=self.lengths)
        outputs = tf.reshape(outputs, [-1, hidden_size])  # [batch_size * seq_len, hidden_size]

        w = tf.get_variable("w_out", shape=[hidden_size, vocab_size])
        b = tf.get_variable("b_out", shape=[vocab_size])
        logits_padded = tf.matmul(outputs, w) + b  # [batch_size * seq_len, vocab_size]
        self.logits = tf.dynamic_partition(logits_padded, mask_m1, 2)[1]  # [batch_size * sum(lengths-1), vocab_size]

        predict = tf.argmax(logits_padded, axis=1)  # [batch_size * seq_len]
        self.predict = tf.dynamic_partition(predict, mask, 2)[1]  # [batch_size * sum(lengths)]

        flat_y = tf.dynamic_partition(tf.reshape(self.x, [-1]), m1_mask, 2)[1]  # [batch_size * sum(lengths-1)]

        self.cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.logits, labels=flat_y)
        self.cost = tf.reduce_mean(self.cross_entropy)
        self.train_step = tf.train.AdamOptimizer(learning_rate=0.01).minimize(self.cost)
Check the columns that are fed to the model; in my case, there was a column containing NaN values. After removing the NaNs, it worked.
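For instance, a quick sanity check for NaNs in the input data (a sketch, assuming the data sits in a pandas DataFrame; the frame below is made up for illustration):
import numpy as np
import pandas as pd

# hypothetical training frame with a NaN lurking in one column
df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [4.0, np.nan, 6.0]})
print(df.isna().any())  # flags column b
df = df.dropna()        # drop the offending rows before feeding the model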
It may be a case of exploding gradients: gradients can explode during backpropagation through LSTMs, resulting in numeric overflow. A common technique for dealing with exploding gradients is gradient clipping.
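In the TF1 style of the question, clipping could look roughly like this (a sketch; self.cost is the loss from the model above, and clip_norm=5.0 is an assumed hyperparameter):
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
gradients, variables = zip(*optimizer.compute_gradients(self.cost))
clipped_gradients, _ = tf.clip_by_global_norm(gradients, clip_norm=5.0)
self.train_step = optimizer.apply_gradients(zip(clipped_gradients, variables))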

Tensorflow: Recurrent Neural Network Batch Training

I am trying to implement an RNN in Tensorflow. I am writing my own functions instead of using RNN cells, as practice.
The problem is sequence tagging. The input size is [32, 48, 900], where 32 is the batch size, 48 is the number of time steps, and 900 is the vocab size (a one-hot encoded vector). The output is [32, 48, 145], where the first two dimensions are the same as the input, but the last dimension is the output vocabulary size (one-hot). Basically this is an NLP tagging problem.
I am getting the following error:
InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[48,145] labels_size=[1536,145]
The actual labels size is [32, 48, 145], but it merges the first two dimensions without my control. FYI 32 * 48 = 1536.
If I run my RNN with batch size 1, it works fine as expected. I could not figure out how to solve the issue; the problem occurs in the last line of the code.
I have pasted the relevant part of the code:
inputs = tf.placeholder(shape=[None, self.seq_length, self.vocab_size], dtype=tf.float32, name="inputs")
targets = tf.placeholder(shape=[None, self.seq_length, self.output_vocab_size], dtype=tf.float32, name="targets")
init_state = tf.placeholder(shape=[1, self.hidden_size], dtype=tf.float32, name="state")

initializer = tf.random_normal_initializer(stddev=0.1)

with tf.variable_scope("RNN") as scope:
    hs_t = init_state
    ys = []
    for t, xs_t in enumerate(tf.split(inputs[0], self.seq_length, axis=0)):
        if t > 0: scope.reuse_variables()
        Wxh = tf.get_variable("Wxh", [self.vocab_size, self.hidden_size], initializer=initializer)
        Whh = tf.get_variable("Whh", [self.hidden_size, self.hidden_size], initializer=initializer)
        Why = tf.get_variable("Why", [self.hidden_size, self.output_vocab_size], initializer=initializer)
        bh = tf.get_variable("bh", [self.hidden_size], initializer=initializer)
        by = tf.get_variable("by", [self.output_vocab_size], initializer=initializer)

        hs_t = tf.tanh(tf.matmul(xs_t, Wxh) + tf.matmul(hs_t, Whh) + bh)
        ys_t = tf.matmul(hs_t, Why) + by
        ys.append(ys_t)

hprev = hs_t
output_softmax = tf.nn.softmax(ys)  # Get softmax for sampling
#outputs = tf.concat(ys, axis=0)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=ys))
The problem may lie in the size of ys. ys should have the size [32, 48, 145], but the output ys only has the size [48, 145]; so if the batch size is 1, the target size is [1, 48, 145], which matches [48, 145] after dimensionality reduction.
To solve the problem, you can add a loop over the batch dimension (replacing inputs[0] with inputs[i]), such as:
for i in range(inputs.get_shape()[0]):
    for t, xs_t in enumerate(tf.split(inputs[i], self.seq_length, axis=0)):
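Fleshed out, the nested loop might look roughly like this (a sketch of the idea; it assumes a statically known batch size, since range() needs a Python int, and it hoists the variable creation out of the loops so the same weights are reused for every example and time step):
with tf.variable_scope("RNN", initializer=initializer):
    Wxh = tf.get_variable("Wxh", [self.vocab_size, self.hidden_size])
    Whh = tf.get_variable("Whh", [self.hidden_size, self.hidden_size])
    Why = tf.get_variable("Why", [self.hidden_size, self.output_vocab_size])
    bh = tf.get_variable("bh", [self.hidden_size])
    by = tf.get_variable("by", [self.output_vocab_size])

    ys = []
    for i in range(batch_size):  # batch_size assumed to be a plain int, e.g. 32
        hs_t = init_state
        ys_i = []
        for xs_t in tf.split(inputs[i], self.seq_length, axis=0):
            hs_t = tf.tanh(tf.matmul(xs_t, Wxh) + tf.matmul(hs_t, Whh) + bh)
            ys_i.append(tf.matmul(hs_t, Why) + by)
        ys.append(tf.concat(ys_i, axis=0))  # [seq_length, output_vocab_size]
    ys = tf.stack(ys, axis=0)               # [batch_size, seq_length, output_vocab_size]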
