I am trying to implement an RNN in TensorFlow. To practice, I am writing my own functions instead of using the built-in RNN cells.
The problem is sequence tagging. The input size is [32, 48, 900], where 32 is the batch size, 48 is the number of time steps, and 900 is the vocabulary size (each token is a one-hot encoded vector). The output is [32, 48, 145], where the first two dimensions are the same as the input, but the last dimension is the output vocabulary size (one-hot). Basically this is an NLP tagging problem.
I am getting the following error:
InvalidArgumentError (see above for traceback): logits and labels must
be same size: logits_size=[48,145] labels_size=[1536,145]
The actual labels_size is [32, 48, 145], but it merges the first two dimensions without my control (FYI: 32 * 48 = 1536).
If I run my RNN with a batch size of 1, it works fine as expected. I could not figure out how to solve the issue. The problem occurs in the last line of the code.
I have pasted the relevant part of the code:
inputs = tf.placeholder(shape=[None, self.seq_length, self.vocab_size], dtype=tf.float32, name="inputs")
targets = tf.placeholder(shape=[None, self.seq_length, self.output_vocab_size], dtype=tf.float32, name="targets")
init_state = tf.placeholder(shape=[1, self.hidden_size], dtype=tf.float32, name="state")
initializer = tf.random_normal_initializer(stddev=0.1)
with tf.variable_scope("RNN") as scope:
    hs_t = init_state
    ys = []
    for t, xs_t in enumerate(tf.split(inputs[0], self.seq_length, axis=0)):
        if t > 0: scope.reuse_variables()
        Wxh = tf.get_variable("Wxh", [self.vocab_size, self.hidden_size], initializer=initializer)
        Whh = tf.get_variable("Whh", [self.hidden_size, self.hidden_size], initializer=initializer)
        Why = tf.get_variable("Why", [self.hidden_size, self.output_vocab_size], initializer=initializer)
        bh = tf.get_variable("bh", [self.hidden_size], initializer=initializer)
        by = tf.get_variable("by", [self.output_vocab_size], initializer=initializer)

        hs_t = tf.tanh(tf.matmul(xs_t, Wxh) + tf.matmul(hs_t, Whh) + bh)
        ys_t = tf.matmul(hs_t, Why) + by
        ys.append(ys_t)

    hprev = hs_t

output_softmax = tf.nn.softmax(ys)  # Get softmax for sampling
#outputs = tf.concat(ys, axis=0)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=ys))
The problem lies in the size of ys. ys should have the shape [32, 48, 145], but your ys only has the shape [48, 145] because you only feed inputs[0], i.e. the first example of the batch. If the batch size is 1, the target size is [1, 48, 145], which matches [48, 145] after the singleton dimension is dropped, so it works.
To solve the problem you can add a loop over the batch dimension (instead of using only inputs[0]), for example:
for i in range(inputs.get_shape()[0]):
    for t, xs_t in enumerate(tf.split(inputs[i], self.seq_length, axis=0)):
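Below is a minimal sketch of that loop-based fix. It assumes a fixed batch size (32 here, since the placeholder's batch dimension is None and a Python loop needs a concrete count) and reuses the variables Wxh, Whh, Why, bh and by already created inside the variable scope; the name logits is my own, not from the original code:

ys = []
for i in range(32):  # assumed fixed batch size; inputs[i] has shape [seq_length, vocab_size]
    hs_t = init_state
    ys_i = []
    for xs_t in tf.split(inputs[i], self.seq_length, axis=0):
        # xs_t: [1, vocab_size]; the same Wxh, Whh, Why, bh, by are reused at every step
        hs_t = tf.tanh(tf.matmul(xs_t, Wxh) + tf.matmul(hs_t, Whh) + bh)
        ys_i.append(tf.matmul(hs_t, Why) + by)   # [1, output_vocab_size]
    ys.append(tf.concat(ys_i, axis=0))           # [seq_length, output_vocab_size]

logits = tf.stack(ys, axis=0)                    # [batch_size, seq_length, output_vocab_size]
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=logits))

With logits shaped [32, 48, 145], the loss now lines up with the targets placeholder.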
I'm trying to train a neural network in PyTorch with some input signals. The layers are Conv1d. The shape of my input is [100, 10], meaning 100 signals, each of length 10.
But when I run the training, I get this error:
Given groups=1, weight of size [100, 10, 1], expected input[1, 1, 10] to have 10 channels, but got 1 channels instead
config = [10, 100, 100, 100, 100, 100, 100, 100]
batch_size = 1
epochs = 10
learning_rate = 0.001
kernel_size = 1
class NeuralNet(nn.Module):
    def __init__(self, config, kernel_size=1):
        super().__init__()
        self.config = config
        self.layers = nn.ModuleList([nn.Sequential(
            nn.Conv1d(self.config[i], self.config[i + 1], kernel_size=kernel_size),
            nn.ReLU())
            for i in range(len(self.config) - 1)])
        self.last_layer = nn.Linear(self.config[-1], 3)
        self.layers.append(nn.Flatten())
        self.layers.append(self.last_layer)

    def forward(self, x):
        for i, l in enumerate(self.layers):
            x = l(x)
        return x
def loader(train_data, batch_size):
    inps = torch.tensor(train_data[0])
    tgts = torch.tensor(train_data[1])
    inps = torch.unsqueeze(inps, 1)
    dataset = TensorDataset(inps, tgts)
    train_dataloader = DataLoader(dataset, batch_size=batch_size)
    return train_dataloader
At first, my code was without the unsqueeze(inps) line and I had the exact same error. I then added this line, thinking that I must have an input of size (num_examples, num_channels, length_of_signal), but it didn't resolve the problem at all.
Thank you in advance for your answers.
nn.Conv1d expects input of the form (batch_size, num_of_channels, seq_length). Its parameters let you directly set the number of output channels (out_channels) and change the output length using, for example, stride. For a Conv1d layer to work correctly, it needs to know the number of input channels (in_channels), which is not the case for the first convolution: input.shape == (batch_size, 1, 10), therefore num_of_channels = 1, while the convolution in self.layers[0] expects this value to be 10 (because in_channels is set by self.config[0] and self.config[0] == 10). Hence, to fix this, prepend one more value to config:
config = [10, 100, 100, 100, 100, 100, 100, 100] # as in snippet above
config = [1] + config
At this point the convs should work fine, but there is another obstacle in self.layers -- the linear layer at the end. If a kernel_size of 1 is used, then after the final convolution the batch will have shape (batch_size, 100, 10), and after flattening (batch_size, 100 * 10), while last_layer expects input of shape (batch_size, 100). So, if the length of the sequence after the final conv layer is known (which is certainly the case if you're using a kernel_size of 1 with the default stride of 1 and default padding of 0 -- the length stays the same), last_layer should be defined as:
self.last_layer = nn.Linear(final_length * self.config[-1], 3)
and in the snippet above final_length can be set to 10 (since the conditions in the previous parentheses are satisfied). To get an idea of how shapes are transformed by Conv1d, take a look at the simple example in the gif below (here batch_size is equal to 1):
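As a complement, here is a minimal shape-check sketch (my own illustration, assuming the corrected config, kernel_size=1, and a batch size of 4 for demonstration):

import torch
import torch.nn as nn

config = [1, 10, 100, 100, 100, 100, 100, 100, 100]   # original config with the leading 1 prepended
final_length = 10                                      # length is unchanged by kernel_size=1 convolutions

convs = nn.Sequential(*[
    nn.Sequential(nn.Conv1d(config[i], config[i + 1], kernel_size=1), nn.ReLU())
    for i in range(len(config) - 1)
])
head = nn.Sequential(nn.Flatten(), nn.Linear(final_length * config[-1], 3))

x = torch.randn(4, 1, 10)   # (batch_size, num_of_channels, seq_length)
h = convs(x)                # -> (4, 100, 10): channels follow config, length stays 10
out = head(h)               # -> (4, 3)
print(h.shape, out.shape)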
I'm trying to learn TensorFlow and I'm getting the following error:
logits and labels must be broadcastable: logits_size=[32,1] labels_size=[16,1]
The code runs fine when I use this as input:
self.input = np.ones((500, 784))
self.y = np.ones((500, 1))
However, when I add an extra dimension, the error is thrown:
self.input = np.ones((500, 2, 784))
self.y = np.ones((500, 1))
The code to build the graph
self.x = tf.placeholder(tf.float32, shape=[None] + self.config.state_size)
self.y = tf.placeholder(tf.float32, shape=[None, 1])
# network architecture
d1 = tf.layers.dense(self.x, 512, activation=tf.nn.relu, name="dense1")
d2 = tf.layers.dense(d1, 1, name="dense2")
with tf.name_scope("loss"):
    self.cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=self.y, logits=d2))
    self.train_step = tf.train.AdamOptimizer(self.config.learning_rate).minimize(self.cross_entropy,
                                                                                 global_step=self.global_step_tensor)
    correct_prediction = tf.equal(tf.argmax(d2, 1), tf.argmax(self.y, 1))
    self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Could someone explain to me why this is happening and how I can fix it?
logits is the name typically given to the output of the network; these are your predictions. In your error the logits have size [32, 1], i.e. 32 predictions of a single value each, while your labels have size [16, 1], i.e. you're providing only 16 labels. The number of labels you provide is in conflict with the number of outputs of the network; they should be the same.
I'm not quite clear what you're doing with the extra dimension in the input, but I guess you must be accidentally doubling the samples in some way. Perhaps the [500, 2, 784] shape is effectively being treated as [1000, 784] somewhere along the way, which then no longer matches the 500 labels. Also, since your inputs have 784 features you appear to be working with MNIST-style data (10 classes), so your self.y should be shaped [500, 10], not [500, 1]: your labels need to be in one-hot encoding format. E.g. a single label of shape [1, 10] for the digit 3 would be [[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]], not the digit representation, e.g. [3], as you seem to have it set up in your sanity test here.
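For illustration, a small sketch of how such one-hot labels could be built (the variable names and the np.eye trick are my own, not from the question):

import numpy as np

digit_labels = np.random.randint(0, 10, size=(500,))           # e.g. [3, 7, 1, ...] as plain digits
one_hot_labels = np.eye(10, dtype=np.float32)[digit_labels]    # shape (500, 10); digit 3 -> [0,0,0,1,0,0,0,0,0,0]

# the label placeholder should then match:
# self.y = tf.placeholder(tf.float32, shape=[None, 10])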
I'm a beginner in deep learning and have taken a few courses on Udacity. Recently I've been trying to build a deep network for detecting hand joints in input depth images, which doesn't seem to be working well. (My dataset is the ICVL Hand Posture Dataset.)
The network structure is shown here.
① A batch of input images, 240x320;
② An 8-channel convolutional layer with a 5x5 kernel;
③ A max pooling layer, ksize = stride = 2;
④ A fully-connected layer, weight.shape = [38400, 1024];
⑤ A fully-connected layer, weight.shape = [1024, 48].
After several epochs of training, the output of the last layer converges to a (0, 0, ..., 0) vector. I chose mean squared error as the loss function; its value stayed above 40000 and didn't seem to decrease.
The network structure is already too simple to be simplified further, but the problem remains. Could anyone offer any suggestions?
My main code is posted below:
image = tf.placeholder(tf.float32, [None, 240, 320, 1])
annotations = tf.placeholder(tf.float32, [None, 48])
W_convolution_layer1 = tf.Variable(tf.truncated_normal([5, 5, 1, 8], stddev=0.1))
b_convolution_layer1 = tf.Variable(tf.constant(0.1, shape=[8]))
h_convolution_layer1 = tf.nn.relu(
    tf.nn.conv2d(image, W_convolution_layer1, [1, 1, 1, 1], 'SAME') + b_convolution_layer1)
h_pooling_layer1 = tf.nn.max_pool(h_convolution_layer1, [1, 2, 2, 1], [1, 2, 2, 1], 'SAME')
W_fully_connected_layer1 = tf.Variable(tf.truncated_normal([120 * 160 * 8, 1024], stddev=0.1))
b_fully_connected_layer1 = tf.Variable(tf.constant(0.1, shape=[1024]))
h_pooling_flat = tf.reshape(h_pooling_layer1, [-1, 120 * 160 * 8])
h_fully_connected_layer1 = tf.nn.relu(
    tf.matmul(h_pooling_flat, W_fully_connected_layer1) + b_fully_connected_layer1)
W_fully_connected_layer2 = tf.Variable(tf.truncated_normal([1024, 48], stddev=0.1))
b_fully_connected_layer2 = tf.Variable(tf.constant(0.1, shape=[48]))
detection = tf.nn.relu(
    tf.matmul(h_fully_connected_layer1, W_fully_connected_layer2) + b_fully_connected_layer2)
mean_squared_error = tf.reduce_sum(tf.losses.mean_squared_error(annotations, detection))
training = tf.train.AdamOptimizer(1e-4).minimize(mean_squared_error)
# This data loader reads images and annotations and converts them into batches of numbers.
loader = ICVLDataLoader('../data/')
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for i in range(1000):
        # batch_images: a list with shape = [BATCH_SIZE, 240, 320, 1]
        # batch_annotations: a list with shape = [BATCH_SIZE, 48]
        [batch_images, batch_annotations] = loader.get_batch(100).to_1d_list()
        # feed the placeholders defined above (image, annotations)
        [x_, t_, l_, p_] = session.run([image, training, mean_squared_error, detection],
                                       feed_dict={image: batch_images, annotations: batch_annotations})
And it runs like this.
The main issue is likely the relu activation in the output layer. You should remove it, i.e. let detection simply be the result of a matrix multiplication. If you want to force the outputs to be positive, consider something like the exponential function instead.
While relu is a popular hidden activation, I see one major problem with using it as an output activation: as is well known, relu maps negative inputs to 0 -- but, crucially, the gradients there are also 0. In the output layer this basically means your network cannot learn from its mistakes when it produces outputs < 0 (which is likely to happen with random initializations). This will likely impair the overall learning process heavily.
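A minimal sketch of the suggested change, reusing the weight and bias names from the question (the exponential variant is only an illustration of the alternative mentioned above):

# linear output layer: no relu, so negative pre-activations still receive gradients
detection = tf.matmul(h_fully_connected_layer1, W_fully_connected_layer2) + b_fully_connected_layer2

# if the outputs must be strictly positive, one possible alternative:
# detection = tf.exp(tf.matmul(h_fully_connected_layer1, W_fully_connected_layer2) + b_fully_connected_layer2)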
I am trying to implement an LSTM-based network where, after the hidden state computation, we also apply a linear + sigmoid transformation at each time step. I have found the official documentation and a nice article that describe the tf.nn.raw_rnn function as suitable for this task; however, I struggle to understand why it does not work in my particular case.
Input description
So, let our input to the LSTM be a minibatch of size [num_steps x batch_size x size], concretely [5, 32, 100]. Let the LSTM have 200 hidden units. Then the output of the LSTM is a [5, 32, 200] tensor, which we can later use for loss computation. I assume the input [5, 32, 100] tensor is first unstacked into an array of [32, 100] tensors and then stacked back if we use tf.nn.dynamic_rnn with time_major=True in TensorFlow:
tf.nn.dynamic_rnn(LSTM)
LSTM t=0 LSTM t=1 LSTM t=2 LSTM t=3 LSTM t=4
[5, 32, 100] --> [[32, 100], [32, 100], [32, 100], [32, 100], [32, 100]] --> [5, 32, 200]
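For reference, a minimal sketch of the dynamic_rnn call this diagram describes (my own illustration; the sizes follow the description above):

inputs = tf.placeholder(tf.float32, [5, 32, 100])   # [num_steps, batch_size, size]
cell = tf.contrib.rnn.LSTMCell(200)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, time_major=True, dtype=tf.float32)
# outputs has shape [5, 32, 200]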
Hidden state model
In addition, after each LSTM cell I need to perform a linear + sigmoid transformation to squash each [32, 200] tensor into, for example, [32, 1]. tf.nn.dynamic_rnn won't work for that since it only accepts cells. We need to use the tf.nn.raw_rnn API. So, here is my try:
def _get_raw_rnn_graph(self, inputs):
    time = tf.constant(0, dtype=tf.int32)
    _inputs_ta = tf.TensorArray(dtype=tf.float32, size=5)
    # our [5, 32, 100] tensor becomes [[32, 100], [32, 100], ...]
    _inputs_ta = _inputs_ta.unstack(inputs)
    # create simple LSTM cell
    cell = tf.contrib.rnn.LSTMCell(config.hidden_size)

    # create loop_fn for raw_rnn
    def loop_fn(time, cell_output, cell_state, loop_state):
        emit_output = cell_output  # == None if time == 0
        if cell_output is None:  # time == 0
            next_cell_state = cell.zero_state(32, tf.float32)
            self._initial_state = next_cell_state
        else:
            next_cell_state = cell_state
        elements_finished = (time >= 32)
        finished = tf.reduce_all(elements_finished)
        next_input = tf.cond(finished,
                             lambda: tf.zeros([32, config.input_size], dtype=tf.float32),
                             lambda: _inputs_ta.read(time))
        # apply linear + sigmoid transform here
        next_input = self._linear_transform(next_input, activation=tf.sigmoid)
        next_loop_state = None
        return (elements_finished, next_input, next_cell_state, emit_output, next_loop_state)

    outputs_ta, final_state, _ = tf.nn.raw_rnn(cell, loop_fn)
    outputs = outputs_ta.stack()
    return outputs, final_state
This unfortunately does not work. The loop_fn iterates only two times instead of num_steps times as I expected, and its output is Tensor("Train/Model/TensorArrayStack/TensorArrayGatherV3:0", shape=(?, 32, 200), dtype=float32), not [5, 32, 1] as we intended. What am I missing here?
I need to implement an LSTM layer after two convolutional layers. Here is my code after the first convolution:
convo_2 = convolutional_layer(convo_1_pooling, shape=[5, 5, 32, 64])
convo_2_pooling = max_pool_2by2(convo_2)
convo_2_flat = tf.reshape(convo_2_pooling, shape=[-1, 64 * 50 * 25])
cell = rnn.LSTMCell(num_units=100, activation=tf.nn.relu)
cell = rnn.OutputProjectionWrapper(cell, output_size=7)
conv_to_rnn = int(convo_2_flat.get_shape()[1])
outputs, states = tf.nn.dynamic_rnn(cell, convo_2_flat, dtype=tf.float32)
I get this error on the last line:
ValueError: Shape (?, 50, 64) must have rank 2
I have to indicate the time steps in the convo_2_flat variable, right? How? I really don't know how to do that.
EDIT:
After this reshape:
convo_2_flat = tf.reshape(convo_2_flat, shape=[-1, N_TIME_STEPS, INPUT_SIZE])
where
N_TIME_STEPS = 25
INPUT_SIZE = int(64 * 50 * 25 / N_TIME_STEPS)
I got this error: InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[5000,7] labels_size=[50,7] on this line:
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=outputs))
It seems to me that the batch size has changed after the last reshape.
EDIT 2:
Is the code below wrong?
convo_2_shape = convo_2_pooling.get_shape().as_list()
shape_convo_flat = convo_2_shape[1] * convo_2_shape[2] * convo_2_shape[3]
N_TIME_STEPS = convo_2_shape[1]
INPUT_SIZE = tf.cast(shape_convo_flat / N_TIME_STEPS, tf.int32)
convo_2_out = tf.reshape(convo_2_pooling, shape=[-1, shape_convo_flat])
convo_2_out = tf.reshape(convo_2_out, shape=[-1, N_TIME_STEPS, INPUT_SIZE])
I set N_TIME_STEPS that way because otherwise I'll have a float INPUT_SIZE and tf will throw an error.
According to the TensorFlow documentation (https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn), the input should have the following shape (I use the default time_major=False here), i.e., [BATCH_SIZE, N_TIME_STEPS, INPUT_SIZE]. Therefore, you can reshape convo_2_flat as follows:
#get the shape of the output of max pooling
shape = convo_2_pooling.get_shape().as_list()
#flat accordingly
convo_2_flat = tf.reshape(convo_2_pooling, [-1, shape[1] * shape[2] * shape[3]])
# Here shape[1] * shape[2] * shape[3]] = N_TIME_STEPS*INPUT_SIZE
#reshape according to dynamic_rnn input
convo_2_flat = tf.reshape(convo_2_flat, shape=[-1, N_TIME_STEPS, INPUT_SIZE])
outputs, states = tf.nn.dynamic_rnn(cell, convo_2_flat, dtype=tf.float32)
# get the output of the last time step
val = tf.transpose(outputs, [1, 0, 2])
lstm_last_output = val[-1]
OUTPUT_SIZE = 7 #since you have defined in cell = rnn.OutputProjectionWrapper(cell, output_size=7)
W = {
'output': tf.Variable(tf.random_normal([OUTPUT_SIZE, N_CLASSES]))
}
biases = {
'output': tf.Variable(tf.random_normal([N_CLASSES]))
}
#Dense Layer
pred_Y= tf.matmul(lstm_last_output, W['output']) + biases['output']
#Softmax Layer (for predictions only)
pred_softmax = tf.nn.softmax(pred_Y)
# Note: softmax_cross_entropy_with_logits expects unscaled logits, so pass pred_Y here rather than pred_softmax
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=pred_Y))
Note on the outputs:
According to the documentation, the output of dynamic_rnn is as follows,
i.e., [BATCH_SIZE, N_TIME_STEPS, OUTPUT_SIZE]. Therefore, you have an output for every time step. In the code above, I only take the output of the last time step. Alternatively, you can think about a different architecture for the rnn output, as described here (How do we use LSTM to classify sequences?).
Hope this helps.