Multi-dimension input to a neural network - python

I have a neural network with many layers. The input to the network has dimension [batch_size, 7, 4]. When this input is passed through the network, I observed that only the third dimension changes; that is, if my first layer has 20 units, then that layer's output is [batch_size, 7, 20]. I need the end result after many layers to be of shape [batch_size, 16].
I have the following questions:
Are the other two dimensions being used at all?
If not, how can I modify my network so that all three dimensions are used?
How do I drop one dimension meaningfully to get the 2-d output that I desire?
Following is my current implementation in TensorFlow v1.14 with Python 3:
out1 = tf.layers.dense(inputs=noisy_data, units=150, activation=tf.nn.tanh) # Outputs [batch, 7, 150]
out2 = tf.layers.dense(inputs=out1, units=75, activation=tf.nn.tanh) # Outputs [batch, 7, 75]
out3 = tf.layers.dense(inputs=out2, units=32, activation=tf.nn.tanh) # Outputs [batch, 7, 32]
out4 = tf.layers.dense(inputs=out3, units=16, activation=tf.nn.tanh) # Outputs [batch, 7, 16]
Any help is appreciated. Thanks.

Answer to Question 1: The data values along the 2nd dimension (axis=1) are not being mixed by the dense layer, as you can see from the output of the code snippet below (assuming batch_size=2):
>>> input1 = tf.placeholder(tf.float32, shape=[2,7,4])
>>> tf.layers.dense(inputs=input1, units=150, activation=tf.nn.tanh)
>>> graph = tf.get_default_graph()
>>> graph.get_collection('variables')
[<tf.Variable 'dense/kernel:0' shape=(4, 150) dtype=float32_ref>, <tf.Variable 'dense/bias:0' shape=(150,) dtype=float32_ref>]
you can see that the dense layer's kernel has shape (4, 150), so it transforms only the last dimension. The values along the 2nd dimension are not mixed with each other: the 7 positions are processed independently, effectively treated as extra batch entries, just like the values along the 1st dimension, though the official TensorFlow docs don't say much about the required input shape.
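As a quick extra check, here is a minimal sketch (assuming TF 1.x) that applies the same Dense layer both to the full [2, 7, 4] tensor and to a single slice along axis 1; the matching rows confirm the 7 positions are transformed independently:
import tensorflow as tf

inp = tf.placeholder(tf.float32, [2, 7, 4])
dense = tf.layers.Dense(units=150, activation=tf.nn.tanh)
full = dense(inp)          # shape [2, 7, 150]
slice0 = dense(inp[:, 0])  # shape [2, 150]; equal to full[:, 0] for any input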
Answer to Question 2: Reshape the input from [batch_size, 7, 4] to [batch_size, 28] with the line of code below before passing the input to the first dense layer:
input1 = tf.reshape(input1, [-1, 7*4])
Answer to Question 3: If you reshape the inputs as above, there is no need to drop a dimension.
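Putting it together, a minimal sketch of the modified network (layer sizes taken from your code):
import tensorflow as tf

noisy_data = tf.placeholder(tf.float32, [None, 7, 4])
# flatten so every one of the 7*4 input values can influence the output
flat = tf.reshape(noisy_data, [-1, 7 * 4])                              # [batch, 28]
out1 = tf.layers.dense(inputs=flat, units=150, activation=tf.nn.tanh)   # [batch, 150]
out2 = tf.layers.dense(inputs=out1, units=75, activation=tf.nn.tanh)    # [batch, 75]
out3 = tf.layers.dense(inputs=out2, units=32, activation=tf.nn.tanh)    # [batch, 32]
out4 = tf.layers.dense(inputs=out3, units=16, activation=tf.nn.tanh)    # [batch, 16]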

Related

Input fixed length sequence of frames to CNN

I want my PyTorch CNN to take as input a sequence of length SEQ_LEN of 32x32 RGB images concatenated along the channels dimension. Therefore, a single input of the network has shape (32, 32, 3, SEQ_LEN). How should I define my CNN input layer?
The common way
import numpy as np
import torch
import torch.nn as nn

SEQ_LEN = 10
input_conv = nn.Conv2d(in_channels=SEQ_LEN, out_channels=32, kernel_size=3)
BATCH_SIZE = 64
frames = np.random.randint(0, 255, size=(BATCH_SIZE, SEQ_LEN, 3, 32, 32))
frames_tensor = torch.tensor(frames)
input_conv(frames_tensor)
gives the error
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 10, 3, 3], but got 5-dimensional input of size [64, 10, 3, 32, 32] instead
Given your comments, it sounds like your data is not fit for a 2D convolutional neural network at all, and that a 3D one (Conv3d) would be more appropriate. As you can see from its documentation, its input shape is what you would expect.
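A minimal sketch of that approach (out_channels and kernel_size are assumptions carried over from your code):
import torch
import torch.nn as nn

BATCH_SIZE, SEQ_LEN = 64, 10
# Conv3d expects input of shape (N, C, D, H, W); here C=3 (RGB) and D=SEQ_LEN
conv3d = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=3)
frames = torch.rand(BATCH_SIZE, SEQ_LEN, 3, 32, 32)
frames = frames.permute(0, 2, 1, 3, 4)  # move channels in front of the sequence axis
out = conv3d(frames)
print(out.shape)  # torch.Size([64, 32, 8, 30, 30])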

Is it possible to remove batch dimension from frozen graph?

Checking a frozen TensorFlow model:
wget https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz
I see that the input is the tensor 'input:0' with shape (1, 299, 299, 3). I wonder, is it possible to make the input (None, 299, 299, 3) so that batch prediction with batch_size > 1 becomes available?
In the general case it may not be possible to do this, as there could be operations that rely on the first dimension being 1 (e.g. suppose tf.squeeze is used on input:0). However, you can try to replace the input with a placeholder of the desired shape. You can do this with tf.graph_util.import_graph_def. If the operations allow it, then TensorFlow should import the graph adjusting the node shapes accordingly. See the following example:
import tensorflow as tf

# First graph
with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, [1, 10, 20], name='Input')
    y = tf.square(x, name='Output')
    print(y)
    # Tensor("Output:0", shape=(1, 10, 20), dtype=float32)
    gd = tf.get_default_graph().as_graph_def()

# Second graph
with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, [None, 10, 20], name='Input')
    y, = tf.graph_util.import_graph_def(gd, input_map={'Input:0': x},
                                        return_elements=['Output:0'], name='')
    print(y)
    # Tensor("Output:0", shape=(?, 10, 20), dtype=float32)
In the first graph, the Output:0 node has shape (1, 10, 20), which is inferred from the shape of the Input:0 tensor. However, when I take the graph definition from the first graph and load it into the second graph, replacing the Input:0 tensor with a placeholder with an undefined first dimension, the shape of Output:0 is updated to (?, 10, 20). If I run the operations in the second graph with an input value whose first dimension is greater than one, it will work as expected, because the graph is correct.
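Applied to the Inception graph from the question, the same trick looks roughly like this (a sketch; the output tensor name is an assumption based on the standard frozen Inception v3 release):
import tensorflow as tf

# load the frozen graph downloaded in the question
with tf.gfile.GFile('inception_v3_2016_08_28_frozen.pb', 'rb') as f:
    gd = tf.GraphDef()
    gd.ParseFromString(f.read())

with tf.Graph().as_default():
    # replace 'input:0' with a placeholder whose batch dimension is undefined
    images = tf.placeholder(tf.float32, [None, 299, 299, 3], name='images')
    preds, = tf.graph_util.import_graph_def(
        gd, input_map={'input:0': images},
        return_elements=['InceptionV3/Predictions/Reshape_1:0'], name='')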

reshape a matrix from [?, 100] to [batch_size, ?, 100]

I'm building an autoencoder based on an RNN. After the FC layer, I have to reshape my output to [batch_size, sequence_length, embedding_dimension]. However, the sequence length (timestep count) for my decoder is not known in advance. What I want is something that works as follows:
outputs = tf.reshape(outputs, [batch_size, None, word_dimension])
Or, is there any other way for me to get the sequence length from the input data, which has shape [batch_size, sequence_length, embedding_dimension]?
You can use -1 for the dimension in your reshape operation that you want to be calculated automatically.
For example, here:
x = tf.zeros((100 * 10 *12,))
reshaped = tf.reshape(x, [100, -1, 12])
reshaped will have shape (100, 10, 12).
Or, is there any other way for me to get the sequence length from the input data, which has shape [batch_size, sequence_length, embedding_dimension]?
You can use the tf.shape operation to find the shape of a tensor at runtime, so if you want sequence_length from a tensor with shape [batch_size, sequence_length, embedding_dimension], you just need to call tf.shape(x)[1].
For my example above, calling:
tf.shape(reshaped)[1]
would give an int32 tensor with shape () and value 10.
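Putting both pieces together for the autoencoder case, a minimal sketch (the placeholder shapes and the name fc_out are assumptions):
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, None, 100])  # [batch, seq_len, emb]
fc_out = tf.placeholder(tf.float32, [None, 100])        # [batch * seq_len, emb]

batch_size = tf.shape(inputs)[0]  # dynamic batch size at runtime
# -1 lets TensorFlow infer the sequence length automatically
outputs = tf.reshape(fc_out, [batch_size, -1, 100])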

Shape of 1D convolution output on a 2D data using keras

I am trying to implement a 1D convolution for a time series classification problem using Keras, and I am having some trouble interpreting the output size of the 1D convolutional layer.
My data is composed of time series of different features over a time interval of 128 units, and I apply a 1D convolutional layer:
x = Input((n_timesteps, n_features))
cnn1_1 = Conv1D(filters = 100, kernel_size= 10, activation='relu')(x)
after which I obtain the following output shapes in the model summary:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_26 (InputLayer)        (None, 128, 9)            0
_________________________________________________________________
conv1d_28 (Conv1D)           (None, 119, 100)          9100
I was assuming that with a 1D convolution, the data is convolved only across the time axis (axis 1) and the size of my output would be (119, 100*9). But I guess the network is performing some kind of operation across the feature dimension (axis 2), and I don't know which operation it is performing.
I am saying this because my interpretation of 1D convolution is that the feature shapes must be preserved, since I am only convolving the time domain: if I have 9 features, then for each filter I have 9 convolutional kernels, each of them applied to a different feature and convolved across the time axis. This should return 9 convolved features per filter, resulting in an output shape of (119, 9*100).
However, the output shape is (119, 100).
Clearly something else is happening and I can't understand it.
Where is my reasoning failing? How is the 1D convolution actually performed?
Let me also add the comment I left on one of the answers provided:
I understand the reduction from 128 to 119, but what I don't understand is why the feature dimension changes. For example, if I use
Conv1D(filters = 1, kernel_size= 10, activation='relu')
, then the output dimension is going to be (None, 119, 1), giving rise to only one feature after the convolution. What is going on in this dimension? Which operation is performed to go from 9 to 1?
Conv1D needs a 3D tensor as its input, with shape (batch_size, time_steps, features). Based on your code, filters is 100, which means each window of the input is mapped from 9-dimensional feature vectors to a 100-dimensional output. How does this happen? A dot product:
p_i = f(W · X_i + b)
Here X_i is the concatenation of k word vectors (k = kernel_size), flattened into a vector of length k*d, where d is the dimension of the input word vector; W is an l x (k*d) weight matrix, with l the number of filters (l = filters); and p_i is the l-dimensional output vector for each window of k words.
What happens in your code?
[kernel_size * n_features] window dot [kernel_size * n_features] filter => [1] => repeat for l = 100 filters => [1 * 100]
do the above for every window along the time axis => [119 * 100]
Another thing that happens here is that you did not specify the padding type. According to the docs, by default Conv1D uses 'valid' padding, which is what reduces your time dimension from 128 to 119. If you need that dimension to be the same as the input's, you can choose the 'same' option:
Conv1D(filters = 100, kernel_size= 10, activation='relu', padding='same')
It sums over the last axis, which is the feature axis. You can easily check this by doing the following:
import tensorflow as tf

x = tf.random.normal((1, 128, 9))
# initialize the kernel with ones, and use a linear activation
y = tf.keras.layers.Conv1D(1, 3, activation="linear", kernel_initializer="ones")(x)
print(y)                          # the convolution output, shape (1, 126, 1)
print(tf.reduce_sum(x, axis=-1))  # x summed along the feature axis
Comparing the two printouts, the sum of the first 3 values of the feature-axis sum of x equals the first value of the convolution output. I used a kernel size of 3 to make this verification easier.

Tensorflow convolution

I'm trying to perform a convolution (conv2d) on images of variable dimensions. I have those images in the form of a 1-D array, and I want to perform a convolution on them, but I'm having a lot of trouble with the shapes.
This is my code of the conv2d:
tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
where x is the input image.
The error is:
ValueError: Shape must be rank 4 but is rank 1 for 'Conv2D' (op: 'Conv2D') with input shapes: [1], [5,5,1,32].
I think I might reshape x, but I don't know the right dimensions. When I try this code:
x = tf.reshape(self.x, shape=[-1, 5, 5, 1]) # example
I get this:
ValueError: Dimension size must be evenly divisible by 25 but is 1 for 'Reshape' (op: 'Reshape') with input shapes: [1], [4] and with input tensors computed as partial shapes: input[1] = [?,5,5,1].
You can't use conv2d with a tensor of rank 1. Here's the description from the doc:
Computes a 2-D convolution given 4-D input and filter tensors.
These four dimensions are [batch, height, width, channels] (as Engineero already wrote).
If you don't know the dimensions of the image in advance, TensorFlow allows you to provide a dynamic shape:
x = tf.placeholder(tf.float32, shape=[None, None, None, 3], name='x')
with tf.Session() as session:
    print(session.run(x, feed_dict={x: data}))
In this example, a 4-D tensor x is created, but only the number of channels is known statically (3); everything else is determined at runtime. So you can pass this x into conv2d, even if the size is dynamic.
But there's another problem. You didn't say what your task is, but if you're building a convolutional neural network, I'm afraid you'll need to know the size of the input to determine the size of the FC layer after all pooling operations, and this size must be static. If this is the case, I think the best solution is to scale your inputs to a common size before passing them into the convolutional network.
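For example, a minimal sketch of that preprocessing step (the 224x224 target size is an assumption):
import tensorflow as tf

# images with unknown height and width, 3 channels
x = tf.placeholder(tf.float32, shape=[None, None, None, 3], name='x')
# scale everything to a fixed size before the convolutional network
resized = tf.image.resize_images(x, [224, 224])  # shape (?, 224, 224, 3)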
UPD:
Since it wasn't clear, here's how you can reshape any image into a 4-D array.
import numpy as np

a = np.zeros([50, 178, 3])
shape = a.shape
print(shape)    # prints (50, 178, 3)
a = a.reshape([1] + list(shape))
print(a.shape)  # prints (1, 50, 178, 3)
