Tensorflow CNN for different input size - python

I'm trying to build a conv network for image regression.
As shown below, one [224 x 224] image has one GT value {x}.
It's easy to train with [224 x 224] images and validate/test with [224 x 224] images.
However, I'd like to apply the CNN to different image sizes.
For example, for a [224 x 229] image, I want to get 5 regression values 'at once'.
Simply, I could do that by sliding a [224 x 224] window 5 times, but that is obviously too slow.
I think using conv layers with different image sizes is possible, but the fully connected layer is not.
If I change the image size to [455 x 256], I get this error:
lhs shape= [4608,1024] rhs shape= [2048,1024]
Is there any way to handle it?

Fully connected layers have a fixed-size input, so changing the input size will cause a shape-mismatch error.
One way to tackle this problem and allow for different image sizes is to use a fully convolutional network.
An example with easy numbers:
Assuming, for example, that the conv layers' output is of size 16x16, you can create a "classifier layer" of size 4x4 with stride 4 that outputs, for each of the sixteen 4x4 squares comprising the 16x16 feature map (a 4x4 grid of them), a single value per output dimension. Such a filter would be of size 4x4xn_dim; in your case n_dim would be 5, and the final output would be of size 4x4x5, corresponding to 5 outputs (one for each regression value) for each 4x4 square.
You will notice you can play with the shape of the last conv filter to obtain different sizes for the final output, corresponding to different parts of the input image, while still looking at all of it.
You can work out the numbers for your own example.
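As a rough sketch of that idea (assuming Keras; the layer sizes and names here are illustrative, not taken from your network), the dense head is replaced by a convolution and the spatial input dimensions are left unspecified. Here the final conv emits one regression value per output position, so a wider image simply yields more values in a single forward pass; use more output channels if you want several values per position as described above:

import tensorflow as tf
from tensorflow.keras import layers

# Backbone with no fixed height/width: the spatial dimensions are left as None
inputs = layers.Input(shape=(None, None, 3))
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(4)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(4)(x)

# "Classifier layer": a patch-sized conv instead of a Dense layer,
# producing one regression value per spatial position
outputs = layers.Conv2D(1, kernel_size=4, strides=4)(x)
model = tf.keras.Model(inputs, outputs)

# A wider input gives more output positions, all computed at once
print(model(tf.random.normal([1, 224, 224, 3])).shape)   # (1, 3, 3, 1)
print(model(tf.random.normal([1, 224, 288, 3])).shape)   # (1, 3, 4, 1)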
You would probably like to read about basic methods for semantic segmentation.
Also see basic fully convolutional nets.

Related

ResNet for 32x32 images

I am trying to train a resnet for 32x32 images, and I came upon a tutorial: https://towardsdatascience.com/resnets-for-cifar-10-e63e900524e0, which applies to CIFAR-10 (a 32x32 image dataset), but I don't understand what it's saying.
This is from the site:
"The rest of the notes from the authors to construct the ResNet are:
Use a stack of 6n layers of 3x3 convolutions. The choice of n will determine the size of our ResNet.
The feature map sizes are {32, 16, 8} respectively with 2 convolutions for each feature map size. Also, the number of filters is {16, 32, 64} respectively.
The down sampling of the volumes through the ResNet is achieved by increasing the stride to 2 for the first convolution of each layer. Therefore, no pooling operations are used until right before the dense layer.
For the bypass connections, no projections will be used. In the cases where there is a difference in the shape of the volume, the input will simply be padded with zeros, so the output size matches the size of the volume before the addition.
This would leave Figure 4 as the representation of our first layer. In this case, our bypass connection is a regular Identity Shortcut because the dimensionality of the volume is constant throughout the layer operations. Since we chose n=1, 2 convolutions are applied within layer 1.
We still can check from Figure 2 that the output volume of Layer1 is indeed 32x32x16. Let’s go deeper!"
I am confused about why the number of channels in the output is 16, when the number of filters was {16, 32, 64}.
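For what it's worth, here is a minimal Keras sketch of "Layer 1" as described in the quote (with n = 1, so one residual block of two 3x3 convolutions; the variable names are mine, not the tutorial's). Only this first group of blocks uses 16 filters; the 32- and 64-filter counts belong to the later, smaller feature maps:

import tensorflow as tf
from tensorflow.keras import layers

# Initial 3x3 convolution with 16 filters on a 32x32x3 CIFAR-10 image
inputs = layers.Input(shape=(32, 32, 3))
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)

# One residual block (n = 1): two 3x3 convs with 16 filters and an identity shortcut
shortcut = x
y = layers.Conv2D(16, 3, padding='same', activation='relu')(x)
y = layers.Conv2D(16, 3, padding='same')(y)
x = layers.Activation('relu')(layers.Add()([shortcut, y]))

print(x.shape)   # (None, 32, 32, 16): the output of Layer 1 has 16 channels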

How to interpret this CNN architecture

How does this CNN architecture work from the input layer to the first convolution layer? hx98 are the input matrix dimensions; is n the number of channels or the number of inputs?
It doesn't seem like n is the number of channels, because 25 is the number of feature maps and their dimensions do not indicate that they are two-channel.
However, if n is the number of inputs and the matrices are single-channel, I haven't found a single CNN architecture anywhere that takes multiple input matrices and convolves them together. Most examples convolve them separately and then concatenate.
In my example, n is 2: one matrix holds BER values and the other holds connection line-rate values.
What mistake am I making? How does this CNN work?
In a CNN, the image pixels (height and width) are multiplied with the kernel weights of the convolution layer and added up to create a feature map.
The kernel passes through all the channels of the image (3 channels for RGB, 1 channel for greyscale) based on the strides defined in the convolution layer.
After the convolution, the size of the image is reduced. To get the same output dimension as the input dimension, you need to add padding. Padding consists of adding the right number of rows and columns on each side of the matrix. For details, please refer to this documentation.
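As a small illustration of n being the channel dimension (a sketch assuming the BER and line-rate matrices have the same shape; the 98x98 size and the 25 filters are placeholders):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Two matrices of the same size stacked as the two channels of one input
ber = np.random.rand(98, 98).astype('float32')
line_rate = np.random.rand(98, 98).astype('float32')
x = np.stack([ber, line_rate], axis=-1)[np.newaxis, ...]   # shape (1, 98, 98, 2)

# Each kernel spans both channels; 'same' padding keeps the spatial size unchanged
conv = layers.Conv2D(filters=25, kernel_size=3, padding='same')
print(conv(x).shape)   # (1, 98, 98, 25): 25 feature maps, same height/width as the input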

Kernel size change in convolutional neural networks

I have been working on creating a convolutional neural network from scratch, and am a little confused on how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers.
Convolutional layer with kernel_size = (5,5) with 32 output channels
new dimension of throughput = (32, 28, 28)
Max Pooling layer with pool_size (2,2) and step (2,2)
new dimension of throughput = (32, 14, 14)
If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels?
Since the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14).
You need 64 kernels, each of size (32, 5, 5).
The depth (number of channels) of a kernel, 32 in this case (or 3 for an RGB image, 1 for greyscale, etc.), should always match the input depth.
E.g. if you have a 3x3 kernel like [-1 0 1; -2 0 2; -1 0 1] and you want to convolve it with an input of depth N (N channels), you can picture stacking this 3x3 kernel N times along the third dimension (in a trained CNN each channel slice has its own learned weights, but the arithmetic is the same). The math is then just like the single-channel case: you multiply the kernel values with the input values under the kernel window in all N channels, sum everything, and get the value of just one output entry (pixel). So what you get as output is a matrix with one channel. How much depth do you want the matrix for the next layer to have? That is the number of kernels you should apply. Hence, in your case, the weights have size (64 x 32 x 5 x 5), which is 64 kernels with 32 channels each.
("I am not a very confident english speaker hope you get what I said, it would be nice if someone edit this :)")
You essentially answered your own question. You are building the network solver. It seems like your convolutional layer's output is [channels out] = [channels in] * [number of kernels]; I had to infer this from the wording of your question. In general, this is how it works: you specify the kernel size of the layer and how many kernels to use. Since you have one input channel, you are essentially saying that there are 32 kernels in your first convolution layer, that is, 32 unique 5x5 kernels. Each of these kernels is applied to the one input channel. More generally, each of the layer's kernels (32 in your example) is applied to each of the input channels, and that is the key. If you build code to implement the convolution layer according to these generalities, then your subsequent convolution layers are done. In the next layer you specify two kernels per channel: in your example there would be 32 input channels, the hidden layer has 2 kernels per channel, and the output would be 64 channels.
You could then down sample by applying a pooling layer, then flatten the 64 channels [turn a matrix into a vector by stacking the columns or rows], and pass it as a column vector into a fully connected network. That is the basic scheme of convolutional networks.
The work comes when you try to code up backpropagation through the convolutional layers, but the OP didn't ask about that. I'll just say this: you will come to a place where you have the stored input matrix (one channel), you have a gradient from a lower layer in the form of a matrix the size of the layer kernel, and you need to backpropagate it up to the next convolutional layer.
The simple approach is to rotate your stored channel matrix by 180 degrees and then convolve it with the gradient. The explanation for this is long and tedious, too much to write here, and not a lot on the internet explains it well.
A more sophisticated approach is to apply "correlation" between the input gradient and the stored channel matrix. Note I specifically said "correlation" as opposed to "convolution", and that is key. If you think they are "almost" the same thing, then I recommend you take some time to learn about the differences.
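A tiny NumPy/SciPy check of that last point (only an illustration of the flip relationship, not the full backpropagation derivation):

import numpy as np
from scipy.signal import convolve2d, correlate2d

x = np.random.rand(6, 6)
k = np.random.rand(3, 3)

# Convolution with k equals correlation with k rotated by 180 degrees,
# which is why the "rotate by 180 and convolve" trick appears in conv-layer backprop
conv = convolve2d(x, k, mode='valid')
corr = correlate2d(x, np.rot90(k, 2), mode='valid')
print(np.allclose(conv, corr))   # True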
If you would like to have a look at my CNN solver, here's a link to the project. It's C++ with no documentation, sorry :) It's all in a header file called layer.h; find the class FilterLayer2D. I think the code is pretty readable (what programmer doesn't think his code is readable :) ).
https://github.com/sraber/simplenet.git
I also wrote a paper on basic fully connected networks. I wrote it so that I wouldn't forget what I learned in my self-study. Maybe you'll get something out of it. It's at this link:
http://www.raberfamily.com/scottblog/scottblog.htm

tensorflow - understanding tensor shapes for convolution

Currently trying to work my way through the Tensorflow MNIST tutorial for convolutional networks and I could use some help with understanding the dimensions of the darn tensors.
So we have images of 28x28 pixels in size.
The convolution will compute 32 features for each 5x5 patch.
Let's just accept this, for now, and ask ourselves later why 32 features and why 5x5 patches.
Its weight tensor will have a shape of [5, 5, 1, 32]. The first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels.
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
If you say so ...
To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels.
x_image = tf.reshape(x, [-1,28,28,1])
Alright, now I'm getting lost.
Judging by this last reshape, we have
"howevermany" 28x28x1 "blocks" of pixels that are our images.
I guess this makes sense because the images are in greyscale
However, if that is the ordering, then our weight tensor is essentially a collection of five 5x1x32 "blocks" of values.
The x32 makes sense, I guess, if we want to infer 32 features per patch
The rest, though, I'm not terribly convinced by.
Why does the weight tensor look the way it apparently does?
(For completeness: we use them
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
where
def conv2d(x, W):
    '''
    2D convolution, expects 4D input x and filter matrix W
    '''
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    '''
    max-pooling, using 2x2 patches
    '''
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
)
Your input tensor has the shape [-1,28,28,1]. Like you mention, the last dimension is 1 because the images are in greyscale. The first index is the batch size. The convolution will process every image in the batch independently, therefore the batch size has no influence on the convolution-weight-tensor dimensions, or, in fact, on any weight-tensor dimensions in the network. That is why the batch size can be arbitrary (-1 signifies an arbitrary size in tensorflow).
Now to the weight tensor: you don't have five 5x1x32 blocks; rather, you have 32 blocks of 5x5x1. Each represents one feature. The 1 is the depth of the patch and is 1 due to the greyscale input (it would be 5x5x3x32 for color images). The 5x5 is the size of the patch.
The ordering of dimensions in the data tensors is different from the ordering of dimensions in the convolution weight tensors.
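A quick shape check of the above (runs as-is with TensorFlow 2 in eager mode; the batch size of 4 is arbitrary):

import tensorflow as tf

# A batch of 4 greyscale 28x28 images: [batch, height, width, in_channels]
x_image = tf.random.normal([4, 28, 28, 1])
# The weight tensor: [patch_height, patch_width, in_channels, out_channels]
W_conv1 = tf.random.normal([5, 5, 1, 32])

h = tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='SAME')
print(h.shape)   # (4, 28, 28, 32): 32 feature maps, spatial size preserved by 'SAME' padding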
Besides the other answer, I would like to add some more points.
Let's just accept this, for now, and ask ourselves later why 32 features and why 5x5 patches.
There is no specific reason why we choose 5x5 patches or 32 features; all of these parameters come from experience (except in some cases). You may use 3x3 patches or a larger feature size.
I said 'except in some cases' because we may use 3x3 patches to capture information from the images in more detail, or a larger feature size to learn each image in more detail ('larger' and 'more detail' are relative terms here).
However, if that is the ordering, then our weight tensor is essentially a collection of five 5x1x32 "blocks" of values.
Not exactly: the weight tensor is not a collection, it is a single filter with patch size 5x5, 1 input channel and 32 output features (channels).
Why does the weight tensor look the way it apparently does?
The weight tensor weight_variable([5, 5, 1, 32]) says: I have a 5x5 patch size to apply to the image, 1 input feature/channel (since the images are grayscale) and 32 output features (channels).
More Details:
So the line tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME') takes input x as [-1,28,28,1]: -1 means you can put any size you want in this dimension (the batch size), 28,28 is the input size (it must be exactly 28x28), and the last 1 is the number of input channels; since the MNIST images are grayscale, it is 1. In more detail, the input image is a 28x28 2D matrix, and each cell of the matrix holds a value indicating the grayscale intensity. If the input images were RGB, we would have 3 channels instead of 1, and those 3 channels mean the input image is a 28x28x3 3D matrix: the cells of the first channel hold the intensity of red, the second the intensity of green, and the third the intensity of blue.
Now tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME') takes x and applies W (which is a set of 5x5 patches), sliding each patch over the 28x28 image with step size 1 (since the stride is 1), and gives a result image of size 28x28 again because we use padding='SAME'.

When building a CNN, I am getting complaints from Keras that do not make sense to me.

My input shape is supposed to be 100x100. It represents a sentence. Each word is a vector of 100 dimensions, and there are at most 100 words in a sentence.
I feed eight sentences to the CNN. I am not sure whether this means my input shape should be 100x100x8 instead.
Then the following lines
Convolution2D(10, 3, 3, border_mode='same',
              input_shape=(100, 100))
complains:
Input 0 is incompatible with layer convolution2d_1: expected ndim=4, found ndim=3
This does not make sense to me, as my input dimension is 2. I can get past it by changing input_shape to (100,100,8), but the "expected ndim=4" bit just does not make sense to me.
I also cannot see why a convolution layer of 3x3 with 10 filters does not accept input of 100x100.
Even if I get through the complaints about "expected ndim=4", I run into a problem in my activation layer. There it complains:
Cannot apply softmax to a tensor that is not 2D or 3D. Here, ndim=4
Can anyone explain what is going on here and how to fix it? Many thanks.
I had the same problem, and I solved it by adding one dimension for the channel to the input_shape argument.
I suggest the following solution:
Convolution2D(10, 3, 3, border_mode='same', input_shape=(100, 100, 1))
The missing dimension for 2D convolutional layers is the "channel" dimension.
For image data, the channel dimension is 1 for grayscale images and 3 for color images.
In your case, to make sure that Keras won't complain, you could use 2D convolution with 1 channel, or 1D convolution with 100 channels.
Ref: http://keras.io/layers/convolutional/#convolution2d
Keras's convolutional layer takes in 4-dimensional arrays, so you need to structure your input to fit that. The dimensions are (batch_size, x_dim, y_dim, channels). This makes a lot of sense in the case of images, which is where CNNs are mostly used, but for your case it gets a bit trickier.
The batch_size is not specified in input_shape; you stack the 8 sentences along the first dimension of the data to get (8, 100, 100). The channels can be kept at 1, and you need to shape the data so that Keras will accept it, so expanding the data to (8, 100, 100, 1) gives the input you need.
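A short sketch of that reshape (assuming your 8 sentences are already in a NumPy array called sentences, a name made up here, and using the current tf.keras spelling of the layer rather than the older Convolution2D call from the question):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical data: 8 sentences, each a 100x100 matrix (100 words x 100-dim vectors)
sentences = np.random.rand(8, 100, 100).astype('float32')
x = np.expand_dims(sentences, axis=-1)   # (8, 100, 100, 1): add the channel dimension

inputs = layers.Input(shape=(100, 100, 1))
outputs = layers.Conv2D(10, (3, 3), padding='same')(inputs)
model = tf.keras.Model(inputs, outputs)

print(model(x).shape)   # (8, 100, 100, 10)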
