ResNet for 32x32 images - python

I am trying to train a ResNet for 32x32 images, and I came upon a tutorial: https://towardsdatascience.com/resnets-for-cifar-10-e63e900524e0, which applies to CIFAR-10 (a 32x32 image dataset), but I don't understand what it's saying.
This is from the site:
"The rest of the notes from the authors to construct the ResNet are:
Use a stack of 6n layers of 3x3 convolutions. The choice of n will determine the size of our ResNet.
The feature map sizes are {32, 16, 8} respectively, with 2n convolutions for each feature map size. Also, the number of filters is {16, 32, 64} respectively.
The down sampling of the volumes through the ResNet is achieved by increasing the stride to 2 for the first convolution of each layer. Therefore, no pooling operations are used until right before the dense layer.
For the bypass connections, no projections will be used. In the cases where there is a difference in the shape of the volume, the input will simply be padded with zeros, so the output size matches the size of the volume before the addition.
This would leave Figure 4 as the representation of our first layer. In this case, our bypass connection is a regular identity shortcut because the dimensionality of the volume is constant throughout the layer operations. Since we chose n=1, 2 convolutions are applied within layer 1.
We can still check from Figure 2 that the output volume of Layer1 is indeed 32x32x16. Let's go deeper!"
I am confused as to why the number of channels in the output is 16, when the number of filters was {16, 32, 64}.
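For reference, here is a minimal Keras sketch (my own, not from the tutorial) of what I understand Layer 1 to be for n=1: the initial 3x3 convolution with 16 filters, followed by two 3x3 convolutions with 16 filters and an identity shortcut:

from tensorflow.keras import layers

# Sketch only: initial 3x3 conv with 16 filters, then one residual block (n=1)
# made of two 3x3 convs with 16 filters and an identity shortcut.
inputs = layers.Input(shape=(32, 32, 3))
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)   # 32x32x16
y = layers.Conv2D(16, 3, padding='same', activation='relu')(x)        # 32x32x16
y = layers.Conv2D(16, 3, padding='same')(y)                           # 32x32x16
out = layers.Activation('relu')(layers.Add()([x, y]))                 # identity shortcut
print(out.shape)   # (None, 32, 32, 16)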

Related

How to interpret this CNN architecture

How does this CNN architecture work from the input layer to the first convolution layer? hx98 are the input matrix dimensions; is n the number of channels or the number of inputs?
It doesn't seem like n is the number of channels, because 25 is the number of feature maps and their dimensions do not indicate that they have two channels.
However, if n is the number of inputs and the matrices are single-channel, I haven't found a single CNN architecture anywhere that takes multiple input matrices and convolves them together. Most examples convolve them separately and then concatenate.
In my example, n is 2: one is a matrix with BER values and the other a matrix with connection line-rate values.
What mistake am I making? How does this CNN work?
In a CNN, the image pixels (height and width) are multiplied by the kernel weights of the convolution layer and summed to create a feature map.
The kernel passes through all the channels of the image (3 channels for RGB, 1 channel for grayscale) based on the strides defined in the convolution layer.
After the convolution, the size of the image is reduced.
To get the same output dimension as the input dimension, you need to add padding. Padding consists of adding the right number of rows and columns on each side of the matrix. For details, please refer to this documentation.
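For illustration, here is a minimal sketch (my own, assuming Keras and that the two matrices, BER and line-rate, are stacked as two channels of a single input; the sizes are made up):

import numpy as np
from tensorflow.keras import layers

# Hypothetical example: stack the BER matrix and the line-rate matrix as
# two channels of one input (h and w are made up for illustration).
h, w = 64, 98
ber = np.random.rand(h, w)
line_rate = np.random.rand(h, w)
x = np.stack([ber, line_rate], axis=-1)[np.newaxis, ...].astype('float32')   # (1, h, w, 2)

# Each 3x3 kernel spans BOTH channels; 25 filters give 25 feature maps.
conv = layers.Conv2D(filters=25, kernel_size=3, padding='same')
y = conv(x)
print(y.shape)             # (1, 64, 98, 25): 'same' padding keeps the h x w size
print(conv.kernel.shape)   # (3, 3, 2, 25): every filter covers the 2 input channels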

Tensorflow CNN for different input size

I'm trying to make a conv network for image regression.
As shown below, one image [224 x 224] has one GT value {x}.
It's easy to train and validate/test with [224 x 224] images.
However, I'd like to apply the CNN to different image sizes.
For example, for a [224 x 229] image, I want to get 5 regression values 'at once'.
Simply, I could do that by sliding a [224 x 224] window 5 times, but apparently that is too slow.
I think using conv layers with different image sizes is possible, but a fully connected layer (FCL) is not.
If I change the image size to [455 x 256], I get the error:
lhs shape= [4608,1024] rhs shape= [2048,1024]
Is there any way to handle it?
Fully connected layers have a fixed-size input, so changing the input size will cause a wrong-size error.
One way to tackle this problem, and allow for different image sizes, is to use a fully convolutional network.
An example with easy numbers:
Assume, for example, that the conv layers' output is of size 16x16. You can create a "classifier layer" with a 4x4 kernel and stride 4, which outputs, for each of the non-overlapping 4x4 squares comprising the 16x16 feature map, a single value per output dimension. Such a filter would be of size 4x4 x n_dim; in your case n_dim would be 5, and the final output would be of size 4x4x5, corresponding to 5 outputs (one for each regression value) for each 4x4 square.
You will notice that you can play with the shape of the last conv filter to obtain different sizes for the final output, each corresponding to a different part of the input image while, in effect, looking at all of it.
You can work out the numbers for your own example.
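A minimal Keras sketch of this idea (my own; the small backbone is invented, and only the final 4x4, stride-4 "classifier" convolution follows the numbers above):

from tensorflow.keras import layers, models

# Fully convolutional sketch: no Dense/Flatten, so any input size is accepted.
inputs = layers.Input(shape=(None, None, 1))                    # variable input size
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPool2D(2)(x)                                      # /2
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPool2D(2)(x)                                      # /4: a 64x64 input gives a 16x16 map here
outputs = layers.Conv2D(5, kernel_size=4, strides=4)(x)         # "classifier": 4x4x5 for a 64x64 input
model = models.Model(inputs, outputs)
model.summary()                                                 # larger inputs give larger output grids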
You probably would like to read about basic methods for semantic segmentation.
Also see basic fully conv nets.

Implement Causal CNN in Keras for multivariate time-series prediction

This question is a follow-up to my previous question here: Multi-feature causal CNN - Keras implementation; however, numerous things remain unclear to me, which I think warrants a new question. The model in question here has been built according to the accepted answer in the post mentioned above.
I am trying to apply a Causal CNN model on multivariate time-series data of 10 sequences with 5 features.
lookback, features = 10, 5
What should filters and kernel be set to?
What is the effect of filters and kernel on the network?
Are these just an arbitrary number - i.e. number of neurons in ANN layer?
Or will they have an effect on how the net interprets the time-steps?
What should dilations be set to?
Is this just an arbitrary number or does this represent the lookback of the model?
filters = 32
kernel = 5
dilations = 5
dilation_rates = [2 ** i for i in range(dilations)]
model = Sequential()
model.add(InputLayer(input_shape=(lookback, features)))
model.add(Reshape(target_shape=(features, lookback, 1), input_shape=(lookback, features)))
According to the previously mentioned answer, the input needs to be reshaped according to the following logic:
After Reshape 5 input features are now treated as the temporal layer for the TimeDistributed layer
When Conv1D is applied to each input feature, it thinks the shape of the layer is (10, 1)
with the default "channels_last", therefore...
10 time-steps is the temporal dimension
1 is the "channel", the new location for the feature maps
# Add causal layers
for dilation_rate in dilation_rates:
    model.add(TimeDistributed(Conv1D(filters=filters,
                                     kernel_size=kernel,
                                     padding='causal',
                                     dilation_rate=dilation_rate,
                                     activation='elu')))
According to the mentioned answer, the model needs to be reshaped, according to the following logic:
Stack feature maps on top of each other so each time step can look at all features produced earlier - (10 time steps, 5 features * 32 filters)
Next, causal layers are now applied to the 5 input features dependently.
Why were they initially applied independently?
Why are they now applied dependently?
model.add(Reshape(target_shape=(lookback, features * filters)))
next_dilations = 3
dilation_rates = [2 ** i for i in range(next_dilations)]
for dilation_rate in dilation_rates:
    model.add(Conv1D(filters=filters,
                     kernel_size=kernel,
                     padding='causal',
                     dilation_rate=dilation_rate,
                     activation='elu'))
model.add(MaxPool1D())
model.add(Flatten())
model.add(Dense(units=1, activation='linear'))
model.summary()
SUMMARY
What should filters and kernel be set to?
Will they have an effect on how the net interprets the time-steps?
What should dilations be set to to represent lookback of 10?
Why are causal layers initially applied independently?
Why are they applied dependently after reshape?
Why not apply them dependently from the beginning?
===========================================================================
FULL CODE
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (InputLayer, Reshape, TimeDistributed,
                                     Conv1D, MaxPool1D, Flatten, Dense)

lookback, features = 10, 5
filters = 32
kernel = 5
dilations = 5
dilation_rates = [2 ** i for i in range(dilations)]
model = Sequential()
model.add(InputLayer(input_shape=(lookback, features)))
model.add(Reshape(target_shape=(features, lookback, 1), input_shape=(lookback, features)))
# Add causal layers
for dilation_rate in dilation_rates:
    model.add(TimeDistributed(Conv1D(filters=filters,
                                     kernel_size=kernel,
                                     padding='causal',
                                     dilation_rate=dilation_rate,
                                     activation='elu')))
model.add(Reshape(target_shape=(lookback, features * filters)))
next_dilations = 3
dilation_rates = [2 ** i for i in range(next_dilations)]
for dilation_rate in dilation_rates:
    model.add(Conv1D(filters=filters,
                     kernel_size=kernel,
                     padding='causal',
                     dilation_rate=dilation_rate,
                     activation='elu'))
model.add(MaxPool1D())
model.add(Flatten())
model.add(Dense(units=1, activation='linear'))
model.summary()
===========================================================================
EDIT:
Daniel, thank you for your answer.
Question:
If you can explain "exactly" how you're structuring your data (what the original data is, how you're transforming it into the input shape, whether you have independent sequences, whether you're creating sliding windows, etc.), a better understanding of this process could be achieved.
Answer:
I hope I understand your question correctly.
Each feature is a sequence array of time-series data. They are independent in the sense that they are not an image; however, they correlate with each other somewhat.
This is why I am trying to use WaveNet, which is very good at predicting a single time-series array; however, my problem requires me to use multiple features.
Comments about the given answer
Questions:
Why are causal layers initially applied independently?
Why are they applied dependently after reshape?
Why not apply them dependently from the beginning?
That answer is sort of strange. I'm not an expert, but I don't see the need to keep the features independent with a TimeDistributed layer. I also cannot say whether it gives a better result or not. At first I'd say it's just unnecessary, but it might bring extra intelligence, given that it might see relations that involve distant steps between two features instead of just looking at the "same steps". (This should be tested.)
Nevertheless, there is a mistake in that approach.
The reshapes that are intended to swap the lookback and feature sizes are not doing what they are expected to do. The author of the answer clearly wants to swap axes (which keeps the interpretation of what is a feature and what is the lookback), which is different from a reshape (which mixes everything, so the data loses its meaning).
A correct approach would need actual axis swapping, like model.add(Permute((2,1))) instead of the reshapes.
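A rough sketch of that correction (untested; just to show where Permute would go instead of the raw reshapes):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (InputLayer, Permute, Reshape,
                                     TimeDistributed, Conv1D)

lookback, features, filters, kernel = 10, 5, 32, 5

model = Sequential()
model.add(InputLayer(input_shape=(lookback, features)))
# Swap the time and feature axes instead of reshaping them away:
# (lookback, features) -> (features, lookback)
model.add(Permute((2, 1)))
# Adding a trailing size-1 "channel" axis is a safe reshape (no axis mixing)
model.add(Reshape((features, lookback, 1)))
model.add(TimeDistributed(Conv1D(filters, kernel, padding='causal',
                                 dilation_rate=1, activation='elu')))
# ... further dilated causal TimeDistributed layers would go here ...
# Swap back before stacking the per-feature maps along the channel axis:
# (features, lookback, filters) -> (lookback, features, filters) -> (lookback, features * filters)
model.add(Permute((2, 1, 3)))
model.add(Reshape((lookback, features * filters)))
model.summary()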
So, I don't have answers to those questions, but nothing seems to create that need.
One sure thing is: you will certainly want the dependent part. A model will not get anywhere near the intelligence of your original model if it doesn't consider relations between features. (Unless you're lucky enough to have your data be completely independent.)
Now, explaining the relation between LSTM and Conv1D
An LSTM can be directly compared to a Conv1D: the shapes used are exactly the same, and they mean virtually the same thing, as long as you're using channels_last.
That said, the shape (samples, input_length, features_or_channels) is the correct shape for both LSTM and Conv1D. In fact, features and channels are exactly the same thing in this case. What changes is how each layer works regarding the input length and calculations.
Concept of filters and kernels
The kernel is the entire tensor inside the conv layer that is multiplied with the inputs to get the results. A kernel's shape includes its spatial size (kernel_size) and its number of filters (output features), and it automatically also includes the number of input features (channels).
There is no "number of kernels", but there is a kernel_size. The kernel size is how many steps along the length are joined together for each output step. (This tutorial is great for understanding what a 2D convolution does and what the kernel size is; just imagine 1D images instead. It doesn't show the number of "filters", though; it's like 1-filter animations.)
The number of filters relates directly to the number of output features; they're exactly the same thing.
What should filters and kernel be set to?
So, if your LSTM layer is using units=256, meaning it will output 256 features, you should use filters=256, meaning your convolution will output 256 channels/features.
This is not a rule, though; you may find that using more or fewer filters brings better results, since the layers do different things after all. There is no need for all layers to have the same number of filters, either! Here you should do some parameter tuning: test to see which numbers are best for your goal and data.
Now, kernel size is something that can't be compared to the LSTM. It's a new thing added to the model.
The number 3 is a very common choice. It means that the convolution will take three time steps to produce one time step, then slide by one step to take another group of three steps to produce the next step, and so on.
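A quick shape check of what I mean (filters=256 and kernel_size=3 on the same input shape an LSTM with units=256 would receive):

import numpy as np
from tensorflow.keras.layers import Conv1D, LSTM

x = np.random.rand(1, 10, 5).astype('float32')      # (samples, input_length, features)

conv_out = Conv1D(filters=256, kernel_size=3, padding='causal')(x)
lstm_out = LSTM(units=256, return_sequences=True)(x)

print(conv_out.shape)   # (1, 10, 256): 256 output features; each step mixes 3 input steps
print(lstm_out.shape)   # (1, 10, 256): same shape and meaning as the Conv1D output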
Dilations
Dilation means how many spaces there are between the steps the convolution filter looks at.
A convolution with dilation_rate=1 takes kernel_size consecutive steps to produce one step.
A convolution with dilation_rate=2 takes, for instance, steps 0, 2 and 4 to produce a step. Then it takes steps 1, 3 and 5 to produce the next step, and so on.
What should dilations be set to to represent lookback of 10?
range = 1 + (kernel_size - 1) * dilation_rate
So, with a kernel size = 3:
Dilation = 0 (dilation_rate=1): the kernel spans 3 steps
Dilation = 1 (dilation_rate=2): the kernel spans 5 steps
Dilation = 2 (dilation_rate=4): the kernel spans 9 steps
Dilation = 3 (dilation_rate=8): the kernel spans 17 steps
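A tiny helper to check these numbers (my own addition, using the formula above and the usual receptive-field formula for stacked dilated convolutions):

def layer_span(kernel_size, dilation_rate):
    # Steps covered by a single dilated convolution (the formula above)
    return 1 + (kernel_size - 1) * dilation_rate

def stacked_receptive_field(kernel_size, dilation_rates):
    # Receptive field of several dilated conv layers applied one after another
    return 1 + sum((kernel_size - 1) * d for d in dilation_rates)

print([layer_span(3, d) for d in (1, 2, 4, 8)])   # [3, 5, 9, 17]
print(stacked_receptive_field(3, [1, 2, 4]))      # 15, which already covers a lookback of 10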
My question to you
If you can explain "exactly" how you're structuring your data (what the original data is, how you're transforming it into the input shape, whether you have independent sequences, whether you're creating sliding windows, etc.), a better understanding of this process could be achieved.

Kernel size change in convolutional neural networks

I have been working on creating a convolutional neural network from scratch, and am a little confused on how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers.
Convolutional layer with kernel_size = (5,5) with 32 output channels
new dimension of throughput = (32, 28, 28)
Max Pooling layer with pool_size (2,2) and step (2,2)
new dimension of throughput = (32, 14, 14)
If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels?
Since the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14).
You need 64 kernels, each of size (32, 5, 5).
The depth (number of channels) of a kernel, 32 in this case (or 3 for an RGB image, 1 for grayscale, etc.), should always match the depth of the input; in the hand-crafted example below, the values are simply the same in every channel.
For example, if you have a 3x3 kernel like this: [-1 0 1; -2 0 2; -1 0 1] and you want to convolve it with an input of depth (channel count) N, you just copy this 3x3 kernel N times along the third dimension. The math is then just like the single-channel case: you multiply the kernel values with the input values currently under the kernel window, sum over all N channels, and get the value of just one output entry (pixel). So what you get as output is a matrix with one channel. How much depth do you want the matrix for the next layer to have? That is the number of kernels you should apply. Hence, in your case, it would be a weight tensor of size (64 x 32 x 5 x 5), which is actually 64 kernels with 32 channels each and, in this hand-crafted example, the same 5x5 values in all channels.
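A small NumPy sketch of that idea (copying a 3x3 kernel across the channels and summing over all of them to produce one output value):

import numpy as np

# 3x3 edge kernel, copied across N input channels (here N = 32)
k2d = np.array([[-1, 0, 1],
                [-2, 0, 2],
                [-1, 0, 1]], dtype=float)
N = 32
kernel = np.repeat(k2d[np.newaxis, :, :], N, axis=0)   # shape (32, 3, 3)

patch = np.random.rand(N, 3, 3)                        # one input window: 32 channels, 3x3
one_output_value = np.sum(kernel * patch)              # multiply and sum over ALL channels
print(one_output_value)

# For the layer in the question you would need 64 such kernels of shape (32, 5, 5),
# i.e. a weight tensor of shape (64, 32, 5, 5), giving a 64-channel output.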
("I am not a very confident english speaker hope you get what I said, it would be nice if someone edit this :)")
You essentially answered your own question. YOU are building the network solver. It seems like your convolutional layer output is [channels out] = [channels in] * [number of kernels]. I had to infer this from the wording of your question. In general, this is how it works: you specify the kernel size of the layer and how many kernels to use. Since you have one input channel you are essentially saying that there are 32 kernels in your first convolution layer. That is 32 unique 5x5 kernels. Each of these kernels will be applied to the one input channel. More in general, each of the layer kernels (32 in your example) is applied to each of the input channels. And that is the key. If you build code to implement the convolution layer according to these generalities, then your subsequent convolution layers are done. In the next layer you specify two kernels per channel. In your example there would be 32 input channels, the hidden layer has 2 kernels per channel, and the output would be 64 channels.
You could then down sample by applying a pooling layer, then flatten the 64 channels [turn a matrix into a vector by stacking the columns or rows], and pass it as a column vector into a fully connected network. That is the basic scheme of convolutional networks.
The work comes when you try to code up backpropagation through the convolutional layers. But the OP didn't ask about that. I'll just say this: you will come to a place where you have the stored input matrix (one channel), you have a gradient from a lower layer in the form of a matrix that is the size of the layer kernel, and you need to backpropagate it up to the next convolutional layer.
The simple approach is to rotate your stored channel matrix by 180 degrees and then convolve it with the gradient. The explanation for this is long and tedious, too much to write here, and not a lot on the internet explains it well.
A more sophisticated approach is to apply "correlation" between the input gradient and the stored channel matrix. Note that I specifically said "correlation" as opposed to "convolution", and that is key. If you think they are "almost" the same thing, then I recommend you take some time to learn about the differences.
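If you want to convince yourself of that difference, here is a quick NumPy/SciPy check (my own, not from the solver linked below): convolution is correlation with the kernel rotated by 180 degrees.

import numpy as np
from scipy.signal import convolve2d, correlate2d

x = np.random.rand(5, 5)
k = np.random.rand(3, 3)

conv = convolve2d(x, k, mode='valid')
corr = correlate2d(x, np.rot90(k, 2), mode='valid')   # correlate with the 180-degree-rotated kernel

print(np.allclose(conv, corr))   # True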
If you would like to have a look at my CNN solver, here's a link to the project. It's C++ with no documentation, sorry :) It's all in a header file called layer.h; find the class FilterLayer2D. I think the code is pretty readable (what programmer doesn't think his code is readable :) )
https://github.com/sraber/simplenet.git
I also wrote a paper on basic fully connected networks. I wrote it so that I wouldn't forget what I learned in my self-study. Maybe you'll get something out of it. It's at this link:
http://www.raberfamily.com/scottblog/scottblog.htm

Keras Conv2D: filters vs kernel_size

What's the difference between those two? It would also help to explain in the more general context of convolutional networks.
Also, as a side note, what is channels? In other words, please break down the 3 terms for me: channels vs filters vs kernel.
Each convolution layer consists of several convolution filters (this number is also called the depth, or the number of channels). In practice, it is a number such as 64, 128, 256, 512, etc. It is equal to the number of channels in the output of a convolutional layer. kernel_size, on the other hand, is the spatial size of these convolution filters. In practice, kernels take values such as 3x3, 1x1 or 5x5. To abbreviate, they can be written as 1, 3 or 5, as they are mostly square in practice.
Edit
Following quote should make it more clear.
Discussion on vlfeat
Suppose X is an input with size W x H x D x N (where N is the size of the batch) to a convolutional layer containing filter F (with size FW x FH x FD x K) in a network.
The number of feature channels D is the third dimension of the input X here (for example, this is typically 3 at the first input to the network if the input consists of colour images).
The number of filters K is the fourth dimension of F.
The two concepts are closely linked because if the number of filters in a layer is K, it produces an output with K feature channels. So the input to the next layer will have K feature channels.
The FW x FH above is the filter size you are looking for.
Added
You should be familiar with filters. You can consider each filter to be responsible for extracting some type of feature from a raw image. CNNs try to learn such filters, i.e. the filters parametrized in a CNN are learned during training. You apply each filter in a Conv2D to every input channel and combine the results to get output channels. So, the number of filters and the number of output channels are the same.
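For a concrete check (a minimal standalone example, not from the question), you can inspect a Conv2D layer's weight shape:

import numpy as np
from tensorflow.keras.layers import Conv2D

x = np.random.rand(1, 28, 28, 3).astype('float32')   # 3 input channels (e.g. RGB)
conv = Conv2D(filters=64, kernel_size=(5, 5), padding='same')
y = conv(x)

print(y.shape)             # (1, 28, 28, 64): the number of output channels equals filters
print(conv.kernel.shape)   # (5, 5, 3, 64): FW x FH x D x K, matching the quote above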
