Stuck understanding ResNet's Identity block and Convolutional blocks - python

I'm learning Residual Networks (ResNet50) from Andrew Ng's Coursera lectures. I understand that one of the main reasons ResNets work is that they can learn the identity function, which is why adding more and more layers to the network does not hurt its performance.
As described in the lectures, two types of blocks are used in ResNets: 1) the identity block and 2) the convolutional block.
The identity block is used when the input and output dimensions are the same. The convolutional block is almost the same as the identity block, but it has a convolutional layer in the shortcut path whose only job is to change the dimensions so that the input and output dimensions match.
Here is the identity block:
and here is the convolutional block:
Now, in the implementation of the convolutional block (second image), the first block (i.e. Conv2D --> BatchNorm --> ReLU) is implemented with a 1x1 convolution and a stride > 1.
# First component of main path
X = Conv2D(F1, (1, 1), strides=(s, s), name=conv_name_base + '2a', padding='valid', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
I don't understand the reason for keeping stride > 1 with a window size of 1. Isn't that just data loss? We are only considering alternate pixels in this case.
What could be the reason for such a hyperparameter choice? Any intuitive explanation would help! Thanks.

I don't understand the reason behind keeping stride > 1 with window
size 1. Isn't it just data loss?
Please refer to the section on Deeper Bottleneck Architectures in the ResNet paper; see also Figure 5.
https://arxiv.org/pdf/1512.03385.pdf
1 x 1 convolutions are typically used to increase or decrease the dimensionality along the filter (channel) dimension. So, in the bottleneck architecture, the first 1 x 1 layer reduces the channel dimension so that the 3 x 3 layer has to handle smaller input/output dimensions, and then the final 1 x 1 layer increases the filter dimension again.
It's done to save on computation/training time.
From the paper,
"Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design".

I believe you might have answered your own question. The convolutional block is used whenever you need to change the dimension in order for the output and input dimensions to match. That being said, how do you change the dimension of a certain volume using convolutions? Well, you change the stride.
For any given convolution operation, assuming a square input, the dimension of the output volume can be obtained through the formula (n+2p-f)/s +1, where n is the input dimension, p is your zero-padding, f the filter dimension and s is the stride. By increasing the stride you're effectively reducing the dimension of your shortcut's output volume, and thus, it can be used in such a way as to make sure that the dimensions of your shortcut and lower paths will match in order for the final sum to be performed.
Why is it >1 then? Well, if you didn't need a stride larger than one, you wouldn't be needing a dimension alteration in the first place and therefore would be using the identity block instead.
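To make that arithmetic concrete, here is a small sketch of the output-size formula; the 56x56 input size is just an example:
def conv_output_size(n, f, p=0, s=1):
    # floor((n + 2p - f) / s) + 1
    return (n + 2 * p - f) // s + 1

print(conv_output_size(n=56, f=1, p=0, s=2))   # 28: a strided 1x1 conv halves the spatial size
print(conv_output_size(n=56, f=3, p=1, s=1))   # 56: a padded 3x3 conv with stride 1 keeps it
print(conv_output_size(n=56, f=1, p=0, s=1))   # 56: with s=1 the 1x1 conv changes nothing, so you would use the identity block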

Related

Implement Causal CNN in Keras for multivariate time-series prediction

This question is a follow-up to my previous question here: Multi-feature causal CNN - Keras implementation; however, there are numerous things that are unclear to me which I think warrant a new question. The model in question here has been built according to the accepted answer in the post mentioned above.
I am trying to apply a Causal CNN model on multivariate time-series data of 10 sequences with 5 features.
lookback, features = 10, 5
What should filters and kernel be set to?
What is the effect of filters and kernel on the network?
Are these just an arbitrary number - i.e. number of neurons in ANN layer?
Or will they have an effect on how the net interprets the time-steps?
What should dilations be set to?
Is this just an arbitrary number or does this represent the lookback of the model?
filters = 32
kernel = 5
dilations = 5
dilation_rates = [2 ** i for i in range(dilations)]
model = Sequential()
model.add(InputLayer(input_shape=(lookback, features)))
model.add(Reshape(target_shape=(features, lookback, 1), input_shape=(lookback, features)))
According to the previously mentioned answer, the input needs to be reshaped according to the following logic:
After Reshape 5 input features are now treated as the temporal layer for the TimeDistributed layer
When Conv1D is applied to each input feature, it thinks the shape of the layer is (10, 1)
with the default "channels_last", therefore...
10 time-steps is the temporal dimension
1 is the "channel", the new location for the feature maps
# Add causal layers
for dilation_rate in dilation_rates:
    model.add(TimeDistributed(Conv1D(filters=filters,
                                     kernel_size=kernel,
                                     padding='causal',
                                     dilation_rate=dilation_rate,
                                     activation='elu')))
According to the mentioned answer, the model needs to be reshaped, according to the following logic:
Stack feature maps on top of each other so each time step can look at all features produced earlier - (10 time steps, 5 features * 32 filters)
Next, causal layers are now applied to the 5 input features dependently.
Why were they initially applied independently?
Why are they now applied dependently?
model.add(Reshape(target_shape=(lookback, features * filters)))
next_dilations = 3
dilation_rates = [2 ** i for i in range(next_dilations)]
for dilation_rate in dilation_rates:
    model.add(Conv1D(filters=filters,
                     kernel_size=kernel,
                     padding='causal',
                     dilation_rate=dilation_rate,
                     activation='elu'))
model.add(MaxPool1D())
model.add(Flatten())
model.add(Dense(units=1, activation='linear'))
model.summary()
SUMMARY
What should filters and kernel be set to?
Will they have an effect on how the net interprets the time-steps?
What should dilations be set to in order to represent a lookback of 10?
Why are causal layers initially applied independently?
Why are they applied dependently after reshape?
Why not apply them dependently from the beginning?
===========================================================================
FULL CODE
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (InputLayer, Reshape, TimeDistributed,
                                     Conv1D, MaxPool1D, Flatten, Dense)

lookback, features = 10, 5
filters = 32
kernel = 5
dilations = 5
dilation_rates = [2 ** i for i in range(dilations)]
model = Sequential()
model.add(InputLayer(input_shape=(lookback, features)))
model.add(Reshape(target_shape=(features, lookback, 1), input_shape=(lookback, features)))
# Add causal layers
for dilation_rate in dilation_rates:
    model.add(TimeDistributed(Conv1D(filters=filters,
                                     kernel_size=kernel,
                                     padding='causal',
                                     dilation_rate=dilation_rate,
                                     activation='elu')))
model.add(Reshape(target_shape=(lookback, features * filters)))
next_dilations = 3
dilation_rates = [2 ** i for i in range(next_dilations)]
for dilation_rate in dilation_rates:
    model.add(Conv1D(filters=filters,
                     kernel_size=kernel,
                     padding='causal',
                     dilation_rate=dilation_rate,
                     activation='elu'))
model.add(MaxPool1D())
model.add(Flatten())
model.add(Dense(units=1, activation='linear'))
model.summary()
===========================================================================
EDIT:
Daniel, thank you for your answer.
Question:
If you can explain "exactly" how you're structuring your data (what the original data is, how you're transforming it into the input shape, whether you have independent sequences, whether you're creating sliding windows, etc.), a better understanding of this process could be achieved.
Answer:
I hope I understand your question correctly.
Each feature is a sequence array of time-series data. They are independent in the sense that they are not an image; however, they correlate with each other somewhat.
That is why I am trying to use WaveNet, which is very good at predicting a single time-series array; however, my problem requires me to use multiple features.
Comments about the given answer
Questions:
Why are causal layers initially applied independently?
Why are they applied dependently after reshape?
Why not apply them dependently from the beginning?
That answer is sort of strange. I'm not an expert, but I don't see the need to keep the features independent with a TimeDistributed layer. I also cannot say whether it gives a better result or not. At first glance I'd say it's just unnecessary, but it might bring extra intelligence, given that it could see relations that involve distant steps between two features instead of just looking at the "same steps". (This should be tested.)
Nevertheless, there is a mistake in that approach.
The reshapes that are intended to swap the lookback and feature sizes are not doing what they are expected to do. The author of the answer clearly wants to swap axes (which keeps the interpretation of what is a feature and what is lookback), and that is different from a reshape (which mixes everything together, so the data loses its meaning).
A correct approach would need actual axis swapping, like model.add(Permute((2,1))) instead of the reshapes.
So, I don't have definite answers to those questions, but nothing seems to create that need.
One sure thing is: you will certainly want the dependent part. A model will not get anywhere near the intelligence of your original model if it doesn't consider relations between features. (Unless you're lucky enough to have completely independent data.)
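Here is a quick sketch of that difference, using the shapes from the question (a toy check, not part of the original answer):
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer, Reshape, Permute

lookback, features = 10, 5
# Reshape only relabels the axes: the element order in memory is unchanged, so steps and features get mixed
reshaper = Sequential([InputLayer(input_shape=(lookback, features)), Reshape((features, lookback))])
# Permute actually swaps the axes, keeping track of which value belongs to which step and feature
permuter = Sequential([InputLayer(input_shape=(lookback, features)), Permute((2, 1))])

x = np.arange(lookback * features, dtype='float32').reshape(1, lookback, features)
print(reshaper.predict(x)[0, 0])   # the first 10 values in memory order: steps and features mixed
print(permuter.predict(x)[0, 0])   # feature 0 across all 10 steps: [0, 5, 10, ..., 45]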
Now, explaining the relation between LSTM and Conv1D
An LSTM can be directly compared to a Conv1D: the shapes used are exactly the same, and they mean virtually the same thing, as long as you're using channels_last.
That said, the shape (samples, input_length, features_or_channels) is the correct shape for both LSTM and Conv1D. In fact, features and channels are exactly the same thing in this case. What changes is how each layer works regarding the input length and calculations.
Concept of filters and kernels
The kernel is the entire tensor inside the conv layer that is multiplied with the inputs to produce the results. Its shape is determined by the spatial size (kernel_size), the number of filters (output features) and, automatically, the number of input filters/channels.
There is not a number of kernels, but there is a kernel_size. The kernel size is how many steps along the length are joined together to produce each output step. (This tutorial is great for understanding 2D convolutions, what they do and what the kernel size is; just imagine 1D images instead. The tutorial doesn't show the number of "filters", though; it's like 1-filter animations.)
The number of filters relates directly to the number of features, they're exactly the same thing.
What should filters and kernel be set to?
So, if your LSTM layer is using units=256, meaning it will output 256 features, you should use filters=256, meaning your convolution will output 256 channels/features.
This is not a rule, though; you may find that using more or fewer filters brings better results, since the layers do different things after all. There is no need for all layers to have the same number of filters either! You should do parameter tuning here: test which numbers work best for your goal and data.
Now, kernel size is something that can't be compared to the LSTM. It's a new thing added to the model.
The number 3 is sort of a very common choice. It means that the convolution will take three time steps to produce one time step. Then slide one step to take another group of three steps to produce the next step and so on.
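As a small illustration (not from the original answer), here is how filters sets the output feature dimension for both layer types, using the question's 10x5 input and an example value of 256:
from tensorflow.keras.layers import Input, LSTM, Conv1D

inp = Input(shape=(10, 5))                                            # (time steps, features)
lstm_out = LSTM(256, return_sequences=True)(inp)                      # -> (None, 10, 256)
conv_out = Conv1D(filters=256, kernel_size=3, padding='causal')(inp)  # -> (None, 10, 256)
print(lstm_out.shape, conv_out.shape)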
Dilations
Dilation determines how many spaces there are between the steps that the convolution filter looks at.
A convolution with dilation_rate=1 takes kernel_size consecutive steps to produce one output step.
A convolution with dilation_rate = 2 takes, for instance, steps 0, 2 and 4 to produce a step. Then takes steps 1,3,5 to produce the next step and so on.
What should dilations be set to in order to represent a lookback of 10?
range = 1 + (kernel_size - 1) * dilation_rate
So, with kernel_size = 3:
Dilation = 0 (dilation_rate=1): the kernel spans 3 steps
Dilation = 1 (dilation_rate=2): the kernel spans 5 steps
Dilation = 2 (dilation_rate=4): the kernel spans 9 steps
Dilation = 3 (dilation_rate=8): the kernel spans 17 steps
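As a quick sketch of that arithmetic (the per-layer formula is the one above; the stacking rule for several causal layers is the standard one and is not stated explicitly in this answer):
def single_layer_range(kernel_size, dilation_rate):
    # how many input steps one output step of a single dilated conv can see
    return 1 + (kernel_size - 1) * dilation_rate

for i in range(4):
    print(i, single_layer_range(kernel_size=3, dilation_rate=2 ** i))   # 3, 5, 9, 17

def stacked_range(kernel_size, dilation_rates):
    # stacked causal layers compose: each layer adds (kernel_size - 1) * dilation_rate steps of context
    return 1 + sum((kernel_size - 1) * d for d in dilation_rates)

print(stacked_range(3, [1, 2]))      # 7  -> not quite a lookback of 10
print(stacked_range(3, [1, 2, 4]))   # 15 -> covers a lookback of 10 with room to spare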
My question to you
If you can explain "exactly" how you're structuring your data (what the original data is, how you're transforming it into the input shape, whether you have independent sequences, whether you're creating sliding windows, etc.), a better understanding of this process could be achieved.

Should a 1D CNN need padding to retain input length?

Shouldn't a 1D CNN with stride = 1 and 1 filter have output length equal to input length without the need for padding?
I thought this was the case, but created a Keras model with these specifications that says the output shape is (17902,1) when the input shape is (17910,1). I'm wondering why the dimension has been reduced, since the stride is 1 and it's a 1D convolution.
model = keras.Sequential([
    layers.Conv1D(filters=1, kernel_size=9, strides=1, activation=tf.nn.relu, input_shape=X_train[0].shape)
])
I expect that the output shape of this model should be (17910,1), but clearly I'm missing a source of reduction in dimension in this conv. layer.
The length of your output vector depends on the length of the input and your kernel size. Since you have a kernel size of 9, there are only 17910 - 9 + 1 = 17902 positions at which the kernel fits over your input, and thus the output has shape (17902,1) (without padding).
For a better understanding, see the linked illustrations of a convolution without padding and with padding.
Whether you should use padding or not is more a question of accuracy. As Ian Goodfellow, Yoshua Bengio and Aaron Courville note in their Deep Learning book, the optimal amount of zero padding (at least for 2D images) lies somewhere between "none" and "same".
So my suggestion would be to try two CNNs that have the same architecture except for the padding, and take the one with the better accuracy.
(Source: https://www.slideshare.net/xavigiro/recurrent-neural-networks-2-d2l3-deep-learning-for-speech-and-language-upc-2017)
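A minimal sketch reproducing both cases in Keras (the 17910 length comes from the question; only the shapes matter here):
import tensorflow as tf
from tensorflow.keras import layers

no_pad = tf.keras.Sequential([layers.Conv1D(filters=1, kernel_size=9, strides=1, padding='valid', input_shape=(17910, 1))])
same_pad = tf.keras.Sequential([layers.Conv1D(filters=1, kernel_size=9, strides=1, padding='same', input_shape=(17910, 1))])

print(no_pad.output_shape)    # (None, 17902, 1) -> 17910 - 9 + 1
print(same_pad.output_shape)  # (None, 17910, 1) -> length preserved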

Kernel size change in convolutional neural networks

I have been working on creating a convolutional neural network from scratch, and am a little confused on how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers.
Convolutional layer with kernel_size = (5,5) with 32 output channels
new dimension of throughput = (32, 28, 28)
Max pooling layer with pool_size (2,2) and stride (2,2)
new dimension of throughput = (32, 14, 14)
If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels?
Since the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14).
You need 64 kernels, each of size (32, 5, 5).
The depth (number of channels) of the kernels, 32 in this case (or 3 for an RGB image, 1 for grayscale, etc.), should always match the input depth.
For example, take a 3x3 kernel like [-1 0 1; -2 0 2; -1 0 1] and suppose you want to convolve it with an input of depth N. You can copy this 3x3 kernel N times along the third dimension; the math is then just like the single-channel case, except that you multiply the kernel values with the input values and sum over all N channels under the kernel window to get a single entry, or pixel. So what you get as output is a matrix with 1 channel. How much depth do you want the matrix for the next layer to have? That is the number of kernels you should apply. Hence, in your case, the weight tensor has size (64 x 32 x 5 x 5), which is 64 kernels with 32 channels each. (Copying the same 5x5 values into every channel, as in this hand-crafted example, is just for illustration; in a trained network each channel slice has its own learned values.)
("I am not a very confident english speaker hope you get what I said, it would be nice if someone edit this :)")
You essentially answered your own question. YOU are building the network solver. It seems like your convolutional layer output is [channels out] = [channels in] * [number of kernels]; I had to infer this from the wording of your question.
In general, this is how it works: you specify the kernel size of the layer and how many kernels to use. Since you have one input channel, you are essentially saying that there are 32 kernels in your first convolution layer, that is, 32 unique 5x5 kernels, each applied to the one input channel. More generally, each of the layer's kernels (32 in your example) is applied to each of the input channels. And that is the key: if you build your convolution-layer code according to these generalities, then your subsequent convolution layers are done. In the next layer you specify two kernels per channel: in your example there are 32 input channels, the hidden layer has 2 kernels per channel, and the output is 64 channels.
You could then down sample by applying a pooling layer, then flatten the 64 channels [turn a matrix into a vector by stacking the columns or rows], and pass it as a column vector into a fully connected network. That is the basic scheme of convolutional networks.
The work comes when you try to code up backpropagation through the convolutional layers. But the OP didn't ask about that. I'll just say this: you will come to a place where you have the stored input matrix (one channel), you have a gradient from a lower layer in the form of a matrix the size of the layer kernel, and you need to backpropagate it up to the next convolutional layer.
The simple approach is to rotate your stored channel matrix by 180 degrees and then convolve it with the gradient. The explanation for this is long and tedious, too much to write here, and not a lot on the internet explains it well.
A more sophisticated approach is to apply "correlation" between the input gradient and the stored channel matrix. Note I specifically said "correlation" as opposed to "convolution", and that is key. If you think they are "almost" the same thing, then I recommend you take some time and learn about the differences.
If you would like to have a look at my CNN solver, here's a link to the project. It's C++ with no documentation, sorry :) It's all in a header file called layer.h; find the class FilterLayer2D. I think the code is pretty readable (what programmer doesn't think his code is readable :) )
https://github.com/sraber/simplenet.git
I also wrote a paper on basic fully connected networks. I wrote it so that I wouldn't forget what I learned in my self-study. Maybe you'll get something out of it. It's at this link:
http://www.raberfamily.com/scottblog/scottblog.htm

How to interpret clearly the meaning of the units parameter in Keras?

I am wondering how LSTMs work in Keras. In this tutorial, for example, as in many others, you can find something like this:
model.add(LSTM(4, input_shape=(1, look_back)))
What does the "4" mean. Is it the number of neuron in the layer. By neuron, I mean something that for each instance gives a single output ?
Actually, I found this brillant discussion but wasn't really convinced by the explanation mentioned in the reference given.
On the scheme, one can see the num_unitsillustrated and I think I am not wrong in saying that each of this unit is a very atomic LSTM unit (i.e. the 4 gates). However, how these units are connected ? If I am right (but not sure), x_(t-1)is of size nb_features, so each feature would be an input of a unit and num_unit must be equal to nb_features right ?
Now, let's talk about keras. I have read this post and the accepted answer and get trouble. Indeed, the answer says :
Basically, the shape is like (batch_size, timespan, input_dim), where input_dim can be different from the unit
In which case? I'm having trouble reconciling this with the previous reference...
Moreover, it says,
LSTM in Keras only define exactly one LSTM block, whose cells is of unit-length.
Okay, but how do I define a full LSTM layer then? Is it the input_shape that implicitly creates as many blocks as the number of time_steps (which, as I understand it, is the first parameter of input_shape in my piece of code)?
Thanks for enlightening me.
EDIT: would it also be possible to explain clearly how to reshape data of, say, size (n_samples, n_features) for a stateful LSTM model? How should time_steps and batch_size be handled?
First, units in an LSTM is NOT the number of time_steps.
Each LSTM cell (present at a given time_step) takes in the input x and forms a hidden state vector a; the length of this hidden state vector is what is called units in the LSTM (Keras).
You should keep in mind that there is only one RNN cell created by the code
keras.layers.LSTM(units, activation='tanh', …… )
and the RNN operations are repeated Tx times by the class itself.
I've linked this to help you understand it better with some very simple code.
You can (sort of) think of it exactly as you think of fully connected layers. Units are neurons.
The dimension of the output is the number of neurons, as with most of the well known layer types.
The difference is that in LSTMs, these neurons will not be completely independent of each other, they will intercommunicate due to the mathematical operations lying under the cover.
Before going further, it might be interesting to take a look at this very complete explanation about LSTMs, their inputs/outputs and the usage of stateful=True/False: Understanding Keras LSTMs. Notice that your input shape should be input_shape=(look_back, 1). The input shape corresponds to (time_steps, features).
While this is a series of fully connected layers:
hidden layer 1: 4 units
hidden layer 2: 4 units
output layer: 1 unit
This is a series of LSTM layers:
Where input_shape = (batch_size, arbitrary_steps, 3)
Each LSTM layer will keep reusing the same units/neurons over and over until all the arbitrary timesteps in the input are processed.
The output will have shape:
(batch, arbitrary_steps, units) if return_sequences=True.
(batch, units) if return_sequences=False.
The memory states will have a size of units.
The inputs processed from the last step will have size of units.
To be really precise, there will be two groups of units, one working on the raw inputs, the other working on already processed inputs coming from the last step. Due to the internal structure, each group will have a number of parameters 4 times bigger than the number of units (this 4 is not related to the image, it's fixed).
Flow:
Takes an input with n steps and 3 features
Layer 1:
For each time step in the inputs:
Uses 4 units on the inputs to get a size 4 result
Uses 4 recurrent units on the outputs of the previous step
Outputs the last (return_sequences=False) or all (return_sequences = True) steps
output features = 4
Layer 2:
Same as layer 1
Layer 3:
For each time step in the inputs:
Uses 1 unit on the inputs to get a size 1 result
Uses 1 unit on the outputs of the previous step
Outputs the last (return_sequences=False) or all (return_sequences = True) steps
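A minimal sketch of that flow in Keras (the 3-feature input and the 4/4/1 units follow the description above; the number of steps is left as None for arbitrary lengths):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential([
    LSTM(4, return_sequences=True, input_shape=(None, 3)),   # layer 1: 4 units, all steps -> (None, None, 4)
    LSTM(4, return_sequences=True),                          # layer 2: 4 units, all steps -> (None, None, 4)
    LSTM(1, return_sequences=False),                         # layer 3: 1 unit, last step only -> (None, 1)
])
model.summary()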
The number of units is the size (length) of the internal vector states, h and c, of the LSTM. That is, no matter the shape of the input, it is projected (by a dense transformation) by the various kernels for the i, f, and o gates. The details of how the resulting latent features are transformed into h and c are described in the linked post. In your example, the input shape of the data
(batch_size, timesteps, input_dim)
will be transformed to
(batch_size, timesteps, 4)
if return_sequences is true; otherwise only the last h will be emitted, making it (batch_size, 4). I would recommend using a much higher latent dimension, perhaps 128 or 256, for most problems.
I would put it this way: there are 4 LSTM "neurons" or "units", each with 1 Cell State and 1 Hidden State for each timestep they process. So for an input of 1 timestep, you will have 4 Cell States, 4 Hidden States and 4 Outputs.
Actually, the more precise way to say this is: for an input of one timestep you get 1 Cell State (a vector of size 4), 1 Hidden State (a vector of size 4) and 1 Output (a vector of size 4).
So if you feed in a time series with 20 steps, you will have 20 (intermediate) Cell States, each of size 4. That is because the inputs in an LSTM are processed sequentially, one after the other. Similarly, you will have 20 Hidden States, each of size 4.
Usually, your output will be the output of the LAST step (a vector of size 4). However, in case you want the outputs of each intermediate step (remember you have 20 timesteps to process), you can set return_sequences=True. In that case you will have 20 vectors of size 4, each telling you what the output was when the corresponding step was processed, as those 20 inputs came one after the other.
If you set return_state=True, you also get the last Hidden State of size 4 and the last Cell State of size 4.
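A short sketch of those shapes (20 steps and units=4, as in the description above; the variable names are illustrative):
import numpy as np
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model

inp = Input(shape=(20, 1))   # 20 time steps, 1 feature
outputs, last_h, last_c = LSTM(4, return_sequences=True, return_state=True)(inp)
model = Model(inp, [outputs, last_h, last_c])

seq, h, c = model.predict(np.random.rand(1, 20, 1))
print(seq.shape)   # (1, 20, 4) -> one size-4 output per step
print(h.shape)     # (1, 4)     -> last Hidden State
print(c.shape)     # (1, 4)     -> last Cell State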

Initializing weights in a 3 layer neural network

So I'm learning the SIMPLEST way to code a neural network, one that can be modified in many ways depending on what you want, basically like a template. I found iamtrask's 11-line neural network code, and the weight initialization makes perfect sense:
syn0 = 2*np.random.random((3,1)) - 1
However, when I look at it for his extended 3 layer network, it looks like this:
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
I would understand if syn1 was a bit different, but BOTH are now different! He doesn't explain it, only gives a comment saying, "randomly initialize our weights with mean 0."
Can someone explain to me the mathematical reasoning behind this? Go full crazy if you want, I'm a math person since 5.
If by different, you are referring to the arguments of np.random.random(), then it is because you are creating weights with different shapes/dimensions. In this example (which ignores biases), you are trying to go from an input of dimension 3 to an output of dimension 1. With one layer, you require the shape (3,1). For two layers, you need shapes (3,n) and (n,1), where n is any integer. This is just to ensure that matrix multiplication is valid. Here n = 4 has been chosen as the hidden layer dimension.
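A short sketch of why those shapes make the matrix multiplications line up (the nonlinearity and biases are omitted; only the shapes matter here):
import numpy as np

syn0 = 2 * np.random.random((3, 4)) - 1   # input (3) -> hidden (4), values in [-1, 1), mean 0
syn1 = 2 * np.random.random((4, 1)) - 1   # hidden (4) -> output (1)

X = np.random.random((5, 3))              # a batch of 5 samples with 3 features each
hidden = X @ syn0                         # (5, 3) @ (3, 4) -> (5, 4)
output = hidden @ syn1                    # (5, 4) @ (4, 1) -> (5, 1)
print(hidden.shape, output.shape)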
