How to interpret clearly the meaning of the units parameter in Keras? - python

I am wondering how LSTMs work in Keras. In this tutorial for example, as in many others, you can find something like this:
model.add(LSTM(4, input_shape=(1, look_back)))
What does the "4" mean? Is it the number of neurons in the layer? By neuron, I mean something that gives a single output for each instance.
Actually, I found this brilliant discussion but wasn't really convinced by the explanation mentioned in the reference given.
On the diagram, one can see num_units illustrated, and I think I am not wrong in saying that each of these units is a very atomic LSTM unit (i.e. the 4 gates). However, how are these units connected? If I am right (but I am not sure), x_(t-1) is of size nb_features, so each feature would be an input of a unit, and num_units must be equal to nb_features, right?
Now, let's talk about Keras. I have read this post and the accepted answer, and it left me confused. Indeed, the answer says:
Basically, the shape is like (batch_size, timespan, input_dim), where input_dim can be different from the unit
In which case? This conflicts with my reading of the previous reference...
Moreover, it says,
LSTM in Keras only define exactly one LSTM block, whose cells is of unit-length.
Okay, but how do I define a full LSTM layer? Is it the input_shape that implicitly creates as many blocks as the number of time_steps (which, as I understand it, is the first element of the input_shape parameter in my piece of code)?
Thanks for enlightening me.
EDIT: would it also be possible to detail clearly how to reshape data of, say, size (n_samples, n_features) for a stateful LSTM model? How to deal with time_steps and batch_size?

First, units in LSTM is NOT the number of time_steps.
Each LSTM cell (present at a given time_step) takes in an input x and forms a hidden state vector a; the length of this hidden state vector is what is called units in LSTM (Keras).
You should keep in mind that there is only one RNN cell created by the code
keras.layers.LSTM(units, activation='tanh', ...)
and the RNN operations are repeated Tx times by the class itself.
Below is a very simple example to help you understand this better.
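A minimal sketch (assuming TensorFlow 2.x Keras and made-up shapes) showing that units is just the length of that hidden state vector, no matter how many time steps the layer loops over:

import numpy as np
from tensorflow import keras

# Toy input: batch of 2 sequences, Tx=7 time steps, 3 features each (made-up numbers)
x = np.random.rand(2, 7, 3).astype("float32")

# One LSTM layer; units=4 is the length of the hidden state vector a
layer = keras.layers.LSTM(4)

# The same cell is applied Tx=7 times internally; the output is the last hidden state
out = layer(x)
print(out.shape)  # (2, 4) -- batch_size x units, independent of Tx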

You can (sort of) think of it exactly as you think of fully connected layers. Units are neurons.
The dimension of the output is the number of neurons, as with most of the well known layer types.
The difference is that in LSTMs, these neurons will not be completely independent of each other; they will intercommunicate due to the mathematical operations under the hood.
Before going further, it might be interesting to take a look at this very complete explanation about LSTMs, their inputs/outputs and the usage of stateful=True/False: Understanding Keras LSTMs. Notice that your input shape should be input_shape=(look_back, 1). The input shape goes as (time_steps, features).
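Regarding the EDIT about reshaping (n_samples, n_features) data for a stateful LSTM: a minimal sketch, assuming TensorFlow 2.x Keras, that each sample is one time step, and made-up sizes (adapt them to your data):

import numpy as np
from tensorflow import keras

n_samples, n_features, batch_size, time_steps = 1000, 8, 10, 1
data = np.random.rand(n_samples, n_features).astype("float32")

# Reshape to (n_sequences, time_steps, n_features); for a stateful model,
# the number of sequences must be divisible by batch_size
x = data.reshape(n_samples // time_steps, time_steps, n_features)

model = keras.Sequential()
# Stateful layers need the fixed batch size, via batch_input_shape
model.add(keras.layers.LSTM(32, stateful=True,
                            batch_input_shape=(batch_size, time_steps, n_features)))
model.add(keras.layers.Dense(1))
model.reset_states()  # states persist between batches; reset manually, e.g. between epochs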
While this is a series of fully connected layers:
hidden layer 1: 4 units
hidden layer 2: 4 units
output layer: 1 unit
This is a series of LSTM layers:
Where the input has shape (batch_size, arbitrary_steps, 3). (Note that in Keras, the input_shape argument omits the batch dimension, so you would pass input_shape=(arbitrary_steps, 3).)
Each LSTM layer will keep reusing the same units/neurons over and over until all the arbitrary timesteps in the input are processed.
The output will have shape:
(batch, arbitrary_steps, units) if return_sequences=True.
(batch, units) if return_sequences=False.
The memory states will have a size of units.
The recurrent inputs carried over from the previous step will have size units.
To be really precise, there will be two groups of units: one working on the raw inputs, the other working on already-processed inputs coming from the last step. Due to the internal structure, each group will have a number of parameters 4 times bigger than the number of units (this 4 is not related to the image; it's fixed).
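You can check this factor of 4 directly. Assuming TensorFlow 2.x Keras and toy sizes (4 units, 3 features), the parameter count is 4 * units * (features + units + 1): a kernel on the inputs, a recurrent kernel on the previous step's output, and a bias, for each of the four internal gates:

from tensorflow import keras

units, features = 4, 3
inp = keras.Input(shape=(None, features))
out = keras.layers.LSTM(units)(inp)
model = keras.Model(inp, out)

expected = 4 * units * (features + units + 1)
print(model.count_params(), expected)  # both 128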
Flow:
Takes an input with n steps and 3 features
Layer 1:
    For each time step in the inputs:
        Uses 4 units on the inputs to get a size-4 result
        Uses 4 recurrent units on the outputs of the previous step
    Outputs the last (return_sequences=False) or all (return_sequences=True) steps
    Output features = 4
Layer 2:
    Same as layer 1
Layer 3:
    For each time step in the inputs:
        Uses 1 unit on the inputs to get a size-1 result
        Uses 1 unit on the outputs of the previous step
    Outputs the last (return_sequences=False) or all (return_sequences=True) steps
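A minimal sketch of the three layers described in this flow (shapes assumed from the description above):

from tensorflow import keras

model = keras.Sequential()
model.add(keras.layers.InputLayer(input_shape=(None, 3)))  # n steps, 3 features
model.add(keras.layers.LSTM(4, return_sequences=True))  # layer 1: 4 units, outputs all steps
model.add(keras.layers.LSTM(4, return_sequences=True))  # layer 2: same as layer 1
model.add(keras.layers.LSTM(1))  # layer 3: 1 unit, outputs only the last step
model.summary()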

The number of units is the size (length) of the internal vector states, h and c, of the LSTM. That is, no matter the shape of the input, it is upscaled (by a dense transformation) by the various kernels for the i, f, and o gates. The details of how the resulting latent features are transformed into h and c are described in the linked post. In your example, the input shape of the data
(batch_size, timesteps, input_dim)
will be transformed to
(batch_size, timesteps, 4)
if return_sequences is true; otherwise only the last h will be emitted, making it (batch_size, 4). I would recommend using a much higher latent dimension, perhaps 128 or 256, for most problems.

I would put it this way: there are 4 LSTM "neurons" or "units", each with 1 Cell State and 1 Hidden State for each timestep they process. So for an input of 1 timestep, you will have 4 Cell State values, 4 Hidden State values and 4 Outputs.
Actually, the correct way to say this is: for a one-timestep input you have 1 Cell State (a vector of size 4), 1 Hidden State (a vector of size 4) and 1 Output (a vector of size 4).
So if you feed in a timeseries with 20 steps, you will have 20 (intermediate) Cell States, each of size 4. That is because the inputs in an LSTM are processed sequentially, one after the other. Similarly, you will have 20 Hidden States, each of size 4.
Usually, your output will be the output of the LAST step (a vector of size 4). However, in case you want the output of each intermediate step (remember you have 20 timesteps to process), you can set return_sequences=True, in which case you will have 20 vectors of size 4, each telling you what the output was when each of those steps got processed, as those 20 inputs came one after the other.
In case you set return_state=True, you get the last Hidden State of size 4 and the last Cell State of size 4.
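A quick sketch of these shapes (assuming TensorFlow 2.x Keras and a made-up input of 20 timesteps with 5 features):

import numpy as np
from tensorflow import keras

x = np.random.rand(1, 20, 5).astype("float32")

outputs, h, c = keras.layers.LSTM(4, return_sequences=True,
                                  return_state=True)(x)
print(outputs.shape)     # (1, 20, 4) -- one size-4 output per timestep
print(h.shape, c.shape)  # (1, 4) (1, 4) -- last Hidden State and last Cell State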

Related

Stuck understanding ResNet's Identity block and Convolutional blocks

I'm learning Residual Networks (ResNet50) from Andrew Ng's Coursera lectures. I understand that one of the main reasons why ResNets work is that they can learn the identity function, and that's why adding more and more layers to the network does not hurt its performance.
Now, as described in the lectures, there are two types of blocks used in ResNets: 1) the identity block and 2) the convolutional block.
The identity block is used when there is no change in input and output dimensions. The convolutional block is almost the same as the identity block, but there is a convolutional layer in the shortcut path just to change the dimension so that the input and output dimensions match.
Here is the identity block:
and here is the convolutional block:
Now, in the implementation of the convolutional block (2nd image), the first block (i.e. Conv2D --> BatchNorm --> ReLU) is implemented with a 1x1 convolution and stride > 1.
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', padding = 'valid', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
I don't understand the reason behind keeping stride > 1 with window size 1. Isn't it just data loss? We are just considering alternate pixels in this case.
What should be the possible reason for such hyperparameter selection? Any intuitive explanation will help! Thanks.
I don't understand the reason behind keeping stride > 1 with window size 1. Isn't it just data loss?
Please refer to the section on Deeper Bottleneck Architectures in the ResNet paper, and also Figure 5.
https://arxiv.org/pdf/1512.03385.pdf
1 x 1 convolutions are typically used to increase or decrease the dimensionality along the filter dimension. So, in the bottleneck architecture the first 1 x 1 layer reduces the dimensions so that the 3 x 3 layer needs to handle smaller input/output dimensions. Then the final 1 x 1 layer increases the filter dimensions again.
It's done to save on computation/training time.
From the paper,
"Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design".
I believe you might have answered your own question. The convolutional block is used whenever you need to change the dimension in order for the output and input dimensions to match. That being said, how do you change the dimension of a certain volume using convolutions? Well, you change the stride.
For any given convolution operation, assuming a square input, the dimension of the output volume can be obtained through the formula (n+2p-f)/s +1, where n is the input dimension, p is your zero-padding, f the filter dimension and s is the stride. By increasing the stride you're effectively reducing the dimension of your shortcut's output volume, and thus, it can be used in such a way as to make sure that the dimensions of your shortcut and lower paths will match in order for the final sum to be performed.
Why is it >1 then? Well, if you didn't need a stride larger than one, you wouldn't be needing a dimension alteration in the first place and therefore would be using the identity block instead.
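For concreteness, here is a tiny hypothetical helper (not from either answer) that evaluates this formula:

def conv_output_dim(n, p, f, s):
    # output size of a square convolution: floor((n + 2p - f) / s) + 1
    return (n + 2 * p - f) // s + 1

# A 1x1 convolution with stride 2 halves the spatial dimension, e.g. 56 -> 28:
print(conv_output_dim(n=56, p=0, f=1, s=2))  # 28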

Implement Causal CNN in Keras for multivariate time-series prediction

This question is a followup to my previous question here: Multi-feature causal CNN - Keras implementation, however, there are numerous things that are unclear to me that I think it warrants a new question. The model in question here has been built according to the accepted answer in the post mentioned above.
I am trying to apply a Causal CNN model on multivariate time-series data of 10 sequences with 5 features.
lookback, features = 10, 5
What should filters and kernel be set to?
What is the effect of filters and kernel on the network?
Are these just arbitrary numbers - i.e. like the number of neurons in an ANN layer?
Or will they have an effect on how the net interprets the time-steps?
What should dilations be set to?
Is this just an arbitrary number or does this represent the lookback of the model?
filters = 32
kernel = 5
dilations = 5
dilation_rates = [2 ** i for i in range(dilations)]
model = Sequential()
model.add(InputLayer(input_shape=(lookback, features)))
model.add(Reshape(target_shape=(features, lookback, 1), input_shape=(lookback, features)))
According to the previously mentioned answer, the input needs to be reshaped according to the following logic:
After the Reshape, the 5 input features are now treated as the temporal dimension for the TimeDistributed layer
When Conv1D is applied to each input feature, it thinks the shape of the layer is (10, 1)
with the default "channels_last", therefore...
10 time-steps is the temporal dimension
1 is the "channel", the new location for the feature maps
# Add causal layers
for dilation_rate in dilation_rates:
    model.add(TimeDistributed(Conv1D(filters=filters,
                                     kernel_size=kernel,
                                     padding='causal',
                                     dilation_rate=dilation_rate,
                                     activation='elu')))
According to the mentioned answer, the model needs to be reshaped, according to the following logic:
Stack feature maps on top of each other so each time step can look at all features produced earlier - (10 time steps, 5 features * 32 filters)
Next, causal layers are now applied to the 5 input features dependently.
Why were they initially applied independently?
Why are they now applied dependently?
model.add(Reshape(target_shape=(lookback, features * filters)))
next_dilations = 3
dilation_rates = [2 ** i for i in range(next_dilations)]
for dilation_rate in dilation_rates:
    model.add(Conv1D(filters=filters,
                     kernel_size=kernel,
                     padding='causal',
                     dilation_rate=dilation_rate,
                     activation='elu'))
model.add(MaxPool1D())
model.add(Flatten())
model.add(Dense(units=1, activation='linear'))
model.summary()
SUMMARY
What should filters and kernel be set to?
Will they have an effect on how the net interprets the time-steps?
What should dilations be set to in order to represent a lookback of 10?
Why are causal layers initially applied independently?
Why are they applied dependently after reshape?
Why not apply them dependently from the beginning?
===========================================================================
FULL CODE
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (InputLayer, Reshape, TimeDistributed,
                                     Conv1D, MaxPool1D, Flatten, Dense)

lookback, features = 10, 5
filters = 32
kernel = 5
dilations = 5
dilation_rates = [2 ** i for i in range(dilations)]

model = Sequential()
model.add(InputLayer(input_shape=(lookback, features)))
model.add(Reshape(target_shape=(features, lookback, 1), input_shape=(lookback, features)))

# Add causal layers
for dilation_rate in dilation_rates:
    model.add(TimeDistributed(Conv1D(filters=filters,
                                     kernel_size=kernel,
                                     padding='causal',
                                     dilation_rate=dilation_rate,
                                     activation='elu')))

model.add(Reshape(target_shape=(lookback, features * filters)))

next_dilations = 3
dilation_rates = [2 ** i for i in range(next_dilations)]
for dilation_rate in dilation_rates:
    model.add(Conv1D(filters=filters,
                     kernel_size=kernel,
                     padding='causal',
                     dilation_rate=dilation_rate,
                     activation='elu'))

model.add(MaxPool1D())
model.add(Flatten())
model.add(Dense(units=1, activation='linear'))
model.summary()
===========================================================================
EDIT:
Daniel, thank you for your answer.
Question:
If you can explain "exactly" how you're structuring your data, what is the original data and how you're transforming it into the input shape, if you have independent sequences, if you're creating sliding windows, etc. A better understanding of this process could be achieved.
Answer:
I hope I understand your question correctly.
Each feature is a sequence array of time-series data. They are independent, as in, they are not an image; however, they correlate with each other somewhat.
That is why I am trying to use WaveNet, which is very good at predicting a single time-series array; however, my problem requires me to use multiple features.
Comments about the given answer
Questions:
Why are causal layers initially applied independently?
Why are they applied dependently after reshape?
Why not apply them dependently from the beginning?
That answer is sort of strange. I'm not an expert, but I don't see the need to keep independent features with a TimeDistributed layer. I also cannot say whether it gives a better result or not. At first I'd say it's just unnecessary, but it might bring extra intelligence, given that it might see relations that involve distant steps between two features instead of just looking at "same steps". (This should be tested.)
Nevertheless, there is a mistake in that approach.
The reshapes that are intended to swap the lookback and feature sizes are not doing what they are expected to do. The author of the answer clearly wants to swap axes (which keeps the interpretation of what is a feature and what is lookback), and that is different from a reshape (which mixes everything together, and the data loses its meaning).
A correct approach would need actual axis swapping, like model.add(Permute((2,1))) instead of the reshapes.
So, I can't answer those questions directly, but nothing seems to create that need.
One sure thing is: you will certainly want the dependent part. A model will not get anywhere near the intelligence of your original model if it doesn't consider relations between features. (Unless you're lucky enough to have your data completely independent.)
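As an illustration of the Permute fix suggested above, a minimal sketch (using the question's lookback and feature sizes; this replaces only the first Reshape):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import InputLayer, Permute, Reshape

lookback, features = 10, 5

model = Sequential()
model.add(InputLayer(input_shape=(lookback, features)))
# Permute swaps the time and feature axes without scrambling the values,
# unlike Reshape, which just refills the new shape row by row
model.add(Permute((2, 1)))                   # -> (features, lookback)
model.add(Reshape((features, lookback, 1)))  # add the channel dim for TimeDistributed Conv1D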
Now, explaining the relation between LSTM and Conv1D
An LSTM can be directly compared to a Conv1D: the shapes used are exactly the same, and they mean virtually the same thing, as long as you're using channels_last.
That said, the shape (samples, input_length, features_or_channels) is the correct shape for both LSTM and Conv1D. In fact, features and channels are exactly the same thing in this case. What changes is how each layer works regarding the input length and calculations.
Concept of filters and kernels
The kernel is the entire tensor inside the conv layer that is multiplied with the inputs to get the results. A kernel's shape includes its spatial size (kernel_size) and its number of filters (output features); the number of input channels is inferred automatically.
There is not a number of kernels, but there is a kernel_size. The kernel size is how many steps in the length will be joined together for each output step. (This tutorial is great for understanding 2D convolutions regarding what it does and what the kernel size is -- just imagine 1D images instead; the tutorial doesn't show the number of "filters" though, it's like 1-filter animations.)
The number of filters relates directly to the number of features, they're exactly the same thing.
What should filters and kernel be set to?
So, if your LSTM layer is using units=256, meaning it will output 256 features, you should use filters=256, meaning your convolution will output 256 channels/features.
This is not a rule, though; you may find that using more or fewer filters brings better results, since the layers do different things after all. There is no need for all layers to have the same number of filters either! You should treat this as a hyperparameter to tune: test to see which numbers are best for your goal and data.
Now, kernel size is something that can't be compared to the LSTM. It's a new thing added to the model.
The number 3 is a very common choice. It means that the convolution will take three time steps to produce one time step, then slide one step to take another group of three steps to produce the next step, and so on.
Dilations
The dilation rate determines the spacing between the steps the convolution filter looks at.
A convolution with dilation_rate=1 takes kernel_size consecutive steps to produce one step.
A convolution with dilation_rate=2 takes, for instance, steps 0, 2 and 4 to produce a step. Then it takes steps 1, 3 and 5 to produce the next step, and so on.
What should dilations be set to in order to represent a lookback of 10?
range = 1 + (kernel_size - 1) * dilation_rate
So, with a kernel size = 3:
Dilation = 0 (dilation_rate=1): the kernel will span 3 steps
Dilation = 1 (dilation_rate=2): the kernel will span 5 steps
Dilation = 2 (dilation_rate=4): the kernel will span 9 steps
Dilation = 3 (dilation_rate=8): the kernel will span 17 steps
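These spans can be reproduced with a tiny hypothetical helper implementing the formula above:

def receptive_range(kernel_size, dilation_rate):
    # steps spanned by one dilated convolution window
    return 1 + (kernel_size - 1) * dilation_rate

for i in range(4):
    print(receptive_range(3, 2 ** i))  # 3, 5, 9, 17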
My question to you
If you can explain "exactly" how you're structuring your data, what is the original data and how you're transforming it into the input shape, if you have independent sequences, if you're creating sliding windows, etc. A better understanding of this process could be achieved.

LSTM architecture in Keras implementation?

I am new to Keras and am going through the LSTM implementation details in the Keras documentation. It was going smoothly, but suddenly I came across this SO post and the comment. It has confused me about what the actual LSTM architecture is:
Here is the code:
model = Sequential()
model.add(LSTM(32, input_shape=(10, 64)))
model.add(Dense(2))
As per my understanding, 10 denotes the number of time-steps, and each one of them is fed to its respective LSTM cell; 64 denotes the number of features for each time-step.
But the comment in the above post and the actual answer have confused me about the meaning of 32.
Also, how does the output from the LSTM get connected to the Dense layer?
A hand-drawn diagrammatic explanation would be quite helpful in visualizing the architecture.
EDIT:
As far as this other SO post is concerned, it means 32 represents the length of the output vector produced by each of the LSTM cells if return_sequences=True.
If that's true, then how do we connect each of the 32-dimensional outputs produced by each of the 10 LSTM cells to the next dense layer?
Also, kindly tell me whether the first SO post's answer is ambiguous or not.
how do we connect each of the 32-dimensional outputs produced by each of the 10 LSTM cells to the next dense layer?
It depends on how you want to do it. Suppose you have:
model.add(LSTM(32, input_shape=(10, 64), return_sequences=True))
Then, the output of that layer has shape (10, 32). At this point, you can either use a Flatten layer to get a single vector with 320 components, or use a TimeDistributed to work on each of the 10 vectors independently:
model.add(TimeDistributed(Dense(15)))
The output shape of this layer is (10, 15), and the same weights are applied to the output of every LSTM unit.
it's easy to figure out the no. of LSTM cells required for the input (specified in timespan)
How to figure out the no. of LSTM units required in the output?
You either get the output of the last LSTM cell (last timestep) or the output of every LSTM cell, depending on the value of return_sequences. As for the dimensionality of the output vector, that's just a choice you have to make, just like the size of a dense layer, or number of filters in a conv layer.
how does each of the 32-dim vectors from the 10 LSTM cells get connected to the TimeDistributed layer?
Following the previous example, you would have a (10, 32) tensor, i.e. a size-32 vector for each of the 10 LSTM cells. What TimeDistributed(Dense(15)) does is create a (15, 32) weight matrix and a bias vector of size 15, and do:
for h_t in lstm_outputs:
    dense_outputs.append(
        activation(dense_weights.dot(h_t) + dense_bias)
    )
Hence, dense_outputs has size (10, 15), and the same weights were applied to every LSTM output, independently.
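You can verify those shapes with a quick sketch (note that Keras stores the kernel transposed, with shape (32, 15), and computes h_t.dot(kernel) instead):

import numpy as np
from tensorflow import keras

x = np.random.rand(1, 10, 32).astype("float32")  # 10 time steps, size-32 vectors
td = keras.layers.TimeDistributed(keras.layers.Dense(15))
y = td(x)
print(y.shape)                        # (1, 10, 15)
print([w.shape for w in td.weights])  # kernel (32, 15) and bias (15,), shared across steps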
Note that everything still works when you don't know how many timesteps you need, e.g. for machine translation. In this case, you use None for the timestep; everything I wrote still applies, with the only difference that the number of timesteps is not fixed anymore. Keras will repeat LSTM, TimeDistributed, etc. as many times as necessary (which depends on the input).

Keras Bidirectional LSTM - Layer grouping

While working to implement a paper (Dialogue Act Sequence Labeling using Hierarchical encoder with CRF) using Keras, I need to implement a specific Bidirectional LSTM architecture.
I have to train the network on the concept of a Conversation. Conversations are composed of Utterances, and Utterances are composed of Words. Words are N-dimensional vectors. The model represented in the paper first reduces each Utterance to a single M-dimensional vector. To achieve this, it uses a Bidirectional LSTM layer. Let's call this layer A.
(For simplicity, let's assume that each Utterance has a length of |U| and each Conversation has a length of |C|)
Each Utterance is input to a Bi-LSTM layer with |U| timesteps, and the output of the last timestep is taken. The input size is (|U|, N), and the output size is (1, M).
This Bi-LSTM layer should be applied separately/simultaneously to each Utterance in the Conversation. Note that, since the network takes as input the entire Conversation, the dimensions for a single input to the network would be (|C|, |U|, N).
As the paper describes, I intend to take each utterance (i.e. each (|U|, N) slice) of that input and feed it to a Bi-LSTM layer with |U| units. As there are |C| Utterances in a Conversation, this implies that there should be a total of |C| x |U| Bi-LSTM units, grouped into |C| different partitions, one for each Utterance. There should be no connection between the |C| groups of units. Once processed, the output of each of those |C| groups of Bidirectional LSTM units will then be fed into another Bi-LSTM layer, say B.
How is it possible to feed specific portions of the input only to specific portions of layer A, and make sure that they are not interconnected? (i.e. the portion of Bi-LSTM units used for an Utterance should not be connected to the Bi-LSTM units used for another Utterance)
Is it possible to achieve this through keras.models.Sequential, or is there a specific way to achieve this using Functional API?
Here is what I have tried so far:
# ...
model = Sequential()
model.add(Bidirectional(LSTM(C * U), input_shape=(C, U, N),
                        merge_mode='concat'))
model.add(GlobalMaxPooling1D())
model.add(Bidirectional(LSTM(n, return_sequences=True), merge_mode='concat'))
# ...
model.compile(loss=loss_function,
              optimizer=optimizer,
              metrics=['accuracy'])
However, this code is currently receiving the following error:
ValueError: Input 0 is incompatible with layer bidirectional_1: expected ndim=3, found ndim=4
More importantly, the code above obviously does not do the grouping I mentioned. I am looking for a way to enhance the model as I described above.
Finally, below is the figure of the model I described above. It may help clarify some of the content described above. The layer tagged as "Utterance layer" is what I called layer A. As you can see in the figure, each utterance u_i is composed of words w_j, which are N-dimensional vectors. (You may omit the embedding layer for the purposes of this question.) Assuming, for simplicity, that each u_i has an equal number of Words, each group of Bidirectional LSTM nodes in the Utterance Layer will have an input size of (|U|, N). Yet, since there are |C| such utterances u_i in a Conversation, the dimensions of the entire input will be (|C|, |U|, N).
I'll create a net for what I see in the picture. For now I'm ignoring the "units" part I mentioned in my comment to your question.
This model does exactly what is shown in the picture. All utterances are completely separate from start to end.
model = Sequential()

# You have an extra time dimension that should be kept as is,
# so we add a TimeDistributed wrapper to the first layers
model.add(TimeDistributed(Embedding(dictionaryLength, N), input_shape=(C, U)))

# This is the utterance layer. It works in "word steps", keeping "utterance steps" untouched
model.add(TimeDistributed(Bidirectional(LSTM(M // 2, return_sequences=False))))

# Is the pooling really demanded by the article?
# Or was it an attempt to remove one of the time dimensions?
# Not adding it here because I used return_sequences=False
model.add(Bidirectional(LSTM(someSize // 2, return_sequences=True)))
model.add(Dense(anotherSize))  # is this a CRF layer???
model.summary()
Notice that in every Bidirectional layer, I divided the output size by two, so it's important that M and someSize are even numbers.

How to vectorize LSTMs?

In particular, I'm confused about what it means for an LSTM layer to have (say) 50 cells. Consider the following LSTM block from this awesome blog post:
Say my input xt is a (20,) vector and the hidden layer ht is a (50,) vector. Given that the cell state Ct undergoes only point-wise operations (point-wise tanh and *) before becoming the new hidden state, I gather that Ct.shape = ht.shape = (50,). Now the forget gate looks at the input concatenated with the hidden layer, which would be a (20+50,) = (70,) vector, which means the forget gate must have a weight matrix of shape (50, 70), such that dot(W, [xt, ht]).shape = (50,).
So my question at this point is: am I looking at an LSTM block with 50 cells when Ct.shape = (50,)? Or am I misunderstanding what it means for an LSTM layer to have 50 cells?
I understand what you are getting confused with. Basically, the black line connecting the two boxes at the top, which represents the cell state, is actually a bundle of 50 small lines grouped together. These get multiplied pointwise with the output of the forget gate, which consists of 50 values.
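To make the shapes concrete, here is a small NumPy sketch of just the forget-gate step (illustrative values only):

import numpy as np

units, n_features = 50, 20
x_t = np.random.rand(n_features)   # input at step t, size 20
h_prev = np.random.rand(units)     # previous hidden state, size 50
c_prev = np.random.rand(units)     # previous cell state, size 50

W_f = np.random.rand(units, n_features + units)  # (50, 70), as derived above
b_f = np.random.rand(units)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forget gate output: 50 values, multiplied pointwise with the 50 cell-state "lines"
f_t = sigmoid(W_f.dot(np.concatenate([x_t, h_prev])) + b_f)
c_t = f_t * c_prev
print(f_t.shape, c_t.shape)  # (50,) (50,)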
