LSTM seq2seq input and output with different number of time steps - python

I am new to this field and am currently working on a video action prediction project using Keras. The input data takes 10% of the frames of each video and collapses every run of identical successive actions into a single action. For example, [0,0,0,1,1,1,2] -> [0,1,2]. After padding and one-hot encoding, the shape of the input data is (1460, 6, 48) -> (number of videos, number of actions, one-hot encoding over 48 actions). I would like to predict all future actions for each video, so the shape of the output should be (1460, 23, 48) -> (number of videos, max timesteps, one-hot encoding over 48 actions).
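For illustration, here is a minimal sketch of the preprocessing described above (collapsing runs of identical actions, padding, one-hot encoding); the helper name and the toy data are made up, and note that padding with 0 collides with action id 0, so a reserved padding index may be preferable in practice:
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

NUM_ACTIONS = 48
MAX_INPUT_STEPS = 6

def collapse_runs(actions):
    # [0, 0, 0, 1, 1, 1, 2] -> [0, 1, 2]
    return [a for i, a in enumerate(actions) if i == 0 or a != actions[i - 1]]

videos = [[0, 0, 0, 1, 1, 1, 2], [5, 5, 7]]      # toy stand-in for real action labels
collapsed = [collapse_runs(v) for v in videos]   # [[0, 1, 2], [5, 7]]
padded = pad_sequences(collapsed, maxlen=MAX_INPUT_STEPS, padding='post')
one_hot = to_categorical(padded, num_classes=NUM_ACTIONS)
print(one_hot.shape)                             # (2, 6, 48)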
Here is my current approach, which does not work.
def lstm_model(frame_len, max_timesteps):
    model = Sequential()
    model.add(LSTM(100, input_shape=(None, 48), return_sequences=True))
    model.add(Dense(48, activation='tanh'))
    model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model
I would like to know whether I have to keep the number of timesteps the same for the input and the output.
If not, how could I modify the model to fit such data?
Any help would be appreciated.

You can do something like this:
Encode your input data with an LSTM.
Repeat this encoded vector the required number of times (once per output timestep).
Decode the repeated vector with another LSTM.
In Keras, it looks like this:
from tensorflow.keras import layers, models

input_timesteps = 10
input_features = 2
output_timesteps = 3
output_features = 1
units = 100

# Input
encoder_inputs = layers.Input(shape=(input_timesteps, input_features))
# Encoder
encoder = layers.LSTM(units, return_sequences=False)(encoder_inputs)
# Repeat the encoded vector once per output timestep
decoder = layers.RepeatVector(output_timesteps)(encoder)
# Decoder
decoder = layers.LSTM(units, return_sequences=True)(decoder)
# Output
out = layers.TimeDistributed(layers.Dense(output_features))(decoder)
model = models.Model(encoder_inputs, out)
It gives you:
_________________________________________________________________
Layer (type)                  Output Shape         Param #
=================================================================
input_1 (InputLayer)          [(None, 10, 2)]      0
_________________________________________________________________
lstm (LSTM)                   (None, 100)          41200
_________________________________________________________________
repeat_vector (RepeatVector)  (None, 3, 100)       0
_________________________________________________________________
lstm_1 (LSTM)                 (None, 3, 100)       80400
_________________________________________________________________
time_distributed (TimeDistri  (None, 3, 1)         101
=================================================================
If you want to keep the cell state from the encoder and reuse it in the decoder, you can do it with return_state=True. Check this question.
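As a sketch of that idea adapted to the shapes in the question (padded input of shape (None, 6, 48), output of shape (None, 23, 48)); the softmax output and categorical cross-entropy loss are assumptions, not taken from the question:
from tensorflow.keras import layers, models

num_actions = 48
output_timesteps = 23
units = 100

encoder_inputs = layers.Input(shape=(None, num_actions))
# return_state=True also returns the final hidden and cell states
encoder_out, state_h, state_c = layers.LSTM(units, return_state=True)(encoder_inputs)

decoder = layers.RepeatVector(output_timesteps)(encoder_out)
# initialise the decoder with the encoder's final states
decoder = layers.LSTM(units, return_sequences=True)(decoder, initial_state=[state_h, state_c])
out = layers.TimeDistributed(layers.Dense(num_actions, activation='softmax'))(decoder)

model = models.Model(encoder_inputs, out)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])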

While you don't have to keep them the same, you do need to add fully connected layers after the LSTM to change the dimensions, or use pooling layers such as MaxPool2D.
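One way to read this advice, as a rough untested sketch for the shapes in the question (the layer sizes and the softmax output are assumptions): encode the whole input with an LSTM, then let a Dense layer plus a Reshape produce the (23, 48) target directly.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(100, input_shape=(None, 48)),  # one 100-d vector per video
    layers.Dense(23 * 48),                     # map to the flattened target size
    layers.Reshape((23, 48)),                  # (batch, 23, 48)
    layers.Activation('softmax'),              # softmax over the 48 actions at each step
])
model.compile(loss='categorical_crossentropy', optimizer='adam')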

Related

Cannot resolve error in keras sequential model

I am working on a gesture recognition problem. For that I have a training set. The training set consists of multiple folders, and each folder contains a series of 30 images from which the model is trained. I also have a CSV file that contains the class label of each folder. The class labels are: "Left Swipe", "Right Swipe", "Stop", "Thumbs Down" and "Thumbs Up". Those labels are stored in one np.array variable, train_class. I have created a CNN model and am feeding it into a Sequential model.
The code is available at the GitHub location below:
https://github.com/subhrajyoti-ghosh/ML-and-Deep-Learning/blob/main/Gesture_Recognition.ipynb
But when I try to fit the model, I receive an error. Can you please help me understand the error and how to solve it?
You are trying to use a TimeDistributed layer on a 2D input (batch_size, 256), which will not work, because the layer needs at least a 3D tensor. You should try using tf.keras.layers.RepeatVector:
import tensorflow as tf
resnet = tf.keras.applications.ResNet50(include_top=False,weights='imagenet',input_shape=(224,224,3))
cnn = tf.keras.Sequential([resnet])
cnn.add(tf.keras.layers.Conv2D(64,(2,2),strides=(1,1)))
cnn.add(tf.keras.layers.Conv2D(16,(3,3),strides=(1,1)))
cnn.add(tf.keras.layers.Flatten())
inputs = tf.keras.layers.Input(shape=(224,224,3))
x = cnn(inputs)
x = tf.keras.layers.RepeatVector(n=30)(x)
x = tf.keras.layers.GRU(16,return_sequences=True)(x)
x = tf.keras.layers.GRU(8)(x)
outputs = tf.keras.layers.Dense(5,activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)
dummy_x = tf.random.normal((1, 224,224,3))
print(model.summary())
print(model(dummy_x))
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_14 (InputLayer) [(None, 224, 224, 3)] 0
sequential_6 (Sequential) (None, 256) 24121296
repeat_vector_2 (RepeatVect (None, 30, 256) 0
or)
gru_5 (GRU) (None, 30, 16) 13152
gru_6 (GRU) (None, 8) 624
dense_7 (Dense) (None, 5) 45
=================================================================
Total params: 24,135,117
Trainable params: 24,081,997
Non-trainable params: 53,120
_________________________________________________________________
None
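As a follow-up, training could then look roughly like the sketch below; x_train and y_train_onehot are placeholder names for the arrays built from the image folders and the CSV labels, and categorical_crossentropy assumes the five labels are one-hot encoded (use sparse_categorical_crossentropy for integer labels):
# hypothetical training step with placeholder array names
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train_onehot, epochs=10, batch_size=8, validation_split=0.2)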

Simple questions about LSTM networks from Keras

Considering this LSTM based RNN:
# Instantiating the model
model = Sequential()
# Input layer
model.add(LSTM(30, activation="softsign", return_sequences=True, input_shape=(30, 1)))
# Hidden layers
model.add(LSTM(12, activation="softsign", return_sequences=True))
model.add(LSTM(12, activation="softsign", return_sequences=True))
# Final Hidden layer
model.add(LSTM(10, activation="softsign"))
# Output layer
model.add(Dense(10))
Is each output unit of the final hidden layer connected to each of the 12 output units of the preceding hidden layer? (10 * 12 = 120 connections)
Is each of the 10 outputs of the Dense layer connected to each of the 10 units of the final hidden layer? (10 * 10 = 100 connections)
Would there be a difference in terms of connections between the Input layer and the 1st hidden layer if return_sequences were set to False (for both layers or for just one)?
Thanks a lot for your help
Aymeric
Here is how I picture the RNN, please tell me if it's wrong:
Note about the picture:
X = one training example, i.e. a vector of 30 bitcoin (BTC) values (each value represents one day, 30 days total)
Output vector = 10 values that are supposed to be the next 10 values of bitcoin (the next 10 days)
Let's take a look at the model summary:
_________________________________________________________________
Layer (type)                  Output Shape      Param #
=================================================================
lstm (LSTM)                   (None, 30, 30)    3840
_________________________________________________________________
lstm_1 (LSTM)                 (None, 30, 12)    2064
_________________________________________________________________
lstm_2 (LSTM)                 (None, 30, 12)    1200
_________________________________________________________________
lstm_3 (LSTM)                 (None, 10)        920
_________________________________________________________________
dense (Dense)                 (None, 10)        110
=================================================================
Total params: 8,134
Trainable params: 8,134
Non-trainable params: 0
_________________________________________________________________
Since you don't use return_sequences=True on the final LSTM layer, the default return_sequences=False applies, which means only the last output from that layer is used by the Dense layer.
Yes. But it is actually 110 connections, because there is also a bias: (10 + 1) * 10.
There would not. The difference between return_sequences=True and return_sequences=False is that when it is set to False, only the final output is sent to the next layer. So if I have time series data with 30 events (1, 30, 30), only the output from the 30th event is passed along to the next layer. The computations are the same, so there is no difference in weights. Do note that there might be some shape mismatches if you try to set some of these to False out of the box.
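As a quick sanity check of the parameter counts in the summary above: each LSTM layer has 4 * ((input_dim + units + 1) * units) weights (four gates, each with input weights, recurrent weights and a bias), and a Dense layer has (input_dim + 1) * units:
def lstm_params(input_dim, units):
    # 4 gates, each with input weights, recurrent weights and a bias
    return 4 * ((input_dim + units + 1) * units)

def dense_params(input_dim, units):
    return (input_dim + 1) * units

print(lstm_params(1, 30))    # 3840  (lstm)
print(lstm_params(30, 12))   # 2064  (lstm_1)
print(lstm_params(12, 12))   # 1200  (lstm_2)
print(lstm_params(12, 10))   # 920   (lstm_3)
print(dense_params(10, 10))  # 110   (dense)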

Keras time series prediction with CNN+LSTM model and TimeDistributed layer wrapper

I have several data files of human activity recognition data consisting of time-ordered rows of recorded raw samples. Each row has 8 columns of EMG sensor data and 1 corresponding column of target sensor data. I'm trying to feed the 8 channels of EMG sensor data into a CNN+LSTM deep model in order to predict the 1 channel of target data. I do this by breaking down a dataset (a in the image below) into 50-row windows of raw samples (b in the image below) and then reshaping these windows into blocks of 4 windows, to act as time steps for the LSTM part of the model (c in the image below). The following image will hopefully explain it better:
I've been following the tutorial here as to how to implement my model: https://medium.com/smileinnovation/how-to-work-with-time-distributed-data-in-a-neural-network-b8b39aa4ce00
I have reshaped the data and built the model but keep coming back to the following error that I cannot figure out how to resolve:
"ValueError: Error when checking target: expected FC_out to have 2 dimensions, but got array with shape (808, 50, 1)"
My code follows and is written in Python using Keras and Tensorflow:
from keras.models import Sequential
from keras.layers import CuDNNLSTM
from keras.layers.convolutional import Conv2D
from keras.layers.core import Dense, Dropout
from keras.layers import Flatten
from keras.layers import TimeDistributed
# Code that reads in file data and shapes it into 4-window blocks omitted; that code produces the arrays below
# (the `activation` string used further down is assumed to be defined in the omitted code as well):
#x_train - shape of (808, 4, 50, 8) which equates to (samples, time steps, window length, number of channels)
#x_valid - shape of (223, 4, 50, 8) which equates to the same as x_train
#y_train - shape of (808, 50, 1) which equates to (samples, window length, number of target channels)
# Followed machine learning mastery style for ease of reading
numSteps = x_train.shape[1]
windowLength = x_train.shape[2]
numChannels = x_train.shape[3]
numOutputs = 1
# Reshape x data for use with TimeDistributed wrapper, adding extra dimension at the end
x_train = x_train.reshape(x_train.shape[0], numSteps, windowLength, numChannels, 1)
x_valid = x_valid.reshape(x_valid.shape[0], numSteps, windowLength, numChannels, 1)
# Build model
model = Sequential()
model.add(TimeDistributed(Conv2D(64, (3,3), activation=activation, name="Conv2D_1"),
                          input_shape=(numSteps, windowLength, numChannels, 1)))
model.add(TimeDistributed(Conv2D(64, (3,3), activation=activation, name="Conv2D_2")))
model.add(Dropout(0.4, name="CNN_Drop_01"))
# Flatten for passing to LSTM layer
model.add(TimeDistributed(Flatten(name="Flatten_1")))
# LSTM and Dropout
model.add(CuDNNLSTM(28, return_sequences=True, name="LSTM_01"))
model.add(Dropout(0.4, name="Drop_01"))
# Second LSTM and Dropout
model.add(CuDNNLSTM(28, return_sequences=False, name="LSTM_02"))
model.add(Dropout(0.3, name="Drop_02"))
# Fully Connected layer and further Dropout
model.add(Dense(16, activation=activation, name="FC_1"))
model.add(Dropout(0.4)) # For example, for 3 outputs classes
# Final fully Connected layer specifying outputs
model.add(Dense(numOutputs, activation=activation, name="FC_out"))
# Compile model, produce summary and save model image to file
# NOTE: coeffDetermination refers to a function for calculating R2 and is not included in this code
model.compile(optimizer='Adam', loss='mse', metrics=[coeffDetermination])
# Now train the model
history_cb = model.fit(x_train, y_train, validation_data=(x_valid, y_valid), epochs=30, batch_size=64)
I'd be grateful if anyone can figure out what I've done wrong. Or am I just going about this the incorrect way, with trying to use this model configuration for time series prediction?
"ValueError: Error when checking target: expected FC_out to have 2 dimensions, but got array with shape (808, 50, 1)"
Your input is (808, 4, 50, 8, 1) and your output is (808, 50, 1).
However, model.summary() shows that the model's output shape is (None, 4, 1).
Since the number of time steps is 4, y_train should be something like (808, 4, 1).
Or, if you want to keep the targets as (808, 50, 1), you need to change the model so that its final output shape is (None, 50, 1).
Model: "sequential_10"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
time_distributed_18 (TimeDis (None, 4, 48, 6, 64) 640
_________________________________________________________________
time_distributed_19 (TimeDis (None, 4, 46, 4, 64) 36928
_________________________________________________________________
CNN_Drop_01 (Dropout) (None, 4, 46, 4, 64) 0
_________________________________________________________________
time_distributed_20 (TimeDis (None, 4, 11776) 0
_________________________________________________________________
LSTM_01 (LSTM) (None, 4, 28) 1322160
_________________________________________________________________
Drop_01 (Dropout) (None, 4, 28) 0
_________________________________________________________________
Drop_02 (Dropout) (None, 4, 28) 0
_________________________________________________________________
FC_1 (Dense) (None, 4, 16) 464
_________________________________________________________________
dropout_3 (Dropout) (None, 4, 16) 0
_________________________________________________________________
FC_out (Dense) (None, 4, 1) 17
=================================================================
Total params: 1,360,209
Trainable params: 1,360,209
Non-trainable params: 0
For many-to-many sequence prediction with different sequence lengths, check this link: https://github.com/keras-team/keras/issues/6063. The snippet below (written against the older Keras 1 API) illustrates the idea:
dataX or input : (nb_samples, nb_timesteps, nb_features) -> (1000, 50, 1)
dataY or output: (nb_samples, nb_timesteps, nb_features) -> (1000, 10, 1)
model = Sequential()
model.add(LSTM(input_dim=1, output_dim=hidden_neurons, return_sequences=False))
model.add(RepeatVector(10))
model.add(LSTM(output_dim=hidden_neurons, return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.add(Activation('linear'))
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['accuracy'])
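A rough tf.keras equivalent of the same encoder/RepeatVector/decoder idea, with hidden_neurons as a placeholder value (the original snippet leaves it unspecified), might look like this:
from tensorflow.keras import layers, models

hidden_neurons = 64  # placeholder; choose to suit the data

model = models.Sequential([
    layers.LSTM(hidden_neurons, input_shape=(50, 1)),    # encode the 50-step input
    layers.RepeatVector(10),                             # repeat once per output step
    layers.LSTM(hidden_neurons, return_sequences=True),  # decode a 10-step sequence
    layers.TimeDistributed(layers.Dense(1)),             # one value per output step
])
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['accuracy'])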

how does input_shape in keras.applications work?

I have been through the Keras documentation but I am still unable to figure out how the input_shape parameter works and why it does not change the number of parameters of my DenseNet model when I pass it my custom input shape. An example:
import keras
from keras import applications
from keras.layers import Conv3D, MaxPool3D, Flatten, Dense
from keras.layers import Dropout, Input, BatchNormalization
from keras import Model
# define model 1
INPUT_SHAPE = (224, 224, 1) # used to define the input size to the model
n_output_units = 2
activation_fn = 'sigmoid'
densenet_121_model = applications.densenet.DenseNet121(include_top=False, weights=None, input_shape=INPUT_SHAPE, pooling='avg')
inputs = Input(shape=INPUT_SHAPE, name='input')
model_base = densenet_121_model(inputs)
output = Dense(units=n_output_units, activation=activation_fn)(model_base)
model = Model(inputs=inputs, outputs=output)
model.summary()
_________________________________________________________________
Layer (type)            Output Shape           Param #
=================================================================
input (InputLayer)      (None, 224, 224, 1)    0
_________________________________________________________________
densenet121 (Model)     (None, 1024)           7031232
_________________________________________________________________
dense_1 (Dense)         (None, 2)              2050
=================================================================
Total params: 7,033,282
Trainable params: 6,949,634
Non-trainable params: 83,648
_________________________________________________________________
# define model 2
INPUT_SHAPE = (512, 512, 1) # used to define the input size to the model
n_output_units = 2
activation_fn = 'sigmoid'
densenet_121_model = applications.densenet.DenseNet121(include_top=False, weights=None, input_shape=INPUT_SHAPE, pooling='avg')
inputs = Input(shape=INPUT_SHAPE, name='input')
model_base = densenet_121_model(inputs)
output = Dense(units=n_output_units, activation=activation_fn)(model_base)
model = Model(inputs=inputs, outputs=output)
model.summary()
_________________________________________________________________
Layer (type)            Output Shape           Param #
=================================================================
input (InputLayer)      (None, 512, 512, 1)    0
_________________________________________________________________
densenet121 (Model)     (None, 1024)           7031232
_________________________________________________________________
dense_2 (Dense)         (None, 2)              2050
=================================================================
Total params: 7,033,282
Trainable params: 6,949,634
Non-trainable params: 83,648
_________________________________________________________________
I would expect the number of parameters to increase with a larger input shape, but as you can see it stays exactly the same. My questions are thus:
Why does the number of parameters not change with a change in the input_shape?
I have defined only one channel in my input_shape; what would happen to my model training in this scenario? The documentation says the following:
input_shape: optional shape tuple, only to be specified if include_top
is False (otherwise the input shape has to be (224, 224, 3) (with
'channels_last' data format) or (3, 224, 224) (with 'channels_first'
data format). It should have exactly 3 inputs channels, and width and
height should be no smaller than 32. E.g. (200, 200, 3) would be one
valid value.
However when I run the model with this configuration it runs without any problems. Could there be something that I am missing out?
Using Keras 2.2.4 with Tensorflow 1.12.0 as backend.
1.
In the convolutional layers the input size does not influence the number of weights, because the number of weights is determined by the kernel dimensions and the number of channels. A larger input size leads to a larger output size, but not to more weights.
This means that the output size of the convolutional layers of the second model will be larger than for the first model, which would increase the number of weights in a following dense layer. However, if you take a look at the architecture of DenseNet, you notice that (because you passed pooling='avg') there is a GlobalAveragePooling2D layer after all the convolutional layers, which averages all the values for each output channel. That's why the output of DenseNet will be of size 1024, whatever the input shape.
2.
Yes, the model will still work. Since you pass weights=None, the first convolutional layer is simply built with a single input channel to match your input shape, so no channel duplication is needed; the "exactly 3 input channels" requirement quoted from the documentation is mainly relevant when loading the pretrained ImageNet weights.
The DenseNet here is composed of two parts: the convolutional part and the global pooling part.
The number of trainable weights in the convolutional part doesn't depend on the input shape.
Usually a classification network would employ fully connected layers to infer the classification; here, however, global pooling is used (include_top=False, pooling='avg'), and it doesn't add any trainable weights.
Therefore, the input shape doesn't affect the number of weights of the entire network.
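If you want to verify this directly, here is a small sketch (reusing the DenseNet121 call from the question) that builds the backbone for both input shapes and compares the parameter counts:
from keras import applications

for shape in [(224, 224, 1), (512, 512, 1)]:
    m = applications.densenet.DenseNet121(include_top=False, weights=None,
                                          input_shape=shape, pooling='avg')
    print(shape, m.count_params())  # the same count is printed for both shapes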

Multi Step Forecast LSTM model

I am trying to implement a multi-step forecasting LSTM model in Keras. The shapes of the data are as follows:
X : (5831, 48, 1)
y : (5831, 1, 12)
The model that I am trying to use is:
power_in = Input(shape=(X.shape[1], X.shape[2]))
power_lstm = LSTM(50, recurrent_dropout=0.4128,
                  dropout=0.412563, kernel_initializer=power_lstm_init, return_sequences=True)(power_in)
main_out = TimeDistributed(Dense(12, kernel_initializer=power_lstm_init))(power_lstm)
While trying to train the model like this:
hist = forecaster.fit([X], y, epochs=325, batch_size=16, validation_data=([X_valid], y_valid), verbose=1, shuffle=False)
I am getting the following error:
ValueError: Error when checking target: expected time_distributed_16 to have shape (48, 12) but got array with shape (1, 12)
How to fix this?
According to your comment:
[The] data i have is like t-48, t-47, t-46, ..... , t-1 as the past data and
t+1, t+2, ......, t+12 as the values that I want to forecast
you may not need to use a TimeDistributed layer at all:
First, just remove the return_sequences=True argument of the LSTM layer. After doing that, the LSTM layer encodes the past input timeseries into a vector of shape (50,). Now you can feed it directly to a Dense layer with 12 units:
# make sure the labels are of shape (num_samples, 12)
y = np.reshape(y, (-1, 12))

power_in = Input(shape=X.shape[1:])
power_lstm = LSTM(50, recurrent_dropout=0.4128,
                  dropout=0.412563,
                  kernel_initializer=power_lstm_init)(power_in)
main_out = Dense(12, kernel_initializer=power_lstm_init)(power_lstm)
Alternatively, if you would like to use a TimeDistributed layer, and considering that the output is a sequence itself, we can explicitly enforce this temporal dependency in the model by using another LSTM layer before the Dense layer (with the addition of a RepeatVector layer after the first LSTM layer to make its output a timeseries of length 12, i.e. the same length as the output timeseries):
# make sure the labels are of shape (num_samples, 12, 1)
y = np.reshape(y, (-1, 12, 1))

power_in = Input(shape=(48, 1))
power_lstm = LSTM(50, recurrent_dropout=0.4128,
                  dropout=0.412563,
                  kernel_initializer=power_lstm_init)(power_in)
rep = RepeatVector(12)(power_lstm)
out_lstm = LSTM(32, return_sequences=True)(rep)
main_out = TimeDistributed(Dense(1))(out_lstm)
model = Model(power_in, main_out)
model.summary()
Model summary:
_________________________________________________________________
Layer (type)                  Output Shape      Param #
=================================================================
input_3 (InputLayer)          (None, 48, 1)     0
_________________________________________________________________
lstm_3 (LSTM)                 (None, 50)        10400
_________________________________________________________________
repeat_vector_2 (RepeatVecto  (None, 12, 50)    0
_________________________________________________________________
lstm_4 (LSTM)                 (None, 12, 32)    10624
_________________________________________________________________
time_distributed_1 (TimeDist  (None, 12, 1)     33
=================================================================
Total params: 21,057
Trainable params: 21,057
Non-trainable params: 0
_________________________________________________________________
Of course, in both models you may need to tune the hyper-parameters (e.g. the number of LSTM layers, the dimension of the LSTM layers, etc.) to be able to compare them fairly and achieve good results.
Side note: actually, in your scenario you don't need to use a TimeDistributed layer at all, because (currently) the Dense layer is applied on the last axis. Therefore, TimeDistributed(Dense(...)) and Dense(...) are equivalent here.
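A small sketch illustrating that side note (both variants give the same output shape and the same number of parameters):
from tensorflow.keras import layers, models

inp = layers.Input(shape=(12, 32))
td = models.Model(inp, layers.TimeDistributed(layers.Dense(1))(inp))
plain = models.Model(inp, layers.Dense(1)(inp))

print(td.output_shape, td.count_params())        # (None, 12, 1) 33
print(plain.output_shape, plain.count_params())  # (None, 12, 1) 33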
