I'm trying to create a Keras LSTM to predict time series. My x_train is shaped (3000, 15, 10) (examples, timesteps, features), y_train is shaped (3000, 15, 1), and I'm trying to build a many-to-many model (10 input features per timestep produce 1 output per timestep).
The code I'm using is this:
model = Sequential()
model.add(LSTM(10, input_shape=(15, 10), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(100, return_sequences=True))
model.add(Dropout(0.2))
model.add(Dense(1, activation='linear'))
model.compile(loss="mse", optimizer="rmsprop")
model.fit(X_train, y_train, batch_size=512, nb_epoch=1, validation_split=0.05)
However, I can't fit the model when using:
model.add(Dense(1, activation='linear'))
>> Error when checking model target: expected dense_1 to have 2 dimensions, but got array with shape (3000, 15, 1)
or when formatting it this way:
model.add(Dense(1))
model.add(Activation("linear"))
>> Error when checking model target: expected activation_1 to have 2 dimensions, but got array with shape (3000, 15, 1)
I already tried flattening the model ( model.add(Flatten()) ) before adding the dense layer but that just gives me ValueError: Input 0 is incompatible with layer flatten_1: expected ndim >= 3, found ndim=2. This confuses me because I think my data actually is 3 dimensional, isn't it?
The code originated from https://github.com/Vict0rSch/deep_learning/tree/master/keras/recurrent
In case of Keras < 2.0: you need to wrap the Dense layer in a TimeDistributed wrapper in order to apply it element-wise (once per timestep) to the sequence.
In case of Keras >= 2.0: the Dense layer is applied element-wise by default, so no wrapper is needed.
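For the Keras < 2.0 case, a minimal sketch of the wrapped final layer (same architecture as your model, only the last layer changes; this is my illustration, not code from the question):

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, TimeDistributed

model = Sequential()
model.add(LSTM(10, input_shape=(15, 10), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(100, return_sequences=True))
model.add(Dropout(0.2))
# TimeDistributed applies the Dense layer to every timestep of the (15, 100) sequence
model.add(TimeDistributed(Dense(1, activation='linear')))
model.compile(loss="mse", optimizer="rmsprop")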
Since you updated your Keras version and your error messages changed, here is what works on my machine (Keras 2.0.x).
This works:
model = Sequential()
model.add(LSTM(10,input_shape=(15, 10), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM( 100, return_sequences=True))
model.add(Dropout(0.2))
model.add(Dense(1, activation='linear'))
This also works:
model = Sequential()
model.add(LSTM(10,input_shape=(15, 10), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM( 100, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(1,return_sequences=True, activation='linear'))
Testing with:
import numpy as np

x = np.ones((3000, 15, 10))
y = np.ones((3000, 15, 1))
Compiling and training with:
model.compile(optimizer='adam',loss='mse')
model.fit(x,y,epochs=4,verbose=2)
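As a quick sanity check (my addition, not part of the original answer), the output shape of either stack should match y:

print(model.output_shape)        # (None, 15, 1)
print(model.predict(x).shape)    # (3000, 15, 1)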
I have a CNN-LSTM that looks as follows:
SEQUENCE_LENGTH = 32
BATCH_SIZE = 32
EPOCHS = 30
n_filters = 64
n_kernel = 1
n_subsequences = 4
n_steps = 8
def DNN_Model(X_train):
    model = Sequential()
    model.add(TimeDistributed(
        Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu',
               input_shape=(n_subsequences, n_steps, X_train.shape[3]))))
    model.add(TimeDistributed(Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu')))
    model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
    model.add(TimeDistributed(Flatten()))
    model.add(LSTM(100, activation='relu'))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mse', optimizer='adam')
    return model
I'm using this CNN-LSTM for a multivariate time series forecasting problem. The CNN-LSTM input data comes in 4D format: [samples, subsequences, timesteps, features]. For some reason I need the TimeDistributed layers, or I get errors like ValueError: Input 0 of layer conv1d is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 4, 8, 35]. I think this is because Conv1D expects 3D input, so to preserve the extra subsequence dimension it has to be applied through a wrapper layer like TimeDistributed. I don't really mind using TimeDistributed layers; they're wrappers, and if they make my model work I'm happy. However, when I try to visualize my model with
file = 'CNN_LSTM_Visualization.png'
tf.keras.utils.plot_model(model, to_file=file, show_layer_names=False, show_shapes=False)
The resulting visualization only shows the Sequential():
I suspect this has to do with the TimeDistributed layers and the model not being built yet. I cannot call model.summary() either; it throws ValueError: This model has not yet been built. Build the model first by calling build() or calling fit() with some data, or specify an input_shape argument in the first layer(s) for automatic build. This is strange, because I have specified the input_shape, albeit in the Conv1D layer and not in the TimeDistributed wrapper.
I would like a working model together with a working tf.keras.utils.plot_model call. Any explanation as to why I need TimeDistributed and why it makes plot_model behave strangely would be greatly appreciated.
An alternative to using an Input layer is to simply pass the input_shape to the TimeDistributed wrapper, and not the Conv1D layer:
def DNN_Model(X_train):
    model = Sequential()
    model.add(TimeDistributed(
        Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu'),
        input_shape=(n_subsequences, n_steps, X_train.shape[3])))
    model.add(TimeDistributed(Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu')))
    model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
    model.add(TimeDistributed(Flatten()))
    model.add(LSTM(100, activation='relu'))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mse', optimizer='adam')
    return model
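With the input_shape on the wrapper, the model is built as soon as the layers are added, so summary and plot_model should work. A rough usage sketch (the dummy array and the 35-feature count are my assumptions, taken from the shape [None, 4, 8, 35] reported in the question):

import numpy as np
import tensorflow as tf

dummy_X = np.zeros((1, n_subsequences, n_steps, 35))   # only the last dimension (features) is read
model = DNN_Model(dummy_X)
model.summary()
tf.keras.utils.plot_model(model, to_file='CNN_LSTM_Visualization.png', show_shapes=True)  # needs pydot + graphviz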
Add an input layer at the beginning. Try this:
def DNN_Model(X_train):
    model = Sequential()
    # note: here X_train is passed in as the number of features, see DNN_Model(3) below
    model.add(InputLayer(input_shape=(n_subsequences, n_steps, X_train)))
    model.add(TimeDistributed(Conv1D(filters=n_filters, kernel_size=n_kernel,
                                     activation='relu')))
    model.add(TimeDistributed(Conv1D(filters=n_filters,
                                     kernel_size=n_kernel, activation='relu')))
    model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
    ....
Now, you can plot and get a summary properly.
DNN_Model(3).summary() # OK
tf.keras.utils.plot_model(DNN_Model(3)) # OK
I created a CNN-LSTM for survival prediction of web sessions. My training data looks as follows:
print(x_train.shape)
(288, 3, 393)
with (samples, timesteps, features) and my model:
model = Sequential()
model.add(TimeDistributed(Conv1D(128, 5, activation='relu'),
input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(TimeDistributed(MaxPooling1D()))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(64, stateful=True, return_sequences=True))
model.add(LSTM(16, stateful=True))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=Adam(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
However, the TimeDistributed layer requires a minimum of 3 dimensions. How should I transform the data to get it to work?
Thanks a lot!
Your data are already in 3D format, which is all you need to feed a Conv1D or an LSTM. If your target is 2D, remember to set return_sequences=False in your last LSTM cell.
Using a Flatten before an LSTM is a mistake, because it destroys the 3D structure.
Also pay attention to the pooling operation, so that you don't shrink the time dimension down to nothing (I use 'same' padding in the convolution below to avoid this).
Below is an example for a binary classification task:
import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

n_sample, time_step, n_features = 288, 3, 393
X = np.random.uniform(0, 1, (n_sample, time_step, n_features))
y = np.random.randint(0, 2, n_sample)

model = Sequential()
model.add(Conv1D(128, 5, padding='same', activation='relu',
                 input_shape=(time_step, n_features)))
model.add(MaxPooling1D())
model.add(LSTM(64, return_sequences=True))
model.add(LSTM(16, return_sequences=False))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=3)
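Conversely, if the target were per-timestep (3D, one label per timestep), a hedged variant of the same example would keep return_sequences=True in the last LSTM and drop the pooling so the output length still matches time_step (this variant is my illustration, not part of the original answer):

y_seq = np.random.randint(0, 2, (n_sample, time_step, 1))   # one label per timestep

model = Sequential()
model.add(Conv1D(128, 5, padding='same', activation='relu',
                 input_shape=(time_step, n_features)))
model.add(LSTM(64, return_sequences=True))
model.add(LSTM(16, return_sequences=True))    # keep the sequence for a 3D target
model.add(Dense(1, activation='sigmoid'))     # applied per timestep
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y_seq, epochs=3)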
I'm getting a list of images to train my CNN.
model = Sequential()
model.add(Dense(32, activation='tanh', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
data, labels = ReadImages(TRAIN_DIR)
# Train the model, iterating on the data in batches of 32 samples
model.fit(np.array(data), np.array(labels), epochs=10, batch_size=32)
But I faced this error:
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected dense_1_input to have 2 dimensions, but got array with shape (391, 605, 700, 3)
You are feeding whole images to a Dense layer. Either flatten the images first (e.g. with .flatten() / reshape) or use a model with CNN layers. The shape (391, 605, 700, 3) means you have 391 images of size 605x700 with 3 channels (RGB).
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(605, 700, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
This link has good explanations for basic CNN.
This is not a CNN. A convolutional neural network is defined by having Conv layers. Those layers work with 4D input shapes (batch size, image dim X, image dim Y, color channels), whereas the Dense (fully connected) layers you are using take 2D input (batch size, data as a vector).
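If you really do want to keep only Dense layers, a minimal sketch of the flattening route (my illustration of the ".flatten()" suggestion above, reusing the data and labels from ReadImages):

import numpy as np

X = np.array(data).reshape(len(data), -1)   # (391, 605*700*3): each image becomes one long vector
y = np.array(labels)

model = Sequential()
model.add(Dense(32, activation='tanh', input_dim=605 * 700 * 3))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=32)

That first Dense layer alone has over 40 million weights, which is why the Conv2D approach above is usually the better choice.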
You need to flatten the images first if you want to pass them directly to Dense layers, since a Dense layer only takes 2D input, and because you are passing whole images the array has 4 dimensions, i.e. number of images x height x width x number of channels (391, 605, 700, 3).
You are not actually doing any convolutions on the image. To do convolutions you need to add CNN layers after initialising the model as Sequential.
To add a Dense layer only (the images must be flattened first, and the input size has to match the flattened image, not 100):
model = Sequential()
model.add(Flatten(input_shape=(605, 700, 3)))   # each image becomes a vector of length 605*700*3
model.add(Dense(32, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
To add a CNN layer and then flatten:
model = Sequential()
model.add(Conv2D(input_shape=(605, 700, 3), filters=64, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(Flatten())
model.add(Dense(32, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
Could someone help me to understand what this error is all about?
model = Sequential()
model.add(Embedding(82, 100, weights=[embedding_matrix], input_length=1000))
model.add(LSTM(100))
model.add(Dense(100, activation = 'sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(x_train, y_train, epochs = 5, batch_size=64)
When I run this LSTM model, I get this error:
ValueError: Error when checking model target: expected dense_16 to have shape (None, 100) but got array with shape (16, 2)
I am not sure how much the below information would be useful:
x_train.shape
Out[959]: (16, 1000)
y_train.shape
Out[962]: (16, 2)
If you need any other information, I am ready to provide it.
You have defined the Dense layer's output size as 100:
model.add(Dense(100, activation = 'sigmoid'))
so you need to make sure your data always has a matching shape; in your case, make sure x_train and y_train have the shapes the model expects.
Try with:
model = Sequential()
# here the batch dimension is None,
# which means any batch size will be accepted by the model.
model.add(Dense(32, batch_input_shape=(None, 500)))
model.add(Dense(32))
Your last layer has an output shape of (None, 100):
model.add(Dense(100, activation = 'sigmoid'))
But your data (y_train) has the shape (16, 2), so the last layer should be:
model.add(Dense(2, activation = 'sigmoid'))
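Putting that together, a minimal sketch of the corrected model (assuming the same embedding setup and pretrained embedding_matrix from the question):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(82, 100, weights=[embedding_matrix], input_length=1000))
model.add(LSTM(100))
model.add(Dense(2, activation='sigmoid'))   # 2 units to match the (16, 2) targets
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=64)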
I am new to Keras and I am trying to make a model for classification. This is my model:
model = Sequential()
model.add(Dense(86, activation='sigmoid', input_dim=21))
model.add(Dense(50, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
but it keeps giving me this error:
ValueError: Error when checking target: expected dense_3 to have shape (None, 1) but got array with shape (17268, 2)
Now I know that I need to encode my labels using one hot encoding and flatten them, so I've done that too.
oht_y_train = np_utils.to_categorical(y_train, num_classes=3)
oht_y_train = np.ndarray.flatten(oht_y_train)
But I still get the same error.
NOTE: Before I flattened the labels I got the same error, just the shape was (5765, 3)
I have also printed the shape of the labels array, it gives me (17268,)
Your labels should not be one-hot encoded if your final layer has an output dimension of 1 (for binary classification). If you have several classes, you should use one-hot encoding and a categorical_crossentropy loss function, but your final output layer should have dimension 3, i.e. Dense(3), where 3 is the number of classes. You should not be flattening the labels after they are encoded.
model = Sequential()
model.add(Dense(86, activation='sigmoid', input_dim=21))
model.add(Dense(50, activation='sigmoid'))
model.add(Dense(3, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
model.fit(X_data, Y_one_hot_encoded)  # here your labels have shape (data_size, 3)
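For the encoding step itself, a short sketch using np_utils.to_categorical as in your question, just without the flatten:

from keras.utils import np_utils

Y_one_hot_encoded = np_utils.to_categorical(y_train, num_classes=3)   # shape (17268, 3); do not flatten it
model.fit(X_data, Y_one_hot_encoded)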
If you only need to perform binary classification, then it is better to use a binary_crossentropy loss and an output dimension of 1, i.e. Dense(1) with a sigmoid activation to squash the output between 0 and 1.
model = Sequential()
model.add(Dense(86, activation='sigmoid', input_dim=21))
model.add(Dense(50, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['accuracy'])
model.fit(X_data, Y_labels) # here your labels have shape (data_size,).