Input of 3D array into Sequential model Keras (Python)

I have a training input with shape (8, 50, 3).
I am trying to pass it as the input to a Sequential model in Keras. Looking up the documentation, I found that this should work:
model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(50,3)))
model.add(Dense(100,init="uniform", activation='sigmoid'))
model.add(Dense(50,init="uniform", activation='relu'))
model.add(Dense(output_dim=1))
model.compile(optimizer='rmsprop',loss='categorical_crossentropy',metrics=['accuracy'])
When I try to train this model:
model.fit(train,labelTrain,epochs=1,batch_size=1,verbose=1)
I get the following error:
Error when checking model target: expected dense_148 to have 3 dimensions, but got array with shape (8, 1)
What does this error mean?
Also, my original goal was to pass a 3D array whose middle dimension did not have a fixed size, but I gave up after finding that impossible. Could that work?

"Target" means the expected result, so the problem is in labelTrain, not in the input.
A Dense layer takes a number of neurons. You don't pass it an output shape; you pass the number of neurons, and the output shape is automatically (None, neurons).
Your last layer should be:
model.add(Dense(1, activation='I recommend an activation here'))
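One way to make the shapes agree is to flatten before the final layer, so the model emits one value per sample. A minimal sketch, assuming one binary label per sample (hypothetical random arrays stand in for train and labelTrain, and the loss is switched to binary_crossentropy to match a single sigmoid output):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Flatten

train = np.random.random((8, 50, 3))           # hypothetical stand-in data
labelTrain = np.random.randint(0, 2, (8, 1))   # one binary label per sample

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(50, 3)))  # output: (None, 50, 100)
model.add(Dense(50, activation='relu'))                        # output: (None, 50, 50)
model.add(Flatten())                                           # output: (None, 2500)
model.add(Dense(1, activation='sigmoid'))                      # output: (None, 1), matches labelTrain
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train, labelTrain, epochs=1, batch_size=1, verbose=1)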

Related

Python Keras Sequential model input

I have an array for a time-series sliding-window approach to machine-learning forecasting with tf.keras:
X.shape
(8779, 6, 1)
which I am trying to fit with this MLP model:
# define model
model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(6,)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
Could anyone give me a tip on how to correct this model input?
input_shape=(6,)
I can't figure out how to get past this error:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 6 but received input with shape (None, 6, 1)
Even though it was solved by a recommendation in the comments, here is the solution:
Changing:
input_shape=(6,)
Into:
input_shape=(6,1)
worked.
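For reference, a runnable sketch of the fixed setup, with hypothetical random arrays standing in for X and y. The Flatten layer is an addition beyond the accepted change; it collapses the per-timestep outputs so the model emits one prediction per window:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

X = np.random.random((8779, 6, 1))   # hypothetical stand-in for the sliding windows
y = np.random.random((8779, 1))      # hypothetical one-value-per-window targets

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(6, 1)))  # now accepts (None, 6, 1)
model.add(Flatten())                                          # (None, 600)
model.add(Dense(1))                                           # (None, 1), one prediction per window
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=1, verbose=0)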

How can I implement a 1D CNN in front of my LSTM network

At the moment I reshape my X_train like this:
X_train = input.reshape(1,1,12)
model = Sequential()
model.add(LSTM(100,input_shape=(1, 12)))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(9, activation='sigmoid'))
But now I am thinking of implementing a 1D CNN in front of this LSTM layer. Does anybody know how this should be done?
Keras has a keras.layers.Conv1D layer (see the docs) that you can apply to your network input.
If your input is of shape (1, 1, 12) and you apply K filters, you'll get an output of shape (1, 1, K), so you might want to reshape it to (1, 12, 1) to put the timesteps in the 2nd position (check the docs).
Note that model.summary() might help you debug the input and output shapes of your network.
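A minimal sketch of that idea, assuming 12 timesteps with 1 feature each (so the input would be reshaped to (samples, 12, 1) rather than (1, 1, 12)); the filter count and kernel size here are arbitrary placeholders:
from keras.models import Sequential
from keras.layers import Conv1D, LSTM, Dense, Dropout

model = Sequential()
# slide 64 filters of width 3 over the 12 timesteps; padding='same' keeps the length
model.add(Conv1D(64, kernel_size=3, padding='same', activation='relu',
                 input_shape=(12, 1)))   # output: (None, 12, 64)
model.add(LSTM(100))                     # output: (None, 100)
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(9, activation='sigmoid'))
model.summary()                          # inspect input/output shapes layer by layer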

How to get one output for several timesteps in Keras LSTM?

I want to classify a time frame of data, so for example for every 5 inputs there is one output. But my code refuses to accept my output.
model = Sequential()
model.add(GRU(32, input_shape=(TimeStep.TIME_STEP + 1, 10), return_sequences=True, activation='relu'))
model.add(GRU(64, activation='relu', return_sequences=True))
model.add(Dense(2, activation='hard_sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[categorical_accuracy])
history = model.fit(TimeStep.fodder, TimeStep.target, epochs=50)
The error:
ValueError: Error when checking target: expected dense_1 to have shape (5, 2) but got array with shape (31057, 2)
My data does have 31057 data points, each consisting of 5 sequential steps.
The return_sequences param in the GRU layer instructs the model to return the state at each time step rather than the final activation.
If you set that flag to False in the second GRU, your model will return the shape that you expect.
Tip: use model.summary() to display the output shapes of your layers.
For a model with a categorical loss, you want the output-layer activation to be a softmax, not a sigmoid.
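Putting both fixes together, a minimal sketch of the corrected model, assuming (as in the question) windows of 5 timesteps with 10 features each:
from keras.models import Sequential
from keras.layers import GRU, Dense

model = Sequential()
model.add(GRU(32, input_shape=(5, 10), return_sequences=True, activation='relu'))  # (None, 5, 32)
model.add(GRU(64, activation='relu'))       # return_sequences=False -> (None, 64)
model.add(Dense(2, activation='softmax'))   # one prediction per window: (None, 2)
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['categorical_accuracy'])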

Keras - Wrong input shape in LSTM dense layer

I am trying to build an LSTM text classifier using Keras.
This is the model structure:
model_word2vec = Sequential()
model_word2vec.add(Embedding(input_dim=vocabulary_dimension,
                             output_dim=embedding_dim,
                             weights=[word2vec_weights],
                             input_length=longest_sentence,
                             mask_zero=True,
                             trainable=False))
model_word2vec.add(LSTM(units=embedding_dim, dropout=0.25, recurrent_dropout=0.25, return_sequences=True))
model_word2vec.add(Dense(3, activation='softmax'))
model_word2vec.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
results = model_word2vec.fit(X_tr_word2vec, y_tr_word2vec, validation_split=0.16, epochs=3, batch_size=128, verbose=0)
Here y_tr_word2vec is a one-hot encoded target with 3 classes.
When I run the code above, I get this error:
ValueError: Error when checking model target: expected dense_2 to have 3 dimensions, but got array with shape (15663, 3)
I suppose that the issue could be about y_tr_word2vec shape or the batch size dimension, but I'm not sure.
Update:
I have changed to return_sequences=False, converted y_tr_word2vec from one-hot encoding to integer class labels, used 1 neuron in the dense layer, and I am now using sparse_categorical_crossentropy instead of categorical_crossentropy.
Now, I get this error: ValueError: invalid literal for int() with base 10: 'countess'.
Therefore now I suppose that, during fit(), something goes wrong with the input vector X_tr_word2vec, which contains the sentences.
The problem is this code
model_word2vec.add(LSTM(units=dim_embedding, dropout=0.25, recurrent_dropout=0.25, return_sequences=True))
model_word2vec.add(Dense(3, activation='softmax'))
You have set return_sequences=True, which means the LSTM will return a 3D array to the Dense layer, whereas Dense does not expect 3D data, so delete return_sequences=True:
model_word2vec.add(LSTM(units=dim_embedding, dropout=0.25, recurrent_dropout=0.25))
model_word2vec.add(Dense(3, activation='softmax'))
Why did you set return_sequences=True?
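For completeness, a sketch of the model with that change applied; the vocabulary size, embedding dimension, sentence length, and weight matrix below are hypothetical stand-ins for the question's variables:
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

vocabulary_dimension = 10000   # hypothetical value
embedding_dim = 100            # hypothetical value
longest_sentence = 50          # hypothetical value
word2vec_weights = np.random.random((vocabulary_dimension, embedding_dim))  # hypothetical matrix

model_word2vec = Sequential()
model_word2vec.add(Embedding(input_dim=vocabulary_dimension,
                             output_dim=embedding_dim,
                             weights=[word2vec_weights],
                             input_length=longest_sentence,
                             mask_zero=True,
                             trainable=False))
model_word2vec.add(LSTM(units=embedding_dim, dropout=0.25, recurrent_dropout=0.25))  # (None, 100)
model_word2vec.add(Dense(3, activation='softmax'))  # (None, 3), matches one-hot targets
model_word2vec.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])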

LSTM Keras target size error

# x_train.shape = (7, 5, 5) numpy array
# y_train.shape = (3, 5, 5) numpy array
# x_test.shape = (7,) numpy array
# y_test.shape = (3,) numpy array
My output is binary, 0 or 1.
timesteps = 5
data_dim = 5
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(timesteps, data_dim)))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32))
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=1)
score = model.evaluate(x_test, y_test, batch_size=1)
ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (3, 5, 5)
I am trying to build an LSTM model on random data, and this error occurs. I have tried many things but could not succeed.
Thanks in advance.
There are a few problems/misunderstandings here.
You can see that your y is actually 3-dimensional. However, in the last LSTM layer return_sequences is False, meaning that the LSTM returns a single 32-long vector and sends that into the Dense layer.
Furthermore, the use of multiple LSTMs here seems to lack purpose, though it does not necessarily harm anything.
In order to fit your presumed data, you would want the last LSTM to have return_sequences=True and the number of neurons in that LSTM to be not 32 but 5, matching the final dimension of your y data.
You could also remove the final LSTM layer entirely (you already have two LSTMs before it) and make the second LSTM return sequences with only 5 neurons. You would then use a TimeDistributed wrapper on the last Dense layer:
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
which applies the same Dense layer to every timestep of the data, as required by the shape of your y data.
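A runnable sketch of that second option, with hypothetical random data and targets reshaped to one binary label per timestep, i.e. shape (samples, 5, 1) (the question's y shape would need to change to match):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense

timesteps, data_dim = 5, 5
x_train = np.random.random((7, timesteps, data_dim))   # hypothetical random data
y_train = np.random.randint(0, 2, (7, timesteps, 1))   # one 0/1 label per timestep

model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(timesteps, data_dim)))  # (None, 5, 32)
model.add(LSTM(5, return_sequences=True))                                      # (None, 5, 5)
model.add(TimeDistributed(Dense(1, activation='sigmoid')))                     # (None, 5, 1)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=1)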
