I want to apply an LSTM.
I have 12 features and 74 rows.
After dropping the target variable and reshaping the data into a 3D array, its shape is (1, 74, 12),
and my target's shape is (74,).
When I split the data using this code:
x_train, x_test, y_train, y_test = train_test_split(data_1, target, test_size=0.2, random_state=25)
I got this error:
ValueError: Found input variables with inconsistent numbers of samples: [1, 74]
I defined the model properly, but when I fit it I get another error.
Defining the model:
model = Sequential()
model.add(LSTM(1, batch_input_shape=(1, 74, 12), return_sequences=True))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['accuracy'])
model.summary()
Fitting the model:
history = model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test))
Here I also get this error:
ValueError: Input 0 of layer sequential_14 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 12)
How can I resolve this error?
tf.keras.layers.LSTM expects inputs: A 3D tensor with shape [batch, timesteps, feature].
import tensorflow as tf
inputs = tf.random.normal([32, 10, 8])
lstm = tf.keras.layers.LSTM(4, return_sequences=True, return_state=True)
whole_seq_output, final_memory_state, final_carry_state = lstm(inputs)
print(whole_seq_output.shape)
Output
(32, 10, 4)
If your input is 2D, use tf.expand_dims(input, axis=0) to add an extra dimension.
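Applied to the question above, a minimal sketch (assuming each of the 74 rows should be its own sample with a single timestep, so that the sample count on axis 0 matches the 74 targets and train_test_split works):
import numpy as np
from sklearn.model_selection import train_test_split
# Stand-ins for the question's data: 74 rows, 12 features, one target per row.
data = np.random.normal(size=(74, 12))
target = np.random.randint(0, 2, size=(74,))
# Reshape to (samples, timesteps, features) = (74, 1, 12), keeping samples on axis 0.
data_3d = data.reshape(-1, 1, 12)
x_train, x_test, y_train, y_test = train_test_split(
    data_3d, target, test_size=0.2, random_state=25)
print(x_train.shape, y_train.shape)  # (59, 1, 12) (59,)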
Related
My train set has 10 columns including a target column that I'm trying to predict, while my test set (dataframe_test) has 9 columns. When I run the code I receive this error:
Input 0 of layer "Hidden1" is incompatible with the layer: expected axis -1 of input shape to have value 10, but received input with shape (None, 9)
Call arguments received:
• inputs=tf.Tensor(shape=(None, 9), dtype=float64)
• training=False
• mask=None
My model looks like this:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=10,
activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(l=0.01),
name='Hidden1'))
model.add(tf.keras.layers.Dense(units=6,
activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(l=0.01),
name='Hidden2'))
model.add(tf.keras.layers.Dense(units=1,
name='Output'))
my_learning_rate = 0.3
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=my_learning_rate),
loss="categorical_crossentropy",
metrics='accuracy')
epochs = 10
batch_size = 32
history = model.fit(train, y_train, epochs = epochs, batch_size = batch_size)
epochs = history.epoch
print(epochs)
score = model.predict(dataframe_test)
You must split your train set into a 9-column input matrix, x_train = train[:, :9], and a single-column target matrix, y_train = train[:, 9].reshape((-1, 1)).
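For example, a quick sketch with a stand-in array (assuming the target is the last of the 10 columns):
import numpy as np
# Stand-in for a 10-column training array: 9 feature columns + 1 target column.
train = np.random.normal(size=(100, 10))
x_train = train[:, :9]                  # columns 0-8: the 9 features
y_train = train[:, 9].reshape((-1, 1))  # column 9: the target
print(x_train.shape, y_train.shape)  # (100, 9) (100, 1)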
Try using sigmoid for the first layer:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
input_size = len(X.columns)  # number of feature columns in the DataFrame X
model = Sequential()
model.add(Dense(10, activation='sigmoid', input_shape=(input_size,)))
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
I'm building an RNN and I use LSTM.
The X matrix has shape (1824, 7), while Y has shape (1824, 1).
This is my model:
num_units = 64
learning_rate = 0.0001
activation_function = 'sigmoid'
adam = Adam(lr=learning_rate)
loss_function = 'mse'
batch_size = 5
num_epochs = 50
# Initialize the RNN
model = Sequential()
model.add(LSTM(units = num_units, activation=activation_function, input_shape=(1824, 7, )))
model.add(LeakyReLU(alpha=0.5))
model.add(Dropout(0.1))
model.add(Dense(units = 1))
# Compiling the RNN
model.compile(optimizer=adam, loss=loss_function, metrics=['accuracy'])
history = model.fit(
X,
y,
validation_split=0.1,
batch_size=batch_size,
epochs=num_epochs,
shuffle=False
)
I know the error is in the input_shape parameter. When I try to fit the model I get this error:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 7]
I have seen similar questions and tried to apply some of the suggested changes, such as:
input_dim = X.shape
input_dim = (7,)
input_dim = (1824, 7, 1)
But in every case I got the same kind of error. How can I fix it?
As commented by @Nicolas Gervais,
Tensorflow Keras LSTM expects inputs: A 3D tensor with shape [batch, timesteps, feature].
Working sample code
import tensorflow as tf
inputs = tf.random.normal([32, 10, 8])
print(inputs.shape)
lstm = tf.keras.layers.LSTM(4)
output = lstm(inputs)
print(output.shape)
Output
(32, 10, 8)
(32, 4)
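Applied to the shapes in the question, a minimal sketch (assuming the 7 features of each row are treated as a length-7 sequence of scalars, i.e. (samples, timesteps, features) = (1824, 7, 1); note that input_shape excludes the batch dimension):
import numpy as np
import tensorflow as tf
# Stand-ins for the question's data.
X = np.random.normal(size=(1824, 7))
y = np.random.normal(size=(1824, 1))
# Reshape to (samples, timesteps, features).
X = X.reshape(-1, 7, 1)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(7, 1)),  # (timesteps, features), no batch dim
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, validation_split=0.1, batch_size=5, epochs=2, shuffle=False)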
I am trying to classify the inputs into categories.
The shapes are:
df_train.shape: (17980, 380)
df_validation.shape: (17980, 380)
However, when I run my code, I get the following error:
ValueError: Input 0 of layer conv1d is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: [32, 380]
How can we fix this error?
Conv1D takes input of shape:
3+D tensor with shape: batch_shape + (steps, input_dim)
If your data is only 2D, add a dummy dimension with:
df_train = df_train[..., None]
df_validation = df_validation[..., None]
Also change batch_input_shape=(32, 1, 380) accordingly to batch_input_shape=(32, 380, 1), or omit it altogether.
Other changes (working on this dummy data):
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import EarlyStopping
# Dummy data standing in for the real dataframes.
df_train = np.random.normal(size=(17980, 380))
df_validation = np.random.normal(size=(17980, 380))
df_train = df_train[..., None]
df_validation = df_validation[..., None]
y_train = np.random.normal(size=(17980, 1))
y_validation = np.random.normal(size=(17980, 1))
batch_size = 32
epochs = 5
model = Sequential()
model.add(Conv1D(filters=5, kernel_size=2, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(50, return_sequences=True))
model.add(LSTM(10))
model.add(Dense(1))
adam = optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=adam, loss='mse', metrics=['mae', 'mape', 'acc'])
callbacks = [EarlyStopping('val_loss', patience=3)]
# Fit features against the targets (not df_validation), and actually use the callbacks.
model.fit(df_train, y_train, validation_data=(df_validation, y_validation),
          batch_size=batch_size, epochs=epochs, callbacks=callbacks)
model.summary()
My input is a tensor with X_train.shape = (4291, 1, 278, 29, 1). My output is a tensor with Y_train.shape = (4291, 1, 9). When I call fit(X_train, Y_train), it shows this error:
"ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (4291, 1, 9)"
So how do I deal with the shape of the output?
model = Sequential()
model.add(ConvLSTM2D(filters=8, kernel_size=5, strides=2,
input_shape=(1, 278, 29, 1),activation='relu',
padding='same',return_sequences=False))
model.add(Flatten())
model.add(Dense(9))
model.compile(loss="mse", optimizer="Adam", metrics=['mse'])
model.fit(X_train, Y_train, batch_size=batch_size, epochs=epochs, verbose=2, shuffle=False)
The output of your model has a shape of (None, 9). Therefore, the target array (i.e. Y_train) must have the same shape, i.e. (num_samples, 9). Try to reshape it:
Y_train = Y_train.reshape(-1, 9) # -1 indicates that the dimension of that axis should be automatically inferred.
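A quick check with stand-in data:
import numpy as np
Y_train = np.random.normal(size=(4291, 1, 9))
Y_train = Y_train.reshape(-1, 9)
print(Y_train.shape)  # (4291, 9), matching the model's (None, 9) output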
I'm new with Keras and I'm trying to implement a Sequence to Sequence LSTM.
Particularly, I have a dataset with 9 features and I want to predict 5 continuous values.
I split the training and the test set, and their shapes are respectively:
X TRAIN (59010, 9)
X TEST (25291, 9)
Y TRAIN (59010, 5)
Y TEST (25291, 5)
The LSTM is extremely simple at the moment:
model = Sequential()
model.add(LSTM(100, input_shape=(9,), return_sequences=True))
model.compile(loss="mean_absolute_error", optimizer="adam", metrics= ['accuracy'])
history = model.fit(X_train,y_train,epochs=100, validation_data=(X_test,y_test))
But I have the following error:
ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2
Can anyone help me?
The LSTM layer expects inputs of shape (batch_size, timesteps, input_dim). In Keras you pass (timesteps, input_dim) as the input_shape argument, but you are setting input_shape=(9,), which does not include the timesteps dimension. The problem can be solved by adding an extra timesteps dimension to the input. For example, adding an extra dimension of size 1 is a simple solution; for this you have to reshape the input dataset (X train) and the targets (Y). But this might be problematic, because the time resolution is then 1 and you are feeding length-one sequences; with sequences of length one as input, using an LSTM does not seem like the right option.
X_train = X_train.reshape(-1, 1, 9)
X_test = X_test.reshape(-1, 1, 9)
y_train = y_train.reshape(-1, 1, 5)
y_test = y_test.reshape(-1, 1, 5)
model = Sequential()
model.add(LSTM(100, input_shape=(1, 9), return_sequences=True))
model.add(LSTM(5, return_sequences=True))
model.compile(loss="mean_absolute_error", optimizer="adam", metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test))
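With return_sequences=True on the last LSTM layer, the model now outputs shape (None, 1, 5), which matches the reshaped y arrays.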