I want to implement a recurrent neural network with GRU using Keras in Python. I have a problem running the code, and I have changed the variables again and again, but it still doesn't work. Do you have an idea how to solve it?
inputs = 42        # number of input columns (features)
num_hidden = 50    # number of neurons in the layer
outputs = 1        # number of output columns
num_epochs = 50
batch_size = 1000
learning_rate = 0.05
# train shape:  (125973, 42) -> 125973 rows, 42 features
# labels shape: (125973, 1)  -> the ground-truth results
model = tf.contrib.keras.models.Sequential()
fv = tf.contrib.keras.layers.GRU
model.add(fv(units=42, activation='tanh', input_shape=(1000, 42), return_sequences=True))  # I want to send batches to train
#model.add(tf.keras.layers.Dropout(0.15))  # Dropout against overfitting
#model.add(fv((1, 42), activation='tanh', return_sequences=True))
#model.add(Dropout(0.2))  # Dropout against overfitting
model.add(fv(42, activation='tanh'))
model.add(tf.keras.layers.Dropout(0.15))  # Dropout against overfitting
model.add(tf.keras.layers.Dense(1000, activation='softsign'))
#model.add(tf.keras.layers.Activation("softsign"))
start = time.time()
# sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# model.compile(loss="mse", optimizer=sgd)
model.compile(loss="mse", optimizer="Adam")
inp = np.array(train)
oup = np.array(labels)
X_tr = inp[:batch_size].reshape(-1, batch_size, inputs)  # shape (1, 1000, 42)
model.fit(X_tr, labels, epochs=20, batch_size=batch_size)
However, I get the following error:
ValueError: Error when checking target: expected dense to have shape (1000,) but got array with shape (1,)
Here you have specified the input sequence length to be 1000:
model.add(fv(units=42, activation='tanh', input_shape=(1000, 42), return_sequences=True))  # I want to send batches to train
However, the dimensions of your training data (X_tr) must match this specification.
Check your X_tr variable and keep its dimensions consistent with the input layer.
If you read the error carefully, you will realize there is a mismatch between the shape of the labels you provide, which is (None, 1), and the shape of the model's output, which is (None, 1000):
ValueError: Error when checking target: <--- This means the output shapes
expected dense to have shape (1000,) <--- output shape of model
but got array with shape (1,) <--- the shape of labels you give when training
Therefore you need to make them consistent. You just need to change the number of units in the last layer to 1 since there is one output per input sample:
model.add(tf.keras.layers.Dense(1, activation='softsign')) # 1 unit in the output
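Putting the fix together, here is a minimal corrected sketch using the tf.keras API (tf.contrib was removed in TensorFlow 2.x; train and labels are the asker's arrays, and the assumption that each 1000-row window carries exactly one label is mine):

import numpy as np
import tensorflow as tf

inputs, batch_size = 42, 1000

model = tf.keras.models.Sequential([
    tf.keras.layers.GRU(42, activation='tanh', return_sequences=True,
                        input_shape=(batch_size, inputs)),
    tf.keras.layers.GRU(42, activation='tanh'),
    tf.keras.layers.Dropout(0.15),
    tf.keras.layers.Dense(1, activation='softsign'),  # one output per sample
])
model.compile(loss='mse', optimizer='adam')

# each sample is one window of 1000 rows, so each window needs exactly one label
X_tr = np.array(train)[:batch_size].reshape(-1, batch_size, inputs)  # (1, 1000, 42)
y_tr = np.array(labels)[:X_tr.shape[0]].reshape(-1, 1)               # (1, 1)
model.fit(X_tr, y_tr, epochs=20, batch_size=batch_size)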
After running my model for one epoch, it crashed with the following error message:
InvalidArgumentError: Specified a list with shape [60,9] from a tensor with shape [56,9]
[[{{node TensorArrayUnstack/TensorListFromTensor}}]]
[[sequential_7/lstm_17/PartitionedCall]] [Op:__inference_train_function_29986]
This happened after I changed the LSTM layer to stateful=True and had to pass the batch_input_shape argument instead of input_shape.
Below is my code; I'm sure it has something to do with the shape of my data:
test_split = 0.2
history_points = 60
n = int(histories.shape[0] * test_split)
histories_train = histories[:n]
y_train = next_values_normalized[:n]
histories_test = histories[n:]
y_test = next_values_normalized[n:]
next_values_test = next_values[n:]
print(histories_train.shape)
print(y_train.shape)
-->(1421, 60, 9)
-->(1421, 1)
# model architecture
model = Sequential()
model.add(LSTM(units=128, stateful=True, return_sequences=True,
               batch_input_shape=(60, history_points, 9)))
model.add(LSTM(units=64, stateful=True, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=32))
model.add(Dropout(0.2))
model.add(Dense(20))
ADAM = keras.optimizers.Adam(0.0005, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(loss='mean_squared_error', optimizer=ADAM)
model.fit(x=histories_train, y=y_train, batch_size=batchsize, epochs=50,
          shuffle=False, validation_split=0.2, verbose=1)
For a stateful LSTM, the batch size should be chosen so that the number of samples is divisible by the batch size. See also here:
Keras: What if the size of data is not divisible by batch_size?
In your case, considering that you take 20% from your training data as a validation set, you have 1136 samples remaining. So you should choose a batch size by which 1136 is divisible.
Additionally, you could, for example, remove some samples or reuse samples so that other batch sizes become possible, as sketched below.
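A minimal sketch of one way to do this (the variable names follow the question; the manual split replaces validation_split so that both splits stay divisible by the batch size, and the first LSTM's batch_input_shape must then be (batch_size, history_points, 9)):

batch_size = 71  # 1136 = 16 * 71, so it divides the remaining training samples

# split off a validation set manually, with a size that is a multiple of batch_size
n_val = 4 * batch_size                                    # 284 samples
x_val, y_val = histories_train[-n_val:], y_train[-n_val:]
x_tr,  y_tr  = histories_train[:-n_val], y_train[:-n_val]

# trim the training split to an exact multiple of batch_size
n_tr = (x_tr.shape[0] // batch_size) * batch_size         # 1137 -> 1136
x_tr, y_tr = x_tr[:n_tr], y_tr[:n_tr]

model.fit(x=x_tr, y=y_tr, batch_size=batch_size, epochs=50,
          shuffle=False, validation_data=(x_val, y_val))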
I'm trying to build a multi-output Keras model starting from a working single-output model. Keras, however, is complaining about tensor dimensions.
The single output Model:
This GRU model is training and predicting fine:
timesteps = 250
features = 2
input_tensor = Input(shape=(timesteps, features), name="input")
conv = Conv1D(filters=128, kernel_size=6, use_bias=True)(input_tensor)
b = BatchNormalization()(conv)
s_gru, states = GRU(256, return_sequences=True, return_state=True, name="gru_1")(b)
biases = keras.initializers.Constant(value=88.15)
out = Dense(1, activation='linear', name="output")(s_gru)
model = Model(inputs=input_tensor, outputs=out)
My numpy arrays are:
train_x  # shape: (7110, 250, 2)
train_y  # shape: (7110, 250, 1)
If I fit the model with the following code, everything is fine:
model.fit(train_x, train_y,batch_size=128, epochs=10, verbose=1)
The Problem:
I want to use a slightly modified version of the network that outputs also the GRU states:
input_tensor = Input(shape=(timesteps, features), name="input")
conv = Conv1D(filters=128, kernel_size=6, use_bias=True)(input_tensor)
b = BatchNormalization()(conv)
s_gru, states = GRU(256, return_sequences=True, return_state=True, name="gru_1")(b)
biases = keras.initializers.Constant(value=88.15)
out = Dense(1, activation='linear', name="output")(s_gru)
model = Model(inputs=input_tensor, outputs=[out, states]) # multi output
#fit the model but with a list of numpy array as y
model.compile(optimizer=optimizer, loss='mae', loss_weights=[0.5, 0.5])
history = model.fit(train_x, [train_y,train_y], batch_size=128, epochs=10, callbacks=[])
This training fails, and Keras complains about the target dimensions:
ValueError: Error when checking target: expected gru_1 to have 2 dimensions, but got array with shape (7110, 250, 1)
I'm using Keras 2.3.0 and Tensorflow 2.0.
What am I missing here?
Each element in the outputs list must have a shape compatible with the corresponding target. In this case, states has shape (7110, 256), which can't be compared with train_y, whose shape is (7110, 250, 1) as noted in the first code block. Make sure every output is paired with a target of matching shape.
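For instance, a minimal sketch that makes the shapes consistent (the zero array is a hypothetical stand-in for whatever you actually want the state output trained against):

import numpy as np

# one target per model output, each matching that output's shape
states_target = np.zeros((train_x.shape[0], 256), dtype='float32')  # matches (None, 256)

model.compile(optimizer=optimizer, loss='mae', loss_weights=[0.5, 0.5])
history = model.fit(train_x, [train_y, states_target], batch_size=128, epochs=10)

If the states are only needed at prediction time, a common alternative is to train the single-output model and then build a second, weight-sharing model for inference: inference_model = Model(inputs=input_tensor, outputs=[out, states]).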
I want to classify a timeframe of data, so that, for example, every 5 inputs produce one output. But my code refuses to accept my output.
model = Sequential()
model.add(GRU(32, input_shape=(TimeStep.TIME_STEP + 1, 10), return_sequences=True, activation='relu'))
model.add(GRU(64, activation='relu', return_sequences=True))
model.add(Dense(2, activation='hard_sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[categorical_accuracy])
history = model.fit(TimeStep.fodder, TimeStep.target, epochs=50)
The error:
ValueError: Error when checking target: expected dense_1 to have shape (5, 2) but got array with shape (31057, 2)
The data does have 31057 data points, each of which consists of 5 sequential steps.
The return_sequences parameter in the GRU layer instructs the layer to return its hidden state at each time step rather than only the final activation.
If you set that flag to False in the second GRU, your model will return the shape that you expect.
Tip: use model.summary() to display the output shapes of your layers.
Also, for a model with a categorical loss you want the output layer activation to be a softmax, not a hard sigmoid.
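A minimal corrected sketch combining both points (TimeStep.fodder, TimeStep.target and TimeStep.TIME_STEP are the asker's objects):

model = Sequential()
model.add(GRU(32, input_shape=(TimeStep.TIME_STEP + 1, 10),
              return_sequences=True, activation='relu'))
model.add(GRU(64, activation='relu'))       # return_sequences=False -> output (None, 64)
model.add(Dense(2, activation='softmax'))   # softmax pairs with categorical_crossentropy
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['categorical_accuracy'])
model.summary()                             # final output shape should be (None, 2)
history = model.fit(TimeStep.fodder, TimeStep.target, epochs=50)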
I'm new to Keras and I'm trying to implement a sequence-to-sequence LSTM.
In particular, I have a dataset with 9 features, and I want to predict 5 continuous values.
I split the training and the test set, and their shapes are respectively:
X_train: (59010, 9)
X_test:  (25291, 9)
y_train: (59010, 5)
y_test:  (25291, 5)
The LSTM is extremely simple at the moment:
model = Sequential()
model.add(LSTM(100, input_shape=(9,), return_sequences=True))
model.compile(loss="mean_absolute_error", optimizer="adam", metrics= ['accuracy'])
history = model.fit(X_train,y_train,epochs=100, validation_data=(X_test,y_test))
But I have the following error:
ValueError: Input 0 is incompatible with layer lstm_1: expected
ndim=3, found ndim=2
Can anyone help me?
The LSTM layer expects inputs with shape (batch_size, timesteps, input_dim); in Keras you pass (timesteps, input_dim) as the input_shape argument. But you are setting input_shape=(9,), which does not include the timesteps dimension. The problem can be solved by adding an extra dimension to input_shape for the time dimension; for example, adding an extra dimension of size 1 is a simple solution. For this you have to reshape your input dataset (X_train) and the y arrays as well. This might still be problematic, though, because the time resolution becomes 1 and you are feeding sequences of length one; with length-one sequences as input, using an LSTM does not seem like the right option.
X_train = X_train.reshape(-1, 1, 9)
X_test = X_test.reshape(-1, 1, 9)
y_train = y_train.reshape(-1, 1, 5)
y_test = y_test.reshape(-1, 1, 5)
model = Sequential()
model.add(LSTM(100, input_shape=(1, 9), return_sequences=True))
model.add(LSTM(5, return_sequences=True))  # input_shape is only needed on the first layer
model.compile(loss="mean_absolute_error", optimizer="adam", metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test))
I am attempting multi-class classification; here are the details of my training input and output:
train_input.shape = (1, 95000, 360)   # a 95000-length input array, each element an array of length 360
train_output.shape = (1, 95000, 22)   # there are 22 classes
model = Sequential()
model.add(LSTM(22, input_shape=(1, 95000,360)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(train_input, train_output, epochs=2, batch_size=500)
The error is:
ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4
in line:
model.add(LSTM(22, input_shape=(1, 95000,360)))
Please help me out; I have not been able to solve it using other answers.
I solved the problem by changing the input size to (95000, 360, 1) and the output size to (95000, 22), and by changing the input shape to (360, 1) in the code where the model is defined:
model = Sequential()
model.add(LSTM(22, input_shape=(360,1)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(ml2_train_input, ml2_train_output_enc, epochs=2, batch_size=500)
input_shape is supposed to be (timesteps, n_features). Remove the first dimension:
input_shape = (95000, 360)
The same applies to the output.
Well, I think the main problem here is with the return_sequences parameter in the network. This hyperparameter should be set to False for the last recurrent layer and True for all preceding recurrent layers, as sketched below.
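A minimal sketch of that pattern (the layer sizes are hypothetical; the (360, 1) input shape follows the accepted answer above):

model = Sequential()
model.add(LSTM(64, return_sequences=True, input_shape=(360, 1)))  # intermediate recurrent layers: True
model.add(LSTM(32))                          # last recurrent layer: return_sequences=False
model.add(Dense(22, activation='softmax'))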
In artificial neural networks (ANNs), the input has shape (N, D), where N is the number of samples and D is the number of features.
In RNNs, GRUs and LSTMs, the input has shape (N, T, D), where N is the number of samples, T is the length of the time sequence, and D is the number of features.
So, when adding layers, use
Input(shape=(D,)) for an ANN, and
Input(shape=(T, D)) for RNNs, GRUs and LSTMs.
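A minimal sketch of the two conventions (the sizes are hypothetical):

import numpy as np
from tensorflow.keras.layers import Input, Dense, GRU
from tensorflow.keras.models import Model

N, T, D = 32, 10, 8  # samples, timesteps, features

# ANN: each sample is a flat feature vector, so the input shape is (D,)
ann_in = Input(shape=(D,))
ann = Model(ann_in, Dense(1)(ann_in))
ann.predict(np.zeros((N, D)))       # expects data of shape (N, D)

# RNN/GRU/LSTM: each sample is a sequence, so the input shape is (T, D)
rnn_in = Input(shape=(T, D))
rnn = Model(rnn_in, GRU(16)(rnn_in))
rnn.predict(np.zeros((N, T, D)))    # expects data of shape (N, T, D)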