Keras LSTM error: expected to see 1 array - Python

So I want to make an LSTM network to run on my data, but I get this message:
ValueError: Error when checking input: expected lstm_1_input to have shape (None, 1) but got array with shape (1, 557)
This is my code:
x_train=numpy.array(x_train)
x_test=numpy.array(x_test)
x_train = numpy.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1]))
x_test = numpy.reshape(x_test, (x_test.shape[0], 1, x_test.shape[1]))
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(50, input_shape=(1,len(x_train[0]) )))
model.add(Dense(1))
model.add(Dropout(0.2))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, numpy.array(y_train), epochs=100, batch_size=1, verbose=2)

You need to change the input_shape value for the LSTM layer. Also, x_train must have the following shape (and x_test should be reshaped the same way).
x_train = x_train.reshape(len(x_train), x_train.shape[1], 1)
So, change
x_train = numpy.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1]))
model.add(LSTM(50, input_shape=(1,len(x_train[0]) )))
to
x_train = x_train.reshape(len(x_train), x_train.shape[1], 1)
model.add(LSTM(50, input_shape=(x_train.shape[1], 1)))
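For completeness, a minimal runnable sketch of the corrected shapes, using synthetic data with 557 features per sample (the size from the error message) and keeping only the layers needed to demonstrate the fix:
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
# Hypothetical data: 100 samples, each with 557 values treated as 557 timesteps
x_train = np.random.random((100, 557))
y_train = np.random.random((100,))
# Reshape to (samples, timesteps, features_per_timestep) = (100, 557, 1)
x_train = x_train.reshape(len(x_train), x_train.shape[1], 1)
model = Sequential()
model.add(LSTM(50, input_shape=(x_train.shape[1], 1)))  # (timesteps, features) = (557, 1)
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=2, batch_size=1, verbose=2)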

Related

binary signal data: keras ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=3, found ndim=2

I'm attempting to train models for RF fingerprinting, and have captured samples from a number of devices at a length of 1 million each. I've converted the samples into a variety of images, and have successfully trained models using that form of data by means of:
imageSize = 224
x_train = np.array(x_train) / 255
x_train.reshape(-1, imageSize, imageSize, 1)
x_val = np.array(x_val) / 255
x_val.reshape(-1, imageSize, imageSize, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)
model = Sequential()
model.add(Conv2D(96, 7, padding="same", activation="relu", input_shape = (224, 224, 3)))
model.add(MaxPool2D())
model.add(Conv2D(96, 7, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Conv2D(192, 7, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(384, activation="relu"))
model.add(Dense(6, activation="softmax"))
opt = Adam(learning_rate=0.000001)
model.compile(optimizer = opt, loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
model.summary()
history = model.fit(x_train, y_train, epochs = 500, validation_data = (x_val, y_val))
However, attempting to do the same with the array data (shape (60, 4000)) that was used to create the images yields the "ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=3, found ndim=2" issue listed in the title. My code for that is:
x_train = np.array(x_train)
x_train.reshape(-1, 4000, 1)
x_val = np.array(x_val)
x_val.reshape(-1, 4000, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)
model = Sequential()
model.add(Conv1D(96, 7, padding="same", activation="relu", input_shape=(4000, 1)))
model.add(MaxPooling1D())
model.add(Conv1D(96, 7, padding="same", activation="relu"))
model.add(MaxPooling1D())
model.add(Conv1D(192, 7, padding="same", activation="relu"))
model.add(MaxPooling1D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(384, activation="relu"))
model.add(Dense(6, activation="softmax"))
opt = Adam(learning_rate=0.000001)
model.compile(optimizer = opt, loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
model.summary()
history = model.fit(x_train, y_train, epochs = 500, validation_data = (x_val, y_val))
Like many others, it seems, I'm unable to figure out why this input shape isn't working for the array data. Any clarification would be helpful.
The error expected min_ndim=3, found ndim=2 explains it: the model expects a 3-D input of shape (batch, 4000, 1) to match input_shape=(4000, 1), but it received the 2-D array of shape (60, 4000). In the image case the input was already three-dimensional per sample (224, 224, 3), while here each sample is a flat vector of 4000 values, and the reshape call has no effect because NumPy's reshape returns a new array instead of modifying x_train in place. Assign the result back to x_train and x_val before calling fit.
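A minimal sketch of the corrected preprocessing, with hypothetical stand-in arrays for the question's signal data; the key point is that NumPy's reshape returns a new array rather than modifying the original in place:
import numpy as np
# Hypothetical stand-ins for the (60, 4000) training and validation arrays
x_train = np.random.random((60, 4000))
x_val = np.random.random((12, 4000))
# reshape() returns a new array; assign it back, otherwise x_train keeps
# shape (60, 4000) and the Conv1D model sees ndim=2 instead of ndim=3
x_train = x_train.reshape(-1, 4000, 1)
x_val = x_val.reshape(-1, 4000, 1)
print(x_train.shape)  # (60, 4000, 1) -- matches input_shape=(4000, 1)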

Preparing Pandas DataFrame for LSTM

I'm trying to fit a LSTM classifier using Keras but don't understand how to prepare the data for training.
I currently have two dataframes for the training data. X_train contains 48 hand-crafted temporal features from IMU data, and y_train contains corresponding labels (4 kinds) representing terrain. The shape of these dataframes is given below:
X_train = X_train.values.reshape(X_train.shape[0],X_train.shape[1],1)
print(X_train.shape, y_train.shape)
(268320, 48, 1) (268320,)
Model using batch_size = (32,5,48):
def def_model():
    model = Sequential()
    model.add(LSTM(units=144, batch_size=(32, 5, 48), return_sequences=True))
    model.add(Dropout(0.5))
    model.add(Dense(144, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4, activation='softmax'))
    model.compile(optimizer=Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['categorical_accuracy'])
    return model
model_LSTM = def_model()
LSTM_history = model_LSTM.fit(X_train, y_train, epochs=15, validation_data=(X_valid, y_valid), verbose=1)
The error that I am getting:
ValueError: Shapes (32, 1) and (32, 48, 4) are incompatible
Any insight into how to fix this particular error and any intuition into what Keras is expecting?
What is the 5 in your batch size? The batch_size argument in the LSTM layer indicates that your data should be in the form (batch_size, time_steps, features_per_time_step). If I am understanding correctly, your data has time_steps = 1 and features_per_time_step = 48.
Here is a sample of working code, with the shapes involved:
def def_model():
    model = Sequential()
    model.add(LSTM(units=144, batch_size=(32, 1, 48), return_sequences=True))
    model.add(Dropout(0.5))
    model.add(Dense(144, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4, activation='softmax'))
    model.compile(optimizer=Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['categorical_accuracy'])
    return model

model_LSTM = def_model()
X_train = np.random.random((10000, 1, 48))
y_train = np.random.random((10000, 4))
y_train = y_train.reshape(-1, 1, 4)
data = tf.data.Dataset.from_tensor_slices((X_train, y_train)).batch(32)
model_LSTM.fit(data, epochs=15, verbose=1)
Passing data instead of X_train and y_train in your fit function will fit the model properly.
If you want to have 5 timesteps in your data, you will have to build your X_train so that it has shape (n_samples, 5, 48), as sketched below.
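A minimal sketch of that windowing, assuming consecutive rows of the feature matrix belong to the same recording and each window takes the label of its last row (both are assumptions; adjust to your data):
import numpy as np
def make_windows(features, labels, steps=5):
    # Stack `steps` consecutive rows into samples of shape (steps, n_features)
    X, y = [], []
    for i in range(len(features) - steps + 1):
        X.append(features[i:i + steps])
        y.append(labels[i + steps - 1])  # label of the window's last row
    return np.array(X), np.array(y)
# Hypothetical stand-ins for the 48-feature IMU data and 4-class terrain labels
features = np.random.random((1000, 48))
labels = np.random.randint(0, 4, size=1000)
X_win, y_win = make_windows(features, labels, steps=5)
print(X_win.shape, y_win.shape)  # (996, 5, 48) (996,)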

Input shape error with Keras - not sure what's going on?

This code was given to us by a teacher, so it should work right off the bat. However, I can't get it to run.
K.image_data_format()
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 25
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
print(model.summary())
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
I receive the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 32 but received input with shape [None, 32, 32, 3]
Just thrown off since no one else had this issue with the given code. I did have to change the first line from
K.set_image_dim_ordering('th')
to
K.image_data_format
as I was being told that set_image_dim_ordering was not a known function of Keras.backend
Any ideas here? Could my change have introduced this error?
Update: If the data is in channels_last format, then change the input shape from input_shape=(3, 32, 32) to input_shape=(32, 32, 3) in
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
I found a reference to set_image_dim_ordering for Keras 1.2.2 below:
https://faroit.com/keras-docs/1.2.2/backend/
Here, it mentions,
For 2D data (e.g. image), "tf" assumes (rows, cols, channels) while
"th" assumes (channels, rows, cols).
Since you are receiving input shape [None, 32, 32, 3], the data is in the 'tf' format of (rows, cols, channels). So declaring the 'th' shape (3, 32, 32) causes the input shapes to mismatch: 'th' is the Theano (channels-first) ordering, while your data follows the TensorFlow (channels-last) ordering.
I could not find any reference to set_image_dim_ordering or image_dim_ordering in the latest tf.keras backend. Instead, the methods below get and set the 'channels_first' or 'channels_last' format.
https://www.tensorflow.org/api_docs/python/tf/keras/backend/image_data_format
https://www.tensorflow.org/api_docs/python/tf/keras/backend/set_image_data_format
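As a quick check, a trimmed-down sketch (kernel_constraint omitted) showing that a channels_last input_shape matches what cifar10.load_data() returns; alternatively, tf.keras.backend.set_image_data_format('channels_first') plus transposing the images would let you keep input_shape=(3, 32, 32):
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
print(X_train.shape)  # (50000, 32, 32, 3) -- channels_last: (rows, cols, channels)
model = Sequential()
# Declare the input shape as (rows, cols, channels) to match the data
model.add(Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding='same', activation='relu'))
print(model.output_shape)  # (None, 32, 32, 32)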

data shape mismatch in time series prediction

Working with an LSTM model for predicting stock prices, I did every step exactly as in the tutorial, but unlike the tutorial, my code runs into an error.
Here is the code I am working with:
df = pd.read_csv(f'D:\\algo\\all\\EURUSD_15M.csv')
df = df.loc[:, ~df.columns.str.contains('^Unnamed',)]
training_set = df.iloc[:-int(len(df)/10), 4:5].values
sc = MinMaxScaler(feature_range= (0, 1))
training_set_scaled = sc.fit_transform(training_set)
x_train , y_train = [], []
for i in range(60, len(training_set)):
    x_train.append(training_set_scaled[i-60:i, 0])
    y_train.append(training_set_scaled[i, 0])
x_train, y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape = (x_train.shape[1],1)))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(Dense(units=1))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['Accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=64)
So what's supposed to happen is to take 60 periods of a price and predict the 61st period.
But I ultimately face the following error:
ValueError: A target array with shape (379319, 1) was passed for an output of shape (None, 60, 1) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output.
What am I doing wrong?
As a dataset, try using TimeseriesGenerator (tf.keras.preprocessing.sequence.TimeseriesGenerator) instead of your custom lists: https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/TimeseriesGenerator
For example:
train_gen = TimeseriesGenerator(Xtrain, Xtrain, n_steps, batch_size=24*7)
valid_gen = TimeseriesGenerator(Xvalid, Xvalid, n_steps, batch_size=24*7)
test_gen = TimeseriesGenerator(Xtest, Xtest, n_steps, batch_size=24*7)
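A hedged sketch of how this could look for the question's data: TimeseriesGenerator builds the 60-step windows, and note that the model's last LSTM would also need return_sequences=False (the default) so that each window produces a single output value, matching the (379319, 1) targets:
import numpy as np
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
# Hypothetical stand-in for the scaled close prices (one feature per step)
series = np.random.random((5000, 1))
# Each sample is 60 consecutive values; the target is the value that follows
train_gen = TimeseriesGenerator(series, series, length=60, batch_size=64)
model = Sequential()
model.add(LSTM(50, input_shape=(60, 1)))  # return_sequences defaults to False
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(train_gen, epochs=1)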

Keras - "ValueError: Error when checking target: expected activation_1 to have shape (None, 9) but got array with shape (9,1)

I'm building a model to classify text into one of 9 classes, and am getting this error when running it. activation_1 seems to refer to the convolutional layer's input, but I'm unsure what's wrong with the input.
num_classes=9
Y_train = keras.utils.to_categorical(Y_train, num_classes)
#Reshape data to add new dimension
X_train = X_train.reshape((100, 150, 1))
Y_train = Y_train.reshape((100, 9, 1))
model = Sequential()
model.add(Conv1d(1, kernel_size=3, activation='relu', input_shape=(None, 1)))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x=X_train,y=Y_train, epochs=200, batch_size=20)
Running this results in the following error:
"ValueError: Error when checking target: expected activation_1 to have shape (None, 9) but got array with shape (9,1)
There are several typos and bugs in your code.
First, Y_train should stay two-dimensional:
Y_train = Y_train.reshape((100, 9))
Since you reshape X_train to (100, 150, 1), I guess your input has 150 steps and 1 channel, so for the Conv1D layer (note the typo Conv1d in your code) use input_shape=(150, 1).
You also need to flatten the output of Conv1D before feeding it into the Dense layer; otherwise the Dense layer is applied per time step and the output keeps a time dimension instead of being (None, 9).
import numpy as np
import keras
from keras import Sequential
from keras.layers import Conv1D, Dense, Flatten
X_train = np.random.normal(size=(100,150))
Y_train = np.random.randint(0,9,size=100)
num_classes=9
Y_train = keras.utils.to_categorical(Y_train, num_classes)
#Reshape data to add new dimension
X_train = X_train.reshape((100, 150, 1))
Y_train = Y_train.reshape((100, 9))
model = Sequential()
model.add(Conv1D(2, kernel_size=3, activation='relu', input_shape=(150,1)))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x=X_train,y=Y_train, epochs=200, batch_size=20)
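For reference, the shape flow of this model (a sketch of what model.summary() reports, given kernel_size=3 and the default 'valid' padding):
model.summary()
# conv1d (Conv1D)    -> (None, 148, 2)   # 150 - 3 + 1 steps, 2 filters
# flatten (Flatten)  -> (None, 296)
# dense (Dense)      -> (None, 9)        # matches the one-hot targets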
