I am trying to classify my inputs into categories.
The shapes are:
df_train.shape: (17980, 380)
df_validation.shape: (17980, 380)
However, when I run my code, I am getting the following error
ValueError: Input 0 of layer conv1d is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: [32, 380]
How can we fix this error?
Conv1D expects input of shape:
3+D tensor with shape: batch_shape + (steps, input_dim)
If your data is only 2D, add a dummy channel dimension with:
df_train = df_train[..., None]
df_validation = df_validation[..., None]
Also change batch_input_shape=(32, 1, 380) accordingly to batch_input_shape=(32, 380, 1), or omit it altogether.
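Equivalently, you can use numpy's expand_dims and verify the resulting shapes (a quick sketch; it assumes df_train and df_validation are already numpy arrays, e.g. after calling .to_numpy() on the DataFrames):
import numpy as np

df_train = np.expand_dims(df_train, axis=-1)            # (17980, 380) -> (17980, 380, 1)
df_validation = np.expand_dims(df_validation, axis=-1)
print(df_train.shape)                                   # (17980, 380, 1)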
Other changes (demonstrated here on dummy data):
import numpy as np
from tensorflow.keras import Sequential, optimizers
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense
from tensorflow.keras.callbacks import EarlyStopping

# dummy data standing in for df_train / df_validation
df_train = np.random.normal(size=(17980, 380))
df_validation = np.random.normal(size=(17980, 380))

# add the channel dimension expected by Conv1D
df_train = df_train[..., None]
df_validation = df_validation[..., None]

# dummy targets
y_train = np.random.normal(size=(17980, 1))
y_validation = np.random.normal(size=(17980, 1))

#train,test = train_test_split(df, test_size=0.20, random_state=0)
batch_size = 32
epochs = 5

model = Sequential()
model.add(Conv1D(filters=5, kernel_size=2, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(50, return_sequences=True))
model.add(LSTM(10))
model.add(Dense(1))

adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0)
model.compile(optimizer=adam, loss='mse', metrics=['mae', 'mape', 'acc'])
callbacks = [EarlyStopping('val_loss', patience=3)]

# fit on the training data, validating on the held-out set
model.fit(df_train, y_train, batch_size=batch_size, epochs=epochs,
          validation_data=(df_validation, y_validation), callbacks=callbacks)
print(model.summary())
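For reference, after the reshape the tensor shapes flow through the layers as follows (batch dimension shown as None; computed assuming padding='same' and pool_size=2 as above):
Conv1D(5, kernel_size=2, padding='same')   # (None, 380, 1)  -> (None, 380, 5)
MaxPooling1D(pool_size=2)                  # (None, 380, 5)  -> (None, 190, 5)
LSTM(50, return_sequences=True)            # (None, 190, 5)  -> (None, 190, 50)
LSTM(10)                                   # (None, 190, 50) -> (None, 10)
Dense(1)                                   # (None, 10)      -> (None, 1)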
I am trying to classify text with a bi-LSTM, but when I run model.predict on a new dataset it gives me this error:
Input 0 of layer "bidirectional_2" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 100)
Shape of my training data is: (39780, 2)
Shape of my testing data is: (28619, 2)
model = Sequential()
model.add(Embedding(len(word_index) + 1, embed_size, weights=[embedding_matrix]))
model.add(Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(Bidirectional(LSTM(30,return_sequences=True)))
model.add(GlobalMaxPool1D())
model.add(Dense(50, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history=model.fit(X_train, Y_train, batch_size=64, epochs=5)
y_pred = model.predict([X_test], batch_size=26, verbose=1)
You should use a Reshape layer after the Bidirectional layer.
This might work:
model = Sequential()
model.add(Embedding(len(word_index) + 1, embed_size, weights=[embedding_matrix]))
model.add(Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(Reshape((100, 1), input_shape = (100, )))
model.add(Bidirectional(LSTM(30,return_sequences=True)))
model = Sequential()
model.add(LSTM(100, input_shape = [X_sequence.shape[1], X_sequence.shape[2]]))
model.add(Dropout(0.5))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy"
, metrics=[binary_accuracy]
, optimizer="adam")
model.summary()
training_size = int(len(X_sequence) * 0.7)
X_train, y_train = X_sequence[:training_size], y[:training_size]
X_test, y_test = X_sequence[training_size:], y[training_size:]
model.fit(X_train, y_train, batch_size=64, epochs=10)
y_test_pred = model.predict(X_test)
def create_dataset(dataset, time_step=1):
    dataX = []
    for i in range(len(dataset)-time_step-1):
        a = dataset[i:(i+time_step), 0]
        dataX.append(a)
    return np.array(dataX)
x_final=create_dataset(test.loc[:, "sensor_00":"sensor_12"].values)
y_final=model.predict(x_final)
The error occurs in the last line. I have successfully trained the model, but predicting on the test data fails.
I've used the dataset from here to reproduce the issue.
Please expand the dimensions of x_final to solve the error, as follows:
x_final=create_dataset(test.loc[:, "sensor_00":"sensor_12"].values)
#Expand dimensions
x_final=tf.expand_dims(x_final,axis=1)
y_final=model.predict(x_final)
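You can sanity-check that the expanded array matches what the model was built with before calling predict (a quick check; model.input_shape reports the shape a built Keras model expects):
print(x_final.shape)      # now 3-dimensional after tf.expand_dims
print(model.input_shape)  # the 3-D shape the LSTM model was built with, e.g. (None, timesteps, features)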
Let us know if the issue still persists. Thanks!
I'm attempting to train models for RF fingerprinting, and have captured samples from a number of devices at a length of 1 million each. I've converted the samples into a variety of images, and have successfully trained models using that form of data by means of:
imageSize = 224
x_train = np.array(x_train) / 255
x_train.reshape(-1, imageSize, imageSize, 1)
x_val = np.array(x_val) / 255
x_val.reshape(-1, imageSize, imageSize, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)
model = Sequential()
model.add(Conv2D(96, 7, padding="same", activation="relu", input_shape = (224, 224, 3)))
model.add(MaxPool2D())
model.add(Conv2D(96, 7, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Conv2D(192, 7, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(384, activation="relu"))
model.add(Dense(6, activation="softmax"))
opt = Adam(learning_rate=0.000001)
model.compile(optimizer = opt, loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
model.summary()
history = model.fit(x_train, y_train, epochs = 500, validation_data = (x_val, y_val))
However, attempting to do the same to the array data (shape (60, 4000)) which was used to create the images yields the "ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=3, found ndim=2" issue listed in the title. My code for that is:
x_train = np.array(x_train)
x_train.reshape(-1, 4000, 1)
x_val = np.array(x_val)
x_val.reshape(-1, 4000, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)
model = Sequential()
model.add(Conv1D(96, 7, padding="same", activation="relu", input_shape=(4000, 1)))
model.add(MaxPooling1D())
model.add(Conv1D(96, 7, padding="same", activation="relu"))
model.add(MaxPooling1D())
model.add(Conv1D(192, 7, padding="same", activation="relu"))
model.add(MaxPooling1D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(384, activation="relu"))
model.add(Dense(6, activation="softmax"))
opt = Adam(learning_rate=0.000001)
model.compile(optimizer = opt, loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
model.summary()
history = model.fit(x_train, y_train, epochs = 500, validation_data = (x_val, y_val))
Like many others, it seems, I'm unable to figure out why this input shape isn't working for the array data. Any clarification would be helpful.
The error expected min_ndim=3, found ndim=2 explains it: the model expects at least three-dimensional input but received two-dimensional arrays. In the first case your input is three-dimensional, (224, 224, 3), while in the second case the model declares input_shape=(4000, 1) but the arrays you feed it are still two-dimensional. You should reshape your input before passing it to the Sequential model.
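One detail worth checking, judging from the posted snippet: numpy's reshape returns a new array rather than modifying the array in place, so the result has to be assigned back before calling fit. A minimal sketch of that fix, reusing the variable names from the question:
x_train = np.array(x_train).reshape(-1, 4000, 1)   # assign the reshaped array back
x_val = np.array(x_val).reshape(-1, 4000, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)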
I'm building a model to classify text into one of 9 classes, and I'm getting this error when running it. Activation 1 seems to refer to the Convolutional layer's input, but I'm unsure what's wrong with the input.
num_classes=9
Y_train = keras.utils.to_categorical(Y_train, num_classes)
#Reshape data to add new dimension
X_train = X_train.reshape((100, 150, 1))
Y_train = Y_train.reshape((100, 9, 1))
model = Sequential()
model.add(Conv1d(1, kernel_size=3, activation='relu', input_shape=(None, 1)))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x=X_train,y=Y_train, epochs=200, batch_size=20)
Running this results in the following error:
"ValueError: Error when checking target: expected activation_1 to have shape (None, 9) but got array with shape (9,1)
There are several typos and bugs in your code.
The target should stay two-dimensional:
Y_train = Y_train.reshape((100, 9))
Since you reshape X_train to (100, 150, 1), I assume your input has 150 steps and 1 channel. So for the Conv1D layer (note the typo Conv1d in your code), use input_shape=(150, 1).
You also need to flatten the output of Conv1D before feeding it into the Dense layer.
import numpy as np
import keras
from keras import Sequential
from keras.layers import Conv1D, Dense, Flatten

# dummy data with the same shapes as in the question
X_train = np.random.normal(size=(100, 150))
Y_train = np.random.randint(0, 9, size=100)

num_classes = 9
Y_train = keras.utils.to_categorical(Y_train, num_classes)

# Reshape data to add new dimension
X_train = X_train.reshape((100, 150, 1))
Y_train = Y_train.reshape((100, 9))

model = Sequential()
model.add(Conv1D(2, kernel_size=3, activation='relu', input_shape=(150, 1)))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x=X_train, y=Y_train, epochs=200, batch_size=20)
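For reference, model.summary() for this network should report output shapes along these lines (computed from kernel_size=3 with the default 'valid' padding):
Conv1D(2, kernel_size=3)   # (None, 150, 1) -> (None, 148, 2)
Flatten()                  # (None, 148, 2) -> (None, 296)
Dense(9, 'softmax')        # (None, 296)    -> (None, 9), matching Y_train of shape (100, 9)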
Could someone help me to understand what this error is all about?
model = Sequential()
model.add(Embedding(82, 100, weights=[embedding_matrix], input_length=1000))
model.add(LSTM(100))
model.add(Dense(100, activation = 'sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(x_train, y_train, epochs = 5, batch_size=64)
When I run this LSTM model, I get the following error:
ValueError: Error when checking model target: expected dense_16 to have shape (None, 100) but got array with shape (16, 2)
I am not sure how useful the information below will be:
x_train.shape  # (16, 1000)
y_train.shape  # (16, 2)
If you need any other information, I am happy to provide it.
You have defined the last Dense layer with 100 units:
model.add(Dense(100, activation = 'sigmoid'))
so the model's output has shape (None, 100). You need to make sure the shapes always match; in your case, the shape of y_train has to line up with what the model outputs.
Try something like:
model = Sequential()
# here the batch dimension is None,
# which means any batch size will be accepted by the model.
model.add(Dense(32, batch_input_shape=(None, 500)))
model.add(Dense(32))
Your last layer has an output shape of (None, 100):
model.add(Dense(100, activation = 'sigmoid'))
But your target data (y_train) has shape (16, 2). The layer should be:
model.add(Dense(2, activation = 'sigmoid'))
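Putting it together, this is the model from the question with only the last layer changed (a sketch; embedding_matrix, x_train and y_train are assumed to be defined as in the question):
model = Sequential()
model.add(Embedding(82, 100, weights=[embedding_matrix], input_length=1000))
model.add(LSTM(100))
model.add(Dense(2, activation='sigmoid'))   # 2 units to match y_train of shape (16, 2)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=64)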