I created a model, then loaded it in another script and tried to make a prediction with it; however, I cannot understand why the shape being passed to the function is incorrect.
This is how the model is created:
batch_size = 1232
epochs = 5
IMG_HEIGHT = 400
IMG_WIDTH = 400
model1 = np.load("training_data.npy", allow_pickle=True)
model2 = np.load("training_data_1.npy", allow_pickle=True)
data = np.asarray(np.concatenate((model1, model2), axis=0)) # 1232
train_data = data[:-100]
X_train = np.asarray(np.array([i[0] for i in train_data]))
Y_train = np.asarray([i[1] for i in train_data])
validation_data = data[-100:]
X_val = np.asarray(np.array([i[0] for i in validation_data]))
Y_val = np.asarray([i[1] for i in validation_data])
model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu',
           input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(X_train, Y_train, steps_per_epoch=batch_size, epochs=epochs,
                    validation_data=(X_val, Y_val), validation_steps=batch_size)
model.save("test")
And this is how I'm trying to make a prediction:
batch_size = 1232
epochs = 5
IMG_HEIGHT = 400
IMG_WIDTH = 400
model = tf.keras.models.load_model('test')
test_1 = cv2.imread('./Data/Images/test_no.jpg')
test_1 = cv2.resize(test_1, (IMG_HEIGHT, IMG_WIDTH))
prediction = model.predict([test_1])[0]
print(prediction)
When printing the shape of the test image, the output is: (400, 400, 3)
I also tried using NumPy's reshape when passing the test image to predict, but the error is always:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 400, 3]
Add an extra dimension to your input so it has shape [n_items, 400, 400, 3]. Note that the dimension has to be added at axis 0 (the batch axis), and here it is the test image, not the training data, that needs it:
import tensorflow as tf
test_1 = tf.expand_dims(test_1, axis=0)  # (400, 400, 3) -> (1, 400, 400, 3)
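Applied to the prediction script, the whole flow would look like this (a sketch reusing the paths and sizes from the question; np.expand_dims works just as well as tf.expand_dims):
import cv2
import numpy as np
import tensorflow as tf

IMG_HEIGHT = 400
IMG_WIDTH = 400

model = tf.keras.models.load_model('test')
test_1 = cv2.imread('./Data/Images/test_no.jpg')
test_1 = cv2.resize(test_1, (IMG_WIDTH, IMG_HEIGHT))  # cv2.resize takes (width, height)
test_1 = np.expand_dims(test_1, axis=0)               # add the batch axis
prediction = model.predict(test_1)[0]
print(prediction)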
I have added data augmentation to a Keras model trained on the Fashion-MNIST dataset. Everything goes fine, but when the first epoch finishes, an error is thrown.
The error: ValueError: Shapes (32, 1) and (32, 10) are incompatible
My data:
img_rows = 28
img_cols = 28
batch_size = 512
img_shape = (img_rows, img_cols, 1)
x_train = x_train.reshape(x_train.shape[0], *img_shape)
x_test = x_test.reshape(x_test.shape[0], *img_shape)
x_val = x_val.reshape(x_val.shape[0], *img_shape)
label_as_binary = LabelBinarizer()
y_train_binary = label_as_binary.fit_transform(y_train)
y_test_binary = label_as_binary.fit_transform(y_test)
y_val_binary = label_as_binary.fit_transform(y_val)
My model:
model2 = Sequential([
    Conv2D(filters=32, kernel_size=3, activation='relu',
           input_shape=img_shape, padding="same"),
    MaxPooling2D(pool_size=2),
    Conv2D(filters=32, kernel_size=3, activation='relu',
           padding="same"),
    MaxPooling2D(pool_size=2),
    Dropout(0.25),
    Conv2D(filters=64, kernel_size=3, activation='relu',
           padding="same"),
    MaxPooling2D(pool_size=2),
    Conv2D(filters=64, kernel_size=3, activation='relu',
           padding="same"),
    MaxPooling2D(pool_size=2),
    Dropout(0.25),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(10, activation='softmax')
])
The data augmentation:
datagen = ImageDataGenerator(horizontal_flip=True, rotation_range=45,
                             width_shift_range=0.2, height_shift_range=0.2,
                             zoom_range=0.1)
datagen.fit(x_train)
for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=9):
    for i in range(0, 9):
        pyplot.subplot(330 + 1 + i)
        pyplot.imshow(x_batch[i].reshape(28, 28),
                      cmap=pyplot.get_cmap('gray'))
    pyplot.show()
    break
model2.compile(loss='categorical_crossentropy',
               optimizer=Adadelta(learning_rate=0.01), metrics=['accuracy'])
history = model2.fit_generator(datagen.flow(x_train, y_train_binary,
                                            batch_size=batch_size),
                               epochs=10,
                               validation_data=(x_train, y_val_binary),
                               verbose=1)
I have seen many similar questions, but none of the answers seem to fit my case. Help is much appreciated.
I think you should change this line:
validation_data = (x_train, y_val_binary)
to this:
validation_data = (x_val, y_val_binary)
Then, your model should run properly.
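With that single change, the training call becomes (a sketch, keeping everything else from the question unchanged):
history = model2.fit_generator(datagen.flow(x_train, y_train_binary,
                                            batch_size=batch_size),
                               epochs=10,
                               validation_data=(x_val, y_val_binary),
                               verbose=1)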
I'm attempting to train models for RF fingerprinting, and have captured samples from a number of devices at a length of 1 million each. I've converted the samples into a variety of images, and have successfully trained models using that form of data by means of:
imageSize = 224
x_train = np.array(x_train) / 255
x_train.reshape(-1, imageSize, imageSize, 1)
x_val = np.array(x_val) / 255
x_val.reshape(-1, imageSize, imageSize, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)
model = Sequential()
model.add(Conv2D(96, 7, padding="same", activation="relu", input_shape = (224, 224, 3)))
model.add(MaxPool2D())
model.add(Conv2D(96, 7, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Conv2D(192, 7, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(384, activation="relu"))
model.add(Dense(6, activation="softmax"))
opt = Adam(learning_rate=0.000001)
model.compile(optimizer = opt, loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
model.summary()
history = model.fit(x_train, y_train, epochs = 500, validation_data = (x_val, y_val))
However, attempting to do the same with the raw array data (shape (60, 4000)) that was used to create the images yields the "ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=3, found ndim=2" error listed in the title. My code for that is:
x_train = np.array(x_train)
x_train.reshape(-1, 4000, 1)
x_val = np.array(x_val)
x_val.reshape(-1, 4000, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)
model = Sequential()
model.add(Conv1D(96, 7, padding="same", activation="relu", input_shape=(4000, 1)))
model.add(MaxPooling1D())
model.add(Conv1D(96, 7, padding="same", activation="relu"))
model.add(MaxPooling1D())
model.add(Conv1D(192, 7, padding="same", activation="relu"))
model.add(MaxPooling1D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(384, activation="relu"))
model.add(Dense(6, activation="softmax"))
opt = Adam(learning_rate=0.000001)
model.compile(optimizer = opt, loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
model.summary()
history = model.fit(x_train, y_train, epochs = 500, validation_data = (x_val, y_val))
Like many it seems, I'm unable to figure out why this input shape isn't working for the array data. Any clarifications will be helpful.
The error expected min_ndim=3, found ndim=2 explains it: the Conv1D stack expects input of shape (batch, 4000, 1), but your arrays are still two-dimensional. The culprit is that numpy.reshape returns a new array instead of modifying the array in place, so x_train.reshape(-1, 4000, 1) on its own line has no effect; assign the result back before fitting. (The image version presumably trained fine only because those arrays already had the (224, 224, 3) shape the Conv2D expected.)
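A minimal sketch of the fix, using the arrays from the question:
x_train = np.array(x_train)
x_train = x_train.reshape(-1, 4000, 1)  # assign the result back; reshape is not in-place
x_val = np.array(x_val)
x_val = x_val.reshape(-1, 4000, 1)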
My current model is:
# from tensorflow.keras.layers import InputLayer
model_training = Sequential()
# input_layer = keras.Input(shape=(300,1))
model_training.add(InputLayer(input_shape=(300,1)))
model_training.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='tanh'))
model_training.add(Dropout(0.2))
model_training.add(MaxPooling1D(pool_size=3))
model_training.add(Dropout(0.2))
model_training.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='tanh'))
model_training.add(Dropout(0.2))
model_training.add(MaxPooling1D(pool_size=3))
# model_training.add(Dropout(0.2))
# model_training.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='tanh'))
# model_training.add(Dropout(0.2))
# model_training.add(MaxPooling1D(pool_size=3))
# model_training.add(Dropout(0.2))
# model_training.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='tanh'))
# model_training.add(Dropout(0.2))
# model_training.add(MaxPooling1D(pool_size=3))
# model_training.add(Dropout(0.2))
#model.add(Dropout(0.2))
model_training.add(Flatten())
model_training.add(Dense(90))
model_training.add(Activation('sigmoid'))
model_training.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model_training.summary())
My fit function:
model_training.fit(train_data, train_labels, validation_data=(test_data, test_labels), batch_size=32, epochs=15)
I get this error when I run this:
ValueError: Can not squeeze dim[1], expected a dimension of 1, got 90 for '{{node Squeeze}} = Squeeze[T=DT_FLOAT, squeeze_dims=[-1]](remove_squeezable_dimensions/Squeeze)' with input shapes: [?,90].
Any idea?
My output layer has 90 units, as there are 90 classes in total to predict.
The training data and labels are shaped as follows:
(7769, 300, 1)
(7769, 90, 1)
I can't figure out this issue. Any help is appreciated!
Squeeze your labels before training:
train_labels = tf.squeeze(train_labels, axis=-1)
It seems the shape of your labels is the problem. The model outputs shape (batch, 90), but you are providing labels of shape (batch, 90, 1). Keras is unable to squeeze dimension 1 because it has length 90, not 1.
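For example (assuming the label arrays from the question; the same squeeze applies to test_labels used for validation):
import tensorflow as tf

print(train_labels.shape)                         # (7769, 90, 1)
train_labels = tf.squeeze(train_labels, axis=-1)
test_labels = tf.squeeze(test_labels, axis=-1)
print(train_labels.shape)                         # (7769, 90)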
This code was given to us by a teacher, so it should work right off the bat. However, I can't get it to run.
K.image_data_format()
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 25
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
print(model.summary())
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
I receive the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 32 but received input with shape [None, 32, 32, 3]
I'm just thrown off, since no one else had this issue with the given code. I did have to change the first line from
K.set_image_dim_ordering('th')
to
K.image_data_format
as I was told that set_image_dim_ordering is not a known function of Keras.backend.
Any ideas here? Could my change have introduced this error?
Update: if the data is in channels_last format, then change the input shape from input_shape=(3, 32, 32) to input_shape=(32, 32, 3) in:
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
I found a reference to set_image_dim_ordering for Keras 1.2.2 here:
https://faroit.com/keras-docs/1.2.2/backend/
It mentions:
For 2D data (e.g. image), "tf" assumes (rows, cols, channels) while
"th" assumes (channels, rows, cols).
Since you are receiving input shape [None, 32, 32, 3], your data is in the tf format of (rows, cols, channels), so declaring th ordering makes the input shapes mismatch: the th ordering is meant for the Theano backend, while you are running TensorFlow.
I could not find any reference to set_image_dim_ordering or image_dim_ordering in the latest tf.keras backend. Instead, the methods below get and set the 'channels_first' or 'channels_last' format:
https://www.tensorflow.org/api_docs/python/tf/keras/backend/image_data_format
https://www.tensorflow.org/api_docs/python/tf/keras/backend/set_image_data_format
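A minimal sketch of the fix (assuming the imports and the rest of the script from the question):
import tensorflow as tf

# TensorFlow's default is 'channels_last', i.e. (rows, cols, channels)
print(tf.keras.backend.image_data_format())

# Declare the input shape in channels_last order to match the CIFAR-10 arrays
model.add(Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding='same',
                 activation='relu', kernel_constraint=maxnorm(3)))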
My Xtrain is an array of RGB images with dimensions (numofsamples, height, width, channels), where numofsamples = 1047, height = 128, width = 128, and channels = 3.
It is produced by the code below:
for img_filename in train_files:
    img = image.load_img(os.path.join(rootdir, 'train_image/train_image', img_filename),
                         target_size=(128, 128, 1))
    img = image.img_to_array(img)
    img = img / 255
    train_image.append(img)
X_train = np.array(train_image)
I'm trying to run a CNN+LSTM model on the data with image augmentation, but I keep getting different dimension errors.
If I were to run this code:
batch_size = 64
model.add(TimeDistributed(Conv2D(64, kernel_size=(3, 3),activation='relu',input_shape = (128,128,3))))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(512, return_sequences=False, dropout=0.1))
model.add(Flatten())
model.add(Dense(256))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='Adam',metrics=['accuracy'])
model.fit_generator(train_generator.flow(Xtrain, ytrain, batch_size=batch_size),
                    # steps_per_epoch = Xtrain.shape[0] // batch_size,
                    epochs=50,
                    verbose=True,
                    validation_data=(Xval, yval),
                    callbacks=[reduce_lr, es,
                               ModelCheckpoint('cnnlstm' + '_weights.hdf5',
                                               monitor='val_accuracy', verbose=1,
                                               save_best_only=True, mode='max')])
The error encountered is:
InvalidArgumentError: input must be 4-dimensional[8192,128,3] [Op:Conv2D]
If I reshape it to (1047, 128, 128, 3, 1), I get a NumPy error instead.
Xtrain_reshape = np.expand_dims(Xtrain, -1)
Xval_reshape = np.expand_dims(Xval, -1)
model = Sequential()
# define CNN model
model.add(TimeDistributed(Conv2D(64, kernel_size=(3, 3),activation='relu',input_shape = (128,128,3))))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Flatten()))
# define LSTM model
model.add(LSTM(512, return_sequences=False, dropout=0.1))
model.add(Flatten())
model.add(Dense(256))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='Adam',metrics=['accuracy'])
model.fit_generator(train_generator.flow(Xtrain_reshape, ytrain, batch_size=batch_size),
                    # steps_per_epoch = Xtrain.shape[0] // batch_size,
                    epochs=50,
                    verbose=True,
                    validation_data=(Xval_reshape, yval),
                    callbacks=[reduce_lr, es,
                               ModelCheckpoint('cnnlstm' + '_weights.hdf5',
                                               monitor='val_accuracy', verbose=1,
                                               save_best_only=True, mode='max')])
The error here is:
ValueError: ('Input data in NumpyArrayIterator should have rank 4. You passed an array with shape', (1047, 128, 128, 3, 1))
Can you let me know how to reshape it so it can go through the CNN and LSTM?
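For reference, TimeDistributed(Conv2D) expects 5-D input of shape (samples, timesteps, height, width, channels), which is why both the 4-D data and the (1047, 128, 128, 3, 1) reshape are rejected. A minimal sketch that treats each image as a one-step sequence (an assumption; how frames should actually be grouped into sequences depends on the data):
import numpy as np

# (1047, 128, 128, 3) -> (1047, 1, 128, 128, 3): one timestep per sample
Xtrain_seq = np.expand_dims(Xtrain, axis=1)
Xval_seq = np.expand_dims(Xval, axis=1)

# TimeDistributed then sees (timesteps, 128, 128, 3) per sample
model.add(TimeDistributed(Conv2D(64, kernel_size=(3, 3), activation='relu'),
                          input_shape=(None, 128, 128, 3)))
Note that ImageDataGenerator.flow only accepts rank-4 arrays (hence the NumpyArrayIterator error above), so any augmentation would have to happen before the timestep axis is added.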