This code was given to us by a teacher, so it should work right off the bat. However, I can't get it to run.
K.image_data_format()
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 25
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
print(model.summary())
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
I receive the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 32 but received input with shape [None, 32, 32, 3]
I'm just thrown off, since no one else had this issue with the given code. I did have to change the first line from
K.set_image_dim_ordering('th')
to
K.image_data_format
because I was told that set_image_dim_ordering is not a known function of keras.backend.
Any ideas here? Could my change have introduced this error?
Update: If the data is in channels_last format, then change the input shape from input_shape=(3, 32, 32) to input_shape=(32, 32, 3) in
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
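so that the corrected layer reads:
model.add(Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding='same', activation='relu', kernel_constraint=maxnorm(3)))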
I found a reference to set_image_dim_ordering for Keras 1.2.2 below:
https://faroit.com/keras-docs/1.2.2/backend/
Here, it mentions,
For 2D data (e.g. image), "tf" assumes (rows, cols, channels) while
"th" assumes (channels, rows, cols).
Since you are receiving input shape [None, 32, 32, 3], the data is in the tf format of (rows, cols, channels). Declaring the th shape therefore makes the input shapes mismatch: the original code assumed the Theano backend with th ordering, while you are running the TensorFlow backend.
I could not find any reference to set_image_dim_ordering or image_dim_ordering in the latest tf.keras backend. Instead, the functions below get and set the 'channels_first' or 'channels_last' format:
https://www.tensorflow.org/api_docs/python/tf/keras/backend/image_data_format
https://www.tensorflow.org/api_docs/python/tf/keras/backend/set_image_data_format
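As a minimal sketch (assuming the TensorFlow backend), you can query the active format and pick the matching input shape:
import tensorflow as tf

tf.keras.backend.set_image_data_format('channels_last')  # the tf.keras default
fmt = tf.keras.backend.image_data_format()
input_shape = (32, 32, 3) if fmt == 'channels_last' else (3, 32, 32)
print(fmt, input_shape)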
Related
Being new to Keras sequential models is causing me a few troubles!
I have an x_train of shape : 17755 x 500 x 12
and y_train of shape: 17755 x 15 (labels are already one-hot encoded)
And I made the next model to be trained on this data:
model = Sequential()
model.add(Conv2D(32,3,padding="same", activation="relu", input_shape=(17755,500,12)))
model.add(MaxPool2D())
model.add(Conv2D(32, 3, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Conv2D(64, 3, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128,activation="relu"))
model.add(Dense(15, activation="sigmoid"))
model.compile(optimizer ='adam', loss='categorical_crossentropy', metrics = ['Accuracy'])
history = model.fit(x_train, y_train, epochs=5)
1- when I don’t use np.expand_dims to add an axis for batch, I get this error:
ValueError: Input 0 of layer "sequential" is incompatible with the
layer: expected shape=(None, 17755, 500, 12), found shape=(None, 500,
12)
2- when I do use np.expand_dims and the shape of x_train becomes 1x17755x500x12, I get this error:
Data cardinality is ambiguous:
x sizes: 1
y sizes: 17755
Make sure all arrays contain the same number of samples.
3- when I use np.expand_dims for y_train too and its shape becomes 1x17755x15, I get this error:
ValueError: Shapes (None, 17755, 15) and (None, 15) are incompatible
I know I'm doing something fundamentally wrong, but what is it? Can anyone please help me out with the shape of the data?
Regarding x_train, try adding a new dimension at the end to represent the channel dimension needed by Conv2D layers. Note also that the number of samples is not part of the input shape. Here is a working example:
import tensorflow as tf
import numpy as np
x_train = np.random.random((17755,500,12))
x_train = np.expand_dims(x_train, axis=-1)
y_train = np.random.random((17755,15))
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32,3,padding="same", activation="relu", input_shape=(500, 12, 1)))
model.add(tf.keras.layers.MaxPool2D())
model.add(tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"))
model.add(tf.keras.layers.MaxPool2D())
model.add(tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"))
model.add(tf.keras.layers.MaxPool2D())
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128,activation="relu"))
model.add(tf.keras.layers.Dense(15, activation="sigmoid"))
model.compile(optimizer ='adam', loss='categorical_crossentropy', metrics = ['Accuracy'])
history = model.fit(x_train, y_train, epochs=5)
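As an optional sanity check before training, you can confirm that the model's output matches the label shape:
print(model.output_shape)  # should be (None, 15), matching y_train's (17755, 15)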
Hi, I am using this code and getting the following error:
ValueError: Data cardinality is ambiguous:
x sizes: 150000
y sizes: 50000
Make sure all arrays contain the same number of samples.
I tried changing the reshape calls and even numpy.transpose, but with no luck. Can anyone help?
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt  # needed for the plotting at the end
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
(x_train, y_train) , (x_test, y_test) = datasets.cifar10.load_data()
#x_train.shape #(50000, 32, 32, 3)
#x_test.shape #(10000, 32, 32, 3)
x_train = x_train.reshape(-1, 32, 32, 1)
x_test = x_test.reshape(-1, 32, 32 ,1)
x_train = x_train.astype('float32') # change integers to 32-bit floating point numbers
x_test = x_test.astype('float32')
x_train /= 255.0
x_test /= 255.0
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(512, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.build(input_shape=(512,32,32,1))
model.summary()
model.fit(x_train, y_train, batch_size=1000, epochs=1)
score = model.evaluate(x_test, y_test)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
predictions = model.predict([x_test])
#print(predictions)
print(np.argmax(predictions[0]))
img_path = x_test[0]
print(img_path.shape)
if(len(img_path.shape) == 3):
plt.imshow(np.squeeze(img_path))
elif(len(img_path.shape) == 2):
plt.imshow(img_path)
else:
print("Higher dimensional data")
There are some changes you would have to make; I will write an example for you:
import numpy as np
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
(x_train, y_train) , (x_test, y_test) = datasets.cifar10.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255.0
x_test /= 255.0
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(32,32,3)))
model.add(tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train, batch_size=32, epochs=1)
Changes:
You don't need to reshape x_train and x_test; they are already in the correct shape, (50000, 32, 32, 3) and (10000, 32, 32, 3). Reshaping to (-1, 32, 32, 1) splits the 50000 three-channel images into 150000 single-channel ones, which is exactly where the mismatched x size of 150000 comes from.
It is always good to use tf.keras.layers.InputLayer instead of building the model later.
I haven't made that change, but whenever possible you should use tf.keras.Sequential to make models (more readable, less prone to error). The functional API is for when you need to build a more complex architecture.
You can now grow the model (more layers); I used only a few just to show you an example.
The input_shape in the InputLayer excludes the batch size: the full input is (batch_size, img_width, img_height, img_channels), but the batch size can vary and defaults to None, so we pass only (img_width, img_height, img_channels). Since your images are 32 wide, 32 high, with 3 channels, that is (32, 32, 3); see the short check below.
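A minimal illustration (hypothetical snippet, just to show the implicit batch dimension):
inp = tf.keras.layers.Input(shape=(32, 32, 3))
print(inp.shape)  # (None, 32, 32, 3): the batch size is left open as None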
If it solved your issue then kindly upvote or give a green tick.
I'm attempting to train models for RF fingerprinting, and have captured samples from a number of devices at a length of 1 million each. I've converted the samples into a variety of images, and have successfully trained models using that form of data by means of:
imageSize = 224
x_train = np.array(x_train) / 255
x_train.reshape(-1, imageSize, imageSize, 1)
x_val = np.array(x_val) / 255
x_val.reshape(-1, imageSize, imageSize, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)
model = Sequential()
model.add(Conv2D(96, 7, padding="same", activation="relu", input_shape = (224, 224, 3)))
model.add(MaxPool2D())
model.add(Conv2D(96, 7, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Conv2D(192, 7, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(384, activation="relu"))
model.add(Dense(6, activation="softmax"))
opt = Adam(learning_rate=0.000001)
model.compile(optimizer = opt, loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
model.summary()
history = model.fit(x_train, y_train, epochs = 500, validation_data = (x_val, y_val))
However, attempting to do the same with the array data (shape (60, 4000)) that was used to create the images yields the "ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=3, found ndim=2" issue listed in the title. My code for that is:
x_train = np.array(x_train)
x_train.reshape(-1, 4000, 1)
x_val = np.array(x_val)
x_val.reshape(-1, 4000, 1)
y_train = np.array(y_train)
y_val = np.array(y_val)
model = Sequential()
model.add(Conv1D(96, 7, padding="same", activation="relu", input_shape=(4000, 1)))
model.add(MaxPooling1D())
model.add(Conv1D(96, 7, padding="same", activation="relu"))
model.add(MaxPooling1D())
model.add(Conv1D(192, 7, padding="same", activation="relu"))
model.add(MaxPooling1D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(384, activation="relu"))
model.add(Dense(6, activation="softmax"))
opt = Adam(learning_rate=0.000001)
model.compile(optimizer = opt, loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
model.summary()
history = model.fit(x_train, y_train, epochs = 500, validation_data = (x_val, y_val))
Like many others, it seems, I'm unable to figure out why this input shape isn't working for the array data. Any clarification would be helpful.
The error expected min_ndim=3, found ndim=2 explains it: in the image case you fed 4-D data matching input_shape=(224, 224, 3), but Conv1D expects 3-D data of shape (samples, steps, channels), and your x_train is still 2-D with shape (60, 4000). Note that NumPy's reshape returns a new array instead of modifying it in place, so a bare x_train.reshape(-1, 4000, 1) has no effect; you need to assign the result back to x_train.
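A minimal sketch of the fix, with placeholder random data standing in for the real samples:
import numpy as np

x_train = np.random.random((60, 4000)).astype('float32')  # placeholder for your array data
x_train = x_train.reshape(-1, 4000, 1)                     # assign the result back
print(x_train.shape)                                       # (60, 4000, 1): ndim=3, as Conv1D expects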
I have built up a NN with following architecture:
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
(1901, 456, 3) (476, 456, 3) (1901, 3, 3) (476, 3, 3)
model = Sequential()
model.add(Flatten(input_shape=(456,3)))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(3 * 3))
model.add(Reshape((3, 3)))
model.compile('adam', 'mse')
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=100)
Now I want to replace this architecture with an analogous CNN that does the same thing, but when trying to implement it I always run into problems with the dimensions of the different layers. My error is always like this:
ValueError: Error when checking input: expected conv2d_3_input to have 4 dimensions, but got array with shape (x, x, x)
The dataset remains the same; just the NN architecture changes, and this is my first approach:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(1901,456,3)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
Can someone help me out to replace my first NN into a CNN?
Your network is well defined; the error you're getting occurs during the fit operation. Why is that the case?
Conv2D expects data with a 4-D shape, as you can see here: doc
X_train must then be shaped (samples, rows, cols, channels) (in the default channels_last format).
When you gave input_shape=(1901, 456, 3), you didn't have to specify the number of samples.
But during the fit operation, the data does need the full (samples, rows, cols, channels) shape.
And now you see the problem: why is X_train shaped like that? It seems that you have only one image. You could feed it by reshaping it using:
X_train = X_train.reshape((1, 1901, 456, 3))
But that seems odd: you would be feeding only one image to your network.
Edit: after clarification in the comments, Conv1D is the better fit for this type of data. Here is how to do it:
model = Sequential()
model.add(Conv1D(32, kernel_size=3,
activation='relu',
input_shape=(456,3)))
model.add(Conv1D(64, 3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3 * 3, activation='softmax'))
model.add(Reshape((3, 3)))
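A quick optional check that the output matches the (3, 3) targets:
print(model.output_shape)  # (None, 3, 3), matching Y_train's (1901, 3, 3)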
Now everything works with the architecture, and there is also no problem when compiling the NN:
batch_size = 128
epochs = 12
model.compile(
optimizer='rmsprop',
loss=tf.keras.losses.MeanSquaredError(),
metrics=['mse'],
)
model.fit(X_test, Y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
but when trying to fit, I get the following error:
ValueError: Input arrays should have the same number of samples as target
arrays. Found 476 input samples and 1901 target samples.
what am I missing here?
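Judging from the counts in the message (476 is the size of X_test, 1901 the size of Y_train), the fit call pairs the test inputs with the training targets. A sketch of the presumably intended call:
model.fit(X_train, Y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(X_test, Y_test))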
I am following the steps outlined in the tutorial here
I am attempting to run the following code from the tutorial in a cell inside of a Google Colaboratory notebook:
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
model = tf.keras.Sequential()
# Must define the input shape in the first layer of the neural network
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(28,28,1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# Take a look at the model summary
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train,
y_train,
batch_size=64,
epochs=10)
# Evaluate the model on test set
score = model.evaluate(x_test, y_test, verbose=0)
# Print test accuracy
print('\n', 'Test accuracy:', score[1])
When I run the cell, I get the following error:
Error when checking input: expected conv2d_5_input to have 4 dimensions, but got array with shape (60000, 28, 28)
I feel like I am missing something fundamental to the usage of a convolutional layer; nonetheless, it appears as though this should have worked. I found some similar questions on SO where people recommended manipulating the "input_shape" argument. I've tried changing it to (60000, 28, 28) and also adding additional dimensions with values of 1, but nothing has worked so far. Can anyone point out what I might be missing here?
It looks like you skipped the reshaping part of the tutorial:
# Reshape input data from (28, 28) to (28, 28, 1)
w, h = 28, 28
x_train = x_train.reshape(x_train.shape[0], w, h, 1)
x_valid = x_valid.reshape(x_valid.shape[0], w, h, 1)
x_test = x_test.reshape(x_test.shape[0], w, h, 1)
The idea here is that each of your samples is 28x28x1 (one color channel, 28x28 pixels), and the first dimension is the number of samples (60000 in your case).
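An equivalent one-liner, assuming x_train already has shape (60000, 28, 28):
import numpy as np
x_train = np.expand_dims(x_train, axis=-1)  # (60000, 28, 28) -> (60000, 28, 28, 1)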