Shape error while using CNN model for image classification - Python

My error message is: Shapes (None, 1) and (None, 10) are incompatible
cnn = models.Sequential([
    layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
This is the model I am using. My X_train and y_train have 1000 samples; each X_train sample has shape (224, 224, 3), and y_train is an array of integer class labels (0, 1, 2, ...) for classification.
How do I fix this shape error?
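A likely cause, assuming the model is compiled with loss='categorical_crossentropy' (the compile call is not shown): that loss expects one-hot labels of shape (None, 10), while integer labels such as 0, 1, 2 arrive as (None, 1). Under that assumption, either switch to the sparse loss or one-hot encode y_train:
# Option 1: keep the integer labels and use the sparse loss.
cnn.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])

# Option 2: one-hot encode the labels and keep categorical_crossentropy.
from tensorflow.keras.utils import to_categorical
y_train = to_categorical(y_train, num_classes=10)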

Related

Multi Label Image Classifier Input Issues

Hello, I am trying to build a multi-label image classifier, but I am having issues with the input shape.
My features.shape is (40000, 28, 28, 1). Each photo contains two letters (from a-g) that are to be classified. I added the third dimension (1) manually because, from my understanding, Conv2D needs a 3-dimensional input shape.
labels.shape is (40000, 2), and it is an array with the two letters associated with each photo.
Here is my model:
model = keras.Sequential([
    Conv2D(32, 3, padding='same', activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(256, activation='relu'),
    Dense(7, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
When I train the model I receive the error
ValueError: `logits` and `labels` must have the same shape, received ((None, 7) vs (None, 2)).
I am assuming I need to reshape the labels or features somehow but I am not sure.
I have been trying multiple different inputs and changes to no avail. I appreciate any help on this problem.
You are doing it correctly; the problem is with the last Dense layer. Since you have a two-label output, change the last layer to Dense(2, activation='sigmoid') instead of Dense(7, activation='sigmoid'), so that the model's output shape matches labels.shape.
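Concretely, the corrected model per this answer; a sketch assuming the rest of the setup from the question, with imports shown for completeness:
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = keras.Sequential([
    Conv2D(32, 3, padding='same', activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(256, activation='relu'),
    Dense(2, activation='sigmoid')  # output size now matches labels.shape == (40000, 2)
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])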

ValueError: logits and labels must have the same shape ((None, 128, 128, 1) vs (None, 1))

I'm trying to build a simple binary image classifier.
Initially, my data looked like this:
X_train.shape: (1421, 128, 128, 3)
X_test.shape : (356, 128, 128, 3)
y_train.shape: (1421,)
y_test.shape : (356,)
I tried to reshape the data with:
X_train = X_train.reshape(-1, img_size, img_size, 3)
X_test = X_test.reshape(-1, img_size, img_size, 3)
y_train = y_train.reshape(-1, 1)
y_test = y_test.reshape(-1, 1)
and the result updated to:
X_train.shape: (1421, 128, 128, 3)
X_test.shape : (356, 128, 128, 3)
y_train.shape: (1421, 1)
y_test.shape : (356, 1)
The model:
model = Sequential([
    Dense(64, activation='relu', input_shape=(X_train[0].shape)),
    Dense(32, activation='relu'),
    Dense(32),
    Dense(1, activation='sigmoid')
])
early_stopping = keras.callbacks.EarlyStopping(
    patience=10,
    min_delta=0.001,
    restore_best_weights=True,
)
history = model.fit(
    X_train, y_train,
    validation_data=(X_test, y_test),
    batch_size=512,
    epochs=10,
    callbacks=[early_stopping],
    verbose=2
)
and got the error
ValueError: logits and labels must have the same shape ((None, 128, 128, 1) vs (None, 1))
You need to use a Flatten layer before the Dense layers: Dense applied to a 4D input of shape (None, 128, 128, 3) acts only on the last axis, so the model outputs shape (None, 128, 128, 1) instead of (None, 1).
model = tf.keras.Sequential([
    Flatten(input_shape=(128, 128, 3)),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(32),
    Dense(1, activation='sigmoid')
])
Or you can use convolutional layers in your model, as given below:
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
and then compile the model.
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])

Negative Loss In Keras Convolutional AutoEncoder

I am trying to implement the autoencoder from the Keras documentation (https://blog.keras.io/building-autoencoders-in-keras.html) that uses convolutional layers. In the example they use it on the flattened MNIST dataset (reshaped from 3 RGB channels to 1), but I want to use all 3 channels, and the dataset I am using has different dimensions. So what I tried to do is change the parts of the code so that the decoder outputs an image with dims = (128, 128, 3). The problem is that I get negative loss (deeply negative), and I do not know what is happening. This is the chunk of code where I do so:
input_img = keras.Input(shape=input_dim)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = keras.Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test, x_test))
Input dim is equal to (128, 128, 3), and my data dimensions, x_train.shape and x_test.shape, are equal to (6000, 128, 128, 3) and (1200, 128, 128, 3).
Thanks in advance!
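A likely cause, assuming x_train and x_test hold raw 0-255 pixel values (the question does not show how the data is loaded): binary_crossentropy treats the targets as probabilities in [0, 1], and with a sigmoid output and targets far above 1 the loss can go deeply negative. A minimal sketch of the fix under that assumption:
x_train = x_train.astype('float32') / 255.0  # scale pixels into [0, 1]
x_test = x_test.astype('float32') / 255.0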

WARNING:tensorflow:Model was constructed with shape

I created a model, but when I want the model to make a prediction, I get an error.
inputs = tf.keras.Input(shape=(512, 512,1))
conv2d_layer = tf.keras.layers.Conv2D(32, (2,2), padding='Same')(inputs)
conv2d_layer = tf.keras.layers.Conv2D(32, (2,2), activation='relu', padding='Same')(conv2d_layer)
bn_layer = tf.keras.layers.BatchNormalization()(conv2d_layer)
mp_layer = tf.keras.layers.MaxPooling2D(pool_size=(2,2))(bn_layer)
drop = tf.keras.layers.Dropout(0.25)(mp_layer)
conv2d_layer = tf.keras.layers.Conv2D(64, (2,2), activation='relu', padding='Same')(drop)
conv2d_layer = tf.keras.layers.Conv2D(64, (2,2), activation='relu', padding='Same')(conv2d_layer)
bn_layer = tf.keras.layers.BatchNormalization()(conv2d_layer)
mp_layer = tf.keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2))(bn_layer)
drop = tf.keras.layers.Dropout(0.25)(mp_layer)
flatten_layer = tf.keras.layers.Flatten()(drop)
dense_layer = tf.keras.layers.Dense(512, activation='relu')(flatten_layer)
drop = tf.keras.layers.Dropout(0.5)(dense_layer)
outputs = tf.keras.layers.Dense(2, activation='softmax')(drop)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name='tumor_model')
model.summary()
Train Images Shape (342, 512, 512, 1)
Train Labels Shape (342, 2)
Test Images Shape (38, 512, 512, 1)
Test Labels Shape (38, 2)
Problem Here:
pred = model.predict(test_images[12])
WARNING:tensorflow:Model was constructed with shape (None, 512, 512, 1) for input KerasTensor(type_spec=TensorSpec(shape=(None, 512, 512, 1), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'"), but it was called on an input with incompatible shape (32, 512, 1, 1).
The error is telling you that test_images.shape is (32, 512, 1, 1). Print out test_images.shape, then find out what is wrong with how you created test_images.
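Separately, note that model.predict expects a batch: indexing a single image with test_images[12] drops the batch dimension, and predict then treats the first remaining axis as the batch. A minimal sketch of the usual fix, keeping the batch dimension when selecting one sample:
pred = model.predict(test_images[12:13])  # slicing keeps a leading batch axis of 1
# or, equivalently:
pred = model.predict(tf.expand_dims(test_images[12], axis=0))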

Keras Error: Data cardinality is ambiguous

I am trying the following snippet on 64 images of size (28, 28, 1), but it throws
ValueError: Data cardinality is ambiguous
even though I think the dimensions of the tensors are correct.
loss = model_net.train_on_batch(Batch_X, Batch_Y)
print(type(Batch_X)):<class 'list'>
print(type(Batch_X[1][0])):<class 'numpy.ndarray'>
print(type(Batch_Y)):<class 'numpy.ndarray'>
print(np.shape(Batch_X)):(2, 64, 28, 28, 1)
print(np.shape(Batch_Y)):(64,)
The model is:
input_shape = (28, 28, 1)
left_input = Input(input_shape)
right_input = Input(input_shape)
model = Sequential([
    Conv2D(filters=64, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Flatten(),
    Dense(units=4096, activation='sigmoid')])
model.summary()
encoded_l = model(left_input)
encoded_r = model(right_input)
subtracted = keras.layers.Subtract()([encoded_l, encoded_r])
prediction = Dense(1, activation='sigmoid')(subtracted)
model_net = Model(inputs=[left_input, right_input], outputs=prediction)
optimizer = Adam(learning_rate=0.0006)
model_net.compile(loss='binary_crossentropy', optimizer=optimizer)
plot_model(model_net, show_shapes=True, show_layer_names=True)
In the above you can see that the model takes in 2 images simultaneously for the forward pass; thus every element in Batch_X comprises 2 matrices. Any suggestions on where I am likely making a mistake?
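A likely fix, assuming Batch_X is a plain Python list holding the two input branches as nested lists of per-image arrays (which is what the printed types suggest): Keras cannot infer a consistent sample count from nested Python lists, which triggers the cardinality error. Converting each input branch to a single ndarray makes the sample count unambiguous:
import numpy as np

left = np.asarray(Batch_X[0])   # expected shape: (64, 28, 28, 1)
right = np.asarray(Batch_X[1])  # expected shape: (64, 28, 28, 1)
loss = model_net.train_on_batch([left, right], Batch_Y)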
