CNN + LSTM implementation error for image classification - python

I am trying to implement a CNN + LSTM network to predict 4 different classes from sequences of x-ray images, which were preprocessed to shape 150x150x3. My x_train shape is (4067, 150, 150, 3). When I execute model.fit(), I get the error below.
# x_train = np.reshape(x_train, (4067, 150, 150, 3))
# y_train = np.reshape(y_train, (4067, 4))
model = Sequential()
model.add(TimeDistributed(Conv2D(filters=32,
                                 kernel_size=(3, 3),
                                 padding='same',
                                 activation='relu'),
                          input_shape=(None, 150, 150, 3)))
model.add(TimeDistributed(AveragePooling2D()))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(100))
model.add(Dense(24, activation='relu', name='output'))
model.add(Dense(4, activation='softmax'))

from tensorflow.keras.optimizers import Adam
optimizer = Adam(lr=0.001)
model.compile(optimizer=optimizer,
              loss='categorical_crossentropy',
              metrics=['accuracy'])

from tensorflow.keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='val_accuracy',
                              factor=0.3,
                              patience=2,
                              min_delta=0.001,
                              mode='auto',
                              verbose=1)

hist_cnn_lstm = model.fit(x_train, y_train, batch_size=64, epochs=15,
                          validation_data=(x_valid, y_valid),
                          callbacks=reduce_lr)
ERROR:
Epoch 1/15
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-24-3ec61fbabcf1> in <module>()
1 hist_cnn_lstm = model.fit(x_train, y_train, batch_size=64, epochs=15,
2 validation_data = (x_valid, y_valid),
----> 3 callbacks=reduce_lr
4 )
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is
ValueError: Input 0 of layer "sequential_1" is incompatible with the layer: expected shape=(None, None, 150, 150, 3), found shape=(None, 150, 150, 3)
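For reference while debugging: the TimeDistributed wrappers make the model expect a 5-D batch of shape (samples, timesteps, 150, 150, 3), while x_train is 4-D. Below is a minimal sketch of one way to satisfy that contract, assuming each x-ray is treated as a sequence of length 1 (grouping images into real sequences would change the first two dimensions instead):

import numpy as np

# Sketch only, not the asker's final fix: add an explicit time axis so the
# input rank matches what TimeDistributed expects.
x_train_seq = np.reshape(x_train, (4067, 1, 150, 150, 3))  # (samples, timesteps, H, W, C)
x_valid_seq = np.expand_dims(x_valid, axis=1)              # same idea for the validation set

hist_cnn_lstm = model.fit(x_train_seq, y_train, batch_size=64, epochs=15,
                          validation_data=(x_valid_seq, y_valid),
                          callbacks=[reduce_lr])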

Related

ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 163), found shape=(None, 35)

I have train_dataset: <BatchDataset element_spec=(TensorSpec(shape=(None, 35), dtype=tf.int32, name=None), TensorSpec(shape=(None, 163), dtype=tf.int64, name=None))>
and
val_dataset: <BatchDataset element_spec=(TensorSpec(shape=(None, 35), dtype=tf.int32, name=None), TensorSpec(shape=(None, 163), dtype=tf.int64, name=None))>
My model:
embedding_dim = 300
maxlen = maxlen
max_words = len(word_index)
num_tags = len(tags)
model = tf.keras.models.Sequential([
    tf.keras.layers.Embedding(max_words, embedding_dim, input_length=maxlen),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units=100, activation='tanh', return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units=100, activation='tanh', return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(num_tags, activation='softmax'))
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(train_dataset, validation_data=val_dataset, epochs=15)
The last line raises an error, which is strange since my train and validation datasets have the same shapes. Could the issue be linked to the batch size, or to something I am not doing in relation to it?
Epoch 1/15
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in <module>()
----> 1 history = model.fit(train_dataset, validation_data=val_dataset, epochs=15)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
   1145     except Exception as e:  # pylint:disable=broad-except
   1146       if hasattr(e, "ag_error_metadata"):
-> 1147         raise e.ag_error_metadata.to_exception(e)
   1148       else:
   1149         raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py",
line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py",
line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py",
line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py",
line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py",
line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py",
line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is '
ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 163), found shape=(None, 35)
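A small debugging sketch (not a guaranteed fix): the error means the model receives the 35-long tensor as its input while it was built for length 163, so it is worth printing what each side expects before changing anything. The map shown at the end is only relevant if the (x, y) order in the dataset turns out to be reversed:

for x_batch, y_batch in train_dataset.take(1):
    print("x batch:", x_batch.shape, x_batch.dtype)   # per the element_spec: (batch, 35)
    print("y batch:", y_batch.shape, y_batch.dtype)   # per the element_spec: (batch, 163)
print("model expects:", model.input_shape)

# Only if the inputs and targets were zipped in the wrong order:
# train_dataset = train_dataset.map(lambda a, b: (b, a))
# val_dataset = val_dataset.map(lambda a, b: (b, a))
# Otherwise, rebuild the model with input_length equal to the real sequence length.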

Problem with dimensions when using keras text_dataset_from_directory: I get (None,) and can't load it into a model

I am using keras.utils.text_dataset_from_directory (see code). When I reach model.fit, I get a warning about the input dimensions and an error (which I think has something to do with the output).
code:
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Rescaling
MAX_VAL = 0.5
num_classes = 2
train_ds = keras.utils.text_dataset_from_directory(
    directory='.../training_data/',
    labels='inferred',
    label_mode='categorical',
    class_names=None,
    batch_size=32,
    max_length=None,
    shuffle=True,
    seed=None,
    validation_split=None,
    subset=None,
    follow_links=False)

validation_ds = keras.utils.text_dataset_from_directory(
    directory='.../validation_data/',
    labels='inferred',
    label_mode='categorical',
    class_names=None,
    batch_size=32,
    max_length=None,
    shuffle=True,
    seed=None,
    validation_split=None,
    subset=None,
    follow_links=False)
inputs = keras.Input(shape=(None,))
x = layers.Reshape((-1, 1))(inputs)
x = Rescaling(scale=1.0 / MAX_VAL)(x)
x = layers.Dense(32, activation="softmax")(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
keras.utils.plot_model(model, "my_first_model.png")
model.compile(optimizer='Adam', loss='categorical_crossentropy')
history = model.fit(train_ds, epochs=10, validation_data=validation_ds)
output:
Epoch 1/10
WARNING:tensorflow:Model was constructed with shape (None, None) for input KerasTensor(type_spec=TensorSpec(shape=(None, None), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'"), but it was called on an input with incompatible shape (None,).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-6-3656f32a72a9> in <module>
1 model.compile(optimizer='Adam', loss='categorical_crossentropy')
----> 2 history = model.fit(train_ds, epochs=10, validation_data=validation_ds)
~\AppData\Roaming\Python\Python38\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "...\Python\Python38\site-packages\keras\engine\training.py", line 1021, in train_function *
return step_function(self, iterator)
File "...\Python\Python38\site-packages\keras\engine\training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "...\Python\Python38\site-packages\keras\engine\training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "...\Python\Python38\site-packages\keras\engine\training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "...\Python\Python38\site-packages\keras\engine\training.py", line 918, in compute_loss
return self.compiled_loss(
File "...\Python\Python38\site-packages\keras\engine\compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "...\Python\Python38\site-packages\keras\losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "...\Python\Python38\site-packages\keras\losses.py", line 245, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "...\Python\Python38\site-packages\keras\losses.py", line 1789, in categorical_crossentropy
return backend.categorical_crossentropy(
File "...\Python\Python38\site-packages\keras\backend.py", line 5083, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, 2) and (None, 1, 2) are incompatible
The inputs to the model are 1d vectors (of length ~27000) saved in .txt files:
0.101471743,0.099917953,0.103334975,0.099364908,0.099035715,...,0.097369999,0.099680934
The dataset is read from a directory formatted as:
/training_data/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
/validation_data/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
How can I get the dimensions right?
EDIT:
I saved the data as .jpg files and loaded them using image_dataset_from_directory; this fixed the issue. But I would still like to understand why I can't get the data from the .txt files properly. (I lose a lot of precision converting the float data to 8-bit integers for .jpg, and image_dataset_from_directory requires all images to be the same size, whereas I want my samples to have different sizes.)
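One explanation for the (None,) shape, with a hedged sketch of a workaround: text_dataset_from_directory yields batches of raw strings (one string per file), so the model never sees numbers at all. Assuming each .txt file holds a single comma-separated vector of floats, the strings can be parsed into float tensors, and padded_batch then allows vectors of different lengths:

import tensorflow as tf

def parse_example(text, label):
    # split "0.101,0.099,..." into pieces and convert to float32
    values = tf.strings.split(text, sep=',')
    values = tf.strings.to_number(values, out_type=tf.float32)
    return values, label

train_ds_numeric = (train_ds
                    .unbatch()          # back to one file (one string) per element
                    .map(parse_example)
                    .padded_batch(32))  # pad to the longest vector in each batch
validation_ds_numeric = (validation_ds
                         .unbatch()
                         .map(parse_example)
                         .padded_batch(32))

A model consuming this would also need to reduce over the variable-length axis (for example a pooling layer) instead of the Reshape((-1, 1)) + Dense chain above, which appears to be what produces the (None, 1, 2) output that clashes with the (None, 2) labels.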

ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 60, 5), found shape=(None, 60, 7)

regressor.add(LSTM(units = 60, activation = 'relu', return_sequences = True, input_shape = (X_train.shape[1], 5)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 70, activation = 'relu', return_sequences = True, input_shape = (X_train.shape[1], 5)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 90, activation = 'relu', return_sequences = True, input_shape = (X_train.shape[1], 5)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 140, activation = 'relu', return_sequences = True, input_shape = (X_train.shape[1], 5)))
regressor.add(Dropout(0.2))
regressor.add(Dense(units =1))
regressor.summary()
regressor.compile(optimizer = 'adam', loss='mean_absolute_error')
regressor.fit(X_train, Y_train, epochs=20, batch_size=50)
On running this code, the ValueError below was raised while I was preparing my model for prediction. Help me rectify it.
Epoch 1/20
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-26-535f5f5c29a7> in <module>()
----> 1 regressor.fit(X_train, Y_train, epochs = 20, batch_size =50)
1 frames
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is '
ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 60, 5), found shape=(None, 60, 7)
On running the code, I came across this error in my Python interpreter.
Let me know how to make the shapes compatible!
The error pretty much says it all. The first LSTM layer in your model expects a batch of time series, each having 60 timesteps and 5 features per timestep, but somehow you fed it a batch of time series each having 60 steps and 7 features. You might check your X_train.shape[2] to see if it is 7.
Also, the way you're using the output of the LSTM layers is incorrect. You might want to go through this answer and the official TensorFlow documentation to see what the outputs of an LSTM layer are when return_sequences is set to True.
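A minimal sketch of how those two points could look in code, assuming the data really has X_train.shape[2] features per timestep and the goal is a single regression value per sequence (only the last recurrent layer drops return_sequences, so that Dense(1) sees a 2-D tensor):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

regressor = Sequential()
regressor.add(LSTM(units=60, activation='relu', return_sequences=True,
                   input_shape=(X_train.shape[1], X_train.shape[2])))  # feature count taken from the data
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=70, activation='relu', return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=90, activation='relu', return_sequences=False))  # last recurrent layer
regressor.add(Dropout(0.2))
regressor.add(Dense(units=1))
regressor.compile(optimizer='adam', loss='mean_absolute_error')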

I am trying to train a model but it is not working because of a batch_size error in Google Colab

I am creating a deep learning model in Google Colab and I am running into an issue when training it; the fit call throws the error: ValueError: Shapes (None, 34, 3) and (None, 3) are incompatible
One issue
Error: IndexError: index 26 is out of bounds for axis 0 with size 26
# Store labels of dataset
labels = ['Ա', 'Բ', 'Գ', 'Դ', 'Ե', 'Զ', 'Է', 'Ը', 'Թ', 'Ժ', 'Ի', 'Լ', 'Խ', 'Ծ', 'Կ', 'Հ', 'Ձ', 'Ղ', 'Ճ', 'Մ', 'Յ', 'Ն', 'Շ', 'Ո', 'Չ', 'Ռ', 'Ս', 'Վ', 'Տ', 'Ր', 'Ց', 'Պ', 'Փ', 'Ք', 'Ֆ']
# Print the first several training images, along with the labels
fig = plt.figure(figsize=(20, 5))
for i in range(36):  # 36
    ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_train[i]))  # this line throws that error
    ax.set_title("{}".format(labels[y_train[i]]))
plt.show()
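A hedged sketch of a guard for the first issue, assuming the failing index really comes from x_train having only 26 samples along axis 0: clamp the loop to the data that is actually there instead of a hard-coded 36.

n = min(36, len(x_train), len(y_train))
fig = plt.figure(figsize=(20, 5))
for i in range(n):
    ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_train[i]))
    ax.set_title("{}".format(labels[y_train[i]]))
plt.show()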
Second issue
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Flatten, Dense
from keras.models import Sequential
from tensorflow.keras.utils import to_categorical
model = Sequential()
# First convolutional layer accepts image input
model.add(Conv2D(filters=5, kernel_size=5, padding='same', activation='relu',
                 input_shape=(50, 50, 3)))
# Add a max pooling layer
model.add(MaxPooling2D(pool_size=4))
# Add a convolutional layer
model.add(Conv2D(filters=15, kernel_size=5, padding='same', activation='relu'))
# Add another max pooling layer
model.add(MaxPooling2D(pool_size=4))
# Flatten and feed to output layer
model.add(Flatten())
model.add(Dense(3, activation='softmax'))
# Summarize the model
model.summary()

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

hist = model.fit(x_train, y_train_OH,
                 validation_split=0.20,
                 epochs=2,
                 batch_size=50)
Full Stack trace:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-8468c80f3ad0> in <module>()
3 validation_split=0.20,
4 epochs=2,
----> 5 batch_size=50)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1127 except Exception as e: # pylint:disable=broad-except
1128 if hasattr(e, "ag_error_metadata"):
-> 1129 raise e.ag_error_metadata.to_exception(e)
1130 else:
1131 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 878, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 867, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 810, in train_step
y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1665, in categorical_crossentropy
y_true, y_pred, from_logits=from_logits, axis=axis)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 4994, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, 34, 3) and (None, 3) are incompatible
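A diagnostic sketch for the second issue, under the assumption that y_train holds integer class indices: the loss error says the targets have shape (None, 34, 3) while the model outputs (None, 3), so the one-hot encoding and the final Dense layer disagree about the label shape and the number of classes.

import numpy as np
from tensorflow.keras.utils import to_categorical

print(y_train.shape, y_train_OH.shape)  # y_train_OH should be (num_samples, num_classes)
print(np.unique(y_train))               # which class indices actually occur?

num_classes = len(labels)               # 35 entries in the labels list above
y_train_OH = to_categorical(y_train, num_classes=num_classes)

# The output layer must emit one probability per class, e.g.:
# model.add(Dense(num_classes, activation='softmax'))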

Facing ValueError: Shapes (None, None) and (None, 256, 256, 12) are incompatible

I am working on transfer learning for multiclass classification of an image dataset that consists of 12 classes, so I am using VGG19. However, I am facing an error: ValueError: Shapes (None, None) and (None, 256, 256, 12) are incompatible. Moreover, I have Flatten layers too.
My code:
from tensorflow.keras.callbacks import ReduceLROnPlateau
#Learning Rate Annealer
lrr= ReduceLROnPlateau(monitor='val_acc', factor=.01, patience=3, min_lr=1e-5)
from tensorflow.keras.applications import VGG19 #For Transfer Learning
#Defining the VGG Convolutional Neural Net
base_model = VGG19(include_top = False, weights = 'imagenet')
from tensorflow.keras.layers import Flatten,Dense,BatchNormalization,Activation,Dropout
#Adding the final layers to the above base models where the actual classification is done in the dense layers
model= Sequential()
model.add(base_model)
model.add(Flatten())
# Create a `Sequential` model and add a Dense layer as the first layer.
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(256,256,3)))
model.add(tf.keras.layers.Dense(32, activation='relu'))
# Now the model will take as input arrays of shape (None, 16)
# and output arrays of shape (None, 32).
# Note that after the first layer, you don't need to specify
# the size of the input anymore:
model.add(tf.keras.layers.Dense(32))
model.output_shape
#Adding the Dense layers along with activation and batch normalization
model.add(Dense(1024,activation=('relu'),input_dim=256))
model.add(Dense(512,activation=('relu')))
model.add(Dense(128,activation=('relu')))
model.add(Dropout(.3))
#model.add(Dropout(.2))
model.add(Dense(12,activation=('softmax')))
#Checking the final model summary
model.summary()
from tensorflow.keras import optimizers
model.compile(optimizer = optimizers.Adam(learning_rate=0.5), loss='categorical_crossentropy', metrics=["accuracy"])
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
checkpoint = ModelCheckpoint("vgg16_1.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, period=1)
history = model.fit(
    train_data,
    validation_data=valid_data,
    batch_size=32,
    epochs=10,
    callbacks=[
        tf.keras.callbacks.EarlyStopping(
            monitor='val_loss',
            patience=2,
            restore_best_weights=True
        )
    ]
)
model_final.save_weights("vgg16_1.h5")
Error in details:
ValueError Traceback (most recent call last)
<ipython-input-73-c4ac91bd242e> in <module>()
10 monitor='val_loss',
11 patience=2,
---> 12 restore_best_weights=True
13 )
14 ]
9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
984 except Exception as e: # pylint:disable=broad-except
985 if hasattr(e, "ag_error_metadata"):
--> 986 raise e.ag_error_metadata.to_exception(e)
987 else:
988 raise
ValueError: in user code:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:855 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:845 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1285 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2833 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3608 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:838 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:797 train_step
y, y_pred, sample_weight, regularization_losses=self.losses)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:155 __call__
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:259 call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:1644 categorical_crossentropy
y_true, y_pred, from_logits=from_logits)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/backend.py:4862 categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py:1161 assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (None, None) and (None, 256, 256, 12) are incompatible
As @Frightera mentioned in the comments, you have defined Sequential twice.
I have to add that you don't need to complicate the model from the start; try a simple one first, because VGG19 will do most of the work for you.
Adding many Dense layers after VGG19 doesn't mean you get better scores, as the number of layers is a hyperparameter.
Also, try a small learning rate at the beginning, such as 0.1, 0.05, or 0.01.
from tensorflow import keras
from tensorflow.keras import optimizers
from tensorflow.keras.applications import VGG19  # for transfer learning
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.layers import Flatten, Dense, BatchNormalization, Activation, Dropout

lrr = ReduceLROnPlateau(monitor='val_acc', factor=.01, patience=3, min_lr=1e-5)

base_model = VGG19(weights='imagenet', input_shape=(256, 256, 3), include_top=False)

inputs = keras.Input(shape=(256, 256, 3))
x = base_model(inputs, training=False)
x = Flatten()(x)
x = Dense(32, activation='relu')(x)
outputs = Dense(12, activation='softmax')(x)
model = keras.Model(inputs, outputs)
model.summary()
model.compile(optimizer=optimizers.Adam(learning_rate=0.05), loss='categorical_crossentropy', metrics=["accuracy"])
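For completeness, a hedged sketch of how this model might then be trained, assuming train_data and valid_data yield (image, one-hot label) pairs for the 12 classes (if the labels are integer-encoded, switch the loss to sparse_categorical_crossentropy):

history = model.fit(
    train_data,
    validation_data=valid_data,
    epochs=10,
    callbacks=[lrr],
)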
