How to fix an error with Keras Flatten layers?

This is my code:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(1, 11)),
    keras.layers.Dense(4, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
My data is 1000 rows with 11 columns (11 inputs for the model), so to build the input layer of the network I used Flatten. This gives me the following warning:
WARNING:tensorflow:Model was constructed with shape (None, 1, 11) for input KerasTensor(type_spec=TensorSpec(shape=(None, 1, 11), dtype=tf.float32, name='flatten_1_input'), name='flatten_1_input', description="created by layer 'flatten_1_input'"), but it was called on an input with incompatible shape (None, 11).

It seems like your input already has shape (num_samples, 11), so you don't need to flatten it. Taking out the Flatten layer should fix this.
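A minimal sketch of the corrected model (the Dense layer sizes are kept from the question; this assumes the training data is a plain (1000, 11) array):
model = keras.Sequential([
    keras.layers.Dense(4, activation='relu', input_shape=(11,)),
    keras.layers.Dense(10, activation='softmax')
])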


How to Feed Tensor Dataset to Model

I am new to TensorFlow and trying to figure out how to build a simple text classification model. Taking a basic model from this tutorial, I am trying to adapt it to my own custom dataset.
I have tensors with shape=(32, 2, 500) grouped into training and validation datasets with shape=(None, 2, 500).
def get_model(max_features=20000, embedding_dim=128):
    # An integer input for vocab indices.
    inputs = tf.keras.Input(shape=(None,), dtype="int64")
    # Next, we add a layer to map those vocab indices into a space of
    # dimensionality 'embedding_dim'.
    x = layers.Embedding(max_features, embedding_dim)(inputs)
    x = layers.Dropout(0.5)(x)
    # Conv1D + global max pooling
    x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
    x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
    x = layers.GlobalMaxPooling1D()(x)
    # We add a vanilla hidden layer:
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    # We project onto a single unit output layer, and squash it with a sigmoid:
    predictions = layers.Dense(1, activation="sigmoid", name="predictions")(x)
    model = tf.keras.Model(inputs, predictions)
    # Compile the model with binary crossentropy loss and an adam optimizer.
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model
I get the following warning:
WARNING:tensorflow:Model was constructed with shape (None, None) for input KerasTensor(type_spec=TensorSpec(shape=(None, None), dtype=tf.int64, name='input_16'), name='input_16', description="created by layer 'input_16'"), but it was called on an input with incompatible shape (None, 2, 500).
And the following error message:
Input 0 of layer "global_max_pooling1d_6" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 2, 53, 128)
Call arguments received by layer "model_7" (type Functional):
• inputs=tf.Tensor(shape=(None, 2, 500), dtype=int64)
• training=True
• mask=None
What do I need to change to get rid of this error and get the model working?
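A hedged sketch of one possible fix: since the model's Input(shape=(None,)) expects a single token sequence per example, the two rows of each (2, 500) tensor could be merged into one 1000-token sequence, assuming both rows belong to the same example (an assumption about the data layout; the dataset names train_ds and val_ds are hypothetical):
# Hypothetical: flatten each (2, 500) pair into one 1000-token sequence,
# so batches match the model's (None, None) input signature.
train_ds = train_ds.map(lambda x, y: (tf.reshape(x, (tf.shape(x)[0], -1)), y))
val_ds = val_ds.map(lambda x, y: (tf.reshape(x, (tf.shape(x)[0], -1)), y))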

Tensorflow Keras ValueError on input shape

I am doing a simple Conv1D using TensorFlow Keras to try out a time-series dataset.
Data:
train_df = dff[:177]    # get train data
tdf = train_df.shape    # shape = (177, 4)
test = tf.convert_to_tensor(train_df)
Model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(filters=32,
                           kernel_size=1,
                           strides=1,
                           padding="causal",
                           activation="relu",
                           input_shape=tdf),
    tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding="valid")
])

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(5e-4,
                                                             decay_steps=1000000,
                                                             decay_rate=0.98,
                                                             staircase=False)
model.compile(loss=tf.keras.losses.MeanSquaredError(),
              optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.8),
              metrics=['mae'])
model.summary()
Summary:
Model: "sequential_13"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_16 (Conv1D)           (None, 177, 32)           160
_________________________________________________________________
max_pooling1d_8 (MaxPooling1 (None, 176, 32)           0
=================================================================
Total params: 160
Trainable params: 160
Non-trainable params: 0
Fit:
trainedModel = model.fit(test,
                         epochs=100,
                         steps_per_epoch=1,
                         verbose=1)
Error raised at fit:
ValueError: Input 0 of layer sequential_13 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (2, 1)
From various Stack Overflow answers, it was said that this is due to the input data shape, so I tried a recommendation from one of them to reshape my data and pass it in again.
Reshape:
X_train=np.reshape(test,(test.shape[0], test.shape[1],1))
Error raised at fit after reshaping:
ValueError: Input 0 of layer sequential_14 is incompatible with the layer: expected axis -1 of input shape to have value 4 but received input with shape (177, 4, 1)
I am at a loss here. What is the way to tackle this?
Current parameter values:
tdf = (177, 4)
My assumption: "You have 4 features for 177 training samples."
The reason for the current error: the model assumes that each sample has shape (177, 4), but when you try to pass the data to the model, this error appears:
ValueError: Input 0 of layer sequential_13 is incompatible with the layer: :
expected min_ndim=3, found ndim=2. Full shape received: (2, 1)
This error says that the model expects the input to be 3D, i.e. to include a batch dimension. Think of a batch of 16 images of height=177 and width=4 (you don't have images, but the model expects this shape because of how you specified your input shape). That means the input should have shape (batch_size, 177, 4).
This could be solved by passing a batch_size parameter to model.fit, as below (without reshaping the data):
trainedModel = model.fit(test,
                         epochs=100,
                         steps_per_epoch=1,
                         batch_size=16,
                         verbose=1)
But this will give another error, shown below:
ValueError: Input 0 of layer sequential_1 is incompatible with the layer: :
expected min_ndim=3, found ndim=2. Full shape received: (None, 4)
Now this error means that the input passed to the model had some batch size (represented by None) and a feature vector of shape 4, but the model expects the input to have shape (batch_size, height, width). The batch_size is expected by every model, while the other two dimensions are the ones we specified when defining the input shape. We defined them here:
tf.keras.layers.Conv1D(filters=32,
                       kernel_size=1,
                       strides=1,
                       padding="causal",
                       activation="relu",
                       input_shape=tdf),  # Here we set input_shape = (177, 4)
As you can see, the input_shape has been defined as height=177, width=4 (I use height and width for easier explanation; in reality there is no height/width, only dimension numbers). But we wanted the model to take an input of 4 features, so we have to change it as below:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(filters=32,
                           kernel_size=1,
                           strides=1,
                           padding="causal",
                           activation="relu",
                           input_shape=(4,)),
    tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding="valid")
])
But now when you try to run this, you will get another error:
ValueError: Input 0 of layer conv1d_10 is incompatible with the layer: :
expected min_ndim=3, found ndim=2. Full shape received: (None, 4)
The thing to note is that the error arises from the Conv1D layer, because that layer expects a 3D input including the batch_size. We never specify the batch_size when creating the model; the input_shape parameter should have a value like input_shape = (dim1, dim2), but in our case we only have 4 features, hence only dim1 and no dim2. In this case we reshape our input so that the 4 features become (4, 1), which gives dim1 = 4 and dim2 = 1, and we update our model as below:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(filters=32,
                           kernel_size=1,
                           strides=1,
                           padding="causal",
                           activation="relu",
                           input_shape=(4, 1)),
    tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding="valid")
])
Now we also reshape our inputs to have shape (177, 4, 1):
train_df = dff[:177]
train_df = train_df.values.reshape(177, 4, 1)
test = tf.convert_to_tensor(train_df)
Now we can pass it into the model:
trainedModel = model.fit(test,
                         epochs=100,
                         steps_per_epoch=1,
                         verbose=1)
Sadly, this will give yet another error:
ValueError: No gradients provided for any variable: ['conv1d_14/kernel:0',
'conv1d_14/bias:0'].
This is because the model doesn't get any Y corresponding to your input X, so it can't compute gradients from the loss function and hence can't train. But it can still be used to get outputs:
preds = model(test)
preds.shape # Result -> TensorShape([177, 3, 32])
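For completeness, a minimal sketch of how training could proceed once targets are provided; the targets below are random placeholders purely to illustrate the mechanics of fit(), matching the model's (177, 3, 32) output shape:
import numpy as np

# Placeholder targets with the model's output shape, only to show that
# fit() now runs; substitute the real regression targets here.
y_dummy = np.random.rand(177, 3, 32).astype("float32")
trainedModel = model.fit(test, y_dummy, epochs=10, verbose=1)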

Add GlobalAveragePooling2D (before ResNet50)

I'm trying to build a model using ResNet50 for image classification into 6 classes, and I want to reduce the dimensions of the images before using them to train the ResNet50 model. To do this I start by creating a ResNet50 model using the one available in Keras:
ResNet = ResNet50(
    include_top=None, weights='imagenet', input_tensor=None,
    input_shape=([64, 109, 3]), pooling=None, classes=6)
Then I create a sequential model that includes ResNet50, adding some final layers for the classification and also a first layer for dimensionality reduction before ResNet50. (About the input shape: the images I'm using have dimensions 128x217, and the 3 is for the channels that ResNet needs.)
model = models.Sequential()
model.add(GlobalAveragePooling2D(input_shape=([128, 217, 3])))
model.add(ResNet)
model.add(GlobalAveragePooling2D())
model.add(Dense(units=512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=6, activation='softmax'))
But this doesn't work, because the dimension after the first global average pooling doesn't match the input shape of the ResNet. The error I get is:
WARNING:tensorflow:Model was constructed with shape (None, 64, 109, 3) for input Tensor("input_6:0", shape=(None, 64, 109, 3), dtype=float32), but it was called on an input with incompatible shape (None, 3).
ValueError: Input 0 of layer conv1_pad is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: [None, 3]
I think I understand what the problem is, but I don't know how to fix it, since (None, 3) is not a valid input shape for ResNet50. How can I fix this? Thank you! :)
You should first understand what GlobalAveragePooling actually does. This layer cannot be applied right after the input, because it only gives the average value over all spatial positions of each image, one per channel (in your case 3 values, because you have 3 channels).
You have to use another method to reduce the size of the images (e.g., simply resizing them to a smaller size).
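A minimal sketch of that approach, assuming a tf.keras version that ships a Resizing layer (available as tf.keras.layers.Resizing in recent releases, or under layers.experimental.preprocessing in older ones):
from tensorflow.keras import layers, models

model = models.Sequential()
# Downsample the 128x217 images to the 64x109 input size the ResNet was built for
model.add(layers.Resizing(64, 109, input_shape=(128, 217, 3)))
model.add(ResNet)
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(units=512, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(units=256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(units=6, activation='softmax'))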

Re-called on a Tensor with incompatible shape

I'm trying to create a CNN + regression model with the code below:
# Create the base model from the pre-trained model MobileNet V2
cnn_model = keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                           include_top=False,
                                           weights='imagenet')

# The regression model
regression_model = keras.Sequential([
    keras.layers.Dense(64, activation='relu',
                       input_shape=cnn_model.output_shape),
    keras.layers.Dense(64, activation='relu')
])

prediction_layer = tf.keras.layers.Dense(1)

# Final model
model = keras.Sequential([
    cnn_model,
    regression_model,
    prediction_layer
])
Now the issue is that I get the WARNING below:
WARNING:tensorflow:Model was constructed with shape
Tensor("dense_12_input:0", shape=(None, None, 7, 7, 1280),
dtype=float32) for input (None, None, 7, 7, 1280), but it was
re-called on a Tensor with incompatible shape (None, 7, 7, 1280).
Does anyone know why this warning is coming up and how I can fix it, or whether it's harmless?
It seems as though adding a Flatten layer after the CNN solved my problem, since we want to pass a flattened vector to the fully connected layers. The model should look like:
model = keras.Sequential([
    cnn_model,
    keras.layers.Flatten(),
    regression_model,
    prediction_layer
])
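As a side note, the warning itself stems from input_shape=cnn_model.output_shape, which includes the batch dimension (None). With Flatten in place, the regression model can also be given an explicit batch-free input shape; a sketch assuming MobileNetV2's usual (7, 7, 1280) feature map:
regression_model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(7 * 7 * 1280,)),
    keras.layers.Dense(64, activation='relu')
])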

ValueError: Error when checking target: expected dense_13 to have shape (None, 6) but got array with shape (6, 1)

I am training a classification network with training data which has X.shape = (1119, 7) and Y.shape = (1119, 6). Below is my simple Keras network with an output dim of 6 (the size of the labels). The error which is returned is below the code.
hidden_size = 128
model = Sequential()
model.add(Embedding(7, hidden_size))
# model.add(LSTM(128, input_shape=(1, 7)))
model.add(LSTM(hidden_size, return_sequences=True))
model.add(LSTM(hidden_size, return_sequences=True))
model.add(Dense(output_dim=6, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=["categorical_accuracy"])
ValueError: Error when checking target: expected dense_13 to have shape (None, 6) but got array with shape (6, 1)
I would prefer not to do this in TensorFlow because I am just prototyping, yet this is my first run at Keras and I am confused about why it cannot take this data. I attempted to reshape the data in a number of ways, none of which worked. Any advice as to why this isn't working would be greatly appreciated.
You should probably remove the parameter return_sequences=True from your last LSTM layer. When using return_sequences=True, the output of the LSTM layer has shape (seq_len, hidden_size). Passing this on to a Dense layer gives you an output shape of (seq_len, 6), which is incompatible with your labels. If you instead omit return_sequences=True, then your LSTM layer returns shape (hidden_size,) (it only returns the last element of the sequence) and subsequently your final Dense layer will have output shape (6,) like your labels.
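A minimal sketch of the corrected model, keeping hidden_size and the compile settings from the question (the legacy output_dim keyword is replaced by the positional units argument used by current Keras):
model = Sequential()
model.add(Embedding(7, hidden_size))
model.add(LSTM(hidden_size, return_sequences=True))
model.add(LSTM(hidden_size))  # no return_sequences: only the final state is passed on
model.add(Dense(6, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['categorical_accuracy'])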
