WARNING:tensorflow: Model was constructed with shape

I created a model, but when I ask the model for a prediction, I get an error.
import tensorflow as tf

inputs = tf.keras.Input(shape=(512, 512, 1))
conv2d_layer = tf.keras.layers.Conv2D(32, (2,2), padding='Same')(inputs)
conv2d_layer = tf.keras.layers.Conv2D(32, (2,2), activation='relu', padding='Same')(conv2d_layer)
bn_layer = tf.keras.layers.BatchNormalization()(conv2d_layer)
mp_layer = tf.keras.layers.MaxPooling2D(pool_size=(2,2))(bn_layer)
drop = tf.keras.layers.Dropout(0.25)(mp_layer)
conv2d_layer = tf.keras.layers.Conv2D(64, (2,2), activation='relu', padding='Same')(drop)
conv2d_layer = tf.keras.layers.Conv2D(64, (2,2), activation='relu', padding='Same')(conv2d_layer)
bn_layer = tf.keras.layers.BatchNormalization()(conv2d_layer)
mp_layer = tf.keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2))(bn_layer)
drop = tf.keras.layers.Dropout(0.25)(mp_layer)
flatten_layer = tf.keras.layers.Flatten()(drop)
dense_layer = tf.keras.layers.Dense(512, activation='relu')(flatten_layer)
drop = tf.keras.layers.Dropout(0.5)(dense_layer)
outputs = tf.keras.layers.Dense(2, activation='softmax')(drop)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name='tumor_model')
model.summary()
Train Images Shape (342, 512, 512, 1)
Train Labels Shape (342, 2)
Test Images Shape (38, 512, 512, 1)
Test Labels Shape (38, 2)
The problem occurs here:
pred = model.predict(test_images[12])
WARNING:tensorflow:Model was constructed with shape (None, 512, 512, 1) for input KerasTensor(type_spec=TensorSpec(shape=(None, 512, 512, 1), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'"), but it was called on an input with incompatible shape (32, 512, 1, 1).

The warning is telling you that the model was called on input of shape (32, 512, 1, 1). test_images[12] drops the batch dimension: its shape is (512, 512, 1), so predict treats the first axis as the sample axis and feeds the 512 rows through in default batches of 32, which is where the (32, 512, 1, 1) comes from. Keep the batch dimension when selecting a single sample.
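A minimal fix, assuming test_images has shape (38, 512, 512, 1) as printed above:
import numpy as np

# Keep the batch axis: shape (1, 512, 512, 1) instead of (512, 512, 1).
sample = test_images[12:13]   # or: np.expand_dims(test_images[12], axis=0)
pred = model.predict(sample)  # pred.shape == (1, 2)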

Related

TensorFlow ValueError: Input 0 of layer "sequential" is incompatible with the layer

I am working on code that loads spectrogram data from .npy files using a custom generator passed to the from_generator function. When I start training the network, I get the error mentioned in the title.
ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 99, 43, 1), found shape=(99, 43, 1)
The spectrogram numpy arrays have the shape (99, 43, 1).
Generator code:
import numpy as np
import tensorflow as tf

def train_dataset_gen():
    for index, file_name in enumerate(train_dataset):
        X_train[index] = np.load(path + file_name)
        Y_train[index] = file_name[0:1]
        X_train[index] = np.expand_dims(X_train[index], axis=0)
        yield X_train[index], Y_train[index]

gen_train_dataset = tf.data.Dataset.from_generator(
    train_dataset_gen,
    output_types=(tf.float32, tf.uint8),
    output_shapes=((99, 43, 1), tuple())).repeat(count=-1)
gen_train_dataset.shuffle(len(train_dataset)).batch(batch_size)
Model:
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense

model = Sequential([
    Conv2D(4, 3,
           padding='same',
           activation='relu',
           kernel_regularizer=regularizers.l2(0.001),
           name='conv_layer1',
           input_shape=(99, 43, 1)),
    MaxPooling2D(name='max_pooling1', pool_size=(2, 2)),
    Conv2D(4, 3,
           padding='same',
           activation='relu',
           kernel_regularizer=regularizers.l2(0.001),
           name='conv_layer2'),
    MaxPooling2D(name='max_pooling3', pool_size=(2, 2)),
    Flatten(),
    Dropout(0.1),
    Dense(80,
          activation='relu',
          kernel_regularizer=regularizers.l2(0.001),
          name='hidden_layer1'),
    Dropout(0.1),
    Dense(len(command_words),
          activation='softmax',
          kernel_regularizer=regularizers.l2(0.001),
          name='output')
])
Model fit:
history = model.fit(
    gen_train_dataset,
    steps_per_epoch=len(X_train) // batch_size,
    epochs=epochs,
    validation_data=gen_validate_dataset,
    validation_steps=10,
)
Thank you for any suggestions!!
Tried:
To add the missing None dimension I inserted X_train[index] = np.expand_dims(X_train[index], axis=0); although the generator output changed to shape (None, 99, 43, 1), the error persists.
gen_train_dataset = gen_train_dataset.shuffle(len(train_dataset)).batch(batch_size)
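The shape in the error, (99, 43, 1) instead of (None, 99, 43, 1), means the dataset reaching model.fit is unbatched. A minimal sketch of the usual fix, assuming each .npy file holds one (99, 43, 1) spectrogram: yield plain samples (no expand_dims, which conflicts with the declared output_shapes) and keep the assignment of the batched dataset, since tf.data transformations return new datasets rather than modifying in place.
def train_dataset_gen():
    # Yield one unbatched (99, 43, 1) sample at a time.
    for file_name in train_dataset:
        x = np.load(path + file_name)  # shape (99, 43, 1)
        y = int(file_name[0:1])        # label encoded in the file name
        yield x, y

gen_train_dataset = tf.data.Dataset.from_generator(
    train_dataset_gen,
    output_types=(tf.float32, tf.uint8),
    output_shapes=((99, 43, 1), tuple())).repeat(count=-1)

# batch() adds the leading None dimension the model expects;
# the result must be assigned back.
gen_train_dataset = gen_train_dataset.shuffle(len(train_dataset)).batch(batch_size)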

shape error while using cnn model for image classification

My error message is: Shapes (None, 1) and (None, 10) are incompatible.
from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
This is the model I am using. My x_train and y_train have 1000 samples each; x_train samples have shape (224, 224, 3), and y_train is an array of integer class labels (0, 1, 2, ...) for classification.
How do I fix this shape error?
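This mismatch is between the labels and the output layer, not inside the model: categorical_crossentropy expects one-hot labels of shape (None, 10), while integer labels arrive as (None, 1). A minimal sketch of the two usual fixes, assuming y_train holds integer class indices:
# Option 1: keep integer labels and use the sparse loss.
cnn.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])

# Option 2: one-hot encode the labels to shape (None, 10)
# and keep categorical_crossentropy.
# y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)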

keras: Multi-branch non-shared weight network input issues. Full shape received: (None, None, None, None)

I tried to write a Multi-branch network with non-shared weights using keras, but there was a problem with the input of the network. I expected the input shape to be (None, 30, 64, 64, 3), but the input shape received by the network was (None, None, None, None).
Each branch network is a VGG network.
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, TimeDistributed

def vggLstmNet():
    inp = Input(shape=(flameSize, size, size, 3))  # (30, 64, 64, 3)
    x = TimeDistributed(Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu'))(inp)
    x = TimeDistributed(Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
    x = TimeDistributed(Conv2D(128, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(Conv2D(128, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
    x = TimeDistributed(Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
    x = TimeDistributed(Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
    x = TimeDistributed(Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu'))(x)
    x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
    x = TimeDistributed(Flatten())(x)
    model = Model(inputs=inp, outputs=x)
    return model
This is the fusion model.
class_models = Model(inputs=[input1.input, input2.input, input3.input, input4.input, input5.input, input6.input], outputs=x)
The error occurs at the input of each branch network. The error information is as follows:
WARNING:tensorflow:Model was constructed with shape (None, 30, 64, 64, 3) for input KerasTensor(type_spec=TensorSpec(shape=(None, 30, 64, 64, 3), dtype=tf.float32, name='input_2'), name='input_2', description="created by layer 'input_2'"), but it was called on an input with incompatible shape (None, None, None, None).
(the same warning is repeated for input_3 through input_7)
ValueError: Input 0 of layer time_distributed_95 is incompatible with the layer: expected ndim=5, found ndim=4. Full shape received: (None, None, None, None)
The input to the network is a data generator. Each batch it generates is a list of six elements, each of which is an array of shape (30, 64, 64, 3).
How can I solve this problem?
Your data generator must provide a list of 6 tensors of shape (batch_size, 30, 64, 64, 3)
I have solved the problem. For multi-input networks fed from a generator, the generator must return an (inputs, targets) tuple whose inputs correspond one-to-one to the inputs of the network, for example a dict keyed by the input layers' names.
The generator returns:
yield {'in1': t1, 'in2': t2, 'in3': t3, 'in4': t4, 'in5': t5, 'in6': t6}, {'out': labels}
model:
class_models = Model(inputs=[inp1, inp2, inp3, inp4, inp5, inp6], outputs=x)
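A minimal self-contained sketch of that pattern, with hypothetical input names 'in1' through 'in6' and output name 'out' matching the dict keys above (the branch bodies are collapsed to Flatten for brevity, with placeholder data):
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten, Concatenate, Dense

# Each Input layer's name must match a key in the dict the generator yields.
inputs = [Input(shape=(30, 64, 64, 3), name=f'in{i + 1}') for i in range(6)]
x = Concatenate()([Flatten()(inp) for inp in inputs])
out = Dense(2, activation='softmax', name='out')(x)
class_models = tf.keras.Model(inputs=inputs, outputs=out)

def gen(batch_size=4):
    while True:
        xs = {f'in{i + 1}': np.zeros((batch_size, 30, 64, 64, 3), np.float32)
              for i in range(6)}
        ys = {'out': np.zeros((batch_size, 2), np.float32)}
        yield xs, ys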

Training a CNN with multiple input 3D-arrays in keras

I need to train a 3D U-Net model with (128x128x128) patches of 42 CT scans.
My input data is 128x128x128 for both the CT scans and the masks.
I extended the shape of the arrays to (128, 128, 128, 1), where 1 is the channel dimension.
The problem is how to feed the model with my list of 42 4D arrays.
How can I use the model.fit() or model.train_on_batch with the correct input shape specified in my Model?
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv3D, MaxPooling3D
from tensorflow.keras.optimizers import Adam

project_name = '3D-Unet Segmentation of Lungs'
img_rows = 128
img_cols = 128
img_depth = 128
# smooth = 1

K.set_image_data_format('channels_last')
# corresponds to inputs with shape:
# (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)

def get_unet():
    inputs = Input(shape=(img_depth, img_rows, img_cols, 1))
    conv1 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(inputs)
    conv1 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling3D(pool_size=(2, 2, 2))(conv1)
    conv2 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(pool1)
    conv2 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling3D(pool_size=(2, 2, 2))(conv2)
    ....
    model = Model(inputs=[inputs], outputs=[conv10])
    model.summary()
    # plot_model(model, to_file='model.png')
    model.compile(optimizer=Adam(lr=1e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.000000199),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model
For a list of arrays as input, what should I specify in either .train_on_batch() or .fit()?
This is the error I get when using the .train_on_batch option:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 42 arrays
model.train_on_batch(train_arrays_list, mask_arrays_list)
This is the error I get when using the model.fit option, after having expanded the shape of the arrays along axis=0.
UnboundLocalError: local variable 'batch_index' referenced before assignment
model.fit(train_arrays_list[0], mask_arrays_list[0],
          batch_size=1,
          epochs=50,
          verbose=1,
          shuffle=True,
          validation_split=0.10,
          callbacks=[model_checkpoint, csv_logger])
You have to transform your list of numpy arrays of shape (128, 128, 128, 1) into a stacked 5-dimensional numpy array of shape (42, 128, 128, 128, 1). You can do this with: model.fit(np.array(train_arrays_list), np.array(mask_arrays_list), batch_size=1, ...)
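A minimal sketch of that stacking step, assuming train_arrays_list and mask_arrays_list each hold 42 arrays of shape (128, 128, 128, 1):
import numpy as np

X = np.stack(train_arrays_list, axis=0)  # shape (42, 128, 128, 128, 1)
y = np.stack(mask_arrays_list, axis=0)   # shape (42, 128, 128, 128, 1)
model.fit(X, y, batch_size=1, epochs=50, validation_split=0.10)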

How to match the output shape of generator and the input shape of discriminator in GAN?

I am working on my first GAN model. I followed TensorFlow's official documentation using the MNIST dataset and ran it smoothly. I then tried to replace MNIST with my own dataset, which I prepared to match the same size as MNIST (28 * 28), and it works.
However, my dataset is more complicated than MNIST, so I tried to make the image size of my dataset larger: 512 * 512, but I keep getting errors related to input and output shape. I couldn't figure out the relationship between all these input and output shapes of the discriminator and generator. Assuming I want to change my dataset from 28 * 28 (MNIST size) to y*y (custom size), which input/output shapes exactly do I need to tune in these layers, and why? Could anyone clarify this flow?
This is my code where I reshape my dataset to match the MNIST size:
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
and here I normalize it :
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
This is the generator model, where the output shape of the last layer indicates 28 * 28:
import tensorflow as tf
from tensorflow.keras import layers

def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)  # Note: None is the batch size
    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)
    return model
This is the discriminator model, where the input shape of the first layer indicates 28 * 28:
def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                            input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Flatten())
    model.add(layers.Dense(1))
    return model
Here is the formula for calculating the output shape of Conv2DTranspose, which you can think of as a learnable way of upsampling:
# padding='same':
H = H1 * stride
# padding='valid':
H = (H1 - 1) * stride + HF
where H = output size, H1 = input size, HF = height of filter. From "how-to-calculate-the-output-shape-of-conv2d-transpose".
So the input and output shapes of Conv2DTranspose are:
(None, h1, h2, channels)
        ||
Conv2DTranspose(num_filters, (kernel_h1, kernel_h2), strides=(s1, s2), padding='same')
        ||
(None, h1*s1, h2*s2, num_filters)
where None is the batch_size.
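Applying the padding='same' formula to the generator above: 7 * 1 = 7, then 7 * 2 = 14, then 14 * 2 = 28, which matches the three assert statements in make_generator_model.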
To just make the code runnable, you can change the output size of your first Dense layer to 8*8*256 and repeat the Conv2DTranspose -> BatchNormalization -> LeakyReLU block until the output becomes (512, 512, 1) for grayscale or (512, 512, 3) for RGB.
For the discriminator, the only necessary change is the input_shape of the first layer, since Conv2D layers place no constraint on the spatial size of their input.
However, the above changes don't guarantee good results from your model. You really have to look into your task to decide what your model architecture should be.
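For concreteness, a minimal sketch of a 512 * 512 grayscale generator following that recipe; the six stride-2 blocks double the spatial size from 8 up to 512, and the filter counts are illustrative, not a tuned architecture:
def make_generator_model_512():
    model = tf.keras.Sequential()
    model.add(layers.Dense(8*8*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((8, 8, 256)))
    # Six stride-2 blocks: 8 -> 16 -> 32 -> 64 -> 128 -> 256 -> 512.
    for filters in (256, 128, 64, 32, 16, 8):
        model.add(layers.Conv2DTranspose(filters, (5, 5), strides=(2, 2),
                                         padding='same', use_bias=False))
        model.add(layers.BatchNormalization())
        model.add(layers.LeakyReLU())
    # Final stride-1 layer maps to one grayscale channel in [-1, 1].
    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(1, 1), padding='same',
                                     use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 512, 512, 1)
    return model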
If your image shape is (64, 64, 3), the output of your generator must be (64, 64, 3), which is also the input shape of the discriminator.
