Dimension value error when loading a model with Keras - Python

I am trying to denoise an image with a pre-trained model I loaded as "model". I am getting an error as a result of the dimensions being different. Here is the code I have:
import tensorflow as tf
from tensorflow.keras.preprocessing.image import img_to_array, load_img

path_clean = r"clean.png"
clean = load_img(path_clean)
path_noisy = r"noise.png"
noisy = load_img(path_noisy)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
    loss=tf.keras.losses.mean_squared_error,
    metrics=[tf.keras.metrics.mean_absolute_error])

history = model.fit(img_to_array(noisy), img_to_array(clean), epochs=50)
Here is the error I get, calling from the "history" line:
ValueError: Exception encountered when calling layer "concatenate" (type Concatenate).
Dimension 1 in both shapes must be equal, but are 113 and 114. Shapes are [?,113,1] and [?,114,2]. for '{{node model/concatenate/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](model/conv2d_6/Relu, model/up_sampling2d/resize/ResizeNearestNeighbor, model/concatenate/concat/axis)' with input shapes: [?,113,1,128], [?,114,2,128], [] and with computed input tensors: input[2] = <3>.
Call arguments received:
• inputs=['tf.Tensor(shape=(None, 113, 1, 128), dtype=float32)', 'tf.Tensor(shape=(None, 114, 2, 128), dtype=float32)']
What does it mean that one is 113 and one is 114? When I print the shapes of each image using this:
print(img_to_array(clean).shape)
print(img_to_array(noisy).shape)
I get this:
(500, 500, 3)
(500, 500, 3)
So the dimensions should be the same, right? Thanks for your help.

The error has to do with a layer inside the network failing to align the inputs it is given. The numbers you see differ because the input data undergoes a series of transformations (convolutions, pooling, upsampling) before it reaches that layer, and by then the shapes of the two branches no longer match.
Try reading the documentation for this pre-trained model to understand its constraints; it may perform some internal reshaping and expect inputs of a specific size.
When you load the model, you should also be able to inspect the graph structure to see what happens to the input up to that concatenation.
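For example (a minimal sketch, assuming the model is already loaded as `model`, e.g. via tf.keras.models.load_model, and you are on TF 2.x):

model.summary()                       # every layer with its output shape
print(model.input_shape)              # the input size the model was built for

# Walk the layers to see how the shape evolves up to the concatenate layer:
for layer in model.layers:
    print(layer.name, layer.output_shape)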

The issue is that your model relies on a certain image input size (most likely a multiple of 32, since the network downsamples and then upsamples internally). Make sure the input width and height of your images are divisible by 32.
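A minimal sketch of that fix (assuming the model accepts any RGB input whose sides are multiples of 32, 512 here, and that `model` is compiled as above; note that fit() also expects a leading batch dimension):

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import img_to_array, load_img

clean = img_to_array(load_img("clean.png"))    # (500, 500, 3); 500 is not divisible by 32
noisy = img_to_array(load_img("noise.png"))

# Resize both images to the nearest multiple of 32 above 500:
clean = tf.image.resize(clean, (512, 512)).numpy()
noisy = tf.image.resize(noisy, (512, 512)).numpy()

# Add the batch dimension fit() expects:
clean = np.expand_dims(clean, axis=0)          # (1, 512, 512, 3)
noisy = np.expand_dims(noisy, axis=0)          # (1, 512, 512, 3)

history = model.fit(noisy, clean, epochs=50)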

Related

np.array creation shape differs from original

I'm trying to test/predict my model in TensorFlow with Keras.
For now I'm using an image from the train dataset; I'll change that once it's working.
So I'm calling predict like this:
print(x[0].shape) # <- (128, 128, 3)
print(np.array(x[0])[0].shape) # <- (128, 3)
model.predict(np.array(x[0]))
But it gives me: layer model: expected shape=(None, 128, 128, 3), found shape=(32, 128, 3)
Shouldn't this work? Why does the shape change when creating the array?
You need to add an extra dimension for the batch size. For a single image the batch size would be 1. You can use np.expand_dims to add the extra dimension:
np.expand_dims(np.array(x[0]), axis=0)
model.predict always works on batches, so you need to provide test data in batches, or in other words "provide data points in rows". If you just want a prediction for one single data point, you have to either expand it as vivekpadia said or try this:
model.predict(x)
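Putting the two suggestions together, a small sketch (assuming `x` is a list or array of (128, 128, 3) images and `model` is your trained model):

import numpy as np

single_image = np.array(x[0])                        # (128, 128, 3)

# Option 1: add an explicit batch dimension -> (1, 128, 128, 3)
pred = model.predict(np.expand_dims(single_image, axis=0))

# Option 2: predict on the whole set, which already has a batch axis
preds = model.predict(np.array(x))                   # x stacks to (num_samples, 128, 128, 3)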

TensorFlow Keras dimension error for input layer

I've searched through all the solutions related to this, and I still can't figure out how to shape my training data so Tensorflow accepts it.
My training data is a numpy array of shape (21005, 48, 48), where the 21005 is number of elements and the 48,48 is a 48x48 grayscale image.
model.add(tf.keras.layers.Conv2D(64, kernel_size=3,activation='relu',input_shape=(48,48,1)))
model.add(tf.keras.layers.Conv2D(32, kernel_size=3,activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(7, activation='softmax'))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(image_train, emotion_train,batch_size=BATCH_SIZE,epochs=EPOCHS, verbose=1)
When I run the fit function, however, it returns an error stating:
ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (21005, 48, 48)
This leads me to think I'm formatting the input data incorrectly, or missing something regarding how Keras and TF actually pass the input image into the input layer. I've tried adding the extra dimension to the input shape to allow for a channel in a 2d Conv layer, as well as reshaping the images themselves to no avail. Any advice?
Reshape your training data to 4 dimensions before calling model.fit(), for example:
image_train = np.reshape(image_train, (21005, 48, 48, 1))
This is needed because the first Conv2D layer expects each image to have an input_shape of (48, 48, 1).
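Equivalently, you can add the channel axis to the whole array at once; a small sketch assuming image_train currently has shape (21005, 48, 48):

import numpy as np

# Add a trailing channel axis to the entire training array:
image_train = image_train[..., np.newaxis]        # (21005, 48, 48, 1)

# Same result without hard-coding the number of samples:
# image_train = image_train.reshape((-1, 48, 48, 1))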
During preprocessing you probably read the images in grayscale mode with a library such as OpenCV or PIL.
When read that way, the library gives you a grayscale image of shape (48, 48), not (48, 48, 1), hence the issue you have.
Fix this at the source rather than right before feeding the model: wherever you read those images, make sure the array has the right shape before appending it to your list/array. Here is an OpenCV example:
image = cv2.imread(filepath, 0)
# Before this np.expand_dims, image has shape (48, 48)
image = np.expand_dims(image, axis=2)
# After this step, image has shape (48, 48, 1)
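For completeness, a hypothetical loading loop applying that per-image fix before stacking (the path pattern here is a placeholder):

import glob
import cv2
import numpy as np

images = []
for filepath in glob.glob("faces/*.png"):          # placeholder path
    image = cv2.imread(filepath, 0)                # (48, 48) grayscale
    images.append(np.expand_dims(image, axis=2))   # (48, 48, 1)

image_train = np.stack(images)                     # (num_images, 48, 48, 1)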

Making inputs to keras RNN written in Functional API

I'm having some problems making masking work with a keras RNN written in Functional API. The idea is to mask a tensor, zero-padded, with shape (batch_size, timesteps, 100) and feed it into a SimpleRNN. Right now I have the following:
input = keras.layers.Input(shape=(None, 100))
mask_layer = keras.layers.Masking(mask_value=0.)
mask = mask_layer(input)
rnn = keras.layers.SimpleRNN(20)
x = rnn(input, mask=mask)
However, this does not work, because it raises the following InvalidArgumentError:
InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 20 and 2000. Shapes are [?,20] and [?,2000]. for 'Select' (op: 'Select') with input shapes: [?,2000], [?,20], [?,20].
By changing my Input's shape into (None, 1) - a sequential input where each element is a single integer, instead of n-dimensional embeddings - I've gotten this code to work. I've also gotten the same idea to work with the Sequential API, but I cannot do this, as my final model will have multiple inputs and outputs. I also do not want to force my Input's shape to be (None, 1), as I want to swap out different embedding models (Word2Vec, etc) during preprocessing, which means my Inputs will be embedding vectors from the start.
Can anyone help me with using masks with RNNs when using keras's functional API?
According to Masking and Padding with Keras, you don't need to manually pass a mask to the RNN layer; in the following code the RNN layer automatically receives the mask.
import keras
input_layer = keras.layers.Input(shape=(None, 100))
masked_layer = keras.layers.Masking(mask_value=0.)(input_layer)
rnn_layer = keras.layers.SimpleRNN(20)(masked_layer)
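As a quick sanity check (a sketch building directly on the snippet above), you can wrap those layers in a Model and run a zero-padded batch through it:

import numpy as np

model = keras.Model(inputs=input_layer, outputs=rnn_layer)

batch = np.random.rand(2, 5, 100).astype("float32")   # 2 sequences, 5 timesteps, 100 features
batch[:, 3:, :] = 0.0                                 # last two timesteps are padding

print(model.predict(batch).shape)                     # (2, 20); the masked timesteps are skipped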

Wrong model input when training autoencoder

I have a signal with 1024 values. When I print it in the console I get this output:
[-13.2172165 -13.0935545 -13.149217 ... -1.8910782 -1.5482559 -1.6714929]
I've been using a very basic autoencoder code from Keras website which looks like this:
encoding_dim = 32
input_signal = Input(shape=(1024,))
encoded = Dense(encoding_dim, activation='relu')(input_signal)
decoded = Dense(1024, activation='sigmoid')(encoded)
autoencoder = Model(input_signal, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.fit(my_signal, epochs=50, shuffle=True)
However, I get the following error, and I don't understand how/why (despite reading the documentation intensively):
ValueError: Error when checking input: expected input_1 to have shape (1024,) but got array with shape (1,)
I tried changing the input shape to (1024, 1), but then it told me that it expects a 3-dimensional input and (here's the weird part) that my input is an array with shape (1024, 1), even though I didn't change the input at all.
My scenario: I have 2034 arrays (each with 1024 elements) that I want to fit into my model. For now, I'm trying to make the autoencoder work with just one such array. I understand that I need to set the batch size to the number of arrays (2034 in this case).
Thanks!
I finally managed to find the answer: I had to make sure my array was wrapped in another array (i.e. given a leading batch dimension), like this:
[[-13.2172165 -13.0935545 -13.149217 ... -1.8910782 -1.5482559 -1.6714929]]
I could do it by using:
x_train = x_train.reshape((1, 1024))
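The same idea scales to the full dataset: stack the 2034 signals into a (2034, 1024) array and pass it as both input and target, since an autoencoder learns to reconstruct its own input (a sketch; `signals` is a hypothetical name for the list of arrays):

import numpy as np

x_train = np.array(signals)                       # (2034, 1024)
autoencoder.fit(x_train, x_train, epochs=50, shuffle=True)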

Do I understand batch_size correctly in Keras?

I'm using Keras' built-in inception_resnet_v2 to train a CNN to recognize images. When training the model, I have a numpy array of data as inputs, with input shape (1000, 299, 299, 3):
model.fit(x=X, y=Y, batch_size=16, ...) # Output shape `Y` is (1000, 6), for 6 classes
At first, when trying to predict, I passed in a single image of shape (299, 299, 3), but got the error
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (299, 299, 3)
I reshaped my input with:
x = np.reshape(x, ((1, 299, 299, 3)))
Now, when I predict,
y = model.predict(x, batch_size=1, verbose=0)
I don't get an error.
I want to make sure I understand batch_size correctly in both training and predicting. My assumptions are:
1) With model.fit, Keras takes batch_size elements from the input array (in this case, it works through my 1000 examples 16 samples at a time)
2) With model.predict, I should reshape my input to be a single 3D array, and I should explicitly set batch_size to 1.
Are these correct assumptions?
Also, would it be better (possible even) to provide training data to the model so that this sort of reshape before prediction was not necessary? Thank you for helping me learn this.
No, you've got the idea slightly wrong. batch_size specifies how many examples are forwarded through the network at once (usually on a GPU).
By default this value is set to 32 inside model.predict, but you may specify otherwise (as you did with batch_size=1). The error you got,
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (299, 299, 3)
is about the number of dimensions, not the batch size: the model expects 4-D input (batch, height, width, channels), while a single image is only 3-D. batch_size only controls how many of the supplied samples are processed at a time; for example, an input of shape (64, 299, 299, 3) with batch_size=64 would be processed as one batch.
EDIT:
It seems you need to reshape your single sample into a batch. I would advise you to use np.expand_dims for improved readability and portability of your code, like this:
y = model.predict(np.expand_dims(x, axis=0), batch_size=1)
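More generally, batch_size only controls how many of the supplied samples go through the network at a time; a short sketch with placeholder data:

import numpy as np

X_test = np.random.rand(100, 299, 299, 3).astype("float32")   # placeholder images
Y_pred = model.predict(X_test, batch_size=16)                 # processed 16 at a time; Y_pred is (100, 6)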
