Predicting a single PNG image using a trained TensorFlow model - python

import tensorflow as tf
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
This is the code for the model, which I have trained on the MNIST dataset. I then want to pass a 28x28 PNG image to the predict() method, but it is not working. The code for the prediction is:
import imageio

img = imageio.imread('image_0.png')
prediction = model.predict(img, batch_size=1)
which produces the error
ValueError: Error when checking input: expected flatten_input to have shape (28, 28) but got array with shape (28, 3)
I have been stuck on this problem for a few days and can't find the correct way to pass an image into the predict method. Any help?

The predict function makes predictions over a batch of images, so you need to include a batch dimension (the first dimension) in your img even when predicting a single example.
You need something like this:
import numpy as np

img = imageio.imread('image_0.png')
img = np.expand_dims(img, axis=0)
prediction = model.predict(img)
As @desertnaut says, it seems you are using an RGB image, so your first layer should use input_shape=(28, 28, 3); the img argument to predict should then have shape (1, 28, 28, 3).
In your case, img has shape (28, 28, 3), so predict took the first dimension as the number of images and could not match the remaining (28, 3) dimensions to the input_shape of the first layer.
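Alternatively, since the model was trained on grayscale MNIST digits, you can keep input_shape=(28, 28) and convert the PNG to grayscale before predicting. A minimal sketch, assuming MNIST-style preprocessing (pixel values scaled to [0, 1]; depending on how the digit was drawn you may also need to invert it, since MNIST digits are white on black):

import numpy as np
import imageio

img = imageio.imread('image_0.png')   # RGB PNG -> shape (28, 28, 3)
img = img.mean(axis=-1)               # collapse the channels to grayscale -> (28, 28)
img = img.astype('float32') / 255.0   # assumption: scale like the MNIST training data
img = np.expand_dims(img, axis=0)     # add the batch dimension -> (1, 28, 28)

logits = model.predict(img)           # the final Dense(10) has no softmax, so these are logits
predicted_digit = np.argmax(logits, axis=1)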

Related

How to get Layer Outputs of CNN

I built a VGG16 model and trained it. I would like to see the output of the softmax layer (the prediction probabilities) of this model for test images. I searched for answers and tried the code below. It gives this error:
InvalidArgumentError: 2 root error(s) found.
(0) INVALID_ARGUMENT: transpose expects a vector of size 3. But input(1) is a vector of size 4
[[{{node conv2d_26/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
[[conv2d_29/Relu/_311]]
(1) INVALID_ARGUMENT: transpose expects a vector of size 3. But input(1) is a vector of size 4
[[{{node conv2d_26/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
0 successful operations. 0 derived errors ignored.
Here is the code snippet. I tried a (224, 224, 3) test image and an array version of that image for the "image" variable; both give the same error. Any help is highly appreciated.
def get_all_outputs(model, input_data, learning_phase=1):
    outputs = [layer.output for layer in model.layers[1:]]  # exclude Input
    layers_fn = K.function([model.input, K.learning_phase()], outputs)
    return layers_fn([input_data, learning_phase])

outputs = get_all_outputs(model, image, 1)
If you just want the prediction score of an image, model.predict passes the input image through the model and returns that score:
model.predict(image)
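Note that predict here also expects a batch dimension; for a single (224, 224, 3) test image, something like the sketch below (assuming image is a NumPy array):

import numpy as np

probs = model.predict(np.expand_dims(image, axis=0))  # input shape (1, 224, 224, 3)
print(probs)  # softmax probabilities, one row per image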
First save the model and then load it like this:

from tensorflow.keras.models import load_model, save_model

# Save the model
filepath = './saved_model'
save_model(model, filepath)

# Load the model
model = load_model(filepath)
Then get the output for a test image with code like the following:

# Generate predictions for samples
predictions = model.predict(samples_to_predict)
print(predictions)

# Generate arg maxes for the predictions
classes = np.argmax(predictions, axis=1)
print(classes)
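For the original goal of inspecting intermediate layer outputs, a Keras-native alternative to the K.function approach is to build a second Model that shares the trained layers. A minimal sketch, assuming model and image are the ones from the question:

import numpy as np
import tensorflow as tf

# A second model that shares the trained layers and returns every layer's output
extractor = tf.keras.Model(inputs=model.input,
                           outputs=[layer.output for layer in model.layers])
layer_outputs = extractor.predict(np.expand_dims(image, axis=0))  # batch dim required
softmax_probs = layer_outputs[-1]  # the final (softmax) layer's output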

Keras model with TensorFlow TFRecord Dataset error -- rank is undefined

I'm using a fairly standard TFRecord dataset. The records are Example protobufs. The "image" feature is a 28 by 28 tensor serialised by tf.io.serialize_tensor.
feature_description = {
    "image": tf.io.FixedLenFeature((), tf.string),
    "label": tf.io.FixedLenFeature((), tf.int64)}
image_shape = (28, 28)
def preprocess(example):
    example = tf.io.parse_single_example(example, feature_description)
    image, label = example["image"], example["label"]
    image = tf.io.parse_tensor(image, out_type=tf.float64)
    return image, label
batch_size = 32
dataset = tf.data.TFRecordDataset("data/train.tfrecord")\
    .map(preprocess).batch(batch_size).prefetch(1)
However, I have the following simple Keras model:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=image_shape))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
and whenever I try to fit or predict this model with the dataset
model.fit(dataset)
model.predict(dataset)
I get the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: its rank is undefined, but the layer requires a defined rank.
Strangely, if I instead create an equivalent dataset via tf.data.Dataset.from_tensor_slices(images), although it yields exactly the same items, the error does not occur.
The model needs to infer a single input shape. But preprocess parses serialised image tensors of any shape, and this is done on the fly as records are streamed, so it is not possible to infer an input shape for all of the data.
This is easily fixed by adding tf.ensure_shape, a TensorFlow op that asserts the tensor shape at runtime and records it statically:
def preprocess(example):
    example = tf.io.parse_single_example(example, feature_description)
    image, label = example["image"], example["label"]
    image = tf.io.parse_tensor(image, out_type=tf.float64)
    image = tf.ensure_shape(image, image_shape)  # THE FIX
    return image, label
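If you only need the static shape information and not the runtime check, a lighter-weight alternative is Tensor.set_shape:

def preprocess(example):
    example = tf.io.parse_single_example(example, feature_description)
    image = tf.io.parse_tensor(example["image"], out_type=tf.float64)
    image.set_shape(image_shape)  # static shape hint only; no runtime assertion
    return image, example["label"]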

Shape mismatch, 2D Input & 2D Labels

I want to create a neural network that, simply put, creates an image out of an image (grayscale).
I have successfully created a dataset of 3200 examples of input and output (label) images.
(I know the dataset should be larger, but that is not the problem right now.)
The input [Xin] has shape (3200, 50, 30), since each image is 50*30 pixels.
The output [yout] has shape (3200, 30, 20), since each image is 30*20 pixels.
I want to try out a fully connected network (later on, a CNN).
The fully connected model is built like this:
# 5 Create Model
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(30*20, activation=tf.nn.relu))

# compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# 6 Train the model
model.fit(Xin, yout, epochs=1)  # train the model
After that I get the following error:
ValueError: Shape mismatch: The shape of labels (received (19200,)) should equal the shape of logits except for the last dimension (received (32, 600)).
I already tried to flatten yout:
youtflat = yout.transpose(1,0,2).reshape(-1,yout.shape[1]*yout.shape[2])
but this resulted in the same error
It appears you're flattening your labels (yout) completely, i.e., losing the batch dimension. If your original yout has shape (3200, 30, 20), you should reshape it to (3200, 30*20), which equals (3200, 600):
yout = yout.reshape(3200, 600)
Then it should work.
NOTE
The suggested fix only removes the error, however. I see several problems with your approach. For the task you're trying to perform (producing an image as output), you cannot use sparse_categorical_crossentropy as the loss or accuracy as a metric; use 'mse' or 'mae' instead.
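A minimal sketch of the corrected setup, assuming the pixel values are scaled to [0, 1] (hence the sigmoid output layer; use a linear activation instead if your targets are unbounded):

import tensorflow as tf

yout = yout.reshape(3200, 30 * 20)  # flat label vectors, shape (3200, 600)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(50, 30)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(30 * 20, activation='sigmoid'),  # assumption: targets in [0, 1]
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(Xin, yout, epochs=1)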

TensorFlow Keras dimension error for input layer

I've searched through all the solutions related to this, and I still can't figure out how to shape my training data so TensorFlow accepts it.
My training data is a numpy array of shape (21005, 48, 48), where 21005 is the number of examples and each 48x48 slice is a grayscale image.
model.add(tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu', input_shape=(48, 48, 1)))
model.add(tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(7, activation='softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(image_train, emotion_train, batch_size=BATCH_SIZE, epochs=EPOCHS, verbose=1)
When I run the fit function, however, it returns an error stating:
ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (21005, 48, 48)
This leads me to think I'm formatting the input data incorrectly, or missing something about how Keras and TF actually pass the input image into the input layer. I've tried adding the extra dimension to the input shape to allow for a channel in the 2D Conv layer, as well as reshaping the images themselves, to no avail. Any advice?
Reshape your training data to have 4 dimensions before calling model.fit(), for example:
image_train = np.reshape(image_train, (21005, 48, 48, 1))
This is needed because the first Conv2D layer expects each image to have an input_shape of (48, 48, 1).
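Equivalently, you can add the trailing channel axis without hard-coding the dimensions:

image_train = np.expand_dims(image_train, axis=-1)  # (21005, 48, 48) -> (21005, 48, 48, 1)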
During preprocessing, you probably read the images in grayscale mode with a library such as OpenCV or PIL.
When read that way, a grayscale image has shape (48, 48), not (48, 48, 1), hence the issue you're seeing.
Fix the shape as early as possible, not just before feeding the model: wherever you read those images, ensure the array has the right shape before appending it to your list/array. Below is an OpenCV example:
import cv2
import numpy as np

image = cv2.imread(filepath, 0)  # flag 0 reads in grayscale mode
# Before np.expand_dims, image has shape (48, 48)
image = np.expand_dims(image, axis=2)
# After this step, image has shape (48, 48, 1)

How to apply a previously unseen image to a previously saved model?

I wanted to know how to apply a previously unseen image to a previously saved CNN model and see how it classifies it?
Code (My attempt)
from keras.models import load_model
from keras.preprocessing import image
import numpy as np

img_path = '/Users/eoind/code/1.jpg'
model = load_model('food.h5')
model.summary()

img = image.load_img(img_path, target_size=(100, 100))
img_array = image.img_to_array(img)  # renamed so the 'image' module is not shadowed
img_array = np.expand_dims(img_array, axis=0)
print(img_array.shape)

images = np.vstack([img_array])
print("classifying images..")
image_class = model.predict_classes(images)
print(image_class)
IPython Console Error
ValueError: Error when checking : expected dense_1_input to have 2 dimensions, but got array with shape (1, 100, 100, 3)
The error says that the input shape of your network does not match the shape of your image. It seems the first layer of your network is a Dense layer, which is fully connected and expects input of shape (batch_size, num_features), but you are giving it an image of shape (batch_size, height, width, channels). Here is a checklist for troubleshooting your problem:
Has the model at least loaded? If the above error happens during loading, then your model is broken (perhaps it was not saved properly). If not, continue through the checklist...
In programming, always output the values of your variables for debugging! What does model.summary() print? Are you sure the input shape of your network is (100, 100, 3)? Is the first layer a convolutional one? (See the sketch after this list for a quick check.)
If the first layer is dense (i.e., fully connected), check the training code for how images were fed to the model; maybe your image needs to be reshaped or otherwise pre-processed.
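A quick way to run that check is to print the loaded model's expected input shape before doing any preprocessing; a minimal sketch:

from keras.models import load_model

model = load_model('food.h5')
print(model.input_shape)  # e.g. (None, 100, 100, 3) for a conv-first net, or (None, n) for a dense-first one
model.summary()           # the first listed layer shows what the network expects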
