My goal is to resize the output image from [32,32,1] to [8,8,1].
I tried this with Reshape, but got an error:
Output_model = Reshape((8,8,-1))(out_1)
Error when checking target: expected reshape_1 to have shape (8, 8, 32) but got array with shape (32, 32, 1)
How can I solve this problem??
Thanks a lot
You cannot reshape the array directly, because 32*32*1 is not equal to 8*8*1; you have to downsample instead:
import keras

inputs = keras.layers.Input((32, 32, 1))
# MaxPooling2D with pool size (4, 4) reduces each spatial dimension by a factor of 4
out = keras.layers.MaxPooling2D((4, 4))(inputs)
Then your image will be downsampled to (8, 8, 1).
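If pooling is not the kind of downsampling you want, interpolation also works inside the model. Here is a minimal sketch of my own (not part of the original answer), using a Lambda layer around tf.image.resize:

import tensorflow as tf

inputs = tf.keras.layers.Input((32, 32, 1))
# Bilinear resize down to 8x8; any tf.image.resize method could be used here
resized = tf.keras.layers.Lambda(lambda t: tf.image.resize(t, (8, 8)))(inputs)
model = tf.keras.Model(inputs, resized)
print(model.output_shape)  # (None, 8, 8, 1)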
I have greyscale images of this shape: x_train_grey.shape = (73257, 32, 32)
I specify the first layer like this:
Flatten(input_shape=(32,32,1))
This is because I don't pass the batch_size, and the greyscale images have only 1 channel. But I get this error:
ValueError: Error when checking input: expected flatten_1_input to have 4 dimensions, but got an array with shape (73257, 32, 32)
I don't understand what is wrong, please help. I understand this has been asked many times, but I cannot find a solution.
Cheers!
The problem probably lies in the way you are passing your data to your model. If your input shape is (batch_size, 32, 32) then try something like this:
import tensorflow as tf
grey_scale_images = tf.random.normal((64, 32, 32))
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(32,32,1)))
print(model(grey_scale_images).shape)
# (64, 1024)
Update: Both input_shape=(32,32,1) and input_shape=(32,32) will work. It depends on how you are feeding your data to your model:
import tensorflow as tf
grey_scale_images = tf.random.normal((64, 32, 32))
Y = tf.random.normal((64, 1024))
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(32, 32, 1)))
model.compile(loss='MSE')
model.fit(grey_scale_images, Y)
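Another common fix, offered here as my own suggestion rather than part of the original answer, is to add the missing channel axis to the data itself before fitting:

import numpy as np

# Stand-in for the real greyscale data of shape (73257, 32, 32)
x_train_grey = np.zeros((73257, 32, 32), dtype=np.float32)
x_train_grey = np.expand_dims(x_train_grey, axis=-1)  # -> (73257, 32, 32, 1)
print(x_train_grey.shape)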
I'm still learning this stuff too, but I would guess that "1" as the number of entries in a dimension isn't possible. Even if it is possible, it's a start. An axis of size "1" doesn't make sense to me.
Anyone else?
I am trying to transform a tensor with shape (4498,) to a tensor with shape (None, 4498). I tried to reshape it, but it gives a tensor with shape (1, 4498). I tried a lot of other ways to transform it, but none of them worked. Any ideas?
print(info_state.get_shape()) #(4498,)
info_state = tf.reshape(info_state, [-1, 4498])
print(info_state.get_shape()) #(1, 4498)
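For what it's worth, here is a minimal sketch of my own showing why the result is (1, 4498): a None dimension only appears on symbolic Keras inputs, while reshaping an eager tensor always yields a concrete batch size:

import tensorflow as tf

info_state = tf.zeros((4498,))                  # eager tensor, shape (4498,)
batched = tf.reshape(info_state, [-1, 4498])    # the batch axis becomes concrete: (1, 4498)
print(batched.shape)

symbolic = tf.keras.Input(shape=(4498,))        # only symbolic inputs carry a None batch axis
print(symbolic.shape)                           # (None, 4498)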
I want to reshape and resize an image in the first layers, before using Conv2D and the other layers. The input will be a flattened array. Here is my code:
#Create flat example image:
img_test = np.zeros((120,160))
img_test_flat = img_test.flatten()
reshape_model = Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(img_test_flat.shape)))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.experimental.preprocessing.Resizing(28, 28, interpolation='nearest'))
result = reshape_model(img_test_flat)
result.shape
Unfortunately, this code results in the error shown below. What is the issue, and how do I correctly reshape and resize the flattened array?
WARNING:tensorflow:Model was constructed with shape (None, 19200) for input Tensor("input_13:0", shape=(None, 19200), dtype=float32), but it was called on an input with incompatible shape (19200,).
InvalidArgumentError: Input to reshape is a tensor with 19200 values, but the requested shape has 368640000 [Op:Reshape]
EDIT:
I tried:
reshape_model = Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(None, img_test_flat.shape[0])))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.experimental.preprocessing.Resizing(28, 28, interpolation='nearest'))
Which gave me:
WARNING:tensorflow:Model was constructed with shape (None, None, 19200) for input Tensor("input_19:0", shape=(None, None, 19200), dtype=float32), but it was called on an input with incompatible shape (19200,).
EDIT2:
I receive the input in C++ as a 1D array and pass it with:
// Copy value to input buffer (tensor)
for (size_t i = 0; i < fb->len; i++) {
    model_input->data.i32[i] = (int32_t) (fb->buf[i]);
}
so what I pass on to the model is a flat array.
Your use of shapes simply doesn't make sense here. The first dimension of your input should be the number of samples. Is it supposed to be 19,200, or 1 sample?
input_shape should omit the number of samples, so if you want 1 sample, input shape should be 19,200. If you have 19,200 samples, shape should be 1.
The reshaping layer also omits the number of samples, so Keras is confused. What exactly are you trying to do?
This seems to be roughly what you're trying to achieve, but I would personally resize the image outside of the neural network:
import numpy as np
import tensorflow as tf
img_test = np.zeros((120,160)).astype(np.float32)
img_test_flat = img_test.reshape(1, -1)
reshape_model = tf.keras.Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(img_test_flat.shape[1:])))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.Lambda(lambda x: tf.image.resize(x, (28, 28))))
result = reshape_model(img_test_flat)
print(result.shape)
# TensorShape([1, 28, 28, 1])
Feel free to use the Resizing layer instead of the Lambda layer; I can't use it due to my TensorFlow version.
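For completeness, a sketch of the same model with the built-in resizing layer instead of the Lambda layer (assuming a TensorFlow version that ships it; in newer releases it lives at tf.keras.layers.Resizing):

import tensorflow as tf

reshape_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(19200,)),
    tf.keras.layers.Reshape((120, 160, 1)),
    # In older versions: tf.keras.layers.experimental.preprocessing.Resizing
    tf.keras.layers.Resizing(28, 28, interpolation='nearest'),
])
print(reshape_model.output_shape)  # (None, 28, 28, 1)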
I'm building an image classifier model which classifies handwritten digits (MNIST, 28x28 grayscale images) using a CNN.
Here is my layer definition:
model = keras.Sequential()
model.add(keras.layers.Conv2D(64,(3,3),activation='relu',input_shape=(28,28,1)))
model.add(keras.layers.MaxPool2D((2,2)))
model.add(keras.layers.Conv2D(64,(3,3),activation='relu'))
model.add(keras.layers.MaxPool2D((2,2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(200,activation='relu'))
model.add(keras.layers.Dense(10,activation='softmax'))
But I get this error when I fit the model:
ValueError: Input 0 of layer sequential_6 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 28, 28]
Also, I want to know why we have to specify the 1 in input_shape of the Conv2D layer. The image shape is 28x28, yet we have to mention 1 there.
The minimal change that should work is to change the line:
model.add(keras.layers.Conv2D(64,(3,3),activation='relu',input_shape=(28,28,1)))
to this, dropping the 1:
model.add(keras.layers.Conv2D(64,(3,3),activation='relu',input_shape=(28,28)))
The reason you have the error is that your input image is 28x28 and the batch you feed into the network has 32 images, thus an array of dimension [32, 28, 28]. Unfortunately I don't see how you feed the input to the network, but what your current code expects is an array of dimension [32, 28, 28, 1]. If that's a numpy array that you can manipulate, just reshaping it to that dimension with reshape() will solve the problem.
What I suggested above is the other way round: ask the network to expect each image as a 2D array of dimension [28,28] instead of a 3D array of dimension [28,28,1].
Update:
You provided the following code change that made it work:
train_image=train_image.reshape(60000, 28, 28, 1)
train_image=train_image / 255.0
test_image = test_image.reshape(10000, 28, 28, 1)
test_image=test_image/255.0
What this does: your input images are in a single huge numpy array, and you fit your model with it directly. The fit function selects slices from this array along its first dimension and creates a batch for each training step. The batch size is 32, so it implicitly creates an array of shape (32, 28, 28, 1) and passes it down the layers. The 2nd to 4th dimensions are simply copied from the original array.
The reshape() command changes the dimensions of the array. Your original array before the reshape was (60000, 28, 28); laid out as a single sequence of numbers, that is 60000x28x28 floats. What reshape() does is pick up these numbers and fill them into a (60000, 28, 28, 1) array, which expects exactly 60000x28x28x1 numbers, so it can be filled exactly.
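Equivalently, and as my own side note rather than part of the original answer, the channel axis can be added without hard-coding the sample count:

import numpy as np

# Stand-in for the MNIST training array of shape (60000, 28, 28)
train_image = np.zeros((60000, 28, 28), dtype=np.float32)
train_image = train_image[..., np.newaxis]  # -> (60000, 28, 28, 1)
print(train_image.shape)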
I am using unet for image segmentation, using the code outlined herein.
My input images are 256x256x3, while the corresponding segmentation masks are 256x256.
I have changed the size for the input to Unet:
def unet(pretrained_weights = None,input_size = (256,256,3)):
and get a network with a 256x256x1 layer for the output
conv2d_144 (Conv2D) (None, 256, 256, 1) 2 conv2d_143[0][0]
See the full architecture here.
When I try and run using .fit_generator, I get the following error:
ValueError: Error when checking target: expected conv2d_144 to have shape (256, 256, 1) but got array with shape (256, 256, 3)
What can I do to fix this? Please let me know what extra information I can give!
Thank you!
PS: I have three classes in the outputs, could that be the reason?
You'll have to decide whether you want an RGB or grayscale input for your images:
Either convert your images to grayscale or change the conv layer. Another option would be to flatten the 256x256x3 input to one dimension and use that as the input.
I've actually fixed it by one-hot encoding my segmentation masks and changing the activation function of the last layer to softmax, with the number of filters matching the number of classes!
https://github.com/MKeel1ng/MULTI-CHANNEL-UNET
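A minimal sketch of that fix, offered as my own illustration (the variable names are assumptions; the class count of 3 comes from the question), one-hot encoding the integer masks and giving the last layer one filter per class:

import tensorflow as tf

num_classes = 3
masks = tf.zeros((8, 256, 256), dtype=tf.int32)        # integer class labels per pixel
one_hot_masks = tf.one_hot(masks, depth=num_classes)   # -> (8, 256, 256, 3)

# Final U-Net layer: one filter per class with a softmax over the channel axis
output_layer = tf.keras.layers.Conv2D(num_classes, 1, activation='softmax')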