I'm currently working with the TensorFlow Addons SpatialPyramidPooling2D layer for image classification and I got the following error when I tried to fit the model.
ValueError: Dimensions must be equal, but are 8 and 20 for '{{node MatMul}} = BatchMatMulV2[T=DT_FLOAT, adj_x=false, adj_y=false](feature, transpose_1)' with input shapes: [?,20,8], [8,20,?]
I suspect it has something to do with the output shape of the model. The last layer is supposed to be (None, <number_of_classes>), but I got (None, <number_of_channels>, <number_of_classes>), because the output of SpatialPyramidPooling2D is a 3D tensor.
I tried to solve it by adding a Flatten layer right after SpatialPyramidPooling2D, but then the softmax layer gives me the error below:
ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 1280 but received input with shape [None, 25600]
If you want an output of shape (None, 8), I suggest you add a 1D pooling layer (e.g. GlobalAveragePooling1D) after the pyramid pooling layer.
import tensorflow as tf
x = tf.random.uniform((10, 20, 8), dtype=tf.float32)
pool = tf.keras.layers.GlobalAveragePooling1D()
print(pool(x).shape)
TensorShape([10, 8])
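In your model, that would look roughly like this. A minimal sketch, assuming tfa.layers.SpatialPyramidPooling2D with bins [2, 4] (which gives the 20 bins and 8 channels from your error message) and a hypothetical num_classes:

import tensorflow as tf
import tensorflow_addons as tfa

num_classes = 5  # hypothetical, replace with your number of classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 3)),
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
    tfa.layers.SpatialPyramidPooling2D([2, 4]),   # -> (None, 20, 8)
    tf.keras.layers.GlobalAveragePooling1D(),     # -> (None, 8)
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.summary()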
I have 9 features and one output variable (i.e., the value to be predicted); the window size is 5.
The code works well without the TimeDistributed wrapper.
Model input shape: feature_tensor.shape = (1649, 5, 9)
Model output shape: y_train.shape = (1649,)
That's my code:
# Build the network model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Conv1D, AveragePooling1D, Flatten, LSTM, Dense

act_fn = 'relu'
modelq = Sequential()
modelq.add(TimeDistributed(Conv1D(filters=105, kernel_size=2, activation=act_fn, input_shape=(None, feature_tensor.shape[1], feature_tensor.shape[2]))))
modelq.add(TimeDistributed(AveragePooling1D(pool_size=1)))
modelq.add(TimeDistributed(Flatten()))
modelq.add(LSTM(50))
modelq.add(Dense(64, activation=act_fn))
modelq.add(Dense(1))

# Compile the model
modelq.compile(optimizer='adam', loss='mean_squared_error')
modelq.fit(feature_tensor, y_train, batch_size=1, epochs=epoch_count)
The error statement is:
ValueError: Input 0 of layer conv1d is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (5, 9)
I feel like there is something wrong with the dimensionality of feature_tensor during model fitting (i.e., the last command), but I don't know what's wrong with it :(
Your intuition is right; the problem is the dimensionality of feature_tensor. If you take a look at the documentation of TimeDistributed, you see an example with images and Conv2D layers. There the input has to have the shape (batch_size, time steps, x_dim, y_dim, channels). Since you use time series, you need (batch_size, time steps, 1, features). E.g. you can reshape your data with NumPy:
feature_tensor = np.reshape(feature_tensor, (-1, 5, 1, 9))
However, I am not sure if it is useful to combine Conv1D with TimeDistributed, since in that case you apply the convolution only on the features and not on temporally contiguous values, where a 1D convolution should be applied.
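A minimal sketch of that alternative, dropping TimeDistributed so the convolution runs over the 5 timesteps (layer sizes taken from your code, tf.keras assumed):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, AveragePooling1D, LSTM, Dense

modelq = Sequential()
modelq.add(Conv1D(filters=105, kernel_size=2, activation='relu',
                  input_shape=(5, 9)))      # convolve along the time axis -> (None, 4, 105)
modelq.add(AveragePooling1D(pool_size=1))   # -> (None, 4, 105)
modelq.add(LSTM(50))                        # -> (None, 50)
modelq.add(Dense(64, activation='relu'))
modelq.add(Dense(1))
modelq.compile(optimizer='adam', loss='mean_squared_error')
# modelq.fit(feature_tensor, y_train, batch_size=1, epochs=epoch_count)  # feature_tensor stays (1649, 5, 9)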
I want to reshape and resize an image in the first layers before using Conv2D and other layers. The input will be a flattened array. Here is my code:
#Create flat example image:
img_test = np.zeros((120,160))
img_test_flat = img_test.flatten()
reshape_model = Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(img_test_flat.shape)))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.experimental.preprocessing.Resizing(28, 28, interpolation='nearest'))
result = reshape_model(img_test_flat)
result.shape
Unfortunately this code results in the error below. What is the issue and how do I correctly reshape and resize the flattened array?
WARNING:tensorflow:Model was constructed with shape (None, 19200) for input Tensor("input_13:0", shape=(None, 19200), dtype=float32), but it was called on an input with incompatible shape (19200,).
InvalidArgumentError: Input to reshape is a tensor with 19200 values, but the requested shape has 368640000 [Op:Reshape]
EDIT:
I tried:
reshape_model = Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(None, img_test_flat.shape[0])))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.experimental.preprocessing.Resizing(28, 28, interpolation='nearest'))
Which gave me:
WARNING:tensorflow:Model was constructed with shape (None, None, 19200) for input Tensor("input_19:0", shape=(None, None, 19200), dtype=float32), but it was called on an input with incompatible shape (19200,).
EDIT2:
I receive the input in C++ as a 1D array and pass it with:
// Copy value to input buffer (tensor)
for (size_t i = 0; i < fb->len; i++) {
    model_input->data.i32[i] = (int32_t) (fb->buf[i]);
}
so what I pass on to the model is a flat array.
Your use of shapes simply doesn't make sense here. The first dimension of your input should be the number of samples. Is it supposed to be 19,200 samples, or 1 sample?
input_shape should omit the number of samples, so if you want 1 sample of 19,200 values, input_shape should be (19200,). If you have 19,200 samples of 1 value each, it should be (1,).
The reshaping layer also omits the number of samples, so Keras is confused. What exactly are you trying to do?
This seems to be roughly what you're trying to achieve, but I would personally resize the image outside of the neural network:
import numpy as np
import tensorflow as tf
img_test = np.zeros((120,160)).astype(np.float32)
img_test_flat = img_test.reshape(1, -1)
reshape_model = tf.keras.Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(img_test_flat.shape[1:])))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.Lambda(lambda x: tf.image.resize(x, (28, 28))))
result = reshape_model(img_test_flat)
print(result.shape)
TensorShape([1, 28, 28, 1])
Feel free to use the Resizing layer instead of the Lambda layer; I can't use it due to my TensorFlow version.
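If your TensorFlow version does have that layer, the swap would presumably look like this (same model as above, only the resize layer changed, reusing the Resizing call from your own snippet):

reshape_model = tf.keras.Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=img_test_flat.shape[1:]))
reshape_model.add(tf.keras.layers.Reshape((120, 160, 1)))
reshape_model.add(tf.keras.layers.experimental.preprocessing.Resizing(28, 28, interpolation='nearest'))

result = reshape_model(img_test_flat)
print(result.shape)  # (1, 28, 28, 1)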
...coming from TensorFlow, where pretty much every shape is defined explicitly, I am confused about Keras' API for recurrent models. Getting an Elman network to work in TF was pretty easy, but Keras refuses to accept the correct shapes...
For example:
x = k.layers.Input(shape=(2,))
y = k.layers.Dense(10)(x)
m = k.models.Model(x, y)
...works perfectly and according to model.summary() I get an input layer with shape (None, 2), followed by a dense layer with output shape (None, 10). Makes sense since Keras automatically adds the first dimension for batch processing.
However, the following code:
x = k.layers.Input(shape=(2,))
y = k.layers.SimpleRNN(10)(x)
m = k.models.Model(x, y)
raises an exception ValueError: Input 0 is incompatible with layer simple_rnn_1: expected ndim=3, found ndim=2.
It works only if I add another dimension:
x = k.layers.Input(shape=(2,1))
y = k.layers.SimpleRNN(10)(x)
m = k.models.Model(x, y)
...but now, of course, my input would not be (None, 2) anymore.
model.summary():
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 2, 1) 0
_________________________________________________________________
simple_rnn_1 (SimpleRNN) (None, 10) 120
=================================================================
How can I have an input of type batch_size x 2 when I just want to feed vectors with 2 values to the network?
Furthermore, how would I chain RNN cells?
x = k.layers.Input(shape=(2, 1))
h = k.layers.SimpleRNN(10)(x)
y = k.layers.SimpleRNN(10)(h)
m = k.models.Model(x, y)
...raises the same exception with incompatible dim sizes.
This sample here works:
x = k.layers.Input(shape=(2, 1))
h = k.layers.SimpleRNN(10, return_sequences=True)(x)
y = k.layers.SimpleRNN(10)(h)
m = k.models.Model(x, y)
...but then layer h does not output (None, 10) anymore, but (None, 2, 10) since it returns the whole sequence instead of just the "regular" RNN cell output.
Why is this needed at all?
Moreover: where are the states? Do they just default to 1 recurrent state?
The documentation touches on the expected shapes of recurrent components in Keras; let's look at your case:
Any RNN layer in Keras expects a 3D shape (batch_size, timesteps, features). This means you have time-series data.
The RNN layer then iterates over the second (time) dimension of the input using a recurrent cell, which performs the actual recurrent computation.
If you specify return_sequences, you collect the output for every timestep, getting another 3D tensor (batch_size, timesteps, units); otherwise you only get the last output, which is (batch_size, units).
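A quick way to see those shapes for yourself (a sketch, assuming TF2's eager Keras so layers can be called on arrays directly):

import numpy as np
import tensorflow as tf

x = np.zeros((4, 7, 3), dtype="float32")  # (batch_size, timesteps, features)
print(tf.keras.layers.SimpleRNN(10)(x).shape)                          # (4, 10)
print(tf.keras.layers.SimpleRNN(10, return_sequences=True)(x).shape)   # (4, 7, 10)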
Now returning to your questions:
You mention vectors, but shape=(2,) is a single vector, so this doesn't work. shape=(2, 1) works because now you have 2 vectors of size 1 (these shapes exclude batch_size). So to feed vectors of size 2 you need shape=(how_many_vectors, 2), where the first dimension is the number of vectors you want your RNN to process, i.e. the timesteps in this case.
To chain RNN layers you need to feed them 3D data because that is what RNNs expect. When you specify return_sequences, the RNN layer returns the output at every timestep, so it can be chained to another RNN layer.
States are a collection of vectors that an RNN cell uses; an LSTM uses 2, a GRU has 1 hidden state, which is also the output. They default to zeros but can be specified when calling the layer with initial_state=[...] as a list of tensors.
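Concretely, if each sample is just a single vector with 2 values, one option is to treat it as one timestep with 2 features; chaining then only needs return_sequences on the earlier layer. A sketch, using the same k alias as in your code:

x = k.layers.Input(shape=(1, 2))                       # 1 timestep, 2 features -> (None, 1, 2)
h = k.layers.SimpleRNN(10, return_sequences=True)(x)   # (None, 1, 10)
y = k.layers.SimpleRNN(10)(h)                          # (None, 10)
m = k.models.Model(x, y)
m.summary()
# reshape your (batch, 2) data to (batch, 1, 2) before feeding it to this model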
There is already a post about the difference between RNN layers and RNN cells in Keras which might help clarify the situation further.
I'd like to do transfer learning from a pre-trained model. I'm following the retraining guide from TensorFlow.
However, I'm stuck on the error tensorflow.python.framework.errors_impl.InvalidArgumentError: Shapes must be equal rank, but are 3 and 2 for 'input_1/BottleneckInputPlaceholder' (op: 'PlaceholderWithDefault') with input shapes: [1,?,128].
# Last layer of pre-trained model
# `[<tf.Tensor 'embeddings:0' shape=(?, 128) dtype=float32>]`
with tf.name_scope('input'):
    bottleneck_input = tf.placeholder_with_default(
        bottleneck_tensor,
        shape=[None, 128],
        name='BottleneckInputPlaceholder')
Any ideas?
This is happening because your bottleneck_tensor is of shape [1, ?, 128] while you are explicitly stating that the shape should be [?, 128]. You can use tf.squeeze to convert your tensor to the required shape:
tf.squeeze(bottleneck_tensor)
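In context that would presumably look like this (squeezing only the leading size-1 axis to be safe; TF1-style code as in your snippet):

with tf.name_scope('input'):
    bottleneck_input = tf.placeholder_with_default(
        tf.squeeze(bottleneck_tensor, axis=0),  # [1, ?, 128] -> [?, 128]
        shape=[None, 128],
        name='BottleneckInputPlaceholder')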
So I'm trying to train a model in Keras that takes in frames of signals that are of a shape (750,1). My first layer is the following Conv1D layer:
Conv1D(128, 5, input_shape=(1, 750), padding='valid', activation='relu', strides=1)
But this gives me the following error:
ValueError: Negative dimension size caused by subtracting 5 from 1 for 'conv1d_1/convolution/Conv2D' (op: 'Conv2D') with input shapes: [?,1,1,750], [1,5,750,128].
This seems to indicate that the layer is trying to apply a 5x5 kernel to the 1-dimensional data, which doesn't make much sense. Any other input shapes just seem to throw different, less useful errors. What am I doing wrong? Am I completely misunderstanding Conv1D?