I am trying to transform a tensor with shape (4498,) into a tensor with shape (None, 4498). I tried reshaping it, but that gives a tensor with shape (1, 4498). I've tried a lot of other ways to transform it, but none of them worked. Any ideas?
print(info_state.get_shape()) #(4498,)
info_state = tf.reshape(info_state, [-1, 4498])
print(info_state.get_shape()) #(1, 4498)
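For reference, a minimal sketch of why this happens: reshaping a concrete (4498,) tensor with [-1, 4498] can only ever produce one row, because the element count is fixed. A static shape of (None, 4498) normally comes from a graph input whose batch dimension is left unspecified; the placeholder below is an assumption about the setup (the get_shape() calls suggest TF1-style graph code):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # assuming TF1-style graph code

# A concrete vector reshapes to exactly one row: 4498 elements -> (1, 4498).
info_state = tf.zeros((4498,))
info_state = tf.reshape(info_state, [-1, 4498])
print(info_state.get_shape())  # (1, 4498)

# A (None, 4498) shape comes from an input whose batch size is unspecified.
batched = tf.compat.v1.placeholder(tf.float32, shape=[None, 4498])
print(batched.get_shape())  # (None, 4498)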
I have a model that takes two inputs of the same shape (batch_size, 512, 512, 1) and predicts two masks, each of shape (batch_size, 512, 512, 1). I build the dataset like this:
dataset_input = tf.data.Dataset.zip((dataset_img_A, dataset_img_B))
dataset_output = tf.data.Dataset.zip((seg_A, seg_B))
dataset = tf.data.Dataset.zip((dataset_input, dataset_output))
dataset = dataset.repeat()
dataset = dataset.batch(batch_size, drop_remainder=True)
I'm creating a model like so:
image_inputs_A = layers.Input((512,512,1), batch_size=self.batch_size)
image_inputs_B = layers.Input((512,512,1), batch_size=self.batch_size)
output_A = some_layers(image_inputs_A)
output_B = some_layers(image_inputs_B)
model = models.Model([image_inputs_A, image_inputs_B],[output_A, output_B])
However, I'm getting the following error:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [<tf.Tensor 'IteratorGetNext:0' shape=(?, 2, ?, ?, ?) dtype=float32>]...
It seems that it's concatenating the inputs into shape (batch_size, 2, 512, 512, 1) instead of treating them as a tuple of two (batch_size, 512, 512, 1) tensors. Is this the expected behaviour? How can I use multiple inputs without them being concatenated?
EDIT:
I have tried using a layers.Input with shape (batch_size, 2, 512, 512, 1) and then passing it through two Lambda layers to split the tensor along the second axis (roughly the sketch shown after the error below). However, I get the following error:
ValueError: Error when checking input: expected input_1 to have 5 dimensions, but got array with shape (None, None, None, None)
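The split attempt looked roughly like this (a sketch, with some_layers again standing in for the real layers):

stacked_input = layers.Input((2, 512, 512, 1), batch_size=self.batch_size)
# Slice along axis 1 to recover two (batch, 512, 512, 1) tensors.
image_A = layers.Lambda(lambda t: t[:, 0])(stacked_input)
image_B = layers.Lambda(lambda t: t[:, 1])(stacked_input)
output_A = some_layers(image_A)
output_B = some_layers(image_B)
model = models.Model(stacked_input, [output_A, output_B])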
EDIT 2:
I've double-checked the data I'm feeding into the model.
INPUT: (512, 512, 1) <dtype: 'float32'>
INPUT: (512, 512, 1) <dtype: 'float32'>
OUTPUT: (512, 512, 1) <dtype: 'int64'>
OUTPUT: (512, 512, 1) <dtype: 'int64'>
SOLVED: It turns out it was an issue with the data augmentation step, where the input tensors were being concatenated. Lesson learnt.
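For anyone hitting the same thing: the augmentation map has to return the same nested tuple structure the model expects. A minimal sketch, applied before .batch(); augment_fn here is a hypothetical per-image function:

def augment_pair(inputs, outputs):
    img_A, img_B = inputs
    seg_A, seg_B = outputs
    # Augment each image separately; do NOT stack/concatenate them, or the
    # model sees one (batch, 2, 512, 512, 1) input instead of two inputs.
    return (augment_fn(img_A), augment_fn(img_B)), (seg_A, seg_B)

dataset = dataset.map(augment_pair)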
I am trying to take a single element out of one dimension while keeping the number of dimensions the same.
The shape of the tensor is: (BATCH_SIZE, N_STEPS, NUM_FEATURES)
I want to create a new tensor of shape (BATCH_SIZE, 1, NUM_FEATURES), where the kept step is the final one.
The input tensor shape is (None, 128, 16).
I tried to create a new tensor with the following:
X = X[:,-1,:]
X's shape becomes (None, 16), but I need it to be (None, 1, 16).
Update: I got this to work with the following code:
s = tf.shape(X)
X = tf.reshape(X[:,-1,:],shape=[s[0],1,s[2]])
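For reference, a simpler equivalent: slicing with -1: instead of -1 keeps the step axis, so no reshape is needed:

X = X[:, -1:, :]  # shape (None, 1, 16); the 1-element slice keeps the axis that plain indexing drops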
I want to reshape and resize an image in the first layers, before using Conv2D and other layers. The input will be a flattened array. Here is my code:
# Create a flat example image:
img_test = np.zeros((120,160))
img_test_flat = img_test.flatten()
reshape_model = Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(img_test_flat.shape)))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.experimental.preprocessing.Resizing(28, 28, interpolation='nearest'))
result = reshape_model(img_test_flat)
result.shape
Unfortunately, this code results in the error shown below. What is the issue, and how do I correctly reshape and resize the flattened array?
WARNING:tensorflow:Model was constructed with shape (None, 19200) for input Tensor("input_13:0", shape=(None, 19200), dtype=float32), but it was called on an input with incompatible shape (19200,).
InvalidArgumentError: Input to reshape is a tensor with 19200 values, but the requested shape has 368640000 [Op:Reshape]
EDIT:
I tried:
reshape_model = Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(None, img_test_flat.shape[0])))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.experimental.preprocessing.Resizing(28, 28, interpolation='nearest'))
Which gave me:
WARNING:tensorflow:Model was constructed with shape (None, None, 19200) for input Tensor("input_19:0", shape=(None, None, 19200), dtype=float32), but it was called on an input with incompatible shape (19200,).
EDIT 2:
I receive the input in C++ as a 1D array and pass it with:
// Copy value to input buffer (tensor)
for (size_t i = 0; i < fb->len; i++) {
    model_input->data.i32[i] = (int32_t) (fb->buf[i]);
}
So what I pass to the model is a flat array.
Your use of shapes simply doesn't make sense here. The first dimension of your input should be the number of samples. Is it supposed to be 19,200 samples, or 1 sample?
input_shape should omit the number of samples, so if you have one sample of 19,200 values, input_shape should be (19200,); if you have 19,200 samples of one value each, it should be (1,).
The Reshape layer also omits the number of samples, so Keras is confused. What exactly are you trying to do?
This seems to be roughly what you're trying to achieve, but I would personally resize the image outside of the neural network:
import numpy as np
import tensorflow as tf
img_test = np.zeros((120,160)).astype(np.float32)
img_test_flat = img_test.reshape(1, -1)
reshape_model = tf.keras.Sequential()
reshape_model.add(tf.keras.layers.InputLayer(input_shape=(img_test_flat.shape[1:])))
reshape_model.add(tf.keras.layers.Reshape((120, 160,1)))
reshape_model.add(tf.keras.layers.Lambda(lambda x: tf.image.resize(x, (28, 28))))
result = reshape_model(img_test_flat)
print(result.shape)
TensorShape([1, 28, 28, 1])
Feel free to use the Resizing layer instead of the Lambda layer; I can't use it due to my TensorFlow version.
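If your version does have it, the Resizing variant (as in the question) would just swap the Lambda line for:

reshape_model.add(tf.keras.layers.experimental.preprocessing.Resizing(28, 28, interpolation='nearest'))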
My goal is to resize the output image from [32, 32, 1] to [8, 8, 1].
I tried this with Reshape but got an error:
Output_model = Reshape((8,8,-1))(out_1)
Error when checking target: expected reshape_1 to have shape (8, 8, 32) but got array with shape (32, 32, 1)
How can I solve this problem?
Thanks a lot.
You cannot reshape the array directly, because 32*32*1 is not equal to 8*8*1; you have to downsample instead:
import keras

inputs = keras.layers.Input((32, 32, 1))
x = keras.layers.MaxPooling2D((4, 4))(inputs)  # pool 4x4: 32/4 = 8 in each spatial dimension
Then your image will be downsampled to (8, 8, 1).
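Wrapped in a model to verify the shape (a quick sketch):

model = keras.models.Model(inputs, x)
print(model.output_shape)  # (None, 8, 8, 1)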
I'm building an autoencoder based on an RNN. After the FC layer, I have to reshape my output to [batch_size, sequence_length, embedding_dimension]. However, the sequence length (timestep count) for my decoder is not known in advance. What I want is something that works like the following:
outputs = tf.reshape(outputs, [batch_size, None, word_dimension])
Or, is there any other way for me to get the sequence length from the input data, which has shape [batch_size, sequence_length, embedding_dimension]?
You can use -1 for the dimension in your reshape operation that you want to be calculated automatically.
For example, here:
x = tf.zeros((100 * 10 * 12,))
reshaped = tf.reshape(x, [100, -1, 12])
reshaped will have shape (100, 10, 12).
Or, is there any other way for me to get the sequence length from the input data, which has shape [batch_size, sequence_length, embedding_dimension]?
You can use the tf.shape operation to find the shape of a tensor at runtime, so if you want sequence_length from a tensor with shape [batch_size, sequence_length, embedding_dimension], you just need to call tf.shape(x)[1].
For my example above, calling:
tf.shape(reshaped)[1]
would give an int32 tensor with shape () and value 10.
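Applied to the question, a minimal sketch (assuming inputs is the [batch_size, sequence_length, embedding_dimension] tensor feeding the encoder):

# Either let TensorFlow infer the middle dimension...
outputs = tf.reshape(outputs, [batch_size, -1, word_dimension])

# ...or read the dynamic sequence length from the input tensor at runtime.
seq_len = tf.shape(inputs)[1]
outputs = tf.reshape(outputs, [batch_size, seq_len, word_dimension])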