How can I concatenate a tensor of shape [None, 128] with a tensor of shape [1, 128]? Here the first tensor holds data of unknown length, and the second is a fixed tensor that does not depend on the data size. The final output should be of shape [None, 328]. This is part of a concatenation step inside a neural network.
I tried
c = Concatenate(axis=-1, name='DQN_Input')([a, b])
Here a.shape = (None, 192) and b.shape = (1, 128).
But this does not work.
The error is
ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 192), (1, 128)]
You can use tf.repeat on b, based on the first dimension of a, to generate a tensor with a matching batch dimension. Here is a simple working example:
import tensorflow as tf

a = tf.keras.layers.Input((192,), name='a')
alpha = tf.keras.layers.Input((1,), name='Alpha')

# (1, 192): collapse a's batch dimension, weighted by alpha
b = tf.matmul(alpha, a, transpose_a=True)
# repeat b along the batch dimension so it matches a's (dynamic) batch size
b = tf.repeat(b, repeats=tf.shape(a)[0], axis=0)

c = tf.keras.layers.Concatenate(axis=-1, name='DQN_Input')([a, b])
model = tf.keras.Model([a, alpha], c)

tf.print(model((tf.random.normal((5, 192)), tf.random.normal((5, 1)))).shape)
# TensorShape([5, 384])
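For the fixed b from the question (a constant (1, 128) tensor rather than something computed from another input), the same repeat-then-concatenate idea might look like this plain-TensorFlow sketch; the shapes and random values here are illustrative, not taken from the answer above:

import tensorflow as tf

# Illustrative sketch: tile a fixed (1, 128) tensor b along the batch
# dimension of a before concatenating.
a = tf.random.normal((5, 128))   # stands in for a batch of unknown size
b = tf.random.normal((1, 128))   # fixed tensor, independent of the data

b_tiled = tf.repeat(b, repeats=tf.shape(a)[0], axis=0)  # (5, 128)
c = tf.concat([a, b_tiled], axis=-1)
print(c.shape)  # (5, 256)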
Related
I am new to TensorFlow and I'm trying to concatenate two tensors with different shapes.
The tensors have shape:
>>> a
<tf.Tensor: id=38, shape=(30000, 943, 1), dtype=float64
>>> b
<tf.Tensor: id=2, shape=(30000, 260, 1), dtype=float64
Is it possible to concatenate them on axis=0 to obtain a tensor with shape (60000, ?, 1)?
I tried to convert them to ragged tensors before concatenating:
a2 = tf.ragged.constant(a)
b2 = tf.ragged.constant(b)
c = tf.concat([a2, b2], axis=0)
but it did not work.
You can convert the tensors to RaggedTensor and then use your own code (tf.concat):
import tensorflow as tf

a = tf.random.uniform((30000, 943, 1), maxval=4, dtype=tf.int32)
b = tf.random.uniform((30000, 260, 1), maxval=4, dtype=tf.int32)

# convert both dense tensors to ragged tensors, then concatenate along axis 0
rag_a = tf.RaggedTensor.from_tensor(a)
rag_b = tf.RaggedTensor.from_tensor(b)
res = tf.concat([rag_a, rag_b], axis=0)

print(res.shape)
# (60000, None, 1)
Alternatively, try tf.ragged.stack and merge_dims, without having to convert the tensors to ragged tensors yourself first:
import tensorflow as tf
a2 = tf.random.normal((10, 943, 1))
b2 = tf.random.normal((10, 260, 1))
c = tf.ragged.stack([a2, b2], axis=0).merge_dims(0, 1)
print(c.shape)
# (20, None, 1)
I am building a convolutional neural network in Keras that receives batches of images with dimensions (None, 256, 256, 1); the output is batches of size (None, 256, 256, 3). After the final layer's output I want to add a layer that assigns values to some of the pixels of the output based on a value condition on the inputs. Here is what I tried:
The Function
def SetBoundaries(ins):
    xi = ins[0]
    xo = ins[1]
    bnds = np.where(xi[:, :, :, 0] == 0)
    bnds_s, bnds_i, bnds_j = bnds[0], bnds[1], bnds[2]
    xo[bnds_s, bnds_i, bnds_j, 0] = 0
    xo[bnds_s, bnds_i, bnds_j, 1] = 0
    xo[bnds_s, bnds_i, bnds_j, 2] = 0
    return xo
Keras model
def conv_res(inputs):
    x0 = inputs
    ...
    xc = conv_layer(xc, kernel_size=3, stride=1,
                    num_filters=3, name="Final_Conv")
    # apply assignment function
    xc = Lambda(SetBoundaries, name="assign_boundaries")([x0, xc])
    return xc
Finally, the model is built using
def build_model(inputs):
    xres = int(inputs.shape[1])
    yres = int(inputs.shape[2])
    cres = int(inputs.shape[3])
    inputs = Input((xres, yres, cres))
    outputs = UNet.conv_res(inputs)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
However, when I run it I get the error:
NotImplementedError: Cannot convert a symbolic Tensor (assign_boundaries/Equal:0) to a numpy array.
Everything works fine without the Lambda function. I understand the issue is with assigning a value to a Tensor object, but how can I achieve what I am after?
Thanks
np.where works with NumPy arrays, but the output from your model is a TensorFlow tensor. Try using tf.where, which does the same thing but for tf.Tensors.
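A minimal sketch of that tf.where idea (using tf.keras, with assumed shapes; the asker's final, slightly different fix follows below):

import tensorflow as tf

def set_boundaries(ins):
    # Zero out every output channel at pixels where the input's first channel is 0,
    # using tf.where instead of NumPy indexing.
    xi, xo = ins                                   # xi: (batch, H, W, 1), xo: (batch, H, W, 3)
    mask = tf.equal(xi[..., :1], 0.0)              # True where the input pixel is 0
    mask = tf.broadcast_to(mask, tf.shape(xo))     # expand the mask over xo's 3 channels
    return tf.where(mask, tf.zeros_like(xo), xo)   # keep xo everywhere else

# illustrative call with random tensors (the shapes are assumptions)
xi = tf.cast(tf.random.uniform((2, 8, 8, 1), maxval=2, dtype=tf.int32), tf.float32)
xo = tf.random.normal((2, 8, 8, 3))
out = tf.keras.layers.Lambda(set_boundaries, name="assign_boundaries")([xi, xo])
print(out.shape)  # (2, 8, 8, 3)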
I managed to make it work by changing the function to:
def SetBoundaries(ins):
    xi = ins[0]
    xo = ins[1]
    xin = tf.broadcast_to(xi, tf.shape(xo))                # match xo's shape
    mask = K.cast(tf.not_equal(xin, 0), dtype="float32")   # 1 where the input is non-zero, 0 elsewhere
    xf = layers.Multiply()([mask, xo])                     # zero out xo at the boundary pixels
    return xf
My input is a (3,3,2) array and a (3,3) array:
img = np.array([[[1,1],[2,2],[3,3]],
[[4,4],[5,5],[6,6]],
[[7,7],[8,8],[9,9]]])
idx = np.array([[1,0,0],
[0,0,1],
[1,1,0]])
My ideal output should be:
[[1 1]
[6 6]
[7 7]
[8 8]]
I want to do this with a custom layer.
First, make the layer:
def extract_layer(data, idx):
    idx = tf.where(idx)               # indices of the non-zero entries of the mask
    data = tf.gather_nd(data, idx)    # pick those entries out of data
    data = tf.reshape(data, [-1, 2])
    return data
Then make it into a model:
input_data = kl.Input(shape=(3,3,2))
input_idxs = kl.Input(shape=(3,3))
extraction = kl.Lambda(lambda x:extract_layer(*x),name='extraction')([input_data,input_idxs])
I can build the model and I can see the Keras summary of the model; the output is:
model = Model(inputs=([input_data,input_idxs]), outputs=extraction)
model.summary()
...
input_1 (InputLayer) (None, 3, 3, 2)
input_2 (InputLayer) (None, 3, 3)
extraction (Lambda) (None, 2)
Total params: 0
...
but when I start to predict like:
(I have already reshaped the two inputs to (1,3,3,2) and (1,3,3).)
result = model.predict(x=([img, idx]))
I get the error:
ValueError: could not broadcast input array from shape (4,2) into shape (1,2)
I think the tensor of shape (4,2) is the value I want, but I don't know why Keras broadcasts it to (1,2). Can anyone help? Thanks very much!
In your extract_layer() function, the data is a two-dimensional tensor, but model.predict expects the results to have an extra batch dimension. Expanding the dims when returning data from extract_layer() fixes this error.
def extract_layer(data, idx):
    idx = tf.where(idx)
    data = tf.gather_nd(data, idx)
    data = tf.reshape(data, [-1, 2])
    return tf.expand_dims(data, axis=0)   # add back the batch dimension
Note: since the results returned by tf.gather_nd might have different lengths, I think the batch size can only be 1. Please correct me if I'm wrong.
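A quick sketch (with made-up masks, not the question's data) shows why the per-sample lengths differ and therefore why samples cannot be stacked into one dense batch:

import tensorflow as tf

data = tf.reshape(tf.range(18), (3, 3, 2))

mask_a = tf.constant([[1, 0, 0], [0, 0, 1], [1, 1, 0]])  # 4 ones -> 4 rows
mask_b = tf.constant([[1, 0, 0], [0, 0, 0], [0, 0, 1]])  # 2 ones -> 2 rows

for mask in (mask_a, mask_b):
    rows = tf.gather_nd(data, tf.where(tf.not_equal(mask, 0)))
    print(rows.shape)  # (4, 2), then (2, 2)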
I have a keras 3D/2D model. In this model a 3D layer has a shape of [None, None, 4, 32]. I want to reshape this into [None, None, 128]. However, if I simply do the following:
reshaped_layer = Reshape((-1, 128))(my_layer)
the reshaped layer then has a shape of [None, 128], and therefore I cannot apply any 2D convolution afterwards, like:
conv_x = Conv2D(16, (1,1))(reshaped_layer)
I've tried to use tf.shape(my_layer) and tf.reshape, but I have not been able to compile the model since tf.reshape is not a Keras layer.
Just to clarify, I'm using channels last; this is not tf.keras, this is just Keras.
This is what I'm doing right now, following the advice of anna-krogager:
def reshape(x):
    x_shape = K.shape(x)
    new_x_shape = K.concatenate([x_shape[:-2], [x_shape[-2] * x_shape[-1]]])
    return K.reshape(x, new_x_shape)

reshaped = Lambda(lambda x: reshape(x))(x)
reshaped.set_shape([None, None, None, 128])
conv_x = Conv2D(16, (1,1))(reshaped)
I get the following error: ValueError: The channel dimension of the inputs should be defined. Found None
You can use K.shape to get the shape of your input (as a tensor) and wrap the reshaping in a Lambda layer as follows:
def reshape(x):
    x_shape = K.shape(x)
    new_x_shape = K.concatenate([x_shape[:-2], [x_shape[-2] * x_shape[-1]]])
    return K.reshape(x, new_x_shape)

reshaped = Lambda(lambda x: reshape(x))(x)
reshaped.set_shape([None, None, None, a * b])  # when x is of shape (None, None, a, b)
This will reshape a tensor with shape (None, None, a, b) to (None, None, a * b).
Digging into base_layer.py, I found that reshaped is:
tf.Tensor 'lambda_1/Reshape:0' shape=(?, ?, ?, 128) dtype=float32
However, its attribute "_keras_shape" is (None, None, None, None) even after the set_shape. Therefore, the solution is to set this attribute as well:
def reshape(x):
    x_shape = K.shape(x)
    new_x_shape = K.concatenate([x_shape[:-2], [x_shape[-2] * x_shape[-1]]])
    return K.reshape(x, new_x_shape)

reshaped = Lambda(lambda x: reshape(x))(x)
reshaped.set_shape([None, None, None, 128])
reshaped.__setattr__("_keras_shape", (None, None, None, 128))
conv_x = Conv2D(16, (1,1))(reshaped)
Since you are reshaping, the best you can obtain from (4, 32) without losing dimensions is either (128, 1) or (1, 128). Thus you can do the following:
# original has shape [None, None, None, 4, 32] (including batch)
reshaped_layer = Reshape((-1, 128))(original) # shape is [None, None, 128]
conv_layer = Conv2D(16, (1,1))(K.expand_dims(reshaped_layer, axis=-2)) # shape is [None, None, 1, 16]
I want to implement a generative adversarial network (GAN) with an unfixed input size, like a 4-D tensor (batch_size, None, None, 3).
But when I use conv2d_transpose, there is a parameter output_shape, and this parameter must be the true size after the deconvolution operation.
For example, if the size of batch_img is (64, 32, 32, 128) and w is a weight with shape (3, 3, 64, 128), then after
deconv = tf.nn.conv2d_transpose(batch_img, w, output_shape=[64, 64, 64, 64], strides=[1,2,2,1], padding='SAME')
I get deconv with size (64, 64, 64, 64). It's OK if I pass the true size as output_shape.
But I want to use an unfixed input size (64, None, None, 128) and get deconv with shape (64, None, None, 64).
However, this raises the error below.
TypeError: Failed to convert object of type <type 'list'> to Tensor...
So, what can I do to avoid this parameter in deconv? Or is there another way to implement a GAN with unfixed input size?
The output_shape list cannot contain None, because the None object cannot be converted to a Tensor object; None is only allowed in the shapes of tf.placeholder.
For a varying-size output_shape, try -1 instead of None: for example, if you want size (64, None, None, 128), try [64, -1, -1, 128]. I am not exactly sure whether this will work; it worked for me for the batch size, i.e. my first dimension was not of fixed size, so I used -1.
However, there is also a high-level API for transposed convolution, tf.layers.conv2d_transpose().
I am sure the high-level API tf.layers.conv2d_transpose() will work for you, because it takes tensors of varying input sizes. You do not even need to specify the output shape; you just need to specify the number of output channels and the kernel to be used.
For more details: https://www.tensorflow.org/api_docs/python/tf/layers/conv2d_transpose. I hope this helps.
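For example, a minimal TF 1.x sketch using tf.layers.conv2d_transpose with unknown spatial dimensions (the filter count and kernel size here are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, None, None, 128), name='x')

# No output_shape is needed: just the number of filters, the kernel size and the strides.
y = tf.layers.conv2d_transpose(x, filters=64, kernel_size=3, strides=2, padding='same')

print(y.shape)  # (?, ?, ?, 64) -- the spatial dimensions stay unknown until run time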
I ran into this problem too. Using -1, as suggested in the other answer here, doesn't work. Instead, you have to grab the shape of the incoming tensor at run time and construct the output_shape argument from it. Here's an excerpt from a test I wrote. In this case it's the first dimension that's unknown, but it should work for any combination of known and unknown dimensions.
import tensorflow as tf

output_shape = [8, 8, 4]      # width, height, channels-out. Handle batch size later
filter_shape = [3, 3, 4, 2]   # [k_h, k_w, out_channels, in_channels]; illustrative value, not in the original excerpt

xin = tf.placeholder(dtype=tf.float32, shape=(None, 4, 4, 2), name='input')
filt = tf.placeholder(dtype=tf.float32, shape=filter_shape, name='filter')

## Find the batch size of the input tensor and add it to the front
## of output_shape
dimxin = tf.shape(xin)
ncase = dimxin[0:1]
oshp = tf.concat([ncase, output_shape], axis=0)

z1 = tf.nn.conv2d_transpose(xin, filt, oshp, strides=[1,2,2,1], name='xpose_conv')
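To illustrate the intent, here is a hypothetical TF 1.x run of the graph above with two different batch sizes; the zero-filled feed values and the (3, 3, 4, 2) filter are assumptions matching the illustrative filter_shape, not part of the original test:

import numpy as np

# Hypothetical check: the same graph handles different batch sizes because
# oshp is built from tf.shape(xin) at run time.
with tf.Session() as sess:
    for batch in (1, 7):
        out = sess.run(z1, feed_dict={
            xin: np.zeros((batch, 4, 4, 2), dtype=np.float32),
            filt: np.zeros((3, 3, 4, 2), dtype=np.float32),
        })
        print(out.shape)  # (1, 8, 8, 4), then (7, 8, 8, 4)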
I found a solution: use tf.shape for the unspecified dimensions and get_shape() for the specified ones.
def get_deconv_lens(H, k, d):
    return tf.multiply(H, d) + k - 1

def deconv2d(x, output_shape, k_h=2, k_w=2, d_h=2, d_w=2, stddev=0.02, name='deconv2d'):
    # output_shape: the output_shape of the deconv op (unused here; the true
    # shape is recomputed below from tf.shape(x))
    shape = tf.shape(x)                    # dynamic shape: works even when H and W are None
    H, W = shape[1], shape[2]
    N, _, _, C = x.get_shape().as_list()   # static shape: batch size and channels
    H1 = get_deconv_lens(H, k_h, d_h)
    W1 = get_deconv_lens(W, k_w, d_w)

    with tf.variable_scope(name):
        w = tf.get_variable('weights', [k_h, k_w, C, x.get_shape()[-1]],
                            initializer=tf.random_normal_initializer(stddev=stddev))
        biases = tf.get_variable('biases', shape=[C], initializer=tf.zeros_initializer())
        deconv = tf.nn.conv2d_transpose(x, w, output_shape=[N, H1, W1, C],
                                        strides=[1, d_h, d_w, 1], padding='VALID')
        deconv = tf.nn.bias_add(deconv, biases)

    return deconv