TLDR
My Lambda layers for taking tensor slices all return only the last column of data.
I have a (Batch_size, R)-shaped tensor that I will run through an embedding layer for each of the R features separately. I wrote the following code to split the input (Batch_size, R)-shaped tensor into R slices of shape (None,).
R = 2
inp = tf.keras.Input(shape=(R,), dtype=tf.int32)
SLICES = []
for i in range(R):
    slice_ = tf.keras.layers.Lambda(lambda a: a[:, i], name=f"slice_{i}", dtype=tf.int32)(inp)
    SLICES.append(slice_)
model = tf.keras.Model(inputs=inp, outputs=SLICES)
Running tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True) makes it appear that the code works. Feeding data into the model shows there is a problem: the model takes the last feature and copies it to every output.
input_ = np.array([[1,2],[3,4],[5,6]])
model.predict(input_)
[array([2, 4, 6], dtype=int32), array([2, 4, 6], dtype=int32)]
Approach 1
I "fixed" the problem in the R=2 case by getting rid of the for loop and writing each layer by hand.
slice1 = tf.keras.layers.Lambda(lambda a: a[:, 0], name="first_slice", dtype=tf.int32)(inp)
slice2 = tf.keras.layers.Lambda(lambda a: a[:, 1], name="second_slice", dtype=tf.int32)(inp)
model = tf.keras.Model(inputs=inp, outputs=[slice1, slice2])
input_ = np.array([[1,2],[3,4],[5,6]])
model.predict(input_)
[array([1, 3, 5], dtype=int32), array([2, 4, 6], dtype=int32)]
This is clearly undesirable for any number of reasons.
Approach 2
Another approach is to do the embedding on the raw features. Unfortunately, I have a CutMix-like layer in front of the embedding operation, preventing me from embedding the raw features.
How can I get the for loop to correctly copy each slice of the tensor?
The reason your first block of code doesn't work is Python's late-binding closures: each lambda captures the variable i itself, not its value at the time the layer is created, so when the lambdas finally run they all see the final value of i. Bind the current value as a default argument instead: lambda a, k=i: a[:, k]
R = 2
inp = tf.keras.Input(shape=(R,), dtype=tf.int32)
SLICES = []
for i in range(R):
    slice_ = tf.keras.layers.Lambda(lambda a, k=i: a[:, k], name=f"slice_{i}", dtype=tf.int32)(inp)
    SLICES.append(slice_)
model = tf.keras.Model(inp, SLICES)
input_ = np.array([[1,2],[3,4],[5,6]])
print(model.predict(input_))
Outputs:
[array([1, 3, 5], dtype=int32), array([2, 4, 6], dtype=int32)]
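Alternatively, a sketch that sidesteps the closure issue entirely by using tf.unstack (assuming TF 2.x, where raw TF ops applied to Keras tensors are wrapped into layers automatically):
import numpy as np
import tensorflow as tf

R = 2
inp = tf.keras.Input(shape=(R,), dtype=tf.int32)
# tf.unstack splits the (None, R) tensor into a list of R tensors of shape (None,)
slices = tf.unstack(inp, axis=1)
model = tf.keras.Model(inputs=inp, outputs=slices)
print(model.predict(np.array([[1, 2], [3, 4], [5, 6]])))
# [array([1, 3, 5], dtype=int32), array([2, 4, 6], dtype=int32)]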
In my TensorFlow 2 model, I want my batch size to be parametric, so that I can dynamically build tensors with the appropriate batch size. I have the following code:
batch_size_param = 128
tf_batch_size = tf.keras.Input(shape=(), name="tf_batch_size", dtype=tf.int32)
batch_indices = tf.range(0, tf_batch_size, 1)
md = tf.keras.Model(inputs={"tf_batch_size": tf_batch_size}, outputs=[batch_indices])
res = md(inputs={"tf_batch_size": batch_size_param})
The code throws an error in tf.range:
ValueError: Shape must be rank 0 but is rank 1
for 'limit' for '{{node Range}} = Range[Tidx=DT_INT32](Range/start, tf_batch_size, Range/delta)' with input shapes: [], [?], []
I think the problem is that tf.keras.Input automatically expands the input at the first dimension: it expects the partial shape of the input without the batch size and attaches the batch dimension based on the shape of the input array, which in my case is a scalar. I could feed the scalar value as a constant integer into tf.range, but then I would not be able to change it after the model graph has been compiled.
Interestingly, I failed to find a proper way to feed just a scalar into a TF 2 model, even though I checked the documentation. So, what would be the best way to handle such a case?
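To make the diagnosis concrete, a quick check of what tf.keras.Input actually produces:
import tensorflow as tf

# shape=() is the per-example shape; Keras still prepends a batch dimension,
# so the symbolic tensor is rank 1 rather than a true scalar.
x = tf.keras.Input(shape=(), dtype=tf.int32)
print(x.shape)  # (None,)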
Don't use tf.keras.Input; just define the model by subclassing.
import tensorflow as tf

class ScalarModel(tf.keras.Model):
    def __init__(self):
        super().__init__()

    def call(self, x):
        return tf.range(0, x, 1)

print(ScalarModel()(10))
# tf.Tensor([0 1 2 3 4 5 6 7 8 9], shape=(10,), dtype=int32)
I'm not sure if this is actually a good idea, but you could use tf.squeeze, like this:
import numpy as np
import tensorflow as tf
from tensorflow import keras
inp = keras.Input(shape=(), dtype=tf.int32)
batch_indices = tf.range(tf.squeeze(inp))
model = keras.Model(inputs=inp, outputs=batch_indices)
so that
model(6)
gives
<tf.Tensor: shape=(6,), dtype=int32, numpy=array([0, 1, 2, 3, 4, 5])>
Edit:
Depending on what you want to achieve, it might also be worth looking into ragged tensors:
inp = keras.Input(shape=(), dtype=tf.int32)
batch_indices = tf.ragged.range(inp)
model = keras.Model(inputs=inp, outputs=batch_indices)
would make
model(np.array([6,7]))
return
<tf.RaggedTensor [[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5, 6]]>
I'm trying to implement a custom Keras layer in TensorFlow 2.0RC and need to concatenate a [None, Q] shaped tensor onto a [None, H, W, D] shaped tensor to produce a [None, H, W, D + Q] shaped tensor. The two input tensors are assumed to have the same batch size, even though it is not known beforehand. Also, none of H, W, D, and Q are known at write time; they are evaluated in the layer's build method when the layer is first called. The issue I'm experiencing is in broadcasting the [None, Q] shaped tensor up to a [None, H, W, Q] shaped tensor in order to concatenate.
Here is an example of trying to create a Keras Model using the Functional API that performs variable-batch broadcasting from shape [None, 3] to shape [None, 5, 5, 3]:
import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np
x = tf.keras.Input([3]) # Shape [None, 3]
y = kl.Reshape([1, 1, 3])(x) # Need to add empty dims before broadcasting
y = tf.broadcast_to(y, [-1, 5, 5, 3]) # Broadcast to shape [None, 5, 5, 3]
model = tf.keras.Model(inputs=x, outputs=y)
print(model(np.random.random(size=(8, 3))).shape)
Tensorflow produces the error:
InvalidArgumentError: Dimension -1 must be >= 0
And then when I change -1 to None it gives me:
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 5, 5, 3]. Consider casting elements to a supported type.
How can I perform the specified broadcasting?
You need to use the dynamic shape of y to determine the batch size. The dynamic shape of a tensor y is given by tf.shape(y) and is a tensor op representing the shape of y evaluated at runtime. The modified example demonstrates this by selecting between the old shape, [None, 1, 1, 3], and the new shape using tf.where.
import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np
x = tf.keras.Input([3]) # Shape [None, 3]
y = kl.Reshape([1, 1, 3])(x) # Need to add empty dims before broadcasting
# Retain the batch and depth dimensions, but broadcast along H and W
broadcast_shape = tf.where([True, False, False, True],
                           tf.shape(y), [0, 5, 5, 0])
y = tf.broadcast_to(y, broadcast_shape) # Broadcast to shape [None, 5, 5, 3]
model = tf.keras.Model(inputs=x, outputs=y)
print(model(np.random.random(size=(8, 3))).shape)
# prints: "(8, 5, 5, 3)"
References:
"TensorFlow: Shapes and dynamic dimensions"
Currently, I'm working on a neural network that can classify the numbers in the Street View House Numbers dataset (http://ufldl.stanford.edu/housenumbers/). For now, I'm just trying to do it with the second format, the one similar to the MNIST dataset.
The problem I've encountered is that the shapes of the train and test arrays of examples are (HEIGHT, WIDTH, CHANNELS, EXAMPLES) rather than (EXAMPLES, HEIGHT, WIDTH, CHANNELS).
Is there a simple way to reshape the array to what I want without using many nested loops?
I'm not sure whether the object you are trying to reshape is a Tensor or a numpy.ndarray.
If it is a numpy.ndarray, you can use np.transpose. For example:
import numpy as np
a = np.zeros((299, 299, 3, 50))
print(a.shape) # (299, 299, 3, 50) H x W x C x M
b = np.transpose(a, [3, 0, 1, 2])
print(b.shape) # (50, 299, 299, 3)
If it is a Tensor, you can use tf.transpose to change the order of the dimensions in exactly the same way as np.transpose. For example:
import tensorflow as tf
a = tf.zeros((299, 299, 3, 50), dtype=tf.int32)
print(a.shape.as_list()) # [299, 299, 3, 50]
b = tf.transpose(a, [3, 0, 1, 2])
print(b.shape.as_list()) # [50, 299, 299, 3]
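Applied to the SVHN case (a sketch; it assumes the format-2 data was loaded from the distributed .mat files, which store the images under the 'X' key):
import numpy as np
from scipy.io import loadmat

data = loadmat("train_32x32.mat")  # path to the downloaded format-2 file
X = data["X"]                      # (32, 32, 3, N): HEIGHT x WIDTH x CHANNELS x EXAMPLES
y = data["y"].flatten()            # labels, shape (N,)
X = np.transpose(X, [3, 0, 1, 2])  # (N, 32, 32, 3): EXAMPLES first
print(X.shape)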
I am trying an Op that is not behaving as expected.
graph = tf.Graph()
with graph.as_default():
    train_dataset = tf.placeholder(tf.int32, shape=[128, 2])
    embeddings = tf.Variable(
        tf.random_uniform([50000, 64], -1.0, 1.0))
    embed = tf.nn.embedding_lookup(embeddings, train_dataset)
    embed = tf.reduce_sum(embed, reduction_indices=0)
So I need to know the dimensions of the Tensor embed. I know it can be done at run time, but that's too much work for such a simple operation. What's the easier way to do it?
I see a lot of people confused by tf.shape(tensor) versus tensor.get_shape().
Let's make it clear:
tf.shape
tf.shape is used for the dynamic shape. If your tensor's shape can change at run time, use it.
An example: the input is an image with variable width and height, and we want to resize it to half its size. Then we can write something like:
new_height = tf.shape(image)[0] / 2
tensor.get_shape
tensor.get_shape is used for fixed shapes, which means the tensor's shape can be deduced from the graph.
Conclusion:
tf.shape can be used almost anywhere, but tensor.get_shape only works for shapes that can be deduced from the graph.
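A minimal sketch contrasting the two (TF 1.x style, to match the question):
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
print(x.get_shape())  # (?, 3) -- static shape; the batch dimension is unknown
print(tf.shape(x))    # Tensor("Shape:0", shape=(2,), dtype=int32) -- evaluated at run time
with tf.Session() as sess:
    print(sess.run(tf.shape(x), feed_dict={x: [[1., 2., 3.]]}))  # [1 3]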
Use Tensor.get_shape(). From the documentation:
c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(c.get_shape())
==> TensorShape([Dimension(2), Dimension(3)])
A function to access the values:
def shape(tensor):
    s = tensor.get_shape()
    return tuple([s[i].value for i in range(0, len(s))])
Example:
batch_size, num_feats = shape(logits)
Just print out embed after constructing the graph (ops), without running it:
import tensorflow as tf
...
train_dataset = tf.placeholder(tf.int32, shape=[128, 2])
embeddings = tf.Variable(
tf.random_uniform([50000, 64], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
print(embed)
This will show the shape of the embed tensor:
Tensor("embedding_lookup:0", shape=(128, 2, 64), dtype=float32)
Usually, it's good to check shapes of all tensors before training your models.
Let's make it as simple as hell. If you want a single number for the number of dimensions, like 2, 3, or 4, then just use tf.rank(). But if you want the exact shape of the tensor, use tensor.get_shape().
with tf.Session() as sess:
    arr = tf.random_normal(shape=(10, 32, 32, 128))
    a = tf.random_gamma(shape=(3, 3, 1), alpha=0.1)
    print(sess.run([tf.rank(arr), tf.rank(a)]))
    print(arr.get_shape(), ", ", a.get_shape())

# output of tf.rank():
[4, 3]
# output of get_shape():
(10, 32, 32, 128) ,  (3, 3, 1)
The function tf.shape returns the shape of a tensor as a tensor, evaluated at run time. However, there is also the get_shape method on the Tensor class, which returns the static shape. See
https://www.tensorflow.org/api_docs/python/tf/Tensor#get_shape
To create a tensor in TensorFlow, use tf.constant().
Import the library:
import tensorflow as tf
Create the tensor:
tensor = tf.constant([[[2,4,5], [5,6,6]], [[9,7,8], [4,8,2]], [[7,1,3], [4,8,9]]])
Display the tensor:
tensor
Show the number of dimensions:
tensor.ndim
# create a tensor
tensor = tf.constant([[[1, 2, 3],
                       [3, 4, 5]],
                      [[5, 6, 7],
                       [8, 6, 9]],
                      [[2, 1, 5],
                       [5, 7, 8]]])
tensor
# displayed result:
<tf.Tensor: shape=(3, 2, 3), dtype=int32, numpy=
array([[[1, 2, 3],
        [3, 4, 5]],
       [[5, 6, 7],
        [8, 6, 9]],
       [[2, 1, 5],
        [5, 7, 8]]], dtype=int32)>
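And the dimension count for this tensor, for comparison (a quick check of my own):
print(tensor.ndim)      # 3 -- number of dimensions as a plain int
print(tf.rank(tensor))  # tf.Tensor(3, shape=(), dtype=int32)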