I want to apply a threshold to one column in a 2D tensor. Any value below the cutoff should be set to null or zero. I want to avoid looping through the tensor, and I want the input and output tensors to have the same shape.
Here is the code:
NFValue = tf.Variable(1.,dtype=tf.float64,constraint=lambda t: tf.clip_by_value(t, 10, 20))
col1 = tf.gather(x, [0], axis=0)
col2 = tf.gather(x, [1], axis=0)
y = tf.fill(tf.shape(col2), NFValue) # creates a tensor of the same shape as col2, filled with the cutoff
y = tf.cast(y, np.float32) # converts that tensor into the correct type for comparison.
NewCol2 = tf.boolean_mask(col2, tf.math.greater(col2, y))
return tf.concat([col1[0,:], NewCol2], axis=0)
The problem is that tf.boolean_mask() returns a tensor containing just the values that were greater than NFValue, so the shape has changed. tf.math.greater will return a boolean vector of the correct shape, but then I would need to loop through the tensor.
I have tried several different options around this. I have looked at slice, tf.scan and a couple of other functions. I am expecting there to be a canned solution here.
Use tf.where
import tensorflow as tf
x = tf.reshape(tf.range(9), (3, 3))
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])>
tf.where(x > 5, x, 0)
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[0, 0, 0],
[0, 0, 0],
[6, 7, 8]])>
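Applied to the question's setup, here is a minimal sketch (the input values, the column index 1, and the cutoff of 10 are just assumptions for illustration) that thresholds a single column while keeping the output the same shape as the input:
import tensorflow as tf

x = tf.constant([[1., 15.], [2., 5.], [3., 25.]])
cutoff = tf.constant(10.)

col1 = x[:, 0]
col2 = x[:, 1]
# zero out the values of column 1 that fall below the cutoff, keeping the column's shape
new_col2 = tf.where(col2 < cutoff, tf.zeros_like(col2), col2)
# stack the columns back together so the output has the same shape as x
y = tf.stack([col1, new_col2], axis=1)
print(y)
# tf.Tensor(
# [[ 1. 15.]
#  [ 2.  0.]
#  [ 3. 25.]], shape=(3, 2), dtype=float32)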
The following is how it works in Numpy
import numpy as np
vals_for_fives = [12, 18, 22, 33]
arr = np.array([5, 2, 3, 5, 5, 5])
arr[arr == 5] = vals_for_fives # It is guaranteed that length of vals_for_fives is equal to the number of fives in arr
# now the value of arr is [12, 2, 3, 18, 22, 33]
For broadcastable or constant assignment we can use where() and assign() in Tensorflow. How can we achieve the above scenario in TF?
tf.experimental.numpy.where is a thing in tensorflow v2.5.
But for now you could do this:
First find the positions of the 5's:
arr = np.array([5, 2, 3, 5, 5, 5])
where = tf.where(arr==5)
where = tf.cast(where, tf.int32)
print(where)
# <tf.Tensor: id=91, shape=(4, 1), dtype=int32, numpy=
array([[0],
[3],
[4],
[5]])>
Then use scatter_nd to "replace" elements by index:
tf.scatter_nd(where, tf.constant([12, 18, 22, 33]), tf.constant([6]))
# <tf.Tensor: shape=(6,), dtype=int32, numpy=array([12,  0,  0, 18, 22, 33])>
Do a similar thing for the entries that were not 5 to fill in the missing values:
tf.scatter_nd(tf.constant([[1], [2]]), tf.constant([2, 3]), tf.constant([6]))
# <tf.Tensor: shape=(6,), dtype=int32, numpy=array([0, 2, 3, 0, 0, 0])>
Then sum the two tensors to get:
<tf.Tensor: shape=(6,), dtype=int32, numpy=array([12,  2,  3, 18, 22, 33])>
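Alternatively, in TF 2.x you can do the replacement in a single call with tf.tensor_scatter_nd_update, which writes the updates into a copy of the original tensor. A minimal sketch, assuming eager TF 2.x where == on tensors compares elementwise:
import tensorflow as tf

arr = tf.constant([5, 2, 3, 5, 5, 5])
vals_for_fives = tf.constant([12, 18, 22, 33])

where = tf.where(arr == 5)  # indices of the 5's, shape (4, 1)
result = tf.tensor_scatter_nd_update(arr, where, vals_for_fives)
print(result)
# tf.Tensor([12  2  3 18 22 33], shape=(6,), dtype=int32)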
I have a ragged tensor of dimensions [BATCH_SIZE, TIME_STEPS, EMBEDDING_DIM]. I want to augment the last axis with data from another tensor of shape [BATCH_SIZE, AUG_DIM]. Each time step of a given example gets augmented with the same value.
If the tensor wasn't ragged with varying TIME_STEPS for each example, I could simply reshape the second tensor with tf.repeat and then use tf.concat:
import tensorflow as tf
# create data
# shape: [BATCH_SIZE, TIME_STEPS, EMBEDDING_DIM]
emb = tf.constant([[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [0, 0, 0]]])
# shape: [BATCH_SIZE, 1, AUG_DIM]
aug = tf.constant([[[8]], [[9]]])
# concat
aug = tf.repeat(aug, emb.shape[1], axis=1)
emb_aug = tf.concat([emb, aug], axis=-1)
This approach doesn't work when emb is ragged, since emb.shape[1] is unknown and varies across examples:
# rag and remove padding
emb = tf.RaggedTensor.from_tensor(emb, padding=(0, 0, 0))
# reshape for augmentation - this doesn't work
aug = tf.repeat(aug, emb.shape[1], axis=1)
ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.
The goal is to create a ragged tensor emb_aug which looks like this:
<tf.RaggedTensor [[[1, 2, 3, 8], [4, 5, 6, 8]], [[1, 2, 3, 9]]]>
Any ideas?
The easiest way to do this is to just make your ragged tensor a regular tensor by using tf.RaggedTensor.to_tensor() and then do the rest of your solution. I'll assume that you need the tensor to remain ragged. The key is to find the row_lengths of each batch in your ragged tensor, and then use this information to make your augmentation tensor ragged.
Example:
import tensorflow as tf
# data
emb = tf.constant([[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [0, 0, 0]]])
aug = tf.constant([[[8]], [[9]]])
# make embeddings ragged for testing
emb_r = tf.RaggedTensor.from_tensor(emb, padding=(0, 0, 0))
print(emb_r.shape)
# (2, None, 3)
Here we'll use a combination of row_lengths and sequence_mask to create a new ragged tensor.
# find the row lengths of the embeddings
rl = emb_r.row_lengths()
print(rl)
# tf.Tensor([2 1], shape=(2,), dtype=int64)
# find the biggest row length
max_rl = tf.math.reduce_max(rl)
print(max_rl)
# tf.Tensor(2, shape=(), dtype=int64)
# repeat the augmented data `max_rl` number of times
aug_t = tf.repeat(aug, repeats=max_rl, axis=1)
print(aug_t)
# tf.Tensor(
# [[[8]
# [8]]
#
# [[9]
# [9]]], shape=(2, 2, 1), dtype=int32)
# create a mask
msk = tf.sequence_mask(rl)
print(msk)
# tf.Tensor(
# [[ True True]
# [ True False]], shape=(2, 2), dtype=bool)
From here we can use tf.ragged.boolean_mask to make the augmented data ragged
# make the augmented data a ragged tensor
aug_r = tf.ragged.boolean_mask(aug_t, msk)
print(aug_r)
# <tf.RaggedTensor [[[8], [8]], [[9]]]>
# concatenate!
output = tf.concat([emb_r, aug_r], 2)
print(output)
# <tf.RaggedTensor [[[1, 2, 3, 8], [4, 5, 6, 8]], [[1, 2, 3, 9]]]>
You can find the list of tensorflow methods that support ragged tensors here
Ragged tensors can also be constructed from row lengths directly.
The values input is flat with respect to the future ragged dimension (not with respect to the other dimensions) and can be built with tf.repeat, again using row_lengths to find the appropriate number of repeats per sample:
# emb is assumed to be the ragged tensor built from the question's data
ragged_lengths = emb.row_lengths()
# drop aug's middle axis so the values are flat w.r.t. the ragged (time) dimension,
# then repeat each example's augmentation row_length times
aug = tf.RaggedTensor.from_row_lengths(
    values=tf.repeat(tf.squeeze(aug, axis=1), ragged_lengths, axis=0),
    row_lengths=ragged_lengths)
emb_aug = tf.concat([emb, aug], axis=-1)
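With the tensors from the question (emb made ragged via tf.RaggedTensor.from_tensor and aug of shape [2, 1, 1]), this should reproduce the desired output:
print(emb_aug)
# <tf.RaggedTensor [[[1, 2, 3, 8], [4, 5, 6, 8]], [[1, 2, 3, 9]]]>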
I am learning from TensorFlow-2.x-Tutorials, where layers.MaxPooling2D is used. The autocompletion also suggests layers.MaxPool2D, so I searched for the difference between them.
Referring to the api_docs, I find their full names, tf.compat.v1.layers.MaxPooling2D and tf.keras.layers.MaxPool2D, which have almost the same arguments. Can I just consider layers.MaxPooling2D = layers.MaxPool2D, with the former for tf1.x and the latter for tf2.x?
What's more, I also found tf.keras.layers.GlobalMaxPool1D (global max pooling operation for 1D temporal data) and tf.keras.layers.GlobalAveragePooling1D (global average pooling operation for temporal data). These two have exactly the same arguments, so why are the function names styled differently?
I'm only going to answer your second question because someone found a duplicate for your first one.
GlobalMaxPooling2D takes the maximum value over a 2D array. Take for example this input:
import tensorflow as tf
x = tf.random.uniform(minval=0, maxval=10, dtype=tf.int32, shape=(3, 3, 3), seed=42)
<tf.Tensor: shape=(3, 3, 3), dtype=int32, numpy=
array([[[2, 4, 3],
[9, 1, 8],
[8, 3, 5]],
[[6, 6, 9],
[9, 6, 1],
[7, 5, 2]],
[[2, 0, 8],
[1, 6, 1],
[2, 3, 9]]])>
GlobalMaxPooling2D will take the maximum value of each of these three 3x3 slices (after a channel dimension is added):
gmp = tf.keras.layers.GlobalMaxPooling2D()
gmp(x[..., None])
<tf.Tensor: shape=(3, 1), dtype=int32, numpy=
array([[9],
[9],
[9]])>
There's a 9 in every slice, so the operation returns 9 for all three. GlobalAveragePooling2D is the exact same thing but with averaging.
gap = tf.keras.layers.GlobalAveragePooling2D()
gap(x[..., None])
<tf.Tensor: shape=(3, 1), dtype=int32, numpy=
array([[3],
[6],
[5]])>
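As for the naming of the 1D layers: as far as I know, both spellings exist in tf.keras and are just aliases for the same classes, which you can check directly (a small sketch; behaviour may vary by TF version):
import tensorflow as tf

# both spellings should point to the same class objects in tf.keras
print(tf.keras.layers.GlobalMaxPool1D is tf.keras.layers.GlobalMaxPooling1D)      # True
print(tf.keras.layers.GlobalAvgPool1D is tf.keras.layers.GlobalAveragePooling1D)  # True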
Define x as:
>>> import tensorflow as tf
>>> x = tf.constant([1, 2, 3])
Why does this normal tensor multiplication work fine with broadcasting:
>>> tf.constant([[1, 2, 3], [4, 5, 6]]) * tf.expand_dims(x, axis=0)
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[ 1, 4, 9],
[ 4, 10, 18]], dtype=int32)>
while this one with a ragged tensor does not?
>>> tf.ragged.constant([[1, 2, 3], [4, 5, 6]]) * tf.expand_dims(x, axis=0)
*** tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'Unable to broadcast: dimension size mismatch in dimension'
1
b'lengths='
3
b'dim_size='
3, 3
How can I get a 1-D tensor to broadcast over a 2-D ragged tensor? (I am using TensorFlow 2.1.)
The problem will be resolved if you add ragged_rank=0 to the Ragged Tensor, as shown below:
tf.ragged.constant([[1, 2, 3], [4, 5, 6]], ragged_rank=0) * tf.expand_dims(x, axis=0)
Complete working code is:
%tensorflow_version 2.x
import tensorflow as tf
x = tf.constant([1, 2, 3])
print(tf.ragged.constant([[1, 2, 3], [4, 5, 6]], ragged_rank=0) * tf.expand_dims(x, axis=0))
Output of the above code is:
tf.Tensor(
[[ 1 4 9]
[ 4 10 18]], shape=(2, 3), dtype=int32)
One more note.
As per the definition of broadcasting, it is the process of making tensors with different shapes have compatible shapes for elementwise operations, so there is no need to apply tf.expand_dims explicitly; TensorFlow will take care of it.
So, below code works and demonstrates the property of Broadcasting well:
%tensorflow_version 2.x
import tensorflow as tf
x = tf.constant([1, 2, 3])
print(tf.ragged.constant([[1, 2, 3], [4, 5, 6]], ragged_rank=0) * x)
Output of the above code is:
tf.Tensor(
[[ 1 4 9]
[ 4 10 18]], shape=(2, 3), dtype=int32)
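Note that ragged_rank=0 actually turns the constant into an ordinary dense tensor, so the trick above only works when every row has the same length. For a genuinely ragged tensor, a fixed-length 1-D vector cannot be broadcast across rows of different lengths; what does broadcast (as far as I know) is a dense operand with size 1 in the ragged dimension, e.g. one scalar per row. A small sketch under that assumption:
import tensorflow as tf

rt = tf.ragged.constant([[1, 2, 3], [4, 5]])  # genuinely ragged rows
per_row = tf.constant([[10], [100]])          # one scalar per row, shape (2, 1)
print(rt * per_row)
# <tf.RaggedTensor [[10, 20, 30], [400, 500]]>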
For more information, please refer to this link.
Hope this helps. Happy Learning!
Say I have a TensorFlow tensor l with shape [20,] where these are 10 coordinates packed as [x1,y1,x2,y2,...]. I need access to [x1,x2,...] and [y1,y2,...] to modify their values (e.g., rotate, scale, shift) and then repackage them as [x1',y1',x2',y2',...].
I can reshape, tf.reshape(l, (10, 2)), but then I'm not sure whether to use split or unstack and what the arguments should be. When should one use split instead of unstack? And then how should the modified values be repacked so they're in the original format?
This is the kind of thing that can easily be verified with tensorflow's eager execution mode:
import numpy as np
import tensorflow as tf
tf.enable_eager_execution()
l = np.arange(20)
y = tf.reshape(l, [10, 2])
a = tf.split(y, num_or_size_splits=2, axis=1)
b = tf.unstack(y, axis=1)
print('reshaped:', y, sep='\n', end='\n\n')
for operation, c in zip(('split', 'unstack'), (a, b)):
print('%s:' % operation, c, sep='\n', end='\n\n')
reshaped:
tf.Tensor(
[[ 0 1]
[ 2 3]
...
[16 17]
[18 19]], shape=(10, 2), dtype=int64)
split:
[<tf.Tensor: id=5, shape=(10, 1), dtype=int64, numpy=
array([[ 0],
[ 2],
...
[16],
[18]])>,
<tf.Tensor: id=6, shape=(10, 1), dtype=int64, numpy=
array([[ 1],
[ 3],
...
[17],
[19]])>]
unstack:
[<tf.Tensor: id=7, shape=(10,), dtype=int64, numpy=array([ 0, 2, ... 16, 18])>,
<tf.Tensor: id=8, shape=(10,), dtype=int64, numpy=array([ 1, 3, ... 17, 19])>]
So with these parameters they are pretty much the same, except that:
tf.split will always split the tensor along the axis into num_or_size_splits splits, which can potentially differ from the size of dimension shape[axis]; it therefore retains the original rank, outputting tensors of shape [10, n / num_or_size_splits] = [10, 2 / 2] = [10, 1].
Repacking can be performed by concatenating all split parts in a:
c=tf.concat(a, axis=1)
print(c)
array([[ 0, 1],
[ 2, 3],
...
[16, 17],
[18, 19]])>
tf.unstack will split the tensor along the axis into exactly shape[axis] tensors, and can therefore unambiguously reduce the rank by 1, resulting in tensors of shape [10].
Repacking can be performed by stacking all split parts in b:
c=tf.stack(b, axis=1)
print(c)
array([[ 0, 1],
[ 2, 3],
...
[16, 17],
[18, 19]])>
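Coming back to the original use case, here is a sketch (the scale and shift values are just placeholders) that transforms the x and y coordinates separately and repacks them into the original [20]-element layout:
import tensorflow as tf

l = tf.cast(tf.range(20), tf.float32)                  # [x1, y1, x2, y2, ...]
xs, ys = tf.unstack(tf.reshape(l, [10, 2]), axis=1)    # xs = [x1, x2, ...], ys = [y1, y2, ...]
xs = xs * 2.0                                          # e.g. scale the x coordinates
ys = ys + 1.0                                          # e.g. shift the y coordinates
l_new = tf.reshape(tf.stack([xs, ys], axis=1), [-1])   # back to [x1', y1', x2', y2', ...]
print(l_new.shape)  # (20,)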