If A is a TensorFlow variable like so
A = tf.Variable([[1, 2], [3, 4]])
and index is another variable
index = tf.Variable([0, 1])
I want to use this index to select columns in each row. In this case, item 0 from the first row and item 1 from the second row.
If A were a NumPy array, then to get the elements at the column positions given by index for each corresponding row, we could do
x = A[np.arange(A.shape[0]), index]
and the result would be
[1, 4]
What is the TensorFlow equivalent of this operation? I know TensorFlow doesn't support many indexing operations; what would be the workaround if it cannot be done directly?
You can extend your column indices with row indices and then use gather_nd:
import tensorflow as tf
A = tf.constant([[1, 2], [3, 4]])
indices = tf.constant([1, 0])
# prepare row indices
row_indices = tf.range(tf.shape(indices)[0])
# zip row indices with column indices
full_indices = tf.stack([row_indices, indices], axis=1)
# retrieve values by indices
S = tf.gather_nd(A, full_indices)
session = tf.InteractiveSession()
session.run(S)
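Side note, not from the original answer: in TF 2.x with eager execution, the session lines aren't needed and the same ops can be checked directly (here with the question's indices [0, 1] rather than the [1, 0] used above):
import tensorflow as tf
A = tf.constant([[1, 2], [3, 4]])
indices = tf.constant([0, 1])  # per-row column indices, as in the question
row_indices = tf.range(tf.shape(indices)[0])
full_indices = tf.stack([row_indices, indices], axis=1)  # [[0, 0], [1, 1]]
print(tf.gather_nd(A, full_indices).numpy())  # [1 4]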
You can use tf.one_hot to create a one-hot array and use it as a boolean mask to select the indices you'd like:
A = tf.Variable([[1, 2], [3, 4]])
index = tf.Variable([0, 1])
one_hot_mask = tf.one_hot(index, A.shape[1], on_value=True, off_value=False, dtype=tf.bool)
output = tf.boolean_mask(A, one_hot_mask)
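For reference, a minimal runnable check of this approach (my sketch, assuming TF 2.x eager mode, with constants standing in for the variables):
import tensorflow as tf
A = tf.constant([[1, 2], [3, 4]])
index = tf.constant([0, 1])
# one_hot_mask == [[True, False], [False, True]]
one_hot_mask = tf.one_hot(index, A.shape[1], on_value=True, off_value=False, dtype=tf.bool)
print(tf.boolean_mask(A, one_hot_mask).numpy())  # [1 4]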
After dabbling around for quite a while, I found two functions that could be useful.
One is tf.gather_nd(), which is useful if you can produce a tensor of the form [[0, 0], [1, 1]], in which case you can do
index = tf.constant([[0, 0], [1, 1]])
tf.gather_nd(A, index)
If for some reason you are unable to produce a tensor of the form [[0, 0], [1, 1]] (I couldn't, as the number of rows in my case depended on a placeholder), then the workaround I found is to use tf.py_func(). Here is example code showing how this can be done:
import tensorflow as tf
import numpy as np
def index_along_every_row(array, index):
    N, _ = array.shape
    return array[np.arange(N), index]
a = tf.Variable([[1, 2], [3, 4]], dtype=tf.int32)
index = tf.Variable([0, 1], dtype=tf.int32)
a_slice_op = tf.py_func(index_along_every_row, [a, index], [tf.int32])[0]
session = tf.InteractiveSession()
a.initializer.run()
index.initializer.run()
a_slice = a_slice_op.eval()
a_slice will be the NumPy array [1, 4].
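A note beyond the original answer: tf.py_func is TF 1.x-only; in TF 2.x, tf.numpy_function is the closest equivalent. A sketch of the same workaround in eager mode:
import numpy as np
import tensorflow as tf
def index_along_every_row(array, index):
    return array[np.arange(array.shape[0]), index]
a = tf.constant([[1, 2], [3, 4]], dtype=tf.int32)
index = tf.constant([0, 1], dtype=tf.int32)
# Wraps the NumPy function as a TF op; not differentiable and not serializable
a_slice = tf.numpy_function(index_along_every_row, [a, index], tf.int32)
print(a_slice.numpy())  # [1 4]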
We can do the same using this combination of map_fn and gather_nd.
def get_element(a, indices):
    """
    Outputs (ith element of indices) from (ith row of a)
    """
    return tf.map_fn(lambda x: tf.gather_nd(x[0], x[1]),
                     (a, indices),
                     dtype=tf.float32)
Here's an example usage.
A = tf.constant(np.array([[1, 2, 3],
                          [4, 5, 6],
                          [7, 8, 9]], dtype=np.float32))
idx = tf.constant(np.array([[2], [1], [0]]))
elems = get_element(A, idx)

with tf.Session() as sess:
    e = sess.run(elems)
    print(e)
I don't know if this will be much slower than other answers.
It has the advantage that you don't need to specify the number of rows of A in advance, as long as a and indices have the same number of rows at runtime.
Note that the output of the above will be rank 1. If you'd prefer it to have rank 2, replace gather_nd with gather, as sketched below.
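A sketch of that rank-2 variant (my code, not the answer author's; same setup and TF 1.x style as above):
def get_element_rank2(a, indices):
    # tf.gather keeps the index axis, so each row yields a length-1 vector
    # and the stacked result has shape (3, 1) instead of (3,)
    return tf.map_fn(lambda x: tf.gather(x[0], x[1]),
                     (a, indices),
                     dtype=tf.float32)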
I couldn't get the accepted answer to work in TensorFlow 2 when I incorporated it into a loss function; something about GradientTape didn't like it. My solution is an altered version of the accepted answer:
def get_rows(arr):
    N, _ = arr.shape
    return N

num_rows = tf.py_function(get_rows, [arr], [tf.int32])[0]
rng = tf.range(0, num_rows)
ind = tf.stack([rng, ind], axis=1)
tf.gather_nd(arr, ind)
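For what it's worth, a possibly simpler variant (my sketch, not part of this answer): tf.shape reads the dynamic row count inside the graph, which avoids the py_function round-trip entirely. It assumes arr is 2-D and ind holds one int32 column index per row.
import tensorflow as tf
def select_per_row(arr, ind):
    # Dynamic row count; no py_function needed
    rng = tf.range(tf.shape(arr)[0])
    full_ind = tf.stack([rng, ind], axis=1)
    return tf.gather_nd(arr, full_ind)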
Related
I already know that NumPy "double-slicing" with fancy indexing creates copies instead of views, and the solution seems to be to convert them to one single slice (e.g. this question). However, I am facing a particular problem where I need to deal with integer indexing followed by boolean indexing, and I am at a loss what to do. The problem (simplified) is as follows:
a = np.random.randn(2, 3, 4, 4)
idx_x = np.array([[1, 2], [1, 2], [1, 2]])
idx_y = np.array([[0, 0], [1, 1], [2, 2]])
print(a[..., idx_y, idx_x].shape) # (2, 3, 3, 2)
mask = (np.random.randn(2, 3, 3, 2) > 0)
a[..., idx_y, idx_x][mask] = 1 # assignment doesn't work
How can I make the assignment work?
Not sure, but one idea is to do the broadcasting manually and apply the mask accordingly, just as Tim suggests. idx_x and idx_y both have the shape (3, 2), which is broadcast to the shape (6, 6) via the Cartesian product (3*2)^2.
x = np.broadcast_to(idx_x.ravel(), (6,6))
y = np.broadcast_to(idx_y.ravel(), (6,6))
# this should be the same as
x,y = np.meshgrid(idx_x, idx_y)
Now reshape the mask to match the broadcast indices and use it to select:
mask = mask.reshape(6,6)
a[..., x[mask], y[mask]] = 1
The assignment now works, but I am not sure if this is the exact assignment you wanted.
OK, apparently I was making things too complicated. There is no need to combine the indexing; the following code solves the problem elegantly:
b = a[..., idx_y, idx_x]
b[mask] = 1
a[..., idx_y, idx_x] = b
print(a[..., idx_y, idx_x][mask]) # all 1s
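For anyone who wants to verify, here is the same idea as a self-contained script (my sketch, using the setup from the question):
import numpy as np
a = np.random.randn(2, 3, 4, 4)
idx_x = np.array([[1, 2], [1, 2], [1, 2]])
idx_y = np.array([[0, 0], [1, 1], [2, 2]])
mask = np.random.randn(2, 3, 3, 2) > 0
b = a[..., idx_y, idx_x]   # advanced indexing returns a copy
b[mask] = 1                # modify the copy
a[..., idx_y, idx_x] = b   # write the whole block back
print(np.all(a[..., idx_y, idx_x][mask] == 1))  # True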
EDIT: Use @Kevin's solution, which actually gets the dimensions correct!
I haven't tried it specifically on your sample code, but I had a similar issue before. I think I solved it by applying the mask to the indices instead, something like:
a[..., idx_y[mask], idx_x[mask]] = 1
That way, NumPy can assign the values to the a array correctly.
EDIT 2: Posting some test code here, since comments remove formatting.
a = np.arange(27).reshape([3, 3, 3])
ind_x = np.array([[0, 0], [1, 2]])
ind_y = np.array([[1, 2], [1, 1]])
x = np.broadcast_to(ind_x.ravel(), (4, 4))
y = np.broadcast_to(ind_y.ravel(), (4, 4)).T
# x1, y2 = np.meshgrid(ind_x, ind_y) # above should be the same as this
mask = a[:, ind_y, ind_x] % 2 == 0 # what should this reshape to?
# a[..., x[mask], y[mask]] = 1 # Then you can mask away (may also need to reshape a or the masked x or y)
So, I want to mask out entire rows of a SparseTensor. This would be easy to do with tf.boolean_mask, but there isn't an equivalent for SparseTensors. Currently, something that is possible is for me to just go through all of the indices in SparseTensor.indices and filter out all of the ones that aren't a masked row, e.g.:
masked_indices = list(filter(lambda index: masked_rows[index[0]], indices))
where masked_rows is a 1D array of whether or not the row at that index is masked.
However, this is really slow, since my SparseTensor is fairly large (it has 90k indices, but will grow significantly larger). It takes quite a few seconds on a single data point, before I even apply SparseTensor.mask on the filtered indices. Another flaw of the approach is that it doesn't actually remove the rows either (although, in my case, a row of all zeros would be just as fine).
Is there a better way to mask a SparseTensor by row, or is this the best approach?
You can do that like this:
import tensorflow as tf
def boolean_mask_sparse_1d(sparse_tensor, mask, axis=0):  # mask is assumed to be 1D
    mask = tf.convert_to_tensor(mask)
    ind = sparse_tensor.indices[:, axis]
    mask_sp = tf.gather(mask, ind)
    new_size = tf.math.count_nonzero(mask)
    new_shape = tf.concat([sparse_tensor.shape[:axis], [new_size],
                           sparse_tensor.shape[axis + 1:]], axis=0)
    new_shape = tf.dtypes.cast(new_shape, tf.int64)
    mask_count = tf.cumsum(tf.dtypes.cast(mask, tf.int64), exclusive=True)
    masked_idx = tf.boolean_mask(sparse_tensor.indices, mask_sp)
    new_idx_axis = tf.gather(mask_count, masked_idx[:, axis])
    new_idx = tf.concat([masked_idx[:, :axis],
                         tf.expand_dims(new_idx_axis, 1),
                         masked_idx[:, axis + 1:]], axis=1)
    new_values = tf.boolean_mask(sparse_tensor.values, mask_sp)
    return tf.SparseTensor(new_idx, new_values, new_shape)
# Test
sp = tf.SparseTensor([[1], [3], [4], [6]], [1, 2, 3, 4], [7])
mask = tf.constant([True, False, True, True, False, False, True])
out = boolean_mask_sparse_1d(sp, mask)
print(out.indices.numpy())
# [[2]
# [3]]
print(out.values.numpy())
# [2 4]
print(out.shape)
# (4,)
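Side note, an addition of mine rather than part of the answer above: if zeroing out the masked rows is acceptable (the question says a row of all zeros is just as fine), tf.sparse.retain gives a much shorter route, at the cost of keeping the original dense shape:
import tensorflow as tf
sp = tf.SparseTensor([[1], [3], [4], [6]], [1, 2, 3, 4], [7])
mask = tf.constant([True, False, True, True, False, False, True])
# Keep only the entries whose row is unmasked; the dense shape stays (7,)
to_retain = tf.gather(mask, sp.indices[:, 0])
zeroed = tf.sparse.retain(sp, to_retain)
print(zeroed.values.numpy())  # [2 4]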
The problem I'm trying to solve is the one in the picture: given a text sentence with word embeddings, and a fixed set of indexes for each sentence pointing to the words I want to keep, how do I slice out the embeddings of interest?
Note: I cannot do this as a preprocessing step because the embeddings are the result of several layers.
As a toy example, say that I have 2 input datasets, one containing the data itself as 2D tensors, and another one containing the indices of the words that I'm interested in. So for instance
NUM_SENTENCES=2
NUM_ENTITIES_PER_REL=3
LEN_SENTENCE=5
NUM_H_T=2
DIM_EMBEDDING=2
indices = tf.constant([
[1, 3],
[0, 4]
])
data = tf.constant(np.reshape(np.arange(NUM_SENTENCES*LEN_SENTENCE*DIM_EMBEDDING), [NUM_SENTENCES, LEN_SENTENCE, DIM_EMBEDDING]))
With the index as stated, I want to retrieve elements 1 and 3 from the first element, and 0 and 4 from the second, to result in
array([[[ 2,  3],
        [ 6,  7]],

       [[10, 11],
        [18, 19]]])
I can obtain the desired result if I do:
selector = [[[idx, elem]
             for elem in arr]
            for idx, arr in enumerate(indices)]
tf.gather_nd(data, selector)
but this doesn't work within a model. Here is my code:
input_text = keras.Input(shape=(LEN_SENTENCE, DIM_EMBEDDING), name="input_sentence")
input_ent = keras.Input(shape=(NUM_ENTITIES_PER_REL, 2), dtype=tf.int32, name="entities_to_classify")
class Selector(layers.Layer):
    def __init__(self, **kwargs):
        super(Selector, self).__init__(**kwargs)

    def call(self, inputs):
        h_s = inputs[1]
        indexes = inputs[0]
        idxs = indexes.numpy()
        selector = [[[idx, elem]
                     for elem in arr]
                    for idx, arr in enumerate(idxs)]
        return tf.gather_nd(h_s, selector)
x = Selector(name="selector")([input_ent, input_text])
model = keras.Model(inputs=[input_ent, input_text], outputs=x, name='language_model')
keras.utils.plot_model(model, '/tmp/model.jpg', show_shapes=True)
and the result of executing it (I'm using tensorflow==2.0.0-beta1):
AttributeError: 'Tensor' object has no attribute 'numpy'
I don't know how to solve this chicken-and-egg problem. Any ideas?
You can do that like this:
import tensorflow as tf
import numpy as np
NUM_SENTENCES = 2
NUM_ENTITIES_PER_REL = 3
LEN_SENTENCE = 5
NUM_H_T = 2
DIM_EMBEDDING = 2
with tf.Graph().as_default(), tf.Session() as sess:
    indices = tf.constant([
        [1, 3],
        [0, 4]
    ])
    data = tf.constant(np.reshape(np.arange(NUM_SENTENCES * LEN_SENTENCE * DIM_EMBEDDING),
                                  [NUM_SENTENCES, LEN_SENTENCE, DIM_EMBEDDING]))
    # Make first-dimension indices
    s = tf.shape(indices)
    idx0 = tf.tile(tf.expand_dims(tf.range(s[0]), 1), [1, s[1]])
    # Make the full index
    idx_gather = tf.stack([idx0, indices], axis=-1)
    # Gather the result
    result = tf.gather_nd(data, idx_gather)
    print(sess.run(result))
    # [[[ 2  3]
    #   [ 6  7]]
    #
    #  [[10 11]
    #   [18 19]]]
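As a side note (not part of the original answer): in TF 2.x, tf.gather with batch_dims collapses the index construction into a single call:
import numpy as np
import tensorflow as tf
data = tf.constant(np.arange(2 * 5 * 2).reshape([2, 5, 2]))
indices = tf.constant([[1, 3], [0, 4]])
# For each batch element i, gather rows indices[i] along axis 1
result = tf.gather(data, indices, batch_dims=1)
print(result.numpy())
# [[[ 2  3]
#   [ 6  7]]
#  [[10 11]
#   [18 19]]]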
A Tensor can't be cast to NumPy; use the data directly instead:
idxs = indexes[0].numpy()
I would like to do something like this piece of NumPy code, just in TensorFlow:
a = np.zeros([5, 2])
idx = np.random.randint(0, 2, (5,))
row_idx = np.arange(5)
a[row_idx, idx] = row_idx
meaning: index all rows of a 2D tensor with another tensor, and then assign a tensor to the result. I am absolutely clueless about how to achieve this.
What I can do so far in TensorFlow is the following:
a = tf.Variable(tf.zeros((5, 2)))
idx = tf.constant([0, 1, 1, 0, 1])
row_idx = tf.range(5)
indices = tf.transpose([row_idx, idx])
r = tf.gather_nd(a, indices)
tf.assign(r, row_idx) # This line does not work
When I try to execute this, I get the following error in the last line:
AttributeError: 'Tensor' object has no attribute 'assign'
Is there a way around this? There must be some nice way to do this; I don't want to iterate with for loops over the data and manually assign values on a per-element basis. I know that right now array indexing is not as advanced as NumPy's, but this should still be possible somehow.
What you are trying to do is frequently done with tf.scatter_nd_update. However, that is usually not the right approach; you should not need a variable, just another tensor produced from the original tensor with some values replaced. Unfortunately, there is no straightforward way to do this in general. If your original tensor is really all zeros, though, you can simply use tf.scatter_nd:
import tensorflow as tf
idx = tf.constant([0, 1, 1, 0, 1])
row_idx = tf.range(5)
indices = tf.stack([row_idx, idx], axis=1)
a = tf.scatter_nd(indices, row_idx, (5, 2))
with tf.Session() as sess:
    print(sess.run(a))
    # [[0 0]
    #  [0 1]
    #  [0 2]
    #  [3 0]
    #  [0 4]]
However, if the initial tensor is not all zeros, it is more complicated. One way is to do the same as above, then build a mask marking the updated positions, and select between the original and the update according to that mask:
import tensorflow as tf
a = tf.ones((5, 2), dtype=tf.int32)
idx = tf.constant([0, 1, 1, 0, 1])
row_idx = tf.range(5)
indices = tf.stack([row_idx, idx], axis=1)
a_update = tf.scatter_nd(indices, row_idx, (5, 2))
update_mask = tf.scatter_nd(indices, tf.ones_like(row_idx, dtype=tf.bool), (5, 2))
a = tf.where(update_mask, a_update, a)
with tf.Session() as sess:
    print(sess.run(a))
    # [[0 1]
    #  [1 1]
    #  [1 2]
    #  [3 1]
    #  [1 4]]
I don't know about previous versions, but in TensorFlow 2.1 you can use tf.tensor_scatter_nd_update to do what you want in a single line. In your code example, you could do:
a = tf.zeros((5, 2), dtype=tf.int32)
idx = tf.constant([0, 1, 1, 0, 1])
row_idx = tf.range(5)
indices = tf.transpose([row_idx, idx])
a = tf.tensor_scatter_nd_update(a, indices, row_idx)
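For completeness, a quick eager check (TF 2.x) showing that this reproduces the scatter_nd result from the accepted answer:
import tensorflow as tf
a = tf.zeros((5, 2), dtype=tf.int32)
idx = tf.constant([0, 1, 1, 0, 1])
row_idx = tf.range(5)
indices = tf.transpose([row_idx, idx])
a = tf.tensor_scatter_nd_update(a, indices, row_idx)
print(a.numpy())
# [[0 0]
#  [0 1]
#  [0 2]
#  [3 0]
#  [0 4]]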
An example
Suppose I have a tensor values with shape (2,2,2)
values = [[[0, 1],[2, 3]],[[4, 5],[6, 7]]]
And a tensor indicies with shape (2, 2) that describes which values to select in the innermost dimension:
indicies = [[1,0],[0,0]]
Then the result will be a (2, 2) matrix with these values:
result = [[1,2],[4,6]]
What is this operation called in TensorFlow, and how do I do it?
General
Note that the above shape (2, 2, 2) is only an example; the tensors can have any number of dimensions. Some conditions for this operation:
ndim(values) - 1 == ndim(indicies)
values.shape[:-1] == indicies.shape == result.shape
indicies.max() <= values.shape[-1] - 1
I think you can emulate this with tf.gather_nd. You will just have to convert "your" indices to a representation suitable for tf.gather_nd. The following example is tied to your specific case, i.e. input tensors of shape (2, 2, 2), but I think it gives you an idea of how to write the conversion for input tensors with arbitrary shape, although I am not sure how easy that would be to implement (I haven't thought about it for too long). Also, I'm not claiming that this is the easiest possible solution.
import tensorflow as tf
import numpy as np

values = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])
values_tf = tf.constant(values)
indices = np.array([[1, 0], [0, 0]])

converted_idx = []
for k in range(values.shape[0]):
    outer = []
    for l in range(values.shape[1]):
        inds = [k, l, indices[k][l]]
        outer.append(inds)
        print(inds)
    converted_idx.append(outer)

with tf.Session() as sess:
    result = tf.gather_nd(values_tf, converted_idx)
    print(sess.run(result))
This prints
[[1 2]
[4 6]]
Edit: To handle arbitrary shapes, here is a recursive solution that should work (only tested on your example):
def convert_idx(last_dim_vals, ori_indices, access_to_ori, depth):
    if depth == len(last_dim_vals.shape) - 1:
        inds = access_to_ori + [ori_indices[tuple(access_to_ori)]]
        return inds
    outer = []
    for k in range(ori_indices.shape[depth]):
        inds = convert_idx(last_dim_vals, ori_indices, access_to_ori + [k], depth + 1)
        outer.append(inds)
    return outer
You can use this together with the original code I posted like so:
...
converted_idx = convert_idx(values, indices, [], 0)
with tf.Session() as sess:
    result = tf.gather_nd(values_tf, converted_idx)
    print(sess.run(result))
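A closing note, not from the original answers: NumPy calls this operation take_along_axis, and in newer TF versions (1.14+, eager shown here) tf.gather with batch_dims performs the same selection for arbitrary rank without building explicit index tuples:
import tensorflow as tf
values = tf.constant([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])
indicies = tf.constant([[1, 0], [0, 0]])
# Treat every axis but the last as a batch axis; the added trailing axis
# satisfies tf.gather's requirement that rank(indices) > batch_dims
result = tf.gather(values, indicies[..., tf.newaxis], batch_dims=2)[..., 0]
print(result.numpy())
# [[1 2]
#  [4 6]]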