dynamically append a tensor to a list in tensorflow - python

Suppose I have a list of TensorFlow tensors, and I want to dynamically append an extra tensor to this list under a certain condition: e.g., if the maximum dot product between each tensor in the list and this extra tensor is larger than 0, then the extra tensor is appended to the list. Here is the code:
lists = []
for i in xrange(10):
    a = tf.get_variable(name=str(i), shape=[3], dtype=tf.float32)
    lists.append(a)
So right now we have a list of 10 tensors, each of shape [3].
for j in xrange(11, 30):
    b = tf.get_variable(name=str(j), shape=[3, 1], dtype=tf.float32)
    c = tf.stack(lists)
    e = tf.cond(tf.reduce_max(tf.reshape(lists, shape=[-1]), axis=0) > 0.00,
                lambda: tf.stack(lists.append(tf.reshape(b, [-1]))),
                lambda: c)
    lists = tf.unstack(e)
However, this code has several problems. First of all:
TypeError: 'NoneType' object has no attribute '__getitem__'
This happens because in tf.stack(lists.append(tf.reshape(b, [-1]))), the call lists.append(tf.reshape(b, [-1])) returns None (list.append mutates the list in place and returns nothing), so tf.stack receives a 'NoneType'.
The second problem is that even if that part worked, lists = tf.unstack(e) would still fail with ValueError: Cannot infer num from shape (?, 3), because tf.unstack() cannot work on non-inferrable dimensions.
Would you guys please teach me how to implement this function? Thanks

So, you have at least two different problems here.
First problem: I don't understand what kind of reshape you are doing. I would use tensordot instead. And I would not convert the tensor back into a list if it is not needed.
For example:
c = tf.stack(lists)  # shape [10, 3]
for j in range(11, 30):
    b = tf.get_variable(name=str(j), shape=[1, 3], dtype=tf.float32)
    d = tf.tensordot(b, c, axes=[1, 1])  # shape [1, 10]
    c = tf.cond(tf.reduce_max(d) > 0.00,
                lambda: tf.concat([c, b], 0),  # append b as a new row
                lambda: c)  # shape [?, 3]
Second problem: converting a tensor with non-inferrable dimensions into a list. There are lots of questions and answers about this topic:
http://www.google.com/search?q=tensorflow+unstack+can+not+work+on+non-inferrable+dimensions
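In short: tf.unstack needs a statically known length along the unstacked axis. A minimal sketch of the usual fix, assuming the length happens to be known at graph-construction time (here 10):
lists = tf.unstack(e, num=10)  # num supplies the length the static shape cannot
Otherwise, keep e as a stacked [?, 3] tensor and slice or index it directly instead of converting it back to a list, as the loop above does.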
Hope that helps.

Related

`torch.gather` without unbroadcasting

I have some batched input x of shape [batch, time, feature], and some batched indices i of shape [batch, new_time] which I want to gather into the time dim of x. As output of this operation I want a tensor y of shape [batch, new_time, feature] with values like this:
y[b, t', f] = x[b, i[b, t'], f]
In Tensorflow, I can accomplish this by using the batch_dims: int argument of tf.gather: y = tf.gather(x, i, axis=1, batch_dims=1).
In PyTorch, I can think of some functions which do similar things:
torch.gather of course, but this does not have an argument similar to Tensorflow's batch_dims. The output of torch.gather will always have the same shape as the indices. So I would need to unbroadcast the feature dim into i before passing it to torch.gather.
torch.index_select, but here, the indices must be one-dimensional. So to make it work I would need to unbroadcast x to add a "batch * new_time" dim, and then after torch.index_select reshape the output.
torch.nn.functional.embedding. Here, the embedding matrices would correspond to x. But this embedding function does not support batched weights, so I run into the same issue as for torch.index_select (looking at the code, torch.nn.functional.embedding uses torch.index_select under the hood).
Is it possible to accomplish such a gather operation without relying on unbroadcasting, which is inefficient for large dims?
This is actually the most frequent case: the input and index tensors don't have perfectly matching numbers of dimensions. You can still use torch.gather, though, since you can rewrite your expression:
y[b, t, f] = x[b, i[b, t], f]
as:
y[b, t, f] = x[b, i[b, t, f], f]
which ensures all three tensors have an equal number of dimensions. This reveals a third dimension on i, which we can create for free by unsqueezing a trailing dimension and expanding it along the feature dimension. You can do so with i[..., None].expand(b, new_time, f).
Here is a minimal example:
>>> b = 2; t = 3; new_t = 4; f = 5
>>> x = torch.rand(b, t, f)
>>> i = torch.randint(0, t, (b, new_t))
>>> x.gather(1, i[..., None].expand(b, new_t, f))  # shape (b, new_t, f)
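A sketch of an alternative on newer PyTorch (torch.take_along_dim exists since 1.9 and mirrors NumPy's take_along_axis; the explicit expand keeps the shapes unambiguous):
>>> torch.take_along_dim(x, i[..., None].expand(b, new_t, f), dim=1)
This produces the same (b, new_t, f) result.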

How to update a specific set of indices of a multi-dimensional tensor in TensorFlow

I have a multi-dimensional tensor of shape [1,32,32,155], of which I want to update
the [:,:,:,0:27] indices.
In PyTorch, one would do this simply with index assignment, i.e., feat[:,:,:,0:27] = upFeat, where upFeat has shape [1,32,32,27].
Index assignment is currently not supported in TensorFlow. Therefore, my first attempt was to do the following:
feat_ch = tf.unstack(feat, axis=3)
feat_ch[0:self.ncIn] = tf.unstack(upFeat, axis=3)
feat = tf.stack(feat_ch, axis=3)
feat being the [1,32,32,155] tensor, and upFeat being the [1,32,32,27] tensor.
The idea here was to split feat on the channel dimension, so that I get a list of 155 entries of shape [1,32,32]; then do the same with upFeat, replace the first 27 entries of the feat_ch list with the 27 entries from upFeat, and finally stack them up again to get the [1,32,32,155]-shaped tensor (this time with the first 27 channels updated).
However, I am not sure it does what I want, so I began to investigate other alternatives for the update.
TensorFlow has a method, tf.tensor_scatter_nd_update, which seems to be exactly what I want. However, I find it hard to wrap my head around. What I have tried so far is:
i1, i2, i3, i4 = tf.meshgrid(tf.range(1), tf.range(32),
                             tf.range(32), tf.range(27), indexing="ij")  # each of shape [1,32,32,27]
feat = tf.tensor_scatter_nd_update(feat, i1, upFeat)
The idea here was to create a mesh grid of the same shape, in such a way that each element corresponds to an index of feat that I wish to update. This does not work, however, and throws the following:
The inner -23 dimensions of output.shape=[1,32,32,155] must match the inner 1 dimensions of updates.shape=[1,32,32,27]: Shapes must be equal rank, but are 0 and 1
Am I understanding it wrong? Why does it not work? How would one update a ND-tensor?
Thanks
Use slice and concat:
feat = tf.random.uniform([1, 32, 32, 155])
updates = tf.zeros([1, 32, 32, 27])
result = tf.concat((updates, feat[:, :, :, 27:]), -1)  # new first 27 channels, old remaining 128
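For completeness, the meshgrid attempt from the question can also be made to work: tf.tensor_scatter_nd_update expects a single indices tensor whose last axis holds full coordinates, so the four grids must be stacked together rather than passed individually. A sketch:
import tensorflow as tf
feat = tf.random.uniform([1, 32, 32, 155])
upFeat = tf.zeros([1, 32, 32, 27])
# Build a full 4-D coordinate for every element to be updated.
i1, i2, i3, i4 = tf.meshgrid(tf.range(1), tf.range(32),
                             tf.range(32), tf.range(27), indexing="ij")
indices = tf.stack([i1, i2, i3, i4], axis=-1)  # shape [1,32,32,27,4]
result = tf.tensor_scatter_nd_update(feat, indices, upFeat)
Since the index depth (4) equals the rank of feat, updates must have shape indices.shape[:-1] = [1,32,32,27], which upFeat does.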

Divide a list of tensors by a list of scalars in tensorflow?

Is there a handy way to divide a list of tensors by a list of scalars? I'm trying to do something similar to the following, but get the indicated error on the last line:
import tensorflow as tf
tf.__version__
# '1.13.1'
import numpy as np
ds1 = tf.data.Dataset.from_tensor_slices(np.random.random([10, 3, 4])).batch(5, drop_remainder=True)
i = ds1.make_one_shot_iterator()
n = i.get_next()
n.shape
# TensorShape([Dimension(5), Dimension(3), Dimension(4)])
var = tf.Variable([1,2,3,4,5], dtype=np.float64)
op = n/var
# Traceback ....
# ValueError: Dimensions must be equal, but are 4 and 5 for 'truediv' (op: 'RealDiv') with input shapes: [5,3,4], [5].
My desired result is a tensor of shape [5, 3, 4] where the entries of the first [3, 4] slice are divided by 1, those of the second by 2, the third by 3, and so on. (The values 1-5 are standing in for computed values in my actual code.)
I'm pretty sure that the answer is going to be something easy, but I can't find the right set of search keywords to get SO or Google to cooperate.
As suggested by a commenter, reshaping var works:
op = n / tf.reshape(var, (5, 1, 1))
This yields the expected result. Essentially, I was originally asking too much in the way of shape inference from TensorFlow.
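Equivalently, indexing with tf.newaxis adds the two trailing singleton axes, so the shape-(5,) variable broadcasts against the (5, 3, 4) batch:
op = n / var[:, tf.newaxis, tf.newaxis]  # shape (5, 1, 1) against (5, 3, 4)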

Use tf.gather to extract rows of a tensor based on another tensor, row-wise (first dimension)

I have two tensors with dimensions A: [B,3000,3] and C: [B,4000] respectively. I want to use tf.gather() with every single row of tensor C as indices and the corresponding row of tensor A as params, to get a result of size [B,4000,3].
Here is an example to make this more understandable: say I have tensors
A = [[1,2,3],[4,5,6],[7,8,9]],
C = [0,2,1,2,1],
result = [[1,2,3],[7,8,9],[4,5,6],[7,8,9],[4,5,6]],
obtained by using tf.gather(A, C). This is all fine when applied to tensors with fewer than 3 dimensions.
But in the case described at the beginning, applying tf.gather(A, C, axis=1) gives a result tensor of shape
[B,B,4000,3]
It seems that tf.gather() simply used every element of tensor C as indices to gather from every batch entry of tensor A. The only solution I can think of is a for loop calling tf.gather(A[i,...], C[i,...]) per batch element to obtain the correct size
[B,4000,3]
but that would drastically reduce performance. So, is there any function that can do this task efficiently?
You need to use tf.gather_nd:
import tensorflow as tf
A = ... # B x 3000 x 3
C = ... # B x 4000
s = tf.shape(C)
B, cols = s[0], s[1]
# Make indices for first dimension
idx = tf.tile(tf.expand_dims(tf.range(B, dtype=C.dtype), 1), [1, cols])
# Complete index for gather_nd
gather_idx = tf.stack([idx, C], axis=-1)
# Gather result
result = tf.gather_nd(A, gather_idx)
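As a side note, on TensorFlow 1.14+ and 2.x the batch_dims argument of tf.gather performs this batched gather directly, so the manual index construction above is only needed on older versions:
result = tf.gather(A, C, axis=1, batch_dims=1)  # shape [B, 4000, 3]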

TypeError: 'Tensor' object does not support item assignment in TensorFlow

I try to run this code:
outputs, states = rnn.rnn(lstm_cell, x, initial_state=initial_state, sequence_length=real_length)
tensor_shape = outputs.get_shape()
for step_index in range(tensor_shape[0]):
    word_index = self.x[:, step_index]
    word_index = tf.reshape(word_index, [-1, 1])
    index_weight = tf.gather(word_weight, word_index)
    outputs[step_index, :, :] = tf.mul(outputs[step_index, :, :], index_weight)
But I get an error on the last line:
TypeError: 'Tensor' object does not support item assignment
It seems I cannot assign to a tensor; how can I fix it?
In general, a TensorFlow tensor object is not assignable*, so you cannot use it on the left-hand side of an assignment.
The easiest way to do what you're trying to do is to build a Python list of tensors, and tf.stack() them together at the end of the loop:
outputs, states = rnn.rnn(lstm_cell, x, initial_state=initial_state,
                          sequence_length=real_length)
output_list = []
tensor_shape = outputs.get_shape()
for step_index in range(tensor_shape[0]):
    word_index = self.x[:, step_index]
    word_index = tf.reshape(word_index, [-1, 1])
    index_weight = tf.gather(word_weight, word_index)
    output_list.append(tf.mul(outputs[step_index, :, :], index_weight))
outputs = tf.stack(output_list)
* With the exception of tf.Variable objects, via the Variable.assign() etc. methods. However, rnn.rnn() most likely returns a tf.Tensor object, which does not support these methods.
Another way you can do it is like this:
aa = tf.Variable(tf.zeros(3, tf.int32))
aa = aa[2].assign(1)
Then the output is:
array([0, 0, 1], dtype=int32)
Ref: https://www.tensorflow.org/api_docs/python/tf/Variable#assign
When you already have a tensor, convert it to a list using tf.unstack (TF 2.0) and then use tf.stack as @mrry has mentioned. (When using a multi-dimensional tensor, be aware of the axis argument in unstack.)
a_list = tf.unstack(a_tensor)
a_list[50:55] = [np.nan for i in range(5)]
a_tensor = tf.stack(a_list)
Neither tf.Tensor nor tf.Variable is element-wise assignable. There is a trick, however, which is not the most efficient way, of course, especially when you do it iteratively.
You can create a mask and a new_layer tensor holding the new values, and then combine them via a Hadamard product (element-wise product):
x = original * mask + new_layer * (1 - mask)
The original * mask part sets the values to be replaced to 0, and the second part, new_layer * (1 - mask), supplies whatever new values you want without touching the elements zeroed out by the mask in the previous step.
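For instance, a minimal sketch of the mask trick (the concrete values are purely illustrative), replacing element [2] of a length-5 tensor with 99.0:
import tensorflow as tf
original = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])
mask = tf.constant([1.0, 1.0, 0.0, 1.0, 1.0])        # 0 where the new value goes
new_layer = tf.constant([0.0, 0.0, 99.0, 0.0, 0.0])  # new value at the masked spot
x = original * mask + new_layer * (1 - mask)
# x == [1.0, 2.0, 99.0, 4.0, 5.0]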
Another way is to sidestep the problem and use NumPy instead:
x = np.zeros(tensor_dimensions)
or PyTorch:
x = torch.zeros(tensor_dimensions)
both of which support item assignment.
As this comment says, a workaround is to create a NEW tensor from the previous one, with new values in the zones that need them:
Create a mask of the same shape as outputs, with 0's at the indices you want to replace and 1's elsewhere (this can also work with True and False).
Create a new matrix of the same shape as outputs holding the desired values: new_values.
Replace only the needed indices with: outputs_new = outputs * mask + new_values * (1 - mask)
If you would provide me with an MWE I could do the code for you.
A good reference is this note: How to Replace Values by Index in a Tensor with TensorFlow-2.0
