I have a value tensor and a reordering tensor. The reordering tensor gives the ordering for each row of the value tensor. How can I use this reordering tensor to actually reorder the values in the value tensor?
This gives the desired result in numpy (Indexing one array by another in numpy):
import numpy as np
values = np.array([
[5,4,100],
[10,20,500]
])
reorder_rows = np.array([
[1,2,0],
[0,2,1]
])
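# Broadcast row indices (shape (2,1)) against the column indices (shape (2,3)):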
result = values[np.arange(values.shape[0])[:,None],reorder_rows]
print(result)
# [[ 4 100 5]
# [ 10 500 20]]
How can I do the same in tf?
I have tried to play with slicing and tf.gather_nd but can't make it work.
Thanks.
Try the following:
import numpy as np
values = np.array([
[5,4,100],
[10,20,500]
])
reorder_rows = np.array([
[1,2,0],
[0,2,1]
])
import tensorflow as tf
values = tf.constant(values)
reorder_rows = tf.constant(reorder_rows, dtype=tf.int32)
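# Row index of every element, [[0,0,0],[1,1,1]], to pair with the column
# indices below as (row, col) coordinates for gather_nd: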
x = tf.tile(tf.range(tf.shape(values)[0])[:,tf.newaxis], [1,tf.shape(values)[1]])
res = tf.gather_nd(values, tf.stack([x, reorder_rows], axis=-1))
sess = tf.InteractiveSession()
res.eval()
The following tf code should give the same result:
values = tf.constant([
[5,4,100],
[10,20,500]
])
reorder_rows = tf.constant([
[[0,1],[0,2],[0,0]],
[[1,0],[1,2],[1,1]]
])
result = tf.gather_nd(values, reorder_rows)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
result.eval()
# Result:
# [[  4, 100,   5],
#  [ 10, 500,  20]]
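On a newer TensorFlow (1.14+ or 2.x, where tf.gather accepts a batch_dims argument), the same per-row reordering can be written in one call. A minimal sketch under that assumption:
import tensorflow as tf
values = tf.constant([[5, 4, 100],
                      [10, 20, 500]])
reorder_rows = tf.constant([[1, 2, 0],
                            [0, 2, 1]])
# batch_dims=1 gathers within each row: result[i, j] = values[i, reorder_rows[i, j]]
result = tf.gather(values, reorder_rows, batch_dims=1)
# [[  4 100   5]
#  [ 10 500  20]]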
My English is poor. I will try my best to clarify my question.
My inputs vary in size, e.g. [[1,2],[3,4]] and [[5,6],[7,8],[10,11]].
The outputs I want are [[1,0,2,0],[3,0,4,0]] and [[5,0,6,0],[7,0,8,0],[10,0,11,0]] (that is, zeros inserted between the numbers).
Here is my implementation:
import tensorflow as tf
import numpy as np
matrix1=[[1,2],[3,4]]
matrix2 = [[5,6],[7,8],[10,11]]
with tf.Session() as sess:
    input = tf.placeholder(tf.float32, [None, 2])
    output = how_to_add(input)
    sess.run(tf.global_variables_initializer())
    [matrix3] = sess.run([output], feed_dict={input: matrix1})
    print(matrix3)
where how_to_add is:
def how_to_add(input):
    shape = input.get_shape().as_list()
    output = tf.Variable(tf.zeros([shape[0], 4]))
    with tf.control_dependencies([output[:, 1::2].assign(input)]):
        output = tf.identity(output)
    return output
but shape[0] is ?, so I got an error:
"Cannot convert a partially known TensorShape to a Tensor: %s" % s)
ValueError: Cannot convert a partially known TensorShape to a Tensor: (?, 4)
How to correct my codes?
Supplementary:
The following code works:
import tensorflow as tf
import numpy as np
matrix1=[[1,2],[3,4]]
matrix2 = [[5,6],[7,8],[10,11]]
with tf.Session() as sess:
    input = tf.placeholder(tf.float32, [2, 2])  # 'None' is replaced with '2'
    output = how_to_add(input)
    sess.run(tf.global_variables_initializer())
    [matrix3] = sess.run([output], feed_dict={input: matrix1})
    print(matrix3)
where how_to_add becomes:
def how_to_add(input):
    #shape = input.get_shape().as_list()
    output = tf.Variable(tf.zeros([2, 4]))  # 'shape[0]' is replaced with '2'
    with tf.control_dependencies([output[:, 1::2].assign(input)]):
        output = tf.identity(output)
    return output
Although this code works, it can only handle matrix1, not matrix2.
Do not use a variable for this; that is not what variables are for. You should create a new tensor from your input tensor. For your problem, you can do that like this:
import tensorflow as tf
def interleave_zero_columns(matrix):
    # Add a matrix of zeros along a new third dimension
    a = tf.stack([matrix, tf.zeros_like(matrix)], axis=2)
    # Reshape to interleave zeros across columns
    return tf.reshape(a, [tf.shape(matrix)[0], -1])
# Test
matrix1 = [[1, 2], [3, 4]]
matrix2 = [[5, 6], [7, 8], [10, 11]]
with tf.Session() as sess:
    input = tf.placeholder(tf.float32, [None, 2])
    output = interleave_zero_columns(input)
    print(sess.run(output, feed_dict={input: matrix1}))
    # [[1. 0. 2. 0.]
    #  [3. 0. 4. 0.]]
    print(sess.run(output, feed_dict={input: matrix2}))
    # [[ 5.  0.  6.  0.]
    #  [ 7.  0.  8.  0.]
    #  [10.  0. 11.  0.]]
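The interleaving works because tf.reshape flattens in row-major order: stacking along a new last axis places each value next to its zero, so collapsing the last two axes yields value, 0, value, 0. A quick numpy sanity check of the same idea:
import numpy as np
m = np.array([[1, 2], [3, 4]], dtype=np.float32)
a = np.stack([m, np.zeros_like(m)], axis=2)  # shape (2, 2, 2): pairs (value, 0)
print(a.reshape(m.shape[0], -1))
# [[1. 0. 2. 0.]
#  [3. 0. 4. 0.]]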
I am creating a multidimensional array.
import numpy as np
import tensorflow as tf
a = np.zeros((10, 4, 4, 1))
print(a.shape)
(10, 4, 4, 1)
I want to add RGB channels, so I am doing:
tf_a = tf.image.grayscale_to_rgb(a, name=None)
print(tf.rank(tf_a))
Tensor("Rank:0", shape=(), dtype=int32)
and it gives me a tensor with rank 0 instead of 4.
Also, the shape:
print(tf.shape(tf_a))
gives : Tensor("Shape:0", shape=(4,), dtype=int32)
In TensorFlow, tf.rank(tf_a) and tf.shape(tf_a) return tensors themselves, so you are printing the shape and rank of those tensors, not the shape and rank of tf_a. You need to evaluate them in a session to get the actual results, so I have edited your code slightly:
import numpy as np
import tensorflow as tf
a = np.zeros((10, 4, 4, 1))
tf_a = tf.image.grayscale_to_rgb(a, name=None)
sess = tf.Session()
with sess.as_default():
    print(tf.rank(tf_a).eval())   # rank
    print(tf.shape(tf_a).eval())  # shape
# 4           <- rank
# [10 4 4 3]  <- shape
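As a side note, since a is a concrete numpy array here, the static shape of tf_a is fully known, so it can also be read without running a session:
print(tf_a.get_shape())        # (10, 4, 4, 3)
print(tf_a.get_shape().ndims)  # 4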
Hope this helps.
I am trying to implement a custom loss function and have come across this problem. The custom loss function will look something like this:
def customLoss(z):
    y_pred = z[0]
    y_true = z[1]
    features = z[2]
    ...
    return loss
In my situation, y_pred and y_true are actually greyscale images. The features contained in z[2] consists of a pair of locations (x,y) where I would like to compare y_pred and y_true. These locations depend on the input training sample, so when defining the model they are passed as inputs. So my question is: how do I use the tensor features to index into the tensors y_pred and y_true?
If you are using Tensorflow as backend, tf.gather_nd() could do the trick (Keras doesn't have an exact equivalent yet as far as I can tell):
from keras import backend as K
import tensorflow as tf
def customLoss(z):
    y_pred = z[0]
    y_true = z[1]
    features = z[2]
    # Gathering values according to 2D indices:
    y_true_feat = tf.gather_nd(y_true, features)
    y_pred_feat = tf.gather_nd(y_pred, features)
    # Computing loss (to be replaced):
    loss = K.abs(y_true_feat - y_pred_feat)
    return loss
# Demonstration:
y_true = K.constant([[[0, 0, 0], [1, 1, 1]], [[2, 2, 2], [3, 3, 3]]])
y_pred = K.constant([[[0, 0, -1], [1, 1, 1]], [[0, 2, 0], [3, 3, 0]]])
coords = K.constant([[0, 1], [1, 0]], dtype="int64")
loss = customLoss([y_pred, y_true, coords])
tf_session = K.get_session()
print(loss.eval(session=tf_session))
# [[ 0. 0. 0.]
# [ 2. 0. 2.]]
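In this demonstration, coords selects the (row, column) positions (0, 1) and (1, 0), so the loss compares y_true[0, 1] with y_pred[0, 1] and y_true[1, 0] with y_pred[1, 0], which is where the output rows [0, 0, 0] and [2, 0, 2] come from.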
Note 1: Keras, however, has K.gather(), which only works with 1D indices. If you want to use native Keras only, you can still flatten your matrices and indices and apply this method:
def customLoss(z):
    y_pred = z[0]
    y_true = z[1]
    features = z[2]
    y_shape = K.shape(y_true)
    y_dims = K.int_shape(y_shape)[0]
    # Reshaping y_pred & y_true from (N, M, ...) to (N*M, ...):
    y_shape_flat = [y_shape[0] * y_shape[1]] + [-1] * (y_dims - 2)
    y_true_flat = K.reshape(y_true, y_shape_flat)
    y_pred_flat = K.reshape(y_pred, y_shape_flat)
    # Transforming the 2D coordinates into 1D ones accordingly:
    features_flat = features[0] * y_shape[1] + features[1]
    # Gathering the values:
    y_true_feat = K.gather(y_true_flat, features_flat)
    y_pred_feat = K.gather(y_pred_flat, features_flat)
    # Computing loss (to be replaced):
    loss = K.abs(y_true_feat - y_pred_feat)
    return loss
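As a quick check of this variant, the tensors from the demonstration above can be reused. Note two assumptions here: this flattened version reads features[0] as the row coordinates and features[1] as the column coordinates, and the indices are declared int32 so they can be multiplied with the int32 values returned by K.shape():
y_true = K.constant([[[0, 0, 0], [1, 1, 1]], [[2, 2, 2], [3, 3, 3]]])
y_pred = K.constant([[[0, 0, -1], [1, 1, 1]], [[0, 2, 0], [3, 3, 0]]])
coords = K.constant([[0, 1], [1, 0]], dtype="int32")
loss = customLoss([y_pred, y_true, coords])
print(loss.eval(session=K.get_session()))
# Should print the same values as the tf.gather_nd version:
# [[ 0. 0. 0.]
#  [ 2. 0. 2.]]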
Note 2: To answer your question in the comments, slicing can be done numpy-style with Tensorflow as backend:
x = K.constant([[[0, 1, 2], [3, 4, 5]], [[0, 0, 0], [0, 0, 0]]])
sess = K.get_session()
# When it comes to slicing, TF tensors work as numpy arrays:
slice = x[0, 0:2, 0:3]
print(slice.eval(session=sess))
# [[ 0. 1. 2.]
# [ 3. 4. 5.]]
# This also works if your indices are tensors (TF will call tf.slice() below):
coords_range_per_dim = K.constant([[0, 2], [0, 3]], dtype="int32")
slice = x[0,
          coords_range_per_dim[0][0]:coords_range_per_dim[0][1],
          coords_range_per_dim[1][0]:coords_range_per_dim[1][1]]
print(slice.eval(session=sess))
# [[ 0. 1. 2.]
# [ 3. 4. 5.]]
I am having difficulty applying tf.scatter_nd_add() to 2D tensors. The documentation is a bit unclear and does not contain an example of a sparse update, only of full slice updates.
My case is the following:
updates - 2D tensor of shape [None, 6]
indices - 2D tensor of shape [None, 6]
ref - 2D Variable of zeros of shape [None, 6]
It is guaranteed that updates, indices and ref always have the same first dimension, but the size of that dimension can vary. The update I want to perform looks like:
for i, j:
    k = indices[i][j]
    ref[i][k] += updates[i][j]
Note that indices contains duplicates. tf.scatter_nd_add(ref, indices, updates) complains about shape mismatch and I cannot figure out how I need to restructure the tensors in order to performs the update.
I figured it out. Each 2D entry in indices must actually specify the absolute location that will get updated in ref. This means that indices must be 3D and then the non-vectorized update looks like:
for i, j:
    r, k = indices[i][j]
    ref[r][k] += updates[i][j]
In the above question it just happens that r is always equal to i.
Here is a full Tensorflow implementation with varying shapes. For clarity, in the following example, col_indices corresponds to indices from the original question:
import tensorflow as tf
import numpy as np
updates = tf.placeholder(dtype=tf.float32, shape=[None, 6])
col_indices = tf.placeholder(dtype=tf.int32, shape=[None, 6])
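# Exclusive cumsum over a tensor of ones gives every element its row number:
# [[0, 0, ..., 0], [1, 1, ..., 1], [2, 2, ..., 2], ...]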
row_indices = tf.cumsum(tf.ones_like(col_indices), axis=0, exclusive=True)
indices = tf.concat([tf.expand_dims(row_indices, axis=-1),
                     tf.expand_dims(col_indices, axis=-1)], axis=-1)
tmp_var = tf.Variable(0, trainable=False, dtype=tf.float32, validate_shape=False)
ref = tf.assign(tmp_var, tf.zeros_like(updates), validate_shape=False)
# This makes sure that ref is always 0 before scatter_nd_add() runs
with tf.control_dependencies([ref]):
    result = tf.scatter_nd_add(ref, indices, updates)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Create example input data
np_input = np.arange(0, 6, 1, dtype=np.int32)
np_input = np.tile(np_input[None,:], [10, 1])
res = sess.run(result, feed_dict={updates: np_input, col_indices: np_input})
print(res)
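On a newer TensorFlow (1.14+ or 2.x, where tf.tensor_scatter_nd_add is available), the temporary-variable workaround should not be needed at all, since that op works on plain tensors and still accumulates duplicate indices. A minimal sketch under that assumption:
# Accumulates updates into a fresh zero tensor, no tf.Variable required
result = tf.tensor_scatter_nd_add(tf.zeros_like(updates), indices, updates)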
I'm comparing TensorFlow to numpy using logically equivalent code.
When using tf.where, I can't get the same result as with np.where.
What is wrong with the code or usage below?
Data:
X_batch = np.concatenate([np.arange(10).reshape(1, -1) for i in range(10)], axis=0)
TensorFlow tf.where toy code:
X = tf.placeholder(dtype=tf.int32, shape=[10, 10])
with tf.Session() as sess:
    print(sess.run(tf.where(X > 5, tf.zeros([10, 10], dtype=tf.int32), X),
                   feed_dict={X: X_batch}))
Numpy np.where toy code:
np.where(X_batch > 5, np.zeros([10,10]), X_batch)
The code had some typos; the correction is below.
I edited the code. The inputs to tf.where() should be the same as for np.where(): pass a 10x10 zero matrix and X as the second and third arguments, mirroring your np.where() call.
import tensorflow as tf
import numpy as np
X_batch = np.concatenate([np.arange(10).reshape(1, -1) for i in range(10)], axis=0)
#print(X_batch )
X = tf.placeholder(dtype=tf.int32, shape=[10, 10])
with tf.Session() as sess:
    print(sess.run(tf.where(X > 5, tf.fill([10, 10], 0), X),
                   feed_dict={X: X_batch}))
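With this, both the TensorFlow and the numpy versions print ten identical rows of [0 1 2 3 4 5 0 0 0 0] (the numpy result comes back as float64 because np.zeros defaults to float, while the TensorFlow result stays int32).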
Hope this helps.