I have a Boolean tensor and I want to convert it to a binary tensor of ones and zeros.
To put it into context, I have the following tensor
[[ True False True]
[False True False]
[ True False True]]
which I need to turn into ones and zeros so that I can then multiply it element-wise with a value tensor, i.e.:
[[1. 0.64082676 0.90568966]
[0.64082676 1. 0.37999165]
[0.90568966 0.37999165 1. ]]
I tried both these functions
masks = tf.map_fn(logical, masks, dtype=tf.float32)
masks = tf.vectorized_map(logical, masks)
with
@tf.function
def logical(x):
    if tf.equal(x, True):
        return zero
    return one
but unfortunately no luck. I also tried to multiply directly with the Boolean tensor but that was not allowed.
So any guidance on how to resolve this?
I think I solved it using this and some magic. Let penalties be the value tensor
test = tf.where(masks, penalties * 0.0, penalties * 1.0)
For people who need literally what the question asked:
tf.where(r, 1, 0)
where r is your Boolean tensor.
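For reference, a minimal sketch of that answer, assuming TF 2.x eager execution and the tensors from the question (float literals are used so the result can be multiplied element-wise with the value tensor directly):
import tensorflow as tf

masks = tf.constant([[True, False, True],
                     [False, True, False],
                     [True, False, True]])
penalties = tf.constant([[1.0, 0.64082676, 0.90568966],
                         [0.64082676, 1.0, 0.37999165],
                         [0.90568966, 0.37999165, 1.0]])

binary = tf.where(masks, 1.0, 0.0)  # ones where True, zeros where False
print(binary * penalties)           # element-wise product with the value tensor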
Say I have a tensor as follows:
var = tf.constant([0,0.05,0.2,0,0])
inverse_var = tf.math.reciprocal(var)
print(inverse_var)
Output: tf.Tensor([inf 20.  5. inf inf], shape=(5,), dtype=float32)
I want to make a new tensor from inverse_var tensor such that the infinity values are replaced with zero in the new tensor.
Final vector required: [0, 20, 5, 0, 0]
Here is a solution using the tf.tensor_scatter_nd_update method:
import tensorflow as tf

var = tf.constant([0, 0.05, 0.2, 0, 0])
inverse_var = tf.math.reciprocal(var)
print(inverse_var)

mask = tf.math.is_inf(inverse_var)
indices = tf.where(mask)          # indices of the infinite values
print(indices)
updates = tf.zeros(len(indices))  # one zero per infinite value
inverse_var_inf = tf.tensor_scatter_nd_update(inverse_var, indices, updates)  # scatter the zeros over the infs
print(inverse_var_inf)
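If helpful, the same replacement can also be written as a one-liner with tf.where, selecting zero wherever the value is infinite; a minimal sketch under the same setup:
inverse_var_inf = tf.where(tf.math.is_inf(inverse_var),
                           tf.zeros_like(inverse_var),
                           inverse_var)
print(inverse_var_inf)  # tf.Tensor([ 0. 20.  5.  0.  0.], shape=(5,), dtype=float32)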
Thank you!
I wish to assign 0 to multiple locations in a Tensor of size = (n,m) at runtime.
I computed the indices using tf.where, and called the scatter_nd_update function in order to assign tf.constant(0) at the newly found locations.
oscvec = tf.where(tf.math.logical_and(sgn2 > 0, sgn1 < 0))
updates = tf.placeholder(tf.float64, [None, None])
oscvec_empty = tf.placeholder(tf.int64, [None])
tf.cond(tf.not_equal(tf.size(oscvec), 0),
        tf.scatter_nd_update(save_parms, oscvec, tf.constant(0, dtype=tf.float64)),
        tf.scatter_nd_update(save_parms, oscvec_empty, updates))
I expected tf.where to return an empty tensor when the condition is not satisfied, and a non-empty tensor of indices into save_parms otherwise. I created the empty oscvec_empty placeholder to deal with the cases where tf.where returns an empty tensor. But this does not seem to work, as seen from the following error, which is generated when the TensorFlow if-else condition, tf.cond, is used to update the save_parms parameter tensor via the tf.scatter_nd_update function:
ValueError: Shape must be at least rank 1 but is rank 0 for 'ScatterNdUpdate' (op: 'ScatterNdUpdate') with input shapes: [55], [?,1], [].
Is there a way to replace values at multiple locations in the save_parms tensor when oscvec is non-empty, and leave it unchanged when oscvec is empty? The sgn tensors correspond to the result of the sign function applied to save_parms based on a given criterion.
You can use tf.where() instead of the more complex approach in the question.
import tensorflow as tf

vec1 = tf.constant([[ 0.05734377,  0.80147606, -1.2730557 ],
                    [ 0.42826906,  1.1943488 , -0.10129673]])
vec2 = tf.constant([[ 1.5461133 , -0.38455755, -0.79792875],
                    [ 1.5374309 , -1.5657802 ,  0.05546811]])
sgn1 = tf.sign(vec1)
sgn2 = tf.sign(vec2)
save_parms = tf.random_normal(shape=sgn1.shape)
oscvec = tf.where(tf.math.logical_and(sgn2 > 0, sgn1 < 0), tf.zeros_like(save_parms), save_parms)

with tf.Session() as sess:
    save_parms_val, oscvec_val = sess.run([save_parms, oscvec])
    print(save_parms_val)
    print(oscvec_val)
[[ 0.75645643 -0.646291 -1.2194813 ]
[ 1.5204562 -1.0625905 2.9939709 ]]
[[ 0.75645643 -0.646291 -1.2194813 ]
[ 1.5204562 -1.0625905 0. ]]
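A note on the design choice: tf.where(cond, x, y) selects element-wise, so the empty case needs no special handling; when no element of the condition is True it simply returns save_parms unchanged, and the tf.cond / placeholder workaround from the question becomes unnecessary. A tiny illustration under the same setup:
# An all-False condition just passes save_parms through untouched
unchanged = tf.where(tf.zeros_like(save_parms) > 0, tf.zeros_like(save_parms), save_parms)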
I have the following Tensor:
# (class, index)
obj_class_indexes = tf.constant([(0, 0), (0, 1), (0, 2), (1, 3)])
For each value, I'm looking for the other objects with the same class.
For now I'm trying the following:
same_classes = tf.logical_and(tf.equal(obj_class_indexes[:, 0], obj_class_indexes[0][0]),
                              obj_class_indexes[:, 1] > obj_class_indexes[0][1])
found_indexes = tf.where(same_classes)
with tf.Session() as sess:
    print(sess.run(same_classes))
    print(sess.run(found_indexes))
The expected output would be:
[False True True False]
[1, 2]
But it's giving me:
[False True True False]
[[1], [2]]
I don't think the logical_and output is actually the correct input to the tf.where function. Or am I missing something?
Thanks!
There is nothing wrong with the output. tf.where() is expected to output a 2-D tensor, as documented:
"The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements"
If you want the output to be a 1D tensor as you have mentioned, you could just add a reshape op in your case as below:
found_indexes = tf.where(same_classes)
found_indexes = tf.reshape(found_indexes, [-1])
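Putting it together with the tensors from the question (TF 1.x session, as in your snippet), the reshaped result is the expected 1-D index vector:
obj_class_indexes = tf.constant([(0, 0), (0, 1), (0, 2), (1, 3)])
same_classes = tf.logical_and(tf.equal(obj_class_indexes[:, 0], obj_class_indexes[0][0]),
                              obj_class_indexes[:, 1] > obj_class_indexes[0][1])
found_indexes = tf.reshape(tf.where(same_classes), [-1])
with tf.Session() as sess:
    print(sess.run(same_classes))   # [False  True  True False]
    print(sess.run(found_indexes))  # [1 2]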
hope this helps!
I'm quite new to Python and numpy and I just cannot get this to work without manual iteration.
I have an n-dimensional data array with floating-point values and an equally shaped Boolean "mask" array. From those I need to get a new array with the same shape as the other two, containing the values from the data array wherever the mask array at the same position is True; everything else should be 0:
# given
data = np.array([[1., 2.], [3., 4.]])
mask = np.array([[True, False], [False, True]])
# target
[[1., 0.], [0., 4.]]
Seems like numpy.where() might offer this but I could not get it to work.
Bonus: Don't create a new array, but replace the data values in place where the mask is False, to avoid allocating new memory.
Thanks!
This should work
data[~mask] = 0
A NumPy boolean array can be used as an index (https://docs.scipy.org/doc/numpy-1.15.0/user/basics.indexing.html#boolean-or-mask-index-arrays); the operation is then applied only to the elements where the index is True. Here you first need to invert the mask so that False becomes True, because you want to overwrite the elements where the mask is False.
Also, you can just multiply them, because True and False are treated as 1 and 0 respectively when a boolean array is used in mathematical operations. So,
#element-wise multiplication
data*mask
or
np.multiply(data, mask)
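A quick check of both answers with the arrays from the question; the in-place assignment also covers the bonus requirement, and the np.where variant mentioned in the question works as well:
import numpy as np

data = np.array([[1., 2.], [3., 4.]])
mask = np.array([[True, False], [False, True]])

print(data * mask)               # [[1. 0.] [0. 4.]] -- allocates a new array
print(np.where(mask, data, 0.))  # same result, also a new array
data[~mask] = 0                  # in place: no new data array is allocated
print(data)                      # [[1. 0.] [0. 4.]]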
Say I have a tensor that might contain positive and negative values:
[ 1, -1, 2, -2 ]
Now, I want to apply log(x) for positive values, and a constant -10 for negative values:
[ log(1), -10, log(2), -10 ]
In other words, I want something like numpy.vectorize. Is this possible in TensorFlow?
One possible way is to use a non-learnable variable, but I don't know whether it will back-propagate properly.
tf.map_fn() enables you to map an arbitrary TensorFlow subcomputation across the elements of a vector (or the slices of a higher-dimensional tensor). For example:
a = tf.constant([1.0, -1.0, 2.0, -2.0])

def f(elem):
    return tf.where(elem > 0, tf.log(elem), -10.0)
    # Alternatively, if the computation is more expensive than `tf.log()`, use
    # `tf.cond()` to ensure that only one branch is executed:
    # return tf.cond(elem > 0, lambda: tf.log(elem), lambda: -10.0)

result = tf.map_fn(f, a)
I found it; tf.where does exactly this kind of job: https://www.tensorflow.org/api_docs/python/tf/where
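A sketch of that whole-tensor approach, using the TF 1.x names from the answer above (tf.log is still evaluated on the negative entries, but tf.where discards those results):
a = tf.constant([1.0, -1.0, 2.0, -2.0])
result = tf.where(a > 0, tf.log(a), -10.0 * tf.ones_like(a))
# result -> [log(1), -10, log(2), -10]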