How to explain the output of tf.rank in TensorFlow - Python

I am new to TensorFlow and have a question about the tf.rank method.
In the docs https://www.tensorflow.org/api_docs/python/tf/rank there is a simple example of tf.rank:
# shape of tensor 't' is [2, 2, 3]
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
tf.rank(t) # 3
But when I run the code below:
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
print(tf.rank(t)) # expecting 3
I get output like:
Tensor("Rank:0", shape=(), dtype=int32)
How can I get 3 as the output?

As I said in the comments of this question, tf.rank(t) creates a tensor that evaluates the rank of tensor t. If you use the Python print() function, it just prints information about the tensor object itself.
Let's assign the tf.rank(t) tensor to a variable rank (as suggested by #Picnix_) and evaluate its value inside a tf.Session():
import tensorflow as tf
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
rank = tf.rank(t)
with tf.Session() as sess:
    rank_value = sess.run(rank)
    print(rank_value)  # Outputs --> 3
So, rank_value is the variable containing the value of the tensor rank, and as the documentation suggests, its value is 3. Hope this sheds some light on how TensorFlow works.
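For completeness: in TensorFlow 2.x eager execution is on by default, so no session is needed and the value can be read directly. A minimal sketch, assuming a TF 2.x install:
import tensorflow as tf  # assumes TensorFlow 2.x (eager execution on by default)

t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
rank = tf.rank(t)

# An EagerTensor already carries its value, so printing shows it directly.
print(rank)          # tf.Tensor(3, shape=(), dtype=int32)
print(rank.numpy())  # 3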

How do I find the max value in tf.Tensor?

How can I find the max value in each element so that I get 2, 4, 6, 8?
import tensorflow as tf
a = tf.constant([
    [[1, 2]], [[3, 4]],
    [[5, 6]], [[7, 8]]])
I tried the following code:
tf.reduce_max(a, keepdims=True)
but that just gives me 8 as output whilst ignoring the rest.
You have to set the axis parameter to -1, like this:
import tensorflow as tf
a = tf.constant([
    [[1, 2]], [[3, 4]],
    [[5, 6]], [[7, 8]]])
print(tf.reduce_max(a, axis=-1, keepdims=False))
'''
tf.Tensor(
[[2]
 [4]
 [6]
 [8]], shape=(4, 1), dtype=int32)
'''
since you have a 3D tensor and want to reduce over its last dimension.
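If you would rather get a flat vector 2, 4, 6, 8 instead of the (4, 1) result above, one option is to additionally squeeze the leftover singleton dimension. A small sketch, assuming TF 2.x:
import tensorflow as tf

a = tf.constant([
    [[1, 2]], [[3, 4]],
    [[5, 6]], [[7, 8]]])

# Reduce over the last axis, then drop the remaining singleton dimension.
flat_max = tf.squeeze(tf.reduce_max(a, axis=-1))
print(flat_max)  # tf.Tensor([2 4 6 8], shape=(4,), dtype=int32)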

extracting subtensor from a tensor according to an index tensor

I have this tensor:
tensor([[[1, 2],
         [3, 4]],

        [[5, 6],
         [7, 8]]])
and I have this index tensor:
tensor([0, 1])
and what I want to get are the subtensors along dim 1 at the corresponding indices from the index tensor, that is:
tensor([[1, 2],
        [7, 8]])
I tried to use the torch.gather() function and advanced indexing with no success. Can anyone help?
You are implicitly using the position of each value in your index tensor; they just happen to be the same as the values. If you want to walk through the first-level elements of the tensor, you can use torch.arange to construct the first-level indices.
import torch
from torch import tensor

t = tensor([[[1, 2],
             [3, 4]],
            [[5, 6],
             [7, 8]]])
ix = tensor([0, 1])
ix0 = torch.arange(0, ix.shape.numel())
t[ix0, ix]
# returns:
# tensor([[1, 2],
#         [7, 8]])
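If you specifically want the torch.gather() route mentioned in the question, here is an equivalent sketch (same tensors as above; the idx name is just illustrative). The idea is to expand the index tensor so gather selects one row per batch element along dim 1:
import torch
from torch import tensor

t = tensor([[[1, 2],
             [3, 4]],
            [[5, 6],
             [7, 8]]])
ix = tensor([0, 1])

# Shape the index to (2, 1, 2) so gather picks row ix[i] from each batch element.
idx = ix.view(-1, 1, 1).expand(-1, 1, t.size(-1))
out = torch.gather(t, 1, idx).squeeze(1)
print(out)
# tensor([[1, 2],
#         [7, 8]])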

Python Numpy add an array at array index

I'm struggling with something that may be very simple or not possible.
I want to add a NumPy array to another NumPy array at a specific index.
import numpy as np

a = np.zeros(shape=(17, 1, 2))
for i in range(10):
    b = [i]
    c = [1, 2, 3, 4]
    b.append(c)
    # Here I want to add b to a at a specific index, but it's not working
    # np.append(a[i][0][0], b)
In the end I want something like this:
a = [[[[0, [1,2,3,4]], ....]]]
Thank you
Your example is not very clear, and you do not say what is actually going wrong. You are not doing anything with a in the loop, for example. Also, you are trying to mix lists and arrays.
Still, I think I know what you mean/need.
You can use insert and append for lists, as illustrated in the example below:
a = []
for i in range(10):
    b = [i]
    c = [1, 2, 3, 4]
    b.insert(1, c)
    a.append(b)
print(a)
Update
Use list.insert(index, obj) to insert an object at a specific index.
If the following is not close to what you'd like, you'll really have to be more specific, dear OP.
I acknowledge NumPy to be a powerful library, but here you ask it to initialize an array of zeros (a numeric dtype) and then want to store a Python list in it. You cannot expect the constructor to know at creation time that it needs to allocate space for object-type data. What you want is to help the NumPy ndarray constructor by specifying the dtype explicitly.
import numpy as np

a = np.zeros(shape=(17, 1, 2), dtype=object)
for i in range(10):
    b = [i]
    c = [1, 2, 3, 4]
    b.append(c)
    a[i] = b
a
# array([[[0, [1, 2, 3, 4]]],
#        [[1, [1, 2, 3, 4]]],
#        [[2, [1, 2, 3, 4]]],
#        [[3, [1, 2, 3, 4]]],
#        [[4, [1, 2, 3, 4]]],
#        [[5, [1, 2, 3, 4]]],
#        [[6, [1, 2, 3, 4]]],
#        [[7, [1, 2, 3, 4]]],
#        [[8, [1, 2, 3, 4]]],
#        [[9, [1, 2, 3, 4]]],
#        [[0, 0]],
#        [[0, 0]],
#        [[0, 0]],
#        [[0, 0]],
#        [[0, 0]],
#        [[0, 0]],
#        [[0, 0]]], dtype=object)
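Reading the nested values back out then works with ordinary indexing on the object array; a short sketch, assuming the a built in the loop above:
print(a[0, 0, 0])     # 0
print(a[0, 0, 1])     # [1, 2, 3, 4]
print(a[0, 0, 1][2])  # 3 (the stored object is a plain Python list)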

What does x=x[class_id] do when used on NumPy arrays

I am learning Python and solving a machine learning problem.
class_ids=np.arange(self.x.shape[0])
np.random.shuffle(class_ids)
self.x=self.x[class_ids]
This is a shuffle function using NumPy, but I can't understand what self.x = self.x[class_ids] means, because I think it just assigns the value of the array to a variable.
It's a very complicated way to shuffle the first dimension of your self.x. For example:
>>> import numpy as np
>>> x = np.array([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]])
>>> x
array([[1, 1],
       [2, 2],
       [3, 3],
       [4, 4],
       [5, 5]])
Then, using the mentioned approach:
>>> class_ids = np.arange(x.shape[0])  # create an array [0, 1, 2, 3, 4]
>>> np.random.shuffle(class_ids)       # shuffle the array
>>> x[class_ids]                       # use integer array indexing to shuffle x
array([[5, 5],
       [3, 3],
       [1, 1],
       [4, 4],
       [2, 2]])
Note that the same could be achieved just by using np.random.shuffle because the docstring explicitly mentions:
This function only shuffles the array along the first axis of a multi-dimensional array. The order of sub-arrays is changed but their contents remains the same.
>>> np.random.shuffle(x)
>>> x
array([[5, 5],
       [3, 3],
       [1, 1],
       [2, 2],
       [4, 4]])
or by using np.random.permutation:
>>> class_ids = np.random.permutation(x.shape[0])  # shuffle the first dimension's indices
>>> x[class_ids]
array([[2, 2],
       [4, 4],
       [3, 3],
       [5, 5],
       [1, 1]])
Assuming self.x is a NumPy array:
class_ids is a 1-D NumPy array that is being used as an integer array index in the expression self.x[class_ids]. Because the previous line shuffled class_ids, self.x[class_ids] evaluates to self.x shuffled by rows.
The assignment self.x = self.x[class_ids] then assigns the shuffled array back to self.x.
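A common reason to prefer the index-array form over np.random.shuffle is that the same permutation can be reused on several arrays, for example to shuffle features and labels in unison. A minimal sketch (the x and y names here are just illustrative):
import numpy as np

x = np.array([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]])
y = np.array([10, 20, 30, 40, 50])

perm = np.random.permutation(x.shape[0])  # one random permutation of the row indices
x_shuffled = x[perm]  # rows of x in permuted order
y_shuffled = y[perm]  # labels reordered consistently with x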

Tensorflow: Repeat(tile) elements of a Tensor

I have an input tensor as follows:
a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
and the 'multiple' tensor:
mul= tf.constant([1, 3, 2])
Is it possible to tile a into a 3D tensor where the first element of a appears once, the second appears 3 times, and the last element appears twice?
result = [
    [[1, 2, 3]],
    [[4, 5, 6], [4, 5, 6], [4, 5, 6]],
    [[7, 8, 9], [7, 8, 9]]
]
TensorFlow 0.12
Thank you very much.
No, this is not possible. Read about tensors and shapes in the docs.
To understand why it is not possible, imagine a matrix that has a different number of elements in each row: it would clearly not be a matrix.
You can use NumPy instead:
import numpy as np
import tensorflow as tf

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
mul = np.array([1, 3, 2])
result = []
for i in range(len(mul)):
    result.append(np.tile(a[i], (mul[i], 1)))
# dtype=object is needed because the tiled pieces have different shapes
result = np.array(result, dtype=object)
I am sure you cannot have non-rectangular tensors in TensorFlow; that is what is causing the problem. Otherwise, I just extended #Kris's code to run wholly in TensorFlow.
import tensorflow as tf

sess = tf.InteractiveSession()
a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
mul = tf.constant([1, 3, 2])
result = []
for i in range(3):
    result.append(tf.tile(a[i], [mul[i]]))
print([r.eval() for r in result])
# r_tensor = tf.stack(0, [r for r in result])  # Not possible
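For what it's worth, newer TensorFlow releases do support this kind of non-rectangular result via tf.RaggedTensor. A sketch, assuming TF 2.x rather than the 0.12 the question targets:
import tensorflow as tf  # assumes TensorFlow 2.x

a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
mul = tf.constant([1, 3, 2])

# Repeat row i of `a` mul[i] times, then regroup the repeated rows
# into a ragged tensor with mul[i] rows per group.
repeated = tf.repeat(a, repeats=mul, axis=0)  # shape (6, 3)
result = tf.RaggedTensor.from_row_lengths(repeated, row_lengths=mul)
print(result)
# <tf.RaggedTensor [[[1, 2, 3]],
#                   [[4, 5, 6], [4, 5, 6], [4, 5, 6]],
#                   [[7, 8, 9], [7, 8, 9]]]>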
