I am creating a multidimensional array.
import numpy as np
import tensorflow as tf
a = np.zeros((10, 4, 4, 1))
print(a.shape)
(10, 4, 4, 1)
I want to add rgb channels, so I am doing:
tf_a = tf.image.grayscale_to_rgb(a, name=None)
print(tf.rank(tf_a))
Tensor("Rank:0", shape=(), dtype=int32)
and it gives me a tensor with rank 0 instead of 4.
Also, the shape:
print(tf.shape(tf_a))
gives : Tensor("Shape:0", shape=(4,), dtype=int32)
In TensorFlow, tf.rank(tf_a) and tf.shape(tf_a) return tensors. Therefore, you are printing the shape and rank of those tensors, not the shape and rank of tf_a itself.
I have edited your code slightly to get the actual results:
import numpy as np
import tensorflow as tf
a = np.zeros((10, 4, 4, 1))
tf_a = tf.image.grayscale_to_rgb(a, name=None)
sess = tf.Session()
with sess.as_default():
    print(tf.rank(tf_a).eval())   # rank
    print(tf.shape(tf_a).eval())  # shape
4           # rank
[10 4 4 3]  # shape
Hope this helps.
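As an aside, if all you need is the shape known at graph-construction time, you can read the static shape off the tensor directly, without a session (a minimal sketch; this works here because the input shape is fully known):
print(tf_a.shape)        # (10, 4, 4, 3)
print(tf_a.shape.ndims)  # 4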
Say I have two rank 1 tensors of different (important) length:
import tensorflow as tf
x = tf.constant([1, 2, 3])
y = tf.constant([4, 5])
Now I want to append y to the end of x to give me the tensor:
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([1, 2, 3, 4, 5], dtype=int32)>
But I can't seem to figure out how.
I will be doing this inside a function decorated with tf.function, and it is my understanding that everything needs to be TensorFlow operations for tf.function to work; that is, converting x and y to NumPy arrays and back to a tensor will cause problems.
Thanks!
EDIT:
The solution is to use tf.concat(), as pointed out by @Andrey:
tf.concat([x, y], axis=0)
It turns out that the problem originated when trying to append a single number to the end of a rank 1 tensor as follows:
x = tf.constant([1, 2, 3])
y = tf.constant(5)
tf.concat([x, y], axis=0)
which fails since here y is a rank 0 tensor of shape (). This can be solved by writing:
x = tf.constant([1, 2, 3])
y = tf.constant([5])
tf.concat([x, y], axis=0)
since y will then be a rank 1 tensor of shape (1,).
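An equivalent fix is to add the missing dimension explicitly instead of rewriting the constant (a small sketch; tf.expand_dims or tf.reshape would both work here):
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant(5)                # rank 0, shape ()
y = tf.expand_dims(y, axis=0)     # rank 1, shape (1,)
print(tf.concat([x, y], axis=0))  # tf.Tensor([1 2 3 5], shape=(4,), dtype=int32)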
Use tf.concat():
import tensorflow as tf
t1 = tf.constant([1, 2, 3])
t2 = tf.constant([4, 5])
output = tf.concat([t1, t2], 0)
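Since tf.concat is a native TensorFlow op, it also works unchanged inside a tf.function-decorated function, which addresses the constraint raised in the question (a quick sanity-check sketch):
import tensorflow as tf

@tf.function
def append_tensors(a, b):
    # Pure TF ops, so this traces cleanly under tf.function
    return tf.concat([a, b], axis=0)

print(append_tensors(tf.constant([1, 2, 3]), tf.constant([4, 5])))
# tf.Tensor([1 2 3 4 5], shape=(5,), dtype=int32)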
I want to apply tf.nn.max_pool() on a single image, but I get a result with dimensions that are totally different from the input:
import tensorflow as tf
import numpy as np
ifmaps_1 = tf.Variable(tf.random_uniform(shape=[7, 7, 3], minval=0, maxval=3, dtype=tf.int32))
ifmaps = tf.dtypes.cast(ifmaps_1, dtype=tf.float64)
ofmaps_tf = tf.nn.max_pool([ifmaps], ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding="SAME")[0] # no padding
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    print("ifmaps_tf = ")
    print(ifmaps.eval())
    print("ofmaps_tf = ")
    result = sess.run(ofmaps_tf)
    print(result)
I think this is related to applying pooling to a single example rather than to a batch. I need to do the pooling on a single example.
Any help is appreciated.
Your input is (7, 7, 3), the kernel size is (3, 3), and the stride is (2, 2). So if you do not want any padding (as stated in your comment), you should use padding="VALID", which will return a (3, 3) tensor as output. If you use padding="SAME", it will return a (4, 4) tensor.
Usually, the formula for the output size with SAME padding is:
out_size = ceil(in_size / stride)
and with VALID padding it is:
out_size = ceil((in_size - filter_size + 1) / stride)
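Plugging in the numbers from the question (in_size = 7, filter_size = 3, stride = 2) confirms the two output sizes above:
import math

in_size, filter_size, stride = 7, 3, 2
print(math.ceil(in_size / stride))                      # SAME:  4
print(math.ceil((in_size - filter_size + 1) / stride))  # VALID: 3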
I'm trying to implement a custom Keras layer in TensorFlow 2.0 RC and need to concatenate a [None, Q] shaped tensor onto a [None, H, W, D] shaped tensor to produce a [None, H, W, D + Q] shaped tensor. The two input tensors are assumed to have the same batch size, even though it is not known beforehand. Also, none of H, W, D, and Q are known at write time; they are determined in the layer's build method when the layer is first called. The issue I'm experiencing arises when broadcasting the [None, Q] shaped tensor up to a [None, H, W, Q] shaped tensor in order to concatenate.
Here is an example of trying to create a Keras Model using the Functional API that performs variable-batch broadcasting from shape [None, 3] to shape [None, 5, 5, 3]:
import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np
x = tf.keras.Input([3]) # Shape [None, 3]
y = kl.Reshape([1, 1, 3])(x) # Need to add empty dims before broadcasting
y = tf.broadcast_to(y, [-1, 5, 5, 3]) # Broadcast to shape [None, 5, 5, 3]
model = tf.keras.Model(inputs=x, outputs=y)
print(model(np.random.random(size=(8, 3))).shape)
Tensorflow produces the error:
InvalidArgumentError: Dimension -1 must be >= 0
And then when I change -1 to None it gives me:
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 5, 5, 3]. Consider casting elements to a supported type.
How can I perform the specified broadcasting?
You need to use the dynamic shape of y to determine the batch size. The dynamic shape of a tensor y is given by tf.shape(y) and is a tensor op representing the shape of y evaluated at runtime. The modified example demonstrates this by selecting between the old shape, [None, 1, 1, 3], and the new shape using tf.where.
import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np
x = tf.keras.Input([3]) # Shape [None, 3]
y = kl.Reshape([1, 1, 3])(x) # Need to add empty dims before broadcasting
# Retain the batch and depth dimensions, but broadcast along H and W
broadcast_shape = tf.where([True, False, False, True],
                           tf.shape(y), [0, 5, 5, 0])
y = tf.broadcast_to(y, broadcast_shape) # Broadcast to shape [None, 5, 5, 3]
model = tf.keras.Model(inputs=x, outputs=y)
print(model(np.random.random(size=(8, 3))).shape)
# prints: "(8, 5, 5, 3)"
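For this particular case, where the target H and W are compile-time constants, an alternative sketch is tf.tile, which repeats the singleton H and W dimensions and leaves the unknown batch dimension alone:
y = kl.Reshape([1, 1, 3])(x)  # [None, 1, 1, 3]
y = tf.tile(y, [1, 5, 5, 1])  # [None, 5, 5, 3]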
References:
"TensorFlow: Shapes and dynamic dimensions"
I have two tensors: weighted, with shape (None, 3), and D, with shape (None, 3, 5). I want to multiply weighted with D, as in weighted * D, to get a result of shape (None, 3, 5).
As illustrated in my attached image, each scalar in weighted should be multiplied with the corresponding row of D.
So I tried multiply([weighted, D]), but I got the error ValueError: Operands could not be broadcast together with shapes (3, 5) (3,). I assume this is caused by the different shapes of the inputs. How do I fix this?
Update
multiply([weighted, Permute((2, 1))(D)]) worked. I am not sure why, but it seems the last elements of the two shapes must match: broadcasting aligns trailing dimensions, so (None, 3) broadcasts against the permuted (None, 5, 3).
You can reshape weighted and use broadcasting to accomplish that. Like this:
weighted = weighted.reshape(-1, 3, 1)
result = weighted * D
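Equivalently, in NumPy you can insert the trailing axis with np.newaxis; a quick check with toy shapes (the random data here is just for illustration):
import numpy as np

weighted = np.random.random((2, 3))
D = np.random.random((2, 3, 5))
result = weighted[:, :, np.newaxis] * D  # (2, 3, 1) broadcasts against (2, 3, 5)
print(result.shape)                      # (2, 3, 5)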
Update 1: The same concept (broadcasting) can be used, for instance, in TensorFlow with tf.expand_dims(weighted, axis=2). My proof of concept:
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
anp = np.array([[1, 2, 10], [2, 1, 10]])
bnp = np.random.random((2, 3, 5))
with tf.Session() as sess:
    weighted = tf.placeholder(tf.float32, shape=(None, 3))
    D = tf.placeholder(tf.float32, shape=(None, 3, 5))
    rweighted = tf.expand_dims(weighted, axis=2)  # (None, 3) -> (None, 3, 1)
    result = rweighted * D                        # broadcasts to (None, 3, 5)
    r = sess.run(result, feed_dict={weighted: anp, D: bnp})
    print(bnp)
    print("--")
    print(r)
For Keras, use the backend API:
from keras import backend as K
...
K.expand_dims(weighted, 2)
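Inside a model, the expand_dims call is typically wrapped in a Lambda layer so it stays a Keras layer (a sketch, assuming the same standalone-Keras imports and a Keras version whose merge layers broadcast over size-1 axes; weighted and D are the symbolic tensors from the question):
from keras import backend as K
from keras.layers import Lambda, multiply

weighted_3d = Lambda(lambda w: K.expand_dims(w, 2))(weighted)  # (None, 3, 1)
result = multiply([weighted_3d, D])                            # (None, 3, 5)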
I have a value tensor and a reordering tensor. The reordering tensor gives the ordering for each row of the value tensor. How can I use the reordering tensor to actually reorder the values in the value tensor?
This gives the desired result in numpy (Indexing one array by another in numpy):
import numpy as np
values = np.array([
[5,4,100],
[10,20,500]
])
reorder_rows = np.array([
[1,2,0],
[0,2,1]
])
result = values[np.arange(values.shape[0])[:,None],reorder_rows]
print(result)
# [[ 4 100 5]
# [ 10 500 20]]
How can I do the same in tf?
I have tried to play with slicing and tf.gather_nd but can't make it work.
Thanks.
Try the following:
import numpy as np
values = np.array([
[5,4,100],
[10,20,500]
])
reorder_rows = np.array([
[1,2,0],
[0,2,1]
])
import tensorflow as tf
values = tf.constant(values)
reorder_rows = tf.constant(reorder_rows, dtype=tf.int32)
# Build row indices [[0, 0, 0], [1, 1, 1]] and pair each with its column index,
# mirroring the np.arange(values.shape[0])[:, None] trick from the question
x = tf.tile(tf.range(tf.shape(values)[0])[:, tf.newaxis], [1, tf.shape(values)[1]])
res = tf.gather_nd(values, tf.stack([x, reorder_rows], axis=-1))
sess = tf.InteractiveSession()
res.eval()
The following tf code should give the same result:
values = tf.constant([
[5,4,100],
[10,20,500]
])
reorder_rows = tf.constant([
[[0,1],[0,2],[0,0]],
[[1,0],[1,2],[1,1]]
])
result = tf.gather_nd(values, reorder_rows)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
result.eval()
# Result:
# [[  4, 100,   5],
#  [ 10, 500,  20]]
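On newer TensorFlow versions (roughly 1.14 and later, an assumption about your setup), tf.gather with batch_dims expresses the same row-wise lookup in one call:
# result[i, j] = values[i, reorder_rows[i, j]]
result = tf.gather(values, reorder_rows, batch_dims=1)
# [[  4, 100,   5],
#  [ 10, 500,  20]]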