I would like to slice a tensor and store it in a variable. Slicing with fixed numbers works fine, e.g. t[0:2]. But slicing with another tensor does not work, e.g. t[t1:t2]. Also, storing the slice in a tensor works fine, but when I try to store it in a tf.Variable I get errors.
import tensorflow as tf
import numpy

i = tf.zeros([2, 1], tf.int32)
i2 = tf.get_variable('i2_variable', initializer=i)  # putting a multidimensional tensor in a variable
i4 = tf.ones([10, 1], tf.int32)
sess = tf.Session()
sess.run(tf.global_variables_initializer())  # initializing variables
itr = tf.constant(0, tf.int32)

def w_c(i2, itr):
    return tf.less(itr, 2)

def w_b(i2, itr):
    i2 = i4[(itr * 0):((itr * 0) + 2)]   # doesn't work
    # i2 = i4[0:2]                       # works
    # i = i4[(itr * 0):((itr * 0) + 2)]  # works with tensor i
    itr = tf.add(itr, 1)
    return [i2, itr]

OP = tf.while_loop(w_c, w_b, [i2, itr])
print(sess.run(OP))
I get the following error:
ValueError: Input tensor 'i2_variable/read:0' enters the
loop with shape (2, 1), but has shape (?, 1) after one iteration.
To allow the shape to vary across iterations,
use the `shape_invariants` argument of tf.while_loop to specify a less-specific shape.
The code does not throw the error if you specify shape_invariants:
OP = tf.while_loop(w_c, w_b, [i2, itr],
                   shape_invariants=[tf.TensorShape([None, None]),
                                     itr.get_shape()])
It returns this.
[array([[1],
[1]]), 2]
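Alternatively, assuming the slice is always meant to have length 2, you can restore the static shape inside the loop body (e.g. with tf.reshape) so the loop variable's shape no longer varies between iterations and shape_invariants is not needed. A minimal sketch of that idea:
def w_b(i2, itr):
    sliced = i4[(itr * 0):((itr * 0) + 2)]
    i2 = tf.reshape(sliced, [2, 1])  # gives the slice a static (2, 1) shape again
    itr = tf.add(itr, 1)
    return [i2, itr]

OP = tf.while_loop(w_c, w_b, [i2, itr])  # no shape_invariants needed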
What is the difference between these two?
1- tf.reshape(tensor, [-1])
2- tf.reshape(tensor, -1)
I cannot find any difference between these two, but when I use -1 without brackets, an error occurs when trying to map the function over a TensorSliceDataset.
Here is the simplified version of the code:
def reshapeME(tensor):
    reshaped = tf.reshape(tensor, -1)
    return reshaped

new_y_test = y_test.map(reshapeME)
and here is the error:
ValueError: Shape must be rank 1 but is rank 0 for '{{node Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](one_hot, Reshape/shape)' with input shapes: [6], [].
If I add the bracket, there is no error. Also, there is no error when the function is used by calling and feeding a tensor.
In graph mode, tf.reshape expects the shape argument to be a tensor or something convertible to one:
A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor.
A bare scalar such as -1 is therefore converted to a rank-0 tensor, which fails the op's requirement that the shape be rank 1, whereas [-1] becomes a rank-1 tensor as expected. The map function of a tf.data.Dataset is always executed in graph mode:
Note that irrespective of the context in which map_func is defined
(eager vs. graph), tf.data traces the function and executes it as a
graph.
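A small sketch illustrates the difference (the dataset below is a hypothetical stand-in for y_test): the list [-1] becomes a rank-1 shape tensor and traces fine inside Dataset.map, while the bare scalar -1 becomes a rank-0 tensor and triggers the rank error shown above.
import tensorflow as tf

# Hypothetical dataset of rank-1 one-hot vectors, just to reproduce the setup.
y_test = tf.data.Dataset.from_tensor_slices(tf.one_hot([0, 1, 2], depth=6))

def reshape_with_list(tensor):
    return tf.reshape(tensor, [-1])   # shape argument is rank 1: works under tracing

def reshape_with_scalar(tensor):
    return tf.reshape(tensor, -1)     # shape argument is rank 0: fails under tracing

new_y_test = y_test.map(reshape_with_list)      # works
# new_y_test = y_test.map(reshape_with_scalar)  # raises the ValueError above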
I have an n-D array. I need to create a 1-D range tensor based on its dimensions.
For example:
x = tf.placeholder(tf.float32, shape=[None, 4])
r = tf.range(start=0, limit=x.shape[0], delta=1, dtype=tf.int32, name='range')
sess = tf.Session()
result = sess.run(r, feed_dict={x: raw_lidar})
print(result)
The problem is that x.shape[0] is None at the time the computational graph is built, so I cannot build the tensor using tf.range. It gives an error:
ValueError: Cannot convert an unknown Dimension to a Tensor: ?
Any suggestions or help with this problem would be appreciated. Thanks in advance.
x.shape[0] might not have a value yet when this code runs in graph mode. If you want the value at run time, you need to use tf.shape(x)[0] instead.
More information about that behaviour in the documentation for tf.Tensor.get_shape. An excerpt (emphasis is mine):
tf.Tensor.get_shape() is equivalent to tf.Tensor.shape.
When executing in a tf.function or building a model using tf.keras.Input, Tensor.shape may return a partial shape (including None for unknown dimensions). See tf.TensorShape for more details.
>>> inputs = tf.keras.Input(shape = [10])
>>> # Unknown batch size
>>> print(inputs.shape)
(None, 10)
The shape is computed using shape inference functions that are registered for each tf.Operation.
The returned tf.TensorShape is determined at build time, without executing the underlying kernel. It is not a tf.Tensor. If you need a shape tensor, either convert the tf.TensorShape to a tf.constant, or use the tf.shape(tensor) function, which returns the tensor's shape at execution time.
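As a minimal sketch of the fix for the question above (using the same placeholder setup), tf.shape(x)[0] yields the batch dimension as a tensor evaluated at run time, so it can be passed to tf.range:
import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=[None, 4])
# tf.shape(x) is evaluated when the graph runs, so the unknown first dimension is fine.
r = tf.range(start=0, limit=tf.shape(x)[0], delta=1, dtype=tf.int32, name='range')

with tf.Session() as sess:
    print(sess.run(r, feed_dict={x: np.zeros((3, 4))}))  # -> [0 1 2]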
I can get the dimensions of tensors at graph construction time by manually printing their shapes, but how do I get the shape of these tensors at session runtime?
The reason I want the shapes at runtime is that at graph construction time the shape of some tensors shows up as (?, 8), so I cannot deduce the first dimension.
You have to make the tensors an output of the graph. For example, if showme_tensor is the tensor you want to print, just run the graph like this:
_showme_tensor = sess.run(showme_tensor)
and then you can print the output just as you would print a list. If you have several tensors to print, you can add them like this:
_showme_tensor_1, _showme_tensor_2 = sess.run([showme_tensor_1, showme_tensor_2])
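A minimal, self-contained sketch of this approach (the placeholder and op below are made up for illustration):
import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=(None, 8))   # static shape shows up as (?, 8)
showme_tensor = tf.matmul(x, tf.ones((8, 8)))

with tf.Session() as sess:
    _showme_tensor = sess.run(showme_tensor, feed_dict={x: np.zeros((5, 8))})
    print(_showme_tensor.shape)  # (5, 8): the first dimension is known at run time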
The reason some tensors have a shape of (?, ?) in TensorFlow is that they are placeholders: the shape can change depending on your input data.
So you must feed data into the placeholder before it can tell you the exact shape of your tensor.
import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=(None, None))
print(x.shape)  # (?, ?)

with tf.Session() as sess:
    rand_array = np.random.rand(3, 3)
    after_sess_x = sess.run(x, feed_dict={x: rand_array})
    print(after_sess_x.shape)  # (3, 3)
I'm trying to use previously learned weights of dimension m to initialize a weight tensor of dimension n where n > m. I can do it as I've done below.
all_weights['w1'] = tf.Variable(tf.zeros([n, output_sz], dtype=tf.float32))
all_weights['w1'] = all_weights['w1'][:m,:].assign(initial_weights['w1'])
However, I'm having an issue later on, when the actual learning happens, that I don't run into if I don't use weight sharing. w1 is initially a tf.Variable, and I noticed it changes to a Tensor object after the slicing assignment: Tensor("strided_slice/_assign:0"). My issue is that I'm getting the error:
`LookupError: No gradient defined for operation 'strided_slice_2/_assign' (op type: StridedSliceAssign)`.
Does this have to do with the type (Tensor vs. tf.Variable)? Does it make sense to somehow cast the Tensor to a tf.Variable? I tried to do this, but then I get an error like:
`FailedPreconditionError: Attempting to use uninitialized value Variable_4
[[Node: strided_slice/_assign = StridedSliceAssign[Index=DT_INT32, T=DT_FLOAT, _class=["loc:#Variable_4"], begin_mask=3, ellipsis_mask=0, end_mask=2, new_axis_mask=0, shrink_axis_mask=0, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_4, strided_slice/stack, strided_slice/stack_1, strided_slice/stack_2, strided_slice/_assign/value)]]`
I'm relatively new to Tensorflow so any help would be highly appreciated. Thanks!
A tf.Variable is a very different thing from a Tensor. It does not make sense to "cast" between them.
The easiest solution is to just use the initial_weights directly in the Variable creation. For example, something like this:
import numpy as np

tf.Variable(np.append(initial_weights['w1'],
                      np.zeros((n - m, output_sz)),
                      axis=0),
            dtype=tf.float32)
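A quick check of that approach with made-up sizes (m, n, and output_sz below are illustrative; initial_weights['w1'] is assumed to be an (m, output_sz) NumPy array of learned weights):
import numpy as np
import tensorflow as tf

m, n, output_sz = 3, 5, 4
initial_weights = {'w1': np.random.randn(m, output_sz).astype(np.float32)}

# The first m rows come from the learned weights; the remaining n - m rows are zeros.
all_weights = {'w1': tf.Variable(np.append(initial_weights['w1'],
                                           np.zeros((n - m, output_sz), dtype=np.float32),
                                           axis=0),
                                 dtype=tf.float32)}

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(all_weights['w1']).shape)  # (5, 4)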
In IPython I imported tensorflow as tf and numpy as np and created a TensorFlow InteractiveSession.
When I run or initialize some normal distribution with NumPy input, everything runs fine:
some_test = tf.constant(np.random.normal(loc=0.0, scale=1.0, size=(2, 2)))
session.run(some_test)
Returns:
array([[-0.04152317, 0.19786302],
[-0.68232622, -0.23439092]])
Just as expected.
...but when I use the TensorFlow normal distribution function:
some_test = tf.constant(tf.random_normal([2, 2], mean=0.0, stddev=1.0, dtype=tf.float32))
session.run(some_test)
...it raises a TypeError saying:
(...)
TypeError: List of Tensors when single Tensor expected
What am I missing here?
The output of:
sess.run(tf.random_normal([2, 2], mean=0.0, stddev=1.0, dtype=tf.float32))
alone returns exactly what np.random.normal generates: a matrix of shape (2, 2) with values drawn from a normal distribution.
The tf.constant() op takes a numpy array (or something implicitly convertible to a numpy array), and returns a tf.Tensor whose value is the same as that array. It does not accept a tf.Tensor as its argument.
On the other hand, the tf.random_normal() op returns a tf.Tensor whose value is generated randomly according to the given distribution each time it runs. Since it returns a tf.Tensor, it cannot be used as the argument to tf.constant(). This explains the TypeError (which is unrelated to the use of tf.InteractiveSession, since it occurs when you build the graph).
I'm assuming you want your graph to include a tensor that (i) is randomly generated on its first use, and (ii) constant thereafter. There are two ways to do this:
Use NumPy to generate the random value and put it in a tf.constant(), as you did in your question:
some_test = tf.constant(
    np.random.normal(loc=0.0, scale=1.0, size=(2, 2)).astype(np.float32))
(Potentially faster, as it can use the GPU to generate the random numbers) Use TensorFlow to generate the random value and put it in a tf.Variable:
some_test = tf.Variable(
    tf.random_normal([2, 2], mean=0.0, stddev=1.0, dtype=tf.float32))
sess.run(some_test.initializer)  # Must run this before using `some_test`
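To confirm the value is fixed after initialization, evaluating the variable repeatedly returns the same matrix (unlike running tf.random_normal directly, which resamples on every run); a quick check:
print(sess.run(some_test))  # some random matrix
print(sess.run(some_test))  # the same matrix again: the value was fixed at initialization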