Is there a way to make a Tensor iterable without running eval() to get its numpy array?
I am trying to iterate through two parts of a tensor after using split() on it, but it happens within the construction of the hidden layers of my neural network, so it needs to happen before I am able to start a session.
import tensorflow as tf
x = tf.placeholder('float', [None, nbits])
layer = [x]
for i in range(1, numbits):
    layer.append(tf.add(tf.matmul(weights[i-1], layer[i-1]), biases[i-1]))
    aes, bes = tf.split(1, 2, layer[-1])
    if i % 2 == 1:
        for am, a, b in zip(add_layer, aes, bes):
            layer.append(am.ex(a, b))
The problem is that layer[-1] is a tf.placeholder at this point, so aes and bes are both tensors, and I can't iterate through them with zip().
Any ideas would be appreciated.
No, there isn't; not directly.
It's easiest to think of TensorFlow programs as being split into two phases: a construction phase, in Python, that builds a computation graph, and an execution phase that runs that graph. Nothing actually runs during construction; all computation happens during execution. The construction phase can't depend on the results of the execution phase, except by running the graph (session.run(), .eval(), etc.).
You can't iterate over a Tensor while building the graph, because it doesn't actually get evaluated to a specific set of values until you call session.run(). Instead it's just a reference to a node in the computation graph.
In general, you have to use Tensorflow functions to manipulate Tensors, not Python primitives (like zip). One way I like to think of it is that it's almost like a Tensor is a radioactive object in a sealed box, and you can only handle it indirectly using a robot that can perform a certain set of actions (Tensorflow library functions) :-) So you likely need to find a way to express your task using Tensorflow primitives.
If you gave a complete example of what you're trying to do, it might be possible to say more (it's not clear to me from your code fragment). One possibility might be to use tf.split to split the tensors up into Python lists of subtensors, and then use something like zip on the lists.
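For instance, something along these lines; a rough sketch only, using tf.compat.v1 so it runs on current TensorFlow and the newer tf.split(value, num_or_size_splits, axis) signature, with shapes and names made up rather than taken from your code:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 8])

# tf.split returns a plain Python list of sub-tensors...
aes = tf.split(x, num_or_size_splits=2, axis=1)  # two tensors of shape [None, 4]
bes = tf.split(x, num_or_size_splits=2, axis=1)

# ...so ordinary Python iteration (zip, list comprehensions) works at graph-construction time.
combined = [tf.add(a, b) for a, b in zip(aes, bes)]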
I hope that helps!
Related
I'm trying to parallelize different tensor operations. I'm aware that tf.vectorized_map and/or tf.map_fn can parallelize input tensor(s) with respect to their first axis, but that's not what I'm looking for. I'm looking for a way to parallelize a for loop on a set of tensors with possibly different shapes.
import tensorflow as tf

a = tf.ones((2,))
b = tf.ones((2, 2))
list_of_tensors = [a, b * 2, a * 3]

for t in list_of_tensors:
    pass  # some operation on t, which may vary depending on its shape
Is there a possible way to parallelize this for loop on GPU with TensorFlow? (I'm open to any other library if possible i.e. JAX, numba etc.)
Thanks!
According to the documentation (for tf.vectorized_map), "The shape and dtype of any intermediate or output tensors in the computation of fn should not depend on the input to fn."
I'm struggling with this problem myself. I think the answer is the one suggested in the comments: if you know the maximum length your tensors can have, represent each variable-length tensor as a maximum-length tensor plus an integer giving its actual length. Whether this is useful at all depends on the meaning of "any intermediate", because at some point you may still need the result for the actual, shorter tensor in your computation, which makes it a bit of a tail-chasing exercise. This part of TensorFlow is extremely frustrating: it is very hacky to get things working that should be easy, especially when it comes to true GPU parallelism for deterministic matrix algorithms outside the context of machine learning.
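A rough sketch of that padding idea, assuming for simplicity that all tensors are 1-D and that max_len is a known upper bound (the numbers are arbitrary):

import tensorflow as tf

max_len = 8  # assumed upper bound on the length

def pad_with_length(t):
    length = tf.shape(t)[0]
    padded = tf.pad(t, [[0, max_len - length]])  # pad up to max_len
    return padded, length

tensors = [tf.ones(2), tf.ones(5) * 2.0, tf.ones(3) * 3.0]
padded, lengths = zip(*[pad_with_length(t) for t in tensors])
stacked = tf.stack(padded)   # shape (3, max_len): now a regular batch
lengths = tf.stack(lengths)  # the true lengths, needed to interpret results

# A single vectorized op now covers all tensors at once; a mask keeps only the valid positions.
mask = tf.sequence_mask(lengths, max_len, dtype=stacked.dtype)
result = (stacked + 1.0) * mask

Whether this buys you anything depends, as said above, on whether the rest of the computation can live with the padded representation.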
Separately, this might work inside the loop:
tf.autograph.experimental.set_loop_options(
    shape_invariants=[(v, tf.TensorShape([None]))]
)
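For reference, here is a small, self-contained toy example of my own (not from the question) showing where that call sits, letting a loop variable change length between iterations inside a tf.function:

import tensorflow as tf

@tf.function
def grow(n):
    v = tf.zeros([0])
    for _ in tf.range(n):
        tf.autograph.experimental.set_loop_options(
            shape_invariants=[(v, tf.TensorShape([None]))]
        )
        v = tf.concat([v, tf.ones([1])], axis=0)
    return v

print(grow(tf.constant(5)))  # [1. 1. 1. 1. 1.]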
I have started using TensorFlow 2.0 and have a little uncertainty with regard to one aspect.
Suppose I have this use case: while ingesting data with tf.data.Dataset, I want to apply some specific augmentation operations to some images. However, the external libraries I am using require the image to be a NumPy array, not a tensor.
When using tf.data.Dataset.from_tensor_slices(), the data flowing through the pipeline is of type Tensor. Concrete example:
def my_function(tensor_image):
    print(tensor_image.numpy())
    return tensor_image

data = tf.data.Dataset.from_tensor_slices(tensor_images).map(my_function)
The code above does not work, yielding a 'Tensor' object has no attribute 'numpy' error.
I have read the TensorFlow 2.0 documentation stating that if one wants to use arbitrary Python logic, one should use tf.py_function, or otherwise stick to TensorFlow primitives only, according to:
How to convert "tensor" to "numpy" array in tensorflow?
My question is the following: is there another way to use arbitrary Python code in a function, e.g. with a custom decorator, or an easier way than tf.py_function?
Honestly, it seems to me that there must be a more elegant way than passing everything through tf.py_function, converting to a NumPy array, performing operations A, B, C, D, and then converting back to a tensor and yielding the result.
There is no other way of doing it, because tf.data.Dataset pipelines are still executed in graph mode (and they always will be, I suppose, for performance reasons); thus, you cannot use anything other than tf.* methods, which TensorFlow can easily convert to its graph representation.
Using tf.py_function is the only way to mix Python execution (and thus the use of any Python library) with graph execution when working with a tf.data.Dataset object (in contrast to plain TensorFlow 2.0 code, which, being eager by default, allows this mixed execution naturally).
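For completeness, a minimal sketch of the tf.py_function route; the augmentation function and shapes are placeholders I made up, not part of the original question:

import numpy as np
import tensorflow as tf

def external_augment(image: np.ndarray) -> np.ndarray:
    # stand-in for whatever NumPy-based library call you actually need
    return np.flip(image, axis=0)

def tf_augment(image):
    def _augment(img):
        return external_augment(img.numpy()).astype(np.float32)
    augmented = tf.py_function(func=_augment, inp=[image], Tout=tf.float32)
    augmented.set_shape(image.shape)  # py_function loses static shape info
    return augmented

images = tf.random.uniform((4, 8, 8, 3))
dataset = tf.data.Dataset.from_tensor_slices(images).map(tf_augment)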
I have a tf.data.TFRecordDataset and a (computationally expensive) function which I want to map over it. I use TensorFlow 1.12 with eager execution, and the function works on NumPy ndarray interpretations of the tensors in my dataset, obtained via EagerTensor.numpy(). However, code inside a function given to tf.data.Dataset.map() is not executed eagerly, so the .numpy() conversion doesn't work there and .map() is no longer an option. Is it possible to for-loop through a dataset and modify the examples in it? Simply assigning to them doesn't seem to work.
No, not exactly.
A Dataset is inherently lazily evaluated and cannot be assigned to in that way. Conceptually, try to think of it as a pipeline rather than a variable: each value is read, passed through any map() operations, batch() ops, etc., and surfaced to the model as needed. To "assign" a value would mean writing it back to disk in the .tfrecord file, which just isn't likely to ever be supported (these files are specifically designed for fast sequential reads, not random access).
You could, instead, use TensorFlow to do your pre-processing and use TFRecordWriter to write a NEW tfrecord with the expensive pre-processing already applied, then use this new dataset as the input to your model. If you have the disk space available, this may well be your best option.
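A rough sketch of that approach, using the tf.io names from more recent TF versions (in 1.12 the same classes live under tf.python_io / top-level tf); the feature name "data" and the float schema are assumptions, not something from your dataset:

import numpy as np
import tensorflow as tf

def expensive_fn(array: np.ndarray) -> np.ndarray:
    return array * 2.0  # placeholder for the real, costly preprocessing

feature_spec = {"data": tf.io.FixedLenFeature([4], tf.float32)}
dataset = tf.data.TFRecordDataset("input.tfrecord")

with tf.io.TFRecordWriter("preprocessed.tfrecord") as writer:
    for raw_record in dataset:  # eager iteration, so .numpy() is available
        parsed = tf.io.parse_single_example(raw_record, feature_spec)
        processed = expensive_fn(parsed["data"].numpy())
        example = tf.train.Example(features=tf.train.Features(feature={
            "data": tf.train.Feature(
                float_list=tf.train.FloatList(value=processed.tolist()))
        }))
        writer.write(example.SerializeToString())

The new preprocessed.tfrecord then becomes the input to your model, with no expensive work left in the input pipeline.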
I have a tensor x with x.shape = (batch_size, 10). I want to add one to all of the elements, using two different operations:
# (1)
x = x + 1

# (2)
for i in range(0, batch_size):
    x[i] = x[i] + 1
I get the same tensor from both operations, but when I call loss.backward(), (2) takes much more time than (1) during backpropagation.
What’s the difference between them???
This is to be expected. Firstly, the forward pass is also a lot slower: with your for loop, Python dispatches the following requests to PyTorch batch_size times:
fetch ith element of x
add 1
update ith element of x with the incremented value
Python is slow. In the vectorized version (1), Python dispatches a single message, "add 1 everywhere", to PyTorch. PyTorch is much faster than Python (to say nothing of the GPU acceleration it is capable of). This is thanks to a technique called vectorization, which is not specific to PyTorch but common to essentially all Python (and many other) math packages.
Secondly, for the backward pass, PyTorch needs to keep track of all operations that happened to x and backpropagate through them. In the loop version (2) there are batch_size of them; in the vectorized version (1), just one. Again, vectorization wins.
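A quick way to see both effects is to time the two variants yourself; the sizes and the loss below are arbitrary choices of mine, not from the question:

import time
import torch

batch_size = 4096
x = torch.randn(batch_size, 10, requires_grad=True)

# (1) vectorized: a single op in the autograd graph
start = time.time()
(x + 1).sum().backward()
print("vectorized:", time.time() - start)

x = torch.randn(batch_size, 10, requires_grad=True)

# (2) loop: batch_size slicing/add/copy ops to record and backpropagate through
start = time.time()
y = x.clone()
for i in range(batch_size):
    y[i] = y[i] + 1
y.sum().backward()
print("loop:", time.time() - start)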
I have a model where I need to assign to the weights (trainable variables) new external values every N iterations.
I can think of a few solutions:
1. Save and restore
Not good, as I would need serialization, file system calls, etc. (even if I use something like tmpfs).
2. Using placeholders and assign operations
I would create a placeholder and an assign op for each trainable variable. Every time I want to assign something to the weights, I run the assign ops.
However, I understand that this means I will be forced to consider these placeholders in every feed_dict and pass dummy values every time I run any operation in my graph.
In addition, I would be using much more memory than necessary.
3. Use a feed_dict for the trainable variables and trigger ops that assign each variable to itself?
Does this work? Is there any drawback?
Before coding something, I thought it was a good idea to ask:
What is the recommended way to assign new external values to variables efficiently (memory/timewise)?
Your third option sounds like the best one.
You can feed values to tensors that aren’t placeholders.
TensorFlow's feed mechanism lets you inject data into any Tensor in a computation graph. A Python computation can thus feed data directly into the graph.
Any tensors that are feedable can be fed. To check if a tensor is feedable or not, use: tf.Graph.is_feedable(tensor).
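As a small illustration of that point (a toy graph of my own, written TF1-style via tf.compat.v1): any intermediate tensor can be checked with is_feedable and overridden through feed_dict:

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

a = tf.constant([1.0, 2.0])
b = a * 2.0  # an intermediate tensor, not a placeholder
c = b + 1.0

print(tf.get_default_graph().is_feedable(b))  # True

with tf.Session() as sess:
    print(sess.run(c))  # [3. 5.] -- normal evaluation
    # Override b for this run; a and the multiply are simply bypassed.
    print(sess.run(c, feed_dict={b: np.array([10.0, 20.0], np.float32)}))  # [11. 21.]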
In recent versions of TensorFlow, the Variable class has a load method. It does exactly what you want.
https://www.tensorflow.org/api_docs/python/tf/Variable#load
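A minimal sketch of how load is used (TF1-style session code; the shapes are arbitrary):

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

w = tf.Variable(np.zeros((2, 3), dtype=np.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    new_weights = np.random.rand(2, 3).astype(np.float32)
    w.load(new_weights, session=sess)  # write the external values into the variable
    print(sess.run(w))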
You can use the assign operations with placeholders.
"I will be forced to consider these placeholders in every feed_dict and pass dummy values every time I run any operation in my graph. In addition, I would be using much more memory than necessary."
No. You would only need to feed values to the placeholders when you run the assign operations. Don't make the assign operations part of your training graph, and only run them when you want to assign new values.
If the assigning turns out to be a bottleneck (for small N it might slow down your program) you can consider other methods of getting data into TensorFlow.
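A sketch of that pattern, with the assign op kept out of the training step and run only every N iterations (the loss and optimizer are stand-ins of mine, not from the question):

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

w = tf.Variable(tf.zeros([2, 3]))
w_new = tf.placeholder(tf.float32, shape=[2, 3])
assign_w = tf.assign(w, w_new)

loss = tf.reduce_sum(tf.square(w - 1.0))  # stand-in training objective
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

N = 10
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        sess.run(train_op)  # no placeholder values needed for normal steps
        if step % N == 0:
            sess.run(assign_w,
                     feed_dict={w_new: np.ones((2, 3), np.float32)})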