Increase I/O-bound TensorFlow training speed - python

I am trying to improve the training speed/efficiency of a TensorFlow implementation of a point cloud object detection algorithm.
The input data is an [8000, 100, 9] float32 tensor, roughly 27 MB per sample. With a batch size of 5, data loading becomes the bottleneck in training, as most of the time the GPU utilization rate is 0% until data arrives.
I have tried the following methods to increase data loading speed.
1. Use num_parallel_calls in the tf.data.Dataset.map API and use multiple threads for reading this big tensor. The problem is that .map wraps a py_func, which is subject to the Global Interpreter Lock, so multi-threading does not improve I/O efficiency.
2. Use the tf.data.Dataset.interleave API. Since it is also multi-threading based, it has the same problem as method 1.
3. Use the TFRecord format. This is even slower than methods 1 and 2. A possible reason is that TFRecord converts the tensor to numpy, then serializes the numpy array to bytes, then wraps those bytes in a TensorFlow structure and writes them to disk. Numpy-to-Tensor conversion takes a long time for my data, as measured by tf.convert_to_tensor().
Any suggestions on how to move forward would be helpful. Thanks!
Follow-up on comments
Am I using slow disks? Data is stored on a mounted disk. Could be a reason.
Can the data fit into GPU memory? Unfortunately no. There are ~70,000 samples. I tried caching a small dataset into RAM and the GPU utilization rate is 30%~40%, which is probably the highest expectation for this particular network.

Some ideas:
You should use a combination of 1, 2 and 3. If you save your files as TFRecords, you can read them in parallel; that's what they are designed for. Then you will be able to use num_parallel_calls and interleave, because that way you don't have to wrap a py_func. (A sketch of this combination follows below.)
.map doesn't have to wrap a py_func; you could, for example, use tf.keras.utils.get_file. That way you also avoid py_func and use num_parallel_calls efficiently. I still recommend using TFRecords; they are designed for this use case.
Another option is to use an SSD to store your data instead of a Hard Disk.
You can also look into the .cache function of the tf.data.Dataset API. Maybe you can try loading a random subset of the data, training multiple epochs on that, and in the meantime fetching another subset of the data (using tf.data.Dataset.prefetch), then training multiple epochs on that, and so on. This idea is more of a long shot, as it might affect performance, but it just might work in your case.
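Not the only way to wire it up, but roughly, the combination of 1, 2 and 3 could look like the sketch below in TF 2.x. The feature key "points", the shard file pattern and the cycle_length are illustrative, not taken from the question; on older versions use tf.data.experimental.AUTOTUNE instead of tf.data.AUTOTUNE.

import tensorflow as tf

def write_shard(samples, path):
    # samples: an iterable of float32 numpy arrays of shape [8000, 100, 9]
    with tf.io.TFRecordWriter(path) as writer:
        for sample in samples:
            feature = {"points": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[sample.tobytes()]))}
            example = tf.train.Example(features=tf.train.Features(feature=feature))
            writer.write(example.SerializeToString())

def parse(record):
    parsed = tf.io.parse_single_example(
        record, {"points": tf.io.FixedLenFeature([], tf.string)})
    points = tf.io.decode_raw(parsed["points"], tf.float32)   # pure TF ops: no py_func, no GIL
    return tf.reshape(points, [8000, 100, 9])

files = tf.data.Dataset.list_files("train-*.tfrecord")        # hypothetical shard naming
dataset = (files
           .interleave(tf.data.TFRecordDataset, cycle_length=8,
                       num_parallel_calls=tf.data.AUTOTUNE)   # read several shards in parallel
           .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(5)
           .prefetch(tf.data.AUTOTUNE))                       # overlap loading with GPU work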

Related

How is tensorflow.data.Dataset loading handled in the fit method?

I want to know how data loading and data transfer between CPU RAM and GPU memory are handled when I have a tf.data.Dataset and use the fit method of Keras.
Is one batch of data transferred at a time, forward and backward propagation done on that batch, and then the next batch sent from CPU RAM to GPU memory?
I know that Keras's fit method has max_queue_size; however, the docs say it is "Used for generator or keras.utils.Sequence input only".
How does tf.data data loading work under the hood? Does it change anything if, instead of using the fit method, I create a custom training loop like here?
Are there links/guides where this is explained in enough detail?
The dataset will be sampled sequentially, unless otherwise specified. Since you're in control of the dataset with tf.data.Dataset, you can set up this prefetching yourself with tf.data.Dataset.prefetch.
Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
>>> dataset = tf.data.Dataset.range(3)
>>> dataset = dataset.prefetch(2)
>>> list(dataset.as_numpy_iterator())
[0, 1, 2]
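To tie this back to fit: the prefetch call typically goes at the very end of the pipeline you pass to fit, so later batches are prepared on the CPU while the GPU works on the current one. A minimal sketch (TF 2.x assumed; the data and model here are stand-ins, not from the question):

import numpy as np
import tensorflow as tf

x_train = np.random.random((1000, 10)).astype("float32")      # placeholder data
y_train = np.random.randint(0, 2, size=(1000,))
train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(1000)
            .batch(32)
            .prefetch(tf.data.AUTOTUNE))   # prepare later batches while the current one trains

model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                             tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(train_ds, epochs=2)              # fit consumes the dataset one batch at a time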

How to enable Keras fit() multiprocessing properly?

When I run fit() with multiprocessing=True I always get a deadlock and the following warning:
WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.
How do I run it properly?
Since it says "tf.data", I wonder if transforming my data into this format will make multiprocessing work. What specifically is meant, and how do I convert my data?
My dataset (reproducible):
import numpy as np

Input_shape, labels = (20, 4), 6
LEN_X, LEN_Y = 20000, 3000
train_X, train_Y = np.asarray([np.random.random(Input_shape) for x in range(LEN_X)]), np.random.random((LEN_X, labels))
validation_X, validation_Y = np.asarray([np.random.random(Input_shape) for x in range(LEN_Y)]), np.random.random((LEN_Y, labels))
sampleW = np.random.random((LEN_X, 1))
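For the "how to convert it?" part, one minimal way to wrap arrays like these in tf.data is from_tensor_slices; the batch size and shuffle buffer below are arbitrary, and fit accepts the resulting dataset directly, treating a third tuple element as sample weights:

import tensorflow as tf

train_ds = (tf.data.Dataset
            .from_tensor_slices((train_X, train_Y, sampleW))  # (inputs, targets, sample weights)
            .shuffle(1024)
            .batch(32)
            .prefetch(tf.data.AUTOTUNE))
val_ds = (tf.data.Dataset
          .from_tensor_slices((validation_X, validation_Y))
          .batch(32))
# model.fit(train_ds, validation_data=val_ds)                 # no use_multiprocessing needed with tf.data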
Multiprocessing doesn't accelerate the model itself; it only accelerates the data loading. And data-loading delay is not a problem when all your data is already in memory.
You could still use multiprocessing, but you must make sure that the underlying dataset is thread-safe and you have to craft the data pipeline carefully. That is quite time-consuming, so instead I suggest you speed up the model itself.
For that, you should look into the following (a rough sketch follows this list):
changing all activations except the last layer's to ReLU.
tweaking the batch size. (The optimal number depends on your hardware and is almost always less than or equal to 32.)
using batch normalization to speed up convergence.
using a higher learning rate (be careful not to overdo this step).
if you need faster convolutions, consider using Kaggle notebooks or vast.ai for GPU-enabled computations.
last but not least, try using a simpler, smaller model.
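Putting a few of these together with the shapes from your dataset, a rough sketch (layer sizes, the output activation and the learning rate here are illustrative, not a tuned configuration):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(20, 4)),
    tf.keras.layers.Dense(128, activation="relu"),    # ReLU everywhere except the output layer
    tf.keras.layers.BatchNormalization(),             # speeds up convergence
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(6, activation="sigmoid"),   # 6 outputs, matching `labels` above
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-3),  # a bit above the 1e-3 default
              loss="mse")
model.fit(train_X, train_Y, batch_size=32,                             # batch size <= 32 as suggested
          validation_data=(validation_X, validation_Y), epochs=10)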
Comment down here if you have any additional questions.
Cheers.

Training an RNN with changing input size on TensorFlow

I want to train an RNN on sentences X of different input sizes, without padding. The logic is that I use global variables and, for every step, take one example, write the forward propagation (i.e. build the graph), run the optimizer, and then repeat with another example. The program is extremely slow compared to a numpy implementation of the same thing, where I implemented forward and backward propagation using the same logic as above. The numpy implementation takes a few seconds, while TensorFlow is extremely slow. Would running the same thing on a GPU be useful, or am I making some logical mistake?
As a general guideline, a GPU boosts performance only if you have calculation-intensive code and little data transfer. In other words, if you train your model one instance at a time (or on small batch sizes), the overhead of data transfer to/from the GPU can even make your code run slower! But if you feed in a good chunk of samples, then the GPU will definitely boost your code.
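One way to feed "a good chunk of samples" without padding is to batch together sentences that happen to have the same length; a rough sketch in plain numpy (names are illustrative):

import numpy as np
from collections import defaultdict

def batches_by_length(sentences, labels, batch_size=32):
    # sentences: list of [T_i, feature_dim] arrays with varying T_i; labels: matching list
    groups = defaultdict(list)
    for x, y in zip(sentences, labels):
        groups[len(x)].append((x, y))                 # bucket by sentence length
    for same_length in groups.values():
        for i in range(0, len(same_length), batch_size):
            chunk = same_length[i:i + batch_size]
            yield (np.stack([x for x, _ in chunk]),   # shape [b, T, feature_dim], no padding
                   np.stack([y for _, y in chunk]))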

Reading multiple images, processing them to one image and feeding it through the model

Is there a way to build the following graph in TensorFlow:
Load some N images (N can vary for each set) using TF Queues and TF Image Readers.
Process these images to get fixed size image and prepare batches.
Feed these batches through the CNN model
Some questions/info:
I am trying to build the data-loading part in TF instead of Python functions and feed_dict. I guess TF data loading can train the model faster compared to Python and feed_dict. Is that right?
Building the graph for small N (N < 5) is easy: define separate nodes for each image in N and process them. (This is working.)
Can I use TF's while_loop to build such functionality to read N images?
Does Keras support such functionality?
Thanks for your suggestions.
I just did this last week! It was awesome; I learned a ton about TensorFlow using things like tf.map_fn and tf.cond. And it worked.
This week I just refactored my code to eliminate it all, because it was a bad idea.
Issues I ran into:
Doing preprocessing in TensorFlow is messy to debug. Proper TDD will definitely benefit you here, but it is still not going to be particularly pretty or easy to debug.
You should be offloading the preprocessing to the CPU and leaving the GPU (assuming you're using one) to do the training. A better approach is to just have a queue and load it from a thread/class that's dedicated to your preprocessing task. And doing the work in numpy/scikit/scikit-image is going to be easier to configure and test.
I thought I was so smart, corralling all my code into a single model. But the complexity of the preprocessing meant my model was really hard to iterate on, and it became rigid code quickly. For example, when I added my test-set evaluation, the preprocessing requirement was slightly different; suddenly I had to add large sections of conditional code to my model, and it got ugly quickly.
That being said, my preprocessing steps were maybe more complex than yours. If you're sticking to simple things, where you can just apply some of the simple image preprocessing steps, it might still be easier for you to take this approach.
To answer your questions specifically:
Queues won't give any benefit over feed_dict that I know of. You still have the problem of moving data from a TF queue on the CPU to GPU memory each iteration, same as feed_dict does. Watch this thread if you care about that topic; GPU queues are coming: https://github.com/tensorflow/tensorflow/issues/7679
You should just dequeue_many from the queue and process the images as a batch. If you need to do something to each individual image, use tf.map_fn, which will remove the first dimension and pass individual 3D images to your specified function. But heed my warning above when you go this route - you'll probably be happier just doing this in a separate thread.
Already answered in #2: use tf.map_fn to iterate over multiple images in a batch (sketched after this list). It's pretty easy to use, actually.
I don't know Keras.
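A minimal sketch of that tf.map_fn pattern, written in TF 2.x eager style for brevity; the batch contents and the 224x224 target size are placeholders:

import tensorflow as tf

def preprocess_one(image):                           # receives a single [H, W, C] image
    image = tf.image.resize(image, [224, 224])       # bring every image to a fixed size
    return tf.image.per_image_standardization(image)

batch = tf.random.uniform([8, 300, 400, 3])          # stand-in for N images read from disk
fixed_batch = tf.map_fn(preprocess_one, batch)       # maps over the first (batch) dimension image by image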

Tensorflow: Is preprocessing on TFRecord files faster than real-time data preprocessing?

In Tensorflow, it seems that preprocessing can be done either during training time, when the batch is created from raw images (or data), or beforehand, when the images are already static. Given that, theoretically, the preprocessing should take roughly equal time (if done using the same hardware), is there any practical disadvantage in doing data preprocessing (or even data augmentation) before training rather than during training in real time?
As a side question, could data augmentation even be done in Tensorflow if it was not done during training?
Is there any practical disadvantage in doing data preprocessing (or even data augmentation) before training rather than during training in real time?
Yes, there are advantages (+++) and disadvantages (---):
Preprocessing before training:
--- preprocessed samples need to be stored: disk space consumption* (1)
--- only a "finite" amount of samples can be generated
+++ no preprocessing cost at training time
--- ... but samples always need to be read from storage, i.e. storage (disk) I/O may become the bottleneck
--- not flexible: changing the dataset/augmentation requires generating a new augmented dataset
+++ for Tensorflow: easily work on numpy.ndarray or other data formats with any high-level image API (OpenCV, PIL, ...) to do augmentation, or even use any other language/tool you like.
Preprocessing during training ("real-time"):
+++ an infinite number of samples can be generated (as they are generated on the fly)
+++ flexible: changing dataset/augmentation only requires changing code
+++ if dataset fits in memory, no disk I/O needed for data after reading once
--- adds runtime to your training* (2)
--- for Tensorflow: Building the preprocessing as part of the graph requires working with Tensors and restricts usage of APIs working on ndarrays or other formats.* (3)
Some specific aspects discussed in detail:
(1) Reproducing experiments "with the same data" is kind of straightforward with a dataset generated before training. However, this can be solved (even more!) elegantly by storing a seed for real-time data generation.
(2) Training runtime for preprocessing: there are ways to keep an expensive preprocessing pipeline from getting in the way of your actual training. Tensorflow itself recommends filling queues with many (CPU) threads so that data generation can independently keep up with GPU data consumption. You can read more about this in the input pipeline performance guide.
(3) Data augmentation in Tensorflow
As a side question, could data augmentation even be done in Tensorflow if it was not done during (I think you mean: before) training?
Yes, Tensorflow offers some functionality for augmentation. For value augmentation of scalar/vector (or higher-dimensional) data, you can easily build something yourself with tf.multiply or other basic math ops. For image data, there are several ops implemented (see tf.image and tf.contrib.image) that should cover a lot of augmentation needs.
There are off-the-shelf preprocessing examples on github, one of which is used and described in the CNN tutorial (cifar10).
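For a concrete flavour of the real-time route, a minimal sketch using a few tf.image ops inside a dataset .map, written against today's tf.data API rather than the queue-based pipeline mentioned in (2); the images, labels and parameter values are placeholders:

import tensorflow as tf

def augment(image, label):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return image, label

images = tf.random.uniform([16, 32, 32, 3])            # stand-in images
labels = tf.random.uniform([16], maxval=10, dtype=tf.int32)
dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .map(augment, num_parallel_calls=tf.data.AUTOTUNE)  # fresh augmentations every epoch
           .batch(8))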
Personally, I would always try to use real-time preprocessing, as generating (potentially huge) datasets feels clunky. But it is perfectly viable; I've seen it done many times and (as you can see above) it definitely has its advantages.
I have been wondering the same thing and have been disappointed with my during-training-time image processing performance. It has taken me a while to appreciate quite how big an overhead the image manipulation can be.
I am going to make myself a nice fat juicy preprocessed/augmented data file. Run it overnight and then come in the next day and be twice as productive!
I am using a single GPU machine, and it seems obvious to me that piece-by-piece model building is the way to go. However, the workflow maths may look different if you have different hardware. For example, on my MacBook Pro, Tensorflow was slow (on CPU) and the image processing was blindingly fast because it was automatically done on the laptop's GPU. Now that I have moved to a proper GPU machine, Tensorflow runs 20x faster and the image processing is the bottleneck.
Just work out how long your augmentation/preprocessing is going to take, work out how often you are going to reuse it and then do the maths.
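With made-up numbers, the maths might look like this: at roughly 20 ms of augmentation per image and 50,000 images, that is about 17 minutes of preprocessing per epoch, so around 28 hours over 100 epochs if done on the fly (ignoring the overlap with GPU work mentioned in (2)), versus paying those 17 minutes once up front plus the disk space for the augmented copies.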
