Tensorflow: load images into memory only when needed - python

I am using TensorFlow V1.7 with the new high-level Estimator interface. I was able to create and train my own network with my own dataset.
However, the way I load images just doesn't seem right to me.
The approach I have used so far (largely inspired by the MNIST tutorial) is to load all images into memory up front
(here is a tiny code snippet to give you an idea):
import os
import random
import cv2

def load_dataset(folder):
    images, labels = [], []
    for filename in os.listdir(folder):
        filepath = os.path.join(folder, filename)
        # using OpenCV to read the image as grayscale
        images.append(cv2.imread(filepath, cv2.IMREAD_GRAYSCALE))
        labels.append(<corresponding label>)
    # shuffle samples and labels in the same way
    temp = list(zip(images, labels))
    random.shuffle(temp)
    images, labels = zip(*temp)
    return images, labels
This means that I have to load my whole training set, something like 32k images, into memory before training the net.
However, since my batch size is 100, the net never needs more than 100 images at a time.
This approach seems quite weird to me. I understand that this way secondary storage is only accessed once, maximizing performance; however, if my dataset were really big, this could overload my RAM, couldn't it?
As a consequence, I would like to use a lazy approach, only loading images when they are needed (i.e. when they happen to be in a batch).
How can I do this? I have searched the TF documentation, but I have not found anything so far.
Is there something I'm missing?

It's advised to use the Dataset module (tf.data), which gives you, among other things, queues, prefetching of a small number of examples into memory, multiple reader threads, and much more.
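For illustration, here is a minimal sketch of a lazy tf.data input pipeline for an Estimator, assuming fixed-size PNG files; the file list, labels, and parameters are placeholders, not taken from the question:

import tensorflow as tf

def input_fn(filepaths, labels, batch_size=100):
    def _parse(path, label):
        # read and decode a single image only when it is actually needed
        image = tf.image.decode_png(tf.read_file(path), channels=1)
        image = tf.image.convert_image_dtype(image, tf.float32)
        return image, label

    # only the (small) lists of paths and labels live in memory
    dataset = tf.data.Dataset.from_tensor_slices((filepaths, labels))
    dataset = dataset.shuffle(buffer_size=len(filepaths))  # shuffle file names, not pixel data
    dataset = dataset.map(_parse, num_parallel_calls=4)    # decode on the fly
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(1)                          # keep one batch ready for the GPU
    return dataset

# estimator.train(input_fn=lambda: input_fn(filepaths, labels))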

Related

How to load more images to memory with flow_from_directory

When I was training my model with data loaded by flow_from_directory with tensorflow, I accidentally deleted a few images from my training set directory, and it soon gave me a warning that it could not find the file.
So it seems like it is actually reading the images during training, but since my dataset is not a large one and my memory is only 40% used, I hope to slightly increase my training speed. Is there a way to tell tensorflow to prefetch more images into memory before training starts instead of reading only the images the current batch needs? Or is there an intentional reason that my memory is not used?
You can change some of the parameters, such as batch_size in flow_from_directory, which defaults to 32.
Also, after creating a dataset you can increase the batch size and the number of prefetched batches, e.g. dataset.batch(batch_size).prefetch(1).
If your dataset is small you can cache it using dataset.cache() after loading and preprocessing the data, but before shuffling, repeating, batching, and prefetching, so that each instance is read and preprocessed once instead of once per epoch.
You can also check the documentation on how to optimize tf.data pipelines.
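As a rough sketch of the ordering described above (the directory pattern and parse_image_fn are illustrative placeholders, not from the question):

import tensorflow as tf

dataset = tf.data.Dataset.list_files("train_dir/*/*.png")  # illustrative path pattern
dataset = dataset.map(parse_image_fn)   # your own decode/preprocess function
dataset = dataset.cache()               # cache after loading and preprocessing...
dataset = dataset.shuffle(10000)        # ...but before shuffling,
dataset = dataset.batch(32)             # batching,
dataset = dataset.prefetch(1)           # and prefetching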

Increase I/O bound tensorflow training speed

I am facing the problem of improving the training speed / efficiency of a TensorFlow implementation of a point cloud object detection algorithm.
The input data is an [8000, 100, 9] float32 tensor, roughly 27 MB per sample. With a batch size of 5, data loading becomes a bottleneck in training, as most of the time the GPU utilization rate is 0% until data arrives.
I have tried the following methods to increase data loading speed.
Use num_parallel_calls in the tf.data.Dataset.map API, with multiple threads reading this big tensor. The problem is that .map wraps a py_func, which is subject to the Global Interpreter Lock, so multi-threading does not improve I/O efficiency.
Use the tf.data.Dataset.interleave API. Since it is also multi-threading based, it has the same problem as the .map approach.
Use the TFRecord format. This is even slower than the previous two methods. Possibly TFRecord converts the tensor to numpy, then serializes the numpy array to bytes, then wraps these bytes in a TensorFlow structure and writes them to disk. Numpy-to-Tensor conversion takes a long time for my data, as measured by tf.convert_to_tensor().
Any suggestions how to move forward would be helpful. Thanks!
Follow-up on comments
Am I using slow disks? Data is stored on a mounted disk. That could be a reason.
Can the data fit into GPU memory? Unfortunately not. There are ~70,000 samples. I tried caching a small dataset in RAM and the GPU utilization rate was 30%-40%, which is probably the highest to expect for this particular network.
Some ideas:
You should use a combination of 1, 2 and 3. If you save your files as TFRecords, you can read them in parallel; that's what they are designed for. Then you will be able to use num_parallel_calls and interleave, because that way you don't have to wrap a py_func.
.map doesn't have to wrap a py_func; you could, for example, use tf.keras.utils.get_file. That way you also avoid py_func and use num_parallel_calls efficiently. I still recommend using TFRecords; they are designed for this use case.
Another option is to use an SSD to store your data instead of a hard disk.
You can also look into the .cache function of the tf.data.Dataset API. Maybe you can try loading a random subset of the data, training multiple epochs on that, in the meantime fetching another subset of the data (using tf.prefetch), and then training multiple epochs on that, and so on. This idea is more of a long shot as it might affect performance, but it just might work in your case.
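For the reading side, a hedged sketch of what such a TFRecord pipeline could look like (the file pattern, feature key, and shard/thread counts are assumptions, not from the question):

import tensorflow as tf

def parse_example(serialized):
    # assumes each record stores one sample as raw float32 bytes under the key "data"
    features = tf.parse_single_example(
        serialized, {"data": tf.FixedLenFeature([], tf.string)})
    sample = tf.decode_raw(features["data"], tf.float32)
    return tf.reshape(sample, [8000, 100, 9])

files = tf.data.Dataset.list_files("train-*.tfrecord")          # sharded record files
dataset = files.apply(tf.contrib.data.parallel_interleave(
    tf.data.TFRecordDataset, cycle_length=8))                   # read several shards in parallel
dataset = dataset.map(parse_example, num_parallel_calls=8)      # native ops, no py_func / GIL
dataset = dataset.batch(5).prefetch(1)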

Keras with Tensorflow: Use memory as it's needed [ResourceExhaustedError]

So I'm trying to train my CNN with multiple datasets, and it seems that when I add enough data (such as when I combine multiple sets into one, or when I try to add the one that has over a million samples) it throws a ResourceExhaustedError.
As for the instructions here, I tried adding
from keras.backend.tensorflow_backend import set_session
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
set_session(tf.Session(config=config))
to my code, but this doesn't seem to make a difference.
I see 0.3 after printing out config.gpu_options.per_process_gpu_memory_fraction, so that part seems to be fine.
I even threw in config.gpu_options.allow_growth = True for good measure, but it doesn't seem to want to do anything but attempt to use all the memory at once, only to find that it isn't enough.
The computer I'm trying to use to train this CNN has 4 GTX 1080 Tis with 12 GB of dedicated memory each.
EDIT: I'm sorry for not specifying how I was loading the data; I honestly didn't realise there was more than one way. When I was learning, the examples always loaded datasets that were already built in, and it took me a while to realise how to load a self-supplied dataset.
The way I'm doing it is that I create two numpy arrays. One has the path of each image and the other has the corresponding label. Here's the most basic example of this:
data_dir = "folder_name"
# There is a folder for every form and in that folder is every line of that form
for filename in glob.glob(os.path.join(data_dir, '*', '*')):
# the format for file names are: "{author id}-{form id}-{line number}.png"
# filename is a path to the file so .split('\\')[-1] get's the raw file name without the path and .split('-')[0] get's the author id
authors.append(filename.split('\\')[-1].split('-')[0])
files.append(filename)
#keras requires numpy arrays
img_files = np.asarray(files)
img_targets = np.asarray(authors)
Are you sure you're not using a giant batch_size?
"Adding data": honestly I don't know what that means and if you could please describe exactly, with code, what you're doing here, it would be of help.
The number of samples should not cause any problems with GPU memory at all. What does cause a problem is a big batch_size.
Loading a huge dataset could cause a CPU RAM problem, not related with keras/tensorflow. A problem with a numpy array that is too big. (You can test this by simply loading your data "without creating any models")
If that is your problem, you should use a generator to load batches gradually. Again, since there is absolutely no code in your question, we can't help much.
But these are two forms of simply creating a generator for images:
Use the existing ImageDataGenerator and its flow_from_directory() method, explained here
Create your own coded generator, which can be:
A loop with yield
A class derived from keras.utils.Sequence
A quick example of a loop generator:
def imageBatchGenerator(imageFiles, imageLabels, batch_size):
    while True:
        batches = len(imageFiles) // batch_size
        if len(imageFiles) % batch_size > 0:
            batches += 1
        for b in range(batches):
            start = b * batch_size
            end = (b + 1) * batch_size
            images = loadTheseImagesIntoNumpy(imageFiles[start:end])
            labels = imageLabels[start:end]
            yield images, labels
Warning: even with generators, you must make sure your batch size is not too big!
Using it:
model.fit_generator(imageBatchGenerator(files,labels,batchSize), steps_per_epoch = theNumberOfBatches, epochs= ....)
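Alternatively, a minimal sketch of the keras.utils.Sequence option mentioned above (loadTheseImagesIntoNumpy is the same hypothetical helper as in the loop generator):

import numpy as np
from keras.utils import Sequence

class ImageBatchSequence(Sequence):
    def __init__(self, image_files, image_labels, batch_size):
        self.image_files = image_files
        self.image_labels = image_labels
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.image_files) / float(self.batch_size)))

    def __getitem__(self, idx):
        start = idx * self.batch_size
        end = (idx + 1) * self.batch_size
        images = loadTheseImagesIntoNumpy(self.image_files[start:end])
        labels = self.image_labels[start:end]
        return images, labels

# model.fit_generator(ImageBatchSequence(files, labels, batchSize), epochs=...)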
Dividing your model among GPUs
You should be able to decide which layers are processed by which GPU, this "could" probably optimize your RAM usage.
Example, when creating the model:
with tf.device('/gpu:0'):
    createLayersThatGoIntoGPU0
with tf.device('/gpu:1'):
    createLayersThatGoIntoGPU1
# you will probably need to go back to a previous GPU, as you must define your layers in a proper sequence
with tf.device('/gpu:0'):
    createMoreLayersForGPU0
# and so on
I'm not sure this would be better or not, but it's worth trying too.
See more details here: https://keras.io/getting-started/faq/#how-can-i-run-a-keras-model-on-multiple-gpus
The ResourceExhaustedError is raised because you're trying to allocate more memory than is available in your GPUs or main memory. The memory allocation is approximately equal to your network footprint (to estimate this, save a checkpoint and look at the file size) plus your batch size multiplied by the size of a single element of your dataset.
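As a purely illustrative back-of-the-envelope calculation (all numbers are made up):

# hypothetical numbers, just to show the estimate
network_mb = 500                        # checkpoint file size as a proxy for the network footprint
sample_mb = 256 * 256 * 3 * 4 / 1e6     # one 256x256 RGB float32 image, ~0.79 MB
batch_size = 100
# activations and gradients add more on top of this rough lower bound
print(network_mb + batch_size * sample_mb)  # ~579 MB for weights plus one batch of inputs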
It's difficult to answer your question directly without some more information about your setup, but there is one element of this question that caught my attention: you said that you get the error when you "add enough data" or "use a big enough dataset." That's odd. Notice that the size of your dataset is not included in the calculation for the memory allocation footprint. Thus, the size of the dataset shouldn't matter. Since it does, that seems to imply that you are somehow attempting to load your entire dataset into GPU memory or main memory. If you're doing this, that's the origin of your problem.
To fix it, use the TensorFlow Dataset API. Using a Dataset sidesteps the limited memory resources by hiding the data behind an Iterator that only yields batches when called. Alternatively, you could use the older feed_dict and QueueRunner data feeding structure, but I don't recommend it. You can find some examples of this here.
If you are already using the Dataset API, you'll need to post more of your code as an edit to your question for us to help you.
There is no setting that magically allows you more memory than your GPU has. It looks to me like your inputs are just too big to fit in the GPU RAM (along with all the required state and gradients).
You should use config.gpu_options.allow_growth = True, not in order to get more memory, but to get an idea of how much memory you need per input length. Start with a small length, see with nvidia-smi how much RAM your GPU takes, and then increase the length. Do that again and again until you understand what the maximal length of inputs (batch size) is that your GPU can hold.

Reading multiple images, process them to one image and feed through model

Is there a way to build following graph in Tensorflow:
Load some N images (N can vary for each set) using TF Queues and TF Image Readers.
Process these images to get fixed size image and prepare batches.
Feed these batches through the CNN model
Some questions/info:
I am trying to build the data loading part in TF instead of Python functions and feed_dict. I guess TF data loading can train the model faster compared to Python and feed_dict. Is that right?
Building the graph for small N (N < 5) is easy: define exclusive nodes for each image in N and process them. (working)
Can I use TF "while_loop" to build such functionality to read N images?
Does Keras support such functionality?
Thanks for your suggestions.
I just did this last week! It was awesome, I learned a ton about tensorflow using things like tf.map_fn, and tf.cond. And it worked.
This week I just refactored my code to eliminate it all, because it was a bad idea.
Issues I ran into:
Doing preprocessing in tensorflow is messy to debug. Doing proper TDD will definitely benefit you here, but it is still not going to be particularly pretty or easy to debug.
You should be offloading the preprocessing to the CPU and leaving the GPU (assuming you're using one) to do training. A better approach is to just have a queue and load it from a thread/class that's dedicated to your preprocessing task (see the sketch after this list). And doing the work in numpy/scikit/scikit-image is going to be easier to configure and test.
I thought I was so smart, corralling all my code into a single model. But the complexity of the preprocessing meant my model was really hard to iterate on; it became rigid code quickly. For example, when I added my test set evaluation, the preprocessing requirement was slightly different, and suddenly I had to add large sections of conditional code to my model, and it got ugly fast.
That being said, my preprocessing steps were maybe more complex than yours. If you're sticking to simple things where you can just apply some of the simple image preprocessing steps, it might still be easier for you to take this approach.
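As a very rough sketch of that "preprocess on the CPU in a dedicated thread, keep a queue of ready batches" idea (load_and_preprocess, file_batches, and the placeholder feed are all hypothetical):

import threading
import queue

import numpy as np

batch_queue = queue.Queue(maxsize=10)  # holds a few ready-to-train batches

def preprocess_worker(file_batches):
    # runs on the CPU in plain numpy/scikit-image, independent of the TF graph
    for files in file_batches:
        images = np.stack([load_and_preprocess(f) for f in files])  # hypothetical helper
        batch_queue.put(images)

threading.Thread(target=preprocess_worker, args=(file_batches,), daemon=True).start()

# the training loop then pulls ready batches and feeds them, e.g. via feed_dict:
# batch = batch_queue.get()
# sess.run(train_op, feed_dict={input_placeholder: batch})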
To answer your questions specifically:
Queues won't give any benefit over feed_dict that I know of. You still have the problem of moving data from a TF queue on the CPU to GPU memory each iteration, same as feed_dict does. Watch this thread if you care about that topic; GPU queues are coming: https://github.com/tensorflow/tensorflow/issues/7679
You should just dequeue_many from the queue and process them as a batch. If you need to do something to each individual image, use tf.map_fn, which will remove the first dimension and pass individual 3D images to your specified function. But heed my warning above when you go this route - you'll probably be happier just doing this in a separate thread.
Already answered in #2: use tf.map_fn to iterate over multiple images in a batch. It's pretty easy to use, actually (see the sketch below).
I don't know Keras.
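A minimal sketch of the tf.map_fn idea (batch_images and the 224x224 target size are assumptions, not from the question):

import tensorflow as tf

def preprocess_one(image):
    # receives a single 3-D image tensor, not the whole batch
    image = tf.image.resize_images(image, [224, 224])
    return tf.image.per_image_standardization(image)

# batch_images: a 4-D [batch, height, width, channels] tensor, e.g. the result of dequeue_many
processed = tf.map_fn(preprocess_one, batch_images, dtype=tf.float32)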

Tensorflow: Is preprocessing on TFRecord files faster than real-time data preprocessing?

In Tensorflow, it seems that preprocessing could be done either during training time, when the batch is created from raw images (or data), or beforehand, when the images are already static. Given that, theoretically, the preprocessing should take roughly equal time (if done using the same hardware), is there any practical disadvantage in doing data preprocessing (or even data augmentation) before training rather than during training in real time?
As a side question, could data augmentation even be done in Tensorflow if it was not done during training?
Is there any practical disadvantage in doing data preprocessing (or even data augmentation) before training than during training in real-time?
Yes, there are advantages (+++) and disadvantages (---):
Preprocessing before training:
--- preprocessed samples need to be stored: disk space consumption* (1)
--- only a "finite" amount of samples can be generated
+++ no runtime during training
--- ... but samples always need to be read from storage, i.e. maybe storage (disk) I/O becomes the bottleneck
--- not flexible: changing the dataset/augmentation requires generating a new augmented dataset
+++ for Tensorflow: easily work on numpy.ndarray or other data formats with any high-level image API (OpenCV, PIL, ...) to do augmentation, or even use any other language/tool you like.
Preprocessing during training ("real-time"):
+++ infinite amount of samples can be generated (as it is generated on-the-fly)
+++ flexible: changing dataset/augmentation only requires changing code
+++ if dataset fits in memory, no disk I/O needed for data after reading once
--- adds runtime to your training* (2)
--- for Tensorflow: Building the preprocessing as part of the graph requires working with Tensors and restricts usage of APIs working on ndarrays or other formats.* (3)
Some specific aspects discussed in detail:
(1) Reproducing experiments "with the same data" is kind of straightforward with a dataset generated before training. However, this can be solved (even more!) elegantly by storing a seed for real-time data generation.
(2) Training runtime for preprocessing: there are ways to prevent an expensive preprocessing pipeline from getting in the way of your actual training. Tensorflow itself recommends filling queues with many (CPU) threads so that data generation can independently keep up with GPU data consumption. You can read more about this in the input pipeline performance guide.
(3): Data augmentation in tensorflow
As a side question, could data augmentation even be done in Tensorflow if it was not done during (I think you mean: before) training?
Yes, tensorflow offers some functionality to do augmentation. For value augmentation of scalar/vector (or higher-dimensional) data, you can easily build something yourself with tf.multiply or other basic math ops. For image data, there are several ops implemented (see tf.image and tf.contrib.image), which should cover a lot of augmentation needs.
There are off-the-shelf preprocessing examples on github, one of which is used and described in the CNN tutorial (cifar10).
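For example, a small sketch of on-the-fly augmentation with built-in tf.image ops inside a dataset map (the specific ops and parameters are illustrative choices, not from the question):

import tensorflow as tf

def augment(image, label):
    # simple random augmentation applied per sample, on the fly
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return image, label

# dataset = dataset.map(augment, num_parallel_calls=4)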
Personally, I would always try to use real-time preprocessing, as generating (potentially huge) datasets feels clunky. But it is perfectly viable; I've seen it done many times and (as you see above) it definitely has its advantages.
I have been wondering the same thing and have been disappointed with my during-training-time image processing performance. It has taken me a while to appreciate quite how big an overhead the image manipulation can be.
I am going to make myself a nice fat juicy preprocessed/augmented data file. Run it overnight and then come in the next day and be twice as productive!
I am using a single-GPU machine, and it seems obvious to me that piece-by-piece model building is the way to go. However, the workflow maths may look different if you have different hardware. For example, on my MacBook Pro tensorflow was slow (on CPU) and image processing was blindingly fast because it was automatically done on the laptop's GPU. Now that I have moved to a proper GPU machine, tensorflow is running 20x faster and the image processing is the bottleneck.
Just work out how long your augmentation/preprocessing is going to take, work out how often you are going to reuse it and then do the maths.
