Data loading from disk in AutoKeras for regression - Python

I am working on a regression problem using AutoKeras. As my dataset is huge, I need to load the data from disk in batches. The code I use for batch loading is given below:
import math
import numpy as np
import scipy.io
import tensorflow as tf

class data_read(tf.keras.utils.Sequence):
    def __init__(self, filename_ip, filename_op, batch_size):
        self.x = filename_ip
        self.y = filename_op
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return (np.array([scipy.io.loadmat(file_name)['B']
                          for file_name in batch_x]),
                np.array([scipy.io.loadmat(file_name)['y_op'][0][3]
                          for file_name in batch_y]))
The code for accessing the class 'data_read' is given below:
gen_train = data_read(filename_ip_tr, filename_op_tr, 256)
Here filename_ip_tr and filename_op_tr are two lists containing the file paths of the input data and the output target values, respectively. Both the inputs and the outputs are stored as .mat files.
The code for training the model is given below:
model.fit(gen_train, verbose=1, epochs=10)
This code works if 'model' is defined by the user (i.e. 'model' does not use AutoKeras). But it does not work if the model is an AutoKeras model, i.e. if model = ak.AutoModel() or ak.ImageRegressor(). In that case the following error is raised:
AttributeError: 'data_read' object has no attribute 'shape'
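For clarity, the failing call looks roughly like this (a sketch; the AutoKeras arguments are placeholders rather than my exact configuration):
import autokeras as ak

reg = ak.ImageRegressor(max_trials=1, overwrite=True)  # placeholder arguments
# raises AttributeError: 'data_read' object has no attribute 'shape'
reg.fit(gen_train, verbose=1, epochs=10)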
Please help me solve this issue. I have exhausted my resources. Thank you.

Related

Is memory supposed to be this high during model.fit using a generator?

I can reproduce this behavior on TensorFlow 2.7.0, 2.7.3, 2.8.0, and 2.9.0. In fact, these are all the versions I have tried; I was not able to resolve the issue on any of them.
OS: Ubuntu 20
GPU: RTX 2060
RAM: 16GB
I am trying to feed my data to a model using a generator:
class DataGen(tf.keras.utils.Sequence):
    def __init__(self, indices, batch_size):
        self.X = X
        self.y = y
        self.indices = indices
        self.batch_size = batch_size

    def __getitem__(self, index):
        X_batch = self.X[self.indices][
            index * self.batch_size : (index + 1) * self.batch_size
        ]
        y_batch = self.y[self.indices][
            index * self.batch_size : (index + 1) * self.batch_size
        ]
        return X_batch, y_batch

    def __len__(self):
        return len(self.y[self.indices]) // self.batch_size

train_gen = DataGen(train_indices, 32)
val_gen = DataGen(val_indices, 32)
test_gen = DataGen(test_indices, 32)
where X and y are my dataset loaded from a .h5 file using h5py, and train_indices, val_indices, and test_indices are the indices of each split applied to X and y.
I am creating the model and feeding the data using:
# setup model
base_model = tf.keras.applications.MobileNetV2(input_shape=(128, 128, 3),
                                               include_top=False)
base_model.trainable = False

mobilenet1 = Sequential([
    base_model,
    Flatten(),
    Dense(27, activation='softmax')
])

mobilenet1.compile(optimizer=tf.keras.optimizers.Adam(),
                   loss=tf.keras.losses.CategoricalCrossentropy(),
                   metrics=['accuracy'])

# model training
hist_mobilenet = mobilenet1.fit(train_gen, validation_data=val_gen, epochs=1)
Memory usage right before training is 8%, but the moment training starts it climbs to between 30% and 60%. Since I am loading the data through a generator in small batches of 32 observations at a time, it seems odd to me that memory climbs this high. Also, even when training stops, memory usage stays above 30%. I checked all global variables, but none of them is that large. If I start another training session, memory usage climbs even higher, and eventually the Jupyter notebook kernel dies.
Is something wrong with my implementation, or is this normal?
Edit 1: Some additional info.
Whenever training stops, memory usage drops a little, and I can decrease it a bit more by calling the garbage collector. However, I cannot bring it back down to 8%, even after deleting the history object created by fit.
The sizes of the x and y batches sum to 48 bytes; this baffles me! How can loading 48 bytes of data at a time cause memory usage to increase that much? Supposedly I am using an HDF5 dataset precisely so I can handle the data without overloading RAM. The next thing that comes to mind is that fit creates some variables, but it doesn't make sense that storing them would need so many GBs of memory.
Strictly speaking, this is not a generator. When you instantiate DataGen, you create a full object holding the complete set of indices (__init__(self, indices, batch_size)), references to the datasets (self.X, self.y), inheritance from Sequence, and so on.
The simplest real generator for TensorFlow looks something like this:
from sklearn.model_selection import train_test_split

BATCH_SIZE = 32
AUTOTUNE = tf.data.AUTOTUNE

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_val = X_train[int(len(X_train) * 0.8):]
X_train = X_train[:int(len(X_train) * 0.8)]
y_val = y_train[int(len(y_train) * 0.8):]
y_train = y_train[:int(len(y_train) * 0.8)]

def gen_reader(X_train, y_train):
    for data, label in zip(X_train, y_train):
        yield data, label

train_ds = tf.data.Dataset.from_generator(
    gen_reader, args=[X_train, y_train],
    output_types=(tf.float64, tf.int8)).batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
val_ds = tf.data.Dataset.from_generator(
    gen_reader, args=[X_val, y_val],
    output_types=(tf.float64, tf.int8)).batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
test_ds = tf.data.Dataset.from_generator(
    gen_reader, args=[X_test, y_test],
    output_types=(tf.float64, tf.int8)).batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
...
hist_mobilenet = mobilenet1.fit(train_ds, validation_data=val_ds, epochs=1)
How to minimize RAM usage
From the very helpful comments and answers of fellow users, I came to the following conclusions.
First, save the data to an HDF5 file, so the whole dataset does not have to be loaded into memory.
import h5py as h5
import gc
file = h5.File('data.h5', 'r')
X = file['X']
y = file['y']
gc.collect()
I am calling the garbage collector just to be safe.
Then, there is no need to pass the data itself to the generator, since X and y are the same for training, validation, and testing. To differentiate between the splits, we use index maps:
# split data for validation and testing
val_split, test_split = 0.2, 0.1
train_indices = np.arange(len(X))[:-int(len(X) * (val_split + test_split))]
val_indices = np.arange(len(X))[-int(len(X) * (val_split + test_split)) : -int(len(X) * test_split)]
test_indices = np.arange(len(X))[-int(len(X) * test_split):]

class DataGen(tf.keras.utils.Sequence):
    def __init__(self, index_map, batch_size):
        self.X = X
        self.y = y
        self.index_map = index_map
        self.batch_size = batch_size

    def __getitem__(self, index):
        X_batch = self.X[self.index_map[
            index * self.batch_size : (index + 1) * self.batch_size
        ]]
        y_batch = self.y[self.index_map[
            index * self.batch_size : (index + 1) * self.batch_size
        ]]
        return X_batch, y_batch

    def __len__(self):
        return len(self.index_map) // self.batch_size

train_gen = DataGen(train_indices, 32)
val_gen = DataGen(val_indices, 32)
test_gen = DataGen(test_indices, 32)
The last thing to notice is how I implemented the data fetching inside __getitem__.
Correct solution:
X_batch = self.X[self.index_map[
    index * self.batch_size : (index + 1) * self.batch_size
]]
Wrong solution:
X_batch = self.X[self.index_map][
    index * self.batch_size : (index + 1) * self.batch_size
]
(and the same for y)
Notice the difference? In the wrong solution I am loading the whole dataset (training, validation, or testing) into memory! In the correct solution I only load the batch that is about to be fed to the fit method.
With this setup, RAM usage only rises to 2.88 GB, which is pretty cool!
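To see why this matters in practice, here is a minimal sketch (assuming the same data.h5 layout as above) contrasting the two indexing styles: with h5py, indexing with the full index map first materializes the whole selection in RAM, while indexing with only the batch slice reads just those rows from disk.
import numpy as np
import h5py

with h5py.File('data.h5', 'r') as f:
    X = f['X']                       # h5py Dataset: stays on disk until indexed
    index_map = np.arange(len(X))

    batch = X[index_map[0:32]]       # correct: reads only 32 rows from disk
    # whole = X[index_map][0:32]     # wrong: X[index_map] first loads the entire
    #                                # dataset into a NumPy array, then slices it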
Make use of fit_generator instead of the fit method
That is, instead of
hist_mobilenet = mobilenet1.fit(train_gen, validation_data=val_gen, epochs=1)
Use
hist_mobilenet = mobilenet1.fit_generator(train_gen, validation_data=val_gen, epochs=1)
According to this answer:
Keras' fit method loads all the data into memory at once, meaning changing your batch size will have no effect on the RAM it takes up. Have a look at using fit_generator, which is designed for use with large datasets.
I think fit_generator will load the data batch-wise and not take up all the RAM at once.

Keras data generators for image inpainting using autoencoder

I am trying to train an autoencoder for image inpainting where the input images are the corrupted ones, and the output images are the ground truth.
The dataset used is organized as:
/Dataset
    /corrupted
        img1.jpg
        img2.jpg
        ...
    /groundTruth
        img1.jpg
        img2.jpg
        ...
The number of images is relatively large. How can I feed the data to the model using Keras image data generators? I checked the flow_from_directory method but couldn't find a proper class_mode to use (each image in the 'corrupted' folder maps to the image with the same name in the 'groundTruth' folder).
If there is no pre-built image data generator that provides the functionality you require, you can create your own custom data generator.
To do so, create your new data generator class by subclassing tf.keras.utils.Sequence. You are required to implement the __getitem__ and __len__ methods in your new class. __len__ must return the number of batches in your dataset, while __getitem__ must return the elements of a single batch as a tuple.
You can read the official docs here. Below is a code example:
from skimage.io import imread
from skimage.transform import resize
import numpy as np
import math
from tensorflow.keras.utils import Sequence

# Here, `x_set` is a list of paths to the images
# and `y_set` are the associated classes.

class CIFAR10Sequence(Sequence):
    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.array([
            resize(imread(file_name), (200, 200))
            for file_name in batch_x]), np.array(batch_y)
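For the inpainting setup described in the question, a minimal adaptation might look like the sketch below (an illustration only; it assumes the corrupted and ground-truth images share file names under Dataset/corrupted and Dataset/groundTruth, and that 200x200 is an acceptable working size):
import math
import os
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence

class InpaintingSequence(Sequence):
    def __init__(self, root_dir, batch_size):
        # pair images by file name across the two folders
        self.filenames = sorted(os.listdir(os.path.join(root_dir, 'corrupted')))
        self.root_dir = root_dir
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.filenames) / self.batch_size)

    def __getitem__(self, idx):
        names = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
        corrupted = np.array([
            resize(imread(os.path.join(self.root_dir, 'corrupted', n)), (200, 200))
            for n in names])
        ground_truth = np.array([
            resize(imread(os.path.join(self.root_dir, 'groundTruth', n)), (200, 200))
            for n in names])
        return corrupted, ground_truth  # inputs: corrupted images, targets: ground truth

# usage (paths and model are placeholders):
# train_gen = InpaintingSequence('/Dataset', batch_size=32)
# autoencoder.fit(train_gen, epochs=10)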
Hope the answer was helpful!

Colab: How to speed up training? How to load data from Google Drive to Colab efficiently?

I stored my data on Google Drive and mount it in Colab with this code:
from google.colab import drive
drive.mount('/content/gdrive')
That works, but training a neural network, for example, is very slow. Now I was wondering if there is another option to get the data onto Colab?
Note: I cannot store all the data on Colab at once because it is too big, so I need to train my network batch-wise.
To get the data batch-wise, I use a custom DataGenerator based on keras.utils.Sequence:
class DataGenerator(Sequence):
    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size : (idx + 1) * self.batch_size]
        batch_x = [imread(file_name) for file_name in batch_x]
        batch_x = np.array(batch_x)
        batch_x = batch_x * 1. / 255
        batch_y = self.y[idx * self.batch_size : (idx + 1) * self.batch_size]
        batch_y = np.array(batch_y)
        return batch_x, batch_y
I am using TensorFlow for the configuration of the model.
The dataset contains more than 180,000 images of license plates for which I would like to recognize the ground-truth text. The image paths are the x_set data, and y_set contains the corresponding labels for every image, one-hot encoded.
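For reference, this is roughly how the generator ends up feeding the model (a sketch; the model itself and the exact path/label lists are placeholders):
# x_paths: list of image file paths on the mounted Drive (placeholder)
# y_onehot: corresponding one-hot encoded labels (placeholder)
train_gen = DataGenerator(x_paths, y_onehot, batch_size=32)
model.fit(train_gen, epochs=10)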

Keras difference between generator and sequence

I'm using a deep CNN+LSTM network to perform classification on a dataset of 1D signals. I'm using Keras 2.2.4 backed by TensorFlow 1.12.0. Since I have a large dataset and limited resources, I'm using a generator to load the data into memory during the training phase. First, I tried this generator:
def data_generator(batch_size, preproc, type, x, y):
    num_examples = len(x)
    examples = zip(x, y)
    examples = sorted(examples, key=lambda x: x[0].shape[0])
    end = num_examples - batch_size + 1
    batches = [examples[i:i + batch_size] for i in range(0, end, batch_size)]
    random.shuffle(batches)
    while True:
        for batch in batches:
            x, y = zip(*batch)
            yield preproc.process(x, y)
Using the above method, I'm able to launch training with a mini-batch size of up to 30 samples at a time. However, this method does not guarantee that the network will only train once on each sample per epoch. Consider this comment from the Keras website:
Sequence is a safer way to do multiprocessing. This structure
guarantees that the network will only train once on each sample per
epoch which is not the case with generators.
I've tried another way of loading data using the following class:
class Data_Gen(Sequence):
    def __init__(self, batch_size, preproc, type, x_set, y_set):
        self.x, self.y = np.array(x_set), np.array(y_set)
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])
        np.random.shuffle(self.indices)
        self.type = type
        self.preproc = preproc

    def __len__(self):
        # print(self.type + ' - len : ' + str(int(np.ceil(self.x.shape[0] / self.batch_size))))
        return int(np.ceil(self.x.shape[0] / self.batch_size))

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return self.preproc.process(batch_x, batch_y)

    def on_epoch_end(self):
        np.random.shuffle(self.indices)
I can confirm that with this method the network trains once on each sample per epoch, but this time, when I put more than 7 samples in the mini-batch, I get an out-of-memory error:
OP_REQUIRES failed at random_op.cc: 202: Resource exhausted: OOM when
allocating tensor with shape...............
I can confirm that I'm using the same model architecture, configuration, and machine for this test. I'm wondering why there would be a difference between these two ways of loading data.
Please don't hesitate to ask for more details in case needed.
Thanks in advance.
EDITED:
Here is the code I'm using to fit the model:
reduce_lr = keras.callbacks.ReduceLROnPlateau(
    factor=0.1,
    patience=2,
    min_lr=params["learning_rate"])

checkpointer = keras.callbacks.ModelCheckpoint(
    filepath=str(get_filename_for_saving(save_dir)),
    save_best_only=False)

batch_size = params.get("batch_size", 32)

path = './logs/run-{0}'.format(datetime.now().strftime("%b %d %Y %H:%M:%S"))
tensorboard = keras.callbacks.TensorBoard(log_dir=path, histogram_freq=0,
                                          write_graph=True, write_images=False)

if index == 0:
    print(model.summary())
    print("Model memory needed for batchsize {0} : {1} Gb".format(batch_size, get_model_memory_usage(batch_size, model)))

if params.get("generator", False):
    train_gen = load.data_generator(batch_size, preproc, 'Train', *train)
    dev_gen = load.data_generator(batch_size, preproc, 'Dev', *dev)
    valid_metrics = Metrics(dev_gen, len(dev[0]) // batch_size, batch_size)
    model.fit_generator(
        train_gen,
        steps_per_epoch=len(train[0]) / batch_size + 1 if len(train[0]) % batch_size != 0 else len(train[0]) // batch_size,
        epochs=MAX_EPOCHS,
        validation_data=dev_gen,
        validation_steps=len(dev[0]) / batch_size + 1 if len(dev[0]) % batch_size != 0 else len(dev[0]) // batch_size,
        callbacks=[valid_metrics, MyCallback(), checkpointer, reduce_lr, tensorboard])

    # train_gen = load.Data_Gen(batch_size, preproc, 'Train', *train)
    # dev_gen = load.Data_Gen(batch_size, preproc, 'Dev', *dev)
    # model.fit_generator(
    #     train_gen,
    #     epochs=MAX_EPOCHS,
    #     validation_data=dev_gen,
    #     callbacks=[valid_metrics, MyCallback(), checkpointer, reduce_lr, tensorboard])
Those methods are roughly the same. It is correct to subclass Sequence when your dataset doesn't fit in memory, but you shouldn't run any preprocessing in any of the class's methods, because that would be re-executed once per epoch, wasting lots of computing resources.
It is probably also easier to shuffle the samples rather than their indices. Like this:
from random import shuffle

class DataGen(Sequence):
    def __init__(self, batch_size, preproc, type, x_set, y_set):
        self.samples = list(zip(x_set, y_set))
        self.batch_size = batch_size
        shuffle(self.samples)
        self.type = type
        self.preproc = preproc

    def __len__(self):
        return int(np.ceil(len(self.samples) / self.batch_size))

    def __getitem__(self, i):
        batch = self.samples[i * self.batch_size:(i + 1) * self.batch_size]
        return self.preproc.process(*zip(*batch))

    def on_epoch_end(self):
        shuffle(self.samples)
I think it is impossible to say why you run out of memory without knowing more about your data. My guess would be that your preproc function is doing something wrong. You can debug it by running:
for e in DataGen(batch_size, preproc, 'Train', *train):
    print(e)
for e in DataGen(batch_size, preproc, 'Dev', *dev):
    print(e)
You will most likely run out of memory.
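If the batches themselves fit in memory, a gentler variant of this check (a sketch, assuming preproc.process returns a pair of NumPy arrays) is to print only the shapes and byte sizes of each batch instead of the full tensors, which makes it easy to spot one pathologically large batch:
import numpy as np

def inspect_batches(gen, name):
    # iterate once over a Sequence and report per-batch sizes
    for i in range(len(gen)):
        batch_x, batch_y = gen[i]
        print(name, i,
              'x shape:', np.asarray(batch_x).shape,
              'x MB:', np.asarray(batch_x).nbytes / 1e6,
              'y shape:', np.asarray(batch_y).shape)

inspect_batches(DataGen(batch_size, preproc, 'Train', *train), 'Train')
inspect_batches(DataGen(batch_size, preproc, 'Dev', *dev), 'Dev')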

Index Error - Python - EMNIST dataset

I've been trying to construct a neural network to train on the EMNIST datasets. The two segments of code below live in entirely different cells in a Jupyter notebook, but they are the two that cause the error stated below. My problem is that the code runs fine for one dataset, yet for this particular dataset I receive an error. If anyone could tell me where I've been going wrong, it would be greatly appreciated.
IndexError: index 540774 is out of bounds for size 540774
def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

test_labels_flat = test_data_labels[["1"]].values.ravel()
test_labels_count = np.unique(test_labels_flat).shape[0]
test_labels = dense_to_one_hot(test_labels_flat, test_labels_count)
test_labels = test_labels.astype(np.uint8)
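For context, here is a minimal self-contained illustration of how this kind of IndexError can arise in dense_to_one_hot (a sketch with made-up labels, not the actual EMNIST data): if the label values are not zero-based, the largest flat index can reach num_labels * num_classes.
import numpy as np

def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = np.array([1, 2, 3])              # 1-based labels (made up for illustration)
num_classes = np.unique(labels).shape[0]  # 3 distinct values

# dense_to_one_hot(labels, num_classes)   # IndexError: index 9 is out of bounds for size 9
dense_to_one_hot(labels - labels.min(), num_classes)  # works once labels are shifted to 0-based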
