Keras data generators for image inpainting using autoencoder - python

I am trying to train an autoencoder for image inpainting where the input images are the corrupted ones, and the output images are the ground truth.
The dataset used is organized as:
/Dataset
    /corrupted
        img1.jpg
        img2.jpg
        ...
    /groundTruth
        img1.jpg
        img2.jpg
        ...
The number of images used is relatively large. How can I feed the data to the model using Keras image data generators? I checked the flow_from_directory method but couldn't find a proper class_mode to use (each image in the 'corrupted' folder maps to the one with the same name in the 'groundTruth' folder).

If there is no pre-built image data generator that provides the functionality you require, you can create your own custom data generator.
To do so, create your new data generator class by subclassing tf.keras.utils.Sequence. You are required to implement the __getitem__ and __len__ methods in your new class: __len__ must return the number of batches in your dataset, while __getitem__ must return the elements of a single batch as a tuple.
You can read the official docs here. Below is a code example:
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence
import numpy as np
import math

# Here, `x_set` is a list of paths to the images
# and `y_set` are the associated classes.

class CIFAR10Sequence(Sequence):

    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]

        return np.array([
            resize(imread(file_name), (200, 200))
            for file_name in batch_x]), np.array(batch_y)
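For your inpainting setup, where each corrupted image maps to the ground-truth image of the same name, a minimal sketch of the same idea could look like the following (the directory names come from your question; the class name, target size, and other details are assumptions):

import os
import math
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence

class InpaintingSequence(Sequence):
    """Yields (corrupted, ground_truth) batches; filenames match across folders."""

    def __init__(self, corrupted_dir, ground_truth_dir, batch_size, target_size=(256, 256)):
        self.corrupted_dir = corrupted_dir
        self.ground_truth_dir = ground_truth_dir
        self.filenames = sorted(os.listdir(corrupted_dir))
        self.batch_size = batch_size
        self.target_size = target_size

    def __len__(self):
        return math.ceil(len(self.filenames) / self.batch_size)

    def __getitem__(self, idx):
        names = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
        # skimage's resize also rescales pixel values to [0, 1]
        x = np.array([resize(imread(os.path.join(self.corrupted_dir, n)), self.target_size)
                      for n in names])
        y = np.array([resize(imread(os.path.join(self.ground_truth_dir, n)), self.target_size)
                      for n in names])
        return x, y

# Hypothetical usage:
# train_gen = InpaintingSequence('Dataset/corrupted', 'Dataset/groundTruth', batch_size=32)
# model.fit(train_gen, epochs=10)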
Hope the answer was helpful!

Related

Data Loading from disk in autokeras for regression

I am working on a regression problem using AutoKeras. As my dataset is huge, I need to load the data from the hard disk in batches. The code I used for batch loading is given below:
import math
import numpy as np
import scipy.io
import tensorflow as tf

class data_read(tf.keras.utils.Sequence):

    def __init__(self, filename_ip, filename_op, batch_size):
        self.x = filename_ip
        self.y = filename_op
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.array([
            scipy.io.loadmat(file_name)['B']
            for file_name in batch_x]), np.array([
            scipy.io.loadmat(file_name1)['y_op'][0][3]
            for file_name1 in batch_y])
The code for accessing the class 'data_read' is given below:
gen_train = data_read(filename_ip_tr,filename_op_tr,256)
where filename_ip_tr and filename_op_tr are two lists containing paths to the input data and the output target values. Both the input and output datasets are .mat files.
The code for training the model is given below:
model.fit(gen_train, verbose=1, epochs=10)
This code works if 'model' is defined by the user (i.e. 'model' does not use AutoKeras). But it does not work if the model uses AutoKeras, i.e. if model = ak.AutoModel() or ak.ImageRegressor().
An error of
AttributeError: 'data_read' object has no attribute 'shape'
will be displayed.
Please help me solve this issue. I have exhausted my resources. Thank you.

Fast Data Generator for Training GoogLeNet model

I am trying to train GoogLeNet from scratch in Keras. I have built the network architecture, and it is ready to train. To train GoogLeNet with its auxiliary outputs, the data generator must yield three output labels. I wrote my custom data generator using tf.keras.utils.Sequence.
My custom generator is:
from skimage.transform import resize
from skimage.io import imread
import numpy as np
import math
from tensorflow.keras.utils import Sequence
class GoogLeNetDatasetGenerator(Sequence):

    def __init__(self, X_train_path, y_train, batch_size):
        """
        Initialize the GoogLeNet dataset generator.
        :param X_train_path: Paths of train images
        :param y_train: Labels of train images
        :param batch_size: Batch size
        """
        self.X_train_path = X_train_path
        self.y_train = y_train
        self.batch_size = batch_size
        self.indexes = np.arange(len(self.X_train_path))
        np.random.shuffle(self.indexes)

    def __len__(self):
        """
        Denotes the number of batches per epoch
        :return:
        """
        return math.ceil(len(self.X_train_path) / self.batch_size)

    def __getitem__(self, index):
        """
        Get batch indexes from shuffled indexes
        :param index:
        :return:
        """
        indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
        X_batch_names = [self.X_train_path[i] for i in indexes]
        y_batch_naive = self.y_train[indexes]

        X_batch = np.array([resize(imread(file_name), (224, 224)) for file_name in X_batch_names],
                           dtype='float32')
        y_batch = [y_batch_naive, y_batch_naive, y_batch_naive]
        return X_batch, y_batch

    def on_epoch_end(self):
        """
        Updates indexes after each epoch
        :return:
        """
        self.indexes = np.arange(len(self.X_train_path))
        np.random.shuffle(self.indexes)
Also, I compile and train the model with the following code:
# Compile model
model.compile(loss=[CategoricalCrossentropy(), CategoricalCrossentropy(), CategoricalCrossentropy()],
              loss_weights=[1, 0.3, 0.3], optimizer='adam',
              metrics=['accuracy'])
# Train model
history = model.fit(train_dataset, validation_data=test_dataset, epochs=100)
While using the GPU version of TensorFlow, loading images in the data generator is time-consuming, which slows down the training process. Are there any suggestions or other solutions for speeding up the data loading?
P.S.
I searched Stack Overflow questions such as this one, but I did not find any ideas.
I found another, faster solution: you can use tf.data.Dataset. In the first step, I list all training image paths. Using the map method helped me read each image and properly configure the corresponding label. Here is my sample code for loading an image with the ternary label.
image_filenames = tf.constant(image_list)
slices_dataset = tf.data.Dataset.from_tensor_slices(image_filenames)
slices_labels = tf.data.Dataset.from_tensor_slices(label_list)
image_dataset = slices_dataset.map(map_func=process_image)
label_dataset = slices_labels.map(map_func=process_label)
x_dataset = image_dataset.shuffle(buffer_size=Cfg.BUFFER_SIZE, seed=0). \
    batch(batch_size=Cfg.BATCH_SIZE)
y_dataset = label_dataset.shuffle(buffer_size=Cfg.BUFFER_SIZE, seed=0). \
    batch(batch_size=Cfg.BATCH_SIZE)
dataset = tf.data.Dataset.zip((x_dataset, y_dataset))
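The process_image and process_label map functions are not shown above; a minimal sketch of what they might look like (the JPEG format, the 224x224 size, and the three-way label duplication for the auxiliary heads are all assumptions):

def process_image(filename):
    # Read and decode the image file, then resize and scale to [0, 1]
    image = tf.io.read_file(filename)
    image = tf.io.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image

def process_label(label):
    # Duplicate the label for the main output and the two auxiliary outputs
    return label, label, label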

Passing data from custom data generator to model.fit()

I am doing end-to-end mapping. As I have to pass two images (input and output), I have created a custom generator. My generator gets two versions of the same image at different resolutions. Right now I can only pass 5 images to training, but I want to pass the whole generator so that all my data gets trained. As I am new to using generators and yield, I don't know the correct way to pass the whole generator.
import os
import numpy as np
import cv2
class image_gen():

    def __init__(self, idir, odir, batch_size, shuffle=True):
        self.batch_index = 0
        self.idir = idir  # directory containing input images
        self.odir = odir  # directory containing output images
        self.batch_size = batch_size  # batch size is the number of samples in a batch
        self.shuffle = shuffle  # set to True to shuffle images, False for no shuffle
        self.label_list = []  # initialize list to hold sequential list of total labels generated
        self.image_list = []  # initialize list to hold sequential list of total image filenames generated
        self.i_list = os.listdir(self.idir)
        self.o_list = os.listdir(self.odir)  # list of images in directory

    def get_images(self):  # gets a batch of input images, resizes input image to make target images
        while True:
            input_image_batch = []
            output_image_batch = []  # initialize list to hold a batch of target images
            sample_count = len(self.i_list)  # determine total number of images available
            for i in range(self.batch_index * self.batch_size, (self.batch_index + 1) * self.batch_size):  # iterate for a batch
                j = i % sample_count  # cycle j value over range of available images
                k = j % self.batch_size  # cycle k value over batch size
                if self.shuffle:  # if shuffle, select a random integer between 0 and sample_count-1 to pick as the image-label pair
                    m = np.random.randint(low=0, high=sample_count - 1, size=None, dtype=int)
                else:
                    m = j  # no shuffle
                # define the paths to the m th image
                path_to_in_img = os.path.join(self.idir, self.i_list[m])
                path_to_out_img = os.path.join(self.odir, self.o_list[m])
                input_image = cv2.imread(path_to_in_img)
                input_image = cv2.resize(input_image, (3200, 3200))  # create the target image from the input image
                output_image = cv2.imread(path_to_out_img)
                output_image = cv2.resize(output_image, (3200, 3200))
                input_image_batch.append(input_image)
                output_image_batch.append(output_image)
            input_image_array = np.array(input_image_batch)
            input_image_array = input_image_array / 255.0
            output_image_array = np.array(output_image_batch)
            output_image_array = output_image_array / 255.0
            self.batch_index = self.batch_index + 1
            yield (input_image_array, output_image_array)
            if self.batch_index * self.batch_size > sample_count:
                break
This is how I get the images:
batch_size=5
idir=r'D:\\train'
odir=r'D:\\Train\\train'
shuffle=True
gen=image_gen(idir,odir,batch_size,shuffle=True) # instantiate an instance of the class
input_images,output_images = next(gen.get_images())
This is how I train. This way I train on only 5 images, not the whole dataset:
model.fit(input_images, output_images, validation_data=(valin_images, valout_images), batch_size=5, epochs=100)
When I try to pass the whole dataset:
model.fit(gen(), validation_data=(valin_images, valout_images), batch_size=5, epochs=1)
I get the error "'image_gen' object is not callable". How should I pass the generator to model.fit()?
You have this problem because the error is raised when you try to call image_gen as if it were a function, when in fact it is an object of a class.
In the first snippet you provided, you actually accessed a method of the class, which is indeed a generator; it yielded some numpy arrays that could be fed to the model as input. The second snippet fails because of the error described in the first paragraph.
Two possible solutions for your problem would be the following:
Use a Keras Sequence() generator.
Use a function as a generator (def my_generator(...)).
I personally recommend the first solution, as the Sequence() generator ensures that you train only once on each sample during an epoch, a property which is not satisfied when using simple function generators.
Solution for Keras Sequence():
You need to subclass Sequence and override its methods. A complete example from the TensorFlow official documentation is:
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence
import numpy as np
import math

# Here, `x_set` is a list of paths to the images
# and `y_set` are the associated classes.

class CIFAR10Sequence(Sequence):

    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]

        return np.array([
            resize(imread(file_name), (200, 200))
            for file_name in batch_x]), np.array(batch_y)
You can use the above code as a starting point for your solution. Incidentally, it is likely that your network will not train with such huge image dimensions (3200x3200); you could also try lowering them.
A solution with a simple function generator could be:
def my_generator(path_to_dataset, other_argument):
    ...
    ...
    yield image_1, image_2

train_generator = my_generator(path_to_train, argument_1)
val_generator = my_generator(path_to_val, argument_2)
model.fit(train_generator,
          steps_per_epoch=len(training_samples) // BATCH_SIZE,
          epochs=10, validation_data=val_generator,
          validation_steps=len(validation_samples) // BATCH_SIZE)

Load tensorflow images and create patches

I am using image_dataset_from_directory to load a very large RGB imagery dataset from disk into a Dataset. For example,
dataset = tf.keras.preprocessing.image_dataset_from_directory(
    <directory>,
    label_mode=None,
    seed=1,
    subset='training',
    validation_split=0.1)
The Dataset has, say, 100000 images grouped into batches of size 32, yielding a tf.data.Dataset with spec (batch=32, width=256, height=256, channels=3).
I would like to extract patches from the images to create a new tf.data.Dataset with image spatial dimensions of, say, 64x64.
Therefore, I would like to create a new Dataset with 400000 patches, still in batches of 32, with spec (batch=32, width=64, height=64, channels=3).
I've looked at the window method and the extract_patches function, but it's not clear from the documentation how to use them to create the new Dataset I need to start training on the patches. window seems to be geared toward 1D tensors, and extract_patches seems to work with arrays rather than Datasets.
Any suggestions on how to accomplish this?
UPDATE:
Just to clarify my needs: I am trying to avoid manually creating the patches on disk. One, that would be untenable disk-wise. Two, the patch size is not fixed; the experiments will be conducted over several patch sizes. So I do not want to perform the patch creation manually, either on disk or by loading the images into memory and patching them there. I would prefer to have TensorFlow handle the patch creation as part of the pipeline workflow to minimize disk and memory usage.
What you're looking for is tf.image.extract_patches. Here's an example:
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
import numpy as np

data = tfds.load('mnist', split='test', as_supervised=True)

get_patches = lambda x, y: (tf.reshape(
    tf.image.extract_patches(
        images=tf.expand_dims(x, 0),
        sizes=[1, 14, 14, 1],
        strides=[1, 14, 14, 1],
        rates=[1, 1, 1, 1],
        padding='VALID'), (4, 14, 14, 1)), y)

data = data.map(get_patches)

fig = plt.figure()
plt.subplots_adjust(wspace=.1, hspace=.2)
images, labels = next(iter(data))
for index, image in enumerate(images):
    ax = plt.subplot(2, 2, index + 1)
    ax.set_xticks([])
    ax.set_yticks([])
    ax.imshow(image[..., 0])  # drop the channel axis so imshow accepts the patch
plt.show()
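If you then need flat batches of patches, as in the question's target spec, one possible follow-up (my addition, not part of the original answer) is to repeat each label once per patch, then unbatch and rebatch:

# Repeat each label so the patches and labels stay aligned, then
# flatten the per-image patch stacks into a single stream of patches.
data = data.map(lambda patches, y: (patches, tf.fill([tf.shape(patches)[0]], y)))
data = data.unbatch().batch(32)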
I believe you can use a Python class generator. You can pass this generator to the model.fit function if you want. I actually used one once for label preprocessing.
I wrote the following dataset generator, which loads a batch from your dataset and splits the images of the batch into multiple images based on the tile_shape parameter. As soon as enough images have been queued, the next batch is returned.
In the example, I used a simple from_tensor_slices dataset for simplification. You can, of course, replace it with yours.
import tensorflow as tf

class TileDatasetGenerator:

    def __init__(self, dataset, batch_size, tile_shape):
        self.dataset_iterator = iter(dataset)
        self.batch_size = batch_size
        self.tile_shape = tile_shape
        self.image_queue = None

    def __iter__(self):
        return self

    def __next__(self):
        if self._has_queued_enough_for_batch():
            return self._dequeue_batch()

        batch = next(self.dataset_iterator)
        self._split_images(batch)
        return self.__next__()

    def _has_queued_enough_for_batch(self):
        return self.image_queue is not None and tf.shape(self.image_queue)[0] >= self.batch_size

    def _dequeue_batch(self):
        batch, remainder = tf.split(self.image_queue, [self.batch_size, -1], axis=0)
        self.image_queue = remainder
        return batch

    def _split_images(self, batch):
        batch_shape = tf.shape(batch)
        batch_splitted = tf.reshape(batch, shape=[-1, self.tile_shape[0], self.tile_shape[1], batch_shape[-1]])
        if self.image_queue is None:
            self.image_queue = batch_splitted
        else:
            self.image_queue = tf.concat([self.image_queue, batch_splitted], axis=0)

dataset = tf.data.Dataset.from_tensor_slices(tf.ones(shape=[128, 64, 64, 3]))
dataset = dataset.batch(32)  # batch() returns a new dataset, so the result must be reassigned

generator = TileDatasetGenerator(dataset, batch_size=16, tile_shape=[32, 32])

for batch in generator:
    tf.print(tf.shape(batch))
Edit:
It is possible to convert the generator to a tf.data.Dataset if you want, but this requires adding a __call__ method to the generator that returns an iterator (self in this case):
new_dataset = tf.data.Dataset.from_generator(generator, output_types=tf.float32)
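A minimal sketch of that __call__ addition (my reading of the note above; not shown in the original answer):

# Add this method inside TileDatasetGenerator:
def __call__(self):
    # from_generator calls the object it is given, so return the iterator (self)
    return self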

How to build a Custom Data Generator for Keras/tf.Keras where X images are being augmented and corresponding Y labels are also images

I am working on image binarization using a UNet and have a dataset of 150 images along with their binarized versions. My idea is to augment the images randomly to make them look different, so I have made a function that applies any of 4-5 types of noise, skewness, shearing and so on to an image. I could have easily used
ImageDataGenerator(preprocessing_function=my_aug_function) to augment the images, but the problem is that my y target is also an image. Also, I could have used something like:
train_dataset = (
    train_dataset.map(
        encode_single_sample, num_parallel_calls=tf.data.experimental.AUTOTUNE
    )
    .batch(batch_size)
    .prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
)
But it has 2 problems:
With a larger dataset, it'll blow up the memory, as the data needs to already be in memory.
Crucially, I need to augment the images on the fly to make it look like I have a huge dataset.
Another solution could be saving the augmented images to a directory, making 30-40K of them, and then loading them. That would be a silly thing to do.
Now the idea is that I can use Sequence as the parent class, but how can I keep augmenting and generating new images on the fly, with the respective binarized Y images?
I have an idea, sketched in the code below. Can somebody help me with the augmentation and generation of the y images? I have my X_DIR and Y_DIR, where the image names for the binarized and original images are the same but they are stored in different directories.
class DataGenerator(tensorflow.keras.utils.Sequence):

    def __init__(self, files_path, labels_path, batch_size=32, shuffle=True, random_state=42):
        'Initialization'
        self.files = files_path
        self.labels = labels_path
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.random_state = random_state
        self.on_epoch_end()

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        # Shuffle the data here

    def __len__(self):
        return int(np.floor(len(self.files) / self.batch_size))

    def __getitem__(self, index):
        # What do I do here?

    def __data_generation(self, files):
        # I think this is responsible for augmentation, but I have no idea how I should implement it or how it works.
Custom Image Data Generator
Load directory data into a DataFrame for the CustomDataGenerator:
import os
import pandas as pd

def data_to_df(data_dir, subset=None, validation_split=None):
    df = pd.DataFrame()
    filenames = []
    labels = []

    for dataset in os.listdir(data_dir):
        img_list = os.listdir(os.path.join(data_dir, dataset))
        label = name_to_idx[dataset]  # name_to_idx maps class-folder names to integer labels (defined elsewhere)

        for image in img_list:
            filenames.append(os.path.join(data_dir, dataset, image))
            labels.append(label)

    df["filenames"] = filenames
    df["labels"] = labels

    if subset == "train":
        split_indexes = int(len(df) * validation_split)
        train_df = df[split_indexes:]
        val_df = df[:split_indexes]
        return train_df, val_df

    return df

train_df, val_df = data_to_df(train_dir, subset="train", validation_split=0.2)
Custom Data Generator
import math
import numpy as np
import tensorflow as tf
from PIL import Image
from sklearn.utils import shuffle  # assuming sklearn's shuffle is the one used below
# preprocess_input should come from the Keras application matching your backbone,
# e.g. from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

class CustomDataGenerator(tf.keras.utils.Sequence):
    ''' Custom DataGenerator to load img
    Arguments:
        data_frame = pandas data frame in filenames and labels format
        batch_size = divide data in batches
        shuffle = shuffle data before loading
        img_shape = image shape in (h, w, d) format
        augmentation = data augmentation to make model robust to overfitting
    Output:
        Img: numpy array of image
        label : output label for image
    '''

    def __init__(self, data_frame, batch_size=10, img_shape=None, augmentation=True, num_classes=None):
        self.data_frame = data_frame
        self.train_len = len(data_frame)
        self.batch_size = batch_size
        self.img_shape = img_shape
        self.num_classes = num_classes
        print(f"Found {self.data_frame.shape[0]} images belonging to {self.num_classes} classes")

    def __len__(self):
        ''' return total number of batches '''
        self.data_frame = shuffle(self.data_frame)
        return math.ceil(self.train_len / self.batch_size)

    def on_epoch_end(self):
        ''' shuffle data after every epoch '''
        # fix: on_epoch_end is not working, shuffling in __len__ as an alternative
        pass

    def __data_augmentation(self, img):
        ''' function to apply some data augmentation '''
        img = tf.keras.preprocessing.image.random_shift(img, 0.2, 0.3)
        img = tf.image.random_flip_left_right(img)
        img = tf.image.random_flip_up_down(img)
        return img

    def __get_image(self, file_id):
        """ open image with file_id path and apply data augmentation """
        img = np.asarray(Image.open(file_id))
        img = np.resize(img, self.img_shape)
        img = self.__data_augmentation(img)
        img = preprocess_input(img)
        return img

    def __get_label(self, label_id):
        """ uncomment the line below to convert the label into categorical format """
        # label_id = tf.keras.utils.to_categorical(label_id, num_classes)
        return label_id

    def __getitem__(self, idx):
        batch_x = self.data_frame["filenames"][idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.data_frame["labels"][idx * self.batch_size:(idx + 1) * self.batch_size]
        # read your data here using the batch lists, batch_x and batch_y
        x = [self.__get_image(file_id) for file_id in batch_x]
        y = [self.__get_label(label_id) for label_id in batch_y]
        return tf.convert_to_tensor(x), tf.convert_to_tensor(y)
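A hypothetical usage example (the argument values are assumptions, not from the original answer):

train_gen = CustomDataGenerator(train_df, batch_size=32, img_shape=(224, 224, 3), num_classes=2)
val_gen = CustomDataGenerator(val_df, batch_size=32, img_shape=(224, 224, 3), num_classes=2)
model.fit(train_gen, validation_data=val_gen, epochs=10)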
You can use libraries like albumentations and imgaug; both are good, but I have heard there are issues with random seeds in albumentations.
Here's an example with imgaug, taken from the documentation here:
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.Dropout([0.05, 0.2]),      # drop 5% or 20% of all pixels
    iaa.Sharpen((0.0, 1.0)),       # sharpen the image
    iaa.Affine(rotate=(-45, 45)),  # rotate by -45 to 45 degrees (affects segmaps)
    iaa.ElasticTransformation(alpha=50, sigma=5)  # apply water effect (affects segmaps)
], random_order=True)

# Augment images and segmaps (`image` and `segmap` are a single image
# and its segmentation map from your dataset).
images_aug = []
segmaps_aug = []
for _ in range(len(input_data)):
    images_aug_i, segmaps_aug_i = seq(image=image, segmentation_maps=segmap)
    images_aug.append(images_aug_i)
    segmaps_aug.append(segmaps_aug_i)
You are going the right way with the custom generator. In __getitem__, make a batch using batch_x = self.files[idx * self.batch_size:(idx + 1) * self.batch_size] and the same for batch_y, then augment them using X, y = self.__data_generation(batch_x, batch_y), which will load the images (using any library you like; I prefer OpenCV) and return the augmented pairs (and any other manipulation).
Your __getitem__ will then return the tuple (X, y). A minimal sketch of those two methods follows (the OpenCV loading, labels stored as file paths in self.labels, and the augment() helper are all assumptions; wire in your own noise/skew/shear function):
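# Methods to add inside the DataGenerator(Sequence) class from the question.
def __getitem__(self, idx):
    batch_x = self.files[idx * self.batch_size:(idx + 1) * self.batch_size]
    batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
    return self.__data_generation(batch_x, batch_y)

def __data_generation(self, batch_x, batch_y):
    X, y = [], []
    for x_path, y_path in zip(batch_x, batch_y):
        img = cv2.imread(x_path)      # input image
        mask = cv2.imread(y_path, 0)  # binarized target, read as grayscale
        # augment() is a hypothetical helper: your noise/skew/shear function,
        # applied identically to the image and its mask
        img, mask = augment(img, mask)
        X.append(img / 255.0)
        y.append(mask / 255.0)
    return np.array(X), np.array(y)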
You can use ImageDataGenerator even if your label is an image.
Here is a simple example of how you can do that:
Code:
# Specifying your data augmentation here for both image and label
image_datagen = tf.keras.preprocessing.image.ImageDataGenerator()
mask_datagen = tf.keras.preprocessing.image.ImageDataGenerator()

# Provide the same seed and keyword arguments to the flow methods
seed = 1
image_generator = image_datagen.flow_from_directory(
    data_dir,
    class_mode=None,
    seed=seed)
mask_generator = mask_datagen.flow_from_directory(
    data_dir,
    class_mode=None,
    seed=seed)

# Combine the image and label generators.
train_generator = zip(image_generator, mask_generator)
Now, if you iterate over it you will get:
for image, label in train_generator:
    print(image.shape, label.shape)
    break
Output:
(32, 256, 256, 3) (32, 256, 256, 3)
You can use this train_generator with the fit() command.
Code:
model.fit(
    train_generator,
    steps_per_epoch=2000,
    epochs=50)
With flow_from_directory, your memory won't be cluttered, and ImageDataGenerator will take care of the augmentation part.
