Is it possible to get the file names that were loaded using flow_from_directory?
I have:
datagen = ImageDataGenerator(
    rotation_range=3,
    # featurewise_std_normalization=True,
    fill_mode='nearest',
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True
)
train_generator = datagen.flow_from_directory(
    path + '/train',
    target_size=(224, 224),
    batch_size=batch_size,
)
I have a custom generator for my multi-output model like:
a = np.arange(8).reshape(2, 4)
# print(a)
print(train_generator.filenames)

def generate():
    while 1:
        x, y = train_generator.next()
        yield [x], [a, y]
Note that at the moment I am generating random numbers for a, but for real training I wish to load a JSON file that contains the bounding box coordinates for my images. For that I will need to get the file names of the images produced by the train_generator.next() method. Once I have those, I can load the file, parse the JSON, and pass the coordinates instead of a. It is also necessary that the ordering of the x variable and the list of file names I get is the same.
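For context, once the batch filenames are available (the answers below show how to obtain them), the JSON lookup described here might look like the following sketch; boxes.json and its filename-to-coordinates layout are assumptions, not part of the question:

import json
import numpy as np

# Hypothetical layout: boxes.json maps each relative filename
# (as it appears in train_generator.filenames) to [x, y, w, h].
with open('boxes.json') as f:
    boxes = json.load(f)

def generate():
    while True:
        x, y = train_generator.next()
        # Recover the filenames of the current batch (shuffle=False assumed;
        # see the answers below for the shuffle=True case).
        idx = (train_generator.batch_index - 1) * train_generator.batch_size
        names = train_generator.filenames[idx:idx + train_generator.batch_size]
        a = np.array([boxes[name] for name in names])
        yield [x], [a, y]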
Yes, it is possible, at least with version 2.0.4 (I don't know about earlier versions).
The instance returned by ImageDataGenerator().flow_from_directory(...) has a filenames attribute, which is a list of all the files in the order the generator yields them, and also a batch_index attribute. So you can do it like this:
datagen = ImageDataGenerator()
gen = datagen.flow_from_directory(...)
On every iteration of the generator you can get the corresponding filenames like this:
for i in gen:
    idx = (gen.batch_index - 1) * gen.batch_size
    print(gen.filenames[idx : idx + gen.batch_size])
This will give you the filenames of the images in the current batch.
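Note that this slice only matches the batch contents when shuffle=False; with shuffling enabled, the generator's index_array records the order actually used, as in this sketch:

for x, y in gen:
    idx = (gen.batch_index - 1) * gen.batch_size
    batch_indices = gen.index_array[idx:idx + gen.batch_size]
    print([gen.filenames[i] for i in batch_indices])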
You can make a fairly minimal subclass that returns an (image, file_path) tuple by inheriting from DirectoryIterator:
import numpy as np
from keras.preprocessing.image import ImageDataGenerator, DirectoryIterator

class ImageWithNames(DirectoryIterator):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.filenames_np = np.array(self.filepaths)
        self.class_mode = None  # so that we only get the images back

    def _get_batches_of_transformed_samples(self, index_array):
        return (super()._get_batches_of_transformed_samples(index_array),
                self.filenames_np[index_array])
In __init__, I added an attribute that is the NumPy version of self.filepaths, so that we can easily index into that array to get the paths on each batch generation.
The only other change to the base class is to return a tuple of the image batch, super()._get_batches_of_transformed_samples(index_array), and the file paths, self.filenames_np[index_array].
With that, you can make your generator like so:
imagegen = ImageDataGenerator()
datagen = ImageWithNames('/data/path', imagegen, target_size=(224,224))
And then check with
next(datagen)
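Unpacking the result makes the tuple structure explicit (a small sketch of what next() should return here):

images, paths = next(datagen)
print(images.shape)  # e.g. (batch_size, 224, 224, 3)
print(paths[:3])     # file paths of the first three images in the batch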
At least with version 2.2.4, you can do it like this:
datagen = ImageDataGenerator()
gen = datagen.flow_from_directory(...)
for file in gen.filenames:
    print(file)
or get the file paths:
for filepath in gen.filepaths:
    print(filepath)
Here is an example that works with shuffle=True as well, and also properly handles the last batch. To make one pass:
datagen = ImageDataGenerator().flow_from_directory(...)
batches_per_epoch = (datagen.samples // datagen.batch_size
                     + (datagen.samples % datagen.batch_size > 0))
for i in range(batches_per_epoch):
    batch = next(datagen)
    current_index = (datagen.batch_index - 1) * datagen.batch_size
    if current_index < 0:
        if datagen.samples % datagen.batch_size > 0:
            current_index = max(0, datagen.samples - datagen.samples % datagen.batch_size)
        else:
            current_index = max(0, datagen.samples - datagen.batch_size)
    index_array = datagen.index_array[current_index:current_index + datagen.batch_size].tolist()
    img_paths = [datagen.filepaths[idx] for idx in index_array]
    # batch[0] - x, batch[1] - y, img_paths - absolute paths
The code below might help; it overrides flow_from_directory:
class AugmentingDataGenerator(ImageDataGenerator):
    def flow_from_directory(self, directory, mask_generator, *args, **kwargs):
        generator = super().flow_from_directory(directory, class_mode=None, *args, **kwargs)
        seed = None if 'seed' not in kwargs else kwargs['seed']
        while True:
            for image_path in generator.filepaths:
                # Get augmented image samples
                image = next(generator)
                # print(image_path)
                yield image, image_path
# Create training generator
train_datagen = AugmentingDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    rescale=1. / 255,
    horizontal_flip=True
)
train_generator = train_datagen.flow_from_directory(
    TRAIN_DIRECTORY_PATH,
    target_size=(256, 256),
    shuffle=False,
    batch_size=BATCH_SIZE
)

# Create testing generator
test_datagen = AugmentingDataGenerator(rescale=1. / 255)
test_generator = test_datagen.flow_from_directory(
    TEST_DIRECTORY_PATH,
    target_size=(256, 256),
    shuffle=False,  # in order to get the image path of the same image
    batch_size=BATCH_SIZE
)
And to check your images and the file paths returned:
image, file_path = next(test_generator)
# print(file_path)
# plt.imshow(image)
I needed exactly this and I developed a simple function that works with shuffle=True or shuffle=False.
def get_indices_from_keras_generator(gen, batch_size):
    """
    Given a Keras data generator, return the indices and the filenames
    corresponding to the current batch.
    :param gen: keras generator.
    :param batch_size: size of the last batch generated.
    :return: tuple with indices and filenames
    """
    idx_left = (gen.batch_index - 1) * batch_size
    idx_right = idx_left + gen.batch_size if idx_left >= 0 else None
    indices = gen.index_array[idx_left:idx_right]
    filenames = [gen.filenames[i] for i in indices]
    return indices, filenames
Then, you would use it as follows (passing the size of the current batch as the second argument):
for x, y in gen:
    indices, filenames = get_indices_from_keras_generator(gen, len(x))
Related
I am training a model using custom generators, but just before finishing the first epoch, the model runs out of data. It gives me the following error:
Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least (steps_per_epoch * epochs) batches (in this case, 8740 batches). You may need to use the repeat() function when building your dataset
I have four generators (one for the train data and another for the train labels; the same for validation). I then zip train and label together. This is the prototype of my generators; I got the idea from here:
import numpy as np
import nibabel as nib
from tensorflow import keras
import os

def weirddivision(n, d):
    return np.array(n) / np.array(d) if d else 0

class ImgDataGenerator(keras.utils.Sequence):
    def __init__(self, file_list, batch_size=8, shuffle=True):
        """Constructor can be expanded,
        with batch size, dimensions etc.
        """
        self.file_list = file_list
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Take all batches in each iteration'
        return int(np.floor(len(self.file_list) / self.batch_size))

    def __getitem__(self, index):
        'Get next batch'
        # Generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # single file
        file_list_temp = [self.file_list[k] for k in indexes]
        # Set of X_train and y_train
        X = self.__data_generation(file_list_temp)
        return X

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.file_list))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, file_list_temp):
        'Generates data containing batch_size samples'
        train_loc = '/home/faruk/Desktop/BrainSeg/Dataset/Train/'
        X = np.empty((self.batch_size, 224, 256, 1))
        # Generate data
        for i, ID in enumerate(file_list_temp):
            x_file_path = os.path.join(train_loc, ID)
            img = np.load(x_file_path)
            img = np.pad(img, pad_width=((14, 13), (12, 11)), mode='constant')
            img = np.expand_dims(img, -1)
            img = weirddivision(img, img.max())
            # Store sample
            X[i,] = img
        return X
As mentioned, here I create four generators and zip them:
training_img_generator = ImgDataGenerator(train)
training_label_generator = LabelDataGenerator(train)
train_generator = zip(training_img_generator,training_label_generator)
val_img_generator = ValDataGenerator(val)
val_label_generator = ValLabelDataGenerator(val)
val_generator = zip(val_img_generator,val_label_generator)
Because the generator is generating data dynamically, I thought that maybe it was trying to generate more than what is actually available. Hence, I calculated the steps per epoch as follows and passed it to fit_generator:
batch_size = 8
spe = len(train)//batch_size # len(train) = 34965
val_spe = len(val)//batch_size # len(val) = 4347
History=model.fit_generator(generator=train_generator, validation_data=val_generator, epochs=2, steps_per_epoch=spe, validation_steps = val_spe, shuffle=True, verbose=1)
But still, this is not working. I have tried reducing the number of steps per epoch, and I am able to finish the first epoch, but the error then appears at the beginning of the second epoch. Apparently the generator needs to be repeated infinitely, but I don't know how to achieve this. Can I use an infinite while loop? If yes, where?
Try this:
train_generator = train_generator.repeat()
val_generator = val_generator.repeat()
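Note that repeat() is a method of tf.data.Dataset; a plain Python zip object has no such method. If train_generator is the zip of two Sequence objects as above, a hedged alternative (and an answer to "can I use an infinite while loop?") is to wrap the pair in an explicitly infinite generator:

def infinite_zip(img_seq, label_seq):
    """Cycle through two Sequence objects forever, batch by batch."""
    while True:  # restart every epoch so fit_generator never runs dry
        for x, y in zip(img_seq, label_seq):
            yield x, y

train_generator = infinite_zip(training_img_generator, training_label_generator)
val_generator = infinite_zip(val_img_generator, val_label_generator)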
I solved this. I was defining my generator class as follows:
class ImgDataGenerator(keras.utils.Sequence)
However, my model was not sequential... it was functional. I solved this by creating my own custom generator without inheriting from keras.utils.Sequence.
I hope this is helpful to someone.
I am working on image binarization using UNet and have a dataset of 150 images and their binarized versions too. My idea is to augment the images randomly to make them look like they are different, so I have made a function which inserts any of 4-5 types of noise, skewness, shearing and so on into an image. I could have easily used
ImageDataGenerator(preprocessing_function=my_aug_function) to augment the images, but the problem is that my y target is also an image. Also, I could have used something like:
train_dataset = (
    train_dataset.map(
        encode_single_sample, num_parallel_calls=tf.data.experimental.AUTOTUNE
    )
    .batch(batch_size)
    .prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
)
But it has 2 problems:
With a larger dataset, it'll blow up the memory, as the data needs to already be in memory.
This is the crucial part: I need to augment the images on the go to make it look like I have a huge dataset.
Another solution could be saving the augmented images to a directory, making them 30-40K, and then loading them. That would be a silly thing to do.
Now the idea is that I can use Sequence as the parent class, but how can I keep augmenting and generating new images on the fly, with the respective binarized y images?
I have an idea, as in the code below. Can somebody help me with the augmentation and generation of the y images? I have my X_DIR and Y_DIR, where the image names for the binarized and original images are the same but stored in different directories.
class DataGenerator(tensorflow.keras.utils.Sequence):
    def __init__(self, files_path, labels_path, batch_size=32, shuffle=True, random_state=42):
        'Initialization'
        self.files = files_path
        self.labels = labels_path
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.random_state = random_state
        self.on_epoch_end()

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        # Shuffle the data here
        pass

    def __len__(self):
        return int(np.floor(len(self.files) / self.batch_size))

    def __getitem__(self, index):
        # What do I do here?
        pass

    def __data_generation(self, files):
        # I think this is responsible for augmentation, but I have no idea how I should implement it or how it works.
        pass
Custom Image Data Generator
Load directory data into a dataframe for the CustomDataGenerator:
import os
import pandas as pd

def data_to_df(data_dir, subset=None, validation_split=None):
    df = pd.DataFrame()
    filenames = []
    labels = []

    for dataset in os.listdir(data_dir):
        img_list = os.listdir(os.path.join(data_dir, dataset))
        label = name_to_idx[dataset]  # name_to_idx: class-name-to-index mapping, defined elsewhere

        for image in img_list:
            filenames.append(os.path.join(data_dir, dataset, image))
            labels.append(label)

    df["filenames"] = filenames
    df["labels"] = labels

    if subset == "train":
        split_indexes = int(len(df) * validation_split)
        train_df = df[split_indexes:]
        val_df = df[:split_indexes]
        return train_df, val_df

    return df

train_df, val_df = data_to_df(train_dir, subset="train", validation_split=0.2)
Custom Data Generator
import math
import numpy as np
import tensorflow as tf
from PIL import Image
from sklearn.utils import shuffle  # assumed source of the shuffle() used below
from tensorflow.keras.applications.resnet50 import preprocess_input  # assumed; swap in your model's preprocessing

class CustomDataGenerator(tf.keras.utils.Sequence):
    ''' Custom DataGenerator to load img

    Arguments:
        data_frame = pandas data frame in filenames and labels format
        batch_size = divide data in batches
        shuffle = shuffle data before loading
        img_shape = image shape in (h, w, d) format
        augmentation = data augmentation to make model robust to overfitting

    Output:
        Img: numpy array of image
        label : output label for image
    '''
    def __init__(self, data_frame, batch_size=10, img_shape=None, augmentation=True, num_classes=None):
        self.data_frame = data_frame
        self.train_len = len(data_frame)
        self.batch_size = batch_size
        self.img_shape = img_shape
        self.num_classes = num_classes
        print(f"Found {self.data_frame.shape[0]} images belonging to {self.num_classes} classes")

    def __len__(self):
        ''' return total number of batches '''
        self.data_frame = shuffle(self.data_frame)
        return math.ceil(self.train_len / self.batch_size)

    def on_epoch_end(self):
        ''' shuffle data after every epoch '''
        # fix: on_epoch_end is not working, so shuffling in __len__ as an alternative
        pass

    def __data_augmentation(self, img):
        ''' function to apply some data augmentation '''
        img = tf.keras.preprocessing.image.random_shift(img, 0.2, 0.3)
        img = tf.image.random_flip_left_right(img)
        img = tf.image.random_flip_up_down(img)
        return img

    def __get_image(self, file_id):
        """ open image with file_id path and apply data augmentation """
        img = np.asarray(Image.open(file_id))
        img = np.resize(img, self.img_shape)
        img = self.__data_augmentation(img)
        img = preprocess_input(img)
        return img

    def __get_label(self, label_id):
        """ uncomment the line below to convert the label into categorical format """
        # label_id = tf.keras.utils.to_categorical(label_id, num_classes)
        return label_id

    def __getitem__(self, idx):
        batch_x = self.data_frame["filenames"][idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.data_frame["labels"][idx * self.batch_size:(idx + 1) * self.batch_size]
        # read your data here using the batch lists, batch_x and batch_y
        x = [self.__get_image(file_id) for file_id in batch_x]
        y = [self.__get_label(label_id) for label_id in batch_y]
        return tf.convert_to_tensor(x), tf.convert_to_tensor(y)
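A hedged usage sketch, assuming the train_df and val_df produced by data_to_df above and an already compiled model; the batch size, image shape and class count are assumptions:

train_gen = CustomDataGenerator(train_df, batch_size=32, img_shape=(224, 224, 3), num_classes=2)
val_gen = CustomDataGenerator(val_df, batch_size=32, img_shape=(224, 224, 3), num_classes=2)

model.fit(train_gen, validation_data=val_gen, epochs=10)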
You can use libraries like albumentations and imgaug; both are good, but I have heard there are issues with random seeds in albumentations.
Here's an example with imgaug, taken from the documentation here:
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.Dropout([0.05, 0.2]),      # drop 5% or 20% of all pixels
    iaa.Sharpen((0.0, 1.0)),       # sharpen the image
    iaa.Affine(rotate=(-45, 45)),  # rotate by -45 to 45 degrees (affects segmaps)
    iaa.ElasticTransformation(alpha=50, sigma=5)  # apply water effect (affects segmaps)
], random_order=True)

# Augment images and segmaps.
images_aug = []
segmaps_aug = []
for _ in range(len(input_data)):
    images_aug_i, segmaps_aug_i = seq(image=image, segmentation_maps=segmap)
    images_aug.append(images_aug_i)
    segmaps_aug.append(segmaps_aug_i)
You are going in the right direction with the custom generator. In __getitem__, make a batch using batch_x = self.files[index:index+batch_size] and the same with batch_y, then augment them using X, y = __data_generation(batch_x, batch_y), which will load the images (using any library you like; I prefer OpenCV) and return the augmented pairs (plus any other manipulation).
Your __getitem__ will then return the tuple (X, y).
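A minimal sketch of such a __getitem__, assuming the X_DIR/Y_DIR layout from the question (identical filenames in both directories), the imgaug seq defined in the earlier answer, and OpenCV for reading; everything beyond the skeleton above is an assumption:

import os
import cv2
import numpy as np
import tensorflow as tf
from imgaug.augmentables.segmaps import SegmentationMapsOnImage

class PairedDataGenerator(tf.keras.utils.Sequence):
    # __init__, __len__ and on_epoch_end as in the skeleton above

    def __getitem__(self, index):
        batch_files = self.files[index * self.batch_size:(index + 1) * self.batch_size]
        X, y = [], []
        for name in batch_files:
            img = cv2.imread(os.path.join(X_DIR, name))
            mask = cv2.imread(os.path.join(Y_DIR, name), cv2.IMREAD_GRAYSCALE)
            # Wrapping the mask makes imgaug apply the same geometric
            # transform to the image and its binarized target
            segmap = SegmentationMapsOnImage(mask, shape=img.shape)
            img_aug, segmap_aug = seq(image=img, segmentation_maps=segmap)
            X.append(img_aug / 255.0)
            y.append(segmap_aug.get_arr() / 255.0)
        return np.array(X), np.array(y)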
You can use ImageDataGenerator even if your label is an image.
Here is a simple example of how you can do that:
Code:
# Specifying your data augmentation here for both image and label
image_datagen = tf.keras.preprocessing.image.ImageDataGenerator()
mask_datagen = tf.keras.preprocessing.image.ImageDataGenerator()

# Provide the same seed and keyword arguments to the flow methods
seed = 1
image_generator = image_datagen.flow_from_directory(
    data_dir,
    class_mode=None,
    seed=seed)
mask_generator = mask_datagen.flow_from_directory(
    data_dir,
    class_mode=None,
    seed=seed)

# Combine the image and label generator.
train_generator = zip(image_generator, mask_generator)
Now, if you iterate over it you will get:
for image, label in train_generator:
    print(image.shape, label.shape)
    break
Output:
(32, 256, 256, 3) (32, 256, 256, 3)
You can use this train_generator with the fit_generator() command.
Code:
model.fit_generator(
    train_generator,
    steps_per_epoch=2000,
    epochs=50)
With flow_from_directory your memory won't be cluttered, and ImageDataGenerator will take care of the augmentation part.
I am building a model with multiple inputs as shown in pyimagesearch; however, I can't load all the images into RAM, so I am trying to create a generator that uses flow_from_directory and gets the extra attributes for each processed image from a CSV file.
Question: How do I get the attributes from the CSV to correspond with the images in each batch from the image generator?
def get_combined_generator(images_dir, csv_dir, split, *args):
    """
    Creates train/val generators on images and csv data.

    Arguments:
        images_dir : string
            Path to a directory with subdirectories for each class.
        csv_dir : string
            Path to a directory containing train/val csv files with extra attributes.
        split : string
            Current split being used (train, val or test)
    """
    img_width, img_height, batch_size = args

    datagen = ImageDataGenerator(
        rescale=1. / 255)

    generator = datagen.flow_from_directory(
        f'{images_dir}/{split}',
        target_size=(img_width, img_height),
        batch_size=batch_size,
        shuffle=True,
        class_mode='categorical')

    df = pd.read_csv(f'{csv_dir}/{split}.csv', index_col='image')

    def my_generator(image_gen, data):
        while True:
            i = image_gen.batch_index
            batch = image_gen.batch_size
            row = data[i * batch:(i + 1) * batch]
            images, labels = image_gen.next()
            yield [images, row], labels

    csv_generator = my_generator(generator, df)

    return csv_generator
I found a solution based on Luke's answer, using a custom generator:
import os
import random
import numpy as np
import pandas as pd
from glob import glob
from keras.preprocessing import image as krs_image
from keras.preprocessing.image import ImageDataGenerator

# Create the arguments for image preprocessing
data_gen_args = dict(
    horizontal_flip=True,
    brightness_range=[0.5, 1.5],
    shear_range=10,
    channel_shift_range=50,
    rescale=1. / 255,
)
# Create a data generator configured with those arguments
datagen = ImageDataGenerator(**data_gen_args)

# Read the image list and csv
image_file_list = glob(f'{images_dir}/{split}/**/*.JPG', recursive=True)
df = pd.read_csv(f'{csv_dir}/{split}.csv', index_col=csv_data[0])
random.shuffle(image_file_list)

def custom_generator(images_list, dataframe, batch_size):
    i = 0
    while True:
        batch = {'images': [], 'csv': [], 'labels': []}
        for b in range(batch_size):
            if i == len(images_list):
                i = 0
                random.shuffle(images_list)
            # Read image from list and convert to array
            image_path = images_list[i]
            image_name = os.path.basename(image_path).replace('.JPG', '')
            image = krs_image.load_img(image_path, target_size=(img_height, img_width))
            image = krs_image.img_to_array(image)
            image = datagen.random_transform(image)  # random augmentation from data_gen_args
            image = datagen.standardize(image)       # applies the rescale factor
            # Read data from csv using the name of current image
            csv_row = dataframe.loc[image_name, :]
            label = csv_row['class']
            csv_features = csv_row.drop(labels='class')

            batch['images'].append(image)
            batch['csv'].append(csv_features)
            batch['labels'].append(label)
            i += 1

        batch['images'] = np.array(batch['images'])
        batch['csv'] = np.array(batch['csv'])
        # Convert labels to categorical values
        batch['labels'] = np.eye(num_classes)[batch['labels']]

        yield [batch['images'], batch['csv']], batch['labels']
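A hedged usage sketch; the compiled multi-input model, img_height/img_width, num_classes and the batch size are assumptions:

batch_size = 32
train_gen = custom_generator(image_file_list, df, batch_size)

model.fit_generator(train_gen,
                    steps_per_epoch=len(image_file_list) // batch_size,
                    epochs=10)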
I would suggest creating a custom generator given this relatively specific case. Something like the following (modified from a similar answer here) should suffice:
import os
import random
import cv2
import numpy as np
import pandas as pd

def generator(image_dir, csv_dir, batch_size):
    i = 0
    image_file_list = os.listdir(image_dir)
    while True:
        batch_x = {'images': list(), 'other_feats': list()}  # use a dict for multiple inputs
        batch_y = list()
        for b in range(batch_size):
            if i == len(image_file_list):
                i = 0
                random.shuffle(image_file_list)
            sample = image_file_list[i]
            image_file_path = os.path.join(image_dir, sample)
            csv_file_path = os.path.join(csv_dir,
                                         os.path.basename(image_file_path).replace('.png', '.csv'))
            i += 1
            image = preprocess_image(cv2.imread(image_file_path))
            csv_file = pd.read_csv(csv_file_path)
            other_feat = preprocess_feats(csv_file)
            batch_x['images'].append(image)
            batch_x['other_feats'].append(other_feat)
            batch_y.append(csv_file['class'].values[0])  # assumes the per-image csv stores the label in a 'class' column
        batch_x['images'] = np.array(batch_x['images'])  # convert each list to array
        batch_x['other_feats'] = np.array(batch_x['other_feats'])
        batch_y = np.eye(num_classes)[batch_y]
        yield batch_x, batch_y
Then, you can use Keras's fit_generator() function to train your model.
Obviously, this assumes you have CSV files with the same names as your image files, and that you have some custom preprocessing functions (preprocess_image and preprocess_feats above) for the images and CSV files.
My scenario is that we have multiple peers with their own data, located in different directories with the same sub-directory structure. I want to train the model using those data, but if I copy all of them to one folder, I can't keep track of which data is from whom (new data is also created occasionally, so it's not practical to keep copying the files every time).
My data is now stored like this:
-user01
-user02
-user03
...
(all of them have similar sub-directory structure)
I have searched for a solution, but I have only found the multi-input case here and here, where multiple inputs are concatenated into a single parallel input, which is not my case.
I know that flow_from_directory() can only be fed one directory at a time, so how can I build a custom generator that can be fed multiple directories at a time?
If my question is low-quality, please give advice on how to improve it; I have also searched the Keras GitHub but didn't find anything I could adapt.
Thank you.
The Keras ImageDataGenerator flow_from_directory method has a follow_links parameter.
Maybe you can create one directory which is populated with symlinks to files in all the other directories.
This Stack Overflow question discusses using symlinks with the Keras ImageDataGenerator: Understanding 'follow_links' argument in Keras's ImageDataGenerator?
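A hedged sketch of building such a symlink tree, assuming the user01/user02/... layout from the question with class-named sub-directories; the directory names are assumptions:

import os

user_dirs = ['user01', 'user02', 'user03']  # peer data roots
merged_root = 'merged'                      # directory to feed flow_from_directory

for user in user_dirs:
    for class_name in os.listdir(user):
        class_dir = os.path.join(merged_root, class_name)
        os.makedirs(class_dir, exist_ok=True)
        for fname in os.listdir(os.path.join(user, class_name)):
            src = os.path.abspath(os.path.join(user, class_name, fname))
            # Prefix with the user name so the origin of each file stays visible
            dst = os.path.join(class_dir, f'{user}_{fname}')
            if not os.path.exists(dst):
                os.symlink(src, dst)

# Then: datagen.flow_from_directory('merged', follow_links=True, ...)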
After so many days I hope you have found a solution to the problem, but I will share another idea here so that new people like me, who face the same problem in the future, get help.
A few days ago I had this kind of problem. follow_links will be a solution to your question, as user3731622 said. Also, I think the idea of merging two data generators will work. However, in that case, the batch size of each sub-generator has to be determined in proportion to the amount of data in its directory.
Batch size of each sub-generator:
b = (B × n) / N
where
b = batch size of a sub-generator
B = desired batch size of the merged generator
n = number of images in that sub-generator's directory
N = the sum of n, i.e. the total number of images in all directories
For example, with B = 32, n1 = 300 and n2 = 100, the sub-batch sizes come out as b1 = 24 and b2 = 8.
See the code below, this may help:
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import Sequence
import matplotlib.pyplot as plt
import numpy as np
import os

class MergedGenerators(Sequence):

    def __init__(self, batch_size, generators=[], sub_batch_size=[]):
        self.generators = generators
        self.sub_batch_size = sub_batch_size
        self.batch_size = batch_size

    def __len__(self):
        return int(
            sum([(len(self.generators[idx]) * self.sub_batch_size[idx])
                 for idx in range(len(self.sub_batch_size))]) /
            self.batch_size)

    def __getitem__(self, index):
        """Getting items from the generators and packing them"""

        X_batch = []
        Y_batch = []
        for generator in self.generators:
            if generator.class_mode is None:
                x1 = generator[index % len(generator)]
                X_batch = [*X_batch, *x1]
            else:
                x1, y1 = generator[index % len(generator)]
                X_batch = [*X_batch, *x1]
                Y_batch = [*Y_batch, *y1]

        if self.generators[0].class_mode is None:
            return np.array(X_batch)

        return np.array(X_batch), np.array(Y_batch)
def build_datagenerator(dir1=None, dir2=None, batch_size=32):
    n_images_in_dir1 = sum([len(files) for r, d, files in os.walk(dir1)])
    n_images_in_dir2 = sum([len(files) for r, d, files in os.walk(dir2)])

    # Have to set different batch sizes for the two generators, as the numbers
    # of images in the two directories are not the same; we have to equalize
    # each generator's share of the merged batch.
    generator1_batch_size = int((n_images_in_dir1 * batch_size) /
                                (n_images_in_dir1 + n_images_in_dir2))
    generator2_batch_size = batch_size - generator1_batch_size

    generator1 = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.2,
        zoom_range=0.2,
        rotation_range=5.,
        horizontal_flip=True,
    )

    generator2 = ImageDataGenerator(
        rescale=1. / 255,
        zoom_range=0.2,
        horizontal_flip=False,
    )

    # generator2 has different image augmentation attributes than generator1
    generator1 = generator1.flow_from_directory(
        dir1,
        target_size=(128, 128),
        color_mode='rgb',
        class_mode=None,
        batch_size=generator1_batch_size,
        shuffle=True,
        seed=42,
        interpolation="bicubic",
    )

    generator2 = generator2.flow_from_directory(
        dir2,
        target_size=(128, 128),
        color_mode='rgb',
        class_mode=None,
        batch_size=generator2_batch_size,
        shuffle=True,
        seed=42,
        interpolation="bicubic",
    )

    return MergedGenerators(
        batch_size,
        generators=[generator1, generator2],
        sub_batch_size=[generator1_batch_size, generator2_batch_size])
def test_datagen(batch_size=32):
    datagen = build_datagenerator(dir1="./asdf",
                                  dir2="./asdf2",
                                  batch_size=batch_size)

    print("Datagenerator length (Batch count):", len(datagen))

    for batch_count, image_batch in enumerate(datagen):
        if batch_count == 1:
            break

        print("Images: ", image_batch.shape)

        plt.figure(figsize=(10, 10))
        for i in range(image_batch.shape[0]):
            plt.subplot(1, batch_size, i + 1)
            plt.imshow(image_batch[i], interpolation='nearest')
            plt.axis('off')
            plt.tight_layout()

test_datagen(4)
We can generate an image dataset using ImageDataGenerator with the flow_from_directory method. To get the list of classes, we can use object.classes. But how do I get the list of values (the numeric image data)? I've searched and still haven't found anything.
Thanks :)
The ImageDataGenerator is a Python generator: each time, it yields a batch of data whose shape matches your model inputs (like (batch_size, width, height, channels)). The benefit of a generator is that when your data set is too big to fit into your limited memory, it can generate one batch at a time. The ImageDataGenerator works with model.fit_generator() and model.predict_generator().
If you want to get the numeric data, you can use the next() function of the generator:
import numpy as np

datagen = ImageDataGenerator(rescale=1. / 255)

data_generator = datagen.flow_from_directory(
    data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

data_list = []
batch_index = 0

while batch_index <= data_generator.batch_index:
    data = data_generator.next()
    data_list.append(data[0])
    batch_index = batch_index + 1

# now, data_list holds the numeric data of all images, one batch per entry;
# concatenate the batches into a single array
data_array = np.concatenate(data_list)
Alternatively, you can use PIL and NumPy to process the images yourself:
from PIL import Image
import numpy as np

def image_to_array(file_path):
    img = Image.open(file_path)
    img = img.resize((img_width, img_height))
    data = np.asarray(img, dtype='float32')
    return data
# now data is a tensor with shape (width, height, channels) of a single image
Then, you can loop over all your images with this function to get the numeric data.
Note: I recommend using a generator instead of loading all the data directly, or you might run out of memory.
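A sketch of that loop, assuming a flat directory of images; the directory path and extension filter are assumptions:

import os

image_dir = 'data/images'  # hypothetical directory of images
data = np.array([image_to_array(os.path.join(image_dir, f))
                 for f in os.listdir(image_dir)
                 if f.lower().endswith(('.png', '.jpg', '.jpeg'))])
print(data.shape)  # (num_images, width, height, channels)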
'But, how to call list of values' - if I understood correctly, I guess you want to know which files are in your data set. If that's correct (or even if not), there are various ways you can get values from your generator:
Use object.filenames.
object.filenames returns the list of all files in your target folder. I just use the len(object.filenames) function to get the total number of files in my test folder, then pass that number back into my generator and run it again.
generator.n
Another way to get the number of all items in your test folder is generator.n.
x, y = test_generator.next() to load my array and classes (if inferred).
Or a = test_generator.next(), where your array and classes will be returned as a tuple.
I only used this because my test data set was really small (60 images) and I was using extracted features to train and predict with my model (that is, a feature array, not the image array).
If you are building a normal model, using a generator to yield batches is a much better way.
Create a function using the generator:
def generate_test_data_from_directory(folder_path, image_target_size=224, batch_size=5, channels=3, class_mode='sparse'):
    '''fetch all our test data from the directory'''
    test_datagen = ImageDataGenerator(rescale=1. / 255)
    test_generator = test_datagen.flow_from_directory(
        folder_path,
        target_size=(image_target_size, image_target_size),
        batch_size=batch_size,
        class_mode=class_mode)

    total_images = test_generator.n
    steps = total_images // batch_size
    # iterations to cover all data, so if batch is 5, it will take total_images/5 iterations

    x, y = [], []
    for i in range(steps):
        a, b = test_generator.next()
        x.extend(a)
        y.extend(b)
    return np.array(x), np.array(y)
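A hedged usage sketch; the test directory path is an assumption:

x_test, y_test = generate_test_data_from_directory('./test', image_target_size=224, batch_size=5)
print(x_test.shape, y_test.shape)  # e.g. (60, 224, 224, 3) and (60,)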