The accuracy of fit_generator and fit is different - python

Edit: the code has been changed from fit_generator to fit.
The dataset contains 12,507 images in total: 6,840 True and 7,056 False.
The dataset configuration and the model are the same in both cases.
Model A:
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), activation='relu', padding='same', input_shape=(192, 112, 1)))
model.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
...
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.summary()
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.categorical_crossentropy, metrics=['accuracy'])
history = model.fit(train_X, train_Y, epochs=15, batch_size=64, validation_split=0.2, verbose=2)
The accuracy when using fit is close to 100%.
Model B:
train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_gen = train_datagen.flow_from_directory(
    TRAIN_PATH,
    target_size=(192, 112),
    classes=['true', 'false'],
    class_mode='categorical',
    batch_size=64,
    color_mode='grayscale',
    shuffle=True)
val_gen = val_datagen.flow_from_directory(
    VAL_PATH,
    target_size=(192, 112),
    classes=['true', 'false'],
    class_mode='categorical',
    batch_size=64,
    color_mode='grayscale',
    shuffle=False)
test_gen = val_datagen.flow_from_directory(
    VAL_PATH,
    target_size=(192, 112),
    classes=['true', 'false'],
    class_mode='categorical',
    batch_size=64,
    color_mode='grayscale',
    shuffle=False)
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), activation='relu', padding='same', input_shape=(192, 112, 1)))
model.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
...
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=['accuracy'])
model.summary()
history = model.fit(
    train_gen,
    validation_data=val_gen,
    epochs=15,
    steps_per_epoch=len(train_gen)//64,  # 64 is the batch_size
    validation_steps=len(val_gen)//64,
    verbose=2)
model.evaluate(test_gen,
               batch_size=64,
               verbose=2)
In this case, the accuracy is close to 50%.
Aren't Model A and Model B doing the same thing? Why do they produce such different accuracies?
Edit:
Here is how Model A loads its data:
true_Data_list = np.array(os.listdir(TRUE_DIR))
false_Data_list = np.array(os.listdir(FALSE_DIR))
# -------------------------------- Load True Set ----------------------------------------- #
for index in range(len(true_Data_list)):  # build the list of image files
    path_true = os.path.join(TRUE_DIR, true_Data_list[index])
    image_true = ImageOps.grayscale(Image.open(path_true))  # True image
    image_true = np.reshape(np.asarray(image_true), (192, 112, 1)).astype(np.float32)
    data_X.append([np.array(image_true)])
    data_Y.append([1, 0])
The False set is loaded in the same way.
Then I reshape and split:
data_X = np.reshape(data_X, (-1, 192, 112, 1)).astype(np.float32)
data_Y = np.reshape(data_Y, (-1, 2)).astype(np.int8)
train_X, test_X, train_Y, test_Y = train_test_split(data_X, data_Y, test_size=0.25, shuffle=True, random_state=625)
In the case of Model B:
TRAIN_PATH = 'dataset/train'
VAL_PATH = 'dataset/val'
TEST_PATH = 'dataset/test'
These paths are then passed to the generators, e.g.
train_gen = train_datagen.flow_from_directory(TRAIN_PATH, ...
and each path contains true and false subfolders.
[Screenshot: the verbose=2 training output for one epoch]

fit_generator is deprecated, although the two methods should give roughly the same results. I think you have a typo:
train_batch_size = len(train_X) // 64
test_batch_size = len(test_X) // 64
These values are meant to be steps_per_epoch, but you pass them as batch_size when fitting. I am not sure whether you augmented the data in both cases, but the first approach uses a large batch size, so the number of data points seen per epoch differs between the two runs. The second approach seems more reliable; you can use fit() with generators as well.
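For reference, here is a minimal sketch of the corrected call (my illustration, assuming the generators defined in the question): len(train_gen) already equals the number of batches per epoch, so it should not be divided by the batch size again.
history = model.fit(
    train_gen,
    validation_data=val_gen,
    epochs=15,
    steps_per_epoch=len(train_gen),   # number of batches, not images
    validation_steps=len(val_gen),
    verbose=2)
With flow_from_directory generators you can also omit steps_per_epoch and validation_steps entirely, and Keras will infer them from the generator's length.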

Related

Using my own Images for TripletSemiHardLoss from Keras addons

My image folder is set up as one main folder containing 130 separate folders, each with its own images:
folder_with_130_folders-
    folder1_class1-
        img_in_class1_folder.jpg
        img_in_class1_folder.jpg
    ...
    folder130_class130-
        img_in_class130_folder.jpg
        img_in_class130_folder.jpg
train_dataset = prod_images.flow_from_directory(directory, target_size=(225, 225), class_mode='categorical', subset='training', save_format='jpg')
validation_set = prod_images.flow_from_directory(directory, target_size=(225, 225), class_mode='categorical', subset='validation', save_format='jpg')
(x_train, y_train), (x_test, y_test) = train_dataset.next(), validation_set.next()
model = models.Sequential()
model.add(layers.Conv2D(filters=128, kernel_size=2, padding='same', activation='relu', input_shape=(225, 225, 3)))
model.add(layers.MaxPooling2D(pool_size=2))
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=2))
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=2))
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(filters=16, kernel_size=2, padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=2))
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(256, activation=None)) # No activation on final dense layer
model.add(layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1))) # L2 normalize embeddings
model.summary()
model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate=0.001), loss=tfa.losses.TripletSemiHardLoss())
model_fit = model.fit(train_dataset, steps_per_epoch=4, epochs=20, verbose=1, validation_data=validation_set)
As stated in the docs regarding the tfa.losses.TripletSemiHardLoss:
We expect labels y_true to be provided as 1-D integer Tensor with
shape [batch_size] of multi-class integer labels. And embeddings
y_pred must be 2-D float Tensor of l2 normalized embedding vectors
You should, therefore, use sparse integer labels (sparse_categorical) instead of one-hot encoded labels (categorical). Here is a working example:
import tensorflow as tf
import tensorflow_addons as tfa
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
batch_size = 32
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    label_mode='int',  # sparse categorical
    subset="training",
    seed=123,
    image_size=(225, 225),
    batch_size=batch_size)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=2, padding='same', activation='relu', input_shape=(225, 225, 3)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=2, padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation=None)) # No activation on final dense layer
model.add(tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=-1))) # L2 normalize embeddings
model.summary()
model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate=0.001), loss=tfa.losses.TripletSemiHardLoss())
model_fit = model.fit(train_ds, epochs=5, verbose=1)
In your case you have to set the parameter class_mode to sparse:
flow_from_directory(directory, target_size=(225, 225), class_mode='sparse', subset='training', save_format='jpg')
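Applied to the generators in the question, that might look like the following sketch; it assumes prod_images is an ImageDataGenerator created with validation_split, which the subset arguments require:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Assumed setup: validation_split must be set for subset='training'/'validation'.
prod_images = ImageDataGenerator(rescale=1./255, validation_split=0.2)
train_dataset = prod_images.flow_from_directory(
    directory, target_size=(225, 225),
    class_mode='sparse', subset='training')     # yields integer labels, shape (batch_size,)
validation_set = prod_images.flow_from_directory(
    directory, target_size=(225, 225),
    class_mode='sparse', subset='validation')
With class_mode='sparse' each batch's labels are the 1-D integer tensor that TripletSemiHardLoss expects.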

Image classification- Why am I getting vastly different results training on Tensorflow vs Pytorch?

I am training an image classifier for a robot that detects "blocked" vs "free" paths. On the same data, PyTorch gives >0.98 accuracy on validation data, whereas TensorFlow only gives around 0.50-0.60 accuracy, with a mode of 52.17%.
PyTorch uses a pre-trained AlexNet implementation, for which there is no counterpart in TensorFlow. I tried my best to mirror the implementation in TensorFlow, as you can see below. I have also tried other CNN models, including ResNet and InceptionV3, but all give me roughly the same validation accuracy in TensorFlow.
Other information:
Input images are (224, 224, 3)
Both implementations point to same training and validation directories
I've tested both models with 4 images they had not seen before; the PyTorch model guessed correctly, whereas the TensorFlow model did not.
Pytorch Code:
train_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=8,
    shuffle=True,
    num_workers=0
)
test_loader = torch.utils.data.DataLoader(
    test_dataset,
    batch_size=8,
    shuffle=True,
    num_workers=0
)
model = models.alexnet(pretrained=True)
model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 2)
device = torch.device('cuda')
model = model.to(device)
NUM_EPOCHS = 30
BEST_MODEL_PATH = 'best_model.pth'
best_accuracy = 0.0
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
for epoch in range(NUM_EPOCHS):
    for images, labels in iter(train_loader):
        images = images.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = F.cross_entropy(outputs, labels)
        loss.backward()
        optimizer.step()
    test_error_count = 0.0
    for images, labels in iter(test_loader):
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        test_error_count += float(torch.sum(torch.abs(labels - outputs.argmax(1))))
    test_accuracy = 1.0 - float(test_error_count) / float(len(test_dataset))
    print('%d: %f' % (epoch, test_accuracy))
    if test_accuracy > best_accuracy:
        torch.save(model.state_dict(), BEST_MODEL_PATH)
        best_accuracy = test_accuracy
Tensorflow code:
model = keras.models.Sequential([
    keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(4, 4), activation='relu',
                        input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3)),
    keras.layers.BatchNormalization(),
    keras.layers.MaxPool2D(pool_size=(3, 3), strides=(2, 2)),
    keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1), activation='relu', padding="same"),
    keras.layers.MaxPool2D(pool_size=(3, 3), strides=(2, 2)),
    keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), activation='relu', padding="same"),
    keras.layers.Conv2D(filters=256, kernel_size=(1, 1), strides=(1, 1), activation='relu', padding="same"),
    keras.layers.Conv2D(filters=256, kernel_size=(1, 1), strides=(1, 1), activation='relu', padding="same"),
    keras.layers.MaxPool2D(pool_size=(3, 3), strides=(2, 2)),
    keras.layers.Dropout(0.5),
    keras.layers.Flatten(),
    keras.layers.Dense(4096, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(4096, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    samplewise_std_normalization=True,
    # rotation_range=20,
    # width_shift_range=0.2,
    # height_shift_range=0.1,
    brightness_range=(0.0, 0.5),
    # shear_range=0.2,
    zoom_range=0.3,
    fill_mode='nearest'
)
val_gen = ImageDataGenerator(rescale=1.0 / 255)
train_data = train_gen.flow_from_directory(
    train_path,
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    class_mode='binary',
    color_mode='rgb',
    batch_size=8,
    shuffle=True)
val_data = val_gen.flow_from_directory(
    val_path,
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    class_mode='binary',
    color_mode='rgb',
    batch_size=8,
    shuffle=True)
optimizer = tf.keras.optimizers.SGD(lr=0.001, momentum=0.9)
loss = tf.keras.losses.binary_crossentropy
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
num_epochs = 40
save_path = 'F:\\Documents\\Jetbot\\collision_avoidance_tf\\model_save'
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(save_path, monitor='val_loss', save_best_only=True)
history = model.fit(train_data, epochs=num_epochs, validation_data=val_data, verbose=1,
                    callbacks=[model_checkpoint])
Any help is greatly appreciated. Thanks!

Keras model overfitting

I'm working on a multi-class image classification problem in Keras, using the dog-breeds dataset on Kaggle. My training accuracy for 12 breeds is 95%, yet my validation accuracy is only 50%. It looks like the model is overfitting, but I'm not sure what I would need to do to prevent it.
Here's my basic training setup
from keras.utils.np_utils import to_categorical
from keras.layers import Conv2D, Activation, MaxPooling2D
from keras import optimizers
from keras.layers.normalization import BatchNormalization
img_width, img_height = 224, 224
datagen_top = ImageDataGenerator(
    rotation_range=180,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
generator_top = datagen_top.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False)
nb_train_samples = len(generator_top.filenames)
num_classes = len(generator_top.class_indices)
train_data = bottleneck_features_train
# get the class labels for the training data, in the original order
train_labels = generator_top.classes
# https://github.com/fchollet/keras/issues/3467
# convert the training labels to categorical vectors
train_labels = to_categorical(train_labels, num_classes=num_classes)
generator_top = datagen_top.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)
nb_validation_samples = len(generator_top.filenames)
validation_data = bottleneck_features_validation
validation_labels = generator_top.classes
validation_labels = to_categorical(
    validation_labels, num_classes=num_classes)
input_shape = train_data.shape[1:]
model = Sequential()
model.add(Flatten(input_shape=input_shape))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer=optimizers.RMSprop(lr=2e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_data, train_labels,
                    epochs=epochs,
                    batch_size=batch_size,
                    callbacks=[],
                    validation_data=(validation_data, validation_labels))
model.save_weights(top_model_weights_path)
(eval_loss, eval_accuracy) = model.evaluate(
    validation_data, validation_labels, batch_size=batch_size, verbose=1)
The notebook is on Colab:
https://colab.research.google.com/drive/13RzXpxE-yMEuMFPHnmBpzD1gFXWxVyXK
A single-layer network isn't going to cut it for an image classification problem; it has no capacity to generalize. Try expanding the network with a few more layers, and maybe try a CNN.
Example:
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(),
              metrics=['accuracy'])
This usually happens when you have too many layers and the resulting dimensionality (after striding and pooling) becomes smaller than the kernel size of a subsequent convolutional layer.
What is the image size of the dog-breeds data?
Have you made sure that the reshaping works correctly?
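As a quick illustrative check of that first point (my addition, not part of the original answer), you can trace the spatial size through the pooling layers; model.summary() prints the same information per layer:
# Spatial size after each 2x2 max-pooling layer, starting from a 224x224 input.
size = 224
for i in range(4):
    size //= 2
    print(f"after pool {i + 1}: {size}x{size}")
# after pool 4: 14x14 -- still larger than a 3x3 kernel, so that stack is safe.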

Fully connected layer output ValueError

I am working on a glaucoma detection CNN and I'm getting the following error
ValueError: Error when checking target: expected activation_1 to have shape (2,) but got array with shape (1,)
This happens for any number other than 1 in the final Dense layer. Since there are 2 classes, I need Dense(2) before the activation function. But whenever I run the code with Dense(1), I get good accuracy, yet during testing everything is predicted to be from the same class. How do I solve this error without changing my final layer back to Dense(1)?
This is the code:
img_width, img_height = 256, 256
input_shape = (img_width, img_height, 3)
train_data_dir = "data/train"
validation_data_dir = "data/validation"
nb_train_samples = 500
nb_validation_samples = 50
batch_size = 10
epochs = 10
model = Sequential()
model.add(Conv2D(3, (11, 11), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(Conv2D(96, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(192, (3, 3)))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(192, (3, 3)))
model.add(Flatten())
model.add(Dense(2))
model.add(Activation('softmax'))
model.summary()
model.compile(loss="binary_crossentropy", optimizer=optimizers.Adam(lr=0.001, beta_1=0.9,
beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False), metrics=["accuracy"])
# Initiate the train and test generators with data Augumentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    horizontal_flip=True,
    rotation_range=30)
test_datagen = ImageDataGenerator(
    rescale=1./255,
    horizontal_flip=True,
    rotation_range=30)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode="binary")
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    class_mode="binary")
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size
)
model.save('f1.h5')
Any help will be much appreciated.
That is because you specify class_mode='binary' in your image generators, which means the two classes are encoded as 0 or 1 rather than [1, 0] or [0, 1]. You can easily solve this by changing your final layer to:
model.add(Dense(1, activation='sigmoid'))
# No need for softmax activation
model.compile(loss='binary_crossentropy', ...)
Binary cross-entropy on 0/1 labels is mathematically equivalent to a 2-class softmax with cross-entropy, so you achieve the same thing.
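A tiny numeric check of that equivalence (my illustration in plain NumPy, not part of the original answer): a sigmoid on logit z equals a 2-class softmax on logits [0, z], so the two losses coincide.
import numpy as np

z = 1.3                                    # an arbitrary logit
p_sigmoid = 1 / (1 + np.exp(-z))           # P(class 1) via sigmoid
softmax = np.exp([0.0, z]) / np.sum(np.exp([0.0, z]))
print(p_sigmoid, softmax[1])               # both ~0.7858

bce = -np.log(p_sigmoid)                   # binary cross-entropy, true label 1
cce = -np.log(softmax[1])                  # categorical CE on one-hot [0, 1]
print(bce, cce)                            # identical losses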

Keras ValueError when checking target

I'm trying to build a model in Keras. I followed a tutorial almost to the letter, but I'm getting an error that says:
ValueError: Error when checking target: expected activation_5 to have shape (None, 1) but got array with shape (16, 13)
The code I have is the following:
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
batch_size = 16
epochs = 50
number_training_data = 999
number_validation_data = 100
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    'data/train',  # this is the target directory
    target_size=(200, 200),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    'data/validation',
    target_size=(200, 200),
    batch_size=batch_size,
    class_mode='categorical')
model.fit_generator(
    train_generator,
    steps_per_epoch=number_training_data // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=number_validation_data // batch_size)
The dataset I have has 13 classes, so the shape of the array in the error message corresponds to the batch size and the number of classes. Any idea why I'm getting this error?
Your model is configured to perform binary classification, not multi-class classification with 13 classes. To fix that you should change:
The number of units in the last Dense layer to 13, the number of classes.
The activation at the output to softmax.
The loss to categorical cross-entropy (categorical_crossentropy), as sketched below.
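Here is what the final layers and compile call would look like (a sketch based on the code in the question):
model.add(Dense(13))                  # one unit per class
model.add(Activation('softmax'))      # class probabilities over the 13 classes
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
This matches the class_mode='categorical' generators, which yield one-hot labels of shape (batch_size, 13).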
