GradCAM with transfer learning and augmentation layers at input - Python

I'm trying to implement GradCAM for a transfer-learning model, which means I need an additional output from the last convolutional layer of the base model.
My model consists of preprocessing/augmentation layers, a pretrained MobileNet, and a custom head. With MobileNet embedded as a single functional layer, I always get a disconnected-graph error. And because of the augmentation layers at the beginning, I didn't manage to rebuild MobileNet layer by layer, as other solutions propose. Thanks a lot for any help!
# transfer-learning model
base_model = MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights='imagenet')

inputs = Input(shape=(224, 224, 3))
augmented = RandomFlip("horizontal")(inputs)
augmented = RandomRotation(0.1)(augmented)
augmented = RandomZoom(height_factor=(0.0, 0.3), width_factor=(0.0, 0.3),
                       fill_mode='constant')(augmented)
mobilenet = base_model(augmented)
pooling = GlobalAveragePooling2D()(mobilenet)
dropout = Dropout(0.5)(pooling)
outputs = Dense(len(classes), activation="softmax")(dropout)

model = Model(inputs=inputs, outputs=outputs)
model.summary()
And here's my model for GradCAM:
gradModel = Model(inputs=[model.inputs],
                  outputs=[model.get_layer('mobilenetv2_1.00_224').get_layer('Conv_1').output,
                           model.output])

I had a similar problem and ended up implementing the augmentation at the dataset level rather than in the model layers.
train_ds = tf.keras.utils.image_dataset_from_directory(
    train_dir,
    validation_split=0.3,
    label_mode='categorical',
    subset="training",
    seed=s,
    color_mode="rgb",
    image_size=image_size,
    batch_size=batch_size,
)

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(height_factor=(0.0, 0.3), width_factor=(0.0, 0.3),
                               fill_mode='constant'),
])

train_ds = train_ds.map(
    lambda x, y: (data_augmentation(x, training=True), y)
)
I then fed this data into the model, and it had the desired effect.
model.fit(train_ds, epochs=EPOCHS)
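With the augmentation out of the graph, the Grad-CAM model can then be built without nesting. A minimal sketch, assuming the MobileNetV2 setup and layer name ('Conv_1') from the question: build the head directly on base_model's own tensors, so the inner conv layer and the final output share one graph.

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.5)(x)
outputs = Dense(len(classes), activation="softmax")(x)
model = Model(inputs=base_model.input, outputs=outputs)

# Both output tensors now live in the same graph, so this no longer disconnects:
gradModel = Model(inputs=model.inputs,
                  outputs=[base_model.get_layer('Conv_1').output, model.output])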

Related

ResNet50 Model Always Predicts 1 Class

I am working on a ResNet50 model to predict covid/non-covid presence in chest x-rays. However, my model currently only predicts class label 1... I have tried 3 different optimizers, 2 different loss functions, changing the learning rate multiple times from 1e-6 to 0.5, and changing the weights on the class labels...
Does anyone have any ideas what the issue could be? Why does it always predict class label 1?
Here is the code:
# import data
# train_ds = tf.keras.utils.image_dataset_from_directory(
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    DATASET_PATH + "Covid/",
    labels="inferred",
    batch_size=64,
    image_size=(256, 256),
    shuffle=True,
    seed=COVID_SEED,
    validation_split=0.2,
    subset="training",
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    DATASET_PATH + "Covid/",
    labels="inferred",
    batch_size=64,
    image_size=(256, 256),
    shuffle=True,
    seed=COVID_SEED,
    validation_split=0.2,
    subset="validation",
)
# split data
train_X = list()
train_y = list()
test_X = list()
test_y = list()
for image_batch_train, labels_batch_train in train_ds:
    for index in range(0, len(image_batch_train)):
        train_X.append(image_batch_train[index])
        train_y.append(labels_batch_train[index])
for image_batch, labels_batch in val_ds:
    for index in range(0, len(image_batch)):
        test_X.append(image_batch[index])
        test_y.append(labels_batch[index])
Conv_Base = ResNet50(weights=None, input_shape=(256, 256, 3), classes=2)
# The Convolutional Base of the Pre-Trained Model will be added as a Layer in this Model
for layer in Conv_Base.layers[:-8]:
    layer.trainable = False

model = Sequential()
model.add(Conv_Base)
model.add(Flatten())
model.add(Dense(units=1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=1, activation='sigmoid'))
model.summary()
opt = Adadelta(learning_rate=0.3)
model.compile(optimizer=opt, loss='BinaryCrossentropy', metrics=['accuracy'])

# try to add class weights to make it predict 0, since we currently only predict class label 1
class_weight = {0: 50.,
                1: 1.}
r = model.fit(x=train_ds, validation_data=val_ds, epochs=COVID_EPOCHS, class_weight=class_weight)

# print the class labels of prediction
predictions = model.predict(val_ds)
predictions = np.ndarray.flatten(predictions)
predictions = np.where(predictions < 0, 0, 1)  # Convert to 0 and 1.
np.set_printoptions(threshold=np.inf)
print(predictions)
Well done! I'll leave an answer here as well because I think you need to do more besides normalization.
When the weights are None (see here), the ResNet weights are randomized. You are using a large convolutional feature extractor (the first layers of a ResNet), but this extractor was never trained on anything. You may still achieve decent performance because the Dense layer that follows it compensates for the random initialization, but chances are that's not what you're aiming for. Keep in mind your ResNet weights are not trainable, so the feature extraction will never change.
The reason I suggested ImageNet weights is that you're working with images, so it's reasonable to assume your convolutional feature extractor needs to pick up important image features such as colors, shapes, and edges. The fact that the ImageNet ResNet was trained on some 1000 classes is irrelevant, because you chop it off before the output layer, which is where the class-number bottleneck occurs. I would pursue weights='imagenet'.
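A minimal sketch of that suggestion, assuming the same binary head as in the question (the 1024-unit layer and 0.5 dropout are illustrative):

# Load ImageNet weights and drop the 1000-class top, then attach a fresh binary head.
Conv_Base = ResNet50(weights='imagenet', include_top=False,
                     input_shape=(256, 256, 3), pooling='avg')
Conv_Base.trainable = False  # keep the pretrained feature extractor fixed
model = Sequential([
    Conv_Base,
    Dense(units=1024, activation='relu'),
    Dropout(0.5),
    Dense(units=1, activation='sigmoid'),
])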

Keras: Transfer Learning - Image scaling worsens performance of the model significantly

I am working on an image classification problem with keras and tensorflow. I am using the VGG16 model with Imagenet weights and I am importing my data using the ImageDataGenerator from Keras.
Now I've read that one should always rescale images by 1./255 for efficient training. However, once I implement the scaling, my model performs significantly worse than before. Changing the learning rate and batch size didn't help either.
Now I am questioning whether this is possible or if my model has some error. I am using standard .jpg image files.
from keras.preprocessing.image import ImageDataGenerator

IMAGE_SIZE = 224
BATCH_SIZE = 32
num_classes = 27

main_path = "C:/Users/abc/data"
final_path = os.path.join(main_path, "ML_DATA")
labels = listdir(final_path)

data_generator = ImageDataGenerator(rescale=1./255,  ### rescaling done here
                                    validation_split=0.20)
train_generator = data_generator.flow_from_directory(final_path, target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                     shuffle=True, seed=13, class_mode='categorical',
                                                     batch_size=BATCH_SIZE, subset="training")
validation_generator = data_generator.flow_from_directory(final_path, target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                          shuffle=False, seed=13, class_mode='categorical',
                                                          batch_size=BATCH_SIZE, subset="validation")
Model definition and training
vgg16_model = keras.applications.vgg16.VGG16(weights='imagenet', include_top=True)

model = Sequential()
for layer in vgg16_model.layers[:-1]:
    model.add(layer)
for layer in model.layers:
    layer.trainable = False
model.add(Dense(num_classes, activation='softmax'))

model.compile(Adam(lr=.001), loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit_generator(train_generator,
                              validation_data=validation_generator,
                              epochs=85, verbose=1, callbacks=[tbCallBack, earlystopCallback])
It could be that the ImageNet weights are not compatible with your rescaled inputs.
I see that your only trainable layer is the very last one, a dense layer, which doesn't know anything about the input scaling. My suggestion is to also let the first few convolutional layers be trainable, so that those layers can adapt to the rescaling, as sketched below.
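A rough sketch of that idea against the Sequential model in the question (the slice index is illustrative, covering roughly VGG16's first block):

# Unfreeze the earliest convolutional layers so they can adapt to the rescaled
# inputs; the new softmax layer is already trainable. Recompile afterwards so
# the change takes effect.
for layer in model.layers[:4]:
    layer.trainable = True
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])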
Working with ResNet and imagenet weights I improved my results using:
ImageDataGenerator(preprocessing_function=preprocess_input)
With rescaling I obtained worse results too.
This information was useful to me:
https://github.com/matterport/Mask_RCNN/issues/231
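Applied to the VGG16 setup above, that would look roughly like this; note it uses VGG16's own preprocess_input, not ResNet's:

from keras.applications.vgg16 import preprocess_input

# Let the generator apply the exact preprocessing VGG16 was trained with,
# instead of a blanket 1./255 rescale.
data_generator = ImageDataGenerator(preprocessing_function=preprocess_input,
                                    validation_split=0.20)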

Fine-tuning ResNet50 with Keras - val_loss keeps increasing

I am trying to customize ResNet50 using Keras with a TensorFlow backend. However, during training my val_loss keeps increasing. Trying different learning rates and batch sizes does not resolve the problem.
Using different preprocessing methods, such as rescaling or the preprocess_input function for resnet50 inside the ImageDataGenerator, did not solve the problem either.
This is the code I am using
Importing and preprocessing data:
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.resnet50 import preprocess_input, decode_predictions

IMAGE_SIZE = 224
BATCH_SIZE = 32
num_classes = 27

main_path = "C:/Users/aaron/Desktop/DATEN/data"
gesamt_path = os.path.join(main_path, "ML_DATA")
labels = listdir(gesamt_path)

data_generator = ImageDataGenerator(#rescale=1./255,
                                    validation_split=0.20,
                                    preprocessing_function=preprocess_input)
train_generator = data_generator.flow_from_directory(gesamt_path, target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                     shuffle=True, seed=13, class_mode='categorical',
                                                     batch_size=BATCH_SIZE, subset="training")
validation_generator = data_generator.flow_from_directory(gesamt_path, target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                          shuffle=False, seed=13, class_mode='categorical',
                                                          batch_size=BATCH_SIZE, subset="validation")
Defining and training the model
img_width = 224
img_height = 224

model = keras.applications.resnet50.ResNet50()
classes = list(iter(train_generator.class_indices))
model.layers.pop()
for layer in model.layers:
    layer.trainable = False
last = model.layers[-1].output
x = Dense(len(classes), activation="softmax")(last)
finetuned_model = Model(model.input, x)
finetuned_model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

for c in train_generator.class_indices:
    classes[train_generator.class_indices[c]] = c
finetuned_model.classes = classes

earlystopCallback = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=8,
                                                  verbose=1, mode='auto')
tbCallBack = keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True,
                                         write_images=True)
history = finetuned_model.fit_generator(train_generator,
                                        validation_data=validation_generator,
                                        epochs=85, verbose=1, callbacks=[tbCallBack, earlystopCallback])
You need to match the preprocessing used for the pretrained network rather than invent your own. Double-check the network's input tensor, i.e. whether the channel-wise average of your input matches that of the data the network was pretrained on.
It could also be that your new data is very different from the data used to pretrain the network. In that case all BN layers will migrate their pretrained mean/var to new values, so an increasing loss is possible at first (though eventually the loss should decrease).
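A quick, hypothetical way to sanity-check that channel-wise average against the generators above:

import numpy as np

# Per-channel mean of one preprocessed batch; with resnet50's caffe-style
# preprocess_input (ImageNet mean subtraction), this should land reasonably
# close to zero.
batch, _ = next(train_generator)
print(batch.mean(axis=(0, 1, 2)))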
In your training you are using a pretrained model (ResNet50) and changing only the last layer, because you want to predict only a few classes and not the 1000 classes the pretrained model was trained on (that's the point of transfer learning).
However, you are freezing all the weights, so you are not letting your model train. Try:
model = keras.applications.resnet50.ResNet50(include_top=False, pooling='avg')
for layer in model.layers:
    layer.trainable = False
last = model.output
x = Dense(512, activation='relu')(last)
x = Dropout(0.5)(x)
#x = BatchNormalization()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
#x = BatchNormalization()(x)
x = Dense(len(classes), activation="softmax")(x)
You can modify the code above: change the number of neurons (512), add or drop the dropout/batch-normalization layers, and use as many dense layers as you want.
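The snippet stops at the softmax layer; a minimal completion, reusing the question's variable names, would be:

# Wrap the frozen base plus the new head into a model and compile it;
# only the dense layers added above will learn.
finetuned_model = Model(model.input, x)
finetuned_model.compile(optimizer=Adam(lr=0.001),
                        loss='categorical_crossentropy', metrics=['accuracy'])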
There is also a known "problem" (a strange design choice) regarding BN layers in Keras, and your bad results may be related to that issue.

Difference between fit and fit_generator when doing transfer learning in keras

I am trying to train a deep neural network using transfer learning in Keras with TensorFlow. There are different ways to do it. If your data is small, you can afford to compute the features with the pre-trained model for the entire dataset once, and then use those features to train and test a small network; this is good because you don't need to recompute those features for every batch at every epoch. However, if the data is large, it is impossible to compute the features for the entire dataset up front, so in that case we use ImageDataGenerator, flow_from_directory, and fit_generator. The features are then recomputed for each batch at each epoch, which makes things much slower.
I was assuming both approaches would produce similar results in terms of accuracy and loss. The problem is that I took a small dataset, tried both approaches, and got completely different results. I would appreciate it if someone could tell me whether something is wrong in the provided code and/or why I am getting different results.
Approach when having large data-set:
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model

datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = datagen.flow_from_directory('data/train',
                                              class_mode='categorical',
                                              batch_size=64, ...)
valid_generator = datagen.flow_from_directory('data/valid',
                                              class_mode='categorical',
                                              batch_size=64, ...)

base_model = InceptionV3(weights='imagenet', include_top=False)
x = base_model.output
x = Conv2D(filters=128, kernel_size=(2, 2))(x)
x = MaxPooling2D()(x)
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', ...)
model.fit_generator(generator=train_generator,
                    steps_per_epoch=len(train_generator),
                    validation_data=valid_generator,
                    validation_steps=len(valid_generator),
                    ...)
Approach when having small data-set:
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.models import Sequential
from keras.utils import np_utils

base_model = InceptionV3(weights='imagenet', include_top=False)
train_features = base_model.predict(preprocess_input(train_data))
valid_features = base_model.predict(preprocess_input(valid_data))

model = Sequential()
model.add(Conv2D(filters=128, kernel_size=(2, 2),
                 input_shape=(train_features.shape[1],
                              train_features.shape[2],
                              train_features.shape[3])))
model.add(MaxPooling2D())
model.add(GlobalAveragePooling2D())
model.add(Dense(1024, activation='relu'))
model.add(Dense(2, activation='softmax'))

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', ...)
model.fit(train_features, np_utils.to_categorical(y_train, 2),
          validation_data=(valid_features, np_utils.to_categorical(y_valid, 2)),
          batch_size=64, ...)

How to use InceptionV3 bottlenecks as input in Keras 2.0

I want to use bottlenecks for transfer learning using InceptionV3 in Keras.
I've used some of the tips on creating, loading and using bottlenecks from
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
My problem is that I don't know how to use a bottleneck (numpy array) as input to an InceptionV3 with a new top layer.
I get the following error:
ValueError: Error when checking input: expected input_3 to have shape
(None, None, None, 3) but got array with shape (248, 8, 8, 2048)
248 refers to the total number of images in this case.
I know that this line is wrong, but I don't know how to correct it:
model = Model(inputs=base_model.input, outputs=predictions)
What is the correct way to input the bottleneck into InceptionV3?
Creating the InceptionV3 bottlenecks:
def create_bottlenecks():
    datagen = ImageDataGenerator(rescale=1. / 255)
    model = InceptionV3(include_top=False, weights='imagenet')

    # Generate bottlenecks for all training images
    generator = datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    nb_train_samples = len(generator.filenames)
    bottlenecks_train = model.predict_generator(generator, int(math.ceil(nb_train_samples / float(batch_size))), verbose=1)
    np.save(open(train_bottlenecks_file, 'wb'), bottlenecks_train)

    # Generate bottlenecks for all validation images
    generator = datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    nb_validation_samples = len(generator.filenames)
    bottlenecks_validation = model.predict_generator(generator, int(math.ceil(nb_validation_samples / float(batch_size))), verbose=1)
    np.save(open(validation_bottlenecks_file, 'wb'), bottlenecks_validation)
Loading the bottlenecks:
def load_bottlenecks(src_dir, bottleneck_file):
    datagen = ImageDataGenerator(rescale=1. / 255)
    generator = datagen.flow_from_directory(
        src_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='categorical',
        shuffle=False)
    num_classes = len(generator.class_indices)

    # load the bottleneck features saved earlier
    bottleneck_data = np.load(bottleneck_file)

    # get the class labels for the training data, in the original order
    bottleneck_class_labels = generator.classes

    # convert the training labels to categorical vectors
    bottleneck_class_labels = to_categorical(bottleneck_class_labels, num_classes=num_classes)

    return bottleneck_data, bottleneck_class_labels
Starting training:
def start_training():
    global nb_train_samples, nb_validation_samples

    create_bottlenecks()
    train_data, train_labels = load_bottlenecks(train_data_dir, train_bottlenecks_file)
    validation_data, validation_labels = load_bottlenecks(validation_data_dir, validation_bottlenecks_file)
    nb_train_samples = len(train_data)
    nb_validation_samples = len(validation_data)

    base_model = InceptionV3(weights='imagenet', include_top=False)

    # add a global spatial average pooling layer
    x = base_model.output
    x = GlobalAveragePooling2D()(x)

    # let's add a fully-connected layer
    x = Dense(1024, activation='relu')(x)

    # and a logistic layer -- let's say we have 2 classes
    predictions = Dense(2, activation='softmax')(x)

    # What is the correct input? Obviously not base_model.input.
    model = Model(inputs=base_model.input, outputs=predictions)

    # first: train only the top layers (which were randomly initialized),
    # i.e. freeze all convolutional InceptionV3 layers
    for layer in base_model.layers:
        layer.trainable = False

    model.compile(optimizer=optimizers.SGD(lr=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])

    # train the model on the new data for a few epochs
    history = model.fit(train_data, train_labels,
                        epochs=epochs,
                        batch_size=batch_size,
                        validation_data=(validation_data, validation_labels),
                        )
Any help would be appreciated!
This error happens when you try to train your model with input data of a different shape than the shape your model supports.
Your model supports (None, None, None, 3), meaning:
Any number of images
Any height
Any width
3 channels
So, you must make sure that train_data (and validation_data) matches this shape.
The error is telling you that train_data.shape = (248, 8, 8, 2048).
I see that train_data comes from load_bottlenecks. Is it really supposed to come from there? What is the training data supposed to be? An image? Something else? What is a bottleneck?
Your model starts with the Inception model, and Inception takes images.
But if the bottlenecks are already results of the Inception model, and you want to feed only bottlenecks, then the Inception model should not participate at all.
Start from:
inputTensor = Input((8,8,2048)) #Use (None,None,2048) if bottlenecks vary in size
x = GlobalAveragePooling2D()(inputTensor)
.....
Create the model with:
model = Model(inputTensor, predictions)
The idea is:
Inception model: Image -> Inception -> Bottlenecks
Your model: Bottlenecks -> Model -> Labels
The combination of the two models is only necessary when you don't have the bottlenecks preloaded, but you have your own images for which you want to predict the bottlenecks first. (Of course you can work with separate models as well)
Then you're going to input only images (the bottlenecks will be created by Inception and passed to your model, everything internally):
Combined model: Image -> Inception ->(bottlenecks)-> Model -> Labels
For that:
inputImage = Input((None,None,3))
bottleNecks = base_model(inputImage)
predictions = model(bottleNecks)
fullModel = Model(inputImage, predictions)
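Hypothetical usage of the two setups, where train_data/train_labels are the loaded bottlenecks and raw_images stands for a batch of RGB image arrays:

# Train the small head on precomputed bottlenecks...
model.fit(train_data, train_labels, epochs=epochs, batch_size=batch_size)
# ...then predict straight from raw pixels with the combined model.
preds = fullModel.predict(raw_images)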
