Keras model constantly giving the same output class? - python

I've been trying to build a Keras model following the "cats vs dogs" tutorials, but unfortunately I always get the same output class, "cat". I know there have been a few posts where people have the same struggle. I've tried every approach, but I still couldn't figure out what I'm doing wrong. A friend of mine told me I'm not labeling the classes correctly, since my accuracy ratio changes based on how many images I have for each class, but I read in the tutorials that if I have sub-directories, the "flow_from_directory" method already labels my classes based on the names of my folders. If someone could enlighten me on what I'm doing wrong here, that would be quite helpful. Here's a small code sample of my prototype:
# MODEL CONSTRUCTION -----------------------------------------
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(128,128,3))) #(3, 150, 150)
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts all our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid')) #sigmoid for binary outcome, softmax for more than two outcomes
model.compile(loss='binary_crossentropy', # since it's a binary classification
              optimizer='rmsprop',
              metrics=['accuracy'])
#-------------------------------------------------------------
#augmentation configuration for training
train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    #rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
#augmentation configuration for validating
valid_datagen = ImageDataGenerator(
    rotation_range=60,
    width_shift_range=0.4,
    height_shift_range=0.1,
    zoom_range=0.1,
    vertical_flip=True)
#augmentation configuration for testing
test_datagen = ImageDataGenerator(
    rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    directory='data/train',  # this is the target directory
    target_size=(img_width, img_height),  # all images will be resized to the given dimensions
    color_mode="rgb",
    #classes = ['dog', 'cat'],
    batch_size=batch_size,
    class_mode='binary')  # since we use binary_crossentropy loss, we need binary labels
# this is a similar generator, for validation data
validation_generator = valid_datagen.flow_from_directory(
    directory='data/validation',
    target_size=(img_width, img_height),
    color_mode="rgb",
    #classes = ['dog', 'cat'],
    batch_size=batch_size,
    class_mode='binary',
    seed=42)
test_generator = test_datagen.flow_from_directory(
    directory='data/test',
    target_size=(img_width, img_height),
    color_mode="rgb",
    batch_size=1,
    class_mode=None,
    #shuffle=False,
)
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples / batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples / batch_size)
model.evaluate_generator(
    generator=validation_generator
)
test_generator.reset()
pred=model.predict_generator(test_generator,verbose=1)
predicted_class_indices=np.argmax(pred,axis=1)
labels = (train_generator.class_indices)
labels = dict((v,k) for k,v in labels.items())
predictions = [labels[k] for k in predicted_class_indices]
Here's an image of the result when I test with some random images:

The output layer in your model has 1 node activated with the sigmoid activation function.
The output the model produces is therefore 1-dimensional, and each value lies between 0 and 1 since the activation is sigmoid.
You are making predictions like this:
pred=model.predict_generator(test_generator,verbose=1)
predicted_class_indices=np.argmax(pred,axis=1)
You are making predictions with the model and then taking argmax on them. Since the output for each image is a single value like 0.99 or 0.001, taking argmax over that one-element axis will always return 0. Hence the output you always get is 0, which corresponds to cat.
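For illustration, here is a minimal sketch (with made-up sigmoid outputs) showing why argmax over a one-column prediction array is always 0:
import numpy as np

# hypothetical sigmoid outputs for 4 test images, shape (4, 1)
pred = np.array([[0.99], [0.001], [0.7], [0.2]])
# argmax along axis=1 has only one column to choose from,
# so it returns 0 for every row regardless of the value
print(np.argmax(pred, axis=1))  # -> [0 0 0 0]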
If you want your model to make predictions properly, you must take the prediction made by the model and map it to a class based on the threshold you need. For example, with a threshold of 0.5:
pred = model.predict_generator(test_generator, verbose=1)
predicted_class_indices = [1 if x >= 0.5 else 0 for x in pred]
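Combining this with the label mapping from the question, a sketch of the full prediction step (assuming the same train_generator and test_generator as above) could look like this:
pred = model.predict_generator(test_generator, verbose=1)
predicted_class_indices = [1 if x >= 0.5 else 0 for x in pred]
# map indices back to class names, e.g. {0: 'cat', 1: 'dog'}
# when the sub-directories are named 'cat' and 'dog'
labels = dict((v, k) for k, v in train_generator.class_indices.items())
predictions = [labels[k] for k in predicted_class_indices]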

Why don't you use the exact code and the exact training data from the tutorial? That is the recommended way to solve this kind of problem when you have messed things up and have no idea what to fix.

Related

How to deal with np.array as training set in Image Generator

I'm building an ML model that takes pixel values from a numpy array as training and testing data. I defined a function that divides the dataset into images and labels. My task is to use ImageDataGenerator for data augmentation and then train the model. Everything goes smoothly until I try to train the model; it keeps giving me errors about the loss function used. When I use categorical_crossentropy it says I can either use 'sparse_categorical_crossentropy' or use the function to_categorical. Well, I tried both and there were still errors, so I decided to try tf.convert_to_tensor() on my labels, but now I get a shape error:
ValueError: A target array with shape (126, 25, 2) was passed for an output of shape (None, 3) while using as loss `categorical_crossentropy`. This loss expects targets to have the same shape as the output.
This is my code:
training_labels = tf.convert_to_tensor(training_labels)
testing_labels = tf.convert_to_tensor(testing_labels)
# Create an ImageDataGenerator and do Image Augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)
validation_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = train_datagen.flow(training_images,
                                     training_labels,
                                     batch_size=126
                                     )
validation_generator = validation_datagen.flow(
    testing_images,
    testing_labels,
    batch_size=126
)
# Keep These
print(training_images.shape)
print(testing_images.shape)
# Their output should be:
# (27455, 28, 28, 1)
# (7172, 28, 28, 1)
And here goes the model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
# Compile Model.
model.compile(loss = 'categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# Train the Model
history = model.fit_generator(train_generator, validation_data=validation_generator, epochs=2)
model.evaluate(testing_images, testing_labels, verbose=0)
I got stuck on this and googled for a solution without success. Can you please help me move forward?
Thanks a lot!
When using categorical cross-entropy as the loss function, the labels should be one-hot encoded, and the number of neurons in the final layer should equal the number of classes in the dataset; that mismatch is the error you are getting. Since the number of output neurons is 3, I'm guessing you have 3 classes, and hence the shape of training_labels/testing_labels should be (number of images in train/test, 3).
Below is a small snippet using the MNIST dataset.
import tensorflow as tf
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
num_classes = 10
# add a channel dimension so ImageDataGenerator.fit accepts the data
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32')
# convert to one-hot encoding
# shape will be (60000, 10) since there are 60000 training images and 10 classes in MNIST
y_train = to_categorical(y_train, num_classes)
# shape will be (10000, 10) since there are 10000 test images and 10 classes in MNIST
y_test = to_categorical(y_test, num_classes)
datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(x_train)
# note: the model would need 10 output units to match these labels
history = model.fit_generator(datagen.flow(x_train, y_train, batch_size=32), epochs=2)
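Applied to the question's 3-class setup, a rough sketch (assuming training_labels and testing_labels are 1-D arrays of integer class ids 0, 1, 2) would be:
from tensorflow.keras.utils import to_categorical

# option 1: one-hot encode to shape (num_samples, 3) to match Dense(3, activation='softmax')
training_labels = to_categorical(training_labels, num_classes=3)
testing_labels = to_categorical(testing_labels, num_classes=3)
# keep loss='categorical_crossentropy' in model.compile

# option 2: keep the integer labels as they are and use the sparse loss instead
# model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])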

ImageDataGenerator doesn't generate enough samples

I am following F. Chollet's book "Deep Learning with Python" and can't get one example working.
In particular, I am running an example from the chapter "Training a convnet from scratch on a small dataset".
My training dataset has 2000 samples and I am trying to extend it with augmentation using ImageDataGenerator. Although my code is exactly the same as in the book, I am getting this error:
Your input ran out of data; interrupting training. Make sure that your
dataset or generator can generate at least steps_per_epoch * epochs
batches (in this case, 10000 batches).
from keras import layers
from keras import models
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
# creating model
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
# model compilation
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
# model.summary()
# generating trains and test sets with rescaling 0-255 -> 0-1
train_dir = 'c:\\Work\\Code\\Python\\DL\\cats_and_dogs_small\\train\\'
validation_dir = 'c:\\Work\\Code\\Python\\DL\\cats_and_dogs_small\\validation\\'
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    # This is the target directory
    train_dir,
    # All images will be resized to 150x150
    target_size=(150, 150),
    batch_size=32,
    # Since we use binary_crossentropy loss, we need binary labels
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break
history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=50)
Here is the link to the GitHub page with the book's samples, where you can check the code as well.
I am not sure what I am doing wrong and would appreciate any advice. Thank you.
It seems the batch_size should be 20, not 32.
Since you have steps_per_epoch = 100, Keras will call next() on the train generator 100 times before moving to the next epoch.
Now, in train_generator the batch_size is 32, so it can generate 2000/32 batches, given that you have 2000 training samples. That is approximately 62.
So on the 63rd call to next(), train_generator has nothing left to yield and you get "Your input ran out of data".
Ideally,
steps_per_epoch = total_training_samples / batch_size
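As a quick sketch of that arithmetic, using the sample counts from the question:
import math

# with 2000 training samples and batch_size=32 the generator can only
# yield ceil(2000 / 32) = 63 batches per pass over the data, not 100
steps_per_epoch = math.ceil(2000 / 32)   # 63
# with batch_size=20 the numbers divide evenly and match the book's setup
steps_per_epoch = 2000 // 20             # 100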
The above answer described your issue well. I would like to add one point: you can also get the correct steps_per_epoch value by adding these lines.
train_steps = train_generator.__len__()
val_steps = validation_generator.__len__()
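For example, those values can then be passed straight into the training call; here is a sketch based on the question's fit_generator call:
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_steps,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=val_steps)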
I experienced the same issue while dealing with data generation with augmentation.
The solutions provided above are correct: steps_per_epoch = total_training_samples / batch_size.
But since you are augmenting the images, I believe you would not mind passing the same image with different augmentations.
For me, using the same Keras version with the TensorFlow backend that Chollet used removed this error. In that case, the generator keeps feeding you images indefinitely, since it loops back when it runs out of the available images.
Hopefully this helps.

How can I solve a ValueError in a ResNet-50 implementation?

I am implementing ResNet-50 on Kaggle and I am getting a ValueError. Kindly help me out.
train_dir='../input/project/data/train'
test_dir='../input/project/data/test'
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    color_mode='grayscale',
    target_size=(28,28),
    class_mode='binary',
    batch_size=32,
)
test_generator = test_datagen.flow_from_directory(
    test_dir,
    color_mode='grayscale',
    target_size=(28,28),
    class_mode='binary',
    batch_size=32,
    shuffle='False',
)
model = Sequential()
model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path,input_tensor=Input(shape=(224,224,3))))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(2048, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1024, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(2, activation='sigmoid'))
model.layers[0].trainable = False
I am training a binary classifier and I am getting the error below:
ValueError: Cannot assign to variable conv3_block1_0_conv/kernel:0 due to variable shape (1, 1, 256, 512) and value shape (512, 128, 1, 1) are incompatible
You have given input_tensor=Input(shape=(224,224,3)) while defining the ResNet50 base model, but you are giving target_size=(28,28) in your train_generator and test_generator. The training image shape that ResNet50 receives (the target_size) is different from what it expects (the input_tensor). Change your target_size to match the shape given in the input_tensor. Also, ResNet50 expects color_mode to be rgb rather than grayscale.
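A sketch of what the corrected generator calls might look like, keeping the other arguments from the question:
train_generator = train_datagen.flow_from_directory(
    train_dir,
    color_mode='rgb',          # ResNet50's ImageNet weights expect 3-channel input
    target_size=(224, 224),    # must match Input(shape=(224, 224, 3))
    class_mode='binary',
    batch_size=32)
test_generator = test_datagen.flow_from_directory(
    test_dir,
    color_mode='rgb',
    target_size=(224, 224),
    class_mode='binary',
    batch_size=32,
    shuffle=False)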
This is because of the weights you are using (weights=resnet_weights_path). You have to use a recently trained model. The input image size can be anything allowed by the pre-trained model's guidelines.
Below worked for me
n_h, n_w, n_c =(256, 256, 3)
weights_path = '../input/d/aeryss/keras-pretrained-models/ResNet50_NoTop_ImageNet.h5'
ResNet50 = keras.applications.ResNet50(weights=weights_path ,include_top=False, input_shape=(n_h, n_w, n_c))
It looks like you are using pre-trained weights for your model. You should set the skip_mismatch=True keyword on the model.load_weights function. After building the model, add the following line:
model.load_weights(weights, by_name=True, skip_mismatch=True)
where weights is the path to your pre-trained weights file. It should ignore any mismatch between your model and the pre-trained weights.

Deep Learning - Candlestick Question (CNN Model)

I am new to deep learning and just have a question about whether the method I am using is correct.
Also, if anybody has suggestions on what to change in the model creation, that would also be appreciated.
[attached picture: the graphs look similar]
I am using a CNN model to train on candlestick pictures labeled 'buy', 'sell', and 'no trade' that look similar to the attached picture. (I tried a different number of bars but the results were similar.)
I based the code on this post:
https://towardsdatascience.com/making-a-i-that-looks-into-trade-charts-62e7d51edcba
I have made a few changes but kept the model training code similar (small changes did not significantly change the accuracy):
# Input the size of your sample images
img_width, img_height = 150, 150
nb_filters1 = 32
nb_filters2 = 32
nb_filters3 = 64
conv1_size = 3
conv2_size = 2
conv3_size = 5
pool_size = 2
# We have 3 classes: buy, sell and no trade
classes_num = 3
batch_size = 128
lr = 0.001
chanDim =3
model = Sequential()
model.add(Convolution2D(nb_filters1, conv1_size, conv1_size, border_mode ='same', input_shape=(img_height, img_width , 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(pool_size, pool_size)))
model.add(Convolution2D(nb_filters2, conv2_size, conv2_size, border_mode ="same"))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(pool_size, pool_size), dim_ordering='th'))
model.add(Convolution2D(nb_filters3, conv3_size, conv3_size, border_mode ='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(pool_size, pool_size), dim_ordering='th'))
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(classes_num, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.rmsprop(),
              metrics=['accuracy'])
train_datagen = ImageDataGenerator(
    #rescale=1. / 255,
    horizontal_flip=False)
test_datagen = ImageDataGenerator(
    #rescale=1. / 255,
    horizontal_flip=False)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    #shuffle=True,
    batch_size=batch_size,
    class_mode='categorical'
)
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    #shuffle=True,
    class_mode='categorical')
With this, I get an accuracy of 38%, and if I remove the 'no trade' option, I get an accuracy of 52%.
Accuracy does not improve drastically between the start and the end of training, which is why I assume the settings are not 100% right.
When predicting, the results always lean to one side (52% buy, 48% sell) and don't change much after a few hundred images.
Any suggestions?
I assume your three options are "buy", "sell", and "no trade". The reason it jumps to 52% is that the model is then differentiating between 2 options instead of 3.
With regard to the lower than expected accuracy, I recommend changing the optimizer to Adam. Also, possibly move dropout into the middle of the network; I have found success adding a dropout of 0.2 after each pooling layer. This way nodes are dropped throughout the network, which allows for more "diversity" in the node paths taken.
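A minimal sketch of those two suggestions (Adam as the optimizer and Dropout(0.2) after each pooling block), keeping the layer sizes from the question but written against the current Keras layer API:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(150, 150, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))   # dropout after each pooling layer
model.add(Conv2D(32, (2, 2), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (5, 5), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(learning_rate=0.001),
              metrics=['accuracy'])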

How to create a Keras face classifier between myself and others with a restrictive data set?

For the past 2 months I've been trying to create a classification model with Keras that can distinguish between myself and other people. I started from the dogs vs cats classifier and substituted the data set. Since then I have tweaked the network and the data set with some success. I have also tried to augment my data set in many different combinations (flip, rotate, grayscale, lighten and darken the gamma; my augmentation turns 1 picture into 9).
For training I use my laptop's webcam to capture my face in different orientations and angles, and I then split the set in 3 (1/3 for validation and 2/3 for training). For the negative examples I have another data set of random people divided in the same way.
validation:
person: 300
other: 300
train:
person: 600
other: 600
To check my model I use some family photos on which I achieved around 80% accuracy but for this I only use 60 pictures, 36 of which are of myself.
img_width, img_height = 150, 150
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True
)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
print(train_generator.class_indices)
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
print(validation_generator.class_indices)
model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples // batch_size,
    epochs=epochs,
    callbacks=[tensorboard],
    validation_data=validation_generator,
    validation_steps=validation_samples // batch_size)
model.save('model.h5')
All of my training attempts go pretty much the same way: the first 1-2 epochs have similar acc and loss values, while the following ones jump to acc: 0.9 with loss: 0.1.
My assumption is that the problem is in the data set. What should I do in order to achieve a reasonable degree of accuracy by only using webcam-taken photos?
Given the amount of data you have, a better approach would be to use transfer learning instead of training from scratch. You can start with one of the pre-trained ImageNet models like ResNet or Inception, but I suspect models trained on a large face dataset may perform better. You can check the FaceNet implementation from here. You can train only the last fully connected layer's weights and 'freeze' the earlier layers. How to classify using FaceNet can be found here.
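A minimal sketch of that freeze-and-retrain pattern with a generic ImageNet backbone (VGG16 is used here purely as an example; a face-trained model such as FaceNet would be swapped in the same way):
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense, Dropout

base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
for layer in base.layers:
    layer.trainable = False          # freeze the pre-trained convolutional base

x = Flatten()(base.output)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
out = Dense(1, activation='sigmoid')(x)  # binary: "me" vs "other"

model = Model(inputs=base.input, outputs=out)
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])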
