Using a CNN to get the value of the dartboard field where a dart landed - python

I want to use a CNN in Python to read the value of the dartboard field where a dart landed, using pictures.
I took 208 photos of the dartboard, each with a dart in a specific location. I want to predict which field the dart in the next image is in. The 208 pictures represent 4 classes (52 each); single, double and triple of the same field count as the same number, i.e. the same class.
[image: sample dart in a field]
Then I use similar pictures to test the model.
When I try to fit the model I get something like this:
208/208 [==============================] - 3s 15ms/sample - loss: 0.0010 - accuracy: 1.0000 - val_loss: 8.1726 - val_accuracy: 0.2500
Epoch 29/100
208/208 [==============================] - 3s 15ms/sample - loss: 9.8222e-04 - accuracy: 1.0000 - val_loss: 8.6713 - val_accuracy: 0.2500
Epoch 30/100
208/208 [==============================] - 3s 15ms/sample - loss: 8.5902e-04 - accuracy: 1.0000 - val_loss: 9.2214 - val_accuracy: 0.2500
Epoch 31/100
208/208 [==============================] - 3s 15ms/sample - loss: 7.9463e-04 - accuracy: 1.0000 - val_loss: 9.6584 - val_accuracy: 0.2500
While the training accuracy hits 1.0, val_accuracy stays flat at 0.25. A previous model got me slightly better results, but only slightly.
As I am new to the field, I would appreciate any advice on improving my model or the program as a whole.
Here is my current model:
model = Sequential()
# four conv/pool blocks on the full-resolution 640x480 RGB input
model.add(Conv2D(32, kernel_size=3, activation='relu', input_shape=(640, 480, 3)))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())
model.add(Conv2D(64, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(Conv2D(128, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(Conv2D(256, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(Flatten())
model.add(Dense(512, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(4, activation='softmax'))  # one output per dartboard class
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, y, batch_size=16, epochs=100, validation_data=(Xtest, ytest))
And my sample program:
training_data = []
DATADIR = 'C:/PikadaNew'
dir = sorted(os.listdir(DATADIR), key=len)

def create_training_data():
    for category in dir:  # one folder per class
        path = os.path.join(DATADIR, category)
        class_num = dir.index(category)
        for img in tqdm(os.listdir(path)):
            try:
                img_array = cv2.imread(os.path.join(path, img))
                training_data.append([img_array, class_num])
            except Exception as e:
                pass

create_training_data()
DATATESTDIR = 'C:/PikadaNewTest'
dir1 = sorted(os.listdir(DATATESTDIR), key=len)
test_data = []

def create_test_data():
    for category in dir1:
        path = os.path.join(DATATESTDIR, category)
        class_num = dir1.index(category)
        for img in tqdm(os.listdir(path)):
            try:
                img_array = cv2.imread(os.path.join(path, img))  # convert to array
                test_data.append([img_array, class_num])
            except Exception as e:
                pass

create_test_data()
#print(len(training_data))
#print(len(test_data))
X = []
y = []
Xtest = []
ytest = []
for features, label in training_data:
    X.append(features)
    y.append(label)
for features, label in test_data:
    Xtest.append(features)
    ytest.append(label)
X = np.array(X).reshape(-1, 640, 480, 3)
Xtest = np.array(Xtest).reshape(-1, 640, 480, 3)
y = np.array(y)
ytest = np.array(ytest)
y = to_categorical(y)
ytest = to_categorical(ytest)
X = X / 255.0
Xtest = Xtest / 255.0
X, y = shuffle(X, y)
Xtest, ytest = shuffle(Xtest, ytest)
Thanks, and sorry for any mistakes; I hope it is clear what I want to achieve.
Any advice is much appreciated.
Samo

You are facing an overfitting problem: your dataset is very small and the model is more complex than needed. You can try the following:
Add more data if you can.
Try to simplify the model by removing some layers.
Add dropout to the model and use regularizers (see the sketch below).
Use a smaller number of epochs.
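A minimal sketch of a simpler, regularized model along those lines, assuming the same input size as the question; the layer sizes, L2 factor and dropout rate are illustrative, not tuned:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.regularizers import l2

model = Sequential()
# two conv blocks instead of four
model.add(Conv2D(16, kernel_size=3, activation='relu', input_shape=(640, 480, 3)))
model.add(MaxPooling2D(2, 2))
model.add(Conv2D(32, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(Flatten())
# L2 weight penalty plus dropout to fight overfitting
model.add(Dense(128, activation='relu', kernel_regularizer=l2(1e-4)))
model.add(Dropout(0.5))
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])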

Related

Keras classifier wrong evaluation while learning is great

I have a small dataset:
Found 1836 images belonging to 2 classes.
Found 986 images belonging to 2 classes.
and a standard model architecture:
image_generator = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.35
)
train_data_gen = image_generator.flow_from_directory(
    directory=directory,
    target_size=(IMG_SHAPE, IMG_SHAPE),
    subset='training',
)
val_data_gen = image_generator.flow_from_directory(
    directory=directory,
    target_size=(IMG_SHAPE, IMG_SHAPE),
    subset='validation',
)
---
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_SHAPE, IMG_SHAPE, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
BATCH_SIZE = 128
EPOCHS = 7
total_train, total_val = train_data_gen.samples, val_data_gen.samples
steps_per_epoch = int(np.ceil(total_train / float(BATCH_SIZE)))
validation_freq = int(np.ceil(total_val / float(BATCH_SIZE)))
history = model.fit(
    train_data_gen,
    epochs=EPOCHS,
    steps_per_epoch=steps_per_epoch,
    validation_data=val_data_gen,
    validation_freq=validation_freq
)
I'm getting seemingly perfect training metrics:
Epoch 1/7
15/15 [==============================] - 66s 4s/step - loss: 1.0809 - accuracy: 0.4917
Epoch 2/7
15/15 [==============================] - 56s 4s/step - loss: 0.3475 - accuracy: 0.8729
Epoch 3/7
15/15 [==============================] - 60s 4s/step - loss: 0.1113 - accuracy: 0.9583
Epoch 4/7
15/15 [==============================] - 58s 4s/step - loss: 0.1987 - accuracy: 0.9109
Epoch 5/7
15/15 [==============================] - 59s 4s/step - loss: 0.1127 - accuracy: 0.9438
Epoch 6/7
15/15 [==============================] - 60s 4s/step - loss: 0.0429 - accuracy: 0.9854
Epoch 7/7
15/15 [==============================] - 49s 3s/step - loss: 0.0542 - accuracy: 0.9812
But after I evaluate it, I get results completely biased to the first class.
It works only when I run it for 1 epoch, but then accuracy is lacking.
Eval code:
def make_pred(model, labled_dataset, IMG_SHAPE, img_path) -> LabelName:
    def make_image(img_path):
        # img = img_path.resize((IMG_SHAPE, IMG_SHAPE), Image.ANTIALIAS)
        img = image.load_img(img_path, target_size=(IMG_SHAPE, IMG_SHAPE))
        img = image.img_to_array(img)
        return np.expand_dims(img, axis=0)

    pred_id: List[List] = np.argmax(model.predict(make_image(img_path)), axis=1)
    all_labels = list(labled_dataset.class_indices.keys())
    return all_labels[int(pred_id)]
What's wrong with it?
Should I downsize the source image before evaluating it?
I believe you need to do two things: resize the images you wish to predict, then rescale them just as you did for the training images.
I also recommend that you set validation_freq=1 so that you can see how the validation loss and accuracy are trending. This lets you gauge how your model is performing relative to overfitting: if the training loss continues to decline but in later epochs the validation loss begins to increase, the model is overfitting. If you see overfitting, add a Dropout layer after your 512-node Dense layer (see the Keras documentation). Prediction accuracy should then be close to the validation accuracy of the last epoch.
I also recommend you consider the Keras callback ModelCheckpoint. Set it up to monitor validation loss and save the model with the lowest validation loss, then load that saved model to do predictions.
Finally, I find it effective to use an adjustable learning rate; the Keras callback ReduceLROnPlateau makes this easy. Set it up to monitor validation loss: it will automatically reduce the learning rate by a factor (parameter factor) if the validation loss fails to decrease for patience epochs. I use factor=.5 and patience=1. This lets you start with a larger learning rate and have it decrease as needed, so convergence is faster. A sketch of this setup follows below.
One more thing: in your val_data_gen set shuffle=False so the validation images are processed in the same order each time.
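A minimal sketch of that callback setup, assuming the model and generators from the question (the checkpoint filename is illustrative):
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

# save weights whenever validation loss reaches a new minimum
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_loss',
                             save_best_only=True, verbose=1)
# halve the learning rate after one epoch without val_loss improvement
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=1, verbose=1)
history = model.fit(
    train_data_gen,
    epochs=EPOCHS,
    steps_per_epoch=steps_per_epoch,
    validation_data=val_data_gen,
    callbacks=[checkpoint, reduce_lr]
)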
The problem was in validation_freq, which should be validation_steps; after that change we finally get val_accuracy, so training starts validating properly (a corrected fit call is sketched after the prediction function below).
On top of that, IMG_SHAPE wasn't the same for the ImageDataGenerator and the model's input_shape=(IMG_SHAPE, IMG_SHAPE, 3).
Loading the prediction image differently may have helped too (the function below uses cv2, despite its name); it gives slightly different results from keras.preprocessing.image:
def make_pred_PIL(model, labled_dataset, IMG_SHAPE, img_path) -> LabelName:
    img = cv2.imread(img_path)
    img = cv2.resize(img, (IMG_SHAPE, IMG_SHAPE))
    img = np.array(img, dtype=np.float32)
    # note: the training generator rescaled by 1./255; the same rescaling may be needed here
    img = np.reshape(img, (-1, IMG_SHAPE, IMG_SHAPE, 3))
    pred_id: List[List] = np.argmax(model.predict(img), axis=1)
    all_labels = list(labled_dataset.class_indices.keys())
    return all_labels[int(pred_id)]
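For completeness, a minimal sketch of the corrected fit call, assuming the generators and constants from the question:
validation_steps = int(np.ceil(total_val / float(BATCH_SIZE)))
history = model.fit(
    train_data_gen,
    epochs=EPOCHS,
    steps_per_epoch=steps_per_epoch,
    validation_data=val_data_gen,
    validation_steps=validation_steps  # was mistakenly passed as validation_freq
)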

Validation Accuracy stuck at .5073

I am trying to create a regression model, but my validation accuracy stays at .5073. I am training on images and having the network find the position of an object and the rough area it covers. When I increased the number of unfrozen layers, the accuracy plateau dropped to .4927. I would appreciate any help finding out what I am doing wrong.
base = MobileNet(weights='imagenet', include_top=False, input_shape=(200, 200, 3), dropout=.3)
location = base.output
location = GlobalAveragePooling2D()(location)
location = Dense(16, activation='relu', name="locdense1")(location)
location = Dense(32, activation='relu', name="locdense2")(location)
location = Dense(64, activation='relu', name="locdense3")(location)
finallocation = Dense(3, activation='sigmoid', name="finalLocation")(location)
model = Model(inputs=base.input, outputs=finallocation)  # [types, finallocation]; original had base_model.input, a NameError
for layer in model.layers[:91]:  # freeze up to 87
    # ('loc' or 'Loc') evaluates to just 'loc', so the original check missed 'Loc'
    if 'loc' in layer.name.lower():
        layer.trainable = True
    else:
        layer.trainable = False
optimizer = Adam(learning_rate=.001)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=['accuracy'])
history = model.fit(get_batches(type='Train'), validation_data=get_batches(type='Validation'), validation_steps=500, steps_per_epoch=1000, epochs=10)
Data is generated from a tfrecord file which has image data and some labels. This is the last bit of that generator.
IMG_SIZE = 200
def format_position(image, positionx, positiony, width):
    image = tf.cast(image, tf.float32)
    image = (image / 127.5) - 1
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    labels = tf.stack([positionx, positiony, width])
    return image, labels
Get batches (the dataset is loaded from two directories of tfrecord files, one for training and the other for validation):
def get_batches(type):
    dataset = load_dataset(type=type)
    if type == 'Train':
        dataset = dataset.repeat()  # original had databatch = dataset.repeat(), whose result was then discarded
    databatch = dataset.batch(32)
    databatch = databatch.prefetch(2)
    return databatch
`positionx`, `positiony` and `width` are all normalized to 0-1 (relative position with respect to the image).
Here is an example output:
Epoch 1/10
1000/1000 [==============================] - 233s 233ms/step - loss: 0.0267 - accuracy: 0.5833 - val_loss: 0.0330 - val_accuracy: 0.5073
Epoch 2/10
1000/1000 [==============================] - 283s 283ms/step - loss: 0.0248 - accuracy: 0.6168 - val_loss: 0.0337 - val_accuracy: 0.5073
Epoch 3/10
1000/1000 [==============================] - 221s 221ms/step - loss: 0.0238 - accuracy: 0.6309 - val_loss: 0.0312 - val_accuracy: 0.5073
The final activation function in your model should not be sigmoid, since it outputs numbers between 0 and 1, and I am assuming your labels (i.e., positionx, positiony, and width) are not in this range. You could replace it with either 'linear' or 'relu'.
You're doing regression, and your loss function is 'mean_squared_error'; you cannot use accuracy as the metric function. Use 'mae' (mean absolute error) or 'mse' to check the difference between your predictions and the actual target values, as sketched below.
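A minimal sketch of the suggested change, reusing the optimizer from the question:
# regression: track error magnitudes instead of classification accuracy
model.compile(optimizer=optimizer,
              loss='mean_squared_error',
              metrics=['mae', 'mse'])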

CNN Acc and Loss constant with custom image data

I am trying to train a simple CNN model for a binary classification task in Keras with a dataset of images I mined. The problem is that I am getting constant accuracy, val_accuracy and loss after a couple of epochs. Am I processing the data the wrong way, or is it something in the model settings?
At the beginning I was using softmax as the final activation function and categorical crossentropy as the loss, and I was also using the to_categorical function on the labels.
After reading up on what usually causes this, I decided to use sigmoid and binary_crossentropy instead and not use to_categorical. Still the problem persists, and I am starting to wonder whether the problem is my data (the two classes are too similar) or the way I am feeding the image arrays.
conkeras1 = []
pics = os.listdir("/Matrices/")
# I do this for the images of both classes, just keeping it short.
for x in range(len(pics)):
    img = image.load_img("Matrices/" + pics[x])
    conkeras1.append(img)
conkeras = conkeras1 + conkeras2
conkeras = np.array([image.img_to_array(x) for x in conkeras]).astype("float32")
conkeras = conkeras / 255  # I also tried normalizing with a z-score with no success
yecs1 = [1] * len(conkeras1)
yecs2 = [0] * len(conkeras2)
y_train = yecs1 + yecs2
y_train = np.array(y_train).astype("float32")

model = Sequential([
    Conv2D(64, (3, 3), input_shape=conkeras.shape[1:], padding="same", activation="relu"),
    Conv2D(32, (3, 3), activation="relu", padding="same"),
    Flatten(),
    Dense(500, activation="relu"),
    #Dense(4096, activation="relu"),
    Dense(1, activation="sigmoid")
])
model.compile(loss=keras.losses.binary_crossentropy,
              optimizer=keras.optimizers.Adam(lr=0.001),
              metrics=['accuracy'])
history = model.fit(conkeras, y_train,
                    batch_size=32,
                    epochs=32, shuffle=True,
                    verbose=1,
                    callbacks=[tensorboard])
The output I get is this:
975/975 [==============================] - 107s 110ms/step - loss: 8.0022 - acc: 0.4800
Epoch 2/32
975/975 [==============================] - 99s 101ms/step - loss: 8.1756 - acc: 0.4872
Epoch 3/32
975/975 [==============================] - 97s 100ms/step - loss: 8.1756 - acc: 0.4872
Epoch 4/32
975/975 [==============================] - 97s 99ms/step - loss: 8.1756 - acc: 0.4872
and these are the shapes of the training set and labels:
>>> conkeras.shape
(975, 100, 100, 3)
>>> y_train.shape
(975,)

Deep learning: Training set tends to be good and Validation set is bad

I am facing a problem and have difficulty understanding why I get this behaviour.
I am trying to use a pre-trained ResNet50 (Keras) model for binary image classification, and I also built a simple CNN. I have about 8k balanced RGB images of size 200x200, and I divided this set into three sub-sets (train 70%, validation 15%, test 15%).
I built a generator to feed data to my models based on keras.utils.Sequence.
The problem is that my models tend to learn the training set, but on the validation set I get poor results with both the pre-trained ResNet50 and the simple CNN.
I tried several things to solve this problem, with no improvement at all:
With and without Data augmentation on training set (rotation)
Images are normalised between [0,1]
With and without Regularizers
Variation of the learning rate
This is an example of results obtained:
Epoch 1/200
716/716 [==============================] - 320s 447ms/step - loss: 8.6096 - acc: 0.4728 - val_loss: 8.6140 - val_acc: 0.5335
Epoch 00001: val_loss improved from inf to 8.61396, saving model to ../models_saved/resnet_adam_best.h5
Epoch 2/200
716/716 [==============================] - 287s 401ms/step - loss: 8.1217 - acc: 0.5906 - val_loss: 10.9314 - val_acc: 0.4632
Epoch 00002: val_loss did not improve from 8.61396
Epoch 3/200
716/716 [==============================] - 249s 348ms/step - loss: 7.5357 - acc: 0.6695 - val_loss: 11.1432 - val_acc: 0.4657
Epoch 00003: val_loss did not improve from 8.61396
Epoch 4/200
716/716 [==============================] - 284s 397ms/step - loss: 7.5092 - acc: 0.6828 - val_loss: 10.0665 - val_acc: 0.5351
Epoch 00004: val_loss did not improve from 8.61396
Epoch 5/200
716/716 [==============================] - 261s 365ms/step - loss: 7.0679 - acc: 0.7102 - val_loss: 4.2205 - val_acc: 0.5351
Epoch 00005: val_loss improved from 8.61396 to 4.22050, saving model to ../models_saved/resnet_adam_best.h5
Epoch 6/200
716/716 [==============================] - 285s 398ms/step - loss: 6.9945 - acc: 0.7161 - val_loss: 10.2276 - val_acc: 0.5335
....
These are the classes used to load data into my models.
class DataGenerator(keras.utils.Sequence):
    def __init__(self, inputs,
                 labels, img_size,
                 input_shape,
                 batch_size, num_classes,
                 validation=False):
        self.inputs = inputs
        self.labels = labels
        self.img_size = img_size
        self.input_shape = input_shape
        self.batch_size = batch_size
        self.num_classes = num_classes
        self.validation = validation
        self.indexes = np.arange(len(self.inputs))
        self.inc = 0

    def __getitem__(self, index):
        """Generate one batch of data

        Parameters
        ----------
        index : the index from which the batch will be taken

        Returns
        -------
        out : a tuple that contains (inputs, associated labels)
        """
        batch_inputs = np.zeros((self.batch_size, *self.input_shape))
        batch_labels = np.zeros((self.batch_size, self.num_classes))
        # Generate data
        for i in range(self.batch_size):
            # choose random index in features
            if self.validation:
                index = self.indexes[self.inc]
                self.inc += 1
                if self.inc == len(self.inputs):
                    self.inc = 0
            else:
                index = random.randint(0, len(self.inputs) - 1)
            batch_inputs[i] = self.rgb_processing(self.inputs[index])
            batch_labels[i] = to_categorical(self.labels[index], num_classes=self.num_classes)
        return batch_inputs, batch_labels

    def __len__(self):
        """Denotes the number of batches per epoch

        Returns
        -------
        out : number of batches per epoch
        """
        return int(np.floor(len(self.inputs) / self.batch_size))

    def rgb_processing(self, path):
        img = load_img(path)
        rgb = img.get_rgb_array()
        if not self.validation:
            if random.choice([True, False]):
                rgb = random_rotation(rgb)
        return rgb / np.max(rgb)
class Models:
    def __init__(self, input_shape, classes):
        self.input_shape = input_shape
        self.classes = classes

    def simpleCNN(self, optimizer):
        model = Sequential()
        model.add(Conv2D(32, kernel_size=(3, 3),
                         activation='relu',
                         input_shape=self.input_shape))
        model.add(Conv2D(64, (3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(128, activation='relu'))
        model.add(Dropout(0.5))
        model.add(Dense(len(self.classes), activation='softmax'))
        model.compile(loss=keras.losses.binary_crossentropy,
                      optimizer=optimizer,
                      metrics=['accuracy'])
        return model

    def resnet50(self, optimizer):
        model = keras.applications.resnet50.ResNet50(include_top=False,
                                                     input_shape=self.input_shape,
                                                     weights='imagenet')
        model.summary()
        model.layers.pop()
        model.summary()
        for layer in model.layers:
            layer.trainable = False
        output = Flatten()(model.output)
        # I also tried to add dropout layers here with batch normalization but it does not change results
        output = Dense(len(self.classes), activation='softmax')(output)
        finetuned_model = Model(inputs=model.input,
                                outputs=output)
        finetuned_model.compile(optimizer=optimizer,
                                loss=keras.losses.binary_crossentropy,
                                metrics=['accuracy'])
        return finetuned_model
This is how these functions are called:
train_batches = DataGenerator(inputs=train.X.values,
                              labels=train.y.values,
                              img_size=img_size,
                              input_shape=input_shape,
                              batch_size=batch_size,
                              num_classes=len(CLASSES))
validate_batches = DataGenerator(inputs=validate.X.values,
                                 labels=validate.y.values,
                                 img_size=img_size,
                                 input_shape=input_shape,
                                 batch_size=batch_size,
                                 num_classes=len(CLASSES),
                                 validation=True)
if model_name == "cnn":
    model = models.simpleCNN(optimizer=Adam(lr=0.0001))
elif model_name == "resnet":
    model = models.resnet50(optimizer=Adam(lr=0.0001))
early_stopping = EarlyStopping(patience=15)
checkpointer = ModelCheckpoint(output_name + '_best.h5', verbose=1, save_best_only=True)
history = model.fit_generator(train_batches, steps_per_epoch=num_train_steps, epochs=epochs,
                              callbacks=[early_stopping, checkpointer], validation_data=validate_batches,
                              validation_steps=num_valid_steps)
I finally found the main cause of this overfitting. Since I use a pre-trained model, I had set its layers as non-trainable:
for layer in model.layers:
    layer.trainable = False
I tried making them trainable instead, and it seems that solves the problem (see the sketch below). My hypothesis is that my images are too far from the data used to train the model.
I also added some dropout and batch normalization at the end of the resnet model.
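A minimal sketch of that change, assuming the resnet50 method from the question; the small fine-tuning learning rate is a common choice, not taken from the original post:
for layer in model.layers:
    layer.trainable = True  # was False; fine-tune the pre-trained backbone as well
# recompile so the trainability change takes effect
finetuned_model.compile(optimizer=Adam(lr=1e-5),
                        loss=keras.losses.binary_crossentropy,
                        metrics=['accuracy'])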

Keras LSTM multi-input and multi-output model not updating after each epoch

I'm very new to Keras, and I was following this doc to produce a multi-input and multi-output model. However, after each epoch the results remain the same. Could someone point out where I am stuck?
My code is something like:
main_input = Input(shape = (maxlen, ), name="main_input")
x = Embedding(94, 64)(main_input) # dic length = 94
lstm_out0 = LSTM(256, activation="relu", dropout=0.1,
recurrent_dropout=0.2, return_sequences=True)(x)
lstm_out = LSTM(256, activation="relu", dropout=0.1, recurrent_dropout=0.2)(lstm_out0)
auxiliary_input = Input(shape=(maxlen,), dtype="int32", name='aux_input')
aux_embed = Embedding(94, 64)(auxiliary_input)
aux_lstm_out = LSTM(256, activation="relu", dropout=0.2, recurrent_dropout=0.2)(aux_embed)
auxiliary_output = Dense(10, activation="softmax", name="aux_output")(lstm_out)
x = keras.layers.concatenate([aux_lstm_out, lstm_out])
x = Dense(64, activation='relu')(x)
main_output = Dense(1, activation='sigmoid', name='main_output')(x)
model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])
model.compile(optimizer='rmsprop', loss={'main_output': 'binary_crossentropy', 'aux_output': 'categorical_crossentropy'},metrics=['accuracy'])
model.fit([X_train, X_aux_train], [train_label, aux_train_label],
validation_data=[[X_dev, X_aux_dev], [dev_label,aux_dev_label]],
epochs=10, batch_size=batch_size)
The main input is a sequence of chars while the main output is a binary value. The aux input is also a sequence of chars while the aux output is a categorical label.
The output is something like:
Train on 200000 samples, validate on 20000 samples
Epoch 1/10
200000/200000 [==============================] - 892s - loss: 7.3824 - main_output_loss: 5.8560 - aux_output_loss: 1.5264 - main_output_acc: 0.5186 - aux_output_acc: 0.5371 - val_loss: 9.5776 - val_main_output_loss: 8.0590 - val_aux_output_loss: 1.5186 - val_main_output_acc: 0.5000 - val_aux_output_acc: 0.5362
Epoch 2/10
200000/200000 [==============================] - 894s - loss: 9.5818 - main_output_loss: 8.0586 - aux_output_loss: 1.5233 - main_output_acc: 0.5000 - aux_output_acc: 0.5372 - val_loss: 9.5771 - val_main_output_loss: 8.0590 - val_aux_output_loss: 1.5181 - val_main_output_acc: 0.5000 - val_aux_output_acc: 0.5362
I ran more than 5 epochs and the results are almost all the same. The input data is prepared with sequence.pad_sequences for the features and to_categorical for the (multiclass) labels.
