Issue in predict function of the trained model in Keras [duplicate]

This question already has answers here:
predict with Keras fails due to faulty environment setup
(3 answers)
Closed 3 years ago.
I am working on a classification problem on a set of images, where my number of classes is three. Since I am building a CNN, the model has a convolution layer, a pooling layer, and then a few dense layers; the model definition is shown below:
def baseline_model():
    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(1, 100, 100), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(60, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
The model trains fine and reports the accuracy, validation error, etc., as shown below:
model = baseline_model()
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=5, batch_size=20, verbose=1)
scores = model.evaluate(X_test, y_test, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))
Which gives me output:
Train on 514 samples, validate on 129 samples
Epoch 1/5
514/514 [==============================] - 23s 44ms/step - loss: 1.2731 - acc: 0.4202 - val_loss: 1.0349 - val_acc: 0.4419
Epoch 2/5
514/514 [==============================] - 18s 34ms/step - loss: 1.0172 - acc: 0.4416 - val_loss: 1.0292 - val_acc: 0.4884
Epoch 3/5
514/514 [==============================] - 17s 34ms/step - loss: 0.9368 - acc: 0.5817 - val_loss: 0.9915 - val_acc: 0.4806
Epoch 4/5
514/514 [==============================] - 18s 34ms/step - loss: 0.7367 - acc: 0.7101 - val_loss: 0.9973 - val_acc: 0.4961
Epoch 5/5
514/514 [==============================] - 17s 32ms/step - loss: 0.4587 - acc: 0.8521 - val_loss: 1.2328 - val_acc: 0.5039
CNN Error: 49.61%
The issue occurs in the prediction step. When I run model.predict() on the test images I need predictions for, it gives me this error:
TypeError: data type not understood
I can show the full error if required.
And just to show, here are the shapes of my training images and of the images I am finally predicting on:
X_train.shape
(514, 1, 100, 100)
final.shape
(277, 1, 100, 100)
So I have no idea what this error means or what the issue is. Even the data type of my image values is the same, 'float32'. If the shapes match and the data types match, why does this error occur?
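For reference, a quick sanity check of what is described above (matching shapes and dtypes) before calling predict; this is a minimal sketch that reuses the variable names from the question:
import numpy as np

# Confirm that shape and dtype really match between training and prediction arrays.
print(X_train.shape, X_train.dtype)  # expected: (514, 1, 100, 100) float32
print(final.shape, final.dtype)      # expected: (277, 1, 100, 100) float32

# An explicit cast rules out an object or string dtype hiding inside the array.
predictions = model.predict(final.astype(np.float32))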

This is similar to predict with Keras fails due to faulty environment setup.
I had the same issue with Anaconda and Python 3.7. I resolved it when I switched to WPy-3670,
with Python 3.6 and everything downgraded.

Related

Val_acc doesn't increase

I'm trying to train a model with transfer learning using MobileNetV2, but my validation accuracy stops increasing at around 0.60. I've tried training only the top layers that I built; after that I also tried training some of the MobileNet layers. Same result. How can I fix it? I have to mention that I am new to deep learning and I am not sure the top layers I built are right. Feel free to correct me.
IMAGE_SIZE = 224
BATCH_SIZE = 64
train_data_dir = "/content/FER2013/Training"
validation_data_dir = "/content/FER2013/PublicTest"
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    validation_split=0)
train_generator = datagen.flow_from_directory(
    train_data_dir,
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    class_mode='categorical',
    batch_size=BATCH_SIZE)
val_generator = datagen.flow_from_directory(
    validation_data_dir,
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    class_mode='categorical',
    batch_size=BATCH_SIZE)
IMG_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(7, activation='softmax')
])
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(1e-5),  # I've tried with .Adam as well
              metrics=['accuracy'])
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=3)
early_stopper = EarlyStopping(monitor='val_accuracy', min_delta=0, patience=6, mode='auto')
checkpointer = ModelCheckpoint('/content/weights.hd5', monitor='val_loss', verbose=1, save_best_only=True)
epochs = 50
learning_rate = 0.004  # I've tried other values as well
history_fine = model.fit(train_generator,
                         steps_per_epoch=len(train_generator),
                         epochs=epochs,
                         callbacks=[lr_reducer, checkpointer, early_stopper],
                         validation_data=val_generator,
                         validation_steps=len(val_generator))
Epoch 1/50
448/448 [==============================] - ETA: 0s - loss: 1.7362 - accuracy: 0.2929
Epoch 00001: val_loss improved from inf to 1.58818, saving model to /content/weights.hd5
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: /content/weights.hd5/assets
448/448 [==============================] - 166s 370ms/step - loss: 1.7362 - accuracy: 0.2929 - val_loss: 1.5882 - val_accuracy: 0.4249
Epoch 2/50
448/448 [==============================] - ETA: 0s - loss: 1.3852 - accuracy: 0.4664
Epoch 00002: val_loss improved from 1.58818 to 1.31690, saving model to /content/weights.hd5
INFO:tensorflow:Assets written to: /content/weights.hd5/assets
448/448 [==============================] - 165s 368ms/step - loss: 1.3852 - accuracy: 0.4664 - val_loss: 1.3169 - val_accuracy: 0.4827
Epoch 3/50
448/448 [==============================] - ETA: 0s - loss: 1.2058 - accuracy: 0.5277
Epoch 00003: val_loss improved from 1.31690 to 1.21979, saving model to /content/weights.hd5
INFO:tensorflow:Assets written to: /content/weights.hd5/assets
448/448 [==============================] - 165s 368ms/step - loss: 1.2058 - accuracy: 0.5277 - val_loss: 1.2198 - val_accuracy: 0.5271
Epoch 4/50
448/448 [==============================] - ETA: 0s - loss: 1.0828 - accuracy: 0.5861
Epoch 00004: val_loss improved from 1.21979 to 1.18972, saving model to /content/weights.hd5
INFO:tensorflow:Assets written to: /content/weights.hd5/assets
448/448 [==============================] - 166s 370ms/step - loss: 1.0828 - accuracy: 0.5861 - val_loss: 1.1897 - val_accuracy: 0.5533
Epoch 5/50
448/448 [==============================] - ETA: 0s - loss: 0.9754 - accuracy: 0.6380
Epoch 00005: val_loss improved from 1.18972 to 1.13336, saving model to /content/weights.hd5
INFO:tensorflow:Assets written to: /content/weights.hd5/assets
448/448 [==============================] - 165s 368ms/step - loss: 0.9754 - accuracy: 0.6380 - val_loss: 1.1334 - val_accuracy: 0.5743
Epoch 6/50
448/448 [==============================] - ETA: 0s - loss: 0.8761 - accuracy: 0.6848
Epoch 00006: val_loss did not improve from 1.13336
448/448 [==============================] - 153s 342ms/step - loss: 0.8761 - accuracy: 0.6848 - val_loss: 1.1348 - val_accuracy: 0.5882
Epoch 7/50
448/448 [==============================] - ETA: 0s - loss: 0.7783 - accuracy: 0.7264
Epoch 00007: val_loss did not improve from 1.13336
448/448 [==============================] - 153s 341ms/step - loss: 0.7783 - accuracy: 0.7264 - val_loss: 1.1392 - val_accuracy: 0.5893
Epoch 8/50
448/448 [==============================] - ETA: 0s - loss: 0.6832 - accuracy: 0.7638
Epoch 00008: val_loss did not improve from 1.13336
448/448 [==============================] - 153s 342ms/step - loss: 0.6832 - accuracy: 0.7638 - val_loss: 1.1542 - val_accuracy: 0.6052
Since your validation loss is increasing while your training loss decreases, I think you may have a problem of overfitting to your training set. Some things that could help are:
Use fewer dense layers. I think there are too many, but I could be wrong since I don't know what problem you are solving.
Add dropout layers after every dense layer.
Increase the dropout rates.
Use data augmentation on your training set (since you are already using ImageDataGenerator it won't be that hard; see the sketch after this list).
Reduce the number of neurons in dense layers.
Use regularization.
You can try any one of these, or several of them at the same time. Tweaking a model involves a lot of trial and error: run some experiments and keep the model that achieves the best performance.
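For the data augmentation point, a minimal sketch; it reuses the tf.keras ImageDataGenerator already present in the question, the specific transform values are only illustrative, and the validation stream is deliberately left un-augmented:
import tensorflow as tf

train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    horizontal_flip=True,    # mirror faces left/right
    rotation_range=10,       # small random rotations, in degrees
    width_shift_range=0.1,   # small random horizontal shifts
    height_shift_range=0.1,  # small random vertical shifts
    zoom_range=0.1)          # small random zooms

val_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)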
With your training loss decreasing and your validation loss increasing, it appears that you are in an overfitting situation. The more complex your model, the higher the chance of this happening, so I suggest you try to simplify your model. Try removing all the dense layers except for the top layer that produces your classifications. Run that and see how it does. My experience with MobileNet is that it should work well. If not, add an additional dense layer with about 128 neurons followed by a dropout layer. Use the dropout rate to adjust for overfitting. If you are still overfitting, you might want to consider adding regularizers to this dense layer. Documentation is here.
It could also be the case that you need more training samples. Use the ImageDataGenerator to augment your data set, for example by setting horizontal_flip=True. I notice you did not include the MobileNetV2 preprocessing function in the ImageDataGenerator. I believe this is necessary, so modify the code as shown below. I have also found I get the best results with MobileNet if I train the entire model, so before you compile your model add the code below.
# Apply the MobileNetV2 preprocessing function to every image:
ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input)
# Unfreeze the entire model so all layers are trained:
for layer in model.layers:
    layer.trainable = True
As an aside, you can delete the GlobalAveragePooling2D layer in your model and instead set the parameter pooling='max' within MobileNet. I also see you start out with a very small learning rate; try something like .005. Since you have the ReduceLROnPlateau callback, it will adjust the learning rate if it is too large, but a larger start will let you converge faster.
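A minimal sketch of that aside, assuming the variable names from the question's code; with pooling='max' the base model already outputs a flat vector, so the separate pooling layer and most of the dense stack can be dropped:
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet',
                                               pooling='max')  # built-in global max pooling

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),  # raise or lower this rate to control overfitting
    tf.keras.layers.Dense(7, activation='softmax')
])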

Why is an extra LSTM layer getting worse results than the normal LSTM model?

I am creating a Keras model and trying out variations.
This is my first model:
es = EarlyStopping(monitor='val_loss')
model = Sequential()
model.add(LSTM(100, input_shape=(TIME_STEPS, 11), dropout=0.0,
               recurrent_dropout=0.0, kernel_initializer='random_uniform'))
model.add(Dropout(0.25))
#######model.add(LSTM(64))
model.add(Dense(15, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer=kr.optimizers.rmsprop(0.01),
              metrics=[tf.keras.metrics.BinaryAccuracy()])
csv_logger = kr.callbacks.CSVLogger('sonuclar.log')
history = model.fit(x_train,  # training inputs
                    y_train,  # training outputs
                    epochs=150,
                    verbose=2,
                    batch_size=BATCH_SIZE,
                    shuffle=False,
                    validation_data=(x_test1, y_test1),
                    callbacks=[EarlyStopping(monitor='val_loss', patience=21),
                               ModelCheckpoint(filepath='best_model.h5', monitor='val_loss', save_best_only=True)])
This model has just 1 LSTM layer and 2 dense layers. These are my loss results:
...
Epoch 99/150
- 0s - loss: 8.0949e-04 - binary_accuracy: 4.9048e-04 - val_loss: 3.7912e-04 - val_binary_accuracy: 4.8986e-04
Epoch 100/150
- 0s - loss: 7.9101e-04 - binary_accuracy: 4.9053e-04 - val_loss: 9.9216e-05 - val_binary_accuracy: 4.8991e-04
Epoch 101/150
- 0s - loss: 6.8317e-04 - binary_accuracy: 4.9057e-04 - val_loss: 3.0611e-04 - val_binary_accuracy: 4.8996e-04
Epoch 102/150
- 0s - loss: 9.5524e-04 - binary_accuracy: 4.9061e-04 - val_loss: 7.6808e-05 - val_binary_accuracy: 4.9000e-04
Epoch 103/150
- 0s - loss: 6.7897e-04 - binary_accuracy: 4.9065e-04 - val_loss: 2.7978e-04 - val_binary_accuracy: 4.9005e-04
Epoch 104/150
- 0s - loss: 5.9103e-04 - binary_accuracy: 4.9069e-04 - val_loss: 6.1831e-04 - val_binary_accuracy: 4.9009e-04
Epoch 105/150
- 0s - loss: 8.2365e-04 - binary_accuracy: 4.9072e-04 - val_loss: 6.4325e-05 - val_binary_accuracy: 4.9014e-04
Epoch 106/150
- 0s - loss: 7.1716e-04 - binary_accuracy: 4.9076e-04 - val_loss: 1.0926e-04 - val_binary_accuracy: 4.9018e-04
Epoch 107/150
- 0s - loss: 6.5435e-04 - binary_accuracy: 4.9080e-04 - val_loss: 2.2587e-04 - val_binary_accuracy: 4.9022e-04
Epoch 108/150
- 0s - loss: 7.6734e-04 - binary_accuracy: 4.9083e-04 - val_loss: 7.6250e-05 - val_binary_accuracy: 4.9026e-04
Epoch 109/150
- 0s - loss: 6.4531e-04 - binary_accuracy: 4.9087e-04 - val_loss: 5.4440e-04 - val_binary_accuracy: 4.9030e-04
Epoch 110/150
- 0s - loss: 7.2096e-04 - binary_accuracy: 4.9091e-04 - val_loss: 8.7251e-05 - val_binary_accuracy: 4.9034e-04
Epoch 111/150
- 0s - loss: 7.3333e-04 - binary_accuracy: 4.9094e-04 - val_loss: 2.8440e-04 - val_binary_accuracy: 4.9038e-04
Epoch 112/150
If I use a second LSTM layer, this model should be smarter than the previous one at predicting stock prices, but this code:
es = EarlyStopping(monitor='val_loss')
model = Sequential()
model.add(LSTM(100, input_shape=(TIME_STEPS, 11), dropout=0.0,
               recurrent_dropout=0.0, kernel_initializer='random_uniform',
               return_sequences=True))
model.add(Dropout(0.25))
model.add(LSTM(64))
model.add(Dense(15, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer=kr.optimizers.rmsprop(0.01),
              metrics=[tf.keras.metrics.BinaryAccuracy()])
csv_logger = kr.callbacks.CSVLogger('sonuclar.log')
history = model.fit(x_train,  # training inputs
                    y_train,  # training outputs
                    epochs=150,
                    verbose=2,
                    batch_size=BATCH_SIZE,
                    shuffle=False,
                    validation_data=(x_test1, y_test1),
                    callbacks=[EarlyStopping(monitor='val_loss', patience=21),
                               ModelCheckpoint(filepath='best_model.h5', monitor='val_loss', save_best_only=True)])
gets worse training results than the previous model. This can't be overfitting, because I am not even at the prediction stage yet. Why is the second model worse than the first?
One last thing: my data set has 15 stock price features, and I am trying to predict stock prices.
Adding more components to a neural network does not necessarily mean that you will improve on simpler models for your task. Such things are design decisions (like hyperparameter settings) that you need to experiment with on your task and dataset to find the optimal choices.
Actually, by adding more modules (like the second LSTM layer you just added), you increase the number of model parameters that have to be trained, and you will also have to give your network more time to train. As the parameter count grows, the model becomes more complex and has a harder time fitting the training instances, since it needs to optimize all those parameters in a way that optimally fits the training data.
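To make the parameter-count point concrete, a minimal sketch that builds both variants from the question and compares their trainable parameters; the TIME_STEPS value is assumed for illustration, and the feature count of 11 follows the question's input_shape:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

TIME_STEPS, N_FEATURES = 30, 11  # TIME_STEPS assumed; 11 features as in the question

def build(extra_lstm):
    layers = [LSTM(100, input_shape=(TIME_STEPS, N_FEATURES),
                   return_sequences=extra_lstm),
              Dropout(0.25)]
    if extra_lstm:
        layers.append(LSTM(64))
    layers += [Dense(15, activation='relu'), Dense(1, activation='sigmoid')]
    return Sequential(layers)

print(build(False).count_params())  # roughly 46k parameters
print(build(True).count_params())   # roughly 88k parameters: almost twice as many to optimize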

Could validation data be a generator in tensorflow.keras 2.0?

In the official documentation of tensorflow.keras, validation_data can be:
a tuple (x_val, y_val) of Numpy arrays or tensors
a tuple (x_val, y_val, val_sample_weights) of Numpy arrays
a dataset
For the first two cases, batch_size must be provided. For the last case, validation_steps can be provided.
It does not mention whether a generator can act as validation_data. So I want to know: can validation_data be a data generator, as in the following code?
net.fit_generator(train_it.generator(), epoch_iterations * batch_size, nb_epoch=nb_epoch, verbose=1,
                  validation_data=val_it.generator(), nb_val_samples=3,
                  callbacks=[checker, tb, stopper, saver])
Update:
In the official Keras documentation the content is the same, but one more phrase is added:
dataset or a dataset iterator
Considering that
dataset: for the first two cases, batch_size must be provided; for the last case, validation_steps can be provided.
I think there should be 3 cases. Keras' documentation is correct, so I will open an issue against tensorflow.keras to get its documentation updated.
Yes it can; it's strange that it is not in the docs, but it works exactly like the x argument: you can also use a keras.utils.Sequence or a generator. In my projects I often use keras.utils.Sequence, which acts like a generator (a minimal Sequence sketch follows the output below).
Minimum working example that shows that it works:
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten

def generator(batch_size):
    # Create empty arrays to contain the batch of features and labels
    batch_features = np.zeros((batch_size, 1000))
    batch_labels = np.zeros((batch_size, 1))
    while True:
        for i in range(batch_size):
            yield batch_features, batch_labels

model = Sequential()
model.add(Dense(125, input_shape=(1000,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

train_generator = generator(64)
validation_generator = generator(64)
model.fit(train_generator, validation_data=validation_generator, validation_steps=100, epochs=100, steps_per_epoch=100)
Epoch 1/100
100/100 [==============================] - 1s 13ms/step - loss: 0.6689 - accuracy: 1.0000 - val_loss: 0.6448 - val_accuracy: 1.0000
Epoch 2/100
100/100 [==============================] - 0s 4ms/step - loss: 0.6223 - accuracy: 1.0000 - val_loss: 0.6000 - val_accuracy: 1.0000
Epoch 3/100
100/100 [==============================] - 0s 4ms/step - loss: 0.5792 - accuracy: 1.0000 - val_loss: 0.5586 - val_accuracy: 1.0000
Epoch 4/100
100/100 [==============================] - 0s 4ms/step - loss: 0.5393 - accuracy: 1.0000 - val_loss: 0.5203 - val_accuracy: 1.0000
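For completeness, the keras.utils.Sequence route mentioned above works the same way; here is a minimal sketch assuming the same toy shapes as the generator example:
import numpy as np
from tensorflow.keras.utils import Sequence

class ToySequence(Sequence):
    # Yields fixed-size batches of zero-valued features and labels, mirroring the generator above.
    def __init__(self, batch_size, n_batches):
        self.batch_size = batch_size
        self.n_batches = n_batches

    def __len__(self):
        return self.n_batches  # batches per epoch

    def __getitem__(self, idx):
        batch_features = np.zeros((self.batch_size, 1000))
        batch_labels = np.zeros((self.batch_size, 1))
        return batch_features, batch_labels

# Used like the generator above, but validation_steps is no longer needed:
# model.fit(ToySequence(64, 100), validation_data=ToySequence(64, 100), epochs=100)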

Keras Concatenated Model Doesn't learn

I'm trying to build a model that can predict emotions using 7 models concatenated.
Each of the 7 models represents a part of the face: mouth, left eye, right eye, etc.
The problem is that the model doesn't learn at all: from the 2nd epoch to the last (100th) I get 15% accuracy, with no change in accuracy or loss across the epochs.
I think the problem may be in my concatenated model or in my fit function (the training and label data).
There are 7 emotions: sad, angry, happy, etc.
Here are my model, my compile and train steps, and my datasets.
Model
from keras.layers import Conv2D, MaxPooling2D, Input, concatenate
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Flatten

def build_all_faceparts_model(input_shape, batch_shape, num_classes):
    input1 = Input(input_shape)
    input2 = Input(input_shape)
    input3 = Input(input_shape)
    input4 = Input(input_shape)
    input5 = Input(input_shape)
    input6 = Input(input_shape)
    input7 = Input(input_shape)

    # Create the model for the right eye
    right_eye = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input1, batch_input_shape=batch_shape)(input1)
    right_eye = MaxPooling2D(pool_size=(2, 2))(right_eye)
    right_eye = Dropout(0.25)(right_eye)
    right_eye = Flatten()(right_eye)

    # Create the model for the left eye
    left_eye = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input2, batch_input_shape=batch_shape)(input2)
    left_eye = MaxPooling2D(pool_size=(2, 2))(left_eye)
    left_eye = Dropout(0.25)(left_eye)
    left_eye = Flatten()(left_eye)

    # Create the model for the right eyebrow
    right_eyebrow = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input3, batch_input_shape=batch_shape)(input3)
    right_eyebrow = MaxPooling2D(pool_size=(2, 2))(right_eyebrow)
    right_eyebrow = Dropout(0.25)(right_eyebrow)
    right_eyebrow = Flatten()(right_eyebrow)

    # Create the model for the left eyebrow
    left_eyebrow = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input4, batch_input_shape=batch_shape)(input4)
    left_eyebrow = MaxPooling2D(pool_size=(2, 2))(left_eyebrow)
    left_eyebrow = Dropout(0.25)(left_eyebrow)
    left_eyebrow = Flatten()(left_eyebrow)

    # Create the model for the mouth
    mouth = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input5, batch_input_shape=batch_shape)(input5)
    mouth = MaxPooling2D(pool_size=(2, 2))(mouth)
    mouth = Dropout(0.25)(mouth)
    mouth = Flatten()(mouth)

    # Create the model for the nose
    nose = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input6, batch_input_shape=batch_shape)(input6)
    nose = MaxPooling2D(pool_size=(2, 2))(nose)
    nose = Dropout(0.25)(nose)
    nose = Flatten()(nose)

    # Create the model for the jaw
    jaw = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input7, batch_input_shape=batch_shape)(input7)
    jaw = MaxPooling2D(pool_size=(2, 2))(jaw)
    jaw = Dropout(0.25)(jaw)
    jaw = Flatten()(jaw)

    concatenated = concatenate([right_eye, left_eye, right_eyebrow, left_eyebrow, mouth, nose, jaw], axis=-1)
    out = Dense(num_classes, activation='softmax')(concatenated)
    model = Model([input1, input2, input3, input4, input5, input6, input7], out)
    return model
Train and test datasets. Here X_train_all is a list of datasets, unlike y_train_all:
X_train_all=[X_train_mouth,X_train_right_eyebrow,X_train_left_eyebrow,X_train_right_eye,X_train_left_eye,X_train_nose,X_train_jaw]
X_test_all=[X_test_mouth,X_test_right_eyebrow,X_test_left_eyebrow,X_test_right_eye,X_test_left_eye,X_test_nose,X_test_jaw]
y_train_all=y_train_mouth+y_train_right_eyebrow+y_train_left_eyebrow+y_train_right_eye+y_train_left_eye+y_train_nose+y_train_jaw
y_test_all=y_test_mouth+y_test_right_eyebrow+y_test_left_eyebrow+y_test_right_eye+y_test_left_eye+y_test_nose+y_test_jaw
Compile
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint  # needed for the callbacks below

input_shape = X_train_mouth[0].shape
batch_shape = X_train_mouth[0].shape
model_all_faceparts = build_all_faceparts_model(input_shape, batch_shape, 7)
# Compile model
model_all_faceparts.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-3), metrics=["accuracy"])
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=3)
early_stopper = EarlyStopping(monitor='val_acc', min_delta=0, patience=15, mode='auto')
checkpointer = ModelCheckpoint(current_dir + '/weights_jaffe.hd5', monitor='val_loss', verbose=1, save_best_only=True)
Train
history = model_all_faceparts.fit(
    X_train_all, y_train_all, batch_size=7, epochs=100, verbose=1,
    callbacks=[lr_reducer, checkpointer, early_stopper])
Output
Epoch 1/100
181/181 [==============================] - 19s 107ms/step - loss: 94.6603 - acc: 0.1271
Epoch 2/100
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:1109: RuntimeWarning: Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,acc,lr
(self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:434: RuntimeWarning: Can save best model only with val_loss available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:569: RuntimeWarning: Early stopping conditioned on metric `val_acc` which is not available. Available metrics are: loss,acc,lr
(self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
181/181 [==============================] - 15s 81ms/step - loss: 95.9962 - acc: 0.1492
Epoch 3/100
181/181 [==============================] - 15s 81ms/step - loss: 95.9962 - acc: 0.1492
Epoch 4/100
181/181 [==============================] - 15s 83ms/step - loss: 95.9962 - acc: 0.1492
Epoch 5/100
181/181 [==============================] - 15s 84ms/step - loss: 95.9962 - acc: 0.1492
Epoch 6/100
181/181 [==============================] - 15s 85ms/step - loss: 95.9962 - acc: 0.1492
Epoch 7/100
181/181 [==============================] - 16s 86ms/step - loss: 95.9962 - acc: 0.1492
Epoch 8/100
181/181 [==============================] - 16s 87ms/step - loss: 95.9962 - acc: 0.1492
Epoch 9/100
181/181 [==============================] - 16s 86ms/step - loss: 95.9962 - acc: 0.1492
Epoch 10/100
(I completely forgot this post.)
The problem was in the model itself: I just changed the model (added some layers) and everything was fine, ending up at 93% accuracy!
PS: thanks to the TensorFlow support guy who reminded me to post an answer.
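As a side note, the RuntimeWarnings in the training log above fire because the callbacks monitor val_loss/val_acc but fit is never given any validation data. A minimal sketch of the corrected call, assuming the X_test_all and y_test_all defined earlier are meant to serve as the validation set:
history = model_all_faceparts.fit(
    X_train_all, y_train_all,
    validation_data=(X_test_all, y_test_all),  # makes val_loss / val_acc available to the callbacks
    batch_size=7, epochs=100, verbose=1,
    callbacks=[lr_reducer, checkpointer, early_stopper])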

keras convolutional neural network is predicting all zeros no matter what type of activation function I use

I am new to convolutional neural networks as well as Keras. As a side project I scraped MLB players' headshots from baseball-reference. For each player I broke their image into different blocks (15x15 pixels), then randomly put images back together and recorded whether or not the pieces actually fit together. My goal is to create a convolutional neural network that can recognize when 2 images actually go together.
My input data is 15x30x3 (two 15x15 blocks put together to make a 15x30 image), with a target of 1 or 0 for whether or not the 2 images actually go together.
My data consists of 0.7186 non-matches and 0.2813 matches (as proportions of the dataset).
I structured my model as follows:
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(15, 30, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train,
          batch_size=64, epochs=10, verbose=1)
This results in:
Epoch 1/10
136996/136996 [==============================] - 21s 154us/step - loss: 4.5399 - acc: 0.7183
Epoch 2/10
136996/136996 [==============================] - 16s 117us/step - loss: 4.5369 - acc: 0.7185
Epoch 3/10
136996/136996 [==============================] - 16s 117us/step - loss: 4.5355 - acc: 0.7186
Epoch 4/10
136996/136996 [==============================] - 16s 116us/step - loss: 4.5354 - acc: 0.7186
Epoch 5/10
136996/136996 [==============================] - 16s 116us/step - loss: 4.5393 - acc: 0.7184
Epoch 6/10
136996/136996 [==============================] - 16s 117us/step - loss: 4.5373 - acc: 0.7185
Epoch 7/10
136996/136996 [==============================] - 16s 117us/step - loss: 4.5369 - acc: 0.7185
Epoch 8/10
136996/136996 [==============================] - 16s 117us/step - loss: 4.5374 - acc: 0.7185
Epoch 9/10
136996/136996 [==============================] - 16s 117us/step - loss: 4.5374 - acc: 0.7185
Epoch 10/10
136996/136996 [==============================] - 16s 117us/step - loss: 4.5360 - acc: 0.7186
If I change my output layer to
model.add(Dense(2, activation='softmax'))
and use categorical_crossentropy as my loss, the results are very similar, still basically predicting all zeros. I have also fiddled with the optimizer.
Notice that the accuracy always roughly matches the proportion of non-matches in my dataset, i.e. it is always predicting 0.
What am I doing wrong?
Thank you for any and all input.
In line with @dennlinger's comment, you should check whether your classes are imbalanced. If they are, try to get a similar number of samples per class, or use class weights.
In your case you have two classes, and the weights can be calculated easily like this: weight_0 = (total number of samples) / (number of samples with label 0) and weight_1 = (total number of samples) / (number of samples with label 1).
Then add the class_weight parameter to your fit function, as in the sketch below. You can follow this answer.
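A minimal sketch of that computation, assuming y_train is a NumPy array of 0/1 labels and reusing the names from the question:
import numpy as np

n_total = len(y_train)
weight_0 = n_total / np.sum(y_train == 0)  # (total number of samples) / (samples with zero)
weight_1 = n_total / np.sum(y_train == 1)  # (total number of samples) / (samples with one)

model.fit(X_train, y_train,
          batch_size=64, epochs=10, verbose=1,
          class_weight={0: weight_0, 1: weight_1})  # errors on the rarer class now weigh more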
