I'm pretty new to machine learning, and this is my first project using TensorFlow and Keras. I'm trying to predict a numerical value from a dataset, but my model isn't working.
x_train, x_test, y_train, y_test = train_test_split(dates, prices, test_size=0.33)
x_train = np.reshape(x_train,(x_train.shape[0], 1, x_train.shape[1]))
model = Sequential()
model.add(LSTM(30, return_sequences=True, input_shape= (x_train.shape[0], 1)))
# model.add(Flatten())
model.add(Dropout(0.25))
model.add(Dense(100,activation = 'relu'))
model.add(Dropout(0.25))
model.add(Dense(1,activation='softmax'))
model.compile(optimizer='adam',loss='mean_squared_error', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=5)
This is my model above, but whenever it runs it shows the output below:
Epoch 1/5
WARNING:tensorflow:Model was constructed with shape (None, 1686, 1) for input KerasTensor(type_spec=TensorSpec(shape=(None, 1686, 1), dtype=tf.float32, name='lstm_input'), name='lstm_input', description="created by layer 'lstm_input'"), but it was called on an input with incompatible shape (None, 1, 1).
53/53 [==============================] - 2s 2ms/step - loss: 1314.1159 - accuracy: 0.0000e+00
Epoch 2/5
53/53 [==============================] - 0s 2ms/step - loss: 1332.1348 - accuracy: 0.0000e+00
Epoch 3/5
53/53 [==============================] - 0s 2ms/step - loss: 1307.5851 - accuracy: 0.0000e+00
Epoch 4/5
53/53 [==============================] - 0s 2ms/step - loss: 1327.0625 - accuracy: 0.0000e+00
Epoch 5/5
53/53 [==============================] - 0s 2ms/step - loss: 1314.4220 - accuracy: 0.0000e+00
<tensorflow.python.keras.callbacks.History at 0x7f1cdfc56668>
Does anyone know how to fix this and improve accuracy and loss?
Help is greatly appreciated.
If you are trying to predict a numerical value, then you are NOT doing a classification problem but rather a regression problem. Therefore your final dense layer with 1 neuron should have no activation function (i.e. a linear output).
Replace your last layer of Sequential model with this:
model.add(Dense(1, kernel_initializer='normal'))
A regression task doesn't need softmax as its activation function; softmax is mostly used in multi-class classification, where the outputs must form a probability distribution over the classes.
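Putting this together, a minimal sketch of the corrected model (assuming x_train keeps the (samples, 1, n_features) shape from the question's reshape; note that the LSTM's input_shape should be (timesteps, features), not the sample count, which is what the shape warning in the log is complaining about):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
model = Sequential()
# input_shape is (timesteps, features), not (samples, ...);
# no return_sequences, so the Dense head sees one vector per sample
model.add(LSTM(30, input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(Dropout(0.25))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(1))  # linear output for regression, no softmax
# 'accuracy' is meaningless for regression, so track MAE instead
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
model.fit(x_train, y_train, batch_size=32, epochs=5)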
My code is
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 5)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(2)])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(X_train, train_labels, epochs=10)
And my output is
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 3920) 0
dense (Dense) (None, 128) 501888
dense_1 (Dense) (None, 2) 258
=================================================================
Total params: 502,146
Trainable params: 502,146
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
219/219 [==============================] - 2s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 2/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 3/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 4/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 5/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 6/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 7/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 8/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 9/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 10/10
219/219 [==============================] - 1s 3ms/step - loss: nan - accuracy: 0.0000e+00
<keras.callbacks.History at 0x7f8750280790>
Why does the training accuracy converge to 0? My dataset is
print(X_train.shape)
print(X_test.shape)
(7000, 28, 28, 5)
(3000, 28, 28, 5)
print(train_labels.shape)
(7000, 1)
I also tried other models, including a Conv2D model and a logistic regression model, but the accuracy is always 0. That's really weird. Does the issue come from my dataset? My train_labels only contains 1s and (-1)s.
The loss returning NaN is the clue: either the learning rate is too high or the labels are not suitable for the loss function.
First, make sure the labels are in integer (or float) format.
More importantly, SparseCategoricalCrossentropy expects integer class indices in the range [0, num_classes). With a 2-unit output layer the valid labels are 0 and 1, so the (-1)s in your train_labels are out of range and produce NaN; remap them before training, as in the sketch below.
With from_logits=True, the network's raw outputs (logits) are compared against those indices, so the size of the final layer must equal the number of classes (here, 2).
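A minimal sketch of that remapping (assuming train_labels is the (7000, 1) array of 1s and -1s from the question):
import numpy as np
# Map labels from {-1, 1} to {0, 1} so they are valid class indices for
# SparseCategoricalCrossentropy with a 2-unit output layer.
train_labels = np.where(train_labels == -1, 0, 1).astype(np.int32)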
Separately, if you feed the model a dataset of dictionaries, the keys must match the input layer names; for a model that starts with a Flatten layer the default input name is 'flatten_input', or you can add an explicit Input layer and name it yourself.
Beyond that, make sure the network suits the data and the task: if the inputs are rescaled or resized, or the data is not actually images, adapt the architecture accordingly (adding layers or aligning the images can help).
Sample: when working with a Flatten input layer, map the dataset keys to the input layer name:
dataset = {
    "flatten_input": [],
    "label": []
}
dataset["flatten_input"].append(tf.constant(image, shape=(1, 28, 28, 1)))
dataset["label"].append(tf.constant(label, shape=(1, 1, 1, 64)))
Sample: a simple working example on the MNIST dataset:
import tensorflow as tf
import tensorflow_datasets as tfds
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: DataSets
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
ds = tfds.load('mnist', split='train', shuffle_files=True)
ds = ds.shuffle(1024).batch(64).prefetch(tf.data.experimental.AUTOTUNE)
assert isinstance(ds, tf.data.Dataset)
for example in ds.take(1):
    image, label = example["image"], example["label"]

ls_image = []
ls_label = []
for i in range(label.shape[0]):
    ls_image.append(tf.constant(image[i], shape=(1, 28, 28, 1)).numpy())
    # placeholder: every sample is labelled 0, which makes the task trivial
    # (hence the perfect accuracy below); use label[i] for the real labels
    ls_label.append(tf.constant(0, shape=(1, 1, 1, 1)).numpy())
image = tf.constant( ls_image, shape=(64, 1, 784, 1) )
label = tf.constant( ls_label, shape=(64, 1, 1, 1) )
dataset = tf.data.Dataset.from_tensor_slices(( image, label ))
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Initialize
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(784, 1)),
    tf.keras.layers.Dense(256),
    tf.keras.layers.Dense(256),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(2)
])
model.summary()
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Optimizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
optimizer = tf.keras.optimizers.Nadam(
    learning_rate=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-07,
    name='Nadam'
)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
lossfn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.AUTO,
    name='sparse_categorical_crossentropy'
)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Summary
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy'])
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
history = model.fit( dataset, epochs=50 )
Output (because every label was set to the placeholder 0, the task is trivial and the accuracy immediately reaches 1.0):
Epoch 1/50
51/64 [======================>.......] - ETA: 0s - loss: 7.0123e-09 - accuracy: 1.0000
import numpy as np
import pandas as pd
from numpy.random import seed
import tensorflow as tf
from tensorflow import keras
from keras import Sequential
from keras.layers import Dense, Conv1D, MaxPooling2D, Activation
from sklearn.model_selection import train_test_split
seed(1)
tf.random.set_seed(2)
droprate = 0.5
dataset = pd.read_csv('filecounts.csv')
data = np.array(pd.get_dummies(dataset['counts']))
model = Sequential()
model.add(Conv1D(8, kernel_size=3, padding="same", activation="relu",input_shape=(12, 12, 10)))
model.add(MaxPooling2D(pool_size=2))
...
model.add(Conv1D(4, kernel_size=3, padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=2))
...
model.add(Conv1D(1, kernel_size=3, padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=2))
...
model.add(Activation("softmax"))
sgd = keras.optimizers.SGD(learning_rate=1)
train, test = train_test_split(data, test_size=0.5)
model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train, epochs=100, batch_size=10)
_, accuracy = model.evaluate(test, verbose=0, steps=1)
print('Accuracy: %.2f' % (accuracy*100))
Conv1D expects a 3+D input tensor with shape batch_shape + (steps, input_dim) and produces a 3+D output with shape batch_shape + (new_steps, filters), with or without padding='same'.
The error is due to MaxPooling2D, which expects a 4D tensor with shape (batch_size, rows, cols, channels); the 3D output of a Conv1D layer does not match, so use MaxPool1D after Conv1D instead.
Working sample code
import tensorflow as tf
import numpy as np
import tensorflow.keras as keras
X_train = np.random.random((12,12,10))
y_train = np.random.random((12, 1))
model = tf.keras.Sequential()
model.add(keras.layers.Conv1D(8, kernel_size=3, padding="same", activation="relu",input_shape=(12, 10)))
model.add(keras.layers.MaxPool1D(pool_size=2))
model.add(keras.layers.Conv1D(4, kernel_size=3, padding="same", activation="relu"))
model.add(keras.layers.MaxPool1D(pool_size=2))
model.add(keras.layers.Conv1D(1, kernel_size=3, padding="same", activation="relu"))
model.add(keras.layers.MaxPool1D(pool_size=2))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(units = 128, activation = 'relu'))
model.add(keras.layers.Dense(units = 1, activation = 'softmax'))
model.compile(optimizer = 'adam',loss = 'binary_crossentropy',metrics = ['accuracy'])
model.fit(X_train, y_train, epochs=10)  # 10 epochs, matching the log below
Output
Epoch 1/10
1/1 [==============================] - 1s 944ms/step - loss: 0.6914 - accuracy: 0.0000e+00
Epoch 2/10
1/1 [==============================] - 0s 9ms/step - loss: 0.6900 - accuracy: 0.0000e+00
Epoch 3/10
1/1 [==============================] - 0s 9ms/step - loss: 0.6885 - accuracy: 0.0000e+00
Epoch 4/10
1/1 [==============================] - 0s 7ms/step - loss: 0.6870 - accuracy: 0.0000e+00
Epoch 5/10
1/1 [==============================] - 0s 8ms/step - loss: 0.6856 - accuracy: 0.0000e+00
Epoch 6/10
1/1 [==============================] - 0s 9ms/step - loss: 0.6841 - accuracy: 0.0000e+00
Epoch 7/10
1/1 [==============================] - 0s 8ms/step - loss: 0.6828 - accuracy: 0.0000e+00
Epoch 8/10
1/1 [==============================] - 0s 14ms/step - loss: 0.6814 - accuracy: 0.0000e+00
Epoch 9/10
1/1 [==============================] - 0s 7ms/step - loss: 0.6801 - accuracy: 0.0000e+00
Epoch 10/10
1/1 [==============================] - 0s 11ms/step - loss: 0.6789 - accuracy: 0.0000e+00
<keras.callbacks.History at 0x7eff56169810>
I am training a simple machine learning model that takes a 1D description of a physical system (502 elements) and predicts the total energy (1 element). As I am new to TensorFlow I have used a simple dense neural network with two hidden layers of 64 neurons each:
Model: "total_energy"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
charge_density_x_max (InputL [(None, 502)] 0
_________________________________________________________________
hidden_1 (Dense) (None, 64) 32192
_________________________________________________________________
hidden_2 (Dense) (None, 64) 4160
_________________________________________________________________
dense (Dense) (None, 1) 65
=================================================================
Total params: 36,417
Trainable params: 36,417
Non-trainable params: 0
_________________________________________________________________
This is my source code for the training, evaluation and prediction:
# imports
import os
import ast
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
# load the dataset from the csv file
data = pd.read_csv('1e_data.csv')
# load in the data
x_train = np.zeros(shape=(600, 502))
x_test = np.zeros(shape=(400, 502))
y_train = np.zeros(shape=(600))
y_test = np.zeros(shape=(400))
for i in range(0, 1000):
    if i < 600:
        x_train[i,:] = np.append(np.array(ast.literal_eval(data.loc[i,'n'])), float(data.loc[i,'xmax']))
        y_train[i] = float(data.loc[i,'E'])
    else:
        x_test[i-600,:] = np.append(np.array(ast.literal_eval(data.loc[i,'n'])), float(data.loc[i,'xmax']))
        y_test[i-600] = float(data.loc[i,'E'])
# build the neural network model
inputs = tf.keras.Input(shape=(502,), name='charge_density_x_max')
hidden1 = tf.keras.layers.Dense(64, activation='sigmoid', name='hidden_1')(inputs)
hidden2 = tf.keras.layers.Dense(64, activation='sigmoid', name='hidden_2')(hidden1)
outputs = tf.keras.layers.Dense(1)(hidden2)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name='total_energy')
# save the info of the model
with open('model_info.dat','w') as fh:
    model.summary(print_fn=lambda x: fh.write(x + '\n'))
# compile the model
model.compile(optimizer='adam', loss='mean_absolute_percentage_error', metrics=['accuracy'])
# perform the training
model.fit(x_train, y_train, epochs=10)
# evaluate the model for accuracy
model.evaluate(x_test, y_test, verbose=2)
Yet when I run this it seems to do no training at all, giving an accuracy of 0.0000e+00:
Epoch 1/10
600/600 [==============================] - 0s 196us/sample - loss: 289.0616 - acc: 0.0000e+00
Epoch 2/10
600/600 [==============================] - 0s 37us/sample - loss: 144.5967 - acc: 0.0000e+00
Epoch 3/10
600/600 [==============================] - 0s 46us/sample - loss: 97.2109 - acc: 0.0000e+00
Epoch 4/10
600/600 [==============================] - 0s 46us/sample - loss: 108.0698 - acc: 0.0000e+00
Epoch 5/10
600/600 [==============================] - 0s 47us/sample - loss: 84.5921 - acc: 0.0000e+00
Epoch 6/10
600/600 [==============================] - 0s 38us/sample - loss: 79.9309 - acc: 0.0000e+00
Epoch 7/10
600/600 [==============================] - 0s 38us/sample - loss: 80.6755 - acc: 0.0000e+00
Epoch 8/10
600/600 [==============================] - 0s 47us/sample - loss: 87.5954 - acc: 0.0000e+00
Epoch 9/10
600/600 [==============================] - 0s 46us/sample - loss: 73.6634 - acc: 0.0000e+00
Epoch 10/10
600/600 [==============================] - 0s 38us/sample - loss: 78.0825 - acc: 0.0000e+00
400/400 - 0s - loss: 70.3813 - acc: 0.0000e+00
I have probably made a simple mistake here, but I do not know how to begin debugging. This should perform at least some training, but at the moment it seems to just skip the training and give an accuracy of 0.
You are in a regression setting, where accuracy is meaningless (it is meaningful only for classification problems); see What function defines accuracy in Keras when the loss is mean squared error (MSE)? for more details (it is applicable in your case, too, despite the use of a different loss).
The fact that your network does indeed learn is apparent from the reduction in your loss, which is the actual quantity of interest in regression problems (you simply don't need any metrics here).
Independently of the above, you should probably change the sigmoid activations to relu (we normally do not use sigmoid nowadays for intermediate layers).
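Putting both points together, a sketch of the adjusted build and compile steps (keeping the question's architecture and loss; swapping sigmoid for relu and dropping the meaningless accuracy metric — the optional 'mae' metric here is my addition, as a more interpretable companion to the loss):
import tensorflow as tf
# relu hidden layers; linear single-unit output for regression
inputs = tf.keras.Input(shape=(502,), name='charge_density_x_max')
hidden1 = tf.keras.layers.Dense(64, activation='relu', name='hidden_1')(inputs)
hidden2 = tf.keras.layers.Dense(64, activation='relu', name='hidden_2')(hidden1)
outputs = tf.keras.layers.Dense(1)(hidden2)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name='total_energy')
# no 'accuracy': the loss itself is the quantity of interest in regression
model.compile(optimizer='adam', loss='mean_absolute_percentage_error', metrics=['mae'])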
I'm trying to build a model that can predict emotions using 7 concatenated models.
Each of the 7 models represents a part of the face: mouth, left eye, right eye, etc.
The problem is that the model doesn't learn at all: from the 2nd epoch to the last (100th) I have 15% accuracy, with no change in accuracy or loss across all the epochs.
I think the problem may be in my concatenated model or in my fit function (the train and label data).
There are 7 emotions: sad, angry, happy, etc.
Here are my model, my compile and training steps, and my datasets.
Model
from keras.layers import Conv2D, MaxPooling2D, Input, concatenate
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Flatten
def build_all_faceparts_model(input_shape, batch_shape, num_classes):
    input1 = Input(input_shape)
    input2 = Input(input_shape)
    input3 = Input(input_shape)
    input4 = Input(input_shape)
    input5 = Input(input_shape)
    input6 = Input(input_shape)
    input7 = Input(input_shape)
    # Create the model for the right eye
    right_eye = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input1, batch_input_shape=batch_shape)(input1)
    right_eye = MaxPooling2D(pool_size=(2, 2))(right_eye)
    right_eye = Dropout(0.25)(right_eye)
    right_eye = Flatten()(right_eye)
    # Create the model for the left eye
    left_eye = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input2, batch_input_shape=batch_shape)(input2)
    left_eye = MaxPooling2D(pool_size=(2, 2))(left_eye)
    left_eye = Dropout(0.25)(left_eye)
    left_eye = Flatten()(left_eye)
    # Create the model for the right eyebrow
    right_eyebrow = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input3, batch_input_shape=batch_shape)(input3)
    right_eyebrow = MaxPooling2D(pool_size=(2, 2))(right_eyebrow)
    right_eyebrow = Dropout(0.25)(right_eyebrow)
    right_eyebrow = Flatten()(right_eyebrow)
    # Create the model for the left eyebrow
    left_eyebrow = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input4, batch_input_shape=batch_shape)(input4)
    left_eyebrow = MaxPooling2D(pool_size=(2, 2))(left_eyebrow)
    left_eyebrow = Dropout(0.25)(left_eyebrow)
    left_eyebrow = Flatten()(left_eyebrow)
    # Create the model for the mouth
    mouth = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input5, batch_input_shape=batch_shape)(input5)
    mouth = MaxPooling2D(pool_size=(2, 2))(mouth)
    mouth = Dropout(0.25)(mouth)
    mouth = Flatten()(mouth)
    # Create the model for the nose
    nose = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input6, batch_input_shape=batch_shape)(input6)
    nose = MaxPooling2D(pool_size=(2, 2))(nose)
    nose = Dropout(0.25)(nose)
    nose = Flatten()(nose)
    # Create the model for the jaw
    jaw = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input7, batch_input_shape=batch_shape)(input7)
    jaw = MaxPooling2D(pool_size=(2, 2))(jaw)
    jaw = Dropout(0.25)(jaw)
    jaw = Flatten()(jaw)
    concatenated = concatenate([right_eye, left_eye, right_eyebrow, left_eyebrow, mouth, nose, jaw], axis=-1)
    out = Dense(num_classes, activation='softmax')(concatenated)
    model = Model([input1, input2, input3, input4, input5, input6, input7], out)
    return model
Train and test datasets. Here X_train_all is a list of datasets, unlike y_train_all:
X_train_all=[X_train_mouth,X_train_right_eyebrow,X_train_left_eyebrow,X_train_right_eye,X_train_left_eye,X_train_nose,X_train_jaw]
X_test_all=[X_test_mouth,X_test_right_eyebrow,X_test_left_eyebrow,X_test_right_eye,X_test_left_eye,X_test_nose,X_test_jaw]
y_train_all=y_train_mouth+y_train_right_eyebrow+y_train_left_eyebrow+y_train_right_eye+y_train_left_eye+y_train_nose+y_train_jaw
y_test_all=y_test_mouth+y_test_right_eyebrow+y_test_left_eyebrow+y_test_right_eye+y_test_left_eye+y_test_nose+y_test_jaw
Compile
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
input_shape =X_train_mouth[0].shape
batch_shape = X_train_mouth[0].shape
model_all_faceparts=build_all_faceparts_model(input_shape,batch_shape,7)
#Compile Model
model_all_faceparts.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-3),metrics=["accuracy"])
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=3)
early_stopper = EarlyStopping(monitor='val_acc', min_delta=0, patience=15, mode='auto')
checkpointer = ModelCheckpoint(current_dir+'/weights_jaffe.hd5', monitor='val_loss', verbose=1, save_best_only=True)
Train
history = model_all_faceparts.fit(
    X_train_all, y_train_all, batch_size=7, epochs=100, verbose=1,
    callbacks=[lr_reducer, checkpointer, early_stopper])
Output
Epoch 1/100
181/181 [==============================] - 19s 107ms/step - loss: 94.6603 - acc: 0.1271
Epoch 2/100
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:1109: RuntimeWarning: Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,acc,lr
(self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:434: RuntimeWarning: Can save best model only with val_loss available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:569: RuntimeWarning: Early stopping conditioned on metric `val_acc` which is not available. Available metrics are: loss,acc,lr
(self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
181/181 [==============================] - 15s 81ms/step - loss: 95.9962 - acc: 0.1492
Epoch 3/100
181/181 [==============================] - 15s 81ms/step - loss: 95.9962 - acc: 0.1492
Epoch 4/100
181/181 [==============================] - 15s 83ms/step - loss: 95.9962 - acc: 0.1492
Epoch 5/100
181/181 [==============================] - 15s 84ms/step - loss: 95.9962 - acc: 0.1492
Epoch 6/100
181/181 [==============================] - 15s 85ms/step - loss: 95.9962 - acc: 0.1492
Epoch 7/100
181/181 [==============================] - 16s 86ms/step - loss: 95.9962 - acc: 0.1492
Epoch 8/100
181/181 [==============================] - 16s 87ms/step - loss: 95.9962 - acc: 0.1492
Epoch 9/100
181/181 [==============================] - 16s 86ms/step - loss: 95.9962 - acc: 0.1492
Epoch 10/100
(I completely forgot about this post.)
The problem was in the model itself: I just changed the model (added some layers) and everything worked fine, ending up at 93% accuracy!
PS: thanks to the TensorFlow support person who reminded me to post an answer.
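The answer doesn't say which layers were added, so as a purely hypothetical illustration (reusing the imports and input1 from the question's model), one branch deepened with a second convolutional block might look like this:
# Hypothetical sketch only -- the original answer does not specify the layers.
# A second Conv2D/pooling block before flattening one branch:
right_eye = Conv2D(32, kernel_size=(3, 3), activation='relu')(input1)
right_eye = MaxPooling2D(pool_size=(2, 2))(right_eye)
right_eye = Conv2D(64, kernel_size=(3, 3), activation='relu')(right_eye)
right_eye = MaxPooling2D(pool_size=(2, 2))(right_eye)
right_eye = Dropout(0.25)(right_eye)
right_eye = Flatten()(right_eye)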
This question already has answers here:
predict with Keras fails due to faulty environment setup
(3 answers)
Closed 3 years ago.
I was performing a classification problem on a set of images, where my number of classes is three. Since I am building a CNN, it has a convolution layer and a pooling layer and then a few dense layers; the model parameters are shown below:
def baseline_model():
    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(1, 100, 100), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(60, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
The model runs perfectly and shows me the accuracy, validation error, etc., as shown below:
model = baseline_model()
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=5, batch_size=20, verbose=1)
scores = model.evaluate(X_test, y_test, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))
Which gives me output:
Train on 514 samples, validate on 129 samples
Epoch 1/5
514/514 [==============================] - 23s 44ms/step - loss: 1.2731 - acc: 0.4202 - val_loss: 1.0349 - val_acc: 0.4419
Epoch 2/5
514/514 [==============================] - 18s 34ms/step - loss: 1.0172 - acc: 0.4416 - val_loss: 1.0292 - val_acc: 0.4884
Epoch 3/5
514/514 [==============================] - 17s 34ms/step - loss: 0.9368 - acc: 0.5817 - val_loss: 0.9915 - val_acc: 0.4806
Epoch 4/5
514/514 [==============================] - 18s 34ms/step - loss: 0.7367 - acc: 0.7101 - val_loss: 0.9973 - val_acc: 0.4961
Epoch 5/5
514/514 [==============================] - 17s 32ms/step - loss: 0.4587 - acc: 0.8521 - val_loss: 1.2328 - val_acc: 0.5039
CNN Error: 49.61%
The issue occurs in the prediction part.
So for my test images, for whom I need predictions; when I run model.predict(), it gives me this error:
TypeError: data type not understood
I can show the full error if required.
And just to show, the shape of my training images and images I am finally using to predict on:
X_train.shape
(514, 1, 100, 100)
final.shape
(277, 1, 100, 100)
So I have no idea what this error means or what the issue is. Even the data type of my image values is the same ('float32'). If the shape and the data type are the same, why is this error occurring?
It is similar to predict with Keras fails due to faulty environment setup
I had the same issue with Anaconda and Python 3.7. I resolved it when I changed to WPy-3670 with Python 3.6 and everything downgraded.