Gradient descent loss and accuracy don't change across iterations - python

I'm using Keras to implement a basic CNN for emotion detection. Here is my model architecture:
from keras.models import Model
from keras.layers import Input, ZeroPadding2D, Conv2D, BatchNormalization, Activation, MaxPooling2D, Flatten, Dense
from keras.optimizers import Adam

def HappyModel(input_shape):
    X_Input = Input(input_shape)
    X = ZeroPadding2D((3, 3))(X_Input)
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='mp0')(X)
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc0')(X)
    model = Model(inputs=X_Input, outputs=X, name='hmodel')
    return model

happyModel = HappyModel(X_train.shape[1:])
happyModel.compile(Adam(lr=0.1), loss='binary_crossentropy', metrics=['accuracy'])
happyModel.fit(X_train, Y_train, epochs=50, batch_size=16, validation_data=(X_test, Y_test))
The model's loss and accuracy don't change at all from epoch to epoch; it feels like gradient descent is stuck in a local minimum, as shown here:
https://i.imgur.com/9As8v0c.png
I have tried both the Adam and SGD optimizers, with learning rates of 0.1 and 0.5, still no luck.
It turns out that if I change the parameters in the compile call, the model converges nicely during training:
happyModel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
The Keras documentation (https://keras.io/optimizers/) says that passing the optimizer as a string this way uses the default parameters for Adam:
keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
But if I instead construct Adam explicitly with those same default parameters,
happyModel.compile(Adam(lr=0.001, beta_1=0.9, beta_2=0.999, decay=0.0), loss='binary_crossentropy', metrics=['accuracy'])
the accuracy and loss are still stuck.
What is the difference between these two ways of specifying the Adam optimizer in Keras?

You can check a closed issue on the official keras-team page:
https://github.com/keras-team/keras/issues/5564
You probably have a syntax issue, since the two ways of passing the optimizer are not completely equivalent.
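One way to narrow this down is to inspect what configuration the compiled model actually ended up with. This is a minimal sketch using the standard Keras optimizer API; the model name follows the question above:

import keras.backend as K

# Compare the effective configuration of the two compile variants.
happyModel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
print(happyModel.optimizer.get_config())   # should show lr=0.001 and the other defaults
print(K.eval(happyModel.optimizer.lr))     # the learning rate actually in use

If both compile variants report the same configuration, the two calls are effectively equivalent and the difference in behaviour likely lies elsewhere.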

Related

How to find a single accuracy for a CNN model for research purposes?

I have already completed a project training a CNN model for 20 epochs. The model definition is given below:
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(200, 200, 3)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
Then compile the model:
from tensorflow.keras.optimizers import RMSprop

model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(lr=0.001),
              metrics=['accuracy'])
Now fit the model for 20 epochs:
model_fit = model.fit(train_dataset,
                      steps_per_epoch=3,
                      epochs=20,
                      validation_data=validation_dataset)
After training the model, the per-epoch accuracy and loss are shown below.
Note: I cannot find a single accuracy value, so I cannot report one in a research paper. I can't include the whole per-epoch accuracy history in a paper; I need one number. How should I obtain or report a single accuracy value? Please help if you know.
I commented above about the issue you are having, but if you really only want one number, you can use the code below to evaluate both your loss and accuracy on a test set.
# Evaluate the loss and accuracy
loss, accuracy = model.evaluate(testingDataset)
# Print the accuracy
print("Accuracy: " + str(accuracy))
# Print the loss
print("Loss: " + str(loss))

Overfitting in LSTM even after using regularizers

I have a time series prediction problem and am building an LSTM as below:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout
from keras.regularizers import l2

def create_model():
    model = Sequential()
    model.add(LSTM(50, kernel_regularizer=l2(0.01), recurrent_regularizer=l2(0.01),
                   bias_regularizer=l2(0.01), input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Dropout(0.591))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
When I train the model on 5 splits as below:
from sklearn.model_selection import TimeSeriesSplit

tss = TimeSeriesSplit(n_splits=5)
X = data.drop(labels=['target_prediction'], axis=1)
y = data['target_prediction']
for train_index, test_index in tss.split(X):
    train_X, test_X = X.iloc[train_index, :].values, X.iloc[test_index, :].values
    train_y, test_y = y.iloc[train_index].values, y.iloc[test_index].values
    model = create_model()
    history = model.fit(train_X, train_y, epochs=10, batch_size=64,
                        validation_data=(test_X, test_y), verbose=0, shuffle=False)
I get an overfitting problem; the loss graph is attached.
I am not sure why there is overfitting when I use regularizers in my Keras model. Any help is appreciated.
EDIT:
I have tried the following architectures:
from keras.layers import Bidirectional, TimeDistributed

def create_model():
    model = Sequential()
    model.add(LSTM(20, input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

def create_model(x, y):
    # define a bidirectional LSTM
    model = Sequential()
    model.add(Bidirectional(LSTM(20, return_sequences=True), input_shape=(x, y)))
    model.add(TimeDistributed(Dense(1, activation='sigmoid')))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
but it is still overfitting.
First of all, remove all your regularizers and dropout. You are throwing every trick at the problem at once, and 0.5 dropout is too high.
Reduce the number of units in your LSTM. Start from there, and reach a point where your model stops overfitting.
Then add dropout, if required.
After that, the next step is to add tf.keras.layers.Bidirectional. If you are still not satisfied, increase the number of layers. Remember to keep return_sequences=True for every LSTM layer except the last one.
I seldom come across networks using weight regularization even though it is available, because dropout and weight regularization have a similar effect and people usually go with dropout (at most, I have seen 0.3 used).
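As a concrete illustration of the advice above, here is a minimal sketch; the unit counts and dropout rate are placeholder assumptions, not tuned values:

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Bidirectional

def create_small_model(n_timesteps, n_features):
    # Start small: one LSTM with far fewer units and no weight regularizers.
    model = Sequential()
    model.add(LSTM(8, input_shape=(n_timesteps, n_features)))
    model.add(Dropout(0.2))  # add modest dropout only if the smaller model still overfits
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

# If more capacity is needed later, stack layers and keep return_sequences=True
# on every LSTM except the last one, e.g.:
#   model.add(Bidirectional(LSTM(16, return_sequences=True), input_shape=(n_timesteps, n_features)))
#   model.add(LSTM(8))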

How to use a well trained model as input to another model?

I'll start with the code and put my question after it.
from tensorflow import keras
from tensorflow.keras import layers

model1_input = keras.Input(shape=(5, 10))
x = layers.Dense(16, activation='relu')(model1_input)
model1_output = layers.Dense(4)(x)
model1 = keras.Model(model1_input, model1_output, name='model1')
model1.summary()

# ----
model2_input = keras.Input(shape=(5, 10))
y = layers.Dense(16, activation='relu')(model2_input)
model2_output = layers.Dense(4)(y)
model2 = keras.Model(model2_input, model2_output, name='model2')
model2.summary()

# ----
model3_input = keras.Input(shape=(5, 10))
layer1 = model1(model3_input)
layer2 = model2(layer1)
model3_output = layers.Dense(1)(layer2)
model3 = keras.Model(model3_input, model3_output, name='model3')
model3.summary()
model3.compile(loss='mse', optimizer='adam')
model3.fit(inputs, outputs, epochs=10, batch_size=32)
When I execute this code, what happens to the model1 and model2 weights? Do they stay untrained?
I would like to use the predictions of an already trained model1 and model2 to train model3. Can I write something like this?
model1_input = keras.Input(shape=(5, 10))
x = layers.Dense(16, activation='relu')(model1_input)
model1_output = layers.Dense(4)(x)
model1 = keras.Model(model1_input, model1_output, name='model1')
model1.summary()
model1.compile(loss='mse', optimizer='adam')
model1.fit(model1_inputs, model1_outputs, epochs=10, batch_size=32)

# ----
model2_input = keras.Input(shape=(5, 10))
y = layers.Dense(16, activation='relu')(model2_input)
model2_output = layers.Dense(4)(y)
model2 = keras.Model(model2_input, model2_output, name='model2')
model2.summary()
model2.compile(loss='mse', optimizer='adam')
model2.fit(model2_inputs, model2_outputs, epochs=10, batch_size=32)

# ----
model3_input = keras.Input(shape=(5, 10))
layer1 = model1(model3_input)
layer2 = model2(layer1)
model3_output = layers.Dense(1)(layer2)
model3 = keras.Model(model3_input, model3_output, name='model3')
model3.summary()
model3.compile(loss='mse', optimizer='adam')
model3.fit(inputs, outputs, epochs=10, batch_size=32)
I'm afraid that training model3 will change the already trained weights of models 1 and 2. What happens to their weights in this case?
I am not sure Keras works that way, but even if it does, training model3 will still change the weights as long as those layers are trainable. Try freezing the layers. These links might help you: 1, 2.
Another option would be to branch the layers like this.
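A minimal sketch of the freezing approach, reusing the names from the question (model1, model2, model3, inputs, outputs) and assuming model1 and model2 have already been fit:

# Freeze the pre-trained sub-models so that fitting model3 leaves their weights untouched.
model1.trainable = False
model2.trainable = False

# Changes to `trainable` only take effect when the model is (re)compiled.
model3.compile(loss='mse', optimizer='adam')
model3.fit(inputs, outputs, epochs=10, batch_size=32)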

Can't train a keras model to approximate a simple function

I just got started with machine learning, and I tried to write a simple program where the NN learns the simple function y = f(x) = 2x.
Here's the code:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# x is a 1D array of the integers 1 to 999
x = np.arange(1, 1000, 1)
y = x * 2
xtrain = x[:750]
ytrain = y[:750]
xtest = x[750:]
ytest = y[750:]

model = Sequential()
model.add(Dense(128, input_dim=1, activation='relu'))
model.add(Dense(1, activation='relu'))
model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=['accuracy'])
model.summary()

history = model.fit(xtrain, ytrain,
                    batch_size=100,
                    epochs=20,
                    verbose=1,
                    validation_split=0.2)
I get the following output, no matter how I change the architecture or the hyperparameters:
79999/79999 [==============================] - 1s 13us/step - loss: 8533120007.8465 - acc: 0.0000e+00 - val_loss: 32532613324.8000 - val_acc: 0.0000e+00
The accuracy is 0 all the time. What am I doing wrong?
This is actually what you would expect if you blindly run gradient descent and expect it to learn an arbitrary function. The behaviour you observe stems from two reasons:
The derivative that SGD uses to update the weights depends on the input. Take a very simple case, y = f(wx + b); the derivative of y with respect to w is f'(wx + b) * x by the chain rule. So when an update is computed for an input that is extremely large / unnormalised, the gradient blows up. The update is w' = w - alpha * gradient, so the weight suddenly becomes very small, in fact negative.
After a single gradient update the output becomes negative because SGD overshot. Since you also have ReLU in the final layer, it then outputs 0, and training stalls because the derivative of ReLU is 0 for negative inputs.
You can reduce the data size to np.arange(1, 10) and reduce the number of hidden neurons to, say, 12 (more neurons make the output even more negative after a single update, since all their weights become negative as well), and you will be able to train the network.
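To confirm this explanation on your own model, you can look at the predictions and the final layer's weights after a single update. A hedged diagnostic sketch (the batch slice of 100 samples is an arbitrary assumption):

import numpy as np

# After one update on the unnormalised data, the final ReLU layer should
# output 0 for every input if the stall described above occurred.
model.train_on_batch(xtrain[:100], ytrain[:100])
preds = model.predict(xtrain[:100])
print(np.all(preds == 0))                   # True indicates the dead-ReLU stall
print(model.layers[-1].get_weights()[0])    # final-layer kernel, expected to have gone negative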
I think it works; check this out. I used randn instead of arange; other things are pretty much the same.
import numpy as np
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense

x = np.random.randn(1000)
y = x * 2
xtrain = x[0:750]
ytrain = y[0:750]

model = Sequential()
model.add(Dense(128, input_dim=1, activation='relu'))
model.add(Dense(1))
model.summary()

sgd = optimizers.SGD(lr=0.01, decay=1e-6)
model.compile(loss='mean_squared_error',
              optimizer=sgd,
              metrics=['mae'])

history = model.fit(xtrain, ytrain,
                    batch_size=100,
                    epochs=20,
                    verbose=1,
                    validation_split=0.2)
If you want to use the earlier dataset (i.e. arange), here is the accompanying code for that:
x = np.arange(1, 1000, 1)
y = x * 2
xtrain = x[0:750]
ytrain = y[0:750]

model = Sequential()
model.add(Dense(128, input_dim=1, activation='relu'))
model.add(Dense(1))
model.summary()

adam = optimizers.Adam(lr=0.0001)  # Adam with a much smaller learning rate
model.compile(loss='mean_squared_error',
              optimizer=adam,
              metrics=['mae'])

history = model.fit(xtrain, ytrain,
                    batch_size=100,
                    epochs=200,
                    verbose=1,
                    validation_split=0.2)

VGG16: different epochs and batch sizes generate the same result

I am trying to learn how VGG16 works. Below is my code, using VGG16 for a different classification task.
from keras.applications.vgg16 import VGG16
from keras.layers import Input, Flatten, Dense
from keras.models import Model
from keras.optimizers import Adam

# Generate the convolutional base (without the top classifier)
model_vgg16_conv = VGG16(weights='imagenet', include_top=False)
model_vgg16_conv.summary()

# Create your own input format
input = Input(shape=(128, 128, 3), name='image_input')

# Use the generated model
output_vgg16_conv = model_vgg16_conv(input)

# Add the fully-connected layers
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dense(5, activation='softmax', name='predictions')(x)

# Create your own model
model = Model(inputs=input, outputs=x)

# In the summary, weights and layers from the VGG part will be hidden, but they will be trained
model.summary()

# Specify an optimizer to use
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)

# Choose loss function, optimization method, and metrics (which results to display)
model.compile(
    optimizer=adam,
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

model.fit(X_train, y_train, epochs=10, batch_size=10, verbose=2)
# model.fit(X_train, y_train, epochs=30, batch_size=100, verbose=2)
result = model.predict(y_test)  # same result
For some reason, using different numbers of epochs and different batch sizes generates exactly the same result. Am I doing something wrong?
