Overfitting AlexNet (Python)

I have an AlexNet neural network that I wrote from scratch using TensorFlow, trained on 6,000 images.
While training, the validation accuracy does not change, and it stays above the training accuracy; I suspect the model is overfitting. In addition, the validation loss is increasing.
Is it possible to solve the overfitting problem with only 1,000 images, to save time?
How can I prevent overfitting?
I attached my AlexNet code below.
Thanks
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense, Dropout

def CreateModel():
    model = Sequential()
    # 1st Convolutional Layer
    model.add(Conv2D(filters=96, input_shape=(227,227,3), kernel_size=(11,11), strides=(4,4), padding='valid'))
    model.add(Activation('relu'))
    # Max Pooling
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
    # 2nd Convolutional Layer
    model.add(Conv2D(filters=256, kernel_size=(11,11), strides=(1,1), padding='valid'))
    model.add(Activation('relu'))
    # Max Pooling
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
    # 3rd Convolutional Layer
    model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
    model.add(Activation('relu'))
    # 4th Convolutional Layer
    model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
    model.add(Activation('relu'))
    # 5th Convolutional Layer
    model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid'))
    model.add(Activation('relu'))
    # Max Pooling
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
    # Passing it to a Fully Connected layer
    model.add(Flatten())
    # 1st Fully Connected Layer (input_shape here is ignored since this is not the first layer)
    model.add(Dense(4096, input_shape=(224*224*3,)))
    model.add(Activation('relu'))
    # Add Dropout to prevent overfitting
    model.add(Dropout(0.5))
    # 2nd Fully Connected Layer
    model.add(Dense(4096))
    model.add(Activation('relu'))
    # Add Dropout
    model.add(Dropout(0.5))
    # 3rd Fully Connected Layer
    model.add(Dense(1000))
    model.add(Activation('relu'))
    # Add Dropout
    model.add(Dropout(0.5))
    # Output Layer
    model.add(Dense(2))
    model.add(Activation('softmax'))
    model.summary()
    return model
alexNet_model = CreateModel()
alexNet_model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=["accuracy"])

batch_size = 4
epochs = 5
history = alexNet_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
                            validation_data=(x_validation, y_validation))

To mitigate overfitting, you can try the steps below:
1. Shuffle the data by passing shuffle=True to alexNet_model.fit:
history = alexNet_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
                            validation_data=(x_validation, y_validation), shuffle=True)
2. Use early stopping (pass the callback to fit via callbacks=[callback]):
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)
3. Use regularization (you can try l1 or l1_l2 regularization as well):
from tensorflow.keras.regularizers import l2
regularizer = l2(0.001)
alexNet_model.add(Conv2D(96, (11, 11), input_shape=(227,227,3), strides=(4,4), padding='valid', activation='relu', data_format='channels_last',
                         activity_regularizer=regularizer, kernel_regularizer=regularizer))
alexNet_model.add(Dense(units=2, activation='softmax',
                        activity_regularizer=regularizer, kernel_regularizer=regularizer))
4. Try BatchNormalization (see the sketch after this list).
5. Perform image data augmentation using ImageDataGenerator; the Keras documentation has more details (also shown in the sketch below).
6. If the pixels are not normalized, dividing the pixel values by 255 also helps.
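As a concrete illustration of steps 2 and 4–6, here is a minimal sketch (my own addition, reusing the variables from the question; the augmentation parameters are illustrative assumptions):

import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Step 6: normalize pixel values to [0, 1]
x_train = x_train / 255.0
x_validation = x_validation / 255.0

# Step 5: augment the training images on the fly
datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

# Step 4: inside CreateModel(), BatchNormalization would go between a
# convolution and its activation, e.g.:
#   model.add(Conv2D(filters=96, input_shape=(227,227,3), kernel_size=(11,11), strides=(4,4), padding='valid'))
#   model.add(BatchNormalization())
#   model.add(Activation('relu'))

# Step 2: stop training when the validation loss stops improving
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)
history = alexNet_model.fit(datagen.flow(x_train, y_train, batch_size=batch_size),
                            epochs=epochs, verbose=1,
                            validation_data=(x_validation, y_validation),
                            callbacks=[callback])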

Related

CNN-LSTM Timeseries input for TimeDistributed layer

I created a CNN-LSTM for survival prediction of web sessions; my training data looks as follows:
print(x_train.shape)
(288, 3, 393)
with (samples, timesteps, features), and my model:
model = Sequential()
model.add(TimeDistributed(Conv1D(128, 5, activation='relu'),
                          input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(TimeDistributed(MaxPooling1D()))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(64, stateful=True, return_sequences=True))
model.add(LSTM(16, stateful=True))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=Adam(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
However, the TimeDistributed layer requires a minimum of 3 dimensions; how should I transform the data to get this to work?
Thanks a lot!
Your data are already in 3D format, which is all you need to feed a Conv1D or an LSTM, so you can drop the TimeDistributed wrapper. If your target is 2D, remember to set return_sequences=False in your last LSTM cell.
Using a Flatten before an LSTM is a mistake, because it destroys the 3D structure the LSTM needs.
Also pay attention to the pooling operation, so that you don't end up with a negative time dimension to reduce (I use 'same' padding in the convolution below to avoid this).
Below is an example for a binary classification task:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

n_sample, time_step, n_features = 288, 3, 393
X = np.random.uniform(0, 1, (n_sample, time_step, n_features))
y = np.random.randint(0, 2, n_sample)

model = Sequential()
model.add(Conv1D(128, 5, padding='same', activation='relu',
                 input_shape=(time_step, n_features)))
model.add(MaxPooling1D())
model.add(LSTM(64, return_sequences=True))
model.add(LSTM(16, return_sequences=False))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=3)
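If you specifically want to keep the TimeDistributed wrapper instead, one option (my own sketch, not part of the original answer) is to add a trailing channel axis so each timestep becomes a 2-D tensor that Conv1D can consume; note that stateful=True is dropped, since stateful LSTMs require a fixed batch_input_shape:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Conv1D, MaxPooling1D, Flatten, LSTM, Dense

n_sample, time_step, n_features = 288, 3, 393
X = np.random.uniform(0, 1, (n_sample, time_step, n_features))
X = X[..., np.newaxis]  # (288, 3, 393, 1): each timestep is now (393, 1)
y = np.random.randint(0, 2, n_sample)

model = Sequential()
model.add(TimeDistributed(Conv1D(128, 5, padding='same', activation='relu'),
                          input_shape=(time_step, n_features, 1)))
model.add(TimeDistributed(MaxPooling1D()))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(64, return_sequences=True))
model.add(LSTM(16))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=3)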

How to get weights from a Keras model?

I'm trying to build a 2-layer neural network for the MNIST dataset, and I want to get the weights from my model.
I found a similar question here on SO and I tried this:
model.get_weights()
But it returned 11 arrays when I checked len(model.get_weights()). Isn't it supposed to return 3 weight matrices? I have even disabled the bias.
model = Sequential()
model.add(Flatten(input_shape = (28, 28)))
model.add(Dense(512, activation='relu', kernel_initializer='he_normal', use_bias=False,))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(128, activation='relu', kernel_initializer='he_normal', use_bias=False,))
model.add(BatchNormalization())
model.add(Dropout(0.1))
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', use_bias=False,))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
result = model.fit(x_train, y_train, validation_split=0.25, epochs=10,
                   batch_size=128, verbose=1)
The 11 arrays come from your two BatchNormalization layers: each BatchNormalization layer holds four arrays (gamma, beta, moving mean, and moving variance), which together with the three Dense kernels gives 3 + 2 × 4 = 11. To get the weights of a particular layer, you can retrieve that layer by its name and call get_weights on it (as shubham-panchal said in their comment).
For example:
model.get_layer('dense').get_weights()
or
model.get_layer('dense_2').get_weights()
You can also go through the layers of your model and retrieve each one's name and weights:
{layer.name: layer.get_weights() for layer in model.layers}
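To inspect where each array lives, here is a small sketch (my own addition) that prints each layer's weight shapes:

for layer in model.layers:
    # Flatten and Dropout print empty lists; Dense prints one kernel
    # (use_bias=False); BatchNormalization prints four arrays
    print(layer.name, [w.shape for w in layer.get_weights()])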

Keras Model for Molecular Activity

I am experimenting with the Merck molecular activity challenge, and I have created the train and test datasets.
The shape of the data is the following:
x_train.shape=(1452, 4306)
y_train.shape=(1452, 1)
x_test.shape=(363, 4306)
y_test.shape=(363, 1)
I have used the Dense layer for defining the model as follows:
model = Sequential()
model.add(Dense(100, activation="relu", input_shape=(4306,)))
model.add(Dense(50, activation="relu"))
model.add(Dense(25, activation="relu"))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1))
# Compile the model
model.compile(
    loss='categorical_crossentropy',
    optimizer="adam",
)
model.summary()

# Train the model
model.fit(
    x_train,
    y_train,
    batch_size=300,
    epochs=900,
    validation_data=(x_test, y_test),
    shuffle=True
)
While trying the above code, the following error occurred:
ValueError: Input 0 is incompatible with layer flatten_23: expected min_ndim=3, found ndim=2
How can I resolve this error?
Just remove the flatten layer:
model = Sequential()
model.add(Dense(100, activation="relu", input_shape=(4306,)))
model.add(Dense(50, activation="relu"))
model.add(Dense(25, activation="relu"))
model.add(Dropout(0.25))
model.add(Dense(1))
The data flowing into these Dense layers is essentially 1-D per sample (ignoring the batch dimension), so there is nothing to flatten: the data entering the Flatten layer is already 1-D, which is why it complains about finding ndim=2.
EDIT -- for regression:
Categorical crossentropy is not an appropriate loss function for regression; you need mean squared error, which is commonly used for regression tasks:
model.compile(
    loss='mse',
    optimizer="adam",
)
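Putting both fixes together, here is a minimal end-to-end sketch (my own addition; random arrays stand in for the real dataset) showing that the corrected model compiles and trains:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Placeholder data with the shapes from the question
x_train = np.random.uniform(0, 1, (1452, 4306)).astype('float32')
y_train = np.random.uniform(0, 1, (1452, 1)).astype('float32')

model = Sequential()
model.add(Dense(100, activation="relu", input_shape=(4306,)))
model.add(Dense(50, activation="relu"))
model.add(Dense(25, activation="relu"))
model.add(Dropout(0.25))
model.add(Dense(1))  # linear output for regression
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
model.fit(x_train, y_train, batch_size=300, epochs=3, shuffle=True)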

Keras TensorBoard: visualize conv kernels

I am using Keras with TensorFlow as the backend.
Now I want to use the TensorBoard callback to visualize my conv layer kernels.
But I can only see the first conv layer's kernels in TensorBoard, plus my Dense layers at the end.
For the other conv layers I can only see the bias values, not the kernels.
Here is my sample code for the Keras model.
tb = TensorBoard(
    log_dir=log_dir,
    histogram_freq=epochs,
    write_images=True)
# Define the DNN
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=3, input_shape=(width, height, depth), name="conv1"))
model.add(Activation("relu"))
model.add(Conv2D(filters=16, kernel_size=3, name="conv2"))
model.add(Activation("relu"))
model.add(MaxPool2D())
model.add(Conv2D(filters=32, kernel_size=3, name="conv3"))
model.add(Activation("relu"))
model.add(Conv2D(filters=32, kernel_size=3, name="conv4"))
model.add(Activation("relu"))
model.add(MaxPool2D())
model.add(Flatten())
model.add(Dense(128))
model.add(Activation("relu"))
model.add(Dense(num_classes, name="features"))
model.add(Activation("softmax"))
# Print the DNN layers
model.summary()
# Train the DNN
lr = 1e-3
optimizer = Adam(lr=lr)
model.compile(loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
model.fit(x_train, y_train, verbose=1,
          batch_size=batch_size, epochs=epochs,
          validation_data=(x_test, y_test),
          callbacks=[tb])
And this is what I see in TensorBoard (I minimized the kernels of my first conv layer):
[TensorBoard screenshot]
What am I missing to visualize all my kernels?
This is the expected (but not documented) behaviour of the TensorBoard callback. See the answer on this related bug report on the TensorBoard GitHub page:
"The TensorBoard Keras callback calls tf.summary.image without overriding the default for max_outputs, so there's no way to visualize more than the first 3 kernels via the callback at this time."
You need to visualize the kernels with your own call to tf.summary.image.
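For example, here is a TF2-style sketch (my own addition; log_dir and the layer name 'conv2' are taken from the code above as assumptions) that logs one conv layer's kernels as images. tf.summary.image only accepts 1, 3 or 4 channels, so only the first input channel of each filter is kept:

import tensorflow as tf

# Kernel tensor shape: (kernel_h, kernel_w, in_channels, out_channels)
kernels = model.get_layer('conv2').get_weights()[0]

# Rescale to [0, 1] so the kernels render as valid images
k_min, k_max = kernels.min(), kernels.max()
kernels = (kernels - k_min) / (k_max - k_min)

# Move the filter axis to the batch dimension and keep one input channel
images = tf.transpose(kernels, [3, 0, 1, 2])[..., :1]

writer = tf.summary.create_file_writer(log_dir + '/kernels')
with writer.as_default():
    tf.summary.image('conv2_kernels', images, step=0, max_outputs=16)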

Change CNN to LSTM (Keras, TensorFlow)

I have a CNN and would like to change it to an LSTM, but when I modified my code I received the same error: ValueError: Input 0 is incompatible with layer gru_1: expected ndim=3, found ndim=4
I already changed ndim, but it didn't work.
Here is my CNN:
def build_model(X, Y, nb_classes):
    nb_filters = 32       # number of convolutional filters to use
    pool_size = (2, 2)    # size of pooling area for max pooling
    kernel_size = (3, 3)  # convolution kernel size
    nb_layers = 4
    input_shape = (1, X.shape[2], X.shape[3])

    model = Sequential()
    model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                            border_mode='valid', input_shape=input_shape))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    for layer in range(nb_layers - 1):
        model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
        model.add(BatchNormalization(axis=1))
        model.add(ELU(alpha=1.0))
        model.add(MaxPooling2D(pool_size=pool_size))
        model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(nb_classes))
    model.add(Activation("softmax"))
    return model
and here is how I would like to do my LSTM:
data_dim = 41
timesteps = 20
num_classes = 10
model = Sequential()
model.add(LSTM(256, return_sequences=True, input_shape=(timesteps, data_dim)))
model.add(Dropout(0.5))
model.add(LSTM(128, return_sequences=True, input_shape=(timesteps, data_dim)))
model.add(Dropout(0.25))
model.add(LSTM(64))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
What was I doing wrong?
Thanks
The LSTM code is fine; it executes with no errors for me.
The error you are seeing relates to an internal incompatibility of the tensors within the model itself, not to the training data; a data mismatch would instead give something like "Exception: Invalid input shape".
What's confusing in your error is that it refers to a GRU layer, which isn't contained anywhere in your model definition. If your model only contained LSTMs, you would get an error that calls out the LSTM layer it conflicts with.
Perhaps check
model.get_config()
and make sure all the layers and configs are what you intended.
In particular, the first layer should say this:
'batch_input_shape': (None, 20, 41)
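As a quick smoke test (my own sketch; sparse_categorical_crossentropy assumes integer class labels), you can confirm the LSTM stack builds and trains on random data:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

data_dim, timesteps, num_classes = 41, 20, 10
X = np.random.uniform(0, 1, (32, timesteps, data_dim))
y = np.random.randint(0, num_classes, 32)

model = Sequential()
model.add(LSTM(256, return_sequences=True, input_shape=(timesteps, data_dim)))
model.add(Dropout(0.5))
model.add(LSTM(128, return_sequences=True))
model.add(Dropout(0.25))
model.add(LSTM(64))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=1)
print(model.layers[0].input_shape)  # (None, 20, 41)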
