I have a classification problem whose target contains 5 classes, with 15 features (all continuous), 1 million rows of training data, and 0.5 million rows of validation data.
e.g.,
shape of X_train = (1000000,15)
shape of X_validation = (500000,15)
First, I used a Random Forest, which gets 88% average accuracy.
After that I tried many neural network architectures; the best one gets ~80% average accuracy on both the training and validation data, which is worse than the Random Forest.
(I don't know much about designing neural network architectures.)
The following is the best of my NN architectures (~80% average accuracy):
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adadelta

model = Sequential()
model.add(Dense(1000, input_dim=15, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(900, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(800, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(700, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dense(5, activation='softmax'))  # output layer

adadelta = Adadelta()
model.compile(loss='categorical_crossentropy', optimizer=adadelta, metrics=['accuracy'])
Batch size = 128 and epochs = 100.
I have read this question. The answer points out that an NN needs a large amount of data and some regularization. I think my data size is good enough, and I have also tried a higher dropout rate and L2 regularization, but it still doesn't work.
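For reference, L2 regularization on a Dense layer in Keras looks roughly like this (a minimal sketch; the 0.01 penalty factor is just an example value, not necessarily what I used):

from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

# Sketch: the same first layer, with an L2 penalty on its weights
model_l2 = Sequential()
model_l2.add(Dense(1000, input_dim=15, activation='relu', kernel_regularizer=l2(0.01)))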
What could the problem be?
This is biological data and I have no domain knowledge, so I'm sorry that I can't explain it. I've plotted the feature distributions; all features lie between 0 and 3.
Dataset: train.csv
Approach
I have four classes to predict and they are heavily imbalanced, so I tried using SMOTE with a feed-forward network, but SMOTE gives very poor results on the test data compared to the original dataset.
Model architecture
# model architecture
import tensorflow as tf
from tensorflow.keras.layers import Dense, BatchNormalization, Dropout

model = tf.keras.Sequential()
model.add(Dense(512, activation='relu', input_shape=(7,)))
model.add(BatchNormalization())
model.add(Dense(256, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(64, activation='relu'))
model.add(Dense(4, activation='softmax'))  # 4 output classes

# early stopping on validation loss (passed to model.fit via callbacks=[earlystopping])
earlystopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=40,
    mode="auto",
    restore_best_weights=True,
)

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
How should I approach this problem and increase the F1-score on the test dataset?
Any help is appreciated.
Below is an explanation of what could be the best approach for your case.
SMOTE
SMOTE balances out the data by oversampling the minority class: if Class A has 15,000 records and Class B has 200, it will generate synthetic Class B samples (by interpolating between existing minority-class neighbours) until Class B also has 15,000 records.
Generating that many synthetic samples from just 200 real records can make it very hard for the model to learn and differentiate between classes, since the new points carry little information beyond the original 200.
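For concreteness, this is the usual imbalanced-learn pattern being described (a minimal sketch; X_train and y_train are placeholder names):

from imblearn.over_sampling import SMOTE

# fit_resample synthesizes minority-class points until all classes are balanced
sm = SMOTE(random_state=42)
X_resampled, y_resampled = sm.fit_resample(X_train, y_train)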
Possible Solutions
Instead of SMOTE, I would recommend trying stratified sampling for the train/test split and then building the model on top of it.
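A minimal sketch with scikit-learn (assuming features X and labels y):

from sklearn.model_selection import train_test_split

# stratify=y keeps the class proportions identical in train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)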
Passing class weights as a parameter is another good approach, and it is available for almost all ML algorithms. In your case, for Keras, you can refer here; it could be very helpful.
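A minimal sketch of what that looks like in Keras (assuming integer labels in y_train and one-hot labels in y_train_onehot; derive the weights from the class frequencies rather than hand-picking them):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# weight each class inversely to its frequency
weights = compute_class_weight('balanced', classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))

model.fit(X_train, y_train_onehot, epochs=50, batch_size=64,
          class_weight=class_weight, callbacks=[earlystopping])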
I hope someone can point out where I am going wrong with my RNN. The long and short of my problem is that no matter the structure of my network, the predictions always look like this:
I have tried 1, 2, 3, and 4 layers of LSTMs, each with varying neuron counts and either relu or tanh activation functions. For the above image, the network was set up as:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

model = Sequential()
# length: timesteps per input window; scaled_train_data: 2-D array of scaled features
model.add(LSTM(128, activation='relu', return_sequences=True,
               input_shape=(length, scaled_train_data.shape[1])))
model.add(LSTM(256, activation='relu', return_sequences=True))
model.add(LSTM(256, activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))  # final LSTM returns only the last state
model.add(Dense(scaled_train_data.shape[1]))  # one output per feature
model.compile(optimizer='adam', loss="mse")
The actual training of the model passes OK, without incident:
My data is financial data. There are around 70k rows, and I use approximately a 70/30 train/test split.
Where am I going wrong? Thanks!
From asking around and reading up, it seems RNNs might not be the best solution for financial / random-walk data, at least with the setup I am using. I wonder if using averages might produce better results?
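One sanity check that supports this: for a true random walk, the best point forecast of the next value is simply the current value, so any model should first be compared against that persistence baseline (a minimal sketch; the column choice is hypothetical):

import numpy as np

# predict each value as the previous observation and measure the error
series = scaled_train_data[:, 0]  # e.g. the first feature column
baseline_mse = np.mean((series[1:] - series[:-1]) ** 2)
print(f"persistence baseline MSE: {baseline_mse:.6f}")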
Anyway, moving on to Reinforcement Learning.
I've trained a simple GRU with an attention layer and now I'm trying to visualize the attention weights (I've already got them). The input is 2 one-hot encoded sequences (one is correct, the other is almost the same but has letter permutations). The task is to determine which of the sequences is correct.
Here's my NN:
import keras
from keras.layers import GRU, Dropout, Flatten, Dense
from keras_self_attention import SeqSelfAttention

optimizer = keras.optimizers.RMSprop()
max_features = 4  # number of words in the dictionary
num_classes = 2

model = keras.Sequential()
model.add(GRU(128, input_shape=(70, max_features), return_sequences=True, activation='tanh'))
model.add(Dropout(0.5))
# model.add() returns None, so keep a reference to the layer object itself
atn_layer = SeqSelfAttention()
model.add(atn_layer)
model.add(Flatten())
model.add(Dense(num_classes, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])
I've tried several things found on StackOverflow but didn't succeed. In particular, I don't understand how to couple my input with the attention weights. I'd appreciate any help and suggestions.
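For what it's worth, one common way to line attention weights up with the input is a one-row heatmap with the decoded tokens as tick labels (a minimal sketch; weights and tokens here are placeholders for one sequence's real attention vector and its decoded characters):

import numpy as np
import matplotlib.pyplot as plt

weights = np.random.rand(70)      # placeholder: one sequence's attention weights
tokens = list("ACGT" * 18)[:70]   # placeholder: the decoded input sequence

fig, ax = plt.subplots(figsize=(12, 1.5))
ax.imshow(weights[np.newaxis, :], aspect='auto', cmap='viridis')
ax.set_xticks(range(len(tokens)))
ax.set_xticklabels(tokens, fontsize=6)
ax.set_yticks([])
plt.show()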
My model is experiencing wild, large fluctuations in the validation loss and does not converge.
I am doing an image recognition project with my three dogs, i.e. classifying which dog is in the image. Two dogs are very similar and the third is very different. I took a 10-minute video of each dog, separately, and extracted frames as images at one-second intervals. My dataset consists of about 1800 photos, 600 of each dog.
This block of code is responsible for augmenting and creating the data to feed the model.
import numpy as np
from keras.utils import to_categorical
from keras.preprocessing.image import ImageDataGenerator

randomize = np.arange(len(imArr))  # imArr is the numpy array of all the images
np.random.shuffle(randomize)       # shuffle the images and labels together
imArr = imArr[randomize]
imLab = imLab[randomize]           # imLab is the array of labels of the images
lab = to_categorical(imLab, 3)

gen = ImageDataGenerator(zoom_range=0.2, horizontal_flip=True, vertical_flip=True,
                         validation_split=0.25)
train_gen = gen.flow(imArr, lab, batch_size=64, subset='training')
test_gen = gen.flow(imArr, lab, batch_size=64, subset='validation')
This picture is the result of the model below.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, BatchNormalization, Flatten, Dense, Dropout
from keras.optimizers import SGD, Adam

model = Sequential()
model.add(Conv2D(16, (11, 11), strides=1, input_shape=(imgSize, imgSize, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(BatchNormalization(axis=-1))
model.add(Conv2D(32, (5, 5), strides=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(BatchNormalization(axis=-1))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(BatchNormalization(axis=-1))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(BatchNormalization(axis=-1))
model.add(Dropout(0.3))
# Fully connected layer
model.add(Dense(256))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(3))
model.add(Activation('softmax'))

sgd = SGD(lr=0.004)  # defined but not used below; compile() is given Adam() instead
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])

batch_size = 64
epochs = 100
model.fit_generator(train_gen, steps_per_epoch=len(train_gen), epochs=epochs,
                    validation_data=test_gen, validation_steps=len(test_gen), shuffle=True)
Things I have tried.
High/low Learning rate ( 0.01 -> 0.0001)
Increase Dropout to 0.5 in both Dense layers
Increase/Decrease size of both Dense Layers ( 128 min -> 4048 max)
Increased number of CNN layers
Introduced Momentum
Increased/Decreased Batch Size
Things I have not tried
I have not used any other loss or metric
I have not used any other optimiser.
Have not adjusted any parameters of the CNN layers
It seems that there is some source of randomness, or too many parameters, in my model. I am aware that it is currently overfitting, but that should not be the cause of the volatility(?).
I am not too worried about the model's performance; I would like to achieve about 70% accuracy. All I want to do now is stabilise the validation accuracy and get it to converge.
Note:
At some epochs, the training loss is very low (< 0.1) but the validation loss is very high (> 3).
The videos were taken against different backgrounds, but with roughly the same amount of each background for each dog.
Some images are a bit blurry.
Change the optimizer to Adam; it is definitely better. In your code you are using it, but with default parameters: you create an SGD optimizer, yet in the compile line you pass an Adam with no parameters. Play with the actual parameters of your optimizer.
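For example (a minimal sketch; lr=1e-4 is only a starting point to tune, using the same lr= argument style as the Keras version in the question):

from keras.optimizers import Adam

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=1e-4),  # set the learning rate explicitly and tune it
              metrics=['accuracy'])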
I encourage you to take out the dropout first and see what happens; then, if you manage to overfit, start with low dropout and work your way up.
It might also be that some of your validation samples are very hard to classify and thus drive up the loss. Try taking the shuffling out of the validation set and watch for any periodicities, to find out whether particular validation samples are hard to classify.
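A minimal sketch of an unshuffled validation generator, using the names from the question:

# shuffle=False keeps validation batches in a fixed order, so loss spikes
# can be traced back to specific samples
test_gen = gen.flow(imArr, lab, batch_size=64, subset='validation', shuffle=False)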
Hope it helps!
I see you have tried a lot of different things. A few suggestions:
You use large filters in your Conv2D layers, e.g. 11x11 and 5x5. If your image dimensions are not very big, you should definitely go for smaller filter sizes like 3x3.
Try different optimizers; try Adam with varying learning rates if you haven't.
Otherwise, I don't see many problems. Maybe you need more data for the network to learn better.
I'm implementing a CNN for speech recognition. The input is MEL frequencies with shape (85314, 99, 1) and the labels are one-hot encoded with 35 output classes (shape: (85314, 35)). When I run the model, the training accuracy (image 2) starts high and stays the same over the epochs, while the validation loss (image 1) increases. So it is probably overfitting, but I cannot find the origin of the issue. I have already decreased the learning rate and played with batch sizes, but the results stay the same. The amount of training data should also be sufficient. Is there another issue with my hyper-parameter settings somewhere?
My model and hyper-parameters are defined as follows:
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dropout, Dense
from keras.initializers import RandomUniform

# hyperparameters (note: learning_rate, momentum, and hidden_initializer are
# defined but never wired in below; optimizer='sgd' uses Keras defaults)
input_dimension = 85314
learning_rate = 0.0000025
momentum = 0.85
hidden_initializer = RandomUniform(seed=1)
dropout_rate = 0.2

# create model
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=3, input_shape=(99, 1), activation='relu'))
model.add(Conv1D(filters=16, kernel_size=1, activation='relu'))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(64, activation='relu'))
model.add(Dense(35, activation='softmax'))

model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['acc'])
history = model.fit(frequencies_train, labels_hot, validation_split=0.2, epochs=10, batch_size=50)
You are using "binary_crossentropy" for a multi-class problem. Change it to "categorical_crossentropy".
The accuracy Keras computes with binary_crossentropy on a model with more than 2 labels is simply wrong.
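A minimal sketch of the corrected compile call (the rest of the model unchanged):

# with one-hot labels and a softmax output, categorical cross-entropy makes
# Keras report true multi-class accuracy
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])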