I am trying to train a model for sentiment analysis, and it shows an accuracy of about 90% when splitting the data into training and testing sets. But whenever I test it on a new phrase, it gives pretty much the same result (usually in the range 0.86 - 0.95).
Here is the code:
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras import layers

sentences = data['text'].values.astype('U')
y = data['label'].values
sentences_train, sentences_test, y_train, y_test = train_test_split(sentences, y, test_size=0.2, random_state=1000)
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(sentences_train)
X_train = tokenizer.texts_to_sequences(sentences_train)
X_test = tokenizer.texts_to_sequences(sentences_test)
vocab_size = len(tokenizer.word_index) + 1
maxlen = 100
X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
embedding_dim = 50
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=maxlen))
model.add(layers.Flatten())
model.add(layers.Dense(10, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train,
                    epochs=5,
                    verbose=True,
                    validation_data=(X_test, y_test),
                    batch_size=10)
loss, accuracy = model.evaluate(X_train, y_train, verbose=False)
print("Training Accuracy: {:.4f}".format(accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print("Testing Accuracy: {:.4f}".format(accuracy))
The training data is a CSV file with 3 columns: (id, text, label (0, 1)), where 0 is positive and 1 is negative.
Training Accuracy: 0.9855
Testing Accuracy: 0.9013
Testing it on new sentences like 'This is just a text!' and 'hate preachers!' predicts pretty much the same result: [0.85], [0.83].
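Presumably the new phrases are scored along these lines (a sketch reusing the tokenizer, maxlen, and model from the code above):
# new text must go through the same tokenizer and padding as the training data
new_sentences = ['This is just a text!', 'hate preachers!']
new_seq = tokenizer.texts_to_sequences(new_sentences)
new_pad = pad_sequences(new_seq, padding='post', maxlen=maxlen)
print(model.predict(new_pad))  # sigmoid outputs in [0, 1]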
It seems that you're a victim of overfitting. In other words, your model overfits the training data. Although it's often possible to achieve high accuracy on the training set, as in your case, what we really want is to develop models that generalize well to test data (i.e., data they haven't seen before).
There are several steps you can take to prevent overfitting, such as adding dropout or weight regularization; a sketch follows below.
Also, to improve the algorithm's performance on new data, I suggest increasing the number of neurons in the Dense layer and training for more epochs.
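A minimal sketch of that idea, reusing vocab_size, embedding_dim, and maxlen from the question:
from tensorflow.keras import layers, regularizers

model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=maxlen))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))  # randomly drops half of the activations during training
model.add(layers.Dense(10, activation='relu',
                       kernel_regularizer=regularizers.l2(1e-4)))  # L2 weight penalty
model.add(layers.Dense(1, activation='sigmoid'))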
I need to create a model to predict multiple labels based on sixteen (16) input features. The dataset has 4486 instances; each instance has a different number of labels (48 distinct labels in total).
This is how the data looks:
X Data example
Y Data example
The challenge is to predict the labels for a new instance. I know the learning is the problem: the imbalance in the number of labels makes learning difficult.
I would appreciate comments and advice on how to tackle this issue.
My best result is 30% accuracy, but I've noticed it sometimes predicts the same labels and hasn't given any satisfactory results so far.
This is the model I've implemented:
import math

import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
n_inputs, n_outputs = X_train.shape[1], y_train.shape[1]
# hidden-layer width: geometric mean of input and output sizes (cast to int for Dense)
nodes = int(math.sqrt(n_inputs * n_outputs))
model = Sequential()
model.add(Dense(nodes, activation='relu'))
model.add(Flatten())
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.BinaryAccuracy(), tf.keras.metrics.AUC(), 'accuracy'])
history = model.fit(X_train, y_train, epochs=300, verbose=1, shuffle=True,
                    validation_data=(X_test, y_test), batch_size=8)
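A quick way to quantify the imbalance (a sketch, assuming y is a binary indicator matrix of shape (4486, 48)) is to look at per-label positive rates:
import numpy as np

# fraction of instances in which each of the 48 labels is positive;
# labels with very low rates are the ones the model will struggle to predict
label_freq = y.mean(axis=0)
print(np.sort(label_freq))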
I want to binary-classify breast cancer histopathological images from the BreakHis dataset (https://www.kaggle.com/ambarish/breakhis) using transfer learning and Inception ResNet v2. The goal is to freeze all layers and train the fully connected layer by adding two neurons to the model. In particular, I initially want to consider the images with magnification factor 40X (Benign: 625, Malignant: 1370). Here is a summary of what I do:
I read the images and resize them to 150x150
I partition the dataset into training, validation and test set
I load the pre-trained network Inception Resnet v2
I freeze all the layers
I add the two neurons for binary classification (1 = "benign", 0 = "malignant")
I compile the model using the Adam optimizer
I carry out the training
I make the prediction
I calculate the accuracy
This is the code:
import os

import cv2
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

data = dataset[dataset["Magnificant"]=="40X"]
def preprocessing(dataset, img_size):
    # images
    X = []
    # labels
    y = []
    i = 0
    for image in list(dataset["Path"]):
        # Resize and read the images
        X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR),
                            (img_size, img_size), interpolation=cv2.INTER_CUBIC))
        basename = os.path.basename(image)
        # Get labels
        if dataset.loc[i][2] == "benign":
            y.append(1)
        else:
            y.append(0)
        i = i + 1
    return X, y
X, y = preprocessing(data, 150)
X = np.array(X)
y = np.array(y)
# Splitting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, shuffle=True, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, stratify=y_train, shuffle=True, random_state=1)
conv_base = InceptionResNetV2(weights='imagenet', include_top=False, input_shape=[150, 150, 3])
# Freezing
for layer in conv_base.layers:
    layer.trainable = False
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
opt = tf.keras.optimizers.Adam(learning_rate=0.0002)
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
model.compile(loss=loss, optimizer=opt, metrics = ["accuracy", tf.metrics.AUC()])
batch_size = 32
train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow(X_train, y_train, batch_size=batch_size)
val_generator = val_datagen.flow(X_val, y_val, batch_size=batch_size)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
ntrain = len(X_train)
nval = len(X_val)
epochs = 70
history = model.fit_generator(train_generator,
                              steps_per_epoch=ntrain // batch_size,
                              epochs=epochs,
                              validation_data=val_generator,
                              validation_steps=nval // batch_size,
                              callbacks=[callback])
This is the output of the training at the last epoch:
Epoch 70/70
32/32 [==============================] - 3s 84ms/step - loss: 0.0499 - accuracy: 0.9903 - auc_5: 0.9996 - val_loss: 0.5661 - val_accuracy: 0.8250 - val_auc_5: 0.8521
I make the prediction:
test_datagen = ImageDataGenerator(rescale=1./255)
x = X_test
y_pred = model.predict(test_datagen.flow(x))
y_p = []
for i in range(len(y_pred)):
    if y_pred[i] > 0.5:
        y_p.append(1)
    else:
        y_p.append(0)
I calculate the accuracy:
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_p)
print(accuracy)
This is the accuracy value I get: 0.5459098497495827
Why do I get such low accuracy? I have done several tests, but I always get similar results.
When doing transfer learning, especially with frozen weights, it is extremely important to do the same pre-processing as was used when the network was originally trained.
For the InceptionResNetV2 network the pre-processing type is "tf" in the tensorflow / keras libraries, which corresponds to dividing by 127.5 and then subtracting 1 for the imagenet weights. You are instead dividing by 255.
Fortunately you do not have to wade through the code to find out what function was used, as they are exposed in the API. Simply do
train_datagen = ImageDataGenerator(preprocessing_function=tf.keras.applications.inception_resnet_v2.preprocess_input)
and do the same for the validation and test generators, as sketched below.
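A sketch of what that looks like for the other generators in the question (note also that flow() shuffles by default, so shuffle=False is needed at prediction time to keep y_pred aligned with y_test):
preprocess = tf.keras.applications.inception_resnet_v2.preprocess_input

val_datagen = ImageDataGenerator(preprocessing_function=preprocess)
test_datagen = ImageDataGenerator(preprocessing_function=preprocess)

val_generator = val_datagen.flow(X_val, y_val, batch_size=batch_size)
# shuffle=False keeps the prediction order identical to y_test
y_pred = model.predict(test_datagen.flow(X_test, shuffle=False))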
I am using an LSTM model in Keras. During the fitting stage, I added the validation_data parameter. When I plot my training vs. validation loss, there seem to be major overfitting issues: my validation loss just won't decrease.
My full data is a sequence with shape [50,]. The first 20 records are used for training and the remaining are used as test data.
I have tried adding dropout and reducing the model complexity as much as I can, and still no luck.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

# transform data to be stationary
raw_values = series.values
diff_values = difference_series(raw_values, 1)
# transform data to be supervised learning
# using a sliding window
supervised = timeseries_to_supervised(diff_values, 1)
supervised_values = supervised.values
# split data into train and test-sets
train, test = supervised_values[:20], supervised_values[20:]
# transform the scale of the data
# scale function uses MinMaxScaler(feature_range=(-1,1)) and fit via training set and is applied to both train and test.
scaler, train_scaled, test_scaled = scale(train, test)
batch_size = 1
nb_epoch = 1000
neurons = 1
X, y = train_scaled[:, 0:-1], train_scaled[:, -1]
X = X.reshape(X.shape[0], 1, X.shape[1])
testX, testY = test_scaled[:, 0:-1].reshape(-1,1,1), test_scaled[:, -1]
model = Sequential()
model.add(LSTM(units=neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]),
               stateful=True))
model.add(Dropout(0.1))
model.add(Dense(1, activation="linear"))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X, y, epochs=nb_epoch, batch_size=batch_size, verbose=0, shuffle=False,
                    validation_data=(testX, testY))
This is what it looks like when changing the number of neurons. I even tried using Keras Tuner (Hyperband) to find the optimal parameters.
import keras_tuner as kt
from tensorflow import keras

def fit_model(hp):
    batch_size = 1
    model = Sequential()
    model.add(LSTM(units=hp.Int("units", min_value=1, max_value=20, step=1),
                   batch_input_shape=(batch_size, X.shape[1], X.shape[2]),
                   stateful=True))
    # the Dense hyperparameter needs its own name; reusing "units" would tie it
    # to the same sampled value as the LSTM layer
    model.add(Dense(units=hp.Int("dense_units", min_value=1, max_value=10),
                    activation="linear"))
    model.compile(loss='mse', metrics=["mse"],
                  optimizer=keras.optimizers.Adam(
                      hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])))
    return model
X, y = train_scaled[:, 0:-1], train_scaled[:, -1]
X = X.reshape(X.shape[0], 1, X.shape[1])
tuner = kt.Hyperband(
    fit_model,
    objective='mse',
    max_epochs=100,
    hyperband_iterations=2,
    overwrite=True)
tuner.search(X, y, epochs=100, validation_split=0.2)
When evaluating the model against X_test and y_test, I get the same loss and accuracy score. But when fitting the "best model", I get this:
However, my predictions look very reasonable against my true values. What should I do to get a better fit?
20 records as training data is too small. There won't be enough variation in the training data for the model to approximate a function accurately, so your validation data will likely contain examples wildly different from those 20 training records (i.e. the model hasn't seen an example of that nature during training), resulting in a loss that is much higher.
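If more data is not available, one option worth considering is walk-forward validation, so each fold trains on as much history as possible; a minimal sketch using sklearn's TimeSeriesSplit:
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

series_values = np.arange(50)  # stand-in for the 50-step series
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(series_values):
    # each fold trains on an expanding window and validates on the points right after it
    print(len(train_idx), len(test_idx))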
I am trying to use GloVe embeddings to train an RNN model based on this article.
I have labeled data: text (tweets) in one column and labels in another (hate, offensive, or neither).
However the model seems to predict only one class in the result.
This is the LSTM model:
import time

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense, Dropout

model = Sequential()
hidden_layer = 3
gru_node = 32
# model embedding matrix here....
for i in range(0, hidden_layer):
    model.add(GRU(gru_node, return_sequences=True, recurrent_dropout=0.2))
    model.add(Dropout(dropout))
model.add(GRU(gru_node, recurrent_dropout=0.2))
model.add(Dropout(dropout))
model.add(Dense(64, activation='softmax'))
model.add(Dense(nclasses, activation='softmax'))
start = time.time()
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
fitting the model:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1)
X_train_Glove,X_test_Glove, word_index, embeddings_index = loadData_Tokenizer(X_train, X_test)
model_RNN = Build_Model_RNN_Text(word_index,embeddings_index, 20)
model_RNN.fit(X_train_Glove, y_train,
              validation_data=(X_test_Glove, y_test),
              epochs=4,
              batch_size=128,
              verbose=2)
y_preds = model_RNN.predict_classes(X_test_Glove)
print(metrics.classification_report(y_test, y_preds))
Results:
classification report
Confusion matrix
Am I missing something here?
Update:
this is what the distribution looks like
and the model summary, more or less
What does the distribution of your data look like? The first suggestion is to stratify the train/test split (see the train_test_split documentation).
The second question is how much data you have compared with the complexity of the model. Maybe your model is so complex that it simply overfits. You can use model.summary() to see the number of trainable parameters. A minimal sketch of both suggestions follows.
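A sketch of both ideas, assuming X and y are the arrays from the question:
from sklearn.model_selection import train_test_split

# stratify=y keeps the class proportions identical in the train and test splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1, stratify=y)

model.summary()  # prints layer output shapes and the number of trainable parameters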
I am having some trouble with my ANN: it is only predicting '0'. The dataset is imbalanced (10:1); however, I undersampled the training dataset, so I am unsure of what is going on. I am getting 92-93% accuracy on the balanced training set, but on testing (on an unbalanced test set) it just predicts zeroes. Unsure of where to go from here; anything helps. The data has been one-hot encoded and scaled.
import numpy as np
import pandas as pd
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# create 80/20 train-test split
train, test = train_test_split(selection, test_size=0.2)
# Class count
count_class_0, count_class_1 = train.AUDITED_FLAG.value_counts()
# Divide by class
df_class_0 = train[train['AUDITED_FLAG'] == 0]
df_class_1 = train[train['AUDITED_FLAG'] == 1]
df_class_0_under = df_class_0.sample(count_class_1)
train_under = pd.concat([df_class_0_under, df_class_1], axis=0)
print('Random under-sampling:')
print(train_under.AUDITED_FLAG.value_counts())
train_under.AUDITED_FLAG.value_counts().plot(kind='bar', title='Count (target)');
Random under-sampling:
1.0 112384
0.0 112384
#split features and labels
y_train = np.array(train_under['AUDITED_FLAG'])
X_train = train_under.drop('AUDITED_FLAG', axis=1)
y_test = np.array(test['AUDITED_FLAG'])
X_test = test.drop('AUDITED_FLAG', axis=1)
y_train = y_train.astype(int)
y_test = y_test.astype(int)
# define model
model = Sequential()
model.add(Dense(6, input_dim=179, activation='relu'))
model.add(Dense(30, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit model
history = model.fit(X_train, y_train, epochs=5, batch_size=16, verbose=1)
#validate
test_loss, test_acc = model.evaluate(X_test, y_test)
# evaluate the model
_, train_acc = model.evaluate(X_train, y_train, verbose=0)
_, test_acc = model.evaluate(X_test, y_test, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
print('test_acc:', test_acc)
# plot history
pyplot.plot(history.history['acc'], label='train')
#pyplot.plot(history.history['val_acc'], label='test')
Train: 0.931, Test: 0.921
#preds
y_pred = model.predict(X_test)
y_pred_bool = np.argmax(y_pred, axis=1)
# #plot confusion matrix
y_actu = pd.Series(y_test, name='Actual')
y_pred_bool = pd.Series(y_pred_bool, name='Predicted')
print(pd.crosstab(y_actu, y_pred_bool))
'''
Predicted       0
Actual
0          300011
1           28030
'''
This is not right:
y_pred_bool = np.argmax(y_pred, axis=1)
Argmax is only used with categorical cross-entropy loss and softmax outputs. For binary cross-entropy and sigmoid outputs, you should round the outputs, which is equivalent to thresholding predictions > 0.5:
y_pred_bool = np.round(y_pred)
This is what Keras does to compute binary accuracy.
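For illustration, with made-up probabilities, here is why argmax on a single-column sigmoid output always yields class 0:
import numpy as np

y_pred = np.array([[0.9], [0.4], [0.7]])  # sigmoid outputs, shape (n_samples, 1)
print(np.argmax(y_pred, axis=1))          # [0 0 0]: only one column, so argmax is always 0
print(np.round(y_pred).ravel())           # [1. 0. 1.]: the intended thresholded labels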