I am having difficulty plotting a ROC/AUC curve for a OneClassSVM in Python (I am using sklearn, and my confusion matrix comes out like [[tp, fp], [fn, tn]] with fn = tn = 0).
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# y_test: true labels, y_nb_predicted: predictions from the OneClassSVM
fpr, tpr, thresholds = roc_curve(y_test, y_nb_predicted)
roc_auc = auc(fpr, tpr)  # this raises the ValueError [1]
print("Area under the ROC curve : %f" % roc_auc)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
I want to handle error [1] and plot AUC for OneClassSVM.
[1] ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
Please see my answer on a similar question. The gist is:
OneClassSVM fundamentally doesn't support converting a decision into a probability score, so you cannot pass the necessary scores into functions that require varying a score threshold, such as for ROC or Precision-Recall curves and scores.
You can approximate this type of score by computing the max value of your OneClassSVM's decision function across your input data, call it MAX, and then score the prediction for a given observation y by computing y_score = MAX - decision_function(y).
Use these scores to pass as y_score to functions such as average_precision_score, etc., which will accept non-thresholded scores instead of probabilities.
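For illustration, here is a minimal sketch of that scoring trick (X_train, X_test and y_test are placeholders, and the hyperparameters are arbitrary; I am assuming y_test marks outliers as the positive class, i.e. 1 for outlier and 0 for inlier):

from sklearn.svm import OneClassSVM
from sklearn.metrics import average_precision_score

clf = OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
clf.fit(X_train)

# decision_function is large for inliers and small/negative for outliers,
# so subtracting it from its maximum turns it into an "outlier score"
scores = clf.decision_function(X_test).ravel()
y_score = scores.max() - scores

# y_test assumed to be 1 for outliers, 0 for inliers
print(average_precision_score(y_test, y_score))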
Finally, keep in mind that ROC will make less physical sense for OneClassSVM specifically because OneClassSVM is intended for situations where there is an expected and huge class imbalance (outliers vs. non-outliers), and ROC will not accurately up-weight the relative success on the small amount of outliers.
Use predict_proba to get the scores/probabilities that the AUC computation needs; the issue is with y_score. You can obtain it as shown in the following code:
from sklearn import svm
from sklearn.metrics import accuracy_score, roc_curve, auc

# Classifier - Algorithm - SVM
# Fit the classifier on the training dataset (probability=True enables predict_proba)
SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto', probability=True)
SVM.fit(Train_X_Tfidf, Train_Y)

# Predict hard labels on the validation dataset
predictions_SVM = SVM.predict(Test_X_Tfidf)

# accuracy_score works on the hard labels
print("SVM Accuracy Score -> ", accuracy_score(Test_Y, predictions_SVM))

# roc_curve/auc need the scores (probability of the positive class)
probs = SVM.predict_proba(Test_X_Tfidf)
preds = probs[:, 1]
fpr, tpr, threshold = roc_curve(Test_Y, preds)
print("SVM Area under curve -> ", auc(fpr, tpr))
Note the difference between accuracy_score and auc(): for the AUC you need the prediction scores, not the hard labels.
I am trying to figure out the best threshold for turning probability predictions (from logistic regression) into hard labels in binary classification. I read that Youden's J statistic (computing True Positive Rate minus False Positive Rate at different thresholds and picking the one with the highest TPR - FPR value) is a good way to do this. So I have put together the following:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_curve

model = LogisticRegression()
model.fit(X, y)

y_pred = model.predict(X)
y_pred_proba = model.predict_proba(X)
y_pred_proba = y_pred_proba[:, 1]  # probability of the positive class

fpr, tpr, thresholds = roc_curve(y, y_pred_proba, drop_intermediate=True)
Then, I calculated the best threshold with the following:
best_J_index = np.argmax(tpr-fpr)
best_threshold = thresholds[best_J_index]
This indeed gives a better result than the default 0.5 probability threshold, but when I evaluate every threshold in thresholds against the F1-score with the following:
for thr in thresholds:
    y_pred_hard = np.where(y_pred_proba > thr, 1, 0)
    print(f"Threshold: {thr}, F1: {f1_score(y, y_pred_hard)}")
I get a different best threshold! Is there a calculation error in my script, or do the F1-optimal and Youden-optimal thresholds simply not have to agree (and if so, why not)?
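For what it's worth, a small self-contained sketch (synthetic data via make_classification, so all names and numbers here are purely illustrative) reproduces the same effect, since the two criteria optimise different trade-offs:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_curve

# Synthetic, imbalanced binary problem, purely for illustration
X_demo, y_demo = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_demo, y_demo).predict_proba(X_demo)[:, 1]

fpr_d, tpr_d, thr_d = roc_curve(y_demo, proba)
youden_thr = thr_d[np.argmax(tpr_d - fpr_d)]

f1_per_thr = [f1_score(y_demo, (proba > t).astype(int)) for t in thr_d]
f1_thr = thr_d[int(np.argmax(f1_per_thr))]

# Youden's J weighs TPR and FPR (both classes) equally, while F1 ignores
# true negatives and balances precision against recall, so the two
# selected thresholds usually differ.
print(youden_thr, f1_thr)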
I am running a Convolutional Neural Network. After it finishes running, I use some metrics to evaluate the model's performance. Two of the metrics are auc and roc_auc_score from sklearn:
AUC function: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.auc.html?highlight=auc#sklearn.metrics.auc
AUROC function: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score
The code I am using is the following:
from sklearn import metrics

print(pred)
fpr, tpr, thresholds = metrics.roc_curve(true_classes, pred, pos_label=1)
print("-----AUC-----")
print(metrics.auc(fpr, tpr))
print("----ROC AUC-----")
print(metrics.roc_auc_score(true_classes, pred))
Where true_classes is an array of the form [0 1 0 1 1 0], where 1 is the positive label and 0 the negative.
And pred holds the model's predictions:
prediction = classifier.predict(test_final)
prediction1 = []
for preds in prediction:
    prediction1.append(preds[0])  # keep the single output value per sample
pred = prediction1
However, I am getting the same AUC and ROC AUC value every time I run a test (to be clear: within each test the AUC and ROC AUC values are identical, not that they stay the same across tests. For example, for test 1 I get AUC = 0.987 and ROC_AUC = 0.987, and for test 2 I get AUC = 0.95 and ROC_AUC = 0.95). Am I doing something wrong, or is this normal?
As per the linked documentation, metrics.auc is a general-purpose method that computes the area under any curve from the points of that curve.
metrics.roc_auc_score is a special-case method that computes the area under the ROC curve specifically.
You would not expect different results if you compute both from the same data: metrics.roc_auc_score does the same thing as applying metrics.auc to the ROC curve points, and most likely calls metrics.auc under the hood (i.e. it uses the general method for the specific task of computing the area under the ROC curve).
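As a quick sanity check, a tiny sketch with made-up labels and scores (purely illustrative) shows the two routes producing the same number:

import numpy as np
from sklearn import metrics

# Made-up labels and scores, only to compare the two code paths
y_true = np.array([0, 1, 0, 1, 1, 0])
scores = np.array([0.1, 0.8, 0.7, 0.6, 0.9, 0.4])

fpr, tpr, _ = metrics.roc_curve(y_true, scores, pos_label=1)
print(metrics.auc(fpr, tpr))                  # area under the ROC points
print(metrics.roc_auc_score(y_true, scores))  # same value, computed directly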
I am using Python 3.6 and sklearn.svm.OneClassSVM to practice one-class SVM (OSVM), and I want to calculate ROC and AUC.
I have used decision_function() to calculate ROC and AUC; the code is below.
I want to evaluate the values that I calculate with decision_function.
Can I obtain the ROC/AUC value using only the predicted labels and the real labels?
from sklearn import metrics

y_score = oneclass.decision_function(testing_data)  # oneclass: a fitted OneClassSVM
roc_auc = metrics.roc_auc_score(Y_test, y_score)
I am not sure if I understand your question completely correctly, but if you do this:
from sklearn import svm

clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
clf.fit(X_train)

y_pred_train = clf.predict(X_train)
y_score = clf.predict(X_test)  # predict returns +1 (inlier) / -1 (outlier) labels
Then you should be able to use:
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, y_score)
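To tie this back to the decision_function approach in the question: roc_auc_score accepts any continuous ranking score, so passing the decision_function output instead of the hard +1/-1 labels from predict generally gives a more informative AUC. A minimal sketch, where X_train, X_test and y_test are placeholders and y_test is assumed to use the same +1 (inlier) / -1 (outlier) convention as predict:

from sklearn import svm
from sklearn.metrics import roc_auc_score

clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
clf.fit(X_train)

# Hard labels (+1/-1): coarse, corresponds to a single ROC operating point
auc_from_labels = roc_auc_score(y_test, clf.predict(X_test))

# Continuous scores: decision_function is larger for inliers, so it ranks
# samples and yields a proper threshold-varying ROC/AUC
auc_from_scores = roc_auc_score(y_test, clf.decision_function(X_test))

print(auc_from_labels, auc_from_scores)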
I have a question about roc_curve from scikit-learn for a deep learning exercise. My data has 1 as the positive label. After training, the test accuracy is around 74%, but the ROC area under the curve (AUC) score is only 0.24.
from sklearn import metrics

y_pred = model.predict([x_test_real[:, 0], x_test_real[:, 1]])
fpr, tpr, thresholds = metrics.roc_curve(y_test_real, y_pred, pos_label=1)
roc_auc = metrics.auc(fpr, tpr)
print("roc_auc: %0.2f" % roc_auc)
If I change pos_label to 0, the AUC score becomes 0.76 (obviously):
y_pred = model.predict([x_test_real[:, 0], x_test_real[:, 1]])
fpr, tpr, thresholds = metrics.roc_curve(y_test_real, y_pred, pos_label=0)
roc_auc = metrics.auc(fpr, tpr)
print("roc_auc: %0.2f" % roc_auc)
Now I ran a small experiment: I flipped my training and testing labels (this is binary classification):
y_train_real = 1 - y_train_real
y_test_real = 1 - y_test_real
which should swap the positive and negative labels (1 becomes 0 and vice versa). Then I ran my code again, this time expecting the ROC AUC behaviour to flip as well. But no!
fpr, tpr, thresholds = metrics.roc_curve(y_test_real, y_pred, pos_label=0)
is still giving 0.80, and with pos_label=1 it gives 0.2. This is confusing me.
If I flip the positive label in my training target, should that not affect the roc_curve AUC values?
Which case is the correct analysis?
Does the output have anything to do with the loss function used? I am solving a binary classification problem of match vs. not match using "contrastive loss".
Can anyone help me here? :)
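One property that may help here (toy sketch below, with made-up labels and scores, just to illustrate the mechanics): for a fixed set of scores, swapping pos_label simply complements the AUC, so the two values always sum to 1; which one is the "correct" analysis depends on whether larger model outputs are supposed to indicate the positive class.

import numpy as np
from sklearn import metrics

# Made-up labels and scores, purely to illustrate the pos_label mechanics
y_true = np.array([0, 0, 1, 1, 1, 0])
scores = np.array([0.2, 0.6, 0.8, 0.4, 0.9, 0.1])

fpr1, tpr1, _ = metrics.roc_curve(y_true, scores, pos_label=1)
fpr0, tpr0, _ = metrics.roc_curve(y_true, scores, pos_label=0)

# Same scores, opposite notion of "positive": the two AUCs sum to 1.0
print(metrics.auc(fpr1, tpr1), metrics.auc(fpr0, tpr0))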
I have a multilabel classifier written in Keras, from which I want to compute the AUC and plot a ROC curve for every element of my test set.
Everything seems fine, except that some elements produce a ROC curve with a sloped (diagonal) segment.
I don't know how to interpret the slope in such cases.
Basically my workflow goes as follows: I have a pre-trained Keras model, features X, and binarized labels y. Every element of y is an array of length 1000; since this is a multilabel classification problem, each element of y may contain several 1s, indicating that the sample belongs to multiple classes. I therefore used the built-in binary_crossentropy loss, and the model's predictions are probability scores. Then I plot the ROC curve as follows:
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
# ...
for xi, yi in zip(X_test, y_test):
    y_pred = model.predict([xi])[0]      # per-sample probability scores
    fpr, tpr, _ = roc_curve(yi, y_pred)  # yi: binarized labels of length 1000
    plt.plot(fpr, tpr, color='darkorange', lw=0.5)
The predict method returns probabilities, since I'm using the Keras functional API.
Does anyone know why the ROC curves look like this?
I asked on the scikit-learn mailing list, and they answered:
Slope usually means there are ties in your predictions.
Which is the case in this problem.
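For anyone curious what that means concretely, here is a tiny illustration with made-up labels and scores: when several samples share exactly the same predicted score, roc_curve emits only one point for that threshold, so the plotted curve crosses all of the tied samples at once with a diagonal (sloped) segment instead of axis-parallel steps.

import numpy as np
from sklearn.metrics import roc_curve

# Made-up labels; four samples share the exact same score (a tie)
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.5, 0.5, 0.5, 0.5, 0.3, 0.1])

fpr, tpr, thresholds = roc_curve(y_true, scores)
# Only one (fpr, tpr) point is produced for the tied score 0.5, so the
# curve jumps from (0.25, 0.25) to (0.75, 0.75) along a diagonal segment.
print(np.c_[thresholds, fpr, tpr])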