How to calculate ROC_AUC score having 3 classes - python

I have data with 3 class labels (0, 1, 2). I tried to make a ROC curve and did it by using the pos_label parameter:
fpr, tpr, thresholds = metrics.roc_curve(Ytest, y_pred_prob, pos_label = 0)
By changing pos_label to 0, 1, 2 I get 3 graphs. Now I am having an issue calculating the AUC score.
How can I average the 3 graphs, plot 1 graph from them, and then calculate the ROC AUC score?
I get an error from this:
metrics.roc_auc_score(Ytest, y_pred_prob)
ValueError: multiclass format is not supported
Please help me.
import matplotlib.pyplot as plt
from sklearn import metrics

# store the predicted probabilities for class 0
y_pred_prob = cls.predict_proba(Xtest)[:, 0]
#first argument is true values, second argument is predicted probabilities
fpr, tpr, thresholds = metrics.roc_curve(Ytest, y_pred_prob, pos_label = 0)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
# store the predicted probabilities for class 1
y_pred_prob = cls.predict_proba(Xtest)[:, 1]
#first argument is true values, second argument is predicted probabilities
fpr, tpr, thresholds = metrics.roc_curve(Ytest, y_pred_prob, pos_label = 1)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
# store the predicted probabilities for class 2
y_pred_prob = cls.predict_proba(Xtest)[:, 2]
#first argument is true values, second argument is predicted probabilities
fpr, tpr, thresholds = metrics.roc_curve(Ytest, y_pred_prob, pos_label = 2)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
The above code generates 3 ROC curves, one per class, because the problem is multi-class.
I want a single ROC curve obtained from the 3 above by taking their average (mean), and then a single ROC AUC score from it.

Key points about multi-class AUC:
You cannot calculate a single common AUC over all classes directly; you must calculate the AUC for each class separately, just as recall and precision are calculated per class in multi-class classification.
The simplest method of calculating the AUC for the individual classes:
We choose a classifier
from sklearn.linear_model import LogisticRegression
LRE = LogisticRegression(solver='lbfgs')
LRE.fit(X_train, y_train)
Then I make a list of the classes:
d = y_test.unique()
class_name = list(d.flatten())
class_name
Now calculate the AUC for each class separately
for p in class_name:
    # probability column for class p (LRE.classes_ gives the column order of predict_proba)
    class_index = list(LRE.classes_).index(p)
    fpr, tpr, thresholds = metrics.roc_curve(y_test, LRE.predict_proba(X_test)[:, class_index],
                                             pos_label=p)
    auroc = round(metrics.auc(fpr, tpr), 2)
    print('LRE', p, '--AUC--->', auroc)

For multiclass problems it is often useful to calculate the AUROC for each class separately. For example, here is an excerpt from code I use to do that, where label_meanings is a list of strings describing what each label is, and the arrays are formatted so that each row is a different example and each column corresponds to a different label:
import sklearn.metrics

for label_number in range(len(label_meanings)):
    which_label = label_meanings[label_number]  # descriptive string for the label
    true_labels = true_labels_array[:, label_number]
    pred_probs = pred_probs_array[:, label_number]
    # AUROC (sliding across multiple decision thresholds)
    fpr, tpr, thresholds = sklearn.metrics.roc_curve(y_true=true_labels,
                                                     y_score=pred_probs,
                                                     pos_label=1)
    auc = sklearn.metrics.auc(fpr, tpr)
If you want to plot an average ROC curve across your three classes: this scikit-learn example, https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html, includes code that computes the average so you can make such a plot (if you have three classes, it will plot the average AUC over the three classes).
If you just want an average AUC across your three classes: once you have calculated the AUC of each class separately you can average the three numbers to get an overall AUC.
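A minimal sketch of that macro-averaging on a toy 3-class problem (the dataset, classifier, and variable names here are illustrative stand-ins, not from the question):
import numpy as np
from sklearn import metrics
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# toy 3-class problem standing in for the question's Xtest/Ytest
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6, random_state=0)
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, random_state=0)
cls = LogisticRegression(max_iter=1000).fit(Xtrain, Ytrain)
proba = cls.predict_proba(Xtest)

# one AUC per class, then a plain (unweighted) average of the three numbers
per_class_aucs = []
for class_label in [0, 1, 2]:
    fpr, tpr, _ = metrics.roc_curve(Ytest, proba[:, class_label], pos_label=class_label)
    per_class_aucs.append(metrics.auc(fpr, tpr))

print('Per-class AUCs:', [round(a, 3) for a in per_class_aucs])
print('Macro-average AUC:', round(np.mean(per_class_aucs), 3))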
If you want more background on AUROC and how it is calculated for the single-class versus the multi-class case, see the article Measuring Performance: AUC (AUROC).


How to plot average ROC and AUC in Python?

I need to perform a classification of users using binary classification (user 1 or 0 in each case).
I have 30 users, so there are 30 sets of FPR and TPR.
I did not use roc_curve(y_test.ravel(), y_score.ravel()) to get the FPR and TPR (there is a reason for this: I have to classify each user with a separate binary classifier and generate the FPR and TPR with my own code).
My setting is that I did not store the class labels as multi-class. What I did was take one user as the positive class and the rest as the negative class, and repeat this for all other users. Then I calculated the FPR and TPR with my own code, without using roc_auc_score.
Let's say I already have the values of FPR and TPR in a list.
I have this code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from scipy import interp

n_classes = 30

# First aggregate all false positive rates (classified using SVC)
all_fpr = np.unique(np.concatenate([fpr_svc[i] for i in range(n_classes)]))

# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += interp(all_fpr, fpr_svc[i], tpr_svc[i])

# Finally average it and compute AUC
mean_tpr /= n_classes
fpr = all_fpr[:]
tpr = mean_tpr[:]

plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='Mean ROC')
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Acceptance Rate')
plt.ylabel('True Acceptance Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
But it produced this figure, which looks weird.
Moreover, how do I get the average AUC as well?
I'm not sure I understood your setting, but in general you can compute the average AUC for a multi-class classifier using sklearn's roc_auc_score. If by average you mean comparing each class with every other class, use ovo (one-vs-one). Otherwise, if you prefer to compare each class to all the others together, use ovr (one-vs-rest). Here's the documentation for the multi_class parameter:
multi_class : {'raise', 'ovr', 'ovo'}, default='raise'
Multiclass only. Determines the type of configuration to use. The default value raises an error, so either 'ovr' or 'ovo' must be passed explicitly.
'ovr': Computes the AUC of each class against the rest [3] [4]. This treats the multiclass case in the same way as the multilabel case. Sensitive to class imbalance even when average == 'macro', because class imbalance affects the composition of each of the 'rest' groupings.
'ovo': Computes the average AUC of all possible pairwise combinations of classes [5]. Insensitive to class imbalance when average == 'macro'.
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html
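For example, a minimal sketch of both options on a toy 3-class problem (the dataset and variable names are illustrative):
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# toy 3-class problem standing in for the real data
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

proba = clf.predict_proba(X_test)  # shape (n_samples, n_classes)
print('OvR macro AUC:', roc_auc_score(y_test, proba, multi_class='ovr', average='macro'))
print('OvO macro AUC:', roc_auc_score(y_test, proba, multi_class='ovo', average='macro'))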

Why are my precision-recall and ROC curves not smooth?

I have some data labeled as either 0 or 1 and I am trying to predict these classes using a random forest. Each instance is described by 20 features that are used to train the random forest (~30,000 training instances and ~6,000 test instances).
I am plotting the precision-recall and ROC curves using the following code:
from sklearn.metrics import precision_recall_curve, roc_curve, auc
import matplotlib.pyplot as plt

precision, recall, _ = precision_recall_curve(y_test, y_pred)
plt.step(recall, precision, color='b', alpha=0.2, where='post')
plt.fill_between(recall, precision, step='post', alpha=0.2, color='b')

fpr, tpr, _ = roc_curve(y_test, y_pred)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
All the PR and ROC curves I have seen so far show a gradual, jagged decline in precision as recall increases and a gradual increase in the ROC line. But my PR and ROC curves always look like this: they have only a single point where they change direction. Is this due to a coding error on my part, or is it something inherent to the data/classification problem? If so, how can this behavior be explained?
I suspect you used the RandomForestClassifier.predict() method which results in either 0 or 1 depending on the predicted class.
To get the probability, which is the fraction of trees voted for a specific class, you have to use the RandomForestClassifier.predict_proba() method.
Using these probabilities as input for your curve calculations should fix the problem.
EDIT: The curve creation methods of scikit-learn sort the predictions first according to the prediction score and then according to their real/observed value, which is why the curves have these "bends".
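A minimal sketch of that fix for the ROC curve, on a toy binary problem (the data and variable names are illustrative; the point is feeding predict_proba scores, not hard 0/1 predictions, into roc_curve):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split

# toy binary problem standing in for the question's data
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# probability of the positive class, one value per test example
y_scores = rf.predict_proba(X_test)[:, 1]

fpr, tpr, thresholds = roc_curve(y_test, y_scores)
print('ROC AUC from probabilities:', round(auc(fpr, tpr), 3))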
Inside the precision_recall_curve, the y_pred must be the probabilities of the target class AND NOT the actual predicted class.
Since you are using a RandomForestClassifier, use predict_proba(X) to get the probabilities.
rf = RandomForestClassifier()
rf.fit(X_train, y_train)

# probability of the positive class (second column), not the full (n_samples, 2) array
probas_pred = rf.predict_proba(X_test)[:, 1]
precision, recall, _ = precision_recall_curve(y_test, probas_pred)
plt.step(recall, precision, color='b', alpha=0.2, where='post')
plt.fill_between(recall, precision, step='post', alpha=0.2, color='b')

Identical AUC for each fold in cross-validation ROC curves [Python]

UPDATE
I randomly and independently shuffled the data as @Paul suggested, and my classifier now has random performance.
I have an imbalanced dataset with around 200,000 instances and 50 predictors. The imbalance has a 4:1 ratio in favor of the negative class (i.e. class 0). In other words, the negative class makes up around 80% of the samples and the positive class just 20%.
It's a binary classification problem where I have a target vector with 0's and 1's.
I have been trying to fit several classifiers like logistic regression and random forest.
I evaluate them with cross-validation, skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=999), and ROC curves via roc_curve from sklearn v0.18.
My Problem
The ROC curves for each validation fold are almost the same and I have no idea why. The AUC is identical across folds and always absurdly good (0.9), although the precision-recall curve shows a worse AUC of 0.74 (which I think is more accurate).
I tried following this example for ROC with cross-validation: http://lijiancheng0614.github.io/scikit-learn/auto_examples/model_selection/plot_roc_crossval.html#example-model-selection-plot-roc-crossval-py
[Figure: ROC curves, logistic regression]
[Figure: ROC curves with confidence interval, logistic regression, zoomed in]
[Figure: precision-recall curves]
The question:
Why does the performance of the model seem to be similar on each fold? Shouldn't the AUC differ at least slightly?
Code Below
import numpy as np
import matplotlib.pyplot as plt
from scipy import interp
from sklearn import linear_model
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold
from sklearn.utils import shuffle

X, y = shuffle(X, y, random_state=0)
classifier = linear_model.LogisticRegression(class_weight="balanced")
classifier.fit(X, y)

fig, ax1 = plt.subplots(figsize=(12, 8))
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
n_folds = 5

skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=999)
for i, (train_index, test_index) in enumerate(skf.split(X, y)):
    # calculate the probability of each class on the held-out fold
    probas_ = classifier.fit(X[train_index], y[train_index]).predict_proba(X[test_index])
    # compute ROC curve and area under the curve
    fpr, tpr, thresholds = roc_curve(y[test_index], probas_[:, 1], pos_label=1)
    mean_tpr += interp(mean_fpr, fpr, tpr)
    mean_tpr[0] = 0.0
    roc_auc = auc(fpr, tpr)
    plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i + 1, roc_auc))

plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random', lw=2)

mean_tpr /= n_folds
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--', label='Mean ROC (area = %0.2f)' % mean_auc, lw=3)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate (1 - specificity)', fontsize=18)
plt.ylabel('True Positive Rate (sensitivity)', fontsize=18)
plt.legend(loc="lower right")
plt.show()

roc curve with sklearn [python]

I have a problem understanding how to use the ROC functions.
I want to plot a ROC curve with Python:
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html
I am writing a program which evaluates detectors (Haar cascade, neural networks) and I want to evaluate them.
So I already have the data saved in a file in the following format:
0.5 TP
0.43 FP
0.72 FN
0.82 TN
...
where TP means True Positive, FP - False Positive, FN - False Negative, TN - True Negative.
I parse it and fill 4 arrays with this data set.
Then I want to put this in
fpr, tpr, thresholds = sklearn.metrics.roc_curve(y_true, y_score, sample_weight=None)
but how do I do this? What is y_true in my case and what is y_score?
Afterwards, I put fpr and tpr in
auc = sklearn.metrics.auc(fpr, tpr)
Quoting Wikipedia:
The ROC curve is created by plotting the TPR (true positive rate) against the FPR (false positive rate) at various threshold settings.
In order to compute FPR and TPR, you must provide the true binary value and the target scores to the function sklearn.metrics.roc_curve.
So in your case, I would do something like this :
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve
from sklearn.metrics import auc

# Compute fpr, tpr, thresholds and ROC AUC
fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)

# Plot ROC curve
plt.plot(fpr, tpr, label='ROC curve (area = %0.3f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')  # random predictions curve
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate or (1 - Specificity)')
plt.ylabel('True Positive Rate or (Sensitivity)')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
If you want a deeper understanding of how the false positive rate and the true positive rate are computed for all possible threshold values, I suggest you read this article.
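To make the mapping concrete for the data format in the question, here is a minimal sketch, assuming TP/FN rows are actual positives, FP/TN rows are actual negatives, and the number in each row is the detector's score (the toy values are the four lines shown above):
from sklearn.metrics import roc_curve, auc

# parsed (score, outcome) pairs in the question's file format
detections = [(0.5, 'TP'), (0.43, 'FP'), (0.72, 'FN'), (0.82, 'TN')]

# TP and FN rows are actual positives; FP and TN rows are actual negatives
y_true = [1 if outcome in ('TP', 'FN') else 0 for score, outcome in detections]
# the detector's score for each detection
y_score = [score for score, outcome in detections]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print('AUC:', auc(fpr, tpr))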

How to calculate AUC for One Class SVM in python?

I have difficulty plotting a OneClassSVM AUC plot in Python (I am using sklearn, which generates a confusion matrix like [[tp, fp], [fn, tn]] with fn = tn = 0).
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

fpr, tpr, thresholds = roc_curve(y_test, y_nb_predicted)
roc_auc = auc(fpr, tpr)  # this generates ValueError [1]
print("Area under the ROC curve : %f" % roc_auc)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
I want to handle error [1] and plot AUC for OneClassSVM.
[1] ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
Please see my answer on a similar question. The gist is:
OneClassSVM fundamentally doesn't support converting a decision into a probability score, so you cannot pass the necessary scores into functions that require varying a score threshold, such as for ROC or Precision-Recall curves and scores.
You can approximate this type of score by computing the max value of your OneClassSVM's decision function across your input data, call it MAX, and then score the prediction for a given observation y by computing y_score = MAX - decision_function(y).
Use these scores to pass as y_score to functions such as average_precision_score, etc., which will accept non-thresholded scores instead of probabilities.
Finally, keep in mind that ROC makes less physical sense for OneClassSVM specifically, because OneClassSVM is intended for situations with an expected, huge class imbalance (outliers vs. non-outliers), and ROC will not accurately up-weight the relative success on the small number of outliers.
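A minimal sketch of that scoring trick, assuming outliers are labelled 1 (the positive class) in the ground truth; the toy data and variable names are illustrative:
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_curve, auc

rng = np.random.RandomState(0)
X_train = rng.normal(size=(200, 2))                      # train on inliers only
X_test = np.vstack([rng.normal(size=(90, 2)),            # inliers
                    rng.uniform(-6, 6, size=(10, 2))])   # outliers
y_test = np.array([0] * 90 + [1] * 10)                   # 1 marks an outlier (positive class)

clf = OneClassSVM(gamma='auto').fit(X_train)

# decision_function is high for inliers, so invert it: larger score = more outlier-like
df = clf.decision_function(X_test)
y_score = df.max() - df

fpr, tpr, thresholds = roc_curve(y_test, y_score, pos_label=1)
print('One-Class SVM ROC AUC:', round(auc(fpr, tpr), 3))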
Use predict_proba to calculate the scores (or probabilities) that auc(y_true, y_score) asks for; the issue is with y_score. You can obtain it as shown in the following code:
from sklearn import svm
from sklearn.metrics import accuracy_score, roc_curve, auc

# Classifier - Algorithm - SVM
# fit the training dataset on the classifier
SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto', probability=True)
SVM.fit(Train_X_Tfidf, Train_Y)

# predict the labels on the validation dataset
predictions_SVM = SVM.predict(Test_X_Tfidf)

# use accuracy_score to get the accuracy
print("SVM Accuracy Score -> ", accuracy_score(predictions_SVM, Test_Y))

# use the predicted probabilities of the positive class for the ROC curve
probs = SVM.predict_proba(Test_X_Tfidf)
preds = probs[:, 1]
fpr, tpr, threshold = roc_curve(Test_Y, preds)
print("SVM Area under curve -> ", auc(fpr, tpr))
Note the difference between accuracy_score and auc(): for the AUC you need the scores of the predictions, not the predicted labels.
