I have a simple sklearn model for which I would like to track some metrics during training, specifically precision and recall.
I know that sklearn provides metric functions for that, like recall_score(y_true, y_pred).
But what I am looking for is a metric value for each training step, so that I can plot a graph like this (this one was made with PyTorch/TensorBoard):
What would be the right way to get a curve of precision and recall over training, instead of just the final scores?
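For reference, per-step values can be collected manually with an estimator that supports partial_fit; this is only a sketch under that assumption (SGDClassifier, X_train/y_train/X_val/y_val and a binary target are placeholders, since the actual model and data aren't shown):

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import precision_score, recall_score

clf = SGDClassifier(random_state=0)
classes = np.unique(y_train)
precisions, recalls = [], []
for epoch in range(50):                        # one "step" per pass over the data
    clf.partial_fit(X_train, y_train, classes=classes)
    y_pred = clf.predict(X_val)
    precisions.append(precision_score(y_val, y_pred))
    recalls.append(recall_score(y_val, y_pred))
# precisions and recalls can now be plotted against the epoch index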
How do I evaluate my MLPClassifier model? Are a confusion matrix, accuracy, and a classification report enough? Do I need a ROC curve to evaluate my MLPClassifier results? Aside from that, how can I plot the loss for both the test and training sets? I used the loss_curve_ attribute, but it only shows the loss plot for the training set.
P.S. I'm dealing with a multi-class classification problem.
This is a very open question with no code, so I will answer with what I think is best. For a multi-class classification problem it is standard to track accuracy during training. Another good measure is the F1 score. Sklearn's classification_report is a very good way to track training.
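For instance, a minimal sketch (model, X_val and y_val are placeholder names, since no code was posted):

from sklearn.metrics import classification_report, f1_score

y_pred = model.predict(X_val)
print(classification_report(y_val, y_pred))       # per-class precision, recall, F1 and support
print(f1_score(y_val, y_pred, average="macro"))   # a single F1 number averaged over classes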
Confusion matrices come after you train the model. They are used to check where the model is failing by evaluating which classes are harder to predict.
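For example, continuing with the placeholder names above:

from sklearn.metrics import confusion_matrix

print(confusion_matrix(y_val, y_pred))   # rows are true classes, columns are predicted classes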
ROC curves are, usually, for binary classification problems. They can be adapted to multi-class by doing a one class vs the rest approach.
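In sklearn the one-vs-rest adaptation is available directly, assuming your classifier exposes predict_proba:

from sklearn.metrics import roc_auc_score

probas = model.predict_proba(X_val)                      # shape (n_samples, n_classes)
print(roc_auc_score(y_val, probas, multi_class="ovr"))   # one-vs-rest ROC AUC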
For the losses, it seems to me you might be confusing things. Training takes place over epochs; testing does not. If you train over 100 epochs, then you have 100 loss values to plot. Testing does not use epochs (at most it uses batches), so plotting a test loss curve does not make sense. If instead you are talking about validation data, then yes, you can plot that loss just like the training loss.
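For MLPClassifier in particular, the per-epoch training loss is stored in the fitted loss_curve_ attribute, and with early_stopping=True a per-epoch validation score (accuracy, not loss) is stored in validation_scores_. A rough sketch, with X_train and y_train as placeholders:

import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(early_stopping=True, random_state=0)
clf.fit(X_train, y_train)
plt.plot(clf.loss_curve_, label="training loss")            # one value per epoch
plt.plot(clf.validation_scores_, label="validation score")  # accuracy on the held-out split
plt.xlabel("epoch")
plt.legend()
plt.show()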
I'm trying to find out how sklearn's gradient boosting classifier makes predictions from the different estimators.
I want to translate the sklearn model into base Python to perform predictions. I know how to get the individual estimators from the model, but I do not know how to get from those individual estimator scores to the final probability predictions made by the ensembled model. I believe there is a sigmoid function or something, but I can't work out what.
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

GBC = GradientBoostingClassifier(n_estimators=1)
GBC.fit(x_train, y_train, sample_weight=None)
GBC.predict_proba(np.array(x_test.iloc[0]).reshape(1, -1))
This returns the probabilities: array([[0.23084247, 0.76915753]])
But when I run:
Sole_estimator = GBC.estimators_[0][0]
Sole_estimator.predict(np.array(x_test.iloc[0]).reshape(1,-1))
which returns array([1.34327168])
Applying scipy's expit to the output:
from scipy.special import expit
expit(Sole_estimator.predict(np.array(x_test.iloc[0]).reshape(1, -1)))
I get:
array([0.79302745])
I believe the .init_ estimator contributes to the predictions, but I haven't found out how. I would also appreciate any indication of how the predictions are made with n_estimators > 1, if it varies.
Thanks :)
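For context, a rough sketch of how these pieces combine under the default settings (binary classification with the log-loss/deviance objective, default init_, learning_rate=0.1, and 0/1 labels); this is an illustration rather than something stated in the thread:

import numpy as np
from scipy.special import expit

x0 = np.array(x_test.iloc[0]).reshape(1, -1)

# 1) the default init_ predicts the class prior; its raw score is the prior log-odds
p = y_train.mean()
raw = np.log(p / (1 - p))

# 2) each boosting stage adds learning_rate times its tree's raw prediction
for stage in GBC.estimators_:            # one row per stage (one tree for binary problems)
    raw = raw + GBC.learning_rate * stage[0].predict(x0)

# 3) the raw score is squashed through the sigmoid to give the positive-class probability
print(expit(raw))                        # should match GBC.predict_proba(x0)[:, 1]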
I'm using SGDClassifier with loss="hinge", but hinge loss does not support probability estimates for class labels.
I need probabilities for calculating roc_curve. How can I get probabilities for hinge loss in SGDClassifier without using SVC from svm?
I've seen people mention using CalibratedClassifierCV to get the probabilities, but I've never used it and I don't know how it works.
I really appreciate the help. Thanks
In the strict sense, that's not possible.
Support vector machine classifiers are non-probabilistic: they use a hyperplane (a line in 2D, a plane in 3D, and so on) to separate points into one of two classes. A point is characterized only by which side of the hyperplane it falls on, and that forms the prediction directly.
This is in contrast with probabilistic classifiers like logistic regression and decision trees, which generate a probability for every point that is then converted to a prediction.
CalibratedClassifierCV is a sort of meta-estimator; to use it, you simply pass your instance of a base estimator to its constructor, so this will work:
from sklearn.linear_model import SGDClassifier
from sklearn.calibration import CalibratedClassifierCV

base_model = SGDClassifier()
model = CalibratedClassifierCV(base_model)
model.fit(X, y)
model.predict_proba(X)
What it does is run internal cross-validation to fit a probability calibrator on top of the base estimator. Note that this is essentially what sklearn.svm.SVC does internally when you set probability=True.
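Since the goal here is roc_curve, the calibrated probabilities can then be used like this (continuing the snippet above and assuming a binary y):

from sklearn.metrics import roc_curve

proba = model.predict_proba(X)[:, 1]          # probability of the positive class
fpr, tpr, thresholds = roc_curve(y, proba)

Strictly speaking, roc_curve only needs a ranking score, so the output of SGDClassifier.decision_function would also work; calibration is only needed if you want actual probabilities.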
Can someone explain how to use the oob_decision_function_ attribute of scikit-learn's RandomForestClassifier? I want to use it to plot learning curves comparing training and validation error against different training set sizes, in order to identify overfitting and other problems. I can't seem to find any information about how to do this.
You can pass a custom scoring function to any of the scoring parameters of the model-evaluation tools; it needs to have the signature (estimator, X, y_true) -> score.
For your case you could use something like:
from sklearn.model_selection import learning_curve

# r is your RandomForestClassifier, constructed with oob_score=True
learning_curve(r, X, y, cv=3, scoring=lambda c, x, y: c.oob_score_)
This will compute 3-fold cross-validated OOB scores against different training set sizes. By the way, random forests are quite resistant to overfitting; that's one of their benefits.
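If you just want a standard train-versus-validation learning curve (using ordinary cross-validated accuracy rather than OOB scores), a sketch could look like this, with X and y as placeholders:

import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

rf = RandomForestClassifier(oob_score=True, random_state=0)
train_sizes, train_scores, val_scores = learning_curve(rf, X, y, cv=3)
plt.plot(train_sizes, train_scores.mean(axis=1), label="training score")
plt.plot(train_sizes, val_scores.mean(axis=1), label="validation score")
plt.xlabel("training set size")
plt.ylabel("accuracy")
plt.legend()
plt.show()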
Are there any evaluation metrics available for multiclass-multilabel classification?
For example, I'm taking part in the following competition at Kaggle, and it requires ROC AUC as the evaluation metric: http://www.kaggle.com/c/mlsp-2013-birds
Is it possible to do this using sklearn?
There's this library from Kaggle's Director of Engineering:
https://github.com/benhamner/Metrics/tree/master/Python
As of 2021, sklearn.metrics includes several functions you can use for evaluating multiclass-multilabel classification models. For example, accuracy_score calculates the fraction of exactly correct predictions (i.e. samples for which every predicted label is correct), and hamming_loss calculates the fraction of individual labels that are incorrectly predicted on a given test set. You can find an in-depth discussion of the available metrics in the scikit-learn user guide.
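A small illustration with made-up multilabel indicator arrays:

import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss, roc_auc_score

y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0]])
y_score = np.array([[0.9, 0.2, 0.4],
                    [0.1, 0.8, 0.3]])

print(accuracy_score(y_true, y_pred))                   # subset accuracy: 0.5 (only the second sample matches exactly)
print(hamming_loss(y_true, y_pred))                     # 1 wrong label out of 6
print(roc_auc_score(y_true, y_score, average="macro"))  # macro-averaged ROC AUC over the labels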