I am using RandomForestClassifier as follows, with cross-validation, for a binary classification problem (class labels are 0 and 1).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

clf = RandomForestClassifier(random_state=42, class_weight="balanced")
k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

accuracy = cross_val_score(clf, X, y, cv=k_fold, scoring='accuracy')
print("Accuracy: " + str(round(100*accuracy.mean(), 2)) + "%")

f1 = cross_val_score(clf, X, y, cv=k_fold, scoring='f1_weighted')
print("F Measure: " + str(round(100*f1.mean(), 2)) + "%")
Now I want to order my data by the predicted probability of class 1, using the cross-validation results. For that I tried the following two approaches.
import numpy as np

pred = clf.predict_proba(X)[:,1]
print(pred)

probs = clf.predict_proba(X)
best_n = np.argsort(probs, axis=1)[:,-6:]
I get the following error for both approaches:
NotFittedError: This RandomForestClassifier instance is not fitted
yet. Call 'fit' with appropriate arguments before using this method.
I am just wondering where I am going wrong.
I am happy to provide more details if needed.
In case you want to use the CV model on unseen data points, use the following approach.
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

iris = datasets.load_iris()
X = iris.data
y = iris.target

clf = RandomForestClassifier(n_estimators=10, random_state=42, class_weight="balanced")
cv_results = cross_validate(clf, X, y, cv=3, return_estimator=True)

clf_fold_0 = cv_results['estimator'][0]
clf_fold_0.predict_proba([iris.data[133]])
# array([[0. , 0.5, 0.5]])
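If you would rather not rely on a single fold's model, one option (a sketch, not part of the original answer) is to average predict_proba over all the fitted fold estimators:

import numpy as np

# Average the class probabilities predicted by every fold's estimator
probas = [est.predict_proba([iris.data[133]]) for est in cv_results['estimator']]
print(np.mean(probas, axis=0))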
Have a look at the documentation; it specifies that the predicted class probability is computed as the mean predicted probability of the trees in the forest.
In your case, you first need to call the fit() method to build the trees in the model. Once you have fitted the model on the training data, you can call the predict_proba() method.
This is also what the error message says.
# Fit model
model = RandomForestClassifier(...)
model.fit(X_train, Y_train)

# Probability
model.predict_proba(X)[:,1]
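As a minimal runnable version of the above, assuming the iris data from the earlier answer as stand-in data:

from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit first; calling predict_proba on an unfitted estimator raises NotFittedError
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)
print(model.predict_proba(X_test)[:, 1])  # probability of class 1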
I solved my problem using the following code:
import numpy as np
from sklearn.model_selection import cross_val_predict

proba = cross_val_predict(clf, X, y, cv=k_fold, method='predict_proba')
print(proba[:,1])
print(np.argsort(proba[:,1]))
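For reference, cross_val_predict returns out-of-fold predictions: each sample is predicted by the fold model that did not see it during training. A small sketch of how the ordering might then be used (the variable names are illustrative, and X is assumed to be a NumPy array):

order = np.argsort(proba[:, 1])[::-1]  # indices from highest to lowest class-1 probability
X_sorted = X[order]                    # data ordered by predicted probability of class 1
top_10 = order[:10]                    # e.g. the ten most confident class-1 samples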
After writing a decision tree function, I decided to check how accurate the tree is, and to confirm that at least the first split is the same if I build other trees with the same data.
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import os
import sys
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn import preprocessing
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
.....
def decision_tree(data_set: pd.DataFrame, val_1: str, val_2: str):
    # Encoder --> fit doesn't accept strings
    feature_cols = data_set.columns[0:-1]
    X = data_set[feature_cols]  # independent variables
    y = data_set.Mut  # class
    y = y.to_list()
    le = preprocessing.LabelBinarizer()
    y = le.fit_transform(y)

    # Split data set into training set and test set
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)  # 75%

    # Create Decision Tree classifier object
    clf = DecisionTreeClassifier(max_depth=4, criterion='entropy')

    # Train Decision Tree classifier
    clf.fit(X_train, y_train)

    # Predict the response for the test dataset
    y_pred = clf.predict(X_test)

    # Perform cross validation
    for i in range(2, 8):
        plt.figure(figsize=(14, 7))
        # Perform KFold cross validation
        # cv = ShuffleSplit(test_size=0.25, random_state=0)
        kf = KFold(n_splits=5, shuffle=True)
        scores = cross_val_score(estimator=clf, X=X, y=y, n_jobs=4, cv=kf)
        print("%0.2f accuracy with a standard deviation of %0.2f" % (scores.mean(), scores.std()))
        tree.plot_tree(clf, filled=True, feature_names=feature_cols, class_names=[val_1, val_2])
        plt.show()
decision_tree(car_rep_sep_20, 'Categorial', 'Non categorial')
At the bottom, I wrote a loop in order to recreate the tree with the split values using KFold. The accuracy changes (around 90%) but the tree stays the same. Where did I go wrong?
cross_val_score clones the estimator in order to fit-and-score on the various folds, so the clf object remains the same as when you fit it to the entire dataset before the loop, and so the plotted tree is that one rather than any of the cross-validated ones.
To get what you're after, I think you can use cross_validate with the option return_estimator=True. You also shouldn't need the loop if your cv object has the desired number of splits:
from sklearn.model_selection import cross_validate

kf = KFold(n_splits=5, shuffle=True)
cv_results = cross_validate(
    estimator=clf,
    X=X,
    y=y,
    n_jobs=4,
    cv=kf,
    return_estimator=True,
)
print("%0.2f accuracy with a standard deviation of %0.2f" % (
    cv_results['test_score'].mean(),
    cv_results['test_score'].std(),
))

for est in cv_results['estimator']:
    tree.plot_tree(est, filled=True, feature_names=feature_cols, class_names=[val_1, val_2])
    plt.show()
Alternatively, loop manually over the folds (or other cv iteration), fitting the model and plotting its tree in the loop.
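A minimal sketch of that manual loop, reusing the names from the snippets above (clf, X, y, kf, feature_cols, val_1, val_2; X is assumed to be a DataFrame here, hence .iloc):

from sklearn.base import clone

for train_idx, test_idx in kf.split(X):
    fold_clf = clone(clf)  # clone so each fold gets a fresh, independently fitted tree
    fold_clf.fit(X.iloc[train_idx], y[train_idx])
    print("fold accuracy:", fold_clf.score(X.iloc[test_idx], y[test_idx]))
    tree.plot_tree(fold_clf, filled=True, feature_names=feature_cols, class_names=[val_1, val_2])
    plt.show()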
I built a linear model with sklearn based on the Cement and Concrete Composites dataset.
Initially, I used train_test_split(X, Y, test_size=0.3, shuffle=False) and found the train and test error.
Now I want to run the same model 10 times with shuffle=True and compute the mean and standard deviation of the errors. The new results should be compared to the first ones.
How could I loop the same model n times and save the errors in a list?
Try something like this:
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

errors = []
for i in range(10):
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, shuffle=True)
    model = LinearRegression()  # the model you want to use here
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    error = mean_squared_error(y_test, y_pred)  # the error metric you want to use here
    errors.append(error)

print(np.mean(errors), np.std(errors))
What you need is cross-validation: repeated evaluation of the model on different splits of the same data. train_test_split in this case is a wrapper around ShuffleSplit cross-validation.
In your case it might look like this:
from sklearn.model_selection import ShuffleSplit, cross_val_score
import numpy as np
from sklearn.linear_model import LinearRegression
X, y = ... # read dataset
model = LinearRegression()
# n_splits=10 is for 10 random shuffled train-test splits
cv = ShuffleSplit(n_splits=10, test_size=.3, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring='neg_mean_squared_error')
np.mean(scores), np.std(scores)
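One caveat: 'neg_mean_squared_error' scores come back negated (sklearn scorers follow a larger-is-better convention), so to report RMSE you can flip the sign, for example:

rmse_scores = np.sqrt(-scores)
print(rmse_scores.mean(), rmse_scores.std())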
If you want to compute the error on your own or do anything else with models/results, you could do it like this:
for train_ids, test_ids in cv.split(X):
    model.fit(X[train_ids], y[train_ids])
    model.score(X[test_ids], y[test_ids])
    ...
More about this:
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html
I am using cross-validation to evaluate my ML models but now I want to look into the distribution of the errors, i.e. I want to get the average error of specific data points whenever they are in the test set.
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import KFold, cross_val_score

X = # data points
y = # output

lm = linear_model.LinearRegression()
kfold = KFold(n_splits=10)
scores = cross_val_score(lm, X, y, scoring='neg_mean_squared_error', cv=kfold)
rmse_scores = [np.sqrt(abs(s)) for s in scores]
print('Testing RMSE (lin reg): {:.3f}'.format(np.mean(rmse_scores)))
Is there an easy way to get the individual errors of each of the data points whenever they are in the test set (not training error) using cross-validation with scikit-learn?
Thank you!
If I understood your question correctly, this should be what you are looking for.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

kf = KFold(n_splits=3)
error = []
for train_index, val_index in kf.split(X, y):
    X_train, X_val = X[train_index], X[val_index]
    y_train, y_val = y[train_index], y[val_index]
    model.fit(X_train, y_train)
    pred = model.predict(X_val)
    current_error = mean_squared_error(y_val, pred)  # error per fold
    error.append(current_error)

print(np.mean(error))  # mean error after CV
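If you want the error of each individual data point whenever it is in the test set, as the question asks, one possibility (a sketch, not taken from the answer above) is cross_val_predict, which returns one out-of-fold prediction per sample:

import numpy as np
from sklearn.model_selection import cross_val_predict

# With plain KFold each sample appears in the test set exactly once,
# so this gives its test-time prediction
preds = cross_val_predict(model, X, y, cv=kf)
per_point_error = (y - preds) ** 2  # squared error of each data point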
I have 4 features and one target variable. I am using RandomForestRegressor instead of RandomForestClassifier as my target variable is a float. When I try to fit my model and then output the features in sorted order of importance, I get a NotFittedError. How do I fix it?
Code:
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn import datasets
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score
# Split the data into 30% test and 70% training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
feat_labels = data.columns[:4]
regr = RandomForestRegressor(max_depth=2, random_state=0)
#clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Train the classifier
#clf.fit(X_train, y_train)
regr.fit(X, y)
importances = clf.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(X_train.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))
You are fitting to regr but calling the feature importances on clf. Try calling this instead:
importances = regr.feature_importances_
I noticed that previously your classifier was being fit with the training data you setup, but the regressor is now being fit with X and y.
However, I don't see here where you're setting X and y in the first place or even more where you actually load in a dataset. Could it be you forgot this step as well as what Harpal mentioned in another answer?
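Putting both fixes together, a runnable sketch that uses make_regression (already imported in the question) as stand-in data, since the real dataset isn't shown, with hypothetical feature names:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Stand-in data: 4 features, as in the question
X, y = make_regression(n_samples=200, n_features=4, random_state=0)
feat_labels = ['f0', 'f1', 'f2', 'f3']  # hypothetical feature names

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

regr = RandomForestRegressor(max_depth=2, random_state=0)
regr.fit(X_train, y_train)

# Read the importances from the estimator that was actually fitted
importances = regr.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(X_train.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))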
I'm trying to use GridSearchCV from SKlearn to tune hyperparameters for my estimator.
In the first step, the estimator is used for SequentialFeatureSelection, which comes from a custom library that performs wrapper-based feature selection. This means iteratively adding new features and identifying the ones with which the estimator performs best. Hence, the SequentialFeatureSelection method requires my estimator. This library is programmed so that it works perfectly fine with SKlearn, so I integrate it as the first step of the GridSearchCV pipeline to transform the features down to the selected ones.
In the second step, I would like to use exactly the same classifier, with exactly the same parameters, to be fitted and to predict the outcome. However, with the parameter grid I can only set the parameters either on the classifier that I pass to SequentialFeatureSelector OR on the one in 'clf', and I cannot ensure that they are always the same.
Finally, with the selected features and selected parameters I want to predict on a previously held out test-set.
At the bottom of the SFS library's documentation page, they show how to use SFS with GridSearchCV, but there the KNN algorithm used to select features and the one used to predict also use different parameters. And when I check for myself after training SFS and GridSearchCV, the parameters are never the same, even when I clone the classifier as proposed. Here is my code:
import sklearn.pipeline
import sklearn.tree
import sklearn.model_selection
import mlxtend.feature_selection
def sfs(x, y):
    x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(
        x, y, test_size=0.2, random_state=0)

    clf = sklearn.tree.DecisionTreeClassifier()

    param_grid = {
        "sfs__estimator__max_depth": [5]
    }

    sfs = mlxtend.feature_selection.SequentialFeatureSelector(clone_estimator=True,  # clone like in the tutorial
                                                              estimator=clf,
                                                              k_features=10,
                                                              forward=True,
                                                              floating=False,
                                                              scoring='accuracy',
                                                              cv=3,
                                                              n_jobs=1)

    pipe = sklearn.pipeline.Pipeline([('sfs', sfs), ("clf", clf)])

    gs = sklearn.model_selection.GridSearchCV(estimator=pipe,
                                              param_grid=param_grid,
                                              scoring='accuracy',
                                              n_jobs=1,
                                              cv=3,
                                              refit=True)

    gs = gs.fit(x_train, y_train)

    # Both estimators should have depth 5!
    print("SFS Final Estimator Depth: " + str(gs.best_estimator_.named_steps.sfs.estimator.max_depth))
    print("CLF Final Estimator Depth: " + str(gs.best_estimator_._final_estimator.max_depth))

    # Evaluate...
    y_test_pred = gs.predict(x_test)
    # Accuracy etc...
The question would be: how do I ensure that they always have the same parameters set within the same pipeline?
Thanks!
I found a solution where I overwrite some methods of the SequentialFeatureSelector (SFS) class so that its estimator is also used for predicting after the transformation. This is done by introducing a custom SFS class, 'CSequentialFeatureSelector', that overrides the following methods of SFS:
In the fit(self, X, y) method, not only is the normal fit performed, but self.estimator is also fitted on the transformed data, so that predict and predict_proba methods can be implemented for the SFS class.
I implemented predict and predict_proba methods for the SFS class, which call the predict and predict_proba methods of the fitted self.estimator.
Hence, I only have one estimator left, used both for SFS and for predicting.
Here is some of the code:
import sklearn.pipeline
import sklearn.tree
import sklearn.model_selection
import mlxtend.feature_selection
class CSequentialFeatureSelector(mlxtend.feature_selection.SequentialFeatureSelector):
    def predict(self, X):
        X = self.transform(X)
        return self.estimator.predict(X)

    def predict_proba(self, X):
        X = self.transform(X)
        return self.estimator.predict_proba(X)

    def fit(self, X, y):
        self.fit_helper(X, y)  # fit_helper is the 'old' fit method, which I copied and renamed to fit_helper
        self.estimator.fit(self.transform(X), y)
        return self
def sfs(x, y):
    x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(
        x, y, test_size=0.2, random_state=0)

    clf = sklearn.tree.DecisionTreeClassifier()

    param_grid = {
        "sfs__estimator__max_depth": [3, 4, 5]
    }

    sfs = CSequentialFeatureSelector(clone_estimator=True,
                                     estimator=clf,
                                     k_features=10,
                                     forward=True,
                                     floating=False,
                                     scoring='accuracy',
                                     cv=3,
                                     n_jobs=1)

    # Now only one object in the pipeline (in fact the pipeline is not even needed anymore)
    pipe = sklearn.pipeline.Pipeline([('sfs', sfs)])

    gs = sklearn.model_selection.GridSearchCV(estimator=pipe,
                                              param_grid=param_grid,
                                              scoring='accuracy',
                                              n_jobs=1,
                                              cv=3,
                                              refit=True)

    gs = gs.fit(x_train, y_train)

    print("SFS Final Estimator Depth: " + str(gs.best_estimator_.named_steps.sfs.estimator.max_depth))

    y_test_pred = gs.predict(x_test)
    # Evaluate performance of y_test_pred