Specifying tree_method param for XGBoost in Python

I'm working on a predictive model using XGBoost (latest version on PyPI: 0.6) in Python, and have been developing it by training on about half of my data. Now that I have my final model, I trained it on all of my data, but got this message, which I've never seen before:
Tree method is automatically selected to be 'approx' for faster speed.
to use old behavior(exact greedy algorithm on single machine), set
tree_method to 'exact'
As a reproducible example, the following code also produces that message on my machine:
import numpy as np
import xgboost as xgb
rows = 10**7
cols = 20
X = np.random.randint(0, 100, (rows, cols))
y = np.random.randint(0, 2, size=rows)
clf = xgb.XGBClassifier(max_depth=5)
clf.fit(X, y)
I've tried setting tree_method to 'exact' in both the initialization and fit() steps of my model, but each throws errors:
import xgboost as xgb
clf = xgb.XGBClassifier(tree_method = 'exact')
clf
> __init__() got an unexpected keyword argument 'tree_method'
my_pipeline.fit(X_train, Y_train, clf__tree_method='exact')
> self._final_estimator.fit(Xt, y, **fit_params) TypeError: fit() got an
> unexpected keyword argument 'tree_method'
How can I specify tree_method='exact' with XGBoost in Python?

According to the XGBoost parameter documentation, this is because the default for tree_method is "auto". The "auto" setting is data-dependent: for "small-to-medium" data, it will use the "exact" approach and for "very-large" datasets, it will use "approximate". When you started to use your whole training set (instead of 50%), you must have crossed the training-size threshold that changes the auto-value for tree_method. It's unclear from the docs how many observations are required to reach that threshold, but it seems that it's somewhere between 5 and 10 million rows (since you have rows = 10**7).
I don't know if the tree_method argument is exposed in the XGBoost Python module (it sounds like it's not, so maybe file a bug report?), but tree_method is exposed in the R API.
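(For what it's worth, in more recent xgboost releases the scikit-learn wrapper does forward tree_method to the underlying booster, so a sketch like the one below may work there; on the 0.6 PyPI release from the question it raises the error shown above.)
import xgboost as xgb
# Only on newer xgboost versions; on 0.6 this raises
# "__init__() got an unexpected keyword argument 'tree_method'".
clf = xgb.XGBClassifier(max_depth=5, tree_method='exact')
clf.fit(X, y)  # X, y as in the question's example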
The docs describe why you see that warning message:

It is still not implemented in the scikit-learn API for xgboost.
Hence I'm referencing the code example below, from here.
import xgboost as xgb
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
# Small two-class dataset, just to demonstrate passing tree_method to xgb.train
digits = load_digits(n_class=2)
X = digits['data']
y = digits['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
dtrain = xgb.DMatrix(X_train, y_train)
dtest = xgb.DMatrix(X_test, y_test)
param = {'objective': 'binary:logistic',
         'tree_method': 'hist',
         'grow_policy': 'lossguide',
         'eval_metric': 'auc'}
res = {}
bst = xgb.train(param, dtrain, 10, [(dtrain, 'train'), (dtest, 'test')], evals_result=res)
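The same native-API route can force the exact greedy algorithm from the original question. A minimal sketch, using a smaller version of the question's synthetic data so it finishes quickly (the parameter values are only illustrative):
import numpy as np
import xgboost as xgb
rows, cols = 10**5, 20  # smaller than the question's 10**7 rows, purely for speed
X = np.random.randint(0, 100, (rows, cols))
y = np.random.randint(0, 2, size=rows)
dtrain = xgb.DMatrix(X, label=y)
params = {'objective': 'binary:logistic',
          'max_depth': 5,
          'tree_method': 'exact'}  # force the exact greedy algorithm
bst = xgb.train(params, dtrain, num_boost_round=10)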

You can also use the GPU from the sklearn API in XGBoost, like this:
import xgboost
xgb = xgboost.XGBClassifier(n_estimators=200, tree_method='gpu_hist', predictor='gpu_predictor')
xgb.fit(X_train, y_train)
You can use different tree methods; refer to the documentation to choose the one most appropriate for your needs.

Related

Getting Feature Importances for XGBoost multioutput

Beginning in xgboost version 1.6, you can now run multioutput models directly. In the past, I had been using the scikit-learn wrapper MultiOutputRegressor around an XGBRegressor estimator. I could then access the individual models' feature importances with something like wrapper.estimators_[i].feature_importances_.
Now, however, when I run feature_importances_ on a multioutput XGBRegressor model, I only get one set of feature importances even though I have more than one target. Any idea what this array of feature importances actually represents? Is it the first, the last, or some sort of average across all the targets? Is this a feature that is perhaps just not ready to handle multioutput?
I realize questions are always easier to answer when you have some code to test:
import numpy as np
from sklearn import datasets
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor
from numpy.testing import assert_almost_equal
n_targets = 3
X, y = datasets.make_regression(n_targets=n_targets)
X_train, y_train = X[:50], y[:50]
X_test, y_test = X[50:], y[50:]
single_run_features = {}
references = np.zeros_like(y_test)
for n in range(0, n_targets):
    xgb_indi_model = xgb.XGBRegressor(random_state=0)
    xgb_indi_model.fit(X_train, y_train[:, n])
    references[:, n] = xgb_indi_model.predict(X_test)
    single_run_features[n] = xgb_indi_model.feature_importances_
xgb_multi_model = xgb.XGBRegressor(random_state=0)
xgb_multi_model.fit(X_train, y_train)
y__multi_pred = xgb_multi_model.predict(X_test)
xgb_scikit_model = MultiOutputRegressor(xgb.XGBRegressor(random_state=0))
xgb_scikit_model.fit(X_train, y_train)
y_pred = xgb_scikit_model.predict(X_test)
print(assert_almost_equal(references, y_pred))
print(assert_almost_equal(y__multi_pred, y_pred))
scikit_features = {}
for i in range(0, n_targets):
    scikit_features[i] = xgb_scikit_model.estimators_[i].feature_importances_
xgb_multi_model_features = xgb_multi_model.feature_importances_
single_run_features
scikit_features
The feature importances match between the loop of single-target models (single_run_features) and the MultiOutputRegressor version (scikit_features). The issue is the result in xgb_multi_model_features. Any suggestions?
Is it the first, last, or some sort of average across all the targets?
It's the average of all targets.
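One way to sanity-check this, reusing single_run_features, n_targets and xgb_multi_model_features from the snippet above, is to average the per-target importances and compare them to the multi-output model's importances; if the claim holds, the two arrays should be close (small differences are possible because the trees are not grown identically):
import numpy as np
# Stack the per-target importance vectors and average across targets
per_target = np.vstack([single_run_features[n] for n in range(n_targets)])
print(per_target.mean(axis=0))
print(xgb_multi_model_features)
print(np.allclose(per_target.mean(axis=0), xgb_multi_model_features, atol=1e-2))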

Right way to use RFECV and Permutation Importance - Sklearn

There is a proposal to implement this in Sklearn #15075, but in the meantime, eli5 is suggested as a solution. However, I'm not sure if I'm using it the right way. This is my code:
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFECV
from sklearn.svm import SVR
import eli5
X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
estimator = SVR(kernel="linear")
perm = eli5.sklearn.PermutationImportance(estimator, scoring='r2', n_iter=10, random_state=42, cv=3)
selector = RFECV(perm, step=1, min_features_to_select=1, scoring='r2', cv=3)
selector = selector.fit(X, y)
selector.ranking_
#eli5.show_weights(perm) # fails: AttributeError: 'PermutationImportance' object has no attribute 'feature_importances_'
There are a few issues:
I am not sure if I am using cross-validation the right way. Is PermutationImportance using cv to validate the importances on the validation set, or should cross-validation be done only with RFECV? (In the example I used cv=3 in both cases, but I'm not sure if that's the right thing to do.)
If I uncomment the last line, I get an AttributeError: 'PermutationImportance' ... Is this because I fit using RFECV? What I'm doing is similar to the last snippet here: https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html
As a less important issue, this gives me a warning when I set cv in eli5.sklearn.PermutationImportance:
.../lib/python3.8/site-packages/sklearn/utils/validation.py:68: FutureWarning: Pass classifier=False as keyword args. From version 0.25 passing these as positional arguments will result in an error warnings.warn("Pass {} as keyword args. From version 0.25 "
The whole process is a bit vague. Is there a way to do it directly in Sklearn? e.g. by adding a feature_importances attribute?
Since the objective is to select the optimal number of features with permutation importance and recursive feature elimination, I suggest using RFECV and PermutationImportance in conjunction with a CV splitter like KFold. The code could then look like this:
import warnings
from eli5 import show_weights
from eli5.sklearn import PermutationImportance
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFECV
from sklearn.model_selection import KFold
from sklearn.svm import SVR
warnings.filterwarnings("ignore", category=FutureWarning)
X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
splitter = KFold(n_splits=3) # 3 folds as in the example
estimator = SVR(kernel="linear")
selector = RFECV(
    PermutationImportance(estimator, scoring='r2', n_iter=10, random_state=42, cv=splitter),
    cv=splitter,
    scoring='r2',
    step=1
)
selector = selector.fit(X, y)
selector.ranking_
show_weights(selector.estimator_)
Regarding your issues:
PermutationImportance will calculate the feature importances, and RFECV the r2 scoring, using the same strategy according to the splits provided by KFold.
You called show_weights on the unfitted PermutationImportance object. That is why you got an error. You should access the fitted object with the estimator_ attribute instead.
The warning can be ignored (it is silenced in the snippet above with warnings.filterwarnings).
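Once the selector is fitted, the standard RFECV attributes also tell you what was kept, for example:
print(selector.n_features_)  # number of features RFECV kept
print(selector.support_)     # boolean mask of the selected features
print(selector.ranking_)     # rank 1 = selected; higher ranks were eliminated earlier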

What is the expected_value field of TreeExplainer for a Random Forest?

I used SHAP to explain my RF:
RF_best_parameters = RandomForestRegressor(random_state=24, n_estimators=100)
RF_best_parameters.fit(X_train, y_train.values.ravel())
shap_explainer_model = shap.TreeExplainer(RF_best_parameters)
The TreeExplainer class has an attribute expected_value.
My first guess was that this field is the mean of the predicted y over X_train (I also read this here).
But it is not.
The output of the command:
shap_explainer_model.expected_value
is 0.2381.
The output of the command:
RF_best_parameters.predict(X_train).mean()
is 0.2389.
As we can see, the values are not the same.
So what is the meaning of the expected_value here?
This is due to a peculiarity of the method when used with the Random Forest algorithm; quoting from the response in the relevant GitHub thread, "shap explainer expected_value is different from model expected value":
It is because of how sklearn records the training samples in the tree models it builds. Random forests use a random subsample of the data to train each tree, and it is that random subsample that is used in sklearn to record the leaf sample weights in the model. Since TreeExplainer uses the recorded leaf sample weights to represent the training dataset, it will depend on the random sampling used during training. This will cause small variations like the ones you are seeing.
We can actually verify that this behavior is not present with other algorithms, say Gradient Boosting Trees:
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
import numpy as np
import shap
shap.__version__
# 0.37.0
X, y = make_regression(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
gbt = GradientBoostingRegressor(random_state=0)
gbt.fit(X_train, y_train)
mean_pred_gbt = np.mean(gbt.predict(X_train))
mean_pred_gbt
# -11.534353657511172
gbt_explainer = shap.TreeExplainer(gbt)
gbt_explainer.expected_value
# array([-11.53435366])
np.isclose(mean_pred_gbt, gbt_explainer.expected_value)
# array([ True])
But for RF, we get indeed a "small variation" as mentioned by the main SHAP developer in the thread above:
rf = RandomForestRegressor(random_state=0)
rf.fit(X_train, y_train)
rf_explainer = shap.TreeExplainer(rf)
rf_explainer.expected_value
# array([-11.59166808])
mean_pred_rf = np.mean(rf.predict(X_train))
mean_pred_rf
# -11.280125877556388
np.isclose(mean_pred_rf, rf_explainer.expected_value)
# array([False])
Just try:
shap_explainer_model = shap.TreeExplainer(RF_best_parameters, data=X_train, feature_perturbation="interventional", model_output="raw")
Then the shap_explainer_model.expected_value should give you the mean prediction of your model on train data.
Otherwise, TreeExplainer uses feature_perturbation="tree_path_dependent"; according to the documentation:
The “tree_path_dependent” approach is to just follow the trees and use the number of training examples that went down each leaf to represent the background distribution. This approach does not require a background dataset and so is used by default when no background dataset is provided.
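As a quick check of the difference between the two modes (a sketch reusing rf and X_train from the snippet in the previous answer): with an explicit background dataset, expected_value lines up with the mean prediction over that data:
rf_explainer_int = shap.TreeExplainer(rf, data=X_train, feature_perturbation="interventional")
np.isclose(np.mean(rf.predict(X_train)), rf_explainer_int.expected_value)
# should be True (SHAP may warn about using more than 100 background samples)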

python imblearn make_pipeline TypeError: Last step of Pipeline should implement fit

I am trying to use imblearn's SMOTE inside a Pipeline. My data sets are text data stored in a pandas DataFrame. Please see the code snippet below:
text_clf = Pipeline([('vect', TfidfVectorizer()), ('scale', StandardScaler(with_mean=False)), ('smt', SMOTE(random_state=5)), ('clf', LinearSVC(class_weight='balanced'))])
After this I am using GridsearchCV.
grid = GridSearchCV(text_clf, parameters, cv=4, n_jobs=-1, scoring = 'accuracy')
where parameters is just a grid of tuning parameters, mostly for TfidfVectorizer().
I am getting the following error.
All intermediate steps should be transformers and implement fit and transform. 'SMOTE
After this error, I changed the code as follows.
vect = TfidfVectorizer(use_idf=True,smooth_idf = True, max_df = 0.25, sublinear_tf = True, ngram_range=(1,2))
X = vect.fit_transform(X).todense()
Y = vect.fit_transform(Y).todense()
X_Train,X_Test,Y_Train,y_test = train_test_split(X,Y, random_state=0, test_size=0.33, shuffle=True)
text_clf =make_pipeline([('smt', SMOTE(random_state=5)),('scale', StandardScaler(with_mean=False)),('clf', LinearSVC(class_weight='balanced'))])
grid = GridSearchCV(text_clf, parameters, cv=4, n_jobs=-1, scoring = 'accuracy')
where parameters is just a grid tuning C in the SVC classifier.
This time I am getting the following error:
Last step of Pipeline should implement fit. SMOTE(....) doesn't
What is going on here? Can anyone please help?
imblearn's SMOTE has no transform method; the docs are here.
But all steps except the last in a pipeline should have it, along with fit.
To use SMOTE with an sklearn pipeline, you would have to implement a custom transformer that calls SMOTE.fit_sample() in its transform method.
An easier option is just to use the imblearn pipeline:
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as imbPipeline
# This doesn't work with sklearn.pipeline.Pipeline because
# SMOTE doesn't have a .transform() method.
# (It has .fit_sample() or .sample().)
pipe = imbPipeline([
    ...
    ('oversample', SMOTE(random_state=5)),
    ('clf', LinearSVC(class_weight='balanced'))
])
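For the original text pipeline, the same idea could look like the sketch below; the steps and the parameters grid are taken from the question, and imblearn's Pipeline accepts samplers such as SMOTE as intermediate steps, so GridSearchCV works on it unchanged:
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as imbPipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
text_clf = imbPipeline([
    ('vect', TfidfVectorizer()),
    ('scale', StandardScaler(with_mean=False)),
    ('smt', SMOTE(random_state=5)),  # resampling happens only during fit, not predict
    ('clf', LinearSVC(class_weight='balanced'))
])
grid = GridSearchCV(text_clf, parameters, cv=4, n_jobs=-1, scoring='accuracy')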

StratifiedShuffleSplit reporting multiple args for n_iter

I'm trying to use scikit-learn's StratifiedShuffleSplit to make a single split of my dataset that preserves class sample ratios.
from sklearn.datasets import load_files
from sklearn.model_selection import StratifiedShuffleSplit
dataset = load_files('reviews/aggregated/')
split = StratifiedShuffleSplit(dataset.target, n_iter=1, test_size=0.2)
train_idx, test_idx = next(iter(split))
train_X, train_y = dataset.data[train_idx], dataset.target[train_idx]
test_X, test_y = dataset.data[test_idx], dataset.target[test_idx]
This gives me the error below:
TypeError: __init__() got multiple values for keyword argument 'n_iter'
But I'm clearly only passing a single value for it. Is StratifiedShuffleSplit somehow incompatible with datasets? The docs don't seem to have an answer.
It turns out the documentation was outdated. Looking at the docstrings, I found that the correct way to do this is:
sss = StratifiedShuffleSplit(n_iter=1, test_size=0.2)
train_idx, test_idx = next(sss.split(dataset.data, dataset.target))
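A note for newer versions: in current scikit-learn, sklearn.model_selection.StratifiedShuffleSplit takes n_splits rather than n_iter, and for a single stratified split train_test_split with stratify= is the simplest route. A sketch of both, using dataset from the question:
from sklearn.model_selection import StratifiedShuffleSplit, train_test_split
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, test_idx = next(sss.split(dataset.data, dataset.target))
# or, equivalently, a single stratified split in one call:
X_train, X_test, y_train, y_test = train_test_split(
    dataset.data, dataset.target, test_size=0.2, stratify=dataset.target)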
