How to let the regressor in the pipeline unpack the parameters? - python

So I am using a scikit-learn Pipeline to cut down on repetitive code. However, I can't seem to figure out how to write code that unpacks the parameters for each regressor.
Before using Pipeline, I wrote a class to unpack the parameters. That works just fine, though I do believe there is a better way to go about this.
I don't want to keep writing the parameters manually:
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBRegressor
pipe_et = make_pipeline(StandardScaler(), ExtraTreesRegressor(n_estimators=1000, random_state=Seed))
pipe_rf = make_pipeline(StandardScaler(), XGBRegressor())
This is an example of the parameters I want to be unpacked
rf_params = {'n_estimators': 1000, 'n_jobs': -1, 'warm_start': True, 'max_features':2}
There is no error; I just don't want to do the extra manual work. I expect **params to work, but I don't know how to proceed with that. Please help me improve my coding style.

You could just loop through your pipe_rf object and get the parameters like this:
for i in pipe_rf:
    print(i.get_params())
OUTPUT
{'copy': True, 'with_mean': True, 'with_std': True}
{'base_score': 0.5, 'booster': 'gbtree', 'colsample_bylevel': 1, 'colsample_bynode': 1, 'colsample_bytree': 1, 'gamma': 0, 'importance_type': 'gain', 'learning_rate': 0.1, 'max_delta_step': 0, 'max_depth': 3, 'min_child_weight': 1, 'missing': None, 'n_estimators': 100, 'n_jobs': 1, 'nthread': None, 'objective': 'reg:linear', 'random_state': 0, 'reg_alpha': 0, 'reg_lambda': 1, 'scale_pos_weight': 1, 'seed': None, 'silent': None, 'subsample': 1, 'verbosity': 1}
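The same information is also available without the loop: calling get_params() on the pipeline itself returns keys already prefixed with each step's name, which is exactly the format set_params expects. A small sketch, assuming the pipe_rf defined in the question:
for name in pipe_rf.get_params(deep=True):
    print(name)
# e.g. 'standardscaler__with_mean', 'xgbregressor__n_estimators', ...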
Hope this helps!

You need to format the estimator's parameter names with the __ (double underscore) prefix so they can be fed as params to the pipeline. I have written a small function that takes a pipeline and the estimator's parameters and returns the appropriately prefixed params for the pipeline.
Try this example:
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler
pipe_rf = make_pipeline(StandardScaler(), RandomForestRegressor())
rf_params = {'n_estimators': 10, 'n_jobs': -1, 'warm_start': True, 'max_features':2}
def params_formatter(pipeline, est_params):
    est_name = pipeline.steps[-1][0]
    return {'{}__{}'.format(est_name, k): v for k, v in est_params.items()}
params = params_formatter(pipe_rf, rf_params)
pipe_rf.set_params(**params)
# Pipeline(memory=None,
#          steps=[('standardscaler',
#                  StandardScaler(copy=True, with_mean=True, with_std=True)),
#                 ('randomforestregressor',
#                  RandomForestRegressor(bootstrap=True, criterion='mse',
#                                        max_depth=None, max_features=2,
#                                        max_leaf_nodes=None,
#                                        min_impurity_decrease=0.0,
#                                        min_impurity_split=None,
#                                        min_samples_leaf=1, min_samples_split=2,
#                                        min_weight_fraction_leaf=0.0,
#                                        n_estimators=10, n_jobs=-1,
#                                        oob_score=False, random_state=None,
#                                        verbose=0, warm_start=True))],
#          verbose=False)
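The helper works for any pipeline whose last step is the estimator. A short usage sketch, reusing the ExtraTrees pipeline from the question (with a placeholder value standing in for the question's Seed variable):
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

Seed = 42  # placeholder for the question's Seed
pipe_et = make_pipeline(StandardScaler(), ExtraTreesRegressor(random_state=Seed))
et_params = {'n_estimators': 1000, 'n_jobs': -1}

# params_formatter prefixes each key with the final step's name,
# e.g. 'extratreesregressor__n_estimators', which is what set_params expects.
pipe_et.set_params(**params_formatter(pipe_et, et_params))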

Related

GridSearchCV unexpected behaviour (always returns the first parameter as the best)

I've got a multiclass classification problem and I need to find the best parameters. I cannot change the max_iter, solver, and tol (they are given), but I'd like to check which penalty is better. However, GridSearchCV always returns the first given penalty as the best one.
Example:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, GridSearchCV, StratifiedKFold

cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
fixed_params = {
    'random_state': 42,
    'multi_class': 'multinomial',
    'solver': 'saga',
    'tol': 1e-3,
    'max_iter': 500
}
parameters = [
    {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['l1', 'l2', None]},
    {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['elasticnet'], 'l1_ratio': np.arange(0.0, 1.0, 0.1)}
]
model = GridSearchCV(LogisticRegression(**fixed_params), parameters, n_jobs=-1, verbose=10, scoring='f1_macro', cv=cv)
model.fit(X_train, y_train)
print(model.best_score_)
# 0.6836409100287101
print(model.best_params_)
# {'C': 0.1, 'penalty': 'l2'}
If I change the order of the parameter grids, the result is quite the opposite:
from sklearn.model_selection import cross_val_score, GridSearchCV, StratifiedKFold

cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
fixed_params = {
    'random_state': 42,
    'multi_class': 'multinomial',
    'solver': 'saga',
    'tol': 1e-3,
    'max_iter': 500
}
parameters = [
    {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['elasticnet'], 'l1_ratio': np.arange(0.0, 1.0, 0.1)},
    {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['l1', 'l2', None]}
]
model = GridSearchCV(LogisticRegression(**fixed_params), parameters, n_jobs=-1, verbose=10, scoring='f1_macro', cv=cv)
model.fit(X_train, y_train)
print(model.best_score_)
# 0.6836409100287101
print(model.best_params_)
# {'C': 0.1, 'l1_ratio': 0.0, 'penalty': 'elasticnet'}
So, the best_score_ is the same for both options, but the best_params_ is not.
Could you please tell me what is wrong?
Edited
GridSearchCV gives a worse result in comparison to baseline with default parameters.
Baseline:
baseline_model = LogisticRegression(multi_class='multinomial', solver='saga', tol=1e-3, max_iter=500)
baseline_model.fit(X_train, y_train)
train_pred_baseline = baseline_model.predict(X_train)
print(f1_score(y_train, train_pred_baseline, average='micro'))
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=500,
                   multi_class='multinomial', n_jobs=None, penalty='l2',
                   random_state=None, solver='saga', tol=0.001, verbose=0,
                   warm_start=False)
The baseline gives me a better f1_micro than GridSearchCV:
0.7522768670309654
Edited-2
So, according to the best f1_score performance, C = 1 is the best choice for my model, but GridSearchCV returns C = 0.1.
I think I'm missing something...
The baseline's f1_macro is better than GridSearchCV's too:
train_pred_baseline = baseline_model.predict(X_train)
print(f1_score(y_train, train_pred_baseline, average='macro'))
# 0.7441968750050458
Actually there's nothing wrong. Here's the thing: elasticnet uses both L1 and L2 penalty terms. However, if your l1_ratio is 0, then you're basically applying L2 regularization, so you're only using the L2 penalty term. As stated in the docs:
Setting l1_ratio=0 is equivalent to using penalty='l2', while setting l1_ratio=1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
Since your second result has l1_ratio equal to 0, it's equivalent to using the L2 penalty, so both candidates score identically and GridSearchCV simply keeps whichever one it evaluated first.
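If you want to convince yourself of the equivalence, here is a small check on synthetic data (a sketch, not the asker's dataset) showing that penalty='l2' and penalty='elasticnet' with l1_ratio=0 fit essentially the same model:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)

l2 = LogisticRegression(solver='saga', penalty='l2', C=0.1,
                        max_iter=5000, random_state=42).fit(X, y)
enet = LogisticRegression(solver='saga', penalty='elasticnet', l1_ratio=0.0, C=0.1,
                          max_iter=5000, random_state=42).fit(X, y)

# Coefficients should agree up to the solver's tolerance.
print(np.allclose(l2.coef_, enet.coef_, atol=1e-3))  # expected: True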

Invalid parameter for estimator Pipeline (SVR)

I have a data set with 100 columns of continuous features and a continuous label, and I want to run SVR: extracting relevant features, tuning hyperparameters, and then cross-validating the model fit to my data.
I wrote this code:
X_train, X_test, y_train, y_test = train_test_split(scaled_df, target, test_size=0.2)

cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the pipeline to evaluate
model = SVR()
fs = SelectKBest(score_func=mutual_info_regression)
pipeline = Pipeline(steps=[('sel', fs), ('svr', model)])
# define the grid
grid = dict()
# how many features to try
grid['estimator__sel__k'] = [i for i in range(1, X_train.shape[1]+1)]
# define the grid search
#search = GridSearchCV(pipeline, grid, scoring='neg_mean_squared_error', n_jobs=-1, cv=cv)
search = GridSearchCV(
    pipeline,
    # estimator=SVR(kernel='rbf'),
    param_grid={
        'estimator__svr__C': [0.1, 1, 10, 100, 1000],
        'estimator__svr__epsilon': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 1, 5, 10],
        'estimator__svr__gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 1, 5, 10]
    },
    scoring='neg_mean_squared_error',
    verbose=1,
    n_jobs=-1)

for param in search.get_params().keys():
    print(param)

# perform the search
results = search.fit(X_train, y_train)
# summarize best
print('Best MAE: %.3f' % results.best_score_)
print('Best Config: %s' % results.best_params_)
# summarize all
means = results.cv_results_['mean_test_score']
params = results.cv_results_['params']
for mean, param in zip(means, params):
    print(">%.3f with: %r" % (mean, param))
I get the error:
ValueError: Invalid parameter estimator for estimator Pipeline(memory=None,
         steps=[('sel',
                 SelectKBest(k=10,
                             score_func=<function mutual_info_regression at 0x7fd2ff649cb0>)),
                ('svr',
                 SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1,
                     gamma='scale', kernel='rbf', max_iter=-1, shrinking=True,
                     tol=0.001, verbose=False))],
         verbose=False). Check the list of available parameters with `estimator.get_params().keys()`.
When I print estimator.get_params().keys(), as suggested in the error message, I get:
cv
error_score
estimator__memory
estimator__steps
estimator__verbose
estimator__sel
estimator__svr
estimator__sel__k
estimator__sel__score_func
estimator__svr__C
estimator__svr__cache_size
estimator__svr__coef0
estimator__svr__degree
estimator__svr__epsilon
estimator__svr__gamma
estimator__svr__kernel
estimator__svr__max_iter
estimator__svr__shrinking
estimator__svr__tol
estimator__svr__verbose
estimator
iid
n_jobs
param_grid
pre_dispatch
refit
return_train_score
scoring
verbose
Fitting 5 folds for each of 405 candidates, totalling 2025 fits
But when I change the line:
pipeline = Pipeline(steps=[('sel',fs), ('svr', model)])
to:
pipeline = Pipeline(steps=[('estimator__sel',fs), ('estimator__svr', model)])
I get the error:
ValueError: Estimator names must not contain __: got ['estimator__sel', 'estimator__svr']
Could someone explain what I'm doing wrong, i.e. how do I combine the pipeline/feature selection step into the GridSearchCV?
As a side note, if I comment out pipeline in the GridSearchCV and uncomment estimator=SVR(kernel='rbf'), the cell runs without issue, but in that case I presume I am not incorporating the feature selection, as it's not called anywhere. I have seen some previous SO questions, e.g. here, but they don't seem to answer this specific question.
Is there a cleaner way to write this?
The first error message is about the pipeline parameters, not the search parameters, and indicates that your param_grid is bad, not the pipeline step names. Running pipeline.get_params().keys() should show you the right parameter names. Your grid should be:
param_grid={
    'svr__C': [0.1, 1, 10, 100, 1000],
    'svr__epsilon': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 1, 5, 10],
    'svr__gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 1, 5, 10]
},
I don't know how substituting the plain SVR for the pipeline runs; your parameter grid doesn't specify the right things there either...
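Putting it together, a minimal corrected search might look like this (a sketch assuming the pipeline, cv, X_train, and y_train defined in the question; the feature-selection step is tuned through 'sel__k' in the same grid):
search = GridSearchCV(
    pipeline,
    param_grid={
        'sel__k': list(range(1, X_train.shape[1] + 1)),
        'svr__C': [0.1, 1, 10, 100, 1000],
        'svr__epsilon': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 1, 5, 10],
        'svr__gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 1, 5, 10],
    },
    scoring='neg_mean_squared_error',
    cv=cv,
    n_jobs=-1)
results = search.fit(X_train, y_train)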

Recursive Feature Elimination and Grid Search for SVR using scikit-learn

I'm using SVR to solve a prediction problem and I would like to do feature selection as well as a hyperparameter search. I'm trying to use both RFECV and GridSearchCV, but I'm getting errors from my code.
My code is as follow:
def svr_model(X, Y):
    estimator = SVR(kernel='rbf')
    param_grid = {
        'C': [0.1, 1, 100, 1000],
        'epsilon': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10],
        'gamma': [0.0001, 0.001, 0.005, 0.1, 1, 3, 5]
    }

    selector = RFECV(estimator, step=1, cv=5)

    gsc = GridSearchCV(
        selector,
        param_grid,
        cv=5, scoring='neg_root_mean_squared_error', verbose=0, n_jobs=-1)

    grid_result = gsc.fit(X, Y)
    best_params = grid_result.best_params_

    best_svr = SVR(kernel='rbf', C=best_params["C"], epsilon=best_params["epsilon"], gamma=best_params["gamma"],
                   coef0=0.1, shrinking=True,
                   tol=0.001, cache_size=200, verbose=False, max_iter=-1)

    scoring = {
        'abs_error': 'neg_mean_absolute_error',
        'squared_error': 'neg_mean_squared_error',
        'r2': 'r2'}

    scores = cross_validate(best_svr, X, Y, cv=10, scoring=scoring, return_train_score=True, return_estimator=True)

    return scores
The error is:
ValueError: Invalid parameter C for estimator RFECV(cv=5,
      estimator=SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1,
                    gamma='scale', kernel='rbf', max_iter=-1, shrinking=True,
                    tol=0.001, verbose=False),
      min_features_to_select=1, n_jobs=None, scoring=None, step=1, verbose=0). Check the list of available parameters with `estimator.get_params().keys()`.
I'm quite new to using machine learning, any help would be highly appreciated.
Grid search runs the selector initialized with different combinations of the parameters passed in param_grid. But in this case, we want the grid search to initialize the estimator inside the selector. This is achieved by using the <estimator>__<parameter> naming style in the parameter dictionary. Follow the docs for more details.
Working code
estimator = SVR(kernel='linear')
selector = RFECV(estimator, step=1, cv=5)
gsc = GridSearchCV(
    selector,
    param_grid={
        'estimator__C': [0.1, 1, 100, 1000],
        'estimator__epsilon': [0.0001, 0.0005],
        'estimator__gamma': [0.0001, 0.001]},
    cv=5, scoring='neg_mean_squared_error', verbose=0, n_jobs=-1)
grid_result = gsc.fit(X, Y)
Two other bugs in your code:
neg_root_mean_squared_error is not a valid scoring method here (it was only added in scikit-learn 0.22)
the rbf kernel does not expose feature importances, so you cannot use that kernel if you want to use RFECV
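Once the search has run, the selected features and the tuned SVR can be inspected from the fitted grid object (a sketch, assuming the grid_result from the snippet above):
print(grid_result.best_params_)                # e.g. {'estimator__C': ..., 'estimator__epsilon': ..., ...}
print(grid_result.best_estimator_.support_)    # boolean mask of the features kept by RFECV
print(grid_result.best_estimator_.estimator_)  # the SVR refitted on the selected features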

GridSearchCV - FitFailedWarning: Estimator fit failed

I am running this:
# Hyperparameter tuning - Random Forest #
# Hyperparameters' grid
parameters = {'n_estimators': list(range(100, 250, 25)), 'criterion': ['gini', 'entropy'],
              'max_depth': list(range(2, 11, 2)), 'max_features': [0.1, 0.2, 0.3, 0.4, 0.5],
              'class_weight': [{0: 1, 1: i} for i in np.arange(1, 4, 0.2).tolist()],
              'min_samples_split': list(range(2, 7))}
# Instantiate random forest
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(random_state=0)
# Execute grid search and retrieve the best classifier
from sklearn.model_selection import GridSearchCV
classifiers_grid = GridSearchCV(estimator=classifier, param_grid=parameters, scoring='balanced_accuracy',
cv=5, refit=True, n_jobs=-1)
classifiers_grid.fit(X, y)
and I am receiving this warning:
.../anaconda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py:536:
FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details:
TypeError: '<' not supported between instances of 'str' and 'int'
Why is this and how can I fix it?
I had a similar FitFailedWarning with different details. After many runs I found that the parameter values being passed were the problem; try:
parameters = {'n_estimators': [100, 125, 150, 175, 200, 225, 250],
              'criterion': ['gini', 'entropy'],
              'max_depth': [2, 4, 6, 8, 10],
              'max_features': [0.1, 0.2, 0.3, 0.4, 0.5],
              'class_weight': [0.2, 0.4, 0.6, 0.8, 1.0],
              'min_samples_split': [2, 3, 4, 5, 6, 7]}
This should pass; for me it happened with XGBClassifier, where the value datatypes were somehow getting mixed up.
Another cause is a value outside the allowed range: for example, in XGBClassifier the 'subsample' parameter's maximum value is 1.0, so setting it to 1.1 will also raise FitFailedWarning.
For me this was giving the same error, but after removing the string 'None' from max_depth it fits properly.
param_grid = {'n_estimators': [100, 200, 300, 400, 500],
              'criterion': ['gini', 'entropy'],
              'max_depth': ['None', 5, 10, 20, 30, 40, 50, 60, 70],
              'min_samples_split': [5, 10, 20, 25, 30, 40, 50],
              'max_features': ['sqrt', 'log2'],
              'max_leaf_nodes': [5, 10, 20, 25, 30, 40, 50],
              'min_samples_leaf': [1, 100, 200, 300, 400, 500]
              }
Code which runs properly:
param_grid = {'n_estimators': [100, 200, 300, 400, 500],
              'criterion': ['gini', 'entropy'],
              'max_depth': [5, 10, 20, 30, 40, 50, 60, 70],
              'min_samples_split': [5, 10, 20, 25, 30, 40, 50],
              'max_features': ['sqrt', 'log2'],
              'max_leaf_nodes': [5, 10, 20, 25, 30, 40, 50],
              'min_samples_leaf': [1, 100, 200, 300, 400, 500]
              }
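Note that the culprit was the string 'None', not None itself; the Python object None is a valid max_depth for a random forest and can stay in the grid. A sketch of the fixed entry:
# None (the object) lets the trees grow fully;
# 'None' (the string) gets compared with integers and triggers the str/int TypeError.
param_grid['max_depth'] = [None, 5, 10, 20, 30, 40, 50, 60, 70]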
I too got the same error, and when I passed the hyperparameters as in the MachineLearningMastery example, I got output without the warning.
Try this approach if anyone else gets similar issues:
# grid search logistic regression model on the sonar dataset
from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# define model
model = LogisticRegression()
# define evaluation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define search space
space = dict()
space['solver'] = ['newton-cg', 'lbfgs', 'liblinear']
space['penalty'] = ['none', 'l1', 'l2', 'elasticnet']
space['C'] = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100]
# define search
search = GridSearchCV(model, space, scoring='accuracy', n_jobs=-1, cv=cv)
# execute search
result = search.fit(X, y)
# summarize result
print('Best Score: %s' % result.best_score_)
print('Best Hyperparameters: %s' % result.best_params_)
Make sure the y-variable is an int, not bool or str.
Change your last line of code to make the y series a 0 or 1, for example:
classifiers_grid.fit(X, list(map(int, y)))
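If the warning keeps coming back, one way to find the offending value (a debugging sketch, reusing the classifier and parameters from the question) is to rerun the search with error_score='raise', so the failing fit raises the full traceback instead of being silenced into a warning:
from sklearn.model_selection import GridSearchCV

debug_grid = GridSearchCV(estimator=classifier, param_grid=parameters,
                          scoring='balanced_accuracy', cv=5, n_jobs=1,
                          error_score='raise')
debug_grid.fit(X, y)  # the traceback now points at the parameter combination that fails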

How to get the params from a saved XGBoost model

I'm trying to train an XGBoost model using the params below:
xgb_params = {
    'objective': 'binary:logistic',
    'eval_metric': 'auc',
    'lambda': 0.8,
    'alpha': 0.4,
    'max_depth': 10,
    'max_delta_step': 1,
    'verbose': True
}
Since my input data is too big to be fully loaded into memory, I adopted incremental training:
xgb_clf = xgb.train(xgb_params, input_data, num_boost_round=rounds_per_batch,
                    xgb_model=model_path)
The code for prediction is:
xgb_clf = xgb.XGBClassifier()
booster = xgb.Booster()
booster.load_model(model_path)
xgb_clf._Booster = booster
raw_probas = xgb_clf.predict_proba(x)
The results seemed good, but when I tried to invoke xgb_clf.get_xgb_params(), I got a param dict in which all params were set to their default values.
I can guess that the root cause is that when I initialized the model I didn't pass any params in, so the model was initialized with the default values; but when it predicted, it used an internal booster that had been fitted with some pre-defined params.
However, I wonder whether there is any way, after I assign a pre-trained booster to an XGBClassifier, to see the real params that were used to train the booster rather than those used to initialize the classifier.
You seem to be mixing the sklearn API with the functional API in your code; if you stick to one of them, the parameters should persist in the pickle. Here's an example using the sklearn API.
import pickle
import numpy as np
import xgboost as xgb
from sklearn.datasets import load_digits
digits = load_digits(n_class=2)
y = digits['target']
X = digits['data']
xgb_params = {
'objective': 'binary:logistic',
'reg_lambda': 0.8,
'reg_alpha': 0.4,
'max_depth': 10,
'max_delta_step': 1,
}
clf = xgb.XGBClassifier(**xgb_params)
clf.fit(X, y, eval_metric='auc', verbose=True)
pickle.dump(clf, open("xgb_temp.pkl", "wb"))
clf2 = pickle.load(open("xgb_temp.pkl", "rb"))
assert np.allclose(clf.predict(X), clf2.predict(X))
print(clf2.get_xgb_params())
which produces
{'base_score': 0.5,
'colsample_bylevel': 1,
'colsample_bytree': 1,
'gamma': 0,
'learning_rate': 0.1,
'max_delta_step': 1,
'max_depth': 10,
'min_child_weight': 1,
'missing': nan,
'n_estimators': 100,
'objective': 'binary:logistic',
'reg_alpha': 0.4,
'reg_lambda': 0.8,
'scale_pos_weight': 1,
'seed': 0,
'silent': 1,
'subsample': 1}
If you are training like this -
dtrain = xgb.DMatrix(x_train, label=y_train)
model = xgb.train(model_params, dtrain, model_num_rounds)
Then the model returned is a Booster.
import json
json.loads(model.save_config())
The model.save_config() function lists the model parameters in addition to other configuration details.
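The same trick works for the original question's setup: even after the booster has been attached to an XGBClassifier, its training-time configuration can be read back from the booster itself. A sketch, assuming xgboost >= 1.0 (where Booster.save_config() is available) and the model_path from the question:
import json
import xgboost as xgb

booster = xgb.Booster()
booster.load_model(model_path)
config = json.loads(booster.save_config())
# The trained hyperparameters live under the "learner" section of the config.
print(json.dumps(config["learner"], indent=2))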
To add to @ytsaig's answer: if you are using the early_stopping_rounds argument in the clf.fit() method, certain additional parameters are generated but not returned by clf.get_xgb_params(). These can be accessed directly as clf.best_score, clf.best_iteration and clf.best_ntree_limit.
Ref: https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBClassifier.fit
