GridSearchCV with LGBMRegressor can't find best parameters - python

I have 2 regressors:
import lightgbm as lgb
from sklearn.model_selection import GridSearchCV
params = {
    'num_leaves': [7, 14, 21, 28, 31, 50],
    'learning_rate': [0.1, 0.03, 0.003],
    'max_depth': [-1, 3, 5],
    'n_estimators': [50, 100, 200, 500],
}
grid = GridSearchCV(lgb.LGBMRegressor(random_state=0), params, scoring='r2', cv=5)
grid.fit(X_train, y_train)
reg = lgb.LGBMRegressor(random_state=0)
reg.fit(X_train, y_train)
As you can see, I defined a random_state for both regressors. GridSearchCV is supposed to find the best params for the estimator and increase its score. But
r2_score(y_train, grid.predict(X_train)) # output is 0.69
r2_score(y_train, reg.predict(X_train)) # output is 0.84
So, how can I find the best params for LGBMRegressor?

Based on the documentation, after calling grid.fit() you can find the best estimator (a ready, refitted model) and the best params here:
grid.best_estimator_
grid.best_params_
FYI: random_state only controls the random parts of training (shuffling, for example).
In your case the two models end up with different params, so the results of your R2 metric differ accordingly.
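A quick sketch (assuming the grid from the question has already been fitted): with the default refit=True, best_estimator_ is already refitted on the whole training set with the best parameters, so it can be used directly:
best_model = grid.best_estimator_           # LGBMRegressor refitted with grid.best_params_
print(grid.best_params_, grid.best_score_)  # best parameter combination and its mean CV R2
preds = best_model.predict(X_train)         # ready to predict, no extra fit needed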

So, I believe you would have to script it like:
params = {
    'num_leaves': [7, 14, 21, 28, 31, 50],
    'learning_rate': [0.1, 0.03, 0.003],
    'max_depth': [-1, 3, 5],
    'n_estimators': [50, 100, 200, 500],
}
grid = GridSearchCV(lgb.LGBMRegressor(random_state=0), params, scoring='r2', cv=5)
grid.fit(X_train, y_train)
reg = lgb.LGBMRegressor(random_state=0)
reg.fit(X_train, y_train)
lgbm_tuned = grid.best_estimator_
r2_tuned = grid.best_score_
r2_regular = r2_score(y_train, reg.predict(X_train))
where r2_tuned is the best cross-validated score found by the grid search, lgbm_tuned is your model refitted with the best parameters, and r2_regular is your score with the default parameters.
It is strange to end up with a worse result after a grid search, especially when the grid includes LightGBM's default parameters.
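Also note that the two numbers in the question do not measure the same thing: grid.best_score_ is a mean cross-validated R2, while r2_score(y_train, reg.predict(X_train)) is a training-set R2, which is optimistically biased; GridSearchCV optimizes the CV score, not the training score. A minimal sketch (assuming the X_train, y_train and grid from above) that compares both models on the same cross-validated footing:
from sklearn.model_selection import cross_val_score

# mean 5-fold CV R2 for the default model vs. the tuned model
cv_default = cross_val_score(lgb.LGBMRegressor(random_state=0),
                             X_train, y_train, scoring='r2', cv=5).mean()
cv_tuned = cross_val_score(grid.best_estimator_,
                           X_train, y_train, scoring='r2', cv=5).mean()
print(f"default CV R2: {cv_default:.3f}, tuned CV R2: {cv_tuned:.3f}")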

Related

How can I adjust ranges for parameters for GridSearchCV?

I'm trying to tune parameters for XGBoost Text Classification.
parameters = {
    'max_depth': range(2, 11, 2),
    'n_estimators': range(200, 250, 10),
    'learning_rate': [0.1, 0.01, 0.09],
    'min_child_weight': range(0, 20, 4)
}
grid_search = GridSearchCV(
    estimator=model,
    param_grid=parameters,
    scoring='roc_auc',
    n_jobs=10,
    cv=10,
    verbose=True
)
However, the code runs for hours but gives no results. Any recommendations about the ranges or anything else would be appreciated!
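For scale, that grid has 5 x 5 x 3 x 5 = 375 parameter combinations, and with cv=10 that means 3,750 model fits, which can easily run for hours on a text-classification dataset. One common way to cut this down is RandomizedSearchCV, which samples a fixed budget of combinations from the same ranges; a sketch (not from the original thread, and assuming the model and parameters defined above):
from sklearn.model_selection import RandomizedSearchCV

random_search = RandomizedSearchCV(
    estimator=model,
    param_distributions=parameters,  # same ranges as the grid
    n_iter=30,                       # sample 30 combinations instead of all 375
    scoring='roc_auc',
    n_jobs=-1,                       # use all available cores
    cv=3,                            # fewer folds for a faster first pass
    verbose=1,
    random_state=0,
)
random_search.fit(X_train, y_train)  # X_train / y_train assumed to exist
print(random_search.best_score_, random_search.best_params_)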

Tune XGB Parameters

I am working on a project with a dataset of aircraft engines and their lifetime. I need to use XGBRegressor to get the best possible performance from my model on my validation data.
I am having trouble understanding the XGBRegressor documentation, and I was wondering how I could automate the search for optimal parameters instead of testing everything by hand.
I attached a part of my code related to XGB.
from xgboost import XGBRegressor
xgb = XGBRegressor(learning_rate=0.3, max_depth=7, n_estimators=230, subsample=0.7,
                   colsample_bylevel=0.7, colsample_bytree=0.7, min_child_weight=4,
                   reg_alpha=10, reg_lambda=10)
xgb.fit(X_train, y_train)
The following should help you achieve this; you can add more hyperparameters or more estimators to test different approaches. If you set cv=5 it will do 5-fold cross-validation; but if you have a specific validation split and only want results on that split, you can pass it to cv:
import numpy as np
from sklearn.model_selection import train_test_split

indices = np.arange(len(X_train))
train_idx, test_idx = train_test_split(indices, test_size=0.2)
cv_indices = [(train_idx, test_idx)]
Otherwise, use cv=5 to do 5-fold CV while searching for parameters.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor
dict_classifiers = {
    "XGB": XGBRegressor()
}
params = {
    "XGB": {'min_child_weight': [1, 5, 10],
            'gamma': [0.5, 1, 1.5, 2, 5],
            'subsample': [0.6, 0.8, 1.0],
            'colsample_bytree': [0.6, 0.8, 1.0],
            'max_depth': [3, 4, 5],
            'n_estimators': [300, 600],
            'learning_rate': [0.001, 0.01, 0.1],
            }
}
for classifier_name in dict_classifiers.keys() & params:
    print("training: ", classifier_name)
    gridSearch = GridSearchCV(
        estimator=dict_classifiers[classifier_name], param_grid=params[classifier_name], cv=cv_indices)
    gridSearch.fit(X_train,              # should have shape (n_samples, n_features)
                   y_train.reshape(-1))  # should be an array with shape (n_samples,)
    print(gridSearch.best_score_, gridSearch.best_params_)

warnings.warn("Estimator fit failed. The score on this train-test"

FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
I am getting the above warning after running the code below. Because of this, I am getting 0.0 for precision and F-score in the classification report. Please help me resolve this.
rfcl = RandomForestClassifier(n_estimators = 500,random_state=0)
rfcl = rfcl.fit(X_train, train_labels)
from sklearn.model_selection import GridSearchCV
param_grid = {
    'max_depth': [7, 10],
    'max_features': [4, 6],
    'min_samples_leaf': [50, 100],
    'min_samples_split': [150, 300],
    'n_estimators': [301, 500]
}
rfcl = RandomForestClassifier()
grid_search = GridSearchCV(estimator = rfcl, param_grid = param_grid, cv=3)
grid_search.fit(X_train, train_labels)
https://datascience.stackexchange.com/questions/81753/gridsearchcv-to-fine-tune-outputs-valueerror-and-fitfailedwarning
Try printing:
print(train_df.info())
print(test_df.info())
In my case, my y values were 0.0, 0.5 and 1.0 because I had normalized the original labels 0, 1, 2. When I changed my y back to 0, 1, 2, it worked!
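If the warning itself is not informative enough, a small sketch (reusing param_grid and the data names from the question): passing error_score='raise' to GridSearchCV makes the failing fit raise its real exception instead of silently scoring nan, so you can see exactly why the estimator fails:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

grid_search = GridSearchCV(estimator=RandomForestClassifier(),
                           param_grid=param_grid,   # same grid as in the question
                           cv=3,
                           error_score='raise')     # surface the real traceback
grid_search.fit(X_train, train_labels)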

Adding GridSearchCV result automatically to new random forest

I am using RandomForestRegressor for a regression problem, and using gridsearch to find the best hyperparameters.
from sklearn.model_selection import GridSearchCV
param_grid = {
    'bootstrap': [True],
    'max_depth': [2, 3, 5],
    'max_features': ['sqrt'],
    #'min_samples_leaf': [2, 3, 4, 5],
    'min_samples_split': [10, 20, 50],
    'n_estimators': [50, 100, 200, 500]
}
rf = RandomForestRegressor()
grid_search = GridSearchCV(estimator=rf, param_grid=param_grid,
                           cv=3, n_jobs=-1, verbose=2, return_train_score=True)
I get the best parameter using this line:
grid_search.best_params_
Is there a way to automatically pass these best parameters to a new RandomForestRegressor, e.g. to check the RMSE value? Something like this:
best_rf=RandomForestRegressor(grid_search.best_params_)
But sadly this doesn't work; I get an error: "n_estimators must be an integer, got <class 'dict'>."
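What the error is pointing at, as a short sketch (reusing grid_search from the question): RandomForestRegressor(grid_search.best_params_) passes the whole dict as the first positional argument (n_estimators), whereas unpacking it with ** passes each entry as a keyword argument:
# best_params_ is a plain dict, so unpack it into keyword arguments
best_rf = RandomForestRegressor(**grid_search.best_params_)
best_rf.fit(X_train, y_train)  # X_train / y_train assumed from the question

# Alternatively, grid_search.best_estimator_ is already a RandomForestRegressor
# refitted with those same parameters (with the default refit=True).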

RandomForestRegressor used with GridSearchCV and RandomSearchCV may be overfitting on test set

I am following along with the book titled: Hands-On Machine Learning with SciKit-Learn, Keras and TensorFlow by Aurelien Geron (link). In chapter 2 you get hands on with actually building an ML system using a dataset from StatLib's California Housing Prices (link).
I have been running cross-validation tests using BOTH GridSearchCV and RandomizedSearchCV to see which performs better (they both perform about the same; depending on the run, grid search will do better than random search and vice versa). During my cross-validation of the training set, all of my RMSEs come back (after about 10 folds) looking like so:
49871.10156541779 {'max_features': 6, 'n_estimators': 100} GRID SEARCH CV
49573.67188289324 {'max_features': 6, 'n_estimators': 300} GRID SEARCH CV
49759.116323927 {'max_features': 8, 'n_estimators': 100} GRID SEARCH CV
49388.93702859155 {'max_features': 8, 'n_estimators': 300} GRID SEARCH CV
49759.445071611895 {'max_features': 10, 'n_estimators': 100} GRID SEARCH CV
49517.74394767381 {'max_features': 10, 'n_estimators': 300} GRID SEARCH CV
49796.22587441326 {'max_features': 12, 'n_estimators': 100} GRID SEARCH CV
49616.61833604992 {'max_features': 12, 'n_estimators': 300} GRID SEARCH CV
49795.571075148444 {'max_features': 14, 'n_estimators': 300} GRID SEARCH CV
49790.38581725693 {'n_estimators': 100, 'max_features': 12} RANDOM SEARCH CV
49462.758078362356 {'n_estimators': 300, 'max_features': 8} RANDOM SEARCH CV
Please note that I am selecting the best results out of about 50 or so results to present here. I am using the following code to generate this:
param_grid = [{'n_estimators': [3, 10, 30, 100, 300],
               'max_features': [2, 4, 6, 8, 10, 12, 14]},
              {'bootstrap': [False], 'n_estimators': [3, 10, 12],
               'max_features': [2, 3, 4]}]
forest_regressor = RandomForestRegressor({'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse',
                                          'max_depth': None, 'max_features': 8, 'max_leaf_nodes': None,
                                          'max_samples': None, 'min_impurity_decrease': 0.0,
                                          'min_impurity_split': None, 'min_samples_leaf': 1,
                                          'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0,
                                          'n_estimators': 300, 'n_jobs': None, 'oob_score': False,
                                          'random_state': None, 'verbose': 0, 'warm_start': False})
grid_search = GridSearchCV(forest_regressor, param_grid, cv=10, scoring="neg_mean_squared_error",
                           return_train_score=True, refit=True)
grid_search.fit(Dataframe, TrainingLabels)
prediction = grid_search.predict(Dataframe)
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
    print(np.sqrt(-mean_score), params, "GRID SEARCH CV")
##################################################################################
# Randomized Search Cross Validation
param_grid = [{'n_estimators': [3, 10, 30, 100, 300],
               'max_features': [2, 4, 6, 8, 10, 12, 14]},
              {'bootstrap': [False], 'n_estimators': [3, 10, 12],
               'max_features': [2, 3, 4]}]
forest_regressor = RandomForestRegressor({'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse',
                                          'max_depth': None, 'max_features': 8, 'max_leaf_nodes': None,
                                          'max_samples': None, 'min_impurity_decrease': 0.0,
                                          'min_impurity_split': None, 'min_samples_leaf': 1,
                                          'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0,
                                          'n_estimators': 300, 'n_jobs': None, 'oob_score': False,
                                          'random_state': None, 'verbose': 0, 'warm_start': False})
rand_search = RandomizedSearchCV(forest_regressor, param_grid, cv=10, refit=True,
                                 scoring='neg_mean_squared_error', return_train_score=True)
rand_search.fit(Dataframe, TrainingLabels)
prediction = rand_search.predict(Dataframe)
cvres = rand_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
    print(np.sqrt(-mean_score), params, "RANDOM SEARCH CV")
Now, I am doing things a little differently than the book does; my pipeline looks like this:
import pandas as pd
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from scipy import stats
class Dataframe_Manipulation:
    def __init__(self):
        self.dataframe = pd.read_csv(r'C:\Users\bohayes\AppData\Local\Programs\Python\Python38\Excel and Text\housing.csv')

    def Cat_Creation(self):
        # Creation of an Income Category to organize the median incomes into strata (bins) to sample from
        self.income_cat = self.dataframe['income_category'] = pd.cut(self.dataframe['median_income'],
                                                                     bins=[0., 1.5, 3.0, 4.5, 6., np.inf],
                                                                     labels=[1, 2, 3, 4, 5])
        self.rooms_per_house_cat = self.dataframe['rooms_per_house'] = self.dataframe['total_rooms']/self.dataframe['households']
        self.bedrooms_per_room_cat = self.dataframe['bedrooms_per_room'] = self.dataframe['total_bedrooms']/self.dataframe['total_rooms']
        self.pop_per_house = self.dataframe['pop_per_house'] = self.dataframe['population'] / self.dataframe['households']
        return self.dataframe

    def Fill_NA(self):
        self.imputer = KNNImputer(n_neighbors=5, weights='uniform')
        self.dataframe['total_bedrooms'] = self.imputer.fit_transform(self.dataframe[['total_bedrooms']])
        self.dataframe['bedrooms_per_room'] = self.imputer.fit_transform(self.dataframe[['bedrooms_per_room']])
        return self.dataframe

    def Income_Cat_Split(self):
        self.inc_cat_split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
        for self.train_index, self.test_index in self.inc_cat_split.split(self.dataframe, self.dataframe['income_category']):
            self.strat_train_set = self.dataframe.loc[self.train_index].reset_index(drop=True)
            self.strat_test_set = self.dataframe.loc[self.test_index].reset_index(drop=True)
            # the proportion is the % of total instances and which strata they are assigned to
            self.proportions = self.strat_test_set['income_category'].value_counts() / len(self.strat_test_set)
            # Only pulling out training set!!!!!!!!!!!!!!!
            return self.strat_train_set, self.strat_test_set

    def Remove_Cats_Test(self):
        self.test_labels = self.strat_test_set['median_house_value'].copy()
        self.strat_test_set = self.strat_test_set.drop(['median_house_value'], axis=1)
        return self.test_labels

    def Remove_Cats_Training(self):
        self.training_labels = self.strat_train_set['median_house_value'].copy()
        self.strat_train_set = self.strat_train_set.drop(['median_house_value'], axis=1)
        return self.training_labels

    def Encode_Transform(self):
        self.column_trans = make_column_transformer((OneHotEncoder(), ['ocean_proximity']), remainder='passthrough')
        self.training_set_encoded = self.column_trans.fit_transform(self.strat_train_set)
        self.test_set_encoded = self.column_trans.fit_transform(self.strat_test_set)
        return self.training_set_encoded, self.test_set_encoded

    def Standard_Scaler(self):
        self.scaler = StandardScaler()
        self.scale_training_set = self.scaler.fit(self.training_set_encoded)
        self.scale_test_set = self.scaler.fit(self.test_set_encoded)
        self.scaled_training_set = self.scaler.transform(self.training_set_encoded)
        self.scaled_test_set = self.scaler.transform(self.test_set_encoded)
        return self.scaled_training_set

    def Test_Set(self):
        return self.scaled_test_set
A = Dataframe_Manipulation()
B = A.Cat_Creation()
C = A.Fill_NA()
D = A.Income_Cat_Split()
TestLabels = A.Remove_Cats_Test()
TrainingLabels = A.Remove_Cats_Training()
G = A.Encode_Transform()
TrainingSet = A.Standard_Scaler()
TestSet = A.Test_Set()
The Grid and Random Searches come after this bit; however, my RMSE scores come back drastically different when I test on the TestSet, which leads me to believe that I am overfitting. Or maybe the RMSEs look different because I am using a smaller test set? Here you go:
19366.910530221918
19969.043158986697
Here is the code that generates those numbers; it comes after I run the Grid and Random Searches and score the fitted models on the test set:
#Final Grid Model
final_grid_model = grid_search.best_estimator_
final_grid_prediction = final_grid_model.predict(TestSet)
final_grid_mse = mean_squared_error(TestLabels, final_grid_prediction)
final_grid_rmse = np.sqrt(final_grid_mse)
print(final_grid_rmse)
###################################################################################
#Final Random Model
final_rand_model = rand_search.best_estimator_
final_rand_prediction = final_rand_model.predict(TestSet)
final_rand_mse = mean_squared_error(TestLabels, final_rand_prediction)
final_rand_rmse = np.sqrt(final_rand_mse)
print(final_rand_rmse)
Just to make sure, I also computed a confidence interval for the model as well; here are the code and results:
#Confidence Grid Search
confidence = 0.95
squared_errors = (final_grid_prediction - TestLabels) ** 2
print(np.sqrt(stats.t.interval(confidence, len(squared_errors) - 1,
                               loc=squared_errors.mean(),
                               scale=stats.sem(squared_errors))))
###################################################################################
#Confidence Random Search
confidence1 = 0.95
squared_errors1 = (final_rand_prediction - TestLabels) ** 2
print(np.sqrt(stats.t.interval(confidence1, len(squared_errors1) - 1,
                               loc=squared_errors1.mean(),
                               scale=stats.sem(squared_errors1))))
>>>[18643.4914044 20064.26363526]
[19222.30464011 20688.84660134]
Why is it that my average RMSE score on the TrainingSet is about 49,000 and that same score on the test set is averaging at about 19,000? I must be overfitting, but I am not sure how or where I am going wrong.
tl;dr: Your code is unnecessarily convoluted for such a (standard) job; do not re-invent the wheel, go with a pipeline instead.
There is an error in how you scale your data, which is most probably the root cause of the behavior observed here; in the second of these lines:
self.scale_training_set = self.scaler.fit(self.training_set_encoded)
self.scale_test_set = self.scaler.fit(self.test_set_encoded)
you essentially overwrite your scaler with one fitted on the test set, and you subsequently scale your training data with this test-fitted scaler:
self.scaled_training_set = self.scaler.transform(self.training_set_encoded)
Since your test set is only 20% of the dataset, it does not contain enough values to adequately cover the whole range (min-max) of the (bigger) training set. As a result, the training set is mis-scaled (it actually contains values well above the max value of the test set), which probably leads to a higher RMSE (RMSE is not scale invariant, and by definition depends on the scale of the predictions).
You may think that using StratifiedShuffleSplit upstream should have protected you from such a case, but the truth is that StratifiedShuffleSplit is only good for classification datasets; it is actually meaningless for regression ones (I am genuinely surprised that it does not throw an error here).
To remedy this issue, you should just remove the line
self.scale_test_set = self.scaler.fit(self.test_set_encoded)
from your Standard_Scaler() function.
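With that line removed, the scaler is fitted on the training set only and merely transforms the test set; a sketch of the corrected method (names taken from the question):
def Standard_Scaler(self):
    self.scaler = StandardScaler()
    self.scaler.fit(self.training_set_encoded)       # fit on the training data only
    self.scaled_training_set = self.scaler.transform(self.training_set_encoded)
    self.scaled_test_set = self.scaler.transform(self.test_set_encoded)  # transform only
    return self.scaled_training_set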
Keep in mind that, in general, we never fit on a test set - we only transform it; scikit-learn pipelines, apart from saving you from writing all this boilerplate code (which increases the probability of coding errors), will protect you from exactly this kind of mistake...
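A minimal pipeline sketch along the lines the answer suggests (not the book's exact code; it assumes the raw DataFrame from housing.csv is loaded as housing, with 'ocean_proximity' as the only categorical column and 'median_house_value' as the target): every CV fold fits the imputer and scaler on its training portion only and just transforms the held-out portion.
from sklearn.compose import make_column_transformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import KNNImputer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# assumes `housing` is the raw DataFrame read from housing.csv
X = housing.drop(columns=['median_house_value'])
y = housing['median_house_value']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

num_cols = X.columns.drop('ocean_proximity')  # all numeric feature columns
preprocess = make_column_transformer(
    (OneHotEncoder(handle_unknown='ignore'), ['ocean_proximity']),
    (make_pipeline(KNNImputer(n_neighbors=5), StandardScaler()), num_cols),
)

model = make_pipeline(preprocess, RandomForestRegressor(random_state=42))
param_grid = {'randomforestregressor__n_estimators': [100, 300],
              'randomforestregressor__max_features': [6, 8]}

# every CV fold (and the final refit) fits the imputer/scaler on its training
# portion only; held-out data is only ever transformed
grid = GridSearchCV(model, param_grid, cv=5, scoring='neg_mean_squared_error')
grid.fit(X_train, y_train)
test_predictions = grid.predict(X_test)  # preprocessing is applied via transform() here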
