Invalid Parameter loss for estimator SVR - python

I used GridSearchCV for hyperparameter tuning, but it throws an error. This is my code:
param_grid = {"kernel" : ['linear', 'poly', 'rbf', 'sigmoid'],
              'loss' : ['epsilon_insensitive', 'squared_epsilon_insensitive'],
              "max_iter" : [1, 10, 20],
              'C' : [np.arange(0, 20, 1)]}
model = GridSearchCV(estimator = svr, param_grid = param_grid, cv = 5, verbose = 3, n_jobs = -1)
m1 = model.fit(x_train, y_train)
ValueError: Invalid parameter loss for estimator SVR(C=array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]), kernel='linear').
Check the list of available parameters with `estimator.get_params().keys()`.

Some errors that I spotted:
You are specifying a loss parameter, and possible values for it, that are only defined for LinearSVR, not SVR. On the other hand, if you do want to use a LinearSVR, you can't specify a kernel, since it is always linear.
I also noticed that 'C' : [np.arange(0,20,1)] in the grid definition will yield an error, since it results in a nested list. Just use 'C' : np.arange(0,20,1) (and note that SVR requires C to be strictly positive, so starting the range at 1 avoids failed fits).
Assuming, then, that you want an SVR, the following should work for you:
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

svr = SVR()
param_grid = {"kernel" : ['linear', 'poly', 'rbf', 'sigmoid'],
              "max_iter" : [1, 10, 20],
              'C' : np.arange(1, 20, 1)}   # C must be strictly positive
model = GridSearchCV(estimator = svr, param_grid = param_grid,
                     cv = 5, verbose = 3, n_jobs = -1)
m1 = model.fit(X_train, y_train)
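Once the search has finished, you can inspect the result directly on the fitted GridSearchCV object. A minimal sketch (X_test is assumed to exist from your original train/test split):

print(m1.best_params_)        # best parameter combination
print(m1.best_score_)         # its mean cross-validated score

# best_estimator_ is already refitted on the full training data (refit=True by default)
y_pred = m1.best_estimator_.predict(X_test)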

Related

Visualizing change in hyper-parameter tuning using GridSearchCV for Support vector machine model

I have created an SVM model, and am using gridsearch to tune the hyper-parameters C, gamma and the kernel. Is there any way of visualizing the change in accuracy in tuning these? This is my code:
from sklearn import svm
from sklearn.model_selection import GridSearchCV

# Initiate model
svmmodel = svm.SVC()
classifier_svm = svm.SVC(kernel='linear')
classifier_svm.fit(x_trainvec, y_train)
prediction_svm = classifier_svm.predict(x_testvec)

# Tune hyperparameters
param_svm = {'C': [0.1, 1, 10],
             'gamma': [1, 0.1],
             'kernel': ['linear', 'poly', 'rbf']}
gridsvm = GridSearchCV(classifier_svm, param_svm, refit = True, return_train_score = True, cv = 5, verbose = 10, scoring = 'f1_weighted')
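There is no built-in plot for this, but a common approach is to load cv_results_ into a DataFrame and plot the mean test score against the parameters. A hedged sketch, assuming gridsvm has been fitted on x_trainvec/y_train and that pandas and matplotlib are available:

import pandas as pd
import matplotlib.pyplot as plt

gridsvm.fit(x_trainvec, y_train)

# cv_results_ holds one row per parameter combination
results = pd.DataFrame(gridsvm.cv_results_)

# Mean weighted F1 against C, one line per kernel (averaged over gamma for brevity)
for kernel, group in results.groupby('param_kernel'):
    scores = group.groupby('param_C')['mean_test_score'].mean()
    plt.plot(scores.index.astype(float), scores.values, marker='o', label=str(kernel))

plt.xlabel('C')
plt.ylabel('mean_test_score (f1_weighted)')
plt.legend(title='kernel')
plt.show()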

Solver lbfgs supports only 'l2' or 'none' penalties, got l1 penalty

I'm building a logistic regression model to predict a binary target feature. I want to try different values of different parameters using the param_grid argument, to find the best fit with the best values. This is my code:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=42)
logModel = LogisticRegression(C=1, penalty='l1', solver='liblinear')
Grid_params = {
    "penalty" : ['l1', 'l2', 'elasticnet', 'none'],
    "C" : [0.001, 0.01, 0.1, 1, 10, 100, 1000],  # basically, smaller C means stronger regularization
    'solver' : ['lbfgs', 'newton-cg', 'liblinear', 'sag', 'saga'],
    'max_iter' : [50, 100, 200, 500, 1000, 2500]
}
clf = GridSearchCV(logModel, param_grid=Grid_params, cv=10, verbose=True, n_jobs=-1, error_score='raise')
clf_fitted = clf.fit(X_train, Y_train)
And this is where I get the error. I have already read that some solvers don't work with l1, and some don't work with l2. How can I set up param_grid in this case?
I also tried a plain logModel = LogisticRegression(), but that didn't work either.
Full error:
ValueError: Solver lbfgs supports only 'l2' or 'none' penalties, got l1 penalty.
GridSearchCV accepts a list of dicts for exactly this purpose. Given that you absolutely need to include the solvers in the grid, you should be able to do something like this:
Grid_params = [
    {'solver' : ['saga'],
     'penalty' : ['elasticnet', 'l1', 'l2', 'none'],
     'l1_ratio' : [0.5],   # required when penalty='elasticnet'; otherwise ignored with a warning
     'max_iter' : [50, 100, 200, 500, 1000, 2500],
     'C' : [0.001, 0.01, 0.1, 1, 10, 100, 1000]},
    {'solver' : ['newton-cg', 'lbfgs'],
     'penalty' : ['l2', 'none'],
     'max_iter' : [50, 100, 200, 500, 1000, 2500],
     'C' : [0.001, 0.01, 0.1, 1, 10, 100, 1000]},
    # add more parameter sets as needed...
]
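The rest of your code can stay the same; pass the list straight to GridSearchCV. A minimal sketch reusing your logModel and clf names:

clf = GridSearchCV(logModel, param_grid=Grid_params, cv=10, verbose=True,
                   n_jobs=-1, error_score='raise')
clf_fitted = clf.fit(X_train, Y_train)

print(clf_fitted.best_params_)   # e.g. which solver/penalty combination won
print(clf_fitted.best_score_)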

Adding GridSearchCV result automatically to new random forest

I am using RandomForestRegressor for a regression problem, and using gridsearch to find the best hyperparameters.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    'bootstrap': [True],
    'max_depth': [2, 3, 5],
    'max_features': ['sqrt'],
    #'min_samples_leaf': [2, 3, 4, 5],
    'min_samples_split': [10, 20, 50],
    'n_estimators': [50, 100, 200, 500]
}
rf = RandomForestRegressor()
grid_search = GridSearchCV(estimator = rf, param_grid = param_grid,
                           cv = 3, n_jobs = -1, verbose = 2, return_train_score=True)
I get the best parameter using this line:
grid_search.best_params_
Is there a way to automatically pass these best parameters to a new RandomForestRegressor, e.g. to check the RMSE? Something like this:
best_rf = RandomForestRegressor(grid_search.best_params_)
But sadly this doesn't work; I get an error: "n_estimators must be an integer, got <class 'dict'>."
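A sketch of what usually works here: best_params_ is a plain dict, so it has to be unpacked with ** to become keyword arguments (X_train/y_train and X_test/y_test are assumed to be your existing splits):

import numpy as np
from sklearn.metrics import mean_squared_error

# Option 1: build a fresh regressor from the best parameter dict
best_rf = RandomForestRegressor(**grid_search.best_params_)
best_rf.fit(X_train, y_train)

# Option 2: reuse the estimator GridSearchCV already refitted on the whole training set
best_rf = grid_search.best_estimator_

# Check RMSE on a held-out set
rmse = np.sqrt(mean_squared_error(y_test, best_rf.predict(X_test)))
print(rmse)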

RandomForestRegressor used with GridSearchCV and RandomSearchCV may be overfitting on test set

I am following along with the book titled: Hands-On Machine Learning with SciKit-Learn, Keras and TensorFlow by Aurelien Geron (link). In chapter 2 you get hands on with actually building an ML system using a dataset from StatLib's California Housing Prices (link).
I have been running cross validation tests using BOTH GridSearchCV and RandomizedSearchCV to see which performs better (they both perform about the same; depending on the run, grid search beats random search and vice versa). During cross validation on the training set, all of my RMSEs come back (after about 10 folds) looking like so:
49871.10156541779 {'max_features': 6, 'n_estimators': 100} GRID SEARCH CV
49573.67188289324 {'max_features': 6, 'n_estimators': 300} GRID SEARCH CV
49759.116323927 {'max_features': 8, 'n_estimators': 100} GRID SEARCH CV
49388.93702859155 {'max_features': 8, 'n_estimators': 300} GRID SEARCH CV
49759.445071611895 {'max_features': 10, 'n_estimators': 100} GRID SEARCH CV
49517.74394767381 {'max_features': 10, 'n_estimators': 300} GRID SEARCH CV
49796.22587441326 {'max_features': 12, 'n_estimators': 100} GRID SEARCH CV
49616.61833604992 {'max_features': 12, 'n_estimators': 300} GRID SEARCH CV
49795.571075148444 {'max_features': 14, 'n_estimators': 300} GRID SEARCH CV
49790.38581725693 {'n_estimators': 100, 'max_features': 12} RANDOM SEARCH CV
49462.758078362356 {'n_estimators': 300, 'max_features': 8} RANDOM SEARCH CV
Please note that I am selecting the best results out of about 50 or so results to present here. I am using the following code to generate this:
param_grid = [{'n_estimators' : [3, 10, 30, 100, 300],
               'max_features' : [2, 4, 6, 8, 10, 12, 14]},
              {'bootstrap' : [False], 'n_estimators' : [3, 10, 12],
               'max_features' : [2, 3, 4]}]

forest_regressor = RandomForestRegressor({'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse',
                                          'max_depth': None, 'max_features': 8, 'max_leaf_nodes': None,
                                          'max_samples': None, 'min_impurity_decrease': 0.0,
                                          'min_impurity_split': None, 'min_samples_leaf': 1,
                                          'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0,
                                          'n_estimators': 300, 'n_jobs': None, 'oob_score': False,
                                          'random_state': None, 'verbose': 0, 'warm_start': False})

grid_search = GridSearchCV(forest_regressor, param_grid, cv=10, scoring="neg_mean_squared_error",
                           return_train_score=True, refit=True)
grid_search.fit(Dataframe, TrainingLabels)
prediction = grid_search.predict(Dataframe)

cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
    print(np.sqrt(-mean_score), params, "GRID SEARCH CV")

##################################################################################
# Randomized Search Cross Validation
param_grid = [{'n_estimators' : [3, 10, 30, 100, 300],
               'max_features' : [2, 4, 6, 8, 10, 12, 14]},
              {'bootstrap' : [False], 'n_estimators' : [3, 10, 12],
               'max_features' : [2, 3, 4]}]

forest_regressor = RandomForestRegressor({'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse',
                                          'max_depth': None, 'max_features': 8, 'max_leaf_nodes': None,
                                          'max_samples': None, 'min_impurity_decrease': 0.0,
                                          'min_impurity_split': None, 'min_samples_leaf': 1,
                                          'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0,
                                          'n_estimators': 300, 'n_jobs': None, 'oob_score': False,
                                          'random_state': None, 'verbose': 0, 'warm_start': False})

rand_search = RandomizedSearchCV(forest_regressor, param_grid, cv=10, refit=True,
                                 scoring='neg_mean_squared_error', return_train_score=True)
rand_search.fit(Dataframe, TrainingLabels)
prediction = rand_search.predict(Dataframe)

cvres = rand_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
    print(np.sqrt(-mean_score), params, "RANDOM SEARCH CV")
Now, I am doing things a little differently from the book; my pipeline looks like this:
import pandas as pd
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from scipy import stats
class Dataframe_Manipulation:
    def __init__(self):
        self.dataframe = pd.read_csv(r'C:\Users\bohayes\AppData\Local\Programs\Python\Python38\Excel and Text\housing.csv')

    def Cat_Creation(self):
        # Creation of an Income Category to organize the median incomes into strata (bins) to sample from
        self.income_cat = self.dataframe['income_category'] = pd.cut(self.dataframe['median_income'],
                                                                     bins=[0., 1.5, 3.0, 4.5, 6., np.inf],
                                                                     labels=[1, 2, 3, 4, 5])
        self.rooms_per_house_cat = self.dataframe['rooms_per_house'] = self.dataframe['total_rooms']/self.dataframe['households']
        self.bedrooms_per_room_cat = self.dataframe['bedrooms_per_room'] = self.dataframe['total_bedrooms']/self.dataframe['total_rooms']
        self.pop_per_house = self.dataframe['pop_per_house'] = self.dataframe['population'] / self.dataframe['households']
        return self.dataframe

    def Fill_NA(self):
        self.imputer = KNNImputer(n_neighbors=5, weights='uniform')
        self.dataframe['total_bedrooms'] = self.imputer.fit_transform(self.dataframe[['total_bedrooms']])
        self.dataframe['bedrooms_per_room'] = self.imputer.fit_transform(self.dataframe[['bedrooms_per_room']])
        return self.dataframe

    def Income_Cat_Split(self):
        self.inc_cat_split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
        for self.train_index, self.test_index in self.inc_cat_split.split(self.dataframe, self.dataframe['income_category']):
            self.strat_train_set = self.dataframe.loc[self.train_index].reset_index(drop=True)
            self.strat_test_set = self.dataframe.loc[self.test_index].reset_index(drop=True)
        # the proportion is the % of total instances and which strata they are assigned to
        self.proportions = self.strat_test_set['income_category'].value_counts() / len(self.strat_test_set)
        # Only pulling out training set!!!!!!!!!!!!!!!
        return self.strat_train_set, self.strat_test_set

    def Remove_Cats_Test(self):
        self.test_labels = self.strat_test_set['median_house_value'].copy()
        self.strat_test_set = self.strat_test_set.drop(['median_house_value'], axis=1)
        return self.test_labels

    def Remove_Cats_Training(self):
        self.training_labels = self.strat_train_set['median_house_value'].copy()
        self.strat_train_set = self.strat_train_set.drop(['median_house_value'], axis=1)
        return self.training_labels

    def Encode_Transform(self):
        self.column_trans = make_column_transformer((OneHotEncoder(), ['ocean_proximity']), remainder='passthrough')
        self.training_set_encoded = self.column_trans.fit_transform(self.strat_train_set)
        self.test_set_encoded = self.column_trans.fit_transform(self.strat_test_set)
        return self.training_set_encoded, self.test_set_encoded

    def Standard_Scaler(self):
        self.scaler = StandardScaler()
        self.scale_training_set = self.scaler.fit(self.training_set_encoded)
        self.scale_test_set = self.scaler.fit(self.test_set_encoded)
        self.scaled_training_set = self.scaler.transform(self.training_set_encoded)
        self.scaled_test_set = self.scaler.transform(self.test_set_encoded)
        return self.scaled_training_set

    def Test_Set(self):
        return self.scaled_test_set
A = Dataframe_Manipulation()
B = A.Cat_Creation()
C = A.Fill_NA()
D = A.Income_Cat_Split()
TestLabels = A.Remove_Cats_Test()
TrainingLabels = A.Remove_Cats_Training()
G = A.Encode_Transform()
TrainingSet = A.Standard_Scaler()
TestSet = A.Test_Set()
The Grid and Random Searches come after this bit. However, my RMSE scores come back drastically different when I test the models on the TestSet, which leads me to believe that I am overfitting; or maybe the RMSEs look different because I am using a smaller test set? Here you go:
19366.910530221918
19969.043158986697
Here is the code that generates those numbers; it runs after the Grid and Random Searches, using the test set and test labels:
#Final Grid Model
final_grid_model = grid_search.best_estimator_
final_grid_prediction = final_grid_model.predict(TestSet)
final_grid_mse = mean_squared_error(TestLabels, final_grid_prediction)
final_grid_rmse = np.sqrt(final_grid_mse)
print(final_grid_rmse)
###################################################################################
#Final Random Model
final_rand_model = rand_search.best_estimator_
final_rand_prediction = final_rand_model.predict(TestSet)
final_rand_mse = mean_squared_error(TestLabels, final_rand_prediction)
final_rand_rmse = np.sqrt(final_rand_mse)
print(final_rand_rmse)
Just to be sure, I also computed a 95% confidence interval for each model; here are the code and results:
#Confidence Grid Search
confidence = 0.95
squared_errors = (final_grid_prediction - TestLabels) ** 2
print(np.sqrt(stats.t.interval(confidence, len(squared_errors) - 1,
                               loc=squared_errors.mean(),
                               scale=stats.sem(squared_errors))))
###################################################################################
#Confidence Random Search
confidence1 = 0.95
squared_errors1 = (final_rand_prediction - TestLabels) ** 2
print(np.sqrt(stats.t.interval(confidence1, len(squared_errors1) - 1,
                               loc=squared_errors1.mean(),
                               scale=stats.sem(squared_errors1))))
>>>[18643.4914044 20064.26363526]
[19222.30464011 20688.84660134]
Why is it that my average RMSE score on the TrainingSet is about 49,000 and that same score on the test set is averaging at about 19,000? I must be overfitting, but I am not sure how or where I am going wrong.
tl;dr: Your code is unnecessarily convoluted for such a (standard) job; do not re-invent the wheel, go with a pipeline instead.
There is an error in how you scale your data, and it is most probably the root cause of the behavior you observe. In these two lines:
self.scale_training_set = self.scaler.fit(self.training_set_encoded)
self.scale_test_set = self.scaler.fit(self.test_set_encoded)
the second line essentially overwrites your scaler with the result of fitting it on the test set, so you subsequently scale your training data with this test-fitted scaler:
self.scaled_training_set = self.scaler.transform(self.training_set_encoded)
Since your test set is only 20% of the dataset, it does not contain enough values to adequately cover the whole range (min-max) of the (bigger) training set; as a result, the training set is mis-scaled (it actually contains values well beyond the max of the test set), which probably leads to a higher RMSE (RMSE is not scale invariant, and by definition it depends on the scale of the predictions).
You may think that using StratifiedShuffleSplit upstream should have protected you from such a case, but the truth is that StratifiedShuffleSplit is only good for classification datasets and is actually meaningless for regression ones (I am genuinely surprised that it does not throw an error here).
To remedy this issue, you should just remove the line
self.scale_test_set = self.scaler.fit(self.test_set_encoded)
from your Standard_Scaler() function.
Keep in mind that, in general, we never fit on a test set; we only transform. Scikit-learn pipelines, apart from saving you from writing all this boilerplate code (which itself increases the probability of coding errors), will protect you from this kind of mistake...
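A minimal sketch of what such a pipeline could look like (hedged: train_features/train_labels/test_features are placeholders for your stratified splits, and the regressor is just an example estimator):

from sklearn.pipeline import Pipeline
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.ensemble import RandomForestRegressor

preprocess = make_column_transformer(
    (OneHotEncoder(), ['ocean_proximity']),
    remainder='passthrough',
    sparse_threshold=0   # force a dense output so StandardScaler can center it
)

pipe = Pipeline([
    ('encode', preprocess),
    ('scale', StandardScaler()),         # statistics learned from the training data only
    ('model', RandomForestRegressor())
])

# fit learns the encoder/scaler statistics from the training data alone...
pipe.fit(train_features, train_labels)

# ...and predict only *transforms* the test data with those same statistics
test_predictions = pipe.predict(test_features)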

GridSearchCV with LGBMRegressor can't find best parameters

I have 2 regressors:
import lightgbm as lgb
from sklearn.model_selection import GridSearchCV

params = {
    'num_leaves': [7, 14, 21, 28, 31, 50],
    'learning_rate': [0.1, 0.03, 0.003],
    'max_depth': [-1, 3, 5],
    'n_estimators': [50, 100, 200, 500],
}

grid = GridSearchCV(lgb.LGBMRegressor(random_state=0), params, scoring='r2', cv=5)
grid.fit(X_train, y_train)

reg = lgb.LGBMRegressor(random_state=0)
As you can see, I defined a random_state for both regressors. GridSearchCV is supposed to find the best params for the estimator to increase its scoring. But:
r2_score(y_train, grid.predict(X_train)) # output is 0.69
r2_score(y_train, reg.predict(X_train)) # output is 0.84
So, how can I find the best params for LGBMRegressor?
Based on the documentation, after calling grid.fit() you can find the best estimator (a ready-to-use model) and the best params here:
grid.best_estimator_
grid.best_params_
FYI: random_state only controls the random parts (shuffling, for example). In your case the two models end up with different parameters, so the results of your R2 metric differ accordingly.
So, I believe you would have to script it like this:
from sklearn.metrics import r2_score

params = {
    'num_leaves': [7, 14, 21, 28, 31, 50],
    'learning_rate': [0.1, 0.03, 0.003],
    'max_depth': [-1, 3, 5],
    'n_estimators': [50, 100, 200, 500],
}

grid = GridSearchCV(lgb.LGBMRegressor(random_state=0), params, scoring='r2', cv=5)
grid.fit(X_train, y_train)

reg = lgb.LGBMRegressor(random_state=0)
reg.fit(X_train, y_train)

lgbm_tuned = grid.best_estimator_
r2_tuned = grid.best_score_                               # mean cross-validated R2 of the best combination
r2_regular = r2_score(y_train, reg.predict(X_train))      # training-set R2 with default parameters
where r2_tuned is the best cross-validated score found by the grid search, lgbm_tuned is your model refitted with the best parameters, and r2_regular is your training-set score with default parameters.
It is strange to get a worse result after a grid search, especially when the parameter grid includes LightGBM's default parameters.
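If you want to compare the two models on an equal footing, evaluate both fitted estimators with the same r2_score call. A small sketch, assuming the X_train/y_train from above (a held-out X_test/y_test would make the comparison fairer still):

from sklearn.metrics import r2_score
import lightgbm as lgb

# Rebuild a regressor from the best parameters found by the search
tuned = lgb.LGBMRegressor(random_state=0, **grid.best_params_)
tuned.fit(X_train, y_train)

default = lgb.LGBMRegressor(random_state=0)
default.fit(X_train, y_train)

# Same metric, same data, for both models
print('tuned  :', r2_score(y_train, tuned.predict(X_train)))
print('default:', r2_score(y_train, default.predict(X_train)))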
