LinearSVC and LogisticRegression are equivalent? - python

I am comparing the performance of scikit-learn's LinearSVC and LogisticRegression on textual data. I am using LinearSVC with the 'l2' penalty and 'squared_hinge' loss, and LogisticRegression with the 'l1' penalty. However, I find that their performance is nearly identical on my data sets (in terms of classification accuracy, precision/recall, etc., as well as running time). This can't be sheer coincidence, and it leads me to suspect that these are in fact identical implementations. Is that the case?
If I want to compare LogisticRegression with a support-vector-based implementation (one that can handle multiclass data), which class in scikit-learn should I use?
Here's my code:
# Note: this uses the older (pre-0.18) scikit-learn API, where StratifiedShuffleSplit takes the labels directly.
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.grid_search import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.metrics import make_scorer, fbeta_score

scorefunc = make_scorer(fbeta_score, beta = 1, pos_label = None)
splits = StratifiedShuffleSplit(data.labels, n_iter = 5, test_size = 0.2)

params1 = {'penalty': ['l1'], 'C': [0.0001, 0.001, 0.01, 0.1, 1.0]}
lr = GridSearchCV(LogisticRegression(), param_grid = params1, cv = splits, n_jobs = 5, scoring = scorefunc)
lr.fit(data.combined_mat, data.labels)
print lr.best_params_, lr.best_score_
>> {'penalty': 'l1', 'C': 1.0} 0.91015049974

params2 = {'C': [0.0001, 0.001, 0.01, 0.1, 1.0], 'penalty': ['l2']}
svm = GridSearchCV(LinearSVC(), param_grid = params2, cv = splits, n_jobs = 5, scoring = scorefunc)
svm.fit(data.combined_mat, data.labels)
print svm.best_params_, svm.best_score_
>> {'penalty': 'l2', 'C': 1.0} 0.968320989681
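For what it's worth, the two grids above don't use the same penalty, so a more apples-to-apples check would give both models 'l2' and let only the loss function differ. A minimal sketch on synthetic placeholder data, using the current model_selection API rather than the older one in the snippet above:
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score
from sklearn.svm import LinearSVC

# Placeholder data; swap in your own feature matrix and labels.
X, y = make_classification(n_samples=500, n_features=50, n_informative=10, random_state=0)

cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
for clf in (LogisticRegression(penalty='l2', C=1.0),
            LinearSVC(penalty='l2', loss='squared_hinge', C=1.0)):
    scores = cross_val_score(clf, X, y, cv=cv, scoring='f1_macro')
    print(clf.__class__.__name__, scores.mean())
If the scores still track each other closely, that is not surprising: both are L2-regularized linear models and differ only in the loss being optimized, which may explain similar results without the two classes being identical implementations.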

Related

Visualizing change in hyper-parameter tuning using GridSearchCV for Support vector machine model

I have created an SVM model and am using grid search to tune the hyper-parameters C, gamma and the kernel. Is there any way of visualizing how the accuracy changes as these are tuned? This is my code:
from sklearn import svm
from sklearn.model_selection import GridSearchCV

# Initiate model
svmmodel = svm.SVC()
classifier_svm = svm.SVC(kernel='linear')
classifier_svm.fit(x_trainvec, y_train)
prediction_svm = classifier_svm.predict(x_testvec)

# Tune hyperparameters
param_svm = {'C': [0.1, 1, 10],
             'gamma': [1, 0.1],
             'kernel': ['linear', 'poly', 'rbf']}
gridsvm = GridSearchCV(classifier_svm, param_svm, refit=True, return_train_score=True,
                       cv=5, verbose=10, scoring='f1_weighted')
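The snippet above stops at the grid definition, but one way to visualize the search, as a minimal sketch assuming gridsvm is fitted on the x_trainvec/y_train from the question, is to load cv_results_ into a DataFrame and plot the mean test score against C for each kernel:
import pandas as pd
import matplotlib.pyplot as plt

gridsvm.fit(x_trainvec, y_train)
results = pd.DataFrame(gridsvm.cv_results_)

# One curve per kernel: mean_test_score (f1_weighted), averaged over gamma, as a function of C
for kernel, group in results.groupby('param_kernel'):
    means = group.groupby('param_C')['mean_test_score'].mean()
    plt.plot(means.index.astype(float), means.values, marker='o', label=str(kernel))
plt.xscale('log')
plt.xlabel('C')
plt.ylabel('mean test score (f1_weighted)')
plt.legend()
plt.show()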

Solver lbfgs supports only 'l2' or 'none' penalties, got l1 penalty

I'm building a logistic regression model to predict a binary target feature. I want to try different values of different parameters using the param_grid argument, to find the best fit with the best values. This is my code:
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=42)
logModel = LogisticRegression(C=1, penalty='l1', solver='liblinear')

Grid_params = {
    'penalty': ['l1', 'l2', 'elasticnet', 'none'],
    'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000],  # smaller C means stronger regularization
    'solver': ['lbfgs', 'newton-cg', 'liblinear', 'sag', 'saga'],
    'max_iter': [50, 100, 200, 500, 1000, 2500]
}

clf = GridSearchCV(logModel, param_grid=Grid_params, cv=10, verbose=True, n_jobs=-1, error_score='raise')
clf_fitted = clf.fit(X_train, Y_train)
And this is where I get the error. I have already read that some solvers don't work with 'l1' and some don't work with 'l2'. How can I build the param_grid in this case?
I also tried simply using logModel = LogisticRegression(), but that didn't work either.
Full error:
ValueError: Solver lbfgs supports only 'l2' or 'none' penalties, got l1 penalty.
GridSearchCV accepts a list of dicts for exactly this purpose. Given that you absolutely need to include the solvers in the grid, you should be able to do something like this:
Grid_params = [
    {'solver': ['saga'],
     'penalty': ['elasticnet', 'l1', 'l2', 'none'],
     'max_iter': [50, 100, 200, 500, 1000, 2500],
     'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]},
    {'solver': ['newton-cg', 'lbfgs'],
     'penalty': ['l2', 'none'],
     'max_iter': [50, 100, 200, 500, 1000, 2500],
     'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]},
    # add more parameter sets as needed...
]
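Each dict is expanded into its own sub-grid, so incompatible solver/penalty pairs are never combined. A short usage sketch, assuming the logModel, X_train and Y_train from the question:
clf = GridSearchCV(logModel, param_grid=Grid_params, cv=10, n_jobs=-1, error_score='raise')
clf_fitted = clf.fit(X_train, Y_train)
print(clf_fitted.best_params_)  # reports the winning solver/penalty/C/max_iter combination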

GridSearchCV unexpected behaviour (always returns the first parameter as the best)

I've got a multiclass classification problem and I need to find the best parameters. I cannot change max_iter, solver, and tol (they are given), but I'd like to check which penalty is better. However, GridSearchCV always returns the first penalty in the grid as the best one.
Example:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, GridSearchCV, StratifiedKFold

cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
fixed_params = {
    'random_state': 42,
    'multi_class': 'multinomial',
    'solver': 'saga',
    'tol': 1e-3,
    'max_iter': 500
}
parameters = [
    {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['l1', 'l2', None]},
    {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['elasticnet'], 'l1_ratio': np.arange(0.0, 1.0, 0.1)}
]
model = GridSearchCV(LogisticRegression(**fixed_params), parameters, n_jobs=-1, verbose=10, scoring='f1_macro', cv=cv)
model.fit(X_train, y_train)
print(model.best_score_)
# 0.6836409100287101
print(model.best_params_)
# {'C': 0.1, 'penalty': 'l2'}
If I change the order of the dicts in parameters, the result is the opposite:
from sklearn.model_selection import cross_val_score, GridSearchCV, StratifiedKFold

cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
fixed_params = {
    'random_state': 42,
    'multi_class': 'multinomial',
    'solver': 'saga',
    'tol': 1e-3,
    'max_iter': 500
}
parameters = [
    {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['elasticnet'], 'l1_ratio': np.arange(0.0, 1.0, 0.1)},
    {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['l1', 'l2', None]}
]
model = GridSearchCV(LogisticRegression(**fixed_params), parameters, n_jobs=-1, verbose=10, scoring='f1_macro', cv=cv)
model.fit(X_train, y_train)
print(model.best_score_)
# 0.6836409100287101
print(model.best_params_)
# {'C': 0.1, 'l1_ratio': 0.0, 'penalty': 'elasticnet'}
So, the best_score_ is the same for both options, but the best_params_ is not.
Could you please tell me what is wrong?
Edited
GridSearchCV gives a worse result in comparison to baseline with default parameters.
Baseline:
baseline_model = LogisticRegression(multi_class='multinomial', solver='saga', tol=1e-3, max_iter=500)
baseline_model.fit(X_train, y_train)
train_pred_baseline = baseline_model.predict(X_train)
print(f1_score(y_train, train_pred_baseline, average='micro'))
The fitted baseline reports these parameters:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=500,
                   multi_class='multinomial', n_jobs=None, penalty='l2',
                   random_state=None, solver='saga', tol=0.001, verbose=0,
                   warm_start=False)
The baseline's f1_micro is better than GridSearchCV's:
0.7522768670309654
Edited-2
So, according to the best f1_score performance, C = 1 is the best choice for my model, but GridSearchCV returns C = 0.1.
I think I am missing something...
The baseline's f1_macro is also better than GridSearchCV's:
train_pred_baseline = baseline_model.predict(X_train)
print(f1_score(y_train, train_pred_baseline, average='macro'))
# 0.7441968750050458
Actually there's nothing wrong. Elastic net uses both the L1 and L2 penalty terms; however, if your l1_ratio is 0, you are effectively applying pure L2 regularization, i.e. only the L2 penalty term is used. As stated in the docs:
Setting l1_ratio=0 is equivalent to using penalty='l2', while setting l1_ratio=1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
Since your second result has l1_ratio equal to 0, it is equivalent to using the L2 penalty: both best_params_ describe the same model, which is why best_score_ is identical in the two runs.
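A quick way to check that equivalence numerically, sketched here on synthetic make_classification data rather than your data set, is to fit both parameterizations with the same saga solver and compare the coefficients:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)

l2_model = LogisticRegression(penalty='l2', C=0.1, solver='saga',
                              max_iter=5000, tol=1e-6).fit(X, y)
enet_model = LogisticRegression(penalty='elasticnet', l1_ratio=0.0, C=0.1,
                                solver='saga', max_iter=5000, tol=1e-6).fit(X, y)

# Up to solver tolerance the two parameterizations should give the same model
print(np.allclose(l2_model.coef_, enet_model.coef_, atol=1e-3))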

GridSearchCV - FitFailedWarning: Estimator fit failed

I am running this:
# Hyperparameter tuning - Random Forest #
import numpy as np

# Hyperparameters' grid
parameters = {'n_estimators': list(range(100, 250, 25)),
              'criterion': ['gini', 'entropy'],
              'max_depth': list(range(2, 11, 2)),
              'max_features': [0.1, 0.2, 0.3, 0.4, 0.5],
              'class_weight': [{0: 1, 1: i} for i in np.arange(1, 4, 0.2).tolist()],
              'min_samples_split': list(range(2, 7))}

# Instantiate random forest
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(random_state=0)

# Execute grid search and retrieve the best classifier
from sklearn.model_selection import GridSearchCV
classifiers_grid = GridSearchCV(estimator=classifier, param_grid=parameters, scoring='balanced_accuracy',
                                cv=5, refit=True, n_jobs=-1)
classifiers_grid.fit(X, y)
and I am receiving this warning:
.../anaconda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py:536:
FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details:
TypeError: '<' not supported between instances of 'str' and 'int'
Why is this and how can I fix it?
I had a similar issue of FitFailedWarning with different details; after many runs I found that the error was in the parameter values being passed. Try:
parameters = {'n_estimators': [100, 125, 150, 175, 200, 225, 250],
              'criterion': ['gini', 'entropy'],
              'max_depth': [2, 4, 6, 8, 10],
              'max_features': [0.1, 0.2, 0.3, 0.4, 0.5],
              'class_weight': [0.2, 0.4, 0.6, 0.8, 1.0],
              'min_samples_split': [2, 3, 4, 5, 6, 7]}
This passed for me; in my case it happened with XGBClassifier, where the value data types were somehow getting mixed up.
Another cause is a value outside the allowed range: for example, the 'subsample' parameter of XGBClassifier has a maximum of 1.0, and setting it to 1.1 will also trigger a FitFailedWarning.
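Whatever the cause turns out to be, a convenient way to see the underlying exception instead of just the warning is to rerun the search with error_score='raise'; a minimal sketch, reusing the classifier and the original parameters grid from the question:
from sklearn.model_selection import GridSearchCV

# With error_score='raise' a failing fit aborts the search and shows the full traceback,
# which usually points at the offending parameter value or label dtype.
debug_grid = GridSearchCV(estimator=classifier, param_grid=parameters,
                          scoring='balanced_accuracy', cv=5, n_jobs=1,
                          error_score='raise')
debug_grid.fit(X, y)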
For me this was giving the same error, but after removing 'None' (the string) from max_depth it fit properly. This was the failing grid:
param_grid = {'n_estimators': [100, 200, 300, 400, 500],
              'criterion': ['gini', 'entropy'],
              'max_depth': ['None', 5, 10, 20, 30, 40, 50, 60, 70],
              'min_samples_split': [5, 10, 20, 25, 30, 40, 50],
              'max_features': ['sqrt', 'log2'],
              'max_leaf_nodes': [5, 10, 20, 25, 30, 40, 50],
              'min_samples_leaf': [1, 100, 200, 300, 400, 500]}
Code which runs properly:
param_grid = {'n_estimators': [100, 200, 300, 400, 500],
              'criterion': ['gini', 'entropy'],
              'max_depth': [5, 10, 20, 30, 40, 50, 60, 70],
              'min_samples_split': [5, 10, 20, 25, 30, 40, 50],
              'max_features': ['sqrt', 'log2'],
              'max_leaf_nodes': [5, 10, 20, 25, 30, 40, 50],
              'min_samples_leaf': [1, 100, 200, 300, 400, 500]}
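If the unlimited-depth option is still wanted, the Python None object (without quotes) is a valid max_depth value for RandomForestClassifier; a small variant of the working grid:
param_grid = {'n_estimators': [100, 200, 300, 400, 500],
              'criterion': ['gini', 'entropy'],
              # None (no quotes) lets trees grow until all leaves are pure
              'max_depth': [None, 5, 10, 20, 30, 40, 50, 60, 70],
              'min_samples_split': [5, 10, 20, 25, 30, 40, 50],
              'max_features': ['sqrt', 'log2'],
              'max_leaf_nodes': [5, 10, 20, 25, 30, 40, 50],
              'min_samples_leaf': [1, 100, 200, 300, 400, 500]}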
I got the same error too, and when I passed the hyperparameters as in the MachineLearningMastery example, I got the output without the warning.
Try it this way if you run into a similar issue:
# grid search logistic regression model on the sonar dataset
from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# define model
model = LogisticRegression()
# define evaluation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define search space
space = dict()
space['solver'] = ['newton-cg', 'lbfgs', 'liblinear']
space['penalty'] = ['none', 'l1', 'l2', 'elasticnet']
space['C'] = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100]
# define search
search = GridSearchCV(model, space, scoring='accuracy', n_jobs=-1, cv=cv)
# execute search
result = search.fit(X, y)
# summarize result
print('Best Score: %s' % result.best_score_)
print('Best Hyperparameters: %s' % result.best_params_)
Make sure the y-variable is an int, not bool or str.
Change your last line of code to make the y series a 0 or 1, for example:
classifiers_grid.fit(X, list(map(int, y)))
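Equivalently, if y is a NumPy array or pandas Series of booleans or numeric strings, it can be cast once up front; a small sketch under that assumption:
import numpy as np

y_int = np.asarray(y).astype(int)   # cast labels to plain integers
classifiers_grid.fit(X, y_int)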

Tuning parameters of the classifier used by BaggingClassifier

Say that I want to train a BaggingClassifier that uses a DecisionTreeClassifier:
dt = DecisionTreeClassifier(max_depth = 1)
bc = BaggingClassifier(dt, n_estimators = 500, max_samples = 0.5, max_features = 0.5)
bc = bc.fit(X_train, y_train)
I would like to use GridSearchCV to find the best parameters for both BaggingClassifier and DecisionTreeClassifier (e.g. max_depth from DecisionTreeClassifier and max_samples from BaggingClassifier). What is the syntax for this?
I found the solution myself:
param_grid = {
    'base_estimator__max_depth': [1, 2, 3, 4, 5],
    'max_samples': [0.05, 0.1, 0.2, 0.5]
}
clf = GridSearchCV(BaggingClassifier(DecisionTreeClassifier(),
                                     n_estimators=100, max_features=0.5),
                   param_grid, scoring=choosen_scoring)
clf.fit(X_train, y_train)
That is, the double underscore in base_estimator__max_depth says that max_depth "belongs to" the base_estimator, i.e. my DecisionTreeClassifier in this case. This works and returns the correct results.
If you are using a pipeline then you can extend the accepted answer with something like this (note the double, double underscores):
model = {'model': BaggingClassifier,
         'kwargs': {'base_estimator': DecisionTreeClassifier()},
         'parameters': {
             'name__base_estimator__max_leaf_nodes': [10, 20, 30]
         }}
pipeline = Pipeline([('name', model['model'](**model['kwargs']))])
cv_model = GridSearchCV(pipeline, param_grid=model['parameters'], cv=cv, scoring=scorer)
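A short usage sketch, assuming X_train, y_train, the cv splitter and the scorer are already defined elsewhere in your code:
cv_model.fit(X_train, y_train)
print(cv_model.best_params_)   # e.g. {'name__base_estimator__max_leaf_nodes': 20} (illustrative value)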
