Tuning the Binarizer threshold for a BernoulliNB() classifier - python

I want to use the BernoulliNB() classifier, and my data is not binarized. So I want to choose the best binarization threshold with GridSearchCV().
My code looks like:
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import BernoulliNB
from sklearn.preprocessing import Binarizer

pipeline = Pipeline([('binarizer', Binarizer()), ('classifier', BernoulliNB())])
params = {'estimator__binarizer__threshold': np.logspace(0, 5, 20)}
clf = GridSearchCV(pipeline, param_grid=params, cv=5, refit=True)
clf.fit(X_train, y_train)
clf.best_estimator_.score(X_test, y_test)
It gives me this error:
ValueError: Check the list of available parameters with estimator.get_params().keys().
I don't know what's wrong.

Yes, my bad. In the comment I only spotted the spelling mistake in 'treshold' and, in a hurry, did not pay attention to the estimator__ part.
For a pipeline, a parameter is addressed by combining two parts:
the name of the step, like binarizer or classifier here
the actual parameter name within that step
You don't need to prepend estimator__ to these parts. So in your case, you need to use the following:
params = {'binarizer__threshold': np.logspace(0, 5, 20)}
to access the 'threshold' param of the 'binarizer' step of pipeline.
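For completeness, here is a sketch of the corrected search (assuming X_train, y_train, X_test, y_test are already defined as in the question):

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import BernoulliNB
from sklearn.preprocessing import Binarizer

pipeline = Pipeline([('binarizer', Binarizer()), ('classifier', BernoulliNB())])
# No 'estimator__' prefix: <step name>__<param name> is enough.
params = {'binarizer__threshold': np.logspace(0, 5, 20)}
clf = GridSearchCV(pipeline, param_grid=params, cv=5, refit=True)
clf.fit(X_train, y_train)
print(clf.best_params_)
print(clf.best_estimator_.score(X_test, y_test))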

Related

Why are Pipelines used as part of GridSearchCV and not the other way around?

Though I understand the potential benefits, especially in combination with GridSearchCV, I wonder why it is always used like this (or at least how I understand it):
Pipeline steps are set for each classifier (with 'passthrough' for the clf step). Then GridSearchCV equips the pipeline with multiple parameters and classifiers.
I am not sure if this is true, but from my point of view it seems as if this causes the steps before the classifier to run multiple times, even if they are always used with the same parameters.
This leads me to the question why it is not used the other way around... or whether that would even be possible?
Here is a picture of the situation in my head, with an example configuration:
First let's create a dataset
from sklearn.datasets import make_classification
from sklearn import svm
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
# generate some data to play with
X, y = make_classification(n_informative=5, n_redundant=0, random_state=42)
Now the usual way of working with a grid search is to try different parameters for all steps.
As an example, let's use PCA and SVC.
pipe = Pipeline(steps=[('pca', PCA()), ('svm', svm.SVC())])
# Parameters of pipelines can be set using '__' separated parameter names.
# (The toy dataset above has 20 features, so keep n_components <= 20.)
param_grid = {
    'pca__n_components': [5, 10, 15, 20],
    'svm__C': [1, 5, 10],
}
gs = GridSearchCV(pipe, param_grid, n_jobs=-1)
gs.fit(X, y)
However, if you want, you can apply the preceding steps to the data up front and only perform the grid search on the classifier:
pca = PCA()
X_pca = pca.fit_transform(X)  # fit_transform returns only the transformed X
parameters = {'C': [1, 5, 10]}
svc = svm.SVC()
gs = GridSearchCV(svc, parameters)
gs.fit(X_pca, y)
The problem is that this way you can't test parameter correlations between different steps.
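As for the repeated computation you are worried about: sklearn's Pipeline can cache fitted transformers via its memory argument, so a transformer with identical parameters is not refitted for every candidate. A rough sketch reusing the pipe/param_grid setup above (the temporary cache directory is just an example):

from tempfile import mkdtemp
from shutil import rmtree

cachedir = mkdtemp()  # temporary directory holding the transformer cache
cached_pipe = Pipeline(steps=[('pca', PCA()), ('svm', svm.SVC())], memory=cachedir)

gs = GridSearchCV(cached_pipe, param_grid, n_jobs=-1)
gs.fit(X, y)

rmtree(cachedir)  # clean up the cache when done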

python imblearn make_pipeline TypeError: Last step of Pipeline should implement fit

I am trying to implement SMOTE from imblearn inside a Pipeline. My data set is text data stored in a pandas DataFrame. Please see the code snippet below:
text_clf = Pipeline([('vect', TfidfVectorizer()),
                     ('scale', StandardScaler(with_mean=False)),
                     ('smt', SMOTE(random_state=5)),
                     ('clf', LinearSVC(class_weight='balanced'))])
After this I am using GridSearchCV:
grid = GridSearchCV(text_clf, parameters, cv=4, n_jobs=-1, scoring = 'accuracy')
where parameters is just a dict of tuning parameters, mostly for TfidfVectorizer().
I am getting the following error:
All intermediate steps should be transformers and implement fit and transform. 'SMOTE(...)' doesn't
After this error, I changed the code as follows:
vect = TfidfVectorizer(use_idf=True, smooth_idf=True, max_df=0.25, sublinear_tf=True, ngram_range=(1, 2))
X = vect.fit_transform(X).todense()
X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state=0, test_size=0.33, shuffle=True)
text_clf = make_pipeline([('smt', SMOTE(random_state=5)),
                          ('scale', StandardScaler(with_mean=False)),
                          ('clf', LinearSVC(class_weight='balanced'))])
grid = GridSearchCV(text_clf, parameters, cv=4, n_jobs=-1, scoring='accuracy')
where parameters is just the grid for tuning C in the SVC classifier.
This time I am getting the following error:
Last step of Pipeline should implement fit.SMOTE(....) doesn't
What is going on here? Can anyone please help?
imblearn's SMOTE has no transform method. The docs are here.
But all steps except the last in a pipeline must have it, along with fit.
To use SMOTE with an sklearn Pipeline you would have to implement a custom transformer that calls SMOTE.fit_sample() in its transform method.
An easier option is just to use the imblearn pipeline:
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as imbPipeline

# This doesn't work with sklearn.pipeline.Pipeline because
# SMOTE doesn't have a .transform() method.
# (It has .fit_sample() or .sample().)
pipe = imbPipeline([
    ...
    ('oversample', SMOTE(random_state=5)),
    ('clf', LinearSVC(class_weight='balanced'))
])
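Applied to the original question, a rough sketch of the full text pipeline with imblearn's Pipeline (the grid values here are only illustrative, not from the question):

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as imbPipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

text_clf = imbPipeline([
    ('vect', TfidfVectorizer()),
    ('scale', StandardScaler(with_mean=False)),
    ('smt', SMOTE(random_state=5)),
    ('clf', LinearSVC(class_weight='balanced')),
])

# The imblearn Pipeline works with GridSearchCV just like sklearn's;
# SMOTE is applied only when fitting, not when predicting.
parameters = {'vect__ngram_range': [(1, 1), (1, 2)],  # illustrative grid
              'clf__C': [0.1, 1, 10]}
grid = GridSearchCV(text_clf, parameters, cv=4, n_jobs=-1, scoring='accuracy')
# grid.fit(X, y)  # X: raw text documents, y: labels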

Custom scoring function RandomForestRegressor

Using RandomizedSearchCV, I managed to find a RandomForestRegressor with the best hyperparameters.
But, to this, I used a custom score function matching my specific needs.
Now, I don't know how to use
best_estimator_ - a RandomForestRegressor - returned by the search
with my custom scoring function.
Is there a way to pass a custom scoring function to a RandomForestRegressor?
The scoring function in RandomizedSearchCV only calculates the score of the model's predictions for each combination of hyper-parameters specified in the grid, and the combination with the highest average score on the test folds wins.
It does not in any way alter the behaviour of the internal algorithm of RandomForest (other than finding the hyper-parameters, of course).
Now you have best_estimator_ (a RandomForestRegressor) with the best found hyper-parameters already set, and the model already trained on the whole data you passed to RandomizedSearchCV (if you used refit=True, which is True by default).
So I'm not sure what you want to achieve by passing that scorer to the model. The best_estimator_ model can be used directly to get predictions on new data via its predict() method. After that, your custom scoring function can be used to compare the predictions with the actual values. There's nothing more to it.
A simple example of this would be:
from scipy.stats import randint as sp_randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.datasets import load_boston
from sklearn.metrics import r2_score, make_scorer

# Note: load_boston has been removed from recent scikit-learn releases;
# any regression dataset works the same way here.
X, y = load_boston().data, load_boston().target
X_train, X_test, y_train, y_test = train_test_split(X, y)

clf = RandomForestRegressor()

# Your custom scoring strategy
def my_custom_score(y_true, y_pred):
    return r2_score(y_true, y_pred)

# Wrap it in make_scorer to be able to use it in RandomizedSearchCV
my_scorer = make_scorer(my_custom_score)

# Hyper-parameters to be tuned
param_dist = {"max_depth": [3, None],
              "max_features": sp_randint(1, 11),
              "min_samples_split": sp_randint(2, 11)}

random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
                                   n_iter=20, scoring=my_scorer)
random_search.fit(X_train, y_train)

# Best found parameter set, model trained on X_train, y_train
best_clf = random_search.best_estimator_

# Get predictions on your new data
y_test_pred = best_clf.predict(X_test)

# Calculate your score on the predictions with respect to actual values
print(my_custom_score(y_test, y_test_pred))

Basic Sklearn: How to Pass Scoring Function to Fit Method

I'm using sklearn to do some machine learning. I often use GridSearchCV to explore hyperparameters and perform cross-validation. Using this, I can specify a scoring function, like this:
scores = -cross_val_score(svr, X, Y, cv=10, scoring='neg_mean_squared_error')
However, I want to train my SVR model using mean squared error. Unfortunately, there's no scoring parameter in either the constructor for SVR or the fit method.
How should I do this?
Thanks!
I typically use a Pipeline to do this. You can create a pipeline containing the SVR model (and other steps if you want). Then you can apply GridSearchCV, passing the pipeline in as its estimator.
Here you can add a params_grid where the search space is defined as stepname__paramname (double underscore in between). For example, if my step is named svr and I want to search over the parameter C, I put the key svr__C in my parameter dictionary.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.svm import SVR

c_range = np.arange(1, 10, 1)
pipeline = Pipeline([('svr', SVR())])
params_grid = {'svr__C': c_range}

# grid search with 3-fold cross validation
gridsearch_model = GridSearchCV(pipeline, params_grid,
                                cv=3, scoring='neg_mean_squared_error')
Then you can follow the usual procedure: fit the training data and inspect the best score and parameters.
gridsearch_model.fit(X_train, y_train)
print(gridsearch_model.best_params_, gridsearch_model.best_score_)
You can also use cross_val_score to find the score:
cross_val_score(gridsearch_model, X_train, y_train,
                cv=3, scoring='neg_mean_squared_error')
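If you also want the test-set MSE of the refitted best model, something like this should work (a sketch assuming a held-out X_test, y_test, which the original snippet does not define):

from sklearn.metrics import mean_squared_error

# GridSearchCV refits the best pipeline on the full training data by default,
# so the fitted search object can be used for prediction directly.
y_pred = gridsearch_model.predict(X_test)
print(mean_squared_error(y_test, y_pred))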
Hope this helps!

Prevent overfitting in Logistic Regression using Sci-Kit Learn

I trained a model using Logistic Regression to predict whether a name field and description field belong to a profile of a male, female, or brand. My train accuracy is around 99% while my test accuracy is around 83%. I have tried implementing regularization by tuning the C parameter but the improvements were barely noticed. I have around 5,000 examples in my training set. Is this an instance where I just need more data or is there something else I can do in Sci-Kit Learn to get my test accuracy higher?
Overfitting is a multifaceted problem. It could be your train/test/validation split (anything from 50/40/10 to 90/9/1 could change things). You might need to shuffle your input. Try an ensemble method, or reduce the number of features. You might have outliers throwing things off.
Then again, it could be none of these, or all of these, or some combination of these.
For starters, try to plot the test score as a function of the test split size and see what you get.
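One related diagnostic is sklearn's learning_curve, which plots training and cross-validated scores against the training-set size. A rough sketch of that idea, where X and y stand in for your own features and labels:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

clf = LogisticRegression(max_iter=1000)  # placeholder model
train_sizes, train_scores, test_scores = learning_curve(
    clf, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 8))

plt.plot(train_sizes, train_scores.mean(axis=1), label='train')
plt.plot(train_sizes, test_scores.mean(axis=1), label='cross-val')
plt.xlabel('training set size')
plt.ylabel('accuracy')
plt.legend()
plt.show()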
# The 'C' value in Logistic Regression works very similarly to the one in the
# Support Vector Machine (SVM) algorithm. When I use SVM I like to use GridSearch
# to find the best possible values for 'C' and 'gamma';
# maybe this can give you some light:

# A full SVC grid would also include gamma and kernel:
# param_grid = {'C': [0.1, 1, 10, 100, 1000],
#               'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
#               'kernel': ['rbf']}
# Here only 'C' is kept, since that is the parameter LogisticRegression shares:
param_grid = {'C': [0.1, 1, 10, 100, 1000]}
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix

# Train and fit a plain SVC first to see the initial values
X_train, X_test, y_train, y_test = train_test_split(df_feat, np.ravel(df_target), test_size=0.30, random_state=101)
model = SVC()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))

# Find the best 'C' value (the grid has to be fitted before best_params_ is available)
grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=3)
grid.fit(X_train, y_train)
print(grid.best_params_)
c_val = grid.best_estimator_.C

# You can re-run predictions on this grid object just like you would with a normal model
grid_predictions = grid.predict(X_test)
print(confusion_matrix(y_test, grid_predictions))
print(classification_report(y_test, grid_predictions))

# Use the best 'C' value found by GridSearch in your LogisticRegression model
logmodel = LogisticRegression(C=c_val)
logmodel.fit(X_train, y_train)
log_predictions = logmodel.predict(X_test)
print(confusion_matrix(y_test, log_predictions))
print(classification_report(y_test, log_predictions))
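Note that the C found for SVC is not guaranteed to be the best C for LogisticRegression; a more direct variant (just a sketch along the same lines, not part of the original answer) is to grid-search C on LogisticRegression itself:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

log_grid = GridSearchCV(LogisticRegression(max_iter=1000),
                        {'C': [0.01, 0.1, 1, 10, 100]},
                        cv=5, refit=True)
log_grid.fit(X_train, y_train)
print(log_grid.best_params_)
print(log_grid.score(X_test, y_test))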
