Grid-search with specific validation data - python

I'm looking for a way to grid-search for hyperparameters in sklearn without using K-fold validation. That is, I want the grid search to train on one specific dataset (X1, y1 in the example below) and validate itself on a specific hold-out dataset (X2, y2 in the example below).
X1, y1 = train data
X2, y2 = validation data
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

clf_ = SVC(kernel='rbf', cache_size=1000)
Cs = [1, 10.0, 50, 100.0]
Gammas = [0.4, 0.42, 0.44, 0.46, 0.48, 0.5, 0.52, 0.54, 0.56]
clf = GridSearchCV(clf_, dict(C=Cs, gamma=Gammas),
                   cv=???,  # validate on X2, y2
                   n_jobs=8, verbose=10)
clf.fit(X1, y1)

Use the hypopt Python package (pip install hypopt). It's a professional package created specifically for parameter optimization with a validation set. It works with any scikit-learn model out-of-the-box and can be used with Tensorflow, PyTorch, Caffe2, etc. as well.
# Code from https://github.com/cgnorthcutt/hypopt
# Assuming you already have train, test, val sets and a model.
from sklearn.svm import SVR
from hypopt import GridSearch

param_grid = [
    {'C': [1, 10, 100], 'kernel': ['linear']},
    {'C': [1, 10, 100], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
# Grid-search all parameter combinations using a validation set.
opt = GridSearch(model=SVR(), param_grid=param_grid)
opt.fit(X_train, y_train, X_val, y_val)
print('Test Score for Optimized Parameters:', opt.score(X_test, y_test))
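If you prefer to stay within scikit-learn itself, the same train-on-X1 / validate-on-X2 behaviour can be obtained with PredefinedSplit, which tells GridSearchCV to use a single fixed hold-out fold. A minimal sketch, assuming X1/y1, X2/y2 are numpy arrays and Cs/Gammas are the lists from the question:
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, PredefinedSplit

# -1 marks rows that are always in the training fold, 0 marks the validation fold.
test_fold = np.concatenate([np.full(len(X1), -1), np.full(len(X2), 0)])
ps = PredefinedSplit(test_fold=test_fold)

X_all = np.concatenate([X1, X2])
y_all = np.concatenate([y1, y2])

clf = GridSearchCV(SVC(kernel='rbf', cache_size=1000),
                   dict(C=Cs, gamma=Gammas),
                   cv=ps, n_jobs=8, verbose=10)
clf.fit(X_all, y_all)  # ps decides which rows are train vs. validation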

clf = GridSearchCV(clf_, dict(C=Cs, gamma=Gammas),
                   cv=???,  # validate on X2, y2
                   n_jobs=8, verbose=10)
Regarding n_jobs: n_jobs=-1 means the processing will use all the cores on your machine, n_jobs=1 uses only one core, and any other positive value uses that many cores.
If cv=5, every parameter combination is evaluated with 5-fold cross-validation.
In your case the total number of fits will be 4 (size of Cs) * 9 (size of Gammas) * 5 (value of cv) = 180.
If you are using cross-validation, it does not make much sense to also hold out data for re-checking your model; if you are not confident about the performance, you can simply increase cv to get a more reliable estimate.
This will be very time consuming, especially for an SVM, so I would rather suggest RandomizedSearchCV, which lets you specify how many parameter combinations your model should randomly sample (see the sketch below).
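A minimal RandomizedSearchCV sketch, assuming the X1/y1 data and the Cs/Gammas lists from the question; the n_iter value is illustrative:
from sklearn.svm import SVC
from sklearn.model_selection import RandomizedSearchCV

# Sample 20 (C, gamma) combinations instead of fitting the full 4 * 9 grid.
search = RandomizedSearchCV(SVC(kernel='rbf', cache_size=1000),
                            param_distributions={'C': Cs, 'gamma': Gammas},
                            n_iter=20, cv=5, n_jobs=-1, verbose=10, random_state=0)
search.fit(X1, y1)
print(search.best_params_, search.best_score_)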

Related

Why does running best_estimator_ from GridSearchCV with cross-validation produce a different accuracy score?

Basically, I want to perform binary classification using SVM (SVC) from scikit-learn. Since I do not have separate training and testing data, I use cross-validation to evaluate the effectiveness of the feature set that I use.
Then, I use GridSearchCV to find the best estimator and set the cross-validation parameter to 10. Because I want to analyze the prediction result, I use the best estimator to perform cross-validation on the same dataset (again with 10-fold cross-validation).
However, when I print the performance scores (precision, recall, f-measure, and accuracy), they are different. Why do you think this happens?
I am wondering, in scikit-learn, should I specify the label for the positive class? In my dataset, I have already labelled the positive case as 1.
Lastly, the following is a snippet of my code.
import numpy as np
from sklearn import svm
from sklearn.model_selection import GridSearchCV, cross_validate

tuned_parameters = [{'kernel': ['linear', 'rbf'],
                     'gamma': [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10],
                     'C': [0.1, 1, 5, 10, 50, 100, 1000]}]
scoring = ['f1_macro', 'precision_macro', 'recall_macro', 'accuracy']
clf = GridSearchCV(svm.SVC(), tuned_parameters, cv=10, scoring=scoring, refit='f1_macro')
clf.fit(feature, label)

param_C = clf.cv_results_['param_C']
param_gamma = clf.cv_results_['param_gamma']
P = clf.cv_results_['mean_test_precision_macro']
R = clf.cv_results_['mean_test_recall_macro']
F1 = clf.cv_results_['mean_test_f1_macro']
A = clf.cv_results_['mean_test_accuracy']
# print(clf.best_estimator_)
print(clf.best_score_)

scoring2 = ['f1', 'precision', 'recall', 'accuracy']
scores = cross_validate(clf.best_estimator_, feature, label, cv=10,
                        scoring=scoring2, return_train_score=True)
print(scores)
scores_f1 = np.mean(scores['test_f1'])
scores_p = np.mean(scores['test_precision'])
scores_r = np.mean(scores['test_recall'])
scores_a = np.mean(scores['test_accuracy'])
print('\t'.join([str(scores_f1), str(scores_p), str(scores_r), str(scores_a)]))
It may be that the cross-validation splits used by cross_validate and GridSearchCV are different because of randomness. The effect of this randomness becomes even larger since your dataset is very small (93 samples) and the number of folds is so large (10). A possible fix is to feed cv a fixed set of train/test splits, and to reduce the number of folds to reduce the variance, i.e.
from sklearn.model_selection import StratifiedKFold

# Materialize the splits so the same folds can be reused by both searches
# (the raw .split() generator would be exhausted after the first use).
kfolds = list(StratifiedKFold(n_splits=3).split(feature, label))
...
clf = GridSearchCV(..., cv=kfolds, ...)
...
scores = cross_validate(..., cv=kfolds, ...)

Prevent overfitting in Logistic Regression using Sci-Kit Learn

I trained a model using Logistic Regression to predict whether a name field and description field belong to a profile of a male, female, or brand. My train accuracy is around 99% while my test accuracy is around 83%. I have tried implementing regularization by tuning the C parameter, but the improvement was barely noticeable. I have around 5,000 examples in my training set. Is this a case where I just need more data, or is there something else I can do in Sci-Kit Learn to get my test accuracy higher?
Overfitting is a multifaceted problem. It could be your train/test/validate split (anything from 50/40/10 to 90/9/1 could change things). You might need to shuffle your input. Try an ensemble method, or reduce the number of features. You might also have outliers throwing things off.
Then again, it could be none of these, or all of these, or some combination of them.
For starters, try to plot the test score as a function of the test split size and see what you get (a sketch of this is below).
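A minimal sketch of that diagnostic, assuming X and y are your (already vectorized) feature matrix and labels; the split sizes are illustrative:
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

test_sizes = [0.1, 0.2, 0.3, 0.4, 0.5]
scores = []
for ts in test_sizes:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=ts, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))   # accuracy on the held-out split

plt.plot(test_sizes, scores, marker='o')
plt.xlabel('test split size')
plt.ylabel('test accuracy')
plt.show()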
# The 'C' value in Logistic Regression works very similarly to the C parameter in a
# Support Vector Machine (SVM). When I use an SVM I like to use GridSearch to find
# the best possible values for 'C' and 'gamma'; maybe this can give you some light.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix

# For SVC you could also search 'gamma' and 'kernel':
# param_grid = {'C': [0.1, 1, 10, 100, 1000],
#               'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
#               'kernel': ['rbf']}
param_grid = {'C': [0.1, 1, 10, 100, 1000]}

# Train and fit your model to see initial values
X_train, X_test, y_train, y_test = train_test_split(
    df_feat, np.ravel(df_target), test_size=0.30, random_state=101)
model = SVC()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))

# Find the best 'C' value
grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=3)
grid.fit(X_train, y_train)  # fit the grid search before reading its results
grid.best_params_
c_val = grid.best_estimator_.C

# Then you can re-run predictions on this grid object just like you would with a normal model.
grid_predictions = grid.predict(X_test)
print(confusion_matrix(y_test, grid_predictions))
print(classification_report(y_test, grid_predictions))

# Use the best 'C' value found by GridSearch to configure your LogisticRegression model
logmodel = LogisticRegression(C=c_val)
logmodel.fit(X_train, y_train)
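Since the final model is a LogisticRegression, you could also grid-search C on the logistic regression itself rather than borrowing the value found for the SVC. A minimal sketch, reusing the X_train/y_train split from above; the C grid is illustrative:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

log_grid = GridSearchCV(LogisticRegression(max_iter=1000),
                        {'C': [0.01, 0.1, 1, 10, 100]},
                        cv=5, refit=True)
log_grid.fit(X_train, y_train)
print(log_grid.best_params_)
print(log_grid.score(X_test, y_test))   # accuracy of the refitted best model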

Performing grid search with a predefined validation set in Sklearn

This question has been asked several times before. But I get an error when following the answer
First I specify which part is the training set and the validation set as follows.
my_test_fold = []
for i in range(len(train_x)):
    my_test_fold.append(-1)
for i in range(len(test_x)):
    my_test_fold.append(0)
And then gridsearch is performed.
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV, PredefinedSplit

param = {
    'n_estimators': [200],
    'max_depth': [5],
    'min_child_weight': [3],
    'reg_alpha': [6],
    'gamma': [0.6],
    'scale_pos_weight': [1],
    'learning_rate': [0.09]
}
gsearch1 = GridSearchCV(estimator=XGBClassifier(objective='reg:linear', seed=1),
                        param_grid=param,
                        scoring='roc_auc',
                        cv=PredefinedSplit(test_fold=my_test_fold),
                        verbose=1)
gsearch1.fit(new_data_df, df_y)
But I get the following error
object of type 'PredefinedSplit' has no len()
Try to replace
cv = PredefinedSplit(test_fold=my_test_fold)
with
cv = list(PredefinedSplit(test_fold=my_test_fold).split(new_data_df, df_y))
The reason is that you need to call the split method to actually generate the train/test split, and then convert the resulting iterator into a list.
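Note also that whichever form of cv you pass, the fold indices refer to rows of the data given to fit, so new_data_df and df_y must contain the training rows followed by the validation rows in the same order used to build my_test_fold. A minimal sketch of that wiring, assuming train_x/test_x are pandas objects and using train_y/test_y as hypothetical names for the matching targets:
import numpy as np
import pandas as pd
from sklearn.model_selection import PredefinedSplit

# -1 = always in the training fold, 0 = the single validation fold.
my_test_fold = np.concatenate([np.full(len(train_x), -1), np.full(len(test_x), 0)])

new_data_df = pd.concat([train_x, test_x], axis=0)
df_y = pd.concat([train_y, test_y], axis=0)

cv = list(PredefinedSplit(test_fold=my_test_fold).split(new_data_df, df_y))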
The hypopt Python package (pip install hypopt), for which I am an author, was created for this exact purpose: parameter optimization with a validation set. It works with scikit-learn models and can be used with Tensorflow, PyTorch, Caffe2, etc.
# Code from https://github.com/cgnorthcutt/hypopt
# Assuming you already have train, test, val sets and a model.
from sklearn.svm import SVR
from hypopt import GridSearch

param_grid = [
    {'C': [1, 10, 100], 'kernel': ['linear']},
    {'C': [1, 10, 100], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
# Grid-search all parameter combinations using a validation set.
opt = GridSearch(model=SVR(), param_grid=param_grid)
opt.fit(X_train, y_train, X_val, y_val)
print('Test Score for Optimized Parameters:', opt.score(X_test, y_test))
Edit: Has something changed with hypopt to cause the sudden recent downvotes? Some feedback would help as hypopt solves this exact problem and if there is an issue, we should fix it.

Held out training and validation set in gridsearchcv sklearn

I see that in GridSearchCV the best parameters are determined based on cross-validation, but what I really want is to determine the best parameters based on one held-out validation set instead of cross-validation.
I'm not sure if there is a way to do that. I found some similar posts about customizing the cross-validation folds; however, what I really need is to train on one set and validate the parameters on a separate validation set.
One more piece of information: my dataset is basically a text Series created by pandas.
I did come up with an answer to my own question through the use of PredefinedSplit
import numpy as np
from sklearn.model_selection import PredefinedSplit

train_ind = np.empty(len(doc_train))
val_ind = np.empty(len(doc_val))
for i in range(len(doc_train)):
    train_ind[i] = -1    # -1: always in the training fold
for i in range(len(doc_val)):
    val_ind[i] = 0       # 0: part of the single validation fold
ps = PredefinedSplit(test_fold=np.concatenate((train_ind, val_ind)))
and then in the GridSearchCV arguments
grid_search = GridSearchCV(pipeline, parameters, n_jobs=7, verbose=1, cv=ps)
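One thing worth spelling out: PredefinedSplit indexes into whatever is passed to fit, so the grid search has to be fitted on the training and validation documents stacked in the same order used to build the test_fold array. A minimal sketch, where doc_train/doc_val are assumed to be pandas Series and y_train/y_val are hypothetical names for their labels:
import pandas as pd

docs_all = pd.concat([doc_train, doc_val], axis=0)
y_all = pd.concat([y_train, y_val], axis=0)

grid_search = GridSearchCV(pipeline, parameters, n_jobs=7, verbose=1, cv=ps)
grid_search.fit(docs_all, y_all)   # ps decides which rows are train vs. validation
print(grid_search.best_params_)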
Use the hypopt Python package (pip install hypopt). It's a professional package created specifically for parameter optimization with a validation set. It works with any scikit-learn model out-of-the-box and can be used with Tensorflow, PyTorch, Caffe2, etc. as well.
# Code from https://github.com/cgnorthcutt/hypopt
# Assuming you already have train, test, val sets and a model.
from sklearn.svm import SVR
from hypopt import GridSearch

param_grid = [
    {'C': [1, 10, 100], 'kernel': ['linear']},
    {'C': [1, 10, 100], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
# Grid-search all parameter combinations using a validation set.
opt = GridSearch(model=SVR(), param_grid=param_grid)
opt.fit(X_train, y_train, X_val, y_val)
print('Test Score for Optimized Parameters:', opt.score(X_test, y_test))

Ensuring right order of operations in random forest classification in scikit learn

I would like to ensure that the order of operations for my machine learning is right:
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV, KFold

# 1. Initialize model
model = RandomForestClassifier(n_estimators=5000)

# 2. Load dataset
iris = datasets.load_iris()
X, y = iris.data, iris.target

# 3. Remove unimportant features
model = SelectFromModel(model, threshold=0.5).estimator

# 4. Cross-validate the model on the important features
k_fold = KFold(n_splits=10, shuffle=True)
for k, (train, test) in enumerate(k_fold.split(X)):
    model.fit(X[train], y[train])

# 5. Grid search for the best parameters
param_grid = {
    'n_estimators': [1000, 2500, 5000],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_depth': [3, 5, X.shape[1]]
}
gs = GridSearchCV(estimator=model, param_grid=param_grid)
gs.fit(X, y)
model = gs.best_estimator_

# Now the model can be used for prediction
Please let me know if this order looks good or if something can be done to improve it.
--EDIT, clarifying to reduce downvotes. Specifically:
1. Should the SelectFromModel step be done after cross-validation?
2. Should the grid search be done before cross-validation?
The main problem with your approach is that you are confusing the feature selection transformer with the final estimator. What you need to do is create two stages, the transformer first:
rf_feature_imp = RandomForestClassifier(100)
feat_selection = SelectFromModel(rf_feature_imp, threshold=0.5)
Then you need a second phase where you train a classifier on the reduced feature set.
clf = RandomForestClassifier(5000)
Once you have your phases, you can build a pipeline to combine the two into a final model.
from sklearn.pipeline import Pipeline

model = Pipeline([
    ('fs', feat_selection),
    ('clf', clf),
])
Now you can perform a GridSearch on your model. Keep in mind you have two stages, so the parameters must be specified by stage fs or clf. In terms of the feature selection stage, you can also access the base estimator using fs__estimator. Below is an example of how to search parameters on any of the three objects.
params = {
    'fs__threshold': [0.5, 0.3, 0.7],
    'fs__estimator__max_features': ['auto', 'sqrt', 'log2'],
    'clf__max_features': ['auto', 'sqrt', 'log2'],
}
gs = GridSearchCV(model, params, ...)
gs.fit(X,y)
You can then make predictions with gs directly or using gs.best_estimator_.
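A minimal end-to-end sketch tying the pieces above together on the iris data; the tree counts are kept small so it runs quickly, and the thresholds use the 'mean'/'median' options so at least one feature is always selected:
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = datasets.load_iris(return_X_y=True)

# Stage 1: feature selection driven by a small random forest.
feat_selection = SelectFromModel(RandomForestClassifier(n_estimators=100))
# Stage 2: the classifier trained on the reduced feature set.
clf = RandomForestClassifier(n_estimators=500)

model = Pipeline([('fs', feat_selection), ('clf', clf)])

params = {
    'fs__threshold': ['mean', 'median'],
    'clf__max_features': ['sqrt', 'log2'],
}
gs = GridSearchCV(model, params, cv=5)
gs.fit(X, y)

print(gs.best_params_)
print(gs.best_estimator_.predict(X[:5]))  # predictions from the refitted pipeline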
