Is there a way to get a similar, nice output for scikit-learn logistic regression models as in statsmodels, with all the p-values, std. errors, etc. in one table?
As you and others have pointed out, this is a limitation of scikit-learn. Before discussing a scikit-learn approach to your question below, the “best” option is to use statsmodels as follows:
import statsmodels.api as sm
smlog = sm.Logit(y,sm.add_constant(X)).fit()
smlog.summary()
X represents your input feature/predictor matrix and y represents the outcome variable. Statsmodels works well if X lacks highly correlated features and low-variance features, no feature produces perfect/quasi-perfect separation, and any categorical features are reduced to n-1 levels, i.e., dummy-coded (not n levels, i.e., one-hot encoded, as described here: dummy variable trap).
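For example, if X is a pandas DataFrame with a categorical column, a minimal sketch of dummy coding to n-1 levels (the column names here are made up for illustration) could look like this:
import pandas as pd
#Hypothetical frame with one categorical feature ("region", 3 levels)
X = pd.DataFrame({
    "age": [34, 45, 23, 51],
    "region": ["north", "south", "east", "north"],
})
#drop_first=True keeps n-1 dummy columns, avoiding the dummy variable trap
X_dummy = pd.get_dummies(X, columns=["region"], drop_first=True)
print(X_dummy.columns.tolist())  # ['age', 'region_north', 'region_south']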
However, if the above isn't feasible/practical, one scikit-learn approach is coded below for fairly equivalent results, in terms of feature coefficients/odds with their standard errors and 95% CI estimates. Essentially, the code generates these results from distinct logistic regression scikit-learn models trained on distinct train-test splits of your data. Again, make sure categorical features are dummy-coded to n-1 levels (or your scikit-learn coefficients will be incorrect for categorical features).
#Imports used by the snippets below
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
#Instantiate logistic regression model with regularization turned OFF
log_nr = LogisticRegression(fit_intercept=True, penalty="none") #on newer scikit-learn versions, use penalty=None
##Generate 5 distinct random numbers - as random seeds for 5 test-train splits
import random
randomlist = random.sample(range(1, 10000), 5)
##Create features column
coeff_table = pd.DataFrame(X.columns, columns=["features"])
##Assemble coefficients over logistic regression models on 5 random data splits
#iterate over random states while keeping track of `i`
from sklearn.model_selection import train_test_split
for i, state in enumerate(randomlist):
    #5 stratified test-train splits
    train_x, test_x, train_y, test_y = train_test_split(X, y, stratify=y,
                                                        test_size=0.3, random_state=state)
    log_nr.fit(train_x, train_y) #fit logistic model
    coeff_table[f"coefficients_{i+1}"] = np.transpose(log_nr.coef_)
##Calculate mean and std error for model coefficients (from 5 models above)
coeff_table["mean_coeff"] = coeff_table.iloc[:, 1:6].mean(axis=1) #mean over the 5 coefficient columns only
coeff_table["se_coeff"] = coeff_table.iloc[:, 1:6].sem(axis=1)
#Calculate 95% CI intervals for feature coefficients
coeff_table["95ci_se_coeff"] = 1.96*coeff_table["se_coeff"]
coeff_table["coeff_95ci_LL"] = coeff_table["mean_coeff"] - coeff_table["95ci_se_coeff"]
coeff_table["coeff_95ci_UL"] = coeff_table["mean_coeff"] + coeff_table["95ci_se_coeff"]
Finally, (optionally) convert the coefficients to odds by exponentiating them as follows. Odds ratios are my favorite output from logistic regression, and the code below appends them to your dataframe.
#Calculate odds ratios and 95% CI (LL = lower limit, UL = upper limit) intervals for each feature
coeff_table["odds_mean"] = np.exp(coeff_table["mean_coeff"])
coeff_table["95ci_odds_LL"] = np.exp(coeff_table["coeff_95ci_LL"])
coeff_table["95ci_odds_UL"] = np.exp(coeff_table["coeff_95ci_UL"])
This answer builds upon a somewhat related reply by @pciunkiewicz, available here: Collate model coefficients across multiple test-train splits from sklearn
Related
I'm running a feature selection using sns.heatmap and one using sklearn feature_importances.
When using the same data I get two different sets of values.
Here is the heatmap
and heatmap code
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
training_data = pd.read_csv(
"/Users/aus10/NFL/Data/Betting_Data/CBB/Training_Data_Betting_CBB.csv")
df_model = training_data.copy()
df_model = df_model.dropna()
df_model = df_model.drop(['Money_Line', 'Money_Line_Percentage', 'Money_Line_Money', 'Money_Line_Move', 'Money_Line_Direction', "Spread", 'Spread_Percentage', 'Spread_Money', 'Spread_Move', 'Spread_Direction',
"Win", "Money_Line_Percentage", 'Cover'], axis=1)
X = df_model.loc[:, ['Total', 'Total_Move', 'Over_Percentage', 'Over_Money',
'Under_Percentage', 'Under_Money']] # independent columns
y = df_model['Over_Under'] # target column
# get correlations of each feature in the dataset
corrmat = df_model.corr()
top_corr_features = corrmat.index
plt.figure(figsize=(20, 20))
# plot heat map
g = sns.heatmap(
df_model[top_corr_features].corr(), annot=True, cmap='hot')
plt.xticks(rotation=90)
plt.yticks(rotation=45)
plt.show()
Here is the feature_importances bar graph
and the code
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.inspection import permutation_importance
training_data = pd.read_csv(
"/Users/aus10/NFL/Data/Betting_Data/CBB/Training_Data_Betting_CBB.csv", index_col=False)
df_model = training_data.copy()
df_model = df_model.dropna()
X = df_model.loc[:, ['Total', 'Total_Move', 'Over_Percentage', 'Over_Money',
'Under_Percentage', 'Under_Money']] # independent columns
y = df_model['Over_Under'] # target column
model = RandomForestClassifier(
random_state=1, n_estimators=100, min_samples_split=100, max_depth=5, min_samples_leaf=2)
skf = StratifiedKFold(n_splits=2)
skf.get_n_splits(X, y)
for train_index, test_index in skf.split(X, y):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    model.fit(X_train, y_train)
# use inbuilt class feature_importances of tree based classifiers
print(model.feature_importances_)
# plot graph of feature importances for better visualization
feat_importances = pd.Series(model.feature_importances_, index=X.columns)
perm_importance = permutation_importance(model, X_test, y_test)
feat_importances.nlargest(5).plot(kind='barh')
print(perm_importance)
plt.show()
I'm not sure which one is more accurate or whether I'm using them in the correct way. Should I be using the heatmap to eliminate collinearity and the feature importances to actually select my group of features?
You are comparing two different things; why would you expect them to be the same? And what would it even mean in this case?
Feature importances in tree-based models are computed based on how many times a given feature was used for splitting. A feature that is used more often for a split is more important (for a particular model fitted to a particular dataset) than a feature that is used less often.
Correlation, on the other hand, is a measure of the linear relationship between two features.
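To make the distinction concrete, here is a small sketch (on a synthetic dataset, not your CSV, with placeholder column names) that computes both quantities side by side:
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
#Synthetic stand-in for the betting data
X_arr, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X = pd.DataFrame(X_arr, columns=[f"f{i}" for i in range(6)])
#Model-based importances: how useful each feature was for the forest's splits
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(pd.Series(rf.feature_importances_, index=X.columns))
#Pairwise linear correlation between the features themselves
print(X.corr())
The two outputs answer different questions, which is why their rankings need not agree.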
I'm not sure which one is more accurate
What do you mean by accuracy? Both of these are accurate in what they measure. It is just that neither of them directly tells you which feature(s) to throw away.
Note that just because two features are correlated, it doesn't mean that you can automatically throw one of them away. Collinearity can cause issues with the interpretability of the model: if you have highly correlated features, then you can't say which one is more important based on the weights associated with these features. Collinearity should not affect the predictive power of the model. More often, you will find that by throwing away one of the correlated features, your model's predictive power decreases.
Collinearity in a dataset can therefore make the feature importances of your random forest model less interpretable, in the sense that you can't rely on their strict ordering. But again, it should not affect the predictive power of the model (except that the model is more prone to overfitting due to having more degrees of freedom).
Should I be using the heatmap to eliminate collinearity and the feature importances to actually select my group of features?
Feature engineering/selection is more of an art than science (outside of end-to-end deep learning). There is no correct answer here and you will need to develop your own heuristics and try different things to see which one works better in which scenario.
An example of a simple heuristic based on feature importances and correlation could be the following (assuming you have a large number of features); a rough code sketch follows the list:
fit the random forest model and measure the feature importances
throw away those that seem to have no impact on the model (close to 0 importance)
refit the model with the new subset of your original data and see whether the metric of your interest (accuracy, MSE, ...) stays approximately the same as in step 1
if you still have a lot of features, you can repeat steps 1-3, increasing the throw-away threshold until your metric of interest starts worsening
measure the correlation of the features that you are left with and select the most correlated pairs (based on some threshold, e.g. (|c| > 0.8))
pick one pair; drop a feature from this pair; measure model performance; return the dropped feature; repeat for each pair
drop the feature that seems to have the least negative effect on the model's performance based on the results from step 6.
repeat steps 6-7 until the model's performance starts dropping
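A rough sketch of steps 1-3 and 5 above could look like the following (the thresholds are arbitrary placeholders, not recommendations, and X/y are assumed to be a pandas DataFrame and its target series):
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def prune_by_importance(X, y, importance_threshold=0.01, corr_threshold=0.8):
    """Drop near-zero-importance features, then flag highly correlated pairs."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    importances = pd.Series(rf.feature_importances_, index=X.columns)
    #Step 2: throw away features with (close to) zero importance
    keep = importances[importances >= importance_threshold].index
    X_reduced = X[keep]
    #Step 3: check that the metric of interest stays roughly the same
    base = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0), X, y, cv=5).mean()
    reduced = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0), X_reduced, y, cv=5).mean()
    print(f"accuracy with all features: {base:.3f}, with reduced set: {reduced:.3f}")
    #Step 5: list the highly correlated pairs that remain, for steps 6-8
    corr = X_reduced.corr().abs()
    pairs = [(a, b, corr.loc[a, b])
             for i, a in enumerate(corr.columns)
             for b in corr.columns[i + 1:]
             if corr.loc[a, b] > corr_threshold]
    return X_reduced, pairs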
I have the following code for gradient boosting classifier to be used for binary classification problem.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
#Creating training and test dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=1)
#Count of goods in the training set
#This count is 50000
y0 = len(y_train[y_train['bad_flag'] == 0])
#Count of bads in the training set
#This count is 100
y1 = len(y_train[y_train['bad_flag'] == 1])
#Creating the sample_weights array. Include all bad customers and
#twice the number of goods as bads
w0=(y1/y0)*2
w1=1
sample_weights = np.zeros(len(y_train))
sample_weights[y_train['bad_flag'] == 0] = w0
sample_weights[y_train['bad_flag'] == 1] = w1
model = GradientBoostingClassifier(
    n_estimators=100, max_features=0.5, random_state=1)
model = model.fit(X_train, y_train.values.ravel(), sample_weight=sample_weights)
My thinking about writing this code is as follows:-
sample_weights will allow model.fit to select all 100 bads and 200 goods from the training set, and this same set of 300 customers will be used to fit 100 estimators in a forward stage-wise fashion. I want to undersample my training set because the two response classes are highly imbalanced. Please let me know if my understanding of the code is correct.
Also, I would like to confirm that n_estimators=100 means that 100 estimators will be fit on the same set of 300 customers. This would also mean that there is no bootstrapping in the gradient boosting classifier, as there is in a bagging classifier.
As far as I understand, this is not how it works. By default, you have GradientBoostingClassifier(subsample=1.0), which means that the sample size used at each stage (for each of the n_estimators) will be the same as in your original dataset. The weights do not change the size of the subsample. If you want to enforce 300 observations for each stage, you need to set subsample = 300/(50000+100) in addition to the weight definition.
The answer is no. For each stage, a new fraction subsample of observations will be drawn. You can read more about it here: https://scikit-learn.org/stable/modules/ensemble.html#gradient-boosting. It says:
At each iteration the base classifier is trained on a fraction subsample of the available training data.
So, as a result, there is some random subsampling combined with the boosting algorithm (this variant is known as stochastic gradient boosting).
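Putting the two points together, a sketch of the suggestion above (reusing the X_train, y_train and sample_weights defined in the question) would be:
from sklearn.ensemble import GradientBoostingClassifier
#subsample set so each stage draws roughly 300 of the 50,100 training rows
model = GradientBoostingClassifier(
    n_estimators=100,
    max_features=0.5,
    subsample=300 / (50000 + 100),
    random_state=1)
model.fit(X_train, y_train.values.ravel(), sample_weight=sample_weights)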
I'm training a binary classifier using Python and the popular scikit-learn module's SVM class. After training, I use the predict method to make a classification as laid out in scikit-learn's SVC documentation.
I would like to know more about the significance of my sample features to the resulting classification made by the trained decision_function (support vectors). Any strategies for evaluating feature significance when making predictions with such a model are welcome.
So, how do we interpret feature significance for a given sample's classification?
I think using a linear kernel is the most straightforward way to first approach this because of the significance/relative simplicity of the svc.coef_ attribute of a trained model. Check out Bitwise's answer.
Below I will train a linear-kernel SVM on scikit-learn's breast cancer dataset. Then we will look at the coef_ attribute. I will include a simple plot showing how the dot product of the classifier's coefficients and the training feature data divides the resulting classes.
from sklearn import svm
from sklearn.datasets import load_breast_cancer
import numpy as np
import matplotlib.pyplot as plt
data = load_breast_cancer()
X = data.data # training features
y = data.target # training labels
lin_clf = svm.SVC(kernel='linear')
lin_clf.fit(X,y)
scores = np.dot(X, lin_clf.coef_.T) + lin_clf.intercept_  # decision scores: w·x + b
b0 = y==0 # boolean or "mask" index arrays
b1 = y==1
malignant_scores = scores[b0]  # target 0 = malignant in this dataset
benign_scores = scores[b1]  # target 1 = benign
fig = plt.figure()
fig.suptitle("score breakdown by classification", fontsize=14, fontweight='bold')
score_box_plt = plt.boxplot(
    [malignant_scores, benign_scores],
    notch=True,
    labels=list(data.target_names),
    vert=False
)
plt.show()
As you can see we do seem to have accessed the appropriate intercept and coefficient values. There is obvious separation of class scores with our decision boundary hovering around 0.
Now that we have a scoring system based on our linear coefficients, we can easily investigate how each feature contributed to the final classification. Here we display each feature's effect on the final score of that sample.
## sample we're using X[2] --> classified benign, lin_clf score~(-20)
lin_clf.predict(X[2].reshape(1,30))
contributions = np.multiply(X[2], lin_clf.coef_.reshape((30,)))
feature_number = np.arange(len(contributions)) +1
plt.bar(feature_number, contributions, align='center')
plt.xlabel('feature index')
plt.ylabel('score contribution')
plt.title('contribution to classification outcome by feature index')
plt.show()
We can also simply sort this same data to get a contribution-ranked list of features for a given classification, to see which features contributed most to the score whose composition we are assessing.
abs_contributions = np.flip(np.sort(np.absolute(contributions)), axis=0)
feat_and_contrib = []
for contrib in abs_contributions:
    if contrib not in contributions:
        # the absolute value came from a negative contribution; restore its sign
        contrib = -contrib
    feat = np.where(contributions == contrib)
    feat_and_contrib.append((feat[0][0], contrib))
# sorted by max abs value. each row a tuple: (feature index, contrib)
feat_and_contrib
From that ranked list we can see that the top five feature indices that contributed to the final score (of around -20 along with a classification 'benign') were [0, 22, 13, 2, 21] which correspond to the feature names in our data set; ['mean radius', 'worst perimeter', 'area error', 'mean perimeter', 'worst texture'].
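For what it's worth, the same ranked list can be built more compactly with np.argsort; a small sketch reusing the contributions array from above:
#Rank feature indices by absolute contribution, largest first
order = np.argsort(np.abs(contributions))[::-1]
ranked = [(idx, contributions[idx]) for idx in order]
print(ranked[:5])  # top five (feature index, signed contribution) pairs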
Suppose you have a bag-of-words featurization and you want to know which words are important for classification; then use this code for a linear SVM:
# lr_svm is a fitted linear SVM; text_vectorizer is the fitted bag-of-words vectorizer
weights = np.abs(lr_svm.coef_[0])
sorted_index = np.argsort(weights)[::-1]
top_10 = sorted_index[:10]
terms = text_vectorizer.get_feature_names()  # get_feature_names_out() on newer scikit-learn
for ind in top_10:
    print(terms[ind])
You can use SelectFromModel in sklearn to get the names of the most relevant features for your model. Here is an example of extracting the features for LassoCV.
You can also check out this example, which makes use of the coef_ attribute in SVM to visualize the top features.
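As a rough illustration of the SelectFromModel idea (a sketch on the breast cancer data used earlier, with an arbitrary L1-regularized LinearSVC rather than the estimators from the linked examples):
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC

data = load_breast_cancer()
X, y = data.data, data.target
#Fit a sparse linear model and keep the features with non-negligible coefficients
selector = SelectFromModel(LinearSVC(C=0.01, penalty="l1", dual=False, max_iter=5000))
selector.fit(X, y)
print(data.feature_names[selector.get_support()])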
Can I use sklearn's BaggingClassifier to produce continuous predictions? Is there a similar package? My understanding is that the bagging classifier predicts several classifications with different models, then reports the majority answer. It seems like this algorithm could be used to generate probability functions for each classification and then report the mean value.
trees = BaggingClassifier(ExtraTreesClassifier())
trees.fit(X_train,Y_train)
Y_pred = trees.predict(X_test)
If you're interested in predicting probabilities for the classes in your classifier, you can use the predict_proba method, which gives you a probability for each class. It's a one-line change to your code:
trees = BaggingClassifier(ExtraTreesClassifier())
trees.fit(X_train,Y_train)
Y_pred = trees.predict_proba(X_test)
The shape of Y_pred will be [n_samples, n_classes].
If your Y_train values are continuous and you want to predict those continuous values (i.e., you're working on a regression problem), then you can use the BaggingRegressor instead.
I typically use BaggingRegressor() for continuous values, and then compare performance with RMSE. example below:
import math
from sklearn import metrics
from sklearn.ensemble import BaggingRegressor
trees = BaggingRegressor()
trees.fit(X_train,Y_train)
scores_RMSE = math.sqrt(metrics.mean_squared_error(Y_test, trees.predict(X_test)))
I have a dataframe X which is comprised of 60 features and ~450k observations. My response variable y is categorical (survival, no survival).
I would like to use RFECV to reduce the number of significant features for my estimator (right now, logistic regression) on Xtrain, which I would like to score using the area under an ROC curve. "Features Selected" is a list of all features.
from sklearn.cross_validation import StratifiedKFold, train_test_split
from sklearn.feature_selection import RFECV
import sklearn.linear_model as lm
# Create train and test datasets to evaluate each model
Xtrain, Xtest, ytrain, ytest = train_test_split(X,y,train_size = 0.70)
# Use RFECV to reduce features
# Create a logistic regression estimator
logreg = lm.LogisticRegression()
# Use RFECV to pick best features, using Stratified Kfold
rfecv = RFECV(estimator=logreg, cv=StratifiedKFold(ytrain, 10), scoring='roc_auc')
# Fit the features to the response variable
X_new = rfecv.fit_transform(Xtrain[features_selected], ytrain)
I have a few questions:
a) X_new returns different features when run on separate occasions (one time it returned 5 features, another run it returned 9. One is not a subset of the other). Why would this be?
b) Does this imply an unstable solution? While using the same seed for StratifiedKFold should solve this problem, does this mean I need to reconsider the approach in totality?
c) In general, how do I approach tuning? E.g., features are selected BEFORE tuning in my current implementation. Would tuning affect the significance of certain features? Or should I tune simultaneously?
In k-fold cross-validation, the original sample is randomly partitioned into k equal-size subsamples. Therefore, it's not surprising to get different results every time you execute the algorithm. Source
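If you want the runs to be comparable, one option (a sketch assuming the newer sklearn.model_selection API and your existing X and y) is to fix the random state of both the split and the cross-validation folds:
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, train_test_split

Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=0.70, random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
rfecv = RFECV(estimator=LogisticRegression(max_iter=1000), cv=cv, scoring='roc_auc')
rfecv.fit(Xtrain, ytrain)
print(rfecv.support_)  # the mask of selected features is now reproducible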
There is an approach based on the so-called Pearson correlation coefficient. Using this method, you can calculate a correlation coefficient between each pair of features and aim to remove features with a high correlation. This method could be considered a stable solution to such a problem. Source
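A minimal sketch of that idea, assuming your features are in a pandas DataFrame and using an arbitrary 0.9 cutoff:
import numpy as np
import pandas as pd

def drop_highly_correlated(X, cutoff=0.9):
    """Drop one feature from every pair whose |Pearson r| exceeds the cutoff."""
    corr = X.corr().abs()
    #Keep only the upper triangle so each pair is considered once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > cutoff).any()]
    return X.drop(columns=to_drop)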