I am trying to run an automated stepwise search procedure in Python with linear regression, with my code shown below, using code from https://datascience.stackexchange.com/a/24447. I did not change any of the code given by the contributor, but I am still encountering errors:
from sklearn.datasets import load_boston
import pandas as pd
import numpy as np
import statsmodels.api as sm
data = load_boston()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
def stepwise_selection(X, y,
                       initial_list=[],
                       threshold_in=0.01,
                       threshold_out=0.05,
                       verbose=True):
    """ Perform a forward-backward feature selection
    based on p-value from statsmodels.api.OLS
    Arguments:
        X - pandas.DataFrame with candidate features
        y - list-like with the target
        initial_list - list of features to start with (column names of X)
        threshold_in - include a feature if its p-value < threshold_in
        threshold_out - exclude a feature if its p-value > threshold_out
        verbose - whether to print the sequence of inclusions and exclusions
    Returns: list of selected features
    Always set threshold_in < threshold_out to avoid infinite looping.
    See https://en.wikipedia.org/wiki/Stepwise_regression for the details
    """
    included = list(initial_list)
    while True:
        changed = False
        # forward step
        excluded = list(set(X.columns) - set(included))
        new_pval = pd.Series(index=excluded)
        for new_column in excluded:
            model = sm.OLS(y, sm.add_constant(pd.DataFrame(X[included + [new_column]]))).fit()
            new_pval[new_column] = model.pvalues[new_column]
        best_pval = new_pval.min()
        if best_pval < threshold_in:
            best_feature = new_pval.argmin()
            included.append(best_feature)
            changed = True
            if verbose:
                print('Add {:30} with p-value {:.6}'.format(best_feature, best_pval))
        # backward step
        model = sm.OLS(y, sm.add_constant(pd.DataFrame(X[included]))).fit()
        # use all coefs except intercept
        pvalues = model.pvalues.iloc[1:]
        worst_pval = pvalues.max()  # null if pvalues is empty
        if worst_pval > threshold_out:
            changed = True
            worst_feature = pvalues.argmax()
            included.remove(worst_feature)
            if verbose:
                print('Drop {:30} with p-value {:.6}'.format(worst_feature, worst_pval))
        if not changed:
            break
    return included
result = stepwise_selection(X, y)
result = stepwise_selection(X, y)
print('resulting features:')
print(result)
However, I have run into the following error:
--------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-21-782c721f1ba0> in <module>
59 return included
60
---> 61 result = stepwise_selection(X, y)
62
63 print('resulting features:')
<ipython-input-21-782c721f1ba0> in stepwise_selection(X, y, initial_list, threshold_in, threshold_out, verbose)
45
46 # backward step
---> 47 model = sm.OLS(y, sm.add_constant(pd.DataFrame(X[included]))).fit()
48 # use all coefs except intercept
49 pvalues = model.pvalues.iloc[1:]
~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2804 if is_iterator(key):
2805 key = list(key)
-> 2806 indexer = self.loc._get_listlike_indexer(key, axis=1, raise_missing=True)[1]
2807
2808 # take() does not accept boolean indexers
~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _get_listlike_indexer(self, key, axis, raise_missing)
1551
1552 self._validate_read_indexer(
-> 1553 keyarr, indexer, o._get_axis_number(axis), raise_missing=raise_missing
1554 )
1555 return keyarr, indexer
~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing)
1638 if missing == len(indexer):
1639 axis_name = self.obj._get_axis_name(axis)
-> 1640 raise KeyError(f"None of [{key}] are in the [{axis_name}]")
1641
1642 # We (temporarily) allow for some missing keys with .loc, except in
KeyError: "None of [Int64Index([8], dtype='int64')] are in the [columns]"
Expected output should be this:
Add LSTAT with p-value 5.0811e-88
Add RM with p-value 3.47226e-27
Add PTRATIO with p-value 1.64466e-14
Add DIS with p-value 1.66847e-05
Add NOX with p-value 5.48815e-08
Add CHAS with p-value 0.000265473
Add B with p-value 0.000771946
Add ZN with p-value 0.00465162
resulting features:
['LSTAT', 'RM', 'PTRATIO', 'DIS', 'NOX', 'CHAS', 'B', 'ZN']
Appreciate any help given, thank you!
I am not sure how the code worked in the first place; perhaps argmin/argmax behaved differently in an older pandas version (they used to return index labels rather than positions). You get the error because of this line:
best_feature = new_pval.argmin()
You need the actual name of the feature, so if you change it to:
best_feature = new_pval.index[new_pval.argmin()]
And likewise this line:
worst_feature = pvalues.argmax()
To:
worst_feature = pvalues.index[pvalues.argmax()]
I get this:
Add LSTAT with p-value 5.0811e-88
Add RM with p-value 3.47226e-27
Add PTRATIO with p-value 1.64466e-14
Add DIS with p-value 1.66847e-05
Add NOX with p-value 5.48815e-08
Add CHAS with p-value 0.000265473
Add B with p-value 0.000771946
Add ZN with p-value 0.00465162
resulting features:
['LSTAT', 'RM', 'PTRATIO', 'DIS', 'NOX', 'CHAS', 'B', 'ZN']
Although, from a statistical point of view, I have some doubts about the implementation itself. I suggest you post that part on Cross Validated or as another question.
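As an aside, a slightly cleaner way to get the labels (just a sketch, assuming a reasonably recent pandas) is idxmin/idxmax, which return the index label directly instead of a position:
best_feature = new_pval.idxmin()    # label of the feature with the smallest p-value
worst_feature = pvalues.idxmax()    # label of the feature with the largest p-value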
(A variation of this post, without the detailed traceback, was posted on SO about two hours ago. This version contains the whole traceback.)
I am running statsmodels to get parameter estimates from ordinary least squares (OLS). The data-processing and model-specific commands are shown below. When I use import statsmodels.formula.api as sm as the operative API, the OLS works as desired (after I drop some 15 rows programmatically), giving intuitive results. But when I switch to import statsmodels.api as sm as the binding API, without changing the code at all, things fall apart, and the Python interpreter raises an error saying that 'inc_2' is not in the index. Mind you, inc_2 was computed after the dataframe was read in, in both model runs, and yet the run succeeded in the first case but not in the second. (BTW, p_c_inc_18 is per-capita income, and inc_2 is the former squared. inc_2 is the offending element in the second run.)
import pandas as pd
import numpy as np
import statsmodels.api as sm
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
eg = pd.read_csv(r'C:/../../../une_edu_pipc_06.csv')
pd.options.display.precision = 3
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
sm_col = eg["lt_hsd_17"] + eg["hsd_17"]
eg["ut_hsd_17"] = sm_col
sm_col2 = eg["sm_col_17"] + eg["col_17"]
eg["bnd_hsd_17"] = sm_col2
eg["d_09"]= eg["Rate_09"]-eg["Rate_06"]
eg["d_10"]= eg["Rate_10"]-eg["Rate_06"] inc_2=eg["p_c_inc_18"]*eg["p_c_inc_18"]
X = eg[["p_c_inc_18","ut_hsd_17","d_10","inc_2"]]
y = eg["Rate_18"]
X = sm.add_constant(X)
mod = sm.OLS(y, X)
res = mod.fit()
print(res.summary())
Here is the traceback in full.
KeyError Traceback (most recent call last)
<ipython-input-21-e2f4d325145e> in <module>
17 eg["d_10"]= eg["Rate_10"]-eg["Rate_06"]
18 inc_2=eg["p_c_inc_18"]*eg["p_c_inc_18"]
---> 19 X = eg[["p_c_inc_18","ut_hsd_17","d_10","inc_2"]]
20 y = eg["Rate_18"]
21 X = sm.add_constant(X)
~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2804 if is_iterator(key):
2805 key = list(key)
-> 2806 indexer = self.loc._get_listlike_indexer(key, axis=1, raise_missing=True)[1]
2807
2808 # take() does not accept boolean indexers
~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _get_listlike_indexer(self, key, axis, raise_missing)
1550 keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr)
1551
-> 1552 self._validate_read_indexer(
1553 keyarr, indexer, o._get_axis_number(axis), raise_missing=raise_missing
1554 )
~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing)
1644 if not (self.name == "loc" and not raise_missing):
1645 not_found = list(set(key) - set(ax))
-> 1646 raise KeyError(f"{not_found} not in index")
1647
1648 # we skip the warning on Categorical/Interval
KeyError: "['inc_2'] not in index"
What am I doing wrong?
The syntax you used insists that a list of strings is a legal index into eg, which requires each of those strings to be a column of eg. If you print(eg.columns), you'll see that it has no 'inc_2' column: inc_2 was created as a standalone Series and never added to the dataframe. I think what you meant was to make a list of elements, each a single column (using the standalone inc_2 for the last one):
X = [
    eg["p_c_inc_18"],
    eg["ut_hsd_17"],
    eg["d_10"],
    inc_2          # the standalone Series computed above; eg["inc_2"] does not exist
]
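If you would rather keep the original list-of-strings selection, an equally small fix (just a sketch, reusing the names from your code) is to attach inc_2 to the dataframe first:
eg["inc_2"] = eg["p_c_inc_18"] ** 2   # or: eg["inc_2"] = inc_2
X = eg[["p_c_inc_18", "ut_hsd_17", "d_10", "inc_2"]]
X = sm.add_constant(X)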
I am trying to change the very simplest getting-started example of pymc3 (https://docs.pymc.io/notebooks/getting_started.html), the motivating example of linear regression, into fitting a stretched exponential.
The simplest version of the model I tried is y = exp(-x**beta)
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
# Initialize random number generator
np.random.seed(1234)
# True parameter values
sigma = .1
beta = 1
# Size of dataset
size = 1000
# Predictor variable
X1 = np.random.randn(size)
# Simulate outcome variable
Y = np.exp(-X1**beta) + np.random.randn(size)*sigma
# specify the model
import pymc3 as pm
import theano.tensor as tt
print('Running on PyMC3 v{}'.format(pm.__version__))
basic_model = pm.Model()
with basic_model:
    # Priors for unknown model parameters
    beta = pm.HalfNormal('beta', sigma=1)
    sigma = pm.HalfNormal('sigma', sigma=1)
    # Expected value of outcome
    mu = pm.math.exp(-X1**beta)
    # Likelihood (sampling distribution) of observations
    Y_obs = pm.Normal('Y_obs', mu=mu, sigma=sigma, observed=Y)
with basic_model:
    # draw 500 posterior samples
    trace = pm.sample(500)
which yields the output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, beta]
Sampling 4 chains: 0%| | 0/4000 [00:00<?, ?draws/s]
/opt/conda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:2920: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/opt/conda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:2920: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
Bad initial energy, check any log probabilities that are inf or -inf, nan or very small:
Y_obs NaN
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/pymc3/parallel_sampling.py", line 160, in _start_loop
point, stats = self._compute_point()
File "/opt/conda/lib/python3.7/site-packages/pymc3/parallel_sampling.py", line 191, in _compute_point
point, stats = self._step_method.step(self._point)
File "/opt/conda/lib/python3.7/site-packages/pymc3/step_methods/arraystep.py", line 247, in step
apoint, stats = self.astep(array)
File "/opt/conda/lib/python3.7/site-packages/pymc3/step_methods/hmc/base_hmc.py", line 144, in astep
raise SamplingError("Bad initial energy")
pymc3.exceptions.SamplingError: Bad initial energy
"""
The above exception was the direct cause of the following exception:
SamplingError Traceback (most recent call last)
SamplingError: Bad initial energy
The above exception was the direct cause of the following exception:
ParallelSamplingError Traceback (most recent call last)
<ipython-input-310-782c941fbda8> in <module>
1 with basic_model:
2 # draw 500 posterior samples
----> 3 trace = pm.sample(500)
/opt/conda/lib/python3.7/site-packages/pymc3/sampling.py in sample(draws, step, init, n_init, start, trace, chain_idx, chains, cores, tune, progressbar, model, random_seed, discard_tuned_samples, compute_convergence_checks, **kwargs)
435 _print_step_hierarchy(step)
436 try:
--> 437 trace = _mp_sample(**sample_args)
438 except pickle.PickleError:
439 _log.warning("Could not pickle model, sampling singlethreaded.")
/opt/conda/lib/python3.7/site-packages/pymc3/sampling.py in _mp_sample(draws, tune, step, chains, cores, chain, random_seed, start, progressbar, trace, model, **kwargs)
967 try:
968 with sampler:
--> 969 for draw in sampler:
970 trace = traces[draw.chain - chain]
971 if (trace.supports_sampler_stats
/opt/conda/lib/python3.7/site-packages/pymc3/parallel_sampling.py in __iter__(self)
391
392 while self._active:
--> 393 draw = ProcessAdapter.recv_draw(self._active)
394 proc, is_last, draw, tuning, stats, warns = draw
395 if self._progress is not None:
/opt/conda/lib/python3.7/site-packages/pymc3/parallel_sampling.py in recv_draw(processes, timeout)
295 else:
296 error = RuntimeError("Chain %s failed." % proc.chain)
--> 297 raise error from old_error
298 elif msg[0] == "writing_done":
299 proc._readable = True
ParallelSamplingError: Bad initial energy
INFO (theano.gof.compilelock): Waiting for existing lock by process '30255' (I am process '30252')
INFO (theano.gof.compilelock): To manually release the lock, delete /home/jovyan/.theano/compiledir_Linux-4.4--generic-x86_64-with-debian-buster-sid-x86_64-3.7.3-64/lock_dir
/opt/conda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:2920: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/opt/conda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:2920: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
Instead of the stretched exponential, I have also tried power laws and sine functions. It seems to me that the problem arises as soon as my model is not injective. Could this be the issue (as is apparent, I am a newbie in this field)? Can I restrict sampling to only positive x values? Are there any tricks for this?
So the problem here is that
X1**beta
is only defined when X1 >= 0 or when beta is an integer. When you feed this into your observations, beta will usually be a non-integer float during sampling, so for the negative entries of X1 many of the values of
mu = pm.math.exp(-X1**beta)
will be nan.
I found this out with
>>> basic_model.check_test_point()
beta_log__ -0.77
sigma_log__ -0.77
Y_obs NaN
Name: Log-probability of test_point, dtype: float64
I am not sure what model you are trying to specify! There are ways to require beta to be an integer, and ways to require that X1 be positive, but I would need more details to help you describe the model.
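For example, if restricting the predictor to positive values is acceptable for your model (that is an assumption on my part, not something your question states), a minimal sketch would be to simulate and fit with a strictly positive X1, so that X1**beta is defined for non-integer beta:
import numpy as np
import pymc3 as pm

size = 1000
X1 = np.abs(np.random.randn(size)) + 1e-6              # strictly positive predictor
Y = np.exp(-X1**1.0) + np.random.randn(size) * 0.1      # true beta = 1, sigma = 0.1

with pm.Model() as positive_x_model:
    beta = pm.HalfNormal('beta', sigma=1)
    sigma = pm.HalfNormal('sigma', sigma=1)
    mu = pm.math.exp(-X1**beta)   # well-defined for positive X1 and float beta
    Y_obs = pm.Normal('Y_obs', mu=mu, sigma=sigma, observed=Y)
    trace = pm.sample(500)
Alternatively you could make beta a discrete (integer) random variable, e.g. with pm.DiscreteUniform, but that changes both the sampler and the model, so it depends on what you actually want beta to mean.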
I was trying to fit an OLS model, which works correctly without robust estimation, but I want to improve my regression, so, as shown below, I tried to implement that and ran into this problem; the commented lines show other attempts to solve it.
I don't know if I applied the keyword correctly, so I'd appreciate any help.
Code:
# Fit and summarize OLS model
sumrz = dict()
for i, ca in enumerate(ccaa):
    x = sm.add_constant(data.dy[ca])
    mod = sm.OLS(endog=data.du[ca], exog=x, hasconst=True, missing='drop')
    res = mod.fit(cov_type='HAC', cov_kwds={'maxlags':1})
    # res = res.get_robustcov_results(cov_type='HAC', maxlags=1, use_correction=True)
    # res = res.get_robustcov_results(cov_type='HC0')
    sumrz[ca] = res.summary(xname=['const','dy'], yname='du', title=ca)
Error
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-114-87912e59a35d> in <module>()
9 # res = res.get_robustcov_results(cov_type='HAC', maxlags=1, use_correction=True)
10 # res = res.get_robustcov_results(cov_type='HC0')
---> 11 sumrz[ca] = res.summary(xname=['const','dy'], yname='du', title=ca)
/Users/mmngreco/anaconda/lib/python2.7/site-packages/statsmodels/regression/linear_model.pyc in summary(self, yname, xname, title, alpha)
1950 top_right = [('R-squared:', ["%#8.3f" % self.rsquared]),
1951 ('Adj. R-squared:', ["%#8.3f" % self.rsquared_adj]),
-> 1952 ('F-statistic:', ["%#8.4g" % self.fvalue] ),
1953 ('Prob (F-statistic):', ["%#6.3g" % self.f_pvalue]),
1954 ('Log-Likelihood:', None), #["%#6.4g" % self.llf]),
/Users/mmngreco/anaconda/lib/python2.7/site-packages/statsmodels/tools/decorators.pyc in __get__(self, obj, type)
92 if _cachedval is None:
93 # Call the "fget" function
---> 94 _cachedval = self.fget(obj)
95 # Set the attribute in obj
96 # print("Setting %s in cache to %s" % (name, _cachedval))
/Users/mmngreco/anaconda/lib/python2.7/site-packages/statsmodels/regression/linear_model.pyc in fvalue(self)
1214 # assume const_idx exists
1215 idx = lrange(k_params)
-> 1216 idx.pop(const_idx)
1217 mat = mat[idx] # remove constant
1218 ft = self.f_test(mat)
TypeError: an integer is required
(It's good to see a full traceback in a question.)
The following is my guess based on the traceback.
I guess there is a bug in the constant detection when hasconst=True is specified.
Try leaving out the hasconst=True argument.
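A minimal sketch of that change, reusing the names from your loop (nothing else needs to move):
    x = sm.add_constant(data.dy[ca])
    mod = sm.OLS(endog=data.du[ca], exog=x, missing='drop')   # no hasconst=True
    res = mod.fit(cov_type='HAC', cov_kwds={'maxlags': 1})
    sumrz[ca] = res.summary(xname=['const', 'dy'], yname='du', title=ca)
This lets statsmodels detect the constant that add_constant inserted, so const_idx gets set correctly.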
Background
If we don't allow for misspecified heteroscedasticity or correlation, and we don't use a robust covariance matrix, then the F statistic can be calculated from the residual sum of squares.
If a robust cov_type is specified, then we use the Wald test for the null hypothesis that all slope coefficients are zero. This is valid with a robust covariance of the parameters even if heteroscedasticity or correlation are misspecified.
In this case the index for the column with the constant, const_idx, is not correctly set and we get the TypeError.
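As an illustration of that background (a sketch, assuming the constant sits in the first column, as add_constant puts it), the same joint "all slopes are zero" hypothesis can be run explicitly as a Wald test, which is what happens internally when a robust cov_type is used:
import numpy as np
R = np.eye(len(res.params))[1:]   # one restriction per coefficient, constant excluded
print(res.f_test(R))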
I'm again having trouble using the scikit-learn silhouette coefficient (my first question was here: silhouette coefficient in python with sklearn).
I produce a clustering that can be very unbalanced but has a lot of individuals, so I want to use the sampling parameter of the silhouette coefficient. I was wondering whether the subsampling is stratified, i.e. sampled with respect to the clusters. I take the iris dataset as an example, but my dataset is far bigger (which is why I need sampling).
My code is:
import pandas as pd
from sklearn import datasets
from sklearn.metrics import *
iris = datasets.load_iris()
col = iris.feature_names
name = iris.target_names
X = pd.DataFrame(iris.data, columns = col)
y = iris.target
s = silhouette_score(X.values, y, metric='euclidean',sample_size=50)
which works. But now, if I bias it with:
y[0:148] =0
y[148] = 1
y[149] = 2
print y
s = silhouette_score(X.values, y, metric='euclidean',sample_size=50)
I get:
ValueError Traceback (most recent call last)
<ipython-input-12-68a7fba49c54> in <module>()
4 y[149] =2
5 print y
----> 6 s = silhouette_score(X.values, y, metric='euclidean',sample_size=50)
/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in silhouette_score(X, labels, metric, sample_size, random_state, **kwds)
82 else:
83 X, labels = X[indices], labels[indices]
---> 84 return np.mean(silhouette_samples(X, labels, metric=metric, **kwds))
85
86
/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in silhouette_samples(X, labels, metric, **kwds)
146 for i in range(n)])
147 B = np.array([_nearest_cluster_distance(distances[i], labels, i)
--> 148 for i in range(n)])
149 sil_samples = (B - A) / np.maximum(A, B)
150 # nan values are for clusters of size 1, and should be 0
/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in _nearest_cluster_distance(distances_row, labels, i)
200 label = labels[i]
201 b = np.min([np.mean(distances_row[labels == cur_label])
--> 202 for cur_label in set(labels) if not cur_label == label])
203 return b
/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.pyc in amin(a, axis, out, keepdims)
1980 except AttributeError:
1981 return _methods._amin(a, axis=axis,
-> 1982 out=out, keepdims=keepdims)
1983 # NOTE: Dropping the keepdims parameter
1984 return amin(axis=axis, out=out)
/usr/lib/python2.7/dist-packages/numpy/core/_methods.pyc in _amin(a, axis, out, keepdims)
12 def _amin(a, axis=None, out=None, keepdims=False):
13 return um.minimum.reduce(a, axis=axis,
---> 14 out=out, keepdims=keepdims)
15
16 def _sum(a, axis=None, dtype=None, out=None, keepdims=False):
ValueError: zero-size array to reduction operation minimum which has no identity
an error which I think is due to the fact that the sampling is random, not stratified, so it has not taken the two small clusters into account.
Am I correct?
Yes, you are correct. The sampling is not stratified, since it doesn't take the labels into consideration when drawing the sample.
This is how the sample is taken (version 0.14.1)
indices = random_state.permutation(X.shape[0])[:sample_size]
Where X is the input array of size [n_samples_a, n_samples_a] or [n_samples_a, n_features].
I think you are right, the current implementation does not support balanced resampling.
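If you do need stratified behaviour, a workaround (my own sketch, not something sklearn provides; it assumes X and labels are numpy arrays) is to subsample per cluster yourself and then score the subsample without sample_size:
import numpy as np
from sklearn.metrics import silhouette_score

def stratified_silhouette(X, labels, sample_size, random_state=0):
    # draw a per-cluster sample so every cluster keeps roughly its share of points
    rng = np.random.RandomState(random_state)
    idx = []
    for lab in np.unique(labels):
        members = np.where(labels == lab)[0]
        k = max(1, int(round(sample_size * len(members) / len(labels))))
        idx.extend(rng.choice(members, size=min(k, len(members)), replace=False))
    idx = np.asarray(idx)
    return silhouette_score(X[idx], labels[idx], metric='euclidean')
For the iris example above that would be stratified_silhouette(X.values, y, 50). Note that the silhouette is undefined for clusters with a single point, so very small clusters can still behave oddly even with stratified sampling.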
Just an update for year 2020:
As of scikit-learn 0.22.1, the sampling remains random (i.e. not stratified).
The source code is still:
indices = random_state.permutation(X.shape[0])[:sample_size]
I'm having trouble computing the silhouette coefficient in python with sklearn.
Here is my code:
import pandas as pd
from sklearn import datasets
from sklearn.metrics import *
iris = datasets.load_iris()
col = iris.feature_names
X = pd.DataFrame(iris.data, columns = col)
y = pd.DataFrame(iris.target,columns = ['cluster'])
s = silhouette_score(X, y, metric='euclidean',sample_size=int(50))
I get the error:
IndexError: indices are out-of-bounds
I want to use the sample_size parameter because, when working with very large datasets, the silhouette takes too long to compute. Does anyone know how this parameter works?
Complete traceback:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-72-70ff40842503> in <module>()
4 X = pd.DataFrame(iris.data, columns = col)
5 y = pd.DataFrame(iris.target,columns = ['cluster'])
----> 6 s = silhouette_score(X, y, metric='euclidean',sample_size=50)
/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in silhouette_score(X, labels, metric, sample_size, random_state, **kwds)
81 X, labels = X[indices].T[indices].T, labels[indices]
82 else:
---> 83 X, labels = X[indices], labels[indices]
84 return np.mean(silhouette_samples(X, labels, metric=metric, **kwds))
85
/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in __getitem__(self, key)
1993 if isinstance(key, (np.ndarray, list)):
1994 # either boolean or fancy integer index
-> 1995 return self._getitem_array(key)
1996 elif isinstance(key, DataFrame):
1997 return self._getitem_frame(key)
/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in _getitem_array(self, key)
2030 else:
2031 indexer = self.ix._convert_to_indexer(key, axis=1)
-> 2032 return self.take(indexer, axis=1, convert=True)
2033
2034 def _getitem_multilevel(self, key):
/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in take(self, indices, axis, convert)
2981 if convert:
2982 axis = self._get_axis_number(axis)
-> 2983 indices = _maybe_convert_indices(indices, len(self._get_axis(axis)))
2984
2985 if self._is_mixed_type:
/usr/local/lib/python2.7/dist-packages/pandas/core/indexing.pyc in _maybe_convert_indices(indices, n)
1038 mask = (indices>=n) | (indices<0)
1039 if mask.any():
-> 1040 raise IndexError("indices are out-of-bounds")
1041 return indices
1042
IndexError: indices are out-of-bounds
silhouette_score expects regular numpy arrays as input. Why wrap your arrays in data frames?
>>> silhouette_score(iris.data, iris.target, sample_size=50)
0.52999903616584543
From the traceback, you can observe that the code is doing fancy indexing (subsampling) on the first axis. By default, indexing a dataframe with an array indexes the columns and not the rows, hence the issue you observe.
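If you do want to keep the pandas objects around, a small sketch that avoids the column-indexing issue is to pass the underlying numpy arrays (and a 1-D label array) yourself:
s = silhouette_score(X.values, y["cluster"].values, metric='euclidean', sample_size=50)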