When I perform a logistic regression using the two APIs, they give different coefficients.
Even with this simple example the results don't match in terms of coefficients, and I have followed older advice on the same topic, like setting a large value for the parameter C in sklearn since it makes the penalization almost vanish (or setting penalty="none").
import pandas as pd
import numpy as np
import sklearn as sk
from sklearn.linear_model import LogisticRegression
import statsmodels.api as sm
n = 200
x = np.random.randint(0, 2, size=n)
y = (x > (0.5 + np.random.normal(0, 0.5, n))).astype(int)
display(pd.crosstab( y, x ))
max_iter = 100
#### Statsmodels
res_sm = sm.Logit(y, x).fit(method="ncg", maxiter=max_iter)
print(res_sm.params)
#### Scikit-Learn
res_sk = LogisticRegression( solver='newton-cg', multi_class='multinomial', max_iter=max_iter, fit_intercept=True, C=1e8 )
res_sk.fit( x.reshape(n, 1), y )
print(res_sk.coef_)
For example, I just ran the above code and got 1.72276655 for statsmodels and 1.86324749 for sklearn. When run multiple times, it always gives different coefficients (sometimes closer than others, but different nonetheless).
Thus, even with this toy example the two APIs give different coefficients (and therefore different odds ratios), and with real data (not shown here) it almost gets "out of control"...
Am I missing something? How can I produce similar coefficients, say matching to at least one or two decimal places?
There are some issues with your code.
To start with, the two models you show here are not equivalent: although you fit your scikit-learn LogisticRegression with fit_intercept=True (which is the default setting), you don't do so with your statsmodels one; from the statsmodels docs:
An intercept is not included by default and should be added by the user. See statsmodels.tools.add_constant.
It seems that this is a frequent point of confusion - see for example scikit-learn & statsmodels - which R-squared is correct? (and my own answer there as well).
The other issue is that, although you are in a binary classification setting, you ask for multi_class='multinomial' in your LogisticRegression, which should not be the case.
The third issue is that, as explained in the relevant Cross Validated thread Logistic Regression: Scikit Learn vs Statsmodels:
There is no way to switch off regularization in scikit-learn, but you can make it ineffective by setting the tuning parameter C to a large number.
which makes the two models again non-comparable in principle, but you have successfully addressed it here by setting C=1e8. In fact, since then (2016), scikit-learn has indeed added a way to switch regularization off entirely, by setting penalty='none', since according to the docs:
If ‘none’ (not supported by the liblinear solver), no regularization is applied.
which should now be considered the canonical way to switch off the regularization.
So, incorporating these changes in your code, we have:
np.random.seed(42) # for reproducibility
#### Statsmodels
# first artificially add intercept to x, as advised in the docs:
x_ = sm.add_constant(x)
res_sm = sm.Logit(y, x_).fit(method="ncg", maxiter=max_iter) # x_ here
print(res_sm.params)
Which gives the result:
Optimization terminated successfully.
Current function value: 0.403297
Iterations: 5
Function evaluations: 6
Gradient evaluations: 10
Hessian evaluations: 5
[-1.65822763 3.65065752]
with the first element of the array being the intercept and the second the coefficient of x. While for scikit-learn we have:
#### Scikit-Learn
res_sk = LogisticRegression(solver='newton-cg', max_iter=max_iter, fit_intercept=True, penalty='none')
res_sk.fit( x.reshape(n, 1), y )
print(res_sk.intercept_, res_sk.coef_)
with the result being:
[-1.65822806] [[3.65065707]]
These results are practically identical, within the machine's numeric precision.
Repeating the procedure for different values of np.random.seed() does not change the essence of the results shown above.
I'm trying to use an MLPRegressor from scikit-learn to do a non-linear regression on a set of 260 examples (X, Y). Each example is composed of 200 features for X and 1 feature for Y.
File containing X
File containing Y
The link between X and Y is not obvious if directly plotted together but if we plot x=log10(sum(X)) and y=log10(Y), the link between both is almost linear.
As a first approach, I tried to apply my neural network directly on X and Y without success.
I have read that scaling should improve the regression. In my case, Y contains data in a very wide range of values (from 10e-12 to 10e-5). When computing the error, 10e-5 of course has much more weight than 10e-12, but I would like my neural network to approximate both correctly. When using a linear scaling, say preprocessing.MinMaxScaler from scikit-learn, 10e-8 ~ -0.99 and 10e-12 ~ -1, so I'm losing all the information about my target.
My question here is: what kind of scaling could I use to get consistent results?
The only solution I have found is to apply log10(Y), but of course the error is then increased exponentially.
The best I could get is with the code below:
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"]=(20,10)
freqIter = []
for i in np.arange(0, 0.2, 0.001):
    freqIter.append([i, i + 0.001])
#############################################################################
# learningFiles is the list of input files (loaded elsewhere)
X = np.zeros((len(learningFiles), len(freqIter)))
Y = np.zeros(len(learningFiles))
# Import X: loadtxt()
# Import Y: loadtxt
# Scale X and Y by their global maxima
maxy = np.amax(Y)
Y *= 1/maxy
Y = Y.reshape(-1, 1)
maxx = np.amax(X)
X *= 1/maxx
#############################################################################
reg = MLPRegressor(hidden_layer_sizes=(8,2), activation='tanh', solver='adam', alpha=0.0001, learning_rate='adaptive', max_iter=10000, verbose=False, tol=1e-7)
reg.fit(X, Y.ravel())  # ravel() to pass a 1-D target to MLPRegressor
#############################################################################
plt.scatter([np.log10(np.sum(kou*maxx)) for kou in X],Y*maxy,label = 'INPUTS',color='blue')
plt.scatter([np.log10(np.sum(kou*maxx)) for kou in X],reg.predict(X)*maxy,label='Predicted',color='red')
plt.grid()
plt.legend()
plt.show()
Result: (scatter plot of INPUTS vs. Predicted, not shown here)
Thanks for your help.
You may want to look at a FunctionTransformer. The example in its documentation applies a logarithmic transformation as part of pre-processing, and you can do the same for an arbitrary mathematical function.
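For instance, a minimal sketch of the idea (the data here is made up, just standing in for the real Y):
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# Log10 transformer with an explicit inverse: train on the transformed target,
# then map predictions back to the original scale with inverse_transform.
log_tf = FunctionTransformer(func=np.log10, inverse_func=lambda z: 10 ** z)

Y = np.logspace(-12, -5, 260).reshape(-1, 1)   # dummy target spanning 1e-12 .. 1e-5
Y_log = log_tf.fit_transform(Y)                # values roughly in [-12, -5]
Y_back = log_tf.inverse_transform(Y_log)       # back on the original scale
print(np.allclose(Y, Y_back))                  # True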
I would also suggest trying a ReLU activation function if you scale logarithmically. After the transformation your data looks fairly linear, so it may converge a little faster -- but that's just a hunch.
I've finally found something interesting that is working well on my case.
First, I used a log scaling for Y. I think it is the most suitable scaling when the range of values is very wide, such as mine (from 10e-12 to 10e-5). The target is then between -5 and -12.
Secondly, my mistake in scaling X was to apply the same scaling to all features. My X contains 200 features, and I was dividing by the maximum over all features of all examples. My solution is to scale feature1 by the maximum of feature1 across all examples, and then to repeat this for every feature, as sketched below. This gives me feature1 between 0 and 1 for all examples, instead of a much narrower range as before (feature1 could be between 0 and 0.0001 with my previous scaling).
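A small sketch of the difference (random data standing in for the real 260 x 200 X):
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)
X = rng.rand(260, 200) * rng.rand(200)   # columns with very different ranges

# Old scaling: one global maximum for the whole matrix
X_global = X / X.max()

# New scaling: each feature divided by its own maximum, so every column ends up in [0, 1]
X_per_feature = X / X.max(axis=0)

# MinMaxScaler does the same per-feature job (plus a shift by each column's minimum)
X_minmax = MinMaxScaler().fit_transform(X)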
I get better results; my main issue now is selecting the right parameters (number of layers, tolerance, ...), but that is another problem.
I have a dataset with about 100+ features. I also have a small set of covariates.
I build an OLS linear model using statsmodels of the form y = x + C1 + C2 + C3 + C4 + ... + Cn, where x is a feature, C1..Cn are the covariates, and y is the dependent variable.
I'm trying to perform hypothesis testing on the regression coefficients to test if the coefficients are equal to 0. I figured a t-test would be the appropriate approach to this, but I'm not quite sure how to go about implementing this in Python, using statsmodels.
I know, particularly, that I'd want to use http://www.statsmodels.org/devel/generated/statsmodels.regression.linear_model.RegressionResults.t_test.html#statsmodels.regression.linear_model.RegressionResults.t_test
But I am not certain I understand the r_matrix parameter. What could I provide to this? I did look at the examples but it is unclear to me.
Furthermore, I am not interested in doing the t-tests on the covariates themselves, but only on the regression coefficient of x.
Any help appreciated!
Are you sure you don't want statsmodels.regression.linear_model.OLS? This will perform an OLS regression, making available the parameter estimates and the corresponding p-values (and many other things).
from statsmodels.regression import linear_model
from statsmodels.api import add_constant
Y = [1,2,3,5,6,7,9]
X = add_constant(range(len(Y)))
model = linear_model.OLS(Y, X)
results = model.fit()
print(results.params) # [ 0.75 1.32142857]
print(results.pvalues) # [ 2.00489220e-02 4.16826428e-06]
These p-values are from the t-tests of each fit parameter being equal to 0.
It seems like RegressionResults.t_test would be useful for less conventional hypotheses.
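For instance, a sketch of how the r_matrix argument could be used here to test only the coefficient of x (assuming the default parameter names 'const' and 'x1' that statsmodels assigns to array inputs):
from statsmodels.regression import linear_model
from statsmodels.api import add_constant

Y = [1, 2, 3, 5, 6, 7, 9]
X = add_constant(range(len(Y)))
results = linear_model.OLS(Y, X).fit()

# r_matrix selects the linear combination of parameters to test:
# [0, 1] means 0*const + 1*x1, i.e. H0: the coefficient of x is 0
print(results.t_test([0, 1]))

# Equivalent formulation using the default parameter names
print(results.t_test("x1 = 0"))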
I have a database of features, a 2D np.array (2000 samples, each containing 100 features, i.e. 2000 x 100). I want to fit Gaussian distributions to my database using Python. My code is the following:
data = load_my_data() # loads a np.array with size 2000x200
clf = mixture.GaussianMixture(n_components= 50, covariance_type='full')
clf.fit(data)
I am not sure about the parameters, for example covariance_type, and how I can check whether the fit was successful or not.
EDIT: I debugged the code to investigate what is happening with clf.means_, and apparently it produces a matrix of shape n_components x size_of_features (50 x 20). Is there a way I can check that the fitting was successful, or plot the data? What are the alternatives to Gaussian mixtures (mixtures of exponentials, for example; I cannot find any available implementation)?
I think you are using the sklearn package.
Once you have fit the model, type
print(clf.means_)
If it prints output, then the model has been fitted; if it raises an error, it has not.
Hope this helps you.
You can also do dimensionality reduction using PCA down to 3D (or 2D) space, say, and then plot the means together with the data.
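A rough sketch of that idea (random data standing in for the real 2000 x 100 feature matrix, and 2D instead of 3D for simplicity):
import numpy as np
import matplotlib.pyplot as plt
from sklearn import mixture
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
data = rng.randn(2000, 100)              # placeholder for the real feature matrix

clf = mixture.GaussianMixture(n_components=50, covariance_type='full')
clf.fit(data)
print(clf.converged_)                    # True if EM converged within max_iter

# Project the data and the fitted component means down to 2D and plot them together
pca = PCA(n_components=2).fit(data)
data_2d = pca.transform(data)
means_2d = pca.transform(clf.means_)

plt.scatter(data_2d[:, 0], data_2d[:, 1], s=2, alpha=0.3, label='data')
plt.scatter(means_2d[:, 0], means_2d[:, 1], c='red', marker='x', label='component means')
plt.legend()
plt.show()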
It is always preferable to choose a reduced set of candidate distributions before trying to identify the best one (in other words, use a Cullen & Frey graph to reject the unlikely candidates), and then run goodness-of-fit tests to select the best result.
You can just create a list of all available distributions in scipy. An example with two distributions and random data:
import numpy as np
import scipy.stats as st
data = np.random.random(10000)
#Specify all distributions here
distributions = [st.laplace, st.norm]
mles = []
for distribution in distributions:
    pars = distribution.fit(data)
    mle = distribution.nnlf(pars, data)  # negative log-likelihood at the fitted parameters
    mles.append(mle)

results = [(distribution.name, mle) for distribution, mle in zip(distributions, mles)]
best_fit = sorted(zip(distributions, mles), key=lambda d: d[1])[0]
print('Best fit reached using {}, MLE value: {}'.format(best_fit[0].name, best_fit[1]))
I understand that you may want to regress two different distributions against each other, rather than fit them to an analytic curve. If that is the case, you may be interested in plotting one against the other and running a linear (or polynomial) regression, then checking the coefficients.
Such a linear regression of the two distributions can tell you whether they are linearly dependent or not, as sketched below.
Linear Regression using Scipy documentation
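A minimal sketch of that idea with scipy.stats.linregress, regressing the sorted values (empirical quantiles) of one sample against the other (hypothetical data):
import numpy as np
import scipy.stats as st

rng = np.random.RandomState(0)
a = rng.normal(0.0, 1.0, 1000)
b = rng.normal(0.5, 1.2, 1000)

# Regress the sorted values of b on the sorted values of a (quantile against quantile)
slope, intercept, r_value, p_value, std_err = st.linregress(np.sort(a), np.sort(b))
print(slope, intercept, r_value)   # an r_value close to 1 suggests a linear relation between the two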
I'm using Python's statsmodels to perform a weighted linear regression. Since this is my first time with this module, I ran some basic tests.
Doing these, I find that the estimated errors on the parameters (i.e., the uncertainties) are independent of the weights.
This doesn't fit my naive expectation (that larger error bars should produce a more uncertain result), nor the definition I've seen in nonlinear fitting, which generally uses the weighted Hessian to report the uncertainty of the parameters.
I couldn't find an explicit description of the calculation of the parameter-errors (either in the statsmodels docs or support pages). Posting here to see if anyone has experience with this (and for future searchers).
Minimal code:
import numpy as np
import statsmodels.api as sm
escale = 1
xvals = np.arange(1,11)
yvals = xvals**2 # dummy nonlinear y-data
evals = escale*np.sqrt(yvals) # errorbars
X = sm.add_constant(xvals) # to get intercept term
model = sm.WLS(yvals, X, 1.0/evals**2) # weight inverse to error
result = model.fit()
print "escale = ", escale
print "fit = ", result.params
print "err = ", result.bse # same value independent of escale
So is this a numerical bug in statsmodels, or PEBKAC (i.e., is this correct behavior and my expectations are wrong)?
I want to use the KernelRidge class of the scikit-learn library to fit a nonlinear regression model to my data, but I am getting confused about how to do that.
from sklearn.kernel_ridge import KernelRidge
import numpy as np
n_samples, n_features = 20,1
rng = np.random.RandomState(0)
y = rng.randn(n_samples)
X = rng.randn(n_samples, n_features)
Krr = KernelRidge(alpha=1.0, kernel='linear',degree = 4)
Krr.fit(X, y)
I am expecting 5 coefficients to be set for this model; how can I get them?
The above code will transform the 1-D data into a 4-D space and fit the model to the data. I think it should find the best c0, c1, c2, c3, c4 for the training data. My question is: how can I access c0, c1, c2, c3, c4?
EDIT:
I made a mistake in my code above: the kernel parameter should be "polynomial" instead of "linear" in line 7.
Krr = KernelRidge(alpha=1.0, kernel='polynomial',degree = 4)
But my question is same as before.
http://scikit-learn.org/stable/modules/generated/sklearn.kernel_ridge.KernelRidge.html#sklearn.kernel_ridge.KernelRidge
dual_coef_ : array, shape = [n_features] or [n_targets, n_features]
so
Krr.dual_coef_
should do it.
EDIT:
OK, so dual_coef_ is the coefficient vector in kernel space. For a linear kernel, the kernel K(X, X') is X·Xᵀ (i.e. np.dot(X, X.T)), so this is an N x N matrix, and hence the number of dual coefficients equals the dimension of y.
There are 3 equations we need to understand:
The first is the standard ridge regression weight estimate, w = (XᵀX + λI)⁻¹Xᵀy.
The second is the partially kernelised version, α = (K + λI)⁻¹y with K = XXᵀ, and the relation linking the two is the third equation, w = Xᵀα.
dual_coef_ returns the α of equation 2. Therefore, to get the weight vector in the 'normal' feature space, rather than the kernel space in which it is returned, you need to compute X.T * Krr.dual_coef_ (i.e. np.dot(X.T, Krr.dual_coef_)).
We can check this is correct because KRR and Ridge Regression are the same if the kernel is linear.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge
rng = np.random.RandomState(0)
X = 5 * rng.rand(100, 1)
y = np.sin(X).ravel()
Krr = KernelRidge(alpha=1.0, kernel='linear', coef0=0)
R = Ridge(alpha=1.0,fit_intercept=False)
Krr.fit(X, y)
R.fit(X, y)
print(np.dot(X.transpose(), Krr.dual_coef_))
print(R.coef_)
Running this, I see the output:
[-0.03997686]
[-0.03997686]
which shows they are equivalent (you have to change the intercept options, as the defaults differ between the two models).
As the degree parameter is ignored (as I mentioned in the comments), the coefficient should be 1x1 in this case, and it is.
If you want to know exactly what a particular model returns, I recommend looking at the source code on github, which I think is the only way to gain a deeper understanding of how this stuff works. https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/kernel_ridge.py
Additionally, for a non-linear kernel, the intuition of the weights can easily be lost, so always start from first principles if you do this.
An illustration of how KernelRidge prediction works (the original image is not reproduced here). Hope it helps someone understand the model.
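A small code sketch of the same idea, assuming a polynomial kernel as in the question (gamma is fixed explicitly here so the manually computed kernel matches the model's):
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import polynomial_kernel

rng = np.random.RandomState(0)
X_train = rng.rand(50, 1)
y_train = np.sin(3 * X_train).ravel()
X_test = rng.rand(5, 1)

krr = KernelRidge(alpha=1.0, kernel='polynomial', degree=4, coef0=1, gamma=1.0)
krr.fit(X_train, y_train)

# A prediction is the kernel between the test points and the training points,
# multiplied by the dual coefficients learned at fit time
K_test = polynomial_kernel(X_test, X_train, degree=4, gamma=1.0, coef0=1)
manual = K_test.dot(krr.dual_coef_)
print(np.allclose(manual, krr.predict(X_test)))   # True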