I am trying to fit some models (spatial interaction models) following code that is provided in R. I have been able to get some of the code to work using statsmodels in a Python framework, but some of the models do not match at all. I believe the R and Python code should give identical results. Does anyone see any differences, or are there some fundamental differences that might be throwing things off? The R code is the original code, which matches the numbers given in a tutorial (found here: http://www.bartlett.ucl.ac.uk/casa/pdf/paper181).
R sample Code:
library(mosaic)
Data = fetchData('http://dl.dropbox.com/u/8649795/AT_Austria.csv')
Model = glm(Data~Origin+Destination+Dij+offset(log(Offset)), family=poisson(link="log"), data = Data)
cor = cor(Data$Data, Model$fitted, method = "pearson", use = "complete")
rsquared = cor * cor
rsquared
R output:
> Model = glm(Data~Origin+Destination+Dij+offset(log(Offset)), family=poisson(link="log"), data = Data)
Warning messages:
1: glm.fit: fitted rates numerically 0 occurred
2: glm.fit: fitted rates numerically 0 occurred
> cor = cor(Data$Data, Model$fitted, method = "pearson", use = "complete")
> rsquared = cor * cor
> rsquared
[1] 0.9753279
Python Code:
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm
from scipy.stats.stats import pearsonr
Data= pd.DataFrame(pd.read_csv('http://dl.dropbox.com/u/8649795/AT_Austria.csv'))
Model = smf.glm('Data~Origin+Destination+Dij', data=Data, offset=np.log(Data['Offset']), family=sm.families.Poisson(link=sm.families.links.log)).fit()
cor = pearsonr(Model.fittedvalues, Data["Data"])[0]
print "R-squared for doubly-constrained model is: " + str(cor*cor)
Python Output:
R-squared for doubly-constrained model is: 0.104758481123
It looks like GLM has convergence problems here in statsmodels. Maybe in R too, but R only gives these warnings:
Warning messages:
1: glm.fit: fitted rates numerically 0 occurred
2: glm.fit: fitted rates numerically 0 occurred
That could mean something like perfect separation in a Logit/Probit context; I'd have to think about what it means for a Poisson model.
R is doing a better, if subtle, job of telling you that something may be wrong in your fitting. If you look at the fitted log-likelihood in statsmodels, for instance, it's -1.12e27. That should be a clue right there that something is off.
Using the Poisson model directly (I always prefer maximum likelihood to GLM when possible), I can replicate the R results, though I get a convergence warning. Tellingly, again, the default Newton-Raphson solver fails, so I use bfgs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm
from scipy.stats.stats import pearsonr
data= pd.DataFrame(pd.read_csv('http://dl.dropbox.com/u/8649795/AT_Austria.csv'))
mod = smf.poisson('Data~Origin+Destination+Dij', data=data, offset=np.log(data['Offset'])).fit(method='bfgs')
print mod.mle_retvals['converged']
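To check against the R number, here is a minimal sketch of the same correlation-based R-squared, continuing from the code above (whether it reproduces 0.975 depends on the bfgs fit actually converging):
fitted = mod.predict()  # predicted counts; should include the log(Offset) term used in fitting
cor = pearsonr(fitted, data['Data'])[0]
print('R-squared for doubly-constrained model is: ' + str(cor * cor))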
Related
I was asked to write a program for linear regression with the following steps:
1. Load the R data set mtcars as a pandas dataframe.
2. Build another linear regression model by considering the log of the independent variable wt, and the log of the dependent variable mpg.
3. Fit the model with the data, and display the R-squared value.
I am a beginner at statistics with Python.
I tried getting the log values without converting to a new DataFrame, but that gave the error "TypeError: 'OLS' object is not subscriptable".
import statsmodels.api as sa
import statsmodels.formula.api as sfa
import pandas as pd
import numpy as np
cars = sa.datasets.get_rdataset("mtcars")
cars_data = cars.data
lin_mod1 = sfa.ols("wt~mpg",cars_data)
lin_mod2 = pd.DataFrame(lin_mod1)
lin_mod2['wt'] = np.log(lin_mod2['wt'])
lin_mod2['mpg'] = np.log(lin_mod2['mpg'])
lin_res1 = lin_mod2.fit()
print(lin_res1.summary())
The expected result is the regression summary table, but the actual output is an error:
ValueError: DataFrame constructor not properly called!
This might work for you.
import statsmodels.api as sm
import numpy as np
mtcars = sm.datasets.get_rdataset('mtcars')
mtcars_data = mtcars.data
liner_model = sm.formula.ols('np.log(wt) ~ np.log(mpg)',mtcars_data)
liner_result = liner_model.fit()
print(liner_result.rsquared)
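If you also want the full regression table from the assignment, not just the R-squared, the same fitted result should provide it via its summary method:
print(liner_result.summary())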
I broke your code down and ran it line by line.
The problem is here:
lin_mod1 = sfa.ols("wt~mpg",cars_data)
If you try to print it, the output is:
<statsmodels.regression.linear_model.OLS object at 0x7f1c64273eb8>
And it can't be interpreted correctly to build a data frame.
The solution is to get the result of the first linear model into a table and then finally put it into a data frame:
results = lin_mod1.fit()
results_summary = results.summary()
If you print results_summary you will see that the variables are Intercept and mpg. I don't know if it's a conceptual error or what, since it's not the pair "wt"-"mpg".
# summary as a html table
results_as_html = results_summary.tables[1].as_html()
# dataframe from the html table
lin_mod2 = pd.read_html(results_as_html, header=0, index_col=0)[0]
The print of lin_mod2 is:
coef std err t P>|t| [0.025 0.975]
Intercept 6.0473 0.309 19.590 0.0 5.417 6.678
mpg -0.1409 0.015 -9.559 0.0 -0.171 -0.111
Here is the solution:
import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
cars=sm.datasets.get_rdataset("mtcars")
cars_data=cars.data
lin_mod1=smf.ols('np.log(wt)~np.log(mpg)',cars_data)
lin_model_fit=lin_mod1.fit()
print(lin_model_fit.summary())
Change:
lin_mod2 = pd.DataFrame(lin_mod1)
To:
lin_mod2 = pd.DataFrame(data = lin_mod1)
I would like to perform a simple linear regression using statsmodels. I've tried several different methods by now, but I just can't get it to work. The code that I have constructed now doesn't give me any errors, but it also doesn't show me the result.
I am trying to create a model for the variable "Direction", which takes the value 0 if the return for the corresponding date was negative and 1 if it was positive. The explanatory variables are the (5) lags of the returns. The dataframe df13 contains the lags and also the direction for each observed date. I tried this code and, as I mentioned, it doesn't give an error but only says:
Optimization terminated successfully.
Current function value: 0.682314
Iterations 5
However, I would like to see the typical table with all the beta values, their significance, etc.
Also, what would you say: since Direction is a binary variable, might it be better to use a logit instead of a linear model? In the assignment, however, it appeared as a linear model.
And lastly, I am sorry it's not displayed here correctly, but I don't know how to format it as code or how to insert my dataframe.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import os
import itertools
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
...
X = df13[['Lag1', 'Lag2', 'Lag3', 'Lag4', 'Lag5']]
Y = df13['Direction']
X = sm.add_constant(X)
model = sm.Logit(Y.astype(float), X.astype(float)).fit()
predictions = model.predict(X)
print_model = model.summary
print(print_model)
Edit: I'm sure it has to be a logit regression, so I updated that part.
I don't know if this is unintentional, but it looks like you need to define X and Y separately:
X = df13[['Lag1', 'Lag2', 'Lag3', 'Lag4', 'Lag5']]
Y = df13['Direction']
Secondly, I'm not familiar with statsmodels, but I would try converting your dataframes to numpy arrays. You can do this with:
Xnum = X.to_numpy()
Ynum = Y.to_numpy()
And try passing those to the regressors.
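As a rough sketch of that idea (assuming the df13 from the question, with the Lag1-Lag5 and Direction columns), you could pass the numpy arrays to Logit and call summary() as a method to get the coefficient table:
Xnum = sm.add_constant(X.to_numpy())
Ynum = Y.to_numpy()
logit_model = sm.Logit(Ynum, Xnum).fit()
print(logit_model.summary())  # note the parentheses: summary is a method, not an attribute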
I'm trying to replicate code from R that estimates a random intercept model. The R code is:
fit=lmer(resid~-1+(1|groupid),data=df)
I'm using the lmer command of the lme4 package to estimate random intercepts for the variable resid for observations in different groups (defined by groupid). There is no 'fixed effects' part, hence no variable before the (1|groupid). Moreover, I do not want a constant estimated (that is what the -1 does), so that I get an intercept for each group.
Not sure how to do similar estimation in Python. I tried something like:
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
np.random.seed(12345)
df = pd.DataFrame(np.random.randn(25, 4), columns=list('ABCD'))
df['groupid'] = [1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,4,4,4,4,4,5,5,5,5,5]
df['groupid'] = df['groupid'].astype('category')
###Random intercepts models
md = smf.mixedlm('A~B-1',data=df,groups=df['groupid'])
mdf = md.fit()
print(mdf.random_effects)
A is resid from the earlier example, while groupid is the same.
1) I am not sure whether mdf.random_effects are the random intercepts I am looking for.
2) I cannot remove the variable B, which I understand is the fixed-effects part. If I try:
md = smf.mixedlm('A~-1',data=df,groups=df['groupid'])
I get an error that "Arrays cannot be empty".
Just trying to estimate the exact same model as in the R code. Any advice will be appreciated.
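For what it's worth, here is a sketch of the closest I know how to get in statsmodels (this is an assumption on my part, not a confirmed equivalent of the lmer call, since MixedLM seems to require a non-empty fixed-effects design): keep only a fixed intercept and read each group's intercept as the fixed intercept plus its random effect:
md = smf.mixedlm('A ~ 1', data=df, groups=df['groupid'])
mdf = md.fit()
print(mdf.params)           # the fixed (overall) intercept
print(mdf.random_effects)   # per-group deviations from that intercept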
I have fitted a logistic regression model to some data, and everything is working great. I need to calculate the Wald statistic, which is a function of the model result.
My problem is that I do not understand, from the documentation, what the Wald test requires as input. Specifically, what is the R matrix and how is it generated?
I tried simply inputting the data I used to train and test the model as the R matrix, but I do not think this is correct. The documentation suggests examining the examples, but none give an example of this test. I also asked the same question on Cross Validated but got shot down.
Kind regards
http://statsmodels.sourceforge.net/0.6.0/generated/statsmodels.discrete.discrete_model.LogitResults.wald_test.html#statsmodels.discrete.discrete_model.LogitResults.wald_test
The Wald test is used to test whether a predictor is significant or not, and has the form:
W = (beta_hat - beta_0) / SE(beta_hat) ~ N(0,1)
So somehow you'll want to input the predictors into the test. Judging from the examples for t_test and f_test, it may be simpler to input a string or tuple to indicate what you are testing.
Here is their example using a string for f_test:
from statsmodels.datasets import longley
from statsmodels.formula.api import ols
dta = longley.load_pandas().data
formula = 'TOTEMP ~ GNPDEFL + GNP + UNEMP + ARMED + POP + YEAR'
results = ols(formula, dta).fit()
hypotheses = '(GNPDEFL = GNP), (UNEMP = 2), (YEAR/1829 = 1)'
f_test = results.f_test(hypotheses)
print(f_test)
And here is their example using an array as the R matrix for t_test:
import numpy as np
import statsmodels.api as sm
data = sm.datasets.longley.load()
data.exog = sm.add_constant(data.exog)
results = sm.OLS(data.endog, data.exog).fit()
r = np.zeros_like(results.params)
r[5:] = [1,-1]
T_test = results.t_test(r)
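For wald_test itself, my understanding from the same docs (an assumption rather than something tested against your model) is that it accepts the same restriction formats as t_test and f_test, so either style can be reused, for example with the array restriction just defined:
wald_result = results.wald_test(r)
print(wald_result)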
If you're still struggling to get the Wald test to work, include your code and I can try to help make it work.
What is the Python equivalent to R predict function for linear models?
I'm sure there is something in scipy that can help here but is there an equivalent function?
https://stat.ethz.ch/R-manual/R-patched/library/stats/html/predict.lm.html
Scipy has plenty of regression tools with predict methods, though IMO Pandas is the Python library that comes closest to replicating R's functionality, complete with predict methods. The following snippets in R and Python demonstrate the similarities.
R linear regression:
data(trees)
linmodel <- lm(Volume~., data = trees[1:20,])
linpred <- predict(linmodel, trees[21:31,])
plot(linpred, trees$Volume[21:31])
The same data set in Python using pandas ols:
import pandas as pd
from pandas.stats.api import ols
import matplotlib.pyplot as plt
trees = pd.read_csv('trees.csv')
linmodel = ols(y = trees['Volume'][0:20], x = trees[['Girth', 'Height']][0:20])
linpred = linmodel.predict(x = trees[['Girth', 'Height']][20:31])
plt.scatter(linpred,trees['Volume'][20:31])
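Note that pandas.stats.api.ols has since been removed from pandas; a rough sketch of the same fit/predict workflow using statsmodels instead (assuming the same trees.csv with Girth, Height and Volume columns):
import pandas as pd
import statsmodels.formula.api as smf
trees = pd.read_csv('trees.csv')
# fit on the first 20 rows, then predict the rest, analogous to R's predict(linmodel, newdata)
linmodel = smf.ols('Volume ~ Girth + Height', data=trees[0:20]).fit()
linpred = linmodel.predict(trees[20:31])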