I have tried to use pd.cut to create a categorical variable from a continuous variable, which I'd then like to include as a dummy variable in a subsequent statsmodels formula-based regression. When I include a categorical variable created this way, I get an error:
TypeError: data type not understood.
A test case is included below.
import numpy as np
import pandas as pd
import statsmodels as sm
import statsmodels.formula.api as smf
df = pd.DataFrame(np.random.randn(6,4))
df.columns = ['A', 'B', 'C', 'D']
df['ttt'] = pd.cut(df['D'], bins=2)
test = smf.ols('A ~ B + ttt', data=df).fit()
I'm sure I've done something obviously wrong. Any help would be appreciated.
I'm not sure exactly where statsmodels stands in terms of support for the new Categorical type in pandas. For the moment, you may have to convert the categorical back to an object type for it to work (please check that the resulting OLS fit is sensible; I don't know the full details of what you're trying to do):
df['ttt_fixed'] = df.ttt.astype(object)  # np.object is deprecated/removed in newer numpy
test = smf.ols('A ~ B + ttt_fixed', data=df).fit()
test.summary()
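If the object conversion still trips up patsy, converting the intervals to plain strings is another workaround (just a sketch, using the same toy frame as above):
# represent each bin by its string label so patsy treats it as an
# ordinary categorical variable
df['ttt_fixed'] = df['ttt'].astype(str)
test = smf.ols('A ~ B + ttt_fixed', data=df).fit()
test.summary()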
I am trying to do a linear regression. With the results I want to multiply each xᵢ by its own estimated coefficient: xᵢ·βᵢ.
However, I apply a number of transformations to the xᵢ first.
For example:
import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
def log_plus_1(x):
    return np.log(x + 1.0)
df = sm.datasets.get_rdataset("Guerry", "HistData").data
df = df[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna()
formule = 'Lottery ~ pow(Literacy, 2) + log_plus_1(Wealth)'
mod = smf.ols(formula=formule, data=df)
res = mod.fit()
res.params
Now I need the transformed values pow(Literacy, 2) and log_plus_1(Wealth). Since they already go into the model, I was hoping to get them back out of it, instead of re-applying the transformations to the original dataset.
In R I would use res$model to get it.
The data is stored as attributes of the model, e.g. the design matrix is mod.exog, the dependent or response variable is mod.endog.
(If I remember the details correctly: the data that patsy returns after creating the transformed design matrix should, in this case, be a pandas DataFrame, stored in mod.data.orig_exog or something like that.)
res.predict automatically handles the transformation, i.e. patsy uses the formula information to transform the data for the explanatory variables in prediction in the same way as the data was transformed in creating the model.
predict only returns the prediction, not the internally transformed exog it was computed from.
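A minimal sketch of pulling the transformed data back out of the fitted model above (attribute names as described; double-check them on your statsmodels version):
# numpy design matrix containing the patsy-transformed columns
X = mod.exog
print(mod.exog_names)  # e.g. ['Intercept', 'pow(Literacy, 2)', 'log_plus_1(Wealth)']
# the same design matrix as a pandas DataFrame (formula-based models only)
X_df = mod.data.orig_exog
# per-observation x_i * beta_i contributions
contributions = X_df * res.params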
I'm getting acquainted with statsmodels so as to shift my more complicated stats completely over to Python. However, I'm being cautious, so I'm cross-checking my results against SPSS, just to make sure I'm not making any obvious blunders. Most of the time there's no difference, but I have one example of a two-way ANOVA that produces very different test statistics in statsmodels and SPSS. (Relevant point: the sample sizes in the ANOVA are mismatched, so ANOVA may not be the appropriate model here.)
I'm selecting my model as follows:
import pandas as pd
import scipy as sp
import numpy as np
import statsmodels
import statsmodels.api as sm
from statsmodels.formula.api import ols
import seaborn as sns
import matplotlib.pyplot as plt
Body = pd.read_csv(filepath)
Body = Body.dropna()
Body_lm = ols('Effect ~ C(Fiction) + C(Condition) + C(Fiction)*C(Condition)', data = Body).fit()
table = sm.stats.anova_lm(Body_lm, typ=2)
The Statsmodels output is as below:
sum_sq df F PR(>F)
C(Fiction) 278.176684 1.0 307.624463 1.682042e-55
C(Condition) 4.294764 1.0 4.749408 2.971278e-02
C(Fiction):C(Condition) 10.776312 1.0 11.917092 5.970123e-04
Residual 520.861599 576.0 NaN NaN
The corresponding SPSS results (attached as a screenshot, not reproduced here) differ substantially.
Can anyone help explain the difference? Is it perhaps the unequal sample sizes being treated differently under the hood? Or am I choosing the wrong model?
Any help appreciated!
You should use sum coding (Sum contrasts) when comparing the means of the variables.
BTW, you don't need to spell out each variable in the interaction term if the * operator is used:
“:” adds a new column to the design matrix with the product of the other two columns.
“*” will also include the individual columns that were multiplied together.
Your model should be:
Body_lm = ols('Effect ~ C(Fiction, Sum)*C(Condition, Sum)', data = Body).fit()
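For what it's worth, SPSS's GLM procedure defaults to Type III sums of squares, so a sketch of the matching call would be (assuming the same Body DataFrame; Type III is only meaningful with the sum-to-zero coding used above):
# refit with sum-to-zero contrasts, then request Type III sums of squares
Body_lm = ols('Effect ~ C(Fiction, Sum)*C(Condition, Sum)', data=Body).fit()
table = sm.stats.anova_lm(Body_lm, typ=3)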
I've got some regressions results from running statsmodels.formula.api.ols. Here's a toy example:
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
example_df = pd.DataFrame(np.random.randn(10, 3))
example_df.columns = ["a", "b", "c"]
fit = smf.ols('a ~ b', example_df).fit()
I'd like to apply the model to column c, but a naive attempt to do so doesn't work:
fit.predict(example_df["c"])
Here's the exception I get:
PatsyError: Error evaluating factor: NameError: name 'b' is not defined
a ~ b
^
I can do something gross and create a new, temporary DataFrame in which I rename the column of interest:
example_df2 = pd.DataFrame(example_df["c"])
example_df2.columns = ["b"]
fit.predict(example_df2)
Is there a cleaner way to do this? (short of switching to statsmodels.api instead of statsmodels.formula.api)
You can use a dictionary:
>>> fit.predict({"b": example_df["c"]})
array([ 0.84770672, -0.35968269, 1.19592387, -0.77487812, -0.98805215,
0.90584753, -0.15258093, 1.53721494, -0.26973941, 1.23996892])
or build the design matrix yourself as a numpy array (note this needs import statsmodels.api as sm for add_constant, and it gets much more complicated if there are categorical explanatory variables):
>>> fit.predict(sm.add_constant(example_df["c"].values), transform=False)
array([ 0.84770672, -0.35968269, 1.19592387, -0.77487812, -0.98805215,
0.90584753, -0.15258093, 1.53721494, -0.26973941, 1.23996892])
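Alternatively, an inline version of the rename workaround from the question (just a sketch; it keeps the formula machinery intact):
>>> fit.predict(example_df[["c"]].rename(columns={"c": "b"}))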
If you replace your fit definition with this line:
fit = smf.ols('example_df.a ~ example_df.b', example_df).fit()
it should work:
fit.predict(example_df["c"])
array([-0.52664491, -0.53174346, -0.52172484, -0.52819856, -0.5253607 ,
-0.52391618, -0.52800043, -0.53350634, -0.52362988, -0.52520823])
I'm trying to use patsy to build endogenous and exogenous data matrices for use in binary logistic regression. I'm having problems setting the reference level on the endogenous side.
The problem with the following code is that the endogenous matrix ends up with two columns, where binary logistic regression needs only one.
import pandas as pd
import statsmodels.api as sm
import patsy
# data:
url = 'http://vincentarelbundock.github.io/Rdatasets/csv/datasets/iris.csv'
df = pd.read_csv(url)
df = df.iloc[:, 1:]  # drop the unnamed row-index column
df = df.loc[ ( df.Species == 'setosa') | ( df.Species == 'versicolor' ) ,]
df.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Species' ]
y, X = patsy.dmatrices("C(Species,Treatment('versicolor')) ~ Sepal_Length",data = df, return_type = 'dataframe')
The shape of y is (100, 2), but I only need one column. How do I get patsy to output the endogenous side in a form I can use directly in binary logistic regression?
Hmm, my advice would be to slice into y after you do the above. Patsy isn't really designed with LHS variables in mind. Statsmodels should work in this case (currently it doesn't, but that's a bug in statsmodels IMO; if you file a bug report on github, I can look into it).
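A sketch of the slicing (the exact column names depend on the patsy version, so check y.columns first; statsmodels.api is already imported as sm above):
print(y.columns)         # one indicator column per level of Species
y_binary = y.iloc[:, 0]  # keep the column for the level you model as a "success"
res = sm.Logit(y_binary, X).fit()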
FYI, you can use
import statsmodels.api as sm
dta = sm.datasets.get_rdataset('iris', cache=True)
as a shortcut to get at the Rdatasets data.
I'm trying to run a multiple OLS regression using statsmodels and a pandas dataframe. There are missing values in different columns for different rows, and I keep getting the error message:
ValueError: array must not contain infs or NaNs
I saw this SO question, which is similar but doesn't exactly answer my question: statsmodel.api.Logit: valueerror array must not contain infs or nans
What I would like to do is run the regression and ignore all rows where there are missing variables for the variables I am using in this regression. Right now I have:
import pandas as pd
import numpy as np
import statsmodels.formula.api as sm
df = pd.read_csv('cl_030314.csv')
results = sm.ols(formula = "da ~ cfo + rm_proxy + cpi + year", data=df).fit()
I want something like missing = "drop".
Any suggestions would be greatly appreciated. Thanks so much.
You answered your own question. Just pass
missing = 'drop'
to ols
import statsmodels.formula.api as smf
...
results = smf.ols(formula="da ~ cfo + rm_proxy + cpi + year",
                  data=df, missing='drop').fit()
If this doesn't work then it's a bug and please report it with a MWE on github.
FYI, note the import above. Not everything is available in the formula.api namespace, so you should keep it separate from statsmodels.api. Or just use
import statsmodels.api as sm
sm.formula.ols(...)
The answer from jseabold works very well, but it may not be enough if you want to do further computation on the predicted and true values, e.g. with a function like mean_squared_error. In that case, it may be better to get rid of the NaNs up front:
df = pd.read_csv('cl_030314.csv')
df_cleaned = df.dropna()
results = sm.ols(formula = "da ~ cfo + rm_proxy + cpi + year", data=df_cleaned).fit()
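If the DataFrame has other columns with their own missing values, a variant (just a sketch, assuming the column names from the formula) drops rows only when one of the modeled variables is missing:
# drop rows only when a variable used in the regression is NaN
model_cols = ['da', 'cfo', 'rm_proxy', 'cpi', 'year']
df_cleaned = df.dropna(subset=model_cols)
results = sm.ols(formula="da ~ cfo + rm_proxy + cpi + year", data=df_cleaned).fit()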