I have a heatmap that I created from Pandas in this way:
tukey = tukey.set_index('index')
fix,ax = plt.subplots(figsize=(12,6))
ax.set_title(str(date)+' '+ str(hour)+':'+'00',fontsize=14)
heatmap_args = {'linewidths': 0.35, 'linecolor': '0.5', 'clip_on': False, 'square': True, 'cbar_ax_bbox': [0.75, 0.35, 0.04, 0.3]}
sp.sign_plot(tukey, **heatmap_args)
I have tried to do this with seaborn but I haven't gotten the desired output:
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(tukey, dtype=bool))
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(12, 6))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(tukey, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
As seen, it still shows squares where they are supposed to be masked, and obviously the colorbar is different.
My question is: is there any way to make the plot triangular (showing only one side of the diagonal) without using seaborn, or at least to get rid of the repeated half?
Edit: sample of my dataframe (the tukey):
>>> 1_a 1_b 1_c 1_d 1_e 1_f
index
1_a 1.00 0.900 0.75 0.736 0.900 0.400
1_b 0.9000 1.000 0.72 0.715 0.900 0.508
1_c 0.756 0.342 1.000 0.005 0.124 0.034
1_d 0.736 0.715 0.900 1.000 0.081 0.030
1_e 0.900 0.900 0.804 0.793 1.000 0.475
1_f 0.400 0.508 0.036 0.030 0.475 1.000
*I might have typos; the two sides of the diagonal are supposed to be equal (the matrix should be symmetric).
edit:
imports:
import scikit_posthocs as sp
import pandas as pd
import numpy as np
import statsmodels.api as sm
import scipy.stats as stats
from statsmodels.formula.api import ols
import matplotlib.pyplot as plt
import seaborn as sns
scikit_posthocs' sign_plot() seems to create a QuadMesh (as does sns.heatmap). Setting an edge color to such a mesh will show horizontal and vertical lines for the full width and height of the mesh. To make the edges invisible in the "empty" region, they can be colored the same as the background (for example white). Making individual cells invisible can be done by setting their values to NaN such as in the code below.
Removing a column and a row (e.g. tukey.drop('1_f', axis=1, inplace=True) and
tukey.drop('1_a', axis=0, inplace=True)) doesn't help to make the plot smaller, because sign_plot adds them back in automatically.
import matplotlib.pyplot as plt
import scikit_posthocs as sp
import pandas as pd
import numpy as np
from io import StringIO
data_str = ''' 1_a 1_b 1_c 1_d 1_e 1_f
1_a 1.00 0.900 0.75 0.736 0.900 0.400
1_b 0.9000 1.000 0.72 0.715 0.900 0.508
1_c 0.756 0.342 1.000 0.005 0.124 0.034
1_d 0.736 0.715 0.900 1.000 0.081 0.030
1_e 0.900 0.900 0.804 0.793 1.000 0.475
1_f 0.400 0.508 0.036 0.030 0.475 1.000'''
tukey = pd.read_csv(StringIO(data_str), delim_whitespace=True)
cols = tukey.columns
for i in range(len(cols)):
    for j in range(i, len(cols)):
        tukey.iloc[i, j] = np.nan
fix, ax = plt.subplots(figsize=(12, 6))
heatmap_args = {'linewidths': 0.35, 'linecolor': 'white', 'clip_on': False, 'square': True,
'cbar_ax_bbox': [0.75, 0.35, 0.04, 0.3]}
sp.sign_plot(tukey, **heatmap_args)
plt.show()
I am trying to extend the ideas of item response theory to multiple responses. Consider a marketing survey that asks customers, "What's the deciding factor in whether or not you purchase product X?", where the answers are {0: price, 1: durability, 2: ease-of-use}.
Here is some synthetic data (rows are customers, columns are products, each cell is the class response.)
import numpy as np

responses = np.array([
[0,1,2,1,0],
[1,1,1,1,1],
[0,0,2,2,1],
[1,1,2,2,1],
[1,1,0,0,0]
])
students = 5
questions = 5
categories = 3
import pymc3 as pm
import theano.tensor as tt
import arviz as az

with pm.Model() as model:
    z_student = pm.Normal("z_student", mu=0, sigma=1, shape=(students, categories))
    z_question = pm.Normal("z_question", mu=0, sigma=1, shape=(categories, questions))
    # Transformed parameter
    theta = pm.Deterministic("theta", tt.nnet.softmax(z_student - z_question))
    # Likelihood
    kij = pm.Categorical("kij", p=theta, observed=responses)
    trace = pm.sample(chains=4)

az.plot_trace(trace, var_names=["z_student", "z_question"], compact=False);
This code produces the following error: ValueError: Input dimension mis-match. (input[0].shape[0] = 5, input[1].shape[0] = 3) (z_student has shape (5, 3) and z_question has shape (3, 5), so the elementwise subtraction cannot broadcast).
However, when I change the theta line to theta = pm.Deterministic("theta", tt.nnet.softmax(z_student - z_question.transpose())), the sampler doesn't fail immediately; rather, it samples the wrong thing.
az.summary(trace)
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
z_student[0,0] 0.150 0.893 -1.620 1.752 0.012 0.013 5789.0 2327.0 5771.0 2991.0 1.0
z_student[0,1] 0.393 0.879 -1.319 1.980 0.012 0.012 5150.0 2610.0 5153.0 3195.0 1.0
z_student[0,2] -0.591 0.915 -2.254 1.108 0.011 0.012 6408.0 2737.0 6415.0 2830.0 1.0
z_student[1,0] -0.064 0.860 -1.676 1.538 0.011 0.014 5748.0 1942.0 5747.0 2850.0 1.0
z_student[1,1] 0.602 0.864 -0.982 2.185 0.012 0.011 4921.0 3028.0 4920.0 3269.0 1.0
z_student[1,2] -0.548 0.906 -2.218 1.137 0.012 0.012 6076.0 2870.0 6083.0 3410.0 1.0
z_student[2,0] -0.166 0.907 -1.974 1.450 0.013 0.014 4681.0 2121.0 4692.0 3108.0 1.0
z_student[2,1] -0.188 0.875 -1.776 1.472 0.011 0.014 5923.0 2073.0 5945.0 3333.0 1.0
z_student[2,2] 0.344 0.865 -1.288 1.951 0.012 0.012 4828.0 2750.0 4822.0 3039.0 1.0
z_student[3,0] -0.212 0.892 -1.980 1.395 0.011 0.013 6019.0 2504.0 5996.0 3391.0 1.0
z_student[3,1] 0.097 0.876 -1.573 1.713 0.012 0.013 5304.0 2252.0 5332.0 2971.0 1.0
z_student[3,2] 0.096 0.851 -1.583 1.645 0.011 0.012 5554.0 2678.0 5543.0 3288.0 1.0
z_student[4,0] 0.160 0.881 -1.367 1.947 0.012 0.013 5421.0 2189.0 5413.0 2927.0 1.0
z_student[4,1] 0.414 0.863 -1.255 2.026 0.012 0.012 4900.0 2548.0 4897.0 3248.0 1.0
z_student[4,2] -0.558 0.901 -2.266 1.130 0.011 0.012 6551.0 2728.0 6582.0 3142.0 1.0
z_question[0,0] -0.179 0.883 -1.795 1.488 0.011 0.015 6317.0 1769.0 6315.0 3389.0 1.0
z_question[0,1] 0.107 0.886 -1.511 1.807 0.012 0.013 5236.0 2431.0 5209.0 3503.0 1.0
z_question[0,2] 0.164 0.878 -1.450 1.834 0.012 0.013 5131.0 2248.0 5106.0 3102.0 1.0
z_question[0,3] 0.186 0.904 -1.450 1.882 0.011 0.014 6228.0 2175.0 6219.0 3335.0 1.0
z_question[0,4] -0.187 0.877 -1.790 1.508 0.011 0.014 5819.0 2089.0 5834.0 3198.0 1.0
z_question[1,0] -0.389 0.849 -1.948 1.219 0.012 0.012 4726.0 2494.0 4713.0 3146.0 1.0
z_question[1,1] -0.600 0.858 -2.249 0.946 0.012 0.011 5093.0 3247.0 5116.0 3312.0 1.0
z_question[1,2] 0.179 0.868 -1.520 1.763 0.012 0.012 5204.0 2514.0 5201.0 3418.0 1.0
z_question[1,3] -0.103 0.862 -1.683 1.561 0.013 0.013 4608.0 2212.0 4615.0 3163.0 1.0
z_question[1,4] -0.381 0.866 -2.047 1.147 0.011 0.012 6181.0 2735.0 6188.0 3038.0 1.0
z_question[2,0] 0.565 0.908 -1.125 2.337 0.012 0.012 6022.0 2879.0 6045.0 3173.0 1.0
z_question[2,1] 0.536 0.923 -1.192 2.241 0.012 0.013 6041.0 2476.0 6046.0 3059.0 1.0
z_question[2,2] -0.325 0.856 -1.918 1.289 0.012 0.012 5429.0 2741.0 5418.0 3004.0 1.0
z_question[2,3] -0.107 0.881 -1.953 1.363 0.012 0.012 5834.0 2545.0 5841.0 3332.0 1.0
z_question[2,4] 0.576 0.910 -1.202 2.253 0.011 0.013 6385.0 2606.0 6371.0 2905.0 1.0
theta[0,0] 0.360 0.173 0.072 0.685 0.003 0.002 4309.0 3774.0 4256.0 2846.0 1.0
theta[0,1] 0.528 0.182 0.208 0.857 0.003 0.002 4949.0 4563.0 4908.0 3050.0 1.0
theta[0,2] 0.113 0.104 0.001 0.304 0.001 0.001 6095.0 4045.0 7146.0 2780.0 1.0
theta[1,0] 0.216 0.144 0.007 0.477 0.002 0.002 6149.0 4576.0 6493.0 3116.0 1.0
theta[1,1] 0.678 0.168 0.381 0.962 0.002 0.002 5954.0 5954.0 6180.0 3320.0 1.0
theta[1,2] 0.107 0.100 0.000 0.294 0.001 0.001 6321.0 3863.0 7623.0 3252.0 1.0
theta[2,0] 0.234 0.150 0.010 0.509 0.002 0.002 6154.0 4352.0 6684.0 3252.0 1.0
theta[2,1] 0.230 0.152 0.005 0.506 0.002 0.001 6885.0 5424.0 6459.0 2923.0 1.0
theta[2,2] 0.536 0.186 0.194 0.858 0.002 0.002 5595.0 5250.0 5622.0 2805.0 1.0
theta[3,0] 0.239 0.157 0.007 0.526 0.002 0.002 5843.0 4627.0 5789.0 2853.0 1.0
theta[3,1] 0.381 0.178 0.065 0.703 0.003 0.002 4927.0 4377.0 5009.0 3315.0 1.0
theta[3,2] 0.380 0.174 0.069 0.692 0.003 0.002 4653.0 4176.0 4624.0 2562.0 1.0
theta[4,0] 0.361 0.175 0.057 0.668 0.002 0.002 5185.0 4637.0 5269.0 2985.0 1.0
theta[4,1] 0.527 0.184 0.186 0.852 0.003 0.002 4614.0 4445.0 4668.0 2497.0 1.0
theta[4,2] 0.111 0.100 0.002 0.303 0.001 0.001 6159.0 3978.0 7520.0 3473.0 1.0
Of note, please look at the theta values that were learned. Their names run Theta[0,0], ..., Theta[0,2], ..., Theta[4,2]. So, in this first example, what PyMC3 has learned is the strength of the relation between (z_student[0] - z_question[0]) and class/response 0.
This is not the effect I wish to accomplish. I want to learn a 3D tensor accounting for every possible {student, question, category} combination; there should be 75 thetas (5 students × 5 questions × 3 categories), not 15, where Theta[0,0,0] refers to the learned value for {student_0, question_0, response_0} (a NumPy sketch of the intended shape follows below). However, my code is currently not accomplishing this.
Any ideas?
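For illustration, here is a plain-NumPy sketch of the broadcasting I have in mind. It is only meant to show the target shape, not working PyMC3/Theano code, and it assumes z_question is laid out as (questions, categories), i.e. the transpose of the model above:
import numpy as np

students, questions, categories = 5, 5, 3
rng = np.random.default_rng(0)
z_student = rng.normal(size=(students, categories))    # (5, 3)
z_question = rng.normal(size=(questions, categories))  # (5, 3): one row per question

# Insert singleton axes so the subtraction broadcasts to (students, questions, categories)
logits = z_student[:, None, :] - z_question[None, :, :]

# Softmax over the category axis: each (student, question) pair gets a
# probability vector over the 3 response categories
theta = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(theta.shape)  # (5, 5, 3) -> 75 values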
Edit: More recently, I've built a function in Theano to demonstrate my goal:
responses = np.array([
[0,1,2,2,2],
[0,1,2,1,1],
[0,1,2,0,0],
[0,1,2,0,1],
[0,1,2,1,0]
])
students = 5
questions = 5
categories = 3
import theano
import theano.tensor as tensor

a = tensor.matrix()
b = tensor.matrix()
elem_sub = a[0,0] - b[0,0], a[0,1] - b[1,0], a[0,2] - b[2,0]
function = theano.function([a,b], elem_sub)
with pm.Model() as model:
    z_student = pm.Normal("student_dim1", mu=0, sigma=1, shape=(students, categories))
    z_question = pm.Normal("question_dim1", mu=0, sigma=1, shape=(categories, questions))
    # Transformed parameter
    theta = pm.Deterministic("theta", tt.nnet.softmax(function(z_student, z_question)))
    # Likelihood
    kij = pm.Categorical("kij", p=theta, observed=responses)
However, the following error is triggered:
TypeError: Bad input argument with name "z_student" to theano function with name "<ipython-input-2-2a16f255dca1>:23" at index 0 (0-based).
Backtrace when that variable is created:
.
.
.
Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?
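For context on that error: theano.function compiles a callable that expects concrete NumPy arrays, so it cannot be applied to symbolic PyMC3 random variables; a symbolic expression has to be built directly from the tensors instead. A minimal sketch of the difference (illustrative only, not the full model):
import numpy as np
import theano
import theano.tensor as tensor

a = tensor.matrix()
b = tensor.matrix()
diff = a - b                              # symbolic expression: can be reused inside a PyMC3 model
compiled = theano.function([a, b], diff)  # compiled function: needs concrete numeric arrays

print(compiled(np.zeros((2, 2)), np.ones((2, 2))))  # works: numeric inputs
# compiled(z_student, z_question)                   # fails with the TypeError above: symbolic inputs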
I have code like this
m2=smf.ols(formula='demand~year+C(months)+year*C(months)',data=df).fit()
m2.summary()
I have a dataframe with three columns and 144 rows: demand, year (2000-2011), and months (1-12). Now I want to get predicted values using the interaction between year and month as a predictor of demand (month is treated as a categorical variable here). What should I do?
m2.predict( #what should I enter here?)
Here is the summary of the fitted linear regression, in case it is helpful:
OLS Regression Results
Dep. Variable: demand R-squared: 0.985
Model: OLS Adj. R-squared: 0.982
Method: Least Squares F-statistic: 343.4
Date: Thu, 08 Oct 2020 Prob (F-statistic): 2.78e-98
Time: 00:38:14 Log-Likelihood: -590.64
No. Observations: 144 AIC: 1229.
Df Residuals: 120 BIC: 1301.
Df Model: 23
Covariance Type: nonrobust
coef std err t P>|t| [0.025 0.975]
Intercept -5.548e+04 2686.757 -20.651 0.000 -6.08e+04 -5.02e+04
C(months)[T.2] 6521.6434 3799.648 1.716 0.089 -1001.396 1.4e+04
C(months)[T.3] 217.7471 3799.648 0.057 0.954 -7305.292 7740.786
C(months)[T.4] -3200.2960 3799.648 -0.842 0.401 -1.07e+04 4322.743
C(months)[T.5] -7465.9988 3799.648 -1.965 0.052 -1.5e+04 57.040
C(months)[T.6] -1.832e+04 3799.648 -4.822 0.000 -2.58e+04 -1.08e+04
C(months)[T.7] -3.072e+04 3799.648 -8.086 0.000 -3.82e+04 -2.32e+04
C(months)[T.8] -3.013e+04 3799.648 -7.929 0.000 -3.77e+04 -2.26e+04
C(months)[T.9] -1.265e+04 3799.648 -3.328 0.001 -2.02e+04 -5122.469
C(months)[T.10] -5374.5897 3799.648 -1.414 0.160 -1.29e+04 2148.449
C(months)[T.11] 3139.5781 3799.648 0.826 0.410 -4383.461 1.07e+04
C(months)[T.12] -1122.9114 3799.648 -0.296 0.768 -8645.950 6400.127
year 27.7867 1.340 20.741 0.000 25.134 30.439
year:C(months)[T.2] -3.2552 1.895 -1.718 0.088 -7.006 0.496
year:C(months)[T.3] -0.0944 1.895 -0.050 0.960 -3.846 3.657
year:C(months)[T.4] 1.6084 1.895 0.849 0.398 -2.143 5.360
year:C(months)[T.5] 3.7378 1.895 1.973 0.051 -0.013 7.489
year:C(months)[T.6] 9.1713 1.895 4.841 0.000 5.420 12.923
year:C(months)[T.7] 15.3741 1.895 8.115 0.000 11.623 19.125
year:C(months)[T.8] 15.0769 1.895 7.958 0.000 11.326 18.828
year:C(months)[T.9] 6.3357 1.895 3.344 0.001 2.584 10.087
year:C(months)[T.10] 2.6923 1.895 1.421 0.158 -1.059 6.444
year:C(months)[T.11] -1.5699 1.895 -0.829 0.409 -5.321 2.181
year:C(months)[T.12] 0.5699 1.895 0.301 0.764 -3.181 4.321
m2.predict(df.loc[:,['year', 'months']])
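A minimal sketch of out-of-sample prediction (assuming the dataframe columns are really named year and months, as in the formula): predict() on a formula-based model accepts a DataFrame with the original column names, and Patsy rebuilds C(months) and the interaction term automatically.
import pandas as pd

# Hypothetical new data: all twelve months of a year outside the training range
new_data = pd.DataFrame({'year': [2012] * 12, 'months': list(range(1, 13))})
demand_hat = m2.predict(new_data)
print(demand_hat)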
In the code below, the 'correct' R2 value for sigma = 1 is 0.33, which I get when I run it on my work computer; this has been confirmed on multiple other computers running my exact code. However, on my personal computer I get R2 = 0.119.
Only my personal computer produces this strange 0.119 result (even running the 'solution' code produces 0.119). I have tried multiple clean installs of Anaconda to no avail.
The only thing I can think of is that maybe my 'clean' installs aren't clean enough. I have tried a few methods of fully deleting Anaconda and Python; does anyone have a robust method for this?
x_peak
[2688.126327 2692.813829 2697.501331 2702.188833 2706.876334 2711.563836
2716.251338 2720.93884 2725.626341 2730.313843 2735.001345 2739.688846
2744.376348 2749.06385 2753.751352 2758.438853 2763.126355 2767.813857
2772.501359 2777.18886 2781.876362 2786.563864 2791.251366 2795.938867
2800.626369 2805.313871 2810.001373 2814.688874 2819.376376 2824.063878
2828.75138 2833.438881 2838.126383 2842.813885 2847.501387 2852.188888
2856.87639 2861.563892 2866.251394 2870.938895 2875.626397 2880.313899
2885.0014 2889.688902 2894.376404 2899.063906 2903.751407 2908.438909
2913.126411 2917.813913 2922.501414 2927.188916 2931.876418 2936.56392
2941.251421 2945.938923 2950.626425 2955.313927 2960.001428 2964.68893
2969.376432 2974.063934 2978.751435 2983.438937 2988.126439 2992.813941
2997.501442 3002.188944 3006.876446 3011.563948 3016.251449 3020.938951
3025.626453 3030.313954 3035.001456 3039.688958 3044.37646 3049.063961
3053.751463 3058.438965 3063.126467 3067.813968 3072.50147 3077.188972
3081.876474 3086.563975 3091.251477 3095.938979 3100.626481 3105.313982
3110.001484 3114.688986 3119.376488 3124.063989 3128.751491 3133.438993
3138.126495 3142.813996 3147.501498 3152.189 ]
y_peak
[0.01 0.011 0.011 0.012 0.013 0.015 0.017 0.018 0.02 0.021 0.024 0.027
0.029 0.03 0.031 0.033 0.034 0.036 0.037 0.039 0.04 0.043 0.047 0.049
0.052 0.055 0.058 0.062 0.066 0.071 0.077 0.085 0.097 0.111 0.141 0.169
0.183 0.235 0.265 0.324 0.35 0.396 0.421 0.45 0.467 0.486 0.514 0.51
0.464 0.444 0.437 0.432 0.432 0.437 0.442 0.45 0.475 0.501 0.541 0.553
0.594 0.611 0.611 0.607 0.612 0.607 0.521 0.471 0.424 0.331 0.264 0.216
0.161 0.114 0.094 0.054 0.034 0.021 0.014 0.008 0.007 0.005 0.004 0.003
0.003 0.002 0.002 0.002 0.001 0.001 0.001 0.001 0. 0. 0. 0.
0. 0. 0. 0. ]
import numpy as np
import pandas as pd
import pylab as plt
from sklearn.linear_model import LinearRegression
df = pd.read_csv('data/ethanol_IR.csv')
x_all = df['wavenumber [cm^-1]'].values
y_all = df['absorbance'].values
x_peak = x_all[475:575]
y_peak = y_all[475:575]
x_train = x_peak[::3]
y_train = y_peak[::3]
sigmas = [1, 10, 50, 100, 150]
def rbf(x_train, x_test=None, gamma=1):
    if x_test is None:
        x_test = x_train
    N = len(x_test)   # <- number of data points
    M = len(x_train)  # <- number of features
    X = np.zeros((N, M))
    for i in range(N):
        for j in range(M):
            X[i, j] = np.exp(-gamma*(x_test[i] - x_train[j])**2)
    return X
model_rbf = LinearRegression() #create a linear regression model instance
n = len(sigmas)
def gam(sigma):
    gam = 1./(2*sigma**2)
    return gam
for i in range(n):
    total = []
    gamma = gam(sigmas[i])
    print('Sigma = {}'.format(sigmas[i]))
    X_train = rbf(x_train, gamma=gamma)
    model_rbf.fit(X_train, y_train)  # fit the model
    r2 = model_rbf.score(X_train, y_train)  # get the "score", which is equivalent to r^2
    print('r^2 training = {}'.format(r2))
    X_all = rbf(x_train, x_test=x_peak, gamma=gamma)
    yhat = model_rbf.predict(X_all)
    r2 = model_rbf.score(X_all, y_peak)  # get the "score", which is equivalent to r^2
    print('r^2 testing = {}'.format(r2))
I am getting different coefficient values and coefficient standard errors when using the smf.ols and sm.OLS functions of statsmodels, even though mathematically they should specify the same regression and give the same results.
I have done a 100% reproducible example of my question, the dataframe df can be downloaded from here: https://drive.google.com/drive/folders/1i67wztkrAeEZH2tv2hyOlgxG7N80V3pI?usp=sharing
Case 1: Linear Model using Patsy from Statsmodels
# First we load the libraries:
import statsmodels.api as sm
import statsmodels.formula.api as smf
import random
import pandas as pd
# We define a specific seed to have the same results:
random.seed(1234)
# Now we read the data that can be downloaded from Google Drive link provided above:
df = pd.read_csv("/Users/user/Documents/example/cars.csv", sep = "|")
# We create the linear regression:
lm1 = smf.ols('price ~ make + fuel_system + engine_type + num_of_doors + bore + compression_ratio + height + peak_rpm + 1', data = df)
# We see the results:
lm1.fit().summary()
The result of lm1 is:
OLS Regression Results
==============================================================================
Dep. Variable: price R-squared: 0.894
Model: OLS Adj. R-squared: 0.868
Method: Least Squares F-statistic: 35.54
Date: Mon, 18 Feb 2019 Prob (F-statistic): 5.24e-62
Time: 17:19:14 Log-Likelihood: -1899.7
No. Observations: 205 AIC: 3879.
Df Residuals: 165 BIC: 4012.
Df Model: 39
Covariance Type: nonrobust
=========================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------------
Intercept 1.592e+04 1.21e+04 1.320 0.189 -7898.396 3.97e+04
make[T.audi] 6519.7045 2371.807 2.749 0.007 1836.700 1.12e+04
make[T.bmw] 1.427e+04 2292.551 6.223 0.000 9740.771 1.88e+04
make[T.chevrolet] -571.8236 2860.026 -0.200 0.842 -6218.788 5075.141
make[T.dodge] -1186.3430 2261.240 -0.525 0.601 -5651.039 3278.353
make[T.honda] 2779.6496 2891.626 0.961 0.338 -2929.709 8489.009
make[T.isuzu] 3098.9677 2592.645 1.195 0.234 -2020.069 8218.004
make[T.jaguar] 1.752e+04 2416.313 7.252 0.000 1.28e+04 2.23e+04
make[T.mazda] 306.6568 2134.567 0.144 0.886 -3907.929 4521.243
make[T.mercedes-benz] 1.698e+04 2320.871 7.318 0.000 1.24e+04 2.16e+04
make[T.mercury] 2958.1002 3605.739 0.820 0.413 -4161.236 1.01e+04
make[T.mitsubishi] -1188.8337 2284.697 -0.520 0.604 -5699.844 3322.176
make[T.nissan] -1211.5463 2073.422 -0.584 0.560 -5305.405 2882.312
make[T.peugot] 3057.0217 4255.809 0.718 0.474 -5345.841 1.15e+04
make[T.plymouth] -894.5921 2332.746 -0.383 0.702 -5500.473 3711.289
make[T.porsche] 9558.8747 3688.038 2.592 0.010 2277.044 1.68e+04
make[T.renault] -2124.9722 2847.536 -0.746 0.457 -7747.277 3497.333
make[T.saab] 3490.5333 2319.189 1.505 0.134 -1088.579 8069.645
make[T.subaru] -1.636e+04 4002.796 -4.087 0.000 -2.43e+04 -8456.659
make[T.toyota] -770.9677 1911.754 -0.403 0.687 -4545.623 3003.688
make[T.volkswagen] 406.9179 2219.714 0.183 0.855 -3975.788 4789.623
make[T.volvo] 5433.7129 2397.030 2.267 0.025 700.907 1.02e+04
fuel_system[T.2bbl] 2142.1594 2232.214 0.960 0.339 -2265.226 6549.545
fuel_system[T.4bbl] 464.1109 3999.976 0.116 0.908 -7433.624 8361.846
fuel_system[T.idi] 1.991e+04 6622.812 3.007 0.003 6837.439 3.3e+04
fuel_system[T.mfi] 3716.5201 3936.805 0.944 0.347 -4056.488 1.15e+04
fuel_system[T.mpfi] 3964.1109 2267.538 1.748 0.082 -513.019 8441.241
fuel_system[T.spdi] 3240.0003 2719.925 1.191 0.235 -2130.344 8610.344
fuel_system[T.spfi] 932.1959 4019.476 0.232 0.817 -7004.041 8868.433
engine_type[T.dohcv] -1.208e+04 4205.826 -2.872 0.005 -2.04e+04 -3773.504
engine_type[T.l] -4833.9860 3763.812 -1.284 0.201 -1.23e+04 2597.456
engine_type[T.ohc] -4038.8848 1213.598 -3.328 0.001 -6435.067 -1642.702
engine_type[T.ohcf] 9618.9281 3504.600 2.745 0.007 2699.286 1.65e+04
engine_type[T.ohcv] 3051.7629 1445.185 2.112 0.036 198.323 5905.203
engine_type[T.rotor] 1403.9928 3217.402 0.436 0.663 -4948.593 7756.579
num_of_doors[T.two] -419.9640 521.754 -0.805 0.422 -1450.139 610.211
bore 3993.4308 1373.487 2.908 0.004 1281.556 6705.306
compression_ratio -1200.5665 460.681 -2.606 0.010 -2110.156 -290.977
height -80.7141 146.219 -0.552 0.582 -369.417 207.988
peak_rpm -0.5903 0.790 -0.747 0.456 -2.150 0.970
==============================================================================
Omnibus: 65.777 Durbin-Watson: 1.217
Prob(Omnibus): 0.000 Jarque-Bera (JB): 399.594
Skew: 1.059 Prob(JB): 1.70e-87
Kurtosis: 9.504 Cond. No. 3.26e+05
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 3.26e+05. This might indicate that there are
strong multicollinearity or other numerical problems.
"""
Case 2: Linear Model using Dummy Variables from Statsmodels as well
# We define a specific seed to have the same results:
random.seed(1234)
# First we check what `object` type variables we have in our dataset:
df.dtypes
# We create a list where we save the `object` type variables names:
object = ['make',
'fuel_system',
'engine_type',
'num_of_doors'
]
# Now we convert those object variables to numeric with get_dummies function to have 1 unique numeric dataframe:
df_num = pd.get_dummies(df, columns = object)
# We ensure the dataframe is numeric casting all values to float64:
df_num = df_num[df_num.columns].apply(pd.to_numeric, errors='coerce', axis = 1)
# We define the predictive variables dataset:
X = df_num.drop('price', axis = 1)
# We define the response variable values:
y = df_num.price.values
# We add a constant as we did in the previous example (adding "+1" to Patsy):
Xc = sm.add_constant(X) # Adds a constant to the model
# We create the linear model and obtain results:
lm2 = sm.OLS(y, Xc)
lm2.fit().summary()
The result of lm2 is:
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.894
Model: OLS Adj. R-squared: 0.868
Method: Least Squares F-statistic: 35.54
Date: Mon, 18 Feb 2019 Prob (F-statistic): 5.24e-62
Time: 17:28:16 Log-Likelihood: -1899.7
No. Observations: 205 AIC: 3879.
Df Residuals: 165 BIC: 4012.
Df Model: 39
Covariance Type: nonrobust
======================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------
const 1.205e+04 6811.094 1.769 0.079 -1398.490 2.55e+04
bore 3993.4308 1373.487 2.908 0.004 1281.556 6705.306
compression_ratio -1200.5665 460.681 -2.606 0.010 -2110.156 -290.977
height -80.7141 146.219 -0.552 0.582 -369.417 207.988
peak_rpm -0.5903 0.790 -0.747 0.456 -2.150 0.970
make_alfa-romero -2273.9631 1865.185 -1.219 0.225 -5956.669 1408.743
make_audi 4245.7414 1324.140 3.206 0.002 1631.299 6860.184
make_bmw 1.199e+04 1232.635 9.730 0.000 9559.555 1.44e+04
make_chevrolet -2845.7867 1976.730 -1.440 0.152 -6748.733 1057.160
make_dodge -3460.3061 1170.966 -2.955 0.004 -5772.315 -1148.297
make_honda 505.6865 2049.865 0.247 0.805 -3541.661 4553.034
make_isuzu 825.0045 1706.160 0.484 0.629 -2543.716 4193.725
make_jaguar 1.525e+04 1903.813 8.010 0.000 1.15e+04 1.9e+04
make_mazda -1967.3063 982.179 -2.003 0.047 -3906.564 -28.048
make_mercedes-benz 1.471e+04 1423.004 10.338 0.000 1.19e+04 1.75e+04
make_mercury 684.1370 2913.361 0.235 0.815 -5068.136 6436.410
make_mitsubishi -3462.7968 1221.018 -2.836 0.005 -5873.631 -1051.963
make_nissan -3485.5094 946.316 -3.683 0.000 -5353.958 -1617.060
make_peugot 783.0586 3513.296 0.223 0.824 -6153.754 7719.871
make_plymouth -3168.5552 1293.376 -2.450 0.015 -5722.256 -614.854
make_porsche 7284.9115 2853.174 2.553 0.012 1651.475 1.29e+04
make_renault -4398.9354 2037.945 -2.159 0.032 -8422.747 -375.124
make_saab 1216.5702 1487.192 0.818 0.415 -1719.810 4152.950
make_subaru -1.863e+04 3263.524 -5.710 0.000 -2.51e+04 -1.22e+04
make_toyota -3044.9308 776.059 -3.924 0.000 -4577.218 -1512.644
make_volkswagen -1867.0452 1170.975 -1.594 0.113 -4179.072 444.981
make_volvo 3159.7498 1327.405 2.380 0.018 538.862 5780.638
fuel_system_1bbl -2790.4092 2230.161 -1.251 0.213 -7193.740 1612.922
fuel_system_2bbl -648.2498 1094.525 -0.592 0.554 -2809.330 1512.830
fuel_system_4bbl -2326.2983 3094.703 -0.752 0.453 -8436.621 3784.024
fuel_system_idi 1.712e+04 6154.806 2.782 0.006 4971.083 2.93e+04
fuel_system_mfi 926.1109 3063.134 0.302 0.763 -5121.881 6974.102
fuel_system_mpfi 1173.7017 1186.125 0.990 0.324 -1168.238 3515.642
fuel_system_spdi 449.5911 1827.318 0.246 0.806 -3158.349 4057.531
fuel_system_spfi -1858.2133 3111.596 -0.597 0.551 -8001.891 4285.464
engine_type_dohc 2703.6445 1803.080 1.499 0.136 -856.440 6263.729
engine_type_dohcv -9374.0342 3504.717 -2.675 0.008 -1.63e+04 -2454.161
engine_type_l -2130.3416 3357.283 -0.635 0.527 -8759.115 4498.431
engine_type_ohc -1335.2404 1454.047 -0.918 0.360 -4206.177 1535.696
engine_type_ohcf 1.232e+04 2850.883 4.322 0.000 6693.659 1.8e+04
engine_type_ohcv 5755.4074 1669.627 3.447 0.001 2458.820 9051.995
engine_type_rotor 4107.6373 3032.223 1.355 0.177 -1879.323 1.01e+04
num_of_doors_four 6234.8048 3491.722 1.786 0.076 -659.410 1.31e+04
num_of_doors_two 5814.8408 3337.588 1.742 0.083 -775.045 1.24e+04
==============================================================================
Omnibus: 65.777 Durbin-Watson: 1.217
Prob(Omnibus): 0.000 Jarque-Bera (JB): 399.594
Skew: 1.059 Prob(JB): 1.70e-87
Kurtosis: 9.504 Cond. No. 1.01e+16
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 5.38e-23. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
"""
As we can see, some variables, like height, have the same coefficient. Nevertheless, others don't (the isuzu level of make, the ohc level of engine_type, the intercept, etc.). Shouldn't both outputs give the same result? What am I missing or doing wrong?
Thanks in advance for your help.
P.S. As clarified by #sukhbinder, even when I use the Patsy formula without an intercept (adding "-1" to the formula, since Patsy includes an intercept by default) and remove the constant from the dummy formulation, I still get different results.
The reason the results do not match is that the two design matrices are not the same: the formula (Patsy) interface drops one reference level per categorical variable, while pd.get_dummies keeps every level, leaving the dummy-based design perfectly collinear with the constant.
Exactly the same results are obtained by going through the regression summary, identifying the reference levels that the formula interface dropped, and removing them from the dummy dataframe:
deletex = [
'make_alfa-romero',
'fuel_system_1bbl',
'engine_type_dohc',
'num_of_doors_four'
]
df_num.drop( deletex, axis = 1, inplace = True)
df_num = df_num[df_num.columns].apply(pd.to_numeric, errors='coerce', axis = 1)
X = df_num.drop('price', axis = 1)
y = df_num.price.values
Xc = sm.add_constant(X) # Adds a constant to the model
random.seed(1234)
linear_regression = sm.OLS(y, Xc)
linear_regression.fit().summary()
Which prints the result:
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.894
Model: OLS Adj. R-squared: 0.868
Method: Least Squares F-statistic: 35.54
Date: Thu, 21 Feb 2019 Prob (F-statistic): 5.24e-62
Time: 18:16:08 Log-Likelihood: -1899.7
No. Observations: 205 AIC: 3879.
Df Residuals: 165 BIC: 4012.
Df Model: 39
Covariance Type: nonrobust
======================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------
const 1.592e+04 1.21e+04 1.320 0.189 -7898.396 3.97e+04
bore 3993.4308 1373.487 2.908 0.004 1281.556 6705.306
compression_ratio -1200.5665 460.681 -2.606 0.010 -2110.156 -290.977
height -80.7141 146.219 -0.552 0.582 -369.417 207.988
peak_rpm -0.5903 0.790 -0.747 0.456 -2.150 0.970
make_audi 6519.7045 2371.807 2.749 0.007 1836.700 1.12e+04
make_bmw 1.427e+04 2292.551 6.223 0.000 9740.771 1.88e+04
make_chevrolet -571.8236 2860.026 -0.200 0.842 -6218.788 5075.141
make_dodge -1186.3430 2261.240 -0.525 0.601 -5651.039 3278.353
make_honda 2779.6496 2891.626 0.961 0.338 -2929.709 8489.009
make_isuzu 3098.9677 2592.645 1.195 0.234 -2020.069 8218.004
make_jaguar 1.752e+04 2416.313 7.252 0.000 1.28e+04 2.23e+04
make_mazda 306.6568 2134.567 0.144 0.886 -3907.929 4521.243
make_mercedes-benz 1.698e+04 2320.871 7.318 0.000 1.24e+04 2.16e+04
make_mercury 2958.1002 3605.739 0.820 0.413 -4161.236 1.01e+04
make_mitsubishi -1188.8337 2284.697 -0.520 0.604 -5699.844 3322.176
make_nissan -1211.5463 2073.422 -0.584 0.560 -5305.405 2882.312
make_peugot 3057.0217 4255.809 0.718 0.474 -5345.841 1.15e+04
make_plymouth -894.5921 2332.746 -0.383 0.702 -5500.473 3711.289
make_porsche 9558.8747 3688.038 2.592 0.010 2277.044 1.68e+04
make_renault -2124.9722 2847.536 -0.746 0.457 -7747.277 3497.333
make_saab 3490.5333 2319.189 1.505 0.134 -1088.579 8069.645
make_subaru -1.636e+04 4002.796 -4.087 0.000 -2.43e+04 -8456.659
make_toyota -770.9677 1911.754 -0.403 0.687 -4545.623 3003.688
make_volkswagen 406.9179 2219.714 0.183 0.855 -3975.788 4789.623
make_volvo 5433.7129 2397.030 2.267 0.025 700.907 1.02e+04
fuel_system_2bbl 2142.1594 2232.214 0.960 0.339 -2265.226 6549.545
fuel_system_4bbl 464.1109 3999.976 0.116 0.908 -7433.624 8361.846
fuel_system_idi 1.991e+04 6622.812 3.007 0.003 6837.439 3.3e+04
fuel_system_mfi 3716.5201 3936.805 0.944 0.347 -4056.488 1.15e+04
fuel_system_mpfi 3964.1109 2267.538 1.748 0.082 -513.019 8441.241
fuel_system_spdi 3240.0003 2719.925 1.191 0.235 -2130.344 8610.344
fuel_system_spfi 932.1959 4019.476 0.232 0.817 -7004.041 8868.433
engine_type_dohcv -1.208e+04 4205.826 -2.872 0.005 -2.04e+04 -3773.504
engine_type_l -4833.9860 3763.812 -1.284 0.201 -1.23e+04 2597.456
engine_type_ohc -4038.8848 1213.598 -3.328 0.001 -6435.067 -1642.702
engine_type_ohcf 9618.9281 3504.600 2.745 0.007 2699.286 1.65e+04
engine_type_ohcv 3051.7629 1445.185 2.112 0.036 198.323 5905.203
engine_type_rotor 1403.9928 3217.402 0.436 0.663 -4948.593 7756.579
num_of_doors_two -419.9640 521.754 -0.805 0.422 -1450.139 610.211
==============================================================================
Omnibus: 65.777 Durbin-Watson: 1.217
Prob(Omnibus): 0.000 Jarque-Bera (JB): 399.594
Skew: 1.059 Prob(JB): 1.70e-87
Kurtosis: 9.504 Cond. No. 3.26e+05
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 3.26e+05. This might indicate that there are
strong multicollinearity or other numerical problems.
Results that are exactly the same as the first Statsmodels call:
random.seed(1234)
lm_python = smf.ols('price ~ make + fuel_system + engine_type + num_of_doors + bore + compression_ratio + height + peak_rpm + 1', data = df)
lm_python.fit().summary()
OLS Regression Results
==============================================================================
Dep. Variable: price R-squared: 0.894
Model: OLS Adj. R-squared: 0.868
Method: Least Squares F-statistic: 35.54
Date: Thu, 21 Feb 2019 Prob (F-statistic): 5.24e-62
Time: 18:17:37 Log-Likelihood: -1899.7
No. Observations: 205 AIC: 3879.
Df Residuals: 165 BIC: 4012.
Df Model: 39
Covariance Type: nonrobust
=========================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------------
Intercept 1.592e+04 1.21e+04 1.320 0.189 -7898.396 3.97e+04
make[T.audi] 6519.7045 2371.807 2.749 0.007 1836.700 1.12e+04
make[T.bmw] 1.427e+04 2292.551 6.223 0.000 9740.771 1.88e+04
make[T.chevrolet] -571.8236 2860.026 -0.200 0.842 -6218.788 5075.141
make[T.dodge] -1186.3430 2261.240 -0.525 0.601 -5651.039 3278.353
make[T.honda] 2779.6496 2891.626 0.961 0.338 -2929.709 8489.009
make[T.isuzu] 3098.9677 2592.645 1.195 0.234 -2020.069 8218.004
make[T.jaguar] 1.752e+04 2416.313 7.252 0.000 1.28e+04 2.23e+04
make[T.mazda] 306.6568 2134.567 0.144 0.886 -3907.929 4521.243
make[T.mercedes-benz] 1.698e+04 2320.871 7.318 0.000 1.24e+04 2.16e+04
make[T.mercury] 2958.1002 3605.739 0.820 0.413 -4161.236 1.01e+04
make[T.mitsubishi] -1188.8337 2284.697 -0.520 0.604 -5699.844 3322.176
make[T.nissan] -1211.5463 2073.422 -0.584 0.560 -5305.405 2882.312
make[T.peugot] 3057.0217 4255.809 0.718 0.474 -5345.841 1.15e+04
make[T.plymouth] -894.5921 2332.746 -0.383 0.702 -5500.473 3711.289
make[T.porsche] 9558.8747 3688.038 2.592 0.010 2277.044 1.68e+04
make[T.renault] -2124.9722 2847.536 -0.746 0.457 -7747.277 3497.333
make[T.saab] 3490.5333 2319.189 1.505 0.134 -1088.579 8069.645
make[T.subaru] -1.636e+04 4002.796 -4.087 0.000 -2.43e+04 -8456.659
make[T.toyota] -770.9677 1911.754 -0.403 0.687 -4545.623 3003.688
make[T.volkswagen] 406.9179 2219.714 0.183 0.855 -3975.788 4789.623
make[T.volvo] 5433.7129 2397.030 2.267 0.025 700.907 1.02e+04
fuel_system[T.2bbl] 2142.1594 2232.214 0.960 0.339 -2265.226 6549.545
fuel_system[T.4bbl] 464.1109 3999.976 0.116 0.908 -7433.624 8361.846
fuel_system[T.idi] 1.991e+04 6622.812 3.007 0.003 6837.439 3.3e+04
fuel_system[T.mfi] 3716.5201 3936.805 0.944 0.347 -4056.488 1.15e+04
fuel_system[T.mpfi] 3964.1109 2267.538 1.748 0.082 -513.019 8441.241
fuel_system[T.spdi] 3240.0003 2719.925 1.191 0.235 -2130.344 8610.344
fuel_system[T.spfi] 932.1959 4019.476 0.232 0.817 -7004.041 8868.433
engine_type[T.dohcv] -1.208e+04 4205.826 -2.872 0.005 -2.04e+04 -3773.504
engine_type[T.l] -4833.9860 3763.812 -1.284 0.201 -1.23e+04 2597.456
engine_type[T.ohc] -4038.8848 1213.598 -3.328 0.001 -6435.067 -1642.702
engine_type[T.ohcf] 9618.9281 3504.600 2.745 0.007 2699.286 1.65e+04
engine_type[T.ohcv] 3051.7629 1445.185 2.112 0.036 198.323 5905.203
engine_type[T.rotor] 1403.9928 3217.402 0.436 0.663 -4948.593 7756.579
num_of_doors[T.two] -419.9640 521.754 -0.805 0.422 -1450.139 610.211
bore 3993.4308 1373.487 2.908 0.004 1281.556 6705.306
compression_ratio -1200.5665 460.681 -2.606 0.010 -2110.156 -290.977
height -80.7141 146.219 -0.552 0.582 -369.417 207.988
peak_rpm -0.5903 0.790 -0.747 0.456 -2.150 0.970
==============================================================================
Omnibus: 65.777 Durbin-Watson: 1.217
Prob(Omnibus): 0.000 Jarque-Bera (JB): 399.594
Skew: 1.059 Prob(JB): 1.70e-87
Kurtosis: 9.504 Cond. No. 3.26e+05
==============================================================================
You need to check the correspondence between the predictor variables, because pd.get_dummies creates a dummy column for every level of each categorical variable, while Statsmodels (via Patsy) uses N-1 levels, dropping one reference level per categorical; one way to reproduce that coding directly is sketched below.
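As a sketch of one way to avoid listing the dropped levels by hand (assuming the same df and column list as above): get_dummies with drop_first=True emits N-1 indicator columns per categorical, which should mirror Patsy's default treatment coding (first level in sorted order as the reference), so that after adding a constant the sm.OLS fit lines up with the formula fit.
import pandas as pd
import statsmodels.api as sm

# drop_first=True keeps N-1 dummies per categorical (reference level dropped),
# matching what the formula interface does by default.
cat_cols = ['make', 'fuel_system', 'engine_type', 'num_of_doors']
df_num = pd.get_dummies(df, columns=cat_cols, drop_first=True)
df_num = df_num.apply(pd.to_numeric, errors='coerce')

X = sm.add_constant(df_num.drop('price', axis=1))
y = df_num['price'].values
print(sm.OLS(y, X).fit().summary())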