How to obtain the intercept of a PLS regression (sklearn) - python

The PLS regression using sklearn gives very poor prediction results. When I inspect the fitted model I cannot find the intercept anywhere; perhaps this affects the model's predictions? The score and loading matrices look fine, and so does the array of coefficients. In any case, how do I get the intercept from the attributes that are already available?
This code returns the coefficients of the variables.
from pandas import DataFrame
from sklearn.cross_decomposition import PLSRegression

X = DataFrame({
    'x1': [0.0, 1.0, 2.0, 2.0],
    'x2': [0.0, 0.0, 2.0, 5.0],
    'x3': [1.0, 0.0, 2.0, 4.0],
}, columns=['x1', 'x2', 'x3'])
Y = DataFrame({
    'y': [-0.2, 1.1, 5.9, 12.3],
}, columns=['y'])

def regPLS1(X, Y):
    _COMPS_ = len(X.columns)  # use all latent variables
    model = PLSRegression(_COMPS_).fit(X, Y)
    return model.coef_
The result is:
regPLS1(X,Y)
>>> array([[ 0.84], [ 2.44], [-0.46]])
In addition to these coefficients, the value of the intercept is: 0.26. What am I doing wrong?
EDIT
The correct predicted response is Y_hat (exactly the same as the observed Y):
Y_hat = [-0.2 1.1 5.9 12.3]

To calculate the intercept, use the following:
import numpy
plsModel = PLSRegression(_COMPS_).fit(X, Y)
y_intercept = plsModel.y_mean_ - numpy.dot(plsModel.x_mean_, plsModel.coef_)
I got the formula directly from the R "pls" package:
BInt[1,,i] <- object$Ymeans - object$Xmeans %*% B[,,i]
I tested the results and calculated the same intercepts in R 'pls' and scikit-learn.

Based on my reading of the _PLS implementation, the formula is Y = XB + Err, where model.coef_ is the estimate of B. If you look at the predict method, it appears to use the fitted parameter y_mean_ as the Err term, so I believe that's what you want: use model.y_mean_ rather than model.coef_. Hope this helps!
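Putting the pieces together, here is a minimal sketch (not part of either answer) on the data from the question. It assumes an older scikit-learn where PLSRegression exposes x_mean_ and y_mean_ and stores coef_ with shape (n_features, n_targets); recent releases rename these attributes and expose intercept_ directly, so adjust accordingly.
import numpy as np
from pandas import DataFrame
from sklearn.cross_decomposition import PLSRegression

X = DataFrame({'x1': [0.0, 1.0, 2.0, 2.0],
               'x2': [0.0, 0.0, 2.0, 5.0],
               'x3': [1.0, 0.0, 2.0, 4.0]})
Y = DataFrame({'y': [-0.2, 1.1, 5.9, 12.3]})

model = PLSRegression(n_components=3).fit(X, Y)

# Intercept recovered from the centering parameters (older attribute names).
intercept = model.y_mean_ - np.dot(model.x_mean_, model.coef_)

# Manual prediction Y_hat = X.B + intercept should match model.predict(X).
Y_hat = np.dot(X.values, model.coef_) + intercept
print(Y_hat.ravel())             # approximately [-0.2, 1.1, 5.9, 12.3]
print(model.predict(X).ravel())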

Related

How to find regression curve equation for a fitted PolynomialFeatures model

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
data = pd.DataFrame({
    "input": [0.001, 0.015, 0.066, 0.151, 0.266, 0.402, 0.45, 0.499,
              0.598, 0.646, 0.738, 0.782, 0.86, 0.894, 0.924, 0.95],
    "output": [0.5263157894736842, 0.5789473684210524, 0.6315789473684206,
               0.6842105263157897, 0.6315789473684206, 0.7894736842105263,
               0.8421052631578945, 0.7894736842105263, 0.736842105263158,
               0.6842105263157897, 0.736842105263158, 0.736842105263158,
               0.6842105263157897, 0.6842105263157897, 0.6315789473684206,
               0.5789473684210524]})
I have the above data, which includes input and output values, and I want to make a curve that properly fits this data. First, here is a plot of the input and output values:
I have written this code:
X=data.iloc[:,0].to_numpy()
X=X.reshape(-1,1)
y=data.iloc[:,1].to_numpy()
y=y.reshape(-1,1)
poly=PolynomialFeatures(degree=2)
poly.fit(X,y)
X_poly=poly.transform(X)
reg=LinearRegression().fit(X_poly,y)
plt.scatter(X,y,color="blue")
plt.plot(X,reg.predict(X_poly),color="orange",label="Polynomial Linear Regression")
plt.xlabel("Temperature")
plt.ylabel("Pressure")
plt.legend(loc="upper left")
The plot is:
But I can't find the equation of the above curve (the orange curve). How can I find it?
Your plot actually corresponds to your code run with
poly=PolynomialFeatures(degree=7)
and not to degree=2. Indeed, running your code with the above change, we get:
Now, your polynomial features are:
poly.get_feature_names()
# ['1', 'x0', 'x0^2', 'x0^3', 'x0^4', 'x0^5', 'x0^6', 'x0^7']
and the respective coefficients of your linear regression are:
reg.coef_
# array([[ 0. , 5.43894411, -68.14277256, 364.28508827,
# -941.70924401, 1254.89358662, -831.27091422, 216.43304954]])
plus the intercept:
reg.intercept_
# array([0.51228593])
Given the above, and setting
coef = reg.coef_[0]
since here we have a single feature in the initial data, your regression equation is:
y = reg.intercept_ + coef[0] + coef[1]*x + coef[2]*x**2 + coef[3]*x**3 + coef[4]*x**4 + coef[5]*x**5 + coef[6]*x**6 + coef[7]*x**7
For visual verification, we can plot the above function with some x data in [0, 1]
x = np.linspace(0, 1, 15)
Running the above expression for y and
plt.plot(x, y)
gives:
Using some randomly generated data x, we can verify that the results of the equation y_eq are indeed equal to the results produced by the regression model y_reg within the limits of numerical precision:
x = np.random.rand(1,10)
y_eq = reg.intercept_ + coef[0] + coef[1]*x + coef[2]*x**2 + coef[3]*x**3 + coef[4]*x**4 + coef[5]*x**5 + coef[6]*x**6 + coef[7]*x**7
y_reg = np.concatenate(reg.predict(poly.transform(x.reshape(-1,1))))
y_eq
# array([[0.72452703, 0.64106819, 0.67394222, 0.71756648, 0.71102853,
# 0.63582055, 0.54243177, 0.71104983, 0.71287962, 0.6311952 ]])
y_reg
# array([0.72452703, 0.64106819, 0.67394222, 0.71756648, 0.71102853,
# 0.63582055, 0.54243177, 0.71104983, 0.71287962, 0.6311952 ])
np.allclose(y_reg, y_eq)
# True
Irrelevant to the question, but I guess you already know that trying to fit such high-order polynomials to so few data points is not a good idea, and you should probably stick to a low degree such as 2 or 3...
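As a convenience, here is a small sketch (an addition, not part of the answer) that assembles the fitted equation as a string. It assumes a single input feature and uses poly.get_feature_names(), which newer scikit-learn versions rename to get_feature_names_out():
def equation_string(reg, poly):
    # Pair each polynomial feature name with its coefficient and format them as text.
    names = poly.get_feature_names()   # poly.get_feature_names_out() on newer versions
    terms = ["{:+.4f}*{}".format(c, n) for c, n in zip(reg.coef_[0], names)]
    return "y = {:+.4f} {}".format(reg.intercept_[0], " ".join(terms))

print(equation_string(reg, poly))
The first term corresponds to the bias column '1', whose coefficient is 0 here because LinearRegression already fits its own intercept.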
Not sure how you produced the plot shown in the question. When I ran your code I got the following (degree=2) polynomial fitted to the data, as expected:
Now that you have fitted the data you can see the coefficients of the model thus:
print(reg.coef_)
print(reg.intercept_)
# [[ 0. 0.85962436 -0.83796885]]
# [0.5523586]
Note that the data that was used to fit this model is equivalent to the following:
X_poly = np.concatenate([np.ones((16,1)), X, X**2], axis=1)
Therefore a single data point is a vector created as follows:
temp = 0.5
x = np.array([1, temp, temp**2]).reshape((1,3))
Your polynomial model is simply a linear model of the polynomial features:
y = A.x + B
or
y = reg.coef_.dot(x.T) + reg.intercept_
print(y) # [[0.77267856]]
Verification:
print(reg.predict(x)) # array([[0.77267856]])
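To tie this back to the question, here is a short sketch (an addition, assuming the degree=2 fit from this answer) that evaluates the closed-form quadratic and confirms it matches the model's predictions:
t = np.linspace(0, 1, 50).reshape(-1, 1)
c = reg.coef_[0]                                  # [0, c1, c2]; the bias column gets coefficient 0
y_eq = reg.intercept_ + c[1] * t + c[2] * t**2    # closed-form quadratic
y_model = reg.predict(poly.transform(t))          # prediction through PolynomialFeatures
print(np.allclose(y_eq, y_model))                 # True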

Can I inverse transform the intercept and coefficients of LASSO regression after using Robust Scaler?

Is it possible to inverse transform the intercept and coefficients in LASSO regression, after fitting the model on scaled data using Robust Scaler?
I'm using LASSO regression to predict values on data that is not normalized and doesn't perform well with LASSO unless it's scaled beforehand. After scaling the data and fitting the LASSO model, I ideally want to be able to see what the model intercept and coefficients are but in the original units (not the scaled versions). I asked a similar question here and it doesn't appear this is possible. If not, why? Can someone explain this to me? I'm trying to broaden my understanding of how LASSO and Robust Scaler work.
Below is the code I was using. Here I was trying to inverse transform the coefficients using transformer_x and the intercept using transformer_y. However, it sounds like this is incorrect.
import numpy as np
import pandas as pd
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import Lasso

df = pd.DataFrame({'Y':[5, -10, 10, .5, 2.5, 15], 'X1':[1., -2., 2., .1, .5, 3], 'X2':[1, 1, 2, 1, 1, 1],
                   'X3':[6, 6, 6, 5, 6, 4], 'X4':[6, 5, 4, 3, 2, 1]})

X = df[['X1','X2', 'X3' ,'X4']]
y = df[['Y']]

#Scaling
transformer_x = RobustScaler().fit(X)
transformer_y = RobustScaler().fit(y)
X_scal = transformer_x.transform(X)
y_scal = transformer_y.transform(y)

#LASSO
lasso = Lasso()
lasso = lasso.fit(X_scal, y_scal)

def pred_val(X1, X2, X3, X4):
    print('X1 entered: ', X1)

    #Scale X value that user entered - by hand
    med_X = X.median()
    Q1_X = X.quantile(0.25)
    Q3_X = X.quantile(0.75)
    IQR_X = Q3_X - Q1_X
    X_scaled = (X1 - med_X)/IQR_X
    print('X1 scaled by hand: ', X_scaled[0].round(2))

    #Scale X value that user entered - by function
    X_scaled2 = transformer_x.transform(np.array([[X1, X2, X3, X4]]))
    print('X1 scaled by function: ', X_scaled2[0][0].round(2))

    #Intercept by hand
    med_y = y.median()
    Q1_y = y.quantile(0.25)
    Q3_y = y.quantile(0.75)
    IQR_y = Q3_y - Q1_y
    inv_int = med_y + IQR_y*lasso.intercept_[0]

    #Intercept by function
    inv_int2 = transformer_y.inverse_transform(lasso.intercept_.reshape(-1, 1))[0][0]

    #Coefficient by hand
    inv_coef = lasso.coef_[0]*IQR_y

    #Coefficient by function
    inv_coef2 = transformer_x.inverse_transform(lasso.coef_.reshape(1,-1))[0]

    #Prediction by hand
    preds = inv_int + inv_coef*X_scaled[0]

    #Prediction by function
    preds_inner = lasso.predict(X_scaled2)
    preds_f = transformer_y.inverse_transform(preds_inner.reshape(-1, 1))[0][0]

    print('\nIntercept by hand: ', inv_int[0].round(2))
    print('Intercept by function: ', inv_int2.round(2))
    print('\nCoefficients by hand: ', inv_coef[0].round(2))
    print('Coefficients by function: ', inv_coef2[0].round(2))
    print('\nYour predicted value by hand is: ', preds[0].round(2))
    print('Your predicted value by function is: ', preds_f.round(2))
    print('Perfect Prediction would be 80')

pred_val(10, 1, 1, 1)
Update: I've updated my code to show the type of prediction function I'm trying to create. I'm just trying to create a function that does exactly what .predict does, but also shows the intercept and coefficients in their unscaled units.
Current output:
Out[1]:
X1 entered: 10
X1 scaled by hand: 5.97
X1 scaled by function: 5.97
Intercept by hand: 34.19
Intercept by function: 34.19
Coefficients by hand: 7.6
Coefficients by function: 8.5
Your predicted value by hand is: 79.54
Your predicted value by function is: 79.54
Perfect Prediction would be 80
Ideal output:
Out[1]:
X1 entered: 10
X1 scaled by hand: 5.97
X1 scaled by function: 5.97
Intercept by hand: 34.19
Intercept by function: 34.19
Coefficients by hand: 7.6
Coefficients by function: 7.6
Your predicted value by hand is: 79.54
Your predicted value by function is: 79.54
Perfect Prediction would be 80
Based on the linked SO thread, all you want to do is to get the unscaled prediction value. Is that right?
If yes, then all you need to do is:
# Scale the test dataset
X_test_scaled = transformer_x.transform(X_test)
# Predict with the trained model
prediction = lasso.predict(X_test_scaled)
# Inverse transform the prediction (reshape to 2D, as the scaler expects)
prediction_in_dollars = transformer_y.inverse_transform(prediction.reshape(-1, 1))
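As a quick sanity check (an addition, reusing the training data from the question instead of a separate X_test), the scale -> predict -> inverse-transform round trip returns predictions in the original units:
train_pred_scaled = lasso.predict(X_scal)
train_pred_original = transformer_y.inverse_transform(train_pred_scaled.reshape(-1, 1))
print(train_pred_original.ravel())   # predictions in the same units as df['Y']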
UPDATE:
Suppose the train data contain just a single feature named X. Here is what the RobustScaler will do:
X_scaled = (X - median(X))/IQR(X)
y_scaled = (y - median(y))/IQR(y)
Then, the lasso regression will give a prediction like this:
a * X_scaled + b = y_scaled
You have to work out the equations to see what the model coefficients are on the unscaled data:
# Substituting X_scaled and y_scaled from the 1st equation
# In this equation, median(X), IQR(X), median(y) and IQR(y) are plain numbers you already know from the training phase
a * (X - median(X))/IQR(X) + b = (y - median(y))/IQR(y)
If you try to turn this into an equation of the form a_new * X + b_new = y, you end up with:
a_new = (a * (X - median(X)) / (X * IQR(X))) * IQR(y)
b_new = b * IQR(y) + median(y)
a_new * X + b_new = y
You can see that the unscaled coefficient (a_new) depends on X. So, you can use the unscaled X to make predictions directly but in between you are applying the transformation indirectly.
UPDATE 2
I've adapted your code and it now shows how you can get the coefficients in the original scale. The script is just the implementation of the formulas I'm showing above.
import pandas as pd
import numpy as np
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import Lasso

df = pd.DataFrame({'Y':[5, -10, 10, .5, 2.5, 15], 'X1':[1., -2., 2., .1, .5, 3], 'X2':[1, 1, 2, 1, 1, 1],
                   'X3':[6, 6, 6, 5, 6, 4], 'X4':[6, 5, 4, 3, 2, 1]})

X = df[['X1','X2','X3','X4']]
y = df[['Y']]

#Scaling
transformer_x = RobustScaler().fit(X)
transformer_y = RobustScaler().fit(y)
X_scal = transformer_x.transform(X)
y_scal = transformer_y.transform(y)

#LASSO
lasso = Lasso()
lasso = lasso.fit(X_scal, y_scal)

def pred_val(X_test):
    print('X entered: ',)
    print(X_test.values[0])

    #Scale X value that user entered - by hand
    med_X = X.median()
    Q1_X = X.quantile(0.25)
    Q3_X = X.quantile(0.75)
    IQR_X = Q3_X - Q1_X
    X_scaled = ((X_test - med_X)/IQR_X).fillna(0).values
    print('X_test scaled by hand: ',)
    print(X_scaled[0])

    #Scale X value that user entered - by function
    X_scaled2 = transformer_x.transform(X_test)
    print('X_test scaled by function: ',)
    print(X_scaled2[0])

    #Intercept by hand
    med_y = y.median()
    Q1_y = y.quantile(0.25)
    Q3_y = y.quantile(0.75)
    IQR_y = Q3_y - Q1_y

    a = lasso.coef_
    coef_new = ((a * (X_test - med_X).values) / (X_test * IQR_X).values) * float(IQR_y)
    coef_new = np.nan_to_num(coef_new)[0]

    b = lasso.intercept_[0]
    intercept_new = b * float(IQR_y) + float(med_y)

    custom_pred = sum((coef_new * X_test.values)[0]) + intercept_new

    pred = lasso.predict(X_scaled2)
    final_pred = transformer_y.inverse_transform(pred.reshape(-1, 1))[0][0]

    print('Original intercept: ', lasso.intercept_[0].round(2))
    print('New intercept: ', intercept_new.round(2))
    print('Original coefficients: ', lasso.coef_.round(2))
    print('New coefficients: ', coef_new.round(2))
    print('Your predicted value by function is: ', final_pred.round(2))
    print('Your predicted value by hand is: ', custom_pred.round(2))

X_test = pd.DataFrame([10, 1, 1, 1]).T
X_test.columns = ['X1', 'X2', 'X3', 'X4']

pred_val(X_test)
You can see that the custom prediction uses the original values (X_test.values).
Result:
X entered:
[10 1 1 1]
X_test scaled by hand:
[ 5.96774194 0. -6.66666667 -1. ]
X_test scaled by function:
[ 5.96774194 0. -6.66666667 -1. ]
Original intercept: 0.01
New intercept: 3.83
Original coefficients: [ 0.02 0. -0. -0. ]
New coefficients: [0.1 0. 0. 0. ]
Your predicted value by function is: 4.83
Your predicted value by hand is: 4.83
As I explained above, the new coefficients depend on X_test. This means that you cannot use their current values with another test sample. Their values will be different for different inputs.

How to configure lasso regression to not penalize certain variables?

I'm trying to use lasso regression in Python, currently with the Lasso implementation in the scikit-learn library.
I want the model not to penalize certain variables during training (i.e., to penalize only the rest of the variables).
Below is my current code for training
rg_mdt = linear_model.LassoCV(alphas=np.array(10**np.linspace(0, -4, 100)), fit_intercept=True, normalize=True, cv=10)
rg_mdt.fit(df_mdt_rgmt.loc[df_mdt_rgmt.CLUSTER_ID == k].drop(['RESPONSE', 'CLUSTER_ID'], axis=1), df_mdt_rgmt.loc[df_mdt_rgmt.CLUSTER_ID == k, 'RESPONSE'])
df_mdt_rgmt is the data mart and I'm trying to keep the coefficient for certain columns non-zero.
glmnet in R provides a 'penalty.factor' parameter that lets me do this, but how can I do that in Python's scikit-learn?
Below is the code I have in R
get.Lassomodel <- function(TB.EXP, TB.RSP){
  VT.PEN <- rep(1, ncol(TB.EXP))
  VT.PEN[which(colnames(TB.EXP) == "DC_RATE")] <- 0
  VT.PEN[which(colnames(TB.EXP) == "FR_PRICE_PW_REP")] <- 0

  VT.GRID <- 10^seq(0, -4, length=100)

  REG.MOD <- cv.glmnet(as.matrix(TB.EXP), as.matrix(TB.RSP), alpha=1,
                       lambda=VT.GRID, penalty.factor=VT.PEN, nfolds=10, intercept=TRUE)
  return(REG.MOD)
}
I'm afraid you can't. Of course it's not a theoretical issue, just a design decision.
My reasoning is based on the available API, and while there are sometimes undocumented functions, this time I don't think there is what you need, because the user guide already states the problem in the one-penalty-factor-for-all-coefficients form alpha * ||w||_1.
Depending on your setting, you might modify sklearn's code (I'm a bit wary of the coordinate-descent tunings) or even implement a customized objective using scipy.optimize (although the latter might be a bit slower).
Here is an example showing the scipy.optimize approach. I simplified the problem by removing the intercept.
""" data """
import numpy as np
from sklearn import datasets
diabetes = datasets.load_diabetes()
A = diabetes.data[:150]
y = diabetes.target[:150]
alpha = 0.1
weights = np.ones(A.shape[1])

""" sklearn """
from sklearn import linear_model
clf = linear_model.Lasso(alpha=alpha, fit_intercept=False)
clf.fit(A, y)

""" scipy """
from scipy.optimize import minimize

def lasso(x):  # following sklearn's definition from the user guide!
    return (1. / (2 * A.shape[0])) * np.square(np.linalg.norm(A.dot(x) - y, 2)) \
           + alpha * np.linalg.norm(weights * x, 1)

""" Test with weights = 1 """
x0 = np.zeros(A.shape[1])
res = minimize(lasso, x0, method='L-BFGS-B', options={'disp': False})
print('Equal weights')
print(lasso(clf.coef_), clf.coef_[:5])
print(lasso(res.x), res.x[:5])

""" Test scipy-based with special weights """
weights[[0, 3, 5]] = 0.0
res = minimize(lasso, x0, method='L-BFGS-B', options={'disp': False})
print('Specific weights')
print(lasso(res.x), res.x[:5])
Output:
Equal weights
12467.4614224 [-524.03922009 -75.41111354 820.0330707 40.08184085 -307.86020107]
12467.6514697 [-526.7102518 -67.42487561 825.70158417 40.04699607 -271.02909258]
Specific weights
12362.6078842 [ -6.12843589e+02 -1.51628334e+01 8.47561732e+02 9.54387812e+01
-1.02957112e-05]
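If you do need an intercept with this approach, one possible extension (a sketch, not part of the answer above) is to append a constant column and give it zero penalty weight, so it is fitted but never shrunk:
A_int = np.hstack([A, np.ones((A.shape[0], 1))])    # add a constant column for the intercept
weights_int = np.append(np.ones(A.shape[1]), 0.0)   # last weight = 0: intercept is unpenalized

def lasso_with_intercept(x):
    resid = A_int.dot(x) - y
    return (1. / (2 * A_int.shape[0])) * np.square(np.linalg.norm(resid, 2)) \
           + alpha * np.linalg.norm(weights_int * x, 1)

res_int = minimize(lasso_with_intercept, np.zeros(A_int.shape[1]), method='L-BFGS-B')
print('Coefficients:', res_int.x[:-1][:5])
print('Intercept:', res_int.x[-1])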

Variance inflation factor in ridge regression in python

I'm running a ridge regression on somewhat collinear data. One of the methods used to identify a stable fit is a ridge trace and thanks to the great example on scikit-learn, I'm able to do that. Another method is to calculate variance inflation factors (VIFs) for each variable as k increases. When the VIFs decrease to <5 it is an indication the fit is satisfactory. Statsmodels has code for VIFs, but it is for an OLS regression. I've attempted to alter it to handle a ridge regression.
I'm checking my results against Regression Analysis by Example, 5th edition, chapter 10. My code generates the correct results for k = 0.000, but not after that. Working SAS code is available, but I'm not a SAS user and I don't know the differences between that implementation and scikit-learn's (and/or statsmodels's).
I've been stuck on this for a few days so any help would be much appreciated.
#http://www.ats.ucla.edu/stat/sas/examples/chp/chp_ch10.htm
from __future__ import division
import numpy as np
import pandas as pd

example = pd.read_csv('by_example_import.csv')
example.dropna(inplace=True)

from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(example)
scaler.transform(example)

X = example.drop(['year', 'import'], axis=1)
#c_matrix = X.corr()
y = example['import']
#w, v = np.linalg.eig(c_matrix)

import pylab as pl
from sklearn import linear_model

###############################################################################
# Compute paths

alphas = [0.000, 0.001, 0.003, 0.005, 0.007, 0.009, 0.010, 0.012, 0.014, 0.016, 0.018,
          0.020, 0.022, 0.024, 0.026, 0.028, 0.030, 0.040, 0.050, 0.060, 0.070, 0.080,
          0.090, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.5, 2.0]
clf = linear_model.Ridge(fit_intercept=False)
clf2 = linear_model.Ridge(fit_intercept=False)

coefs = []
vif_list = [[] for x in range(X.shape[1])]
for a in alphas:
    clf.set_params(alpha=a)
    clf.fit(X, y)
    coefs.append(clf.coef_)

    for j, data in enumerate(X.columns):
        cols = [col for col in X.columns if col not in [data]]
        Z = X[cols]
        yy = X.iloc[:, j]
        clf2.set_params(alpha=a)
        clf2.fit(Z, yy)

        r_squared_j = clf2.score(Z, yy)
        vif = 1. / (1. - r_squared_j)
        print(r_squared_j)
        vif_list[j].append(vif)

pd.DataFrame(vif_list, columns=alphas).T
pd.DataFrame(coefs, index=alphas)

###############################################################################
# Display results

ax = pl.gca()
ax.set_color_cycle(['b', 'r', 'g', 'c', 'k', 'y', 'm'])
ax.plot(alphas, coefs)
pl.vlines(ridge_cv.alpha_, np.min(coefs), np.max(coefs), linestyle='dashdot')
pl.xlabel('alpha')
pl.ylabel('weights')
pl.title('Ridge coefficients as a function of the regularization')
pl.axis('tight')
pl.show()
Variance inflation factor for Ridge regression is just three lines. I checked it with the example on the UCLA statistics page.
A variation of this will make it into the next statsmodels release. Here is my current function:
def vif_ridge(corr_x, pen_factors, is_corr=True):
    """variance inflation factor for Ridge regression

    assumes penalization is on standardized variables
    data should not include a constant

    Parameters
    ----------
    corr_x : array_like
        correlation matrix if is_corr=True or original data if is_corr is False.
    pen_factors : iterable
        iterable of Ridge penalization factors
    is_corr : bool
        Boolean to indicate how corr_x is interpreted, see corr_x

    Returns
    -------
    vif : ndarray
        variance inflation factors for parameters in columns and ridge
        penalization factors in rows

    could be optimized for repeated calculations
    """
    corr_x = np.asarray(corr_x)
    if not is_corr:
        corr = np.corrcoef(corr_x, rowvar=0, bias=True)
    else:
        corr = corr_x
    eye = np.eye(corr.shape[1])
    res = []
    for k in pen_factors:
        minv = np.linalg.inv(corr + k * eye)
        vif = minv.dot(corr).dot(minv)
        res.append(np.diag(vif))
    return np.asarray(res)
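To make the function easier to try out, here is a small usage sketch (an addition, on synthetic collinear data rather than the book's dataset):
import numpy as np

rng = np.random.RandomState(0)
data = rng.randn(100, 4)
data[:, 3] = data[:, 0] + 0.1 * rng.randn(100)   # make two columns strongly collinear

pen_factors = [0.0, 0.01, 0.05, 0.1, 0.5]
vifs = vif_ridge(data, pen_factors, is_corr=False)
print(vifs)   # one row per penalty factor, one column per variable; the VIFs shrink as k grows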

How do I get the components for LDA in scikit-learn?

When using PCA in sklearn, it's easy to get out the components:
from sklearn import decomposition
pca = decomposition.PCA(n_components=n_components)
pca_data = pca.fit(input_data)
pca_components = pca.components_
But I can't for the life of me figure out how to get the components out of LDA, as there is no components_ attribute. Is there a similar attribute in sklearn lda?
In the case of PCA, the documentation is clear. The pca.components_ are the eigenvectors.
In the case of LDA, we need the lda.scalings_ attribute.
Visual example using iris data and sklearn:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
iris = datasets.load_iris()
X = iris.data
y = iris.target
#In general it is a good idea to scale the data
scaler = StandardScaler()
scaler.fit(X)
X=scaler.transform(X)
lda = LinearDiscriminantAnalysis()
lda.fit(X,y)
x_new = lda.transform(X)
Verify that the lda.scalings_ are the eigenvectors:
print(lda.scalings_)
print(lda.transform(np.identity(4)))
[[-0.67614337 0.0271192 ]
[-0.66890811 0.93115101]
[ 3.84228173 -1.63586613]
[ 2.17067434 2.13428251]]
[[-0.67614337 0.0271192 ]
[-0.66890811 0.93115101]
[ 3.84228173 -1.63586613]
[ 2.17067434 2.13428251]]
Additionally here is a useful function to plot the biplot and verify visually:
def myplot(score, coeff, labels=None):
    xs = score[:, 0]
    ys = score[:, 1]
    n = coeff.shape[0]
    plt.scatter(xs, ys, c=y)  # without scaling
    for i in range(n):
        plt.arrow(0, 0, coeff[i, 0], coeff[i, 1], color='r', alpha=0.5)
        if labels is None:
            plt.text(coeff[i, 0] * 1.15, coeff[i, 1] * 1.15, "Var" + str(i + 1), color='g', ha='center', va='center')
        else:
            plt.text(coeff[i, 0] * 1.15, coeff[i, 1] * 1.15, labels[i], color='g', ha='center', va='center')
    plt.xlabel("LD{}".format(1))
    plt.ylabel("LD{}".format(2))
    plt.grid()

#Call the function.
myplot(x_new[:, 0:2], lda.scalings_)
plt.show()
Results
My reading of the code is that the coef_ attribute is used to weight each of the components when scoring a sample's features against the different classes; scalings_ holds the eigenvectors and xbar_ is the mean. In the spirit of UTSL, here's the source for the decision function:
https://github.com/scikit-learn/scikit-learn/blob/6f32544c51b43d122dfbed8feff5cd2887bcac80/sklearn/discriminant_analysis.py#L166
In PCA, the transform operation uses self.components_.T (see the code):
X_transformed = np.dot(X, self.components_.T)
In LDA, the transform operation uses self.scalings_ (see the code):
X_new = np.dot(X, self.scalings_)
Note the .T which transposes the array in the PCA, and not in LDA:
PCA: components_ : array, shape (n_components, n_features)
LDA: scalings_ : array, shape (n_features, n_classes - 1)
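As a quick check (a sketch continuing the iris example above), you can reproduce lda.transform by hand; note that, depending on the scikit-learn version and solver, the data may first be centered with lda.xbar_ before being projected onto lda.scalings_:
# Manual LDA projection; getattr falls back to 0 when xbar_ is absent (e.g. the eigen solver).
manual = np.dot(X - getattr(lda, 'xbar_', 0.0), lda.scalings_)
print(np.allclose(manual, lda.transform(X)))   # True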
