Factor Loadings using sklearn - python

I want the correlations between individual variables and principal components in Python.
I am using PCA in sklearn. I don't understand how I can obtain the loading matrix after I have decomposed my data. My code is here:
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
data, y = iris.data, iris.target
pca = PCA(n_components=2)
transformed_data = pca.fit(data).transform(data)
eigenValues = pca.explained_variance_ratio_  # note: these are the explained variance *ratios*, not raw eigenvalues
http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html doesn't mention how this can be achieved.

Multiply each component by the square root of its corresponding eigenvalue:
pca.components_.T * np.sqrt(pca.explained_variance_)
This should produce your loading matrix.
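A minimal sketch on the iris data (my addition, not part of the original answer) showing how these loadings relate to the correlations the question asks for: on standardized data the loadings are the correlations; otherwise you divide each row by that variable's standard deviation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# loading matrix: one row per original variable, one column per component
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

# correlations between each original variable and each component score,
# computed directly for comparison
corr = np.array([[np.corrcoef(X[:, j], scores[:, k])[0, 1]
                  for k in range(scores.shape[1])]
                 for j in range(X.shape[1])])

# dividing each row of the loadings by that variable's standard deviation
# (ddof=1, matching sklearn's explained_variance_) recovers the correlations
print(np.allclose(loadings / X.std(axis=0, ddof=1)[:, None], corr))  # True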

I think that @RickardSjogren is describing the eigenvectors, while @BigPanda is giving the loadings. There's a big difference: see "Loadings vs eigenvectors in PCA: when to use one or another?".
I created this PCA class with a loadings method.
Loadings, as given by pca.components_ * np.sqrt(pca.explained_variance_), are more analogous to coefficients in a multiple linear regression. I don't use .T here because in the PCA class linked above, the components are already transposed. numpy.linalg.svd produces u, s, and vt, where vt is the Hermitian transpose, so you first need to recover v with vt.T.
There is also one other important detail: the signs (positive/negative) on the components and loadings in sklearn.PCA may differ from packages such as R.
More on that here:
In sklearn.decomposition.PCA, why are components_ negative?.
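To make the numpy/sklearn correspondence above concrete, here is a small sketch of my own on the iris data; the absolute values are compared precisely because of the sign indeterminacy just mentioned.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
Xc = X - X.mean(axis=0)          # PCA centres the data internally

# numpy returns u, s, vt, where vt is the transpose of v
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
pca = PCA().fit(X)

# the rows of pca.components_ are the rows of vt, up to column signs
print(np.allclose(np.abs(pca.components_), np.abs(vt)))  # True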

According to this blog the rows of pca.components_ are the loading vectors. So:
loadings = pca.components_


How to calculate correlation coefficients using sklearn CCA module?

I need to measure the similarity between feature vectors using the CCA module. I saw that sklearn has a good CCA module available: https://scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.CCA.html
In different papers I reviewed, I saw that the way to measure similarity using CCA is to calculate the mean of the correlation coefficients, for example as done in the following notebook example: https://github.com/google/svcca/blob/1f3fbf19bd31bd9b76e728ef75842aa1d9a4cd2b/tutorials/001_Introduction.ipynb
How to calculate the correlation coefficients (as shown in the notebook) using sklearn CCA module?
from sklearn.cross_decomposition import CCA
import numpy as np
U = np.random.random_sample(500).reshape(100,5)
V = np.random.random_sample(500).reshape(100,5)
cca = CCA(n_components=1)
cca.fit(U, V)
cca.coef_.shape # (5,5)
U_c, V_c = cca.transform(U, V)
U_c.shape # (100,1)
V_c.shape # (100,1)
This is an example of the sklearn CCA module; however, I have no idea how to retrieve the correlation coefficients from it.
In reference to the notebook you provided, which is a supporting artefact to, and implements ideas from, the following two papers:
"SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability". Neural Information Processing Systems (NeurIPS) 2017
"Insights on Representational Similarity in Deep Neural Networks with Canonical Correlation". Neural Information Processing Systems (NeurIPS) 2018
The authors there calculate 50 = min(A_fake neurons, B_fake neurons) components and plot the correlation between the transformed vectors for each of those 50 components.
With the help of the code below, using sklearn CCA, I am trying to reproduce their toy example. As we'll see, the correlation plots match. The sanity check they used in the notebook came in very handy; it passed seamlessly with this code as well.
import numpy as np
from matplotlib import pyplot as plt
from sklearn.cross_decomposition import CCA
# rows are the samples for CCA; the random variables go in the columns
X = np.random.randn(2000, 100)
Y = np.random.randn(2000, 50)
# num of components
n_comps = min(X.shape[1], Y.shape[1])
cca = CCA(n_components=n_comps)
cca.fit(X, Y)
X_c, Y_c = cca.transform(X, Y)
# calculate and plot the correlations of all components
corrs = [np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(n_comps)]
plt.plot(corrs)
plt.xlabel('cca_idx')
plt.ylabel('cca_corr')
plt.show()
Output: (correlation plot omitted)
For the sanity check, replace the Y data matrix by a scaled invertible transform of X and rerun the code.
Y = np.dot(X, np.random.randn(100, 100))
Output: (correlation plot omitted)

Principal Component Analysis: Order of components AFTER transformation

I am using the PCA class from sklearn.decomposition to reduce the dimensionality of the feature space in order to plot that feature space.
I am wondering the following: after applying the fit and transform methods of the PCA class, I get back an array X_transformed of shape (n_samples, n_components), as stated in the documentation. Is the order of the columns of X_transformed sorted by the amount of explained variance? The documentation says that PCA.components_ is sorted by explained variance, so I am assuming that the columns of X_transformed are as well, but please correct me if I am wrong.
Little example:
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(X) # X is an array containing my original features. X.shape=(n_samples, n_features)
X_transformed = pca.transform(X) # X_transformed.shape=(n_samples, n_components). Are X_transformed's columns sorted by explained variance?
Thanks!
Hmm, maybe I just got an idea for how to test that:
from sklearn.decomposition import PCA
import numpy as np
pca_2 = PCA(n_components=2)
X_transformed_2 = pca_2.fit_transform(X)
# X_transformed_2 holds the two components with the most variance explained
pca_10 = PCA(n_components=10)
X_transformed_10 = pca_10.fit_transform(X)
# X_transformed_10 holds the 10 components with the most variance explained
# Hypothesis: if the first 2 components in X_transformed_10 are ordered by explained variance, its first 2 columns should equal X_transformed_2
np.array_equal(X_transformed_2, X_transformed_10[:, :2]) ## returns True
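A more direct check is to look at the per-column variance of the transformed data. A small sketch of my own, using random data as a stand-in for X: the column variances equal pca.explained_variance_ and are sorted in decreasing order.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(100, 10)            # stand-in for the original features

pca = PCA()
X_transformed = pca.fit_transform(X)

# per-column variance of the transformed data (ddof=1 matches sklearn)
col_var = X_transformed.var(axis=0, ddof=1)

print(np.allclose(col_var, pca.explained_variance_))  # True
print(np.all(np.diff(col_var) <= 0))                  # True: largest variance first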

How does PCA compute the transformed version in `sklearn`?

I'm confused about sklearn's PCA (here is the documentation) and its relation to Singular Value Decomposition (SVD).
In Wikipedia we have:
The full principal components decomposition of X can therefore be given as $T = XW$,
where W is a p-by-p matrix of weights whose columns are the eigenvectors of $X^T X$. The transpose of W is sometimes called the whitening or sphering transformation.
Later, when it explains the relationship with SVD, we have:
$X = U \Sigma W^T$
So I assume that the matrix W embeds samples into latent space (which makes sense given the dimensions of the matrices), and that using the transform method of the PCA class in sklearn should give the same result as multiplying the observation matrix by W. However, I checked them and they don't match.
Is there anything wrong that I'm missing or there's a bug in the code?
import numpy as np
from sklearn.decomposition import PCA
x = np.random.rand(200).reshape(20,10)
x = x-x.mean(axis=0)
u, s, vh = np.linalg.svd(x, full_matrices=False)
pca = PCA().fit(x)
# transformed version based on WIKI: t = X@vh.T = u@np.diag(s)
t_svd1 = x @ vh.T
t_svd2 = u @ np.diag(s)
# the pca transform
t_pca = pca.transform(x)
print(np.abs(t_svd1-t_pca).max()) # should be a small value, but it's not :(
print(np.abs(t_svd2-t_pca).max()) # should be a small value, but it's not :(
There is a difference between the theoretical Wikipedia description and the practical sklearn implementation, but it is not a bug, merely a stability and reproducibility enhancement.
You have pretty much nailed the exact implementation of PCA; however, in order to be able to fully reproduce the computation, the sklearn developers added one more enforcement to their implementation. The problem stems from the indeterminate nature of SVD, i.e. the SVD does not have a unique solution. That can easily be seen from your equation as well: setting U_s = -U and W_s = -W, then U_s and W_s also satisfy:
$X = U_s \Sigma W_s^T$
More importantly, this also holds when switching the signs of individual columns of U and W: if we reverse the signs of the k-th column of U and the k-th column of W, the equality still holds. You can read more about this issue, e.g., here: https://prod-ng.sandia.gov/techlib-noauth/access-control.cgi/2007/076422.pdf.
The PCA implementation deals with this problem by enforcing that the entry with the largest absolute value in each column is always positive; specifically, the helper sklearn.utils.extmath.svd_flip is used. This way, no matter which signs the resulting vectors get from the indeterminate np.linalg.svd, the absolute loading values remain the same, i.e. the signs of the matrices are fixed deterministically.
Thus in order for your code to have the same result as the PCA implementation:
import numpy as np
from sklearn.decomposition import PCA
np.random.seed(41)
x = np.random.rand(200).reshape(20,10)
x = x-x.mean(axis=0)
u, s, vh = np.linalg.svd(x, full_matrices=False)
max_abs_cols = np.argmax(np.abs(u), axis=0)
signs = np.sign(u[max_abs_cols, range(u.shape[1])])
u *= signs
vh *= signs.reshape(-1,1)
pca = PCA().fit(x)
# transformed version based on WIKI: t = X@vh.T = u@np.diag(s)
t_svd1 = x @ vh.T
t_svd2 = u @ np.diag(s)
# the pca transform
t_pca = pca.transform(x)
print(np.abs(t_svd1-t_pca).max()) # pretty small value :)
print(np.abs(t_svd2-t_pca).max()) # pretty small value :)
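For completeness, the manual sign flip above can also be done with the helper mentioned in the answer. A sketch assuming u_based_decision=True, which mirrors the argmax-of-|u| criterion used above (exact defaults may differ across sklearn versions):
import numpy as np
from sklearn.utils.extmath import svd_flip

np.random.seed(41)
x = np.random.rand(200).reshape(20, 10)
x = x - x.mean(axis=0)

u, s, vh = np.linalg.svd(x, full_matrices=False)

# manual flip, as in the answer above
max_abs_cols = np.argmax(np.abs(u), axis=0)
signs = np.sign(u[max_abs_cols, range(u.shape[1])])
u_manual, vh_manual = u * signs, vh * signs.reshape(-1, 1)

# the same sign convention via the sklearn helper
u_helper, vh_helper = svd_flip(u.copy(), vh.copy(), u_based_decision=True)

print(np.allclose(u_manual, u_helper), np.allclose(vh_manual, vh_helper))  # True True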

scikit-learn (sklearn) PCA decomposition transform

To manually perform a transformation, I have to matrix multiply the original data (de-meaned) by PCA().fit(data).components_.T (the transpose of the components) to get the results to match PCA().fit_transform(data).
Does anyone know if this is intentional, and if so, why? Looking at the comments in the source code, there is a note that it should be X_new = X * V, where V is the components, but I find that it is X_new = X * V^T.
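A minimal sketch confirming the behaviour described (my own check, not an authoritative answer): the rows of components_ are the component vectors, i.e. components_ plays the role of V^T, which is why the transpose is needed.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
data = rng.rand(50, 5)

pca = PCA(n_components=3)
scores = pca.fit_transform(data)

# components_ has shape (n_components, n_features): each ROW is a component,
# so it corresponds to V^T and the projection needs the transpose
manual = (data - data.mean(axis=0)) @ pca.components_.T
print(np.allclose(scores, manual))  # True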

covariance matrix using np.matmul(data.T, data)

This is the code I've found online
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

d0 = pd.read_csv('./mnist_train.csv')
labels = d0.label.head(15000)
data = d0.drop('label', axis=1).head(15000)

standardized_data = StandardScaler().fit_transform(data)

# find the co-variance matrix, which is: (A^T * A)/n
sample_data = standardized_data
# matrix multiplication using numpy
covar_matrix = np.matmul(sample_data.T, sample_data) / len(sample_data)
How does multiplying the data by its own transpose, np.matmul(sample_data.T, sample_data), give the covariance matrix? What is the covariance matrix according to this tutorial I found online? The last step is what I don't understand.
This might be a better question for the math or stats stack exchange, but I'll answer here for now.
This comes from the definition of covariance. The Wikipedia page (linked) gives a whole lot of detail, but covariance is defined as (in pseudo-code)
cov = E[dot((x - E[x]), (x - E[x]).T)]
for column vectors, but in your case you probably have row vectors, which is why the first element in your dot-product is transposed, not the second. The E[...] means expected value, which is the mean for Gaussian-distributed data. When you perform StandardScaler().fit_transform(data), you are basically subtracting out the mean of the data, so that's why you don't explicitly do so in your dot product.
Note that StandardScaler() is also dividing by the variance, so it's normalizing everything to unit variance. This is going to affect your covariance! So if you need the actual covariance of the data without normalization, just calculate it with something like np.cov() from the numpy module.
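A quick sketch of that relation (my own example; bias=True makes np.cov divide by n rather than n - 1, matching the tutorial's formula):
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(1000, 3)
Xc = X - X.mean(axis=0)                 # centring, as StandardScaler's mean subtraction does

manual = np.matmul(Xc.T, Xc) / len(Xc)  # (A^T A) / n on centred data
library = np.cov(X, rowvar=False, bias=True)

print(np.allclose(manual, library))  # True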
Let's build towards the covariance matrix step by step. First, let's define variance.
The variance of some random variable X is a measure of how much values in the distribution vary on average with respect to the mean.
Now we have to define covariance.
Covariance is the measure of the joint probability for two random variables. It describes how the two variables change together. Read here.
So now, armed with that, you can understand that the covariance matrix is a matrix which shows how each feature varies with changes in the other features. For mean-centred data A with one sample per row, it can be calculated as $\frac{1}{n} A^T A$, which is exactly the equation at the bottom of the tutorial that you are confused about (see the covariance matrix article on Wikipedia for the full derivation). If you have any further queries, comment below.
