Finding and utilizing eigenvalues and eigenvectors from PCA in scikit-learn - python

I have been using the PCA implementation in scikit-learn. However, I want to find the eigenvalues and eigenvectors that result after fitting the training dataset. There is no mention of either in the docs.
Secondly, can these eigenvalues and eigenvectors themselves be utilized as features for classification purposes?

I am assuming here that by eigenvectors you mean the eigenvectors of the covariance matrix.
Let's say that you have n data points in a p-dimensional space, and X is a p x n matrix of your points. Then the directions of the principal components are the eigenvectors of the covariance matrix XXᵀ. You can obtain these eigenvectors from sklearn by accessing the components_ attribute of the PCA object. This can be done as follows:
from sklearn.decomposition import PCA
import numpy as np
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca = PCA()
pca.fit(X)
print(pca.components_)
This gives an output like
[[ 0.83849224  0.54491354]
 [ 0.54491354 -0.83849224]]
where every row is a principal component in the p-dimensional space (2 in this toy example). Each of these rows is an eigenvector of the covariance matrix XXᵀ of the centered data.
As far as the eigenvalues go, there is no straightforward way to get them from the PCA object. The PCA object does have an attribute called explained_variance_ratio_, which gives the proportion of the variance explained by each component. These numbers are proportional to the eigenvalues. In the case of our toy example, printing the explained_variance_ratio_ attribute gives:
[ 0.99244289 0.00755711]
This means that the ratio of the eigenvalue of the first principal component to the eigenvalue of the second principal component is 0.99244289:0.00755711.
If you are comfortable with the basic mathematics of PCA, a better way to get the eigenvectors and eigenvalues is to apply numpy.linalg.eig to the covariance matrix of the centered data. If your data matrix is a p x n matrix X (p features, n points), you can use the following code:
import numpy as np
centered_matrix = X - X.mean(axis=1)[:, np.newaxis]  # subtract each feature's mean
cov = np.dot(centered_matrix, centered_matrix.T)     # p x p scatter/covariance matrix
eigvals, eigvecs = np.linalg.eig(cov)
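For reference, here is a small cross-check you would expect to hold (a sketch using the toy X from earlier, transposed to p x n; dividing the eigenvalues by n - 1 matches sklearn's sample-covariance convention, and np.linalg.eig does not sort its output, so we sort it ourselves):
import numpy as np
from sklearn.decomposition import PCA
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]]).T  # p x n
centered_matrix = X - X.mean(axis=1)[:, np.newaxis]
cov = np.dot(centered_matrix, centered_matrix.T)
eigvals, eigvecs = np.linalg.eig(cov)
order = np.argsort(eigvals)[::-1]                 # sort eigenvalues, largest first
pca = PCA().fit(X.T)                              # sklearn expects an n x p array
print(eigvals[order] / (X.shape[1] - 1))          # matches pca.explained_variance_
print(eigvecs[:, order].T)                        # matches pca.components_ up to sign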
Coming to your second question: these eigenvalues and eigenvectors cannot themselves be used for classification. For classification you need features for each data point, whereas the eigenvectors and eigenvalues you generate are derived from the entire covariance matrix XXᵀ. For dimensionality reduction you could use the projections of your original points (in the p-dimensional space) onto the principal components obtained from PCA. However, even this is not always useful, because PCA does not take the labels of your training data into account. I would recommend looking into LDA for supervised problems.
Hope that helps.

The docs say explained_variance_ will give you
"The amount of variance explained by each of the selected components. Equal to n_components largest eigenvalues of the covariance matrix of X.", new in version 0.18.
Seems a little questionable since the first and second sentences do not seem to agree.
sklearn PCA documentation
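In any case, a quick way to see both attributes side by side (a minimal sketch, assuming scikit-learn >= 0.18 and reusing the toy array from the accepted answer):
from sklearn.decomposition import PCA
import numpy as np
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca = PCA().fit(X)
print(pca.explained_variance_)        # eigenvalues of the covariance matrix of X
print(pca.explained_variance_ratio_)  # the same values normalized to sum to 1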

Related

Scipy and the hierarchical clustering input

When performing hierarchical clustering with scipy, the docs here say that scipy.cluster.hierarchy.linkage takes a 1-D condensed distance matrix or a 2-D array of observation vectors as input. However, I generated a simple (symmetric) similarity matrix with a pandas DataFrame, and scipy took that as input with no problem at all, and the resulting dendrogram is just fine.
Can someone explain, how is this possible? Do I have outdated docs or...?
The docs are accurate; they just don't tell you what will happen if you actually try to use an uncondensed distance matrix.
The function raises a warning but still runs, because it first tries to convert its input into a numpy array. This creates a 2-D array from your 2-D DataFrame, while at the same time recognizing, from the array's dimensions and symmetry, that this likely isn't the expected input.
Depending on the complexity of your input data (e.g. cluster separation, number of clusters, distribution of data across clusters), the clustering may still look like it succeeds in generating a suitable dendrogram, as you noted. This makes sense conceptually, because the result is a clustering of n similarity vectors, which may be well separated in simple cases; see the small illustration below.
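As a minimal illustration of that conversion (a sketch with a made-up 3 x 3 symmetric matrix): linkage warns that the input looks like an uncondensed distance matrix, then clusters the three rows as ordinary 3-D observation vectors.
import numpy as np
from scipy.cluster.hierarchy import linkage
sim = np.array([[0.0, 0.5, 0.9],
                [0.5, 0.0, 0.4],
                [0.9, 0.4, 0.0]])
Z = linkage(sim, 'ward')   # warns, but still returns a valid linkage matrix
print(Z)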
For example, here is some synthetic data with 150 observations and 2 clusters:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import cosine, pdist, squareform
np.random.seed(42)  # for repeatability
a = np.random.multivariate_normal([10, 0], [[3, 1], [1, 4]], size=[100,])
b = np.random.multivariate_normal([0, 20], [[3, 1], [1, 4]], size=[50,])
obs_df = pd.DataFrame(np.concatenate((a, b)), columns=['x', 'y'])
obs_df.plot.scatter(x='x', y='y')
Z = linkage(obs_df, 'ward')
fig = plt.figure(figsize=(8, 4))
dn = dendrogram(Z)
If you generate a similarity matrix, this is an n x n matrix that could still be clustered as if it were n vectors. I can't plot 150-D vectors, but plotting the magnitude of each vector and then the dendrogram seems to confirm a similar clustering.
def similarity_func(u, v):
    return 1 - cosine(u, v)

dists = pdist(obs_df, similarity_func)
sim_df = pd.DataFrame(squareform(dists), columns=obs_df.index, index=obs_df.index)
sim_array = np.asarray(sim_df)
sim_lst = []
for vec in sim_array:
    mag = np.linalg.norm(vec, ord=1)
    sim_lst.append(mag)
pd.Series(sim_lst).plot.bar()
Z = linkage(sim_df, 'ward')
fig = plt.figure(figsize=(8, 4))
dn = dendrogram(Z)
What we're really clustering here are vectors whose components are similarity measures against each of the 150 points; that is, we're clustering each point's collection of intra- and inter-cluster similarities. Since the two clusters are different sizes, a point in one cluster will have a rather different collection of intra- and inter-cluster similarities than a point in the other cluster. Hence we get two primary clusters, proportionate to the number of points in each cluster, just as we did in the first step.

Equivalent of Matlab's PCA in Python sklearn

I'm a Matlab user and I'm learning Python with the sklearn library. I want to translate this Matlab code
[coeff,score] = pca(X)
For coeff I have tried this in Python:
from sklearn.decomposition import PCA
import numpy as np
pca = PCA()
pca.fit(X)
coeff = np.transpose(pca.components_)
print(coeff)
I don't know whether or not it's right; for score I have no idea.
Could anyone enlighten me about the correctness of coeff and the feasibility of score?
The sklearn PCA has a score method as described in the documentation: https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
Try pca.score(X) or pca.score_samples(X), depending on whether you want a single score for all samples (the former) or a score for each sample (the latter).
The PCA score in sklearn is different from matlab.
In sklearn, pca.score() or pca.score_samples() gives the log-likelihood of samples, whereas Matlab's score gives the principal component scores.
From sklearn Documentation:
Return the log-likelihood of each sample.
Parameters:
X : array, shape(n_samples, n_features)
The data.
Returns:
ll : array, shape (n_samples,)
Log-likelihood of each sample under the current model
From matlab documentation:
[coeff,score,latent] = pca(___) also returns the principal component
scores in score and the principal component variances in latent. You
can use any of the input arguments in the previous syntaxes.
Principal component scores are the representations of X in the
principal component space. Rows of score correspond to observations,
and columns correspond to components.
The principal component variances are the eigenvalues of the
covariance matrix of X.
Now, the equivalent of Matlab's score in sklearn's PCA is fit_transform() or transform():
>>> import numpy as np
>>> from sklearn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> matlab_equi_score = pca.fit_transform(X)
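Putting the pieces together, a rough mapping of [coeff, score, latent] = pca(X) onto sklearn (a sketch; the columns of coeff may differ from Matlab's by a sign flip, which is an inherent ambiguity of PCA):
import numpy as np
from sklearn.decomposition import PCA
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca = PCA(n_components=2)
score = pca.fit_transform(X)          # Matlab's score: X projected onto the components
coeff = pca.components_.T             # Matlab's coeff: columns are the component directions
latent = pca.explained_variance_      # Matlab's latent: the component variances (eigenvalues)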

What does the numpy.linalg.norm function do?

What is the purpose of the numpy.linalg.norm method?
In this k-means clustering sample, the numpy.linalg.norm function is used to get the distance between the new centroids and the old centroids in the centroid-movement step, but I cannot understand what it means by itself.
Could somebody give me a few ideas in relation to this k-means clustering context?
What is the norm of a vector?
numpy.linalg.norm is used to calculate the norm of a vector or a matrix.
This is the help document taken from numpy.linalg.norm:
numpy.linalg.norm(x, ord=None, axis=None, keepdims=False)
This is the code snippet taken from K-Means Clustering in Python:
# Euclidean distance calculator
def dist(a, b, ax=1):
    return np.linalg.norm(a - b, axis=ax)
It takes ord=None by default, which along an axis means the 2-norm, so np.linalg.norm(a - b, axis=ax) computes the Euclidean distance between each row of a and the corresponding row of b.
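A quick sanity check with made-up centroid coordinates (a sketch, not taken from the linked sample) shows the call really does return one Euclidean distance per row:
import numpy as np
a = np.array([[0.0, 0.0], [1.0, 2.0]])        # e.g. old centroids
b = np.array([[3.0, 4.0], [1.0, 0.0]])        # e.g. new centroids
print(np.linalg.norm(a - b, axis=1))          # [5. 2.]
print(np.sqrt(((a - b) ** 2).sum(axis=1)))    # same values, computed by hand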
I am not a mathematician but here is my layman's explanation of “norm”:
A vector describes the location of a point in space relative to the origin. Take the point [3, 2] in 2D space as an example.
The norm is the distance from that point to the origin. In the 2D case it’s easy to picture the vector as the hypotenuse of a right triangle whose legs are the coordinates, and to see that the norm is simply the length of that hypotenuse: ‖[3, 2]‖ = √(3² + 2²) ≈ 3.61.
However, in higher dimensions it’s no longer a shape we describe in average-person language, but the distance from the origin to the point is still called the norm; the same sum-of-squares formula just gains a term per dimension.
I don’t know why the norm is used in K-means clustering. You stated that it was part of determining the distance between the old and new centroid in each step. I'm not sure why one would use the norm for this, since you can get the distance between two points in any dimensionality* by extending the 2D formula d = √((x₁ − x₂)² + (y₁ − y₂)²):
you just add a term for each additional dimension, for example the 3D version is d = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²).
*where the dimensions are positive integers
numpy.linalg.norm can also be applied along a row or column of a matrix. With the default ord it returns the 2-norm (the square root of the sum of squares) along that axis; with ord=1 it returns the sum of absolute values. Suppose:
>>> import numpy as np
>>> from numpy import linalg as LA
>>> c = np.array([[ 1, 2, 3],
...               [-1, 1, 4]])
>>> LA.norm(c, axis=0)
array([ 1.41421356,  2.23606798,  5.        ])
>>> LA.norm(c, axis=1)
array([ 3.74165739,  4.24264069])
>>> LA.norm(c, ord=1, axis=1)
array([6., 6.])
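To connect this back to the question, a tiny cross-check (same matrix c as above) makes the default behaviour explicit: the plain call is a root-sum-of-squares, not a plain sum.
import numpy as np
from numpy import linalg as LA
c = np.array([[1, 2, 3],
              [-1, 1, 4]])
print(np.sqrt((c ** 2).sum(axis=1)))   # [3.74165739 4.24264069] == LA.norm(c, axis=1)
print(np.abs(c).sum(axis=1))           # [6 6]                   == LA.norm(c, ord=1, axis=1)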

Does sklearn's implementation of PCA preserve order of input?

Let's say this is how I do my PCA with sklearn's sklearn.decomposition.PCA:
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def doPCA(arr):
    scaler = StandardScaler()
    scaler.fit(arr)
    arr = scaler.transform(arr)
    pca = PCA(n_components=2)
    X = pca.fit_transform(arr)
    return X
My current understanding is that I get an output array of the same length, but each sample is now of dimension 2.
Now, I am interested where a value in my original array arr ended up after the PCA.
My question is:
Can I assume that X[i] corresponds to arr[i]?
What you obtain as X in your code, which is U[:, :n_components] * S[:n_components], are the PCA loadings of your samples on the first n_components. To understand why X[i] should correspond to arr[i], let us see what loadings mean.
Loadings
Imagine the eigenvectors as the basis vectors of a new space with n_components dimensions. The loadings define where each data point lies in this new space; in other words, they are the original data points, projected from the full feature space onto the reduced-dimensional space. Equivalently, they are the coefficients of the linear combination of the (standardized) components that best reconstructs the original features. Because each row of X is computed from the corresponding row of the input, the ordering of the samples is preserved.
So you can assume that X[i] corresponds to arr[i].
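If you want to convince yourself, here is a minimal check (a sketch with toy data, replicating the same scaler-then-PCA pipeline as doPCA): transforming only the i-th sample with the fitted pipeline reproduces X[i].
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
arr = np.array([[1., 2.], [3., 5.], [8., 10.], [-1., 1.], [5., 6.]])
scaler = StandardScaler().fit(arr)
pca = PCA(n_components=2).fit(scaler.transform(arr))
X = pca.transform(scaler.transform(arr))
i = 2
print(np.allclose(pca.transform(scaler.transform(arr[[i]])), X[i]))  # True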

Performing SVD with sklearn.decomposition.PCA, how can I get the U, S, V from this?

I perform SVD with sklearn.decomposition.PCA
From the equation of the SVD:
A = U × S × V_t
where V_t is the transpose of V.
(Sorry I can't paste the original equation)
If I want the matrix U, S, and V, how can I get it if I use the sklearn.decomposition.PCA ?
First of all, depending on the size of your matrix, sklearn's implementation of PCA will not always compute the full SVD decomposition. The following is taken from the PCA docstring in scikit-learn's GitHub repository:
svd_solver : string {'auto', 'full', 'arpack', 'randomized'}
auto :
the solver is selected by a default policy based on `X.shape` and
`n_components`: if the input data is larger than 500x500 and the
number of components to extract is lower than 80% of the smallest
dimension of the data, then the more efficient 'randomized'
method is enabled. Otherwise the exact full SVD is computed and
optionally truncated afterwards.
full :
run exact full SVD calling the standard LAPACK solver via
`scipy.linalg.svd` and select the components by postprocessing
arpack :
run SVD truncated to n_components calling ARPACK solver via
`scipy.sparse.linalg.svds`. It requires strictly
0 < n_components < X.shape[1]
randomized :
run randomized SVD by the method of Halko et al.
In addition, it also performs some manipulations on the data (see here).
Now, if you want to get the U, S, V that are used inside sklearn.decomposition.PCA, you can use pca._fit(X) (note that this is a private method, so it may change between versions).
For example:
import numpy as np
from sklearn.decomposition import PCA
X = np.array([[1, 2], [3, 5], [8, 10], [-1, 1], [5, 6]])
pca = PCA(n_components=2)
pca._fit(X)
returns
(array([[ -3.55731195e-01,   5.05615563e-01],
        [  2.88830295e-04,  -3.68261259e-01],
        [  7.10884729e-01,  -2.74708608e-01],
        [ -5.68187889e-01,  -4.43103380e-01],
        [  2.12745524e-01,   5.80457684e-01]]),
 array([ 9.950385  ,  0.76800941]),
 array([[ 0.69988535,  0.71425521],
        [ 0.71425521, -0.69988535]]))
However, if you just want the SVD decomposition of the original data, I would suggest using scipy.linalg.svd.
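For completeness, a small sketch of that route (assuming you want the SVD that underlies PCA, i.e. of the mean-centered data; the rows of Vt then correspond, up to sign, to pca.components_):
import numpy as np
from scipy.linalg import svd
X = np.array([[1, 2], [3, 5], [8, 10], [-1, 1], [5, 6]], dtype=float)
Xc = X - X.mean(axis=0)              # PCA centers the data before the SVD
U, S, Vt = svd(Xc, full_matrices=False)
print(U.shape, S.shape, Vt.shape)    # (5, 2) (2,) (2, 2)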
