Measuring plots of data with PCA or t-SNE and Matplotlib - python

My goal is to find out if I can manipulate and measure data from a PCA or t-SNE plot in Python. I want to know if there is a way I can find the distances of points from the centers of clusters.
I think there is a way, but I'm not too sure.

You don't give many details, but maybe this can help you:
Clustering techniques information:
https://scikit-learn.org/stable/modules/clustering.html#clustering
Dimensionality reduction:
https://scikit-learn.org/stable/modules/decomposition.html#decompositions
Maybe the following script helps you:
import pandas as pd
from sklearn.decomposition import PCA

X = your_data_variables              # your feature matrix, shape (n_samples, n_features)
cluster = "your cluster technique"   # e.g. KMeans(...) or any estimator exposing cluster_centers_
cluster.fit(X)

pca = PCA(n_components=2)
pca.fit(X)
pca_data = pd.DataFrame(pca.transform(X))
centers = pca.transform(cluster.cluster_centers_)
Now you have the cluster centers and your data in two dimensions, and you can calculate the distances however you want.
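For example, a minimal sketch of that distance calculation (assuming cluster is a fitted KMeans-style estimator, so that cluster.labels_ holds each point's cluster assignment):
import numpy as np

labels = cluster.labels_                                   # cluster assignment of each sample
# distance of every point to the center of its own cluster, in PCA space
dist_to_own_center = np.linalg.norm(pca_data.values - centers[labels], axis=1)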

Related

Detect cluster outliers

I have a dataset where every data sample consists of 10-20 2D coordinate points. The data is mostly clean, but occasionally there are falsely annotated points. For illustration, the cleanly annotated data would look like these:
either clustered in a small area or spread across a larger area. The outliers I'm trying to filter out look like this:
the outlier lies away from the "correct" cluster.
I tried z-score filtering, but this approach falsely marked many annotations as outliers:
std_score = np.abs((points - points.mean(axis=0)) / (np.std(points, axis=0) + 0.01))
validity = np.all(std_score <= np.quantile(std_score, 0.95, axis=0), axis=1)
Is there a method designed to solve this problem?
This seems like a typical clustering problem, and if the data looks as you suggested, KMeans from scikit-learn should do the trick. Let's look at how we can do this.
First, I am generating a data sample which might look somewhat like your data.
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)  # for reproducibility
cluster_1 = np.random.normal(loc=[1, 1], scale=[0.2, 0.2], size=(20, 2))
cluster_2 = np.random.normal(loc=[2, 1], scale=[0.4, 0.4], size=(5, 2))

plt.scatter(cluster_1[:, 0], cluster_1[:, 1])
plt.scatter(cluster_2[:, 0], cluster_2[:, 1])
plt.show()

points = np.vstack([cluster_1, cluster_2])
This is how the data will look.
Next, we perform KMeans clustering.
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters = 2).fit(points)
We choose n_clusters = 2, believing that there are 2 clusters in the dataset. After finding these clusters, let's look at them.
plt.scatter(points[kmeans.labels_==0][:,0], points[kmeans.labels_==0][:,1], label='cluster_1')
plt.scatter(points[kmeans.labels_==1][:,0], points[kmeans.labels_==1][:,1], label ='cluster_2')
plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], label = 'cluster_center')
plt.legend()
plt.show()
The result will look like the image shown below.
This should solve your problem, but there are some things which should be kept in mind:
It will not be perfect all the time.
It might be a problem if you don't have any outliers; this can be addressed with silhouette scores.
It is difficult to know which cluster to discard. This can be done by locating the centers of the clusters (the green points) or by finding the cluster with the smaller number of points, as sketched below.
Endnote: You might lose some points, but you would automate the entire process. It depends on how much you want to trade off in terms of data saved versus manual time saved.
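A minimal sketch of that last idea, reusing the kmeans and points variables from above and treating the smaller cluster as the outliers:
import numpy as np

counts = np.bincount(kmeans.labels_)              # number of points in each cluster
keep_label = np.argmax(counts)                    # keep the larger cluster
clean_points = points[kmeans.labels_ == keep_label]
outliers = points[kmeans.labels_ != keep_label]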

PCA after k-means clustering of multidimensional data

I have the following dataset with 10 variables:
I want to identify clusters in this multidimensional dataset, so I tried the k-means clustering algorithm with the following code:
clustering_kmeans = KMeans(n_clusters=2, precompute_distances="auto", n_jobs=-1)
data['clusters'] = clustering_kmeans.fit_predict(data)
In order to plot the result I used PCA for dimensionality reduction:
reduced_data = PCA(n_components=2).fit_transform(data)
results = pd.DataFrame(reduced_data,columns=['pca1','pca2'])
sns.scatterplot(x="pca1", y="pca2", hue=data['clusters'], data=results)
plt.title('K-means Clustering with 2 dimensions')
plt.show()
And in the end I get the following result:
So I have the following questions:
1.) This PCA plot looks really weird, splitting the whole dataset into two corners of the plot. Is that even correct, or did I code something wrong?
2.) Is there another algorithm for clustering multidimensional data? I looked at this but I cannot find an appropriate algorithm for clustering multidimensional data... How do I even implement e.g. Ward hierarchical clustering in Python for my dataset?
3.) Why should I use PCA for dimensionality reduction? Can I also use t-SNE? Is it better?
The problem is that you fit your PCA on your dataframe, but the dataframe contains the cluster labels. The 'clusters' column will probably contain most of the variation in your dataset, and therefore the information in the first PC will just coincide with the data['clusters'] column. Try to fit your PCA only on the distance columns:
data_reduced = PCA(n_components=2).fit_transform(data[['dist1', 'dist2', ..., 'dist10']])
You can fit hierarchical clustering with sklearn by using sklearn.cluster.AgglomerativeClustering(). You can use different distance metrics and linkages such as 'ward'.
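A minimal sketch, assuming data is the dataframe from the question with the 'clusters' column added by the k-means step:
from sklearn.cluster import AgglomerativeClustering

features = data.drop(columns=['clusters'])        # the ten distance columns, without the cluster labels
ward = AgglomerativeClustering(n_clusters=2, linkage='ward')
data['ward_clusters'] = ward.fit_predict(features)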
t-SNE is used to visualize multivariate data; the goal of this technique is not clustering.

Perform Multi-Dimension Scaling (MDS) for clustered categorical data in python

I am currently working on clustering categorical attributes that come from a bank marketing dataset from Kaggle. I have created the three clusters with kmodes:
Output: cluster_df
Now I want to visualize each row of a cluster as a projection or point so that I get some kind of image:
Desired visualization
I am having a hard time with this. I don't get a Euclidean distance with categorical data, right? That makes no sense. Is there then no way to create this desired visualization?
The best way to visualize clusters is to use PCA.
You can use PCA to reduce the multi-dimensional data into 2 dimensions so that you can plot and hopefully understand the data better.
To use it see the following code:
from sklearn.decomposition import PCA
import pandas as pd

pca = PCA(n_components=2)
principalComponents = pca.fit_transform(x)
principalDf = pd.DataFrame(data=principalComponents,
                           columns=['principal component 1', 'principal component 2'])
where x is the (encoded) data that you fed to your clustering step.
Now you can easily visualize your clustered data since it's 2-dimensional.
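For example, a sketch of the plot itself, assuming clusters holds the kmodes label of each row (the variable name is a placeholder, since the kmodes code isn't shown):
import matplotlib.pyplot as plt

plt.scatter(principalDf['principal component 1'],
            principalDf['principal component 2'],
            c=clusters)                            # color each point by its kmodes cluster label
plt.xlabel('principal component 1')
plt.ylabel('principal component 2')
plt.show()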

Graphing multi-dimensional K-means cluster NLP python

I have a multidimensional vector designed for an NLP Classifier.
Here's the dataframe (text_df):
I used a TfidfVectorizer to create the vector:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_v = TfidfVectorizer(max_df=0.5,
                          max_features=13000,
                          min_df=5,
                          stop_words='english',
                          use_idf=True,
                          norm=u'l2',
                          smooth_idf=True)
X = tfidf_v.fit_transform(corpus).toarray()
y = text_df.iloc[:, 1].values
Shape of X is (13834, 2701).
I used 7 clusters for KMeans:
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=7, random_state=42)
y_kmeans = kmeans.fit_predict(X)
I tried using PCA, but I'm not sure if the graph looks right.
from sklearn.decomposition import PCA
X_pca = PCA(2).fit_transform(X)
plt.scatter(X_pca[:,0],X_pca[:,1],c=y_kmeans)
plt.title("Clusters")
plt.legend()
plt.show()
Is this normal for NLP based clusters? I was hoping for more distinctive clusters. Is there a way to clean up this cluster graph? (i.e. clearer groupings, distinct boundaries, cluster points closer together, etc.).
K-means clustering does not work very well on high-dimensional data (see this) and is usually done after dimensionality reduction (PCA, in your example).
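A minimal sketch of that order of operations, reusing X from the question (reduce first, then cluster; the choice of 2 components here is just for plotting):
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

X_pca = PCA(n_components=2).fit_transform(X)                          # reduce the tf-idf vectors first
y_kmeans = KMeans(n_clusters=7, random_state=42).fit_predict(X_pca)   # then cluster in the reduced space

plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y_kmeans)
plt.show()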
As an aside, if your aim is to cluster the documents according to their topics, it's worth exploring topic modelling. Clustering can then be done on the distributions over topics identified by the topic-modelling algorithm.
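One possible sketch of that idea using scikit-learn's LatentDirichletAllocation, reusing corpus from the question (LDA expects raw term counts rather than tf-idf weights, so a CountVectorizer is used here; the topic and cluster counts are placeholders):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

counts = CountVectorizer(stop_words='english', min_df=5).fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=10, random_state=42)
doc_topics = lda.fit_transform(counts)             # each row is a distribution over the 10 topics
topic_clusters = KMeans(n_clusters=7, random_state=42).fit_predict(doc_topics)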

Using Principal Component Analysis to decorrelate noises

I have to create 13 white Gaussian noises which are completely decorrelated from each other.
I've been told that PCA can achieve this, so I searched for information and tools I can use in Python.
I use the PCA module from sklearn to perform PCA. The following is my code.
import numpy as np
from sklearn.decomposition import PCA

n = 13        # number of completely decorrelated noises
ms = 10000    # duration of noise in milliseconds
fs = 44100    # sampling rate

x = np.random.randn(int(np.ceil(fs * ms / 1000)), n)

# calculate the correlation between any two noises
for i in range(n):
    for j in range(n):
        omega = np.corrcoef(x[:, i], x[:, j])[0, 1]
        print(omega)

# perform PCA
pca = PCA(n_components=n)
pca.fit(x)
y = pca.transform(x)

for i in range(n):
    for j in range(n):
        omega_new = np.corrcoef(y[:, i], y[:, j])[0, 1]
        print(omega_new)
The correlation coefficients before PCA are around 0.0005~0.0014, and drop to about 1e-16 after performing PCA.
I don't know PCA very well, so I'm not sure whether I did it right.
In addition, after performing the PCA transformation, are those new data sets still Gaussian white noise? I will normalize each noise so that its maximum amplitude is 0.999 before writing them into wave files. Do I still get 13 Gaussian white noises with similar average power?
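(For reference, a sketch of the per-channel peak normalization described here, where y is the PCA output from the code above and 0.999 is the target amplitude from the question:)
peak = np.max(np.abs(y), axis=0)       # per-channel peak amplitude
y_norm = 0.999 * y / peak              # scale each channel to a maximum amplitude of 0.999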
I might be attacking a strawman, but here's an attack on a much reduced problem: if I average two Gaussian noises, do I get a Gaussian noise?
If we look at the new noise in isolation, it is undoubtedly Gaussian. If we assume precise calculations (no floating-point error), I believe there is no way the new noise could be distinguished from a freshly generated noise.
However, if we look at it in relation to one or both of the noises we averaged, it becomes obvious that it's their average.
I'm not sure exactly how PCA works, but the transformation it applies also seems to be linear in nature.
TBH, I don't know enough about PCA to comment on your situation, but I'm hoping that further edits will help extend this answer to fit your question.
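To back up the linearity point above, here's a quick check: a sketch reusing x, y and pca from the question's code (mean_ and components_ are standard scikit-learn PCA attributes):
import numpy as np

# PCA's transform is linear: y = (x - mean_) @ components_.T
y_manual = (x - pca.mean_) @ pca.components_.T
print(np.allclose(y, y_manual))        # expected to print True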
