Randomly sample x% of each cluster - python

I am working on a project aiming to exploit the cluster structure of my dataset to improve a supervised active learning classifier for binary classification. I use the following code to cluster my data X using scikit-learn's K-Means implementation:
k = KMeans(n_clusters=(i + 2), precompute_distances=True).fit(X)
df = pd.DataFrame({'cluster': k.labels_, 'percentage positive': y})
a = df.groupby('cluster').apply(lambda cluster: cluster.sum() / cluster.count())
The two classes are positive (represented by a 1) and negative (represented by a 0) and are stored in an array y.
This code first clusters X and then stores, in a data frame, each cluster's number and the percentage of positive instances within it.
I would now like to randomly select points from each cluster until I have sampled 15% of each. How can I do this?
As requested, here is a simplified script including a test dataset:
from sklearn.cluster import KMeans
import pandas as pd
X = [[1,2], [2,5], [1,2], [3,3], [1,2], [7,3], [1,1], [2,19], [1,11], [54,3], [78,2], [74,36]]
y = [0,0,0,0,0,0,0,0,0,1,0,0]
k = KMeans(n_clusters=4, precompute_distances=True).fit(X)
df = pd.DataFrame({'cluster': k.labels_, 'percentage positive': y})
a = df.groupby('cluster').apply(lambda cluster: cluster.sum() / cluster.count())
print(a)
Note: The real datasets are much larger consisting of thousands of features and thousands of data instances.
In response to @SandipanDey:
I can't tell you too much, but basically we are dealing with a highly unbalanced dataset (1:10,000) and we are only interested in identifying the minority class examples with recall > 95% whilst reducing the number of labels requested. (Recall needs to be so high as it's related to healthcare.)
The minority examples cluster together, and any cluster containing positive instances will usually contain at least x%, so by sampling x% we ensure that we identify all clusters with any positive instances. So we are able to quickly reduce the size of the dataset with potential positives. This partial dataset can then be used for active learning. Our approach is loosely inspired by 'Hierarchical Sampling for Active Learning'.

If I understood you correctly, the following code should serve the purpose:
import numpy as np
# For each cluster
# (1) Find all the points from X that are assigned to the cluster.
# (2) Choose x% from those points randomly.
n_clusters = 4
x = 0.15  # fraction of each cluster to sample (15%)
for i in range(n_clusters):
    # (1) indices of all the points from X that belong to cluster i
    C_i = np.where(k.labels_ == i)[0].tolist()
    n_i = len(C_i)  # number of points in cluster i
    # (2) indices of the points from X to be sampled from cluster i
    # (replace=False so that no point is drawn twice)
    sample_i = np.random.choice(C_i, int(x * n_i), replace=False)
    print(i, sample_i)
Just out of curiosity, how are you going to use these x% points for active learning?
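As a side note, if your pandas is >= 1.1, the same per-cluster sampling can be done in one call with GroupBy.sample (just a sketch, reusing the df built in the question):
# 15% of the rows within each cluster, sampled without replacement
sampled = df.groupby('cluster').sample(frac=0.15)
print(sampled.index.tolist())  # row indices into X of the sampled points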

Related

Detect cluster outliers

I have a dataset where every data sample consists of 10-20 2D coordinate points. The data is mostly clean, but occasionally there are falsely annotated points. For illustration, the cleanly annotated data would look like these:
either clustered in a small area or spread across a larger area. The outliers I'm trying to filter out look like this:
the outlier is far away from the "correct" cluster.
I tried z-score filtering, but this approach falsely marked many annotations as outliers:
std_score = np.abs((points - points.mean(axis=0)) / (np.std(points, axis=0) + 0.01))
validity = np.all(std_score <= np.quantile(std_score, 0.95, axis=0), axis=1)
Is there a method designed to solve this problem?
This seems like a typical clustering problem, and if the data looks as you suggested, KMeans from scikit-learn should do the trick. Let's look at how we can do this.
First, I generate a data sample that might look somewhat like your data.
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1) # For reproducibility
cluster_1 = np.random.normal(loc = [1,1], scale = [0.2,0.2], size = (20,2))
cluster_2 = np.random.normal(loc = [2,1], scale = [0.4,0.4], size = (5,2))
plt.scatter(cluster_1[:,0], cluster_1[:,1])
plt.scatter(cluster_2[:,0], cluster_2[:,1])
plt.show()
points = np.vstack([cluster_1, cluster_2])
This is how the data will look.
Next, we perform KMeans clustering.
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters = 2).fit(points)
We choose n_clusters = 2, believing that there are 2 clusters in the dataset. After finding these clusters, let's look at them.
plt.scatter(points[kmeans.labels_==0][:,0], points[kmeans.labels_==0][:,1], label='cluster_1')
plt.scatter(points[kmeans.labels_==1][:,0], points[kmeans.labels_==1][:,1], label ='cluster_2')
plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], label = 'cluster_center')
plt.legend()
plt.show()
The result will look like the image shown below.
This should solve your problem, but there are some things to keep in mind:
It will not be perfect all the time.
It might be a problem if you don't have any outliers; this can be handled with silhouette scores (see the sketch after this list).
It is difficult to know which cluster to discard; this can be done by locating the centers of the clusters (the green points) or by finding the cluster with the smaller number of points.
Endnote: You might lose some points, but you would automate the entire process. It depends on how much you want to trade off in terms of data saved versus manual time saved.
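A minimal sketch of the silhouette idea and of discarding the smaller cluster (my addition to the answer; the threshold is an assumption to tune on your data):
from sklearn.metrics import silhouette_score

# How well separated the 2 clusters are; ranges from -1 to 1.
score = silhouette_score(points, kmeans.labels_)

if score > 0.5:  # assumed threshold: only trust a clearly separated split
    # Treat the cluster with fewer points as the outlier cluster.
    outlier_label = np.bincount(kmeans.labels_).argmin()
    clean_points = points[kmeans.labels_ != outlier_label]
else:
    clean_points = points  # probably no outliers in this sample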

How can I generate three outlier points such that they are apparently far away from the normal data in Python?

I am using the make_moons dataset and I am trying to implement an outlier detection algorithm. That's why I want to generate 3 points which are far away from the normal data, and test whether they are flagged as outliers or not. These 3 points should be randomly selected from my data and should be as far as possible from the normal data.
My algorithm will compare each point's distance against a threshold value to decide whether it is an outlier or not.
I am aware of other resources on this topic, but my specific problem is my dataset: I could not find a way to fit those solutions to it.
Here is my code to define the dataset and fit it with K-Means (I have to use the K-Means-fitted data):
data = make_moons(n_samples=100, noise=0, random_state=0)
X, y = data
n_clusters = 10
kmeans = KMeans(n_clusters=n_clusters, random_state=10)
kmeans.fit(X)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
In short, how can I find the 3 farthest points in my data, to use them in outlier detection?
As stated in the comments, you should define a criterion to classify outliers. Either way, in the following code I randomly selected three entries from X and multiplied them by 1,000, so surely that should make them outliers regardless of the definition you choose.
# Import libraries
import numpy as np
from sklearn.datasets import make_moons
# Create data
X, y = make_moons(100, random_state=123)
# Randomly select 3 row numbers from X
np.random.seed(5)
idx = np.random.randint(low=0, high=len(X), size=3)
# Overwrite the data in the randomly selected rows
scaler = 1000  # Change this number to whatever you need
for i in idx:
    X[i] = X[i] * scaler
Note: There is a small probability that idx will contain duplicates. It won't happen with np.random.seed(5), but if you choose another seed (or none at all) and get duplicates, simply try another seed, or repeat until you don't get duplicates.
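A duplicate-free alternative (my sketch, not part of the code above) is to draw the indices without replacement:
# 3 distinct row indices; repeats are impossible with replace=False
idx = np.random.choice(len(X), size=3, replace=False)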

Clustering clusters in Python OR merge clusters to reduce number of groups (Python)

I'm dealing with a set of 173k points labeled into 160 groups. I'd like to reduce the number of groups/clusters by merging the closest ones, down to 9 or 10 groups. I've searched sklearn and similar libraries, but with no success.
I guess it's simply clustering by k-NN, not of individual points but of groups of identically labeled points.
As a graphic is often the best explanation, here is a simplified version of what I'd like:
Thanks for the help
Have you tried k-Means Clustering?
There, you can define the number of clusters n_clusters - which could be 10 in your case.
You have:
X - data
labels - current cluster labels list
To cluster the clusters, just add the labels as a new column:
from sklearn.preprocessing import scale
from sklearn.cluster import KMeans
import pandas as pd
X = pd.DataFrame(X)
weight = 1
X['current_labels'] = scale(labels) * weight
# cluster again:
kmeans = KMeans(n_clusters=10).fit(X)
To merge cluster 3 into cluster 2:
X['current_labels'] = labels
X.loc[X['current_labels'] == 3, 'current_labels'] = 2
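A different way to merge the closest groups, sketched here as my own suggestion rather than the answer's approach: compute each group's centroid and agglomeratively cluster the 160 centroids into 10 super-groups, then map every point through its original label.
import numpy as np
import pandas as pd
from sklearn.cluster import AgglomerativeClustering

# Assumptions: X is the raw (n_points, n_features) data without the label
# column, and labels are integer group ids 0..159.
labels = np.asarray(labels)

# One centroid per original group (rows ordered by group id).
centroids = pd.DataFrame(X).groupby(labels).mean().values

# Merge the 160 centroids into 10 super-groups by proximity.
merger = AgglomerativeClustering(n_clusters=10).fit(centroids)

# New label of each point = super-group of its original group.
new_labels = merger.labels_[labels]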

Unsupervised learning clustering 1D array

I am faced with the following array:
y = [1,2,4,7,9,5,4,7,9,56,57,54,60,200,297,275,243]
What I would like to do is extract the cluster with the highest scores. That would be
best_cluster = [200,297,275,243]
I have checked quite a few questions on Stack Overflow on this topic, and most of them recommend using kmeans, although a few others mention that kmeans might be overkill for clustering 1D arrays.
However, kmeans is a supervised learning algorithm, hence I would have to pass in the number of centroids. As I need to generalize this problem to other arrays, I cannot pass the number of centroids for each one of them. Therefore, I am looking to implement some sort of unsupervised learning algorithm that can figure out the clusters by itself and select the highest one.
In array y I would see 3 clusters: [1,2,4,7,9,5,4,7,9], [56,57,54,60] and [200,297,275,243].
What algorithm would best fit my needs, considering computation cost and accuracy and how could I implement it for my problem?
Try MeanShift. From the sklearn user guide on MeanShift:
The algorithm automatically sets the number of clusters, ...
Modified demo code:
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
# #############################################################################
# Generate sample data
X = [1,2,4,7,9,5,4,7,9,56,57,54,60,200,297,275,243]
X = np.reshape(X, (-1, 1))
# #############################################################################
# Compute clustering with MeanShift
# The following bandwidth can be automatically detected using
# bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=100)
ms = MeanShift(bandwidth=None, bin_seeding=True)
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)
print("number of estimated clusters : %d" % n_clusters_)
print(labels)
Output:
number of estimated clusters : 2
[0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1]
Note that MeanShift does not scale well with the number of samples; the recommended upper limit is 10,000.
BTW, as rahlf23 already mentioned, K-means is an unsupervised learning algorithm. The fact that you have to specify the number of clusters does not mean it is supervised.
See also:
Overview of clustering methods
Choosing the right estimator
Clustering is overkill here
Just compute the differences of subsequent elements, i.e. look at x[i] - x[i-1].
Choose the k largest differences as split points, or define a threshold on when to split, e.g. 20; that depends on your knowledge of the data.
This is O(n), much faster than all the other methods mentioned, and also very understandable and predictable.
On one dimensional ordered data, any method that doesn't use the order will be slower than necessary.
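A minimal sketch of this gap-based idea (my wording of the approach above, assuming k = 3 clusters for the example array):
import numpy as np

y = np.array([1, 2, 4, 7, 9, 5, 4, 7, 9, 56, 57, 54, 60, 200, 297, 275, 243])

# Sort: on 1D data, clusters are contiguous runs of the sorted values.
y_sorted = np.sort(y)

# Differences between consecutive elements.
gaps = np.diff(y_sorted)

# Split at the k-1 largest gaps (alternatively: wherever gaps > 20).
k = 3
split_points = np.sort(np.argsort(gaps)[-(k - 1):]) + 1

clusters = np.split(y_sorted, split_points)
print(clusters[-1])  # cluster with the highest values: [200 243 275 297]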
HDBSCAN is the best clustering algorithm and you should always use it.
Basically, all you need to do is provide a reasonable min_cluster_size and a valid distance metric, and you're good to go.
For min_cluster_size I suggest using 3, since a cluster of 2 is lame, and for metric the default euclidean works great, so you don't even need to mention it.
Don't forget that distance metrics apply to vectors and here we have scalars so some ugly reshaping is in order.
To put it all together and assuming by "cluster with the highest scores" you mean the cluster that includes the max value we get:
from hdbscan import HDBSCAN
import numpy as np
y = [1,2,4,7,9,5,4,7,9,56,57,54,60,200,297,275,243]
y = np.reshape(y, (-1, 1))
clusterer = HDBSCAN(min_cluster_size=3)
cluster_labels = clusterer.fit_predict(y)
best_cluster = clusterer.exemplars_[cluster_labels[y.argmax()]].ravel()
print(best_cluster)
The output is [297 200 275 243]. Original order is not preserved. C'est la vie.
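If you do need the original order, a small variation (my sketch) selects by label instead of using exemplars_:
# all points carrying the same label as the max-valued point, original order kept
best_cluster = y[cluster_labels == cluster_labels[y.argmax()]].ravel()
print(best_cluster)  # [200 297 275 243]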

K-means cluster - Plot class proportions in each cluster

I am working on a project where I exploit the cluster structure of an unlabeled dataset to improve the performance of a supervised learning algorithm. After preprocessing the data - stored in a matrix - I use k-means to cluster it like so:
from sklearn.cluster import KMeans
k = KMeans(n_clusters=40).fit(X)
I have the desired labels stored in y. I am interested in seeing how the different classes are clustered, i.e. whether the clusters are relatively pure or mixed.
To do this, I want to see the proportions of each class in each cluster. This is a binary classification task: positive instances (represented by a 1 in y) and negative instances (represented by a 0 in y).
(The nth element of the y array is the correct label for the nth row of the X matrix.)
I would use pandas:
import pandas as pd
Combine the true labels and cluster labels into a dataframe:
df = pd.DataFrame({'clusters' : k.labels_, 'labels' : y})
Group by clusters and for each cluster get the fraction of 1's:
df.groupby('clusters').apply(lambda cluster: cluster.sum() / cluster.count())
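An equivalent one-liner, sketched with pd.crosstab (normalize='index' turns the counts into per-cluster proportions):
# rows: clusters, columns: classes 0 and 1, values: class proportions per cluster
pd.crosstab(df['clusters'], df['labels'], normalize='index')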
