sklearn agglomerative clustering input data - python

I have a similarity matrix between four users. I want to do agglomerative clustering. The code looks like this:
import time
import numpy as np
from sklearn.cluster import AgglomerativeClustering

lena = np.matrix('1 1 0 0;1 1 0 0;0 0 1 0.2;0 0 0.2 1')
X = np.reshape(lena, (-1, 1))
print("Compute structured hierarchical clustering...")
st = time.time()
n_clusters = 3  # number of regions
ward = AgglomerativeClustering(n_clusters=n_clusters,
                               linkage='complete').fit(X)
print(ward)
label = np.reshape(ward.labels_, lena.shape)
print("Elapsed time: ", time.time() - st)
print("Number of pixels: ", label.size)
print("Number of clusters: ", np.unique(label).size)
print(label)
The printed result of label looks like:
[[1 1 0 0]
[1 1 0 0]
[0 0 1 2]
[0 0 2 1]]
Does this mean it gives a list of possible cluster results from which we can choose one, like choosing [0, 0, 2, 1]? If that is wrong, could you tell me how to run the agglomerative algorithm based on a similarity matrix? If it is right, the similarity matrix is huge, so how can I choose the optimal clustering result from such a huge list? Thanks

I think the problem here is that you fit your model with the wrong data.
# This will return a 4x4 matrix (similarity matrix)
lena = np.matrix('1 1 0 0;1 1 0 0;0 0 1 0.2;0 0 0.2 1')
# However, this will return a 16x1 matrix
X = np.reshape(lena, (-1, 1))
The true result you get is this:
ward.labels_
>> array([1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 2, 0, 0, 2, 1])
which is the label of each element in the X vector, and that doesn't make sense.
If I understood your problem correctly, you need to cluster your users by the distance (similarity) between them. In that case I would suggest using spectral clustering, like this:
import numpy as np
from sklearn.cluster import SpectralClustering
lena = np.matrix('1 1 0 0;1 1 0 0;0 0 1 0.2;0 0 0.2 1')
n_clusters = 3
SpectralClustering(n_clusters).fit_predict(lena)
>> array([1, 1, 0, 2], dtype=int32)
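If you do want to stay with agglomerative clustering rather than switch to spectral clustering, it can also consume the similarity matrix directly once you convert similarities to distances and mark the input as precomputed. A minimal sketch of that route (not part of the original answer; the exact parameter name depends on your scikit-learn version):
import numpy as np
from sklearn.cluster import AgglomerativeClustering

sim = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 0.2],
                [0, 0, 0.2, 1]])

# Turn similarities (assumed to lie in [0, 1]) into distances
dist = 1 - sim

# 'ward' linkage does not accept precomputed distances, so use 'complete'
# (or 'average'). Recent scikit-learn versions name this parameter
# metric='precomputed' instead of affinity='precomputed'.
agg = AgglomerativeClustering(n_clusters=2, affinity='precomputed',
                              linkage='complete').fit(dist)
print(agg.labels_)  # users 0 and 1 share one label, users 2 and 3 the other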

Related

In a boolean matrix, what is the best way to make every value adjacent to True/1 to True?

I have a numpy boolean 2D array with True/False values. I want to make every cell adjacent to a True value become True as well. What's the best/fastest way of doing that in python?
For Eg:
#Initial Matrix
1 0 0 0 0 1 0
0 0 0 1 0 0 0
0 0 0 0 0 0 0
#After operation
1 1 1 1 1 1 1
1 1 1 1 1 1 1
0 0 1 1 1 0 0
It looks like you want to do dilation. OpenCV might be your best tool:
import cv2
import numpy as np

# src is the input matrix, converted to uint8 (OpenCV does not accept bool arrays)
dilatation_dst = cv2.dilate(src, np.ones((3, 3), np.uint8))
https://docs.opencv.org/3.4/db/df6/tutorial_erosion_dilatation.html
You can use scipy.signal.convolve2d.
import numpy as np
from scipy.signal import convolve2d
result = convolve2d(src, np.ones((3,3)), mode='same').astype(bool).astype(int)
print(result)
Or we can use scipy.ndimage.
from scipy import ndimage
result = ndimage.binary_dilation(src, np.ones((3,3))).astype(int)
print(result)
Output:
[[1 1 1 1 1 1 1]
[1 1 1 1 1 1 1]
[0 0 1 1 1 0 0]]
Given
arr = np.array([[1, 0, 0, 0, 0, 1, 0],
                [0, 0, 0, 1, 0, 0, 0],
                [0, 0, 0, 0, 0, 0, 0]])
You can do
from scipy.ndimage import shift
arr2 = arr | shift(arr, (0, 1), cval=0) | shift(arr, (0, -1), cval=0)
arr3 = arr2 | shift(arr2, (1, 0), cval=0) | shift(arr2, (-1, 0), cval=0)

How to shuffle a 2d binary matrix, preserving marginal distributions

Suppose I have an (n*m) binary matrix df similar to the following:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.binomial(1, .3, size=(6,8)))
0 1 2 3 4 5 6 7
------------------------------
0 | 0 0 0 0 0 1 1 0
1 | 0 1 0 0 0 0 0 0
2 | 0 0 0 0 1 0 0 0
3 | 0 0 0 0 0 1 0 1
4 | 0 1 1 0 1 0 0 0
5 | 1 0 1 1 1 0 0 1
I want to shuffle the values in the matrix to create a new_df of the same shape, such that both marginal distributions are preserved, for example:
0 1 2 3 4 5 6 7
------------------------------
0 | 0 0 0 0 1 0 0 1
1 | 0 0 0 0 1 0 0 0
2 | 0 0 0 0 0 0 0 1
3 | 0 1 1 0 0 0 0 0
4 | 1 0 0 0 1 1 0 0
5 | 0 1 1 1 0 1 1 0
In the new matrix, the sum of each row is equal to the sum of the corresponding row in the original matrix, and likewise, columns in the new matrix have the same sum as the corresponding column in the original matrix.
The solution is pretty easy to check:
# rows have the same marginal distribution
assert(all(df.sum(axis=1) == new_df.sum(axis=1)))
# columns have the same marginal distribution
assert(all(df.sum(axis=0) == new_df.sum(axis=0)))
If n*m is small, I can use a brute-force approach to the shuffle:
def shuffle_2d(df):
    """Shuffles a multidimensional binary array, preserving marginal distributions"""
    # get a list of indices where the df is 1
    rowlist = []
    collist = []
    for i_row, row in df.iterrows():
        for i_col, val in row.iteritems():
            if df.loc[i_row, i_col] == 1:
                rowlist.append(i_row)
                collist.append(i_col)
    # create an empty df of the same shape
    new_df = pd.DataFrame(index=df.index, columns=df.columns, data=0)
    # shuffle until you get no repeat coordinates
    # this is so you don't increment the same cell in the matrix twice
    repeats = 999
    while repeats > 1:
        pairs = list(zip(np.random.permutation(rowlist), np.random.permutation(collist)))
        repeats = pd.value_counts(pairs).max()
    # populate new data frame at indicated points
    for i_row, i_col in pairs:
        new_df.at[i_row, i_col] += 1
    return new_df
The problem is that the brute force approach scales poorly. (As in that line from Indiana Jones and the Last Crusade: https://youtu.be/Ubw5N8iVDHI?t=3)
As a quick demo, for an n*n matrix, the number of attempts needed to get an acceptable shuffle looks like: (in one run)
n attempts
2 1
3 2
4 4
5 1
6 1
7 11
8 9
9 22
10 4416
11 800
12 66
13 234
14 5329
15 26501
16 27555
17 5932
18 668902
...
Is there a straightforward solution that preserves the exact marginal distributions (or tells you where no other pattern is possible that preserves that distribution)?
As a fallback, I could also use an approximation algorithm that could minimize the sum of squared errors on each row.
Thanks! =)
EDIT:
For some reason I wasn't finding existing answers before I wrote this question, but after posting it they all show up in the sidebar:
Is it possible to shuffle a 2D matrix while preserving row AND column frequencies?
Randomize matrix in perl, keeping row and column totals the same
Sometimes all you need to do is ask...
Thanks mostly to https://stackoverflow.com/a/2137012/6361632 for inspiration, here's a solution that seems to work:
def flip1(m):
    """
    Chooses a single (i0, j0) location in the matrix to 'flip'
    Then randomly selects a different (i, j) location that creates
    a quad [(i0, j0), (i0, j), (i, j0), (i, j)] in which flipping every
    element leaves the marginal distributions unaltered.
    Changes those elements, and returns 1.
    If such a quad cannot be completed from the original position,
    does nothing and returns 0.
    """
    i0 = np.random.randint(m.shape[0])
    j0 = np.random.randint(m.shape[1])
    level = m[i0, j0]
    flip = 0 if level == 1 else 1  # the opposite value
    for i in np.random.permutation(range(m.shape[0])):  # try in random order
        if (i != i0 and              # don't swap with self
                m[i, j0] != level):  # maybe swap with a cell that holds the opposite value
            for j in np.random.permutation(range(m.shape[1])):
                if (j != j0 and               # don't swap with self
                        m[i, j] == level and  # check that the other swaps work
                        m[i0, j] != level):
                    # make the swaps
                    m[i0, j0] = flip
                    m[i0, j] = level
                    m[i, j0] = level
                    m[i, j] = flip
                    return 1
    return 0
def shuffle(m1, n=100):
    m2 = m1.copy()
    f_success = np.mean([flip1(m2) for _ in range(n)])
    # f_success is the fraction of flip attempts that succeed, for diagnostics
    # print(f_success)
    # check the answer
    assert(all(m1.sum(axis=1) == m2.sum(axis=1)))
    assert(all(m1.sum(axis=0) == m2.sum(axis=0)))
    return m2
Which we can call as:
m1 = np.random.binomial(1, .3, size=(6,8))
array([[0, 0, 0, 1, 1, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 1],
[1, 1, 0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 1, 0, 0],
[1, 0, 1, 0, 1, 0, 0, 0]])
m2 = shuffle(m1)
array([[0, 0, 0, 0, 1, 1, 0, 1],
[1, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 1, 0, 0, 1, 1],
[1, 1, 1, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0],
[1, 0, 0, 1, 0, 0, 0, 1]])
How many iterations do we need to get to a steady-state distribution? I've set a default of 100 here, which is sufficient for these small matrices.
Below I plot the correlation between the original matrix and the shuffled matrix (500 times) for various numbers of iterations.
corrs = []
for _ in range(500):
    m1 = np.random.binomial(1, .3, size=(9, 9))  # create starting matrix
    m2 = shuffle(m1, n_iters)                    # n_iters is set per experiment
    corrs.append(np.corrcoef(m1.flatten(), m2.flatten())[1, 0])
plt.hist(corrs, bins=40, alpha=.4, label=n_iters)
For a 9x9 matrix, we see improvements up until about 25 iterations, beyond which we're in a steady state.
For an 18x18 matrix, we see small gains going from 100 to 250 iterations, but not much beyond.
Note that the correlation between starting and ending distributions is lower for larger matrices, but it takes us longer to get there.
You have to look for two rows and two columns whose four intersection points form a 2x2 submatrix with 1 0 on top and 0 1 on the bottom (or the other way around). Those values you can then switch (to 0 1 and 1 0) without changing any row or column total.
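To make that idea concrete, here is a tiny sketch of one such swap (the matrix and the chosen rows/columns are made up for illustration, not part of the original answer):
import numpy as np

m = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 0, 0]])
print(m.sum(axis=0), m.sum(axis=1))  # marginals before the flip

# rows 0 and 1 and columns 0 and 1 cut out the 2x2 submatrix [[1, 0], [0, 1]]
r, c = [0, 1], [0, 1]
sub = m[np.ix_(r, c)]
if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
    m[np.ix_(r, c)] = 1 - sub  # flip the checkerboard

print(m.sum(axis=0), m.sum(axis=1))  # same marginals after the flip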
There is even an algorithm that can sample from all possible matrices with identical marginals, developed by Verhelst (2008, link to article page) and implemented in the R package RaschSampler.
A newer algorithm by Wang (2020, link), more efficient for some cases, is also available.

Create a matrix for datapoints in same or different clusters

I want to iterate through my datapoints and check whether they are in the same cluster, after using KMeans to cluster them.
Then I need to create a matrix for all the datapoints, with 1 if two points belong to the same cluster and 0 if they don't.
After using KMeans, I'm not sure how to retrieve which cluster every datapoint belongs to so that I can create such a matrix.
Do I do that using the labels_ attribute?
k_means = KMeans(n_clusters=5).fit(X)
labels_columns = k_means.labels_
labels_row = k_means.labels_
for row in labels_row:
    for column in labels_columns:
        if row == column:
            --add 1 in matrix position
        else:
            --add 0 in matrix position
How do I best create this matrix? Or does labels_ provide different information from what I'm assuming?
Any help is appreciated!
You are on the right track. KMeans.labels_ returns a vector of n elements which tells you the cluster each point belongs to: [3, 4, 10, ...] means that point 0 belongs to cluster 3, point 1 belongs to cluster 4, and so on.
You can build the matrix you want in many ways. One possibility I thought of, which is a bit more elegant than two for loops, would be the following:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
n_samples, n_features = 10, 2
X, y = make_blobs(n_samples, n_features)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()
kmeans = KMeans(n_clusters=3).fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_)
plt.show()
neighbour_matrix = np.zeros(n_samples)
repeat_labels = np.repeat(kmeans.labels_.T, n_samples, axis=0).reshape(n_samples, n_samples)
print(kmeans.labels_)
print(repeat_labels)
proximity_matrix = (repeat_labels == repeat_labels.T).astype(int)
print(proximity_matrix)
I use the vector of labels as my starting point. Let's say that it is the following:
[1 0 0 1 1 2 2 2 2 0]
I transform it in a 2D matrix with np.repeat which has the following shape:
[[1 1 1 1 1 1 1 1 1 1]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[1 1 1 1 1 1 1 1 1 1]
.....
So I repeat the labels as many times as the number of points n. Then I can just check where this matrix and its transpose are equal. That will be true only if two points belong to the same cluster:
[[1 0 0 1 1 0 0 0 0 0]
[0 1 1 0 0 0 0 0 0 1]
[0 1 1 0 0 0 0 0 0 1]
[1 0 0 1 1 0 0 0 0 0]
.....
I cast the matrix to int, but mind you that the original output is actually a boolean array.
I left the print statements and the plots in the code to hopefully make it more clear.
Hope it helps!
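As a side note (not from the original answer), the same co-membership matrix can be built with a single broadcast comparison, which avoids the intermediate repeated matrix:
import numpy as np

labels = np.array([1, 0, 0, 1, 1, 2, 2, 2, 2, 0])  # e.g. kmeans.labels_
proximity_matrix = (labels[:, None] == labels[None, :]).astype(int)
print(proximity_matrix)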

Numpy - How to shift values at indexes where change happened

I would like to shift values in a 1D numpy array where a change happened. The size of the shift should be configurable.
input = np.array([0,0,0,0,1,0,0,0,0,0,1,1,1,0,0,1,0,0,0,0])
shiftSize = 2
out = np.magic(input, shiftSize)  # np.magic is a placeholder for the function I'm looking for
print(out)
# desired output:
np.array([0,0,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,0,0])
For example, the first switch happened at index 4, so indexes 2 and 3 become '1'.
The next happened at 5, so 6 and 7 become '1'.
EDIT: Also, it would be important to do this without a for loop, because that might be slow (it is needed for large data sets).
EDIT2: indexes and variable name
I tried with np.diff, so I get where the changes happened, and then np.put, but with multiple index ranges it seems impossible.
Thank you for the help in advance!
What you want is called "binary dilation" and is contained in scipy.ndimage:
import numpy as np
import scipy.ndimage
input = np.array([0,0,0,0,1,0,0,0,0,0,1,1,1,0,0,1,0,0,0,0], dtype=bool)
out = scipy.ndimage.morphology.binary_dilation(input, iterations=2).astype(int)
# array([0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
Nils' answer seems good. Here is an alternative using NumPy only:
import numpy as np
def dilate(ar, amount):
    # Convolve with a kernel as big as the dilation scope
    dil = np.convolve(np.abs(ar), np.ones(2 * amount + 1), mode='same')
    # Crop in case the convolution kernel was bigger than array
    dil = dil[-len(ar):]
    # Take non-zero and convert to input type
    return (dil != 0).astype(ar.dtype)
# Test
inp = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
print(inp)
print(dilate(inp, 2))
Output:
[0 0 0 0 1 0 0 0 0 0 1 1 1 0 0 1 0 0 0 0]
[0 0 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 0]
Another numpy solution:
def dilatation(seed, shift):
    out = seed.copy()
    for sh in range(1, shift + 1):
        out[sh:] |= seed[:-sh]
    for sh in range(-shift, 0):
        out[:sh] |= seed[-sh:]
    return out
Example (shift = 2):
in : [0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0]
out: [0 0 0 0 0 0 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 1 1 1 1]

Spectral Clustering a graph in python

I'd like to cluster a graph in python using spectral clustering.
Spectral clustering is a more general technique which can be applied not only to graphs but also to images or any sort of data; however, it's considered an exceptional graph clustering technique. Sadly, I can't find examples of spectral clustering of graphs in python online.
Scikit Learn has two spectral clustering methods documented: SpectralClustering and spectral_clustering which seem like they're not aliases.
Both of those methods mention that they could be used on graphs, but do not offer specific instructions. Neither does the user guide. I've asked for such an example from the developers, but they're overworked and haven't gotten to it.
A good network to document this against is the Karate Club Network. It's included as a method in networkx.
I'd love some direction in how to go about this. If someone can help me figure it out, I can add the documentation to scikit learn.
Notes:
A question much like this one has already been asked on this site.
Without much experience with Spectral-clustering and just going by the docs (skip to the end for the results!):
Code:
import numpy as np
import networkx as nx
from sklearn.cluster import SpectralClustering
from sklearn import metrics
np.random.seed(1)
# Get your mentioned graph
G = nx.karate_club_graph()
# Get ground-truth: club-labels -> transform to 0/1 np-array
# (possible overcomplicated networkx usage here)
gt_dict = nx.get_node_attributes(G, 'club')
gt = [gt_dict[i] for i in G.nodes()]
gt = np.array([0 if i == 'Mr. Hi' else 1 for i in gt])
# Get adjacency-matrix as numpy-array
adj_mat = nx.to_numpy_matrix(G)
print('ground truth')
print(gt)
# Cluster
sc = SpectralClustering(2, affinity='precomputed', n_init=100)
sc.fit(adj_mat)
# Compare ground-truth and clustering-results
print('spectral clustering')
print(sc.labels_)
print('just for better-visualization: invert clusters (permutation)')
print(np.abs(sc.labels_ - 1))
# Calculate some clustering metrics
print(metrics.adjusted_rand_score(gt, sc.labels_))
print(metrics.adjusted_mutual_info_score(gt, sc.labels_))
Output:
ground truth
[0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1]
spectral clustering
[1 1 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
just for better-visualization: invert clusters (permutation)
[0 0 1 0 0 0 0 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
0.204094758281
0.271689477828
The general idea:
Introduction on the data and task from here:
The nodes in the graph represent the 34 members in a college Karate club. (Zachary is a sociologist, and he was one of the members.) An edge between two nodes indicates that the two members spent significant time together outside normal club meetings. The dataset is interesting because while Zachary was collecting his data, there was a dispute in the Karate club, and it split into two factions: one led by “Mr. Hi”, and one led by “John A”. It turns out that using only the connectivity information (the edges), it is possible to recover the two factions.
Using sklearn & spectral-clustering to tackle this:
If affinity is the adjacency matrix of a graph, this method can be used to find normalized graph cuts.
This describes normalized graph cuts as:
Find two disjoint partitions A and B of the vertices V of a graph, so
that A ∪ B = V and A ∩ B = ∅
Given a similarity measure w(i,j) between two vertices (e.g. identity
when they are connected) a cut value (and its normalized version) is defined as:
cut(A, B) = SUM u in A, v in B: w(u, v)
...
we seek the minimization of disassociation
between the groups A and B and the maximization of the association
within each group
Sounds alright. So we create the adjacency matrix (nx.to_numpy_matrix(G)) and set the param affinity to precomputed (as our adjacency-matrix is our precomputed similarity-measure).
Alternatively, using precomputed, a user-provided affinity matrix can be used.
Edit: While unfamiliar with this, I looked for parameters to tune and found assign_labels:
The strategy to use to assign labels in the embedding space. There are two ways to assign labels after the laplacian embedding. k-means can be applied and is a popular choice. But it can also be sensitive to initialization. Discretization is another approach which is less sensitive to random initialization.
So trying the less sensitive approach:
sc = SpectralClustering(2, affinity='precomputed', n_init=100, assign_labels='discretize')
Output:
ground truth
[0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1]
spectral clustering
[0 0 1 0 0 0 0 0 1 1 0 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1]
just for better-visualization: invert clusters (permutation)
[1 1 0 1 1 1 1 1 0 0 1 1 1 1 0 0 1 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0]
0.771725032425
0.722546051351
That's a pretty much perfect fit to the ground-truth!
Here is a dummy example just to see what it does to a simple similarity matrix -- inspired by sascha's answer.
Code
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn import metrics
np.random.seed(0)
adj_mat = [[3,2,2,0,0,0,0,0,0],
           [2,3,2,0,0,0,0,0,0],
           [2,2,3,1,0,0,0,0,0],
           [0,0,1,3,3,3,0,0,0],
           [0,0,0,3,3,3,0,0,0],
           [0,0,0,3,3,3,1,0,0],
           [0,0,0,0,0,1,3,1,1],
           [0,0,0,0,0,0,1,3,1],
           [0,0,0,0,0,0,1,1,3]]
adj_mat = np.array(adj_mat)
sc = SpectralClustering(3, affinity='precomputed', n_init=100)
sc.fit(adj_mat)
print('spectral clustering')
print(sc.labels_)
Output
spectral clustering
[0 0 0 1 1 1 2 2 2]
Let's first cluster a graph G into K=2 clusters and then generalize for all K.
We can use the function linalg.algebraicconnectivity.fiedler_vector() from networkx to compute the Fiedler vector of the graph (the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian matrix), under the assumption that the graph is a connected undirected graph.
Then we can threshold the values of the eigenvector to compute the cluster index each node corresponds to, as shown in the next code block:
import networkx as nx
import numpy as np
A = np.zeros((11,11))
A[0,1] = A[0,2] = A[0,3] = A[0,4] = 1
A[5,6] = A[5,7] = A[5,8] = A[5,9] = A[5,10] = 1
A[0,5] = 5
G = nx.from_numpy_matrix(A)
ev = nx.linalg.algebraicconnectivity.fiedler_vector(G)
labels = [0 if v < 0 else 1 for v in ev] # using threshold 0
labels
# [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
nx.draw(G, pos=nx.drawing.layout.spring_layout(G),
        with_labels=True, node_color=labels)
We can obtain the same clustering with eigen analysis of the graph Laplacian and then by choosing the eigenvector corresponding to the 2nd smallest eigenvalue too:
L = nx.laplacian_matrix(G)
e, v = np.linalg.eig(L.todense())
idx = np.argsort(e)
e = e[idx]
v = v[:,idx]
labels = [0 if x < 0 else 1 for x in v[:,1]] # using threshold 0
labels
# [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
Drawing the graph again with the clusters labeled:
With SpectralClustering from sklearn.cluster we can get the exact same result:
sc = SpectralClustering(2, affinity='precomputed', n_init=100)
sc.fit(A)
sc.labels_
# [0 0 0 0 0 1 1 1 1 1 1]
We can generalize the above for K > 2 clusters as follows (use kmeans clustering for partitioning the Fiedler vector instead of thresholding):
The following code demonstrates how k-means clustering can be used to partition the Fiedler vector and obtain a 3-clustering of a graph defined by the following adjacency matrix:
A = np.array([[3,2,2,0,0,0,0,0,0],
              [2,3,2,0,0,0,0,0,0],
              [2,2,3,1,0,0,0,0,0],
              [0,0,1,3,3,3,0,0,0],
              [0,0,0,3,3,3,0,0,0],
              [0,0,0,3,3,3,1,0,0],
              [0,0,0,0,0,1,3,1,1],
              [0,0,0,0,0,0,1,3,1],
              [0,0,0,0,0,0,1,1,3]])
K = 3 # K clusters
G = nx.from_numpy_matrix(A)
ev = nx.linalg.algebraicconnectivity.fiedler_vector(G)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=K, random_state=0).fit(ev.reshape(-1,1))
kmeans.labels_
# array([2, 2, 2, 0, 0, 0, 1, 1, 1])
Now draw the clustered graph, labeling the nodes with the clusters obtained above:
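The figure itself is not reproduced here; as a rough sketch, the drawing call would follow the same pattern used earlier in this answer, reusing G and kmeans.labels_ from the snippet above:
nx.draw(G, pos=nx.drawing.layout.spring_layout(G),
        with_labels=True, node_color=kmeans.labels_)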
