Efficiently generating random graphs with a user-specified global clustering coefficient - python

I'm working on simulations of large-scale neuronal networks, for which I need to generate random graphs that represent the network topology.
I'd like to be able to specify the following properties of these graphs:
Number of nodes, N (~=1000-10000)
Average probability of a connection between any two given nodes, p (~0.01-0.2)
Global clustering coefficient, C (~0.1-0.5)
Ideally, the random graphs should be drawn uniformly from the set of all possible graphs that satisfy these user-specified criteria.
At the moment I'm using a very crude random diffusion approach: I start out with an Erdos-Renyi random network with the desired size and global connection probability, then at each step I randomly rewire some fraction of the edges. If the rewiring gets me closer to the desired C, I keep the rewired network for the next iteration.
Here's my current Python implementation:
import igraph
import numpy as np
def generate_fixed_gcc(n, p, target_gcc, tol=1E-3):
    """
    Creates an Erdos-Renyi random graph of size n with a specified global
    connection probability p, which is then iteratively rewired in order to
    achieve a user-specified global clustering coefficient.
    """
    # initialize random graph
    G_best = igraph.Graph.Erdos_Renyi(n=n, p=p, directed=True, loops=False)
    loss_best = 1.
    n_edges = G_best.ecount()

    # start with a high rewiring rate
    rewiring_rate = n_edges
    n_iter = 0

    while loss_best > tol:

        # operate on a copy of the current best graph
        G = G_best.copy()

        # adjust the number of connections to rewire according to the current
        # best loss
        n_rewire = min(max(int(rewiring_rate * loss_best), 1), n_edges)
        G.rewire(n=n_rewire)

        # compute the global clustering coefficient
        gcc = G.transitivity_undirected()
        loss = abs(gcc - target_gcc)

        # did we improve?
        if loss < loss_best:

            # keep the new graph
            G_best = G
            loss_best = loss
            gcc_best = gcc

            # increase the rewiring rate
            rewiring_rate *= 1.1

        else:

            # reduce the rewiring rate
            rewiring_rate *= 0.9

        n_iter += 1

    # get the adjacency matrix as a boolean numpy array
    M = np.array(G_best.get_adjacency().data, dtype=bool)

    return M, n_iter, gcc_best
This works OK for small networks (N < 500), but it quickly becomes intractable as the number of nodes increases. It takes on the order of 20 seconds to generate a 200-node graph, and several days to generate a 1000-node graph.
Can anyone suggest an efficient way to do this?

You are right, that is a very expensive way to achieve what you want. I can only speculate whether there is a mathematically sound way to optimize it and to ensure the result is close to a uniform distribution; I'm not even sure that your current method leads to a uniform distribution, although it seems like it would. Let me try:
Based on the docs for transitivity_undirected and the Wikipedia article on the clustering coefficient, it sounds like it is possible to make changes locally in the graph and at the same time know the exact effect on the global connectivity and the global clustering.
The global clustering coefficient is based on triplets of nodes. A triplet consists of three nodes that are connected by either two (open triplet) or three (closed triplet) undirected ties. A triangle consists of three closed triplets, one centred on each of the nodes. The global clustering coefficient is the number of closed triplets (or 3 x triangles) over the total number of triplets (both open and closed).
(edit) Based on my reading of the paper referenced by ali_m, the method below will probably spend too many edges on low-degree clusters, producing a graph that cannot reach the desired clustering coefficient unless that coefficient is very low (which probably wouldn't be useful anyway). Therefore, on the off chance that somebody actually uses this, you will want to add edges preferentially to higher-degree clusters, so that the clustering coefficient rises quickly without adding many edges.
On the other hand, the method below does align with the methods in the paper, so it's more or less a reasonable approach.
If I understand it correctly, you could do the following:
Produce the graph as you have done.
Calculate and Track:
p_surplus to track the number of edges that need to be added or removed elsewhere to maintain connectivity
cc_top, cc_btm to track the clustering coefficient
Iteratively (but not completely at random) choose pairs of nodes and connect or disconnect them so as to monotonically approach the clustering coefficient (cc) you want while maintaining the connection probability (p) you already have.
Pseudocode:
for random_pair in random_pairs:
    if (random_pair is connected) and (need to reduce cc or p):  # maybe put priority on the one that has a larger gap?
        delete the edge
        p_surplus -= 1
        cc_top -= broken_connected_triplets                            # have to search locally
        cc_btm -= (broken_connected_triplets + broken_open_triplets)   # have to search locally
    elif (random_pair is not connected) and (need to increase cc or p):
        add the edge
        p_surplus += 1
        cc_top += new_connected_triplets
        cc_btm += (new_connected_triplets + new_open_triplets)
    if cc and p are within desired ranges:
        done
    if some condition for detecting infinite loops:
        rethink this method
That may not be totally correct, but I think the approach will work. The efficiency of searching for local triplets, and of always moving your parameters in the right direction, will be better than copying the graph and globally measuring the cc so many times.
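For concreteness, here is a rough sketch of that bookkeeping for an undirected networkx graph (illustration only; the helper names are made up). cc_top is the number of closed triplets (3 x triangles), cc_btm the number of all connected triplets, so cc_top / cc_btm is the global clustering coefficient at any time:
import networkx as nx

def triplet_counts(G):
    closed = sum(nx.triangles(G).values())                 # = 3 * (number of triangles)
    total = sum(d * (d - 1) // 2 for _, d in G.degree())   # open + closed triplets
    return closed, total

def add_edge_update(G, u, v, cc_top, cc_btm):
    common = len(list(nx.common_neighbors(G, u, v)))   # each common neighbour closes a new triangle
    cc_top += 3 * common
    cc_btm += G.degree(u) + G.degree(v)                # degrees *before* adding the edge
    G.add_edge(u, v)
    return cc_top, cc_btm

def remove_edge_update(G, u, v, cc_top, cc_btm):
    G.remove_edge(u, v)
    common = len(list(nx.common_neighbors(G, u, v)))   # each common neighbour loses a triangle
    cc_top -= 3 * common
    cc_btm -= G.degree(u) + G.degree(v)                # degrees *after* removing the edge
    return cc_top, cc_btm
Only the neighbourhoods of the two endpoints ever need to be inspected, and cc_top / cc_btm stays equal to what a global transitivity computation would report.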

Having done a bit of reading, it looks as though the best solution might be the generalized version of Gleeson's algorithm presented in this paper. However, I still don't really understand how to implement it, so for the time being I've been working on Bansal et al's algorithm.
Like my naive approach, this is a Markov chain-based method that uses random edge swaps, but unlike mine it specifically targets 'triplet motifs' within the graph for rewiring.
Since this will have a greater tendency to introduce triangles, it will therefore have a greater impact on the clustering coefficient. At least in the case of undirected graphs, the rewiring step is also guaranteed to preserve the degree sequence. Again, on every rewiring iteration the new global clustering coefficient is measured, and the new graph is accepted if the GCC got closer to the target value.
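For reference, here is a rough sketch of that accept/reject skeleton (this is not my actual implementation; networkx's generic double_edge_swap stands in for the triplet-targeted swap, so it is less strongly biased towards creating triangles, and the GCC is still recomputed from scratch on every iteration, which remains the expensive step):
import networkx as nx

def rewire_towards_gcc(G, target_gcc, tol=1e-3, max_iter=100000):
    # G must be an undirected networkx graph with at least 4 nodes
    best = abs(nx.transitivity(G) - target_gcc)
    for _ in range(max_iter):
        H = G.copy()
        nx.double_edge_swap(H, nswap=1, max_tries=100)   # degree-preserving swap
        loss = abs(nx.transitivity(H) - target_gcc)
        if loss < best:                                  # accept only if the GCC moved towards the target
            G, best = H, loss
        if best < tol:
            break
    return G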
Bansal et al actually provided a Python implementation, but for various reasons I ended up writing my own version, which you can find here.
Performance
The Bansal approach takes just over half the number of iterations and half the total time compared with my naive diffusion method.
I was hoping for bigger gains, but a 2x speedup is better than nothing.
Generalizing to directed graphs
One remaining challenge with the Bansal method is that my graphs are directed, whereas Bansal et al's algorithm is only designed to work on undirected graphs. With a directed graph, the rewiring step is no longer guaranteed to preserve the in- and out-degree sequences.
Update
I've just figured out how to generalize the Bansal method to preserve both the in- and out-degree sequences for directed graphs. The trick is to select motifs where the two outward edges to be swapped have opposite directions (the directions of the edges between {x, y1} and {x, y2} don't matter).
I've also made some more optimizations, and the performance is starting to look a bit more respectable - it takes roughly half the number of iterations and half the total time compared with the diffusion approach. I've updated the graphs above with the new timings.

I came up with a graph generation model that can easily generate connected random graphs of some 10,000 nodes and more that follow prescribed degree and (local) clustering coefficient distributions which can be chosen such that any desired global clustering coefficient results. You can find a short description here. By the way, you will find your question (this one) in the references.

Kolda et al. proposed the BTER model (2013), which can generate random graphs with a prescribed degree and clustering coefficient distribution (and thus a prescribed global clustering index). It seems a bit more complicated than my model (see above), but maybe it's faster or generates less biased graphs. (To be honest, though, I assume that my model doesn't generate severely biased graphs either, but essentially random graphs.)

Related

OSMNX graph to distance matrix and DBSCAN

I'm having trouble getting the distance matrix from a graph of coordinates (lat, lon).
I want to take an arbitrary group of points (let's say 200,000 firms) and get their nearest representations in the graph, created with ox.graph_from_place().
I am working with dask arrays and dataframes (da.array, df.DataFrame)
if __name__ == "__main__":

    # OPTION B: Use a strongly (instead of weakly) connected graph
    Gs = ox.utils_graph.get_largest_component(G, strongly=True)
    Gs.__name__ = "Gs"

    # attach the nearest network node to each firm
    df["nn"] = da.array(ox.get_nearest_nodes(Gs, X=df['longitude'], Y=df['latitude'], method='balltree'))

    # we'll get distances for each pair of nodes that have firms attached to them
    nodes_unique = pd.Series(df['nn'].unique())
    nodes_unique.index = nodes_unique.values

    # convert MultiDiGraph to DiGraph for simpler, faster distance matrix computation
    G_dm = nx.DiGraph(Gs)
    G_dm.__name__ = "G_dm"

    save_pickle(Gs)
    save_pickle(G_dm)

    print("len df['nn']:", len(df['nn']))
    print("len nodes_unique:", len(nodes_unique))
Some code has been omitted to get to the heart of the matter. I've tried following https://networkx.org/documentation/stable/reference/algorithms/shortest_paths.html and wrote a network_distance_matrix() function using the strongly connected graph Gs, but it is very inefficient in terms of computation time. I have seen a bunch of functions in the docs, but none that efficiently computes a distance matrix between the pairs of unique nodes in the graph, following paths through it.
I would like to know whether there is any way to parallelize this process, and/or to compute it in a generator-like way rather than holding everything in RAM.
My objective is to provide a pre-computed matrix to a DBSCAN (sklearn.cluster) model, and it has to be fast so that I can more or less "grid-search" through the parameters. I am a beginner with these libraries.
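To make the goal concrete, here is only a rough sketch of the kind of thing I'm after (distance_row, the Pool usage, and the np.inf fill are all just placeholders; it assumes G_dm and nodes_unique from above and the 'length' edge attribute that osmnx graphs carry):
import numpy as np
import networkx as nx
from multiprocessing import Pool

sources = list(nodes_unique)

def distance_row(source):
    # shortest-path lengths from `source` to every reachable node, weighted by edge length
    lengths = nx.single_source_dijkstra_path_length(G_dm, source, weight='length')
    return [lengths.get(target, np.inf) for target in sources]

if __name__ == "__main__":
    # note: with the 'spawn' start method the workers need G_dm passed to them explicitly
    with Pool() as pool:
        dist_matrix = np.array(pool.map(distance_row, sources))
    # dist_matrix could then be fed to DBSCAN(metric='precomputed'), keeping in mind
    # that on a directed graph it will generally not be symmetric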

Evaluating vector distance measures

I am working with vectors of word frequencies and trying out some of the different distance measures available in scikit-learn's pairwise distances. I would like to use these distances for clustering and classification.
I usually have a feature matrix of ~30,000 x 100. My idea was to choose a distance metric that maximizes the pairwise distances, by computing pairwise distances over the same dataset with the distance metrics available in SciPy (e.g. Euclidean, Cityblock, etc.) and, for each metric:
convert distances computed for the dataset to zscores to normalize across metrics
get the range of these zscores, i.e. the spread of the distances
use the distance metric that gives me the widest range of distances as it apparently gives me the maximum spread over my dataset and the most variance to work with. (Cf. code below)
My questions:
Does this approach make sense?
Are there other evaluation procedures one should try? I found these papers (Gavin, Aggarwal), but they don't apply 100% here...
Any help is much appreciated!
My code:
import numpy as np
import scipy.stats
import sklearn.metrics

matrix = np.random.uniform(0, .1, size=(10, 300))   # test data set
scipy_distances = ['euclidean', 'minkowski', ...]   # these are the distance metrics

for d in scipy_distances:  # iterate over distances
    distmatrix = sklearn.metrics.pairwise.pairwise_distances(matrix, metric=d)
    distzscores = scipy.stats.mstats.zscore(distmatrix, axis=0, ddof=1)
    diststats = basicstatsmaker(distzscores)  # helper defined elsewhere
    spread = np.ptp(distzscores, axis=0)
    print("range of metric", d, np.ptp(spread))
In general, this is just a heuristic, which might or might not work. In particular, it is easy to construct a "dummy metric" that will "win" in your approach even though it is useless. Try out:
class Dummy_dist:
    def __init__(self):
        self.cheat = True

    def __call__(self, x, y):
        if self.cheat:
            self.cheat = False
            return 1e60
        else:
            return 0
dummy_dist = Dummy_dist()
This will give you a huge spread (even with z-score normalization). Of course this is a cheating example, as it is non-deterministic, but I wanted to show a basic counterexample; given your data, one can of course construct a deterministic analogue.
So what should you do? Your metric should be treated as a hyperparameter of your process. You should not divide the process of generating your clustering/classification into two separate phases (choosing a distance and then learning something); you should do this jointly, and consider each clustering/classification + distance pair as a single model. Instead of working with k-means, you will work with k-means+Euclidean, k-means+Minkowski, and so on. This is the only statistically supported approach. You cannot construct a method for assessing the "general goodness" of a metric, as there is no such object: metric quality can only be assessed in a particular task, which involves fixing every other element (the clustering/classification method, the particular dataset, etc.). Once you perform such a wide, exhaustive evaluation, checking many such pairs on many datasets, you might claim that a given metric performs best in that range of tasks.
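For illustration, here is a rough sketch of what evaluating the pair jointly could look like (DBSCAN with a precomputed matrix and a silhouette score are arbitrary stand-ins for whatever task-specific model and evaluation you actually have):
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import pairwise_distances, silhouette_score

X = np.random.uniform(0, .1, size=(100, 300))
for metric in ['euclidean', 'cityblock', 'cosine']:
    D = pairwise_distances(X, metric=metric)
    eps = np.percentile(D[D > 0], 5)          # crude, metric-dependent eps choice
    labels = DBSCAN(eps=eps, min_samples=5, metric='precomputed').fit_predict(D)
    if len(set(labels)) > 1:                  # silhouette needs at least two labels
        print(metric, silhouette_score(D, labels, metric='precomputed'))
    else:
        print(metric, "degenerate clustering (single cluster or all noise)")
The point is that the distance never gets scored on its own: only the full (algorithm, metric) combination does.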

Keeping scale free graph's degree distribution after perturbation - python

Given a scale-free graph G (a graph whose degree distribution is a power law), and the following procedure:
for i in range(C):
    coint = randint(0, 1)
    if (coint == 0):
        delete_random_edges(G)
    else:
        add_random_edge(G)
(C is a constant)
So, when C is large, the degree distribution after the procedure would look more like that of G(n,p). I am interested in preserving the power-law distribution, i.e. I want the graph to be scale free after this procedure, even for large C.
My idea is to write the procedures delete_random_edges and add_random_edge in a way that gives edges connected to high-degree nodes a small probability of being deleted (and, when adding a new edge, makes it more likely to attach to a high-degree node).
I use networkx to represent the graph, and all I have found are procedures that delete or add a specific edge. Any idea how I can implement the above?
Here are 2 algorithms:
Algorithm 1
This algorithm does not preserve the degree exactly, rather it preserves the expected degree.
Save each node's initial degree. Then delete edges at random. Whenever you create an edge, do so by randomly choosing two nodes, each with probability proportional to the initial degree of those nodes.
After a long period of time, the expected degree of each node 'u' is its initial degree (but it might be a bit higher or lower).
Basically, this will create what is called a Chung-Lu random graph. Networkx has a built in algorithm for creating them.
Note - this will allow the degree distribution to vary.
Algorithm 1a
Here is the efficient networkx implementation, skipping the edge deleting and adding and going straight to the final result (assuming a networkx graph G):
degree_list = [d for _, d in G.degree()]  # in networkx 1.x: G.degree().values()
H = nx.expected_degree_graph(degree_list)
Here's the documentation
Algorithm 2
This algorithm preserves the degrees exactly.
Choose a set of edges and break them. Create a list in which each node appears a number of times equal to the number of broken edges it was in. Shuffle this list. Create new edges between nodes that appear next to each other in this list.
Check to make sure you never join a node to itself or to a node which is already a neighbor. If this would occur you'll want to think of a custom way to avoid it. One option is to simply reshuffle the list. Another is to set those nodes aside and include them in the list you create next time you do this.
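A rough sketch of that shuffle for an undirected networkx graph (illustration only; invalid pairings are handled by simply reshuffling, the first option above):
import random
import networkx as nx

def shuffle_some_edges(G, k, max_tries=100):
    edges = random.sample(list(G.edges()), k)
    G.remove_edges_from(edges)
    stubs = [node for edge in edges for node in edge]   # one entry per broken edge endpoint
    for _ in range(max_tries):
        random.shuffle(stubs)
        new_edges = list(zip(stubs[::2], stubs[1::2]))
        no_bad_pairs = all(u != v and not G.has_edge(u, v) for u, v in new_edges)
        no_duplicates = len(set(frozenset(e) for e in new_edges)) == len(new_edges)
        if no_bad_pairs and no_duplicates:
            G.add_edges_from(new_edges)          # every node keeps its original degree
            return G
    G.add_edges_from(edges)                      # give up and restore the original edges
    return G
For example, shuffle_some_edges(G, k=100) rewires 100 edges while leaving the degree sequence untouched.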
edit
There is a built-in networkx command, double_edge_swap, to swap two edges at a time. Documentation.
Although you have already accepted the answer from @abdallah-sobehy, meaning that it works, I would suggest a simpler approach, in case it helps you or anybody around.
What you are trying to do is sometimes called preferential attachment (well, at least when you add nodes), and for that there is a random model developed quite some time ago; see the Barabasi-Albert model, which leads to a power-law degree distribution with exponent -3.
Basically you have to add edges with probability equal to the degree of the node divided by the sum of the degrees of all the nodes. You can use scipy.stats to define the probability distribution with code like this:
import scipy.stats as stats

x = list(Gx.nodes())
degrees = dict(Gx.degree())                     # node -> degree
sum_degrees = sum(degrees.values())
p = [degrees[node] / sum_degrees for node in x]
custm = stats.rv_discrete(name='custm', values=(x, p))
Then you just pick 2 nodes following that distribution, and those are the 2 nodes you add an edge between:
custm.rvs(size=2)
As for deleting edges, I haven't tried that myself, but I guess you could use something like this:
inv_degrees = {node: 1 / d for node, d in Gx.degree() if d > 0}
sum_inv_degrees = sum(inv_degrees.values())
p = [inv_degrees.get(node, 0) / sum_inv_degrees for node in Gx]
although honestly I am not completely sure; it is no longer the random model that I linked to above...
Hope it helps anyway.
UPDATE after comments
Indeed, by using this method for adding edges to an existing graph, you could get 2 undesired outcomes:
duplicated links
self links
You could remove those, although it will make the results deviate from the expected distribution.
Anyhow, you should take into account that you are already deviating from the preferential attachment model, since the algorithm studied by Barabasi and Albert works by adding new nodes and links to the existing graph:
The network begins with an initial connected network of m_0 nodes.
New nodes are added to the network one at a time. Each new node is connected to m <= m_0 existing nodes with a probability that is proportional to the number
...
(see here)
If you want to get an exact distribution (instead of growing an existing network and keeping its properties), you're probably better off with the answer from @joel.
Hope it helps.
I am not sure to what extent this will preserve the scale-free property, but this can be a way to implement your idea:
In order to add an edge you need to specify 2 nodes in networkx. So, you can choose one node with a probability proportional to its degree, and choose the other node uniformly (without any preference). Choosing a highly connected node can be achieved as follows:
For a graph G where the nodes are [0, 1, 2, ..., n]:
1) Create a list of floats (limits) between 0 and 1 that specifies, for each node, the probability of being chosen according to its degree. For example: limits[1] - limits[0] is the probability of choosing node 0, limits[2] - limits[1] is the probability of choosing node 1, etc.
# limits is a list that stores floats between 0 and 1 which define
# the probability of choosing a certain node depending on its degree
limits = [0.0]
# store the total number of edges of the graph; the sum of all degrees is 2*num_edges
num_edges = G.number_of_edges()
# store the degree of all nodes in a list
degrees = G.degree()
# iterate over the nodes to calculate the limits depending on degree
for i in G:
    limits.append(G.degree(i) / (2 * num_edges) + limits[i])
2) Randomly generate a number between 0 and 1 then compare it to the limits, and choose the node to add an edge to accordingly:
rnd = np.random.random()
# compare the random number to the limits and choose the node accordingly
for j in range(len(limits) - 1):
    if rnd >= limits[j] and rnd < limits[j+1]:
        chosen_node = j  # nodes are labelled 0..n, so the index j is the node itself
3) Choose another node uniformly, by generating a random integer between [0,n]
4) Add an edge between both of the chosen nodes.
5) Similarly, for deleting an edge, you can choose a node with probability proportional to 1/degree instead of degree, then uniformly delete any of its edges.
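As a side note (sketch only, assuming the same graph G with nodes 0..n), the degree-proportional and 1/degree-proportional choices above can also be done directly with numpy.random.choice, which avoids maintaining the limits list by hand:
import numpy as np

nodes = list(G.nodes())
deg = np.array([G.degree(node) for node in nodes], dtype=float)

# adding: first endpoint proportional to degree, second endpoint uniform
u = np.random.choice(nodes, p=deg / deg.sum())
v = np.random.choice(nodes)

# deleting: pick a node proportional to 1/degree, then remove one of its edges uniformly
inv = np.where(deg > 0, 1.0 / np.maximum(deg, 1.0), 0.0)
w = np.random.choice(nodes, p=inv / inv.sum())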
It would be interesting to know whether using this approach preserves the scale-free property, and at which C the property is lost, so let us know if it worked or not.
EDIT: As suggested by @joel, the selection of the node to add an edge to should be proportional to degree rather than degree^2. I have edited step 1 accordingly.
EDIT2: This might help you judge whether the scale-free graph lost its property after edge additions and removals: simply compute the preferential attachment score before and after the changes. You can find the documentation here.

Wrap-around when calculating distance for k-means

I'm trying to do a K-means clustering of some dataset using sklearn. The problem is that one of the dimensions is hour-of-day: a number from 0-23, so the distance algorithm thinks that 0 is very far from 23, because in absolute terms it is. In reality and for my purposes, hour 0 is very close to hour 23. Is there a way to make the distance algorithm do some form of wrap-around so it computes the more 'real' time difference?
I'm doing something simple, similar to the following:
import numpy as np
from sklearn.cluster import KMeans

clusters = KMeans(n_clusters=2)
data = np.vstack(data)
fit = clusters.fit(data)
classes = fit.predict(data)
data elements looks something like [22, 418, 192] where the first element is the hour.
Any ideas?
Even though @elyase's answer is accepted, I think it is not the correct approach.
Yes, to use such a distance you have to redefine your distance measure, and so use a different library. But what is more important, the concept of the mean used in k-means won't suit the cyclic dimension. Let's consider the following example:
# current cluster X, based on centroid position Xc=24
x1 = 1
x2 = 24
# current cluster Y, based on centroid position Yc=10
y1 = 12
y2 = 13
Computing the simple arithmetic mean will place the centroids at Xc=12.5, Yc=12.5, which from the point of view of the cyclic measure is incorrect; it should be Xc=0.5, Yc=12.5. As you can see, assignment based on the cyclic distance measure is not "compatible" with the simple mean operation, and leads to bizarre results.
Simple k-means will result in the clusters {x1, y1}, {x2, y2}.
Simple k-means + the cyclic distance measure results in the degenerate super-cluster {x1, x2, y1, y2}.
The correct clustering would be {x1, x2}, {y1, y2}.
Solving this problem requires checking one "if" per point (whether it is better to use the value as-is or to represent it as x' = x - 24); unfortunately, given n points this makes 2^n possibilities.
This seems like a use case for kernelized k-means, where you are actually clustering in the abstract feature space (in your case, a "tube" rolled around the time dimension) induced by the kernel (a "similarity measure" that is the inner product of some vector space).
Details of the kernel k-means are given here
Why k-means doesn't work with arbitrary distances
K-means is not a distance-based algorithm.
K-means minimizes the Within-Cluster-Sum-of-Squares, which is a kind of variance (it's roughly the weighted average variance of all clusters, where each object and dimension is given the same weight).
In order for Lloyd's algorithm to converge, you need both steps to optimize the same function:
the reassignment step
the centroid update step
Now the "mean" function is a least-squares estimator. I.e. choosing the mean in step 2 is optimal for the WCSS objective. Assigning objects by least-squares deviation (= squared Euclidean distance, monotone to Euclidean distance) in step 1 also yields guaranteed convergence. The mean is exactly where your wrap-around idea would fall apart.
If you plug in a random other distance function, as suggested by @elyase, k-means may no longer converge.
Proper solutions
There are various solutions to this:
Use K-medoids (PAM). By choosing the medoid instead of the mean you do get guaranteed convergence with arbitrary distances. However, computing the medoid is rather expensive.
Transform the data into a kernel space where you are happy with minimizing the sum of squares. For example, you could transform the hour into sin(hour / 12 * pi), cos(hour / 12 * pi), which may be okay for SSQ (see the sketch just after this list).
Use other, distance-based clustering algorithms. K-means is old, and there has been a lot of research on clustering since. You may want to start with hierarchical clustering (which actually is just as old as k-means), and then try DBSCAN and the variants of it.
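A minimal sketch of the sin/cos transform from option 2 (the data rows are made up, following the [hour, x, y] layout from the question):
import numpy as np
from sklearn.cluster import KMeans

data = np.array([[22, 418, 192],
                 [23, 400, 180],
                 [ 0, 410, 185],
                 [12,  50,  20]], dtype=float)           # toy rows in the question's format
hour = data[:, 0]
features = np.column_stack([np.sin(hour / 12 * np.pi),   # hour mapped onto the unit circle,
                            np.cos(hour / 12 * np.pi),   # so 23 and 0 end up close together
                            data[:, 1:]])                # remaining columns (consider scaling them)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)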
The easiest approach, to me, is to adapt the K-means algorithm to the wrap-around dimension by computing the "circular mean" for that dimension. Of course, you will also need to change the distance-to-centroid calculation accordingly.
#compute the mean of hour 0 and 23
import numpy as np
hours = np.array(range(24))
#hours to angles
angles = hours/24 * (2*np.pi)
sin = np.sin(angles)
cos = np.cos(angles)
a = np.arctan2(sin[23]+sin[0], cos[23]+cos[0])
if a < 0: a += 2*np.pi
#angle back to hour
hour = a * 24 / (2*np.pi)
#23.5

dbscan - setting limit on maximum cluster span

By my understanding of DBSCAN, it's possible for you to specify an epsilon of, say, 100 meters and — because DBSCAN takes into account density-reachability and not direct density-reachability when finding clusters — end up with a cluster in which the maximum distance between any two points is > 100 meters. In a more extreme possibility, it seems possible that you could set epsilon of 100 meters and end up with a cluster of 1 kilometer:
see [2][6] in this array of images from scikit learn for an example of when that might occur. (I'm more than willing to be told I'm a total idiot and am misunderstanding DBSCAN if that's what's happening here.)
Is there an algorithm that is density-based like DBSCAN but takes into account some kind of thresholding for the maximum distance between any two points in a cluster?
DBSCAN indeed does not impose a total size constraint on the cluster.
The epsilon value is best interpreted as the size of the gap separating two clusters (that may at most contain minpts-1 objects).
I believe you are in fact not even looking for clustering: clustering is the task of discovering structure in data. The structure can be simple (such as that found by k-means) or complex (such as the arbitrarily shaped clusters discovered by hierarchical clustering and DBSCAN).
You might be looking for vector quantization - reducing a data set to a smaller set of representatives - or set cover - finding the optimal cover for a given set - instead.
However, I also have the impression that you aren't really sure on what you need and why.
A strength of DBSCAN is that it has a mathematical definition of structure in the form of density-connected components. This is a strong and (except for some rare border cases) well-defined mathematical concept, and the DBSCAN algorithm is an optimally efficient algorithm to discover this structure.
Direct density reachability, however, doesn't define a useful (partitioning) structure; it just does not partition the data into disjoint partitions.
If you don't need this kind of strong structure (i.e. you don't do clustering as in "structure discovery", but just want to compress your data as in vector quantization), you could give "canopy preclustering" a try. It can be seen as a preprocessing step designed for clustering. Essentially, it is like DBSCAN, except that it uses two epsilon values, and the structure is not guaranteed to be optimal in any way but will depend highly on the ordering of your data. If you then preprocess it appropriately, it can still be useful. Unless you are in a distributed setting, however, canopy preclustering is at least as expensive as a full DBSCAN run. Due to the loose requirements (in particular, "clusters" may overlap, and objects are expected to belong to multiple "clusters"), it is easier to parallelize.
Oh, and you might also just be looking for complete-linkage hierarchical clustering. If you cut the dendrogram at your desired height, the resulting clusters should all have the desired maximum distance between any two of their objects. The only problem is that hierarchical clustering usually is O(n^3), i.e. it doesn't scale to large data sets, whereas DBSCAN runs in O(n log n) in good implementations (with index support).
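For example, with scipy (toy coordinates; max_dist is whatever maximum within-cluster distance you need):
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(500, 2)                          # toy coordinates
max_dist = 0.2
Z = linkage(X, method='complete')                   # complete-linkage dendrogram
labels = fcluster(Z, t=max_dist, criterion='distance')
With complete linkage, cutting at max_dist guarantees that no resulting cluster contains two points farther apart than max_dist.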
I had the same problem and ended up solving it by using DBSCAN in combination with KMeans clustering: first I use DBSCAN to identify high-density clusters and remove outliers, then I take any cluster larger than 250 miles (in my case) and break it apart. Here's the code:
from sklearn.cluster import DBSCAN, KMeans

clustering = DBSCAN(eps=0.3, min_samples=100).fit(load_geocodes[['lat', 'long']])
load_geocodes.loc[:, 'cluster'] = clustering.labels_

import mpu

def calculate_cluster_size(lat, long):
    # approximate cluster span: haversine distance (km) across the bounding box, converted to miles
    left_top = (max(lat), min(long))
    right_bottom = (min(lat), max(long))
    distance = mpu.haversine_distance(left_top, right_bottom) * 0.621371
    return distance

for c, df in load_geocodes.groupby('cluster'):
    if c == -1:
        continue  # don't do this for outliers
    distance = calculate_cluster_size(df['lat'], df['long'])
    print(distance)
    if distance > 250:
        # break the cluster into more clusters until the maximum size of a cluster is less than 250 miles
        max_distance = distance
        i = 2
        while max_distance > 250:
            kmeans = KMeans(n_clusters=i, random_state=0).fit(df[['lat', 'long']])
            df.loc[:, 'cl_temp'] = kmeans.labels_
            max_temp_cl_size = 0
            for temp_cl, temp_cl_df in df.groupby('cl_temp'):
                temp_cl_size = calculate_cluster_size(temp_cl_df['lat'], temp_cl_df['long'])
                if temp_cl_size > max_temp_cl_size:
                    max_temp_cl_size = temp_cl_size
            i += 1
            max_distance = max_temp_cl_size
        load_geocodes.loc[df.index, 'subcluster'] = kmeans.labels_
