Using k-means clustering to cluster based on a single variable - Python
I’m just trying to get my head around clustering.
I have a series of data points, y, each with Gaussian noise added.
There are two classes of values: 0 and >0 (both with noise, obviously). I'm trying to find the centre point of the group that is >0.
I’ve plotted the points with a simple moving average to be able to eyeball the data.
Moving average plot:
How can I cluster the data just based on the y value?
I'd like to have two clusters: one covering the points on the left and right (roughly x < 120 and x > 260 by the looks of it) and the other for the middle points (x = 120 to 260).
If I try with two clusters I get this:
k means plot - k=2:
How should I amend my code to achieve this?
import numpy as np
import matplotlib.pyplot as plt

x = range(315)
y= [-0.0019438692324050865, 0.0028994208839327852, 0.0051483573976274649, -0.0033242993359676809, -0.007205517954705391, 0.0023493638544448323, 0.0021109981155292179, 0.0035990200904119076, -0.0039516797159245328, 0.0046512034107712786, -0.0019248189368846083, 0.0036744109953683823, 0.0007898612768152954, 0.0050059088808496474, -0.0021084425769681558, 0.0014692258570182986, -0.0030711206115484175, -0.0026614801222815628, 0.0022816301256991535, 0.00019923934682088178, -0.0013181161659271139, -0.0021956355547661358, 0.0012941895041076283, 0.00337197586896105, -0.0019792508536746402, -0.002020497762984554, 0.0014495021773240431, 0.0011887337096206894, 0.0016667792145975404, -0.0010119590445208419, -0.0024506337087077676, 0.0072264471843846339, -0.0014126073097276062, -0.00065673498034648755, -0.0011355352304356647, -0.00042657980930307281, -0.0032875547481258042, -0.002351265010099495, -0.00073344218847348742, -0.0031555991687002589, 0.0026170287799315104, 0.0019289080666337198, -0.0021804765064623076, 0.0026221290350876979, 0.0019831827145683828, -0.005422907223254632, -0.0014107046201467732, -0.0049438583709020423, 0.00081884635937855494, 0.0054783747880986361, -0.0011282600170147909, -0.00436581779762948, 0.0024421851848953177, -0.0018564229613786095, -0.0052492274840120123, 0.0051775747035086306, 0.0052413417491534494, 0.0030817295096650732, -0.0014106391941506153, 0.00074380887788818206, -0.0041507550699856439, -0.00074928547462217287, -9.3938667619130614e-05, -0.00060592968804004362, 0.0064913597798387348, 0.0018098075166183621, 0.00099550852535854441, 0.0037322288350247917, 0.0027039351321340869, 0.0060238021513650541, -0.006567405116575234, 0.0020858553839503175, -0.0040329574871009084, -0.0029337227854833213, 0.0020743996957790969, 0.0041249738085716511, -0.0016678673351373336, -0.00081387164524554967, -0.0028411340446090278, 0.00013572776045231967, -0.00025350369023925548, 0.00071609777542998309, -0.0018427036825796074, -0.0015513575887011904, -0.0016357115978466398, 0.0038235991426514866, 0.0017693050063256977, -0.00029816429542494152, -0.0016071303644783605, -0.0031883070092131086, -0.0010340123778528594, -0.0049194467790889653, 0.0012109237666701397, 0.0024532524488299246, 0.0069307209537693721, 0.0009573350812806618, -6.0022322637651027e-05, -0.00050143013334696311, 0.0023415017810229548, 0.0033053845403900849, -0.0061156769150035222, 0.00022216114877491691, 0.0017257349557975464, 4.6919738262423826e-05, -0.0035257466102171162, -0.0043673831041441185, -0.0016592116617178102, -0.003298933045964781, -0.001667158964114637, 0.0011283739877531254, -0.0055098513985193534, 0.0023564462221116358, 0.0041971132878626258, 0.0061727231077443314, 0.0047583822927202779, 0.0022475414486232245, 0.0048682822792560521, 0.0022415648209199016, 0.00044859963858686957, -0.0018519391698513449, 0.0031460918774998763, 0.0038614233082916809, -0.0043409564348247066, -0.0055560805453666326, -0.00025133196059449212, 0.012436346397552794, 0.01136022093203152, 0.011244278807602391, 0.01470018209739289, 0.0075560289478025277, 0.012568781764361209, 0.0076068752709663838, 0.011022209533236597, 0.010545997929846045, 0.01084340614623565, 0.011728388118710915, 0.0075043238708055885, 0.012860298948366296, 0.0097297636410632864, 0.0098800557729756874, 0.011536517297700085, 0.0082316420968713416, 0.012612386004592427, 0.016617154743589352, 0.0091391582296167315, 0.014952150276251052, 0.011675391002362373, 0.01568297072839233, 0.01537664322062633, 0.01622711654371662, 0.010708828344561546, 0.016625354383482532, 
0.010757807468539406, 0.016867909081979202, 0.010354635736138377, 0.014345365677006765, 0.011114328315579219, 0.010034249196973242, 0.015846180181371881, 0.014303841146954242, 0.011608682896746103, 0.0086826955459553216, 0.0088576104599897426, 0.011250553207393772, 0.005522552439745569, 0.011185993425936373, 0.010241377537878162, 0.0079206732150164348, 0.0052965651546758108, 0.011104715912291204, 0.010506408714857187, 0.010153282642128673, 0.010286986015082572, 0.01187330766677645, 0.014541420264499783, 0.013092204890199896, 0.012979246400649271, 0.012595814351669916, 0.014714607377710237, 0.011727516021525658, 0.011035077266739704, 0.0089698030032708698, 0.0087245475140550147, 0.011139467365240661, 0.0094505568595650603, 0.014430361388952871, 0.0089241578716030695, 0.014616210804585136, 0.013295072783119581, 0.014430633057603408, 0.01200577022494694, 0.011315388654675421, 0.013359877656434442, 0.017704146495248471, 0.0089900858719559155, 0.014731590728415532, 0.0053244009632545759, 0.011199377929150522, 0.0098899254166580439, 0.012220397221188688, 0.015315682643295272, 0.0042842773538990919, 0.0098560854848898077, 0.0088592602102698509, 0.011682575531316278, 0.0098450268165344631, 0.015508017179782136, 0.0083959771972897564, 0.0057504382506886418, 0.010149849298310511, 0.011467172305959087, 0.019354427705224483, 0.013200207481702888, 0.0084555200083286791, 0.011458643458455485, 0.0067582116806278788, 0.01083616691886825, 0.013189184991857963, 0.011774794518724967, 0.014419252448288828, 0.011252283438046358, 0.013346699363583018, 0.0070752340082163006, 0.013215300343131422, 0.0083841320189162287, 0.0067600805611729283, 0.014043517055899181, 0.0098241497159076551, 0.011466675085574904, 0.01155354571355972, 0.012051701509217881, 0.010150596813866767, 0.0093930906430917619, 0.003368481869910186, 0.0048359029438027378, 0.0072083852964288445, 0.010112266453748613, 0.014009345326404186, 0.0050187514558796657, 0.0076315122645601551, 0.0098572381625301152, 0.0114902035403828, 0.018390212262653569, 0.020552166087412803, 0.010428735773226807, 0.011717974670325962, 0.011586303572796604, 0.0092978832913345726, 0.0040060048273946845, 0.012302496528511328, 0.0076707934776137684, 0.014700766223305586, 0.013491092168119941, 0.016244916923257174, 0.010387716692694397, 0.0072564046806323553, 0.0089420045528720883, 0.012125390630607462, 0.013274623392811291, 0.012783388635585766, 0.013859113028817658, 0.0080975189401925642, 0.01379241865445455, 0.012648552766643405, 0.011380280655911323, 0.010109646424218717, 0.0098577688652478051, 0.0064661895943772208, 0.010848835432253455, -0.0010986941731458047, -0.00052875821639583262, 0.0020423603076171414, 0.0035710440970171805, 0.001652886517437206, 0.0023512717524485573, -0.002695275440737862, 0.002253880812688683, -0.0080855104018828141, -0.0020090808966136161, -0.0029794078852333791, 0.00047537441103425869, -0.0010168825525621432, 0.0028683012479151873, -0.0014733214239664142, 0.0019432702158397569, -0.0012411849653504801, -0.00034507088510895141, -0.0023587874349834145, 0.0018156591123708393, 0.0040923006067568324, 0.0043522232127477072, -0.0055992642684123371, -0.0019368557792245147, 0.0026257395447205848, 0.0025594329536029635, 0.00053681548609292378, 0.0032186216144045742, -0.003338121135450386, 0.00065996843114729585, 0.006711173245189642, 0.0032877327776177517, 0.0039528629317296367, 0.0063732674764248719, -0.0026207617244284023, 0.0061381482567009048, -0.003024741769256066, -0.0023891419421980839, -0.004011235930513047, 0.0018372067754070733, 
-0.0045928077859572689, -0.0021420171112169601, 0.001665179522797816, 0.0074356736689407859, 0.0065680163280897891, -0.0038116640825467678]
data = np.column_stack([x,y])
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2)
kmeans.fit(data)
y_kmeans = kmeans.predict(data)
plt.scatter(data[:, 0], data[:, 1], c=y_kmeans, s=5, cmap='viridis')
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5);
plt.grid()
I’d also like to be able to return the max, min and average for the values in each cluster - is this possible?
Some ideas on your problem.
k-means is actually a multivariate method, so it is probably not a good choice in your case. You can take advantage of the one-dimensionality of your data by looking for minima of a kernel density estimate of the y-values. A plot of the density estimate will show a bimodal density function, with the two modes separated by a minimum: that minimum is the y-value at which you want to split the two clusters.
Have a look at http://scikit-learn.org/stable/modules/density.html#kernel-density
To get the x-values at which you divide, you could use the moving average you already computed.
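A minimal sketch of that idea, assuming y is the list from the question; scipy's gaussian_kde is used here for brevity (scikit-learn's KernelDensity from the link above would work the same way):

import numpy as np
from scipy.stats import gaussian_kde

y_arr = np.asarray(y)
kde = gaussian_kde(y_arr)
grid = np.linspace(y_arr.min(), y_arr.max(), 500)
density = kde(grid)

# The split point is the local minimum of the density between the two modes.
is_local_min = (density[1:-1] < density[:-2]) & (density[1:-1] < density[2:])
threshold = grid[1:-1][is_local_min][0] if is_local_min.any() else np.median(y_arr)

labels = (y_arr > threshold).astype(int)  # 0 = near-zero class, 1 = the >0 class

The x-extent of the >0 cluster then follows from the labels, e.g. np.asarray(x)[labels == 1].min() and .max(), which you can cross-check against the moving average.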
However, there might be methods better suited to your kind of data. You might want to ask your question at https://stats.stackexchange.com/ as it is not really a programming problem but one about the appropriate method.
You can reshape your data to an n x 1 array.
But if you want to take the time (the x-axis) into account, I suggest you look into change-point detection in time series instead; it can detect a change in mean. See the sketch below.
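For illustration, a minimal sketch of change-point detection, assuming the third-party ruptures package is installed; the "l2" cost model looks for shifts in mean, and the penalty value is an arbitrary choice you would need to tune for this data:

import numpy as np
import ruptures as rpt

signal = np.asarray(y)                    # y-values from the question
algo = rpt.Pelt(model="l2").fit(signal)   # PELT search with an l2 (change-in-mean) cost
breakpoints = algo.predict(pen=0.001)     # indices where the mean appears to shift
print(breakpoints)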
Using your code, the simplest way to get what you want is to change:
kmeans.fit(data)
y_kmeans = kmeans.predict(data)
to
kmeans.fit(data[:,1].reshape(-1,1))
y_kmeans = kmeans.predict(data[:,1].reshape(-1,1))
You can get the max, min, mean, etc. by indexing with the cluster labels, for example:
np.max(data[:,1][y_kmeans == 1])
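As a quick sketch, the same idea extended to all three statistics for every cluster (assuming data and y_kmeans are as defined above):

for label in np.unique(y_kmeans):
    vals = data[:, 1][y_kmeans == label]
    print(label, vals.min(), vals.max(), vals.mean())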
Related
How to fit a Gaussian using Astropy
I am trying to fit a Gaussian to a set of data points using the astropy.modeling package, but all I am getting is a flat line. See below. Here's my code:

%pylab inline
from astropy.modeling import models, fitting
from astropy import modeling

# Fitting a gaussian for the absorption lines
wavelength = linspace(galaxy1_wavelength_extracted_1.min(), galaxy1_wavelength_extracted_1.max(), 200)
g_init = models.Gaussian1D(amplitude=1., mean=5000, stddev=1.)
fit_g = fitting.LevMarLSQFitter()
g = fit_g(g_init, galaxy1_wavelength_extracted_1, galaxy1_flux_extracted_1)

# Plotting
plot(galaxy1_wavelength_extracted_1, galaxy1_flux_extracted_1, ".k")
plot(wavelength, g(wavelength))
xlabel("Wavelength ($\\AA$)")
ylabel("Flux (counts)")

What am I doing wrong or missing?
I made some fake data that roughly resembles yours and tried running your code on it, obtaining similar results. I think the problem is that if you don't adjust your model's initial parameters to at least roughly resemble the true model, the fitter won't be able to converge no matter how many rounds of fitting it performs.

If I'm fitting a Gaussian I like to give the initial model some parameters based on computationally "eyeballing" them, like so (here I named your real data's flux and wavelength orig_flux and orig_wavelength respectively):

>>> an_amplitude = orig_flux.min()
>>> an_mean = orig_wavelength[orig_flux.argmin()]
>>> an_stddev = np.sqrt(np.sum((orig_wavelength - an_mean)**2) / (len(orig_wavelength) - 1))
>>> print(f'mean: {an_mean}, stddev: {an_stddev}, amplitude: {an_amplitude}')
mean: 5737.979797979798, stddev: 42.768052162734605, amplitude: 84.73925092448636

where for the standard deviation I used the unbiased standard deviation estimate. Plotting this over my fake data shows that these are reasonable values I might have picked if I had manually eyeballed the data as well:

>>> plt.plot(orig_wavelength, orig_flux, '.k', zorder=1)
>>> plt.scatter(an_mean, an_amplitude, color='red', s=100, zorder=2)
>>> plt.vlines([an_mean - an_stddev, an_mean + an_stddev], orig_flux.min(), orig_flux.max(),
...            linestyles='dashed', colors='g', zorder=2)

One feature I've wanted to add to astropy.modeling in the past is an optional method that can be attached to some models to give reasonable estimates of their parameters based on some data. For Gaussians such a method would return values much like those just computed above. I don't know if that has ever been implemented, though.

It is also worth noting that your Gaussian is inverted (it has a negative amplitude) and displaced on the flux axis by some 120 points, so I added a Const1D to my model to account for this and subtracted the displacement from the amplitude:

>>> an_disp = orig_flux.max()
>>> g_init = (
...     models.Const1D(an_disp) +
...     models.Gaussian1D(amplitude=(an_amplitude - an_disp), mean=an_mean, stddev=an_stddev)
... )
>>> fit_g = fitting.LevMarLSQFitter()
>>> g = fit_g(g_init, orig_wavelength, orig_flux)

This results in the following fit, which already looks much better:

>>> plt.plot(orig_wavelength, orig_flux, '.k')
>>> plt.plot(orig_wavelength, g(orig_wavelength), 'r-')

I'm not an expert in modeling or statistics, so someone with deeper knowledge could likely improve on this. I've added a notebook with my full analysis of the problem, including how I generated my sample data, here.
Output K-Means to CSV with SciKit Learn - give cluster names
I have the below scikit-learn script, which outputs a nice chart (below) with each of the clusters. I have a couple of questions:
- How can I export this to CSV, with a cluster name or ID?
- How can I name the clusters?
- How can I make sure the clusters are always named the same thing? For example, I want to call the top-right segment 'high spenders'; how do I do that so it is always correct?
Thanks!

#import the required libraries
# - matplotlib is a charting library
# - Seaborn builds on top of Matplotlib and introduces additional plot types. It also makes your traditional Matplotlib plots look a bit prettier.
# - Numpy is numerical Python
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.datasets.samples_generator import make_blobs
from sklearn.cluster import KMeans

#Generate sample data, with distinct clusters for testing
#n_samples = the number of datapoints, equally split across the clusters
#centers = the number of centers to generate (number of clusters) - a center is the arithmetic mean of all the points belonging to the cluster
#cluster_std = the standard deviation of the clusters - how much the members of a group differ from the group mean (how tight the cluster will be)
#random_state = controls the random number generator. Without it, every run generates new random values, so the train and test datasets differ each time; with a fixed value (e.g. random_state=1) the result is the same on every run.
#make_blobs generates "isotropic Gaussian blobs" - X is a numpy array with two columns containing the (x, y) coordinates of the points, whereas y contains the category for each point
#X, y = ... simply means that the output of make_blobs() has two elements, assigned to X and y
X, y = make_blobs(n_samples=300, centers=4, cluster_std=0.50, random_state=0)

#X now looks like this - column 0 becomes the x-axis, column 1 becomes the y-axis
array([[ 1.85219907,  1.10411295],
       [-1.27582283,  7.76448722],
       [ 1.0060939 ,  4.43642592],
       [-1.20998253,  7.83203579],
       [ 1.92461484,  1.06347673],
       [ 2.28565919,  0.79166208],
       [-1.57379043,  2.69773813],
       [ 1.04917913,  4.31668562],
       [-1.07436851,  7.93489945],
       [-1.15872975,  7.97295642]

#The below statement enables us to visualise matplotlib charts, even in ipython
#Using matplotlib backend: MacOSX
#Populating the interactive namespace from numpy and matplotlib
%pylab

#plot the chart
#s = the size of the points
#X[:, 0] selects every row entry for column 0 - i.e. a single column from the numpy array
#X[:, 1] selects every row entry for column 1 - i.e. a single column from the numpy array
plt.scatter(X[:, 0], X[:, 1], s=50);

#now I am defining that I want to find 4 clusters within the data. The general rule I follow is to have 7 times fewer clusters than datapoints.
kmeans = KMeans(n_clusters=4)

#build the model, based on X with the number of clusters defined above
kmeans.fit(X)

#now we're going to find clusters in the randomly generated dataset
predict = kmeans.predict(X)

#now we can plot the prediction
#c = colour, which is based on the predict variable we defined above
#s = the size of the points
plt.scatter(X[:, 0], X[:, 1], c=predict, s=50)
Based on your code, the following worked for me. You can certainly stay with numpy for storing the CSV, but I simply prefer pandas. The sorting line should give you the same results every time you run the code. However, since the initialisation of the clusters can have an impact, I would also set a seed in your code, e.g. np.random.seed(42), and call the KMeans function with the random_state parameter, e.g. kmeans = KMeans(n_clusters=4, random_state=42).

# transform to dataframe
import pandas as pd
import seaborn as sns
df = pd.DataFrame(X)
df.columns = ["var1", "var2"]
df["cluster"] = predict
colors = sns.color_palette()[0:4]
df = df.sort_values("cluster")

# check plot
sns.scatterplot(df["var1"], df["var2"], hue=df["cluster"], palette=colors)
plt.show()

# define rename schema
mynames = {"0": "center_left", "1": "top_left", "2": "bot_right", "3": "center"}
df["cluster_name"] = [mynames[str(i)] for i in df.cluster]

# plot again to verify order
sns.scatterplot(df["var1"], df["var2"], hue=df["cluster_name"], palette=colors)
sns.despine()
plt.show()

# save dataframe as CSV
df.to_csv("myoutput.csv")

The first plot looks like this:
The second plot looks like this:
The CSV will look like this:
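If you would rather stay with numpy for writing the CSV (as mentioned above), a minimal sketch under my assumptions - np.savetxt and the "myoutput_numpy.csv" filename are my own illustrative choices:

import numpy as np

# one row per sample: the two coordinates plus the predicted cluster label
out = np.column_stack([X, predict])
np.savetxt("myoutput_numpy.csv", out, delimiter=",", header="var1,var2,cluster", comments="")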
How to make the confidence interval (error bands) show on seaborn lineplot
I'm trying to create a plot of classification accuracy for three ML models, depending on the number of features used from the data (the number of features used goes from 1 to 75, ranked according to a feature selection method). I did 100 iterations of calculating the accuracy output for each model and for each "# of features used". Below is what my data looks like (clsf from 0 to 2, timepoint from 1 to 75): data

I am then calling the seaborn function as shown in the documentation:

sns.lineplot(x="timepoint", y="acc", hue="clsf", data=ttest_df, ci="sd", err_style="band")

The plot comes out like this: plot

I wanted there to be confidence intervals for each point on the x-axis, and I don't know why it is not working. I have 100 y-values for each x-value, so I don't see why it cannot calculate/show them.
You could try your data set with Seaborn's pointplot function instead. It's specifically for showing an indication of uncertainty around a scatter plot of points. By default pointplot will connect values with a line. This is fine if the categorical variable is ordinal in nature, but it can be a good idea to remove the line via linestyles="" for nominal data (I used join=False in my example). I tried to recreate your notebook to give a visual, but wasn't able to get the confidence interval in my plot exactly as you describe. I hope this is helpful for you.

sb.set(style="darkgrid")
sb.pointplot(x='timepoint', y='acc', hue='clsf', data=ttest_df,
             ci='sd', palette='magma', join=False);
Plotting clusters using PCA features as X and Y axes
I have applied PCA to a dataframe in order to plot clusters based on k-means. Since I have about 24 features in my original df, I don't want to plot clusters based on only 2 or 3 features at a time. So what I want to do is plot the combinations of those features, to get a more general/representative graphical representation of each feature in the clusters. I extracted the components using pca.components_ and created the following df of components:

                        PC-1      PC-2
media_bi_mov        0.003094  0.050599
media_bi_post       0.000762  0.028931
total_mov_prod_300  0.000836  0.573675
codsprod_0          0.440476 -0.004404
codsprod_1          0.008005  0.105349
codsprod_2          0.002851  0.042459
codsprod_3          0.001078  0.009355
codsprod_4         -0.011922 -0.022020
idaplic_0           0.392229 -0.002817
idaplic_1           0.003001  0.004822
idaplic_2           0.044730 -0.001148
idaplic_3           0.097695 -0.008628
idaplic_4           0.024273  0.486973
idaplic_5           0.234798 -0.033369
idaplic_6           0.019329  0.015455
idempro_36          0.000401 -0.000438
idempro_38          0.032149  0.292137
idempro_49          0.439413 -0.023269
codmonsw_EUR        0.440543 -0.002770
codmonsw_USD        0.000378  0.000664
resto_codsprod      0.011406  0.011731
resto_idaplic       0.041649  0.005692
días_entre_ops     -0.011129 -0.015144
frecuencia          0.440543 -0.002770
valor_total_eur     0.000836  0.573675

Normally I would plot the clusters using kmeans.labels_ to apply a different colour to each cluster, if this were the original df. But my issue now is that I can't use kmeans.labels_ to differentiate the clusters in this PCA-reduced df, since kmeans.labels_ has a bigger length. How can I apply colour to differentiate the clusters in this dataframe? Thanks in advance.
I didn't realise the solution to this problem was so easy: I just needed to run k-means on the components df to get the cluster labels for each feature in each principal component. Hope this will help someone with the same doubts as me. A minimal sketch is below.
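A sketch of that approach under my assumptions: components_df is a hypothetical name for the PC-1/PC-2 dataframe shown in the question (one row per original feature), and n_clusters=3 is an arbitrary choice:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# cluster the feature loadings themselves, as described above (hypothetical names)
kmeans = KMeans(n_clusters=3, random_state=0)
labels = kmeans.fit_predict(components_df[["PC-1", "PC-2"]])

# colour each feature's point in the PC-1/PC-2 plane by its cluster label
plt.scatter(components_df["PC-1"], components_df["PC-2"], c=labels, cmap="viridis")
plt.xlabel("PC-1")
plt.ylabel("PC-2")
plt.show()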
How to smooth or overlap bins in pyplot.hist2d?
I am plotting a 2D histogram to show, for example, the concentration of lightning strikes (given by their position registered in longitude and latitude). The number of data points is not too large (53) and the result is too coarse. Here is a picture of the result:

For this reason, I am trying to find a way to weight in data from surrounding bins. For example, there is a bin at longitude = 130 and latitude = 34.395 with 0 lightning strikes registered, but with several around it. I would want this bin to somehow reflect the concentration around it. In other words, I want to smooth the data by having overlapping bins (so that a data point can be counted more than once, by different contiguous bins). I understand that hist2d has the input option "weights", but that would only make a data point more "important" within its own bin. The simplified code is below, and I can clarify anything needed.

import numpy as np
import matplotlib.pyplot as plt

# Here are the data, to experiment with if needed
longitude = np.array([119.165, 115.828, 110.354, 117.124, 119.16, 107.068, 108.628, 126.914, 125.685, 116.608, 122.455, 116.278, 123.43, 128.84, 128.603, 130.192, 124.508, 121.916, 133.245, 125.088, 126.641, 127.224, 113.686, 129.376, 127.312, 121.353, 117.834, 125.219, 138.077, 153.299, 135.66, 128.391, 118.011, 117.313, 119.986, 118.619, 119.178, 120.295, 121.991, 123.519, 135.948, 132.224, 129.317, 135.334, 132.923, 129.828, 139.006, 140.813, 116.207, 139.254, 120.922, 112.171, 143.508])
latitude = np.array([34.381, 34.351, 34.359, 34.357, 34.364, 34.339, 34.351, 34.38, 34.381, 34.366, 34.373, 34.366, 34.369, 34.387, 34.39, 34.39, 34.386, 34.371, 34.394, 34.386, 34.384, 34.387, 34.369, 34.4, 34.396, 34.37, 34.374, 34.383, 34.403, 34.429, 34.405, 34.385, 34.367, 34.36, 34.367, 34.364, 34.363, 34.367, 34.367, 34.369, 34.399, 34.396, 34.382, 34.401, 34.396, 34.392, 34.401, 34.401, 34.362, 34.404, 34.382, 34.346, 34.406])

# Number of bins
Nbins = 15

# Plot a 2D histogram of the positions
plt.hist2d(longitude, latitude, bins=Nbins)
plt.plot(longitude, latitude, 'o', markersize=8, color='k')
plt.plot(longitude, latitude, 'o', markersize=6, color='w')
plt.colorbar()
plt.show()
Perhaps you're getting confused about the concept of a 2D histogram, or of a histogram in general. Besides being a bar plot that groups data into bins, a histogram is also a discretized estimate of a probability function: in your case, the presence probability. For this reason, I would not try to overlap bins. Moreover, because the histogram is discrete, it will necessarily be coarse. Actually, the resolution of a histogram is an important parameter for the desired visualization. Going back to your question, if you want to reduce the coarse effect, you may simply want to play with Nbins. Perhaps other graph types would suit your use case better: see this gallery and the 2D density plot with shading.
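For illustration, a minimal sketch of that 2D-density idea, assuming the longitude and latitude arrays from the question; scipy's gaussian_kde gives a smooth estimate, and the 200x200 grid size is an arbitrary choice:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# A smooth alternative to hist2d: evaluate a 2D kernel density estimate on a grid.
kde = gaussian_kde(np.vstack([longitude, latitude]))
lon_grid, lat_grid = np.meshgrid(np.linspace(longitude.min(), longitude.max(), 200),
                                 np.linspace(latitude.min(), latitude.max(), 200))
density = kde(np.vstack([lon_grid.ravel(), lat_grid.ravel()])).reshape(lon_grid.shape)

plt.pcolormesh(lon_grid, lat_grid, density, shading='auto')
plt.plot(longitude, latitude, 'o', markersize=4, color='w')
plt.colorbar()
plt.show()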