Can't scale a split list of values - python

I have one database of shape 253x3, whose values look like the following:
array([[ 75.15000153, 73.79750061, 74.05999756],
[ 75.14499664, 74.125 , 74.28749847],
I'd like to know which row is the 80% row; in my case, that's the 202nd row:
length80 = round(len(database) * .8)
I need to split the database in two; one part contains the first 80% of rows, e.g. rows 0-202:
percent80_data = database[0:length80]
Now, I need to scale the first 80% values:
scaler = MinMaxScaler(feature_range=(0,1))
scaled_percent80_data = scaler.fit_transform(percent80_data)
scaled_percent80_data
The result is what I'm expecting:
array([[0.22292997, 0.26680884, 0.21149309, 0.24325282, 0.15966378]
Later, I need to create another variable, which contains the remaining 20% plus the previous 60 values of the training database, i.e. from row (202-60) to row 253:
percent20_data = database[length80-60:,:]
percent20_data
Now I need to scale the values again:
scaled_percent20_data = scaler.fit_transform(percent20_data)
scaled_percent20_data
and it results in:
array([[8.02431749e-03, 5.65810805e-03, 0.00000000e+00, 3.58560027e-02,
2.27449179e-01],...
Which is quite different from the first scaling, isn't it?
Why is that?
Thanks a lot
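A hedged note on the likely cause: fit_transform re-fits the scaler every time it is called, so the second call learns the min and max of the 20%+60 slice instead of reusing the ones learned on the first 80%, which puts the two outputs on different scales. If the goal is to keep both slices comparable, the usual approach is to fit once on the training slice and only transform the rest, roughly like this (same variable names as above):
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
scaled_percent80_data = scaler.fit_transform(percent80_data)  # fit only on the first 80%
scaled_percent20_data = scaler.transform(percent20_data)      # reuse the same min/max, no re-fit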

Related

Improve performance of 8million iterations over a dataframe and query it

There is a for loop of 8 million iterations. Each iteration takes 2 sample values from a column of a 1-million-record dataframe (say df_original_nodes), queries those 2 samples against another dataframe (say df_original_rel), and, if the pair does not exist, adds it as a new row to the queried dataframe (df_original_rel); finally the dataframe (df_original_rel) is written to a CSV.
This loop takes roughly 24+ hours to complete. How can it be made more performant? I'd be happy if it even took 8 hours, rather than anything over 12.
Here is the piece of code:
for j in range(1, n_8000000):
    ran_num = random.randint(0, 1)
    ran_rel_type = rel_type[ran_num]
    df_ran_rel = df_original_nodes["UID"].sample(2, ignore_index=True)
    FROM = df_ran_rel[0]
    TO = df_ran_rel[1]
    if df_original_rel.query("@FROM == FROM and @TO == TO").empty:
        k += 1
        new_row = {"FROM": FROM, "TO": TO, "TYPE": ran_rel_type[0], "PART_OF": ran_rel_type[1]}
        df_original_rel = df_original_rel.append(new_row, ignore_index=True)
df_original_rel.to_csv("output/extra_rel.csv", encoding="utf-8", index=False)
My assumption is that querying the dataframe df_original_rel is the heavy-lifting part, since df_original_rel also keeps growing as new rows are added.
In my view lists are faster to traverse and maybe to query, but then there would be another layer of conversion from dataframe to lists and vice versa, which could add further complexity.
Some things that should probably help – most of them around "do less Pandas".
Since I don't have your original data or anything like it, I can't test this.
# Grab a regular list of UIDs that we can use with `random.sample`
original_nodes_uid_list = df_original_nodes["UID"].tolist()

# Make a regular set of FROM-TO tuples
rel_from_to_pairs = set(df_original_rel[["FROM", "TO"]].apply(tuple, axis=1).tolist())

# Store new rows here instead of putting them in the dataframe; we'll also update rel_from_to_pairs as we go.
new_rows = []

for j in range(1, 8_000_000):
    # These two lines could probably also be a `random.choice`
    ran_num = random.randint(0, 1)
    ran_rel_type = rel_type[ran_num]

    # Grab a from-to pair from the UID list
    FROM, TO = random.sample(original_nodes_uid_list, 2)

    # If this pair isn't in the set of known pairs...
    if (FROM, TO) not in rel_from_to_pairs:
        # ... prepare a new row to be added later
        new_rows.append({"FROM": FROM, "TO": TO, "TYPE": ran_rel_type[0], "PART_OF": ran_rel_type[1]})
        # ... and since this from-to pair _would_ exist had df_original_rel
        # been updated, update the pairs set.
        rel_from_to_pairs.add((FROM, TO))

# Finally, make a dataframe of the new rows, concatenate it with the old, and output.
df_new_rel = pd.DataFrame(new_rows)
df_original_rel = pd.concat([df_original_rel, df_new_rel], ignore_index=True)
df_original_rel.to_csv("output/extra_rel.csv", encoding="utf-8", index=False)
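A short aside on the design choice, plus the random.choice variant hinted at in the comment above (a sketch, assuming rel_type is the same two-element sequence used above):
ran_rel_type = random.choice(rel_type)  # replaces the randint + index lookup
The set membership test stays O(1) no matter how many pairs accumulate, whereas querying a dataframe that keeps growing gets slower every iteration; likewise, collecting the new rows in a plain list and doing a single pd.concat at the end avoids per-row DataFrame.append, which copies the frame each time and has been deprecated (and later removed) in recent pandas versions.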

Shuffle rows of a large csv

I want to shuffle this dataset to get a random sample. It has 1.6 million rows, but the first rows are class 0 and the last are class 4, so I need to pick samples randomly to have more than one class. The current code prints only class 0 (meaning just one class). I took advice from this platform but it doesn't work.
fid = open("sentiment_train.csv", "r")
li = fid.readlines(16000000)
random.shuffle(li)
fid2 = open("shuffled_train.csv", "w")
fid2.writelines(li)
fid2.close()
fid.close()
sentiment_onefourty_train = pd.read_csv('shuffled_train.csv', header= 0, delimiter=",", usecols=[0,5], nrows=100000)
sentiment_onefourty_train.columns=['target', 'text']
print(sentiment_onefourty_train['target'].value_counts())
Because you read in your data using Pandas, you can also do the randomisation in a different way, using pandas' sample method:
df = pd.read_csv('sentiment_train.csv', header= 0, delimiter=",", usecols=[0,5])
df.columns=['target', 'text']
df1 = df.sample(n=100000)
If this fails, it might be good to check the number of unique values and how frequently they appear. If the first 1,599,999 rows are class 0 and only the last one is class 4, then chances are you won't get any 4s.
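Two hedged notes on top of this. First, readlines(16000000) only reads lines totalling roughly 16 MB, so on a large file only the beginning (all class 0) ever gets shuffled and written out, which would explain the single-class result. Second, if the classes really are that imbalanced, a stratified draw guarantees every class appears; a minimal sketch, assuming pandas 1.1+ (for GroupBy.sample) and an arbitrary 50,000 rows per class:
import pandas as pd

df = pd.read_csv('sentiment_train.csv', header=0, delimiter=",", usecols=[0, 5])
df.columns = ['target', 'text']
# Draw the same number of rows from each class; n must not exceed the smallest class size
df1 = df.groupby('target').sample(n=50000, random_state=42)
print(df1['target'].value_counts())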

Calculating averaged data in and writing to csv from a pandas dataframe

I have a very large spatial dataset stored in a dataframe. I am taking a slice of that dataframe into a new smaller subset to run further calculations.
The data has x, y and z coordinates with a number of additional columns, some of which are text and some are numeric. The x and y coordinates are on a defined grid and have a known separation.
Data looks like this
x,y,z,text1,text2,text3,float1,float2
75000,45000,120,aa,bbb,ii,12,0.2
75000,45000,110,bb,bbb,jj,22,0.9
75000,45100,120,aa,bbb,ii,11,1.8
75000,45100,110,bb,bbb,jj,45,2.4
75000,45100,100,bb,ccc,ii,13.6,1
75100,45000,120,bb,ddd,jj,8.2,2.1
75100,45000,110,bb,ddd,ii,12,0.6
For each x and y pair I want to iterate over two series of text values and do three things in the z direction.
Calculate the average of one numeric value for all the values with a third specific text value
Sum another numeric value for all the values with the same text value
Write a resultant table of 'x, y, average, sum' to a csv.
My code does part three (albeit very slowly) but doesn't calculate 1 or 2, or at least I don't appear to get the average and sum calculations in my output.
What have I done wrong and how can I speed it up?
for text1 in text_list1:
    for text2 in text_list2:
        # Get the data into smaller dataframe
        df = data.loc[ (data["textfield1"] == text1) & (data["textfield2"] == text2 ) ]
        #Get the minimum and maximum x and y
        minXw = df['x'].min()
        maxXw = df['x'].max()
        minYw = df['y'].min()
        maxYw = df['y'].max()
        # dictionary for quicker printing
        dict_out = {}
        rows_list = []
        # Make output filename
        filenameOut = text1+"_"+text2+"_Values.csv"
        # Start looping through x values
        for x in np.arange(minXw, maxXw, x_inc):
            xcount += 1
            # Start looping through y values
            for y in np.arange(minYw, maxYw, y_inc):
                ycount += 1
                # calculate average and sum
                ave_val = df.loc[df['textfield3'] == 'text3', 'float1'].mean()
                sum_val = df.loc[df['textfield3'] == 'text3', 'float2'].sum()
                # Make Dictionary of output values
                dict_out = dict([('text1', text1),
                                 ('text2', text2),
                                 ('text3', df['text3']),
                                 ('x' , x-x_inc),
                                 ('y' , y-y_inc),
                                 ('ave' , ave_val),
                                 ('sum' , sum_val)])
                rows_list_c.append(dict_out)
        # Write csv
        columns = ['text1','text2','text3','x','y','ave','sum']
        with open(filenameOut, 'w') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=columns)
            writer.writeheader()
            for data in dict_out:
                writer.writerow(data)
My resultant csv gives me:
text1,text2,text3,x,y,ave,sum
text1,text2,,74737.5,43887.5,nan,0.0
text1,text2,,74737.5,43912.5,nan,0.0
text1,text2,,74737.5,43937.5,nan,0.0
text1,text2,,74737.5,43962.5,nan,0.0
It's not really clear what you're trying to do, but here is a starting point.
If you only need to process rows with a specific text3 value, start by filtering out the other rows:
df = df[df.text3=="my_value"]
If at this point you do not need text3 anymore, you can also drop it:
df = df.drop(columns="text3")
Then you process several sub dataframes, and write each of them to their own csv file. groupby is the perfect tool for that:
for (text1, text2), sub_df in df.groupby(["text1", "text2"]):
    filenameOut = text1+"_"+text2+"_Values.csv"
    # Process sub df
    output_df = process(sub_df)
    # Write sub df
    output_df.to_csv(filenameOut)
Note that if you keep your data as a DataFrame instead of converting it to a dict, you can use the DataFrame to_csv method to simply write the output csv.
Now let's have a look at the process function (note that you don't really need to make it a separate function; you could just as well put the function body in the for loop).
At this point, if I understand correctly, you want to compute the sum and the average of all rows that have the same x and y coordinates. Here again you can use groupby and the agg function to compute the mean and the sum of each group.
def process(sub_df):
    # drop the text1 and text2 columns since they are in the filename anyway
    out = sub_df.drop(columns=["text1","text2"])
    # Compute mean and sum
    return out.groupby(["x", "y"]).agg(ave=("float1", "mean"), sum=("float2", "sum"))
And that's pretty much it.
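To make the output concrete, here is a hedged worked example: take my_value to be "ii" (a stand-in, since the real filter value isn't given) and run the above on the seven sample rows from the question. The (text1="aa", text2="bbb") group contains two "ii" rows, so aa_bbb_Values.csv would come out roughly as:
x,y,ave,sum
75000,45000,12.0,0.2
75000,45100,11.0,1.8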
Bonus: 2-liner version (but don't do that...)
for (text1, text2), sub_df in df[df.text3=="my_value"].drop(columns="text3").groupby(["text1", "text2"]):
    sub_df.drop(columns=["text1","text2"]).groupby(["x", "y"]).agg(ave=("float1", "mean"), sum=("float2", "sum")).to_csv(text1+"_"+text2+"_Values.csv")
To do this in an efficient way in pandas you will need to use groupby, agg and the in-built to_csv method rather than using for loops to construct lists of data and writing each one with the csv module. Something like this:
groups = data[data["text1"].isin(text_list1) & data["text2"].isin(text_list2)] \
    .groupby(["text1", "text2"])

for (text1, text2), group in groups:
    group.groupby("text3") \
        .agg({"float1": np.mean, "float2": sum}) \
        .to_csv(f"{text1}_{text2}_Values.csv")
It's not clear exactly what you're trying to do with the incrementing of x and y values, which is also what makes your current code very slow. To present sums and averages of the floating point columns by intervals of x and y, you could make bin columns and group by those too.
data["x_bin"] = (data["x"] - data["x"].min()) // x_inc
data["y_bin"] = (data["y"] - data["y"].min()) // y_inc
groups = data[data["text1"].isin(text_list1) & data["text2"].isin(text_list2)] \
    .groupby(["text1", "text2"])

for (text1, text2), group in groups:
    group.groupby(["text3", "x_bin", "y_bin"]) \
        .agg({"x": "first", "y": "first", "float1": np.mean, "float2": sum}) \
        .to_csv(f"{text1}_{text2}_Values.csv")

Cannot group datapoints by cluster

I have a data list where each datapoint has 5 features and a cluster assigned to it.
You can see the beginning of it here, last column is the cluster number:
[[4.01682810e-01 2.14628527e-02 2.99529665e-02 2.79935965e-01 9.21441137e-01 9.00000000e+00]
[9.32087200e-03 3.38196129e-01 8.49571569e-01 3.69402590e-01 1.92096835e-01 1.20000000e+01]
[7.51465196e-01 4.45955645e-01 3.37174838e-01 3.65047097e-01 5.81725084e-01 1.00000000e+00]
I want to create a list of lists of datapoints of the same cluster, so I wrote the following function and tried to execute it:
def returnArrayOfClusters(data, clusterNumbers):
    # create an empty column
    column = []
    # create an empty list we want to output
    listOfClusters = []
    # fill it with a column for each cluster
    for i in clusterNumbers:
        listOfClusters.append(column)
    print(listOfClusters)
    ## fill the columns with points according to their cluster
    for datapoint in data:
        print(datapoint)
        cluster = int(datapoint[5])
        listOfClusters[cluster].append(datapoint)
    return listOfClusters

listOfClusters = returnArrayOfClusters(data_labeled_unfinished, range(0,14))
What I get is an unordered list of datapoints in this format (this is the end of the list), and as you can see the points in the column are from different clusters (they have different last values):
array([ 0.81802695, 0.45533606, 0.33799001, 0.26154893, 0.64155249,
13. ]), array([0.12995366, 0.45586338, 0.85833814, 0.32153188, 0.28736836,
1. ]), array([0.06230581, 0.47400143, 0.78671841, 0.3162376 , 0.04819034,
9. ]), array([0.15291747, 0.54247295, 0.54407916, 0.87888682, 0.46639597,
8. ]), array([ 0.21578994, 0.178303 , 0.80642112, 0.39853499, 0.27832876,
10. ]), array([0.27426491, 0.32986967, 0.49411613, 0.50818875, 0.2336591 ,
5. ])]
Maybe it is a very stupid mistake, but I just cannot spot the error.
What I expect to see, however, is all the points in each list belonging to the same cluster (i.e. having the same value for the 6th item in the output).
Hopefully I understood you correctly; you can split your data using a list comprehension, for example:
from sklearn.cluster import KMeans
import numpy as np
X = np.random.normal(0,1,(100,5))
kmeans = KMeans(n_clusters=8, random_state=0).fit(X)
data = np.concatenate((X,kmeans.labels_.reshape(-1,1)),axis=1)
[data[data[:,5]==i,] for i in np.unique(data[:,5])]
in your case:
[data_labeled_unfinished[data_labeled_unfinished[:,5]==i,] for i in np.unique(data_labeled_unfinished[:,5])]
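For completeness, the likely reason the original function mixes clusters: listOfClusters.append(column) appends the same list object on every iteration, so all 14 entries refer to one shared list and every datapoint ends up visible through each of them. Creating a fresh list per cluster is enough to fix that version as well:
listOfClusters = [[] for _ in clusterNumbers]  # one independent list per cluster number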

Why does pd.as_matrix() change the values and the number of decimal places from the original data frame?

I have a dataframe that consists of two decimal-valued columns (x and y) and an id column (a CSV subset is included in the edit below).
When I apply the as_matrix function to the x and y values, it yields an array that looks like this:
coords = df.as_matrix(columns=['x', 'y'])
coords
yields:
array([[ 0.0703843 , 0.170845 ],
[ 0.07022078, 0.17150128],
[ 0.07208886, 0.17159163],
...,
[ 0.07162819, 0.17044404],
[ 0.06951432, 0.17096308],
[ 0.07104143, 0.17040137]])
This immediately seemed strange, since the number of decimal places was inconsistent, but I just assumed pandas was doing some shortening for display purposes.
But then, when I tried to retrieve the IDs, I could only get one or zero matches when they should all match:
ids = []
for coord in coords:
    try:
        _id = df.loc[df['x'] == coord[0]]['id'][1]
        ids.append(_id)
    except:
        pass
len(ids)
1
What I am trying to understand is why the as_matrix function extracts values from the data frame that cannot be referenced again, and, if so, how to retrieve the ids from the data frame.
Any help here would be appreciated.
Thanks
Edit
Below is a subset of the data frame in CSV:
,id,x,y
0,07379a26-2447-4fce-83ac-4784abf07389,0.07038429591623253,0.17084500318384327
1,f5cc3adb-0588-4705-b1a3-fe1b669b776f,0.07022078416348305,0.17150127781674332
2,b5a57ffe-8565-4443-9685-11675ce25dc4,0.07208886125821728,0.17159163002146055
3,940efcaa-6d9d-4b10-a0fe-d8ec8c1d9c7e,0.07057468050347501,0.1700482708522834
4,616d7794-565a-4d2d-98cb-334beb5b91ef,0.07057895306948389,0.170054305037284
5,e2d1819d-1f58-407d-9950-be0a0c00374b,0.07161607658023798,0.17013089473907284
6,6a739687-f9ad-47bd-8a4b-c47bc4b2aec6,0.070163429153604,0.16889764101717875
7,dd2df646-9a66-4baa-8815-d24f1858eda7,0.07035099968831582,0.16995622800529742
8,6a224d76-efea-4313-803d-c25b619dae0a,0.07066777462044714,0.17021849979554743
9,321147fa-ee51-4bab-9634-199c92a42d2f,0.06984869509314469,0.17098101436534555
10,e52d6289-01ba-4e7d-8054-bb9a349c0505,0.07068704829137691,0.17029718331066224
11,517f256b-6171-4d93-9b4b-0f81aac828fb,0.0713283119291569,0.16983952831019206
12,e339c742-9784-49fc-a435-790db0364229,0.07131341496221469,0.1698513011377732
13,6f20ad5a-22fb-43a2-8885-838e5161df14,0.06942397329210678,0.1716572235671854
14,f6e1008f-2b22-4d88-8c84-c0dc4f2d822e,0.06942427697939664,0.17165098925109726
15,8a2d35e5-10a2-4188-b98f-54200d2db8da,0.07048162129308791,0.16896051533992895
16,adab8fd8-4348-412d-85d2-01491886967b,0.07076495746208027,0.16966622176968035
17,df79523b-848b-45a9-8dab-fe53c2a5b62d,0.06988926585338372,0.17028143287771583
18,db05d97c-3b16-4da8-9659-820fc7e3f858,0.0713167479593096,0.1685149810693375
19,d43963d1-b803-473c-85dc-2ed2e9f77f4e,0.07045583812582461,0.1706502407290604
20,9d99c9a6-2de3-4e6a-9bd7-9d7ece358a2f,0.07044174575566758,0.17066067488910522
21,3eec44be-b9e2-45a2-b919-05028f5a0ba9,0.07079585677115756,0.16920818686920963
22,9f836847-2b67-4b33-930a-1f84452628ba,0.07078522829778934,0.16919781903167638
23,fbaa8958-a5d5-4dfb-91f7-8c11afe226a8,0.07128542860765898,0.16834798505762455
24,a84b59c4-4145-472d-a26a-4c930648c16c,0.07196635776157265,0.17047633495883885
25,29cf8ad3-7068-4207-b0a2-4f0cff337c9f,0.0719701195278871,0.17051442269732875
26,d0f512c8-5c4f-427a-99e1-ebb4c5b363e5,0.0718787509597688,0.17054903897593635
27,74b1db2d-002b-4f89-8d02-ac084e9a3cd5,0.07089130417373782,0.16981103290127117
28,89210a0c-8144-491d-9e98-19e7f4c3085e,0.07076060461092577,0.1707011426749184
29,aebb377e-7c26-4bb5-8563-c3055a027844,0.07103977816965212,0.17113978347674103
30,00b527a0-d40a-44b4-90f9-750fd447d2d7,0.07097785505134419,0.16963542019904118
31,8c186559-f50d-40ca-a821-11596e1e5261,0.06992637446216321,0.17110063865050085
32,0e64cf14-6ccd-4ad0-9715-ab410f6baf6a,0.0718311255786932,0.1705675237580442
33,f5479823-1efe-47b8-9977-73dc41d1d69e,0.07016981880399553,0.1703708437681898
34,385cfa13-2476-4e3d-b755-3063a7f802b9,0.07016550435008462,0.17037054473511137
35,a40bf573-b701-46f0-9a06-5857cf3ab199,0.0701443567773146,0.17035314147536326
36,0c5a9751-2c1b-4003-834d-9584d2f907a2,0.07016050805421256,0.17038992836178396
37,65b09067-9cf0-492d-8a70-13d4f92f8a10,0.07137336818557355,0.1684713798357405
The issue is with the df.loc function on geo-dataframes.
Once I exported it to a csv and then re-read the dataframe using normal pandas, it seemed to work just fine.
Just letting whoever finds this know.
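A few hedged observations that may matter as much as the geo-dataframe angle: as_matrix was deprecated (and later removed) in newer pandas, with df[['x', 'y']].to_numpy() or .values as the replacement; the shortened decimals are only display rounding, so the underlying float64 values are intact; and ['id'][1] selects by index label 1 rather than taking the first match, so most lookups raise a KeyError that the bare except silently swallows. A sketch of a positional lookup with the same column names:
coords = df[['x', 'y']].to_numpy()
ids = []
for coord in coords:
    match = df.loc[df['x'] == coord[0], 'id']
    if not match.empty:
        ids.append(match.iloc[0])  # first matching id by position, not by label
len(ids)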
