Say I have some data in a pandas dataframe that I want to work with.
>>> df = pd.DataFrame([['a',10,5],['a',12,6],['b',4,2],['b',5,10]],
...                   columns=['id','val','val2'])
So the dataframe looks something like this:
>>> df
  id  val  val2
0  a   10     5
1  a   12     6
2  b    4     2
3  b    5    10
What I want to achieve is a dataframe with the id values as column names and val and val2 as row names, where the values are computed as follows:
First, compute the mean of each value column per id, giving something like
id  mean-val  mean-val2
a         11        5.5
b        4.5          6
Then express mean-val and mean-val2 as percentages of their sum per id (e.g. 11 / (11 + 5.5) * 100 = 66.67), giving
id  perc-val  perc-val2
a      66.67      33.33
b      42.86      57.14
The final dataframe shall look like this:
>>> new_df
          a      b
val   66.67  42.86
val2  33.33  57.14
My approach
I'm quite inexperienced with pandas, so it took me a while to come up with an approach, and even that one is unsatisfying.
>>> idx = ['val','val2']
>>> lst = [df.groupby('id')[index].mean() for index in idx]
>>> df_new = pd.DataFrame(
... [[x/y*100 for x, y in zip(lst2,sum(lst))] for lst2 in lst],
... index=idx, columns=df['id'].unique())
This works, but I'm not sure whether it is guaranteed that the columns and rows are named in the right order, or whether it's possible that, e.g., the a column is actually labeled b and vice versa.
So my actual question is if there is a nicer, cleaner, safer and maybe more efficient way of doing this.
Yes, there is.
If you're taking the mean over every column, you don't have to specify the column names.
You can vectorize your division using DataFrame.div (or the / operator, which calls __truediv__)
v = df.groupby('id').mean()
v.T / v.sum(1) * 100               # thanks to @fuglede
# v.div(v.sum(1), axis=0).T * 100  # thanks to @Scott Boston
id            a          b
val   66.666667  42.857143
val2  33.333333  57.142857
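Put together as a runnable snippet (a sketch of the steps above, using the question's data):
import pandas as pd

df = pd.DataFrame([['a', 10, 5], ['a', 12, 6], ['b', 4, 2], ['b', 5, 10]],
                  columns=['id', 'val', 'val2'])

v = df.groupby('id').mean()         # per-id means of val and val2
new_df = v.T / v.sum(axis=1) * 100  # each row's share of the per-id total, in percent
Because both the transpose and the sum carry the id labels, the division aligns on those labels, so the a and b columns cannot end up swapped; that resolves the ordering concern from the question.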
Related
I'd need a little suggestion on a procedure using pandas. I have a 2-column dataset that looks like this:
A 0.4533
B 0.2323
A 1.2343
A 1.2353
B 4.3521
C 3.2113
C 2.1233
.. ...
where the first column contains strings and the second one floats. I would like to save the minimum value for each group of unique strings, so that I have the minimum associated with A, B, and C. Does anybody have any suggestions on that? It would also help me to somehow store all the values associated with each string.
Many thanks,
James
Input data:
>>> df
   0       1
0  A  0.4533
1  B  0.2323
2  A  1.2343
3  A  1.2353
4  B  4.3521
5  C  3.2113
6  C  2.1233
Use groupby followed by min:
out = df.groupby(0).min()
Output result:
>>> out
        1
0
A  0.4533
B  0.2323
C  2.1233
Update:
filter out all the values in the original dataset that are more than 20% different from the minimum
out = df[df[1] <= df.groupby(0)[1].transform('min') * 1.2]
>>> out
   0       1
0  A  0.4533
1  B  0.2323
6  C  2.1233
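The asker also mentioned wanting to somehow store all the values associated with each string; a minimal sketch for that (my addition, not part of the answer above), using agg(list):
>>> df.groupby(0)[1].agg(list)
0
A    [0.4533, 1.2343, 1.2353]
B    [0.2323, 4.3521]
C    [3.2113, 2.1233]
Name: 1, dtype: object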
You can simply do it by
min_A = min(df[df["column_1"] == "A"]["value"])
min_B = min(df[df["column_1"] == "B"]["value"])
min_C = min(df[df["column_1"] == "C"]["value"])
where df is the dataframe, and column_1 and value are the names of its columns.
You can also do it with the built-in pandas function groupby():
>>> df.groupby(["column_1"]).min()
The above gives the same result.
I need a fast way to extract the right values from a pandas dataframe:
Given a dataframe with (a lot of) data in several named columns and an additional column whose values only contain names of the other columns, how do I select values from the data columns with the additional column as keys?
It's simple to do via an explicit loop, but this is extremely slow with something like .iterrows() directly on the DataFrame. Converting to numpy arrays first is faster, but still not fast. Can I combine methods from pandas to do it even faster?
Example: This is the kind of DataFrame structure, where columns A and B contain data and column keys contains the keys to select from:
import pandas
df = pandas.DataFrame(
    {'A': [1, 2, 3, 4],
     'B': [5, 6, 7, 8],
     'keys': ['A', 'B', 'B', 'A']},
)
print(df)
output:
Out[1]:
   A  B keys
0  1  5    A
1  2  6    B
2  3  7    B
3  4  8    A
Now I need some fast code that returns a DataFrame like
Out[2]:
   val_keys
0         1
1         6
2         7
3         4
I was thinking something along the lines of this:
tmp = df.melt(id_vars=['keys'], value_vars=['A','B'])
out = tmp.loc[tmp['keys'] == tmp['variable']]
which produces:
Out[2]:
  keys variable  value
0    A        A      1
3    A        A      4
5    B        B      6
6    B        B      7
but doesn't have the right order or index. So it's not quite a solution.
Any suggestions?
See if either of these work for you
df['val_keys'] = np.where(df['keys'] == 'A', df['A'], df['B'])
or
df['val_keys'] = np.select([df['keys'] == 'A', df['keys'] == 'B'], [df['A'], df['B']])
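With more than two data columns, the same idea generalizes via plain numpy indexing instead of one condition per column (a sketch, assuming every entry in keys names an existing column):
import numpy as np
import pandas as pd

cols = ['A', 'B']                                 # the data columns
col_idx = pd.Index(cols).get_indexer(df['keys'])  # each row's key as a column position
df['val_keys'] = df[cols].to_numpy()[np.arange(len(df)), col_idx]
This stays fully vectorized however many columns are involved, which matters for the speed requirement in the question.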
No need to specify anything for the code below!
def value(row):
    a = row.name       # the row's index label
    b = row['keys']    # the column name stored in 'keys'
    c = df.loc[a, b]   # look up that cell in the dataframe
    return c

df.apply(value, axis=1)
Have you tried filtering then mapping:
df_A = df[df['keys'].isin(['A'])]
df_B = df[df['keys'].isin(['B'])]
A_dict = dict(zip(df_A.index, df_A['A']))  # keyed by row index, since the key values repeat
B_dict = dict(zip(df_B.index, df_B['B']))
df['val_keys'] = df.index.to_series().map(A_dict)
df['val_keys'] = df.index.to_series().map(B_dict).fillna(df['val_keys'])  # non-exhaustive mapping for the second one
Your df['val_keys'] column will now contain the result as in your val_keys output.
If you want you can just retain that column as in your expected output by:
df = df[['val_keys']]
Hope this helps :))
I have a situation where I am creating a pivot table in pandas where it makes more sense to calculate the fields separately and just use .pivot_table() for the pivot step. However, I am running into some difficulty trying to calculate the denominator for my percentages. Essentially, due to the data format I appear to need to do something like "groupby transform unique sum" on the second line below (which is where I am stuck):
df['numerator'] = df.groupby(['category1','category2'])['customer_id'].transform('nunique')
df['denominator'] = df.groupby(['category2'])['numerator'].nunique().transform('sum')
df['percentage'] = (df['numerator'] / df['denominator'])
df_pivot = df.pivot_table(index='category1',
                          columns=['category2'],
                          values=['numerator','percentage']) \
             .swaplevel(0,1,axis=1)
df_pivot.loc['total', :] = df_pivot.sum().values
My apologies for not being able to provide any dummy data, but I would appreciate any tips if I have hopefully provided enough detail to reason about.
I believe you need a lambda function with unique and sum:
df = pd.DataFrame({'numerator':[3,1,1,9,2,2],
                   'category2':list('aaabbb')})
#print (df)
df['denominator'] = df.groupby(['category2'])['numerator'].transform(lambda x: x.unique().sum())
Alternative solution with sets and sums:
df['denominator'] = df.groupby(['category2'])['numerator'].transform(lambda x: sum(set(x)))
print (df)
   numerator category2  denominator
0          3         a            4
1          1         a            4
2          1         a            4
3          9         b           11
4          2         b           11
5          2         b           11
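Dropped back into the asker's pipeline, the line they were stuck on would become something like this (a sketch only; category1, category2 and customer_id are the column names from the question, and no sample data was given):
df['numerator'] = df.groupby(['category1','category2'])['customer_id'].transform('nunique')
df['denominator'] = df.groupby(['category2'])['numerator'].transform(lambda x: x.unique().sum())
df['percentage'] = df['numerator'] / df['denominator']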
If I have a pandas dataframe such as:
timestamp  label  value  new
etc.       a      1      3.5
           b      2      5
           a      5      ...
           b      6      ...
           a      2      ...
           b      4      ...
I want the new column to be the average of the last two a's and the last two b's... so for the first row it would be the average of 5 and 2, giving 3.5. It will be sorted by the timestamp. I know I could use a groupby to get the average of all the a's or all the b's, but I'm not sure how to get an average of just the last two. I'm kinda new to Python and coding, so I'm not sure this is even possible.
Edit: I should also mention this is not for a class or anything; it's just something I'm doing on my own, and it will run on a very large dataset. I'm just using this as an example. Also, I want each a and each b to have its own value for the last-two average, so the new column will have the same length as the others. So for the third line it would be the average of 2 and whatever the next a is in the dataset.
IIUC one way (among many) to do that:
In [139]: df.groupby('label').tail(2).groupby('label').mean().reset_index()
Out[139]:
  label  value
0     a    3.5
1     b    5.0
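If the result needs to line up with the original rows, the same values can be broadcast back in one line with transform (a sketch, not part of the answer above):
df['tail_mean'] = df.groupby('label')['value'].transform(lambda s: s.tail(2).mean())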
Edited to reflect a change in the question specifying the last two, not the ones following the first, and that you wanted the same dimensionality with values repeated.
import pandas as pd
data = {'label': ['a','b','a','b','a','b'], 'value':[1,2,5,6,2,4]}
df = pd.DataFrame(data)
grouped = df.groupby('label')
results = {'label': [], 'tail_mean': []}
for item, grp in grouped:
    subset_mean = grp['value'].tail(2).mean()  # mean of this group's last two values
    results['label'].append(item)
    results['tail_mean'].append(subset_mean)
res_df = pd.DataFrame(results)
df = df.merge(res_df, on='label', how='left')
Outputs:
>>> res_df
  label  tail_mean
0     a        3.5
1     b        5.0
>>> df
  label  value  tail_mean
0     a      1        3.5
1     b      2        5.0
2     a      5        3.5
3     b      6        5.0
4     a      2        3.5
5     b      4        5.0
Now you have a dataframe of your results only, if you need them, plus a column with it merged back into the main dataframe. Someone else posted a more succinct way to get to the results dataframe; probably no reason to do it the longer way I showed here unless you also need to perform more operations like this that you could do inside the same loop.
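Reading the question's edit strictly, each row may instead need the mean of the next two values with the same label, so the third line stays empty until more a's arrive. A sketch of that interpretation (my reading, not part of either answer), using a shifted rolling mean per group:
import pandas as pd

df = pd.DataFrame({'label': ['a','b','a','b','a','b'],
                   'value': [1, 2, 5, 6, 2, 4]})

# mean of the next two same-label values; NaN where fewer than two remain
df['new'] = (df.groupby('label')['value']
               .transform(lambda s: s.shift(-1).rolling(2).mean().shift(-1)))
On the sample data this gives 3.5 for the first a row (the mean of 5 and 2) and 5.0 for the first b row, matching the worked values in the question.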
I have a dataframe with a number of columns, two of which are grouping variables.
>>> df2
   Groupvar1  Groupvar2         x         y         z
0          A          1  0.726317  0.574514  0.700475
1          A          2  0.422089  0.798931  0.191157
2          A          3  0.888318  0.658061  0.686496
....
13         B          2  0.978920  0.764266  0.673941
14         B          3  0.759589  0.162488  0.698958
and I want to make a new dataframe which holds the difference between each datapoint in the original df and the mean corresponding to its subgroup.
So to begin with, I make the new df with the grouped averages:
>>> grp_vars = ['Groupvar1','Groupvar2']
>>> df2_grp = df2.groupby(grp_vars)
>>> df2_grp_avg = df2_grp.mean()
>>> df2_grp_avg
                            x         y         z
Groupvar1 Groupvar2
A         1          0.364533  0.645237  0.886286
          2          0.325533  0.500077  0.246287
          3          0.796326  0.496950  0.510085
          4          0.774854  0.688732  0.487547
B         1          0.743783  0.452482  0.612006
          2          0.575687  0.396902  0.446126
          3          0.473152  0.476379  0.508060
          4          0.434320  0.406458  0.382187
and in the new dataframe I want to keep the deltas, defined as:
delta = individual value - average value of the subgroup this individual is a member of
Now, it's clear to me how to do this the hard way (a for loop), but I suppose there must be a more elegant solution. I'd appreciate any advice on finding that more elegant solution. TIA.
Use the .groupby(...).transform function:
>>> demean = lambda df: df - df.mean()
>>> df.groupby(['Groupvar1', 'Groupvar2']).transform(demean)
and then pd.concat the result with the original dataframe.
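A minimal sketch of that last step, assuming df2 as in the question:
import pandas as pd

demean = lambda df: df - df.mean()
deltas = df2.groupby(['Groupvar1', 'Groupvar2']).transform(demean)
# original columns plus delta_x, delta_y, delta_z
result = pd.concat([df2, deltas.add_prefix('delta_')], axis=1)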