Pandas - Grouping rows and averaging per column [duplicate] - python

I have a dataframe like this:
cluster org time
1 a 8
1 a 6
2 h 34
1 c 23
2 d 74
3 w 6
I would like to calculate the average of time per org per cluster.
Expected result:
cluster mean(time)
1 15 #=((8 + 6) / 2 + 23) / 2
2 54 #=(74 + 34) / 2
3 6
I do not know how to do it in Pandas, can anybody help?
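For anyone reproducing the answers below, the sample frame can be reconstructed like this (a minimal sketch based on the table above):
import pandas as pd

df = pd.DataFrame({
    'cluster': [1, 1, 2, 1, 2, 3],
    'org': ['a', 'a', 'h', 'c', 'd', 'w'],
    'time': [8, 6, 34, 23, 74, 6],
})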

If you want to first take mean on the combination of ['cluster', 'org'] and then take mean on cluster groups, you can use:
In [59]: (df.groupby(['cluster', 'org'], as_index=False).mean()
            .groupby('cluster')['time'].mean())
Out[59]:
cluster
1    15
2    54
3     6
Name: time, dtype: int64
If you want the mean of cluster groups only, then you can use:
In [58]: df.groupby(['cluster']).mean()
Out[58]:
              time
cluster
1        12.333333
2        54.000000
3         6.000000
You can also use groupby on ['cluster', 'org'] and then use mean():
In [57]: df.groupby(['cluster', 'org']).mean()
Out[57]:
             time
cluster org
1       a       7
        c      23
2       d      74
        h      34
3       w       6

I would simply do this, which literally follows your desired logic (it works here because each org belongs to exactly one cluster, so averaging cluster within each org leaves the cluster label unchanged):
df.groupby(['org']).mean().groupby(['cluster']).mean()

Another possible solution is to reshape the dataframe using pivot_table() and then take mean(). Note that it is not necessary to pass aggfunc='mean', since that is the default; it averages time by cluster and org.
df.pivot_table(index='org', columns='cluster', values='time', aggfunc='mean').mean()
Another possibility is to use the level parameter of mean() after the first groupby() to aggregate:
df.groupby(['cluster', 'org']).mean().mean(level='cluster')
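Note that the level argument of mean() was deprecated in pandas 1.3 and removed in 2.0, so the line above only runs on older versions. On current pandas the same two-step aggregation can be written with a second groupby on the index level (a sketch, not from the original answers):
df.groupby(['cluster', 'org'])['time'].mean().groupby(level='cluster').mean()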

Related

Finding a mean of a pandas dataframe column depending on another column [duplicate]


Python: For each unique ID, find its code and its value and calculate the ratio

Actual dataframe consist of more than a million rows.
Say for example a dataframe is:
UniqueID Code Value OtherData
1 A 5 Z01
1 B 6 Z02
1 C 7 Z03
2 A 10 Z11
2 B 11 Z24
2 C 12 Z23
3 A 10 Z21
4 B 8 Z10
I want to obtain ratio of A/B for each UniqueID and put it in a new dataframe. For example, for UniqueID 1, its ratio of A/B = 5/6.
What is the most efficient way to do this in Python?
Want:
UniqueID RatioAB
1 5/6
2 10/11
3 Inf
4 0
Thank you.
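For reference, the sample frame can be reconstructed like this (a minimal sketch based on the table above):
import pandas as pd

df = pd.DataFrame({
    'UniqueID': [1, 1, 1, 2, 2, 2, 3, 4],
    'Code': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B'],
    'Value': [5, 6, 7, 10, 11, 12, 10, 8],
    'OtherData': ['Z01', 'Z02', 'Z03', 'Z11', 'Z24', 'Z23', 'Z21', 'Z10'],
})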
One approach is using pivot_table, aggregating with the sum in the case there are multiple occurrences of the same letters (otherwise a simple pivot will do), and evaluating on columns A and B:
df.pivot_table(index='UniqueID', columns='Code', values='Value', aggfunc='sum').eval('A/B')
UniqueID
1 0.833333
2 0.909091
3 NaN
4 NaN
dtype: float64
If there is maximum one occurrence of each letter per group:
df.pivot(index='UniqueID', columns='Code', values='Value').eval('A/B')
UniqueID
1 0.833333
2 0.909091
3 NaN
4 NaN
dtype: float64
If you only care about A/B ratio:
df1 = df[df['Code'].isin(['A', 'B'])][['UniqueID', 'Code', 'Value']]
df1 = df1.pivot(index='UniqueID', columns='Code', values='Value')
df1['RatioAB'] = df1['A'] / df1['B']
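All of the approaches above yield NaN where A or B is missing. If you specifically want inf for a missing B and 0 for a missing A, as in the expected output, one possible post-processing step (a sketch, not from the original answers; 0/0 would still give NaN) is to fill the missing cells with 0 before dividing:
# Wide table with one column per Code; missing combinations become NaN.
wide = df.pivot_table(index='UniqueID', columns='Code', values='Value', aggfunc='sum')
# fillna(0) turns a missing B into division by zero (inf) and a missing A into 0.
ratio = wide['A'].fillna(0) / wide['B'].fillna(0)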
The most apparent way is via groupby. Note that indexing with .iloc[0] would raise an IndexError for groups that are missing code A or B (UniqueID 3 and 4 here), so .mean() is used instead, which yields NaN for those groups:
df.groupby('UniqueID').apply(
    lambda g: g.query("Code == 'A'")['Value'].mean()
              / g.query("Code == 'B'")['Value'].mean())

Getting mean for n-largest values in each group

Let's say I have a data frame named df as below in Pandas:
id x y
1 10 A
2 12 B
3 10 B
4 4 C
5 9 A
6 15 A
7 6 B
Now I would like to group the data by column y and get the mean of the 2 largest values of x in each group, which would look something like this:
y
A (10+15)/2 = 12.5
B (12 + 10)/2 = 11
C 4
If I try with df.groupby('y')['x'].nlargest(2), I get
y  id
A  1     10
   6     15
B  2     12
   3     10
C  4      4
which is of type pandas.core.series.Series. So when I do df.groupby('y')['x'].nlargest(2).mean() I get the mean of all the numbers instead of 3 means, one for each group. At the end I would like to plot the results with the groups on the x axis and the means on the y axis, so I'm guessing I should get rid of the column 'id' as well?
Anyone knows how to solve this one? Thank you for help!
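For reference, the frame can be reconstructed like this (a minimal sketch based on the table above):
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5, 6, 7],
    'x': [10, 12, 10, 4, 9, 15, 6],
    'y': ['A', 'B', 'B', 'C', 'A', 'A', 'B'],
})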
df.groupby('y')['x'].nlargest(2).mean(level=0)
Out:
y
A 12.5
B 11.0
C 4.0
Name: x, dtype: float64
Note that this groups by 'y' twice (mean(level=0) is another groupby, but since it is done on an index it is faster). Depending on the number of groups, groupby.apply may be more efficient, as it requires grouping only once in this particular situation:
df.groupby('y')['x'].apply(lambda ser: ser.nlargest(2).mean())
Out:
y
A 12.5
B 11.0
C 4.0
Name: x, dtype: float64
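Note that mean(level=0) was removed in pandas 2.0. On current versions an equivalent is a second groupby on the index level (a sketch, assuming recent pandas):
result = df.groupby('y')['x'].nlargest(2).groupby(level=0).mean()
For the plotting goal mentioned in the question, the resulting Series can be plotted directly, e.g. result.plot.bar() puts the groups on the x axis and the means on the y axis.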

Merging two rows and averaging the columns of those rows in pandas [duplicate]


calculating differences within groups

I have a DataFrame whose rows provide a value of one feature at one time. Times are identified by the time column (there's about 1000000 distinct times). Features are identified by the feature column (there's a few dozen features). There's at most one row for any combination of feature and time. At each time, only some of the features are available; the only exception is feature 0 which is available at all times. I'd like to add to that DataFrame a column that shows the value of the feature 0 at that time. Is there a reasonably fast way to do it?
For example, let's say I have
df = pd.DataFrame({
    'time': [1, 1, 2, 2, 2, 3, 3],
    'feature': [1, 0, 0, 2, 4, 3, 0],
    'value': [1, 2, 3, 4, 5, 6, 7],
})
I want to add a column that contains [2,2,3,3,3,7,7].
I tried to use groupby and boolean indexing but no luck.
I'd like to add to that DataFrame a column that shows the value of the feature 0 at that time. Is there a reasonably fast way to do it?
I think that a groupby (which is quite an expensive operation) is overkill for this. Try a merge with only the values of the 0 feature (with no on argument, merge joins on the shared time column):
>>> pd.merge(
...     df,
...     df[df.feature == 0].drop('feature', axis=1).rename(columns={'value': 'value_0'}))
   feature  time  value  value_0
0        1     1      1        2
1        0     1      2        2
2        0     2      3        3
3        2     2      4        3
4        4     2      5        3
5        3     3      6        7
6        0     3      7        7
Edit: per @jezrael's request, here is a timing test:
import pandas as pd

m = 10000
df = pd.DataFrame({
    'time': list(range(m // 2)) * 2,
    'feature': list(range(m // 2)) + [0] * (m // 2),
    'value': list(range(m)),
})
On this input, @jezrael's solution takes 396 ms, whereas mine takes 4.03 ms.
If you'd like to drop the zero rows and add them as a separate column (slightly different than your original request), you could do the following:
# Create the initial dataframe.
df = pd.DataFrame({
    'time': [1, 1, 2, 2, 2, 3, 3],
    'feature': [1, 0, 0, 2, 4, 3, 0],
    'value': [1, 2, 3, 4, 5, 6, 7],
})
# Set the index to 'time'.
df = df.set_index('time')
# Join the zero-feature value to the non-zero-feature rows.
>>> df.loc[df.feature > 0, :].join(df.loc[df.feature == 0, 'value'], rsuffix='_feature_0')
      feature  value  value_feature_0
time
1           1      1                2
2           2      4                3
2           4      5                3
3           3      6                7
You can set_index to column value and then use groupby with transform('idxmin').
This solution works only if 0 is the minimum value in column feature.
df = df.set_index('value')
df['diff'] = df.groupby('time')['feature'].transform('idxmin')
print(df.reset_index())
   value  feature  time  diff
0      1        1     1     2
1      2        0     1     2
2      3        0     2     3
3      4        2     2     3
4      5        4     2     3
5      6        3     3     7
6      7        0     3     7
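A further alternative (a sketch, not taken from the answers above) is to build a time -> value lookup from the feature-0 rows of the original df and map it onto the time column:
# Series indexed by time, holding the value of feature 0 at that time.
lookup = df.loc[df['feature'] == 0].set_index('time')['value']
# Broadcast it back onto every row via that row's time.
df['value_0'] = df['time'].map(lookup)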
