I am trying to build a table with groups divided into subgroups, with a count and an average for each subgroup. For example, I want to convert the following data frame:
To a table that looks like this, where interval is the bigger group and columns a through i become subgroups within it, with each subgroup's count and average in the corresponding cell:
I have tried this with no success:
Try:
df.groupby(['interval']).apply(lambda x: x.stack()
                                          .groupby(level=-1)
                                          .agg(['count', 'mean']))
Use groupby with apply to run a function over each group, then stack and groupby again, aggregating with agg to get the count and mean.
Use DataFrame.melt with GroupBy.agg, passing tuples to the aggregate functions for the new column names:
df1 = (df.melt('interval', var_name='source')
         .groupby(['interval','source'])['value']
         .agg([('cnt','count'), ('average','mean')])
         .reset_index())
print(df1.head())
interval source cnt average
0 0 a 1 5.0
1 0 b 1 0.0
2 0 c 1 0.0
3 0 d 1 0.0
4 0 f 1 0.0
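If you also need the wide layout from the question (one row per interval with a cnt/average pair per source), here is a minimal sketch using DataFrame.pivot on df1; the df_wide name is just illustrative, and recent pandas versions accept a list for values:
# one row per interval, MultiIndex columns pairing cnt/average with each source
df_wide = df1.pivot(index='interval', columns='source', values=['cnt', 'average'])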
The following code solves the problem I asked for:
df.groupby(['interval'], as_index=False).agg({
    'a': ['count', 'mean'],
    'b': ['count', 'mean'],
    'c': ['count', 'mean'],
    'd': ['count', 'mean'],
    'f': ['count', 'mean'],
    'g': ['count', 'mean'],
    'i': ['count', 'mean']
})
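Note that aggregating each column with a list of functions produces MultiIndex columns such as ('a', 'count'). A minimal sketch for flattening them, assuming the result above is assigned to a hypothetical res:
# ('a', 'count') -> 'a_count'; ('interval', '') -> 'interval'
res.columns = ['_'.join(col).rstrip('_') for col in res.columns]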
I would like to collapse my dataset using groupby and agg; however, after collapsing, I want the new column to show a string value for the grouped rows only.
For example, the initial data is:
df = pd.DataFrame([["a",1],["a",2],["b",2]], columns=['category','value'])
category value
0 a 1
1 a 3
2 b 2
Desired output:
category value
0 a grouped
1 b 2
How should I modify my code (to show "grouped" instead of 3):
df=df.groupby(['category'], as_index=False).agg({'value':'max'})
You can use a lambda with a ternary:
df.groupby("category", as_index=False)
.agg({"value": lambda x: "grouped" if len(x) > 1 else x})
This outputs:
category value
0 a grouped
1 b 2
Another possible solution:
import numpy as np

(df.assign(value = np.where(
    df.duplicated(subset=['category'], keep=False), 'grouped', df['value']))
   .drop_duplicates())
Output:
category value
0 a grouped
2 b 2
For the following df:
group participated
A 1
A 1
B 0
A 0
B 1
A 1
B 0
B 0
I want to count the total number of values in the participated column for each value in the group column (a simple groupby count), and then also count how many 1s each group contains.
Something like
group tot_participated 1s
A 4 3
B 4 1
I know the first part is simple and can be done with a simple
grouped_df = df.groupby('group').count().reset_index()
but I'm unable to wrap my head around the second part. Any help will be greatly appreciated!
You could follow the groupby with an aggregation as below:
grp_df = df.groupby('group', as_index=False).agg({'participated':['count','sum']})
grp_df.columns = ['group','tot_participated','1s']
grp_df.head()
The caveat to using .agg with multiple aggregation functions on the same column is that it creates a MultiIndex on the columns. This can be remedied by resetting the column names, as in the second line.
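If you would rather avoid the MultiIndex in the first place, named aggregation (pandas 0.25+) is an alternative; a minimal sketch (the ** unpacking is needed because 1s is not a valid Python keyword):
grp_df = df.groupby('group', as_index=False).agg(
    tot_participated=('participated', 'count'),
    **{'1s': ('participated', 'sum')})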
Here's the code I currently have:
df.groupby(df['LOCAL_COUNCIL']).agg({'CRIME_RATING': ['mean', 'count']}).reset_index()
which returns something like the following (I've made these values up):
CRIME_RATING
mean count
0 3.000000 1
1 3.118397 39
2 2.790698 32
3 5.125000 18
4 4.000000 1
5 4.222222 22
but I'd quite like to exclude indexes 0 and 4 from the resulting dataframe given that they both have a count of 1. Can this be done?
Use Series.ne to filter the rows whose count is not equal to 1, selecting the MultiIndex column with a tuple and filtering with boolean indexing:
df1 = df.groupby(df['LOCAL_COUNCIL']).agg({'CRIME_RATING': ['mean', 'count']}).reset_index()
df2 = df1[df1[('CRIME_RATING','count')].ne(1)]
If you want to avoid the MultiIndex, use named aggregation:
df1 = df.groupby(df['LOCAL_COUNCIL']).agg(mean=('CRIME_RATING','mean'),
                                          count=('CRIME_RATING','count'))
df2 = df1[df1['count'].ne(1)]
I have a pandas dataframe defined as:
A B SUM_C
1 1 10
1 2 20
I would like to do a cumulative sum of SUM_C and add it as a new column to the same dataframe. In other words, my end goal is to have a dataframe that looks like below:
A B SUM_C CUMSUM_C
1 1 10 10
1 2 20 30
Using cumsum in pandas on group() shows how to generate a new dataframe in which the SUM_C column is replaced with the cumulative sum. However, what I want is to add the cumulative sum as a new column to the existing dataframe.
Thank you
Just apply cumsum on the pandas.Series df['SUM_C'] and assign it to a new column:
df['CUMSUM_C'] = df['SUM_C'].cumsum()
Result:
df
Out[34]:
A B SUM_C CUMSUM_C
0 1 1 10 10
1 1 2 20 30
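If you instead want the running total within each group of A rather than across the whole frame, GroupBy.cumsum does the same thing per group; a minimal sketch (in this sample both rows share A=1, so the output happens to match):
df['CUMSUM_C'] = df.groupby('A')['SUM_C'].cumsum()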
I have a large dataframe (from 500k to 1M rows) which contains, for example, these 3 numeric columns: ID, A, B.
I want to filter the results in order to obtain a table like the one in the image below where, for each unique value of the id column, I have the maximum and minimum values of A and B.
How can I do this?
EDIT: I have updated the image below to be clearer: when I take the max or min of a column, I also need the associated data from the other columns.
Sample data (note that you posted an image which can't be used by potential answerers without retyping, so I'm making a simple example in its place):
df = pd.DataFrame({'id': [1,1,1,1,2,2,2,2],
                   'a': range(8), 'b': range(8,0,-1)})
The key to this is just using idxmax and idxmin and then futzing with the indexes so that you can merge things in a readable way. Here's the whole answer; you may wish to examine the intermediate dataframes to see how it works.
df_max = df.groupby('id').idxmax()
df_max['type'] = 'max'
df_min = df.groupby('id').idxmin()
df_min['type'] = 'min'
# DataFrame.append was removed in pandas 2.0, so use pd.concat instead
df2 = pd.concat([df_max, df_min]).set_index('type', append=True).stack().rename('index')
df3 = pd.concat([ df2.reset_index().drop('id',axis=1).set_index('index'),
df.loc[df2.values] ], axis=1 )
df3.set_index(['id','level_2','type']).sort_index()
a b
id level_2 type
1 a max 3 5
min 0 8
b max 0 8
min 3 5
2 a max 7 1
min 4 4
b max 4 4
min 7 1
Note in particular that df2 looks like this:
id type
1 max a 3
b 0
2 max a 7
b 4
1 min a 0
b 3
2 min a 4
b 7
The last column there holds the index values in df that were derived with idxmax & idxmin. So basically all the information you need is in df2. The rest of it is just a matter of merging back with df and making it more readable.
For anyone looking to get min and max values of a specific column where there is a unique ID, this is how I modified the above code:
# to_frame() makes each result a DataFrame, so 'type' is added as a column
df_maxA = df.groupby('id')['A'].max().to_frame()
df_maxA['type'] = 'max'
df_minA = df.groupby('id')['A'].min().to_frame()
df_minA['type'] = 'min'
df_maxB = df.groupby('id')['B'].max().to_frame()
df_maxB['type'] = 'max'
df_minB = df.groupby('id')['B'].min().to_frame()
df_minB['type'] = 'min'
Then you can merge these together to create a single dataframe.
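A minimal sketch of that final step, assuming the frames built above (df_A, df_B, and summary are just illustrative names):
# align the A and B pieces side by side on an (id, type) index
df_A = pd.concat([df_maxA, df_minA]).set_index('type', append=True)
df_B = pd.concat([df_maxB, df_minB]).set_index('type', append=True)
summary = df_A.join(df_B).sort_index()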