Print out pandas groupby without operation - python

So I have the following pandas dataframe:
import pandas as pd
sample_df = pd.DataFrame({'note': ['D','C','D','C'], 'time': [1,1,4,6], 'val': [6,4,7,9]})
which gives the result
note time val
0 D 1 6
1 C 1 4
2 D 4 7
3 C 6 9
What I want is
note index time val
C 1 1 4
3 6 9
D 0 1 6
2 4 7
I tried sample_df.set_index('note', append=True), but that alone doesn't give this layout.

Append 'note' to the index with DataFrame.set_index, then use DataFrame.swaplevel and sort by the first level with DataFrame.sort_index:
df = sample_df.set_index('note', append=True).swaplevel(1,0).sort_index(level=0)
print (df)
time val
note
C 1 1 4
3 6 9
D 0 1 6
2 4 7
If you need to set the new level's name, add DataFrame.rename_axis:
df = (sample_df.rename_axis('idx')
               .set_index('note', append=True)
               .swaplevel(1, 0)
               .sort_index(level=0))
print (df)
time val
note idx
C 1 1 4
3 6 9
D 0 1 6
2 4 7
Alternatively:
sample_df.index.rename('old_index', inplace=True)
sample_df.reset_index(inplace=True)
sample_df.set_index(['note','old_index'], inplace=True)
sample_df.sort_index(level=0, inplace=True)
print (sample_df)
time val
note old_index
C 1 1 4
3 6 9
D 0 1 6
2 4 7

I would use MultiIndex.from_arrays to create the target index:
sample_df.index = pd.MultiIndex.from_arrays([sample_df.note, sample_df.index])
sample_df.drop(columns='note', inplace=True)
sample_df = sample_df.sort_index(level=0)
sample_df
time val
note
C 1 1 4
3 6 9
D 0 1 6
2 4 7

I would use set_index and pop to simultaneously discard the 'note' column and set the new index:
df.set_index([df.pop('note'), df.index]).sort_index(level=0)
Out[380]:
time val
note
C 1 1 4
3 6 9
D 0 1 6
2 4 7

Related

pop rows from dataframe based on conditions

From the dataframe
import pandas as pd
df1 = pd.DataFrame({'A':[1,1,1,1,2,2,2,2],'B':[1,2,3,4,5,6,7,8]})
print(df1)
A B
0 1 1
1 1 2
2 1 3
3 1 4
4 2 5
5 2 6
6 2 7
7 2 8
I want to pop 2 rows where 'A' == 2, preferably in a single statement like
df2 = df1.somepopfunction(...)
to generate the following result:
print(df1)
A B
0 1 1
1 1 2
2 1 3
3 1 4
4 2 7
5 2 8
print(df2)
A B
0 2 5
1 2 6
The pandas pop function sounds promising, but it only pops complete columns.
What statement can replace the pseudocode
df2 = df1.somepopfunction(...)
to generate the desired results?
A pop function that removes rows does not exist in pandas; you need to filter first and then drop the filtered rows from df1:
df2 = df1[df1.A.eq(2)].head(2)
print (df2)
A B
4 2 5
5 2 6
df1 = df1.drop(df2.index)
print (df1)
A B
0 1 1
1 1 2
2 1 3
3 1 4
6 2 7
7 2 8
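If you want this as a single statement like the df1.somepopfunction(...) pseudocode, you can wrap the two steps in a small helper. pop_rows below is a hypothetical name, not a pandas method, and this is only a sketch of the same filter-then-drop idea:
def pop_rows(df, mask, n=None):
    # Select the rows matching `mask` (optionally only the first `n` of them),
    # drop them from `df` in place, and return them reindexed from 0.
    popped = df[mask] if n is None else df[mask].head(n)
    df.drop(popped.index, inplace=True)
    return popped.reset_index(drop=True)

df2 = pop_rows(df1, df1.A.eq(2), n=2)
print (df2)   # the two popped rows, reindexed 0 and 1
print (df1)   # df1 without them; its remaining index labels are kept unless you reset them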

How to pivot one column into multiple columns in a dataframe?

I have a dataframe of type:
import pandas as pd
a = ['a','b','c','a','b','c','a','b','c']
b = [0,1,2,3,4,5,6,7,8]
df = pd.DataFrame({'key':a,'values':b})
key values
0 a 0
1 b 1
2 c 2
3 a 3
4 b 4
5 c 5
6 a 6
7 b 7
8 c 8
I want to move the values in the "values" column to new columns where they have the same "key".
So result:
key values0 values1 values2
0 a 0 3 6
1 b 1 4 7
2 c 2 5 8
From this question How can I pivot a dataframe?
I've tried:
a = df.pivot_table(index='key', values='values', aggfunc=list).squeeze()
pd.DataFrame(a.tolist(),index=a.index)
Which gives
0 1 2
key
a 0 3 6
b 1 4 7
c 2 5 8
But I don't want the index to be 'key', I want the index to stay the same.
You can use reset_index.
a = df.pivot_table(index='key',values='values',aggfunc=list).squeeze()
out = pd.DataFrame(a.tolist(),index=a.index).add_prefix('values').reset_index()
print(out)
# Output
key values0 values1 values2
0 a 0 3 6
1 b 1 4 7
2 c 2 5 8
Another way to do it:
out = (df.pivot_table('values', 'key', df.index // 3)
         .add_prefix('values').reset_index())
print(out)
# Output
key values0 values1 values2
0 a 0 3 6
1 b 1 4 7
2 c 2 5 8
df["id"] = df.groupby("key").cumcount()
df.pivot(columns="id", index="key").reset_index()
# key values
# id 0 1 2
# 0 a 0 3 6
# 1 b 1 4 7
# 2 c 2 5 8
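The pivot in the last snippet leaves a MultiIndex on the columns ('values' paired with each id). If you want flat values0/values1/values2 headers like the other answers, one way (a sketch reusing the df with the id column added above) is to collapse the two levels by hand:
out = df.pivot(columns="id", index="key").reset_index()
# join each ('values', 0/1/2) column pair into values0, values1, values2
out.columns = [f'{a}{b}' if b != '' else a for a, b in out.columns]
print (out)
#   key  values0  values1  values2
# 0   a        0        3        6
# 1   b        1        4        7
# 2   c        2        5        8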

Drop level for index

I have the below result from a pivot table, which is about the count of customer grades that visited my stores. I used the 'droplevel' method to flatten the column header into 1 layer, how can I do the same for the index? I want to remove 'Grade' above the index, so that the column headers are at the same level as 'Store No_'.
It seems you need to remove the columns name:
df.columns.name = None
Or rename_axis:
df = df.rename_axis(None, axis=1)
Sample:
df = pd.DataFrame({'Store No_':[1,2,3],
                   'A':[4,5,6],
                   'B':[7,8,9],
                   'C':[1,3,5],
                   'D':[5,3,6],
                   'E':[7,4,3]})
df = df.set_index('Store No_')
df.columns.name = 'Grade'
print (df)
Grade A B C D E
Store No_
1 4 7 1 5 7
2 5 8 3 3 4
3 6 9 5 6 3
print (df.rename_axis(None, axis=1))
A B C D E
Store No_
1 4 7 1 5 7
2 5 8 3 3 4
3 6 9 5 6 3
df = df.rename_axis(None, axis=1).reset_index()
print (df)
Store No_ A B C D E
0 1 4 7 1 5 7
1 2 5 8 3 3 4
2 3 6 9 5 6 3

Finding the maximum entry based on another column in a data frame

Suppose I have a data frame with 3 columns: A, B, C. I want to group by column A, and find the row (for each unique A) with the maximum entry in C, so that I can store that row.A, row.B, row.C into a dictionary elsewhere.
What's the best way to do this without using iterrows?
# generate sample data
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0,10,(10,3)))
df.columns = ['A','B','C']
# sort by C, group by A, take last row of each group
df.sort_values('C').groupby('A').nth(-1)
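An alternative that avoids sorting is to select the rows at the per-group maximum of C with idxmax (a minimal sketch; if a group has ties on C, the first occurrence wins):
# index labels of the largest C within each A, then look those rows up
df.loc[df.groupby('A')['C'].idxmax()]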
Here's another method. If df is the DataFrame, you can write df.groupby('A').apply(lambda d: d.loc[d['C'].idxmax()]).
For example,
In [96]: df
Out[96]:
A B C
0 1 0 3
1 3 0 4
2 0 4 5
3 2 4 0
4 3 1 1
5 1 6 2
6 3 6 0
7 4 0 1
8 2 3 4
9 0 5 0
10 7 6 5
11 3 1 2
In [97]: g = df.groupby('A').apply(lambda d: d['C'].idxmax())
In [98]: g
Out[98]:
A
0 2
1 0
2 8
3 1
4 7
7 10
dtype: int64
In [99]: df.loc[g.values]
Out[99]:
A B C
2 0 4 5
0 1 0 3
8 2 3 4
1 3 0 4
7 4 0 1
10 7 6 5
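To then store each row's A, B and C in a dictionary keyed by A, as the question describes, a minimal sketch building on the result above:
rows = df.loc[g.values]   # the per-group maximum rows found above
result = {row.A: {'B': row.B, 'C': row.C} for row in rows.itertuples(index=False)}
print (result)
# e.g. {0: {'B': 4, 'C': 5}, 1: {'B': 0, 'C': 3}, 2: {'B': 3, 'C': 4}, ...}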

python pandas groupby() result

I have the following python pandas data frame:
df = pd.DataFrame({
    'A': [1,1,1,1,2,2,2,3,3,4,4,4],
    'B': [5,5,6,7,5,6,6,7,7,6,7,7],
    'C': [1,1,1,1,1,1,1,1,1,1,1,1]
})
df
A B C
0 1 5 1
1 1 5 1
2 1 6 1
3 1 7 1
4 2 5 1
5 2 6 1
6 2 6 1
7 3 7 1
8 3 7 1
9 4 6 1
10 4 7 1
11 4 7 1
I would like to have another column storing a value of a sum over C values for fixed (both) A and B. That is, something like:
A B C D
0 1 5 1 2
1 1 5 1 2
2 1 6 1 1
3 1 7 1 1
4 2 5 1 1
5 2 6 1 2
6 2 6 1 2
7 3 7 1 2
8 3 7 1 2
9 4 6 1 1
10 4 7 1 2
11 4 7 1 2
I have tried with pandas groupby and it kind of works:
res = {}
for a, group_by_A in df.groupby('A'):
    group_by_B = group_by_A.groupby('B', as_index=False)
    res[a] = group_by_B['C'].sum()
but I don't know how to get the results from res back into df in an orderly fashion. Would be very happy with any advice on this. Thank you.
Here's one way (though it feels this should work in one go with an apply, I can't get it).
In [11]: g = df.groupby(['A', 'B'])
In [12]: df1 = df.set_index(['A', 'B'])
The size groupby function is the one you want; we have to match it to 'A' and 'B' as the index:
In [13]: df1['D'] = g.size() # unfortunately this doesn't play nice with as_index=False
# Same would work with g['C'].sum()
In [14]: df1.reset_index()
Out[14]:
A B C D
0 1 5 1 2
1 1 5 1 2
2 1 6 1 1
3 1 7 1 1
4 2 5 1 1
5 2 6 1 2
6 2 6 1 2
7 3 7 1 2
8 3 7 1 2
9 4 6 1 1
10 4 7 1 2
11 4 7 1 2
You could also do a one-liner using transform applied to the groupby:
df['D'] = df.groupby(['A','B'])['C'].transform('sum')
You could also do a one-liner using merge as follows:
df = df.merge(pd.DataFrame({'D':df.groupby(['A', 'B'])['C'].size()}), left_on=['A', 'B'], right_index=True)
You can use this method:
columns = ['col1','col2',...]
df.groupby('col')[columns].sum()
If you want, you can also chain .sort_values(by='colx', ascending=True/False) after .sum() to sort the final output by a specific column (colx) in ascending or descending order.
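Applied to the frame in this question, that pattern would look something like this (a sketch; here the grouping columns are A and B and the summed column is C):
out = df.groupby(['A', 'B'])[['C']].sum().sort_values(by='C', ascending=False)
print (out)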
