I have a dataframe with multiple group columns and a value column.
a b val
0 A C 1
1 A D 1
2 A D 1
3 A D 2
4 B E 0
For any one group, e.g. a == 'A', b == 'C', I can do value_counts on the series slice. How can I get the value counts for all possible combinations of the group columns, in a dataframe format similar to:
a b val counts
0 A C 1 1
1 A D 1 2
2 A D 2 1
3 B E 0 1
Is this what you want?
In [47]: df.groupby(['a','b','val']).size().reset_index()
Out[47]:
a b val 0
0 A C 1 1
1 A D 1 2
2 A D 2 1
3 B E 0 1
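Note that reset_index on a Series accepts a name argument, so the unnamed 0 column can be named in the same step:
In [48]: df.groupby(['a','b','val']).size().reset_index(name='counts')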
Or this?
In [43]: df['counts'] = df.groupby(['a','b'])['val'].transform('size')
In [44]: df
Out[44]:
a b val counts
0 A C 1 1
1 A D 1 3
2 A D 1 3
3 A D 2 3
4 B E 0 1
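As a side note on the first variant: since pandas 1.1.0 there is also DataFrame.value_counts, which produces the same counts in one call (it sorts by count descending by default, so pass sort=False to keep the groupby order):
df.value_counts(['a', 'b', 'val'], sort=False).reset_index(name='counts')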
Consider the following DataFrame:
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [list('abc'), list('def'), list('ghi')]})
>>> df
A B
0 1 [a, b, c]
1 2 [d, e, f]
2 3 [g, h, i]
This is one way to get the desired result:
>>> df['B'] = df['B'].apply(enumerate).apply(list)
>>> df = df.explode('B', ignore_index=True)
>>> df[['B1', 'B2']] = pd.DataFrame(df['B'].tolist(), index=df.index)
>>> df.drop(columns='B')
A B1 B2
0 1 0 a
1 1 1 b
2 1 2 c
3 2 0 d
4 2 1 e
5 2 2 f
6 3 0 g
7 3 1 h
8 3 2 i
Is there a neater way?
A groupby on the index is an option:
df.explode('B').assign(
    B1=lambda df: df.groupby(level=0).cumcount())
A B B1
0 1 a 0
0 1 b 1
0 1 c 2
1 2 d 0
1 2 e 1
1 2 f 2
2 3 g 0
2 3 h 1
2 3 i 2
You can always reset the index if you have no use for it:
df.explode('B').assign(
    B1=lambda df: df.groupby(level=0).cumcount()).reset_index(drop=True)
A B B1
0 1 a 0
1 1 b 1
2 1 c 2
3 2 d 0
4 2 e 1
5 2 f 2
6 3 g 0
7 3 h 1
8 3 i 2
Since pandas version 1.3.0 you can use multiple columns with explode out of the box:
df.assign(
    B1=df.B.apply(len).apply(range)).explode(['B', 'B1'], ignore_index=True)
A B B1
0 1 a 0
1 1 b 1
2 1 c 2
3 2 d 0
4 2 e 1
5 2 f 2
6 3 g 0
7 3 h 1
8 3 i 2
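Since B already holds lists, the lengths can also be taken with the elementwise .str.len() accessor; the same idea, slightly shorter:
df.assign(B1=df.B.str.len().map(range)).explode(['B', 'B1'], ignore_index=True)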
I think a faster option would be to run the reshaping outside pandas and then join back to the dataframe (of course, only benchmarks can confirm or deny this):
from itertools import chain

# you could use np.concatenate instead:
# np.concatenate(df.B)
flattened = chain.from_iterable(df.B)
index = df.index.repeat([*map(len, df.B)])
flattened = pd.Series(flattened, index, name='B1')

(pd.concat([df.A, flattened], axis=1)
   .assign(B2=lambda df: df.groupby(level=0).cumcount())
)
A B1 B2
0 1 a 0
0 1 b 1
0 1 c 2
1 2 d 0
1 2 e 1
1 2 f 2
2 3 g 0
2 3 h 1
2 3 i 2
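To actually settle the speed question, a rough benchmark along these lines can be run (the frame size here is made up for illustration):
import pandas as pd
from itertools import chain

# a larger, made-up frame for timing
big = pd.DataFrame({'A': range(100_000), 'B': [list('abc')] * 100_000})

def with_explode(df):
    return df.explode('B').assign(B1=lambda d: d.groupby(level=0).cumcount())

def outside_pandas(df):
    index = df.index.repeat([*map(len, df.B)])
    flat = pd.Series(chain.from_iterable(df.B), index, name='B1')
    return (pd.concat([df.A, flat], axis=1)
              .assign(B2=lambda d: d.groupby(level=0).cumcount()))

# in IPython: %timeit with_explode(big)
#             %timeit outside_pandas(big)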
The df I have:
A B C
a 1 2 3
b 2 1 4
c 1 1 1
The df I want:
A B C
a 1 2 3
b 2 1 4
c 1 1 1
d 1 -1 1
I am able to get the df I want by using:
df.loc['d'] = df.loc['b'] - df.loc['a']
However, my actual df has 'a','b','c' rows for multiple IDs 'X', 'Y' etc.
A B C
X a 1 2 3
b 2 1 4
c 1 1 1
Y a 1 2 3
b 2 2 4
c 1 1 1
How can I create the same output with multiple IDs?
My original method:
df.loc['d'] = df.loc['b'] - df.loc['a']
fails with KeyError: 'b'.
Desired output:
A B C
X a 1 2 3
b 2 1 4
c 1 1 1
d 1 -1 1
Y a 1 2 3
b 2 2 4
c 1 1 1
d 1 0 1
IIUC,
for i, sub in df.groupby(df.index.get_level_values(0)):
    df.loc[(i, 'd'), :] = sub.loc[(i, 'b')] - sub.loc[(i, 'a')]

print(df.sort_index())
Or maybe
k = df.groupby(df.index.get_level_values(0), as_index=False).apply(
    lambda s: pd.DataFrame([s.loc[(s.name, 'b')].values - s.loc[(s.name, 'a')].values],
                           columns=s.columns,
                           index=pd.MultiIndex(levels=[[s.name], ['d']], codes=[[0], [0]]))
).reset_index(drop=True, level=0)
pd.concat([k, df]).sort_index()
Data reshaping is a useful trick if you want to manipulate a particular level of a MultiIndex. See the code below:
result = (df.unstack(0).T
            .assign(d=lambda x: x.b - x.a)
            .stack()
            .unstack(0))
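Unpacking the chain step by step (a sketch against the sample df, with the shape after each step noted):
tmp = df.unstack(0)                      # index: a/b/c, columns: (A, X), (A, Y), ...
tmp = tmp.T                              # index: (A, X), ..., columns: a, b, c
tmp = tmp.assign(d=lambda x: x.b - x.a)  # add d = b - a as a plain column
result = tmp.stack().unstack(0)          # back to the (ID, row) x A/B/C layout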
Use pd.IndexSlice to slice a and b. Call diff, slice on b, and rename it to d. Finally, concat it back onto the original df:
idx = pd.IndexSlice
df1 = df.loc[idx[:, ['a', 'b']], :].diff().loc[idx[:, 'b'], :].rename({'b': 'd'})
# df.append was removed in pandas 2.0; pd.concat does the same job
df2 = pd.concat([df, df1]).sort_index().astype(int)
Out[106]:
A B C
X a 1 2 3
b 2 1 4
c 1 1 1
d 1 -1 1
Y a 1 2 3
b 2 2 4
c 1 1 1
d 1 0 1
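A related option (my own sketch, not from the answers above) is to take the a and b cross-sections with xs, subtract, and concat the difference back in under a new d label:
d = df.xs('b', level=1) - df.xs('a', level=1)
d.index = pd.MultiIndex.from_product([d.index, ['d']])
out = pd.concat([df, d]).sort_index()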
My goal here is to concat() alternate groups between two dataframes.
Desired result:
group ordercode quantity
0 A 1
B 1
C 1
D 1
0 A 1
B 3
1 A 1
B 2
C 1
1 A 1
B 1
C 2
My dataframes:
import pandas as pd
df1=pd.DataFrame([[0,"A",1],[0,"B",1],[0,"C",1],[0,"D",1],[1,"A",1],[1,"B",2],[1,"C",1]],columns=["group","ordercode","quantity"])
df2=pd.DataFrame([[0,"A",1],[0,"B",3],[1,"A",1],[1,"B",1],[1,"C",2]],columns=["group","ordercode","quantity"])
print(df1)
print(df2)
I have used dfff = pd.concat([df1, df2]).sort_index(kind="merge"),
but I got the result below:
group ordercode quantity
0 0 A 1
0 0 A 1
1 B 1
1 B 3
2 C 1
3 D 1
4 1 A 1
4 1 A 1
5 B 2
5 B 1
6 C 1
6 C 2
You can see that the concatenation is interleaved row by row, not group by group. It should print group 0 of df1, then group 0 of df2, then group 1 of df1, then group 1 of df2, and so on.
Note:
I have created these DataFrames using the groupby() function (note: as_matrix() was removed from pandas; to_numpy() replaces it):
import numpy as np

df = pd.DataFrame(np.concatenate(df.apply(lambda x: [x[0]] * x[1], 1).to_numpy()),
                  columns=['ordercode'])
df['quantity'] = 1
df['group'] = sorted(list(range(0, len(df)//3, 1)) * 4)[0:len(df)]
df = df.groupby(['group', 'ordercode']).sum()
Question:
Where did I go wrong? The concat is sorting by the index. I tried .set_index("group"), but it didn't work either.
Use cumcount to build a helper column, then sort by it with sort_values:
df1['g'] = df1.groupby('ordercode').cumcount()
df2['g'] = df2.groupby('ordercode').cumcount()
dfff = pd.concat([df1,df2]).sort_values(['group','g']).reset_index(drop=True)
print (dfff)
group ordercode quantity g
0 0 A 1 0
1 0 B 1 0
2 0 C 1 0
3 0 D 1 0
4 0 A 1 0
5 0 B 3 0
6 1 C 2 0
7 1 A 1 1
8 1 B 2 1
9 1 C 1 1
10 1 A 1 1
11 1 B 1 1
And finally, remove the helper column:
dfff = dfff.drop('g', axis=1)
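An alternative worth mentioning (a sketch, not part of the answer above): tag each frame with keys= in concat, then stable-sort on group plus source, so the original frames stay untouched:
out = (pd.concat([df1, df2], keys=[0, 1])
         .reset_index(level=0)                 # the key level becomes column 'level_0'
         .rename(columns={'level_0': 'src'})
         .sort_values(['group', 'src'], kind='stable')
         .drop(columns='src')
         .reset_index(drop=True))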
The pandas dataframe includes two columns, 'A' and 'B':
A B
1 a b
2 a c d
3 x
Each value in column 'B' is a string containing a variable number of letters separated by spaces.
Is there a simple way to construct:
A B
1 a
1 b
2 a
2 c
2 d
3 x
You can use the following:
splitted = df.set_index("A")["B"].str.split(expand=True)
stacked = splitted.stack().reset_index(1, drop=True)
result = stacked.to_frame("B").reset_index()
print(result)
A B
0 1 a
1 1 b
2 2 a
3 2 c
4 2 d
5 3 x
For the sub steps, see below:
print(splitted)
0 1 2
A
1 a b None
2 a c d
3 x None None
print(stacked)
A
1 a
1 b
2 a
2 c
2 d
3 x
dtype: object
Or you may also use pd.melt:
splitted = df["B"].str.split(expand=True)
pd.melt(splitted.assign(A=df.A), id_vars="A", value_name="B")\
.dropna()\
.drop("variable", axis=1)\
.sort_values("A")
A B
0 1 a
3 1 b
1 2 a
4 2 c
7 2 d
2 3 x
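On a modern pandas (0.25+ for explode, 1.1+ for ignore_index), str.split plus explode is arguably the shortest route to the same result:
result = (df.assign(B=df['B'].str.split())
            .explode('B', ignore_index=True))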
I want to create a new column holding the name of the column with the max value in each row.
A tie should include both column names.
A B C D
TRDNumber
ALB2008081610 3 1 1 1
ALB200808167 1 3 4 1
ALB200808168 3 1 3 1
ALB200808171 2 2 5 1
ALB2008081710 1 2 2 5
Desired output:
A B C D Best
TRDNumber
ALB2008081610 3 1 1 1 A
ALB200808167 1 3 4 1 C
ALB200808168 3 1 3 1 A,C
ALB200808171 2 2 5 1 C
ALB2008081710 1 2 2 5 D
I have tried the following code:
df.groupby(['TRDNumber'])[cols].max()
You can do:
>>> f = lambda r: ','.join(df.columns[r])
>>> df.eq(df.max(axis=1), axis=0).apply(f, axis=1)
TRDNumber
ALB2008081610 A
ALB200808167 C
ALB200808168 A,C
ALB200808171 C
ALB2008081710 D
dtype: object
>>> df['best'] = _
>>> df
A B C D best
TRDNumber
ALB2008081610 3 1 1 1 A
ALB200808167 1 3 4 1 C
ALB200808168 3 1 3 1 A,C
ALB200808171 2 2 5 1 C
ALB2008081710 1 2 2 5 D
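A loop-free variant of the same idea (a sketch): matrix-multiply the boolean mask by the column names and strip the trailing separator. Starting from the original four columns:
cols = ['A', 'B', 'C', 'D']
mask = df[cols].eq(df[cols].max(axis=1), axis=0)
df['best'] = mask.dot(pd.Index(cols) + ',').str.rstrip(',')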