Pandas: Groupby, concatenate one column and identify the row with maximums - python

I have a datframe like this:
prefix input_text target_text score
X V A 1
X V B 2
X W C 1
X W B 3
I want to group them by some columns and concatenate the column target_text, meanwhile get the maximum of score in each group and identify the target_text with highest score, like this:
prefix input_text target_text score top
X V A, B 2 B
X W C, B 3 B
This is my code which does the concatenation, however I just don't know about the rest.
df['target_text'] = df[['prefix', 'target_text','input_text']].groupby(['input_text','prefix'])['target_text'].transform(lambda x: '<br />'.join(x))
df = df.drop_duplicates(subset=['prefix','input_text','target_text'])
In concatenation I use html code to concat them, if I could bold the target with highest score, then it would be nice.

Let us try
df.sort_values('score',ascending=False).\
drop_duplicates(['prefix','input_text']).\
rename(columns={'target_text':'top'}).\
merge(df.groupby(['prefix','input_text'],as_index=False)['target_text'].agg(','.join))
Out[259]:
prefix input_text top score target_text
0 X W B 3 C,B
1 X V B 2 A,B

groupby agg would be useful here:
new_df = (
df.groupby(['prefix', 'input_text'], as_index=False).agg(
target_text=('target_text', ', '.join),
score=('score', 'max'),
top=('score', 'idxmax')
)
)
new_df['top'] = df.loc[new_df['top'], 'target_text'].values
new_df:
prefix input_text target_text score top
0 X V A, B 2 B
1 X W C, B 3 B
Aggregations are as follows:
target_text is joined together using ', '.join.
score is aggregated to only keep the max value with `'max'
top is the idxmax of the score column.
new_df = (
df.groupby(['prefix', 'input_text'], as_index=False).agg(
target_text=('target_text', ', '.join),
score=('score', 'max'),
top=('score', 'idxmax')
)
)
prefix input_text target_text score top
0 X V A, B 2 1
1 X W C, B 3 3
The values in top are the corresponding indexes from df:
prefix input_text target_text score
0 X V A 1
1 X V B 2 # index 1
2 X W C 1
3 X W B 3 # index 3
These values need to be "looked up" from df:
df.loc[new_df['top'], 'target_text']
1 B
3 B
Name: target_text, dtype: object
And assigned back to new_df. values is needed to break the index alignment.

try via sort_values(), groupby() and agg():
out=(df.sort_values('score')
.groupby(['prefix', 'input_text'], as_index=False)
.agg(target_text=('target_text', ', '.join), score=('score', 'max'), top=('target_text', 'last')))
output of out:
input_text prefix score target_text top
0 V X 2 A, B B
1 W X 3 C, B B
Explaination:
we are sorting values of 'score' and then grouping by column 'input_text' and 'prefix' and aggregrating values that are as follows:
we are joining together the values of 'target_text' by ', '
we are getting only max value of 'score column' bcz we are aggregrating max
we are getting last value of 'target_text' column since we sorted previously so now we are aggregrating last on it
Update:
If you have many more columns to include then you can aggregrate them if they are not in high in number otherwise:
newdf=df.sort_values('score',ascending=False).drop_duplicates(['prefix','input_text'],ignore_index=True)
#Finally join them
out=out.join(newdf[list of column names that you want])
#For example:
#out=out.join(newdf[['target_first','target_last]])

Related

Recursive groupby with quantiles

I have a dataframe of floats
a b c d e
0 0.085649 0.236811 0.801274 0.582162 0.094129
1 0.433127 0.479051 0.159739 0.734577 0.113672
2 0.391228 0.516740 0.430628 0.586799 0.737838
3 0.956267 0.284201 0.648547 0.696216 0.292721
4 0.001490 0.973460 0.298401 0.313986 0.891711
5 0.585163 0.471310 0.773277 0.030346 0.706965
6 0.374244 0.090853 0.660500 0.931464 0.207191
7 0.630090 0.298163 0.741757 0.722165 0.218715
I can divide it into quantiles for a single column like so:
def groupby_quantiles(df, column, groups: int):
quantiles = df[column].quantile(np.linspace(0, 1, groups + 1))
bins = pd.cut(df[column], quantiles, include_lowest=True)
return df.groupby(bins)
>>> df.pipe(groupby_quantiles, "a", 2).apply(lambda x: print(x))
a b c d e
0 0.085649 0.236811 0.801274 0.582162 0.094129
2 0.391228 0.516740 0.430628 0.586799 0.737838
4 0.001490 0.973460 0.298401 0.313986 0.891711
6 0.374244 0.090853 0.660500 0.931464 0.207191
a b c d e
1 0.433127 0.479051 0.159739 0.734577 0.113672
3 0.956267 0.284201 0.648547 0.696216 0.292721
5 0.585163 0.471310 0.773277 0.030346 0.706965
7 0.630090 0.298163 0.741757 0.722165 0.218715
Now, I want to repeat the same operation on each of the groups for the next column. The code becomes ridiculous
>>> (
df
.pipe(groupby_quantiles, "a", 2)
.apply(
lambda df_group: (
df_group
.pipe(groupby_quantiles, "b", 2)
.apply(lambda x: print(x))
)
)
)
a b c d e
0 0.085649 0.236811 0.801274 0.582162 0.094129
6 0.374244 0.090853 0.660500 0.931464 0.207191
a b c d e
2 0.391228 0.51674 0.430628 0.586799 0.737838
4 0.001490 0.97346 0.298401 0.313986 0.891711
a b c d e
3 0.956267 0.284201 0.648547 0.696216 0.292721
7 0.630090 0.298163 0.741757 0.722165 0.218715
a b c d e
1 0.433127 0.479051 0.159739 0.734577 0.113672
5 0.585163 0.471310 0.773277 0.030346 0.706965
My goal is to repeat this operation for as many columns as I want, then aggregate the groups at the end. Here's how the final function could look like and the desired result assuming to aggregate with the mean.
>>> groupby_quantiles(df, columns=["a", "b"], groups=[2, 2], agg="mean")
a b c d e
0 0.229947 0.163832 0.730887 0.756813 0.150660
1 0.196359 0.745100 0.364515 0.450392 0.814774
2 0.793179 0.291182 0.695152 0.709190 0.255718
3 0.509145 0.475180 0.466508 0.382462 0.410319
Any ideas on how to achieve this?
Here is a way. First using quantile then cut can be rewrite with qcut. Then using recursive operation similar to this.
def groupby_quantiles(df, cols, grs, agg_func):
# to store all the results
_dfs = []
# recursive function
def recurse(_df, depth):
col = cols[depth]
gr = grs[depth]
# iterate over the groups per quantile
for _, _dfgr in _df.groupby(pd.qcut(_df[col], gr)):
if depth != -1: recurse(_dfgr, depth+1) #recursive if not at the last column
else: _dfs.append(_dfgr.agg(agg_func)) #else perform the aggregate
# using negative depth is easier to acces the right column and quantile
depth = -len(cols)
recurse(df, depth) # starts the recursion
return pd.concat(_dfs, axis=1).T # concat the results and transpose
print(groupby_quantiles(df, cols = ['a','b'], grs = [2,2], agg_func='mean'))
# a b c d e
# 0 0.229946 0.163832 0.730887 0.756813 0.150660
# 1 0.196359 0.745100 0.364515 0.450392 0.814774
# 2 0.793179 0.291182 0.695152 0.709190 0.255718
# 3 0.509145 0.475181 0.466508 0.382462 0.410318

Is there a way to print a string into a new df column based off of the other values of two other columns?

My desired output would be the highlighted column
You can use ord to convert the letters to numbers, range to generate a range of all the numbers in between, and chr to convert the numbers back to letters. Then use df.apply() with a lambda function and axis=1 to do this for every row:
df['Final'] = df.apply(lambda x: ' '.join(chr(c) for c in range(ord(x['Letter start']), ord(x['Letter finish']) + 1)), axis=1)
Output:
>>> df
Letter start Letter finish Final
0 A D A B C D
1 C E C D E
2 K M K L M

Groupby and keep rows depending on string value

I have this DF:
In [106]: dfTest = pd.DataFrame( {'name':['a','a','b','b'], 'value':['x','y','x','h']})
In [107]: dfTest
Out[107]:
name value
0 a x
1 a y
2 b x
3 b h
So my intention is to obtain one row per name group and the value to keep will depend. If for each group of name I find h in value, I'd like to keep it. Otherwise, any value would fit, such as:
In [109]: dfTest
Out[109]:
name value
0 a x
1 b h
You can do it this way:
dfTest.reindex(dfTest.groupby('name')['value'].agg(lambda x: (x=='h').idxmax()))
Output:
name value
value
0 a x
3 b h
Another approach with drop_duplicates:
(dfTest.loc[dfTest['value'].eq('h').sort_values().index]
.drop_duplicates('name', keep='last')
)
Output:
name value
1 a y
3 b h

Pandas: Sort before aggregate within a group

I have the following Pandas dataframe:
A B C
A A Test1
A A Test2
A A XYZ
A B BA
A B AB
B A AA
I want to group this dataset twice: First by A and B to concate the group within C and afterwards only on A to get the groups defined solely by column A. The result looks like this:
A A Test1,Test2,XYZ
A B AB, BA
B A AA
And the final result should be:
A A,A:(Test1,Test2,XYZ), A,B:(AB, BA)
B B,A:(AA)
Concatenating itself works, however the sorting does not seem work.
Can anyone help me with this problem?
Kind regards.
Using groupby + join
s1=df.groupby(['A','B']).C.apply(','.join)
s1
Out[421]:
A B
A A Test1,Test2,XYZ
B BA,AB
B A AA
Name: C, dtype: object
s1.reset_index().groupby('A').apply(lambda x : x.set_index(['A','B'])['C'].to_dict())
Out[420]:
A
A {('A', 'A'): 'Test1,Test2,XYZ', ('A', 'B'): 'B...
B {('B', 'A'): 'AA'}
dtype: object
First sort_values by 3 columns, then groupby with join first, then join A with B columns and last groupby for dictionary per groups:
df1 = df.sort_values(['A','B','C']).groupby(['A','B'])['C'].apply(','.join).reset_index()
#if only 3 columns DataFrame
#df1 = df.sort_values().groupby(['A','B'])['C'].apply(','.join).reset_index()
df1['D'] = df1['A'] + ',' + df1['B']
print (df1)
A B C D
0 A A Test1,Test2,XYZ A,A
1 A B AB,BA A,B
2 B A AA B,A
s = df1.groupby('A').apply(lambda x: dict(zip(x['D'], x['C']))).reset_index(name='val')
print (s)
A val
0 A {'A,A': 'Test1,Test2,XYZ', 'A,B': 'AB,BA'}
1 B {'B,A': 'AA'}
If need tuples only change first part of code:
df1 = df.sort_values(['A','B','C']).groupby(['A','B'])['C'].apply(tuple).reset_index()
df1['D'] = df1['A'] + ',' + df1['B']
print (df1)
A B C D
0 A A (Test1, Test2, XYZ) A,A
1 A B (AB, BA) A,B
2 B A (AA,) B,A
s = df1.groupby('A').apply(lambda x: dict(zip(x['D'], x['C']))).reset_index(name='val')
print (s)
A val
0 A {'A,A': ('Test1', 'Test2', 'XYZ'), 'A,B': ('AB...
1 B {'B,A': ('AA',)}

How to merge strings pandas df

I am trying merge specific strings in a pandas df. The df below is just an example. The values in my df will differ but the basic rules will apply. I basically want to merge all rows until there's a 4 letter string.
Whilst the 4 letter string in this df is always Excl, my df will contain numerous 4 letter strings.
import pandas as pd
d = ({
'A' : ['Include','Inclu','Incl','Inc'],
'B' : ['Excl','de','ude','l'],
'C' : ['X','Excl','Excl','ude'],
'D' : ['','Y','ABC','Excl'],
})
df = pd.DataFrame(data=d)
Out:
A B C D
0 Include Excl X
1 Inclu de Excl Y
2 Incl ude Excl ABC
3 Inc l ude Excl
Intended Output:
A B C D
0 Include Excl X
1 Include Excl Y
2 Include Excl ABC
3 Include Excl
So row 0 stays the same as col B has 4 letters. Row 1 merges Col A,B as Col C 4 letters. Row 2 stays the same as above. Row 3 merges Col A,B,C as Col D has 4 letters.
I have tried to do this manually by merging all columns and then go back and removing unwanted values.
df["Com"] = df["A"].map(str) + df["B"] + df["C"]
But I would have to manually go through each row and remove different lengths of letters.
The above df is just an example. The central similarity is I need to merge everything before the 4 letter string.
You could do something like
mask = (df.iloc[:, 1:].applymap(len) == 4).cumsum(1) == 0
df.A = df.A + df.iloc[:, 1:][mask].apply(lambda x: x.str.cat(), 1)
df.iloc[:, 1:] = df.iloc[:, 1:][~mask].fillna('')
try this,
Sorry for the clumsy solution, I'll try to improve the performance ,
temp=df.eq('Excl').shift(-1,axis=1)
df['end']= temp.apply(lambda x:x.argmax(),axis=1)
res=df.apply(lambda x:x.loc[:x['end']].sum(),axis=1)
mask=temp.replace(False,np.NaN).fillna(method='ffill').fillna(False).astype(bool)
del df['end']
df[:]=np.where(mask,'',df)
df['A']=res
print df
Output:
A B C D
0 Include Excl X
1 Include Excl Y
2 Include Excl ABC
3 Include Excl
Improved solution:
res= df.apply(lambda x:x.loc[:x.eq('Excl').shift(-1).argmax()].sum(),axis=1)
mask=df.eq('Excl').shift(-1,axis=1).replace(False,np.NaN).fillna(method='ffill').fillna(False).astype(bool)
df[:]=np.where(mask,'',df)
df['A']=res
More simplified solution:
t=df.eq('Excl').shift(-1,axis=1)
res= df.apply(lambda x:x.loc[:x.eq('Excl').shift(-1).argmax()].sum(),axis=1)
df[:]=np.where(t.fillna(0).astype(int).cumsum() >= 1,'',df)
df['A']=res
I am giving you a rough approach,
Here, we are finding the location of the 'Excl' and merging the column values up it so as to obtain our desired output.
ls=[]
for i in range(len(df)):
end=(df.loc[i,:].index[(df.loc[i,:]=='Excl')][0])
ls.append(''.join(df.loc[i,:end].replace({'Excl':''}).values))
df['A']=ls

Categories