How can I just combine the groups of a pandas GroupBy? - python

I'm using DataFrame.groupby() to group rows with the same key, while maintaining a previously sorted row order. I wish to combine the groups back into a complete DataFrame, so that rows with a common key follow the first such row, with the group containing the largest col value coming first. After much experimentation, and searching the split-apply-combine documentation for a separate combine method, I arrived at the following idiom, which works. That it required an open-coded identity function suggested to me that I'm not using GroupBy the way it was intended. Is there a better idiom?
df.sort_values(col, ascending=False).groupby(key, sort=False).apply(lambda g_df: g_df)
I'd love to know where in the pandas documentation I could have answered this for myself.

We can sort first. Because pd.unique preserves order, it finds the 'key' values ordered by their highest 'col' value. Then, by setting the index and using .loc, we can group all of them together.
Sample Data
import pandas as pd
df = pd.DataFrame({'col': [1,2,3,4,5,6,7,8,9,10],
                   'key': list('abababcacb')})
Code
df = df.sort_values('col', ascending=False)
df = df.set_index('key').loc[df['key'].unique()].reset_index()
  key  col
0   b   10
1   b    6
2   b    4
3   b    2
4   c    9
5   c    7
6   a    8
7   a    5
8   a    3
9   a    1
Another way to do what you want is to create a helper column. You want to sort by the max 'col' value within the group, so use transform to broadcast the result to a helper column that we sort on and then drop.
df['key1'] = df.groupby('key')['col'].transform('max')
df = df.sort_values(['key1', 'col'], ascending=False).drop(columns='key1')
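Starting again from the sample data at the top of this answer, the helper-column route should give the same ordering as the .loc approach; a minimal sketch with hand-written output (here the original index labels are kept; add .reset_index(drop=True) if you want 0..9):
import pandas as pd

df = pd.DataFrame({'col': [1,2,3,4,5,6,7,8,9,10],
                   'key': list('abababcacb')})

# max 'col' per key: a -> 8, b -> 10, c -> 9
df['key1'] = df.groupby('key')['col'].transform('max')
df = df.sort_values(['key1', 'col'], ascending=False).drop(columns='key1')
print(df)
#    col key
# 9   10   b
# 5    6   b
# 3    4   b
# 1    2   b
# 8    9   c
# 6    7   c
# 7    8   a
# 4    5   a
# 2    3   a
# 0    1   a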
If you wanted to use groupby, you're really just using it to get the index locations. A straightforward implementation would be to just concat the groups, iterating over the groupby object:
df = pd.concat([gp for _,gp in df.sort_values('col', ascending=False).groupby('key', sort=False)])
However, because you just need to re-arrange the entire DataFrame, there's really no need to split it just to concat everything back. The .groups attribute stores the indices; chain them together and slice the original DataFrame:
from itertools import chain
idx = chain.from_iterable(df.sort_values('col', ascending=False)
                            .groupby('key', sort=False)
                            .groups.values())
df = df.loc[idx]
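For the same sample data, slicing the original DataFrame with the chained labels should reproduce that ordering, assuming .groups keeps the sort=False group order (output written by hand, so treat it as a sketch):
print(df)
#    col key
# 9   10   b
# 5    6   b
# 3    4   b
# 1    2   b
# 8    9   c
# 6    7   c
# 7    8   a
# 4    5   a
# 2    3   a
# 0    1   a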

Related

Concatenating values into column from multiple rows

I have a dataframe containing only duplicate "MainID" rows. One MainID may have multiple secondary IDs (SecID). I want to concatenate the values of SecID if there is a common MainID, joined by ':' in SecID col. What is the best way of achieving this? Yes, I know this is not best practice, however it's the structure the software wants.
I need to keep the df structure and the values in the rest of the df; they will always match the other duplicated row. Only SecID will be different.
Current:
data={'MainID':['NHFPL0580','NHFPL0580','NHFPL0582','NHFPL0582'],'SecID':['G12345','G67890','G11223','G34455'], 'Other':['A','A','B','B']}
df=pd.DataFrame(data)
print(df)
MainID SecID Other
0 NHFPL0580 G12345 A
1 NHFPL0580 G67890 A
2 NHFPL0582 G11223 B
3 NHFPL0582 G34455 B
Intended Structure
MainID SecID Other
NHFPL0580 G12345:G67890 A
NHFPL0582 G11223:G34455 B
Try:
df.groupby('MainID').apply(lambda x: ':'.join(x.SecID))
The above code returns a pd.Series; you can convert it back to a DataFrame as #Guy suggested:
You need .reset_index(name='SecID') if you want it back as DataFrame
The solution to the edited question:
df = df.groupby(['MainID', 'Other']).apply(lambda x: ':'.join(x.SecID)).reset_index(name='SecID')
You can then change the column order
cols = df.columns.tolist()
df = df[[cols[i] for i in [0, 2, 1]]]
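Putting it together with the sample data from the question (a sketch, with the expected output written by hand):
import pandas as pd

data = {'MainID': ['NHFPL0580', 'NHFPL0580', 'NHFPL0582', 'NHFPL0582'],
        'SecID': ['G12345', 'G67890', 'G11223', 'G34455'],
        'Other': ['A', 'A', 'B', 'B']}
df = pd.DataFrame(data)

# group on the columns that stay constant and join the SecID values with ':'
df = df.groupby(['MainID', 'Other']).apply(lambda x: ':'.join(x.SecID)).reset_index(name='SecID')
df = df[['MainID', 'SecID', 'Other']]
print(df)
#       MainID          SecID Other
# 0  NHFPL0580  G12345:G67890     A
# 1  NHFPL0582  G11223:G34455     B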

Best way to move an unexpected column in a Pandas DF to a new DF?

Wondering what the best way to tackle this issue is. If I have a DF with the following columns
df1()
type_of_fruit name_of_fruit price
..... ..... .....
and a list called
expected_cols = ['name_of_fruit','price']
What's the best way to automate the check of df1 against the expected_cols list? I was trying something like
df_cols=df1.columns.values.tolist()
if df_cols != expected_cols:
And then try to drop to another df any columns not in expected_cols, but this doesn't seem like a great idea to me. Is there a way to save the "dropped" columns?
df2 = df1.drop(columns=expected_cols)
But this seems problematic depending on column ordering, and also in cases where there are either more columns than expected or fewer. In cases where there are fewer columns than expected (i.e. df1 only contains the column name_of_fruit) I'm planning on using
df1.reindex(columns=expected_cols)
But I'm a bit iffy on how to do this programmatically, and on how to handle the case where there are more columns than expected.
You can use set difference using -:
Assuming df1 having cols:
In [542]: df1_cols = df1.columns # ['type_of_fruit', 'name_of_fruit', 'price']
In [539]: expected_cols = ['name_of_fruit','price']
In [541]: unwanted_cols = list(set(df1_cols) - set(expected_cols))
In [542]: df2 = df1[unwanted_cols]
In [543]: df1.drop(unwanted_cols, axis=1, inplace=True)
Use groupby along the columns axis to split the DataFrame succinctly. In this case, check whether the columns are in your list to form the grouper, and you can store the results in a dict where the True key gets the DataFrame with the subset of columns in the list and the False key has the subset of columns not in the list.
Sample Data
import pandas as pd
df = pd.DataFrame(data=[[1, 2, 3]],
                  columns=['type_of_fruit', 'name_of_fruit', 'price'])
expected_cols = ['name_of_fruit','price']
Code
d = dict(tuple(df.groupby(df.columns.isin(expected_cols), axis=1)))
# If you need to ensure columns are always there then do
#d[True] = d[True].reindex(columns=expected_cols)
d[True]
# name_of_fruit price
#0 2 3
d[False]
# type_of_fruit
#0 1
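To also cover the edge cases from the question (more columns than expected, or fewer), one way to combine this with reindex might look like the sketch below. This is my own addition, not part of the answer above; dict.get guards against a missing key when every column happens to match.
grouped = dict(tuple(df.groupby(df.columns.isin(expected_cols), axis=1)))

# exactly the expected columns, in order; missing ones come back as NaN
expected_df = grouped.get(True, pd.DataFrame(index=df.index)).reindex(columns=expected_cols)

# any unexpected columns, or an empty frame if there were none
unexpected_df = grouped.get(False, pd.DataFrame(index=df.index))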

Iterate to find the repeat values in Pandas dataframe

Windows 10, Python 3.6
I have a dataframe df
df = pd.DataFrame({'name': ['boo', 'foo', 'too', 'boo', 'roo', 'too'],
                   'zip': ['30004', '02895', '02895', '30750', '02895', '02895']})
I want to find the repeated records that have the same 'name' and 'zip', and record the number of repeats. The ideal output is
name repeat zip
0 too 1 02895
Because my dataframe has many more than six rows, I need an iterative method. I appreciate any tips.
I believe you need to groupby all columns and use GroupBy.size:
# create DataFrame from an online source
#df = pd.read_csv('someonline.csv')
#df = pd.read_html('someurl')[0]
# or build it up in a loop, appending each row to a list
#L = []
#for x in iterator:
#    L.append(x)
# and create the DataFrame from the constructor
#df = pd.DataFrame(L)
df = df.groupby(df.columns.tolist()).size().reset_index(name='repeat')
#if need specify columns
#df = df.groupby(['name','zip']).size().reset_index(name='repeat')
print (df)
name zip repeat
0 boo 30004 1
1 boo 30750 1
2 foo 02895 1
3 roo 02895 1
4 too 02895 2
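If, as in the desired output, you only want the records that actually repeat, you can filter afterwards; a small follow-up sketch on the grouped result above (note the count here is the total number of occurrences, 2, whereas the question's example output counts only the extra occurrence):
print(df[df['repeat'] > 1])
#   name    zip  repeat
# 4  too  02895       2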
Pandas has a handy .duplicated() method that can help you identify duplicates.
df.duplicated()
By passing that boolean mask into a selection you can get the duplicated records:
df[df.duplicated()]
You can get the count of duplicated records by summing the mask with .sum():
df.duplicated().sum()
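A small usage sketch on the original df from the question, with the expected output written by hand:
print(df.duplicated())
# 0    False
# 1    False
# 2    False
# 3    False
# 4    False
# 5     True
# dtype: bool

print(df[df.duplicated()])
#   name    zip
# 5  too  02895

print(df.duplicated().sum())
# 1

# to see every occurrence of a duplicated record, not just the later ones
print(df[df.duplicated(keep=False)])
#   name    zip
# 2  too  02895
# 5  too  02895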

How to keep indexes when sum by columns based on grouped_by in pandas

I have a dataset where each ID has 6 corresponding rows. I want this dataset grouped by the column ID and aggregated using sum. I wrote this piece of code:
col = [col for col in train.columns if col not in ['Month', 'ID']]
train.groupby('ID')[col].sum().reset_index()
Everything works fine except that I lose the column ID. The unique IDs from my initial dataset disappear, and instead I just have IDs enumerated from 0 up to the number of rows in the resulting dataset. I want to keep the initial IDs, because I will need to merge this dataset with another one later. How can I deal with this problem? Thanks very much for helping!
P.S: deleting reset_index() has no effect
P.S: You can see two problems in the images. In the first image there is the original dataset; you can see 6 entries for each ID. In the second image there is the dataset that results from the grouped statement. First problem: the IDs are not the same as in the original table. Second problem: the sum over 6 months for each ID is not correct.
Instead of using reset_index() you can simply use the keyword argument as_index: df.groupby('ID', as_index=False)
This will preserve the column ID in the aggregated output, as described in groupby()'s documentation.
as_index : boolean, default True
For aggregated output, return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively “SQL-style” grouped output
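Applied to the code from the question (train and col are the question's own names), that would look something like this sketch:
col = [c for c in train.columns if c not in ['Month', 'ID']]
result = train.groupby('ID', as_index=False)[col].sum()  # 'ID' stays as a regular column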
When you group a data frame by some columns, those columns become your new index.
import pandas as pd
import numpy as np
# Create data
n = 6; m = 3
col_id = np.hstack([['id-'+str(i)] * n for i in range(m)]).reshape(-1, 1)
np.random.shuffle(col_id)
data = np.random.rand(m*n, m)
columns = ['v'+str(i+1) for i in range(m)]
df = pd.DataFrame(data, columns=columns)
df['ID'] = col_id
# Group by ID
print(df.groupby('ID').sum())
Will simply give you
v1 v2 v3
ID
id-0 2.099219 2.708839 2.766141
id-1 2.554117 2.183166 3.914883
id-2 2.485505 2.739834 2.250873
If you just want the column ID back, you just have to reset_index()
print(df.groupby('ID').sum().reset_index())
which will leave you with
ID v1 v2 v3
0 id-0 2.099219 2.708839 2.766141
1 id-1 2.554117 2.183166 3.914883
2 id-2 2.485505 2.739834 2.250873
Note:
groupby will sort the result by the group keys. If you don't want that for any reason, just set sort=False (see also the documentation)
print(df.groupby('ID', sort=False).sum())

"Expanding" pandas dataframe by using cell-contained list

I have a dataframe in which the third column is a list:
import pandas as pd
pd.DataFrame([[1,2,['a','b','c']]])
I would like to unnest that list and create more rows with identical values in the first and second columns.
The end result should be something like:
pd.DataFrame([[1,2,'a'],[1,2,'b'],[1,2,'c']])
Note, this is a simplified example. In reality I have multiple rows that I would like to "expand".
Regarding my progress, I have no idea how to solve this. Well, I imagine that I could take each member of the nested list while keeping the other column values, then use a list comprehension to build more lists, and keep adding lists together to create a new dataframe... but this seems a bit too complex. What about a simpler solution?
Create the dataframe with a single column, then add columns with constant values:
import pandas as pd
df = pd.DataFrame({"data": ['a', 'b', 'c']})
df['col1'] = 1
df['col2'] = 2
print(df)
This prints:
data col1 col2
0 a 1 2
1 b 1 2
2 c 1 2
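If you have many rows to expand, one way to generalize this, roughly along the lines of the list-comprehension idea mentioned in the question, is sketched below (my own addition):
import pandas as pd

df = pd.DataFrame([[1, 2, ['a', 'b', 'c']],
                   [3, 4, ['d', 'e']]])

# one output row per element of the list in the third column
rows = [[c0, c1, item] for c0, c1, lst in df.itertuples(index=False) for item in lst]
expanded = pd.DataFrame(rows)
print(expanded)
#    0  1  2
# 0  1  2  a
# 1  1  2  b
# 2  1  2  c
# 3  3  4  d
# 4  3  4  e
In recent pandas versions (0.25+), df.explode(2) on the list column should produce a similar expansion directly.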
Not exactly the same issue that the OP described, but related (and more pandas-like) is the situation where you have a dict of lists with lists of unequal lengths. In that case, you can create a DataFrame in long format like this:
import pandas as pd
my_dict = {'a': [1,2,3,4], 'b': [2,3]}
df = pd.DataFrame.from_dict(my_dict, orient='index')
df = df.unstack() # to format it in long form
df = df.dropna() # to drop nan values which were generated by having lists of unequal length
df.index = df.index.droplevel(level=0) # drop the column-position level, keeping only the dict keys as the index
# NOTE this last step results in duplicate index values
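For the sample dict above, the long-form result should look roughly like this (hand-written output, so treat it as a sketch):
print(df)
# a    1.0
# b    2.0
# a    2.0
# b    3.0
# a    3.0
# a    4.0
# dtype: float64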
