Speeding up duplicating of rows in Pandas groupby? - python

I have a very large data frame (hundreds of millions of rows). There are two group IDs, group_id_1 and group_id_2. The data frame looks like this:
group_id_1 group_id_2 value1 time
1 2 45 1
1 2 49 2
1 4 95 1
1 4 55 2
2 2 44 1
2 4 88 1
2 4 90 2
For each group_id_1 x group_id_2 combo, I need to duplicate the row with the latest time, and increment the time by one. In other words, my table should look like:
group_id_1 group_id_2 value1 time
1 2 45 1
1 2 49 2
1 2 49 3
1 4 95 1
1 4 55 2
1 4 55 3
2 2 44 1
2 2 44 2
2 4 88 1
2 4 90 2
2 4 90 3
Right now, I am doing:
for name, group in df.groupby(['group_id_1', 'group_id_2']):
    last, = group.sort_values(by='time').tail(1)['time'].values
    temp = group[group['time']==last]
    temp.loc[:, 'time'] = last + 1
    group = group.append(temp)
This is insanely inefficient. If I put the above code into a function, and use the .apply() method with the groupby object, it also takes an enormous amount of time.
How do I speed this process up?

You can use groupby with a last aggregation, increment time with add, and concat back to the original:
df1 = df.sort_values(by='time').groupby(['group_id_1', 'group_id_2']).last().reset_index()
df1.time = df1.time.add(1)
print (df1)
group_id_1 group_id_2 value1 time
0 1 2 49 3
1 1 4 55 3
2 2 2 44 2
3 2 4 90 3
df = pd.concat([df,df1])
df = df.sort_values(['group_id_1','group_id_2']).reset_index(drop=True)
print (df)
group_id_1 group_id_2 value1 time
0 1 2 45 1
1 1 2 49 2
2 1 2 49 3
3 1 4 95 1
4 1 4 55 2
5 1 4 55 3
6 2 2 44 1
7 2 2 44 2
8 2 4 88 1
9 2 4 90 2
10 2 4 90 3

First, sort the dataframe by time (this should be more efficient than sorting each group by time):
df = df.sort_values('time')
Second, get the last row in each group (sort=False skips sorting the groups, which improves performance); reset_index keeps the group keys as columns so the later concat and sort still see them:
last = df.groupby(['group_id_1', 'group_id_2'], sort=False).last().reset_index()
Third, increment the time:
last['time'] = last['time'] + 1
Fourth, concatenate:
df = pd.concat([df, last])
Fifth, sort back to the original order:
df = df.sort_values(['group_id_1', 'group_id_2'])
Explanation: concatenating and then sorting will be much faster than inserting rows one by one.
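Putting those five steps together, a minimal runnable sketch (the sample frame is rebuilt from the question; the final sort also includes time so the duplicated row lands after the row it copies):
import pandas as pd

df = pd.DataFrame({'group_id_1': [1, 1, 1, 1, 2, 2, 2],
                   'group_id_2': [2, 2, 4, 4, 2, 4, 4],
                   'value1': [45, 49, 95, 55, 44, 88, 90],
                   'time': [1, 2, 1, 2, 1, 1, 2]})

df = df.sort_values('time')
# last row per (group_id_1, group_id_2); reset_index keeps the keys as columns
last = df.groupby(['group_id_1', 'group_id_2'], sort=False).last().reset_index()
last['time'] = last['time'] + 1
df = pd.concat([df, last]).sort_values(['group_id_1', 'group_id_2', 'time']).reset_index(drop=True)
print(df)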

Related

Pandas or other Python application to generate a column with 1 to n value based on other two columns with rules

Hope I can explain the question properly.
In basic terms, imagining the df as below:
print(df)
year id
1 16100
1 150
1 150
2 66
2 370
2 370
2 530
3 41
3 43
3 61
I need df.seq to be a running 1-to-n counter while the year value stays the same, resetting when it changes.
df.seq2 should stay at n, instead of incrementing to n+1, if the id value in the row above is identical.
As an Excel-like formula it would be something like
df.seq2 = IF(A2=A1,IF(B2=B1,F1,F1+1),1)
which would make the desired output seq and seq2 below:
year id seq seq2
1 16100 1 1
1 150 2 2
1 150 3 2
2 66 1 1
2 370 2 2
2 370 3 2
2 530 4 3
3 41 1 1
3 43 2 2
3 61 3 3
I did test a couple of things (assuming I've already generated df.seq):
comb_df['match'] = comb_df.year.eq(comb_df.year.shift())
comb_df['match2'] = comb_df.id.eq(comb_df.id.shift())
comb_df["seq2"] = np.where((comb_df["match"].shift(+1) == True) & (comb_df["match2"].shift(+1) == True), comb_df["seq"] - 1, comb_df["seq2"])
But the problem is that this doesn't really work out if there are multiple duplicates in a row, etc.
Perhaps the issue can't be resolved in a purely vectorized numpy way and I'd have to iterate over the rows?
There are 2-3 million rows, so performance will be an issue if the solution is very slow.
Would need to generate both df.seq and df.seq2
Any ideas would be extremely helpful!
We can do groupby with cumcount and factorize:
df['seq'] = df.groupby('year').cumcount()+1
df['seq2'] = df.groupby('year')['id'].transform(lambda x : x.factorize()[0]+1)
df
Out[852]:
year id seq seq2
0 1 16100 1 1
1 1 150 2 2
2 1 150 3 2
3 2 66 1 1
4 2 370 2 2
5 2 370 3 2
6 2 530 4 3
7 3 41 1 1
8 3 43 2 2
9 3 61 3 3
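If the lambda-based transform turns out to be slow on 2-3 million rows, a fully vectorized variant (a sketch; it relies on duplicate ids being adjacent, exactly as the Excel formula does, since it only compares each row with the one above) is:
import pandas as pd

df = pd.DataFrame({'year': [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
                   'id': [16100, 150, 150, 66, 370, 370, 530, 41, 43, 61]})

df['seq'] = df.groupby('year').cumcount() + 1

# a row starts a new seq2 value when either year or id differs from the row above
changed = df['id'].ne(df['id'].shift()) | df['year'].ne(df['year'].shift())
df['seq2'] = changed.astype(int).groupby(df['year']).cumsum()
print(df)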

Is it possible to shuffle a dataframe while grouping by index in pandas or sklearn? [duplicate]

My dataframe looks like this
sampleID col1 col2
1 1 63
1 2 23
1 3 73
2 1 20
2 2 94
2 3 99
3 1 73
3 2 56
3 3 34
I need to shuffle the dataframe keeping the same samples together, and the order of col1 must stay the same as in the dataframe above.
So I need it like this
sampleID col1 col2
2 1 20
2 2 94
2 3 99
3 1 73
3 2 56
3 3 34
1 1 63
1 2 23
1 3 73
How can I do this? If my example is not clear please let me know.
Assuming you want to shuffle by sampleID: first df.groupby, then shuffle the groups (import random first), and then call pd.concat:
import random
groups = [df for _, df in df.groupby('sampleID')]
random.shuffle(groups)
pd.concat(groups).reset_index(drop=True)
sampleID col1 col2
0 2 1 20
1 2 2 94
2 2 3 99
3 1 1 63
4 1 2 23
5 1 3 73
6 3 1 73
7 3 2 56
8 3 3 34
You reset the index with df.reset_index(drop=True), but it is an optional step.
I found this to be significantly faster than the accepted answer:
ids = df["sampleID"].unique()
random.shuffle(ids)
df = df.set_index("sampleID").loc[ids].reset_index()
For some reason pd.concat was the bottleneck in my use case; regardless, this way you avoid the concatenation.
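A self-contained sketch of that index-based shuffle (np.random.permutation is used here instead of random.shuffle, purely so the array of ids can be shuffled in one call):
import numpy as np
import pandas as pd

df = pd.DataFrame({'sampleID': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'col1': [1, 2, 3, 1, 2, 3, 1, 2, 3],
                   'col2': [63, 23, 73, 20, 94, 99, 73, 56, 34]})

# shuffle the group labels, then pull the rows out label by label;
# rows inside each sampleID keep their original col1 order
ids = np.random.permutation(df['sampleID'].unique())
df = df.set_index('sampleID').loc[ids].reset_index()
print(df)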
Just to add one thing to @cs95's answer.
This is for the case where you want to shuffle by sampleID but also want the sampleIDs renumbered from 1, i.e. the original sampleID values are not important to keep.
Here is a solution where you just iterate over the grouped dataframes and change the sampleID (note the example below uses the column names doc_id, sent_id, word_id rather than the question's sampleID, col1, col2).
groups = [df for _, df in df.groupby('doc_id')]
random.shuffle(groups)
for i, df in enumerate(groups):
    df['doc_id'] = i + 1
shuffled = pd.concat(groups).reset_index(drop=True)
doc_id sent_id word_id
0 1 1 20
1 1 2 94
2 1 3 99
3 2 1 63
4 2 2 23
5 2 3 73
6 3 1 73
7 3 2 56
8 3 3 34
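An alternative to relabeling inside the loop (a sketch, using the question's sampleID column and assuming df is the frame from the question) is to shuffle and concatenate first, then renumber the keys by order of appearance with pd.factorize:
import random
import pandas as pd

groups = [g for _, g in df.groupby('sampleID')]
random.shuffle(groups)
shuffled = pd.concat(groups).reset_index(drop=True)

# factorize numbers the keys 0..n-1 in order of first appearance; +1 makes them start at 1
shuffled['sampleID'] = pd.factorize(shuffled['sampleID'])[0] + 1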

How to make the first row of each group as sum of other rows in the same group in pandas dataframe?

Let's say I have a Pandas dataframe that looks like this:
A B
0 67 1
1 78 1
2 53 1
3 44 1
4 84 1
5 2 2
6 63 2
7 13 2
8 56 2
9 24 2
My goal is to:
1) group column A based on column B
2) overwrite the first row of each group formed by groupby() with the sum of all the other rows in that group (the original value in the first row is replaced by that sum).
My desired output would be:
A B
0 259 1
1 78 1
2 53 1
3 44 1
4 84 1
5 156 2
6 63 2
7 13 2
8 56 2
9 24 2
So in the first row of group 1 (grouped on column B), column A holds 259 because the group 1 values, excluding that first row, are 78+53+44+84 = 259.
For group 2, the first row is 156 because 63+13+56+24 = 156.
I spent days trying to figure out how to do this and finally surrendered; here's hoping someone in this great community will help.
Here is one way:
grp = df.groupby('B')
Method 1 (similar to #Kent deleted answer):
s = grp['A'].transform('sum').sub(df['A'])
idx = grp.head(1).index
df.loc[idx, 'A'] = s
Method 2:
v = [g.iloc[1:].groupby('B')['A'].sum().iat[0] for _, g in grp]
idx = grp.head(1).index
df.loc[idx, 'A'] = v
print(df)
A B
0 259 1
1 78 1
2 53 1
3 44 1
4 84 1
5 156 2
6 63 2
7 13 2
8 56 2
9 24 2
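A minimal self-contained sketch of Method 1, using the sample frame from the question:
import pandas as pd

df = pd.DataFrame({'A': [67, 78, 53, 44, 84, 2, 63, 13, 56, 24],
                   'B': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]})

grp = df.groupby('B')

# group sum minus each row's own value = sum of the *other* rows in the group
s = grp['A'].transform('sum').sub(df['A'])

# overwrite only the first row of each group
idx = grp.head(1).index
df.loc[idx, 'A'] = s
print(df)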

Get longest streak of consecutive weeks by group in pandas

Currently I'm working with weekly data for different subjects, but it might have some long streaks without data, so what I want to do is just keep the longest streak of consecutive weeks for every id. My data looks like this:
id week
1 8
1 15
1 60
1 61
1 62
2 10
2 11
2 12
2 13
2 25
2 26
My expected output would be:
id week
1 60
1 61
1 62
2 10
2 11
2 12
2 13
I got a bit close, trying to mark with a 1 when week==week.shift()+1. The problem is this approach doesn't mark the first occurrence in a streak, and also I can't filter the longest one:
df.loc[ (df['id'] == df['id'].shift())&(df['week'] == df['week'].shift()+1),'streak']=1
This, according to my example, would bring this:
id week streak
1 8 nan
1 15 nan
1 60 nan
1 61 1
1 62 1
2 10 nan
2 11 1
2 12 1
2 13 1
2 25 nan
2 26 1
Any ideas on how to achieve what I want?
Try this:
df['consec'] = df.groupby(['id',df['week'].diff(-1).ne(-1).shift().bfill().cumsum()]).transform('count')
df[df.groupby('id')['consec'].transform('max') == df.consec]
Output:
id week consec
2 1 60 3
3 1 61 3
4 1 62 3
5 2 10 4
6 2 11 4
7 2 12 4
8 2 13 4
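The grouping key in that one-liner is fairly dense, so here is a decomposed sketch of the same idea (sample frame rebuilt from the question):
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
                   'week': [8, 15, 60, 61, 62, 10, 11, 12, 13, 25, 26]})

# True where the next row is not week + 1, i.e. the current row ends a streak
ends = df['week'].diff(-1).ne(-1)

# shift pushes each end-of-streak flag down one row (so it marks the first row of
# the next streak), bfill fills the first row, and cumsum turns the flags into labels
streak_id = ends.shift().bfill().astype(int).cumsum()

# size of each (id, streak) block, broadcast back to every row
df['consec'] = df.groupby(['id', streak_id])['week'].transform('count')

# keep only the rows of the longest streak per id
print(df[df.groupby('id')['consec'].transform('max') == df['consec']])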
Not as concise as @ScottBoston's answer, but I like this approach:
def max_streak(s):
    a = s.values  # Let's deal with an array
    # I need to know where the differences are not `1`.
    # Also, because I plan to use `diff` again, I'll wrap
    # the boolean array with `True` to make things cleaner
    b = np.concatenate([[True], np.diff(a) != 1, [True]])
    # Tell the locations of the breaks in streak
    c = np.flatnonzero(b)
    # `diff` again tells me the length of the streaks
    d = np.diff(c)
    # `argmax` will tell me the location of the largest streak
    e = d.argmax()
    return c[e], d[e]

def make_thing(df):
    start, length = max_streak(df.week)
    return df.iloc[start:start + length].assign(consec=length)

pd.concat([
    make_thing(g) for _, g in df.groupby('id')
])
id week consec
2 1 60 3
3 1 61 3
4 1 62 3
5 2 10 4
6 2 11 4
7 2 12 4
8 2 13 4

applying several functions in transform in pandas

After a groupby, when using agg, if a dict of columns:functions is passed, the functions will be applied to the corresponding columns. Nevertheless, this syntax doesn't work with transform. Is there another way to apply several functions with transform?
Let's give an example:
import pandas as pd
df_test = pd.DataFrame([[1,2,3],[1,20,30],[2,30,50],[1,2,33],[2,4,50]],columns = ['a','b','c'])
Out[1]:
a b c
0 1 2 3
1 1 20 30
2 2 30 50
3 1 2 33
4 2 4 50
def my_fct1(series):
    return series.mean()

def my_fct2(series):
    return series.std()
df_test.groupby('a').agg({'b':my_fct1,'c':my_fct2})
Out[2]:
c b
a
1 16.522712 8
2 0.000000 17
The previous example shows how to apply different functions to different columns with agg, but if we want to transform the columns without aggregating them, agg can't be used anymore. Therefore:
df_test.groupby('a').transform({'b':np.cumsum,'c':np.cumprod})
Out[3]:
TypeError: unhashable type: 'dict'
How can we perform such an action with the following expected output:
a b c
0 1 2 3
1 1 22 90
2 2 30 50
3 1 24 2970
4 2 34 2500
You can still use a dict but with a bit of hack:
df_test.groupby('a').transform(lambda x: {'b': x.cumsum(), 'c': x.cumprod()}[x.name])
Out[427]:
b c
0 2 3
1 22 90
2 30 50
3 24 2970
4 34 2500
If you need to keep column a, you can do:
df_test.set_index('a')\
       .groupby('a')\
       .transform(lambda x: {'b': x.cumsum(), 'c': x.cumprod()}[x.name])\
       .reset_index()
Out[429]:
a b c
0 1 2 3
1 1 22 90
2 2 30 50
3 1 24 2970
4 2 34 2500
Another way is to use an if else to check column names:
df_test.set_index('a')\
       .groupby('a')\
       .transform(lambda x: x.cumsum() if x.name=='b' else x.cumprod())\
       .reset_index()
I think transform is not (as of pandas 0.20.2) implemented with a dict of column names to functions the way agg is.
If the functions return a Series of the same length:
df1 = df_test.set_index('a').groupby('a').agg({'b':np.cumsum,'c':np.cumprod}).reset_index()
print (df1)
a c b
0 1 3 2
1 1 90 22
2 2 50 30
3 1 2970 24
4 2 2500 34
But if the aggregation returns a different length, a join is needed:
df2 = df_test[['a']].join(df_test.groupby('a').agg({'b':my_fct1,'c':my_fct2}), on='a')
print (df2)
a c b
0 1 16.522712 8
1 1 16.522712 8
2 2 0.000000 17
3 1 16.522712 8
4 2 0.000000 17
With updates to pandas, you can use the assign method along with transform to either append new columns or replace existing columns with new values:
grouper = df_test.groupby("a")
df_test.assign(b=grouper["b"].transform("cumsum"),
               c=grouper["c"].transform("cumprod"))
a b c
0 1 2 3
1 1 22 90
2 2 30 50
3 1 24 2970
4 2 34 2500
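If there are more than a couple of columns, the same assign + transform idea generalizes with a dict comprehension (a sketch; the funcs mapping here is just an illustration):
import pandas as pd

df_test = pd.DataFrame([[1, 2, 3], [1, 20, 30], [2, 30, 50], [1, 2, 33], [2, 4, 50]],
                       columns=['a', 'b', 'c'])

funcs = {'b': 'cumsum', 'c': 'cumprod'}   # column -> transform to apply
grouper = df_test.groupby('a')

# build one transformed Series per column and attach them all at once
out = df_test.assign(**{col: grouper[col].transform(fn) for col, fn in funcs.items()})
print(out)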
