So I have a DataFrame that looks something along these lines:
import pandas as pd
ddd = {
'a': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'b': [22, 25, 18, 53, 19, 8, 75, 11, 49, 64],
'c': [1, 1, 1, 2, 2, 3, 4, 4, 4, 5]
}
df = pd.DataFrame(ddd)
What I need is to group the data by the 'c' column and apply some data transformations. At the moment I'm doing this:
def do_stuff(d: pd.DataFrame):
    if d.shape[0] >= 2:
        return pd.DataFrame(
            {
                'start': [d.a.values[0]],
                'end': [d.a.values[d.shape[0] - 1]],
                'foo': [d.a.sum()],
                'bar': [d.b.mean()]
            }
        )
    else:
        return pd.DataFrame()
r = df.groupby('c').apply(lambda x: do_stuff(x))
Which gives the correct result:
start end foo bar
c
1 0 1.0 3.0 6.0 21.666667
2 0 4.0 5.0 9.0 36.000000
4 0 7.0 9.0 24.0 45.000000
The problem is that this approach appears to be too slow. On my actual data it takes around 0.7 seconds, which is too long; ideally it should be much faster.
Is there any way I can do this faster? Or maybe there's some other faster method not involving groupby that I could use?
We could first filter df for the "c" values that appear 2 or more times; then use groupby + named aggregation:
msk = df['c'].value_counts() >= 2
out = (df[df['c'].isin(msk.index[msk])]
       .groupby('c')
       .agg(start=('a','first'), end=('a','last'), foo=('a','sum'), bar=('b','mean')))
You could also do:
out = (df[df.groupby('c')['c'].transform('count').ge(2)]
       .groupby('c')
       .agg(start=('a','first'),
            end=('a','last'),
            foo=('a','sum'),
            bar=('b','mean')))
or
msk = df['c'].value_counts() >= 2
out = (df[df['c'].isin(msk.index[msk])]
       .groupby('c')
       .agg({'a':['first','last','sum'], 'b':'mean'})
       .set_axis(['start','end','foo','bar'], axis=1))
Output:
start end foo bar
c
1 1 3 6 21.666667
2 4 5 9 36.000000
4 7 9 24 45.000000
Some benchmarks:
>>> %timeit out = df.groupby('c').apply(lambda x: do_stuff(x))
6.49 ms ± 335 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit msk = df['c'].value_counts() >= 2; out = (df[df['c'].isin(msk.index[msk])].groupby('c').agg(start=('a','first'), end=('a','last'), foo=('a','sum'), bar=('b','mean')))
7.6 ms ± 211 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit out = (df[df.groupby('c')['c'].transform('count').ge(2)].groupby('c').agg(start=('a','first'), end=('a','last'), foo=('a','sum'), bar=('b','mean')))
7.86 ms ± 509 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit msk = df['c'].value_counts() >= 2; out = (df[df['c'].isin(msk.index[msk])].groupby('c').agg({'a':['first','last','sum'], 'b':'mean'}).set_axis(['start','end','foo','bar'], axis=1))
4.68 ms ± 57.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
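For a sense of how these approaches scale beyond the toy frame, here is a hedged sketch for building a larger random test frame (the sizes and value ranges are assumptions, not the asker's real data):
import numpy as np
rng = np.random.default_rng(0)
big = pd.DataFrame({'a': np.arange(100_000),                          # hypothetical: 100k rows
                    'b': rng.integers(0, 100, 100_000),
                    'c': np.sort(rng.integers(0, 10_000, 100_000))})  # roughly 10k groups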
Related
I'm trying to construct a fast Pandas approach for dropping certain rows from a Dataframe when some condition is met. Specifically, I want to drop the first occurrence of some variable in the dataframe if some other value in that row is equal to 0. This is perhaps easiest explained by example:
foo = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3])
bar = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1])
df = pd.DataFrame({'foo': foo, 'bar':bar})
# So df is:
idx | foo | bar
0 1 1
1 1 0
2 1 1
3 1 0
4 1 1
5 1 0
6 1 1
7 1 0
8 1 1
9 1 0
10 1 1
11 2 0
12 2 1
13 2 0
14 2 1
15 3 1
16 3 1
17 3 0
18 3 1
I want to look at the first row when the 'foo' column is a new value, then drop it from the dataframe if the 'bar' value in that row = 0.
I can find when this condition is met using groupby:
df.groupby('foo').first()
# Result:
bar
foo
1 1
2 0
3 1
So I see that I need to drop the first row when foo = 2 (i.e. just drop row with index = 11 in my original data frame). I cannot work out, however, how to use this groupby result as a mask for my original data frame, since the shapes / sizes are different.
I found a related question on groupby modifications (Drop pandas dataframe rows based on groupby() condition), but in this example they drop ALL rows when this condition is met, whereas I only want to drop the first row.
Is this possible please?
Use Series.shift:
df.loc[~(df['foo'].ne(df['foo'].shift()) & df['bar'].eq(0))]
or
df.loc[df.duplicated(subset = 'foo') | df['bar'].ne(0)]
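To see what the first (shift-based) mask flags on the example data, a minimal sketch:
first_of_group = df['foo'].ne(df['foo'].shift())   # True on the first row of each new foo run
to_drop = first_of_group & df['bar'].eq(0)         # ...and only where bar is 0
print(df.index[to_drop].tolist())                  # [11]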
Both are clearly faster than the drop_duplicates-based alternative:
%%timeit
df.loc[~(df['foo'].ne(df['foo'].shift()) & df['bar'].eq(0))]
#970 µs ± 51.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
df.loc[df.duplicated(subset = 'foo') | df['bar'].ne(0)]
#1.34 ms ± 34 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
df.loc[~df.index.isin(df.drop_duplicates(subset='foo').loc[lambda x: x.bar==0].index)]
#2.16 ms ± 109 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
If foo is numeric and grouped in consecutive runs, as in your example:
%%timeit
df.loc[~(df['foo'].diff().ne(0)&df['bar'].eq(0))]
908 µs ± 15.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
or
%%timeit
df.loc[df['foo'].duplicated().add(df['bar']).ne(0)]
787 µs ± 15.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
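As a small sketch of why the duplicated + add trick works: duplicated() is False (i.e. 0) only on the first occurrence of each foo, so the sum is 0 exactly when a row is a first occurrence and its bar is 0:
keep = df['foo'].duplicated().add(df['bar']).ne(0)
print(df.index[~keep].tolist())   # [11] -> the only row that gets dropped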
You can first find the first occurrence of each new foo, check whether bar is 0 there, and then use that as a mask to filter the original df.
df.loc[~df.index.isin(df.drop_duplicates(subset='foo').loc[lambda x: x.bar==0].index)]
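As a quick check, the intermediate step yields the index of first-occurrence rows where bar is 0 (the rows to drop):
print(df.drop_duplicates(subset='foo').loc[lambda x: x.bar == 0].index.tolist())   # [11]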
Or to use groupby:
(
    df.groupby('foo').apply(lambda x: x.iloc[int(x.bar.iloc[0]==0):])
      .reset_index(level=0, drop=True)
)
The first approach is faster (2.71 ms) than the groupby method (3.93 ms) on your example.
I have a dataframe like this
data
0 1.5
1 1.3
2 1.3
3 1.8
4 1.3
5 1.8
6 1.5
And I have a list of lists like this:
indices = [[0, 3, 4], [0, 3], [2, 6, 4], [1, 3, 4, 5]]
I want to produce sums of each of the groups in my dataframe using the list of lists, so
group1 = df[0] + df[3] + df[4]
group2 = df[0] + df[3]
group3 = df[2] + df[6] + df[4]
group4 = df[1] + df[3] + df[4] + df[5]
so I am looking for something like df.groupby(indices).sum
I know this can be done iteratively using a for loop and applying the sum to each of the df.iloc[sublist], but I am looking for a faster way.
Use list comprehension:
a = [df.loc[x, 'data'].sum() for x in indices]
print (a)
[4.6, 3.3, 4.1, 6.2]
Or, for better performance, work with the underlying NumPy array:
arr = df['data'].values
a = [arr[x].sum() for x in indices]
print (a)
[4.6, 3.3, 4.1, 6.2]
A solution with groupby + sum is also possible, but I'm not sure about its performance:
df1 = pd.DataFrame({
    'd' : df['data'].values[np.concatenate(indices)],
    'g' : np.arange(len(indices)).repeat([len(x) for x in indices])
})
print (df1)
d g
0 1.5 0
1 1.8 0
2 1.3 0
3 1.5 1
4 1.8 1
5 1.3 2
6 1.5 2
7 1.3 2
8 1.3 3
9 1.8 3
10 1.3 3
11 1.8 3
print(df1.groupby('g')['d'].sum())
g
0 4.6
1 3.3
2 4.1
3 6.2
Name: d, dtype: float64
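A NumPy-only sketch of the same flattening idea (not benchmarked above), using np.add.reduceat on the concatenated values:
flat = df['data'].to_numpy()[np.concatenate(indices)]
starts = np.cumsum([0] + [len(x) for x in indices])[:-1]   # start of each sublist within flat
print(np.add.reduceat(flat, starts))                        # approximately [4.6 3.3 4.1 6.2]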
Performance tested on the small sample data - real data may behave differently:
In [150]: %timeit [df.loc[x, 'data'].sum() for x in indices]
4.84 ms ± 80.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [151]: %%timeit
...: arr = df['data'].values
...: [arr[x].sum() for x in indices]
...:
...:
20.9 µs ± 99.3 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [152]: %timeit pd.DataFrame({'d' : df['data'].values[np.concatenate(indices)],'g' : np.arange(len(indices)).repeat([len(x) for x in indices])}).groupby('g')['d'].sum()
1.46 ms ± 234 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
On real data
In [37]: %timeit [df.iloc[x, 0].sum() for x in indices]
158 ms ± 485 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [38]: arr = df['data'].values
...: %timeit \
...: [arr[x].sum() for x in indices]
5.99 ms ± 18 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In[49]: %timeit pd.DataFrame({'d' : df['last'].values[np.concatenate(sample_indices['train'])],'g' : np.arange(len(sample_indices['train'])).repeat([len(x) for x in sample_indices['train']])}).groupby('g')['d'].sum()
...:
5.97 ms ± 45.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Interesting: both of the bottom approaches are fast.
I have the following DataFrame with named columns and index:
'a' 'a*' 'b' 'b*'
1 5 NaN 9 NaN
2 NaN 3 3 NaN
3 4 NaN 1 NaN
4 NaN 9 NaN 7
The data source has caused some column headings to be copied slightly differently. For example, as above, some column headings are a string and some are the same string with an additional '*' character.
I want to copy any values (which are not null) from a* and b* columns to a and b, respectively.
Is there an efficient way to do such an operation?
Use np.where
df['a'] = np.where(df['a'].isnull(), df['a*'], df['a'])
df['b'] = np.where(df['b'].isnull(), df['b*'], df['b'])
Output:
a a* b b*
0 5.0 NaN 9.0 NaN
1 3.0 3.0 3.0 NaN
2 4.0 NaN 1.0 NaN
3 9.0 9.0 7.0 7.0
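If there are many such column pairs, a hedged sketch that loops over them (assuming every starred column 'x*' has a matching base column 'x'):
for starred in [c for c in df.columns if c.endswith('*')]:
    base = starred[:-1]   # 'a*' -> 'a'
    df[base] = np.where(df[base].isnull(), df[starred], df[base])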
Using fillna() is a lot slower than np.where but has the advantage of being pandas only. If you want a faster method and keep it pandas pure, you can use combine_first() which according to the documentation is used to:
Combine Series values, choosing the calling Series’s values first. Result index will be the union of the two indexes
Translation: this is a method designed to do exactly what is asked in the question.
How do I use it?
df['a'].combine_first(df['a*'])
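To write the result back for both column pairs, a minimal sketch:
df['a'] = df['a'].combine_first(df['a*'])
df['b'] = df['b'].combine_first(df['b*'])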
Performance:
df = pd.DataFrame({'A': [0, None, 1, 2, 3, None] * 10000, 'A*': [4, 4, 5, 6, 7, 8] * 10000})
def using_fillna(df):
    return df['A'].fillna(df['A*'])

def using_combine_first(df):
    return df['A'].combine_first(df['A*'])

def using_np_where(df):
    return np.where(df['A'].isnull(), df['A*'], df['A'])

def using_np_where_numpy(df):
    return np.where(np.isnan(df['A'].values), df['A*'].values, df['A'].values)
%timeit -n 100 using_fillna(df)
%timeit -n 100 using_combine_first(df)
%timeit -n 100 using_np_where(df)
%timeit -n 100 using_np_where_numpy(df)
1.34 ms ± 71.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
281 µs ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
257 µs ± 16.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
166 µs ± 10.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For better performance, you can use numpy.isnan and convert the Series to NumPy arrays with .values:
df['a'] = np.where(np.isnan(df['a'].values), df['a*'].values, df['a'].values)
df['b'] = np.where(np.isnan(df['b'].values), df['b*'].values, df['b'].values)
Another, more general solution, if the DataFrame contains only column pairs with and without '*' and the '*' columns need to be removed:
First create a MultiIndex by appending '*val' to the column names and splitting on '*':
df.columns = (df.columns + '*val').str.split('*', expand=True, n=1)
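A quick check of the intermediate columns the split produces (values shown are for this example):
# [('a', 'val'), ('a', '*val'), ('b', 'val'), ('b', '*val')]
print(df.columns.tolist())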
Then select each level with DataFrame.xs, so DataFrame.fillna works nicely between the two resulting DataFrames:
df = df.xs('*val', axis=1, level=1).fillna(df.xs('val', axis=1, level=1))
print (df)
a b
1 5.0 9.0
2 3.0 3.0
3 4.0 1.0
4 9.0 7.0
Performance (depends on the number of missing values and the length of the DataFrame):
df = pd.DataFrame({'A': [0, np.nan, 1, 2, 3, np.nan] * 10000,
                   'A*': [4, 4, 5, 6, 7, 8] * 10000})

def using_fillna(df):
    df['A'] = df['A'].fillna(df['A*'])
    return df

def using_np_where(df):
    df['B'] = np.where(df['A'].isnull(), df['A*'], df['A'])
    return df

def using_np_where_numpy(df):
    df['C'] = np.where(np.isnan(df['A'].values), df['A*'].values, df['A'].values)
    return df

def using_combine_first(df):
    df['D'] = df['A'].combine_first(df['A*'])
    return df
%timeit -n 100 using_fillna(df)
%timeit -n 100 using_np_where(df)
%timeit -n 100 using_combine_first(df)
%timeit -n 100 using_np_where_numpy(df)
1.15 ms ± 89.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
533 µs ± 13.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
591 µs ± 38.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
423 µs ± 21.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
I have a pandas dataframe with 1 million rows. I want to replace values in 900,000 rows of a column with another set of values. Is there a fast way to do this without a for loop (which takes me two days to complete)?
For example, look at this sample dataframe where I have condensed 1 million rows to 8 rows
import numpy as np
import pandas as pd
df = pd.DataFrame()
df['a'] = [-1,-3,-4,-4,-3, 4,5,6]
df['b'] = [23,45,67,89,0,-1, 2, 3]
L2 = [-1,-3,-4]
L5 = [9,10,11]
I want to replace values where a is -1, -3, -4 in a single shot if possible or as fast as possible without a for loop.
The crucial part is that values in L5 have to be repeated as needed.
I have tried
df.loc[df.a < 0, 'a'] = L5
but this works only when the number of rows selected by the mask equals len(L5)
Use map with a dictionary created from both lists with zip, then restore the original values for non-matched rows with fillna:
d = dict(zip(L2, L5))
print (d)
{-1: 9, -3: 10, -4: 11}
df['a'] = df['a'].map(d).fillna(df['a'])
print (df)
a b
0 9.0 23
1 10.0 45
2 11.0 67
3 11.0 89
4 10.0 0
5 4.0 -1
6 5.0 2
7 6.0 3
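Note that map + fillna leaves the column as float (as in the output above). If integers are needed, a hedged one-liner, assuming no NaN remain after the fillna:
df['a'] = df['a'].map(d).fillna(df['a']).astype(int)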
Performance:
It depends on the number of values to replace and on the length of the lists:
If the length of the lists is 100:
np.random.seed(123)
N = 1000000
df = pd.DataFrame({'a':np.random.randint(1000, size=N)})
L2 = np.arange(100)
L5 = np.arange(100) + 10
In [336]: %timeit df['d'] = np.select([df['a'] == i for i in L2], L5, df['a'])
180 ms ± 1.07 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [337]: %timeit df['a'].map(dict(zip(L2, L5))).fillna(df['a'])
56.9 ms ± 2.55 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
If the length of the lists is small (e.g. 3):
np.random.seed(123)
N = 1000000
df = pd.DataFrame({'a':np.random.randint(100, size=N)})
L2 = np.arange(3)
L5 = np.arange(3) + 10
In [339]: %timeit df['d'] = np.select([df['a'] == i for i in L2], L5, df['a'])
11.9 ms ± 40.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [340]: %timeit df['a'].map(dict(zip(L2, L5))).fillna(df['a'])
54 ms ± 215 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
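Another pandas-only option, not benchmarked above, is Series.replace with the same dictionary; it keeps unmatched values and the integer dtype:
df['a'] = df['a'].replace(dict(zip(L2, L5)))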
You can use np.select like this:
import numpy as np
condition = [df['a'] == i for i in L2]
df['a'] = np.select(condition, L5, df['a'])
and you get:
a b
0 9 23
1 10 45
2 11 67
3 11 89
4 10 0
5 4 -1
6 5 2
7 6 3
Timing: let's create a bigger dataframe from your df:
df_l = pd.concat([df]*10000)
print (df_l.shape)
(80000, 2)
Now some timeit:
# with map, #jezrael
d = dict(zip(L2, L5))
%timeit df_l['a'].map(d).fillna(df_l['a'])
100 loops, best of 3: 7.71 ms per loop
# with np.select
condition = [df_l['a'] == i for i in L2]
%timeit np.select(condition, L5, df_l['a'])
1000 loops, best of 3: 350 µs per loop
I'd like to know if there's a way to find the location (column and row index) of the highest value in a dataframe. So if for example my dataframe looks like this:
A B C D E
0 100 9 1 12 6
1 80 10 67 15 91
2 20 67 1 56 23
3 12 51 5 10 58
4 73 28 72 25 1
How do I get a result that looks like this: [0, 'A'] using Pandas?
Use np.argmax
NumPy's argmax can be helpful:
>>> df.stack().index[np.argmax(df.values)]
(0, 'A')
In steps
df.values is a two-dimensional NumPy array:
>>> df.values
array([[100, 9, 1, 12, 6],
[ 80, 10, 67, 15, 91],
[ 20, 67, 1, 56, 23],
[ 12, 51, 5, 10, 58],
[ 73, 28, 72, 25, 1]])
argmax gives you the index of the maximum value in the "flattened" array:
>>> np.argmax(df.values)
0
Now, you can use this index to find the row-column location on the stacked dataframe:
>>> df.stack().index[0]
(0, 'A')
Fast Alternative
If you need it fast, do as few steps as possible.
Working only on the NumPy array and finding the indices with np.argmax seems best:
v = df.values
i, j = [x[0] for x in np.unravel_index([np.argmax(v)], v.shape)]
[df.index[i], df.columns[j]]
Result:
[0, 'A']
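A slightly simpler variant of the same idea: unravel_index also accepts a scalar index, so the list wrapper is not needed:
v = df.values
i, j = np.unravel_index(np.argmax(v), v.shape)
print(df.index[i], df.columns[j])   # 0 A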
Timings
Timings for a large data frame:
df = pd.DataFrame(data=np.arange(int(1e6)).reshape(-1,5), columns=list('ABCDE'))
Sorted slowest to fastest:
Mask:
%timeit df.mask(~(df==df.max().max())).stack().index.tolist()
33.4 ms ± 982 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Stack-idxmax
%timeit list(df.stack().idxmax())
17.1 ms ± 139 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Stack-argmax
%timeit df.stack().index[np.argmax(df.values)]
14.8 ms ± 392 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Where
%%timeit
i,j = np.where(df.values == df.values.max())
list((df.index[i].values.tolist()[0],df.columns[j].values.tolist()[0]))
4.45 ms ± 84.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Argmax-unravel_index
%%timeit
v = df.values
i, j = [x[0] for x in np.unravel_index([np.argmax(v)], v.shape)]
[df.index[i], df.columns[j]]
499 µs ± 12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Compare
d = {'name': ['Mask', 'Stack-idxmax', 'Stack-argmax', 'Where', 'Argmax-unravel_index'],
     'time': [33.4, 17.1, 14.8, 4.45, 499],
     'unit': ['ms', 'ms', 'ms', 'ms', 'µs']}
timings = pd.DataFrame(d)
timings['seconds'] = timings.time * timings.unit.map({'ms': 1e-3, 'µs': 1e-6})
timings['factor slower'] = timings.seconds / timings.seconds.min()
timings.sort_values('factor slower')
Output:
name time unit seconds factor slower
4 Argmax-unravel_index 499.00 µs 0.000499 1.000000
3 Where 4.45 ms 0.004450 8.917836
2 Stack-argmax 14.80 ms 0.014800 29.659319
1 Stack-idxmax 17.10 ms 0.017100 34.268537
0 Mask 33.40 ms 0.033400 66.933868
So the "Argmax-unravel_index" version seems to be one to nearly two orders of magnitude faster for large data frames, i.e. where often speeds matters most.
Use stack to get a Series with a MultiIndex, then idxmax for the index of the max value:
print (df.stack().idxmax())
(0, 'A')
print (list(df.stack().idxmax()))
[0, 'A']
Detail:
print (df.stack())
0 A 100
B 9
C 1
D 12
E 6
1 A 80
B 10
C 67
D 15
E 91
2 A 20
B 67
C 1
D 56
E 23
3 A 12
B 51
C 5
D 10
E 58
4 A 73
B 28
C 72
D 25
E 1
dtype: int64
mask + max
df.mask(~(df==df.max().max())).stack().index.tolist()
Out[17]: [(0, 'A')]
This should work:
def max_df(df):
    m = None
    p = None
    # df.idxmax() gives, for each column, the row label of that column's maximum
    for idx, item in enumerate(df.idxmax()):
        c = df.columns[idx]
        val = df[c][item]
        if m is None or val > m:
            m = val
            p = item, c
    return p
This uses the idxmax function, then compares all of the values returned by it.
Example usage:
>>> df
A B
0 100 9
1 90 8
>>> max_df(df)
(0, 'A')
Here's a one-liner (for fun):
def max_df2(df):
    return max((df[df.columns[idx]][item], item, df.columns[idx]) for idx, item in enumerate(df.idxmax()))[1:]
In my opinion, for larger datasets stack() becomes inefficient, so let's use np.where to return the index positions:
i,j = np.where(df.values == df.values.max())
list((df.index[i].values.tolist()[0],df.columns[j].values.tolist()[0]))
Output:
[0, 'A']
Timings for larger dataframes:
df = pd.DataFrame(data=np.arange(10000).reshape(-1,5), columns=list('ABCDE'))
np.where method
%%timeit
i,j = np.where(df.values == df.values.max())
list((df.index[i].values.tolist()[0],df.columns[j].values.tolist()[0]))
1000 loops, best of 3: 364 µs per loop
Other stack methods
%timeit df.mask(~(df==df.max().max())).stack().index.tolist()
100 loops, best of 3: 7.68 ms per loop
%timeit df.stack().index[np.argmax(df.values)]
10 loops, best of 3: 50.5 ms per loop
%timeit list(df.stack().idxmax())
1000 loops, best of 3: 1.58 ms per loop
Even larger dataframe:
df = pd.DataFrame(data=np.arange(100000).reshape(-1,5), columns=list('ABCDE'))
Respectively:
1000 loops, best of 3: 1.62 ms per loop
10 loops, best of 3: 18.2 ms per loop
100 loops, best of 3: 5.69 ms per loop
100 loops, best of 3: 6.64 ms per loop
Simple, fast one-liner:
loc = [df.max(axis=1).idxmax(), df.max().idxmax()]
(For large data frames, .stack() can be quite slow.)
print('Max value:', df.stack().max())
print('Parameters :', df.stack().idxmax())
This is the best way imho.