Fastest way to index a very large Pandas dataframe - python

I have a very large knowledge graph in pandas dataframe format as follows.
This dataframe KG has more than 100 million rows:
pred subj obj
0 nationality BART USA
1 placeOfBirth BART NEWYORK
2 locatedIn NEWYORK USA
... ... ... ...
116390740 hasFather BART HOMMER
116390741 nationality HOMMER USA
116390743 placeOfBirth HOMMER NEWYORK
I am trying to get rows from this KG that have specific values for subj and obj.
a) I tried indexing into KG by generating a boolean mask with the isin() function:
KG[KG['subj'].isin(['BART', 'NEWYORK']) & KG['obj'].isin(['USA', 'HOMMER'])]
b) I also tried indexing KG with the query() function:
KG = KG.set_index(['subj','obj'], drop=True)
KG = KG.sort_index()
subj_substitution = ['BART', 'NEWYORK']
obj_substitution= ['USA', 'HOMMER']
KG.query(f"subj in {subj_substitution} & obj in {obj_substitution}
c) I also tried joining KG with two lookup DataFrames using merge(), as shown below.
subj_df
subj
0 BART
1 NEWYORK
obj_df
obj
0 USA
1 HOMMER
merge_result = pd.merge(KG, subj_df, on = ['subj']).drop_duplicates()
merge_result = pd.merge(merge_result, obj_df, on = ['obj']).drop_duplicates()
These methods result in the following:
pred subj obj
0 nationality BART USA
2 locatedIn NEWYORK USA
116390740 hasFather BART HOMMER
I used the timeit function to check the time for each as shown below.
timeit.timeit(lambda: KG[(KG['subj'].isin(['BART', 'NEWYORK']) & (KG['obj'].isin(['USA', 'HOMMER'])))] , number=10)
The runtimes were:
function   runtime
isin()     35.6 s
query()    155.2 s
merge()    288.9 s
I think isin() is the fastest way to index a very large DataFrame.
I would appreciate it if you could tell me a faster way than this.

I would personally go with isin or query with in.
The Pandas docs say:
Performance of query()
DataFrame.query() using numexpr is slightly faster than Python for large frames.
Note: You will only see the performance benefits of using the numexpr engine with DataFrame.query() if your frame has more than approximately 200,000 rows.
Details about query can be found here
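If you want to be explicit about the engine, here is a minimal sketch (assuming numexpr is installed; pandas normally picks it automatically when available):
# Sketch: request the numexpr engine explicitly (assumes numexpr is installed).
# Note that `in`/`not in` terms are still evaluated in plain Python internally,
# so the gain is mostly for numeric sub-expressions; if this raises on your
# pandas version, simply drop the engine argument.
result = KG.query(
    "(subj in ['BART', 'NEWYORK']) and (obj in ['USA', 'HOMMER'])",
    engine='numexpr',
)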
In your example, when I tested a KG DataFrame with shape (50331648, 3), i.e. 50M+ rows and 3 columns, the performance of query and isin was almost the same.
isin
%timeit KG[KG['subj'].isin(['BART', 'NEWYORK']) & KG['obj'].isin(['USA', 'HOMMER'])]
4.14 s ± 83.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
query with in operator
%timeit KG.query("(subj in ['BART', 'NEWYORK']) and (obj in ['USA', 'HOMMER'])")
4.08 s ± 82.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
query with isin
%timeit KG.query("(subj.isin(['BART', 'NEWYORK']))& (obj.isin(['USA', 'HOMMER']))")
4.99 s ± 210 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Test Data
d="""pred,subj,obj
nationality,BART,USA
placeOfBirth,BART,NEWYORK
locatedIn,NEWYORK,USA
hasFather,BART,HOMMER
nationality,HOMMER,USA
placeOfBirth,HOMMER,NEWYORK"""
KG = pd.read_csv(StringIO(d))
for i in range(23):
    KG = pd.concat([KG, KG])
KG.shape  # (50331648, 3)
If both performance and code readability (maintenance) are a concern, then at least for complex queries I would go with the query function.
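To illustrate the readability point, here is a sketch of a slightly more involved filter written both ways (the extra pred condition is hypothetical):
# Boolean-mask version:
mask = (
    KG['subj'].isin(['BART', 'NEWYORK'])
    & KG['obj'].isin(['USA', 'HOMMER'])
    & (KG['pred'] != 'hasFather')   # hypothetical extra condition
)
res_mask = KG[mask]
# Equivalent query() version:
res_query = KG.query(
    "subj in ['BART', 'NEWYORK'] and obj in ['USA', 'HOMMER'] and pred != 'hasFather'"
)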

Related

Split / extract a Pandas Column of nested tuples into multiple columns

I am currently working with an .xml file that I have converted into a data frame that looks like so:
I want to split the Coordinates column into 4 separate columns with the following layout:
to_longitude, to_latitude, from_longitude, from_latitude
I am attempting to do this with the code below:
pd.concat([df[[0]], df[1].str.split(',', expand=True)], axis=1)
However, this gives me the following error:
KeyError: "None of [Int64Index([0], dtype='int64')] are in the [columns]"
My question is what am I doing wrong and how can I correct my code to make it work as intended?
Consider using Pandas apply function -
def my_func(record):
    record['to_longitude'] = record['Coordinates'][0][0]
    record['to_latitude'] = record['Coordinates'][0][1]
    record['from_longitude'] = record['Coordinates'][1][0]
    record['from_latitude'] = record['Coordinates'][1][1]
    return record

new_df = df.apply(my_func, axis=1)
You can use the string accessor str[] to get the values of nested tuples to set up the 4 columns, as follows:
df['to_longitude'] = df['Coordinates'].str[0].str[0]
df['to_latitude'] = df['Coordinates'].str[0].str[1]
df['from_longitude'] = df['Coordinates'].str[1].str[0]
df['from_latitude'] = df['Coordinates'].str[1].str[1]
Demo
data = {'Link': ['abd', 'abe'],
'Coordinates': [((-4.21, 55.85), (-4.22, 55.86)), ((-4.25, 55.82), (-4.26, 55.83))]}
df = pd.DataFrame(data)
Link Coordinates
0 abd ((-4.21, 55.85), (-4.22, 55.86))
1 abe ((-4.25, 55.82), (-4.26, 55.83))
df['to_longitude'] = df['Coordinates'].str[0].str[0]
df['to_latitude'] = df['Coordinates'].str[0].str[1]
df['from_longitude'] = df['Coordinates'].str[1].str[0]
df['from_latitude'] = df['Coordinates'].str[1].str[1]
Link Coordinates to_longitude to_latitude from_longitude from_latitude
0 abd ((-4.21, 55.85), (-4.22, 55.86)) -4.21 55.85 -4.22 55.86
1 abe ((-4.25, 55.82), (-4.26, 55.83)) -4.25 55.82 -4.26 55.83
Execution time comparison:
Test data of 40,000 rows
df2 = pd.concat([df] * 20000, ignore_index=True)
Solution 1: Tom Ron's solution
def my_func(record):
    record['to_longitude'] = record['Coordinates'][0][0]
    record['to_latitude'] = record['Coordinates'][0][1]
    record['from_longitude'] = record['Coordinates'][1][0]
    record['from_latitude'] = record['Coordinates'][1][1]
    return record
%timeit new_df = df2.apply(my_func, axis=1)
Result:
1min 16s ± 2.19 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
Solution 2: SeaBean's solution
%%timeit
df2['to_longitude'] = df2['Coordinates'].str[0].str[0]
df2['to_latitude'] = df2['Coordinates'].str[0].str[1]
df2['from_longitude'] = df2['Coordinates'].str[1].str[0]
df2['from_latitude'] = df2['Coordinates'].str[1].str[1]
Result:
165 ms ± 1.19 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Solution 3: Anurag Dabas' solution
%%timeit
cols=['to_longitude','to_latitude','from_longitude','from_latitude']
out=pd.DataFrame(np.hstack(df2['Coordinates'].values),columns=cols)
#OR
#out=pd.DataFrame(np.concatenate(df['Coordinates'].values,axis=1),columns=cols)
Result:
Couldn't benchmark this one, since both options raised an error:
ValueError: Shape of passed values is (2, 80000), indices imply (2, 4)
Summary
Solution 1: Tom Ron's solution
1min 16s ± 2.19 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
Solution 2: SeaBean's solution
165 ms ± 1.19 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Solution 3: Anurag Dabas' solution
Couldn't benchmark it, since both options raised an error on the large dataset
For the first 2 solutions with benchmarking results, SeaBean's solution is about 460x faster than Tom Ron's solution (165 ms vs 1 min 16 s) for 40,000 rows of data.
The faster execution time comes from using vectorized Pandas operations (optimized with fast C/Cython code) throughout, instead of apply() on axis=1, which under the hood is slow Python-level looping.
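As a side note, here is a guess at the reshaping Solution 3 may have intended (not from the original answer): flattening each nested tuple into a length-4 row avoids the shape mismatch.
import numpy as np
import pandas as pd

# Each element is ((to_lon, to_lat), (from_lon, from_lat)), so np.array gives
# shape (n, 2, 2); reshape to (n, 4) before building the DataFrame.
cols = ['to_longitude', 'to_latitude', 'from_longitude', 'from_latitude']
arr = np.array(df2['Coordinates'].tolist()).reshape(len(df2), 4)
out = pd.DataFrame(arr, columns=cols, index=df2.index)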

Concat strings from dataframe columns in a loop (Python 3.8)

Suppose I have a DataFrame "DS_df" containing strings and numbers. The three columns "LAultimateparentcountry", "borrowerultimateparentcountry" and "tot" form a relationship.
How can I create a dictionary out of those three columns (for the entire dataset, while order matters)? I would need to access the two countries as one variable, and tot as another. I've tried the code below so far, but this merely yields me a list with separate items. For some reason, I am also not able to get .join to work, as the df is quite big (+900k rows).
new_list = []
for i, row in DS_df.iterrows():
    new_list.append(row["LAultimateparentcountry"])
    new_list.append(row["borrowerultimateparentcountry"])
    new_list.append(row["tot"])
Preferred outcome would be a dictionary, where I could access "Germany_Switzerland": 56708 for example. Any help or advice is much appreciated.
Cheers
You can use a dict this way:
countries_map = {}
for index, row in DS_df.iterrows():
    curr_rel = f'{row["LAultimateparentcountry"]}_{row["borrowerultimateparentcountry"]}'
    countries_map[curr_rel] = row["tot"]
If you don't want to overwrite the value for a key that already exists
(i.e., keep its first appearance):
countries_map = {}
for index, row in DS_df.iterrows():
    curr_rel = f'{row["LAultimateparentcountry"]}_{row["borrowerultimateparentcountry"]}'
    if curr_rel not in countries_map:
        countries_map[curr_rel] = row["tot"]
When performing operations on a dataframe, it's always good to think of a column-wise solution rather than a row-wise one.
If your dataframe has 900k+ rows, vectorized operations on the dataframe are likely a better option.
Below are two solutions:
Using pd.Series + to_dict():
pd.Series(DS_df.tot.values, index=DS_df.LAultimateparentcountry.str.cat(DS_df.borrowerultimateparentcountry, sep="_")).to_dict()
Using zip() + dict():
dict(zip(DS_df.LAultimateparentcountry.str.cat(DS_df.borrowerultimateparentcountry, sep="_"), DS_df.tot))
Test Dataframe:
import pandas as pd

DS_df = pd.DataFrame({
    'LAultimateparentcountry': ['India', 'Germany', 'India'],
    'borrowerultimateparentcountry': ['France', 'Ireland', 'France'],
    'tot': [56708, 87902, 91211]
})
DS_df
LAultimateparentcountry borrowerultimateparentcountry tot
0 India France 56708
1 Germany Ireland 87902
2 India France 91211
Output of both solutions:
{'India_France': 91211, 'Germany_Ireland': 87902}
If the formed key has duplicates, the later value overwrites the earlier one (the last occurrence wins).
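If you instead want the first appearance to win (matching the loop-based variant above), a sketch using the same vectorised key construction:
# Build the combined keys once, then drop duplicated keys so the first row wins.
keys = DS_df.LAultimateparentcountry.str.cat(DS_df.borrowerultimateparentcountry, sep="_")
first_mask = ~keys.duplicated()
countries_map = dict(zip(keys[first_mask], DS_df.tot[first_mask]))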
Which solution is more performant?
short answer -
zip() + dict() # if the rows are approx. below 1000000
pd.Series + to_dict() # if the rows are approx. above 1000000
Long answer - Below are the tests:
Test with 30 rows and 3 columns
zip() + dict()
%timeit dict(zip(DS_df.LAultimateparentcountry.str.cat(DS_df.borrowerultimateparentcountry, sep="_"), DS_df.tot))
297 µs ± 21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
pd.Series + to_dict():
%timeit pd.Series(DS_df.tot.values, index=DS_df.LAultimateparentcountry.str.cat(DS_df.borrowerultimateparentcountry, sep="_")).to_dict()
506 µs ± 35.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Test with 6291456 rows and 3 columns
pd.Series + to_dict()
%timeit pd.Series(DS_df.tot.values, index=DS_df.LAultimateparentcountry.str.cat(DS_df.borrowerultimateparentcountry, sep="_")).to_dict()
3.92 s ± 77.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
zip + dict()
%timeit dict(zip(DS_df.LAultimateparentcountry.str.cat(DS_df.borrowerultimateparentcountry, sep="_"), DS_df.tot))
3.97 s ± 226 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

When should I (not) want to use pandas apply() in my code?

I have seen many answers posted to questions on Stack Overflow involving the use of the Pandas method apply. I have also seen users commenting under them saying that "apply is slow, and should be avoided".
I have read many articles on the topic of performance that explain apply is slow. I have also seen a disclaimer in the docs about how apply is simply a convenience function for passing UDFs (can't seem to find that now). So, the general consensus is that apply should be avoided if possible. However, this raises the following questions:
If apply is so bad, then why is it in the API?
How and when should I make my code apply-free?
Are there ever any situations where apply is good (better than other possible solutions)?
apply, the Convenience Function you Never Needed
We start by addressing the questions in the OP, one by one.
"If apply is so bad, then why is it in the API?"
DataFrame.apply and Series.apply are convenience functions defined on the DataFrame and Series objects respectively. apply accepts any user-defined function that applies a transformation/aggregation on a DataFrame. apply is effectively a silver bullet that does whatever any existing pandas function cannot do.
Some of the things apply can do:
Run any user-defined function on a DataFrame or Series
Apply a function either row-wise (axis=1) or column-wise (axis=0) on a DataFrame
Perform index alignment while applying the function
Perform aggregation with user-defined functions (however, we usually prefer agg or transform in these cases)
Perform element-wise transformations
Broadcast aggregated results to original rows (see the result_type argument).
Accept positional/keyword arguments to pass to the user-defined functions.
...Among others. For more information, see Row or Column-wise Function Application in the documentation.
So, with all these features, why is apply bad? It is because apply is slow. Pandas makes no assumptions about the nature of your function, and so iteratively applies your function to each row/column as necessary. Additionally, handling all of the situations above means apply incurs some major overhead at each iteration. Further, apply consumes a lot more memory, which is a challenge for memory bounded applications.
There are very few situations where apply is appropriate to use (more on that below). If you're not sure whether you should be using apply, you probably shouldn't.
Let's address the next question.
"How and when should I make my code apply-free?"
To rephrase, here are some common situations where you will want to get rid of any calls to apply.
Numeric Data
If you're working with numeric data, there is likely already a vectorized cython function that does exactly what you're trying to do (if not, please either ask a question on Stack Overflow or open a feature request on GitHub).
Contrast the performance of apply for a simple addition operation.
df = pd.DataFrame({"A": [9, 4, 2, 1], "B": [12, 7, 5, 4]})
df
A B
0 9 12
1 4 7
2 2 5
3 1 4
df.apply(np.sum)
A 16
B 28
dtype: int64
df.sum()
A 16
B 28
dtype: int64
Performance-wise, there's no comparison: the cythonized equivalent is much faster. There's no need for a graph, because the difference is obvious even for toy data.
%timeit df.apply(np.sum)
%timeit df.sum()
2.22 ms ± 41.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
471 µs ± 8.16 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Even if you enable passing raw arrays with the raw argument, it's still twice as slow.
%timeit df.apply(np.sum, raw=True)
840 µs ± 691 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Another example:
df.apply(lambda x: x.max() - x.min())
A 8
B 8
dtype: int64
df.max() - df.min()
A 8
B 8
dtype: int64
%timeit df.apply(lambda x: x.max() - x.min())
%timeit df.max() - df.min()
2.43 ms ± 450 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.23 ms ± 14.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In general, seek out vectorized alternatives if possible.
String/Regex
Pandas provides "vectorized" string functions in most situations, but there are rare cases where those functions do not... "apply", so to speak.
A common problem is to check whether a value in a column is present in another column of the same row.
df = pd.DataFrame({
'Name': ['mickey', 'donald', 'minnie'],
'Title': ['wonderland', "welcome to donald's castle", 'Minnie mouse clubhouse'],
'Value': [20, 10, 86]})
df
Name Value Title
0 mickey 20 wonderland
1 donald 10 welcome to donald's castle
2 minnie 86 Minnie mouse clubhouse
This should return the second and third rows, since "donald" and "minnie" are present in their respective "Title" columns.
Using apply, this would be done using
df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)
0 False
1 True
2 True
dtype: bool
df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)]
Name Title Value
1 donald welcome to donald's castle 10
2 minnie Minnie mouse clubhouse 86
However, a better solution exists using list comprehensions.
df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]]
Name Title Value
1 donald welcome to donald's castle 10
2 minnie Minnie mouse clubhouse 86
%timeit df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)]
%timeit df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]]
2.85 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
788 µs ± 16.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The thing to note here is that iterative routines happen to be faster than apply, because of the lower overhead. If you need to handle NaNs and invalid dtypes, you can build on this using a custom function you can then call with arguments inside the list comprehension.
For more information on when list comprehensions should be considered a good option, see my writeup: Are for-loops in pandas really bad? When should I care?.
Note
Date and datetime operations also have vectorized versions. So, for example, you should prefer pd.to_datetime(df['date']) over, say, df['date'].apply(pd.to_datetime).
Read more at the docs.
A Common Pitfall: Exploding Columns of Lists
s = pd.Series([[1, 2]] * 3)
s
0 [1, 2]
1 [1, 2]
2 [1, 2]
dtype: object
People are tempted to use apply(pd.Series). This is horrible in terms of performance.
s.apply(pd.Series)
0 1
0 1 2
1 1 2
2 1 2
A better option is to listify the column and pass it to pd.DataFrame.
pd.DataFrame(s.tolist())
0 1
0 1 2
1 1 2
2 1 2
%timeit s.apply(pd.Series)
%timeit pd.DataFrame(s.tolist())
2.65 ms ± 294 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
816 µs ± 40.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Lastly,
"Are there any situations where apply is good?"
Apply is a convenience function, so there are situations where the overhead is negligible enough to forgive. It really depends on how many times the function is called.
Functions that are Vectorized for Series, but not DataFrames
What if you want to apply a string operation on multiple columns? What if you want to convert multiple columns to datetime? These functions are vectorized for Series only, so they must be applied over each column that you want to convert/operate on.
df = pd.DataFrame(
    pd.date_range('2018-12-31', '2019-01-31', freq='2D').date.astype(str).reshape(-1, 2),
    columns=['date1', 'date2'])
df
date1 date2
0 2018-12-31 2019-01-02
1 2019-01-04 2019-01-06
2 2019-01-08 2019-01-10
3 2019-01-12 2019-01-14
4 2019-01-16 2019-01-18
5 2019-01-20 2019-01-22
6 2019-01-24 2019-01-26
7 2019-01-28 2019-01-30
df.dtypes
date1 object
date2 object
dtype: object
This is an admissible case for apply:
df.apply(pd.to_datetime, errors='coerce').dtypes
date1 datetime64[ns]
date2 datetime64[ns]
dtype: object
Note that it would also make sense to stack, or just use an explicit loop. All these options are slightly faster than using apply, but the difference is small enough to forgive.
%timeit df.apply(pd.to_datetime, errors='coerce')
%timeit pd.to_datetime(df.stack(), errors='coerce').unstack()
%timeit pd.concat([pd.to_datetime(df[c], errors='coerce') for c in df], axis=1)
%timeit for c in df.columns: df[c] = pd.to_datetime(df[c], errors='coerce')
5.49 ms ± 247 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.94 ms ± 48.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.16 ms ± 216 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.41 ms ± 1.71 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
You can make a similar case for other operations such as string operations, or conversion to category.
u = df.apply(lambda x: x.str.contains(...))
v = df.apply(lambda x: x.astype('category'))
versus
u = pd.concat([df[c].str.contains(...) for c in df], axis=1)
v = df.copy()
for c in df:
    v[c] = df[c].astype('category')
And so on...
Converting Series to str: astype versus apply
This seems like an idiosyncrasy of the API. Using apply to convert integers in a Series to string is comparable to (and sometimes faster than) using astype.
The graph was plotted using the perfplot library.
import numpy as np
import pandas as pd
import perfplot

perfplot.show(
    setup=lambda n: pd.Series(np.random.randint(0, n, n)),
    kernels=[
        lambda s: s.astype(str),
        lambda s: s.apply(str)
    ],
    labels=['astype', 'apply'],
    n_range=[2**k for k in range(1, 20)],
    xlabel='N',
    logx=True,
    logy=True,
    equality_check=lambda x, y: (x == y).all())
With floats, I see that astype is consistently as fast as, or slightly faster than, apply. So this has to do with the fact that the data in the test is of integer type.
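For reference, the float variant of that benchmark could be set up along these lines (a sketch; only the setup differs):
# Same perfplot comparison, but with float data instead of integers.
perfplot.show(
    setup=lambda n: pd.Series(np.random.randn(n)),
    kernels=[
        lambda s: s.astype(str),
        lambda s: s.apply(str)
    ],
    labels=['astype', 'apply'],
    n_range=[2**k for k in range(1, 20)],
    xlabel='N',
    logx=True,
    logy=True,
    equality_check=None)  # disabled: string renderings of floats can differ slightly between the two paths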
GroupBy operations with chained transformations
GroupBy.apply has not been discussed until now, but GroupBy.apply is also an iterative convenience function to handle anything that the existing GroupBy functions do not.
One common requirement is to perform a GroupBy and then two successive operations, such as a "lagged cumsum":
df = pd.DataFrame({"A": list('aabcccddee'), "B": [12, 7, 5, 4, 5, 4, 3, 2, 1, 10]})
df
A B
0 a 12
1 a 7
2 b 5
3 c 4
4 c 5
5 c 4
6 d 3
7 d 2
8 e 1
9 e 10
You'd need two successive groupby calls here:
df.groupby('A').B.cumsum().groupby(df.A).shift()
0 NaN
1 12.0
2 NaN
3 NaN
4 4.0
5 9.0
6 NaN
7 3.0
8 NaN
9 1.0
Name: B, dtype: float64
Using apply, you can shorten this to a single call.
df.groupby('A').B.apply(lambda x: x.cumsum().shift())
0 NaN
1 12.0
2 NaN
3 NaN
4 4.0
5 9.0
6 NaN
7 3.0
8 NaN
9 1.0
Name: B, dtype: float64
It is very hard to quantify the performance because it depends on the data. But in general, apply is an acceptable solution if the goal is to reduce a groupby call (because groupby is also quite expensive).
Other Caveats
Aside from the caveats mentioned above, it is also worth mentioning that apply operates on the first row (or column) twice. This is done to determine whether the function has any side effects. If not, apply may be able to use a fast-path for evaluating the result, else it falls back to a slow implementation.
df = pd.DataFrame({
'A': [1, 2],
'B': ['x', 'y']
})
def func(x):
print(x['A'])
return x
df.apply(func, axis=1)
# 1
# 1
# 2
A B
0 1 x
1 2 y
This behaviour is also seen in GroupBy.apply on pandas versions <0.25 (it was fixed for 0.25, see here for more information.)
Not all applys are alike
The chart below suggests when to consider apply [1]. Green means possibly efficient; red, avoid.
Some of this is intuitive: pd.Series.apply is a Python-level row-wise loop, ditto pd.DataFrame.apply row-wise (axis=1). The misuses of these are many and wide-ranging. The other post deals with them in more depth. Popular solutions are to use vectorised methods, list comprehensions (assumes clean data), or efficient tools such as the pd.DataFrame constructor (e.g. to avoid apply(pd.Series)).
If you are using pd.DataFrame.apply row-wise, specifying raw=True (where possible) is often beneficial. At this stage, numba is usually a better choice.
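As a sketch of what the numba route might look like for a row-wise reduction (assuming numba is installed; row_range is a name invented here, not a pandas or numba API):
import numba
import numpy as np
import pandas as pd

@numba.njit
def row_range(values):
    # Row-wise max - min over a 2D float array, compiled to machine code by numba.
    out = np.empty(values.shape[0])
    for i in range(values.shape[0]):
        out[i] = values[i].max() - values[i].min()
    return out

df = pd.DataFrame(np.random.random((10**6, 3)))
# Roughly equivalent to df.apply(lambda x: x.max() - x.min(), axis=1), but compiled.
result = pd.Series(row_range(df.to_numpy()), index=df.index)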
GroupBy.apply: generally favoured
Repeating groupby operations to avoid apply will hurt performance. GroupBy.apply is usually fine here, provided the methods you use in your custom function are themselves vectorised. Sometimes there is no native Pandas method for a groupwise aggregation you wish to apply. In this case, for a small number of groups apply with a custom function may still offer reasonable performance.
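As a small sketch of that last case (the "top-2 sum per group" metric here is hypothetical; the point is that the calls inside the UDF are themselves vectorised):
df = pd.DataFrame({"A": list('aabcccddee'), "B": [12, 7, 5, 4, 5, 4, 3, 2, 1, 10]})
# One groupwise aggregation with no single native equivalent:
df.groupby('A')['B'].apply(lambda g: g.nlargest(2).sum())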
pd.DataFrame.apply column-wise: a mixed bag
pd.DataFrame.apply column-wise (axis=0) is an interesting case. For a small number of rows versus a large number of columns, it's almost always expensive. For a large number of rows relative to columns, the more common case, you may sometimes see significant performance improvements using apply:
# Python 3.7, Pandas 0.23.4
np.random.seed(0)
df = pd.DataFrame(np.random.random((10**7, 3))) # Scenario_1, many rows
df = pd.DataFrame(np.random.random((10**4, 10**3))) # Scenario_2, many columns
# Scenario_1 | Scenario_2
%timeit df.sum() # 800 ms | 109 ms
%timeit df.apply(pd.Series.sum) # 568 ms | 325 ms
%timeit df.max() - df.min() # 1.63 s | 314 ms
%timeit df.apply(lambda x: x.max() - x.min()) # 838 ms | 473 ms
%timeit df.mean() # 108 ms | 94.4 ms
%timeit df.apply(pd.Series.mean) # 276 ms | 233 ms
[1] There are exceptions, but these are usually marginal or uncommon. A couple of examples:
df['col'].apply(str) may slightly outperform df['col'].astype(str).
df.apply(pd.to_datetime) working on strings doesn't scale well with rows versus a regular for loop.
For axis=1 (i.e. row-wise functions), you can just use the following function in lieu of apply. I wonder why this isn't the pandas behavior. (Untested with compound indexes, but it does appear to be much faster than apply.)
def faster_df_apply(df, func):
    cols = list(df.columns)
    data, index = [], []
    for row in df.itertuples(index=True):
        row_dict = {f: v for f, v in zip(cols, row[1:])}
        data.append(func(row_dict))
        index.append(row[0])
    return pd.Series(data, index=index)
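A hypothetical usage example, for illustration only (the lambda mimics a row-wise apply):
# Row-wise difference of two columns without DataFrame.apply.
df = pd.DataFrame({"A": [9, 4, 2, 1], "B": [12, 7, 5, 4]})
result = faster_df_apply(df, lambda row: row["B"] - row["A"])
# Compare with: df.apply(lambda row: row["B"] - row["A"], axis=1)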
Are there ever any situations where apply is good?
Yes, sometimes.
Task: decode Unicode strings.
import numpy as np
import pandas as pd
import unidecode
s = pd.Series(['mañana','Ceñía'])
s.head()
0 mañana
1 Ceñía
s.apply(unidecode.unidecode)
0 manana
1 Cenia
Update
I was by no means advocating for the use of apply, just thinking that since NumPy cannot deal with the above situation, it could have been a good candidate for pandas apply. But I was forgetting the plain ol' list comprehension, thanks to the reminder by @jpp.
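For reference, that list-comprehension alternative would look something like this (a sketch):
pd.Series([unidecode.unidecode(x) for x in s], index=s.index)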

Frequency counts for a pandas column of lists

I have a pandas DataFrame with a column containing strings separated by a pipe. These are movie genres. They look like this:
Genre
Adventure|Animation|Children|Comedy|Fantasy
Comedy|Romance
...
I used str.split to turn each cell back into a list, like this:
Genre
[Adventure, Animation, Children, Comedy, Fantasy]
[Adventure, Children, Fantasy]
[Comedy, Romance]
[Comedy, Drama, Romance]
[Comedy]
I want to get a count for each genre. For example, how many times did Comedy appear? How many times did Adventure, and so on? I can't seem to figure this out.
This would look like
Comedy 4
Adventure 2
Animation 1
(...and so on...)
As somebody from the for-loop club, I recommend using python's C-accelerated routines—itertools.chain, and collections.Counter—for performance.
from itertools import chain
from collections import Counter
pd.Series(
Counter(chain.from_iterable(x.split('|') for x in df.Genre)))
Adventure 1
Animation 1
Children 1
Comedy 2
Fantasy 1
Romance 1
dtype: int64
Why do I think CPython functions are better than pandas "vectorised" string functions? They are inherently hard to vectorise. You can read more at For loops with pandas - When should I care?.
If you have to deal with NaNs, you can call a function that handles exceptions gracefully:
def try_split(x):
    try:
        return x.split('|')
    except AttributeError:
        return []
pd.Series(
Counter(chain.from_iterable(try_split(x) for x in df.Genre)))
pandaically, you would do this with split, stack, and value_counts.
df['Genre'].str.split('|', expand=True).stack().value_counts()
Comedy 2
Romance 1
Children 1
Animation 1
Fantasy 1
Adventure 1
dtype: int64
The timing difference is obvious even for tiny DataFrames.
%timeit df['Genre'].str.get_dummies(sep='|').sum()
%timeit df['Genre'].str.split('|', expand=True).stack().value_counts()
%%timeit
pd.Series(
Counter(chain.from_iterable(try_split(x) for x in df.Genre)))
2.8 ms ± 68.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.4 ms ± 210 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
320 µs ± 9.71 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
I'm also in favor of using chain + a for loop.
Just to document it, one more possible way is to use get_dummies:
df['Genre'].str.get_dummies(sep='|').sum()
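For completeness, on pandas 0.25+ there is also a Series.explode route, not included in the timings above (a sketch):
# Split, explode to one genre per row, then count.
df['Genre'].str.split('|').explode().value_counts()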

Pandas groupby nlargest sum

I am trying to use groupby, nlargest, and sum functions in Pandas together, but having trouble making it work.
State County Population
Alabama a 100
Alabama b 50
Alabama c 40
Alabama d 5
Alabama e 1
...
Wyoming a.51 180
Wyoming b.51 150
Wyoming c.51 56
Wyoming d.51 5
I want to use groupby to select by state, then get the top 2 counties by population. Then use only those top 2 county population numbers to get a sum for that state.
In the end, I'll have a list that will have the state and the population (of its top 2 counties).
I can get the groupby and nlargest to work, but getting the sum of the nlargest(2) is a challenge.
The line I have right now is simply: df.groupby('State')['Population'].nlargest(2)
You can use apply after performing the groupby:
df.groupby('State')['Population'].apply(lambda grp: grp.nlargest(2).sum())
I think the issue you're having is that df.groupby('State')['Population'].nlargest(2) will return a Series indexed by (State, original index), so you can no longer do group-level operations. In general, if you want to perform multiple operations in a group, you'll need to use apply/agg.
The resulting output:
State
Alabama 150
Wyoming 330
EDIT
A slightly cleaner approach, as suggested by @cs95:
df.groupby('State')['Population'].nlargest(2).sum(level=0)
This is slightly slower than using apply on larger DataFrames though.
Using the following setup:
import numpy as np
import pandas as pd
from string import ascii_letters
n = 10**6
df = pd.DataFrame({'A': np.random.choice(list(ascii_letters), size=n),
                   'B': np.random.randint(10**7, size=n)})
I get the following timings:
In [3]: %timeit df.groupby('A')['B'].apply(lambda grp: grp.nlargest(2).sum())
103 ms ± 1.08 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [4]: %timeit df.groupby('A')['B'].nlargest(2).sum(level=0)
147 ms ± 3.38 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
The slower performance is potentially caused by the level kwarg in sum performing a second groupby under the hood.
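On newer pandas versions, where the .sum(level=0) spelling from the edit above is deprecated, an equivalent sketch of the same idea:
# Group the resulting (State, original index) Series by its outer level explicitly.
df.groupby('State')['Population'].nlargest(2).groupby(level=0).sum()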
Using agg, the grouping logic looks like:
df.groupby('State').agg({'Population': lambda x: x.nlargest(2).sum()})
This results in another DataFrame object, which you could query to find the most populous states, etc.
Population
State
Alabama 150
Wyoming 330
