Given the update to pandas 0.20.0 and the deprecation of .ix, I am wondering what the most efficient way is to get the same result using the remaining .loc and .iloc. I just answered this question, but the second option (not using .ix) seems inefficient and verbose.
Snippet:
print(df.iloc[df.loc[df['cap'].astype(float) > 35].index, :-1])
Is this the proper way to go when using both conditional and index position filtering?
You can stay in the world of a single loc by getting at the index values you need by slicing that particular index with positions.
df.loc[
df['cap'].astype(float) > 35,
df.columns[:-1]
]
Generally, you would prefer to avoid chained indexing in pandas (though, strictly speaking, you're actually using two different indexing methods). You can't modify your dataframe this way (details in the docs), and the docs cite performance as another reason (indexing once vs. twice).
As for the latter, the difference is usually insignificant (or rather, unlikely to be a bottleneck in your code), and it actually doesn't even seem to hold (at least in the following example):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.uniform(size=(100000, 10)), columns=list('abcdefghij'))
# Get columns number 2:5 where value in 'a' is greater than 0.5
# (i.e. Boolean mask along axis 0, position slice of axis 1)
# Deprecated .ix method
%timeit df.ix[df['a'] > 0.5,2:5]
100 loops, best of 3: 2.14 ms per loop
# Boolean, then position
%timeit df.loc[df['a'] > 0.5,].iloc[:,2:5]
100 loops, best of 3: 2.14 ms per loop
# Position, then Boolean
%timeit df.iloc[:,2:5].loc[df['a'] > 0.5,]
1000 loops, best of 3: 1.75 ms per loop
# .loc
%timeit df.loc[df['a'] > 0.5, df.columns[2:5]]
100 loops, best of 3: 2.64 ms per loop
# .iloc
%timeit df.iloc[np.where(df['a'] > 0.5)[0],2:5]
100 loops, best of 3: 9.91 ms per loop
Bottom line: If you really want to avoid .ix, and you're not intending to modify values in your dataframe, just go with chained indexing. On the other hand (the 'proper' but arguably messier way), if you do need to modify values, either do .iloc with np.where() or .loc with integer slices of df.index or df.columns.
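For instance, a minimal sketch of assigning values through either route (the frame and the condition are toy assumptions, not the original data):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.uniform(size=(10, 5)), columns=list('abcde'))

# .loc with a Boolean row mask and an integer slice of the column labels
df.loc[df['a'] > 0.5, df.columns[2:5]] = 0

# Equivalent .iloc route: convert the Boolean mask to positions with np.where
df.iloc[np.where(df['a'] > 0.5)[0], 2:5] = 0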
How about breaking this into a two-step indexing:
df[df['cap'].astype(float) > 35].iloc[:,:-1]
or even:
df[df['cap'].astype(float) > 35].drop('cap', axis=1)
Pandas removed .ix and encourages you to use .iloc and .loc instead.
For this, you can refer to the definitions of iloc and loc and how they differ from ix.
This might help you:
How are iloc, ix and loc different?
Related
I would like to have a quick index lookup when using a pandas dataframe. As noted here and there, I understand that I need to keep my index unique, otherwise all hope is lost.
I made sure that the index is sorted and unique:
df = df.sort_index()
assert df.index.is_unique
I measured the lookup speed:
%timeit df.at[tuple(q.values), 'column']
1000 loops, best of 3: 185 µs per loop
Then, when I moved the index to a separate python dictionary:
index = {}
for i in df.index:
    index[i] = np.random.randint(1000)
assert len(index) == len(df.index)
I got a huge speedup:
%timeit index[tuple(q.values)]
100000 loops, best of 3: 2.7 µs per loop
Why is it so? Am I doing something wrong? Is there a way to replicate python dict's speed (or something in <5x range) in a pandas index?
Please help me understand why this "replace from dictionary" operation is slow in Python/Pandas:
# Series has 200 rows and 1 column
# Dictionary has 11269 key-value pairs
series.replace(dictionary, inplace=True)
Dictionary lookups should be O(1). Replacing a value in a column should be O(1). Isn't this a vectorized operation? Even if it's not vectorized, iterating 200 rows is only 200 iterations, so how can it be slow?
Here is a SSCCE demonstrating the issue:
import pandas as pd
import random
# Initialize dummy data
dictionary = {}
orig = []
for x in range(11270):
    dictionary[x] = 'Some string ' + str(x)
for x in range(200):
    orig.append(random.randint(1, 11269))
series = pd.Series(orig)
# The actual operation we care about
print('Starting...')
series.replace(dictionary, inplace=True)
print('Done.')
Running that command takes more than 1 second on my machine, which is thousands of times longer than expected for fewer than 1,000 operations.
It looks like replace has a bit of overhead, and explicitly telling the Series what to do via map yields the best performance:
series = series.map(lambda x: dictionary.get(x,x))
If you're sure that all keys are in your dictionary you can get a very slight performance boost by not creating a lambda, and directly supplying the dictionary.get function. Any keys that are not present will return NaN via this method, so beware:
series = series.map(dictionary.get)
You can also supply just the dictionary itself, but this appears to introduce a bit of overhead:
series = series.map(dictionary)
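A quick illustration of the missing-key behaviour (toy data, assumed only for this example):
import pandas as pd

s = pd.Series([1, 2, 99])
d = {1: 'one', 2: 'two'}

s.map(lambda x: d.get(x, x))   # 99 is kept as-is
s.map(d.get)                   # 99 becomes NaN
s.map(d)                       # 99 becomes NaN as well, with extra overhead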
Timings
Some timing comparisons using your example data:
%timeit series.map(dictionary.get)
10000 loops, best of 3: 124 µs per loop
%timeit series.map(lambda x: dictionary.get(x,x))
10000 loops, best of 3: 150 µs per loop
%timeit series.map(dictionary)
100 loops, best of 3: 5.45 ms per loop
%timeit series.replace(dictionary)
1 loop, best of 3: 1.23 s per loop
.replace can do incomplete substring matches, while .map requires complete values to be supplied in the dictionary (or it returns NaN). A fast but still general solution (one that can handle substrings) is to first use .replace on a dict built from all distinct values of the column (obtained e.g. with .value_counts().index), and then map every row of the Series through that dict with .map. This combination can handle, for instance, special national character replacements (full substring replacements) on 1M-row columns in about a quarter of a second, where .replace alone would take 15 seconds.
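A minimal sketch of that combination, assuming a string column s and a substring mapping char_map (both invented for illustration):
import pandas as pd

s = pd.Series(['café', 'naïve', 'café'] * 300000)
char_map = {'é': 'e', 'ï': 'i'}

# 1. The distinct values are usually far fewer than the rows.
distinct = pd.Series(s.value_counts().index)

# 2. Run the slow, substring-aware .replace on the distinct values only.
replaced = distinct.replace(char_map, regex=True)

# 3. Build a full-value lookup dict and map the whole column through it (fast).
lookup = dict(zip(distinct, replaced))
result = s.map(lookup)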
Thanks to @root: I did the benchmarking again and found different results on pandas v1.1.4.
I found series.map(dictionary) fastest; note that it also returns NaN if a key is not present.
Why do we use 'loc' for pandas dataframes? It seems the following code, with or without using loc, compiles and runs at a similar speed.
%timeit df_user1 = df.loc[df.user_id=='5561']
100 loops, best of 3: 11.9 ms per loop
or
%timeit df_user1_noloc = df[df.user_id=='5561']
100 loops, best of 3: 12 ms per loop
So why use loc?
Edit: This has been flagged as a duplicate question. But although pandas iloc vs ix vs loc explanation? does mention that
you can do column retrieval just by using the data frame's __getitem__:
df['time'] # equivalent to df.loc[:, 'time']
it does not say why we use loc. Although it does explain lots of features of loc, my specific question is 'why not just omit loc altogether?', for which I have accepted a very detailed answer below.
Also, in that other post the answer (which I do not think is an answer) is very hidden in the discussion, and anyone searching for what I was looking for would find it hard to locate the information; they would be much better served by the answer provided to my question.
Explicit is better than implicit.
df[boolean_mask] selects rows where boolean_mask is True, but there is a corner case when you might not want it to: when df has boolean-valued column labels:
In [229]: df = pd.DataFrame({True:[1,2,3],False:[3,4,5]}); df
Out[229]:
False True
0 3 1
1 4 2
2 5 3
You might want to use df[[True]] to select the True column. Instead it raises a ValueError:
In [230]: df[[True]]
ValueError: Item wrong length 1 instead of 3.
Versus using loc:
In [231]: df.loc[[True]]
Out[231]:
False True
0 3 1
In contrast, the following does not raise ValueError even though the structure of df2 is almost the same as that of df above:
In [258]: df2 = pd.DataFrame({'A':[1,2,3],'B':[3,4,5]}); df2
Out[258]:
A B
0 1 3
1 2 4
2 3 5
In [259]: df2[['B']]
Out[259]:
B
0 3
1 4
2 5
Thus, df[boolean_mask] does not always behave the same as df.loc[boolean_mask]. Even though this is arguably an unlikely use case, I would recommend always using df.loc[boolean_mask] instead of df[boolean_mask] because the meaning of df.loc's syntax is explicit. With df.loc[indexer] you know automatically that df.loc is selecting rows. In contrast, it is not clear if df[indexer] will select rows or columns (or raise ValueError) without knowing details about indexer and df.
df.loc[row_indexer, column_indexer] can select rows and columns. df[indexer] can only select rows or columns, depending on the type of values in indexer and the type of column labels df has (again, are they boolean?).
In [237]: df2.loc[[True,False,True], 'B']
Out[237]:
0 3
2 5
Name: B, dtype: int64
When a slice is passed to df.loc the end-points are included in the range. When a slice is passed to df[...], the slice is interpreted as a half-open interval:
In [239]: df2.loc[1:2]
Out[239]:
A B
1 2 4
2 3 5
In [271]: df2[1:2]
Out[271]:
A B
1 2 4
Performance consideration of "chained assignment" with and without using .loc
Let me supplement the already very good answers with a consideration of system performance.
The question itself includes a comparison of the system performance (execution time) of two pieces of code, with and without using .loc. The execution times are roughly the same for the code samples quoted. However, for some other code samples, there can be a considerable difference in execution time with and without .loc: e.g. several times slower, or more!
A common case of pandas dataframe manipulation is that we need to create a new column derived from values of an existing column. We may use the code below to filter on a condition (based on an existing column) and set different values for the new column:
df[df['mark'] >= 50]['text_rating'] = 'Pass'
However, this kind of "Chained Assignment" does not work since it could create a "copy" instead of a "view" and assignment to the new column based on this "copy" will not update the original dataframe.
Two options are available:
We can either use .loc, or
Code it another way without using .loc
An example of the second option:
df['text_rating'][df['mark'] >= 50] = 'Pass'
By placing the filtering at the last (after specifying the new column name), the assignment works well with the original dataframe updated.
The solution using .loc is as follows:
df.loc[df['mark'] >= 50, 'text_rating'] = 'Pass'
Now, let's see their execution time:
Without using .loc:
%%timeit
df['text_rating'][df['mark'] >= 50] = 'Pass'
2.01 ms ± 105 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
With using .loc:
%%timeit
df.loc[df['mark'] >= 50, 'text_rating'] = 'Pass'
577 µs ± 5.13 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
As we can see, using .loc, the execution time is more than 3x faster!
For a more detailed explanation of "Chained Assignment", you can refer to another related post How to deal with SettingWithCopyWarning in pandas? and in particular the answer of cs95. The post is excellent in explaining the functional differences of using .loc. I just supplement here the system performance (execution time) difference.
In addition to what has already been said (the issues with having True/False as column names without using loc, the ability to select both rows and columns with loc, and the ability to slice for row and column selections), another big difference is that you can use loc to assign values to specific rows and columns. If you try to select a subset of the dataframe using a boolean series and then attempt to change a value of that selection, you will likely get the SettingWithCopy warning.
Let's say you're trying to change the "upper management" column for all the rows whose salary is bigger than 60000.
This:
mask = df["salary"] > 60000
df[mask]["upper management"] = True
throws the warning "A value is trying to be set on a copy of a slice from a DataFrame" and won't work, because df[mask] creates a copy and trying to update "upper management" on that copy has no effect on the original df.
But this succeeds:
mask = df["salary"] > 60000
df.loc[mask,"upper management"] = True
Note that in both cases you could write df[df["salary"] > 60000] or df.loc[df["salary"] > 60000] directly, but I think storing the boolean condition in a variable first is cleaner.
I have a data frame, and would like to work on a small partition each time for particular tuples of values of 'a', 'b','c'.
df = pd.DataFrame({'a':np.random.randint(0,10,10000),
'b':np.random.randint(0,10,10000),
'c':np.random.randint(0,10,10000),
'value':np.random.randint(0,100,10000)})
so I chose to use pandas multiindex:
dfi = df.set_index(['a','b','c'])
dfi.sortlevel(inplace = True)
However, the performance is not great.
%timeit dfi.ix[(2,1,7)] # 511 us
%timeit df[(df['a'].values == 2) &
(df['b'].values == 1) & (df['c'].values == 7)] # 247 us
I suspect there are some overheads somewhere. My program has ~1k tuples, so it takes 511 * 1000 = 0.5s for one run. How can I improve further?
update:
Hmm, I forgot to mention that the number of tuples is less than the total Cartesian product of distinct values of 'a', 'b', 'c' in df. Wouldn't groupby do an excess amount of work on index combinations that don't exist in my tuples?
It's not clear what 'work on' means, but I would do this; it can be almost any function:
In [33]: %timeit df.groupby(['a','b','c']).apply(lambda x: x.sum())
10 loops, best of 3: 83.6 ms per loop
certain operations are cythonized so very fast
In [34]: %timeit df.groupby(['a','b','c']).sum()
100 loops, best of 3: 2.65 ms per loop
Doing a selection on a multi-index one index at a time is not efficient.
If you are operating on a very small subset of the total groups, then you might want to directly index into the multi-index; groupby wins if you are operating on a fraction (maybe 20%) of the groups or more. You might also want to investigate filter which you can use to pre-filter the groups based on some criteria.
As noted above, the cartesian product of the groups indexers is irrelevant. Only the actual groups will be iterated by groupby (think of a MultiIndex as a sparse representation of the total possible space).
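For instance, a rough sketch of the filter approach (the grouping columns come from the question; the threshold and the criterion are illustrative assumptions):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.random.randint(0, 10, 10000),
                   'b': np.random.randint(0, 10, 10000),
                   'c': np.random.randint(0, 10, 10000),
                   'value': np.random.randint(0, 100, 10000)})

# Keep only the (a, b, c) groups whose total 'value' exceeds a threshold.
big_groups = df.groupby(['a', 'b', 'c']).filter(lambda g: g['value'].sum() > 500)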
How about:
dfi = df.set_index(['a','b','c'])
dfi.sortlevel(inplace = True)
value = dfi["value"].values
value[dfi.index.get_loc((2, 1, 7))]
The result is an ndarray, without the index.
I am building a new method to parse a DataFrame into a Vincent-compatible format. This requires a standard Index (Vincent can't parse a MultiIndex).
Is there a way to detect whether a Pandas DataFrame has a MultiIndex?
In: type(frame)
Out: pandas.core.index.MultiIndex
I've tried:
In: if type(result.index) is 'pandas.core.index.MultiIndex':
        print True
    else:
        print False
Out: False
If I try without quotations I get:
NameError: name 'pandas' is not defined
Any help appreciated.
(Once I have the MultiIndex, I'm then resetting the index and merging the two columns into a single string value for the presentation stage.)
You can use isinstance to check whether an object is a class (or its subclasses):
if isinstance(result.index, pandas.MultiIndex):
You can use nlevels to check how many levels there are:
df.index.nlevels
df.columns.nlevels
If nlevels > 1, your dataframe certainly has multiple index levels (i.e. a MultiIndex).
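A quick illustration with a toy frame (data invented for this example):
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3, 4]},
                  index=pd.MultiIndex.from_product([['a', 'b'], [1, 2]]))

df.index.nlevels                      # 2  -> the rows use a MultiIndex
df.columns.nlevels                    # 1  -> the columns use a plain Index
isinstance(df.index, pd.MultiIndex)   # True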
There's also
len(result.index.names) > 1
but it is considerably slower than either isinstance or type:
timeit(len(result.index.names) > 1)
The slowest run took 10.95 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.12 µs per loop

timeit(isinstance(result.index, pd.MultiIndex))
The slowest run took 30.53 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 177 ns per loop

timeit(type(result.index) == pd.MultiIndex)
The slowest run took 22.86 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 200 ns per loop
Maybe the shortest way is if type(result.index)==pd.MultiIndex: