Fast numpy row slicing on a matrix - python

I have the following issue: I have a matrix yj of size (m, 200) (m = 3683), and a dictionary that, for each key, returns a numpy array of row indices into yj (the size of the array changes from key to key, in case anyone is wondering).
Now, I have to access this matrix lots of times (around 1M times) and my code is slowing down because of the indexing (I've profiled the code and it spends 65% of the time on this step).
Here is what I've tried out:
First of all, use the indices for slicing:
>> %timeit yj[R_u_idx_train[1]]
10.5 µs ± 79.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
The variable R_u_idx_train is the dictionary that has the row indices.
I thought that maybe boolean indexing might be faster:
>> %timeit yj[R_u_idx_train_mask[1]]
10.5 µs ± 159 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
R_u_idx_train_mask is a dictionary that returns a boolean array of size m where the indices given by R_u_idx_train are set to True.
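For context, a minimal sketch of how such a mask dictionary could be built from the index dictionary (toy data; only the names match the question):
import numpy as np

m = 3683
yj = np.random.rand(m, 200)
R_u_idx_train = {1: np.array([0, 7, 42, 1000])}   # hypothetical entry

R_u_idx_train_mask = {}
for k, idx in R_u_idx_train.items():
    mask = np.zeros(m, dtype=bool)
    mask[idx] = True                      # rows listed for this key are set to True
    R_u_idx_train_mask[k] = mask

# with sorted indices, both forms select the same rows in the same order
assert np.array_equal(yj[R_u_idx_train[1]], yj[R_u_idx_train_mask[1]])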
I also tried np.ix_
>> cols = np.arange(0,200)
>> %timeit ix_ = np.ix_(R_u_idx_train[1], cols); yj[ix_]
42.1 µs ± 353 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I also tried np.take
>> %timeit np.take(yj, R_u_idx_train[1], axis=0)
2.35 ms ± 88.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
And while this seems great, it is not, since it gives an array of shape (R_u_idx_train[1].shape[0], R_u_idx_train[1].shape[0]) (it should be (R_u_idx_train[1].shape[0], 200)). I guess I'm not using the method correctly.
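For reference, a quick shape check with toy stand-ins (yj_demo and idx are made up, not the real arrays):
import numpy as np

yj_demo = np.random.rand(8, 200)
idx = np.array([1, 3, 5])

print(np.take(yj_demo, idx, axis=0).shape)  # (3, 200): whole rows, as intended
print(yj_demo[idx].shape)                   # (3, 200): equivalent fancy indexing
print(np.take(yj_demo, idx).shape)          # (3,): without axis, the array is flattened first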
I also tried np.compress
>> %timeit np.compress(R_u_idx_train_mask[1], yj, axis=0)
14.1 µs ± 124 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Finally I tried to index with a boolean matrix
>> %timeit yj[R_u_idx_train_mask2[1]]
244 µs ± 786 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
So, is 10.5 µs ± 79.7 ns per loop the best I can do? I could try to use cython but that seems like a lot of work for just indexing...
Thanks a lot.

A very smart solution was given by V.Ayrat in the comments.
>> newdict = {k: yj[R_u_idx_train[k]] for k in R_u_idx_train.keys()}
>> %timeit newdict[1]
202 ns ± 6.7 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
Anyway maybe it would still be cool to know if there is a way to speed it up using numpy!
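For completeness, a self-contained sketch of the precompute-then-look-up pattern (toy sizes; the dict comprehension mirrors the one above):
import numpy as np

m, n_cols = 3683, 200
yj = np.random.rand(m, n_cols)
# stand-in for R_u_idx_train: 5 keys with 50 random row indices each
R_u_idx_train = {k: np.random.choice(m, size=50, replace=False) for k in range(5)}

# pay the fancy-indexing (copy) cost once per key ...
newdict = {k: yj[idx] for k, idx in R_u_idx_train.items()}

# ... so every later access is a plain dict lookup, with no slicing at all
rows = newdict[1]
print(rows.shape)   # (50, 200)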

Related

Need help in understanding the loop speed with timeit function in python

I need help understanding how the %timeit function works in the two programs below.
Program A
a = [1,3,2,4,1,4,2]
%timeit [val + 5 for val in a]
830 ns ± 45.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Program B
import numpy as np
a = np.array([1,3,2,4,1,4,2])
%timeit [a+5]
1.07 µs ± 23.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
My confusion:
µs is bigger than ns. How does the NumPy version execute slower than the list comprehension here?
1.07 µs ± 23.7 ns per loop... why is the ± part reported in ns and not in µs?
NumPy adds overhead, and this impacts speed on small datasets; vectorization is mostly useful on large datasets. (As for the units: %timeit prints the mean and the standard deviation each in whatever unit fits its size, which is why a 1.07 µs mean can come with a 23.7 ns spread.)
You must try with larger numbers:
N = 10_000_000
a = list(range(N))
%timeit [val + 5 for val in a]
import numpy as np
a = np.arange(N)
%timeit a+5
Output:
1.51 s ± 318 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
55.8 ms ± 3.63 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Numpy gives different results for almost identical code?

def match_score(vendor, company):
    return max(fuzz.ratio(vendor, company),
               fuzz.partial_ratio(vendor, company),
               fuzz.token_sort_ratio(vendor, company))
Note: fuzz is imported from the fuzzywuzzy library
========================
vendor = 'RED DEER TELUS STORE'
When I try this code:
df['Vendor']=vendor
df['Score'] = np.array(match_score(tuple(df['Vendor']), tuple(df['Company'])))
I get this
However, when I try almost identical code I get a different 'Score'?
df['Score'] = np.array(match_score(vendor, tuple(df['Company'])))
My logic in the second snippet is that the vendor is the same across the entire column, so there is no need to put it in a tuple... I can just give it as a string and make the processing faster.
Can anyone explain why passing an entire column where vendor in each cell = 'RED DEER TELUS STORE' gives a different result than just passing 'RED DEER TELUS STORE' to the function as a string? Thanks!
tuple versus tolist:
In [166]: x=np.arange(10000)
In [167]: timeit tuple(x)
1.14 ms ± 26.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [168]: timeit list(x)
1.12 ms ± 2.01 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [169]: timeit x.tolist()
296 µs ± 9.98 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
or with a series
In [170]: ds = pd.Series(x)
In [171]: timeit tuple(ds)
1.22 ms ± 1.57 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [172]: timeit list(ds)
1.23 ms ± 1.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [173]: timeit ds.to_list()
394 µs ± 22.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
With a series of string values (object dtype):
In [184]: ds = pd.Series(['' for _ in range(1000)])
In [185]: ds[:] = vendor
In [186]: timeit tuple(ds)
104 µs ± 1.11 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [187]: timeit ds.to_list()
27.2 µs ± 179 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
As to why you get a difference between passing the Series/tuple versus the string, I think you need to examine the fuzz code/docs in more detail. Maybe even test the function(s) with small examples. I don't have fuzz installed, so I can't explore that part of your calculations.
You might even want to make up some lists (or tuples) of strings and experiment with those, as sketched below. I don't think this is a numpy/pandas issue; it's a matter of learning to use fuzz correctly.
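Along those lines, a small experiment one could run (made-up company strings, assuming the fuzzywuzzy library and its fuzz.ratio / partial_ratio / token_sort_ratio functions are available):
import pandas as pd
from fuzzywuzzy import fuzz

def match_score(vendor, company):
    return max(fuzz.ratio(vendor, company),
               fuzz.partial_ratio(vendor, company),
               fuzz.token_sort_ratio(vendor, company))

vendor = 'RED DEER TELUS STORE'
companies = pd.Series(['TELUS', 'RED DEER STORE', 'TELUS STORE RED DEER'])

# score one pair of plain strings per row; this is the unambiguous baseline
scores = companies.apply(lambda c: match_score(vendor, c))
print(scores.tolist())

# compare these per-row scores against whatever the tuple-based calls return
# to see where the two approaches diverge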

Fastest way to cut a pandas time-series

Looking for the fastest way to cut a timeseries ... for example just taking the values that are more recent than a certain index.
I've found two commonly used methods:
df = original_series.truncate(before=example_time)
and
df = original_series[example_time:]
Which one is faster (for large time-series > 10**6 values) ?
This usually depends on what your DataFrame index is; throwing a random DataFrame of 10^7 values into timeit, we get the following.
From a performance standpoint, truncation is less efficient, as pandas is optimized for integer-based indexing via numpy.
Truncate:
62.6 ms ± 3.63 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Bracket Indexing:
54.1 µs ± 4.41 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
ILoc:
69.5 µs ± 4.52 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Loc:
92 µs ± 5.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Ix (which is deprecated):
110 µs ± 8.44 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
EDIT: This is all on pandas 0.24.2; back in the 0.14-0.18 versions, loc performance was much, much worse.
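A sketch of how such a comparison could be set up (synthetic series with a DatetimeIndex; s and cutoff are placeholder names, and the timings above came from the answerer's own data, not this snippet):
import numpy as np
import pandas as pd

n = 10**6
idx = pd.date_range('2000-01-01', periods=n, freq='s')
s = pd.Series(np.random.rand(n), index=idx)
cutoff = idx[n // 2]

a = s.truncate(before=cutoff)   # truncate
b = s[cutoff:]                  # bracket/label slicing
c = s.loc[cutoff:]              # loc slicing
assert a.equals(b) and b.equals(c)
# each line can then be timed with %timeit in IPython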

Why "any" sometimes works much faster, and sometimes much slower than "max" on Boolean values in python?

Consider the following code:
import numpy as np
import pandas as pd
a = pd.DataFrame({'case': np.arange(10000) % 100,
                  'x': np.random.rand(10000) > 0.5})
%timeit any(a.x)
%timeit a.x.max()
%timeit a.groupby('case').x.transform(any)
%timeit a.groupby('case').x.transform(max)
13.2 µs ± 179 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
195 µs ± 811 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
25.9 ms ± 555 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.43 ms ± 13.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
b = pd.DataFrame({'x': np.random.rand(100) > 0.5})
%timeit any(b.x)
%timeit b.x.max()
13.1 µs ± 205 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
81.5 µs ± 1.81 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
We see that "any" works faster than "max" on a boolean pandas.Series of size 100 and 10000, but when we try to groupby and transform data in groups of 100, suddenly "max" is a lot faster than "any". Why?
Because any evaluation is lazy, which means that the any function will stop at the first True boolean element.
max, however, can't do so, because it is required to inspect every element of the sequence to be sure it hasn't missed a greater one.
That's why max always inspects all elements, while any inspects only the elements up to the first True.
The cases where max works faster are probably the cases with type coercion: all values in numpy are stored in their own types and formats, and mathematical operations on them may be faster than Python's any.
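A quick way to see the short-circuit effect described above (toy boolean arrays, not the DataFrame from the question):
import numpy as np
from timeit import timeit

early = np.zeros(10_000, dtype=bool)
early[0] = True                        # any() can stop at the very first element
late = np.zeros(10_000, dtype=bool)    # any() has to walk the whole array

print(timeit(lambda: any(early), number=1_000))   # fast: stops immediately
print(timeit(lambda: any(late), number=1_000))    # slow: scans all 10,000 elements
print(timeit(lambda: max(late), number=1_000))    # slow: max always scans everything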
As said in the comments, the Python any function has a short-circuit mechanism, while np.any does not (see here).
But True in a.x is even faster:
%timeit any(a.x)
53.6 µs ± 543 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit True in (a.x)
3.39 µs ± 31.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
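One caveat worth checking here (a note on the snippet above, not part of the original answer): the in operator on a Series tests membership in the index, not in the values, so True in a.x is not an equivalent test. A toy check:
import pandas as pd

s = pd.Series([False, False, False])   # no True value anywhere
print(True in s)          # True  -- True == 1 is a valid index label
print(True in s.values)   # False -- membership over the actual values
print(any(s))             # False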

Fastest way to drop rows / get subset with difference from large DataFrame in Pandas

Question
I'm looking for the fastest way to drop a set of rows whose indices I've got, or to get the subset that is the difference of these indices (which results in the same dataset), from a large Pandas DataFrame.
So far I have two solutions, which seem relatively slow to me:
df.loc[df.index.difference(indices)]
which takes ~115 sec on my dataset
df.drop(indices)
which takes ~215 sec on my dataset
Is there a faster way to do this? Preferably in Pandas.
Performance of proposed Solutions
~41 sec: df[~df.index.isin(indices)] by @jezrael
I believe you can create a boolean mask, invert it by ~, and filter by boolean indexing:
df1 = df[~df.index.isin(indices)]
As @user3471881 mentioned, to avoid chained indexing, if you are planning on manipulating the filtered df later it is necessary to add copy:
df1 = df[~df.index.isin(indices)].copy()
This filtering depends on the number of matched indices and also on the length of the DataFrame.
So another possible solution is to create an array/list of indices to keep, and then inverting is not necessary:
df1 = df[df.index.isin(need_indices)]
Using iloc (or loc, see below) and Index.drop:
df = pd.DataFrame(np.arange(0, 1000000, 1))
indices = np.arange(0, 1000000, 3)
%timeit -n 100 df[~df.index.isin(indices)]
%timeit -n 100 df.iloc[df.index.drop(indices)]
41.3 ms ± 997 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
32.7 ms ± 1.06 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
As @jezrael points out, you can only use iloc if the index is a RangeIndex, otherwise you will have to use loc. But this is still faster than df[~df.index.isin(indices)] (see why below).
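A small illustration of that caveat (toy frame with a string index; names are made up):
import pandas as pd

df_str = pd.DataFrame({'v': range(5)}, index=list('abcde'))
keep = df_str.index.drop(['b', 'd'])    # Index(['a', 'c', 'e'])

print(df_str.loc[keep])                 # works: label-based lookup
# df_str.iloc[keep] would raise, since iloc needs integer positions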
All three options on 10 million rows:
df = pd.DataFrame(np.arange(0, 10000000, 1))
indices = np.arange(0, 10000000, 3)
%timeit -n 10 df[~df.index.isin(indices)]
%timeit -n 10 df.iloc[df.index.drop(indices)]
%timeit -n 10 df.loc[df.index.drop(indices)]
4.98 s ± 76.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
752 ms ± 51.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.65 s ± 69.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Why does super slow loc outperform boolean_indexing?
Well, the short answer is that it doesn't. df.index.drop(indices) is just a lot faster than ~df.index.isin(indices) (given the data above with 10 million rows):
%timeit -n 10 ~df.index.isin(indices)
%timeit -n 10 df.index.drop(indices)
4.55 s ± 129 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
388 ms ± 10.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
We can compare this to the performance of boolean_indexing vs iloc vs loc:
boolean_mask = ~df.index.isin(indices)
dropped_index = df.index.drop(indices)
%timeit -n 10 df[boolean_mask]
%timeit -n 10 df.iloc[dropped_index]
%timeit -n 10 df.loc[dropped_index]
489 ms ± 25.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
371 ms ± 10.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.38 s ± 153 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
If the order of rows doesn't matter, you can rearrange them in place:
import numpy as np
import pandas as pd
from numba import njit

n = 10**7
df = pd.DataFrame(np.arange(4 * n).reshape(n, 4))
indices = np.unique(np.random.randint(0, n, size=n // 2))

@njit
def _dropfew(values, indices):
    # overwrite each dropped row (largest index first) with a row from the tail
    k = len(values) - 1
    for ind in indices[::-1]:
        values[ind] = values[k]
        k -= 1

def dropfew(df, indices):
    _dropfew(df.values, indices)
    # surviving rows are now packed at the front; cut off the tail
    return df.iloc[:len(df) - len(indices)]
Runs:
In [39]: %time df.iloc[df.index.drop(indices)]
Wall time: 1.07 s
In [40]: %time dropfew(df,indices)
Wall time: 219 ms
