Python: Vectorize list lookup

I have sensor data like this:
{"Time":1541203508.45,"Tc":25.4,"Hp":33}
{"Time":1541203508.45,"Tc":25.2,"Hp":32}
{"Time":1541203508.45,"Tc":25.1,"Hp":31}
{"Time":1541203508.45,"Tc":25.2,"Hp":33}
I'm doing a lot of list lookups in a for loop like this:
output = {}
for i, data in enumerate(sensor_data):
    output[i] = {}
    output[i]['H'] = ['V_Dry','Dry','Normal','Humid','V_Humid','ERR'][int(sensor_data[i]['Hp'] // 20)]
    # ... and so on for temp etc.
Is there some way to vectorize this if I converted it to a numpy/pandas datatype? Like, if I split the sections into temp, humidity etc, is there a python method that would apply this 'mask' kind of thing on it?
Is map my only option to speed it up?

First attempt
I suggest you first convert your data into a numpy array:
import numpy as np
data = [{"Time":1541203508.45,"Tc":25.4,"Hp":33},
{"Time":1541203508.45,"Tc":25.2,"Hp":32},
{"Time":1541203508.45,"Tc":25.1,"Hp":31},
{"Time":1541203508.45,"Tc":25.2,"Hp":33}]
np_data = np.asarray([list(element.values()) for element in data])
Now the third column is humidity in your example. Let's now define a map for that:
def convert_number_to_hr(value):
    hr_names = ['V_Dry','Dry','Normal','Humid','V_Humid','ERR']
    return hr_names[int(value // 20)]
This uses your predefined names in steps of 20%. Now let's apply the map:
hr_humidity = map(convert_number_to_hr, np_data[:,2])
This is a lazy map object (an iterator); nothing is actually computed until you iterate over it or convert it to a list via list(hr_humidity).
Timing just the construction of that iterator gives
%timeit hr_humidity = map(convert_number_to_hr, np_data[:,2])
515 ns ± 2.25 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
If you force the evaluation with list(...), the time grows to
%timeit hr_humidity = list(map(convert_number_to_hr, np_data[:,2]))
5.62 µs ± 18.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
You can now use the same procedure for everything else in your dataset.
Second attempt
I tried to do this fully vectorized as you asked in your comment. I came up with:
def same_but_pure_numpy(arr):
    arr = arr.astype(int) // 20
    hr_names = np.asarray(['V_Dry','Dry','Normal','Humid','V_Humid','ERR'])
    return hr_names[arr]
This reports a speed of
%timeit a = same_but_pure_numpy(np_data[:,2])
11.5 µs ± 151 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
so on this small example the map version (5.62 µs including the list() conversion) still seems to be faster.
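As a quick sanity check (my own addition, not part of the original answer), the vectorized version produces the same labels as the map-based one on the four example rows, whose Hp values all fall in the 20-39 bucket:
same_but_pure_numpy(np_data[:, 2])
# array(['Dry', 'Dry', 'Dry', 'Dry'], dtype='<U7')
list(map(convert_number_to_hr, np_data[:, 2]))
# ['Dry', 'Dry', 'Dry', 'Dry']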
Third attempt
EDIT: Okay, here is my first try with pandas:
import pandas as pd
data = [{"Time":1541203508.45,"Tc":25.4,"Hp":33},
{"Time":1541203508.45,"Tc":25.2,"Hp":32},
{"Time":1541203508.45,"Tc":25.1,"Hp":31},
{"Time":1541203508.45,"Tc":25.2,"Hp":33}]
df = pd.DataFrame(data)
def convert_number_to_hr(value):
    hr_names = ['V_Dry','Dry','Normal','Humid','V_Humid','ERR']
    return hr_names[int(value // 20)]
The result is as expected, but it takes considerably more time:
%timeit new = df["Hp"].map(convert_number_to_hr)
110 µs ± 569 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
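For completeness, here is my own sketch (not part of the original answer) of applying the lookup-table trick from the second attempt directly to the DataFrame column; this avoids the per-element Python call that .map incurs, assuming Hp stays in the 0-119 range so the index never runs past 'ERR':
import numpy as np

hr_names = np.asarray(['V_Dry', 'Dry', 'Normal', 'Humid', 'V_Humid', 'ERR'])
# Integer-divide the whole column at once, then use it as a fancy index
# into the label array.
df["H"] = hr_names[(df["Hp"] // 20).astype(int)]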

Related

What is the most efficient way to bootstrap the mean of a list of numbers?

I have a list of numbers (floats) and I would like to estimate the mean. I also need to estimate the variation of that mean. My goal is to resample the list 100 times, so my output would be an array of length 100, each element corresponding to the mean of a resampled list.
Here is a simple workable example for what I would like to achieve:
import numpy as np
data = np.linspace(0, 4, 5)
ndata, boot = len(data), 100
output = np.mean(np.array([data[k] for k in np.random.uniform(high=ndata, size=boot*ndata).astype(int)]).reshape((boot, ndata)), axis=1)
This is, however, quite slow when I have to repeat it for many lists with a large number of elements. The method also seems very clunky and un-Pythonic. What would be a better way to achieve my goal?
P.S. I am aware of scipy.stats.bootstrap, but I have problems upgrading scipy to 1.7.1 in anaconda to import it.
Use np.random.choice:
import numpy as np
data = np.linspace(0, 4, 5)
ndata, boot = len(data), 100
output = np.mean(
    np.random.choice(data, size=(boot, ndata)),
    axis=1)
If I understood correctly, this expression (in your question's code):
np.array([data[k] for k in np.random.uniform(high=ndata, size=boot*ndata).astype(int)]).reshape((boot, ndata))
is doing sampling with replacement, and that is exactly what np.random.choice does by default (replace=True).
Here are some timings for reference:
%timeit np.mean(np.array([data[k] for k in np.random.uniform(high=ndata, size=boot*ndata).astype(int)]).reshape((boot, ndata)), axis=1)
133 µs ± 3.96 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.mean(np.random.choice(data, size=(boot, ndata)),axis=1)
41.1 µs ± 538 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
As can be seen, np.random.choice yields roughly a 3x improvement.
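Since the question also asks for the variation of the mean, here is a minimal sketch (my own addition) of how that falls out of the same array of bootstrap means:
import numpy as np

data = np.linspace(0, 4, 5)
ndata, boot = len(data), 100

# One bootstrap mean per resampled row.
means = np.mean(np.random.choice(data, size=(boot, ndata)), axis=1)

# Spread of the bootstrap means, i.e. the bootstrap estimate of the
# standard error of the mean.
std_of_mean = means.std(ddof=1)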

Faster pandas DatetimeIndex membership checking

I have a tight loop which, among other things, checks whether a given date (in the form of a pandas.Timestamp) is contained in a given unique pandas.DatetimeIndex (the application being checking whether a date is a custom business day).
As a minimal example, consider this bit:
import pandas as pd
dates = pd.date_range("2020", "2021")
index = dates.to_series().sample(frac=0.7).sort_index().index
for date in dates:
    if date in index:
        pass  # Do stuff...
(Note that simply iterating over index is not an option in the full application)
To my surprise, I found that the date in index bit takes up a significant part of the total runtime. Profiling furthermore shows that Pandas' membership check does a lot more than just a hash lookup, which is further confirmed by a small experiment comparing DatetimeIndex vs a plain python set:
%timeit [date in index for date in dates]
# 3.28 ms ± 81.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
vs
index_set = set(index)
%timeit [date in index_set for date in dates]
# 341 µs ± 3.42 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Note that the difference is almost 10x! Why this difference and can I do anything to make it faster?
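For reference, the faster set-based lookup from the experiment above would slot into the original loop like this (a sketch based only on the code already shown):
import pandas as pd

dates = pd.date_range("2020", "2021")
index = dates.to_series().sample(frac=0.7).sort_index().index

# Build the hash set once, outside the tight loop, then use plain set
# membership checks inside it.
index_set = set(index)

for date in dates:
    if date in index_set:
        pass  # Do stuff...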

Similarity of n lists against one list with minimum execution time in Python

I have n lists, and I want to calculate the similarity of each of them with a list called l.
I am currently using the following code:
from scipy import spatial
similarity_score = []
for i in n:
    similarity_score.append(spatial.distance.cosine(i, l))
This gives the result I want, but the execution time is huge. I am looking for an alternative approach that does the same thing in the least execution time.
You probably want to compute this in a vectorized way, for instance with numpy.
Say your data is in the format
#
# X contains 1000 lists of size 50 (one per column)
#
import numpy as np
X = np.random.random( (50,1000) )
#
# v contains the vector you want to calculate the distance to
#
v = np.random.random( (50,1) )
Then the loop approach is
%%timeit
#
# for loop approach
#
from scipy import spatial
similarity_score = []
for i in X.T:
    similarity_score.append(spatial.distance.cosine(i, v))
Which on my machine gave
82.3 ms ± 13.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
And produces the following output
similarity_score[:10]
[0.22282765699585905,
0.2160367009172488,
0.30853097430098786,
0.24034072729579192,
0.16217833767527134,
0.2829791739176786,
0.18946375557860284,
0.19624968983011593,
0.2484078232716126,
0.3258394812037617]
When we implement this in a vectorized way in numpy
%%timeit
#
# Vectorized approach using np.einsum
#
I = np.einsum("ij,ij->j", X,v)
D = 1 - I / ( np.linalg.norm(X,ord=2,axis=0) * np.linalg.norm(v,ord=2,axis=0) )
We get
191 µs ± 63.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
And of course check output
D[:10]
array([0.22282766, 0.2160367 , 0.30853097, 0.24034073, 0.16217834,
0.28297917, 0.18946376, 0.19624969, 0.24840782, 0.32583948])
Note that the output of this example is not of the same type: numpy will return a numpy array rather than a list.
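As a cross-check (my own addition, not part of the original answer), scipy can compute all of the cosine distances in one vectorized call via scipy.spatial.distance.cdist, which should agree with D above:
from scipy.spatial.distance import cdist

# cdist expects one observation per row, so transpose the (50, 1000) and
# (50, 1) arrays; the result has shape (1000, 1), hence the ravel().
D_cdist = cdist(X.T, v.T, metric='cosine').ravel()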

What's the fastest way to access a Pandas DataFrame?

I have a DataFrame df with 541 columns, and I need to save all unique pairs of its column names into the rows of a separate DataFrame, repeated 8 times each.
I thought I would create an empty DataFrame fp, double loop through df's column names, insert into every 8th row, and fill in the blanks with the last available value.
When I tried to do this, though, I was baffled by how long it was taking. With 541 columns I only have to perform 146,611 writes, yet it takes well over 20 minutes. This seems egregious for simple data access. Where is the problem and how can I solve it? It takes Pandas less time than that to produce a correlation matrix of the columns, so I must be doing something wrong.
Here's a reproducible example of what I mean:
fp = np.empty(shape = (146611, 10))
fp.fill(np.nan)
fp = pd.DataFrame(fp)
%timeit for idx in range(0, len(fp)): fp.iloc[idx, 0] = idx
# 1 loop, best of 3: 22.3 s per loop
Don't do iloc/loc/chained-indexing. Using the NumPy interface alone increases speed by ~180x. If you further remove element access, we can bump this to 180,000x.
fp = np.empty(shape = (146611, 10))
fp.fill(np.nan)
fp = pd.DataFrame(fp)
# this confirms how slow data access is on my computer
%timeit for idx in range(0, len(fp)): fp.iloc[idx, 0] = idx
1 loops, best of 3: 3min 9s per loop
# this accesses the underlying NumPy array, so you can directly set the data
%timeit for idx in range(0, len(fp)): fp.values[idx, 0] = idx
1 loops, best of 3: 1.19 s per loop
This is because there is extensive code in the Python layer for this fancy indexing, taking ~10 µs per loop. Pandas indexing should be used to retrieve entire subsets of data, which you then use to do vectorized operations on the whole dataframe. Individual element access is glacial: using Python dictionaries will give you a > 180-fold increase in performance.
Things get a lot better when you access columns or rows instead of individual elements: 3 orders of magnitude better.
# set all items in 1 go.
%timeit fp[0] = np.arange(146611)
1000 loops, best of 3: 814 µs per loop
Moral
Don't try to access individual elements via chained indexing, loc, or iloc. Generate a NumPy array in a single allocation, from a Python list (or a C-interface if performance is absolutely critical), and then perform operations on entire columns or dataframes.
Using NumPy arrays and performing operations directly on columns rather than individual elements, we got a whopping 180,000+ fold increase in performance. Not too shabby.
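A minimal sketch of that pattern (my own illustration; the column names are made up):
import numpy as np
import pandas as pd

# One allocation for the whole column, instead of 146,611 scalar writes.
ids = np.arange(146611)

# Hand entire columns to pandas; no per-element loc/iloc access.
fp = pd.DataFrame({"id": ids, "value": np.full(146611, np.nan)})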
Edit
Comments from #kushy suggest Pandas may have optimized indexing in certain cases since I originally wrote this answer. Always profile your own code, and your mileage may vary.
Alexander's answer was the fastest for me as of 2020-01-06 when using .to_numpy() instead of .values. Tested in Jupyter Notebook on Windows 10. Pandas version = 0.24.2
import numpy as np
import pandas as pd
fp = np.empty(shape = (146611, 10))
fp.fill(np.nan)
fp = pd.DataFrame(fp)
pd.__version__ # '0.24.2'
def func1():
    # Asker badmax solution
    for idx in range(0, len(fp)):
        fp.iloc[idx, 0] = idx

def func2():
    # Alexander Huszagh solution 1
    for idx in range(0, len(fp)):
        fp.to_numpy()[idx, 0] = idx

def func3():
    # user4322543 answer to
    # https://stackoverflow.com/questions/34855859/is-there-a-way-in-pandas-to-use-previous-row-value-in-dataframe-apply-when-previ
    new = []
    for idx in range(0, len(fp)):
        new.append(idx)
    fp[0] = new

def func4():
    # Alexander Huszagh solution 2
    fp[0] = np.arange(146611)
%timeit func1
19.7 ns ± 1.08 ns per loop (mean ± std. dev. of 7 runs, 500000000 loops each)
%timeit func2
19.1 ns ± 0.465 ns per loop (mean ± std. dev. of 7 runs, 500000000 loops each)
%timeit func3
21.1 ns ± 3.26 ns per loop (mean ± std. dev. of 7 runs, 500000000 loops each)
%timeit func4
24.7 ns ± 0.889 ns per loop (mean ± std. dev. of 7 runs, 50000000 loops each)
(Note that %timeit func1 and friends are written without parentheses, so they only time looking up the function object rather than executing it, which is why all four results are a few nanoseconds.)

Pandas DataFrame performance

Pandas is really great, but I am really surprised by how inefficient it is to retrieve values from a Pandas.DataFrame. In the following toy example, even the DataFrame.iloc method is more than 100 times slower than a dictionary.
The question: Is the lesson here just that dictionaries are the better way to look up values? Yes, I get that that is precisely what they were made for. But I just wonder if there is something I am missing about DataFrame lookup performance.
I realize this question is more "musing" than "asking" but I will accept an answer that provides insight or perspective on this. Thanks.
import timeit
setup = '''
import numpy, pandas
df = pandas.DataFrame(numpy.zeros(shape=[10, 10]))
dictionary = df.to_dict()
'''
f = ['value = dictionary[5][5]', 'value = df.loc[5, 5]', 'value = df.iloc[5, 5]']
for func in f:
print func
print min(timeit.Timer(func, setup).repeat(3, 100000))
value = dictionary[5][5]
0.130625009537
value = df.loc[5, 5]
19.4681699276
value = df.iloc[5, 5]
17.2575249672
A dict is to a DataFrame as a bicycle is to a car.
You can pedal 10 feet on a bicycle faster than you can start a car, get it in gear, etc, etc. But if you need to go a mile, the car wins.
For certain small, targeted purposes, a dict may be faster.
And if that is all you need, then use a dict, for sure! But if you need/want the power and luxury of a DataFrame, then a dict is no substitute. It is meaningless to compare speed if the data structure does not first satisfy your needs.
Now for example -- to be more concrete -- a dict is good for accessing columns, but it is not so convenient for accessing rows.
import timeit
setup = '''
import numpy, pandas
df = pandas.DataFrame(numpy.zeros(shape=[10, 1000]))
dictionary = df.to_dict()
'''
# f = ['value = dictionary[5][5]', 'value = df.loc[5, 5]', 'value = df.iloc[5, 5]']
f = ['value = [val[5] for col,val in dictionary.items()]', 'value = df.loc[5]', 'value = df.iloc[5]']
for func in f:
    print(func)
    print(min(timeit.Timer(func, setup).repeat(3, 100000)))
yields
value = [val[5] for col,val in dictionary.items()]
25.5416321754
value = df.loc[5]
5.68071913719
value = df.iloc[5]
4.56006002426
So the dict of lists is 5 times slower at retrieving rows than df.iloc. The speed deficit becomes greater as the number of columns grows. (The number of columns is like the number of feet in the bicycle analogy. The longer the distance, the more convenient the car becomes...)
This is just one example of when a dict of lists would be less convenient/slower than a DataFrame.
Another example would be when you have a DatetimeIndex for the rows and wish to select all rows between certain dates. With a DataFrame you can use
df.loc['2000-1-1':'2000-3-31']
There is no easy analogue for that if you were to use a dict of lists. And the Python loops you would need to use to select the right rows would again be terribly slow compared to the DataFrame.
It seems the performance difference is much smaller now (tested on 0.21.1; I forget which Pandas version was used in the original example). Not only has the performance gap between dictionary access and .loc shrunk (from about 335 times to 126 times slower), loc (iloc) is now less than two times slower than at (iat).
In [1]: import numpy, pandas
   ...: df = pandas.DataFrame(numpy.zeros(shape=[10, 10]))
   ...: dictionary = df.to_dict()
In [2]: %timeit value = dictionary[5][5]
85.5 ns ± 0.336 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [3]: %timeit value = df.loc[5, 5]
10.8 µs ± 137 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [4]: %timeit value = df.at[5, 5]
6.87 µs ± 64.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [5]: %timeit value = df.iloc[5, 5]
14.9 µs ± 114 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [6]: %timeit value = df.iat[5, 5]
9.89 µs ± 54.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [7]: print(pandas.__version__)
0.21.1
---- Original answer below ----
+1 for using at or iat for scalar operations. Example benchmark:
In [1]: import numpy, pandas
...: df = pandas.DataFrame(numpy.zeros(shape=[10, 10]))
...: dictionary = df.to_dict()
In [2]: %timeit value = dictionary[5][5]
The slowest run took 34.06 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 310 ns per loop
In [4]: %timeit value = df.loc[5, 5]
10000 loops, best of 3: 104 µs per loop
In [5]: %timeit value = df.at[5, 5]
The slowest run took 6.59 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 9.26 µs per loop
In [6]: %timeit value = df.iloc[5, 5]
10000 loops, best of 3: 98.8 µs per loop
In [7]: %timeit value = df.iat[5, 5]
The slowest run took 6.67 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 9.58 µs per loop
It seems using at (iat) is about 10 times faster than loc (iloc).
I encountered the same problem. You can use at to improve performance.
"Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of overhead in order to figure out what you’re asking for. If you only want to access a scalar value, the fastest way is to use the at and iat methods, which are implemented on all of the data structures."
See the official reference http://pandas.pydata.org/pandas-docs/stable/indexing.html, section "Fast scalar value getting and setting".
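For example, on the same 10x10 frame used above (a minimal illustration of the accessors the documentation describes):
import numpy, pandas

df = pandas.DataFrame(numpy.zeros(shape=[10, 10]))

# Scalar get and set: .at by label, .iat by integer position.
value = df.at[5, 5]
df.iat[5, 5] = 1.0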
I experienced a different phenomenon when accessing dataframe rows. Test this simple example on a dataframe of about 10,000,000 rows. The dictionary rocks:
import time

def testRow(go):
    go_dict = go.to_dict()
    times = 100000

    ot = time.time()
    for i in range(times):
        go.iloc[100, :]
    nt = time.time()
    print('for iloc {}'.format(nt - ot))

    ot = time.time()
    for i in range(times):
        go.loc[100, 2]
    nt = time.time()
    print('for loc {}'.format(nt - ot))

    ot = time.time()
    for i in range(times):
        [val[100] for col, val in go_dict.items()]
    nt = time.time()
    print('for dict {}'.format(nt - ot))
I think the fastest way of accessing a cell is
df.get_value(row, column)
df.set_value(row, column, value)
Both are faster than (I think)
df.iat[...]
df.at[...]
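Note (my own addition): get_value and set_value were deprecated and later removed from pandas, so in current versions the bracket-style .at/.iat indexers shown earlier are the supported scalar accessors:
# Current pandas: scalar access via the .at / .iat indexers.
value = df.at[5, 5]    # by label
value = df.iat[5, 5]   # by integer position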
