Fastest way for a loop and condition code (Python + DataFrames)

I have the following loop, which takes more than 9 seconds for 10,000 iterations. My program has to call this function more than 1000 times, so I need help optimizing the "simu" function; as it stands, the runtime makes my code unusable. For info, the daterange values below are only an example and can be very different from one call to another.
What takes most of the time:
- df.itertuples(['DATES'])
- the loop itself, even though it uses an iterator
- the if condition
- df.index.get_loc to get the position of the date
Does anyone have an idea how to optimize this code?
def simu(nbprod, df, daterange):
    timer = time.time()
    mat = np.zeros((len(df), nbprod))
    iterator = ((i, j) for j in xrange(len(daterange)) for i in df.itertuples(['DATES']))
    for (i, j) in iterator:
        thedate = i[0]
        if (thedate >= daterange[j][0]) and (thedate <= daterange[j][1]):
            mat[df.index.get_loc(i[0])][j] = 1
    print time.time() - timer
    return mat
new_index = pd.date_range(start=pd.datetime(2014,1,1), periods=24*10000, freq='H')
df = pd.DataFrame(np.random.randn(len(new_index)), new_index)
df.index.name = 'DATES'
daterange = [[pd.datetime(2014,1,3), pd.datetime(2014,1,7)], [pd.datetime(2015,6,3), pd.datetime(2017,1,7)], [pd.datetime(2017,1,3), pd.datetime(2020,1,7)]]
### for 1 time
>>> simu(len(daterange), df, daterange)
9.43400001526
### for 3 times more
>>> simu(len(daterange)*3, df, daterange*3)
30.6919999123
>>> simu(len(daterange)*10, df, daterange*10)
92.2009999752

This returns a DataFrame, which is IMHO more useful anyhow (if you want the underlying data, just use df.values). It will scale linearly with the length of daterange.
def simu2(df, daterange):
    mat = pd.DataFrame(0, index=df.index, columns=range(len(daterange)))
    for j, (d1, d2) in enumerate(daterange):
        result = df[(df.index >= d1) & (df.index <= d2)]
        mat.loc[result.index, j] = 1
    return mat
In [7]: result1 = simu2(df, daterange)
In [10]: result2 = simu(len(daterange), df, daterange)
5.7844748497
In [11]: (result1.values==result2).all()
Out[11]: True
In [12]: %timeit simu2(df, daterange)
10 loops, best of 3: 162 ms per loop
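A further sketch of the same idea (my own addition, not part of the answer above, and not benchmarked): because the DatetimeIndex produced by date_range is sorted, each date range maps to one contiguous block of rows, so the mask can be filled with searchsorted instead of boolean comparisons.
import numpy as np

def simu3(df, daterange):
    # assumes df.index is sorted, which holds for the date_range index above
    mat = np.zeros((len(df), len(daterange)))
    for j, (d1, d2) in enumerate(daterange):
        lo = df.index.searchsorted(d1, side='left')   # first row with date >= d1
        hi = df.index.searchsorted(d2, side='right')  # one past the last row with date <= d2
        mat[lo:hi, j] = 1
    return mat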

Related

Compare rows with conditions and generate a new dataframe in Pandas

I have a very big dataframe with this structure:
Timestamp Val1
Here you can see a real sample:
Timestamp Temp
0 1622471518.92911 36.443
1 1622471525.034114 36.445
2 1622471531.148139 37.447
3 1622471537.284337 36.449
4 1622471543.622588 43.345
5 1622471549.734765 36.451
6 1622471556.2518 36.454
7 1622471562.361368 41.461
8 1622471568.472718 42.468
9 1622471574.826475 36.470
What I want to do is compare the Temp column with itself: if one value is higher than another by "X" (for example 4) and the time between them is lower than "Y" (for example 180 min), then I save some data about that pair.
Right now I'm using two nested for loops, but this takes too much time, and pandas usually has a way to avoid this.
This is my code:
cap_time, maxim = 180, 4
cap_time = cap_time * 60
temps = df['Temperature'].values
times = df['Timestamp'].values
results = []
for i in range(len(temps)):
    for j in range(i+1, len(temps)):
        print(i, j, len(temps))
        if float(temps[j]) > float(temps[i])*maxim:
            timeIn = dt.datetime.fromtimestamp(float(times[i]))
            timeOut = dt.datetime.fromtimestamp(float(times[j]))
            diff = timeOut - timeIn
            tdiff = diff.total_seconds()
            if tdiff > cap_time:
                break
            else:
                res = [temps[i], temps[j], times[i], times[j], tdiff/60, cap_time/60, maxim]
                results.append(res)
                break
# Then I save it in a dataframe and do other actions
Can Pandas help me achieve my goal and reduce the execution time? I found DataFrame.diff() but I'm not sure it's what I want (or I don't know how to use it).
Thank you very much.
Short of avoiding the nested for loops, you can already speed things up by avoiding all unnecessary calculations and conversions within the loops. In particular, you can use NumPy broadcasting to define a Boolean array beforehand, in which you can look up whether the condition is met:
import numpy as np

temps_diff = temps - temps[:, None]
times_diff = times - times[:, None]
condition = np.logical_and(temps_diff > maxim,
                           times_diff < cap_time)

results = []
for i in range(len(temps)):
    for j in range(i+1, len(temps)):
        if condition[i, j]:
            results.append([temps[i], temps[j],
                            times[i], times[j],
                            times_diff[i, j]])
results
[[36.443, 43.345, 1622471518.92911, 1622471543.622588, 24.693477869033813],
...
[36.454, 42.468, 1622471556.2518, 1622471568.472718, 12.22091794013977]]
To avoid the loops altogether, you could define a 3-dimensional full results array and then use the condition array as a Boolean mask to filter out the results you want:
import numpy as np

n = len(temps)
temps_diff = temps - temps[:, None]
times_diff = times - times[:, None]
condition = np.logical_and(temps_diff > maxim,
                           times_diff < cap_time)

results_full = np.stack([np.repeat(temps[:, None], n, axis=1),
                         np.tile(temps, (n, 1)),
                         np.repeat(times[:, None], n, axis=1),
                         np.tile(times, (n, 1)),
                         times_diff])
results = results_full[np.stack(results_full.shape[0] * [condition])]
results.reshape((5, -1)).T
array([[ 3.64430000e+01, 4.33450000e+01, 1.62247152e+09,
1.62247154e+09, 2.46934779e+01],
...
[ 3.64540000e+01, 4.24680000e+01, 1.62247156e+09,
1.62247157e+09, 1.22209179e+01],
...
])
As you can see, the resulting numbers are the same as above, although this time the results array will contain more rows, because we didn't use the shortcut of starting the inner loop at i+1.
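Since the question mentions saving the results in a DataFrame afterwards, here is a minimal sketch of that step (the column names are my own, hypothetical choice); it reuses the masked results array from the second block above.
import pandas as pd

# columns follow the order of the stacked rows: temps[i], temps[j], times[i], times[j], time difference
results_df = pd.DataFrame(results.reshape((5, -1)).T,
                          columns=['temp_in', 'temp_out', 'time_in', 'time_out', 'tdiff_s'])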

Adding many series efficiently?

I have thousands of pd.Series items, and I just want to add them up. They cover different time intervals, and I need to pad missing values with zeros. I tried
add_series = lambda a, b: a.add(b, fill_value=0).fillna(0)
result = reduce(add_series, all_my_items)
which takes more time than I would expect. Is there any way to speed this up significantly?
Using concat
pd.concat(all_my_items,axis=1).fillna(0).sum(axis=1)
You can drop down to NumPy via np.pad and np.vstack. For performance, if possible you should avoid regular Python methods when manipulating Pandas / NumPy objects.
The below solution assumes each series is aligned by index, i.e. the kth item of each series by position is comparable across series for each k.
np.random.seed(0)
m, n = 10**2, 10**4
S = [pd.Series(np.random.random(np.random.randint(0, m))) for _ in range(n)]

def combiner(arrs):
    n = max(map(len, arrs))
    L = [np.pad(i.values, (0, n-len(i)), 'constant') for i in arrs]
    return np.vstack(L).sum(0)

res1 = pd.concat(S, axis=1).fillna(0).sum(axis=1)
res2 = pd.Series(combiner(S))
assert (res1 == res2).all()

%timeit pd.concat(S, axis=1).fillna(0).sum(axis=1)  # 2.63 s per loop
%timeit pd.Series(combiner(S))                      # 863 ms per loop
You can use pd.concat but with axis=0 and then groupby on level=0 such as:
pd.concat(all_my_items,axis=0).groupby(level=0).sum()
With all_my_items containing 1000 pd.Series of different lengths (e.g. between 2000 and 2500) and different time intervals such as:
import numpy as np
np.random.seed(0)

n = 1000  # number of series
# lengths of the series
len_ser = np.random.randint(2000, 2500, n)
# to pick a random start date
list_date = pd.date_range(start=pd.to_datetime('1980-01-01'), periods=15000).tolist()
# generate the list of pd.Series
all_my_items = [pd.Series(range(len_ser[i]),
                          index=pd.date_range(start=list_date[np.random.randint(0, 15000, 1)[0]],
                                              periods=len_ser[i]))
                for i in range(n)]
# Wen's solution
%timeit pd.concat(all_my_items, axis=1).fillna(0).sum(axis=1)   # 1.47 s ± 138 ms per loop
# this solution
%timeit pd.concat(all_my_items, axis=0).groupby(level=0).sum()  # 270 ms ± 11.3 ms
# verify same result
print((pd.concat(all_my_items, axis=1).fillna(0).sum(axis=1) ==
       pd.concat(all_my_items, axis=0).groupby(level=0).sum()).all())  # True
So the result is the same and the operation is faster.

Speed up distance and summary computation between two HUGE multi-dimensional arrays in Python

I have only a year of experience using Python. I would like to compute summary statistics based on two multi-dimensional arrays, DF_All and DF_On. Both have X, Y values. A function computes the distance as sqrt((X-X0)^2 + (Y-Y0)^2) and generates the summaries shown in the code below. My question is: is there any way to make this code run faster? I would prefer a native Python approach, but other strategies (like numba) are also welcome.
The example (toy) code below takes only 50 milliseconds to run on my Windows 7 x64 desktop. But my DF_All has more than 10,000 rows, and I need to do this calculation a huge number of times, which leads to a huge execution time.
import numpy as np
import pandas as pd
import json, random

# create data
KY = ['ER','WD','DF']
DS = ['On','Off']
DF_All = pd.DataFrame({'KY': np.random.choice(KY, 20, replace=True),
                       'DS': np.random.choice(DS, 20, replace=True),
                       'X': random.sample(range(1, 100), 20),
                       'Y': random.sample(range(1, 100), 20)})
DF_On = DF_All[DF_All['DS'] == 'On']

# function
def get_values(DF_All, X=list(DF_On['X'])[0], Y=list(DF_On['Y'])[0]):
    dist_vector = np.sqrt((DF_All['X'] - X)**2 + (DF_All['Y'] - Y)**2)  # computes distance
    DF_All = DF_All[dist_vector < 35]  # filters if distance is < 35
    # print(DF_All.shape)
    DS_summary = [sum(DF_All['DS'] == x) for x in ['On', 'Off']]  # get summary
    KY_summary = [sum(DF_All['KY'] == x) for x in ['ER', 'WD', 'DF']]  # get summary
    joined_summary = DS_summary + KY_summary  # join two summary lists
    return joined_summary

Array_On = DF_On.values.tolist()  # convert to array then to list
Values = [get_values(DF_All, ZZ[2], ZZ[3]) for ZZ in Array_On]  # DS and KY summary for all rows of the Array_On list
Array_Updated = [x + y for x, y in zip(Array_On, Values)]  # appending the summary list to each Array_On row
Array_Updated = pd.DataFrame(Array_Updated)  # converting to pandas dataframe
print(Array_Updated)
Here's an approach making use of vectorization by getting rid of the looping there -
from scipy.spatial.distance import cdist

def get_values_vectorized(DF_All, Array_On):
    a = DF_All[['X','Y']].values
    b = np.array(Array_On)[:, 2:].astype(int)
    v_mask = (cdist(b, a) < 35).astype(int)
    DF_DS = DF_All.DS.values
    DS_sums = v_mask.dot(DF_DS[:, None] == ['On','Off'])
    DF_KY = DF_All.KY.values
    KY_sums = v_mask.dot(DF_KY[:, None] == ['ER','WD','DF'])
    return np.column_stack((DS_sums, KY_sums))
Using a bit less memory, a tweaked one -
def get_values_vectorized_v2(DF_All, Array_On):
    a = DF_All[['X','Y']].values
    b = np.array(Array_On)[:, 2:].astype(int)
    v_mask = cdist(a, b) < 35
    DF_DS = DF_All.DS.values
    DS_sums = [((DF_DS == x)[:, None] & v_mask).sum(0) for x in ['On','Off']]
    DF_KY = DF_All.KY.values
    KY_sums = [((DF_KY == x)[:, None] & v_mask).sum(0) for x in ['ER','WD','DF']]
    out = np.column_stack((np.column_stack(DS_sums), np.column_stack(KY_sums)))
    return out
Runtime test -
Case #1 : Original sample size of 20
In [417]: %timeit [get_values(DF_All,ZZ[2],ZZ[3]) for ZZ in Array_On]
100 loops, best of 3: 16.3 ms per loop
In [418]: %timeit get_values_vectorized(DF_All, Array_On)
1000 loops, best of 3: 386 µs per loop
Case #2: Sample size of 2000
In [420]: %timeit [get_values(DF_All,ZZ[2],ZZ[3]) for ZZ in Array_On]
1 loops, best of 3: 1.39 s per loop
In [421]: %timeit get_values_vectorized(DF_All, Array_On)
100 loops, best of 3: 18 ms per loop
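If the end goal is the same Array_Updated frame as in the question, a small hedged usage sketch (assuming the vectorized output keeps the row order of Array_On, which it does here) is:
# one summary row per 'On' row, in the same order as Array_On
summaries = get_values_vectorized(DF_All, Array_On)
Array_Updated = pd.DataFrame([row + list(summary) for row, summary in zip(Array_On, summaries)])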

Why is len so much more efficient on a DataFrame than on the underlying numpy array?

I've noticed that using len on a DataFrame is far quicker than using len on the underlying numpy array. I don't understand why. Accessing the same information via shape isn't any help either. This is more relevant as I try to get at the number of columns and number of rows. I was always debating which method to use.
I put together the following experiment and it's very clear that I will be using len on the dataframe. But can someone explain why?
from timeit import timeit
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

ns = np.power(10, np.arange(6))
results = pd.DataFrame(
    columns=ns,
    index=pd.MultiIndex.from_product(
        [['len', 'len(values)', 'shape'], ns]))

dfs = {(n, m): pd.DataFrame(np.zeros((n, m))) for n in ns for m in ns}

for n, m in dfs.keys():
    df = dfs[(n, m)]
    results.loc[('len', n), m] = timeit('len(df)', 'from __main__ import df', number=10000)
    results.loc[('len(values)', n), m] = timeit('len(df.values)', 'from __main__ import df', number=10000)
    results.loc[('shape', n), m] = timeit('df.values.shape', 'from __main__ import df', number=10000)

fig, axes = plt.subplots(2, 3, figsize=(9, 6), sharex=True, sharey=True)
for i, (m, col) in enumerate(results.iteritems()):
    r, c = i // 3, i % 3
    col.unstack(0).plot.bar(ax=axes[r, c], title=m)
From looking at the various methods, the main reason is that constructing the numpy array df.values takes the lion's share of the time.
len(df) and df.shape
These two are fast because they are essentially
len(df.index._data)
and
(len(df.index._data), len(df.columns._data))
where _data is a numpy.ndarray. Thus, using df.shape should be half as fast as len(df) because it's finding the length of both df.index and df.columns (both of type pd.Index)
len(df.values) and df.values.shape
Let's say you had already extracted vals = df.values. Then
In [1]: df = pd.DataFrame(np.random.rand(1000, 10), columns=range(10))
In [2]: vals = df.values
In [3]: %timeit len(vals)
10000000 loops, best of 3: 35.4 ns per loop
In [4]: %timeit vals.shape
10000000 loops, best of 3: 51.7 ns per loop
Compared to:
In [5]: %timeit len(df.values)
100000 loops, best of 3: 3.55 µs per loop
So the bottleneck is not len but how df.values is constructed. If you examine the implementation of pandas.DataFrame.values, you'll find the (roughly equivalent) methods:
def values(self):
    return self.as_matrix()

def as_matrix(self, columns=None):
    self._consolidate_inplace()
    if self._AXIS_REVERSED:
        return self._data.as_matrix(columns).T
    if len(self._data.blocks) == 0:
        return np.empty(self._data.shape, dtype=float)
    if columns is not None:
        mgr = self._data.reindex_axis(columns, axis=0)
    else:
        mgr = self._data
    if self._data._is_single_block or not self._data.is_mixed_type:
        return mgr.blocks[0].get_values()
    else:
        dtype = _interleaved_dtype(self.blocks)
        result = np.empty(self.shape, dtype=dtype)
        if result.shape[0] == 0:
            return result
        itemmask = np.zeros(self.shape[0])
        for blk in self.blocks:
            rl = blk.mgr_locs
            result[rl.indexer] = blk.get_values(dtype)
            itemmask[rl.indexer] = 1
        # vvv here is your final array, assuming you actually have data
        return result

def _consolidate_inplace(self):
    def f():
        if self._data.is_consolidated():
            return self._data
        bm = self._data.__class__(self._data.blocks, self._data.axes)
        bm._is_consolidated = False
        bm._consolidate_inplace()
        return bm
    self._protect_consolidate(f)

def _protect_consolidate(self, f):
    blocks_before = len(self._data.blocks)
    result = f()
    if len(self._data.blocks) != blocks_before:
        if i is not None:
            self._item_cache.pop(i, None)
        else:
            self._item_cache.clear()
    return result
Note that df._data is a pandas.core.internals.BlockManager, not a numpy.ndarray.
If you look at __len__ for pd.DataFrame, they actually just call len(df.index):
https://github.com/pandas-dev/pandas/blob/master/pandas/core/frame.py#L770
For a RangeIndex, this is a really fast operation since it's just a subtraction and division of values stored within the index object:
return max(0, -(-(self._stop - self._start) // self._step))
https://github.com/pandas-dev/pandas/blob/master/pandas/indexes/range.py#L458
I suspect that if you tested with a non-RangeIndex, the difference in times would be much more similar. I'll probably try modifying what you have to see if that's the case.
EDIT: After a quick check, the speed difference still seems to hold even with a standard Index, so there must still be some other optimization there.
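For completeness, a quick hedged sketch of that check (my own, not the answer author's): swap the RangeIndex for a materialized integer Index and time len(df) against len(df.values) again.
from timeit import timeit
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros((100000, 10)))
df.index = pd.Index(np.arange(len(df)))  # a plain integer Index instead of a RangeIndex

print(timeit('len(df)', 'from __main__ import df', number=10000))
print(timeit('len(df.values)', 'from __main__ import df', number=10000))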

Is there a lexicographical version of searchsorted in numpy?

I have two arrays which are lex-sorted.
In [2]: a = np.array([1,1,1,2,2,3,5,6,6])
In [3]: b = np.array([10,20,30,5,10,100,10,30,40])
In [4]: ind = np.lexsort((b, a)) # sorts elements first by a and then by b
In [5]: print a[ind]
[1 1 1 2 2 3 5 6 6]
In [7]: print b[ind]
[ 10 20 30 5 10 100 10 30 40]
I want to do a binary search for (2, 7) and (5, 150) expecting (4, 7) as the answer.
In [6]: np.lexsearchsorted((a,b), ([2, 5], [7,150]))
We have searchsorted function but that works only on 1D arrays.
EDIT: Edited to reflect comment.
def comp_leq(t1, t2):
    if (t1[0] > t2[0]) or ((t1[0] == t2[0]) and (t1[1] > t2[1])):
        return 0
    else:
        return 1

def bin_search(L, item):
    from math import floor
    x = L[:]
    while len(x) > 1:
        index = int(floor(len(x)/2) - 1)
        # Check item
        if comp_leq(x[index], item):
            x = x[index+1:]
        else:
            x = x[:index+1]
    out = L.index(x[0])
    # If greater than all
    if item >= L[-1]:
        return len(L)
    else:
        return out

def lexsearch(a, b, items):
    z = zip(a, b)
    return [bin_search(z, item) for item in items]

if __name__ == '__main__':
    a = [1,1,1,2,2,3,5,6,6]
    b = [10,20,30,5,10,100,10,30,40]
    print lexsearch(a, b, ([2,7],[5,150]))  # prints [4,7]
This code seems to do it for a set of (exactly) 2 lexsorted arrays.
You might be able to make it faster if you create a set of the values in values[-1] and then build a dictionary with the boundaries for them (see the sketch after the usage example below).
I haven't checked cases other than the posted one, so please verify it's not bugged.
def lexsearchsorted_2(arrays, values, side='left'):
    assert len(arrays) == 2
    assert (np.lexsort(arrays) == range(len(arrays[0]))).all()
    # here it will be faster to work on all equal values in 'values[-1]' in one go
    boundries_l = np.searchsorted(arrays[-1], values[-1], side='left')
    boundries_r = np.searchsorted(arrays[-1], values[-1], side='right')
    # a recursive definition here will make it work for more than 2 lexsorted arrays
    return tuple([boundries_l[i] +
                  np.searchsorted(arrays[-2][boundries_l[i]:boundries_r[i]],
                                  values[-2][i],
                                  side=side)
                  for i in range(len(boundries_l))])
Usage:
import numpy as np
a = np.array([1,1,1,2,2,3,5,6,6])
b = np.array([10,20,30,5,10,100,10,30,40])
lexsearchsorted_2((b, a), ([7,150], [2, 5])) # return (4, 7)
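Here is a sketch of the "dictionary of boundaries" idea mentioned above (my own addition, not part of the original answer): compute the (left, right) slice of the primary key once per distinct query value, so repeated primary values only pay for the secondary search.
import numpy as np

def lexsearchsorted_2_cached(arrays, values, side='left'):
    primary, secondary = arrays[-1], arrays[-2]
    # one (left, right) boundary pair per distinct primary query value
    bounds = {u: (np.searchsorted(primary, u, side='left'),
                  np.searchsorted(primary, u, side='right'))
              for u in np.unique(values[-1])}
    out = []
    for v1, v2 in zip(values[-1], values[-2]):
        lo, hi = bounds[v1]
        out.append(lo + np.searchsorted(secondary[lo:hi], v2, side=side))
    return tuple(out)

lexsearchsorted_2_cached((b, a), ([7, 150], [2, 5]))  # (4, 7), matching the call above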
I ran into the same issue and came up with a different solution. You can treat the multi-column data instead as single entries using a structured data type. A structured data type will allow one to use argsort/sort on the data (instead of lexsort, although lexsort appears faster at this stage) and then use the standard searchsorted. Here is an example:
import numpy as np
from itertools import repeat
# Setup our input data
# Every row is an entry, every column what we want to sort by
# Unlike lexsort, this takes columns in decreasing priority, not increasing
a = np.array([1,1,1,2,2,3,5,6,6])
b = np.array([10,20,30,5,10,100,10,30,40])
data = np.transpose([a,b])
# Sort the data
data = data[np.lexsort(data.T[::-1])]
# Convert to a structured data-type
dt = np.dtype(zip(repeat(''), repeat(data.dtype, data.shape[1])))  # the structured dtype
# the dtype change leaves a trailing 1 dimension; ascontiguousarray is required for the dtype change
data = np.ascontiguousarray(data).view(dt).squeeze(-1)
# You can also first convert to the structured data-type with the two lines above then use data.sort()/data.argsort()/np.sort(data)
# Search the data
values = np.array([(2,7),(5,150)], dtype=dt) # note: when using structured data types the rows must be a tuple
pos = np.searchsorted(data, values)
# pos is (4,7) in this example, exactly what you would want
This works for any number of columns, uses the built-in numpy functions, the columns remain in the "logical" order (decreasing priority), and it should be quite fast.
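To illustrate the "any number of columns" point, here is a small hedged three-key sketch along the same lines (the data is made up for illustration):
import numpy as np
from itertools import repeat

a = np.array([1, 1, 1, 2, 2])
b = np.array([10, 10, 30, 5, 5])
c = np.array([7, 9, 1, 2, 8])
data = np.transpose([a, b, c])          # columns in decreasing priority
data = data[np.lexsort(data.T[::-1])]   # lexsort-style ordering

dt = np.dtype(list(zip(repeat(''), repeat(data.dtype, data.shape[1]))))
data = np.ascontiguousarray(data).view(dt).squeeze(-1)

values = np.array([(1, 10, 8), (2, 5, 3)], dtype=dt)
np.searchsorted(data, values)           # array([1, 4]) for this made-up data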
I compared the two numpy-based methods time-wise.
#1 is the recursive method from #j0ker5 (the one below extends his example with his suggestion of recursion and works with any number of lexsorted rows)
#2 is the structured array from me
They both take the same inputs, basically like searchsorted except a and v are as per lexsort.
import numpy as np

def lexsearch1(a, v, side='left', sorter=None):
    def _recurse(a, v):
        if a.shape[1] == 0: return 0
        if a.shape[0] == 1: return a.squeeze(0).searchsorted(v.squeeze(0), side)
        bl = np.searchsorted(a[-1,:], v[-1], side='left')
        br = np.searchsorted(a[-1,:], v[-1], side='right')
        return bl + _recurse(a[:-1,bl:br], v[:-1])

    a, v = np.asarray(a), np.asarray(v)
    if v.ndim == 1: v = v[:,np.newaxis]
    assert a.ndim == 2 and v.ndim == 2 and a.shape[0] == v.shape[0] and a.shape[0] > 1
    if sorter is not None: a = a[:,sorter]
    bl = np.searchsorted(a[-1,:], v[-1,:], side='left')
    br = np.searchsorted(a[-1,:], v[-1,:], side='right')
    for i in xrange(len(bl)): bl[i] += _recurse(a[:-1,bl[i]:br[i]], v[:-1,i])
    return bl

def lexsearch2(a, v, side='left', sorter=None):
    from itertools import repeat
    a, v = np.asarray(a), np.asarray(v)
    if v.ndim == 1: v = v[:,np.newaxis]
    assert a.ndim == 2 and v.ndim == 2 and a.shape[0] == v.shape[0] and a.shape[0] > 1
    a_dt = np.dtype(zip(repeat(''), repeat(a.dtype, a.shape[0])))
    v_dt = np.dtype(zip(a_dt.names, repeat(v.dtype, a.shape[0])))
    a = np.asfortranarray(a[::-1,:]).view(a_dt).squeeze(0)
    v = np.asfortranarray(v[::-1,:]).view(v_dt).squeeze(0)
    return a.searchsorted(v, side, sorter).ravel()

a = np.random.randint(100, size=(2,10000))  # values to sort, rows in increasing priority
v = np.random.randint(100, size=(2,10000))  # values to search for, rows in increasing priority
sorted_idx = np.lexsort(a)
a_sorted = a[:,sorted_idx]
And the timing results (in iPython):
# 2 rows
%timeit lexsearch1(a_sorted, v)
10 loops, best of 3: 33.4 ms per loop
%timeit lexsearch2(a_sorted, v)
100 loops, best of 3: 14 ms per loop
# 10 rows
%timeit lexsearch1(a_sorted, v)
10 loops, best of 3: 103 ms per loop
%timeit lexsearch2(a_sorted, v)
100 loops, best of 3: 14.7 ms per loop
Overall the structured-array approach is faster, and it can be made even faster if you design it to work with the flipped and transposed versions of a and v. It gets even faster as the number of rows/keys goes up, barely slowing down when going from 2 rows to 10 rows.
I did not notice any significant timing difference between using a_sorted or a and sorter=sorted_idx so I left those out for clarity.
I believe that a really fast method could be made using Cython, but this is as fast as it is going to get with pure Python and numpy.
