Panda dataframe Creation a new column by comparing all other row - python

I have the following example:
import pandas as pd
import numpy as np
import time
def function(value,df):
return len(df[(df['A']<value)])
df= pd.DataFrame(np.random.randint(0,100,size=(30000, 1)), columns=['A'])
start=time.time()
df['B']=pd.Series([len(df[df['A']<value]) for value in df['A']])
end=time.time()
print("list comprehension time:",end-start)
start=time.time()
df['B']=df['A'].apply(function,df=df)
end=time.time()
print("apply time:",end-start)
start=time.time()
series = []
for index, row in df.iterrows():
series.append(len(df[df['A']<row['A']]))
df['B'] = series
end=time.time()
print("loop time:",end-start)
Output:
time: 19.54859232902527
time: 23.598857402801514
time: 26.441001415252686
This example create a new column by counting all the row which value is superior to the current value of the row.
For this type of issue (when I created a new column, after comparing for a row all other row of the dataframe), I have tried the apply function,list comprehension and classic loop but I think they are slow.
Is there a faster way?
Ps: A specialized solution for this example is not the thing which interested me the most. I prefer a general solution for this type of issue.
An another example can be: for a dataframe with a columns of string,create a new column by counting for each row the number of string in the dataframe which begin by the string first letter.

Usually I am using numpy broadcast for this type task
%timeit df['B']=pd.Series([len(df[df['A']<value]) for value in df['A']])
1 loop, best of 3: 25.4 s per loop
%timeit df['B']=(df.A.values<df.A.values[:,None]).sum(1)
1 loop, best of 3: 1.74 s per loop
#df= pd.DataFrame(np.random.randint(0,100,size=(30000, 1)), columns=['A'])

In general, broadcasting as Wen's solution is generally the fastest. In this case, looks like rank does the job.
np.random.seed(1)
df= pd.DataFrame(np.random.randint(0,100,size=(30000, 1)), columns=['A'])
%timeit df.A.rank()-1
2.71 ms ± 119 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Related

Compare with another column value

train.loc[:,'nd_mean_2021-04-15':'nd_mean_2021-08-27'] > train['q_5']
I get Automatic reindexing on DataFrame vs Series comparisons is deprecated and will raise ValueError in a future version. Do left, right = left.align(right, axis=1, copy=False)before e.g.left == right` and something strange output with a lot of columns, but I did expect cell values masked with True or False for calculate sum on next step.
Comparing each columns separately works just fine
train['nd_mean_2021-04-15'] > train['q_5']
But works slowly and messy code.
I've tested your original solution, and two additional ways of performing this comparison you want to make.
To cut to the chase, the following option had the smallest execution time:
%%timeit
sliced_df = df.loc[:, 'nd_mean_2021-04-15':'nd_mean_2021-08-27']
comparisson_df = pd.DataFrame({col: df['q_5'] for col in sliced_df.columns})
(sliced_df > comparisson_df)
# 1.46 ms ± 610 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Drawback: it's little bit messy and requires you to create 2 new objects (sliced_df and comparisson_df)
Option 2: Using DataFrame.apply (slower but more readable)
The second option although slower than your original and the above implementations, in my opinion is the cleanest and easiest to read of them all. If you're not trying to process large amounts of data (I assume not, since you're using pandas instead of Dask or Spark that are tools more suitable for processing large volumes of data) then it's worth bringing it to the discussion table:
%%timeit
df.loc[:, 'nd_mean_2021-04-15':'nd_mean_2021-08-27'].apply(lambda col: col > df['q_5'])
# 5.66 ms ± 897 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Original Solution
I've also tested the performance of your original implementation and here's what I got:
%%timeit
df.loc[:, 'nd_mean_2021-04-15':'nd_mean_2021-08-27'] > df['q_5']
# 2.02 ms ± 175 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Side-Note: If the FutureWarning message is bothering you, there's always the option to ignore them, adding the following code after your script imports:
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
DataFrame Used for Testing
All of the above implementations used the same dataframe, that I created using the following code:
import pandas as pd
import numpy as np
columns = list(
map(
lambda value: f'nd_mean_{value}',
pd.date_range('2021-04-15', '2021-08-27', freq='W').to_series().dt.strftime('%Y-%m-%d').to_list()
)
)
df = pd.DataFrame(
{col: np.random.randint(0, 100, 10) for col in [*columns, 'q_5']}
)
Screenshots

How to quickly subset many dataframes?

I have 180 DataFrame objects, each one has 3130 rows and it's about 300KB in memory.
The index is a DatetimeIndex, business days from 2000-01-03 to 2011-12-31:
from datetime import datetime
import pandas as pd
freq = pd.tseries.offsets.BDay()
index = pd.date_range(datetime(2000,1,3), datetime(2011,12,31), freq=freq)
df = pd.DataFrame(index=index)
df['A'] = 1000.0
df['B'] = 2000.0
df['C'] = 3000.0
df['D'] = 4000.0
df['E'] = 5000.0
df['F'] = True
df['G'] = 1.0
df['H'] = 100.0
I preprocess all the data taking advantage of numpy/pandas vectorization, then I have to loop through the dataframes day by day. To prevent the possibility of 'look ahead bias' and get data from the future I must be sure each day I only return a subset of my dataframes, up to that datapoint. I explain: if the current datapoint I am processing is datetime(2010,5,15) I need data from datetime(2000,1,3) to datetime(2010,5,15). You should not be able to access data more recent than datetime(2010,5,15). With this subset I'll make other computations I can't vectorize because they are path dependent.
I modified my original loop like this:
def get_data(datapoint):
return df.loc[:datapoint]
calendar = df.index
for datapoint in calendar:
x = get_data(datapoint)
This kind of code is painfully slow. What is my best option to improve its speed?
If I do not try to prevent the look ahead bias my production code takes about 3 minutes to run but it is too risky. With code like this it takes 13 minutes and this is unacceptable.
%%timeit
A slightly faster option is using iloc instead of loc but it is still slow:
def get_data2(datapoint):
idx = df.index.get_loc(datapoint)
return df.iloc[:idx]
for datapoint in calendar:
x = get_data(datapoint)
371 ms ± 23.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
for datapoint in calendar:
x = get_data2(datapoint)
327 ms ± 7.05 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
The original code, which was not trying to prevent the possibility of look ahead bias, simply returned the whole DataFrame when called for each datapoint. In this example is 100 time faster, real code is 4 times faster.
def get_data_no_check():
return df
for datapoint in calendar:
x = get_data_no_check()
2.87 ms ± 89.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
See if this work for you:
datapoint_range = pd.date_range(datetime(2000,1,3), datetime.now(), freq=freq)
datapoint = datapoint_range[-1]
Logic is: replacing the ending date to be today so as to ensure not future date. Then get the last date of the range.
Then use your df.loc[:datapoint] to get the range you want.
I solved it like this: first I preprocess all my data in the DataFrame to take advantage of pandas vectorization then I convert it into a dict of dict and I iterate over it preventing the possibility of 'look ahead bias'. Since data are already preprocessed I can avoid the DataFrame overhead. The increase in processing speed in production code let me speechless: down from more than 30 minutes to 40 seconds!
# Convert the DataFrame into a dict of dict
for s, data in self._data.items():
self._data[s] = data.to_dict(orient='index')

Speeding up operations on large arrays & datasets (Pandas slow, Numpy better, further improvements?)

I have a large dataset comprising millions of rows and around 6 columns. The data is currently in a Pandas dataframe and I'm looking for the fastest way to operate on it. For example, let's say I want to drop all the rows where the value in one column is "1".
Here's my minimal working example:
# Create dummy data arrays and pandas dataframe
array_size = int(5e6)
array1 = np.random.rand(array_size)
array2 = np.random.rand(array_size)
array3 = np.random.rand(array_size)
array_condition = np.random.randint(0, 3, size=array_size)
df = pd.DataFrame({'array_condition': array_condition, 'array1': array1, 'array2': array2, 'array3': array3})
def method1():
df_new = df.drop(df[df.array_condition == 1].index)
EDIT: As Henry Yik pointed out in the comments, a faster Pandas approach is this:
def method1b():
df_new = df[df.array_condition != 1]
I believe that Pandas can be quite slow at this sort of thing, so I also implemented a method using numpy, processing each column as a separate array:
def method2():
masking = array_condition != 1
array1_new = array1[masking]
array2_new = array2[masking]
array3_new = array3[masking]
array_condition_new = array_condition[masking]
And the results:
%timeit method1()
625 ms ± 7.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit methodb()
158 ms ± 7.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit method2()
138 ms ± 3.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
So we do see a slight significant performance boost using numpy. However, this is at the cost of much less readable code (i.e. having to create a mask and apply it to each array). This method doesn't seem as scalable either as if I have, say, 30 columns of data, I'll need a lot of lines of code that apply the mask to every array! Additionally, it would be useful to allow optional columns, so this method may fail trying to operate on arrays which are empty.
Therefore, I have 2 questions:
1) Is there a cleaner / more flexible way to implement this in numpy?
2) Or better, is there any higher performance method I could use here? e.g. JIT (numba?), Cython or something else?
PS, in practice, in-place operations can be used, replacing the old array with the new one once data is dropped
Part 1: Pandas and (maybe) Numpy
Compare your method1b and method2:
method1b generates a DataFrame, which is probably what you want,
method2 generates a Numpy array, so to get fully comparable result,
you should subsequently generate a DataFrame from it.
So I changed your method2 to:
def method2():
masking = array_condition != 1
array1_new = array1[masking]
array2_new = array2[masking]
array3_new = array3[masking]
array_condition_new = array_condition[masking]
df_new = pd.DataFrame({ 'array_condition': array_condition[masking],
'array1': array1_new, 'array2': array2_new, 'array3': array3_new})
and then compared execution times (using %timeit).
The result was that my (expanded) version of method2 executed about 5% longer
than method1b (check on your own).
So my opinion is that as long as a single operation is concerned,
it is probably better to stay with Pandas.
But if you want to perform on your source DataFrame a couple of operations
in sequence and / or you are satisfied with the result as a Numpy array,
it is worth to:
Call arr = df.values to get the underlying Numpy array.
Perform all required operations on it using Numpy methods.
(Optionally) create a DataFrame from the final reslut.
I tried Numpy version of method1b:
def method3():
a = df.values
arr = a[a[:,0] != 1]
but the execution time was about 40 % longer.
The reason is probably that Numpy array has all elements of the
same type, so array_condition column is coerced to float and then
the whole Numpy array is created, what takes some time.
Part 2: Numpy and Numba
An alternative to consider is to use Numba package - a Just-In-Time
Python compiler.
I made such test:
Created a Numpy array (as a preliminary step):
a = df.values
The reason is that JIT compiled methods are able to use Numpy methods and types,
but not those of Pandas.
To perform the test, I used almost the same method as above,
but with #njit annotation (requires from numba import njit):
#njit
def method4():
arr = a[a[:,0] != 1]
This time:
The execution time was about 45 % of the time for method1b.
But since a = df.values has been executed before the test loop,
there are doubts whether this result is comparable with earlier tests.
Anyway, try Numba on your own, maybe it will be an interesting option for you.
You may find using numpy.where useful here. It converts a Boolean mask to array indices, making life much cheaper. Combining this with numpy.vstack allows for some memory-cheap operations:
def method3():
wh = np.where(array_condition == 1)
return np.vstack(tuple(col[wh] for col in (array1, array2, array3)))
This gives the following timeits:
>>> %timeit method2()
180 ms ± 6.66 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit method3()
96.9 ms ± 2.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Tuple unpacking allows the operation to be fairly light on memory, as when the object is vstack-ed back together, it is smaller. If you need to get your columns out of a DataFrame directly, the following code snippet may be useful:
def method3b():
wh = np.where(array_condition == 1)
col_names = ['array1','array2','array3']
return np.vstack(tuple(col[wh] for col in tuple(df[col_name].to_numpy()
for col_name in col_names)))
This allows one to grab columns by name from the DataFrame, which are then tuple unpacked on the fly. The speed is about the same:
>>> %timeit method3b()
96.6 ms ± 3.09 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Enjoy!

What's the fastest way to acces a Pandas DataFrame?

I have a DataFrame df with 541 columns, and I need to save all unique pairs of its column names into the rows of a separate DataFrame, repeated 8 times each.
I thought I would create an empty DataFrame fp, double loop through df's column names, insert into every 8th row, and fill in the blanks with the last available value.
When I tried to do this though I was baffled by how long it's taking. With 541 columns I only have to write 146,611 times yet it's taking well over 20 minutes. This seems egregious for just data access. Where is the problem and how can I solve it? It takes less time than that for Pandas to produce a correlation matrix with the columns so I must me doing something wrong.
Here's a reproducible example of what I mean:
fp = np.empty(shape = (146611, 10))
fp.fill(np.nan)
fp = pd.DataFrame(fp)
%timeit for idx in range(0, len(fp)): fp.iloc[idx, 0] = idx
# 1 loop, best of 3: 22.3 s per loop
Don't do iloc/loc/chained-indexing. Using the NumPy interface alone increases speed by ~180x. If you further remove element access, we can bump this to 180,000x.
fp = np.empty(shape = (146611, 10))
fp.fill(np.nan)
fp = pd.DataFrame(fp)
# this confirms how slow data access is on my computer
%timeit for idx in range(0, len(fp)): fp.iloc[idx, 0] = idx
1 loops, best of 3: 3min 9s per loop
# this accesses the underlying NumPy array, so you can directly set the data
%timeit for idx in range(0, len(fp)): fp.values[idx, 0] = idx
1 loops, best of 3: 1.19 s per loop
This is because there's extensive code that goes in the Python layer for this fancing indexing, taking ~10µs per loop. Using Pandas indexing should be done to retrieve entire subsets of data, which you then use to do vectorized operations on the entire dataframe. Individual element access is glacial: using Python dictionaries will give you a > 180 fold increase in performance.
Things get a lot better when you access columns or rows instead of individual elements: 3 orders of magnitude better.
# set all items in 1 go.
%timeit fp[0] = np.arange(146611)
1000 loops, best of 3: 814 µs per loop
Moral
Don't try to access individual elements via chained indexing, loc, or iloc. Generate a NumPy array in a single allocation, from a Python list (or a C-interface if performance is absolutely critical), and then perform operations on entire columns or dataframes.
Using NumPy arrays and performing operations directly on columns rather than individual elements, we got a whopping 180,000+ fold increase in performance. Not too shabby.
Edit
Comments from #kushy suggest Pandas may have optimized indexing in certain cases since I originally wrote this answer. Always profile your own code, and your mileage may vary.
Alexander's answer was the fastest for me as of 2020-01-06 when using .is_numpy() instead of .values. Tested in Jupyter Notebook on Windows 10. Pandas version = 0.24.2
import numpy as np
import pandas as pd
fp = np.empty(shape = (146611, 10))
fp.fill(np.nan)
fp = pd.DataFrame(fp)
pd.__version__ # '0.24.2'
def func1():
# Asker badmax solution
for idx in range(0, len(fp)):
fp.iloc[idx, 0] = idx
def func2():
# Alexander Huszagh solution 1
for idx in range(0, len(fp)):
fp.to_numpy()[idx, 0] = idx
def func3():
# user4322543 answer to
# https://stackoverflow.com/questions/34855859/is-there-a-way-in-pandas-to-use-previous-row-value-in-dataframe-apply-when-previ
new = []
for idx in range(0, len(fp)):
new.append(idx)
fp[0] = new
def func4():
# Alexander Huszagh solution 2
fp[0] = np.arange(146611)
%timeit func1
19.7 ns ± 1.08 ns per loop (mean ± std. dev. of 7 runs, 500000000 loops each)
%timeit func2
19.1 ns ± 0.465 ns per loop (mean ± std. dev. of 7 runs, 500000000 loops each)
%timeit func3
21.1 ns ± 3.26 ns per loop (mean ± std. dev. of 7 runs, 500000000 loops each)
%timeit func4
24.7 ns ± 0.889 ns per loop (mean ± std. dev. of 7 runs, 50000000 loops each)

Pandas DataFrame performance

Pandas is really great, but I am really surprised by how inefficient it is to retrieve values from a Pandas.DataFrame. In the following toy example, even the DataFrame.iloc method is more than 100 times slower than a dictionary.
The question: Is the lesson here just that dictionaries are the better way to look up values? Yes, I get that that is precisely what they were made for. But I just wonder if there is something I am missing about DataFrame lookup performance.
I realize this question is more "musing" than "asking" but I will accept an answer that provides insight or perspective on this. Thanks.
import timeit
setup = '''
import numpy, pandas
df = pandas.DataFrame(numpy.zeros(shape=[10, 10]))
dictionary = df.to_dict()
'''
f = ['value = dictionary[5][5]', 'value = df.loc[5, 5]', 'value = df.iloc[5, 5]']
for func in f:
print func
print min(timeit.Timer(func, setup).repeat(3, 100000))
value = dictionary[5][5]
0.130625009537
value = df.loc[5, 5]
19.4681699276
value = df.iloc[5, 5]
17.2575249672
A dict is to a DataFrame as a bicycle is to a car.
You can pedal 10 feet on a bicycle faster than you can start a car, get it in gear, etc, etc. But if you need to go a mile, the car wins.
For certain small, targeted purposes, a dict may be faster.
And if that is all you need, then use a dict, for sure! But if you need/want the power and luxury of a DataFrame, then a dict is no substitute. It is meaningless to compare speed if the data structure does not first satisfy your needs.
Now for example -- to be more concrete -- a dict is good for accessing columns, but it is not so convenient for accessing rows.
import timeit
setup = '''
import numpy, pandas
df = pandas.DataFrame(numpy.zeros(shape=[10, 1000]))
dictionary = df.to_dict()
'''
# f = ['value = dictionary[5][5]', 'value = df.loc[5, 5]', 'value = df.iloc[5, 5]']
f = ['value = [val[5] for col,val in dictionary.items()]', 'value = df.loc[5]', 'value = df.iloc[5]']
for func in f:
print(func)
print(min(timeit.Timer(func, setup).repeat(3, 100000)))
yields
value = [val[5] for col,val in dictionary.iteritems()]
25.5416321754
value = df.loc[5]
5.68071913719
value = df.iloc[5]
4.56006002426
So the dict of lists is 5 times slower at retrieving rows than df.iloc. The speed deficit becomes greater as the number of columns grows. (The number of columns is like the number of feet in the bicycle analogy. The longer the distance, the more convenient the car becomes...)
This is just one example of when a dict of lists would be less convenient/slower than a DataFrame.
Another example would be when you have a DatetimeIndex for the rows and wish to select all rows between certain dates. With a DataFrame you can use
df.loc['2000-1-1':'2000-3-31']
There is no easy analogue for that if you were to use a dict of lists. And the Python loops you would need to use to select the right rows would again be terribly slow compared to the DataFrame.
It seems the performance difference is much smaller now (0.21.1 -- I forgot what was the version of Pandas in the original example). Not only the performance gap between dictionary access and .loc reduced (from about 335 times to 126 times slower), loc (iloc) is less than two times slower than at (iat) now.
In [1]: import numpy, pandas
...: ...: df = pandas.DataFrame(numpy.zeros(shape=[10, 10]))
...: ...: dictionary = df.to_dict()
...:
In [2]: %timeit value = dictionary[5][5]
85.5 ns ± 0.336 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [3]: %timeit value = df.loc[5, 5]
10.8 µs ± 137 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [4]: %timeit value = df.at[5, 5]
6.87 µs ± 64.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [5]: %timeit value = df.iloc[5, 5]
14.9 µs ± 114 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [6]: %timeit value = df.iat[5, 5]
9.89 µs ± 54.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [7]: print(pandas.__version__)
0.21.1
---- Original answer below ----
+1 for using at or iat for scalar operations. Example benchmark:
In [1]: import numpy, pandas
...: df = pandas.DataFrame(numpy.zeros(shape=[10, 10]))
...: dictionary = df.to_dict()
In [2]: %timeit value = dictionary[5][5]
The slowest run took 34.06 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 310 ns per loop
In [4]: %timeit value = df.loc[5, 5]
10000 loops, best of 3: 104 µs per loop
In [5]: %timeit value = df.at[5, 5]
The slowest run took 6.59 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 9.26 µs per loop
In [6]: %timeit value = df.iloc[5, 5]
10000 loops, best of 3: 98.8 µs per loop
In [7]: %timeit value = df.iat[5, 5]
The slowest run took 6.67 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 9.58 µs per loop
It seems using at (iat) is about 10 times faster than loc (iloc).
I encountered the same problem. you can use at to improve.
"Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of overhead in order to figure out what you’re asking for. If you only want to access a scalar value, the fastest way is to use the at and iat methods, which are implemented on all of the data structures."
see official reference http://pandas.pydata.org/pandas-docs/stable/indexing.html chapter "Fast scalar value getting and setting"
I experienced different phenomenon about accessing the dataframe row.
test this simple example on dataframe about 10,000,000 rows.
dictionary rocks.
def testRow(go):
go_dict = go.to_dict()
times = 100000
ot= time.time()
for i in range(times):
go.iloc[100,:]
nt = time.time()
print('for iloc {}'.format(nt-ot))
ot= time.time()
for i in range(times):
go.loc[100,2]
nt = time.time()
print('for loc {}'.format(nt-ot))
ot= time.time()
for i in range(times):
[val[100] for col,val in go_dict.iteritems()]
nt = time.time()
print('for dict {}'.format(nt-ot))
I think the fastest way of accessing a cell, is
df.get_value(row,column)
df.set_value(row,column,value)
Both are faster than (I think)
df.iat(...)
df.at(...)

Categories