Transforming dates in Pandas [duplicate] - python

I need to expand date ranges in a dataframe into days, so that each day becomes a row in the new dataframe, taking into account the start date and the end date of each record.
Input Dataframe:
A   B   Start                End
A1  B1  2021-05-15 00:00:00  2021-05-17 00:00:00
A1  B2  2021-05-30 00:00:00  2021-06-02 00:00:00
A2  B3  2021-05-10 00:00:00  2021-05-12 00:00:00
A2  B4  2021-06-02 00:00:00  2021-06-04 00:00:00
Expected Output:
A   B   Start                End
A1  B1  2021-05-15 00:00:00  2021-05-16 00:00:00
A1  B1  2021-05-16 00:00:00  2021-05-17 00:00:00
A1  B2  2021-05-30 00:00:00  2021-05-31 00:00:00
A1  B2  2021-05-31 00:00:00  2021-06-01 00:00:00
A1  B2  2021-06-01 00:00:00  2021-06-02 00:00:00
A2  B3  2021-05-10 00:00:00  2021-05-11 00:00:00
A2  B3  2021-05-11 00:00:00  2021-05-12 00:00:00
A2  B4  2021-06-02 00:00:00  2021-06-03 00:00:00
A2  B4  2021-06-03 00:00:00  2021-06-04 00:00:00

Use:
#convert columns to datetimes
df["Start"] = pd.to_datetime(df["Start"])
df["End"] = pd.to_datetime(df["End"])
#subtract values and convert to days
s = df["End"].sub(df["Start"]).dt.days
#repeat index
df = df.loc[df.index.repeat(s)].copy()
#add days by timedeltas, add 1 day for End column
add = pd.to_timedelta(df.groupby(level=0).cumcount(), unit='d')
df['Start'] = df["Start"].add(add)
df['End'] = df["Start"] + pd.Timedelta(1, 'd')
#default index
df = df.reset_index(drop=True)
print (df)
A B Start End
0 A1 B1 2021-05-15 2021-05-16
1 A1 B1 2021-05-16 2021-05-17
2 A1 B2 2021-05-30 2021-05-31
3 A1 B2 2021-05-31 2021-06-01
4 A1 B2 2021-06-01 2021-06-02
5 A2 B3 2021-05-10 2021-05-11
6 A2 B3 2021-05-11 2021-05-12
7 A2 B4 2021-06-02 2021-06-03
8 A2 B4 2021-06-03 2021-06-04
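A brief aside on why this works: df.index.repeat(s) duplicates each input row once per day of its range (for day counts [2, 3, 2, 2] the repeated index is [0, 0, 1, 1, 1, 2, 2, 3, 3]), and groupby(level=0).cumcount() then numbers the copies within each original row ([0, 1, 0, 1, 2, 0, 1, 0, 1]), which is exactly the day offset added to Start.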
Performance:
#4k rows
df = pd.concat([df] * 1000, ignore_index=True)
In [136]: %timeit jez(df)
16.9 ms ± 3.94 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [137]: %timeit andreas(df)
888 ms ± 136 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
#800 rows
df = pd.concat([df] * 200, ignore_index=True)
In [139]: %timeit jez(df)
6.25 ms ± 46.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [140]: %timeit andreas(df)
170 ms ± 28.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
def andreas(df):
    df['d_range'] = df.apply(lambda row: list(pd.date_range(start=row['Start'], end=row['End'])), axis=1)
    return df.explode('d_range')

def jez(df):
    df["Start"] = pd.to_datetime(df["Start"])
    df["End"] = pd.to_datetime(df["End"])
    #subtract values and convert to days
    s = df["End"].sub(df["Start"]).dt.days
    #repeat index
    df = df.loc[df.index.repeat(s)].copy()
    #add days by timedeltas, add 1 day for End column
    add = pd.to_timedelta(df.groupby(level=0).cumcount(), unit='d')
    df['Start'] = df["Start"].add(add)
    df['End'] = df["Start"] + pd.Timedelta(1, 'd')
    #default index
    return df.reset_index(drop=True)

You can create a list of dates and explode it:
df['d_range'] = df.apply(lambda row: list(pd.date_range(start=row['Start'], end=row['End'])), axis=1)
df = df.explode('d_range')
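Note that pd.date_range(start, end) is inclusive of both endpoints, so exploding it yields one row per date including the End date itself, and no per-day End column. If you want the explode-based approach to reproduce the Start/End pairs of the expected output exactly, a minimal sketch (my adaptation, not part of the original answer) could be:
import pandas as pd

#hypothetical stand-in for the four-row input frame shown in the question
df_in = pd.DataFrame({
    "A": ["A1", "A1", "A2", "A2"],
    "B": ["B1", "B2", "B3", "B4"],
    "Start": pd.to_datetime(["2021-05-15", "2021-05-30", "2021-05-10", "2021-06-02"]),
    "End":   pd.to_datetime(["2021-05-17", "2021-06-02", "2021-05-12", "2021-06-04"]),
})
#build one date_range per row and drop its final day, so each remaining
#date becomes the Start of a one-day interval
out = df_in.assign(Start=[pd.date_range(s, e)[:-1] for s, e in zip(df_in["Start"], df_in["End"])])
out = out.explode("Start").reset_index(drop=True)
out["Start"] = pd.to_datetime(out["Start"])
out["End"] = out["Start"] + pd.Timedelta(days=1)
print (out)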

Related

Fill NaN values from previous column with data

I have a dataframe in pandas, and I am trying to take data from the same row and different columns and fill NaN values in my data. How would I do this in pandas?
For example,
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
83 27.0 29.0 NaN 29.0 30.0 NaN NaN 15.0 16.0 17.0 NaN 28.0 30.0 NaN 28.0 18.0
The goal is for the data to look like this:
1 2 3 4 5 6 7 ... 10 11 12 13 14 15 16
83 NaN NaN NaN 27.0 29.0 29.0 30.0 ... 15.0 16.0 17.0 28.0 30.0 28.0 18.0
The goal is to be able to take the mean of the last five columns that have data. If there are not >= 5 data-filled cells, then take the average of however many cells there are.
Use the justify function to improve performance, filtering all columns except the first with DataFrame.iloc:
print (df)
name 1 2 3 4 5 6 7 8 9 10 11 12 13 \
80 bob 27.0 29.0 NaN 29.0 30.0 NaN NaN 15.0 16.0 17.0 NaN 28.0 30.0
14 15 16
80 NaN 28.0 18.0
df.iloc[:, 1:] = justify(df.iloc[:, 1:].to_numpy(), invalid_val=np.nan, side='right')
print (df)
name 1 2 3 4 5 6 7 8 9 10 11 12 13 \
80 bob NaN NaN NaN NaN NaN 27.0 29.0 29.0 30.0 15.0 16.0 17.0 28.0
14 15 16
80 30.0 28.0 18.0
Function:
#https://stackoverflow.com/a/44559180/2901002
def justify(a, invalid_val=0, axis=1, side='left'):
    """
    Justifies a 2D array

    Parameters
    ----------
    a : ndarray
        Input array to be justified
    axis : int
        Axis along which justification is to be made
    side : str
        Direction of justification. It could be 'left', 'right', 'up', 'down'
        It should be 'left' or 'right' for axis=1 and 'up' or 'down' for axis=0.
    """
    if invalid_val is np.nan:
        mask = ~np.isnan(a)
    else:
        mask = a != invalid_val
    justified_mask = np.sort(mask, axis=axis)
    if (side == 'up') | (side == 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val)
    if axis == 1:
        out[justified_mask] = a[mask]
    else:
        out.T[justified_mask.T] = a.T[mask.T]
    return out
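Coming back to the stated goal (the mean of the last five populated cells per row): once the values are right-justified, the NaN values sit only on the left, so this becomes a plain column slice. A minimal sketch, assuming df has already been justified as above:
#the last five columns now hold the most recent values; rows with fewer
#than five values average whatever is present, because mean skips NaN
last5_mean = df.iloc[:, -5:].mean(axis=1)
print (last5_mean)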
Performance:
#100 rows
df = pd.concat([df] * 100, ignore_index=True)
#41 times slower
In [39]: %timeit df.loc[:,df.columns[1:]] = df.loc[:,df.columns[1:]].apply(fun, axis=1)
145 ms ± 23.7 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [41]: %timeit df.iloc[:, 1:] = justify(df.iloc[:, 1:].to_numpy(), invalid_val=np.nan, side='right')
3.54 ms ± 236 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
#1000 rows
df = pd.concat([df] * 1000, ignore_index=True)
#198 times slower
In [43]: %timeit df.loc[:,df.columns[1:]] = df.loc[:,df.columns[1:]].apply(fun, axis=1)
1.13 s ± 37.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [45]: %timeit df.iloc[:, 1:] = justify(df.iloc[:, 1:].to_numpy(), invalid_val=np.nan, side='right')
5.7 ms ± 184 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Assuming you need to move all NaN values to the first columns, I would define a function that places all NaN values first and leaves the rest as is:
def fun(row):
    index_order = row.index[row.isnull()].append(row.index[~row.isnull()])
    row.iloc[:] = row[index_order].values
    return row
df_fix = df.loc[:,df.columns[1:]].apply(fun, axis=1)
If you need to overwrite the results in the same dataframe then:
df.loc[:,df.columns[1:]] = df_fix.copy()

Adding random number of days to a series of datetime values

I am trying to add a random number of days to a series of datetime values without iterating over each row of the dataframe, as that takes a lot of time (I have a large dataframe). I went through datetime's timedelta, pandas DateOffset, etc., but they do not have an option to supply the random number of days all at once, i.e. using a list as input (we have to give the random numbers one by one).
code:
df['date_columnA'] = df['date_columnB'] + datetime.timedelta(days = n)
The above code will add the same number of days, i.e. n, to all the rows, whereas I want random numbers to be added.
If performance is important, create all the random timedeltas at once with to_timedelta and numpy.random.randint, and add them to the column:
np.random.seed(2020)
df = pd.DataFrame({'date_columnB': pd.date_range('2015-01-01', periods=20)})
td = pd.to_timedelta(np.random.randint(1,100, size=len(df)), unit='d')
df['date_columnA'] = df['date_columnB'] + td
print (df)
date_columnB date_columnA
0 2015-01-01 2015-04-08
1 2015-01-02 2015-01-11
2 2015-01-03 2015-03-12
3 2015-01-04 2015-03-13
4 2015-01-05 2015-04-07
5 2015-01-06 2015-01-10
6 2015-01-07 2015-03-20
7 2015-01-08 2015-03-06
8 2015-01-09 2015-02-08
9 2015-01-10 2015-02-28
10 2015-01-11 2015-02-13
11 2015-01-12 2015-02-06
12 2015-01-13 2015-03-29
13 2015-01-14 2015-01-24
14 2015-01-15 2015-03-08
15 2015-01-16 2015-01-28
16 2015-01-17 2015-03-14
17 2015-01-18 2015-03-22
18 2015-01-19 2015-03-28
19 2015-01-20 2015-03-31
Performance for 10k rows:
np.random.seed(2020)
df = pd.DataFrame({'date_columnB': pd.date_range('2015-01-01', periods=10000)})
In [357]: %timeit df['date_columnA'] = df['date_columnB'].apply(lambda x:x+timedelta(days=random.randint(0,100)))
158 ms ± 1.85 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [358]: %timeit df['date_columnA1'] = df['date_columnB'] + pd.to_timedelta(np.random.randint(1,100, size=len(df)), unit='d')
1.53 ms ± 37.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
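Note that numpy.random.randint(1, 100) draws integers from 1 to 99 (the upper bound is exclusive), whereas random.randint(0, 100) includes both endpoints, so the two variants compared above do not draw from exactly the same range; adjust the bounds if that matters.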
import random
from datetime import timedelta
df['date_columnA'] = df['date_columnB'].apply(lambda x: x + timedelta(days=random.randint(0,100)))
import numpy as np
import pandas as pd
#the offsets added to a datetime column must be timedeltas, not dates,
#so draw the random day offsets from a timedelta_range
df['date_columnA'] = df['date_columnB'] + np.random.choice(pd.timedelta_range('1 day', '100 days', freq='D'), len(df))

Difference of datetimes in hours, excluding the weekend

I currently have a dataframe where a UniqueID has multiple dates in another column. I want to extract the hours between each date, but ignore the weekend if the next date is after the weekend. For example, if today is Friday at 12 pm and the following date is Tuesday at 12 pm, then the difference in hours between these two dates would be 48 hours.
Here is my dataset with the expected output:
df = pd.DataFrame({"UniqueID": ["A","A","A","B","B","B","C","C"],"Date":
["2018-12-07 10:30:00","2018-12-10 14:30:00","2018-12-11 17:30:00",
"2018-12-14 09:00:00","2018-12-18 09:00:00",
"2018-12-21 11:00:00","2019-01-01 15:00:00","2019-01-07 15:00:00"],
"ExpectedOutput": ["28.0","27.0","Nan","48.0","74.0","NaN","96.0","NaN"]})
df["Date"] = df["Date"].astype(np.datetime64)
This is what I have so far, but it includes the weekends:
df["date_diff"] = df.groupby(["UniqueID"])["Date"].apply(lambda x: x.diff()
/ np.timedelta64(1 ,'h')).shift(-1)
Thanks!
The idea is to floor the datetimes to remove the time component, compute the number of business days between the start day + one day and the shifted day into the hours3 column with numpy.busday_count, and then create the hours1 and hours2 columns for the hours on the start and end days (only when those days are not weekend days). Finally, sum all of the hour columns together:
df["Date"] = pd.to_datetime(df["Date"])
df = df.sort_values(['UniqueID','Date'])
df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)
df["hours1"] = df["Date"].dt.floor('d')
df["hours2"] = df["shifted"].dt.floor('d')
mask = df['shifted'].notnull()
f = lambda x: np.busday_count(x['hours1'] + pd.Timedelta(1, unit='d'), x['hours2'])
df.loc[mask, 'hours3'] = df[mask].apply(f, axis=1) * 24
mask1 = df['hours1'].dt.dayofweek < 5
hours1 = df['hours1'] + pd.Timedelta(1, unit='d') - df['Date']
df['hours1'] = np.where(mask1, hours1, np.nan) / np.timedelta64(1 ,'h')
mask1 = df['hours2'].dt.dayofweek < 5
df['hours2'] = np.where(mask1, df['shifted']-df['hours2'], np.nan) / np.timedelta64(1 ,'h')
df['date_diff'] = df['hours1'].fillna(0) + df['hours2'] + df['hours3']
print (df)
UniqueID Date ExpectedOutput shifted hours1 \
0 A 2018-12-07 10:30:00 28.0 2018-12-10 14:30:00 13.5
1 A 2018-12-10 14:30:00 27.0 2018-12-11 17:30:00 9.5
2 A 2018-12-11 17:30:00 Nan NaT 6.5
3 B 2018-12-14 09:00:00 48.0 2018-12-18 09:00:00 15.0
4 B 2018-12-18 09:00:00 74.0 2018-12-21 11:00:00 15.0
5 B 2018-12-21 11:00:00 NaN NaT 13.0
6 C 2019-01-01 15:00:00 96.0 2019-01-07 15:00:00 9.0
7 C 2019-01-07 15:00:00 NaN NaT 9.0
hours2 hours3 date_diff
0 14.5 0.0 28.0
1 17.5 0.0 27.0
2 NaN NaN NaN
3 9.0 24.0 48.0
4 11.0 48.0 74.0
5 NaN NaN NaN
6 15.0 72.0 96.0
7 NaN NaN NaN
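For reference on the key step, np.busday_count counts weekdays over the half-open interval [begin, end), which is why the answer passes the day after the start date as the begin value. A small illustration (dates picked here only for the example):
import numpy as np
#2018-12-15 is a Saturday; [2018-12-15, 2018-12-18) covers Sat, Sun, Mon,
#of which only Monday 2018-12-17 is a business day
print (np.busday_count('2018-12-15', '2018-12-18'))   # 1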
The first solution was removed for two reasons: it was not accurate and it was slow:
np.random.seed(2019)
dates = pd.date_range('2015-01-01','2018-01-01', freq='H')
df = pd.DataFrame({"UniqueID": np.random.choice(list('ABCDEFGHIJ'), size=100),
"Date": np.random.choice(dates, size=100)})
print (df)
def old(df):
    df["Date"] = pd.to_datetime(df["Date"])
    df = df.sort_values(['UniqueID','Date'])
    df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)

    def f(x):
        a = pd.date_range(x['Date'], x['shifted'], freq='T')
        return ((a.dayofweek < 5).sum() / 60).round()

    mask = df['shifted'].notnull()
    df.loc[mask, 'date_diff'] = df[mask].apply(f, axis=1)
    return df

def new(df):
    df["Date"] = pd.to_datetime(df["Date"])
    df = df.sort_values(['UniqueID','Date'])
    df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)
    df["hours1"] = df["Date"].dt.floor('d')
    df["hours2"] = df["shifted"].dt.floor('d')
    mask = df['shifted'].notnull()
    f = lambda x: np.busday_count(x['hours1'] + pd.Timedelta(1, unit='d'), x['hours2'])
    df.loc[mask, 'hours3'] = df[mask].apply(f, axis=1) * 24
    mask1 = df['hours1'].dt.dayofweek < 5
    hours1 = df['hours1'] + pd.Timedelta(1, unit='d') - df['Date']
    df['hours1'] = np.where(mask1, hours1, np.nan) / np.timedelta64(1 ,'h')
    mask1 = df['hours2'].dt.dayofweek < 5
    df['hours2'] = np.where(mask1, df['shifted'] - df['hours2'], np.nan) / np.timedelta64(1 ,'h')
    df['date_diff'] = df['hours1'].fillna(0) + df['hours2'] + df['hours3']
    return df
print (new(df))
print (old(df))
In [44]: %timeit (new(df))
22.7 ms ± 115 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [45]: %timeit (old(df))
1.01 s ± 8.03 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Pandas DataFrame count number of not None elements in two columns [duplicate]

My data looks like this:
Close a b c d e Time
2015-12-03 2051.25 5 4 3 1 1 05:00:00
2015-12-04 2088.25 5 4 3 1 NaN 06:00:00
2015-12-07 2081.50 5 4 3 NaN NaN 07:00:00
2015-12-08 2058.25 5 4 NaN NaN NaN 08:00:00
2015-12-09 2042.25 5 NaN NaN NaN NaN 09:00:00
I need to count 'horizontally' the values in the columns ['a'] to ['e'] that are not NaN. So the outcome would be this:
df['Count'] = .....
df
Close a b c d e Time Count
2015-12-03 2051.25 5 4 3 1 1 05:00:00 5
2015-12-04 2088.25 5 4 3 1 NaN 06:00:00 4
2015-12-07 2081.50 5 4 3 NaN NaN 07:00:00 3
2015-12-08 2058.25 5 4 NaN NaN NaN 08:00:00 2
2015-12-09 2042.25 5 NaN NaN NaN NaN 09:00:00 1
Thanks
You can subselect from your df and call count passing axis=1:
In [24]:
df['count'] = df[list('abcde')].count(axis=1)
df
Out[24]:
Close a b c d e Time count
2015-12-03 2051.25 5 4 3 1 1 05:00:00 5
2015-12-04 2088.25 5 4 3 1 NaN 06:00:00 4
2015-12-07 2081.50 5 4 3 NaN NaN 07:00:00 3
2015-12-08 2058.25 5 4 NaN NaN NaN 08:00:00 2
2015-12-09 2042.25 5 NaN NaN NaN NaN 09:00:00 1
TIMINGS
In [25]:
%timeit df[['a', 'b', 'c', 'd', 'e']].apply(lambda x: sum(x.notnull()), axis=1)
%timeit df.drop(['Close', 'Time'], axis=1).count(axis=1)
%timeit df[list('abcde')].count(axis=1)
100 loops, best of 3: 3.28 ms per loop
100 loops, best of 3: 2.76 ms per loop
100 loops, best of 3: 2.98 ms per loop
apply is the slowest, which is not a surprise; the drop version is marginally faster, but semantically I prefer just passing the list of columns of interest and calling count, for readability.
Hmm I keep getting varying timings now:
In [27]:
%timeit df[['a', 'b', 'c', 'd', 'e']].apply(lambda x: sum(x.notnull()), axis=1)
%timeit df.drop(['Close', 'Time'], axis=1).count(axis=1)
%timeit df[list('abcde')].count(axis=1)
%timeit df[['a', 'b', 'c', 'd', 'e']].count(axis=1)
100 loops, best of 3: 3.33 ms per loop
100 loops, best of 3: 2.7 ms per loop
100 loops, best of 3: 2.7 ms per loop
100 loops, best of 3: 2.57 ms per loop
MORE TIMINGS
In [160]:
%timeit df[['a', 'b', 'c', 'd', 'e']].apply(lambda x: sum(x.notnull()), axis=1)
%timeit df.drop(['Close', 'Time'], axis=1).count(axis=1)
%timeit df[list('abcde')].count(axis=1)
%timeit df[['a', 'b', 'c', 'd', 'e']].count(axis=1)
%timeit df[list('abcde')].notnull().sum(axis=1)
1000 loops, best of 3: 1.4 ms per loop
1000 loops, best of 3: 1.14 ms per loop
1000 loops, best of 3: 1.11 ms per loop
1000 loops, best of 3: 1.11 ms per loop
1000 loops, best of 3: 1.05 ms per loop
It seems that testing for notnull and summing (as notnull produces a boolean mask) is quicker on this dataset.
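For completeness, the boolean-mask variant from the timings can be assigned the same way (a minimal sketch):
df['Count'] = df[list('abcde')].notnull().sum(axis=1)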
On a 50k row df the last method is slightly quicker:
In [172]:
%timeit df[['a', 'b', 'c', 'd', 'e']].apply(lambda x: sum(x.notnull()), axis=1)
%timeit df.drop(['Close', 'Time'], axis=1).count(axis=1)
%timeit df[list('abcde')].count(axis=1)
%timeit df[['a', 'b', 'c', 'd', 'e']].count(axis=1)
%timeit df[list('abcde')].notnull().sum(axis=1)
1 loops, best of 3: 5.83 s per loop
100 loops, best of 3: 6.15 ms per loop
100 loops, best of 3: 6.49 ms per loop
100 loops, best of 3: 6.04 ms per loop
df['Count'] = df[['a', 'b', 'c', 'd', 'e']].apply(lambda x: sum(x.notnull()), axis=1)
In [1254]: df
Out[1254]:
Close a b c d e Time Count
2015-12-03 2051.25 5 4 3 1 1 05:00:00 5
2015-12-04 2088.25 5 4 3 1 NaN 06:00:00 4
2015-12-07 2081.50 5 4 3 NaN NaN 07:00:00 3
2015-12-08 2058.25 5 4 NaN NaN NaN 08:00:00 2
2015-12-09 2042.25 5 NaN NaN NaN NaN 09:00:00 1
Include the list of desired columns, or just drop the two columns you do not want to include in the count - along axis=1 (see docs):
df['Count'] = df.drop(['Close', 'Time'], axis=1).count(axis=1)
Close a b c d e Time Count
0 2051.25 5 4 3 1 1 05:00:00 5
1 2088.25 5 4 3 1 NaN 06:00:00 4
2 2081.50 5 4 3 NaN NaN 07:00:00 3
3 2058.25 5 4 3 NaN NaN 08:00:00 3
4 2042.25 5 4 NaN NaN NaN 09:00:00 2
