I'm looking for a way to create a DatetimeIndex in pandas. My data looks as follows:
Date Time AAA
0 06/17/2016 03:00:00 PM 19.13
1 06/17/2016 02:00:00 PM 19.13
2 06/17/2016 01:00:00 PM 19.26
3 06/17/2016 12:00:00 AM 19.28
4 06/17/2016 11:00:00 AM 19.28
The result I want to obtain is:
AAA
Date
2016-06-17 15:00:00 19.16
2016-06-17 14:00:00 19.14
2016-06-17 13:00:00 19.18
2016-06-17 12:00:00 19.27
2016-06-17 11:00:00 19.27
I am not sure how to do this efficiently, since my Time column uses the 12-hour clock format.
You can do it using to_datetime:
df
Out[38]:
Date Time AAA
0 06/17/2016 03:00:00 PM 19.13
1 06/17/2016 02:00:00 PM 19.13
2 06/17/2016 01:00:00 PM 19.26
3 06/17/2016 12:00:00 AM 19.28
4 06/17/2016 11:00:00 AM 19.28
In [39]: df['Date'] = pd.to_datetime(df['Date'] + ' ' + df['Time'])
In [40]: df
Out[40]:
Date Time AAA
0 2016-06-17 15:00:00 03:00:00 PM 19.13
1 2016-06-17 14:00:00 02:00:00 PM 19.13
2 2016-06-17 13:00:00 01:00:00 PM 19.26
3 2016-06-17 00:00:00 12:00:00 AM 19.28
4 2016-06-17 11:00:00 11:00:00 AM 19.28
In [41]: df = df.drop(['Time', 'Date'], axis=1).set_index(df['Date'])
In [42]: df
Out[42]:
AAA
Date
2016-06-17 15:00:00 19.13
2016-06-17 14:00:00 19.13
2016-06-17 13:00:00 19.26
2016-06-17 00:00:00 19.28
2016-06-17 11:00:00 19.28
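If you want to be explicit about the 12-hour clock rather than rely on inference, you can pass a format string to to_datetime; a self-contained sketch, reconstructing the sample frame from the question:

```python
import pandas as pd

# Hypothetical reconstruction of the question's frame
df = pd.DataFrame({
    'Date': ['06/17/2016'] * 5,
    'Time': ['03:00:00 PM', '02:00:00 PM', '01:00:00 PM', '12:00:00 AM', '11:00:00 AM'],
    'AAA': [19.13, 19.13, 19.26, 19.28, 19.28],
})

# %I is the 12-hour clock field, %p the AM/PM marker
df['Date'] = pd.to_datetime(df['Date'] + ' ' + df['Time'],
                            format='%m/%d/%Y %I:%M:%S %p')
df = df.drop(columns='Time').set_index('Date')
print(df)
```

An explicit format also avoids the ambiguity of month-first vs day-first parsing.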
Using date objects as opposed to parsing strings
df = pd.DataFrame([
['06/17/2016', '03:00:00 PM', 19.13],
['06/17/2016', '02:00:00 PM', 19.13],
['06/17/2016', '01:00:00 PM', 19.26],
['06/17/2016', '12:00:00 AM', 19.28],
['06/17/2016', '11:00:00 AM', 19.28],
],
columns=['Date', 'Time', 'AAA'],
)
df.Date = pd.to_datetime(df.Date)
df.Time = pd.to_datetime(df.Time) - pd.DatetimeIndex(df.Time).date
df.set_index(df.Date + df.Time)[['AAA']]
AAA
2016-06-17 15:00:00 19.13
2016-06-17 14:00:00 19.13
2016-06-17 13:00:00 19.26
2016-06-17 00:00:00 19.28
2016-06-17 11:00:00 19.28
Related
I'm trying to measure the difference between timestamps using certain conditions. Using the code below, for each unique ID, I'm hoping to subtract the End Time where Item == A from the Start Time where Item == D.
So the timestamps are actually located on separate rows.
At the moment my process is returning an error. I'm also hoping to drop the .shift() for something more robust, as each unique ID will have different combinations of items, e.g. A,B,C,D; A,B,D; A,D; etc.
df = pd.DataFrame({'ID': [10,10,10,20,20,30],
'Start Time': ['2019-08-02 09:00:00','2019-08-03 10:50:00','2019-08-05 16:00:00','2019-08-04 08:00:00','2019-08-04 15:30:00','2019-08-06 11:00:00'],
'End Time': ['2019-08-04 15:00:00','2019-08-04 16:00:00','2019-08-05 16:00:00','2019-08-04 14:00:00','2019-08-05 20:30:00','2019-08-07 10:00:00'],
'Item': ['A','B','D','A','D','A'],
})
df['Start Time'] = pd.to_datetime(df['Start Time'])
df['End Time'] = pd.to_datetime(df['End Time'])
df['diff'] = (df.groupby('ID')
.apply(lambda x: x['End Time'].shift(1) - x['Start Time'].shift(1))
.reset_index(drop=True))
Intended Output:
ID Start Time End Time Item diff
0 10 2019-08-02 09:00:00 2019-08-04 15:00:00 A NaT
1 10 2019-08-03 10:50:00 2019-08-04 16:00:00 B NaT
2 10 2019-08-05 16:00:00 2019-08-05 16:00:00 D 1 days 01:00:00
3 20 2019-08-04 08:00:00 2019-08-04 14:00:00 A NaT
4 20 2019-08-04 15:30:00 2019-08-05 20:30:00 D 0 days 01:30:00
5 30 2019-08-06 11:00:00 2019-08-07 10:00:00 A NaT
df2 = df.set_index('ID')
df2.query('Item == "D"')['Start Time'] - df2.query('Item == "A"')['End Time']
output:
ID
10 2 days 05:30:00
20 0 days 20:30:00
30 NaT
dtype: timedelta64[ns]
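Run end to end against the frame from this question, the query approach gives the intended per-ID differences (note the printed output above was produced from the older sample data, so its numbers differ):

```python
import pandas as pd

df = pd.DataFrame({'ID': [10, 10, 10, 20, 20, 30],
                   'Start Time': ['2019-08-02 09:00:00', '2019-08-03 10:50:00',
                                  '2019-08-05 16:00:00', '2019-08-04 08:00:00',
                                  '2019-08-04 15:30:00', '2019-08-06 11:00:00'],
                   'End Time': ['2019-08-04 15:00:00', '2019-08-04 16:00:00',
                                '2019-08-05 16:00:00', '2019-08-04 14:00:00',
                                '2019-08-05 20:30:00', '2019-08-07 10:00:00'],
                   'Item': ['A', 'B', 'D', 'A', 'D', 'A']})
df['Start Time'] = pd.to_datetime(df['Start Time'])
df['End Time'] = pd.to_datetime(df['End Time'])

df2 = df.set_index('ID')
# Align on the ID index: D's start minus A's end; IDs with no D row come out as NaT
diff = df2.query('Item == "D"')['Start Time'] - df2.query('Item == "A"')['End Time']
print(diff)
```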
older answer
The issue is your fillna: you can't have strings in a timedelta column:
df['diff'] = (df.groupby('ID')
.apply(lambda x: x['End Time'].shift(1) - x['Start Time'].shift(1))
#.fillna('-') # the issue is here
.reset_index(drop=True))
output:
ID Start Time End Time Item diff
0 10 2019-08-02 09:00:00 2019-08-02 09:30:00 A NaT
1 10 2019-08-03 10:50:00 2019-08-03 11:00:00 B 0 days 00:30:00
2 10 2019-08-04 15:00:00 2019-08-05 16:00:00 C 0 days 00:10:00
3 20 2019-08-04 08:00:00 2019-08-04 14:00:00 B NaT
4 20 2019-08-05 10:30:00 2019-08-05 20:30:00 C 0 days 06:00:00
5 30 2019-08-06 11:00:00 2019-08-07 10:00:00 A NaT
IIUC use:
df1 = df.pivot(index='ID', columns='Item')
print (df1)
Start Time \
Item A B D
ID
10 2019-08-02 09:00:00 2019-08-03 10:50:00 2019-08-04 15:00:00
20 2019-08-04 08:00:00 NaT 2019-08-05 10:30:00
30 2019-08-06 11:00:00 NaT NaT
End Time
Item A B D
ID
10 2019-08-02 09:30:00 2019-08-03 11:00:00 2019-08-05 16:00:00
20 2019-08-04 14:00:00 NaT 2019-08-05 20:30:00
30 2019-08-07 10:00:00 NaT NaT
a = df1[('Start Time','D')].sub(df1[('End Time','A')])
print (a)
ID
10 2 days 05:30:00
20 0 days 20:30:00
30 NaT
dtype: timedelta64[ns]
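On recent pandas versions pivot takes keyword arguments only, so the call can be sketched as follows (data reconstructed from the printed pivot above):

```python
import pandas as pd

# Reconstruction of the older sample used by this answer
df = pd.DataFrame({'ID': [10, 10, 20, 20, 30],
                   'Start Time': pd.to_datetime(['2019-08-02 09:00:00', '2019-08-04 15:00:00',
                                                 '2019-08-04 08:00:00', '2019-08-05 10:30:00',
                                                 '2019-08-06 11:00:00']),
                   'End Time': pd.to_datetime(['2019-08-02 09:30:00', '2019-08-05 16:00:00',
                                               '2019-08-04 14:00:00', '2019-08-05 20:30:00',
                                               '2019-08-07 10:00:00']),
                   'Item': ['A', 'D', 'A', 'D', 'A']})

# Keyword arguments are required; positional pivot arguments were removed in pandas 2.x
df1 = df.pivot(index='ID', columns='Item')
a = df1[('Start Time', 'D')].sub(df1[('End Time', 'A')])
print(a)
```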
Here I have an extract from my pandas dataframe, which is survey data with two datetime fields. It appears that some of the start and end times were filled in the wrong positions in the survey. I suspect the start and end times in the 8th row were entered the wrong way round.
Just to give context, I generated the third column like this:
df_time['trip_duration'] = df_time['tripEnd_time'] - df_time['tripStart_time']
The three columns are in timedelta64 format.
Here is the top of my dataframe:
tripStart_time tripEnd_time trip_duration
1 22:30:00 23:15:00 00:45:00
2 11:00:00 11:30:00 00:30:00
3 09:00:00 09:15:00 00:15:00
4 13:30:00 14:25:00 00:55:00
5 09:00:00 10:15:00 01:15:00
6 12:00:00 12:15:00 00:15:00
7 08:00:00 08:30:00 00:30:00
8 11:00:00 09:15:00 -1 days +22:15:00
9 14:00:00 14:30:00 00:30:00
10 14:55:00 15:20:00 00:25:00
What I am trying to do is, loop through these two columns, and for each time 'tripEnd_time' is less than 'tripStart_time' swap the positions of these two entries. So in the case of row 8 above, I would make tripStart_time = tripEnd_time and tripEnd_time = tripStart_time.
I am not quite sure of the best way to approach this. Should I use a nested for loop where I compare each entry in the two columns?
Thanks
Use Series.abs:
df_time['trip_duration'] = (df_time['tripEnd_time'] - df_time['tripStart_time']).abs()
print (df_time)
tripStart_time tripEnd_time trip_duration
1 22:30:00 23:15:00 00:45:00
2 11:00:00 11:30:00 00:30:00
3 09:00:00 09:15:00 00:15:00
4 13:30:00 14:25:00 00:55:00
5 09:00:00 10:15:00 01:15:00
6 12:00:00 12:15:00 00:15:00
7 08:00:00 08:30:00 00:30:00
8 11:00:00 09:15:00 01:45:00
9 14:00:00 14:30:00 00:30:00
10 14:55:00 15:20:00 00:25:00
Which is the same as:
a = df_time['tripEnd_time'] - df_time['tripStart_time']
b = df_time['tripStart_time'] - df_time['tripEnd_time']
mask = df_time['tripEnd_time'] > df_time['tripStart_time']
df_time['trip_duration'] = np.where(mask, a, b)
print (df_time)
tripStart_time tripEnd_time trip_duration
1 22:30:00 23:15:00 00:45:00
2 11:00:00 11:30:00 00:30:00
3 09:00:00 09:15:00 00:15:00
4 13:30:00 14:25:00 00:55:00
5 09:00:00 10:15:00 01:15:00
6 12:00:00 12:15:00 00:15:00
7 08:00:00 08:30:00 00:30:00
8 11:00:00 09:15:00 01:45:00
9 14:00:00 14:30:00 00:30:00
10 14:55:00 15:20:00 00:25:00
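A minimal runnable version of the abs approach, using a three-row subset held as timedelta64 like the question's columns:

```python
import pandas as pd

df_time = pd.DataFrame({
    'tripStart_time': pd.to_timedelta(['22:30:00', '11:00:00', '11:00:00']),
    'tripEnd_time': pd.to_timedelta(['23:15:00', '11:30:00', '09:15:00']),
})

# Negative durations (swapped start/end) come out positive after .abs()
df_time['trip_duration'] = (df_time['tripEnd_time'] - df_time['tripStart_time']).abs()
print(df_time)
```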
You can switch column values on selected rows:
df_time.loc[df_time['tripEnd_time'] < df_time['tripStart_time'],
['tripStart_time', 'tripEnd_time']] = df_time.loc[
df_time['tripEnd_time'] < df_time['tripStart_time'],
['tripEnd_time', 'tripStart_time']].values
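Computing the mask once keeps both sides of the assignment consistent; a two-row sketch of the swap:

```python
import pandas as pd

df_time = pd.DataFrame({
    'tripStart_time': pd.to_timedelta(['22:30:00', '11:00:00']),
    'tripEnd_time': pd.to_timedelta(['23:15:00', '09:15:00']),
})

# Rows where end precedes start get their two values exchanged
swapped = df_time['tripEnd_time'] < df_time['tripStart_time']
df_time.loc[swapped, ['tripStart_time', 'tripEnd_time']] = df_time.loc[
    swapped, ['tripEnd_time', 'tripStart_time']].values
print(df_time)
```

Unlike the abs approach, this actually corrects the source columns, not just the derived duration.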
I have a dataframe df that contains datetimes for every hour of a day between 2003-02-12 to 2017-06-30 and I want to delete all datetimes between 24th Dec and 1st Jan of EVERY year.
An extract of my data frame is:
...
7505,2003-12-23 17:00:00
7506,2003-12-23 18:00:00
7507,2003-12-23 19:00:00
7508,2003-12-23 20:00:00
7509,2003-12-23 21:00:00
7510,2003-12-23 22:00:00
7511,2003-12-23 23:00:00
7512,2003-12-24 00:00:00
7513,2003-12-24 01:00:00
7514,2003-12-24 02:00:00
7515,2003-12-24 03:00:00
7516,2003-12-24 04:00:00
7517,2003-12-24 05:00:00
7518,2003-12-24 06:00:00
...
7723,2004-01-01 19:00:00
7724,2004-01-01 20:00:00
7725,2004-01-01 21:00:00
7726,2004-01-01 22:00:00
7727,2004-01-01 23:00:00
7728,2004-01-02 00:00:00
7729,2004-01-02 01:00:00
7730,2004-01-02 02:00:00
7731,2004-01-02 03:00:00
7732,2004-01-02 04:00:00
7733,2004-01-02 05:00:00
7734,2004-01-02 06:00:00
7735,2004-01-02 07:00:00
...
and my expected output is:
...
7505,2003-12-23 17:00:00
7506,2003-12-23 18:00:00
7507,2003-12-23 19:00:00
7508,2003-12-23 20:00:00
7509,2003-12-23 21:00:00
7510,2003-12-23 22:00:00
7511,2003-12-23 23:00:00
...
7728,2004-01-02 00:00:00
7729,2004-01-02 01:00:00
7730,2004-01-02 02:00:00
7731,2004-01-02 03:00:00
7732,2004-01-02 04:00:00
7733,2004-01-02 05:00:00
7734,2004-01-02 06:00:00
7735,2004-01-02 07:00:00
...
Sample dataframe:
dates
0 2003-12-23 23:00:00
1 2003-12-24 05:00:00
2 2004-12-27 05:00:00
3 2003-12-13 23:00:00
4 2002-12-23 23:00:00
5 2004-01-01 05:00:00
6 2014-12-24 05:00:00
Solution:
If you want to exclude these dates for every year, extract the month and day first:
df['month'] = df['dates'].dt.month
df['day'] = df['dates'].dt.day
And now put the condition check:
dec_days = [24, 25, 26, 27, 28, 29, 30, 31]
## if the month is dec, then check for these dates
## if the month is jan, then just check for the day to be 1 like below
df = df[~(((df.month == 12) & (df.day.isin(dec_days))) | ((df.month == 1) & (df.day == 1)))]
Sample output:
dates month day
0 2003-12-23 23:00:00 12 23
3 2003-12-13 23:00:00 12 13
4 2002-12-23 23:00:00 12 23
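Put together as one runnable block (the sample frame is rebuilt inline so the snippet stands alone):

```python
import pandas as pd

df = pd.DataFrame({'dates': pd.to_datetime(['2003-12-23 23:00:00', '2003-12-24 05:00:00',
                                            '2004-12-27 05:00:00', '2003-12-13 23:00:00',
                                            '2002-12-23 23:00:00', '2004-01-01 05:00:00',
                                            '2014-12-24 05:00:00'])})
df['month'] = df['dates'].dt.month
df['day'] = df['dates'].dt.day

# Drop Dec 24-31 and Jan 1 in any year
dec_days = [24, 25, 26, 27, 28, 29, 30, 31]
df = df[~(((df.month == 12) & (df.day.isin(dec_days))) | ((df.month == 1) & (df.day == 1)))]
print(df)
```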
This takes advantage of the fact that datetime-strings in the form mm-dd are sortable. Read everything in from the CSV file then filter for the dates you want:
df = pd.read_csv('...', parse_dates=['DateTime'])
s = df['DateTime'].dt.strftime('%m-%d')
excluded = (s == '01-01') | (s >= '12-24') # Jan 1 or >= Dec 24
df[~excluded]
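The same idea as a standalone sketch, with read_csv replaced by an inline four-row frame (the filename in the answer above is elided):

```python
import pandas as pd

df = pd.DataFrame({'DateTime': pd.to_datetime(['2003-12-23 23:00:00', '2003-12-24 00:00:00',
                                               '2004-01-01 19:00:00', '2004-01-02 00:00:00'])})

# mm-dd strings sort lexicographically in calendar order
s = df['DateTime'].dt.strftime('%m-%d')
excluded = (s == '01-01') | (s >= '12-24')  # Jan 1, or on/after Dec 24
out = df[~excluded]
print(out)
```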
You can drop rows conditionally, for example with a pattern match on the date string, then remove the matching index labels:
datesIdontLike = df[df['colname'] == <stringPattern>].index
newDF = df.drop(datesIdontLike)  # don't assign the result of inplace=True, which returns None
Check this out: https://thispointer.com/python-pandas-how-to-drop-rows-in-dataframe-by-conditions-on-column-values/
(If you have issues, let me know.)
You can use pandas boolean filtering with strftime:
# version 0.23.4
import pandas as pd
# make df
df = pd.DataFrame(pd.date_range('20181223', '20190103', freq='H'), columns=['date'])
# string format the date to only include the month and day
# then keep rows strictly less than '12-24' AND greater than or equal to '01-02'
df = df.loc[
(df.date.dt.strftime('%m-%d') < '12-24') &
(df.date.dt.strftime('%m-%d') >= '01-02')
].copy()
print(df)
date
0 2018-12-23 00:00:00
1 2018-12-23 01:00:00
2 2018-12-23 02:00:00
3 2018-12-23 03:00:00
4 2018-12-23 04:00:00
5 2018-12-23 05:00:00
6 2018-12-23 06:00:00
7 2018-12-23 07:00:00
8 2018-12-23 08:00:00
9 2018-12-23 09:00:00
10 2018-12-23 10:00:00
11 2018-12-23 11:00:00
12 2018-12-23 12:00:00
13 2018-12-23 13:00:00
14 2018-12-23 14:00:00
15 2018-12-23 15:00:00
16 2018-12-23 16:00:00
17 2018-12-23 17:00:00
18 2018-12-23 18:00:00
19 2018-12-23 19:00:00
20 2018-12-23 20:00:00
21 2018-12-23 21:00:00
22 2018-12-23 22:00:00
23 2018-12-23 23:00:00
240 2019-01-02 00:00:00
241 2019-01-02 01:00:00
242 2019-01-02 02:00:00
243 2019-01-02 03:00:00
244 2019-01-02 04:00:00
245 2019-01-02 05:00:00
246 2019-01-02 06:00:00
247 2019-01-02 07:00:00
248 2019-01-02 08:00:00
249 2019-01-02 09:00:00
250 2019-01-02 10:00:00
251 2019-01-02 11:00:00
252 2019-01-02 12:00:00
253 2019-01-02 13:00:00
254 2019-01-02 14:00:00
255 2019-01-02 15:00:00
256 2019-01-02 16:00:00
257 2019-01-02 17:00:00
258 2019-01-02 18:00:00
259 2019-01-02 19:00:00
260 2019-01-02 20:00:00
261 2019-01-02 21:00:00
262 2019-01-02 22:00:00
263 2019-01-02 23:00:00
264 2019-01-03 00:00:00
This will work with multiple years because we are only filtering on the month and day.
# change range to include 2017
df = pd.DataFrame(pd.date_range('20171223', '20190103', freq='H'), columns=['date'])
df = df.loc[
(df.date.dt.strftime('%m-%d') < '12-24') &
(df.date.dt.strftime('%m-%d') >= '01-02')
].copy()
print(df)
date
0 2017-12-23 00:00:00
1 2017-12-23 01:00:00
2 2017-12-23 02:00:00
3 2017-12-23 03:00:00
4 2017-12-23 04:00:00
5 2017-12-23 05:00:00
6 2017-12-23 06:00:00
7 2017-12-23 07:00:00
8 2017-12-23 08:00:00
9 2017-12-23 09:00:00
10 2017-12-23 10:00:00
11 2017-12-23 11:00:00
12 2017-12-23 12:00:00
13 2017-12-23 13:00:00
14 2017-12-23 14:00:00
15 2017-12-23 15:00:00
16 2017-12-23 16:00:00
17 2017-12-23 17:00:00
18 2017-12-23 18:00:00
19 2017-12-23 19:00:00
20 2017-12-23 20:00:00
21 2017-12-23 21:00:00
22 2017-12-23 22:00:00
23 2017-12-23 23:00:00
240 2018-01-02 00:00:00
241 2018-01-02 01:00:00
242 2018-01-02 02:00:00
243 2018-01-02 03:00:00
244 2018-01-02 04:00:00
245 2018-01-02 05:00:00
... ...
8779 2018-12-23 19:00:00
8780 2018-12-23 20:00:00
8781 2018-12-23 21:00:00
8782 2018-12-23 22:00:00
8783 2018-12-23 23:00:00
9000 2019-01-02 00:00:00
9001 2019-01-02 01:00:00
9002 2019-01-02 02:00:00
9003 2019-01-02 03:00:00
9004 2019-01-02 04:00:00
9005 2019-01-02 05:00:00
9006 2019-01-02 06:00:00
9007 2019-01-02 07:00:00
9008 2019-01-02 08:00:00
9009 2019-01-02 09:00:00
9010 2019-01-02 10:00:00
9011 2019-01-02 11:00:00
9012 2019-01-02 12:00:00
9013 2019-01-02 13:00:00
9014 2019-01-02 14:00:00
9015 2019-01-02 15:00:00
9016 2019-01-02 16:00:00
9017 2019-01-02 17:00:00
9018 2019-01-02 18:00:00
9019 2019-01-02 19:00:00
9020 2019-01-02 20:00:00
9021 2019-01-02 21:00:00
9022 2019-01-02 22:00:00
9023 2019-01-02 23:00:00
9024 2019-01-03 00:00:00
Since you want this to happen for every year, we can first define a series in which we replace the year with a static value (2000, for example). Let date be the column that stores the timestamps; we can generate such a series as:
dt = pd.to_datetime({'year': 2000, 'month': df['date'].dt.month, 'day': df['date'].dt.day})
For the given sample data, we get:
>>> dt
0 2000-12-23
1 2000-12-23
2 2000-12-23
3 2000-12-23
4 2000-12-23
5 2000-12-23
6 2000-12-23
7 2000-12-24
8 2000-12-24
9 2000-12-24
10 2000-12-24
11 2000-12-24
12 2000-12-24
13 2000-12-24
14 2000-01-01
15 2000-01-01
16 2000-01-01
17 2000-01-01
18 2000-01-01
19 2000-01-02
20 2000-01-02
21 2000-01-02
22 2000-01-02
23 2000-01-02
24 2000-01-02
25 2000-01-02
26 2000-01-02
dtype: datetime64[ns]
Next we can filter the rows, like:
from datetime import date
df[(dt >= date(2000,1,2)) & (dt < date(2000,12,24))]
This gives us the following data for your sample data:
>>> df[(dt >= date(2000,1,2)) & (dt < date(2000,12,24))]
id dt
0 7505 2003-12-23 17:00:00
1 7506 2003-12-23 18:00:00
2 7507 2003-12-23 19:00:00
3 7508 2003-12-23 20:00:00
4 7509 2003-12-23 21:00:00
5 7510 2003-12-23 22:00:00
6 7511 2003-12-23 23:00:00
19 7728 2004-01-02 00:00:00
20 7729 2004-01-02 01:00:00
21 7730 2004-01-02 02:00:00
22 7731 2004-01-02 03:00:00
23 7732 2004-01-02 04:00:00
24 7733 2004-01-02 05:00:00
25 7734 2004-01-02 06:00:00
26 7735 2004-01-02 07:00:00
So regardless what the year is, we will only consider dates between the 2nd of January and the 23rd of December (both inclusive).
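A condensed, runnable version of the above, using pd.Timestamp bounds in place of datetime.date (they compare against a datetime64 column the same way):

```python
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2003-12-23 23:00:00', '2003-12-24 05:00:00',
                                           '2004-01-01 05:00:00', '2004-01-02 00:00:00'])})

# Collapse every timestamp onto the same dummy year so the window repeats yearly
dt = pd.to_datetime({'year': 2000, 'month': df['date'].dt.month, 'day': df['date'].dt.day})
out = df[(dt >= pd.Timestamp('2000-01-02')) & (dt < pd.Timestamp('2000-12-24'))]
print(out)
```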
I'm unable to create a pandas Series of every hour (as datetime objects) of a given year without iterating and adding one hour to the previous value, which is slow. Is there a vectorized way to do this?
My input would be a year, and the output should be a pandas Series of every hour of that year.
You can use pd.date_range with freq='H', which is hourly frequency:
Edit: end changed to 23:00:00 after a comment by @ALollz.
year = 2019
pd.Series(pd.date_range(start=f'{year}-01-01', end=f'{year}-12-31 23:00:00', freq='H'))
0 2019-01-01 00:00:00
1 2019-01-01 01:00:00
2 2019-01-01 02:00:00
3 2019-01-01 03:00:00
4 2019-01-01 04:00:00
5 2019-01-01 05:00:00
6 2019-01-01 06:00:00
7 2019-01-01 07:00:00
8 2019-01-01 08:00:00
9 2019-01-01 09:00:00
10 2019-01-01 10:00:00
11 2019-01-01 11:00:00
12 2019-01-01 12:00:00
13 2019-01-01 13:00:00
14 2019-01-01 14:00:00
15 2019-01-01 15:00:00
16 2019-01-01 16:00:00
17 2019-01-01 17:00:00
18 2019-01-01 18:00:00
19 2019-01-01 19:00:00
20 2019-01-01 20:00:00
21 2019-01-01 21:00:00
22 2019-01-01 22:00:00
23 2019-01-01 23:00:00
24 2019-01-02 00:00:00
25 2019-01-02 01:00:00
26 2019-01-02 02:00:00
27 2019-01-02 03:00:00
28 2019-01-02 04:00:00
29 2019-01-02 05:00:00
30 2019-01-02 06:00:00
31 2019-01-02 07:00:00
32 2019-01-02 08:00:00
33 2019-01-02 09:00:00
34 2019-01-02 10:00:00
35 2019-01-02 11:00:00
36 2019-01-02 12:00:00
37 2019-01-02 13:00:00
38 2019-01-02 14:00:00
39 2019-01-02 15:00:00
40 2019-01-02 16:00:00
41 2019-01-02 17:00:00
42 2019-01-02 18:00:00
43 2019-01-02 19:00:00
44 2019-01-02 20:00:00
45 2019-01-02 21:00:00
46 2019-01-02 22:00:00
47 2019-01-02 23:00:00
48 2019-01-03 00:00:00
49 2019-01-03 01:00:00
...
8711 2019-12-29 23:00:00
8712 2019-12-30 00:00:00
8713 2019-12-30 01:00:00
8714 2019-12-30 02:00:00
8715 2019-12-30 03:00:00
8716 2019-12-30 04:00:00
8717 2019-12-30 05:00:00
8718 2019-12-30 06:00:00
8719 2019-12-30 07:00:00
8720 2019-12-30 08:00:00
8721 2019-12-30 09:00:00
8722 2019-12-30 10:00:00
8723 2019-12-30 11:00:00
8724 2019-12-30 12:00:00
8725 2019-12-30 13:00:00
8726 2019-12-30 14:00:00
8727 2019-12-30 15:00:00
8728 2019-12-30 16:00:00
8729 2019-12-30 17:00:00
8730 2019-12-30 18:00:00
8731 2019-12-30 19:00:00
8732 2019-12-30 20:00:00
8733 2019-12-30 21:00:00
8734 2019-12-30 22:00:00
8735 2019-12-30 23:00:00
8736 2019-12-31 00:00:00
8737 2019-12-31 01:00:00
8738 2019-12-31 02:00:00
8739 2019-12-31 03:00:00
8740 2019-12-31 04:00:00
8741 2019-12-31 05:00:00
8742 2019-12-31 06:00:00
8743 2019-12-31 07:00:00
8744 2019-12-31 08:00:00
8745 2019-12-31 09:00:00
8746 2019-12-31 10:00:00
8747 2019-12-31 11:00:00
8748 2019-12-31 12:00:00
8749 2019-12-31 13:00:00
8750 2019-12-31 14:00:00
8751 2019-12-31 15:00:00
8752 2019-12-31 16:00:00
8753 2019-12-31 17:00:00
8754 2019-12-31 18:00:00
8755 2019-12-31 19:00:00
8756 2019-12-31 20:00:00
8757 2019-12-31 21:00:00
8758 2019-12-31 22:00:00
8759 2019-12-31 23:00:00
Length: 8760, dtype: datetime64[ns]
Note if your Python version is lower than 3.6 use .format for string formatting:
year = 2019
pd.Series(pd.date_range(start='{}-01-01'.format(year), end='{}-12-31 23:00:00'.format(year), freq='H'))
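As a quick check that the range covers every hour of the year with no loop:

```python
import pandas as pd

year = 2019
hours = pd.Series(pd.date_range(start=f'{year}-01-01',
                                end=f'{year}-12-31 23:00:00', freq='H'))
print(len(hours))  # 8760 hours in a non-leap year
```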
I have a dataframe and I want to remove certain specific repeating rows:
import numpy as np
import pandas as pd
nrows = 144
df = pd.DataFrame(np.random.rand(nrows,), pd.date_range('2016-02-08 00:00:00', periods=nrows, freq='2h'), columns=['A'])
The dataframe is continuous with time, providing data every two hours ad infinitum, but I've chosen to only show a subset for brevity. I want to remove the data every 72 hours at 8:00, starting on Mondays, to coincide with an external event that alters the data. For this snapshot of data I want to remove the rows indexed at 2016-02-08 08:00, 2016-02-11 08:00, +3D, etc.
Is there a simple way to do this?
IIUC you could do this:
In [18]:
start = df.index[(df.index.dayofweek == 0) & (df.index.hour == 8)][0]
start
Out[18]:
Timestamp('2016-02-08 08:00:00')
In [45]:
df.loc[df.index.difference(pd.date_range(start, end=df.index[-1], freq='3D'))]
Out[45]:
A
2016-02-08 00:00:00 0.323742
2016-02-08 02:00:00 0.962252
2016-02-08 04:00:00 0.706537
2016-02-08 06:00:00 0.561446
2016-02-08 10:00:00 0.225042
2016-02-08 12:00:00 0.746258
2016-02-08 14:00:00 0.167950
2016-02-08 16:00:00 0.199958
2016-02-08 18:00:00 0.808286
2016-02-08 20:00:00 0.288797
2016-02-08 22:00:00 0.508109
2016-02-09 00:00:00 0.980772
2016-02-09 02:00:00 0.995731
2016-02-09 04:00:00 0.742751
2016-02-09 06:00:00 0.392247
2016-02-09 08:00:00 0.460511
2016-02-09 10:00:00 0.083660
2016-02-09 12:00:00 0.273620
2016-02-09 14:00:00 0.791506
2016-02-09 16:00:00 0.440630
2016-02-09 18:00:00 0.326418
2016-02-09 20:00:00 0.790780
2016-02-09 22:00:00 0.521131
2016-02-10 00:00:00 0.219315
2016-02-10 02:00:00 0.016625
2016-02-10 04:00:00 0.958566
2016-02-10 06:00:00 0.405643
2016-02-10 08:00:00 0.958025
2016-02-10 10:00:00 0.786663
2016-02-10 12:00:00 0.589064
... ...
2016-02-17 12:00:00 0.360848
2016-02-17 14:00:00 0.757499
2016-02-17 16:00:00 0.391574
2016-02-17 18:00:00 0.062812
2016-02-17 20:00:00 0.308282
2016-02-17 22:00:00 0.251520
2016-02-18 00:00:00 0.832871
2016-02-18 02:00:00 0.387108
2016-02-18 04:00:00 0.070969
2016-02-18 06:00:00 0.298831
2016-02-18 08:00:00 0.878526
2016-02-18 10:00:00 0.979233
2016-02-18 12:00:00 0.386620
2016-02-18 14:00:00 0.420962
2016-02-18 16:00:00 0.238879
2016-02-18 18:00:00 0.124069
2016-02-18 20:00:00 0.985828
2016-02-18 22:00:00 0.585278
2016-02-19 00:00:00 0.409226
2016-02-19 02:00:00 0.093945
2016-02-19 04:00:00 0.389450
2016-02-19 06:00:00 0.378091
2016-02-19 08:00:00 0.874232
2016-02-19 10:00:00 0.527629
2016-02-19 12:00:00 0.490236
2016-02-19 14:00:00 0.509008
2016-02-19 16:00:00 0.097061
2016-02-19 18:00:00 0.111626
2016-02-19 20:00:00 0.877099
2016-02-19 22:00:00 0.796201
[140 rows x 1 columns]
So this determines the start of the range by comparing the dayofweek and hour and taking the first matching index value. We then generate the timestamps to remove using date_range, call difference on the index to drop those rows, and pass the result to loc.
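Condensed into one runnable block (the data is random, so only the shape and the removed timestamps are checked):

```python
import numpy as np
import pandas as pd

nrows = 144
df = pd.DataFrame(np.random.rand(nrows),
                  index=pd.date_range('2016-02-08 00:00:00', periods=nrows, freq='2h'),
                  columns=['A'])

# First Monday 08:00 in the index...
start = df.index[(df.index.dayofweek == 0) & (df.index.hour == 8)][0]
# ...then drop that timestamp and every 72 hours after it
drop = pd.date_range(start, end=df.index[-1], freq='3D')
out = df.loc[df.index.difference(drop)]
print(out.shape)  # 4 of the 144 rows removed
```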