The df looks like this:
DateTime
2017-07-10 03:00:00 288.0
2017-07-10 04:00:00 306.0
2017-08-10 05:00:00 393.0
2017-08-10 06:00:00 522.0
2017-09-10 07:00:00 487.0
2017-09-10 08:00:00 523.0
2017-10-10 09:00:00 585.0
Question: how to select the rows whose date is in a list of dates:
['2017-07-10', '2017-09-10']
to have:
DateTime
2017-07-10 03:00:00 288.0
2017-07-10 04:00:00 306.0
2017-09-10 07:00:00 487.0
2017-09-10 08:00:00 523.0
Thanks
Given that the dates in your list are specified at daily resolution, you could start by flooring the DatetimeIndex to the daily level (DatetimeIndex.floor) and indexing with the list of datetime objects using isin:
t = [pd.to_datetime('2017-07-10'), pd.to_datetime('2017-09-10')]
df.index = pd.to_datetime(df.index)
df[df.index.floor('d').isin(t)]
Output
DateTime
2017-07-10 03:00:00 288.0
2017-07-10 04:00:00 306.0
2017-09-10 07:00:00 487.0
2017-09-10 08:00:00 523.0
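Note that flooring to the day is equivalent to normalizing the timestamps to midnight; as a minimal alternative sketch, DatetimeIndex.normalize gives the same result:
df[df.index.normalize().isin(t)]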
Assuming DateTime is the index, try the below:
to_search=['2017-07-10', '2017-09-10']
df[df.index.to_series().dt.date.astype(str).isin(to_search)]
DateTime
2017-07-10 03:00:00 288.0
2017-07-10 04:00:00 306.0
2017-09-10 07:00:00 487.0
2017-09-10 08:00:00 523.0
I have a pandas data frame containing a large-ish set of hourly data points. For a few days, there are missing data (NaN). I want to interpolate values for the missing hourly data points by calculating the mean of the same time period on the prior and following day (I've done some analysis and believe this will be reasonable).
An example of the data is below:
datetime value
2018-11-17 00:00:00 9.12
2018-11-17 01:00:00 8.94
2018-11-17 02:00:00 8.68
2018-11-17 03:00:00 8.19
2018-11-17 04:00:00 7.75
2018-11-17 05:00:00 7.35
2018-11-17 06:00:00 7.05
2018-11-17 07:00:00 6.55
2018-11-17 08:00:00 6.30
2018-11-17 09:00:00 6.28
2018-11-17 10:00:00 6.68
2018-11-17 11:00:00 7.64
2018-11-17 12:00:00 8.61
2018-11-17 13:00:00 9.44
2018-11-17 14:00:00 9.84
2018-11-17 15:00:00 9.62
2018-11-17 16:00:00 8.17
2018-11-17 17:00:00 6.16
2018-11-17 18:00:00 5.93
2018-11-17 19:00:00 5.36
2018-11-17 20:00:00 4.69
2018-11-17 21:00:00 4.36
2018-11-17 22:00:00 4.68
2018-11-17 23:00:00 4.86
2018-11-18 00:00:00 NaN
2018-11-18 01:00:00 NaN
2018-11-18 02:00:00 NaN
2018-11-18 03:00:00 NaN
2018-11-18 04:00:00 NaN
2018-11-18 05:00:00 NaN
2018-11-18 06:00:00 NaN
2018-11-18 07:00:00 NaN
2018-11-18 08:00:00 NaN
2018-11-18 09:00:00 NaN
2018-11-18 10:00:00 NaN
2018-11-18 11:00:00 NaN
2018-11-18 12:00:00 NaN
2018-11-18 13:00:00 NaN
2018-11-18 14:00:00 NaN
2018-11-18 15:00:00 NaN
2018-11-18 16:00:00 NaN
2018-11-18 17:00:00 NaN
2018-11-18 18:00:00 NaN
2018-11-18 19:00:00 NaN
2018-11-18 20:00:00 NaN
2018-11-18 21:00:00 NaN
2018-11-18 22:00:00 NaN
2018-11-18 23:00:00 NaN
2018-11-19 00:00:00 3.19
2018-11-19 01:00:00 2.60
2018-11-19 02:00:00 2.29
2018-11-19 03:00:00 1.97
2018-11-19 04:00:00 2.19
2018-11-19 05:00:00 3.09
2018-11-19 06:00:00 4.32
2018-11-19 07:00:00 4.87
2018-11-19 08:00:00 5.14
2018-11-19 09:00:00 5.55
2018-11-19 10:00:00 6.34
2018-11-19 11:00:00 7.43
2018-11-19 12:00:00 8.18
2018-11-19 13:00:00 8.53
2018-11-19 14:00:00 8.45
2018-11-19 15:00:00 7.94
2018-11-19 16:00:00 6.87
2018-11-19 17:00:00 5.56
2018-11-19 18:00:00 4.65
2018-11-19 19:00:00 4.18
2018-11-19 20:00:00 3.97
2018-11-19 21:00:00 3.98
2018-11-19 22:00:00 4.01
2018-11-19 23:00:00 4.00
So, for example, the desired output for 2018-11-18 00:00:00 would be the mean of 9.12 and 3.19 = 6.16. And so on for the other hours of the day on 2018-11-18.
Is there a simple way to do this in pandas? Ideally with a method that could be applied to a whole column (feature) within a data frame, rather than having to slice out some of the data, transform it, and then replace it (because honestly, it would be a lot quicker for me to do that in Excel!).
Thanks in advance for your help.
Try:
# make sure every hour is present in the datetime index
df = df.set_index("datetime").resample("1h").last()
# create a series of means averaging the values 24 hours before and after
means = df["value"].shift(24).add(df["value"].shift(-24)).mul(0.5)
# fill the NaN in df with the means
df["value"] = df["value"].combine_first(means)
>>> df.iloc[24:48]
value
datetime
2018-11-18 00:00:00 6.155
2018-11-18 01:00:00 5.770
2018-11-18 02:00:00 5.485
2018-11-18 03:00:00 5.080
2018-11-18 04:00:00 4.970
2018-11-18 05:00:00 5.220
2018-11-18 06:00:00 5.685
2018-11-18 07:00:00 5.710
2018-11-18 08:00:00 5.720
2018-11-18 09:00:00 5.915
2018-11-18 10:00:00 6.510
2018-11-18 11:00:00 7.535
2018-11-18 12:00:00 8.395
2018-11-18 13:00:00 8.985
2018-11-18 14:00:00 9.145
2018-11-18 15:00:00 8.780
2018-11-18 16:00:00 7.520
2018-11-18 17:00:00 5.860
2018-11-18 18:00:00 5.290
2018-11-18 19:00:00 4.770
2018-11-18 20:00:00 4.330
2018-11-18 21:00:00 4.170
2018-11-18 22:00:00 4.345
2018-11-18 23:00:00 4.430
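As a side note, because means is aligned on the same index, Series.fillna gives the same result as combine_first for the last step:
df["value"] = df["value"].fillna(means)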
Here is an extract from my pandas dataframe, which is survey data with two datetime fields. It appears that some of the start and end times were filled in the wrong positions in the survey. I suspect the start and end time in the 8th row were entered the wrong way round.
Just to give context, I generated the third column like this:
df_time['trip_duration'] = df_time['tripEnd_time'] - df_time['tripStart_time']
The three columns are in timedelta64 format.
Here is the top of my dataframe:
tripStart_time tripEnd_time trip_duration
1 22:30:00 23:15:00 00:45:00
2 11:00:00 11:30:00 00:30:00
3 09:00:00 09:15:00 00:15:00
4 13:30:00 14:25:00 00:55:00
5 09:00:00 10:15:00 01:15:00
6 12:00:00 12:15:00 00:15:00
7 08:00:00 08:30:00 00:30:00
8 11:00:00 09:15:00 -1 days +22:15:00
9 14:00:00 14:30:00 00:30:00
10 14:55:00 15:20:00 00:25:00
What I am trying to do is loop through these two columns and, each time tripEnd_time is less than tripStart_time, swap the two entries. So in the case of row 8 above, I would make tripStart_time = tripEnd_time and tripEnd_time = tripStart_time.
I am not quite sure of the best way to approach this. Should I use a nested for loop where I compare each entry in the two columns?
Thanks
Use Series.abs:
df_time['trip_duration'] = (df_time['tripEnd_time'] - df_time['tripStart_time']).abs()
print(df_time)
tripStart_time tripEnd_time trip_duration
1 22:30:00 23:15:00 00:45:00
2 11:00:00 11:30:00 00:30:00
3 09:00:00 09:15:00 00:15:00
4 13:30:00 14:25:00 00:55:00
5 09:00:00 10:15:00 01:15:00
6 12:00:00 12:15:00 00:15:00
7 08:00:00 08:30:00 00:30:00
8 11:00:00 09:15:00 01:45:00
9 14:00:00 14:30:00 00:30:00
10 14:55:00 15:20:00 00:25:00
Which is the same as:
import numpy as np

a = df_time['tripEnd_time'] - df_time['tripStart_time']
b = df_time['tripStart_time'] - df_time['tripEnd_time']
mask = df_time['tripEnd_time'] > df_time['tripStart_time']
df_time['trip_duration'] = np.where(mask, a, b)
print(df_time)
tripStart_time tripEnd_time trip_duration
1 22:30:00 23:15:00 00:45:00
2 11:00:00 11:30:00 00:30:00
3 09:00:00 09:15:00 00:15:00
4 13:30:00 14:25:00 00:55:00
5 09:00:00 10:15:00 01:15:00
6 12:00:00 12:15:00 00:15:00
7 08:00:00 08:30:00 00:30:00
8 11:00:00 09:15:00 01:45:00
9 14:00:00 14:30:00 00:30:00
10 14:55:00 15:20:00 00:25:00
You can swap the column values on the selected rows:
mask = df_time['tripEnd_time'] < df_time['tripStart_time']
df_time.loc[mask, ['tripStart_time', 'tripEnd_time']] = (
    df_time.loc[mask, ['tripEnd_time', 'tripStart_time']].values
)
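If you swap the columns this way, remember to recompute the duration afterwards so it stays consistent with the corrected times:
df_time['trip_duration'] = df_time['tripEnd_time'] - df_time['tripStart_time']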
I have hourly observations of several variables that exhibit daily seasonality. I want to fill any missing value with the corresponding variable's value 24 hours prior.
Ideally my function would fill the missing values from oldest to newest. Thus if there are 25 consecutive missing values, the 25th missing value is filled with the same value as the first missing value. Using Series.map() fails in this case.
value desired_output
hour
2019-08-17 00:00:00 58.712986 58.712986
2019-08-17 01:00:00 28.904234 28.904234
2019-08-17 02:00:00 14.275149 14.275149
2019-08-17 03:00:00 58.777087 58.777087
2019-08-17 04:00:00 95.964955 95.964955
2019-08-17 05:00:00 64.971372 64.971372
2019-08-17 06:00:00 95.759469 95.759469
2019-08-17 07:00:00 98.675457 98.675457
2019-08-17 08:00:00 77.510319 77.510319
2019-08-17 09:00:00 56.492446 56.492446
2019-08-17 10:00:00 90.968924 90.968924
2019-08-17 11:00:00 66.647501 66.647501
2019-08-17 12:00:00 7.756725 7.756725
2019-08-17 13:00:00 49.328135 49.328135
2019-08-17 14:00:00 28.634033 28.634033
2019-08-17 15:00:00 65.157161 65.157161
2019-08-17 16:00:00 93.127539 93.127539
2019-08-17 17:00:00 98.806335 98.806335
2019-08-17 18:00:00 94.789761 94.789761
2019-08-17 19:00:00 63.518037 63.518037
2019-08-17 20:00:00 89.524433 89.524433
2019-08-17 21:00:00 48.076081 48.076081
2019-08-17 22:00:00 5.027928 5.027928
2019-08-17 23:00:00 0.417763 0.417763
2019-08-18 00:00:00 29.933627 29.933627
2019-08-18 01:00:00 61.726948 61.726948
2019-08-18 02:00:00 NaN 14.275149
2019-08-18 03:00:00 NaN 58.777087
2019-08-18 04:00:00 NaN 95.964955
2019-08-18 05:00:00 NaN 64.971372
2019-08-18 06:00:00 NaN 95.759469
2019-08-18 07:00:00 NaN 98.675457
2019-08-18 08:00:00 NaN 77.510319
2019-08-18 09:00:00 NaN 56.492446
2019-08-18 10:00:00 NaN 90.968924
2019-08-18 11:00:00 NaN 66.647501
2019-08-18 12:00:00 NaN 7.756725
2019-08-18 13:00:00 NaN 49.328135
2019-08-18 14:00:00 NaN 28.634033
2019-08-18 15:00:00 NaN 65.157161
2019-08-18 16:00:00 NaN 93.127539
2019-08-18 17:00:00 NaN 98.806335
2019-08-18 18:00:00 NaN 94.789761
2019-08-18 19:00:00 NaN 63.518037
2019-08-18 20:00:00 NaN 89.524433
2019-08-18 21:00:00 NaN 48.076081
2019-08-18 22:00:00 NaN 5.027928
2019-08-18 23:00:00 NaN 0.417763
2019-08-19 00:00:00 NaN 29.933627
2019-08-19 01:00:00 NaN 61.726948
2019-08-19 02:00:00 NaN 14.275149
2019-08-19 03:00:00 NaN 58.777087
2019-08-19 04:00:00 NaN 95.964955
2019-08-19 05:00:00 NaN 64.971372
2019-08-19 06:00:00 NaN 95.759469
2019-08-19 07:00:00 NaN 98.675457
2019-08-19 08:00:00 NaN 77.510319
2019-08-19 09:00:00 NaN 56.492446
2019-08-19 10:00:00 NaN 90.968924
2019-08-19 11:00:00 NaN 66.647501
2019-08-19 12:00:00 NaN 7.756725
2019-08-19 13:00:00 61.457913 61.457913
2019-08-19 14:00:00 52.429383 52.429383
2019-08-19 15:00:00 79.016485 79.016485
2019-08-19 16:00:00 77.724758 77.724758
2019-08-19 17:00:00 62.205810 62.205810
2019-08-19 18:00:00 15.841707 15.841707
2019-08-19 19:00:00 72.196028 72.196028
2019-08-19 20:00:00 5.497441 5.497441
2019-08-19 21:00:00 30.737596 30.737596
2019-08-19 22:00:00 65.550690 65.550690
2019-08-19 23:00:00 3.543332 3.543332
import pandas as pd
from dateutil.relativedelta import relativedelta as rel_delta

df['isna'] = df['value'].isna()
df['value'] = df.index.map(
    lambda t: df.at[t - rel_delta(hours=24), 'value']
    if df.at[t, 'isna'] and t - rel_delta(hours=24) >= df.index.min()
    else df.at[t, 'value']
)
What is the most efficient way to complete this naive forward fill?
IIUC, just groupby time and ffill()
df['results'] = df.groupby(df.hour.dt.time).value.ffill()
If hour is your index, just do df.index.time instead.
Checking:
>>> (df['results'] == df['desired_output']).all()
True
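For completeness, a minimal sketch of the index variant mentioned above, assuming hour is the DatetimeIndex:
df['results'] = df.groupby(df.index.time)['value'].ffill()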
Wouldn't this work?
df['value'] = df['value'].fillna(df.index.hour)
Separate Date and Time into two string columns. Call the result df.
Date Time Value
0 2019-08-17 00:00:00 58.712986
1 2019-08-17 01:00:00 28.904234
2 2019-08-17 02:00:00 14.275149
3 2019-08-17 03:00:00 58.777087
4 2019-08-17 04:00:00 95.964955
Then reshape the data: pivot Time into the column headers and forward-fill the NaNs along each hour.
(df reshaping)
Date 00:00:00 01:00:00 02:00:00 03:00:00 04:00:00
2019-08-17 58.712986 28.904234 14.275149 58.777087 95.964955
2019-08-18 29.933627 61.726948 NaN NaN NaN
2019-08-19 NaN NaN NaN NaN NaN
(df ffill)
Date 00:00:00 01:00:00 02:00:00 03:00:00 04:00:00
2019-08-17 58.712986 28.904234 14.275149 58.777087 95.964955
2019-08-18 29.933627 61.726948 14.275149 58.777087 95.964955
2019-08-19 29.933627 61.726948 14.275149 58.777087 95.964955
(Code)
(df.set_index(['Date', 'Time'])['Value']
   .unstack()
   .ffill()
   .stack()
   .reset_index(name='Value'))
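If the data starts out with a DatetimeIndex as in the question, a minimal sketch of the initial split into string columns, assuming the index is named hour as in the question's table:
df = df.reset_index()
df['Date'] = df['hour'].dt.date.astype(str)
df['Time'] = df['hour'].dt.time.astype(str)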
I'm not able to create a pandas Series of every hour (as datetime objects) of a given year without iterating and adding one hour to the last, and that's slow. Is there any way to do that in parallel?
My input would be a year and the output should be a Pandas Series of every hour of that year.
You can use pd.date_range with freq='H', which is hourly frequency:
Edit: ending at 23:00:00 after a comment by @ALollz
year = 2019
pd.Series(pd.date_range(start=f'{year}-01-01', end=f'{year}-12-31 23:00:00', freq='H'))
0 2019-01-01 00:00:00
1 2019-01-01 01:00:00
2 2019-01-01 02:00:00
3 2019-01-01 03:00:00
4 2019-01-01 04:00:00
5 2019-01-01 05:00:00
6 2019-01-01 06:00:00
7 2019-01-01 07:00:00
8 2019-01-01 08:00:00
9 2019-01-01 09:00:00
10 2019-01-01 10:00:00
11 2019-01-01 11:00:00
12 2019-01-01 12:00:00
13 2019-01-01 13:00:00
14 2019-01-01 14:00:00
15 2019-01-01 15:00:00
16 2019-01-01 16:00:00
17 2019-01-01 17:00:00
18 2019-01-01 18:00:00
19 2019-01-01 19:00:00
20 2019-01-01 20:00:00
21 2019-01-01 21:00:00
22 2019-01-01 22:00:00
23 2019-01-01 23:00:00
24 2019-01-02 00:00:00
25 2019-01-02 01:00:00
26 2019-01-02 02:00:00
27 2019-01-02 03:00:00
28 2019-01-02 04:00:00
29 2019-01-02 05:00:00
30 2019-01-02 06:00:00
31 2019-01-02 07:00:00
32 2019-01-02 08:00:00
33 2019-01-02 09:00:00
34 2019-01-02 10:00:00
35 2019-01-02 11:00:00
36 2019-01-02 12:00:00
37 2019-01-02 13:00:00
38 2019-01-02 14:00:00
39 2019-01-02 15:00:00
40 2019-01-02 16:00:00
41 2019-01-02 17:00:00
42 2019-01-02 18:00:00
43 2019-01-02 19:00:00
44 2019-01-02 20:00:00
45 2019-01-02 21:00:00
46 2019-01-02 22:00:00
47 2019-01-02 23:00:00
48 2019-01-03 00:00:00
49 2019-01-03 01:00:00
...
8711 2019-12-29 23:00:00
8712 2019-12-30 00:00:00
8713 2019-12-30 01:00:00
8714 2019-12-30 02:00:00
8715 2019-12-30 03:00:00
8716 2019-12-30 04:00:00
8717 2019-12-30 05:00:00
8718 2019-12-30 06:00:00
8719 2019-12-30 07:00:00
8720 2019-12-30 08:00:00
8721 2019-12-30 09:00:00
8722 2019-12-30 10:00:00
8723 2019-12-30 11:00:00
8724 2019-12-30 12:00:00
8725 2019-12-30 13:00:00
8726 2019-12-30 14:00:00
8727 2019-12-30 15:00:00
8728 2019-12-30 16:00:00
8729 2019-12-30 17:00:00
8730 2019-12-30 18:00:00
8731 2019-12-30 19:00:00
8732 2019-12-30 20:00:00
8733 2019-12-30 21:00:00
8734 2019-12-30 22:00:00
8735 2019-12-30 23:00:00
8736 2019-12-31 00:00:00
8737 2019-12-31 01:00:00
8738 2019-12-31 02:00:00
8739 2019-12-31 03:00:00
8740 2019-12-31 04:00:00
8741 2019-12-31 05:00:00
8742 2019-12-31 06:00:00
8743 2019-12-31 07:00:00
8744 2019-12-31 08:00:00
8745 2019-12-31 09:00:00
8746 2019-12-31 10:00:00
8747 2019-12-31 11:00:00
8748 2019-12-31 12:00:00
8749 2019-12-31 13:00:00
8750 2019-12-31 14:00:00
8751 2019-12-31 15:00:00
8752 2019-12-31 16:00:00
8753 2019-12-31 17:00:00
8754 2019-12-31 18:00:00
8755 2019-12-31 19:00:00
8756 2019-12-31 20:00:00
8757 2019-12-31 21:00:00
8758 2019-12-31 22:00:00
8759 2019-12-31 23:00:00
Length: 8760, dtype: datetime64[ns]
Note: if your Python version is lower than 3.6, use .format for string formatting:
year = 2019
pd.Series(pd.date_range(start='{}-01-01'.format(year), end='{}-12-31 23:00:00'.format(year), freq='H'))
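As an alternative sketch that avoids hard-coding the final hour (and handles leap years automatically), you can ask for a left-closed range that stops just before the next year; this assumes pandas 1.4+ for the inclusive keyword (older versions spell it closed='left'):
pd.Series(pd.date_range(start=f'{year}-01-01', end=f'{year + 1}-01-01', freq='H', inclusive='left'))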
I have a dataframe and I want to remove certain specific repeating rows:
import numpy as np
import pandas as pd
nrows = 144
df = pd.DataFrame(np.random.rand(nrows,), pd.date_range('2016-02-08 00:00:00', periods=nrows, freq='2h'), columns=['A'])
The dataframe is continuous in time, providing data every two hours ad infinitum, but I've chosen to show only a subset for brevity. I want to remove the data every 72 hours at 8:00, starting on Mondays, to coincide with an external event that alters the data. For this snapshot of data I want to remove the rows indexed at 2016-02-08 08:00, 2016-02-11 08:00, +3D, etc.
Is there a simple way to do this?
IIUC you could do this:
In [18]:
start = df.index[(df.index.dayofweek == 0) & (df.index.hour == 8)][0]
start
Out[18]:
Timestamp('2016-02-08 08:00:00')
In [45]:
df.loc[df.index.difference(pd.date_range(start, end=df.index[-1], freq='3D'))]
Out[45]:
A
2016-02-08 00:00:00 0.323742
2016-02-08 02:00:00 0.962252
2016-02-08 04:00:00 0.706537
2016-02-08 06:00:00 0.561446
2016-02-08 10:00:00 0.225042
2016-02-08 12:00:00 0.746258
2016-02-08 14:00:00 0.167950
2016-02-08 16:00:00 0.199958
2016-02-08 18:00:00 0.808286
2016-02-08 20:00:00 0.288797
2016-02-08 22:00:00 0.508109
2016-02-09 00:00:00 0.980772
2016-02-09 02:00:00 0.995731
2016-02-09 04:00:00 0.742751
2016-02-09 06:00:00 0.392247
2016-02-09 08:00:00 0.460511
2016-02-09 10:00:00 0.083660
2016-02-09 12:00:00 0.273620
2016-02-09 14:00:00 0.791506
2016-02-09 16:00:00 0.440630
2016-02-09 18:00:00 0.326418
2016-02-09 20:00:00 0.790780
2016-02-09 22:00:00 0.521131
2016-02-10 00:00:00 0.219315
2016-02-10 02:00:00 0.016625
2016-02-10 04:00:00 0.958566
2016-02-10 06:00:00 0.405643
2016-02-10 08:00:00 0.958025
2016-02-10 10:00:00 0.786663
2016-02-10 12:00:00 0.589064
... ...
2016-02-17 12:00:00 0.360848
2016-02-17 14:00:00 0.757499
2016-02-17 16:00:00 0.391574
2016-02-17 18:00:00 0.062812
2016-02-17 20:00:00 0.308282
2016-02-17 22:00:00 0.251520
2016-02-18 00:00:00 0.832871
2016-02-18 02:00:00 0.387108
2016-02-18 04:00:00 0.070969
2016-02-18 06:00:00 0.298831
2016-02-18 08:00:00 0.878526
2016-02-18 10:00:00 0.979233
2016-02-18 12:00:00 0.386620
2016-02-18 14:00:00 0.420962
2016-02-18 16:00:00 0.238879
2016-02-18 18:00:00 0.124069
2016-02-18 20:00:00 0.985828
2016-02-18 22:00:00 0.585278
2016-02-19 00:00:00 0.409226
2016-02-19 02:00:00 0.093945
2016-02-19 04:00:00 0.389450
2016-02-19 06:00:00 0.378091
2016-02-19 08:00:00 0.874232
2016-02-19 10:00:00 0.527629
2016-02-19 12:00:00 0.490236
2016-02-19 14:00:00 0.509008
2016-02-19 16:00:00 0.097061
2016-02-19 18:00:00 0.111626
2016-02-19 20:00:00 0.877099
2016-02-19 22:00:00 0.796201
[140 rows x 1 columns]
So this determines the start of the range by comparing dayofweek and hour and taking the first matching index value. We then generate the timestamps to remove using date_range, call difference on the index to drop those rows, and pass the result to loc.
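A compact equivalent is to drop the generated timestamps directly; a sketch that assumes every generated label exists in the index (true here, since 72 hours is a multiple of the 2-hour frequency):
start = df.index[(df.index.dayofweek == 0) & (df.index.hour == 8)][0]
df = df.drop(pd.date_range(start, end=df.index[-1], freq='3D'))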