Pandas Resample 5 mins data to Hourly average : Date issue [duplicate] - python

I am trying to resample time series data from 5-minute frequency to hourly averages.
import pandas as pd

df = pd.read_csv("my_data.csv", index_col=False, usecols=['A','B','C'])
Output:
A B C
0 16-01-21 0:00 95.75 0.0
1 16-01-21 0:05 90.10 0.0
2 16-01-21 0:10 86.26 0.0
3 16-01-21 0:15 92.72 0.0
4 16-01-21 0:20 81.54 0.0
df.A = pd.to_datetime(df.A)
Output:
A B C
0 2021-01-16 00:00:00 95.75 0.0
1 2021-01-16 00:05:00 90.10 0.0
2 2021-01-16 00:10:00 86.26 0.0
3 2021-01-16 00:15:00 92.72 0.0
4 2021-01-16 00:20:00 81.54 0.0
Now I set the timestamp column as the index:
df.set_index('A', inplace=True)
And when I try to resample with
df2 = df.resample('H').mean()
I get this:
B C
A
2021-01-02 00:00:00 79.970278 0.0
2021-01-02 01:00:00 77.951667 0.0
2021-01-02 02:00:00 77.610556 0.0
2021-01-02 03:00:00 80.800000 0.0
2021-01-02 04:00:00 84.305000 0.0
I was expecting this kind of timestamp with the average values for each hour:
A B C
2021-01-16 00:00:00 79.970278 0.0
2021-01-16 01:00:00 77.951667 0.0
2021-01-16 02:00:00 77.610556 0.0
2021-01-16 03:00:00 80.800000 0.0
2021-01-16 04:00:00 84.305000 0.0
I am not sure where I am making a mistake. Help me out.

I think the problem here is that some datetimes are wrongly converted:
# the default is month first in df.A = pd.to_datetime(df.A)
01-02-21 -> 2021-01-02
Possible solutions:
df.A = pd.to_datetime(df.A, dayfirst=True)
Or:
df = pd.read_csv("my_data.csv",
                 index_col=False,
                 usecols=['A','B','C'],
                 parse_dates=['A'],
                 dayfirst=True)
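Putting the fix together, here is a minimal sketch of the full pipeline (using the question's placeholder file and column names):
import pandas as pd

# parse column A as day-first dates while reading the CSV
df = pd.read_csv("my_data.csv",
                 index_col=False,
                 usecols=['A', 'B', 'C'],
                 parse_dates=['A'],
                 dayfirst=True)

# index by the timestamp and take hourly means
df2 = df.set_index('A').resample('H').mean()
print(df2.head())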

Related

Plotting events on a line graph

I am trying to visualise rain events using data contained in a dataframe.
The idea seems very simple, but the execution seems to be impossible!
Here is a part of the dataframe:
start_time end_time duration br_open_total
0 2022-01-01 10:00:00 2022-01-01 19:00:00 9.0 0.2540000563879943
1 2022-01-02 00:00:00 2022-01-02 10:00:00 10.0 1.0160002255520624
2 2022-01-02 17:00:00 2022-01-03 02:00:00 9.0 0.7620001691640113
3 2022-01-03 02:00:00 2022-01-04 12:00:00 34.0 10.668002368296513
4 2022-01-07 21:00:00 2022-01-08 06:00:00 9.0 0.2540000563879943
5 2022-01-16 05:00:00 2022-01-16 20:00:00 15.0 0.5080001127760454
6 2022-01-19 04:00:00 2022-01-19 17:00:00 13.0 0.7620001691640255
7 2022-01-21 14:00:00 2022-01-22 00:00:00 10.0 1.5240003383280751
8 2022-01-27 02:00:00 2022-01-27 16:00:00 14.0 3.0480006766561503
9 2022-02-01 12:00:00 2022-02-01 21:00:00 9.0 0.2540000563880126
10 2022-02-03 05:00:00 2022-02-03 15:00:00 10.0 0.5080001127760251
What I want to do is have a plot with time on the x axis, and the value of the 'br_open_total' on the y axis.
I can draw what I want it to look like, see below:
I apologise for the simplicity of the drawing, but I think it explains what I want to do.
How do I do this, and then repeat it for other dataframes on the same plot?
I have tried staircase, matplotlib.pyplot.stairs and others with no success.
It seems such a simple concept!
Edit 1:
Tried Joswin K J's answer with the actual data, and got this:
The event at 02-12 11:00 should be 112 hours in duration, but the bar is the same width as all the others.
Edit 2:
Tried Mozway's answer and got this:
It still doesn't show the width of each event, and doesn't discretise the events either.
Edit 3:
Using Mozway's amended answer I get this plot for the actual data:
I have added the cursor position using Paint. At the top right of the plot you can see that the cursor is at 2022-02-09 and 20.34, which is actually the value for 2022-02-01, so it seems the plot is shifted to the left by one data point. Also, the large block between 2022-03-01 and 2022-04-03 doesn't seem to be in the data.
Edit 4: as requested by Mozway
Reshaped Data
duration br_open_total variable date
0 10.0 1.0160002255520624 start_time 2022-01-02 00:00:00
19 10.0 0.0 end_time 2022-01-02 10:00:00
1 9.0 0.7620001691640113 start_time 2022-01-02 17:00:00
2 34.0 10.668002368296513 start_time 2022-01-03 02:00:00
21 34.0 0.0 end_time 2022-01-04 12:00:00
3 15.0 0.5080001127760454 start_time 2022-01-16 05:00:00
22 15.0 0.0 end_time 2022-01-16 20:00:00
4 13.0 0.7620001691640255 start_time 2022-01-19 04:00:00
23 13.0 0.0 end_time 2022-01-19 17:00:00
5 10.0 1.5240003383280751 start_time 2022-01-21 14:00:00
24 10.0 0.0 end_time 2022-01-22 00:00:00
6 14.0 3.0480006766561503 start_time 2022-01-27 02:00:00
25 14.0 0.0 end_time 2022-01-27 16:00:00
7 10.0 0.5080001127760251 start_time 2022-02-03 05:00:00
26 10.0 0.0 end_time 2022-02-03 15:00:00
8 18.0 7.366001635252363 start_time 2022-02-03 23:00:00
27 18.0 0.0 end_time 2022-02-04 17:00:00
9 13.0 2.28600050749211 start_time 2022-02-05 11:00:00
28 13.0 0.0 end_time 2022-02-06 00:00:00
10 19.0 2.2860005074921173 start_time 2022-02-06 04:00:00
29 19.0 0.0 end_time 2022-02-06 23:00:00
11 13.0 1.2700002819400584 start_time 2022-02-07 11:00:00
30 13.0 0.0 end_time 2022-02-08 00:00:00
12 12.0 2.79400062026814 start_time 2022-02-09 01:00:00
31 12.0 0.0 end_time 2022-02-09 13:00:00
13 112.0 20.320004511041 start_time 2022-02-12 11:00:00
32 112.0 0.0 end_time 2022-02-17 03:00:00
14 28.0 2.0320004511041034 start_time 2022-02-18 14:00:00
33 28.0 0.0 end_time 2022-02-19 18:00:00
15 17.0 17.272003834384847 start_time 2022-02-23 17:00:00
34 17.0 0.0 end_time 2022-02-24 10:00:00
16 9.0 0.7620001691640397 start_time 2022-02-27 13:00:00
35 9.0 0.0 end_time 2022-02-27 22:00:00
17 18.0 4.0640009022082 start_time 2022-04-04 00:00:00
36 18.0 0.0 end_time 2022-04-04 18:00:00
18 15.0 1.0160002255520482 start_time 2022-04-06 05:00:00
37 15.0 0.0 end_time 2022-04-06 20:00:00
When plotted using
plt.step(bdf2['date'], bdf2['br_open_total'])
plt.gcf().set_size_inches(10, 4)
plt.xticks(rotation=90)
produces the plot shown above, in which the top left corner of a block corresponds to the previous data point.
Edit 5: further info
When I plot all my dataframes (different sensors), I get the same differential on the event start and end times.
You can use a step plot:
import pandas as pd

# ensure datetime
df['start_time'] = pd.to_datetime(df['start_time'])
df['end_time'] = pd.to_datetime(df['end_time'])

# reshape the data
df2 = (df
       .melt(id_vars=['duration', 'br_open_total'], value_name='date')
       .sort_values(by='date')
       .drop_duplicates(subset='date')
       .assign(br_open_total=lambda d: d['br_open_total'].mask(d['variable'].eq('end_time'), 0))
      )
# plot
import matplotlib.pyplot as plt
plt.step(df2['date'], df2['br_open_total'])
plt.gcf().set_size_inches(10, 4)
output:
reshaped data:
duration br_open_total variable date
0 9.0 0.254000 start_time 2022-01-01 10:00:00
11 9.0 0.000000 end_time 2022-01-01 19:00:00
1 10.0 1.016000 start_time 2022-01-02 00:00:00
12 10.0 0.000000 end_time 2022-01-02 10:00:00
2 9.0 0.762000 start_time 2022-01-02 17:00:00
3 34.0 10.668002 start_time 2022-01-03 02:00:00
14 34.0 0.000000 end_time 2022-01-04 12:00:00
4 9.0 0.254000 start_time 2022-01-07 21:00:00
15 9.0 0.000000 end_time 2022-01-08 06:00:00
5 15.0 0.508000 start_time 2022-01-16 05:00:00
16 15.0 0.000000 end_time 2022-01-16 20:00:00
6 13.0 0.762000 start_time 2022-01-19 04:00:00
17 13.0 0.000000 end_time 2022-01-19 17:00:00
7 10.0 1.524000 start_time 2022-01-21 14:00:00
18 10.0 0.000000 end_time 2022-01-22 00:00:00
8 14.0 3.048001 start_time 2022-01-27 02:00:00
19 14.0 0.000000 end_time 2022-01-27 16:00:00
9 9.0 0.254000 start_time 2022-02-01 12:00:00
20 9.0 0.000000 end_time 2022-02-01 21:00:00
10 10.0 0.508000 start_time 2022-02-03 05:00:00
21 10.0 0.000000 end_time 2022-02-03 15:00:00
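As an aside (not part of the answer above): each event contributes a rising edge at its start_time and a falling edge at its end_time, which is why the end_time rows are set to 0. Here is a minimal sketch on two hypothetical events; note that plt.step defaults to where='pre', whereas where='post' holds each value until the next point, which is usually what a start/end event trace needs:
import pandas as pd
import matplotlib.pyplot as plt

# two hypothetical rain events
toy = pd.DataFrame({
    'duration': [9.0, 10.0],
    'br_open_total': [0.254, 1.016],
    'start_time': pd.to_datetime(['2022-01-01 10:00', '2022-01-02 00:00']),
    'end_time': pd.to_datetime(['2022-01-01 19:00', '2022-01-02 10:00']),
})

toy2 = (toy
        .melt(id_vars=['duration', 'br_open_total'], value_name='date')
        .sort_values(by='date')
        .assign(br_open_total=lambda d: d['br_open_total'].mask(d['variable'].eq('end_time'), 0))
       )

# hold each value until the next point so every event shows as a rectangle
plt.step(toy2['date'], toy2['br_open_total'], where='post')
plt.show()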
Try this:
import matplotlib.pyplot as plt

for ind, row in df.iterrows():
    plt.plot(pd.Series([row['start_time'], row['end_time']]), pd.Series([row['br_open_total'], row['br_open_total']]), color='b')
    plt.plot(pd.Series([row['start_time'], row['start_time']]), pd.Series([0, row['br_open_total']]), color='b')
    plt.plot(pd.Series([row['end_time'], row['end_time']]), pd.Series([0, row['br_open_total']]), color='b')
plt.xticks(rotation=90)
Result:
I believe I have now cracked it, with a great debt of thanks to @Mozway.
The code to restructure the dataframe for plotting:
# create dataframes of the open gauge events, removing any event with an open total of less than 0.254 mm
# bresser/open
bdftdf = bdf.loc[bdf['br_open_total'] > 0.255]
bdftdf = bdftdf.copy()
bdftdf['start_time'] = pd.to_datetime(bdftdf['start_time'])
bdftdf['end_time'] = pd.to_datetime(bdftdf['end_time'])
bdf2 = (bdftdf
        .melt(id_vars=['duration', 'ic_total', 'mc_total', 'md_total', 'imd_total', 'oak_total', 'highpoint_total', 'school_total', 'br_open_total',
                       'fr_gauge_total', 'open_mean_total', 'br_open_ic_%_int', 'br_open_mc_%_int', 'br_open_md_%_int', 'br_open_imd_%_int',
                       'br_open_oak_%_int'], value_name='date')
        .sort_values(by='date')
        #.drop_duplicates(subset='date')
        .assign(br_open_total=lambda d: d['br_open_total'].mask(d['variable'].eq('end_time'), 0))
       )

# create an array for the stairs plot
bdfarr = np.array(bdf2['date'])
bl = len(bdf2)
bdfarr = np.append(bdfarr, [bdfarr[bl - 1] + np.timedelta64(1, 'h')])
Rather than use the plt.step plot as suggested by Mozway, I have used plt.stairs, after creating an array of the 'date' column in the dataframe and appending an extra element to that array equal to the last element plus 1 hour.
This means that the data now plots as I had intended it to.
Code for the plot:
fig1=plt.figure()
plt.stairs(bdf2['br_open_total'], bdfarr, label='Bresser/Open')
plt.stairs(frdf2['fr_gauge_total'], frdfarr, label='FR Gauge')
plt.stairs(hpdf2['highpoint_total'], hpdfarr, label='Highpoint')
plt.stairs(schdf2['school_total'], schdfarr, label='School')
plt.stairs(opmedf2['open_mean_total'], opmedfarr, label='Open mean')
plt.xticks(rotation=90)
plt.legend(title='Rain events', loc='best')
plt.show()
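A side note on why the extra element is needed (a sketch with made-up numbers, not the question's data): plt.stairs(values, edges) draws len(values) steps between len(values) + 1 edges, so the date array has to contain one more entry than the value column.
import matplotlib.pyplot as plt

values = [1.0, 0.0, 2.5]   # hypothetical step heights
edges = [0, 1, 2, 3]       # one more edge than there are values
plt.stairs(values, edges)
plt.show()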

How to groupby for dataframe which has a datetimeindex only using hours

I've got a dataframe called new_dh of web requests that looks like this (there are more columns):
s-sitename sc-win32-status
date_time
2006-11-01 00:00:00 W3SVC1 0.0
2006-11-01 00:00:00 W3SVC1 0.0
2006-11-01 01:00:00 W3SVC1 0.0
2006-11-01 01:00:00 W3SVC1 0.0
2006-11-01 02:00:00 W3SVC1 0.0
2007-02-28 02:00:00 W3SVC1 0.0
2007-02-28 10:00:00 W3SVC1 0.0
2007-02-28 23:00:00 W3SVC1 0.0
2007-02-28 23:00:00 W3SVC1 0.0
2007-02-28 23:00:00 W3SVC1 0.0
What I would like to do is group by the hours (the actual date of the request does not matter, just the hour; all the times have already been rounded down so they don't include minutes) for the DatetimeIndex and instead return:
count
hour
0 2
1 2
2 2
10 1
23 3
Any help would be much appreciated.
I have tried
new_dh.groupby([new_dh.index.hour]).count()
but I find myself printing many columns of the same value, whereas I only want the version above.
If you need a DatetimeIndex in the output, use DataFrame.resample:
new_dh.resample('H')['s-sitename'].count()
Or DatetimeIndex.floor:
new_dh.groupby(new_dh.index.floor('H'))['s-sitename'].count()
The problem with your solution is that GroupBy.count counts the values of every column per hour, excluding missing values, so if there are no missing values you get multiple columns with the same values. A possible solution is to specify the column after the groupby:
new_dh.groupby([new_dh.index.hour])['s-sitename'].count()
The sample data was changed here to show how count excludes missing values:
print (new_dh)
s-sitename sc-win32-status
date_time
2006-11-01 00:00:00 W3SVC1 0.0
2006-11-01 00:00:00 W3SVC1 0.0
2006-11-01 01:00:00 W3SVC1 0.0
2006-11-01 01:00:00 W3SVC1 0.0
2006-11-01 02:00:00 NaN 0.0
2007-02-28 02:00:00 W3SVC1 0.0
2007-02-28 10:00:00 W3SVC1 0.0
2007-02-28 23:00:00 NaN 0.0
2007-02-28 23:00:00 NaN 0.0
2007-02-28 23:00:00 W3SVC1 0.0
df = new_dh.groupby([new_dh.index.hour]).count()
print (df)
s-sitename sc-win32-status
date_time
0 2 2
1 2 2
2 1 2
10 1 1
23 1 3
So if the column is specified:
s = new_dh.groupby([new_dh.index.hour])['s-sitename'].count()
print (s)
date_time
0 2
1 2
2 1
10 1
23 1
Name: s-sitename, dtype: int64
df = new_dh.groupby([new_dh.index.hour])['s-sitename'].count().to_frame()
print (df)
s-sitename
date_time
0 2
1 2
2 1
10 1
23 1
If you also need to count missing values, use GroupBy.size:
s = new_dh.groupby([new_dh.index.hour])['s-sitename'].size()
print (s)
date_time
0 2
1 2
2 2
10 1
23 3
Name: s-sitename, dtype: int64
df = new_dh.groupby([new_dh.index.hour])['s-sitename'].size().to_frame()
print (df)
s-sitename
date_time
0 2
1 2
2 2
10 1
23 3
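If the output should match the layout in the question exactly (an index named hour and a single count column), a small finishing step can be chained on; this is just a sketch of one way to do it:
s = new_dh.groupby(new_dh.index.hour)['s-sitename'].size()
out = s.rename('count').rename_axis('hour').to_frame()
print(out)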
new_dh['hour'] = new_dh.index.map(lambda x: x.hour)
new_dh.groupby('hour')['hour'].count()
Result
hour
0 2
1 2
2 2
10 1
23 3
Name: hour, dtype: int64
If you need a DataFrame as the result:
new_dh.groupby('hour')['hour'].count().rename('count').to_frame()
In this case, the result will be:
count
hour
0 2
1 2
2 2
10 1
23 3
You can also do this by using the groupby() and assign() methods:
If the 'date_time' column is not your index:
result=df.assign(hour=df['date_time'].dt.hour).groupby('hour').agg(count=('s-sitename','count'))
If it's your index, then use:
result=df.groupby(df.index.hour)['s-sitename'].count().to_frame('count')
result.index.name='hour'
Now if you print result, you will get your desired output:
count
hour
0 2
1 2
2 2
10 1
23 3
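As a rough alternative sketch (not from the answers above), the hours can also be counted straight from the index with value_counts:
# count rows per hour of day directly from the DatetimeIndex
counts = new_dh.index.hour.value_counts().sort_index()
print(counts)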

Pandas - how to merge dataframes on datetime column of different format?

I have two dataframes that I need to merge based on date. The first dataframe looks like:
Time Stamp HP_1H_mean Coolant1_1H_mean Extreme_1H_mean
0 2019-07-26 07:00:00 410.637966 414.607081 0.0
1 2019-07-26 08:00:00 403.521735 424.787366 0.0
2 2019-07-26 09:00:00 403.143925 425.739639 0.0
3 2019-07-26 10:00:00 410.542895 426.210538 0.0
...
17 2019-07-27 00:00:00 0.000000 0.000000 0.0
18 2019-07-27 01:00:00 0.000000 0.000000 0.0
19 2019-07-27 02:00:00 0.000000 0.000000 0.0
20 2019-07-27 03:00:00 0.000000 0.000000 0.0
The second is like this:
Time Stamp Qty Compl
0 2019-07-26 150
1 2019-07-27 20
2 2019-07-29 230
3 2019-07-30 230
4 2019-07-31 170
Both Time Stamp columns are datetime64[ns]. I wanted to merge left, and forward fill the date into all the other rows for a day. My problem is that at the merge, the Qty Compl from the second df is applied at midnight of each day, and some days do not have a midnight time stamp, such as the first day in the first dataframe.
Is there a way to merge and match every row that contains the same day? The desired output would look like this:
Time Stamp HP_1H_mean Coolant1_1H_mean Extreme_1H_mean Qty Compl
0 2019-07-26 07:00:00 410.637966 414.607081 0.0 150
1 2019-07-26 08:00:00 403.521735 424.787366 0.0 150
2 2019-07-26 09:00:00 403.143925 425.739639 0.0 150
3 2019-07-26 10:00:00 410.542895 426.210538 0.0 150
...
17 2019-07-27 00:00:00 0.000000 0.000000 0.0 20
18 2019-07-27 01:00:00 0.000000 0.000000 0.0 20
19 2019-07-27 02:00:00 0.000000 0.000000 0.0 20
20 2019-07-27 03:00:00 0.000000 0.000000 0.0 20
Use merge_asof with both DataFrames sorted by datetimes:
#if necessary
df1['Time Stamp'] = pd.to_datetime(df1['Time Stamp'])
df2['Time Stamp'] = pd.to_datetime(df2['Time Stamp'])
df1 = df1.sort_values('Time Stamp')
df2 = df2.sort_values('Time Stamp')
df = pd.merge_asof(df1, df2, on='Time Stamp')
print (df)
Time Stamp HP_1H_mean Coolant1_1H_mean Extreme_1H_mean \
0 2019-07-26 07:00:00 410.637966 414.607081 0.0
1 2019-07-26 08:00:00 403.521735 424.787366 0.0
2 2019-07-26 09:00:00 403.143925 425.739639 0.0
3 2019-07-26 10:00:00 410.542895 426.210538 0.0
4 2019-07-27 00:00:00 0.000000 0.000000 0.0
5 2019-07-27 01:00:00 0.000000 0.000000 0.0
6 2019-07-27 02:00:00 0.000000 0.000000 0.0
7 2019-07-27 03:00:00 0.000000 0.000000 0.0
Qty Compl
0 150
1 150
2 150
3 150
4 20
5 20
6 20
7 20
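An alternative sketch (not from the answer above) that matches rows on the calendar day explicitly rather than on the nearest earlier timestamp; it assumes the second frame has at most one row per day:
# align each hourly timestamp with its calendar day before merging
df = (df1.assign(day=df1['Time Stamp'].dt.normalize())
         .merge(df2.rename(columns={'Time Stamp': 'day'}), on='day', how='left')
         .drop(columns='day'))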

Pandas datetime resample count non-zero

I have a time series of daily rainfall data that looks like this:
PRCP
year_month_day
1797-01-01 00:00:00 0.0
1797-01-02 00:00:00 0.0
1797-01-03 00:00:00 1.1
1797-01-04 00:00:00 0.0
1797-01-05 00:00:00 3.5
1797-02-01 00:00:00 8.1
1797-02-02 00:00:00 3.0
1797-02-03 00:00:00 0.0
1797-02-04 00:00:00 0.0
1797-02-05 00:00:00 0.0
1797-03-01 00:00:00 0.0
1797-03-02 00:00:00 0.0
1797-03-03 00:00:00 0.0
1797-03-04 00:00:00 0.0
1797-03-05 00:00:00 1.5
1797-04-01 00:00:00 6.3
1797-04-02 00:00:00 24.0
1797-04-03 00:00:00 0.0
1797-04-04 00:00:00 2.2
1797-04-05 00:00:00 5.9
1797-05-01 00:00:00 0.0
1797-05-02 00:00:00 15.9
1797-05-03 00:00:00 0.0
1797-05-04 00:00:00 0.0
1797-05-05 00:00:00 0.0
1797-06-01 00:00:00 1.6
1797-06-02 00:00:00 0.0
1797-06-03 00:00:00 0.0
1797-06-04 00:00:00 7.9
1797-06-05 00:00:00 0.0
I have been able to import it with the index column as a pandas datetime object. I am trying to count all of the non-zero rain days per month. I can group by month with:
grouped = df.groupby(pd.Grouper(freq='M'))
and can count everything per month with:
raindays = grouped.resample("M").count()
But that also counts days with 0 rainfall. I found hints about using nunique(), but it doesn't seem to work with resample, e.g.:
raindays = grouped.resample("M").nunique()
returns error:
AttributeError: 'DataFrameGroupBy' object has no attribute 'nunique'
Is there a way to count non zero values in a grouped pandas object?
Mask those 0s and try again.
df.mask(df.PRCP.eq(0)).groupby(pd.Grouper(freq='M')).count()
Or, the more obvious version with replace.
import numpy as np

df.replace({0: np.nan}).groupby(pd.Grouper(freq='M')).count()
PRCP
year_month_day
1797-01-31 2
1797-02-28 2
1797-03-31 1
1797-04-30 4
1797-05-31 1
1797-06-30 2
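A rough alternative sketch (not part of this answer): counting non-zero days is the same as summing a boolean mask, so a plain resample also works:
# True on rain days, summed per month
raindays = df['PRCP'].ne(0).resample('M').sum()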
Using factorize and bincount
f, u = pd.factorize(df.index + pd.offsets.MonthEnd(0))
pd.Series(np.bincount(f, df.PRCP.values != 0).astype(int), u)
1797-01-31 2
1797-02-28 2
1797-03-31 1
1797-04-30 4
1797-05-31 1
1797-06-30 2
dtype: int64

Pandas how to outer merge on datetime column correctly

I have two dataframes:
resetted.head()
WeightedSentiment Popularity Datetime
0 0 2 2012-11-22 11:00:00
1 0 2 2012-11-22 11:30:00
2 0 4 2012-11-22 12:00:00
3 0 2 2012-11-22 15:00:00
4 0 2 2012-11-22 15:30:00
prices.head()
Open High Low Close Volume Datetime
46623 236.9392 238.6095 236.5392 238.2094 315177 2012-11-23 10:00:00
46624 238.1894 238.3095 236.7492 237.4993 122132 2012-11-23 10:30:00
46625 237.4793 238.2595 237.1393 238.2094 144457 2012-11-23 11:00:00
46626 238.2094 238.9196 238.1694 238.7695 131733 2012-11-23 11:30:00
46627 238.7695 239.1396 237.9394 238.9496 150386 2012-11-23 12:00:00
And I tried to outer join these two dataframes by using
pd.merge(prices,resetted,how='outer',on='Datetime')
The result is very strange and seems wrong:
Open High Low Close Volume Datetime WeightedSentiment Popularity
0 236.9392 238.6095 236.5392 238.2094 315177.0 2012-11-23 10:00:00 0.0 20.0
1 238.1894 238.3095 236.7492 237.4993 122132.0 2012-11-23 10:30:00 0.0 12.0
2 237.4793 238.2595 237.1393 238.2094 144457.0 2012-11-23 11:00:00 0.0 12.0
3 238.2094 238.9196 238.1694 238.7695 131733.0 2012-11-23 11:30:00 0.0 2.0
4 238.7695 239.1396 237.9394 238.9496 150386.0 2012-11-23 12:00:00 0.0 12.0
5 238.7995 242.0301 238.0394 241.5900 1183601.0 2012-11-23 12:30:00 0.0 16.0
If I swap the two dataframes' positions in the merge function, there are NaNs at the head as expected, but the other rows are wrong. I have set up a demo notebook on GitHub.
I'm on pandas 0.21.0
