I have a list of nodes (about 2300 of them) that have hourly price data for about a year. I have a script that, for each node, loops through the times of the day to create a 4-hour trailing average, then groups the averages by month and hour. Finally, these hours in a month are averaged to give, for each month, a typical day of prices. I'm wondering if there is a faster way to do this because what I have seems to take a significant amount of time (about an hour). I also save the dataframes as csv files for later visualization (that's not the slow part).
df (before anything is done to it)
Price_Node_Name Local_Datetime_HourEnding Price Irrelevant_column
0 My-node 2016-08-17 01:00:00 20.95 EST
1 My-node 2016-08-17 02:00:00 21.45 EST
2 My-node 2016-08-17 03:00:00 25.60 EST
df_node (after the groupby as it looks going to csv)
Month Hour MA
1 0 23.55
1 1 23.45
1 2 21.63
for node in node_names:
    df_node = df[df['Price_Node_Name'] == node]
    df_node['MA'] = df_node['Price'].rolling(4).mean()
    df_node = df_node.groupby([df_node['Local_Datetime_HourEnding'].dt.month,
                               df_node['Local_Datetime_HourEnding'].dt.hour]).mean()
    df_node.to_csv('%s_rollingavg.csv' % node)
I get a SettingWithCopyWarning, but I haven't quite figured out how to use .loc here, since the ['MA'] column doesn't exist until I create it in this snippet, and any way I can think of to create it beforehand and fill it seems slower than what I have. Could be totally wrong though. Any help would be great.
python 3.6
edit: I might have misread the question here, hopefully this at least sparks some ideas for the solution.
I think it is useful to use the datetime column as the index when working with time series data in Pandas.
Here is some sample data:
Out[3]:
price
date
2015-01-14 00:00:00 155.427361
2015-01-14 01:00:00 205.285202
2015-01-14 02:00:00 205.305021
2015-01-14 03:00:00 195.000000
2015-01-14 04:00:00 213.102000
2015-01-14 05:00:00 214.500000
2015-01-14 06:00:00 222.544375
2015-01-14 07:00:00 227.090251
2015-01-14 08:00:00 227.700000
2015-01-14 09:00:00 243.456190
We use Series.rolling to create an MA column, i.e. we apply the method to the price column, with a two-period window, and call mean on the resulting rolling object:
In [4]: df['MA'] = df.price.rolling(window=2).mean()
In [5]: df
Out[5]:
price MA
date
2015-01-14 00:00:00 155.427361 NaN
2015-01-14 01:00:00 205.285202 180.356281
2015-01-14 02:00:00 205.305021 205.295111
2015-01-14 03:00:00 195.000000 200.152510
2015-01-14 04:00:00 213.102000 204.051000
2015-01-14 05:00:00 214.500000 213.801000
2015-01-14 06:00:00 222.544375 218.522187
2015-01-14 07:00:00 227.090251 224.817313
2015-01-14 08:00:00 227.700000 227.395125
2015-01-14 09:00:00 243.456190 235.578095
And if you want month and hour columns, you can extract those from the index:
In [7]: df['month'] = df.index.month
In [8]: df['hour'] = df.index.hour
In [9]: df
Out[9]:
price MA month hour
date
2015-01-14 00:00:00 155.427361 NaN 1 0
2015-01-14 01:00:00 205.285202 180.356281 1 1
2015-01-14 02:00:00 205.305021 205.295111 1 2
2015-01-14 03:00:00 195.000000 200.152510 1 3
2015-01-14 04:00:00 213.102000 204.051000 1 4
2015-01-14 05:00:00 214.500000 213.801000 1 5
2015-01-14 06:00:00 222.544375 218.522187 1 6
2015-01-14 07:00:00 227.090251 224.817313 1 7
2015-01-14 08:00:00 227.700000 227.395125 1 8
2015-01-14 09:00:00 243.456190 235.578095 1 9
Then we can use groupby:
In [11]: df.groupby([
...: df['month'],
...: df['hour']
...: ]).mean()[['MA']]
Out[11]:
MA
month hour
1 0 NaN
1 180.356281
2 205.295111
3 200.152510
4 204.051000
5 213.801000
6 218.522187
7 224.817313
8 227.395125
9 235.578095
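As an aside, the month and hour helper columns aren't strictly required; grouping directly on the index attributes should give the same table in one expression:
df.groupby([df.index.month, df.index.hour])[['MA']].mean()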
Here are a few things to try:
Set 'Price_Node_Name' as the index before the loop, and select each node's rows with .loc (plain df[node] would look for a column of that name):
df.set_index('Price_Node_Name', inplace=True)
for node in node_names:
    df_node = df.loc[node]
Use sort=False as a kwarg in the groupby:
df_node.groupby(..., sort=False).mean()
Perform the rolling average AFTER the groupby, or don't do it at all; I don't think you need it in your case. Averaging the hourly values for a month will give you the expected prices for a typical day, which is what you want. If you still want the rolling average, perform it on the averaged hourly values for each month. A loop-free sketch combining these ideas follows.
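Putting the pieces together, here is a minimal sketch of a loop-free version of the original snippet. Column names are taken from the question; it assumes each node's rows are already in time order, and it sidesteps the SettingWithCopyWarning because nothing is assigned into a filtered slice:
import pandas as pd

# 4-hour trailing average per node, computed in a single pass
df['MA'] = (df.groupby('Price_Node_Name')['Price']
              .rolling(4).mean()
              .reset_index(level=0, drop=True))

# average by node, month and hour in one groupby
dt = df['Local_Datetime_HourEnding']
result = df.groupby(['Price_Node_Name',
                     dt.dt.month.rename('Month'),
                     dt.dt.hour.rename('Hour')])['MA'].mean()

# one csv per node, as before
for node in result.index.get_level_values(0).unique():
    result.loc[node].to_csv('%s_rollingavg.csv' % node)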
Related
I have the following table:
Hora_Retiro count_uses
0 00:00:18 1
1 00:00:34 1
2 00:02:27 1
3 00:03:13 1
4 00:06:45 1
... ... ...
748700 23:58:47 1
748701 23:58:49 1
748702 23:59:11 1
748703 23:59:47 1
748704 23:59:56 1
And I want to group all values within each hour, so I can see the total number of uses per hour (00:00:00 - 23:00:00)
I have the following code:
hora_pico_aug= hora_pico.groupby(pd.Grouper(key="Hora_Retiro",freq='H')).count()
Hora_Retiro column is of timedelta64[ns] type
Which gives the following output:
count_uses
Hora_Retiro
00:00:02 2566
01:00:02 602
02:00:02 295
03:00:02 5
04:00:02 10
05:00:02 4002
06:00:02 16075
07:00:02 39410
08:00:02 76272
09:00:02 56721
10:00:02 36036
11:00:02 32011
12:00:02 33725
13:00:02 41032
14:00:02 50747
15:00:02 50338
16:00:02 42347
17:00:02 54674
18:00:02 76056
19:00:02 57958
20:00:02 34286
21:00:02 22509
22:00:02 13894
23:00:02 7134
However, the index column starts at 00:00:02, and I want it to start at 00:00:00 and then go in one-hour intervals. Something like this:
count_uses
Hora_Retiro
00:00:00 2565
01:00:00 603
02:00:00 295
03:00:00 5
04:00:00 10
05:00:00 4002
06:00:00 16075
07:00:00 39410
08:00:00 76272
09:00:00 56721
10:00:00 36036
11:00:00 32011
12:00:00 33725
13:00:00 41032
14:00:00 50747
15:00:00 50338
16:00:00 42347
17:00:00 54674
18:00:00 76056
19:00:00 57958
20:00:00 34286
21:00:00 22509
22:00:00 13894
23:00:00 7134
How can I make it start at 00:00:00?
Thanks for the help!
You can create an hour column from the Hora_Retiro column:
df['hour'] = df['Hora_Retiro'].dt.hour
Then group by the hour column:
gpby_df = df.groupby('hour')['count_uses'].sum().reset_index()
gpby_df['hour'] = pd.to_datetime(gpby_df['hour'], format='%H').dt.time
gpby_df.columns = ['Hora_Retiro', 'sum_count_uses']
gpby_df
gives
Hora_Retiro sum_count_uses
0 00:00:00 14
1 09:00:00 1
2 10:00:00 2
3 20:00:00 2
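One caveat on the above: the question says Hora_Retiro is of timedelta64[ns] type, and .dt.hour only exists for datetime-like columns. For a timedelta column, a sketch using the components accessor instead:
df['hour'] = df['Hora_Retiro'].dt.components.hours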
I assume that the Hora_Retiro column in your DataFrame is of Timedelta type, not datetime; if it were datetime, the date part would also be printed.
Indeed, your code creates groups starting at the minute/second taken from the first row.
To group by "full hours":
round each element in this column to hour,
then group (just by this rounded value).
The code to do it is:
hora_pico.groupby(hora_pico.Hora_Retiro.apply(
lambda tt: tt.round('H'))).count_uses.count()
However, I advise you to decide what you want to count: rows, or the values in the count_uses column.
In the second case, replace count with sum, as shown below.
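For the second case, the line would become:
hora_pico.groupby(hora_pico.Hora_Retiro.dt.floor('H')).count_uses.sum()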
I have a .csv file with some data. There is only one column in this file, which contains timestamps. I need to organize this data into bins of 30 minutes. This is what my data looks like:
Timestamp
04/01/2019 11:03
05/01/2019 16:30
06/01/2019 13:19
08/01/2019 13:53
09/01/2019 13:43
So in this case, the last two data points would be grouped together in the bin that includes all the data from 13:30 to 14:00.
This is what I have already tried:
df = pd.read_csv('book.csv')
df['Timestamp'] = pd.to_datetime(df.Timestamp)
df.groupby(pd.Grouper(key='Timestamp',
freq='30min')).count().dropna()
I am getting around 7000 rows showing all hours for all days with the count next to them, like this:
2019-09-01 03:00:00 0
2019-09-01 03:30:00 0
2019-09-01 04:00:00 0
...
I want to create bins for only the hours that I have in my dataset. I want to see something like this:
Time Count
11:00:00 1
13:00:00 1
13:30:00 2 (we have two data points in this interval)
16:30:00 1
Thanks in advance!
Use groupby.size:
df['Timestamp'] = pd.to_datetime(df['Timestamp'])
df = df.Timestamp.dt.floor('30min').dt.time.to_frame()\
.groupby('Timestamp').size()\
.reset_index(name='Count')
Or, as per the suggestion by jpp:
df = df.Timestamp.dt.floor('30min').dt.time.value_counts().reset_index(name='Count')
print(df)
Timestamp Count
0 11:00:00 1
1 13:00:00 1
2 13:30:00 2
3 16:30:00 1
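One detail: value_counts orders by frequency rather than by time, so to reproduce the time-ordered table above you would likely add a sort_index to the chain (a sketch):
df = df.Timestamp.dt.floor('30min').dt.time.value_counts().sort_index().reset_index(name='Count')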
I have data that I wish to group by week.
I have been able to do this using the following
Data_Frame.groupby([pd.Grouper(freq='W')]).count()
this creates a dataframe in the form of
2018-01-07 ...
2018-01-14 ...
2018-01-21 ...
which is great. However, I need it to start at 06:00, so something like:
2018-01-07 06:00:00 ...
2018-01-14 06:00:00 ...
2018-01-21 06:00:00 ...
I am aware that I could shift my data by 6 hours, but this seems like a cheat, and I'm pretty sure Grouper comes with the functionality to do this (some way of specifying when it should start grouping).
I was hoping someone might know of a good method of doing this.
Many Thanks
edit:
I'm trying to use Python's built-in functionality more, since it often works much better and more consistently. I also turn the data itself into a graph with the timestamps as the y column, and I would want the timestamps to actually reflect the data, without some method such as shifting everything by 6 hours, grouping it, and then shifting everything back 6 hours to get the right timestamp.
Use double shift:
np.random.seed(456)
idx = pd.date_range(start = '2018-01-07', end = '2018-01-09', freq = '2H')
df = pd.DataFrame({'a':np.random.randint(10, size=25)}, index=idx)
print (df)
a
2018-01-07 00:00:00 5
2018-01-07 02:00:00 9
2018-01-07 04:00:00 4
2018-01-07 06:00:00 5
2018-01-07 08:00:00 7
2018-01-07 10:00:00 1
2018-01-07 12:00:00 8
2018-01-07 14:00:00 3
2018-01-07 16:00:00 5
2018-01-07 18:00:00 2
2018-01-07 20:00:00 4
2018-01-07 22:00:00 2
2018-01-08 00:00:00 2
2018-01-08 02:00:00 8
2018-01-08 04:00:00 4
2018-01-08 06:00:00 8
2018-01-08 08:00:00 5
2018-01-08 10:00:00 6
2018-01-08 12:00:00 0
2018-01-08 14:00:00 9
2018-01-08 16:00:00 8
2018-01-08 18:00:00 2
2018-01-08 20:00:00 3
2018-01-08 22:00:00 6
2018-01-09 00:00:00 7
# freq='D' for an easy check; in the original, use 'W'
df1 = df.shift(-6, freq='H').groupby([pd.Grouper(freq='D')]).count().shift(6, freq='H')
print (df1)
a
2018-01-06 06:00:00 3
2018-01-07 06:00:00 12
2018-01-08 06:00:00 10
So to solve this one needs to use the base parameter for Grouper.
However, the caveat is that base is expressed in the same time period (years, months, days, etc.) as freq, from what I can tell.
So, as I want to displace the starting position by 6 hours, my freq needs to be in hours rather than weeks (i.e. 1W = 168H).
So the solution I was looking for was
Data_Frame.groupby([pd.Grouper(freq='168H', base = 6)]).count()
This is simple, short, quick and works exactly as I want it to.
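As an aside, newer pandas versions deprecate base in favour of offset, so (assuming pandas >= 1.1) the equivalent should be:
Data_Frame.groupby([pd.Grouper(freq='168H', offset='6H')]).count()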
Thanks to all the other answers though
I would create another column with the required dates, and group by it:
import pandas as pd
import numpy as np
selected_datetime = pd.date_range(start = '2018-01-07', end = '2018-01-30', freq = '1H')
df = pd.DataFrame(selected_datetime, columns = ['date'])
df['value1'] = np.random.rand(df.shape[0])
# specify the condition for your date, e.g. starting from 6 am
df['shift1'] = df['date'].apply(lambda x: x.date() if x.hour == 6 else np.nan)
# forward fill the NaN values so each row carries the last seen date
df['shift1'] = df['shift1'].fillna(method = 'ffill')
# you can groupby on this col
df.groupby('shift1')['value1'].mean()
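A vectorized sketch of the same idea, avoiding the apply: where keeps only the 06:00 rows and masks the rest, and ffill then carries the last date forward.
# keep the date only on rows whose hour is 6, then forward fill
df['shift1'] = df['date'].where(df['date'].dt.hour == 6).dt.date.ffill()
df.groupby('shift1')['value1'].mean()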
I have a series of floats with a datetimeindex that I have resampled into bins of 3 hours. As such I have an index containing
2015-01-01 09:00:00
2015-01-01 12:00:00
2015-01-01 15:00:00
2015-01-01 18:00:00
2015-01-01 21:00:00
2015-01-02 00:00:00
2015-01-02 03:00:00
2015-01-02 06:00:00
2015-01-02 09:00:00
and so forth. I am trying to sum the floats associated with each time of day, say 09:00:00, for all days.
The only way I can think to do it, with my limited experience, is to convert this series to a dataframe using the datetime index as another column, then iterating to check whether the hour slots of the datetimes are equal to one another and summing the values. I feel like this is horribly inefficient and probably not the 'correct' way to do this. Any help would be appreciated!
IIUC:
In [116]: s
Out[116]:
2015-01-01 09:00:00 3
2015-01-01 12:00:00 1
2015-01-01 15:00:00 0
2015-01-01 18:00:00 1
2015-01-01 21:00:00 0
2015-01-02 00:00:00 9
2015-01-02 03:00:00 2
2015-01-02 06:00:00 2
2015-01-02 09:00:00 7
2015-01-02 12:00:00 8
Freq: 3H, Name: val, dtype: int32
In [117]: s.groupby(s.index - s.index.normalize()).sum()
Out[117]:
00:00:00 9
03:00:00 2
06:00:00 2
09:00:00 10
12:00:00 9
15:00:00 0
18:00:00 1
21:00:00 0
Name: val, dtype: int32
or:
In [118]: s.groupby(s.index.hour).sum()
Out[118]:
0 9
3 2
6 2
9 10
12 9
15 0
18 1
21 0
Name: val, dtype: int32
I have a dataframe whose index is a timestamp in the 'YYYY-MM-DD HH:MM:SS' format.
Now I want to divide this data frame into two parts:
one is the data with times before 12 pm ('YYYY-MM-DD 12:00:00') each day,
the other is the data with times after 12 pm each day.
I've been stuck on this question for several days. Any suggestions?
Thank you.
If you have a DatetimeIndex (and if you don't, df.index = pd.to_datetime(df.index) should work to get one), then you can access .hour, e.g. df.index.hour, and select using that:
>>> df.head()
A
2015-01-01 00:00:00 0
2015-01-01 01:00:00 1
2015-01-01 02:00:00 2
2015-01-01 03:00:00 3
2015-01-01 04:00:00 4
>>> morning = df[df.index.hour < 12]
>>> afternoon = df[df.index.hour >= 12]
>>> morning.head()
A
2015-01-01 00:00:00 0
2015-01-01 01:00:00 1
2015-01-01 02:00:00 2
2015-01-01 03:00:00 3
2015-01-01 04:00:00 4
>>> afternoon.head()
A
2015-01-01 12:00:00 12
2015-01-01 13:00:00 13
2015-01-01 14:00:00 14
2015-01-01 15:00:00 15
2015-01-01 16:00:00 16
You could also use groupby, e.g. df.groupby(df.index.hour < 12), but that seems like overkill here. If you wanted a more complex division that might be the way to go, though.
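For completeness, a sketch of that groupby variant; the True group holds the morning rows because the key is df.index.hour < 12:
parts = dict(list(df.groupby(df.index.hour < 12)))
morning, afternoon = parts[True], parts[False]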