Another way to use downsampling in pandas - python

Let's look at some one-minute data:
In [513]: rng = pd.date_range('1/1/2000', periods=12, freq='T')
In [514]: ts = Series(np.arange(12), index=rng)
In [515]: ts
Out[515]:
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
2000-01-01 00:09:00 9
2000-01-01 00:10:00 10
2000-01-01 00:11:00 11
Freq: T
Suppose you wanted to aggregate this data into five-minute chunks or bars by taking
the sum of each group:
In [516]: ts.resample('5min', how='sum')
Out[516]:
2000-01-01 00:00:00 0
2000-01-01 00:05:00 15
2000-01-01 00:10:00 40
2000-01-01 00:15:00 11
Freq: 5T
However, I don't want to use the resample method but still want the same output. How can I do this with groupby, reindex, or other such methods?

You can use a custom pd.Grouper this way:
In [78]: ts.groupby(pd.Grouper(freq='5min', closed='right')).sum()
Out[78]:
1999-12-31 23:55:00 0
2000-01-01 00:00:00 15
2000-01-01 00:05:00 40
2000-01-01 00:10:00 11
Freq: 5T, dtype: int64
The closed='right' reproduces the same bin sums; the index labels differ, though, because pd.Grouper labels each bin by its left edge by default.
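If you also pass label='right', the index labels should line up with the resample output as well (a sketch; assumes a pandas version where pd.Grouper forwards the closed and label arguments, i.e. 0.19 or later):
ts.groupby(pd.Grouper(freq='5min', closed='right', label='right')).sum()
# expected to match the resample output exactly:
# 2000-01-01 00:00:00     0
# 2000-01-01 00:05:00    15
# 2000-01-01 00:10:00    40
# 2000-01-01 00:15:00    11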
However, if your aim is to do more custom grouping, you can use .groupby with your own vector:
In [78]: buckets = (ts.index - ts.index[0]) / pd.Timedelta('5min')
In [79]: grp = ts.groupby(np.ceil(buckets.values))
In [80]: grp.sum()
Out[80]:
0 0
1 15
2 40
3 11
dtype: int64
The output is not exactly the same, but the method is more flexible: it can, for example, create uneven buckets, as in the sketch below.
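A sketch of uneven buckets with pd.cut (the edges are made up for illustration; assumes a pandas version where pd.cut accepts datetime bins):
edges = pd.to_datetime(['2000-01-01 00:00', '2000-01-01 00:02',
                        '2000-01-01 00:07', '2000-01-01 00:12'])
ts.groupby(pd.cut(ts.index, edges, right=False)).sum()
# [2000-01-01 00:00:00, 2000-01-01 00:02:00)     1
# [2000-01-01 00:02:00, 2000-01-01 00:07:00)    20
# [2000-01-01 00:07:00, 2000-01-01 00:12:00)    45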

How to use groupby() with between_time()?

I have a DataFrame and want to multiply all values in column a for a certain day with the value of a at 06:00:00 of that day. If there is no 06:00:00 entry, that day should stay unchanged.
The code below unfortunately gives an error.
How do I correct this code, or replace it with a working solution?
import pandas as pd
import numpy as np
start = pd.Timestamp('2000-01-01')
end = pd.Timestamp('2000-01-03')
t = np.linspace(start.value, end.value, 9)
datetime1 = pd.to_datetime(t)
df = pd.DataFrame( {'a':[1,3,4,5,6,7,8,9,14]})
df['date']= datetime1
print(df)
def myF(x):
    y = x.set_index('date').between_time('05:59', '06:01').a
    return y
toMultiplyWith = df.groupby(df.date.dt.floor('D')).transform(myF)
a date
0 1 2000-01-01 00:00:00
1 3 2000-01-01 06:00:00
2 4 2000-01-01 12:00:00
3 5 2000-01-01 18:00:00
4 6 2000-01-02 00:00:00
5 7 2000-01-02 06:00:00
6 8 2000-01-02 12:00:00
7 9 2000-01-02 18:00:00
8 14 2000-01-03 00:00:00
....
AttributeError: ("'Series' object has no attribute 'set_index'", 'occurred at index a')
You should change this line:
toMultiplyWith = df.groupby(df.date.dt.floor('D')).transform(myF)
to this:
toMultiplyWith = df.groupby(df.date.dt.floor('D')).apply(myF)
Using .apply instead of .transform will give you the desired result.
apply is the right choice here since it implicitly passes all the columns of each group as a DataFrame to the custom function, whereas transform passes each column separately as a Series.
To read more about the difference between the two methods, consider this answer.
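To make the difference concrete, here is a minimal sketch (the frame and function names demo and show are made up for illustration):
demo = pd.DataFrame({'g': ['x', 'x', 'y'], 'a': [1, 2, 3], 'b': [4, 5, 6]})

def show(obj):
    print(type(obj).__name__)   # report what groupby hands to the function
    return obj

demo.groupby('g').transform(show)   # prints "Series" once per column per group
demo.groupby('g').apply(show)       # prints "DataFrame" once per group
That is why myF, which calls set_index('date'), only works under apply: a Series has no set_index method, exactly as the error message says.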
If you want to stick with the between_time(...) function, this would be the way to do it:
df = df.set_index('date')
mask = df.between_time('05:59', '06:01').index
df.loc[mask, 'a'] = df.loc[mask, 'a'] ** 2 # the operation you want to perform
df.reset_index(inplace=True)
Outputs:
date a
0 2000-01-01 00:00:00 1
1 2000-01-01 06:00:00 9
2 2000-01-01 12:00:00 4
3 2000-01-01 18:00:00 5
4 2000-01-02 00:00:00 6
5 2000-01-02 06:00:00 49
6 2000-01-02 12:00:00 8
7 2000-01-02 18:00:00 9
8 2000-01-03 00:00:00 14
If I got your goal right, you can use apply to return a dataframe with the same number of rows as the original dataframe (simulating a transform):
def myF(grp):
    time = grp.date.dt.strftime('%T')
    target_idx = time == '06:00:00'
    if target_idx.any():
        grp.loc[~target_idx, 'a_sum'] = grp.loc[~target_idx, 'a'].values * grp.loc[target_idx, 'a'].values
    else:
        grp.loc[~target_idx, 'a_sum'] = np.nan
    return grp
df.groupby(df.date.dt.floor('D')).apply(myF)
Output:
a date a_sum
0 1 2000-01-01 00:00:00 3.0
1 3 2000-01-01 06:00:00 NaN
2 4 2000-01-01 12:00:00 12.0
3 5 2000-01-01 18:00:00 15.0
4 6 2000-01-02 00:00:00 42.0
5 7 2000-01-02 06:00:00 NaN
6 8 2000-01-02 12:00:00 56.0
7 9 2000-01-02 18:00:00 63.0
8 14 2000-01-03 00:00:00 NaN
Note that, for each day, every value with a time other than 06:00:00 is multiplied by the value at 06:00:00. It returns NaN for the 06:00:00 rows themselves, as well as for days without that time.
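If the original goal (multiply each day's a values by that day's 06:00:00 value, leaving days without such an entry unchanged) is what you are after, a compact variant of the same apply idea could look like this (a sketch; the names scale_by_6am and a_sum-style column a_scaled are made up):
import datetime

def scale_by_6am(grp):
    # the 06:00:00 row of this day's group, if any
    at6 = grp.loc[grp['date'].dt.time == datetime.time(6, 0), 'a']
    if at6.empty:
        return grp['a']                # no 06:00:00 entry: leave the day unchanged
    return grp['a'] * at6.iloc[0]      # scale the whole day by the 06:00:00 value

df['a_scaled'] = df.groupby(df.date.dt.floor('D'), group_keys=False).apply(scale_by_6am)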

How to create a rolling time window with offset in Pandas

I want to apply some statistics on records within a time window with an offset. My data looks something like this:
lon lat stat ... speed course head
ts ...
2016-09-30 22:00:33.272 5.41463 53.173161 15 ... 0.0 0.0 511
2016-09-30 22:01:42.879 5.41459 53.173180 15 ... 0.0 0.0 511
2016-09-30 22:02:42.879 5.41461 53.173161 15 ... 0.0 0.0 511
2016-09-30 22:03:44.051 5.41464 53.173168 15 ... 0.0 0.0 511
2016-09-30 22:04:53.013 5.41462 53.173141 15 ... 0.0 0.0 511
[5 rows x 7 columns]
I need the records within time windows of 600 seconds, with steps of 300 seconds. For example, these windows:
start end
2016-09-30 22:00:00.000 2016-09-30 22:10:00.000
2016-09-30 22:05:00.000 2016-09-30 22:15:00.000
2016-09-30 22:10:00.000 2016-09-30 22:20:00.000
I have looked at Pandas rolling to do this. But it seems like it does not have the option to add the offset which I described above. Am I overlooking something, or should I create a custom function for this?
What you want to achieve should be possible by combining DataFrame.resample with DataFrame.shift.
import pandas as pd
index = pd.date_range('1/1/2000', periods=9, freq='T')
series = pd.Series(range(9), index=index)
df = pd.DataFrame(series)
That will give you a simple time series (the example is taken from the API docs for DataFrame.resample).
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Now resample by your step size (see DataFrame.resample).
sampled = df.resample('90s').sum()
This will give you non-overlapping windows of the step size.
2000-01-01 00:00:00 1
2000-01-01 00:01:30 2
2000-01-01 00:03:00 7
2000-01-01 00:04:30 5
2000-01-01 00:06:00 13
2000-01-01 00:07:30 8
Finally, shift the sampled df by one step and add it to the unshifted version; this works because the window size is exactly twice the step size.
sampled.shift(1, fill_value=0) + sampled
This will yield:
2000-01-01 00:00:00 1
2000-01-01 00:01:30 3
2000-01-01 00:03:00 9
2000-01-01 00:04:30 12
2000-01-01 00:06:00 18
2000-01-01 00:07:30 21
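For the asker's actual numbers (600-second windows every 300 seconds) the same trick generalizes: resample by the step and sum k shifted copies, where k = window / step. A sketch (shifting backwards labels each window by its start; fill_value in shift assumes pandas 0.24 or later):
step_bins = df.resample('300s').sum()    # non-overlapping 300 s bins
k = 2                                    # 600 s window = 2 step-sized bins
windowed = sum(step_bins.shift(-i, fill_value=0) for i in range(k))
# caveat: the last k-1 rows cover incomplete windows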
There may be a more elegant solution, but I hope this helps.

Getting data for given day from pandas Dataframe

I have a dataframe df as below:
date1 item_id
2000-01-01 00:00:00 0
2000-01-01 10:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 12:07:00 7
2000-01-02 00:08:00 8
2000-01-02 00:00:00 0
2000-01-02 00:01:00 1
2000-01-02 03:02:00 2
2000-01-02 00:03:00 3
2000-01-02 00:04:00 4
2000-01-02 00:05:00 5
2000-01-02 04:06:00 6
2000-01-02 00:07:00 7
2000-01-02 00:08:00 8
I need the data for a single day, i.e. 1st Jan 2000. The query below gives me the correct result, but is there a way it can be done just by passing "2000-01-01"?
result= df[(df['date1'] > '2000-01-01 00:00') & (df['date1'] < '2000-01-01 23:59')]
Use partial string indexing, but a DatetimeIndex is needed first:
df = df.set_index('date1')['2000-01-01']
print (df)
item_id
date1
2000-01-01 00:00:00 0
2000-01-01 10:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 12:07:00 7
Another solution is convert datetimes to strings by strftime and filter by boolean indexing:
df = df[df['date1'].dt.strftime('%Y-%m-%d') == '2000-01-01']
print (df)
date1 item_id
0 2000-01-01 00:00:00 0
1 2000-01-01 10:01:00 1
2 2000-01-01 00:02:00 2
3 2000-01-01 00:03:00 3
4 2000-01-01 00:04:00 4
5 2000-01-01 00:05:00 5
6 2000-01-01 00:06:00 6
7 2000-01-01 12:07:00 7
The other alternative would be to create a mask:
df[df.date1.dt.date.astype(str) == '2000-01-01']
Full example:
import pandas as pd
data = '''\
date1 item_id
2000-01-01T00:00:00 0
2000-01-01T10:01:00 1
2000-01-01T00:02:00 2
2000-01-01T00:03:00 3
2000-01-01T00:04:00 4
2000-01-01T00:05:00 5
2000-01-01T00:06:00 6
2000-01-01T12:07:00 7
2000-01-02T00:08:00 8
2000-01-02T00:00:00 0
2000-01-02T00:01:00 1
2000-01-02T03:02:00 2'''
df = pd.read_csv(pd.compat.StringIO(data), sep='\s+', parse_dates=['date1'])
res = df[df.date1.dt.date.astype(str) == '2000-01-01']
print(res)
Returns:
date1 item_id
0 2000-01-01 00:00:00 0
1 2000-01-01 10:01:00 1
2 2000-01-01 00:02:00 2
3 2000-01-01 00:03:00 3
4 2000-01-01 00:04:00 4
5 2000-01-01 00:05:00 5
6 2000-01-01 00:06:00 6
7 2000-01-01 12:07:00 7
Or
import datetime
df[df.date1.dt.date == datetime.date(2000,1,1)]
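Yet another option, as a sketch (assumes pandas 0.23+ for Series.dt.normalize): normalize() floors every timestamp to midnight, so the day can be compared against the plain date string directly:
res = df[df['date1'].dt.normalize() == '2000-01-01']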

merging pandas DataFrames after resampling

I have a DataFrame with a datetime index.
import pandas as pd
import numpy as np

df1 = pd.DataFrame(index=pd.date_range('20100201', periods=24, freq='8h3min'),
                   data=np.random.rand(24), columns=['Rubbish'])
df1.index = df1.index.to_datetime()
I want to resample this DataFrame, as in :
df1=df1.resample('7D').agg(np.median)
Then I have another DataFrame, whose index has a different frequency and starts at a different hour:
df2=pd.DataFrame(index=pd.date_range('20100205', periods=24, freq='6h3min'),
data=np.random.rand(24),columns=['Rubbish'])
df2.index=df2.index.to_datetime()
df2=df2.resample('7D').agg(np.median)
The operations work well independently, but when I try to merge the results using
print(pd.merge(df1,df2,right_index=True,left_index=True,how='outer'))
I get:
Rubbish_x Rubbish_y
2010-02-01 0.585986 NaN
2010-02-05 NaN 0.423316
2010-02-08 0.767499 NaN
Instead, I would like to resample both with the same offset and get the following result after the merge:
Rubbish_x Rubbish_y
2010-02-01 AVALUE AVALUE
2010-02-08 AVALUE AVALUE
I have tried the following, but it only generates NaNs:
df2.reindex(df1.index)
print(pd.merge(df1,df2,right_index=True,left_index=True,how='outer'))
I have to stick to pandas 0.20.1.
I have tried merge_asof:
df1.index
Out[48]: Index([2015-03-24, 2015-03-31, 2015-04-07, 2015-04-14, 2015-04-21, 2015-04-28], dtype='object')
df2.index
Out[49]: Index([2015-03-24, 2015-03-31, 2015-04-07, 2015-04-14, 2015-04-21, 2015-04-28], dtype='object')
output=pd.merge_asof(df1,df2,left_index=True,right_index=True)
but it crashes with following traceback
Traceback (most recent call last):
TypeError: 'NoneType' object is not callable
I believe you need merge_asof:
print(pd.merge_asof(df1,df2,right_index=True,left_index=True))
Rubbish_x Rubbish_y
2010-02-01 0.446505 NaN
2010-02-08 0.474330 0.606826
Or pass method='nearest' to reindex:
df2 = df2.reindex(df1.index, method='nearest')
print (df2)
Rubbish
2010-02-01 0.415248
2010-02-08 0.415248
print(pd.merge(df1,df2,right_index=True,left_index=True,how='outer'))
Rubbish_x Rubbish_y
2010-02-01 0.431966 0.415248
2010-02-08 0.279121 0.415248
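If nearest-matching is too permissive for your data, reindex also accepts a tolerance (available in pandas 0.20.1, so it fits your constraint); in this sketch the 3-day tolerance is an assumption you should adapt:
df2_aligned = df2.reindex(df1.index, method='nearest', tolerance=pd.Timedelta('3D'))
print(pd.merge(df1, df2_aligned, right_index=True, left_index=True, how='outer'))
# labels farther than 3 days from any df1 label become NaN instead of snapping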
I think the following code would achieve your task.
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.resample.html

how to understand closed and label arguments in pandas resample method?

Based on the pandas documentation from here: Docs
And the examples:
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
After resampling:
>>> series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
In my thoughts, the bins should look like this after resampling:
=========bin 01=========
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
=========bin 02=========
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
=========bin 03=========
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Am I right up to this step?
So after .sum I thought it should be like this:
2000-01-01 00:02:00 3
2000-01-01 00:05:00 12
2000-01-01 00:08:00 21
I just do not understand how it comes out:
2000-01-01 00:00:00 0
(because label='right', 2000-01-01 00:00:00 cannot be any right edge of any bins in this case).
2000-01-01 00:09:00 15
(the label 2000-01-01 00:09:00 does not even exist in the original Series).
Short answer: if you use label='left', closed='left', and loffset='2T', then you'll get what you expected:
series.resample('3T', label='left', closed='left', loffset='2T').sum()
2000-01-01 00:02:00 3
2000-01-01 00:05:00 12
2000-01-01 00:08:00 21
Long answer (or: why the results you got were correct, given the arguments you used): this may not be clear from the documentation, but open and closed in this setting are about strict vs. non-strict inequality (e.g. < vs. <=).
An example should make this clear. Using an interior interval from your example, this is the difference from changing the value of closed:
closed='right' => ( 3:00, 6:00 ] or 3:00 < x <= 6:00
closed='left' => [ 3:00, 6:00 ) or 3:00 <= x < 6:00
You can find an explanation of the interval notation (parentheses vs brackets) in many places like here, for example:
https://en.wikipedia.org/wiki/Interval_(mathematics)
The label parameter merely controls whether the left (3:00) or right (6:00) side is displayed, but doesn't impact the results themselves.
Also note that you can change the starting point for the intervals with the loffset parameter (which should be entered as a time delta).
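For instance, a minimal illustration of loffset (the sums are unchanged; only the labels move):
series.resample('3T', label='right', closed='right', loffset='1T').sum()
# 2000-01-01 00:01:00     0
# 2000-01-01 00:04:00     6
# 2000-01-01 00:07:00    15
# 2000-01-01 00:10:00    15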
Back to the example, where we change only the labeling from 'right' to 'left':
series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
series.resample('3T', label='left', closed='right').sum()
1999-12-31 23:57:00 0
2000-01-01 00:00:00 6
2000-01-01 00:03:00 15
2000-01-01 00:06:00 15
As you can see, the results are the same, only the index label changes. Pandas only lets you display the right or left label, but if it showed both, then it would look like this (below I'm using standard index notation where ( on the left side means open and ] on the right side means closed):
( 1999-12-31 23:57:00, 2000-01-01 00:00:00 ] 0 # = 0
( 2000-01-01 00:00:00, 2000-01-01 00:03:00 ] 6 # = 1+2+3
( 2000-01-01 00:03:00, 2000-01-01 00:06:00 ] 15 # = 4+5+6
( 2000-01-01 00:06:00, 2000-01-01 00:09:00 ] 15 # = 7+8
Note that the first bin (23:57:00,00:00:00] is not empty, it's just that it contains a single row and the value in that single row is zero. If you change 'sum' to 'count' this becomes more obvious:
series.resample('3T', label='left', closed='right').count()
1999-12-31 23:57:00 1
2000-01-01 00:00:00 3
2000-01-01 00:03:00 3
2000-01-01 00:06:00 2
Per JohnE's answer I put together a helpful little infographic which should settle this issue once and for all (the image is not reproduced here).
It is important to understand that resampling first produces a raster, which is a sequence of instants (not periods, intervals, or durations). This is done independently of the 'label' and 'closed' parameters, using only 'freq' and 'loffset'. In your case, the system will produce the following raster:
2000-01-01 00:00:00
2000-01-01 00:03:00
2000-01-01 00:06:00
2000-01-01 00:09:00
Note again that at this moment there is no interpretation in terms of intervals or periods. You can shift it using 'loffset'.
Then the system will use the 'closed' parameter in order to choose between two options:
(start, end]
[start, end)
Here start and end are two adjacent time stamps in the raster. The 'label' parameter is used to choose whether start or end is used as the representative of the interval.
In your example, if you choose closed='right' then you will get the following intervals:
( previous_interval , 2000-01-01 00:00:00] - {0}
(2000-01-01 00:00:00, 2000-01-01 00:03:00] - {1,2,3}
(2000-01-01 00:03:00, 2000-01-01 00:06:00] - {4,5,6}
(2000-01-01 00:06:00, 2000-01-01 00:09:00] - {7,8}
Note that after you aggregate the values over these intervals, the result is displayed in two versions depending on the 'label' parameter, that is, whether one and the same interval is represented by its left or right time stamp.
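A quick way to convince yourself that label only renames one and the same set of bins (a sketch reusing the series from above):
left = series.resample('3T', closed='right', label='left').sum()
right = series.resample('3T', closed='right', label='right').sum()
print((left.values == right.values).all())   # True: identical sums, only labels differ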
I now realize how it works, but the strange thing is still why the additional timestamp is added on the right side, which is somewhat counter-intuitive. I guess this is similar to how range or iloc work.
