Based on the pandas documentation for resample and the following examples:
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
After resampling:
>>> series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
In my mind, the bins should look like this after resampling:
=========bin 01=========
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
=========bin 02=========
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
=========bin 03=========
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Am I right about this step?
So after .sum() I thought it should look like this:
2000-01-01 00:02:00 3
2000-01-01 00:05:00 12
2000-01-01 00:08:00 21
I just do not understand how these come out:
2000-01-01 00:00:00 0
(because with label='right', 2000-01-01 00:00:00 cannot be the right edge of any bin in this case), and
2000-01-01 00:09:00 15
(the label 2000-01-01 00:09:00 does not even exist in the original Series).
Short answer: If you use closed='left' and loffset='2T' then you'll get what you expected:
series.resample('3T', label='left', closed='left', loffset='2T').sum()
2000-01-01 00:02:00 3
2000-01-01 00:05:00 12
2000-01-01 00:08:00 21
Long answer: (or why the results you got were correct, given the arguments you used) This may not be clear from the documentation, but open and closed in this setting are about strict vs. non-strict inequality (e.g. < vs. <=).
An example should make this clear. Using an interior interval from your example, this is the difference from changing the value of closed:
closed='right' => ( 3:00, 6:00 ] or 3:00 < x <= 6:00
closed='left' => [ 3:00, 6:00 ) or 3:00 <= x < 6:00
You can find an explanation of the interval notation (parentheses vs. brackets) in many places, for example:
https://en.wikipedia.org/wiki/Interval_(mathematics)
The label parameter merely controls whether the left (3:00) or right (6:00) side is displayed, but doesn't impact the results themselves.
Also note that you can change the starting point for the intervals with the loffset parameter (which should be entered as a time delta).
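For example, here is a minimal sketch that reproduces the output the question expected. Note that in newer pandas versions loffset has been deprecated and removed; the documented replacement is to add a Timedelta to the result's index afterwards, which is what this sketch does:

import pandas as pd

index = pd.date_range('1/1/2000', periods=9, freq='T')
series = pd.Series(range(9), index=index)

# Bin with left-closed, left-labeled 3-minute intervals, then shift the
# labels by 2 minutes (the post-hoc equivalent of loffset='2T').
result = series.resample('3T', label='left', closed='left').sum()
result.index = result.index + pd.Timedelta(minutes=2)
print(result)
# 2000-01-01 00:02:00     3
# 2000-01-01 00:05:00    12
# 2000-01-01 00:08:00    21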
Back to the example, where we change only the labeling from 'right' to 'left':
series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
series.resample('3T', label='left', closed='right').sum()
1999-12-31 23:57:00 0
2000-01-01 00:00:00 6
2000-01-01 00:03:00 15
2000-01-01 00:06:00 15
As you can see, the results are the same, only the index label changes. Pandas only lets you display the right or left label, but if it showed both, then it would look like this (below I'm using standard interval notation, where ( on the left side means open and ] on the right side means closed):
( 1999-12-31 23:57:00, 2000-01-01 00:00:00 ] 0 # = 0
( 2000-01-01 00:00:00, 2000-01-01 00:03:00 ] 6 # = 1+2+3
( 2000-01-01 00:03:00, 2000-01-01 00:06:00 ] 15 # = 4+5+6
( 2000-01-01 00:06:00, 2000-01-01 00:09:00 ] 15 # = 7+8
Note that the first bin (23:57:00, 00:00:00] is not empty; it just contains a single row, and the value in that row is zero. If you change 'sum' to 'count' this becomes more obvious:
series.resample('3T', label='left', closed='right').count()
1999-12-31 23:57:00 1
2000-01-01 00:00:00 3
2000-01-01 00:03:00 3
2000-01-01 00:06:00 2
Per JohnE's answer, I put together a little helpful infographic which should settle this issue once and for all.
It is important to understand that resampling works by first producing a raster, which is a sequence of instants (not periods, intervals, or durations), and that this raster is built independently of the 'label' and 'closed' parameters; it uses only 'freq' and 'loffset'. In your case, the system will produce the following raster:
2000-01-01 00:00:00
2000-01-01 00:03:00
2000-01-01 00:06:00
2000-01-01 00:09:00
Note again that at this moment there is no interpretation in terms of intervals or periods. You can shift it using 'loffset'.
Then the system will use the 'closed' parameter in order to choose between two options:
(start, end]
[start, end)
Here start and end are two adjacent timestamps in the raster. The 'label' parameter chooses whether start or end is used as the representative of the interval.
In your example, if you choose closed='right' then you will get the following intervals:
( previous_interval , 2000-01-01 00:00:00] - {0}
(2000-01-01 00:00:00, 2000-01-01 00:03:00] - {1,2,3}
(2000-01-01 00:03:00, 2000-01-01 00:06:00] - {4,5,6}
(2000-01-01 00:06:00, 2000-01-01 00:09:00] - {7,8}
Note that after you aggregate the values over these intervals, the result is displayed in one of two versions depending on the 'label' parameter, that is, whether each interval is represented by its left or its right timestamp.
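A quick way to see all four combinations at once (a minimal sketch reusing the series from the question): the aggregated values depend only on closed, and the index labels only on label.

import pandas as pd

index = pd.date_range('1/1/2000', periods=9, freq='T')
series = pd.Series(range(9), index=index)

# closed decides whether each bin is (start, end] or [start, end);
# label decides whether start or end names the bin.
for closed in ('left', 'right'):
    for label in ('left', 'right'):
        out = series.resample('3T', closed=closed, label=label).sum()
        print(closed, label, out.values)
# closed='left'  gives [ 3 12 21]    for either label
# closed='right' gives [ 0  6 15 15] for either label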
I now realize how it works, but the strange thing is still why the additional timestamp is added on the right side, which is counter-intuitive in a way. I guess this is similar to how range or iloc behave.
I have a large df with a datetime index with an hourly time step and precipitation values in several columns. My precipitation values are a cumulative total during the day (from 1:00 am to 0:00 am of the next day) and are reset every day, for example:
datetime S1
2000-01-01 00:00:00 4.5 ...
2000-01-01 01:00:00 0 ...
2000-01-01 02:00:00 0 ...
2000-01-01 03:00:00 0 ...
2000-01-01 04:00:00 0
2000-01-01 05:00:00 0
2000-01-01 06:00:00 0
2000-01-01 07:00:00 0
2000-01-01 08:00:00 0
2000-01-01 09:00:00 0
2000-01-01 10:00:00 0
2000-01-01 11:00:00 6.5
2000-01-01 12:00:00 7.5
2000-01-01 13:00:00 8.7
2000-01-01 14:00:00 8.7
...
2000-01-01 22:00:00 8.7
2000-01-01 23:00:00 8.7
2000-01-02 00:00:00 8.7
2000-01-02 01:00:00 0
I am trying to go from this to the actual hourly values, so the value for 1:00 am of every day is fine, and then I want to subtract the value from the timestep before.
Can I somehow use an if statement inside df.apply?
I thought of something like:
df_copy = df.copy()
df = df.apply(lambda x: if df.hour !=1: era5_T[x]=era5_T[x]-era5_T_copy[x-1])
But this is not working, since I'm not calling a function. I could work with a for loop, but that doesn't seem like the most efficient way, as I'm working with a big dataset.
You can use numpy.where and pd.Series.shift to achieve the result:
import numpy as np
# The datetimes are the index, so the hour comes from df.index
df['hourly_S1'] = np.where(df.index.hour == 1, df.S1, df.S1 - df.S1.shift())
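A self-contained sketch of the same idea (the data below is hypothetical, a miniature of the question's cumulative daily totals):

import numpy as np
import pandas as pd

# Hypothetical miniature of the question's data: cumulative daily totals.
idx = pd.date_range('2000-01-01 00:00', periods=6, freq='H')
df = pd.DataFrame({'S1': [4.5, 0.0, 0.0, 1.5, 2.5, 6.0]}, index=idx)

# Keep the 01:00 value as-is; elsewhere take the difference from the
# previous hour to recover the non-cumulative hourly values.
df['hourly_S1'] = np.where(df.index.hour == 1, df.S1, df.S1 - df.S1.shift())
print(df)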
I want to apply some statistics on records within a time window with an offset. My data looks something like this:
lon lat stat ... speed course head
ts ...
2016-09-30 22:00:33.272 5.41463 53.173161 15 ... 0.0 0.0 511
2016-09-30 22:01:42.879 5.41459 53.173180 15 ... 0.0 0.0 511
2016-09-30 22:02:42.879 5.41461 53.173161 15 ... 0.0 0.0 511
2016-09-30 22:03:44.051 5.41464 53.173168 15 ... 0.0 0.0 511
2016-09-30 22:04:53.013 5.41462 53.173141 15 ... 0.0 0.0 511
[5 rows x 7 columns]
I need the records within time windows of 600 seconds, with steps of 300 seconds. For example, these windows:
start end
2016-09-30 22:00:00.000 2016-09-30 22:10:00.000
2016-09-30 22:05:00.000 2016-09-30 22:15:00.000
2016-09-30 22:10:00.000 2016-09-30 22:20:00.000
I have looked at Pandas rolling to do this. But it seems like it does not have the option to add the offset which I described above. Am I overlooking something, or should I create a custom function for this?
What you want to achieve should be possible by combining DataFrame.resample with DataFrame.shift.
import pandas as pd
index = pd.date_range('1/1/2000', periods=9, freq='T')
series = pd.Series(range(9), index=index)
df = pd.DataFrame(series)
That will give you a simple time series (example taken from the API docs for DataFrame.resample).
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Now resample by your step size (see DataFrame.resample).
sampled = df.resample('90s').sum()
This will give you non-overlapping windows of the step size.
2000-01-01 00:00:00 1
2000-01-01 00:01:30 2
2000-01-01 00:03:00 7
2000-01-01 00:04:30 5
2000-01-01 00:06:00 13
2000-01-01 00:07:30 8
Finally, shift the resampled df by one step and add it to the unshifted version. Since the window size is exactly twice the step size, each pair of adjacent bins combines into one overlapping window.
sampled.shift(1, fill_value=0) + sampled
This will yield:
2000-01-01 00:00:00 1
2000-01-01 00:01:30 3
2000-01-01 00:03:00 9
2000-01-01 00:04:30 12
2000-01-01 00:06:00 18
2000-01-01 00:07:30 21
There may be a more elegant solution, but I hope this helps.
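As an aside, for the exact numbers in the question (600-second windows stepping every 300 seconds), the same trick can be written as a resample at the step size followed by a two-bin rolling sum. A minimal sketch with hypothetical data:

import pandas as pd

# Hypothetical stand-in for the question's data.
idx = pd.date_range('2016-09-30 22:00', periods=20, freq='1min')
df = pd.DataFrame({'speed': range(20)}, index=idx)

# 300 s bins; rolling over 2 consecutive bins gives 600 s windows that
# advance one bin (300 s) at a time. Each result labeled t covers
# [t - 300 s, t + 300 s).
per_step = df.resample('300s').sum()
windows = per_step.rolling(2).sum()
print(windows)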
I have a DataFrame with a datetime index.
df1 = pd.DataFrame(index=pd.date_range('20100201', periods=24, freq='8h3min'),
                   data=np.random.rand(24), columns=['Rubbish'])
df1.index=df1.index.to_datetime()
I want to resample this DataFrame, as in :
df1=df1.resample('7D').agg(np.median)
Then I have another DataFrame, with an index of a different frequency that starts at a different offset hour:
df2 = pd.DataFrame(index=pd.date_range('20100205', periods=24, freq='6h3min'),
                   data=np.random.rand(24), columns=['Rubbish'])
df2.index=df2.index.to_datetime()
df2=df2.resample('7D').agg(np.median)
The operations work well independently, but when I try to merge the results using
print(pd.merge(df1,df2,right_index=True,left_index=True,how='outer'))
I get:
Rubbish_x Rubbish_y
2010-02-01 0.585986 NaN
2010-02-05 NaN 0.423316
2010-02-08 0.767499 NaN
Whereas I would like to resample both with the same offset and get the following result after a merge:
Rubbish_x Rubbish_y
2010-02-01 AVALUE AVALUE
2010-02-08 AVALUE AVALUE
I have tried the following, but it only generates NaNs:
df2.reindex(df1.index)
print(pd.merge(df1,df2,right_index=True,left_index=True,how='outer'))
I have to stick to pandas 0.20.1.
I have tried merge_asof:
df1.index
Out[48]: Index([2015-03-24, 2015-03-31, 2015-04-07, 2015-04-14, 2015-04-21, 2015-04-28], dtype='object')
df2.index
Out[49]: Index([2015-03-24, 2015-03-31, 2015-04-07, 2015-04-14, 2015-04-21, 2015-04-28], dtype='object')
output=pd.merge_asof(df1,df2,left_index=True,right_index=True)
but it crashes with the following traceback:
Traceback (most recent call last):
TypeError: 'NoneType' object is not callable
I believe you need merge_asof:
print(pd.merge_asof(df1,df2,right_index=True,left_index=True))
Rubbish_x Rubbish_y
2010-02-01 0.446505 NaN
2010-02-08 0.474330 0.606826
Or use the parameter method='nearest' with reindex:
df2 = df2.reindex(df1.index, method='nearest')
print (df2)
Rubbish
2010-02-01 0.415248
2010-02-08 0.415248
print(pd.merge(df1,df2,right_index=True,left_index=True,how='outer'))
Rubbish_x Rubbish_y
2010-02-01 0.431966 0.415248
2010-02-08 0.279121 0.415248
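Putting the second option together end to end (a sketch using the question's random data, so the exact numbers will vary from run to run):

import numpy as np
import pandas as pd

df1 = pd.DataFrame(index=pd.date_range('20100201', periods=24, freq='8h3min'),
                   data=np.random.rand(24), columns=['Rubbish'])
df2 = pd.DataFrame(index=pd.date_range('20100205', periods=24, freq='6h3min'),
                   data=np.random.rand(24), columns=['Rubbish'])

df1 = df1.resample('7D').agg(np.median)
df2 = df2.resample('7D').agg(np.median)

# Snap df2's bin labels onto df1's grid, then merge on the shared index.
df2 = df2.reindex(df1.index, method='nearest')
print(pd.merge(df1, df2, left_index=True, right_index=True, how='outer'))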
I think the following code would achieve your task. Note that for a minute-based frequency, resample defaults to closed='left' and label='left', which gives exactly the bins you expected:
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.resample.html
What I have:
A pandas dataframe with a column containing dates
Python 3.6
What I want:
Compute a new column, where the new value for every row depends only on a part of the date in the existing column for the same row (for example, an operation that depends only on the hour of the date)
Do so in an efficient manner (thinking, vectorized), as opposed to row-by-row computations.
Example dataframe (a small dataframe is convenient for printing, but I also have an actual use case with a larger dataframe which I can't share, but can use for timing different solutions):
import numpy as np
import pandas as pd
from datetime import datetime
from datetime import timedelta
df = pd.DataFrame({'Date': np.arange(datetime(2000,1,1),
                                     datetime(2000,1,2),
                                     timedelta(hours=3)).astype(datetime)})
print(df)
Which gives:
Date
0 2000-01-01 00:00:00
1 2000-01-01 03:00:00
2 2000-01-01 06:00:00
3 2000-01-01 09:00:00
4 2000-01-01 12:00:00
5 2000-01-01 15:00:00
6 2000-01-01 18:00:00
7 2000-01-01 21:00:00
Existing solution (too slow):
df['SinHour'] = df.apply(
    lambda row: np.sin((row.Date.hour + float(row.Date.minute) / 60.0) * np.pi / 12.0),
    axis=1)
print(df)
Which gives:
Date SinHour
0 2000-01-01 00:00:00 0.000000e+00
1 2000-01-01 03:00:00 7.071068e-01
2 2000-01-01 06:00:00 1.000000e+00
3 2000-01-01 09:00:00 7.071068e-01
4 2000-01-01 12:00:00 1.224647e-16
5 2000-01-01 15:00:00 -7.071068e-01
6 2000-01-01 18:00:00 -1.000000e+00
7 2000-01-01 21:00:00 -7.071068e-01
I say this solution is too slow, because it computes every value in the column row-by-row. Of course, if this really is the only possibility, I'll have to settle for this. However, in the case of simpler functions, I've gotten huge speedups by using vectorized numpy functions, which I'm hoping will be possible in some way here too.
Direction for desired solution (does not work):
I was hoping to be able to do something like this:
df = df.assign(
    SinHour=lambda data: np.sin((data.Date.hour + float(data.Date.minute) / 60.0)
                                * np.pi / 12.0))
This is the direction I was hoping to go in, because it's no longer a row-by-row apply. However, it obviously doesn't work, because it can't access the hour and minute properties of the entire Date column at once in a "vectorized" manner.
You were really close; you only need the .dt accessor to process a datetime Series, and astype for the cast:
df = df.assign(SinHour=np.sin((df.Date.dt.hour +
                               df.Date.dt.minute.astype(float) / 60.0) * np.pi / 12.0))
print(df)
Date SinHour
0 2000-01-01 00:00:00 0.000000e+00
1 2000-01-01 03:00:00 7.071068e-01
2 2000-01-01 06:00:00 1.000000e+00
3 2000-01-01 09:00:00 7.071068e-01
4 2000-01-01 12:00:00 1.224647e-16
5 2000-01-01 15:00:00 -7.071068e-01
6 2000-01-01 18:00:00 -1.000000e+00
7 2000-01-01 21:00:00 -7.071068e-01
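Equivalently, the same computation can be written without assign; the .dt accessor exposes the datetime fields as vectorized Series:

# .dt.hour and .dt.minute are integer Series, so the whole expression
# is computed column-wise in one shot.
frac_hour = df.Date.dt.hour + df.Date.dt.minute / 60.0
df['SinHour'] = np.sin(frac_hour * np.pi / 12.0)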
Let's look at some one-minute data:
In [513]: rng = pd.date_range('1/1/2000', periods=12, freq='T')
In [514]: ts = Series(np.arange(12), index=rng)
In [515]: ts
Out[515]:
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
2000-01-01 00:09:00 9
2000-01-01 00:10:00 10
2000-01-01 00:11:00 11
Freq: T
Suppose you wanted to aggregate this data into five-minute chunks or bars by taking
the sum of each group:
In [516]: ts.resample('5min', how='sum')
Out[516]:
2000-01-01 00:00:00 0
2000-01-01 00:05:00 15
2000-01-01 00:10:00 40
2000-01-01 00:15:00 11
Freq: 5T
However, I don't want to use the resample method, but I still want the same output for this input. How can I use groupby, reindex, or other such methods?
You can use a custom pd.Grouper this way:
In [78]: ts.groupby(pd.Grouper(freq='5min', closed='right')).sum()
Out [78]:
1999-12-31 23:55:00 0
2000-01-01 00:00:00 15
2000-01-01 00:05:00 40
2000-01-01 00:10:00 11
Freq: 5T, dtype: int64
The closed='right' ensures the binning is the same; the aggregated values match the resample output, although here each bin is labeled by its left edge rather than its right.
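To also reproduce the right-edge labels from the book output, pd.Grouper accepts a label argument as well (a small sketch):

ts.groupby(pd.Grouper(freq='5min', closed='right', label='right')).sum()
# 2000-01-01 00:00:00     0
# 2000-01-01 00:05:00    15
# 2000-01-01 00:10:00    40
# 2000-01-01 00:15:00    11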
However, if your aim is to do more custom grouping, you can use .groupby with your own vector:
In [78]: buckets = (ts.index - ts.index[0]) / pd.Timedelta('5min')
In [79]: grp = ts.groupby(np.ceil(buckets.values))
In [80]: grp.sum()
Out[80]:
0 0
1 15
2 40
3 11
dtype: int64
The output is not exactly the same, but the method is more flexible (e.g. it can create uneven buckets).
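For example, an uneven split (a sketch: the first two minutes in one bucket, everything else in another):

# Label each row by whether it falls in the first two minutes.
buckets = np.where(ts.index < ts.index[0] + pd.Timedelta(minutes=2), 'head', 'tail')
ts.groupby(buckets).sum()
# head     1
# tail    65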