Downsampling in Pandas DataFrame by dividing observations into ratios - python

Given a DataFrame having a timestamp column (ts), I'd like to downsample these observations by the hour. Values that were previously indexed by ts should now be divided into ratios based on the number of minutes falling in each hour. [note: divide the data in ratios for the NaN columns while resampling]
ts event duration
0 2020-09-09 21:01:00 a 12
1 2020-09-10 00:10:00 a 22
2 2020-09-10 01:31:00 a 130
3 2020-09-10 01:50:00 b 60
4 2020-09-10 01:51:00 b 50
5 2020-09-10 01:59:00 b 26
6 2020-09-10 02:01:00 c 72
7 2020-09-10 02:51:00 b 51
8 2020-09-10 03:01:00 b 63
9 2020-09-10 04:01:00 c 79
import pandas as pd

def create_dataframe():
    df = pd.DataFrame([{'duration': 12, 'event': 'a', 'ts': '2020-09-09 21:01:00'},
                       {'duration': 22, 'event': 'a', 'ts': '2020-09-10 00:10:00'},
                       {'duration': 130, 'event': 'a', 'ts': '2020-09-10 01:31:00'},
                       {'duration': 60, 'event': 'b', 'ts': '2020-09-10 01:50:00'},
                       {'duration': 50, 'event': 'b', 'ts': '2020-09-10 01:51:00'},
                       {'duration': 26, 'event': 'b', 'ts': '2020-09-10 01:59:00'},
                       {'duration': 72, 'event': 'c', 'ts': '2020-09-10 02:01:00'},
                       {'duration': 51, 'event': 'b', 'ts': '2020-09-10 02:51:00'},
                       {'duration': 63, 'event': 'b', 'ts': '2020-09-10 03:01:00'},
                       {'duration': 79, 'event': 'c', 'ts': '2020-09-10 04:01:00'},
                       {'duration': 179, 'event': 'c', 'ts': '2020-09-10 06:05:00'}])
    df.ts = pd.to_datetime(df.ts)
    return df
I want to estimate an amount "produced" per hour, based on the ratio of time spent in each hour. This can be compared to asking: given how many lines of code were completed by each timestamp, how many lines were actually written per hour?
for example: at "2020-09-10 00:10:00" we have 22. Then during the period from 21:01 - 00:10, we produced based on
59 min of 21:00 hours -> 7 => =ROUND(22/189*59,0)
60 min of 22:00 hours -> 7 => =ROUND(22/189*60,0)
60 min of 23:00 hours -> 7 => =ROUND(22/189*60,0)
10 min of 00:00 hours -> 1 => =ROUND(22/189*10,0)
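The same apportioning can be sketched directly in pandas (the helper split_by_hour below is hypothetical, only to illustrate the arithmetic):
import pandas as pd

def split_by_hour(start, end, value):
    # hour boundaries strictly inside (start, end), plus the two endpoints
    edges = [start] + list(pd.date_range(start.ceil('H'), end.floor('H'), freq='H')) + [end]
    total = (end - start).total_seconds() / 60   # 189 minutes for 21:01 -> 00:10
    out = {}
    for a, b in zip(edges[:-1], edges[1:]):
        mins = (b - a).total_seconds() / 60
        if mins:                                 # skip zero-length segments
            out[a.floor('H')] = round(value / total * mins)
    return pd.Series(out)

print(split_by_hour(pd.Timestamp('2020-09-09 21:01'),
                    pd.Timestamp('2020-09-10 00:10'), 22))
# 2020-09-09 21:00:00    7
# 2020-09-09 22:00:00    7
# 2020-09-09 23:00:00    7
# 2020-09-10 00:00:00    1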
The result should be something like:
ts event duration
0 2020-09-09 20:00:00 a NaN
1 2020-09-09 21:00:00 a 7
2 2020-09-09 22:00:00 a 7
3 2020-09-09 23:00:00 a 7
4 2020-09-10 00:00:00 a 1
5 2020-09-10 01:00:00 b ..
6 2020-09-10 02:00:00 c ..
Problem with this approach:
It appears to me that this approach has a serious issue. If you look at row 1 below (duration 4 at 2020-09-10 10:00:00), the 4 needs to be divided across the 3 hours since 07:00:00. Considering a base duration value of 1 (base unit), the rounded per-hour shares however no longer add up:
def create_dataframe2():
    df = pd.DataFrame([{'duration': 4, 'event': 'c', 'c': 'event3.5', 'ts': '2020-09-10 07:00:00'},
                       {'duration': 4, 'event': 'c', 'c': 'event3.5', 'ts': '2020-09-10 10:00:00'}])
    df.ts = pd.to_datetime(df.ts)
    return df
Source
duration event c ts
0 4 c event3.5 2020-09-10 07:00:00
1 4 c event3.5 2020-09-10 10:00:00
Expected Output
ts_hourly mins duration
0 2020-09-10 07:00:00 60.0 2
1 2020-09-10 08:00:00 60.0 1
2 2020-09-10 09:00:00 60.0 1
3 2020-09-10 10:00:00 0.0 0
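The arithmetic behind the issue is easy to check: 4 units spread over the 180 minutes from 07:00 to 10:00 come to 4/180*60 ≈ 1.33 per hour, so rounding each hour independently loses a unit (plain Python, mirroring the ROUND formulas above):
total_mins = 180                                  # 07:00 -> 10:00
shares = [round(4 / total_mins * 60) for _ in range(3)]
print(shares, sum(shares))                        # [1, 1, 1] 3 -- one unit lost
The expected output sidesteps this by handing the rounding remainder to the first hour (2 + 1 + 1 + 0 = 4).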

The first step is to add a "previous ts" column to the source DataFrame:
df['tsPrev'] = df.ts.shift()
Then set ts column as the index:
df.set_index('ts', inplace=True)
The third step is to create an auxiliary index, composed of the original
index and "full hours":
ind = df.event.resample('H').asfreq().index.union(df.index)
Then create an auxiliary DataFrame, reindexed with the just-created index, and back-fill the event column:
df2 = df.reindex(ind)
df2.event = df2.event.bfill()
Define a function to be applied to each group of rows from df2:
def parts(grp):
    lstRow = grp.iloc[-1]               # Last row from group
    if pd.isna(lstRow.tsPrev):          # First group
        return pd.Series([lstRow.duration], index=[grp.index[0]], dtype=int)
    # Other groups
    return -pd.Series([0], index=[lstRow.tsPrev]).append(grp.duration)\
        .interpolate(method='index').round().diff(-1)[:-1].astype(int)
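One caveat: Series.append was removed in pandas 2.0, so on current pandas the same chain can be written with pd.concat (a sketch, otherwise identical to the function above):
def parts(grp):
    lstRow = grp.iloc[-1]               # Last row from group
    if pd.isna(lstRow.tsPrev):          # First group
        return pd.Series([lstRow.duration], index=[grp.index[0]], dtype=int)
    # Other groups: prepend a zero at tsPrev, interpolate along the
    # DatetimeIndex, then take the differences between consecutive points
    s = pd.concat([pd.Series([0], index=[lstRow.tsPrev]), grp.duration])
    return -s.interpolate(method='index').round().diff(-1)[:-1].astype(int)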
Then generate the source data for the "produced" column in 2 steps:
Generate the detailed data:
prodDet = df2.groupby(np.isfinite(df2.duration.values[::-1]).cumsum()[::-1],
                      sort=False).apply(parts).reset_index(level=0, drop=True)
The source is df2, grouped in such a way that each group ends with a row
holding a non-null value in the duration column. Each group is then
processed with the parts function.
The result is:
2020-09-09 21:00:00 12
2020-09-09 21:01:00 7
2020-09-09 22:00:00 7
2020-09-09 23:00:00 7
2020-09-10 00:00:00 1
2020-09-10 00:10:00 80
2020-09-10 01:00:00 50
2020-09-10 01:31:00 60
2020-09-10 01:50:00 50
2020-09-10 01:51:00 26
2020-09-10 01:59:00 36
2020-09-10 02:00:00 36
2020-09-10 02:01:00 51
2020-09-10 02:51:00 57
2020-09-10 03:00:00 6
2020-09-10 03:01:00 78
2020-09-10 04:00:00 1
2020-09-10 04:01:00 85
2020-09-10 05:00:00 87
2020-09-10 06:00:00 7
dtype: int32
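Before aggregating, the grouping key is worth unpacking: np.isfinite flags the rows that carry a duration, and reversing, cumulative-summing, and reversing back gives every row the label of the nearest flagged row at or below it, so each group ends exactly at a non-null duration. A tiny illustration with made-up flags:
import numpy as np

has_dur = np.array([True, False, False, True, False, True])  # True where duration is present
print(has_dur[::-1].cumsum()[::-1])                          # [3 2 2 2 1 1]
# rows 1-3 share label 2 and that run ends at the flagged row 3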
Generate aggregated data, for the time being also as a Series:
prod = prodDet.resample('H').sum().rename('produced')
This time prodDet is resampled (broken down by hours) and the result is
the sum of the values in each hour.
The result is:
2020-09-09 21:00:00 19
2020-09-09 22:00:00 7
2020-09-09 23:00:00 7
2020-09-10 00:00:00 81
2020-09-10 01:00:00 222
2020-09-10 02:00:00 144
2020-09-10 03:00:00 84
2020-09-10 04:00:00 86
2020-09-10 05:00:00 87
2020-09-10 06:00:00 7
Freq: H, Name: produced, dtype: int32
Let's describe the content of prodDet:
There is no row for 2020-09-09 20:00:00, because no source row is
from this hour (your data start from 21:01:00).
Row 21:00:00 12 comes from the first source row (you forgot about
it when writing the expected result).
Rows for 21:01:00, 22:00:00, 23:00:00 and 00:00:00 come from
"partitioning" the row 00:10:00 a 22, just as in your
expected result.
Rows with 80 and 50 come from the row containing 130, divided
between the rows with timestamps 00:10:00 and 01:00:00.
And so on.
Now we start to assemble the final result.
Join prod (converted to a DataFrame) with the event column:
result = prod.to_frame().join(df2.event)
Add the tsMin column - the minimal ts in each hour (as you asked
in one of the comments):
result['tsMin'] = df.duration.resample('H').apply(lambda grp: grp.index.min())
Change the index into a regular column and set its name to ts
(like in the source DataFrame):
result = result.reset_index().rename(columns={'index': 'ts'})
The final result is:
ts produced event tsMin
0 2020-09-09 21:00:00 19 a 2020-09-09 21:01:00
1 2020-09-09 22:00:00 7 a NaT
2 2020-09-09 23:00:00 7 a NaT
3 2020-09-10 00:00:00 81 a 2020-09-10 00:10:00
4 2020-09-10 01:00:00 222 a 2020-09-10 01:31:00
5 2020-09-10 02:00:00 144 c 2020-09-10 02:01:00
6 2020-09-10 03:00:00 84 b 2020-09-10 03:01:00
7 2020-09-10 04:00:00 86 c 2020-09-10 04:01:00
8 2020-09-10 05:00:00 87 c NaT
9 2020-09-10 06:00:00 7 c 2020-09-10 06:05:00
E.g. the value of 81 for 00:00:00 is a sum of 1 and 80 (the first
part resulting from row with 130), see prodDet above.
Some values in the tsMin column are empty, for hours in which there is no
source row.
If you want to drop the result from the first row entirely (the one with
duration == 12), change return pd.Series([lstRow.duration]... to
return pd.Series([0]... (the fourth line of the parts function).
To sum up, my solution is more pandasonic and significantly shorter
than yours: 17 lines versus about 70, excluding comments.

I was not able to find a solution in pandas, so I created one in plain Python.
Basically, after sorting, I iterate over all the values and pass two datetimes, start_time and end_time, to a function that does the processing.
import numpy as np
import pandas as pd
from datetime import timedelta

def get_ratio_per_hour(start_time: pd.Timestamp, end_time: pd.Timestamp, data_: int):
    # get total hours between the start and end, used for looping
    totalhrs = lambda x: [1 for _ in range(int(x // 3600))] + [
        (x % 3600 / 3600
         or 0.1  # added for loop fix afterwards
         )]
    # check if start and end are not in the same hour
    if start_time.hour != end_time.hour:
        seconds = (end_time - start_time).total_seconds()
        if seconds < 3600:
            parts_ = [1] + totalhrs(seconds)
        else:
            parts_ = totalhrs(seconds)
    else:
        # parts_ defines the loop iterations
        parts_ = totalhrs((end_time - start_time).total_seconds())
    sum_of_hrs = sum(parts_)
    # for constructing the DataFrame
    new_hours = []
    mins = []
    # clone data
    start_time_ = start_time
    end_time_ = end_time
    for e in range(len(parts_)):
        if sum_of_hrs != 0:
            if sum_of_hrs > 1:
                if end_time_.hour != start_time_.hour:
                    # floor, based on the start time + 1 hour
                    floor_time = (start_time_ + timedelta(hours=1)).floor('H')
                    new_hours.append(start_time_.floor('H'))
                    mins.append((floor_time - start_time_).total_seconds() // 60)
                    sum_of_hrs = sum_of_hrs - 1
                    start_time_ = floor_time
                else:
                    # hour is the same
                    floor_time = (start_time_ + timedelta(hours=1)).floor('H')
                    new_hours.append(start_time_.floor('H'))
                    mins.append((floor_time - start_time_).total_seconds() // 60)
                    sum_of_hrs = sum_of_hrs - 1
                    start_time_ = floor_time
            else:
                if end_time_.hour != start_time_.hour:
                    # get the rounded-off hour
                    floor_time = (end_time_ + timedelta(hours=1)).floor('H')
                    new_hours.append(end_time_.floor('H'))
                    mins.append(60 - ((floor_time - end_time_).total_seconds() // 60))
                    sum_of_hrs = sum_of_hrs - 1
                    start_time_ = floor_time
                else:
                    # hour is the same
                    floor_time = (end_time_ + timedelta(hours=1)).floor('H')
                    new_hours.append(end_time_.floor('H'))
                    mins.append((end_time_ - start_time_).total_seconds() // 60)
                    sum_of_hrs = sum_of_hrs - 1
                    start_time_ = floor_time
    # build the result DataFrame
    df_out = pd.DataFrame()
    df_out['hours'] = pd.Series(new_hours)
    df_out['mins'] = pd.Series(mins)
    df_out['ratios'] = round(data_ / sum(mins) * df_out['mins'])
    return df_out
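As a sanity check (my example, not part of the original answer), feeding the first interval from the question reproduces the first four intermediate rows:
print(get_ratio_per_hour(pd.Timestamp('2020-09-09 21:01:00'),
                         pd.Timestamp('2020-09-10 00:10:00'), 22))
#                 hours  mins  ratios
# 0 2020-09-09 21:00:00  59.0     7.0
# 1 2020-09-09 22:00:00  60.0     7.0
# 2 2020-09-09 23:00:00  60.0     7.0
# 3 2020-09-10 00:00:00  10.0     1.0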
Now, let's run the code for each iteration
time_val = []
split_f_val = []
split_field = 'duration'
time_field = 'ts'
# creating DataFrames for intermediate results
df_final = pd.DataFrame()
df2 = pd.DataFrame()
for ix, row in df.iterrows():
    time_val.append(row[str(time_field)])
    split_f_val.append(int(row[str(split_field)]))
    # skip the first element for processing, so we always have at least two values
    if ix != 0:
        # get the last two values
        new_time_list = time_val[-2:]
        new_data_list = split_f_val[-2:]
        # get the times to compare
        start_time = new_time_list[:-1][0]
        end_time = new_time_list[1:][0]
        # get the latest data value to divide
        data_ = new_data_list[1:][0]
        df2 = get_ratio_per_hour(start_time, end_time, data_)
        df_final = pd.concat([df_final, df2], ignore_index=True)
    else:
        # create an empty DataFrame for the first value
        df_final = pd.DataFrame([[np.nan, np.nan, np.nan]],
                                columns=['hours', 'mins', 'ratios'])
        df_final = pd.concat([df_final, df2], ignore_index=True)
result = df_final.groupby(['hours'])['ratios'].sum()
Intermediate DataFrame:
hours mins ratios
0 NaN NaN NaN
0 2020-09-09 21:00:00 59.0 7.0
1 2020-09-09 22:00:00 60.0 7.0
2 2020-09-09 23:00:00 60.0 7.0
3 2020-09-10 00:00:00 10.0 1.0
0 2020-09-10 00:00:00 50.0 80.0
1 2020-09-10 01:00:00 31.0 50.0
0 2020-09-10 01:00:00 19.0 60.0
0 2020-09-10 01:00:00 1.0 50.0
0 2020-09-10 01:00:00 8.0 26.0
0 2020-09-10 01:00:00 1.0 36.0
1 2020-09-10 02:00:00 1.0 36.0
0 2020-09-10 02:00:00 50.0 51.0
0 2020-09-10 02:00:00 9.0 57.0
1 2020-09-10 03:00:00 1.0 6.0
0 2020-09-10 03:00:00 59.0 78.0
1 2020-09-10 04:00:00 1.0 1.0
0 2020-09-10 04:00:00 59.0 85.0
1 2020-09-10 05:00:00 60.0 87.0
2 2020-09-10 06:00:00 5.0 7.0
Final Output:
hours ratios
2020-09-09 21:00:00 7.0
2020-09-09 22:00:00 7.0
2020-09-09 23:00:00 7.0
2020-09-10 00:00:00 81.0
2020-09-10 01:00:00 222.0
2020-09-10 02:00:00 144.0
2020-09-10 03:00:00 84.0
2020-09-10 04:00:00 86.0
2020-09-10 05:00:00 87.0
2020-09-10 06:00:00 7.0

Related

how to convert 15 mins data to hourly in pandas?

I have generation data from two generators in 15-minute blocks, and I want to convert it to hourly. Here is an example:
Time Gen1 Gen2
00:15:00 10 21
00:30:00 12 22
00:45:00 16 26
01:00:00 20 11
01:15:00 60 51
01:30:00 30 31
01:45:00 70 21
02:00:00 40 61
I want to take the average of each set of 4 values (basically converting the 15-minute blocks to an hourly block) and put them in place of a 1-hour block. Expected output:
Time Gen1 Gen2
01:00:00 14.5 20
02:00:00 50 41
I know I can use pandas' groupby function to get the expected output, but I don't know the proper syntax. Can anyone please help?
Use resample with closed='right'. But first we convert your Time column to datetime type:
df['Time'] = pd.to_datetime(df['Time'])
df.resample('H', on='Time', closed='right').mean().reset_index()
Time Gen1 Gen2
0 2021-01-09 00:00:00 14.5 20.0
1 2021-01-09 01:00:00 50.0 41.0
To convert the Time column back to time format, use:
df['Time'] = df['Time'].dt.time
Time Gen1 Gen2
0 00:00:00 14.5 20.0
1 01:00:00 50.0 41.0
You can try creating a column hour and then groupby('hour').mean():
df['date_time'] = pd.to_datetime(df['Time'], format="%H:%M:%S")
df['hour'] = df['date_time'].apply(lambda x: x.strftime("%H:00:00"))
gr_df = df.groupby('hour').mean()
gr_df.index.name = 'Time'
print(gr_df.reset_index())
Time Gen1 Gen2
0 00:00:00 12.666667 23.0
1 01:00:00 45.000000 28.5
2 02:00:00 40.000000 61.0

How can I create extra columns during resampling a Pandas data frame?

if I have a dataframe like this:
timestamp price
1596267946298 100.0
1596267946299 101.0
1596267946300 102.0
1596267948301 99.0
1596267948302 98.0
1596267949303 99.0
and I want to create the high, low and average during resampling:
I can duplicate the price column into high and low columns and then, during resampling, take the min, max, and mean of the appropriate columns.
But I was wondering if there is a way to do this in one pass?
my expected output would be (let's assume resampling at 100ms for this example)
timestamp price min mean max
1596267946298 100.0 100 100.5 101
1596267946299 101.0 100 100.5 101
1596267946300 102.0 98 99.5 102
1596267948301 99.0 98 99.5 102
1596267948302 98.0 98 99.5 102
1596267949303 99.0 98 99.5 102
You could do something like this:
import pandas as pd
index = pd.date_range('1/1/2000', periods=9, freq='T')
series = pd.Series(range(9), index=index)
def custom_func(x):
    return x[-1], x.min(), x.max(), x.mean()
result = series.resample('3T').apply(custom_func)
print(pd.DataFrame(result.tolist(), columns=['resampled', 'min', 'max', 'mean'], index=result.index))
Before resampling
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
After resampling
resampled min max mean
2000-01-01 00:00:00 2 0 2 1.0
2000-01-01 00:03:00 5 3 5 4.0
2000-01-01 00:06:00 8 6 8 7.0
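If the tuple unpacking feels clumsy, resample followed by agg also computes several statistics in one pass (a sketch on the same example series):
result = series.resample('3T').agg(['last', 'min', 'max', 'mean'])
print(result)
#                      last  min  max  mean
# 2000-01-01 00:00:00     2    0    2   1.0
# 2000-01-01 00:03:00     5    3    5   4.0
# 2000-01-01 00:06:00     8    6    8   7.0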

Optimize code to find the median of values of past 30 day for each row in a DataFrame

I'd like to find faster code to achieve the same goal: for each row, compute the median of all data in the past 30 days. But if there are fewer than 5 data points, return np.nan instead.
import pandas as pd
import numpy as np
import datetime
def findPastVar(df, var='var', window=30, method='median'):
    # window = # of past days
    def findPastVar_apply(row):
        pastVar = df[var].loc[(df['timestamp'] - row['timestamp'] < datetime.timedelta(days=0)) &
                              (df['timestamp'] - row['timestamp'] > datetime.timedelta(days=-window))]
        if len(pastVar) < 5:
            return np.nan
        if method == 'median':
            return np.median(pastVar.values)
    df['past{}d_{}_median'.format(window, var)] = df.apply(findPastVar_apply, axis=1)
    return df
df = pd.DataFrame()
df['timestamp'] = pd.date_range('1/1/2011', periods=100, freq='D')
df['timestamp'] = df.timestamp.astype(pd.Timestamp)
df['var'] = pd.Series(np.random.randn(len(df['timestamp'])))
Data looks like this. In my real data, there are gaps in time and maybe more data points in one day.
In [47]: df.head()
Out[47]:
timestamp var
0 2011-01-01 00:00:00 -0.670695
1 2011-01-02 00:00:00 0.315148
2 2011-01-03 00:00:00 -0.717432
3 2011-01-04 00:00:00 2.904063
4 2011-01-05 00:00:00 -1.092813
Desired output:
In [55]: df.head(10)
Out[55]:
timestamp var past30d_var_median
0 2011-01-01 00:00:00 -0.670695 NaN
1 2011-01-02 00:00:00 0.315148 NaN
2 2011-01-03 00:00:00 -0.717432 NaN
3 2011-01-04 00:00:00 2.904063 NaN
4 2011-01-05 00:00:00 -1.092813 NaN
5 2011-01-06 00:00:00 -2.676784 -0.670695
6 2011-01-07 00:00:00 -0.353425 -0.694063
7 2011-01-08 00:00:00 -0.223442 -0.670695
8 2011-01-09 00:00:00 0.162126 -0.512060
9 2011-01-10 00:00:00 0.633801 -0.353425
However, this is my current code's running speed:
In [49]: %timeit findPastVar(df)
1 loop, best of 3: 755 ms per loop
I need to run a large dataframe from time to time, so I want to optimize this code.
Any suggestions or comments are welcome.
New in pandas 0.19 is time-aware rolling. It can deal with missing data.
Code:
print(df.rolling('30d', on='timestamp', min_periods=5)['var'].median())
Test Code:
df = pd.DataFrame()
df['timestamp'] = pd.date_range('1/1/2011', periods=60, freq='D')
df['timestamp'] = df.timestamp.astype(pd.Timestamp)
df['var'] = pd.Series(np.random.randn(len(df['timestamp'])))
# duplicate one sample
df.timestamp.loc[50] = df.timestamp.loc[51]
# drop some data
df = df.drop(range(15, 50))
df['median'] = df.rolling(
'30d', on='timestamp', min_periods=5)['var'].median()
Results:
timestamp var median
0 2011-01-01 00:00:00 -0.639901 NaN
1 2011-01-02 00:00:00 -1.212541 NaN
2 2011-01-03 00:00:00 1.015730 NaN
3 2011-01-04 00:00:00 -0.203701 NaN
4 2011-01-05 00:00:00 0.319618 -0.203701
5 2011-01-06 00:00:00 1.272088 0.057958
6 2011-01-07 00:00:00 0.688965 0.319618
7 2011-01-08 00:00:00 -1.028438 0.057958
8 2011-01-09 00:00:00 1.418207 0.319618
9 2011-01-10 00:00:00 0.303839 0.311728
10 2011-01-11 00:00:00 -1.939277 0.303839
11 2011-01-12 00:00:00 1.052173 0.311728
12 2011-01-13 00:00:00 0.710270 0.319618
13 2011-01-14 00:00:00 1.080713 0.504291
14 2011-01-15 00:00:00 1.192859 0.688965
50 2011-02-21 00:00:00 -1.126879 NaN
51 2011-02-21 00:00:00 0.213635 NaN
52 2011-02-22 00:00:00 -1.357243 NaN
53 2011-02-23 00:00:00 -1.993216 NaN
54 2011-02-24 00:00:00 1.082374 -1.126879
55 2011-02-25 00:00:00 0.124840 -0.501019
56 2011-02-26 00:00:00 -0.136822 -0.136822
57 2011-02-27 00:00:00 -0.744386 -0.440604
58 2011-02-28 00:00:00 -1.960251 -0.744386
59 2011-03-01 00:00:00 0.041767 -0.440604
You can try rolling_median, an O(N log(window)) implementation using a skip list:
pd.rolling_median(df, window=30, min_periods=5)
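Note that pd.rolling_median has since been removed from pandas; the equivalent modern spelling (same row-based window and min_periods semantics) is:
df['var'].rolling(30, min_periods=5).median()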

Resample python list with pandas

Fairly new to python and pandas here.
I make a query that's giving me back a timeseries. I'm never sure how many data points I receive from the query (run for a single day), but what I do know is that I need to resample them to contain 24 points (one for each hour in the day).
Printing m3hstream gives
[(1479218009000L, 109), (1479287368000L, 84)]
Then I try to make a dataframe df with
df = pd.DataFrame(data = list(m3hstream), columns=['Timestamp', 'Value'])
and this gives me an output of
Timestamp Value
0 1479218009000 109
1 1479287368000 84
Following I do this
daily_summary = pd.DataFrame()
daily_summary['value'] = df['Value'].resample('H').mean()
daily_summary = daily_summary.truncate(before=start, after=end)
print "Now daily summary"
print daily_summary
But this is giving me a TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'RangeIndex'
Could anyone please let me know how to resample it so I have 1 point for each hour in the 24 hour period that I'm querying for?
Thanks.
The first thing you need to do is convert that 'Timestamp' column to actual timestamps; it looks like those are milliseconds.
Then resample with the on parameter set to 'Timestamp':
df = df.assign(
Timestamp=pd.to_datetime(df.Timestamp, unit='ms')
).resample('H', on='Timestamp').mean().reset_index()
Timestamp Value
0 2016-11-15 13:00:00 109.0
1 2016-11-15 14:00:00 NaN
2 2016-11-15 15:00:00 NaN
3 2016-11-15 16:00:00 NaN
4 2016-11-15 17:00:00 NaN
5 2016-11-15 18:00:00 NaN
6 2016-11-15 19:00:00 NaN
7 2016-11-15 20:00:00 NaN
8 2016-11-15 21:00:00 NaN
9 2016-11-15 22:00:00 NaN
10 2016-11-15 23:00:00 NaN
11 2016-11-16 00:00:00 NaN
12 2016-11-16 01:00:00 NaN
13 2016-11-16 02:00:00 NaN
14 2016-11-16 03:00:00 NaN
15 2016-11-16 04:00:00 NaN
16 2016-11-16 05:00:00 NaN
17 2016-11-16 06:00:00 NaN
18 2016-11-16 07:00:00 NaN
19 2016-11-16 08:00:00 NaN
20 2016-11-16 09:00:00 84.0
If you want to fill those NaN values, use ffill, bfill, or interpolate:
df.assign(
Timestamp=pd.to_datetime(df.Timestamp, unit='ms')
).resample('H', on='Timestamp').mean().reset_index().interpolate()
Timestamp Value
0 2016-11-15 13:00:00 109.00
1 2016-11-15 14:00:00 107.75
2 2016-11-15 15:00:00 106.50
3 2016-11-15 16:00:00 105.25
4 2016-11-15 17:00:00 104.00
5 2016-11-15 18:00:00 102.75
6 2016-11-15 19:00:00 101.50
7 2016-11-15 20:00:00 100.25
8 2016-11-15 21:00:00 99.00
9 2016-11-15 22:00:00 97.75
10 2016-11-15 23:00:00 96.50
11 2016-11-16 00:00:00 95.25
12 2016-11-16 01:00:00 94.00
13 2016-11-16 02:00:00 92.75
14 2016-11-16 03:00:00 91.50
15 2016-11-16 04:00:00 90.25
16 2016-11-16 05:00:00 89.00
17 2016-11-16 06:00:00 87.75
18 2016-11-16 07:00:00 86.50
19 2016-11-16 08:00:00 85.25
20 2016-11-16 09:00:00 84.00
Let's try:
daily_summary = daily_summary.set_index('Timestamp')
daily_summary.index = pd.to_datetime(daily_summary.index, unit='ms')
For once an hour:
daily_summary.resample('H').mean()
or for once a day:
daily_summary.resample('D').mean()

Merging two dataframes only at specific times

I have two excel files that I'm trying to merge into one using pandas. The first file is a list of times and dates with a subscriber count for that given time and day. The second file has weather information on an hourly basis. I import both files and the data resembles:
df1=
Date Count
2010-01-02 09:00:00 15
2010-01-02 10:00:00 8
2010-01-02 11:00:00 9
2010-01-02 12:00:00 11
2010-01-02 13:00:00 8
2010-01-02 14:00:00 10
2010-01-02 15:00:00 8
2010-01-02 16:00:00 6
...
df2 =
Date Temp Rel_Hum Pressure Weather
2010-01-01 09:00:00 -5 93 100.36 Snow,Fog
2010-01-01 10:00:00 -5 93 100.36 Snow,Fog
2010-01-02 11:00:00 -6.5 91 100 Snow,Fog
2010-01-03 12:00:00 -7 87 89 Snow,Fog
2010-01-04 13:00:00 -7 87 89 Snow,Fog
2010-01-05 14:00:00 -6.7 88 89 Snow,Fog
2010-01-06 15:00:00 -6.5 89 89 Snow,Fog
2010-01-07 16:00:00 -6 88 90 Snow,Fog
2010-01-08 17:00:00 -6 89 89 Snow,Fog
...
I only need the weather info for the times specified in df1, but df2 contains weather info for every hour of every day of that month.
Since df1 only contains 2 columns, I've modified df1 to have Temp, Rel_Hum, Pressure, and Weather columns so that it resembles:
Date Count Temp Rel_Hum Pressure Weather
2010-01-02 09:00:00 15 0 0 0 0
2010-01-02 10:00:00 8 0 0 0 0
2010-01-02 11:00:00 9 0 0 0 0
2010-01-02 12:00:00 11 0 0 0 0
2010-01-02 13:00:00 8 0 0 0 0
2010-01-02 14:00:00 10 0 0 0 0
2010-01-02 15:00:00 8 0 0 0 0
2010-01-02 16:00:00 6 0 0 0 0
...
I've tested the code I've written on a one-month period, and the problem I'm encountering is that it takes a great deal of time to complete. I wanted to know if there is a faster way of going about this:
import pandas as pd
import numpy as np
from datetime import datetime
location = '/home/lukasz/Documents/BUS/HOURLY_DATA.xlsx'
location2 = '/home/lukasz/Documents/BUS/Weather Data/2010-01.xlsx'
df = pd.read_excel(location)
df2 = pd.read_excel(location2)
df.Temp = df.Temp.astype(float)
df.Rel_Hum = df.Rel_Hum.astype(float)
df.Pressure = df.Pressure.astype(float)
df.Weather = df.Weather.astype(str)
n = len(df2) - len(df)
for i in range(len(df)):
    print(df['Date'][i])
    for j in range(i, i + n):
        date_object = datetime.strptime(df2['Date/Time'][j], '%Y-%m-%d %H:%M')  # the date column in df2 is a str
        if df['Date'][i] == date_object:
            df.set_value(i, 'Temp', df2['Temp'][j])
            df.set_value(i, 'Dew_Point_Temp', df2['Dew_Point_Temp'][j])
            df.set_value(i, 'Rel_Hum', df2['Rel_Hum'][j])
            df.set_value(i, 'Pressure', df2['Pressure'][j])
            df.set_value(i, 'Weather', df2['Weather'][j])
# print(df[:5])
df.to_excel(location, index=False)
Use reindex to align df2 with df1, making sure to include the parameter method='ffill' to forward-fill the weather information. Then use join:
df1.join(df2.set_index('Date').reindex(df1.Date, method='ffill'), on='Date')
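An alternative with the same "most recent weather at or before each timestamp" semantics (my suggestion, not from the answer above) is pd.merge_asof, which requires both frames to be sorted on the key:
merged = pd.merge_asof(df1.sort_values('Date'), df2.sort_values('Date'),
                       on='Date', direction='backward')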
