Trying to downsample 8 weekly time points to 2 points, each representing the average over 4 weeks, I use resample(). I started by defining the rule as (60*60*24*7*4) seconds and ended up with 3 time points, the latest of which is a dummy. Checking further, I noticed that if I define the rule as 4W or 28D it's fine, but going down to 672H or smaller units (minutes, seconds, ...) the extra fake column appears. This testing code:
import numpy as np
import pandas as pd

d = np.arange(16).reshape(2, 8)
res = []
for month in range(1, 13):
    start_date = str(month) + '/1/2014'
    df = pd.DataFrame(data=d, index=['A', 'B'],
                      columns=pd.date_range(start_date, periods=8, freq='7D'))
    print(df, '\n')
    dfw = df.resample(rule='4W', how='mean', axis=1, closed='left', label='left')
    print('4 Weeks:\n', dfw, '\n')
    dfd = df.resample(rule='28D', how='mean', axis=1, closed='left', label='left')
    print('28 Days:\n', dfd, '\n')
    dfh = df.resample(rule='672H', how='mean', axis=1, closed='left', label='left')
    print('672 Hours:\n', dfh, '\n')
    dfm = df.resample(rule='40320T', how='mean', axis=1, closed='left', label='left')
    print('40320 Minutes:\n', dfm, '\n')
    dfs = df.resample(rule='2419200S', how='mean', axis=1, closed='left', label='left')
    print('2419200 Seconds:\n', dfs, '\n')
    res.append(([start_date], dfh.shape[1] == dfd.shape[1]))
    print('\n\n--------------------------\n\n')

[print(res[i]) for i in range(12)]
prints the following (I pasted only the printout of the last iteration):
2014-11-01 2014-11-29 2014-12-27
A 1.5 5.5 NaN
B 9.5 13.5 NaN
2014-12-01 2014-12-08 2014-12-15 2014-12-22 2014-12-29 2015-01-05 \
A 0 1 2 3 4 5
B 8 9 10 11 12 13
2015-01-12 2015-01-19
A 6 7
B 14 15
4 Weeks:
2014-11-30 2014-12-28
A 1.5 5.5
B 9.5 13.5
28 Days:
2014-12-01 2014-12-29
A 1.5 5.5
B 9.5 13.5
672 Hours:
2014-12-01 2014-12-29 2015-01-26
A 1.5 5.5 NaN
B 9.5 13.5 NaN
40320 Minutes:
2014-12-01 2014-12-29 2015-01-26
A 1.5 5.5 NaN
B 9.5 13.5 NaN
2419200 Seconds:
2014-12-01 2014-12-29 2015-01-26
A 1.5 5.5 NaN
B 9.5 13.5 NaN
--------------------------
(['1/1/2014'], False)
(['2/1/2014'], True)
(['3/1/2014'], True)
(['4/1/2014'], True)
(['5/1/2014'], False)
(['6/1/2014'], False)
(['7/1/2014'], False)
(['8/1/2014'], False)
(['9/1/2014'], False)
(['10/1/2014'], False)
(['11/1/2014'], False)
(['12/1/2014'], False)
So there is an error for date ranges starting at the beginning of nine of the months, and no error for three of them (February-April). Either I'm missing something or it's a bug. Which is it?
Thanks @DSM and @Andy. Indeed, I had pandas 0.15.1; upgrading to the latest 0.15.2 solved it.
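For readers on a current pandas, here is a minimal sketch of the same downsampling (my addition, not from the original thread): the how= keyword and axis=1 resampling are gone in recent versions, so transpose first, resample along the index, and transpose back.
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(16).reshape(2, 8),
                  index=['A', 'B'],
                  columns=pd.date_range('12/1/2014', periods=8, freq='7D'))
# Dates must be on the index to resample, hence the double transpose.
dfd = df.T.resample('28D', closed='left', label='left').mean().T
print(dfd)  # two 4-week averages per row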
I have this data frame
import pandas as pd
df = pd.DataFrame({'COTA':['A','A','A','A','A','B','B','B','B'],
'Date':['14/10/2021','19/10/2020','29/10/2019','30/09/2021','20/09/2020','20/10/2021','29/10/2020','15/10/2019','10/09/2020'],
'Mark':[1,2,3,4,5,1,2,3,3]
})
print(df)
Based on this data frame, I wanted the Mark from the previous year. I managed to get the maximum Mark per COTA, but I wanted the last one; I used .max() and thought I could get the last one with .last(), but it didn't work.
Here is my code:
df['Date'] = pd.to_datetime(df['Date'])
df['LastYear'] = df['Date'] - pd.offsets.YearEnd(0)
s1 = df.groupby(['COTA', 'LastYear'])['Mark'].max()
s2 = s1.rename(index=lambda x: x + pd.offsets.DateOffset(years=1), level=1)
df = df.join(s2.rename('Max_MarkLastYear'), on=['COTA', 'LastYear'])
print(df)
COTA Date Mark LastYear Max_MarkLastYear
0 A 2021-10-14 1 2021-12-31 5.0
1 A 2020-10-19 2 2020-12-31 3.0
2 A 2019-10-29 3 2019-12-31 NaN
3 A 2021-09-30 4 2021-12-31 5.0
4 A 2020-09-20 5 2020-12-31 3.0
5 B 2021-10-20 1 2021-12-31 3.0
6 B 2020-10-29 2 2020-12-31 3.0
7 B 2019-10-15 3 2019-12-31 NaN
8 B 2020-10-09 3 2020-12-31 3.0
How do I create a new column with the last value of the previous year?
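A minimal sketch of what should work, assuming "last" means the Mark at the latest Date within each (COTA, year) group (Last_MarkLastYear is a name I made up): sort by Date first so that .last() is well defined, then reuse the same join as above.
df = df.sort_values('Date')
s1 = df.groupby(['COTA', 'LastYear'])['Mark'].last()  # last Mark per group, by Date
s2 = s1.rename(index=lambda x: x + pd.offsets.DateOffset(years=1), level=1)
df = df.join(s2.rename('Last_MarkLastYear'), on=['COTA', 'LastYear'])
print(df)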
This isn't a duplicate. I already referred to post_1 and post_2.
My question is different and not about the agg function. It is about displaying the grouped-by column as well during an ffill operation. Though the code works fine, I am sharing the full code so you get an idea. The problem is in the commented line; look out for that line below.
I have a dataframe like the one given below:
import pandas as pd

df = pd.DataFrame({
    'subject_id': [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    'time_1': ['2173-04-03 12:35:00', '2173-04-03 12:50:00', '2173-04-05 12:59:00',
               '2173-05-04 13:14:00', '2173-05-05 13:37:00', '2173-07-06 13:39:00',
               '2173-07-08 11:30:00', '2173-04-08 16:00:00', '2173-04-09 22:00:00',
               '2173-04-11 04:00:00', '2173-04-13 04:30:00', '2173-04-14 08:00:00'],
    'val': [5, 5, 5, 5, 1, 6, 5, 5, 8, 3, 4, 6]})
df['time_1'] = pd.to_datetime(df['time_1'])
df['day'] = df['time_1'].dt.day
df['month'] = df['time_1'].dt.month
What this code, written with the help of Jezrael from the forum, does is add missing dates based on a threshold value. The only issue is that I don't see the grouped-by column in the output.
import numpy as np

df['time_1'] = pd.to_datetime(df['time_1'])
df['day'] = df['time_1'].dt.day
df['date'] = df['time_1'].dt.floor('d')
df1 = (df.set_index('date')
         .groupby('subject_id')
         .resample('d')
         .last()
         .index
         .to_frame(index=False))
df2 = df1.merge(df, how='left')
thresh = 5
mask = df2['day'].notna()
s = mask.cumsum().mask(mask)
df2['count'] = s.map(s.value_counts())
df2 = df2[(df2['count'] < thresh) | (df2['count'].isna())]
df2 = df2.groupby(df2['subject_id']).ffill()  # the problem is here
dates = df2['time_1'].dt.normalize()
df2['time_1'] += np.where(dates == df2['date'], 0, df2['date'] - dates)
df2['day'] = df2['time_1'].dt.day
df2['val'] = df2['val'].astype(int)
As shown in the code above, I tried the approaches below:
df2 = df2.groupby(df2['subject_id']).ffill() # doesn't help
df2 = df2.groupby(df2['subject_id']).ffill().reset_index() # doesn't help
df2 = df2.groupby('subject_id',as_index=False).ffill() # doesn't help
The output is incorrect because it lacks subject_id.
I expect my output to have the subject_id column as well.
Here are 2 possible solutions. groupby(...).ffill() returns only the filled value columns, so subject_id has to be kept explicitly. Either specify all columns in a list after groupby and assign back:
cols = df2.columns.difference(['subject_id'])
df2[cols] = df2.groupby('subject_id')[cols].ffill()
Or create an index from the subject_id column and group by it:
# newer pandas versions
df2 = df2.set_index('subject_id').groupby('subject_id').ffill().reset_index()
# older pandas versions
df2 = df2.set_index('subject_id').groupby(level=0).ffill().reset_index()
dates = df2['time_1'].dt.normalize()
df2['time_1'] += np.where(dates == df2['date'], 0, df2['date'] - dates)
df2['day'] = df2['time_1'].dt.day
df2['val'] = df2['val'].astype(int)
print (df2)
subject_id date time_1 val day month count
0 1 2173-04-03 2173-04-03 12:35:00 5 3 4.0 NaN
1 1 2173-04-03 2173-04-03 12:50:00 5 3 4.0 NaN
2 1 2173-04-04 2173-04-04 12:50:00 5 4 4.0 1.0
3 1 2173-04-05 2173-04-05 12:59:00 5 5 4.0 1.0
32 1 2173-05-04 2173-05-04 13:14:00 5 4 5.0 1.0
33 1 2173-05-05 2173-05-05 13:37:00 1 5 5.0 1.0
95 1 2173-07-06 2173-07-06 13:39:00 6 6 7.0 1.0
96 1 2173-07-07 2173-07-07 13:39:00 6 7 7.0 1.0
97 1 2173-07-08 2173-07-08 11:30:00 5 8 7.0 1.0
98 2 2173-04-08 2173-04-08 16:00:00 5 8 4.0 NaN
99 2 2173-04-09 2173-04-09 22:00:00 8 9 4.0 NaN
100 2 2173-04-10 2173-04-10 22:00:00 8 10 4.0 1.0
101 2 2173-04-11 2173-04-11 04:00:00 3 11 4.0 1.0
102 2 2173-04-12 2173-04-12 04:00:00 3 12 4.0 1.0
103 2 2173-04-13 2173-04-13 04:30:00 4 13 4.0 1.0
104 2 2173-04-14 2173-04-14 08:00:00 6 14 4.0 1.0
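As a side note, here is a minimal repro sketch of the behavior behind both solutions (my reading, not from the original answer): groupby(...).ffill() returns only the filled value columns, so subject_id survives only when it is kept in the index or reattached explicitly.
import pandas as pd

tiny = pd.DataFrame({'subject_id': [1, 1, 2, 2],
                     'val': [5.0, None, 8.0, None]})
print(tiny.groupby('subject_id').ffill())  # only 'val'; the key column is dropped
print(tiny.set_index('subject_id').groupby('subject_id').ffill().reset_index())  # key retained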
I currently have a dataframe where a UniqueID has multiple dates in another column. I want to extract the hours between consecutive dates, but ignore the weekend if the next date falls after it. For example, if one date is Friday at 12 pm
and the following date is Tuesday at 12 pm, then the difference between these two dates would be 48 hours.
Here is my dataset with the expected output:
df = pd.DataFrame({"UniqueID": ["A","A","A","B","B","B","C","C"],"Date":
["2018-12-07 10:30:00","2018-12-10 14:30:00","2018-12-11 17:30:00",
"2018-12-14 09:00:00","2018-12-18 09:00:00",
"2018-12-21 11:00:00","2019-01-01 15:00:00","2019-01-07 15:00:00"],
"ExpectedOutput": ["28.0","27.0","Nan","48.0","74.0","NaN","96.0","NaN"]})
df["Date"] = df["Date"].astype(np.datetime64)
This is what I have so far, but it includes the weekends:
df["date_diff"] = df.groupby(["UniqueID"])["Date"].apply(lambda x: x.diff()
/ np.timedelta64(1 ,'h')).shift(-1)
Thanks!
The idea is to floor the datetimes to remove the time component, get the number of business days between the start day + one day and the shifted day into an hours3 column with numpy.busday_count, and then create hours1 and hours2 columns with the hours of the partial start and end days (when those days are not weekends). Finally, sum all the hours columns together:
df["Date"] = pd.to_datetime(df["Date"])
df = df.sort_values(['UniqueID','Date'])
df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)
df["hours1"] = df["Date"].dt.floor('d')
df["hours2"] = df["shifted"].dt.floor('d')
mask = df['shifted'].notnull()
f = lambda x: np.busday_count(x['hours1'] + pd.Timedelta(1, unit='d'), x['hours2'])
df.loc[mask, 'hours3'] = df[mask].apply(f, axis=1) * 24
mask1 = df['hours1'].dt.dayofweek < 5
hours1 = df['hours1'] + pd.Timedelta(1, unit='d') - df['Date']
df['hours1'] = np.where(mask1, hours1, np.nan) / np.timedelta64(1 ,'h')
mask1 = df['hours2'].dt.dayofweek < 5
df['hours2'] = np.where(mask1, df['shifted']-df['hours2'], np.nan) / np.timedelta64(1 ,'h')
df['date_diff'] = df['hours1'].fillna(0) + df['hours2'] + df['hours3']
print (df)
UniqueID Date ExpectedOutput shifted hours1 \
0 A 2018-12-07 10:30:00 28.0 2018-12-10 14:30:00 13.5
1 A 2018-12-10 14:30:00 27.0 2018-12-11 17:30:00 9.5
2 A 2018-12-11 17:30:00 NaN NaT 6.5
3 B 2018-12-14 09:00:00 48.0 2018-12-18 09:00:00 15.0
4 B 2018-12-18 09:00:00 74.0 2018-12-21 11:00:00 15.0
5 B 2018-12-21 11:00:00 NaN NaT 13.0
6 C 2019-01-01 15:00:00 96.0 2019-01-07 15:00:00 9.0
7 C 2019-01-07 15:00:00 NaN NaT 9.0
hours2 hours3 date_diff
0 14.5 0.0 28.0
1 17.5 0.0 27.0
2 NaN NaN NaN
3 9.0 24.0 48.0
4 11.0 48.0 74.0
5 NaN NaN NaN
6 15.0 72.0 96.0
7 NaN NaN NaN
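If the endpoints of busday_count look off by one, note that the end date is exclusive (my understanding of the NumPy API, worth double-checking), which is why the solution starts counting at the floored start day plus one day:
import numpy as np

# Sat 2018-12-08 up to (but excluding) Mon 2018-12-10: weekend only
print(np.busday_count('2018-12-08', '2018-12-10'))  # 0
# Mon 2018-12-10 up to (but excluding) Mon 2018-12-17: Mon-Fri
print(np.busday_count('2018-12-10', '2018-12-17'))  # 5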
My first solution was removed for two reasons: it was not accurate and it was slow:
np.random.seed(2019)
dates = pd.date_range('2015-01-01','2018-01-01', freq='H')
df = pd.DataFrame({"UniqueID": np.random.choice(list('ABCDEFGHIJ'), size=100),
"Date": np.random.choice(dates, size=100)})
print (df)
def old(df):
df["Date"] = pd.to_datetime(df["Date"])
df = df.sort_values(['UniqueID','Date'])
df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)
def f(x):
a = pd.date_range(x['Date'], x['shifted'], freq='T')
return ((a.dayofweek < 5).sum() / 60).round()
mask = df['shifted'].notnull()
df.loc[mask, 'date_diff'] = df[mask].apply(f, axis=1)
return df
def new(df):
df["Date"] = pd.to_datetime(df["Date"])
df = df.sort_values(['UniqueID','Date'])
df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)
df["hours1"] = df["Date"].dt.floor('d')
df["hours2"] = df["shifted"].dt.floor('d')
mask = df['shifted'].notnull()
f = lambda x: np.busday_count(x['hours1'] + pd.Timedelta(1, unit='d'), x['hours2'])
df.loc[mask, 'hours3'] = df[mask].apply(f, axis=1) * 24
mask1 = df['hours1'].dt.dayofweek < 5
hours1 = df['hours1'] + pd.Timedelta(1, unit='d') - df['Date']
df['hours1'] = np.where(mask1, hours1, np.nan) / np.timedelta64(1 ,'h')
mask1 = df['hours2'].dt.dayofweek < 5
df['hours2'] = np.where(mask1, df['shifted'] - df['hours2'], np.nan) / np.timedelta64(1 ,'h')
df['date_diff'] = df['hours1'].fillna(0) + df['hours2'] + df['hours3']
return df
print (new(df))
print (old(df))
In [44]: %timeit (new(df))
22.7 ms ± 115 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [45]: %timeit (old(df))
1.01 s ± 8.03 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I have the code below:
import pandas as pd
import datetime
df=pd.read_csv("https://www.dropbox.com/s/08kuxi50d0xqnfc/demo.csv?dl=1")
df["date"]=pd.to_datetime(df["date"])
df['date'] = df.date.apply(lambda x: datetime.datetime.strftime(x,'%b')) # SHOWS date as MONTH
pvt_enroll=df.pivot_table(index='site', columns="date", values = 'baseline', aggfunc = {'baseline' : 'count'}, fill_value=0, margins=True) # Pivot_Table with enrollment by SITE by MONTH
pvt_enroll.to_csv("pivot_test.csv")
table_enroll_site_month = pd.read_csv('pivot_test.csv', encoding='latin-1')
table_enroll_site_month.rename(columns={'site':'Study Site'}, inplace=True)
table_enroll_site_month
Study Site Apr Jul Jun May All
0 A 5.0 0.0 8.0 4.0 17.0
1 B 9.0 0.0 11.0 5.0 25.0
2 C 6.0 1.0 3.0 20.0 30.0
3 D 5.0 0.0 3.0 2.0 10.0
4 E 5.0 0.0 5.0 0.0 10.0
5 All 30.0 1.0 30.0 31.0 92.0
And I wonder how to:
1. Display the months with the year, as in: Apr16 Jul16 Jun16 May16
2. Get the same table without running the pvt_enroll.to_csv("pivot_test.csv") step. I mean, can I get the same result without needing to save to a .csv file first?
I think that by using %b%y you can get the 'Apr16' etc. format.
I tried the following code, without saving to .csv:
import pandas as pd
from datetime import datetime
df=pd.read_csv("demo.csv")
df["date"]=pd.to_datetime(df["date"])
df['date'] = df['date'].apply(lambda x: datetime.strftime(x,'%b%y'))
pvt_enroll=df.pivot_table(index='site', columns="date", values = 'baseline', aggfunc = {'baseline' : 'count'}, fill_value=0, margins=True) # Pivot_Table with enrollment by SITE by MONTH
pvt_enroll.reset_index(inplace=True)
pvt_enroll.rename(columns={'site':'Study Site'}, inplace=True)
print(pvt_enroll)
And I got the output as follows
date Study Site Apr16 Jul16 Jun16 May16 All
0 A 5 0 8 4 17
1 B 9 0 11 5 25
2 C 6 1 3 20 30
3 D 5 0 3 2 10
4 E 5 0 5 0 10
5 All 30 1 30 31 92
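One caveat visible in both outputs above: string column labels make the pivot sort alphabetically (Apr16, Jul16, Jun16, May16), not chronologically. If order matters, a hedged alternative sketch (assuming df is freshly loaded with its original date column) is to pivot on a monthly Period and format the labels afterwards:
df = pd.read_csv("demo.csv")
df['date'] = pd.to_datetime(df['date']).dt.to_period('M')
pvt_enroll = df.pivot_table(index='site', columns='date', values='baseline',
                            aggfunc='count', fill_value=0, margins=True)
# Rename the Period columns to 'Apr16'-style labels; the 'All' margin stays as-is.
pvt_enroll.columns = [c.strftime('%b%y') if isinstance(c, pd.Period) else c
                      for c in pvt_enroll.columns]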
I have a dataframe like this
df
order_date amount
0 2015-10-02 1
1 2015-12-21 15
2 2015-12-24 3
3 2015-12-26 4
4 2015-12-27 5
5 2015-12-28 10
I would like to sum df["amount"] over the range from df["order_date"] to df["order_date"] + 6 days:
order_date amount sum
0 2015-10-02 1 1
1 2015-12-21 15 27 //comes from 15 + 3 + 4 + 5
2 2015-12-24 3 22 //comes from 3 + 4 + 5 + 10
3 2015-12-26 4 19
4 2015-12-27 5 15
5 2015-12-28 10 10
The data type of order_date is datetime.
I have tried to use iloc but it did not work well...
If anyone has an idea/example of how to approach this,
please kindly let me know.
If pandas rolling allowed a left-aligned window (the default is right-aligned), then the answer would be a simple one-liner: df.set_index('order_date').amount.rolling('7d', min_periods=1, align='left').sum(). However, forward-looking windows have not been implemented yet (i.e. rolling does not accept an align parameter). So the trick I came up with is to "reverse" the dates temporarily. Solution:
df.index = pd.to_datetime(pd.Timestamp.now() - df.order_date)  # pd.Timestamp.now() replaces the deprecated pd.datetime.now()
df['sum'] = df.sort_index().amount.rolling('7d', min_periods=1).sum()
df.reset_index(drop=True)
Output:
order_date amount sum
0 2015-10-02 1 1.0
1 2015-12-21 15 27.0
2 2015-12-24 3 22.0
3 2015-12-26 4 19.0
4 2015-12-27 5 15.0
5 2015-12-28 10 10.0
Expanding on my comment:
from datetime import timedelta

df['sum'] = 0
for i in range(len(df)):
    dt1 = df['order_date'][i]
    dt2 = dt1 + timedelta(days=6)
    in_range = (df['order_date'] >= dt1) & (df['order_date'] <= dt2)
    df.loc[i, 'sum'] = df.loc[in_range, 'amount'].sum()  # .loc avoids chained assignment
There's probably a much better way to do this but it works...
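For larger frames, a vectorized sketch in the same spirit (assuming order_date is sorted ascending, as in the example data): locate each row's window end with searchsorted, then difference cumulative sums.
import numpy as np

dates = df['order_date'].to_numpy()
# Index of the first row past each 6-day window; side='right' keeps the endpoint.
upper = np.searchsorted(dates, dates + np.timedelta64(6, 'D'), side='right')
csum = np.concatenate([[0], df['amount'].cumsum().to_numpy()])
df['sum'] = csum[upper] - csum[np.arange(len(df))]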
Here is my way of solving this problem. It works... (I believe there should be a much better way to do it.)
import pandas as pd

df['order_date'] = pd.to_datetime(pd.Series(df.order_date))
Temp = pd.DataFrame(pd.date_range(start='2015-10-02', end='2017-01-01'), columns=['STDate'])
Temp = Temp.merge(df, left_on='STDate', right_on='order_date', how='left')
Temp['amount'] = Temp['amount'].fillna(0)
Temp.sort_values('STDate', ascending=False, inplace=True)  # sort_values replaces the removed DataFrame.sort
Temp['rolls'] = Temp['amount'].rolling(window=7, min_periods=0).sum()  # .rolling().sum() replaces pd.rolling_sum
Temp.loc[Temp.STDate.isin(df.order_date), :].sort_values('STDate', ascending=True)
STDate Unnamed: 0 order_date amount rolls
0 2015-10-02 0.0 2015-10-02 1.0 1.0
80 2015-12-21 1.0 2015-12-21 15.0 27.0
83 2015-12-24 2.0 2015-12-24 3.0 22.0
85 2015-12-26 3.0 2015-12-26 4.0 19.0
86 2015-12-27 4.0 2015-12-27 5.0 15.0
87 2015-12-28 5.0 2015-12-28 10.0 10.0
Set order_date to be a DatetimeIndex, so that you can use df.loc[time1:time2] to get the rows in a time range, then take the amount column and sum it.
You can try:
import pandas as pd
from datetime import timedelta

df = pd.read_fwf('test2.csv')
df.order_date = pd.to_datetime(df.order_date)
df = df.set_index(pd.DatetimeIndex(df['order_date']))
sum_list = list()
for i in range(len(df)):
    start = df['order_date'].iloc[i]
    # .loc slicing on a DatetimeIndex is inclusive of both endpoints (replaces the removed .ix)
    sum_list.append(df.loc[start:start + timedelta(days=6)]['amount'].sum())
df['sum'] = sum_list
df
Output:
order_date amount sum
2015-10-02 2015-10-02 1 1
2015-12-21 2015-12-21 15 27
2015-12-24 2015-12-24 3 22
2015-12-26 2015-12-26 4 19
2015-12-27 2015-12-27 5 15
2015-12-28 2015-12-28 10 10
import datetime
df['order_date'] = pd.to_datetime(df['order_date'], format='%Y-%m-%d')
df.set_index(['order_date'], inplace=True)
# Sum rows within the range of six days in the future
d = {t: df[(df.index >= t) & (df.index <= t + datetime.timedelta(days=6))]['amount'].sum()
for t in df.index}
# Assign the summed values back to the dataframe
df['amount_sum'] = [d[t] for t in df.index]
df is now:
amount amount_sum
order_date
2015-10-02 1.0 1.0
2015-12-21 15.0 27.0
2015-12-24 3.0 22.0
2015-12-26 4.0 19.0
2015-12-27 5.0 15.0
2015-12-28 10.0 10.0