I've got the following data frame in pandas:
d = {'col_Date_Time': ['2020-08-01 00:00:00',
'2020-08-01 00:10:00',
'2020-08-01 00:15:00',
'2020-08-01 00:19:00',
'2020-08-01 01:19:00',
'2020-08-02 00:00:00',
'2020-08-02 00:15:00',
'2020-08-02 00:35:00',
'2020-08-02 01:35:00']}
df = pd.DataFrame(data=d)
df['col_Date_Time'] = pd.to_datetime(df.col_Date_Time)
I want to add another column that contains the number of minutes from the start of each day.
So, the result in this case would be:
NAN
10
15
19
79
NAN
15
35
95
import pandas as pd
import numpy as np
df = pd.DataFrame({'col_Date_Time': ['2020-08-01 00:00:00',
'2020-08-01 00:10:00',
'2020-08-01 00:15:00',
'2020-08-01 00:19:00',
'2020-08-01 01:23:00',
'2020-08-02 00:00:00',
'2020-08-02 00:15:00',
'2020-08-02 00:35:00',
'2020-08-02 06:31:00']})
df['col_Date_Time'] = pd.to_datetime(df.col_Date_Time)
df['start_day_time_stamp'] = df['col_Date_Time'].dt.date
df['mins_from_day_start'] = (df['col_Date_Time'] - pd.to_datetime(df['start_day_time_stamp'])).dt.total_seconds() / 60
df
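A more compact variant (a sketch, assuming col_Date_Time is already datetime64) uses dt.normalize(), which floors each timestamp to midnight:
df['mins_from_day_start'] = (df['col_Date_Time'] - df['col_Date_Time'].dt.normalize()).dt.total_seconds() / 60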
Let us try computing the minutes from midnight (hours * 60 + minutes) and masking each day's first row with NaN via duplicated:
s = (df.col_Date_Time.dt.hour * 60 + df.col_Date_Time.dt.minute).where(df.col_Date_Time.dt.date.duplicated())
Out[66]:
0 NaN
1 10.0
2 15.0
3 19.0
4 79.0
5 NaN
6 15.0
7 35.0
8 95.0
Name: col_Date_Time, dtype: float64
You can truncate the column to days (.dt.floor('d')), subtract the result from col_Date_Time, and save it in another column:
df["DELTA"] = df.col_Date_Time - df.col_Date_Time.dt.floor('d')
If you want it as an integer:
df["DELTA2"] = df.DELTA.dt.seconds.div(60).astype(int)
col_Date_Time DELTA DELTA2
0 2020-08-01 00:00:00 00:00:00 0
1 2020-08-01 00:10:00 00:10:00 10
2 2020-08-01 00:15:00 00:15:00 15
3 2020-08-01 00:19:00 00:19:00 19
4 2020-08-01 01:19:00 01:19:00 79
5 2020-08-02 00:00:00 00:00:00 0
6 2020-08-02 00:15:00 00:15:00 15
7 2020-08-02 00:35:00 00:35:00 35
8 2020-08-02 01:35:00 01:35:00 95
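The expected output in the question shows NaN rather than 0 on each day's first row; if that is wanted, a small follow-up sketch masks the rows that fall exactly at midnight (which here coincide with each day's first row):
df['DELTA2'] = df['DELTA2'].mask(df['DELTA2'].eq(0))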
Related
On specific dates, a metric that starts at 0 increases by a given value. Given a set of non-contiguous dates and values, is it possible to produce a column with the metric over a continuous range of days?
Input - metric changes per day
date value
02-03-2022 00:00:00 10
03-03-2022 00:00:00 0
06-03-2022 00:00:00 2
10-03-2022 00:00:00 18
Output - metric calculated for a continuous range of days (starting value = 0 unless a change already applies on the first day)
0 metric
0 2022-02-28 0
1 2022-03-01 0
2 2022-03-02 10
3 2022-03-03 10
4 2022-03-04 10
5 2022-03-05 10
6 2022-03-06 12
7 2022-03-07 12
8 2022-03-08 12
9 2022-03-09 12
10 2022-03-10 30
11 2022-03-11 30
12 2022-03-12 30
13 2022-03-13 30
Code example
import pandas as pd
df = pd.DataFrame({'date': ['02-03-2022 00:00:00',
'03-03-2022 00:00:00',
'06-03-2022 00:00:00',
'10-03-2022 00:00:00'],
'value': [10, 0, 2, 18]},
index=[0,1,2,3])
df2 = pd.DataFrame(pd.date_range(start='28-02-2022', end='13-03-2022'))
df2['metric'] = 0 # TODO
Map the values from df into df2 by date, fill missing values with 0, and then take the cumulative sum:
df['date'] = pd.to_datetime(df.date, format='%d-%m-%Y %H:%M:%S')
df2['metric'] = df2[0].map(df.set_index('date')['value']).fillna(0).cumsum()
df2
0 metric
0 2022-02-28 0.0
1 2022-03-01 0.0
2 2022-03-02 10.0
3 2022-03-03 10.0
4 2022-03-04 10.0
5 2022-03-05 10.0
6 2022-03-06 12.0
7 2022-03-07 12.0
8 2022-03-08 12.0
9 2022-03-09 12.0
10 2022-03-10 30.0
11 2022-03-11 30.0
12 2022-03-12 30.0
13 2022-03-13 30.0
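The fillna step upcasts the metric to float; if integers are wanted, as in the expected output above, one extra line converts back:
df2['metric'] = df2['metric'].astype(int)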
DataFrame.reindex is useful for this; then apply fillna and cumsum.
import pandas as pd
df = pd.DataFrame({'date': ['02-03-2022 00:00:00',
'03-03-2022 00:00:00',
'06-03-2022 00:00:00',
'10-03-2022 00:00:00'],
'value': [10, 0, 2, 18]},
index=[0,1,2,3])
df['date'] = pd.to_datetime(df.date, format='%d-%m-%Y %H:%M:%S')
res = df.set_index('date').reindex(pd.date_range(
start='2022-02-28', end='2022-03-13')).fillna(0).cumsum()\
.reset_index(drop=False).rename(columns={'index':'date',
'value':'metric'})
print(res)
date metric
0 2022-02-28 0.0
1 2022-03-01 0.0
2 2022-03-02 10.0
3 2022-03-03 10.0
4 2022-03-04 10.0
5 2022-03-05 10.0
6 2022-03-06 12.0
7 2022-03-07 12.0
8 2022-03-08 12.0
9 2022-03-09 12.0
10 2022-03-10 30.0
11 2022-03-11 30.0
12 2022-03-12 30.0
13 2022-03-13 30.0
I have data in the below format:
user timestamp flowers total_flowers
xyz 01-01-2020 00:05:00 15 15
xyz 01-01-2020 00:10:00 5 20
xyz 01-01-2020 00:15:00 21 41
xyz 01-01-2020 00:35:00 1 42
...
xyz 01-01-2020 11:45:00 57 1029
xyz 01-01-2020 11:55:00 18 1047
Expected Output:
user timestamp flowers total_flowers
xyz 01-01-2020 00:05:00 15 15
xyz 01-01-2020 00:10:00 5 20
xyz 01-01-2020 00:15:00 21 41
xyz 01-01-2020 00:20:00 0 41
xyz 01-01-2020 00:25:00 0 41
xyz 01-01-2020 00:30:00 0 41
xyz 01-01-2020 00:35:00 1 42
...
xyz 01-01-2020 11:45:00 57 1029
xyz 01-01-2020 11:50:00 0 1029
xyz 01-01-2020 11:55:00 18 1047
So I want to fill in the timestamps at a 5-minute interval, filling the flowers column with 0 and the total_flowers column with the previous value (ffill).
My efforts:
start_day = "01-01-2020"
end_day = "01-01-2020"
start_time = pd.to_datetime(f"{start_day} 00:05:00+05:30")
end_time = pd.to_datetime(f"{end_day} 23:55:00+05:30")
dates = pd.date_range(start=start_time, end=end_time, freq='5Min')
df = df.set_index('timestamp').reindex(dates).reset_index(drop=False).reindex(columns=df.columns)
How do I fill the flowers column with zeros and the total_flowers column with ffill? I am also getting NaN values in the timestamp column.
Actual Output:
user timestamp flowers total_flowers
xyz Nan 15 15
xyz Nan 5 20
xyz Nan 21 41
xyz Nan Nan Nan
xyz Nan Nan Nan
xyz Nan Nan Nan
xyz Nan 1 42
...
xyz Nan 57 1029
xyz Nan Nan Nan
xyz Nan 18 1047
Reindex and refill
If you construct the dates such that you can reindex your timestamps, you can then just do some fillna and ffill operations. I had to remove the timezone information, but you should be able to add that back if your data are timezone aware. Here's the full example using some of your data:
d = {'user': {0: 'xyz', 1: 'xyz', 2: 'xyz', 3: 'xyz'},
'timestamp': {0: pd.Timestamp('2020-01-01 00:05:00'),
1: pd.Timestamp('2020-01-01 00:10:00'),
2: pd.Timestamp('2020-01-01 00:15:00'),
3: pd.Timestamp('2020-01-01 00:35:00')},
'flowers': {0: 15, 1: 5, 2: 21, 3: 1},
'total_flowers': {0: 15, 1: 20, 2: 41, 3: 42}}
df = pd.DataFrame(d)
# user timestamp flowers total_flowers
#0 xyz 2020-01-01 00:05:00 15 15
#1 xyz 2020-01-01 00:10:00 5 20
#2 xyz 2020-01-01 00:15:00 21 41
#3 xyz 2020-01-01 00:35:00 1 42
#as you did, but with no TZ
start_day = "01-01-2020"
end_day = "01-01-2020"
start_time = pd.to_datetime(f"{start_day} 00:05:00")
end_time = pd.to_datetime(f"{end_day} 00:55:00")
dates = pd.date_range(start=start_time, end=end_time, freq='5Min', name="timestamp")
#filling the nas and reformatting
df = df.set_index('timestamp')
df = df.reindex(dates)
df['user'] = df['user'].ffill()
df['flowers'] = df['flowers'].fillna(0)
df['total_flowers'] = df['total_flowers'].ffill()
df.reset_index(inplace=True)
Output:
timestamp user flowers total_flowers
0 2020-01-01 00:05:00 xyz 15.0 15.0
1 2020-01-01 00:10:00 xyz 5.0 20.0
2 2020-01-01 00:15:00 xyz 21.0 41.0
3 2020-01-01 00:20:00 xyz 0.0 41.0
4 2020-01-01 00:25:00 xyz 0.0 41.0
5 2020-01-01 00:30:00 xyz 0.0 41.0
6 2020-01-01 00:35:00 xyz 1.0 42.0
7 2020-01-01 00:40:00 xyz 0.0 42.0
8 2020-01-01 00:45:00 xyz 0.0 42.0
9 2020-01-01 00:50:00 xyz 0.0 42.0
10 2020-01-01 00:55:00 xyz 0.0 42.0
Resample and refill
You can also use resample with asfreq() here, then do the filling as before. This is convenient for generating the dates (and should get around the timezone issue):
# resample and then fill the gaps
# same df as constructed above
df = df.set_index('timestamp')
df = df.resample('5T').asfreq()
df['user'] = df['user'].ffill()
df['flowers'] = df['flowers'].fillna(0)
df['total_flowers'] = df['total_flowers'].ffill()
df.index.name='timestamp'
df.reset_index(inplace=True)
Same output:
timestamp flowers total_flowers user
0 2020-01-01 00:05:00 15 15.0 xyz
1 2020-01-01 00:10:00 5 20.0 xyz
2 2020-01-01 00:15:00 21 41.0 xyz
3 2020-01-01 00:20:00 0 41.0 xyz
4 2020-01-01 00:25:00 0 41.0 xyz
5 2020-01-01 00:30:00 0 41.0 xyz
6 2020-01-01 00:35:00 1 42.0 xyz
I couldn't find a way to do the filling during the resampling. For instance, using
df = df.resample('5T').agg({'flowers':'sum',
'total_flowers':'ffill',
'user':'ffill'})
does not work as intended (it gets you to the same place as asfreq, but there's more room for accidentally missing columns here). This is odd, because applying ffill over the whole DataFrame does forward-fill the missing data; but we only want that for some columns, and the user column also gets dropped. Simply using asfreq and doing the filling after the fact seems fine to me with few columns.
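A compact alternative sketch (same df with timestamp as the index, as constructed above): fill per column with a dict first, then forward fill the rest:
out = (df.resample('5T').asfreq()
         .fillna({'flowers': 0})  # gaps contribute no new flowers
         .ffill())                # carries user and total_flowers forward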
crossed with #Tom
You are almost there:
df = pd.DataFrame({'user': ['xyz', 'xyz', 'xyz', 'xyz'],
'timestamp': ['01-01-2020 00:05:00', '01-01-2020 00:10:00', '01-01-2020 00:15:00', '01-01-2020 00:35:00'],
'flowers':[15, 5, 21, 1],
'total_flowers':[15, 20, 41, 42]
})
df['timestamp'] = pd.to_datetime(df['timestamp'])
r = pd.date_range(start=df['timestamp'].min(), end=df['timestamp'].max(), freq='5Min')
df = df.set_index('timestamp').reindex(r).rename_axis('timestamp').reset_index()
df['user'] = df['user'].ffill()
df['total_flowers'] = df['total_flowers'].ffill()
df['flowers'] = df['flowers'].fillna(0)
leads to the following output:
timestamp user flowers total_flowers
0 2020-01-01 00:05:00 xyz 15.0 15.0
1 2020-01-01 00:10:00 xyz 5.0 20.0
2 2020-01-01 00:15:00 xyz 21.0 41.0
3 2020-01-01 00:20:00 xyz 0.0 41.0
4 2020-01-01 00:25:00 xyz 0.0 41.0
5 2020-01-01 00:30:00 xyz 0.0 41.0
6 2020-01-01 00:35:00 xyz 1.0 42.0
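If the grid should span the whole day rather than just the observed range, as in the original attempt, build the range explicitly instead (a sketch):
r = pd.date_range(start='2020-01-01 00:05:00', end='2020-01-01 23:55:00', freq='5Min')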
How do I add future dates to a data frame? The timedelta approach below only adds the delta into an adjacent column rather than extending the rows.
import pandas as pd
from datetime import timedelta
df = pd.DataFrame({
'date': ['2001-02-01','2001-02-02','2001-02-03', '2001-02-04'],
'Monthly Value': [100, 200, 300, 400]
})
df["future_date"] = df["date"] + timedelta(days=4)
print(df)
date future_date
0 2001-02-01 00:00:00 2001-02-05 00:00:00
1 2001-02-02 00:00:00 2001-02-06 00:00:00
2 2001-02-03 00:00:00 2001-02-07 00:00:00
3 2001-02-04 00:00:00 2001-02-08 00:00:00
Desired dataframe:
date future_date
0 2001-02-01 00:00:00 2001-02-01 00:00:00
1 2001-02-02 00:00:00 2001-02-02 00:00:00
2 2001-02-03 00:00:00 2001-02-03 00:00:00
3 2001-02-04 00:00:00 2001-02-04 00:00:00
4 2001-02-05 00:00:00
5 2001-02-06 00:00:00
6 2001-02-07 00:00:00
7 2001-02-08 00:00:00
You can do the following:
# set to timestamp
df['date'] = pd.to_datetime(df['date'])
# create a future date df
ftr = (df['date'] + pd.Timedelta(4, unit='days')).to_frame()
ftr['Monthly Value'] = None
# join the future data
df1 = pd.concat([df, ftr], ignore_index=True)
date Monthly Value
0 2001-02-01 100
1 2001-02-02 200
2 2001-02-03 300
3 2001-02-04 400
4 2001-02-05 None
5 2001-02-06 None
6 2001-02-07 None
7 2001-02-08 None
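If you prefer NaN over None (keeping Monthly Value numeric rather than object dtype), a small variant sketch:
import numpy as np
ftr['Monthly Value'] = np.nan  # future rows get NaN; the column stays float after concat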
I found that this also works (using pd.concat, since df.append was removed in pandas 2.0; periods=5 with inclusive='right' yields the four dates after the last existing row):
pd.concat([df, pd.DataFrame({'date': pd.date_range(start=df.date.iloc[-1], periods=5, freq='d', inclusive='right')})], ignore_index=True)
If I understand you correctly, we can create a new dataframe spanning from the min of your dates to the max + 4 days, then concat it back using axis=1.
df['date'] = pd.to_datetime(df['date'])
fdates = pd.DataFrame(
pd.date_range(df["date"].min(), df["date"].max() + pd.DateOffset(days=4))
,columns=['future_date'])
df_new = pd.concat([df,fdates],axis=1)
print(df_new[['date','future_date','Monthly Value']])
0 2001-02-01 2001-02-01 100.0
1 2001-02-02 2001-02-02 200.0
2 2001-02-03 2001-02-03 300.0
3 2001-02-04 2001-02-04 400.0
4 NaT 2001-02-05 NaN
5 NaT 2001-02-06 NaN
6 NaT 2001-02-07 NaN
7 NaT 2001-02-08 NaN
I'd like to find faster code to achieve the same goal: for each row, compute the median of all data in the past 30 days, but if there are fewer than 5 data points, return np.nan.
import pandas as pd
import numpy as np
import datetime
def findPastVar(df, var='var' ,window=30, method='median'):
# window= # of past days
def findPastVar_apply(row):
pastVar = df[var].loc[(df['timestamp'] - row['timestamp'] < datetime.timedelta(days=0)) & (df['timestamp'] - row['timestamp'] > datetime.timedelta(days=-window))]
if len(pastVar) < 5:
return(np.nan)
if method == 'median':
return(np.median(pastVar.values))
df['past{}d_{}_median'.format(window,var)] = df.apply(findPastVar_apply,axis=1)
return(df)
df = pd.DataFrame()
df['timestamp'] = pd.date_range('1/1/2011', periods=100, freq='D')
df['timestamp'] = df.timestamp.astype(pd.Timestamp)
df['var'] = pd.Series(np.random.randn(len(df['timestamp'])))
Data looks like this. In my real data, there are gaps in time and maybe more data points in one day.
In [47]: df.head()
Out[47]:
timestamp var
0 2011-01-01 00:00:00 -0.670695
1 2011-01-02 00:00:00 0.315148
2 2011-01-03 00:00:00 -0.717432
3 2011-01-04 00:00:00 2.904063
4 2011-01-05 00:00:00 -1.092813
Desired output:
In [55]: df.head(10)
Out[55]:
timestamp var past30d_var_median
0 2011-01-01 00:00:00 -0.670695 NaN
1 2011-01-02 00:00:00 0.315148 NaN
2 2011-01-03 00:00:00 -0.717432 NaN
3 2011-01-04 00:00:00 2.904063 NaN
4 2011-01-05 00:00:00 -1.092813 NaN
5 2011-01-06 00:00:00 -2.676784 -0.670695
6 2011-01-07 00:00:00 -0.353425 -0.694063
7 2011-01-08 00:00:00 -0.223442 -0.670695
8 2011-01-09 00:00:00 0.162126 -0.512060
9 2011-01-10 00:00:00 0.633801 -0.353425
However, here is my current code's running speed:
In [49]: %timeit findPastVar(df)
1 loop, best of 3: 755 ms per loop
I need to run a large dataframe from time to time, so I want to optimize this code.
Any suggestions or comments are welcome.
New in pandas 0.19 is time-aware rolling. It can deal with missing data.
Code:
print(df.rolling('30d', on='timestamp', min_periods=5)['var'].median())
Test Code:
df = pd.DataFrame()
df['timestamp'] = pd.date_range('1/1/2011', periods=60, freq='D')
df['timestamp'] = df.timestamp.astype(pd.Timestamp)
df['var'] = pd.Series(np.random.randn(len(df['timestamp'])))
# duplicate one sample
df.timestamp.loc[50] = df.timestamp.loc[51]
# drop some data
df = df.drop(range(15, 50))
df['median'] = df.rolling(
'30d', on='timestamp', min_periods=5)['var'].median()
Results:
timestamp var median
0 2011-01-01 00:00:00 -0.639901 NaN
1 2011-01-02 00:00:00 -1.212541 NaN
2 2011-01-03 00:00:00 1.015730 NaN
3 2011-01-04 00:00:00 -0.203701 NaN
4 2011-01-05 00:00:00 0.319618 -0.203701
5 2011-01-06 00:00:00 1.272088 0.057958
6 2011-01-07 00:00:00 0.688965 0.319618
7 2011-01-08 00:00:00 -1.028438 0.057958
8 2011-01-09 00:00:00 1.418207 0.319618
9 2011-01-10 00:00:00 0.303839 0.311728
10 2011-01-11 00:00:00 -1.939277 0.303839
11 2011-01-12 00:00:00 1.052173 0.311728
12 2011-01-13 00:00:00 0.710270 0.319618
13 2011-01-14 00:00:00 1.080713 0.504291
14 2011-01-15 00:00:00 1.192859 0.688965
50 2011-02-21 00:00:00 -1.126879 NaN
51 2011-02-21 00:00:00 0.213635 NaN
52 2011-02-22 00:00:00 -1.357243 NaN
53 2011-02-23 00:00:00 -1.993216 NaN
54 2011-02-24 00:00:00 1.082374 -1.126879
55 2011-02-25 00:00:00 0.124840 -0.501019
56 2011-02-26 00:00:00 -0.136822 -0.136822
57 2011-02-27 00:00:00 -0.744386 -0.440604
58 2011-02-28 00:00:00 -1.960251 -0.744386
59 2011-03-01 00:00:00 0.041767 -0.440604
You can try rolling_median, an O(N log window) implementation using a skip list:
pd.rolling_median(df, window=30, min_periods=5)
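Note that pd.rolling_median was removed in later pandas releases; the count-based (not time-aware) equivalent today is:
df['var'].rolling(window=30, min_periods=5).median()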
I have a column in a pandas data frame looking like:
test1.Received
Out[9]:
0 01/01/2015 17:25
1 02/01/2015 11:43
2 04/01/2015 18:21
3 07/01/2015 16:17
4 12/01/2015 20:12
5 14/01/2015 11:09
6 15/01/2015 16:05
7 16/01/2015 21:02
8 26/01/2015 03:00
9 27/01/2015 08:32
10 30/01/2015 11:52
This represents a time stamp as Day Month Year Hour Minute. I would like to rearrange the date as Year Month Day Hour Minute. So that it would look like:
test1.Received
Out[9]:
0 2015/01/01 17:25
1 2015/01/02 11:43
...
Just use pd.to_datetime:
In [33]:
import pandas as pd
pd.to_datetime(df['date'])
Out[33]:
index
0 2015-01-01 17:25:00
1 2015-02-01 11:43:00
2 2015-04-01 18:21:00
3 2015-07-01 16:17:00
4 2015-12-01 20:12:00
5 2015-01-14 11:09:00
6 2015-01-15 16:05:00
7 2015-01-16 21:02:00
8 2015-01-26 03:00:00
9 2015-01-27 08:32:00
10 2015-01-30 11:52:00
Name: date, dtype: datetime64[ns]
In your case:
pd.to_datetime(test1['Received'])
should just work
If you want to change the display format, you need to parse as a datetime and then apply datetime.strftime:
In [35]:
import datetime as dt
pd.to_datetime(df['date']).apply(lambda x: dt.datetime.strftime(x, '%m/%d/%y %H:%M:%S'))
Out[35]:
index
0 01/01/15 17:25:00
1 02/01/15 11:43:00
2 04/01/15 18:21:00
3 07/01/15 16:17:00
4 12/01/15 20:12:00
5 01/14/15 11:09:00
6 01/15/15 16:05:00
7 01/16/15 21:02:00
8 01/26/15 03:00:00
9 01/27/15 08:32:00
10 01/30/15 11:52:00
Name: date, dtype: object
So the above now shows month/day/year; in your case, the following should work:
pd.to_datetime(test1['Received']).apply(lambda x: dt.datetime.strftime(x, '%y/%m/%d %H:%M:%S'))
EDIT
It looks like you need to pass the param dayfirst=True to to_datetime:
In [45]:
pd.to_datetime(df['date'], dayfirst=True).apply(lambda x: dt.datetime.strftime(x, '%m/%d/%y %H:%M:%S'))
Out[45]:
index
0 01/01/15 17:25:00
1 01/02/15 11:43:00
2 01/04/15 18:21:00
3 01/07/15 16:17:00
4 01/12/15 20:12:00
5 01/14/15 11:09:00
6 01/15/15 16:05:00
7 01/16/15 21:02:00
8 01/26/15 03:00:00
9 01/27/15 08:32:00
10 01/30/15 11:52:00
Name: date, dtype: object
Pandas has this built in; you can specify your datetime format:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html
You can also use infer_datetime_format:
>>> import pandas as pd
>>> i = pd.date_range('20000101',periods=100)
>>> df = pd.DataFrame(dict(year = i.year, month = i.month, day = i.day))
>>> pd.to_datetime(df.year*10000 + df.month*100 + df.day, format='%Y%m%d')
0 2000-01-01
1 2000-01-02
...
98 2000-04-08
99 2000-04-09
Length: 100, dtype: datetime64[ns]
You can use the datetime functions to convert from and to strings:
from datetime import datetime
# converts the day-first string to a datetime
parsed = datetime.strptime(date_string, '%d/%m/%Y %H:%M')
# converts the datetime back to your requested string format
formatted = parsed.strftime('%Y/%m/%d %H:%M:%S')
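Applied column-wise, a sketch (assuming test1 as above):
test1['Received'] = test1['Received'].apply(lambda s: datetime.strptime(s, '%d/%m/%Y %H:%M').strftime('%Y/%m/%d %H:%M'))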