pandas merging based on timestamps which do not match exactly - python

What methods are available to merge columns which have timestamps that do not exactly match?
DF1:
date start_time employee_id session_id
01/01/2016 01/01/2016 06:03:13 7261824 871631182
DF2:
date start_time employee_id session_id
01/01/2016 01/01/2016 06:03:37 7261824 871631182
I could join on ['date', 'employee_id', 'session_id'], but sometimes the same employee will have multiple identical sessions on the same date, which causes duplicates. I could drop the rows where this takes place, but I would lose valid sessions if I did.
Is there an efficient way to join if the timestamp of DF1 is <5 minutes from the timestamp of DF2, and the session_id and employee_id also match? If there is a matching record, its timestamp will always be slightly later than DF1's, because an event is triggered at some future point.
['employee_id', 'session_id', 'timestamp<5minutes']
Edit - I assumed someone would have run into this issue before.
I was thinking of doing this:
Take my timestamp on each dataframe
Create a column which is the timestamp + 5 minutes (rounded)
Create a column which is the timestamp - 5 minutes (rounded)
Create a 10 minute interval string to join the files on
df1['low_time'] = df1['start_time'] - timedelta(minutes=5)
df1['high_time'] = df1['start_time'] + timedelta(minutes=5)
df1['interval_string'] = df1['low_time'].astype(str) + df1['high_time'].astype(str)
Does someone know how to round those 5 minute intervals to the nearest 5 minute mark?
02:59:37 - 5 min = 02:55:00
02:59:37 + 5 min = 03:05:00
interval_string = '02:55:00-03:05:00'
pd.merge(df1, df2, how='left', on=['employee_id', 'session_id', 'date', 'interval_string'])
Does anyone know how to round the time like that? This seems like it could work. You still match based on the date, employee, and session, and then you look for times which fall within the same 10-minute interval or range.

I would try using this method in pandas:
pandas.merge_asof()
The parameters of interest for you would be direction, tolerance, left_on, and right_on.
Building off @Igor's answer:
import pandas as pd
from pandas import read_csv
from io import StringIO
# datetime column (combination of date + start_time)
dtc = [['date', 'start_time']]
# index column (above combination)
ixc = 'date_start_time'
df1 = read_csv(StringIO(u'''
date,start_time,employee_id,session_id
01/01/2016,02:03:00,7261824,871631182
01/01/2016,06:03:00,7261824,871631183
01/01/2016,11:01:00,7261824,871631184
01/01/2016,14:01:00,7261824,871631185
'''), parse_dates=dtc)
df2 = read_csv(StringIO(u'''
date,start_time,employee_id,session_id
01/01/2016,02:03:00,7261824,871631182
01/01/2016,06:05:00,7261824,871631183
01/01/2016,11:04:00,7261824,871631184
01/01/2016,14:10:00,7261824,871631185
'''), parse_dates=dtc)
df1['date_start_time'] = pd.to_datetime(df1['date_start_time'])
df2['date_start_time'] = pd.to_datetime(df2['date_start_time'])
# set date_start_time as the index while keeping the column, so you can validate the merging logic
df1.index = df1['date_start_time']
df2.index = df2['date_start_time']
# the magic happens below, check the direction and tolerance arguments
tol = pd.Timedelta('5 minute')
pd.merge_asof(left=df1, right=df2, right_index=True, left_index=True, direction='nearest', tolerance=tol)
output
date_start_time date_start_time_x employee_id_x session_id_x date_start_time_y employee_id_y session_id_y
2016-01-01 02:03:00 2016-01-01 02:03:00 7261824 871631182 2016-01-01 02:03:00 7261824.0 871631182.0
2016-01-01 06:03:00 2016-01-01 06:03:00 7261824 871631183 2016-01-01 06:05:00 7261824.0 871631183.0
2016-01-01 11:01:00 2016-01-01 11:01:00 7261824 871631184 2016-01-01 11:04:00 7261824.0 871631184.0
2016-01-01 14:01:00 2016-01-01 14:01:00 7261824 871631185 NaT NaN NaN
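Note that the example above matches on the timestamp alone. If employee_id and session_id must also agree exactly (as in the question), merge_asof accepts a by argument. A minimal sketch building on the frames above (nothing assumed beyond the columns already defined; both frames must be sorted by the merge key):
# drop the duplicate index to avoid a name clash with the date_start_time column,
# sort by the merge key, then require exact equality on the id columns via by=
left = df1.reset_index(drop=True).sort_values('date_start_time')
right = df2.reset_index(drop=True).sort_values('date_start_time')
pd.merge_asof(left, right,
              on='date_start_time',
              by=['employee_id', 'session_id'],
              direction='nearest',
              tolerance=pd.Timedelta('5 minutes'))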

Consider the following mini-version of your problem:
from io import StringIO
from pandas import read_csv, to_datetime
# how close do sessions have to be to be considered equal? (in minutes)
threshold = 5
# datetime column (combination of date + start_time)
dtc = [['date', 'start_time']]
# index column (above combination)
ixc = 'date_start_time'
df1 = read_csv(StringIO(u'''
date,start_time,employee_id,session_id
01/01/2016,02:03:00,7261824,871631182
01/01/2016,06:03:00,7261824,871631183
01/01/2016,11:01:00,7261824,871631184
01/01/2016,14:01:00,7261824,871631185
'''), parse_dates=dtc)
df2 = read_csv(StringIO(u'''
date,start_time,employee_id,session_id
01/01/2016,02:03:00,7261824,871631182
01/01/2016,06:05:00,7261824,871631183
01/01/2016,11:04:00,7261824,871631184
01/01/2016,14:10:00,7261824,871631185
'''), parse_dates=dtc)
which gives
>>> df1
date_start_time employee_id session_id
0 2016-01-01 02:03:00 7261824 871631182
1 2016-01-01 06:03:00 7261824 871631183
2 2016-01-01 11:01:00 7261824 871631184
3 2016-01-01 14:01:00 7261824 871631185
>>> df2
date_start_time employee_id session_id
0 2016-01-01 02:03:00 7261824 871631182
1 2016-01-01 06:05:00 7261824 871631183
2 2016-01-01 11:04:00 7261824 871631184
3 2016-01-01 14:10:00 7261824 871631185
You would like to treat df2[0:3] as duplicates of df1[0:3] when merging (since they are respectively less than 5 minutes apart), but treat df1[3] and df2[3] as separate sessions.
Solution 1: Interval Matching
This is essentially what you are suggesting in your edit. You want to map timestamps in both tables to a 10-minute interval centered on the timestamp rounded to the nearest 5 minutes.
Each interval can be represented uniquely by its midpoint, so you can merge the data frames on the timestamp rounded to the nearest 5 minutes. For example:
import numpy as np
# half-threshold in nanoseconds
threshold_ns = threshold * 60 * 1e9
# compute "interval" to which each session belongs
df1['interval'] = to_datetime(np.round(df1.date_start_time.astype(np.int64) / threshold_ns) * threshold_ns)
df2['interval'] = to_datetime(np.round(df2.date_start_time.astype(np.int64) / threshold_ns) * threshold_ns)
# join
cols = ['interval', 'employee_id', 'session_id']
print(df1.merge(df2, on=cols, how='outer')[cols])
which prints
interval employee_id session_id
0 2016-01-01 02:05:00 7261824 871631182
1 2016-01-01 06:05:00 7261824 871631183
2 2016-01-01 11:00:00 7261824 871631184
3 2016-01-01 14:00:00 7261824 871631185
4 2016-01-01 11:05:00 7261824 871631184
5 2016-01-01 14:10:00 7261824 871631185
Note that this is not totally correct. The sessions df1[2] and df2[2] are not treated as duplicates although they are only 3 minutes apart. This is because they were on different sides of the interval boundary.
Solution 2: One-to-one matching
Here is another approach which depends on the condition that sessions in df1 have either zero or one duplicates in df2.
We replace timestamps in df1 with the closest timestamp in df2 which matches on employee_id and session_id and is less than 5 minutes away.
from datetime import timedelta
# get closest match from "df2" to row from "df1" (as long as it's below the threshold)
def closest(row):
    matches = df2.loc[(df2.employee_id == row.employee_id) &
                      (df2.session_id == row.session_id)]
    deltas = matches.date_start_time - row.date_start_time
    deltas = deltas.loc[deltas <= timedelta(minutes=threshold)]
    try:
        return matches.loc[deltas.idxmin()]
    except ValueError:  # no items
        return row
# replace timestamps in "df1" with closest timestamps in "df2"
df1 = df1.apply(closest, axis=1)
# join
cols = ['date_start_time', 'employee_id', 'session_id']
print(df1.merge(df2, on=cols, how='outer')[cols])
which prints
date_start_time employee_id session_id
0 2016-01-01 02:03:00 7261824 871631182
1 2016-01-01 06:05:00 7261824 871631183
2 2016-01-01 11:04:00 7261824 871631184
3 2016-01-01 14:01:00 7261824 871631185
4 2016-01-01 14:10:00 7261824 871631185
This approach is significantly slower, since you have to search through the entirety of df2 for each row in df1. What I have written can probably be optimized further, but this will still take a long time on large datasets.
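If the apply-based approach is too slow, one possible optimization (a sketch, not part of the original answer) is to group df2 by the join keys once, so each df1 row only scans the handful of rows that share its employee_id and session_id:
# build a lookup of df2 rows per (employee_id, session_id) pair
groups = {key: grp for key, grp in df2.groupby(['employee_id', 'session_id'])}

def closest_grouped(row):
    matches = groups.get((row.employee_id, row.session_id))
    if matches is None:
        return row
    deltas = matches.date_start_time - row.date_start_time
    deltas = deltas.loc[deltas <= timedelta(minutes=threshold)]
    try:
        return matches.loc[deltas.idxmin()]
    except ValueError:  # no candidate within the threshold
        return row

df1 = df1.apply(closest_grouped, axis=1)  # same replacement step as above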

I would suggest using the built-in pandas Series.dt.round function to round both dataframes to a common time, for example to every 5 minutes. That way the times will always be of the form 01:00:00, 01:05:00 and so on, and both dataframes will have comparable time values to merge on.
Please see the documentation and examples here: pandas.Series.dt.round
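A minimal sketch of that suggestion applied to the question's columns (assuming start_time has already been parsed with pd.to_datetime; 'rounded' is just an illustrative column name):
df1['rounded'] = df1['start_time'].dt.round('5min')
df2['rounded'] = df2['start_time'].dt.round('5min')
merged = df1.merge(df2, on=['rounded', 'employee_id', 'session_id'], how='left')
Keep in mind this has the same boundary issue as the interval approach above: two timestamps 3 minutes apart can still round to different 5-minute marks.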

Related

Joining two dfs based on timestamp column [duplicate]


How to merge 3 Pandas Dataframes based on Timestamp?

I have three dataframes in Pandas, say df1, df2 and df3. The first column of all dataframes is the Timestamp (DateTime format like 2017-01-01 12:30:00 etc.). Here is an example of each one's first column:
df1 TimeStamp
2016-01-01 12:00:00
2016-01-01 12:10:00
.....
df2 TimeStamp
2016-01-01 12:00:00
2016-01-01 12:10:00
.....
df3 TimeStamp
2016-13-01 12:00:00
2016-13-01 12:30:00
.....
As you can see, the first two are at 10-minute intervals, while the third one is at 30-minute intervals. What I would like to do is merge all 3 dataframes together, such that where there is no exact match due to non-available data (like 12:10:00 not being available for the 3rd dataframe), it would be considered as 12:00:00 (the preceding measurement) for merging purposes. (But of course, the Date should be the same.) Note that all the dataframes have different sizes, but I would like to merge them based on Timestamp for analytical purposes. Thank you!
DESIRED RESULT:
df_final TimeStamp .. Columns of df1 Columns of df2 Columns of df3
2016-13-01 12:00:00
2016-13-01 12:10:00
2016-13-01 12:20:00
.....
MORE DETAILS BASED ON ANSWER SUGGESTED
Firstly, as my dataframes (all 3) did not have the TimeStamps as their index, but as columns, I set the index of each to its TimeStamp column:
df1.index = df1.TimeStamp
df2.index = df2.TimeStamp
df3.index = df3.TimeStamp
On using this
u_index = df3.index.union(df2.index.union(df1.index))
I strangely get an output which is not at the regular 10-minute intervals needed.
Index(['2016-01-01 00:00:00.000', '2016-01-01 00:00:00.000',
       '2016-01-01 00:00:00.000', '2016-01-01 00:00:00.000',
       ...
       '2017-12-31 23:50:00.000', '2017-12-31 23:50:00.000',
       '2017-12-31 23:50:00.000', '2017-12-31 23:50:00.000'],
      dtype='object', name='TimeStamp', length=3199372)
Accordingly, the final df1_n dataframe is at 30-minute intervals and not 10 minutes (since the union of the indices was not done properly). I think something is going wrong here, and once the suggested Step 2 (u_index) works properly, merging the dataframes will be easy.
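One likely cause (an assumption, since the dtypes are not shown above): the TimeStamp columns are still plain strings, so the union is taken over an object-dtype index. Parsing them with pd.to_datetime before setting the index should give a proper DatetimeIndex, for example:
# parse the string timestamps first, then set them as the index
for d in (df1, df2, df3):
    d['TimeStamp'] = pd.to_datetime(d['TimeStamp'])
    d.index = d['TimeStamp']
u_index = df3.index.union(df2.index.union(df1.index))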
So I'm not 100% sure whether what you asked for is how to fill in the missing values with the next valid observation after merging the three dataframes.
If so, this is the quickest way I found to do it (not the most elegant...):
create a new index which is the union of the three indexes (this will result in timestamps at intervals of 10 minutes in your case).
reindex all three dfs according to the new index while filling in missing values separately.
merge columns of the three dfs (which will be easy since after step NO.2 they will have the same index).
taking a portion of the data:
df1
Out[48]:
val_1
TimeStamp
2016-01-01 12:00:00 11
2016-01-01 12:10:00 12
df2
Out[49]:
val_2
TimeStamp
2016-01-01 12:00:00 21
2016-01-01 12:10:00 22
df3
Out[50]:
val_3
TimeStamp
2016-01-01 12:00:00 31
2016-13-01 12:30:00 32
step NO.1
u_index = df3.index.union(df2.index.union(df1.index))
u_index
Out[38]: Index(['2016-01-01 12:00:00', '2016-01-01 12:10:00', '2016-13-01 12:30:00'], dtype='object', name='TimeStamp')
step NO.2
df3_n = df3.reindex(index=u_index,method='bfill')
df2_n = df2.reindex(index=u_index,method='bfill')
df1_n = df1.reindex(index=u_index,method='bfill')
step NO.3
df1_n.merge(df2_n,on='TimeStamp').merge(df3_n,on='TimeStamp')
Out[47]:
val_1 val_2 val_3
TimeStamp
2016-01-01 12:00:00 11.0 21.0 31
2016-01-01 12:10:00 12.0 22.0 32
2016-13-01 12:30:00 NaN NaN 32
You might need to adjust the last row, since it has no following row to fill values from, but that's pretty much it.
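Also note that the question asks for the preceding measurement to be used when a timestamp is missing from a dataframe; if that is the intent, forward-filling is the direct counterpart of the back-fill shown above (a sketch under that assumption):
df3_n = df3.reindex(index=u_index, method='ffill')
df2_n = df2.reindex(index=u_index, method='ffill')
df1_n = df1.reindex(index=u_index, method='ffill')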

Split 1 column into 2 columns pandas dataframe AttributeError .str [duplicate]

I have a pandas dataframe with over 1000 timestamps (below) that I would like to loop through:
2016-02-22 14:59:44.561776
I'm having a hard time splitting this timestamp into 2 columns: 'date' and 'time'. The date format can stay the same, but the time needs to be converted to CST (including milliseconds).
Thanks for the help
Had same problem and this worked for me.
Suppose the date column in your dataset is called "date"
import pandas as pd
df = pd.read_csv(file_path)
df['Dates'] = pd.to_datetime(df['date']).dt.date
df['Time'] = pd.to_datetime(df['date']).dt.time
This will give you two columns, "Dates" and "Time", with the split dates.
I'm not sure why you would want to do this in the first place, but if you really must...
df = pd.DataFrame({'my_timestamp': pd.date_range('2016-1-1 15:00', periods=5)})
>>> df
my_timestamp
0 2016-01-01 15:00:00
1 2016-01-02 15:00:00
2 2016-01-03 15:00:00
3 2016-01-04 15:00:00
4 2016-01-05 15:00:00
df['new_date'] = [d.date() for d in df['my_timestamp']]
df['new_time'] = [d.time() for d in df['my_timestamp']]
>>> df
my_timestamp new_date new_time
0 2016-01-01 15:00:00 2016-01-01 15:00:00
1 2016-01-02 15:00:00 2016-01-02 15:00:00
2 2016-01-03 15:00:00 2016-01-03 15:00:00
3 2016-01-04 15:00:00 2016-01-04 15:00:00
4 2016-01-05 15:00:00 2016-01-05 15:00:00
The conversion to CST is more tricky. I assume that the current timestamps are 'unaware', i.e. they do not have a timezone attached? If not, how would you expect to convert them?
For more details:
https://docs.python.org/2/library/datetime.html
How to make an unaware datetime timezone aware in python
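If the timestamps really are naive, here is one hedged sketch of the CST conversion (assuming the naive values actually represent UTC; if they are local to another zone, localize to that zone instead; 'my_timestamp_cst' is just an illustrative column name):
df['my_timestamp_cst'] = (df['my_timestamp']
                          .dt.tz_localize('UTC')
                          .dt.tz_convert('US/Central'))
df['new_time'] = df['my_timestamp_cst'].dt.time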
EDIT
An alternative method that only loops once across the timestamps instead of twice:
new_dates, new_times = zip(*[(d.date(), d.time()) for d in df['my_timestamp']])
df = df.assign(new_date=new_dates, new_time=new_times)
The easiest way is to use the pandas.Series dt accessor, which works on columns with a datetime dtype (see pd.to_datetime). For this case, pd.date_range creates an example column with a datetime dtype, therefore use .dt.date and .dt.time:
df = pd.DataFrame({'full_date': pd.date_range('2016-1-1 10:00:00.123', periods=10, freq='5H')})
df['date'] = df['full_date'].dt.date
df['time'] = df['full_date'].dt.time
In [166]: df
Out[166]:
full_date date time
0 2016-01-01 10:00:00.123 2016-01-01 10:00:00.123000
1 2016-01-01 15:00:00.123 2016-01-01 15:00:00.123000
2 2016-01-01 20:00:00.123 2016-01-01 20:00:00.123000
3 2016-01-02 01:00:00.123 2016-01-02 01:00:00.123000
4 2016-01-02 06:00:00.123 2016-01-02 06:00:00.123000
5 2016-01-02 11:00:00.123 2016-01-02 11:00:00.123000
6 2016-01-02 16:00:00.123 2016-01-02 16:00:00.123000
7 2016-01-02 21:00:00.123 2016-01-02 21:00:00.123000
8 2016-01-03 02:00:00.123 2016-01-03 02:00:00.123000
9 2016-01-03 07:00:00.123 2016-01-03 07:00:00.123000
If your timestamps are already in pandas format (not string), then:
df["date"] = df["timestamp"].date
dt["time"] = dt["timestamp"].time
If your timestamp is a string, you can parse it using the datetime module:
from datetime import datetime
data1["timestamp"] = df["timestamp"].apply(lambda x: \
datetime.strptime(x,"%Y-%m-%d %H:%M:%S.%f"))
Source:
http://pandas.pydata.org/pandas-docs/stable/timeseries.html
If your timestamp is a string, you can convert it to a datetime object:
from datetime import datetime
timestamp = '2016-02-22 14:59:44.561776'
dt = datetime.strptime(timestamp, '%Y-%m-%d %H:%M:%S.%f')
From then on you can bring it to whatever format you like.
Try
s = '2016-02-22 14:59:44.561776'
date,time = s.split()
then convert time as needed.
If you want to further split the time,
hour, minute, second = time.split(':')
try this:
def time_date(datetime_obj):
    date_time = datetime_obj.split(' ')
    time = date_time[1].split('.')
    return date_time[0], time[0]
In addition to @Alexander's answer, if you want a one-liner:
df['new_date'],df['new_time'] = zip(*[(d.date(), d.time()) for d in df['my_timestamp']])
If your timestamp is a string, you can convert it to a pandas timestamp before splitting it.
#convert to pandas timestamp
data["old_date"] = pd.to_datetime(data.old_date)
#split columns
data["new_date"] = data["old_date"].dt.date
data["new_time"] = data["old_date"].dt.time

Collapsing rows with overlapping dates

I'm trying to collapse the below dataframe into rows containing continuous time periods by id. Continuous means that, within the same id, the start_date is either less than, equal or at most one day greater than the end_date of the previous row (the data is already sorted by id, start_date and end_date). All rows that are continuous should be output as a single row, with start_date being the minimum start_date (i.e. the start_date of the first row in the continuous set) and end_date being the maximum end_date from the continuous set of rows.
Please see the desired output at the bottom.
The only way I can think of approaching this is by parsing the dataframe row by row, but I was wondering if there is a more pythonic and elegant way to do it.
id start_date end_date
1 2017-01-01 2017-01-15
1 2017-01-12 2017-01-24
1 2017-01-25 2017-02-03
1 2017-02-05 2017-02-14
1 2017-02-16 2017-02-28
2 2017-01-01 2017-01-19
2 2017-01-24 2017-02-07
2 2017-02-07 2017-02-20
Here is the code to generate the input dataframe:
import numpy as np
import pandas as pd
start_date = ['2017-01-01','2017-01-12','2017-01-25','2017-02-05','2017-02-16',
'2017-01-01','2017-01-24','2017-02-07']
end_date = ['2017-01-15','2017-01-24','2017-02-03','2017-02-14','2017-02-28',
'2017-01-19','2017-02-07','2017-02-20']
df = pd.DataFrame({'id': [1,1,1,1,1,2,2,2],
'start_date': pd.to_datetime(start_date, format='%Y-%m-%d'),
'end_date': pd.to_datetime(end_date, format='%Y-%m-%d')})
Desired output:
id start_date end_date
1 2017-01-01 2017-02-03
1 2017-02-05 2017-02-14
1 2017-02-16 2017-02-28
2 2017-01-01 2017-01-19
2 2017-01-24 2017-02-20
def f(grp):
    # define a list to collect valid start and end ranges
    d = []
    (
        # append a new row if the start date is at least 2 days greater than the end date of the previous row,
        # otherwise update the last row's end date with the current row's end date.
        grp.reset_index(drop=True)
        .apply(lambda x: d.append({x.start_date: x.end_date})
               if x.name == 0 or (x.start_date - pd.DateOffset(1)) > grp.iloc[x.name - 1].end_date
               else d[-1].update({list(d[-1].keys())[0]: x.end_date}),
               axis=1)
    )
    # reconstruct a df using only valid start and end date pairs.
    return pd.DataFrame([[list(e.keys())[0], list(e.values())[0]] for e in d],
                        columns=['start_date', 'end_date'])

df.groupby('id').apply(f).reset_index().drop('level_1', 1)
Out[467]:
id start_date end_date
0 1 2017-01-01 2017-02-03
1 1 2017-02-05 2017-02-14
2 1 2017-02-16 2017-02-28
3 2 2017-01-01 2017-01-19
4 2 2017-01-24 2017-02-20
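A more vectorized sketch (an alternative take, not derived from the answer above): flag each row whose start_date is more than one day after the previous row's end_date within the same id, label continuous blocks with a cumulative sum of those flags, and aggregate per (id, block). It relies on the rows already being sorted by id, start_date and end_date, as stated in the question.
# True where a new continuous block starts within each id
gap = df['start_date'] > df.groupby('id')['end_date'].shift() + pd.Timedelta(days=1)
block = gap.cumsum().rename('block')
out = (df.groupby(['id', block])
         .agg(start_date=('start_date', 'min'), end_date=('end_date', 'max'))
         .reset_index()
         .drop(columns='block'))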

adding column with per-row computed time difference from group start?

(newbie to python and pandas)
I have a data set of 15 to 20 million rows, each row is a time-indexed observation of a time a 'user' was seen, and I need to analyze the visit-per-day patterns of each user, normalized to their first visit. So, I'm hoping to plot with an X axis of "days after first visit" and a Y axis of "visits by this user on this day", i.e., I need to get a series indexed by a timedelta and with values of visits in the period ending with that delta [0:1, 3:5, 4:2, 6:8,] But I'm stuck very early ...
I start with something like this:
import pandas as pd
from pandas import DataFrame, Series

rng = pd.to_datetime(['2000-01-01 08:00', '2000-01-02 08:00',
'2000-01-01 08:15', '2000-01-02 18:00',
'2000-01-02 17:00', '2000-03-01 08:00',
'2000-03-01 08:20','2000-01-02 18:00'])
uid=Series(['u1','u2','u1','u2','u1','u2','u2','u3'])
misc=Series(['','x1','A123','1.23','','','','u3'])
df = DataFrame({'uid':uid,'misc':misc,'ts':rng})
df=df.set_index(df.ts)
grouped = df.groupby('uid')
firstseen = grouped.first()
The ts values are unique to each uid, but can be duplicated (two uid can be seen at the same time, but any one uid is seen only once at any one timestamp)
The first step is (I think) to add a new column to the DataFrame, showing for each observation what the timedelta is back to the first observation for that user. But, I'm stuck getting that column in the DataFrame. The simplest thing I tried gives me an obscure-to-newbie error message:
df['sinceseen'] = df.ts - firstseen.ts[df.uid]
...
ValueError: cannot reindex from a duplicate axis
So I tried a brute-force method:
def f(row):
    return row.ts - firstseen.ts[row.uid]
df['sinceseen'] = Series([{idx:f(row)} for idx, row in df.iterrows()], dtype=timedelta)
In this attempt, df gets a sinceseen but it's all NaN and shows a type of float for type(df.sinceseen[0]) - though, if I just print the Series (in iPython) it generates a nice list of timedeltas.
I'm working back and forth through "Python for Data Analysis" and it seems like apply() should work, but
def fg(ugroup):
    ugroup['sinceseen'] = ugroup.index - ugroup.index.min()
    return ugroup
df = df.groupby('uid').apply(fg)
gives me a TypeError on the "ugroup.index - ugroup.index.min()" even though each of the two operands is a Timestamp.
So, I'm flailing - can someone point me at the "pandas" way to get to the data structure I need?
Does this help you get started?
>>> df = DataFrame({'uid':uid,'misc':misc,'ts':rng})
>>> df = df.sort_values(["uid", "ts"])
>>> df["since_seen"] = df.groupby("uid")["ts"].apply(lambda x: x - x.iloc[0])
>>> df
misc ts uid since_seen
0 2000-01-01 08:00:00 u1 0 days, 00:00:00
2 A123 2000-01-01 08:15:00 u1 0 days, 00:15:00
4 2000-01-02 17:00:00 u1 1 days, 09:00:00
1 x1 2000-01-02 08:00:00 u2 0 days, 00:00:00
3 1.23 2000-01-02 18:00:00 u2 0 days, 10:00:00
5 2000-03-01 08:00:00 u2 59 days, 00:00:00
6 2000-03-01 08:20:00 u2 59 days, 00:20:00
7 u3 2000-01-02 18:00:00 u3 0 days, 00:00:00
[8 rows x 4 columns]
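Building on the since_seen column to get the series the question describes (visits per user per day since first visit), one hedged follow-up sketch, assuming whole days are the desired bucket:
# count observations per user per whole day since that user's first visit
visits_per_day = df.groupby(["uid", df["since_seen"].dt.days]).size()
visits_per_day.loc["u2"]  # visit counts for user u2, indexed by days since first visit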
