Converting pandas date column into seconds elapsed - python

I have a pandas dataframe with multiple columns, one of which holds datetime64[ns] data. The time is in HH:MM:SS format. How can I convert this column of dates into a column of seconds elapsed? For example, if the time is 10:00:00, in seconds that would be 36000. The seconds should be float64.
Example data column

New Answer
Convert your text to Timedelta
df['Origin Time(Local)'] = pd.to_timedelta(df['Origin Time(Local)'])
df['Seconds'] = df['Origin Time(Local)'].dt.total_seconds()
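For example, a quick sketch on made-up data (the column name just mirrors the snippet above); the resulting Seconds column is float64:
import pandas as pd
df = pd.DataFrame({'Origin Time(Local)': ['10:00:00', '00:30:00', '23:59:59']})
df['Origin Time(Local)'] = pd.to_timedelta(df['Origin Time(Local)'])
df['Seconds'] = df['Origin Time(Local)'].dt.total_seconds()
print(df)
#   Origin Time(Local)  Seconds
# 0    0 days 10:00:00  36000.0
# 1    0 days 00:30:00   1800.0
# 2    0 days 23:59:59  86399.0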
Old Answer
Consider the dataframe df
df = pd.DataFrame(dict(Date=pd.date_range('2017-03-01', '2017-03-02', freq='2H')))
Date
0 2017-03-01 00:00:00
1 2017-03-01 02:00:00
2 2017-03-01 04:00:00
3 2017-03-01 06:00:00
4 2017-03-01 08:00:00
5 2017-03-01 10:00:00
6 2017-03-01 12:00:00
7 2017-03-01 14:00:00
8 2017-03-01 16:00:00
9 2017-03-01 18:00:00
10 2017-03-01 20:00:00
11 2017-03-01 22:00:00
12 2017-03-02 00:00:00
Subtract the start of each timestamp's day (the timestamp floored to 'D') and use total_seconds. total_seconds is an attribute of a Timedelta, and we get a series of Timedeltas by taking the difference between two series of Timestamps.
(df.Date - df.Date.dt.floor('D')).dt.total_seconds()
# equivalent to
# (df.Date - pd.to_datetime(df.Date.dt.date)).dt.total_seconds()
0 0.0
1 7200.0
2 14400.0
3 21600.0
4 28800.0
5 36000.0
6 43200.0
7 50400.0
8 57600.0
9 64800.0
10 72000.0
11 79200.0
12 0.0
Name: Date, dtype: float64
Put it in a new column
df.assign(seconds=(df.Date - df.Date.dt.floor('D')).dt.total_seconds())
Date seconds
0 2017-03-01 00:00:00 0.0
1 2017-03-01 02:00:00 7200.0
2 2017-03-01 04:00:00 14400.0
3 2017-03-01 06:00:00 21600.0
4 2017-03-01 08:00:00 28800.0
5 2017-03-01 10:00:00 36000.0
6 2017-03-01 12:00:00 43200.0
7 2017-03-01 14:00:00 50400.0
8 2017-03-01 16:00:00 57600.0
9 2017-03-01 18:00:00 64800.0
10 2017-03-01 20:00:00 72000.0
11 2017-03-01 22:00:00 79200.0
12 2017-03-02 00:00:00 0.0

If the column is already a Timedelta, this works directly:
df['time'].dt.total_seconds()

Related

How to use pandas Grouper to get sum of values within each hour

I have the following table:
Hora_Retiro count_uses
0 00:00:18 1
1 00:00:34 1
2 00:02:27 1
3 00:03:13 1
4 00:06:45 1
... ... ...
748700 23:58:47 1
748701 23:58:49 1
748702 23:59:11 1
748703 23:59:47 1
748704 23:59:56 1
And I want to group all values within each hour, so I can see the total number of uses per hour (00:00:00 - 23:00:00)
I have the following code:
hora_pico_aug= hora_pico.groupby(pd.Grouper(key="Hora_Retiro",freq='H')).count()
Hora_Retiro column is of timedelta64[ns] type
Which gives the following output:
count_uses
Hora_Retiro
00:00:02 2566
01:00:02 602
02:00:02 295
03:00:02 5
04:00:02 10
05:00:02 4002
06:00:02 16075
07:00:02 39410
08:00:02 76272
09:00:02 56721
10:00:02 36036
11:00:02 32011
12:00:02 33725
13:00:02 41032
14:00:02 50747
15:00:02 50338
16:00:02 42347
17:00:02 54674
18:00:02 76056
19:00:02 57958
20:00:02 34286
21:00:02 22509
22:00:02 13894
23:00:02 7134
However, the index column starts at 00:00:02, and I want it to start at 00:00:00 and then go in one-hour intervals. Something like this:
count_uses
Hora_Retiro
00:00:00 2565
01:00:00 603
02:00:00 295
03:00:00 5
04:00:00 10
05:00:00 4002
06:00:00 16075
07:00:00 39410
08:00:00 76272
09:00:00 56721
10:00:00 36036
11:00:00 32011
12:00:00 33725
13:00:00 41032
14:00:00 50747
15:00:00 50338
16:00:00 42347
17:00:00 54674
18:00:00 76056
19:00:00 57958
20:00:00 34286
21:00:00 22509
22:00:00 13894
23:00:00 7134
How can I make it start at 00:00:00?
Thanks for the help!
You can create an hour column from the Hora_Retiro column.
df['hour'] = df['Hora_Retiro'].dt.hour
And then group by that hour:
gpby_df = df.groupby('hour')['count_uses'].sum().reset_index()
gpby_df['hour'] = pd.to_datetime(gpby_df['hour'], format='%H').dt.time
gpby_df.columns = ['Hora_Retiro', 'sum_count_uses']
gpby_df
gives
Hora_Retiro sum_count_uses
0 00:00:00 14
1 09:00:00 1
2 10:00:00 2
3 20:00:00 2
I assume that the Hora_Retiro column in your DataFrame is of Timedelta type. It is not datetime; otherwise the date part would also be printed.
Indeed, your code creates groups starting at the minute/second taken from the first row.
To group by "full hours": round each element in this column to the hour, then group by this rounded value.
The code to do it is:
hora_pico.groupby(hora_pico.Hora_Retiro.apply(lambda tt: tt.round('H'))).count_uses.count()
However, I advise you to decide what you want to count: rows, or the values in the count_uses column. In the latter case, replace count with sum.
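A more direct variant of the same idea (a sketch, assuming the column really is timedelta64[ns]): flooring each value to the hour, rather than rounding, keeps every record in the hour it started in, which matches the 00:00:00-based buckets asked for, and .dt.floor avoids the apply:
hora_pico.groupby(hora_pico['Hora_Retiro'].dt.floor('H'))['count_uses'].sum()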

Applying start and endtime as filters to dataframe

I'm working on a timeseries dataframe which looks like this and has data from January to August 2020.
Timestamp Value
2020-01-01 00:00:00 -68.95370
2020-01-01 00:05:00 -67.90175
2020-01-01 00:10:00 -67.45966
2020-01-01 00:15:00 -67.07624
2020-01-01 00:20:00 -67.30549
.....
2020-07-01 00:00:00 -65.34212
I'm trying to apply a filter on the previous dataframe using the columns start_time and end_time in the dataframe below:
start_time end_time
2020-01-12 16:15:00 2020-01-13 16:00:00
2020-01-26 16:00:00 2020-01-26 16:10:00
2020-04-12 16:00:00 2020-04-13 16:00:00
2020-04-20 16:00:00 2020-04-21 16:00:00
2020-05-02 16:00:00 2020-05-03 16:00:00
The output should set all values that are not within any start/end time window to zero and retain the values for the start and end times specified in the filter. I tried applying two simultaneous filters for start and end time, but it didn't work.
Any help would be appreciated.
The idea is to create all the masks with Series.between in a list comprehension, join them with np.logical_or.reduce, and finally pass the combined mask to Series.where:
print (df1)
Timestamp Value
0 2020-01-13 00:00:00 -68.95370 <- changed data for match
1 2020-01-01 00:05:00 -67.90175
2 2020-01-01 00:10:00 -67.45966
3 2020-01-01 00:15:00 -67.07624
4 2020-01-01 00:20:00 -67.30549
5 2020-07-01 00:00:00 -65.34212
import numpy as np

L = [df1['Timestamp'].between(s, e) for s, e in df2[['start_time','end_time']].values]
m = np.logical_or.reduce(L)
df1['Value'] = df1['Value'].where(m, 0)
print (df1)
Timestamp Value
0 2020-01-13 00:00:00 -68.9537
1 2020-01-01 00:05:00 0.0000
2 2020-01-01 00:10:00 0.0000
3 2020-01-01 00:15:00 0.0000
4 2020-01-01 00:20:00 0.0000
5 2020-07-01 00:00:00 0.0000
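Another possible sketch, assuming the windows in df2 do not overlap and both frames hold datetime64 values, is an IntervalIndex lookup:
idx = pd.IntervalIndex.from_arrays(df2['start_time'], df2['end_time'], closed='both')
m = idx.get_indexer(df1['Timestamp']) != -1   # -1 means the timestamp falls in no window
df1['Value'] = df1['Value'].where(m, 0)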
A solution using merge (an outer join on a dummy key) and query:
print(df1)
timestamp Value <- changed Timestamp to timestamp to avoid name conflict in query
0 2020-01-13 00:00:00 -68.95370 <- also changed data for match
1 2020-01-01 00:05:00 -67.90175
2 2020-01-01 00:10:00 -67.45966
3 2020-01-01 00:15:00 -67.07624
4 2020-01-01 00:20:00 -67.30549
5 2020-07-01 00:00:00 -65.34212
# reset_index carries df1's original row labels through the cross join so matching rows can be mapped back
df1.loc[df1.index.difference(
            df1.reset_index().assign(key=0)
               .merge(df2.assign(key=0), how='outer')
               .query("timestamp >= start_time and timestamp < end_time")['index']),
        "Value"] = 0
result:
timestamp Value
0 2020-01-13 00:00:00 -68.9537
1 2020-01-01 00:05:00 0.0000
2 2020-01-01 00:10:00 0.0000
3 2020-01-01 00:15:00 0.0000
4 2020-01-01 00:20:00 0.0000
5 2020-07-01 00:00:00 0.0000
The dummy key (assign(key=0)) is added to both dataframes to produce the Cartesian product.
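In pandas 1.2+ the same Cartesian product can be written without the dummy key:
merged = df1.merge(df2, how='cross')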

pandas calculate duration between datetime but not considering specific time range

For clarity, here is an MRE:
df = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "start_time": ["2020-06-01 01:00:00", "2020-06-01 01:00:00", "2020-06-01 19:00:00", "2020-06-02 04:00:00"],
    "end_time": ["2020-06-01 14:00:00", "2020-06-01 18:00:00", "2020-06-02 10:00:00", "2020-06-02 16:00:00"],
})
df["start_time"] = pd.to_datetime(df["start_time"])
df["end_time"] = pd.to_datetime(df["end_time"])
df["sub_time"] = df["end_time"] - df["start_time"]
this outputs:
id start_time end_time sub_time
0 1 2020-06-01 01:00:00 2020-06-01 14:00:00 13:00:00
1 2 2020-06-01 01:00:00 2020-06-01 18:00:00 17:00:00
2 3 2020-06-01 19:00:00 2020-06-02 10:00:00 15:00:00
3 4 2020-06-02 04:00:00 2020-06-02 16:00:00 12:00:00
but when the start_time ~ end_time span overlaps the time range 00:00:00 ~ 03:59:59, I want that part ignored (not counted in sub_time).
So instead of output above I would get:
id start_time end_time sub_time
0 1 2020-06-01 01:00:00 2020-06-01 14:00:00 10:00:00
1 2 2020-06-01 01:00:00 2020-06-01 18:00:00 14:00:00
2 3 2020-06-01 19:00:00 2020-06-02 10:00:00 11:00:00
3 4 2020-06-02 04:00:00 2020-06-02 16:00:00 12:00:00
row 0: starting at 01:00:00, do not count time until 04:00:00; then 04:00:00 ~ 14:00:00 is a 10-hour period.
row 2: consider the duration from 19:00:00 ~ 24:00:00 plus 04:00:00 ~ 10:00:00, which gives 11:00:00 in the sub_time column.
Any suggestions?
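One possible sketch (the helper name excluded_overlap is made up for illustration): for each row, subtract from end_time - start_time its overlap with every daily 00:00 ~ 04:00 window the interval spans.
import pandas as pd

def excluded_overlap(start, end, win_start=0, win_end=4):
    # Time that [start, end] spends inside the daily [00:00, 04:00) window.
    total = pd.Timedelta(0)
    day = start.normalize()
    while day <= end.normalize():
        lo = max(start, day + pd.Timedelta(hours=win_start))
        hi = min(end, day + pd.Timedelta(hours=win_end))
        if hi > lo:
            total += hi - lo
        day += pd.Timedelta(days=1)
    return total

# Using the MRE df built above:
df["sub_time"] = (df["end_time"] - df["start_time"]) - df.apply(
    lambda r: excluded_overlap(r["start_time"], r["end_time"]), axis=1
)
# For the sample rows this yields 10:00, 14:00, 11:00 and 12:00, matching the desired output above.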

Converting to datetime in python

I have time data in a column and am trying to figure out how I can get it into datetime format.
2000
2100
2300
2355
0
1
5
10
100
105
330
My question is: how can I get these into datetime format? The output should be:
20:00:00
21:00:00
23:00:00
23:55:00
00:00:00
00:01:00
00:05:00
00:10:00
01:00:00
01:05:00
03:30:00
tried:
1. da = pd.to_datetime(330, format='%H%M')
   output: '03:30:00'
2. d = str(datetime.timedelta(minutes=55))
   output: '0:55:00'
But if I apply 1. to 100 it gives 10 hrs.
e.g. da = pd.to_datetime(100, format='%H%M')
output: '10:00:00'
Try,
pd.to_datetime(df['time'].astype(str).str.zfill(4), format = '%H%M').dt.time
0 20:00:00
1 21:00:00
2 23:00:00
3 23:55:00
4 00:00:00
5 00:01:00
6 00:05:00
7 00:10:00
8 01:00:00
9 01:05:00
10 03:30:00
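The zfill(4) is what avoids the 100 -> 10:00 problem from the question: without the leading zeros, %H grabs the first two digits. A small demo:
pd.to_datetime('100', format='%H%M').time()    # 10:00:00
pd.to_datetime('0100', format='%H%M').time()   # 01:00:00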
IIUC str.rjust
pd.to_datetime(s.astype(str).str.rjust(4,'0'),format='%H%M').dt.time
Out[41]:
0 20:00:00
1 21:00:00
2 23:00:00
3 23:55:00
4 00:00:00
5 00:01:00
6 00:05:00
7 00:10:00
8 01:00:00
9 01:05:00
10 03:30:00
Name: x, dtype: object
Since this is novice code, I am making things more explicit and adding info about the %H and %M format letters:
df['cname'] = pd.to_datetime(df['cname'].astype(str).str.zfill(4), format = '%H%M').dt.time
print(df['cname'])
# %H Hour (24-hour clock) as a zero-padded decimal number. 07
# %M Minute as a zero-padded decimal number. 06

Groupby for datetime on a scale of hours (ignoring what day)

I have a series of floats with a datetimeindex that I have resampled into bins of 3 hours. As such I have an index containing
2015-01-01 09:00:00
2015-01-01 12:00:00
2015-01-01 15:00:00
2015-01-01 18:00:00
2015-01-01 21:00:00
2015-01-02 00:00:00
2015-01-02 03:00:00
2015-01-02 06:00:00
2015-01-02 09:00:00
and so forth. I am trying to sum the floats associated with each time of day, say 09:00:00, for all days.
The only way I can think to do it, with my limited experience, is to convert this series to a dataframe using the datetime index as another column, then iterate to check whether the hour slots of the datetimes are equal, and then sum the values. I feel like this is horribly inefficient and probably not the 'correct' way to do this. Any help would be appreciated!
IIUC:
In [116]: s
Out[116]:
2015-01-01 09:00:00 3
2015-01-01 12:00:00 1
2015-01-01 15:00:00 0
2015-01-01 18:00:00 1
2015-01-01 21:00:00 0
2015-01-02 00:00:00 9
2015-01-02 03:00:00 2
2015-01-02 06:00:00 2
2015-01-02 09:00:00 7
2015-01-02 12:00:00 8
Freq: 3H, Name: val, dtype: int32
In [117]: s.groupby(s.index - s.index.normalize()).sum()
Out[117]:
00:00:00 9
03:00:00 2
06:00:00 2
09:00:00 10
12:00:00 9
15:00:00 0
18:00:00 1
21:00:00 0
Name: val, dtype: int32
or:
In [118]: s.groupby(s.index.hour).sum()
Out[118]:
0 9
3 2
6 2
9 10
12 9
15 0
18 1
21 0
Name: val, dtype: int32
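For what it's worth, grouping by the time of day directly is equivalent here and labels the groups with datetime.time objects (same series s as above):
s.groupby(s.index.time).sum()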
