Pandas - Add seconds from a column to datetime in other column - python

I have a DataFrame with two columns, ["StartDate", "duration"].
The elements in the StartDate column are datetimes, and the durations are ints.
Something like:
StartDate Duration
08:16:05 20
07:16:01 20
I expect to get:
EndDate
08:16:25
07:16:21
Simply add the seconds to the time.
I've been checking some ideas, such as the timedelta types and the fact that datetimes support adding timedeltas, but so far I can't find how to do it with DataFrames in a vectorized fashion (although it would be possible to iterate over all the rows performing the operation).

Consider this df:
StartDate duration
0 01/01/2017 135
1 01/02/2017 235
You can get the datetime column like this:
df['EndDate'] = pd.to_datetime(df['StartDate']) + pd.to_timedelta(df['duration'], unit='s')
df.drop(['StartDate', 'duration'], axis=1, inplace=True)
You get
EndDate
0 2017-01-01 00:02:15
1 2017-01-02 00:03:55
EDIT: with the sample dataframe that you posted:
df['EndDate'] = pd.to_timedelta(df['StartDate']) + pd.to_timedelta(df['Duration'], unit='s')
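As a minimal end-to-end sketch on that sample (assuming the columns are named StartDate and Duration, as shown above):
import pandas as pd

df = pd.DataFrame({"StartDate": ["08:16:05", "07:16:01"], "Duration": [20, 20]})
# Time-of-day strings parse as timedeltas, so both operands end up as timedeltas
df["EndDate"] = pd.to_timedelta(df["StartDate"]) + pd.to_timedelta(df["Duration"], unit="s")
print(df["EndDate"])
# 0   0 days 08:16:25
# 1   0 days 07:16:21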

df['StartDate'] = df.apply(lambda x: pd.to_datetime(x['StartDate']) + pd.Timedelta(seconds=x['duration']), axis=1)

use groupby() and for loop to count column values with conditions

The logic of what I am trying to do is, I think, best explained with code:
import pandas as pd
import numpy as np
from datetime import timedelta
np.random.seed(365)
#some data
start_date = pd.date_range(start = "2015-01-09", end = "2022-09-11", freq = "6D")
end_date = [sd + timedelta(days = np.random.exponential(scale = 100)) for sd in start_date]
df = pd.DataFrame(
{"start_date":start_date,
"end_date":end_date}
)
#randomly remove some end dates
df["end_date"] = df["end_date"].sample(frac = 0.7).reset_index(drop = True)
df["end_date"] = df["end_date"].dt.date.astype("datetime64[ns]")
I first create a pd.Series with the 1st day of every month in the entire history of the data:
dates = pd.Series(df["start_date"].dt.to_period("M").sort_values(ascending = True).unique()).dt.start_time
What I then want to do is count the number of df["start_date"] values which are less than the 1st day of each month in the series and where the df["end_date"] values are null (recorded as NaT)
I would think I would use a for loop to do this and somehow groupby the dates series so that the resulting output looks something like this:
month_start  count
2015-01-01       5
2015-02-01      10
2015-03-01      35
The count column in the resulting output is the number of df rows whose df["start_date"] is before the 1st of that month and whose df["end_date"] is null; this is computed for every value in the series.
Here is the logic of what I am trying to do:
df.groupby(by = dates)[["start_date", "end_date"]].apply(
lambda x: [x["start_date"] < date for date in dates] & x["end_date"].isnull == True
)
Is this what you want:
df2 = df[df['end_date'].isnull()]
dates_count = dates.apply(lambda x: df2[df2['start_date'] < x]['start_date'].count())
print(pd.concat([dates, dates_count], axis=1))
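To match the month_start/count layout from your question, you can also give the two pieces explicit column names (a cosmetic variant of the same answer; the names are just the ones from your desired output):
result = pd.concat([dates, dates_count], axis=1)
result.columns = ['month_start', 'count']
print(result)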
IIUC, group by period (shifted forward by 1 month, since a row starting in month M is only counted from the 1st of month M+1 onward) and count the NaT values, then cumsum to accumulate the counts:
(df['end_date'].isna()
.groupby(df['start_date'].dt.to_period('M').add(1).dt.start_time)
.sum()
.cumsum()
)
Output:
start_date
2015-02-01 0
2015-03-01 0
2015-04-01 0
2015-05-01 0
2015-06-01 0
...
2022-06-01 122
2022-07-01 127
2022-08-01 133
2022-09-01 138
2022-10-01 140
Name: end_date, Length: 93, dtype: int64

pandas time differences (delta between rows)

I have a column with timestamps (strings) which look like the following:
2017-10-25T09:57:00.319Z
2017-10-25T09:59:00.319Z
2017-10-27T11:03:00.319Z
Tbh I do not know the meaning of Z but I guess it is not that important.
How can I convert the above strings into proper timestamps so that I can calculate the difference/delta (e.g. in seconds or minutes)?
I want to have a column where the delta between each timestamp and the previous one is listed.
You can use pd.to_datetime() to convert the strings to datetime format (the trailing Z is the ISO 8601 designator for UTC, which pandas parses into timezone-aware timestamps, hence the +00:00 offsets below). Then get the time difference/delta with .diff(). Finally, convert the timedelta to seconds with .dt.total_seconds(), as follows:
(Assuming your column of string is named Date):
df['Date'] = pd.to_datetime(df['Date'])
df['TimeDelta'] = df['Date'].diff().dt.total_seconds()
Result (time delta in seconds):
print(df)
Date TimeDelta
0 2017-10-25 09:57:00.319000+00:00 NaN
1 2017-10-25 09:59:00.319000+00:00 120.0
2 2017-10-27 11:03:00.319000+00:00 176640.0
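If you want the delta in minutes instead, the same pattern applies; this small variant (not in the original answer) just divides the seconds by 60:
df['TimeDeltaMin'] = df['Date'].diff().dt.total_seconds() / 60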

How to subset rows based on date overlap range efficiently using python pandas?

My data frame has two date type columns: start and end (yyyy-mm-dd).
Here's my data frame:
import pandas as pd
import datetime
data=[["2016-10-17","2017-03-08"],["2014-08-17","2016-09-08"],["2014-01-01","2015-01-01"],["2017-12-20","2019-01-01"]]
df=pd.DataFrame(data,columns=['start','end'])
df['start'] = pd.to_datetime(df['start'], format='%Y-%m-%d')
df['end'] = pd.to_datetime(df['end'], format='%Y-%m-%d')
start end
0 2016-10-17 2017-03-08
1 2014-08-17 2016-09-08
2 2014-01-01 2015-01-01
3 2017-12-20 2019-01-01
And I have reference start and end date as following.
ref_start=datetime.date(2015, 9, 20)
ref_end=datetime.date(2017,1,31)
print(ref_start,ref_end)
2015-09-20 2017-01-31
I would like to keep the rows whose start/end range overlaps with the reference start and end dates. The third and fourth rows are not selected, since their date ranges do not overlap with the reference range (2015-09-20 ~ 2017-01-31).
So my desired outcome looks like this:
start end
0 2016-10-17 2017-03-08
1 2014-08-17 2016-09-08
To do that, I was thinking about using the following code, based on this: Efficient date range overlap calculation in python?
df[(max(df['start'],ref_start)>min(df['end'],ref_end))]
However, it doesn't work. Is there any way to get the desired outcome efficiently?
A trick I learned early on in my career is what I call "crossing the dates": you compare the start of one range against the end of the other. (Your attempt fails because the built-in max()/min() try to truth-test a whole Series against a scalar, which raises pandas' "truth value of a Series is ambiguous" error.)
# pd.Timestamp can do everything that datetime/date does and some more
ref_start = pd.Timestamp(2015, 9, 20)
ref_end = pd.Timestamp(2017,1,31)
# Compare the start of one range to the end of another and vice-versa
# Made into a separate variable for readability
cond = (ref_start <= df['end']) & (ref_end >= df['start'])
df[cond]
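Applied to the sample frame, this keeps exactly the two overlapping rows from the desired outcome:
print(df[cond])
#        start        end
# 0 2016-10-17 2017-03-08
# 1 2014-08-17 2016-09-08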

How to group a data frame by a time interval in pandas?

I have a data frame df
Date Mobile_No Amount Time .....
121526 2014-12-24 739637 200.00 9:44:00
121529 2014-12-28 199002 500.00 9:49:44
121531 2014-12-10 813770 100.00 9:50:41
121536 2014-12-09 178795 100.00 9:52:15
121537 2014-12-09 178795 100.00 9:52:24
having Date and Time of type datetime64 and object. I need to group this data frame by time intervals of 5 minutes and by Mobile_No. My expected output is that the last two rows should be counted as one (same Mobile_No and a time gap of less than 5 minutes).
Is there any way to achieve this?
First I thought to combine the Date and Time columns into a timestamp, use it as the index, and apply pd.TimeGrouper(), but this doesn't seem to work:
>>>import datetime as dt
>>>import pandas as pd
...
>>> df.apply(lambda x: dt.datetime.combine(x['Date'], dt.time(x['Time'])), axis=1)
gives the error
'an integer is required', u'occurred at index 121526'
If you are having issues, can you not convert to strings, concatenate them, and pass the format to to_datetime:
df['Time']=df['Time'].astype(str)
df['Date']=df['Date'].astype(str)
df['Timestamp'] = df['Date'] +' ' + df['Time']
df.index = pd.to_datetime(df['Timestamp'], format='%Y-%m-%d %H:%M:%S')
From there you can resample or use pd.Grouper as required.
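For example, a sketch of the final grouping step, assuming the datetime index built above (pd.Grouper with no key bins the index):
# Count rows per 5-minute window per number; the last two sample rows
# (9:52:15 and 9:52:24, same Mobile_No) land in the same bin
counts = df.groupby([pd.Grouper(freq='5Min'), 'Mobile_No']).size()
print(counts)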

get subset dataframe by date

I have the following dataframe with a starting date (DD/MM/YYYY) and Amount:
Start Date Amount
1 01/01/2013 20
2 02/05/2007 10
3 01/05/2004 15
4 01/06/2014 20
5 17/08/2008 21
I'd like to create a subset of this dataframe containing only the rows where the day of Start Date is 01:
Start Date Amount
1 01/01/2013 20
3 01/05/2004 15
4 01/06/2014 20
I've tried to loop through the table and use the index, but couldn't find a suitable way to iterate through a dataframe's rows.
Assuming your dates are already datetime, then the following should work. If they are strings, you can convert them using to_datetime, so df['Start Date'] = pd.to_datetime(df['Start Date']); you may also need to pass dayfirst=True if required. If you imported the data using read_csv you could have done this at the point of import, so df = pd.read_csv('data.csv', parse_dates=[n], dayfirst=True), where n is the column index (0-based of course), so if it was the first column you would pass parse_dates=[0].
One method could be to apply a lambda to the column and use the returned boolean Series to index against:
In [19]:
df[df['Start Date'].apply(lambda x: x.day == 1)]
Out[19]:
Start Date Amount
index
1 2013-01-01 20
3 2004-05-01 15
4 2014-06-01 20
Not sure if there is a built-in method that doesn't involve setting this to be the index, which would convert it into a timeseries index.
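For what it's worth, there is a vectorized alternative via the .dt accessor on datetime columns, which avoids both apply and any index conversion:
# .dt exposes datetime components column-wise
df[df['Start Date'].dt.day == 1]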
