change date based on midnight time - python

I have a dataframe which contains 3 columns for date and time: date, depart time, and arrive time. I want to make two datetime columns (depart time and arrive time) using pandas, so I use the to_datetime function.
Since the date column is based only on the depart time, there are some cases where the depart time is around 23:00 and the arrive time is after midnight but the date stays the same. For instance:
depart datetime: 01/12/2017 23:58:00, arrive time: 01/12/2017 00:30:00
How can I write a function that will update the day to the day after if the arrive time is after midnight? (In the example it should be arrive time 02/12/2017.)
Thanks

I think you can check whether the difference is below a zero Timedelta and add one day with mask:
print (df)
depart time arrive time
0 01/12/2017 23:58:00 01/12/2017 00:30:00
1 01/12/2017 00:30:00 01/12/2017 23:58:00
df['depart time'] = pd.to_datetime(df['depart time'], dayfirst=True)
df['arrive time'] = pd.to_datetime(df['arrive time'], dayfirst=True)
m = (df['arrive time'] - df['depart time']) < pd.Timedelta(0)
Another possible condition is:
m = (df['depart time'] - df['arrive time']).dt.days != -1
print (m)
0 True
1 False
dtype: bool
df['arrive time'] = df['arrive time'].mask(m, df['arrive time'] + pd.Timedelta(1, unit='d'))
print (df)
depart time arrive time
0 2017-12-01 23:58:00 2017-12-02 00:30:00
1 2017-12-01 00:30:00 2017-12-01 23:58:00
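For reference, an alternative to the mask line above is to add the day in place with loc (same mask m, same result):
# add one day only on the rows where the arrival precedes the departure
df.loc[m, 'arrive time'] = df.loc[m, 'arrive time'] + pd.Timedelta(days=1)
print(df)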

Related

Remove rows that are not the 15th counted day of the month

Sheet 1 has a column 'Date' with 10 years' worth of dates. These dates are trading days for the Australian stock market. I'm looking to remove all dates that are not the 15th trading day of each month (not necessarily the 15th day of the month). This code works for the first 12 months of the first year, but it stops after that.
Code:
df = pd.read_csv(r'C:\Users\\Desktop\Sheet1.csv')
df['Date'] = pd.to_datetime(df['Date'])
df['month'] = df['Date'].dt.month
df['trading_day'] = df.groupby(['month']).cumcount() + 1
df = df[df['trading_day'] == 15]
df.drop(['month', 'trading_day'], axis=1, inplace=True)
df.to_excel("Sheet2.xlsx", index=False)
Current output:
Date NAV
2009-06-22 00:00:00 $50.7731
2009-07-21 00:00:00 $52.2194
2009-08-21 00:00:00 $55.5233
2009-09-21 00:00:00 $61.1116
2009-10-21 00:00:00 $62.6512
2009-11-20 00:00:00 $60.9736
2009-12-21 00:00:00 $60.2841
2010-01-22 00:00:00 $61.2418
2010-02-19 00:00:00 $59.8768
2010-03-19 00:00:00 $63.4521
2010-04-23 00:00:00 $63.1672
2010-05-21 00:00:00 $55.8651
You also need to group by year to compute the cumcount:
df['trading_day'] = df.groupby([df['Date'].dt.year, 'month']).cumcount() + 1
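A minimal self-contained sketch of the corrected logic, using a synthetic business-day calendar in place of the CSV (the NAV values are placeholders):
import pandas as pd

df = pd.DataFrame({'Date': pd.bdate_range('2009-01-01', '2010-12-31'), 'NAV': 50.0})
# count trading days within each (year, month) pair, then keep the 15th one
df['trading_day'] = df.groupby([df['Date'].dt.year, df['Date'].dt.month]).cumcount() + 1
df = df[df['trading_day'] == 15].drop(columns='trading_day')
print(df.head())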

Converting columns with hours to datetime type pandas

I am trying to convert my column "Time", in the form "hh:mm:ss", in my pandas frame from object to datetime64, as I want to filter by hours.
I tried new['Time'] = pd.to_datetime(new['Time'], format='%H:%M:%S').dt.time, which has no effect at all (it is still an object).
I also tried new['Time'] = pd.to_datetime(new['Time'], infer_datetime_format=True)
which gives the error message: TypeError: <class 'datetime.time'> is not convertible to datetime
I want to be able to sort my data frame by hours.
How do I convert the object to the hour?
Can I then filter by hour (for example everything after 8am), or do I have to enter the exact value with minutes and seconds to filter for it?
Thank you
If you want your df['Time'] to be of type datetime64 just use
df['Time'] = pd.to_datetime(df['Time'], format='%H:%M:%S')
print(df['Time'])
This will result in the following column
0 1900-01-01 00:00:00
1 1900-01-01 00:01:00
2 1900-01-01 00:02:00
3 1900-01-01 00:03:00
4 1900-01-01 00:04:00
...
1435 1900-01-01 23:55:00
1436 1900-01-01 23:56:00
1437 1900-01-01 23:57:00
1438 1900-01-01 23:58:00
1439 1900-01-01 23:59:00
Name: Time, Length: 1440, dtype: datetime64[ns]
If you just want to extract the hour from the timestamp, extend pd.to_datetime(...) with .dt.hour.
If you want to group your values on an hourly basis you can also use (after converting the df['Time'] to datetime):
new_df = df.groupby(pd.Grouper(key='Time', freq='H'))['Value'].agg(list)
This will return all values grouped by hour.
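For the filtering part of the question, once df['Time'] is of type datetime64 you can use the hour accessor, e.g. (the after_8am name is just illustrative):
# everything from 8am onward
after_8am = df[df['Time'].dt.hour >= 8]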
IIUC, you already have time objects from the datetime module:
Suppose this dataframe:
from datetime import time
df = pd.DataFrame({'Time': [time(10, 39, 23), time(8, 47, 59), time(9, 21, 12)]})
print(df)
# Output:
Time
0 10:39:23
1 08:47:59
2 09:21:12
A few operations:
# Check if you have really `time` instance
>>> df['Time'].iloc[0]
datetime.time(10, 39, 23)
# Sort values by time
>>> df.sort_values('Time')
Time
1 08:47:59
2 09:21:12
0 10:39:23
# Extract rows between 08:00 and 09:00
>>> df[df['Time'].between(time(8), time(9))]
Time
1 08:47:59
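And for the "everything after 8am" part of the question, a simple comparison against a datetime.time object should also work here:
# rows from 08:00 onward
>>> df[df['Time'] >= time(8)]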

Reading in Date / Time Values Correctly

Any ideas on how I can manipulate my current date-time data to make it suitable for use when converting the datatype to time?
For example:
df1['Date/Time'] = pd.to_datetime(df1['Date/Time'])
The current format for the data is mm/dd 00:00:00
An example of the column in the dataframe can be seen below.
Date/Time Dry_Temp[C] Wet_Temp[C] Solar_Diffuse_Rate[[W/m2]] \
0 01/01 00:10:00 8.45 8.237306 0.0
1 01/01 00:20:00 7.30 6.968360 0.0
2 01/01 00:30:00 6.15 5.710239 0.0
3 01/01 00:40:00 5.00 4.462898 0.0
4 01/01 00:50:00 3.85 3.226244 0.0
For the condition where the hour is denoted as 24, you have two choices: first, you can simply reset the hour to 00; second, you can reset the hour to 00 and also add 1 to the date.
In either case the first step is detecting the condition, which can be done with a simple find, e.g. tx.find(' 24:').
Having detected the condition, in the first case it is a simple matter of resetting the hour to 00 and proceeding with formatting the field. In the second case, however, adding 1 to the day is a little more complicated because you can roll over to the next month.
Here is the approach I would use:
Given a df of form:
Date Time
0 01/01 00:00:00
1 01/01 00:24:00
2 01/01 24:00:00
3 01/31 24:00:00
The First Case
def parseDate(tx):
    ti = tx.find(' 24:')
    if ti >= 0:
        # reset the hour to 00, leaving the date unchanged
        return pd.to_datetime(tx[:6] + '00' + tx[8:], format='%m/%d %H:%M:%S')
    return pd.to_datetime(tx, format='%m/%d %H:%M:%S')
df['Date Time'] = df['Date Time'].apply(lambda x: parseDate(x))
Produces the following:
Date Time
0 1900-01-01 00:00:00
1 1900-01-01 00:24:00
2 1900-01-01 00:00:00
3 1900-01-31 00:00:00
For the second case, I employed the dateutil relativedelta library and slightly modified my parseDate function as shown below:
import dateutil.relativedelta
import dateutil as du
def parseDate2(tx):
    ti = tx.find(' 24:')
    if ti >= 0:
        # reset the hour to 00, then roll forward 24 hours so the date advances
        # (relativedelta handles the month roll-over)
        tk = pd.to_datetime(tx[:6] + '00' + tx[8:], format='%m/%d %H:%M:%S')
        return tk + du.relativedelta.relativedelta(hours=+24)
    return pd.to_datetime(tx, format='%m/%d %H:%M:%S')
df['Date Time'] = df['Date Time'].apply(lambda x: parseDate2(x))
Yields:
Date Time
0 1900-01-01 00:00:00
1 1900-01-01 00:24:00
2 1900-01-02 00:00:00
3 1900-02-01 00:00:00
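For reference, a vectorized sketch of the second case that avoids the per-row apply, assuming df['Date Time'] still holds the raw mm/dd hh:mm:ss strings:
has_24 = df['Date Time'].str.contains(' 24:', regex=False)
parsed = pd.to_datetime(df['Date Time'].str.replace(' 24:', ' 00:', regex=False),
                        format='%m/%d %H:%M:%S')
# add one day back for the rows whose hour was 24
df['Date Time'] = parsed + pd.to_timedelta(has_24.astype(int), unit='D')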
To access the values of the datetime (namely the time), you can use:
# These are now in a usable format
seconds = df1['Date/Time'].dt.second
minutes = df1['Date/Time'].dt.minute
hours = df1['Date/Time'].dt.hour
And if need be, you can create its own independent time series with:
df1['Date/Time'].dt.time

First week of year considering the first day last year

I have the following df:
time_series date sales
store_0090_item_85261507 1/2020 1,0
store_0090_item_85261501 2/2020 0,0
store_0090_item_85261500 3/2020 6,0
Here 'date' is Week/Year.
So, I tried to use the following code:
df['date'] = df['date'].apply(lambda x: datetime.strptime(x + '/0', "%U/%Y/%w"))
But it returns this df:
time_series date sales
store_0090_item_85261507 2020-01-05 1,0
store_0090_item_85261501 2020-01-12 0,0
store_0090_item_85261500 2020-01-19 6,0
But the first day of the first week of 2020 is 2019-12-29, considering Sunday as the first day of the week. How can I get the first day of the first week of 2020 to be 2019-12-29 and not 2020-01-05?
From the datetime module's documentation:
%U: Week number of the year (Sunday as the first day of the week) as a zero padded decimal number. All days in a new year preceding the first Sunday are considered to be in week 0.
Edit: My original answer doesn't work for input 1/2023, and using ISO 8601 date values doesn't work for 1/2021, so I've edited this answer by adding a custom function.
Here is a way with a custom function
import pandas as pd
from datetime import datetime, timedelta
##############################################
# to demonstrate issues with certain dates
print(datetime.strptime('0/2020/0', "%U/%Y/%w")) # 2019-12-29 00:00:00
print(datetime.strptime('1/2020/0', "%U/%Y/%w")) # 2020-01-05 00:00:00
print(datetime.strptime('0/2021/0', "%U/%Y/%w")) # 2020-12-27 00:00:00
print(datetime.strptime('1/2021/0', "%U/%Y/%w")) # 2021-01-03 00:00:00
print(datetime.strptime('0/2023/0', "%U/%Y/%w")) # 2023-01-01 00:00:00
print(datetime.strptime('1/2023/0', "%U/%Y/%w")) # 2023-01-01 00:00:00
#################################################
df = pd.DataFrame({'date':["1/2020", "2/2020", "3/2020", "1/2021", "2/2021", "1/2023", "2/2023"]})
print(df)
def get_first_day(date):
    date0 = datetime.strptime('0/' + date.split('/')[1] + '/0', "%U/%Y/%w")
    date1 = datetime.strptime('1/' + date.split('/')[1] + '/0', "%U/%Y/%w")
    date = datetime.strptime(date + '/0', "%U/%Y/%w")
    return date if date0 == date1 else date - timedelta(weeks=1)
df['new_date'] = df['date'].apply(lambda x:get_first_day(x))
print(df)
Input
date
0 1/2020
1 2/2020
2 3/2020
3 1/2021
4 2/2021
5 1/2023
6 2/2023
Output
date new_date
0 1/2020 2019-12-29
1 2/2020 2020-01-05
2 3/2020 2020-01-12
3 1/2021 2020-12-27
4 2/2021 2021-01-03
5 1/2023 2023-01-01
6 2/2023 2023-01-08
You'll want to use ISO week parsing directives, Ex:
import pandas as pd
date = pd.Series(["1/2020", "2/2020", "3/2020"])
pd.to_datetime(date+"/1", format="%V/%G/%u")
0 2019-12-30
1 2020-01-06
2 2020-01-13
dtype: datetime64[ns]
you can also shift by one day if the week should start on Sunday:
pd.to_datetime(date+"/1", format="%V/%G/%u") - pd.Timedelta('1d')
0 2019-12-29
1 2020-01-05
2 2020-01-12
dtype: datetime64[ns]
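Applied to the question's 'date' column, a minimal sketch (the week_start column name is just illustrative):
# parse ISO week/year, then shift back one day so the week starts on Sunday
df['week_start'] = pd.to_datetime(df['date'] + '/1', format='%V/%G/%u') - pd.Timedelta('1d')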

Negative time duration in Pandas

I have a dataset with two columns: Actual Time and Promised Time (representing the actual and promised start times of some process).
For example:
import pandas as pd
example_df = pd.DataFrame(columns = ['Actual Time', 'Promised Time'],
data = [
('2016-6-10 9:00', '2016-6-10 9:00'),
('2016-6-15 8:52', '2016-6-15 9:52'),
('2016-6-19 8:54', '2016-6-19 9:02')]).applymap(pd.Timestamp)
So as we can see, sometimes Actual Time = Promised Time, but there are also cases where Actual Time < Promised Time.
I defined a column that shows the difference between these two columns (example_df['Actual Time'] - example_df['Promised Time']), but the problem is that for the third row it returned -1 days +23:52:00 instead of -00:08:00.
Sample:
print (df)
Actual Time Promised Time
0 2016-6-10 9:00 2016-6-10 9:00
1 2016-6-15 10:52 2016-6-15 9:52 <- changed datetimes
2 2016-6-19 8:54 2016-6-19 9:02
def format_timedelta(x):
    ts = x.total_seconds()
    if ts >= 0:
        hours, remainder = divmod(ts, 3600)
        minutes, seconds = divmod(remainder, 60)
        return ('{}:{:02d}:{:02d}').format(int(hours), int(minutes), int(seconds))
    else:
        hours, remainder = divmod(-ts, 3600)
        minutes, seconds = divmod(remainder, 60)
        return ('-{}:{:02d}:{:02d}').format(int(hours), int(minutes), int(seconds))
First create datetimes:
df['Actual Time'] = pd.to_datetime(df['Actual Time'])
df['Promised Time'] = pd.to_datetime(df['Promised Time'])
And then timedeltas:
df['diff'] = (df['Actual Time'] - df['Promised Time'])
If you convert negative timedeltas to seconds with Series.dt.total_seconds, it works nicely:
df['diff1'] = df['diff'].dt.total_seconds()
But if you want negative timedeltas in string representation, it is possible with a custom function, because strftime for timedeltas is not yet implemented:
df['diff2'] = df['diff'].apply(format_timedelta)
print (df)
Actual Time Promised Time diff diff1 diff2
0 2016-06-10 09:00:00 2016-06-10 09:00:00 00:00:00 0.0 0:00:00
1 2016-06-15 10:52:00 2016-06-15 09:52:00 01:00:00 3600.0 1:00:00
2 2016-06-19 08:54:00 2016-06-19 09:02:00 -1 days +23:52:00 -480.0 -0:08:00
I assume your dataframe is already in datetime dtype. abs works just fine.
Without abs
df['Actual Time'] - df['Promised Time']
Out[526]:
0 00:00:00
1 -1 days +23:00:00
2 -1 days +23:52:00
dtype: timedelta64[ns]
With abs
abs(df['Promised Time'] - df['Actual Time'])
Out[529]:
0 00:00:00
1 01:00:00
2 00:08:00
dtype: timedelta64[ns]
The difference result is of timedelta type, which by default is in ns resolution.
You need to change the type of your result to your desired unit:
import pandas as pd
df=pd.DataFrame(data={
'Actual Time':['2016-6-10 9:00','2016-6-15 8:52','2016-6-19 8:54'],
'Promised Time':['2016-6-10 9:00','2016-6-15 9:52','2016-6-19 9:02']
},dtype='datetime64[ns]')
# here you need to add the `astype` part and to determine the unit you want
df['diff']=(df['Actual Time']-df['Promised Time']).astype('timedelta64[m]')
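Note that casting to 'timedelta64[m]' may raise or behave differently depending on your pandas version; a version-agnostic sketch for minutes (the diff_minutes name is just illustrative):
# negative differences come out as negative floats, e.g. -8.0 for the third row
df['diff_minutes'] = (df['Actual Time'] - df['Promised Time']).dt.total_seconds() / 60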
