Hello,
I am trying to extract a date and time column from my Excel data. I get the column as a DataFrame with float values, and after using pandas.to_datetime I get a different date than the actual date in Excel. For example, in Excel the starting date is 01.01.1901 00:00:00, but in Python I get 1971-01-03 00:00:00.000000.
How can I solve this problem?
I need the final output as total seconds in a DataFrame: the first cell starting at 0 seconds and each following cell giving the elapsed time in seconds (the time difference between cells is 15 min).
Thank you.
Your input is fractional days, so there's actually no need to convert to datetime if you want the duration in seconds relative to the first entry. Subtract the first value from the rest of the column and multiply by the number of seconds in a day (86400):
import pandas as pd
df = pd.DataFrame({"Datum/Zeit": [367.0, 367.010417, 367.020833]})
df["totalseconds"] = (df["Datum/Zeit"] - df["Datum/Zeit"].iloc[0]) * 86400
df["totalseconds"]
0 0.0000
1 900.0288
2 1799.9712
Name: totalseconds, dtype: float64
If you have to use datetime, you'll need to convert to timedelta (a duration) to do the same, e.g.:
df["datetime"] = pd.to_datetime(df["Datum/Zeit"], unit="d")
# df["datetime"]
# 0 1971-01-03 00:00:00.000000
# 1 1971-01-03 00:15:00.028800
# 2 1971-01-03 00:29:59.971200
# Name: datetime, dtype: datetime64[ns]
# subtraction of datetime from datetime gives timedelta, which has total_seconds:
df["totalseconds"] = (df["datetime"] - df["datetime"].iloc[0]).dt.total_seconds()
# df["totalseconds"]
# 0 0.0000
# 1 900.0288
# 2 1799.9712
# Name: totalseconds, dtype: float64
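As an aside on the date mismatch: unit="d" in to_datetime counts days from the Unix epoch (1970-01-01), while Excel serial day numbers count from 1899-12-30 (the commonly used Excel origin), which is why 367.0 shows up as 1971-01-03 instead of 01.01.1901. If the actual calendar dates matter, passing that origin should work; a minimal sketch with the same sample column:
df["datetime"] = pd.to_datetime(df["Datum/Zeit"], unit="D", origin="1899-12-30")
# df["datetime"]
# 0   1901-01-01 00:00:00.000000
# 1   1901-01-01 00:15:00.028800
# 2   1901-01-01 00:29:59.971200
# Name: datetime, dtype: datetime64[ns]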
My instructions are as follows:
Read the date columns in as timestamps and convert them to YYYY/MM/DD
hours:minutes:seconds format, where you set hours, minutes, and seconds to random
values appropriate to their range.
Here is the column of the data frame we are supposed to convert to datetime:
Order date
11/12/2016
11/24/2016
6/12/2016
10/12/2016
...
And here is the datetime I need:
2016/11/12 (random) hours:minutes:seconds
2016/11/24 (random) hours:minutes:seconds
...
My main question is how to get random hours, minutes, and seconds. The rest I can figure out from the documentation.
You can generate random integers between 0 and 86399 (the number of seconds in a day minus one) and convert them to a timedelta with pandas.to_timedelta:
import numpy as np
import pandas as pd
# random whole seconds in the range [0, 86399], one value per row
time = pd.to_timedelta(np.random.randint(0, 60*60*24, size=len(df)), unit='s')
df['Order date'] = pd.to_datetime(df['Order date']).add(time)
Output:
Order date
0 2016-11-12 02:21:53
1 2016-11-24 13:26:00
2 2016-06-12 15:13:03
3 2016-10-12 14:45:12
You're trying to read the data in '%Y-%m-%d' format but the data is in "%d/%m/%Y" format. See https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior to find out how to convert the date to your desired format.
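For instance, a minimal sketch of parsing with an explicit format and then reformatting (the sample strings and format strings here are illustrative):
import pandas as pd
s = pd.Series(['24/11/2016', '12/06/2016'])      # day/month/year strings (illustrative)
parsed = pd.to_datetime(s, format='%d/%m/%Y')    # parse with an explicit input format
parsed.dt.strftime('%Y-%m-%d')                   # render in the desired output format
# 0    2016-11-24
# 1    2016-06-12
# dtype: object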
I have a data frame with a lot of columns and rows; the index column contains datetime objects.
date_time column1 column2
10-10-2010 00:00:00 1 10
10-10-2010 00:00:03 1 10
10-10-2010 00:00:06 1 10
Now I want to calculate the difference in time between the first and last datetime object. Therefore:
start = df["date_time"].head(1)
stop = df["date_time"].tail(1)
However, I now want to extract this datetime value so that I can use .total_seconds() to calculate the number of seconds difference between the two datetime objects, something like:
delta_t_seconds = (start - stop).total_seconds()
This, however, doesn't give the desired result, since start and stop are still Series with a single element.
Please help.
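A minimal sketch of one way to do this, assuming the sample frame above and that date_time is already a datetime64 column: take scalar Timestamps with iloc instead of head/tail, so the subtraction yields a single Timedelta:
import pandas as pd

df = pd.DataFrame({"date_time": pd.to_datetime(
    ["10-10-2010 00:00:00", "10-10-2010 00:00:03", "10-10-2010 00:00:06"])})

start = df["date_time"].iloc[0]    # scalar Timestamp instead of a one-row Series
stop = df["date_time"].iloc[-1]
delta_t_seconds = (stop - start).total_seconds()
# delta_t_seconds -> 6.0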
From an online API I gather a series of data points, each with a value and an ISO timestamp. Unfortunately I need to loop over them, so I store them in a temporary dict and then create a pandas dataframe from that and set the index to the timestamp column (simplified example):
from datetime import datetime
import pandas

input_data = [
    '2019-09-16T06:44:01+02:00',
    '2019-11-11T09:13:01+01:00',
]

data = []
for timestamp in input_data:
    _date = datetime.fromisoformat(timestamp)
    data.append({'time': _date})

pd_data = pandas.DataFrame(data).set_index('time')
As long as all timestamps are in the same timezone and DST/non-DST state, everything works fine and I get a DataFrame with a DatetimeIndex which I can work on later.
However, once two different time offsets appear in one dataset (as in the example above), I only get a plain Index in my dataframe, which does not support any time-based methods.
Is there any way to make pandas accept timezone-aware dates with differing offsets as an index?
A minor correction of the question's wording, which I think is important. What you have are UTC offsets - DST/no-DST would require more information than that, i.e. a time zone. Here, this matters since you can parse timestamps with UTC offsets (even different ones) to UTC easily:
import pandas as pd
input_data = [
'2019-09-16T06:44:01+02:00',
'2019-11-11T09:13:01+01:00',
]
dti = pd.to_datetime(input_data, utc=True)
# dti
# DatetimeIndex(['2019-09-16 04:44:01+00:00', '2019-11-11 08:13:01+00:00'], dtype='datetime64[ns, UTC]', freq=None)
I prefer to work with UTC so I'd be fine with that. If however you need date/time in a certain time zone, you can convert e.g. like
dti = dti.tz_convert('Europe/Berlin')
# dti
# DatetimeIndex(['2019-09-16 06:44:01+02:00', '2019-11-11 09:13:01+01:00'], dtype='datetime64[ns, Europe/Berlin]', freq=None)
A pandas datetime column also requires the offset to be the same; a column with different offsets will not be converted to a datetime dtype.
I suggest not converting the data to a datetime until it's in pandas.
Separate the time offset and treat it as a timedelta. to_timedelta requires a format of 'hh:mm:ss', so add ':00' to the end of the offset. See Pandas: Time deltas for all the available timedelta operations.
To convert to a specific time zone, use pandas.Series.dt.tz_convert / pandas.Series.tz_localize:
If a datetime is not of datetime64[ns, UTC] dtype, first use .dt.tz_localize('UTC') before .dt.tz_convert('US/Pacific').
Otherwise df.datetime_utc.dt.tz_convert('US/Pacific') is enough (a minimal sketch of the localize-then-convert path follows below).
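For naive timestamps with no offset in the string, that localize-then-convert step might look like this (the sample values are the UTC equivalents of the question's data, used only for illustration):
import pandas as pd
naive = pd.Series(pd.to_datetime(['2019-09-16 04:44:01', '2019-11-11 08:13:01']))
naive.dt.tz_localize('UTC').dt.tz_convert('US/Pacific')
# 0   2019-09-15 21:44:01-07:00
# 1   2019-11-11 00:13:01-08:00
# dtype: datetime64[ns, US/Pacific]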
import pandas as pd
# sample data
input_data = ['2019-09-16T06:44:01+02:00', '2019-11-11T09:13:01+01:00']
# dataframe
df = pd.DataFrame(input_data, columns=['datetime'])
# separate the offset from the datetime and convert it to a timedelta
df['offset'] = pd.to_timedelta(df.datetime.str[-6:] + ':00')
# if desired, create a str with the separated datetime
# converting this to a datetime will lead to AmbiguousTimeError because of overlapping datetimes at 2AM, per the OP
df['datetime_str'] = df.datetime.str[:-6]
# convert the datetime column to a datetime format without the offset
df['datetime_utc'] = pd.to_datetime(df.datetime, utc=True)
# display(df)
datetime offset datetime_str datetime_utc
0 2019-09-16T06:44:01+02:00 0 days 02:00:00 2019-09-16 06:44:01 2019-09-16 04:44:01+00:00
1 2019-11-11T09:13:01+01:00 0 days 01:00:00 2019-11-11 09:13:01 2019-11-11 08:13:01+00:00
print(df.info())
[out]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2 entries, 0 to 1
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 datetime 2 non-null object
1 offset 2 non-null timedelta64[ns]
2 datetime_str 2 non-null object
3 datetime_utc 2 non-null datetime64[ns, UTC]
dtypes: datetime64[ns, UTC](1), object(2), timedelta64[ns](1)
memory usage: 192.0+ bytes
# convert to local timezone
df.datetime_utc.dt.tz_convert('US/Pacific')
[out]:
0 2019-09-15 21:44:01-07:00
1 2019-11-11 00:13:01-08:00
Name: datetime_utc, dtype: datetime64[ns, US/Pacific]
Other Resources
Calculate Pandas DataFrame Time Difference Between Two Columns in Hours and Minutes.
Talk Python to Me: Episode #271: Unlock the mysteries of time, Python's datetime that is!
Real Python: Using Python datetime to Work With Dates and Times
The dateutil module provides powerful extensions to the standard datetime module.
My dataframe has a column which measures time difference in the format HH:MM:SS.000
The DataFrame is read from an Excel file, and the column which stores the time difference has object dtype. Some entries have a negative time difference; the negative sign doesn't matter to me and needs to be removed, since it prevents a filter condition I have from working.
Note: I only have the negative time difference there because of the issue I'm currently having.
I've tried several functions, but I get errors because some of the time difference values are just 00:00:00, some are 00:00:02.65, and some are 00:00:02.111.
Firstly, how would I ensure that all data in this column is in the 00:00:00.000 format? And then, how would I remove the '-' from some of the data?
Here's a sample of the time diff column. I can't transform it to a datetime, as some of the entries don't have 3 digits after the decimal. Is there a way to iterate through the column and pad with a 0 if the value isn't 12 characters long?
00:00:02.97
00:00:03:145
00:00:00
00:00:12:56
28 days 03:05:23.439
It looks like you need to clean your input before you can parse to timedelta, e.g. with the following function:
import pandas as pd
def clean_td_string(s):
    # strings like '00:00:03:145' use a colon before the fractional seconds;
    # replace that last colon with a dot so pandas.to_timedelta can parse them
    if s.count(':') > 2:
        return '.'.join(s.rsplit(':', 1))
    return s
Applied to a df's column, this looks like
df = pd.DataFrame({'Time Diff': ['00:00:02.97', '00:00:03:145', '00:00:00', '00:00:12:56', '28 days 03:05:23.439']})
df['Time Diff'] = pd.to_timedelta(df['Time Diff'].apply(clean_td_string))
# df['Time Diff']
# 0 0 days 00:00:02.970000
# 1 0 days 00:00:03.145000
# 2 0 days 00:00:00
# 3 0 days 00:00:12.560000
# 4 28 days 03:05:23.439000
# Name: Time Diff, dtype: timedelta64[ns]
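Regarding the leading minus sign mentioned in the question, one option (the negative sample value here is illustrative, since the posted sample has none) is to strip the sign from the strings before cleaning and parsing; alternatively, the parsed timedelta Series can be passed through .abs():
s = pd.Series(['-00:00:02.97', '00:00:03:145'])
pd.to_timedelta(s.str.lstrip('-').apply(clean_td_string))
# 0   0 days 00:00:02.970000
# 1   0 days 00:00:03.145000
# dtype: timedelta64[ns]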
Here is a sample of the date format:
data = pd.DataFrame({'Quarter':['Q1_01','Q2_01', 'Q3_01', 'Q4_01', 'Q1_02','Q2_02']
, 'Sale' :[10, 20, 30, 40, 50, 60]})
print(data)
# Quarter Sale
#0 Q1_01 10
#1 Q2_01 20
#2 Q3_01 30
#3 Q4_01 40
#4 Q1_02 50
#5 Q2_02 60
print(data.dtypes)
# Quarter object
# Sale int64
I would like to convert the Quarter column into a pandas datetime format like
'Jan-2001' or '01-2001' that can be used in fbProphet for time series analysis.
I tried using strptime but got an error: TypeError: strptime() argument 1 must be str, not Series
from datetime import datetime
data['Quarter'] = datetime.strptime(data['Quarter'], 'Q%q_%y')
What is the cause of the error? Is there a better solution?
It helps to know the quarter format that to_datetime can parse (it is along the lines of YYYY-QX), so we start with replace, then to_datetime, and finally strftime:
u = df.Quarter.str.replace(r'(Q\d)_(\d+)', r'20\2-\1', regex=True)
pd.to_datetime(u).dt.strftime('%b-%Y')
0 Jan-2001
1 Apr-2001
2 Jul-2001
3 Oct-2001
4 Jan-2002
5 Apr-2002
Name: Quarter, dtype: object
The month represents the start of its respective quarter.
If the dates can range across the 90s and the 2000s, then let's try something different:
df = pd.DataFrame({'Quarter':['Q1_98','Q2_99', 'Q3_01', 'Q4_01', 'Q1_02','Q2_02']})
dt = pd.to_datetime(df.Quarter.str.replace(r'(Q\d)_(\d+)', r'\2-\1', regex=True))
(dt.where(dt <= pd.to_datetime('today'), dt - pd.DateOffset(years=100))
.dt.strftime('%b-%Y'))
0 Jan-1998
1 Apr-1999
2 Jul-2001
3 Oct-2001
4 Jan-2002
5 Apr-2002
Name: Quarter, dtype: object
pd.to_datetime auto-parses "98" as "2098", so we do a little fix to subtract 100 years from dates later than "today's date".
This hack will stop working in a few decades. Ye pandas gods, have mercy on my soul :-)
Another option is parsing to PeriodIndex:
(pd.PeriodIndex(df.Quarter.str.replace(r'(Q\d)_(\d+)', r'20\2-\1', regex=True), freq='Q')
.strftime('%b-%Y'))
# Index(['Mar-2001', 'Jun-2001', 'Sep-2001',
# 'Dec-2001', 'Mar-2002', 'Jun-2002'], dtype='object')
Here, the months printed out are at the ends of their respective quarters. You decide what to use.
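If the PeriodIndex route is preferred but with the months at the starts of the quarters, converting the periods to timestamps first should do it; a sketch using the same replace pattern and the question's original quarters:
(pd.PeriodIndex(df.Quarter.str.replace(r'(Q\d)_(\d+)', r'20\2-\1', regex=True), freq='Q')
 .to_timestamp()            # timestamps at the start of each quarter
 .strftime('%b-%Y'))
# Index(['Jan-2001', 'Apr-2001', 'Jul-2001',
#        'Oct-2001', 'Jan-2002', 'Apr-2002'], dtype='object')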