Pandas Timezone Aware Index Drops Timezone When Converting To Series

I am trying to get the time index of a dataframe as a series, but it appears to be dropping the timezone when I call the method to_series. Below is an example. Is this a bug or am I doing something incorrectly?
import numpy as np
import pandas as pd
import pytz

rows = 50
df = pd.DataFrame(np.random.randn(rows, 2), columns=list('AB'),
                  index=pd.date_range('1/1/2000', periods=rows, freq='1H', tz=pytz.UTC))
print(df.index[-1])
# 2000-01-03 01:00:00+00:00
print(df.index.to_series()[-1])
# 2000-01-03 01:00:00
print(df.index[-1].tzinfo)
# UTC
print(df.index.to_series()[-1].tzinfo)
# None

No, you are not doing anything wrong, and it is not a bug either.
It is a currently known limitation of pandas/numpy: timezone-aware datetime data are only supported in the index. In a series, the data are stored as numpy datetime64 values, which do not support timezones. There is an open issue for adding timezone support to series as well: https://github.com/pydata/pandas/issues/8260
A workaround is to store your data as object dtype instead of datetime64 (the values are then stored as Timestamp objects, a subclass of datetime.datetime). This lets you keep the timezone info.
to_series has a keep_tz keyword argument for exactly this (see its docstring):
In [34]: df = df.tz_convert('US/Eastern')
In [35]: df.index.to_series()[-1]
Out[35]: Timestamp('2000-01-03 01:00:00')
In [36]: df.index.to_series(keep_tz=True)[-1]
Out[36]: Timestamp('2000-01-02 20:00:00-0500', tz='US/Eastern', offset='H')
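For reference, this limitation has since been lifted: recent pandas versions can hold timezone-aware data in a Series with the datetime64[ns, tz] dtype, and (to the best of my knowledge, since pandas 1.0) to_series keeps the timezone by default. A minimal sketch:
import pandas as pd

idx = pd.date_range('1/1/2000', periods=3, freq='h', tz='UTC')
s = idx.to_series()          # timezone preserved by default on recent pandas
print(s.dtype)               # datetime64[ns, UTC]
print(s.iloc[-1].tzinfo)     # UTC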

What does .value return when applied to pandas TimeStamp? [duplicate]

I want to convert a Datetime (2014-12-23 00:00:00) into unix time. I tried it with the Datetime function, but it didn't work. I have the Datetime stamps in an array.
Zeit = np.array(Jahresgang1.ix[:, 'Zeitstempel'])
t = pd.to_datetime(Zeit, unit='s')
unixtime = pd.DataFrame(t)
print(unixtime)
Thanks a lot
I think you can subtract the date 1970-1-1 to create a timedelta and then access total_seconds (dt below is the standard library datetime module, imported as import datetime as dt):
In [130]:
s = pd.Series(pd.datetime(2012,1,1))
s
Out[130]:
0 2012-01-01
dtype: datetime64[ns]
In [158]:
(s - dt.datetime(1970,1,1)).dt.total_seconds()
Out[158]:
0 1325376000
dtype: float64
To emphasize EdChum's first comment, you can directly get the Unix time like this:
import pandas as pd
s = pd.to_datetime(["2014-12-23 00:00:00"])
unix = s.astype("int64")
print(unix)
# Int64Index([1419292800000000000], dtype='int64')
or for a pd.Timestamp:
print(pd.to_datetime("2014-12-23 00:00:00").value)
# 1419292800000000000
Notes
The output precision is nanoseconds; if you want another unit, divide accordingly, e.g. by 10⁹ to get seconds or by 10⁶ to get milliseconds.
This assumes the input date/time is UTC, unless a time zone / UTC offset is specified.
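For example, to go from those nanosecond integers to Unix seconds or milliseconds, a minimal sketch (the input here is naive and therefore interpreted as UTC):
import pandas as pd

s = pd.to_datetime(["2014-12-23 00:00:00"])  # naive, interpreted as UTC
ns = s.astype("int64")       # nanoseconds since the epoch
print(ns[0] // 10**9)        # 1419292800      (seconds)
print(ns[0] // 10**6)        # 1419292800000   (milliseconds)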

Remove the time from datetime.datetime in pandas column

I have a pandas column called 'date'
which has values and type like 2014-07-30 00:00:00 <class 'datetime.datetime'>.
I want to remove the time from the date, the end result being '2014-07-30' in datetime.datetime format.
I tried a bunch of solutions like:
df['PSG Date '] = df['PSG Date '].dt.date
but it's giving me the error:
AttributeError: Can only use .dt accessor with datetimelike values
I believe you need to_datetime first, and for dates use dt.date:
df['PSG Date '] = pd.to_datetime(df['PSG Date '], errors='coerce').dt.date
If you want datetimes with no time component, use dt.floor:
df['PSG Date '] = pd.to_datetime(df['PSG Date '], errors='coerce').dt.floor('d')
First, you should begin with a datetime series; if you don't have one, use pd.to_datetime to force this conversion. This will permit vectorised computations:
df = pd.DataFrame({'col': ['2014-07-30 12:19:22', '2014-07-30 05:52:05',
                           '2014-07-30 20:15:00']})
df['col'] = pd.to_datetime(df['col'])
Next, note you cannot remove time from a datetime series in Pandas. By definition, a datetime series will include both "date" and "time" components.
Normalize time
You can use pd.Series.dt.floor or pd.Series.dt.normalize to reset the time component to 00:00:00:
df['col_floored'] = df['col'].dt.floor('d')
df['col_normalized'] = df['col'].dt.normalize()
print(df['col_floored'].iloc[0]) # 2014-07-30 00:00:00
print(df['col_normalized'].iloc[0]) # 2014-07-30 00:00:00
Convert to datetime.date pointers
You can convert your datetime series to an object series, consisting of datetime.date objects representing dates:
df['col_date'] = df['col'].dt.date
print(df['col_date'].iloc[0]) # 2014-07-30
Since these are not held in a contiguous memory block, operations on df['col_date'] will not be vectorised.
How to check the difference
It's useful to check the dtype for the series we have derived. Notice the one option which "removes" time involves converting your series to object.
Computations will be non-vectorised with such a series, since it consists of pointers to datetime.date objects instead of data in a contiguous memory block.
print(df.dtypes)
col datetime64[ns]
col_date object
col_floored datetime64[ns]
col_normalized datetime64[ns]
You can convert a datetime.datetime to a datetime.date by calling the .date() method of the object, e.g.:
import datetime

current_datetime = datetime.datetime.now()
date_only = current_datetime.date()

Prevent Pandas to_json() from adding time component to date object

I have a dataframe that contains some date objects. I need to convert it to JSON for use in JavaScript, which requires YYYY-MM-DD, but to_json() keeps adding a time component. I've seen a number of answers that convert to a string first, but this is part of a loop over about 15 queries, each with many columns (simplified for this question), and I don't want to hardcode each column conversion because there are a lot of them.
import pandas as pd
from datetime import date
df = pd.DataFrame(data=[[date(year=2018, month=1, day=1)]])
print(df.to_json(orient='records', date_format='iso', date_unit='s'))
Output:
[{"0":"2018-01-01T00:00:00Z"}]
Desired Output:
[{"0":"2018-01-01"}]
Pandas does not currently have this feature. There is an open issue about it; you may want to subscribe to the issue in case more options for the date_format argument are added in a future release (it seems like a reasonable feature request):
No way with to_json to write only date out of datetime #16492
Manually converting the relevant columns to string before dumping out json is likely the best option.
You could use strftime('%Y-%m-%d') format like so:
df = pd.DataFrame(data=[[date(year=2018, month=1, day=1).strftime('%Y-%m-%d')]])
print(df.to_json(orient='records', date_format='iso', date_unit='s'))
# [{"0":"2018-01-01"}]
I think this is the best approach for now until pandas adds a way to write only the date out of datetime.
Demo:
Source DF:
In [249]: df = pd.DataFrame({
...: 'val':np.random.rand(5),
...: 'date1':pd.date_range('2018-01-01',periods=5),
...: 'date2':pd.date_range('2017-12-15',periods=5)
...: })
In [250]: df
Out[250]:
date1 date2 val
0 2018-01-01 2017-12-15 0.539349
1 2018-01-02 2017-12-16 0.308532
2 2018-01-03 2017-12-17 0.788588
3 2018-01-04 2017-12-18 0.526541
4 2018-01-05 2017-12-19 0.887299
In [251]: df.dtypes
Out[251]:
date1 datetime64[ns]
date2 datetime64[ns]
val float64
dtype: object
You can cast datetime columns to strings in one command:
In [252]: df.update(df.loc[:, df.dtypes.astype(str).str.contains('date')].astype(str))
In [253]: df.dtypes
Out[253]:
date1 object
date2 object
val float64
dtype: object
In [254]: df.to_json(orient='records')
Out[254]: '[{"date1":"2018-01-01","date2":"2017-12-15","val":0.5393488718},{"date1":"2018-01-02","date2":"2017-12-16","val":0.3085324043},{"date1":"2018-01-03","date2":"2017-12-17","val":0.7885879674},{"date1":"2018-01-04","date2":"2017-12-18","val":0.5265407505},{"date1":"2018-01-05","date2":"2017-12-19","val":0.887298853}]'
Alternatively, you can cast the date columns to strings on the SQL side.
I had that problem as well, but since I was looking only for the date and discarding the timezone, I was able to work around it with the following expression:
from datetime import datetime

df = pd.read_json('test.json')
df['date_hour'] = [datetime.strptime(date[0:10], '%Y-%m-%d').date() for date in df['date_hour']]
So if the JSON file stores df['date_hour'] in 'iso' date_format, e.g. "2018-01-01T00:00:00Z", you can use this solution.
This way you extract only the part that really matters. Note that you must use the list comprehension, because the conversion has to be done string by string (row by row); calling datetime.strptime on the series directly would raise an error saying it cannot be used with a series.
A generic solution would be as follows:
df.assign(**df.select_dtypes(['datetime']).astype(str).to_dict('list')).to_json(orient="records")
Based on the dtype, it selects the datetime columns and casts them to str, so the date format is kept during serialization.
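A quick usage sketch of that generic approach (the column names here are made up for illustration):
import pandas as pd

df = pd.DataFrame({
    'when': pd.to_datetime(['2018-01-01', '2018-01-02']),  # datetime64[ns] column
    'val': [1.5, 2.5],
})
out = df.assign(
    **df.select_dtypes(['datetime']).astype(str).to_dict('list')
).to_json(orient='records')
print(out)  # expected, per the demo above: [{"when":"2018-01-01","val":1.5},...]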

How to convert timedelta to time of day in pandas?

I have a SQL table that contains data of the MySQL time type, as follows:
time_of_day
-----------
12:34:56
I then use pandas to read the table in:
df = pd.read_sql('select * from time_of_day', engine)
Looking at df.dtypes yields:
time_of_day timedelta64[ns]
My main issue is that, when writing my df to a csv file, the data comes out all messed up, instead of essentially looking like my SQL table:
time_of_day
0 days 12:34:56.000000000
I'd like to instead (obviously) store this record as a time, but I can't find anything in the pandas docs that talk about a time dtype.
Does pandas lack this functionality intentionally? Is there a way to solve my problem without requiring janky data casting?
Seems like this should be elementary, but I'm confounded.
Pandas does not support a time dtype series
Pandas (and NumPy) do not have a time dtype. Since you wish to avoid Pandas timedelta, you have 3 options: Pandas datetime, Python datetime.time, or Python str. Below they are presented in order of preference. Let's assume you start with the following dataframe:
df = pd.DataFrame({'time': pd.to_timedelta(['12:34:56', '05:12:45', '15:15:06'])})
print(df['time'].dtype) # timedelta64[ns]
Pandas datetime series
You can use Pandas datetime series and include an arbitrary date component, e.g. today's date. Underlying such a series are integers, which makes this solution the most efficient and adaptable.
The default date, if unspecified, is 1-Jan-1970:
df['time'] = pd.to_datetime(df['time'])
print(df)
# time
# 0 1970-01-01 12:34:56
# 1 1970-01-01 05:12:45
# 2 1970-01-01 15:15:06
You can also specify a date, such as today (again starting from the original timedelta series):
df['time'] = pd.Timestamp('today').normalize() + df['time']
print(df)
# time
# 0 2019-01-02 12:34:56
# 1 2019-01-02 05:12:45
# 2 2019-01-02 15:15:06
Pandas object series of Python datetime.time values
The Python datetime module from the standard library supports datetime.time objects. You can convert your series to an object dtype series containing pointers to a sequence of datetime.time objects. Operations will no longer be vectorised, since the series now holds Python objects rather than a contiguous numeric block.
df['time'] = pd.to_datetime(df['time']).dt.time
print(df)
# time
# 0 12:34:56
# 1 05:12:45
# 2 15:15:06
print(df['time'].dtype)
# object
print(type(df['time'].at[0]))
# <class 'datetime.time'>
Pandas object series of Python str values
Converting to strings is only recommended for presentation purposes that are not supported by other types, e.g. Pandas datetime or Python datetime.time. For example:
df['time'] = pd.to_datetime(df['time']).dt.strftime('%H:%M:%S')
print(df)
# time
# 0 12:34:56
# 1 05:12:45
# 2 15:15:06
print(df['time'].dtype)
# object
print(type(df['time'].at[0]))
# <class 'str'>
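To tie this back to the CSV output in the question, a minimal sketch of writing one of these representations out (anchoring the timedeltas to the epoch is equivalent to the default 1-Jan-1970 date used above):
import pandas as pd

df = pd.DataFrame({'time_of_day': pd.to_timedelta(['12:34:56', '05:12:45', '15:15:06'])})

# Anchor the timedeltas to the epoch so strftime can format them as times of day
df['time_of_day'] = (pd.Timestamp('1970-01-01') + df['time_of_day']).dt.strftime('%H:%M:%S')
print(df.to_csv(index=False))
# time_of_day
# 12:34:56
# 05:12:45
# 15:15:06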
It's a hack, but you can pull out the components to create a string and convert that string to a datetime.time(h, m, s) object:
from datetime import datetime

def convert(td):
    time = [str(td.components.hours), str(td.components.minutes),
            str(td.components.seconds)]
    return datetime.strptime(':'.join(time), '%H:%M:%S').time()

df['time'] = df['time'].apply(convert)
Found a solution, but I feel like there has to be something more elegant than this:
def convert(x):
    return pd.to_datetime(x).strftime('%H:%M:%S')

df['time_of_day'] = df['time_of_day'].apply(convert)
df['time_of_day'] = pd.to_datetime(df['time_of_day']).apply(lambda x: x.time())
Adapted this code

Convert pandas timezone-aware DateTimeIndex to naive timestamp, but in certain timezone

You can use the function tz_localize to make a Timestamp or DateTimeIndex timezone aware, but how can you do the opposite: how can you convert a timezone aware Timestamp to a naive one, while preserving its timezone?
An example:
In [82]: t = pd.date_range(start="2013-05-18 12:00:00", periods=10, freq='s', tz="Europe/Brussels")
In [83]: t
Out[83]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-05-18 12:00:00, ..., 2013-05-18 12:00:09]
Length: 10, Freq: S, Timezone: Europe/Brussels
I could remove the timezone by setting it to None, but then the result is converted to UTC (12 o'clock became 10):
In [86]: t.tz = None
In [87]: t
Out[87]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-05-18 10:00:00, ..., 2013-05-18 10:00:09]
Length: 10, Freq: S, Timezone: None
Is there another way to convert a DateTimeIndex to timezone naive, while preserving the timezone it was set in?
Some context on the reason I am asking this: I want to work with timezone naive timeseries (to avoid the extra hassle with timezones, and I do not need them for the case I am working on).
But for some reason, I have to deal with a timezone-aware timeseries in my local timezone (Europe/Brussels). As all my other data are timezone naive (but represented in my local timezone), I want to convert this timeseries to naive to further work with it, but it also has to be represented in my local timezone (so just remove the timezone info, without converting the user-visible time to UTC).
I know the time is actually stored internally as UTC and only converted to another timezone when you represent it, so there has to be some kind of conversion when I want to "delocalize" it. For example, with the python datetime module you can "remove" the timezone like this:
In [119]: d = pd.Timestamp("2013-05-18 12:00:00", tz="Europe/Brussels")
In [120]: d
Out[120]: <Timestamp: 2013-05-18 12:00:00+0200 CEST, tz=Europe/Brussels>
In [121]: d.replace(tzinfo=None)
Out[121]: <Timestamp: 2013-05-18 12:00:00>
So, based on this, I could do the following, but I suppose this will not be very efficient when working with a larger timeseries:
In [124]: t
Out[124]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-05-18 12:00:00, ..., 2013-05-18 12:00:09]
Length: 10, Freq: S, Timezone: Europe/Brussels
In [125]: pd.DatetimeIndex([i.replace(tzinfo=None) for i in t])
Out[125]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-05-18 12:00:00, ..., 2013-05-18 12:00:09]
Length: 10, Freq: None, Timezone: None
To answer my own question, this functionality has been added to pandas in the meantime. Starting from pandas 0.15.0, you can use tz_localize(None) to remove the timezone resulting in local time.
See the whatsnew entry: http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#timezone-handling-improvements
So with my example from above:
In [4]: t = pd.date_range(start="2013-05-18 12:00:00", periods=2, freq='H',
tz= "Europe/Brussels")
In [5]: t
Out[5]: DatetimeIndex(['2013-05-18 12:00:00+02:00', '2013-05-18 13:00:00+02:00'],
dtype='datetime64[ns, Europe/Brussels]', freq='H')
Using tz_localize(None) removes the timezone information, resulting in naive local time:
In [6]: t.tz_localize(None)
Out[6]: DatetimeIndex(['2013-05-18 12:00:00', '2013-05-18 13:00:00'],
dtype='datetime64[ns]', freq='H')
Further, you can also use tz_convert(None) to remove the timezone information while converting to UTC, yielding naive UTC time:
In [7]: t.tz_convert(None)
Out[7]: DatetimeIndex(['2013-05-18 10:00:00', '2013-05-18 11:00:00'],
dtype='datetime64[ns]', freq='H')
This is much more performant than the datetime.replace solution:
In [31]: t = pd.date_range(start="2013-05-18 12:00:00", periods=10000, freq='H',
tz="Europe/Brussels")
In [32]: %timeit t.tz_localize(None)
1000 loops, best of 3: 233 µs per loop
In [33]: %timeit pd.DatetimeIndex([i.replace(tzinfo=None) for i in t])
10 loops, best of 3: 99.7 ms per loop
Because I always struggle to remember, a quick summary of what each of these do:
>>> pd.Timestamp.now() # naive local time
Timestamp('2019-10-07 10:30:19.428748')
>>> pd.Timestamp.utcnow() # tz aware UTC
Timestamp('2019-10-07 08:30:19.428748+0000', tz='UTC')
>>> pd.Timestamp.now(tz='Europe/Brussels') # tz aware local time
Timestamp('2019-10-07 10:30:19.428748+0200', tz='Europe/Brussels')
>>> pd.Timestamp.now(tz='Europe/Brussels').tz_localize(None) # naive local time
Timestamp('2019-10-07 10:30:19.428748')
>>> pd.Timestamp.now(tz='Europe/Brussels').tz_convert(None) # naive UTC
Timestamp('2019-10-07 08:30:19.428748')
>>> pd.Timestamp.utcnow().tz_localize(None) # naive UTC
Timestamp('2019-10-07 08:30:19.428748')
>>> pd.Timestamp.utcnow().tz_convert(None) # naive UTC
Timestamp('2019-10-07 08:30:19.428748')
I think you can't achieve what you want in a more efficient manner than you proposed.
The underlying problem is that the timestamps (as you seem aware) are made up of two parts: the data that represents the UTC time, and the timezone, tz_info. The timezone information is used only for display purposes, when printing the timestamp to the screen. At display time, the data is offset appropriately and +01:00 (or similar) is added to the string. Stripping off the tz_info value (using tz_convert(tz=None)) doesn't actually change the data that represents the naive part of the timestamp.
So, the only way to do what you want is to modify the underlying data (pandas doesn't allow this: DatetimeIndex is immutable, see the help on DatetimeIndex), or to create a new set of timestamp objects and wrap them in a new DatetimeIndex. Your solution does the latter:
pd.DatetimeIndex([i.replace(tzinfo=None) for i in t])
For reference, here is the replace method of Timestamp (see tslib.pyx):
def replace(self, **kwds):
    return Timestamp(datetime.replace(self, **kwds),
                     offset=self.offset)
You can refer to the docs on datetime.datetime to see that datetime.datetime.replace also creates a new object.
If you can, your best bet for efficiency is to modify the source of the data so that it (incorrectly) reports the timestamps without their timezone. You mentioned:
I want to work with timezone naive timeseries (to avoid the extra hassle with timezones, and I do not need them for the case I am working on)
I'd be curious what extra hassle you are referring to. I recommend as a general rule for all software development, keep your timestamp 'naive values' in UTC. There is little worse than looking at two different int64 values wondering which timezone they belong to. If you always, always, always use UTC for the internal storage, then you will avoid countless headaches. My mantra is Timezones are for human I/O only.
The accepted solution does not work when there are multiple different timezones in a Series. It throws ValueError: Tz-aware datetime.datetime cannot be converted to datetime64 unless utc=True
The solution is to use the apply method.
Please see the examples below:
# Let's have a series `a` with different multiple timezones.
> a
0 2019-10-04 16:30:00+02:00
1 2019-10-07 16:00:00-04:00
2 2019-09-24 08:30:00-07:00
Name: localized, dtype: object
> a.iloc[0]
Timestamp('2019-10-04 16:30:00+0200', tz='Europe/Amsterdam')
# trying the accepted solution
> a.dt.tz_localize(None)
ValueError: Tz-aware datetime.datetime cannot be converted to datetime64 unless utc=True
# Make it tz-naive. This is the solution:
> a.apply(lambda x:x.tz_localize(None))
0 2019-10-04 16:30:00
1 2019-10-07 16:00:00
2 2019-09-24 08:30:00
Name: localized, dtype: datetime64[ns]
# a.tz_convert() also does not work with multiple timezones, but this works:
> a.apply(lambda x:x.tz_convert('America/Los_Angeles'))
0 2019-10-04 07:30:00-07:00
1 2019-10-07 13:00:00-07:00
2 2019-09-24 08:30:00-07:00
Name: localized, dtype: datetime64[ns, America/Los_Angeles]
Setting the tz attribute of the index explicitly seems to work:
ts_utc = ts.tz_convert("UTC")
ts_utc.index.tz = None
Late contribution, but I just came across something similar in "Python datetime and pandas give different timestamps for the same date".
If you have timezone-aware datetimes in pandas, technically, tz_localize(None) changes the POSIX timestamp (that is used internally) as if the local time from the timestamp were UTC. Local in this context means local in the specified timezone. For example:
import pandas as pd
t = pd.date_range(start="2013-05-18 12:00:00", periods=2, freq='H', tz="US/Central")
# DatetimeIndex(['2013-05-18 12:00:00-05:00', '2013-05-18 13:00:00-05:00'], dtype='datetime64[ns, US/Central]', freq='H')
t_loc = t.tz_localize(None)
# DatetimeIndex(['2013-05-18 12:00:00', '2013-05-18 13:00:00'], dtype='datetime64[ns]', freq='H')
# offset in seconds according to timezone:
(t_loc.values-t.values)//1e9
# array([-18000, -18000], dtype='timedelta64[ns]')
Note that this will leave you with strange things during DST transitions, e.g.
t = pd.date_range(start="2020-03-08 01:00:00", periods=2, freq='H', tz="US/Central")
(t.values[1]-t.values[0])//1e9
# numpy.timedelta64(3600,'ns')
t_loc = t.tz_localize(None)
(t_loc.values[1]-t_loc.values[0])//1e9
# numpy.timedelta64(7200,'ns')
In contrast, tz_convert(None) does not modify the internal timestamp, it just removes the tzinfo.
t_utc = t.tz_convert(None)
(t_utc.values-t.values)//1e9
# array([0, 0], dtype='timedelta64[ns]')
My bottom line would be: stick with timezone-aware datetime if you can or only use t.tz_convert(None) which doesn't modify the underlying POSIX timestamp. Just keep in mind that you're practically working with UTC then.
(Python 3.8.2 x64 on Windows 10, pandas v1.0.5.)
Building on D.A.'s suggestion that "the only way to do what you want is to modify the underlying data" and using numpy to modify the underlying data...
This works for me, and is pretty fast:
import numpy as np
import pandas as pd

def tz_to_naive(datetime_index):
    """Converts a tz-aware DatetimeIndex into a tz-naive DatetimeIndex,
    effectively baking the timezone into the internal representation.

    Parameters
    ----------
    datetime_index : pandas.DatetimeIndex, tz-aware

    Returns
    -------
    pandas.DatetimeIndex, tz-naive
    """
    # Calculate timezone offset relative to UTC
    timestamp = datetime_index[0]
    tz_offset = (timestamp.replace(tzinfo=None) -
                 timestamp.tz_convert('UTC').replace(tzinfo=None))
    tz_offset_td64 = np.timedelta64(tz_offset)

    # Now convert to naive DatetimeIndex
    return pd.DatetimeIndex(datetime_index.values + tz_offset_td64)
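A quick usage sketch of the helper above (timezone and dates chosen purely for illustration):
idx = pd.date_range("2013-05-18 12:00:00", periods=3, freq="h", tz="Europe/Brussels")
print(tz_to_naive(idx))
# DatetimeIndex(['2013-05-18 12:00:00', '2013-05-18 13:00:00',
#                '2013-05-18 14:00:00'], dtype='datetime64[ns]', freq=None)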
The most important thing is to add tzinfo when you define a datetime object.
from datetime import datetime, timezone
from tzinfo_examples import HOUR, Eastern

u0 = datetime(2016, 3, 13, 5, tzinfo=timezone.utc)
for i in range(4):
    u = u0 + i*HOUR
    t = u.astimezone(Eastern)
    print(u.time(), 'UTC =', t.time(), t.tzname())
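Note that tzinfo_examples is the helper module from the Python documentation, not an installable package; roughly the same demonstration with the standard-library zoneinfo (Python 3.9+) might look like this:
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

eastern = ZoneInfo("America/New_York")
u0 = datetime(2016, 3, 13, 5, tzinfo=timezone.utc)
for i in range(4):
    u = u0 + i * timedelta(hours=1)
    t = u.astimezone(eastern)
    print(u.time(), 'UTC =', t.time(), t.tzname())  # shows the EST -> EDT jump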
Here is how I handled this problem with a 15-minute-frequency DatetimeIndex in Europe.
If you are in the situation where you have a timezone-aware index (Europe/Amsterdam in my case) and want to convert it into a timezone-naive index by transforming everything into local time, you will have DST problems, namely:
there will be 1 hour missing on the last Sunday of March (when Europe switches to summer time)
there will be 1 duplicated hour on the last Sunday of October (when Europe switches back to winter time)
Here is how you can handle it:
# make index tz naive
df.index = df.index.tz_localize(None)

# handle dst
if df.index[0].month == 3:
    # last sunday of march, one hour is lost
    df = df.resample("15min").pad()
if df.index[0].month == 10:
    # in october, one hour is added
    df = df[~df.index.duplicated(keep='last')]
Note: in my case, I run the above code on a df that contains only a single month, hence I use df.index[0].month to find the month. If yours contains more months, you should check this differently to know when to apply the DST handling.
The handling consists of resampling and padding from the last valid value in March, to avoid losing the hour (all my data is in 15-minute intervals, hence that resample frequency; use whatever your interval is), and of dropping the duplicated hour in October.
