pandas fillna removes timezone when used with value=dict

I bumped into behaviour in pandas that was unexpected to me. Here is the code, running on Python 3.10.8.
In [1]: from datetime import datetime, timezone
In [2]: import pandas
In [3]: pandas.__version__
Out[3]: '1.4.4'
In [4]: df = pandas.DataFrame(data={"end_date": [datetime(2022, 1, 20, tzinfo=timezone.utc)]})
In [5]: df.end_date.dt.tz
Out[5]: datetime.timezone.utc
In [6]: df.fillna(value={"end_date": datetime.now(tz=timezone.utc)}).end_date.dt.tz
In [7]: df.assign(end_date=lambda df: df["end_date"].fillna(datetime.now(tz=timezone.utc))).end_date.dt.tz
Out[7]: datetime.timezone.utc
As you can see, when using .fillna(value={...}), the timezone information is lost even when there are no missing values to fill, but it is kept when no dictionary is used.
Is this expected?
Thanks in advance.
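(A workaround sketch, not from the original post, generalizing the In [7] call above: apply fillna column by column through Series.fillna, which preserves the timezone.)
from datetime import datetime, timezone
import pandas

df = pandas.DataFrame(data={"end_date": [datetime(2022, 1, 20, tzinfo=timezone.utc)]})
fill_values = {"end_date": datetime.now(tz=timezone.utc)}

# Filling each column via Series.fillna keeps the tz, unlike
# DataFrame.fillna(value=dict) in pandas 1.4.4:
for col, val in fill_values.items():
    df[col] = df[col].fillna(val)

print(df.end_date.dt.tz)  # UTC (timezone preserved)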

Related

How to prevent losing nanoseconds in numpy astype

When I cast object to datetime64[ns], the nanoseconds are lost. Why is this happening and how can it be fixed?
The same behaviour occurs in Python 3.6 and 3.7.
import numpy as np
import pandas as pd
a = np.ndarray(1, dtype=object)
a[0] = pd.Timestamp.max
print(a)
print(a.astype('datetime64[ns]'))
In the output we can see that the nanoseconds are zeroed out:
[Timestamp('2262-04-11 23:47:16.854775807')]
['2262-04-11T23:47:16.854775000']
The original problem came from a pandas DataFrame and this code:
df = pd.DataFrame(columns=['col'])
#df.loc[0] = [None] # uncommenting this line makes nanoseconds being dropped
df.loc[0] = [pd.Timestamp.max]
print(df['col'].values.astype('datetime64[ns]'))
Update
The NumPy docs say that nanoseconds are supported only for datetime values within [1678 AD, 2262 AD].
But the issue is reproducible for datetime values inside that range:
import numpy as np
import pandas as pd
a = np.ndarray(1, dtype=object)
a[0] = pd.Timestamp(2020, 7, 31, 12, 12, 12, 123456, 789)
print(a)
print(a.astype('datetime64[ns]'))
In the output we can see that the nanoseconds are zeroed out:
[Timestamp('2020-07-31 12:12:12.123456789')]
['2020-07-31T12:12:12.123456000']
I accidentally found a workaround: fillna restores the nanoseconds!
df = pd.DataFrame(columns=['col'])
df.loc[0] = [None]
df.loc[0] = [pd.Timestamp(2020, 7, 31, 12, 12, 12, 123456, 789)]
print(df['col'].values.astype('datetime64[ns]'))
df['col'] = df['col'].fillna('')
print(df['col'].values.astype('datetime64[ns]'))
Output:
['2020-07-31T12:12:12.123456000']
['2020-07-31T12:12:12.123456789']
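For what it's worth, the truncation appears to happen inside NumPy's object-to-datetime64 cast, which goes through microsecond-precision datetime objects; converting through pandas instead preserves the nanoseconds (a sketch, not from the original post):
import numpy as np
import pandas as pd

a = np.ndarray(1, dtype=object)
a[0] = pd.Timestamp(2020, 7, 31, 12, 12, 12, 123456, 789)

# NumPy's object -> datetime64[ns] cast truncates to microseconds:
print(a.astype('datetime64[ns]'))  # [...T12:12:12.123456000]

# Converting through pandas keeps the full nanosecond precision:
print(pd.to_datetime(a).values)    # [...T12:12:12.123456789]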

Convert the 'datetime.date' to a datetime with 'pd.Timestamp' [duplicate]

How do I convert a numpy.datetime64 object to a datetime.datetime (or Timestamp)?
In the following code, I create datetime, Timestamp and datetime64 objects.
import datetime
import numpy as np
import pandas as pd
dt = datetime.datetime(2012, 5, 1)
# A strange way to extract a Timestamp object, there's surely a better way?
ts = pd.DatetimeIndex([dt])[0]
dt64 = np.datetime64(dt)
In [7]: dt
Out[7]: datetime.datetime(2012, 5, 1, 0, 0)
In [8]: ts
Out[8]: <Timestamp: 2012-05-01 00:00:00>
In [9]: dt64
Out[9]: numpy.datetime64('2012-05-01T01:00:00.000000+0100')
Note: it's easy to get the datetime from the Timestamp:
In [10]: ts.to_datetime()
Out[10]: datetime.datetime(2012, 5, 1, 0, 0)
But how do we extract the datetime or Timestamp from a numpy.datetime64 (dt64)?
Update: a somewhat nasty example in my dataset (perhaps the motivating example) seems to be:
dt64 = numpy.datetime64('2002-06-28T01:00:00.000000000+0100')
which should be datetime.datetime(2002, 6, 28, 1, 0), and not a long (!) (1025222400000000000L)...
You can just use the pd.Timestamp constructor. The following diagram may be useful for this and related questions. [conversion diagram not reproduced here]
Welcome to hell.
You can just pass a datetime64 object to pandas.Timestamp:
In [16]: Timestamp(numpy.datetime64('2012-05-01T01:00:00.000000'))
Out[16]: <Timestamp: 2012-05-01 01:00:00>
I noticed that this doesn't work right though in NumPy 1.6.1:
numpy.datetime64('2012-05-01T01:00:00.000000+0100')
Also, pandas.to_datetime can be used (this is off of the dev version, haven't checked v0.9.1):
In [24]: pandas.to_datetime('2012-05-01T01:00:00.000000+0100')
Out[24]: datetime.datetime(2012, 5, 1, 1, 0, tzinfo=tzoffset(None, 3600))
To convert a numpy.datetime64 to a datetime object that represents time in UTC on numpy-1.8:
>>> from datetime import datetime
>>> import numpy as np
>>> dt = datetime.utcnow()
>>> dt
datetime.datetime(2012, 12, 4, 19, 51, 25, 362455)
>>> dt64 = np.datetime64(dt)
>>> ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
>>> ts
1354650685.3624549
>>> datetime.utcfromtimestamp(ts)
datetime.datetime(2012, 12, 4, 19, 51, 25, 362455)
>>> np.__version__
'1.8.0.dev-7b75899'
The above example assumes that a naive datetime object is interpreted by np.datetime64 as time in UTC.
To convert datetime to np.datetime64 and back (numpy-1.6):
>>> np.datetime64(datetime.utcnow()).astype(datetime)
datetime.datetime(2012, 12, 4, 13, 34, 52, 827542)
It works both on a single np.datetime64 object and a numpy array of np.datetime64.
Think of np.datetime64 the same way you would about np.int8, np.int16, etc and apply the same methods to convert between Python objects such as int, datetime and corresponding numpy objects.
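For example, a quick sketch of the array case (microsecond units, so the cast yields datetime objects rather than integers):
from datetime import datetime
import numpy as np

arr = np.array(['2012-12-04T13:34:52.827542',
                '2015-04-24T23:11:26.270000'], dtype='datetime64[us]')

# Element-wise conversion to datetime.datetime objects:
print(arr.astype(datetime))
# [datetime.datetime(2012, 12, 4, 13, 34, 52, 827542)
#  datetime.datetime(2015, 4, 24, 23, 11, 26, 270000)]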
Your "nasty example" works correctly:
>>> from datetime import datetime
>>> import numpy
>>> numpy.datetime64('2002-06-28T01:00:00.000000000+0100').astype(datetime)
datetime.datetime(2002, 6, 28, 0, 0)
>>> numpy.__version__
'1.6.2' # current version available via pip install numpy
I can reproduce the long value on numpy-1.8.0 installed as:
pip install git+https://github.com/numpy/numpy.git#egg=numpy-dev
The same example:
>>> from datetime import datetime
>>> import numpy
>>> numpy.datetime64('2002-06-28T01:00:00.000000000+0100').astype(datetime)
1025222400000000000L
>>> numpy.__version__
'1.8.0.dev-7b75899'
It returns a long because, for the numpy.datetime64 type, .astype(datetime) is equivalent to .astype(object), which returns a Python integer (long) on numpy-1.8.
To get datetime object you could:
>>> dt64.dtype
dtype('<M8[ns]')
>>> ns = 1e-9 # number of seconds in a nanosecond
>>> datetime.utcfromtimestamp(dt64.astype(int) * ns)
datetime.datetime(2002, 6, 28, 0, 0)
To get datetime64 that uses seconds directly:
>>> dt64 = numpy.datetime64('2002-06-28T01:00:00.000000000+0100', 's')
>>> dt64.dtype
dtype('<M8[s]')
>>> datetime.utcfromtimestamp(dt64.astype(int))
datetime.datetime(2002, 6, 28, 0, 0)
The numpy docs say that the datetime API is experimental and may change in future numpy versions.
I think a more consolidated answer could better explain the relationship between Python's datetime module, NumPy's datetime64/timedelta64, and pandas' Timestamp/Timedelta objects.
The datetime standard library of Python
The datetime standard library has four main objects
time - only time, measured in hours, minutes, seconds and microseconds
date - only year, month and day
datetime - All components of time and date
timedelta - An amount of time with maximum unit of days
Create these four objects
>>> import datetime
>>> datetime.time(hour=4, minute=3, second=10, microsecond=7199)
datetime.time(4, 3, 10, 7199)
>>> datetime.date(year=2017, month=10, day=24)
datetime.date(2017, 10, 24)
>>> datetime.datetime(year=2017, month=10, day=24, hour=4, minute=3, second=10, microsecond=7199)
datetime.datetime(2017, 10, 24, 4, 3, 10, 7199)
>>> datetime.timedelta(days=3, minutes = 55)
datetime.timedelta(3, 3300)
>>> # add timedelta to datetime
>>> datetime.timedelta(days=3, minutes = 55) + \
datetime.datetime(year=2017, month=10, day=24, hour=4, minute=3, second=10, microsecond=7199)
datetime.datetime(2017, 10, 27, 4, 58, 10, 7199)
NumPy's datetime64 and timedelta64 objects
NumPy has no separate date and time objects, just a single datetime64 object to represent a single moment in time. The datetime module's datetime object has microsecond precision (one-millionth of a second). NumPy's datetime64 object allows you to set its precision from hours all the way to attoseconds (10^-18 s). Its constructor is more flexible and can take a variety of inputs.
Construct NumPy's datetime64 and timedelta64 objects
Pass an integer along with a string for the units (see the NumPy datetime documentation for the full list of units). It gets interpreted as that many units after the Unix epoch, Jan 1, 1970:
>>> np.datetime64(5, 'ns')
numpy.datetime64('1970-01-01T00:00:00.000000005')
>>> np.datetime64(1508887504, 's')
numpy.datetime64('2017-10-24T23:25:04')
You can also use strings as long as they are in ISO 8601 format.
>>> np.datetime64('2017-10-24')
numpy.datetime64('2017-10-24')
Timedeltas have a single unit:
>>> np.timedelta64(5, 'D')   # 5 days
numpy.timedelta64(5,'D')
>>> np.timedelta64(10, 'h')  # 10 hours
numpy.timedelta64(10,'h')
You can also create them by subtracting two datetime64 objects:
>>> np.datetime64('2017-10-24T05:30:45.67') - np.datetime64('2017-10-22T12:35:40.123')
numpy.timedelta64(147305547,'ms')
Pandas Timestamp and Timedelta build much more functionality on top of NumPy
A pandas Timestamp is a moment in time very similar to a datetime but with much more functionality. You can construct them with either pd.Timestamp or pd.to_datetime.
>>> pd.Timestamp(1239.1238934) #defaults to nanoseconds
Timestamp('1970-01-01 00:00:00.000001239')
>>> pd.Timestamp(1239.1238934, unit='D') # change units
Timestamp('1973-05-24 02:58:24.355200')
>>> pd.Timestamp('2017-10-24 05') # partial strings work
Timestamp('2017-10-24 05:00:00')
pd.to_datetime works very similarly (with a few more options) and can convert a list of strings into Timestamps.
>>> pd.to_datetime('2017-10-24 05')
Timestamp('2017-10-24 05:00:00')
>>> pd.to_datetime(['2017-1-1', '2017-1-2'])
DatetimeIndex(['2017-01-01', '2017-01-02'], dtype='datetime64[ns]', freq=None)
Converting Python datetime to datetime64 and Timestamp
>>> dt = datetime.datetime(year=2017, month=10, day=24, hour=4,
...                         minute=3, second=10, microsecond=7199)
>>> np.datetime64(dt)
numpy.datetime64('2017-10-24T04:03:10.007199')
>>> pd.Timestamp(dt) # or pd.to_datetime(dt)
Timestamp('2017-10-24 04:03:10.007199')
Converting numpy datetime64 to datetime and Timestamp
>>> dt64 = np.datetime64('2017-10-24 05:34:20.123456')
>>> unix_epoch = np.datetime64(0, 's')
>>> one_second = np.timedelta64(1, 's')
>>> seconds_since_epoch = (dt64 - unix_epoch) / one_second
>>> seconds_since_epoch
1508823260.123456
>>> datetime.datetime.utcfromtimestamp(seconds_since_epoch)
datetime.datetime(2017, 10, 24, 5, 34, 20, 123456)
Convert to Timestamp
>>> pd.Timestamp(dt64)
Timestamp('2017-10-24 05:34:20.123456')
Convert from Timestamp to datetime and datetime64
This is quite easy, as pandas Timestamps are very powerful.
>>> ts = pd.Timestamp('2017-10-24 04:24:33.654321')
>>> ts.to_pydatetime() # Python's datetime
datetime.datetime(2017, 10, 24, 4, 24, 33, 654321)
>>> ts.to_datetime64()
numpy.datetime64('2017-10-24T04:24:33.654321000')
>>> dt64.tolist()
datetime.datetime(2012, 5, 1, 0, 0)
For a DatetimeIndex, tolist returns a list of datetime objects. For a single datetime64 object it returns a single datetime object.
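For instance (a small sketch; microsecond units are assumed, since with nanosecond units tolist falls back to integers, as described above):
import numpy as np

dt64 = np.datetime64('2012-05-01T00:00:00.000000')  # 'us' units
print(dt64.tolist())
# datetime.datetime(2012, 5, 1, 0, 0)

arr = np.array(['2012-05-01', '2012-05-02'], dtype='datetime64[us]')
print(arr.tolist())
# [datetime.datetime(2012, 5, 1, 0, 0), datetime.datetime(2012, 5, 2, 0, 0)]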
One option is to use str, and then to_datetime (or similar):
In [11]: str(dt64)
Out[11]: '2012-05-01T01:00:00.000000+0100'
In [12]: pd.to_datetime(str(dt64))
Out[12]: datetime.datetime(2012, 5, 1, 1, 0, tzinfo=tzoffset(None, 3600))
Note: it is not equal to dt because it's become "offset-aware":
In [13]: pd.to_datetime(str(dt64)).replace(tzinfo=None)
Out[13]: datetime.datetime(2012, 5, 1, 1, 0)
This seems inelegant.
Update: this can deal with the "nasty example":
In [21]: dt64 = numpy.datetime64('2002-06-28T01:00:00.000000000+0100')
In [22]: pd.to_datetime(str(dt64)).replace(tzinfo=None)
Out[22]: datetime.datetime(2002, 6, 28, 1, 0)
If you want to convert an entire pandas series of datetimes to regular python datetimes, you can also use .to_pydatetime().
pd.date_range('20110101','20110102',freq='H').to_pydatetime()
> [datetime.datetime(2011, 1, 1, 0, 0) datetime.datetime(2011, 1, 1, 1, 0)
datetime.datetime(2011, 1, 1, 2, 0) datetime.datetime(2011, 1, 1, 3, 0)
....
It also supports timezones:
pd.date_range('20110101','20110102',freq='H').tz_localize('UTC').tz_convert('Australia/Sydney').to_pydatetime()
[ datetime.datetime(2011, 1, 1, 11, 0, tzinfo=<DstTzInfo 'Australia/Sydney' EST+11:00:00 DST>)
datetime.datetime(2011, 1, 1, 12, 0, tzinfo=<DstTzInfo 'Australia/Sydney' EST+11:00:00 DST>)
....
NOTE: If you are operating on a Pandas Series you cannot call to_pydatetime() on the entire series. You will need to call .to_pydatetime() on each individual datetime64 using a list comprehension or something similar:
datetimes = [val.to_pydatetime() for val in df.problem_datetime_column]
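That said, pandas also exposes a vectorized route through the .dt accessor (a sketch; problem_datetime_column is the hypothetical column from above):
# Returns a numpy array of datetime.datetime objects in one call:
datetimes = df.problem_datetime_column.dt.to_pydatetime()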
This post has been up for 4 years and I still struggled with this conversion problem, so the issue is still active in 2017 in some sense. I was somewhat shocked that the numpy documentation does not readily offer a simple conversion algorithm, but that's another story.
I have come across another way to do the conversion that involves only the numpy and datetime modules, so it does not require pandas to be imported, which seems to me a lot of code to import for such a simple conversion. I noticed that datetime64.astype(datetime.datetime) will return a datetime.datetime object if the original datetime64 is in microsecond units, while other units return an integer timestamp. I use the xarray module for data I/O from NetCDF files, which uses datetime64 in nanosecond units, making the conversion fail unless you first convert to microsecond units. Here is the example conversion code:
import numpy as np
import datetime
def convert_datetime64_to_datetime(usert: np.datetime64) -> datetime.datetime:
    t = np.datetime64(usert, 'us').astype(datetime.datetime)
    return t
It's only been tested on my machine: Python 3.6 with a recent 2017 Anaconda distribution. I have only looked at scalar conversion, not array-based conversions, although I'm guessing they will be fine. Nor have I looked at the numpy datetime64 source code to see whether the operation makes sense.
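Example usage (an illustrative value; the nanoseconds are necessarily truncated to microseconds):
dt64 = np.datetime64('2017-10-24T05:34:20.123456789')  # 'ns' units
print(convert_datetime64_to_datetime(dt64))
# 2017-10-24 05:34:20.123456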
import numpy as np
import pandas as pd
def np64toDate(np64):
    # Note: Timestamp.to_datetime() in the original has been removed from
    # newer pandas; to_pydatetime() is the current equivalent.
    return pd.to_datetime(str(np64)).replace(tzinfo=None).to_pydatetime()
Use this function to get Python's native datetime object.
I've come back to this answer more times than I can count, so I decided to throw together a quick little class, which converts a Numpy datetime64 value to Python datetime value. I hope it helps others out there.
from datetime import datetime
import pandas as pd
class NumpyConverter(object):
    @classmethod
    def to_datetime(cls, dt64, tzinfo=None):
        """
        Converts a Numpy datetime64 to a Python datetime.
        :param dt64: A Numpy datetime64 variable
        :type dt64: numpy.datetime64
        :param tzinfo: The timezone the date / time value is in
        :type tzinfo: pytz.timezone
        :return: A Python datetime variable
        :rtype: datetime
        """
        ts = pd.to_datetime(dt64)
        if tzinfo is not None:
            return datetime(ts.year, ts.month, ts.day, ts.hour, ts.minute, ts.second, tzinfo=tzinfo)
        return datetime(ts.year, ts.month, ts.day, ts.hour, ts.minute, ts.second)
I'm gonna keep this in my tool bag, something tells me I'll need it again.
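Example usage (an illustrative value, not from the original answer):
import numpy as np

dt64 = np.datetime64('2017-10-24T04:24:33')
print(NumpyConverter.to_datetime(dt64))
# 2017-10-24 04:24:33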
I did it like this:
import pandas as pd
# Custom function to convert Pandas Datetime to Timestamp
def toTimestamp(data):
    return data.timestamp()
# Read a csv file
df = pd.read_csv("friends.csv")
# Replace the "birthdate" column by:
# 1. Transform to datetime
# 2. Apply the custom function to the column just converted
df["birthdate"] = pd.to_datetime(df["birthdate"]).apply(toTimestamp)
Some solutions worked well for me, but numpy is deprecating some of the parameters they use.
The solution that works best for me is to read the date as a pandas datetime and explicitly extract the year, month and day from the pandas object.
The following code works for the most common situation.
import datetime
import pandas as pd

def format_dates(dates):
    dt = pd.to_datetime(dates)
    try:
        return [datetime.date(x.year, x.month, x.day) for x in dt]
    except TypeError:
        return datetime.date(dt.year, dt.month, dt.day)
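Example usage (illustrative inputs):
print(format_dates(['2017-10-24', '2017-10-25']))
# [datetime.date(2017, 10, 24), datetime.date(2017, 10, 25)]
print(format_dates('2017-10-24'))
# datetime.date(2017, 10, 24)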
The only way I managed to convert a 'date' column containing time info in a pandas DataFrame to a numpy array was the following (the DataFrame is read from the csv file "csvIn.csv"):
import pandas as pd
import numpy as np
df = pd.read_csv("csvIn.csv")
df["date"] = pd.to_datetime(df["date"])
timestamps = np.array([np.datetime64(value) for dummy, value in df["date"].items()])
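A simpler equivalent may suffice (a sketch, assuming the column is tz-naive after pd.to_datetime):
# The converted column already holds datetime64[ns] values,
# so the Series can be turned into a numpy array directly:
timestamps = df["date"].to_numpy()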
Indeed, all of these datetime types can be difficult, and potentially problematic (you must keep careful track of timezone information). Here's what I have done, though I admit that I am concerned that at least part of it is "not by design". Also, this can be made a bit more compact as needed.
Starting with a numpy.datetime64 dt_a:
>>> dt_a
numpy.datetime64('2015-04-24T23:11:26.270000-0700')
>>> dt_a1 = dt_a.tolist()  # yields a datetime object in UTC, but without tzinfo
>>> dt_a1
datetime.datetime(2015, 4, 25, 6, 11, 26, 270000)
>>> # now, make your "aware" datetime (requires datetime and pytz):
>>> import datetime, pytz
>>> dt_a2 = datetime.datetime(*list(dt_a1.timetuple()[:6]) + [dt_a1.microsecond], tzinfo=pytz.timezone('UTC'))
... and of course, that can be compressed into one line as needed.
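For reference, the same "aware" datetime can be built more compactly with the standard library alone (dt_a1 as above):
from datetime import timezone

# replace() preserves the microseconds that the timetuple()
# construction has to re-attach by hand:
dt_a2 = dt_a1.replace(tzinfo=timezone.utc)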

compare date and time objects and get older date in python

I have 3 dates (with hours, minutes and seconds):
2016-11-30T13:27:04-05:00
2016-11-30T13:27:41-05:00
2017-03-01T22:16:35-05:00
How can I get the oldest date, which is 2016-11-30T13:27:04-05:00, as output?
This Python script is not giving correct results:
import time
a = ['2016-11-30T13:27:04-05:00', '2016-11-30T13:27:41-05:00', '2017-03-01T22:16:35-05:00']  # list assumed from the dates above; its definition is missing in the original
find_a = min(a)
print find_a
Try this
dat = ['2016-11-30T13:27:04-05:02', '2016-11-30T13:27:41-05:02','2016-11-30T13:27:04-05:00']
print(min(dat))
You can use dateutil.parser to parse the dates and min to compare them. Here is an example:
In [1]: from dateutil.parser import parse
In [4]: dates = ['2016-11-30T13:27:04-05:00', '2016-11-30T13:27:41-05:00', '2017-03-01T22:16:35-05:00']
In [5]: min([parse(s) for s in dates])
Out[5]: datetime.datetime(2016, 11, 30, 13, 27, 4, tzinfo=tzoffset(None, -18000))
>>> a = ['2016-11-30T13:27:04-05:00', '2016-11-30T13:27:41-05:00', '2017-03-01T22:16:35-05:00']
>>> min(a)
'2016-11-30T13:27:04-05:00'
It works if you just treat them as normal strings: since all three timestamps share the same UTC offset, lexicographic order matches chronological order, and you can avoid importing time.
Do you want to convert them to datetime objects and then find the minimum?
Use the method specified in https://stackoverflow.com/a/12282040/5334188 to convert to datetime objects and find the minimum.
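Alternatively, with only the standard library (a sketch for Python 3.7+, where fromisoformat understands the ±HH:MM offset):
from datetime import datetime

dates = ['2016-11-30T13:27:04-05:00',
         '2016-11-30T13:27:41-05:00',
         '2017-03-01T22:16:35-05:00']

# Compare as timezone-aware datetimes rather than strings:
print(min(datetime.fromisoformat(s) for s in dates))
# 2016-11-30 13:27:04-05:00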

How do I keep the timezone of my index when serializing/deserializing a Pandas DataFrame using JSON

I need to serialize a Pandas DataFrame to JSON using the to_json method. Here is an example of how I am doing that:
import pandas
import numpy as np
dr = pandas.date_range('2016-01-01T12:30:00Z', '2016-02-01T12:30:00Z')
data = np.random.rand(len(dr), 2)
df = pandas.DataFrame(data, index=dr, columns=['a', 'b'])
# NOTE: The index for df has the following properties in pandas 0.19.2
# dtype='datetime64[ns, UTC]', freq='D'
# Save to JSON
df.to_json('/tmp/test_data_01.json', date_unit='s', date_format='iso')
Using the code above I see that my DataFrame has been saved to disk and that the indices look like: [2016-01-01T12:30:00Z, 2016-01-02T12:30:00Z, ...] in the file /tmp/test_data_01.json.
The problem is that when I do the following:
df2 = pandas.read_json('/tmp/test_data_01.json')
the index for df2 has no timezone.
df2.index.tz
# Returns None
Is there any way to keep the timezone property of a DataFrame that is serialized to JSON and then deserialized back?
Pandas will convert everything to UTC when using to_json.
See this example where I change it to Europe/Paris which is UTC+1:
In [1]:
dr = pd.date_range('2016-01-01T12:30:00Z', '2016-02-01T12:30:00Z')
dr = dr.tz_convert('Europe/Paris')
data = np.random.rand(len(dr), 2)
df = pd.DataFrame(data, index=dr, columns=['a', 'b'])
In [2]: df.index[0]
Out[2]: Timestamp('2016-01-01 13:30:00+0100', tz='Europe/Paris', freq='D')
In [3]: df.to_json('test_data_01.json', date_unit='s', date_format='iso')
If I open test_data_01.json, the first index value is "2016-01-01T12:30:00Z".
So when you load the json, localize it to UTC. There's no way to know what tz was used beforehand though:
In [4]:
df2 = pd.read_json('test_data_01.json')
df2.index = df2.index.tz_localize('UTC')
As of PR #35973 (version 1.2.0, I think), timezones are now supported when using the orient='table' argument.
import pandas as pd
import numpy as np
dr = pd.date_range("2020-08-30", freq="d", periods=4, tz="Europe/London")
data = np.random.rand(len(dr), 2)
df = pd.DataFrame(data, index=dr, columns=['a', 'b'])
print(df)
print(pd.read_json(df.to_json(orient='table'), orient='table')) # same output!
I don't agree with the solution from @julien-marrec, because it forces the timezone to be UTC, while the timezone at the time read_json is called could be anything else. I implemented the following workaround, which parses the dates while taking the timezone into account.
import pandas as pd
import pandas._libs.json as json

loads = json.loads
result = loads('{"2019-01-01T13:00:00.000Z":15,"2019-01-01T11:00:00.000Z":88.352985054,"2019-01-01T12:00:00.000Z":90.091719896}',
               dtype=None, numpy=True, labelled=True)
pd.Series(result[0], pd.DatetimeIndex(result[1])).index
And I filed a bug about that: https://github.com/pandas-dev/pandas/issues/25546

pandas handling of numpy timedelta64[ms]

>>> import pandas as pd
>>> pd.__version__
'0.11.0'
>>> import numpy as np
>>> np.__version__
'1.7.1'
>>> d={'a':np.array([68614867, 72200835], dtype=np.dtype('timedelta64[ms]'))}
>>> d['a'][0]
numpy.timedelta64(68614867,'ms')
>>> df = pd.DataFrame.from_dict(d)
>>> print df
a
0 00:00:00.068615
1 00:00:00.072201
It looks like it is interpreting the values in the underlying int64 as ns not ms. Is this a bug in pandas' handling of timedelta64[ms] types?
timedelta handling is still a work in progress; see this issue: https://github.com/pydata/pandas/issues/3009
The main issue is that timedeltas are broken in numpy 1.6.2.
Passing arbitrary timedelta dtypes at creation is not supported yet. As a workaround, you can do the following, since the ONLY dtype supported at the moment is the internal timedelta64[ns] (this is exactly how datetime64[ns] works, btw). Pandas converts to an internal repr and then you do what you want.
(This solution is ONLY good for numpy >= 1.7.)
In [22]: d['a'].astype('timedelta64[ns]')
Out[22]: array([68614867000000, 72200835000000], dtype='timedelta64[ns]')
In [23]: DataFrame(dict(a = d['a'].astype('timedelta64[ns]')))
Out[23]:
a
0 19:03:34.867000
1 20:03:20.835000
In [24]: DataFrame(dict(a = d['a'].astype('timedelta64[ns]'))).dtypes
Out[24]:
a timedelta64[ns]
dtype: object
What is the final goal you are trying to accomplish?
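(A present-day footnote, not part of the original answer: recent pandas versions handle millisecond-based timedeltas natively, and pd.to_timedelta converts the array directly.)
import numpy as np
import pandas as pd

d = {'a': np.array([68614867, 72200835], dtype='timedelta64[ms]')}

# pd.to_timedelta interprets the ms units correctly:
print(pd.to_timedelta(d['a']))
# TimedeltaIndex(['0 days 19:03:34.867000', '0 days 20:03:20.835000'], ...)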
