The documentation says:
http://docs.djangoproject.com/en/dev/ref/settings/#time-zone
Note that this is the time zone to which Django will convert all dates/times -- not necessarily the timezone of the server. For example, one server may serve multiple Django-powered sites, each with a separate time-zone setting.
Normally, Django sets the os.environ['TZ'] variable to the time zone you specify in the TIME_ZONE setting. Thus, all your views and models will automatically operate in the correct time zone.
I've read this several times and it's not clear to me what's going on with the TIME_ZONE setting.
Should I be managing UTC offsets myself if I want models with a date-time stamp to display in the user's local time zone?
For example, on save use datetime.datetime.utcnow() instead of datetime.datetime.now(), and in the view do something like:
display_datetime = model.date_time + datetime.timedelta(USER_UTC_OFFSET)
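If you do go that manual-offset route, here is a minimal sketch of the idea (a hypothetical per-user offset expressed in hours; note that timedelta's first positional argument is days, so the hours need to be passed explicitly):
import datetime

# Hypothetical per-user offset in hours (e.g. -6 for US Central Standard Time).
USER_UTC_OFFSET = -6

# On save: store the naive UTC time.
stored = datetime.datetime.utcnow()

# In the view: shift into the user's local time purely for display.
display_datetime = stored + datetime.timedelta(hours=USER_UTC_OFFSET)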
Much to my surprise, it does appear to work this way:
web81:~/webapps/dominicrodger2/dominicrodger$ python2.5 manage.py shell
Python 2.5.4 (r254:67916, Aug 5 2009, 12:42:40)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import settings
>>> settings.TIME_ZONE
'Europe/London'
>>> from datetime import datetime
>>> datetime.now()
datetime.datetime(2009, 10, 15, 6, 29, 58, 85662)
>>> exit()
web81:~/webapps/dominicrodger2/dominicrodger$ date
Thu Oct 15 00:31:10 CDT 2009
And yes, I did get distracted whilst writing this answer :-)
I use the TIME_ZONE setting so that my automatically added timestamps on object creation (using auto_now_add, which I believe is soon to be deprecated) show creation times in the timezone I set.
If you want to convert those times into the time zones of your website visitors, you'll need to do a bit more work, as per the example you gave. If you plan to do a lot of that conversion, I'd strongly advise you to set your TIME_ZONE setting to UTC so that times are stored in UTC, because it'll make your life easier in the long run (you can just apply UTC offsets rather than having to worry about daylight saving time).
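As a rough illustration of that extra work (a sketch, not part of the original answer), converting a stored UTC timestamp into a visitor's zone with pytz might look like this, assuming you know the visitor's zone name:
import datetime
import pytz  # assumed to be available in the project

def to_user_time(utc_dt, tz_name):
    """Convert a naive datetime stored in UTC into the visitor's local time."""
    aware_utc = pytz.utc.localize(utc_dt)
    return aware_utc.astimezone(pytz.timezone(tz_name))

# e.g. to_user_time(model.date_time, 'America/Chicago')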
If you're interested, I believe the timezone is set from the TIME_ZONE setting here.
Edit: per your comment that it doesn't work on Windows, this is because of the following in the Django source:
if hasattr(time, 'tzset'):
    # Move the time zone info into os.environ. See ticket #2315 for why
    # we don't do this unconditionally (breaks Windows).
    os.environ['TZ'] = self.TIME_ZONE
    time.tzset()
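For what it's worth, here is a minimal sketch of the same mechanism outside Django (Unix only, since Windows has no time.tzset, as the transcripts below show):
import datetime
import os
import time

# Point the process at a zone, the same way Django does with TIME_ZONE.
os.environ['TZ'] = 'Europe/London'
time.tzset()
print(datetime.datetime.now())  # wall-clock time in Europe/London

# Switch zones and the "local" time changes with it.
os.environ['TZ'] = 'America/Chicago'
time.tzset()
print(datetime.datetime.now())  # wall-clock time in America/Chicago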
Windows:
C:\Documents and Settings\drodger>python
ActivePython 2.6.1.1 (ActiveState Software Inc.) based on
Python 2.6.1 (r261:67515, Dec 5 2008, 13:58:38) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> hasattr(time, 'tzset')
False
Linux:
web81:~$ python2.5
Python 2.5.4 (r254:67916, Aug 5 2009, 12:42:40)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> hasattr(time, 'tzset')
True
With TIME_ZONE as UTC, utcnow() and now() are the same. This is probably what you want. Then you can record times as now/utcnow and functions like timesince will work perfectly for every user. To display absolute times to specific users, you can use utc offsets as you suggest.
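A quick sanity check of that claim from a shell (a sketch, assuming the process time zone has actually been set to UTC via TIME_ZONE and tzset):
import datetime

# With the process running in UTC, now() and utcnow() should agree to
# within the time it takes to make the two calls.
delta = abs(datetime.datetime.now() - datetime.datetime.utcnow())
print(delta < datetime.timedelta(seconds=1))  # expected: True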
Introduction
Imagine you want to describe a combustion process.
In order to organize things, I created:
a class describing the substance (e.g. class Fuel)
a class describing the combustion (e.g. class Combustion)
main.py to run an example
Python Files
properties.py
class SolidProp:
    def __init__(self, ua):
        self._ultimate = Ultimate(ua)

    @property
    def ultimate(self):
        return self._ultimate


class Ultimate:
    def __init__(self, ua: dict):
        self._comp = ua

    @property
    def comp(self):
        return self._comp
combustion.py
from properties import *
class Combustion:
    def __init__(self, ultimate):
        self.fuel = SolidProp(ua=ultimate)
main.py
from combustion import *
burner = Combustion({'CH4':0.75, 'C2H4':0.25})
Problem Description
ipython console
In the IPython console (started from bash), the following is not autocompleted (but it can still be called):
Python 3.7.2 (default, Dec 29 2018, 06:19:36)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.2.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: run 'main.py'
In [2]: burner.fuel.ultimate.comp
Out[2]: {'CH4': 0.75, 'C2H4': 0.25}
This has something to do with the fact that *.ultimate is defined via a decorator in properties.py (see @property), but I would like to be able to get *.ultimate.comp autocompleted in the IPython console so people can work with it intuitively.
Example
burner.fuel.ultimate is recognized
burner.fuel.ultimate.comp is NOT recognized
I cannot see any methods or properties beyond burner.fuel.ultimate in the IPython console. This makes it unintuitive for people to work with when they do not know those methods exist.
Remark: the IPython console inside the PyCharm IDE works fine!?
python console
Running it in the python console:
Python 3.7.2 (default, Dec 29 2018, 06:19:36)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exec(open("main.py").read())
>>> burner.fuel.ultimate.comp
{'CH4': 0.75, 'C2H4': 0.25}
This works fine. But why not in the IPython console from a terminal?
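One thing worth trying (a suggestion, not a confirmed fix): IPython's greedy completion, which evaluates attribute access so that dynamically created attributes such as properties can show up in tab completion:
# Inside the IPython session. Caveat: greedy completion evaluates code
# in order to complete it, which is why it is off by default.
%config IPCompleter.greedy=True

# Afterwards, burner.fuel.ultimate.<TAB> should offer 'comp'.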
I'm using the Python requests library. My application performs a simple GET request to a site and prints the results.
The site requires authorization with NTLM. Fortunately I can rely on HttpNtlmAuth, which works fine.
import requests
from requests_ntlm import HttpNtlmAuth

session = requests.Session()
session.auth = HttpNtlmAuth(domain + "\\" + username,
                            password,
                            session)
But if the application is executed several times, I need to ask for the username and password each time, which is very inconvenient. Storing the credentials is undesirable.
Could I store the session object itself and reuse it several times? From the server's point of view it should be fine.
Is there a way to pickle and unpickle session?
If you use the dill package, you should be able to pickle the session where pickle itself fails.
>>> import dill as pickle
>>> pickled = pickle.dumps(session)
>>> restored = pickle.loads(pickled)
Get dill here: https://github.com/uqfoundation/dill
Actually, dill also makes it easy to store your python session across restarts, so you could pickle your entire python session like this:
>>> pickle.dump_session('session.pkl')
Then restart python, and pick up where you left off.
Python 2.7.8 (default, Jul 13 2014, 02:29:54)
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill as pickle
>>> pickle.load_session('session.pkl')
>>> restored
<requests.sessions.Session object at 0x10c012690>
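A follow-up sketch (file name and url are placeholders): persisting the pickled session to disk between runs. Note that pickling the session also pickles the HttpNtlmAuth object and therefore the credentials it holds, so the file needs the same care as a stored password.
import dill as pickle

# First run: save the authenticated session.
with open('requests_session.pkl', 'wb') as f:
    pickle.dump(session, f)

# Later run: restore it and reuse its auth state.
with open('requests_session.pkl', 'rb') as f:
    session = pickle.load(f)

response = session.get(url)  # 'url' assumed to be defined elsewhere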
Can anyone explain why importing cv and numpy would change the behaviour of python's struct.unpack? Here's what I observe:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from struct import pack, unpack
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
This is correct
>>> import cv
libdc1394 error: Failed to initialize libdc1394
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
Still ok, after importing cv
>>> import numpy
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
And OK after importing cv and then numpy
Now I restart python:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from struct import pack, unpack
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
>>> import numpy
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
So far so good, but now I import cv AFTER importing numpy:
>>> import cv
libdc1394 error: Failed to initialize libdc1394
>>> unpack("f",pack("I",31))[0]
0.0
I've repeated this a number of times, including on multiple servers, and it always goes the same way. I've also tried it with struct.unpack and struct.pack rather than the bare unpack and pack, which also makes no difference.
I can't understand how importing numpy and cv could have any impact at all on the output of struct.unpack (pack remains the same, btw).
The "libdc1394" thing is, I believe, a red-herring: ctypes error: libdc1394 error: Failed to initialize libdc1394
Any ideas?
tl;dr: importing numpy and then opencv changes the behaviour of struct.unpack.
UPDATE: Paulo's answer below shows that this is reproducible. Seborg's comment suggests that it's something to do with the way python handles subnormals, which sounds plausible. I looked into Contexts but that didn't seem to be the problem, as the context was the same after the imports as it had been before them.
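To make the subnormal angle concrete (a sketch added for illustration, not part of the original question): the bit pattern being unpacked is a subnormal float, and flushing subnormals to zero reproduces the observed 0.0.
import struct

# The unsigned int 31 has all float32 exponent bits zero, so reinterpreted
# as a float it is the subnormal 31 * 2**-149.
val = struct.unpack("f", struct.pack("I", 31))[0]
print(val)               # 4.344025239406933e-44
print(31 * 2.0 ** -149)  # the same value, computed directly

# A library that switches the FPU into flush-to-zero / denormals-are-zero
# mode at import time would make such subnormal results come out as 0.0,
# which matches what happens after "import numpy" followed by "import cv".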
This isn't an answer, but it's too big for a comment. I played with the values a bit to find the limits.
Without loading numpy and cv:
>>> unpack("f", pack("i", 8388608))
(1.1754943508222875e-38,)
>>> unpack("f", pack("i", 8388607))
(1.1754942106924411e-38,)
After loading numpy and cv, the first line is the same, but the second:
>>> unpack("f", pack("i", 8388607))
(0.0,)
You'll notice that the first result is the lower limit for 32 bit floats. I then tried the same with d.
Without loading the libraries:
>>> unpack("d", pack("xi", 1048576))
(2.2250738585072014e-308,)
>>> unpack("d", pack("xi", 1048575))
(2.2250717365114104e-308,)
And after loading the libraries:
>>> unpack("d",pack("xi", 1048575))
(0.0,)
Now the first result is the lower limit for 64 bit float precision.
It seems that for some reason, loading the numpy and cv libraries, in that order, constrains unpack to use 32 and 64 bit precision and return 0 for lower values.
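The cut-off values above line up with the smallest normal magnitudes of the two formats, which supports the subnormal-flushing reading (a sketch added for illustration):
# Smallest normal float32 and float64 values, matching the limits found above.
print(2.0 ** -126)    # 1.1754943508222875e-38
print(2.0 ** -1022)   # 2.2250738585072014e-308

# Everything smaller than these is subnormal; those are exactly the values
# that come back as 0.0 once numpy and then cv have been imported.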
I'm compiling several different versions of Python for my system, and I'd like to know where in the source the startup banner is defined so I can change it for each version. For example, when the interpreter starts it displays
Python 3.3.1 (default, Apr 28 2013, 10:19:42)
[GCC 4.7.2 20121109 (Red Hat 4.7.2-8)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
I'd like to change the string default to other things to signal which version I'm using, but I'm also interested in how the whole shebang is assembled. Where is this defined?
Let's use grep to get in the ballpark. I'm not going to bother searching for default because I'll get too many results, but I'll try Type "help", which should not appear too many times. If it's a C string, the quotes will be escaped, so we should look for C strings first and Python strings later.
Python $ grep 'Type \\"help\\"' . -Ir
./Modules/main.c: "Type \"help\", \"copyright\", \"credits\" or \"license\" " \
It's in Modules/main.c, in Py_Main(). More digging gives us this line:
fprintf(stderr, "Python %s on %s\n",
        Py_GetVersion(), Py_GetPlatform());
Because "on" is in the format string, Py_GetPlatform() must be linux and Py_GetVersion() must give the string we want...
Python $ grep Py_GetVersion . -Irl
...
./Python/getversion.c
...
That looks promising...
PyOS_snprintf(version, sizeof(version), "%.80s (%.80s) %.80s",
              PY_VERSION, Py_GetBuildInfo(), Py_GetCompiler());
We must want Py_GetBuildInfo(), because it's inside the parentheses...
Python $ grep Py_GetBuildInfo . -Irl
...
./Modules/getbuildinfo.c
...
That looks a little too obvious.
const char *
Py_GetBuildInfo(void)
{
    static char buildinfo[50 + sizeof(HGVERSION) +
                          ((sizeof(HGTAG) > sizeof(HGBRANCH)) ?
                           sizeof(HGTAG) : sizeof(HGBRANCH))];
    const char *revision = _Py_hgversion();
    const char *sep = *revision ? ":" : "";
    const char *hgid = _Py_hgidentifier();
    if (!(*hgid))
        hgid = "default";
    PyOS_snprintf(buildinfo, sizeof(buildinfo),
                  "%s%s%s, %.20s, %.9s", hgid, sep, revision,
                  DATE, TIME);
    return buildinfo;
}
So, default is the name of the Mercurial branch. By examining the makefiles, we can figure out that it comes from the HGTAG macro, which is filled in from a make variable named HGTAG whose value is run as a shell command. So,
Simple solution
When building Python,
Python $ ./configure
Python $ make HGTAG='echo awesome'
Python $ ./python
Python 3.2.3 (awesome, May 1 2013, 21:33:27)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
Looks like if you add a Mercurial tag before you build, then default will be replaced with the name of your tag (source: Modules/getbuildinfo.c : _Py_hgidentifier()).
Basically it seems that it chooses the name default because that is the name of the branch. It looks like the interpreter is built with the tag name, if one exists, or the name of the branch if no tag (besides tip) exists on the current working copy.
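As a runtime cross-check (a sketch, not part of the original answers), the strings that Py_GetVersion() and Py_GetBuildInfo() assemble are visible from Python itself, which is handy for confirming what a rebuilt interpreter will report:
import platform
import sys

# sys.version is the "X.Y.Z (tag, date, time)\n[compiler]" string from the banner.
print(sys.version)

# platform.python_build() picks the (tag, "date time") pair back out of it.
print(platform.python_build())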
So I have datetime objects in UTC time and I want to convert them to UTC timestamps. The problem is, time.mktime makes adjustments for localtime.
So here is some code:
import os
import pytz
import time
import datetime
epoch = pytz.utc.localize(datetime.datetime(1970, 1, 1))
print time.mktime(epoch.timetuple())
os.environ['TZ'] = 'UTC+0'
time.tzset()
print time.mktime(epoch.timetuple())
Here is some output:
Python 2.6.4 (r264:75706, Dec 25 2009, 08:52:16)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import pytz
>>> import time
>>> import datetime
>>>
>>> epoch = pytz.utc.localize(datetime.datetime(1970, 1, 1))
>>> print time.mktime(epoch.timetuple())
25200.0
>>>
>>> os.environ['TZ'] = 'UTC+0'
>>> time.tzset()
>>> print time.mktime(epoch.timetuple())
0.0
So obviously if the system is in UTC there's no problem, but when it's not, there is. Setting the environment variable and calling time.tzset() works, but is that safe? I don't want to adjust it for the whole system.
Is there another way to do this, or is it safe to call time.tzset() this way?
The calendar module contains calendar.timegm which solves this problem.
calendar.timegm(tuple)
An unrelated but handy function that takes a time tuple such as returned by the gmtime() function in the time module, and returns the corresponding Unix timestamp value, assuming an epoch of 1970, and the POSIX encoding. In fact, time.gmtime() and timegm() are each others’ inverse.
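A short usage sketch of that approach (no TZ environment variable or time.tzset() involved):
import calendar
import datetime

# utctimetuple() on a naive datetime hands the fields over unchanged,
# and calendar.timegm() interprets them as UTC.
epoch = datetime.datetime(1970, 1, 1)
print(calendar.timegm(epoch.utctimetuple()))  # 0

moment = datetime.datetime(2009, 12, 25, 8, 52, 16)
print(calendar.timegm(moment.utctimetuple()))  # seconds since the epoch, in UTC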