In Python, I want to get the current Unix timestamp, then store the value for the long term and have it be handled by non-Python systems. (I am not merely trying to compute the difference between two timestamps within the same program run.)
Calling the function time.time() seems to be a very reasonable and concise way to get the desired timestamp... until I read the documentation:
Return the time in seconds since the epoch as a floating point number. The specific date of the epoch and the handling of leap seconds is platform dependent. On Windows and most Unix systems, the epoch is January 1, 1970, 00:00:00 (UTC) and leap seconds are not counted towards the time in seconds since the epoch. This is commonly referred to as Unix time. To find out what the epoch is on a given platform, look at gmtime(0).
[...]
(Versions: 3.5~3.9)
The phrase "epoch ... is platform dependent" is a warning sign. A weasel phrase is "most Unix systems". What are examples of Unix or non-Unix systems where Python's time.time()'s epoch is not 1970-01-01T00:00:00Z?
Is time.time() subtly unsafe for my goal? Should I look to alternatives like datetime.datetime.now().timestamp()?
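For concreteness, here is how the two candidates would be used side by side (a minimal sketch; on mainstream platforms both should yield seconds since 1970-01-01T00:00:00Z):
import time
from datetime import datetime

# Candidate 1: documented as "seconds since the epoch" (epoch is platform dependent per the docs above)
t1 = time.time()

# Candidate 2: a naive local datetime converted to a POSIX timestamp via the platform's mktime()
t2 = datetime.now().timestamp()

print(t1, t2)  # on mainstream platforms these agree to within clock resolution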
Digging deeper, I also noticed that previous versions of Python didn't seem to have these caveats for time.time():
Return the time in seconds since the epoch as a floating point number. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
(Versions: 2.7, 3.2~3.4)
And even older wording:
Return the time as a floating point number expressed in seconds since the epoch, in UTC. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
(Versions: 2.2~2.6, 3.0~3.1)
If you don't want to depend on the time.time() implementation and its (possibly) variable epoch, you can calculate the Unix timestamp yourself: take the current datetime, subtract the datetime of the epoch you want (January 1st, 1970), and get the total seconds:
from datetime import datetime

# NB: both datetimes here are naive; datetime.now() is local time, so this result
# differs from the UTC-based Unix timestamp by your UTC offset.
unix_timestamp = (datetime.now() - datetime(1970, 1, 1)).total_seconds()
NOTE: you might want to use timezone-aware datetimes here; with naive datetimes as above, the result is seconds since 1970-01-01 in local time rather than UTC.
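For example, a timezone-aware variant of the same calculation (a minimal sketch; it anchors both datetimes to UTC, so the result does not depend on your local UTC offset):
from datetime import datetime, timezone

# Both datetimes are anchored to UTC, so the result is the conventional UTC-based Unix timestamp
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
unix_timestamp = (datetime.now(timezone.utc) - epoch).total_seconds()
print(unix_timestamp)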
Related
I'm currently writing an alarm clock in Python; however, I have some technical difficulties.
The user has the option for the alarm to repeat (on given days), or to not repeat. They then provide the minutes and hour at which they want the alarm to trigger.
For my alarm system to work, I need to know the time, as an epoch timestamp, at which the alarm should trigger.
If I am trying to set an alarm (for example for 19:30; time will always be input as 24-hour), I need to find the epoch time of the next time it is 19:30, because that could be on the same day if I set the alarm before 19:30, or on the next day if I set it after 19:30.
Because of this, I can't simply call time.localtime() and swap the hours and minutes of the resulting struct_time for 19 and 30 (indexes 3 and 4 of the named tuple), since I would also have to correctly assign the month, day, and day of the year to get a valid struct_time. That is possible, but it would require a lot of manipulation, and I feel like there is probably a much more reasonable way of doing this.
Any help would be much appreciated
You can simply call the timestamp() method on the resulting datetime. This returns the epoch time of the datetime instance. It will work in almost any circumstance, especially since this is a simple alarm clock, but be aware of this warning from the docs.
Naive datetime instances are assumed to represent local time and this method relies on the platform C mktime() function to perform the conversion. Since datetime supports wider range of values than mktime() on many platforms, this method may raise OverflowError for times far in the past or far in the future.
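For instance, a minimal sketch of finding the epoch time of the next 19:30 (next_alarm_epoch is a hypothetical helper name; it builds a datetime for today at the requested time and rolls it over to tomorrow if that moment has already passed):
from datetime import datetime, timedelta

def next_alarm_epoch(hour, minute):
    """Return the Unix timestamp of the next occurrence of hour:minute, local time."""
    now = datetime.now()
    alarm = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if alarm <= now:              # today's alarm time has already passed -> use tomorrow
        alarm += timedelta(days=1)
    return alarm.timestamp()      # naive datetime, so this relies on the platform mktime() (see warning above)

print(next_alarm_epoch(19, 30))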
Depending on your program architecture, you might also consider using the number of seconds between two times, which can be obtained with simple subtraction to get a timedelta and its total_seconds() method:
import time
import datetime
start = datetime.datetime.now()
time.sleep(2)
end = datetime.datetime.now()
# print total seconds
print((end - start).total_seconds())
The more I read about datetime arithmetic, the bigger a headache I get.
There are lots of different kinds of time:
Civil time
UTC
TAI
UNIX time
system time
thread time
CPU time
And then the clocks can run faster or slower or jump backwards or forwards because of
daylight savings
moving across timezones
leap seconds
NTP synchronization
general relativity
And how these are dealt with depends in turn on:
Operating system
Hardware
Programming language
So please can somebody tell me, for my specific use case, the safest and most reliable way to measure a short interval? Here is what I am doing:
I'm making a game in Python (3.7.x) and I need to keep track of how long it has been since certain events. For example, how long the player has been holding a button, or how long since an enemy has spotted the player, or how long since a level was loaded. Timescales should be accurate to the millisecond (nanoseconds are overkill).
Here are scenarios I want to be sure are averted:
You play the game late at night. In your timezone, on that night, the clocks go forward an hour at 2am for DST, so the minutes go: 1:58, 1:59, 3:00, 3:01, 3:02. Every time-related variable in the game suddenly has an extra hour added to it -- it thinks you'd been holding down that button for an hour and 2 seconds instead of just 2 seconds. Catastrophe ensues.
The same, but the IERS decides to insert or subtract a leap second sometime that day. You play through the transition, and all time variables get an extra second added or subtracted. Catastrophe ensues.
You play the game on a train or plane and catastrophe ensues when you cross a timezone boundary and/or the International Date Line.
The game works correctly in the above scenarios on some hardware and operating systems, but not others, i.e. it breaks on Linux but not Windows, or vice versa.
And I can't really write tests for these since the problematic events come around so rarely. I need to get it right the first time. So, what time-related function do I need to use? I know there's plain old time.time(), but also a bewildering array of other options like
time.clock()
time.perf_counter()
time.process_time()
time.monotonic()
and then nanosecond variants of all of the above.
From reading the documentation it seems like time.monotonic() is the one I want. But if reading about all the details of timekeeping has taught me anything, it's that these things are never quite what they seem. Once upon a time, I thought I knew what a "second" was. Now I'm not so sure.
So, how do I make sure my game clocks work properly?
The documentation of the time module is the best place to look for details about each of those.
There, you can easily see that:
time.clock() is deprecated and should be replaced with other functions
time.process_time() counts only CPU time spent by your process, so it is not suitable for measuring wall clock time (which is what you need)
time.perf_counter() does measure wall-clock intervals (it includes time spent sleeping), but its reference point is undefined, so only differences between calls are meaningful
time.time() is just about right, but it will give bad timings if the user (or an NTP sync) adjusts the system clock between calls
time.monotonic() - this seems to be the safest bet for measuring time intervals - note that this does not give you current time at all, but it gives you a correct difference between two time points
As for the nanosecond versions, you should use those only if you actually need nanosecond resolution.
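A minimal sketch of measuring an interval with time.monotonic() (the absolute values are meaningless; only the difference between two calls matters):
import time

start = time.monotonic()     # opaque reference point, unaffected by DST, timezone changes or clock adjustments
time.sleep(2)                # stand-in for the interval you want to measure
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.3f} s")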
I have some questions about the new functions time.perf_counter() and time.process_time().
For the former, from the documentation:
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
Is this 'highest resolution' the same on all systems? Or does it depend slightly on whether we use, for example, Linux or Windows?
The question comes from the fact that the documentation of time.time() says that 'not all systems provide time with a better precision than 1 second', so how can these functions now provide a better, higher resolution?
About the latter, time.process_time():
Return the value (in fractional seconds) of the sum of the system and user CPU time of the current process. It does not include time elapsed during sleep. It is process-wide by definition. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
I don't understand: what are 'system time' and 'user CPU time'? What's the difference?
There are two distinct types of 'time' in this context: absolute time and relative time.
Absolute time is the 'real-world time' which is returned by time.time() and which we are all used to dealing with. It is usually measured from a fixed point in the past (e.g. the UNIX epoch of 00:00:00 UTC on 01/01/1970) at a resolution of at least 1 second; modern systems usually provide milli- or microsecond resolution. It is maintained by dedicated hardware on most computers: the RTC (real-time clock) circuit is normally battery powered, so the system keeps track of real time between power-ups. This 'real-world time' is also subject to modifications based on your location (time zones) and season (daylight saving), or is expressed as an offset from UTC (also known as GMT or Zulu time).
Secondly, there is relative time, which is returned by time.perf_counter and time.process_time. This type of time has no defined relationship to real-world time, in the sense that the relationship is system and implementation specific. It can be used only to measure time intervals, i.e. a unit-less value which is proportional to the time elapsed between two instants. This is mainly used to evaluate relative performance (e.g. whether this version of code runs faster than that version of code).
On modern systems, it is measured using a CPU counter which increases monotonically at a frequency related to the CPU's hardware clock. The counter resolution is highly dependent on the system's hardware; the value cannot be reliably related to real-world time or even compared between systems in most cases. Furthermore, the counter value is reset every time the CPU is powered up or reset.
time.perf_counter returns the absolute value of the counter. time.process_time is a value derived from the CPU counter but updated only while the given process is running on the CPU; it can be broken down into 'user time', the time when the process itself is running on the CPU, and 'system time', the time when the operating system kernel is running on the CPU on behalf of the process.
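A small sketch showing the difference in behaviour (sleeping counts towards perf_counter but not towards process_time, because a sleeping process is not running on the CPU):
import time

pc_start, pt_start = time.perf_counter(), time.process_time()
time.sleep(1)            # the process is off the CPU while sleeping
sum(range(10**6))        # some actual CPU work
pc_end, pt_end = time.perf_counter(), time.process_time()

print(f"perf_counter : {pc_end - pc_start:.3f} s")   # roughly 1 s plus the CPU work (includes the sleep)
print(f"process_time : {pt_end - pt_start:.3f} s")   # only the CPU-bound part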
I would like to set the expiry time for memcache objects to a specific date.
cache.set(string, 1, 86400)
The statement above allows me to set it for a day, but it does not expire when the date changes. One way I could handle this is by calculating the number of seconds left in the day and providing that as the expiry.
I was wondering if there is a simpler or more efficient way to do it.
Looking at the documentation, we see that the expiration parameter is explained as:
Optional expiration time, either relative number of seconds from current time (up to 1 month), or an absolute Unix epoch time. By default, items never expire, though items may be evicted due to memory pressure. Float values will be rounded up to the nearest whole second.
So basically if the number you put in there is less than 2592000, it is interpreted as a relative time. So the number 86400 would be interpreted as 86400 seconds (one day) from now, the time it's being set.
It looks like you're going to want to use a number bigger than that to signify an absolute time. There are a variety of ways to get a unix timestamp. But quite simply you can do:
import time

# 2013-02-15 00:00:00 local time as an absolute Unix timestamp
time_tuple = (2013, 2, 15, 0, 0, 0, 0, 0, 0)
timestamp = time.mktime(time_tuple)
cache.set(string, 1, timestamp)
Your initial idea is correct. You can find the timestamp for now and the timestamp of the date you want, and just provide the difference; that would be equivalent too.
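For example, a minimal sketch of the absolute-timestamp approach (assuming local midnight is the expiry you want; cache and string are the same objects as in the question):
import time
from datetime import datetime, timedelta

# Absolute Unix timestamp of the next local midnight
tomorrow = datetime.now().date() + timedelta(days=1)
midnight = time.mktime(tomorrow.timetuple())

cache.set(string, 1, midnight)  # expires when the date changes
# equivalently, as a relative offset: cache.set(string, 1, midnight - time.time())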
The day changes at least every hour of every day, does it not? Either the client or the server must specify which one of those is relevant to any given request. This is generally a better task for the client application.
Do note that you can specify absolute timestamps, which might make it easier to calculate when that expiry time is since you'd be able to reuse it for the whole day (or at least an hour).
Is there any way of obtaining high-precision system time in python?
I have a little application that works with a virtual COM port. I want to measure the time interval between the sending of a message and its reception.
At the moment it works like this:
I obtain the message, use
time.time()
and append its 20 digits to the message. The client application receives this message and gets
time.time()
again, then calculates the difference. In most cases the time interval (as I expected) equals zero.
The question is: is there a more intelligent way of doing this, with more precision?
Here is an excerpt from the documentation of time.clock:
On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of "processor time", depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.
(emphasis mine)
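Note that time.clock() was later deprecated and removed (in Python 3.8); on modern versions, time.perf_counter() provides the same kind of high-resolution counter on both platforms. A minimal sketch of timing a send/receive round trip with it (send_message and wait_for_reply are hypothetical stand-ins for your serial-port calls):
import time

def measure_round_trip(send_message, wait_for_reply, payload):
    """Time one send/receive cycle with the high-resolution performance counter."""
    start = time.perf_counter()          # sub-microsecond resolution on typical hardware
    send_message(payload)
    wait_for_reply()
    return time.perf_counter() - start

# elapsed = measure_round_trip(port.write, port.read, b"ping")  # 'port' is a hypothetical serial object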