what is the difference between time.time and time.clock? [duplicate] - python

This question already has answers here:
Python's time.clock() vs. time.time() accuracy?
(16 answers)
Closed 6 years ago.
I thought both measured an amount of time, but they return very different numbers and I don't understand what the documentation is saying. Can anyone elaborate?

time.clock() gives you an elapsed amount of time. time.time() gives you the wall clock time.
You can use time.time() to communicate with others (including humans) about when something happened. time.clock() only lets you measure how long something takes.
Generally speaking, you'd use time.clock() when you want to measure timings, and time.time() to schedule something. To that end, time.time() has to be set correctly on your computer (to agree with the rest of your region about what time it is now), but time.clock() doesn't; it just counts seconds from an arbitrary point in time (usually when your computer started, or when your process first used the function).
The exact behaviour of time.clock() depends on your OS (it could just measure process time, excluding time sleeping, or it could measure time elapsed even when the process is inactive, it could go backwards if your system time is adjusted, etc).
For some use-cases this variability in exact behaviour isn't good enough, and as such time.clock() is deprecated in Python 3 (and removed in Python 3.8). Better options are available for measuring performance or process time: see time.perf_counter() and time.process_time().
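To make the distinction concrete, here is a minimal sketch (assuming Python 3.3+, where time.perf_counter() is available): time.time() answers "when did this happen?", while time.perf_counter() answers "how long did this take?".

import time

# Wall clock: a point in calendar time, suitable for timestamps and scheduling.
started_at = time.time()
print("Started at (Unix epoch seconds):", started_at)

# Performance counter: only meaningful as a difference between two readings.
t0 = time.perf_counter()
sum(range(1_000_000))  # some work to time
t1 = time.perf_counter()
print(f"Work took {t1 - t0:.6f} seconds")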

Related

More precise sleep (usleep) in Python?

I'm trying to implement real-time plotting in Python, with samples being around 500-1000 microseconds apart. Using time.sleep() between drawing each sample doesn't work due to reasons mentioned here: accuracy of sleep(). I'm currently doing busy waiting like this:
from time import time

diff = 500  # desired gap between samples, in microseconds (per the question)
stime = time()
while stime + diff / 1000000 > time():
    pass  # busy-wait until the deadline
But it's taking a lot of CPU resources and it's also not 100% precise. Is there a better way of doing this (preferably platform independent and not busy waiting)?
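One common compromise (a sketch, not from the original thread) is a hybrid wait: sleep for most of the interval to spare the CPU, then busy-wait with time.perf_counter() for the final stretch. The 2 ms spin margin below is an assumption you would tune per platform.

import time

def precise_delay(seconds, spin_margin=0.002):
    """Sleep coarsely, then spin for the last `spin_margin` seconds."""
    deadline = time.perf_counter() + seconds
    # Coarse phase: time.sleep() is cheap but may overshoot by ~1-15 ms.
    remaining = deadline - time.perf_counter()
    if remaining > spin_margin:
        time.sleep(remaining - spin_margin)
    # Fine phase: busy-wait the short remainder for sub-millisecond accuracy.
    while time.perf_counter() < deadline:
        pass

precise_delay(0.0007)  # ~700 microseconds; short enough that it pure-spins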

Is time.process_time() a better measure for performance than time.time()?

I'm fairly new to python (and CS in general), and I've been reading some docs regarding the "time" library in python. There are quite a lot of time measuring methods, and I'm trying to find the most suitable one that will enable me to compare the performance of 2 versions of an algorithm.
I understand that time.time() is wall time, and time.process_time() is either user-cpu time or system-cpu time (I'm not quite sure), but which one of these two would be a better (more accurate) measure of performance?
Thank you!!!
I would suggest you use time.perf_counter(), as it is the recommended function for this kind of task (it automatically selects the highest-precision clock available).
It returns a float that means nothing on its own (unlike the result of time.time()), but the difference between two time.perf_counter() measurements tells you how much time elapsed.
For more info, read the time.perf_counter()'s docs.
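A minimal sketch of how comparing two versions of an algorithm might look (the sort calls here are stand-in workloads, not from the original question):

import time

def timed(func, *args):
    """Return (result, elapsed_seconds) for a single call."""
    t0 = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - t0
    return result, elapsed

data = list(range(100_000, 0, -1))
_, t_a = timed(sorted, data)
_, t_b = timed(lambda d: sorted(d, reverse=True), data)
print(f"version A: {t_a:.4f} s, version B: {t_b:.4f} s")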

Safest and most reliable way to measure short intervals in Python? (cross-platform, cross-hardware, resistant to DST and leap seconds)

The more I read about datetime arithmetic, the bigger a headache I get.
There's lots of different kinds of time:
Civil time
UTC
TAI
UNIX time
system time
thread time
CPU time
And then the clocks can run faster or slower or jump backwards or forwards because of
daylight savings
moving across timezones
leap seconds
NTP synchronization
general relativity
And how these are dealt with depends in turn on:
Operating system
Hardware
Programming language
So please can somebody tell me, for my specific use case, the safest and most reliable way to measure a short interval? Here is what I am doing:
I'm making a game in Python (3.7.x) and I need to keep track of how long it has been since certain events. For example, how long the player has been holding a button, or how long since an enemy has spotted the player, or how long since a level was loaded. Timescales should be accurate to the millisecond (nanoseconds are overkill).
Here are scenarios I want to be sure are averted:
You play the game late at night. In your timezone, on that night, the clocks go forward an hour at 2am for DST, so the minutes go: 1:58, 1:59, 3:00, 3:01. Every time-related variable in the game suddenly has an extra hour added to it -- it thinks you've been holding down that button for an hour and 2 seconds instead of just 2 seconds. Catastrophe ensues.
The same, but the IERS decides to insert or subtract a leap second sometime that day. You play through the transition, and all time variables get an extra second added or subtracted. Catastrophe ensues.
You play the game on a train or plane and catastrophe ensues when you cross a timezone boundary and/or the International Date Line.
The game works correctly in the above scenarios on some hardware and operating systems, but not others. I.e. it breaks on Linux but not Windows, or vice versa.
And I can't really write tests for these since the problematic events come around so rarely. I need to get it right the first time. So, what time-related function do I need to use? I know there's plain old time.time(), but also a bewildering array of other options like
time.clock()
time.perf_counter()
time.process_time()
time.monotonic()
and then nanosecond variants of all of the above.
From reading the documentation it seems like time.monotonic() is the one I want. But if reading about all the details of timekeeping has taught me anything, it's that these things are never quite what they seem. Once upon a time, I thought I knew what a "second" was. Now I'm not so sure.
So, how do I make sure my game clocks work properly?
The documentation of the time module is the best place to look for details about each of those.
There, you can easily see that:
time.clock() is deprecated and should be replaced with other functions
time.process_time() counts only CPU time spent by your process, so it is not suitable for measuring wall clock time (which is what you need)
time.perf_counter() measures elapsed time including time spent sleeping, so despite its name it does not share time.process_time()'s problem; like time.monotonic() below, its value is only meaningful as a difference between two readings
time.time() is just about right, but it will give bad timings if the system clock is adjusted (by the user or by NTP synchronization)
time.monotonic() - this seems to be the safest bet for measuring time intervals - note that this does not give you current time at all, but it gives you a correct difference between two time points
As for the nanoseconds versions, you should use those only if you need nanoseconds.
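As a concrete illustration, a minimal game-style timer built on time.monotonic() might look like this (the event names are made up for the example):

import time

class EventTimer:
    """Track how long ago named events happened, immune to system clock changes."""
    def __init__(self):
        self._events = {}

    def mark(self, name):
        self._events[name] = time.monotonic()

    def since(self, name):
        """Seconds elapsed since the event, or None if it never happened."""
        start = self._events.get(name)
        return None if start is None else time.monotonic() - start

timer = EventTimer()
timer.mark("button_down")
time.sleep(0.25)
print(f"held for {timer.since('button_down'):.3f} s")  # ~0.250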

time.time(): get the precision [duplicate]

Consider a very simple timer;
import time

start = time.time()
end = time.time() - start
while end < 5:
    end = time.time() - start
    print(end)
How precise is this timer? I mean, compared to a real-time clock, how synchronized and real-time is it?
Now for the real question ;
What is the smallest scale of time that can be measured precisely with Python ?
This is entirely platform dependent. Use the timeit.default_timer() function; it is aliased to the most precise clock available for your platform.
From the documentation:
Define a default timer, in a platform-specific manner. On Windows, time.clock() has microsecond granularity, but time.time()'s granularity is 1/60th of a second. On Unix, time.clock() has 1/100th of a second granularity, and time.time() is much more precise.
So, on Windows, you get microseconds, on Unix, you'll get whatever precision the platform can provide, which is usually (much) better than 1/100th of a second.
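Note that the quoted paragraph is from the Python 2 documentation; on Python 3.3+ timeit.default_timer() is simply time.perf_counter() on all platforms. Usage is the same either way:

from timeit import default_timer

t0 = default_timer()
total = sum(i * i for i in range(1_000_000))  # workload to time
t1 = default_timer()
print(f"elapsed: {t1 - t0:.6f} s")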
This entirely depends on the system you are running it on - there is no guarantee Python has any way of tracking time at all.
That said, it's pretty safe to assume you will get millisecond accuracy on modern systems; beyond that, it is highly dependent on the system. To quote the docs:
Although this module is always available, not all functions are
available on all platforms. Most of the functions defined in this
module call platform C library functions with the same name. It may
sometimes be helpful to consult the platform documentation, because
the semantics of these functions varies among platforms.
And:
The precision of the various real-time functions may be less than
suggested by the units in which their value or argument is expressed.
E.g. on most Unix systems, the clock “ticks” only 50 or 100 times a
second.
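On Python 3.3+ you can ask the interpreter directly what each clock's advertised resolution is via time.get_clock_info(), or probe the effective granularity empirically; a small sketch:

import time

# What the platform claims for each clock.
for name in ("time", "monotonic", "perf_counter", "process_time"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution={info.resolution}, implementation={info.implementation}")

# Empirically: spin until time.time() changes and report the step.
t0 = time.time()
t1 = time.time()
while t1 == t0:
    t1 = time.time()
print("observed time.time() step:", t1 - t0)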

Is python time module reliable enough to use to measure response time?

My question was not specific enough last time, so this is a second question about this topic.
I'm running some experiments and I need to precisely measure participants' response times to questions, in milliseconds.
I know how to do this with the time module, but I was wondering whether it is reliable enough, or whether I should be careful using it. In particular, could some other random CPU load interfere with the measurement?
So my question is: will the response time measured with the time module be accurate, or will there be some noise associated with it?
Thank you,
Joon
CPU load will affect timing. If your application is starved of CPU time, the measurements will suffer, and there is not much you can do about that; you can only be as precise as the scheduler allows. Ensure that your program gets a healthy slice of CPU time and the results will be accurate, in most cases to within milliseconds.
If you benchmark on a *nix system (Linux most probably), time.clock() will return CPU time in seconds. On its own, it's not very informative, but as a difference of results (i.e. t0 = time.clock(); some_process(); t = time.clock() - t0), you'd have a much more load-independent timing than with time.time().
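(Note that time.clock() is deprecated and was removed in Python 3.8; time.process_time() gives the CPU-time behaviour described above.) A minimal sketch of measuring a participant's response time on Python 3, using time.perf_counter() for the wall-clock interval:

import time

def measure_response(prompt):
    """Show a prompt and return (answer, response_seconds)."""
    t0 = time.perf_counter()
    answer = input(prompt)  # blocks until the participant responds
    elapsed = time.perf_counter() - t0
    return answer, elapsed

answer, rt = measure_response("Press Enter as fast as you can: ")
print(f"response time: {rt * 1000:.1f} ms")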
