Consider a very simple timer:

import time

start = time.time()
end = time.time() - start
while end < 5:
    end = time.time() - start
print(end)
How precise is this timer? That is, compared to a real-time clock, how well synchronized is it?
Now for the real question:
What is the smallest scale of time that can be measured precisely with Python ?
This is entirely platform dependent. Use the timeit.default_timer() function; it returns the most precise timer for your platform.
From the documentation:
Define a default timer, in a platform-specific manner. On Windows, time.clock() has microsecond granularity, but time.time()'s granularity is 1/60th of a second. On Unix, time.clock() has 1/100th of a second granularity, and time.time() is much more precise.
So on Windows you get microseconds; on Unix you get whatever precision the platform can provide, which is usually (much) better than 1/100th of a second.
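A minimal sketch of how you'd use it; the sum() call is just a placeholder workload standing in for whatever you want to time:

import timeit

start = timeit.default_timer()
sum(range(1000000))  # placeholder workload being timed
elapsed = timeit.default_timer() - start
print("elapsed: %.6f s" % elapsed)

(On Python 3.3 and later, timeit.default_timer is simply time.perf_counter on every platform.)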
This entirely depends on the system you are running it on; there is no guarantee Python has any way of tracking time at all.
That said, it's pretty safe to assume you'll get millisecond accuracy on modern systems; beyond that, it really is highly dependent on the system. To quote the docs:
Although this module is always available, not all functions are available on all platforms. Most of the functions defined in this module call platform C library functions with the same name. It may sometimes be helpful to consult the platform documentation, because the semantics of these functions varies among platforms.
And:
The precision of the various real-time functions may be less than suggested by the units in which their value or argument is expressed. E.g. on most Unix systems, the clock "ticks" only 50 or 100 times a second.
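One way to check empirically what granularity your own system actually delivers is to look for the smallest nonzero step between consecutive time.time() readings. A rough sketch; note this shows the granularity you happen to observe, not a guaranteed precision:

import time

def min_tick(clock, samples=100000):
    # Smallest nonzero difference seen between consecutive readings.
    smallest = float("inf")
    prev = clock()
    for _ in range(samples):
        now = clock()
        if now != prev:
            smallest = min(smallest, now - prev)
            prev = now
    return smallest

print("time.time() tick: %.9f s" % min_tick(time.time))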
I have a Python program running on my Raspberry Pi 3B doing a bunch of image processing and so on. I wanted to gather some measurements from the program by writing them to a .csv file, together with the corresponding time for each measurement. I used time.clock() (see code snippet below) to get the time before each write operation, but somewhere between 2147 and 2148 seconds, the time becomes negative (see the table below). I suspect some kind of overflow occurred, but I'm having trouble understanding how. The Raspberry Pi is a 32-bit system, and as I understand it, time.clock() returns a float. Shouldn't the time variable have overflowed only at much larger values, or is the time variable in this case not 32 bits?
Since then, I've read various threads indicating that time.time() might have been a better choice for this use case, and I may do that in future tests, but I want to see what I can do with the values I've gathered thus far. I believe I can do some processing on the logged times to "de-overflow" them, for lack of a better word, and use them as is. Any thoughts?
import time
import csv

def somefunction(someX, someY, csvwriter):
    t = time.clock()
    x = somefunc(someX)
    y = somefunc(someY)
    csvwriter.writerow([t, x, y])
    return
Time (s)        X value    Y value
2146.978524     -0.0019    0.00032
2147.30423      -0.00191   0.00023
-2147.336675    -0.00182   0.00034
-2147.000555    -0.00164   0.00037
I doubt this is a 32-bit issue. The third bullet point near the beginning of the Python 3.7 documentation of the time module says:
The functions in this module may not handle dates and times before the epoch or far in the future. The cut-off point in the future is determined by the C library; for 32-bit systems, it is typically in 2038.
That said, I don't really know what the problem is. One observation: 2**31 microseconds is about 2147.48 seconds, which is suspiciously close to where your values flip sign, so a signed 32-bit counter of microseconds wrapping around would fit the data. Perhaps using the time.perf_counter() or time.process_time() functions instead would avoid the issue.
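If the cause really is such a wraparound, you could "de-overflow" the logged times by adding one full counter range (2**32 microseconds) after each detected wrap. A minimal sketch, assuming samples are taken much more often than the wrap period:

WRAP = 2**32 / 1000000.0  # full range of a 32-bit microsecond counter, ~4294.97 s

def unwrap(times):
    # Add one full counter range after each detected wraparound.
    offset = 0.0
    fixed = []
    prev = None
    for t in times:
        if prev is not None and t < prev - WRAP / 2:
            offset += WRAP  # a wrap occurred between these two samples
        fixed.append(t + offset)
        prev = t
    return fixed

print(unwrap([2146.978524, 2147.30423, -2147.336675, -2147.000555]))
# -> roughly [2146.98, 2147.30, 2147.63, 2147.97], monotonic again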
I'm fairly new to Python (and CS in general), and I've been reading some docs for the time module in Python. There are quite a lot of time-measuring functions, and I'm trying to find the most suitable one for comparing the performance of two versions of an algorithm.
I understand that time.time() is wall time, and time.process_time() is either user CPU time or system CPU time (I'm not quite sure), but which of these two would be a better (more accurate) measure of performance?
Thank you!!!
I would suggest you use time.perf_counter(), as it is the recommended function for this kind of task (it uses the highest-resolution clock available).
It returns a float that means nothing on its own (unlike the result of time.time()), but the difference between two time.perf_counter() readings tells you how much time elapsed.
For more info, read the time.perf_counter() docs.
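A minimal sketch of such a comparison; algo_v1 and algo_v2 are hypothetical stand-ins for the two implementations being compared:

import time

def algo_v1(n):  # hypothetical first version
    return sum(i * i for i in range(n))

def algo_v2(n):  # hypothetical second version
    return sum(map(lambda i: i * i, range(n)))

for algo in (algo_v1, algo_v2):
    start = time.perf_counter()
    algo(1000000)
    elapsed = time.perf_counter() - start
    print("%s: %.4f s" % (algo.__name__, elapsed))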
Recently, when creating a loop with a very short wait at the end, I ran into an unexpected behaviour of time.sleep() when used in quick succession.
I used this piece of code to look further into my problem:
import time
import statistics

def average_wait(func):
    waits = []
    loops = 0
    while loops < 1000:
        start = time.time()
        func(1 / 1000)
        waits.append(time.time() - start)
        loops += 1
    print(waits)
    print("Average wait for 0.001: {}".format(statistics.mean(waits)))

average_wait(time.sleep)
This function usually returns something around 0.0013, which is many times less accurate than a single call to time.sleep(). Upon further inspection, looking at the waits list, I found that the amount of time time.sleep() actually slept was either almost exactly the requested amount or almost exactly double it.
Here is a sample from waits:
[0.0010008811950683594, 0.0020041465759277344, 0.0009999275207519531, 0.0019621849060058594, 0.0010418891906738281]
Is there any reason for this behaviour and anything that can be done to avoid it?
From the time.time() documentation:
Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second.
The precision is platform dependent. Moreover, time.time() produces wall-clock time, and your process is never the only thing running on a modern OS; other processes are also given time to run, so you'll see variation in your own process's timings because of that.
The module offers different clocks with more precision, and some are per-process. See the time.get_clock_info() function for the precision each offers. Note that time.process_time() offers per-process time but excludes sleep time.
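For example, this prints what each clock offers on your platform:

import time

for name in ("time", "monotonic", "perf_counter", "process_time"):
    info = time.get_clock_info(name)
    print("%-12s resolution=%g monotonic=%s adjustable=%s"
          % (name, info.resolution, info.monotonic, info.adjustable))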
Next, time.sleep() is also not going to sleep in exact time spans; again from the relevant documentation:
[T]he suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
It too is subject to OS scheduling.
Together, these effects can easily add up to the millisecond of variation you see in your timings. So this is not a doubling of the time slept; even if you used different values for time.sleep(), you'd still see a similar deviation from the requested time.
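If you need tighter waits than time.sleep() alone gives you, one common (CPU-hungry) workaround is to sleep for most of the interval and busy-wait the remainder. A sketch, where the 0.0005 s spin margin is an arbitrary choice:

import time

def precise_sleep(duration, spin=0.0005):
    # Sleep coarsely first, then busy-wait the last `spin` seconds.
    deadline = time.perf_counter() + duration
    if duration > spin:
        time.sleep(duration - spin)
    while time.perf_counter() < deadline:
        pass

Even this cannot beat the OS scheduler entirely, since the spinning thread can still be preempted.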
I am trying to sample CPU registers every millisecond and calculate the frequency. To get an accurate measurement, I need the sampling interval to be very accurate. I have been using time.sleep() to achieve this, but sleep is not very accurate at sub-second scales.
What I would like to do is set up a counter that is incremented at an accurate rate, and sample when that counter reaches a certain value. I am running Python 2.6. Does anyone have any suggestions?
I suspect there are several Python packages out there that help with what you want. I also suspect Python is not the right tool for that purpose.
There is the timeit module.
There is the time module with clock(), which is NOT a wall clock but a CPU-usage clock (for the application that first calls time.clock()). It returns a floating-point value with some 12+ digits past the ones place, e.g. 1.12345678912345. Python floats are not known for their accuracy, and the return value of time.clock() is not something I personally trust as accurate.
There are other Python introspection tools that you can google for, like inspect, itertools, and others, that time processes. However, I suspect their accuracy depends on running averages of many iterations of measuring the same thing.
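That said, if you do attempt it in Python, scheduling against absolute deadlines (rather than sleeping a fixed amount each pass) at least keeps timing errors from accumulating. A sketch, where sample_registers() is a hypothetical placeholder for the actual sampling code; it uses time.time() since Python 2.6 has no time.perf_counter():

import time

INTERVAL = 0.001  # target 1 ms sampling period
next_t = time.time() + INTERVAL
for _ in range(1000):
    delay = next_t - time.time()
    if delay > 0:
        time.sleep(delay)
    # sample_registers()  # hypothetical: read and record the registers here
    next_t += INTERVAL    # absolute deadline, so per-iteration error does not accumulate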
My question was not specific enough last time, so this is a second question on the topic.
I'm running some experiments, and I need to measure participants' response times to questions precisely, in milliseconds.
I know how to do this with the time module, but I was wondering whether it is reliable enough or whether I should be careful using it. Specifically, could random CPU load from other processes interfere with the measurement of time?
So my question is: will response times measured with the time module be accurate, or will there be some noise associated with them?
Thank you,
Joon
CPU load will affect timing. If your application is starved of CPU time, the timing will be affected, and you cannot help that much; you can only be as precise as the system allows. Ensure that your program gets a healthy slice of CPU time and the result will be accurate. In most cases, the results should be accurate to milliseconds.
If you benchmark on a *nix system (most probably Linux), time.clock() returns CPU time in seconds. On its own it's not very informative, but as a difference of results (i.e. t0 = time.clock(); some_process(); t = time.clock() - t0), it gives a much more load-independent timing than time.time().
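The same pattern written out, with some_process() as a placeholder workload; note that time.clock() was removed in Python 3.8, where time.process_time() plays the equivalent role:

import time

def some_process():
    sum(i * i for i in range(10 ** 6))  # placeholder workload

t0 = time.clock()  # use time.process_time() on Python 3.8+
some_process()
print("CPU time: %.6f s" % (time.clock() - t0))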