I need to measure the time certain parts of my code take. While executing my code on a powerful server, I get 10 different results.
I tried comparing time measured with time.time(), time.perf_counter(), time.perf_counter_ns(), time.process_time() and time.process_time_ns().
import time

for _ in range(10):
    start = time.perf_counter()
    i = 0
    while i < 100000:
        i = i + 1
    time.sleep(1)
    end = time.perf_counter()
    print(end - start)
I'm expecting the results of executing the same code 10 times to be (roughly) the same, consistent to within about 1 ms, e.g. 1.041XX every time, and not spread between 1.030 s and 1.046 s.
When executing my code on a server with 16 CPUs and 32 GB of memory, I'm receiving this result:
1.045549364
1.030857833
1.0466020120000001
1.0309665050000003
1.0464690349999994
1.046397238
1.0309525370000001
1.0312070380000007
1.0307592159999999
1.046095523
I'm expecting the result to be:
1.041549364
1.041857833
1.0416020120000001
1.0419665050000003
1.0414690349999994
1.041397238
1.0419525370000001
1.0412070380000007
1.0417592159999999
1.041095523
Your expectations are wrong. If you want to measure the average time your code takes, use the timeit module. It executes your code multiple times and averages over the runs; a short sketch follows at the end of this answer.
The reason your code has different runtimes lies in your code:
time.sleep(1) # ensures (3.5+) _at least_ 1000ms are waited, won't be less, might be more
You are calling it in a tight loop, so the differences accumulate.
Quote from the time.sleep() documentation:
Suspend execution of the calling thread for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time. The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal’s catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
Changed in version 3.5: The function now sleeps at least secs even if the sleep is interrupted by a signal, except if the signal handler raises an exception (see PEP 475 for the rationale).
Emphasis mine.
Executing the same code does not take the same time at each loop iteration because of system scheduling (the system puts your process on hold to run another process, then switches back to it, and so on).
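Tying that back to the snippet in the question, a minimal timeit sketch could look like this (the sleep jitter still dominates each run; dividing the total by number just averages it out):

import timeit

# the statement being timed: the counting loop plus the sleep from the question
stmt = """
i = 0
while i < 100000:
    i = i + 1
time.sleep(1)
"""

n = 10
total = timeit.timeit(stmt, setup="import time", number=n)
print(total / n)  # mean duration per run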
I created a simple application, and I realised that my code was running extremely slowly. The application involved calling the same method over and over again. I investigated the problem, and it turned out that calling the same function / method several times resulted in Python sometimes taking 15 milliseconds to execute an empty function (pass).
I'm running Windows 10 Home 64-bit on a Lenovo ThinkPad with an i7 CPU.
The less code the function / method has, the smaller the chance of hitting a 15 ms runtime; however, it never goes away.
Here's the code:
import time

class Clock:
    def __init__(self):
        self.t = time.time()

    def restart(self):
        dt = time.time() - self.t
        self.t = time.time()
        return dt * 1000

def method():
    pass

for i in range(100000):
    c = Clock()
    method()
    dt = c.restart()
    if dt > 1.:
        print(str(i) + ' ' + str(dt))
I'd expect that nothing ever gets printed out; however, a typical result looks like this:
6497 15.619516372680664
44412 15.622615814208984
63348 15.621185302734375
On average, in 1-4 out of 100000 iterations, the time elapsed between starting the clock and getting the result (an empty function call plus a simple subtraction and variable assignment) is 15.62... milliseconds, which makes the run time really slow.
Occasionally the elapsed time is 1 millisecond.
Thank you for your help!
In your code you are calling time.time() twice, each of which requires the system to retrieve the time from the OS. You can read more here:
How does python's time.time() method work?
As you mentioned you are on Windows, it is probably better for you to use time.clock() instead; I'll defer to this link, which does a much better job of explaining it: https://www.pythoncentral.io/measure-time-in-python-time-time-vs-time-clock/
The link also takes garbage collection into account and shows how to disable it during testing.
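For example, timeit switches garbage collection off while timing by default; a small sketch (the workload is chosen arbitrarily) that re-enables it via the setup string so you can compare both cases:

import timeit

# GC is disabled during timing by default; re-enabling it in setup lets you include its cost
without_gc = timeit.timeit("sum(range(1000))", number=10000)
with_gc = timeit.timeit("sum(range(1000))", setup="import gc; gc.enable()", number=10000)

print(without_gc, with_gc)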
Hope it answers your questions!
I'm using this code to test the time.clock() function in Python:
import time

start = time.clock()
print(start)
time.sleep(3)
end = time.clock()
print(end)
print(end - start)
and the result is
0.282109
0.282151
4.199999999998649e-05
The docs say "On Unix, return the current processor time as a floating point number expressed in seconds.", but if the thread is sleeping for 3 seconds, how is the result of end - start so low?
Processor time means what is usually called CPU time, which is how much work the processor has done on the current process's behalf. That is next to nothing if you have only slept for 3 seconds.
Use time.time() instead.
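To make the contrast visible, here is a small Python 3 sketch (time.process_time() is roughly the modern counterpart of the Unix time.clock() behaviour, while time.perf_counter() measures wall-clock time):

import time

# processor time barely moves while sleeping; wall-clock time does
cpu_start, wall_start = time.process_time(), time.perf_counter()
time.sleep(3)
cpu_end, wall_end = time.process_time(), time.perf_counter()

print("CPU time:  ", cpu_end - cpu_start)    # close to 0
print("wall clock:", wall_end - wall_start)  # roughly 3 seconds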
As #decece quoted from the manual, perf_counter() would be a better choice here.
import time
start = time.perf_counter()
time.sleep(3)
end = time.perf_counter()
print(end-start) # 3.003116666999631
If you want to time arbitrary code, the timeit module is a good choice:
import timeit

n = 4
print(timeit.timeit("time.sleep(3)", setup="import time", number=n) / n)
Output:
3.00312000513
You can give it setup= code that is executed once, and have it execute your source code number times, getting the total time for all executions (with otherwise default settings).
Dividing that total by number again averages out the timings, making the resulting time more robust.
API: timeit
Your concrete measurements for the sleep call will vary, as it only guarantees to wait "at least" the number of seconds given; depending on OS rescheduling and interrupts it might take longer:
Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
Changed in version 3.5: The function now sleeps at least secs even if the sleep is interrupted by a signal, except if the signal handler raises an exception (see PEP 475 for the rationale).
I want to be able to easily set a benchmarking program's file write rate. It's a Python program that I'm using to test another system, and I'd like to be able to control the rate of file creation. One method I have thought of is to have a function with an argument for the number of files to create. This could be called in a loop which keeps track of the clock and only calls the function every second. This would fulfill the requirement of creating a certain number of files every second. The problem with this is that there could be a chunk of dead time (milliseconds, but still). I'd like a continuous load.
You'll have to somehow keep track of the time it takes to actually perform the file I/O calls, and adjust the sleep times between the operations. Adjustment needs to be continuous, as the sleeps and IO calls might take different amount of time depending on system load.
If you'd like to do N operations per second on average, you could run loops of a few seconds (or longer), and after every round check whether you're running too fast or too slow, adjusting the sleep() time between each operation upwards or downwards based on that. If you're running much too fast, increase the sleep time more; if you're only a little bit too fast, increase it less.
import time

# target rate: 100 ops / 1 second
target = 100.0
round_time = 1.0

# at first, assume the writes are immediate
sleepTime = round_time / target
ops = 0
t_start = time.time()

while True:
    # doYourIOoperationHere()
    ops += 1
    time.sleep(sleepTime)

    # adjust sleep time periodically
    if ops == target:
        t_end = time.time()
        elapsed = t_end - t_start
        difference = round_time - elapsed
        # print out the vars here to debug adjustment
        print "%d ops done, elapsed %.3f, difference %.3f" % (ops, elapsed, difference)
        # increase or decrease the sleep time, approach the target time slowly
        sleepTime += difference / target / 2
        t_start = time.time()
        ops = 0
Or something along those lines (simplistic code, untested). This might not work well for very high IO rates or system loads; you might have to start doing multiple write operations per single sleep call. Also, averaging over longer than 1 second is likely to be necessary.
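An alternative sketch (Python 3 style, untested, the operation is only a placeholder) that sidesteps drift by scheduling each operation against an absolute deadline instead of adjusting the sleep time:

import time

target_rate = 100.0           # operations per second
interval = 1.0 / target_rate

next_deadline = time.monotonic()
while True:
    # do_your_io_operation_here()   # placeholder for the real file write
    next_deadline += interval
    delay = next_deadline - time.monotonic()
    if delay > 0:
        time.sleep(delay)
    # if delay <= 0 we are behind schedule and continue immediately,
    # so the loop catches up instead of accumulating lag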
Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
start = time.clock()
... do something
elapsed = (time.clock() - start)
vs.
start = time.time()
... do something
elapsed = (time.time() - start)
As of 3.3, time.clock() is deprecated, and it's suggested to use time.process_time() or time.perf_counter() instead.
Previously in 2.7, according to the time module docs:
time.clock()
On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition
of the meaning of “processor time”, depends on that of the C function
of the same name, but in any case, this is the function to use for
benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
Additionally, there is the timeit module for benchmarking code snippets.
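The timeit module also has a command-line interface; an illustrative invocation (the statement here is just a stand-in) looks like:

python -m timeit -s "import time" "time.sleep(0.001)"

where -s supplies setup code that runs once and the final argument is the statement being timed.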
The short answer is: most of the time time.clock() will be better.
However, if you're timing some hardware (for example, an algorithm you run on the GPU), then time.clock() will not count that time, and time.time() is the only solution left.
Note: whatever method you use, the timing will depend on factors you cannot control (when the process switches, how often, ...). This is worse with time.time() but also exists with time.clock(), so you should never run just one timing test; always run a series of tests and look at the mean/variance of the times.
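One way to collect such a series (a sketch; the workload string is only a stand-in) is timeit.repeat(), which returns one total per repetition:

import statistics
import timeit

# five repetitions of 10000 runs each; repeat() returns a list of totals
runs = timeit.repeat("sum(range(1000))", number=10000, repeat=5)
per_call = [t / 10000 for t in runs]

print("mean:", statistics.mean(per_call), "stdev:", statistics.stdev(per_call))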
Others have answered re: time.time() vs. time.clock().
However, if you're timing the execution of a block of code for benchmarking/profiling purposes, you should take a look at the timeit module.
One thing to keep in mind:
Changing the system time affects time.time() but not time.clock().
I needed to control some automated test executions. If one step of the test case took more than a given amount of time, that TC was aborted so the run could go on with the next one.
But sometimes a step needed to change the system time (to check the scheduler module of the application under test), so after setting the system time a few hours in the future, the TC timeout expired and the test case was aborted. I had to switch from time.time() to time.clock() to handle this properly.
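On Python 3 the same situation can be handled without time.clock(): time.monotonic() (like time.perf_counter()) is not affected by system clock updates, so a timeout check along these lines keeps working even if a test step changes the system time (sketch only; the step and timeout value are made up):

import time

TIMEOUT = 30.0  # seconds allowed for one test step (illustrative value)

def run_test_step():
    # stand-in for the real step, which may even change the system time
    time.sleep(1)

deadline = time.monotonic() + TIMEOUT
run_test_step()
if time.monotonic() > deadline:
    print("step exceeded the timeout, aborting this test case")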
clock() -> floating point number
Return the CPU time or real time since the start of the process or since
the first call to clock(). This has as much precision as the system
records.
time() -> floating point number
Return the current time in seconds since the Epoch.
Fractions of a second may be present if the system clock provides them.
Usually time() is more precise, because operating systems do not store the process running time with the precision they store the system time (i.e., the actual time).
It depends on what you care about. If you mean WALL TIME (as in, the time on the clock on your wall), time.clock() provides NO accuracy, because it may report CPU time instead.
time() has better precision than clock() on Linux: clock() has a resolution of only about 10 ms, while time() gives much finer precision.
My test is on CentOS 6.4, python 2.6
using time():
1 requests, response time: 14.1749382019 ms
2 requests, response time: 8.01301002502 ms
3 requests, response time: 8.01491737366 ms
4 requests, response time: 8.41021537781 ms
5 requests, response time: 8.38804244995 ms
using clock():
1 requests, response time: 10.0 ms
2 requests, response time: 0.0 ms
3 requests, response time: 0.0 ms
4 requests, response time: 10.0 ms
5 requests, response time: 0.0 ms
6 requests, response time: 0.0 ms
7 requests, response time: 0.0 ms
8 requests, response time: 0.0 ms
As others have noted, time.clock() is deprecated in favour of time.perf_counter() or time.process_time(), but Python 3.7 introduced nanosecond resolution timing with time.perf_counter_ns(), time.process_time_ns(), and time.time_ns(), along with 3 other functions.
These 6 new nanosecond resolution functions are detailed in PEP 564:
time.clock_gettime_ns(clock_id)
time.clock_settime_ns(clock_id, time:int)
time.monotonic_ns()
time.perf_counter_ns()
time.process_time_ns()
time.time_ns()
These functions are similar to the version without the _ns suffix, but
return a number of nanoseconds as a Python int.
As others have also noted, use the timeit module to time functions and small code snippets.
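A minimal sketch of the nanosecond variants (the integer result avoids floating point rounding for very short intervals):

import time

start = time.perf_counter_ns()
sum(range(1000))                 # arbitrary small workload
end = time.perf_counter_ns()

print(end - start, "ns")         # an int, typically tens of microseconds worth of ns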
The difference is very platform-specific.
clock() is very different on Windows than on Linux, for example.
For the sort of examples you describe, you probably want the "timeit" module instead.
I used this code to compare the 2 methods. My OS is Windows 8, with a Core i5 processor and 4 GB of RAM.
import time

def t_time():
    start = time.time()
    time.sleep(0.1)
    return (time.time() - start)

def t_clock():
    start = time.clock()
    time.sleep(0.1)
    return (time.clock() - start)

counter_time = 0
counter_clock = 0

for i in range(1, 100):
    counter_time += t_time()

for i in range(1, 100):
    counter_clock += t_clock()

print "time() =", counter_time / 100
print "clock() =", counter_clock / 100
output:
time() = 0.0993799996376
clock() = 0.0993572257367
time.clock() was removed in Python 3.8 because it had platform-dependent behavior:
On Unix, return the current processor time as a floating point number expressed in seconds.
On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number
import time
print(time.clock()); time.sleep(10); print(time.clock())
# Linux  :  0.0382   0.0384    # see Processor Time
# Windows: 26.1224  36.1566    # see Wall-Clock Time
So which function to pick instead?
Processor Time: This is how long this specific process spends actively being executed on the CPU. Sleep, waiting for a web request, or time when only other processes are executed will not contribute to this.
Use time.process_time()
Wall-Clock Time: This refers to how much time has passed "on a clock hanging on the wall", i.e. real time in the outside world.
Use time.perf_counter()
time.time() also measures wall-clock time but can be reset, so you could go back in time
time.monotonic() cannot be reset (monotonic = only goes forward) but has lower precision than time.perf_counter()
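You can inspect these properties directly with time.get_clock_info(); a small sketch (the reported values are platform-dependent):

import time

for name in ("time", "monotonic", "perf_counter", "process_time"):
    info = time.get_clock_info(name)
    print(name, "adjustable:", info.adjustable, "resolution:", info.resolution)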
On Unix time.clock() measures the amount of CPU time that has been used by the current process, so it's no good for measuring elapsed time from some point in the past. On Windows it will measure wall-clock seconds elapsed since the first call to the function. On either system time.time() will return seconds passed since the epoch.
If you're writing code that's meant only for Windows, either will work (though you'll use the two differently - no subtraction is necessary for time.clock()). If this is going to run on a Unix system or you want code that is guaranteed to be portable, you will want to use time.time().
Short answer: use time.clock() for timing in Python.
On *nix systems, clock() returns the processor time as a floating point number, expressed in seconds. On Windows, it returns the seconds elapsed since the first call to this function, as a floating point number.
time() returns the seconds since the epoch, in UTC, as a floating point number. There is no guarantee that you will get precision better than 1 second (even though time() returns a floating point number). Also note that if the system clock has been set back between two calls to this function, the second call will return a lower value.
To the best of my understanding, time.clock() has as much precision as your system will allow.
Right answer: they both return fractions of the same length, so the precision is comparable. But which one is faster, if time itself is the subject?
A little test case:
import timeit
import time

clock_list = []
time_list = []

test1 = """
def test(v=time.clock()):
    s = time.clock() - v
"""

test2 = """
def test(v=time.time()):
    s = time.time() - v
"""

def test_it(Range):
    for i in range(Range):
        clk = timeit.timeit(test1, number=10000)
        clock_list.append(clk)
        tml = timeit.timeit(test2, number=10000)
        time_list.append(tml)

test_it(100)

print "Clock Min: %f Max: %f Average: %f" % (min(clock_list), max(clock_list), sum(clock_list) / float(len(clock_list)))
print "Time Min: %f Max: %f Average: %f" % (min(time_list), max(time_list), sum(time_list) / float(len(time_list)))
I don't work at a Swiss lab, but I've tested it.
Based on this test, the answer to the question is: time.clock() is better than time.time().
Edit: time.clock() is an internal counter, so it can't be used outside the process; it is limited to a 32-bit float, it cannot keep counting unless the first/last values are stored, and it can't be merged with another counter...
Comparing test results between Ubuntu Linux and Windows 7:
On Ubuntu
>>> start = time.time(); time.sleep(0.5); (time.time() - start)
0.5005500316619873
On Windows 7
>>> start = time.time(); time.sleep(0.5); (time.time() - start)
0.5