I'm just trying to time a piece of code. The pseudocode looks like:
start = get_ticks()
do_long_code()
print "It took " + (get_ticks() - start) + " seconds."
How does this look in Python?
More specifically, how do I get the number of ticks since midnight (or however Python organizes that timing)?
In the time module, there are two timing functions: time and clock. time gives you "wall" time, if that is what you care about.
However, the Python docs say that clock should be used for benchmarking. Note that clock behaves differently on different systems:
On MS Windows, it uses the Win32 function QueryPerformanceCounter(), with "resolution typically better than a microsecond". The absolute value has no special meaning, it's just a number (it starts counting the first time you call clock in your process).
# ms windows
t0 = time.clock()
do_something()
t = time.clock() - t0 # t is wall seconds elapsed (floating point)
On *nix, clock reports CPU time. This is different, and most probably the value you want, since your program is hardly ever the only process requesting CPU time (even with no other processes, the kernel uses CPU time now and then). So this number, which is typically smaller¹ than the wall time (i.e. time.time() - t0), is more meaningful when benchmarking code:
# linux
t0 = time.clock()
do_something()
t = time.clock() - t0 # t is CPU seconds elapsed (floating point)
Apart from all that, the timeit module has the Timer class that is supposed to use what's best for benchmarking from the available functionality.²
¹ unless threading gets in the way…
² Python ≥ 3.3: there are time.perf_counter() and time.process_time(); perf_counter is what the timeit module uses by default.
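For example, on Python ≥ 3.3 you can capture both views at once (a minimal sketch; do_something stands in for the code being timed):
# python >= 3.3
import time
t0_wall = time.perf_counter()
t0_cpu = time.process_time()
do_something()
wall = time.perf_counter() - t0_wall # wall-clock seconds elapsed
cpu = time.process_time() - t0_cpu # CPU seconds used by this process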
What you need is the time() function from the time module:
import time
start = time.time()
do_long_code()
print "it took", time.time() - start, "seconds."
You can use the timeit module for more options, though.
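For instance, a minimal sketch with timeit (do_long_code is the function from the question; passed as a callable it must take no arguments):
import timeit
# run do_long_code() once per measurement, repeat three times, keep the best
best = min(timeit.repeat(do_long_code, repeat=3, number=1))
print "best of 3:", best, "seconds"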
Here's a solution that I started using recently:
class Timer:
    def __enter__(self):
        self.begin = now()
    def __exit__(self, type, value, traceback):
        print(format_delta(self.begin, now()))
You use it like this (You need at least Python 2.5):
with Timer():
    do_long_code()
When your code finishes, Timer automatically prints out the run time. Sweet! If I'm trying to quickly bench something in the Python Interpreter, this is the easiest way to go.
And here's a sample implementation of 'now' and 'format_delta', though feel free to use your preferred timing and formatting method.
import datetime
def now():
    return datetime.datetime.now()
# Prints one of the following formats*:
# 1.58 days
# 2.98 hours
# 9.28 minutes # Not actually added yet, oops.
# 5.60 seconds
# 790 milliseconds
# *Except I prefer abbreviated formats, so I print d,h,m,s, or ms.
def format_delta(start, end):
    # Time in microseconds
    one_day = 86400000000
    one_hour = 3600000000
    one_second = 1000000
    one_millisecond = 1000
    delta = end - start
    build_time_us = delta.microseconds + delta.seconds * one_second + delta.days * one_day
    days = 0
    while build_time_us > one_day:
        build_time_us -= one_day
        days += 1
    if days > 0:
        time_str = "%.2fd" % (days + build_time_us / float(one_day))
    else:
        hours = 0
        while build_time_us > one_hour:
            build_time_us -= one_hour
            hours += 1
        if hours > 0:
            time_str = "%.2fh" % (hours + build_time_us / float(one_hour))
        else:
            seconds = 0
            while build_time_us > one_second:
                build_time_us -= one_second
                seconds += 1
            if seconds > 0:
                time_str = "%.2fs" % (seconds + build_time_us / float(one_second))
            else:
                ms = 0
                while build_time_us > one_millisecond:
                    build_time_us -= one_millisecond
                    ms += 1
                time_str = "%.2fms" % (ms + build_time_us / float(one_millisecond))
    return time_str
Please let me know if you have a preferred formatting method, or if there's an easier way to do all of this!
The time module in Python gives you access to the clock() function, which returns time in seconds as a floating-point number.
Different systems will have different accuracy based on their internal clock setup (ticks per second), but it's generally at least under 20 milliseconds, and in some cases better than a few microseconds.
-Adam
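For example (a minimal sketch; heavy_work is a placeholder for the code being timed, and note that time.clock() was deprecated in Python 3.3 and removed in 3.8 in favour of time.perf_counter()):
import time
t0 = time.clock()
heavy_work()
print time.clock() - t0 # seconds elapsed, as a float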
import datetime
start = datetime.datetime.now()
do_long_code()
finish = datetime.datetime.now()
delta = finish - start
print delta.seconds # note: .seconds is only the seconds part of the delta (days and microseconds are ignored)
From midnight:
import datetime
midnight = datetime.datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
now = datetime.datetime.now()
delta = now - midnight
print delta.seconds
If you have many statements you want to time, you could use something like this:
from time import clock

class Ticker:
    def __init__(self):
        self.t = clock()
    def __call__(self):
        dt = clock() - self.t
        self.t = clock()
        return 1000 * dt
Then your code could look like:
tick = Ticker()
# first command
print('first took {}ms'.format(tick()))
# second group of commands
print('second took {}ms'.format(tick()))
# third group of commands
print('third took {}ms'.format(tick()))
That way you don't need to type t = time() before each block and 1000 * (time() - t) after it, while still keeping control over formatting (though you could easily put that in Ticker too).
It's a minimal gain, but I think it's kind of convenient.
I'm trying to estimate time of running AES in Python. I have a code from here:
https://gist.github.com/jeetsukumaran/1291836
And I'm using this:
https://repl.it/languages/python3
Sometimes I get negative run times for the algorithm. Why is that? How do I measure it correctly?
Relevant timing loop:
start = timeit.timeit()
r = Rijndael("abcdefg1234567890123456789012345", block_size = 32)
ciphertext = r.encrypt("99999999999999999999999999999995")
plaintext = r.decrypt(ciphertext)
end = timeit.timeit()
The full code is here.
Use time.time(), not timeit.timeit().
import time
# unrelated code
start = time.time()
r = Rijndael("abcdefg1234567890123456789012345", block_size = 32)
ciphertext = r.encrypt("99999999999999999999999999999995")
plaintext = r.decrypt(ciphertext)
end = time.time()
elapsed = end - start # will not be negative!
Notes
How does time.time() work?
time.time() will always return the number of seconds since January 1, 1970, 00:00:00 (UTC).
How is timeit.timeit() used?
It times small statements: by default it runs the statement 1,000,000 times and returns the total time taken, in seconds (not an average, and not milliseconds).
>>> import timeit
>>> timeit.timeit('4 + 5') # runs 4 + 5 1,000,000 times; returns the total time in seconds
0.009406077000000401
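To get an approximate per-call figure, divide the total by the number of runs:
>>> timeit.timeit('4 + 5', number=1000000) / 1000000 # roughly seconds per execution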
I execute my code in a loop over many objects and it seems to take too much time.
I would like to add a condition that stops the execution after 30 minutes, for example. How should this be done? Do I need another for loop and the timeit module for that, or can it be done more simply?
You can do it with something like this:
import time
time_limit = 60 * 30 # number of seconds in 30 minutes
t0 = time.time()
for obj in list_of_objects_to_iterate_over:
    do_some_stuff(obj)
    if time.time() - t0 > time_limit:
        break
The break statement exits the loop as soon as the current pass finishes after the time limit you set has been exceeded.
You could record a starting time and stop executing after 30 minutes have passed since that starting time:
from datetime import datetime, timedelta
starting_time = datetime.now()
for item in something:
    # do something
    if (datetime.now() - starting_time) // timedelta(minutes=1) >= 30:
        break
So here is my variant of how this can be done (assuming we gather data via an SQL response):
# snippet from a larger function: items, total_records, start_time_ms, logger,
# cur and conn are assumed to be set up (and total_records updated) elsewhere
for i in items:
    if total_records % 100 == 0:
        logger.warning(
            "Processed {} items in {} ms".format(total_records, int(time.time() * 1000) - start_time_ms))
    if int(time.time() * 1000) - start_time_ms > 3600000:  # stop after one hour
        cur.close()
        conn.close()
        return total_records
I want to compute how many times my computer can do counter += 1 in one second. A naive approach is the following:
from time import time
counter = 0
startTime = time()
while time() - startTime < 1:
counter += 1
print counter
The problem is time() - startTime < 1 may be considerably more expensive than counter += 1.
Is there a way to make a less "clean" 1 sec sample of my algorithm?
The usual way to time algorithms is the other way around: Use a fixed number of iterations and measure how long it takes to finish them. The best way to do such timings is the timeit module.
print timeit.timeit("counter += 1", "counter = 0", number=100000000)
Note that timing counter += 1 seems rather pointless, though. What do you want to achieve?
Why don't you infer the time instead? You can run something like:
from datetime import datetime

def operation():
    counter = 0
    tbeg = datetime.utcnow()
    for _ in range(10**6):
        counter += 1
    td = datetime.utcnow() - tbeg
    return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10.0**6

def timer(n):
    stack = []
    for _ in range(n):
        stack.append(operation())  # units of musec/increment
    print sum(stack) / len(stack)

if __name__ == "__main__":
    timer(10)
and get the average elapsed microseconds per increment; I get 0.09 (most likely very inaccurate). Now, it is a simple operation to infer that if I can make one increment in 0.09 microseconds, then I am able to make about 11258992 in one second.
I think the measurements are very inaccurate, but maybe is a sensible approximation?
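The inference itself is just a reciprocal; using the illustrative 0.09 µs figure from above:
avg_us_per_increment = 0.09 # average microseconds per increment (illustrative)
per_second = 1.0 / (avg_us_per_increment * 1e-6) # about 11.1 million increments per second
print per_second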
I have never worked with the time library, but according to that code I assume it counts seconds, so what if you do the per-second calculation when Ctrl+C happens? It would be something like:
#! /usr/bin/env python
from time import time
import signal
import sys
# The Ctrl+C interruption handler:
def signal_handler(signal, frame):
    counts_per_sec = counter / (time() - startTime)
    print counts_per_sec
    sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
counter = 0
startTime = time()
while 1:
    counter = counter + 1
Of course, it won't be exact because of the time that passes between the last processed increment and the interruption signal, but the longer you leave the script running, the more precise it will be :)
Here is my approach
import time
m = 0
timeout = time.time() + 1
while True:
    if time.time() > timeout:
        break
    m = m + 1
print(m)
Is it possible to receive the output of time.time() in Python 2.5 as a Decimal?
If not (and it has to be a float), then is it possible to guarantee that the inexact value will always be greater than (rather than less than) the original value? In other words:
>>> repr(0.1)
'0.10000000000000001' # More than 0.1 which is what I want
>>> repr(0.99)
'0.98999999999999999' # Less than 0.99 which is unacceptable
Code example:
import math, time
sleep_time = 0.1
while True:
    time_before = time.time()
    time.sleep(sleep_time)
    time_after = time.time()
    time_taken = time_after - time_before
    assert time_taken >= sleep_time, '%r < %r' % (time_taken, sleep_time)
EDIT:
Now using the following (which does not fail in testing but could still theoretically fail):
import time
from decimal import Decimal
def to_dec(float_num):
    return Decimal('%2f' % float_num)
sleep_time = to_dec(0.1)
while True:
    time_before = to_dec(time.time())
    time.sleep(float(sleep_time))
    time_after = to_dec(time.time())
    time_taken = time_after - time_before
    assert time_taken >= sleep_time, '%r < %r' % (time_taken, sleep_time)
    print 'time_taken (%s) >= sleep_time (%s)' % (time_taken, sleep_time)
You could simply multiply time.time() by some value to get the precision you want (note that many calls can't guarantee sub-second accuracy anyway). So,
startTime = int(time.time() * 100)
#...
endTime = int(time.time() * 100)
This will satisfy your condition that endTime - startTime >= sleepTime.
You could format your float value like so:
>>> '%.2f' % 0.99
'0.99'
See Python's String Formatting Operations
It is for a timing function which needs to register a time before and after a call that may sleep, so that (time after - time before) >= sleep time, which is not always the case with floats.
I think your requirements are inconsistent. You seem to want to call the same function twice, and have the first call round the result downwards, and the second call to round the result upwards.
If I were you:
I'd time many calls instead of just one.
When drawing any conclusions I'd take into account the resolution of the timer and any floating-point issues (if relevant).
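A sketch of that idea using timeit (the 0.1-second sleep stands in for the call being timed):
import timeit
t = timeit.Timer("time.sleep(0.1)", "import time")
# best of 5 repeats, 100 sleeps each; dividing gives a per-call estimate
best = min(t.repeat(repeat=5, number=100))
print best / 100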
at the start and end of my program, I have
from time import strftime
print int(strftime("%Y-%m-%d %H:%M:%S")
Y1=int(strftime("%Y"))
m1=int(strftime("%m"))
d1=int(strftime("%d"))
H1=int(strftime("%H"))
M1=int(strftime("%M"))
S1=int(strftime("%S"))
Y2=int(strftime("%Y"))
m2=int(strftime("%m"))
d2=int(strftime("%d"))
H2=int(strftime("%H"))
M2=int(strftime("%M"))
S2=int(strftime("%S"))
print "Difference is:"+str(Y2-Y1)+":"+str(m2-m1)+":"+str(d2-d1)\
+" "+str(H2-H1)+":"+str(M2-M1)+":"+str(S2-S1)
But when I tried to get the difference, I get syntax errors.... I am doing a few things wrong, but I'm not sure what is going on...
Basically, I just want to store a time in a variable at the start of my program, then store a 2nd time in a second variable near the end, then at the last bit of the program, compute the difference and display it. I am not trying to time a function speed. I am trying to log how long it took for a user to progress through some menus. What is the best way to do this?
The datetime module will do all the work for you:
>>> import datetime
>>> a = datetime.datetime.now()
>>> # ...wait a while...
>>> b = datetime.datetime.now()
>>> print(b-a)
0:03:43.984000
If you don't want to display the microseconds, just use (as gnibbler suggested):
>>> a = datetime.datetime.now().replace(microsecond=0)
>>> b = datetime.datetime.now().replace(microsecond=0)
>>> print(b-a)
0:03:43
from time import time
start_time = time()
...
end_time = time()
seconds_elapsed = end_time - start_time
hours, rest = divmod(seconds_elapsed, 3600)
minutes, seconds = divmod(rest, 60)
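To display it, one possible formatting (divmod on floats leaves seconds as a float):
print "%d:%02d:%04.1f" % (hours, minutes, seconds)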
You cannot calculate the differences separately ... what difference would that yield for 7:59 and 8:00 o'clock? Try
import time
time.time()
which gives you the seconds since the start of the epoch.
You can then get the intermediate time with something like
timestamp1 = time.time()
# Your code here
timestamp2 = time.time()
print "This took %.2f seconds" % (timestamp2 - timestamp1)
Both time.monotonic() and time.monotonic_ns() are correct. Correct as in monotonic.
>>> import time
>>>
>>> time.monotonic()
452782.067158593
>>>
>>> t0 = time.monotonic()
>>> time.sleep(1)
>>> t1 = time.monotonic()
>>> print(t1 - t0)
1.001658110995777
Regardless of language, monotonic time is the right answer, and real time is the wrong answer. The difference is that monotonic time is supposed to give a consistent answer when measuring durations, while real time isn't, as real time may be adjusted – indeed needs to be adjusted – to keep up with reality. Monotonic time is usually the computer's uptime.
As such, time.time() and datetime.now() are wrong ways to do this.
Python also has time.perf_counter() and time.perf_counter_ns(), which are specified to have the highest available resolution, but aren't guaranteed to be monotonic. On PC hardware, though, both typically have nanosecond resolution.
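You can check what your platform actually provides (time.get_clock_info() is available from Python 3.3):
import time
for name in ("monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    print(name, "resolution:", info.resolution, "monotonic:", info.monotonic)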
Here is a piece of code to do so:
def StringChallenge(str1):
    # str1 = str1[1:-1]
    h1 = 0
    h2 = 0
    m1 = 0
    m2 = 0

    def time_dif(h1, m1, h2, m2):
        if h1 == h2:
            return m2 - m1
        else:
            return (h2 - h1 - 1) * 60 + (60 - m1) + m2

    count_min = 0
    if str1[1] == ':':
        h1 = int(str1[:1])
        m1 = int(str1[2:4])
    else:
        h1 = int(str1[:2])
        m1 = int(str1[3:5])
    if str1[-7] == '-':
        h2 = int(str1[-6])
        m2 = int(str1[-4:-2])
    else:
        h2 = int(str1[-7:-5])
        m2 = int(str1[-4:-2])
    if h1 == 12:
        h1 = 0
    if h2 == 12:
        h2 = 0
    if "am" in str1[:8]:
        flag1 = 0
    else:
        flag1 = 1
    if "am" in str1[7:]:
        flag2 = 0
    else:
        flag2 = 1
    if flag1 == flag2:
        if h2 > h1 or (h2 == h1 and m2 >= m1):
            count_min += time_dif(h1, m1, h2, m2)
        else:
            count_min += 1440 - time_dif(h2, m2, h1, m1)
    else:
        count_min += (12 - h1 - 1) * 60
        count_min += (60 - m1)
        count_min += (h2 * 60) + m2
    return count_min