Is it possible to receive the output of time.time() in Python 2.5 as a Decimal?
If not (and it has to be a float), is it possible to guarantee that the inaccuracy will always be greater than (rather than less than) the true value? In other words:
>>> repr(0.1)
'0.10000000000000001' # More than 0.1 which is what I want
>>> repr(0.99)
'0.98999999999999999' # Less than 0.99 which is unacceptable
Code example:
import math, time

sleep_time = 0.1
while True:
    time_before = time.time()
    time.sleep(sleep_time)
    time_after = time.time()
    time_taken = time_after - time_before
    assert time_taken >= sleep_time, '%r < %r' % (time_taken, sleep_time)
EDIT:
Now using the following (which does not fail in testing but could still theoretically fail):
import time
from decimal import Decimal

def to_dec(float_num):
    return Decimal('%2f' % float_num)

sleep_time = to_dec(0.1)
while True:
    time_before = to_dec(time.time())
    time.sleep(float(sleep_time))
    time_after = to_dec(time.time())
    time_taken = time_after - time_before
    assert time_taken >= sleep_time, '%r < %r' % (time_taken, sleep_time)
    print 'time_taken (%s) >= sleep_time (%s)' % (time_taken, sleep_time)
You could simply multiply time.time() by some value to get the precision you want (note that many calls can't guarantee sub-second accuracy anyway). So,
startTime = int(time.time() * 100)
#...
endTime = int(time.time() * 100)
This will satisfy your condition that endTime - startTime >= sleepTime.
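For example, here's a minimal sketch of that tick idea applied to the sleep loop from the question (SCALE and sleep_ticks are names I've made up; note that truncating with int() can still shave a tick off either endpoint, so this isn't a watertight guarantee):

import time

SCALE = 100        # ticks per second; pick whatever resolution you need
sleep_ticks = 10   # 0.1 s expressed in ticks

while True:
    before = int(time.time() * SCALE)   # truncate to whole ticks
    time.sleep(sleep_ticks / float(SCALE))
    after = int(time.time() * SCALE)
    # integer arithmetic avoids float-comparison surprises
    print(after - before >= sleep_ticks)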
You could format your float value like so:
>>> '%.2f' % 0.99
'0.99'
See Python's String Formatting Operations
It is for a timing function which needs to register a time before and after a call that may sleep, so time_after - time_before >= sleep_time must hold, which is not always the case with floats.
I think your requirements are inconsistent. You seem to want to call the same function twice, and have the first call round the result downwards, and the second call to round the result upwards.
If I were you:
I'd time many calls instead of just one (see the sketch below).
When drawing any conclusions I'd take into account the resolution of the timer and any floating-point issues (if relevant).
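For instance, a rough sketch that averages many sleep calls with timeit instead of asserting on a single measurement:

import timeit

# time 100 sleeps of 0.1 s and look at the mean rather than
# trusting any individual measurement
t = timeit.Timer('time.sleep(0.1)', 'import time')
print(t.timeit(number=100) / 100)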
While playing around with timing in Python, I found odd behavior when calling time.time() twice within a single statement. There is a very small processing delay between the two calls during statement execution.
E.g. time.time() - time.time()
If both calls executed at the same instant, in a perfect world, the result would be 0.
In the real world, however, this yields a very small nonzero number, because of the delay between when the processor executes the first time.time() call and the second. However, when running this same expression and comparing it to a variable computed the same way, the results are skewed in one direction.
See the small code snippet below.
This also holds true for very large data sets
import time

counts = 300000

def at_once():
    first = 0
    second = 0
    x = 0
    while x < counts:
        x += 1
        exec_first = time.time() - time.time()
        exec_second = time.time() - time.time()
        if exec_first > exec_second:
            first += 1
        else:
            second += 1
    print('1sts: %s' % first)
    print('2nds: %s' % second)

at_once()
prints:
1sts: 39630
2nds: 260370
Unless I have my logic incorrect, I would expect the results to very close to 50:50, but it does not seem to be the case. Is there anyone who could explain what causes this behavior or point out a potential flaw with the code logic that is making the results skewed in one direction?
Could it be that exec_first == exec_second? Your if-else would add 1 to second in that case.
Try changing your if-else to something like:
if exec_first > exec_second:
    first += 1
elif exec_second > exec_first:
    second += 1
else:
    pass
You assign all of the ties to one category. Try it with a middle ground:
import time

counts = 300000

first = 0
second = 0
same = 0

for _ in range(counts):
    exec_first = time.time() - time.time()
    exec_second = time.time() - time.time()
    if exec_first == exec_second:
        same += 1
    elif exec_first > exec_second:
        first += 1
    else:
        second += 1

print('1sts: %s' % first)
print('same: %s' % same)
print('2nds: %s' % second)
Output:
$ python3 so.py
1sts: 53099
same: 194616
2nds: 52285
$ python3 so.py
1sts: 57529
same: 186726
2nds: 55745
Also, I'm confused as to why you think that a function call might take 0 time. Every invocation requires at least access to the system clock and copying that value to a temporary location of some sort. This isn't free of overhead on any current computer.
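As a rough illustration of that overhead, you can time the call itself (the numbers will vary by machine):

import timeit

# approximate cost of a single time.time() call, in seconds
print(timeit.timeit('time.time()', setup='import time', number=10**6) / 10**6)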
I want to write random data ranging from -1 to 1 to a CSV file, one value per millisecond, using Python. I started by writing one value per second, and that worked, but I am facing difficulty writing random data for each millisecond. I want the timestamp to be in UNIX epoch format, like "1476449030.55676" (for milliseconds, the decimal point is not required).
import csv
import datetime
import random
import sys
import time

tstep = datetime.timedelta(milliseconds=1)
tnext = datetime.datetime.now() + tstep
NumberOfReadings = 10  # 10 values (1 value per 1 millisecond)
i = 0
f = open(sys.argv[1], 'w+')
try:
    writer = csv.writer(f)
    while i < NumberOfReadings:
        writer.writerow((random.uniform(-1, 1), time.time()))
        tdiff = tnext - datetime.datetime.now()
        time.sleep(float(tdiff.total_seconds()/1000))
        tnext = tnext + tstep
        i = i + 1
finally:
    f.close()
UPD: time.sleep() accepts its argument in seconds, so you don't need to divide it by 1000. After fixing this, my output looks like this:
0.18153176446804853,1476466290.720721
-0.9331178681567136,1476466290.721784
-0.37142653326337327,1476466290.722779
0.1397040393287503,1476466290.723766
0.7126280853504974,1476466290.724768
-0.5367844384018245,1476466290.725762
0.44284645253432786,1476466290.726747
-0.2914685960956531,1476466290.727744
-0.40353712249981943,1476466290.728778
0.035369003158632895,1476466290.729771
Which is as good as it gets, given the precision of time.sleep and other time functions.
Here's a stripped-down version, which outputs timestamps to stdout every millisecond:
import time

tstep = 0.001
tnext = time.time() + tstep
NumberOfReadings = 10  # 10 values (1 value per 1 millisecond)
for i in range(NumberOfReadings):
    now = time.time()
    print(now)
    time.sleep(tnext - now)
    tnext += tstep
================================================
This is the problem:
float(tdiff.total_seconds()/1000)
tdiff.total_seconds() already returns seconds (as a float), and time.sleep() takes its argument in seconds, so dividing by 1000 makes every sleep a thousand times shorter than intended (see the UPD above). Just pass the value straight through:
tdiff.total_seconds()
I want to compute how many times my computer can do counter += 1 in one second. A naive approach is the following:
from time import time

counter = 0
startTime = time()
while time() - startTime < 1:
    counter += 1
print counter
The problem is time() - startTime < 1 may be considerably more expensive than counter += 1.
Is there a way to make a less "clean" 1 sec sample of my algorithm?
The usual way to time algorithms is the other way around: Use a fixed number of iterations and measure how long it takes to finish them. The best way to do such timings is the timeit module.
print timeit.timeit("counter += 1", "counter = 0", number=100000000)
Note that timing counter += 1 seems rather pointless, though. What do you want to achieve?
Why don't you infer the time instead? You can run something like:
from datetime import datetime

def operation():
    counter = 0
    tbeg = datetime.utcnow()
    for _ in range(10**6):
        counter += 1
    td = datetime.utcnow() - tbeg
    return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10.0**6

def timer(n):
    stack = []
    for _ in range(n):
        stack.append(operation())  # units of microseconds/increment
    print sum(stack) / len(stack)

if __name__ == "__main__":
    timer(10)
and get the average elapsed microseconds per increment; I get 0.09 (most likely very inaccurate). Now, it is simple to infer that if I can make one increment in 0.09 microseconds, then I am able to make about 11258992 in one second.
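The inference itself is just unit arithmetic, e.g.:

# if one increment costs avg_usec microseconds, the rate per second is:
avg_usec = 0.09  # the measured average from above
print(int(1e6 / avg_usec))  # roughly 11 million increments per second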
I think the measurements are very inaccurate, but maybe it is a sensible approximation?
I have never worked with the time module, but according to that code I assume it counts seconds, so what if you do the per-second calculation when Ctrl+C happens? It would be something like:
#! /usr/bin/env python
from time import time
import signal
import sys

# The Ctrl+C interruption handler:
def signal_handler(signal, frame):
    counts_per_sec = counter / (time() - startTime)
    print counts_per_sec
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

counter = 0
startTime = time()
while 1:
    counter = counter + 1
Of course, it won't be exact because of the time that passes between the last increment and the interruption signal, but the longer you leave the script running, the more precise it will be :)
Here is my approach:
import time

m = 0
timeout = time.time() + 1
while True:
    if time.time() > timeout:
        break
    m = m + 1
print(m)
At the start and end of my program, I have
from time import strftime
print int(strftime("%Y-%m-%d %H:%M:%S")
Y1=int(strftime("%Y"))
m1=int(strftime("%m"))
d1=int(strftime("%d"))
H1=int(strftime("%H"))
M1=int(strftime("%M"))
S1=int(strftime("%S"))
Y2=int(strftime("%Y"))
m2=int(strftime("%m"))
d2=int(strftime("%d"))
H2=int(strftime("%H"))
M2=int(strftime("%M"))
S2=int(strftime("%S"))
print "Difference is:"+str(Y2-Y1)+":"+str(m2-m1)+":"+str(d2-d1)\
+" "+str(H2-H1)+":"+str(M2-M1)+":"+str(S2-S1)
But when I tried to get the difference, I get syntax errors.... I am doing a few things wrong, but I'm not sure what is going on...
Basically, I just want to store a time in a variable at the start of my program, then store a 2nd time in a second variable near the end, then at the last bit of the program, compute the difference and display it. I am not trying to time a function speed. I am trying to log how long it took for a user to progress through some menus. What is the best way to do this?
The datetime module will do all the work for you:
>>> import datetime
>>> a = datetime.datetime.now()
>>> # ...wait a while...
>>> b = datetime.datetime.now()
>>> print(b-a)
0:03:43.984000
If you don't want to display the microseconds, just use (as gnibbler suggested):
>>> a = datetime.datetime.now().replace(microsecond=0)
>>> b = datetime.datetime.now().replace(microsecond=0)
>>> print(b-a)
0:03:43
from time import time
start_time = time()
...
end_time = time()
seconds_elapsed = end_time - start_time
hours, rest = divmod(seconds_elapsed, 3600)
minutes, seconds = divmod(rest, 60)
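If you then want to display it, a quick formatting sketch:

# e.g. prints 1:02:03 for 3723 elapsed seconds
print("%d:%02d:%02d" % (hours, minutes, seconds))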
You cannot calculate the differences separately ... what difference would that yield for 7:59 and 8:00 o'clock? Try
import time
time.time()
which gives you the seconds since the start of the epoch.
You can then get the intermediate time with something like
timestamp1 = time.time()
# Your code here
timestamp2 = time.time()
print "This took %.2f seconds" % (timestamp2 - timestamp1)
Both time.monotonic() and time.monotonic_ns() are correct. Correct as in monotonic.
>>> import time
>>>
>>> time.monotonic()
452782.067158593
>>>
>>> t0 = time.monotonic()
>>> time.sleep(1)
>>> t1 = time.monotonic()
>>> print(t1 - t0)
1.001658110995777
Regardless of language, monotonic time is the right answer, and real time is the wrong answer. The difference is that monotonic time is supposed to give a consistent answer when measuring durations, while real time isn't, as real time may be adjusted – indeed needs to be adjusted – to keep up with reality. Monotonic time is usually the computer's uptime.
As such, time.time() and datetime.now() are wrong ways to do this.
Python also has time.perf_counter() and time.perf_counter_ns(), which are specified to have the highest available resolution, but aren't guaranteed to be monotonic. On PC hardware, though, both typically have nanosecond resolution.
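A quick sketch of the nanosecond variants (Python 3.7+):

import time

t0 = time.monotonic_ns()
time.sleep(0.001)
t1 = time.monotonic_ns()
print((t1 - t0) / 1e6, 'ms')  # integer nanoseconds in, milliseconds out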
Here is a piece of code to do so:
def StringChallenge(str1):
    #str1 = str1[1:-1]
    h1 = 0
    h2 = 0
    m1 = 0
    m2 = 0

    # minutes from h1:m1 to h2:m2, assuming h2:m2 is not earlier
    def time_dif(h1, m1, h2, m2):
        if h1 == h2:
            return m2 - m1
        else:
            return (h2 - h1 - 1) * 60 + (60 - m1) + m2

    count_min = 0

    # parse the first time (one- or two-digit hour)
    if str1[1] == ':':
        h1 = int(str1[:1])
        m1 = int(str1[2:4])
    else:
        h1 = int(str1[:2])
        m1 = int(str1[3:5])

    # parse the second time
    if str1[-7] == '-':
        h2 = int(str1[-6])
        m2 = int(str1[-4:-2])
    else:
        h2 = int(str1[-7:-5])
        m2 = int(str1[-4:-2])

    # treat 12 as 0 so the am/pm flags carry the half-day information
    if h1 == 12:
        h1 = 0
    if h2 == 12:
        h2 = 0

    if "am" in str1[:8]:
        flag1 = 0
    else:
        flag1 = 1
    if "am" in str1[7:]:
        flag2 = 0
    else:
        flag2 = 1

    if flag1 == flag2:
        if h2 > h1 or (h2 == h1 and m2 >= m1):
            count_min += time_dif(h1, m1, h2, m2)
        else:
            count_min += 1440 - time_dif(h2, m2, h1, m1)
    else:
        count_min += (12 - h1 - 1) * 60
        count_min += (60 - m1)
        count_min += (h2 * 60) + m2

    return count_min
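Hypothetical usage, assuming the input is formatted like "9:00am-10:00am" (the question doesn't show the expected format):

print(StringChallenge("9:00am-10:00am"))   # 60 (same half-day)
print(StringChallenge("12:30pm-12:00am"))  # 690 (crosses pm -> am)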
I'm just trying to time a piece of code. The pseudocode looks like:
start = get_ticks()
do_long_code()
print "It took " + (get_ticks() - start) + " seconds."
How does this look in Python?
More specifically, how do I get the number of ticks since midnight (or however Python organizes that timing)?
In the time module, there are two timing functions: time and clock. time gives you "wall" time, if this is what you care about.
However, the Python docs say that clock should be used for benchmarking. Note that clock behaves differently on different systems:
on MS Windows, it uses the Win32 function QueryPerformanceCounter(), with "resolution typically better than a microsecond". It has no special meaning, it's just a number (it starts counting the first time you call clock in your process).
# ms windows
t0= time.clock()
do_something()
t= time.clock() - t0 # t is wall seconds elapsed (floating point)
on *nix, clock reports CPU time. Now, this is different, and most probably the value you want, since your program hardly ever is the only process requesting CPU time (even if you have no other processes, the kernel uses CPU time now and then). So, this number, which typically is smaller¹ than the wall time (i.e. time.time() - t0), is more meaningful when benchmarking code:
# linux
t0= time.clock()
do_something()
t= time.clock() - t0 # t is CPU seconds elapsed (floating point)
Apart from all that, the timeit module has the Timer class that is supposed to use what's best for benchmarking from the available functionality.
¹ unless threading gets in the way…
² Python ≥3.3: there are time.perf_counter() and time.process_time(). perf_counter is being used by the timeit module.
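For example (Python ≥3.3; do_something is the same placeholder as in the snippets above):

import time

t0 = time.perf_counter()   # wall-clock style, highest resolution
c0 = time.process_time()   # CPU time used by this process
do_something()
print(time.perf_counter() - t0, 'wall seconds')
print(time.process_time() - c0, 'CPU seconds')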
What you need is the time() function from the time module:
import time
start = time.time()
do_long_code()
print "it took", time.time() - start, "seconds."
You can use the timeit module for more options, though.
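For example, a sketch with timeit (the from __main__ import line is an assumption about where do_long_code lives):

import timeit

# run do_long_code() three times and report the fastest run,
# which is usually the least noisy number
print(min(timeit.repeat('do_long_code()',
                        setup='from __main__ import do_long_code',
                        repeat=3, number=1)))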
Here's a solution that I started using recently:
class Timer:
    def __enter__(self):
        self.begin = now()
    def __exit__(self, type, value, traceback):
        print(format_delta(self.begin, now()))
You use it like this (You need at least Python 2.5):
with Timer():
    do_long_code()
When your code finishes, Timer automatically prints out the run time. Sweet! If I'm trying to quickly bench something in the Python Interpreter, this is the easiest way to go.
And here's a sample implementation of 'now' and 'format_delta', though feel free to use your preferred timing and formatting method.
import datetime

def now():
    return datetime.datetime.now()

# Prints one of the following formats*:
# 1.58 days
# 2.98 hours
# 9.28 minutes  # Not actually added yet, oops.
# 5.60 seconds
# 790 milliseconds
# *Except I prefer abbreviated formats, so I print d, h, m, s, or ms.
def format_delta(start, end):
    # Time in microseconds
    one_day = 86400000000
    one_hour = 3600000000
    one_second = 1000000
    one_millisecond = 1000

    delta = end - start
    build_time_us = delta.microseconds + delta.seconds * one_second + delta.days * one_day

    days = 0
    while build_time_us > one_day:
        build_time_us -= one_day
        days += 1
    if days > 0:
        time_str = "%.2fd" % (days + build_time_us / float(one_day))
    else:
        hours = 0
        while build_time_us > one_hour:
            build_time_us -= one_hour
            hours += 1
        if hours > 0:
            time_str = "%.2fh" % (hours + build_time_us / float(one_hour))
        else:
            seconds = 0
            while build_time_us > one_second:
                build_time_us -= one_second
                seconds += 1
            if seconds > 0:
                time_str = "%.2fs" % (seconds + build_time_us / float(one_second))
            else:
                ms = 0
                while build_time_us > one_millisecond:
                    build_time_us -= one_millisecond
                    ms += 1
                time_str = "%.2fms" % (ms + build_time_us / float(one_millisecond))
    return time_str
Please let me know if you have a preferred formatting method, or if there's an easier way to do all of this!
The time module in Python gives you access to the clock() function, which returns time in seconds as a floating point.
Different systems will have different accuracy based on their internal clock setup (ticks per second), but it's generally at least under 20 milliseconds, and in some cases better than a few microseconds.
-Adam
import datetime
start = datetime.datetime.now()
do_long_code()
finish = datetime.datetime.now()
delta = finish - start
print delta.seconds
From midnight:
import datetime
midnight = datetime.datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
now = datetime.datetime.now()
delta = now - midnight
print delta.seconds
If you have many statements you want to time, you could use something like this:
from time import clock

class Ticker:
    def __init__(self):
        self.t = clock()
    def __call__(self):
        dt = clock() - self.t
        self.t = clock()
        return 1000 * dt
Then your code could look like:
tick = Ticker()

# first command
print('first took {}ms'.format(tick()))

# second group of commands
print('second took {}ms'.format(tick()))

# third group of commands
print('third took {}ms'.format(tick()))
That way you don't need to type t = time() before each block and 1000 * (time() - t) after it, while still keeping control over formatting (though you could easily put that in Ticker too).
It's a minimal gain, but I think it's kind of convenient.