I'm trying to create a microsecond timer in Python. The goal is to have a "tick" every microsecond.
My current approach was:
import time

us = list()
while len(us) <= 1000:
    t = time.time() * 1000000
    if t.is_integer():
        us.append(t)
It shows that there are clear timing limitations that I am not aware of.
The first 656 values were 1532960518213592.0, i.e. the while loop executes within the "same" microsecond.
Then the value jumps to 1532960518217613.0, so the effective resolution seems to be about 4021 µs.
How can I overcome those limitations?
EDIT: About these measurements.
Chrome with a YouTube video was running in the background, plus Outlook, Teams, Adobe and some other stuff.
The CPU is an i5-5200U @ 2.20 GHz (2 cores).
The problem is that the current time is functionality provided by your operating system, so it will behave differently on different systems, both in terms of the precision of the clock and in terms of how often it is updated. Also keep in mind that your program can be paused and resumed in its execution by your operating system's scheduler.
Here is a simplified version of your code:
[time.time() * 10**6 for i in range(1000)]
On my local computer (Windows Ubuntu Subsystem), this produces the following (notice the readings are roughly one microsecond apart, but with gaps):
[1532961190053186.0,
1532961190053189.0,
1532961190053190.0,
1532961190053191.0,
1532961190053192.0,
1532961190053193.0,
1532961190053194.0,
1532961190053195.0,
1532961190053196.0,
1532961190053198.0, ...]
On a server (Ubuntu), this produces the following (notice the same time occurring multiple times):
[1532961559708196.0,
1532961559708198.0,
1532961559708199.0,
1532961559708199.0,
1532961559708199.0,
1532961559708200.0,
1532961559708200.0,
1532961559708200.0,
1532961559708200.0,
1532961559708201.0, ...]
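If the goal is simply to time short intervals at the finest resolution the interpreter exposes, one option is to look at time.perf_counter_ns() instead of time.time(). A minimal sketch, assuming Python 3.7+ (the actual step sizes depend entirely on your OS and hardware):

import time

# Collect consecutive readings and look at how big the steps between them are.
samples = [time.perf_counter_ns() for _ in range(1000)]
steps = [b - a for a, b in zip(samples, samples[1:]) if b != a]
print("smallest observed step:", min(steps), "ns")

This won't give you a true "tick" every microsecond, but it shows the best timer resolution your platform offers.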
I'm trying to write a Python-based metronome with librosa and sounddevice, but I've come across some problems with its accuracy. Here's the code:
from time import sleep, perf_counter
import librosa
import sounddevice as sd

bpm = 200
delay = 60 / bpm
tone = librosa.tone(440, sr=22050, length=1000)

try:
    while True:
        sd.play(tone, 22050)
        sleep(delay)
except KeyboardInterrupt:
    pass
First of all, the upper limit for a properly functioning metronome seems to be around 180 bpm - if you set the bpm any higher than 200 bpm, no sound is produced at all. In slower tempos I can hear that the metronome is not as consistent as it should be in the spacing between clicks. I've run the script from this topic and my results were pretty poor compared to the author of that answer (who was using an "old single core 32 bit 2GHz machine", whereas I'm running a six-core 3.9 GHz 64-bit Windows machine):
150.0 bpm
+0.007575200
+0.006221200
-0.012907700
+0.001935400
+0.002982700
+0.006840000
-0.009625400
+0.003260200
+0.005553100
+0.000668100
-0.010895100
+0.017142500
-0.012933300
+0.001465200
+0.004203100
+0.004769100
-0.012183100
+0.002174500
+0.002301000
-0.001611100
So I wonder whether my metronome problems are somehow correlated with these poor results, and what I can do to fix them.
The second problem I encounter is the way the metronome is switched off - I want it to keep running until the user presses a specific button, or in my case (no GUI) a specific key - let's say the space key. As you can see, right now it only stops with Ctrl+C, and I have no idea how to implement an interrupt with a specified key.
Running your code on a Mac, the timing inconsistencies are noticeable, but the tempo was also quite a bit off from the set bpm.
This is mostly because sleep() isn't that accurate, but also because you have to account for the time that has elapsed since the last event, e.g. how long it took to call sd.play().
I don't know what operating system you ran this on, but most operating systems have a special timer for precise callbacks (e.g. Multimedia Timers on Windows). If you don't want a platform-specific solution to improve the timing, you could do a "busy wait" instead of sleep(). To do this you could sleep for half the delay, and then go into a loop where you constantly check the elapsed time.
lastTime = perf_counter()
while True:
    # Coarse wait: sleep for half the interval so we don't burn CPU the whole time.
    sleep(delay / 2.0)
    # Fine wait: busy-loop until the full delay has elapsed since the last beat.
    while True:
        currentTime = perf_counter()
        if currentTime - lastTime >= delay:
            sd.play(tone, 22050)
            lastTime = currentTime
            break
Not a perfect solution but it'll get you closer.
You can further tune the fraction of the delay that is spent sleeping to take load off the CPU.
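For example, a minimal sketch of that idea, using a hypothetical wait_until() helper and a tunable sleep_fraction (both names are illustrative, not part of sounddevice or the code above):

from time import perf_counter, sleep

def wait_until(target, sleep_fraction=0.8):
    # Coarse, cheap wait for most of the remaining interval...
    remaining = target - perf_counter()
    if remaining > 0:
        sleep(remaining * sleep_fraction)
    # ...then a fine, CPU-hungry wait for the rest.
    while perf_counter() < target:
        pass

In the loop above you would compute target = lastTime + delay and call wait_until(target) instead of the fixed sleep(delay / 2.0).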
I need to measure the time certain parts of my code take. While executing my code on a powerful server, I get 10 different results.
I tried comparing time measured with time.time(), time.perf_counter(), time.perf_counter_ns(), time.process_time() and time.process_time_ns().
import time

for _ in range(10):
    start = time.perf_counter()
    i = 0
    while i < 100000:
        i = i + 1
    time.sleep(1)
    end = time.perf_counter()
    print(end - start)
I'm expecting that, when executing the same code 10 times, the results will agree to within about 1 ms, e.g. 1.041xx every time, not a spread from 1.030 s to 1.046 s.
When executing my code on a 16-CPU, 32 GB memory server I'm receiving this result:
1.045549364
1.030857833
1.0466020120000001
1.0309665050000003
1.0464690349999994
1.046397238
1.0309525370000001
1.0312070380000007
1.0307592159999999
1.046095523
I'm expecting the result to be:
1.041549364
1.041857833
1.0416020120000001
1.0419665050000003
1.0414690349999994
1.041397238
1.0419525370000001
1.0412070380000007
1.0417592159999999
1.041095523
Your expectations are wrong. If you want to measure the average time your code takes, use the timeit module. It executes your code multiple times and averages over the runs.
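For example, a minimal sketch (the statement here is just a placeholder for your own code):

import timeit

# Run the snippet 1000 times and report the average time per run.
total = timeit.timeit("sum(range(100000))", number=1000)
print(total / 1000, "seconds per call on average")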
The reason your code has different runtimes lies in your code:
time.sleep(1) # ensures (3.5+) _at least_ 1000ms are waited, won't be less, might be more
You are calling it in a tight loop, resulting in accumulated differences:
Quote from time.sleep(..) documentation:
Suspend execution of the calling thread for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time. The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal’s catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
Changed in version 3.5: The function now sleeps at least secs even if the sleep is interrupted by a signal, except if the signal handler raises an exception (see PEP 475 for the rationale).
Emphasis mine.
Running the same code does not take the same time at each loop iteration because of the system's scheduling (the system puts your process on hold to run another process, then comes back to it...).
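If that scheduling noise is what you want to exclude, time.process_time() counts only your own process's CPU time; a minimal sketch (note that time spent in time.sleep() does not count towards it):

import time

start = time.process_time()
i = 0
while i < 100000:
    i = i + 1
elapsed_cpu = time.process_time() - start  # CPU time of this process only
print(elapsed_cpu)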
I have been working on Project Euler problems and decided to add timing, so I added timeit to time main() via the following snippet (storing the global RESULT in main() for convenience):
t = timeit.timeit(main, 'gc.enable()', number=1)
print("# Euler", PROBLEM, ".py RESULT: ", RESULT))
Works fine. But some run so fast that I thought I could do this:
t = timeit.timeit(main, 'gc.enable()', number=1)
if t < 0.001:
    t2 = timeit.timeit(main, 'gc.enable()', number=1000)
And it works sometimes. However, if I run this repeatedly I sometimes get negative values for t2. Using Euler #2 for example, I get these results from running it 5 times in a row.
# Euler2.py RESULT: 4613732 3.17869758716347e-05 seconds
repeats timing -3.7966224778973e-05 sec per call
# Euler2.py RESULT: 4613732 3.1558464885145785e-05 seconds
repeats timing 2.4836235955056177e-05 sec per call
# Euler2.py RESULT: 4613732 3.131149340411519e-05 seconds
repeats timing -3.5684903805855466e-05 sec per call
# Euler2.py RESULT: 4613732 3.177450256451194e-05 seconds
repeats timing 2.4558941864410162e-05 sec per call
# Euler2.py RESULT: 4613732 3.158939868681022e-05 seconds
repeats timing 2.4268726425449536e-05 sec per call
Now if I change the repeat count to 100,000 or more, I don't see negative t2 values at all, and the time per call is consistently around 2.4e-5 sec.
If I repeat 10,000 times I see new behavior: t2 is consistently positive, but the values bounce around a lot. For 10 runs I get
repeats timing 2.4194581894745244e-05 sec per call
repeats timing 1.8200670315524775e-05 sec per call
repeats timing 2.4408832248987168e-05 sec per call
repeats timing 2.4378118077314547e-05 sec per call
repeats timing 1.8361976570139902e-05 sec per call
repeats timing 1.8055080028738498e-05 sec per call
repeats timing 1.8102133534236732e-05 sec per call
repeats timing 2.4485323058654477e-05 sec per call
repeats timing 3.118363087991698e-05 sec per call
repeats timing 1.803846408685413e-05 sec per call
Finally, I set the repeat count to 1000 and removed the initial (repeat=1) timeit. Same type of result: some negatives and a lot of bounce.
I repeated this set using Python 2.7 with similar results -- everything else was run on version 3.4.
To me, this looks like a bug in the timeit feature when the total time is on the same order as the system timer interrupt, but I thought perhaps I was missing something.
ADDED
I should also add that I know about other timer functions including perf_counter().
I am asking specifically about timeit() because I thought it was an easy-to-use high-resolution timer that I could use in both new and old versions of Python, and I would like to continue to do so if it can be trusted.
ADDED
Based on the answer provided, I changed my timing code to be based on perf_counter() and, sure enough, I sometimes got negative values too. So, small timing increments are simply not reliable on an old Windows box. That's what I wanted to know. What made me think it was in the Python stack was that for the very small timings the values seemed accurate. I should have guessed it was the combination of Windows and device drivers.
On Windows, timeit by default uses time.clock as its time source, which in turn uses the Windows API QueryPerformanceCounter. Depending on the version of Windows and the capabilities of the machine QueryPerformanceCounter uses the processor's time stamp counter (TSC) as a timer. A number of older multiprocessor machines weren't able to keep the TSC in sync between processors and didn't report this correctly to Windows, or had bugs trying to do so. This results in QueryPerformanceCounter returning results that appear to jump around as the process gets executed on different processors.
Microsoft has a long detailed description of the problem on MSDN: http://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx
AMD released a fix for this problem on Windows XP systems called the AMD Dual Core Optimizer.
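If you want to check which clock timeit actually uses on your interpreter, or experiment with a different one, a minimal sketch (on Python 3.3+ the default is time.perf_counter; the timed statement is just a placeholder):

import time
import timeit

print(timeit.default_timer)  # the clock timeit will use by default

# You can also pass a timer explicitly to compare clocks.
t = timeit.Timer("sum(range(1000))", timer=time.process_time)
print(t.timeit(number=10000))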
Is there a simple way / module to correctly measure elapsed time in Python? I know that I can simply call time.time() twice and take the difference, but that will yield wrong results if the system time is changed. Granted, that doesn't happen very often, but it does indicate that I'm measuring the wrong thing.
Using time.time() to measure durations is incredibly roundabout when you think about it. You take the difference of two absolute time measurements which are in turn constructed from duration measurements (performed by timers) and known absolute times (set manually or via ntp), that you aren't interested in at all.
So, is there a way to query this "timer time" directly? I'd imagine that it can be represented as a millisecond or microsecond value that has no meaningful absolute representation (and thus doesn't need to be adjusted with system time). Looking around a bit it seems that this is exactly what System.nanoTime() does in Java, but I did not find a corresponding Python function, even though it should (hardware-technically) be easier to provide than time.time().
Edit: To avoid confusion and address the answers below: This is not about DST changes, and I don't want CPU time either - I want elapsed physical time. It doesn't need to be very fine-grained, and not even particularly accurate. It just shouldn't give me negative durations, or durations which are off by several orders of magnitude (above the granularity), just because someone decided to set the system clock to a different value. Here's what the Python docs say about 'time.time()':
"While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls"
This is exactly what I want to avoid, since it can lead to strange things like negative values in time calculations. I can work around this at the moment, but I believe it is a good idea to learn using the proper solutions where feasible, since the kludges will come back to bite you one day.
Edit2: Some research shows that you can get a system time independent measurement like I want in Windows by using GetTickCount64(), under Linux you can get it in the return value of times(). However, I still can't find a module which provides this functionality in Python.
For measuring elapsed CPU time, look at time.clock(). This is the equivalent of Linux's times() user time field.
For benchmarking, use timeit.
The datetime module, which is part of Python 2.3+, also has microsecond time if supported by the platform.
Example:
>>> import datetime as dt
>>> n1=dt.datetime.now()
>>> n2=dt.datetime.now()
>>> (n2-n1).microseconds
678521
>>> (n2.microsecond-n1.microsecond)/1e6
0.678521
i.e., it took me 0.678521 seconds to type the second n2= line -- slow
>>> n1.resolution
datetime.timedelta(0, 0, 1)
1/1e6 resolution is claimed.
If you are concerned about system time changes (e.g. from DST to standard time), just check the object returned by datetime. Presumably, the system time could also have a small adjustment from an NTP reference correction. This should be slewed, with corrections applied gradually, but NTP sync steps can have an effect on very small (millisecond or microsecond) time references.
You can also reference Alex Martelli's C function if you want something of that resolution. I wouldn't go too far reinventing the wheel, though. Accurate time is a basic facility and most modern OSes do a pretty good job.
Edit
Based on your clarifications, it sounds like you need a simple side check if the system's clock has changed. Just compare to a friendly, local ntp server:
import socket
import struct
import time

ntp = "pool.ntp.org"  # or whatever ntp server you have handy
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
data = '\x1b' + 47 * '\0'
client.sendto(data, (ntp, 123))
data, address = client.recvfrom(1024)
if data:
    print 'Response received from:', address
    t = struct.unpack('!12I', data)[10]
    t -= 2208988800L  # seconds since Epoch
    print '\tTime=%s' % time.ctime(t)
NTP is accurate to milliseconds over the Internet and has a representation resolution of 2^-32 seconds (about 233 picoseconds). Should be good enough?
Be aware that the NTP 64 bit data structure will overflow in 2036 and every 136 years thereafter -- if you really want a robust solution, better check for overflow...
What you seem to be looking for is a monotonic timer. A monotonic time reference does not jump or go backwards.
There have been several attempts to implement a cross-platform monotonic clock for Python based on the OS's underlying reference (Windows, POSIX and BSD are quite different). See the discussions and some of the attempts at monotonic time in this SO post.
Mostly, you can just use os.times():
os.times()
Return a 5-tuple of floating point numbers indicating
accumulated (processor or other) times, in seconds. The items are:
user time, system time, children’s user time, children’s system time,
and elapsed real time since a fixed point in the past, in that order.
See the Unix manual page times(2) or the corresponding Windows
Platform API documentation. On Windows, only the first two items are
filled, the others are zero.
Availability: Unix, Windows
But that does not fill in the needed elapsed real time (the fifth tuple item) on Windows.
If you need Windows support, consider ctypes and you can call GetTickCount64() directly, as has been done in this recipe.
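For illustration, a minimal ctypes sketch of that approach (Windows only; it assumes the standard kernel32 export rather than reproducing the linked recipe):

import ctypes

kernel32 = ctypes.windll.kernel32
kernel32.GetTickCount64.restype = ctypes.c_uint64

uptime_ms = kernel32.GetTickCount64()  # milliseconds since boot, unaffected by clock changes
print(uptime_ms / 1000.0, "seconds since boot")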
Python 3.3 added a monotonic timer into the standard library, which does exactly what I was looking for. Thanks to Paddy3118 for pointing this out in "How do I get monotonic time durations in python?".
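A minimal usage sketch of that timer:

import time

start = time.monotonic()
time.sleep(0.5)                      # stand-in for the code being timed
elapsed = time.monotonic() - start   # unaffected by system clock changes
print(elapsed)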
>>> import datetime
>>> t1=datetime.datetime.utcnow()
>>> t2=datetime.datetime.utcnow()
>>> t2-t1
datetime.timedelta(0, 8, 600000)
Using UTC avoids those embarrassing periods when the clock shifts due to daylight saving time.
As for using an alternate method rather than subtracting two clocks, be aware that the OS does actually contain a clock which is initialized from a hardware clock in the PC. Modern OS implementations will also keep that clock synchronized with some official source so that it doesn't drift. This is much more accurate than any interval timer the PC might be running.
You can use the perf_counter function of the time module in the Python standard library:
from datetime import timedelta
from time import perf_counter
startTime = perf_counter()
CallYourFunc()
finishedTime = perf_counter()
duration = timedelta(seconds=(finishedTime - startTime))
The example functions you state in your edit are two completely different things:
Linux's times() returns process CPU times. Python's equivalent is time.clock() or os.times().
Windows GetTickCount64() returns system uptime.
Although two different functions, both (potentially) could be used to reveal a system clock that had a "burp" with these methods:
First:
You could take both a system time with time.time() and a CPU time with time.clock(). Since wall clock time will ALWAYS be greater than or equal to CPU time, discard any measurements where the interval between the two time.time() readings is less than the paired time.clock() check readings.
Example:
t1 = time.time()
t1check = time.clock()
# your timed event...
t2 = time.time()
t2check = time.clock()

if t2 - t1 < t2check - t1check:
    print "Things are rotten in Denmark"
    # discard that sample
else:
    pass  # do what you do with t2 - t1...
Second:
Getting system uptime is also promising if you are concerned about the system's clock, since a user reset does not reset the uptime tick count in most cases. (that I am aware of...)
Now the harder question: getting system uptime in a platform independent way -- especially without spawning a new shell -- at the sub second accuracy. Hmmm...
Probably the best bet is psutil. Browsing the source, they use uptime = GetTickCount() / 1000.00f; for Windows and sysctl "kern.boottime" for BSD / OS X, etc. Unfortunately, these are all 1 second resolution.
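For reference, the value psutil exposes is boot_time(); a minimal sketch (assumes psutil is installed, and inherits that roughly one-second resolution):

import time
import psutil

# Seconds since the system booted, derived from psutil's boot timestamp.
uptime_s = time.time() - psutil.boot_time()
print("up for about", uptime_s, "seconds")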
from datetime import datetime
start = datetime.now()
print 'hello'
end = datetime.now()
delta = end-start
print type(delta)
<type 'datetime.timedelta'>
import datetime
help(datetime.timedelta)
...elapsed seconds and microseconds...
After using time.monotonic from Python 3.3+ I found that on Mac it works well, never going backwards. On Windows, where it uses GetTickCount64(), it can very rarely go backwards by a substantial amount (for the purposes of my program, in excess of 5.0). Adding a wrapper can prevent monotonic from going backwards:
import threading
import time

_lock = threading.Lock()
_last_value = 0.0
_offset = 0.0

def monotonic_never_backwards():
    global _last_value, _offset
    with _lock:
        original_value = time.monotonic()
        if original_value < _last_value:
            # You can post a metric here to monitor frequency and size of backward jumps
            _offset += _last_value - original_value  # accumulate so returned values never decrease
        _last_value = original_value
        return _offset + original_value
How often did it go backwards? Perhaps a handful of times over many days across millions of machines and again, only on Windows. Sorry, I did not track which versions of Windows. I can update this later if people wish.
I ran the following script on different machines and got quite different results. The elapsed time.clock() is surprisingly large.
Script:
#------------------------------------------------------------------------------------
import time
start_clock = time.clock()
time.sleep(60)
end_clock = time.clock()
print "Sleep Clock = ", str(end_clock - start_clock)
start_time = time.time()
time.sleep(60)
end_time = time.time()
print "Sleep Time = ", str(end_time - start_time)
#-------------------------------------------------------------------------------------
Output:
Instance (Windows Server 2008, X64):
Sleep Clock = 938.306471633
Sleep Time = 60.0119998455
Local Machine (Windows Vista, X86):
Sleep Clock = 59.9997987873
Sleep Time = 59.996999979
The following result really confused me:
Sleep Clock = 938.306471633
P.S.:
I have not tested other x64 OSes. This Windows Server 2008 machine is a running Amazon instance.
Per the docs on time.clock:
On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter().
so my (blind, i.e., I've never seen Amazon's code for Windows virtualization!-) guess would be that Amazon's virtualization doesn't go quite deep enough to trick QueryPerformanceCounter (which is a very low-level, low-overhead function). Tricking time.time (in a virtualizing hypervisor) is easier (and a more common need).
Do you know what happens e.g. on Microsoft's Azure, and with other non-Microsoft virtualizers such as Parallels or VMWare? I wouldn't be surprised to see different "depth" to the amount of "trickery" (deep virtualization) performed in each case. (I don't doubt that the explanation for this observation must have to do with virtualization, although the specific guess I make above could be flawed).
It would also be interesting to try (again, on various different virtualizers) a tiny C program doing just QueryPerformanceCounter, just to confirm that Python's runtime has nothing to do with the case (I believe so, by inspection of the runtime's source, but a confirmation could not hurt -- unfortunately I don't have access to the resources needed to try it myself).
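Short of writing C, you can call the same API directly from Python with ctypes (Windows only; it still goes through the Python runtime, but it bypasses the time module entirely):

import ctypes

kernel32 = ctypes.windll.kernel32
count = ctypes.c_int64()
freq = ctypes.c_int64()

kernel32.QueryPerformanceFrequency(ctypes.byref(freq))  # counter ticks per second
kernel32.QueryPerformanceCounter(ctypes.byref(count))   # current counter value
print(count.value / freq.value, "seconds since the counter's reference point")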