Issues with time.clock()/time() in Python

I wrote a system to exchange CRC-checked struct data between an Arduino Nano and my Python script. This works pretty well, but when I let the system run I get unexpected output on my Python monitor (using PyCharm):
print "Took ", (time.time() - timeout), " s" sometimes prints out Took 0.0 s.
Usually it prints Took 0.0160000324249 s.
I'm using Windows 7 Professional, 64-bit.
From the time module docs: Return the time in seconds since the epoch as a floating point number. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second.
I'm looking for something like Arduino's millis(); that's enough precision for my case.
Python code:
import serial
import time
import binascii
import struct
from ctypes import *

arduino = serial.Serial()

def receive_struct2():
    start = 0x85
    detected_start = False
    arduino.baudrate = 57600
    arduino.timeout = 0
    arduino.port = 'COM8'
    try:
        arduino.open()
    except serial.SerialException, e:
        print e
    while True:
        if arduino.inWaiting() >= 1 and detected_start == False:
            data = ord(arduino.read())
            if data == start:
                print "Detected begin"
                detected_start = True
            else:
                print chr(data),
        if arduino.inWaiting() >= 1 and detected_start == True:
            message_length = ord(arduino.read())
            # print "Got message length ", message_length
            timeout = time.time()
            while time.time() - timeout <= 0.3 and arduino.inWaiting() < message_length - 1:
                pass
            print "Took ", (time.time() - timeout), " s"
            ....

Summary: use timeit.default_timer() instead of time.time() to measure a short duration.
A 16 ms error for time.time() is not surprising on Windows.
Currently, Python uses GetSystemTimeAsFileTime() to implement time.time() on Windows, which has a resolution (precision) of 0.1 ms (instead of ftime()'s 1 ms), while its accuracy is between 0.5 ms and 15 ms (you could change it system-wide using NtSetTimerResolution()). See the Python bug "Use GetSystemTimeAsFileTime() to get a resolution of 100 ns on Windows" and another SO question: How does python's time.time() method work?
A better alternative for measuring short time intervals on Windows is time.clock(), which is implemented using QueryPerformanceCounter(). For portability, you could use timeit.default_timer, which is assigned to time.clock() on Windows and time.time() on other systems, and which is time.perf_counter() since Python 3.3. See Python - time.clock() vs. time.time() - accuracy?
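For example, a minimal portable sketch (the loop is just a stand-in for the code being measured):
import timeit

t0 = timeit.default_timer()
for _ in range(1000000):  # stand-in for the code being measured
    pass
elapsed = timeit.default_timer() - t0
print("Took %.6f s" % elapsed)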

Related

Time a script with sub-second resolution

I have this script, but it counts in whole seconds while the script finishes in less than a second.
import time
start = time.time()
p=[1,2,3,4,5]
print('It took {0:0.1f} seconds'.format(time.time() - start))
Python 3.7 has a new function that can do that. I have 3.6.5. How do I do it?
time.perf_counter(), available since Python 3.3, gives you access to a high-resolution clock for measuring short durations:
t0 = time.perf_counter()
time.sleep(.1)
print(time.perf_counter() - t0)
It doesn't count in whole seconds. It counts in fractions of a second; it's just that the script finishes faster than the precision allowed by the {0:0.1f} format specifier, i.e. in much less than a second, so the result rounds to 0.0.
Try:
import time
start = time.time()
p=[1,2,3,4,5]
time.sleep(0.5)
print('It took {0:0.1f} seconds'.format(time.time() - start))
Also, for shorter sleeps you may want to increase the precision of the float format specifier (e.g. {0:0.3f}), so that a short sleep (e.g. 0.007 s) doesn't print as 0.0:
import time
start = time.time()
p=[1,2,3,4,5]
time.sleep(0.007)
print('It took {0:0.3f} seconds'.format(time.time() - start))
Or just remove the formatter entirely (as commented by Inder):
import time
start = time.time()
p=[1,2,3,4,5]
time.sleep(0.007)
print ('It took ' + str(time.time()-start) + ' seconds')
See the time module documentation for more details on timer resolution: https://docs.python.org/2/library/time.html

Python - Accurate time.sleep

I am working on a project in which an accurate timer is crucial. I am working in Python and am using the time.sleep() function.
I noticed that time.sleep() adds extra delay because of OS scheduling (see the time.sleep docs). Due to that, the longer my program runs, the more inaccurate the timing becomes.
Is there a more accurate timer/ticker for sleeping the program, or another solution to this problem?
Any help would be appreciated. Cheers.
I had a solution similar to the one above, but it became processor-heavy very quickly. Here is the processor-heavy idea and a workaround:
import time

def processor_heavy_sleep(ms):
    # Fine for milliseconds; starts to work the computer hard in the second range.
    start = time.clock()
    end = start + ms / 1000.
    while time.clock() < end:
        continue
    return start, time.clock()

def efficient_sleep(secs, expected_inaccuracy=0.5):
    # For longer times: sleep most of the interval, then spin to the deadline.
    start = time.clock()
    end = secs + start
    time.sleep(secs - expected_inaccuracy)
    while time.clock() < end:
        continue
    return start, time.clock()
The output of efficient_sleep(5, 0.5), run 3 times, was:
(3.1999303695151594e-07, 5.0000003199930365)
(5.00005983869791, 10.00005983869791)
(10.000092477987678, 15.000092477987678)
This is on Windows. I'm running it for 100 loops right now; here are the results:
(485.003749358414, 490.003749358414)
(490.0037919174879, 495.0037922374809)
(495.00382903668014, 500.00382903668014)
The sleeps remain accurate, but the calls are always delayed a little. If you need a scheduler that accurately calls every xxx secs to the millisecond, that would be a different thing.
the longer my program runs, the more inaccurate the timer is.
So, for example, expecting a 0.5 s delay, it would be time.sleep(0.5 - (start - end)). But that still didn't solve the issue.
You seem to be complaining about two effects: 1) the fact that time.sleep() may take longer than you expect, and 2) the inherent creep from using a series of time.sleep() calls.
You can't do anything about the first, short of switching to a real-time OS. The underlying OS calls are defined to sleep for at least as long as requested. They only guarantee that you won't wake early; they make no guarantee that you won't wake up late.
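You can observe the oversleep directly (a quick sketch; the exact overshoot depends on the OS scheduler and timer resolution):
import time

t0 = time.perf_counter()  # time.clock() would be the Python 2 equivalent
time.sleep(0.001)         # ask for 1 ms
print(time.perf_counter() - t0)  # typically prints noticeably more than 0.001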
As for the second, you ought to figure your sleep time according to an unchanging epoch, not from your wake-up time. For example:
import time
import random

target = time.time()

def myticker():
    # Sleep for 0.5 s between tasks, with no creep.
    global target  # needed because we rebind the module-level deadline
    target += 0.5
    now = time.time()
    if target > now:
        time.sleep(target - now)

def main():
    previous = time.time()
    for _ in range(100):
        now = time.time()
        print(now - previous)
        previous = now
        # simulate some work
        time.sleep(random.random() / 10)  # Always < tick frequency
        # time.sleep(random.random())     # Not always < tick frequency
        myticker()

if __name__ == "__main__":
    main()
Working on Linux with zero knowledge of Windows, I may be naive here, but is there some reason that writing your own sleep function won't work for you?
Something like:
import time

def sleep_time():
    start_time = time.time()
    while (time.time() - start_time) < 0.0001:
        continue

end_time = time.time() + 60  # run for a minute
cnt = 0
while time.time() < end_time:
    cnt += 1
    print('sleeping', cnt)
    sleep_time()
print('Awake')
print("Slept ", cnt, " Times")

Accurate sleep/delay within Python while loop

I have a while True loop which sends variables to an external function, and then uses the returned values. This send/receive process has a user-configurable frequency, which is saved and read from an external .ini configuration file.
I've tried time.sleep(1 / Frequency), but I'm not satisfied with the accuracy, given the number of threads in use elsewhere. E.g. a frequency of 60 Hz (a period of 0.0166667 s) gives an 'actual' time.sleep() period of ~0.0311 s.
My preference would be to use an additional while loop, which compares the current time to the start time plus the period, as follows:
EndTime = time.time() + (1 / Frequency)
while time.time() - EndTime < 0:
    time.sleep(0)
This would fit into the end of my while True function as follows:
while True:
    A = random.randint(0, 5)
    B = random.randint(0, 10)
    C = random.randint(0, 20)
    Values = ExternalFunction.main(Variable_A=A, Variable_B=B, Variable_C=C)
    Return_A = Values['A_Out']
    Return_B = Values['B_Out']
    Return_C = Values['C_Out']
    # Update other functions with Return_A, Return_B and Return_C
    EndTime = time.time() + (1 / Frequency)
    while time.time() - EndTime < 0:
        time.sleep(0)
I'm missing something, as the addition of the while loop causes the function to execute once only. How can I get the above to function correctly? Is this the best approach to 'accurate' frequency control on a non-real time operating system? Should I be using threading for this particular component? I'm testing this function on both Windows 7 (64-bit) and Ubuntu (64-bit).
If I understood your question correctly, you want to execute ExternalFunction.main at a given frequency. The problem is that the execution of ExternalFunction.main itself takes some time. If you don't need very fine precision -- it seems that you don't -- my suggestion is doing something like this.
import time

frequency = 1  # Hz
period = 1.0 / frequency

while True:
    time_before = time.time()
    [...]
    ExternalFunction.main([...])
    [...]
    while (time.time() - time_before) < period:
        time.sleep(0.001)  # precision here
You may tune the precision to your needs. Greater precision (smaller number) will make the inner while loop execute more often.
This achieves decent results when not using threads. However, when using Python threads, the GIL (Global Interpreter Lock) ensures that only one thread runs at a time. If you have a huge number of threads, it may take too long for the program to get back to your main thread. Increasing the frequency with which Python switches between threads may give you more accurate delays.
Add this to the beginning of your code to increase the thread switching frequency.
import sys
sys.setcheckinterval(1)
Here 1 is the number of bytecode instructions executed on each thread before switching (the default is 100); a larger number improves performance but increases the thread-switching latency.
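Note that sys.setcheckinterval() applies to Python 2; on Python 3.2 and later it is deprecated and ignored, and the equivalent knob is sys.setswitchinterval(), which takes seconds rather than an instruction count (the 0.0005 below is only an illustration, not a recommendation):
import sys
sys.setswitchinterval(0.0005)  # thread switch interval in seconds (default is 0.005)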
You may want to try python-pause
Pause until a unix time, with millisecond precision:
import pause
pause.until(1370640569.7747359)
Pause using datetime:
import pause, datetime
dt = datetime.datetime(2013, 6, 2, 14, 36, 34, 383752)
pause.until(dt)
You may use it like this:
import pause
from datetime import datetime, timedelta

freqHz = 60.0
td = timedelta(seconds=1 / freqHz)
dt = datetime.now()
while True:
    # Your code here
    dt += td
    pause.until(dt)
Another solution for an accurate delay is to use the perf_counter() function from the time module. It is especially useful on Windows, where time.sleep is not accurate at the millisecond level. In the example below, the function accurate_delay creates a delay in milliseconds.
import time

def accurate_delay(delay):
    '''Provide an accurate time delay in milliseconds.'''
    target = time.perf_counter() + delay / 1000
    while time.perf_counter() < target:
        pass

delay = 10
t_start = time.perf_counter()
print('Wait for {:.0f} ms. Start: {:.5f}'.format(delay, t_start))
accurate_delay(delay)
t_end = time.perf_counter()
print('End time: {:.5f}. Delay is {:.5f} ms'.format(t_end, 1000 * (t_end - t_start)))

total = 0
ntests = 1000
for i in range(ntests):
    t_start = time.perf_counter()
    accurate_delay(delay)
    t_end = time.perf_counter()
    print('Test completed: {:.2f}%'.format(i / ntests * 100), end='\r', flush=True)
    total = total + 1000 * (t_end - t_start) - delay
print('Average difference in time delay is {:.5f} ms.'.format(total / ntests))
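A common refinement, sketched here rather than taken from the answer above, is to sleep away most of the interval and busy-wait only the last couple of milliseconds; this keeps the spin loop's precision without burning CPU for the whole delay (the 2 ms spin margin and the hybrid_delay name are assumptions you would tune):
import time

def hybrid_delay(delay_ms, spin_ms=2):
    # Sleep coarsely for most of the interval, then spin to the deadline.
    deadline = time.perf_counter() + delay_ms / 1000.0
    coarse = (delay_ms - spin_ms) / 1000.0
    if coarse > 0:
        time.sleep(coarse)  # low-CPU portion; may overshoot slightly
    while time.perf_counter() < deadline:
        pass                # high-precision busy-wait for the remainder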

Implement sub-millisecond processing in Python without busy-waiting

How would I implement processing of an array with millisecond precision using Python under Linux (running on a single-core Raspberry Pi)?
I am parsing information from a MIDI file, which has been preprocessed into an array; each millisecond I check whether the array has entries at the current timestamp and trigger some functions if it does.
Currently I am using time.time() and busy-waiting (as concluded here). This eats up all the CPU, so I am looking for a better solution.
# iterate through all milliseconds
for current_ms in xrange(0, last + 1):
    start = time()
    # check if events are to be processed
    try:
        events = allEvents[current_ms]
        # iterate over all events for this millisecond
        for event in events:
            # check if event contains note information
            if 'note' in event:
                # check if mapping to pin exists
                if event['note'] in mapping:
                    pin = mapping[event['note']]
                    # check if event contains on/off information
                    if 'mode' in event:
                        if event['mode'] == 0:
                            pin_off(pin)
                        elif event['mode'] == 1:
                            pin_on(pin)
                        else:
                            debug("unknown mode in event:" + event)
                else:
                    debug("no mapping for note:" + event['note'])
    except:
        pass
    end = time()
    # fill the rest of the millisecond
    while (end - start) < (1.0 / 1000.0):
        end = time()
where last is the millisecond of the last event (known from preprocessing).
This is not a question about time() vs clock(); it is about sleep vs busy-wait.
I can't really sleep in the "fill the rest of the millisecond" loop because of the low accuracy of sleep(). If I were to use ctypes, how would I go about it correctly?
Is there some timer library which would reliably call a callback each millisecond?
My current implementation is on GitHub. With this approach I get a skew of around 4 or 5 ms on the drum_sample, which is 3.7 s long in total (with a mock, so no real hardware attached). On a 30.7 s sample, the skew is around 32 ms (so it's at least not linear!).
I have tried using time.sleep() and nanosleep() via ctypes with the following code:
import time
import timeit
import ctypes

libc = ctypes.CDLL('libc.so.6')

class Timespec(ctypes.Structure):
    """ timespec struct for nanosleep, see:
        http://linux.die.net/man/2/nanosleep """
    _fields_ = [('tv_sec', ctypes.c_long),
                ('tv_nsec', ctypes.c_long)]

libc.nanosleep.argtypes = [ctypes.POINTER(Timespec),
                           ctypes.POINTER(Timespec)]
nanosleep_req = Timespec()
nanosleep_rem = Timespec()

def nsleep(us):
    # print('nsleep: {0:.9f}'.format(us))
    """ Delay microseconds with libc nanosleep() using ctypes. """
    if us >= 1000000:
        sec = us / 1000000
        us %= 1000000
    else:
        sec = 0
    nanosleep_req.tv_sec = sec
    nanosleep_req.tv_nsec = int(us * 1000)
    libc.nanosleep(nanosleep_req, nanosleep_rem)

LOOPS = 10000

def do_sleep(min_sleep):
    # print('try: {0:.9f}'.format(min_sleep))
    total = 0.0
    for i in xrange(0, LOOPS):
        start = timeit.default_timer()
        nsleep(min_sleep * 1000 * 1000)
        # time.sleep(min_sleep)
        end = timeit.default_timer()
        total += end - start
    return total / LOOPS

iterations = 5
iteration = 1
min_sleep = 0.001
result = None
while True:
    result = do_sleep(min_sleep)
    # print('res: {0:.9f}'.format(result))
    if result > 1.5 * min_sleep:
        if iteration > iterations:
            break
        else:
            min_sleep = result
            iteration += 1
    else:
        min_sleep /= 2.0
print('FIN: {0:.9f}'.format(result))
The result on my i5 is
FIN: 0.000165443
while on the RPi it is
FIN: 0.000578617
which suggests a minimum sleep period of about 0.1 or 0.5 milliseconds respectively, with the given jitter (it tends to sleep longer); that at most helps me reduce the load a little bit.
One possible solution, using the sched module:
import sched
import time

def f(t0):
    print 'Time elapsed since t0:', time.time() - t0

s = sched.scheduler(time.time, time.sleep)
t0 = time.time()
# schedule 10 calls at the absolute times t0+10, t0+11, ..., t0+19
for i in range(10):
    s.enterabs(t0 + 10 + i, 0, f, (t0,))
s.run()
Result:
Time elapsed since t0: 10.0058200359
Time elapsed since t0: 11.0022959709
Time elapsed since t0: 12.0017120838
Time elapsed since t0: 13.0022599697
Time elapsed since t0: 14.0022521019
Time elapsed since t0: 15.0015859604
Time elapsed since t0: 16.0023040771
Time elapsed since t0: 17.0023028851
Time elapsed since t0: 18.0023078918
Time elapsed since t0: 19.002286911
Apart from a constant offset of about 2 milliseconds (which you could calibrate away), the jitter seems to be on the order of 1 or 2 milliseconds (as reported by time.time itself). Not sure if that is good enough for your application.
If you need to do some useful work in the meantime, you should look into multi-threading or multi-processing.
Note: a standard Linux distribution running on an RPi is not a hard real-time operating system. Also, Python can show non-deterministic timing, e.g. when it starts a garbage collection. So your code might run with low jitter most of the time, but you might have occasional 'hiccups' where there is a bit of extra delay.
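If garbage-collection pauses turn out to be the source of those hiccups, one mitigation (a sketch, not a complete solution) is to disable automatic collection around the timing-critical section and collect manually at a safe point:
import gc

gc.disable()      # no automatic collections during the critical section
try:
    pass          # timing-critical work goes here
finally:
    gc.collect()  # reclaim garbage at a moment you choose
    gc.enable()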

What is the best/most efficient way to output value every x seconds during a loop

I have always been curious about this, as the simple way is definitely not efficient. How would you efficiently output a value every x seconds?
Here is an example of what I mean:
import time

num = 50000000

startTime = time.time()
j = 0
for i in range(num):
    j = (((j + 10)**0.5)**2)**0.5
print time.time() - startTime
# output time: 24 seconds

startTime = time.time()
newTime = time.time()
j = 0
for i in range(num):
    j = (((j + 10)**0.5)**2)**0.5
    if time.time() - newTime > 0.5:
        newTime = time.time()
        print i
print time.time() - startTime
# output time: 32 seconds
A whole third faster when not outputting the progress every half a second.
I know this is because it requires an extra calculation on every iteration, but the same applies to other similar checks you may want to do: how would you implement something like this without seriously affecting the execution time?
Well, you know that you're doing many iterations per second, so you really don't need to make the time.time() call on every iteration. You can use a modulo operator to only actually check if you need to output something every N iterations of the loop.
startTime = time.time()
newTime = time.time()
j = 0
for i in range(num):
    j = (((j + 10)**0.5)**2)**0.5
    if i % 50 == 0:  # Only check every 50th iteration
        if time.time() - newTime > 0.5:
            newTime = time.time()
            print i, newTime
print time.time() - startTime
# 45 seconds (the original version took 42 on my system)
Checking only every 50 iterations reduces my run time from 56 seconds to 43 (the original with no printing took 42, and Tom Page's solution took 50 seconds), and the iterations complete quickly enough that it is still outputting exactly every 0.5 seconds according to time.time():
0 1409083225.39
605000 1409083225.89
1201450 1409083226.39
1821150 1409083226.89
2439250 1409083227.39
3054400 1409083227.89
3644100 1409083228.39
4254350 1409083228.89
4831600 1409083229.39
5433450 1409083229.89
6034850 1409083230.39
6644400 1409083230.89
7252650 1409083231.39
7840100 1409083231.89
8438300 1409083232.39
9061200 1409083232.89
9667350 1409083233.39
...
You might save a few clock cycles by keeping track of the next time that a print is due:
nexttime = time.time() + 0.5
Then your condition will be a simple comparison:
if time.time() >= nexttime:
as opposed to a subtraction followed by a comparison:
if time.time() - newTime > 0.5:
You'll only have to do an addition after each message, as opposed to a subtraction after each iteration.
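Assembled into a complete loop (a sketch reusing the question's workload), that looks like:
import time

num = 50000000
nexttime = time.time() + 0.5
j = 0
for i in range(num):
    j = (((j + 10)**0.5)**2)**0.5
    if time.time() >= nexttime:  # simple comparison
        print(i)
        nexttime += 0.5          # one addition per message, not per iteration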
I tried it with a sideband thread doing the printing. It added 5 seconds to the execution time on Python 2.x but virtually no extra time on Python 3.x. Python 2.x threads have a lot of overhead. Here's my example with timing included as comments:
import time
import threading

def showit(event):
    global i  # could pass in a mutable object instead
    while not event.is_set():
        event.wait(.5)
        print 'value is', i

num = 50000000

startTime = time.time()
j = 0
for i in range(num):
    j = (((j + 10)**0.5)**2)**0.5
print time.time() - startTime
# output time: 23 seconds

event = threading.Event()
showit_thread = threading.Thread(target=showit, args=(event,))
showit_thread.start()

startTime = time.time()
j = 0
for i in range(num):
    j = (((j + 10)**0.5)**2)**0.5
event.set()
time.sleep(.1)
print time.time() - startTime
# output time: 28 seconds
If you want to wait a specified period of time before doing something, just use the time.sleep() method.
import time

for i in range(100):
    print(i)
    time.sleep(0.5)
This will wait half a second before printing the next value of i.
If you don't care about Windows, signal.setitimer will be simpler than using a background thread, and on many *nix platforms a whole lot more efficient.
Here's an example:
import signal
import time

num = 50000000
startTime = time.time()

def ontimer(sig, frame):
    global i
    print(i)

signal.signal(signal.SIGVTALRM, ontimer)
signal.setitimer(signal.ITIMER_VIRTUAL, 0.5, 0.5)
j = 0
for i in range(num):
    j = (((j + 10)**0.5)**2)**0.5
signal.setitimer(signal.ITIMER_VIRTUAL, 0)
print(time.time() - startTime)
This is about as close to free as you're going to get performance-wise.
In some use cases, a virtual timer isn't sufficiently accurate, so you need to change that to ITIMER_REAL and change the signal to SIGALRM. That's a little more expensive, but still pretty cheap, and still dead simple.
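The real-timer variant just swaps the timer and the signal; the rest of the example above stays the same:
signal.signal(signal.SIGALRM, ontimer)          # SIGALRM instead of SIGVTALRM
signal.setitimer(signal.ITIMER_REAL, 0.5, 0.5)  # wall-clock timer
# ... unchanged loop ...
signal.setitimer(signal.ITIMER_REAL, 0)         # disarm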
On some (older) *nix platforms, alarm may be more efficient than setitimer, but unfortunately alarm only takes integral seconds, so you can't use it to fire twice per second.
Timings from my MacBook Pro:
no output: 15.02s
SIGVTALRM: 15.03s
SIGALRM: 15.44s
thread: 19.9s
checking time.time(): 22.3s
(I didn't test with either dano's optimization or Tom Page's; obviously those will reduce the 22.3, but they're not going to get it down to 15.44…)
Part of the problem here is that you're using time.time.
On my MacBook Pro, time.time takes more than 1/3rd as long as all of the work you're doing:
In [2]: %timeit time.time()
10000000 loops, best of 3: 105 ns per loop
In [3]: %timeit (((j+10)**0.5)**2)**0.5
1000000 loops, best of 3: 268 ns per loop
And that 105ns is fast for time—e.g., an older Windows box with no better hardware timer than ACPI can take 100x longer.
On top of that, time.time is not guaranteed to have enough precision to do what you want anyway:
Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second.
Even on platforms where it has better precision than 1 second, it may have a lower accuracy; e.g., it may only be updated once per scheduler tick.
And time isn't even guaranteed to be monotonic; on some platforms, if the system time changes, time may go down.
Calling it less often will solve the first problem, but not the others.
So, what can you do?
Unfortunately, there's no built-in answer, at least not with Python 2.7. The best solution is different on different platforms—probably GetTickCount64 on Windows, clock_gettime with the appropriate clock ID on most modern *nixes, gettimeofday on most other *nixes. These are relatively easy to use via ctypes if you don't want to distribute a C extension… but someone really should wrap it all up in a module and post it on PyPI, and unfortunately I couldn't find one…
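For what it's worth, the clock_gettime route via ctypes might look like this on Linux (a sketch; the CLOCK_MONOTONIC value and the librt library name are Linux/glibc assumptions):
import ctypes

class timespec(ctypes.Structure):
    _fields_ = [('tv_sec', ctypes.c_long),
                ('tv_nsec', ctypes.c_long)]

CLOCK_MONOTONIC = 1  # value from <linux/time.h>; platform-specific assumption
librt = ctypes.CDLL('librt.so.1', use_errno=True)
librt.clock_gettime.argtypes = [ctypes.c_int, ctypes.POINTER(timespec)]

def monotonic_time():
    # Return a monotonic timestamp in seconds as a float.
    t = timespec()
    if librt.clock_gettime(CLOCK_MONOTONIC, ctypes.byref(t)) != 0:
        raise OSError(ctypes.get_errno(), 'clock_gettime failed')
    return t.tv_sec + t.tv_nsec * 1e-9

print('monotonic: {0:.9f}'.format(monotonic_time()))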
