High-precision system time in Python

Is there any way of obtaining high-precision system time in python?
I have a little application that works with a virtual COM port. I want to measure the time interval between sending a message and receiving it.
At the moment it works like this:
I obtain the message, call time.time(), and append its 20 digits to the message. The client application receives this message, calls time.time() again, and calculates the difference. In most cases the time interval (as I expected) equals zero.
The question is: is there a more intelligent and more precise way of doing this?

Here is an excerpt from the time.clock documentation:
On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.
(emphasis mine)
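Note that time.clock() was deprecated in Python 3.3 and removed in 3.8; time.perf_counter() is the modern high-resolution timer. A minimal sketch of the timestamp-in-message idea, assuming both sender and receiver run on the same machine (perf_counter()'s reference point is undefined, but the clock is system-wide) and an illustrative 8-byte binary header:
import struct
import time

def timestamp_message(payload):
    # Prepend an 8-byte big-endian double holding the send time.
    return struct.pack("!d", time.perf_counter()) + payload

def receive_interval(message):
    # Recover the send time; the difference to "now" is the transit delay.
    (sent,) = struct.unpack_from("!d", message)
    return time.perf_counter() - sent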

Related

Time variable in script became negative during execution

I have a Python program running on my Raspberry Pi 3B doing a bunch of image processing and so on. I wanted to gather some data measurements from the program by writing them into a .csv file, along with the corresponding time for each measurement. I used time.clock() (see the code snippet below) to find the time before each write operation, but somewhere between 2147 seconds and 2148 seconds the time becomes negative (see the table below). I suspect some kind of overflow occurred, but I'm having trouble understanding in which manner it overflowed. The Raspberry Pi is a 32-bit system, and as I understand it, the time.clock() method returns a float. Shouldn't the time variable have overflowed only at much larger values, or is the time variable in this case not 32 bits?
Since then, I've read various threads indicating that time.time() might have been a better method for this use case, and I might do that in future tests, but I just want to see what I can do with the values I've gathered thus far. I believe I can do some processing on the logged time to "de-overflow" it, for lack of a better word, and use it as is. Any thoughts?
import time
import csv

def somefunction(someX, someY, csvwriter):
    t = time.clock()
    x = somefunc(someX)
    y = somefunc(someY)
    csvwriter.writerow([t, x, y])
    return
Time (s)        X value    Y value
2146.978524     -0.0019    0.00032
2147.30423      -0.00191   0.00023
-2147.336675    -0.00182   0.00034
-2147.000555    -0.00164   0.00037
I doubt this is a 32-bit issue. The third bullet point near the beginning of the Python 3.7 documentation of the time module says:
The functions in this module may not handle dates and times before the epoch or far in the future. The cut-off point in the future is determined by the C library; for 32-bit systems, it is typically in 2038.
That said, I don't really know what the problem is. Perhaps using the time.perf_counter() or time.process_time() functions instead would avoid the issue.
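As for "de-overflowing" the values already logged: the wrap in your table is consistent with a signed 32-bit tick counter (a 32-bit clock_t ticking at CLOCKS_PER_SEC = 1,000,000 wraps at 2^31/10^6 ≈ 2147.48 s). Assuming that is what happened, a sketch of unwrapping the logged series:
WRAP_PERIOD = 2**32 / 1_000_000  # one full 32-bit period, ~4294.967296 s

def unwrap_times(times):
    # Add one full period each time the raw value jumps backwards.
    unwrapped, offset, previous = [], 0.0, None
    for t in times:
        if previous is not None and t < previous:
            offset += WRAP_PERIOD
        previous = t
        unwrapped.append(t + offset)
    return unwrapped

print(unwrap_times([2146.978524, 2147.30423, -2147.336675, -2147.000555]))
# [2146.978524, 2147.30423, 2147.630621..., 2147.966741...]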

Why do time.process_time() and time.perf_counter() return such different values? [duplicate]

I have some questions about the new functions time.perf_counter() and time.process_time().
For the former, from the documentation:
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
Is this 'highest available resolution' the same on all systems, or does it depend on whether we use, for example, Linux or Windows?
The question comes from the fact that the documentation of time.time() says that 'not all systems provide time with a better precision than 1 second', so how can these functions now provide a better, higher resolution?
About the latter, time.process_time():
Return the value (in fractional seconds) of the sum of the system and user CPU time of the current process. It does not include time elapsed during sleep. It is process-wide by definition. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
I don't understand: what are 'system time' and 'user CPU time'? What's the difference?
There are two distinct types of 'time' in this context: absolute time and relative time.
Absolute time is 'real-world time', which is returned by time.time() and which we are all used to dealing with. It is usually measured from a fixed point in the past (e.g. the UNIX epoch of 00:00:00 UTC on 01/01/1970) at a resolution of at least 1 second; modern systems usually provide milli- or micro-second resolution. It is maintained by dedicated hardware on most computers; the RTC (real-time clock) circuit is normally battery powered, so the system keeps track of real time between power-ups. This 'real-world time' is also subject to modifications based on your location (time zones) and season (daylight saving), or expressed as an offset from UTC (also known as GMT or Zulu time).
Secondly, there is relative time, which is returned by time.perf_counter() and time.process_time(). This type of time has no defined relationship to real-world time, in the sense that the relationship is system and implementation specific. It can be used only to measure time intervals, i.e. a value proportional to the time elapsed between two instants. This is mainly used to evaluate relative performance (e.g. whether this version of code runs faster than that version of code).
On modern systems, it is measured using a CPU counter that is incremented monotonically at a frequency related to the CPU's hardware clock. The counter resolution is highly dependent on the system's hardware; in most cases the value cannot be reliably related to real-world time or even compared between systems. Furthermore, the counter value is reset every time the CPU is powered up or reset.
time.perf_counter() returns the absolute value of the counter. time.process_time() is a value derived from the CPU counter but updated only when a given process is running on the CPU. It can be broken down into 'user time', which is the time when the process itself is running on the CPU, and 'system time', which is the time when the operating system kernel is running on the CPU on behalf of the process.
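A small experiment makes the distinction concrete: perf_counter() keeps ticking while the process sleeps, process_time() does not (the printed values are indicative, not exact):
import time

start_wall = time.perf_counter()   # system-wide, includes sleep
start_cpu = time.process_time()    # per-process CPU time, excludes sleep

time.sleep(1)                      # idle: consumes no CPU time
sum(i * i for i in range(10**6))   # busy work: consumes CPU time

print("perf_counter:", time.perf_counter() - start_wall)   # roughly 1.1 s
print("process_time:", time.process_time() - start_cpu)    # roughly 0.1 s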

Unexpected time.sleep() behaviour

Recently, when creating a loop with a very short wait at the end, I ran into an unexpected behaviour of time.sleep() when used in quick succession.
I used this piece of code to look further into my problem
import time
import statistics

def average_wait(func):
    waits = []
    loops = 0
    while loops < 1000:
        start = time.time()
        func(1/1000)
        waits.append(time.time() - start)
        loops += 1
    print(waits)
    print("Average wait for 0.001: {}".format(statistics.mean(waits)))

average_wait(time.sleep)
This function usually returns something around 0.0013, which is many times less accurate than a single call to time.sleep(). Upon further inspection of this problem by looking at the waits list, I found that the amount of time time.sleep() was actually sleeping for was either almost exactly the right amount of time or almost exactly double it.
Here is a sample from waits:
[0.0010008811950683594, 0.0020041465759277344, 0.0009999275207519531, 0.0019621849060058594, 0.0010418891906738281]
Is there any reason for this behaviour and anything that can be done to avoid it?
From the time.time() documentation:
Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second.
The precision is platform dependent. Moreover, it measures wall-clock time, and your process is never the only thing running on a modern OS; other processes are also given time to run, and you'll see variation in your own process's timings because of that.
The module offers other clocks with more precision, some of them per-process. See the time.get_clock_info() function to find out what precision they offer. Note that time.process_time() offers per-process time but excludes sleep time.
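For example, you can inspect what each clock offers on your platform (the exact resolution values printed will differ per system):
import time

for name in ("time", "monotonic", "perf_counter", "process_time"):
    info = time.get_clock_info(name)
    print("{:>12}: resolution={}, monotonic={}".format(
        name, info.resolution, info.monotonic))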
Next, time.sleep() is also not going to sleep in exact time spans; again from the relevant documentation:
[T]he suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
It too is subject to OS scheduling.
Together, these effects can easily add up to the millisecond variation in timings you see in your experiments. So this is not a doubling of time slept; even if you used different values for time.sleep() you'd still see a similar deviation from the requested time.
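To reduce the measurement side of the error (the scheduling jitter remains), time.perf_counter() is the better instrument; a minimal variant of the experiment above:
import time
import statistics

def average_wait(func, interval=0.001, loops=1000):
    waits = []
    for _ in range(loops):
        start = time.perf_counter()   # high-resolution, monotonic
        func(interval)
        waits.append(time.perf_counter() - start)
    print("Average wait for {}: {}".format(interval, statistics.mean(waits)))

average_wait(time.sleep)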

time.time(): get the precision [duplicate]

Consider a very simple timer:
import time

start = time.time()
end = time.time() - start
while end < 5:
    end = time.time() - start
print(end)
How precise is this timer? I mean, compared to a real-time clock, how synchronized and real-time is this one?
Now for the real question:
What is the smallest scale of time that can be measured precisely with Python?
This is entirely platform dependent. Use the timeit.default_timer() function; it returns the most precise timer for your platform.
From the documentation:
Define a default timer, in a platform-specific manner. On Windows, time.clock() has microsecond granularity, but time.time()'s granularity is 1/60th of a second. On Unix, time.clock() has 1/100th of a second granularity, and time.time() is much more precise.
So, on Windows, you get microseconds, on Unix, you'll get whatever precision the platform can provide, which is usually (much) better than 1/100th of a second.
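(On Python 3.3+, timeit.default_timer is time.perf_counter on all platforms, so the same call gives you the best available clock everywhere.) For instance:
import timeit

start = timeit.default_timer()
total = sum(i * i for i in range(10**5))   # some work to measure
elapsed = timeit.default_timer() - start
print("elapsed: {:.6f} s".format(elapsed))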
This entirely depends on the system you are running it on; there is no guarantee Python has any way of tracking time at all.
That said, it's pretty safe to assume you are going to get millisecond accuracy on modern systems. Beyond that, it really is highly dependent on the system. To quote the docs:
Although this module is always available, not all functions are available on all platforms. Most of the functions defined in this module call platform C library functions with the same name. It may sometimes be helpful to consult the platform documentation, because the semantics of these functions varies among platforms.
And:
The precision of the various real-time functions may be less than suggested by the units in which their value or argument is expressed. E.g. on most Unix systems, the clock “ticks” only 50 or 100 times a second.

Storing and replaying binary network data with python

I have a Python application which sends 556 bytes of data across the network at a rate of 50 Hz. The binary data is generated using struct.pack(), which returns a string that is subsequently written to a UDP socket.
As well as transmitting this data, I would like to save this data to file as space-efficiently as possible, including a timestamp for each message, so that I can replay the data at a later time. What would be the best way of doing this using Python?
I have mulled over using a logging object, but have not yet found out whether Python can read in log files so that I can replay the data. Also, I don't know whether the logging object can handle binary data.
Any tips would be much appreciated! Although Wireshark would be an option, I'd rather store the data using my application so that I can automatically start new data files each time I run the program.
Python's logging system is intended to process human-readable strings, and it's intended to be easy to enable or disable depending on whether it's you (the developer) or someone else running your program. Don't use it for something that your application always needs to output.
The simplest way to store the data is to just write the same 556-byte string that you send over the socket out to a file. If you want to have timestamps, you could precede each 556-byte message with the time of sending, converted to an integer, and packed into 4 or 8 bytes using struct.pack(). The exact method would depend on your specific requirements, e.g. how precise you need the time to be, and whether you need absolute time or just relative to some reference point.
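A minimal sketch of that record format, assuming the fixed 556-byte message size from the question and an 8-byte absolute timestamp per message (all names are illustrative):
import struct
import time

MSG_SIZE = 556             # fixed message size from the question
TS = struct.Struct("!d")   # 8-byte big-endian double for time.time()

def log_message(logfile, payload):
    # Write the send time followed by the raw message bytes.
    logfile.write(TS.pack(time.time()) + payload)

def replay(logfile):
    record_size = TS.size + MSG_SIZE
    while True:
        record = logfile.read(record_size)
        if len(record) < record_size:
            break
        (timestamp,) = TS.unpack_from(record)
        yield timestamp, record[TS.size:]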
One possibility for a compact timestamp for replay purposes: take the time as a floating point number of seconds since the epoch with time.time(), multiply by 50 since you said you're repeating this 50 times a second (the resulting unit, one fiftieth of a second, is sometimes called "a jiffy"), truncate to int, subtract the similar int count of jiffies since the epoch that you measured at the start of your program, and struct.pack the result into an unsigned int with the number of bytes you need to represent the intended duration. For example, with 2 bytes for this timestamp you could represent runs of about 1300 seconds (some 20 minutes); if you plan longer runs you'd need 4 bytes (3 bytes is just too unwieldy IMHO;-).
Not all operating systems have time.time() returning decent precision, so you may need more devious means if you need to run on such unfortunately limited OSs. (That's VERY OS-dependent, of course.) What OSs do you need to support...?
Anyway...: for even more compactness, use a slightly higher multiplier than 50 (say 10000) for more accuracy, and store, each time, the difference with respect to the previous timestamp. Since that difference should not be much different from a jiffy (if I understand your spec correctly), it should be about 200 of these "ten-thousandths of a second", and you can store a single unsigned byte (and have no limit on the duration of runs you're storing for future replay). This depends even more on accurate returns from time.time(), of course.
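A sketch of that delta encoding under the assumptions above (10,000 ticks per second and roughly 50 messages per second, so deltas around 200 fit in one unsigned byte; a real format would need an escape value for larger gaps):
import struct
import time

TICKS_PER_SEC = 10000   # the higher multiplier suggested above

class DeltaStamper:
    def __init__(self):
        self.last = int(time.time() * TICKS_PER_SEC)

    def stamp(self):
        # One unsigned byte holding the ticks elapsed since the previous message.
        now = int(time.time() * TICKS_PER_SEC)
        delta = min(now - self.last, 255)   # clamp; assumes time.time() never steps backwards
        self.last = now
        return struct.pack("!B", delta)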
If your 556-byte binary data is highly compressible, it will be worth your while to use gzip to store the stream of timestamp-then-data in compressed form; this is best assessed empirically on your actual data, though.
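gzip integrates cleanly here because gzip.open() returns a file-like object, so the same timestamp-then-data records can be written to and read from a compressed file (the file name is illustrative):
import gzip
import struct
import time

# Write one timestamped 556-byte record to a compressed log.
with gzip.open("capture.bin.gz", "wb") as logfile:
    logfile.write(struct.pack("!d", time.time()) + b"\x00" * 556)

# Read it back for replay.
with gzip.open("capture.bin.gz", "rb") as logfile:
    record = logfile.read(8 + 556)
    (timestamp,) = struct.unpack_from("!d", record)
    payload = record[8:]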
