How do I get the current time in milliseconds in Python?

How do I get the current time in milliseconds in Python?

Using time.time():
import time
def current_milli_time():
    return round(time.time() * 1000)
Then:
>>> current_milli_time()
1378761833768

For Python 3.7+, time.time_ns() gives the time elapsed in nanoseconds since the epoch.
This gives the time in milliseconds as an integer:
import time
ms = time.time_ns() // 1_000_000

time.time() may only give resolution to the second on some platforms; the preferred approach for milliseconds in that case is datetime.
from datetime import datetime
dt = datetime.now()
dt.microsecond
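Note that dt.microsecond is only the microsecond component of the current time. If what you actually want is milliseconds since the epoch, a minimal sketch with the same module (my addition, not part of the original answer) would be:
from datetime import datetime

ms_since_epoch = int(datetime.now().timestamp() * 1000)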

import datetime

def TimestampMillisec64():
    return int((datetime.datetime.utcnow() - datetime.datetime(1970, 1, 1)).total_seconds() * 1000)

Just sample code:
import time
timestamp = int(time.time()*1000.0)
Output:
1534343781311

In Python 3.7 and later, the best answer is to use time.perf_counter_ns(). As stated in the docs:
time.perf_counter() -> float
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
time.perf_counter_ns() -> int
Similar to perf_counter(), but return time as nanoseconds
As it says, this is going to use the best counter your system has to offer, and it is specifically designed for use in measuring performance (and therefore tries to avoid the common pitfalls of other timers).
It also gives you a nice integer number of nanoseconds, so just divide by 1000000 to get your milliseconds:
import time

start = time.perf_counter_ns()
# do some work
duration = time.perf_counter_ns() - start
print(f"Your duration was {duration // 1000000}ms.")

Another solution is a function you can embed into your own utils.py:
import time as time_  # make sure we don't override time
def millis():
    return int(round(time_.time() * 1000))

If you want a simple method in your code that returns the milliseconds with datetime:
from datetime import datetime
from datetime import timedelta

start_time = datetime.now()

# returns the elapsed milliseconds since the start of the program
def millis():
    dt = datetime.now() - start_time
    ms = (dt.days * 24 * 60 * 60 + dt.seconds) * 1000 + dt.microseconds / 1000.0
    return ms

If you're concerned with measuring elapsed time, you should use the monotonic clock (Python 3). This clock is not affected by system clock updates, as you would see if an NTP query adjusted your system time, for example.
>>> import time
>>> millis = round(time.monotonic() * 1000)
It provides a reference time in seconds that can be used to compare later to measure elapsed time.
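For example, a small elapsed-time sketch (my addition) using the monotonic clock:
import time

start = time.monotonic()
# ... do some work ...
elapsed_ms = round((time.monotonic() - start) * 1000)
print(elapsed_ms, "ms elapsed")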

The simplest way I've found to get the current UTC time in milliseconds is:
# timeutil.py
import datetime

def get_epochtime_ms():
    return round(datetime.datetime.utcnow().timestamp() * 1000)
# sample.py
import timeutil
timeutil.get_epochtime_ms()

If you use my code (below), the time will appear in seconds with the fractional part after the decimal point. I think that there is a difference between Windows and Unix - please comment if there is.
from time import time
x = time()
print(x)
my result (on Windows) was:
1576095264.2682993
EDIT: There is no difference:) Thanks tc0nn

UPDATED: thanks to @neuralmer.
One of the most efficient ways:
(time.time_ns() + 500000) // 1000000 #rounding last digit (1ms digit)
or
time.time_ns() // 1000000 #flooring last digit (1ms digit)
Both are very efficient among other methods.
BENCHMARK:
You can see some benchmark results of different methods on my own machine below:
import time

t = time.perf_counter_ns()
for i in range(1000):
    o = time.time_ns() // 1000000             # each 200 ns
t2 = time.perf_counter_ns()
print((t2 - t)//1000)

t = time.perf_counter_ns()
for i in range(1000):
    o = (time.time_ns() + 500000) // 1000000  # each 227 ns
t2 = time.perf_counter_ns()
print((t2 - t)//1000)

t = time.perf_counter_ns()
for i in range(1000):
    o = round(time.time_ns() / 1000000)       # each 456 ns
t2 = time.perf_counter_ns()
print((t2 - t)//1000)

t = time.perf_counter_ns()
for i in range(1000):
    o = int(time.time_ns() / 1000000)         # each 467 ns
t2 = time.perf_counter_ns()
print((t2 - t)//1000)

t = time.perf_counter_ns()
for i in range(1000):
    o = int(time.time() * 1000)               # each 319 ns
t2 = time.perf_counter_ns()
print((t2 - t)//1000)

t = time.perf_counter_ns()
for i in range(1000):
    o = round(time.time() * 1000)             # each 342 ns
t2 = time.perf_counter_ns()
print((t2 - t)//1000)

After some testing in Python 3.8+ I noticed that these options give the exact same result, at least on Windows 10.
import time
# Option 1
unix_time_ms_1 = int(time.time_ns() / 1000000)
# Option 2
unix_time_ms_2 = int(time.time() * 1000)
Feel free to use the one you like better; I do not see any need for a more complicated solution than this.

Multiplying by 1000 to get milliseconds may be fine for satisfying a prerequisite or filling a gap in a database column that never really uses it, but for real situations that require precise timing it will ultimately fail. I wouldn't suggest this method for mission-critical operations that require actions or processing at specific timings.
For example: round-trip pings are 30-80 ms in the USA, and you can't just round that up and use it effectively.
My own case requires a task every second, which means that if I simply waited after the first task responded, the processing time would be added on every main-loop cycle. That drift ended up costing a whole function call every 60 seconds, roughly 1,440 a day. Not very accurate.
Just a thought for people who need more precise timing than filling a database gap that never really uses it.
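To make the drift concrete, here is a minimal sketch (my own addition, not from the answer): a once-per-second loop that anchors every deadline to the original start time using a monotonic clock, so per-cycle processing time does not accumulate.
import time

PERIOD = 1.0  # run the task once per second
start = time.monotonic()
for tick in range(1, 61):                    # e.g. one minute of once-per-second tasks
    # ... do the per-second task here ...
    next_deadline = start + tick * PERIOD    # anchored to start, so error does not accumulate
    remaining = next_deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)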

Time since the Unix epoch:
from time import time

while True:
    print(str(time()*1000) + 'ms \r', end='')
Time since the start of the program:
from time import time

init = time()
while True:
    print(str((time()-init)*1000) + 'ms \r', end='')
Thanks for your time

Just another solution using the datetime module for Python 3+.
import datetime

round(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)

Related

How can I get the execution time of the code? [duplicate]

In Java we can measure the time taken by a function to execute (see: How to measure time taken by a function to execute).
But how is it done in Python? How do I measure the start and end time between lines of code?
Something that does this:
import some_time_library
starttime = some_time_library.some_module()
code_tobe_measured()
endtime = some_time_library.some_module()
time_taken = endtime - starttime
If you want to measure CPU time, you can use time.process_time() for Python 3.3 and above:
import time
start = time.process_time()
# your code here
print(time.process_time() - start)
The first call turns the timer on, and the second call tells you how many seconds have elapsed.
There is also a function time.clock(), but it was deprecated since Python 3.3 and removed in Python 3.8.
There are better profiling tools like timeit and profile; however, time.process_time() will measure the CPU time, and this is what you're asking about.
If you want to measure wall clock time instead, use time.time().
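As a rough illustration of the profiling route mentioned above (a sketch I'm adding; code_to_be_measured is a made-up placeholder), the standard-library cProfile module reports per-function call counts and times:
import cProfile

def code_to_be_measured():
    return sum(i * i for i in range(100000))

cProfile.run('code_to_be_measured()')  # prints ncalls, tottime, cumtime per function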
You can also use the time library:
import time
start = time.time()
# your code
# end
print(f'Time: {time.time() - start}')
With a help of a small convenience class, you can measure time spent in indented lines like this:
with CodeTimer():
line_to_measure()
another_line()
# etc...
Which will show the following after the indented line(s) finishes executing:
Code block took: x.xxx ms
UPDATE: You can now get the class with pip install linetimer and then from linetimer import CodeTimer. See this GitHub project.
The code for above class:
import timeit

class CodeTimer:
    def __init__(self, name=None):
        self.name = " '" + name + "'" if name else ''

    def __enter__(self):
        self.start = timeit.default_timer()

    def __exit__(self, exc_type, exc_value, traceback):
        self.took = (timeit.default_timer() - self.start) * 1000.0
        print('Code block' + self.name + ' took: ' + str(self.took) + ' ms')
You could then name the code blocks you want to measure:
with CodeTimer('loop 1'):
    for i in range(100000):
        pass

with CodeTimer('loop 2'):
    for i in range(100000):
        pass

Code block 'loop 1' took: 4.991 ms
Code block 'loop 2' took: 3.666 ms
And nest them:
with CodeTimer('Outer'):
    for i in range(100000):
        pass

    with CodeTimer('Inner'):
        for i in range(100000):
            pass

    for i in range(100000):
        pass

Code block 'Inner' took: 2.382 ms
Code block 'Outer' took: 10.466 ms
Regarding timeit.default_timer(), it uses the best timer based on OS and Python version, see this answer.
Putting the code in a function, then using a decorator for timing is another option. (Source) The advantage of this method is that you define timer once and use it with a simple additional line for every function.
First, define timer decorator:
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start_time = time.perf_counter()
        value = func(*args, **kwargs)
        end_time = time.perf_counter()
        run_time = end_time - start_time
        print("Finished {} in {} secs".format(repr(func.__name__), round(run_time, 3)))
        return value
    return wrapper
Then, use the decorator while defining the function:
@timer
def doubled_and_add(num):
    res = sum([i*2 for i in range(num)])
    print("Result : {}".format(res))
Let's try:
doubled_and_add(100000)
doubled_and_add(1000000)
Output:
Result : 9999900000
Finished 'doubled_and_add' in 0.0119 secs
Result : 999999000000
Finished 'doubled_and_add' in 0.0897 secs
Note: I'm not sure why time.perf_counter is used here instead of time.time. Comments are welcome.
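For what it's worth (my addition, not part of the original answer): time.perf_counter is monotonic and typically has finer resolution than time.time, which can jump when the system clock is adjusted. The standard time.get_clock_info API makes the difference visible:
import time

for name in ("time", "perf_counter", "process_time", "monotonic"):
    info = time.get_clock_info(name)
    print(name, "resolution:", info.resolution,
          "monotonic:", info.monotonic, "adjustable:", info.adjustable)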
I always prefer to check time in hours, minutes and seconds (%H:%M:%S) format:
from datetime import datetime
start = datetime.now()
# your code
end = datetime.now()
time_taken = end - start
print('Time: ',time_taken)
output:
Time: 0:00:00.000019
I was looking for a way to output a formatted time with minimal code, so here is my solution. Many people use Pandas anyway, so in some cases this can save an additional library import.
import pandas as pd
start = pd.Timestamp.now()
# code
print(pd.Timestamp.now()-start)
Output:
0 days 00:05:32.541600
I would recommend using this if time precision is not the most important; otherwise use the time library:
%timeit pd.Timestamp.now() outputs 3.29 µs ± 214 ns per loop
%timeit time.time() outputs 154 ns ± 13.3 ns per loop
You can try this as well:
from time import perf_counter
t0 = perf_counter()
...
t1 = perf_counter()
time_taken = t1 - t0
Let me add a little more to the solution at https://stackoverflow.com/a/63665115/7412781.
Removed the dependency on functools.
Used time.process_time() instead of the absolute counter time.perf_counter(), because the process can be context-switched out by the kernel.
Printed the raw function object to get the correct class name as well.
This is the decorator code.
import time
import time

def decorator_time_taken(fnc):
    def inner(*args):
        start = time.process_time()
        ret = fnc(*args)
        end = time.process_time()
        print("{} took {} seconds".format(fnc, round((end - start), 6)))
        return ret
    return inner
This is the usage sample code. It's checking if 193939 is prime or not.
class PrimeBrute:
    @decorator_time_taken
    def isPrime(self, a):
        for i in range(a-2):
            if a % (i+2) == 0: return False
        return True

inst = PrimeBrute()
print(inst.isPrime(193939))
This is the output.
<function PrimeBrute.isPrime at 0x7fc0c6919ae8> took 0.015789 seconds
True
Use timeit module to benchmark your performance:
def test():
    print("test")
    emptyFunction()
    for i in [x for x in range(10000)]:
        i**i

def emptyFunction():
    pass

if __name__ == "__main__":
    import timeit
    print(timeit.timeit("test()", number=5, globals=globals()))
    # print(timeit.timeit("test()", setup="from __main__ import test",
    #                     number=5))
The first parameter defines the piece of code we want to execute (test in this case), and number defines how many times to repeat the execution.
Output:
test
test
test
test
test
36.81822113099952
Using the time module, we can calculate Unix time at the start and at the end of a function. Here is what the code might look like:
from time import time as unix
This code imports time.time which allows us to calculate unix time.
from time import sleep
This is not mandatory, but I am also importing time.sleep for one of the demonstrations.
START_TIME = unix()
This calculates Unix time and stores it in a variable. Remember, unix is not a separate function; it is just time.time imported under another name, so if you did not put "as unix" in the first import, you will need to call time.time().
After this, we put whichever function or code we want.
At the end of the code snippet we put
TOTAL_TIME = unix()-START_TIME
This line of code does two things: It calculates unix time at the end of the function, and using the variable START_TIME from before, we calculate the amount of time it took to execute the code snippet.
We can then use this variable wherever we want, including for a print() function.
print("The snippet took {} seconds to execute".format(TOTAL_TIME))
Here is a quick, fully commented demonstration with two experiments.
from time import time as unix  # Import the module to measure unix time
from time import sleep

# Here are a few examples:

# 1. Counting to 100 000
START_TIME = unix()
for i in range(0, 100001):
    print("Number: {}\r".format(i), end="")
TOTAL_TIME = unix() - START_TIME
print("\nFinal time (Experiment 1): {} s\n".format(TOTAL_TIME))

# 2. Precision of sleep
for i in range(10):
    START_TIME = unix()
    sleep(0.1)
    TOTAL_TIME = unix() - START_TIME
    print("Sleep(0.1): Index: {}, Time: {} s".format(i, TOTAL_TIME))
Here was my output:
Number: 100000
Final time (Experiment 1): 16.666812419891357 s
Sleep(0.1): Index: 0, Time: 0.10014867782592773 s
Sleep(0.1): Index: 1, Time: 0.10016226768493652 s
Sleep(0.1): Index: 2, Time: 0.10202860832214355 s
Sleep(0.1): Index: 3, Time: 0.10015869140625 s
Sleep(0.1): Index: 4, Time: 0.10014724731445312 s
Sleep(0.1): Index: 5, Time: 0.10013675689697266 s
Sleep(0.1): Index: 6, Time: 0.10014677047729492 s
Sleep(0.1): Index: 7, Time: 0.1001439094543457 s
Sleep(0.1): Index: 8, Time: 0.10044598579406738 s
Sleep(0.1): Index: 9, Time: 0.10014700889587402 s
In an IPython/Jupyter notebook you can also put the %%timeit cell magic before the code you want to measure:
%%timeit
~code~

How to properly use time.time()

I am trying to time a running function, and I need to know how many hours/minutes/seconds it takes. I am using time.time(), but I don't understand the output. How can I convert this output into hours/minutes/seconds taken by the function? Or is there another, more suitable library?
import time

starttime = time.time()
x = 0
for i in range(100000):
    x += i
endtime = time.time()
print('Job took: ', endtime - starttime)
I'd recommend using time.perf_counter instead of time.time, and using timedelta to format the units:
>>> from datetime import timedelta
>>> import time
>>> starttime = time.perf_counter()
>>> x=0
>>> for i in range(100000):
...     x += i
...
>>> duration = timedelta(seconds=time.perf_counter()-starttime)
>>> print('Job took: ', duration)
Job took: 0:00:00.015017
The benefit of using perf_counter is that it won't be impacted by weird things like the timezone or system clock changing while you're measuring, and its resolution is guaranteed to be as high as possible (which may be important if you're timing very quick events).
In either case, the return value is measured in seconds, but you need to know what function it came from in order to know what the float value corresponds to. timedelta is a nicer way to represent a duration than a pure float IMO because it includes the units.
time.time():
The time() function returns the number of seconds passed since the epoch. For Unix systems, the epoch is January 1, 1970, 00:00:00 at UTC (the point where time begins).
import time
seconds = time.time()
print("Seconds since epoch =", seconds)
This might not be what you want
time.time() gives the number of seconds since the epoch at the moment it is called, so endtime - starttime gives you the number of seconds between the beginning and the end of the loop.
A preferable way to measure time in Python is to use datetime:
import datetime

starttime = datetime.datetime.now()
x = 0
for i in range(100000):
    x += i
endtime = datetime.datetime.now()
diff = endtime - starttime
print('Job took: ', diff.days, diff.seconds, diff.microseconds)

How to increase sleep/pause timing accuracy in python?

I ran an experiment to compare sleep/pause timing accuracy in python and C++
Experiment summary:
In a loop of 1000000 iterations, sleep 1 microsecond in each iteration.
Expected duration: 1.000000 second (for 100% accurate program)
In Python:
import pause
import datetime
import time

start = time.time()
dt = datetime.datetime.now()
for i in range(1000000):
    dt += datetime.timedelta(microseconds=1)
    pause.until(dt)
end = time.time()
print(end - start)
Expected: 1.000000 sec, Actual (approximate): 2.603796
In C++:
#include <iostream>
#include <chrono>
#include <thread>
using namespace std;
using usec = std::chrono::microseconds;
using datetime = chrono::_V2::steady_clock::time_point;
using clk = chrono::_V2::steady_clock;
int main()
{
    datetime dt;
    usec timedelta = static_cast<usec>(1);
    dt = clk::now();
    const auto start = dt;
    for(int i=0; i < 1000000; ++i) {
        dt += timedelta;
        this_thread::sleep_until(dt);
    }
    const auto end = clk::now();
    chrono::duration<double> elapsed_seconds = end - start;
    cout << elapsed_seconds.count();
    return 0;
}
Expected: 1.000000 sec, Actual (approximate): 1.000040
It is obvious that C++ is much more accurate, but I am developing a project in python and need to increase the accuracy. Any ideas?
P.S It's OK if you suggest another python library/technique as long as it is more accurate :)
The problem is not only that the sleep timer of python is inaccurate, but that each part of the loop requires some time.
Your original code has a run-time of ~1.9528656005859375 on my system.
If I only run this part of your code without any sleep:
for i in range(100000):
    dt += datetime.timedelta(microseconds=1)
Then the required time for that loop is already ~0.45999741554260254.
If I only run
for i in range(1000000):
    pause.milliseconds(0)
Then the run-time of the code is ~0.5583224296569824.
Using always the same date:
dt = datetime.datetime.now()
for i in range(1000000):
    pause.until(dt)
Results in a runtime of ~1.326077938079834
If you do the same with the timestamp:
dt = datetime.datetime.now()
ts = dt.timestamp()
for i in range(1000000):
    pause.until(ts)
Then the run-time changes to ~0.36722803115844727
And if you increment the timestamp with one microsecond:
dt = datetime.datetime.now()
ts = dt.timestamp()
for i in range(1000000):
    ts += 0.000001
    pause.until(ts)
Then you get a runtime of ~0.9536933898925781
That it is smaller than 1 is due to floating point inaccuracies: adding print(ts - dt.timestamp()) after the loop will show ~0.95367431640625, so the pause duration itself is correct, but the ts += 0.000001 is accumulating an error.
You will get the best result if you count the iterations you had and add iterationCount/1000000 to the start time:
dt = datetime.datetime.now()
ts = dt.timestamp()
for i in range(1000000):
    pause.until(ts + i/1000000)
And this would result in ~1.000023365020752
So in my case pause itself would already allow an accuracy of less than 1 microsecond. The problem is actually in the datetime part that is required for both datetime.timedelta and sleep_until.
So if you want microsecond accuracy, you need to look for a time library that performs better than datetime.
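As a follow-up sketch (my own addition, not from the answer): one standard-library option is to pace the loop with time.perf_counter_ns() and a busy-wait, which avoids both the datetime overhead and sleep granularity at the cost of spinning a CPU core:
import time

start = time.perf_counter_ns()
deadline = start
for i in range(1000000):
    deadline += 1000                           # one microsecond, in nanoseconds
    while time.perf_counter_ns() < deadline:
        pass                                   # busy-wait until the next tick
print((time.perf_counter_ns() - start) / 1e9)  # should come out close to 1.0 s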
import pause
import datetime
import time

start = time.time()
dt = datetime.datetime.now()
for i in range(1000000):
    dt += datetime.timedelta(microseconds=1)
    pause.until(1)
end = time.time()
print(end - start)
OUTPUT:
1.0014092922210693
The pause library says that:
The precision should be within 0.001 of a second, however, this will depend on how precise your system sleep is and other performance factors.
If you multiply 0.001 by 1000000 you will get a large accumulated error.
A couple of questions:
Why do you need to sleep?
What is the minimum required accuracy?
How time consistent are the operations you are calling? If these function calls vary by more than 0.001 then the accumulated error will be more due to the operations you are performing than can be attributed to the pauses/sleeps.
Sleeping a thread is inherently non-deterministic - you cannot really talk about 'precision' for thread sleep in general - perhaps only in the context of a particular system and platform - there are just too many factors that can play a role, for example how many CPU cores, etc.
To illustrate the point, a thought experiment:
Suppose you made many threads (at least 1000) and scheduled them to run at the same exact time. What 'precision' would you then expect?

Accurate sleep/delay within Python while loop

I have a while True loop which sends variables to an external function, and then uses the returned values. This send/receive process has a user-configurable frequency, which is saved and read from an external .ini configuration file.
I've tried time.sleep(1 / Frequency), but am not satisfied with the accuracy, given the number of threads being used elsewhere. E.g. a frequency of 60Hz (period of 0.0166667) is giving an 'actual' time.sleep() period of ~0.0311.
My preference would be to use an additional while loop, which compares the current time to the start time plus the period, as follows:
EndTime = time.time() + (1 / Frequency)
while time.time() - EndTime < 0:
    sleep(0)
This would fit into the end of my while True function as follows:
while True:
    A = random.randint(0, 5)
    B = random.randint(0, 10)
    C = random.randint(0, 20)
    Values = ExternalFunction.main(Variable_A = A, Variable_B = B, Variable_C = C)
    Return_A = Values['A_Out']
    Return_B = Values['B_Out']
    Return_C = Values['C_Out']
    # Update other functions with Return_A, Return_B and Return_C
    EndTime = time.time() + (1 / Frequency)
    while time.time() - EndTime < 0:
        time.sleep(0)
I'm missing something, as the addition of the while loop causes the function to execute once only. How can I get the above to function correctly? Is this the best approach to 'accurate' frequency control on a non-real time operating system? Should I be using threading for this particular component? I'm testing this function on both Windows 7 (64-bit) and Ubuntu (64-bit).
If I understood your question correctly, you want to execute ExternalFunction.main at a given frequency. The problem is that the execution of ExternalFunction.main itself takes some time. If you don't need very fine precision -- it seems that you don't -- my suggestion is doing something like this.
import time

frequency = 1  # Hz
period = 1.0/frequency

while True:
    time_before = time.time()
    [...]
    ExternalFunction.main([...])
    [...]
    while (time.time() - time_before) < period:
        time.sleep(0.001)  # precision here
You may tune the precision to your needs. Greater precision (smaller number) will make the inner while loop execute more often.
This achieves decent results when not using threads. However, when using Python threads, the GIL (Global Interpreter Lock) makes sure only one thread runs at a time. If you have a huge number of threads it may be that it is taking way too much time for the program to go back to your main thread. Increasing the frequency Python changes between threads may give you more accurate delays.
Add this to the beginning of your code to increase the thread switching frequency.
import sys
sys.setcheckinterval(1)
1 is the number of instructions executed on each thread before switching (the default is 100), a larger number improves performance but will increase the threading switching time.
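Note (my addition): sys.setcheckinterval is deprecated and has been removed in modern Python 3, where the rough equivalent is sys.setswitchinterval, which takes a duration in seconds rather than an instruction count:
import sys

sys.setswitchinterval(0.0005)  # switch threads roughly every 0.5 ms (the default is 0.005 s)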
You may want to try python-pause
Pause until a unix time, with millisecond precision:
import pause
pause.until(1370640569.7747359)
Pause using datetime:
import pause, datetime
dt = datetime.datetime(2013, 6, 2, 14, 36, 34, 383752)
pause.until(dt)
You may use it like:
import pause, datetime

freqHz = 60.0
td = datetime.timedelta(seconds=1/freqHz)
dt = datetime.datetime.now()
while True:
    # Your code here
    dt += td
    pause.until(dt)
Another solution for an accurate delay is to use the perf_counter() function from module time. Especially useful in windows as time.sleep is not accurate in milliseconds. See below example where function accurate_delay creates a delay in milliseconds.
import time

def accurate_delay(delay):
    ''' Function to provide accurate time delay in milliseconds '''
    _ = time.perf_counter() + delay/1000
    while time.perf_counter() < _:
        pass

delay = 10

t_start = time.perf_counter()
print('Wait for {:.0f} ms. Start: {:.5f}'.format(delay, t_start))
accurate_delay(delay)
t_end = time.perf_counter()
print('End time: {:.5f}. Delay is {:.5f} ms'.format(t_end, 1000*(t_end - t_start)))

sum = 0
ntests = 1000
for _ in range(ntests):
    t_start = time.perf_counter()
    accurate_delay(delay)
    t_end = time.perf_counter()
    print('Test completed: {:.2f}%'.format(_/ntests * 100), end='\r', flush=True)
    sum = sum + 1000*(t_end - t_start) - delay

print('Average difference in time delay is {:.5f} ms.'.format(sum/ntests))

What is the best/most efficient way to output value every x seconds during a loop

I have always been curious about this as the simple way is definitely not efficient. How would you efficiently go about outputting a value every x seconds?
Here is an example of what I mean:
import time

num = 50000000

startTime = time.time()
j = 0
for i in range(num):
    j = (((j+10)**0.5)**2)**0.5
print time.time() - startTime
# output time: 24 seconds

startTime = time.time()
newTime = time.time()
j = 0
for i in range(num):
    j = (((j+10)**0.5)**2)**0.5
    if time.time() - newTime > 0.5:
        newTime = time.time()
        print i
print time.time() - startTime
# output time: 32 seconds
A whole 1/3rd faster when not outputting the progress every half a second.
I know this is because it requires an extra calculation every loop, but the same applies with other similar checks you may want to do - how would you go about implementing something like this without seriously affecting the execution time?
Well, you know that you're doing many iterations per second, so you really don't need to make the time.time() call on every iteration. You can use a modulo operator to only actually check if you need to output something every N iterations of the loop.
startTime = time.time()
newTime = time.time()
j = 0
for i in range(num):
    j = (((j+10)**0.5)**2)**0.5
    if i % 50 == 0:  # Only check every 50th iteration
        if time.time() - newTime > 0.5:
            newTime = time.time()
            print i, newTime
print time.time() - startTime
# 45 seconds (the original version took 42 on my system)
Checking only every 50 iterations reduces my run time from 56 seconds to 43 (the original with no printing took 42, and Tom Page's solution took 50 seconds), and the iterations complete quickly enough that it's still outputting exactly every 0.5 seconds according to time.time():
0 1409083225.39
605000 1409083225.89
1201450 1409083226.39
1821150 1409083226.89
2439250 1409083227.39
3054400 1409083227.89
3644100 1409083228.39
4254350 1409083228.89
4831600 1409083229.39
5433450 1409083229.89
6034850 1409083230.39
6644400 1409083230.89
7252650 1409083231.39
7840100 1409083231.89
8438300 1409083232.39
9061200 1409083232.89
9667350 1409083233.39
...
You might save a few clock cycles by keeping track of the next time that a print is due:
nexttime = time.time() + 0.5
Then your condition will be a simple comparison:
if time.time() >= nexttime
as opposed to a subtraction followed by a comparison:
if time.time() - newTime > 0.5
You'll only have to do an addition after each message, as opposed to doing a subtraction after each iteration.
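Putting both ideas together, a minimal sketch (my own, assuming the same loop as in the question) combining the modulo check with a precomputed deadline:
import time

num = 50000000
nexttime = time.time() + 0.5
j = 0
for i in range(num):
    j = (((j + 10)**0.5)**2)**0.5
    if i % 50 == 0 and time.time() >= nexttime:
        nexttime = time.time() + 0.5
        print(i)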
I tried it with a sideband thread doing the printing. It added 5 seconds to the execution time on Python 2.x but virtually no extra time on Python 3.x. Python 2.x threads have a lot of overhead. Here's my example with timing included as comments:
import time
import threading

def showit(event):
    global i  # could pass in a mutable object instead
    while not event.is_set():
        event.wait(.5)
        print 'value is', i

num = 50000000

startTime = time.time()
j = 0
for i in range(num):
    j = (((j+10)**0.5)**2)**0.5
print time.time() - startTime
# output time: 23 seconds

event = threading.Event()
showit_thread = threading.Thread(target=showit, args=(event,))
showit_thread.start()

startTime = time.time()
j = 0
for i in range(num):
    j = (((j+10)**0.5)**2)**0.5
event.set()
time.sleep(.1)
print time.time() - startTime
# output time: 28 seconds
If you want to wait a specified period of time before doing something, just use the time.sleep() method.
import time

for i in range(100):
    print(i)
    time.sleep(0.5)
This will wait half a second before printing the next value of i.
If you don't care about Windows, signal.setitimer will be simpler than using a background thread, and on many *nix platforms a whole lot more efficient.
Here's an example:
import signal
import time

num = 50000000
startTime = time.time()

def ontimer(sig, frame):
    global i
    print(i)

signal.signal(signal.SIGVTALRM, ontimer)
signal.setitimer(signal.ITIMER_VIRTUAL, 0.5, 0.5)
j = 0
for i in range(num):
    j = (((j+10)**0.5)**2)**0.5
signal.setitimer(signal.ITIMER_VIRTUAL, 0)
print(time.time() - startTime)
This is about as close to free as you're going to get performance-wise.
In some use cases, a virtual timer isn't sufficiently accurate, so you need to change that to ITIMER_REAL and change the signal to SIGALRM. That's a little more expensive, but still pretty cheap, and still dead simple.
On some (older) *nix platforms, alarm may be more efficient than setitimer, but unfortunately alarm only takes integral seconds, so you can't use it to fire twice per second.
Timings from my MacBook Pro:
no output: 15.02s
SIGVTALRM: 15.03s
SIGALRM: 15.44s
thread: 19.9s
checking time.time(): 22.3s
(I didn't test with either dano's optimization or Tom Page's; obviously those will reduce the 22.3, but they're not going to get it down to 15.44…)
Part of the problem here is that you're using time.time.
On my MacBook Pro, time.time takes more than 1/3rd as long as all of the work you're doing:
In [2]: %timeit time.time()
10000000 loops, best of 3: 105 ns per loop
In [3]: %timeit (((j+10)**0.5)**2)**0.5
1000000 loops, best of 3: 268 ns per loop
And that 105ns is fast for time—e.g., an older Windows box with no better hardware timer than ACPI can take 100x longer.
On top of that, time.time is not guaranteed to have enough precision to do what you want anyway:
Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second.
Even on platforms where it has better precision than 1 second, it may have a lower accuracy; e.g., it may only be updated once per scheduler tick.
And time isn't even guaranteed to be monotonic; on some platforms, if the system time changes, time may go down.
Calling it less often will solve the first problem, but not the others.
So, what can you do?
Unfortunately, there's no built-in answer, at least not with Python 2.7. The best solution is different on different platforms—probably GetTickCount64 on Windows, clock_gettime with the appropriate clock ID on most modern *nixes, gettimeofday on most other *nixes. These are relatively easy to use via ctypes if you don't want to distribute a C extension… but someone really should wrap it all up in a module and post it on PyPI, and unfortunately I couldn't find one…
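A side note I'm adding: on Python 3.3+ this gap is mostly closed without ctypes, since time.monotonic() and time.perf_counter() are built in (and on Unix, time.clock_gettime(time.CLOCK_MONOTONIC) is exposed directly):
import time

t0 = time.perf_counter()   # monotonic, high-resolution, cross-platform (Python 3.3+)
# ... work being timed ...
print(time.perf_counter() - t0)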
