How can I get the time execution of the code? [duplicate] - python

In Java, we can do this as described in How to measure time taken by a function to execute.
But how is it done in Python? How can I measure the start and end time around a block of code?
Something that does this:
import some_time_library
starttime = some_time_library.some_module()
code_tobe_measured()
endtime = some_time_library.some_module()
time_taken = endtime - starttime

If you want to measure CPU time, you can use time.process_time() for Python 3.3 and above:
import time
start = time.process_time()
# your code here
print(time.process_time() - start)
The first call starts the timer, and the second call tells you how many seconds have elapsed.
There is also a function time.clock(), but it has been deprecated since Python 3.3 and was removed in Python 3.8.
There are better profiling tools like timeit and profile; however, time.process_time() measures the CPU time, and that is what you're asking about.
If you want to measure wall clock time instead, use time.time().
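For a quick illustration of the difference (a minimal sketch; the sleep and the loop are just stand-ins for real work), note that sleeping costs wall-clock time but almost no CPU time:

import time

cpu_start = time.process_time()
wall_start = time.perf_counter()

time.sleep(0.5)                    # counted by perf_counter, ignored by process_time
sum(i * i for i in range(10**6))   # counted by both

print('CPU time:  ', time.process_time() - cpu_start)
print('Wall clock:', time.perf_counter() - wall_start)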

You can also use the time library:
import time
start = time.time()
# your code
# end
print(f'Time: {time.time() - start}')

With a help of a small convenience class, you can measure time spent in indented lines like this:
with CodeTimer():
    line_to_measure()
    another_line()
    # etc...
Which will show the following after the indented lines finish executing:
Code block took: x.xxx ms
UPDATE: You can now get the class with pip install linetimer and then from linetimer import CodeTimer. See this GitHub project.
The code for the above class:
import timeit

class CodeTimer:
    def __init__(self, name=None):
        self.name = " '" + name + "'" if name else ''

    def __enter__(self):
        self.start = timeit.default_timer()

    def __exit__(self, exc_type, exc_value, traceback):
        self.took = (timeit.default_timer() - self.start) * 1000.0
        print('Code block' + self.name + ' took: ' + str(self.took) + ' ms')
You could then name the code blocks you want to measure:
with CodeTimer('loop 1'):
    for i in range(100000):
        pass

with CodeTimer('loop 2'):
    for i in range(100000):
        pass
Code block 'loop 1' took: 4.991 ms
Code block 'loop 2' took: 3.666 ms
And nest them:
with CodeTimer('Outer'):
    for i in range(100000):
        pass

    with CodeTimer('Inner'):
        for i in range(100000):
            pass

    for i in range(100000):
        pass
Code block 'Inner' took: 2.382 ms
Code block 'Outer' took: 10.466 ms
Regarding timeit.default_timer(), it uses the best timer based on OS and Python version, see this answer.
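On Python 3.3+ it resolves to time.perf_counter() on every platform, which is easy to verify yourself:

import time
import timeit

print(timeit.default_timer is time.perf_counter)  # True on Python 3.3+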

Putting the code in a function, then using a decorator for timing is another option. (Source) The advantage of this method is that you define timer once and use it with a simple additional line for every function.
First, define timer decorator:
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start_time = time.perf_counter()
        value = func(*args, **kwargs)
        end_time = time.perf_counter()
        run_time = end_time - start_time
        print("Finished {} in {} secs".format(repr(func.__name__), round(run_time, 3)))
        return value
    return wrapper
Then, use the decorator while defining the function:
@timer
def doubled_and_add(num):
    res = sum([i*2 for i in range(num)])
    print("Result : {}".format(res))
Let's try:
doubled_and_add(100000)
doubled_and_add(1000000)
Output:
Result : 9999900000
Finished 'doubled_and_add' in 0.0119 secs
Result : 999999000000
Finished 'doubled_and_add' in 0.0897 secs
Note: time.perf_counter is used here instead of time.time because it reads the highest-resolution monotonic clock available, so it is not affected by system clock adjustments.

I always prefer to check time in hours, minutes and seconds (%H:%M:%S) format:
from datetime import datetime
start = datetime.now()
# your code
end = datetime.now()
time_taken = end - start
print('Time: ',time_taken)
output:
Time: 0:00:00.000019

I was looking for a way to output formatted time with minimal code, so here is my solution. Many people use Pandas anyway, so in some cases this can save an additional library import.
import pandas as pd
start = pd.Timestamp.now()
# code
print(pd.Timestamp.now()-start)
Output:
0 days 00:05:32.541600
I would recommend using this if time precision is not the most important; otherwise, use the time library:
%timeit pd.Timestamp.now() outputs 3.29 µs ± 214 ns per loop
%timeit time.time() outputs 154 ns ± 13.3 ns per loop

You can try this as well:
from time import perf_counter
t0 = perf_counter()
...
t1 = perf_counter()
time_taken = t1 - t0

Let me add a little more to the https://stackoverflow.com/a/63665115/7412781 solution.
Removed the dependency on functools.
Used the process time from time.process_time() instead of the absolute counter from time.perf_counter(), because the process can be context-switched out by the kernel.
Printed the raw function object to get the correct class name as well.
This is the decorator code.
import time

def decorator_time_taken(fnc):
    def inner(*args):
        start = time.process_time()
        ret = fnc(*args)
        end = time.process_time()
        print("{} took {} seconds".format(fnc, round((end - start), 6)))
        return ret
    return inner
This is the usage sample code. It's checking if 193939 is prime or not.
class PrimeBrute:
    @decorator_time_taken
    def isPrime(self, a):
        for i in range(a-2):
            if a % (i+2) == 0: return False
        return True

inst = PrimeBrute()
print(inst.isPrime(193939))
This is the output.
<function PrimeBrute.isPrime at 0x7fc0c6919ae8> took 0.015789 seconds
True

Use the timeit module to benchmark your performance:
def test():
    print("test")
    emptyFunction()
    for i in [x for x in range(10000)]:
        i**i

def emptyFunction():
    pass

if __name__ == "__main__":
    import timeit
    print(timeit.timeit("test()", number = 5, globals = globals()))
    #print(timeit.timeit("test()", setup = "from __main__ import test",
    #                    number = 5))
The first parameter defines the piece of code we want to execute (test in this case), and number defines how many times you want to repeat the execution.
Output:
test
test
test
test
test
36.81822113099952

Using the time module, we can calculate the Unix time at the start and at the end of a function. Here is what the code might look like:
from time import time as unix
This code imports time.time which allows us to calculate unix time.
from time import sleep
This is not mandatory, but I am also importing time.sleep for one of the demonstrations.
START_TIME = unix()
This is what calculates the Unix time and puts it in a variable. Remember, unix here is just an alias: I imported time.time as unix, so if you did not add as unix in the first import, you will need to use time.time().
After this, we put whichever function or code we want.
At the end of the code snippet we put
TOTAL_TIME = unix()-START_TIME
This line of code does two things: It calculates unix time at the end of the function, and using the variable START_TIME from before, we calculate the amount of time it took to execute the code snippet.
We can then use this variable wherever we want, including for a print() function.
print("The snippet took {} seconds to execute".format(TOTAL_TIME))
Here is a quick, fully commented demonstration with two experiments:
from time import time as unix # Import the module to measure unix time
from time import sleep

# Here are a few examples:

# 1. Counting to 100 000
START_TIME = unix()
for i in range(0, 100001):
    print("Number: {}\r".format(i), end="")
TOTAL_TIME = unix() - START_TIME
print("\nFinal time (Experiment 1): {} s\n".format(TOTAL_TIME))

# 2. Precision of sleep
for i in range(10):
    START_TIME = unix()
    sleep(0.1)
    TOTAL_TIME = unix() - START_TIME
    print("Sleep(0.1): Index: {}, Time: {} s".format(i, TOTAL_TIME))
Here was my output:
Number: 100000
Final time (Experiment 1): 16.666812419891357 s
Sleep(0.1): Index: 0, Time: 0.10014867782592773 s
Sleep(0.1): Index: 1, Time: 0.10016226768493652 s
Sleep(0.1): Index: 2, Time: 0.10202860832214355 s
Sleep(0.1): Index: 3, Time: 0.10015869140625 s
Sleep(0.1): Index: 4, Time: 0.10014724731445312 s
Sleep(0.1): Index: 5, Time: 0.10013675689697266 s
Sleep(0.1): Index: 6, Time: 0.10014677047729492 s
Sleep(0.1): Index: 7, Time: 0.1001439094543457 s
Sleep(0.1): Index: 8, Time: 0.10044598579406738 s
Sleep(0.1): Index: 9, Time: 0.10014700889587402 s

In a Jupyter/IPython notebook you can use the %%timeit cell magic; put it at the top of the cell, before your computation:
%%timeit
~code~


measure time a function takes in python [duplicate]

I want to measure the time it took to execute a function. I couldn't get timeit to work:
import timeit
start = timeit.timeit()
print("hello")
end = timeit.timeit()
print(end - start)
Use time.time() to measure the elapsed wall-clock time between two points:
import time
start = time.time()
print("hello")
end = time.time()
print(end - start)
This gives the execution time in seconds.
Another option since Python 3.3 might be to use perf_counter or process_time, depending on your requirements. Before 3.3 it was recommended to use time.clock (thanks Amber). However, it is currently deprecated:
On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition
of the meaning of “processor time”, depends on that of the C function
of the same name.
On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
Deprecated since version 3.3: The behaviour of this function depends
on the platform: use perf_counter() or process_time() instead,
depending on your requirements, to have a well defined behaviour.
Use timeit.default_timer instead of timeit.timeit. The former provides the best clock available on your platform and version of Python automatically:
from timeit import default_timer as timer
start = timer()
# ...
end = timer()
print(end - start) # Time in seconds, e.g. 5.38091952400282
timeit.default_timer is assigned to time.time() or time.clock() depending on OS. On Python 3.3+ default_timer is time.perf_counter() on all platforms. See Python - time.clock() vs. time.time() - accuracy?
See also:
Optimizing code
How to optimize for speed
Python 3 only:
Since time.clock() is deprecated as of Python 3.3, you will want to use time.perf_counter() for system-wide timing, or time.process_time() for process-wide timing, just the way you used to use time.clock():
import time
t = time.process_time()
#do some stuff
elapsed_time = time.process_time() - t
The new function process_time will not include time elapsed during sleep.
Measuring time in seconds:
from timeit import default_timer as timer
from datetime import timedelta
start = timer()
# ....
# (your code runs here)
# ...
end = timer()
print(timedelta(seconds=end-start))
Output:
0:00:01.946339
Given a function you'd like to time,
test.py:
def foo():
    # print "hello"
    return "hello"
the easiest way to use timeit is to call it from the command line:
% python -mtimeit -s'import test' 'test.foo()'
1000000 loops, best of 3: 0.254 usec per loop
Do not try to use time.time or time.clock (naively) to compare the speed of functions. They can give misleading results.
PS. Do not put print statements in a function you wish to time; otherwise the time measured will depend on the speed of the terminal.
It's fun to do this with a context-manager that automatically remembers the start time upon entry to a with block, then freezes the end time on block exit. With a little trickery, you can even get a running elapsed-time tally inside the block from the same context-manager function.
The core library doesn't have this (but probably ought to). Once in place, you can do things like:
with elapsed_timer() as elapsed:
    # some lengthy code
    print( "midpoint at %.2f seconds" % elapsed() )  # time so far
    # other lengthy code
    print( "all done at %.2f seconds" % elapsed() )
Here's contextmanager code sufficient to do the trick:
from contextlib import contextmanager
from timeit import default_timer

@contextmanager
def elapsed_timer():
    start = default_timer()
    elapser = lambda: default_timer() - start
    yield lambda: elapser()
    end = default_timer()
    elapser = lambda: end - start
And some runnable demo code:
import time

with elapsed_timer() as elapsed:
    time.sleep(1)
    print(elapsed())
    time.sleep(2)
    print(elapsed())
    time.sleep(3)
Note that by design of this function, the return value of elapsed() is frozen on block exit, and further calls return the same duration (of about 6 seconds in this toy example).
I prefer this. The timeit documentation is far too confusing.
from datetime import datetime
start_time = datetime.now()
# INSERT YOUR CODE
time_elapsed = datetime.now() - start_time
print('Time elapsed (hh:mm:ss.ms) {}'.format(time_elapsed))
Note that there isn't any formatting going on here; I just wrote hh:mm:ss into the printout so one can interpret time_elapsed.
Here's another way to do this:
>> from pytictoc import TicToc
>> t = TicToc() # create TicToc instance
>> t.tic() # Start timer
>> # do something
>> t.toc() # Print elapsed time
Elapsed time is 2.612231 seconds.
Comparing with the traditional way:
>> from time import time
>> t1 = time()
>> # do something
>> t2 = time()
>> elapsed = t2 - t1
>> print('Elapsed time is %f seconds.' % elapsed)
Elapsed time is 2.612231 seconds.
Installation:
pip install pytictoc
Refer to the PyPi page for more details.
The easiest way to calculate the duration of an operation:
import time
start_time = time.monotonic()
<operations, programs>
print('seconds: ', time.monotonic() - start_time)
Official docs here.
Here are my findings after going through many good answers here as well as a few other articles.
First, if you are debating between timeit and time.time, timeit has two advantages:
timeit selects the best timer available on your OS and Python version.
timeit disables garbage collection; however, this may or may not be what you want.
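As a point of reference, bare timeit usage for a function defined in your own script typically looks like this (my_func is a hypothetical placeholder), and the setup string is the part that quickly gets in the way:

import timeit

# pull the function in through a setup string...
timeit.timeit("my_func()", setup="from __main__ import my_func", number=1000)

# ...or, on Python 3.5+, hand timeit your globals so it can see your imports and definitions
timeit.timeit("my_func()", globals=globals(), number=1000)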
Now the problem is that timeit is not that simple to use, because it needs setup and things get ugly when you have a bunch of imports. Ideally, you just want a decorator or a with block to measure time. Unfortunately, there is nothing built-in for this, so you have two options:
Option 1: Use timebudget library
The timebudget is a versatile and very simple library that you can use just in one line of code after pip install.
@timebudget  # Record how long this function takes
def my_method():
    # my code
Option 2: Use my small module
I created a little timing utility module called timing.py. Just drop this file into your project and start using it. The only external dependency is runstats, which is again small.
Now you can time any function just by putting a decorator in front of it:
import timing

@timing.MeasureTime
def MyBigFunc():
    # do something time consuming
    for i in range(10000):
        print(i)

timing.print_all_timings()
If you want to time a portion of code, just put it inside a with block:
import timing

# somewhere in my code
with timing.MeasureBlockTime("MyBlock"):
    # do something time consuming
    for i in range(10000):
        print(i)

# rest of my code
timing.print_all_timings()
Advantages:
There are several half-baked versions floating around, so I want to point out a few highlights:
Use timer from timeit instead of time.time for reasons described earlier.
You can disable GC during timing if you want.
Decorator accepts functions with named or unnamed params.
Ability to disable printing in block timing (use with timing.MeasureBlockTime() as t and then t.elapsed).
Ability to keep gc enabled for block timing.
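The timing.py module itself is not reproduced here, but a minimal sketch of a block timer with optional GC control, roughly in the spirit of the points above (not the author's actual code), could look like this:

import gc
import timeit

class MeasureBlockTime:
    """Minimal sketch: time a with-block, optionally with garbage collection disabled."""
    def __init__(self, name="block", disable_gc=False, verbose=True):
        self.name, self.disable_gc, self.verbose = name, disable_gc, verbose

    def __enter__(self):
        if self.disable_gc:
            self.gc_was_enabled = gc.isenabled()
            gc.disable()
        self.start = timeit.default_timer()
        return self

    def __exit__(self, *exc):
        self.elapsed = timeit.default_timer() - self.start
        if self.disable_gc and self.gc_was_enabled:
            gc.enable()
        if self.verbose:
            print(f"{self.name}: {self.elapsed:.6f} s")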
Using time.time to measure execution gives you the overall execution time of your commands including running time spent by other processes on your computer. It is the time the user notices, but is not good if you want to compare different code snippets / algorithms / functions / ...
More information on timeit:
Using the timeit Module
timeit – Time the execution of small bits of Python code
If you want a deeper insight into profiling:
http://wiki.python.org/moin/PythonSpeed/PerformanceTips#Profiling_Code
How can you profile a python script?
Update: I used http://pythonhosted.org/line_profiler/ a lot during the last year, find it very helpful, and recommend using it instead of Python's profile module.
Use the profile module. It gives a very detailed profile.
import profile
profile.run('main()')
it outputs something like:
5 function calls in 0.047 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(exec)
1 0.047 0.047 0.047 0.047 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
0 0.000 0.000 profile:0(profiler)
1 0.000 0.000 0.047 0.047 profile:0(main())
1 0.000 0.000 0.000 0.000 two_sum.py:2(twoSum)
I've found it very informative.
The python cProfile and pstats modules offer great support for measuring time elapsed in certain functions without having to add any code around the existing functions.
For example if you have a python script timeFunctions.py:
import time

def hello():
    print("Hello :)")
    time.sleep(0.1)

def thankyou():
    print("Thank you!")
    time.sleep(0.05)

for idx in range(10):
    hello()

for idx in range(100):
    thankyou()
To run the profiler and generate stats for the file you can just run:
python -m cProfile -o timeStats.profile timeFunctions.py
What this does is use the cProfile module to profile all functions in timeFunctions.py and collect the stats in the timeStats.profile file. Note that we did not have to add any code to the existing module (timeFunctions.py), and this can be done with any module.
Once you have the stats file you can run the pstats module as follows:
python -m pstats timeStats.profile
This runs the interactive statistics browser which gives you a lot of nice functionality. For your particular use case you can just check the stats for your function. In our example checking stats for both functions shows us the following:
Welcome to the profile statistics browser.
timeStats.profile% stats hello
<timestamp> timeStats.profile
224 function calls in 6.014 seconds
Random listing order was used
List reduced from 6 to 1 due to restriction <'hello'>
ncalls tottime percall cumtime percall filename:lineno(function)
10 0.000 0.000 1.001 0.100 timeFunctions.py:3(hello)
timeStats.profile% stats thankyou
<timestamp> timeStats.profile
224 function calls in 6.014 seconds
Random listing order was used
List reduced from 6 to 1 due to restriction <'thankyou'>
ncalls tottime percall cumtime percall filename:lineno(function)
100 0.002 0.000 5.012 0.050 timeFunctions.py:7(thankyou)
The dummy example does not do much, but it gives you an idea of what can be done. The best part about this approach is that I don't have to edit any of my existing code to get these numbers, which obviously helps with profiling.
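If you prefer to stay in Python instead of the interactive browser, the same stats file can be read with the pstats API (a small sketch using only standard pstats calls):

import pstats

stats = pstats.Stats('timeStats.profile')
stats.sort_stats('cumulative')   # sort by cumulative time
stats.print_stats('hello')       # restrict the listing to functions matching 'hello'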
Here's another context manager for timing code -
Usage:
from benchmark import benchmark

with benchmark("Test 1+1"):
    1+1
=>
Test 1+1 : 1.41e-06 seconds
or, if you need the time value
with benchmark("Test 1+1") as b:
1+1
print(b.time)
=>
Test 1+1 : 7.05e-07 seconds
7.05233786763e-07
benchmark.py:
from timeit import default_timer as timer

class benchmark(object):
    def __init__(self, msg, fmt="%0.3g"):
        self.msg = msg
        self.fmt = fmt

    def __enter__(self):
        self.start = timer()
        return self

    def __exit__(self, *args):
        t = timer() - self.start
        print(("%s : " + self.fmt + " seconds") % (self.msg, t))
        self.time = t
Adapted from http://dabeaz.blogspot.fr/2010/02/context-manager-for-timing-benchmarks.html
Here is a tiny timer class that returns "hh:mm:ss" string:
import time

class Timer:
    def __init__(self):
        self.start = time.time()

    def restart(self):
        self.start = time.time()

    def get_time_hhmmss(self):
        end = time.time()
        m, s = divmod(end - self.start, 60)
        h, m = divmod(m, 60)
        time_str = "%02d:%02d:%02d" % (h, m, s)
        return time_str
Usage:
# Start timer
my_timer = Timer()
# ... do something
# Get time string:
time_hhmmss = my_timer.get_time_hhmmss()
print("Time elapsed: %s" % time_hhmmss )
# ... use the timer again
my_timer.restart()
# ... do something
# Get time:
time_hhmmss = my_timer.get_time_hhmmss()
# ... etc
(With IPython only) you can use %timeit to measure the average processing time:
def foo():
    print("hello")
and then:
%timeit foo()
the result is something like:
10000 loops, best of 3: 27 µs per loop
I like it simple (python 3):
from timeit import timeit
timeit(lambda: print("hello"))
Output is microseconds for a single execution:
2.430883963010274
Explanation:
timeit executes the anonymous function 1 million times by default and the result is given in seconds. Therefore the result for 1 single execution is the same amount but in microseconds on average.
For slow operations add a lower number of iterations or you could be waiting forever:
import time
timeit(lambda: time.sleep(1.5), number=1)
Output is always in seconds for the total number of iterations:
1.5015795179999714
On Python 3:
from time import sleep, perf_counter as pc
t0 = pc()
sleep(1)
print(pc()-t0)
Elegant and short.
One more way to use timeit:
from timeit import timeit

def func():
    return 1 + 1

time = timeit(func, number=1)
print(time)
To get insight into every function call recursively, do:
%load_ext snakeviz
%%snakeviz
It just takes those two lines of code in a Jupyter notebook, and it generates a nice interactive diagram.
Here is the code. Again, the 2 lines starting with % are the only extra lines of code needed to use snakeviz:
# !pip install snakeviz
%load_ext snakeviz
import glob
import hashlib
%%snakeviz
files = glob.glob('*.txt')
def print_files_hashed(files):
    for file in files:
        with open(file) as f:
            print(hashlib.md5(f.read().encode('utf-8')).hexdigest())

print_files_hashed(files)
It also seems possible to run snakeviz outside notebooks. More info on the snakeviz website.
How to measure the time between two operations, and compare the time of two operations:
import time
b = (123*321)*123
t1 = time.time()
c = ((9999^123)*321)^123
t2 = time.time()
print(t2-t1)
7.987022399902344e-05
Here's a pretty well documented and fully type hinted decorator I use as a general utility:
from functools import wraps
from time import perf_counter
from typing import Any, Callable, Optional, TypeVar, cast

F = TypeVar("F", bound=Callable[..., Any])

def timer(prefix: Optional[str] = None, precision: int = 6) -> Callable[[F], F]:
    """Use as a decorator to time the execution of any function.

    Args:
        prefix: String to print before the time taken.
            Default is the name of the function.
        precision: How many decimals to include in the seconds value.

    Examples:
        >>> @timer()
        ... def foo(x):
        ...     return x
        >>> foo(123)
        foo: 0.000...s
        123

        >>> @timer("Time taken: ", 2)
        ... def foo(x):
        ...     return x
        >>> foo(123)
        Time taken: 0.00s
        123

    """
    def decorator(func: F) -> F:
        @wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            nonlocal prefix
            prefix = prefix if prefix is not None else f"{func.__name__}: "
            start = perf_counter()
            result = func(*args, **kwargs)
            end = perf_counter()
            print(f"{prefix}{end - start:.{precision}f}s")
            return result
        return cast(F, wrapper)
    return decorator
Example usage:
from timer import timer

@timer(precision=9)
def takes_long(x: int) -> bool:
    return x in (i for i in range(x + 1))

result = takes_long(10**8)
print(result)
Output:
takes_long: 4.942629056s
True
The doctests can be checked with:
$ python3 -m doctest --verbose -o=ELLIPSIS timer.py
And the type hints with:
$ mypy timer.py
If you want to be able to time functions conveniently, you can use a simple decorator:
import time

def timing_decorator(func):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        original_return_val = func(*args, **kwargs)
        end = time.perf_counter()
        print("time elapsed in ", func.__name__, ": ", end - start, sep='')
        return original_return_val
    return wrapper
You can use it on a function that you want to time like this:
@timing_decorator
def function_to_time():
    time.sleep(1)

function_to_time()
Any time you call function_to_time, it will print how long it took and the name of the function being timed.
Kind of a super late response, but maybe it serves a purpose for someone. This is a way to do it which I think is super clean.
import time

def timed(fun, *args):
    s = time.time()
    r = fun(*args)
    print('{} execution took {} seconds.'.format(fun.__name__, time.time() - s))
    return(r)

timed(print, "Hello")
Keep in mind that "print" is a function in Python 3 and not Python 2.7. However, it works with any other function. Cheers!
You can use timeit.
Here is an example of how to test naive_func, which takes a parameter, using the Python REPL:
>>> import timeit
>>> def naive_func(x):
...     a = 0
...     for i in range(a):
...         a += i
...     return a
>>> def wrapper(func, *args, **kwargs):
...     def wrapper():
...         return func(*args, **kwargs)
...     return wrapper
>>> wrapped = wrapper(naive_func, 1_000)
>>> timeit.timeit(wrapped, number=1_000_000)
0.4458435332577161
You don't need a wrapper function if the function doesn't take any parameters, as shown below.
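For example (a minimal sketch), a zero-argument function can be handed to timeit.timeit directly:

import timeit

def no_args():
    return sum(range(100))

print(timeit.timeit(no_args, number=100_000))  # total seconds for 100 000 calls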
The print_elapsed_time function is below:
import time

def print_elapsed_time(prefix=''):
    e_time = time.time()
    if not hasattr(print_elapsed_time, 's_time'):
        print_elapsed_time.s_time = e_time
    else:
        print(f'{prefix} elapsed time: {e_time - print_elapsed_time.s_time:.2f} sec')
        print_elapsed_time.s_time = e_time
Use it in this way:
print_elapsed_time()
.... heavy jobs ...
print_elapsed_time('after heavy jobs')
.... tons of jobs ...
print_elapsed_time('after tons of jobs')
The result is:
after heavy jobs elapsed time: 0.39 sec
after tons of jobs elapsed time: 0.60 sec
The advantage of this function is that you don't need to pass the start time around.
We can also convert time into human-readable time.
import time, datetime
start = time.clock()

def num_multi1(max):
    result = 0
    for num in range(0, 1000):
        if (num % 3 == 0 or num % 5 == 0):
            result += num
    print "Sum is %d " % result

num_multi1(1000)

end = time.clock()
value = end - start
timestamp = datetime.datetime.fromtimestamp(value)
print timestamp.strftime('%Y-%m-%d %H:%M:%S')
Although it's not strictly asked in the question, it is quite often the case that you want a simple, uniform way to incrementally measure the elapsed time between several lines of code.
If you are using Python 3.8 or above, you can make use of assignment expressions (a.k.a. the walrus operator) to achieve this in a fairly elegant way:
import time
start, times = time.perf_counter(), {}
print("hello")
times["print"] = -start + (start := time.perf_counter())
time.sleep(1.42)
times["sleep"] = -start + (start := time.perf_counter())
a = [n**2 for n in range(10000)]
times["pow"] = -start + (start := time.perf_counter())
print(times)
=>
{'print': 2.193450927734375e-05, 'sleep': 1.4210970401763916, 'pow': 0.005671024322509766}
I made a library for this. If you want to measure a function, you can just do it like this:
from pythonbenchmark import compare, measure
import time

a,b,c,d,e = 10,10,10,10,10
something = [a,b,c,d,e]

@measure
def myFunction(something):
    time.sleep(0.4)

@measure
def myOptimizedFunction(something):
    time.sleep(0.2)

myFunction(something)
myOptimizedFunction(something)
https://github.com/Karlheinzniebuhr/pythonbenchmark
This unique class-based approach offers a printable string representation, customizable rounding, and convenient access to the elapsed time as a string or a float. It was developed with Python 3.7.
import datetime
import timeit

class Timer:
    """Measure time used."""
    # Ref: https://stackoverflow.com/a/57931660/

    def __init__(self, round_ndigits: int = 0):
        self._round_ndigits = round_ndigits
        self._start_time = timeit.default_timer()

    def __call__(self) -> float:
        return timeit.default_timer() - self._start_time

    def __str__(self) -> str:
        return str(datetime.timedelta(seconds=round(self(), self._round_ndigits)))
Usage:
# Setup timer
>>> timer = Timer()
# Access as a string
>>> print(f'Time elapsed is {timer}.')
Time elapsed is 0:00:03.
>>> print(f'Time elapsed is {timer}.')
Time elapsed is 0:00:04.
# Access as a float
>>> timer()
6.841332235
>>> timer()
7.970274425

How to calculate the time it takes for subprocesses of a line of code to run [duplicate]

I have a command line program in Python that takes a while to finish. I want to know the exact time it takes to finish running.
I've looked at the timeit module, but it seems it's only for small snippets of code. I want to time the whole program.
The simplest way in Python:
import time
start_time = time.time()
main()
print("--- %s seconds ---" % (time.time() - start_time))
This assumes that your program takes at least a tenth of a second to run.
Prints:
--- 0.764891862869 seconds ---
In Linux or Unix:
$ time python yourprogram.py
In Windows, see this StackOverflow question: How do I measure execution time of a command on the Windows command line?
For more verbose output,
$ time -v python yourprogram.py
Command being timed: "python3 yourprogram.py"
User time (seconds): 0.08
System time (seconds): 0.02
Percent of CPU this job got: 98%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.10
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 9480
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 1114
Voluntary context switches: 0
Involuntary context switches: 22
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
I put this timing.py module into my own site-packages directory, and just insert import timing at the top of my module:
import atexit
from time import clock
def secondsToStr(t):
    return "%d:%02d:%02d.%03d" % \
        reduce(lambda ll,b : divmod(ll[0],b) + ll[1:],
            [(t*1000,),1000,60,60])

line = "="*40
def log(s, elapsed=None):
    print line
    print secondsToStr(clock()), '-', s
    if elapsed:
        print "Elapsed time:", elapsed
    print line
    print

def endlog():
    end = clock()
    elapsed = end-start
    log("End Program", secondsToStr(elapsed))

def now():
    return secondsToStr(clock())

start = clock()
atexit.register(endlog)
log("Start Program")
I can also call timing.log from within my program if there are significant stages within the program I want to show. But just including import timing will print the start and end times, and overall elapsed time. (Forgive my obscure secondsToStr function, it just formats a floating point number of seconds to hh:mm:ss.sss form.)
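For instance (a small hypothetical sketch, not part of the module above), a script could mark its significant stages like this:

import timing

# ... stage 1 of the program ...
timing.log("Stage 1 complete")

# ... stage 2 of the program ...
timing.log("Stage 2 complete")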
Note: A Python 3 version of the above code can be found here or here.
I like the output the datetime module provides, where time delta objects show days, hours, minutes, etc. as necessary in a human-readable way.
For example:
from datetime import datetime
start_time = datetime.now()
# do your work here
end_time = datetime.now()
print('Duration: {}'.format(end_time - start_time))
Sample output e.g.
Duration: 0:00:08.309267
or
Duration: 1 day, 1:51:24.269711
As J.F. Sebastian mentioned, this approach might encounter some tricky cases with local time, so it's safer to use:
import time
from datetime import timedelta
start_time = time.monotonic()
# do your work here
end_time = time.monotonic()
print(timedelta(seconds=end_time - start_time))
import time
start_time = time.clock()
main()
print(time.clock() - start_time, "seconds")
time.clock() returns the processor time, which allows us to calculate only the time used by this process (on Unix anyway). The documentation says "in any case, this is the function to use for benchmarking Python or timing algorithms"
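Since time.clock() has been removed in Python 3.8+, the same pattern today would use time.perf_counter() instead (or time.process_time() if you specifically want CPU time):

import time

start_time = time.perf_counter()
main()
print(time.perf_counter() - start_time, "seconds")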
I really like Paul McGuire's answer, but I use Python 3. So for those who are interested: here's a modification of his answer that works with Python 3 on *nix (I imagine, under Windows, that clock() should be used instead of time()):
#python3
import atexit
from time import time, strftime, localtime
from datetime import timedelta

def secondsToStr(elapsed=None):
    if elapsed is None:
        return strftime("%Y-%m-%d %H:%M:%S", localtime())
    else:
        return str(timedelta(seconds=elapsed))

def log(s, elapsed=None):
    line = "="*40
    print(line)
    print(secondsToStr(), '-', s)
    if elapsed:
        print("Elapsed time:", elapsed)
    print(line)
    print()

def endlog():
    end = time()
    elapsed = end-start
    log("End Program", secondsToStr(elapsed))

start = time()
atexit.register(endlog)
log("Start Program")
If you find this useful, you should still up-vote his answer instead of this one, as he did most of the work ;).
You can use the Python profiler cProfile to measure CPU time and additionally how much time is spent inside each function and how many times each function is called. This is very useful if you want to improve performance of your script without knowing where to start. This answer to another Stack Overflow question is pretty good. It's always good to have a look in the documentation too.
Here's an example how to profile a script using cProfile from a command line:
$ python -m cProfile euler048.py
1007 function calls in 0.061 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.061 0.061 <string>:1(<module>)
1000 0.051 0.000 0.051 0.000 euler048.py:2(<lambda>)
1 0.005 0.005 0.061 0.061 euler048.py:2(<module>)
1 0.000 0.000 0.061 0.061 {execfile}
1 0.002 0.002 0.053 0.053 {map}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler objects}
1 0.000 0.000 0.000 0.000 {range}
1 0.003 0.003 0.003 0.003 {sum}
Just use the timeit module. It works with both Python 2 and Python 3.
import timeit
start = timeit.default_timer()
# All the program statements
stop = timeit.default_timer()
execution_time = stop - start
print("Program Executed in "+str(execution_time)) # It returns time in seconds
It returns the time in seconds, giving you your execution time. It is simple, but you should write it in the main function which starts program execution. If you want to get the execution time even when you get an error, then pass the parameter "start" to the function and calculate it there, like:
def sample_function(start, **kwargs):
    try:
        # Your statements
        pass
    except:
        # except statements run when your statements raise an exception
        stop = timeit.default_timer()
        execution_time = stop - start
        print("Program executed in " + str(execution_time))
time.clock()
Deprecated since version 3.3: The behavior of this function depends
on the platform: use perf_counter() or process_time() instead,
depending on your requirements, to have a well-defined behavior.
time.perf_counter()
Return the value (in fractional seconds) of a performance counter,
i.e. a clock with the highest available resolution to measure a short
duration. It does include time elapsed during sleep and is
system-wide.
time.process_time()
Return the value (in fractional seconds) of the sum of the system and
user CPU time of the current process. It does not include time elapsed
during sleep.
start = time.process_time()
... do something
elapsed = (time.process_time() - start)
time.clock has been deprecated since Python 3.3 and was removed in Python 3.8: use time.perf_counter or time.process_time instead.
import time
start_time = time.perf_counter()
for x in range(1, 100):
    print(x)
end_time = time.perf_counter()
print(end_time - start_time, "seconds")
For the data folks using Jupyter Notebook
In a cell, you can use Jupyter's %%time magic command to measure the execution time:
%%time
[ x**2 for x in range(10000)]
Output
CPU times: user 4.54 ms, sys: 0 ns, total: 4.54 ms
Wall time: 4.12 ms
This will only capture the execution time of a particular cell. If you'd like to capture the execution time of the whole notebook (i.e. program), you can create a new notebook in the same directory and in the new notebook execute all cells:
Suppose the notebook above is called example_notebook.ipynb. In a new notebook within the same directory:
# Convert your notebook to a .py script:
!jupyter nbconvert --to script example_notebook.ipynb
# Run the example_notebook with -t flag for time
%run -t example_notebook
Output
IPython CPU timings (estimated):
User : 0.00 s.
System : 0.00 s.
Wall time: 0.00 s.
The following snippet prints elapsed time in a nice human readable <HH:MM:SS> format.
import time
from datetime import timedelta
start_time = time.time()
#
# Perform lots of computations.
#
elapsed_time_secs = time.time() - start_time
msg = "Execution took: %s secs (Wall clock time)" % timedelta(seconds=round(elapsed_time_secs))
print(msg)
Similar to the response from @rogeriopvl, I added a slight modification to convert to hours, minutes, and seconds using the same library, for long running jobs.
import time
start_time = time.time()
main()
seconds = time.time() - start_time
print('Time Taken:', time.strftime("%H:%M:%S",time.gmtime(seconds)))
Sample Output
Time Taken: 00:00:08
For functions, I suggest using this simple decorator I created.
import time

def timeit(method):
    def timed(*args, **kwargs):
        ts = time.time()
        result = method(*args, **kwargs)
        te = time.time()
        if 'log_time' in kwargs:
            name = kwargs.get('log_name', method.__name__.upper())
            kwargs['log_time'][name] = int((te - ts) * 1000)
        else:
            print('%r %2.22f ms' % (method.__name__, (te - ts) * 1000))
        return result
    return timed

@timeit
def foo():
    do_some_work()

# foo()
# 'foo' 0.000953 ms
from time import time
start_time = time()
...
end_time = time()
time_taken = end_time - start_time # time_taken is in seconds
hours, rest = divmod(time_taken,3600)
minutes, seconds = divmod(rest, 60)
I've looked at the timeit module, but it seems it's only for small snippets of code. I want to time the whole program.
$ python -mtimeit -n1 -r1 -t -s "from your_module import main" "main()"
It runs the your_module.main() function one time and prints the elapsed time, using the time.time() function as a timer.
To emulate /usr/bin/time in Python see Python subprocess with /usr/bin/time: how to capture timing info but ignore all other output?.
To measure CPU time (e.g., don't include time during time.sleep()) for each function, you could use profile module (cProfile on Python 2):
$ python3 -mprofile your_module.py
You could pass -p to timeit command above if you want to use the same timer as profile module uses.
See How can you profile a Python script?
I was having the same problem in many places, so I created a convenience package horology. You can install it with pip install horology and then do it in the elegant way:
from horology import Timing

with Timing(name='Important calculations: '):
    prepare()
    do_your_stuff()
    finish_sth()
will output:
Important calculations: 12.43 ms
Or even simpler (if you have one function):
from horology import timed

@timed
def main():
    ...
will output:
main: 7.12 h
It takes care of units and rounding. It works with python 3.6 or newer.
I liked Paul McGuire's answer too and came up with a context manager form which suited my needs more.
import datetime as dt
import timeit

class TimingManager(object):
    """Context Manager used with the statement 'with' to time some execution.

    Example:
        with TimingManager() as t:
            # Code to time
    """
    clock = timeit.default_timer

    def __enter__(self):
        self.start = self.clock()
        self.log('\n=> Start Timing: {}')
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.endlog()
        return False

    def log(self, s, elapsed=None):
        """Log current time and elapsed time if present.
        :param s: Text to display, use '{}' to format the text with
                  the current time.
        :param elapsed: Elapsed time to display. Default: None, no display.
        """
        print(s.format(self._secondsToStr(self.clock())))
        if elapsed is not None:
            print('Elapsed time: {}\n'.format(elapsed))

    def endlog(self):
        """Log time for the end of execution with elapsed time."""
        self.log('=> End Timing: {}', self.now())

    def now(self):
        """Return current elapsed time as hh:mm:ss string.
        :return: String.
        """
        return str(dt.timedelta(seconds=self.clock() - self.start))

    def _secondsToStr(self, sec):
        """Convert timestamp to h:mm:ss string.
        :param sec: Timestamp.
        """
        return str(dt.datetime.fromtimestamp(sec))
In IPython, "timeit" any script:
def foo():
    %run bar.py

timeit foo()
Use line_profiler.
line_profiler will profile the time individual lines of code take to execute. The profiler is implemented in C via Cython in order to reduce the overhead of profiling.
from line_profiler import LineProfiler
import random

def do_stuff(numbers):
    s = sum(numbers)
    l = [numbers[i]/43 for i in range(len(numbers))]
    m = ['hello'+str(numbers[i]) for i in range(len(numbers))]

numbers = [random.randint(1,100) for i in range(1000)]
lp = LineProfiler()
lp_wrapper = lp(do_stuff)
lp_wrapper(numbers)
lp.print_stats()
The results will be:
Timer unit: 1e-06 s
Total time: 0.000649 s
File: <ipython-input-2-2e060b054fea>
Function: do_stuff at line 4
Line # Hits Time Per Hit % Time Line Contents
==============================================================
4 def do_stuff(numbers):
5 1 10 10.0 1.5 s = sum(numbers)
6 1 186 186.0 28.7 l = [numbers[i]/43 for i in range(len(numbers))]
7 1 453 453.0 69.8 m = ['hello'+str(numbers[i]) for i in range(len(numbers))]
I used a very simple function to time a part of code execution:
import time

def timing():
    start_time = time.time()
    return lambda x: print("[{:.2f}s] {}".format(time.time() - start_time, x))
To use it, call it before the code you want to measure to retrieve the timing function, then call the returned function after the code with a comment. The time will appear in front of the comment. For example:
t = timing()
train = pd.read_csv('train.csv',
                    dtype={
                        'id': str,
                        'vendor_id': str,
                        'pickup_datetime': str,
                        'dropoff_datetime': str,
                        'passenger_count': int,
                        'pickup_longitude': np.float64,
                        'pickup_latitude': np.float64,
                        'dropoff_longitude': np.float64,
                        'dropoff_latitude': np.float64,
                        'store_and_fwd_flag': str,
                        'trip_duration': int,
                    },
                    parse_dates = ['pickup_datetime', 'dropoff_datetime'],
                   )
t("Loaded {} rows data from 'train'".format(len(train)))
Then the output will look like this:
[9.35s] Loaded 1458644 rows data from 'train'
I tried and found the time difference using the following script.
import time
start_time = time.perf_counter()
[main code here]
print (time.perf_counter() - start_time, "seconds")
A later answer, but I use the built-in timeit:
import timeit
code_to_test = """
a = range(100000)
b = []
for i in a:
b.append(i*2)
"""
elapsed_time = timeit.timeit(code_to_test, number=500)
print(elapsed_time)
# 10.159821493085474
Wrap all your code, including any imports you may have, inside code_to_test.
The number argument specifies how many times the code should repeat.
timeit is a module in Python used to calculate the execution time of small blocks of code.
default_timer is a function in this module that measures wall-clock time, not CPU execution time, so other processes running on the machine can interfere with the measurement. This makes it most useful for small blocks of code.
A sample of the code is as follows:
from timeit import default_timer as timer
start= timer()
# Some logic
end = timer()
print("Time taken:", end-start)
First, install the humanfriendly package by opening the Command Prompt (CMD) as administrator and typing:
pip install humanfriendly
Code:
from humanfriendly import format_timespan
import time
begin_time = time.time()
# Put your code here
end_time = time.time() - begin_time
print("Total execution time: ", format_timespan(end_time))
You can do this simply in Python. There is no need to make it complicated.
import time

start = time.localtime()
# your code here
end = time.localtime()

"""Total execution time in minutes"""
print(end.tm_min - start.tm_min)
"""Total execution time in seconds"""
print(end.tm_sec - start.tm_sec)
There is a timeit module which can be used to time the execution times of Python code.
It has detailed documentation and examples in Python documentation, 26.6. timeit — Measure execution time of small code snippets.
Following this answer, I created a simple but convenient instrument.
import time
from datetime import timedelta
def start_time_measure(message=None):
    if message:
        print(message)
    return time.monotonic()

def end_time_measure(start_time, print_prefix=None):
    end_time = time.monotonic()
    if print_prefix:
        print(print_prefix + str(timedelta(seconds=end_time - start_time)))
    return end_time
Usage:
total_start_time = start_time_measure()
start_time = start_time_measure('Doing something...')
# Do something
end_time_measure(start_time, 'Done in: ')
start_time = start_time_measure('Doing something else...')
# Do something else
end_time_measure(start_time, 'Done in: ')
end_time_measure(total_start_time, 'Total time: ')
The output:
Doing something...
Done in: 0:00:01.218000
Doing something else...
Done in: 0:00:01.313000
Total time: 0:00:02.672000
This is Paul McGuire's answer that works for me. Just in case someone was having trouble running that one.
import atexit
from time import clock

def reduce(function, iterable, initializer=None):
    it = iter(iterable)
    if initializer is None:
        value = next(it)
    else:
        value = initializer
    for element in it:
        value = function(value, element)
    return value

def secondsToStr(t):
    return "%d:%02d:%02d.%03d" % \
        reduce(lambda ll,b : divmod(ll[0],b) + ll[1:],
            [(t*1000,),1000,60,60])

line = "="*40
def log(s, elapsed=None):
    print(line)
    print(secondsToStr(clock()), '-', s)
    if elapsed:
        print("Elapsed time:", elapsed)
    print(line)

def endlog():
    end = clock()
    elapsed = end - start
    log("End Program", secondsToStr(elapsed))

def now():
    return secondsToStr(clock())

def main():
    global start  # endlog() reads this module-level name
    start = clock()
    atexit.register(endlog)
    log("Start Program")
Call timing.main() from your program after importing the file.
The measured execution time of a Python program can be inconsistent, because:
The same problem can be solved using different algorithms
Running time varies between algorithms
Running time varies between implementations
Running time varies between computers
Running time is not predictable based on small inputs
The most effective way to reason about performance is to analyze the "order of growth" and learn Big O notation.
Anyway, you can try to evaluate the performance of any Python program on a specific machine by counting steps per second with this simple algorithm (adapt it to the program you want to evaluate):
import time

now = time.time()
future = now + 10
step = 4  # Why start at 4? Because four operations have already executed by this point
while time.time() < future:
    step += 3  # Why 3 more? Because each loop iteration executes one comparison and one augmented assignment
step += 4  # Why 4 more? Because of the final comparison that ends the while loop, plus the final assignment, division and print statement
print(str(int(step / 10)) + " steps per second")

how to measure execution time of python code

I need help with this code:
import timeit
mysetup=""
mycode = '''
def gener():
    ...my code here...
    return x
'''
# timeit statement
print (timeit.timeit(setup = mysetup,
stmt = mycode,
number = 1000000))
print("done")
As a result I got 0.0008606994517737132.
As I read it, this unit is in seconds.
So my function executed 1 million times in 0.8 ms?
I think this is not real; it's too fast.
I also tried basic option
start = time.time()
my code here
end = time.time()
print(end - start)
and got 0.23901081085205078 for a single execution, which seems a little slow...
So what am I doing wrong?
Thanks
The way you have defined this in mycode for the timeit method, all that is going to happen is that the function gener will be defined, not run. You need to call the function in your code block in order to measure its execution time.
As for what length of time is reasonable (too fast/too slow) it very much depends on what your code is doing. But I suspect you have executed the function in method 2 and only defined it in method 1, hence the discrepancy.
Edit: example code
To illustrate the difference, in the example below the block code1 just defines a function, it does not execute it. The block code2 defines and executes the function.
import timeit
code1 = '''
def gener():
    time.sleep(0.01)
'''

code2 = '''
def gener():
    time.sleep(0.01)
gener()
'''
We should expect running time.sleep(0.01) 100 times to take approximately 1 second. Running timeit for code1 returns ~ 10^-5 seconds, because the function gener is not actually being called:
timeit.timeit(stmt=code1, number=100)
Running timeit for code2 returns the expected result of ~1 second:
timeit.timeit(stmt=code2, number=100)
Further to this, the point of the setup argument is to do setup (the parts of the code which are not meant to be timed). If you want timeit to capture the execution time of gener, you should use this:
import timeit
setup = '''
def gener():
    time.sleep(0.01)
'''
stmt = "gener()"
timeit.timeit(setup=setup, stmt=stmt, number=100)
This returns the time taken to run gener 100 times, not including the time taken to define it.
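If you want the average time per call rather than the total, divide by number (a small sketch building on the snippet above, with the import spelled out in the setup string):

import timeit

setup = '''
import time
def gener():
    time.sleep(0.01)
'''

n = 100
total = timeit.timeit(setup=setup, stmt="gener()", number=n)
print(total / n, "seconds per call on average")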
Here is a general way to measure time of code snippets.
import time

class timer(object):
    """
    A simple timer used to time blocks of code. Usage as follows:

    with timer("optional_name"):
        some code ...
        some more code
    """
    def __init__(self, name=None):
        self.name = name

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        self.end = time.time()
        self.interval = self.end - self.start
        if self.name:
            print("{} - Elapsed time: {:.4f}s".format(self.name, self.interval))
        else:
            print("Elapsed time: {:.4f}s".format(self.interval))
gist available here: https://gist.github.com/Jakobovski/191b9e95ac964b61e8abc7436111d1f9
If you want to time a function timeit can be used like so:
import timeit

# defining some function you want to time
def test(n):
    s = 0
    for i in range(n):
        s += i
    return s

# defining a function which runs the function to be timed with desired input arguments
timed_func = lambda : test(1000)
# the above is done so that we have a function which takes no input arguments

N = 10000 # number of repeats
time_per_run = timeit.timeit(stmt=timed_func, number=N)/N
For your case you can do:
# defining some function you want to time
def gener():
    ...my code here...
    return x

N = 1000000 # number of repeats
time_per_run = timeit.timeit(stmt=gener, number=N)/N
Any importing of libraries can be done globally before calling the timeit function and timeit will use the globally imported libraries
e.g.
import numpy as np

# defining some function you want to time
def gener():
    ...my code here...
    x = np.sqrt(y)
    return x

N = 1000000 # number of repeats
time_per_run = timeit.timeit(stmt=gener, number=N)/N
Working code
# importing the required module
import timeit
# code snippet to be executed only once
mysetup = '''
from collections import OrderedDict

def gener():
    some lines of code here
    return x'''
# code snippet whose execution time is to be measured
mycode="gener()"
# timeit statement
nb=10
print("The code run {} time in: ".format(nb ))
print("{} secondes".format(timeit.timeit(setup = mysetup,
stmt = mycode,
number = nb)))
print("done")
The execution time was almost the same as the one obtained with the basic measurement below:
start = time.time()
my code here
end = time.time()
print(end - start)
0.23 sec with timeit and 0.24 with the basic measurement code above; they both fluctuate...
So thanks, question resolved.

Printing time in Python multiprocessing script return negative time elapsed

Running it on Ubuntu 14 with Python 2.7.6
I simplified script to show my problem:
import time
import multiprocessing

data = range(1, 3)
start_time = time.clock()

def lol():
    for i in data:
        print time.clock() - start_time, "lol seconds"

def worker(n):
    print time.clock() - start_time, "multiprocesor seconds"

def mp_handler():
    p = multiprocessing.Pool(1)
    p.map(worker, data)

if __name__ == '__main__':
    lol()
    mp_handler()
And the output:
8e-06 lol seconds
6.9e-05 lol seconds
-0.030019 multiprocesor seconds
-0.029907 multiprocesor seconds
Process finished with exit code 0
Using time.time() gives non-negative values (as marked here Timer shows negative time elapsed)
but I'm curious what is the problem with time.clock() in python multiprocessing and reading time from CPU.
multiprocessing spawns new processes, and time.clock() on Linux has the same meaning as C's clock():
The value returned is the CPU time used so far as a clock_t;
So the values returned by clock restart from 0 when a process starts. However, your code uses the parent process's start_time to determine the time spent in the child process, which is obviously incorrect if the child process's CPU time resets.
The clock() function makes sense only when handling one process, because its return value is the CPU time spent by that process. Child processes are not taken into account.
The time() function on the other hand uses a system-wide clock, and thus can be used even between different processes (although it is not monotonic, so it might return wrong results if somebody changes the system time during the events).
Forking a running Python instance is probably faster than starting a new one from scratch, hence start_time is almost always bigger than the value returned by time.clock() in the child.
Take into account that the parent process also had to read your file from disk and perform the imports, which may require reading other .py files, searching directories, etc.
The forked child processes don't have to do all that.
Example code that shows that the return value of time.clock() resets to 0:
from __future__ import print_function
import time
import multiprocessing
data = range(1, 3)
start_time = time.clock()
def lol():
    for i in data:
        t = time.clock()
        print('t: ', t, end='\t')
        print(t - start_time, "lol seconds")

def worker(n):
    t = time.clock()
    print('t: ', t, end='\t')
    print(t - start_time, "multiprocesor seconds")

def mp_handler():
    p = multiprocessing.Pool(1)
    p.map(worker, data)

if __name__ == '__main__':
    print('start_time', start_time)
    lol()
    mp_handler()
Result:
$python ./testing.py
start_time 0.020721
t: 0.020779 5.8e-05 lol seconds
t: 0.020804 8.3e-05 lol seconds
t: 0.001036 -0.019685 multiprocesor seconds
t: 0.001166 -0.019555 multiprocesor seconds
Note how t is monotonic for the lol case while goes back to 0.001 in the other case.
To add a concise Python 3 example to Bakuriu's excellent answer above you can use the following method to get a global timer independent of the subprocesses:
import multiprocessing as mp
import time

# create iterable
iterable = range(4)

# adds three to the given element
def add_3(num):
    a = num + 3
    return a

# multiprocessing attempt
def main():
    pool = mp.Pool(2)
    results = pool.map(add_3, iterable)
    return results

if __name__ == "__main__":  # Required not to spawn deviant children
    start = time.time()
    results = main()
    print(list(results))

    elapsed = (time.time() - start)
    print("\n", "time elapsed is :", elapsed)
Note that if we had instead used time.process_time() instead of time.time() we will get an undesired result.

Accurate timing of functions in python

I'm programming in python on windows and would like to accurately measure the time it takes for a function to run. I have written a function "time_it" that takes another function, runs it, and returns the time it took to run.
def time_it(f, *args):
    start = time.clock()
    f(*args)
    return (time.clock() - start)*1000
I call this 1000 times and average the result. (The 1000 constant at the end is to give the answer in milliseconds.)
This function seems to work, but I have this nagging feeling that I'm doing something wrong, and that by doing it this way I'm using more time than the function actually uses when it's running.
Is there a more standard or accepted way to do this?
When I changed my test function to call a print so that it takes longer, my time_it function returns an average of 2.5 ms while cProfile.run('f()') returns an average of 7.0 ms. I figured my function would overestimate the time if anything, so what is going on here?
One additional note: it is the relative time of functions compared to each other that I care about, not the absolute time, as this will obviously vary depending on hardware and other factors.
Use the timeit module from the Python standard library.
Basic usage:
from timeit import Timer
# first argument is the code to be run, the second "setup" argument is only run once,
# and it is not included in the execution time.
t = Timer("""x.index(123)""", setup="""x = range(1000)""")
print t.timeit() # prints float, for example 5.8254
# ..or..
print t.timeit(1000) # repeat 1000 times instead of the default 1million
Instead of writing your own profiling code, I suggest you check out the built-in Python profilers (profile or cProfile, depending on your needs): http://docs.python.org/library/profile.html
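For a single call, a minimal cProfile example looks like this (f here is just a placeholder function to inspect):

import cProfile

def f():
    return sum(i * i for i in range(100000))

cProfile.run('f()')   # prints a per-function breakdown of call counts and times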
You can create a "timeme" decorator like so
import time

def timeme(method):
    def wrapper(*args, **kw):
        startTime = int(round(time.time() * 1000))
        result = method(*args, **kw)
        endTime = int(round(time.time() * 1000))
        print(endTime - startTime, 'ms')
        return result
    return wrapper

@timeme
def func1(a, b, c='c', sleep=1):
    time.sleep(sleep)
    print(a, b, c)

func1('a', 'b', 'c', 0)
func1('a', 'b', 'c', 0.5)
func1('a', 'b', 'c', 0.6)
func1('a', 'b', 'c', 1)
This code is very inaccurate
total = 0
for i in range(1000):
    start = time.clock()
    function()
    end = time.clock()
    total += end - start

time = total / 1000
This code is less inaccurate
start = time.clock()
for i in range(1000):
    function()
end = time.clock()

time = (end - start) / 1000
The very inaccurate version suffers from measurement bias if the run time of the function is close to the accuracy of the clock. Most of the measured times are merely random numbers between 0 and a few ticks of the clock.
Depending on your system workload, the "time" you observe from a single function may be entirely an artifact of OS scheduling and other uncontrollable overheads.
The second version (less inaccurate) has less measurement bias. If your function is really fast, you may need to run it 10,000 times to damp out OS scheduling and other overheads.
Both are, of course, terribly misleading. The run time for your program -- as a whole -- is not the sum of the function run-times. You can only use the numbers for relative comparisons. They are not absolute measurements that convey much meaning.
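For relative comparisons, taking the minimum of several timeit.repeat runs is a reasonable, noise-resistant sketch (function_a and function_b are placeholders for whatever you are comparing):

import timeit

def function_a():
    return [i * 2 for i in range(1000)]

def function_b():
    return list(map(lambda i: i * 2, range(1000)))

best_a = min(timeit.repeat(function_a, number=1000, repeat=5))
best_b = min(timeit.repeat(function_b, number=1000, repeat=5))
print("function_a is %.2fx the speed of function_b" % (best_b / best_a))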
If you want to time a Python method even if the block you measure may throw, one good approach is to use the with statement. Define a Timer class as:
import time

class Timer:
    def __enter__(self):
        self.start = time.clock()
        return self

    def __exit__(self, *args):
        self.end = time.clock()
        self.interval = self.end - self.start
Then you may want to time a connection method that may throw. Use
import httplib

with Timer() as t:
    conn = httplib.HTTPConnection('google.com')
    conn.request('GET', '/')

print('Request took %.03f sec.' % t.interval)
The __exit__() method will be called even if the connection request throws. More precisely, you'd have to use try/finally to see the result in case it throws, as in:
try:
    with Timer() as t:
        conn = httplib.HTTPConnection('google.com')
        conn.request('GET', '/')
finally:
    print('Request took %.03f sec.' % t.interval)
More details here.
This is neater:
from contextlib import contextmanager
import time

@contextmanager
def timeblock(label):
    start = time.clock()
    try:
        yield
    finally:
        end = time.clock()
        print('{} : {}'.format(label, end - start))

with timeblock("just a test"):
    print("yippee")
Similar to @AlexMartelli's answer:
import timeit
timeit.timeit(fun, number=10000)
can do the trick.
