How can I implement this IPython code in plain Python?
In [1]: %timeit sum(list(range(1000)))
PS: I want to do it in a single line of code. I have tried several times but failed every time.
Thanks.
This will give you the total time taken for all runs (in seconds):
from timeit import timeit
timeTaken = timeit(lambda: sum(list(range(1000))), number=100000)
You can look up the timeit documentation for more options.
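Note that timeit returns the total time for all number executions, not the per-run figure that %timeit reports. A minimal sketch of converting to a per-run average:
from timeit import timeit

number = 100000
total = timeit(lambda: sum(list(range(1000))), number=number)
print(total / number)  # average seconds per run, comparable to %timeit's output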
Related
I want to benchmark a bit of Python code (not the language I am used to, but I have to do some comparisons in Python). I understand that timeit is a good tool for this, and I have code like this:
n = 10
duration = timeit.Timer(my_func).timeit(number=n)
duration/n
to measure the mean runtime of the function. Now, I want to instead have the median time (the reason is that I want to make a comparison to something I get in median time, and it would be good to use the same measure in all cases). Now, timeit only seems to return the full runtime, and not the time of each individual run, so I am not sure how to find the median runtime. What is the best way to get this?
You can use the repeat method instead, which gives you the individual times as a list:
import timeit
from statistics import median
def my_func():
    for _ in range(1000000):
        pass
n = 10
durations = timeit.Timer(my_func).repeat(repeat=n, number=1)
print(median(durations))
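With number=1, each element of durations is the time for a single call of my_func, so the median is directly a per-call time; for very fast snippets, raise number and divide each element by it.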
The timeit module is great for measuring the execution time of small code snippets, but when the code changes global state (like an import does) it's really hard to get accurate timings.
For example, if I want to time how long it takes to import a module, the first import will take much longer than subsequent imports, because after the first one the submodules and dependencies are already imported and the files are already cached. So using a bigger number of repeats, as in:
>>> import timeit
>>> timeit.timeit('import numpy', number=1)
0.2819331711316805
>>> # Start a new Python session:
>>> timeit.timeit('import numpy', number=1000)
0.3035142574359181
doesn't really work, because the time for one execution is almost the same as for 1000 rounds. I could execute the command to "reload" the package:
>>> timeit.timeit('imp.reload(numpy)', 'import importlib as imp; import numpy', number=1000)
3.6543283935557156
But the fact that 1000 reloads take only about 10 times longer than a single fresh import seems to suggest reloading isn't accurate either.
It also seems impossible to unload a module entirely ("Unload a module in Python").
So the question is: what would be an appropriate way to accurately measure the import time?
Since it's nearly impossible to fully unload a module, the idea behind this answer is to time whole interpreter processes instead. You could run a loop in a Python script that launches a Python command importing numpy x times and another one doing nothing, then subtract the two totals and average:
import subprocess
import sys
import time

n = 100
python_load_time = 0
numpy_load_time = 0
for i in range(n):
    # Time a fresh interpreter that imports numpy
    s = time.time()
    subprocess.call([sys.executable, "-c", "import numpy"])
    numpy_load_time += time.time() - s
    # Time a fresh interpreter that does nothing, as a baseline
    s = time.time()
    subprocess.call([sys.executable, "-c", "pass"])
    python_load_time += time.time() - s
print("average numpy load time = {}".format((numpy_load_time - python_load_time) / n))
On the command line, I can do:
python -m timeit 'a = 1'
According to the docs:
If -n is not given, a suitable number of loops is calculated by trying successive powers
of 10 until the total time is at least 0.2 seconds.
This works great, but how can I get the same behavior when using timeit in my program? If I leave out the number argument to the timeit.timeit call, it will simply default to 1000000.
The docs don't make it obvious that such functionality exists, so I'll assume it doesn't. Fortunately, defining a wrapper that does it for you isn't too hard:
import timeit
def auto_timeit(stmt='pass', setup='pass'):
    n = 1
    t = timeit.timeit(stmt, setup, number=n)
    while t < 0.2:
        n *= 10
        t = timeit.timeit(stmt, setup, number=n)
    return t / n  # normalise to time per run
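As a side note, Python 3.6 added timeit.Timer.autorange(), which implements essentially the same loop-count search, so on recent versions you may not need the wrapper at all:
import timeit

# autorange() increases the loop count until the total time is >= 0.2 s
# and returns (number_of_loops, total_time_taken).
number, total = timeit.Timer('a = 1').autorange()
print(total / number)  # seconds per run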
Here is an alternative to writing a wrapper:
import subprocess
import sys

stmt = 'a = 1'  # the statement to time
subprocess.call([sys.executable, '-m', 'timeit', stmt])
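This prints timeit's report to stdout rather than returning a number; if you want the report as a string, one option (a sketch using subprocess.check_output) is:
import subprocess
import sys

stmt = 'a = 1'
# Returns the CLI's human-readable report, e.g.
# "20000000 loops, best of 5: 11.1 nsec per loop"
report = subprocess.check_output([sys.executable, '-m', 'timeit', stmt], text=True)
print(report)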
I have a function defined by a combination of basic math functions (abs, cosh, sinh, exp, ...).
I was wondering if it makes a difference (in speed) to use, for example,
numpy.abs() instead of abs()?
Here are the timing results:
lebigot@weinberg ~ % python -m timeit 'abs(3.15)'
10000000 loops, best of 3: 0.146 usec per loop
lebigot@weinberg ~ % python -m timeit -s 'from numpy import abs as nabs' 'nabs(3.15)'
100000 loops, best of 3: 3.92 usec per loop
numpy.abs() is slower than abs() because it also handles Numpy arrays: it contains additional code that provides this flexibility.
However, Numpy is fast on arrays:
lebigot@weinberg ~ % python -m timeit -s 'a = [3.15]*1000' '[abs(x) for x in a]'
10000 loops, best of 3: 186 usec per loop
lebigot@weinberg ~ % python -m timeit -s 'import numpy; a = numpy.empty(1000); a.fill(3.15)' 'numpy.abs(a)'
100000 loops, best of 3: 6.47 usec per loop
(PS: '[abs(x) for x in a]' is slower in Python 2.7 than map(abs, a), which is about 30% faster, though still much slower than NumPy.)
Thus, numpy.abs() does not take much more time for 1000 elements than for 1 single float!
You should use NumPy functions to deal with NumPy types and regular Python functions to deal with regular Python types.
The worst performance usually occurs when mixing Python built-ins with NumPy, because of type conversions. Those type conversions have been optimized lately, but it's still often better to avoid them. Of course, your mileage may vary, so use profiling tools to figure it out.
Also consider using tools like Cython, or writing a C module, if you want to optimize your program further. Or consider not using Python at all when performance matters.
But once your data has been put into a NumPy array, NumPy can be really fast at computing over the whole batch of data.
In fact, on NumPy arrays the built-in abs calls NumPy's implementation via __abs__; see "Why built-in functions like abs works on numpy array?"
So, in theory there shouldn't be much performance difference.
import timeit
import numpy as np

x = np.random.standard_normal(10000)

def pure_abs():
    return abs(x)

def numpy_abs():
    return np.abs(x)

n = 10000
t1 = timeit.timeit(pure_abs, number=n)
print('Pure Python abs:', t1)
t2 = timeit.timeit(numpy_abs, number=n)
print('Numpy abs:', t2)
Pure Python abs: 0.435754060745
Numpy abs: 0.426516056061
I am trying to measure the time taken by the pow function to compute a modular exponentiation. With the values of g, x, p hardcoded, the code gives an error, and with the values placed in the pow function, the code hangs. The same piece of code works efficiently when I use time() and clock() to measure the time taken.
I wanted accuracy, and for that I have now moved to the timeit module after testing with the clock() and time() functions.
The code works fine with small values such as pow(2, 3, 5), which makes sense. How can I improve the efficiency of measuring the time with the timeit module?
Also, I am a beginner at Python, so forgive me if there is any silly mistake in the code.
import math
import random
import hashlib
import time
from timeit import Timer
g = 141802876407053547664378835005750805370737584038368838959151050908654130616798415530564917923311706921535439557793280725844349256960807398107370211978304
x = 1207729835787890214
p = 4870352607375058055471602136317178172283784073796673298937466544646468718314482464390112574915498953621226853454222898392076852427324057496200810018794472
t = Timer('pow(g,x,p)', 'import math')
z = t.timeit()
print('the value of z is:', z)
Thanks
There are two issues here:
You can't directly access globals from timeit: See this question. You can use this to fix the error:
t = Timer('pow(g,x,p)', 'from __main__ import g,x,p')
Or just put the numerical values directly in the string.
By default, the timeit module runs 1000000 iterations, which will take much too long here. You can change the number of iterations, for example:
z = t.timeit(1000)
This will prevent what seems like a hang (but is actually just a very long calculation).
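Putting both fixes together, a minimal sketch of the corrected timing code (with smaller placeholder values for illustration; substitute the large g, x, p from the question):
from timeit import Timer

# Hypothetical small values for illustration only.
g = 2 ** 512 + 1
x = 2 ** 61 - 1
p = 2 ** 521 - 1

t = Timer('pow(g, x, p)', 'from __main__ import g, x, p')
z = t.timeit(number=1000)  # 1000 iterations instead of the default 1,000,000
print('the value of z is:', z)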