I have an application that needs to initialize a large number of objects in Python (3.5.2), and I encounter occasional slow-downs.
The slow-down seems to occur on one specific initialization: most of the calls to __init__ last less than a microsecond, but one of them sometimes lasts several dozen seconds.
I've been able to reproduce this with the following snippet, which initializes 500k instances of a simple object.
import cProfile

class A:
    def __init__(self):
        pass

cProfile.run('[A() for _ in range(500000)]')
I'm running this code in a notebook. Most of the time (about 9 runs out of 10), it produces the following output (normal execution):
500004 function calls in 0.675 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
500000 0.031 0.000 0.031 0.000 <ipython-input-5-634b77609653>:2(__init__)
1 0.627 0.627 0.657 0.657 <string>:1(<listcomp>)
1 0.018 0.018 0.675 0.675 <string>:1(<module>)
1 0.000 0.000 0.675 0.675 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
The remaining times, it produces the following output (slow execution):
500004 function calls in 40.154 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
500000 0.031 0.000 0.031 0.000 <ipython-input-74-634b77609653>:2(__init__)
1 40.110 40.110 40.140 40.140 <string>:1(<listcomp>)
1 0.014 0.014 40.154 40.154 <string>:1(<module>)
1 0.000 0.000 40.154 40.154 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Using tqdm, the loop appears to get stuck on a single iteration. It's important to note that I was able to reproduce this in a notebook that already had a lot of memory allocated.
I suspect it comes from the garbage collector's list of tracked object references, which might need to be copied (reallocated) from time to time.
What exactly is happening here, and is there any way to avoid it?
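For what it's worth, a minimal diagnostic sketch (my own assumption about how to test the garbage-collector hypothesis, not a confirmed fix) that reruns the same allocation with automatic collection disabled:

import cProfile
import gc

class A:
    def __init__(self):
        pass

# If the multi-second pause disappears with automatic collection off,
# the stall is very likely a GC pass (or growth of GC-internal structures)
# triggered by the large number of newly tracked objects.
gc.disable()
try:
    cProfile.run('[A() for _ in range(500000)]')
finally:
    gc.enable()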
Related
I need a performance test for code that is objective and roughly the same across all machines. Timing the code does not work since it's tied to your or my machine's specs, but counting the instructions issued by the CPU does (with minor differences).
I can use strace on Linux, but my god it's slow, and I just want the total, not the individual calls.
Say:
def foo(bar):
    for i in range(bar):
        print(i)

foo(10)
This will execute at different speeds on different machines (bear with me, and imagine a more complicated algorithm). But the number of operations performed is the same: 10 I/O calls. This matters because on a faster computer you won't notice something that takes a millisecond there but 5 seconds on my machine.
Is there a way to count the number of CPU instructions executed in Python?
I'm asking because I want to know if a refactor will 2x my CPU instructions.
Thank you.
You can use the Python profiler cProfile
$ python -m cProfile euler048.py
1007 function calls in 0.061 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.061 0.061 <string>:1(<module>)
1000 0.051 0.000 0.051 0.000 euler048.py:2(<lambda>)
1 0.005 0.005 0.061 0.061 euler048.py:2(<module>)
1 0.000 0.000 0.061 0.061 {execfile}
1 0.002 0.002 0.053 0.053 {map}
1 0.000 0.000 0.000 0.000 {method someMethod}
1 0.000 0.000 0.000 0.000 {range}
1 0.003 0.003 0.003 0.003 {sum}
excerpt from the previous question I linked, hope this helps
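If you prefer to collect those numbers programmatically rather than reading the command-line report, a minimal sketch using cProfile together with pstats (the workload function below is a hypothetical stand-in for euler048.py):

import cProfile
import pstats

def workload():
    # hypothetical stand-in for the script being measured
    return sum(i * i for i in range(10000))

cProfile.run('workload()', 'workload.prof')     # dump raw stats to a file
stats = pstats.Stats('workload.prof')
stats.sort_stats('cumulative').print_stats(10)  # header line reports the total call count

The total number of function calls in the report header is the kind of machine-independent count the question is after, even though the timings themselves are not.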
This is the profiling result of my Python code.
As you can see below, the method 'recv_into' of '_socket.socket' objects takes too much time (17.265 seconds of tottime).
What is it? And is there any way to reduce its time?
When is it called?
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.402 0.402 37.668 37.668 c:\Users\user\Google 드라이브\Business\Project\Jessica Project\jessica-1\simulation\simulatorW.py:239(backtestWithArgumentsList)
1 0.173 0.173 26.762 26.762 c:\Users\user\Google 드라이브\Business\Project\Jessica Project\jessica-1\simulation\simulatorW.py:110(getPrices)
1 0.000 0.000 26.588 26.588 c:\Users\user\Google 드라이브\Business\Project\Jessica Project\jessica-1\dto\__init__.py:5(__init__)
1 1.734 1.734 25.380 25.380 c:\Users\user\Google 드라이브\Business\Project\Jessica Project\jessica-1\dto\__init__.py:21(priceInfoListToDeque)
815679 2.204 0.000 23.473 0.000 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\cursor.py:1152(next)
13 0.021 0.002 20.631 1.587 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\cursor.py:1039(_refresh)
12 0.008 0.001 20.609 1.717 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\cursor.py:937(__send_message)
12 0.000 0.000 20.601 1.717 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\mongo_client.py:1306(_run_operation_with_response)
12 0.000 0.000 20.601 1.717 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\mongo_client.py:1437(_retryable_read)
12 0.000 0.000 20.597 1.716 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\mongo_client.py:1334(_cmd)
12 0.001 0.000 20.597 1.716 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\server.py:70(run_operation_with_response)
18 0.001 0.000 17.386 0.966 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\network.py:192(receive_message)
12 0.013 0.001 17.379 1.448 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\pool.py:637(receive_message)
36 0.066 0.002 17.331 0.481 C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\network.py:249(_receive_data_on_socket)
19984 17.265 0.001 17.265 0.001 {method 'recv_into' of '_socket.socket' objects}
1 2.499 2.499 6.522 6.522 c:\Users\user\Google 드라이브\Business\Project\Jessica Project\jessica-1\simulation\simulatorW.py:138(filterIndicesWithTimeCondition)
It's a low level networking call. This is the time spent reading whatever you are loading. Take a look at its callers.
p.print_callers("{method 'recv_into' of '_socket.socket' objects}")
Keep going up the caller tree, picking the entries with the longer times. Remember that the restriction is a regexp, so use escapes when necessary:
p.sort_stats("tottime").print_callers("api.py:104\(post\)")
The top 4 lines are more interesting than the recv_into one; if you go up the caller tree, you're likely to end up in one of them. Since no details are provided, there could be many ways to optimize them: caching, compressing, fetching only what you need, and otherwise reducing the network footprint.
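For context, a minimal sketch of how those pstats calls fit together, assuming the profile was dumped to a file called result.prof (the filename is an assumption):

import pstats

p = pstats.Stats('result.prof')   # hypothetical dump file

# Who calls the low-level socket read?
p.sort_stats('cumulative').print_callers("recv_into")

# Then walk up one level at a time, escaping regexp metacharacters as needed.
p.sort_stats('tottime').print_callers(r"network\.py:249\(_receive_data_on_socket\)")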
I have some Python code that is generating a large data set via numerical simulation. The code is using Numpy for a lot of the calculations and Pandas for a lot of the top-level data. The data sets are large so the code is running slowly, and now I'm trying to see if I can use cProfile to find and fix some hot spots.
The trouble is that cProfile is identifying a lot of the hot spots as pieces of code within Pandas, within Numpy, and/or Python builtins. Here are the cProfile statistics sorted by 'tottime' (total time within the function itself). Note that I'm obscuring project name and file names since the code itself is not owned by me and I don't have permission to share details.
foo.sort_stats('tottime').print_stats(50)
Wed Jun 5 13:18:28 2019 c:\localwork\xxxxxx\profile_data
297514385 function calls (291105230 primitive calls) in 306.898 seconds
Ordered by: internal time
List reduced from 4141 to 50 due to restriction <50>
ncalls tottime percall cumtime percall filename:lineno(function)
281307 31.918 0.000 34.731 0.000 {pandas._libs.lib.infer_dtype}
800 31.443 0.039 31.476 0.039 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\numpy\lib\function_base.py:4703(delete)
109668 23.837 0.000 23.837 0.000 {method 'clear' of 'dict' objects}
153481 19.369 0.000 19.369 0.000 {method 'ravel' of 'numpy.ndarray' objects}
5861614 14.182 0.000 78.492 0.000 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\indexes\base.py:3090(get_value)
5861614 8.891 0.000 8.891 0.000 {method 'get_value' of 'pandas._libs.index.IndexEngine' objects}
5861614 8.376 0.000 99.084 0.000 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\series.py:764(__getitem__)
26840695 7.032 0.000 11.009 0.000 {built-in method builtins.isinstance}
26489324 6.547 0.000 14.410 0.000 {built-in method builtins.getattr}
11846279 6.177 0.000 19.809 0.000 {pandas._libs.lib.values_from_object}
[...]
Is there a sensible way for me to figure out which parts of my code are excessively leaning on these library functions and built-ins? I anticipate one answer would be "look at the cumulative time statistics, that will probably indicate where these costly calls are originating". The cumulative times give a little bit of insight:
foo.sort_stats('cumulative').print_stats(50)
Wed Jun 5 13:18:28 2019 c:\localwork\xxxxxx\profile_data
297514385 function calls (291105230 primitive calls) in 306.898 seconds
Ordered by: cumulative time
List reduced from 4141 to 50 due to restriction <50>
ncalls tottime percall cumtime percall filename:lineno(function)
643/1 0.007 0.000 307.043 307.043 {built-in method builtins.exec}
1 0.000 0.000 307.043 307.043 xxxxxx.py:1(<module>)
1 0.002 0.002 306.014 306.014 xxxxxx.py:264(write_xxx_data)
1 0.187 0.187 305.991 305.991 xxxxxx.py:256(write_yyyy_data)
1 0.077 0.077 305.797 305.797 xxxxxx.py:250(make_zzzzzzz)
1 0.108 0.108 187.845 187.845 xxxxxx.py:224(generate_xyzxyz)
108223 1.977 0.000 142.816 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\indexing.py:298(_setitem_with_indexer)
1 0.799 0.799 126.733 126.733 xxxxxx.py:63(populate_abcabc_data)
1 0.030 0.030 117.874 117.874 xxxxxx.py:253(<listcomp>)
7201 0.077 0.000 116.612 0.016 C:\LocalWork\xxxxxx\yyyyyyyyyyyy.py:234(xxx_yyyyyy)
108021 0.497 0.000 112.908 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\indexing.py:182(__setitem__)
5861614 8.376 0.000 99.084 0.000 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\series.py:764(__getitem__)
110024 0.917 0.000 81.210 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\internals.py:3500(apply)
108021 0.185 0.000 80.685 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\internals.py:3692(setitem)
5861614 14.182 0.000 78.492 0.000 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\indexes\base.py:3090(get_value)
108021 1.887 0.000 73.064 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\internals.py:819(setitem)
[...]
Is there a good way to pin down the hot spots -- better than "crawl through xxxxxx.py and search for all places where Pandas might be inferring a datatype, and where Numpy might be deleting objects"...?
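One approach worth sketching, assuming foo is the same pstats.Stats object used above: ask pstats which callers funnel time into the expensive library routines, so you don't have to crawl the source by hand. print_callers takes a regexp restriction:

# Show which callers (ideally lines in xxxxxx.py) lead into the expensive
# pandas/numpy entries from the tottime listing above.
foo.sort_stats('cumulative').print_callers('infer_dtype')
foo.sort_stats('cumulative').print_callers(r'function_base\.py.*\(delete\)')
foo.sort_stats('cumulative').print_callers(r'series\.py.*\(__getitem__\)')

Working upward from those listings should point at the specific lines of xxxxxx.py that trigger the costly calls.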
I am trying to parallelize an embarrassingly parallel for loop (previously asked here) and settled on this implementation that fits my parameters:
with Manager() as proxy_manager:
    shared_inputs = proxy_manager.list([datasets, train_size_common, feat_sel_size, train_perc,
                                        total_test_samples, num_classes, num_features, label_set,
                                        method_names, pos_class_index, out_results_dir, exhaustive_search])

    partial_func_holdout = partial(holdout_trial_compare_datasets, *shared_inputs)

    with Pool(processes=num_procs) as pool:
        cv_results = pool.map(partial_func_holdout, range(num_repetitions))
The reason I need to use a proxy object (shared between processes) is the first element in the shared proxy list, datasets, which is a list of large objects (each about 200-300 MB). This datasets list usually has 5-25 elements. I typically need to run this program on an HPC cluster.
Here is the question: when I run this program with 32 processes and 50 GB of memory (num_repetitions=200, with datasets being a list of 10 objects of 250 MB each), I do not see a speedup even by a factor of 16 (with 32 parallel processes). I do not understand why. Any clues? Any obvious mistakes or bad choices? Where can I improve this implementation? Any alternatives?
I am sure this has been discussed before, and the reasons can be varied and very implementation-specific, hence I'd appreciate your 2 cents. Thanks.
Update: I did some profiling with cProfile to get a better idea - here is some info, sorted by cumulative time.
In [19]: p.sort_stats('cumulative').print_stats(50)
Mon Oct 16 16:43:59 2017 profiling_log.txt
555404 function calls (543552 primitive calls) in 662.201 seconds
Ordered by: cumulative time
List reduced from 4510 to 50 due to restriction <50>
ncalls tottime percall cumtime percall filename:lineno(function)
897/1 0.044 0.000 662.202 662.202 {built-in method builtins.exec}
1 0.000 0.000 662.202 662.202 test_rhst.py:2(<module>)
1 0.001 0.001 661.341 661.341 test_rhst.py:70(test_chance_classifier_binary)
1 0.000 0.000 661.336 661.336 /Users/Reddy/dev/neuropredict/neuropredict/rhst.py:677(run)
4 0.000 0.000 661.233 165.308 /Users/Reddy/anaconda/envs/py36/lib/python3.6/threading.py:533(wait)
4 0.000 0.000 661.233 165.308 /Users/Reddy/anaconda/envs/py36/lib/python3.6/threading.py:263(wait)
23 661.233 28.749 661.233 28.749 {method 'acquire' of '_thread.lock' objects}
1 0.000 0.000 661.233 661.233 /Users/Reddy/anaconda/envs/py36/lib/python3.6/multiprocessing/pool.py:261(map)
1 0.000 0.000 661.233 661.233 /Users/Reddy/anaconda/envs/py36/lib/python3.6/multiprocessing/pool.py:637(get)
1 0.000 0.000 661.233 661.233 /Users/Reddy/anaconda/envs/py36/lib/python3.6/multiprocessing/pool.py:634(wait)
866/8 0.004 0.000 0.868 0.108 <frozen importlib._bootstrap>:958(_find_and_load)
866/8 0.003 0.000 0.867 0.108 <frozen importlib._bootstrap>:931(_find_and_load_unlocked)
720/8 0.003 0.000 0.865 0.108 <frozen importlib._bootstrap>:641(_load_unlocked)
596/8 0.002 0.000 0.865 0.108 <frozen importlib._bootstrap_external>:672(exec_module)
1017/8 0.001 0.000 0.863 0.108 <frozen importlib._bootstrap>:197(_call_with_frames_removed)
522/51 0.001 0.000 0.765 0.015 {built-in method builtins.__import__}
The profiling info now sorted by time
In [20]: p.sort_stats('time').print_stats(20)
Mon Oct 16 16:43:59 2017 profiling_log.txt
555404 function calls (543552 primitive calls) in 662.201 seconds
Ordered by: internal time
List reduced from 4510 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
23 661.233 28.749 661.233 28.749 {method 'acquire' of '_thread.lock' objects}
115/80 0.177 0.002 0.211 0.003 {built-in method _imp.create_dynamic}
595 0.072 0.000 0.072 0.000 {built-in method marshal.loads}
1 0.045 0.045 0.045 0.045 {method 'acquire' of '_multiprocessing.SemLock' objects}
897/1 0.044 0.000 662.202 662.202 {built-in method builtins.exec}
3 0.042 0.014 0.042 0.014 {method 'read' of '_io.BufferedReader' objects}
2037/1974 0.037 0.000 0.082 0.000 {built-in method builtins.__build_class__}
286 0.022 0.000 0.061 0.000 /Users/Reddy/anaconda/envs/py36/lib/python3.6/site-packages/scipy/misc/doccer.py:12(docformat)
2886 0.021 0.000 0.021 0.000 {built-in method posix.stat}
79 0.016 0.000 0.016 0.000 {built-in method posix.read}
597 0.013 0.000 0.021 0.000 <frozen importlib._bootstrap_external>:830(get_data)
276 0.011 0.000 0.013 0.000 /Users/Reddy/anaconda/envs/py36/lib/python3.6/sre_compile.py:250(_optimize_charset)
108 0.011 0.000 0.038 0.000 /Users/Reddy/anaconda/envs/py36/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py:626(_construct_argparser)
1225 0.011 0.000 0.050 0.000 <frozen importlib._bootstrap_external>:1233(find_spec)
7179 0.009 0.000 0.009 0.000 {method 'splitlines' of 'str' objects}
33 0.008 0.000 0.008 0.000 {built-in method posix.waitpid}
283 0.008 0.000 0.015 0.000 /Users/Reddy/anaconda/envs/py36/lib/python3.6/site-packages/scipy/misc/doccer.py:128(indentcount_lines)
3 0.008 0.003 0.008 0.003 {method 'poll' of 'select.poll' objects}
7178 0.008 0.000 0.008 0.000 {method 'expandtabs' of 'str' objects}
597 0.007 0.000 0.007 0.000 {method 'read' of '_io.FileIO' objects}
More profiling info, sorted by percall:
Update 2
The elements in the large list datasets I mentioned earlier are not usually that big; they are typically 10-25 MB each. But depending on the floating-point precision used and the number of samples and features, this can easily grow to 500 MB-1 GB per element. Hence I'd prefer a solution that can scale.
Update 3:
The code inside holdout_trial_compare_datasets uses scikit-learn's GridSearchCV, which internally uses the joblib library if we set n_jobs > 1 (or even whenever we set it at all). This might lead to some bad interactions between multiprocessing and joblib. So I am trying another config where I do not set n_jobs at all (which should default to no parallelism within scikit-learn). Will keep you posted.
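For reference, a minimal hypothetical sketch of the two configurations being compared inside the worker (the estimator and parameter grid are placeholders, not the actual code):

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {'C': [0.1, 1.0, 10.0]}            # placeholder grid

# Configuration under suspicion: joblib parallelism nested inside a Pool worker.
gs_nested = GridSearchCV(SVC(), param_grid, n_jobs=2)

# Configuration being tried instead: n_jobs left unset, so the grid search
# runs serially inside each pool worker.
gs_serial = GridSearchCV(SVC(), param_grid)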
Based on the discussion in the comments, I did a mini experiment comparing three versions of the implementation:
v1: basically the same as your approach. In fact, since partial(f1, *shared_inputs) unpacks the proxy list immediately, Manager.List is not really involved here; the data is passed to the workers through the Pool's internal queue.
v2: v2 actually uses Manager.List; the work function receives a ListProxy object and fetches the shared data through an internal connection to the manager's server process.
v3: the child processes share the data inherited from the parent, taking advantage of the fork(2) system call.
import multiprocessing as mp
from functools import partial
from time import time

def f1(*args):
    for e in args[0]: pow(e, 2)

def f2(*args):
    for e in args[0][0]: pow(e, 2)

def f3(n):
    for i in datasets: pow(i, 2)

def v1(np):
    with mp.Manager() as proxy_manager:
        shared_inputs = proxy_manager.list([datasets, ])
        pf = partial(f1, *shared_inputs)
        with mp.Pool(processes=np) as pool:
            r = pool.map(pf, range(16))

def v2(np):
    with mp.Manager() as proxy_manager:
        shared_inputs = proxy_manager.list([datasets, ])
        pf = partial(f2, shared_inputs)
        with mp.Pool(processes=np) as pool:
            r = pool.map(pf, range(16))

def v3(np):
    with mp.Pool(processes=np) as pool:
        r = pool.map(f3, range(16))

datasets = [2.0 for _ in range(10 * 1000 * 1000)]

for f in (v1, v2, v3):
    print(f.__code__.co_name)
    for np in (2, 4, 8, 16):
        s = time()
        f(np)
        print("%s %.2fs" % (np, time() - s))
Results taken on a 16-core E5-2682 VPC; it is obvious that v3 scales better:
{method 'acquire' of '_thread.lock' objects}
Looking at your profiler output, I would say that the shared-object lock/unlock overhead overwhelms the speed gains of multiprocessing.
Refactor so that the work is farmed out to workers that do not need to talk to one another as much.
Specifically, if possible, derive one answer per data pile and then act on the accumulated results.
This is why Queues can seem so much faster: they involve a type of work that does not require an object that has to be 'managed' and so locked/unlocked.
Only 'manage' things that absolutely need to be shared between processes. Your managed list contains some very complicated looking objects...
A faster paradigm is:
allwork = manager.list([a, b, c])
theresult = manager.list()
and then
while allwork:
    unitofwork = allwork.pop()
    theresult.append(myfunction(unitofwork))
If you do not need a complex shared object, then only use a list of the most simple objects imaginable.
Then tell the workers to acquire the complex data that they can process in their own little world.
Try:
allwork = manager.list([datasetid1, datasetid2, ...])
theresult = manager.list()

while allwork:
    unitofworkid = allwork.pop()
    theresult.append(myfunction(unitofworkid))

def myfunction(unitofworkid):
    thework = acquiredataset(unitofworkid)
    result = holdout_trial_compare_datasets(thework, ...)
    return result
I hope that this makes sense. It should not take too much time to refactor in this direction. And you should see the {method 'acquire' of '_thread.lock' objects} number drop like a rock when you profile.
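A minimal runnable sketch of that pattern, with load_dataset and process_dataset as hypothetical stand-ins for acquiredataset and holdout_trial_compare_datasets:

import multiprocessing as mp

def load_dataset(dataset_id):
    # Hypothetical stand-in: each worker loads its own data (from disk, a
    # database, etc.), so nothing large is shared through a Manager.
    return [float(dataset_id)] * 1000

def process_dataset(dataset_id):
    data = load_dataset(dataset_id)
    return sum(x * x for x in data)     # stand-in for the real trial

if __name__ == '__main__':
    dataset_ids = list(range(10))       # only small, simple objects are shipped to workers
    with mp.Pool(processes=4) as pool:
        results = pool.map(process_dataset, dataset_ids)
    print(results[:3])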
I'm trying to profile a function that calls other functions. I call the profiler as follows:
from mymodule import foo

def start():
    # ...
    foo()

import cProfile as profile
import pstats

profile.run('start()', output_file)
p = pstats.Stats(output_file)

print "name: "
print p.sort_stats('name')
print "all stats: "
p.print_stats()
print "cumulative (top 10): "
p.sort_stats('cumulative').print_stats(10)
I find that the profiler says all the time was spent in the function foo() of mymodule, instead of breaking it down into the sub-functions that foo() calls, which is what I want to see. How can I make the profiler report the performance of these sub-functions?
Thanks.
You need p.print_callees() to get a hierarchical breakdown of method calls. The output is quite self-explanatory: in the left column you find your function of interest, e.g. foo(); the right-hand column then shows all called sub-functions and their scoped total and cumulative times. Breakdowns for these sub-calls are included as well.
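For example, assuming the stats were dumped to a file named start.prof (the filename is just an assumption), the callee listing can be restricted to foo with a regexp, just like print_stats:

import pstats

p = pstats.Stats('start.prof')                    # hypothetical dump file
p.sort_stats('cumulative').print_callees('foo')   # sub-functions called by foo(), with their times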
First, I want to say that I was unable to replicate the Asker's issue. The profiler (in py2.7) definitely descends into the called functions and methods. (The docs for py3.6 look identical, but I haven't tested on py3.) My guess is that by limiting the output to the top 10 entries, sorted by cumulative time, the first N of those were very high-level functions called only a few times, and the functions called by foo() dropped off the bottom of the list.
I decided to play with some big numbers for testing. Here's my test code:
# file: mymodule.py
import math

def foo(n=5):
    for i in xrange(1, n):
        baz(i)
        bar(i ** i)

def bar(n):
    for i in xrange(1, n):
        e = exp200(i)
        print "len e: ", len("{}".format(e))

def exp200(n):
    result = 1
    for i in xrange(200):
        result *= n
    return result

def baz(n):
    print "{}".format(n)
And the including file (very similar to the Asker's):
# file: test.py
from mymodule import foo

def start():
    # ...
    foo(8)

OUTPUT_FILE = 'test.profile_info'

import pstats
import cProfile as profile

profile.run('start()', OUTPUT_FILE)
p = pstats.Stats(OUTPUT_FILE)

print "name: "
print p.sort_stats('name')
print "all stats: "
p.print_stats()
print "cumulative (top 10): "
p.sort_stats('cumulative').print_stats(10)
print "time (top 10): "
p.sort_stats('time').print_stats(10)
Notice the last line. I added a view sorted by time, which is the total time spent in the function "excluding time made in calls to sub-functions". I find this view much more useful, as it tends to favor the functions that are doing actual work, and may be in need of optimization.
Here's the part of the results that the Asker was working from (cumulative-sorted):
cumulative (top 10):
Thu Mar 24 21:26:32 2016 test.profile_info
2620840 function calls in 76.039 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 76.039 76.039 <string>:1(<module>)
1 0.000 0.000 76.039 76.039 test.py:5(start)
1 0.000 0.000 76.039 76.039 /Users/jhazen/mymodule.py:4(foo)
7 10.784 1.541 76.039 10.863 /Users/jhazen/mymodule.py:10(bar)
873605 49.503 0.000 49.503 0.000 /Users/jhazen/mymodule.py:15(exp200)
873612 15.634 0.000 15.634 0.000 {method 'format' of 'str' objects}
873605 0.118 0.000 0.118 0.000 {len}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:21(baz)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
See how the top 3 functions in this display were only called once. Let's look at the time-sorted view:
time (top 10):
Thu Mar 24 21:26:32 2016 test.profile_info
2620840 function calls in 76.039 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
873605 49.503 0.000 49.503 0.000 /Users/jhazen/mymodule.py:15(exp200)
873612 15.634 0.000 15.634 0.000 {method 'format' of 'str' objects}
7 10.784 1.541 76.039 10.863 /Users/jhazen/mymodule.py:10(bar)
873605 0.118 0.000 0.118 0.000 {len}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:21(baz)
1 0.000 0.000 76.039 76.039 /Users/jhazen/mymodule.py:4(foo)
1 0.000 0.000 76.039 76.039 test.py:5(start)
1 0.000 0.000 76.039 76.039 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Now the number one entry makes sense. Obviously raising something to the 200th power by repeated multiplication is a "naive" strategy. Let's replace it:
def exp200(n):
    return n ** 200
And the results:
time (top 10):
Thu Mar 24 21:32:18 2016 test.profile_info
2620840 function calls in 30.646 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
873612 15.722 0.000 15.722 0.000 {method 'format' of 'str' objects}
7 9.760 1.394 30.646 4.378 /Users/jhazen/mymodule.py:10(bar)
873605 5.056 0.000 5.056 0.000 /Users/jhazen/mymodule.py:15(exp200)
873605 0.108 0.000 0.108 0.000 {len}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:18(baz)
1 0.000 0.000 30.646 30.646 /Users/jhazen/mymodule.py:4(foo)
1 0.000 0.000 30.646 30.646 test.py:5(start)
1 0.000 0.000 30.646 30.646 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
That's a nice improvement in time. Now str.format() is our worst offender. I added the line in bar() to print the length of the number, because my first attempt (just computing the number and doing nothing with it) got optimized away, and my attempt to avoid that (printing the number, which got really big really fast) seemed like it might be blocking on I/O, so I compromised on printing the length of the number. Hey, that's the base-10 log. Let's try that:
def bar(n):
    for i in xrange(1, n):
        e = exp200(i)
        print "log e: ", math.log10(e)
And the results:
time (top 10):
Thu Mar 24 21:40:16 2016 test.profile_info
1747235 function calls in 11.279 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
7 6.082 0.869 11.279 1.611 /Users/jhazen/mymodule.py:10(bar)
873605 4.996 0.000 4.996 0.000 /Users/jhazen/mymodule.py:15(exp200)
873605 0.201 0.000 0.201 0.000 {math.log10}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:18(baz)
1 0.000 0.000 11.279 11.279 /Users/jhazen/mymodule.py:4(foo)
7 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 11.279 11.279 test.py:5(start)
1 0.000 0.000 11.279 11.279 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Hmm, still a fair amount of time spent in bar(), even without the str.format(). Let's get rid of that print:
def bar(n):
    z = 0
    for i in xrange(1, n):
        e = exp200(i)
        z += math.log10(e)
    return z
And the results:
time (top 10):
Thu Mar 24 21:45:24 2016 test.profile_info
1747235 function calls in 5.031 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
873605 4.487 0.000 4.487 0.000 /Users/jhazen/mymodule.py:17(exp200)
7 0.440 0.063 5.031 0.719 /Users/jhazen/mymodule.py:10(bar)
873605 0.104 0.000 0.104 0.000 {math.log10}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:20(baz)
1 0.000 0.000 5.031 5.031 /Users/jhazen/mymodule.py:4(foo)
7 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 5.031 5.031 test.py:5(start)
1 0.000 0.000 5.031 5.031 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Now it looks like the stuff doing the actual work is the busiest function, so I think we're done optimizing.
Hope that helps!
Maybe you are facing a similar problem, so I'm going to describe my issue here. My profiling code looked like this:
def foobar():
    import cProfile
    pr = cProfile.Profile()
    pr.enable()

    for event in reader.events():
        baz()
        # and other things

    pr.disable()
    pr.dump_stats('result.prof')
The final profiling output contained only the events() call, and I spent quite some time realising that I had been profiling an empty loop. Of course, foobar() was called more than once from the client code, but the meaningful profiling results had been overwritten by the last call, whose loop happened to be empty.
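If each call really does need to be profiled, one workaround (a sketch, with the filenames as assumptions) is to dump every run to its own file and merge them afterwards; pstats.Stats can aggregate several dumps:

import cProfile
import pstats

def work(n):
    return sum(i * i for i in range(n))

# Dump each profiled call to a separate file instead of overwriting one file.
for idx, n in enumerate([10, 100000, 10]):
    pr = cProfile.Profile()
    pr.enable()
    work(n)
    pr.disable()
    pr.dump_stats('result_%d.prof' % idx)

# Merge all dumps so the non-empty runs are not lost.
stats = pstats.Stats('result_0.prof')
stats.add('result_1.prof', 'result_2.prof')
stats.sort_stats('cumulative').print_stats(10)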