Django + PostgreSQL - 1000 inserts/sec, how to speed up?

I did some diagnosis and found the following in htop:
python save_to_db.py takes 86% of the CPU
postgres: mydb mydb localhost idle in transaction takes 16% of the CPU.
My code for save_to_db.py looks something like:
import datetime
import django
import os
import sys
import json
import itertools
import cProfile

# setting up standalone django environment
...

from django.db import transaction
from xxx.models import File

INPUT_FILE = "xxx"

with open("xxx", "r") as f:
    volume_name = f.read().strip()

def todate(seconds):
    return datetime.datetime.fromtimestamp(seconds)

@transaction.atomic
def batch_save_files(files, volume_name):
    for jf in files:
        metadata = files[jf]
        f = File(xxx=jf, yyy=todate(metadata[0]), zzz=todate(metadata[1]),
                 vvv=metadata[2], www=volume_name)
        f.save()

with open(INPUT_FILE, "r") as f:
    dirdump = json.load(f)

timestamp = dirdump["curtime"]
files = {k: dirdump["files"][k] for k in list(dirdump["files"].keys())[:1000000]}

cProfile.run('batch_save_files(files, volume_name)')
And the respective cProfile dump (I only kept the entries with large cumtime):
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 881.336 881.336 <string>:1(<module>)
1000000 5.325 0.000 844.553 0.001 base.py:655(save)
1000000 14.574 0.000 834.125 0.001 base.py:732(save_base)
1000000 10.108 0.000 800.494 0.001 base.py:795(_save_table)
1000000 5.265 0.000 720.608 0.001 base.py:847(_do_update)
1000000 4.522 0.000 446.781 0.000 compiler.py:1038(execute_sql)
1000000 23.669 0.000 196.273 0.000 compiler.py:1314(as_sql)
1000000 7.473 0.000 458.064 0.000 compiler.py:1371(execute_sql)
1 0.000 0.000 881.336 881.336 contextlib.py:49(inner)
1000000 7.370 0.000 62.090 0.000 lookups.py:150(process_lhs)
1000000 3.907 0.000 81.685 0.000 lookups.py:159(as_sql)
1000000 3.251 0.000 44.679 0.000 lookups.py:74(process_lhs)
1000000 3.594 0.000 53.745 0.000 manager.py:81(manager_method)
1000000 19.855 0.000 106.487 0.000 query.py:1117(build_filter)
1000000 5.523 0.000 161.104 0.000 query.py:1241(add_q)
1000000 10.684 0.000 152.080 0.000 query.py:1258(_add_q)
1000000 7.448 0.000 513.984 0.001 query.py:697(_update)
1000000 2.221 0.000 201.359 0.000 query.py:831(filter)
1000000 5.371 0.000 199.138 0.000 query.py:845(_filter_or_exclude)
1 7.982 7.982 881.329 881.329 save_to_db.py:47(batch_save_files)
1000000 1.834 0.000 204.064 0.000 utils.py:67(execute)
1000000 3.099 0.000 202.231 0.000 utils.py:73(_execute_with_wrappers)
1000000 4.306 0.000 199.131 0.000 utils.py:79(_execute)
1000000 10.830 0.000 222.880 0.000 utils.py:97(execute)
2/1 0.000 0.000 881.336 881.336 {built-in method builtins.exec}
1000001 189.750 0.000 193.764 0.000 {method 'execute' of 'psycopg2.extensions.cursor' objects}
Running time python save_to_db.py takes 14 minutes, which works out to roughly 1000 inserts/sec. This is fairly slow.
My schema for File looks like:
xxx TEXT UNIQUE NOT NULL PRIMARY KEY
yyy DATETIME
zzz DATETIME
vvv INTEGER
www TEXT
I can't seem to figure out how to speed this process up. Is there some way of doing this that I'm not aware of? Currently I index everything, but I would be very surprised if that's the main bottleneck.
Thank you!

You can use bulk_create, which batches many rows into a single INSERT instead of issuing one query per save() call. (Your profile also shows the time going through _do_update and query.py:697(_update): because the primary key is set before saving, each save() first attempts an UPDATE.)
objs = [
    File(
        xxx=jf,
        yyy=todate(metadata[0]),
        zzz=todate(metadata[1]),
        vvv=metadata[2],
        www=volume_name,
    )
    for jf, metadata in files.items()
]
filelist = File.objects.bulk_create(objs)
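For a million rows it may also be worth chunking, so no single statement grows too large; bulk_create accepts a batch_size argument for this (a sketch, with 10000 as an arbitrary starting point rather than a tuned value):
filelist = File.objects.bulk_create(objs, batch_size=10000)  # one INSERT per 10,000 rows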

Related

Numpy array calculations take too long - Bug?

NumPy version: 1.14.5
Purpose of the 'foo' function:
Finding the Euclid distance between arrays with the shapes (1,512), which represent facial features.
Issue:
The foo function takes ~223.32 ms, but after that, some background operations related to NumPy take 170 seconds for some reason.
Question:
Is keeping arrays in dictionaries and iterating over them a very dangerous usage of NumPy arrays?
Request for Advice:
When I keep the arrays stacked and separate from dict, Euclid distance calculation takes half the time (~120ms instead of ~250ms), but overall performance doesn't change much for some reason. Allocating new arrays and stacking them may have countered the benefits of bigger array calculations.
I am open to any advice.
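As a point of comparison, a vectorized version of the distance computation described above might look like the sketch below (my own sketch, assuming a hypothetical stacked array of shape (N, 512) built once from the dict values; merged_faces_rec and face are as in the code that follows):
import numpy as np
# Stack the per-face (1, 512) embeddings into one (N, 512) array...
stacked = np.vstack([v[0] for v in merged_faces_rec.values()])
# ...then compute all N Euclidean distances in a single call, instead of
# one np.linalg.norm per dict entry.
dists = np.linalg.norm(stacked - face, axis=1)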
Code:
import numpy as np
import time
import uuid
import random
from funcy import print_durations

@print_durations
def foo(merged_faces_rec, face):
    t = time.time()
    for uid, feature_list in merged_faces_rec.items():
        dist = np.linalg.norm(np.subtract(feature_list[0], face))
    print("foo inside : ", time.time()-t)

rand_age = lambda: random.choice(["0-18", "18-35", "35-55", "55+"])
rand_gender = lambda: random.choice(["Erkek", "Kadin"])
rand_emo = lambda: random.choice(["happy", "sad", "neutral", "scared"])
date_list = []
emb = lambda: np.random.rand(1, 512)

def generate_faces_rec(d, n=12000):
    for _ in range(n):
        d[uuid.uuid4().hex] = [emb(), rand_gender(), rand_age(), rand_emo(), date_list]

faces_rec1 = dict()
generate_faces_rec(faces_rec1)
faces_rec2 = dict()
generate_faces_rec(faces_rec2)
faces_rec3 = dict()
generate_faces_rec(faces_rec3)
faces_rec4 = dict()
generate_faces_rec(faces_rec4)
faces_rec5 = dict()
generate_faces_rec(faces_rec5)

merged_faces_rec = dict()

st = time.time()
merged_faces_rec.update(faces_rec1)
merged_faces_rec.update(faces_rec2)
merged_faces_rec.update(faces_rec3)
merged_faces_rec.update(faces_rec4)
merged_faces_rec.update(faces_rec5)
t2 = time.time()
print("updates: ", t2-st)

face = list(merged_faces_rec.values())[0][0]
t3 = time.time()
print("face: ", t3-t2)

t4 = time.time()
foo(merged_faces_rec, face)
t5 = time.time()
print("foo: ", t5-t4)
Result:
Computations between t4 and t5 took 168 seconds.
updates: 0.00468754768371582
face: 0.0011434555053710938
foo inside : 0.2232837677001953
223.32 ms in foo({'d02d46999aa145be8116..., [[0.96475353 0.8055263...)
foo: 168.42408967018127
cProfile
python3 -m cProfile -s tottime test.py
cProfile Result:
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
30720512 44.991 0.000 85.425 0.000 arrayprint.py:888(__call__)
36791296 42.447 0.000 42.447 0.000 {built-in method numpy.core.multiarray.dragon4_positional}
30840514/60001 36.154 0.000 149.749 0.002 arrayprint.py:659(recurser)
24649728 25.967 0.000 25.967 0.000 {built-in method numpy.core.multiarray.dragon4_scientific}
30720512 20.183 0.000 26.420 0.000 arrayprint.py:636(_extendLine)
10 12.281 1.228 12.281 1.228 {method 'sub' of '_sre.SRE_Pattern' objects}
60001 11.434 0.000 79.370 0.001 arrayprint.py:804(fillFormat)
228330011/228329975 10.270 0.000 10.270 0.000 {built-in method builtins.len}
204081 4.815 0.000 16.469 0.000 {built-in method builtins.max}
18431577 4.624 0.000 21.742 0.000 arrayprint.py:854(<genexpr>)
18431577 4.453 0.000 28.627 0.000 arrayprint.py:859(<genexpr>)
30720531 3.987 0.000 3.987 0.000 {method 'split' of 'str' objects}
12348936 3.012 0.000 13.873 0.000 arrayprint.py:829(<genexpr>)
12348936 3.007 0.000 17.955 0.000 arrayprint.py:832(<genexpr>)
18431577 2.179 0.000 2.941 0.000 arrayprint.py:863(<genexpr>)
18431577 2.124 0.000 2.870 0.000 arrayprint.py:864(<genexpr>)
12348936 1.625 0.000 3.180 0.000 arrayprint.py:833(<genexpr>)
12348936 1.468 0.000 1.992 0.000 arrayprint.py:834(<genexpr>)
12348936 1.433 0.000 1.922 0.000 arrayprint.py:844(<genexpr>)
12348936 1.432 0.000 1.929 0.000 arrayprint.py:837(<genexpr>)
12324864 1.074 0.000 1.074 0.000 {method 'partition' of 'str' objects}
6845518 0.761 0.000 0.761 0.000 {method 'rstrip' of 'str' objects}
60001 0.747 0.000 80.175 0.001 arrayprint.py:777(__init__)
2 0.637 0.319 245.563 122.782 debug.py:237(smart_repr)
120002 0.573 0.000 0.573 0.000 {method 'reduce' of 'numpy.ufunc' objects}
60001 0.421 0.000 231.153 0.004 arrayprint.py:436(_array2string)
60000 0.370 0.000 0.370 0.000 {method 'rand' of 'mtrand.RandomState' objects}
60000 0.303 0.000 232.641 0.004 arrayprint.py:1334(array_repr)
60001 0.274 0.000 232.208 0.004 arrayprint.py:465(array2string)
60001 0.261 0.000 80.780 0.001 arrayprint.py:367(_get_format_function)
120008 0.255 0.000 0.611 0.000 numeric.py:2460(seterr)
Update to Clarify the Question
This is the part that has the bug. Something behind the scenes causes the program to take too long. Is it something to do with the garbage collector, or just a weird NumPy bug? I don't have any clue.
t6 = time.time()
foo1(big_array, face) # 223.32ms
t7 = time.time()
print("foo1 : ", t7-t6) # foo1 : 170 seconds

Speed up extraction of coordinates from DICOM structure set

Using numpy.reshape helped a lot and using map helped a little. Is it possible to speed this up some more?
import pydicom
import numpy as np
import cProfile
import pstats

def parse_coords(contour):
    """Given a contour from a DICOM ROIContourSequence, returns coordinates
    [loop][[x0, x1, x2, ...][y0, y1, y2, ...][z0, z1, z2, ...]]"""
    if not hasattr(contour, "ContourSequence"):
        return []  # empty structure

    def _reshape_contour_data(loop):
        return np.reshape(np.array(loop.ContourData),
                          (3, len(loop.ContourData) // 3),
                          order='F')

    return list(map(_reshape_contour_data, contour.ContourSequence))

def profile_load_contours():
    rs = pydicom.dcmread('RS.gyn1.dcm')
    structs = [parse_coords(contour) for contour in rs.ROIContourSequence]

cProfile.run('profile_load_contours()', 'prof.stats')
p = pstats.Stats('prof.stats')
p.sort_stats('cumulative').print_stats(30)
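For intuition about the order='F' reshape in parse_coords: DICOM stores ContourData as a flat [x0, y0, z0, x1, y1, z1, ...] list, and the Fortran-order reshape turns it into one row per axis. A toy example (my own, not from the question):
import numpy as np
flat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]  # x0, y0, z0, x1, y1, z1
coords = np.reshape(np.array(flat), (3, len(flat) // 3), order='F')
# coords[0] == [1., 4.] (all x), coords[1] == [2., 5.] (all y),
# coords[2] == [3., 6.] (all z)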
Using a real structure set exported from Varian Eclipse.
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 12.165 12.165 {built-in method builtins.exec}
1 0.151 0.151 12.165 12.165 <string>:1(<module>)
1 0.000 0.000 12.014 12.014 load_contour_time.py:19(profile_load_contours)
1 0.000 0.000 11.983 11.983 load_contour_time.py:21(<listcomp>)
56 0.009 0.000 11.983 0.214 load_contour_time.py:7(parse_coords)
50745/33837 0.129 0.000 11.422 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:455(__getattr__)
50741/33825 0.152 0.000 10.938 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:496(__getitem__)
16864 0.069 0.000 9.839 0.001 load_contour_time.py:12(_reshape_contour_data)
16915 0.101 0.000 9.780 0.001 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataelem.py:439(DataElement_from_raw)
16915 0.052 0.000 9.300 0.001 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/values.py:320(convert_value)
16864 0.038 0.000 7.099 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/values.py:89(convert_DS_string)
16870 0.042 0.000 7.010 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/valuerep.py:495(MultiString)
16908 1.013 0.000 6.826 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/multival.py:29(__init__)
3004437 3.013 0.000 5.577 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/multival.py:42(number_string_type_constructor)
3038317/3038231 1.037 0.000 3.171 0.000 {built-in method builtins.hasattr}
Much of the time is in convert_DS_string. Is it possible to make it faster? I guess part of the problem is that the coordinates are not stored very efficiently in the DICOM file.
EDIT:
As a way of avoiding the loop at the end of MultiVal.__init__ I am wondering about getting the raw double string of each ContourData and using numpy.fromstring on it. However, I have not been able to get the raw double string.
Eliminating the loop in MultiVal.__init__ and using numpy.fromstring provides a more than 4x speedup. I will post on the pydicom GitHub to see if there is some interest in taking this into the library code. It is a little ugly, so I would welcome advice on further improvement.
import pydicom
import numpy as np
import cProfile
import pstats

def parse_coords(contour):
    """Given a contour from a DICOM ROIContourSequence, returns coordinates
    [loop][[x0, x1, x2, ...][y0, y1, y2, ...][z0, z1, z2, ...]]"""
    if not hasattr(contour, "ContourSequence"):
        return []  # empty structure
    cd_tag = pydicom.tag.Tag(0x3006, 0x0050)  # ContourData tag

    def _reshape_contour_data(loop):
        val = super(loop.__class__, loop).__getitem__(cd_tag).value
        try:
            double_string = val.decode(encoding='utf-8')
            # 92 is '\', the DICOM multi-value delimiter
            double_vec = np.fromstring(double_string, dtype=float, sep=chr(92))
        except AttributeError:  # 'MultiValue' has no 'decode' (bytes does)
            # It's already been converted to doubles and cached
            double_vec = loop.ContourData
        return np.reshape(np.array(double_vec),
                          (3, len(double_vec) // 3),
                          order='F')

    return list(map(_reshape_contour_data, contour.ContourSequence))

def profile_load_contours():
    rs = pydicom.dcmread('RS.gyn1.dcm')
    structs = [parse_coords(contour) for contour in rs.ROIContourSequence]

profile_load_contours()
cProfile.run('profile_load_contours()', 'prof.stats')
p = pstats.Stats('prof.stats')
p.sort_stats('cumulative').print_stats(15)
Result
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 2.800 2.800 {built-in method builtins.exec}
1 0.017 0.017 2.800 2.800 <string>:1(<module>)
1 0.000 0.000 2.783 2.783 load_contour_time3.py:29(profile_load_contours)
1 0.000 0.000 2.761 2.761 load_contour_time3.py:31(<listcomp>)
56 0.006 0.000 2.760 0.049 load_contour_time3.py:9(parse_coords)
153/109 0.001 0.000 2.184 0.020 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:455(__getattr__)
149/97 0.001 0.000 2.182 0.022 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:496(__getitem__)
51 0.000 0.000 2.178 0.043 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataelem.py:439(DataElement_from_raw)
51 0.000 0.000 2.177 0.043 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/values.py:320(convert_value)
44 0.000 0.000 2.176 0.049 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/values.py:255(convert_SQ)
44 0.035 0.001 2.176 0.049 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/filereader.py:427(read_sequence)
152/66 0.000 0.000 2.171 0.033 {built-in method builtins.hasattr}
16920 0.147 0.000 1.993 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/filereader.py:452(read_sequence_item)
16923 0.116 0.000 1.267 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/filereader.py:365(read_dataset)
84616 0.113 0.000 0.699 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:960(__setattr__)

Explain {isinstance} in iPython prun output?

I'm trying to profile a few lines of Pandas code, and when I run %prun I'm finding most of my time is taken by {isinstance}. This seems to happen a lot -- can anyone suggest what that means and, for bonus points, suggest a way to avoid it?
This isn't meant to be application specific, but here's a thinned out version of the code if that's important:
def flagOtherGroup(df):
    try:
        mostUsed0 = df[df.subGroupDummy == 0].siteid.iloc[0]
    except:
        mostUsed0 = -1
    try:
        mostUsed1 = df[df.subGroupDummy == 1].siteid.iloc[0]
    except:
        mostUsed1 = -1
    df['mostUsed'] = 0
    df.loc[(df.subGroupDummy == 0) & (df.siteid == mostUsed1), 'mostUsed'] = 1
    df.loc[(df.subGroupDummy == 1) & (df.siteid == mostUsed0), 'mostUsed'] = 1
    return df[['mostUsed']]
%prun -l15 temp = test.groupby('userCode').apply(flagOtherGroup)
And top lines of prun:
Ordered by: internal time
List reduced from 531 to 15 due to restriction <15>
ncalls tottime percall cumtime percall filename:lineno(function)
834472 1.908 0.000 2.280 0.000 {isinstance}
497048/395400 1.192 0.000 1.572 0.000 {len}
32722 0.879 0.000 4.479 0.000 series.py:114(__init__)
34444 0.613 0.000 1.792 0.000 internals.py:3286(__init__)
25990 0.568 0.000 0.568 0.000 {method 'reduce' of 'numpy.ufunc' objects}
82266/78821 0.549 0.000 0.744 0.000 {numpy.core.multiarray.array}
42201 0.544 0.000 1.195 0.000 internals.py:62(__init__)
42201 0.485 0.000 1.812 0.000 internals.py:2015(make_block)
166244 0.476 0.000 0.615 0.000 {getattr}
4310 0.455 0.000 1.121 0.000 internals.py:2217(_rebuild_blknos_and_blklocs)
12054 0.417 0.000 2.134 0.000 internals.py:2355(apply)
9474 0.385 0.000 1.284 0.000 common.py:727(take_nd)
isinstance, len and getattr are just built-in functions. There is a huge number of calls to isinstance() here; it is not that an individual call takes a lot of time, but that the function was called 834,472 times.
Presumably it is the pandas code that calls it.
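A minimal illustration (my own example, not the asker's code): built-in functions have no Python source file, so the profiler shows them wrapped in braces.
import cProfile

def count_ints(xs):
    # calls the built-in isinstance() once per element
    return sum(isinstance(x, int) for x in xs)

cProfile.run('count_ints(list(range(100000)))')
# The built-in shows up in braces, e.g. as {isinstance} on Python 2
# or {built-in method builtins.isinstance} on Python 3.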

Performance difference between list comprehensions and for loops

I have a script that finds the sum of all numbers that can be written as the sum of fifth powers of their digits. (This problem is described in more detail on the Project Euler web site.)
I have written it two ways, but I do not understand the performance difference.
The first way uses nested list comprehensions:
exp = 5

def min_combo(n):
    return ''.join(sorted(list(str(n))))

def fifth_power(n, exp):
    return sum([int(x) ** exp for x in list(n)])

print sum([fifth_power(j, exp)
           for j in set([min_combo(i) for i in range(101, 1000000)])
           if int(j) > 10 and j == min_combo(fifth_power(j, exp))])
and profiles like this:
$ python -m cProfile euler30.py
443839
3039223 function calls in 2.040 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1007801 1.086 0.000 1.721 0.000 euler30.py:10(min_combo)
7908 0.024 0.000 0.026 0.000 euler30.py:14(fifth_power)
1 0.279 0.279 2.040 2.040 euler30.py:6(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1007801 0.175 0.000 0.175 0.000 {method 'join' of 'str' objects}
1 0.013 0.013 0.013 0.013 {range}
1007801 0.461 0.000 0.461 0.000 {sorted}
7909 0.002 0.000 0.002 0.000 {sum}
The second way is the more usual for loop:
exp = 5
ans = 0

def min_combo(n):
    return ''.join(sorted(list(str(n))))

def fifth_power(n, exp):
    return sum([int(x) ** exp for x in list(n)])

for j in set([''.join(sorted(list(str(i)))) for i in range(100, 1000000)]):
    if int(j) > 10:
        if j == min_combo(fifth_power(j, exp)):
            ans += fifth_power(j, exp)

print 'answer', ans
Here is the profiling info again:
$ python -m cProfile euler30.py
answer 443839
2039325 function calls in 1.709 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
7908 0.024 0.000 0.026 0.000 euler30.py:13(fifth_power)
1 1.081 1.081 1.709 1.709 euler30.py:6(<module>)
7902 0.009 0.000 0.015 0.000 euler30.py:9(min_combo)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1007802 0.147 0.000 0.147 0.000 {method 'join' of 'str' objects}
1 0.013 0.013 0.013 0.013 {range}
1007802 0.433 0.000 0.433 0.000 {sorted}
7908 0.002 0.000 0.002 0.000 {sum}
Why does the list comprehension implementation call min_combo() 1,000,000 more times than the for loop implementation?
Because in the second version you re-implemented the body of min_combo inline inside the set(...) call (''.join(sorted(list(str(i))))), so those roughly 1,000,000 calls never go through the function and never appear in the profile.
Make both versions do the same thing and you'll see the same call counts.
BTW, change those to avoid big lists being created:
sum([something for foo in bar]) -> sum(something for foo in bar)
set([something for foo in bar]) -> set(something for foo in bar)
(without [...] they become generator expressions).
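Applied to the top-level line of the first version, that suggestion looks like this (a sketch, keeping the original Python 2 print statement):
print sum(
    fifth_power(j, exp)
    for j in set(min_combo(i) for i in range(101, 1000000))
    if int(j) > 10 and j == min_combo(fifth_power(j, exp))
)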

Increasing the depth of cProfiler in Python to report more functions?

I'm trying to profile a function that calls other functions. I call the profiler as follows:
from mymodule import foo

def start():
    # ...
    foo()

import pstats
import cProfile as profile

profile.run('start()', output_file)
p = pstats.Stats(output_file)

print "name: "
print p.sort_stats('name')
print "all stats: "
p.print_stats()
print "cumulative (top 10): "
p.sort_stats('cumulative').print_stats(10)
I find that the profiler says all the time was spent in the function foo() of mymodule, instead of breaking it down into the sub-functions foo() calls, which is what I want to see. How can I make the profiler report the performance of these functions?
Thanks.
You need p.print_callees() to get a hierarchical breakdown of method calls. The output is quite self-explanatory: in the left column you find your function of interest, e.g. foo(); the columns to the right show all called sub-functions and their scoped total and cumulative times. Breakdowns for these sub-calls are included as well, and so on.
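For example, against the Stats object built in the question (the 'foo' argument is a restriction that limits output to matching function names, the same way print_stats accepts restrictions):
p.print_callees('foo')  # show only the callees of functions matching 'foo'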
First, I want to say that I was unable to replicate the Asker's issue. The profiler (in py2.7) definitely descends into the called functions and methods. (The docs for py3.6 look identical, but I haven't tested on py3.) My guess is that by limiting the output to the top 10 entries, sorted by cumulative time, the first N of those were very high-level functions called only a handful of times, and the functions called by foo() dropped off the bottom of the list.
I decided to play with some big numbers for testing. Here's my test code:
# file: mymodule.py
import math

def foo(n=5):
    for i in xrange(1, n):
        baz(i)
        bar(i ** i)

def bar(n):
    for i in xrange(1, n):
        e = exp200(i)
        print "len e: ", len("{}".format(e))

def exp200(n):
    result = 1
    for i in xrange(200):
        result *= n
    return result

def baz(n):
    print "{}".format(n)
And the file that includes it (very similar to the Asker's):
# file: test.py
from mymodule import foo

def start():
    # ...
    foo(8)

OUTPUT_FILE = 'test.profile_info'

import pstats
import cProfile as profile

profile.run('start()', OUTPUT_FILE)
p = pstats.Stats(OUTPUT_FILE)

print "name: "
print p.sort_stats('name')
print "all stats: "
p.print_stats()
print "cumulative (top 10): "
p.sort_stats('cumulative').print_stats(10)
print "time (top 10): "
p.sort_stats('time').print_stats(10)
Notice the last line. I added a view sorted by time, which is the total time spent in the function "excluding time made in calls to sub-functions". I find this view much more useful, as it tends to favor the functions that are doing actual work, and may be in need of optimization.
Here's the part of the results that the Asker was working from (cumulative-sorted):
cumulative (top 10):
Thu Mar 24 21:26:32 2016 test.profile_info
2620840 function calls in 76.039 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 76.039 76.039 <string>:1(<module>)
1 0.000 0.000 76.039 76.039 test.py:5(start)
1 0.000 0.000 76.039 76.039 /Users/jhazen/mymodule.py:4(foo)
7 10.784 1.541 76.039 10.863 /Users/jhazen/mymodule.py:10(bar)
873605 49.503 0.000 49.503 0.000 /Users/jhazen/mymodule.py:15(exp200)
873612 15.634 0.000 15.634 0.000 {method 'format' of 'str' objects}
873605 0.118 0.000 0.118 0.000 {len}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:21(baz)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
See how the top 3 functions in this display were only called once. Let's look at the time-sorted view:
time (top 10):
Thu Mar 24 21:26:32 2016 test.profile_info
2620840 function calls in 76.039 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
873605 49.503 0.000 49.503 0.000 /Users/jhazen/mymodule.py:15(exp200)
873612 15.634 0.000 15.634 0.000 {method 'format' of 'str' objects}
7 10.784 1.541 76.039 10.863 /Users/jhazen/mymodule.py:10(bar)
873605 0.118 0.000 0.118 0.000 {len}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:21(baz)
1 0.000 0.000 76.039 76.039 /Users/jhazen/mymodule.py:4(foo)
1 0.000 0.000 76.039 76.039 test.py:5(start)
1 0.000 0.000 76.039 76.039 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Now the number one entry makes sense. Obviously raising something to the 200th power by repeated multiplication is a "naive" strategy. Let's replace it:
def exp200(n):
    return n ** 200
And the results:
time (top 10):
Thu Mar 24 21:32:18 2016 test.profile_info
2620840 function calls in 30.646 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
873612 15.722 0.000 15.722 0.000 {method 'format' of 'str' objects}
7 9.760 1.394 30.646 4.378 /Users/jhazen/mymodule.py:10(bar)
873605 5.056 0.000 5.056 0.000 /Users/jhazen/mymodule.py:15(exp200)
873605 0.108 0.000 0.108 0.000 {len}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:18(baz)
1 0.000 0.000 30.646 30.646 /Users/jhazen/mymodule.py:4(foo)
1 0.000 0.000 30.646 30.646 test.py:5(start)
1 0.000 0.000 30.646 30.646 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
That's a nice improvement in time. Now str.format() is our worst offender. I added the line in bar() to print the length of the number, because my first attempt (just computing the number and doing nothing with it) got optimized away, and my attempt to avoid that (printing the number, which got really big really fast) seemed like it might be blocking on I/O, so I compromised on printing the length of the number. Hey, that's the base-10 log. Let's try that:
def bar(n):
    for i in xrange(1, n):
        e = exp200(i)
        print "log e: ", math.log10(e)
And the results:
time (top 10):
Thu Mar 24 21:40:16 2016 test.profile_info
1747235 function calls in 11.279 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
7 6.082 0.869 11.279 1.611 /Users/jhazen/mymodule.py:10(bar)
873605 4.996 0.000 4.996 0.000 /Users/jhazen/mymodule.py:15(exp200)
873605 0.201 0.000 0.201 0.000 {math.log10}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:18(baz)
1 0.000 0.000 11.279 11.279 /Users/jhazen/mymodule.py:4(foo)
7 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 11.279 11.279 test.py:5(start)
1 0.000 0.000 11.279 11.279 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Hmm, still a fair amount of time spent in bar(), even without the str.format(). Let's get rid of that print:
def bar(n):
    z = 0
    for i in xrange(1, n):
        e = exp200(i)
        z += math.log10(e)
    return z
And the results:
time (top 10):
Thu Mar 24 21:45:24 2016 test.profile_info
1747235 function calls in 5.031 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
873605 4.487 0.000 4.487 0.000 /Users/jhazen/mymodule.py:17(exp200)
7 0.440 0.063 5.031 0.719 /Users/jhazen/mymodule.py:10(bar)
873605 0.104 0.000 0.104 0.000 {math.log10}
7 0.000 0.000 0.000 0.000 /Users/jhazen/mymodule.py:20(baz)
1 0.000 0.000 5.031 5.031 /Users/jhazen/mymodule.py:4(foo)
7 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 5.031 5.031 test.py:5(start)
1 0.000 0.000 5.031 5.031 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Now it looks like the stuff doing the actual work is the busiest function, so I think we're done optimizing.
Hope that helps!
Maybe you have faced a similar problem, so I'm going to describe my issue here. My profiling code looked like this:
def foobar():
    import cProfile
    pr = cProfile.Profile()
    pr.enable()
    for event in reader.events():
        baz()
    # and other things
    pr.disable()
    pr.dump_stats('result.prof')
The final profiling output contained only the events() call, and it took me quite a while to realise that I had been profiling an empty loop. Of course, foobar() was called more than once from client code, but the meaningful profiling results had been overwritten by the last call, whose loop happened to be empty.
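One way to avoid this pitfall (a sketch; reader and baz are placeholders carried over from the snippet above): reuse a single Profile object across calls so the statistics accumulate, and dump them once at process exit rather than on every call.
import atexit
import cProfile

pr = cProfile.Profile()
# Write the accumulated stats once, when the process exits.
atexit.register(lambda: pr.dump_stats('result.prof'))

def foobar():
    pr.enable()
    for event in reader.events():  # reader/baz as in the snippet above
        baz()
    # and other things
    pr.disable()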
