Pandas optimization - python

I wrote a function to process data with pandas. A profiling log from %prun of my function is posted at the bottom (only the top few lines). I want to optimize my code because I need to call this function more than 4,000 times, and it takes 37.7 s to run once.
It seems the most time-consuming part is nonzero of numpy.ndarray. Since almost all of my operations are based on pandas, I wonder which functions in pandas rely on this method heavily?
My operations mostly consist of DataFrame slicing based on a DatetimeIndex using df.ix[] and DataFrame merges using pandas.merge().
I know it's hard to tell without posting my actual script, but the script is too long to be meaningful and most operations are ad hoc, so I can't rewrite it into a small script to post here.
16439731 function calls (16108083 primitive calls) in 37.766 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
7461 3.712 0.000 3.712 0.000 {method 'nonzero' of 'numpy.ndarray' objects}
244 1.731 0.007 5.434 0.022 index.py:1126(_partial_date_slice)
122 1.655 0.014 1.655 0.014 {pandas.algos.inner_join_indexer_int64}
610 1.578 0.003 1.578 0.003 {method 'factorize' of 'pandas.hashtable.Int64Factorizer' objects}
118817 0.764 0.000 0.764 0.000 {method 'reduce' of 'numpy.ufunc' objects}
22474 0.753 0.000 0.917 0.000 index.py:409(is_unique)
353210 0.669 0.000 1.228 0.000 {numpy.core.multiarray.array}
1577935 0.596 0.000 0.925 0.000 {isinstance}
1221 0.511 0.000 0.516 0.000 index.py:402(is_monotonic)
183 0.427 0.002 0.427 0.002 {pandas.algos.left_outer_join}
34529 0.376 0.000 1.286 0.000 index.py:98(__new__)
12356 0.358 0.000 0.358 0.000 {method 'take' of 'numpy.ndarray' objects}
3812 0.352 0.000 0.352 0.000 {pandas.algos.take_2d_axis0_int64_int64}
610 0.344 0.001 0.349 0.001 index.py:35(wrapper)
981 0.334 0.000 0.335 0.000 {method 'copy' of 'numpy.ndarray' objects}

df.ix[] is a little unpredictable in that it is primarily label-based but has an integer-position fallback. You should try using .loc[] instead. If you pass a single label it returns a Series for the row at that index label. You can also slice by passing a range. So instead of:
df.ix[begin_date:end_date]
Try:
df.loc[begin_date:end_date]
Even faster would be the integer-based slicing method .iloc[]. Since you're looping over the index anyway, you could add an enumerate() to your loop and use the enumerate() values for positional slicing, e.g.:
df.iloc[4:9]
On my machine .iloc tends to be about twice as fast as .loc.
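For illustration, here is a minimal, self-contained sketch of both slicing styles (the data and dates are made up, not from the original question):
import numpy as np
import pandas as pd

# hypothetical data: half a million rows on a DatetimeIndex
idx = pd.date_range("2015-01-01", periods=500000, freq="T")
df = pd.DataFrame({"x": np.random.randn(len(idx))}, index=idx)

begin_date, end_date = "2015-03-01", "2015-03-02"

# label-based slice on the DatetimeIndex (partial string indexing)
a = df.loc[begin_date:end_date]

# position-based slice; the integer bounds could come from searchsorted()
# or from an enumerate() counter in your own loop
i = df.index.searchsorted(pd.Timestamp(begin_date))
j = df.index.searchsorted(pd.Timestamp(end_date))
b = df.iloc[i:j]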

Related

How to extract useful info from cProfile with Pandas and Numpy?

I have some Python code that is generating a large data set via numerical simulation. The code is using Numpy for a lot of the calculations and Pandas for a lot of the top-level data. The data sets are large so the code is running slowly, and now I'm trying to see if I can use cProfile to find and fix some hot spots.
The trouble is that cProfile is identifying a lot of the hot spots as pieces of code within Pandas, within Numpy, and/or Python builtins. Here are the cProfile statistics sorted by 'tottime' (total time within the function itself). Note that I'm obscuring the project name and file names, since the code is not owned by me and I don't have permission to share details.
foo.sort_stats('tottime').print_stats(50)
Wed Jun 5 13:18:28 2019 c:\localwork\xxxxxx\profile_data
297514385 function calls (291105230 primitive calls) in 306.898 seconds
Ordered by: internal time
List reduced from 4141 to 50 due to restriction <50>
ncalls tottime percall cumtime percall filename:lineno(function)
281307 31.918 0.000 34.731 0.000 {pandas._libs.lib.infer_dtype}
800 31.443 0.039 31.476 0.039 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\numpy\lib\function_base.py:4703(delete)
109668 23.837 0.000 23.837 0.000 {method 'clear' of 'dict' objects}
153481 19.369 0.000 19.369 0.000 {method 'ravel' of 'numpy.ndarray' objects}
5861614 14.182 0.000 78.492 0.000 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\indexes\base.py:3090(get_value)
5861614 8.891 0.000 8.891 0.000 {method 'get_value' of 'pandas._libs.index.IndexEngine' objects}
5861614 8.376 0.000 99.084 0.000 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\series.py:764(__getitem__)
26840695 7.032 0.000 11.009 0.000 {built-in method builtins.isinstance}
26489324 6.547 0.000 14.410 0.000 {built-in method builtins.getattr}
11846279 6.177 0.000 19.809 0.000 {pandas._libs.lib.values_from_object}
[...]
Is there a sensible way for me to figure out which parts of my code are excessively leaning on these library functions and built-ins? I anticipate one answer would be "look at the cumulative time statistics, that will probably indicate where these costly calls are originating". The cumulative times give a little bit of insight:
foo.sort_stats('cumulative').print_stats(50)
Wed Jun 5 13:18:28 2019 c:\localwork\xxxxxx\profile_data
297514385 function calls (291105230 primitive calls) in 306.898 seconds
Ordered by: cumulative time
List reduced from 4141 to 50 due to restriction <50>
ncalls tottime percall cumtime percall filename:lineno(function)
643/1 0.007 0.000 307.043 307.043 {built-in method builtins.exec}
1 0.000 0.000 307.043 307.043 xxxxxx.py:1(<module>)
1 0.002 0.002 306.014 306.014 xxxxxx.py:264(write_xxx_data)
1 0.187 0.187 305.991 305.991 xxxxxx.py:256(write_yyyy_data)
1 0.077 0.077 305.797 305.797 xxxxxx.py:250(make_zzzzzzz)
1 0.108 0.108 187.845 187.845 xxxxxx.py:224(generate_xyzxyz)
108223 1.977 0.000 142.816 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\indexing.py:298(_setitem_with_indexer)
1 0.799 0.799 126.733 126.733 xxxxxx.py:63(populate_abcabc_data)
1 0.030 0.030 117.874 117.874 xxxxxx.py:253(<listcomp>)
7201 0.077 0.000 116.612 0.016 C:\LocalWork\xxxxxx\yyyyyyyyyyyy.py:234(xxx_yyyyyy)
108021 0.497 0.000 112.908 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\indexing.py:182(__setitem__)
5861614 8.376 0.000 99.084 0.000 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\series.py:764(__getitem__)
110024 0.917 0.000 81.210 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\internals.py:3500(apply)
108021 0.185 0.000 80.685 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\internals.py:3692(setitem)
5861614 14.182 0.000 78.492 0.000 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\indexes\base.py:3090(get_value)
108021 1.887 0.000 73.064 0.001 C:\LocalWork\WPy-3710\python-3.7.1.amd64\lib\site-packages\pandas\core\internals.py:819(setitem)
[...]
Is there a good way to pin down the hot spots -- better than "crawl through xxxxxx.py and search for all the places where Pandas might be inferring a datatype, and where Numpy might be deleting objects"...?
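One approach worth trying (not from the original post): pstats can report, for a given function, which callers account for its calls and time. A minimal sketch, reusing the dump path shown above, with regular-expression restrictions matching the hotspot names:
import pstats

# load the same dump that produced the listings above
stats = pstats.Stats(r"c:\localwork\xxxxxx\profile_data")

# for each hotspot, list the callers that account for its calls and time;
# the arguments are regular expressions matched against file:lineno(function)
stats.print_callers("infer_dtype")
stats.print_callers(r"function_base\.py.*delete")
stats.print_callers(r"series\.py.*__getitem__")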

Why am I not seeing speed up via multiprocessing in Python?

I am trying to parallelize an embarrassingly parallel for loop (previously asked here) and settled on this implementation that fits my parameters:
with Manager() as proxy_manager:
    shared_inputs = proxy_manager.list([datasets, train_size_common, feat_sel_size, train_perc,
                                        total_test_samples, num_classes, num_features, label_set,
                                        method_names, pos_class_index, out_results_dir, exhaustive_search])
    partial_func_holdout = partial(holdout_trial_compare_datasets, *shared_inputs)

    with Pool(processes=num_procs) as pool:
        cv_results = pool.map(partial_func_holdout, range(num_repetitions))
The reason I need to use a proxy object (shared between processes) is that the first element of the shared list, datasets, is itself a list of large objects (each about 200-300 MB). This datasets list usually has 5-25 elements. I typically need to run this program on an HPC cluster.
Here is the question: when I run this program with 32 processes and 50 GB of memory (num_repetitions=200, with datasets being a list of 10 objects, each 250 MB), I do not see a speedup even by a factor of 16 (with 32 parallel processes). I do not understand why. Any clues? Any obvious mistakes or bad choices? Where can I improve this implementation? Any alternatives?
I am sure this has been discussed before, and the reasons can be varied and very specific to the implementation, so I'd appreciate your two cents. Thanks.
Update: I did some profiling with cProfile to get a better idea - here is some info, sorted by cumulative time.
In [19]: p.sort_stats('cumulative').print_stats(50)
Mon Oct 16 16:43:59 2017 profiling_log.txt
555404 function calls (543552 primitive calls) in 662.201 seconds
Ordered by: cumulative time
List reduced from 4510 to 50 due to restriction <50>
ncalls tottime percall cumtime percall filename:lineno(function)
897/1 0.044 0.000 662.202 662.202 {built-in method builtins.exec}
1 0.000 0.000 662.202 662.202 test_rhst.py:2(<module>)
1 0.001 0.001 661.341 661.341 test_rhst.py:70(test_chance_classifier_binary)
1 0.000 0.000 661.336 661.336 /Users/Reddy/dev/neuropredict/neuropredict/rhst.py:677(run)
4 0.000 0.000 661.233 165.308 /Users/Reddy/anaconda/envs/py36/lib/python3.6/threading.py:533(wait)
4 0.000 0.000 661.233 165.308 /Users/Reddy/anaconda/envs/py36/lib/python3.6/threading.py:263(wait)
23 661.233 28.749 661.233 28.749 {method 'acquire' of '_thread.lock' objects}
1 0.000 0.000 661.233 661.233 /Users/Reddy/anaconda/envs/py36/lib/python3.6/multiprocessing/pool.py:261(map)
1 0.000 0.000 661.233 661.233 /Users/Reddy/anaconda/envs/py36/lib/python3.6/multiprocessing/pool.py:637(get)
1 0.000 0.000 661.233 661.233 /Users/Reddy/anaconda/envs/py36/lib/python3.6/multiprocessing/pool.py:634(wait)
866/8 0.004 0.000 0.868 0.108 <frozen importlib._bootstrap>:958(_find_and_load)
866/8 0.003 0.000 0.867 0.108 <frozen importlib._bootstrap>:931(_find_and_load_unlocked)
720/8 0.003 0.000 0.865 0.108 <frozen importlib._bootstrap>:641(_load_unlocked)
596/8 0.002 0.000 0.865 0.108 <frozen importlib._bootstrap_external>:672(exec_module)
1017/8 0.001 0.000 0.863 0.108 <frozen importlib._bootstrap>:197(_call_with_frames_removed)
522/51 0.001 0.000 0.765 0.015 {built-in method builtins.__import__}
Here is the profiling info, now sorted by internal time:
In [20]: p.sort_stats('time').print_stats(20)
Mon Oct 16 16:43:59 2017 profiling_log.txt
555404 function calls (543552 primitive calls) in 662.201 seconds
Ordered by: internal time
List reduced from 4510 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
23 661.233 28.749 661.233 28.749 {method 'acquire' of '_thread.lock' objects}
115/80 0.177 0.002 0.211 0.003 {built-in method _imp.create_dynamic}
595 0.072 0.000 0.072 0.000 {built-in method marshal.loads}
1 0.045 0.045 0.045 0.045 {method 'acquire' of '_multiprocessing.SemLock' objects}
897/1 0.044 0.000 662.202 662.202 {built-in method builtins.exec}
3 0.042 0.014 0.042 0.014 {method 'read' of '_io.BufferedReader' objects}
2037/1974 0.037 0.000 0.082 0.000 {built-in method builtins.__build_class__}
286 0.022 0.000 0.061 0.000 /Users/Reddy/anaconda/envs/py36/lib/python3.6/site-packages/scipy/misc/doccer.py:12(docformat)
2886 0.021 0.000 0.021 0.000 {built-in method posix.stat}
79 0.016 0.000 0.016 0.000 {built-in method posix.read}
597 0.013 0.000 0.021 0.000 <frozen importlib._bootstrap_external>:830(get_data)
276 0.011 0.000 0.013 0.000 /Users/Reddy/anaconda/envs/py36/lib/python3.6/sre_compile.py:250(_optimize_charset)
108 0.011 0.000 0.038 0.000 /Users/Reddy/anaconda/envs/py36/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py:626(_construct_argparser)
1225 0.011 0.000 0.050 0.000 <frozen importlib._bootstrap_external>:1233(find_spec)
7179 0.009 0.000 0.009 0.000 {method 'splitlines' of 'str' objects}
33 0.008 0.000 0.008 0.000 {built-in method posix.waitpid}
283 0.008 0.000 0.015 0.000 /Users/Reddy/anaconda/envs/py36/lib/python3.6/site-packages/scipy/misc/doccer.py:128(indentcount_lines)
3 0.008 0.003 0.008 0.003 {method 'poll' of 'select.poll' objects}
7178 0.008 0.000 0.008 0.000 {method 'expandtabs' of 'str' objects}
597 0.007 0.000 0.007 0.000 {method 'read' of '_io.FileIO' objects}
More profiling info sorted by percall info:
Update 2
The elements in the large list datasets I mentioned earlier are not usually that big - they are typically 10-25 MB each. But depending on the floating-point precision used and the number of samples and features, this can easily grow to 500 MB-1 GB per element. Hence I'd prefer a solution that can scale.
Update 3:
The code inside holdout_trial_compare_datasets uses scikit-learn's GridSearchCV, which internally uses the joblib library if we set n_jobs > 1 (or whenever we set it at all). This might lead to some bad interactions between multiprocessing and joblib. So I am trying another configuration where I do not set n_jobs at all (which should default to no parallelism within scikit-learn). Will keep you posted.
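(For reference, explicitly pinning the inner search to a single job would look like the sketch below; the estimator and parameter grid are placeholders, not the original code.)
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# n_jobs=1 keeps joblib from spawning its own workers inside each
# multiprocessing.Pool worker; the estimator and grid here are made up
grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1.0, 10.0]}, n_jobs=1)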
Based on the discussion in the comments, I did a mini experiment comparing three versions of the implementation:
v1: basically the same as your approach. In fact, since partial(f1, *shared_inputs) unpacks proxy_manager.list immediately, Manager.List is not really involved here; the data is passed to the workers through Pool's internal queue.
v2: actually uses Manager.List; the work function receives a ListProxy object and fetches the shared data via an internal connection to a server process.
v3: the child processes share the data from the parent, taking advantage of the fork(2) system call.
import multiprocessing as mp
from functools import partial
from time import time

def f1(*args):
    for e in args[0]: pow(e, 2)

def f2(*args):
    for e in args[0][0]: pow(e, 2)

def f3(n):
    for i in datasets: pow(i, 2)

def v1(np):
    with mp.Manager() as proxy_manager:
        shared_inputs = proxy_manager.list([datasets, ])
        pf = partial(f1, *shared_inputs)
        with mp.Pool(processes=np) as pool:
            r = pool.map(pf, range(16))

def v2(np):
    with mp.Manager() as proxy_manager:
        shared_inputs = proxy_manager.list([datasets, ])
        pf = partial(f2, shared_inputs)
        with mp.Pool(processes=np) as pool:
            r = pool.map(pf, range(16))

def v3(np):
    with mp.Pool(processes=np) as pool:
        r = pool.map(f3, range(16))

datasets = [2.0 for _ in range(10 * 1000 * 1000)]

for f in (v1, v2, v3):
    print(f.__code__.co_name)
    for np in (2, 4, 8, 16):
        s = time()
        f(np)
        print("%s %.2fs" % (np, time() - s))
Results taken on a 16-core E5-2682 VPC show that v3 scales better.
{method 'acquire' of '_thread.lock' objects}
Looking at your profiler output, I would say that the shared-object lock/unlock overhead overwhelms the speed gains of multiprocessing.
Refactor so that the work is farmed out to workers that do not need to talk to one another as much.
Specifically, if possible, derive one answer per data pile and then act on the accumulated results.
This is why Queues can seem so much faster: they involve a type of work that does not require an object that has to be 'managed' and so locked/unlocked.
Only 'manage' things that absolutely need to be shared between processes. Your managed list contains some very complicated looking objects...
A faster paradigm is:
allwork = manager.list([a, b, c])
theresult = manager.list()
and then
while allwork:
    unitofwork = allwork.pop()
    theresult.append(myfunction(unitofwork))
If you do not need a complex shared object, then only use a list of the most simple objects imaginable.
Then tell the workers to acquire the complex data that they can process in their own little world.
Try:
allwork = manager.list([datasetid1, datasetid2, ...])
theresult = manager.list()

while allwork:
    unitofworkid = allwork.pop()
    theresult.append(myfunction(unitofworkid))

def myfunction(unitofworkid):
    thework = acquiredataset(unitofworkid)
    result = holdout_trial_compare_datasets(thework, ...)
    return result
I hope this makes sense. It should not take too much time to refactor in this direction, and you should see the {method 'acquire' of '_thread.lock' objects} number drop like a rock when you profile.
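A minimal runnable sketch of that direction, under the assumption that each worker can load its own dataset by ID (load_dataset and the stand-in computation are hypothetical, not the original code):
import multiprocessing as mp

def load_dataset(dataset_id):
    # hypothetical loader: each worker reads its own copy from disk,
    # so nothing large is pickled through the Pool or a Manager
    return [float(dataset_id)] * 1000   # stand-in for a real 250 MB dataset

def one_trial(args):
    dataset_id, repetition = args
    data = load_dataset(dataset_id)
    return sum(x ** 2 for x in data)    # stand-in for holdout_trial_compare_datasets

if __name__ == "__main__":
    work = [(d, r) for d in range(10) for r in range(20)]
    with mp.Pool(processes=4) as pool:
        results = pool.map(one_trial, work)
    print(len(results))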

Optimize function slicing numpy arrays

I have the following function, which takes a numpy array of floats and an integer as its arguments. Each row in the array 'counts' is the result of some experiment, and I want to randomly draw a set of the experiments (with replacement) and add them up, then repeat this process to create lots of sample groups.
def my_function(counts, nSamples):
    ''' Create multiple randomly drawn (with replacement)
        samples from the raw data '''
    nSat, nRegions = counts.shape
    sampleData = np.zeros((nSamples, nRegions))
    for i in range(nSamples):
        rc = np.random.randint(0, nSat, size=nSat)
        sampleData[i] = counts[rc].sum(axis=0)
    return sampleData
This function seems quite slow; typically counts has around 100,000 rows (and 4 columns) and nSamples is around 2000. I have tried using numba and implicit for loops to speed up this code, with no success.
What are some other methods I could try to increase the speed?
I have run cProfile on the function and got the following output.
8005 function calls in 60.208 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 60.208 60.208 <string>:1(<module>)
2000 0.010 0.000 13.306 0.007 _methods.py:31(_sum)
1 40.950 40.950 60.208 60.208 optimize_bootstrap.py:25(bootstrap)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2000 5.938 0.003 5.938 0.003 {method 'randint' of 'mtrand.RandomState' objects}
2000 13.296 0.007 13.296 0.007 {method 'reduce' of 'numpy.ufunc' objects}
2000 0.015 0.000 13.321 0.007 {method 'sum' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {numpy.core.multiarray.zeros}
1 0.000 0.000 0.000 0.000 {range}
Are you sure that
rc = np.random.randint(0,nSat,size=nSat)
is what you want, instead of size=someconstant? Otherwise you're summing over all the rows with many repeats.
Edit
Does it help to replace the slicing altogether with a matrix product?
rcvec = np.zeros(nSat, dtype=int)
for j in rc:
    rcvec[j] += 1
sampleData[i] = rcvec.dot(counts)
(maybe there is a function in numpy that can give you rcvec faster)
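(For what it's worth, np.bincount does this counting in one call; a small sketch, not part of the original answer:)
import numpy as np

nSat = 8
rc = np.random.randint(0, nSat, size=nSat)

# counts how many times each index appears in rc; minlength pads the result to length nSat
rcvec = np.bincount(rc, minlength=nSat)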
Simply generate all the indices in one go with a 2D size argument to np.random.randint, use those to index into the counts array, and then sum along the first axis, just as in the loopy version.
Thus, one vectorized (and therefore faster) way would be:
RC = np.random.randint(0,nSat,size=(nSat, nSamples))
sampleData_out = counts[RC].sum(axis=0)

How do I speed up a piece of python code which has a numpy function embedded in it?

Here is the rate limiting function in my code
def timepropagate(wv1, ham11,
                  ham12, ham22, scalararray, nt):
    wv2 = np.zeros((nx, ny), 'c16')
    fw1 = np.zeros((nx, ny), 'c16')
    fw2 = np.zeros((nx, ny), 'c16')
    for t in range(0, nt, 1):
        wv1, wv2 = scalararray*wv1, scalararray*wv2
        fw1, fw2 = (np.fft.fft2(wv1), np.fft.fft2(wv2))
        fw1 = ham11*fw1+ham12*fw2
        fw2 = ham12*fw1+ham22*fw2
        wv1, wv2 = (np.fft.ifft2(fw1), np.fft.ifft2(fw2))
        wv1, wv2 = scalararray*wv1, scalararray*wv2
    del(fw1)
    del(fw2)
    return np.array([wv1, wv2])
What I need is a reasonably fast implementation that would let this run at least twice as fast, preferably faster.
The more general question I'm interested in is how I can speed up this piece using as few trips back to Python as possible. I assume that even if I speed up specific segments of the code, say the scalar-array multiplications, I would still come back to Python for the Fourier transforms, which takes time. Are there ways, using, say, numba or cython, to avoid this "coming back" to Python in the middle of the loop?
On a personal note, I'd prefer something fast on a single thread, considering that I'd be using my other threads already.
Edit: here are the profiling results, for 4096x4096 arrays and 10 time steps; I need to scale this up to nt = 8000.
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.099 0.099 432.556 432.556 <string>:1(<module>)
40 0.031 0.001 28.792 0.720 fftpack.py:100(fft)
40 45.867 1.147 68.055 1.701 fftpack.py:195(ifft)
80 0.236 0.003 47.647 0.596 fftpack.py:46(_raw_fft)
40 0.102 0.003 1.260 0.032 fftpack.py:598(_cook_nd_args)
40 1.615 0.040 99.774 2.494 fftpack.py:617(_raw_fftnd)
20 0.225 0.011 29.739 1.487 fftpack.py:819(fft2)
20 2.252 0.113 72.512 3.626 fftpack.py:908(ifft2)
80 0.000 0.000 0.000 0.000 fftpack.py:93(_unitary)
40 0.631 0.016 0.820 0.021 fromnumeric.py:43(_wrapit)
80 0.009 0.000 0.009 0.000 fromnumeric.py:457(swapaxes)
40 0.338 0.008 1.158 0.029 fromnumeric.py:56(take)
200 0.064 0.000 0.219 0.001 numeric.py:414(asarray)
1 329.728 329.728 432.458 432.458 profiling.py:86(timepropagate)
1 0.036 0.036 432.592 432.592 {built-in method builtins.exec}
40 0.001 0.000 0.001 0.000 {built-in method builtins.getattr}
120 0.000 0.000 0.000 0.000 {built-in method builtins.len}
241 3.930 0.016 3.930 0.016 {built-in method numpy.core.multiarray.array}
3 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.zeros}
40 18.861 0.472 18.861 0.472 {built-in method numpy.fft.fftpack_lite.cfftb}
40 28.539 0.713 28.539 0.713 {built-in method numpy.fft.fftpack_lite.cfftf}
1 0.000 0.000 0.000 0.000 {built-in method numpy.fft.fftpack_lite.cffti}
80 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
40 0.006 0.000 0.006 0.000 {method 'astype' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
80 0.000 0.000 0.000 0.000 {method 'pop' of 'list' objects}
40 0.000 0.000 0.000 0.000 {method 'reverse' of 'list' objects}
80 0.000 0.000 0.000 0.000 {method 'setdefault' of 'dict' objects}
80 0.001 0.000 0.001 0.000 {method 'swapaxes' of 'numpy.ndarray' objects}
40 0.022 0.001 0.022 0.001 {method 'take' of 'numpy.ndarray' objects}
I think I did it wrong the first time, using time.time() to measure time differences for small arrays and extrapolating the conclusions to larger ones.
If most of the time is spent in the Hamiltonian multiplication, you may want to apply numba to that part. The biggest benefit comes from removing all the temporary arrays that NumPy needs when it evaluates those expressions.
Also bear in mind that (4096, 4096) complex128 arrays are far too big to fit comfortably in the processor caches; a single matrix takes 256 MiB. So performance is unlikely to be limited by the arithmetic itself, but rather by memory bandwidth. Implement those operations so that you make only one pass over the input operands. This is straightforward to do in numba. Note: you only need to implement the Hamiltonian expressions in numba.
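A minimal sketch of that idea (my own, assuming the arrays are (nx, ny) complex128; only the Hamiltonian step is compiled, the FFTs stay in NumPy):
import numpy as np
from numba import njit

@njit(cache=True)
def apply_hamiltonian(ham11, ham12, ham22, fw1, fw2, out1, out2):
    # one pass over the operands, no temporary arrays
    nx, ny = fw1.shape
    for i in range(nx):
        for j in range(ny):
            a = fw1[i, j]
            b = fw2[i, j]
            # both output rows use the original (fw1, fw2); the posted loop feeds
            # the already-updated fw1 into the fw2 line, which may not be intended
            out1[i, j] = ham11[i, j] * a + ham12[i, j] * b
            out2[i, j] = ham12[i, j] * a + ham22[i, j] * b
Allocating out1 and out2 once, outside the time loop, and reusing them keeps the memory traffic to roughly one read and one write per element.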
I also want to point out that the "preallocations" using np.zeros suggest that your code is not doing what you intend, because:
fw1 = ham11*fw1+ham12*fw2
fw2 = ham12*fw1+ham22*fw2
will actually create new arrays and rebind fw1 and fw2. If your intent was to reuse the buffers, you would have to write "fw1[:, :] = ...". Otherwise the np.zeros calls do nothing but waste time and memory.
You may also want to consider joining (wv1, wv2) into a single (2, 4096, 4096) complex128 array, and the same for (fw1, fw2). That way the code becomes simpler, since you can rely on broadcasting to handle the scalararray product, and fft2/ifft2 will still do the right thing over the last two axes (AFAIK).
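A small sketch of that layout (sizes reduced so it runs quickly; not the original code):
import numpy as np

nx = ny = 256
wv = np.zeros((2, nx, ny), dtype='c16')             # wv[0] plays the role of wv1, wv[1] of wv2
scalararray = np.random.rand(nx, ny).astype('c16')

wv = scalararray * wv        # broadcasts over the leading axis of size 2
fw = np.fft.fft2(wv)         # fft2/ifft2 transform the last two axes by default
wv = np.fft.ifft2(fw)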

Python: Calling functions is really slow?

I've got an Element class that has some functions, one like this:
def clean(self):
    self.dirty = False
I have 1024 elements, and I'm calling clean on each one of them in a while 1: loop.
If I stop calling the clean method, game framerate goes up from 76fps to 250 fps.
This is pretty disturbing. Do I really have to be THIS careful not to completely lag out my code?
Edit (here's the full code):
250 fps code
for layer in self.layers:
    elements = self.layers[layer]
    for element in elements:
        if element.getDirty():
            element.update()
            self.renderImage(element.getImage(), element.getRenderPos())
            element.clean()
76fps code
for layer in self.layers:
    elements = self.layers[layer]
    for element in elements:
        if element.getDirty():
            element.update()
            self.renderImage(element.getImage(), element.getRenderPos())
        element.clean()
Edit 2 (here are the profiling results):
Sat Feb 9 22:39:58 2013 stats.dat
23060170 function calls (23049668 primitive calls) in 27.845 seconds
Ordered by: internal time
List reduced from 613 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
3720076 5.971 0.000 12.048 0.000 element.py:47(clean)
909 4.869 0.005 17.918 0.020 chipengine.py:30(updateElements)
3742947 4.094 0.000 5.443 0.000 copy.py:67(copy)
4101 3.972 0.001 3.972 0.001 engine.py:152(killScheduledElements)
11773 1.321 0.000 1.321 0.000 {method 'blit' of 'pygame.Surface' objects}
4 1.210 0.302 1.295 0.324 resourceloader.py:14(__init__)
3720076 0.918 0.000 0.918 0.000 element.py:55(getDirty)
1387 0.712 0.001 0.712 0.001 {built-in method flip}
3742947 0.705 0.000 0.705 0.000 copy.py:102(_copy_immutable)
3728284 0.683 0.000 0.683 0.000 {method 'copy' of 'pygame.Rect' objects}
3743140 0.645 0.000 0.645 0.000 {method 'get' of 'dict' objects}
5494 0.566 0.000 0.637 0.000 element.py:89(isBeclouded)
2296 0.291 0.000 0.291 0.000 {built-in method get}
1 0.267 0.267 0.267 0.267 {built-in method init}
1387 0.244 0.000 25.714 0.019 engine.py:67(updateElements)
2295 0.143 0.000 0.143 0.000 {method 'tick' of 'Clock' objects}
11764 0.095 0.000 0.169 0.000 element.py:30(update)
8214/17 0.062 0.000 4.455 0.262 engine.py:121(createElement)
40 0.052 0.001 0.052 0.001 {built-in method load_extended}
36656 0.046 0.000 0.067 0.000 element.py:117(isCollidingWith)
The profile shows that calling the clean method takes about 6 of the 28 seconds profiled. It also gets called 3.7 million times during that time.
That means that the loop you are showing must be the main loop of the software. That main loop does only the following things:
Checks if the element is dirty.
If it is, it draws it.
Then cleans it.
Since most elements are not dirty (update() only gets called 11 thousand times out of these 3.7 million loop iterations), the end result is that your main loop is doing essentially only one thing: checking whether the element is dirty and then calling .clean() on it.
By only calling clean if the element is dirty, you have effectively cut the main loop in half.
Do I really have to be THIS careful not to completely lag out my code?
Yes. If you have a very tight loop that most of the time does nothing, then you have to make sure that the loop is in fact tight.
This is pretty disturbing.
No, it's a fundamental computing fact.
(This should be a comment, but my reputation is too low to comment.)
If you are calling element.getDirty() 3.7 million times
and it's only dirty 11 thousand times,
you should be keeping a dirty list, not polling every time.
That is, don't set the dirty flag, but add the dirty element to a dirty element list.
It looks like you might need a dirty list for each layer.
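A rough sketch of that idea (the names below are hypothetical, not the original Element/engine API):
class Element:
    def __init__(self, layer_dirty_list):
        self._dirty_list = layer_dirty_list

    def mark_dirty(self):
        # instead of setting a flag that the main loop has to poll,
        # enqueue this element for the next frame
        self._dirty_list.append(self)

def render_layer(dirty_list, render_image):
    # touch only the elements that actually changed this frame
    while dirty_list:
        element = dirty_list.pop()
        element.update()
        render_image(element.getImage(), element.getRenderPos())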
