Why is numpy.array so slow?

I am baffled by this:
def main():
    for i in xrange(2560000):
        a = [0.0, 0.0, 0.0]
main()

$ time python test.py
real    0m0.793s
Let's now see with numpy:
import numpy
def main():
    for i in xrange(2560000):
        a = numpy.array([0.0, 0.0, 0.0])
main()

$ time python test.py
real    0m39.338s
Holy CPU cycles batman!
Using numpy.zeros(3) improves things, but it is still not fast enough IMHO:
$ time python test.py
real 0m5.610s
user 0m5.449s
sys 0m0.070s
numpy.version.version = '1.5.1'
If you are wondering if the list creation is skipped for optimization in the first example, it is not:
5 19 LOAD_CONST 2 (0.0)
22 LOAD_CONST 2 (0.0)
25 LOAD_CONST 2 (0.0)
28 BUILD_LIST 3
31 STORE_FAST 1 (a)

Numpy is optimised for large amounts of data. Give it a tiny length-3 array and, unsurprisingly, it performs poorly.
Consider a separate test
import timeit
reps = 100
pythonTest = timeit.Timer('a = [0.] * 1000000')
numpyTest = timeit.Timer('a = numpy.zeros(1000000)', setup='import numpy')
uninitialised = timeit.Timer('a = numpy.empty(1000000)', setup='import numpy')
# empty simply allocates the memory. Thus the initial contents of the array
# is random noise
print 'python list:', pythonTest.timeit(reps), 'seconds'
print 'numpy array:', numpyTest.timeit(reps), 'seconds'
print 'uninitialised array:', uninitialised.timeit(reps), 'seconds'
And the output is
python list: 1.22042918205 seconds
numpy array: 1.05412316322 seconds
uninitialised array: 0.0016028881073 seconds
It would seem that it is the zeroing of the array that is taking all the time for numpy. So unless you need the array to be initialised, try using empty.
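As a hedged illustration (array size and names are made up), the typical pattern where empty is safe is when every element gets overwritten before it is ever read:
import numpy as np

buf = np.empty(1000000)        # uninitialised memory; contents are garbage at this point
buf[:] = np.arange(1000000)    # every element is assigned before it is read
# if any element could be read before assignment, np.zeros would be the safe choice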

Holy CPU cycles batman!, indeed.
But please consider something more fundamental about numpy: sophisticated linear-algebra-based functionality (such as random number generation or singular value decomposition). Now consider these seemingly simple calculations:
In []: A= rand(2560000, 3)
In []: %timeit rand(2560000, 3)
1 loops, best of 3: 296 ms per loop
In []: %timeit u, s, v= svd(A, full_matrices= False)
1 loops, best of 3: 571 ms per loop
and please trust me that this kind of performance will not be beaten significantly by any package currently available.
So, please describe your real problem, and I'll try to figure out a decent numpy-based solution for it.
Update:
Here is some simple code for ray-sphere intersection:
import numpy as np

def mag(X):
    # magnitude
    return (X**2).sum(0)**.5

def closest(R, c):
    # closest point on ray to center and its distance
    P = np.dot(c.T, R) * R
    return P, mag(P - c)

def intersect(R, P, h, r):
    # intersection of rays and sphere
    return P - (h * (2 * r - h))**.5 * R

# set up
c, r = np.array([10, 10, 10])[:, None], 2.  # center, radius
n = int(5e5)
R = np.random.rand(3, n)  # some random rays in first octant
R = R / mag(R)            # normalized to unit length

# find rays which will intersect sphere
P, b = closest(R, c)
wi = b <= r

# and for those which will, find the intersection
X = intersect(R[:, wi], P[:, wi], r - b[wi], r)
Apparently we calculated correctly:
In []: allclose(mag(X- c), r)
Out[]: True
And some timings:
In []: %timeit P, b = closest(R, c)
10 loops, best of 3: 93.4 ms per loop
In []: n / 0.0934
Out[]: 5353319  # => more than 5 million detections of possible intersections per second
In []: %timeit X = intersect(R[:, wi], P[:, wi], r - b[wi], r)
10 loops, best of 3: 32.7 ms per loop
In []: X.shape[1] / 0.0327
Out[]: 874037  # => almost 1 million actual intersections per second
These timings were done on a very modest machine. With a modern machine, a significant further speed-up can still be expected.
Anyway, this is only a short demonstration of how to code with numpy.

Late answer, but could be important for other viewers.
This problem has been considered in the kwant project as well.
Indeed, small arrays are not optimized in numpy, and quite frequently small arrays are exactly what you need.
In this regard they created a substitute for small arrays which behaves like, and co-exists with, numpy arrays (any operation not implemented in the new data type is processed by numpy).
You should look into this project:
https://pypi.python.org/pypi/tinyarray/1.0.5
whose main purpose is to behave nicely for small arrays. Of course some of the fancier things you can do with numpy are not supported by it. But numerics seems to be what you need; a short usage sketch follows.
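A minimal usage sketch (assuming tinyarray is installed; values and the use of ta.dot are illustrative):
import numpy as np
import tinyarray as ta

v = ta.array([0.0, 0.0, 0.0])   # cheap, immutable small array
w = ta.array([1.0, 2.0, 3.0])
s = v + w                       # basic arithmetic is handled by tinyarray itself
print(ta.dot(w, w))             # a few numpy-like helpers are provided directly
n = np.asarray(s)               # anything fancier can go through numpy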
I have made some small tests:
python
(I added the numpy import here as well, so that the import cost is the same in all variants.)
import numpy
def main():
    for i in xrange(2560000):
        a = [0.0, 0.0, 0.0]
main()
numpy
import numpy
def main():
    for i in xrange(2560000):
        a = numpy.array([0.0, 0.0, 0.0])
main()
numpy-zero
import numpy
def main():
    for i in xrange(2560000):
        a = numpy.zeros((3,1))
main()
tinyarray
import numpy,tinyarray
def main():
    for i in xrange(2560000):
        a = tinyarray.array([0.0, 0.0, 0.0])
main()
tinyarray-zero
import numpy,tinyarray
def main():
    for i in xrange(2560000):
        a = tinyarray.zeros((3,1))
main()
I ran this:
for f in python numpy numpy_zero tiny tiny_zero ; do
    echo $f
    for i in `seq 5` ; do
        time python ${f}_test.py
    done
done
And got:
python
python ${f}_test.py 0.31s user 0.02s system 99% cpu 0.339 total
python ${f}_test.py 0.29s user 0.03s system 98% cpu 0.328 total
python ${f}_test.py 0.33s user 0.01s system 98% cpu 0.345 total
python ${f}_test.py 0.31s user 0.01s system 98% cpu 0.325 total
python ${f}_test.py 0.32s user 0.00s system 98% cpu 0.326 total
numpy
python ${f}_test.py 2.79s user 0.01s system 99% cpu 2.812 total
python ${f}_test.py 2.80s user 0.02s system 99% cpu 2.832 total
python ${f}_test.py 3.01s user 0.02s system 99% cpu 3.033 total
python ${f}_test.py 2.99s user 0.01s system 99% cpu 3.012 total
python ${f}_test.py 3.20s user 0.01s system 99% cpu 3.221 total
numpy_zero
python ${f}_test.py 1.04s user 0.02s system 99% cpu 1.075 total
python ${f}_test.py 1.08s user 0.02s system 99% cpu 1.106 total
python ${f}_test.py 1.04s user 0.02s system 99% cpu 1.065 total
python ${f}_test.py 1.03s user 0.02s system 99% cpu 1.059 total
python ${f}_test.py 1.05s user 0.01s system 99% cpu 1.064 total
tiny
python ${f}_test.py 0.93s user 0.02s system 99% cpu 0.955 total
python ${f}_test.py 0.98s user 0.01s system 99% cpu 0.993 total
python ${f}_test.py 0.93s user 0.02s system 99% cpu 0.953 total
python ${f}_test.py 0.92s user 0.02s system 99% cpu 0.944 total
python ${f}_test.py 0.96s user 0.01s system 99% cpu 0.978 total
tiny_zero
python ${f}_test.py 0.71s user 0.03s system 99% cpu 0.739 total
python ${f}_test.py 0.68s user 0.02s system 99% cpu 0.711 total
python ${f}_test.py 0.70s user 0.01s system 99% cpu 0.721 total
python ${f}_test.py 0.70s user 0.02s system 99% cpu 0.721 total
python ${f}_test.py 0.67s user 0.01s system 99% cpu 0.687 total
Now these tests are (as already pointed out) not the best tests. However, they still show that tinyarray is better suited for small arrays.
Another point is that the most common operations should also be faster with tinyarray, so the benefit is not limited to object creation.
I have never tried it in a fully-fledged project, but the kwant project is using it.

Of course numpy takes more time in this case, since a = np.array([0.0, 0.0, 0.0]) is roughly equivalent to a = [0.0, 0.0, 0.0]; a = np.array(a), i.e. it takes two steps. But numpy arrays have many good qualities; their high speed shows in the operations on them, not in their creation. Just my personal thoughts :).
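To illustrate that point, a hedged sketch (sizes are illustrative): instead of building millions of tiny arrays, build one large array once and let the per-operation cost be amortised over all elements:
import numpy as np

# slow pattern from the question: 2 560 000 separate 3-element arrays
# for i in xrange(2560000):
#     a = numpy.array([0.0, 0.0, 0.0])

# fast pattern: one (2560000, 3) array created once, then operated on as a whole
a = np.zeros((2560000, 3))
a += 1.0                                # one vectorised update touches all elements
norms = (a ** 2).sum(axis=1) ** 0.5     # e.g. row-wise magnitudes in a single expression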


Why does parallelization over more CPUs/cores scale so poorly with Python?

Background
I am trying to speed up computation by parallelization (via joblib), using more of the available cores in Python 3.8, but observed that it scales poorly.
Trials
I wrote a little script (see below) to test and demonstrate the behavior. It is designed to run a completely independent task doing some iterations of dummy operations using NumPy and Pandas. There is no input and no output for the task, no disk or other I/O, nor any communication or shared memory, just plain CPU and RAM usage. The processes do not use any other resources either, other than the occasional request for the current time. Amdahl's Law should not apply to the code here, since there is no common code at all except for process setup.
I ran some experiments with increased workloads by duplicating the tasks, using sequential vs. parallel processing, and measured the time it takes for each iteration and for the whole (parallel) process to complete. I ran the script on my Windows 10 laptop and on two AWS EC2 Linux (Amazon Linux 2) machines. The number of parallel processes never exceeded the number of available cores.
Observation
I observed the following (see results later for details, duration in seconds):
When the number of parallel processes was less than the number of available cores, the total average CPU utilization (user) was never more than 93%, system calls did not exceed 4%, and there was no iowait (measured with iostat -hxm 10)
The workload seems to be distributed equally over the available cores, though, which might be an indication for frequent switches between processes even though there are plenty of cores available
Interestingly, for sequential processing, the CPU utilization (user) was around 48%
The summed duration of all iterations is only slightly less than the total duration of a process, hence the process setup does not seem to be a major factor
For each doubling of the number of parallel processes, the speed per iteration/process decreases by about 50%
Whereas the duration for sequential processing approximately doubles with each doubling of the workload (total number of iterations), as expected,
the duration for the parallel processing also increases significantly, by approx. 50% per doubling
These findings in this magnitude are unexpected to me.
Questions
What is the cause of this behavior?
Am I missing something?
How can it be remedied in order to utilize the full prospect of using more cores?
Detailed results
Windows 10
6 CPUs, 12 cores
Call: python .\time_parallel_processing.py 1,2,4,8 10
Mode    ParallelCount    Duration/Iter mean    Duration/Iter std    Duration total mean    TotalIterCount
Joblib 1 4.363902 0.195268 43.673971 10
2 6.322100 0.140654 63.870973 20
4 9.270582 0.464706 93.631790 40
8 15.489000 0.222859 156.670544 80
Seq 1 4.409772 0.126686 44.133441 10
2 4.465326 0.113183 89.377296 20
4 4.534959 0.125097 181.528372 40
8 4.444790 0.083315 355.849860 80
AWS c5.4xlarge
8 CPUs, 16 cores
Call: python time_parallel_processing.py 1,2,4,8,16 10
Mode    ParCount    Duration/Iter mean    Duration/Iter std    Duration total mean    TotalIterCount
Joblib 1 2.196086 0.009798 21.987626 10
2 3.392873 0.010025 34.297323 20
4 4.519174 0.126054 45.967140 40
8 6.888763 0.676024 71.815990 80
16 12.191278 0.156941 123.287779 160
Seq 1 2.192089 0.010873 21.945536 10
2 2.184294 0.008955 43.735713 20
4 2.201437 0.027537 88.156621 40
8 2.145312 0.009631 171.805374 80
16 2.137723 0.018985 342.393953 160
AWS c5.9xlarge
18 CPUs, 36 cores
Call: python time_parallel_processing.py 1,2,4,8,16,32 10
Mode    ParCount    Duration/Iter mean    Duration/Iter std    Duration total mean    TotalIterCount
Joblib 1 1.888071 0.023799 18.905295 10
2 2.797132 0.009859 28.307708 20
4 3.349333 0.106755 34.199839 40
8 4.273267 0.705345 45.998927 80
16 6.383214 1.455857 70.469109 160
32 10.974141 4.220783 129.671016 320
Seq 1 1.891170 0.030131 18.934494 10
2 1.866365 0.007283 37.373133 20
4 1.893082 0.041085 75.813468 40
8 1.855832 0.007025 148.643725 80
16 1.896622 0.007573 303.828529 160
32 1.864366 0.009142 597.301383 320
Script code
import argparse
import sys
import time
from argparse import Namespace
from typing import List

import numpy as np
import pandas as pd
from joblib import delayed
from joblib import Parallel
from tqdm import tqdm

RESULT_COLUMNS = {"Mode": str, "ParCount": int, "ProcessId": int, "IterId": int, "Duration": float}


def _create_empty_data_frame() -> pd.DataFrame:
    return pd.DataFrame({key: [] for key, _ in RESULT_COLUMNS.items()}).astype(RESULT_COLUMNS)


def _do_task() -> None:
    for _ in range(10):
        array: np.ndarray = np.random.rand(2500, 2500)
        _ = np.matmul(array, array)
        data_frame: pd.DataFrame = pd.DataFrame(np.random.rand(250, 250), columns=list(map(str, list(range(250)))))
        _ = data_frame.merge(data_frame)


def _process(process_id: int, iter_count: int) -> pd.DataFrame:
    durations: pd.DataFrame = _create_empty_data_frame()
    for i in tqdm(range(iter_count)):
        iter_start_time: float = time.time()
        _do_task()
        durations = durations.append(
            {
                "Mode": "",
                "ParCount": 0,
                "ProcessId": process_id,
                "IterId": i,
                "Duration": time.time() - iter_start_time,
            },
            ignore_index=True,
        )
    return durations


def main(args: Namespace) -> None:
    """Execute main script."""
    iter_durations: List[pd.DataFrame] = []
    mode_durations: List[pd.DataFrame] = []
    for par_count in list(map(int, args.par_counts.split(","))):
        total_iter_count: int = par_count * int(args.iter_count)

        print(f"\nRunning {par_count} processes in parallel and {total_iter_count} iterations in total")
        start_time_joblib: float = time.time()
        with Parallel(n_jobs=par_count) as parallel:
            joblib_durations: List[pd.DataFrame] = parallel(
                delayed(_process)(process_id, int(args.iter_count)) for process_id in range(par_count)
            )
        iter_durations.append(pd.concat(joblib_durations).assign(**{"Mode": "Joblib", "ParCount": par_count}))
        end_time_joblib: float = time.time()

        print(f"\nRunning {par_count} processes sequentially with {total_iter_count} iterations in total")
        start_time_seq: float = time.time()
        seq_durations: List[pd.DataFrame] = []
        for process_id in range(par_count):
            seq_durations.append(_process(process_id, int(args.iter_count)))
        iter_durations.append(pd.concat(seq_durations).assign(**{"Mode": "Seq", "ParCount": par_count}))
        end_time_seq: float = time.time()

        mode_durations.append(
            pd.DataFrame(
                {
                    "Mode": ["Joblib", "Seq"],
                    "ParCount": [par_count] * 2,
                    "Duration": [end_time_joblib - start_time_joblib, end_time_seq - start_time_seq],
                    "TotalIterCount": [total_iter_count] * 2,
                }
            )
        )

    print("\nDuration in seconds")
    grouping_columns: List[str] = ["Mode", "ParCount"]
    print(
        pd.concat(iter_durations)
        .groupby(grouping_columns)
        .agg({"Duration": ["mean", "std"]})
        .merge(
            pd.concat(mode_durations).groupby(grouping_columns).agg({"Duration": ["mean"], "TotalIterCount": "mean"}),
            on=grouping_columns,
            suffixes=["/Iter", " total"],
            how="inner",
        )
    )


if __name__ == "__main__":
    print(f"Command line: {sys.argv}")
    parser: argparse.ArgumentParser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument(
        "par_counts",
        help="Comma separated list of parallel processes counts to start trials for (e.g. '1,2,4,8,16,32')",
    )
    parser.add_argument("iter_count", help="Number of iterations per parallel process to carry out")
    args: argparse.Namespace = parser.parse_args()

    start_time: float = time.time()
    main(args)
    print(f"\nTotal elapsed time: {time.time() - start_time:.2f} seconds")
Environment
Created with: conda env create -f environment.time_parallel.yaml
environment.time_parallel.yaml:
name: time_parallel
channels:
- defaults
- conda-forge
dependencies:
- python=3.8.5
- pip=20.3.3
- pandas=1.2.0
- numpy=1.19.2
- joblib=1.0.0
- tqdm=4.55.1
Update 1
Thanks to the comment of @sholderbach I investigated the NumPy/Pandas usage and found out a couple of things.
1)
NumPy uses a linear algebra backend which automatically runs some commands (including matrix multiplication) in parallel threads. This results in too many threads altogether clogging the system; the more parallel processes, the worse it gets, hence the increasing duration per iteration. (A quick way to check this without touching the code is sketched just below; the test I actually ran follows after it.)
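A hedged sketch (assuming a typical OpenBLAS/MKL/OpenMP backend) of capping the linear algebra thread pools via environment variables; these must be set before NumPy is imported:
import os

# must happen before the first "import numpy", otherwise the thread pools already exist
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import numpy as np   # matmul and friends now stay single-threaded inside each worker process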
I tested this hypothesis by removing the NumPy and Pandas operations in the method _do_task and replacing them with simple math operations only:
def _do_task() -> None:
    for _ in range(10):
        for i in range(10000000):
            _ = 1000 ^ 2 % 200
The results are exactly as expected in that the duration of an iteration does not change when increasing the number of processes (beyond the number of cores available).
Windows 10
Call python time_parallel_processing.py 1,2,4,8 5
Duration in seconds
Mode    ParCount    Duration/Iter mean    Duration/Iter std    Duration total mean    TotalIterCount
Joblib 1 2.562570 0.015496 13.468393 5
2 2.556241 0.021074 13.781174 10
4 2.565614 0.054754 16.171828 20
8 2.630463 0.258474 20.328055 40
Seq 2 2.576542 0.033270 25.874965 10
AWS c5.9xlarge
Call python time_parallel_processing.py 1,2,4,8,16,32 10
Mode    ParCount    Duration/Iter mean    Duration/Iter std    Duration total mean    TotalIterCount
Joblib 1 2.082849 0.022352 20.854512 10
2 2.126195 0.034078 21.596860 20
4 2.287874 0.254493 27.420978 40
8 2.141553 0.030316 21.912917 80
16 2.156828 0.137937 24.483243 160
32 3.581366 1.197282 42.884399 320
Seq 2 2.076256 0.004231 41.571033 20
2)
Following the hint of @sholderbach I found a number of other links which cover the topic of linear algebra backends using multiple threads automatically and how to turn this off:
NumPy issue (from @sholderbach)
threadpoolctl package
Nice article
Pinning process to a specific CPU with Python (and package psutil)
Add to _process (a fuller sketch follows below):
proc = psutil.Process()
proc.cpu_affinity([process_id])
with threadpool_limits(limits=1):
    ...
Add to environment:
- threadpoolctl=2.1.0
- psutil=5.8.0
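Put together, a hedged sketch of what the pinned, single-threaded worker could look like (using _do_task, _create_empty_data_frame, pd and time from the script above; details are illustrative):
import psutil
from threadpoolctl import threadpool_limits

def _process(process_id: int, iter_count: int) -> pd.DataFrame:
    proc = psutil.Process()
    proc.cpu_affinity([process_id])        # pin this worker to one logical CPU
    durations = _create_empty_data_frame()
    with threadpool_limits(limits=1):      # keep BLAS/OpenMP pools to a single thread
        for i in range(iter_count):
            iter_start_time = time.time()
            _do_task()
            durations = durations.append(
                {"Mode": "", "ParCount": 0, "ProcessId": process_id,
                 "IterId": i, "Duration": time.time() - iter_start_time},
                ignore_index=True,
            )
    return durations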
Note: I had to replace joblib by multiprocessing, since pinning did not work properly with joblib (only one half of the processes got spawned at a time on Linux).
I did some tests with mixed results. Monitoring shows that pinning and restricting to one thread per process works for both Windows 10 and Linux/AWS c5.9xlarge. Unfortunately, the absolute duration per iteration increases with these "fixes".
Also, the duration per iteration still begins to increase at some point of parallelization.
Here are the results:
Windows 10
Call: python time_parallel_processing.py 1,2,4,8 5
Mode    ParCount    Duration/Iter mean    Duration/Iter std    Duration total mean    TotalIterCount
Joblib 1 9.502184 0.046554 47.542230 5
2 9.557120 0.092897 49.488612 10
4 9.602235 0.078271 50.249238 20
8 10.518716 0.422020 60.186707 40
Seq 2 9.493682 0.062105 95.083382 10
AWS c5.9xlarge
Call python time_parallel_processing.py 1,2,4,8,16,20,24,28,32 5
Mode    ParCount    Duration/Iter mean    Duration/Iter std    Duration total mean    TotalIterCount
Parallel 1 5.271010 0.008730 15.862883 3
2 5.400430 0.016094 16.271649 6
4 5.708021 0.069001 17.428172 12
8 6.088623 0.179789 18.745922 24
16 8.330902 0.177772 25.566504 48
20 10.515132 3.081697 47.895538 60
24 13.506221 4.589382 53.348917 72
28 16.318631 4.961513 57.536180 84
32 19.800182 4.435462 64.717435 96
Seq 2 5.212529 0.037129 31.332297 6
What is the cause for this behavior?
Very generally, this type of slowdown usually indicates some combination of being blocked by the GIL, context-switching between cores, or doing a lot of pickling
Am I missing something?
You may be missing some small issues - try profiling (some sampling profiler may be much more performant than cProfile) to see where the time is spent!
However, there's still a finite limit to how fast this can be made before you are reimplementing the suggestions below
How can it be remedied in order to utilize the full prospect of using more cores?
Take a look at numba and dask, which can allow you to get tremendous speedups on numpy and pandas code through parallelization that steps outside of the GIL
numba compiles numpy code to machine code and caches it, for greater speed and more efficient use of the processor; a minimal sketch follows after this list
dask is a framework full of good tricks for efficient parallelization on a single and multiple systems
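As a hedged illustration (a made-up toy loop, not the question's workload), here is how a pure-Python numeric loop can be compiled and parallelised with numba outside the GIL:
import numpy as np
from numba import njit, prange

@njit(parallel=True, cache=True)
def squares(n):
    out = np.empty(n, dtype=np.int64)
    for i in prange(n):       # prange distributes the loop across threads without the GIL
        out[i] = i ** 2
    return out

result = squares(100000)      # first call compiles; later calls reuse the cached machine code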
When I had a scaling issue with ipyparallel, it was caused by garbage collection during the run.
The source code of timeit shows how to disable gc properly.
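A hedged sketch of that pattern, disabling the collector only around the timed region (run_workload is a made-up stand-in for the real work):
import gc
import time

gc_was_enabled = gc.isenabled()
gc.disable()                      # avoid collector pauses distorting the measurement
try:
    start = time.perf_counter()
    run_workload()                # hypothetical function standing in for the real work
    elapsed = time.perf_counter() - start
finally:
    if gc_was_enabled:
        gc.enable()               # restore the collector exactly as it was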

Did I/O become slower since Python 2.7?

I'm currently working on a small side project in which I want to sort a 20GB file on my machine as fast as possible. The idea is to chunk the file, sort the chunks, and merge the chunks. I just used pyenv to time the radixsort code with different Python versions and saw that 2.7.18 is way faster than 3.6.10, 3.7.7, 3.8.3 and 3.9.0a. Can anybody explain why Python 3.x is slower than 2.7.18 in this simple example? Were there new features added?
import os


def chunk_data(filepath, prefixes):
    """
    Pre-sort and chunk the content of filepath according to the prefixes.

    Parameters
    ----------
    filepath : str
        Path to a text file which should get sorted. Each line contains
        a string which has at least 2 characters and the first two
        characters are guaranteed to be in prefixes
    prefixes : List[str]
    """
    prefix2file = {}
    for prefix in prefixes:
        chunk = os.path.abspath("radixsort_tmp/{:}.txt".format(prefix))
        prefix2file[prefix] = open(chunk, "w")

    # This is where most of the execution time is spent:
    with open(filepath) as fp:
        for line in fp:
            prefix2file[line[:2]].write(line)
Execution times (multiple runs):
2.7.18: 192.2s, 220.3s, 225.8s
3.6.10: 302.5s
3.7.7: 308.5s
3.8.3: 279.8s, 279.7s (binary mode), 295.3s (binary mode), 307.7s, 380.6s (wtf?)
3.9.0a: 292.6s
The complete code is on Github, along with a minimal complete version
Unicode
Yes, I know that Python 3 and Python 2 deal with strings differently. I tried opening the files in binary mode (rb / wb), see the "binary mode" comments; a sketch of the binary-mode variant is below. They are a tiny bit faster on a couple of runs. Still, Python 2.7 is WAY faster on all runs.
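For reference, a hedged sketch of what the binary-mode variant can look like (illustrative; same structure as chunk_data above, keyed by bytes instead of str):
def chunk_data_binary(filepath, prefixes):
    """Same logic as chunk_data, but reads and writes bytes."""
    prefix2file = {}
    for prefix in prefixes:
        chunk = os.path.abspath("radixsort_tmp/{:}.txt".format(prefix))
        prefix2file[prefix.encode()] = open(chunk, "wb")   # binary mode, bytes keys

    with open(filepath, "rb") as fp:
        for line in fp:          # iterating a binary file still yields one line at a time
            prefix2file[line[:2]].write(line)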
Try 1: Dictionary access
When I phrased this question, I thought that dictionary access might be a reason for this difference. However, I think the total execution time is way less for dictionary access than for I/O. Also, timeit did not show anything important:
import timeit
import numpy as np

durations = timeit.repeat(
    'a["b"]',
    repeat=10 ** 6,
    number=1,
    setup="a = {'b': 3, 'c': 4, 'd': 5}",
)

mul = 10 ** -7
print(
    "mean = {:0.1f} * 10^-7, std={:0.1f} * 10^-7".format(
        np.mean(durations) / mul,
        np.std(durations) / mul,
    )
)
print("min = {:0.1f} * 10^-7".format(np.min(durations) / mul))
print("max = {:0.1f} * 10^-7".format(np.max(durations) / mul))
Try 2: Copy time
As a simplified experiment, I tried to copy the 20GB file:
cp via shell: 230s
Python 2.7.18: 237s, 249s
Python 3.8.3: 233s, 267s, 272s
The Python stuff is generated by the following code.
My first thought was that the variance is quite high, so this could be the reason. But then, the variance of the chunk_data execution time is also high, yet the mean is noticeably lower for Python 2.7 than for Python 3.x. So it does not seem to be an I/O scenario as simple as the one I tried here.
import time
import sys
import os

version = sys.version_info
version = "{}.{}.{}".format(version.major, version.minor, version.micro)

if os.path.isfile("numers-tmp.txt"):
    os.remove("numers-tmp.txt")

t0 = time.time()
with open("numbers-large.txt") as fin, open("numers-tmp.txt", "w") as fout:
    for line in fin:
        fout.write(line)
t1 = time.time()

print("Python {}: {:0.0f}s".format(version, t1 - t0))
My System
Ubuntu 20.04
Thinkpad T460p
Python through pyenv
This is a combination of multiple effects, mostly the fact that Python 3 needs to perform unicode decoding/encoding when working in text mode and if working in binary mode it will send the data through dedicated buffered IO implementations.
First of all, using time.time to measure execution time uses the wall time and hence includes all sorts of Python unrelated things such as OS-level caching and buffering, as well as buffering of the storage medium. It also reflects any interference with other processes that require the storage medium. That's why you are seeing these wild variations in timing results. Here are the results for my system, from seven consecutive runs for each version:
py3 = [660.9, 659.9, 644.5, 639.5, 752.4, 648.7, 626.6] # 661.79 +/- 38.58
py2 = [635.3, 623.4, 612.4, 589.6, 633.1, 613.7, 603.4] # 615.84 +/- 15.09
Despite the large variation it seems that these results indeed indicate different timings as can be confirmed for example by a statistical test:
>>> from scipy.stats import ttest_ind
>>> ttest_ind(py2, py3)[1]
0.018729004515179636
i.e. if both sets of timings came from the same distribution, there would be only about a 2% chance of seeing a difference this large.
We can get a more precise picture by measuring the process time rather than the wall time. In Python 2 this can be done via time.clock while Python 3.3+ offers time.process_time. These two functions report the following timings:
py3_process_time = [224.4, 226.2, 224.0, 226.0, 226.2, 223.7, 223.8] # 224.90 +/- 1.09
py2_process_time = [171.0, 171.1, 171.2, 171.3, 170.9, 171.2, 171.4] # 171.16 +/- 0.16
Now there's much less spread in the data since the timings reflect the Python process only.
This data suggests that Python 3 takes about 53.7 seconds longer to execute. Given the large amount of lines in the input file (550_000_000) this amounts to about 97.7 nanoseconds per iteration.
The first effect causing increased execution time are unicode strings in Python 3. The binary data is read from the file, decoded and then encoded again when it is written back. In Python 2 all strings are stored as binary strings right away, so this doesn't introduce any encoding/decoding overhead. You don't see this effect clearly in your tests because it disappears in the large variation introduced by various external resources which are reflected in the wall time difference. For example we can measure the time it takes for a roundtrip from binary to unicode to binary:
In [1]: %timeit b'000000000000000000000000000000000000'.decode().encode()
162 ns ± 2 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
This does include two attribute lookups as well as two function calls, so the actual time needed is smaller than the value reported above. To see the effect on execution time, we can change the test script to use binary modes "rb" and "wb" instead of text modes "r" and "w". This reduces the timing results for Python 3 as follows:
py3_binary_mode = [200.6, 203.0, 207.2] # 203.60 +/- 2.73
That reduces the process time by about 21.3 seconds or 38.7 nanoseconds per iteration. This is in agreement with timing results for the roundtrip benchmark minus timing results for name lookups and function calls:
In [2]: class C:
...: def f(self): pass
...:
In [3]: x = C()
In [4]: %timeit x.f()
82.2 ns ± 0.882 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [5]: %timeit x
17.8 ns ± 0.0564 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each)
Here %timeit x measures the additional overhead of resolving the global name x, and hence the attribute lookup and function call together make up 82.2 - 17.8 == 64.4 nanoseconds. Subtracting this overhead twice from the above roundtrip data gives 162 - 2*64.4 == 33.2 nanoseconds.
Now there's still a difference of 32.4 seconds between Python 3 using binary mode and Python 2. This comes from the fact that all the I/O in Python 3 goes through the (quite complex) implementation of io.BufferedWriter.write, while in Python 2 the file.write method proceeds fairly straightforwardly to fwrite.
We can check the types of the file objects in both implementations:
$ python3.8
>>> type(open('/tmp/test', 'wb'))
<class '_io.BufferedWriter'>
$ python2.7
>>> type(open('/tmp/test', 'wb'))
<type 'file'>
Here we also need to note that the above timing results for Python 2 have been obtained by using text mode, not binary mode. Binary mode aims to support all objects implementing the buffer protocol which results in additional work being performed also for strings (see also this question). If we switch to binary mode also for Python 2 then we obtain:
py2_binary_mode = [212.9, 213.9, 214.3] # 213.70 +/- 0.59
which is actually a bit larger than the Python 3 results (18.4 ns / iteration).
The two implementations also differ in other details such as the dict implementation. To measure this effect we can create a corresponding setup:
from __future__ import print_function
import timeit
N = 10**6
R = 7
results = timeit.repeat(
"d[b'10'].write",
setup="d = dict.fromkeys((str(i).encode() for i in range(10, 100)), open('test', 'rb'))", # requires file 'test' to exist
repeat=R, number=N
)
results = [x/N for x in results]
print(['{:.3e}'.format(x) for x in results])
print(sum(results) / R)
This gives the following results for Python 2 and Python 3:
Python 2: ~ 56.9 nanoseconds
Python 3: ~ 78.1 nanoseconds
This additional difference of about 21.2 nanoseconds amounts to about 12 seconds for the full 550M iterations.
The above timing code checks the dict lookup for only one key, so we also need to verify that there are no hash collisions:
$ python3.8 -c "print(len({str(i).encode() for i in range(10, 100)}))"
90
$ python2.7 -c "print len({str(i).encode() for i in range(10, 100)})"
90

Why is the following simple parallelized code much slower than a simple loop in Python?

A simple program which calculates the squares of numbers and stores the results:
import time
from joblib import Parallel, delayed
import multiprocessing

array1 = [0 for i in range(100000)]

def myfun(i):
    return i**2

#### Simple loop ####
start_time = time.time()
for i in range(100000):
    array1[i] = i**2
print("Time for simple loop --- %s seconds ---" % (time.time() - start_time))

#### Parallelized loop ####
start_time = time.time()
results = Parallel(n_jobs=-1, verbose=0, backend="threading")(
    map(delayed(myfun), range(100000))
)
print("Time for parallelized method --- %s seconds ---" % (time.time() - start_time))

#### Output ####
# >>> ( executing file "Test_vr20.py" )
# Time for simple loop --- 0.015599966049194336 seconds ---
# Time for parallelized method --- 7.763299942016602 seconds ---
Could it be the difference in array handling for the two options? My actual program would have something more complicated but this is the kind of calculation that I need to parallelize, as simply as possible, but not with such results.
System Model: HP ProBook 640 G2, Windows 7, IDLE for Python
System Type: x64-based PC
Processor: Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz, 2401 MHz, 2 Core(s), 4 Logical Processor(s)
From the documentation of threading:
If you know that the function you are calling is based on a compiled
extension that releases the Python Global Interpreter Lock (GIL)
during most of its computation ...
The problem is that, in this case, you don't know that. Python itself will only allow one thread to run at once (the interpreter acquires the GIL every time it executes a Python operation).
threading is only going to be useful if myfun() spends most of its time in a compiled Python extension, and that extension releases the GIL.
The Parallel code is so embarrassingly slow because you are doing a huge amount of work to create multiple threads - and then you only execute one thread at a time anyway.
If you use the multiprocessing backend, then you have to copy the input data into each of four or eight processes (one per core), do the processing in each process, and then copy the output data back. The copying is going to be slow, but if the processing is a little bit more complex than just calculating a square, it might be worth it. Measure and see.
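A hedged sketch of the multiprocessing-backend variant, batching the work so each process receives one large chunk instead of 100000 tiny tasks (chunk size and n_jobs are illustrative):
from joblib import Parallel, delayed

def square_chunk(chunk):
    # one task per chunk keeps per-task scheduling and pickling overhead small
    return [i ** 2 for i in chunk]

if __name__ == "__main__":    # required on Windows when worker processes are spawned
    chunks = [range(start, start + 25000) for start in range(0, 100000, 25000)]
    results = Parallel(n_jobs=4, backend="multiprocessing")(
        delayed(square_chunk)(chunk) for chunk in chunks
    )
    squares = [x for chunk in results for x in chunk]    # flatten back into one list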
Why? Because this tries to use tools in cases where the tools principally cannot, and do not, repay their costs of entry.
I love python.
I pray educators better explain the costs of these tools, otherwise we get lost in these wish-to-get [PARALLEL]-schedules.
A few facts:
No.0: With a lot of simplification, python intentionally uses the GIL to [SERIAL]-ise access to variables and thus avoid any potential collision from [CONCURRENT] modifications, paying the add-on costs of this GIL-stepped dancing in extra time
No.1: [PARALLEL]-code execution is way harder than "just"-[CONCURRENT] execution ( read more )
No.2: A [SERIAL]-process has to pay extra costs if it tries to split work onto [CONCURRENT]-workers
No.3: If a process does inter-worker communication, immense extra costs are paid per data exchange
No.4: If the hardware has few resources for [CONCURRENT] processes, the results get even worse
To get a sense of what can be done in standard python 2.7.13:
Efficiency lies in better using the silicon, not in bulldozing syntax-constructors into territories where they are legal but where their performance has adverse effects on the end-to-end speed of the experiment under test:
You pay about 8 ~ 11 [ms] just to iteratively assemble an empty array1
>>> from zmq import Stopwatch
>>> aClk = Stopwatch()
>>> aClk.start();array1 = [ 0 for i in xrange( 100000 ) ];aClk.stop()
9751L
10146L
10625L
9942L
10346L
9359L
10473L
9171L
8328L
( the Stopwatch().stop() method yields [us] from .start() )
while, the memory-efficient, vectorisable, GIL-free approach can do the same about +230x ~ +450x faster:
>>> import numpy as np
>>>
>>> aClk.start();arrayNP = np.zeros( 100000 );aClk.stop()
15L
22L
21L
23L
19L
22L
>>> aClk.start();arrayNP = np.zeros( 100000, dtype = np.int );aClk.stop()
43L
47L
42L
44L
47L
So, using the proper tools just starts the story of performance:
>>> def test_SERIAL_python( nLOOPs = 100000 ):
... aClk.start()
... for i in xrange( nLOOPs ): # py3 range() ~ xrange() in py27
... array1[i] = i**2 # your loop-code
... _ = aClk.stop()
... return _
While a naive [SERIAL]-iterative implementation works, you pay immense costs for opting to do so: about 70 [ms] for a 100000-element vector:
>>> test_SERIAL_python( nLOOPs = 100000 )
70318L
69211L
77825L
70943L
74834L
73079L
Using a more suitable / appropriate tool costs just ~ 0.2 [ms], i.e. about 350x FASTER:
>>> aClk.start();arrayNP[:] = arrayNP[:]**2;aClk.stop()
189L
171L
173L
187L
183L
188L
193L
and with another trick, namely an in-place modus operandi:
>>> aClk.start();arrayNP[:] *=arrayNP[:];aClk.stop()
138L
139L
136L
137L
136L
136L
137L
This yields a ~ +514x SPEEDUP, just from using the appropriate tool.
The art of performance is not in following marketing-sounding claims about parallelizing (at any cost), but in using know-how based methods that pay the least costs for the biggest achievable speedups.
For "small"-problems, typical costs of distributing "thin"-work-packages are indeed hard to get covered by any potentially achievable speedups, so "problem-size" actually limits one's choice of methods, that could reach positive gain ( speedups of 0.9 or even << 1.0 are so often reported here, on StackOverflow, that you need not feel lost or alone in this sort of surprise ).
Epilogue
Processor number counts.
Core number counts.
But cache-sizes + NUMA-irregularities count more than that.
Smart, vectorised, HPC-cured, GIL-free libraries matter ( numpy et al - thanks a lot Travis OLIPHANT & al ... Great Salute to his team ... )
An overhead-strict (re-)formulation of Amdahl's Law explains why even many-N-CPU parallelised code execution may (and indeed often does) suffer from speedups << 1.0.
Overhead-strict formulation of the Amdahl's Law speedup S includes the very costs of the paid [PAR]-Setup + [PAR]-Terminate Overheads, explicitly:
1
S = __________________________; where s, ( 1 - s ), N were defined above
( 1 - s ) pSO:= [PAR]-Setup-Overhead add-on
s + pSO + _________ + pTO pTO:= [PAR]-Terminate-Overhead add-on
N
( an interactive animated tool for 2D visualising effects of these performance constraints is cited here )
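As a purely illustrative numeric example of that formulation (all values made up):
def overhead_strict_speedup(s, N, pSO, pTO):
    # s: serial fraction, N: number of workers,
    # pSO / pTO: setup / terminate overheads expressed as fractions of the original runtime
    return 1.0 / (s + pSO + (1.0 - s) / N + pTO)

# even with only 5% serial code, a 10% combined overhead caps the speedup far below N:
print(overhead_strict_speedup(s=0.05, N=8, pSO=0.05, pTO=0.05))   # ~ 3.7x, not 8x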

Execution Time Difference based on sum() on IPython

I'm doing a simple Monte Carlo simulation exercise, using ipcluster engines of IPython. I've noticed a huge difference in execution time based on how I define my function, and I'm asking the reason for this. Here are the details:
When I define the task as below, it is fast:
def sample(n):
    return (rand(n)**2 + rand(n)**2 <= 1).sum()
When run in parallel:
from IPython.parallel import Client
rc = Client()
v = rc[:]
with v.sync_imports():
from numpy.random import rand
n = 1000000
timeit -r 1 -n 1 print 4.* sum(v.map_sync(sample, [n]*len(v))) / (n*len(v))
3.141712
1 loops, best of 1: 53.4 ms per loop
But if I change the function to:
def sample(n):
    return sum(rand(n)**2 + rand(n)**2 <= 1)
I get:
3.141232
1 loops, best of 1: 3.81 s per loop
...which is 71 times slower. What can be the reason for this?
I can't go too in-depth, but the reason it is slower is because sum(<array>) is the built-in CPython sum function, whereas your <numpy array>.sum() is using the numpy sum function, which is substantially faster than the built-in python version.
I imagine you would get similar results if you replaced sum(<array>) with numpy.sum(<array>)
see numpy sum docs here: http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html
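A hedged sketch (array size and repeat count are illustrative) of seeing the difference directly with timeit:
import timeit

setup = "import numpy as np; a = np.random.rand(1000000)"
print(timeit.timeit("sum(a)", setup=setup, number=10))      # built-in sum iterates element by element
print(timeit.timeit("np.sum(a)", setup=setup, number=10))   # numpy's sum runs as one vectorised loop in C
print(timeit.timeit("a.sum()", setup=setup, number=10))     # equivalent to np.sum(a)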

Measuring time using pycuda.driver.Event gives wrong results

I ran SimpleSpeedTest.py from the PyCuda examples, producing the following output:
Using nbr_values == 8192
Calculating 100000 iterations
SourceModule time and first three results:
0.058294s, [ 0.005477 0.005477 0.005477]
Elementwise time and first three results:
0.102527s, [ 0.005477 0.005477 0.005477]
Elementwise Python looping time and first three results:
2.398071s, [ 0.005477 0.005477 0.005477]
GPUArray time and first three results:
8.207257s, [ 0.005477 0.005477 0.005477]
CPU time measured using :
0.000002s, [ 0.005477 0.005477 0.005477]
The first four time measurements are reasonable, the last one (0.000002s) however is way off. The CPU result should be the slowest one but it is orders of magnitude faster than the fastest GPU method. So obviously the measured time must be wrong. This is strange since the same timing method seems to work fine for the first four results.
So I took some code from SimpleSpeedTest.py and made a small test file [2], which produced:
time measured using option 1:
0.000002s
time measured using option 2:
5.989620s
Option 1 measures the duration using pycuda.driver.Event.record() (as in SimpleSpeedTest.py), option 2 uses time.clock(). Again, option 1 is off while option 2 gives a reasonable result (the time it takes to run the test file is around 6s).
Does anyone have an idea as to why this is happening?
Since using option 1 is endorsed in SimpleSpeedTest.py, could it be my setup that is causing the problem? I am running a GTX 470, Display Driver 301.42, CUDA 4.2, Python 2.7 64, PyCuda 2012.1, X5650 Xeon
[2] Test file:
import numpy
import time
import pycuda.driver as drv
import pycuda.autoinit

n_iter = 100000
nbr_values = 8192  # = 64 * 128 (values as used in SimpleSpeedTest.py)

start = drv.Event()  # option 1 uses pycuda.driver.Event
end = drv.Event()

a = numpy.ones(nbr_values).astype(numpy.float32)  # test data

start.record()       # start option 1 (inserting recording points into GPU stream)
tic = time.clock()   # start option 2 (using CPU time)

for i in range(n_iter):
    a = numpy.sin(a)  # do some work

end.record()         # end option 1
toc = time.clock()   # end option 2

end.synchronize()

events_secs = start.time_till(end) * 1e-3
time_secs = toc - tic

print "time measured using option 1:"
print "%fs " % events_secs
print "time measured using option 2:"
print "%fs " % time_secs
I contacted Andreas Klöckner and he suggested synchronizing on the start event, too.
...
start.record()
start.synchronize()
...
And this seems to solve the issue!
time measured using option 1:
5.944461s
time measured using option 2:
5.944314s
Apparently CUDA's behaviour changed in the last two years. I updated SimpleSpeedTest.py.
