Disappointing results in pyCUDA benchmark for computing distances between N points - python

The following script was set up for benchmarking purposes. It computes the distances between N points using the Euclidean L2 norm. Three different routines are implemented:
High-level solution using the scipy.spatial.distance.pdist function.
Fairly low-level OpenMP powered scipy.weave.inline solution.
pyCUDA powered GPGPU solution.
Here are the benchmark results on an i5-3470 (16GB RAM) using a GTX660 (2GB RAM):
------------
Scipy Pdist
Execution time: 3.01975 s
First five elements: [ 0.74968684 0.71457213 0.833188 0.48084545 0.86407363]
Last five elements: [ 0.65717077 0.76850474 0.29652017 0.856179 0.56074625]
------------
Weave Inline
Execution time: 2.48705 s
First five elements: [ 0.74968684 0.71457213 0.83318806 0.48084542 0.86407363]
Last five elements: [ 0.65717083 0.76850474 0.29652017 0.856179 0.56074625]
------------
pyCUDA
CUDA clock timing: 0.713028930664
Execution time: 2.04364 s
First five elements: [ 0.74968684 0.71457213 0.83318806 0.48084542 0.86407363]
Last five elements: [ 0.65717083 0.76850468 0.29652017 0.856179 0.56074625]
------------
I am a bit disappointed by the pyCUDA performance. Since I am new to CUDA, there is probably something I am missing here. So where is the crux of the matter? Am I reaching the limits of global memory bandwidth? Is it a poor choice of block and grid sizes?
import numpy,time,math
import pycuda.autoinit
import pycuda.driver as drv
from pycuda.compiler import SourceModule
from scipy.spatial.distance import pdist
from scipy import weave

def weave_solution(x):
    """
    OpenMP powered weave inline.
    """
    N,DIM = numpy.shape(x)
    L = ((N-1)**2+(N-1))/2
    solution = numpy.zeros(L).astype(numpy.float32)
    ncpu = 4
    weave_omp = {'headers'            : ['<omp.h>'],
                 'extra_compile_args' : ['-fopenmp'],
                 'extra_link_args'    : ['-lgomp']}
    code = \
    r'''
    omp_set_num_threads(ncpu);
    #pragma omp parallel
    {
        int j,d,pos;
        float r=0.0;
        #pragma omp for
        for (int i=0; i<(N-1); i++){
            for (j=(i+1); j<N; j++){
                r = 0.0;
                for (d=0; d<DIM; d++){
                    r += (x[i*DIM+d]-x[j*DIM+d])*(x[i*DIM+d]-x[j*DIM+d]);
                }
                pos = (i*N+j)-(i*(i+1)/2)-i-1;
                solution[pos] = sqrt(r);
            }
        }
    }
    '''
    weave.inline(code,['x','N','DIM','solution','ncpu'],**weave_omp)
    return numpy.array(solution)

def scipy_solution(x):
    """
    SciPy High-level function
    """
    return pdist(x).astype(numpy.float32)

def cuda_solution(x):
    """
    pyCUDA
    """
    N,DIM = numpy.shape(x)
    N = numpy.int32(N)
    DIM = numpy.int32(DIM)
    L = ((N-1)**2+(N-1))/2
    solution = numpy.zeros(L).astype(numpy.float32)
    start = drv.Event()
    end = drv.Event()
    mod = SourceModule("""
    __global__ void distance(float *x, int N, int DIM, float *solution){
        const int i = blockDim.x * blockIdx.x + threadIdx.x;
        int j,d,pos;
        float r=0.0;
        if ( i < (N-1) ){
            for (j=(i+1); j<N; j++){
                r = 0.0;
                for (d=0; d<DIM; d++){
                    r += (x[i*DIM+d]-x[j*DIM+d])*(x[i*DIM+d]-x[j*DIM+d]);
                }
                pos = (i*N+j)-(i*(i+1)/2)-i-1;
                solution[pos] = sqrt(r);
            }
        }
    }
    """)
    func = mod.get_function("distance")
    start.record()
    func(drv.In(x),N,DIM,drv.Out(solution),block=(192,1,1),grid=(192,1))
    end.record()
    end.synchronize()
    secs = start.time_till(end)*1e-3
    print "CUDA clock timing: ",secs
    return solution

if __name__ == '__main__':

    # Set up data points
    N = 25000
    DIM = 3
    x = numpy.random.rand(N,DIM).astype(numpy.float32)

    print "-"*12
    # Scipy solution
    print "Scipy Pdist"
    stime = time.time()
    spsolution = scipy_solution(x)
    stime = time.time()-stime
    print "Execution time: {0:.5f} s".format(stime)
    print "First five elements:", spsolution[:5]
    print "Last five elements:", spsolution[-5:]
    print "-"*12

    # Weave solution
    print "Weave Inline"
    wtime = time.time()
    wsolution = weave_solution(x)
    wtime = time.time()-wtime
    print "Execution time: {0:.5f} s".format(wtime)
    print "First five elements:", wsolution[:5]
    print "Last five elements:", wsolution[-5:]
    print "-"*12

    # pyCUDA solution
    print "pyCUDA"
    ctime = time.time()
    csolution = cuda_solution(x)
    ctime = time.time()-ctime
    print "Execution time: {0:.5f} s".format(ctime)
    print "First five elements:", csolution[:5]
    print "Last five elements:", csolution[-5:]
    print "-"*12
Edit:
I have added the hash bang line
#!/usr/bin/env python
at the top of the file and made it executable. After commenting out the computation using weave.inline and scipy.spatial.distance.pdist, the NVIDIA Visual Profiler shows the following results:

Right now you have 192 threads, each updating N-1 positions; you could easily launch more blocks/threads.
What you want to do, instead of this loop, for (j=(i+1); j<N; j++){, is to replace it with N-1 threads doing just the inner loop.
If you want to take it further you could have (N-1) * DIM threads, each doing the statement in the inner loop, store the result to shared memory and finally do a reduction on that. See Optimizing Parallel Reduction in CUDA.
Looking at this line:
r += (x[i*DIM+d]-x[j*DIM+d])*(x[i*DIM+d]-x[j*DIM+d]);
The memory access pattern is not uniform and coalesced. I also do not know whether nvcc will optimize your expression down to only two memory transactions instead of the four shown here, since I do not know whether pyCUDA passes -O3 to nvcc. Put (x[i*DIM+d]-x[j*DIM+d]) into a register variable to make sure, and just square it yourself.
Otherwise you can also try to put #pragma unroll before each for loop to unroll them where possible.
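A hedged sketch of those two changes applied to the kernel and launch from the question (enough blocks so that every row i gets its own thread, and the difference kept in a register before squaring), assuming a block size of 256; this is meant as a starting point rather than a tuned implementation:
mod = SourceModule("""
__global__ void distance(float *x, int N, int DIM, float *solution){
    const int i = blockDim.x * blockIdx.x + threadIdx.x;
    int j,d,pos;
    float r,diff;
    if ( i < (N-1) ){
        for (j=(i+1); j<N; j++){
            r = 0.0f;
            for (d=0; d<DIM; d++){
                diff = x[i*DIM+d]-x[j*DIM+d];   // load each operand once, keep the difference in a register
                r += diff*diff;
            }
            pos = (i*N+j)-(i*(i+1)/2)-i-1;
            solution[pos] = sqrt(r);
        }
    }
}
""")
func = mod.get_function("distance")
block = 256
grid = (int(N) - 1 + block - 1) // block    # enough blocks to cover all N-1 rows, instead of a fixed 192
func(drv.In(x), N, DIM, drv.Out(solution), block=(block,1,1), grid=(grid,1))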

Related

PyCUDA LogicError: cuModuleLoadDataEx failed: an illegal memory access was encountered

I am trying to parallelize the bitonic sort with pyCUDA. For this I use SourceModule and the C code of the parallel bitonic sort. For memory management I use InOut from pycuda.driver, which simplifies some of the memory transfers.
import pycuda.autoinit
import pycuda.driver as drv
from pycuda.compiler import SourceModule
from pycuda import gpuarray
import numpy as np
from time import time

ker = SourceModule(
    """
    __device__ void swap(int & a, int & b){
        int tmp = a;
        a = b;
        b = tmp;
    }

    __global__ void bitonicSort(int * values, int N){
        extern __shared__ int shared[];
        int tid = threadIdx.x + blockDim.x * blockIdx.x;
        // Copy input to shared mem.
        shared[tid] = values[tid];
        __syncthreads();
        // Parallel bitonic sort.
        for (int k = 2; k <= N; k *= 2){
            // Bitonic merge:
            for (int j = k / 2; j > 0; j /= 2){
                int ixj = tid ^ j;
                if (ixj > tid){
                    if ((tid & k) == 0){
                        // Sort ascending
                        if (shared[tid] > shared[ixj]){
                            swap(shared[tid], shared[ixj]);
                        }
                    }
                    else{
                        // Sort descending
                        if (shared[tid] < shared[ixj]){
                            swap(shared[tid], shared[ixj]);
                        }
                    }
                }
                __syncthreads();
            }
        }
        values[tid] = shared[tid];
    }
    """
)

N = 8  # length of A
A = np.int32(np.random.randint(1, 20, N))  # random numbers in A
BLOCK_SIZE = 256
NUM_BLOCKS = (N + BLOCK_SIZE - 1) // BLOCK_SIZE

bitonicSort = ker.get_function("bitonicSort")

t1 = time()
bitonicSort(drv.InOut(A), np.int32(N), block=(BLOCK_SIZE, 1, 1), grid=(NUM_BLOCKS, 1), shared=4*N)
t2 = time()

print("Execution Time {0}".format(t2 - t1))
print(A)
Since the kernel uses extern __shared__, in pyCUDA I pass the shared parameter with the corresponding size 4*N. I also tried using __shared__ int shared[N] in the kernel, but that doesn't work either (see: Getting started with shared memory on PyCUDA).
Running it in Google Colab I get the following error:
/usr/local/lib/python3.6/dist-packages/pycuda/compiler.py in __init__(self, source, nvcc, options, keep, no_extern_c, arch, code, cache_dir, include_dirs)
292
293 from pycuda.driver import module_from_buffer
--> 294 self.module = module_from_buffer(cubin)
295
296 self._bind_module()
LogicError: cuModuleLoadDataEx failed: an illegal memory access was encountered
Does anyone know what could be generating this error?
Your device code isn't accounting for the sizes of your arrays correctly.
You are launching 256 threads in a single block. That means that you will have 256 threads, with tid numbered 0..255, trying to execute each line of code. For example, in this case:
shared[tid] = values[tid];
You will have, for example, one thread trying to do shared[255] = values[255];
Neither your shared array nor your values array is that large. That is the reason for the illegal memory access error.
The simplest solution for this kind of trivial problem is to make your array sizes match your block size.
BLOCK_SIZE = N
According to my testing, that change clears up any errors and results in a properly sorted array.
It won't work for N greater than 1024, or multi-block usage, but your code would have to be modified for a multi-block sort, anyway.
If you still have trouble after making that change, I suggest restarting your python session or your colab session.
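Applied to the launch parameters in the question, that suggestion looks like this (a minimal sketch; the kernel itself is unchanged, and the bitonic sort here still expects N to be a power of two and at most 1024):
BLOCK_SIZE = N                 # one thread per element, so shared[tid] and values[tid] stay in range
NUM_BLOCKS = 1                 # the kernel only synchronizes within a single block
bitonicSort(drv.InOut(A), np.int32(N), block=(BLOCK_SIZE, 1, 1), grid=(NUM_BLOCKS, 1), shared=4*N)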

My computationally intensive Numba function runs 10x slower on the GPU than on CPU. Am I missing anything?

I'm testing Numba's performance and tried a dummy but computationally intense function two ways: on the CPU with parallelism enabled (and prange), and on the GPU, where I can see it occupy 100% of the GPU while running. Both run fine, but the CPU version takes 10x less time to complete. I was expecting the GPU to be faster in this case, even though my GPU is not very strong (GeForce 1050 Ti) and my CPU is strong (Threadripper 3970x).
Here is my CPU benchmark:
from numba import *
import numpy as np
from numba import cuda
import time

def benchmark():
    input_list = np.random.randint(10, size=320000)
    out = np.zeros(len(input_list))
    cpu_run_test(input_list, out)
    print('Result size: ' + str(len(out)) + ' ' + str(out))

@njit(parallel=True, fastmath=True, nogil=True)
def cpu_run_test(input_list, out):
    for step in range(len(input_list)):
        for j in range(10):
            count = 0
            for item2 in input_list:
                if input_list[step] == item2:
                    count = count + 1
            out[step] = count

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("benchmark()", setup="from __main__ import benchmark", number=1))
And here is my GPU benchmark (same computation, just partitioning the work differently to take advantage of the GPU's blocks and threads):
from numba import *
import numpy as np
from numba import cuda
import time

def benchmark():
    for xx in range(1):
        new_array_duration = time.time()
        input_list = np.random.randint(10, size=320000)
        new_array_duration = time.time() - new_array_duration
        print('New array duration: ' + str(new_array_duration))

        to_device_duration = time.time()
        d_array = cuda.to_device(input_list)
        to_device_duration = time.time() - to_device_duration
        print('To device duration: ' + str(to_device_duration))

        kernel_duration = time.time()
        run_test[16, 768](d_array)
        kernel_duration = time.time() - kernel_duration
        print('Kernel duration: ' + str(kernel_duration))

        to_host_duration = time.time()
        out = d_array.copy_to_host()
        to_host_duration = time.time() - to_host_duration
        print('To host duration: ' + str(to_host_duration))

@cuda.jit(fastmath=True)
def run_test(d_array):
    array_slice_len = len(d_array) / cuda.blockDim.x
    slice_start = (cuda.threadIdx.x * (cuda.blockIdx.x + 1)) * array_slice_len
    for step in prange(slice_start, slice_start + array_slice_len):
        if step > len(d_array) - 1:
            return
        for j in range(10):
            count = 0
            for item2 in d_array:
                if d_array[step] == item2:
                    count = count + 1
            d_array[step] = count

if __name__ == '__main__':
    import timeit
    # make_multithread(benchmark, 64)
    print(timeit.timeit("benchmark()", setup="from __main__ import benchmark", number=1))
Anyone can copy, paste, and run either one. I'm on Linux Mint 20, Python 3.7, the latest Numba (0.51), and the latest cudatoolkit installed.
The results are as follows:
CPU: 15.20 secs
GPU: 145.54 secs
Is this correct, or am I missing some optimizations that would get the GPU code running faster?
What am I missing?
Not necessarily an answer, but way too long for a comment.
Some things I would try: altering the blocks and threads and seeing what happens, and also making use of local, shared and global memory on the GPU.
Global Memory is the only one that can be set by the host and is accessible to all threads. It is also the slowest type of memory for access, so you generally want to minimize the amount of transfers and use Memory Coalescing where possible.
Shared Memory is memory that is available for all threads in one Block and is faster than global memory. Local Memory is the fastest and only visible to one thread.
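As a small, hedged sketch of what using shared memory looks like in Numba (the kernel name, block size and the doubling operation here are made up for illustration):
from numba import cuda, int32
import numpy as np

@cuda.jit
def stage_in_shared(d_in, d_out):
    tile = cuda.shared.array(shape=128, dtype=int32)  # shared: visible to every thread of this block
    i = cuda.grid(1)
    if i < d_in.size:
        tile[cuda.threadIdx.x] = d_in[i]              # global -> shared
    cuda.syncthreads()
    if i < d_in.size:
        d_out[i] = tile[cuda.threadIdx.x] * 2         # work on the staged copy

data = np.arange(1024, dtype=np.int32)
out = np.zeros_like(data)
stage_in_shared[(data.size + 127) // 128, 128](data, out)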
Is starting prange on the GPU actually using the GPU threads? Normally (outside Python) you don't have to explicitly invoke things in parallel on the GPU, so maybe that loop runs serially instead?
You should also try to find an algorithm that does not involve branching (if there is one). If I understand your code correctly, you have a list of 320,000 items containing random integers in the range [0, 10). For each item, you count how often that item appears and replace the item with that count. And 10x over, for good measure. This will cause the items of the resulting set to oscillate between the values [0, 10) and roughly [0, 320,000), but probably clustered around 32,000 in general.
To make the algorithm easier to transform, getting rid of that unpleasant oscillation by increasing the list size by a factor of 10 and dropping the 10x loop should amount to the same thing.
As I don't speak Python, here's some pseudocode for a non-branching vectorized implementation. I don't know if this will be faster at all, but it is something I would try.
{l} local memory
{s} shared memory
{g} global memory

Host:
    input_list{g} = (0..9)[3,200,000]
    accumulator{g} = (0..0)[10]
    run _1
    run _2

Device Kernel _1:
    let blocksize = get_blocksize()
    let threadId = get_threadId()
    let globalId = get_globalId()
    let localSum{s} = arr[10]
    let localCopy{s} = arr[blocksize.x]

    if(globalId.x < size(input_list)) {
        localCopy[threadId] = input_list[globalId]
        localSum[localCopy[threadId]]++
    }

    if(threadId.x < 10) {
        accumulator[threadId] += localSum[threadId]
    }

Device Kernel _2:
    let blocksize = get_blocksize()
    let threadId = get_threadId()
    let globalId = get_globalId()
    let localAcc{l} = arr[10]

    for(i = 0 to 10) {
        localAcc[i] = accumulator[i]
    }

    if(globalId < size(input_list)) {
        let val = input_list[globalId]
        input_list[globalId] = localAcc[val]
    }
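For reference, a minimal NumPy sketch of the same non-branching idea (build the histogram once, then look every item up), assuming the single-pass variant without the 10x loop:
import numpy as np

input_list = np.random.randint(10, size=320000)
counts = np.bincount(input_list, minlength=10)   # kernel _1: per-value counts
out = counts[input_list]                         # kernel _2: replace each item with its count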

Optimize Cython functions operating on Python lists

I am currently migrating to Cython a set of functions that are currently implemented in C++ through scipy.weave (now deprecated).
These functions operate on timeseries points stored as 2-D lists (e.g. [[17100, 19.2], [17101, 20.7], [17102, 20.3], ...]) both as input and as output. A sample function is subtract, which accepts two timeseries and calculates a new timeseries as the date-by-date subtraction of the two inputs.
The structure and the interfaces have to be maintained for backward compatibility, but my profiling trials show that the Cython port is about 30%-40% slower than the original scipy.weave implementation.
I have tried many ways to optimize (inner conversions to NumPy arrays and memoryviews, C pointers, ...), but the conversion time required lengthens the overall execution time. Even defining input and output as C++ vectors and leveraging Cython's implicit conversions doesn't seem effective enough to match the scipy.weave speed. I have also used the various hints on boundscheck, wraparound, division, ...
The biggest slowdowns seem to be in functions that use nested loops, and I've seen that a little can be gained by predefining the list size (cdef list target = [[-1, float('nan')]]*size).
I am aware that Cython can't do that much for Python structures, especially lists, but are there any other tricks or techniques that could yield a speedup?
=== EDIT - ADD CODE EXAMPLE ===
A good example of this type of function is the following.
The function takes a 2-D list of dates/prices and a 2-D list of dates/decimal factors, searches for matching dates between the two lists, and calculates the output from the corresponding price/factor by multiplying or dividing (which of the two is selected by a third input parameter).
My best-performing Cython code:
cimport cython

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef apply_conversion(list original_timeserie, list factor_timeserie, int divide_or_multiply=False):
    cdef:
        Py_ssize_t i, j = 0, size = len(original_timeserie), size2 = len(factor_timeserie)
        long original_date, factor_date
        double original_price, factor_price, conv_price
        list result = []

    for i in range(size):
        original_date = original_timeserie[i][0]
        for j in range(j, size2):
            factor_date = factor_timeserie[j][0]
            if original_date == factor_date:
                original_price = original_timeserie[i][1]
                factor_price = factor_timeserie[j][1]
                if divide_or_multiply:
                    if factor_price != 0:
                        conv_price = original_price / factor_price
                    else:
                        conv_price = float('inf')
                else:
                    conv_price = original_price * factor_price
                result.append([original_date, conv_price])
                break
    return result
The original scipy.weave code:
int len = original_timeserie.length();
int len2 = factor_timeserie.length();

PyObject* py_serieconv = PyList_New(len);
PyObject* original_item = NULL;
PyObject* factor_item = NULL;
PyObject* date = NULL;
PyObject* value = NULL;

long original_date = 0;
long factor_date = 0;
double original_price = 0;
double factor_price = 0;

int j = 0;
for(int i=0;i<len;i++) {
    original_item = PyList_GetItem(original_timeserie, i);
    date = PyList_GetItem(original_item, 0);
    original_date = PyInt_AsLong(date);
    original_price = PyFloat_AsDouble( PyList_GetItem(original_item, 1) );

    factor_item = NULL;
    for(;j<len2;) {
        factor_item = PyList_GetItem(factor_timeserie, j++);
        factor_date = PyInt_AsLong(PyList_GetItem(factor_item, 0));
        if (factor_date == original_date) {
            factor_price = PyFloat_AsDouble(PyList_GetItem(factor_item, 1));
            value = PyFloat_FromDouble(original_price * (divide_or_multiply==0 ? factor_price : 1/factor_price));

            PyObject* py_new_item = PyList_New(2);
            Py_XINCREF(date);
            PyList_SetItem(py_new_item, 0, date);
            PyList_SetItem(py_new_item, 1, value);
            PyList_SetItem(py_serieconv, i, py_new_item);
            break;
        }
    }
}

return_val = py_serieconv;
Py_XDECREF(py_serieconv);
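One small, hypothetical variation on the Cython version above (not from the original post, and worth profiling rather than assuming): caching each inner [date, value] list in a typed local avoids indexing the outer lists repeatedly, which removes a few C-API calls per match.
cimport cython

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef apply_conversion_cached(list original_timeserie, list factor_timeserie, int divide_or_multiply=False):
    cdef:
        Py_ssize_t i, j = 0, size = len(original_timeserie), size2 = len(factor_timeserie)
        list orig_item, factor_item, result = []
        long original_date, factor_date
        double original_price, factor_price, conv_price

    for i in range(size):
        orig_item = original_timeserie[i]          # fetch the inner [date, price] list once
        original_date = orig_item[0]
        for j in range(j, size2):
            factor_item = factor_timeserie[j]      # likewise for the [date, factor] list
            factor_date = factor_item[0]
            if original_date == factor_date:
                original_price = orig_item[1]
                factor_price = factor_item[1]
                if divide_or_multiply:
                    conv_price = original_price / factor_price if factor_price != 0 else float('inf')
                else:
                    conv_price = original_price * factor_price
                result.append([original_date, conv_price])
                break
    return result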

Python implementation faster than C

I apologise if comparisons are not supposed to work this way. I'm new to programming and just curious as to why this is the case.
I have a large binary file containing word embeddings (4.5 GB). Each line has a word followed by its embedding, which comprises 300 float values. I'm simply finding the total number of lines.
For C, I use mmap:
int fd;
struct stat sb;
off_t offset = 0, pa_offset;
size_t length, i;
char *addr;
int count = 0;

fd = open("processed_data/crawl-300d-2M.vec", O_RDONLY);
if(fd == -1){
    handle_error("open");
    exit(1);
}
if(fstat(fd, &sb) < 0){
    handle_error("fstat");
    close(fd);
    exit(1);
}
pa_offset = offset & ~(sysconf(_SC_PAGE_SIZE) - 1);
if(offset >= sb.st_size){
    fprintf(stderr, "offset is past end of file\n");
    exit(EXIT_FAILURE);
}
length = sb.st_size - offset;
addr = mmap(0, (length + offset - pa_offset), PROT_READ, MAP_SHARED, fd, pa_offset);
if (addr == MAP_FAILED) handle_error("mmap");

// Timing only this loop
clock_t begin = clock();
for(i=0;i<length;i++){
    if(*(addr+i) == '\n') count++;
}
printf("%d\n", count);
clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
printf("%f\n", time_spent);
This takes 11.283060 seconds.
Python:
file = open('processed_data/crawl-300d-2M.vec', 'r')
count = 0
start_time = timeit.default_timer()
for line in file:
    count += 1
print(count)
elapsed = timeit.default_timer() - start_time
print(elapsed)
This takes 3.0633065439997154 seconds.
Doesn't the Python code read each character to find new lines? If so, why is my C code so inefficient?
Hard to say, because I assume that it will be heavily implementation dependent. But at first glance, the main difference between your Python and C programs is that the C program uses mmap. It is a very powerful tool (that you do not really need here...) and as such can come with some overhead. As the reference Python implementation is written in C, it is likely that the loop
for line in file:
    count += 1
will end up as a loop around a tiny function calling fgets. I would bet a coin that a naive C program using fgets would be slightly faster than the Python equivalent, because it would save all the Python overhead. But IMHO there is no surprise that using mmap in C turns out less efficient than the fgets-style buffered reading Python does internally.
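As an aside, a hedged sketch of a pure-Python variant that usually beats line iteration for this task: read fixed-size binary chunks and count the newlines in each, so no per-line string objects are created (the file path is the one from the question and is assumed to exist):
count = 0
with open('processed_data/crawl-300d-2M.vec', 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):   # 1 MiB chunks
        count += chunk.count(b'\n')
print(count)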

Create a list with initial capacity in Python

Code like this often happens:
l = []
while foo:
    # baz
    l.append(bar)
    # qux
This is really slow if you're about to append thousands of elements to your list, as the list will have to be constantly resized to fit the new elements.
In Java, you can create an ArrayList with an initial capacity. If you have some idea how big your list will be, this will be a lot more efficient.
I understand that code like this can often be refactored into a list comprehension. If the for/while loop is very complicated, though, this is unfeasible. Is there an equivalent for us Python programmers?
Warning: This answer is contested. See comments.
def doAppend( size=10000 ):
    result = []
    for i in range(size):
        message = "some unique object %d" % ( i, )
        result.append(message)
    return result

def doAllocate( size=10000 ):
    result = size*[None]
    for i in range(size):
        message = "some unique object %d" % ( i, )
        result[i] = message
    return result
Results (each function evaluated 144 times, durations averaged):
simple append 0.0102
pre-allocate 0.0098
Conclusion. It barely matters.
Premature optimization is the root of all evil.
Python lists have no built-in pre-allocation. If you really need to make a list, and need to avoid the overhead of appending (and you should verify that you do), you can do this:
l = [None] * 1000  # Make a list of 1000 None's
for i in xrange(1000):
    # baz
    l[i] = bar
    # qux
Perhaps you could avoid the list by using a generator instead:
def my_things():
    while foo:
        # baz
        yield bar
        # qux

for thing in my_things():
    # do something with thing
This way, the list is never stored entirely in memory at all, merely generated as needed.
Short version: use
pre_allocated_list = [None] * size
to preallocate a list (that is, to be able to address 'size' elements of the list instead of gradually forming the list by appending). This operation is very fast, even on big lists. Allocating new objects that will be later assigned to list elements will take much longer and will be the bottleneck in your program, performance-wise.
Long version:
I think that initialization time should be taken into account.
Since in Python everything is a reference, it doesn't matter whether you set each element to None or to some string - either way it's only a reference. Though it will take longer if you want to create a new object for each element to reference.
For Python 3.2:
import time
import copy

def print_timing (func):
    def wrapper (*arg):
        t1 = time.time()
        res = func (*arg)
        t2 = time.time ()
        print ("{} took {} ms".format (func.__name__, (t2 - t1) * 1000.0))
        return res
    return wrapper

@print_timing
def prealloc_array (size, init = None, cp = True, cpmethod = copy.deepcopy, cpargs = (), use_num = False):
    result = [None] * size
    if init is not None:
        if cp:
            for i in range (size):
                result[i] = init
        else:
            if use_num:
                for i in range (size):
                    result[i] = cpmethod (i)
            else:
                for i in range (size):
                    result[i] = cpmethod (cpargs)
    return result

@print_timing
def prealloc_array_by_appending (size):
    result = []
    for i in range (size):
        result.append (None)
    return result

@print_timing
def prealloc_array_by_extending (size):
    result = []
    none_list = [None]
    for i in range (size):
        result.extend (none_list)
    return result

def main ():
    n = 1000000
    x = prealloc_array_by_appending(n)
    y = prealloc_array_by_extending(n)
    a = prealloc_array(n, None)
    b = prealloc_array(n, "content", True)
    c = prealloc_array(n, "content", False, "some object {}".format, ("blah"), False)
    d = prealloc_array(n, "content", False, "some object {}".format, None, True)
    e = prealloc_array(n, "content", False, copy.deepcopy, "a", False)
    f = prealloc_array(n, "content", False, copy.deepcopy, (), False)
    g = prealloc_array(n, "content", False, copy.deepcopy, [], False)

    print ("x[5] = {}".format (x[5]))
    print ("y[5] = {}".format (y[5]))
    print ("a[5] = {}".format (a[5]))
    print ("b[5] = {}".format (b[5]))
    print ("c[5] = {}".format (c[5]))
    print ("d[5] = {}".format (d[5]))
    print ("e[5] = {}".format (e[5]))
    print ("f[5] = {}".format (f[5]))
    print ("g[5] = {}".format (g[5]))

if __name__ == '__main__':
    main()
Evaluation:
prealloc_array_by_appending took 118.00003051757812 ms
prealloc_array_by_extending took 102.99992561340332 ms
prealloc_array took 3.000020980834961 ms
prealloc_array took 49.00002479553223 ms
prealloc_array took 316.9999122619629 ms
prealloc_array took 473.00004959106445 ms
prealloc_array took 1677.9999732971191 ms
prealloc_array took 2729.999780654907 ms
prealloc_array took 3001.999855041504 ms
x[5] = None
y[5] = None
a[5] = None
b[5] = content
c[5] = some object blah
d[5] = some object 5
e[5] = a
f[5] = []
g[5] = ()
As you can see, just making a big list of references to the same None object takes very little time.
Appending or extending takes longer (I didn't average anything, but after running this a few times I can tell you that extending and appending take roughly the same time).
Allocating a new object for each element - that is what takes the most time. And S.Lott's answer does that - formats a new string every time. Which is not strictly required - if you want to preallocate some space, just make a list of None, then assign data to list elements at will. Either way it takes more time to generate data than to append/extend a list, whether you generate it while creating the list, or after that. But if you want a sparsely-populated list, then starting with a list of None is definitely faster.
The Pythonic way for this is:
x = [None] * numElements
Or whatever default value you wish to prepopulate with, e.g.
bottles = [Beer()] * 99
sea = [Fish()] * many
vegetarianPizzas = [None] * peopleOrderingPizzaNotQuiche
(Caveat Emptor: The [Beer()] * 99 syntax creates one Beer and then populates an array with 99 references to the same single instance)
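A quick sketch of that caveat (Beer here is a stand-in class), together with the comprehension form that creates distinct instances:
class Beer:
    pass

shared = [Beer()] * 3                   # one Beer instance, three references to it
print(shared[0] is shared[1])           # True: "both" elements are the same object

distinct = [Beer() for _ in range(3)]   # three separate Beer instances
print(distinct[0] is distinct[1])       # False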
Python's default approach can be pretty efficient, although that efficiency decays as you increase the number of elements.
Compare
import time

class Timer(object):
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        end = time.time()
        secs = end - self.start
        msecs = secs * 1000  # Millisecs
        print('%fms' % msecs)

Elements = 100000
Iterations = 144

print('Elements: %d, Iterations: %d' % (Elements, Iterations))

def doAppend():
    result = []
    i = 0
    while i < Elements:
        result.append(i)
        i += 1

def doAllocate():
    result = [None] * Elements
    i = 0
    while i < Elements:
        result[i] = i
        i += 1

def doGenerator():
    return list(i for i in range(Elements))

def test(name, fn):
    print("%s: " % name, end="")
    with Timer() as t:
        x = 0
        while x < Iterations:
            fn()
            x += 1

test('doAppend', doAppend)
test('doAllocate', doAllocate)
test('doGenerator', doGenerator)
with
#include <vector>

typedef std::vector<unsigned int> Vec;
static const unsigned int Elements = 100000;
static const unsigned int Iterations = 144;

void doAppend()
{
    Vec v;
    for (unsigned int i = 0; i < Elements; ++i) {
        v.push_back(i);
    }
}

void doReserve()
{
    Vec v;
    v.reserve(Elements);
    for (unsigned int i = 0; i < Elements; ++i) {
        v.push_back(i);
    }
}

void doAllocate()
{
    Vec v;
    v.resize(Elements);
    for (unsigned int i = 0; i < Elements; ++i) {
        v[i] = i;
    }
}

#include <iostream>
#include <chrono>
using namespace std;

void test(const char* name, void(*fn)(void))
{
    cout << name << ": ";
    auto start = chrono::high_resolution_clock::now();
    for (unsigned int i = 0; i < Iterations; ++i) {
        fn();
    }
    auto end = chrono::high_resolution_clock::now();
    auto elapsed = end - start;
    cout << chrono::duration<double, milli>(elapsed).count() << "ms\n";
}

int main()
{
    cout << "Elements: " << Elements << ", Iterations: " << Iterations << '\n';
    test("doAppend", doAppend);
    test("doReserve", doReserve);
    test("doAllocate", doAllocate);
}
On my Windows 7 Core i7, 64-bit Python gives
Elements: 100000, Iterations: 144
doAppend: 3587.204933ms
doAllocate: 2701.154947ms
doGenerator: 1721.098185ms
While C++ gives (built with Microsoft Visual C++, 64-bit, optimizations enabled)
Elements: 100000, Iterations: 144
doAppend: 74.0042ms
doReserve: 27.0015ms
doAllocate: 5.0003ms
C++ debug build produces:
Elements: 100000, Iterations: 144
doAppend: 2166.12ms
doReserve: 2082.12ms
doAllocate: 273.016ms
The point here is that with Python you can achieve a 7-8% performance improvement, and if you think you're writing a high-performance application (or if you're writing something that is used in a web service or something) then that isn't to be sniffed at, but you may need to rethink your choice of language.
Also, the Python code here isn't really Python code. Switching to truly Pythonesque code here gives better performance:
import time

class Timer(object):
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        end = time.time()
        secs = end - self.start
        msecs = secs * 1000  # millisecs
        print('%fms' % msecs)

Elements = 100000
Iterations = 144

print('Elements: %d, Iterations: %d' % (Elements, Iterations))

def doAppend():
    for x in range(Iterations):
        result = []
        for i in range(Elements):
            result.append(i)

def doAllocate():
    for x in range(Iterations):
        result = [None] * Elements
        for i in range(Elements):
            result[i] = i

def doGenerator():
    for x in range(Iterations):
        result = list(i for i in range(Elements))

def test(name, fn):
    print("%s: " % name, end="")
    with Timer() as t:
        fn()

test('doAppend', doAppend)
test('doAllocate', doAllocate)
test('doGenerator', doGenerator)
Which gives
Elements: 100000, Iterations: 144
doAppend: 2153.122902ms
doAllocate: 1346.076965ms
doGenerator: 1614.092112ms
(in 32-bit, doGenerator does better than doAllocate).
Here the gap between doAppend and doAllocate is significantly larger.
Obviously, the differences here really only apply if you are doing this more than a handful of times or if you are doing this on a heavily loaded system where those numbers are going to get scaled out by orders of magnitude, or if you are dealing with considerably larger lists.
The point here: Do it the Pythonic way for the best performance.
But if you are worrying about general, high-level performance, Python is the wrong language. The most fundamental problem is that Python function calls have traditionally been up to 300x slower than in other languages, due to Python features like decorators, etc. (PythonSpeed/PerformanceTips, Data Aggregation).
As others have mentioned, the simplest way to preseed a list is with NoneType objects.
That being said, you should understand the way Python lists actually work before deciding this is necessary.
In the CPython implementation of a list, the underlying array is always created with overhead room, in progressively larger sizes (4, 8, 16, 25, 35, 46, 58, 72, 88, 106, 126, 148, 173, 201, 233, 269, 309, 354, 405, 462, 526, 598, 679, 771, 874, 990, 1120, etc.), so that resizing the list does not happen nearly so often.
Because of this behavior, most list.append() operations are O(1) amortized, only costing O(n) when the append crosses one of these boundaries and the array has to be resized. This behavior is what leads to the minimal increase in execution time in S.Lott's answer.
Source: Python list implementation
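A small sketch that makes this over-allocation visible (CPython-specific; the exact byte counts vary by version): the reported size only jumps at growth boundaries, not on every append:
import sys

l = []
last = sys.getsizeof(l)
for i in range(64):
    l.append(i)
    size = sys.getsizeof(l)
    if size != last:                     # this append crossed a growth boundary
        print("len=%d -> %d bytes" % (len(l), size))
        last = size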
I ran S.Lott's code and produced the same 10% performance increase by preallocating. I tried Ned Batchelder's idea using a generator and found that the generator performed better than doAllocate. For my project the 10% improvement matters, so thanks to everyone, as this helps a bunch.
def doAppend(size=10000):
    result = []
    for i in range(size):
        message = "some unique object %d" % ( i, )
        result.append(message)
    return result

def doAllocate(size=10000):
    result = size*[None]
    for i in range(size):
        message = "some unique object %d" % ( i, )
        result[i] = message
    return result

def doGen(size=10000):
    return list("some unique object %d" % ( i, ) for i in xrange(size))

size = 1000

@print_timing
def testAppend():
    for i in xrange(size):
        doAppend()

@print_timing
def testAlloc():
    for i in xrange(size):
        doAllocate()

@print_timing
def testGen():
    for i in xrange(size):
        doGen()

testAppend()
testAlloc()
testGen()
Output
testAppend took 14440.000ms
testAlloc took 13580.000ms
testGen took 13430.000ms
Concerns about preallocation in Python arise if you're working with NumPy, which has more C-like arrays. In this instance, preallocation concerns are about the shape of the data and the default value.
Consider NumPy if you're doing numerical computation on massive lists and want performance.
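For example, a one-line NumPy sketch of preallocating by shape and default value (the sizes here are made up):
import numpy as np

prices = np.full((1000, 3), np.nan)   # a 1000 x 3 float array, preallocated and filled with NaN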
Python's list doesn't support preallocation. Numpy allows you to preallocate memory, but in practice it doesn't seem to be worth it if your goal is to speed up the program.
This test simply writes an integer into the list, but in a real application you'd likely do more complicated things per iteration, which further reduces the importance of the memory allocation.
import timeit
import numpy as np

def list_append(size=1_000_000):
    result = []
    for i in range(size):
        result.append(i)
    return result

def list_prealloc(size=1_000_000):
    result = [None] * size
    for i in range(size):
        result[i] = i
    return result

def numpy_prealloc(size=1_000_000):
    result = np.empty(size, np.int32)
    for i in range(size):
        result[i] = i
    return result

setup = 'from __main__ import list_append, list_prealloc, numpy_prealloc'
print(timeit.timeit('list_append()', setup=setup, number=10))     # 0.79
print(timeit.timeit('list_prealloc()', setup=setup, number=10))   # 0.62
print(timeit.timeit('numpy_prealloc()', setup=setup, number=10))  # 0.73
For some applications, a dictionary may be what you are looking for. For example, in the find_totients method, I found it more convenient to use a dictionary since I didn't have a zero index.
import math

def totient(n):
    totient = 0

    if n == 1:
        totient = 1
    else:
        for i in range(1, n):
            if math.gcd(i, n) == 1:
                totient += 1
    return totient

def find_totients(max):
    totients = dict()
    for i in range(1, max+1):
        totients[i] = totient(i)

    print('Totients:')
    for i in range(1, max+1):
        print(i, totients[i])
This problem could also be solved with a preallocated list:
def find_totients(max):
    totients = [None] * (max+1)
    for i in range(1, max+1):
        totients[i] = totient(i)

    print('Totients:')
    for i in range(1, max+1):
        print(i, totients[i])
I feel that this is not as elegant and is more prone to bugs, because I'm storing None values that could throw an exception if I accidentally use them the wrong way, and because I need to think about edge cases that the dictionary lets me avoid.
It's true the dictionary won't be as efficient, but as others have commented, small differences in speed are not always worth significant maintenance hazards.
Fastest Way - use * like list1 = [False] * 1_000_000
Comparing all the common methods (list appending vs preallocation vs for vs while), I found that using * gives the most efficient execution time.
import time
large_int = 10_000_000
start_time = time.time()
# Test 1: List comprehension
l1 = [False for _ in range(large_int)]
end_time_1 = time.time()
# Test 2: Using *
l2 = [False] * large_int
end_time_2 = time.time()
# Test 3: Using append with for loop & range
l3 = []
for _ in range(large_int):
    l3.append(False)
end_time_3 = time.time()
# Test 4: Using append with while loop
l4, i = [], 0
while i < large_int:
    l4.append(False)
    i += 1
end_time_4 = time.time()
# Results
diff_1 = end_time_1 - start_time
diff_2 = end_time_2 - end_time_1
diff_3 = end_time_3 - end_time_2
diff_4 = end_time_4 - end_time_3
print(f"Test 1. {diff_1:.4f} seconds")
print(f"Test 2. {diff_2:.4f} seconds")
print(f"Test 3. {diff_3:.4f} seconds")
print(f"Test 4. {diff_4:.4f} seconds")
print("\nTest 2 is faster than - ")
print(f" Test 1 by - {(diff_1 / diff_2 * 100 - 1):,.0f}%")
print(f" Test 3 by - {(diff_3 / diff_2 * 100 - 1):,.0f}%")
print(f" Test 4 by - {(diff_4 / diff_2 * 100 - 1):,.0f}%")
From what I understand, Python lists are already quite similar to ArrayLists. But if you want to tweak those parameters I found this post on the Internet that may be interesting (basically, just create your own ScalableList extension):
http://mail.python.org/pipermail/python-list/2000-May/035082.html
