I used Cython to speed up the bottleneck in my Python code. The task is to compute the selective inverse (below, S) of a sparse matrix given by its Cholesky factorization in CSC format (data, indptr, indices). But the task itself is not really important; in the end it is a triply nested for-loop in which I have to access elements of S fast.
When I use a memoryview of a full/huge matrix
double[:,:] Sfull
and access the entries, the algorithm is quite fast and meets my expectations. But it is clear that this is only possible when the matrix Sfull fits into memory.
My approach was to use a list/vector of dictionaries/maps such that I can access the elements also relatively fast.
cdef vector[map[int, double]] S
It turned out that accessing the elements inside the loop with this data structure is around 20 times slower. Is this expected, or is there another issue? Can you suggest another data structure?
Thank you very much for any comments or help!
Best,
Manuel
Below is the Cython code, where the version with the full memoryview is commented out.
cdef int invTakC12( double[:] id_diag, double[:] data, int len_i, int[:] indptr, int[:] indices, double[:, :] Sfull):
    cdef vector[map[int, double]] S = testDictC(len_i-1)  # list of empty dicts
    cdef int i, j, j_i, lc
    cdef double q
    for i in range(len_i-2, -1, -1):
        for j_i in range(indptr[i+1]-1, indptr[i]-1, -1):
            j = indices[j_i]
            q = 0
            for lc in range(indptr[i+1]-1, indptr[i], -1):
                q += data[lc] * S[j][ indices[lc] ]
                #q += data[lc] * Sfull[ indices[lc], j ]
            S[j][i] = -q
            #Sfull[i,j] = -q
            if i==j:
                S[j][i] += id_diag[i]
                #Sfull[i,j] += id_diag[i]
            else:
                S[i][j] -= q
                #Sfull[j,i] -= id_diag[i]
    return 0
You can access the arrays independently - e.g.:
cdef double[:] S_data = S.data
cdef np.int32_t[:] S_ind = S.indices
cdef np.int32_t[:] S_indptr = S.indptr
If that's too inconvenient, you can put them in a C struct as pointers:
cdef struct mapped_csc:
    double *data
    np.int32_t *indices
    np.int32_t *indptr
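To make the layout concrete, here is a hypothetical pure-Python sketch of how a column of a CSC matrix is read from the three arrays; the list names mirror the struct members above, and the values are made up for illustration:

```python
# Hypothetical pure-Python sketch of CSC column access; the three lists
# mirror the data/indices/indptr members of the struct above.
data = [10.0, 20.0, 30.0]   # nonzero values, stored column by column
indices = [0, 2, 1]         # row index of each stored value
indptr = [0, 2, 3]          # column j occupies data[indptr[j]:indptr[j+1]]

def csc_column(j):
    """Return {row: value} for column j."""
    return {indices[k]: data[k] for k in range(indptr[j], indptr[j + 1])}

print(csc_column(0))  # {0: 10.0, 2: 20.0} - column 0 holds rows 0 and 2
```

The inner loops in the question traverse exactly these `indptr[i]` to `indptr[i+1]` ranges.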
I'm trying to create a simple cython module and have the following problem. I would like to create a function like:
cdef float calc(float[:] a1, float[:] a2):
    cdef float res = 0
    cdef int l = len(a2)
    cdef float item_a2
    cdef float item_a1
    for idx in range(l):
        if a2[idx] > 0:
            item_a2 = a2[idx]
            item_a1 = a1[idx]
            res += item_a2 * item_a1
    return res
When the function is being executed, a1 and a2 params are python lists. Therefore I get the error:
TypeError: a bytes-like object is required, not 'list'
I just need to make such calculations and nothing more. But how shall I define input params float[:] a1 and float[:] a2 if I need to maximize speed up using C?
Probably it's necessary to convert lists to arrays manually?
P.S. I would also appreciate it if you could explain whether it's necessary to declare cdef float item_a2 explicitly to perform the multiplication (in terms of performance), or whether it is equivalent to write result += a2[idx] * a1[idx]
cdef float calc(float[:] a1, float[:] a2):
a1 and a2 can be any object that supports the buffer protocol and has a float type. The most common examples would be either a numpy array or the standard library array module. They will not accept Python lists because a Python list is not a single homogeneous C type packed efficiently into memory, but instead a collection of Python objects.
To create a suitable object from a Python list you can do either:
numpy.array([1.0,2.0],dtype=numpy.float32)
array.array('f',[1.0,2.0])
(You may want to consider using double/float64 instead of float for extra precision, but that's your choice)
If you don't want to create array objects like this then Cython will not help you much since there is not much speed up possible with plain lists.
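As a minimal sketch of the second option, using only the standard library array module: the resulting object supports the buffer protocol, which is exactly what a float[:] memoryview parameter requires (the values here are arbitrary examples):

```python
import array

# 'f' gives C float elements packed contiguously in memory; this object
# supports the buffer protocol, so a Cython float[:] parameter accepts it.
a1 = array.array('f', [1.0, 2.0, 3.0])

# memoryview() works on any buffer-protocol object; a Cython typed
# memoryview wraps the same underlying memory in the same way.
mv = memoryview(a1)
print(mv[1])  # 2.0
```

Passing a plain list in the same position raises a TypeError, which is the error shown in the question.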
The np.ndarray[FLOAT, ndim=1] a1 syntax suggested in the other answer is an outdated version of the memoryview syntax you're already using. There are no advantages (and a few small disadvantages) to using it.
result += a2[idx] * a1[idx]
is fine - Cython knows the types of a1 and a2, so there is no need to create temporary intermediate variables. You can generate an HTML-annotated file with cython -a filename.pyx; inspecting it will show where the non-accelerated parts are.
Cython answer
One way you can do this (if you're open to using numpy):
import numpy as np
cimport numpy as np
ctypedef np.npy_float FLOAT
ctypedef np.npy_intp INTP
cdef FLOAT calc(np.ndarray[FLOAT, ndim=1, mode='c'] a1,
                np.ndarray[FLOAT, ndim=1, mode='c'] a2):
    cdef FLOAT res = 0
    cdef INTP l = a2.shape[0]
    cdef FLOAT item_a2
    cdef FLOAT item_a1
    for idx in range(l):
        if a2[idx] > 0:
            item_a2 = a2[idx]
            item_a1 = a1[idx]
            res += item_a2 * item_a1
    return res
This will require a np.float32 dtype for your array. If you wanted a np.float64, you can redefine FLOAT as np.float64_t.
One unsolicited piece of advice... l is a bad name for a variable, since it looks like a digit. Consider renaming it length, or something of the like.
Pure python with Numpy
Finally, it looks like you're trying to compute the dot product between two vectors where elements in one array are positive. You could use Numpy here pretty efficiently to get the same result.
>>> import numpy as np
>>> a1 = np.array([0, 1, 2, 3, 4, 5, 6])
>>> a2 = np.array([1, 2, 0, 3, -1])
>>> a1[:a2.shape[0]].dot(np.maximum(a2, 0))
11
Note, I added the a1 slice since you didn't check for length equality in your Cython function, but used a2's length. So I assumed the lengths may differ.
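For reference, the same clipped dot product can be checked in plain Python; zip truncates to the shorter input, which mirrors the a1 slice above:

```python
a1 = [0, 1, 2, 3, 4, 5, 6]
a2 = [1, 2, 0, 3, -1]

# multiply only where the a2 entry is positive, as in the Cython loop
res = sum(x * y for x, y in zip(a1, a2) if y > 0)
print(res)  # 11
```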
Say I have a simple function that takes as input a pointer to an integer. How do I change the originating integer value?
My idea was as follows:
cdef myFunc(int n, int *nnz):
    nnz_int = <uintptr_t>nnz
    nnz_int = 0
    for i in range(0, n):
        nnz_int += n
but upon reflection, I think I only initially cast the value of nnz onto nnz_int, and then change nnz_int, without changing the original nnz. How do I achieve that?
From the Cython docs:
Note that Cython uses array access for pointer dereferencing, as *x is not valid Python syntax, whereas x[0] is.
So this should work:
cdef myFunc(int n, int *nnz):
    for i in range(0, n):
        nnz[0] += n
Not sure what you're trying to achieve by adding n to the pointed-to value n times; why not simply add n*n to it once?
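The same pass-by-pointer behaviour can be sketched from plain Python with the standard library ctypes module, which makes it easy to see that nnz[0] writes through the pointer to the original value (function and variable names here are illustrative, not from the question):

```python
import ctypes

def my_func(n, nnz):
    # nnz is a ctypes pointer; nnz[0] dereferences it, just as in Cython
    for i in range(n):
        nnz[0] += n

val = ctypes.c_int(0)
my_func(3, ctypes.pointer(val))
print(val.value)  # 9 - same as adding n*n once
```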
I currently have the following double loop in my Python code:
for i in range(a):
    for j in range(b):
        A[:,i] *= B[j][:,C[i,j]]
(A is a float matrix, B is a list of float matrices, and C is a matrix of integers; by matrices I mean m x n np.arrays. To be precise, the sizes are: A is m x a; B is b matrices, each of size m x l (with l different for each matrix); C is a x b. Here m is very large, a is very large, b is small, and the l's are even smaller than b.)
I tried to speed it up by doing
for j in range(b):
    A[:,:] *= B[j][:,C[:,j]]
but surprisingly to me this performed worse.
More precisely, this did improve performance for small values of m and a (the "large" numbers), but from m=7000, a=700 onwards the first approach is roughly twice as fast.
Is there anything else I can do?
Maybe I could parallelize? But I don't really know how.
(I am not committed to either Python 2 or 3)
Here's a vectorized approach assuming B as a list of arrays that are of the same shape -
# Convert B to a 3D array
B_arr = np.asarray(B)
# Use advanced indexing to index into the last axis of B array with C
# and then do product-reduction along the second axis.
# Finally, we perform elementwise multiplication with A
A *= B_arr[np.arange(B_arr.shape[0]),:,C].prod(1).T
For cases with smaller a, we could run a loop that iterates through the length of a instead. Also, for more performance, it might be a better idea to store those elements into a separate 2D array instead and perform the elementwise multiplication only once after we get out of the loop.
Thus, we would have an alternative implementation like so -
range_arr = np.arange(B_arr.shape[0])
out = np.empty_like(A)
for i in range(a):
    out[:,i] = B_arr[range_arr,:,C[i,:]].prod(0)
A *= out
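To pin down what the original double loop computes, here is a tiny pure-Python model with hypothetical small sizes (m=2, a=2, b=2) so the result can be checked by hand; the values are made up for illustration:

```python
# Tiny pure-Python model of the original double loop A[:,i] *= B[j][:,C[i,j]],
# with hypothetical small sizes so the result is checkable by hand.
m, a, b = 2, 2, 2
A = [[1.0, 2.0], [3.0, 4.0]]            # m x a
B = [[[2.0, 3.0], [4.0, 5.0]],          # list of b matrices, each m x l
     [[1.0, 2.0], [3.0, 1.0]]]
C = [[0, 1], [1, 0]]                    # a x b, column selectors into B[j]

for i in range(a):
    for j in range(b):
        col = C[i][j]
        for r in range(m):              # elementwise: A[:,i] *= B[j][:,col]
            A[r][i] *= B[j][r][col]

print(A)  # [[4.0, 6.0], [12.0, 60.0]]
```

Each column of A is scaled by the product over j of the selected columns of the B matrices, which is exactly the product-reduction the vectorized version performs in one shot.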
In terms of performance (algebraic operations, lookup, caching, etc.), is there a difference between C arrays (which can be exposed as a C array, a cython.view.array [Cython array], or a memoryview of the aforementioned two) and NumPy arrays (which in Cython should have no Python overhead)?
Edit:
I should mention that the NumPy array is statically typed using Cython, the dtypes are NumPy compile-time datatypes (e.g. cdef np.int_t or cdef np.float32_t), and the types in the C case are the C equivalents (cdef int and cdef float).
Edit2:
Here is the example from the Cython Memoryview documentation to further illustrate my question:
from cython.view cimport array as cvarray
import numpy as np
# Memoryview on a NumPy array
narr = np.arange(27, dtype=np.dtype("i")).reshape((3, 3, 3))
cdef int [:, :, :] narr_view = narr
# Memoryview on a C array
cdef int carr[3][3][3]
cdef int [:, :, :] carr_view = carr
# Memoryview on a Cython array
cyarr = cvarray(shape=(3, 3, 3), itemsize=sizeof(int), format="i")
cdef int [:, :, :] cyarr_view = cyarr
Is there any difference between sticking with a C array vs a Cython array vs a NumPy array?
My knowledge on this is still imperfect, but this may be helpful.
I ran some informal benchmarks to show what each array type is good for and was intrigued by what I found.
Though these array types are different in many ways, if you are doing heavy computation with large arrays, you should be able to get similar performance out of any of them since item-by-item access should be roughly the same across the board.
A NumPy array is a Python object implemented using Python's C API.
NumPy arrays do provide an API at the C level, but they cannot be created independent from the Python interpreter.
They are especially useful because of all the different array manipulation routines available in NumPy and SciPy.
A Cython memory view is also a Python object, but it is made as a Cython extension type.
It does not appear to be designed for use in pure Python since it isn't a part of Cython that can be imported directly from Python, but you can return a view to Python from a Cython function.
You can look at the implementation at https://github.com/cython/cython/blob/master/Cython/Utility/MemoryView.pyx
A C array is a native type in the C language.
It is indexed like a pointer, but arrays and pointers are different.
There is some good discussion on this at http://c-faq.com/aryptr/index.html
They can be allocated on the stack and are easier for the C compiler to optimize, but they will be more difficult to access outside of Cython.
I know you can make a NumPy array from memory that has been dynamically allocated by other programs, but it seems a lot more difficult that way.
Travis Oliphant posted an example of this at http://blog.enthought.com/python/numpy-arrays-with-pre-allocated-memory/
If you are using C arrays or pointers for temporary storage within your program they should work very well for you.
They will not be as convenient for slicing or for any other sort of vectorized computation since you will have to do everything yourself with explicit looping, but they should allocate and deallocate faster and ought to provide a good baseline for speed.
Cython also provides an array class.
It looks like it is designed for internal use.
Instances are created when a memoryview is copied.
See http://docs.cython.org/src/userguide/memoryviews.html#view-cython-arrays
In Cython, you can also allocate memory and index a pointer to treat the allocated memory somewhat like an array.
See http://docs.cython.org/src/tutorial/memory_allocation.html
Here are some benchmarks that show somewhat similar performance for indexing large arrays.
This is the Cython file.
from numpy cimport ndarray as ar, uint64_t
cimport cython
import numpy as np
@cython.boundscheck(False)
@cython.wraparound(False)
def ndarr_time(uint64_t n=1000000, uint64_t size=10000):
    cdef:
        ar[uint64_t] A = np.empty(n, dtype=np.uint64)
        uint64_t i, j
    for i in range(n):
        for j in range(size):
            A[j] = n

def carr_time(uint64_t n=1000000):
    cdef:
        ar[uint64_t] A = np.empty(n, dtype=np.uint64)
        uint64_t AC[10000]
        uint64_t a
        int i, j
    for i in range(n):
        for j in range(10000):
            AC[j] = n

@cython.boundscheck(False)
@cython.wraparound(False)
def ptr_time(uint64_t n=1000000, uint64_t size=10000):
    cdef:
        ar[uint64_t] A = np.empty(n, dtype=np.uint64)
        uint64_t* AP = &A[0]
        uint64_t a
        int i, j
    for i in range(n):
        for j in range(size):
            AP[j] = n

@cython.boundscheck(False)
@cython.wraparound(False)
def view_time(uint64_t n=1000000, uint64_t size=10000):
    cdef:
        ar[uint64_t] A = np.empty(n, dtype=np.uint64)
        uint64_t[:] AV = A
        uint64_t i, j
    for i in range(n):
        for j in range(size):
            AV[j] = n
Timing these using IPython we obtain
%timeit -n 10 ndarr_time()
%timeit -n 10 carr_time()
%timeit -n 10 ptr_time()
%timeit -n 10 view_time()
10 loops, best of 3: 6.33 s per loop
10 loops, best of 3: 3.12 s per loop
10 loops, best of 3: 6.26 s per loop
10 loops, best of 3: 3.74 s per loop
These results struck me as a little odd, considering that, as per Efficiency: arrays vs pointers, arrays are unlikely to be significantly faster than pointers.
It appears that some sort of compiler optimization is making the pure C arrays and the typed memory views faster.
I tried turning off all the optimization flags on my C compiler and got the timings
1 loops, best of 3: 25.1 s per loop
1 loops, best of 3: 25.5 s per loop
1 loops, best of 3: 32 s per loop
1 loops, best of 3: 28.4 s per loop
It looks to me like the item-by item access is pretty much the same across the board, except that C arrays and Cython memory views seem to be easier for the compiler to optimize.
More commentary on this can be seen in these two blog posts I found some time ago:
http://jakevdp.github.io/blog/2012/08/08/memoryview-benchmarks/
http://jakevdp.github.io/blog/2012/08/16/memoryview-benchmarks-2/
In the second blog post he comments on how, if memory view slices are inlined, they can provide speeds similar to that of pointer arithmetic.
I have noticed in some of my own tests that explicitly inlining functions that use Memory View slices isn't always necessary.
As an example of this, I'll compute the inner product of every combination of two rows of an array.
from numpy cimport ndarray as ar
cimport cython
from numpy import empty

# An inlined dot product
@cython.boundscheck(False)
@cython.wraparound(False)
cdef inline double dot_product(double[:] a, double[:] b, int size):
    cdef int i
    cdef double tot = 0.
    for i in range(size):
        tot += a[i] * b[i]
    return tot

# non-inlined dot product
@cython.boundscheck(False)
@cython.wraparound(False)
cdef double dot_product_no_inline(double[:] a, double[:] b, int size):
    cdef int i
    cdef double tot = 0.
    for i in range(size):
        tot += a[i] * b[i]
    return tot

# function calling the inlined dot product
@cython.boundscheck(False)
@cython.wraparound(False)
def dot_rows_slicing(ar[double,ndim=2] A):
    cdef:
        double[:,:] Aview = A
        ar[double,ndim=2] res = empty((A.shape[0], A.shape[0]))
        int i, j
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            res[i,j] = dot_product(Aview[i], Aview[j], A.shape[1])
    return res

# function calling the non-inlined version
@cython.boundscheck(False)
@cython.wraparound(False)
def dot_rows_slicing_no_inline(ar[double,ndim=2] A):
    cdef:
        double[:,:] Aview = A
        ar[double,ndim=2] res = empty((A.shape[0], A.shape[0]))
        int i, j
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            res[i,j] = dot_product_no_inline(Aview[i], Aview[j], A.shape[1])
    return res

# inlined dot product using numpy arrays
@cython.boundscheck(False)
@cython.wraparound(False)
cdef inline double ndarr_dot_product(ar[double] a, ar[double] b):
    cdef int i
    cdef double tot = 0.
    for i in range(a.size):
        tot += a[i] * b[i]
    return tot

# non-inlined dot product using numpy arrays
@cython.boundscheck(False)
@cython.wraparound(False)
cdef double ndarr_dot_product_no_inline(ar[double] a, ar[double] b):
    cdef int i
    cdef double tot = 0.
    for i in range(a.size):
        tot += a[i] * b[i]
    return tot

# function calling the inlined numpy array dot product
@cython.boundscheck(False)
@cython.wraparound(False)
def ndarr_dot_rows_slicing(ar[double,ndim=2] A):
    cdef:
        ar[double,ndim=2] res = empty((A.shape[0], A.shape[0]))
        int i, j
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            res[i,j] = ndarr_dot_product(A[i], A[j])
    return res

# function calling the non-inlined version for numpy arrays
@cython.boundscheck(False)
@cython.wraparound(False)
def ndarr_dot_rows_slicing_no_inline(ar[double,ndim=2] A):
    cdef:
        ar[double,ndim=2] res = empty((A.shape[0], A.shape[0]))
        int i, j
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            res[i,j] = ndarr_dot_product_no_inline(A[i], A[j])
    return res

# Version with explicit looping and item-by-item access.
@cython.boundscheck(False)
@cython.wraparound(False)
def dot_rows_loops(ar[double,ndim=2] A):
    cdef:
        ar[double,ndim=2] res = empty((A.shape[0], A.shape[0]))
        int i, j, k
        double tot
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            tot = 0.
            for k in range(A.shape[1]):
                tot += A[i,k] * A[j,k]
            res[i,j] = tot
    return res
Timing these we see
from numpy.random import rand
A = rand(1000, 1000)
%timeit dot_rows_slicing(A)
%timeit dot_rows_slicing_no_inline(A)
%timeit ndarr_dot_rows_slicing(A)
%timeit ndarr_dot_rows_slicing_no_inline(A)
%timeit dot_rows_loops(A)
1 loops, best of 3: 1.02 s per loop
1 loops, best of 3: 1.02 s per loop
1 loops, best of 3: 3.65 s per loop
1 loops, best of 3: 3.66 s per loop
1 loops, best of 3: 1.04 s per loop
The results were as fast with explicit inlining as they were without it.
In both cases, the typed memory views were comparable to a version of the function that was written without slicing.
In the blog post, he had to write a specific example to force the compiler to not inline a function.
It appears that a decent C compiler (I'm using MinGW) is able to take care of these optimizations without being told to inline certain functions.
Memoryviews can be faster for passing array slices between functions within a Cython module, even without explicit inlining.
In this particular case, however, even pushing the loops to C doesn't really reach a speed anywhere near what can be achieved through proper use of matrix multiplication.
The BLAS is still the best way to do things like this.
%timeit A.dot(A.T)
10 loops, best of 3: 25.7 ms per loop
There is also automatic conversion from NumPy arrays to memoryviews as in
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
def cysum(double[:] A):
    cdef double tot = 0.
    cdef int i
    for i in range(A.size):
        tot += A[i]
    return tot
The one catch is that, if you want a function to return a NumPy array, you will have to use np.asarray to convert the memory view object to a NumPy array again.
This is a relatively inexpensive operation since memory views comply with the buffer protocol (http://www.python.org/dev/peps/pep-3118/).
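A quick pure-Python illustration of why this round trip is cheap: any buffer-protocol object can be viewed without copying, so writes through the view hit the original storage (here the standard library array module stands in for a NumPy array):

```python
import array

a = array.array('d', [1.0, 2.0, 3.0])
mv = memoryview(a)   # zero-copy view over the same buffer (PEP 3118)
mv[0] = 5.0          # writes through to the underlying array
print(a[0])  # 5.0
```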
Conclusion
Typed memory views seem to be a viable alternative to NumPy arrays for internal use in a Cython module.
Array slicing will be faster with memory views, but there are not as many functions and methods written for memory views as there are for NumPy arrays.
If you don't need to call a bunch of the NumPy array methods and want easy array slicing, you can use memory views in place of NumPy arrays.
If you need both the array slicing and the NumPy functionality for a given array, you can make a memory view that points to the same memory as the NumPy array.
You can then use the view for passing slices between functions and the array for calling NumPy functions.
That approach is still somewhat limited, but it will work well if you are doing most of your processing with a single array.
C arrays and/or dynamically allocated blocks of memory could be useful for intermediate calculations, but they are not as easy to pass back to Python for use there.
In my opinion, it is also more cumbersome to dynamically allocate multidimensional C arrays.
The best approach I am aware of is to allocate a large block of memory and then use integer arithmetic to index it as if it were a multidimensional array.
This could be an issue if you want easy allocation of arrays on the fly.
On the other hand, allocation times are probably a good bit faster for C arrays.
The other array types are designed to be nearly as fast and much more convenient, so I would recommend using them unless there is a compelling reason to do otherwise.
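The flat-block indexing mentioned above can be sketched in plain Python; the same i * ncols + j arithmetic carries over directly to a malloc'd buffer in Cython (sizes and values here are arbitrary):

```python
# Treat a flat 1D buffer as an nrows x ncols row-major array.
nrows, ncols = 3, 4
buf = [0.0] * (nrows * ncols)   # stands in for a malloc'd block

def set2d(i, j, val):
    buf[i * ncols + j] = val    # row-major offset arithmetic

def get2d(i, j):
    return buf[i * ncols + j]

set2d(2, 3, 7.0)
print(get2d(2, 3))  # 7.0
```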
Update: As mentioned in the answer by @Veedrac you can still pass Cython memory views to most NumPy functions.
When you do this, NumPy will usually have to create a new NumPy array object to work with the memory view anyway, so this will be somewhat slower.
For large arrays the effect will be negligible.
A call to np.asarray for a memory view will be relatively fast regardless of array size.
However, to demonstrate this effect, here is another benchmark:
Cython file:
from numpy cimport ndarray as ar

def npy_call_on_view(npy_func, double[:] A, int n):
    cdef int i
    for i in range(n):
        npy_func(A)

def npy_call_on_arr(npy_func, ar[double] A, int n):
    cdef int i
    for i in range(n):
        npy_func(A)
in IPython:
import numpy as np
from numpy.random import rand
A = rand(1)
%timeit npy_call_on_view(np.amin, A, 10000)
%timeit npy_call_on_arr(np.amin, A, 10000)
output:
10 loops, best of 3: 282 ms per loop
10 loops, best of 3: 35.9 ms per loop
I tried to choose an example that would show this effect well.
Unless many NumPy function calls on relatively small arrays are involved, this shouldn't change the time a whole lot.
Keep in mind that, regardless of which way we are calling NumPy, a Python function call still occurs.
This applies only to the functions in NumPy.
Most of the array methods are not available for memoryviews (some of the attributes still are, like size and shape and T).
For example A.dot(A.T) with NumPy arrays would become np.dot(A, A.T).
Don't use cython.view.array, use cpython.array.array.
See this answer of mine for details, although that only deals with speed. The recommendation is to treat cython.view.array as "demo" material, and cpython.array.array as an actual solid implementation. These arrays are very lightweight and better when just using them as scratch space.
Further, if you're ever tempted by malloc, raw access on these is no slower and instantiation takes only twice as long.
With regards to IanH's
If you need both the array slicing and the NumPy functionality for a given array, you can make a memory view that points to the same memory as the NumPy array.
It's worth noting that memoryviews have a "base" property and many NumPy functions can also take memoryviews, so these do not have to be separate variables.