Convert pointer to pointer in C++ to Numpy Array in Cython - python

I am working on an algorithm that leverages Cython and C++ code to speed up a computation. Part of the computation involves keeping track of a 2D matrix, vecs, of dimensions D x D' (e.g. 1000 x 100). The algorithm parallelizes within Cython to set values per column. I am then interested in obtaining the vecs values as a NumPy array in Python.
Modifying vecs in Cython
The pseudocode for setting each column of vecs is something like:
# this occurs in a Cython/C++ function
for icolumn in range(D'):
    for irow in range(D):
        vecs[irow, icolumn] = val
Data structure for vecs
To represent such a matrix, I am using a pointer to pointers of type npy_float32 (which I think is just numpy's float32 type). The pointer-to-pointer array currently looks like this:
ctypedef np.npy_float32 DTYPE_t
cdef DTYPE_t** vecs # (D, D') array of vectors
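For context, here is a minimal sketch of how such a pointer-to-pointer array might be allocated in Cython (this is an assumption about the setup, not code from my program; the dimension names D and Dp are hypothetical):
from libc.stdlib cimport malloc
cimport numpy as np

ctypedef np.npy_float32 DTYPE_t

cdef int D = 1000, Dp = 100   # hypothetical dimensions
cdef DTYPE_t** vecs = <DTYPE_t**> malloc(D * sizeof(DTYPE_t*))
cdef int i
for i in range(D):
    # each row is a separate heap allocation, so the matrix as a whole
    # is NOT one contiguous buffer (freeing is omitted in this sketch)
    vecs[i] = <DTYPE_t*> malloc(Dp * sizeof(DTYPE_t))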
Goal to obtain the vecs in NumPy Array at the Python Level
I am interested in converting this vecs variable into a numpy array. This is my attempt, but it doesn't work. I'm still pretty new to C++ and Cython.
numpy_vec_arr = np.asarray(<DTYPE_t[:,:]> proj_vecs_arr)

I'm not an expert in NumPy or Cython, but I do know a bit about C++ and about interoperating C++ with other languages.
What problem are you trying to solve?
Answering that will prevent the XY problem, and might allow folks here to better help you.
Now, answering your original question. I see two ways to do this.
Use an available constructor of the NumPy array to construct a NumPy array, copying the data over. This will clearly give you a NumPy array, problem solved. A sketch of this approach follows below.
Create an object with the same in-memory structure as the NumPy array that Python expects, then convert that object to a NumPy array. This is tricky to do, bug prone, and depends on many implementation details. For those reasons, I would not suggest taking this approach.
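As a minimal sketch of the first option (assuming vecs is a DTYPE_t** with D row pointers, each pointing at Dp floats; the helper name ptr_to_array is mine, not from the question), allocate a NumPy array and copy element by element:
import numpy as np
cimport numpy as np

ctypedef np.npy_float32 DTYPE_t

cdef ptr_to_array(DTYPE_t** vecs, int D, int Dp):
    # allocate the destination, then copy row by row; a pointer-to-pointer
    # matrix is not contiguous, so it cannot simply be cast to a
    # two-dimensional typed memoryview as in the attempt above
    cdef np.ndarray[np.float32_t, ndim=2] out = np.empty((D, Dp), dtype=np.float32)
    cdef int i, j
    for i in range(D):
        for j in range(Dp):
            out[i, j] = vecs[i][j]
    return out
This copies the data once, which is usually acceptable for a 1000 x 100 matrix; if copying turns out to be a real bottleneck, storing the data in one contiguous block from the start would let you wrap it without any copy.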

Related

Are there dynamic arrays in numpy?

Let's say I create 2 numpy arrays, one of which is an empty array and one which is of size 1000x1000 made up of zeros:
import numpy as np
A1 = np.array([])
A2 = np.zeros([1000,1000])
When I want to change a value in A2, this seems to work fine:
A2[n,m] = 17
The above code would change the value of position [n][m] in A2 to 17.
When I try the above with A1 I get this error:
A1[n,m] = 17
IndexError: index n is out of bounds for axis 0 with size 0
I know why this happens, because there is no defined position [n,m] in A1 and that makes sense, but my question is as follows:
Is there a way to define a dynamic array that updates itself with new rows and columns if A[n,m] = somevalue is entered when n or m (or both) are greater than the bounds of an array A?
It doesn't have to be in numpy; any library or method that can update the array size would be awesome. If it is a method, I can imagine an if check that tests whether [n][m] is out of bounds and does something about it.
I am coming from a MATLAB background where it's easy to do this. I tried to find something about this in the documentation in numpy.array but I've been unsuccessful.
EDIT:
I want to know if there is some way to create a dynamic list at all in Python, not just in the numpy library. It appears from this question that it doesn't work with numpy: Creating a dynamic array using numpy in python.
This can't be done in numpy, and it technically can't be done in MATLAB either. What MATLAB is doing behind the scenes is creating an entire new matrix, copying all the data to the new matrix, and then deleting the old matrix. It is not dynamically resizing; that isn't actually possible because of how arrays/matrices work. This is extremely slow, especially for large arrays, which is why MATLAB nowadays warns you not to do it.
Numpy, like MATLAB, cannot resize arrays (actually, unlike MATLAB it technically can, but only if you are lucky, so I would advise against trying). But in order to avoid the sort of confusion and slow code this causes in MATLAB, numpy requires that you explicitly make the new array (using np.zeros) and then copy the data over, as in the sketch below.
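A minimal sketch of that explicit pattern (hypothetical sizes):
import numpy as np

A = np.zeros((2, 2))
bigger = np.zeros((4, 4))              # explicitly allocate the larger array
bigger[:A.shape[0], :A.shape[1]] = A   # copy the old data into it
A = bigger                             # rebind the name; the old array is freed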
Python, unlike MATLAB, actually does have a truly resizable data structure: the list. Lists still require an index to refer to an existing element (which avoids the silent indexing errors that are hard to catch in MATLAB), but you can grow a list with very good performance. You can make an effectively n-dimensional list by using nested lists of lists. Then, once the list is done, you can convert it to a numpy array, as in the sketch below.
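A minimal sketch of that workflow (hypothetical values, just to show the shape of the code):
import numpy as np

rows = []
for i in range(5):
    row = []
    for j in range(3):
        row.append(i * j)   # lists grow in place as needed
    rows.append(row)

A = np.array(rows)          # copy the finished nested lists into a fixed-size array
print(A.shape)              # (5, 3)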

two dimensional array slicing in cython

I have a simple question: why is this not efficient?
import numpy as np
cimport numpy as c_np
import cython

def function():
    cdef c_np.ndarray[double, ndim=2] A = np.random.random((10,10))
    cdef c_np.ndarray[double, ndim=1] slice
    slice = A[1,:] # this line is marked as slow by the profiler cython -a
    return
How should I slice a numpy matrix in Cython without overhead?
In my code, A is an adjacency matrix, so the slices are the neighbours in my routing algorithm.
The lines marked by the annotator are only suggestions, not based on actual profiling. I think it uses a relatively simple heuristic, something like the number of Python API calls. It also does not take into account the number of times something is called - yellow lines inside tight loops are much more important than something called once.
In this case, what you are doing is fairly efficient - one call to numpy to get the sliced array, and the assignment of that array to a buffer.
The generated C code looks like it may be better using the memoryview syntax which is functionally equivalent, but you would have to profile to know for sure if this is actually faster.
%%cython -a
import numpy as np
cimport numpy as c_np
import cython

def function():
    cdef double[:, :] A = np.random.random((10,10))
    cdef double[:] slice
    slice = A[1,:]
    return
At the risk of repeating an already good answer (and not answering the question!), I'm going to point out a common misunderstanding with the annotator... Yellow indicates a lot of interaction with the Python interpreter. That (and the code you see when expanded) is a really useful hint when optimizing.
However! To quote from the first paragraph in every document on code optimization ever:
Profile first, then optimize. Seriously: don't guess, profile first.
And the annotations are definitely not a profile. Check out the Cython docs on profiling, and maybe this answer for line profiling: How to profile cython functions line-by-line
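For reference, a minimal sketch of what that looks like (assuming the .pyx module was compiled with the profile=True directive so its functions show up in the profile; function here stands for whatever entry point you want to measure):
# at the top of the .pyx file:
# cython: profile=True

import cProfile
import pstats

cProfile.run("function()", "profile_out")                     # run under the profiler
pstats.Stats("profile_out").sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time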
As an example my code has some bright yellow where it calls some numpy functions on big arrays. There's actually very, very little room there for improvement* as that python interaction overhead is amortized over a lot of computation on those big arrays.
*I might be able to eke out a little more by using the numpy C interface directly, but again, that is amortized over a huge computation.

Questions regarding numpy in Python

I wrote a program using normal Python, and I now think it would be a lot better to use numpy instead of standard lists. The problem is there are a number of things where I'm confused how to use numpy, or whether I can use it at all.
In general, how do np.arrays work? Are they dynamic in size like a C++ vector, or do I have to declare their length and type beforehand like a standard C++ array? In my program I've got a lot of cases where I create a list
ex_list = [] and then cycle through something and append to it with ex_list.append(some_lst). Can I do something like that with a numpy array? What if I knew the size of ex_list, could I declare an empty one and then add to it?
If I can't, and let's say I only read from this list, would it be worth it to convert it to numpy afterwards, i.e. is accessing a numpy array faster?
Can I do more complicated operations on each element using a numpy array (not just adding 5 to each, etc.)? Example below.
full_pallete = [(int(1+i*(255/127.5)),0,0) for i in range(0,128)]
full_pallete += [col for col in right_palette if col[1]!=0 or col[2]!=0 or col==(0,0,0)]
In other words, does it make sense to convert to a numpy array and then cycle through it using something other than a for loop?
Numpy arrays can be appended to (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.append.html), although in general calling the append function many times in a loop has a heavy performance cost - it is generally better to pre-allocate a large array and then fill it as necessary. This is because the arrays themselves do have fixed size under the hood, but this is hidden from you in python.
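A minimal sketch of the two patterns side by side (hypothetical sizes):
import numpy as np

n = 1000

# slow pattern: np.append copies the entire array on every call
out = np.array([])
for i in range(n):
    out = np.append(out, i ** 2)

# better pattern: allocate once, then fill in place
out = np.empty(n)
for i in range(n):
    out[i] = i ** 2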
Yes, Numpy is well designed for many operations similar to these. In general, however, you don't want to be looping through numpy arrays (or arrays in general in python) if they are very large. By using inbuilt numpy functions, you basically make use of all sorts of compiled speed up benefits. As an example, rather than looping through and checking each element for a condition, you would use numpy.where().
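For example, a minimal sketch of replacing an element-wise loop with numpy.where():
import numpy as np

x = np.arange(10)
# one vectorized call instead of a Python loop:
# where x is even take 10*x, otherwise take -x
y = np.where(x % 2 == 0, 10 * x, -x)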
The real reason to use numpy is to benefit from pre-compiled mathematical functions and data processing utilities on large arrays - both those in the core numpy library as well as many other packages that use them.

Creating new numpy scalar through C API and implementing a custom view

Short version
Given a built-in quaternion data type, how can I view a numpy array of quaternions as a numpy array of floats with an extra dimension of size 4 (without copying memory)?
Long version
Numpy has built-in support for floats and complex floats. I need to use quaternions -- which generalize complex numbers, but rather than having two components, they have four. There's already a very nice package that uses the C API to incorporate quaternions directly into numpy, which seems to do all the operations perfectly fast. There are a few more quaternion functions that I need to add to it, but I think I can mostly handle those.
However, I would also like to be able to use these quaternions in other functions that I need to write using the awesome numba package. Unfortunately, numba cannot currently deal with custom types. But I don't need the fancy quaternion functions in those numba-ed functions; I just need the numbers themselves. So I'd like to be able to just re-cast an array of quaternions as an array of floats with one extra dimension (of size 4). In particular, I'd like to just use the data that's already in the array without copying, and view it as a new array. I've found the PyArray_View function, but I don't know how to implement it.
(I'm pretty confident the data are held contiguously in memory, which I assume would be required for having a simple view of them. Specifically, elsize = 8*4 and alignment = 8 in the quaternion package.)
Turns out that was pretty easy. The magic of numpy means it's already possible. While thinking about this, I just tried the following with complex numbers:
import numpy as np
a = np.array([1+2j, 3+4j, 5+6j])
a.view(np.float64).reshape(a.shape[0], 2)
And this gave exactly what I was looking for. Somehow the same basic idea works with the quaternion type. I guess the internals just rely on that elsize, dividing by sizeof(float) and using that to set the new size in the last dimension?
To answer my own question then, the same idea can be applied to the quaternion module:
import numpy as np, quaternion
a = np.array([np.quaternion(1,2,3,4), np.quaternion(5,6,7,8), np.quaternion(9,0,1,2)])
a.view(np.float64).reshape(a.shape[0], 4)
The view transformation and reshaping combined seem to take about 1 microsecond on my laptop, independent of the size of the input array (presumably because there's no memory copying, other than a few members in some basic python object).
The above is valid for simple 1-d arrays of quaternions. To apply it to general shapes, I just write a function inside the quaternion namespace:
def as_float_array(a):
    "View the quaternion array as an array of floats with one extra dimension of size 4"
    return a.view(np.float64).reshape(a.shape + (4,))
Different shapes don't seem to slow the function down significantly.
Also, it's easy to convert back from a float array to a quaternion array:
def as_quat_array(a):
    "View a float array as an array of quaternions, collapsing the last dimension"
    if a.shape[-1] == 4:
        return a.view(np.quaternion).reshape(a.shape[:-1])
    return a.view(np.quaternion).reshape(a.shape[:-1] + (a.shape[-1]//4,))

Numpy equivalent of MATLAB's cell array

I want to create a MATLAB-like cell array in Numpy. How can I accomplish this?
Matlab cell arrays are most similar to Python lists, since they can hold any object - but scipy.io.loadmat imports them as numpy object arrays, which are arrays with dtype=object.
To be honest, though, you are just as well off using Python lists - if you are holding general objects, you will lose almost all of the advantages of numpy arrays (which are designed to hold a sequence of values that each take the same amount of memory).
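A minimal sketch of both options (hypothetical contents, just to show the mechanics):
import numpy as np

# numpy object array: fixed length, but each cell can hold anything
cells = np.empty(3, dtype=object)
cells[0] = "text"
cells[1] = [1, 2, 3]
cells[2] = np.eye(2)

# the plain Python list is usually the better choice for this
cells_list = ["text", [1, 2, 3], np.eye(2)]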
