Following this answer to "Can I force a numpy ndarray to take ownership of its memory?" I attempted to use the NumPy C API function PyArray_ENABLEFLAGS through Cython's NumPy wrapper and found it is not exposed.
The following attempt to expose it manually (this is just a minimal example reproducing the failure)
from libc.stdlib cimport malloc
import numpy as np
cimport numpy as np
np.import_array()

ctypedef np.int32_t DTYPE_t

cdef extern from "numpy/ndarraytypes.h":
    void PyArray_ENABLEFLAGS(np.PyArrayObject *arr, int flags)

def test():
    cdef int N = 1000
    cdef DTYPE_t *data = <DTYPE_t *>malloc(N * sizeof(DTYPE_t))
    cdef np.ndarray[DTYPE_t, ndim=1] arr = np.PyArray_SimpleNewFromData(1, &N, np.NPY_INT32, data)
    PyArray_ENABLEFLAGS(arr, np.NPY_ARRAY_OWNDATA)
fails with a compile error:
Error compiling Cython file:
------------------------------------------------------------
...
def test():
    cdef int N = 1000
    cdef DTYPE_t *data = <DTYPE_t *>malloc(N * sizeof(DTYPE_t))
    cdef np.ndarray[DTYPE_t, ndim=1] arr = np.PyArray_SimpleNewFromData(1, &N, np.NPY_INT32, data)
    PyArray_ENABLEFLAGS(arr, np.NPY_ARRAY_OWNDATA)
                           ^
------------------------------------------------------------
/tmp/test.pyx:19:27: Cannot convert Python object to 'PyArrayObject *'
My question: Is this the right approach to take in this case? If so, what am I doing wrong? If not, how do I force NumPy to take ownership in Cython, without going down to a C extension module?
You just have some minor errors in the interface definition. The following worked for me:
from libc.stdlib cimport malloc
import numpy as np
cimport numpy as np
np.import_array()

ctypedef np.int32_t DTYPE_t

cdef extern from "numpy/arrayobject.h":
    void PyArray_ENABLEFLAGS(np.ndarray arr, int flags)

cdef data_to_numpy_array_with_spec(void *ptr, np.npy_intp N, int t):
    cdef np.ndarray[DTYPE_t, ndim=1] arr = np.PyArray_SimpleNewFromData(1, &N, t, ptr)
    PyArray_ENABLEFLAGS(arr, np.NPY_OWNDATA)
    return arr

def test():
    N = 1000
    cdef DTYPE_t *data = <DTYPE_t *>malloc(N * sizeof(DTYPE_t))
    arr = data_to_numpy_array_with_spec(data, N, np.NPY_INT32)
    return arr
This is my setup.py file:
from distutils.core import setup, Extension
from Cython.Distutils import build_ext
ext_modules = [Extension("_owndata", ["owndata.pyx"])]
setup(cmdclass={'build_ext': build_ext}, ext_modules=ext_modules)
Build with python setup.py build_ext --inplace. Then verify that the data is actually owned:
import _owndata
arr = _owndata.test()
print(arr.flags)
Among others, you should see OWNDATA : True.
And yes, this is definitely the right way to deal with this, since numpy.pxd does exactly the same thing to export all the other functions to Cython.
@Stefan's solution works for most scenarios, but is somewhat fragile. NumPy uses PyDataMem_NEW/PyDataMem_FREE for memory management, and it is an implementation detail that these calls are mapped to the usual malloc/free plus some memory tracing (I don't know what effect Stefan's solution has on the memory tracing; at least it doesn't seem to crash).
There are also more esoteric cases possible, in which free from the numpy library doesn't use the same memory allocator as malloc in the Cython code (linked against different runtimes, for example, as in this github issue or this SO post).
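As an aside, one could sidestep that fragility while keeping Stefan's approach by allocating the buffer through NumPy's own allocator, so that the eventual PyDataMem_FREE triggered by the OWNDATA flag matches the allocation. A minimal sketch (PyDataMem_NEW/PyDataMem_FREE are public NumPy C-API functions; data_to_numpy_array_with_spec is from Stefan's snippet above):

cimport numpy as np
np.import_array()

cdef extern from "numpy/arrayobject.h":
    void *PyDataMem_NEW(size_t size)
    void PyDataMem_FREE(void *ptr)

def test():
    cdef np.npy_intp N = 1000
    # allocated by NumPy's allocator, so NumPy may safely free it later
    cdef void *data = PyDataMem_NEW(N * sizeof(np.npy_int32))
    return data_to_numpy_array_with_spec(data, N, np.NPY_INT32)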
The right tool to pass/manage the ownership of the data is PyArray_SetBaseObject.
First we need a Python object which is responsible for freeing the memory. I'm using a self-made cdef class here (mostly because of logging/demonstration), but there are obviously other possibilities as well:
%%cython
from libc.stdlib cimport free

cdef class MemoryNanny:
    cdef void* ptr # set to NULL by "constructor"

    def __dealloc__(self):
        print("freeing ptr=", <unsigned long long>(self.ptr)) # just for debugging
        free(self.ptr)

    @staticmethod
    cdef create(void* ptr):
        cdef MemoryNanny result = MemoryNanny()
        result.ptr = ptr
        print("nanny for ptr=", <unsigned long long>(result.ptr)) # just for debugging
        return result
...
Now, we use a MemoryNanny object as sentinel for the memory, which gets freed as soon as the parent numpy array is destroyed. The code is a little bit awkward, because PyArray_SetBaseObject steals the reference, which is not handled by Cython automatically:
%%cython
...
from cpython.object cimport PyObject
from cpython.ref cimport Py_INCREF
cimport numpy as np

# needed to initialize PyArray_API in order to be able to use it
np.import_array()

cdef extern from "numpy/arrayobject.h":
    # a little bit awkward: the reference to obj will be stolen
    # using PyObject* to signal that Cython cannot handle it automatically
    int PyArray_SetBaseObject(np.ndarray arr, PyObject *obj) except -1 # -1 means there was an error

cdef array_from_ptr(void *ptr, np.npy_intp N, int np_type):
    cdef np.ndarray arr = np.PyArray_SimpleNewFromData(1, &N, np_type, ptr)
    nanny = MemoryNanny.create(ptr)
    Py_INCREF(nanny) # a reference will get stolen, so prepare nanny
    PyArray_SetBaseObject(arr, <PyObject*>nanny)
    return arr
...
And here is an example of how this functionality can be called:
%%cython
...
from libc.stdlib cimport malloc

def create():
    cdef double *ptr = <double*>malloc(sizeof(double)*8)
    ptr[0] = 42.0
    return array_from_ptr(ptr, 8, np.NPY_FLOAT64)
which can be used as follows:
>>> m = create()
nanny for ptr= 94339864945184
>>> m.flags
...
OWNDATA : False
...
>>> m[0]
42.0
>>> del m
freeing ptr= 94339864945184
with results/output as expected.
Note: the resulting array doesn't really own the data (i.e. its flags return OWNDATA : False), because the memory is owned by the memory nanny, but the result is the same: the memory gets freed as soon as the array is deleted (because nobody holds a reference to the nanny anymore).
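For instance, the nanny can be seen hanging on the array's base attribute (a small sketch; the exact repr will differ):

>>> m = create()
>>> m.base
<MemoryNanny object at 0x...>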
MemoryNanny doesn't have to guard a raw C pointer. It can be anything else, for example a std::vector:
%%cython -+
from libcpp.vector cimport vector

cdef class VectorNanny:
    # automatically default-initialized/destructed by Cython:
    cdef vector[double] vec

    @staticmethod
    cdef create(vector[double]& vec):
        cdef VectorNanny result = VectorNanny()
        result.vec.swap(vec) # swap and not copy
        return result

# for testing:
def create_vector(int N):
    cdef vector[double] vec
    vec.resize(N, 2.0)
    return VectorNanny.create(vec)
The following test shows that the nanny works:
nanny=create_vector(10**8) # top shows additional 800MB memory are used
del nanny # top shows, this additional memory is no longer used.
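The same PyArray_SetBaseObject trick from above also works with VectorNanny, e.g. to expose the vector's buffer as a numpy array without copying. A sketch in the same notebook cell, reusing the imports and the PyArray_SetBaseObject declaration from before (note that the vector must not be resized afterwards, as that would invalidate the data pointer):

%%cython -+
...
def create_vector_array(int N):
    cdef VectorNanny nanny = create_vector(N)  # from above
    cdef np.npy_intp size = nanny.vec.size()
    cdef np.ndarray arr = np.PyArray_SimpleNewFromData(
        1, &size, np.NPY_FLOAT64, nanny.vec.data())
    Py_INCREF(nanny)  # the reference is stolen below
    PyArray_SetBaseObject(arr, <PyObject*>nanny)
    return arr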
The latest Cython version allows you to do this with minimal syntax, albeit slightly more overhead than the lower-level solutions suggested.
numpy_array = np.asarray(<np.int32_t[:10, :10]> my_pointer)
https://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html#coercion-to-numpy
This alone does not pass ownership.
Notably, a Cython array is generated with this call, via array_cwrapper.
This generates a cython.array, without allocating memory. The cython.array uses the stdlib.h malloc and free by default, so you are expected to use the default malloc as well, rather than any special CPython/NumPy allocators.
free is only called if ownership is set for this cython.array, which it is by default only if it allocates data. For our case, we can manually set it via:
my_cyarr.free_data = True
So to return a 1D array, it would be as simple as:
from cython.view cimport array as cvarray
# ...
cdef cvarray cvarr = <np.int32_t[:N]> data
cvarr.free_data = True
return np.asarray(cvarr)
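Putting those pieces together, a complete function might look like this (a minimal sketch; make_owned_array is just an illustrative name):

from libc.stdlib cimport malloc
from cython.view cimport array as cvarray
cimport numpy as np
import numpy as np

def make_owned_array(int N):
    cdef np.int32_t *data = <np.int32_t *>malloc(N * sizeof(np.int32_t))
    cdef cvarray cvarr = <np.int32_t[:N]> data  # wraps the pointer, no copy
    cvarr.free_data = True                      # free() the buffer on dealloc
    return np.asarray(cvarr)                    # the ndarray keeps cvarr alive as its base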
When processing large matrices (NxM with 1K <= N <= 20K and 10K <= M <= 200K), I often need to pass NumPy matrices to C++ through Cython to get the job done, and this works as expected, without copying.
However, there are times when I need to initiate and preprocess a matrix in C++ and pass it to NumPy (Python 3.6). Let's assume the matrices are linearized (so the size is N*M and it's a 1D matrix; col/row major doesn't matter here). Following the information here: exposing C-computed arrays in Python without data copies, and modifying it for C++ compatibility, I'm able to pass a C++ array.
The problem is that if I want to use a std::vector instead of initiating an array, I get a segmentation fault. For example, consider the following files:
fast.h
#include <iostream>
#include <vector>
using std::cout; using std::endl; using std::vector;
int* doit(int length);
fast.cpp
#include "fast.h"
int* doit(int length) {
// Something really heavy
cout << "C++: doing it fast " << endl;
vector<int> WhyNot;
// Heavy stuff - like reading a big file and preprocessing it
for(int i=0; i<length; ++i)
WhyNot.push_back(i); // heavy stuff
cout << "C++: did it really fast" << endl;
return &WhyNot[0]; // or WhyNot.data()
}
faster.pyx
cimport numpy as np
import numpy as np
from libc.stdlib cimport free
from cpython cimport PyObject, Py_INCREF

np.import_array()

cdef extern from "fast.h":
    int* doit(int length)

cdef class ArrayWrapper:
    cdef void* data_ptr
    cdef int size

    cdef set_data(self, int size, void* data_ptr):
        self.data_ptr = data_ptr
        self.size = size

    def __array__(self):
        print("Cython: __array__ called")
        cdef np.npy_intp shape[1]
        shape[0] = <np.npy_intp> self.size
        ndarray = np.PyArray_SimpleNewFromData(1, shape,
                                               np.NPY_INT, self.data_ptr)
        print("Cython: __array__ done")
        return ndarray

    def __dealloc__(self):
        print("Cython: __dealloc__ called")
        free(<void*>self.data_ptr)
        print("Cython: __dealloc__ done")

def faster(length):
    print("Cython: calling C++ function to do it")
    cdef int *array = doit(length)
    print("Cython: back from C++")

    cdef np.ndarray ndarray
    array_wrapper = ArrayWrapper()
    array_wrapper.set_data(length, <void*> array)
    print("Cython: array wrapper set")

    ndarray = np.array(array_wrapper, copy=False)
    ndarray.base = <PyObject*> array_wrapper
    Py_INCREF(array_wrapper)

    print("Cython: all done - returning")
    return ndarray
setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy

ext_modules = [Extension(
    "faster",
    ["faster.pyx", "fast.cpp"],
    language='c++',
    extra_compile_args=["-std=c++11"],
    extra_link_args=["-std=c++11"]
)]

setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules,
    include_dirs=[numpy.get_include()]
)
If you build this with
python setup.py build_ext --inplace
and run the Python 3.6 interpreter, entering the following gives a segfault after a couple of tries:
>>> from faster import faster
>>> a = faster(1000000)
Cython: calling C++ function to do it
C++: doing it fast
C++: did it really fast
Cython: back from C++
Cython: array wrapper set
Cython: __array__ called
Cython: __array__ done
Cython: all done - returning
>>> a = faster(1000000)
Cython: calling C++ function to do it
C++: doing it fast
C++: did it really fast
Cython: back from C++
Cython: array wrapper set
Cython: __array__ called
Cython: __array__ done
Cython: all done - returning
Cython: __dealloc__ called
Segmentation fault (core dumped)
A couple of things to note:
If you use an array instead of a vector (in fast.cpp), this works like a charm!
If you call faster(1000000) and put the result into something other than the variable a, this works.
If you enter a smaller number like faster(10), you get more detailed info like:
Cython: calling C++ function to do it
C++: doing it fast
C++: did it really fast
Cython: back from C++
Cython: array wrapper set
Cython: __array__ called
Cython: __array__ done
Cython: all done - returning
Cython: __dealloc__ called <--- Perhaps this happened too early or late?
*** Error in 'python': double free or corruption (fasttop): 0x0000000001365570 ***
======= Backtrace: =========
More info here ....
It's really puzzling why this doesn't happen with arrays, no matter what!
I make use of vectors a lot and would love to be able to use them in these scenarios.
I think @FlorianWeimer's answer provides a decent solution (allocate a vector and pass that into your C++ function), but it should be possible to return a vector from doit and avoid copies by using the move constructor.
from libcpp.vector cimport vector

cdef extern from "<utility>" namespace "std" nogil:
    T move[T](T) # don't worry that this doesn't quite match the c++ signature

cdef extern from "fast.h":
    vector[int] doit(int length)

# define ArrayWrapper as holding in a vector
cdef class ArrayWrapper:
    cdef vector[int] vec
    cdef Py_ssize_t shape[1]
    cdef Py_ssize_t strides[1]

    # constructor and destructor are fairly unimportant now since
    # vec will be destroyed automatically.

    cdef set_data(self, vector[int]& data):
        self.vec = move(data)
        # @ead suggests `self.vec.swap(data)` instead
        # to avoid having to wrap move

    # now implement the buffer protocol for the class
    # which makes it generally useful to anything that expects an array
    def __getbuffer__(self, Py_buffer *buffer, int flags):
        # relevant documentation http://cython.readthedocs.io/en/latest/src/userguide/buffer.html#a-matrix-class
        cdef Py_ssize_t itemsize = sizeof(self.vec[0])

        self.shape[0] = self.vec.size()
        self.strides[0] = sizeof(int)
        buffer.buf = <char *>&(self.vec[0])
        buffer.format = 'i'
        buffer.internal = NULL
        buffer.itemsize = itemsize
        buffer.len = self.vec.size() * itemsize # product(shape) * itemsize
        buffer.ndim = 1
        buffer.obj = self
        buffer.readonly = 0
        buffer.shape = self.shape
        buffer.strides = self.strides
        buffer.suboffsets = NULL
You should then be able to use it as:
cdef vector[int] array = doit(length)
cdef ArrayWrapper w = ArrayWrapper()
w.set_data(array) # "array" itself is invalid from here on
numpy_array = np.asarray(w)
Edit: Cython isn't hugely good with C++ templates - it insists on writing std::move<vector<int>>(...) rather than std::move(...) then letting C++ deduce the types. This sometimes causes problems with std::move. If you're having issues with it then the best solution is usually to tell Cython about only the overloads you want:
cdef extern from "<utility>" namespace "std" nogil:
vector[int] move(vector[int])
When you return from doit, the WhyNot object goes out of scope, and the array elements are deallocated. This means that &WhyNot[0] is no longer a valid pointer. You need to store the WhyNot object somewhere else, probably in a place provided by the caller.
One way to do this is to split doit into three functions: doit_allocate, which allocates the vector and returns a pointer to it; doit as before (but with an argument which receives a pointer to the preallocated vector); and doit_free, which deallocates the vector.
Something like this:
vector<int> *
doit_allocate()
{
    return new vector<int>;
}

int *
doit(vector<int> *WhyNot, int length)
{
    // Something really heavy
    cout << "C++: doing it fast " << endl;

    // Heavy stuff - like reading a big file and preprocessing it
    for(int i=0; i<length; ++i)
        WhyNot->push_back(i); // heavy stuff

    cout << "C++: did it really fast" << endl;
    return WhyNot->data(); // a pointer to the first element, which stays valid
}

void
doit_free(vector<int> *WhyNot)
{
    delete WhyNot;
}
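On the Cython side this could then be wired up roughly as follows (a sketch under the assumption that all three functions are declared in fast.h; the VectorOwner class is illustrative):

from libcpp.vector cimport vector

cdef extern from "fast.h":
    vector[int] *doit_allocate()
    int *doit(vector[int] *WhyNot, int length)
    void doit_free(vector[int] *WhyNot)

cdef class VectorOwner:
    cdef vector[int] *vec

    def __dealloc__(self):
        if self.vec != NULL:
            doit_free(self.vec)  # frees the vector when the owner dies

def faster(int length):
    cdef VectorOwner owner = VectorOwner()
    owner.vec = doit_allocate()
    cdef int *data = doit(owner.vec, length)
    # ... wrap `data` as in the question (ArrayWrapper/PyArray_SimpleNewFromData),
    # keeping `owner` alive as the resulting array's base object ...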
I'm trying to use partial_sort from the <algorithm> library within Cython, but I just cannot find the correct way to properly extern it.
reference
Here's my failed attempt:
%%cython -f
cdef extern from "<algorithm>" namespace "std":
void partial_sort[RandomAccessIterator](RandomAccessIterator first, RandomAccessIterator middle, RandomAccessIterator last)
void partial_sort[RandomAccessIterator, Compare](RandomAccessIterator first, RandomAccessIterator middle, RandomAccessIterator last, Compare comp)
Error message when using Cython 0.19.1:
Error compiling Cython file:
------------------------------------------------------------
...
cdef extern from "<algorithm>" namespace "std":
cdef cppclass RandomAccessIterator:
cppclass Compare
void partial_sort[RandomAccessIterator]
^
------------------------------------------------------------
/Users/richizy/.ipython/cython/_cython_magic_cf4fbe14563c3de19c8c3af3253a182e.pyx:5:46: Not allowed in a constant expression
Error compiling Cython file:
------------------------------------------------------------
...
cdef extern from "<algorithm>" namespace "std":
cdef cppclass RandomAccessIterator:
cppclass Compare
void partial_sort[RandomAccessIterator]
^
------------------------------------------------------------
/Users/richizy/.ipython/cython/_cython_magic_cf4fbe14563c3de19c8c3af3253a182e.pyx:5:46: Array dimension not integer
Error compiling Cython file:
------------------------------------------------------------
...
cdef extern from "<algorithm>" namespace "std":
cdef cppclass RandomAccessIterator:
cppclass Compare
void partial_sort[RandomAccessIterator]
^
------------------------------------------------------------
/Users/richizy/.ipython/cython/_cython_magic_cf4fbe14563c3de19c8c3af3253a182e.pyx:5:25: Array element type 'void' is incomplete
Error message when using Cython 0.20.1:
CompileError: command 'gcc' failed with exit status 1
warning: .ipython/cython/_cython_magic_121a91d1fdd64d85c4b01e6540fd86d6.pyx:4:52: Function signature does not match previous declaration
Edit: As of 2/22/14, for Cython 0.20.1
https://groups.google.com/forum/#!topic/cython-users/H4UEM6IlvpM
Correct, Cython does not support default template specializations (for
functions or classes). Nor does it support non-typename template
parameters (without some hacking). Both are missing features that
we'd like to get to someday.
Robert
It seems that Cython doesn't work well with template specialization. The following code works for me (Cython version 0.20, Python 2.7.5, g++ (SUSE Linux) 4.8.1)
from libcpp.vector cimport vector

cdef extern from "<algorithm>" namespace "std":
    void partial_sort[RandomAccessIterator](RandomAccessIterator first, RandomAccessIterator middle, RandomAccessIterator last)
    # void partial_sort[RandomAccessIterator, Compare](RandomAccessIterator first, RandomAccessIterator middle, RandomAccessIterator last, Compare comp)

cpdef bla():
    cdef vector[int] v
    cdef int i = 0
    cdef list res = []

    v.push_back(4)
    v.push_back(6)
    v.push_back(2)
    v.push_back(5)

    partial_sort[vector[int].iterator](v.begin(), v.end(), v.end())

    for i in v:
        res.append(i)
    return res
Then
>>> import bla
>>> bla.bla()
[2, 4, 5, 6]
However, uncommenting the line breaks the code with
bla.pyx:15:16: Wrong number of template arguments: expected 2, got 1
Here is a workaround: you declare the different specializations of the template function under two different names:
cdef extern from "<algorithm>" namespace "std":
void partial_sort_1 "std::partial_sort"[RandomAccessIterator](RandomAccessIterator first, RandomAccessIterator middle, RandomAccessIterator last)
void partial_sort_2 "std::partial_sort"[RandomAccessIterator, Compare](RandomAccessIterator first, RandomAccessIterator middle, RandomAccessIterator last, Compare comp)
And then you use the correct one as in:
partial_sort_1[vector[int].iterator](v.begin(), v.end(), v.end())
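For example, to place just the three smallest elements of the vector from the earlier snippet at the front (the middle iterator is what makes it a partial sort; the earlier call passed v.end() as middle, which amounts to a full sort):

partial_sort_1[vector[int].iterator](v.begin(), v.begin() + 3, v.end())
# now v[0] <= v[1] <= v[2] are the three smallest elements;
# the order of the remaining elements is unspecified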
Note: I heard from Cython-Users that this is a known problem and that template partial specialization in Cython has been on their wish list for a while.
I'm new to Cython and I'm a bit lost when I need to define a type for my numpy array when wrapping an existing C library (foolib). This library uses a defined type for the int size, depending on the platform and on whether it has been built in 32 or 64 bits.
Here is one excerpt of foolib.h:
#include "footypes.h"
...
int foo_zone_read(int fn, int B, int Z, char *zonename, foosize_t *size);
Here is what is found in footypes.h:
# if FOO_BUILD_64BIT
# define FOO_SIZEOF_SIZE 64
typedef long foosize_t;
# else
# define FOO_SIZEOF_SIZE 32
typedef int foosize_t;
# endif
I also have the Cython wrapper. Here is the Cython header foolib.pxd:
cdef extern from "foolib.h":
ctypedef int foosize_t
...
int foo_zone_read(int fn, int B, int Z, char *zonename, foosize_t *size)
And the actual cython wrapper bar.pyx:
import numpy
cimport numpy
cimport foolib
...
cdef class pyFOO(object):
    def __init__(self, filename):
        pass
    cpdef zone_read(self, int B, int Z):
        cdef char zonename[MAXNAMELENGTH]
        cdef int *zsize
        cdef int ztype_read
        cdef numpy.ndarray[foosize_t, ndim=1] azsize
On the last line I get a Cython error when I try to compile:

cdef numpy.ndarray[foosize_t, ndim=1] azsize
                               ^
------------------------------------------------------------
bar.pyx:285:31: Invalid type.
I tried to read the numpy C-API reference but I'm a little bit lost. All I want is to give azsize the correct type. I know that on Linux it would be dtype=int32 or dtype=int64, depending on the compilation of foolib. If I had created the array with numpy.ones and an explicit dtype it would have worked, but then the type is hardcoded and I don't want that.
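For reference, this is the hardcoded version that does work, but ties the dtype to one particular build of foolib:

azsize = numpy.ones(2, dtype=numpy.int64)  # or numpy.int32 for a 32-bit build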
So my question is: how do I get the correct type by passing the information somehow between footypes.h and the numpy array?
My secondary question is: how do I specify the size of my numpy array? (If I do it in Python syntax it's easy, but with the numpy C-API it's not clear to me.) I know that it can be an array of shape (2,) or (6,) depending on the runtime conditions.
Thanks in advance.
I am currently trying to optimize my Python program and got started with Cython in order to reduce the function-calling overhead and perhaps later on include optimized C library functions.
So I ran into the first problem:
I am using composition in my code to create a larger class. So far I have gotten one of my Python classes converted to Cython (which was difficult enough). Here's the code:
import numpy as np
cimport numpy as np

ctypedef np.float64_t dtype_t
ctypedef np.complex128_t cplxtype_t
ctypedef Py_ssize_t index_t

cdef class bendingForcesClass(object):
    cdef dtype_t bendingRigidity
    cdef np.ndarray matrixPrefactor
    cdef np.ndarray bendingForces

    def __init__(self, dtype_t bendingRigidity, np.ndarray[dtype_t, ndim=2] waveNumbersNorm):
        self.bendingRigidity = bendingRigidity
        self.matrixPrefactor = -self.bendingRigidity * waveNumbersNorm ** 2

    cpdef np.ndarray calculate(self, np.ndarray membraneHeight):
        cdef np.ndarray bendingForces
        bendingForces = self.matrixPrefactor * membraneHeight
        return bendingForces
From my composed Python/Cython class I am calling the class-method calculate, so that in my composed class I have the following (reduced) code:
from bendingForcesClass import bendingForcesClass

cdef class membraneClass(object):
    def __init__(self, systemSideLength, lowerCutoffLength, bendingRigidity):
        self.bendingForces = bendingForcesClass(bendingRigidity, self.waveNumbers.norm)

    def calculateForces(self, heightR):
        return self.bendingForces.calculate(heightR)
I have found out that cpdef makes the method/functions callable from both Python and Cython, which is great and works, as long as I don't try to define the type of self.bendingForces beforehand - which, according to the documentation (Early Binding For Speed), is necessary in order to remove the function-calling overhead. I have tried the following, which does not work:
from bendingForcesClass import bendingForcesClass
from bendingForcesClass cimport bendingForcesClass

cdef class membraneClass(object):
    cdef bendingForcesClass bendingForces

    def __init__(self, systemSideLength, lowerCutoffLength, bendingRigidity):
        self.bendingForces = bendingForcesClass(bendingRigidity, self.waveNumbers.norm)

    def calculateForces(self, heightR):
        return self.bendingForces.calculate(heightR)
With this I get this error, when trying to build membraneClass.pyx with Cython:
membraneClass.pyx:18:6: 'bendingForcesClass' is not a type identifier
building 'membraneClass' extension
Note that the declarations are in two separate files, which makes this more difficult.
So how do I get this done? I would be very thankful if someone could give me a pointer, as I can't find any information about this, besides the link given above.
Thanks and best regards!
Disclaimer: This question is very old and I am not sure the current solution would work for 2011 Cython code.
In order to cimport an extension class (cdef class) from another file you need to provide a .pxd file (also known as a definitions file) declaring all C classes, attributes and methods. See Sharing Extension Types in the documentation for reference.
For your example, you would need a file bendingForcesClass.pxd, which declares the class you want to share, as well as all cimports, module level variables, typedefs, etc.:
bendingForcesClass.pxd
# cimports
cimport numpy as np

# typedefs you want to share
ctypedef np.float64_t dtype_t
ctypedef np.complex128_t cplxtype_t
ctypedef Py_ssize_t index_t

cdef class bendingForcesClass:
    # declare C attributes
    cdef dtype_t bendingRigidity
    cdef np.ndarray matrixPrefactor
    cdef np.ndarray bendingForces

    # declare C functions
    cpdef np.ndarray calculate(self, np.ndarray membraneHeight)

    # note that __init__ is missing, it is not a C (cdef) function
All imports, variables, and attributes that are now declared in the .pxd file can (and have to) be removed from the .pyx file:
bendingForcesClass.pyx
import numpy as np

cdef class bendingForcesClass(object):

    def __init__(self, dtype_t bendingRigidity, np.ndarray[dtype_t, ndim=2] waveNumbersNorm):
        self.bendingRigidity = bendingRigidity
        self.matrixPrefactor = -self.bendingRigidity * waveNumbersNorm ** 2

    cpdef np.ndarray calculate(self, np.ndarray membraneHeight):
        cdef np.ndarray bendingForces
        bendingForces = self.matrixPrefactor * membraneHeight
        return bendingForces
Now your cdef class bendingForcesClass can be cimported from other Cython modules, making it a valid type identifier, which should solve your problem.
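That is, with the .pxd file in place, the cimport from your second attempt becomes valid (a sketch of the relevant lines of membraneClass.pyx):

from bendingForcesClass cimport bendingForcesClass

cdef class membraneClass(object):
    cdef bendingForcesClass bendingForces  # now a known type identifier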
You need to use a declaration ".pxd" file and cimport. (Essentially, cimport happens at compile time, while import happens at run time, so Cython can't make use of anything imported.)
Create "utils.pxd":
cdef class MyClass:
    cdef readonly int field
    cdef void go(self, int i)
"utils.pyx" now reads
cdef class MyClass:
    def __init__(self, field):
        self.field = field

    cdef void go(self, int i):
        self.field = i
All declarations which had been in the .pyx file go into the .pxd file.
Then in mymodule.pyx
from utils import MyClass
from utils cimport MyClass
# other code follows...
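For instance, mymodule.pyx can then call the cdef method directly, without Python-level dispatch (a small sketch using the names from utils above):

def run():
    cdef MyClass obj = MyClass(1)
    obj.go(2)           # a direct C call, no Python attribute lookup
    return obj.field    # readable from Python thanks to `readonly`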
Extended answer from here:
Cython: using imported class in a type declaration
These are probably not the source of the error, but just to narrow down the problem, you might try to change the following:
Could it be that you are using bendingForces as the name of the variable here:
cpdef np.ndarray calculate(self, np.ndarray membraneHeight):
    cdef np.ndarray bendingForces
    bendingForces = self.matrixPrefactor * membraneHeight
    return bendingForces
and also the name of the member object here:
cdef class membraneClass(object):
    cdef bendingForcesClass bendingForces
Also, bendingForcesClass is the name of the module as well as the class. Finally, how about making a ctypedef from the class bendingForcesClass?
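Cython also lets you give the class a distinct name at cimport time, which achieves the same disambiguation between the module and the class (a sketch; the alias is just an example):

from bendingForcesClass cimport bendingForcesClass as BendingForces

cdef class membraneClass(object):
    cdef BendingForces bendingForces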