As the title suggests, I'm trying to create a bunch of attributes but the code is getting repetitive and messy. I want to use the closure argument to make the code more compact.
According to the C API reference, the closure is a function pointer that provides additional information for getters/setters. I have not been able to find an example of it in use.
This is how I am currently using it:
static void closure_1() {};
static void closure_2() {};
...
static PyObject *
FOO_getter(FOO* self, void *closure) {
if (closure == &closure_1) {
return self->bar_1;
} else if (closure == &closure_2) {
return self->bar_2;
    }
    PyErr_SetString(PyExc_AttributeError, "unknown closure");
    return NULL;
}
static int
FOO_setter(FOO* self, PyObject *value, void *closure) {
if (closure == &closure_1) {
if (somehow value is invalid) {
PyErr_SetString(PyExc_ValueError, "invalid value for bar_1.");
return -1;
}
    } else if (closure == &closure_2) {
if (somehow value is invalid) {
PyErr_SetString(PyExc_ValueError, "invalid value for bar_2.");
return -1;
}
}
return 0;
}
static PyGetSetDef FOO_getsetters[] = {
{"bar_1", (getter) FOO_getter, (setter) FOO_setter, "bar_1 attribute", closure_1},
{"bar_2", (getter) FOO_getter, (setter) FOO_setter, "bar_2 attribute", closure_2},
{NULL} /* Sentinel */
};
...
It works the way I want it to, but it looks more like a hack than something "pythonic". Is there a better way to handle this, e.g. by calling the closure in some way?
I guess this "closure" is used to pass extra context to FOO_getter; it should be something that simplifies accessing members of FOO. The documentation is likely wrong: it should say "optional pointer", not "optional function pointer".
Consider passing the offsets of the members. The offset of a struct member can be obtained with the standard offsetof macro defined in stddef.h. It is a small unsigned integer that fits into a void*.
static PyGetSetDef FOO_getsetters[] = {
{"bar_1", (getter) FOO_getter, (setter) FOO_setter, "bar_1 attribute", (void*)offsetof(FOO, bar_1)},
{"bar_2", (getter) FOO_getter, (setter) FOO_setter, "bar_2 attribute", (void*)offsetof(FOO, bar_2)},
{NULL} /* Sentinel */
};
Now the getter could be:
static PyObject *
FOO_getter(FOO* self, void *closure) {
    // pointer to the location where the FOO member is stored
    char *memb_ptr = (char*)self + (size_t)closure;
    // cast to `PyObject**` because `memb_ptr` points to a location where a pointer to `PyObject` is stored
    PyObject *value = *(PyObject**)memb_ptr;
    // a getter must return a new reference
    Py_INCREF(value);
    return value;
}
Use a similar scheme for the setter.
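For completeness, here is a minimal sketch of what the matching setter could look like under the same offset scheme; the per-attribute validation is only hinted at with a comment, as in the question:
static int
FOO_setter(FOO *self, PyObject *value, void *closure) {
    // location where the FOO member (a PyObject *) is stored
    PyObject **memb_ptr = (PyObject **)((char *)self + (size_t)closure);
    if (value == NULL) {
        PyErr_SetString(PyExc_TypeError, "cannot delete the attribute");
        return -1;
    }
    /* ... attribute-specific validation of `value` would go here ... */
    Py_INCREF(value);       // take a reference to the new value
    Py_XDECREF(*memb_ptr);  // drop the reference to the old value
    *memb_ptr = value;
    return 0;
}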
Despite the documentation, I'm assuming the closure can be any pointer you want. So how about passing an "object", seeing as C doesn't support closures (short of literally generating functions at run-time).
In an object, we can store the offset of the member in FOO, and a pointer to an attribute-specific validator.
typedef struct Attribute Attribute;  /* forward declaration so the typedef below can name it */
typedef int (*Validator)(FOO *, const Attribute *, void *);
struct Attribute {
    const char *name;
    size_t offset;
    Validator validator;
};
static PyObject **resolve_offset(FOO *self, const Attribute *attr) {
return (PyObject **)( ( (char *)self ) + attr->offset );
}
static PyObject *FOO_getter(FOO *self, void *_attr) {
    const Attribute *attr = (const Attribute *)_attr;
    PyObject *value = *resolve_offset(self, attr);
    Py_INCREF(value);  /* a getter must return a new reference */
    return value;
}
static int FOO_setter(FOO *self, PyObject *val, void *_attr) {
    const Attribute *attr = (const Attribute *)_attr;
    if (val == NULL) {
        PyErr_SetString(PyExc_TypeError, "cannot delete the attribute");
        return -1;
    }
    if (attr->validator(self, attr, val)) {
        PyObject **slot = resolve_offset(self, attr);
        Py_INCREF(val);     /* keep a reference to the new value */
        Py_XDECREF(*slot);  /* release the reference to the old value */
        *slot = val;
        return 0;
    } else {
        // Building the string to include attr->name is left to you.
        PyErr_SetString(PyExc_ValueError, "invalid value.");
        return -1;
    }
}
static int FOO_bar_1_validator(FOO *self, const Attribute *attr, void *val) { ... }
static int FOO_bar_2_validator(FOO *self, const Attribute *attr, void *val) { ... }
#define ATTRIBUTE(name) \
static Attribute FOO_ ## name ## attr = { \
#name, \
offsetof(FOO, name), \
FOO_ ## name ## _validator \
};
ATTRIBUTE(bar_1);
ATTRIBUTE(bar_2);
#define PY_ATTR_DEF(name) { \
#name, \
(getter)FOO_getter, \
(setter)FOO_setter, \
#name " attribute", \
&(FOO_ ## name ## attr) \
}
static PyGetSetDef FOO_getsetters[] = {
PY_ATTR_DEF(bar_1),
PY_ATTR_DEF(bar_2),
{ NULL }
};
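As an illustration, a hypothetical implementation of one of the validators declared above might look like the following; the "must be a Python int" rule is invented for the example (and assumes Python 3's int, i.e. PyLong), it is not something taken from the question:
static int FOO_bar_1_validator(FOO *self, const Attribute *attr, void *val) {
    (void)self;  /* unused in this simple check */
    (void)attr;
    /* return non-zero if the proposed value is acceptable */
    return PyLong_Check((PyObject *)val);
}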
I originally wrote:
resolve_offset surely relies on undefined behaviour, but it should work fine. The alternative would be to have three functions in our attribute object (get, validate, set) instead of one, but that defeats the point of the question.
But @tstanisl points out that it looks like it isn't UB. Awesome!
I'm writing a Python + C module, and I'm trying to pass a pointer to a certain struct I need. I'm using PyCapsule to encapsulate the pointer, but I'm having problems when retrieving the pointer from it.
The C functions involved look like this:
static PyObject *
spam_new (PyObject *self, PyObject *args)
{
unsigned int number;
struct spam *pointer;
if (!PyArg_ParseTuple(args, "I", &number)) {
return NULL;
}
pointer = (struct spam*) malloc(sizeof (struct spam));
if (pointer == NULL) {
    return NULL;
}
spam_init(*pointer, number);
return PyCapsule_New((void*) pointer, "spam", &spam_destroy);
}
static PyObject *
spam_get (PyObject *self, PyObject *args)
{
PyObject *capsule, *result;
void *raw_pointer;
struct spam *pointer;
unsigned long long int number;
if (!PyArg_ParseTuple(args, "OK", capsule, &number)) {
return NULL;
}
printf("[DEBUG] Number: %llu\n", number);
printf("[DEBUG] Capsule pointer: %p\n", capsule);
raw_pointer = PyCapsule_GetPointer(capsule, "spam");
if (raw_pointer == NULL) {
return NULL;
}
pointer = (struct spam*) raw_pointer;
...
}
They are both declared with METH_VARARGS.
In Python, custom.new(1) returns a capsule as expected, which I store in a variable c.
When calling custom.get(c, 14), Python crashes at the PyCapsule_GetPointer call. Both debug prints show the same value (14), meaning that PyArg_ParseTuple is not receiving the capsule passed as a parameter.
For security reasons passing the pointer as a long is not an option.
Thanks.
The Python documentation states that the "O" format unit produces a pointer to a PyObject (a PyObject*), not a PyObject.
Therefore, when using PyArg_ParseTuple to receive a PyObject*, you have to pass the address of a PyObject* (a PyObject**).
The provided code was fixed by adding a & to the capsule in the line
if (!PyArg_ParseTuple(args, "OK", &capsule, &number)) {
Fixed thanks to Davis Herring's comment.
I want to check if an object is an instance of a certain class. In Python I can do this with isinstance(). In C/C++ I found a function named PyObject_IsInstance, but it does not seem to work like isinstance.
In detail (also shown in the sample code below):
In C++ I define my custom type My: the type object is MyType and the instance struct is MyObject.
I add MyType to the exported module under the name My.
In Python, I create a new instance my = My(); isinstance(my, My) returns True.
In C++, however, PyObject_IsInstance(my, (PyObject*)&MyType) returns 0, meaning my is supposedly not an instance of the class defined by MyType.
Full C++ code:
#define PY_SSIZE_T_CLEAN
#include <python3.6/Python.h>
#include <python3.6/structmember.h>
#include <stddef.h>
typedef struct {
PyObject_HEAD
int num;
} MyObject;
static PyTypeObject MyType = []{
PyTypeObject ret = {
PyVarObject_HEAD_INIT(NULL, 0)
};
ret.tp_name = "cpp.My";
ret.tp_doc = NULL;
ret.tp_basicsize = sizeof(MyObject);
ret.tp_itemsize = 0;
ret.tp_flags = Py_TPFLAGS_DEFAULT;
ret.tp_new = PyType_GenericNew;
return ret;
}();
// check if obj is an instance of MyType
static PyObject *Py_fn_checkMy(PyObject *obj) {
if (PyObject_IsInstance(obj, (PyObject *)&MyType)) Py_RETURN_TRUE;
else Py_RETURN_FALSE;
}
static PyMethodDef modmethodsdef[] = {
{ "checkMy", (PyCFunction)Py_fn_checkMy, METH_VARARGS, NULL },
{ NULL }
};
static PyModuleDef moddef = []{
PyModuleDef ret = {
PyModuleDef_HEAD_INIT
};
ret.m_name = "cpp";
ret.m_doc = NULL;
ret.m_size = -1;
return ret;
}();
PyMODINIT_FUNC
PyInit_cpp(void)
{
PyObject *mod;
if (PyType_Ready(&MyType) < 0)
return NULL;
mod = PyModule_Create(&moddef);
if (mod == NULL)
return NULL;
Py_INCREF(&MyType);
PyModule_AddObject(mod, "My", (PyObject *)&MyType);
PyModule_AddFunctions(mod, modmethodsdef);
return mod;
}
Compile this into cpp.so, and test it in Python:
>>> import cpp
>>> isinstance(cpp.My(), cpp.My)
True
>>> cpp.checkMy(cpp.My())
False
METH_VARARGS
This is the typical calling convention, where the methods have the type PyCFunction. The function expects two PyObject* values. The first one is the self object for methods; for module functions, it is the module object. The second parameter (often called args) is a tuple object representing all arguments. This parameter is typically processed using PyArg_ParseTuple() or PyArg_UnpackTuple().
The function signature of Py_fn_checkMy does not match this. It should take two arguments. The first is the module, and this is what you are checking against MyType. The second argument (which you don't actually accept) is a tuple containing the object you passed. You should extract the argument from the tuple and check the type of this.
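For instance, a minimal sketch of the METH_VARARGS variant, keeping the names from the question, could unpack the tuple like this:
static PyObject *Py_fn_checkMy(PyObject *self, PyObject *args) {
    PyObject *obj;
    /* unpack the single positional argument from the args tuple */
    if (!PyArg_ParseTuple(args, "O", &obj))
        return NULL;
    if (PyObject_IsInstance(obj, (PyObject *)&MyType)) Py_RETURN_TRUE;
    else Py_RETURN_FALSE;
}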
That said, you'd probably be better off using METH_O to receive the single argument directly instead of extracting it from a tuple:
static PyObject *Py_fn_checkMy(PyObject *self, PyObject *obj) {
if (PyObject_IsInstance(obj, (PyObject *)&MyType)) Py_RETURN_TRUE;
else Py_RETURN_FALSE;
}
static PyMethodDef modmethodsdef[] = {
{ "checkMy", (PyCFunction)Py_fn_checkMy, METH_O, NULL },
{ NULL }
};
I am writing a simple extension module using the C API. Here is a test example:
#include <Python.h>
#include <structmember.h>
typedef struct {
PyObject_HEAD
int i;
} MyObject;
static PyMemberDef my_members[] = {
{"i", T_INT, offsetof(MyObject, i), READONLY, "Some integer"},
{NULL} /* Sentinel */
};
static int MyType_init(MyObject *self, PyObject *args, PyObject *kwds)
{
char *keywords[] = {"i", NULL};
int i = 0;
if(!PyArg_ParseTupleAndKeywords(args, kwds, "|i", keywords, &i)) {
return -1;
}
self->i = i;
return 0;
}
static PyTypeObject MyType = {
PyVarObject_HEAD_INIT(NULL, 0)
.tp_name = "_my.MyType",
.tp_doc = "Pointless placeholder class",
.tp_basicsize = sizeof(MyObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
.tp_init = (initproc)MyType_init,
.tp_members = my_members,
};
static PyModuleDef my_module = {
PyModuleDef_HEAD_INIT,
.m_name = "_my",
.m_doc = "My module is undocumented!",
.m_size = -1,
};
/* Module-level exception object, referenced below */
static PyObject *PyExc_MyException = NULL;
/* Global entry point */
PyMODINIT_FUNC PyInit__my(void)
{
PyObject *m = NULL;
PyExc_MyException = PyErr_NewExceptionWithDoc("_my.MyException",
"Indicates that something went horribly wrong.",
NULL, NULL);
if(PyExc_MyException == NULL) {
return NULL;
}
if(PyType_Ready(&MyType) < 0) {
goto err;
}
if((m = PyModule_Create(&my_module)) == NULL) {
goto err;
}
Py_INCREF(&MyType);
PyModule_AddObject(m, "MyType", (PyObject *)&MyType);
PyModule_AddObject(m, "MyException", (PyObject *)PyExc_MyException);
return m;
err:
Py_CLEAR(PyExc_MyException);
return NULL;
}
This module contains a class MyType and an exception MyException. The purpose of this module is to provide the base functionality for some additional code written in Python. I would like to be able to do something like:
my.py
from _my import *
class MyType(MyType):
def __init__(self, i=0):
raise MyException
Obviously this example is highly contrived. I just want to illustrate how I would like to use the names in my extension. Right now the import is working fine according to this rule:
If __all__ is not defined, the set of public names includes all names found in the module’s namespace which do not begin with an underscore character ('_').
However, I would like to have a finer level of control. So far, all I have been able to come up with is manually adding an __all__ attribute to the module:
PyObject *all = NULL;
...
all = Py_BuildValue("[s, s]", "MyType", "MyException");
if(all == NULL) {
goto err;
}
PyModule_AddObject(m, "__all__", all);
...
err:
...
Py_CLEAR(all);
This seems a bit clunky. Is there a function in the C API for defining an internal equivalent to __all__ for a module?
Part of the reason that I think there may be an internal equivalent is that __all__ is after all a dunder attribute.
namespace test_py
{
class Event
{
public:
enum Type { BEGIN = 0, RESULT, END };
Type get_type( ) const { return m_type; }
protected:
Event( ) { }
~Event( ) { }
Type m_type;
};
class EventBegin : public Event
{
public:
EventBegin( ) { m_type = Event::BEGIN; }
~EventBegin( ) {}
};
class EventResult : public Event
{
public:
EventResult( int result ) { m_type = Event::RESULT; m_result = result; }
~EventResult( ) {}
int get_result( ) { return m_result; }
protected:
int m_result;
};
class EventEnd : public Event
{
public:
EventEnd( ) { m_type = Event::END; }
~EventEnd( ) {}
};
class EventListener
{
public:
virtual void on_event( const Event& event ) = 0;
};
struct EventListenerWrap: EventListener, py::wrapper< EventListener >
{
void
on_event( const Event& event )
{
this->get_override( "on_event" )( event );
}
};
BOOST_PYTHON_MODULE( test_py )
{
{
py::scope outer = py::class_< Event, boost::noncopyable >( "Event", py::no_init )
.add_property( "event_type", &Event::get_type );
py::enum_< Event::Type >( "EventType" )
.value( "BEGIN", Event::BEGIN )
.value( "RESULT", Event::RESULT )
.value( "END", Event::END )
.export_values( );
}
{
py::class_< EventBegin, py::bases< Event > >( "EventBegin" );
}
{
py::class_< EventResult, py::bases< Event > >( "EventResult", py::no_init )
.def( py::init< int >( ( py::arg( "result" ) ) ) )
.add_property( "result", &EventResult::get_result );
}
{
py::class_< EventEnd, py::bases< Event > >( "EventEnd" );
}
{
py::class_< EventListenerWrap, boost::noncopyable >( "EventListener", py::no_init )
.def( "on_event", py::pure_virtual( &EventListener::on_event ) );
}
}
}
I have a protected constructor and destructor in the Event base class and cannot change that.
In Python 2.7 I need to derive from the EventListener class and pass a pointer back to the C++ code.
During compilation I get an error like this:
/boost/python/detail/destroy.hpp: In instantiation of ‘static void boost::python::detail::value_destroyer<false>::execute(const volatile T*) [with T = test_py::Event]’:
/boost/python/detail/destroy.hpp:95:36: required from ‘void boost::python::detail::destroy_referent_impl(void*, T& (*)()) [with T = const test_py::Event]’
/boost/python/detail/destroy.hpp:101:39: required from ‘void boost::python::detail::destroy_referent(void*, T (*)()) [with T = const test_py::Event&]’
/boost/python/converter/rvalue_from_python_data.hpp:135:71: required from ‘boost::python::converter::rvalue_from_python_data<T>::~rvalue_from_python_data() [with T = const test_py::Event&]’
/boost/python/converter/arg_from_python.hpp:107:8: required from ‘PyObject* boost::python::detail::caller_arity<2u>::impl<F, Policies, Sig>::operator()(PyObject*, PyObject*) [with F = void (test_py::EventListener::*)(const test_py::Event&); Policies = boost::python::default_call_policies; Sig = boost::mpl::vector3<void, test_py::EventListener&, const test_py::Event&>; PyObject = _object]’
/boost/python/object/py_function.hpp:38:33: required from ‘PyObject* boost::python::objects::caller_py_function_impl<Caller>::operator()(PyObject*, PyObject*) [with Caller = boost::python::detail::caller<void (test_py::EventListener::*)(const test_py::Event&), boost::python::default_call_policies, boost::mpl::vector3<void, test_py::EventListener&, const test_py::Event&> >; PyObject = _object]’
EventListener.cpp:193:1: required from here
EventListener.cpp:18:5: error: ‘test_py::Event::~Event()’ is protected
~Event( ) { }
^
In file included from /boost/python/converter/rvalue_from_python_data.hpp:10:0,
from /boost/python/converter/registry.hpp:9,
from /boost/python/converter/registered.hpp:8,
from /boost/python/object/make_instance.hpp:10,
from /boost/python/object/make_ptr_instance.hpp:8,
from /boost/python/to_python_indirect.hpp:11,
from /boost/python/converter/arg_to_python.hpp:10,
from /boost/python/call.hpp:15,
from /boost/python/object_core.hpp:14,
from /boost/python/object/class.hpp:9,
from /boost/python/class.hpp:13,
from ../../defs.hpp:6,
from ../defs.hpp:3,
from defs.hpp:3,
from EventListener.cpp:1:
/boost/python/detail/destroy.hpp:33:9: error: within this context
p->~T();
^
py::scope outer = py::class_< Event, boost::noncopyable >( "Event", py::no_init )
.add_property( "event_type", &Event::get_type );
At first glance, you have a problem here: py::class_<Event, ...> only knows how to bind Event itself, which has the protected destructor.
You're going to have to wrap Event in a class that exposes the destructor publicly.
If that's not possible (because you can't change the definition of EventBegin, EventEnd, etc., for example), then you're going to have to write a polymorphic container that holds on to the derived classes through its own internal interface, internally treating the events as non-polymorphic objects.
This is not as difficult as it sounds:
#include <memory>
namespace test_py
{
class Event
{
public:
enum Type { BEGIN = 0, RESULT, END };
Type get_type( ) const { return m_type; }
protected:
Event( ) { }
~Event( ) { }
Type m_type;
};
class EventBegin : public Event
{
public:
EventBegin( ) { m_type = Event::BEGIN; }
~EventBegin( ) {}
};
class EventResult : public Event
{
public:
EventResult( int result ) { m_type = Event::RESULT; m_result = result; }
~EventResult( ) {}
int get_result( ) { return m_result; }
protected:
int m_result;
};
class EventEnd : public Event
{
public:
EventEnd( ) { m_type = Event::END; }
~EventEnd( ) {}
};
class EventProxy
{
// define an interface for turning a non-polymorphic event
// into a polymorphic one
struct concept
{
virtual const Event* as_event() const = 0;
virtual ~concept() = default;
};
// define a model to support the polymorphic interface for a
// non-polymorphic concrete object
template<class T> struct model : concept
{
template<class...Args> model(Args&&... args)
: _event(std::forward<Args>(args)...)
{}
const Event* as_event() const override {
return &_event;
}
T _event;
};
// construct the model that contains any Event
template<class T>
EventProxy(std::shared_ptr<T> ptr)
: _impl(std::move(ptr))
{}
public:
// T should be derived from Event...
template<class T, class...Args>
static EventProxy create(Args&&... args)
{
return EventProxy(std::make_shared<model<T>>(std::forward<Args>(args)...));
}
// simply return the address of the internal non-polymorphic event
const Event* as_event() const {
return _impl->as_event();
}
// return a shared pointer that points to the internal Event BUT
// defers lifetime ownership to our internal shared_ptr to
// our model. This means we never invoke the polymorphic
// destructor of Event through the protected interface.
std::shared_ptr<const Event> as_shared_event() const {
return std::shared_ptr<const Event>(_impl, _impl->as_event());
}
private:
// lifetime of the proxy is owned by this shared_ptr.
std::shared_ptr<concept> _impl;
};
}
// a quick test.
auto main() -> int
{
auto ep = test_py::EventProxy::create<test_py::EventBegin>();
const test_py::Event* p = ep.as_event();
std::shared_ptr<const test_py::Event> sp = ep.as_shared_event();
}
When exposing a function, Boost.Python will generate converters for each of the arguments. For arguments with types T and T&, the resulting Python converter will hold a copy of the object, and hence needs access to the copy-constructor and destructor. The rationale for this behavior is to prevent accidentally exposing dangling references. The same holds true when passing C++ arguments to Python.
This behavior presents a problem when:
- exposing EventListener::on_event(const Event&), as Boost.Python will attempt to create an object that holds a copy of the Event. To resolve this, consider exposing an auxiliary function that accepts an Event* and delegates to the original function.
- passing an Event object to Python in EventListenerWrap::on_event. To resolve this, consider wrapping the argument in boost::ref() or boost::python::ptr().
Be aware that by not creating copies, you create the chance of dangling references. If the actual Event object is owned by Python, then its lifetime needs to be at least as long as any reference to it in C++. Likewise, if the actual Event object is owned by C++, then its lifetime needs to be at least as long as any reference to it in Python.
struct EventListenerWrap
: EventListener,
boost::python::wrapper<EventListener>
{
void on_event(const Event& event)
{
this->get_override("on_event")(boost::ref(event));
}
};
/// @brief Auxiliary function that will delegate to EventListener::on_event and
///        avoid by-value conversions at the language boundary. This prevents
///        Boost.Python from creating instance holders that would hold the
///        value as an rvalue.
void event_listener_on_event_aux(EventListener& listener, Event* event)
{
return listener.on_event(*event);
}
BOOST_PYTHON_MODULE(...)
{
namespace python = boost::python;
python::class_<EventListenerWrap, boost::noncopyable>("EventListener")
.def("on_event", python::pure_virtual(&event_listener_on_event_aux))
;
}
An interesting detail is that boost::python::pure_virtual() will duplicate the signature of the function it wraps, but it will never actually invoke the wrapped function. Hence, the wrapped function could have a no-op/empty implementation, but providing a real implementation is a good idea in case the pure_virtual designator is removed or the auxiliary function is invoked directly.
Also, note that to allow a Python class to derive from a Boost.Python class, the Boost.Python class must expose an __init__() method. Providing no methods, such as by using boost::python::no_init, will result in a runtime error.
Here is a minimal complete example based on the original code that demonstrates exposing a class with protected constructor and destructor, two derived classes, and virtual dispatch of the derived classes through Boost.Python:
#include <iostream>
#include <string>
#include <boost/python.hpp>
// Legacy API.
class event
{
public:
std::string name;
protected:
event(std::string name) : name(name) {}
~event() = default;
};
struct event_a: event { event_a(): event("event_a") {} };
struct event_b: event { event_b(): event("event_b") {} };
class event_listener
{
public:
virtual void on_event(const event& event) = 0;
};
// Boost.Python helper types and functions.
struct event_listener_wrap
: event_listener,
boost::python::wrapper<event_listener>
{
void on_event(const event& event)
{
std::cout << "event_listener_wrap::on_event()" << std::endl;
this->get_override("on_event")(boost::ref(event));
}
};
/// @brief Auxiliary function that will delegate to event_listener::on_event and
///        avoid by-value conversions at the language boundary. This prevents
///        Boost.Python from creating instance holders that would hold the
///        value as an rvalue.
void event_listener_on_event_wrap(event_listener& listener, event* event)
{
return listener.on_event(*event);
}
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
// Expose event and suppress default by-value converters and initializers.
// This will prevent Boost.Python from trying to access constructors and
// destructors.
python::class_<event, boost::noncopyable>("Event", python::no_init)
.def_readonly("name", &event::name)
;
// Expose event_a and event_b as derived from event.
python::class_<event_a, python::bases<event>>("EventA");
python::class_<event_b, python::bases<event>>("EventB");
// Expose event_listener_wrap.
python::class_<event_listener_wrap, boost::noncopyable>("EventListener")
.def("on_event", python::pure_virtual(&event_listener_on_event_wrap))
;
// Expose a function that will perform virtual resolution.
python::def("do_on_event", &event_listener_on_event_wrap);
}
Interactive usage:
>>> import example
>>> class Listener(example.EventListener):
... def on_event(self, event):
... assert(isinstance(event, example.Event))
... print "Listener::on_event: ", event, event.name
...
>>> listener = Listener()
>>> listener.on_event(example.EventA())
Listener::on_event: <example.EventA object at 0x7f3bc1176368> event_a
>>> example.do_on_event(listener, example.EventB())
event_listener_wrap::on_event()
Listener::on_event: <example.Event object at 0x7f3bc1246fa0> event_b
When Python is directly aware of a method, it will invoke it without passing through Boost.Python. Notice how listener.on_event() did not get dispatched through C++, and the event object keeps its example.EventA type. On the other hand, when dispatch is forced through C++, the derived type is not recovered: when Listener.on_event() is invoked through example.do_on_event(), the event object's type is example.Event and not example.EventB.
I'm trying to write a C extension for Python. What I'd like to write is a ModPolynomial class which represents a polynomial over (Z/nZ)[x]/(x^r - 1) [even though you can answer this question without knowing anything about such polynomials].
I've written some code which seems to work. Basically I just store three PyObject* in my ModPoly structure. Now I'd like to add storage for the coefficients of the polynomial.
Since I want the coefficients to be read-only, I'd like to add a getter/setter pair of functions through PyGetSetDef. But when I access the getter from Python (e.g. print pol.coefficients) I get a segmentation fault.
The original code, without the "coefficients" can be found here.
The code with the coefficients is here.
I hope someone can tell me where I'm going wrong here.
By the way, comments on the code are also welcome. This is my first extension and I know that I'm probably doing things quite badly.
As ecatmur says in the comments, PyVarObject stores a certain number of "slots" at the end of the struct, so I've decided to avoid it.
The relevant code is:
typedef struct {
PyObject_HEAD
/* Type specific fields */
Py_ssize_t ob_size;
PyObject **ob_item;
Py_ssize_t allocated;
PyObject *r_modulus;
PyObject *n_modulus;
PyObject *degree;
} ModPoly;
static PyObject *
ModPoly_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
ModPoly *self;
self = (ModPoly *)type->tp_alloc(type, 0);
if (self != NULL) {
[...]
self->ob_size = 0;
self->ob_item = NULL;
self->allocated = 0;
}
return (PyObject *)self;
}
static int
ModPoly_init(ModPoly *self, PyObject *args, PyObject *kwds)
{
PyObject *r_modulus=NULL, *n_modulus=NULL, *coefs=NULL, *tmp;
PyObject **tmp_ar;
static char *kwlist[] = {"r_modulus", "n_modulus", "coefficients", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|O", kwlist,
&r_modulus, &n_modulus, &coefs))
return -1;
[...]
// The polynomial defaults to "x", so the coefficients should be [0, 1].
tmp_ar = (PyObject **)malloc(2 * sizeof(PyObject*));
if (tmp_ar == NULL) {
Py_DECREF(self->r_modulus);
Py_DECREF(self->n_modulus);
Py_DECREF(self->degree);
return -1;
}
tmp_ar[0] = PyInt_FromLong(0);
tmp_ar[1] = NULL;
if (tmp_ar[0] != NULL) {
    tmp_ar[1] = PyInt_FromLong(1);
}
if (tmp_ar[0] == NULL || tmp_ar[1] == NULL) {
Py_DECREF(self->r_modulus);
Py_DECREF(self->n_modulus);
Py_DECREF(self->degree);
Py_XDECREF(tmp_ar[0]);
Py_XDECREF(tmp_ar[1]);
free(tmp_ar);
return -1;
}
self->ob_size = 2;
self->allocated = 2;
return 0;
}
[...]
static PyObject *
ModPoly_getcoefs(ModPoly *self, void *closure)
{
printf("here"); // "here" is never printed
PyTupleObject *res=(PyTupleObject*)PyTuple_New(self->ob_size);
Py_ssize_t i;
PyObject *tmp;
if (res == NULL)
return NULL;
for (i=0; i < self->ob_size; i++) {
tmp = self->ob_item[i];
Py_INCREF(tmp);
PyTuple_SET_ITEM(res, i, tmp);
}
return (PyObject *)res;
}
static int
ModPoly_setcoefs(ModPoly *self, PyObject *value, void* closure)
{
    PyErr_SetString(PyExc_AttributeError,
        "Cannot set the coefficients of a polynomial.");
    return -1;  /* a setter returns int: -1 on error, 0 on success */
}
[...]
static PyGetSetDef ModPoly_getsetters[] = {
{"coefficients",
(getter)ModPoly_getcoefs, (setter)ModPoly_setcoefs,
"The polynomial coefficients.", NULL},
{NULL, 0, 0, NULL, NULL}
};
static PyTypeObject ModPolyType = {
PyObject_HEAD_INIT(NULL)
0, /* ob_size */
[...]
ModPoly_members, /* tp_members */
ModPoly_getsetters, /* tp_getset */
0, /* tp_base */
[...]
};
[...]
Edit:
I tried to reimplement the getter instruction by instruction, and I realized what I was missing: in ModPoly_init I create tmp_ar, where I store the coefficients, but I never assign it to self->ob_item.
-facepalm-
You only seem to be assigning to ModPoly.ob_item in ModPoly_new() (setting it to NULL).
ModPoly_getcoefs() then dereferences the null pointer, which would give you your segfault. It looks like you intended to assign to ob_item in ModPoly_init(), but don't actually get around to doing so.
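For reference, a minimal sketch of how the tail of ModPoly_init could look once the array is actually stored on the object, assuming the rest of the function from the question stays as it is:
/* ... tmp_ar[0] and tmp_ar[1] now hold the default coefficients 0 and 1 ... */
self->ob_item = tmp_ar;   /* the missing step: keep the array on the object */
self->ob_size = 2;
self->allocated = 2;
return 0;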