How do I stop pybind11 from deallocating an object constructed from Python?

So, I know that pybind11 lets you set a return value policy for methods that you wrap. However, that doesn't seem to work for me when I try to use this policy on a constructor. I have a class to wrap my C++ type that looks like this:
class PyComponent{
public:
static Component* Create(ComponentType type) {
Component* c = new Component(type);
// Irrelevant stuff removed here
return c;
}
/// @brief Wrap a behavior for Python
static void PyInitialize(py::module_& m);
};
void PyComponent::PyInitialize(py::module_ & m)
{
py::class_<Component>(m, "Component")
.def(py::init<>(&PyComponent::Create), py::return_value_policy::reference)
;
}
However, this does NOT stop my Component type from getting deallocated from the Python side if I call Component() and the created object goes out of scope. Any suggestions?

I did figure out the solution to this. It's to pass py::nodelete as the deleter in the holder type of my class wrapper:
void PyComponent::PyInitialize(py::module_ & m)
{
py::class_<Component, std::unique_ptr<Component, py::nodelete>>(m, "Component")
.def(py::init<>(&PyComponent::Create), py::return_value_policy::reference)
;
}
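For completeness: with py::nodelete as the holder's deleter, Python never calls delete on the underlying Component, so the C++ side stays responsible for freeing it eventually. A rough sketch of the resulting Python-side behavior (module and enum names here are made up for illustration):

import mycomponents  # hypothetical module containing the binding above

def make_and_drop():
    c = mycomponents.Component(mycomponents.ComponentType.Basic)  # hypothetical enum value
    # 'c' goes out of scope when the function returns

make_and_drop()
# The Python wrapper is garbage-collected, but because the holder's deleter is
# py::nodelete, the C++ Component is NOT destroyed here; whatever owns it on the
# C++ side must clean it up later.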

Related

Down Casting and Template Solution Vs Python

This is basic C++, but I am getting to a point where Python actually seems way simpler. Suppose:
class Base
{
public:
virtual ~Base() = default;
virtual std::string type() const { return "base"; };
};
class Derived1 : public Base
{
public:
virtual std::string type() const { return "derived1"; };
};
class Derived2 : public Base
{
public:
virtual std::string type() const { return "derived2"; };
};
I find myself having other functions like:
void process(Base& derived_from_base)
{
};
Q1: How do I know what I am taking as input? Should I call type() and later downcast? The problem is, I think type() will always give "base"?
Q2: This is so annoying. I later have to downcast the input to the correct derived class. I am just wondering if Python is doing this in the background; am I sure that with all of this I am faster than Python?
Q3: Is it true that I can replace virtual functions/inheritance and all casting using templates? (I heard this somewhere and am not sure.)
Thank you very much.
Q1: How do I know what I am taking as input?
A1: Keep the full type instead of erasing it to the base class. E.g. instead of
void process(Base& base) {
if (base.type() == "derived1") process_derived1(static_cast<Derived1&>(base));
else process_anything_else(base);
}
int main() {
std::unique_ptr<Base> base = std::make_unique<Derived1>();
process(*base);
}
use
void process(Derived1& derived1) { process_derived1(derived1); }
void process(auto& t) { process_anything_else(t); }
int main() {
Derived1 derived1;
process(derived1);
}
Q2: I am just wondering if Python is doing this in the background; am I sure that with all of this I am faster than Python?
A2: Python has something like this:
int main() {
Object derived1; // a hash map
derived1["__type__"] = "Derived1";
}
With the approach from A1, you are faster than anything (assuming everything else in the program isn't worse) because of static dispatch: thanks to templates, overload resolution happens at compile time and therefore costs nothing.
Q3: Is it true I can replace virtual function/inheritance and all casting using templates? (heard this somewhere and not sure)
A3: With proper design, you can do that most of the time, e.g. see A1. However, some things force dynamic linking: OS APIs, game plugins, etc. In such cases, consider localizing the clumsy borderline part, so most of the code can be written as usual.
virtual function/inheritance
Note: inheritance without virtual functions is perfectly fine and zero-cost (on its own), e.g. see CRTP.
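For illustration, here is a minimal CRTP sketch (names made up): the base class is parameterized on the derived type, so the "overridden" call is resolved at compile time with no vtable involved.

#include <iostream>
#include <string>

template <typename Derived>
struct BaseCrtp
{
    std::string type() const
    {
        // Safe: Derived inherits from BaseCrtp<Derived>.
        return static_cast<const Derived&>(*this).type_impl();
    }
};

struct Derived1Crtp : BaseCrtp<Derived1Crtp>
{
    std::string type_impl() const { return "derived1"; }
};

int main()
{
    Derived1Crtp d;
    std::cout << d.type() << '\n';  // prints "derived1", statically dispatched
}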

Revert cppyy automatic mapping of operator() to __getitem__ via C++ pythonization callback

As is also explained in this cppyy issue, an A& operator() on the C++ side is mapped to __getitem__ on the Python side.
On the issue it is suggested to add a special pythonization if this is not the wished-for result.
An extra constraint in my case would be to add this to the C++ class itself to ensure that this pythonization is always applied.
I'm however having trouble figuring out how to properly do this via the Python C API (it's my first time working with that API, so I'm a bit lost).
Minimal reproducer (somewhat contrived, but it shows the problem):
Note in the example below that struct A is code that I can't modify because that class is defined in library code. So the callback has to be in B.
import cppyy
cppyy.include("Python.h")
cppyy.cppdef(r"""
void myprint(PyObject* py){
PyObject* tmp = PyObject_Str(py);
Py_ssize_t size = 0;
const char* s = PyUnicode_AsUTF8AndSize(tmp, &size);
std::cout << std::string(s, size) << std::endl;
}
template <typename T>
struct A {
T& operator[](size_t idx) { return arr[idx]; }
const T& operator[](size_t idx) const { return arr[idx]; }
std::array<T, 10> arr{};
};
template <typename T>
struct B : public A<T> {
B& operator()() { return *this; };
static void __cppyy_pythonize__( PyObject* klass, const std::string& name){
std::cout << "Hello from pythonize" << std::endl;
PyObject* getitem = PyObject_GetAttrString(klass, "__getitem__");
myprint(getitem);
}
};
using tmp = B<double>;
""")
t = cppyy.gbl.B['double']
print(t.__getitem__.__doc__)
I can get the __getitem__ function from the PyObject* klass but, as explained in the docs, the callback happens at the very end after all the internal processing of the class.
Thus the __call__ function, which here is B& operator()(), has already been mapped to __getitem__.
Unfortunately, I can't for the life of me figure out how I would undo that mapping and get back that old __getitem__ function.
Is that operator[] function even still accessible via the PyObject* klass?
Any help/pointers would be much appreciated :)
First, to answer your question, to find the __getitem__ you want, get it from the base class of klass, not from klass directly. You can also do this in Python, rather than adding pythonizations in C++. In fact, doing this in Python is preferred as then you don't have to deal with the C-API.
However, the actual bug report is not the one you referenced but this one, and the suggestion made there (which you followed here) makes this a classic XY problem. What you really want is to simply do PyObject_DelAttrString(klass, "__getitem__") in your code example.
Completely aside, the code that is giving you trouble here is from the Gaudi project, the core developers of which are the ones who asked for this automatic mapping in the first place. You may want to take this up with them.
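For illustration, a minimal sketch of that suggestion applied to the reproducer: the callback simply drops the remapped attribute, so attribute lookup falls back to the __getitem__ inherited from A<T>.

// Sketch: drop-in replacement for B::__cppyy_pythonize__ in the reproducer above.
static void __cppyy_pythonize__(PyObject* klass, const std::string& name) {
    // Remove the __getitem__ that was installed from B::operator();
    // lookup then falls through to the one inherited from A<T>.
    if (PyObject_DelAttrString(klass, "__getitem__") < 0)
        PyErr_Clear();  // nothing to remove; ignore
}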

Calling a function of an object instance using embedded Python

I want to be able to run Python scripts in my app to allow automating stuff and modifying existing objects/calling methods of existing objects.
In my application there is a BasicWindow class and a MainWindow class that derives from the former. For now, at application start I initialize one instance of MainWindow. This object has many functions; among them there is one that loads files (LoadFile()), and I will use it as an example here.
Let's say that I want to call that particular function of that particular object instance (not limited to that function; it is just an example of the functionality that I want to achieve from Python).
This method is not a static one. For this I am using Boost.Python and I am creating a module this way:
BOOST_PYTHON_MODULE(MyModule)
{
MainWindow::PythonExpose(); //not really sure how to operate here
//more stuff
}
The idea is that I could call from Python something like:
MainWindow.LoadFile()
or even better, just:
LoadFile()
One solution could be to create static, application-scoped functions and then just expose those functions. In C++ I could find the particular instance of MainWindow (both methods are static):
void AppHelper::LoadFile()
{
GetMainWindow()->LoadFile();
}
void AppHelper::PythonExposeGlobal()
{
using namespace boost::python;
def("LoadFile", &AppHelper::LoadFile);
}
Is it possible to achieve this? The general question would be: is it possible to call methods of existing objects (in C++) from Python? If so, how to do it? If not, what can I do to mimic this behavior?
For example, I could easily enable scripting capabilities in my C# application and share instances of existing objects. (But of course C# has reflection.)
If you can guarantee that the object will live as long as any scripts using it run, then there's a fairly simple approach that I use.
I'll use a primitive counter class for demonstration:
class counter
{
public:
counter() : count(0) {}
void increment() { ++count; }
int count;
};
Now, I expose this class to Python such that it is considered non-copyable and construction of new instances is not allowed. I also expose any members that I want to use from the scripts.
BOOST_PYTHON_MODULE(example)
{
bp::class_<counter, boost::noncopyable>("Counter", bp::no_init)
.def("increment", &counter::increment)
;
}
The next step is to create a Python object that uses an existing instance, and allow the script to use it (e.g. add it as an attribute of some module, such as the main one).
counter c;
bp::object main_module(bp::import("__main__"));
main_module.attr("c") = bp::object(bp::ptr(&c));
Now your scripts can use this instance:
c.increment()
Sample program:
#include <boost/python.hpp>
#include <iostream>
namespace bp = boost::python;
// Simple counter that can be incremented
class counter
{
public:
counter() : count(0) {}
void increment() { ++count; }
int count;
};
// Expose the counter class to Python
// We don't need constructor, since we only intend to use instance
// already existing on the C++ side
BOOST_PYTHON_MODULE(example)
{
bp::class_<counter, boost::noncopyable>("Counter", bp::no_init)
.def("increment", &counter::increment)
;
}
int main()
{
Py_InitializeEx(0);
// Bind our class
initexample();
counter c;
bp::object main_module(bp::import("__main__"));
bp::object main_namespace(main_module.attr("__dict__"));
// Add the current instance of counter to Python as attribute c of the main module
main_module.attr("c") = bp::object(bp::ptr(&c));
std::cout << "Before: " << c.count << '\n';
// Increment the counter from Python side
bp::exec("c.increment()", main_namespace);
std::cout << "After: " << c.count << '\n';
Py_Finalize();
return 0;
}
Console Output:
Before: 0
After: 1

Wrapping an std::vector using boost::python vector_indexing_suite

I am working on a C++ library with Python bindings (using boost::python) representing data stored in a file. The majority of my semi-technical users will be using Python to interact with it, so I need to make it as Pythonic as possible. However, I will also have C++ programmers using the API, so I do not want to compromise on the C++ side to accommodate the Python bindings.
A large part of the library will be made out of containers. To make things intuitive for the Python users, I would like them to behave like Python lists, i.e.:
# an example compound class
class Foo:
def __init__( self, _val ):
self.val = _val
# add it to a list
foo = Foo(0.0)
vect = []
vect.append(foo)
# change the value of the *original* instance
foo.val = 666.0
# which also changes the instance inside the container
print vect[0].val # outputs 666.0
The test setup
#include <boost/python.hpp>
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>
#include <boost/python/register_ptr_to_python.hpp>
#include <boost/shared_ptr.hpp>
struct Foo {
double val;
Foo(double a) : val(a) {}
bool operator == (const Foo& f) const { return val == f.val; }
};
/* insert the test module wrapping code here */
int main() {
Py_Initialize();
inittest();
boost::python::object globals = boost::python::import("__main__").attr("__dict__");
boost::python::exec(
"import test\n"
"foo = test.Foo(0.0)\n" // make a new Foo instance
"vect = test.FooVector()\n" // make a new vector of Foos
"vect.append(foo)\n" // add the instance to the vector
"foo.val = 666.0\n" // assign a new value to the instance
// which should change the value in vector
"print 'Foo =', foo.val\n" // and print the results
"print 'vector[0] =', vect[0].val\n",
globals, globals
);
return 0;
}
The way of the shared_ptr
Using the shared_ptr, I can get the same behaviour as above, but it also means that I have to represent all data in C++ using shared pointers, which is not nice from many points of view.
BOOST_PYTHON_MODULE( test ) {
// wrap Foo
boost::python::class_< Foo, boost::shared_ptr<Foo> >("Foo", boost::python::init<double>())
.def_readwrite("val", &Foo::val);
// wrap vector of shared_ptr Foos
boost::python::class_< std::vector < boost::shared_ptr<Foo> > >("FooVector")
.def(boost::python::vector_indexing_suite<std::vector< boost::shared_ptr<Foo> >, true >());
}
In my test setup, this produces the same output as pure Python:
Foo = 666.0
vector[0] = 666.0
The way of the vector<Foo>
Using a vector directly gives a nice clean setup on the C++ side. However, the result does not behave in the same way as pure Python.
BOOST_PYTHON_MODULE( test ) {
// wrap Foo
boost::python::class_< Foo >("Foo", boost::python::init<double>())
.def_readwrite("val", &Foo::val);
// wrap vector of Foos
boost::python::class_< std::vector < Foo > >("FooVector")
.def(boost::python::vector_indexing_suite<std::vector< Foo > >());
}
This produces:
Foo = 666.0
vector[0] = 0.0
Which is "wrong" - changing the original instance did not change the value inside the container.
I hope I don't want too much
Interestingly enough, this code works no matter which of the two encapsulations I use:
footwo = vect[0]
footwo.val = 555.0
print vect[0].val
Which means that boost::python is able to deal with "fake shared ownership" (via its by_proxy return mechanism). Is there any way to achieve the same while inserting new elements?
However, if the answer is no, I'd love to hear other suggestions - is there an example in the Python toolkit where a similar collection encapsulation is implemented, but which does not behave as a python list?
Thanks a lot for reading this far :)
Due to the semantic differences between the languages, it is often very difficult to apply a single reusable solution to all scenarios when collections are involved. The largest issue is that while Python collections directly support references, C++ collections require a level of indirection, such as by having shared_ptr element types. Without this indirection, C++ collections will not be able to support the same functionality as Python collections. For instance, consider two indexes that refer to the same object:
s = Spam()
spams = []
spams.append(s)
spams.append(s)
Without pointer-like element types, a C++ collection could not have two indexes referring to the same object. Nevertheless, depending on usage and needs, there may be options that allow for a Pythonic-ish interface for the Python users while still maintaining a single implementation for C++.
The most Pythonic solution would be to use a custom converter that converts a Python iterable object to a C++ collection. See this answer for implementation details; a minimal sketch also follows this list. Consider this option if:
The collection's elements are cheap to copy.
The C++ functions operate only on rvalue types (i.e., std::vector<> or const std::vector<>&). This limitation prevents C++ from making changes to the Python collection or its elements.
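A minimal sketch of such a converter, reusing Foo from the question (the helper name is made up; registration would normally go inside the BOOST_PYTHON_MODULE block, and stl_input_iterator needs <boost/python/stl_iterator.hpp>):

// Sketch: allow a Python iterable of Foo to be passed wherever a
// std::vector<Foo> (by value or const reference) is expected.
struct iterable_to_foo_vector
{
    static void* convertible(PyObject* obj)
    {
        // Accept anything that looks like a sequence or an iterator.
        return PySequence_Check(obj) || PyIter_Check(obj) ? obj : nullptr;
    }

    static void construct(
        PyObject* obj,
        boost::python::converter::rvalue_from_python_stage1_data* data)
    {
        namespace python = boost::python;
        python::object iterable(python::handle<>(python::borrowed(obj)));

        // Construct the vector in the storage Boost.Python provides.
        typedef python::converter::rvalue_from_python_storage<
            std::vector<Foo> > storage_type;
        void* storage = reinterpret_cast<storage_type*>(data)->storage.bytes;

        typedef python::stl_input_iterator<Foo> iterator;
        new (storage) std::vector<Foo>(iterator(iterable), iterator());
        data->convertible = storage;
    }
};

// Registration, e.g. inside BOOST_PYTHON_MODULE(test):
//   boost::python::converter::registry::push_back(
//       &iterable_to_foo_vector::convertible,
//       &iterable_to_foo_vector::construct,
//       boost::python::type_id<std::vector<Foo> >());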
Enhance vector_indexing_suite capabilities, reusing as many capabilities as possible, such as its proxies for safely handling index deletion and reallocation of the underlying collection:
Expose the model with a custom HeldType that functions as a smart pointer and delegates to either the instance or the element proxy objects returned from vector_indexing_suite.
Monkey patch the collection's methods that insert elements into the collection so that the custom HeldType will be set to delegate to an element proxy.
When exposing a class to Boost.Python, the HeldType is the type of object that gets embedded within a Boost.Python object. When accessing the wrapped types object, Boost.Python invokes get_pointer() for the HeldType. The object_holder class below provides the ability to return a handle to either an instance it owns or to an element proxy:
/// @brief Smart pointer type that will delegate to a Python
/// object if one is set.
template <typename T>
class object_holder
{
public:
typedef T element_type;
object_holder(element_type* ptr)
: ptr_(ptr),
object_()
{}
element_type* get() const
{
if (!object_.is_none())
{
return boost::python::extract<element_type*>(object_)();
}
return ptr_ ? ptr_.get() : NULL;
}
void reset(boost::python::object object)
{
// Verify the object holds the expected element.
boost::python::extract<element_type*> extractor(object);
if (!extractor.check()) return;
object_ = object;
ptr_.reset();
}
private:
boost::shared_ptr<element_type> ptr_;
boost::python::object object_;
};
/// @brief Helper function used to extract the pointed-to object from
/// an object_holder. Boost.Python will use this through ADL.
template <typename T>
T* get_pointer(const object_holder<T>& holder)
{
return holder.get();
}
With the indirection supported, the only thing remaining is patching the collection to set the object_holder. One clean and reusable way to support this is to use def_visitor. This is a generic interface that allows for class_ objects to be extended non-intrusively. For instance, the vector_indexing_suite uses this capability.
The custom_vector_indexing_suite class below monkey patches the append() method to delegate to the original method, and then invokes object_holder.reset() with a proxy to the newly set element. This results in the object_holder referring to the element contained within the collection.
/// @brief Indexing suite that will reset the element's HeldType to
/// that of the proxy during element insertion.
template <typename Container,
typename HeldType>
class custom_vector_indexing_suite
: public boost::python::def_visitor<
custom_vector_indexing_suite<Container, HeldType>>
{
private:
friend class boost::python::def_visitor_access;
template <typename ClassT>
void visit(ClassT& cls) const
{
// Define vector indexing support.
cls.def(boost::python::vector_indexing_suite<Container>());
// Monkey patch element setters with custom functions that
// delegate to the original implementation then obtain a
// handle to the proxy.
cls
.def("append", make_append_wrapper(cls.attr("append")))
// repeat for __setitem__ (slice and non-slice) and extend
;
}
/// @brief Return a patched 'append' function.
static boost::python::object make_append_wrapper(
boost::python::object original_fn)
{
namespace python = boost::python;
return python::make_function([original_fn](
python::object self,
HeldType& value)
{
// Copy into the collection.
original_fn(self, value.get());
// Reset handle to delegate to a proxy for the newly copied element.
value.reset(self[-1]);
},
// Call policies.
python::default_call_policies(),
// Describe the signature.
boost::mpl::vector<
void, // return
python::object, // self (collection)
HeldType>() // value
);
}
};
Wrapping needs to occur at runtime, and custom functor objects cannot be directly defined on the class via def(), so the make_function() function must be used. For functors, it requires both CallPolicies and an MPL front-extensible sequence representing the signature.
Here is a complete example that demonstrates using the object_holder to delegate to proxies and custom_vector_indexing_suite to patch the collection.
#include <boost/python.hpp>
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>
/// @brief Mockup type.
struct spam
{
int val;
spam(int val) : val(val) {}
bool operator==(const spam& rhs) { return val == rhs.val; }
};
/// @brief Mockup function that operates on a collection of spam instances.
void modify_spams(std::vector<spam>& spams)
{
for (auto& spam : spams)
spam.val *= 2;
}
/// @brief Smart pointer type that will delegate to a Python
/// object if one is set.
template <typename T>
class object_holder
{
public:
typedef T element_type;
object_holder(element_type* ptr)
: ptr_(ptr),
object_()
{}
element_type* get() const
{
if (!object_.is_none())
{
return boost::python::extract<element_type*>(object_)();
}
return ptr_ ? ptr_.get() : NULL;
}
void reset(boost::python::object object)
{
// Verify the object holds the expected element.
boost::python::extract<element_type*> extractor(object);
if (!extractor.check()) return;
object_ = object;
ptr_.reset();
}
private:
boost::shared_ptr<element_type> ptr_;
boost::python::object object_;
};
/// @brief Helper function used to extract the pointed-to object from
/// an object_holder. Boost.Python will use this through ADL.
template <typename T>
T* get_pointer(const object_holder<T>& holder)
{
return holder.get();
}
/// @brief Indexing suite that will reset the element's HeldType to
/// that of the proxy during element insertion.
template <typename Container,
typename HeldType>
class custom_vector_indexing_suite
: public boost::python::def_visitor<
custom_vector_indexing_suite<Container, HeldType>>
{
private:
friend class boost::python::def_visitor_access;
template <typename ClassT>
void visit(ClassT& cls) const
{
// Define vector indexing support.
cls.def(boost::python::vector_indexing_suite<Container>());
// Monkey patch element setters with custom functions that
// delegate to the original implementation then obtain a
// handle to the proxy.
cls
.def("append", make_append_wrapper(cls.attr("append")))
// repeat for __setitem__ (slice and non-slice) and extend
;
}
/// @brief Return a patched 'append' function.
static boost::python::object make_append_wrapper(
boost::python::object original_fn)
{
namespace python = boost::python;
return python::make_function([original_fn](
python::object self,
HeldType& value)
{
// Copy into the collection.
original_fn(self, value.get());
// Reset handle to delegate to a proxy for the newly copied element.
value.reset(self[-1]);
},
// Call policies.
python::default_call_policies(),
// Describe the signature.
boost::mpl::vector<
void, // return
python::object, // self (collection)
HeldType>() // value
);
}
// .. make_setitem_wrapper
// .. make_extend_wrapper
};
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
// Expose spam. Use a custom holder to allow for transparent delegation
// to different instances.
python::class_<spam, object_holder<spam>>("Spam", python::init<int>())
.def_readwrite("val", &spam::val)
;
// Expose a vector of spam.
python::class_<std::vector<spam>>("SpamVector")
.def(custom_vector_indexing_suite<
std::vector<spam>, object_holder<spam>>())
;
python::def("modify_spams", &modify_spams);
}
Interactive usage:
>>> import example
>>> spam = example.Spam(5)
>>> spams = example.SpamVector()
>>> spams.append(spam)
>>> assert(spams[0].val == 5)
>>> spam.val = 21
>>> assert(spams[0].val == 21)
>>> example.modify_spams(spams)
>>> assert(spam.val == 42)
>>> spams.append(spam)
>>> spam.val = 100
>>> assert(spams[1].val == 100)
>>> assert(spams[0].val == 42) # The container does not provide indirection.
As the vector_indexing_suite is still being used, the underlying C++ container should only be modified using the Python object's API. For instance, invoking push_back on the container may cause a reallocation of the underlying memory and cause problems with existing Boost.Python proxies. On the other hand, one can safely modify the elements themselves, such as was done via the modify_spams() function above.
Unfortunately, the answer is no, you can't do what you want. In Python, everything is a pointer, and lists are containers of pointers. The C++ vector of shared pointers works because the underlying data structure is more or less equivalent to a Python list. What you are requesting is to have the C++ vector of allocated memory act like a vector of pointers, which can't be done.
Let's see what's happening in Python lists, with C++-equivalent pseudocode:
foo = Foo(0.0) # Foo* foo = new Foo(0.0)
vect = [] # std::vector<Foo*> vect
vect.append(foo) # vect.push_back(foo)
At this point, foo and vect[0] both point to the same allocated memory, so changing *foo changes *vect[0].
Now with the vector<Foo> version:
foo = Foo(0.0) # Foo* foo = new Foo(0.0)
vect = FooVector() # std::vector<Foo> vect
vect.append(foo) # vect.push_back(*foo)
Here, vect[0] has its own allocated memory, and is a copy of *foo. Fundamentally, you can't make vect[0] be the same memory as *foo.
On a side note, be careful with lifetime management of footwo when using std::vector<Foo>:
footwo = vect[0] # Foo* footwo = &vect[0]
A subsequent append may require moving the allocated storage for the vector, and may invalidate footwo (&vect[0] may change).
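In C++ terms, the hazard looks roughly like this:

// C++ analogue of the hazard: raw pointers into a std::vector can dangle.
std::vector<Foo> vect;
vect.push_back(Foo(666.0));
Foo* footwo = &vect[0];          // like 'footwo = vect[0]' on the Python side
for (int i = 0; i < 1000; ++i)
    vect.push_back(Foo(0.0));    // growth may reallocate; footwo may now dangle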

Boost.Python: How to expose std::unique_ptr

I am fairly new to Boost.Python and trying to expose the return value of a function to Python.
The function signature looks like this:
std::unique_ptr<Message> someFunc(const std::string &str) const;
When calling the function in python, I get the following error:
TypeError: No to_python (by-value) converter found for C++ type: std::unique_ptr<Message, std::default_delete<Message> >
My function call in python looks like this:
a = mymodule.MyClass()
a.someFunc("some string here") # error here
I tried to expose the std::unique_ptr but just can't get it to work.
Does someone know how to properly expose the pointer class?
Thanks!
Edit:
I tried the following:
class_<std::unique_ptr<Message, std::default_delete<Message>>, boost::noncopyable>("Message", init<>())
;
This example compiles, but I still get the error mentioned above.
Also, I tried to expose the class Message itself:
class_<Message>("Message", init<unsigned>())
.def(init<unsigned, unsigned>())
.def("f", &Message::f)
;
In short, Boost.Python does not support move semantics, and therefore does not support std::unique_ptr. Boost.Python's news/change log has no indication that it has been updated for C++11 move semantics. Additionally, this feature request for unique_ptr support has not been touched for over a year.
Nevertheless, Boost.Python supports transferring exclusive ownership of an object to and from Python via std::auto_ptr. As unique_ptr is essentially a safer version of auto_ptr, it should be fairly straightforward to adapt an API using unique_ptr to an API that uses auto_ptr:
When C++ transfers ownership to Python, the C++ function must:
be exposed with CallPolicy of boost::python::return_value_policy with a boost::python::manage_new_object result converter.
have unique_ptr release control via release() and return a raw pointer
When Python transfers ownership to C++, the C++ function must:
accept the instance via auto_ptr. The FAQ mentions that pointers returned from C++ with a manage_new_object policy will be managed via std::auto_ptr.
have auto_ptr release control to a unique_ptr via release()
Given an API/library that cannot be changed:
/// @brief Mockup Spam class.
struct Spam;
/// @brief Mockup factory for Spam.
struct SpamFactory
{
/// @brief Create Spam instances.
std::unique_ptr<Spam> make(const std::string&);
/// @brief Delete Spam instances.
void consume(std::unique_ptr<Spam>);
};
The SpamFactory::make() and SpamFactory::consume() need to be wrapped via auxiliary functions.
Functions transferring ownership from C++ to Python can be generically wrapped by a function that will create Python function objects:
/// @brief Adapt a member function that returns a unique_ptr to
/// a Python function object that returns a raw pointer but
/// explicitly passes ownership to Python.
template <typename T,
typename C,
typename ...Args>
boost::python::object adapt_unique(std::unique_ptr<T> (C::*fn)(Args...))
{
return boost::python::make_function(
[fn](C& self, Args... args) { return (self.*fn)(args...).release(); },
boost::python::return_value_policy<boost::python::manage_new_object>(),
boost::mpl::vector<T*, C&, Args...>()
);
}
The lambda delegates to the original function and releases ownership of the instance to Python via release(), and the call policy indicates that Python will take ownership of the value returned from the lambda. The mpl::vector describes the call signature to Boost.Python, allowing it to properly manage function dispatching between the languages.
The result of adapt_unique is exposed as SpamFactory.make():
boost::python::class_<SpamFactory>(...)
.def("make", adapt_unique(&SpamFactory::make))
// ...
;
Generically adapting SpamFactory::consume() is more difficult, but it is easy enough to write a simple auxiliary function:
/// @brief Wrapper function for SpamFactory::consume(). This
/// is required because Boost.Python will pass a handle to the
/// Spam instance as an auto_ptr that needs to be converted
/// to a unique_ptr.
void SpamFactory_consume(
SpamFactory& self,
std::auto_ptr<Spam> ptr) // Note auto_ptr provided by Boost.Python.
{
return self.consume(std::unique_ptr<Spam>{ptr.release()});
}
The auxiliary function delegates to the original function, and converts the auto_ptr provided by Boost.Python to the unique_ptr required by the API. The SpamFactory_consume auxiliary function is exposed as SpamFactory.consume():
boost::python::class_<SpamFactory>(...)
// ...
.def("consume", &SpamFactory_consume)
;
Here is a complete code example:
#include <iostream>
#include <memory>
#include <boost/python.hpp>
/// @brief Mockup Spam class.
struct Spam
{
Spam(std::size_t x) : x(x) { std::cout << "Spam()" << std::endl; }
~Spam() { std::cout << "~Spam()" << std::endl; }
Spam(const Spam&) = delete;
Spam& operator=(const Spam&) = delete;
std::size_t x;
};
/// @brief Mockup factory for Spam.
struct SpamFactory
{
/// @brief Create Spam instances.
std::unique_ptr<Spam> make(const std::string& str)
{
return std::unique_ptr<Spam>{new Spam{str.size()}};
}
/// @brief Delete Spam instances.
void consume(std::unique_ptr<Spam>) {}
};
/// @brief Adapt a non-member function that returns a unique_ptr to
/// a Python function object that returns a raw pointer but
/// explicitly passes ownership to Python.
template <typename T,
typename ...Args>
boost::python::object adapt_unique(std::unique_ptr<T> (*fn)(Args...))
{
return boost::python::make_function(
[fn](Args... args) { return fn(args...).release(); },
boost::python::return_value_policy<boost::python::manage_new_object>(),
boost::mpl::vector<T*, Args...>()
);
}
/// @brief Adapt a member function that returns a unique_ptr to
/// a Python function object that returns a raw pointer but
/// explicitly passes ownership to Python.
template <typename T,
typename C,
typename ...Args>
boost::python::object adapt_unique(std::unique_ptr<T> (C::*fn)(Args...))
{
return boost::python::make_function(
[fn](C& self, Args... args) { return (self.*fn)(args...).release(); },
boost::python::return_value_policy<boost::python::manage_new_object>(),
boost::mpl::vector<T*, C&, Args...>()
);
}
/// @brief Wrapper function for SpamFactory::consume(). This
/// is required because Boost.Python will pass a handle to the
/// Spam instance as an auto_ptr that needs to be converted
/// to a unique_ptr.
void SpamFactory_consume(
SpamFactory& self,
std::auto_ptr<Spam> ptr) // Note auto_ptr provided by Boost.Python.
{
return self.consume(std::unique_ptr<Spam>{ptr.release()});
}
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
python::class_<Spam, boost::noncopyable>(
"Spam", python::init<std::size_t>())
.def_readwrite("x", &Spam::x)
;
python::class_<SpamFactory>("SpamFactory", python::init<>())
.def("make", adapt_unique(&SpamFactory::make))
.def("consume", &SpamFactory_consume)
;
}
Interactive Python:
>>> import example
>>> factory = example.SpamFactory()
>>> spam = factory.make("a" * 21)
Spam()
>>> spam.x
21
>>> spam.x *= 2
>>> spam.x
42
>>> factory.consume(spam)
~Spam()
>>> spam.x = 100
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Boost.Python.ArgumentError: Python argument types in
None.None(Spam, int)
did not match C++ signature:
None(Spam {lvalue}, unsigned int)
My suggestion is to get the raw pointer from the std::unique_ptr container with get(). You will have to be careful to keep the unique_ptr in scope for the whole time that you wish to use the raw pointer value; otherwise the object will be deleted and you'll have a pointer to an invalid area of memory.
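As a rough sketch of that idea (only someFunc, MyClass, and Message come from the question; the wrapper name is made up, and the static unique_ptr is just the crudest way to keep the owner alive longer than the Python-side uses):

// Sketch: keep ownership on the C++ side, hand Python a non-owning pointer.
static std::unique_ptr<Message> last_message;   // must outlive all Python uses

Message* someFuncRaw(const MyClass& self, const std::string& str)
{
    last_message = self.someFunc(str);          // the question's function
    return last_message.get();                  // raw, non-owning pointer
}

// Exposed with a policy that does not transfer ownership to Python, e.g.:
//   .def("someFunc", &someFuncRaw,
//        boost::python::return_value_policy<
//            boost::python::reference_existing_object>())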
Boost supports move semantics and unique_ptr since v1.55.
But in my project I used a previous version and made a simple wrapper like this:
class_<unique_ptr<HierarchyT>, noncopyable>(typpedName<LinksT>("hierarchy", false)
, "hierarchy holder")
.def("__call__", &unique_ptr<HierarchyT>::get,
return_internal_reference<>(),
"get holding hierarchy")
.def("reset", &unique_ptr<HierarchyT>::reset,
"reset holding hierarhy")
;
This creates unique_ptr<HierarchyT> as the Python type shierarchy and lets me pass it to a function that accepts it by reference.
Python code:
hier = mc.shierarchy()
mc.clusterize(hier, nds)
where C++ function is float clusterize(unique_ptr<HierarchyT>& hier,...).
Then, to access the results in Python, call hier() to get the wrapped object from the unique_ptr:
output(hier(), nds)
I think nowadays there is no way to do what you are looking for... The reason is that std::unique_ptr<Message> someFunc(const std::string &str) returns by value, which means one of two things:
The return value is going to be copied (but unique_ptr is not copyable);
The return value is going to be moved (but boost::python doesn't provide support for move semantics; note that I'm using Boost 1.53, not sure about the newest versions).
Is someFunc() creating the object? If YES, I think the solution is to create a wrapper; if NO, you can return by reference:
std::unique_ptr<Message>& someFunc(const std::string &str)
expose the class:
class_<std::unique_ptr<Message, std::default_delete<Message>>, boost::noncopyable>("unique_ptr_message")
.def("get", &std::unique_ptr<Message>::get, return_value_policy<reference_existing_object>())
;
and also the functions:
def("someFunc", someFunc, return_value_policy<reference_existing_object>());
