Expose OpenCV C++ Mat to Python

Hello everyone. Basically I have the following: a test.cpp like this:
#include <opencv2/opencv.hpp>
#include <opencv2/core/cuda.hpp>
using namespace cv;
using namespace std;

Mat load(string filename) {
    Mat img = imread(filename, CV_LOAD_IMAGE_COLOR); // IMREAD_COLOR in OpenCV 3+
    cuda::GpuMat cudaMat;
    cudaMat.upload(img); // copy the image to the GPU
    cuda::DeviceInfo deviceinfo;
    cout << "GPU: " << deviceinfo.name() << endl;
    imshow("opencvtest_load", img);
    waitKey(0);
    return img;
}
and I'm wrapping it with Boost.Python as follows:
#include <boost/python.hpp>
using namespace boost::python;

BOOST_PYTHON_MODULE(opencvtest)
{
    def("load", load);
}
I build everything with the make command. It is then called from the Python code test.py:
from opencvtest import load

image = "some directory and image"
img3 = load(image)
So what I need now is to get the Mat returned by the load method converted to Python so it can be processed there.
At the moment I get the following error:
TypeError: No to_python (by-value) converter found for C++ type:
cv::Mat
I'm done with all the library solutions because they throw errors all the time. Is there a better solution for this? Thanks a lot in advance.

Long story short: in order to use Boost with a C++ class, you need to expose the C++ methods to Python as well.
Python's memory management strategy is very different from C++'s, so in order for an object to be managed by both, you need to wrap the C++ class in a Python-friendly wrapper. Boost has functions to facilitate this, though I haven't checked exactly how, and Boost's documentation isn't amazing for the Python library. Check this part of Boost's Python documentation for how to do it.
If you want to use OpenCV in both C++ and Python, it's a little more complicated than it sounds. Even though OpenCV has both C++ and Python bindings, the Python bindings all use NumPy whereas the C++ bindings use classes such as cv::Mat. In order to use them together, you'll have to deal with converting to and from those formats.
However, that conversion can be made a little simpler, since Boost also has NumPy array bindings (which means that the conversion can be handled entirely in C++). I found this bit of code that helped me handle the rather confusing conversions, and found that even with an HD image the overhead of converting was minimal (since it's mostly memcpy).
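For illustration, here is a minimal sketch (not a tested, drop-in implementation) of what such a to-python converter can look like with Boost's NumPy bindings, assuming an 8-bit image like the one imread returns; a complete converter would dispatch on m.depth() and handle other element types:

#include <boost/python.hpp>
#include <boost/python/numpy.hpp>
#include <opencv2/opencv.hpp>
#include <cstring>

namespace bp = boost::python;
namespace np = boost::python::numpy;

// Convert cv::Mat to numpy.ndarray by copying the pixel data.
struct MatToNdarray {
    static PyObject* convert(const cv::Mat& m) {
        np::dtype dt = np::dtype::get_builtin<unsigned char>(); // assumes 8-bit data
        bp::tuple shape = bp::make_tuple(m.rows, m.cols, m.channels());
        np::ndarray arr = np::empty(shape, dt);
        cv::Mat cont = m.isContinuous() ? m : m.clone(); // ensure one contiguous buffer
        std::memcpy(arr.get_data(), cont.data, cont.total() * cont.elemSize());
        return bp::incref(arr.ptr()); // hand ownership to Python
    }
};

BOOST_PYTHON_MODULE(opencvtest)
{
    np::initialize(); // must run before any boost::python::numpy use
    bp::to_python_converter<cv::Mat, MatToNdarray>();
    bp::def("load", load); // load() as defined in test.cpp above
}

Once the converter is registered, the existing def("load", load) works unchanged and img3 in test.py arrives as a NumPy array.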
And as for the comment of "why use Python if you have C++ available": development in C++ can be slow, especially when dealing with image processing. Python may be slow to run, but development can be much, much faster. If you've written working Python code and want to speed it up, it can be easier to convert pieces of it to C++ than to rewrite it entirely.

Related

How to use cppyy to embed Python instead of boost-python

I am currently using boost-python to embed a Python interpreter into my C++ application and to pass data from the executed Python process to the running C++ application through the boost-python bindings, as per https://www.boost.org/doc/libs/1_75_0/libs/python/doc/html/tutorial/tutorial/embedding.html
I have some troubles with respect to performance: especially when calling wrapped functions with a large number of arguments, the overhead of parsing and boxing all those arguments for "passing" to the C++ "side" is considerable.
I checked alternatives to boost-python, for example pybind11, which can also be embedded, but the performance is unlikely to improve. I also found out about cppyy, but from the documentation I am at a loss as to how to embed an interpreter into my program, or rather, how I should convert my current approach of an embedded interpreter to use cppyy. My aim in trying out cppyy is to check whether using cppyy and/or PyPy as an interpreter can increase the performance of my code, since neither boost-python nor pybind11 supports embedding PyPy.
Can anyone give any pointers in how to replace an embedded Python interpreter using boost-python with cppyy?
The cppyy embedding interface isn't documented yet because it's not functional on PyPy/cppyy (which seems to be what you're asking for most specifically), only for CPython. And for the latter, I don't necessarily see whether/how it would be faster than boost.python or pybind11, as it still relies on boxing variables and the C-API to call into Python. Potentially, C++ type lookups are faster, but that would be all.
You can play with it rather easily to get some performance numbers first by calling into Python from C++ (Cling) from cppyy and see how it looks. Here's a trivial example:
import cppyy
import time

N = 10000000

def pycall(a):
    return a

cppyy.cppdef("""\
int (*ptr)(int) = 0;
void func(uint64_t N) {
    for (uint64_t i = 0; i < N; ++i)
        ptr(1);
}""")

cppyy.gbl.ptr = pycall

ts = time.perf_counter()
cppyy.gbl.func(N)
print('time per call:', (time.perf_counter()-ts)/N)
To use your own code and types, rather than int, just include headers with cppyy.include and load libraries with cppyy.load_library(). Both Cling on the C++ side and cppyy on the Python side will then have full access, so you can use the types in the callback.
If the numbers look better, the main pieces you need are in CPyCppyy/API.h, see here: https://github.com/wlav/CPyCppyy/blob/master/include/CPyCppyy/API.h
My best recommendation, at this point in time, would however have to be CFFI, which is C only, but will give you callbacks that you can use directly and that are JIT friendly on PyPy:
https://cffi.readthedocs.io/en/latest/embedding.html

Handling objects from a C library in python code

I would like to use a C / C++ library from a .dll file in a Python script to control a piece of I/O equipment called ClipX by HBM (in case anyone needs help with this in the future).
The manufacturer gives an example C implementation, and an example C++ implementation. In the C example, the Connect() function returns some pointer, which is used in subsequent read/write functions. In the C++ example, a ClipX class is used to establish the connection, and read/write functions are methods in that class. I've simplified the code for the purposes of this question.
Basically, I want to connect() to the device, and at some later point read() from it. From what I've read, it seems like Cython would be a good way to wrap connect() and read() as separate functions, and import them as a module into Python. My questions are:
For the C implementation, would I be able to pass MHandle pointer back to Python, after connecting, for later use (i.e. calling the read function)? Would the pointer even have any meaning, being used later in a different function call?
For the C++ implementation, could the dev object be passed to the Python code, to be later passed back for a Read()? Can you do that with arbitrary objects?
I am a mechanical engineer, sorry if this is gibberish or wildly uninformed. Any guidance is very much appreciated.
C Code:
/* From .h file */
----------------------------------------------------
struct sClipX {
    void *obj;
};
typedef struct sClipX * MHandle;
ClipX_API MHandle __stdcall Connect(const char *);
----------------------------------------------------
/* End .h file */

int main()
{
    const char *IP = "172.21.104.76";
    MHandle m = Connect(IP);
    Read(m, 0x4428);
}
C++ Code:
int main() {
    ClipX dev = ClipX();
    dev.Connect("172.21.104.76");
    dev.Read(0x4428);
}
C++ functions are callable from C if you declare them as extern "C". This is related to name mangling.
The Python interpreter can be extended with C functions. Read carefully the Extending and Embedding the Python Interpreter chapter.
Be careful about C++ exceptions. You don't want them to cross the Python interpreter code. So any extern "C" C++ function called from Python should handle and catch exceptions raised by internal routines.
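To make that concrete, here is a minimal sketch of such an extern "C" boundary layer over the vendor's C++ class from the question above (the clipx_* names are hypothetical, not part of the ClipX API), with exceptions caught before they can reach Python:

#include "clipx.h" // hypothetical vendor header declaring the ClipX class

extern "C" ClipX* clipx_connect(const char* ip) {
    try {
        ClipX* dev = new ClipX();
        dev->Connect(ip);
        return dev;      // Python keeps this pointer only as an opaque handle
    } catch (...) {
        return nullptr;  // never let a C++ exception cross into Python
    }
}

extern "C" int clipx_read(ClipX* dev, unsigned int address) {
    try {
        dev->Read(address);
        return 0;
    } catch (...) {
        return -1;       // report failure through the return code
    }
}

extern "C" void clipx_free(ClipX* dev) {
    delete dev;          // Python must call this exactly once per handle
}

This also answers the two questions above: the pointer has no meaning to Python itself, but it stays valid between calls as long as the object lives, so handing it back to a later read is exactly how such handle-based APIs are meant to be used.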
At last, be careful about memory management and garbage collection. P. Wilson's old paper on uniprocessor garbage collection techniques is relevant, at least for terminology and insights. Or read the GC handbook. Python uses a reference-counting scheme and handles weak references specially. Be careful about circular references.
Be of course aware of the GIL in Python. Roughly speaking, you cannot have several threads doing Python things without precautions.
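For example, if a C++ thread needs to call back into Python, it must take the GIL first; a minimal sketch using the standard CPython C API:

#include <Python.h>

// Call a Python callable from a thread that does not hold the GIL.
void call_from_cxx_thread(PyObject* callable) {
    PyGILState_STATE gstate = PyGILState_Ensure();  // acquire the GIL
    PyObject* result = PyObject_CallObject(callable, nullptr);
    if (result == nullptr)
        PyErr_Print();                              // report the error, don't let it leak
    Py_XDECREF(result);
    PyGILState_Release(gstate);                     // give the GIL back
}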
Serialization of device-related data would also be a concern, but you probably don't need it.
Most importantly, document your code well.
Tools like doxygen could help (perhaps with LaTeX or DocBook).
Of course, use a good enough version control system; I recommend git. Also use a good build automation tool.
My suggestion is to publish your C++ code as open source, e.g. on github or gitlab. You then could get useful code reviews and feedback.
If your hardware + software system is safety-critical, consider static program analysis techniques e.g. with Frama-C or Clang static analyzer or with your own GCC plugin. In a few months (end of 2020), you might try Bismon (read also this draft report).
I am definitely biased, but I do recommend trying some Linux distribution (e.g. Ubuntu or Debian) as your cross-development platform. Be aware that a lot of devices (including the Raspberry Pi) are running some embedded Linux system, so the learning effort makes sense. Then read Advanced Linux Programming.

Boost.Python: Converters unavailable from standalone python script

The title may not be as explicit as I wish it would be but here is what I am trying to achieve:
Using Boost.Python, I expose a set of classes/functions to Python in the typical BOOST_PYTHON_MODULE(MyPythonModule) macro from C++ that produces MyPythonModule.pyd after compilation. I can now invoke a Python script from C++ and play around with MyPythonModule without any issue (e.g. create objects, call methods and use my registered converters). FYI: the converter I'm referring to is a numpy.ndarray to cv::Mat converter.
This works fine, but when I try to write a standalone Python script that uses MyPythonModule, my converters are not available. I tried to expose the C++ method that performs the converter registration to Python without any luck.
If my explanation isn't clear enough, don't hesitate to ask questions in the comments.
Thanks a lot for your help / suggestions.
I found the problem... The prototype of my C++ function was taking cv::Mat& as argument and the converter was registered for cv::Mat without reference.
That was silly.
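In other words (a minimal sketch, with process as a hypothetical function name): a custom rvalue converter produces a temporary, so it can match by-value and const-reference parameters, but never a non-const reference:

void process(cv::Mat& img);        // TypeError: no converter found
void process(cv::Mat img);         // OK: matches the registered converter
void process(const cv::Mat& img);  // OK as well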

LIBSVM Python ctypes string function pointer segmentation faults

I've been porting a Python package that uses libsvm onto some production servers and ran into a strange segmentation fault which I traced to a ctypes function pointer. I'm trying to determine where the ctypes wrapper failed and if this is a distro specific problem or not.
The system I am running this on is a very clean virtual machine with almost nothing installed:
Solaris 5.11
amd64 pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86
Python 2.7.2
Now for the problem description and how I narrowed it down to ctypes. In libsvm you can specify the print function by passing a void (*print_func)(const char *) pointer into the svm_set_print_string_function function; the default with a NULL pointer is to print to stdout. Now the interesting part: the Python wrapper for libsvm (which works fine on a variety of other systems) creates such a function pointer when asked for quiet mode (no printing), via the following:
from ctypes import CFUNCTYPE, c_char_p

PRINT_STRING_FUN = CFUNCTYPE(None, c_char_p)

def print_null(s):
    return

# excerpt from the wrapper's option parsing
if argv[i] == "-q":
    self.print_func = PRINT_STRING_FUN(print_null)
    libsvm.svm_set_print_string_function(self.print_func)
When I set quiet mode, libsvm accepts the function pointer but hangs after a few seconds when calling svm_train, then seg faults. I tried making a void * argument function pointer and then casting it to a const char * function pointer, with the same results, which means it wasn't the conversion from const char * to a PyStringObject.
Then I finally just wrote a C++ function to set the function pointer to a no-op in the library itself by:
void print_null(const char *) {}

void svm_set_print_null() {
    svm_set_print_string_function(&print_null);
}
which worked as expected with no segmentation faults. This leads me to think that ctypes is failing at some internal point of function pointer conversion. Looking through the ctypes source files hasn't revealed anything obvious to me, though I haven't worked a lot with ctypes explicitly, so it's difficult to narrow down where the bug might be.
I can use my library addition solution for now, but if I want to silently process the returns I would need to actually be able to pass a function pointer into libsvm. Plus it doesn't give me peace of mind about stability if I need to implement such workarounds without knowing the true root cause of the problem.
Has anyone else had problems with libsvm print functions on Solaris, or specifically with ctypes function pointers in Python on Solaris? I couldn't find anything online about either such problem with Solaris. I'm planning on playing around with library calls and making some function-processing libs to find the exact boundaries of failure, but someone else's input might save me a day or two of debug testing.
UPDATE
The problem is reproducible on the 32bit version of Solaris 5.11 as well.

Prototyping with Python code before compiling

I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually.
IIRC, one of Python's original remits was as a prototyping language; however, Python is pretty liberal in allowing functions, functors, and objects to be passed to functions and methods, whereas I suspect the same is not true of, say, C or Fortran.
What should I know about designing functions/classes which I envisage will have to interface with the compiled language? And how many of these potential problems are dealt with by libraries such as ctypes, bgen, SWIG, Boost.Python, Cython or Python SIP?
For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Gaussian, Lorentzian, etc.) as Python functions, which can then be passed to and interpreted by the compiled fitting library. Passing and returning arrays is also essential.
Finally a question that I can really give a valuable answer to :).
I have investigated f2py, boost.python, swig, cython and pyrex for my work (PhD in optical measurement techniques). I used swig extensively, boost.python some, and pyrex and cython a lot. I also used ctypes. This is my breakdown:
Disclaimer: This is my personal experience. I am not involved with any of these projects.
swig:
does not play well with C++. It should, but name-mangling problems in the linking step were a major headache for me on Linux & Mac OS X. If you have C code and want it interfaced to Python, it is a good solution. I wrapped GTS for my needs and basically needed to write a C shared library which I could connect to. I would not recommend it.
Ctypes:
I wrote a libdc1394 (IEEE camera library) wrapper using ctypes and it was a very straightforward experience. You can find the code at https://launchpad.net/pydc1394. It is a lot of work to convert headers to Python code, but then everything works reliably. This is a good way to interface with an external library. Ctypes is also in the stdlib of Python, so everyone can use your code right away. This is also a good way to play around with a new lib in Python quickly. I can recommend it for interfacing with external libs.
Boost.Python: Very enjoyable. If you already have C++ code of your own that you want to use in python, go for this. It is very easy to translate c++ class structures into python class structures this way. I recommend it if you have c++ code that you need in python.
Pyrex/Cython: Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with Cython that I used to do with SWIG or ctypes. It is also the best way if you have Python code that runs too slow. The process is absolutely fantastic: you convert your Python modules into Cython modules, build them, and keep profiling and optimizing as if it were still Python (no change of tools needed). You can then mix in as much (or as little) C code with your Python code. This is by far faster than having to rewrite whole parts of your application in C; you only rewrite the inner loop.
Timings: ctypes has the highest call overhead (~700ns), followed by boost.python (322ns), then directly by swig (290ns). Cython has the lowest call overhead (124ns) and the best feedback where it spends time on (cProfile support!). The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead is. It is therefore easiest and most productive to get python code fast by profiling and using cython.
Summary: For your problem, use Cython ;). I hope this rundown will be useful for some people. I'll gladly answer any remaining question.
Edit: I forgot to mention: for numerical purposes (that is, connecting to NumPy) use Cython; they have support for it (because they basically develop Cython for this purpose). So this should be another +1 for your decision.
I haven't used SWIG or SIP, but I find writing Python wrappers with boost.python to be very powerful and relatively easy to use.
I'm not clear on what your requirements are for passing types between C/C++ and python, but you can do that easily by either exposing a C++ type to python, or by using a generic boost::python::object argument to your C++ API. You can also register converters to automatically convert python types to C++ types and vice versa.
If you plan use boost.python, the tutorial is a good place to start.
I have implemented something somewhat similar to what you need. I have a C++ function that
accepts a python function and an image as arguments, and applies the python function to each pixel in the image.
Image* unary(boost::python::object op, Image& im)
{
    Image* out = new Image(im.width(), im.height(), im.channels());
    for (unsigned int i = 0; i < im.size(); i++)
    {
        (*out)[i] = extract<float>(op(im[i])); // '==' in the original was a typo
    }
    return out;
}
In this case, Image is a C++ object exposed to Python (an image with float pixels), and op is a Python-defined function (or really any Python object with a __call__ attribute). You can then use this function as follows (assuming unary is located in a module called image that also contains Image and a load function):
import image
im = image.load('somefile.tiff')
double_im = image.unary(lambda x: 2.0*x, im)
As for using arrays with boost, I personally haven't done this, but I know the functionality to expose arrays to python using boost is available - this might be helpful.
The best way to plan for an eventual transition to compiled code is to write the performance sensitive portions as a module of simple functions in a functional style (stateless and without side effects), which accept and return basic data types.
This will provide a one-to-one mapping from your Python prototype code to the eventual compiled code, and will let you use ctypes easily and avoid a whole bunch of headaches.
For peak fitting, you'll almost certainly need to use arrays, which will complicate things a little, but is still very doable with ctypes.
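As a sketch of what the compiled side of that can look like (the gaussian function and the libpeaks library name are made up for illustration), the C-callable function takes raw pointers plus an explicit length, which ctypes can fill from NumPy arrays, for example via numpy.ctypeslib.ndpointer argument declarations:

// peaks.cpp -- build with: g++ -shared -fPIC -O2 -o libpeaks.so peaks.cpp
#include <cmath>
#include <cstddef>

// Evaluate a Gaussian peak over an x array into a caller-provided y array.
extern "C" void gaussian(const double* x, double* y, std::size_t n,
                         double amplitude, double mu, double sigma) {
    for (std::size_t i = 0; i < n; ++i) {
        const double t = (x[i] - mu) / sigma;
        y[i] = amplitude * std::exp(-0.5 * t * t);
    }
}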
If you really want to use more complicated data structures, or modify the passed arguments, SWIG or Python's standard C-extension interface will let you do what you want, but with some amount of hassle.
For what you're doing, you may also want to check out NumPy, which might do some of the work you would want to push to C, as well as offering some additional help in moving data back and forth between Python and C.
f2py (part of numpy) is a simpler alternative to SWIG and boost.python for wrapping C/Fortran number-crunching code.
In my experience, there are two easy ways to call into C code from Python code. There are other approaches, all of which are more annoying and/or verbose.
The first and easiest is to compile a bunch of C code as a separate shared library and then call functions in that library using ctypes. Unfortunately, passing anything other than basic data types is non-trivial.
The second easiest way is to write a Python module in C and then call functions in that module. You can pass anything you want to these C functions without having to jump through any hoops. And it's easy to call Python functions or methods from these C functions, as described here: https://docs.python.org/extending/extending.html#calling-python-functions-from-c
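A minimal sketch of that second approach (the apply_twice and demo names are made up for illustration): a C extension function that receives a Python callable and calls it back:

#include <Python.h>

// Apply a Python callable twice to a float argument: f(f(x)).
static PyObject* apply_twice(PyObject* self, PyObject* args) {
    PyObject* func;
    double x;
    if (!PyArg_ParseTuple(args, "Od", &func, &x))
        return NULL;
    PyObject* once = PyObject_CallFunction(func, "d", x);             // f(x)
    if (once == NULL)
        return NULL;                                                  // propagate the Python error
    PyObject* twice = PyObject_CallFunctionObjArgs(func, once, NULL); // f(f(x))
    Py_DECREF(once);
    return twice;
}

static PyMethodDef methods[] = {
    {"apply_twice", apply_twice, METH_VARARGS, "Apply a callable twice."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "demo", NULL, -1, methods
};

PyMODINIT_FUNC PyInit_demo(void) {
    return PyModule_Create(&moduledef);
}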
I don't have enough experience with SWIG to offer intelligent commentary. And while it is possible to do things like pass custom Python objects to C functions through ctypes, or to define new Python classes in C, these things are annoying and verbose and I recommend taking one of the two approaches described above.
Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran.
In C you cannot pass a function as an argument to a function, but you can pass a function pointer, which is just as good as a function.
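A minimal illustration of the C idiom:

#include <stdio.h>

double square(double x) { return x * x; }

// 'f' is a function pointer: the caller decides which function is applied.
double apply(double (*f)(double), double x) { return f(x); }

int main(void) {
    printf("%f\n", apply(square, 3.0));  // prints 9.000000
    return 0;
}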
I don't know how much that would help when you are trying to integrate C and Python code but I just wanted to clear up one misconception.
In addition to the tools above, I can recommend using Pyrex (for creating Python extension modules) or Psyco (as a JIT compiler for Python).
