What is the best method to pass an int C++ array to a Python program using shared memory? For example, I have an array defined as:
int arr[100];
for (int i = 0; i < 100; i++) {
    arr[i] = i * 2;
}
and in the Python program I would get:
[0, 2, 4, 6, ..., 198]?
Thanks
You can always create a Python module using pybind11 and write a function that exposes the array contents.
A pybind11 tutorial to get an idea of the complexity:
https://bastian.rieck.me/blog/posts/2018/cmake_cpp_pybind11_hard_mode/
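As a minimal sketch of that approach (the module and function names here are illustrative, and note that this copies the data into Python rather than sharing memory):
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <vector>

// Illustrative: copy the array contents into a std::vector, which
// pybind11/stl.h automatically converts to a Python list on return.
std::vector<int> get_array() {
    std::vector<int> v(100);
    for (int i = 0; i < 100; ++i)
        v[i] = i * 2;
    return v;
}

PYBIND11_MODULE(example, m) {
    m.def("get_array", &get_array, "Return the array contents as a list");
}
On the Python side this is then just import example; example.get_array().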
Related
Is it possible to pass a Python dict into a function expecting a nlohmann::json (nlohmann/json) object via cppyy? This question must have come up by now, but I wasn't able to find anything on it.
Minimal example to reproduce (written without regard to performance/safety, please forgive):
test-json.h
#include <iostream>
#include <nlohmann/json.hpp>

using nlohmann::json;

void print_name_and_age(json j) {
    std::cout << j["name"] << "\n"
              << j["age"] << "\n";
}
test-cppyy.py
import cppyy
cppyy.include('test-json.h')
from cppyy.gbl import print_name_and_age
some_dict = {
    "name": "alfred",
    "age": 25
}
print_name_and_age(some_dict)
runs into
print_name_and_age(some_dict)
NotImplementedError: void ::print_name_and_age(nlohmann::json j) =>
NotImplementedError: could not convert argument 1 (this method cannot (yet) be called)
I would like to be able to pass a Python dict into the C++ function, and receive it as a nlohmann::json object. I presume I would need to write some custom converter for this?
Design requirement/background (optional)
I have a reinforcement learning environment class (written in C++) that needs to accept some configuration to initialize it (in its constructor). Everything works fine when passing a nlohmann::json object into the constructor from within the C++ domain, but I also have a Python wrapper around the class, written with cppyy, that provides similar functionality to the C++ interface.
Up till now, because of the aforementioned issue, I've been forced to accept a const std::map<std::string, float>& in the constructor instead of a nlohmann::json, since that is what cppyy easily converts a Python dict containing only str -> float mappings into. But this obviously limits my input JSON files to containing only floats as values (my use case requires strings as keys, but strings, ints, floats and bools as values in the JSON file). I can of course write some pre-processing code to encode my heterogeneous Python dict into a homogeneous str -> float mapping on the Python side (and decode it on the C++ side), but I'd like a cleaner solution, if possible.
Could anyone please help me achieve passing a Python dict into the cppyy-imported C++ function and having it converted into a nlohmann::json object on the C++ side? If this requires forking cppyy to add extra converter code, or is otherwise too much trouble, I presume I would need to use a std::map<std::string, std::any/std::variant> alternative? I haven't worked a lot with std::any/std::variant, so I'd like to ask whether that would even be possible (Python dict to map<string, any>), and whether it is the best alternative to a custom converter in terms of performance and clean, elegant code.
Environment:
Python 3.9.13
C++17 (I believe cppyy-2.4.0 doesn't support C++20 yet; I don't have any constraint on the C++ standard)
cppyy==2.3.1
cppyy-backend==1.14.8
cppyy-cling==6.25.3
cppyythonizations==1.2.1
MacOS Monterey, Apple M1
This has been answered on GitHub: Conversion from python dict into nlohmann::json
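One common workaround, independent of whatever the linked answer does, is to serialize the dict on the Python side with json.dumps(some_dict) and accept a string on the C++ side, since cppyy converts a Python str to std::string out of the box. A sketch, building on the test-json.h example above:
#include <string>
#include <nlohmann/json.hpp>

// Illustrative overload: take the JSON as a string, parse it,
// and forward to the original function.
void print_name_and_age(const std::string& j_str) {
    print_name_and_age(nlohmann::json::parse(j_str));
}
This costs a serialize/parse round trip, but avoids any custom converter code.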
I have a C++ function that returns a std::vector<std::string>.
Example:
std::vector<std::string> exampleFunction() {
    std::vector<std::string> vec;
    // intermediate code
    return vec;
}
Now I need to call exampleFunction after importing the compiled C++ library, and I need to access the vector it returns.
I tried defining a structure like the one below in Python:
class tempStructure(Structure):
    _fields_ = [("vec", POINTER(c_char_p))]
but this does not give me the full vector; I'm only able to access the first item of the vector this way.
How do I access the full array?
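ctypes cannot interpret a returned std::vector<std::string> by itself, so one common approach (a sketch, not the only option) is to add an extern "C" wrapper that exposes the strings as a plain array of C strings plus an element count:
#include <string>
#include <vector>

std::vector<std::string> exampleFunction(); // the function above

// Illustrative wrapper for ctypes: returns an array of C string pointers
// and writes the element count through the out-parameter. The static
// storage keeps the strings alive after the call returns (not thread-safe).
extern "C" const char** exampleFunction_c(int* count) {
    static std::vector<std::string> vec;
    static std::vector<const char*> ptrs;
    vec = exampleFunction();
    ptrs.clear();
    for (const auto& s : vec)
        ptrs.push_back(s.c_str());
    *count = static_cast<int>(ptrs.size());
    return ptrs.data();
}
On the Python side, set exampleFunction_c.restype = POINTER(c_char_p), pass byref(c_int()) for the count, and index the result from 0 up to count.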
I am trying to pass a numpy array to C++ using Boost.Python.
The C++ code is:
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>

void f(boost::python::numpy::ndarray& x) {}

BOOST_PYTHON_MODULE(libtest)
{
    boost::python::def("f", f);
}
The Python code is:
import libtest
import numpy
x = numpy.array(range(3))
libtest.f(x)
This gives a segmentation fault, both when passing the variable by value and when passing it by reference.
I have found a way to do what I needed. However, the purpose of using Boost.Python was to be able to simply call the functions from the module without having to write a wrapper on the Python side, as is the case with ctypes, where certain types or return values have to be dealt with.
Is it possible to simply pass a reference to a numpy array?
Thanks!
I had the same problem and apparently solved it by putting
boost::python::numpy::initialize();
at the top of my BOOST_PYTHON_MODULE definition.
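Applied to the module above, the fix would look like this (nothing else changed):
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>

void f(boost::python::numpy::ndarray& x) {}

BOOST_PYTHON_MODULE(libtest)
{
    // Initialize the Boost numpy bindings before any ndarray is used;
    // skipping this call is what causes the segmentation fault.
    boost::python::numpy::initialize();
    boost::python::def("f", f);
}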
I'm trying to write a small, modular program in Python that will dynamically load C functions and use them to execute computationally intensive code. In this program I am creating a couple of large matrices that I will be passing back and forth between my Python code and different C functions. I would prefer to pass these matrices by reference to avoid additional computational overhead.
I've tried reading through the Python docs for ctypes, but they don't seem to explain how to do this. I understand, for instance, that I can use byref() or pointer() to pass a pointer from Python to a C function, but how do I pass a pointer from an external C function back to Python? Given that variables are names in Python, is this just done "automatically" (for lack of a better term) when Python receives a value from a C function?
As a concrete example, this is what I'm trying to do (in pseudo-code):
foo = ctypes.CDLL("pathToFoo")
bar = ctypes.CDLL("pathToBar")
# Generate a large matrix by calling a C function.
reallyBigMatrix = foo.generateReallyBigMatrix()
# Pass reallyBigMatrix to another function and perform some operation
# on it. Since the matrix is really big, I would prefer to pass a
# reference to this matrix to my next C function rather than passing
# the matrix by value.
modifiedReallyBigMatrix = bar.modifyReallyBigMatrix(reallyBigMatrix)
Alternatively, I'm using Python and C in conjunction because I need an easy way to dynamically load C functions in my program. I may pass paths to different C files to my Python program so that it will execute the same code on different functions. As an example, I may want to run my program in two different ways: keep the same generateReallyBigMatrix function in both runs, but use a different modifyReallyBigMatrix function between run 1 and run 2. If there is an easy, cross-platform way to do this in C or C++ I would be happy to implement that solution rather than using ctypes and Python. However, I haven't been able to find a simple, cross-platform solution.
You've mentioned that you are writing all the code, both Python and C, yourself. I suggest not using ctypes for this, as ctypes is best suited for using C libraries that cannot be modified.
Instead, write a module in C using the Python C API. It will expose a single function to start with, like this:
PyObject* generateReallyBigMatrix(PyObject* self, PyObject* args);
Now, instead of trying to return a raw C pointer, you can return any Python object that you like. Good choices here would be to return a NumPy array (using the NumPy C API), or to return a Python "buffer" (from which a NumPy array can be constructed in Python if desired).
Either way, once this function is written in C using the appropriate APIs, your Python code will be simple:
import foo
reallyBigMatrix = foo.generateReallyBigMatrix()
To do it using the NumPy C API, your C code will look like this:
PyObject* generateReallyBigMatrix(PyObject* self, PyObject* args)
{
    npy_intp dimension = 100;
    // Built-in float64 dtype; substitute any dtype you need.
    PyArray_Descr* descr = PyArray_DescrFromType(NPY_FLOAT64);
    // Note: PyArray_Empty steals the reference to descr, so no Py_DECREF here.
    PyObject* array = PyArray_Empty(1, &dimension, descr, 0 /* not Fortran order */);
    if (array == NULL)
        return NULL;
    void* data = PyArray_DATA((PyArrayObject*)array);
    // TODO: populate data
    return array;
}
static PyMethodDef methods[] = {
    {"generateReallyBigMatrix", generateReallyBigMatrix, METH_VARARGS, "doc"},
    {NULL, NULL, 0, NULL} /* Sentinel */
};

PyMODINIT_FUNC initfoo(void)
{
    import_array(); // enable NumPy C API
    Py_InitModule("foo", methods); // Python 2 style init; see the Python 3 note below
}
Note that the NumPy C API requires a slightly strange initialization ritual. See also Numpy C API: Link several object files
You then compile the code as a shared library called foo.so (no lib prefix).
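For reference, Py_InitModule above is the Python 2 initialization style. On Python 3 the module setup would look roughly like this (a sketch, reusing the methods table above):
static struct PyModuleDef foomodule = {
    PyModuleDef_HEAD_INIT,
    "foo",  /* module name */
    NULL,   /* module docstring */
    -1,     /* per-interpreter state size */
    methods
};

PyMODINIT_FUNC PyInit_foo(void)
{
    import_array(); // on Python 3 this macro returns NULL on failure
    return PyModule_Create(&foomodule);
}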
I am using SWIG to generate Python bindings for a library (let's call it Spam) that is written in C++. The library internally defines its own vector datatype, the Spam::Vector class.
Consider the following functions to be wrapped:
void ham(Spam::Vector &vec_in, Spam::Vector &vec_out);
void eggs(Spam::Vector &vec_in, double arg2, double result);
I would like to be able to call these functions using Python lists AND NumPy arrays as inputs, instead of having to create a Spam::Vector object in Python and then populate it using the associated C++ methods, which is very unpythonic.
How would I go about writing the SWIG typemap to achieve this? Also, is there a way to incorporate/leverage numpy.i for this purpose?
The right way to do this is with a custom typemap. Precisely what this will look like depends a lot on the type Spam::Vector itself. In general, though, you can do this with something like:
%typemap(in) Spam::Vector& {
    // Maybe you'd rather check for iterable here, with this check after numpy?
    if (PyList_Check($input)) {
        $1 = ... // Code to iterate over a list and prepare a Spam::Vector
    }
    else if (PyType_IsSubtype($input->ob_type, NumpyType)) {
        $1 = ... // Code to convert from numpy input
    }
    else {
        // code to raise an error
    }
}
There are various hacks that might be possible in other more specific circumstances, but this is the general solution.
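To make that concrete, a filled-in version could look like the sketch below. It assumes, hypothetically, that Spam::Vector is default-constructible and has a push_back(double) method (adjust for the real interface), and it uses PySequence_Check, which accepts both Python lists and 1-D NumPy arrays, in place of an explicit NumPy type check:
%typemap(in) Spam::Vector& (Spam::Vector tmp) {
    if (PySequence_Check($input)) {
        Py_ssize_t n = PySequence_Length($input);
        for (Py_ssize_t i = 0; i < n; ++i) {
            PyObject* item = PySequence_GetItem($input, i);
            tmp.push_back(PyFloat_AsDouble(item)); // assumes numeric elements
            Py_DECREF(item);
        }
        $1 = &tmp; // the temporary lives for the duration of the call
    } else {
        SWIG_exception_fail(SWIG_TypeError, "expected a list or NumPy array");
    }
}
As for numpy.i: its typemaps are mainly geared toward raw pointer/length signatures such as (double* IN_ARRAY1, int DIM1), so for a custom vector class a hand-written typemap like the one above is usually the more direct route.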