I have a dictionary and I would like to know whether it is possible to use it as a parameter of a kernel.
For instance, I have the CUDA kernel signature
__global__ void calTab(Tableaux)
Tableaux is a C structure corresponding to
typedef struct
{
float *Tab1;
float *Tab2;
} Tableaux;
In Python, Tableaux corresponds to the dictionary below:
Tableaux={}
Tableaux["Tab1"]=[]
Tableaux["Tab2"]=[]
Is it possible to use the dictionary as the C structure without using a C API?
Thank you in advance
None of what you are proposing is possible. In PyCUDA, you cannot:
1. Pass a dictionary to a kernel
2. Pass a list to a kernel
3. Directly translate a dictionary to a C++ structure in device code
4. Directly translate a list to a C++ linear array in device code
PyCUDA can use Python classes as C++ structures, and it has a numpy-like array for use on the GPU. So points 3 and 4 are possible, but not as you would like to do them. Both techniques are discussed in the documentation, here for gpuarray and here for structures.
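A minimal sketch of the struct approach, assuming the kernel receives a pointer to the struct (i.e. __global__ void calTab(Tableaux*)) rather than the struct by value, and that both arrays hold float32 data; the pointer-packing pattern follows the structures example in the PyCUDA documentation, with the 100-element size being an assumption for the sketch:
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
import pycuda.gpuarray as gpuarray

# Device arrays standing in for Tab1 and Tab2.
tab1 = gpuarray.to_gpu(np.zeros(100, dtype=np.float32))
tab2 = gpuarray.to_gpu(np.zeros(100, dtype=np.float32))

# Pack the two device pointers into a struct-shaped buffer and copy it
# to the GPU; the kernel then receives a Tableaux*.
struct_host = np.array([int(tab1.gpudata), int(tab2.gpudata)], dtype=np.uintp)
struct_gpu = cuda.mem_alloc(struct_host.nbytes)
cuda.memcpy_htod(struct_gpu, struct_host)

# calTab would be obtained via SourceModule(...).get_function("calTab")
# and launched as calTab(struct_gpu, block=(...), grid=(...)).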
Related
Is it possible to pass a Python dict into a function expecting a nlohmann::json (nlohmann/json) object via cppyy? This question must have come up by now, but I wasn't able to find anything on it.
Minimal example to reproduce (without regard to performance/safety, please forgive):
test-json.h
#include <iostream>
#include <nlohmann/json.hpp>
using nlohmann::json;
void print_name_and_age(json j) {
std::cout << j["name"] << "\n"
<< j["age"] << "\n";
}
test-cppyy.py
import cppyy
cppyy.include('test-json.h')
from cppyy.gbl import print_name_and_age
some_dict = {
"name": "alfred",
"age": 25
}
print_name_and_age(some_dict)
runs into
print_name_and_age(some_dict)
NotImplementedError: void ::print_name_and_age(nlohmann::json j) =>
NotImplementedError: could not convert argument 1 (this method cannot (yet) be called)
I would like to be able to pass a python dict into the C++ function, and receive it as a nlohmann::json object. I presume I would need to write some custom converter for this?
Design requirement/background (optional)
I have a reinforcement learning environment class (written in C++) that needs to accept some configuration to initialize it (in its constructor). Everything works fine when passing a nlohmann::json object into the constructor from the C++ side, but I also have a Python wrapper around the class, written with cppyy, that provides similar functionality to the C++ interface.
Up till now, because of the aforementioned issue, I've been forced to accept a const std::map<std::string, float>& in the constructor instead of a nlohmann::json, since that is what a Python dict containing only str -> float mappings is easily converted to by cppyy. But this obviously limits my input JSON files to containing only floats as values (my use case requires strings as keys, but strings, ints, floats and bools as values in the JSON file). I can of course write some pre-processing code to encode my heterogeneous Python dict into a homogeneous str -> float mapping on the Python side (and do the same for C++), but I'd like a cleaner solution, if possible.
Could anyone please help me achieve passing a Python dict into the cppyy-imported C++ function and having it converted into a nlohmann::json object on the C++ side? If this requires forking cppyy to add extra converter code, or is otherwise too much trouble, I presume I would need to fall back to a std::map<std::string, std::any/std::variant> alternative? I haven't worked a lot with std::any/std::variant; would a Python dict to map<string, any> conversion even be possible, and is it the best alternative to a custom converter in terms of performance and clean, elegant code?
Environment:
Python 3.9.13
C++17 (I believe cppyy-2.4.0 doesn't support C++20 yet; I don't have any constraint on the C++ standard)
cppyy==2.3.1
cppyy-backend==1.14.8
cppyy-cling==6.25.3
cppyythonizations==1.2.1
MacOS Monterey, Apple M1
This has been answered at GitHub: Conversion from python dict into nlohmann::json
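One common workaround (a sketch, not necessarily the linked answer verbatim; the wrapper name print_name_and_age_str is hypothetical) is to cross the boundary as a string: dump the dict with Python's json module and parse it with nlohmann::json::parse on the C++ side.
import json
import cppyy

cppyy.include('test-json.h')

# Hypothetical thin C++ wrapper that accepts a string and parses it.
cppyy.cppdef(r'''
void print_name_and_age_str(const std::string& s) {
    print_name_and_age(nlohmann::json::parse(s));
}
''')

from cppyy.gbl import print_name_and_age_str

some_dict = {"name": "alfred", "age": 25}
print_name_and_age_str(json.dumps(some_dict))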
I have a C++ function that returns a std::vector.
Example:
std::vector<std::string> exampleFunction() {
    std::vector<std::string> vec;
    // intermediate code
    return vec;
}
Now I need to call exampleFunction after importing the compiled C++ library, and access the vector it returns.
I tried defining a structure like the one below in Python:
from ctypes import POINTER, Structure, c_char_p

class tempStructure(Structure):
    _fields_ = [("vec", POINTER(c_char_p))]
but this does not give me the full vector; I can only access the first item through it.
How do I access the full array?
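For what it's worth, std::vector has no stable C layout that ctypes can rely on, so one common approach is to export a plain C interface and rebuild the list on the Python side. A minimal sketch, assuming the library additionally exports two hypothetical C functions, extern "C" int example_size(void) and extern "C" const char* example_item(int i):
import ctypes

lib = ctypes.CDLL("./example.so")  # path is an assumption
lib.example_size.restype = ctypes.c_int
lib.example_item.restype = ctypes.c_char_p
lib.example_item.argtypes = [ctypes.c_int]

# Rebuild the vector's contents as a Python list of str.
strings = [lib.example_item(i).decode() for i in range(lib.example_size())]
print(strings)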
What is the best method to pass an int C++ array to a Python program using shared memory? For example, I have an array defined as:
int arr[100];
for (int i = 0; i < 100; i++) {
    arr[i] = i * 2;
}
and in the Python program I would get:
[0, 2, 4, 6, ..., 198]?
Thanks
You can always create a Python module using pybind11 and write a function that exposes the array content.
pybind11 tutorial to get an idea of the complexity:
https://bastian.rieck.me/blog/posts/2018/cmake_cpp_pybind11_hard_mode/
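If you want literal shared memory rather than a binding, here is a minimal sketch of the Python side, assuming the C++ program has published the 100 ints in a POSIX shared-memory segment (created with shm_open); the segment name arr_shm and the layout of 100 native 4-byte ints are assumptions:
from multiprocessing import shared_memory
import array

# Attach to the existing segment rather than creating a new one.
shm = shared_memory.SharedMemory(name="arr_shm", create=False)
ints = array.array("i", bytes(shm.buf[:100 * 4]))  # 100 native ints
print(list(ints))  # [0, 2, 4, ..., 198]
shm.close()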
I'm trying to write a small, modular program in Python that will dynamically load C functions and use them to execute computationally intensive code. In this program I am creating a couple of large matrices that I will be passing back and forth between my Python code and different C functions. I would prefer to pass these matrices by reference to avoid additional computational overhead.
I've tried reading through the Python docs for ctypes, but they don't seem to explain how to do this. I understand, for instance, that I can use byref() or pointer() to pass a pointer from Python to a C function, but how do I pass a pointer from an external C function back to Python? Given that variables are names in Python, is this just done "automatically" (for lack of a better term) when Python receives a value from a C function?
As a concrete example, this is what I'm trying to do (in pseudo-code):
foo = ctypes.CDLL("pathToFoo")
bar = ctypes.CDLL("pathToBar")
# Generate a large matrix by calling a C function.
reallyBigMatrix = foo.generateReallyBigMatrix()
# Pass reallyBigMatrix to another function and perform some operation
# on it. Since the matrix is really big, I would prefer to pass a
# reference to this matrix to my next C function rather than passing
# the matrix by value.
modifiedReallyBigMatrix = bar.modifyReallyBigMatrix(reallyBigMatrix)
Alternatively: I'm using Python and C in conjunction because I need an easy way to dynamically load C functions in my program. I may pass paths to different C files to my Python program so that it executes the same code on different functions. As an example, I may want to run my program two different ways: keep the same generateReallyBigMatrix function in both runs, but use a different modifyReallyBigMatrix function between run 1 and run 2. If there is an easy, cross-platform way to do this in C or C++, I would be happy to implement that solution rather than using ctypes and Python. However, I haven't been able to find a simple, cross-platform solution.
You've mentioned that you are writing all the code, both Python and C, yourself. I suggest not using ctypes for this, as ctypes is best suited to C libraries that cannot be modified.
Instead, write a module in C using the Python C API. It will expose a single function to start with, like this:
static PyObject* generateReallyBigMatrix(PyObject* self, PyObject* args);
Now, instead of trying to return a raw C pointer, you can return any Python object that you like. Good choices here would be to return a NumPy array (using the NumPy C API), or to return a Python "buffer" (from which a NumPy array can be constructed in Python if desired).
Either way, once this function is written in C using the appropriate APIs, your Python code will be simple:
import foo
reallyBigMatrix = foo.generateReallyBigMatrix()
To do it using the NumPy C API, your C code will look like this:
#include <Python.h>
#include <numpy/arrayobject.h>

static PyObject* generateReallyBigMatrix(PyObject* self, PyObject* args)
{
    npy_intp dimension = 100;

    /* Use any dtype here; PyArray_Empty steals the reference to descr,
       so no Py_DECREF is needed. */
    PyArray_Descr* descr = PyArray_DescrFromType(NPY_FLOAT64);
    PyObject* array = PyArray_Empty(1, &dimension, descr, 0 /* C order */);
    if (array == NULL)
        return NULL;

    void* data = PyArray_DATA((PyArrayObject*)array);
    // TODO: populate data
    return array;
}

static PyMethodDef methods[] = {
    {"generateReallyBigMatrix", generateReallyBigMatrix, METH_NOARGS, "doc"},
    {NULL, NULL, 0, NULL} /* Sentinel */
};

static struct PyModuleDef foomodule = {
    PyModuleDef_HEAD_INIT, "foo", NULL, -1, methods
};

PyMODINIT_FUNC PyInit_foo(void)
{
    import_array(); /* enable the NumPy C API */
    return PyModule_Create(&foomodule);
}
Note that the NumPy C API requires a slightly strange initialization ritual. See also Numpy C API: Link several object files
You then compile the code as a shared library called foo.so (no lib prefix).
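For completeness, one way to build it (a sketch using setuptools; the source file name foo.c is an assumption) so that the NumPy headers are found:
# setup.py -- build with: python setup.py build_ext --inplace
from setuptools import setup, Extension
import numpy as np

setup(
    name="foo",
    ext_modules=[
        Extension("foo", sources=["foo.c"],
                  include_dirs=[np.get_include()]),
    ],
)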
I am wrapping a C module with SWIG for Python. Is there any way to turn all Python lists/tuples whose members are all of the same type (the same kind of SWIG object) into C arrays?
Typemaps. What you are most likely looking for is an "in" typemap, which maps Python types to C types. The declaration looks something like this (shown here for a float * argument; substitute whatever C type you need):
%typemap(in) float * {
    /* C code to convert a Python tuple object to a C array */
}
Inside the typemap code you can use the variable $input to reference the PyObject* to convert, and assign your converted C array to $1.
http://docs.python.org/c-api/ has information on the Python/C API, which you'll need to unpack the tuple to get the items and convert them to C.
http://www.swig.org/Doc1.3/Typemaps.html has the SWIG documentation for typemaps.
The documentation can be hard to understand at first, so take a look at some example typemaps in SWIG's share directory; carrays.i in that directory might also serve as a good starting point.