Background
I have some functions written in C++ that require high real-time performance. I want to quickly export these functions as a dynamic link library and expose them to Python, so that I can do some high-level programming.
In these functions, in order to simplify usage, I use PyList_New from <Python.h> to collect some intermediate data. But I ran into some errors.
Code Example
I found that the core problem is that I can't even export a Python object. After compiling the source to a DLL and loading it with ctypes, the result is
OSError: exception: access violation reading 0x0000000000000008
C++ code:
#include <Python.h>
#ifdef _MSC_VER
#define DLL_EXPORT __declspec( dllexport )
#else
#define DLL_EXPORT
#endif
#ifdef __cplusplus
extern "C"{
#endif
DLL_EXPORT PyObject *test3() {
    PyObject* ptr = PyList_New(10);
    return ptr;
}
#ifdef __cplusplus
}
#endif
Python test code:
if __name__ == "__main__":
    import ctypes
    lib = ctypes.cdll.LoadLibrary(LIB_DLL)
    test3 = lib.test3
    test3.argtypes = None
    test3.restype = ctypes.py_object
    print(test3())
Environment Config
CLion with Microsoft Visual Studio 2019 Community; the architecture is amd64.
I know that the right way is to use the recommended approach of wrapping the C++ source into a module with the Python/C API, but it seems that I would have to write a lot of code. Can anyone help?
ctypes is normally for calling "regular" C functions, not Python C API functions, but it can be done. You must use PyDLL to load a library that uses the Python C API: unlike CDLL, it doesn't release the GIL (global interpreter lock), which must be held when calling Python functions. Your code as shown is invalid, however, because it doesn't populate the list it creates (using the OP's code as test.c):
>>> from ctypes import *
>>> lib = PyDLL('./test')
>>> lib.test3.restype=py_object
>>> lib.test3()
[<NULL>, <NULL>, <NULL>, <NULL>, <NULL>, <NULL>, <NULL>, <NULL>, <NULL>, <NULL>]
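For reference, here is a minimal sketch (my illustration, not the only way) of test3 with the slots actually filled in, so the returned list is a valid Python object:
DLL_EXPORT PyObject *test3() {
    PyObject *list = PyList_New(10);
    if (list == NULL)
        return NULL;
    for (Py_ssize_t i = 0; i < 10; i++)
        // PyList_SET_ITEM steals the reference returned by PyLong_FromLong
        PyList_SET_ITEM(list, i, PyLong_FromLong((long)i));
    return list;
}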
Instead, write a C or C++ function normally:
test.cpp
#ifdef _MSC_VER
#define DLL_EXPORT __declspec( dllexport )
#else
#define DLL_EXPORT
#endif
#ifdef __cplusplus
extern "C"{
#endif
DLL_EXPORT int* create(int n) {
    auto p = new int[n];
    for(int i = 0; i < n; ++i)
        p[i] = i;
    return p;
}
DLL_EXPORT void destroy(int* p) {
    delete [] p;
}
#ifdef __cplusplus
}
#endif
test.py
from ctypes import *
lib = CDLL('./test')
lib.create.argtypes = c_int,
lib.create.restype = POINTER(c_int)
lib.destroy.argtypes = POINTER(c_int),
lib.destroy.restype = None
p = lib.create(5)
print(p) # pointer to int
print(p[:5]) # convert to list...pointer doesn't have length so slice.
lib.destroy(p) # free memory
Output:
<__main__.LP_c_long object at 0x000001E094CD9DC0>
[0, 1, 2, 3, 4]
I solved it myself: just switching the build configuration to Release made all the problems go away. (Most likely because a Debug build under MSVC links against the debug Python library, pythonXY_d.lib, which doesn't match a release Python interpreter.)
Related
I have a problem with the functional feature of pybind11 when I use it in a for-loop with OpenMP. I've done some research, and my problem sounds pretty similar to the one in this pull request from 2 years ago; although that PR is closed and the issue seems to be fixed, I still have this issue. A code example I created will hopefully explain my problem better:
b.h
#include <pybind11/pybind11.h>
#include <pybind11/functional.h>
#include <omp.h>
namespace py = pybind11;
class B {
public:
    B(int n, const int& initial_value);
    void map(const std::function<int(int)> &f);
private:
    int n;
    int* elements;
};
b.cpp
#include <pybind11/pybind11.h>
#include <pybind11/functional.h>
#include "b.h"
namespace py = pybind11;
B::B(int n, const int& v)
    : n(n) {
    elements = new int[n];
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        elements[i] = v;
    }
}

void B::map(const std::function<int(int)> &f) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        elements[i] = f(elements[i]);
    }
}

PYBIND11_MODULE(m, handle) {
    handle.doc() = "Example Module";
    py::class_<B>(handle, "B")
        .def(py::init<int, int>())
        .def("map", &B::map)
        ;
}
CMakeLists.txt
cmake_minimum_required(VERSION 3.4...3.18)
project(example)
find_package(OpenMP)
add_subdirectory(pybind11)
pybind11_add_module(m b.cpp)
if(OpenMP_CXX_FOUND)
    target_link_libraries(m PUBLIC OpenMP::OpenMP_CXX)
else()
    message(FATAL_ERROR "Your compiler does not support OpenMP")
endif()
test.py
from build.m import *

def test(i):
    return i * 20

b = B(2, 2)
b.map(test)
I basically have an array where I want to apply a Python function to every element using a for-loop. I know that it is an issue with functional and OpenMP specifically because in other parts of my project I am using OpenMP successfully and functional is also working if I am not using OpenMP.
Edit: It freezes at the map function and has to be terminated. I am using Ubuntu 21.10, Python 3.9, GCC 11.2.0, OpenMP 4.5, and the newest version of the pybind11 repo.
You're likely experiencing a deadlock between OpenMP's scheduler and Python's GIL (Global Interpreter Lock).
I suggest attaching gdb to your process and looking at where the threads are to verify that's really the problem.
IMHO, mixing Python functions and OpenMP like that is asking for trouble. If you want multi-threading of Python functions, you can use multiprocessing.pool.ThreadPool. But unless your functions release the GIL most of the time, you won't benefit from multi-threading.
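If you do want to keep the OpenMP loop, the usual pybind11 pattern for this kind of deadlock (a sketch only, untested against your exact setup) is to release the GIL before entering the parallel region and re-acquire it around each call back into Python. Note that this serializes the calls to f, since only one thread can hold the GIL at a time:
void B::map(const std::function<int(int)> &f) {
    // the calling thread enters map() holding the GIL; release it so the
    // OpenMP worker threads can take turns acquiring it
    py::gil_scoped_release release;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        // every call back into the Python function needs the GIL
        py::gil_scoped_acquire acquire;
        elements[i] = f(elements[i]);
    }
}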
I have C++ code which executes a Python script with the boost_python package. Everything is fine as long as I extract int, string, or other non-array variables from Python. However, I have to extract a numpy::ndarray and convert it to a C++ vector. I tried the following:
main.cpp
#include <iostream>
#include <vector>
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>

using namespace boost::python;

int main()
{
    double t_end = 7;
    try
    {
        Py_Initialize();
        object module = import("__main__");
        object name_space = module.attr("__dict__");
        exec_file("MyModule.py", name_space, name_space);
        object MyFunc = name_space["MyFunc"];
        object result = MyFunc(t_end);
        auto result_array = extract<numpy::ndarray>(result);
        const numpy::ndarray& ret = result_array();
        int input_size = ret.shape(0);
        double* input_ptr = reinterpret_cast<double*>(ret.get_data());
        std::vector<double> v(input_size);
        for (int i = 0; i < input_size; ++i)
            v[i] = *(input_ptr + i);
    }
    catch (const error_already_set&)
    {
        PyErr_Print();
    }
    Py_Finalize();
}
And an example Python script:
MyModule.py
import numpy as np

def MyFunc(t_end):
    result = np.array([2, 3, 1, t_end])
    return result
However, it ends with this error:
read access violation BOOST_NUMPY_ARRAY_API was nullptr
I also tried declaring the numpy::ndarray directly, like numpy::ndarray result_array = extract<numpy::ndarray>(result);, but the error is exactly the same. I've checked that my ndarray is not empty by printing it directly from Python, and it is not; at the Python step everything seems correct. So what is causing the violation, and how do I fix it?
That error occurs since you're using the numpy module without first initializing it.
Notice the beginning of the official tutorial:
Initialise the Python runtime, and the numpy module. Failure to call these results in segmentation errors:
namespace np = boost::python::numpy;
int main(int argc, char **argv)
{
Py_Initialize();
np::initialize();
Your code is lacking the call to np::initialize();.
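For completeness, here is a minimal self-contained sketch showing where the call goes (the zeros array is just an example of my own, not from the question):
#include <iostream>
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>

namespace py = boost::python;
namespace np = boost::python::numpy;

int main()
{
    Py_Initialize();
    np::initialize(); // without this, any ndarray use fails with the nullptr error

    // any numpy usage is now safe
    np::ndarray a = np::zeros(py::make_tuple(4), np::dtype::get_builtin<double>());
    std::cout << py::extract<char const *>(py::str(a)) << std::endl;
}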
I need to read local variables from Python in C/C++. When I try PyEval_GetLocals, I get NULL, even though Python is initialized. The following is a minimal example.
#include <iostream>
#include <Python.h>

int main()
{
    Py_Initialize();
    PyRun_SimpleString("a=5");
    PyObject *locals = PyEval_GetLocals();
    std::cout << locals << std::endl; // prints NULL (prints 0)
    Py_Finalize();
}
In the manual, it says that it returns NULL if no frame is running, but... there's a frame running!
What am I doing wrong?
I'm running this in Debian Jessie.
Turns out the right way to access variables in the scope is:
Py_Initialize();
PyRun_SimpleString("a=5");
PyObject *main = PyImport_AddModule("__main__");
PyObject *globals = PyModule_GetDict(main);
PyObject *a = PyDict_GetItemString(globals, "a");
std::cout << globals << std::endl; // not NULL
Py_Finalize();
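To actually read the value out of that PyObject, here is a small follow-up sketch (it goes before Py_Finalize(); note that PyDict_GetItemString returns a borrowed reference, so no Py_DECREF is needed):
long value = PyLong_AsLong(a); // a holds the Python int 5
if (value == -1 && PyErr_Occurred())
    PyErr_Print();
else
    std::cout << value << std::endl; // prints 5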
I've been trying to write a DLL in C.
InstallHook() sets up the KeyboardProc hook. Calling InstallHook() and then UninstallHook() from Python always returns 0, which I guess is because my callback function KeyboardProc isn't being called.
The following is my C code for the DLL:
#include "stdafx.h"
#include <windows.h>
#include <string.h>
#include <ctype.h>
#include <stdio.h>
#include "ourdll.h"
//#pragma comment(linker, "/SECTION:.SHARED,RWS")
//#pragma data_seg(".SHARED")
HHOOK hKeyboardHook = 0;
int keypresses = 0;
HMODULE hInstance = 0;
//#pragma data_seg()
BOOL WINAPI DllMain (HANDLE hModule, DWORD dwFunction, LPVOID lpNot)
{
    hInstance = hModule; // Edit
    return TRUE;
}

LRESULT CALLBACK KeyboardProc(int hookCode, WPARAM vKeyCode, LPARAM flags)
{
    if (hookCode < 0)
    {
        return CallNextHookEx(hKeyboardHook, hookCode, vKeyCode, flags);
    }
    keypresses++;
    return CallNextHookEx(hKeyboardHook, hookCode, vKeyCode, flags);
}

__declspec(dllexport) void InstallHook(void)
{
    hKeyboardHook = SetWindowsHookEx(WH_KEYBOARD, KeyboardProc, hInstance, 0);
}

__declspec(dllexport) int UninstallHook(void)
{
    UnhookWindowsHookEx(hKeyboardHook);
    hKeyboardHook = 0;
    return keypresses;
}
The Python code to use this is as follows:
>>> from ctypes import *
>>> dll = CDLL('C:\...\OurDLL.dll')
>>> dll.InstallHook()
[Type something at this point]
>>> result = dll.UninstallHook()
>>> result
0
EDIT: I should probably mention that I've also tried out a LowLevelKeyboardHook. I understand that the LowLevel hook is global and will catch all keystrokes, but that just caused my dll.InstallHook() Python code to freeze for a second or two before returning zero.
I am no expert in C, so any help would be greatly appreciated. Thanks.
hKeyboardHook = SetWindowsHookEx(WH_KEYBOARD, KeyboardProc, NULL, 0);
SetWindowsHookEx requires an HMODULE - save the hModule you get in DllMain and pass it here. (You can pass NULL only if the thread id you pass is a thread of your own process.)
One exception to this is the _LL hook types; these don't need an HMODULE param, since these hooks don't inject into the target process - that's why your code using KEYBOARD_LL is 'succeeding'.
As for why it might be blocking when you use KEYBOARD_LL: the docs for LowLevelKeyboardProc mention that the thread that installs the hook (i.e. calls SetWindowsHookEx) must have a message loop, which you might not have in your Python code.
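For reference, a message loop is just a pump like the following (a C sketch; the Python caller would need the equivalent, e.g. through ctypes calls to the same user32 functions):
MSG msg;
// retrieve and dispatch messages until WM_QUIT; while this loop runs,
// Windows can deliver low-level hook callbacks to this thread
while (GetMessage(&msg, NULL, 0, 0) > 0)
{
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}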
Debugging tips: it looks like SetWindowsHookEx is likely returning NULL (with GetLastError() returning a suitable error code); while developing code, using some combination of assert/printf/OutputDebugString as appropriate to check these return values is a good way to ensure that your assumptions are correct and to get some clues as to where things are going wrong.
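For example, InstallHook above with such a check added (a sketch):
__declspec(dllexport) void InstallHook(void)
{
    hKeyboardHook = SetWindowsHookEx(WH_KEYBOARD, KeyboardProc, hInstance, 0);
    if (hKeyboardHook == NULL)
        printf("SetWindowsHookEx failed, GetLastError() = %lu\n", GetLastError());
}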
BTW, one other thing to watch for with KEYBOARD vs KEYBOARD_LL: the KEYBOARD hook gets loaded into the target process - but only if it's the same bitness - so a 32-bit hook only sees keys pressed in other 32-bit processes. OTOH, KEYBOARD_LL is called back in your own process, so you get to see all keys - and you also don't need to deal with the shared data segment (though as far as I know it's also less efficient than a KEYBOARD hook).
I have a bunch of C macros whose operation I need to simulate in Python. I saw some pointers to pygccxml, ctypeslib, etc. Are these the way to go, or is there something better out there?
If and when the C macros change, I would like the Python implementation to be auto-generated rather than manually modified. Hence the question.
my_c_header.h:
#ifdef OS
#define NUM_FLAGS (uint16_t)(3)
#define NUM_BITS (uint16_t)(8)
#else
#define NUM_FLAGS (uint16_t)(6)
#define NUM_BITS (uint16_t)(16)
#endif
#define MAKE_SUB_FLAGS (uint16_t)((1 << NUM_FLAGS) - 1)
#define MAKE_TOTAL_FLAGS(x) (uint16_t)((x & MAKE_SUB_FLAGS) >> NUM_BITS)
/* #defines type 2 */
#ifdef OS
#define DO_SOMETHING(x) os_specific_process(x)
#else
#define DO_SOMETHING(x)
#endif
/* #defines type 3 */
enum
{
    CASE0,
    CASE1,
    CASE2
};
#define MY_CASE_0 ((uint16_t)CASE0)
#define MY_CASE_1 ((uint16_t)CASE1)
#define MY_CASE_2 ((uint16_t)CASE2)
/* End of file <my_c_header.h> */
If you are writing an extension module, use PyModule_AddIntMacro: https://docs.python.org/3/c-api/module.html#c.PyModule_AddIntMacro
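For example, here is a minimal constants-only extension module built around the header above (a sketch; the module name flags is made up):
#include <Python.h>
#include <stdint.h>        /* my_c_header.h relies on uint16_t */
#include "my_c_header.h"

static struct PyModuleDef flags_module = {
    PyModuleDef_HEAD_INIT,
    "flags", /* module name (hypothetical) */
    NULL,    /* no docstring */
    -1,      /* no per-module state */
    NULL     /* no methods, constants only */
};

PyMODINIT_FUNC PyInit_flags(void)
{
    PyObject *m = PyModule_Create(&flags_module);
    if (m == NULL)
        return NULL;
    /* PyModule_AddIntMacro(m, NUM_FLAGS) expands to
       PyModule_AddIntConstant(m, "NUM_FLAGS", NUM_FLAGS),
       so the exposed constant tracks the header automatically */
    if (PyModule_AddIntMacro(m, NUM_FLAGS) < 0 ||
        PyModule_AddIntMacro(m, NUM_BITS) < 0 ||
        PyModule_AddIntMacro(m, MAKE_SUB_FLAGS) < 0) {
        Py_DECREF(m);
        return NULL;
    }
    return m;
}
After building, the values are available from Python as flags.NUM_FLAGS, flags.NUM_BITS, and flags.MAKE_SUB_FLAGS.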