Segfault when import_array not in same translation unit

I'm having problems getting the NumPy C API to properly initialize. I think I've isolated the problem to calling import_array from a different translation unit, but I don't know why this should matter.
Minimal working example:
header1.hpp
#ifndef HEADER1_HPP
#define HEADER1_HPP
#include <Python.h>
#include <numpy/npy_3kcompat.h>
#include <numpy/arrayobject.h>
void initialize();
#endif
file1.cpp
#include "header1.hpp"
void* wrap_import_array()
{
import_array();
return (void*) 1;
}
void initialize()
{
wrap_import_array();
}
file2.cpp
#include "header1.hpp"
#include <iostream>
void* loc_wrap_import_array()
{
import_array();
return (void*) 1;
}
void loc_initialize()
{
loc_wrap_import_array();
}
int main()
{
Py_Initialize();
#ifdef USE_LOC_INIT
loc_initialize();
#else
initialize();
#endif
npy_intp dims[] = {5};
std::cout << "creating descr" << std::endl;
PyArray_Descr* dtype = PyArray_DescrFromType(NPY_FLOAT64);
std::cout << "zeros" << std::endl;
PyArray_Zeros(1, dims, dtype, 0);
std::cout << "cleanup" << std::endl;
return 0;
}
Compiler commands:
g++ file1.cpp file2.cpp -o segissue -lpython3.4m -I/usr/include/python3.4m -DUSE_LOC_INIT
./segissue
# runs fine
g++ file1.cpp file2.cpp -o segissue -lpython3.4m -I/usr/include/python3.4m
./segissue
# segfaults
I've tested this with Clang 3.6.0, GCC 4.9.2, Python 2.7, and Python 3.4 (with a suitably modified wrap_import_array because this is different between Python 2.x and 3.x). The various combinations all give the same result: if I don't call loc_initialize, the program will segfault in the PyArray_DescrFromType call. I have NumPy version 1.8.2. For reference, I'm running this in Ubuntu 15.04.
What baffles me most of all is that this C++ NumPy wrapper appears to get away with calling import_array in a different translation unit.
What am I missing? Why must I call import_array from the same translation unit in order for it to actually take effect? More importantly, how do I get it to work when I call import_array from a different translation unit like the Boost.NumPy wrapper does?

After digging through the NumPy headers, I think I've found a solution:
in numpy/__multiarray_api.h, there's a section dealing with where an internal API buffer should be. For conciseness, here's the relevant snippet:
#if defined(PY_ARRAY_UNIQUE_SYMBOL)
#define PyArray_API PY_ARRAY_UNIQUE_SYMBOL
#endif
#if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY)
extern void **PyArray_API;
#else
#if defined(PY_ARRAY_UNIQUE_SYMBOL)
void **PyArray_API;
#else
static void **PyArray_API=NULL;
#endif
#endif
It looks like this is intended to allow multiple modules to define their own internal API buffer, in which case each module must call its own import_array.
A consistent way to get several translation units to use the same internal API buffer is: in every translation unit, define PY_ARRAY_UNIQUE_SYMBOL to some library-unique name; then, in every translation unit except the one where the import_array wrapper is defined, also define NO_IMPORT or NO_IMPORT_ARRAY. Incidentally, there are similar macros for the ufunc features: PY_UFUNC_UNIQUE_SYMBOL and NO_IMPORT/NO_IMPORT_UFUNC.
The modified working example:
header1.hpp
#ifndef HEADER1_HPP
#define HEADER1_HPP
#ifndef MYLIBRARY_USE_IMPORT
#define NO_IMPORT
#endif
#define PY_ARRAY_UNIQUE_SYMBOL MYLIBRARY_ARRAY_API
#define PY_UFUNC_UNIQUE_SYMBOL MYLIBRARY_UFUNC_API
#include <Python.h>
#include <numpy/npy_3kcompat.h>
#include <numpy/arrayobject.h>
void initialize();
#endif
file1.cpp
#define MYLIBRARY_USE_IMPORT
#include "header1.hpp"
void* wrap_import_array()
{
import_array();
return (void*) 1;
}
void initialize()
{
wrap_import_array();
}
file2.cpp
#include "header1.hpp"
#include <iostream>
int main()
{
Py_Initialize();
initialize();
npy_intp dims[] = {5};
std::cout << "creating descr" << std::endl;
PyArray_Descr* dtype = PyArray_DescrFromType(NPY_FLOAT64);
std::cout << "zeros" << std::endl;
PyArray_Zeros(1, dims, dtype, 0);
std::cout << "cleanup" << std::endl;
return 0;
}
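With this change, the same compiler command as before, now without -DUSE_LOC_INIT, runs cleanly:
g++ file1.cpp file2.cpp -o segissue -lpython3.4m -I/usr/include/python3.4m
./segissue
# runs fine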
I don't know what pitfalls there are with this hack or whether there are better alternatives, but it at least compiles and runs without any segfaults.

Related

How to use future / async in cppyy

I'm trying to use future from the C++ STL via cppyy (a C++-Python binding package).
For example, I can run the following code in C++ (adapted from this answer):
#include <future>
#include <thread>
#include <chrono>
#include <iostream>
using namespace std;
using namespace chrono_literals;
int main () {
promise<int> p;
future<int> f = p.get_future();
thread t([&p]() {
this_thread::sleep_for(10s);
p.set_value(2);
});
auto status = f.wait_for(10ms);
if (status == future_status::ready) {
cout << "task is read" << endl;
} else {
cout << "task is running" << endl;
}
t.join();
return 0;
}
A similar implementation of the above in Python is
import cppyy
cppyy.cppdef(r'''
#include <future>
#include <thread>
#include <chrono>
#include <iostream>
using namespace std;
int test () {
promise<int> p;
future<int> f = p.get_future();
thread t([&p]() {
this_thread::sleep_for(10s);
p.set_value(2);
});
auto status = f.wait_for(10ms);
if (status == future_status::ready) {
cout << "task is read" << endl;
} else {
cout << "task is running" << endl;
}
t.join();
return 0;
}
''')
cppyy.gbl.test()
And the above code yields
IncrementalExecutor::executeFunction: symbol '__emutls_v._ZSt15__once_callable' unresolved while linking symbol '__cf_4'!
IncrementalExecutor::executeFunction: symbol '__emutls_v._ZSt11__once_call' unresolved while linking symbol '__cf_4'!
It looks like it's caused by using future in cppyy.
Any solutions to this?
Clang 9's JIT does not support thread-local storage the way modern g++ implements it. I will check again once the ongoing upgrade to Clang 13 is finished, which may resolve this issue.
Otherwise, cppyy mixes fine with threaded code (e.g. the above example runs fine on MacOS, with Clang as the system compiler). It's just that, as long as the JIT has this limitation, any TLS use needs to sit in compiled library code.
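As a minimal sketch of that workaround (file, header, and function names here are assumptions, not part of the original answer): compile the TLS-using code ahead of time into a shared library, then have cppyy load the binary instead of JITting the source.
import cppyy

# Assumed layout: futuretest.h declares `int test();`, and futuretest.cpp
# (the code from the question) was compiled ahead of time with, e.g.:
#   g++ -shared -fPIC -std=c++14 -pthread futuretest.cpp -o libfuturetest.so
cppyy.include('futuretest.h')     # only the declaration goes through the JIT
cppyy.load_library('futuretest')  # the TLS-using code stays compiled
print(cppyy.gbl.test())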

PyBind11 Using Qt results in ImportError when importing library in Python

I've been struggling to get an example of pybind11 with Qt working. I can import other libraries like VTK fine, but when I include a Qt header, say QString, and create a simple QString object inside one of my functions, the built library fails with an ImportError when imported in Python. I'm not sure how to debug these issues: there is no useful error anywhere that I can see, and the docs don't show a way to debug them. There are no warnings or other issues when building the library.
>>> import pyLib
ImportError: DLL load failed while importing pyLib: The specified module could not be found.
I tried to create a minimal example below. The executable target pyLib2 builds and runs just fine, but the pyLib python library target doesn't work when imported due to this line QString x;. Without it, it works fine:
CMake
cmake_minimum_required (VERSION 3.24)
project (pybindTest)
include_directories(${PROJECT_BINARY_DIR} src)
# set C++ settings
set (CXX_VERSION 20) # sets which C++ standard we are using, e.g. C++20
set (CMAKE_CXX_FLAGS "/EHsc /O2 /favor:INTEL64 /W4 /MP -std:c++${CXX_VERSION}")
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION TRUE)
add_subdirectory (external/pybind11)
add_executable(pyLib2 main.cpp)
pybind11_add_module(pyLib pyLib.cpp)
target_include_directories(pyLib PUBLIC
"C:/.../external/pybind11/include"
)
find_package (Qt5 5.15.2 EXACT COMPONENTS CONFIG REQUIRED Core Widgets SerialPort Network)
target_link_libraries(pyLib PUBLIC
Qt5::Core
Qt5::Widgets
Qt5::SerialPort
)
target_link_libraries(pyLib2 PUBLIC
Qt5::Core
Qt5::Widgets
Qt5::SerialPort
)
pyLib.cpp
#include <pybind11/pybind11.h>
#include <QString>
#include <array>
#include <iostream>
namespace py = pybind11;
float test(float a)
{
QString x; // Commenting this line out works fine
return a * 2.0;
}
void test2()
{
std::cout << "test2!" << std::endl;
}
void init_pyLib(py::module& handle)
{
std::cout << "here!" << std::endl;
}
PYBIND11_MODULE(pyLib, handle)
{
handle.doc() = "test doc";
handle.def("testpy", &test, py::arg("i"));
handle.def("testpy2", &test2);
init_pyLib(handle);
}
main.cpp
#include <QString>
#include <array>
#include <iostream>
float test(float a)
{
QString x;
return a * 2.0;
}
void test2()
{
std::cout << "test2!" << std::endl;
}
void init_pyLib()
{
std::cout << "here!" << std::endl;
}
int main()
{
std::cout << "hello!\n";
QString x;
test(5.0f);
std::cout << "goodbye!\n";
}
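For what it's worth, one likely cause given that the pyLib2 executable runs fine: on Windows with Python 3.8+, the interpreter no longer searches PATH for the DLLs an extension module links against, so the Qt DLLs must be made findable explicitly before the import. A sketch, where the Qt install path is an assumption:
import os
# Assumed location of the Qt 5.15.2 MSVC DLLs; adjust to your installation.
os.add_dll_directory(r"C:\Qt\5.15.2\msvc2019_64\bin")
import pyLib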

How to wrap c++ code calling python as .dll or .so?

I want to wrap my C++ code as .so and .dll files. I know how to wrap C++ code as a dynamic library, but my C++ code calls Python; this is usually called embedding Python.
I wrote a basic, simple example.
python code:
def init_test(env, mode):
print(env)
print(mode)
return 1
c++ code calling python:
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <iostream>
#include <exception>
/**
* #description: run risk performance use Python
* #param {string} env
* #param {string } mode
* #return {*}
*/
extern "C" int init_python_test(char* env, char* mode) {
std::cout << "start" <<std::endl;
if(Py_IsInitialized() == 0){
std::cout << "not init" << std::endl;
}
else{
std::cout << "init already" <<std::endl;
//std::cout << Py_FinalizeEx() <<std::endl;
Py_Finalize();
}
std::cout << "init:"<<Py_IsInitialized() << std::endl;
Py_Initialize();
PyErr_Print();
std::cout <<"second" <<std::endl;
PyRun_SimpleString("import sys");
PyRun_SimpleString("sys.path.append('./')");
std::cout <<"ok" <<std::endl;
//int res;
PyObject *pModule,*pFunc = NULL;
PyObject *pArgs, *pValue = NULL;
pModule = PyImport_ImportModule("mini");//0x7ffff64b9cc0
if(!pModule)
std::cout << "can't open python file" << std::endl;
PyErr_Print();
pFunc = PyObject_GetAttrString(pModule, "init_test");
PyErr_Print();
if(pFunc && PyCallable_Check(pFunc)){
PyErr_Print();
pValue = PyObject_CallObject(pFunc, Py_BuildValue("(ss)", env, mode));
PyErr_Print();
}
Py_FinalizeEx();
return 1;
}
int main(){
char *env = (char*)"prod";
char * mode = (char*)"prod";
init_python_test(env, mode);
std::cout << "ok" <<std::endl;
}
I can run my C++ code properly when I build it with g++ and link it against the Python dynamic library, and I can use g++ to wrap my C++ code as a .so file. But when I use another C++ program and a Python script to test the init_python_test function, a segmentation fault occurs when the code reaches Py_Initialize().
So, how can I resolve this? And did I wrap the C++ code properly with g++? Here is my shell command:
g++ -fPIC -shared -Wall -o libtest.so ./mini_test.cpp -DLINUX -D_GLIBCXX_USE_CXX11_ABI=0 -I /usr/include/python3.8 -L/usr/lib/python3 -L/usr/lib/python3.8 -lpython3.8
Can somebody help me? Thank you!
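One common cause of a crash in Py_Initialize() when the .so is loaded dynamically is that libpython's symbols are not visible globally, so extension modules imported by the embedded interpreter cannot resolve them. A sketch of a test script that preloads libpython with RTLD_GLOBAL before loading libtest.so (library names and paths are assumptions):
import ctypes

# Preload libpython into the global symbol namespace first; without this,
# extension modules loaded by the embedded interpreter may fail to resolve
# Python's C symbols.
ctypes.CDLL("libpython3.8.so", mode=ctypes.RTLD_GLOBAL)

lib = ctypes.CDLL("./libtest.so")
lib.init_python_test.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
lib.init_python_test(b"prod", b"prod")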

Call C function inside a C extension in Python

I've tried to make a C extension for Python. My problem is that the C function I've written the extension for itself calls other C functions, for example functions from pmd.h and usb-1024LS.h. When I try running my script, I get errors like "undefined symbol: hid_init", where hid_init is one of those functions.
I have tried running the same code from a plain C main program, and it works.
How do I call C functions from inside other C functions which have an extension?
Thanks!
My code:
test.py - test script:
import ctypes
import myTest_1024LS
ctypes_findInterface = ctypes.CDLL('/home/oysmith/NetBeansProjects/MCCDAQ/usb1024LS_with_py/myTest_1024LS.so').findInterface
ctypes_findInterface.restype = ctypes.c_void_p
ctypes_findInterface.argtypes = [ctypes.c_void_p]
ctypes_findInterface()
setup.py:
from distutils.core import setup, Extension
setup(name="myTest_1024LS", version="0.0", ext_modules = [Extension("myTest_1024LS", ["myTest_1024LS.c"])])
myTest_1024LS.c:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <ctype.h>
#include <sys/types.h>
#include <asm/types.h>
#include <python2.7/Python.h>
#include "pmd.h"
#include "usb-1024LS.h"
#include "myTest_1024LS.h"
void findInterface(void){
int interface;
hid_return ret;
ret = hid_init();
if (ret != HID_RET_SUCCESS) {
fprintf(stderr, "hid_init failed with return code %d\n", ret);
exit(1);
}
if ((interface = PMD_Find_Interface(&hid, 0, USB1024LS_PID)) >= 0) {
printf("USB 1024LS Device is found! interface = %d\n", interface);
} else if ((interface = PMD_Find_Interface(&hid, 0, USB1024HLS_PID)) >= 0) {
printf("USB 1024HLS Device is found! interface = %d\n", interface);
} else {
fprintf(stderr, "USB 1024LS and USB 1024HLS not found.\n");
exit(1);
}
}
PyDoc_STRVAR(myTest_1024LS__doc__, "myTes_1024LS point evaluation kernel");
PyDoc_STRVAR(findInterface__doc__, "find device");
static PyObject *py_findInterface(PyObject *self, PyObject *args);
static PyMethodDef wrapper_methods[] = {
{"findInterface", py_findInterface, METH_VARARGS, findInterface__doc__},
{NULL, NULL}
};
PyMODINIT_FUNC initmyTest_1024LS(void){
Py_InitModule3("myTest_1024LS", wrapper_methods, myTest_1024LS__doc__);
}
static PyObject *py_findInterface(PyObject *self, PyObject *args){
if(!PyArg_ParseTuple(args, "")){
return NULL;
}
findInterface();
Py_RETURN_NONE;
}
When building C extensions which themselves have to be linked against other shared libraries, you have to tell setup.py which libraries to link against; in this case, at least the library which exports the hid_init() function. See the Python documentation for more details and examples: Building C and C++ Extensions with distutils. The second example there shows the arguments for linking an extra library into the extension module.
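A sketch of what that could look like here; the library names are assumptions (use whichever libraries actually export hid_init() and PMD_Find_Interface()), and you may also need library_dirs:
from distutils.core import setup, Extension

setup(name="myTest_1024LS",
      version="0.0",
      ext_modules=[Extension("myTest_1024LS",
                             ["myTest_1024LS.c"],
                             # assumed names: the libs providing hid_init()
                             # and PMD_Find_Interface()
                             libraries=["hid", "mccusb"])])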
The ctypes "declarations" are wrong: void is not the same as a void pointer (void*). The findInterface() C function has neither arguments nor a return value, which is "declared" as:
ctypes_findInterface.argtypes = []
ctypes_findInterface.restype = None
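Putting both lines into the test script from the question (same .so path):
import ctypes

lib = ctypes.CDLL('/home/oysmith/NetBeansProjects/MCCDAQ/usb1024LS_with_py/myTest_1024LS.so')
lib.findInterface.argtypes = []   # no arguments
lib.findInterface.restype = None  # no return value
lib.findInterface()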

typedef does not work with SWIG (python wrapping C++ code)

I am writing a Python wrapper (SWIG 2.0 + Python 2.7) for C++ code. The C++ code has a typedef which I need to access in the Python wrapper. Unfortunately, I am getting the following error when executing my Python code:
tag = CNInt32(0)
NameError: global name 'CNInt32' is not defined
I looked into SWIG documentation section 5.3.5, which explains size_t as a typedef, but I could not get that working either.
Here is a simpler example that reproduces the error:
C++ header:
#ifndef __EXAMPLE_H__
#define __EXAMPLE_H__
/* File: example.h */
#include <stdio.h>
#if defined(API_EXPORT)
#define APIEXPORT __declspec(dllexport)
#else
#define APIEXPORT __declspec(dllimport)
#endif
typedef int CNInt32;
class APIEXPORT ExampleClass {
public:
ExampleClass();
~ExampleClass();
void printFunction (int value);
void updateInt (CNInt32& var);
};
#endif //__EXAMPLE_H__
C++ Source:
/* File : example.cpp */
#include "example.h"
#include <iostream>
using namespace std;
/* I'm a file containing use of typedef variables */
ExampleClass::ExampleClass() {
}
ExampleClass::~ExampleClass() {
}
void ExampleClass::printFunction (int value) {
cout << "Value = "<< value << endl;
}
void ExampleClass::updateInt(CNInt32& var) {
var = 10;
}
Interface file:
/* File : example.i */
%module example
typedef int CNInt32;
%{
#include "example.h"
%}
%include <windows.i>
%include "example.h"
Python Code:
# file: runme.py
from example import *
# Try to set the values of some typedef variables
exampleObj = ExampleClass()
exampleObj.printFunction (20)
var = CNInt32(5)
exampleObj.updateInt (var)
I got it working; I had to use typemaps in the interface file, see below. Thanks a lot to David Froger on the SWIG mailing list, and to doctorlove for the initial hints.
%include typemaps.i
%apply CNInt32& INOUT { CNInt32& };
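For context, this is how the complete interface file looks with those lines folded in (a sketch assembled from the original example.i):
/* File : example.i */
%module example
%{
#include "example.h"
%}
%include typemaps.i
typedef int CNInt32;
/* treat CNInt32& parameters as input/output values on the Python side */
%apply CNInt32& INOUT { CNInt32& };
%include <windows.i>
%include "example.h"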
And then in the Python file:
var = 5 # Note: old code problematic line: var = CNInt32(5)
print "Python value = ",var
var = exampleObj.updateInt (var) # Note: 1. updated values returned automatically by wrapper function.
# 2. Multiple pass by reference also work.
# 3. It also works if your c++ function is returning some value.
print "Python Updated value var = ",var
