Importing a boost::python module

I built a DLL in VS2010 with boost::python to export some functions to a Python module:
myDLL.cpp:
#include <boost/python.hpp>
using namespace boost::python;

std::string greet() { return "hello, world"; }
int square(int number) { return number * number; }

BOOST_PYTHON_MODULE(getting_started1)
{
    // Add regular functions to the module.
    def("greet", greet);
    def("square", square);
}
Up to here, everything compiles just fine. I then get the myDLL.dll and myDLL.lib files in c:\myDLL\Debug.
According to boost doc (http://wiki.python.org/moin/boost.python/SimpleExample), I need to add this to PYTHONPATH, so I added c:\myDLL\Debug to it:
PYTHONPATH:
C:\Python27;c:\myDLL\Debug;
then, from my .py file, I try to import it:
import getting_started1
print getting_started1.greet()
number = 11
print number, '*', number, '=', getting_started1.square(number)
I have also tried from myDLL import getting_started1, and from getting_started1 import *, and all possible combinations of the sort.
Can anyone tell me how I am supposed to import my module? Thanks
EDIT:
According to cgohlke, there should be a getting_started1.pyd somewhere in my PYTHONPATH when I compile in VS? This file is nonexistent... Do I have to set something different in VS2010? I have a default Win32 DLL project.
But the boost doc says "If we build this shared library and put it on our PYTHONPATH"; isn't a shared library on Windows a DLL? Ergo, the DLL should be in the PYTHONPATH?

The standard, portable way to build Python extensions is via distutils. However, Visual Studio 2010 is not a supported compiler for Python 2.7. The following setup.py works for me with Visual Studio 2008 and boost_1_48_0. The build command is python setup.py build_ext --inplace.
# setup.py
from distutils.core import setup
from distutils.extension import Extension

setup(name="getting_started1",
      ext_modules=[
          Extension("getting_started1", ["getting_started1.cpp"],
                    include_dirs=['boost_1_48_0'],
                    libraries=['boost_python-vc90-mt-1_48'],
                    extra_compile_args=['/EHsc', '/FD', '/DBOOST_ALL_DYN_LINK=1'])
      ])
For your Visual Studio 2010 project, try to change the linker output file to getting_started1.pyd instead of myDLL.dll.
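As a quick way to see why the file name matters: CPython only searches for extension modules whose filename is the module name followed by one of a fixed set of suffixes, and on Windows that set includes .pyd but not .dll, so myDLL.dll can never satisfy import getting_started1. A small illustrative sketch (not part of the original answer) that prints the suffix list for the running interpreter:

```python
import importlib.machinery

# CPython imports an extension module only if its filename is the module
# name plus one of these suffixes ('.pyd' on Windows builds); a file named
# 'myDLL.dll' therefore never matches 'import getting_started1'.
suffixes = importlib.machinery.EXTENSION_SUFFIXES
print(suffixes)
```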

I managed to get it working only in the Release configuration, not in Debug.
From the project properties, on the General tab, change Target Extension to .pyd.
The project should indeed be a DLL, as you have it.
In the Python script you need to specify the location of the dll as in this example:
import sys
sys.path.append("d:\\TheProjectl\\bin\\Release")
import getting_started  # DLL name, without the extension

Related

Error importing c module compiled with Cython into python module on Mac

I have a Cython module compiled from pyx files to c files that I am trying to import and use in a python module. I'm running python 3.6 on a Mac. When I run gcc -v the output is:
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/usr/include/c++/4.2.1
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.7.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
Running python setup.py build and python setup.py install gives no errors, and the .so and .c files for the corresponding files appear in the right directory, which is on the path.
When I try to import the module, I get an error in the init file, from a line that tries to import another submodule:
from . import subModule
I've tried updating python and Cython, and I've made sure that gcc is in user/lib.
This is my setup.py file:
from Cython.Build import cythonize

setupArgs = dict(
    name="module",
    version="1.1.0",
    description=description,
    url="https://github.com/XXXX/XXXXX/module",
    author="CTcue",
    author_email="info#XXXXXX.com",
    ext_modules=cythonize(["module/*.pyx"]),
)

# Use distutils setup
from distutils.core import setup
setup(**setupArgs)
This is the error message:
File "module/subModule.pyx", line 4, in init module.subModule
ModuleNotFoundError: No module named 'thirdModule'
The thirdModule in question has a .c file and a .so file that correspond to the .pyx file, and as far as I can tell everything is order there.
module's __init__.py:
__title__ = 'Pseudonomizer'
__version__ = '1.0.2'
__author__ = 'CTcue'
__license__ = 'Proprietary'
__copyright__ = 'Copyright 2016 CTcue'
from . import subModule
subModule.py:
import thirdModule
thirdModule.doSomething()
thirdModule.py:
import re
from . import anotherModule

def doSomething():
    # code that does something
Edit: In an attempt to see if the compiler is at fault, I tried to manually compile the .c file of thirdModule with "gcc thirdModule", and got the following error:
Undefined symbols for architecture x86_64:
This seems to suggest that the issue is compiler-related, but I still haven't found the solution.
Any help would be much appreciated.
It turns out @ead was right: the problem was that the module used implicit relative imports, which are no longer allowed in Python 3.
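To make that concrete, here is a small self-contained sketch (package and module names are hypothetical) reproducing the situation: in Python 3, import thirdModule inside a package is treated as an absolute import and fails, while the explicit relative form works:

```python
import os
import sys
import tempfile

# Build a throwaway package on disk: mypkg/{__init__,thirdModule,subModule}.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "thirdModule.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(pkg, "subModule.py"), "w") as f:
    # The fix: 'from . import thirdModule' instead of 'import thirdModule',
    # which Python 3 would look up as a top-level module and fail to find.
    f.write("from . import thirdModule\nRESULT = thirdModule.VALUE\n")

sys.path.insert(0, root)
from mypkg import subModule
print(subModule.RESULT)  # 42
```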

How do I setup a CMakeLists.txt file to get xtensor-python sample code up and running

I'm trying to use xtensor-python example found here.
I have xtensor-python, pybind11, and xtensor installed and also created a CMakeLists.txt.
From /build I ran:
$ cmake ..
$ make
and it builds without errors.
My CMakeLists.txt looks like this:
cmake_minimum_required(VERSION 3.15)
project(P3)
find_package(xtensor-python REQUIRED)
find_package(pybind11 REQUIRED)
find_package(xtensor REQUIRED)
My example.cpp file.
#include <numeric>                    // Standard library import for std::accumulate
#include "pybind11/pybind11.h"        // Pybind11 import to define Python bindings
#include "xtensor/xmath.hpp"          // xtensor import for the C++ universal functions
#define FORCE_IMPORT_ARRAY            // numpy C api loading
#include "xtensor-python/pyarray.hpp" // Numpy bindings

double sum_of_sines(xt::pyarray<double>& m)
{
    auto sines = xt::sin(m);  // sines does not actually hold values.
    return std::accumulate(sines.cbegin(), sines.cend(), 0.0);
}

PYBIND11_MODULE(ex3, m)
{
    xt::import_numpy();
    m.doc() = "Test module for xtensor python bindings";
    m.def("sum_of_sines", sum_of_sines, "Sum the sines of the input values");
}
My python file.
import numpy as np
import example as ext

a = np.arange(15).reshape(3, 5)
s = ext.sum_of_sines(a)
s
But my python file can't import my example.cpp file.
File "examplepyth.py", line 2, in <module>
import example as ext
ImportError: No module named 'example'
I am new to cmake. I would like to know how to set this project up properly with CMakeLists.txt
The recommended way is to build and install with a setup.py file instead of cmake. You can use the cookie-cutter to get the boilerplate generated for you.
Hey, I am not sure about xtensor-python since I have not used it, but I can give you some pointers for building pybind11 with CMake within an Anaconda environment. Your CMakeLists.txt looks a bit incomplete. For me the following setup works:
In my Anaconda-shell I use the following commands:
cmake -S <folder where CMakeLists.txt is> -B <folder where CMakeLists.txt is>\build -G"Visual Studio 15 2017 Win64"
which puts all the linking into the subfolder build, so the actual building can be done via
cmake --build build
The necessary CMakeLists.txt looks something like below. The created library TEST is then located in the subfolder build\Debug.
# minimum version of cmake
cmake_minimum_required(VERSION 2.8.12)

# setup project
project(TEST)

# load the libraries
find_package(pybind11 REQUIRED)
set(EXTERNAL_LIBRARIES_ROOT_PATH <Filepath where my external libraries are at>)
set(EIGEN3_INCLUDE_DIR ${EXTERNAL_LIBRARIES_ROOT_PATH}/eigen-eigen-c753b80c5aa6)

# get all the files in the folder
file(GLOB SOURCES
    ${CMAKE_CURRENT_SOURCE_DIR}/*.h
    ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp
)

# include the directories
include_directories(${PYTHON_INCLUDE_DIRS} ${pybind11_INCLUDE_DIRS} ${EIGEN3_INCLUDE_DIR})

pybind11_add_module(TEST MODULE ${SOURCES})

# in some cases you need to link the libraries
# target_link_libraries(TEST PUBLIC ${OpenCV_LIBRARIES} ${Boost_LIBRARIES})
If you want a working minimal example for which I use exactly this CMakeLists.txt file, it happens to be in another question I posted here on Stack Overflow: pybind11 how to use custom type caster for simple example class.
Hope this helps as a first starting point. (I left EIGEN3 in to give you an idea of how it is done with a header-only library; for actual libraries like OpenCV you need the target_link_libraries command in addition.)

Writing a python wrapper for c++ code

I modified a C++ code (Kraskov_v1.C) and I now wish to call it from Python.
I am able to convert the C++ code (Kraskov_v1.C) into a .so file and integrate it into my Python library. However, when I try to import the library, it throws an error: "undefined symbol: _Z8mir_xnynPPdiiiiS_S_S_".
mir_xn_yn is a function (written in another C++ file, namely miutils) that my Kraskov_v1 code calls. I included the header file
#include "miutils.h"
in my file containing Kraskov_v1.
Here is the setup.py file I wrote to build and install this package.
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy.distutils.misc_util

setup(name='Kraskov_v1',
      version='0.1.0',
      ext_modules=[Extension('_Kraskov_v1',
                             sources=["Kraskov_v1.i", "Kraskov_v1.C"],
                             include_dirs=['src'])
      ])
Can someone tell me whats wrong? I am new to python and c++ and would appreciate some help.
The Extension needs a list of libraries to link with after compiling.
Missing symbols mean that a required library is not linked into the shared object (.so), so definitions from that library are not available.
setup(name='Kraskov_v1',
      version='0.1.0',
      ext_modules=[Extension('_Kraskov_v1',
                             sources=["Kraskov_v1.i", "Kraskov_v1.C"],
                             include_dirs=['src'],
                             libraries=['kraskov', <..>])
      ])

Cython dynamic library linking

I'm actually trying to link an existing C library to my Cython program.
I have access to the entrypoint header (.h) of the library with all functions declared as:
EXPORT_API int _stdcall LibFunction();
I suppose the EXPORT_API is used to create the dll with __declspec(dllexport)...
I also have access to the .lib and .dll files.
I've tried to use this function with Cython's usual cdef extern from:
cdef extern from "include\\entrypoint.h":
    int LibFunction()

def c_LibFunction():
    LibFunction()
And I'm using the following setup.py
from setuptools import setup, Extension
from Cython.Distutils import build_ext

NAME = 'testlib'
REQUIRES = ['cython']
SRC_DIR = 'testlib'
PACKAGES = [SRC_DIR]
INCLUDE_DIR = 'testlib\include'
LIB_DIR = 'testlib\lib'

ext = Extension(SRC_DIR + '.wrapped',
                [SRC_DIR + '/wrapped.pyx'],
                include_dirs=[INCLUDE_DIR],
                library_dirs=[LIB_DIR],
                libraries=['cfunc', 'MyLib'])

if __name__ == "__main__":
    setup(
        install_requires=REQUIRES,
        packages=PACKAGES,
        name=NAME,
        ext_modules=[ext],
        cmdclass={"build_ext": build_ext}
    )
But when I compile my Cython code with python setup.py build_ext, I get an unresolved external reference:
error LNK2001: unresolved external symbol __imp_LibFunction
As I found in other threads, it seems to be a question of static versus dynamic library linking.
I think it comes from the setuptools compiling options, I tried to investigate using distutils documentation and Cython documentation.
The thing is, I also tried to make my own C library (cfunc.lib, a static one) and I managed to use functions from it in the same way I described above.
I also used DUMPBIN on MyLib.lib and I found the symbol int __cdecl LibFunction(void), and as expected, the __imp_ prefix is not in the symbol.
If someone has an idea of what's going on, why it's happening, and how I can solve my problem, it would be really helpful!
I finally found the solution, so I'm posting it in case someone needs help in the future!
I work on Windows using Visual Studio to compile my code.
Even though I created my cfunc project as a Visual C++ project, it compiled as a C project rather than a C++ project (it only has a .c and a .h file), so it worked by default.
My entrypoint.h only contains C-style function declarations, but the DLL is compiled as a C++ project; that's why it couldn't work: the name mangling was wrong.
So I just added language = 'c++' in my setup.py.

Compiling an optional cython extension only when possible in setup.py

I have a python module fully implemented in python. (For portability reasons.)
A small part of the implementation has been duplicated in a Cython module, to improve performance where possible.
I know how to install the .c modules created by cython with distutils. However if a machine has no compiler installed, I suspect the setup will fail even though the module is still usable in pure python mode.
Is there a way to compile the .c module if possible but fail gracefully and install without it if compiling is not possible?
I guess you will have to make some modifications both in your setup.py and in the __init__.py file of your module.
Let's say the name of your package will be "module", and you have a functionality, sub, for which you have pure Python code in the sub subfolder and the equivalent C code in the c_sub subfolder.
For example, in your setup.py:
import logging
from setuptools import setup
from setuptools.extension import Extension
from setuptools.command.build_ext import build_ext
from distutils.errors import CCompilerError, DistutilsExecError, DistutilsPlatformError

logging.basicConfig()
log = logging.getLogger(__file__)

ext_errors = (CCompilerError, DistutilsExecError, DistutilsPlatformError, IOError, SystemExit)

setup_args = {'name': 'module', 'license': 'BSD', 'author': 'xxx',
              'packages': ['module', 'module.sub', 'module.c_sub'],
              'cmdclass': {'build_ext': build_ext}}

ext_modules = [Extension("module.c_sub._sub", ["module/c_sub/_sub.c"])]

try:
    # try building with the C code:
    setup(ext_modules=ext_modules, **setup_args)
except ext_errors as ex:
    log.warn(ex)
    log.warn("The C extension could not be compiled")

    ## Retry to install the module without C extensions:
    # Remove any previously defined build_ext command class.
    if 'build_ext' in setup_args['cmdclass']:
        del setup_args['cmdclass']['build_ext']

    # If this new 'setup' call doesn't fail, the module
    # will be successfully installed, without the C extension:
    setup(**setup_args)
    log.info("Plain-Python installation succeeded.")
Now you will need to include something like this in your __init__.py file (or any place that is relevant in your case):
try:
    from .c_sub import *
except ImportError:
    from .sub import *
In this way the C version will be used if it was built; otherwise the plain Python version is used. It assumes that sub and c_sub provide the same API.
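The import-time half of the pattern can be demonstrated standalone; the module name c_sub and the function below are made up for illustration:

```python
# Prefer the compiled implementation when present, else use pure Python.
try:
    from c_sub import fib  # hypothetical C extension; absent here
except ImportError:
    def fib(n):
        # Pure-Python fallback exposing the same API as the C version.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

print(fib(10))  # 55
```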
You can find an example of setup file doing this way in the Shapely package. Actually most of the code I posted was copied (the construct_build_ext function) or adapted (lines after) from this file.
The Extension class has the parameter optional in its constructor:
optional - specifies that a build failure in the extension should not
abort the build process, but simply skip the extension.
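With that flag, the two-pass setup above can collapse into a single call; here is a sketch of such a setup.py (module layout assumed from the earlier example, not a tested recipe):

```python
from setuptools import setup
from setuptools.extension import Extension

setup(name='module',
      packages=['module', 'module.sub', 'module.c_sub'],
      ext_modules=[
          Extension("module.c_sub._sub",
                    ["module/c_sub/_sub.c"],
                    optional=True)  # a build failure skips this extension
      ])
```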
Here is also a link to the quite interesting history of the piece of code proposed by mgc.
The question How should I structure a Python package that contains Cython code is related; there the question is how to fall back from Cython to the already-generated C code. You could use a similar strategy to select whether the .py or the .pyx code gets installed.
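A minimal sketch of that selection (file names are hypothetical): compile from the .pyx when Cython is importable on the installing machine, otherwise fall back to the pre-generated .c file:

```python
# Pick the extension source based on Cython's availability.
try:
    from Cython.Build import cythonize  # noqa: F401
    have_cython = True
except ImportError:
    have_cython = False

source = "module/sub.pyx" if have_cython else "module/sub.c"
print(source)
```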
