Cython build resulting in undefined symbol

I've got a C++ program I'm trying to wrap/convert to Cython. It uses a particular library that, for some reason, will not produce a working importable module, even though the plain C++ program builds and runs fine. Here's the setup.py:
ext_modules = [
    Extension(
        name="libnmfpy",
        sources=["interface/nmf_lib.pyx"],
        include_dirs=["../src/", numpy.get_include()],
        libraries=["nmf", "mpi_cxx", "mpi", "m"],
        library_dirs=["../build/Linux/bin.release", "/usr/local/lib/", "/usr/lib"],
        language="c++",
    )
]
setup(
    name='libnmfpy',
    cmdclass={'build_ext': build_ext},
    ext_modules=ext_modules,
)
I should mention that it is libnmf that seems to be causing problems. My first build of libnmf caused this script to generate this error:
/usr/bin/ld: ../build/Linux/bin.release/libnmf.a(nmf.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
../build/Linux/bin.release/libnmf.a: could not read symbols: Bad value
collect2: error: ld returned 1 exit status
When I rebuild libnmf with -fPIC, setup generates libnmfpy.so, but when I import it in another script, I get the aforementioned undefined symbol:
Traceback (most recent call last):
  File "test.py", line 1, in <module>
    import libnmfpy
ImportError: $path/cython/libnmfpy.so: undefined symbol: _ZN4elem6lapack3SVDEiiPdiS1_
If it would help, here's something my search suggested:
nm libnmfpy.so | grep _ZN4elem6lapack3SVDEiiPdiS1_
U _ZN4elem6lapack3SVDEiiPdiS1_
nm ../build/Linux/bin.release/libnmf.a | grep _ZN4elem6lapack3SVDEiiPdiS1_
U _ZN4elem6lapack3SVDEiiPdiS1_
which I am guessing is the cause of the error. I looked in what I think is the offending library that libnmf is built on:
nm $another_path/lib/libelemental.a | grep _ZN4elem6lapack3SVDEiiPdiS1_
0000000000005290 T _ZN4elem6lapack3SVDEiiPdiS1_
I am not too familiar yet with libraries and linkers, so any help would be appreciated. Thanks
edit: a little digging made me realize something. Is there a difference between Mac OS X and Linux that I should watch out for? The people I work for wrote this originally and reported no build errors like this.

You should use nm -C to demangle your symbols. It also looks like you are mixing static and shared libraries, which is generally not a good idea. Also, gcc's linker is a one-pass linker, which means the order of library flags matters. You want to list the libraries in reverse dependency order: if a depends on b, then a must appear before b in the linker flags.
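For example, demangling the missing symbol (c++filt does the same job as nm -C) shows exactly which Elemental routine is unresolved:
c++filt _ZN4elem6lapack3SVDEiiPdiS1_
elem::lapack::SVD(int, int, double*, int, double*)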

Well, I can't recreate your setup and then work out and test a solution on my setup since I don't have your source, but it seems to me that when you recompiled libnmf with -fPIC, it was compiled for dynamic linking, whereas before it was statically linked.
If my guess is correct, then you can try either:
compiling libnmf again with -fPIC and -static, or
changing your setup.py: add "elemental" to the libraries list, which will make the linker fetch that lib as well (sketched below).
You should note that approach #1 is usually considered less desirable, but as I said, it could be that it was originally compiled that way anyway. #2, however, could take more work, because if other libs are required, you will have to find and add them as well.
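For option #2, the change would look roughly like this (a sketch; the extra library_dirs entry assumes libelemental.a lives where the question's nm command found it):
ext_modules = [
    Extension(
        name="libnmfpy",
        sources=["interface/nmf_lib.pyx"],
        include_dirs=["../src/", numpy.get_include()],
        # reverse dependency order: libnmf needs Elemental's symbols,
        # so "elemental" comes after "nmf"
        libraries=["nmf", "elemental", "mpi_cxx", "mpi", "m"],
        library_dirs=["../build/Linux/bin.release",
                      "$another_path/lib",  # wherever libelemental.a lives
                      "/usr/local/lib/", "/usr/lib"],
        language="c++",
    )
]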

Related

Pybind11 - ImportError: .../pybindx.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN6google8protobuf8internal26fixed_address_empty_stringB5cxx11E

I wrote some binding code in a pybindx.cpp file to bind C++ code to Python. I want to call some functions (implemented in C++) from Python. When I run the python setup.py build_ext command, the .so file ./build/lib.linux-x86_64-3.8/pybindx.cpython-38-x86_64-linux-gnu.so gets created, but when I try to import it (import pybindx) in test.py to call the bound functions, it gives the following error:
ImportError: <path-to-.so-file>/pybindx.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN6google8protobuf8internal26fixed_address_empty_stringB5cxx11E
I have added <path-to-.so-file> to PYTHONPATH and LD_LIBRARY_PATH.
My setup.py file contains the following code:
import os, sys
from distutils.core import setup, Extension
from distutils import sysconfig
cpp_args = ['-std=c++11']
ext_modules = [
    Extension(
        'pybindx',
        ['class1.cpp', 'class2.cpp', 'base_class1.cpp', 'base_class2.cpp', 'pybindx.cpp'],
        include_dirs=['paths/to/include/header/files', 'path/to/protobuf/include'],
        language='c++',
        extra_compile_args=cpp_args,
    ),
]
setup(
    name='pybindx',
    version='0.0.1',
    author='xxxxx',
    author_email='xxxxx',
    description='desc',
    ext_modules=ext_modules,
)
where class1.cpp, class2.cpp, base_class1.cpp, and base_class2.cpp implement the classes and functions I want to bind to Python.
I am new to pybind11, can someone help me with this?
Thanks!
I tried writing a small example without protobuf, where I am able to call the C++ function from test.py, but here I want to use protobuf.
Anyway, I fixed it. I used a Makefile rather than setup.py: I have to link the static libprotobuf.a into my pybindx.so.
I added the following command, alongside the other required commands, to the Makefile:
g++ -shared -fPIC ./build/*.o <path-to-static-protobuf>/libprotobuf.a -o pybindx.so
where the *.o files are the object files created by earlier rules in the Makefile.
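For what it's worth, the same static link can probably be expressed without leaving setup.py, by handing the archive to the link step via extra_objects (an untested sketch; <path-to-static-protobuf> is the same placeholder as above):
from distutils.core import setup, Extension

cpp_args = ['-std=c++11']

ext_modules = [
    Extension(
        'pybindx',
        ['class1.cpp', 'class2.cpp', 'base_class1.cpp', 'base_class2.cpp', 'pybindx.cpp'],
        include_dirs=['paths/to/include/header/files', 'path/to/protobuf/include'],
        language='c++',
        extra_compile_args=cpp_args,
        # hand the static archive to the linker, mirroring the Makefile rule above
        extra_objects=['<path-to-static-protobuf>/libprotobuf.a'],
    ),
]

setup(name='pybindx', version='0.0.1', ext_modules=ext_modules)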

How to build pyobjc 7.3 with nix?

I want to build pyobjc-7.3, because it has a fix for send2trash.
Classic building on Big Sur (20.5.0) is straightforward:
cd pyobjc-7.3/pyobjc-framework-Cocoa
python3 setup.py build
though once I run the same build inside nix-shell, magic happens:
nix-shell -p pkgs.python39Packages.setuptools
python3 setup.py build
clang-7: error: argument unused during compilation: '-fno-strict-overflow' [-Werror,-Wunused-command-line-argument]
OK, no big deal, let's disable the warning:
CFLAGS="-Wno-unused-argument" python3 setup.py build
What? Now clang is like a blind kitten:
Modules/pyobjc-api.h:19:10: fatal error: 'objc/objc.h' file not found
#include <objc/objc.h>
The -isysroot and -I options have no effect:
-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk
I noticed lots of additions to clang's -I flags, such as:
-iwithprefix /Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk/usr/include
It helps clang find the objc header file, though this is not the end of the story.
Modules/pyobjc-api.h:21:9: fatal error: 'Foundation/Foundation.h' file not found
How come?!
Oh, there is another kind of header file, of a special kind: frameworks. Wheel reinvention...
clang, take another argument:
-iframeworkwithsysroot /System/Library/Frameworks
Here I get tons of type errors, and I run out of ideas for what to try next:
/Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk/System/Library/Frameworks/Foundation.framework/Headers/NSString.h:138:1: error: function cannot return function type 'NSComparisonResult' (aka 'int (int)')
(NSComparisonResult)compare:(NSString *)string options:(NSStringCompareOptions)mask range:(NSR...
After days of trying, I've found a solution.
There are a few bugs causing the problems.
First, nix provides an older (10.12) SDK while setup.py thinks it is 10.15. This enables preprocessor sections for unsupported SDK APIs, hence the type errors. The following hack makes pyobjc think that the SDK is older than it is:
with pkgs;
with pkgs.lib;
with pkgs.python39Packages;

let
  pyobjc-core = buildPythonPackage rec {
    pname = "pyobjc-core";
    version = "7.3";
    name = "${pname}-${version}";
    src = pkgs.python39Packages.fetchPypi {
      pname = "pyobjc-core";
      inherit version;
      sha256 = "0x3msrzvcszlmladdpl64s48l52fwk4xlnnri8daq2mliggsx0ah";
    };
    preBuild = ''
      export SDKROOT="/Library/Developer/CommandLineTools/SDKs/MacOSX10.12.sdk"
    '';
The second problem is with header discovery and an over-strict lint flag from nix's Python:
    CFLAGS = "-iwithsysroot /usr/include -Wno-unused-argument";
The third problem is that the Big Sur linker is dynamic and the ffi library is not found. Provide it through the nix derivation:
    buildInputs = [ pkgs.libffi ];
The fourth problem is that the tests are broken:
    doCheck = false;

CMake not linking Python

Sorry if I'm duplicating a question, but I just cannot find the solution to what I'm looking for anywhere on the internet, yet I believe that this is a very simple problem.
I'm trying to extend python with some custom C++ libraries, and building my C++ libraries with CMake. I'm following the instructions on https://docs.python.org/2/extending/extending.html, but it's not compiling correctly.
When I try to build it, I get these messages:
"C:\Program Files (x86)\JetBrains\CLion 140.2310.6\bin\cmake\bin\cmake.exe" --build C:\Users\pkim2\.clion10\system\cmake\generated\76c451cd\76c451cd\Debug --target parsers -- -j 8
Linking CXX executable parsers.exe
CMakeFiles\parsers.dir/objects.a(main.cpp.obj): In function `spam_system':
C:/code/ground-trac/ground/launch/trunk/Software Support/Data Analysis Scripts/data_review_automation/parsers/main.cpp:9: undefined reference to `_imp__PyArg_ParseTuple'
C:/code/ground-trac/ground/launch/trunk/Software Support/Data Analysis Scripts/data_review_automation/parsers/main.cpp:12: undefined reference to `_imp__Py_BuildValue'
c:/mingw/bin/../lib/gcc/mingw32/4.8.1/../../../libmingw32.a(main.o):(.text.startup+0xa7): undefined reference to `WinMain@16'
collect2.exe: error: ld returned 1 exit status
CMakeFiles\parsers.dir\build.make:87: recipe for target 'parsers.exe' failed
CMakeFiles\Makefile2:59: recipe for target 'CMakeFiles/parsers.dir/all' failed
CMakeFiles\Makefile2:71: recipe for target 'CMakeFiles/parsers.dir/rule' failed
mingw32-make.exe[3]: *** [parsers.exe] Error 1
mingw32-make.exe[2]: *** [CMakeFiles/parsers.dir/all] Error 2
mingw32-make.exe[1]: *** [CMakeFiles/parsers.dir/rule] Error 2
mingw32-make.exe: *** [parsers] Error 2
Makefile:109: recipe for target 'parsers' failed
Based on this, I suspect that this is a problem with the way I'm linking things in the CMakeLists.txt file, but I have no idea how to do it properly. This is what my CMakeLists.txt looks like right now:
cmake_minimum_required(VERSION 2.8.4)
project(parsers)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
set(SOURCE_FILES main.cpp)
include_directories(C:\\Python27\\include)
link_directories(C:\\Python27\\)
target_link_libraries(python2.7)
add_executable(parsers ${SOURCE_FILES})
How in the world do I get this thing to compile correctly? I am running Windows 7 64-bit, and using CLion as my IDE.
Your first problem is that you are using target_link_libraries wrong: you should pass it the target to which to add a link and then the library you want to link in:
target_link_libraries(parsers python2.7)
Your second problem is that you are building an executable, instead of a shared library. If you want to make your extension accessible from python it needs to be a library.
add_library(parsers SHARED ${SOURCE_FILES})
But now comes the good news: your life becomes much simpler (and more portable) if you use the built-in CMake module FindPythonLibs.cmake. To build a python module you would only need to do the following:
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
find_package(PythonLibs REQUIRED)
include_directories(${PYTHON_INCLUDE_DIRS})
add_library(parsers SHARED ${SOURCE_FILES})
target_link_libraries(parsers ${PYTHON_LIBRARIES})
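Once that builds, a quick smoke test from Python (the function name is hypothetical, taken from the extending tutorial the question follows; note that on Windows the built library also needs a .pyd suffix for the import to succeed):
import parsers
print(parsers.system("dir"))  # hypothetical: use whatever main.cpp's method table exports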
If you use Windows, try this CMakeLists.txt:
cmake_minimum_required(VERSION 3.8)
project(CSample)
set( CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -ftest-coverage -fprofile-arcs" )
link_directories(D:/Programs/mingw/mingw64/lib/)
include_directories(D:/Programs/Python/Python37/include/)
link_libraries(D:/Programs/Python/Python37/libs/python37.lib)
# Python
add_executable(CSample main.cpp)
You are using target_link_libraries() wrong. Check the docs; you probably want something like:
add_executable(parsers ${SOURCE_FILES})
target_link_libraries(parsers python2.7)
Note that the output from CMake should already tell you that something is wrong. On my machine:
CMake Error at CMakeLists.txt:8 (target_link_libraries):
Cannot specify link libraries for target "python2.7" which is not built by
this project.

Cython and fortran - how to compile together without f2py

FINAL UPDATE
This question is about how to write a setup.py that will compile a cython module which accesses FORTRAN code directly, like C would. It was a rather long and arduous journey to the solution, but the full mess is included below for context.
ORIGINAL QUESTION
I have an extension which is a Cython file, which sets up some heap memory and passes it to the fortran code, and a fortran file, which is a venerable old module that I'd like to avoid reimplementing if I can.
The .pyx file compiles fine to C, but the cython compiler chokes on the .f90 file with the following error:
$ python setup.py build_ext --inplace
running build_ext
cythoning delaunay/__init__.pyx to delaunay/__init__.c
building 'delaunay' extension
error: unknown file type '.f90' (from 'delaunay/stripack.f90')
Here's (the top half of) my setup file:
from distutils.core import setup, Extension
from Cython.Distutils import build_ext

ext_modules = [
    Extension("delaunay",
              sources=["delaunay/__init__.pyx",
                       "delaunay/stripack.f90"])
]
setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=ext_modules,
    ...
)
NOTE: I originally had the fortran file's location incorrectly specified (without the directory prefix) but this breaks in exactly the same way after I fixed that.
Things I have tried:
I found this, and tried passing in the name of the fortran compiler (i.e. gfortran) like this:
$ python setup.py config --fcompiler=gfortran build_ext --inplace
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: option --fcompiler not recognized
And I've also tried removing --inplace, in case that was the problem (it wasn't, same as the top error message).
So, how do I compile this fortran? Can I hack it into a .o myself and get away with linking it? Or is this a bug in Cython, which will force me to reimplement distutils or hack around with the preprocessor?
UPDATE
So, having checked out the numpy.distutils packages, I understand the problem a bit more. It seems that you have to
Use cython to convert the .pyx files to cpython .c files,
Then use an Extension/setup() combination that supports fortran, like numpy's.
Having tried this, my setup.py now looks like this:
from numpy.distutils.core import setup
from Cython.Build import cythonize
from numpy.distutils.extension import Extension

cy_modules = cythonize('delaunay/sphere.pyx')
e = cy_modules[0]

ext_modules = [
    Extension("delaunay.sphere",
              sources=e.sources + ['delaunay/stripack.f90'])
]
setup(
    ext_modules=ext_modules,
    name="delaunay",
    ...
)
(note that I've also restructured the module a bit, since seemingly an __init__.pyx is disallowed...)
Now is where things become buggy and platform-dependent. I have two testing systems available - one Mac OS X 10.6 (Snow Leopard), using Macports Python 2.7, and one Mac OS X 10.7 (Lion) using the system python 2.7.
On Snow Leopard, the following applies:
This means that the module compiles (hurray!) (although there's no --inplace for numpy, it seems, so I had to install the testing module system-wide :/) but I still get a crash on import, as follows:
>>> import delaunay
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "<snip>site-packages/delaunay/__init__.py", line 1, in <module>
from sphere import delaunay_mesh
ImportError: dlopen(<snip>site-packages/delaunay/sphere.so, 2): no suitable image found. Did find:
<snip>site-packages/delaunay/sphere.so: mach-o, but wrong architecture
and on Lion, I get a compile error, following a rather confusing looking compile line:
gfortran:f77: build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.f
/usr/local/bin/gfortran -Wall -arch i686 -arch x86_64 -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/fortranobject.o build/temp.macosx-10.7-intel-2.7/delaunay/stripack.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.o -lgfortran -o build/lib.macosx-10.7-intel-2.7/delaunay/sphere.so
ld: duplicate symbol _initsphere in build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o and build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o for architecture i386
ld: duplicate symbol _initsphere in build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o and build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o for architecture x86_64
Now let's just step back a moment before we pore over the details here. Firstly, I know there are a bunch of headaches over architecture clashes in 64-bit Mac OS X; I had to work very hard to get Macports Python working on the Snow Leopard machine (just to upgrade from system python 2.6). I also know that when you see gfortran -arch i686 -arch x86_64 you are sending mixed messages to your compiler. There are all manner of platform-specific problems buried in there, that we don't need to worry about in the context of this question.
But let's just look at this line:
gfortran:f77: build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.f
what is numpy doing?! I don't need any f2py features in this build! I actually wrote a cython module in order to avoid dealing with f2py's insanity (I need to have 4 or 5 output variables, as well as neither-in-nor-out arguments - neither of which is well supported in f2py.) I just want it to compile .c -> .o, and .f90 -> .o and link them. I could write this compiler line myself if I knew how to include all the relevant headers.
Please tell me I don't need to write my own makefile for this... or that there's a way to translate fortran to (output-compatible) C so I can just avoid python ever seeing the .f90 extension (which fixes the whole problem.) Note that f2c is not suitable for this as it only works on F77 and this is a more modern dialect (hence the .f90 file extension).
UPDATE 2
The following bash script will happily compile and link the code in place:
PYTHON_H_LOCATION="/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/"
cython sphere.pyx
gcc -arch x86_64 -c sphere.c -I$PYTHON_H_LOCATION
gfortran -arch x86_64 -c stripack.f90
gfortran -arch x86_64 -bundle -undefined dynamic_lookup -L/opt/local/lib *.o -o sphere.so
Any advice on how to make this kind of hack compatible with a setup.py? I don't want anyone installing this module to have to go find Python.h manually...
UPDATE: I've created a project on github which wraps up this generating of compile lines by hand. It's called complicated_build.
UPDATE 2: in fact, "generating by hand" is a really bad idea, as it's platform-specific — the project now reads the values from the distutils.sysconfig module, which holds the settings used to compile Python itself (i.e. exactly what we want). The only settings that are guessed are the Fortran compiler and the file extensions (both user-configurable). I suspect it is reimplementing a fair bit of distutils now!
The way to do this is to write your own compiler lines and hack them into your setup.py. I show an example below which works for my (very simple) case, which has the following structure:
imports
cythonize() any .pyx files, so you only have fortran and C files.
define a build() function which compiles your code:
maybe some easy-to-change constants, like compiler names and architecture
list up the fortran and C files
generate the shell commands that will build the modules
add the linker line
run the shell commands.
if the command was install and the target doesn't exist yet, build it.
run setup (which will build the pure python sections)
if the command was build, run the build now.
My implementation of this is shown below. It's designed for only one extension module, and it recompiles all the files every time, so it may require further extension to be of more general use. Also note that I've hard-coded various Unix /s, so if you're porting this to Windows, make sure you adapt or replace them with os.path.sep.
from distutils.core import setup
from distutils.sysconfig import get_python_inc
from Cython.Build import cythonize
import sys, os, shutil

cythonize('delaunay/sphere.pyx')

target = 'build/lib/delaunay/sphere.so'

def build():
    fortran_compiler = 'gfortran'
    c_compiler = 'gcc'
    architecture = 'x86_64'
    python_h_location = get_python_inc()
    build_temp = 'build/custom_temp'
    global target
    try:
        shutil.rmtree(build_temp)
    except OSError:
        pass
    os.makedirs(build_temp)  # if you get an error here, please ensure the build/
                             # folder is writable by this user.
    c_files = ['delaunay/sphere.c']
    fortran_files = ['delaunay/stripack.f90']
    c_compile_commands = []
    for cf in c_files:
        # use the path (sans /s), without the extension, as the object file name:
        components = os.path.split(cf)
        name = components[0].replace('/', '') + '.'.join(components[1].split('.')[:-1])
        c_compile_commands.append(
            c_compiler + ' -arch ' + architecture + ' -I' + python_h_location + ' -o ' +
            build_temp + '/' + name + '.o -c ' + cf
        )
    fortran_compile_commands = []
    for ff in fortran_files:
        # prefix with f in case of name collisions with c files:
        components = os.path.split(ff)
        name = components[0].replace('/', '') + 'f' + '.'.join(components[1].split('.')[:-1])
        fortran_compile_commands.append(
            fortran_compiler + ' -arch ' + architecture + ' -o ' + build_temp +
            '/' + name + '.o -c ' + ff
        )
    commands = c_compile_commands + fortran_compile_commands + [
        fortran_compiler + ' -arch ' + architecture +
        ' -bundle -undefined dynamic_lookup ' + build_temp + '/*.o -o ' + target
    ]
    for c in commands:
        os.system(c)

if 'install' in sys.argv and not os.path.exists(target):
    try:
        os.makedirs('build/lib/delaunay')
    except OSError:
        # we don't care if the containing folder already exists.
        pass
    build()

setup(
    name="delaunay",
    version="0.1",
    ...
    packages=["delaunay"]
)

if 'build' in sys.argv:
    build()
This could be wrapped up into a new Extension class, I guess, with its own build_ext command - an exercise for the advanced student ;)
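A minimal sketch of that exercise, under the assumption that plain distutils (not numpy.distutils) drives the build: subclass build_ext, compile each .f90 to an object with gfortran, and feed the objects to the normal link step via extra_objects:
import os, subprocess
from distutils.command.build_ext import build_ext

class fortran_build_ext(build_ext):
    """Compile .f90 sources with gfortran, then let distutils link the rest."""
    def build_extension(self, ext):
        fortran_sources = [s for s in ext.sources if s.endswith('.f90')]
        ext.sources = [s for s in ext.sources if not s.endswith('.f90')]
        if fortran_sources and not os.path.exists(self.build_temp):
            os.makedirs(self.build_temp)
        for src in fortran_sources:
            obj = os.path.join(self.build_temp,
                               os.path.basename(src).replace('.f90', '.o'))
            # -fPIC so the object can be linked into a shared module
            subprocess.check_call(['gfortran', '-fPIC', '-c', src, '-o', obj])
            ext.extra_objects.append(obj)
        build_ext.build_extension(self, ext)
Passing cmdclass={'build_ext': fortran_build_ext} to setup(), with the .f90 files listed in the Extension's sources, would wire it in.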
Simply build and install your vintage Fortran library outside of Python, then link to it in distutils. Your question indicates that you do not intend to tamper with this library, so a once-and-for-all install will probably do (using the library's own build and installation instructions). Then link the Python extension to the installed external library:
ext_modules = [
    Extension("delaunay",
              sources=["delaunay/__init__.pyx"],
              libraries=["delaunay"])
]
This approach is also safe for the case that you realize that you need wrappers for other languages as well, such as Matlab, Octave, IDL, ...
Update
At some point, if you end up with more than a few such external libraries that you want to wrap, it is advantageous to add a top-level build system that installs all these libraries and manages the building of all the wrappers as well. I use cmake for this purpose, which is great at handling system-wide builds and installations. However, it cannot build Python stuff out of the box, but it can easily be taught to call "python setup.py install" in each python subdirectory, thus invoking distutils. So the overall build process looks like this:
mkdir build
cd build
cmake ..
make
make install
make python
(make octave)
(make matlab)
It is very important to always separate core library code from wrappers for specific front-end languages (also for your own projects!), since they tend to change rather fast. What happens otherwise can be seen at the example of numpy: Instead of writing one general-purpose C library libndarray.so and creating thin wrappers for Python, there are Python C API calls everywhere in the sources. This is what is now holding back Pypy as a serious alternative to CPython, since in order to get numpy they have to support every last bit of the CPython API, which they can't do, since they have a just-in-time compiler and a different garbage collector. This means we are missing out on a lot of potential improvements.
Bottom line:
Build general purpose Fortran/C libraries separately and install them system-wide.
Have a separate build step for the wrappers, which should be kept as lightweight as possible, so that it's easy to adapt for the next big language X that comes about. If there is one safe assumption, it is that X will support linking with C libraries.
You can build the object file outside of distutils then include it at the linking step using the extra_objects argument to Extension's constructor. In setup.py:
...
e = Extension(..., extra_objects = ['holycode.o'])
...
On the command prompt:
# gfortran -c -fPIC holycode.f
# ./setup.py build_ext ...
With only one external object, this will be the easiest way for many.
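Spelled out for the delaunay layout above, a sketch (the -lgfortran flag is an assumption; it is typically needed so the Fortran runtime gets linked into the module):
from distutils.core import setup, Extension
from Cython.Build import cythonize

# beforehand: gfortran -c -fPIC delaunay/stripack.f90 -o delaunay/stripack.o
ext_modules = cythonize([
    Extension("delaunay.sphere",
              sources=["delaunay/sphere.pyx"],
              extra_objects=["delaunay/stripack.o"],
              extra_link_args=["-lgfortran"])  # assumption: Fortran runtime needed
])

setup(name="delaunay", ext_modules=ext_modules)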

Python interpreter embedded in the application fails to load native modules

I have an application that statically links libpython.a (2.7). From within the application's embedded interpreter I try importing the time module (time.so), which fails with:
ImportError: ./time.so: undefined symbol: PyExc_IOError
So, this module has unresolved symbols:
nm -D time.so | grep PyExc_IOError
U PyExc_IOError
I figured that this symbol is discarded by the linker when linking the application. OK, I'm now linking libpython with all symbols:
... -Wl,-whole-archive -lpython -Wl,-no-whole-archive ...
The symbol is now there:
$ nm app | grep PyExc_IOError
8638348 D PyExc_IOError
08638ca0 d _PyExc_IOError
But I still get the same import error. Where is the problem?
Besides making sure all of libpython is included in your binary, you also need to make sure the symbols in the library are exposed to shared objects being loaded. When you're linking libpython (statically) into your main binary this means you need the --export-dynamic linker argument (so -Wl,--export-dynamic or -Xlinker --export-dynamic as the gcc argument.) When loading a shared object with libpython (say, when you embed libpython into a plugin for your app) this means you have to make sure the shared object is loaded with the RTLD_GLOBAL flag to dlopen().
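For the statically linked case here, both pieces end up on the application's link line, along with whatever libraries your libpython build itself needs (an illustrative command, not a drop-in):
gcc -o app app.o -Wl,-whole-archive -lpython2.7 -Wl,-no-whole-archive -Wl,--export-dynamic -lpthread -ldl -lutil -lm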
