I have defined the name of my wrapper module in my C file blargUtils.c like this (the methods and the rest are defined in Blargmethods)...
void initBlarg(){
    Py_InitModule("Blarg", Blargmethods);
}
I compiled it like so...
blarglib: blargUtils.c
	gcc -I/usr/include/python2.6 -fPIC -c blargUtils.c -Wall
	gcc -shared blargUtils.o -o blargUtils.so

clean:
	rm *.so
However, when I try to import the wrapper in my python script...
import Blarg
It says: "ImportError: No module named Blarg". I'm a little lost here; I don't understand why it cannot find the module when the spelling is exactly the same. Maybe it's a logic error?
If more code is needed let me know.
First of all, from looking at the comments, I see that renaming it didn't work. This means (1) Python can't find the .so file, (2) the .so file isn't usable (i.e. it wasn't compiled correctly or not all required symbols can be found), or (3) there is a .py/.pyc/.pyo file in the same directory which already has that name. If you already have a Blarg.py defined, Python will look at that file first. The same goes if you have a directory named Blarg in your search path. So instead of bashing your head against the wall, try this:
1) Rename your .so library to something guaranteed not to collide (e.g. _Blarg)
void initBlarg() {
    Py_InitModule("_Blarg", Blargmethods);
}
2) Compile it with the SAME NAME
gcc -I/usr/include/python2.6 -fPIC -c blargUtils.c -Wall
gcc -shared blargUtils.o -Wl,-soname -Wl,_Blarg.so -o _Blarg.so
3) Create a python wrapper (i.e. Blarg.py)
import sys
sys.path.append('/path/to/your/library')
import _Blarg
def blargFunc1(*args):
    """Wrap blargFunc1"""
    return _Blarg.blargFunc1(*args)
4) Now just use it as normal
import Blarg
Blarg.blargFunc1(1, 2, 3)
Obviously this is a bit of overkill, but it should help you determine where the problem is. Hope this helps.
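A quick way to narrow down cases (1)-(3) above is to ask the import machinery what it would actually load for a given name. This sketch uses the Python 3 importlib API for illustration (on Python 2, imp.find_module plays the same role); the module name Blarg is taken from the question.

```python
# Sketch: ask the import system which file it would load for a module name.
# Python 3 importlib shown for illustration; on Python 2 use imp.find_module.
import importlib.util

def locate(name):
    """Return the origin (file path or 'built-in') for `name`, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print(locate("math"))    # found: a path or 'built-in'
print(locate("Blarg"))   # None means the .so is not on sys.path at all
```

If a path comes back but it points at a stray Blarg.py or Blarg.pyc instead of your .so, you have found case (3).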
I've compiled a third-party Python module (alembic). alembic imports another Python module (imath) via PyImport_ImportModule, and imath in turn imports yet another module (iex) the same way. The code looks like this:
BOOST_PYTHON_MODULE(alembic)
{
    handle<> imath(PyImport_ImportModule("imath"));
}
BOOST_PYTHON_MODULE(imath)
{
    handle<> iex(PyImport_ImportModule("iex"));
}
BOOST_PYTHON_MODULE(iex)
{
    scope().attr("BaseExc") = "An Exception";
}
It works if I import imath first and then import alembic. But if I import alembic directly, it raises a NoneType error at scope().attr("BaseExc") = "An Exception". I've read the boost code, and I'm sure the reason is that detail::current_scope is empty, but I don't know why.
Can anyone help me with this? Why does it happen, and how can I avoid it?
Update:
I can't reproduce it with the code above. I wrote a .cpp file containing:
#include "boost/python.hpp"
using namespace boost::python;
BOOST_PYTHON_MODULE(alembic)
{
    handle<> imath(PyImport_ImportModule("imath"));
}
Then I compiled it with:
g++ -fPIC -shared -I/usr/include -L/usr/lib -lboost_python -lpython2.7 -L/usr/lib64 -Wl,-soname,alembicmodule.so -o alembicmodule.so alembic.cpp
And it works fine. I'm surprised, because in the third-party module the error happens at the first line. Maybe this is not a boost bug but a CMake problem?
This issue comes from the compile options. CMake generates a link script, and this script uses libboost_python.a to link the Python module alembicmodule.so. When I changed libboost_python.a to libboost_python.so, the issue was fixed.
I'm writing a SWIG C++ file to generate a Python module. I want to let users import it in both Python 2 and Python 3 scripts. Since SWIG has different flags for binding Python 2 and Python 3, I was wondering whether there is a way to create one general module for both.
Let's rewind a little and keep SWIG itself out of the question to start with.
Historically it's been necessary to compile a Python module for every (sometimes even minor) version change of the Python interpreter. This led to PEP 384, which defined a stable ABI for a subset of the Python C API from Python 3.2 onwards. So that obviously doesn't work for your request, because you're interested in 2 vs 3. Furthermore, SWIG itself doesn't actually generate PEP 384 compatible code.
To experiment further and see a bit more about what's going on I've made the following SWIG file:
%module test

%inline %{
char *frobinate(const char *in) {
    static char buf[1024];
    snprintf(buf, 1024, "World of hello: %s", in);
    return buf;
}
%}
If we try to compile this with -DPy_LIMITED_API, it fails:
swig3.0 -Wall -python test.i && gcc -Wall -Wextra -I/usr/include/python3.2 -shared -o _test.so test_wrap.c -DPy_LIMITED_API 2>&1|head
test_wrap.c: In function ‘SWIG_PyInstanceMethod_New’:
test_wrap.c:1114:3: warning: implicit declaration of function ‘PyInstanceMethod_New’ [-Wimplicit-function-declaration]
return PyInstanceMethod_New(func);
^
test_wrap.c:1114:3: warning: return makes pointer from integer without a cast
test_wrap.c: In function ‘SWIG_Python_UnpackTuple’:
test_wrap.c:1315:5: warning: implicit declaration of function ‘PyTuple_GET_SIZE’ [-Wimplicit-function-declaration]
Py_ssize_t l = PyTuple_GET_SIZE(args);
^
test_wrap.c:1327:2: warning: implicit declaration of function ‘PyTuple_GET_ITEM’ [-Wimplicit-function-declaration]
I.e. nobody has picked up that ticket, at least not in SWIG 3.0.2, the version I am testing with.
So where does that leave us? Well the SWIG 3.0 documentation on Python 3 support says something interesting:
SWIG is able to support Python 3.0. The wrapper code generated by SWIG can be compiled with both Python 2.x or 3.0. Further more, by passing the -py3 command line option to SWIG, wrapper code with some Python 3 specific features can be generated (see below subsections for details of these features). The -py3 option also disables some incompatible features for Python 3, such as -classic.
My reading of that statement is that if you want to generate source code that can be compiled with both 2.x and 3.x Python headers all you need to do is omit the -py3 argument when running SWIG. And that seems to work with my testing, the same code generated by SWIG compiles just fine with all of:
$ gcc -Wall -Wextra -I/usr/include/python2.7/ -shared -o _test.so test_wrap.c
$ gcc -Wall -Wextra -I/usr/include/python3.2 -shared -o _test.so test_wrap.c
$ gcc -Wall -Wextra -I/usr/include/python3.4 -shared -o _test.so test_wrap.c
(Note that 3.4 does generate some warnings, but no errors and it does work)
The problem is that there's no interchangeability between the compiled _test.so of any given version. Trying to import a version from one in another will fail with errors from the dynamic linker about missing or undefined symbols. So we're still stuck at the initial problem, although we can compile our module for any Python interpreter we can't compile it for all versions.
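You can see why the binaries are not interchangeable by printing the ABI tag each interpreter stamps on its extension modules. A sketch using only the stdlib sysconfig module; the exact values are build-dependent, and on Python 2 the suffix lives in the SO config variable instead:

```python
# Each CPython build tags its extension modules; a module built against one
# tag will not load under another. Values differ per interpreter and build.
import sysconfig

print(sysconfig.get_config_var("SOABI"))       # e.g. cpython-34m
print(sysconfig.get_config_var("EXT_SUFFIX"))  # e.g. .cpython-34m.so
```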
In my view the right way to deal with this is to use distutils and simply install your module into the search path of each version of Python you want to support on any given machine.
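A minimal setup.py for that route might look like the sketch below. The module and file names (_test, test_wrap.c) follow the example above; you would run it once with each interpreter, e.g. python2.7 setup.py install and then python3.4 setup.py install.

```python
# Minimal sketch of a per-interpreter install; distutils shown to match the
# answer's vintage (on modern Pythons, setuptools.setup works the same way).
from distutils.core import setup, Extension

setup(
    name="test",
    version="1.0",
    ext_modules=[Extension("_test", sources=["test_wrap.c"])],
    py_modules=["test"],  # the SWIG-generated pure-Python wrapper
)
```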
That said, there is a workaround we could use. Essentially we want to build a version of our module for each interpreter and use some generic code to switch between the implementations. Assuming that you're not building with code generated from SWIG's -builtin option, there are two places we could try to do this:
In the shared object itself
In some Python code
The latter is substantially simpler to achieve, so let's look at that. I did exactly that, using the following bash script to build a shared object for each version of Python I intended to support:
#!/bin/bash
for v in 2.7 3.2 3.4
do
    d=$(echo $v|tr . _)
    mkdir -p $d
    touch $d/__init__.py
    gcc -Wall -Wextra -I/usr/include/python$v -shared -o $d/_test.so test_wrap.c
done
This builds 3 versions of _test.so, in directories named after the Python version they're intended to support. If we tried to import our SWIG-generated module now, it would complain because there's no indication of where to find the _test module we've compiled. So we need to fix that. There are three ways to do it:
Modify the SWIG generated import code. I didn't manage to make this work with SWIG 3.0.2 - perhaps it's too old?
Modify the Python search path, before the second import. We can do that using %pythonbegin:
%pythonbegin %{
import os.path
import sys
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), '%d_%d' % (sys.version_info.major, sys.version_info.minor)))
%}
Write a _test.py stub that finds the right one and then switches them around:
from sys import version_info,modules
from importlib import import_module
real_mod = import_module('%d_%d.%s' % (version_info.major, version_info.minor, __name__))
modules[__name__] = modules[real_mod.__name__]
Either of the last two options worked and resulted in import test succeeding in all three versions of Python. But it's cheating a bit really; it's far better just to install from source three times over.
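The sys.modules swap behind option 3 can be demonstrated without any compiled code at all. In this self-contained sketch the "real" implementation is faked with types.ModuleType, where the SWIG case would import the version-specific package instead:

```python
# Stand-alone illustration of the stub trick: whatever object sits in
# sys.modules under a name is what `import` hands back for that name.
import sys
import types

real = types.ModuleType("real_impl")
real.frobinate = lambda s: "World of hello: %s" % s

sys.modules["test"] = real     # the stub replaces itself with the real module

import test                    # import now yields the faked 'real' module
print(test.frobinate("swig"))  # -> World of hello: swig
```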
I am trying to statically link the C++ portaudio library against my C++ demo module, which is a Python-callable library (module).
I'm doing this with distutils, and in order to perform the static linking, I've added the libportaudio to the extra_objects argument, as follows:
module1 = Extension(
    "demo",
    sources=cppc,
    # TODO remove os dependency
    extra_compile_args=gccArgs,
    # link against shared libraries
    #libraries=[""]
    # link against static libraries
    extra_objects=["./clib-3rd-portaudio/libportaudio.a"])  # << I've added the static lib here
Compiling with "python setup.py build" results in the following linker error:
/usr/bin/ld: ./clib-3rd-portaudio/libportaudio.a(pa_front.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
./clib-3rd-portaudio/libportaudio.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
So at this point I've tried the obvious and added the -fPIC flag to gccArgs (note extra_compile_args=gccArgs above), as follows:
gccArgs = [
    "-Icsrc",
    "-Icsrc/paExamples",
    "-Icinc-3rd-portaudio",
    "-Icinc-3rd-portaudio/common",
    "-Icinc-3rd-portaudio/linux",
    "-fPIC"]  # << I've added the -fPIC flag here
However this results in the exact same error, so I guess the -fPIC flag is not the root cause. I'm probably missing something trivial, but I'm a bit lost here; I hope somebody can help.
As the error message says, you should recompile the external library libportaudio.a with the -fPIC flag, NOT your own code. That's why adding -fPIC to your extra_compile_args doesn't help.
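You can confirm that distutils already compiles your own sources as position-independent code by printing the flags it inherits from the interpreter's build configuration. A sketch using the stdlib sysconfig module; the exact value is platform-dependent:

```python
# distutils/setuptools add these flags to every extension source they compile,
# which is why adding -fPIC yourself changes nothing. Platform-dependent.
import sysconfig

print(sysconfig.get_config_var("CCSHARED"))  # typically -fPIC on Linux
```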
Several other posts suggest that this libportaudio.a cannot be used to build a shared library, probably because the default build settings of portaudio don't include -fPIC.
To recompile portaudio correctly, download the source and try running ./configure with the -shared option (or something similar). If you cannot find the proper option, modify the Makefile and append -fPIC to the extra compile options. You can also compile each object file manually and pack them into libportaudio.a.
Since your target file (libdemo.so) is a shared library, you must make sure ANY object code included in it is compiled with the -fPIC option. To understand why you need this option, see:
What does -fPIC mean when building a shared library? and Position Independent Code (PIC) in shared libraries
I have written a Python C module (just ffmpeg.c, which depends on some FFmpeg libs and other libs) and I am wondering how to link it.
I'm compiling with:
cc -std=c99 -c ../ffmpeg.c -I /usr/include/python2.7 -g
I'm trying to link right now with:
ld -shared -o ../ffmpeg.so -L/usr/local/lib -lpython2.7 -lavutil -lavformat -lavcodec -lswresample -lportaudio -lchromaprint ffmpeg.o -lc
There is no error. However, when I try to import ffmpeg in Python, I get:
ImportError: ./ffmpeg.so: undefined symbol: avio_alloc_context
Maybe the command is already correct. I checked the resulting ffmpeg.so with ldd, and it partly links to the wrong FFmpeg. This is strange, however, because of the -L/usr/local/lib, which should take precedence over the defaults. Maybe it's because my custom-installed FFmpeg (in /usr/local/lib) has for some reason only installed static *.a libs, and *.so files found elsewhere take precedence over the *.a files.
You should put the libraries that you're linking to after the .o file; i.e.:
ld -shared -o ../ffmpeg.so ffmpeg.o -L/usr/local/lib -lpython2.7 -lavutil -lavformat -lavcodec -lswresample -lportaudio -lchromaprint -lc
The linker is dumb and will not pull in code from a static library until a dependency on it has already arisen; the use of avio_alloc_context happens in ffmpeg.o, and because the library is listed before the object file that uses it, the linker does not consider the library's code needed, so it doesn't get linked in. This is the biggest reason why linking using .a files fails.
You can also use --start-group and --end-group around all the files that you are linking - this allows you to link static libraries that have cross dependencies that just seem impossible to resolve through other means:
ld -shared -o ../ffmpeg.so -L/usr/local/lib -lpython2.7 --start-group -lavutil -lavformat -lavcodec -lswresample -lportaudio -lchromaprint ffmpeg.o --end-group -lc
Using .a files is a little bit trickier than .so files, but these two items will generally work around any issues you have when linking.
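Independent of link order, a quick sanity check after linking is to dlopen the fresh .so directly from Python; ctypes resolves symbols eagerly, so an unresolved symbol like avio_alloc_context surfaces here immediately rather than at import time. A sketch (the path ffmpeg.so is the module from the question):

```python
# Loading the shared object with ctypes forces symbol resolution up front,
# so an undefined symbol is reported here rather than at `import ffmpeg`.
import ctypes

try:
    ctypes.CDLL("./ffmpeg.so")
    print("links cleanly")
except OSError as e:
    print("link problem:", e)
```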
Now that I've successfully installed Cython on Windows 7, I try to compile some Cython code using Cython, but gcc makes my life hard.
cdef void say_hello(name):
    print "Hello %s" % name
Using gcc to compile the code throws dozens of "undefined reference to" errors, and I'm pretty sure libpython.a is available (as the installation tutorial said, "undefined reference to" errors are thrown if this file is missing).
$ cython ctest.pyx
$ gcc ctest.c -I"C:\Python27\include"
C:\Users\niklas\AppData\Local\Temp\cckThGrF.o:ctest.c:(.text+0x1038): undefined reference to `_imp__PyString_FromStringAndSize'
C:\Users\niklas\AppData\Local\Temp\cckThGrF.o:ctest.c:(.text+0x1075): undefined reference to `_imp___Py_TrueStruct'
C:\Users\niklas\AppData\Local\Temp\cckThGrF.o:ctest.c:(.text+0x1086): undefined reference to `_imp___Py_ZeroStruct'
C:\Users\niklas\AppData\Local\Temp\cckThGrF.o:ctest.c:(.text+0x1099): undefined reference to `_imp___Py_NoneStruct'
C:\Users\niklas\AppData\Local\Temp\cckThGrF.o:ctest.c:(.text+0x10b8): undefined reference to `_imp__PyObject_IsTrue'
c:/program files/mingw/bin/../lib/gcc/mingw32/4.5.2/../../../libmingw32.a(main.o):main.c:(.text+0xd2): undefined reference to `WinMain@16'
collect2: ld returned 1 exit status
The weird thing is that using pyximport* or a setup script works pretty well, but neither is very handy while still working on a module.
How do I compile those .c files generated by Cython using gcc?
Or any other compiler; the important thing is that it works!
*pyximport: Is it normal that only Python-native functions and classes are contained in the imported module, and not cdef functions and classes?
like:
# filename: cython_test.pyx
cdef c_foo():
    print "c_foo !"

def foo():
    print "foo !"
    c_foo()
import pyximport as p; p.install()
import cython_test
cython_test.foo()
# foo !\nc_foo !
cython_test.c_foo()
# AttributeError, module object has no attribute c_foo
UPDATE
Calling $ gcc ctest.c "C:\Python27\libs\libpython27.a" kills the "undefined reference to" errors, except for this one:
c:/program files/mingw/bin/../lib/gcc/mingw32/4.5.2/../../../libmingw32.a(main.o):main.c:(.text+0xd2): undefined reference to `WinMain@16'
Try:
gcc -c -IC:\Python27\include -o ctest.o ctest.c
gcc -shared -LC:\Python27\libs -o ctest.pyd ctest.o -lpython27
-shared creates a shared library. -lpython27 links with the import library C:\Python27\libs\libpython27.a.
That is a linker (ld) error, not a compiler error. You should provide the library (-l) and its search path (-L), not only the headers (-I).