Building and installing for multiple versions of Python using autotools - python

I'm trying to figure out how to build my project for multiple versions of Python.
I'm testing this on Debian wheezy, where the default version of Python is 2.7,
but 2.6 is also supported and installed. However, automake only compiles
and installs for Python 2.7; I'd like it to compile for 2.6 as well.
Here are the configure.ac and Makefile.am. I can provide more details if necessary, but hopefully these will suffice.
I'm a beginner with Autotools, so there may be some obvious solution to this.
There is what appears to be a wishlist bug about this:
RFE: build against multiple python stacks. There is also
a similar discussion here: RFC: (automake) add support for dual python 2 / python 3 builds.
A proposed solution (which looks complicated) is given at
CDBS + Autotools + Python
This is configure.ac.
##################################################################################
# -*- Autoconf -*-
# Process this file with autoconf to produce a configure script.
AC_PREREQ([2.69])
AC_INIT([corrmodel], [0.1], [faheem@faheem.info])
AM_INIT_AUTOMAKE([foreign -Wall -Werror -Wno-extra-portability parallel-tests])
AM_MAINTAINER_MODE
AC_CONFIG_MACRO_DIR([m4])
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([Makefile])
# Checks for programs.
AC_PROG_CXX
AC_PROG_CC
AM_PATH_PYTHON([2.6])
# Checks for libraries.
AX_BOOST_BASE
AX_BOOST_PYTHON
# Checks for header files.
# Checks for typedefs, structures, and compiler characteristics.
# Checks for library functions.
#AC_CONFIG_SUBDIRS([hello/hello-2.9])
LT_INIT
AC_OUTPUT
##################################################################################
Here is Makefile.am.
##################################################################################
# Define primaries
noinst_LTLIBRARIES = libcommon.la
dist_sysconf_DATA = corrmodel/default_conf.yaml
COMMON_CPPFLAGS = -I /usr/include/python$(PYTHON_VERSION) -L/usr/lib/python$(PYTHON_VERSION)/config -ftemplate-depth-100 -fno-strict-aliasing -fno-common -ansi -Wextra -Wall -Werror -Wno-unused-function -Wc++0x-compat -Wshadow -Wpointer-arith -Wcast-qual -Wcast-align -std=c++11 -march=native -mtune=native -mfpmath=sse -msse3 -O3 -DBOOST_PYTHON_DYNAMIC_LIB -DBOOST_PYTHON_MAX_ARITY=20
libcommon_la_SOURCES = randutil.cc util.cc gendata_fn.cc model.cc score_fn.cc search_fn.cc pval.cc print.cc \
common.hh util.hh model.hh gendata_fn.hh randutil.hh score_fn.hh search_fn.hh pval.hh \
print.hh
COMMON_LDFLAGS = -lblitz -lRmath -lpython$(PYTHON_VERSION) -lm -lboost_python
libcommon_la_CPPFLAGS = $(COMMON_CPPFLAGS)
#libcommon_la_LDFLAGS = $(COMMON_LDFLAGS)
# name of Python library.
# See http://www.gnu.org/software/automake/manual/html_node/Python.html
pkgpyexec_LTLIBRARIES = cpplib.la
cpplib_la_SOURCES = cpparr_conv_pif.cc cppmap_conv_pif.cc cpppair_conv_pif.cc cppset_conv_pif.cc cpptup_conv_pif.cc \
cppvec_conv_pif.cc cppunicode_conv_pif.cc util_pif.cc gendata_fn_pif.cc model_pif.cc score_fn_pif.cc \
search_fn_pif.cc pval_pif.cc main.cc conv_pif.hh util_pif.hh gendata_fn_pif.hh score_fn_pif.hh search_fn_pif.hh
cpplib_la_CPPFLAGS = $(COMMON_CPPFLAGS)
cpplib_la_LDFLAGS = -module -avoid-version -shared $(COMMON_LDFLAGS)
cpplib_la_LIBADD = libcommon.la
ACLOCAL_AMFLAGS = -I m4
pkgpython_PYTHON = corrmodel/dbschema.py corrmodel/__init__.py corrmodel/init_schema.py corrmodel/modeldist.py corrmodel/crossval.py corrmodel/dbutils.py corrmodel/getmodel.py corrmodel/load.py corrmodel/utils.py
##################################################################################

Automake's job is to create a buildsystem, consisting of a configure script and a set of Makefiles, that detects the properties of the build environment in enough detail that the software package can be built for that environment.
A build environment might be nothing more than a specific set of configure options that run your compiler with different flags, or a different compiler on the same machine, or another operating system inside a container or a VM, or two racks full of build servers.
Defining, setting up, accessing, and maintaining those different build environments is outside Automake's scope. You need to do that yourself, with whatever tools you prefer.
Then you (or your tooling) can run the Automake-generated buildsystem within each of those build environments to build the software package.
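A minimal sketch of that for this question, assuming out-of-tree (VPATH) builds work for the package and that both interpreters are installed: run the generated buildsystem once per interpreter from separate build directories, setting the PYTHON variable that AM_PATH_PYTHON consults:
mkdir build-2.6 build-2.7
(cd build-2.6 && ../configure PYTHON=/usr/bin/python2.6 && make && make install)
(cd build-2.7 && ../configure PYTHON=/usr/bin/python2.7 && make && make install)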

Related

Is there a way to create a python module using SWIG C++ which can be imported in both Python2 and Python3

I'm writing a SWIG C++ file to generate a Python module. I want to let users import it in both Python 2 and Python 3 scripts. Since SWIG has different flags for binding Python 2 and Python 3, I was wondering whether there is a way to create a general module that works for both.
Let's rewind a little and keep SWIG itself out of the question to start with.
Historically it's been necessary to compile a Python module for every (sometimes even minor) version change of the Python interpreter. This led to PEP-384, which defined a stable ABI for a subset of the Python C-API from Python 3.2 onwards. So obviously that doesn't actually work for your request, because you're interested in 2 vs 3. Furthermore, SWIG itself doesn't actually generate PEP-384-compatible code.
To experiment further and see a bit more about what's going on I've made the following SWIG file:
%module test
%inline %{
char *frobinate(const char *in) {
    static char buf[1024];
    snprintf(buf, 1024, "World of hello: %s", in);
    return buf;
}
%}
If we try to compile this with -DPy_LIMITED_API, it fails:
swig3.0 -Wall -python test.i && gcc -Wall -Wextra -I/usr/include/python3.2 -shared -o _test.so test_wrap.c -DPy_LIMITED_API 2>&1|head
test_wrap.c: In function ‘SWIG_PyInstanceMethod_New’:
test_wrap.c:1114:3: warning: implicit declaration of function ‘PyInstanceMethod_New’ [-Wimplicit-function-declaration]
return PyInstanceMethod_New(func);
^
test_wrap.c:1114:3: warning: return makes pointer from integer without a cast
test_wrap.c: In function ‘SWIG_Python_UnpackTuple’:
test_wrap.c:1315:5: warning: implicit declaration of function ‘PyTuple_GET_SIZE’ [-Wimplicit-function-declaration]
Py_ssize_t l = PyTuple_GET_SIZE(args);
^
test_wrap.c:1327:2: warning: implicit declaration of function ‘PyTuple_GET_ITEM’ [-Wimplicit-function-declaration]
I.e. nobody has picked up that ticket, at least not in SWIG 3.0.2, the version I am testing with.
So where does that leave us? Well the SWIG 3.0 documentation on Python 3 support says something interesting:
SWIG is able to support Python 3.0. The wrapper code generated by SWIG can be compiled with both Python 2.x or 3.0. Further more, by passing the -py3 command line option to SWIG, wrapper code with some Python 3 specific features can be generated (see below subsections for details of these features). The -py3 option also disables some incompatible features for Python 3, such as -classic.
My reading of that statement is that if you want to generate source code that can be compiled with both 2.x and 3.x Python headers, all you need to do is omit the -py3 argument when running SWIG. And that seems to hold up in my testing; the same code generated by SWIG compiles just fine with all of:
$ gcc -Wall -Wextra -I/usr/include/python2.7/ -shared -o _test.so test_wrap.c
$ gcc -Wall -Wextra -I/usr/include/python3.2 -shared -o _test.so test_wrap.c
$ gcc -Wall -Wextra -I/usr/include/python3.4 -shared -o _test.so test_wrap.c
(Note that 3.4 does generate some warnings, but no errors and it does work)
The problem is that the compiled _test.so files are not interchangeable between versions. Trying to import a module built for one interpreter into another fails with errors from the dynamic linker about missing or undefined symbols. So we're still stuck at the initial problem: although we can compile our module for any given Python interpreter, we can't compile it once for all versions.
In my view the right way to deal with this is to use distutils and simply install your module into the search path of each version of Python you want to support on any given machine.
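For example, a sketch of that approach, assuming a conventional setup.py and that all three interpreters are installed:
for py in python2.7 python3.2 python3.4
do
    $py setup.py build --force install --user
done
Each invocation builds against that interpreter's headers and installs into that interpreter's own site-packages.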
That said, there is a workaround we could use. Essentially we want to build one version of our module per interpreter and use some generic code to switch between the various implementations. Assuming that you're not building with code generated from SWIG's -builtin option, there are two places we could try to do this:
In the shared object itself
In some Python code
The latter is substantially simpler to achieve, so let's look at that. I used the following bash script to build a shared object for each version of Python I intended to support:
#!/bin/bash
for v in 2.7 3.2 3.4
do
    d=$(echo $v|tr . _)
    mkdir -p $d
    touch $d/__init__.py
    gcc -Wall -Wextra -I/usr/include/python$v -shared -o $d/_test.so test_wrap.c
done
This builds 3 versions of _test.so, in directories named after the Python version they're intended to support. If we tried to import our SWIG generated module now it would complain because there's no indication of where to find the _test module we've compiled. So we need to fix that. There are three ways to do it:
Modify the SWIG generated import code. I didn't manage to make this work with SWIG 3.0.2 - perhaps it's too old?
Modify the Python search path, before the second import. We can do that using %pythonbegin:
%pythonbegin %{
import os.path
import sys
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), '%d_%d' % (sys.version_info.major, sys.version_info.minor)))
%}
Write a _test.py stub that finds the right one and then switches them around:
from sys import version_info,modules
from importlib import import_module
real_mod = import_module('%d_%d.%s' % (version_info.major, version_info.minor, __name__))
modules[__name__] = modules[real_mod.__name__]
Either of the last two options works and results in import test succeeding in all three versions of Python. But it's a bit of a cheat really; it's far better just to install from source three times over.

Installing the "ring.cx SIP client" on a Raspberry PI

The Situation
I would like to get terminal-based (headless) SIP calls working on my Raspberry Pi and I already tried this using linphone:
RaspberryPI: Making SIP outbound calls using linphonec or an alternative SIP soft phone
Since I am currently stuck there, I wanted to try another option, which was SFLPhone. They pointed me towards the Ring software project, which offers a daemon, dring, that allows making SIP calls through a scripting interface:
Indeed, the daemon can run standalone and be controlled using the DBus API.
Note the project has been renamed to "Ring" (the version is bumped to 2.x). Experimental packages are available at http://ring.cx/en/documentation/linux-installation
A major feature of Ring 2.x is the optional "DHT" account type, which allows making calls without any SIP server.
There are many other enhancements, such as ICE support, UPnP support, stability improvements, etc.
(Note that the clients are being rewritten (GTK3, Qt5) and there is a new OS X client; they are not yet feature-complete and are in heavy development.)
The source Git repo URI for the new dring daemon is: https://gerrit-ring.savoirfairelinux.com/ring
The DBus API is mostly the same as before. In the tools/dringctrl directory you will find an example python client that we use for testing (uses python3-dbus).
We are willing to fix any bugs you may find; the daemon bug tracker is here: https://projects.savoirfairelinux.com/projects/ring-daemon/issues
Also look at https://projects.savoirfairelinux.com/projects/ring/wiki for build instructions, etc.
Regards, and good luck with your embedded project,
A. B.
Compiling the dependencies
I tried to compile the dependencies for the project as stated in the README:
git clone https://gerrit-ring.savoirfairelinux.com/ring
cd ring
Compile the dependencies first
cd ../contrib/
rm -fr native/ && mkdir native
cd native
../bootstrap
make
I got this error:
libvpx.webm-4640a0c4804b/third_party/googletest/src/include/gtest/gtest.h
mv libvpx-4640a0c4804b49f1870d5a2d17df0c7d0a77af2f libvpx && touch libvpx
cd libvpx && CROSS= ./configure --target=armv7-linux-gcc \
--as=yasm --disable-docs --disable-examples --disable-unit-tests --disable-install-bins --disable-install-docs --enable-realtime-only --enable-error-concealment --disable-runtime-cpu-detect --disable-webm-io --enable-pic --prefix=/home/pi/ring/contrib/arm-linux-gnueabihf
disabling docs
disabling examples
disabling unit_tests
disabling install_bins
disabling install_docs
enabling realtime_only
enabling error_concealment
disabling runtime_cpu_detect
disabling webm_io
enabling pic
Configuring selected codecs
enabling vp8_encoder
enabling vp8_decoder
enabling vp9_encoder
enabling vp9_decoder
Configuring for target 'armv7-linux-gcc'
enabling armv7
enabling neon
enabling neon_asm
enabling media
Unable to invoke compiler: arm-none-linux-gnueabi-gcc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
Configuration failed. This could reflect a misconfiguration of your
toolchains, improper options selected, or another problem. If you
don't see any useful error messages above, the next step is to look
at the configure error log file (config.log) to determine what
configure was trying to do when it died.
../../contrib/src/vpx/rules.mak:105: recipe for target '.vpx' failed
make: *** [.vpx] Error 1
Compiling ring
Although compiling the dependencies had failed, I attempted to compile ring anyway:
git clone https://gerrit-ring.savoirfairelinux.com/ring
cd ring
./autogen.sh
./configure
make
make install
This caused the following error:
checking for PJPROJECT... no
configure: error: Missing pjproject files
pi@phone ~/ring $ make
make: *** No targets specified and no makefile found. Stop.
pi@phone ~/ring $ make install
make: *** No rule to make target 'install'. Stop.
So currently I am stuck, and I fear that I will not be able to go beyond the current state of my project.
Edit: Now, without the video codecs (as aberaud suggested), I run into the following error:
/bin/bash ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I.. -I../include/opendht -I/home/pi/sip-desaster/ring/contrib/arm-linux-gnueabihf/include -fPIC -I/home/pi/sip-desaster/ring/contrib/arm-linux-gnueabihf/include -g -fPIC -O3 -std=c++0x -I/home/pi/sip-desaster/ring/contrib/arm-linux-gnueabihf/include -g -fPIC -O3 -std=c++0x -c -o libopendht_la-dht.lo `test -f 'dht.cpp' || echo './'`dht.cpp
libtool: compile: g++ -DHAVE_CONFIG_H -I. -I.. -I../include/opendht -I/home/pi/sip-desaster/ring/contrib/arm-linux-gnueabihf/include -fPIC -I/home/pi/sip-desaster/ring/contrib/arm-linux-gnueabihf/include -g -fPIC -O3 -std=c++0x -I/home/pi/sip-desaster/ring/contrib/arm-linux-gnueabihf/include -g -fPIC -O3 -std=c++0x -c dht.cpp -fPIC -DPIC -o libopendht_la-dht.o
In file included from ../include/opendht/dht.h:29:0,
from dht.cpp:27:
../include/opendht/infohash.h:58:22: error: expected initializer before ‘:’ token
dht.cpp:3105:1: error: expected ‘}’ at end of input
Makefile:386: recipe for target 'libopendht_la-dht.lo' failed
make[2]: *** [libopendht_la-dht.lo] Error 1
make[2]: Leaving directory '/home/pi/sip-desaster/ring/contrib/native/opendht/src'
Makefile:395: recipe for target 'install-recursive' failed
make[1]: *** [install-recursive] Error 1
make[1]: Leaving directory '/home/pi/sip-desaster/ring/contrib/native/opendht'
../../contrib/src/opendht/rules.mak:28: recipe for target '.opendht' failed
make: *** [.opendht] Error 2
Contrib
The "contrib" part of the Ring build is about building dependencies that are not available on the target system, mostly used to build full working packages when cross-compiling or for systems without proper dependency management (basically all OSs except Linux distros).
When in the contrib/native directory, you can run make list to see the list of packages to be built. Since you are cross-compiling you will need to build almost everything.
If you somehow mess up the contrib build you can safely delete the contrib/native and contrib/{target_tuple} (if any) directories and start again.
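(If you need that reset, here is a sketch of the sequence, assuming the arm-linux-gnueabihf target tuple that appears in the logs above:)
cd ring/contrib
rm -rf native arm-linux-gnueabihf
mkdir native && cd native
../bootstrap
make list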
libvpx
The libvpx library is used by libav to provide the vp8 and vp9 video codecs. It's not a hard dependency, and since your project doesn't use video you can safely disable it. We have also encountered issues when cross-compiling vpx.
In contrib/src/libav/rules.mak, line 70, DEPS_libav defines the list of dependencies for libav. You can remove vpx, x264 and $(DEPS_vpx) from the list, since you don't use video. You may also add the speex and opus audio codecs to the list (they should be in the list but aren't; see this patch as an example).
After cleaning contrib as described above and bootstrapping again, vpx and x264 shouldn't show up in the list of "To-be-built packages" when running make list. Then try to build contrib by running make.
If, after trying this, you encounter the same issue with other packages, you may have some sort of cross-compilation build path issue (I would then need more logs/details).
As a very last resort, compiling on the Pi itself (with Raspbian) is horribly slow, but it has the advantage of using local distro-provided dependencies and removing the hassles of cross-compilation.
Good luck
I am providing my own answer to document the steps that will hopefully lead me to the desired end result:
Download the sources
git clone https://gerrit-ring.savoirfairelinux.com/ring
cd ring
Build the contrib section
According to @aberaud's remarks, I can update contrib/src/libav/rules.mak and remove any video-related dependencies (remember that I am headless).
So I changed line 70 from
DEPS_libav = zlib x264 vpx $(DEPS_vpx)
to
DEPS_libav = zlib opus speex
Now build the contrib section.
cd ../contrib/
rm -fr native/ && mkdir native
cd native
../bootstrap
make
I was hitting the same error trying to compile the contrib.
The version of Raspbian I was using came with an older version of the gcc compiler, 4.6. After I upgraded to 4.8, it compiled instantly. Well, as instantly as anything compiles on the Pi, at any rate.

Python extension debugging

I'm trying to debug an extension module for python that I wrote in C. I compiled it using the following:
python setup.py build -g install --user
I then debug with:
gdb python
...
b py_node_make
run test.py
It breaks at py_node_make (one of the functions I defined), but then I try:
(gdb) print node
No symbol "node" in current context.
The function I'm trying to debug is:
static Python_node_t* py_node_make(node_t* node)
{
    Python_node_t* pyNode;
    pyNode = PyObject_New(Python_node_t, &t_node);
    pyNode->node = node;
    pyNode->borrowed = true;
    return pyNode;
}
For source debugging to work, your C extensions must be built with debug info (gcc -g). Since you're driving the compilation process with distutils, you can specify the compiler flags used through the CFLAGS environment variable (Installing Python Modules: Tweaking compiler/linker flags):
CFLAGS='-Wall -O0 -g' python setup.py build
Note that even though distutils defaults to a higher optimization level than -O0, you really shouldn't get that No symbol "node" in current context error as long as -g is passed, and most Python builds do pass -g by default.
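(To check which flags your interpreter was built with, and hence what distutils will use by default, you can ask distutils directly; Python 2 syntax, to match the asker's setup:)
python -c "from distutils import sysconfig; print sysconfig.get_config_var('CFLAGS')"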
The problem was the optimization. I'm not sure how to do it from the command line, but in the setup.py script I just added extra_compile_args=['-O0'] to the Extension constructor, and everything worked.
I would still like (and would accept) an answer involving a command-line argument (something after python setup.py build) that accomplishes the same thing, so I don't have to have a compiler-specific line in my setup.py file.
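(For the record, a command-line-only variant, per the CFLAGS approach described in the answers to the next question: with gcc the last -O option wins, so an -O0 appended via the environment overrides the default -O2 without any compiler-specific line in setup.py:)
CFLAGS='-O0' python setup.py build -g install --user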

Cython and fortran - how to compile together without f2py

FINAL UPDATE
This question is about how to write a setup.py that will compile a cython module which accesses FORTRAN code directly, like C would. It was a rather long and arduous journey to the solution, but the full mess is included below for context.
ORIGINAL QUESTION
I have an extension consisting of a Cython file, which sets up some heap memory and passes it to the Fortran code, and a Fortran file, a venerable old module that I'd like to avoid reimplementing if I can.
The .pyx file compiles fine to C, but the build chokes on the .f90 file with the following error:
$ python setup.py build_ext --inplace
running build_ext
cythoning delaunay/__init__.pyx to delaunay/__init__.c
building 'delaunay' extension
error: unknown file type '.f90' (from 'delaunay/stripack.f90')
Here's (the top half of) my setup file:
from distutils.core import setup, Extension
from Cython.Distutils import build_ext
ext_modules = [
    Extension("delaunay",
              sources=["delaunay/__init__.pyx",
                       "delaunay/stripack.f90"])
]
setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules,
    ...
)
NOTE: I originally had the fortran file's location incorrectly specified (without the directory prefix), but the build breaks in exactly the same way after fixing that.
Things I have tried:
I found this, and tried passing in the name of the fortran compiler (i.e. gfortran) like this:
$ python setup.py config --fcompiler=gfortran build_ext --inplace
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: option --fcompiler not recognized
And I've also tried removing --inplace, in case that was the problem (it wasn't, same as the top error message).
So, how do I compile this fortran? Can I hack it into a .o myself and get away with linking it? Or is this a bug in Cython, which will force me to reimplement distutils or hack around with the preprocessor?
UPDATE
So, having checked out the numpy.distutils packages, I understand the problem a bit more. It seems that you have to
Use cython to convert the .pyx files to cpython .c files,
Then use an Extension/setup() combination that supports fortran, like numpy's.
Having tried this, my setup.py now looks like this:
from numpy.distutils.core import setup
from Cython.Build import cythonize
from numpy.distutils.extension import Extension
cy_modules = cythonize('delaunay/sphere.pyx')
e = cy_modules[0]
ext_modules = [
    Extension("delaunay.sphere",
              sources=e.sources + ['delaunay/stripack.f90'])
]
setup(
    ext_modules = ext_modules,
    name="delaunay",
    ...
)
(note that I've also restructured the module a bit, since seemingly an __init__.pyx is disallowed...)
Now is where things become buggy and platform-dependent. I have two testing systems available - one Mac OS X 10.6 (Snow Leopard), using Macports Python 2.7, and one Mac OS X 10.7 (Lion) using the system python 2.7.
On Snow Leopard, the following applies:
This means that the module compiles (hurray!) (although there's no --inplace for numpy, it seems, so I had to install the testing module system-wide :/), but I still get a crash on import, as follows:
>>> import delaunay
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "<snip>site-packages/delaunay/__init__.py", line 1, in <module>
from sphere import delaunay_mesh
ImportError: dlopen(<snip>site-packages/delaunay/sphere.so, 2): no suitable image found. Did find:
<snip>site-packages/delaunay/sphere.so: mach-o, but wrong architecture
and on Lion, I get a link error, following a rather confusing-looking compile line:
gfortran:f77: build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.f
/usr/local/bin/gfortran -Wall -arch i686 -arch x86_64 -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/fortranobject.o build/temp.macosx-10.7-intel-2.7/delaunay/stripack.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.o -lgfortran -o build/lib.macosx-10.7-intel-2.7/delaunay/sphere.so
ld: duplicate symbol _initsphere in build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o and build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o for architecture i386
ld: duplicate symbol _initsphere in build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o and build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o for architecture x86_64
Now let's just step back a moment before we pore over the details here. Firstly, I know there are a bunch of headaches over architecture clashes in 64-bit Mac OS X; I had to work very hard to get Macports Python working on the Snow Leopard machine (just to upgrade from the system Python 2.6). I also know that when you see gfortran -arch i686 -arch x86_64 you are sending mixed messages to your compiler. There are all manner of platform-specific problems buried in there that we don't need to worry about in the context of this question.
But let's just look at this line:
gfortran:f77: build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.f
what is numpy doing?! I don't need any f2py features in this build! I actually wrote a cython module in order to avoid dealing with f2py's insanity (I need to have 4 or 5 output variables, as well as neither-in-nor-out arguments - neither of which is well supported in f2py.) I just want it to compile .c -> .o, and .f90 -> .o and link them. I could write this compiler line myself if I knew how to include all the relevant headers.
Please tell me I don't need to write my own makefile for this... or that there's a way to translate Fortran to (output-compatible) C, so I can avoid Python ever seeing the .f90 extension (which would fix the whole problem). Note that f2c is not suitable here, as it only works on F77 and this is a more modern dialect (hence the .f90 file extension).
UPDATE 2
The following bash script will happily compile and link the code in place:
PYTHON_H_LOCATION="/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/"
cython sphere.pyx
gcc -arch x86_64 -c sphere.c -I$PYTHON_H_LOCATION
gfortran -arch x86_64 -c stripack.f90
gfortran -arch x86_64 -bundle -undefined dynamic_lookup -L/opt/local/lib *.o -o sphere.so
Any advice on how to make this kind of hack compatible with a setup.py? I don't want anyone installing this module to have to go find Python.h manually...
UPDATE: I've created a project on GitHub which wraps up this generation of compile lines by hand; it's called complicated_build.
UPDATE 2: in fact, "generating by hand" is a really bad idea, as it's platform-specific. The project now reads the values from the distutils.sysconfig module, i.e. the settings used to compile Python (exactly what we want); the only settings that are guessed are the Fortran compiler and the file extensions (both user-configurable). I suspect it is reimplementing a fair bit of distutils now!
The way to do this is to write your own compiler lines and hack them into your setup.py. I show an example below which works for my (very simple) case, with the following structure:
imports
cythonize() any .pyx files, so you only have fortran and C files.
define a build() function which compiles your code:
maybe some easy-to-change constants, like compiler names and architecture
list up the fortran and C files
generate the shell commands that will build the modules
add the linker line
run the shell commands.
if the command was install and the target doesn't exist yet, build it.
run setup (which will build the pure python sections)
if the command was build, run the build now.
My implementation is shown below. It's designed for only one extension module, and it recompiles all the files every time, so it may require further extension to be of more general use. Also note that I've hard-coded various Unix /s, so if you're porting this to Windows, make sure you adapt it or use os.path.sep.
from distutils.core import setup
from distutils.sysconfig import get_python_inc
from Cython.Build import cythonize
import sys, os, shutil
cythonize('delaunay/sphere.pyx')
target = 'build/lib/delaunay/sphere.so'
def build():
    fortran_compiler = 'gfortran'
    c_compiler = 'gcc'
    architecture = 'x86_64'
    python_h_location = get_python_inc()
    build_temp = 'build/custom_temp'
    global target
    try:
        shutil.rmtree(build_temp)
    except OSError:
        pass
    os.makedirs(build_temp)  # if you get an error here, please ensure the build/
                             # folder is writable by this user.
    c_files = ['delaunay/sphere.c']
    fortran_files = ['delaunay/stripack.f90']
    c_compile_commands = []
    for cf in c_files:
        # use the path (sans /s), without the extension, as the object file name:
        components = os.path.split(cf)
        name = components[0].replace('/', '') + '.'.join(components[1].split('.')[:-1])
        c_compile_commands.append(
            c_compiler + ' -arch ' + architecture + ' -I' + python_h_location + ' -o ' +
            build_temp + '/' + name + '.o -c ' + cf
        )
    fortran_compile_commands = []
    for ff in fortran_files:
        # prefix with f in case of name collisions with c files:
        components = os.path.split(ff)
        name = components[0].replace('/', '') + 'f' + '.'.join(components[1].split('.')[:-1])
        fortran_compile_commands.append(
            fortran_compiler + ' -arch ' + architecture + ' -o ' + build_temp +
            '/' + name + '.o -c ' + ff
        )
    commands = c_compile_commands + fortran_compile_commands + [
        fortran_compiler + ' -arch ' + architecture +
        ' -bundle -undefined dynamic_lookup ' + build_temp + '/*.o -o ' + target
    ]
    for c in commands:
        os.system(c)

if 'install' in sys.argv and not os.path.exists(target):
    try:
        os.makedirs('build/lib/delaunay')
    except OSError:
        # we don't care if the containing folder already exists.
        pass
    build()

setup(
    name="delaunay",
    version="0.1",
    ...
    packages=["delaunay"]
)

if 'build' in sys.argv:
    build()
This could be wrapped up into a new Extension class, I guess, with its own build_ext command - an exercise for the advanced student ;)
Simply build and install your vintage Fortran library outside of Python, then link to it in distutils. Your question indicates that you do not intend to tamper with this library, so a once-and-for-all install will probably do (using the library's own build and installation instructions). Then link the Python extension to the installed external library:
ext_modules = [
    Extension("delaunay",
              sources = ["delaunay/__init__.pyx"],
              libraries = ["delaunay"])
]
This approach is also safe in case you later realize that you need wrappers for other languages as well, such as Matlab, Octave, or IDL.
Update
At some point, if you end up with more than a few such external libraries to wrap, it is advantageous to add a top-level build system that installs all these libraries and manages the building of all the wrappers as well. I use cmake for this purpose, which is great at handling system-wide builds and installations. It cannot build Python stuff out of the box, but it can easily be taught to call "python setup.py install" in each python subdirectory, thus invoking distutils. So the overall build process looks like this:
mkdir build
cd build
cmake ..
make
make install
make python
(make octave)
(make matlab)
It is very important to always separate the core library code from the wrappers for specific front-end languages (also in your own projects!), since the latter tend to change rather fast. What happens otherwise can be seen in the example of numpy: instead of one general-purpose C library libndarray.so with thin wrappers for Python, there are Python C API calls everywhere in the sources. This is what is now holding back PyPy as a serious alternative to CPython: in order to get numpy, they have to support every last bit of the CPython API, which they can't do, since they have a just-in-time compiler and a different garbage collector. This means we are missing out on a lot of potential improvements.
Bottom line:
Build general purpose Fortran/C libraries separately and install them system-wide.
Have a separate build step for the wrappers, which should be kept as lightweight as possible, so that it's easy to adapt for the next big language X that comes about. If there is one safe assumption, it is that X will support linking with C libraries.
You can build the object file outside of distutils, then include it at the linking step using the extra_objects argument to Extension's constructor. In setup.py:
...
e = Extension(..., extra_objects = ['holycode.o'])
...
On the command prompt:
# gfortran -c -fPIC holycode.f
# ./setup.py build_ext ...
With only one external object, this will be the easiest way for many.

How may I override the compiler (GCC) flags that setup.py uses by default?

I understand that setup.py uses the same CFLAGS that were used to build Python. I have a single C extension of ours that is segfaulting. I need to build it without -O2, because -O2 is optimizing out some values and code, so that the core files are not sufficient to pin down the problem.
I just need to modify setup.py so that -O2 is not used.
I've read the distutils documentation, in particular distutils.ccompiler and distutils.unixccompiler, and I see how to add flags, libs, and includes, but not how to modify the default GCC flags.
Specifically, this is for a legacy product on Python 2.5.1 with a bunch of backports (Fedora 8, yes, I know...). No, I cannot change the OS or Python version, and I cannot, without great difficulty, recompile Python. I just need a one-off build of the C extension for one customer whose environment is the only one segfaulting.
Prepend CFLAGS="-O0" before you run setup.py:
% CFLAGS="-O0" python ./setup.py
The -O0 will be appended to CFLAGS during compilation and will therefore override the earlier -O2 setting.
Another way is to add -O0 to extra_compile_args in setup.py:
moduleA = Extension('moduleA', .....,
                    include_dirs = ['/usr/include', '/usr/local/include'],
                    extra_compile_args = ["-O0"],
                    )
If you want to remove all default flags, use:
% OPT="" python ./setup.py
I ran into this problem when I needed to fully remove a flag (-pipe) so I could compile SciPy on a low-memory system. I found that, as a hack, I could remove unwanted flags by editing /usr/lib/pythonN.N/_sysconfigdata.py to remove every instance of that flag, where N.N is your Python version. There are a lot of duplicates, and I'm not sure which are actually used by setup.py.
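(A sketch of that hack, assuming Python 2.7 and GNU sed; the -i.bak keeps a backup of the original file, and the path follows the pattern given above:)
sudo sed -i.bak 's/ -pipe//g' /usr/lib/python2.7/_sysconfigdata.py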
distutils/setuptools allows arbitrary compiler/linker flags to be specified with the extra_compile_args/extra_link_args arguments when defining a Python extension in a setup.py script. These extra flags are added after the default ones and therefore override any mutually exclusive flags that appear earlier.
For regular use, however, this is of limited usefulness, as a package you distribute through PyPI may be built by different compilers with incompatible options.
The following code allows you to specify these options in an extension- and compiler-specific way:
from setuptools import setup
from setuptools.command.build_ext import build_ext
class build_ext_ex(build_ext):
    extra_compile_args = {
        'extension_name': {
            'unix': ['-O0'],
            'msvc': ['/Od']
        }
    }
    def build_extension(self, ext):
        extra_args = self.extra_compile_args.get(ext.name)
        if extra_args is not None:
            ctype = self.compiler.compiler_type
            ext.extra_compile_args = extra_args.get(ctype, [])
        build_ext.build_extension(self, ext)

setup(
    ...
    cmdclass = {'build_ext': build_ext_ex},
    ...
)
Of course you could simplify it if you want all extensions to use the same (but still compiler-specific) options.
Here is a list of supported compiler types (as returned by setup.py build_ext --help-compiler):
--compiler=bcpp Borland C++ Compiler
--compiler=cygwin Cygwin port of GNU C Compiler for Win32
--compiler=mingw32 Mingw32 port of GNU C Compiler for Win32
--compiler=msvc Microsoft Visual C++
--compiler=unix standard UNIX-style compiler
