Programmatically testing for OpenMP support from a Python setup script - python

I'm working on a Python project that uses Cython and C to speed up time-sensitive operations. In a few of our Cython routines, we use OpenMP to further speed up an operation if idle cores are available.
This leads to a bit of an annoying situation on OS X, since the default compiler for recent OS versions (llvm/clang on 10.7 and 10.8) doesn't support OpenMP. Our stopgap solution is to tell people to set gcc as their compiler when they build. We'd really like to handle this programmatically instead, since clang can build everything else with no issues.
Right now, compilation will fail with the following error:
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: Command "cc -bundle -undefined dynamic_lookup -L/usr/local/lib -L/usr/local/opt/sqlite/lib build/temp.macosx-10.8-x86_64-2.7/yt/utilities/lib/geometry_utils.o -lm -o yt/utilities/lib/geometry_utils.so -fopenmp" failed with exit status 1
The relevant portion of our setup script looks like this:
config.add_extension("geometry_utils",
                     ["yt/utilities/lib/geometry_utils.pyx"],
                     extra_compile_args=['-fopenmp'],
                     extra_link_args=['-fopenmp'],
                     libraries=["m"],
                     depends=["yt/utilities/lib/fp_utils.pxd"])
The full setup.py file is here.
Is there a way to programmatically test for openmp support from inside the setup script?

I was able to get this working by checking to see if a test program compiles:
import os, tempfile, subprocess, shutil

# see http://openmp.org/wp/openmp-compilers/
omp_test = \
r"""
#include <omp.h>
#include <stdio.h>
int main() {
#pragma omp parallel
printf("Hello from thread %d, nthreads %d\n", omp_get_thread_num(), omp_get_num_threads());
}
"""

def check_for_openmp():
    tmpdir = tempfile.mkdtemp()
    curdir = os.getcwd()
    os.chdir(tmpdir)

    filename = r'test.c'
    with open(filename, 'w', 0) as file:
        file.write(omp_test)
    with open(os.devnull, 'w') as fnull:
        result = subprocess.call(['cc', '-fopenmp', filename],
                                 stdout=fnull, stderr=fnull)

    os.chdir(curdir)
    # clean up
    shutil.rmtree(tmpdir)

    return result
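Since subprocess.call returns the compiler's exit status, a return value of 0 means the test program compiled and OpenMP is available. A minimal sketch of how that result might be wired into the extension definition from the question (the conditional flag lists are an assumption, not part of the original setup script):

# Sketch: only pass -fopenmp when the test compile above succeeded.
if check_for_openmp() == 0:
    omp_args = ['-fopenmp']
else:
    omp_args = []

config.add_extension("geometry_utils",
                     ["yt/utilities/lib/geometry_utils.pyx"],
                     extra_compile_args=omp_args,
                     extra_link_args=omp_args,
                     libraries=["m"],
                     depends=["yt/utilities/lib/fp_utils.pxd"])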

Related

BoostPython and CMake

I have successfully followed this example for how to connect C++ and python. It works fine when I use the given Makefile. When I try to use cmake instead, it does not go as well.
C++ Code:
#include <boost/python.hpp>
#include <iostream>

extern "C"
char const* greet()
{
    return "hello, world";
}

BOOST_PYTHON_MODULE(hello_ext)
{
    using namespace boost::python;
    def("greet", greet);
}

int main(){
    std::cout << greet() << std::endl;
    return 0;
}
Makefile:
# location of the Python header files
PYTHON_VERSION = 27
PYTHON_DOT_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_DOT_VERSION)

# location of the Boost Python include files and library
BOOST_INC = /usr/include
BOOST_LIB = /usr/lib/x86_64-linux-gnu/

# compile mesh classes
TARGET = hello_ext

$(TARGET).so: $(TARGET).o
	g++ -shared -Wl,--export-dynamic $(TARGET).o -L$(BOOST_LIB) -lboost_python-py$(PYTHON_VERSION) -L/usr/lib/python$(PYTHON_DOT_VERSION)/config-x86_64-linux-gnu -lpython$(PYTHON_DOT_VERSION) -o $(TARGET).so

$(TARGET).o: $(TARGET).cpp
	g++ -I$(PYTHON_INCLUDE) -I$(BOOST_INC) -fPIC -c $(TARGET).cpp
When I compile this I get a .so file that can be included in the script
import sys
sys.path.append('/home/myname/Code/Trunk/TestBoostPython/build/')
import libhello_ext_lib as hello_ext
print(hello_ext.greet())
I really want to use cmake instead of a manually written Makefile so I wrote this:
cmake_minimum_required(VERSION 3.6)
PROJECT(hello_ext)
# Find Boost
find_package(Boost REQUIRED COMPONENTS python-py27)
set(PYTHON_DOT_VERSION 2.7)
set(PYTHON_INCLUDE /usr/include/python2.7)
set(PYTHON_LIBRARY /usr/lib/python2.7/config-x86_64-linux-gnu)
include_directories(${PYTHON_INCLUDE} ${Boost_INCLUDE_DIRS})
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -lrt -O3")
SET(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
SET(LIBNAME hello_ext_lib)
add_library(${LIBNAME} SHARED src/hello_ext.cpp)
add_executable(${PROJECT_NAME} src/hello_ext.cpp)
TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${Boost_LIBRARIES} -lpython2.7 -fPIC)
TARGET_LINK_LIBRARIES(${LIBNAME} ${Boost_LIBRARIES} -lpython2.7 -fPIC -shared)
Here I currently specify the Python paths by hand, but I have also tried using find_package(PythonLibs) without success.
The program compiles fine, and the executable in ../bin/ runs correctly. However, when I run the Python script I always get:
ImportError: dynamic module does not define init function (initlibhello_ext_lib)
I found this, which says this can happen if the library and the executable have different names. That is indeed the case here, but how can I obtain the .so with the correct name?
I also tried compiling only the library and not the executable. That did not work either.
BOOST_PYTHON_MODULE(hello_ext) creates an init function "inithello_ext", which should correspond to a module "hello_ext". But you are trying to import "libhello_ext_lib".
Give the module the same name as the filename. E.g. BOOST_PYTHON_MODULE(libhello_ext_lib).

Run compiled python script with arguments in C++ (Cython)

What I am trying to achieve is to compile my Python script to a lib/dll and invoke it with arguments.
This is my setup.py file for the script:
from distutils.core import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("nu3k2nu4k.py")
)
Used this command to produce compilation outputs:
python setup.py build_ext --inplace
The following files were produced:
nu3k2nu4k.c
nu3k2nu4k.cp36-win_amd64.pyd
nu3k2nu4k.pyx
and a subdirectory named build which contains:
nu3k2nu4k.cp36-win_amd64.exp
nu3k2nu4k.cp36-win_amd64.lib
nu3k2nu4k.obj
How can I invoke the compiled script from c++ code with arguments?
I used Cython for the task, but that's not mandatory; Boost or other tools could be used as well (I'm doing this to make the source code inaccessible).
Edit:
Using the documentation provided by Mykola Shchetinin, I have managed to generate an API header file for my script. I basically added the cdef api keyword to my main function. While this does help me export the method name to C++, I still need a way to pass some arguments to my script. Since I pass string arguments, I was hoping for something similar to using PyRun to set argv and then parsing the arguments inside the script (I would like to avoid converting strings from C to a Python encoding if possible). Is there any easy way to pass those string arguments?
EDIT 2:
I got it working by following the sample provided by Mykola and directly compiling the generated C file instead of using the .lib.
I have tried to do that myself by converting char * to bytes in Cython (according to the docs).
This is what I came up with. I have the following setup (gcc and CentOS 7.2):
setup.py
from distutils.core import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("func.pyx")
)
main.c
#include "Python.h"
#include "func.h"
int main() {
Py_Initialize();
initfunc();
func("0123hello jorghy!!");
Py_Finalize();
return 0;
}
func.pyx
cdef public func(char * arg):
    cdef bytes py_bytes = arg
    print(str(py_bytes))
I build all this stuff with the following commands:
python setup.py build_ext --inplace
gcc -I/usr/include/python2.7 -lpython2.7 -Wall -Wextra -O2 -g -o test main.c func.c
Then when running the executable file test I get the following results:
0123hello jorghy!!
UPDATE
There is another way to approach this problem:
I have created a cython file test.pyx:
import sys

if __name__ == "__main__":
    print("hei")
    print(sys.argv)
And compiled it as an executable using the commands:
cython test.pyx --embed
gcc -I/usr/include/python2.7 -lpython2.7 -Wall -Wextra -O2 -g -o test test.c
Then you can call this code as an executable file:
$ ./test 1 2 3
hei
['./test', '1', '2', '3']
Then you can call it from C++ using std::system or other (better) methods.

How to tell which specific compiler will be invoked for a Python C extension: GCC or Clang?

I have a Python C++ extension that requires the following compilation flags when compiled using Clang on OS X:
CPPFLAGS='-std=c++11 -stdlib=libc++ -mmacosx-version-min=10.8'
LDFLAGS='-lc++'
Detecting OS X in my setup.py is easy enough. I can do this:
if sys.platform == 'darwin':
    compile_args.extend(['-mmacosx-version-min=10.8', '-stdlib=libc++'])
    link_args.append('-lc++')
(See here for full context)
However, these compilation flags are invalid for GCC, so the build will fail if someone tries to use GCC on OS X with the setup.py written this way.
GCC and Clang support different compiler flags. So, I need to know which compiler will be invoked, so I can send different flags. What is the right way to detect the compiler in the setup.py?
Edit 1:
Note that no Python exception is raised for compilation errors:
$ python setup.py build_ext --inplace
running build_ext
building 'spacy.strings' extension
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -c spacy/strings.cpp -o build/temp.linux-x86_64-2.7/spacy/strings.o -O3 -mmacosx-version-min=10.8 -stdlib=libc++
gcc: error: unrecognized command line option ‘-mmacosx-version-min=10.8’
gcc: error: unrecognized command line option ‘-stdlib=libc++’
error: command 'gcc' failed with exit status 1
$
I hit upon your question as I need the same kind of switch. Besides, in my case, checking the platform is not enough, as the flags are for clang regardless of the platform.
I am not sure it is perfect, but here is what works best for me: I check whether a CC environment variable is set; if not, I check the value I believe distutils looks at.
Any better solution welcome!
import os
import distutils.sysconfig

try:
    clang = os.environ['CC'] == "clang"
except KeyError:
    clang = False

if clang or distutils.sysconfig.get_config_vars()['CC'] == 'clang':
    try:
        _ = os.environ['CFLAGS']
    except KeyError:
        os.environ['CFLAGS'] = ""
    os.environ['CFLAGS'] += " -Wno-unused-function"
    os.environ['CFLAGS'] += " -Wno-int-conversion"
    os.environ['CFLAGS'] += " -Wno-incompatible-pointer-types"
Note for the grumpy guys: I would have loved to use the extra_compile_args option, but it puts the flags at the wrong position in the clang compilation command.
Add the following code to your setup.py. It explicitly detects which flags are accepted by the compiler and then only those are added.
from setuptools import setup
from setuptools.command.build_ext import build_ext

# check whether the compiler supports a flag
def has_flag(compiler, flagname):
    import tempfile
    from distutils.errors import CompileError
    with tempfile.NamedTemporaryFile('w', suffix='.cpp') as f:
        f.write('int main (int argc, char **argv) { return 0; }')
        try:
            compiler.compile([f.name], extra_postargs=[flagname])
        except CompileError:
            return False
    return True

# filter flags, returns list of accepted flags
def flag_filter(compiler, *flags):
    result = []
    for flag in flags:
        if has_flag(compiler, flag):
            result.append(flag)
    return result

class BuildExt(build_ext):
    # these flags are not checked and always added
    compile_flags = {"msvc": ['/EHsc'], "unix": ["-std=c++11"]}

    def build_extensions(self):
        ct = self.compiler.compiler_type
        opts = self.compile_flags.get(ct, [])
        if ct == 'unix':
            # only add flags which pass the flag_filter
            opts += flag_filter(self.compiler,
                                '-fvisibility=hidden', '-stdlib=libc++', '-std=c++14')
        for ext in self.extensions:
            ext.extra_compile_args = opts
        build_ext.build_extensions(self)

setup(
    cmdclass=dict(build_ext=BuildExt),
    # other options...
)
The has_flag method was taken from this pybind11 example.
https://github.com/pybind/python_example
Here is a cross-compiler and cross-platform solution:
from setuptools import setup
from setuptools.command.build_ext import build_ext

class build_ext_ex(build_ext):
    extra_args = {
        'extension_name': {
            'clang': (
                ['-std=c++11', '-stdlib=libc++', '-mmacosx-version-min=10.8'],
                ['-lc++']
            )
        }
    }

    def build_extensions(self):
        # only Unix compilers and their ports have `compiler_so`
        compiler_cmd = getattr(self.compiler, 'compiler_so', None)
        # account for absolute path and Windows version
        if compiler_cmd is not None and 'clang' in compiler_cmd[0]:
            self.cname = 'clang'
        else:
            self.cname = self.compiler.compiler_type
        build_ext.build_extensions(self)

    def build_extension(self, ext):
        extra_args = self.extra_args.get(ext.name)
        if extra_args is not None:
            extra_args = extra_args.get(self.cname)
            if extra_args is not None:
                ext.extra_compile_args = extra_args[0]
                ext.extra_link_args = extra_args[1]
        build_ext.build_extension(self, ext)

setup(
    ...
    cmdclass = {'build_ext': build_ext_ex},
    ...
)
... and a list of supported compiler types (as returned by setup.py build_ext --help-compiler):
--compiler=bcpp Borland C++ Compiler
--compiler=cygwin Cygwin port of GNU C Compiler for Win32
--compiler=mingw32 Mingw32 port of GNU C Compiler for Win32
--compiler=msvc Microsoft Visual C++
--compiler=unix standard UNIX-style compiler
If you face the same problem as @xoolive, you could override build_extensions(self) only and append the options to the end of self.compiler.compiler_so and self.compiler.linker_so respectively, as in the sketch below.
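A minimal sketch of that variant, assuming a hypothetical extension and hard-coded flag lists (on Unix-style compilers, compiler_so and linker_so are lists of command-line parts, so extra options can simply be appended):

from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext

class build_ext_append(build_ext):
    # Hypothetical flag lists; adjust for your own extension.
    extra_compile = ['-std=c++11', '-stdlib=libc++', '-mmacosx-version-min=10.8']
    extra_link = ['-lc++']

    def build_extensions(self):
        compiler_cmd = getattr(self.compiler, 'compiler_so', None)
        if compiler_cmd is not None and 'clang' in compiler_cmd[0]:
            # appending puts the flags at the end of the compile/link commands
            self.compiler.compiler_so += self.extra_compile
            self.compiler.linker_so += self.extra_link
        build_ext.build_extensions(self)

setup(
    name='example',  # hypothetical package name
    ext_modules=[Extension('example.ext', sources=['example/ext.cpp'])],
    cmdclass={'build_ext': build_ext_append},
)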

Cython and fortran - how to compile together without f2py

FINAL UPDATE
This question is about how to write a setup.py that will compile a cython module which accesses FORTRAN code directly, like C would. It was a rather long and arduous journey to the solution, but the full mess is included below for context.
ORIGINAL QUESTION
I have an extension which is a Cython file, which sets up some heap memory and passes it to the fortran code, and a fortran file, which is a venerable old module that I'd like to avoid reimplementing if I can.
The .pyx file compiles fine to C, but the cython compiler chokes on the .f90 file with the following error:
$ python setup.py build_ext --inplace
running build_ext
cythoning delaunay/__init__.pyx to delaunay/__init__.c
building 'delaunay' extension
error: unknown file type '.f90' (from 'delaunay/stripack.f90')
Here's (the top half of) my setup file:
from distutils.core import setup, Extension
from Cython.Distutils import build_ext

ext_modules = [
    Extension("delaunay",
              sources=["delaunay/__init__.pyx",
                       "delaunay/stripack.f90"])
]

setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules,
    ...
)
NOTE: I originally had the fortran file's location incorrectly specified (without the directory prefix) but this breaks in exactly the same way after I fixed that.
Things I have tried:
I found this, and tried passing in the name of the fortran compiler (i.e. gfortran) like this:
$ python setup.py config --fcompiler=gfortran build_ext --inplace
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: option --fcompiler not recognized
And I've also tried removing --inplace, in case that was the problem (it wasn't, same as the top error message).
So, how do I compile this fortran? Can I hack it into a .o myself and get away with linking it? Or is this a bug in Cython, which will force me to reimplement distutils or hack around with the preprocessor?
UPDATE
So, having checked out the numpy.distutils packages, I understand the problem a bit more. It seems that you have to
Use cython to convert the .pyx files to cpython .c files,
Then use an Extension/setup() combination that supports fortran, like numpy's.
Having tried this, my setup.py now looks like this:
from numpy.distutils.core import setup
from Cython.Build import cythonize
from numpy.distutils.extension import Extension

cy_modules = cythonize('delaunay/sphere.pyx')
e = cy_modules[0]

ext_modules = [
    Extension("delaunay.sphere",
              sources=e.sources + ['delaunay/stripack.f90'])
]

setup(
    ext_modules = ext_modules,
    name="delaunay",
    ...
)
(note that I've also restructured the module a bit, since seemingly an __init__.pyx is disallowed...)
Now is where things become buggy and platform-dependent. I have two testing systems available - one Mac OS X 10.6 (Snow Leopard), using Macports Python 2.7, and one Mac OS X 10.7 (Lion) using the system python 2.7.
On Snow Leopard, the following applies:
This means that the module compiles (hurray!), although there seems to be no --inplace for numpy, so I had to install the testing module system-wide :/ but I still get a crash on import, as follows:
>>> import delaunay
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "<snip>site-packages/delaunay/__init__.py", line 1, in <module>
from sphere import delaunay_mesh
ImportError: dlopen(<snip>site-packages/delaunay/sphere.so, 2): no suitable image found. Did find:
<snip>site-packages/delaunay/sphere.so: mach-o, but wrong architecture
and on Lion, I get a compile error, following a rather confusing looking compile line:
gfortran:f77: build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.f
/usr/local/bin/gfortran -Wall -arch i686 -arch x86_64 -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/fortranobject.o build/temp.macosx-10.7-intel-2.7/delaunay/stripack.o build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.o -lgfortran -o build/lib.macosx-10.7-intel-2.7/delaunay/sphere.so
ld: duplicate symbol _initsphere in build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o and build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o for architecture i386
ld: duplicate symbol _initsphere in build/temp.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/delaunay/spheremodule.o and build/temp.macosx-10.7-intel-2.7/delaunay/sphere.o for architecture x86_64
Now let's just step back a moment before we pore over the details here. Firstly, I know there are a bunch of headaches over architecture clashes in 64-bit Mac OS X; I had to work very hard to get Macports Python working on the Snow Leopard machine (just to upgrade from system python 2.6). I also know that when you see gfortran -arch i686 -arch x86_64 you are sending mixed messages to your compiler. There are all manner of platform-specific problems buried in there, that we don't need to worry about in the context of this question.
But let's just look at this line:
gfortran:f77: build/src.macosx-10.7-intel-2.7/delaunay/sphere-f2pywrappers.f
what is numpy doing?! I don't need any f2py features in this build! I actually wrote a cython module in order to avoid dealing with f2py's insanity (I need to have 4 or 5 output variables, as well as neither-in-nor-out arguments - neither of which is well supported in f2py.) I just want it to compile .c -> .o, and .f90 -> .o and link them. I could write this compiler line myself if I knew how to include all the relevant headers.
Please tell me I don't need to write my own makefile for this... or that there's a way to translate fortran to (output-compatible) C so I can just avoid python ever seeing the .f90 extension (which fixes the whole problem.) Note that f2c is not suitable for this as it only works on F77 and this is a more modern dialect (hence the .f90 file extension).
UPDATE 2
The following bash script will happily compile and link the code in place:
PYTHON_H_LOCATION="/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/"
cython sphere.pyx
gcc -arch x86_64 -c sphere.c -I$PYTHON_H_LOCATION
gfortran -arch x86_64 -c stripack.f90
gfortran -arch x86_64 -bundle -undefined dynamic_lookup -L/opt/local/lib *.o -o sphere.so
Any advice on how to make this kind of hack compatible with a setup.py? I don't want anyone installing this module to have to go find Python.h manually...
UPDATE: I've created a project on GitHub which wraps up this generation of compile lines by hand. It's called complicated_build.
UPDATE 2: In fact, "generating by hand" is a really bad idea, as it's platform specific. The project now reads the values from the distutils.sysconfig module, which holds the settings used to compile Python itself (i.e. exactly what we want); the only settings that are guessed are the Fortran compiler and the file extensions (both user-configurable). I suspect it is reimplementing a fair bit of distutils now!
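For illustration, here is a rough sketch of what reading those values looks like (the exact set of config variables consulted by complicated_build may differ; this only shows that distutils exposes the interpreter's own build settings):

# Sketch: pull the settings Python itself was built with, instead of hard-coding them.
from distutils import sysconfig

cc = sysconfig.get_config_var('CC')              # C compiler used to build Python
ldshared = sysconfig.get_config_var('LDSHARED')  # command used to link extension modules
include_dir = sysconfig.get_python_inc()         # directory containing Python.h
ext_suffix = sysconfig.get_config_var('SO')      # extension-module filename suffix, e.g. '.so'

print(cc, ldshared, include_dir, ext_suffix)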
The way to do this is to write your own compiler lines and hack them into your setup.py. I show an example below which works for my (very simple) case, which has the following structure:
imports
cythonize() any .pyx files, so you only have fortran and C files.
define a build() function which compiles your code:
maybe some easy-to-change constants, like compiler names and architecture
list up the fortran and C files
generate the shell commands that will build the modules
add the linker line
run the shell commands.
if the command was install and the target doesn't exist yet, build it.
run setup (which will build the pure python sections)
if the command was build, run the build now.
my implementation of this is shown below. It's designed for only one extension module, and it recompiles all the files every time, so may require further extension to be of more general use. Also note that I've hard coded various unix /s, so if you're porting this to windows make sure you adapt or replace with os.path.sep.
from distutils.core import setup
from distutils.sysconfig import get_python_inc
from Cython.Build import cythonize
import sys, os, shutil

cythonize('delaunay/sphere.pyx')

target = 'build/lib/delaunay/sphere.so'

def build():
    fortran_compiler = 'gfortran'
    c_compiler = 'gcc'
    architecture = 'x86_64'
    python_h_location = get_python_inc()
    build_temp = 'build/custom_temp'
    global target

    try:
        shutil.rmtree(build_temp)
    except OSError:
        pass
    os.makedirs(build_temp)  # if you get an error here, please ensure the build/
                             # folder is writable by this user.

    c_files = ['delaunay/sphere.c']
    fortran_files = ['delaunay/stripack.f90']

    c_compile_commands = []
    for cf in c_files:
        # use the path (sans /s), without the extension, as the object file name:
        components = os.path.split(cf)
        name = components[0].replace('/', '') + '.'.join(components[1].split('.')[:-1])
        c_compile_commands.append(
            c_compiler + ' -arch ' + architecture + ' -I' + python_h_location + ' -o ' +
            build_temp + '/' + name + '.o -c ' + cf
        )

    fortran_compile_commands = []
    for ff in fortran_files:
        # prefix with f in case of name collisions with c files:
        components = os.path.split(ff)
        name = components[0].replace('/', '') + 'f' + '.'.join(components[1].split('.')[:-1])
        fortran_compile_commands.append(
            fortran_compiler + ' -arch ' + architecture + ' -o ' + build_temp +
            '/' + name + '.o -c ' + ff
        )

    commands = c_compile_commands + fortran_compile_commands + [
        fortran_compiler + ' -arch ' + architecture +
        ' -bundle -undefined dynamic_lookup ' + build_temp + '/*.o -o ' + target
    ]

    for c in commands:
        os.system(c)

if 'install' in sys.argv and not os.path.exists(target):
    try:
        os.makedirs('build/lib/delaunay')
    except OSError:
        # we don't care if the containing folder already exists.
        pass
    build()

setup(
    name="delaunay",
    version="0.1",
    ...
    packages=["delaunay"]
)

if 'build' in sys.argv:
    build()
This could be wrapped up into a new Extension class, I guess, with its own build_ext command - an exercise for the advanced student ;) (a rough sketch of that idea follows below).
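For illustration only, a minimal sketch of that wrapping (an assumption-laden outline, not the complicated_build project itself): a custom build_ext subclass that compiles any .f90 sources with gfortran first and hands the resulting objects to distutils through extra_objects:

# Sketch: compile Fortran sources to objects, then let distutils link them into the extension.
# Assumes gfortran is on the PATH; the extension name and paths are hypothetical.
import os, subprocess
from distutils.core import setup, Extension
from distutils.command.build_ext import build_ext

class build_ext_fortran(build_ext):
    fortran_compiler = 'gfortran'

    def build_extension(self, ext):
        fortran_sources = [s for s in ext.sources if s.endswith('.f90')]
        ext.sources = [s for s in ext.sources if not s.endswith('.f90')]
        for src in fortran_sources:
            obj = os.path.splitext(src)[0] + '.o'
            subprocess.check_call([self.fortran_compiler, '-fPIC', '-c', src, '-o', obj])
            ext.extra_objects.append(obj)
        build_ext.build_extension(self, ext)

setup(
    name="delaunay",
    ext_modules=[Extension("delaunay.sphere",
                           sources=["delaunay/sphere.c", "delaunay/stripack.f90"])],
    cmdclass={'build_ext': build_ext_fortran},
)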
Simply build and install your vintage Fortran library outside of Python, then link to it in distutils. Your question indicates that you do not intend to tamper with this library, so a once-and-for-all install will probably do (using the library's build and installation instructions). Then link the Python extension to the installed external library:
ext_modules = [
    Extension("delaunay",
              sources = ["delaunay/__init__.pyx"],
              libraries = ["delaunay"])
]
This approach is also safe for the case that you realize that you need wrappers for other languages as well, such as Matlab, Octave, IDL, ...
Update
At some point, if you end up with more than a few such external libraries that you want to wrap, it is advantageous to add a top-level build system that installs all these libraries and manages the building of all the wrappers as well. I use cmake for this purpose, which is great at handling system-wide builds and installations. It cannot build Python stuff out of the box, but it can easily be taught to call "python setup.py install" in each python subdirectory, thus invoking distutils. So the overall build process looks like this:
mkdir build
cd build
cmake ..
make
make install
make python
(make octave)
(make matlab)
It is very important to always separate core library code from wrappers for specific front-end languages (also for your own projects!), since they tend to change rather fast. What happens otherwise can be seen at the example of numpy: Instead of writing one general-purpose C library libndarray.so and creating thin wrappers for Python, there are Python C API calls everywhere in the sources. This is what is now holding back Pypy as a serious alternative to CPython, since in order to get numpy they have to support every last bit of the CPython API, which they can't do, since they have a just-in-time compiler and a different garbage collector. This means we are missing out on a lot of potential improvements.
Bottom line:
Build general purpose Fortran/C libraries separately and install them system-wide.
Have a separate build step for the wrappers, which should be kept as lightweight as possible, so that it's easy to adapt for the next big language X that comes about. If there is one safe assumption, it is that X will support linking with C libraries.
You can build the object file outside of distutils then include it at the linking step using the extra_objects argument to Extension's constructor. In setup.py:
...
e = Extension(..., extra_objects = ['holycode.o'])
...
On the command prompt:
# gfortran -c -fPIC holycode.f
# ./setup.py build_ext ...
With only one external object, this will be the easiest way for many.

Error setting up thrift modules for python

I'm trying to set up Thrift in order to use it with Cassandra. When I run setup.py, it outputs this message on the command line:
running build
running build_py
running build_ext
building 'thrift.protocol.fastbinary' extension
C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python26\include -IC:\Pytho
n26\PC -c src/protocol/fastbinary.c -o build\temp.win32-2.6\Release\src\protocol
\fastbinary.o
src/protocol/fastbinary.c:24:24: netinet/in.h: No such file or directory
src/protocol/fastbinary.c:85:4: #error "Cannot determine endianness"
src/protocol/fastbinary.c: In function `writeI16':
src/protocol/fastbinary.c:295: warning: implicit declaration of function `htons'
src/protocol/fastbinary.c: In function `writeI32':
src/protocol/fastbinary.c:300: warning: implicit declaration of function `htonl'
src/protocol/fastbinary.c: In function `readI16':
src/protocol/fastbinary.c:688: warning: implicit declaration of function `ntohs'
src/protocol/fastbinary.c: In function `readI32':
src/protocol/fastbinary.c:696: warning: implicit declaration of function `ntohl'
error: command 'gcc' failed with exit status 1
I need some help with this issue. I have already installed MinGW32.
Thanks.
With a bit of source file tweaking it is possible to install it with MinGW on Windows. I'm using Thrift 0.9.1 and Python 2.7.
The steps I followed were:
1) If you are using Python 2.7, follow the normal setup steps and workarounds for MinGW. In particular, you may need to open the file C:\Python27\Lib\distutils\cygwinccompiler.py and modify the class Mingw32CCompiler to remove all references to the -mno-cygwin option. This option is deprecated and will cause the compiler to halt with an error if it is left in.
2) Open fastbinary.c and add the following include statement:
#include <stdbool.h>
This provides the definitions for true / false, without which the compile will fail. (I assume they're included by default on MSVC?)
3) Modify the setup.py file to tell the linker to link to ws2_32.lib. This is done using a pragma comment on MSVC, but gcc doesn't support this option. So your ext_modules should look like:
ext_modules = [
    Extension('thrift.protocol.fastbinary',
              sources = ['src/protocol/fastbinary.c'],
              libraries = ['ws2_32'],
              include_dirs = include_dirs,
              )
],
4) Build as normal using the setup.py
On my setup, I didn't get much of a speed improvement when using the C extension rather than pure python (about a 5% difference), so the effort to do this might not be justified except in extreme cases.
I've only succeeded in installing Thrift with MSVC.
Install MSVC
Get Thrift
Apply thrift-252-python-msvc-1.diff patch (google it)
fastbinary.c will be patched, but the setup.py patch will fail; update it manually from the hints in setup.py.rej. Here's a (seemingly) correct copy:
from distutils.core import setup, Extension
import sys

libraries = []
if sys.platform == 'win32':
    libraries.append('ws2_32')

fastbinarymod = Extension('thrift.protocol.fastbinary',
                          sources = ['src/protocol/fastbinary.c'],
                          libraries = libraries,
                          )

setup(name = 'Thrift',
      version = '0.1',
      description = 'Thrift Python Libraries',
      author = 'Thrift Developers',
      author_email = 'thrift-dev@incubator.apache.org',
      url = 'http://incubator.apache.org/thrift/',
      license = 'Apache License 2.0',
      packages = [
          'thrift',
          'thrift.protocol',
          'thrift.transport',
          'thrift.server',
      ],
      package_dir = {'thrift' : 'src'},
      ext_modules = [fastbinarymod],
      )
The endianness test will fail; modify fastbinary.c (around line 68):
#ifdef _MSC_VER
#define __BYTE_ORDER __LITTLE_ENDIAN
#endif
After that, run python setup.py install; hopefully you'll get what you need.
Install python-dev
You can run:
sudo apt-get install python-dev
I would recommend executing the following command:
pip3 install thriftpy2
