Why does this Makefile rebuild every time?

I have a Makefile to build some simple Python bindings. Unfortunately, on a plain make or make all it rebuilds every time, even when py11_bindings.cpp has not changed. I checked whether the source file was perhaps being touched accidentally, but as far as I can tell that is not the case.
PYTHON = /Library/Frameworks/Python.framework/Versions/3.5/bin
CPP = c++
INC = -I/software/pybind11/include -I/software/eigen
PYTHONCFG = `$(PYTHON)/python3.5-config --cflags --ldflags`
SRC = py11_bindings.cpp
TARGET = _chain.so

all: $(SRC)
	$(CPP) -O3 -shared -std=c++11 $(INC) $(PYTHONCFG) $^ -o $(TARGET)

clean:
	rm $(TARGET)
I have absolutely no clue why this should happen.

I'm not a makefile expert, so maybe I am not using the correct terms.
However: your all: rule lists the source file as its prerequisite, and its recipe never creates a file named all, so make considers all out of date and reruns the recipe on every invocation. Instead, all should list the targets to create; then, for each target, you should list its dependencies and describe how to create it.
So, your makefile should look like this:
all: $(TARGET)

$(TARGET): $(SRC)
	$(CPP) -O3 -shared -std=c++11 $(INC) $(PYTHONCFG) $^ -o $(TARGET)
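
One small addition worth making (an aside, not part of the original answer): all and clean never name real files, so it is customary to declare them phony. Otherwise a stray file called all or clean in the directory would stop the corresponding recipe from ever running:

.PHONY: all clean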

Related

Is there a way to create a python module using SWIG C++ which can be imported in both Python2 and Python3

I'm writing a SWIG C++ file to generate a Python module. I want to let users import it in both Python 2 and Python 3 scripts. Since SWIG has different flags for binding Python 2 and Python 3, I was wondering whether there is a way to create one general module for both.
Let's rewind a little and keep SWIG itself out of the question to start with.
Historically it's been necessary to compile a Python module for every (sometimes even minor) version change of the Python interpreter. This led to PEP-384, which defined a stable ABI for a subset of the Python C-API from Python 3.2 onwards. So that doesn't actually work for your request, because you're interested in 2 vs 3. Furthermore, SWIG itself doesn't actually generate PEP-384 compatible code.
To experiment further and see a bit more about what's going on I've made the following SWIG file:
%module test

%inline %{
char *frobinate(const char *in) {
    static char buf[1024];
    snprintf(buf, 1024, "World of hello: %s", in);
    return buf;
}
%}
If we try to compile this with -DPy_LIMITED_API, it fails:
swig3.0 -Wall -python test.i && gcc -Wall -Wextra -I/usr/include/python3.2 -shared -o _test.so test_wrap.c -DPy_LIMITED_API 2>&1|head
test_wrap.c: In function ‘SWIG_PyInstanceMethod_New’:
test_wrap.c:1114:3: warning: implicit declaration of function ‘PyInstanceMethod_New’ [-Wimplicit-function-declaration]
return PyInstanceMethod_New(func);
^
test_wrap.c:1114:3: warning: return makes pointer from integer without a cast
test_wrap.c: In function ‘SWIG_Python_UnpackTuple’:
test_wrap.c:1315:5: warning: implicit declaration of function ‘PyTuple_GET_SIZE’ [-Wimplicit-function-declaration]
Py_ssize_t l = PyTuple_GET_SIZE(args);
^
test_wrap.c:1327:2: warning: implicit declaration of function ‘PyTuple_GET_ITEM’ [-Wimplicit-function-declaration]
I.e. nobody has picked up that ticket, at least not in the version of SWIG (3.0.2) that I am testing with.
So where does that leave us? Well the SWIG 3.0 documentation on Python 3 support says something interesting:
SWIG is able to support Python 3.0. The wrapper code generated by SWIG can be compiled with both Python 2.x or 3.0. Further more, by passing the -py3 command line option to SWIG, wrapper code with some Python 3 specific features can be generated (see below subsections for details of these features). The -py3 option also disables some incompatible features for Python 3, such as -classic.
My reading of that statement is that if you want to generate source code that can be compiled with both 2.x and 3.x Python headers, all you need to do is omit the -py3 argument when running SWIG. And that seems to hold up in my testing: the same code generated by SWIG compiles just fine with all of:
$ gcc -Wall -Wextra -I/usr/include/python2.7/ -shared -o _test.so test_wrap.c
$ gcc -Wall -Wextra -I/usr/include/python3.2 -shared -o _test.so test_wrap.c
$ gcc -Wall -Wextra -I/usr/include/python3.4 -shared -o _test.so test_wrap.c
(Note that 3.4 does generate some warnings, but no errors and it does work)
The problem is that there's no interchangeability between the compiled _test.so files of the different versions. Trying to import a module built for one version into another fails with errors from the dynamic linker about missing or undefined symbols. So we're still stuck at the initial problem: although we can compile our module for any Python interpreter, we can't compile it for all versions at once.
In my view the right way to deal with this is to use distutils and simply install your module into the search path of each version of Python you want to support on any given machine.
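For the example module above, that could be as small as the following sketch (an assumption on my part: the interface file shown earlier is saved as test.i; distutils runs SWIG itself for .i sources listed in an Extension):

# setup.py - a minimal sketch, assuming the SWIG interface above is in test.i
from distutils.core import setup, Extension

setup(
    name='test',
    # distutils invokes SWIG for .i sources; not passing -py3 keeps the
    # generated C compilable under both Python 2.x and 3.x headers
    ext_modules=[Extension('_test', sources=['test.i'])],
)

Running python2.7 setup.py install, python3.2 setup.py install, and so on then puts a separately compiled copy into each interpreter's search path (the SWIG-generated test.py needs installing alongside it as well).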
That said, there is a workaround we could use. Essentially we want to build multiple versions of our module, one per interpreter, and use some generic code to switch between the various implementations. Assuming that you're not building with code generated from SWIG's -builtin option, there are two places we could try to do this:
In the shared object itself
In some Python code
The latter is substantially simpler to achieve, so let's look at that. I used the following bash script to build a shared object for each version of Python I intended to support:
#!/bin/bash
for v in 2.7 3.2 3.4
do
    d=$(echo $v | tr . _)
    mkdir -p $d
    touch $d/__init__.py
    gcc -Wall -Wextra -I/usr/include/python$v -shared -o $d/_test.so test_wrap.c
done
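
After a run, the script leaves a layout like this (derived directly from the commands above):

2_7/__init__.py  2_7/_test.so
3_2/__init__.py  3_2/_test.so
3_4/__init__.py  3_4/_test.so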
This builds 3 versions of _test.so, in directories named after the Python version they're intended to support. If we tried to import our SWIG generated module now it would complain because there's no indication of where to find the _test module we've compiled. So we need to fix that. There are three ways to do it:
Modify the SWIG generated import code. I didn't manage to make this work with SWIG 3.0.2 - perhaps it's too old?
Modify the Python search path, before the second import. We can do that using %pythonbegin:
%pythonbegin %{
import os.path
import sys
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), '%d_%d' % (sys.version_info.major, sys.version_info.minor)))
%}
Write a _test.py stub that finds the right one and then switches them around:
from sys import version_info, modules
from importlib import import_module

real_mod = import_module('%d_%d.%s' % (version_info.major, version_info.minor, __name__))
modules[__name__] = modules[real_mod.__name__]
Either of the last two options worked and resulted in import test succeeding in all three versions of Python. But it's cheating a bit, really, and it's far better just to install from source three times over.

How to rebuild project after SWIG files changed?

Given the below makefile:
TARGET = _example.pyd
OFILES = example.obj example_wrap.obj
HFILES =

CC = cl
CXX = cl
LINK = link

CPPFLAGS = -DNDEBUG -DUNICODE -DWIN32 -I. -Id:\virtual_envs\py351\include
CFLAGS = -nologo -Zm200 -Zc:wchar_t- -FS -Zc:strictStrings -O2 -MD -W3 -w44456 -w44457 -w44458
CXXFLAGS = -nologo -Zm200 -Zc:wchar_t- -FS -Zc:strictStrings -D_HAS_EXCEPTIONS=0 -O2 -MD -W3 -w34100 -w34189 -w44996 -w44456 -w44457 -w44458 -wd4577
LFLAGS = /LIBPATH:. /NOLOGO /DYNAMICBASE /NXCOMPAT /DLL /MANIFEST /MANIFESTFILE:$(TARGET).manifest /SUBSYSTEM:WINDOWS /INCREMENTAL:NO
LIBS = /LIBPATH:d:\virtual_envs\py351\libs python35.lib

.SUFFIXES: .c .cpp .cc .cxx .C

{.}.cpp{}.obj::
	$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<

{.}.cc{}.obj::
	$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<

{.}.cxx{}.obj::
	$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<

{.}.C{}.obj::
	$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<

{.}.c{}.obj::
	$(CC) -c $(CFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<

all: $(TARGET)

$(OFILES): $(HFILES)

$(TARGET): $(OFILES)
	$(LINK) $(LFLAGS) /OUT:$(TARGET) @<<
$(OFILES) $(LIBS)
<<
	mt -nologo -manifest $(TARGET).manifest -outputresource:$(TARGET);2

install: $(TARGET)
	@if not exist d:\virtual_envs\py351\Lib\site-packages mkdir d:\virtual_envs\py351\Lib\site-packages
	copy /y $(TARGET) d:\virtual_envs\py351\Lib\site-packages\$(TARGET)

clean:
	-del $(TARGET)
	-del *.obj
	-del *.exp
	-del *.lib
	-del $(TARGET).manifest

test:
	python runme.py
I'd like to improve a couple of things here:
I'd like the makefile to take the SWIG files (*.i) into account. For example, every time a SWIG file has changed, a new wrap file should be generated (i.e. swig -python -c++ file_that_changed.i) and the project rebuilt.
I'd like to avoid hardcoding the object files. For instance, I'd like to pick up all cpp files using wildcards somehow.
I've read a little bit of the docs about Makefiles but I'm still pretty confused. How could I achieve this?
Right now I'm using a hacky solution like swig -python -c++ whatever_file.i && nmake, which of course is not ideal at all.
REFERENCES
Achieving this inside the Visual Studio IDE is quite easy following these steps, but I'd like to use this makefile inside SublimeText; that's why I'm quite interested in knowing how to write a proper Makefile.
Producing any kind of target from any kind of source, that's the essence of a makefile:
.i.cpp:
	swig -python -c++ $<
This elegance will, however, break with nmake (as opposed to GNU make) if the .cpp file is missing, because nmake doesn't try to chain inference rules through a missing link.
Moreover, it will break silently and "build" from stale versions of the files that are later in the build chain (which includes the resulting executable) if they are present.
Possible workarounds here (save for ditching nmake, of course) are:
invoke nmake multiple times: first, to generate all files that are intermediate steps between two inference rules (which can in turn require multiple invocations if they are generated from one another), and then for the final targets.
This requires an external script which can very well be another makefile. E.g.:
move the current Makefile to main_makefile and create a new Makefile with commands for the main target like this:
python -c "import os,os.path,subprocess;
subprocess.check_call(['nmake', '/F', 'main_makefile']
+[os.path.splitext(f)[0]+'.cpp'
for f in os.listdir('.') if os.path.isfile(f)
and f.endswith('.i')])"
nmake /F main_makefile
do not rely solely on inference rules but have an explicit rule for each .cpp to be produced (that's what CMake does btw)
this asks for the relevant part of the Makefile to be autogenerated. That part can be !INCLUDE'd, but still, external code is needed to do the generation before nmake gets to work on the result. Example code (again, in Python):
import os, os.path

for f in os.listdir('.'):
    if os.path.isfile(f) and f.endswith('.i'):
        print '"%s": "%s"' % (os.path.splitext(f)[0] + '.cpp', f)
        # quotes are to allow for special characters,
        # see https://msdn.microsoft.com/en-us/library/956d3677.aspx
        # no command is needed: it will be added from the inference rule given
        # in the beginning, see http://www.darkblue.ch/programming/Namke.pdf, p.41 (567)
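
For a hypothetical interface file chain.i, that print statement emits a dependency line such as:

"chain.cpp": "chain.i"

and nmake then fills in the command from the inference rule shown at the beginning.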
I have solved this using CMake and this translates directly to using autoconf and automake and thereby makefiles.
The idea is to introduce the following variable
DEPENDENCIES = `swig -M -python -c++ -I. example.i | sed 's/\\//g'`
and make your target depend on it. The command above generates a list of all the headers and .i files your SWIG interface file may include; the sed call strips the backslash line continuations from the output of swig -M.
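
One hedged sketch of wiring that in, assuming GNU make rather than nmake and modelled on the usual compiler dependency-file idiom (the example names are carried over from the snippet above, not from the question):

# regenerate the SWIG wrapper whenever the interface changes
example_wrap.cxx: example.i
	swig -python -c++ -I. example.i

# swig -M emits a make-style rule naming every header and .i file the
# interface pulls in; including it adds those files as prerequisites
example_wrap.d: example.i
	swig -M -python -c++ -I. example.i > $@

-include example_wrap.d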

Issue in Compling C++ method in Eclipse and calling C++ method from python

A simple C++ example class I want to talk to, in a file called foo.cpp:
#include <iostream>

class Foo{
    public:
        void bar(){
            std::cout << "Hello" << std::endl;
        }
};

Since ctypes can only talk to C functions, you need to provide them, declared as extern "C" (the class must be defined before these wrappers so that Foo is a complete type):

extern "C" {
    Foo* Foo_new(){ return new Foo(); }
    void Foo_bar(Foo* foo){ foo->bar(); }
}
Compile this to a shared library:
g++ -c -fPIC foo.cpp -o foo.o
g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o
Finally, I wrote a Python wrapper:
from ctypes import cdll
lib = cdll.LoadLibrary('./libfoo.so')

class Foo(object):
    def __init__(self):
        self.obj = lib.Foo_new()

    def bar(self):
        lib.Foo_bar(self.obj)

f = Foo()
f.bar()  # prints "Hello" on the screen
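
(As an aside for later readers, not part of the original question: by default ctypes assumes every function returns a C int, so on 64-bit systems the Foo* handle can be silently truncated. A slightly hardened sketch of the same wrapper declares the real signatures:)

from ctypes import cdll, c_void_p

lib = cdll.LoadLibrary('./libfoo.so')
# tell ctypes the actual C signatures instead of letting it assume 'int'
lib.Foo_new.restype = c_void_p
lib.Foo_bar.argtypes = [c_void_p]
lib.Foo_bar.restype = None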
"My main intension is to compile C++ code in eclipse and call the C++ function from python in Linux". This works fine when I compiled C++ code in Linux and call the C++ method from python in Linux. But it doesn't work if I compile C++ code in eclipse and call the C++ method from python in Linux.
Error message:
symbol not found
I am new to the Eclipse toolchain, but I am passing the compiler and linker options as in this:
g++ -c -fPIC foo.cpp -o foo.o
g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o
A snapshot of the Eclipse compiler and linker options would be highly appreciated. Please help me sort out this issue. Thanks in advance.
You need to create two projects in Eclipse.
First, a Makefile project with existing code (File->New->Makefile Project with Existing Code). In this project you must point to your foo.cpp file. Then in the project folder you must create a file named "Makefile", containing the following lines:
all:
	g++ -c -fPIC foo.cpp -o foo.o
	g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o

clean:
	rm -f libfoo.so
Then you must create targets ("all" and "clean") for this project in the "Make Target" window. If you don't see this window, open Window->Show View->Make Target. You can then build libfoo.so from Eclipse by double-clicking the "all" target in the "Make Target" view.
At this point you can create a PyDev project with your foo.py file. If you don't know PyDev, go to this site: it is an Eclipse plugin for the Python language. Once you have installed the plugin you will be able to work with your Python file inside Eclipse.

how to link a Python C module

I have written a Python C module (just ffmpeg.c, which depends on some FFmpeg libs and other libs) and I am wondering how to link it.
I'm compiling with:
cc -std=c99 -c ../ffmpeg.c -I /usr/include/python2.7 -g
I'm trying to link right now with:
ld -shared -o ../ffmpeg.so -L/usr/local/lib -lpython2.7 -lavutil -lavformat -lavcodec -lswresample -lportaudio -lchromaprint ffmpeg.o -lc
There is no error. However, when I try to import ffmpeg in Python, I get:
ImportError: ./ffmpeg.so: undefined symbol: avio_alloc_context
Maybe the command is already correct. I checked the resulting ffmpeg.so with ldd and it partly links against the wrong FFmpeg. This is strange, however, because -L/usr/local/lib should take precedence over the default paths. Maybe it is because my custom-installed FFmpeg (in /usr/local/lib) has for some reason installed only static *.a libs, and *.so files take precedence over *.a files.
You should put the libraries that you're linking to after the .o file; i.e.:
ld -shared -o ../ffmpeg.so ffmpeg.o -L/usr/local/lib -lpython2.7 -lavutil -lavformat -lavcodec -lswresample -lportaudio -lchromaprint -lc
The linker is dumb and will not pull in code from static libraries that it doesn't yet think is needed; it only links such code once a dependency arises. The use of avio_alloc_context happens in ffmpeg.o, and because the library is listed before that use, the linker never considers the library's code needed and never links it in. This is the biggest reason why linking with .a files fails.
You can also use --start-group and --end-group around all the files that you are linking; this lets you link static libraries with cross dependencies that seem impossible to resolve through other means:
ld -shared -o ../ffmpeg.so -L/usr/local/lib -lpython2.7 --start-group -lavutil -lavformat -lavcodec -lswresample -lportaudio -lchromaprint ffmpeg.o --end-group -lc
Using .a files is a little trickier than .so files, but these two techniques will generally work around any issues you hit when linking.

C ->Python Import Wrapper Problems

I have defined the name of my wrapper object in my C file blargUtils.c like this (I have defined its methods and the lot in Blargmethods)...
void initBlarg(){
    Py_InitModule("Blarg", Blargmethods);
}
I compiled it like so...
blarglib: blargUtils.c
	gcc -I/usr/include/python2.6 -fPIC -c blargUtils.c -Wall
	gcc -shared blargUtils.o -o blargUtils.so

clean:
	rm *.so
However, when I try to import the wrapper in my python script...
import Blarg
It says: "ImportError: No module named Blarg". I'm a little lost here and I don't understand why it cannot find the module when the spelling is exactly the same. Maybe it's a logic error?
If more code is needed let me know.
First of all, from looking at the comments, I see that renaming it didn't work. This means (1) Python can't find the .so file, (2) the .so file isn't usable (i.e. it was not compiled correctly or not all required symbols can be found), or (3) there is a .py/.pyc/.pyo file in the same directory which already has that name. If you have a Blarg.py already defined, Python will look at that file first. The same goes if you have a directory named Blarg in your search path. So instead of bashing your head against the wall, try this:
1) Rename your .so library to something guaranteed not to collide (i.e. _Blarg)
void initBlarg() {
    Py_InitModule("_Blarg", Blargmethods);
}
2) Compile it with the SAME NAME
gcc -I/usr/include/python2.6 -fPIC -c blargUtils.c -Wall
gcc -shared blargUtils.o -Wl,-soname -Wl,_Blarg.so -o _Blarg.so
3) Create a python wrapper (i.e. Blarg.py)
import sys
sys.path.append('/path/to/your/library')
import _Blarg

def blargFunc1(*args):
    """Wrap blargFunc1"""
    return _Blarg.blargFunc1(*args)
4) Now just use it as normal
import Blarg
Blarg.blargFunc1(1, 2, 3)
Obviously this is a bit of overkill, but it should help you determine where the problem is. Hope this helps.
