Run compiled Python script with arguments in C++ (Cython)

What I'm trying to achieve is to compile my Python script to a lib/dll and invoke it with arguments.
This is my setup.py file for the script:
from distutils.core import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("nu3k2nu4k.py")
)
I used this command to produce the compilation outputs:
python setup.py build_ext --inplace
The following files were produced:
nu3k2nu4k.c
nu3k2nu4k.cp36-win_amd64.pyd
nu3k2nu4k.pyx
and a subdirectory named build which contains:
nu3k2nu4k.cp36-win_amd64.exp
nu3k2nu4k.cp36-win_amd64.lib
nu3k2nu4k.obj
How can I invoke the compiled script from C++ code with arguments?
I used Cython for the task, but that's not mandatory; Boost or other tools could be used as well (I'm doing this to keep the source code inaccessible).
Edit:
Using the documentation provided by Mykola Shchetinin, I have managed to generate an API header file for my script: I basically added the cdef api keyword to my main function. While this does let me export the method name to C++, I still need a way to pass some arguments to my script. Since I pass string arguments, I was hoping for something similar to using PyRun to set argv and then parsing the arguments from within the script (I would like to avoid converting strings from C to Python encoding if possible). Is there an easy way to pass those string arguments?
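For the argv-style route asked about here, a minimal sketch of what the embedding side could look like, assuming Python 2.7 (where PySys_SetArgv takes char **), a module built from nu3k2nu4k.py with cdef api functions, and a hypothetical entry point main_func (the compiled module must also be importable, e.g. located next to the executable):

#include <Python.h>
#include "nu3k2nu4k_api.h"  /* generated by Cython for "cdef api" functions */

int main(int argc, char *argv[])
{
    Py_Initialize();
    /* Expose the C arguments as sys.argv inside the embedded module. */
    PySys_SetArgv(argc, argv);
    /* Import the api capsule; needs the compiled module on sys.path. */
    if (import_nu3k2nu4k() != 0) {
        PyErr_Print();
        return 1;
    }
    main_func();  /* hypothetical "cdef api" entry point reading sys.argv */
    Py_Finalize();
    return 0;
}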
EDIT 2:
I got it working! Following the sample provided by Mykola and compiling the generated C file directly instead of using the .lib, it works.

I have tried to do that myself: I convert the char * to bytes in Cython (according to the docs).
This is what I came up with, using gcc on CentOS 7.2:
setup.py
from distutils.core import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("func.pyx")
)
main.c
#include "Python.h"
#include "func.h"
int main() {
Py_Initialize();
initfunc();
func("0123hello jorghy!!");
Py_Finalize();
return 0;
}
func.pyx
cdef public func(char * arg):
    cdef bytes py_bytes = arg
    print(str(py_bytes))
I build all this stuff with the following commands:
python setup.py build_ext --inplace
gcc -I/usr/include/python2.7 -lpython2.7 -Wall -Wextra -O2 -g -o test main.c func.c
Then when running the executable file test I get the following results:
0123hello jorghy!!
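A note in case you build against Python 3 instead (the example above targets Python 2.7): for a public module, the Cython-generated header declares PyInit_func instead of initfunc, so the embedding code would look roughly like the sketch below. Depending on your Cython version you may instead need to register the module with PyImport_AppendInittab("func", PyInit_func) before Py_Initialize.

#include "Python.h"
#include "func.h"

int main() {
    Py_Initialize();
    PyInit_func();                /* Python 3 counterpart of initfunc() */
    func("0123hello jorghy!!");   /* call the public Cython function */
    Py_Finalize();
    return 0;
}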
UPDATE
There is another way to approach this problem:
I have created a Cython file test.pyx:
import sys

if __name__ == "__main__":
    print("hei")
    print(sys.argv)
And compiled it as an executable using the commands:
cython test.pyx --embed
gcc -I/usr/include/python2.7 -lpython2.7 -Wall -Wextra -O2 -g -o test test.c
Then you can call this code as an executable file:
$ ./test 1 2 3
hei
['./test', '1', '2', '3']
Then you can call it from C++ using std::system or other (better) methods.
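For instance, a minimal sketch in C (std::system from <cstdlib> is the direct C++ equivalent), assuming the test binary built above is in the working directory:

#include <stdlib.h>

int main(void)
{
    /* Runs "./test 1 2 3" through the shell; the embedded script sees
       the arguments in sys.argv. Returns the child's exit status. */
    return system("./test 1 2 3");
}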

Related

BoostPython and CMake

I have successfully followed this example for connecting C++ and Python. It works fine when I use the given Makefile. When I try to use CMake instead, it does not go as smoothly.
C++ Code:
#include <boost/python.hpp>
#include <iostream>

extern "C"
char const* greet()
{
    return "hello, world";
}

BOOST_PYTHON_MODULE(hello_ext)
{
    using namespace boost::python;
    def("greet", greet);
}

int main(){
    std::cout << greet() << std::endl;
    return 0;
}
Makefile:
# location of the Python header files
PYTHON_VERSION = 27
PYTHON_DOT_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_DOT_VERSION)

# location of the Boost Python include files and library
BOOST_INC = /usr/include
BOOST_LIB = /usr/lib/x86_64-linux-gnu/

# compile mesh classes
TARGET = hello_ext

$(TARGET).so: $(TARGET).o
	g++ -shared -Wl,--export-dynamic $(TARGET).o -L$(BOOST_LIB) -lboost_python-py$(PYTHON_VERSION) -L/usr/lib/python$(PYTHON_DOT_VERSION)/config-x86_64-linux-gnu -lpython$(PYTHON_DOT_VERSION) -o $(TARGET).so

$(TARGET).o: $(TARGET).cpp
	g++ -I$(PYTHON_INCLUDE) -I$(BOOST_INC) -fPIC -c $(TARGET).cpp
When I compile this I get a .so file that can be imported from a script:
import sys
sys.path.append('/home/myname/Code/Trunk/TestBoostPython/build/')
import libhello_ext_lib as hello_ext
print(hello_ext.greet())
I really want to use cmake instead of a manually written Makefile so I wrote this:
cmake_minimum_required(VERSION 3.6)
PROJECT(hello_ext)
# Find Boost
find_package(Boost REQUIRED COMPONENTS python-py27)
set(PYTHON_DOT_VERSION 2.7)
set(PYTHON_INCLUDE /usr/include/python2.7)
set(PYTHON_LIBRARY /usr/lib/python2.7/config-x86_64-linux-gnu)
include_directories(${PYTHON_INCLUDE} ${Boost_INCLUDE_DIRS})
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -lrt -O3")
SET(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
SET(LIBNAME hello_ext_lib)
add_library(${LIBNAME} SHARED src/hello_ext.cpp)
add_executable(${PROJECT_NAME} src/hello_ext.cpp)
TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${Boost_LIBRARIES} -lpython2.7 -fPIC)
TARGET_LINK_LIBRARIES(${LIBNAME} ${Boost_LIBRARIES} -lpython2.7 -fPIC -shared)
Here I currently type the Python paths by hand, but I have also tried using find_package(PythonLibs) without success.
The program compiles fine and executes when I run the executable file in ../bin/. However, when I run the Python script I always get:
ImportError: dynamic module does not define init function (initlibhello_ext_lib)
I found this, which says this can happen if the lib and the executable have different names. That is indeed the case here, but how can I obtain the .so with the correct name?
I also tried compiling only the library, without the executable. That did not work either.
BOOST_PYTHON_MODULE(hello_ext) creates an init function "inithello_ext", which corresponds to a module named "hello_ext". But you are trying to import "libhello_ext_lib".
Give the module the same name as the file, e.g. BOOST_PYTHON_MODULE(libhello_ext_lib). (Alternatively, keep BOOST_PYTHON_MODULE(hello_ext) and have CMake emit a matching filename, e.g. with set_target_properties(${LIBNAME} PROPERTIES OUTPUT_NAME "hello_ext" PREFIX "").)

Distributing cpp files generated by Cython

I'm new to Cython, and my goal is to use it as a kind of "translator" that compiles basic Python code to C++ code, which can then be distributed and used by other users.
I've followed each step in the documentation. Say I have a file helloworld.pyx:
print("hello world!")
To compile this Python code to C++, I create a setup.py file:
from distutils.core import setup, Extension
from Cython.Build import cythonize

extensions = [
    Extension(
        "helloworld",
        sources=["helloworld.pyx"],
        language="c++"
    ),
]

setup(
    ext_modules = cythonize(extensions),
)
Finally, I run python setup.py build_ext --inplace, which produces two new files (helloworld.o and helloworld.cpp) and one directory, build.
Trying to compile the generated file without specifying the Python include directory, as in gcc helloworld.cpp, gives an error:
helloworld.cpp:17:10: fatal error: 'Python.h' file not found
#include "Python.h"
^ 1 error generated.
When I specify the Python include directory, as in gcc -c -I/usr/include/python2.7 helloworld.cpp -o main.out, it works, but unfortunately the output consists of non-readable characters.
What would be the proper way to compile Python code to C++ code that can be used independently (without needing the Python source file)?
Thank you!
If you're using Conda, it seems you need the libpython-dev package.
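As a side note, building on the --embed approach shown earlier on this page (a sketch, assuming Python 2.7 headers and Cython's C++ backend): gcc -c only produces an object file, which is why main.out looks like unreadable characters; it is not an executable. To get a standalone binary, Cython's --embed mode generates a main() entry point, which you then link against libpython:
cython --cplus --embed helloworld.pyx
g++ -I/usr/include/python2.7 -o helloworld helloworld.cpp -lpython2.7
Note that the resulting executable still needs the Python runtime installed at run time.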

Is there a way to create a python module using SWIG C++ which can be imported in both Python2 and Python3

I'm writing a SWIG C++ file to generate a Python module. I want to let users import it in both Python 2 and Python 3 scripts. Since SWIG has different flags for binding Python 2 and Python 3, I was wondering whether there is a way to create a general module for both.
Let's rewind a little and keep SWIG itself out of the question to start with.
Historically it's been necessary to compile a Python module for every (sometimes even minor) version change of the Python interpreter. This led to PEP-384, which defined a stable ABI for a subset of the Python C-API from Python 3.2 onwards. So obviously that doesn't actually work for your request, because you're interested in 2 vs 3. Furthermore, SWIG itself doesn't actually generate PEP-384-compatible code.
To experiment further and see a bit more about what's going on I've made the following SWIG file:
%module test

%inline %{
char *frobinate(const char *in) {
    static char buf[1024];
    snprintf(buf, 1024, "World of hello: %s", in);
    return buf;
}
%}
If we try to compile this with -DPy_LIMITED_API, it fails:
swig3.0 -Wall -python test.i && gcc -Wall -Wextra -I/usr/include/python3.2 -shared -o _test.so test_wrap.c -DPy_LIMITED_API 2>&1|head
test_wrap.c: In function ‘SWIG_PyInstanceMethod_New’:
test_wrap.c:1114:3: warning: implicit declaration of function ‘PyInstanceMethod_New’ [-Wimplicit-function-declaration]
return PyInstanceMethod_New(func);
^
test_wrap.c:1114:3: warning: return makes pointer from integer without a cast
test_wrap.c: In function ‘SWIG_Python_UnpackTuple’:
test_wrap.c:1315:5: warning: implicit declaration of function ‘PyTuple_GET_SIZE’ [-Wimplicit-function-declaration]
Py_ssize_t l = PyTuple_GET_SIZE(args);
^
test_wrap.c:1327:2: warning: implicit declaration of function ‘PyTuple_GET_ITEM’ [-Wimplicit-function-declaration]
I.e. nobody has picked up that ticket, at least not in SWIG 3.0.2, the version I am testing with.
So where does that leave us? Well the SWIG 3.0 documentation on Python 3 support says something interesting:
SWIG is able to support Python 3.0. The wrapper code generated by SWIG can be compiled with both Python 2.x or 3.0. Further more, by passing the -py3 command line option to SWIG, wrapper code with some Python 3 specific features can be generated (see below subsections for details of these features). The -py3 option also disables some incompatible features for Python 3, such as -classic.
My reading of that statement is that if you want to generate source code that can be compiled with both 2.x and 3.x Python headers all you need to do is omit the -py3 argument when running SWIG. And that seems to work with my testing, the same code generated by SWIG compiles just fine with all of:
$ gcc -Wall -Wextra -I/usr/include/python2.7/ -shared -o _test.so test_wrap.c
$ gcc -Wall -Wextra -I/usr/include/python3.2 -shared -o _test.so test_wrap.c
$ gcc -Wall -Wextra -I/usr/include/python3.4 -shared -o _test.so test_wrap.c
(Note that 3.4 does generate some warnings, but no errors and it does work)
The problem is that there's no interchangeability between the compiled _test.so files of the various versions. Trying to import a module built for one version from another will fail with errors from the dynamic linker about missing or undefined symbols. So we're still stuck at the initial problem: although we can compile our module for any given Python interpreter, we can't compile it for all versions at once.
In my view the right way to deal with this is to use distutils and simply install your module into the search path of each version of Python you want to support on any given machine.
That said there is a workaround we could use. Essentially we want to build multiple versions of our module for each interpreter and use some generic code to switch between the various implementations. Assuming that you're not building with code generated from SWIG's -builtin option there are two places we could try and do this:
In the shared object itself
In some Python code
The latter is substantially simpler to achieve, so let's look at that. I did it using the following bash script to build a shared object for each version of Python I intended to support:
#!/bin/bash
for v in 2.7 3.2 3.4
do
    d=$(echo $v|tr . _)
    mkdir -p $d
    touch $d/__init__.py
    gcc -Wall -Wextra -I/usr/include/python$v -shared -o $d/_test.so test_wrap.c
done
This builds 3 versions of _test.so, in directories named after the Python version they're intended to support. If we tried to import our SWIG generated module now it would complain because there's no indication of where to find the _test module we've compiled. So we need to fix that. There are three ways to do it:
Modify the SWIG generated import code. I didn't manage to make this work with SWIG 3.0.2 - perhaps it's too old?
Modify the Python search path, before the second import. We can do that using %pythonbegin:
%pythonbegin %{
import os.path
import sys
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), '%d_%d' % (sys.version_info.major, sys.version_info.minor)))
%}
Write a _test.py stub that finds the right one and then switches them around:
from sys import version_info,modules
from importlib import import_module
real_mod = import_module('%d_%d.%s' % (version_info.major, version_info.minor, __name__))
modules[__name__] = modules[real_mod.__name__]
Either of the last two options worked and resulted in import test succeeding in all three versions of Python. But it's cheating a bit, really, and it is far better just to install from source three times over.

Issue in compiling C++ method in Eclipse and calling C++ method from Python

A simple C++ example class I want to talk to, in a file called foo.cpp:
#include <iostream>

class Foo{
public:
    void bar(){
        std::cout << "Hello" << std::endl;
    }
};

Since ctypes can only talk to C functions, you need to provide the class's operations as functions declared extern "C" (note that the class must be defined before these wrappers):

extern "C" {
    Foo* Foo_new(){ return new Foo(); }
    void Foo_bar(Foo* foo){ foo->bar(); }
}
Compile this to a shared library:
g++ -c -fPIC foo.cpp -o foo.o
g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o
Finally, I wrote a Python wrapper:
from ctypes import cdll
lib = cdll.LoadLibrary('./libfoo.so')

class Foo(object):
    def __init__(self):
        self.obj = lib.Foo_new()
    def bar(self):
        lib.Foo_bar(self.obj)

f = Foo()
f.bar()  # prints "Hello" on the screen
"My main intension is to compile C++ code in eclipse and call the C++ function from python in Linux". This works fine when I compiled C++ code in Linux and call the C++ method from python in Linux. But it doesn't work if I compile C++ code in eclipse and call the C++ method from python in Linux.
Error message:
symbol not found
I am new to the Eclipse toolchain, but I am giving the compiler and linker options as in this:
g++ -c -fPIC foo.cpp -o foo.o
g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o
A snapshot of the Eclipse compiler and linker options would be highly appreciated. Please help me sort out this issue. Thanks in advance.
You need to create two projects in Eclipse.
First, a Makefile project with existing code (File->New->Makefile Project with Existing Code). In this project you must point to your foo.cpp file. Then in the project folder you must create a file named "Makefile" containing the following lines:
all:
	g++ -c -fPIC foo.cpp -o foo.o
	g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o

clean:
	rm -f libfoo.so
Then you must create rules ("all" and "clean") for this project in the "Make Target" window. If you don't see this window, go to Window->Show View->Make Target. You can then build libfoo.so from Eclipse by double-clicking the "all" rule in the "Make Target" view.
At this point you can create a PyDev project with the foo.py file. If you don't know PyDev, see its site; it is an Eclipse plugin for the Python language. Once you have installed this plugin, you will be able to work with your Python file inside Eclipse.

cython compiling error: multiple definition of functions

I create a C file named test.c with two functions defined as follows:
#include <stdio.h>

void hello_1(void){
    printf("hello 1\n");
}

void hello_2(void){
    printf("hello 2\n");
}
After that, I create test.pyx as follows:
import cython
cdef extern void hello_1()
The setup file is as follows:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(cmdclass={'buld_ext':build_ext},
      ext_modules=[Extension("test", ["test.pyx", "test.c"],
                             include_dirs=[np.get_include()],
                             extra_compile_args=['-g', '-fopenmp'],
                             extra_link_args=['-g', '-fopenmp', '-pthread'])
      ])
When I run the setup file, it always reports that hello_1 and hello_2 have multiple definitions. Can anybody tell me the problem?
There are a number of things wrong with your files as posted, and I have no idea which one is causing the problem in your real code, especially since the code you showed us doesn't (and can't possibly) generate those errors.
But if I fix all of the obvious problems, everything works. So, let's go through all of them:
Your setup.py is missing the imports at the top, so it's going to fail with a NameError immediately.
Next, there are multiple typos: Extenson for Extension, buld_ext for build_ext, and I think one more that I fixed but don't remember.
I stripped out the numpy and openmp stuff because it's not relevant to your problem, and it was easier to just get it out of the way.
When you fix all of that and actually run the setup, the next problem becomes immediately obvious:
$ python setup.py build_ext -i
running build_ext
cythoning test.pyx to test.c
You're either overwriting your test.c file with the file that gets built from test.pyx, or, if you get lucky, ignoring the generated test.c file and using the existing test.c as if it were the cython-compiled output of test.pyx. Either way, you're compiling the same file twice and trying to link the results together, hence your multiple definitions.
You can either configure Cython to write its generated file to a non-default name or location (e.g. via the build_dir argument of cythonize), or, more simply, follow the usual naming conventions and don't have a test.pyx that tries to use a test.c in the first place.
So:
ctest.c:
#include <stdio.h>

void hello_1(void){
    printf("hello 1\n");
}

void hello_2(void){
    printf("hello 2\n");
}
test.pyx:
import cython
cdef extern void hello_1()
setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(cmdclass={'build_ext':build_ext},
      ext_modules=[Extension("test", ["test.pyx", "ctest.c"],
                             extra_compile_args=['-g'],
                             extra_link_args=['-g', '-pthread'])
      ])
And running it:
$ python setup.py build_ext -i
running build_ext
cythoning test.pyx to test.c
# ...
clang: warning: argument unused during compilation: '-pthread'
$ python
>>> import test
>>>
Tada.
