cython compiling error: multiple definition of functions

I create a c file named test.c with two functions defined as follows:
#include <stdio.h>
void hello_1(void){
    printf("hello 1\n");
}
void hello_2(void){
    printf("hello 2\n");
}
After that, I create test.pyx as follows:
import cython
cdef extern void hello_1()
The setup file is as follows:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(cmdclass={'buld_ext':build_ext},
      ext_modules=[Extension("test", ["test.pyx", "test.c"],
                             include_dirs=[np.get_include()],
                             extra_compile_args=['-g', '-fopenmp'],
                             extra_link_args=['-g', '-fopenmp', '-pthread'])
                   ])
When I run the setup file, it always reports that hello_1 and hello_2 have multiple definitions. Can anybody tell me what the problem is?

There are a number of things wrong with your files as posted, and I have no idea which one is causing the problem in your real code—especially since the code you showed us doesn't, and can't possibly, generate those errors.
But if I fix all of the obvious problems, everything works. So, let's go through all of them:
Your setup.py is missing imports (np is used for np.get_include(), but numpy is never imported as np), so it's going to fail with a NameError immediately.
Next, there are multiple typos—Extenson for Extension, buld_ext for build_ext, and I think one more that I fixed but don't remember.
I stripped out the numpy and openmp stuff because it's not relevant to your problem, and it was easier to just get it out of the way.
When you fix all of that and actually run the setup, the next problem becomes immediately obvious:
$ python setup.py build_ext -i
running build_ext
cythoning test.pyx to test.c
You're either overwriting your test.c file with the file that gets built from test.pyx, or, if you get lucky, ignoring the generated test.c file and using the existing test.c as if it were the cython-compiled output of test.pyx. Either way, you're compiling the same file twice and trying to link the results together, hence your multiple definitions.
You can either configure Cython to use a non-default name for that file or, more simply, follow the usual naming conventions and don't have a test.pyx that tries to use a test.c in the first place.
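If you did want the first option, one way is to send the generated C file somewhere else entirely. A minimal sketch, using Cython.Build.cythonize and its build_dir argument rather than the build_ext cmdclass used above (names otherwise taken from the question):
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
setup(ext_modules=cythonize(
    [Extension("test", ["test.pyx", "test.c"])],
    # generated code goes to cython_build/test.c instead of clobbering ./test.c
    build_dir="cython_build",
))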
So, going with the simpler renaming route:
ctest.c:
#include <stdio.h>
void hello_1(void){
    printf("hello 1\n");
}
void hello_2(void){
    printf("hello 2\n");
}
test.pyx:
import cython
cdef extern void hello_1()
setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(cmdclass={'build_ext':build_ext},
      ext_modules=[Extension("test", ["test.pyx", "ctest.c"],
                             extra_compile_args=['-g'],
                             extra_link_args=['-g', '-pthread'])
                   ])
And running it:
$ python setup.py build_ext -i
running build_ext
cythoning test.pyx to test.c
# ...
clang: warning: argument unused during compilation: '-pthread'
$ python
>>> import test
>>>
Tada.

Related

Compiling cython with gcc: No such file or directory from #include "ios"

Given a file docprep.pyx as simple as
from spacy.structs cimport TokenC
print("loading")
And trying to cythonize it via
cythonize -3 -i docprep.pyx
I get the following error message
docprep.c:613:10: fatal error: ios: No such file or directory
#include "ios"
^~~~~
compilation terminated
As you can tell from the paths, this system has an Anaconda installation with Python 3.7. numpy, spacy and cython are all installed through conda.
In my case, it worked using the tip from #mountrix: just add language="c++" to your setup.py. An example:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy
extensions = [
    Extension("processing_module", sources=["processing_module.pyx"], include_dirs=[numpy.get_include()], extra_compile_args=["-O3"], language="c++")
]
setup(
    name="processing_module",
    ext_modules = cythonize(extensions),
)
<ios> is a C++ header. The error message shows that you are trying to compile C++ code as C code.
By default, Cython produces a file with the extension *.c, which the compiler will later treat as C code.
Cython can also produce a file with the right extension for C++, i.e. *.cpp, and there are multiple ways to trigger this behavior:
adding # distutils: language = c++ at the beginning of the pyx-file.
adding language="c++" to the Extension definition in the setup.py-file.
calling cython with the option --cplus.
in IPython, calling the %%cython magic with -+, i.e. %%cython -+ (see the cell sketch just below this list).
for alternatives when building with pyximport, see this SO-question.
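For the IPython route, the cell would look something like this (a sketch; it assumes the Cython magics have already been loaded with %load_ext Cython, and that any extra include paths spacy needs are available):
%%cython -+
from spacy.structs cimport TokenC
print("loading")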
Actually, for cythonize there is no command-line option to trigger C++ generation, so the first option looks like the best way to go:
# distutils: language = c++
from spacy.structs cimport TokenC
print("loading")
The problem is that spacy/structs.pxd uses C++ constructs, for example vectors, or anything else cimported from libcpp:
...
from libcpp.vector cimport vector
...
and thus C++ libraries/headers are also needed for the build.

Distributing cpp files generated by Cython

I'm new to Cython, and my goal is to use it as a kind of "translator" that compiles basic Python code to C++ code which can be distributed and used by other users.
I've followed each step in the documentation. Say I have some file helloworld.pyx:
print("hello world!")
To compile this Python code to C++, I create a setup.py file:
from distutils.core import setup, Extension
from Cython.Build import cythonize
extensions = [
    Extension(
        "helloworld",
        sources=["helloworld.pyx"],
        language="c++"
    ),
]
setup(
    ext_modules = cythonize(extensions),
)
Finally I run python setup.py build_ext --inplace, which produces two new files, helloworld.o and helloworld.cpp, and one directory, build.
Trying to compile the generated file without pointing the compiler at the Python headers, gcc helloworld.cpp, gives an error:
helloworld.cpp:17:10: fatal error: 'Python.h' file not found
#include "Python.h"
         ^
1 error generated.
When specifying the Python include directory, as in gcc -c -I/usr/include/python2.7 helloworld.cpp -o main.out, it works, but unfortunately the output is encoded in non-readable characters.
What would be the proper way to compile Python code to C++ code that can be used independently (without specifying the Python file)?
Thank you!
If you're using Conda, it seems you need the libpython-dev package.
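More generally, the helloworld.cpp that Cython emits is not a standalone C++ program: it still needs the Python headers at compile time and a Python runtime when it is loaded, so it can only be used as an extension module (or together with an embedded interpreter). A rough sketch of compiling it by hand into an importable module, assuming a Unix-like system with python-config on the PATH:
g++ -shared -fPIC helloworld.cpp $(python-config --includes) -o helloworld.so
The "non-readable characters" in main.out are expected: that file is compiled object code meant to be imported by Python, not a text file you can read or run on its own.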

Wrapping a C++ file as a Python file using SWIG

I'm new to using SWIG and a bit out of my depth as a programmer. I would like to be able to call the functions of a C++ class in Python 2 by importing the wrapped class as a module ('import C++_file') and then calling it within my Python class with something like 'C++_file.function(inputs)'.
Following http://intermediate-and-advanced-software-carpentry.readthedocs.io/en/latest/c++-wrapping.html, I am wrapping the header file multiplyChannel.h:
#include <vector>
#include <complex>
using namespace std;
class MultiplyChannel {
public:
    MultiplyChannel(double in_alpha);
    void applyToVector(vector<complex<double> > *in_signal);
private:
    double alpha;
};
which corresponds to my example C++ file multiplyChannel.cpp:
#include "multiplyChannel.h"
#include <vector>
#include <complex>
using namespace std;
MultiplyChannel::MultiplyChannel(double in_alpha){
    this->alpha = in_alpha;
}
void MultiplyChannel::applyToVector(vector<complex<double> > *in_signal){
    unsigned int size = in_signal->size();
    for (int i = 0; i < size; i++) {
        in_signal->at(i).real() = in_signal->at(i).real() * this->alpha;
        in_signal->at(i).imag() = in_signal->at(i).imag() * this->alpha;
    }
}
With the makefile:
all:
	swig -python -c++ -o mult.cpp swigdemo.i
	python setup.py build_ext --inplace
the wrapper file swigdemo.i:
%module swigdemo
%{
#include <stdlib.h>
#include "multiplyChannel.h"
%}
%include "multiplyChannel.h"
and setup.py build file:
from distutils.core import setup, Extension
extension_mod = Extension("_swigdemo", ["mult.cpp"])
setup(name = "swigdemo", ext_modules=[extension_mod])
by typing in my Ubuntu command window:
$ make
swig -python -c++ -o multiplyChannel.cpp swigdemo.i
python setup.py build_ext --inplace
running build_ext
$ python setup.py build
running build
running build_ext
Testing the import using C++_tester.py, I try to multiply the vector [1, 2, 3] into [5, 10, 15] using the MultiplyChannel object 'demo' with the instance variable 'in_alpha' set to 5, multiplying all inputs by 5:
#!/usr/bin/python
import swigdemo
if __name__ == '__main__':
    demo = swigdemo.MultiplyChannel(in_alpha=5)
    out = demo.applyToVector(in_signal=[1, 2, 3])
    print(out)
I am not even getting past the import line, receiving the following error:
$ python C++_tester.py
ImportError: ./_swigdemo.so: undefined symbol: _ZN15MultiplyChannelC1Ed
I am unsure what to do, as I cannot even open the .so file in gedit or vim. I'm guessing my error lies in wrapping incorrectly in my wrapper, build, or makefile, as pretty much everything in the C++_tester.py file auto-completed in my PyCharm IDE.
Many thanks!
The problem was indeed related to the extension build:
swigdemo.i generates the SWIG wrapper (mult.cpp)
the MultiplyChannel class implementation is in multiplyChannel.cpp
When building the extension, since it's a shared object (.so), the linker by default doesn't complain about undefined symbols (such as the two MultiplyChannel methods, which it knows nothing about), but creates the library anyway, assuming the symbols will be available at runtime when the .so is loaded.
In short, modify setup.py by adding multiplyChannel.cpp to the extension source files list:
extension_mod = Extension("_swigdemo", ["mult.cpp", "multiplyChannel.cpp"])
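For reference, this is the question's setup.py with only that change applied:
from distutils.core import setup, Extension
extension_mod = Extension("_swigdemo", ["mult.cpp", "multiplyChannel.cpp"])
setup(name = "swigdemo", ext_modules=[extension_mod])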
Check [SO]: SWIG: How to pass list of complex from c++ to python for the next problem you're going to run into.

Run compiled python script with arguments in C++ (Cython)

What I am trying to achieve is compiling my Python script to a lib/dll and invoking it with arguments.
This is my setup.py file for the script:
from distutils.core import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("nu3k2nu4k.py")
)
I used this command to produce the compilation outputs:
python setup.py build_ext --inplace
The following files were produced:
nu3k2nu4k.c
nu3k2nu4k.cp36-win_amd64.pyd
nu3k2nu4k.pyx
and a subdirectory named build which contains:
nu3k2nu4k.cp36-win_amd64.exp
nu3k2nu4k.cp36-win_amd64.lib
nu3k2nu4k.obj
How can I invoke the compiled script from c++ code with arguments?
I used Cython for the task, but that's not mandatory; Boost or other tools could be used as well (I'm doing this to make the source code inaccessible).
Edit:
Using the documentation provided by Mykola Shchetinin, I have managed to generate an API header file for my script. I basically added the cdef api keyword to my main function. While this does help me export the method name to C++, I still need a way to pass arguments to my script. Since I pass string arguments, I was hoping for something similar to using PyRun to set argv and parsing the arguments inside the script (I would like to avoid converting strings from C to Python encoding if possible). Is there any easy way to pass those string arguments?
EDIT 2:
I got it working by following the sample provided by Mykola and directly compiling the generated C file instead of using the .lib.
I have tried to do this myself, converting char * to bytes in Cython (according to the docs).
This is what I came up with. I have the following setup (gcc and CentOS 7.2):
setup.py
from distutils.core import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("func.pyx")
)
main.c
#include "Python.h"
#include "func.h"
int main() {
    Py_Initialize();
    initfunc();
    func("0123hello jorghy!!");
    Py_Finalize();
    return 0;
}
func.pyx
cdef public func(char * arg):
    cdef bytes py_bytes = arg
    print(str(py_bytes));
I build all this stuff with the following commands:
python setup.py build_ext --inplace
gcc -I/usr/include/python2.7 -lpython2.7 -Wall -Wextra -O2 -g -o test main.c func.c
Then, when running the executable file test, I get the following result:
0123hello jorghy!!
UPDATE
There is another way to approach this problem:
I have created a cython file test.pyx:
import sys
if __name__ == "__main__":
print("hei")
print(sys.argv)
And compiled it as an executable using the commands:
cython test.pyx --embed
gcc -I/usr/include/python2.7 -lpython2.7 -Wall -Wextra -O2 -g -o test test.c
Then you can call this code as an executable file:
$ ./test 1 2 3
hei
['./test', '1', '2', '3']
Then you can call it from C++ using std::system or other (better) methods.

Cython: Calling Python code from C program

I am trying to make a Cython wrapper so I can call Python code from C. I am having issues with imports, as I would like the wrapper to be separate from the original code.
The code below ends in a segfault when calling the imported function. If the code is written as a Python module and imported via import, the program says that the name ... is not defined. The problem does not show up when everything is in one file and there's no import involved (indeed, the code generated by Cython fails when cimporting). The code also works fine when libcimpy.pyx is imported from another Python script (either compiled to .so or live).
I have prepared a minimal example. This is far from the actual code, but it covers the principle.
cimpPy.pyx: Sample python code (converted to Cython)
cdef sum(a, b):
    return a + b
cimpPy.pxd
cdef sum(a, b)
libcimpy.pyx (glue Cython code)
cimport cimpPy
cdef public int cSum(int a, int b):
    return cimpPy.sum(a, b)
ci.c (c code from which we want to call cimpPy)
#include <stdio.h>
#include <stdlib.h>
#include <Python.h>
#include "libcimp.h"
int main(int argc, char **argv) {
    Py_Initialize();
    initlibcimp();
    int a = 2;
    int b = 3;
    int c = cSum(a, b);
    printf("sum of %d and %d is %d\n", a, b, c);
    Py_Finalize();
    return 0;
}
Makefile
EXECUTABLE = ci
OBJS = ci.o
CC = gcc
CFLAGS = -g -I/usr/include/python2.7 -I$(shell pwd)
LINKER = g++
LDFLAGS = -L$(shell pwd) $(shell python-config --ldflags) -lcimp
.PHONY: clean cython
all: cython $(EXECUTABLE)
cython:
	python setup.py build_ext --inplace
.c.o:
	$(CC) $(CFLAGS) -c $<
$(EXECUTABLE) : $(OBJS)
	$(LINKER) -o $(EXECUTABLE) $(OBJS) $(LDFLAGS)
clean:
	rm -rf *.o *.so libcimp.c libcimp.h core build $(EXECUTABLE)
setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
from Cython.Distutils import build_ext
extensions = [
    Extension("libcimp", ["libcimp.pyx"])
]
setup(
    name = "CIMP",
    cmdclass = {"build_ext": build_ext},
    ext_modules = cythonize(extensions)
)
What I intend to achieve is being able to plug Python code into a larger C system. The assumption is that users will be able to write the Python themselves. The C code is a simulation engine which can operate on agents in an environment. The idea is that the behaviour of agents and the environment can be specified in Python and passed to the engine for evaluation when necessary. The best analogy would be a map-reduce system where the Python scripts are mappers. In this sense I want to call Python from C and not the other way round.
Converting everything to Cython, while compelling, would be too large an undertaking.
Is this the right approach? Why does import work only under the Python interpreter and not when embedded externally? Any suggestions, reference articles, or documentation are appreciated.
In this code, the initlibcimp() actually fails, but you don't see it right away because the error is reported by setting a python exception. I'm not 100% sure this is the correct way to do this, but I could see the error by adding the following code below that call:
if (PyErr_Occurred())
{
    PyErr_Print();
    exit(-1);
}
Then, the program will output:
Traceback (most recent call last):
File "libcimpy.pyx", line 1, in init libcimpy (libcimpy.c:814)
cimport cimpPy
ImportError: No module named cimpPy
The reason that the cimpPy module is not yet defined is that you need to call initcimpPy() before calling initlibcimp().
