I know what Cython's purpose is: to write compilable C extensions in a Python-like language in order to speed up your code. What I would like to know (and can't seem to find using my google-fu) is whether Cython can somehow compile into an executable format, since it already seems to break Python code down into C.
I already use py2exe, which is just a packager, but I'm interested in using this to compile down to something that is a little harder to unpack (anything packed using py2exe can basically just be extracted using 7-Zip, which I do not want).
It seems that if this is not possible, my next best alternative would be to compile all my code into a module, load it, and then package that using py2exe, at least getting most of my code into compiled form, right?
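For reference, here is the kind of setup.py I have in mind for that fallback (just a sketch; mymodule.py stands in for whatever file I want compiled):

# setup.py
from distutils.core import setup
from Cython.Build import cythonize

# Compile the listed module to a C extension ("mymodule.py" is a placeholder).
setup(ext_modules=cythonize("mymodule.py"))

Running python setup.py build_ext --inplace would then leave a compiled extension (a .pyd on Windows) next to the source, which py2exe could bundle instead of easily decompiled bytecode.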
Here's the wiki page on embedding Cython.
Assuming you installed Python to C:\Python31 and want to use the Microsoft compiler:
smalltest1.py - the file you want to compile.
test.exe - the name of the resulting executable.
You need to set the environment variables for cl.exe first.
C:\Python31\python.exe C:\Python31\Scripts\cython.py smalltest1.py --embed
cl.exe /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ic:\Python31\include -Ic:\Python31\PC /Tcsmalltest1.c /link /OUT:"test.exe" /SUBSYSTEM:CONSOLE /MACHINE:X86 /LIBPATH:c:\Python31\libs /LIBPATH:c:\Python31\PCbuild
In principle it appears to be possible to do something like what you want, according to the Embedding Pyrex HOWTO. (Pyrex is effectively a previous generation of Cython.)
Hmm... that name suggests a better search than I first tried: "embedding cython" leads to this page, which sounds like what you want.
I have successfully used Cython and gcc to convert a *.py file to an *.exe with the batch file below:
:: build.bat
set PROJECT_NAME=test
set PYTHON_DIR=C:\python27
%PYTHON_DIR%\python -m cython --embed -o %PROJECT_NAME%.c %PROJECT_NAME%.py
gcc -Os -I %PYTHON_DIR%\include -o %PROJECT_NAME%.exe %PROJECT_NAME%.c -lpython27 -lm -L %PYTHON_DIR%\libs
Aftershock's answer is good; what I want to add is how to run the app without a console window. Building on that answer, two points are important (a small helper script for the first point is sketched after the list):
Replace every main() function in the ".c" file generated by cython --embed with wmain()
Add /subsystem:windows /entry:wmainCRTStartup to the end of the cl.exe ... command
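For the first point, a crude Python helper along these lines can do the rename on the generated C file (just a sketch; it is a blind textual substitution, so review the result by hand):

import re

# Rename every standalone "main" identifier in the Cython-generated C file
# to "wmain". Blunt, but it matches the manual edit described above.
with open("smalltest1.c") as f:
    source = f.read()

source = re.sub(r"\bmain\b", "wmain", source)

with open("smalltest1.c", "w") as f:
    f.write(source)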
Related
There is a GitHub repo containing Python "bindings" for a C++ library that I am interested in playing with. The README has abundant information about how to install the C++ library on Linux machines, but no information about how to do so on macOS.
I have also opened an issue requesting that the README installation instructions include macOS-specific steps in addition to Linux. There hasn't been any activity on that issue.
Here are the two repos:
(Python) https://github.com/asiffer/python3-libspot
(C++) https://github.com/asiffer/libspot
Since the C++ package isn't available to install via Homebrew/pip/Anaconda, I'm not sure how to get going.
What I've Tried:
I have tried ./configure and make. There is no ./configure script.
To address the lack of ./configure, I read about a tool called autoconf, which supposedly generates ./configure for you. I installed it with brew, but am not sure what arguments to pass it. These docs were pretty hard to understand: https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Making-configure-Scripts.html
Just using make results in the error clang: error: unsupported option '-fopenmp'. That sent me down a whole different rabbit hole, which had me adding lines to the Makefile:
CPP = /usr/local/opt/llvm/bin/clang
CPPFLAGS = -I/usr/local/opt/llvm/include -fopenmp
LDFLAGS = -L/usr/local/opt/llvm/lib
omp_hello: omp_hello.c
	$(CPP) $(CPPFLAGS) $^ -o $@ $(LDFLAGS)
That felt dangerous because I have no idea what any of that stuff means. Plus, it resulted in a new error: *** missing separator. Stop.
So then I read that this is probably due to using "soft" tabs instead of "hard" tabs, which can be identified using cat -e -t -v makefile_name. I found the one line where a "hard" tab was missing (the indented line above) and inserted it. This resulted in a new error:
make: *** No rule to make target `omp_hello.c', needed by `omp_hello'. Stop.
Next, following the advice of Yang Yushi and his follow-up comments, I changed lines 39 and 40 according to his answer, plus added the locations of some additional files to the CXXFLAGS variable:
-I//opt/homebrew/Cellar/libomp/11.0.1/include
-L/opt/homebrew/Cellar/libomp/11.0.1/lib
And this got me a little further. Next, macOS didn't like where this script was trying to install, as explained by this answer. So I changed these two lines in the Makefile, which seemed to dictate the install location:
INSTALL_HEAD_DIR = $(DESTDIR)/usr/include/libspot
INSTALL_LIB_DIR = $(DESTDIR)/usr/lib
to
INSTALL_HEAD_DIR = $(DESTDIR)/usr/local/include/libspot
INSTALL_LIB_DIR = $(DESTDIR)/usr/local/lib
And that indeed got me a little farther. Next I ran into an error complaining about the flag -t at these lines in the Makefile:
@install -t $(INSTALL_LIB_DIR) $(LIB_DIR)/*.so
@install -t $(INSTALL_HEAD_DIR) $(INC_DIR)/*.h
So I deleted those flags, which then resulted in this error:
Checking the headers installation directory (/usr/local/include/libspot)
Checking the library installation directory (/usr/local/lib)
Installing the shared library (libspot.so)
install: /usr/local/lib: Inappropriate file type or format
For which I can find no reading material and have no clue how to fix. Any further assistance is appreciated.
Here's a list of SO and other resources I've perused trying to answer this question:
Enable OpenMP support in clang in Mac OS X (sierra & Mojave)
makefile error: make: *** No rule to make target `omp.h' ; with OpenMP
makefile:4: *** missing separator. Stop
http://www.idryman.org/blog/2016/03/10/autoconf-tutorial-1/
https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Making-configure-Scripts.html
https://developer.gnome.org/anjuta-build-tutorial/stable/create-autotools.html.en
My Question
How do I proceed?
If you know how to do this, could you also include a brief explanation of the concepts behind each step? I'd be happy to learn a little instead of just copying and pasting commands in the right order.
Compile the C++ source code with Apple Clang
I downloaded the project (libspot) and successfully compiled it on my Mac. I changed two lines (39 and 40) in the Makefile to make it work (following this answer):
CC = clang++ # change from g++ to default Apple clang
CXXFLAGS = -std=c++11 -Wall -pedantic -Xpreprocessor -fopenmp -lomp # additional flags
You should get the binary file by just typing make with a "correct" Makefile.
(If you see something like "can't find omp.h", add -I/usr/local/opt/libomp/include to the CXXFLAGS.)
For the Question
The error message in the updated question description
make: *** No rule to make target `omp_hello.c', needed by `omp_hello'. Stop.
is telling us that the file omp_hello.c is missing. The Makefile is written to compile the source code omp_hello.c to an executable binary file omp_hello. If I have the C source file (omp_hello.c), the Makefile will allow me to compile by just typing
make
instead of
/usr/local/opt/llvm/bin/clang \
-I/usr/local/opt/llvm/include -fopenmp \
-L/usr/local/opt/llvm/lib \
omp_hello.c -o omp_hello
This is just a normal compile process; it has nothing to do with Python. The error message is saying that the source code to be compiled (omp_hello.c) is missing.
It looks like this is a small project with a custom Makefile. Normally you compile the code with just make. The error you got seems to suggest a missing llvm; you may want to try installing llvm following this answer.
Usually it comes down to running brew install <your C++ package>, or downloading the source code to some directory and running a set of commands:
./configure
make
make install
While this usually works, some packages cannot be installed on a Mac because their maintainers did not prepare a configuration for it.
I am trying to wrap a set of Fortran files for Python using f2py. I am using the gfortran compiler via mingw64. The sources I am trying to wrap contain LAPACK functions, so I built LAPACK and BLAS following the "Easy Windows Build" instructions on this webpage: https://icl.cs.utk.edu/lapack-for-windows/lapack/#build. I am now able to compile my source file by running
gfortran foo.F90 -ffree-line-length-512 -llapack -lblas -fdec-math -Wl,-allow-multiple-definition
As you can see, I need to pass a set of options, which are not really relevant here except for the last one, -Wl,-allow-multiple-definition. I have found that if I do not include this option the file won't compile, and I get a whole set of errors all ending with multiple definition of `_gfortran[...]', where [...] is some extra string, for example _st_open. Perhaps that last option is a bit of a hack, but at least the file compiles without issues.
However, I do not know how to pass this linker option to f2py. Currently I run:
python -m numpy.f2py -llapack -lblas -c foo.F90
--fcompiler=gnu95 --compiler=mingw32 --f90flags="-ffree-line-length-512 -fdec-math -Wl,-allow-multiple-definition" -m foo_py
But this doesn't seem to do anything; I just get the same multiple-definition errors as if the linker flag wasn't included. So what syntax should I use?
Thanks!
EDIT: After some extensive googling, it seems that f2py has no option for passing linker flags to the compiler. So now I am wondering if there is some way to force the allow-multiple-definition linker option globally.
I think the error itself must somehow originate in how I have built LAPACK and BLAS. Similar errors have been reported before (see https://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=4&t=5315), but seemingly only in the build process and not during a Fortran compile. Would there be alternative ways to build LAPACK such that I can easily incorporate it with gfortran?
Did you have a look at this: how to tell f2py module to look in current directory for shared object dependency
There the following is suggested (combined into a sketch after the list):
set env variable export LDFLAGS=-Wl,-rpath=.
set env variable export NPY_DISTUTILS_APPEND_FLAGS=1
upgrade numpy to 1.16.0 or greater
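Since you are on Windows, setting these variables from Python may be more convenient than export; here is a sketch combining the three suggestions (the flag values come straight from your question and that answer, untested here; subprocess.run needs Python 3.5+):

import os
import subprocess

env = dict(os.environ)
env["LDFLAGS"] = "-Wl,-allow-multiple-definition"  # the extra linker flag
env["NPY_DISTUTILS_APPEND_FLAGS"] = "1"            # append to, not replace, the default flags

subprocess.run(
    ["python", "-m", "numpy.f2py", "-llapack", "-lblas", "-c", "foo.F90",
     "--fcompiler=gnu95", "--compiler=mingw32",
     "--f90flags=-ffree-line-length-512 -fdec-math",
     "-m", "foo_py"],
    env=env, check=True,
)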
The idea is to compile a C++ program. It contains a main.cpp, a printer.cpp, a printer.h, a scanner.cpp and a scanner.h. These source files each contain one function that prints "hello". Now I am trying to create the object files and the executable from the command line through a Python script using cl.exe.
The error I get is LNK2019, so I know the issue is in the linking. I have looked through the options, and this is what I am using in my Python script:
build = subprocess.Popen(
    ['vcvarsall.bat', 'amd64_x86', '&&', 'cl',
     'kernel32.lib', ...[skipping some files], 'uuid.lib',
     '/I' + qtpath,
     'C:\\Users\\ROY_S\\Desktop\\CppMaker\\main.cpp',
     '/ZI', '/Gm', '/EHsc', '/MDd', '/GS',
     '/Fo' + path, '/Fe' + path + 'main.exe',
     '/link', '/LIBPATH:' + qtlib,
     '/DEFAULTLIB:' + qtlib + 'QtMainIsar',
     '/DEFAULTLIB:' + qtlib + 'QtCore',
     '/DEFAULTLIB:' + qtlib + 'QtGuiIsar4',
     '/DEFAULTLIB:' + qtlib + 'QtNetwork',
     '/DEFAULTLIB:' + qtlib + 'QtOpenGLIsar4',
     ...[skipping more /DEFAULTLIB entries],
     '/DEFAULTLIB:' + qtlib + 'QtWebKitIsar4',
     '/INCREMENTAL', '/NOLOGO', '/TLBID:1', '/DYNAMICBASE',
     '/MANIFEST', '/NXCOMPAT', '/ERRORREPORT:PROMPT',
     '/MACHINE:X86', '/OUT:' + path + 'main.exe'],
    stdout=subprocess.PIPE)
I removed some libs so that it's easier to read. I don't understand why it fails to link even after specifying the lib files in my script.
I am also open to solutions other than cl.exe; anything that gives easy control over the command line is fine with me.
On Windows, I think it's a good idea to use a Visual Studio project/solution and build it with MSBuild.
The project/solution file may be created manually or in Visual Studio.
You may also create properties in the project/solution and pass them to MSBuild (see the MSBuild documentation):
check = subprocess.Popen(['vcvarsall.bat', 'amd64_x86', '&&', 'msbuild',Path,'/p:configuration=%s' % Configuration])
This solves my issue. Instead of playing with cl.exe, I make a template .vcxproj file. My code then adds the source and header files to the .vcxproj file, and I compile that using MSBuild. I find it a much easier solution because I don't have to care about the individual libraries to add; I can just attach a property sheet to my .vcxproj file.
Also see https://msdn.microsoft.com/en-us/library/ms164311.aspx
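For illustration, the template approach could look roughly like this (a sketch: template.vcxproj and the <!--SOURCES--> placeholder are made-up names for this example, not anything MSBuild itself defines):

import subprocess

sources = ["main.cpp", "printer.cpp", "scanner.cpp"]

# Read a hand-written project template and fill in the source files.
with open("template.vcxproj") as f:
    project = f.read()

items = "\n".join('    <ClCompile Include="%s" />' % src for src in sources)
project = project.replace("<!--SOURCES-->", items)

with open("generated.vcxproj", "w") as f:
    f.write(project)

# Build the generated project; assumes msbuild.exe is on the PATH.
subprocess.run(["msbuild", "generated.vcxproj", "/p:Configuration=Debug"], check=True)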
Visual Studio is the easiest solution, although if you really don't like Visual Studio you can get gcc through https://www.cygwin.com/
I need to write a Python wrapper for an existing C++ module. First I tested the procedure with this basic example (which now actually works fine): C++ - Python Binding with ctypes - Return multiple values in function
Now I tried to change the setup: I want to use the existing lib instead of my single .cpp file. I tried it with this:
g++ -c -I. -fPIC projectionWrapper.cpp -o projectionWrapper.o
g++ -shared -Wl,-soname,libproj.so
-L./build/liborig_interface.a,./build/liborig_base.a
-o libproj.so projectionWrapper.o
I wanted to link against both .a files from the given library with the -L option. I don't get any errors on that, but when I try to import the module via IPython, I get this:
import myprojection # I load libproj.so in this python file
OSError: ./libproj.so: undefined symbol: _Z29calibration_loadPKcjbP14camera_typetS2_
There is a function "calibration_load", as well as a "camera_type", in the original framework. But I have no clue where the cryptic characters in between come from.
Sorry for my vague explanation; I tried to explain it as well as possible, but C++ wrappers are not a topic where I feel "at home".
The problem is that you're not linking against the external library that you use in your C++ code; add -l<library> to your second g++ call. (The cryptic symbol in the error message is just the C++-mangled name of calibration_load, which the linker could not resolve.)
g++ -shared projectionWrapper.o
-L./build/base -L./build/interface
-loriginterface -lorigbase
-Wl,-soname,libproj.so
-o libproj.so
did the job. Thanks for the hint that I actually didn't link the libraries, as I had only used the -L option.
Moreover, the order of the options was wrong. I had to put "projectionWrapper.o" right at the beginning, as well as "-loriginterface" before "-lorigbase". This was answered here: "undefined reference" when linking against a static library
(The complete names of the libs are liboriginterface.a and liborigbase.a.)
I want to create a Python module which can have its functions called from a C++ class, and which can call C++ functions from that class.
I have looked at Boost; however, it hasn't seemed to make any sense.
It refers to a shared library (which I have no idea how to create), and I can't follow the code they use in the examples (it seems very confusing).
Here is their hello-world tutorial:
(http://www.boost.org/doc/libs/1_55_0b1/libs/python/doc/tutorial/doc/html/index.html#python.quickstart)
Following C/C++ tradition, let's start with the "hello, world". A C++ Function:
char const* greet()
{
return "hello, world";
}
can be exposed to Python by writing a Boost.Python wrapper:
#include <boost/python.hpp>
BOOST_PYTHON_MODULE(hello_ext)
{
using namespace boost::python;
def("greet", greet);
}
That's it. We're done. We can now build this as a shared library. The resulting DLL is now visible to Python. Here's a sample Python session:
>>> import hello_ext
>>> print hello_ext.greet()
hello, world
Next stop... Building your Hello World module from start to finish...
Could someone please explain what is being done, and most of all how Python knows about the C++ file?
Python does not know about the C++ file; it will only be aware of the extension module that is compiled from the C++ file. This extension module is a compiled object file, a shared library. This file has an interface that looks to Python as if it were a normal Python module.
This object file will only exist after you tell a compiler to compile the C++ file and link it with all the libraries it needs. Of course, the first library needed is Boost.Python itself, which must be available on the system where you are compiling.
You can tell Python to compile the C++ file for you, so that you do not need to mess with the compiler and its library flags. To do so, you need a file called setup.py, in which you use the Setuptools library or the standard Distutils to define how your other Python modules are to be installed on the system. One of the installation steps is compiling all extension modules, called the build_ext phase.
Let us imagine you have the following directories and files:
hello-world/
├── hello_ext.cpp
└── setup.py
The content of setup.py is:
from distutils.core import setup
from distutils.extension import Extension
hello_ext = Extension(
'hello_ext',
sources=['hello_ext.cpp'],
include_dirs=['/opt/local/include'],
libraries=['boost_python-mt'],
library_dirs=['/opt/local/lib'])
setup(
name='hello-world',
version='0.1',
ext_modules=[hello_ext])
As you can see, we are telling Python there is an Extension we want to compile, where the source file is, and where the included libraries are to be found. This is system-dependent. The example shown here is for a Mac OS X system, where Boost libraries were installed via MacPorts.
The content of hello_ext.cpp is as shown in the tutorial, but take care to reorder things so that the BOOST_PYTHON_MODULE macro comes after the definitions of whatever must be exported to Python:
#include <boost/python.hpp>
char const* greet()
{
return "hello, world";
}
BOOST_PYTHON_MODULE(hello_ext)
{
using namespace boost::python;
def("greet", greet);
}
You can then tell Python to compile and link for you by executing the following on the command line:
$ python setup.py build_ext --inplace
running build_ext
building 'hello_ext' extension
/usr/bin/clang -fno-strict-aliasing -fno-common -dynamic -pipe -Os -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/opt/local/include -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c hello_ext.cpp -o build/temp.macosx-10.9-x86_64-2.7/hello_ext.o
/usr/bin/clang++ -bundle -undefined dynamic_lookup -L/opt/local/lib -Wl,-headerpad_max_install_names -L/opt/local/lib/db46 build/temp.macosx-10.9-x86_64-2.7/hello_ext.o -L/opt/local/lib -lboost_python-mt -o ./hello_ext.so
(The --inplace flag tells Python to leave the products of compilation right next to the source files. The default is to move them to a build directory, to keep the source directory clean.)
After that, you will find a new file called hello_ext.so (hello_ext.pyd on Windows) in the hello-world directory. If you start a Python interpreter in that directory, you will be able to import the module hello_ext and use the function greet, as shown in the Boost tutorial.
Python is an interpreted language. This means that it needs a virtual machine to execute the statements. For example, if it encounters a = 5, Python (or rather the virtual machine that interprets your Python code) will create an object in memory that holds some information plus the value 5, and will make sure that any following reference to a finds that object. The same goes for more complex statements like input(): on these commands, the virtual machine triggers a hard-coded routine which does a lot of work under the hood before returning to read the next piece of Python code. So far, so good.
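If you are curious, the standard library's dis module lets you see what the virtual machine actually executes for such a statement (a quick illustration; the exact bytecode varies between Python versions):

import dis

# Compile one statement to a code object and disassemble it into the
# bytecode instructions the virtual machine runs.
dis.dis(compile("a = 5", "<example>", "exec"))
# Typical output (abbreviated):
#   1           0 LOAD_CONST               0 (5)
#               2 STORE_NAME               0 (a)
#               ...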
About modules: when issuing the import statement, Python will look for the specified module name in its path. This is usually a .py file containing only pure Python code to interpret. But it can also be a .pyd file, containing compiled routines that Python can use the way an executable uses a shared library. This file contains symbols and entry points, so that when the interpreter finds a special method name like mymodule.mymethod(), it knows where to find the routine to execute and runs it.
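You can inspect the path that is searched (a tiny illustration using only the standard library):

import sys

# The directories Python searches, in order, when resolving an import;
# both plain .py modules and compiled extensions (.pyd/.so) are found here.
print(sys.path)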
However, these routines have to conform to a specific interface, and that's why it is not straightforward to expose C/C++ functions to Python. The most obvious problem is that a Python int is not a C int, not a short, not even a long. It's a special structure that holds a lot more information, like how often the variable is referenced (so memory can be freed for objects that are no longer referenced), the type of the value it holds, etc. Of course, a typical C/C++ library doesn't work with these complex types, but uses vanilla int, float, char* and other nice plain types. So one has to translate the necessary Python values into simple C types the library can understand, and convert the results delivered by the library back into a format usable by Python's virtual machine. This is what is called the wrapper. The wrapper also has to take care of fiddly things like reference counts, memory management on the heap, initialization and finalization, and other chores. See some examples to get an idea of what such code can look like. It is not extremely complicated, but it is still some work.
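As a tiny illustration of that translation step from the Python side, here is what it looks like with ctypes (libexample.so and its add() function are invented for this sketch; a real C++ library would also need to export the function with extern "C" linkage):

import ctypes

lib = ctypes.CDLL("./libexample.so")             # load a shared library
lib.add.argtypes = [ctypes.c_int, ctypes.c_int]  # declare the C parameter types
lib.add.restype = ctypes.c_int                   # declare the C return type

result = lib.add(2, 3)  # the Python ints are converted to plain C ints and back
print(result)           # prints 5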
Now you get an idea of all the hard work done under the hood by the Boost.Python library (or other wrapping tools, for that matter) when you call the ridiculously simple def("greet", greet);.