Adding a library to C - python

I'm working with Python, but I have only a basic understanding of packaging with C. However, I don't know how to build the C 'path'. Also, my Google searches seem to be failing me, returning results about C++. Or is that my solution?
The objective is to include qrencode.h. I can surely put it in the same folder, but I'd like to know how to link to it instead.
Thanks!
PS. As always, addition to read material that is relevant would be much appreciated!

You use an include directive to include the *.h file in your C/C++ code:
#include "qrencode.h"
As @Ignacio Vazquez-Abrams says, though, that's just a header, which declares functions; you need the actual functions, and they'll be in a *.dylib or *.so file, which needs to be linked into an executable. Compiling turns one *.c file into a *.o file; linking puts all the *.o files and libraries together into an application. The -L option on the linker command line tells it where to look for libraries; the -l option tells it to link against a particular library.
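For example, if libqrencode is installed under /usr/local (an assumption; adjust the paths to wherever the header and library actually live), a compile-and-link invocation might look like:
gcc -o myprog myprog.c -I/usr/local/include -L/usr/local/lib -lqrencode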

Related

Bundle all dependencies in Cython compile [duplicate]

I am trying to make one Unix executable file from my Python source files.
I have two files, p1.py and p2.py.
p1.py:
from p2 import test_func
print(test_func())
p2.py:
def test_func():
    return ('Test')
Now, as we can see, p1.py depends on p2.py. I want to make an executable file by combining the two files. I am using Cython.
I changed the file names to p1.pyx and p2.pyx respectively.
Now I can make the file executable using Cython:
cython p1.pyx --embed
It will generate a C source file called p1.c. Next we can use gcc to make it executable:
gcc -Os -I /usr/include/python3.5m -o test p1.c -lpython3.5m -lpthread -lm -lutil -ldl
But how do I combine the two files into one executable?
People are tempted to do this because it's fairly easy to do for the simplest case (one module, no dependencies). @ead's answer is good, but honestly pretty fiddly, and it only handles the next-simplest case (two modules that you have complete control of, no dependencies).
In general a Python program will depend on a range of external modules. Python comes with a large standard library which most programs use to some extent, and there's a wide range of third-party libraries for maths, GUIs, web frameworks, and so on. Even tracing those dependencies through the libraries and working out what you need to build is complicated, and tools such as PyInstaller attempt it but aren't 100% reliable.
When you're compiling all these Python modules you're likely to come across a few Cython incompatibilities/bugs. Cython is generally pretty good, but it struggles with features like introspection, so it's unlikely a large project will compile cleanly in its entirety.
On top of that, many of those modules are compiled modules written either in C or using tools such as SWIG, F2PY, Cython, Boost.Python, etc. These compiled modules may have their own idiosyncrasies that make them difficult to link together into one large blob.
In summary, it may be possible, but for non-trivial programs it is not a good idea, however appealing it seems. Tools like PyInstaller and py2exe that use a much simpler approach (bundle everything into a giant zip file) are much more suitable for this task (and even then they struggle to be really robust).
Note this answer is posted with the intention of making this question a canonical duplicate for this problem. While an answer showing how it might be done is useful, "don't do this" is probably the best solution for the vast majority of people.
There are some hoops you have to jump through to make it work.
First, you must be aware that the resulting executable is a very slim layer which just delegates all the work to (i.e. calls functions from) libpythonX.Ym.so. You can see this dependency by calling
ldd test
...
libpythonX.Ym.so.1.0 => not found
...
So, to run the program you either need to have LD_LIBRARY_PATH pointing to the location of libpythonX.Ym.so, or you need to build the executable with an rpath option; otherwise at start-up of test the dynamic loader will throw an error similar to
/test: error while loading shared libraries: libpythonX.Ym.so.1.0: cannot open shared object file: No such file or directory
The generic build command would look like the following:
gcc -fPIC <other flags> -o test p1.c -I<path_python_include> -L<path_python_lib> -Wl,-rpath=<path_python_lib> -lpython3.6m <other_needed_libs>
It is also possible to build against the static version of the Python library, thus eliminating the runtime dependency on libpythonX.Ym; see for example this SO post.
The resulting executable test behaves exactly as if it were a Python interpreter. This means that test will now fail because it will not find the module p2.
One simple solution would be to cythonize the p2 module in place (cythonize p2.pyx -i): you would get the desired behavior, but you would have to distribute the resulting shared object p2.so along with test.
It is easy to bundle both extensions into one executable - just pass both cythonized C files to gcc:
# creates p1.c:
cython --embed p1.pyx
# creates p2.c:
cython p2.pyx
gcc ... -o test p1.c p2.c ...
But now a new (or old) problem arises: the resulting test executable once again cannot find the module p2, because there is no p2.py and no p2.so on the Python path.
There are two similar SO questions about this problem, here and here. In your case the proposed solutions are overkill; here it is enough to initialize the p2 module before it gets imported in the p1.pyx file to make it work:
# make the init-function of the p2 module accessible:
cdef extern object PyInit_p2()

# init/load the p2 module manually
PyInit_p2()  # Cython handles the error, i.e. if NULL is returned

# now use the already cached, imported module;
# no search on the Python path is needed
from p2 import test_func
print(test_func())
Calling the init-function of a module prior to importing it (actually the module will not really be imported a second time, only looked up in the cache) also works if there are cyclic dependencies between modules. For example, if module p2 imports module p3, which in turn imports p2.
Warning: Since Cython 0.29, Cython uses multi-phase initialization by default for Python>=3.5, so calling PyInit_p2 is not enough (see e.g. this SO post). To switch off this multi-phase initialization, -DCYTHON_PEP489_MULTI_PHASE_INIT=0 should be passed to gcc (or its equivalent to other compilers).
Note: However, even after all of the above, the embedded interpreter will need its standard libraries (see for example this SO post) - there is much more work to do to make it truly standalone! So maybe one should heed @DavidW's advice:
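For example, extending the build command shown above (a sketch only; paths and the Python version are placeholders as before):
gcc -fPIC -DCYTHON_PEP489_MULTI_PHASE_INIT=0 <other flags> -o test p1.c p2.c -I<path_python_include> -L<path_python_lib> -Wl,-rpath=<path_python_lib> -lpython3.6m <other_needed_libs>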
"don't do this" is probably the best solution for the vast majority of
people.
A word of warning: if we declare PyInit_p2() as
from cpython cimport PyObject
cdef extern PyObject *PyInit_p2();
PyInit_p2(); # TODO: error handling if NULL is returned
Cython will no longer handle the errors, and it becomes our responsibility. Instead of
PyObject *__pyx_t_1 = NULL;
__pyx_t_1 = PyInit_p2(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 4, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
produced for the object version, the generated code becomes just:
(void)(PyInit_p2());
i.e. no error checking!
On the other hand using
cdef extern from *:
"""
PyObject *PyInit_p2(void);
"""
object PyInit_p2()
will not work with g++ - one has to add extern "C" to the declaration.
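For completeness, a sketch of the C++-compatible variant with explicit C linkage added (assuming, as above, that p2 itself is compiled as plain C):
cdef extern from *:
    """
    extern "C" PyObject *PyInit_p2(void);
    """
    object PyInit_p2()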

Unit test C-generating python code

I have a project which involves some (fairly simple) C-code generation as part of a build system. In essence, I have some version information associated with a project (embedded C) which I want to expose in my binary, so that I can easily determine what firmware version was programmed to a particular device for debugging purposes.
I'm writing some simplistic python tools to do this, and I want to make sure they're thoroughly tested. In general, this has been fairly straightforward, but I'm unsure what the best strategy is for the code-generation portion. Essentially, I want to make sure that the generated files both:
Are syntactically correct
Contain the necessary information
The second, I can (I believe) achieve to a reasonable degree with regex matching. The first, however, is something of a bigger task. I could probably use something like pycparser and examine the resulting AST to accomplish both goals, but that seems like an unnecessarily heavyweight solution.
Edit: A dataflow diagram of my build hierarchy
Thanks for the diagram! Since you are not testing for coverage, if it were me, I would just compile the generated C code and see if it works :). You didn't mention your toolchain, but in a Unix-like environment, gcc <whatever build flags> -c generated-file.c || echo 'Oops!' should be sufficient.
Now, it may be that the generated code isn't a freestanding compilation unit. No problem there: write a shim. Example shim.c:
#include <stdio.h>
#include "generated-file.c"
int main(void) {
    printf("%s\n", GENERATED_VERSION); // or whatever is in generated-file.c
    return 0;
}
Then gcc -o shim shim.c && diff <(./shim) "name of a file holding the expected output" || echo 'Oops!' should give you a basic test. (The <() is bash process substitution.) The file holding the expected results may already be in your git repo, or you might be able to use your Python routine to write it to disk somewhere.
Edit 2 This approach can work even if your actual toolchain isn't amenable to automation. To test syntactic validity of your code, you can use gcc even if you are using a different compiler for your target processor. For example, compiling with gcc -ansi will disable a number of GNU extensions, which means code that compiles with gcc -ansi is more likely to compile on another compiler than is code that compiles with full-on, GNU-extended gcc. See the gcc page on "C Dialect Options" for all the different flavors you can use (ditto C++).
Edit Incidentally, this is the same approach GNU autoconf uses: write a small test program to disk (autoconf calls it conftest.c), compile it, and see if the compilation succeeded. The test program is (preferably) the bare minimum necessary to test if everything is OK. Depending on how complicated your Python is, you might want to test several different aspects of your generated code with respective, different shims.
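If the checks are driven from Python (as the tooling in the question is), the same idea fits naturally into a unit test. A minimal sketch, where the file name generated-file.c and the use of gcc -ansi are assumptions to be adapted to the real toolchain:
# test_generated.py - a sketch only; adjust file names and flags to your build
import subprocess
import unittest

class TestGeneratedCode(unittest.TestCase):
    def test_generated_file_compiles(self):
        # -fsyntax-only asks gcc to check the syntax without producing output
        proc = subprocess.Popen(
            ["gcc", "-ansi", "-fsyntax-only", "generated-file.c"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        _, err = proc.communicate()
        self.assertEqual(proc.returncode, 0, err)

if __name__ == "__main__":
    unittest.main()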

How to install python binding of a C++ library

Imagine that we are given the finished C++ source code of a library, called MyAwesomeLib. The goal is to expose some of its power to Python, so we create a wrapper using SWIG and generate a Python package called PyMyAwesomeLib.
The directory structure now looks like
root_dir
|-src/
|-lib/
| |- libMyAwesomeLib.so
| |- _PyMyAwesomeLib.so
|-swig/
| |- PyMyAwesomeLib.py
|-python/
|- Script_using_myawesomelib.py
So far so good. Ideally, all we want to do next is copy lib/*.so, swig/*.py and python/*.py into the corresponding directories in site-packages in a Pythonic way, i.e. using
python setup.py install
However, I got very confused when trying to achieve this simple goal using setuptools and distutils. Both tools handle the compilation of Python extensions through an internal system, where the source files, compiler flags etc. are passed using setup(ext_modules=[Extension(...)]). But this is ridiculous, since MyAwesomeLib already has a fully functioning build system based on makefiles. Porting the logic embedded in the makefiles would be redundant and completely unnecessary work.
After some research, it seems there are two options left: I can either override setuptools.command.build and setuptools.command.install to use the existing makefile and copy the results directly, or I can somehow let setuptools know about these files and ask it to copy them during installation. The second way is more appealing, but it is what gives me the most headache. I have tried the following options without success:
package_data, and include_package_data does not work because *.so files are not under version control and they are not inside of any package.
data_files does not seem to work, since the files only get included when running python setup.py sdist but are ignored when running python setup.py install. This is the opposite of what I want: the .so files should not be included in the source distribution, but should get copied during the installation step.
MANIFEST.in failed for the same reason as data_files.
eager_resources does not work either, but honestly I do not know the difference between eager_resources and data_files or MANIFEST.in.
I think this is actually a common situation, and I hope there is a simple solution to it. Any help would be greatly appreciated.
Porting the logic embedded in makefiles would be redundant and
completely unnecessary work.
Unfortunately, that's exactly what I had to do. I've been struggling with this same issue for a while now.
Porting it over actually wasn't too bad. distutils does understand SWIG extensions, but this was implemented rather haphazardly on their part. Running SWIG creates Python files, and the current build order assumes that all Python files have been accounted for before running build_ext. That one wasn't too hard to fix, but it's annoying that they would claim to support SWIG without mentioning this. distutils attempts to be cross-platform when compiling things, so there is still an advantage to using it.
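For illustration, a minimal sketch of such a SWIG-aware setup.py (the interface file name swig/PyMyAwesomeLib.i and the paths are assumptions based on the directory layout in the question, not the author's actual configuration):
# setup.py - a sketch only, not a drop-in solution
from distutils.core import setup, Extension

pymyawesomelib = Extension(
    "_PyMyAwesomeLib",
    sources=["swig/PyMyAwesomeLib.i"],  # build_ext runs SWIG on .i sources
    swig_opts=["-c++"],                 # have SWIG generate C++ wrapper code
    include_dirs=["src"],
    library_dirs=["lib"],
    libraries=["MyAwesomeLib"],
)

setup(
    name="PyMyAwesomeLib",
    ext_modules=[pymyawesomelib],
    py_modules=["PyMyAwesomeLib"],      # the SWIG-generated proxy module
)
Note that the build-order caveat above still applies: the SWIG-generated PyMyAwesomeLib.py only exists once build_ext has run.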
If you don't want to port your entire build system over, use the system's package manager. Many complex libraries do this (but they also try their best with setup.py). For example, to get numpy and lxml on Ubuntu you'd just do:
sudo apt-get install python-numpy python-lxml. No pip.
I realize you'd rather write one setup file instead of dealing with every package manager ever so this is probably not very helpful.
If you do try to go the setuptools route there is one fatal flaw I ran into: dependencies.
For instance, if you are distributing a SWIG-based project, it's going to need libpython. If they don't have it, an error like this happens:
#include <Python.h>
error: File not found
That's pretty unhelpful to the average user.
Even worse, if you require a shared library but the user's library is out of date, the user can get some crazy errors. You're at the mercy of their C++ compiler to output Google-friendly error messages so they can figure it out.
The long-term solution would be for setuptools/distutils to get better at detecting non-Python libraries, hopefully as good as Ruby's gem. I pretty much had to roll my own. For instance, in this setup.py I'm working on you can see a few functions at the top that I hacked together for dependency detection (it still doesn't work on all systems... definitely not Windows).
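As a rough illustration of that kind of hand-rolled check (a sketch only; the library name is taken from the question above and the error message is made up):
# a pre-flight check near the top of setup.py, before calling setup()
from ctypes.util import find_library
import sys

if find_library("MyAwesomeLib") is None:
    sys.exit("error: libMyAwesomeLib not found - please install it first")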

Using F2py in distutils

I am using Fortran programs within a Python script, and trying to build and install them with a setup.py script using numpy.distutils. However, I am not sure how to link in the various code files, so I thought I'd ask the question here and hope someone could explain clearly what to do with each type of file.
Let me explain a hypothetical situation, which happens to be fairly similar to my own. I have two files, each containing a module, that I wish to become .so files importable in Python. Say they are read.f90 (containing module read) and analyse.f90 (containing module analyse). Both of these modules use subroutines that are defined in another file, subs.f90, which I am constantly adding to and updating. The module analyse also relies on another module, produce, in the file produce.f90, which I may tune up to begin with but will basically leave alone after that. Furthermore, analyse also depends on an external library, libfoo.a.
There are two layers to making this work: firstly, the use and include statements must be correct in the .f90 files; secondly, the Extension configuration in the setup.py file must be correct. So far, I know how to get the external library working - in the module analyse, put use foo, and in setup.py, in the Extension function, use the keywords
library_dirs = ["path/to/library"],
libraries = ["foo"],
include_dirs = ["path/to/directory/with/mod/files"]
However, for the rest I am unsure. I have some of it working, but it doesn't seem to be done in the optimal way, and other parts aren't working at all. I just wondered if someone could explain clearly what to do?
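For reference, one possible shape of such a configuration with numpy.distutils (a sketch only: the source order, module names and paths below are assumptions for the hypothetical layout described, with module dependencies listed before the files that use them):
# setup.py - sketch for the hypothetical read/analyse layout above
from numpy.distutils.core import setup, Extension

ext_read = Extension(name="read", sources=["subs.f90", "read.f90"])

ext_analyse = Extension(
    name="analyse",
    sources=["subs.f90", "produce.f90", "analyse.f90"],
    library_dirs=["path/to/library"],
    libraries=["foo"],
    include_dirs=["path/to/directory/with/mod/files"],
)

setup(name="hypothetical_tools", ext_modules=[ext_read, ext_analyse])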

Is it possible to compile c code using python?

I want to build a Python program that gets as input a path to a .c file and then compiles it.
The program will output OK to the screen if the compilation is successful, and BAD otherwise.
I've been trying to google it, but could not find anything. I've also tried running the compiler from Python via the command line, but it didn't work.
To clarify - I already have a very specific compiler on my machine which I want to run. I don't want Python to act as a compiler; just take the code, run my compiler over it, and see what the answer is.
It should work on a Linux server with Python 2.4.
Thanks
You can compile C code using only the standard library, and it will work on every platform and with every Python version (assuming you actually have a C compiler available). Check out the distutils.ccompiler module which Python uses to compile C extension modules. A simple example:
// main.c
#include <stdio.h>

int main() {
    printf("Hello world!\n");
    return 0;
}
Compilation script:
# build.py
from distutils.ccompiler import new_compiler

if __name__ == '__main__':
    compiler = new_compiler()
    objects = compiler.compile(['main.c'])     # returns the list of object files, e.g. ['main.o']
    compiler.link_executable(objects, 'main')  # links the objects into ./main
Everything else (include paths, library paths, custom flags or link args or macros) can be passed via various configuration options. Check out the above link to the module documentation for more info.
Sure, why not? Of course, you'd need GCC installed (or llvm) so you have something to compile with. You can just use os.system, or any of the other ways for calling an external program.
Of course, you're probably better off looking at something like SCons, which already exists to solve this problem.
Plus, to answer the question actually asked: there's nothing that would prevent you from writing a compiler/assembler/linker in Python; they're just programs like anything else. Performance probably wouldn't be very good, though.
The following steps should do the trick:
Get PLY. Python Lex and Yacc. http://www.dabeaz.com/ply/
Find a Yacc/Lex configuration for C. http://www.lysator.liu.se/c/ANSI-C-grammar-y.html
Tweak PLY to use the C language rules you found.
Run. You are "compiling" C code -- checking the syntax.
If I understood you correctly, you just want to run a compiler with some arguments from Python?
In that case, you can just use os.system. http://docs.python.org/library/os.html#os.system
A better way is the subprocess module. http://docs.python.org/library/subprocess.html#module-subprocess
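A minimal sketch of that approach (the compiler name mycc and its flags are placeholders for your specific compiler; subprocess is available from Python 2.4 on):
# compile_check.py - a sketch only; replace 'mycc' with the real compiler
import subprocess
import sys

def check(path):
    # run the compiler on the given .c file and report its exit status
    ret = subprocess.call(['mycc', '-c', path])
    if ret == 0:
        print('OK')
    else:
        print('BAD')

if __name__ == '__main__':
    check(sys.argv[1])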
