I read in the Python documentation:
The build command is responsible for putting the files to install into a build directory.
I fear this documentation may be incomplete. Does python setup.py build do anything else? I expect this step to generate object files with Python bytecode, which will be interpreted at execution time by the Python VM.
Also, I'm building an automated code check into my source code repository. I want to know whether there is any benefit to running setup.py build (does it do any checks?), or whether a static code/PEP8 checker such as Pylint is good enough.
Does python setup.py build do anything else?
If your package contains C extensions (or defines some custom compilation tasks), they will be compiled too. If you only have Python files in your package, copying is all build does.
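For illustration, a minimal setup.py whose build step actually compiles something might look like the sketch below; the fastmath module name and fastmath.c source are hypothetical:

# Hypothetical example: "fastmath" and fastmath.c are made-up names.
from distutils.core import setup, Extension

setup(
    name="fastmath",
    version="0.1",
    # build (via its build_ext sub-command) compiles this C source
    # into a shared object under build/.
    ext_modules=[Extension("fastmath", sources=["fastmath.c"])],
)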
I expect this step to generate object files with Python bytecode, which will be interpreted at execution time by Python VM.
No, build does not do that. That happens at the install stage.
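For the curious, the byte-compilation that happens at install time can be reproduced manually with the standard library's compileall module; the path below is the usual build output directory, so adjust as needed:

import compileall

# Byte-compile every .py file under the given directory, producing the
# .pyc files that the install step would otherwise generate.
compileall.compile_dir("build/lib", force=True)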
I want to know if there is any benefit of running setup.py build (Does it do any checks?) or is a static code/PEP8 checker such as Pylint good enough?
By all means, run pylint. build does not even check the syntax.
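If you only want a quick syntax gate rather than a full lint, a minimal sketch using the standard library's py_compile module (mymodule.py is a placeholder file name):

import py_compile

try:
    py_compile.compile("mymodule.py", doraise=True)
except py_compile.PyCompileError as err:
    print("Syntax error:", err)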
Related
I know there are ways to call Python from C++, like the Python/C API or Boost.Python. My question is: how can I distribute the application? For example, does the user still need to install Python and Python packages on their machine?
My use case is: I want to use some Python code from my C++ code. The main application is written in C++, and I am going to deploy it. The goal is to make the app self-contained, so that users don't need to install Python or any Python packages at all.
The possible steps may be:
1. Call Python from C++ via the Python/C API or Boost.Python from source code.
2. Bring the Python/C libraries along with the application.
I hope that after these two steps my app will be self-contained, standalone software. Users could then just copy the app folder to any other machine that has no Python installed.
Note that due to licensing issues I cannot use PyInstaller. I also ran into some problems when trying to use "Nuitka" to make the Python part self-contained. So I am now trying to call Python directly from C++. I know it will run on my developer machine, but I need to confirm that this solution also makes the app self-contained and won't require users to install Python.
Update: I now believe I need to do the following to make my app self-contained if I use the Python/C API to call Python from C++:
1. Bring all needed runtimes with my app (the C++ runtime, of course, and the python_version.dll).
2. Deploy a Python interpreter inside my app: simply copy the Python folder from the Python installation and remove the files that are not needed (like header files and .lib files); see the sketch after this list.
3. Use the Py_SetPythonHome function to point at the copied Python interpreter inside the app.
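For step 2, something along these lines should work. This is only a sketch: the source path, destination path, and prune patterns are assumptions you would adapt to your own layout and Python version:

import shutil

# Copy the installed Python next to the app, skipping files the embedded
# interpreter does not need (headers, import libraries, debug symbols, tests).
shutil.copytree(
    "C:/Python38",                # assumed install location
    "dist/MyApp/python",          # assumed location inside the app bundle
    ignore=shutil.ignore_patterns("include", "libs", "*.pdb", "test"),
)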
I'd say you're on the right track. Basically, you should obtain a Python (shared or static) library, compile your program with it, and of course bundle the Python dependencies you have with your program. The best documentation I've read is available here: https://docs.python.org/3.8/extending/embedding.html#embedding-python-in-another-application. Roughly, the process is:
Get the Python sources from python.org and build them with ./configure --enable-shared (I believe omitting --enable-shared produces only the python binary, not the shared library).
Compile your program. Have it reference the headers under Include and link the library. Note that you can obtain the compiler and linker flags you need as described here (Python's sysconfig module can report them too; see the snippet after these steps).
Call Python code from within your application using e.g. PyRun_SimpleString() or other functions from the C API. Note that you may also depend on the Python standard library (under Lib in the distribution) if there's any functionality you use from it.
If you linked against Python statically, at this point you're done, aside from bundling any Python code you depend on, which I'm not sure is relevant in your case.
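Regarding the flags in the compile step above: on Unix builds of Python you can also ask the interpreter itself, via the sysconfig module. A small sketch of the kind of values the pythonX.Y-config script reports:

import sysconfig

print(sysconfig.get_config_var("INCLUDEPY"))   # directory containing Python.h
print(sysconfig.get_config_var("LIBDIR"))      # directory containing libpython
print(sysconfig.get_config_var("LDLIBRARY"))   # library file name, e.g. libpython3.8.so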
I was struggling with the same problem: I had a project made up of C++ and (embedded) Python, and deployment/distribution was an issue.
After some research I arrived at a solution which is not perfect, but it does let your app run on other systems:
1. Switch Visual Studio to Release mode and compile (you get a folder in your working directory).
2. Install PyInstaller (pip install pyinstaller).
3. Navigate to the PyInstaller folder and run: pyinstaller.exe "your_script_file_path.py". This creates a dist folder.
4. Copy that dist folder into the working folder where the .exe is.
Remember: the dist folder and the C/Python code must be built with the same version of Python. Then you are good to go; it should work.
I have a mainly C++ project that I use CMake to manage. After setting CMAKE_INSTALL_PREFIX and configuring, it generates makefiles which can then be used to build and install with the very standard:
make
make install
At this point, my binaries end up in CMAKE_INSTALL_PREFIX, and they can be executed with no additional work. Recently I've added some Python scripts to a few places in the source tree, and some of them depend on others. I can use CMake to copy the Python files and directory structure to CMAKE_INSTALL_PREFIX, but if I go into that path and try to use one of the scripts, Python cannot find the other scripts used as imports, because PYTHONPATH does not contain CMAKE_INSTALL_PREFIX. I know you can set an environment variable with CMake, but it doesn't persist across shells, so it's not really "set up" for the user beyond the current terminal session.
The solution seems to be to add a step to the software build instructions that says "set your PYTHONPATH". Is there any way to avoid this? Is this the standard practice for "installing" Python scripts as part of a bigger project? It really complicates things like setting up continuous integration, since something like Jenkins has to be manually configured to inject environment variables, whereas nothing special was required for it to build and execute the executables built from the C++ code.
Python provides the sys.path list, which is used to search for modules with import directives. You may adjust this list before importing your modules:
script1.py:
# Do some things useful for other scripts
script2.py.in:
# Uses script1.py.
...
import sys
sys.path.insert(1, "@SCRIPT1_INSTALL_PATH@")
import script1
...
CMakeLists.txt:
...
# Installation path for script1. Depends on CMAKE_INSTALL_PREFIX.
set(SCRIPT1_INSTALL_PATH ${CMAKE_INSTALL_PREFIX}/<...>)
install(FILES script1.py DESTINATION ${SCRIPT1_INSTALL_PATH})
# Configure 'sys.path' in script2.py, so it can find script1.py.
configure_file("script2.py.in" "script2.py" @ONLY)
set(SCRIPT2_INSTALL_PATH ${CMAKE_INSTALL_PREFIX}/<...>)
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/script2.py" DESTINATION ${SCRIPT2_INSTALL_PATH})
...
If you want script2.py to work both in the build tree and in the install tree, you need two instances of it: one which works in the build tree and one which works after being installed. Both instances may be configured from a single .in file.
In the case of compiled executables and libraries, a similar mechanism is used to help binaries find libraries in non-standard locations. It is known as RPATH.
Because CMake
knows every binary created (it tracks add_executable and add_library calls),
knows the linkage between binaries (target_link_libraries calls are also tracked), and
has full control over the linking procedure,
it is able to automatically adjust the RPATH when installing binaries.
In the case of Python scripts, CMake has no such information, so adjusting the import path has to be performed manually.
I am trying to embed a piece of Cython code in a C++ project, such that I can compile a binary that has no dependencies on Python 2.7 (so users can run the executable without having Python installed). The Cython source is not pure Cython: there is also Python code in there.
I am compiling my Cython code using distutils in the following script (setup.py):
from distutils.core import setup
from Cython.Build import cythonize
setup(
    ext_modules=cythonize("test.pyx")
)
I then run the script using python setup.py build_ext --inplace. This generates a couple of files: test.c, test.h, test.pyd and some library files: test.exp, test.obj and test.lib.
What would be the proper procedure to import this into C++? I managed to get it working by including test.c and test.h during compilation and test.lib during linking.
I am then able to call the Cython functions after I issue
Py_Initialize();
inittest();
in my C++ code.
The issue is that there are numerous dependencies on Python, both during compilation (e.g., in test.h) as well as during linking. The bottom line is that in order to run the executable, Python has to be installed (otherwise I get errors about a missing python27.dll).
Am I going in the right direction with this approach? There are so many options that I am just very confused about how to proceed. Conceptually, it also does not make sense why I should call Py_Initialize() if I want the whole thing to be Python-independent. Furthermore, this is apparently the 'Very High Level Embedding' method rather than a low-level Cython embedding, but this is just how I got it to work.
If anybody has any insights on this, that would be really appreciated.
Cython cannot make Python code Python-independent; it calls into the Python library in order to handle Python types and function calls. If you want your program to be Python-independent then you should not write any Python code.
(This is primarily extra detail to Ignacio Vazquez-Abrams's answer, which says that you can't eliminate the Python dependency.)
If you don't want to force your users to have Python installed themselves, you could always bundle python27.dll with your application (read the license agreement, but I'm almost certain it's fine!).
However, as soon as you do an import in your code, you either have to bundle the relevant module or make sure it (and anything it imports!) is compiled with Cython. Unless you're doing something very trivial, you could end up spending a lot of time chasing dependencies. This includes the majority of the standard library.
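A cheap way to get an idea of how deep that rabbit hole goes is the standard library's modulefinder, which walks the import graph of a script (test.py here stands in for your entry point):

from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script("test.py")   # hypothetical entry point
# Every module listed here must be bundled or compiled with Cython.
for name in sorted(finder.modules):
    print(name)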
Imagine that we are given the finished C++ source code of a library, called MyAwesomeLib. The goal is to expose some of its power to Python, so we create a wrapper using SWIG and generate a Python package called PyMyAwesomeLib.
The directory structure now looks like
root_dir
|-src/
|-lib/
| |- libMyAwesomeLib.so
| |- _PyMyAwesomeLib.so
|-swig/
| |- PyMyAwesomeLib.py
|-python/
|- Script_using_myawesomelib.py
So far so good. Ideally, all we want to do next is to copy lib/*.so, swig/*.py, and python/*.py into the corresponding directories under site-packages in a Pythonic way, i.e. using
python setup.py install
However, I got very confused when trying to achieve this simple goal using setuptools and distutils. Both tools handle the compilation of Python extensions through an internal system, where the source files, compiler flags, etc. are passed using setup(ext_modules=[Extension(...)]). But this is ridiculous, since MyAwesomeLib has a fully functioning build system based on makefiles. Porting the logic embedded in the makefiles would be redundant and completely unnecessary work.
After some research, it seems there are two options left: I can either override setuptools.command.build and setuptools.command.install to use the existing makefile and copy the results directly (see the sketch after this list), or I can somehow let setuptools know about these files and ask it to copy them during installation. The second way is more appealing, but it is what gives me the most headache. I have tried the following options without success:
package_data and include_package_data do not work, because the *.so files are not under version control and they are not inside any package.
data_files does not seem to work, since the files only get included when running python setup.py sdist but are ignored by python setup.py install. This is the opposite of what I want: the .so files should not be included in the source distribution, but should get copied during the installation step.
MANIFEST.in failed for the same reason as data_files.
eager_resources does not work either, but honestly I do not know the difference between eager_resources and data_files or MANIFEST.in.
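For what it's worth, the first option mentioned above (overriding the commands to drive the existing makefile) might look roughly like the sketch below. It assumes make drops the built .so files inside a pymyawesomelib package directory; all names are illustrative:

import subprocess
from setuptools import setup
from setuptools.command.build_py import build_py

class BuildWithMake(build_py):
    def run(self):
        # Run the library's own build system first, then let
        # setuptools copy the results as usual.
        subprocess.check_call(["make", "-C", "src"])
        build_py.run(self)

setup(
    name="PyMyAwesomeLib",
    version="0.1",
    packages=["pymyawesomelib"],
    # Pick up the prebuilt shared objects placed in the package by make.
    package_data={"pymyawesomelib": ["*.so"]},
    cmdclass={"build_py": BuildWithMake},
)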
I think this is actually a common situation, and I hope there is a simple solution to it. Any help would be greatly appreciated.
Porting the logic embedded in makefiles would be redundant and completely unnecessary work.
Unfortunately, that's exactly what I had to do. I've been struggling with this same issue for a while now.
Porting it over actually wasn't too bad. distutils does understand SWIG extensions, but this was implemented rather haphazardly on their part. Running SWIG creates Python files, and the current build order assumes that all Python files have been accounted for before running build_ext. That one wasn't too hard to fix, but it's annoying that they would claim to support SWIG without mentioning this. distutils attempts to be cross-platform when compiling things, so there is still an advantage to using it.
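For reference, invoking that SWIG support looks roughly like this; build_ext runs SWIG on the .i file before compiling, and all paths and names below are illustrative:

from distutils.core import setup, Extension

setup(
    name="PyMyAwesomeLib",
    ext_modules=[
        Extension(
            "_PyMyAwesomeLib",
            sources=["swig/PyMyAwesomeLib.i", "src/MyAwesomeLib.cpp"],
            swig_opts=["-c++"],          # have SWIG emit C++ wrapper code
            include_dirs=["src"],
            library_dirs=["lib"],
            libraries=["MyAwesomeLib"],  # link against the existing library
        )
    ],
)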
If you don't want to port your entire build system over, use the system's package manager. Many complex libraries do this (but they also try their best with setup.py). For example, to get numpy and lxml on Ubuntu you'd just do:
sudo apt-get install python-numpy python-lxml. No pip.
I realize you'd rather write one setup file instead of dealing with every package manager ever so this is probably not very helpful.
If you do try to go the setuptools route there is one fatal flaw I ran into: dependencies.
For instance, if you are distributing a SWIG-based project, it's going to need the Python development files (Python.h and libpython). If users don't have them, an error like this happens:
#include <Python.h>
error: File not found
That's pretty unhelpful to the average user.
Even worse, if you require a shared library but the user's library is out of date, the user can get some crazy errors. You're at the mercy of their C++ compiler to output Google-friendly error messages so they can figure it out.
The long-term solution would be for setuptools/distutils to get better at detecting non-Python libraries, hopefully becoming as good as Ruby's gem. I pretty much had to roll my own. For instance, in this setup.py I'm working on you can see a few functions at the top that I hacked together for dependency detection (it still doesn't work on all systems... definitely not Windows).
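As a taste of what "rolling your own" looks like, here is a simplified sketch of a header check that fails early with a readable message; real detection code has to handle far more cases:

import os
import sys
import sysconfig

def have_python_headers():
    # Look for Python.h in the include directory of the running interpreter.
    include_dir = sysconfig.get_paths()["include"]
    return os.path.exists(os.path.join(include_dir, "Python.h"))

if not have_python_headers():
    sys.exit("Python development headers not found; "
             "install your distribution's python-dev package and retry.")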
There don't seem to be any instructions on how to build this sucker. I downloaded it from http://benjamin.smedbergs.us/pymake/
The usual build files are not in the top directory, just make.py and mkparse.py, and neither of them seems to do much. It seems like it needs a makefile, but there isn't one in any part of the distro.
> python make.py build
make.py[0]: Entering directory '/Users/ron/lib/pymake-default'
No makefile found
Any hints?
pymake is a make utility: running make.py looks for a Makefile (one that you've created for your own project). There's no build step required for pymake itself.