Should PYTHONPATH include ./build/*?

Running
$ python setup.py build_ext
with the usual Cython extension configuration creates a build directory and places the compiled modules deep within it.
How is the Python interpreter supposed to find them now? Should PYTHONPATH include those sub-sub-directories? It seems kludgy to me. Perhaps this is meant to work differently?

You will find the relevant information here: https://docs.python.org/3.5/install/
The build directory only holds intermediate results produced before Python actually installs the module. Rather than pointing PYTHONPATH at build/, install the package and put the installed library path on PYTHONPATH, for instance:
<dir>/lib/python
if you use the "home" installation scheme, where <dir> is the directory you have chosen, e.g.
/home/user2
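A minimal sketch of that workflow, assuming the home scheme and the example directory above:
$ python setup.py install --home=/home/user2
$ export PYTHONPATH=/home/user2/lib/python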

Presumably, when you write a package containing Cython code, your setup.py will contain something similar to this:
from setuptools import setup
from Cython.Build import cythonize

setup(
    ext_modules=cythonize("example.pyx")
)
(there are some variations, but that's the general idea). When you run
python setup.py install
or
python setup.py install --user
You will see it creates binary files (with extensions based on your OS - on mine it will be example.so) and copies them to the standard installation directory (also depending on your OS).
These binary files are therefore already in the import path of your Python distribution, and it can import them like regular modules.
Consequently, you do not need to add the build directory to the path. Just install (possibly with --user, or use virtualenv, if you're developing), and let the extensions be imported the regular way.
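For instance, a quick sanity check after installing (assuming the example.pyx module from the snippet above and an install with --user):
import example                # resolved from site-packages like any other module
print(example.__file__)       # points at the installed extension, not at build/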

Related

How to pre-install a python package into a OpenWRT custom image?

I'm developing a router and need a python module snmp_passpersist to be pre-installed.
The original source is written for python2, so I modified it to work with python3, and I need to pre-install it into the product image.
I know how to install a python module onto a running live environment by means of pip and the setup.py
that comes with the original source, but now I'm in the buildroot env of OpenWRT.
I read through the customizing-packages overview of OpenWRT, but it covers C code and binary executables.
It looks like a python module/package needs some extra steps beyond a plain cp command, e.g. compiling *.py files into *.pyc and building an egg file with the package metadata.
Maybe simply copying the egg file into the target lib folder would work, but I worry that there would then be no version information in the pip environment.
I want to know the correct/formal way.
Thanks!
You should follow the pattern used by an official python package from OpenWrt.
Add the makefile includes for python:
include ../pypi.mk
include $(INCLUDE_DIR)/package.mk
include ../python3-package.mk
There are built-in macros for the makefile, e.g. $(eval $(call Py3Package,python3-curl)).
The python package is then pre-built and included in your custom image.
Example: https://github.com/openwrt/packages/blob/openwrt-21.02/lang/python/python-curl/Makefile
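For orientation only, here is a trimmed-down sketch of such a package Makefile, loosely modeled on the python-curl example linked above. The package name python3-snmp-passpersist, the PYPI_NAME, the version and the hash are placeholders you would have to fill in for your own module:
include $(TOPDIR)/rules.mk

PKG_NAME:=python-snmp-passpersist
PKG_VERSION:=1.3.0
PKG_RELEASE:=1

# pypi.mk derives the source tarball name and URL from PYPI_NAME
PYPI_NAME:=snmp_passpersist
# replace with the real sha256 of the release tarball
PKG_HASH:=0000000000000000000000000000000000000000000000000000000000000000

include ../pypi.mk
include $(INCLUDE_DIR)/package.mk
include ../python3-package.mk

define Package/python3-snmp-passpersist
  SECTION:=lang
  CATEGORY:=Languages
  SUBMENU:=Python
  TITLE:=SNMP pass_persist helper module
  DEPENDS:=+python3-light
endef

define Package/python3-snmp-passpersist/description
  Python3 module for writing NET-SNMP pass_persist scripts.
endef

$(eval $(call Py3Package,python3-snmp-passpersist))
$(eval $(call BuildPackage,python3-snmp-passpersist))
Selecting the package in menuconfig (or adding it to your image profile) then gets it pre-installed into the built image with its version information intact.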

How to extend a python package by binary executables?

My package is written almost entirely in python. However, some functionality is based on executables that are run within python using subprocess. If I set up the package locally, I need to first compile the corresponding C++ project (managed by CMake) and ensure that the resulting binary executables are created in the bin folder. My python scripts then can call these utilities.
My project's folder structure resembles the following one:
root_dir
- bin
  - binary_tool1
  - binary_tool2
- cpp
  - CMakeLists.txt
  - tool1.cpp
  - tool2.cpp
- pkg_name
  - __init__.py
  - module1.py
  - module2.py
  - ...
- LICENSE
- README
- setup.py
I am now considering creating a distributable python package and publishing it via PyPI/pip. I therefore need to include the build step of the C++ project in the packaging procedure.
So far, I create the python package (without the binary "payload") as described in this tutorial. I now wonder how to extend the packaging procedure such that the C++ binary files are distributed along with the package.
Questions:
Is setuptools designed for such a use-case at all?
Is this "mixed" package approach feasible at all or are binary compatibility issues to be expected?
I believe that the canonical approach to extend a pure-python package with C code is to create "binary extensions" (e.g. using distutils, or as described here). In this case, the functionality is provided by executables, and not by wrappable C/C++ functions. I would like to avoid redesigning the C++ project to create binary extensions.
I found a number of half-answers to this but nothing complete, so here goes.
Quick and easy (single-platform)
I believe you'll need to remove the dash from your package name. The rest of this answer assumes it's been replaced with an underscore.
Starting with your directory structure, create a copy of bin under pkg_name (or move bin there). If you do not, you will end up installing files into both site-packages/pkg_name and site-packages/bin instead of having everything under site-packages/pkg_name.
Your minimal set of files needed for packaging should now be as follows:
- pkg_name/
  - __init__.py
  - module1.py
  - module2.py
  - bin/
    - binary_tool1
    - binary_tool2
- setup.py
To call your binary executable from the code, use a relative path to __file__:
import os
import subprocess

def run_binary_tool1(args):
    cmd = [os.path.join(os.path.dirname(__file__), 'bin', 'binary_tool1')] + args
    p = subprocess.Popen(cmd, ...)
    ...
In setup.py, reference your binaries in addition to your package folder:
from setuptools import setup

setup(
    name='pkg_name',
    version='0.1.0',
    package_data={
        'pkg_name': ['bin/binary_tool1', 'bin/binary_tool2']
    },
    packages=['pkg_name']
)
Do yourself a favor and create a Makefile:
# Makefile for pkg_name python wheel
# PKG_NAME and VERSION should match what is in setup.py
PKG_NAME=pkg_name
VERSION=0.1.0

# Shouldn't need to change anything below here

# determine the target wheel file
WHEEL_TARGET=dist/${PKG_NAME}-${VERSION}-py2.py3-none-any.whl

# help
help:
	@echo "Usage: make <setup|build|install|uninstall|clean>"

# install packaging utilities (only run this once)
setup:
	pip install wheel setuptools

# build the wheel
build: ${WHEEL_TARGET}

# install to local python environment
install: ${WHEEL_TARGET}
	pip install ${WHEEL_TARGET}

# uninstall from local python environment
uninstall:
	pip uninstall ${PKG_NAME}

# remove all build artifacts
clean:
	rm -rf build dist ${PKG_NAME}.egg-info
	find . -name __pycache__ -exec rm -rf {} \; 2>/dev/null

# build the wheel
${WHEEL_TARGET}: setup.py ${PKG_NAME}/__init__.py ${PKG_NAME}/module1.py ${PKG_NAME}/module2.py ${PKG_NAME}/bin/binary_tool1 ${PKG_NAME}/bin/binary_tool2
	python setup.py bdist_wheel --universal
Now you're ready to roll:
make setup # only run once if needed
make install # runs `make build` first
## optional:
# make uninstall
# make clean
and in python:
import pkg_name
pkg_name.run_binary_tool1(...)
...
Multi-platform
You'll almost certainly want to provide more info in your setup() call, so I won't go into detail on that here. More importantly, the above creates a wheel that purports to be universal but really is not. This might be sufficient for your needs if you are sure you will only be distributing on a single platform and you don't mind this mismatch, but would not be suitable for broader distribution.
For multi-platform distribution, you could go the obvious route and create platform-specific wheels (changing the --universal flag in the above Makefile command, etc).
Alternatively, if you can compile a binary for every platform, you could package all of the binaries for all platforms in your one universal wheel and let your python code figure out which binary to call (for example, by checking sys.platform and/or other available variables to determine the platform details).
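For illustration, a minimal sketch of that platform dispatch, assuming a hypothetical layout with per-platform subdirectories under bin/ (e.g. bin/linux-x86_64/, bin/darwin-arm64/):
import os
import platform
import subprocess
import sys

def _binary_path(name):
    # e.g. "linux-x86_64"; use whatever naming convention you adopt when
    # laying out the per-platform bin/ subdirectories in the wheel
    plat = "%s-%s" % (sys.platform, platform.machine())
    return os.path.join(os.path.dirname(__file__), "bin", plat, name)

def run_binary_tool1(args):
    return subprocess.run([_binary_path("binary_tool1")] + args, check=True)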
The advantages of this alternative approach are that the packaging process stays easy, the platform-dynamic code is some simple python, and you can reuse the same binary on multiple platforms provided it actually works on them. I would not be surprised if the "all-binaries" approach is frowned on by at least some if not many, but hey, python users often say that developer time is king, so this approach has that argument in its favor, as does the overall idea of packaging a binary executable instead of going through all the brain damage of creating a python/C wrapper interface.

Build a namespace package for use via $PYTHONPATH

It's a slightly awkwardly worded question, but basically:
- I can only activate python packages by adding to $PYTHONPATH
- I want to be able to use arbitrary python packages this way
- Some python packages use package namespaces
I've got a wrapper script to build a python package; it mostly just runs setup.py --single-version-externally-managed. I can add whatever I want to this build process, but it must stay general, as it's used to build a large number of packages.
Normally, I can just add the build result onto $PYTHONPATH, and all is well.
But with namespaced packages (e.g. ndg-httpsclient), it looks a lot like they'll only work from a designated site-packages directory, because they use .pth files.
Confusingly, there is an ndg/__init__.py in the source, which contains the namespace package boilerplate code:
__import__('pkg_resources').declare_namespace(__name__)
So that looks like it ought to be importable directly on $PYTHONPATH, but then the installation taunts me with:
Skipping installation of /<some-build-dir>/site-packages/ndg/__init__.py (namespace package)
Presumably it has a good reason for doing that, but its reliance on .pth files means the result can't be used via $PYTHONPATH, only by copying it to a well-known location, which I am unable to do for technical reasons. I am also unable to inject arbitrary python into the app initialization (so site.addsitedir() is out too).
So, is there some way I don't know about to have a namespace module importable without putting it in a well-known location? Alternatively, is there a way to convince setuptools to not skip the __init__.py -- and would that actually work?

How might one specify or add a directory to the Python.h search path during a module build/install using setup.py?

I'm running Linux, and have downloaded a python module I must install without access to any but my particular /home/user directory (I have no root privileges nor the option to pursue them).
This of course requires the Python source, which I've downloaded and have lying around in said user directory.
While asking the admin to copy the proper files into /usr/include/python2.7 would be the easiest way to go about this, I am hoping for a more general and portable solution to this kind of problem.
Changing only data in the module source (MANIFEST.in, README.txt, setup.py, etc.), how might I add an arbitrary directory to the search path for Python.h and friends?
(Without a solution, "python setup.py build" will keep failing with the "Python.h: No such file or directory" error.)
Thank you very much.
For building compiled packages, you need to tell the configure step of setup.py to look in a different location for include files. I believe this can be done like so:
python setup.py config --with-includepath=/path/to/your/install/of/python/includes/
You may also need to tell setup.py about the location of other files (such as libraries), in which case take a look at:
python setup.py config --help
and check out the --libraries and --library-dirs options. To change the location the resulting package is installed to, use the prefix option after building, e.g.:
python setup.py install --prefix=/path/to/install/to/
The exact combination of options you require might depend on the package you are installing. If you need to do this frequently, it can also be done by specifying a setup.cfg config file, as discussed here.
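For example, a minimal setup.cfg sketch (the paths are placeholders for wherever your locally unpacked Python headers and libraries actually live):
[build_ext]
include_dirs = /home/user/python/include/python2.7
library_dirs = /home/user/python/lib
With this file next to setup.py, a plain "python setup.py build" picks the directories up without any extra command-line flags.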

How do I convince setuptools to use a temporary directory for required packages from setup_requires or tests_require?

Inside the setup.py I have something like this:
setup_requires=['nose>=1.0'],
tests_require=[],
The problem is that when I run ./setup.py test it will download and unpack these modules in the directory containing setup.py.
How can I convince it to use a temporary directory for this? I do not want to pollute the source control system with these files, and I do not want to start adding lots and lots of exclude patterns.
If the problem is the source tree of your project, you should probably make a script that deletes the "dist" and "build" directories created by distutils at the end of the setup test.
The downloaded packages themselves are usually *.egg folders in your source tree.
You are not polluting your distribution.
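For example, a one-line cleanup (adjust the patterns to whatever actually appears in your tree):
$ rm -rf build dist *.egg *.egg-info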
From the setuptools documentation:
setup_requires will NOT be automatically installed on the system where
the setup script is being run. They are simply downloaded to the setup
directory if they’re not locally available already. If you want them
to be installed, as well as being available when the setup script is
run, you should add them to install_requires and setup_requires.
and
tests_require If your project’s tests need one or more additional
packages besides those needed to install it, you can use this option
to specify them. It should be a string or list of strings specifying
what other distributions need to be present for the package’s tests to
run. When you run the test command, setuptools will attempt to obtain
these (even going so far as to download them using EasyInstall). Note
that these required projects will not be installed on the system where
the tests are run, but only downloaded to the project’s setup
directory if they’re not already installed locally.
http://packages.python.org/distribute/setuptools.html
If you have installed some package and you need to remove it,
just find your package in the subfolder "site-packages" of your python distribution
and delete it. Finally remove the package reference in the easy-install.pth file
which is usually located in the same "site-packages" dir.
Starting with setuptools 7.0, transient dependencies loaded for setup_requires, tests_require, and similar are installed into ./.eggs. I strongly encourage adding that to your global ignore list.
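For example, typical ignore entries for a git-based project (adapt to your VCS), covering the artifacts mentioned above:
.eggs/
*.egg
*.egg-info/
build/
dist/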
