How to install Python modules using cppyy?

I want to package a Python module containing Python source and a native C++ library. cppyy is used to dynamically generate the bindings, so the library is really just a normal shared library. The build system for the library is Meson and should not be replaced. The whole thing is in a Git repository. I only care about Linux.
My question is how to get from this to “pip install url_to_package builds and installs everything” in the least complicated way possible.
What I’ve tried:
Extending setuptools with a custom build command:
…that executes meson compile and copies the result into the right place. But pip install performs its work in some separate, isolated temporary directory, and I can’t find my C++ sources from there (see the sketch after this list).
The Meson python module:
…can build my library and install files directly into some Python environment, but it does not work with pip and has very limited functionality.
Wheels:
…are incredibly confusing and overkill for me. I will likely be the only user of this module. Actually, all I want is to easily use the module in projects that live in different directories…
Along the way, I also came across different CMake solutions, but those are disqualified because of my build system choice. What should I do?
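For the custom-build-command route above, a minimal sketch: it assumes the C++ sources are shipped with the sdist (e.g. listed in MANIFEST.in) and resolves every path relative to setup.py, which is what survives pip's isolated temporary build directory. The package, directory, and library names are all illustrative.

import os
import shutil
import subprocess
from setuptools import setup
from setuptools.command.build_py import build_py

class MesonBuildPy(build_py):
    """Build the C++ library with Meson before the normal Python build."""
    def run(self):
        # resolve paths relative to setup.py, not the current working
        # directory, so the sources are found in pip's temporary tree
        here = os.path.abspath(os.path.dirname(__file__))
        build_dir = os.path.join(here, "builddir")
        subprocess.check_call(["meson", "setup", build_dir, here])
        subprocess.check_call(["meson", "compile", "-C", build_dir])
        # copy the built library into the Python package so it gets
        # installed alongside the bindings (libmylib.so is illustrative)
        shutil.copy2(os.path.join(build_dir, "libmylib.so"),
                     os.path.join(here, "mypackage"))
        super().run()

setup(cmdclass={"build_py": MesonBuildPy})

Note that meson and ninja must then be available inside pip's isolated build environment, e.g. by listing them under build-system.requires in pyproject.toml.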

Related

Dynamically linking a pip native package to another pip package

I am building a Python module using C++ with pybind11. This project will depend on two separate packages that provide bindings for native C++ libraries:
opencv (provided by the pip package opencv-python)
imgui (provided by the pip package pyimgui).
These packages, once installed, consist mainly of dynamic libraries, plus some glue code in Python.
My question concerns the installation/build process on the final user's computer, i.e. when pip install . is launched (in build-isolation mode): how do I find the paths to the third-party dynamic libraries when linking?
In the end, they will be located somewhere in the user's virtual environment; but I need to link against them during the pip build (in isolation mode), and I also need to ensure that they will be found when the module is used.
Note: as far as OpenCV is concerned, I could link against a local installation of OpenCV (i.e. require the user to apt install opencv beforehand, and use find_package(OpenCV)), but I suspect this is not the way to go. Concerning imgui, this solution would not apply.
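One way to approach this (a hedged sketch, assuming opencv-python is declared as a build requirement so that cv2 is importable inside the isolated build environment): ask the installed module itself where its files live, and feed that directory to the linker.

# locate the directory holding the shared libraries a wheel ships
import os
import cv2  # provided by the opencv-python wheel

opencv_dir = os.path.dirname(cv2.__file__)  # .../site-packages/cv2
print(opencv_dir)
# pass this directory to the linker (-L...); for the runtime search
# path, prefer an $ORIGIN-relative rpath, since the isolated build
# environment's absolute path differs from the user's environment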

Is there a way to include a brew installation in a Python build?

I'm trying to build a speech-recognition application that works in a browser. Currently I'm using Pyodide with a web worker. I have my own package, built alongside PyAudio, that I use for the web worker.
Is there a way that I can include a brew installation, specifically portaudio, in my Python package, so that when I build the package, portaudio is included in the wheel file? I need portaudio included for this to work in the browser.
Thank you!
I'm understanding two different questions here, so I'll try to answer them both.
Can I have a Python build fetch a Homebrew package at build time?
To my knowledge, the answer is no. The Python distribution system is separate from Homebrew, and they can't interact in this fashion.
Even if they could, this wouldn't necessarily be desirable:
What happens if the user isn't on macOS (or Linux)? Then the build would fail.
The prefix Homebrew installs the package into isn't very deterministic. The user might be using a custom prefix, or they might be on Apple Silicon (which has a different default prefix from Intel).
Your Python package might run into difficulty locating the installed files.
What if they don't have Homebrew installed? They might use another package manager like MacPorts or Fink, or none at all.
Can I bundle portaudio into the build distribution?
Maybe? Even if you could, I almost certainly wouldn't recommend it.
Bundling dependencies increases the size of the distribution unnecessarily.
It would take a reasonable amount of effort to set up, assuming you can do it at all.
All these reasons are why the majority of projects with a similar setup recommend installing certain packages with the system package manager first, before building the Python source code.
This lets users choose whatever package manager they have installed, and it should also be a quick and painless process.
Therefore, just change your installation instructions to the following:
# On macOS
brew install portaudio
pip install ...

What are ways of using external modules and libraries in Python?

The book "Learning Python" by Mark Lutz states that in Python one can use various types of external modules, which include .py files, .zip archives, C/C++ compiled libraries and others. My question is, how does one usually handle installation of each type of module?
For example, I know that to use a .py module, I simply need to locate it with import. What about something like .dll or .a? Or for example, I found an interesting library on GitHub, which has no installation manual. How do I know which files to import?
Also, are there any ways of installing modules besides pip?
The answer depends on what you want to do.
You can use pybind11, for example, for C++ modules and Cython for C, and there are packages for binding almost any type of compiled code.
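For a plain compiled library such as a .dll or .so, the standard library's ctypes can load it directly without any binding generator; static .a archives cannot be loaded this way and have to be linked into an extension at build time. A minimal sketch (the library name and function are hypothetical):

import ctypes

# load a shared library and describe the signature of one function
lib = ctypes.CDLL("./libexample.so")  # "example.dll" on Windows
lib.add.argtypes = (ctypes.c_int, ctypes.c_int)
lib.add.restype = ctypes.c_int
print(lib.add(2, 3))  # -> 5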
You can install packages via pip from the PyPI package repository, or from cloned repositories that have a setup.py file inside.
Any other Python-based repository can be used either through a custom build script (which the project provides) or by directly importing the relevant Python files. This will require you to dive into the code and check which files are relevant, as in the sketch below.
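For example, a single module can be imported straight from a cloned repository (the path and module name are illustrative):

import importlib.util

# load mylib.py from an arbitrary location without installing it
spec = importlib.util.spec_from_file_location("mylib", "/path/to/repo/mylib.py")
mylib = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mylib)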
Also, are there any ways of installing modules besides pip?
Yes. According to Installing Python Modules (Legacy version), modules packaged with distutils should be downloaded, unpacked, and the command
python setup.py install
or similar should be run. Beware that “The entire distutils package has been deprecated and will be removed in Python 3.12.”

Building NumPy on RedHat

I installed a local version of Python 2.7 in my home directory (Red Hat Linux) under ~/opt using the --prefix flag.
More specifically, the Python binary was placed in ~/opt/bin.
Now I want to install NumPy, but I am not really sure how to achieve this. All I found in INSTALL.txt and the online documentation was the command for selecting the compiler.
I tried gfortran, and it worked without any error message:
python setup.py build --fcompiler=gnu95
However, I am not sure how to install it for my local version of Python.
Also, I have to admit that I don't really understand how this whole approach works in general. E.g., what is setup.py build doing? Is it creating module files that I have to move to a specific folder?
I hope someone can give me some help here. I would also appreciate a few lines of information on how this approach works, or some resources where I can read up on it (I didn't find anything on the NumPy pages).
Your local version of Python should keep all of its files somewhere in ~/opt (presumably). As long as this is the Python installation that gets used when you issue the command
python setup.py build --fcompiler=gnu95
you should be all set, because the sys module contains constants (such as sys.prefix) that the setup script uses to determine where to put the modules once they are built.
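You can check those locations yourself; a quick illustration using only the standard library:

import sys
import sysconfig

print(sys.prefix)                     # e.g. /home/you/opt for a --prefix install
print(sysconfig.get_path("platlib"))  # where compiled modules are installed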
So: running python setup.py build issues all of the necessary commands to build the module (compiling the C/Fortran code into shared-object libraries that Python can load dynamically, and copying the pure Python code to create the proper directory structure). The module is actually built somewhere in the build subdirectory, which gets created during the process if it doesn't already exist. Once the library has been built successfully, installing it should be as simple as:
python setup.py install
(You might need sudo if you don't have write permission for the install directory; for a --prefix install under your home directory you shouldn't.)
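If a different interpreter comes first on your PATH, you can also point the install at your local tree explicitly; --prefix is a standard distutils option (the path is illustrative):
python setup.py install --prefix=$HOME/opt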

Reasons to use distutils when packaging C/Python project

I have an open-source project containing both Python and C code. I'm wondering whether distutils is of any use to me, because I'm planning to make an Ubuntu/Debian package. The C code is not something that I could or want to use as a Python extension. The C and Python programs communicate over TCP/IP through localhost.
So the bottom line here: while I'm learning packaging, will distutils-specific files only make me more confused, since I can't use my C code as Python extensions? Or should I split my C and Python functionality into separate projects to understand the packaging concepts better?
distutils can be used to install end-user programs, but it's most useful for Python libraries, as it can create source packages and also install them in the correct place. For that I would say it's more or less required.
But for an end-user Python program you can also use make or whatever you like and are used to, since you don't need to install any code into the Python site-packages directory, you don't need to put your code on PyPI, and it doesn't need to be accessible from other Python code.
I don't think distutils will be any more or less complicated to use for installing an end-user program than other tools. All such install/packaging tools are hella-complex, as Cartman would have said.
Because it uses a unified python setup.py install command? distutils, or setuptools? Whatever, just use one of those.
For development it's also really useful, because you don't have to care about where to find such-and-such dependency. As long as it's standard Python or basic system-library stuff, setup.py should find it for you. With setup.py you no longer need ./configure steps or ugly autotools generating huge Makefiles. It just works (tm).
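For reference, a minimal setup.py along these lines for an end-user program might look like this (all names are illustrative):

from setuptools import setup, find_packages

setup(
    name="myprogram",  # hypothetical project name
    version="0.1",
    packages=find_packages(),
    entry_points={
        "console_scripts": ["myprogram = myprogram.cli:main"],
    },
)

After that, python setup.py install (or pip install .) puts the package and its console script into the active environment.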
