I have a Python package myapp which depends on a Python package theirapp.
theirapp is used by others and may update occasionally, but it is not hosted on PyPI.
I currently have my repository set up like this:
my-app/
    myapp/
        __init__.py
    requirements.txt
    their-app/
        setup.py
        theirapp/
            __init__.py
My requirements.txt file contains the following line (among others):
./their-app/
their-app is not hosted on PyPI, but I want to make sure the latest version is installed. Up to this point I have been downloading a zip file containing my-app, typing pip install -U -r requirements.txt, and using the application manually.
I would like to make an installable Python package. Ideally I would like to download a my-app.zip file and type pip install my-app.zip to install myapp, theirapp and any other dependencies.
Is this possible? If not, what is the best way to handle this scenario?
You may just need to bundle theirapp as part of your project and import it as myapp.contrib.theirapp. If both projects are versioned in git you can implement this as a submodule, but that may increase complexity for maintainers.
How pip handles a similar problem:
https://github.com/pypa/pip/tree/develop/pip/_vendor
You can see pip imports bundled vendor packages as pip._vendor.theirapp.
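A similar vendored layout for your project could look like this (a sketch; the contrib name and the copied path are conventions, not requirements):

my-app/
    myapp/
        __init__.py
        contrib/
            __init__.py
            theirapp/        # copied from their-app/theirapp
                __init__.py

Code in myapp can then use:

from myapp.contrib import theirapp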
I have a Python package fmdt-python that anyone can install with pip install fmdt-python. I want to configure this package so that I can call import fmdt anywhere. Despite my best efforts, after successfully installing fmdt-python, Python can't actually find the package fmdt. How do I configure the pyproject.toml of my PyPI project fmdt-python so that it can be imported as fmdt in Python?
For reference, the PyPI package ffmpeg-python is imported in Python as ffmpeg. We can inspect the local path pip uses to install packages to see that there is a long, versioned name of the package alongside a shorter name used in the import statement:
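(illustrative listing; exact version elided)

ffmpeg/                               # short name used by "import ffmpeg"
ffmpeg_python-<version>.dist-info/    # long, versioned distribution name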
but for my package fmdt-python pip only installs the directory with the long name:
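fmdt_python-0.0.12.dist-info/         # (illustrative) no fmdt/ directory alongside it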
I would like to configure my package so that pip installs the proper fmdt folder alongside fmdt_python-0.0.12.dist-info.
I am using hatchling as the build system and a pyproject.toml file to configure this package. For reference, here's the GitHub repository of the package and this is the PyPI index.
The "problem" with my directory structure is that I had a python package fmdt in which I placed all of the configuration files like pyproject.toml, LICENCE, setup.py, etc.
Rearranging the structure to something like the following (only the placement of the config files matters; the top-level folder name is illustrative):
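fmdt-python/            # repository root
    pyproject.toml
    setup.py
    LICENCE
    fmdt/
        __init__.py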
with the config files outside of the fmdt folder, I was able to configure my build to make the fmdt package available for import when installing the PyPI distribution fmdt-python.
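If the build backend doesn't pick the package up automatically, hatchling also lets you name it explicitly in pyproject.toml (a sketch using hatchling's wheel-target table):

[tool.hatch.build.targets.wheel]
packages = ["fmdt"]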
Whilst in some_other_package, I am importing files from the snnalgorithms pip package. I received the error:
No module named 'src.snnalgorithms'. This is a valid error, because the src.snnalgorithms module does not exist in the some_other_package project from which I was calling the pip package.
As a solution, I can make all the imports in the snnalgorithms pip package relative to itself. Instead of:
from src.snnalgorithms.population.DUMMY import DUMMY
One could write:
from snnalgorithms.population.DUMMY import DUMMY
However, that implies that each time I want to run the code to briefly verify a minor change, or run the tests after a change, I will have to:
Upload the changes into the pip package.
Re-install the pip package locally to reflect the changes.
This significantly slows down development. Hence I was wondering: are there more efficient solutions for this?
You can use editable mode for development:
pip install -e . # Install package locally
From the pip documentation:
Editable installs allow you to install your project without copying any files. Instead, the files in the development directory are added to Python’s import path. This approach is well suited for development and is also known as a “development installation”.
With an editable install, you only need to perform a re-installation if you change the project metadata (eg: version, what scripts need to be generated etc). You will still need to run build commands when you need to perform a compilation for non-Python code in the project (eg: C extensions).
Example
If you have a project some_other_package from which you call the pip package snnalgorithms, you can:
cd snnalgorithms
pip install -e .
cd ..
cd some_other_package
python -m src.some_other_package
Assuming you use the same conda environment for both packages, both packages will then be able to use your newest changes that have not even been published to pypi.org yet.
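For example (a hypothetical edit, reusing the module names from the question): change something in snnalgorithms' DUMMY module and, without reinstalling anything, run your other project again:

from snnalgorithms.population.DUMMY import DUMMY  # the edit is picked up immediately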
Background
I was about to try a Python package downloaded from GitHub, and realized that it did not have a setup.py, so I could not install it with
pip install -e <folder>
Instead, the package had a pyproject.toml file, which seems to contain entries very similar to those a setup.py usually has.
What I found
Googling led me to PEP 518, which offers some criticism of setup.py in its Rationale section. However, it does not clearly state that the use of setup.py should be avoided, or that pyproject.toml as such completely replaces setup.py.
Questions
Is the pyproject.toml something that is used to replace setup.py? Or should a package come with both, a pyproject.toml and a setup.py?
How would one install a project with pyproject.toml in an editable state?
Yes, pyproject.toml is the specified file format of PEP 518 which contains the build system requirements of Python projects.
This solves the chicken-and-egg problem of build-tool dependencies: pip can read pyproject.toml to learn which version of setuptools or wheel a project needs before building it.
If you need a setup.py for an editable install, you could use a shim in setup.py:
#!/usr/bin/env python
import setuptools

if __name__ == "__main__":
    setuptools.setup()
pyproject.toml is the new unified Python project settings file that replaces setup.py.
With older tooling, editable installs still needed a minimal setup.py shim: import setuptools; setuptools.setup() (since pip 21.3 and PEP 660, described below, this is no longer required).
To use pyproject.toml, run python -m pip install .
Then, if the project uses poetry instead of pip, you can install its dependencies (on Windows they go into %USERPROFILE%\AppData\Local\pypoetry\Cache\virtualenvs) like this:
poetry install
And then run development tools like pytest:
poetry run pytest tests/
And pre-commit (uses .pre-commit-config.yaml):
poetry run pre-commit install
poetry run pre-commit run --all-files
What is it for?
Currently there are multiple popular packaging tools in the Python community, and while setuptools still seems to be prevalent, it is no longer a de facto standard. This situation creates a number of hassles for both end users and developers:
For setuptools-based packages, installation from source (or building a distribution) can fail if setuptools is not installed;
pip doesn't support installing packages based on other packaging tools from source, so those tools had to generate a setup.py file to produce a compatible package. To build a distribution package, one had to install the packaging tool first and then use tool-specific commands;
If a package author decides to change the packaging tool, workflows must be changed as well to use different tool-specific commands.
pyproject.toml is a new configuration file introduced by PEP 517 and PEP 518 to solve these problems:
... think of the (rough) steps required to produce a built artifact for a project:
The source checkout of the project.
Installation of the build system.
Execute the build system.
This PEP [518] covers step #2. PEP 517 covers step #3 ...
Any tool can also extend this file with its own section (table) to accept tool-specific options, but it's up to them and not required.
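For example, black keeps its settings in its own table (a sketch; line-length and target-version are real black options):

[tool.black]
line-length = 88
target-version = ["py311"]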
PEP 621 suggests using pyproject.toml to specify the package's core metadata in a static, tool-agnostic way. Which backends currently support this is shown in the following table:
Backend        Supported since
enscons        0.26.0+
flit_core      3.2+
hatchling      0.3+
pdm-pep517     0.3.0+
poetry-core    Issue #3332 (not yet supported)
setuptools     61.0.0+
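A minimal PEP 621 metadata table looks like this (a sketch):

[project]
name = "myapp"
version = "0.1.0"
dependencies = ["requests"]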
Does it replace setup.py?
For setuptools-based packages pyproject.toml is not strictly meant to replace setup.py, but rather to ensure its correct execution if it's still needed. For other packaging tools – yes, it is:
Where the build-backend key exists, this takes precedence and the source tree follows the format and conventions of the specified backend (as such no setup.py is needed unless the backend requires it). Projects may still wish to include a setup.py for compatibility with tools that do not use this spec.
How to install a package in editable mode?
Originally "editable install" was a setuptools-specific feature and as such it was not supported by PEP 517. Later on PEP 660 extended this concept to packages using pyproject.toml.
There are two possible routes to installing a package in editable mode using pip:
Modern:
Both the frontend (pip) and a backend must support PEP 660.
pip supports it since version 21.3;
Legacy:
The packaging tool must provide a setup.py file which supports the develop command.
Since version 21.1, pip can also install packages in editable mode using only a setup.cfg file.
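For the modern route it is enough that the declared backend supports PEP 660; a sketch using setuptools (which, per the table below, supports it from version 64):

[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"

after which pip install -e . works without a setup.py.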
The following table describes the support of editable installs by various backends:
Backend        Supported since
enscons        0.28.0+
flit_core      3.4+
hatchling      0.3+
pdm-pep517     0.8.0+
poetry-core    1.0.8+
setuptools     64.0.0+
Answering this part only, as the rest has nicely been explained by others:
How would one install a project with pyproject.toml in an editable state?
Solution
Since the release of poetry-core v1.0.8 in Feb 2022 you can do this:
a) you need this entry in your pyproject.toml:
[build-system]
requires = ["poetry-core>=1.0.8"]
build-backend = "poetry.core.masonry.api"
b) run:
pip install -e .
Sources
https://github.com/python-poetry/poetry/issues/34#issuecomment-1054626460
pyproject.toml can declare the files in your Python package and all the metadata for it that will show on PyPI.
A tool like flit can process the pyproject.toml file into a package that can be uploaded to PyPI or installed with pip.
Other tools use pyproject.toml for other purposes. For example, pytest stores information about where to find and how to run tests, and instructions to pytest about modifying pythonpath (sys.path) before running the tests. Many IDEs can use this to help developers conveniently run tests.
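A sketch of such a pytest table (testpaths and pythonpath are real pytest options; pythonpath requires pytest 7+):

[tool.pytest.ini_options]
testpaths = ["tests"]
pythonpath = ["src"]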
I have developed a Python package on GitHub that I released on PyPI. It installs with pip install PACKAGENAME, but does not do anything with the dependencies that are stated in the install_requires of the setup.py file.
Weirdly enough, the zip file of the associated release does install all dependencies. I tried with different virtual environments and on different computers, but it never installs the dependencies. Any help appreciated.
pip install pythutils downloads a wheel if one is available, and one is available for your package.
When generating a wheel, setuptools runs python setup.py locally but doesn't include setup.py in the wheel. Download your wheel file and unzip it (it's just a zip archive): inside are your main package directory pythutils and a metadata directory pythutils-1.1.1.dist-info. In the metadata directory there is a file METADATA that usually lists static dependencies, but your file doesn't list any, because when you generated the wheel all your dependencies had already been installed, so all your dynamic code paths were skipped.
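You can check this yourself (the wheel file name here is an assumption; adjust it to the actual name):

unzip pythutils-1.1.1-py3-none-any.whl -d /tmp/wheel
grep Requires-Dist /tmp/wheel/pythutils-1.1.1.dist-info/METADATA
# static dependencies show up as "Requires-Dist: <name>" lines; here there are none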
The archive that you downloaded from the GitHub release installs dependencies because it's not a wheel, so pip runs python setup.py install and your dynamic dependencies work.
What can you do? My advice is to avoid dynamic dependencies. Declare static dependencies and allow pip to decide what versions to install:
install_requires=[
    'numpy==1.16.5; python_version>="2" and python_version<"3"',
    'numpy; python_version>="3"',
],
Another approach would be to create version-specific wheel files, one for Python 2 and another for Python 3, with fixed dependencies.
Yet another approach is to not publish wheels at all and only publish an sdist (source distribution). Then pip is forced to run python setup.py install on the target machine. That is not the best approach, and it will certainly be problematic for packages with C extensions (the user must have a compiler and developer tools to install from source).
Your setup.py does a series of checks like
import sys  # needed for the version check below

install_requires = []
try:
    import numpy
except ImportError:
    if sys.version_info[0] == 2:
        install_requires.append('numpy==1.16.5')
    if sys.version_info[0] == 3:
        install_requires.append("numpy")
Presumably the system where you ran it had all the required modules already installed, and so ended up with an empty list in install_requires. But this is the wrong way to do it anyway; you should simply make a static list (or two static lists, one each for Python 2 and Python 3 if you really want to support both in the same package).
I have a Python project where I am using the maskrcnn-benchmark project from Facebook Research. The problem is that the setup file for the Facebook project depends on PyTorch, i.e. the setup file has an import line like:
import torch
So I need to have PyTorch pre-installed, and this is causing me some problems. For me, the cleanest solution would be to prebuild the maskrcnn-benchmark project as a wheel with all its dependencies, like PyTorch, and then add this wheel as a requirement in my setup.py file.
However, I could not find an easy way to do so. Is there some way to add a wheel file as an install_requires step in the setup file of a Python project?
The maskrcnn-benchmark project should have torch==1.0.1 (or whichever version) in install_requires= (along with any other requirements).
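That is, something like this in maskrcnn-benchmark's setup.py (a sketch):

from setuptools import setup, find_packages

setup(
    name="maskrcnn-benchmark",
    packages=find_packages(),
    install_requires=["torch==1.0.1"],  # plus any other requirements
)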
Then, you can use
pip wheel . --wheel-dir /tmp/deps
to have pip gather up the wheels (for your current architecture!) in /tmp/deps. Then, to install the dependencies from the wheel dir,
pip install --find-links=/tmp/deps -e .
This technique works for other target types too, like -r requirements.txt.
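For example, with a requirements file (same two steps, different target):

pip wheel -r requirements.txt --wheel-dir /tmp/deps
pip install --find-links=/tmp/deps -r requirements.txt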
EDIT: If you also want to build a wheel for the project itself, that'd be python setup.py bdist_wheel, but that won't look for dependencies.