python setup.py install ignores install_requires

I am unable to install the local packages using setup.py
Here is the project structure:
my-project/
    lib/
        local1/
            local1.1.0.whl
            index.html
        local2/
            local2.1.0.whl
            index.html
    setup.py
setup.py:
import os
from setuptools import setup

setup(
    name='my project',
    version='1.0',
    description='my project',
    install_requires=[
        'lxml >= 4.3.0',
        'local1 @ file://localhost/{}/lib/local1/local1.1.0.whl'.format(os.getcwd()),
        'local2 @ file://localhost/{}/lib/local2/local2.1.0.whl'.format(os.getcwd()),
    ],
)
I can install if I put the dependencies in a requirements.txt file and use pip install -r requirements.txt --extra-index-url lib/, but I want to know why it is not possible to do python setup.py install, or whether I am missing something.
This is the error that I get -
No local packages or working download links found for local2@ file://localhost//Users/anusha/Documents/my-project/lib/local2/local2.1.0.whl
error: Could not find suitable distribution for Requirement.parse('local2@ file://localhost//Users/anusha/Documents/my-project/lib/local2/local2.1.0.whl')
On searching, I found this issue on GitHub, but it does not give me any pointers or a solution as to how it worked.
Any help is welcome, thanks in advance!

Note this comment from pganssle in the discussion "Setuptools install fails with PEP508 URLs" in setuptools's issue tracker:
Our policy to date has been that if using pip install fixes your problem, you should use pip install and we won't fix the issue.
I believe this is in line with the current evolution of the packaging tools and techniques in the Python community. So if your setuptools-based project with this requirement notation can be installed via pip install . and pip install --editable ., then look no further.
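For illustration, here is a minimal sketch (not the original poster's code) of how the question's local-wheel dependency can be written as a PEP 508 direct reference that pip install . will resolve; pathlib builds the file:// URL portably instead of string formatting. The wheel path is taken from the question as-is:
import pathlib
from setuptools import setup

# Locate the bundled wheel relative to this setup.py and convert it to an
# absolute file:// URL (Path.as_uri() requires an absolute path).
HERE = pathlib.Path(__file__).parent.resolve()
LOCAL1_WHEEL = HERE / "lib" / "local1" / "local1.1.0.whl"

setup(
    name='my project',
    version='1.0',
    install_requires=[
        'lxml >= 4.3.0',
        # PEP 508 direct reference: "<name> @ <url>"
        'local1 @ ' + LOCAL1_WHEEL.as_uri(),
    ],
)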
A more complete (exhaustive) article on the topic:
Paul Ganssle's "Why you shouldn't invoke setup.py directly"

Related

Use setuptools to Install a Python package from a private Gitlab package repository

I created a private package for my employer. Since I'm forbidden to upload it to PyPI (it's proprietary), I uploaded it to the package registry for my project on our private GitLab instance. I can install it manually with:
$ pip install my-package --extra-index-url https://__token__:my-token-xxx@gitlab.company-domain.com/api/v4/projects/123456/packages/pypi/simple
Now I also want setuptools to be able to find it when listed in the install_requires argument to setup(). I tried:
setup(
    install_requires=[
        f"my-package @ https://__token__:{API_TOKEN}@gitlab.company-domain.com/api/v4/projects/123456/packages/pypi/simple",
        ...
    ],
    ...
pip install -e . results in
ERROR: HTTP error 404 while getting https://__token__:****@gitlab.company-domain.com/api/v4/projects/123456/packages/pypi/simple
This is different from
my-package @ git+https://user:password@gitlab.company-domain.com/..../my-package.git
That works, but I want to be able to download it as a pre-built wheel.
I'm not sure whether this is a setuptools issue or a GitLab issue. The 404 response suggests it might be a GitLab issue, yet the same URI works perfectly when used with the pip install CLI command.
This question is similar to Include python packages from gitlab's package registry and other external indexes directly into setup.py dependencies, but I don't think that one got a sufficient response. I posted the same question to discuss.python.org, but that discussion is old and I think I might get a quicker response here.
I also found this response to a similar question, which wasn't encouraging. It recommends Poetry or Pipenv. I've tried both, and found each to be excruciatingly slow when resolving dependencies, so I fell back on setuptools.
Only include the bare package name in install_requires. Then configure your (extra) index URL in your pip configuration: an environment variable (PIP_EXTRA_INDEX_URL), pip.conf, or a CLI argument. Then using pip install as normal will work.
For example:
In setup.py:
# ...
install_requires=[
    'my-package-name',
    # ...
],
# ...
Then the install command (assuming the environment variable API_TOKEN exists):
GITLAB_INDEX="https://__token__:${API_TOKEN}@gitlab.company-domain.com/api/v4/projects/123456/packages/pypi/simple"
pip install --extra-index-url "${GITLAB_INDEX}" -e .

setup.py file using requirements.txt

I've read a discussion where the suggestion was to read requirements.txt inside the setup.py file, to ensure the same installation across multiple deployments without having to maintain both a requirements.txt and the list in setup.py.
However, when I try to install via pip install -e ., I get an error:
Obtaining file:///Users/myuser/Documents/myproject
Processing /home/ktietz/src/ci/alabaster_1611921544520/work
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory:
'/System/Volumes/Data/home/ktietz/src/ci/alabaster_1611921544520/work'
It looks like pip is trying to find packages that are available on PyPI (alabaster) on my local machine. Why? What am I missing here? Why isn't pip looking for the required packages on the PyPI server?
I have done it the other way around before, maintaining the setup file rather than the requirements file. For the requirements file, just save it as:
*
and for setup, do:
from distutils.core import setup
from setuptools import find_packages

try:
    from Module.version import __version__
except ModuleNotFoundError:
    # fall back to executing version.py directly when the package
    # cannot be imported yet (e.g. during a fresh build)
    exec(open("Module/version.py").read())

setup(
    name="Package Name",
    version=__version__,
    packages=find_packages(),
    package_data={p: ["*"] for p in find_packages()},
    url="",
    license="",
    install_requires=[
        "numpy",
        "pandas",
    ],
    python_requires=">=3.8.0",
    author="First.Last",
    author_email="author@company.com",
    description="Description",
)
For reference, my version.py script looks like:
__build_number__ = "_LOCAL_"
__version__ = f"1.0.{__build_number__}"
Jenkins replaces the build number placeholder with a tag at build time.
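One caveat (a quick check using the third-party packaging library, not part of the answer above): the "_LOCAL_" placeholder makes the version string invalid under PEP 440 until Jenkins substitutes a real build number, so tooling that validates versions will reject a locally built distribution:
from packaging.version import Version, InvalidVersion

try:
    # the placeholder version produced by version.py before substitution
    Version("1.0._LOCAL_")
except InvalidVersion:
    print("not PEP 440-valid until the build number is replaced")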
This question consists of two separate questions, for the rather philosophical choice of how to arrange setup requirements is actually unrelated to the installation error that you are experiencing.
First, about the error: it looks like the project you are trying to install depends on another library (alabaster) of which you apparently also did an editable install using pip3 install -e ., pointing to this directory:
/home/ktietz/src/ci/alabaster_1611921544520/work
What the error tells you is that the directory where that install is supposed to be located no longer exists. You should only install your own project in editable mode; the dependencies should be installed normally into a regular site-packages directory, i.e. without the -e option.
To clean up, I would suggest that you do the following:
# clean up references to the broken editable install
pip3 uninstall alabaster
# now do a proper non-editable install
pip3 install alabaster
Concerning the question of how to arrange setup requirements, you should primarily use the install_requires and extras_require options of setuptools:
# either in setup.py
setuptools.setup(
    install_requires=[
        'dep1>=1.2',
        'dep2>=2.4.1',
    ],
)

# or in setup.cfg
[options]
install_requires =
    dep1>=1.2
    dep2>=2.4.1

[options.extras_require]
extra_deps_a =
    dep3
    dep4>=4.2.3
extra_deps_b =
    dep5>=5.2.1
Optional requirements can be organised in groups. To include such an extra group with the install, you can do pip3 install .[extra_deps_name].
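For completeness, here is the same extras_require declaration in setup.py form, mirroring the setup.cfg above (a sketch using the answer's placeholder dependency names):
import setuptools

setuptools.setup(
    install_requires=[
        'dep1>=1.2',
        'dep2>=2.4.1',
    ],
    extras_require={
        # each key is an installable group, e.g. pip3 install '.[extra_deps_a]'
        'extra_deps_a': ['dep3', 'dep4>=4.2.3'],
        'extra_deps_b': ['dep5>=5.2.1'],
    },
)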
If you wish to define specific dependency environments with exact versions (e.g. for continuous integration), you may use requirements.txt files in addition, but the general dependency and version-constraint definitions should be done in setup.cfg or setup.py.
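For example, a CI-specific requirements file might pin exact versions while the general constraints stay in the package metadata (a sketch; the file name and pins are illustrative):
# requirements-ci.txt (hypothetical)
# install the project itself plus the "extra_deps_a" group ...
-e .[extra_deps_a]
# ... and pin exact dependency versions for reproducible CI runs
dep1==1.2.0
dep2==2.4.1
dep3==3.0.0
dep4==4.2.3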

pip and tox ignore full path dependencies, instead look for "best match" in pypi

This is an extension of SO setup.py ignores full path dependencies, instead looks for "best match" in pypi
I am trying to write setup.py to install a proprietary package from a .tar.gz file on an internal web site. Unfortunately for me, the proprietary package's name duplicates a public package on the public PyPI, so I need to force installation of the proprietary package at a specific version. I'm building a Docker image from a Debian-Buster base image, so pip, setuptools, and tox are all freshly installed; the image brings Python 3.8, and pip upgrades itself to version 21.2.4.
Solution 1 - dependency_links
I followed the instructions at the post linked above to put the prop package in install_requires and dependency_links. Here are the relevant lines from my setup.py:
install_requires=["requests", "proppkg==70.1.0"],
dependency_links=["https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0"]
Installation is successful in Debian-Buster if I run python3 setup.py install in my package directory. I see the proprietary package get downloaded and installed.
Installation fails if I run pip3 install .; tox (version 3.24.4) fails similarly. In both cases, pip shows a message "Looking in indexes" and then fails with "ERROR: Could not find a version that satisfies the requirement".
Solution 2 - PEP 508
Studying the SO answer pip ignores dependency_links in setup.py, which states that dependency_links is deprecated, I started over and revised setup.py to have:
install_requires=[
    "requests",
    "proppkg @ https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0"
],
Installation is successful in Debian-Buster if I run pip3 install . in my package directory. Pip shows a message "Looking in indexes" but still downloads and installs the proprietary package successfully.
Installation fails in Debian-Buster if I run python3 setup.py install in my package directory. I see these messages:
Searching for proppkg@ https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0
..
Reading https://pypi.org/simple/proppkg/
..
error: Could not find suitable distribution for Requirement.parse(...).
Tox also fails in this scenario as it installs dependencies.
Really speculating now, it almost seems like there's an ordering issue. Tox invokes pip like this:
python -m pip install --exists-action w .tox/.tmp/package/1/te-0.3.5.zip
In that output I see "Collecting proppkg@ https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0" as the first step. That install fails because it cannot import the requests package. Tox then continues collecting the other dependencies and, as its last step, reports "Collecting requests" (which succeeds). Do I have to worry about the ordering of install steps?
I'm starting to think that maybe the proprietary package is broken. I verified that its setup.py has requests in its install_requires entry. Not sure what else to check.
Workaround solution
My workaround is to install the proprietary package in the Docker image as a separate step before installing my own package, simply by running pip3 install https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz. My setup.py keeps the PEP 508 URL in install_requires; pip and tox then find the proprietary package in the pip cache and work fine.
Please suggest what to try for the latest pip and tox, or if this is as good as it gets, thanks in advance.
Update - add setup.py
Here's a (slightly sanitized) version of my package's setup.py:
from setuptools import setup, find_packages

def get_version():
    """
    read version string
    """
    version_globals = {}
    with open("te/version.py") as fp:
        exec(fp.read(), version_globals)
    return version_globals['__version__']

setup(
    name="te",
    version=get_version(),
    packages=find_packages(exclude=["tests.*", "tests"]),
    author="My Name",
    author_email="email@mycompany.com",
    description="My Back-End Server",
    entry_points={"console_scripts": [
        "te-be=te.server:main"
    ]},
    python_requires=">=3.7",
    install_requires=["connexion[swagger-ui]",
                      "Flask",
                      "gevent",
                      "redis",
                      "requests",
                      "proppkg @ https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0"
                      ],
    package_data={"te": ["openapi_te.yml"]},
    include_package_data=True,  # read MANIFEST.in
)

How to get PyPI to automatically install dependencies [duplicate]

This question already has answers here:
Pip install from pypi works, but from testpypi fails (cannot find requirements)
(2 answers)
Closed 2 years ago.
How can I publish a package on PyPI such that all dependencies are automatically installed, rather than manually by the user?
I specify the dependencies in setup.py with install_requires as follows:
setuptools.setup(
    name='myPackage',
    version='1.0',
    packages=setuptools.find_packages(),
    include_package_data=True,
    classifiers=[
        'Programming Language :: Python :: 3',
        'Operating System :: OS Independent',
        'Topic :: Scientific/Engineering :: Bio-Informatics',
    ],
    install_requires=['numpy', 'pandas', 'sklearn'],
    python_requires='>=3',
)
And I have a requirements.txt file which is included in my MANIFEST.in:
numpy==1.15.4
sklearn==0.20.1
pandas==0.23.4
However, after publishing on test.pypi, when I try to install the package I get the following error:
Could not find a version that satisfies the requirement numpy (from myPackage==1.0.0) (from versions: )
No matching distribution found for sklearn (from myPackage==1.0.0)
This means that PyPI does not install the numpy dependency.
How do I enable automatic installation of my dependencies?
Should I use a virtual environment when building and publishing the package? How do I do this?
P.S. I am entirely new to this so I will appreciate explicit code or links to simple tutorial pages. Thank you.
You can specify multiple indexes via --extra-index-url. Point it at TestPyPI so your package is pulled from there, while the dependencies come from the default PyPI:
$ pip install myPackage --extra-index-url=https://test.pypi.org/simple/
However, the real root of the issue is that you have used the wrong distribution name for the scikit-learn package. Replace sklearn with scikit-learn:
setup(
    ...,
    install_requires=['numpy', 'pandas', 'scikit-learn'],
)
This is an unfortunate (and known) downside of TestPyPI: sklearn does not exist on TestPyPI, and by installing your package from there you are telling pip to look for the dependencies there as well.
Instead, you should publish to the real PyPI and use a pre-release version so as not to pollute your release numbering. You can delete these pre-releases from the project later.
I realized that installing packages from test.PyPI does not install all dependencies, since some of these packages are hosted on PyPI and not on test.PyPI.
When I published the package on PyPI as a pre-release version (1.0a1) instead of on test.PyPI, the dependencies were installed correctly. Hence, the problem was purely with test.PyPI.
This is my approach.
I like to use a requirements.txt file instead of putting dependencies in install_requires because it's easier during development to run:
$ pip install -r requirements.txt
To have pip install dependencies automatically, I include at the top of setup.py before setuptools.setup():
requirements = []
with open('requirements.txt', 'r') as fh:
    for line in fh:
        requirements.append(line.strip())
Then in setuptools.setup():
install_requires = requirements
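A slightly more defensive variant of the reading loop (a sketch, not the answer's original) skips blank lines and comments, which requirements files commonly contain and which would otherwise end up as bogus entries in install_requires:
requirements = []
with open('requirements.txt') as fh:
    for line in fh:
        line = line.strip()
        # ignore blank lines and comment lines
        if line and not line.startswith('#'):
            requirements.append(line)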
To install:
pip install --index-url https://test.pypi.org/simple/ --upgrade --no-cache-dir --extra-index-url=https://pypi.org/simple/ <<package name>>
--index-url tells pip to use the test version of PyPI.
--upgrade forces an upgrade if a previous version is installed.
--no-cache-dir resolves caching issues when doing a very quick re-release (otherwise pip doesn't pick up the new version).
--extra-index-url tells pip to look in the prod version of PyPI if it can't find the required package in test (i.e. it solves the problem of dependencies not being available in test).
Your install_requires should be of the form:
...
install_requires=["numpy==1.15.4",
                  "sklearn==0.20.1",
                  "pandas==0.23.4"]
...
You can also use >= instead of == to allow for more recent versions of those libraries.

Problems with installing package from dependency_links

Here is my setup.py:
setup(
...
install_requires=['GEDThriftStubs'],
dependency_links=['git+ssh://user#git.server.com/ged-thrift-stubs.git#egg=GEDThriftStubs'],
...)
Then I create the package:
python setup.py sdist
Then I try to install it:
pip install file://path/package-0.0.1.tar.gz
And I get this in the terminal:
Downloading/unpacking GEDThriftStubs (from package==0.0.1)
Could not find any downloads that satisfy the requirement GEDThriftStubs (from package==0.0.1)
No distributions at all found for GEDThriftStubs (from package==0.0.1)
And in pip.log, messages like this:
Skipping link git+ssh://user@git.server.com/ged-thrift-stubs.git#egg=GEDThriftStubs; wrong project name (not gedthriftstubs)
And I don't have that exact name "gedthriftstubs" anywhere in my project, if it matters.
But this works fine:
pip install git+ssh://user@git.server.com/ged-thrift-stubs.git#egg=GEDThriftStubs
Try:
$ pip install --process-dependency-links file://path/package-0.0.1.tar.gz
Note that this flag was removed from pip in pip 1.6. See this article on pip.pypa.io for more information.
Processing of dependency links was deprecated in pip 1.5 and removed completely in pip 1.6.
There's also a lengthy discussion (issue #1519) regarding pip and dependency links.
If that doesn't work, you may also need to add a version suffix to your link, like this:
git+ssh://user@git.server.com/ged-thrift-stubs.git#egg=GEDThriftStubs-0.0.1
where 0.0.1 is the version specified in the setup.py of ged-thrift-stubs.
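On recent pip versions, where dependency_links support has been removed entirely, the usual replacement is a PEP 508 direct reference inside install_requires (a sketch reusing the question's repository URL; pip install resolves it, python setup.py install does not):
from setuptools import setup

setup(
    # ... name, version, packages, etc. as before ...
    install_requires=[
        # the direct VCS reference replaces dependency_links
        'GEDThriftStubs @ git+ssh://user@git.server.com/ged-thrift-stubs.git',
    ],
)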
