configure setup.py to install a package unzipped - python

When I install pytz via setuptools, iterating over pytz.all_timezones takes multiple seconds. Someone suggested running pip unzip pytz, and that fixes the performance problem. Now I want to make setuptools install pytz uncompressed any time someone installs my package.
Can I configure setuptools to always unzip a particular dependency of my package?
$ virtualenv ve2.7
$ source ve2.7/bin/activate
(ve2.7)$ python setup.py install
(ve2.7)$ python slowpytz.py
2.62620520592s
(ve2.7)$ pip unzip pytz
DEPRECATION: 'pip zip' and 'pip unzip' are deprecated, and will be removed in a future release.
Unzipping pytz (in ./ve2.7/lib/python2.7/site-packages/pytz-2014.7-py2.7.egg)
(ve2.7)$ python slowpytz.py
0.0149159431458s
setup.py
from setuptools import setup
setup(name='slowpytz', version='0.0.1', install_requires=['pytz==2014.7'])
slowpytz.py
import pytz
import time
start = time.time()
zones = list(pytz.all_timezones)
print(str(time.time() - start) + 's')

There's no way that I know of to force unzipping of your dependencies in all cases. Some things that fall slightly short of that, but might still be useful:
You could submit a bug report for pytz to set zip_safe=False in its setup.py, using performance data as a justification for the change.
Failing that, you could fork pytz, add zip_safe=False (see the sketch after this list), and have your package depend on your fork. (Not a great option.)
You could recommend that users always install your package with pip, which always installs everything unzipped (including dependencies), rather than easy_install or python setup.py install.
If your users must use easy_install, you can recommend they use easy_install -Z, which forces unzipped installation.
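For reference, here is a minimal sketch of what the forked setup.py change would look like; the name and version simply mirror the transcript above, and everything else from the real pytz setup.py is omitted:
from setuptools import setup, find_packages

# Minimal sketch of a forked pytz setup.py; the only change that
# matters here is zip_safe=False.
setup(
    name='pytz',
    version='2014.7',
    packages=find_packages(),
    zip_safe=False,  # tell setuptools/easy_install to always install unzipped
)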

Related

pip and tox ignore full path dependencies, instead look for "best match" in pypi

This is an extension of SO setup.py ignores full path dependencies, instead looks for "best match" in pypi
I am trying to write setup.py to install a proprietary package from a .tar.gz file on an internal web site. Unfortunately for me, the proprietary package's name duplicates that of a public package on the public PyPI, so I need to force installation of the proprietary package at a specific version. I'm building a Docker image from a Debian-Buster base image, so pip, setuptools and tox are all freshly installed; the image brings Python 3.8, and pip upgrades itself to version 21.2.4.
Solution 1 - dependency_links
I followed the instructions at the post linked above to put the prop package in install_requires and dependency_links. Here are the relevant lines from my setup.py:
install_requires=["requests", "proppkg==70.1.0"],
dependency_links=["https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0"]
Installation is successful in Debian-Buster if I run python3 setup.py install in my package directory. I see the proprietary package get downloaded and installed.
Installation fails if I run pip3 install .; tox (version 3.24.4) fails similarly. In both cases, pip shows a message "Looking in indexes", then fails with "ERROR: Could not find a version that satisfies the requirement".
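(For context: pip only ever honored dependency_links behind an opt-in flag, and, to my knowledge, recent releases dropped the feature entirely around pip 19, so the failure on pip 21.2.4 is expected. On an old enough pip the opt-in looked roughly like this:)
$ pip install --process-dependency-links .   # old pip only; removed around pip 19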
Solution 2 - PEP 508
Studying the SO answer pip ignores dependency_links in setup.py, which states that dependency_links is deprecated, I started over and revised setup.py to have:
install_requires=[
    "requests",
    "proppkg @ https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0"
],
Installation is successful in Debian-Buster if I run pip3 install . in my package directory. Pip shows a message "Looking in indexes" but still downloads and installs the proprietary package successfully.
Installation fails in Debian-Buster if I run python3 setup.py install in my package directory. I see these messages:
Searching for proppkg@ https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0
..
Reading https://pypi.org/simple/proppkg/
..
error: Could not find suitable distribution for Requirement.parse(...).
Tox also fails in this scenario as it installs dependencies.
Really speculating now, it almost seems like there's an ordering issue. Tox invokes pip like this:
python -m pip install --exists-action w .tox/.tmp/package/1/te-0.3.5.zip
In that output I see "Collecting proppkg@ https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0" as the first step. That install fails because it cannot import the requests package. Then tox continues collecting other dependencies. Finally tox reports as its last step "Collecting requests" (and that succeeds). Do I have to worry about the ordering of install steps?
I'm starting to think that maybe the proprietary package is broken. I verified that the prop package setup.py has requests in its install_requires entry. Not sure what else to check.
Workaround solution
My workaround is to install the proprietary package in the Docker image as a separate step, before I install my own package, just by running pip3 install https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz. The setup.py keeps the PEP 508 URL in install_requires. Then pip and tox find the prop package in the pip cache, and work fine.
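In shell terms, the workaround boils down to two install steps (same URL as above):
$ pip3 install https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz
$ pip3 install .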
Please suggest what to try for the latest pip and tox, or if this is as good as it gets, thanks in advance.
Update - add setup.py
Here's a (slightly sanitized) version of my package's setup.py
from setuptools import setup, find_packages

def get_version():
    """
    read version string
    """
    version_globals = {}
    with open("te/version.py") as fp:
        exec(fp.read(), version_globals)
    return version_globals['__version__']

setup(
    name="te",
    version=get_version(),
    packages=find_packages(exclude=["tests.*", "tests"]),
    author="My Name",
    author_email="email@mycompany.com",
    description="My Back-End Server",
    entry_points={"console_scripts": [
        "te-be=te.server:main"
    ]},
    python_requires=">=3.7",
    install_requires=[
        "connexion[swagger-ui]",
        "Flask",
        "gevent",
        "redis",
        "requests",
        "proppkg @ https://site.mycompany.com/path/to/proppkg-70.1.0.tar.gz#egg=proppkg-70.1.0"
    ],
    package_data={"te": ["openapi_te.yml"]},
    include_package_data=True,  # read MANIFEST.in
)

Should setuptools be in the setup_requires entry of setup.cfg files?

The importlib_resources backport for Python < 3.7 of the importlib.resources standard library module has the following section in the setup.cfg file:
[options]
python_requires = >=2.7,!=3.0,!=3.1,!=3.2,!=3.3
setup_requires =
    setuptools
    wheel
install_requires =
    pathlib2; python_version < '3'
    typing; python_version < '3.5'
packages = find:
Why does setup_requires include setuptools? This does not seem to make sense since:
the first line of the setup.py file imports setuptools, so by the time the setup function is called and reads the setup.cfg file that instructs it to install setuptools, it is already too late to install setuptools:
from setuptools import setup
setup()
setuptools is already installed on any fresh Python installation (well, only tested on Windows 10 and MacOS 10.15 with Python 3.8.0):
$ python -V
Python 3.8.0
$ pip list
Package Version
---------- -------
pip 19.2.3
setuptools 41.2.0
WARNING: You are using pip version 19.2.3, however version 19.3.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
No, setuptools should not be included in setup_requires, according to PEP 518 (bold emphasis mine):
Setuptools tried to solve this with a setup_requires argument to its
setup() function [3]. This solution has a number of issues, such as:
No tooling (besides setuptools itself) can access this information without executing the setup.py, but setup.py can't be executed without having these items installed.
While setuptools itself will install anything listed in this, they won't be installed until during the execution of the setup() function, which means that the only way to actually use anything added here is through increasingly complex machinations that delay the import and usage of these modules until later on in the execution of the setup() function.
This cannot include setuptools itself nor can it include a replacement to setuptools, which means that projects such as numpy.distutils are largely incapable of utilizing it and projects cannot take advantage of newer setuptools features until their users naturally upgrade the version of setuptools to a newer one.
The items listed in setup_requires get implicitly installed whenever you execute the setup.py but one of the common ways that the setup.py is executed is via another tool, such as pip, who is already managing dependencies. This means that a command like pip install spam might end up having both pip and setuptools downloading and installing packages and end users needing to configure both tools (and for setuptools without being in control of the invocation) to change settings like which repository it installs from. It also means that users need to be aware of the discovery rules for both tools, as one may support different package formats or determine the latest version differently.
The accepted answer is mostly correct, but where PEP 518 says:
[The setup_requires mechanism] cannot include setuptools itself...
It's technically incorrect, and as importlib_resources demonstrates, it can actually include setuptools. The problem is that including setuptools in setup_requires serves mostly as documentation. It declares that setuptools is a build requirement (required to run setup.py), but it won't be capable of satisfying that requirement if it's not already satisfied.
But, the presence of setuptools in setup_requires is technically correct and does serve the purpose of declaring the requirement and asking setuptools to verify that the requirement is in fact installed (alongside other setup-time requirements).
It is, however, just a legacy artifact and doesn't provide much value, and as can be seen in the question and answers, it does lead to confusion. The recommended, proper approach is to use PEP 517 and 518 declarations and builders, but that part of the ecosystem hasn't matured yet, so setuptools vestiges will remain. Try not to let them bother you.
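For the record, the PEP 518 way to declare the same build requirements is a pyproject.toml next to setup.cfg; a minimal sketch:
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
Unlike setup_requires, this is read by pip before setup.py ever runs, so it really can install (or upgrade) setuptools itself.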
Why does setup_requires include setuptools? This does not seem to make sense
It does not make sense at all. On the other hand, it doesn't hamper anything, so why not?

How to declare build-time dependencies without breaking other packages?

I ran into a problem when installing a package which depended on python-daemon. I ultimately traced it to the latest version of the package python-daemon (2.0.3) released yesterday. Testing in a virtual environment on an Ubuntu 14.04 machine and issuing the following commands:
(venv) $ pip list
argparse (1.2.1)
pip (1.5.6)
setuptools (3.6)
wsgiref (0.1.2)
(venv) $ pip install redis
... works fine ....
(venv) $ pip install python-daemon
...
snip
...
File "/home/pwj/.virtualenvs/venv/local/lib/python2.7/site-packages/pkg_resources.py", line 2147, in load
['__name__'])
ImportError: No module named version
(venv)02:15 PM tmp$ pip list
argparse (1.2.1)
lockfile (0.10.2)
pip (1.5.6)
python-daemon (2.0.3)
setuptools (3.6)
wsgiref (0.1.2)
So the install of python-daemon seemed to work, but something affected pip or setuptools, because other packages (celery, flask) that I try to install with pip after this give me the same traceback:
...
snip
...
File "/home/pwj/.virtualenvs/venv/local/lib/python2.7/site-packages/pkg_resources.py", line 2147, in load
['__name__'])
ImportError: No module named version
If I uninstall python-daemon with pip, packages that weren't installing before now install fine. Has anyone else come across this, or something similar with a different project? My solution was to pip install the previous version
(venv) $ pip install python-daemon==2.0.2
... works ...
but was wondering what might be causing such an error.
(This behaviour is corrected in python-daemon version 2.0.4 and later.)
There are two sides to this:
Setuptools assumes it is the centre of everything.
Version 2.0.3 of python-daemon doesn't take that into account.
A more detailed explanation: There is some complex code using Docutils involved in the python-daemon build process, that isn't needed after install and isn't part of the library code.
It's too complex to leave in the un-importable (and therefore not-unit-testable) setup.py, so that build code is shunted to a separate testable module, version (in the file version.py), which itself uses Docutils.
But then the setup.py has a circular dependency: How to import version, when Docutils isn't yet installed? How to use Setuptools to ensure Docutils is installed, when running setup.py to completion will need version? All the feasible solutions are ugly and confusing.
The approach taken in ‘python-daemon’ 2.0.3 is to declare Docutils required for setup, and declare a Setuptools entry point for the work that needs version. That way setup.py gets to install Docutils before any of the entry points that will use version.
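A rough sketch of that pattern follows; this is not the actual python-daemon setup.py, and the entry-point group and function name here are illustrative:

from setuptools import setup

setup(
    name='python-daemon',
    setup_requires=['docutils'],  # installed by setuptools before entry points run
    entry_points={
        # hypothetical hook: an entry point that imports the local
        # version module (version.py), which in turn needs Docutils
        'distutils.setup_keywords': [
            'release_date = version:validate_release_date',
        ],
    },
)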
But now we come to the first point, that Setuptools arrogates itself as the centre of everything. By declaring an entry point, setup.py has modified every Setuptools action thereafter, and every package will fail if it can't find the entry points. And, since most of them don't have version or the specified functions in that module, they crash Setuptools.
What is essentially a bug to be fixed reveals a poorly-understood corner case in Setuptools. So I'm voting your question up.
There doesn't seem to be a good solution to this: having modules available for setup.py but ensuring requirements are met first. Setuptools assumes it is the only build system needed to satisfy all dependencies for everything, and when that assumption fails it's very difficult to get around.
Thanks to the Python Packaging Authority folks, and the distutils-sig forum, for explaining this to me.

Installing gevent in virtualenv

I am just starting with virtualenv, but I am trying to install gevent within a virtualenv environment (I am running Windows). When I use PIP from virtualenv, I get this error:
MyEnv>pip install gevent
Downloading/unpacking gevent
Running setup.py egg_info for package gevent
Please provide path to libevent source with --libevent DIR
The package index has MSIs and EXEs for installing on Windows (http://pypi.python.org/pypi/gevent/0.13.7), but I don't know how to install those into a virtualenv environment (or if that is even possible). When I try pip install gevent-0.13.7.win32-py2.7.exe from the virtualenv prompt, I get an error as well:
ValueError: ('Expected version spec in', 'D:\\Downloads\\gevent-0.13.7.win32-py2.7.exe', 'at', ':\\Downloads\\gevent-0.13.7.win32-py2.7.exe')
Does someone know how to do this?
Pip doesn't support installing binary packages yet. If you want to install from a binary package, you have to use easy_install: easy_install gevent-0.13.7.win32-py2.7.exe
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
Z:\>virtualenv z:\venv\gevent-install
New python executable in z:\venv\gevent-install\Scripts\python.exe
Installing distribute..................................................................................................
............................................................................................done.
Installing pip.................done.
Z:\>venv\gevent-install\Scripts\activate
(gevent-install) Z:\>easy_install c:\python\packages\gevent-0.13.7.win32-py2.7.exe
Processing gevent-0.13.7.win32-py2.7.exe
creating 'c:\docume~1\pdobro~1\ustawi~1\temp\easy_install-b5nj3i\gevent-0.13.7-py2.7-win32.egg' and adding 'c:\docume~1\pdobro~1\ustawi~1\temp\easy_install-b5nj3i\gevent-0.13.7-py2.7-win32.egg.tmp' to it
creating z:\venv\gevent-install\lib\site-packages\gevent-0.13.7-py2.7-win32.egg
Extracting gevent-0.13.7-py2.7-win32.egg to z:\venv\gevent-install\lib\site-packages
Adding gevent 0.13.7 to easy-install.pth file
Installed z:\venv\gevent-install\lib\site-packages\gevent-0.13.7-py2.7-win32.egg
Processing dependencies for gevent==0.13.7
Searching for greenlet
Reading http://pypi.python.org/simple/greenlet/
Reading http://bitbucket.org/ambroff/greenlet
Reading https://github.com/python-greenlet/greenlet
Best match: greenlet 0.3.4
Downloading http://pypi.python.org/packages/2.7/g/greenlet/greenlet-0.3.4-py2.7-win32.egg#md5=9941aa246358c586bb274812e130629
Processing greenlet-0.3.4-py2.7-win32.egg
creating z:\venv\gevent-install\lib\site-packages\greenlet-0.3.4-py2.7-win32.egg
Extracting greenlet-0.3.4-py2.7-win32.egg to z:\venv\gevent-install\lib\site-packages
Adding greenlet 0.3.4 to easy-install.pth file
Installed z:\venv\gevent-install\lib\site-packages\greenlet-0.3.4-py2.7-win32.egg
Finished processing dependencies for gevent==0.13.7
(gevent-install) Z:\>
See Can I install Python windows packages into virtualenvs? Another option is to install from source; you can do this with pip, but it requires setting up a compiler and build environment, which is much harder than the simple command above.
From the error message, it would appear you need libevent source code. I would imagine you need to go a step further and compile/install libevent system-wide so pip can find it.
I would start by downloading the latest stable source from http://libevent.org/.
Compile and install it using instructions in the README: https://github.com/libevent/libevent#readme
To compile it on Windows, you'll need to use GNU-style build utilities like make and autoconf. I recommend http://www.mingw.org/.
Once you've installed libevent system-wide, I imagine pip will find it and proceed with gevent installation.
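If gevent's setup.py really does accept the --libevent flag shown in the error message, you may be able to pass the path through pip's old --install-option mechanism; this is an untested assumption on my part, with an illustrative path:
$ pip install gevent --install-option="--libevent=C:\libevent"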
In the MSI for gevent-0.13.7 there's an option to select an alternate installation point. Point it to the root dir of your particular virtual environment (just above where /Lib and /Scripts are located). That should install it correctly.
You also need to make sure greenlet is installed. For that you can use Piotr's suggested method with easy_install on the .exe.

pypi UserWarning: Unknown distribution option: 'install_requires'

Does anybody encounter this warning when executing python setup.py install of a PyPI package?
install_requires defines what the package requires. A lot of PyPI packages have this option. How can it be an "unknown distribution option"?
python setup.py uses distutils, which doesn't support install_requires. setuptools does, as does distribute (its successor), and pip (which uses either). But you actually have to use them, i.e. call setuptools through the easy_install command or pip install.
Another way is to import setup from setuptools in your setup.py, but this is not standard and makes everybody wanting to use your package have to have setuptools installed.
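A minimal sketch of that approach (the project name and requirement are illustrative):

from setuptools import setup  # instead of distutils.core

setup(
    name='example',
    version='0.1',
    install_requires=['pytz'],  # now a recognized option, no UserWarning
)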
This was the first result on my google search, but had no answer.
I found that upgrading setuptools resolved the issue for me (and pip for good measure)
pip install --upgrade pip
pip install --upgrade setuptools
Hope this helps the next person to find this link!
I'm on a Mac with python 2.7.11. I have been toying with creating extremely simple and straightforward projects, where my only requirement is that I can run python setup.py install, and have setup.py use the setup command, ideally from distutils. There are literally no other imports or code aside from the kwargs to setup() other than what I note here.
I get the error when the imports for my setup.py file are:
from distutils.core import setup
When I use this, I get warnings such as
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'entry_points'
warnings.warn(msg)
If I change the imports (and nothing else) to the following:
from distutils.core import setup
import setuptools # noqa
The warnings go away.
Note that I am not using setuptools, just importing it changes the behavior such that it no longer emits the warnings. For me, this is the cause of a truly baffling difference where some projects I'm using give those warnings, and some others do not.
Clearly, some form of monkey-patching is going on, and it is affected by whether or not that import is done. This probably isn't the situation for everyone researching this problem, but for the narrow environment in which I'm working, this is the answer I was looking for.
This is consistent with the other (community) comment, which says that setuptools should monkey-patch distutils, and that they had the problem when installing Ansible. Ansible appears to have tried to allow installs without having setuptools in the past, and then went back on that.
https://github.com/ansible/ansible/blob/devel/setup.py
A lot of stuff is up in the air... but if you're looking for a simple answer for a simple project, you should probably just import setuptools.
ATTENTION! ATTENTION! Imperfect answer ahead. To get the "latest memo" on the state of packaging in the Python universe, read this fairly detailed essay.
I have just ran into this problem when trying to build/install ansible. The problem seems to be that distutils really doesn't support install_requires. Setuptools should monkey-patch distutils on-the-fly, but it doesn't, probably because the last release of setuptools is 0.6c11 from 2009, whereas distutils is a core Python project.
So even after manually installing the setuptools-0.6c11-py2.7.egg running setup.py only picks up distutils dist.py, and not the one from site-packages/setuptools/.
Also, the setuptools documentation hints at using ez_setup rather than distutils.
However, setuptools is itself provided by distribute nowadays, and that flavor of setup() supports install_requires.
This is a warning from distutils, and is a sign that you do not have setuptools installed.
Installing it from http://pypi.python.org/pypi/setuptools will remove the warning.
In conclusion:
distutils doesn't support install_requires or entry_points; setuptools does.
Change from distutils.core import setup in setup.py to from setuptools import setup, or refactor your setup.py to use only distutils features.
I came here because I hadn't realized entry_points was only a setuptools feature.
If you are here wanting to convert setuptools to distutils like me:
remove install_requires from setup.py and just use requirements.txt with pip
change entry_points to scripts (doc) and refactor any modules relying on entry_points to be full scripts with shebangs and an entry point (a sketch follows below).
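A rough sketch of that second point, with a hypothetical package mypkg and console script mytool:

# setup.py, distutils only
from distutils.core import setup

setup(
    name='mypkg',
    version='0.1',
    packages=['mypkg'],
    scripts=['bin/mytool'],  # plain script files installed onto PATH
)

# bin/mytool is then a full script with a shebang:
#   #!/usr/bin/env python
#   from mypkg.cli import main
#   main()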
sudo apt-get install python-dev # for python2.x installs
sudo apt-get install python3-dev # for python3.x installs
It will install any missing headers. It solved my issue.
As far as I can tell, this is a bug in setuptools where it isn't removing the setuptools specific options before calling up to the base class in the standard library: https://bitbucket.org/pypa/setuptools/issue/29/avoid-userwarnings-emitted-when-calling
If you have an unconditional import setuptools in your setup.py (as you should if using the setuptools specific options), then the fact the script isn't failing with ImportError indicates that setuptools is properly installed.
You can silence the warning as follows:
python -W ignore::UserWarning:distutils.dist setup.py <any-other-args>
Only do this if you use the unconditional import that will fail completely if setuptools isn't installed :)
(I'm seeing this same behaviour in a checkout from the post-merger setuptools repo, which is why I'm confident it's a setuptools bug rather than a system config problem. I expect pre-merge distribute would have the same problem)
I've now seen this in legacy tools using Python 2.7, where a build (like a Dockerfile) installs an unpinned dependency, for example pytest. pytest has dropped Python 2.7 support, so you may need to pin the dependency to a version below the release that dropped support.
Or bite the bullet and convert that app to Python 3 if that is viable.
It works fine if you follow the official documentation:
import setuptools
setuptools.setup(...)
Source: https://packaging.python.org/tutorials/packaging-projects/#creating-setup-py
