Optional dependencies in distutils / pip - python

When installing my python package, I want to be able to tell the user about various optional dependencies. Ideally I would also like to print out a message about these optional requirements and what each of them do.
I haven't seen anything yet in the docs of either pip or distutils. Do these tools support optional dependencies?

These are called extras; here is how to use them in your setup.py, setup.cfg, or pyproject.toml.
The base support is in pkg_resources; you need to use setuptools (which absorbed distribute) in your setup.py. pip will also understand them:
pip install 'package[extras]'

Yes, as stated by @Tobu and explained here. In your setup.py file you can add a line like this:
extras_require = {
    'full': ['matplotlib', 'tensorflow', 'numpy', 'tikzplotlib']
}
I have an example of this line here.
Now you can install either the basic/vanilla package via pip, with pip install package_name, or the package with all the optional dependencies, with pip install 'package_name[full]'.
Here package_name is the name of your package, and full is the key we put in the extras_require dictionary; it depends on what name you chose.
If someone is interested in how to code a library that can work with or without an optional package, I recommend this answer.
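The usual pattern for that is to attempt the import up front and degrade gracefully. A minimal sketch (matplotlib and the function name here are just illustrative, matching the 'full' extra above):

# Try the optional dependency once at import time.
try:
    import matplotlib
except ImportError:
    matplotlib = None  # the feature will be disabled

def plot_results(data):
    # Fail with an actionable message if the extra was not installed.
    if matplotlib is None:
        raise RuntimeError(
            "plotting requires the 'full' extra: pip install 'package_name[full]'")
    # ... real plotting code using matplotlib goes here ...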

Since PEP 621, this information is better placed in pyproject.toml rather than setup.py. Here's the relevant specification from PEP 621, and here's an example snippet from a pyproject.toml (credit to @GwynBleidD):
[project.optional-dependencies]
test = [
    "pytest < 5.0.0",
    "pytest-cov[all]"
]
lint = [
    "black",
    "flake8"
]
ci = [
    "pytest < 5.0.0",
    "pytest-cov[all]",
    "black",
    "flake8"
]
A more complete example can be found in the PEP.
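pip resolves these the same way as setup.py extras; assuming the project above is published under the name package, any of the groups can be requested at install time:

$ pip install 'package[test]'
$ pip install 'package[test,lint]'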


Python poetry - how to install optional dependencies?

Python's poetry dependency manager allows specifying optional dependencies via command:
$ poetry add --optional redis
Which results in this configuration:
[tool.poetry.dependencies]
python = "^3.8"
redis = {version="^3.4.1", optional=true}
However, how do you actually install them? The docs seem to hint at:
$ poetry install -E redis
but that just throws an error:
Installing dependencies from lock file
[ValueError]
Extra [redis] is not specified.
You need to add a tool.poetry.extras group to your pyproject.toml if you want to use the -E flag during install, as described in this section of the docs:
[tool.poetry.extras]
caching = ["redis"]
The key refers to the word that you use with poetry install -E, and the value is a list of packages that were marked as --optional when they were added. There currently is no support for making optional packages part of a specific group during their addition, so you have to maintain this section in your pyproject.toml file by hand.
The reason behind this additional layer of abstraction is that extra-installs usually refer to some optional functionality (in this case caching) that is enabled through the installation of one or more dependencies (in this case just redis). poetry simply mimics setuptools' definition of extra-installs here, which might explain why it's so sparingly documented.
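So with the snippet above, installing the optional redis dependency comes down to using the extras name defined there:

$ poetry install -E caching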
I will add that not only do you have to add this extras section by hand, but your optional dependencies also cannot be in the dev section.
Example of code that won't work:
[tool.poetry]
name = "yolo"
version = "1.0.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "2.7"
Django = "*"
[tool.poetry.dev-dependencies]
pytest = "*"
ipdb = {version = "*", optional = true}
[tool.poetry.extras]
dev_tools = ["ipdb"]
But this WILL work:
[tool.poetry]
name = "yolo"
version = "1.0.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "2.7"
Django = "*"
ipdb = {version = "*", optional = true}
[tool.poetry.dev-dependencies]
pytest = "*"
[tool.poetry.extras]
dev_tools = ["ipdb"]
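With that working config, the optional package is then pulled in via the extras name:

$ poetry install -E dev_tools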
Up-voted Drachenfels's answer.
A dev dependency cannot be optional; no matter how you tweak it with extras or retry with poetry install -E, it will just never get installed.
This sounds like a bug but is somehow by design:
...this is not something I want to add. Extras will be referenced in the distributions metadata when packaging the project but development dependencies do not which will lead to a broken extras.
— concluded by one maintainer in a comment on Poetry PR #606. See here for detailed context: https://github.com/python-poetry/poetry/pull/606#issuecomment-437943927
I would say that I can accept the fact that an optional dev-dependency cannot be implemented. However, Poetry should at least warn me when I have such a config. Then I wouldn't have been confused for so long, reading every corner of the help manual and finding nothing helpful.
I found that some people did get trapped by this problem (Is poetry ignoring extras or pyproject.toml is misconfigured?) but their questions were closed, marked as duplicates and re-linked to this question. Thus I decided to answer here and give more details about this problem.
This is now possible (with Poetry version 1.2; perhaps even an earlier version), using the "extras" group:
poetry add redis --group=extras
It will appear in the section
[tool.poetry.group.extras.dependencies]
which is also the newer style (compared to [tool.poetry.extras] or [tool.poetry.extras.dependencies]).
See the documentation. Interestingly, it still follows the older style, [tool.poetry.extras], and doesn't show the use of poetry add, but the above result is what I get.

Unable to install locally built python package

I recently noticed that I am unable to install my own Python packages. I was getting an error that indicated that a package containing Python modules was invalid. So I updated my setup.py and removed some elements; this is what I have now:
from setuptools import setup

setup(
    name='project',
    version='0.3.0',
    packages=['project'],
    license='GPL',
    #zip_safe=False,
    #include_package_data=True,
    #package_data = { 'package': [ 'README.txt', '*.py' ] },
    install_requires=[
        'PyYAML >= 3.11',
        'logger >= 0.2.0',
    ],
    entry_points={
        'console_scripts': ['project = project:main']
    },
)
I removed some elements and called the project project. Essentially, within project, I had a package, libraries, with some Python modules. Prior to removing these lines:
#zip_safe=False,
#include_package_data=True,
#package_data = { 'package': [ 'README.txt', '*.py' ] },
... it was not working.
Oddly enough, this setup.py was working as far as I could tell up until a month ago. That said, after commenting those items out and running python setup.py build, I no longer get the error about the package being invalid, but at the same time I see that nothing gets installed when running pip install dist/project-0.0.1.tar.gz. Inside the file built by python setup.py sdist, I do see all the files that I would expect to see. They just don't get installed, so I'm effectively missing all of the packages underneath the root folder (which is everything except __init__.py).
What am I missing here?
EDIT:
The solution was:
packages=find_packages(),
The hackish solution for me was to do this:
packages=['project', 'project/libraries', 'project/system', 'project/services'],
For whatever reason, listing just the top-level package was not enough: the packages list is not recursive, so each subpackage has to be named explicitly (which is exactly what find_packages() automates).
As soon as I did that, voila, it worked. I'll probably circle back to this later as I'm curious what changed.
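For reference, a corrected setup.py using find_packages() could look like this sketch (names and versions taken from the question):

from setuptools import setup, find_packages

setup(
    name='project',
    version='0.3.0',
    packages=find_packages(),  # picks up project, project.libraries, project.system, ...
    license='GPL',
    install_requires=[
        'PyYAML >= 3.11',
        'logger >= 0.2.0',
    ],
    entry_points={
        'console_scripts': ['project = project:main']
    },
)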

python pip install wheel with custom entry points

In a python virtualenv on Windows, I've installed a custom package that overrides setuptools.command.easy_install.get_script_args to create a new custom type of entry point, 'custom_entry'.
I have another package that I want to prepare with setuptools exposing a custom entry point.
If I prepare an egg distribution of this package and install it with my modified easy_install.exe, this creates the custom entry points correctly.
However, if I prepare a wheel distribution and install it with a modified pip.exe, the custom entry points do not get added.
Why does pip not follow the same install procedure as easy_install?
Reading the source for pip, it seems that the function get_entrypoints in wheel.py excludes all entry points other than console_scripts and gui_scripts. Is this correct?
If so, how should I install custom entry points for pip installations?
---- Edit
It looks like I should provide more details.
In my first package, custom-installer, I'm overriding (monkey-patching, really) easy_install.get_script_args, in custom_install/__init__.py:
from setuptools.command import easy_install

_GET_SCRIPT_ARGS = easy_install.get_script_args

def get_script_args(dist, executable, wininst):
    for script_arg in _GET_SCRIPT_ARGS(dist, executable, wininst):
        yield script_arg  # replicate existing behaviour, handles console_scripts and other entry points
    for group in ['custom_entry']:
        for name, _ in dist.get_entry_map(group).items():
            script_text = (
                ## some custom stuff
            )
            ## do something else
            yield (## yield some other stuff) # to create adjunct files to the -custom.py script
            yield (name + '-custom.py', script_text, 't')

easy_install.get_script_args = get_script_args
main = easy_install.main
And in that package's setup.py, I provide a (console_script) entry point for my custom installer:
entry_points={
    'console_scripts': [
        'custom_install = custom_install.__init__:main'
    ]
}
Installing this package with pip correctly creates the installer script /venv/Scripts/custom_install.exe.
With my second package, customized, I have both regular and custom entry points to install from setup.py, for two modules custom and console.
entry_points={
    'console_scripts': [
        'console = console.__main__:main'
    ],
    'custom_entry': [
        'custom = custom.__main__:main'
    ]
}
I would like to see both of these entry points installed regardless of the install procedure.
If I build the package customized as an egg distribution and install this with custom_install.exe created by custom-installer, then both entry points of customized are installed.
I would like to be able to install this package as a wheel file using pip, but from reading the source code, pip seems to explicitly skip any entry points other than 'console_scripts' and 'gui_scripts':
def get_entrypoints(filename):
    if not os.path.exists(filename):
        return {}, {}

    # This is done because you can pass a string to entry_points wrappers which
    # means that they may or may not be valid INI files. The attempt here is to
    # strip leading and trailing whitespace in order to make them valid INI
    # files.
    with open(filename) as fp:
        data = StringIO()
        for line in fp:
            data.write(line.strip())
            data.write("\n")
        data.seek(0)

    cp = configparser.RawConfigParser()
    cp.readfp(data)

    console = {}
    gui = {}
    if cp.has_section('console_scripts'):
        console = dict(cp.items('console_scripts'))
    if cp.has_section('gui_scripts'):
        gui = dict(cp.items('gui_scripts'))
    return console, gui
Subsequently, pip generates entry point scripts using a completely different code path from easy_install. Presumably, I could override pip's implementations of these, as done with easy_install, to create my custom entry points, but I feel like I'm going the wrong way.
Can anyone suggest a simpler way of implementing my custom entry points that is compatible with pip? If not, I can override get_entrypoints and move_wheel_files.
You will probably need to use the keyword console_scripts in your setup.py file. See the following answer:
entry_points does not create custom scripts with pip or easy_install in Python?
It basically states that you need to do the following in your setup.py script:
entry_points = {
    'console_scripts': ['custom_entry_point = mypackage.mymod.test:foo']
}
See also: http://calvinx.com/2012/09/09/python-packaging-define-an-entry-point-for-console-commands/
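If the install-time script generation can be avoided altogether, another sketch (function and variable names here are illustrative, not from the question) is to keep the custom_entry group declared in setup.py as usual and enumerate it at runtime from an ordinary console_scripts wrapper, so both pip and easy_install only ever see standard entry points:

import pkg_resources

def main():
    # Discover and invoke every callable registered under the custom group.
    # 'custom_entry' matches the group name used in the question.
    for entry_point in pkg_resources.iter_entry_points('custom_entry'):
        func = entry_point.load()  # imports the target module and attribute
        func()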

Building and distributing a python module using rpm

I am trying to build and distribute an rpm package of a python module for CentOS. I have followed these steps:
created a virtualenv and installed the requirements
in the module, added a setup.py with install_requires
then built the package using python2.7 from the virtualenv:
../env/bin/python2.7 setup.py bdist_rpm
Now I have src, noarch, and tar.gz files in the 'dist' folder:
foo-0.1-1.noarch.rpm, foo-0.1-1.src.rpm, foo-0.1.tar.gz
I tried to install the src rpm using sudo yum install foo-0.1-1.src.rpm, and
got an error, something like "wrong architecture".
Then I tried to install the noarch package, sudo yum install foo-0.1-1.noarch.rpm, and it works smoothly.
But after running the script, I got an import error; here I expected the required modules to be downloaded automatically.
The last thing is that I am using some third-party libraries which are not available via pip.
So I want the whole setup to use a virtualenv with the required modules, so that after installing the rpm the user can run the script directly instead of installing third-party libs separately and explicitly.
Some of the above steps may sound wrong, as I am new to this stuff.
Following is the code in setup.py:
from setuptools import setup, find_packages

setup(
    name="foo",
    version="0.1",
    packages=find_packages(),
    scripts=['foo/bar.py', ],
    # Project uses reStructuredText, so ensure that the docutils get
    # installed or upgraded on the target machine
    install_requires=['PyYAML', 'pyOpenSSL', 'pycrypto', 'privatelib1', 'privatelib2', 'zope.interface'],
    package_data={
        # If any package contains *.txt or *.rst files, include them:
        '': ['*.txt', '*.rst'],
        # And include any *.msg files found in the 'foo' package, too:
        'foo': ['*.msg'],
    },
    # metadata for upload to PyPI
    author="foo bar",
    description="foo bar",
    license="",
    keywords="foo bar",
    # could also include long_description, download_url, classifiers, etc.
)
Also, I am using a shebang in the script:
#!/usr/bin/env python2.7
Note:
I have multiple python setups, 2.6 and 2.7.
By default the python command gives 2.6,
while the command python2.7 gives python 2.7.
Output of rpm -qp foo-0.1-1.noarch.rpm --requires:
/usr/bin/python
python(abi) = 2.6
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
When I install the package, the script's shebang line (the script is now /usr/bin/bar.py) gets changed to /usr/bin/python. But I specifically want to run the script on python2.7.
Thanks in advance
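One thing worth trying (a hedged suggestion, not a verified fix; the target interpreter path is an assumption about the system): distutils' bdist_rpm command accepts a --python option that sets the interpreter recorded in the generated spec file, which should make the python(abi) requirement and the rewritten shebang refer to python2.7 instead of the default python:

$ ../env/bin/python2.7 setup.py bdist_rpm --python /usr/bin/python2.7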

Python: How to install packages to a specific directory using setuptools

setup.py
from setuptools import setup

setup(
    name="Project",
    version="1.0",
    packages=['Project', 'Project.project', 'Project.LOG',
              'Project.reporting', 'Project.templates',
              ],
    install_requires=['django-grappelli==2.3.8', 'pycairo==1.10.0',
                      'django-chart-tools==0.2.1', 'django-admin-tools==0.4.0'],
    package_data={
        '': ['*.html', '*.pyd', '*.txt', '*.gif', '*.png', '*.jpeg', '*.jpg',
             '*.css', '*.js', '*.py', '*.html~', '*.sh', '*.wsgi'],
    },
    # metadata for upload to PyPI
    author="Me",
    author_email="me@example.com",
    description="This is an Example Package",
)
Now I want all the packages to be installed in dev/workspace on Windows and in /var/www on Ubuntu. And I want all the install_requires to be installed in python/Lib/site-packages.
How can I do this?
I think you really want to have a look at virtualenv. It lets you create your own Python environment into which your code and its dependencies get installed, wherever you want, totally independent of your existing Python installation. I don't see a good reason why you would want to install dependencies into your existing Python installation.
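A minimal sketch of that workflow (the /var/www/env path is just an example location):

$ virtualenv /var/www/env
$ /var/www/env/bin/pip install /path/to/Project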
