PIP Constraints Files - python

I was reading the docs and couldn't wrap my head around it:
Constraints files are requirements files that only control which
version of a requirement is installed, not whether it is installed or
not. Their syntax and contents is nearly identical to Requirements
Files.
There is one key difference: Including a package in a constraints file
does not trigger installation of the package.
So does that mean I need to run the requirements file first and then the constraints file?
Need to have both requirements.txt and constraints.txt?
Or just -c requirements.txt?
Could someone please explain in plain English what the new pip feature, constraints files, does?

Update: pip 20.3, released Nov 30, 2020, introduced a new resolver that fixes some design issues. It also reimplements the constraints feature. It is now difficult or impossible to use the feature the way I describe below. I do not understand the new constraints implementation, and I no longer use it. I've had success with pip-compile from pip-tools. I specify top-level requirements in requirements.in, and pip-compile generates a requirements.txt with specific versions and package hashes. See requirements.in and requirements.txt in the ichnaea project for a working example.
Original answer below, for pip 20.2 and previous
I think a constraints file is a good way to keep your "true" requirements separate from your full install list.
It's good practice to fully specify package versions in your requirements file. For example, if you are installing django-allauth with Django LTS, pin it to the latest releases (as of my answer):
Django==1.8.12
django-allauth==0.25.2
When you install the package, it pulls in some required packages of its own. Add those to the requirements file too, so that everyone gets the same versions:
Django==1.8.12
django-allauth==0.25.2
oauthlib==1.0.3
python-openid==2.2.5
requests==2.9.1
requests-oauthlib==0.6.1
And then you get the bug report "Doesn't work under Python 3". Oops, python-openid is Python 2 only, and python3-openid is used instead, further requiring defusedxml:
Django==1.8.12
django-allauth==0.25.2
oauthlib==1.0.3
python-openid==2.2.5 ; python_version < '3.0'
python3-openid==3.0.10 ; python_version >= '3.0'
defusedxml==0.4.1 ; python_version >= '3.0'
requests==2.9.1
requests-oauthlib==0.6.1
Now requirements.txt is getting ugly, and it's hard to see the "requirements" of Django and django-allauth in the mess.
Here is a requirements.txt that refers to a constraints file:
-c constraints.txt
Django==1.8.12
django-allauth==0.25.2
And constraints.txt with a helpful comment:
# django-allauth requirements
oauthlib==1.0.3
python-openid==2.2.5
python3-openid==3.0.10
defusedxml==0.4.1
requests==2.9.1
requests-oauthlib==0.6.1
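You then install with pip as usual; the -c line inside requirements.txt pulls in the constraints, so a single command covers both files (a sketch, assuming both files sit in the project root):
pip install -r requirements.txt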
No environment markers (the ; python_version qualifiers used above) are needed here, because constraints are only applied when a package actually requires them, and are ignored otherwise. Additionally, if a package stops requiring another package two years down the road, fresh installs will stop installing it.
I think this, plus some comments, is a useful way to communicate what packages you are using for the project, and which ones are included because they are dependencies.
I think it gets even more useful if you are using pip 8.x's hash-checking mode, which requires specifying versions for dependencies-of-dependencies. If you go down that path, I recommend hashin to help you manage the hashes. See browsercompat for a complicated requirements setup using constraints and hashes.

From the book "Secret Recipes of the Python Ninja"
Constraints files differ from requirements files in one key way: putting a package in the constraints file does not cause the package to be installed, whereas a requirements file will install all packages listed. Constraints files are simply requirements files that control which version of a package will be installed but provide no control over the actual installation.
Let's say you have a requirements.txt file with the following content
# requirements.txt
pandas
and constraints.txt
# constraints.txt
# math / science / graph stuff
bokeh==0.11.1
numpy==1.10.4
pandas==0.17.1
scipy==0.17.0
openpyxl==2.3.3
patsy==0.4.1
matplotlib==1.5.1
ggplot==0.6.8
seaborn==0.7.0
scikit-learn==0.17
executing
pip install -r requirements.txt -c constraints.txt
will install the packages listed in requirements.txt, using constraints.txt to pin the versions of whatever actually gets installed.

Constraints files are very useful when you provide production environments with packages compiled and optimized for the production infrastructure.
We provide such environments in docker containers with versions of numpy, scipy, tensorflow, opencv, ... optimized for our production servers. In order to ensure that the requirements files developers set up to make the environment reproducible for them don't overwrite these optimized installations, we use constraints files. They are written as part of the build process. pip is made aware of the constraints by setting the PIP_CONSTRAINT environment variable.
Before, developers always had to make sure their requirements lined up with what was available in the prod containers, which can be a pain given that some minor version numbers iterate quite quickly.
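A rough sketch of that setup, assuming the constraints file is written to /opt/constraints.txt during the image build (the path is made up for illustration):
# inside the production container image
export PIP_CONSTRAINT=/opt/constraints.txt
# any later install, e.g. from a developer's requirements file, now respects
# the pinned, optimized builds and will not overwrite them
pip install -r requirements.txt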

I haven't used constraints files myself, so I may be wrong, but here's how I understand it from the documentation.
Let's say, you have multiple projects which you install using pip. Some of them depend on a popular package foobar directly, some indirectly (via their dependencies), some don't depend on it at all.
One day you decide to change foobar code a bit, to solve some problem related to your environment, so you go ahead and create a local copy of foobar. Now you want to ensure that all your projects use this local copy, but you don't want to spend time figuring out which of them depend on foobar and then editing their requirements.txt or setup.py files.
Here the constraints file comes in: you point it at your local copy of foobar and use it for all your projects, without caring which of them need foobar, because a constraints file only applies when the project being installed actually depends on foobar (whether directly or indirectly).
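A minimal sketch of that workflow (the file path and version number are hypothetical): keep one shared constraints file that pins foobar, and pass it to every install; projects that don't depend on foobar are unaffected. Note that newer pip versions are stricter than the old resolver about what a constraints file may contain.
# shared-constraints.txt
foobar==1.2.3

pip install -c ~/shared-constraints.txt ./project-a
pip install -c ~/shared-constraints.txt ./project-b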
There's an even better example of possible usage on reddit.

Related

Python upgrade and requirements compatibility

I have a Python 3.6 project that I want to migrate to Python 3.8. I also have a
requirements.txt file generated with pip freeze, which is used by the Python 3.6 project. Is there a way to know if the packages listed in requirements.txt, with their own specific pinned version, support/are compatible with Python 3.8?
I could imagine some ways to do that, like check the packages classifiers or look at tox.ini and so on, but the requirements file has ~300 packages listed and doing that manually would be cumbersome at best.
If you do a pip install -r requirements.txt under the new version of Python (in a venv), it'll tell you if it can't find a compatible release for a pinned package.
If there are several missing, it'll be a bit annoying, because it only tells you about one at a time, but hopefully there won't be too many.
Not sure if there's an official documented procedure in the docs, but this should give you a quick idea, especially on the happy path where it just works.
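A quick way to try this (a sketch, assuming python3.8 is already installed; the directory name is arbitrary):
python3.8 -m venv venv38
source venv38/bin/activate
pip install -r requirements.txt   # fails on any pin with no Python 3.8-compatible release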

Are there naming conventions for pip requirements files?

Are there conventions for storing multiple requirements.txt files in a Python code repository. For example, one file for simply running the program, another for day-to-day development, another for making a Windows build.
Some repositories contain two files, requirements.txt and requirements_dev.txt, or requirements.txt and requirements_win.txt - this seems pretty ad-hoc.
I have seen others with a requires subfolder, but I'm not sure what requires/requirements.txt means in that context: is it for running the application, or for development?
There is no mention of storing multiple requirements files in Structuring your project (Hitchhiker's guide to Python) or pip install (pip documentation).
As far as I am aware, there are no hard-and-fast rules here. At least not via PEP (but someone feel free to correct me).
The Hitchhiker's Guide to Python recommends putting the pip requirements file at the root of your project.
There does not appear to be any requirement that a pip requirements file has to be called requirements.txt. The pip install documentation even uses the example of pip install -r example-requirements.txt.
I would think that the conventions vary from project-to-project and largely depend on your deployment process and project-specific documentation.
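One layout you will see in the wild (a convention, not a standard) is a requirements directory whose extra files include the base file, for example:
requirements/
    base.txt   # runtime dependencies
    dev.txt    # starts with "-r base.txt", then adds linters and test runners
    win.txt    # starts with "-r base.txt", then adds Windows-only packages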

requirements.txt vs setup.py

I started working with Python. I've added requirements.txt and setup.py to my project. But, I am still confused about the purpose of both files. I have read that setup.py is designed for redistributable things and that requirements.txt is designed for non-redistributable things. But I am not certain this is accurate.
How are those two files truly intended to be used?
requirements.txt:
This helps you to set up your development environment.
Programs like pip can be used to install all packages listed in the file in one fell swoop. After that you can start developing your python script. Especially useful if you plan to have others contribute to the development or use virtual environments.
This is how you use it:
pip install -r requirements.txt
It can be produced easily by pip itself:
pip freeze > requirements.txt
pip automatically skips the packages that ship with the environment itself (pip, setuptools, and the like), but it lists everything else that is installed, so generate the file from a clean virtual environment to keep it minimal.
setup.py:
This helps you to create packages that you can redistribute.
The setup.py script is meant to install your package on the end user's system, not to prepare the development environment as pip install -r requirements.txt does. See this answer for more details on setup.py.
The dependencies of your project are listed in both files.
The short answer is that requirements.txt is for listing package requirements only. setup.py on the other hand is more like an installation script. If you don't plan on installing the python code, typically you would only need requirements.txt.
The file setup.py describes, in addition to the package dependencies, the set of files and modules that should be packaged (or compiled, in the case of native modules (i.e., written in C)), and metadata to add to the python package listings (e.g. package name, package version, package description, author, ...).
Because both files list dependencies, this can lead to a bit of duplication. Read below for details.
requirements.txt
This file lists python package requirements. It is a plain text file (optionally with comments) that lists the package dependencies of your python project (one per line). It does not describe the way in which your python package is installed. You would generally consume the requirements file with pip install -r requirements.txt.
The filename of the text file is arbitrary, but is often requirements.txt by convention. When exploring source code repositories of other python packages, you might stumble on other names, such as dev-dependencies.txt or dependencies-dev.txt. Those serve the same purpose as requirements.txt but generally list additional dependencies of interest to developers of the particular package, namely for testing the source code (e.g. pytest, pylint, etc.) before release. Users of the package generally wouldn't need the entire set of developer dependencies to run the package.
If multiple requirements-X.txt variants are present, then usually one lists runtime dependencies and the others list build-time or test dependencies. Some projects also cascade their requirements files, i.e. one requirements file includes another file (example). Doing so can reduce repetition.
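For example, a development requirements file might cascade like this (package names and versions are only illustrative):
# dev-requirements.txt
-r requirements.txt   # pull in the runtime dependencies first
pytest==7.4.0         # test-only additions
pylint==3.0.2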
setup.py
This is a python script which uses the setuptools module to define a python package (name, files included, package metadata, and installation). Like requirements.txt, it also lists the runtime dependencies of the package. Setuptools is the de-facto way to build and install python packages, but it has its shortcomings, which over time have spurred the development of new "meta-package managers", like pip. Example shortcomings of setuptools are its inability to install multiple versions of the same package, and the lack of an uninstall command.
When a python user does pip install ./pkgdir_my_module (or pip install my-module), pip will run setup.py in the given directory (or module). Similarly, any module which has a setup.py can be pip-installed, e.g. by running pip install . from the same folder.
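For reference, a minimal setup.py might look like this (the project name, version, and dependency are placeholders):
# setup.py (minimal sketch)
from setuptools import setup, find_packages

setup(
    name="my-module",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["requests>=2,<3"],
)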
Do I really need both?
Short answer is no, but it's nice to have both. They achieve different purposes, but they can both be used to list your dependencies.
There is one trick you may consider to avoid duplicating your list of dependencies between requirements.txt and setup.py. If you have written a fully working setup.py for your package already, and your dependencies are mostly external, you could consider having a simple requirements.txt with only the following:
# requirements.txt
#
# installs dependencies from ./setup.py, and the package itself,
# in editable mode
-e .
# (the -e above is optional). You could also just install the package
# normally with just the line below (after uncommenting)
# .
The -e is a special pip install option which installs the given package in editable mode. When pip install -r requirements.txt is run on this file, pip will install your dependencies via the list in ./setup.py. The editable option will place a symlink in your install directory (instead of an egg or archived copy). It allows developers to edit code in place from the repository without reinstalling.
You can also take advantage of what's called "setuptools extras" when you have both files in your package repository. You can define optional packages in setup.py under a custom category, and install those packages from just that category with pip:
# setup.py
from setuptools import setup

setup(
    name="FOO",
    ...
    extras_require={
        'dev': ['pylint'],
        'build': ['requests'],
    },
    ...
)
and then, in the requirements file:
# install packages in the [build] category, from setup.py
# (path/to/mypkg is the directory where setup.py is)
-e path/to/mypkg[build]
This would keep all your dependency lists inside setup.py.
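You can also install the extras directly with pip, without going through a requirements file (the quotes guard against shell globbing; -e is again optional):
pip install -e ".[dev]"
pip install ".[build]"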
Note: You would normally execute pip and setup.py from a sandbox, such as those created with the program virtualenv. This will avoid installing python packages outside the context of your project's development environment.
For the sake of completeness, here is how I see it from four different angles.
Their design purposes are different
This is the precise description quoted from the official documentation (emphasis mine):
Whereas install_requires (in setup.py) defines the dependencies for a single project, Requirements Files are often used to define the requirements for a complete Python environment.
Whereas install_requires requirements are minimal, requirements files often contain an exhaustive listing of pinned versions for the purpose of achieving repeatable installations of a complete environment.
But it might still not be easy to understand, so the next section gives two concrete examples to demonstrate how the two approaches are supposed to be used differently.
Their actual usages are therefore (supposed to be) different
If your project foo is going to be released as a standalone library (meaning, others would probably do import foo), then you (and your downstream users) would want to have a flexible declaration of dependency, so that your library would not (and it must not) be "picky" about what exact version of YOUR dependencies should be. So, typically, your setup.py would contain lines like this:
install_requires=[
    'A>=1,<2',
    'B>=2',
]
If you just want to somehow "document" or "pin" your EXACT current environment for your application bar, meaning that you or your users would like to use your application bar as-is, i.e. by running python bar.py, you may want to freeze your environment so that it always behaves the same. In that case, your requirements file would look like this:
A==1.2.3
B==2.3.4
# It could even contain some dependencies NOT strictly required by your application
pylint==3.4.5
In reality, which one do I use?
If you are developing an application bar which will be run with python bar.py, even if it is "just a script for fun", you are still recommended to use requirements.txt because, who knows, next week (which happens to be Christmas) you might receive a new computer as a gift and need to set up your exact environment there again.
If you are developing a library foo which will be used by import foo, you have to prepare a setup.py. Period.
But you may still choose to also provide a requirements.txt at the same time, which can:
(a) either be in the A==1.2.3 style (as explained in #2 above);
(b) or just contain a magical single .
.
The latter essentially uses the conventional requirements.txt habit to document that your installation step is pip install ., which means "install the requirements based on setup.py" without duplication. Personally I think this last approach blurs the line and adds to the confusion, but it is nonetheless a convenient way to explicitly opt out of dependency pinning when running in a CI environment. The trick was derived from an approach mentioned by Python packaging maintainer Donald in his blog post.
Different lower bounds.
Assuming there is an existing engine library with this history:
engine 1.1.0 Use steam
...
engine 1.2.0 Internal combustion is invented
engine 1.2.1 Fix engine leaking oil
engine 1.2.2 Fix engine overheat
engine 1.2.3 Fix occasional engine stalling
engine 2.0.0 Introducing nuclear reactor
You follow the above three criteria and correctly decide that your new library hybrid-engine will use a setup.py to declare its dependency engine>=1.2.0,<2, and that your separate application reliable-car will use requirements.txt to declare its dependency engine>=1.2.3,<2 (or you may want to just pin engine==1.2.3). As you can see, your choices for the lower bound are still subtly different, and neither of them uses the latest engine==2.0.0. And here is why.
hybrid-engine depends on engine>=1.2.0 because the needed add_fuel() API was first introduced in engine 1.2.0, and that capability is a necessity for hybrid-engine, regardless of whether there might be some (minor) bugs in that version that were later fixed in 1.2.1, 1.2.2, and 1.2.3.
reliable-car depends on engine>=1.2.3 because that is the earliest version WITHOUT known issues, so far. Sure, there are new capabilities in later versions, e.g. the "nuclear reactor" introduced in engine 2.0.0, but they are not necessarily desirable for project reliable-car. (Yet another new project of yours, time-machine, would likely use engine>=2.0.0, but that is a different topic.)
TL;DR
requirements.txt lists concrete dependencies
setup.py lists abstract dependencies
A common misunderstanding with respect to dependency management in Python is whether you need to use a requirements.txt or setup.py file in order to handle dependencies.
The chances are you may have to use both in order to ensure that dependencies are handled appropriately in your Python project.
The requirements.txt file is supposed to list the concrete dependencies. In other words, it should list pinned dependencies (using the == specifier). This file will then be used in order to create a working virtual environment that will have all the dependencies installed, with the specified versions.
On the other hand, the setup.py file should list the abstract dependencies. This means that it should list the minimal dependencies for running the project. Apart from dependency management though, this file also serves the package distribution (say on PyPI).
For a more comprehensive read, you can read the article requirements.txt vs setup.py in Python on TDS.
Going forward, as of PEP 517 and PEP 518, you may have to use a pyproject.toml to specify that you want to use setuptools as the build tool, plus an additional setup.cfg file to specify the details.
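A minimal pyproject.toml declaring setuptools as the build backend looks roughly like this (per PEP 517/518; you may want to pin the versions in requires):
# pyproject.toml
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"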
For more details you can read the article setup.py vs setup.cfg in Python.

When to use pip requirements file versus install_requires in setup.py?

I'm using pip with virtualenv to package and install some Python libraries.
I'd imagine what I'm doing is a pretty common scenario. I'm the maintainer on several libraries for which I can specify the dependencies explicitly. A few of my libraries are dependent on third party libraries that have transitive dependencies over which I have no control.
What I'm trying to achieve is for a pip install on one of my libraries to download/install all of its upstream dependencies. What I'm struggling with in the pip documentation is if/how requirements files can do this on their own or if they're really just a supplement to using install_requires.
Would I use install_requires in all of my libraries to specify dependencies and version ranges and then only use a requirements file to resolve a conflict and/or freeze them for a production build?
Let's pretend I live in an imaginary world (I know, I know) and my upstream dependencies are straightforward and guaranteed to never conflict or break backward compatibility. Would I be compelled to use a pip requirements file at all or just let pip/setuptools/distribute install everything based on install_requires?
There are a lot of similar questions on here, but I couldn't find any that were as basic as when to use one or the other or using them both together harmoniously.
My philosophy is that install_requires should indicate a minimum of what you need. It might include version requirements if you know that some versions will not work; but it shouldn't have version requirements where you aren't sure (e.g., you aren't sure if a future release of a dependency will break your library or not).
Requirements files on the other hand should indicate what you know does work, and may include optional dependencies that you recommend. For example, you might use SQLAlchemy but suggest MySQL, and so put MySQLdb in the requirements file.
So, in summary: install_requires is to keep people away from things that you know don't work, while requirements files to lead people towards things you know do work. One reason for this is that install_requires requirements are always checked, and cannot be disabled without actually changing the package metadata. So you can't easily try a new combination. Requirements files are only checked at install time.
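Using the SQLAlchemy/MySQL example, the two files might look like this (versions are purely illustrative, and mysqlclient stands in for a package providing MySQLdb):
# setup.py: a minimum, excluding only versions known not to work
install_requires=["SQLAlchemy>=1.0"]

# requirements.txt: a combination known to work, plus the recommended optional driver
SQLAlchemy==1.4.46
mysqlclient==2.1.1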
here's what I put in my setup.py:
# this grabs the requirements from requirements.txt
from setuptools import setup

REQUIREMENTS = [i.strip() for i in open("requirements.txt").readlines()]

setup(
    .....
    install_requires=REQUIREMENTS,
)
The Python Packaging User Guide has a page about this topic, I highly recommend you read it:
install_requires vs Requirements files
Summary:
install_requires is there to list the dependencies of the package that absolutely must be installed for the package to work. It is not meant to pin the dependencies to specific versions, but ranges are accepted, for example install_requires=['django>=1.8']. install_requires is observed by pip install name-on-pypi and other tools.
requirements.txt is just a text file, that you can choose to run pip install -r requirements.txt against. It's meant to have versions of all dependencies and subdependencies pinned, like this: django==1.8.1. You can create one using pip freeze > requirements.txt. (Some services, like Heroku, automatically run pip install -r requirements.txt for you.) pip install name-on-pypi does not look at requirements.txt, only at install_requires.
I only ever use a setup.py and install_requires because there is only one place to look at. It is just as powerful as having a requirements file and there is no duplication to maintain.

Migrating to pip+virtualenv from setuptools

So pip and virtualenv sound wonderful compared to setuptools. Being able to uninstall would be great. But my project is already using setuptools, so how do I migrate? The web sites I've been able to find so far are very vague and general. So here's an anthology of questions after reading the main web sites and trying stuff out:
First of all, are virtualenv and pip supposed to be in a usable state by now? If not, please disregard the rest as the ravings of a madman.
How should virtualenv be installed? I'm not quite ready to believe it's as convoluted as explained elsewhere.
Is there a set of tested instructions for how to install matplotlib in a virtual environment? For some reason it always wants to compile it here instead of just installing a package, and it always ends in failure (even after build-dep which took up 250 MB of disk space). After a whole bunch of warnings it prints src/mplutils.cpp:17: error: ‘vsprintf’ was not declared in this scope.
How does either tool interact with setup.py? pip is supposed to replace easy_install, but it's not clear whether it's a drop-in or more complicated relationship.
Is virtualenv only for development mode, or should the users also install it?
Will the resulting package be installed with the minimum requirements (like the current egg), or will it be installed with sources & binaries for all dependencies plus all the build tools, creating a gigabyte monster in the virtual environment?
Will the users have to modify their $PATH and $PYTHONPATH to run the resulting package if it's installed in a virtual environment?
Do I need to create a script from a text string for virtualenv like in the bad old days?
What is with the #egg=Package URL syntax? That's not part of the standard URL, so why isn't it a separate parameter?
Where is #rev included in the URL? At the end I suppose, but the documentation is not clear about this ("You can also include #rev in the URL").
What is supposed to be understood by using an existing requirements file as "as a sort of template for the new file"? This could mean any number of things.
Wow, that's quite a set of questions. Many of them would really deserve their own SO question with more details. I'll do my best:
First of all, are virtualenv and pip supposed to be in a usable state by now?
Yes, although they don't serve everyone's needs. Pip and virtualenv (along with everything else in Python package management) are far from perfect, but they are widely used and depended upon nonetheless.
How should virtualenv be installed? I'm not quite ready to believe it's as convoluted as explained elsewhere.
The answer you link is complex because it is trying to avoid making any changes at all to your global Python installation and to install everything in ~/.local instead. This has some advantages, but is more complex to set up. It's also installing virtualenvwrapper, which is a set of convenience bash scripts for working with virtualenv, but which is not necessary for using virtualenv.
If you are on Ubuntu, aptitude install python-setuptools followed by easy_install virtualenv should get you a working virtualenv installation without doing any damage to your global python environment (unless you also had the Ubuntu virtualenv package installed, which I don't recommend as it will likely be an old version).
Is there a set of tested instructions for how to install matplotlib in a virtual environment? For some reason it always wants to compile it here instead of just installing a package, and it always ends in failure (even after build-dep which took up 250 MB of disk space). After a whole bunch of warnings it prints src/mplutils.cpp:17: error: ‘vsprintf’ was not declared in this scope.
It "always wants to compile" because pip, by design, installs only from source, it doesn't install pre-compiled binaries. This is a controversial choice, and is probably the primary reason why pip has seen widest adoption among Python web developers, who use more pure-Python packages and commonly develop and deploy in POSIX environments where a working compilation chain is standard.
The reason for the design choice is that providing precompiled binaries has a combinatorial explosion problem with different platforms and build architectures (including python version, UCS-2 vs UCS-4 python builds, 32 vs 64-bit...). The way easy_install finds the right binary package on PyPI sort of works, most of the time, but doesn't account for all these factors and can break. So pip just avoids that issue altogether (replacing it with a requirement that you have a working compilation environment).
In many cases, packages that require C compilation also have a slower-moving release schedule and it's acceptable to simply install OS packages for them instead. This doesn't allow working with different versions of them in different virtualenvs, though.
I don't know what's causing your compilation error, it works for me (on Ubuntu 10.10) with this series of commands:
virtualenv --no-site-packages tmp
. tmp/bin/activate
pip install numpy
pip install -f http://downloads.sourceforge.net/project/matplotlib/matplotlib/matplotlib-1.0.1/matplotlib-1.0.1.tar.gz matplotlib
The "-f" link is necessary to get the most recent version, due to matplotlib's unusual download URLs on PyPI.
How does either tool interact with setup.py? pip is supposed to replace easy_install, but it's not clear whether it's a drop-in or more complicated relationship.
The setup.py file is a convention of distutils, the Python standard library's package management "solution." distutils alone is missing some key features, and setuptools is a widely-used third-party package that "embraces and extends" distutils to provide some additional features. setuptools also uses setup.py. easy_install is the installer bundled with setuptools. Setuptools development stalled for several years, and distribute was a fork of setuptools to fix some longstanding bugs. Eventually the fork was resolved with a merge of distribute back into setuptools, and setuptools development is now active again (with a new maintainer).
distutils2 was a mostly-rewritten new version of distutils that attempted to incorporate the best ideas from setuptools/distribute, and was supposed to become part of the Python standard library. Unfortunately this effort failed, so for the time being setuptools remains the de facto standard for Python packaging.
Pip replaces easy_install, but it does not replace setuptools; it requires setuptools and builds on top of it. Thus it also uses setup.py.
Is virtualenv only for development mode, or should the users also install it?
There's no single right answer to that; it can be used either way. In the end it's really your user's choice, and your software ideally should be able to be installed inside or out of a virtualenv; though you might choose to document and emphasize one approach or the other. It depends very much on who your users are and what environments they are likely to need to install your software into.
Will the resulting package be installed with the minimum requirements (like the current egg), or will it be installed with sources & binaries for all dependencies plus all the build tools, creating a gigabyte monster in the virtual environment?
If a package that requires compilation is installed via pip, it will need to be compiled from source. That also applies to any dependencies that require compilation.
This is unrelated to the question of whether you use a virtualenv. easy_install is available by default in a virtualenv and works just fine there. It can install pre-compiled binary eggs, just like it does outside of a virtualenv.
Will the users have to modify their $PATH and $PYTHONPATH to run the resulting package if it's installed in a virtual environment?
In order to use anything installed in a virtualenv, you need to use the python binary in the virtualenv's bin/ directory (or another script installed into the virtualenv that references this binary). The most common way to do this is to use the virtualenv's activate or activate.bat script to temporarily modify the shell PATH so the virtualenv's bin/ directory is first. Modifying PYTHONPATH is not generally useful or necessary with virtualenv.
Do I need to create a script from a text string for virtualenv like in the bad old days?
No.
What is with the #egg=Package URL syntax? That's not part of the standard URL, so why isn't it a separate parameter?
The "#egg=projectname-version" URL fragment hack was first introduced by setuptools and easy_install. Since easy_install scrapes links from the web to find candidate distributions to install for a given package name and version, this hack allowed package authors to add links on PyPI that easy_install could understand, even if they didn't use easy_install's standard naming conventions for their files.
Where is #rev included in the URL? At the end I suppose, but the documentation is not clear about this ("You can also include #rev in the URL").
A couple sentences after that quoted fragment there is a link to "read the requirements file format to learn about other features." The #rev feature is fully documented and demonstrated there.
What is supposed to be understood by using an existing requirements file "as a sort of template for the new file"? This could mean any number of things.
The very next sentence says "it will keep the packages listed in devel-req.txt in order and preserve comments." I'm not sure what would be a better concise description.
I can't answer all your questions, but hopefully the following helps.
Both virtualenv and pip are very usable. Many Python devs use these everyday.
Since you have a working easy_install, the easiest way to install both is the following:
easy_install pip
easy_install virtualenv
Once you have virtualenv, just type virtualenv yourEnvName and you'll get your new python virtual environment in a directory named yourEnvName.
From there, it's as easy as source yourEnvName/bin/activate and the virtual python interpreter will be the active one. I know nothing about matplotlib, but following the installation instructions should work out ok unless there are weird hard-coded path issues.
If you can install something via easy_install you can usually install it via pip. I haven't found anything that easy_install could do that pip couldn't.
I wouldn't count on users being able to install virtualenv (it depends on who your users are). Technically, a virtual python interpreter can be treated as a real one for most cases. Its main uses are keeping the real interpreter's site-packages uncluttered and handling the case where two libraries/apps require different, incompatible versions of the same library.
If you or a user install something in a virtualenv, it won't be available in other virtualenvs or the system Python interpreter. You'll need to use the source /path/to/yourvirtualenv/bin/activate command to switch to the virtual environment where you installed the library.
What they mean by "as a sort of template for the new file" is that the pip freeze -r devel-req.txt > stable-req.txt command will create a new file stable-req.txt based on the existing file devel-req.txt. The only difference will be anything installed not already specified in the existing file will be in the new file.
