Pip overwrites package with local version specifier - python

I want to install a specific version of PyTorch (inside a Dockerfile) to handle modern GPUs: 1.8.1+cu111. Please note the local version identifier. I then install (both via pip) a library that requests torch>=1.8.1 as a dependency. For some reason, this uninstalls my 1.8.1+cu111 and replaces it with vanilla 1.10.0, even though a version carrying a local version identifier should be accepted as valid: PEP 440 says "...local version labels MUST be ignored entirely when checking...", and that is what normally happens, e.g. here, or even for me when I try to re-install PyTorch by hand:
pip install "torch>=1.8.1"
Requirement already satisfied: torch>=1.8.1 in /usr/local/lib/python3.7/dist-packages (1.8.1+cu111)
But no, when I install my library, I get this:
...
Collecting torch>=1.8.1
Using cached torch-1.10.0-cp37-cp37m-manylinux1_x86_64.whl (881.9 MB)
...
Installing collected packages: torch,...
Attempting uninstall: torch
Found existing installation: torch 1.8.1+cu111
Uninstalling torch-1.8.1+cu111:
Successfully uninstalled torch-1.8.1+cu111
How can I make it keep the pre-installed 1.8.1+cu111?

In PEP 440, just above the passage you quoted, you have:
When multiple candidate versions match a version specifier, the preferred version SHOULD be the latest version as determined by the consistent ordering defined by the standard Version scheme. Whether or not pre-releases are considered as candidate versions SHOULD be handled as described in Handling of pre-releases.
I think this happens because you are using an inclusive ordered comparison (>=), in which local version identifiers are NOT permitted.
Can you specify the exact version with pip install "torch==1.8.1+cu111"? Maybe only ==1.8.1 works.
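To see why, here is a minimal sketch using the packaging library (which pip relies on for PEP 440 semantics); the version numbers are the ones from the question:

from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=1.8.1")

# The +cu111 build does satisfy the range, because local labels are ignored
# when checking a candidate against the specifier ...
print(spec.contains("1.8.1+cu111"))  # True

# ... but among all matching candidates the resolver prefers the latest one,
# and 1.10.0 sorts above 1.8.1+cu111 (the local label only places it just
# above plain 1.8.1, not above newer releases).
candidates = [Version("1.8.1"), Version("1.8.1+cu111"), Version("1.10.0")]
print(max(c for c in candidates if c in spec))  # 1.10.0

So once the library's torch>=1.8.1 requirement is resolved against the index, 1.10.0 is simply the preferred candidate. In practice, pinning the exact build (torch==1.8.1+cu111 on the command line, or in a constraints file passed with -c) is the usual way to keep a later pip install from swapping it out, assuming the index that serves the +cu111 wheels is still reachable.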

Related

Pip package version conflicts despite seemingly matching ranges

When using pip install -r requirements.txt, I get ERROR: Cannot install -r requirements.txt (line 3), [...] because these package versions have conflicting dependencies.
And further:
The conflict is caused by:
tensorflow 2.11.0 depends on protobuf<3.20 and >=3.9.2
tensorboard 2.11.0 depends on protobuf<4 and >=3.9.2
wandb 0.13.5 depends on protobuf!=4.0.*, !=4.21.0, <5 and >=3.12.0
I don't see any conflicts in these ranges - every version in [3.12.0, 3.20) should be fine. Can someone explain the problem?
Update: As a workaround, I removed all version restrictions and only specified the names of the libraries in the requirements.txt file. Now it works. But I still don't see a problem with the above ranges, so I'll leave the question open.
I would suggest that, rather than using a range of versions, you use a specific version you know works. That way, there won't be any problems.
I think that one of the versions of the dependencies is incompatible with the main module, and since it is within the range of versions you ask for, pip tries to install it and fails to do so since it is incompatible.
Also, pip normally handles dependencies automatically.
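For what it's worth, the declared ranges really do overlap; a quick sanity check with the packaging library (a sketch using the specifiers from the error message) confirms it, which suggests the conflict pip reports actually involves some other pinned requirement in the file:

from packaging.specifiers import SpecifierSet

combined = (
    SpecifierSet(">=3.9.2,<3.20")                    # tensorflow 2.11.0
    & SpecifierSet(">=3.9.2,<4")                     # tensorboard 2.11.0
    & SpecifierSet(">=3.12.0,<5,!=4.0.*,!=4.21.0")   # wandb 0.13.5
)

# Any protobuf version in [3.12.0, 3.20) satisfies all three ranges:
print(list(combined.filter(["3.11.0", "3.19.6", "3.20.0"])))  # ['3.19.6']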

Pip wheel collects 2 versions of a package then pip install gets a conflict

We use a pipeline that first runs pip wheel to collect all the packages needed for the project, and then builds a Docker image that runs pip install on the collected wheels.
The issue I am encountering is that when calling pip wheel, pip collects 2 different versions of a package. This started occurring once a new version of the package became available.
The project has a requirement for an internal library ecs-deployer==10.1.2 and that library has in turn a requirement in the form of: elb-listener>=3.2.1+25,<4
The relevant output of pip wheel with the verbose option says:
Collecting elb-listener>=3.2.1+25,<4
Created temporary directory: /tmp/pip-unpack-zr930807
File was already downloaded /home/user/path/dist/elb_listener-3.2.2+26-py3-none-any.whl
Added elb-listener>=3.2.1+25,<4 from https://internal-repository.com/path/elb_listener/3.2.2%2B26/elb_listener-3.2.2%2B26-py3-none-any.whl#md5=foo (from ecs-deployer==10.1.2->service==1.0.0) to build tracker '/tmp/pip-req-tracker-1tz9t5ls'
Removed elb-listener>=3.2.1+25,<4 from https://internal-repository.com/path/elb_listener/3.2.2%2B26/elb_listener-3.2.2%2B26-py3-none-any.whl#md5=blabla (from ecs-deployer==10.1.2->service==1.0.0) to build tracker '/tmp/pip-req-tracker-1tz9t5ls'
And also:
Collecting elb-listener>=3.2.1+25,<4
Created temporary directory: /tmp/pip-unpack-yfnxim_u
File was already downloaded /home/user/path/dist/elb_listener-3.2.3+27-py3-none-any.whl
Added elb-listener>=3.2.1+25,<4 from https://internal-repository.com/path/elb_listener/3.2.3%2B27/elb_listener-3.2.3%2B27-py3-none-any.whl#md5=bar (from ecs-deployer==10.1.2->service==1.0.0) to build tracker '/tmp/pip-req-tracker-1tz9t5ls'
Then when the pip install is called I get this:
ERROR: Cannot install elb-listener 3.2.2+26 (from /opt/elb_listener-3.2.2+26-py3-none-any.whl) and cad-aws-elb-listener-target-group-builder 3.2.3+27 (from /opt/elb_listener-3.2.3+27-py3-none-any.whl) because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested elb-listener 3.2.2+26 (from /opt/elb_listener-3.2.2+26-py3-none-any.whl)
The user requested elb-listener 3.2.3+27 (from /opt/elb_listener-3.2.3+27-py3-none-any.whl)
We use pip 20.2.3 with the option --use-feature=2020-resolver
Is it normal that pip wheel collects several versions of the same package?
If so, can I indicate in any way to either pip wheel to only collect one of the versions or to pip install to only use the latest version?
If not, is there any way to solve this problem? I guess changing the requirement to elb-listener>=3.2.1+27,<4 would solve it, but we don't have direct access to that library and it would take a while for the other team to change it.
As per sinoroc's comment, upgrading Python to 3.10 and pip to 21.2.4 solved this particular issue.
As far as I understand, "local version identifiers" such as 3.2.1+25 are quite unusual; apparently they are not meant to be used anywhere public (like PyPI), and that might be the reason for all the trouble here. I am really not sure how well they are supported by Python packaging tools, and maybe they confuse the dependency resolution.
Local version identifiers SHOULD NOT be used when publishing upstream projects to a public index server, but MAY be used to identify private builds created directly from the project source. Local version identifiers SHOULD be used by downstream projects when releasing a version that is API compatible with the version of the upstream project identified by the public version identifier, but contains additional changes (such as bug fixes). As the Python Package Index is intended solely for indexing and hosting upstream projects, it MUST NOT allow the use of local version identifiers.
-- "Local version identifiers" section of _PEP 440

Pip install not matching a development version of a package

ERROR: Could not find a version that satisfies the requirement my-package==2021.4.* (from versions: 0.0.2, 2021.4.1.dev44+gd452819a91.d20210528, 2021.5.26)
This looks weird, right? Why doesn't the second one in the list match???
~=2021.4.1 doesn't work either. ~=2021.4 installs 2021.5.26.
The only way I found for it to work is to spell it out completely: ==2021.4.1.dev44+gd452819a91.d20210528
Why don't the match operators work?
By default, pip ignores pre-release and development versions. Per the pip documentation on pre-release versions:
Starting with v1.4, pip will only install stable versions as specified by pre-releases by default. If a version cannot be parsed as a compliant PEP 440 version then it is assumed to be a pre-release.
If a Requirement specifier includes a pre-release or development version (e.g. >=0.0.dev0) then pip will allow pre-release and development versions for that requirement. This does not include the != flag.
If you want pip to also match pre-release and development versions against the version specifier, you can pass the --pre flag when invoking pip install, e.g. pip install --pre "my-package==2021.4.*".
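The same behaviour can be reproduced outside pip with the packaging library (a small sketch; the version string is the one from the error message):

from packaging.specifiers import SpecifierSet

spec = SpecifierSet("==2021.4.*")
dev_version = "2021.4.1.dev44+gd452819a91.d20210528"

# Dev releases are excluded by default, even though the prefix matches ...
print(spec.contains(dev_version))                    # False
# ... and accepted once pre-releases/dev releases are allowed (what --pre does):
print(spec.contains(dev_version, prereleases=True))  # True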

PIP install not recognizing versions

I have a custom PyPI server that I am installing packages from. I am attempting to upgrade from version 0.0.1 to a newer version of my own custom module, but pip is not detecting the later version. When I do pip install 'mymodule>=17' I see:
Could not find a version that satisfies the requirement mymodule>=17
(from versions: 17.0828.222133-e1e0fd9, 17.0828.222305-e1e0fd9,
17.830.210154-e1e0fd9, 0.0.1)
Notice that the versions show up, but pip never picks the 17.x versions with the git SHA on the end. Ideas? Why would this be?
Due to the hyphen, 17.0828.222133-e1e0fd9 and the like are not valid versions as defined in PEP 440. As a result, pip's internals treat them as "legacy versions", which sort lower than all valid versions. Hence, as far as pip is concerned, these versions are not greater than 17.
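A small check with the packaging library illustrates both halves of this (the second version string is hypothetical: it encodes the same git SHA as a PEP 440 local version label instead of appending it with a hyphen):

from packaging.version import Version, InvalidVersion
from packaging.specifiers import SpecifierSet

# The hyphenated form does not parse as a PEP 440 version at all
# (recent packaging releases reject it; pip's old fallback treated it
# as a "legacy version" that sorts below every valid version):
try:
    Version("17.0828.222133-e1e0fd9")
except InvalidVersion as exc:
    print("rejected:", exc)

# Encoding the SHA as a local version label instead is valid and matches >=17:
print(Version("17.0828.222133+e1e0fd9") in SpecifierSet(">=17"))  # True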

How to handle changing the format of a PyPi version number

My project Pyrr was previously using versions that were datestamps.
The last datestamped version was:
version='20130321'
I want to move to a proper major.minor.micro format.
I've uploaded a new package to PyPI in this format.
version='0.1.0'
When I pip install pyrr I still get the 20130321 version.
$ yolk -V pyrr
pyrr 0.1.0
$ pip install pyrr
Downloading/unpacking pyrr
Downloading pyrr-20130321.tar.gz
<snip>
PyPI has the other versions marked as hidden and 0.1.0 as the only version not marked hidden.
What do I have to do to get pip / PyPI to download the 0.1.0 version instead of the older date-stamped versions?
In 20130321 the whole date stamp is the major version, which is obviously higher than 0, therefore version 20130321 is considered the latest version.
The easiest way to fix this would be to delete the outdated versions using the web interface.
If the older versions should still exist, you could download them and re-upload them under a version number that fits the new scheme, e.g. 0.0.20130321.
If people depend on your package without a version, they wouldn't notice the new versioning system.
If people do depend on a specific version, they would have to change their version dependency. This could be considered annoying, but it is inevitable and it's a small change for them.
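To see the ordering that causes this, and the escape hatch PEP 440 now defines for exactly this kind of scheme change (version epochs), here is a small sketch with the packaging library; the deletion/re-upload approach above was the practical fix at the time:

from packaging.version import Version

# The whole date stamp is compared as one huge release number:
print(Version("20130321") > Version("0.1.0"))    # True

# A PEP 440 epoch ("1!") makes the new scheme sort above the old date stamps:
print(Version("1!0.1.0") > Version("20130321"))  # True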