Match versions between packages in setup.py install_requires - python

I have a package with setup.py which will depend on two packages, tensorflow and tensorflow-probability.
Q:
How do I make sure the versions of the above two packages will always match when I install my package? My package currently requires tensorflow>=2.6.2, and I want to add a tensorflow-probability requirement pinned to tensorflow's MAJOR.MINOR version.
The tensorflow-probability website has the following clause:
https://www.tensorflow.org/probability/install
Note: Since TensorFlow is not included as a dependency of the TensorFlow Probability package (in setup.py), you must explicitly install the TensorFlow package (tensorflow or tensorflow-gpu). This allows us to maintain one package instead of separate packages for CPU and GPU-enabled TensorFlow.
This means it's up to the installer to install the correct tensorflow package version, but what if this is a setup.py script instead of a human?
I am not finding the syntax to do this in PEP 440 – Version Identification and Dependency Specification.
Dependency Management in Setuptools: Platform specific dependencies shows that it's possible to have conditional dependencies based on operating system platform.
My best guess is to augment setup.py with logic to:
* Get the currently installed version of `tensorflow` on the local system, or the latest version that would be installed from PyPI.
* Isolate the MAJOR.MINOR component of the fetched version.
* Use that version in the requirement string:
install_requires=[
    'tensorflow>=2.6.2',
    f'tensorflow-probability=={tensorflow_version}.*'
]
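A minimal sketch of those steps, assuming Python 3.8+ (for importlib.metadata) and that setup.py runs where tensorflow may already be installed; the helper names are mine, not standard setuptools API:

```python
# Sketch: pin tensorflow-probability to the MAJOR.MINOR of the locally
# installed tensorflow. Helper names (pin_to_major_minor, tfp_requirement)
# are hypothetical; "==X.Y.*" is valid PEP 440 wildcard matching syntax.
from importlib.metadata import version, PackageNotFoundError  # Python 3.8+


def pin_to_major_minor(tf_version: str) -> str:
    """Turn e.g. '2.6.2' into 'tensorflow-probability==2.6.*'."""
    major, minor = tf_version.split(".")[:2]
    return f"tensorflow-probability=={major}.{minor}.*"


def tfp_requirement() -> str:
    try:
        return pin_to_major_minor(version("tensorflow"))
    except PackageNotFoundError:
        # tensorflow is not installed yet: leave tensorflow-probability
        # unpinned rather than guessing what PyPI will serve.
        return "tensorflow-probability"


install_requires = [
    "tensorflow>=2.6.2",
    tfp_requirement(),
]
```

This list would then be passed to setup(install_requires=install_requires). One caveat: under PEP 517 isolated builds, the build environment will not see the user's installed tensorflow, so this inspection only helps for non-isolated installs.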

Related

How to setup.py my package that depends on PyTorch

I am creating a Python package that depends on PyTorch. PyTorch's installation command is as follows (from https://pytorch.org/):
pip3 install torch==1.8.2+cu102 torchvision==0.9.2+cu102 torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
Question 1: How should I prepare setup.py?
I read Equivalent for `--find-links` in `setup.py`, which says we could add the link to the dependency_links list, but that mechanism has been unsupported since 2019.
Question 2: How to programmatically decide which version to install? (CPU, GPU, CUDA version)
The command above is for a GPU-enabled machine with CUDA 10.2. But if the machine doesn't have GPU, one would use:
pip3 install torch==1.8.2+cpu torchvision==0.9.2+cpu torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
Is it possible for my package's setup.py to automatically identify the user's system and modify the install_requires list accordingly?
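As a sketch (not a robust solution): setup.py can probe the system at install time, for example by treating the presence of nvidia-smi on PATH as a rough proxy for a CUDA-capable machine. The version pins below follow the question; note that pip cannot fetch the +cu102 / +cpu local-version wheels from PyPI without the -f URL, so users would still need to supply --find-links (or an extra index) themselves:

```python
# Sketch: pick CPU vs CUDA 10.2 builds of the PyTorch LTS packages based on
# whether nvidia-smi is on PATH (a rough proxy for a usable NVIDIA GPU).
import shutil


def torch_requirements() -> list:
    """Return install_requires entries for torch/torchvision/torchaudio."""
    suffix = "+cu102" if shutil.which("nvidia-smi") else "+cpu"
    return [
        f"torch==1.8.2{suffix}",
        f"torchvision==0.9.2{suffix}",
        "torchaudio==0.8.2",  # torchaudio wheels carry no local version tag
    ]
```

A more common workaround is to depend on plain torch (which resolves to a default build on PyPI) and document the exact pip3 command for users who need a specific CUDA variant.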

The difference between opencv-python and opencv-contrib-python

I was looking at the Python Package Index (PyPI) and noticed two very similar packages, opencv-contrib-python and opencv-python, and wondered what the difference was. I looked at them and they had the exact same description and version numbers.
As per the PyPI documentation:
There are four different packages (see options 1, 2, 3 and 4 below):
Packages for standard desktop environments:
Option 1 - Main modules package: pip install opencv-python
Option 2 - Full package (contains both main modules and contrib/extra modules): pip install opencv-contrib-python (check the contrib/extra modules listing in the OpenCV documentation)
Packages for server (headless) environments:
Option 3 - Headless main modules package: pip install opencv-python-headless
Option 4 - Headless full package (contains both main modules and contrib/extra modules): pip install opencv-contrib-python-headless
Do not install multiple different packages in the same environment
OpenCV has two builds for each version: the "regular" one, which is functional and well tested, and the build with the extra components (the contrib package). On their GitHub page they state:
This repository is intended for the development of so-called "extra" modules, contributed functionality. New modules quite often do not have stable API, and they are not well-tested. Thus, they shouldn't be released as a part of the official OpenCV distribution, since the library maintains binary compatibility, and tries to provide decent performance and stability.
The contrib package also includes several non-free computer vision algorithms (for features), such as SURF, BRIEF, CenSurE, FREAK, LUCID, DAISY, BEBLID, TEBLID.

How does pip wheel resolve transitive dependencies?

When I run pip wheel sentry-sdk it downloads the following wheel files:
certifi-2020.6.20-py2.py3-none-any.whl
sentry_sdk-0.18.0-py2.py3-none-any.whl
urllib3-1.25.10-py2.py3-none-any.whl
Where sentry_sdk-0.18.0-py2.py3-none-any.whl is the lib I actually want to use, and the other ones are transitive dependencies required by this lib to work. I understand that the files come from PyPI; however, what I do not understand is how pip wheel chooses the versions of the aforementioned transitive dependencies.
More Context
My underlying problem is that the resolved version of urllib3 clashes with another one already added to the pex file of the project I'm working on (I'm using Bazel to generate the pex). I'm considering downgrading the version of urllib3 to match my project's existing one. Looking at the setup.py of sentry-sdk on GitHub, it only requires urllib3 to be greater than 1.10.0 ("urllib3>=1.10.0"), so I think the downgrade would work, but I wanted to be sure, to avoid production crashes.
Thanks
The current version of pip (as of 2020-10-13) does not have a real dependency resolver; it picks the first constraint greedily (so if urllib3 is encountered unbounded first, it will pick the latest version, even if a later package has a more restrictive requirement).
This is being changed in pip: you can enable the new resolver as an opt-in in pip>=20.2, and it will become the default in the future (later this year).
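To double-check a planned downgrade against sentry-sdk's declared floor of urllib3>=1.10.0, a quick stdlib-only sanity check can be sketched like this (assumption: plain X.Y.Z release versions, no pre-release or local tags, so naive tuple comparison is enough):

```python
# Sketch: does a candidate urllib3 version satisfy the ">=1.10.0" floor
# that sentry-sdk declares in its setup.py?
def version_tuple(v: str) -> tuple:
    """'1.25.10' -> (1, 25, 10); only handles plain numeric versions."""
    return tuple(int(part) for part in v.split("."))


def satisfies_floor(candidate: str, floor: str = "1.10.0") -> bool:
    return version_tuple(candidate) >= version_tuple(floor)
```

Here satisfies_floor("1.25.10") is True while satisfies_floor("1.9.1") is False; for anything fancier (pre-releases, epochs), the packaging library's SpecifierSet is the proper tool.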

cannot install tensorflow-text using pip despite having tensorflow 2.0.0-beta1 installed

My tensorflow 2.0.0-beta1 runs normally, but I cannot install tensorflow-text using the command pip install tensorflow-text (as described on the tensorflow page). I can find it using pip search tensorflow-text, but I get an error:
ERROR: Could not find a version that satisfies the requirement tensorflow-text (from versions: none)
There are no special requirements for this package (e.g. a specific Python version).
I am running on Windows, using conda, Python 3.6.9.
Update
The first release candidate of 2.4.0 was published today, and it features Windows wheels for the first time: 2.4.0rc0 on PyPI. Note that only the wheels for Python 3.6 and 3.7 are working properly at the moment. Install via e.g.
> py -3.7 -m pip install tensorflow-text==2.4.0rc0
Original answer
At the time of writing this, tensorflow-text is not available for Windows yet.
Windows is something we do wish to add. We've had some difficulties getting a working package though, which is why it is not available yet. The difference between this library and tensorflow-probability is that we make use of custom ops written in C++, and building those shared libraries to work well with TensorFlow on Windows has had issues; plus, the lengthy build times on Windows have made iterating on these issues slow. While the next beta release (this week) will not include Windows, we would like for the next release to include it.
Source.

including a python package dependency as an executable

Currently my python package does not have a dependency on the wmi package and it can be easily installed via
pip install mypackage
If I add a dependency on the wmi package, this will likely fail: when I try installing wmi through pip, I encounter errors since I do not have Visual Studio 2008 installed... and I only managed to get it installed using the binary distribution.
Is it possible for me to include and install the binary release of wmi in my package?
The main concern is that if people fail to install my package via the pip command, they just avoid using my package.
The first thing to consider is why you are considering adding the wmi package: since it is MS-Windows specific, if you use it, or anything depending on it, your package will also be MS-Windows specific.
Are there other ways to achieve what you are trying to do that remain cross-platform? If not, and you really have to use it, then you could include a prerequisite statement in the documentation, and ideally in setup.py, telling people that they need to have an installed and working copy of wmi, hopefully with a pointer to the binary distributions.
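If the dependency really is Windows-only, one way to encode that prerequisite in setup.py itself is a PEP 508 environment marker, so installs on other platforms skip wmi entirely (supported by modern setuptools; the surrounding names here are illustrative):

```python
# Sketch: declare wmi only for Windows via a PEP 508 environment marker.
# This list would be passed to setuptools.setup(install_requires=...);
# on Linux/macOS the marker evaluates to false and wmi is never installed.
install_requires = [
    'wmi; platform_system == "Windows"',
]
```

This does not remove the need for a working wmi build on Windows itself, but it keeps the package installable everywhere else.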
The other way to go, if you are on a late enough version of Python, is to build and distribute your package as Python wheels. Wheels allow the inclusion of compiled C elements without relying on the presence of a compiler on the target system; see PEP 427 and here for some more information.
Creating Wheels:
You need to be running Python 2 > 2.6 or Python 3, pip >= 1.4, and setuptools >= 0.8.
Basically, assuming that you have a setup.py that will create your (source) distribution for upload with:
python setup.py sdist
then you can create a binary distribution that should contain all the dependencies of your package for your current python version with:
python setup.py bdist_wheel
This will build a distribution wheel that includes the .pyc files and the binary files from the required packages.
But: you need to do this once for each version of Python that you are planning to support (virtualenv is magic for this), and on each platform if you are also planning to support 64-bit or Mac. Unless, of course, you manage to make a pure-Python package that will run, without 2to3, under both Python 2 and 3, in which case you can build a universal wheel; obviously you cannot do this if you require C extensions.
For more information on wheels see Wheel - Read The Docs.
