We have multiple versions of our package: package1 and package1-unstable, similar to tensorflow and tf-nightly. These are different packages on PyPI, but they install the same module. This causes issues when both packages are installed, as they overlap and write into the same directories in the site-packages folder. When one is then removed, the other package stays, but most of its module code is now gone, resulting in an even worse, dysfunctional state.
What is the cleanest way to detect colliding packages?
We can hardcode that package1 and package1-unstable are mutually incompatible. We use setup.py for the installation.
My thinking was to use a wrapper class around the install command class.
from setuptools.command.install import install

class Install(install):
    def run(self):
        # self.distribution.get_name() is the name passed to setup()
        if self.distribution.get_name() == "package1":
            self.ensure_not_installed("package1-unstable")
        else:
            self.ensure_not_installed("package1")
        install.run(self)

    def ensure_not_installed(self, pkg_name):
        """Raises an error when pkg_name is installed."""
        ...

...
cmdclass={'install': Install},
This approach seems to work as a general direction. However, I'm not yet sure how to exhaustively list the installed packages. I'm testing the approaches with both pip install . and python setup.py install.
A couple of approaches that I tried are:
use site.getsitepackages(), iterate through the directories and check for the existence of the given package directories (i.e. package1-{version}.dist-info or package1-unstable-{version}.dist-info) - this can work, but it feels hacky/manual, and I'm not confident yet that it's going to work in a portable way across all OSes and Python distributions
try to call pip list or pip show package1 from within setup.py - this does not seem to work when the setup script is executed via pip install ., as pip itself is not on the import path
pkg_resources.working_set works with python setup.py install but not with pip install ., probably for similar reasons as why calling pip doesn't work: under pip install . the working set contains only wheel and setuptools (the isolated build environment)
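For reference, here is how ensure_not_installed could be sketched with importlib.metadata from the standard library (Python 3.8+). It carries the same caveat as the approaches above: under pip's isolated builds it inspects the build environment, not the environment being installed into.

```python
from importlib import metadata

def ensure_not_installed(pkg_name):
    """Raise an error when a distribution named pkg_name is installed."""
    try:
        version = metadata.version(pkg_name)
    except metadata.PackageNotFoundError:
        return  # not installed, nothing to do
    raise RuntimeError(
        f"{pkg_name} {version} is already installed and conflicts with this "
        f"package; remove it first with: pip uninstall {pkg_name}"
    )
```

In the command class above this would be a method taking self as well.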
In the general case you can't implement this as part of setup.py: pip will build your package into a wheel, cache it, and then never invoke setup.py again after that. You're probably best off with some sort of post-installation check which is run in a different way (Makefile, tox.ini, etc.).
You can disable isolated builds by either
pip install --no-build-isolation .
or
PIP_NO_BUILD_ISOLATION=0 pip install .
However, some package installs rely on being invoked in an isolated environment.
Other times, the packaging routine relies on a pyproject.toml.
Note that in non-isolated builds pip will not install its [build-system] requirements for you.
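With isolation disabled, pip will not fetch the [build-system] requirements from pyproject.toml automatically, so they have to be installed by hand first. A minimal sketch, assuming a typical setuptools backend:

```shell
# install the build backend and anything listed under [build-system].requires
python -m pip install setuptools wheel

# then build and install without an isolated build environment
python -m pip install --no-build-isolation .
```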
Related
In my requirements.txt I have packages defined in following manner:
Django ~= 2.2.0
It means that when I run pip install -r requirements.txt, pip will find the latest available 2.2.x version and install it along with all dependencies.
What I need is a requirements-formatted list of all packages, with explicit versions, that would be installed, but without actually installing any packages. So example output would be something like:
Django==2.2.23
package1==0.2.1
package2==1.4.3
...
So in other words I'm looking for something like pip freeze results but without installing anything.
pip-compile is what you need!
Doc: https://github.com/jazzband/pip-tools
python -m pip install pip-tools
pip-compile requirements.txt --output-file requirements-all.txt
The pip-compile command lets you compile a pinned requirements.txt file from your dependencies; this way you can pip install your dependencies and always get the same environment.
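To illustrate, with the Django requirement from the question as input, the compiled file might look something like this (the exact pins and transitive dependencies here are hypothetical):

```
# requirements-all.txt, produced by pip-compile (pins are hypothetical)
django==2.2.23
    # via -r requirements.txt
pytz==2021.1
    # via django
sqlparse==0.4.1
    # via django
```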
TL;DR
Try pipdeptree or pip-tree.
Explanation
pip, contrary to most package managers, doesn't have a big dependency graph to look up. What it does instead is let arbitrary setup code execute, which pulls in the dependencies automatically. This means that, for example, a package could manage its dependencies in another way than putting them in requirements.txt (see fastai for an example of a project that handles its dependencies differently).
So, theoretically, there is no other way to see all the dependencies than to actually run an install in an isolated environment, see what was pulled in, and then delete the environment (because it could potentially be the same part of the code that does the installation and that brings in the dependencies). You could actually do that with venv.
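A minimal sketch of that throwaway-venv approach (the function name is mine; it assumes pip can be bootstrapped into the new environment via ensurepip):

```python
import os
import subprocess
import tempfile
import venv
from pathlib import Path

def freeze_after_install(requirements):
    """Create a throwaway venv, install requirements, return `pip freeze` output."""
    with tempfile.TemporaryDirectory() as tmp:
        venv.create(tmp, with_pip=True)
        bindir = "Scripts" if os.name == "nt" else "bin"
        python = Path(tmp, bindir, "python.exe" if os.name == "nt" else "python")
        if requirements:
            subprocess.run([str(python), "-m", "pip", "install", *requirements],
                           check=True)
        result = subprocess.run([str(python), "-m", "pip", "freeze"],
                                check=True, capture_output=True, text=True)
        return result.stdout

# e.g. freeze_after_install(["Django~=2.2.0"]) would return the pinned list
```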
In practice, tools like pipdeptree or pip-tree fetch the dependencies based on some standardization of the requirements metadata (most packages separate the dependency declarations from the installation code, and actually let pip handle both).
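As an aside for the question above: newer pip releases (22.2+) can resolve without installing via pip install --dry-run --report report.json -r requirements.txt. Extracting freeze-style pins from that report could look like this (the layout follows pip's installation-report format; the sample data in the comment is made up):

```python
import json
from pathlib import Path

def pins_from_report(report):
    """Convert a pip installation report dict into pip-freeze-style pins."""
    return sorted(
        f"{item['metadata']['name']}=={item['metadata']['version']}"
        for item in report.get("install", [])
    )

def pins_from_report_file(path):
    """Same, but reading the JSON file written by `pip install --report`."""
    return pins_from_report(json.loads(Path(path).read_text()))

# pins_from_report({"install": [{"metadata": {"name": "pytz", "version": "2021.1"}}]})
# yields ["pytz==2021.1"]
```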
Today I attempted to remove a file after my package (a Python wheel) was installed via pip with the -t/--target option.
Post-install script with Python setuptools
I am subclassing install in my setup.py like this:
import os
from setuptools.command.install import install

class PostInstallCommand(install):
    """Post-installation for installation mode."""
    def run(self):
        install.run(self)
        # here I am using
        p = os.path.join(self.install_libbase, "myPackage/folder/removeThisPyc.pyc")
        if os.path.isfile(p):
            os.unlink(p)
        # there is also self.install_platlib and
        # self.install_purelib, which seem to be used by pip's distutils scheme.
        # Have not tested those yet.
When running
python setup.py install
this works: the file is removed upon install.
But through
pip install path-to-my-wheel.whl
this does not work and the file is still there.
pip install -t /target/dir path-to-my-wheel.whl
does not work either...
So the question is: what is pip doing with distutils and/or setuptools, and how can I make this work?
Another thing I noticed is that pip does not seem to print anything my setup.py prints, even though I am running in verbose mode.
Is there a way to see the full output from Python instead of only the "pip" stuff?
Reading educates:
http://pythonwheels.com/
2. Avoids arbitrary code execution for installation. (Avoids setup.py)
As I am using wheels, and wheels won't execute setup.py, my concept of doing this is rubbish.
https://github.com/pypa/packaging-problems/issues/64
I guess this falls somewhere between deployment and installation, though I would obviously count my little change as installation...
Is there a way to avoid pyc file creation upon a pip install whl ?
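On that last point: pip does have a flag that skips byte-compiling .py files at install time (it will not, however, remove .pyc files that are shipped inside the wheel itself):

```shell
pip install --no-compile path-to-my-wheel.whl
```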
Two options in setup.py develop and install are confusing me. According to this site, using develop creates a special link to site-packages directory.
People have suggested that I use python setup.py install for a fresh installation and python setup.py develop after any changes have been made to the setup file.
Can anyone shed some light on the usage of these commands?
python setup.py install is used to install (typically third party) packages that you're not going to develop/modify/debug yourself.
For your own stuff, you want to first install your package and then be able to frequently edit the code without having to re-install the package every time — and that is exactly what python setup.py develop does: it installs the package (typically just a source folder) in a way that allows you to conveniently edit your code after it’s installed to the (virtual) environment, and have the changes take effect immediately.
Note: It is highly recommended to use pip install . (regular install) and pip install -e . (developer install) to install packages, as invoking setup.py directly will do the wrong things for many dependencies, such as pull prereleases and incompatible package versions, or make the package hard to uninstall with pip.
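The two pip equivalents side by side, run from the project root that contains setup.py:

```shell
# regular install: copies the package into site-packages
pip install .

# development ("editable") install: links site-packages back to the source tree,
# so code edits take effect without reinstalling
pip install -e .
```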
Update:
The develop counterpart for the latest python -m build packaging approach is an editable install: pip install -e .
From the documentation: develop will not install the package but will create a .egg-link file in the deployment directory that points back to the project source code directory.
So it's like installing, but instead of copying anything to site-packages it adds a link (the .egg-link acts as a multi-platform symbolic link).
That way you can edit the source code and see the changes directly, without having to reinstall every time you make a little change. This is useful when you are the developer of that project, hence the name develop. If you are just installing someone else's package, you should use install.
Another thing that people may find useful when using the develop method is the --user option to install without sudo. Ex:
python setup.py develop --user
instead of
sudo python setup.py develop
I have a Python package that I'm distributing with pip. I need to add some custom code to be run at install time:
from setuptools import setup
from setuptools.command.install import install

class CustomInstall(install):
    def run(self):
        install.run(self)
        print "TEST"

setup(
    ...
    cmdclass={'install': CustomInstall},
    ...)
I thought the problem might be pip suppressing stdout: Custom pip install commands not running. But then I replaced print "TEST" with creating a file and writing some text, and that didn't happen either.
It appears that my custom run method is only happening when I create and upload my_package to test PyPI:
python setup.py sdist bdist_wheel upload -r https://testpypi.python.org/pypi
and not when I pip install it:
pip install -i https://testpypi.python.org/pypi my_package
Maybe I am fundamentally not understanding how pip and setuptools work, but that is the opposite of the behavior I expected.
My questions are:
How can I get my CustomInstall class to work?
and
What actually happens when you call pip install?
I've looked at the setuptools docs and the PyPI docs, and I haven't been able to figure it out. It seems like other people have had success with this: Run custom task when call `pip install`, so I'm not sure what's going wrong.
So I'm not sure how much this will help, but I recently dealt with a similar issue, and here's what I learned.
Your custom install code appears to be correct. However, there are more methods than just run that can be overridden. Another useful one is finalize_options, because there you can write code that dynamically changes the parameters of your setup.py (example here).
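To illustrate finalize_options, here is a small sketch (what it does with the resolved option is just an example):

```python
from setuptools.command.install import install

class CustomInstall(install):
    def finalize_options(self):
        # let the base class resolve the install scheme first
        install.finalize_options(self)
        # then the resolved options can be inspected or adjusted dynamically
        print("package will be installed into:", self.install_lib)

    def run(self):
        install.run(self)
```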
This is a very good question to ask. pip install does various things depending on various factors. From where are you installing the package: PyPI or some other package index? How was the package distributed: is it a binary dist (.whl) or a source dist (.tar.gz)? Are you installing the package from a local directory, or from a remote repo via a VCS URL? pip does not necessarily use the same approach for each of these cases. I would recommend using the -vvv flag to see what exactly pip is doing. It may not be running setuptools's install command for whatever reason... Do you have
packages=setuptools.find_packages(),
include_package_data=True
in your setup.py file? Without these lines, pip could be installing your package's metadata but not the package itself.
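For reference, the relevant fragment of such a setup.py would look something like this (name and version are placeholders):

```python
import setuptools

setuptools.setup(
    name="my_package",   # placeholder
    version="0.1.0",     # placeholder
    packages=setuptools.find_packages(),
    include_package_data=True,  # also ship the files listed in MANIFEST.in
)
```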
I was wondering how "yum install package" and "python setup.py install" are used differently in CentOS? I use yum install ... all the time. However, when I try python setup.py install, I always get "this setup.py file couldn't be found", even though its path shows up under echo $PATH, unless I run it from its own directory or use the absolute path.
When you type python setup.py install, your shell will check your $PATH for the python command, and run that. Then, python will be examining its arguments, which are setup.py install. It knows that it can be given the name of a script, so it looks for the file called setup.py so it can be run. Python doesn't use your $PATH to find scripts, though, so it should be a real path to a file. If you just give it the name setup.py it will only look in your current directory.
The source directory for a python module should not, ideally, be in your $PATH.
yum install is a command that will go to a package repository, download all the files needed to install something, and then put them in the right place. yum (and equivalents on other distributions, like apt for Debian systems) will also fetch and install any other packages you need, including any that aren't python modules.
Python has a package manager, too. You may also find using pip install modulename or pip install --user modulename (if you don't have administrative rights) easier than downloading and installing the module by hand. You can often get more recent versions of modules this way, as the ones provided by an operating system (through yum) tend to be older, more stable versions. Sometimes the module is not available through yum at all. pip can't install any extra packages that aren't python modules, though.
If you don't have pip already (it comes with Python3, but might need installing separately for Python2, depending on how it was set up), then you can install it by following the instructions here: https://pip.pypa.io/en/stable/installing/