I am running an automated test suite, and one of the tests needs to install several Python packages with pip to make sure project scaffolds operate correctly.
However, fetching packages from PyPI is quite slow and burns unnecessary time during the test run. It is also a great source of random failures due to network connectivity errors. My plan was to create a cache tarball of the known Python packages that are going to be installed. Then pip could consume packages directly from this tarball, or the tarball could be extracted into a virtualenv for the test run.
The goal is also to make this repeatable, so that the same cache (tarball) is available both on CI and for local development.
Are there any tools or processes for creating a redistributable Python package cache for pip?
Any other ideas on how to do this in a platform-agnostic way? I assume relocatable virtual environments are specific to the target platform?
Use wheel:
pip wheel -r requirements.txt
All requirement wheels are built into the wheelhouse folder (with newer pip versions, pass --wheel-dir wheelhouse explicitly, as the default output directory is the current working directory).
Then, on each test-suite run, you can install them with pip install wheelhouse/*
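A minimal sketch of the full round trip, assuming the wheels are archived as wheelhouse.tar.gz so CI and local runs can share the same cache (the archive name and directory are just examples):

pip wheel --wheel-dir wheelhouse -r requirements.txt                  # build wheels for every requirement
tar czf wheelhouse.tar.gz wheelhouse                                  # archive the cache for redistribution
# later, in the test environment:
tar xzf wheelhouse.tar.gz
pip install --no-index --find-links=wheelhouse -r requirements.txt    # install without touching PyPI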
Your second option is devpi, which works as a PyPI cache.
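If you go the devpi route, the test run points pip at the devpi index instead of PyPI. A rough sketch, assuming a devpi server is already running on its default port 3141 with the standard root/pypi mirror index:

pip install -i http://localhost:3141/root/pypi/+simple/ -r requirements.txt   # packages are served from the devpi cache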
I am trying to edit a Python library and hence am building it from source. Can someone explain what the following instruction does and why this method is different from a normal
'pip install package-name'?
pip install --verbose --no-build-isolation --editable
You can read all the usage options here: https://pip.pypa.io/en/stable/cli/pip_install/
-v, --verbose
Give more output. Option is additive, and can be used up to 3 times.
--no-build-isolation
Disable isolation when building a modern source distribution. Build dependencies specified by PEP 518 must be already installed if this option is used.
This means pip won't install the build dependencies for you, so you have to install them yourself first (if there are any), or the command will fail.
-e, --editable <path/url>
Install a project in editable mode (i.e. setuptools “develop mode”) from a local project path or a VCS url.
Here you have to supply a path or URL argument identifying the project to install.
This information is from the official pip documentation; please refer to it for the full details.
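Putting the flags together, a typical sequence when working on a setuptools-based project from a local checkout might look like the sketch below; the build dependencies shown (setuptools and wheel) are only an assumption, so check the project's pyproject.toml for the actual list:

pip install setuptools wheel                              # install the build dependencies manually, since isolation is disabled
pip install --verbose --no-build-isolation --editable .   # install the local checkout in editable mode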
When making build requirements available, pip does so in an isolated environment. That is, pip does not install those requirements into the user’s site-packages, but rather installs them in a temporary directory which it adds to the user’s sys.path for the duration of the build. This ensures that build requirements are handled independently of the user’s runtime environment. For example, a project that needs a recent version of setuptools to build can still be installed, even if the user has an older version installed (and without silently replacing that version).
In certain cases, projects (or redistributors) may have workflows that explicitly manage the build environment. For such workflows, build isolation can be problematic. If this is the case, pip provides a --no-build-isolation flag to disable build isolation. Users supplying this flag are responsible for ensuring the build environment is managed appropriately (including ensuring that all required build dependencies are installed).
I am trying to upgrade a Pyramid project to Python 3. I am also exploring various options for making the build system more modern.
Currently we are using Python's buildout to set up an instance. The approach is simple:
All the required eggs (including app, which is my package) are listed in a buildout configuration file with their exact versions specified.
Run buildout to get the exact eggs (including third party stuff) from a local package server.
Run the instance using ./bin/pserve config.ini.
For my new app source code that has the Python 3 changes, I am trying to get rid of everything and just use pip instead. This is what I have done so far (in a Docker container):
git clone git@github.com:org/app.git
cd project
# Our internal components are fetched using the `git` directive inside `requirements.txt`.
pip install -r requirements.txt # Mostly from PyPI.
pip install .
It works, but is this the correct way to deploy an application?
Will I be able to convert the entire installation to a simple pip install app, and run it using pserve config.ini, if I do the following (see the sketch after this list):
Upload the latest app egg to my package server.
Sync setup.py and requirements.txt so that pip does the equivalent of pip install -r requirements.txt internally?
Run pip install app.
Copy config.ini to the machine where I am going to install.
Run pserve config.ini.
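Concretely, the install side of those steps would look something like the sketch below; the index URL is made up, and it assumes setup.py lists the internal components in install_requires:

pip install --index-url https://pypi.example.internal/simple app   # pulls app and its dependencies from the internal server
pserve config.ini                                                   # run the instance against the copied configuration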
I wanted to know whether the above approach can be made to work before proceeding with the egg creation, mocking a simple package server, etc. I am not sure if I can really do pip install for a web application, and I think requirements.txt has some significance in this case.
I haven't explored wheels yet, but if the above works, I will try that as well.
Since I am really new to packaging, I would appreciate some suggestions on modernizing my build using the latest tools.
After reading some of the links, like requirements.txt vs setup.py, I think requirements.txt is needed for web apps, especially if you want consistent behaviour for deployment purposes. A project or an application seems to be different from a library, where pip install suffices.
If that is the case, I think the ideal way is to do pip install -r requirements.txt and then pip install app from a local package server without git cloning?
Resources: install_requires vs requirements files
I'm confused about the possibilities of installing external python packages:
install a package locally with pip into /home/chris/.local/lib/python3.4/site-packages
$ pip install --user packagename
install a package globally with pip into /usr/local/lib/python3.4/site-packages
(superuser permission required)
$ pip install packagename
install a package globally with zypper into /usr/lib/python3.4/site-packages
(superuser permission required)
$ zypper install packagename
I use OpenSuse with package-manager zypper and have access to user root.
What I (think I) know about pip is that:
- It just downloads the latest version of a package.
- It doesn't check whether newer versions of already installed packages are available.
- My own packages can be installed in a virtual env.
- It takes more time to download and install than zypper.
- Local or global installation is possible.
The package manager of my system:
- Downloads and installs faster.
- Installs packages only globally.
My question is when and why should I do the installation: pip (local, global) or with zypper?
I've read a lot about this issue but could not answer this question clearly...
The stuff under /usr/lib consists of system packages considered part of the OS. It's likely/possible that OS scripts and services have dependencies on these components. I'd recommend not touching these yourself, nor really using or depending on them for user scripts either, as this will make your app OS- or even OS-version-dependent. Use these if you are writing scripts that run at system level, such as maintenance or admin tasks, although even for those I'd seriously consider using...
Stuff under /usr/local/lib is installed locally for use by any user. System scripts and such won't depend on these (I don't know SuSE myself, though), but other users' scripts might well do, so that needs to be borne in mind when making changes here. It's a shared resource. If you're writing scripts that other users might need to run, develop against this to ensure they will have access to all required dependencies.
Stuff in your home directory is all yours, so do as thou wilt. Use this if you're writing something just for yourself and especially if you might need the scripts to be portable to other boxes/OSes.
There might well be other options that make sense, such as if you're part of a team developing application software, in which case install your team's base dev packages in a shared location but perhaps not /usr/local.
In terms of using zypper or pip, I'd suggest using zypper to update /usr/lib for sure, as it's the specific tool for OS configuration updates. Probably the same goes for /usr/local/lib too, as that's really part of the 'system', but it's really up to you and whichever method makes most sense, e.g. if you needed to replicate the config on another host. For stuff in your home directory it's up to you, but if you decide to move to a new host on a new OS, pip will still be available, and so that environment will be easier to recreate.
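As a quick sanity check of where a given method will put things, you can ask Python and pip directly; a minimal sketch:

python3 -m site --user-site        # prints the per-user site-packages path (~/.local/lib/...)
pip install --user packagename     # installs only for the current user, leaving /usr/lib and /usr/local/lib alone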
I am still relatively new to Python packaging; each time I think I find "the" solution, I am thrown another curve ball. Here is my problem, followed by what I've tried:
I have CentOS and Ubuntu systems with Python 2.7.3 installed that are partitioned from the net, so I have to create an "all in one" package.
The target system does NOT have setuptools, easy_install, pip, virtualenv installed (this is the problem I'm trying to solve here)
The requirements.txt (or setup.py install_requires) is fairly heavy (Flask, etc.) for the application (though really, this isn't the problem).
My packaging sophistication has progressed slowly:
For connected systems, I had a really nice process going with
packaging: python2.7 setup.py sdist
installation: create a virtualenv, untar the distribution, python setup.py install
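For reference, my connected-system flow looks roughly like this (the sdist name myagent-1.0.tar.gz is a made-up example):

python2.7 setup.py sdist                  # on the build machine: produces dist/myagent-1.0.tar.gz
virtualenv venv && . venv/bin/activate    # on the target machine: create and activate a virtualenv
tar xzf myagent-1.0.tar.gz
cd myagent-1.0 && python setup.py install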
For the disconnected system, I've tried a few things. Wheels seem to be appropriate but I can't get to the "final" installation that includes setuptools, easy_install, pip. I am new to wheels so perhaps I am missing something obvious.
I started with these references:
Python on Wheels, this was super helpful but I could not get my .sh scripts, test data, etc... installed so I am actually using a wheel/sdist hybrid right now
Wheel, the Docs, again, very helpful but I am stuck on "the final mile of a disconnected system"
I then figured out I could package virtualenv as a wheel :-) Yay
I then figured out I could package easy_install as a python program :-) Yay, but it depends on setuptools, boo, I can't find how to get these packaged / installed
Is there a reference around for bootstrapping a system that has Python, is disconnected, but does not have setuptools, pip, wheels, virtualenv? My list of things a person must do to install this simple agent is becoming just way too long :/ I suppose if I can finish the dependency chain there must be a way to latch in a custom script to setup.py to shrink the custom steps back down ...
Your process will likely vary according to which platform you are targeting, but in general, a typical way to achieve what you want is to download the packages on an online machine, copy them over to the offline one, and then install them from a file rather than from a URL or repository.
A possible workflow for RPM-based distros may be:
Install python-pip through binary packages (use rpm or yum-downloadonly, to download the package on an online machine, then copy it over and install it on the offline one with rpm -i python-pip.<whatever-version-and-architecture-you-downloaded>).
On your online machine, use pip install --download <pkgname> to download the packages you need.
scp or rsync the packages to a given directory X on your offline machine.
Use pip install --find-links=<your-dir-here> <pkgname> to install packages on your offline machine.
If you have to replicate the process on many servers, I'd suggest you set up your own repositories behind a firewall. In the case of pip, it is very easy, as it's just a matter of telling pip to use a directory as its own index:
$ pip install --no-index --find-links=file:///local/dir/ SomePackage
For RPM or DEB repos it is a bit more complicated (but not rocket science!), but possibly also not that necessary, as you really only ought to install python-pip once.
The pip install --download option that @mac mentioned has been deprecated and removed. The documentation states that pip download should be used instead. So the workflow should be as follows (a combined sketch comes after the list):
Download the python package or installer using your online machine.
Install Python on the offline machine using your package manager's offline method, or the Python installer for Windows.
On the online machine, use pip download -r requirements.txt, where requirements.txt lists the packages you will need, in the proper format.
Use pip install --find-links=<your-dir-here> <pkgname> to install packages on your offline machine.
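Put together, a minimal sketch of that round trip (the directory name is just an example):

pip download -r requirements.txt -d ./offline-packages                      # on the online machine: fetch the packages and their dependencies
# copy ./offline-packages to the offline machine (scp, rsync, removable media, ...)
pip install --no-index --find-links=./offline-packages -r requirements.txt  # on the offline machine: install without hitting any index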
I am running a local pypi server. I can install packages from this server by either specifying it with the -i option of the pip command or by setting the PIP_INDEX_URL environment variable. When I install a package that has prerequisites, setup.py has historically honored the PIP_INDEX_URL environment variable, pulling the additional packages from my local server.
However, on a couple of systems that have been installed recently, it is behaving differently. Running, for instance, python setup.py develop fails because it tries to install prerequisite packages from pypi.python.org.
I have updated all of the related python packages (python, distribute, virtualenv, pip, etc...) on all the systems I'm testing on and continue to see this discrepancy. On my "original" system, setup.py downloads prerequisites from the pypi server specified in my PIP_INDEX_URL environment variable. On the newer systems, I can't seem to make it honor this variable.
What am I missing?
Create a setup.cfg in the same folder as your setup.py with the following content:
[easy_install]
allow_hosts = *.myintranet.example.com
From: http://pythonhosted.org/setuptools/easy_install.html#restricting-downloads-with-allow-hosts
You can use the --allow-hosts (-H) option to restrict what domains EasyInstall will look for links and downloads on.
--allow-hosts=None prevents downloading altogether.
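If you also want easy_install to look at your internal index rather than PyPI in the first place, you can, as far as I know, set index_url in the same section (the server URL below is just an example for an intranet host):

[easy_install]
allow_hosts = *.myintranet.example.com
index_url = http://pypi.myintranet.example.com/simple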
I ran into the same issue. Fundamentally, setup.py is using setuptools which leverages easy_install, not pip. Thus, it ignores any pip-related environment variables you set.
Rather than using python setup.py develop, you can run pip from the top of the package: pip install -e . This produces the same effect.
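A minimal sketch of the combined fix, assuming the local index lives at http://pypi.local/simple (the URL is made up):

export PIP_INDEX_URL=http://pypi.local/simple   # pip honours this variable, unlike easy_install
pip install -e .                                # editable install, pulling prerequisites from the local index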