Python package won't upgrade - python

As part of a deployment procedure, I upload Python source packages (generated with setup.py sdist) to a remote server and install them in a virtualenv using pip install mypackage-1.0.tar.bz2.
This has worked for a long time, both for new installs and for upgrades (specifically, upgrades without a change in the package's version number). For some reason I cannot figure out, it has failed to upgrade the packages since yesterday. No error is reported; the files are just not changed. I'm sure I must be doing something differently, but I can't explain the change in behaviour.
I can upgrade the package with the -U --no-deps flags, but this technique forces the deployment script to differentiate between first installs and upgrades (--no-deps is required because otherwise dependencies would be downloaded from PyPI every time).
Any ideas how I can get a single pip command to do installs and upgrades?

pip install package will only install the package if you don't already have it.
When you want to upgrade an installed package you have to use: pip install -U package
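If it helps, one fixed two-command sequence (a sketch built only from the flags already mentioned, using the tarball name from the question) behaves the same on first installs and on same-version upgrades:
pip install mypackage-1.0.tar.bz2              # no-op if already installed; pulls dependencies on a first install
pip install -U --no-deps mypackage-1.0.tar.bz2 # forces the package files to be refreshed without re-downloading deps
On a fresh machine the first line installs everything and the second is harmless; on an upgrade the first line does nothing and the second refreshes the files even though the version number hasn't changed.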

Related

Import Error: Missing optional dependency 'openpyxl'

I am familiar with using pip to install Python packages, but there is no way to do that in the environment I am working in. We have to call python.exe by its full directory path to run any Python code. Therefore it seemed impossible to use pip install: with no python on the PATH, there is no pip either. How can we install packages without using pip, or install pip via the python.exe file?
Packages like pip can be executed from the python executable using python.exe -m pip install openpyxl. If you don't have sufficient firewall permissions (you mentioned high security), you may not be able to connect to the package servers; that is something you would need to discuss with your admin.
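If python.exe has to be called by its full path, the same -m pip trick still works; the directory below is a placeholder for wherever your interpreter actually lives:
C:\SomePath\Python\python.exe -m pip install openpyxl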

pip cannot uninstall <package>: "It is a distutils installed project"

I tried to install the Twilio module:
sudo -H pip install twilio
And I got this error:
Installing collected packages: pyOpenSSL
Found existing installation: pyOpenSSL 0.13.1
Cannot uninstall 'pyOpenSSL'. It is a distutils installed project and
thus we cannot accurately determine which files belong to it which
would lead to only a partial uninstall.
Anyone know how to uninstall pyOpenSSL?
This error means that this package's metadata doesn't include a list of files that belong to it. Most probably, you have installed this package via your OS' package manager, so you need to use that rather than pip to update or remove it, too.
See e.g. Upgrading to pip 10: It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. · Issue #5247 · pypa/pip for one such example where the package was installed with apt.
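For example, on a Debian/Ubuntu system where pyOpenSSL came in through apt (the package name python-openssl is an assumption here; check with your package manager), updating or removing it would look something like:
sudo apt-get install --only-upgrade python-openssl   # upgrade it via the OS package manager
sudo apt-get remove python-openssl                   # or remove it entirely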
Alternatively, depending on your needs, it may be more productive not to use your system Python and/or its global environment, but to create a private Python installation and/or environment. There are many options here, including virtualenv, venv, pyenv, pipenv, and installing Python from source into /usr/local, $HOME/.local, or /opt/<whatever>.
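As a minimal sketch of the environment route, using the built-in venv module (the path is arbitrary):
python3 -m venv ~/.venvs/twilio        # create a private environment
source ~/.venvs/twilio/bin/activate    # activate it
pip install twilio                     # installs into the virtualenv; no sudo, no system packages touched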
Finally, I must comment on the often-suggested (e.g. at pip 10 and apt: how to avoid "Cannot uninstall X" errors for distutils packages) --ignore-installed pip switch.
It may work (potentially for a long enough time for your business needs), but may just as well break things on the system in unpredictable ways. One thing is sure: it makes the system's configuration unsupported and thus unmaintainable -- because you have essentially overwritten files from your distribution with some other arbitrary stuff. E.g.:
If the new files are binary incompatible with the old ones, other software from the distribution built to link against the originals will segfault or otherwise malfunction.
If the new version has a different set of files, you'll end up with a mix of old and new files which may break dependent software as well as the package itself.
If you change the package with your OS' package manager later, it will overwrite pip-installed files, with similarly unpredictable results.
If there are things like configuration files, differences in them between the versions can also lead to all sorts of breakage.
I had the same error and was able to resolve it using the following steps:
pip install --ignore-installed pyOpenSSL
This will install the latest version of the package over the existing one. Then, if you try the original install again,
pip install twilio
it will work.
Generally, for similar errors, use this format:
pip install --ignore-installed [package name]==[package version]
I just had this error and the only way I was able to resolve it was by manually deleting the offending directory from site-packages.
After doing this you may need to reinstall the packages with --force-reinstall.
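A rough sketch of that manual route, assuming pyOpenSSL is the offending package; the dist-packages paths below are illustrative, so print the real location first:
python -c "import OpenSSL, os; print(os.path.dirname(OpenSSL.__file__))"   # locate the installed package
# remove the package directory and its egg-info metadata by hand
sudo rm -rf /usr/lib/python2.7/dist-packages/OpenSSL \
            /usr/lib/python2.7/dist-packages/pyOpenSSL-0.13.1.egg-info
pip install --force-reinstall pyOpenSSL   # reinstall cleanly afterwards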
Reading the above comments, I understood that package a was installed with conda and the new package b that I was trying to install using pip was causing problems. I was lucky that package b had conda support so using conda to install package b solved the problem.
In my case, I was installing a package from internal git using the following command:
python -m pip install package.whl --force
I was doing this because I didn't want to explicitly uninstall the previous version and just replace it with a newer version. But what it also does is install all the dependencies again. I was getting the error in one of those packages. Removing --force fixed the problem.
I want to add that --ignore-installed also worked for me, and removing --force essentially achieved the same thing in my case.

Tox installs the wrong version of pip to its virtual env

I am using tox to manage some testing environments. I have a dependency (backports.ssl-match-hostname) that I cannot download using the latest version of pip, so I need to revert back to pip 8.0.3 to allow the install to work.
I have included the 8.0.3 version of pip inside my tox.ini file for dependencies.
deps =
    pip==8.0.3
However, when I run
source .tox/py27/bin/activate
and enter the virtual testing environment, and then run
pip --version
I end up with
8.1.2
However, outside of my tox environment, when I run the same command, I get
8.0.3
Is there anything special that tox does when grabbing pip? Why am I not able to specify the version of pip that I want to use as a dependency?
EDIT: to add to this, it seems that I am able to grab the dependency pip==8.0.3, but the other dependencies are still installed by the command launched with pip==8.1.2.
So, I need to be able to grab pip==8.0.3 first, and only then, once it is installed, grab everything else. I'm still unsure why tox is starting with pip==8.1.2.
This was apparently the result of the virtualenv Python package bundling a pre-selected group of packages that it installs into new environments, one of which was the latest and greatest pip.
I don't know if this is the preferred way of doing this, but I found success by running
pip uninstall virtualenv
And then reinstalling with the version that worked
pip install virtualenv==15.0.1
With the "correct" version of virtualenv in place, I was able to run my tox command
source .tox/py27/bin/activate
and see the desired version of pip
pip --version
pip 8.0.3
A workaround for this is here: https://github.com/pypa/pip/issues/3666
Although to make it work I had to write "pip install pip==8.1.1" in my script. So to recap:
Add a pip.sh script to your project:
#!/bin/bash
# pin pip first, then forward all of tox's install arguments to it
pip install pip==8.1.1
pip install "$@"
Add to your tox.ini:
install_command = {toxinidir}/pip.sh {opts} {packages}
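One thing to watch: tox invokes the wrapper directly, so the script has to be executable:
chmod +x pip.sh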
I've recently hit this problem. I'd had it for a while, but it didn't register because I only saw occasional failures with Python 2/3 code. Another way this can happen is if, like me, you switch the virtualenv between different Python versions and don't clean up.
Check the virtualenv's bin (or Scripts) directory to see whether python2 points to python. If the virtualenv is Python 3, this means python2 actually calls Python 3. Vice versa, of course, if the virtualenv is Python 2 and you want to test Python 3 code.
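A quick way to check, assuming a Unix-style layout for the tox environment:
ls -l .tox/py27/bin/python*   # the symlink targets show which interpreter each python/python2/python3 entry resolves to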
New versions of virtualenv reach out to download the latest pip, setuptools, and wheel. You can disable this behaviour when running through tox with the tox-virtualenv-no-download package. See: https://github.com/asottile/tox-virtualenv-no-download#wait-why

Force pip to skip or ignore bad hash on download cache

Wondering if anyone knows a workaround to force pip to either completely skip hash checks or ignore bad sums when installing from a download cache? Install cmd is:
pip.exe install --target=C:\WHERE_I_WANT_INSTALLED --download-cache=C:\MY_DL_CACHE mitmproxy
Mitmproxy requires a specific version of Pillow, and in that specific version there happens to be a defined C function whose signature collides with another function in an include within MinGW x86_64. I'm not changing out my toolchain; as anyone who uses MinGW on Windows knows, it's a disgustingly painful process to find and keep a stable version.
Anyway, I've posted the question as a bug report on pip's GitHub, but I thought I'd pose the question here too. Thanks in advance.
Well, I found the answer in the very last place I would ever think to look: the documentation.
So basically you run a few commands to have pip download everything that is required for what you're trying to install; in this case, it was mitmproxy. So first I grabbed the requirements.txt file for mitmproxy and dropped it into a dir. The commands to download the packages were:
pip install --download C:\MY_SECRET_PATH\mitm\dl-cache six
pip install --download C:\MY_SECRET_PATH\mitm\dl-cache mock
pip install --download C:\MY_SECRET_PATH\mitm\dl-cache itsdangerous
pip install --download C:\MY_SECRET_PATH\mitm\dl-cache cryptography
pip install --download C:\MY_SECRET_PATH\mitm\dl-cache mitmproxy
Now, everything required for mitmproxy is stored in the provided path. We then supply this path and a couple of other flags to the command that installs what we're after, again mitmproxy. To make things more interesting, I'm installing all of this in a custom dir. So that command is as follows:
pip.exe install mitmproxy --no-index --target=C:\MY_SECRET_PATH\mitm --find-links=C:\MY_SECRET_PATH\mitm
So we're basically telling pip to install the selected package and all its deps offline, never checking PyPI and therefore skipping hash checks. You're then obviously free to modify the sources of the packages you've downloaded, as I have.

How to specify install order for python pip?

I'm working with Fabric (0.9.4) + pip (0.8.2) and I need to install some Python modules on multiple servers. All servers have an old version of setuptools (0.6c8), which needs to be upgraded for the pymongo module. Pymongo requires setuptools>=0.6c9.
My problem is that pip starts installation with pymongo instead of setuptools which causes pip to stop. Shuffling module order in requirements file doesn't seem to help.
requirements.txt:
setuptools>=0.6c9
pymongo==1.9
simplejson==2.1.3
Is there a way to specify install order for pip as it doesn't seem to do it properly by itself?
This can be resolved with two separate requirements files but it would be nice if I didn't need to maintain multiple requirements files now or in the future.
Problem persists with pip 0.8.3.
You can just use:
cat requirements.txt | xargs pip install
To allow all types of entries (for example packages from git repositories) in requirements.txt you need to use the following set of commands
cat requirements.txt | xargs -n 1 -L 1 pip install
-n 1 and -L 1 options are necessary to install packages one by one and treat every line in the requirements.txt file as a separate item.
This is a silly hack, but it might just work: write a bash script that reads your requirements file line by line and runs the pip command on each entry.
#!/bin/bash
# install each requirement individually, in file order
while read -r line
do
    pip install "$line" -E /path/to/virtualenv
done < requirements.txt
Sadly the upgrade suggestion won't work. If you read the other details in https://github.com/pypa/pip/issues/24 you will see why.
pip will build all packages first, before attempting to install them. So with a requirements file like the following:
numpy==1.7.1
scipy==0.13.2
statsmodels==0.5.0
The build of statsmodels will fail with the following statement
ImportError: statsmodels requires numpy
The workaround given for manually calling pip for each entry in the requirements file (via a shell script) seems to be the only current solution.
Pymongo requires setuptools>=0.6c9
How do you know? Requires it to build or to install? You don't say which version of Pymongo you were trying to install, but looking at the setup.py file of the current (3.2.2) version, there is no specification of either what Pymongo requires to run setup.py (setup_requires) or what it requires to install (install_requires). Without that information pip can't ensure a specific version of setuptools. If Pymongo requires a specific version of setuptools to run its setup.py (as opposed to requiring setuptools to run the setup function itself), the other problem is that until recently there was no way to specify this. Now there is a specification, PEP 518 – Specifying Minimum Build System Requirements for Python Projects, which should shortly be implemented in pip (Implement PEP 518 support #3691).
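For reference, a minimal sketch of such a declaration as PEP 518 later standardised it, placed in a pyproject.toml next to setup.py (the version pin is purely illustrative):
[build-system]
requires = ["setuptools>=0.6c9", "wheel"]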
As to the order of installation, this was fixed in pip 6.1.0.
From pip install – Installation Order section of pip's documentation:
As of v6.1.0, pip installs dependencies before their dependents, i.e.
in "topological order". This is the only commitment pip currently
makes related to order.
And later:
Prior to v6.1.0, pip made no commitments about install order.
However, without proper specification of requirements by Pymongo it won't help either.
Following on from #lukasrms's solution - I had to do this to get pip to install my requirements one-at-a-time:
cat requirements.txt | xargs -n 1 pip install
If you have comments in your requirements file you'll want to use:
grep -v "^#" requirements.txt | xargs pip install
I ended up running pip inside the virtualenv instead of using "pip -E", because with -E pip could still see the server's site-packages and that obviously messed up some of the installs.
I also had trouble with servers without virtualenvs. Even when I installed setuptools with a separate pip command, pymongo would refuse to install.
I resolved this by installing setuptools separately with easy_install, as this seems to be a problem between pip and setuptools.
snippets from fabfile.py:
env.activate = "source %s/bin/activate" % virtualenv_path

_virtualenv("easy_install -U setuptools")
_virtualenv("pip install -r requirements.txt")

def _virtualenv(command):
    # run the command inside the virtualenv when one is configured, otherwise directly
    if env.virtualenv:
        sudo(env.activate + " && " + command)
    else:
        sudo(command)
I had these problems with pip 0.8.3 and 0.8.2.
Sorry, my first answer was wrong, because I already had setuptools>=0.6c9.
It seems it is not possible, because pymongo's setup.py needs setuptools>=0.6c9, but pip has only downloaded setuptools>=0.6c9 and has not installed it yet.
Someone discussed this in the issue I pointed to before.
I also created an issue about it some weeks ago: Do not run egg_info on each package in the requirements list before installing the previous packages.
Sorry for the noise.
First answer:
Upgrade your pip to version 0.8.3; it has a bugfix related to installation order.
Now if you upgrade, everything works :-)
Check the news here: http://www.pip-installer.org/en/0.8.3/news.html
