I am trying to set up a virtual environment for a repo that requires Python 3.5. I am using Debian, and from what I can tell, Python 3.5 does not have an aptitude package. Some posts I read recommended downloading the 3.5 source code and compiling it.
After running make and make install, python3.5 was installed to /usr/local/bin. I added that directory to the $PATH variable.
Here is where I ran into problems. After I ran:
$ cd project-dir
$ pyvenv env
$ source env/bin/activate
$ pip install -r requirements.txt
I was getting issues with needing sudo to install the proper packages. I ran:
$ which pip
and it turned out that it was still pointing to the /usr/local/bin version of pip.
$ echo $PATH
returned
/home/me/project-dir/env/bin:/usr/local/bin:/usr/bin:/bin: ...
I am assuming that because the /usr/local path came after the virtual environment's path in my PATH variable, it is using that version of pip instead of my virtual environment's.
What would be the best way to run the correct version of pip within the virtualenv? The two options I can think of are moving the binaries over to /usr/bin, or modifying the activate script in my virtual env to place the virtualenv path after /usr/local.
Option 1
You can upgrade pip in a virtual environment manually by executing
pip install -U pip
Option 2
A good method is to upgrade the pip that ships inside the ensurepip package.
python -m ensurepip --upgrade does indeed upgrade the pip version in the system (if it is lower than the version in ensurepip).
You are facing this problem because venv uses ensurepip to add pip into new environments:
Unless the --without-pip option is given, ensurepip will be invoked to
bootstrap pip into the virtual environment.
The ensurepip package won't download from the internet or grab files from anywhere else, because all required components are already included in the package. Doing so would add security flaws and is thus unsupported.
Ensurepip is not designed to give you the newest pip, just "a" pip. To get the newest one, use the manual way at the beginning of this post (Option 1).
To check the ensurepip version, you can type into a Python console:
import ensurepip
print(ensurepip.version())
More findings for further reading:
To upgrade ensurepip manually using files - https://github.com/python/cpython/commit/f649e9c44631c07e707842c42747b651b986dcc4
What's the proper way to install pip, virtualenv, and distribute for Python?
Comprehensive beginner's virtualenv tutorial?
Kesh's answer led me in the right direction.
The problem was that I didn't actually have pip installed in my venv.
It turns out that when I built python3.5 from source, I did not have the libssl-dev package installed. One of the dependencies of ensurepip is Python's ssl module, which didn't get built because libssl-dev was missing.
To fix the problem, I rebuilt Python 3.5 from source with the libssl-dev package installed. The rebuilt Python now included the ssl module, which allowed ensurepip to install pip in my virtual environment.
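Roughly, the rebuild amounts to installing the header package and re-running the build (a sketch; the source directory name depends on the exact 3.5.x tarball you unpacked):
$ sudo apt-get install libssl-dev
$ cd Python-3.5.x          # your unpacked source tree
$ ./configure
$ make
$ sudo make install        # installs to /usr/local by default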
Try installing it locally:
pip install --user -r requirements.txt
which should, I believe, install the packages in a sub-directory of your $HOME directory (which, I would think, your virtual env would set). Otherwise, I think you could just use:
/path/to/virtualenv/bin/pip install -r requirements.txt
Related
Trying to install a different version of Django than the one on the system, it shows me:
Installing collected packages: Django
Found existing installation: Django 1.7.11
Not uninstalling django at /home/user/lib/python2.7, outside environment /home/user/webapps/v2_dev/venv
Successfully installed Django-1.8.19
But in fact the old version is still there.
I tried different commands:
./venv/bin/pip install Django==1.8.11
pip install Django==1.8.11
UPDATED:
When I install my packages it shows:
The required version of setuptools (>=16.0) is not available,
and can't be installed while this script is running. Please
install a more recent version first, using
'easy_install -U setuptools'.
(Currently using setuptools 3.1 (/home/user/lib/python2.7/setuptools-3.1-py2.7.egg))
When I do the upgrade:
venv/bin/pip install --upgrade setuptools
Requirement already up-to-date: setuptools in ./venv/lib/python2.7/site-packages (40.5.0)
I arrived at this post while looking for how to force-install something in a virtualenv despite it already being installed in the global Python. This happens when the virtual env was created with --system-site-packages.
In this situation, for certain packages it may be important to have a local version within the virtualenv, even if for many other packages we can share the global versions. This is the case for pytest, for example. However, pip will refuse to install a package in the virtualenv if it can already find the most recent version in the system site-packages.
The solution is to use pip install --ignore-installed mypackage.
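For example (a minimal sketch, assuming the venv was created with --system-site-packages and pytest is already installed globally):
$ python3 -m venv --system-site-packages venv
$ source venv/bin/activate
(venv) $ pip install --ignore-installed pytest   # installs a local copy inside the venv even though the system already has it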
Instead of installing setuptools and Django like ./venv/bin/pip install ..., try to activate your virtual environment first and install the stuff you need afterwards.
Activating virtual environment:
Go to the folder where your virtual environment is located (typically the root folder of your project) and type one of the two:
source venv/bin/activate (Unix-based systems)
venv\Scripts\activate (Windows)
This will ensure that you are not mixing packages installed in different environments.
Forcing reinstall of the packages:
A simple upgrade can be done by adding --upgrade or -U.
Forcing a reinstall of the packages can be done by adding --force-reinstall.
In your case (once the environment is activated):
python -m pip install -U --force-reinstall setuptools Django
Step by step:
Deactivate and delete the old virtual environment
Create new environment using python -m virtualenv venv (python 2) or python -m venv venv (python 3)
python above is the interpreter you want to use in your project. That's the only place where you might want to use, for example, python3 or an absolute path instead. Later, use the commands as written.
source venv/bin/activate
Activating the virtual environment
python -m pip install -U pip
If you have an issue with ImportError: No module named _internal, then you are probably using an old version of pip. The issue is described here.
python -m pip install -U --force-reinstall -r requirements.txt
-U --force-reinstall is a bit of overkill in the case of a fresh environment, but it will do no harm
Go to the place where your manage.py is located and start the server using python manage.py runserver
The problem was with the Webfaction VPS.
An empty file named sitecustomize.py is needed in the /home/username/webapps/appName/env/lib/python2.
That empty file overrides their python customizations, one of which is to include any packages in the ~/lib/python2.7 directory.
You might need to deactivate your virtual env and activate it again for changes to take effect.
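Creating the file is a one-liner (the python2.7 directory name here is an assumption; use whichever lib/pythonX.Y directory your env actually contains):
$ touch /home/username/webapps/appName/env/lib/python2.7/sitecustomize.py
$ deactivate && source /home/username/webapps/appName/env/bin/activate   # re-activate so the change takes effect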
A workaround, but it works!
In your virtualenv directory, edit the pyvenv.cfg file and set:
include-system-site-packages = True
This will cause the packages installed in the main (system) Python to be used as well.
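For context, a pyvenv.cfg typically looks something like this after the change (the home path and version shown here are placeholders and will differ on your machine):
home = /usr/bin
include-system-site-packages = True
version = 3.8.10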
I am trying to run a web page using Python Flask, connecting it to a MySQL database, and while installing the MySQL packages I'm receiving this error.
I'm doing this on an EC2 Linux instance on AWS.
TL;DR
The 'ideal' solution (Ubuntu/Debian way):
$ python -m pip uninstall pip to uninstall the new pip 10 and retain your Ubuntu/Debian-provided patched pip 8. For a system-wide installation of modules, use apt wherever possible (unless you are in a virtualenv); more on that below. In older Ubuntu/Debian versions, always add the --user flag when using pip outside of virtualenvs (it installs into ~/.local/, the default in python-pip and python3-pip since 2016).
If you still want to use the new pip 10 exclusively, there are 3 quick workarounds:
simply re-open a new bash session (a new terminal tab, or type bash) - and pip 10 becomes available (see pip -V). debian's pip 8 remains installed but is broken; or
$ hash -d pip && pip -V to refresh pip pathname in the $PATH. debian's pip 8 remains installed but is broken; or
$ sudo apt remove python-pip && hash -d pip (for Python 3 it's python3-pip) -- to uninstall debian's pip 8 completely, in favor of your new pip 10.
Note: You will always need to add --user flag to non-debian-provided pip 10, unless you are in a virtualenv! Your use of pip 10 system-wide, outside of virtualenv, is not really supported by Ubuntu/Debian. Never sudo pip!
Details:
https://github.com/pypa/pip/issues/5221#issuecomment-382069604
https://github.com/pypa/pip/issues/5240#issuecomment-381673100
So, here we have Python 2.7.12 on an Ubuntu 16.04 EC2 machine, and we get ImportError: cannot import name main when trying to use pip. It's caused by the pip install --upgrade pip command: that installs the latest pip version 10 alongside Ubuntu's default pip from the python-pip debian package of the OS distribution (the system Python installation), completely bypassing Ubuntu's apt subsystem. It breaks Ubuntu's default pip: the debian-patched launcher script from python-pip (system-installed to /usr/bin/pip*) tries to import main() from your newly installed pip 10 library, but with a different import path, so it fails.
This error is discussed in more detail in a developer thread of the pip issue tracker, including a few proposed solutions, such as:
The $ hash -d pip command: when hash is invoked, the full pathname of pip is determined by searching the directories in $PATH and remembered. Any previously-remembered pathname is discarded. The -d option causes the shell to "forget" the remembered location of the given package name; or
Similarly, you can simply re-open a new bash session (a new terminal tab) to refresh pip pathname in $PATH; or
You could just use a versioned pip2 command (or pip3 for Python 3) instead of pip to invoke the older system-installed launcher /usr/bin/pip2, whereas any pip script located in the $HOME/.local/bin dir (pip, pip2, pip2.7) will invoke your new user-installed pip 10 version;
You can also use the versioned Python commands in combination with the -m switch to run the appropriate copy of pip, for example:
$ python2 -m pip install --user SomePackage # default Python 2
$ python2.7 -m pip install --user SomePackage # specifically Python 2.7
That is handy if you have several versions of Python and need an extension from PyPI, such as your MySQL-python module (MySQLdb) or a Flask-MySQL, for a specific Python version. The --user switch is only required outside of virtualenv.
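For instance, to get the MySQL bindings for the system Python 2.7 without touching system directories (a sketch; MySQL-python is the PyPI name of the MySQLdb module mentioned above):
$ python2.7 -m pip install --user MySQL-python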
Or, uninstall one of the two pips – either user-installed or system-installed – to resolve the conflict:
$ python -m pip uninstall pip – to remove your manually-installed pip in favour of the previously installed Ubuntu-shipped version from python-pip debian package (python3-pip for Python 3); it is slightly older, but it finds and installs latest modules from PyPI just fine, and has a working pip command in the $PATH by default; or
$ sudo apt-get remove python-pip – to uninstall Ubuntu-provided pip in favour of your latest pip 10; if it is not accessible via the short pip command, just add your $HOME/.local/bin directory to your $PATH environment variable to use pip command (see above).
Note: Ubuntu 16.04 pip v8.1.1 and the latest pip v10.0.1 produce exactly the same PyPI index search results and can pull the same module versions;
Finally, you could ignore both pips altogether in favor of APT, and install Python packages system-wide from Ubuntu repo with:
$ apt search <python-package> # or apt-cache search in older Ubuntu
$ apt show <python-package> # e.g. python-flask
$ sudo apt install <python-package> # or sudo apt-get install
Packages prefixed with python- are for Python 2; those prefixed with python3- are for Python 3.
Standard apt-get installation method may be what you need. For example, in your case:
python-mysqldb - Python interface to MySQL <- a fork of MySQLdb == MySQL-python
python-flask-sqlalchemy - SQL Alchemy support
python-pymysql - pure Python MySQL driver
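For instance, a single command covering the Flask and MySQL pieces listed above might be (a sketch using the Debian package names shown; pick python-mysqldb or python-pymysql depending on which driver your code imports):
$ sudo apt-get install python-flask python-mysqldb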
In fact, Python packages from the Ubuntu repository are preferred whenever possible, particularly in the case of heavy system dependencies or when used system-wide.
Of course, the number of Python packages in the Ubuntu repository (a few thousand!) is relatively small compared to PyPI (and only one version of each is available), because any OS repository lags slightly behind PyPI versions. But the upside of APT is that all Ubuntu-provided packages underwent integration testing within Ubuntu, plus apt-get quickly resolves heavy dependencies like C extensions automatically. You will always get the system libraries you need as part of the apt install, but with pip you have no such guarantees.
APT may not be an option, however, if you really need only the latest (or certain older) package version, or when it can only be found at PyPI, or when modules need to be isolated; then pip is indeed more appropriate tool. If you have to use pip install command on Ubuntu instead of apt-get install, please ensure it runs in an isolated virtual development environment, such as with virtualenv (sudo apt-get install python-virtualenv), or using a built-in venv module (available in python3 only), or at a per-user level (pip install --user command option), but not system-wide (never sudo pip!).
Note: Using sudo pip command (with root access) on Ubuntu/Debian should be avoided, because it interferes with the operation of the system package manager (apt) and may affect Ubuntu OS components when a system-used python module is unexpectedly upgraded, particularly by dependencies on another pip package. It is advised to never use Pip to change system-wide Python packages, as these are managed by apt-get on Ubuntu.
These steps worked for me.
1- Uninstall the pip update from python.
2- Uninstall the pip package from your Ubuntu.
3- Check that the pip binary is no longer on your system.
python -m pip uninstall pip
apt remove python-pip
whereis pip
4- Download and install pip. (credit to VanDragt.com)
wget https://bootstrap.pypa.io/get-pip.py -O /tmp/get-pip.py
sudo python3 /tmp/get-pip.py
pip install --user pipenv
pip3 install --user pipenv
echo "PATH=$HOME/.local/bin:$PATH" >> ~/.profile
source ~/.profile
whereis pip
Now you should be able to install any pip package you want.
My two cents: I had the same ImportError: cannot import name main.
My system is an Ubuntu Linux distro; I executed this command:
python -m pip uninstall pip
This removed the local (per-user) pip version.
I already had an older pip/pip2 system executable (apt-get installed ages ago) that worked like a charm.
As suggested in pip's github issue
The temporary fix is -
Edit your /usr/bin/pip file, comment out the line importing main, and replace it:
#from pip import main
from pip._internal import main as main
Worked perfectly for me.
Note - this is a temporary fix. Wait for team pip to fix this.
OR
from pip import main
if __name__ == '__main__':
    sys.exit(main())
to this:
from pip import __main__
if __name__ == '__main__':
    sys.exit(__main__._main())
As suggested in this SO answer.
Try this:
Check the Python version you use:
# python --version
and try installing with the matching interpreter. For example, if your version is 2.7:
# python2.7 -m pip install <package name>
It should work fine.
I faced a similar issue after the pip 19 upgrade, so I did the following to fix the problem:
pip install --upgrade pip==9.0.3
instead of
pip install -U pip
I am attempting to install a package for python3.4 on Mac OS X 10.9.4. As you know, Python ships with OS X, so when I installed python3.4 I was happy to find that it came with its own version of pip that would install packages for it (installing pip on a Mac with multiple versions of Python will cause it to install for the system's python2.7).
I had previously tried installing this package (https://pypi.python.org/pypi/chrome/0.0.1) with my first installation of pip (the one tied to python2.7) and found that it successfully installed on that version, but not on any others.
I ran an install with the new pip keyword for python3.4 (which, when called by itself, spits out the help page, so I know it works) and it told me that the package was already installed and to try updating. The update revealed that I already had the most recent version. So I tried uninstalling it from just python3.4 and reinstalling, to no avail, and got the same results when uninstalling pip from python2.7 and reinstalling only on version 3.4.
I know that's a bit hard to follow but hopefully that makes sense.
I also reviewed the content here with no success.
RESOLVED:
While Python did have a directory named the same as a directory it uses for packages, this was not the correct directory; for me it was in a subdirectory of Library. While the documentation said that referencing pip2 would cause the package to install for python3.4, this was false. However, referencing pip3.4 worked for me.
My suggestion is that you start using virtualenv.
Assuming you have 3.4 installed, you should also have pyvenv. As for pip on 3.4, it should already be installed.
Using, for example, version 3.4, create your own virtual environment and activate it:
$ mkdir ~/venv
$ pyvenv-3.4 ~/venv/py34
$ source ~/venv/py34/bin/activate
$ deactivate # does what it says...
$ source ~/venv/py34/bin/activate
$ pip install ... # whatever package you need
With version 2.7 first install virtualenv and then create your own virtual environment and activate it. Make sure that setuptools and pip are updated:
$ virtualenv-2.7 ~/venv/venv27
$ . ~/venv/venv27/bin/activate
$ pip install -U setuptools
$ pip install -U pip
$ pip install ... # whatever package you need
How do I control the version of pip which is used in a freshly created venv?
By default, it uses a vendored pip distribution which may be out of date or unsuitable for whatever other reason. I want to be able to create a venv with a user-specified version of pip installed initially, as opposed to creating one and then upgrading the pip installation from within the env.
For me, I just upgraded pip/virtualenv/virtualenvwrapper on my machine (not inside the virtualenv). Subsequently created virtualenvs had the updated version.
deactivate
pip install --upgrade pip virtualenv virtualenvwrapper
mkvirtualenv ...
From reading the source of virtualenv, it looks like pip is installed from a source tarfile included with virtualenv. In virtualenv 1.10.1, it is pip-1.4.1.tar.gz in the site-packages/virtualenv_support directory (it gets setuptools from the same place). You could feasibly replace that archive to control the version; virtualenv.py, at least the version I have, doesn't care which version of pip is there:
if not no_pip:
    install_sdist('Pip', 'pip-*.tar.gz', py_executable, search_dirs)
You could also pass the --no-pip option and then install the version you want from source.
In virtualenv 1.11, it looks for a wheel file (e.g. pip-*.whl) instead of a tar.gz, but other than that it acts the same way (thanks #wim for the update).
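To see which bundled pip archive your installed virtualenv would use, you can list that support directory (a sketch; it locates virtualenv_support relative to virtualenv.py as described above, so it applies to the 1.x layout only):
$ ls "$(python -c 'import os, virtualenv; print(os.path.join(os.path.dirname(virtualenv.__file__), "virtualenv_support"))')"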
You cannot downgrade pip using pip; the solution is to install a specific version in your virtual environment:
virtualenv env -p python3.6 --no-pip
source env/bin/activate
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py pip==18.1
This will allow you to keep using --process-dependency-links, which was removed in pip 19.
It's easy enough to replace the pip that gets installed in your virtual environment. With your virtual environment active, simply execute the following command:
pip install pip==1.4.1
Since Python 3.9 the stdlib venv module has EnvBuilder.upgrade_dependencies. Unfortunately, it has two shortcomings:
It won't really help users install a specific pip version, only the latest.
It still installs the vendored pip and setuptools versions first, and then uninstalls them if they're outdated, which they almost always will be in practice.
It would be ideal to install the latest versions directly! The venv CLI provides a --without-pip argument that is useful here. You can use this to opt-out of the vendored pip, and then actually use the vendored pip wheel to install your desired pip version instead (along with any other packages you might want in a freshly created virtual environment).
It's best to put it into a function - this goes into your shell profile or rc file:
function ve() {
    local py="python3"
    if [ ! -d ./.venv ]; then
        echo "creating venv..."
        if ! $py -m venv .venv --prompt=$(basename $PWD) --without-pip; then
            echo "ERROR: Problem creating venv" >&2
            return 1
        else
            local whl=$($py -c "import pathlib, ensurepip; [whl] = pathlib.Path(ensurepip.__path__[0]).glob('_bundled/pip*.whl'); print(whl)")
            echo "bootstrapping pip using $whl"
            .venv/bin/python $whl/pip install --upgrade pip setuptools wheel
            source .venv/bin/activate
        fi
    else
        source .venv/bin/activate
    fi
}
As written, this function just pulls latest pip, setuptools, and wheel from index. To force specific versions you can just change this line of the shell script:
.venv/bin/python $whl/pip install --upgrade pip setuptools wheel
Into this, for example:
.venv/bin/python $whl/pip install pip==19.3.1
For Python 2.7 users, you may do a similar trick because virtualenv provides similar command-line options in --no-pip, --no-setuptools, and --no-wheel, and there is still a vendored pip wheel available to bootstrap since Python 2.7.9. Pathlib will not be available, so you'll need to change the pathlib usage into os.path + glob.
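For that Python 2.7 variant (creating the environment with virtualenv --no-pip and setting local py="python2" at the top of the function), the wheel-locating line might become something like this sketch, with os.path and glob standing in for pathlib:
local whl=$($py -c "import os, glob, ensurepip; [whl] = glob.glob(os.path.join(os.path.dirname(ensurepip.__file__), '_bundled', 'pip*.whl')); print(whl)")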
While creating a virtual environment using the venv module, use the optional argument --upgrade-deps.
That will upgrade pip + setuptools to the latest versions on PyPI.
Example : python3 -m venv --upgrade-deps .venv
Reference link :
venv module documentation
It indicates "Changed in version 3.9: Add --upgrade-deps option to upgrade pip + setuptools to the latest on PyPI"
Note : I tried this using Python 3.10.4
I solved the same issue today on my Windows machine with Python 3.10.2 installed.
download the required pip wheel from the release history on PyPI into path\to\python\lib\ensurepip\_bundled
in path\to\python\lib\ensurepip\__init__.py change _PIP_VERSION to your version (see the sketch below)
create the environment as usual: python -m venv path\to\env
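Step 2 amounts to editing a single assignment in that file (the version string below is only an example; it must match the wheel you placed in the bundled directory):
# path\to\python\lib\ensurepip\__init__.py
_PIP_VERSION = "22.3"   # example; use the version of the wheel you downloaded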
I had issues with pip 22.3.1, so I wanted to downgrade it to 22.3, but pip 22.3.1 produced errors and would not let me downgrade as the other solutions suggest.
I solved the issue by creating a new venv with the specific pip version, as follows:
virtualenv env -p python3.10 --pip 22.3
TLDR
python -m pip install --upgrade pip==<target version number>
Example
Downgrading from pip 20.3 to pip 19.3 from within a virtual environment.
(env) $ pip --version
pip 20.3.1
(env) $ python -m pip install --upgrade pip==19.3 # downgrading
Collecting pip==19.3
Using cached pip-19.3-py2.py3-none-any.whl (1.4 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 20.3.1
Uninstalling pip-20.3.1:
Successfully uninstalled pip-20.3.1
Successfully installed pip-19.3
(env) $ pip --version
pip 19.3
I have Ubuntu 11.10. I apt-get installed PyPy from this Launchpad repository: https://launchpad.net/~pypy. The computer already has Python on it, and Python has its own pip. How can I install pip for PyPy, and how can I use it separately from Python's pip?
Quoting (with minor changes) from the PyPy website:
If you want to install 3rd party libraries, the most convenient way is
to install pip:
$ curl -O https://bootstrap.pypa.io/get-pip.py
$ ./pypy-2.1/bin/pypy get-pip.py
$ ./pypy-2.1/bin/pip install pygments # for example
In order to use it nicely, you might want to add an alias into e.g. ~/.bashrc:
alias pypy_pip='./pypy-2.1/bin/pip'
The actual location of the pip executable has to be taken from the output of pypy get-pip.py.
To keep a separate installation, you might want to create a virtualenv for PyPy. Within the virtualenv, you can then just run pip install whatever and it will install it for PyPy. When you create a virtualenv, it automatically installs pip for you.
Otherwise, you will need to work out where PyPy will import from and install distribute and pip in one of those locations. pip's installer should do this automatically when run with PyPy. Be careful with this option - if it decides to install in your system Python directories, it could break other things.
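A minimal sketch of the virtualenv route (assuming pypy is on your PATH; the environment name is arbitrary):
$ virtualenv -p "$(which pypy)" pypy-env
$ source pypy-env/bin/activate
(pypy-env) $ pip install pygments   # installs into the PyPy environment, not the system Python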
If you want to use pip with PyPy:
pypy -m pip install [package]
pip is included with PyPy, so just target pip with the -m flag.
The problem with the pip installed from PyPy (at least when installing PyPy via apt-get) is that it is installed into the system path:
$ whereis pip
pip: /usr/local/bin/pip /usr/bin/pip
So after such an install, the PyPy pip is executed by default (/usr/local/bin/pip) instead of the Python pip (/usr/bin/pip), which may break subsequent updates of the whole Ubuntu system.
The problem with virtualenv is that you have to remember where and which env you created.
A convenient alternative solution is conda (Miniconda), which manages more than just Python deployments: http://conda.pydata.org/miniconda.html
Comparison of conda, pip and virtualenv:
http://conda.pydata.org/docs/_downloads/conda-pip-virtualenv-translator.html
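For example, a conda-based setup might look like this (a sketch; the environment name and Python version are arbitrary, and newer conda releases use conda activate instead of source activate):
$ conda create -n myenv python=2.7 pip
$ source activate myenv
(myenv) $ pip install pygments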