I'm looking for a way to make a virtualenv which will contain just some libraries (which I chose) from the base Python installation.
To be more concrete, I'm trying to bring my matplotlib into the virtualenv when the virtualenv is created. It can't be installed efficiently with pip or easy_install, since it is missing some Fortran compiler libs. The way I did it until now was to manually copy from:
/usr/lib/python2.7/dist-packages/ to virtualenv_name/lib/python2.7/dist-packages/
However, this prevents the manually imported links from being registered by yolk (which lists all libraries currently available in the virtualenv).
So, is there a way to do a selective variant of the
virtualenv --system-site-packages
Create the environment with virtualenv --system-site-packages. Then activate the virtualenv, and when you want things installed in the virtualenv rather than in the system Python, use pip install --ignore-installed or pip install -I. That way pip will install what you've requested locally even though a system-wide version exists. Your Python interpreter will look first in the virtualenv's package directory, so those packages should shadow the global ones.
You can use --system-site-packages and then "overinstall" the specific packages you need into your virtualenv. That way, everything you install into your virtualenv will be taken from there; otherwise it will be taken from your system.
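A rough sketch of that workflow (the environment name venv and the package are just examples):
virtualenv --system-site-packages venv
source venv/bin/activate
pip install -I matplotlib    # -I / --ignore-installed forces a local copy even if a system-wide one exists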
I am late to the game, using Python 3.8 and pip3 on Ubuntu 20.04.
The ONLY way to get rid of the annoying .local install for me was to set an environment variable (bash):
export PYTHONNOUSERSITE="true"
This does not need to be "true"; anything will work. I would not go for a 0, though. ;-)
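A minimal sketch of what that looks like in practice (matplotlib here is just the package from the question):
export PYTHONNOUSERSITE="true"     # setting this (to any non-empty string) disables the per-user ~/.local site-packages
python3 -m pip install matplotlib  # now installs outside ~/.local, e.g. into the active virtualenv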
Install virtual env with
virtualenv --system-site-packages
and use pip install -U to install matplotlib
I am trying to install the flake8 package using pip3, and it seems that it refuses to install because it is already installed in one local location.
How can I force it to install globally (system level)?
pip3 install flake8
Requirement already satisfied (use --upgrade to upgrade): flake8 in ./.local/lib/python3.4/site-packages
Please note that I would prefer a generic solution (one that should work on Debian, OS X, maybe even Windows) that can be used on any platform, so I don't want to specify the destination myself.
For some weird reason it behaves as if I had already specified --user, which in my case I didn't.
The only way I was able to install a package globally was to first remove it and then install it again. Somehow it seems that pip (8.1.1) refuses to install a package globally if it exists locally.
Disclaimer: No virtual environments were used or harmed during the experiments.
Why don't you try sudo with the H flag? This should do the trick.
sudo -H pip install flake8
A regular sudo pip install flake8 will try to use your own home directory. The -H instructs it to use the system's home directory. More info at https://stackoverflow.com/a/43623102/
Maybe --force-reinstall would work, otherwise --ignore-installed should do the trick.
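For example, combining this with the sudo -H suggestion above, something like the following should force a system-level install (a sketch; adjust the package name as needed):
sudo -H pip3 install --ignore-installed flake8    # reinstalls at system level, ignoring the copy in ~/.local
sudo -H pip3 install --force-reinstall flake8     # alternative: force a reinstall of the package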
Where do pip installations happen in Python?
I will give a Windows solution for an issue I was facing that took a while to solve.
First of all, on Windows (I will be taking Windows as the OS here), if you do pip install <package_name>, it will by default be installed globally (if you have not activated a virtual environment).
Once you activate a virtual environment and you are inside it, all pip installations will go inside that virtual environment.
What if pip is installing the said packages but I cannot use them?
In this case, pip might be giving you a warning that the pip executables like pip3.exe and pip.exe are not on your PATH variable.
To fix this, you might add this path (usually C:\Users\<your_username>\AppData\Roaming\Programs\Python\) to your environment variables.
After this, restart your cmd and try to use your installed Python package. It should work now.
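If you are not sure which directory the warning refers to, one way to check (assuming a standard per-user install) is to ask Python where per-user packages and scripts go:
rem Shows where per-user installs live; the Scripts folder with pip.exe normally sits under the user base
python -m site --user-base
python -m site --user-site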
For windows 10:
Installing Python for all users is straightforward, since during installation you have to tick a checkbox for all users.
In order to install modules globally under C:\Program Files\Python310\Lib\site-packages,
start a CMD prompt as administrator and then install the modules:
python -m pip install selenium
For the Windows case:
Are you using virtualenv? If yes, deactivate the virtualenv. If you are not using a venv, the package should already have been installed at system level (system-wide). In that case, try to upgrade the package.
pip install flake8 --upgrade
I actually don't see your issue. Global means any package which is in your python3 path's site-packages folder.
If you want to use it only locally, then you must configure a virtualenv and reinstall the packages with the virtual environment activated.
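A minimal sketch of that on Windows, with the environment name and package purely as examples:
rem Create and activate a virtual environment, then install the package locally
python -m venv venv
venv\Scripts\activate
pip install flake8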
I am trying to create a virtual environment using virtualenv on Mac OS X El Capitan. I have installed Python 2.7.11 with brew, which includes pip, wheel and setuptools by default.
However, when I try to install virtualenv following the instructions in the documentation or from any other resource, I get several problems:
virtualenv executable is not placed in /usr/local/bin after pip does its job, so I need to ln -s it by hand (it may indicate that there is something wrong with the installation at this step).
After I run virtualenv venv, it creates a new environment and picks up Python 2.7.11 from the brew installation, but there is no pip inside the bin folder. That means that if I try which pip with venv activated, it returns the global location of pip (/usr/local/bin/pip), not /path/to/venv/bin/pip.
As a consequence, installing packages inside venv uses the global pip and installs them into the global site-packages, not the one inside venv, which is quite the opposite of what an environment should do.
Could you please suggest what may be wrong and how to fix it?
EDIT: The thing to mention is that I used to have other versions of Python installed on my computer, which I have recently deleted as it is described in this answer. Maybe it causes the issue, and some more thorough cleaning is needed.
Try removing or renaming the .pydistutils.cfg file in your home directory, e.g. by renaming with mv ~/.pydistutils.cfg ~/oldpydistutils.cfg
I'm putting a detailed answer here to help others, but the original credit goes to this answer. If you know what specifically in .pydistutils.cfg was causing the problem, let me know!
I was having the same issue: my virtual environments were created without a local copy of pip, although they had a local copy of python. This meant that using $ pip from within the virtual environment installed to the global package location, so the packages were not visible to the environment's python.
How I diagnosed this on my machine:
Created a virtual environment with $ virtualenv env
Activated the virtual environment with $ source env/bin/activate
Checked python location: run (env)$ which python with output /Users/<username>/env/bin/python (as expected)
Checked pip location: run (env)$ which pip with output /usr/local/bin/pip (NOT expected)
To check where our packages are going, we can try to install a package in the virtual environment:
Try to install a package: (env)$ pip install HTTPServer which succeeds
Try to run the package: (env)$ python -m HTTPServer which fails with error /Users/emunsing/env/bin/python: No module named HTTPServer
To double-check, try to install again: (env)$ pip install HTTPServer which produces Requirement already satisfied (use --upgrade to upgrade): HTTPServer in /usr/local/lib/python2.7/site-packages
Double-checking, we see that there's no Pip in the environment's /bin folder:
$ ls env/bin
activate activate.fish python python2
activate.csh activate_this.py python-config python2.7
And so, while the system finds the local python version, it can't find a local pip to use, so it traverses the $PATH and ends up using the pip from /usr/local/bin, leaving me unable to install packages locally to the virtual environment.
Here's what I tried:
- Reinstalling python brew uninstall python followed by brew upgrade and brew install python --build-from-source
- Installing pip using the get-pip.py command as described in the Pip documentation
Here's what I ruled out:
- I was not using sudo pip ... which caused similar problems in this other question and haven't done so at any time on this Python/pip install
- My virtual environment didn't show a local installation of pip, as was the case in these similar questions: This one for Windows, This one for Mac OS X.
Ultimately, I found that eliminating the ~/.pydistutils.cfg file fixed the problem, allowing for fresh virtual environments that had their own local pip. The contents of my ~/.pydistutils.cfg file were:
[global]
verbose=1
[install]
install-scripts=$HOME/bin
[easy_install]
install-scripts=$HOME/bin
Simply renaming the ~/.pydistutils.cfg file appears to fix the problem: it seems that although this file was created by the homebrew installation, some settings in this file may be incompatible with virtualenv. While removing this file hasn't had any bad effects on my system, you may need to use the --user flag when installing packages with pip to the global environment (e.g. $ pip install --user HTTPServer). Here are more details on .pydistutils.cfg if you want to work on tailoring it for your needs.
virtualenv executable is not placed in /usr/local/bin after pip does its job, so I need to ln -s it by hand (it may indicate that there is something wrong with the installation at this step).
Don't do that. That will only hide the bug and not solve the problem. Here's a short guide on how to debug this kind of issue:
Start with which -a python. The first path you see should be /usr/local/bin/python; if not, check your PATH variable.
Next, check which -a pip. Again, the first path should be /usr/local/bin/pip. If not, run python -m ensurepip and recheck.
Now install virtualenv using pip install virtualenv; after that, check the output of which -a virtualenv. The first path should be /usr/local/bin/virtualenv; if not, check the output of env | grep PYTHON for unexpected environment variables.
Finally check the output of virtualenv --version to make sure you have the latest version.
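Put together, the whole check looks roughly like this (the paths in the comments are the expected Homebrew locations):
which -a python          # the first entry should be /usr/local/bin/python
which -a pip             # the first entry should be /usr/local/bin/pip
python -m ensurepip      # only if pip was missing in the previous step
pip install virtualenv
which -a virtualenv      # the first entry should be /usr/local/bin/virtualenv
env | grep PYTHON        # look for unexpected PYTHON* environment variables
virtualenv --version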
I had the issue when running virtualenv: "ImportError: No module named pip."
My solution was to downgrade virtualenv. I had 16.2.0.
pip uninstall virtualenv
pip install virtualenv==15.1.0
Just hit the same issue on Linux. It seems like there are multiple causes of this issue, but for me the answer was to remove ~/.pip/.
Details: I had this in my .pip/pip.conf for some reason I can't remember:
$ cat ~/.pip/pip.conf
[global]
use_https = True
index = https://pypi.python.org/pypi
prefix = /home/sam/local/
and I was using a local version of Python, with pip installed at ~/local/. For some reason the pip installed by virtualenv must pick up the prefix = /home/sam/local/ setting, so pip was being installed there.
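So the fix, roughly, was to move the config out of the way and recreate the environment (a sketch based on my setup; adjust the paths and environment name to yours):
mv ~/.pip ~/.pip.bak       # or just delete the prefix line from ~/.pip/pip.conf
virtualenv venv
venv/bin/pip --version     # should now report a pip living inside venv, not ~/local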
Try this: sudo apt install pythonV.v-distutils.
In my case V.v == 3.8.
This worked for me.
I would like to install python-numpy in a virtualenv environment. My system is Ubuntu 12.04, and my Python is 2.7.5. First I installed virtualenv by
$ sudo apt-get install python-virtualenv
And then set up an environment by
$ mkdir myproject
$ cd myproject
$ virtualenv venv
New python executable in venv/bin/python
Installing distribute............done.
Activated it by
$ . venv/bin/activate
Installed python-numpy in the environment by
$ sudo apt-get install python-numpy
However, after all the steps above, when I tried to import numpy in Python inside the environment, Python told me "No module named numpy". Meanwhile, numpy can be imported in Python globally. I tried to remove and reinstall it many times, but it does not work. I am a beginner at both Python and Linux.
apt-get will still install modules globally, even when you're in your new virtualenv.
You should either use pip install numpy from within your virtual environment (easiest way), or else compile and install numpy from source using the setup.py file in the root of the source directory (slightly harder way, see here).
I'd also thoroughly recommend you take a look at virtualenvwrapper, which makes managing virtual environments much friendlier.
Edit:
You should not be using sudo, either to create your virtual environment or to install things within it - it's a directory in your home folder, you don't need elevated permissions to make changes to it. If you use sudo, pip will make changes to your global site packages, not to your virtual environment, hence why you weren't able to install numpy locally.
Another thing to consider is that by default, new virtualenvs will inherit from the global site-packages - i.e. if Python can't find a module locally within your virtualenv, Python will also look in your global site packages *. In your case, since you'd already installed numpy globally (using apt-get), when you then try to pip install numpy in your virtual environment, pip sees that numpy is already in your Python path and doesn't install it locally.
You could:
Pass the --no-site-packages option when you create your virtualenv. This prevents the new virtualenv from inheriting from the global site packages, so everything must be installed locally.
Force pip to install/upgrade numpy locally, e.g. using pip install -U --force-reinstall numpy
* As of v1.7, the default behaviour of virtualenv is to not include the global site-packages directory. You can override this by passing the --system-site-packages flag when creating a new virtual environment.
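A minimal end-to-end sketch of the pip route (no sudo; the environment name is just an example):
virtualenv venv                   # add --no-site-packages on virtualenv versions before 1.7
source venv/bin/activate
pip install numpy                 # installs into the virtualenv, no sudo needed
python -c "import numpy; print(numpy.__version__)"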
Meddling with PYTHONPATH for site-packages indeed defeats the purpose of virtualenv. What worked for me was to specify the env I wanted the packages to be installed in via pip.
Example:
pip -E /home/proj1
where proj1 was created using virtualenv.
Reference: how to install numpy in a virtualenv
Suppose I have a python interpreter with many modules installed on my local system, and it has been tuned to just work.
Now I want to create a virtualenv to freeze these, so that they won't be broken by upgrading in the future.
How can I do this? Thanks.
I can't use pip freeze, because this is for a cluster on which there's no pip and I don't have the privileges to install it. And I don't want to reinstall the modules either; I'm asking whether there's a way to clone the existing setup.
Run pip freeze to create a list of all modules currently installed on the system. Then make a virtualenv and install these modules.
pip freeze > env_modules.txt
virtualenv my_env && cd my_env && source bin/activate
pip install -r ../env_modules.txt
Virtualenv does not work here because it uses the local Python interpreter.
My solution is to use conda (Anaconda or Miniconda) to build the environment, so if you need some packages, you can just conda install them. Then copy it to the remote machine and run.
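Something along these lines, where the environment name and package list are purely placeholders:
conda create -n cloned_env python=2.7     # pick the interpreter version you need
conda install -n cloned_env numpy scipy   # package names here are just examples
source activate cloned_env                # newer conda versions: conda activate cloned_env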
I think the best option is to use cpvirtualenv, like this:
cpvirtualenv <name_of_virtualenv_to_be_copied> <name_of_new_virtualenv>