Installing virtualenv without root - python

I'm trying to install virtualenv on a machine without root access. Note that I don't have pip either on the machine, so "pip install --user virtualenv" won't work (unless I want to install pip locally as well).
I tried the following:
wget https://pypi.python.org/packages/d4/0c/9840c08189e030873387a73b90ada981885010dd9aea134d6de30cd24cb8/virtualenv-15.1.0.tar.gz#md5=44e19f4134906fe2d75124427dc9b716
tar xzvf virtualenv-15.1.0.tar.gz
python virtualenv-15.1.0/virtualenv.py venv
And got an error message:
Cannot find a wheel for setuptools
Cannot find a wheel for pip
Does this mean I need to install setuptools and pip first? Installing everything (pip and virtualenv) under "~/.local" didn't work either, and I also tried the "--extra-search-dir" option for virtualenv.
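For reference, a minimal sketch of the wheel-based bootstrap that --extra-search-dir is meant to support: download the pip and setuptools .whl files from PyPI on a machine with internet access, copy them over, and point virtualenv.py at the directory holding them (the ~/wheels path below is just an example):
# assuming the pip and setuptools wheels were copied into ~/wheels
python virtualenv-15.1.0/virtualenv.py --extra-search-dir=$HOME/wheels venv
source venv/bin/activate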

Related

How to ignore my development project when installing a package with pip

I uploaded my package to TestPyPI and installed it via:
pip install -i https://test.pypi.org/simple/ myproj==0.1.6
However, pip refuses to install it, saying:
Requirement already satisfied: myproj==0.1.6 in ./projs/myproj (0.1.6)
I guess this is because I previously added the project in editable mode:
pip install --editable .
However, I now want to disable that. I tried:
python setup.py develop --uninstall
But it has no effect.
It may be worth creating a separate environment (a virtual environment) for the installation.
Here are some articles on this subject:
https://docs.python.org/3/tutorial/venv.html
https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment
Or does it need to be installed in the same place?
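For example, a throwaway environment for testing the TestPyPI upload might look like this (the /tmp/testenv path is only an example):
python3 -m venv /tmp/testenv
source /tmp/testenv/bin/activate
pip install -i https://test.pypi.org/simple/ myproj==0.1.6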
You can try to find your package with pip search myproj, or use pip list to show all installed packages.
Then uninstall it with pip uninstall myproj (this may require sudo on Linux) and install it again.
You may also need the --no-cache-dir option to skip the cache during installation. More details here: https://pip.pypa.io/en/stable/reference/pip_install/#caching
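Put together, the sequence suggested in this answer would be roughly (package name and index URL taken from the question):
pip uninstall myproj
pip install --no-cache-dir -i https://test.pypi.org/simple/ myproj==0.1.6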

How to force install package in virtualenv?

Trying to install Django with a version different from the one in the system, it shows me:
Installing collected packages: Django
Found existing installation: Django 1.7.11
Not uninstalling django at /home/user/lib/python2.7, outside environment /home/user/webapps/v2_dev/venv
Successfully installed Django-1.8.19
But in fact the old version is still there.
I tried different commands:
./venv/bin/pip install Django==1.8.11
pip install Django==1.8.11
UPDATED:
When I install my packages it shows:
The required version of setuptools (>=16.0) is not available,
and can't be installed while this script is running. Please
install a more recent version first, using
'easy_install -U setuptools'.
(Currently using setuptools 3.1 (/home/user/lib/python2.7/setuptools-3.1-py2.7.egg))
When I do the upgrade:
venv/bin/pip install --upgrade setuptools
Requirement already up-to-date: setuptools in ./venv/lib/python2.7/site-packages (40.5.0)
I arrived at this post while looking for how to force install something in a virtualenv despite it being already installed in the global python. This happens when the virtual env was created with --system-site-packages.
In this situation, for certain packages it may be important to have a local version inside the virtualenv, even if for many other packages we can share the global versions. This is the case with pytest, for example. However, pip will refuse to install a package into the virtualenv if it can already find the most recent version in the system site-packages.
The solution is to use pip install --ignore-installed mypackage.
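For example, with the virtualenv active, the following forces a local copy of pytest into the environment even though a system-wide copy already exists:
pip install --ignore-installed pytest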
Instead of installing setuptools and Django with ./venv/bin/pip install ..., try activating your virtual environment first and installing what you need afterwards.
Activating virtual environment:
Go to the folder where your virtual environment is located (typically the root folder of your project) and type one of the two:
source venv/bin/activate (Unix-based systems)
venv\Scripts\activate (Windows)
This will ensure that you are not mixing packages installed in different environments.
Forcing reinstall of the packages:
A simple upgrade can be done by adding --upgrade or -U.
A forced reinstall of the packages can be done by adding --force-reinstall.
In your case (once the environment is activated):
python -m pip install -U --force-reinstall setuptools Django
Step by step (a consolidated sketch follows these steps):
Deactivate and delete the old virtual environment
Create new environment using python -m virtualenv venv (python 2) or python -m venv venv (python 3)
python above is the interpreter you want to use in your project. That's the only place where you might want to use, for example, python3 or an absolute path instead. Later, use the commands as written.
source venv/bin/activate
Activating the virtual environment
python -m pip install -U pip
If you run into ImportError: No module named _internal, you are probably using an old version of pip; the issue is described in pip's issue tracker.
python -m pip install -U --force-reinstall -r requirements.txt
-U --force-reinstall is a bit of an overkill for a fresh environment, but it does no harm
Go to the place where your manage.py is located and start the server using python manage.py runserver
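Put together, the whole sequence might look like this (assuming a project with a requirements.txt and a Django manage.py; names and paths are illustrative):
deactivate                  # only if an old environment is still active
rm -rf venv
python -m venv venv         # or: python -m virtualenv venv on Python 2
source venv/bin/activate
python -m pip install -U pip
python -m pip install -U --force-reinstall -r requirements.txt
python manage.py runserver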
The problem was with the WebFaction VPS.
You need an empty file named sitecustomize.py in /home/username/webapps/appName/env/lib/python2.
That empty file overrides their python customizations, one of which is to include any packages in the ~/lib/python2.7 directory.
You might need to deactivate your virtual env and activate it again for changes to take effect.
A workaround, but it works!
In your virtualenv directory, edit the pyvenv.cfg file and set:
include-system-site-packages = True
This will cause the packages installed in the main (system) Python to also be used.
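For reference, after the change a pyvenv.cfg might look roughly like this (the home and version values are machine-specific examples):
home = /usr/bin
include-system-site-packages = True
version = 3.6.8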

Downgrading python package installed locally

On the server that I work on (as do many other people), the "global" Python has a certain version of a package, say 1.0.0.
I recently used pip to upgrade that to 1.0.2 locally for my user with pip install --user package==1.0.2, which worked. However, now I want to uninstall my locally installed version and keep only the global one.
I've tried pip uninstall --user package==1.0.2, pip uninstall --user package, and a few other options, but nothing seems to work. I always get this error:
Usage:
pip <command> [options]
no such option: --user
I also tried pip install --user package==1.0.0, but now I have both versions installed locally and Python uses the most recent one.
How can I do what I want?
Apparently this cannot be done with pip directly. I ended up solving it just by removing the package from ~/.local/lib/python3.5/site-packages/. A bit more manual than I was hoping I'd have to do.
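Concretely, that manual removal looks something like this (the package name, version, and Python version here are placeholders matching the question; check what is actually present in your user site-packages first):
ls ~/.local/lib/python3.5/site-packages/ | grep -i package
rm -rf ~/.local/lib/python3.5/site-packages/package
rm -rf ~/.local/lib/python3.5/site-packages/package-1.0.2.dist-info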
pip uninstall does not accept a --user option (pip install does), but --user is still an option with setuptools.
So if you want to use the --user functionality, you can use pip download, which fetches the .whl file. You then need to extract it using wheel unpack. I then ran python setup.py install --user (this worked for numpy) and it installed the package into my home directory under .local.
I followed the documentation here.

No module named google.protobuf

I am trying to run Google's DeepDream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running Python 2.7 on OS X Yosemite 10.10.3.
I think it may be a deployment location issue but I can't find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages.
There is another possibility: if you are running Python 2.7.11 or a similar version,
sudo pip install protobuf
is fine.
But if you are in an Anaconda environment, you should use
conda install protobuf
Locating the google directory inside the appropriate site-packages directory and manually creating an (empty) __init__.py there resolved this issue for me.
(Note that within this directory is the protobuf directory, but my installation of Python 2.7 did not accept the new-style namespace packages, so the __init__.py was required, even if empty, to identify the folder as a package folder.)
...In case this helps anyone in the future.
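For this question's setup, that fix would be something like (path taken from the question above):
touch /usr/local/lib/python2.7/site-packages/google/__init__.py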
In my case, I downloaded the source code, compiled and installed it:
$ ./configure
$ make
$ make check
$ sudo make install
For the Python bindings, I located the python folder in the source tree and ran:
$ python setup.py build
$ python setup.py install
Not sure if this could help you...
I got the same error message when I tried to use TensorFlow. The solution was simply to uninstall TensorFlow and protobuf:
$ sudo pip uninstall protobuf
$ sudo pip uninstall tensorflow
And reinstall them following the pip installation instructions for TensorFlow. Currently, this is:
# Ubuntu/Linux 64-bit, CPU only:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py2-none-any.whl
When I run pip install protobuf, I get this error:
Cannot uninstall 'six'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
If you have the same problem as me, run the following commands:
pip install --ignore-installed six
sudo pip install protobuf
According to your comments, you have multiple versions of Python.
What could have happened is that you installed the package with the pip of another Python;
pip is actually a link to a script that downloads and installs your package.
Two possible solutions:
Go to $(PYTHONPATH)/Scripts and run pip from that folder; that way you ensure you use the correct pip.
Or create an alias to pip which points to $(PYTHONPATH)/Scripts/pip and then run pip install.
How will you know it worked?
Simple: if the new pip is used, the package will be installed successfully; otherwise it will report that the package is already installed.
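A quick way to avoid the ambiguity altogether is to run pip through the interpreter you actually want, which guarantees the package lands in that interpreter's site-packages:
python -m pip --version         # shows which Python this pip belongs to
python -m pip install protobuf  # installs into that same interpreter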
I installed protobuf with this command:
conda install -c anaconda protobuf=2.6.1
(you should check the version of protobuf)
In my case, the problem was macOS permission controls, so I needed:
sudo -H pip3 install protobuf
I had this problem too when I had a google.py file in my project files.
It is quite easy to reproduce.
main.py: import tensorflow as tf
google.py: print("Protobuf error due to google.py")
Not sure if this is a bug and where to report it.

Installing pip for an offline machine on Python 2.7, CentOS 6.3

I'm trying to install lxml on CentOS 6.3, due to this issue. It looks like I've got a conflicting version of pip. The standing solution seems to be to re-install pip for the correct version of Python.
My main issue is that all the methods I've found for installing pip require an internet connection. Is it possible to download the pip install files, then run pip install -U pip and point it at the right files?
The PyPI page for pip only has pip 6.1.1 as a .whl. I've tried running pip install -U pip-6.1.1-py2.py3-none-any.whl and it hasn't worked.
I'm stumped. How do I install it?
You could try to download pip and setuptools manually from https://pypi.org/project/pip/#files and https://pypi.org/project/setuptools/#files,
and get the get-pip.py script from https://bootstrap.pypa.io/get-pip.py.
After that, unzip/untar the packages and run:
python get-pip.py --no-index --find-links=/path/to/pip-and-setuptools
or alternatively try running
python setup.py install
inside the unpacked folders.
More here: https://github.com/pypa/pip/issues/2351
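Put together, an offline bootstrap along the lines of this answer might look like this (the /path/to/pip-and-setuptools directory is a placeholder for wherever you copied the downloaded files):
# on the offline CentOS box, with the downloaded files copied into place:
python get-pip.py --no-index --find-links=/path/to/pip-and-setuptools
# or, from the unpacked source tarballs instead (setuptools first, then pip):
cd setuptools-*/ && python setup.py install
cd ../pip-*/ && python setup.py install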
