I wanted to be able to access all of my site packages from another installation of Python, so I created a virtual environment in this way:
virtualenv my_project --system-site-packages
I noticed that my version of Keras was outdated, so from within my virtualenv, I executed:
pip install keras
which worked without an issue. I'm using pip version 9.0.1.
I'm trying to run a Python program that uses TensorFlow, but when I run it, I get an error:
ImportError: No module named tensorboard.plugins
I googled around and found that I needed to upgrade TensorFlow. I tried several commands:
(my_project/) user@GPU5:~/spatial/zero_padded/powerlaw$ pip install tensorflow
The above gives me a 'requirement already satisfied' message.
$ pip install --target=~/spatial/zero_padded/powerlaw/my_project/ --upgrade tensorflow
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
The output of which python:
/user/spatial/zero_padded/powerlaw/my_project/bin/python
I think my PYTHONPATH is the first line in this:
(my_project/) user@GPU5:~/spatial/zero_padded/powerlaw/my_project$ python -c "import sys; print '\n'.join(sys.path)"
/user/spatial/zero_padded/powerlaw/my_project
/opt/enthought/canopy-1.5.1/appdata/canopy-1.5.1.2730.rh5-x86_64/lib/python27.zip
/opt/enthought/canopy-1.5.1/appdata/canopy-1.5.1.2730.rh5-x86_64/lib/python2.7
/opt/enthought/canopy-1.5.1/appdata/canopy-1.5.1.2730.rh5-x86_64/lib/python2.7/plat-linux2
/opt/enthought/canopy-1.5.1/appdata/canopy-1.5.1.2730.rh5-x86_64/lib/python2.7/lib-tk
/opt/enthought/canopy-1.5.1/appdata/canopy-1.5.1.2730.rh5-x86_64/lib/python2.7/lib-old
/opt/enthought/canopy-1.5.1/appdata/canopy-1.5.1.2730.rh5-x86_64/lib/python2.7/lib-dynload
/user/spatial/zero_padded/powerlaw/my_project/lib/python2.7/site-packages
/user/pkgs/enthought/canopy-1.5.1/lib/python2.7/site-packages
/user/pkgs/enthought/canopy-1.5.1/lib/python2.7/site-packages/PIL
/opt/enthought/canopy-1.5.1/appdata/canopy-1.5.1.2730.rh5-x86_64/lib/python2.7/site-packages
How do I upgrade TensorFlow inside my virtualenv?
Pretty sure that all you need to do is run pip install with -U to upgrade the package inside the virtualenv:
(my_project/) user@GPU5:~/spatial/zero_padded/powerlaw$ pip install -U tensorflow
-U is just shorthand for --upgrade. But you should really go ahead and create a dependencies file for yourself called requirements.txt that lives in the project root, and specify version numbers there.
e.g.,
tensorflow==1.2.0
That makes it easier to install all the requirements:
pip install -r requirements.txt
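For example, a minimal sketch of that workflow (the package names and pinned versions below are only illustrative):
# requirements.txt
tensorflow==1.2.0
keras==2.0.8
numpy==1.13.1
# from inside the activated virtualenv, install (or upgrade to) exactly these versions
pip install -U -r requirements.txt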
I'm afraid to say that the best way to do it is to install the dependencies outside the virtualenv and create a new one, because doing upgrades is different from installing.
Related
I'm trying to install Django with a different version than the one in the system, and it shows me:
Installing collected packages: Django
Found existing installation: Django 1.7.11
Not uninstalling django at /home/user/lib/python2.7, outside environment /home/user/webapps/v2_dev/venv
Successfully installed Django-1.8.19
But in fact the old version is still there.
I tried different commands:
./venv/bin/pip install Django==1.8.11
pip install Django==1.8.11
UPDATED:
When I install my packages it shows:
The required version of setuptools (>=16.0) is not available,
and can't be installed while this script is running. Please
install a more recent version first, using
'easy_install -U setuptools'.
(Currently using setuptools 3.1 (/home/user/lib/python2.7/setuptools-3.1-py2.7.egg))
When I do the upgrade:
venv/bin/pip install --upgrade setuptools
Requirement already up-to-date: setuptools in ./venv/lib/python2.7/site-packages (40.5.0)
I arrived at this post while looking for how to force install something in a virtualenv despite it already being installed in the global Python. This happens when the virtual env was created with --system-site-packages.
In this situation, for certain packages it may be important to have a local version within the virtualenv, even if for many other packages we can share the global versions. This is the case for pytest, for example. However, pip will refuse to install a package in the virtualenv if it can already find the most recent version in the system site.
The solution is to use pip install --ignore-installed mypackage.
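For example, a minimal sketch (pytest here is only an example package; run this inside the activated virtualenv):
pip install --ignore-installed pytest
# confirm that the copy inside the virtualenv is the one being picked up
which pytest
python -c "import pytest; print(pytest.__file__)"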
Instead of installing setuptools and Django like ./venv/bin/pip install ..., try to activate your virtual environment first and install the stuff you need afterwards.
Activating virtual environment:
Go to the folder where your virtual environment is located (typically the root folder of your project) and type one of the two:
source venv/bin/activate (Unix-based systems)
venv\Scripts\activate (Windows)
This will ensure that you are not mixing packages installed in different environments.
Forcing reinstall of the packages:
A simple upgrade can be done by adding --upgrade or -U.
A forced reinstall of the packages can be done by adding --force-reinstall.
In your case (once the environment is activated):
python -m pip install -U --force-reinstall setuptools Django
Step by step:
Deactivate and delete the old virtual environment
Create new environment using python -m virtualenv venv (python 2) or python -m venv venv (python 3)
python above is the interpreter you want to use in your project. That's the only point where you might want to use, for example, python3 or an absolute path instead. Later, use the commands as they are.
source venv/bin/activate
Activating the virtual environment
python -m pip install -U pip
If you have an issue with ImportError: No module named _internal, then you are probably using an old version of pip. The issue is described here.
python -m pip install -U --force-reinstall -r requirements.txt
-U --force-reinstall is a bit of overkill in the case of a fresh environment, but it will do no harm.
Go to the place where your manage.py is located and start the server using python manage.py runserver
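Putting the steps together, a minimal sketch (the environment name venv and the paths are assumptions; adjust them to your project):
deactivate                                        # if an old environment is still active
rm -rf venv                                       # delete the old virtual environment
python -m virtualenv venv                         # Python 2; use python -m venv venv for Python 3
source venv/bin/activate
python -m pip install -U pip
python -m pip install -U --force-reinstall -r requirements.txt
python manage.py runserver                        # run from the directory containing manage.py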
The problem was with the Webfaction VPS.
You need an empty file named sitecustomize.py in /home/username/webapps/appName/env/lib/python2.
That empty file overrides their Python customizations, one of which is to include any packages in the ~/lib/python2.7 directory.
You might need to deactivate your virtual env and activate it again for changes to take effect.
This is a workaround, but it works!
In your virtualenv directory, change this setting in the pyvenv.cfg file:
include-system-site-packages = True
This will cause the packages installed in the main (system) Python to be used.
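For reference, a pyvenv.cfg typically looks something like this sketch (the home path and version are illustrative and depend on your interpreter):
home = /usr/bin
include-system-site-packages = true
version = 3.6.9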
Whenever I run:
pip install fastai
I get the error
"Command "python setup.py egg_info" failed with error code 1 in C:\Users\seja9890\AppData\Local\Temp\pip-install-_cw7ve61\torch\".
Can someone please guide me where I might be going wrong?
P.S.: I have tried updating setuptools, and it doesn't help in my case.
Fastai doesn't work with Python 2, so make sure you have pip3 installed (sudo apt install python3-pip on Ubuntu).
Make sure Python 3 is at least 3.6; this may change, since fastai may need 3.7 soon.
and then:
pip3 install git+https://github.com/fastai/fastai.git
or use pip3 install fastai, or in some cases you may need:
pip3 install --no-deps fastai
Note: At the moment I am writing this: PyTorch v1 and Python 3.6 are the minimal version requirements.
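A quick sanity-check sketch for what actually got installed (nothing here is fastai-specific beyond the package name itself):
python3 --version                                   # should be at least 3.6
pip3 show fastai                                    # prints the installed version and location
python3 -c "import fastai; print(fastai.__version__)"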
According to the official website, you should install it with conda.
anaconda
fast.ai
To install
# Prerequisites
Anaconda, manages Python environment and dependencies
# Normal installation
Download project: git clone https://github.com/fastai/fastai.git
Move into root folder: cd fastai
Set up Python environment: conda env update
Activate Python environment: conda activate fastai
If this fails, use instead: source activate fastai
# Install as pip package (not recommended)
You can also install this library in the local environment using pip:
pip install fastai
However this is not currently the recommended approach, since the library is being updated much more frequently than the pip release, fewer people are using and testing the pip version, and pip needs to compile many libraries from scratch (which can be slow).
An alternative is to use the latest GitHub version with pip:
pip install git+https://github.com/fastai/fastai.git
I am trying to install TensorFlow for Python on a Mac, and I am following the instructions provided on the website. I decided to use virtualenv because pip has been giving me issues lately, and the website recommended virtualenv as well. Although I have apparently downloaded TensorFlow for Python 3, I also want to have it available in Python 2 (which I use more anyway). Here is what I have done so far:
$ virtualenv --system-site-packages ~/tensorflow
$ cd ~/tensorflow
tensorflow$ pip install --upgrade tensorflow
I get the following message:
Requirement already up-to-date: tensorflow in /usr/local/lib/python3.6/site-packages (1.8.0)
If you have any suggestions or commands to run, that would be greatly appreciated.
If you want to create a virtual environment with Python 2:
virtualenv -p /usr/bin/python2.7 my_project
and then activate your project,
source my_project/bin/activate
check the environment and then install tensorflow,
python --version
pip --version
pip list
pip install tensorflow
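Once that succeeds, a quick sanity-check sketch from inside the activated environment:
python --version                                            # should report Python 2.7.x
python -c "import tensorflow as tf; print(tf.__version__)"
python -c "import tensorflow as tf; print(tf.__file__)"     # should point inside my_project, not the system site-packages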
I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running Python 2.7 on OS X Yosemite 10.10.3.
I think it may be a deployment location issue, but I can't find anything on the web about it. It is currently deploying to /usr/local/lib/python2.7/site-packages.
There is another possibility: if you are running Python 2.7.11 or another similar version,
sudo pip install protobuf
is ok.
But if you are in an Anaconda environment, you should use
conda install protobuf
Locating the google directory in the site-packages directory (the proper site-packages directory for your installation, of course) and manually creating an (empty) __init__.py resolved this issue for me.
(Note that within this directory is the protobuf directory but my installation of Python 2.7 did not accept the new-style packages so the __init__.py was required, even if empty, to identify the folder as a package folder.)
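For example, a minimal sketch (the site-packages path is taken from the question and may differ on your machine):
cd /usr/local/lib/python2.7/site-packages/google
touch __init__.py        # an empty file is enough to mark the folder as a package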
...In case this helps anyone in the future.
In my case, I downloaded the source code, then compiled and installed it:
$ ./configure
$ make
$ make check
$ sudo make install
For Python, I located its folder (python) under the source code and ran these commands:
$ python setup.py build
$ python setup.py install
Not sure if this could help you.
I got the same error message when I tried to use TensorFlow. The solution was simply to uninstall TensorFlow and protobuf:
$ sudo pip uninstall protobuf
$ sudo pip uninstall tensorflow
And reinstall again, following the pip installation instructions for TensorFlow. Currently, this is:
# Ubuntu/Linux 64-bit, CPU only:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py2-none-any.whl
When I run pip install protobuf, I get the error:
Cannot uninstall 'six'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
If you have the same problem as me, you should run the following commands:
pip install --ignore-installed six
sudo pip install protobuf
According to your comments, you have multiple versions of Python.
What could have happened is that you installed the package with the pip of another Python.
pip is actually a link to a script that downloads and installs your package.
Two possible solutions:
Go to $(PYTHONPATH)/Scripts and run pip from that folder; that way you ensure you use the correct pip.
Create an alias for pip that points to $(PYTHONPATH)/Scripts/pip and then run pip install.
How will you know it worked?
Simple: if the correct pip is used, the package will install successfully; otherwise, pip will report that the package is already installed.
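A few quick checks along those lines (a sketch; protobuf is just the package from this question, and which -a is Unix-only):
pip --version                          # shows which Python this pip belongs to
which -a pip python                    # list every pip/python found on the PATH
python -m pip install protobuf         # guarantees pip runs under this exact interpreter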
I installed protobuf with this command:
conda install -c anaconda protobuf=2.6.1
(you should check the version of protobuf)
In my case, macOS permission controls were the problem.
sudo -H pip3 install protobuf
I had this problem too, when I had a google.py file in my project files.
It is quite easy to reproduce.
main.py: import tensorflow as tf
google.py: print("Protobuf error due to google.py")
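A small diagnostic sketch to confirm the shadowing (run it from the project directory):
python -c "import google; print(google.__file__)"
# if this prints your project's google.py rather than the protobuf package under site-packages,
# rename or delete the local file (and any stale google.pyc next to it)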
Not sure if this is a bug, or where to report it.
Quick question.
Is there a way to ensure that pip freeze > requirements.txt keeps the order in which the packages were installed? This is an issue for me because I continuously get something like this in requirements.txt:
matplotlib==1.1.1
numpy==1.6.2
So an error occurs when I try to install using pip install -r requirements.txt, because numpy is a dependency of matplotlib, so I have to install numpy manually first and then rerun pip install -r requirements.txt.
Is there any fix on that?
UPDATE: In response to mechmind, I installed matplotlib and numpy in Ubuntu 12.04 using pip with virtualenv --distribute myenv. After installation, I got this freeze file:
argparse==1.2.1
distribute==0.6.28
matplotlib==1.1.1
numpy==1.6.2
wsgiref==0.1.2
Then when I try to reinstall in another virtual environment I get the following error:
REQUIRED DEPENDENCIES
numpy: no
* You must install numpy 1.4 or later to build
* matplotlib.
So maybe it's dependent on the system.
Thanks!
Just tried pip with numpy and matplotlib, and pip correctly resolved the dependency check: numpy was built first.
Tried on the old stock pip from Ubuntu 10.10.
EDIT: After playing with pip and virtualenv, I realized that the dependency check actually works only once the dependencies have been discovered, i.e. when the package was installed, removed, and installed again.
So the actual solution involves reordering the packages in the requirements file (for the simple case where there are only two packages in the wrong order, you can just reverse the requirements file: sort -r requirements.txt | xargs pip install).
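A related trick, if you simply want pip to process the file strictly line by line in the order written (package names as in the example above):
xargs -n 1 pip install < requirements.txt            # one pip run per requirement, in file order
sort -r requirements.txt | xargs -n 1 pip install    # the reversed variant for the two-package case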