I'm following the Prepare to Use Extensions document and I'm having issues installing CKAN into a virtual environment.
sudo apt-get install virtualenv python-pip mercurial
virtualenv /home/ubuntu/pyenv
. /home/ubuntu/pyenv/bin/activate
At first this failed, but then I found that virtualenv should be python-virtualenv.
Now I'm having issues with:
pip install -e hg+http://bitbucket.org/okfn/ckan#egg=ckan
I'm getting an error code 255, and when I visit the URL, it looks like the source has been deleted and moved to GitHub. I'm a beginner with Ubuntu, Python, and CKAN, so I'm not sure how to change this command to point to the new location.
I tried to use the following, but it didn't work for me:
pip install -e hg+https://github.com/ckan/ckan#egg=ckan
How should I continue to install CKAN in the virtual environment?
If you go to that URL in your browser, you will see it has a pointer to github now:
This repository has been deleted
Our apologies, but the repository "ckan" has been deleted.
It now lives at https://github.com/okfn/ckan.
So, instead you do:
pip install -e 'git+git://github.com/ckan/ckan#egg=ckan'
or
pip install -e 'git+https://github.com/ckan/ckan#egg=ckan'
I quoted the URL as a good practice (because # is a comment character in the shell), but in this context it isn't interpreted that way, so the quotes aren't strictly necessary.
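Putting it together with the rest of the setup from the question, a minimal sketch of the whole sequence (assuming the virtualenv lives at /home/ubuntu/pyenv as above; git replaces mercurial since the source now lives on GitHub, and package names may vary by Ubuntu release):

sudo apt-get install python-virtualenv python-pip git
virtualenv /home/ubuntu/pyenv
. /home/ubuntu/pyenv/bin/activate
pip install -e 'git+https://github.com/ckan/ckan#egg=ckan'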
Based on this answer, I can fully understand the use of:
pip install -e /path/to/locations/repo
However, I am yet to see the use of:
pip install -e .
I can understand it as doing pip install -e /path/to/locations/repo, but run from the working directory of the project itself. That's the only use case I can see, though.
In what use case would I want to install locally the same package I am now working on?
pip install -e .
will just create a project_name.egg-link file in the venv\Lib\site-packages folder pointing back at the repo location; nothing is copied.
You can continue developing and import your project's packages as if the repo were properly installed. No dirty sys.path.append hacks needed.
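As a concrete sketch (my_pkg is a hypothetical package name; this assumes the repo has a setup.py at its root):

cd /path/to/locations/repo          # the repo you are actively developing
pip install -e .                    # records a link back to this directory; nothing is copied
python -c "import my_pkg; print(my_pkg.__file__)"   # resolves into the repo, not site-packages

Edits to the files in the repo are then picked up on the next import without reinstalling.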
I'm trying to set up the development environment for modifying a Python library. Currently, I have a fork of the library, I cloned it from remote and installed it with
pip install -e git+file:///work/projects/dev/git_project#branch#egg=git_project
However, it seems that instead of creating a symbolic link with pip install -e to the directory where I cloned my package, pip would copy the package to src/git_project in my virtual environment, making it difficult to modify it from there and push changes to my fork at the same time. Am I missing out on something or pip install -e doesn't actually make a symlink when installing from VCS?
I know that I can also do pip install -e git+git:// to install from my remote, but it makes it difficult to see real-time changes I make without pushing my code to this fork all the time.
Is there a way I can clone a fork to my local development environment, pip install a specific branch from this cloned repo, and create a symlink link to the actual git_project folder so that I can modify the package there, push changes to my remote, and at the same time import the library anywhere in my environment to see real-time changes I make on my branch without committing anything yet?
Thanks for any help!
pip install -e git+URL means "clone the repository from the URL locally and install". If you already have the repository cloned locally and want to simply install from it: just install without Git:
cd /work/projects/dev/git_project
git checkout branch
pip install -e .
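To double-check that the editable install points back at your working copy rather than a copy under src/, something along these lines should work (git_project is the project name from the question):

pip show git_project                # Location: should point at /work/projects/dev/git_project
python -c "import git_project; print(git_project.__file__)"   # should resolve into your clone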
Working inside a vagrant environment, inside a python virtual environment, when I try to install a python package using
(venv) vagrant@vagrant-ubuntu-trusty-64:~$ pip install <package_name>
I receive a permission error:
error: could not create '/home/vagrant/venv/lib/python2.7/site-packages/<package_name>': Permission denied
When I use sudo to install:
(venv) vagrant@vagrant-ubuntu-trusty-64:~$ sudo pip install <package_name>
the install is successful, but the package is not installed inside venv, but instead inside the global python directory.
I can successfully install the package inside venv by using sudo and specifying the path to pip:
(venv) vagrant@vagrant-ubuntu-trusty-64:~$ sudo /home/vagrant/venv/bin/pip install <package_name>
This is quite convoluted though. So how can I stop sudo pip from pointing at the global Python's pip?
Thank you
I had the same problem with pip vs sudo pip and virtualenv pip vs local pip.
I was logged in as the root user when I created my venv months ago, so when I wanted to install a new pip package I got permission denied. I then tried the same command with sudo, but that installed the package into the global Python instead.
Lesson learned: I should not use sudo inside a venv.
Fixed it with:
chmod -R 0777 venv_folder_path_here
-R switch for recursive change in venv folder.
And then activate your venv and try pip install:
source /home/username_here/venv/project_name_here/bin/activate
(venv_name) pip install package_name_here
The root problem is that sudo does not by default inherit the user's environment as it executes the command. This is what you want - trust me on this.
In your case, your pip is either guided to the venv that it can't write to or - under sudo - to root's environment where you don't want it to be.
The solution you posted is actually valid: If you use sudo, be sure to tell it exactly what to do, how to do it and whom to do it to! All of the aforementioned can be controlled by the user's environment variables so caution is key.
You may also use sudo -E, which does inherit the calling user's environment and should therefore preserve your venv. Be sure to read sudo's man-page or do some googling about all the possible trouble you could get in, though.
Like Daniel said in the comments, you should fix the permissions issue with your virtual environment directory. It could be that you already installed something in that directory with sudo, or created it with sudo, which is not ideal. I recommend destroying the virtualenv and then creating it again as the vagrant user. If you are using pyvenv, make sure you pass the --copies option.
As user27... said in their answer, the pip you run with sudo is probably not the same pip you run as vagrant user. You can always check that with which pip.
I'd recommend starting with which python inside your python virtual environment. Perhaps you have activated the wrong virtual environment, not related to your vagrant user at all.
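A quick sanity check along those lines, run inside the activated virtualenv (paths are illustrative for the vagrant setup above):

which python      # expect /home/vagrant/venv/bin/python
which pip         # expect /home/vagrant/venv/bin/pip
python -c "import sys; print(sys.prefix)"    # expect /home/vagrant/venv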
I am attempting to set up a development environment on my new dev machine at home. I have just installed Ubuntu and now I am attempting to clone a remote repo from our web server and install its dependencies so I can begin work.
So far I have manually installed virtualenv and virtualenvwrapper from PyPI and edited my .bashrc appropriately to source my virtualenvs when I start my terminal. I then cloned my repo to ~/projects/project-name/websitename.com. Then I used virtualenvwrapper to mkvirtualenv env-name from ~/projects/project-name/websitename.com. This reflects exactly the file structure/setup of the web server I am cloning from. So far so good.
I logged into the dev server, activated the virtualenv there, and used pip freeze -l > req.txt to render a dependencies list, then scp'd it to my local machine. I activated the virtualenv on my local machine, navigated to ~/projects/project-name/websitename.com and executed pip install -r path-to-req.txt, and it ran through all of the dependencies as if nothing were wrong. However, when I attempt manage.py syncdb I get an error about not finding core Django packages. What the hell? So I figure somehow Django failed to install; I run pip install Django==1.5.1 and it completes successfully. I go to set up my site again and get another error about no module named django_extensions. Okay, what the hell with it, I just installed all of these packages with pip?!
So I pip freeze -l > test.txt and cat test.txt, and what does it list? Django==1.5.1, the one package I just manually installed. Why isn't pip installing my dependencies from my specified list into my virtualenv? What am I messing up here?
-EDIT-------------
which pip gives me the path to the pip inside my virtualenv
I have only one virtualenv and it is activated
My usual workflow is to
pip freeze > someFile.txt
and then install with
pip install -r someFile.txt
So I'm certain that this should work just fine. Unfortunately I can't really tell you anything besides make sure to check that
You really are in the virtualenv that you think you are in. Make sure to run
workon yourVirtualEnvName
to activate it just in case that matters.
Make sure to check that pip is within your virtualenv.
which pip
gives me
/path/to/home/.virtualenvs/myVirtEnv/bin/pip
Sorry I can't give you a more concrete answer. I have to do this semi-regularly and I've never had a problem with it skipping dependencies. Best of luck!
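For reference, the whole round trip in one place (env-name and the paths are the ones from the question; how you copy the file over will differ):

# on the server, inside its activated virtualenv
pip freeze -l > req.txt
# copy req.txt to the local machine (e.g. with scp), then locally:
workon env-name
which pip                      # confirm it lives inside the virtualenv
pip install -r path-to-req.txt
pip freeze -l                  # should now list everything from req.txt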
Struggled with some variation of this issue not long ago; it ended up being my cluttered .bash_profile file.
Make sure you don't have anything that might mess up your virtualenv inside your .bash_profile/.bashrc, such as $VIRTUAL_ENV or $PYTHONHOME or $PYTHONPATH environment variables.
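A quick way to check for those, assuming the usual startup file locations:

grep -nE 'VIRTUAL_ENV|PYTHONHOME|PYTHONPATH' ~/.bashrc ~/.bash_profile 2>/dev/null
env | grep -E 'VIRTUAL_ENV|PYTHONHOME|PYTHONPATH'    # what the current shell actually has set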
I know this is an old post, but I just encountered a similar problem. In my case the cause was that I was running the pip install command with sudo. This made the command run globally, and the packages were installed into the global Python path.
Hope that helps somebody.
I am trying to install virtualenv and/or virtualenvwrapper on Mac OS X 10.8.3.
I have been fighting with Python for the last two days. Finally I was able to install Python 2.7.4 using brew. Before that I had virtualenv installed using easy_install. Then I tried to uninstall it, trying to get my computer into the same state as my colleagues' machines. Maybe I uninstalled it successfully, maybe not; I don't know how to test it. Now I am supposed to install virtualenv using:
pip install virtualenv
But it gives me:
Could not find an activated virtualenv (required).
pip install virtualenvwrapper gives exactly the same output.
Also the variable: PIP_RESPECT_VIRTUALENV is null:
echo $PIP_RESPECT_VIRTUALENV
How can I solve this issue?
Open your ~/.bashrc file and see if this line is there -
export PIP_REQUIRE_VIRTUALENV=true
It might be causing the trouble. If it's there, change it to false and run -
source ~/.bashrc
If not, run export PIP_REQUIRE_VIRTUALENV=false from the terminal.
Note: everything works the same if you have .bash_profile instead of .bashrc in your home directory.
@Bibhas has it; +1 to look for export PIP_REQUIRE_VIRTUALENV=true in ~/.profile or ~/.bashrc. You can confirm the setting in your current shell with env | grep PIP_REQUIRE_VIRTUALENV.
This setting is a good safety check; more often than not, you'll want to be installing things into virtualenvs. However, sometimes you do want to be working with the global/system python. In those cases, take a look at --isolated:
Run pip in an isolated mode, ignoring environment variables and user configuration.
$ pip install --upgrade pip
Could not find an activated virtualenv (required).
$ pip install --upgrade pip --isolated
Requirement already up-to-date: pip in /usr/local/lib/python2.7/site-packages
$ pip freeze --isolated
...
An additional solution to those already presented is to add a shell command that will allow you to install py packages by temporarily overriding the default setting. Add this to your ~/.profile, ~/.bashrc or wherever you maintain your shell's exports/settings (in my case, ~/.zshrc).
syspip() {
    PIP_REQUIRE_VIRTUALENV="" pip "$@"
}
With this simple addition, you can install pip packages to the system via syspip install <package>.
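For example, after reloading your shell configuration (pick whichever file you added the function to):

source ~/.zshrc              # or ~/.bashrc / ~/.profile
syspip install virtualenv
syspip install --upgrade pip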
Verify contents of ~/.pip/pip.conf like:
[global]
index=https://pypi.python.org/simple/
require-virtualenv=false
if it was previously set to require-virtualenv=true.
Another place where you may possibly have this "lock" is the pip.conf file. In my case I had one in my ~/Library/Application Support/pip folder and forgot about it.
Typical content of the file could be:
[install]
require-virtualenv = true
[uninstall]
require-virtualenv = true
As in the other answers, true should be changed to false in the file.
It's important to heed @JCotton's advice here: keeping your pip setup so that it only installs into virtualenvs is a great practice.
His solution for getting virtualenv set up again, pip install --upgrade pip --isolated, is exactly what should be done.
You should NOT turn off requiring a virtualenv, either by config file or by editing ~/.bashrc or ~/.bash_profile, just to get your project's pip packages installed. We're only doing it here because the OP needs virtualenv itself installed.
In general, I see people get this message when their virtualenv wasn't set up correctly for their project in the first place. As a reminder, to create a virtualenv with its own python and pip, so that you don't run into the "could not find an activated virtualenv" error, you run virtualenv -p python3 <env_dir>.
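A minimal sketch of that (the directory name myenv is illustrative):

virtualenv -p python3 myenv
source myenv/bin/activate
which pip                    # should now point inside myenv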
For a MacBook (macOS), you must go to .bash_profile:
1) open with your favorite editor in terminal
nano .bash_profile OR vim .bash_profile
2) find the text line that says
export PIP_REQUIRE_VIRTUALENV=true
3) delete it or set it equal to "false"
4) finally restart your terminal