I'm learning Django and just installed pipenv via pip install pipenv, then ran pipenv shell. I noticed that the virtual environment files are created in some default directory elsewhere on my system. I have two questions regarding this:
1) How can I customize the directory the virtual environment is created in? Do I have to use a different command than pipenv shell?
2) Can you have multiple folders/projects, each with a different virtual environment inside?
According to the pipenv advanced readme (https://github.com/pypa/pipenv/blob/master/docs/advanced.rst#-custom-virtual-environment-location):
You can set the environment variable WORKON_HOME to whichever directory you want,
e.g.: by setting export WORKON_HOME=~/.venvs in your .bashrc file (if you are using bash).
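For example, a minimal sketch (the ~/.venvs location is just an illustration):
# in ~/.bashrc (or ~/.bash_profile)
export WORKON_HOME=~/.venvs

# then, from your project directory
pipenv shell   # the new virtualenv is created under ~/.venvs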
According to this https://github.com/pypa/pipenv/issues/1071#issuecomment-370561179 comment (from the pipenv github repo), you can use a workaround for achieving this:
To be super clear, you can still get your own custom environments set
up just by sourcing virtualenvs.
virtualenv 35 --python=python3.5
virtualenv 36 --python=python3.6
source 35/bin/activate && pipenv install
source 36/bin/activate && pipenv install
source 35/bin/activate && pipenv run <whatever>
a tiny bit of additional visual clutter to the commands but is pretty
straightforward.
You would execute the virtualenv commands above inside the project folder.
Related
So I am working on a group project; we are using Python and the code is on GitHub. My question is: how do I activate the virtual environment? Do I make one on my own using python -m venv env, or use the one that's in the repo, if there is such a thing? Thanks
A virtual env is used to keep your original (system) environment clean. You can pip install virtualenv and then create a virtual env with virtualenv /path/to/folder, then use source /path/to/folder/bin/activate to activate it. After that you can run pip install -r requirements.txt to install the dependencies into the env; everything will be installed into /path/to/folder/lib.
Alternatively, you can use /path/to/folder/bin/pip install or /path/to/folder/bin/python without activating the env.
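Put together, a minimal sketch of both variants (/path/to/folder and your_script.py are only placeholders):
pip install virtualenv
virtualenv /path/to/folder

# with activation:
source /path/to/folder/bin/activate
pip install -r requirements.txt

# or without activating, call the env's own tools directly:
/path/to/folder/bin/pip install -r requirements.txt
/path/to/folder/bin/python your_script.py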
Yes, you'll want to create your own with something like: python -m venv venv. The final argument specifies where your environment will live; you could put it anywhere you like. I often have a venv folder in Python projects, and just .gitignore it.
After you have the environment, you can activate it. On Linux: source venv/bin/activate. Once activated, any packages you install will go into it; you can run pip install -r requirements.txt for instance.
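A minimal sketch of that workflow (assuming the repository already contains a requirements.txt):
python -m venv venv              # create the environment in ./venv
echo "venv/" >> .gitignore       # keep it out of version control
source venv/bin/activate         # activate it (Linux/macOS)
pip install -r requirements.txt  # install the project's dependencies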
I am trying to install and use python3 packages to /home/myname/pp folder. I should be able to run python3 from anywhere. Also, pip3 should be able to update the packages in this folder. I should also be able to copy this folder to a new Linux system and it should work there as well (by changing PYTHONPATH there).
I searched and found following options:
pip install -t <target directory> <package> # I prefer this.
and
pip install --install-option="--prefix=$PREFIX_PATH" package_name
or use:
virtualenv
and then I need to do:
echo 'export PYTHONPATH="/home/myname/pp:$PYTHONPATH"' >> ~/.bash_profile
What should be my approach for these requirements? Thanks for your help.
pip install -t <target directory> <package>
Will install the package globally in the given directory.
pip install --install-option="--prefix=$PREFIX_PATH" package_name
Will run the package's setup.py with the given parameters, as mentioned in the pip's help:
--install-option    Extra arguments to be supplied to the setup.py install command (use like --install-option="--install-scripts=/usr/local/bin"). Use multiple --install-option options to pass multiple options to setup.py install. If you are using an option with a directory path, be sure to use absolute path.
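For the first option (pip install -t plus PYTHONPATH), a hedged sketch of the setup described in the question (requests is only a stand-in package):
# install packages into a chosen folder
pip3 install -t /home/myname/pp requests

# make python3 find them from anywhere
echo 'export PYTHONPATH="/home/myname/pp:$PYTHONPATH"' >> ~/.bash_profile
source ~/.bash_profile
python3 -c "import requests; print(requests.__file__)"   # should resolve to /home/myname/pp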
The recommended way to install packages is to use a virtual environment. It keeps your global packages clean, in case you want the same package at different versions in two different projects, for example.
virtualenv basically creates a folder where the installed packages are stored.
On a Linux-based system, you would run the virtualenv command to create the folder and after that "activate" it.
virtualenv my_virtual_environment
source my_virtual_environment/bin/activate
You will notice that the environment's name appears in your shell prompt. What activate does is simply change some paths in your PATH environment variable to point to your current virtual environment folder.
It will still use the system's Python interpreter, but when your program imports packages, it will look in the virtual environment's folder first.
To return to the global python packages, just type deactivate.
If you want the environment you're using to be active right when you start the terminal, add the source command to your .bash_profile or .bashrc. I recommend using the absolute path to the python virtual environment.
If you're working on multiple projects and want to keep their packages separate from each other, create multiple virtual environments and switch between them. You could take a look at virtualenvwrapper, which makes activating a virtual environment when you open the terminal and switching between environments really easy.
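If you go the virtualenvwrapper route, a rough sketch of the setup (the exact path to virtualenvwrapper.sh varies by system):
pip install virtualenvwrapper
export WORKON_HOME=~/.virtualenvs            # where all environments will live
source /usr/local/bin/virtualenvwrapper.sh   # path may differ on your system

mkvirtualenv project-a    # create and activate an environment
workon project-b          # switch to another (already created) one
deactivate                # leave the current environment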
I am thinking about switching from pip & virtualenv to pipenv. But after studying the documentation I am still at a loss on how the creators of pipenv structured the deployment workflow.
For example, in development I have a Pipfile & a Pipfile.lock that define the environment. Using a deployment script I want to deploy
git pull via Github to production server
pipenv install creates/refreshes the environment in the home directory of the deployment user
But I need a venv in a specific directory which is already configured in systemd or supervisor. E.g.: command=/home/ubuntu/production/application_xy/env/bin/gunicorn module:app
pipenv creates the env in some location such as
/home/ultimo/.local/share/virtualenvs/application_xy-jvrv1OSi
What is the intended workflow to deploy an application with pipenv?
You have a few options there.
You can run your gunicorn via pipenv run:
pipenv run gunicorn module:app
This creates a slight overhead, but has the advantage of also loading the environment from $PROJECT_DIR/.env (or another $PIPENV_DOTENV_LOCATION).
You can set the PIPENV_VENV_IN_PROJECT environment variable. This will keep pipenv's virtualenv in $PROJECT_DIR/.venv instead of the global location.
You can use an existing virtualenv and run pipenv from it. Pipenv will not attempt to create its own virtualenv if it's run from one.
You can just use the weird pipenv-created virtualenv path.
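To illustrate the second option with the layout from the question (the paths are the asker's example, nothing pipenv mandates), a rough sketch:
export PIPENV_VENV_IN_PROJECT=1
cd /home/ubuntu/production/application_xy
pipenv install --deploy

# the virtualenv now lives at a predictable location, so supervisor/systemd can point at:
#   /home/ubuntu/production/application_xy/.venv/bin/gunicorn module:app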
I've just switched to pipenv for deployment and my workflow is roughly as follows (managed with ansible). For an imaginary project called "project", assuming that a working Pipfile.lock is checked into source control:
Clone the git repository:
git clone https://github.com/namespace/project.git /opt/project
Change into that directory
cd /opt/project
Check out the target reference (branch, tag, ...):
git checkout $git_ref
Create a virtualenv somewhere, with the target Python version (3.6, 2.7, etc):
virtualenv -p"python$pyver" /usr/local/project/$git_ref
Call pipenv in the context of that virtualenv, so it won't install its own:
VIRTUAL_ENV="/usr/local/project/$git_ref" pipenv --python="/usr/local/project/$git_ref/bin/python" install --deploy
The --deploy flag will throw an error when the Pipfile.lock does not match the Pipfile.
Install the project itself using the virtualenv's pip (only necessary if it isn't already in the Pipfile):
/usr/local/project/$git_ref/bin/pip install /opt/project
Set a symlink to the new installation directory:
ln -s /usr/local/project/$git_ref /usr/local/project/current
My application is then callable e.g. with /usr/local/project/current/bin/project_exec --foo --bar, which is what's configured in supervisor, for instance.
All of this is triggered when a tag is pushed to the remote.
As the virtualenvs of earlier versions remain intact, a rollback is simply done by setting the current-symlink back to an earlier version. I.e. if tag 1.5 is broken, and I want to go back to 1.4, all I have to do is ln -s /usr/local/project/1.4 /usr/local/project/current and restart the application with supervisorctl.
I think pipenv is very good for managing dependencies, but it is too slow, cumbersome and still a bit unstable to use for automatic deployments.
Instead I use virtualenv (or virtualenvwrapper) and pip on the target machine.
On my build/development machine I create a requirements.txt compatible text file using pipenv lock -r:
$ pipenv lock -r > deploy-requirements.txt
While deploying, inside a virtualenv I run:
$ pip install -r deploy-requirements.txt
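Put together, a rough sketch of both sides (names as above):
# on the build/development machine
pipenv lock -r > deploy-requirements.txt

# on the target machine
virtualenv env
source env/bin/activate
pip install -r deploy-requirements.txt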
Just do this:
mkdir .venv
pipenv install
Explanation:
pipenv checks your project directory for a subdirectory named .venv. If it finds one, pipenv creates the virtual environment locally (because it then automatically sets PIPENV_VENV_IN_PROJECT=true).
So now if you want you can either activate the virtual environment with:
source .venv/bin/activate
Or configure your app.conf for gunicorn with something like this:
exec /path/to/.venv/bin/gunicorn myapp:app
To create the virtual environment in the same directory as the project, set the following environment variable (doc):
PIPENV_VENV_IN_PROJECT=true
This installs the dependencies into a .venv directory inside the project. Available since pipenv v2.8.7.
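A quick way to confirm where the environment ended up is pipenv --venv; with the variable set, it should point inside the project. A small sketch:
export PIPENV_VENV_IN_PROJECT=true
pipenv install
pipenv --venv   # should print <your project dir>/.venv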
This is one of my first times really using virtualenv and when I first activated it I was (and am) a bit confused about where my actual project (like the code) should go. Currently (after making and activating the virtualenv) this is what my project looks like in PyCharm:
Project Name
|-project-name <= I called my virtualenv project-name
|-bin
|-Lots of stuff here
|-include
|-Lots of stuff here
|-lib
|-Lots of stuff here
|-.Python
|-pip-selfcheck.json
In this environment, where should I put my actual code?
I don't recommend putting your project inside the virtualenv folder. I think you should do it this way:
In a terminal, if you're using Linux:
mkdir project-name
cd project-name
virtualenv env
source env/bin/activate
So you will have project-name folder where you will have all files according to your project + virtualenv folder called env.
If you don't have virtualenvwrapper, then just install it using apt-get:
sudo apt-get install virtualenvwrapper
When you create a virtual env using virtualenv env, the env directory (where all of your dependencies will be installed) sits at the top of your root directory. Say you use Django to create a project; you would then follow these steps:
Type source env/bin/activate to activate virtual environment
Type pip install django to install Django
Type django-admin startproject my_example_proj, which will create a Django project in your root directory
You should now have two directories: env and my_example_proj. Your project never goes inside the env directory; that's where you install dependencies using pip.
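The resulting layout would look roughly like this (names taken from the example above); you would typically also add env/ to .gitignore:
ProjectRoot
|-env                <= virtualenv: installed dependencies only, never your code
|-my_example_proj    <= your actual Django code lives here
|  |-manage.py
|  |-my_example_proj
|-.gitignore         <= typically lists env/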
I've created a folder and initialized a virtualenv instance in it.
$ mkdir myproject
$ cd myproject
$ virtualenv env
When I run (env)$ pip freeze, it shows the installed packages as it should.
Now I want to rename myproject/ to project/.
$ mv myproject/ project/
However, now when I run
$ . env/bin/activate
(env)$ pip freeze
it says pip is not installed. How do I rename the project folder without breaking the environment?
You need to adjust your install to use relative paths. virtualenv provides for this with the --relocatable option. From the docs:
Normally environments are tied to a specific path. That means that you cannot move an environment around or copy it to another computer. You can fix up an environment to make it relocatable with the command:
$ virtualenv --relocatable ENV
NOTE: ENV is the name of the virtual environment and you must run this from outside the ENV directory.
This will make some of the files created by setuptools or distribute use relative paths, and will change all the scripts to use activate_this.py instead of using the location of the Python interpreter to select the environment.
Note: you must run this after you've installed any packages into the environment. If you make an environment relocatable, then install a new package, you must run virtualenv --relocatable again.
I believe "knowing why" matters more than "knowing how". So, here is another approach to fix this.
When you run . env/bin/activate, it actually executes the following commands (using /tmp for example):
VIRTUAL_ENV="/tmp/myproject/env"
export VIRTUAL_ENV
However, you have just renamed myproject to project, so VIRTUAL_ENV now points to a path that no longer exists.
That is why it says pip is not installed: you haven't installed pip in the system's global environment, and your virtualenv's pip is no longer resolved correctly.
If you want to fix this manually, this is the way:
With your favorite editor like Vim, modify /tmp/project/env/bin/activate usually in line 42:
VIRTUAL_ENV='/tmp/myproject/env' => VIRTUAL_ENV='/tmp/project/env'
Modify /tmp/project/env/bin/pip in line 1:
#!/tmp/myproject/env/bin/python => #!/tmp/project/env/bin/python
After that, activate your virtual environment env again, and you will see your pip has come back again.
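If you are not sure which files still contain the old path, a quick way to list them (a sketch; adjust the paths to your environment):
grep -rl "/tmp/myproject" /tmp/project/env/bin/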
NOTE: As #jb. points out, this solution only applies to easily (re)created virtualenvs. If an environment takes several hours to install, this solution is not recommended.
Virtualenvs are great because they are easy to make and switch between; they keep you from getting locked into a single configuration. If you know the project requirements, or can get them, make a new virtualenv:
Create a requirements.txt file
(env)$ pip freeze > requirements.txt
If you can't create the requirements.txt file, check env/lib/pythonX.X/site-packages before removing the original env.
Delete the existing (env)
deactivate && rm -rf env
Create a new virtualenv, activate it, and install requirements
virtualenv env && . env/bin/activate && pip install -r requirements.txt
Alternatively, use virtualenvwrapper to make things a little easier as all virtualenvs are kept in a centralized location
$(old-venv) pip freeze > temp-reqs.txt
$(old-venv) deactivate
$ mkvirtualenv new-venv
$(new-venv) pip install -r temp-reqs.txt
$(new-venv) rmvirtualenv old-venv
I always install virtualenvwrapper to help out. From the shell prompt:
pip install virtualenvwrapper
There is a way documented in the virtualenvwrapper documentation: cpvirtualenv.
This is what you do. Make sure you are out of your environment and back at the shell prompt. Type this with the names required:
cpvirtualenv oldenv newenv
And then, if necessary:
rmvirtualenv oldenv
To go to your newenv:
workon newenv
You can fix your issue by following these steps:
rename your directory
rerun this: $ virtualenv ..\path\renamed_directory
virtualenv will correct the directory associations while leaving your packages in place
$ scripts/activate
$ pip freeze to verify your packages are in place
An important caveat, if you have any static path dependencies in script files in your virtualenv directory, you will have to manually change those.
Yet another way to do it that worked for me many times without problems is virtualenv-clone:
pip install virtualenv-clone
virtualenv-clone old-dir/env new-dir/env
Run this inside your renamed virtualenv directory:
cd bin
sed -i 's/old_dir_name/new_dir_name/g' *
Don't forget to deactivate and activate.
In Python 3.3+ with built-in venv
As of Python 3.3, virtualenv's functionality is built into Python as the venv module. There are a few minor differences; one of them is that the --relocatable option is not available. As a result, it is normally best to recreate a virtual environment rather than attempt to move it. See this answer for more information on how to do that.
What is the purpose behind wanting to move rather than just recreate a virtual environment? A virtual environment is intended to manage the dependencies of a module/package, so that it can use different, specific versions of the packages it depends on, and it provides a location where those packages can be installed locally.
As a result, a package should provide a way to recreate the venv from scratch. Typically this is done with a requirements.txt file and sometimes also a requirements-dev.txt file, and even a script to recreate the venv in the setup/install of the package itself.
One part that may give headaches is that you may need a particular version of Python as the executable, which is difficult to automate if it isn't already present. However, when recreating an existing virtual environment, you can simply run python from the existing venv when creating the new one. After that it is typically just a matter of using pip to reinstall all dependencies from the requirements.txt file:
From Git Bash on Windows:
python -m venv mynewvenv
source mynewvenv/Scripts/activate
pip install -r requirements.txt
It can get a bit more involved if you have several local dependencies from other locally developed packages, because you may need to update local absolute paths, etc. - though if you set them up as proper Python packages, you can install from a git repo, and thus avoid this issue by having a static URL as the source.
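For example (the repository URL and tag are purely illustrative), a requirements line or pip call can point straight at a git URL:
pip install "git+https://github.com/namespace/project.git@v1.0"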
virtualenv --relocatable ENV is not a desirable solution. I assume most people want the ability to rename a virtualenv without any long-term side effects.
So I've created a simple tool to do just that. The project page for virtualenv-mv outlines it in a bit more detail, but essentially you can use virtualenv-mv just like you'd use a simple implementation of mv (without any options).
For example:
virtualenv-mv myproject project
Please note however that I just hacked this up. It could break under unusual circumstances (e.g. symlinked virtualenvs) so please be careful (back up what you can't afford to lose) and let me know if you encounter any problems.
An even easier solution that worked for me: just copy the site-packages folder from your old virtual environment into the new one.
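A sketch of that copy (pythonX.X depends on your interpreter version; note that the console scripts in bin/ will still carry the old paths):
cp -r old_env/lib/pythonX.X/site-packages/* new_env/lib/pythonX.X/site-packages/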
Using Visual Studio Code (vscode), I just opened the ./env folder in my project root, and did a bulk find/replace to switch to my updated project name. This resolved the issue.
Confirm with which python
If you are using a conda env,
conda create --name new_name --clone old_name
conda remove --name old_name --all # or its alias: `conda env remove --name old_name`