Simple Python venv manager with different py versions?

To be honest, up until recently I was hoping to avoid everything related to virtualenvs. Yep, very naïve.
All of the code I write in Python is deployed using containers, where "one container - one app" holds true for now. I also distribute a number of CLIs to my coworkers as wheels.
What I came up with is two "global" environments living in my homedir:
"default" virtual env with python 3.10 and some general packages installed. Using this venv I develop current versions of my applications and do my regular job. It is sourced automatically in .bashrc.
Virtual env with python 3.9. This is used to support legacy versions of/not yet updated applications in case if something should be done to them.
And also for some of the projects there are dedicated virtual envs living in a project folder needed only for this particular project, python version there may vary from 3.7 to 3.10.
As a consequence, I have to change current environment by manually sourcing the right activate script several times a day, and this isn't funny.
After some research on the topic, I found things like virtualenvwrapper, Pipenv and Poetry, but none of them seems to meet all my needs: a "global" env to use across multiple projects and outside projects (because of the CLIs); the ability to create an env with a given Python version; easy switching between virtual envs.
Is there any solution for this?
UPD: Some illustrations
The following virtual environments were created inside ~/pyenv/:
$ virtualenv -p /usr/local/bin/python3.9 default3.9
$ virtualenv -p /usr/local/bin/python3.10 default3.10
Next, .bashrc was updated to do source ~/pyenv/default3.10/bin/activate.
Now, for the vast majority of tasks this is enough: everything needed for the day-to-day job is installed in the default3.10 venv.
In case I need to switch to Python 3.9, the following is required:
$ deactivate
$ source ~/pyenv/default3.9/bin/activate
And if I then need to work on some project with its own venv, inside the project folder I have to do:
$ deactivate
$ source env/bin/activate
Too many manual steps. At a minimum, I'd like to do something like $ workon default3.9, regardless of the directory I'm currently in, and it should activate the default3.9 venv correctly whether no venv is active right now or another venv is already active.
The same should work inside a project folder: $ workon <local_venv_name> to switch to the venv living in the current project folder.
And something like venv create <interpreter path/py version> to create a new "global" or "local" (current folder only) venv, and venv list to get the list of available "global" and "local" environments.
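For illustration, here is a minimal bash sketch of what such a workon helper could look like, assuming the ~/pyenv layout from the examples above (the function itself is hypothetical, not an existing tool):
workon() {
    # Drop whatever venv is currently active, if any.
    type deactivate >/dev/null 2>&1 && deactivate
    if [ -d "$HOME/pyenv/$1" ]; then
        # "Global" venv living in ~/pyenv
        source "$HOME/pyenv/$1/bin/activate"
    elif [ -d "$PWD/$1" ]; then
        # "Local" venv living in the current project folder
        source "$PWD/$1/bin/activate"
    else
        echo "no such venv: $1" >&2
    fi
}
Putting a function like this into .bashrc would cover the default3.9/default3.10 switching as well as the per-project envs.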

To use the global environment's packages inside a project's local virtual environment, set
include-system-site-packages = true
in the pyvenv.cfg file located in the .venv directory (the virtual environment installation directory) of each project.
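The same behaviour can also be enabled when the environment is created; both the standard venv module and virtualenv accept a flag for it (the .venv path below is just an example):
$ python -m venv --system-site-packages .venv
$ virtualenv --system-site-packages .venv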

Related

How does pip work under virtual environments?

After learning how to code in Python, I'm starting to learn and figure out how to structure projects and set up virtual environments, but I can't make up my mind about how packages are managed when the virtual environment is activated. To give an example: I want to make a Django project. So, first of all, I mkdir the project folder, cd into it and then execute python -m venv [whatever]. I cd into the Scripts folder and execute 'activate'. Then I pip install Django, and pip list shows Django. At last, I deactivate the virtual environment and run pip list again. Why is Django still listed there? Should it be?
You might have installed Django both inside and outside the venv (outside being the system Python installation). Deactivate the venv, run pip uninstall django, and then try again.
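A quick check, sketched with the package from the question:
$ deactivate            # make sure no venv is active
$ pip uninstall django  # remove the copy from the system installation
$ pip list              # Django should no longer appear here
After re-activating the venv, pip list inside it should still show Django.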
Okay, I finally understood what was happening. On one hand, virtual environments are completely independent from the global/system installation, so whatever is installed on one side shouldn't affect the other. On the other hand, what actually happened was that, because I was inside the Scripts folder, when trying to execute Python commands I was actually executing the scripts with the same name, which are the copies meant for the virtual environment and which are normally used by calling the "activate" script. There was actually no problem; it was just me getting mixed up.
Thanks to all contributors for their help.

Is there a python package that allows teams to share venvs through a git-like interface?

I'm working with a team. We each have our own Windows system. We have shared drives and a shared git repository. We want to have a shared virtual environment (in Python).
My understanding (from previous questions from myself and others) is that virtual environments do not include all files necessary for running python, in particular, the shared VE does not include the Python interpreter.
I can see how we can create a shared VE and it seems we could just copy that around, or put it on the shared drive, or put it in a git repository. But my understanding of this is that it does not eliminate the need for individuals to install their own local versions of python. Is that correct?
One of my colleagues has heard (or read) that "there is a package that allows teams to share their virtual environment configuration through a git-like interface. That way you can “pull” the updated configuration and it will install the new packages automatically. This allows each person to change the configuration and test it before releasing it to the team."
So is there a special package to enable this? Or is it just a regular venv that is included in the git repository with the other files? If we do this, then we must all put the venvs in the same place on our file systems, OR we have to go in and manually change the VIRTUAL_ENV variable in activate.bat. Is that correct?
In any case, we do all have to install our own local versions of python anyway. Is that correct?
If the virtual environment is on a shared drive (group readable), then your team members should be able to access it. A virtual environment is just a directory.
But my understanding of this is that it does not eliminate the need for individuals to install their own local versions of python. Is that correct?
Virtual environments have their own python binaries, which you can see when you run which python inside the virtual environment after it is activated.
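For example (the paths are purely illustrative):
$ which python
/usr/bin/python
$ source venv/bin/activate
(venv) $ which python
/home/user/project/venv/bin/python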
So is there a special package to enable this? Or is it just a regular venv that is included in the git repository with the other files? If we do this, then we must all put the venvs in the same place in on our file systems OR we have to go in and manually change the VIRTUAL_ENV variable in activate.bat. Is that correct?
I would advise against uploading a virtual environment directory to version control, since it contains binaries and configuration files that don't belong there. It's also unnecessary, because the dependencies are tracked in a requirements.txt file, which lists the pip dependencies and is committed to version control. Additionally, when the virtual environment is activated, the VIRTUAL_ENV environment variable is exported automatically, so there is no need to modify it.
Conclusion
For simplicity, it's probably best to have each user create their own virtual environment and install the dependencies from requirements.txt on their local machine. This also ensures users don't make a change to the virtual environment that affects other users, which is a drawback of the shared-drive approach above.
If they want to pull the latest requirements, then pulling the latest changes using git pull and reinstalling the dependencies with pip install -r requirements.txt is good enough. You just have to ensure the virtual environment is activated, otherwise the dependencies will get installed system-wide. This is where the pipenv package also comes in handy.
Usually in my team projects, the README contains instructions to get this setup for each team member.
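Those README instructions usually boil down to something like the following sketch (assuming requirements.txt sits at the repository root; the repository name is a placeholder):
$ git clone <repo-url>
$ cd <repo-name>
$ python -m venv venv
$ source venv/bin/activate          # venv\Scripts\activate on Windows
(venv) $ pip install -r requirements.txt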
Additionally, as Daniel Farrell helpfully mentioned in the comments, pip won't be able to manage packages like libffi, openssl, python-devel etc. inside a virtual environment. This is where Docker containers become useful, since you can install dependencies inside an isolated environment built on top of the host operating system. This ensures the dependencies don't mess with the system-wide packages, which is good practice to follow in any case.
An example Dockerfile I have used in the past:
FROM python:3.8-slim-buster
# Set environment variables:
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Create virtual environment:
RUN python3 -m venv $VIRTUAL_ENV
# Install dependencies:
COPY requirements.txt .
RUN pip install -r requirements.txt
# Run the application:
COPY app.py .
CMD ["python", "app.py"]
I modified this from the article "Elegantly activating a virtualenv in a Dockerfile".
Containerization aims to solve the "where does Python come from?" problem. My developer teams usually use a Dockerfile that installs their requirements, within a docker-compose setup that spins up a development environment for their applications. Unlike a virtual environment, containers offer a complete userspace solution that works pretty well on Windows and macOS.

virtualenv enables the environment globally

I am working in the environment after activating ll_env:
me at me in ~/desktop/django/learning_log
$ source ll_env/bin/activate
(ll_env)
me at me in ~/desktop/django/learning_log
$
When I change to the parent directory, it is still within the scope of the virtual environment:
(ll_env)
me at me in ~/desktop/django
$
I assumed that ll_env might disappear when I jump out of the directory where the environment files live.
How does Django enable the environment globally?
It is not the present working directory that determines your environment. To jump out of the virtual environment, you need to deactivate it.
Using the command: deactivate
It may seem unintuitive at first, but it's important to understand that the current directory is not related to the active virtualenv. The active virtualenv determines where python should look for installed dependencies, and where it should install new dependencies to. It places that directory on your path, which is all that really matters in the context of working with a given virtualenv.
This means that you can cd anywhere on your system, do a pip install foo, and know that foo will be installed to a known location for the current venv, not to the directory you happen to be sitting in right now.
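For example, with the ll_env from the question active (requests is just an example package):
(ll_env) $ cd /tmp
(ll_env) $ pip install requests
(ll_env) $ python -c "import requests; print(requests.__file__)"
The printed path points inside ll_env's site-packages directory, not /tmp.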
virtualenv and virtualenvwrapper give you access to a function called deactivate to stop using the virtual environment.
$ deactivate
It's different with an Anaconda environment; you deactivate it with a two-word command:
$ source deactivate

Django and 'virtualenv' - proper project structure

I have a dilemma setting up a local development project structure. Here is my setup:
Python 2.7
Django 1.9
Mac OSX El Capitan 10.11
MySQL 5.7
I made a "Mistake" of setting my project globally instead of in a virtual environment (using 'pip' to install everything in />). After reading this article I still don't get all the steps. Is this correct:
I install global Python (pip, virtualenv in '/>')
I then go to a location where my projects will reside, like /users/user/documents/projects/project1 and from within 'project1' I use 'virtualenv' to create a virtual environment for this project (this creates a /virtual env/ folder inside /project1/ folder)
activate this virtual environment and pip install django
then from within the newly created /virtual env/ folder I run startproject, which creates another /project1/ folder within the /virtual env/ folder
with virtual environment still activated in the current shell session, I proceed with creating my scripts, site and app files
Regarding 2: should the virtualenv folder be INSIDE the main "project1" folder, or should it encompass it?
Regarding 4: is this correct, or can I do it without activating the virtual environment first?
My structure currently looks like this (starts from the root: /users/myUser/documents/projects/):
/project1/
/website1/
/static/
/templates/
__init__.py
settings.py
urls.py
views.py
wsgi.py
A common solution is to keep virtual environments and projects in separate folders, e.g. have /users/myUser/.venvs for virtual environments and /users/myUser/documents/projects/ for projects. In other respects you got it pretty much right yourself. So:
You need to install global Python and virtualenv.
Create a directory for virtual environments, e.g. run mkdir /users/myUser/.venvs.
Create the virtual environment for your project, virtualenv /users/myUser/.venvs/project1_venv.
Activate the environment for your current shell session: source /users/myUser/.venvs/project1_venv/bin/activate.
Install django and anything else in this environment with pip install django, or better, use a requirements.txt file to keep track of all project dependencies.
Deactivate the environment, run deactivate.
Now when you want to run your project using the created virtual environment, in your console window run source /users/myUser/.venvs/project1_venv/bin/activate and then python /users/myUser/documents/projects/project1/manage.py runserver. You can activate the venv from any directory; it is activated for the current shell window, and any python ... run in that window after activation will use this virtual environment. The activation script modifies environment variables so that the interpreter and libraries from the venv are used instead of the global ones. (Though there are options to use the global ones too.)
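Putting those steps together, a typical session could look like this (a sketch using the paths from this answer, and assuming the Django project itself has already been created with startproject):
$ mkdir /users/myUser/.venvs
$ virtualenv /users/myUser/.venvs/project1_venv
$ source /users/myUser/.venvs/project1_venv/bin/activate
(project1_venv) $ pip install django
(project1_venv) $ cd /users/myUser/documents/projects/project1
(project1_venv) $ python manage.py runserver
(project1_venv) $ deactivate    # after stopping the server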
It doesn't really matter where you store your virtual environment. Find a project structure that works for you.
I wouldn't put the virtual env inside the project, because you shouldn't check it into version control (although you could use an ignore). Normally, you just need to check in your requirements file, so that you can recreate the environment.
I wouldn't put the project inside the virtual env, because virtual envs are disposable. You might want to destroy the virtual env without destroying the project. Also, you might want to run the same project under different virtual envs e.g. test your code on Django 1.8 and 1.9 before upgrading.
You might find virtualenvwrapper useful. It has some tools that make it easy to create and switch between virtual environments. It stores all of your virtual environments in one place, so you don't have to worry about where to put them.
Is this correct or can I do it without activating virtual environment first?
You should activate the virtual environment and install django before you create / work on your project.

virtualenv and CLI tools

I'm aware of how to use virtualenv for isolating Python dependencies in a long-running script, like a Flask or Twisted app. But I've been sort of puzzled about how you're supposed to go about this for a script intended to be invoked from the command line.
Suppose I wanted to make a CLI tool for interacting with some API, perhaps using Click or docopt. Obviously you don't want to have to source venv/bin/activate every time you want to use this tool. But I'd assume it's still best to use virtualenv to keep a clean environment, even beyond development.
Sorry for the newbie question, but...what are you supposed to do to package up a script so it can be cleanly used in this manner? (I'm more used to RubyGems, and am still figuring out Pip and VirtualEnv.)
In general, if a package you've installed in a virtual env provides a command line script, say in ~/.virtualenv/bin/, you can symlink it into ~/bin/ (or wherever on your path you'd like to put local scripts).
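For example (the tool name is hypothetical; the paths follow the layout mentioned above):
$ ln -s ~/.virtualenv/bin/mytool ~/bin/mytool
$ mytool --help    # runs via the venv's interpreter through the script's shebang, no activation needed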
There are a couple projects that aim to solve this issue:
pipsi the pip script installer -- amounts to doing the virtual env creation and symlinking for you
pipx pip for executable binaries
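For example, with pipx (the package name below is a placeholder):
$ python -m pip install --user pipx
$ pipx install some-cli-tool    # creates an isolated venv and puts the tool's entry point on PATH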
An excellent article on virtualenv by Dabapps would make it clear for you:
http://www.dabapps.com/blog/introduction-to-pip-and-virtualenv-python/
As for invoking it from a CLI script:
1. cd to your project root
2. env/bin/python your_main_file.py (Assuming your virtualenv is named env)
This way you don't need to source the virtualenv every time.
Each virtualenv has its own Python site-packages, built-in modules, and Python interpreter. So virtualenv is meant to be used at the project level, not at the "package by package" level. It isolates a collection of Python modules and possible dependencies. Each virtualenv has its own location where pip will install packages. In theory, virtualenv should not be necessary, but in practice it's nice to have a way to have different "environments" with different versions of Python modules and Python interpreters. I don't know if Ruby has something similar that would allow you to have different "sets" of gems for different projects.
People that use straight virtualenv will add aliases to their .bashrc, for example:
alias workonawesomeproject="source ~/venv/awesomeproject/bin/activate"
They would activate the virtualenv with the alias
workonawesomeproject
To leave a virtualenv you use the command deactivate
An easier way to deal with virtualenvs is to use virtualenvwrapper
pip install virtualenvwrapper
Add these lines to your .bashrc (or other shell initialization file)
export WORKON_HOME=$HOME/venv # this directory is your choice
export PROJECT_HOME=$HOME/src # this directory is your choice
source /usr/local/bin/virtualenvwrapper.sh # leave this alone
If you just modified your .bashrc make sure to source it
source ~/.bashrc
Then to make a new virtualenv you simply run
mkvirtualenv awesomeproject
To use that virtualenv
workon awesomeproject
To deactivate that virtualenv
deactivate
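mkvirtualenv can also be pointed at a specific interpreter, which helps when different projects need different Python versions (the path below is just an example):
mkvirtualenv -p /usr/local/bin/python3.9 legacyproject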
Virtualenvwrapper Docs:
http://virtualenvwrapper.readthedocs.org/en/latest/install.html
