I used to set up VirtualBox VMs manually, with virtualenvs inside them, to run Django projects on my local machine. Recently I discovered Vagrant and decided to switch to it, because it seems very easy and useful.
But I can't figure it out: do I still need to use virtualenv inside a Vagrant VM? Is that an encouraged practice, or is it discouraged?
As Devin stated, it is not necessary to use virtualenv when you deploy to a vagrant machine as long as you are the sole user of the machine. However, I would still enable the use of a virtualenv, setup.py, etc. even if you do not use it for development or deployment.
In my (not so) humble opinion, any Python project should:
Include a .cvsignore, .gitignore, .hgignore, ... file that ignores the common Python intermediate files as well as virtualenv directories.
Include a requirements.txt file that lists the required packages in a pip-compliant format.
Include a Makefile with the following targets:
environment: create the virtual environment using virtualenv or pyvenv
requirements: install required packages using pip and the requirements.txt file
develop: run setup.py develop using the virtual environment
test: run setup.py test
clean: remove intermediate files, coverage reports, etc.
maintainer-clean: remove the virtual environment
The idea is to keep the Makefile as simple as possible. The dependencies should be set up so that you can clone the repository (or extract the source tarball) and run make test. It should create a virtual environment, install the requirements, and run the unit tests.
You can also include a Vagrantfile and a vagrant target in the Makefile that runs vagrant up. Add a vagrant destroy to the maintainer-clean target while you are at it.
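A minimal sketch of such a Makefile, including the vagrant targets, might look like the following (the env directory name, the use of python3 -m venv, and the exact clean rules are assumptions to adapt to your project; recipe lines must be indented with tabs):

VENV = env

environment:
	python3 -m venv $(VENV)

requirements: environment
	$(VENV)/bin/pip install -r requirements.txt

develop: requirements
	$(VENV)/bin/python setup.py develop

test: develop
	$(VENV)/bin/python setup.py test

vagrant:
	vagrant up

clean:
	rm -rf build dist *.egg-info .coverage
	find . -name '*.pyc' -delete

maintainer-clean: clean
	vagrant destroy -f
	rm -rf $(VENV)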
This makes your project usable by anyone who is using vagrant or developing without it. If (when) you need to deploy alongside another project in a vagrant or physical environment, including a clean setup.py and a Vagrantfile that describes your minimal environment makes it simple to install into a virtual environment or a shared vagrant machine.
If you run one vagrant VM per project, then there is no direct reason to use virtualenv.
If other contributors do not use vagrant, but do use virtualenv, then you might want to use it and support it to make their lives easier.
Virtualenv and other forms of isolation (Docker, dedicated VM, ...) are not necessarily mutually exclusive. Using virtualenv is still a good idea, even in an isolated environment, to shield the VM's system Python from your project packages. *nix systems run a plethora of Python-based utilities that depend on specific versions of packages being available in the system Python, and you don't want to mess with those.
Mind that virtualenv only goes so far: it isolates the Python packages themselves, but native extensions still link against shared system libraries, so system-level dependencies still mix with the system.
Related
I'm working with a team. We each have our own Windows system. We have shared drives and a shared git repository. We want to have a shared virtual environment (in Python).
My understanding (from previous questions of mine and from others) is that virtual environments do not include all the files necessary for running Python; in particular, the shared VE does not include the Python interpreter.
I can see how we can create a shared VE and it seems we could just copy that around, or put it on the shared drive, or put it in a git repository. But my understanding of this is that it does not eliminate the need for individuals to install their own local versions of python. Is that correct?
One of my colleagues has heard (or read) that "there is a package that allows teams to share their virtual environment configuration through a git-like interface. That way you can “pull” the updated configuration and it will install the new packages automatically. This allows each person to change the configuration and test it before releasing it to the team."
So is there a special package to enable this? Or is it just a regular venv that is included in the git repository with the other files? If we do this, then we must all put the venvs in the same place on our file systems, OR we have to go in and manually change the VIRTUAL_ENV variable in activate.bat. Is that correct?
In any case, we do all have to install our own local versions of python anyway. Is that correct?
If the virtual environment is on a shared drive (group-readable), then your team members should be able to access it. A virtual environment is just a directory.
But my understanding of this is that it does not eliminate the need for individuals to install their own local versions of python. Is that correct?
Virtual environments have their own python binaries, which you can see when you run which python inside the virtual environment after it is activated.
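For example (the paths are illustrative):

$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ which python
/home/user/project/venv/bin/python

On Windows, the equivalent after running venv\Scripts\activate.bat is where python.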
So is there a special package to enable this? Or is it just a regular venv that is included in the git repository with the other files? If we do this, then we must all put the venvs in the same place on our file systems, OR we have to go in and manually change the VIRTUAL_ENV variable in activate.bat. Is that correct?
I would advise against uploading a virtual environment directory to version control, since it contains binaries and configuration files that don't belong there. It's also unnecessary, because the dependencies are tracked in a requirements.txt file, which lists the pip dependencies and is committed to version control. Additionally, when the virtual environment is activated, the VIRTUAL_ENV environment variable is exported automatically, so there is no need to modify it.
Conclusion
For simplicity, it's probably best to have each user create their own virtual environment and install the dependencies from requirements.txt on their local machine. This also ensures users don't make a change to the virtual environment that affects other users, which is a drawback of the shared-drive approach above.
If they want to pull the latest requirements, then pulling the latest changes using git pull and reinstalling the dependencies with pip install -r requirements.txt is good enough. You just have to ensure the virtual environment is activated, otherwise the dependencies will get installed system-wide. This is where the pipenv package also comes in handy.
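That git pull / pip install update workflow is just a couple of commands (the venv directory name is an assumption):

git pull
source venv/bin/activate
pip install -r requirements.txt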
Usually in my team projects, the README contains instructions to get this setup for each team member.
Additionally, as Daniel Farrell helpfully mentioned in the comments, pip won't be able to manage packages like libffi, openssl, python-devel, etc. inside a virtual environment. This is where Docker containers become useful, since you can install dependencies inside an isolated environment built on top of the host operating system. This ensures the dependencies don't mess with the system-wide packages, which is good practice to follow in any case.
An example Dockerfile I have used in the past:
FROM python:3.8-slim-buster
# Set environment variables:
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Create virtual environment:
RUN python3 -m venv $VIRTUAL_ENV
# Install dependencies:
COPY requirements.txt .
RUN pip install -r requirements.txt
# Run the application:
COPY app.py .
CMD ["python", "app.py"]
I modified this from the Elegantly activating a virtualenv in a Dockerfile article.
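To build and run it, something along these lines should work (the image name myapp is only an example):

docker build -t myapp .
docker run --rm myapp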
Containerization aims to solve the "where does Python come from?" problem. My developer teams usually use a Dockerfile that installs their requirements, together with a docker-compose setup that spins up a development environment for their applications. Unlike a virtual environment, containers offer a complete userspace solution that works pretty well on Windows and macOS.
My question is: do I have to install Django every single time in my virtual environment in order to run my Python files? And is this taking up a bunch of space on my machine? My project also uses matplotlib, and every virtual environment I create also requires me to install the matplotlib module. It's getting annoying. Do I have to do this every time?
I'm new to Django. I wanted to run some Python files in Django but they weren't working, so after some research I found out I needed to run my PyCharm project in a virtual environment in order to run these Python files.
My folders look like this: pycharmProjects -> my project
I enter pycharmProjects and set up a virtual environment using "pipenv shell". Then I run "python3 manage.py runserver". It turns out I must install Django in the virtual environment before the files run.
The short answer is no, you don't have to use a virtual environment at all and can install your dependencies globally instead. However, you will soon find that it causes a lot of issues. The main reason you would create a virtual environment is to give you control over your dependencies and to prevent bugs caused by projects getting their wires crossed with each other's packages.
Short answer: yes.
If you create a virtualenv, you have to install all the packages that your program needs.
Long answer:
You could install Django system-wide and then create a virtualenv with the --system-site-packages option; then Django would be used from your globally installed Python.
(Or you could install everything just in your global Python, but I personally don't think that is good practice.)
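For the --system-site-packages route, the commands would look roughly like this (the env directory name is an assumption):

pip install django                      # installed into the global Python
virtualenv --system-site-packages env
source env/bin/activate
python -c "import django; print(django.get_version())"   # resolved from the global site-packages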
If you work with many different projects, I think you will avoid a lot of trouble if you use one virtualenv per project.
By trouble I mean that one project breaks because a pip install for another project changed the version of a package, and the first project can't handle the newer version.
I would recommend creating a requirements.txt file for each project that lists the dependencies; then, after creating and activating the virtualenv, you can install everything into it with the following command:
pip install -r requirements.txt
If you have requirements.txt files, then you can recreate virtualenvs rather quickly when going back to an old project, and you can delete virtualenvs whenever you run out of disk space. If you want to be on the safe side, run pip freeze > pipfreeze.txt prior to deleting the virtualenv, and use pip install -r pipfreeze.txt if you want to create one with the same modules and the same versions.
You also might want to look at direnv or autoenv if you are working on a Linux-like system.
This will automatically switch to the required virtualenv when changing to a project's working directory.
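With direnv, for example, a one-line .envrc in the project directory is enough (this relies on direnv's standard layout function):

# .envrc
layout python3

After running direnv allow once, the virtualenv is activated automatically whenever you cd into the project and deactivated when you leave.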
I am writing a code in python that uses numpy, matplotlib etc.
How to make sure that even a remote web server with python installed but no extra modules, can run the code without errors?
I usually work in a Linux environment. Hence, from source code, I can install the libraries into a prefix directory and keep that alongside my code, then extend PYTHONPATH locally in my Python code to use that directory.
But I started to realize this is not the correct way: first, it can't work cross-platform, as the libraries are different, and the code inside the script that extends PYTHONPATH may not work due to the use of "/" in the path. Also, I am not sure whether the compiled code will work in different environments on the same Linux platform.
So I think I need to create a directory per platform, like unix, windows, osx, etc., and put my code there? I believe this is what I find when I download code online. Is that what developers generally do to avoid these issues?
A popular convention is to list requirements in a text file (requirements.txt) and install them when deploying the project. Depending on your deployment configuration, libraries can be installed in a virtual environment (google keyword: virtualenv), in a local user folder (pip install --user -r requirements.txt, if this is the only project under this account), or globally (pip install -r requirements.txt, e.g. in a Docker container).
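For the packages mentioned in the question, such a requirements.txt might look like this (the version pins are only illustrative):

numpy==1.24.4
matplotlib==3.7.2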
The thing is that pip install can take some time, and since one build can inherently be different from another, some functionality may behave differently too. My point is:
Is it safe to copy the virtualenv folder (typically venv) from the staging environment to the production environment?
I know some packages build themselves using *-dev packages and header files. But aren't these binaries deployable too? If these packages were the biggest problem, is it possible to install them only in the production environment and copy the rest of the venv folder?
I have created a flask application in a virtual environment on my local machine and I could run it locally (at http://localhost:5000).
I then put this project in a repo and I then went to my server and git clone this project.
All files are identical on my local machine and in my server.
I then wanted to test this virtual environment on the server by trying .venv/bin/activate
However, I ran into an error. It says I do not have flask:
Traceback (most recent call last):
File "__init__.py", line 1, in <module>
from flask import Flask
ImportError: No module named flask
I am assuming that I have to initialize something in the virtual environment first, like installing all of the dependencies. Or do I have to pip install flask again? (It would be kind of funny to do that...)
As a general rule python environments are not portable across machines.
This means that you cannot reliably expect to port the virtual environment across machines. This is especially true if you are moving stuff between different operating systems. For example, a virtual environment created in Windows will not work in Linux.
Similarly, a virtual environment created in OSX will not work in Linux. Sometimes you can get Linux-to-Linux compatibility, but this is by chance and not to be relied upon.
The reasons are numerous - some libraries need to be built against native extensions, others require compatible system libraries in place to work, etc.
So, the most reliable workflow is the following:
You can (but I would recommend against this) put your virtual environment in the same directory as your project. If you do so, make sure you don't add the virtual environment's root directory to your source control system. It is best to separate your virtual environments from your source code (see the virtualenvwrapper project for a great way to manage your virtual environments separately).
You should create a requirements file, by running pip freeze > requirements.txt. Keep this file updated and add it to your source control system. In your target system, simply create an empty virtual environment and then pip install -r requirements.txt to make sure all requirements are installed correctly. Doing so will make sure that any native extensions are also built and installed.
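Concretely, the workflow is something like this (directory names are assumptions):

# on the development machine
pip freeze > requirements.txt
git add requirements.txt && git commit -m "Add requirements" && git push

# on the server
git clone <your-repo-url> myproject && cd myproject
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt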
A few possible issues:
When you created your original virtual environment, did you specify --no-site-packages? If not, your package may be using elements from the system Python.
Some packages rely on system-installed libraries that may be missing on your target system.
Is your server running on similar hardware to your development system, with the same OS? If not, your virtualenv is likely not to work without re-installing packages, as any C/C++ extensions will have been built for the wrong hardware/OS and will not work.
The thing is that virtualenv is not a package builder (look at pyinstaller for that), but rather a development and test environment. When you go to distribute your code to a new platform, then, provided you started off with --no-site-packages, you can easily find out which packages you need and what you have to install on the new target.
So basically: yes, you (or more likely the system admin) do need to run pip install flask, and probably several other things!