Activate python virtualenv in Dockerfile

I have a Dockerfile where I try to activate a python virtualenv, after which it should install all dependencies within this env. However, everything still gets installed globally. I used several approaches and none of them worked; I also do not get any errors. Where is the problem?
1.
ENV PATH $PATH:env/bin
2.
ENV PATH $PATH:env/bin/activate
3.
RUN . env/bin/activate
I also followed an example of a Dockerfile config for the python-runtime image on Google Cloud, which is basically the same stuff as above.
Setting these environment variables is the same as running source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
Additionally, what does ENV VIRTUAL_ENV /env mean and how is it used?

You don't need to use virtualenv inside a Docker Container.
virtualenv is used for dependency isolation. You want to prevent any dependencies or packages you install from leaking between applications. Docker achieves the same thing: it isolates your dependencies within your container and prevents leaks between containers and between applications.
Therefore, there is no point in using virtualenv inside a Docker container unless you are running multiple apps in the same container. If that's the case, I'd say you're doing something wrong, and the solution would be to architect your app in a better way and split it up into multiple containers.
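If you do go the no-virtualenv route, the Dockerfile can stay very small. A minimal sketch, assuming the dependencies live in a requirements.txt and the entry point is app.py (both names are assumptions, not taken from the question):
FROM python:3.9-slim
WORKDIR /app
# install dependencies straight into the image's system Python
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy the application and run it with the system interpreter
COPY . .
CMD ["python", "app.py"]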
EDIT 2022: Given that this answer gets a lot of views, I thought it might make sense to add that, four years later, I realized there actually are valid uses of virtual environments in Docker images, especially when doing multi-stage builds:
FROM python:3.9-slim as compiler
ENV PYTHONUNBUFFERED 1
WORKDIR /app/
RUN python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./requirements.txt /app/requirements.txt
RUN pip install -Ur requirements.txt
FROM python:3.9-slim as runner
WORKDIR /app/
COPY --from=compiler /opt/venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . /app/
CMD ["python", "app.py", ]
In the Dockerfile example above, we create a virtualenv at /opt/venv and activate it using an ENV statement, then install all dependencies into /opt/venv and simply copy that folder into the runner stage of the build. This can help minimize the Docker image size.

There are perfectly valid reasons for using a virtualenv within a container.
You don't necessarily need to activate the virtualenv to install software or use it. Try invoking the executables directly from the virtualenv's bin directory instead:
FROM python:2.7
RUN virtualenv /ve
RUN /ve/bin/pip install somepackage
CMD ["/ve/bin/python", "yourcode.py"]
You can also just set the PATH environment variable so that all further Python commands use the binaries within the virtualenv, as described in https://pythonspeed.com/articles/activate-virtualenv-dockerfile/
FROM python:2.7
RUN virtualenv /ve
ENV PATH="/ve/bin:$PATH"
RUN pip install somepackage
CMD ["python", "yourcode.py"]

Setting these variables
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
is not exactly the same as just running
RUN . env/bin/activate
because activation inside a single RUN will not affect any lines below that RUN in the Dockerfile, whereas setting the environment variables through ENV keeps the virtual environment active for all subsequent RUN commands.
Look at this example:
RUN virtualenv /env # setup env
RUN which python # -> /usr/bin/python
RUN . /env/bin/activate && which python # -> /env/bin/python
RUN which python # -> /usr/bin/python
So if you really need to activate the virtualenv for the whole Dockerfile, you need to do something like this:
RUN virtualenv /env
# the two ENV lines below activate the environment for every following instruction
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN which python # -> /env/bin/python

Although I agree with Marcus that this is not the usual way to do it with Docker, you can do what you want.
Using Docker's RUN command directly will not get you there, because it does not execute your instructions from within the virtual environment. Instead, squeeze the instructions into a single line executed with /bin/bash. The following Dockerfile worked for me:
FROM python:2.7
RUN virtualenv virtual
RUN /bin/bash -c "source /virtual/bin/activate && pip install pyserial && deactivate"
...
This should install the pyserial module only in the virtual environment.
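Keep in mind that the activation only lasts for that single RUN layer, so at runtime you would still call the venv's interpreter directly, as the earlier answer shows. A minimal sketch (yourcode.py is just a placeholder name):
CMD ["/virtual/bin/python", "yourcode.py"]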

If you are using Python 3.x:
RUN pip install virtualenv
RUN virtualenv -p python3.5 virtual
RUN /bin/bash -c "source /virtual/bin/activate"
If you are using Python 2.x:
RUN pip install virtualenv
RUN virtualenv virtual
RUN /bin/bash -c "source /virtual/bin/activate"

Consider migrating to pipenv - a tool which automates virtualenv and pip interactions for you. It's recommended by the PyPA.
Reproducing an environment via pipenv in a Docker image is very simple:
FROM python:3.7
RUN pip install pipenv
COPY src/Pipfile* ./
RUN pipenv install --deploy
...
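Because this installs into pipenv's own virtualenv rather than the system interpreter, the container would typically start the app through pipenv run as well. A minimal sketch, assuming the application code is also copied in and its entry point is app.py (an assumed name):
CMD ["pipenv", "run", "python", "app.py"]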

What is the benefit of using virtualenv with pyenv or docker?

I recently joined the current project and found these steps in the ReadMe (and I can't get in touch with the person who created it):
# install pyenv
git clone git://github.com/pyenv/pyenv.git ~/.pyenv
...
pyenv install 3.7.9
pyenv global 3.7.9
# install venv
pip install virtualenv
# create virtual environment
source .venv/bin/activate
# install dependencies
pip install pipenv
pipenv install --dev
...
So my questions are:
What is the reason for, or benefit of, using a virtual environment inside another virtual environment?
What is the reason for, or benefit of, using pyenv or venv if we run the application inside a python container? Isn't it a better idea to install all libraries using docker's system pip/python? A docker container is already an abstraction layer (a virtual environment) of its own.
pyenv already creates a per-user environment that can easily be removed/changed/reset without affecting the system python libraries.
An environment created with virtualenv, on the other hand, still depends on system libraries, so it can't easily be moved between servers.
Maybe there are some benefits or good practices for using venv when deploying a service?
Even localstack uses virtualenv inside docker. Isn't docker's isolation level enough?
Update 2022/06/02
According to this answer, it looks like virtualenv may be used to keep the size of the resulting image smaller.
I checked two patterns:
staged build with virtualenv
FROM python:3-alpine as compiler
ENV PYTHONUNBUFFERED 1
WORKDIR /app/
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./r.txt /app/r.txt
RUN pip install -Ur r.txt
FROM python:3-alpine as runner
WORKDIR /app/
COPY --from=compiler /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["sleep", "999" ]
and
unstaged build without virtualenv
FROM python:3-alpine as runner
WORKDIR /app/
COPY ./r.txt /app/r.txt
RUN pip install -Ur r.txt && pip cache purge
CMD ["sleep", "999" ]
where r.txt is:
django
django-rest-framework
flask
fastapi
Result is:
$ docker images | grep stage
unstaged_python latest 08460a18018c ... 160MB
staged_python latest dd606b218724 ... 151MB
Conclusion:
venv can be used to reduce the total image size, but the difference is not that big. The unstaged image could also be cleaned up a bit more after pip installation to reduce its size. In other words, using a venv is reasonable when the build involves many heavy operations with compile-time tools that are only needed while building and can be dropped once the image is ready.
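For example, if a dependency needs a compiler, the build tools can stay in the first stage and only the finished venv is copied into a clean runtime stage. A rough sketch of that idea; gcc and musl-dev here just stand in for whatever build-time tooling the requirements in r.txt might need (an assumption, not something from the measurements above):
FROM python:3-alpine as compiler
WORKDIR /app/
# build tools exist only in this stage and never reach the final image
RUN apk add --no-cache gcc musl-dev
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./r.txt /app/r.txt
RUN pip install -Ur r.txt
FROM python:3-alpine as runner
WORKDIR /app/
# only the ready-made venv is copied over
COPY --from=compiler /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["sleep", "999"]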

venv directory not being created inside Docker container/image

I am relatively new to Docker and, as an experiment, I am trying to create just a generic Django development container with the following Dockerfile:
FROM python
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get dist-upgrade -y
RUN mkdir /code
WORKDIR /code
RUN python3 -m venv djangoProject
RUN /bin/bash -c "source /code/djangoProject/bin/activate && python3 -m pip install --upgrade pip && pip install django"
EXPOSE 8000
The image seems to build okay, but when I go to run the container:
docker container run -v /home/me/dev/djangoRESTreact/code:/code -it --rm djangodev /bin/bash
My local mount, /home/me/dev/djangoRESTreact/code, is not populated with the djangoProject venv directory I was expecting from this Dockerfile and mount. The docker container also has an empty directory at /code. If I run python3 -m venv djangoProject directly inside the container, the venv directory is created and I can see it both on the host and within the container.
Any idea why my venv is not being created in the image and subsequent container?
I'm pulling my hair out.
Thanks in advance!
You don't need venvs in a Docker container at all, so don't bother with one.
FROM python
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get dist-upgrade -y
RUN mkdir /code
WORKDIR /code
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install django
EXPOSE 8000
To answer your question, though: you're misunderstanding how -v mounts work; they mount something from your host onto a directory in the container. The /code/... created in your Dockerfile is essentially overridden by the volume mount, which is why you don't see the venv at all.
When you mount a volume into a container, the volume covers up anything that was already in the container at that location. This is exactly how every other mount on Linux works. Also, volumes are mounted when the container runs, not while the image is being built, so the venv you created at build time is hidden as soon as the mount is in place. If you want your venv to be visible, it has to end up in the volume itself, not just in the image at the same path.
Mounting the volume with -v causes /home/me/dev/djangoRESTreact/code on the host to be mounted at /code in the container. This mounts over anything that was placed there during the build (your venv).
If you run the container without the -v flag, you'll probably find the venv directory exists.
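For instance, running the image without the bind mount (djangodev is the image tag from the question) should list the directory that was created during the build:
docker container run --rm djangodev ls /code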
You should probably avoid creating a venv within the container anyway, since the container is already an isolated environment.
Instead, just copy your requirements.txt into the container and install the requirements directly. Something like:
COPY ./requirements.txt /requirements.txt
RUN pip install -U pip && pip install -r /requirements.txt

Dockerized flask app builds and runs locally but won't work when deployed on Azure [duplicate]

pipenv --system option for docker. What is the suggested way to get all the python packages in docker

I use pipenv for my django app.
$ mkdir djangoapp && cd djangoapp
$ pipenv install django==2.1
$ pipenv shell
(djangoapp) $ django-admin startproject example_project .
(djangoapp) $ python manage.py runserver
Now I am shifting to a docker environment.
As per my understanding, pipenv only installs packages inside a virtualenv.
You don't need a virtual env inside a container; a docker container IS a virtual environment in itself.
Later, after going through many Dockerfiles, I found the --system option to install into the system.
For example, I found the following:
https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/
COPY ./Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev
https://hub.docker.com/r/kennethreitz/pipenv/dockerfile
# -- Install dependencies:
ONBUILD RUN set -ex && pipenv install --deploy --system
https://wsvincent.com/beginners-guide-to-docker/
# Set work directory
WORKDIR /code
# Copy Pipfile
COPY Pipfile /code
# Install dependencies
RUN pip install pipenv
RUN pipenv install --system
So is --system alone sufficient, or is --deploy --system the better way? And --skip-lock --system --dev is different again.
Can someone guide me on how to get my environment back in Docker?
A typical Docker deployment would involve having a requirements.txt file (where you store your pip dependencies, including Django itself) and then in your Dockerfile you do something like:
# or whatever version you need
FROM python:3.7
ADD requirements.txt /code/
WORKDIR /code
# install your Python dependencies
RUN pip install -r requirements.txt
# run Django
CMD [ "python", "./manage.py", "runserver", "0.0.0.0:8000"]
You don't need pipenv here at all, since, as you say, you no longer have a virtual environment.
Even better, you can configure a lot of that stuff in a docker-compose.yml file and then use docker-compose to run and manage your services, not just Django.
Docker has a very good tutorial on dockerising Django with it. And if you're unsure what's going on in the Dockerfile itself, check the manual.
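As a rough sketch of what such a compose file could look like (the service name, port and bind mount are assumptions, not taken from the question):
version: "3.8"
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"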
Whether in a docker image, a CI pipeline, a production server or even on your development workstation: you should always include the --deploy flag in your installs, unless you want to potentially relock all dependencies, e.g. while evolving your requirements. It will check that the lockfile is up to date and will never install anything that is not listed there.
As for the --system flag, you'd better drop it. There is no real harm in using a virtual environment inside docker images, and there are some subtle benefits. See this comment by #anishtain4. Pipenv now recommends against system-wide installs: https://github.com/pypa/pipenv/pull/2762
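Putting that together, a hedged sketch of a Dockerfile that keeps pipenv's own virtualenv and uses --deploy (the paths and the runserver command are assumptions based on the Django setup described in the question):
FROM python:3.7
RUN pip install pipenv
WORKDIR /app
COPY Pipfile Pipfile.lock /app/
# --deploy fails the build if Pipfile.lock is out of date;
# without --system the packages go into pipenv's managed virtualenv
RUN pipenv install --deploy
COPY . /app
CMD ["pipenv", "run", "python", "manage.py", "runserver", "0.0.0.0:8000"]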

Creating docker image that executes commands in python virtualenv

I have the following problem ...
I want to create a docker image on which a python virtual environment is created. Then I want to be able to do the following two things:
Run docker run -it <image> to start an interactive shell in this virtual environment.
Run docker run <image> <command> (such as python --version) that is executed in said virtual environment.
I tried many things but it seems I'm not getting anywhere. My Dockerfile currently looks like this:
FROM ubuntu:16.04
RUN apt-get -y update && apt-get install -y python3 python-pip
RUN pip install virtualenv
RUN virtualenv -p python3.5 /venvs/myenv3.5
RUN . /venvs/myenv3.5/bin/activate && pip install numpy
I tried messing around with ENTRYPOINT and CMD but didn't get anywhere. By adding the line CMD . /venvs/myenv3.5/bin/activate; /bin/bash I was able to start an interactive bash session in the environment, but running docker run <image> python --version shows that commands like that are not executed in said environment.
Is there a way to do this?
You can use the /venvs/myenv3.5/bin/python executable instead of the main python. This will execute python from within that virtual environment. You can do this by adding ENV PATH /venvs/myenv3.5/bin:$PATH as you mentioned in the comments, or by using an exec-form entrypoint in the Dockerfile (exec form is needed so that the arguments you pass to docker run are appended):
ENTRYPOINT ["/venvs/myenv3.5/bin/python"]
Now when you run your image, your virtualenv python will be executed by default:
$ docker run -it <image> --version
Python 3.5.2
If you need to get a shell in this image, you can override the entrypoint:
$ docker run -it --entrypoint /bin/bash <image>
/ #
You can also use /venvs/myenv3.5/bin/pip to install things into the virtualenv:
RUN /venvs/myenv3.5/bin/pip install numpy
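Putting the pieces together, a sketch of the complete Dockerfile (numpy and the Python version come from the question; the exec-form ENTRYPOINT is what makes docker run <image> --version work):
FROM ubuntu:16.04
RUN apt-get -y update && apt-get install -y python3 python-pip
RUN pip install virtualenv
RUN virtualenv -p python3.5 /venvs/myenv3.5
# install into the venv by calling its pip directly
RUN /venvs/myenv3.5/bin/pip install numpy
# exec form, so arguments given to `docker run` are appended
ENTRYPOINT ["/venvs/myenv3.5/bin/python"]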
