I have the following problem ...
I want to create a Docker image in which a Python virtual environment is created. Then I want to be able to do the following two things:
1. Run docker run -it <image> to start an interactive shell in this virtual environment.
2. Run docker run <image> <command> (such as python --version) so that the command is executed in said virtual environment.
I have tried many things, but I don't seem to be getting anywhere. My Dockerfile currently looks like this:
FROM ubuntu:16.04
RUN apt-get -y update && apt-get install -y python3 python-pip
RUN pip install virtualenv
RUN virtualenv -p python3.5 /venvs/myenv3.5
RUN . /venvs/myenv3.5/bin/activate && pip install numpy
I tried messing around with ENTRYPOINT and CMD but got nowhere. By adding the following line: CMD . /venvs/myenv3.5/bin/activate; /bin/bash I was able to start an interactive bash session in the environment, but running docker run <image> python --version shows that commands like that are not executed in said environment.
Is there a way to do this?
You can use the /venvs/myenv3.5/bin/python executable instead of the main python; this will execute Python within that virtual environment. You can do this by adding ENV PATH /venvs/myenv3.5/bin:$PATH as you mentioned in the comments, or by using an entrypoint in the Dockerfile:
ENTRYPOINT ["/venvs/myenv3.5/bin/python"]
Now when you run your image, your virtualenv's python will be executed by default (note the exec form of ENTRYPOINT; the shell form would ignore arguments passed to docker run):
$ docker run -it <image> --version
Python 3.5.2
If you need to get a shell on this image, you can override the entrypoint:
$ docker run -it --entrypoint /bin/bash <image>
/ #
You can also use /venvs/myenv3.5/bin/pip to install things into the virtualenv:
RUN /venvs/myenv3.5/bin/pip install numpy
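Putting the pieces together, here is a minimal sketch of a complete Dockerfile using the PATH approach, assuming the same paths as in the question. With it, docker run -it <image> drops you into a bash shell with the virtualenv active, and docker run <image> python --version runs the virtualenv's interpreter:
FROM ubuntu:16.04
RUN apt-get -y update && apt-get install -y python3 python-pip
RUN pip install virtualenv
RUN virtualenv -p python3.5 /venvs/myenv3.5
# Put the virtualenv first on PATH so every later command, interactive
# or not, picks up its python and pip.
ENV PATH /venvs/myenv3.5/bin:$PATH
RUN pip install numpy
# Default to an interactive shell; any command passed to docker run
# replaces it and still sees the virtualenv on PATH.
CMD ["/bin/bash"]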
Related
I have Python code that uses tqdm. My bash script builds the Docker image and runs the container, but I can't see any output from the container (in the CLI).
#!/bin/sh
docker build . -t traffic
docker run -d --name traffic_con traffic
docker wait traffic_con
docker cp -a traffic_con:/usr/TrafficMannager/out/data/. ./out/data/
docker rm traffic_con
docker rmi traffic
I've tried to run the container in interactive mode (-it), but it throws an error.
[EDIT:]
Dockerfile:
FROM cityflowproject/cityflow
# Create a folder we'll work in
WORKDIR /usr/TrafficMannager
# Upgrade installed packages
RUN apt-get update && apt-get upgrade -y && apt-get clean
# Install vim to open & edit code/text files
RUN apt-get install -y vim
# Install all Python code dependencies
RUN pip install gym && \
pip install numpy && \
pip install IPython && \
pip install torch && \
python -m pip install python-dotenv &&\
pip install tqdm
COPY . .
CMD chmod u+x script/container_instructions.sh; ./script/container_instructions.sh
container_instructions.sh:
#!/bin/sh
pip install lib/extern/CityFlow/.
python main.py
You run the Docker container in the background, then immediately docker wait for it. If you run the container in the foreground instead, you'll see its output on stdout, and the docker run command will complete when the container exits.
docker run --name traffic_con traffic # without -d
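If you do want to keep -d (say, to kick off several containers at once), you can still stream the container's output afterwards with docker logs; a small sketch:
docker run -d --name traffic_con traffic
docker logs -f traffic_con   # follows stdout/stderr until the container exits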
Given the wrapper script you show, you may find this setup much easier to run in a Python virtual environment. Ignore all the Docker parts and run:
python3 -m venv venv
./venv/bin/pip install gym numpy IPython torch python-dotenv tqdm lib/extern/CityFlow
./venv/bin/python3 main.py
The script will directly write to ./out/data on the host system, without the long-winded script needed to copy data out.
If you really do need a container here, you can also mount the output directory into the container to avoid the manual copy step.
#!/bin/sh
docker build . -t traffic
docker run --rm -v "$PWD/out/data:/usr/TrafficMannager/out/data" traffic
docker rmi traffic
I'm trying to run a python application inside a container. I keep getting:
"/bin/sh: 1: python3: not found
I've tried many different iterations, including using python as my base image, with different failures.
This time I built an Ubuntu container, ran the commands one at a time on the command line, and they work in bash. But when I run the container it still can't seem to find Python.
Here's what I currently have for my Dockerfile:
FROM ubuntu
CMD mkdir pong
WORKDIR /pong
CMD apt-get update
CMD apt-get install python3 -y
CMD apt-get install python3-pip -y
COPY . /pong
CMD pip3 install pipenv
CMD pip3 install pyxel
CMD python3 main.py
I've spent a lot of time on the docker documentation too, so forgive me for posting this simple question, but I'm stumped. Thank you in advance!
Replace all the CMD instructions with RUN, except the last one, which should be an ENTRYPOINT.
FROM ubuntu
RUN mkdir pong
WORKDIR /pong
RUN apt-get update
RUN apt-get install python3 -y
RUN apt-get install python3-pip -y
COPY . /pong
RUN pip3 install pipenv
RUN pip3 install pyxel
ENTRYPOINT ["python3", "main.py"]
The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
For more details:
CMD
RUN
ENTRYPOINT
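To see how the two work together: in exec form, CMD supplies default arguments that are appended to the ENTRYPOINT and can be overridden on the docker run command line. A small sketch (the --help flag is just an arbitrary example):
ENTRYPOINT ["python3", "main.py"]
CMD ["--help"]
# docker run <image>            ->  python3 main.py --help
# docker run <image> --verbose  ->  python3 main.py --verbose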
The sh shell does not know the full path of the python3 executable.
This should work better:
CMD /usr/bin/python3 main.py
Also, note that for the container not to halt, you need to keep the main.py process constantly running in the foreground. If it exits, the container stops.
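As a hypothetical illustration, a main.py shaped like this keeps the container alive, because the foreground process never exits:
# main.py (hypothetical): the container runs as long as this loop does.
import time

while True:
    print("still working...", flush=True)
    time.sleep(60)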
I have a Dockerfile in which I try to activate a Python virtualenv, after which all dependencies should be installed within this env. However, everything still gets installed globally. I used different approaches and none of them worked. I also do not get any errors. Where is the problem?
1.
ENV PATH $PATH:env/bin
2.
ENV PATH $PATH:env/bin/activate
3.
RUN . env/bin/activate
I also followed an example of a Dockerfile config for the python-runtime image on Google Cloud, which is basically the same stuff as above.
Setting these environment variables is the same as running source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
Additionally, what does ENV VIRTUAL_ENV /env mean and how is it used?
You don't need to use virtualenv inside a Docker Container.
virtualenv is used for dependency isolation. You want to prevent any dependencies or packages installed from leaking between applications. Docker achieves the same thing: it isolates your dependencies within your container and prevents leaks between containers and between applications.
Therefore, there is no point in using virtualenv inside a Docker container unless you are running multiple apps in the same container; if that's the case, I'd say you're doing something wrong, and the solution would be to architect your app in a better way and split it up into multiple containers.
EDIT 2022: Given that this answer gets a lot of views, I thought it might make sense to add that, four years later, I realized there actually are valid uses of virtual environments in Docker images, especially when doing multi-stage builds:
FROM python:3.9-slim as compiler
ENV PYTHONUNBUFFERED 1
WORKDIR /app/
RUN python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./requirements.txt /app/requirements.txt
RUN pip install -Ur requirements.txt
FROM python:3.9-slim as runner
WORKDIR /app/
COPY --from=compiler /opt/venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . /app/
CMD ["python", "app.py", ]
In the Dockerfile example above, we create a virtualenv at /opt/venv and activate it using an ENV statement; we then install all dependencies into /opt/venv and can simply copy this folder into the runner stage of our build. This can help with minimizing the Docker image size.
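Assuming the Dockerfile above is saved as ./Dockerfile, building and running the two-stage image is the usual pair of commands (the tag myapp is arbitrary):
docker build -t myapp .
docker run --rm myapp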
There are perfectly valid reasons for using a virtualenv within a container.
You don't necessarily need to activate the virtualenv to install software or use it. Try invoking the executables directly from the virtualenv's bin directory instead:
FROM python:2.7
RUN virtualenv /ve
RUN /ve/bin/pip install somepackage
CMD ["/ve/bin/python", "yourcode.py"]
You may also just set the PATH environment variable so that all further Python commands will use the binaries within the virtualenv as described in https://pythonspeed.com/articles/activate-virtualenv-dockerfile/
FROM python:2.7
RUN virtualenv /ve
ENV PATH="/ve/bin:$PATH"
RUN pip install somepackage
CMD ["python", "yourcode.py"]
Setting these variables
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
is not exactly the same as just running
RUN . env/bin/activate
because activation inside a single RUN will not affect any lines below that RUN in the Dockerfile. But setting environment variables through ENV will activate your virtual environment for all subsequent RUN commands.
Look at this example:
RUN virtualenv /env # setup env
RUN which python # -> /usr/bin/python
RUN . /env/bin/activate && which python # -> /env/bin/python
RUN which python # -> /usr/bin/python
So if you really need to activate virtualenv for the whole Dockerfile you need to do something like this:
RUN virtualenv /env
ENV VIRTUAL_ENV /env # activating environment
ENV PATH /env/bin:$PATH # activating environment
RUN which python # -> /env/bin/python
Although I agree with Marcus that this is not the way to do it with Docker, you can do what you want.
Using Docker's RUN instruction directly will not give you what you want, because it will not execute your instructions from within the virtual environment. Instead, squeeze the instructions into a single line executed with /bin/bash. The following Dockerfile worked for me:
FROM python:2.7
RUN virtualenv virtual
RUN /bin/bash -c "source /virtual/bin/activate && pip install pyserial && deactivate"
...
This should install the pyserial module only in the virtual environment.
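If you want to convince yourself of that, a throwaway check along these lines can be added to the Dockerfile (a sketch; pip show exits non-zero when the package is absent, so the || echo keeps the build going):
RUN /virtual/bin/pip show pyserial                      # found in the venv
RUN pip show pyserial || echo "not installed globally"  # global pip lacks it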
If you are using Python 3.x:
RUN pip install virtualenv
RUN virtualenv -p python3.5 virtual
RUN /bin/bash -c "source /virtual/bin/activate"
If you are using Python 2.x:
RUN pip install virtualenv
RUN virtualenv virtual
RUN /bin/bash -c "source /virtual/bin/activate"
Consider migrating to pipenv - a tool which automates virtualenv and pip interactions for you. It's recommended by the PyPA.
Reproducing an environment via pipenv in a Docker image is very simple:
FROM python:3.7
RUN pip install pipenv
COPY src/Pipfile* ./
RUN pipenv install --deploy
...
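Note that pipenv install --deploy creates its own virtualenv inside the image, so the container command should go through pipenv too; a sketch, where main.py is a hypothetical entry point (alternatively, pipenv install --system --deploy installs into the image's global interpreter):
CMD ["pipenv", "run", "python", "main.py"]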
I'm trying to build a Docker image that runs a Python script which generates a .csv file. I want to have the .csv file on my local machine before the container dies. The following is my Dockerfile; can anyone please tell me how to get the .csv file out without running the container again?
FROM ubuntu:18.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    python3 \
    git \
    python3-pip
RUN pip3 install --upgrade tensorflow
RUN pip3 install opencv-python
RUN pip3 install keras
RUN pip3 install psutil
RUN pip3 install py-cpuinfo
RUN pip3 install https://github.com/OlafenwaMoses/ImageAI/releases/download/2.0.3/imageai-2.0.3-py3-none-any.whl
WORKDIR /data/code/
COPY . /data/
RUN ls /data/code/ | grep model
RUN chmod +x /data/code/image_prediction.py
CMD ["python3", "./image_prediction.py", "-OPTIONAL_FLAG"]
You can have your output sent directly to your host. You have to set this up when you run your Docker image. Here are the steps:
Create a folder, for example on your desktop, and put your input data in it. Call it input_dir. The full path of this folder will look like /path/to/input_dir/ (you can get it by going into the folder and typing pwd in the terminal).
Create another folder for the output of your script on your host machine, for example on the desktop. Call it output_dir. The full path of this folder is /path/to/output_dir/.
Then run your Docker image like this:
docker run -it -v /path/to/input_dir/:/data/ -v /path/to/output_dir/:/data/output/ my-image bash
Once done, your inputs will automatically be available inside the container under /data/, and make sure your script writes its output to /data/output.
When your script finishes, you will find your output on your host machine in the folder /path/to/output_dir/.
Mount a volume from your host; that will work.
If the .csv is created at /path/in/container inside the container, then run:
docker run -itd -v /path/on/host/:/path/in/container <image>
This should make your .csv available at /path/on/host/ on your host, even after your container dies.