I am using Shippable for two reasons: to automate the build of my docker images and to pass encrypted environment variables. I am able to automate the builds but I can't pass the variables.
I start by entering the environment variable into the Shippable text box in the project settings:
SECRET_KEY=123456
I click the 'encrypt' button, and then Shippable returns:
- secure : hash123abc...
I put this hash into my shippable.yml file. It looks like:
language: python
python:
  - 2.7
build_image: myusername/myimagename
env:
  - secure: hash123abc...
build:
  post_ci:
    - docker login -u myusername -p mypassword
    - docker build -t myusername/myimagename:latest .
    - docker push myusername/myimagename:latest
integrations:
  hub:
    - integrationName: myintegrationname
      type: docker
      branches:
        only:
          - master
The automated build works! But if I try:
sudo docker run myusername/myimagename:latest echo $SECRET_KEY
I get nothing.
My Dockerfile which sets the environment variables (in this case SECRET_KEY) looks like this:
FROM python:2.7.11
RUN apt-get update
RUN apt-get install -y git
RUN git clone https://github.com/myusername/myrepo.git
ENV SECRET_KEY=$SECRET_KEY
It might be helpful to explain my logic as I see it, because my thinking may be the issue if it's not in the code:
The Shippable project build is triggered (by a repo push or manually). In shippable.yml it does some things:
- builds the initial image
- sets the SECRET_KEY environment variable
- builds the new image based on the Dockerfile
  - the Dockerfile sets the env variable SECRET_KEY to the SECRET_KEY set by the .yml two steps earlier
- pushes the image
I'm thinking that now I've set an environment variable in my image I can now access it. But I get nothing. What's the issue here?
Thanks @Alex Hall for working this out with me!
It turns out that passing environment variables to the Docker build in this setting must be done with a build argument flag. So in my shippable.yml I changed:
- docker build -t myusername/myimagename:latest .
to
- docker build --build-arg SECRET_KEY=$SECRET_KEY -t myusername/myimagename:latest .
Then in my Dockerfile I added:
ARG SECRET_KEY
RUN echo $SECRET_KEY > env_file
Lo and behold, the key was in env_file.
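One gotcha worth spelling out: in the earlier test, sudo docker run myusername/myimagename:latest echo $SECRET_KEY, the $SECRET_KEY is expanded by the host shell before Docker ever runs, so it can't tell you anything about the image. A minimal sketch of a check that expands the variable inside the container instead (image name and file path assumed from the answer above):

docker run --rm myusername/myimagename:latest sh -c 'cat env_file && echo "SECRET_KEY at runtime: $SECRET_KEY"'

Note that the build argument shows up in env_file (written at build time) but not as a runtime environment variable, unless the Dockerfile also declares ENV SECRET_KEY=$SECRET_KEY after the ARG line.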
Related
I would like to set a list of environment variables as specified in an env.list file during the build process, i.e. have a respective command in the Dockerfile. Like this:
FROM python:3.9.4-slim-buster
COPY env.list env.list
# Here I need a corresponding command:
ENV env.list
The file looks like this:
FOO=foo
BAR=bar
My book of already failed attempts / ruled out options:
On Linux, one can usually set environment variables from a file env.list by running:
source env.list
export $(cut -d= -f1 env.list)
However, executing those commands as RUN in the Dockerfile does not work because env variables defined using RUN export FOO=foo are not persisted across different stages of the image.
I do not want to explicitly set those variables in the Dockerfile using ENV FOO=foo because they contain login credentials. It's also easier to automate/maintain the project if the variables are defined in one place.
I also don't want to set those variables during docker run --env-file env.list because I need them for a development container which does not "run".
The ENV directive does not allow parsing a file like env.list, as pointed out. But even if it did, the resulting environment variables would still be baked into the final image, passwords included.
The correct approach, to my knowledge, is to set the passwords at runtime with docker run, either when this image runs or when a child image runs.
If the credentials are required while the image is built, I would pass them via the ARG directive so that they can be referenced as shell variables in the Dockerfile but are not saved in the final image:
FROM image
ARG VAR
RUN echo ${VAR}
etc...
which can run as:
docker build --build-arg VAR=value ...
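As a quick sanity check of the claim that ARG values are not part of the final image's environment, something like this (hypothetical image name myimage) prints an empty value at runtime:

docker build --build-arg VAR=value -t myimage .
docker run --rm myimage sh -c 'echo "VAR at runtime: $VAR"'

The value is available to RUN instructions during the build but never becomes a runtime environment variable. Keep in mind build args are still not meant for secrets, since their values can show up in docker history for the layers that used them.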
If you use docker-compose, you can pass a variables.env file.
docker-compose.yml:
version: "3.7"
services:
service_name:
build: folder/.
ports:
- '5001:5000'
env_file:
- folder/variables.env
folder/Dockerfile
FROM python:3.9.4-slim-buster
folder/variables.env
FOO=foo
BAR=bar
For more info on compose: https://docs.docker.com/compose/
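To check that the variables actually arrive (env_file is applied when the container starts, not while the image builds), something along these lines should work, assuming the service name above:

docker-compose build
docker-compose run --rm service_name printenv FOO BAR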
I have a Python script that creates a folder and writes a file in that folder. I can open the file and see its contents, but unfortunately I cannot edit it. I tried adding the command RUN chmod -R 777 . but that didn't help either. In the created files and folders I see a lock sign.
I have been able to recreate the same on a small demo. The contents are as follows -
demo.py
from pathlib import Path
Path("./created_folder").mkdir(parents=True, exist_ok=True)
with open("./created_folder/dummy.txt", "w") as f:
    f.write("Cannot edit the contents of this file")
Dockerfile
FROM python:buster
COPY . ./data/
WORKDIR /data
RUN chmod -R 777 .
CMD ["python", "demo.py"]
docker-compose.yaml
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
After creating these files, run docker-compose up --build, then try to edit and save the created file dummy.txt - it should fail.
Any idea how to make sure that the created files can be edited and saved on the host?
EDIT:
I am running docker-compose rootless. I had read that it is not a good idea to run docker with sudo, so I followed the official instructions on adding my user to the docker group, etc.
I actually run the command docker-compose up --build, not just docker compose up.
I am on Ubuntu 20.04.
The username is the same in both groups:
~$ grep /etc/group -e "docker"
docker:x:999:username
~$ grep /etc/group -e "sudo"
sudo:x:27:username
Tried using PUID and PGID environment variables... but still the same issue.
Current docker-compose file -
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
environment:
- PUID=1000
- PGID=1000
It seems like the Docker group is different from your user group and you're misunderstanding the way Docker RUN works.
Here's what happens when you run docker build:
- You pull the python:buster image
- You copy the contents of the current directory into the docker image
- You set the work directory to the /data directory
- You set the permissions of the /data directory to 777
- Finally, the CMD is set to indicate what program should run
When you do a docker-compose up, the RUN command has no further effect: it's a Dockerfile instruction, not a runtime command. When your container runs, it writes the files with the container's user/group, which your host user doesn't have permission to edit.
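A quick way to confirm this diagnosis is to check the numeric owner of the created files on the host, since the container's user names won't mean anything there:

ls -ln created_folder   # the numeric owner will likely be UID 0 (root) rather than your own UID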
A Docker container runs in an isolated environment, so the UID/GID to user name/group name mapping is not shared with the processes inside the container. You are facing two problems:
1. Files created inside the container are owned by root:root (by default, or by a UID chosen by the image).
2. Files are created with permissions 644 or 755, which are not writable by the host user.
You could look at the images provided by linuxserver.io to learn how they solve these two problems, for example qBittorrent. They provide the environment variables PUID and PGID to solve problem 1, and a UMASK option to solve problem 2.
Below is the related implementation in their custom Ubuntu base image. You could use that as your image base as well.
The place where PUID and PGID are handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1/root/etc/cont-init.d/10-adduser
The place where UMASK is handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1438aa81e68a5d87eff39ade0f1c879/root/usr/bin/with-contenv#L2-L5
Personally, I'd prefer setting PUID and PGID to the values in accordance with the host (you can get them from id -u and id -g) and NOT touching UMASK unless absolutely necessary.
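If you'd rather not adopt the linuxserver.io base image, a lighter-weight sketch is to run the container as your host user so that new files on the bind mount are owned by you (image name assumed from the demo above):

docker run --rm --user "$(id -u):$(id -g)" -v "$PWD:/data" -w /data <your-built-image> python demo.py

In docker-compose, the equivalent is a user: "1000:1000" entry on the service (matching your id -u and id -g). Some images misbehave when run as a non-root user, so treat this as a trade-off rather than a universal fix.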
I am running a Python application that reads two paths from Windows env vars and then uses the executables in those paths to do OCR on some documents. Since the POPPLER and TESSERACT env vars are already set in Windows, this Python snippet works for me:
popplerPath = os.environ.get('POPPLER')
tesseractPath = os.environ.get('TESSERACT')
Now I am trying to dockerize the app, and, to my understanding, since my container will need access to those paths, I need to mount them using VOLUME during run. My dockerfile looks like this:
FROM python:3.7.7-slim
WORKDIR ./
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY documents/ .
COPY src/ ./src
CMD [ "python", "./src/run.py" ]
I build the image using:
docker build -t ocr .
And I try to run my container using:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% ocr
... but my app still gets a None value for these paths and can't use the executable files. Is my approach correct and beyond that, is it a good dev practice?
See the docs; the switch for an environment variable is -e:
$ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
and in dockerfile, you can use
ENV FOO=/bar
If I understand your statement correctly, your paths are mounted in the container at the same path as on the host. The only problem is your Python script, which expects the paths to be provided by environment variables. These will not exist unless you pass them from your host system to your container.
Once you've verified that your volume is mounted correctly with -v, you can try:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% --env POPPLER=%POPPLER% --env TESSERACT=%TESSERACT% ocr
or, if you always run it this way, you can consider putting them in your Dockerfile to save some keystrokes.
Any executable you call must be built into the image. Containers can't usually call executables on the host or in other containers. In the specific example you show, a Linux container can't run a Windows executable, even if you do use a bind mount to inject it into the container.
The "slim" python images are built on Debian GNU/Linux, and you need to use its APT tool to install these executable dependencies in your Dockerfile. (https://www.debian.org/distrib/packages has a search box to help you find the right package name; Ubuntu Linux also uses Debian packages.)
FROM python:3.7-slim
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install -y \
poppler-utils \
tesseract-ocr-all
COPY requirements.txt .
...
I'd suggest putting reasonable defaults in your code if these environment variables aren't set. The apt-get install command will put the executables on the system PATH inside the image.
popplerPath = os.environ.get('POPPLER', 'poppler')
tesseractPath = os.environ.get('TESSERACT', 'tesseract')
If you really need them as environment variables you could use the Dockerfile ENV directive
ENV POPPLER=poppler TESSERACT=tesseract
Environment variables from the host don't automatically get passed through to the container; you need a Dockerfile ENV or docker run -e option. Also remember that the container has an isolated filesystem (and Windows-syntax paths don't make sense in Linux containers) so these environment variables would need to be container paths, the second half of your proposed docker run -v option.
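If the script must keep reading those variables, remember they have to point at container paths. With the apt-get install above, the binaries land on the standard Debian path, so a hedged example would be (paths assumed; adjust to where the packages actually install):

docker run -e POPPLER=/usr/bin -e TESSERACT=/usr/bin ocr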
Don't get me wrong, virtualenv (or pyenv) is a great tool, and the whole concept of virtual environments is a great improvement on developer environments, mitigating the whole Snowflake Server anti-pattern.
But nowadays Docker containers are everywhere (for good reasons) and it feels odd having your application running on a container but also setting up a local virtual environment for running tests and such in the IDE.
I wonder if there's a way we could leverage Docker containers for this purpose?
Summary
Yes, there's a way to achieve this. By configuring a remote Python interpreter and a "sidecar" Docker container.
This Docker container will have:
- A volume mounted to your source code (henceforth, /code)
- SSH setup
- SSH enabled for the root:password credentials and the root user allowed to login
Get the sidecar container ready
The idea here is to duplicate your app's container and add SSH abilities to it. We'll use docker-compose to achieve this:
docker-compose.yml:
version: '3.3'
services:
  dev:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - 127.0.0.1:9922:22
    volumes:
      - .:/code/
    environment:
      DEV: 'True'
    env_file: local.env
Dockerfile.dev
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /code
# Copying the requirements, this is needed because at this point the volume isn't mounted yet
COPY requirements.txt /code/
# Installing requirements, if you don't use this, you should.
# More info: https://pip.pypa.io/en/stable/user_guide/
RUN pip install -r requirements.txt
# Similar to the above, but with just the development-specific requirements
COPY requirements-dev.txt /code/
RUN pip install -r requirements-dev.txt
# Setup SSH with secure root login
RUN apt-get update \
&& apt-get install -y openssh-server netcat \
&& mkdir /var/run/sshd \
&& echo 'root:password' | chpasswd \
&& sed -i 's/\#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Setting up PyCharm Professional Edition
Preferences (CMD + ,) > Project Settings > Project Interpreter
Click on the gear icon next to the "Project Interpreter" dropdown > Add
Select "SSH Interpreter" > Host: localhost, Port: 9922, Username: root > Password: password > Interpreter: /usr/local/bin/python, Sync folders: Project Root -> /code, Disable "Automatically upload..."
Confirm the changes and wait for PyCharm to update the indexes
Setting up Visual Studio Code
Install the Python extension
Install the Remote - Containers extension
Open the Command Palette and type Remote-Containers, then select Attach to Running Container... and select the running Docker container
VS Code will restart and reload
On the Explorer sidebar, click the open a folder button and then enter /code (this will be loaded from the remote container)
On the Extensions sidebar, select the Python extension and install it on the container
When prompted for which interpreter to use, select /usr/local/bin/python
Open the Command Palette and type Python: Configure Tests, then select the unittest framework
TDD Enablement
Now that you can run your tests directly from your IDE, use it to try out Test-Driven Development! One of its key points is a fast feedback loop, and not having to wait for the full test suite to finish just to see whether your new test passes is great! Just write it and run it right away!
Reference
The contents of this answer are also available in this GIST.
I have been reading this tutorial:
https://prakhar.me/docker-curriculum/
along with other tutorials and the Docker docs, and I am still not completely clear on how to do this task.
The problem
My local machine is running Mac OS X, and I would like to set up a development environment for a Python project. In this project I need to call an API from a Docker image, bamos/openface. The project also has some dependencies such as yaml, etc. If I just mount my local project into openface, i.e.:
docker run -v path/to/project:/root/project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
Then I need to install yaml and other dependencies, and every time I exit the container the installations would be lost. Additionally, it is also much slower for some reason. So the right way to do this is using Docker compose, but I am not sure how to proceed from here.
UPDATE
In response to the comments, I will now update the problem:
Right now my Dockerfile looks like this:
FROM continuumio/anaconda
ADD . /face-off
WORKDIR /face-off
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "app.py" ]
It is important that I build from anaconda since a lot of my code will use numpy and scipy. Now I also need bamos/openface, so I tried adding that to my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/face-off
  openface:
    build: bamos/openface
However, I am getting the error:
build path path/to/face-off/bamos/openface either does not exist, is not accessible, or is not a valid URL
So I need to pass bamos/openface the right way so I can build a container with it. Right now bamos/openface is listed when I do docker images.
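Not a definitive fix, but since bamos/openface already exists as an image (it shows up in docker images) rather than a local build context, one likely adjustment is to reference it with image: instead of build:, which expects a path or Git URL. Roughly, as a fragment of the services section (ports and mount borrowed from the docker run command above):

  openface:
    image: bamos/openface
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - .:/root/project
    stdin_open: true   # mirrors the -i flag from the docker run command
    tty: true          # mirrors the -t flag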