I would like to set a list of environment variables, as specified in an env.list file, during the build process, i.e. have a corresponding command in the Dockerfile, like this:
FROM python:3.9.4-slim-buster
COPY env.list env.list
# Here I need a corresponding command:
ENV env.list
The file looks like this:
FOO=foo
BAR=bar
My book of already failed attempts / ruled out options:
On Linux, one can usually set environment variables from a file env.list by running:
source env.list
export $(cut -d= -f1 env.list)
However, executing those commands via RUN in the Dockerfile does not work, because environment variables defined with RUN export FOO=foo do not persist across Dockerfile instructions (each RUN executes in its own shell), so they are gone in later layers of the image.
I do not want to explicitly set those variables in the Dockerfile using ENV FOO=foo because they contain login credentials. It's also easier to automate/maintain the project if the variables are defined in one place.
I also don't want to set those variables during docker run --env-file env.list because I need them for a development container which does not "run".
The ENV directive cannot parse a file like env.list, as pointed out. But even if it could, the resulting environment variables would still be saved in the final image, passwords included.
The correct approach to my knowledge is to set the passwords at runtime with "docker run", when this image runs, or when the child image runs via "docker run".
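For example, reusing the env.list from the question (the image name myimage is just a placeholder):
docker run --env-file env.list myimage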
If the credentials are required while the image is built, I would pass them via the ARG directive so that they can be referenced as shell variables in the Dockerfile but are not saved in the final image:
FROM image
ARG VAR
RUN echo ${VAR}
etc...
which can run as:
docker build --build-arg VAR=value ...
If you use docker-compose, you can pass a variables.env file.
docker-compose.yml:
version: "3.7"
services:
service_name:
build: folder/.
ports:
- '5001:5000'
env_file:
- folder/variables.env
folder/Dockerfile:
FROM python:3.9.4-slim-buster
folder/variables.env:
FOO=foo
BAR=bar
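To check that the variables actually reach the container, something like this should work (a quick sketch using the service name from the compose file above):
docker-compose run --rm service_name env | grep -E 'FOO|BAR'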
For more info on compose: https://docs.docker.com/compose/
I have added code to a repo to build a Cloud Run service. The structure is like this:
I want to run b.py in cr.
Is there any way I can deploy cr without just copying b.py into the cr directory? (I don't want to do that, since there are lots of other folders and files that b.py depends on.)
The problem is due to the Dockerfile being unable to see folders above its own directory.
Also, how would e.g. api.py import from b.py?
TIA you lovely people.
You have to build your container with the correct parameters, and therefore not use gcloud run deploy --source=. ..., which builds your container with default parameters.
With docker, the Dockerfile is expected by default at PATH/Dockerfile (where PATH is the build context). But you can override that default behavior with the -f parameter to indicate the Dockerfile location.
For example, you can do that
cd top
docker build -f ./a/cr/Dockerfile .
Like that, you give the docker build runtime the current path as the build context (here top; the current path is represented by the dot at the end, .).
And you also specify the full path of the Dockerfile inside this current path.
Because of that, you have to update your Dockerfile: COPY . . will no longer copy just the cr path, but the whole top directory.
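For example, the COPY could be narrowed to the paths that are actually needed (a sketch only; the exact location of b.py under top is an assumption, adjust it to your layout):
# paths are relative to the build context, i.e. the top directory
COPY a/cr/ ./
COPY b.py ./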
EDIT 1
To validate my answer, I did exactly what you ask in your comment. I used gcloud builds submit, but:
I ran the command from the top directory
I created a cloudbuild.yaml file
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - -c
      - |
        docker build -f ./a/cr/Dockerfile -t <YOUR TAG> .
        docker push <YOUR TAG>
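With that file in place, the build can be submitted from the top directory with something like:
gcloud builds submit --config cloudbuild.yaml .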
You can't perform a gcloud builds submit --tag <YOUR TAG> from the top directory if you don't have a Dockerfile in the root dir.
version: "3.4"
services:
app:
build:
context: ./
target: project_build
image: sig-project
environment:
PROJECT_NAME: "${PROJECT_NAME:-NullProject}"
command: /bin/bash -c "init_project.sh"
ports:
- 7777:7777
expose:
- 7777
volumes:
- ./$PROJECT_NAME:/app/$PROJECT_NAME
- .:/home/jovyan/work
working_dir: /app
entrypoint: "jupyter notebook --ip=0.0.0.0 --port=7777 --allow-root --no-browser"
Above is my docker-compose.yaml.
The command doesn't run, I get this error:
Unrecognized alias: 'c', it will have no effect
Furthermore, it runs the jupyter notebook out of /bin instead of /app.
If I change the command to
command: "init_project.sh"
it fails silently, and if try to do something complicated like:
command: python init_project.py
then it gives me this:
No such file or directory: /app/python
note: init_project.sh is just a bash script wrapper for init_project.py
So it seems that, for some reason I don't understand, the commands are run from within the /app directory but without a shell or bash.
I've been hitting my head against the wall trying to figure out what I'm missing here and I don't know what else to try.
I've found a variety of issues and discussions that are similar, but nothing seems to resolve it.
these are the contents of the working-dir /app:
#ls
Dockerfile docker-compose.yaml docker_compose.sh init_project.sh poetry.lock pyproject.toml
Makefile create_poetry_lock.sh docker_build.sh init_project.py install-poetry.py poetry_lock_update.sh test.sh
What am I missing?
Thanks!
Your compose file looks weird to me!
You either have a Docker image that the container will be created from, or you have a Dockerfile that builds the image, and the container is then created from that image.
Based on that:
Why do you have both image and build attributes in your compose file?
If the image is already available (e.g. PostgreSQL, RabbitMQ), then you don't need to build anything, just provide the image.
If there's no image, then please also add your Dockerfile to the question.
Why are you trying to run /bin/bash -c?
You can simply add:
bash /path/to/your/shell_script
Lastly, why don't you add your command in the Dockerfile with CMD?
What are init_project.sh and init_project.py? Maybe also share the content inside those files as well.
It might also be good to add the output of tree so we know from where the different commands are being executed.
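To illustrate the CMD suggestion above, here is a minimal sketch (the base image and file locations are assumptions, since the actual Dockerfile wasn't shared):
FROM python:3.9-slim
WORKDIR /app
# copy only what the init scripts need; adjust to the real project layout
COPY init_project.sh init_project.py ./
CMD ["bash", "init_project.sh"]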
I have a Python script that creates a folder and writes a file in that folder. I can open the file and see its contents, but unfortunately I cannot edit it. I tried adding the command RUN chmod -R 777 ., but that didn't help either. On the created files and folders I see a lock sign.
I have been able to recreate the same on a small demo. The contents are as follows -
demo.py
from pathlib import Path
Path("./created_folder").mkdir(parents=True, exist_ok=True)
with open("./created_folder/dummy.txt", "w") as f:
f.write("Cannot edit the contents of this file")
Dockerfile
FROM python:buster
COPY . ./data/
WORKDIR /data
RUN chmod -R 777 .
CMD ["python", "demo.py"]
docker-compose.yaml
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
After creating these files, run docker-compose up --build and see the results, then try to edit and save the created file dummy.txt - which should fail.
Any idea how to make sure that the created files can be edited and saved on the host?
EDIT:
I am running docker-compose rootless. I had read that it is not a good idea to run docker with sudo, so I followed the official instructions on adding my user to the docker group etc.
I actually run the command docker-compose up --build, not just docker compose up.
I am on Ubuntu 20.04.
The username is the same in both groups:
~$ grep /etc/group -e "docker"
docker:x:999:username
~$ grep /etc/group -e "sudo"
sudo:x:27:username
Tried using PUID and PGID environment variables... but still the same issue.
Current docker-compose file -
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
environment:
- PUID=1000
- PGID=1000
It seems like the Docker group is different from your user group, and you're misunderstanding the way the Docker RUN instruction works.
Here's what happens when you run docker build:
You pull the python:buster image
You copy the contents of the current directory into the Docker image
You set the work directory to the existing /data directory
You set the permissions of the /data directory to 777
Finally, the CMD is set to indicate what program should run
When you run docker-compose, the RUN command has no effect, because it's a Dockerfile build instruction and not a runtime command. When your container runs, it writes the files with the user/group of the Docker process (root by default), which your host user doesn't have permission to edit.
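You can confirm this on the host after a run, for example:
# run on the host, from the project directory, after docker-compose up --build
ls -l created_folder/dummy.txt    # owner and group will typically show up as root:root here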
A Docker container runs in an isolated environment, so the UID/GID to user name/group name mapping is not shared with the processes inside the container. You are facing two problems:
The files created inside the container are owned by root:root (by default, or by whatever UID the image chooses).
The files are created with permission 644 or 755, which is not editable by the host user.
You could refer to the images provided by linuxserver.io to learn how they solve these two problems, for example qBittorrent. They provide two environment variables, PUID and PGID, to solve problem 1, and a UMASK option to solve problem 2.
Below is the related implementation in their custom Ubuntu base image. You could use that as your image base as well.
Where PUID and PGID are handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1/root/etc/cont-init.d/10-adduser
Where UMASK is handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1438aa81e68a5d87eff39ade0f1c879/root/usr/bin/with-contenv#L2-L5
Personally, I'd prefer setting PUID and PGID to match the host user (you can get them from id -u and id -g) and NOT touching UMASK unless absolutely necessary.
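As a rough sketch of how that looks in the relevant part of the compose file (assuming the image is rebuilt on one of the linuxserver.io base images linked above; plain python:buster ignores these variables):
services:
  python:
    build:
      context: .
      dockerfile: Dockerfile   # FROM a linuxserver.io base image (assumed)
    volumes:
      - .:/data/
    environment:
      - PUID=1000   # match `id -u` on the host
      - PGID=1000   # match `id -g` on the host
      - UMASK=022   # optional; only set it if you really need to change permissions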
I have been reading this tutorial:
https://prakhar.me/docker-curriculum/
along with other tutorials and the Docker docs, and I am still not completely clear on how to do this task.
The problem
My local machine is running Mac OS X, and I would like to set up a development environment for a Python project. In this project I need to call an API from the Docker repo bamos/openface. The project also has some dependencies such as yaml, etc. If I just mount my local project into openface, i.e.:
docker run -v path/to/project:/root/project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
Then I need to install yaml and the other dependencies, and every time I exit the container those installations are lost. Additionally, it is also much slower for some reason. So the right way to do this seems to be Docker Compose, but I am not sure how to proceed from here.
UPDATE
In response to the comments, I will now update the problem:
Right now my Dockerfile looks like this:
FROM continuumio/anaconda
ADD . /face-off
WORKDIR /face-off
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "app.py" ]
It is important that I build from anaconda since a lot of my code will use numpy and scipy. Now I also need bamos/openface, so I tried adding that to my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/face-off
  openface:
    build: bamos/openface
However, I am getting error:
build path path/to/face-off/bamos/openface either does not exist, is not accessible, or is not a valid URL
So I need to pass bamos/openface the right way so I can build a container with it. Right now bamos/openface is listed when I do docker images.
I am using Shippable for two reasons: to automate the build of my docker images and to pass encrypted environment variables. I am able to automate the builds but I can't pass the variables.
I start by entering the environment variable into the Shippable text box in the project settings:
SECRET_KEY=123456
I click the 'encrypt' button and then shippable returns:
- secure : hash123abc...
I put this hash into my shippable.yml file. It looks like:
language: python
python:
  - 2.7
build_image: myusername/myimagename
env:
  - secure: hash123abc...
build:
  post_ci:
    - docker login -u myusername -p mypassword
    - docker build -t myusername/myimagename:latest .
    - docker push myusername/myimagename:latest
integrations:
  hub:
    - integrationName: myintegrationname
      type: docker
      branches:
        only:
          - master
The automated build works! But if I try:
sudo docker run myusername/myimagename:latest echo $SECRET_KEY
I get nothing.
My Dockerfile which sets the environment variables (in this case SECRET_KEY) looks like this:
FROM python:2.7.11
RUN apt-get update
RUN apt-get install -y git
RUN git clone https://github.com/myusername/myrepo.git
ENV SECRET_KEY=$SECRET_KEY
It might be helpful to explain MY logic as I see it, because my thinking may be the issue if it's not in the code:
The shippable project build is triggered (by a repo push or manually). In shippable.yml it does some things:
builds the initial image
sets the SECRET_KEY environment variable
builds the new image based on the Dockerfile
the Dockerfile sets the env variable SECRET_KEY to the SECRET_KEY set by the .yml two steps earlier
pushes the image
I'm thinking that, now that I've set an environment variable in my image, I can access it. But I get nothing. What's the issue here?
Thanks @Alex Hall for working this out with me!
It turns out that passing environment variables with Docker in this setting must be done with a simple flag at build time. So in my shippable.yml I changed:
- docker build -t myusername/myimagename:latest .
to
- docker build --build-arg SECRET_KEY=$SECRET_KEY -t myusername/myimagename:latest .
Then in my Dockerfile I added:
ARG SECRET_KEY
RUN echo $SECRET_KEY > env_file
Lo and behold, the key was in env_file.
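As a side note, if the value also needs to exist as an environment variable inside the running container (what the original ENV SECRET_KEY=$SECRET_KEY line was trying to do), the build arg can be forwarded to ENV; a minimal sketch, keeping in mind that this bakes the secret into the image:
ARG SECRET_KEY
ENV SECRET_KEY=$SECRET_KEY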