I am using Docker 17.04.0-ce, build 4845c56, with docker-compose 1.12.0, build b31ff33, on Ubuntu 16.04.2 LTS. I simply want to pass an environment variable and display it from my script running in a container. I am doing this according to the documentation at https://docs.docker.com/compose/compose-file/#environment. The problem is that the variable is not passed to the container.
My docker-compose.yml file:
env-file-test:
  build: .
  dockerfile: Dockerfile
  environment:
    - DEMO_VAR
My Dockerfile:
FROM alpine
COPY docker-start.sh /
CMD ["/docker-start.sh"]
And the docker-start.sh file:
#!/bin/sh
echo "DEMO_VAR Var Passed in: $DEMO_VAR"
I try to set the variable in my current terminal session and pass it to the container:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo docker-compose up
Starting envfiletest_env-file-test_1
Attaching to envfiletest_env-file-test_1
env-file-test_1 | DEMO_VAR Var Passed in:
envfiletest_env-file-test_1 exited with code 0
So you can see that the variable DEMO_VAR is empty!
I also tried using variables in docker-compose.yml like this: DEMO_VAR=${DEMO_VAR} but then when I run sudo docker-compose up, I get a warning: "WARNING: The DEMO_VAR variable is not set. Defaulting to a blank string.".
What am I doing wrong? What should I do to pass the variable to the container?
I found a solution. Answering my own question...
The problem was with the sudo command. It turned out that it does not pass environment variables by default. There are some possible solutions:
Use sudo -E. Demo:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo -E docker-compose up
env-file-test_1 | DEMO_VAR Var Passed in: aabbdd
Use sudo VAR=value:
sudo DEMO_VAR=$DEMO_VAR docker-compose up
Add environment variables to the sudoers file (https://stackoverflow.com/a/8636711); a sketch follows this list.
Use docker without sudo (https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo)
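For the sudoers route, a minimal sketch of what the entry could look like (assuming it is edited with visudo; the variable name is the one from this question):
# keep DEMO_VAR when running commands through sudo
Defaults env_keep += "DEMO_VAR"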
You should use ENV in your Dockerfile and avoid export.
See the documentation:
https://docs.docker.com/engine/reference/builder/#env
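A minimal sketch of that approach, reusing the Dockerfile from the question (the hard-coded value is only illustrative; ENV bakes a default into the image rather than reading it from the host):
FROM alpine
# assumption: a fixed default value baked into the image at build time
ENV DEMO_VAR=aabbdd
COPY docker-start.sh /
CMD ["/docker-start.sh"]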
Hello, I have to build a Docker image for the following bioinformatics tool: https://github.com/CAMI-challenge/CAMISIM. Their Dockerfile works but takes a long time to build, and I would like to build my own, slightly differently, to learn. I face issues: there are several Python scripts that I should be able to choose to run, not only a main one. If I add one script in particular as an ENTRYPOINT, then the behavior isn't exactly what I should have.
The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
#COPY ./install_docker.sh ./
#RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENTRYPOINT ["python3"]
ENV PATH="/CAMISIM/:${PATH}"
This yields:
sudo docker run camisim:latest metagenomesimulation.py --help
python3: can't open file 'metagenomesimulation.py': [Errno 2] No such file or directory
Adding that script as an ENTRYPOINT after python3 allows me to use it, but with two drawbacks: I cannot use another script (I could build a second Docker image, but that would be a bad solution), and it outputs:
ERROR: 0
usage: python metagenomesimulation.py configuration_file_path
#######################################
# MetagenomeSimulationPipeline #
#######################################
Pipeline for the simulation of a metagenome
optional arguments:
-h, --help show this help message and exit
-silent, --silent Hide unimportant Progress Messages.
-debug, --debug_mode more information, also temporary data will not be deleted
-log LOGFILE, --logfile LOGFILE
output will also be written to this log file
optional config arguments:
-seed SEED seed for random number generators
-s {0,1,2}, --phase {0,1,2}
available options: 0,1,2. Default: 0
0 -> Full run,
1 -> Only Comunity creation,
2 -> Only Readsimulator
-id DATA_SET_ID, --data_set_id DATA_SET_ID
id of the dataset, part of prefix of read/contig sequence ids
-p MAX_PROCESSORS, --max_processors MAX_PROCESSORS
number of available processors
required:
config_file path to the configuration file
You can see there is an error that shouldn't be there; it actually does not use the help flag. The original Dockerfile is:
FROM ubuntu:20.04
RUN apt update
RUN apt install -y python3 python3-pip perl libncursesw5
RUN perl -MCPAN -e 'install XML::Simple'
ADD requirements.txt /requirements.txt
RUN cat requirements.txt | xargs -n 1 pip install
ADD *.py /usr/local/bin/
ADD scripts /usr/local/bin/scripts
ADD tools /usr/local/bin/tools
ADD defaults /usr/local/bin/defaults
WORKDIR /usr/local/bin
ENTRYPOINT ["python3"]
It works but shows the same error as above, so not quite. Said error is not present when using the tool outside of Docker. Last time I made a Docker image I just pulled the Git repo and added the main .sh script as the ENTRYPOINT, and everything worked despite being more complex (see https://github.com/Louis-MG/Metadbgwas).
Why would I need ADD and to move everything? I added the Git folder to the PATH, so why can't I find the scripts? How is it different from the Metadbgwas image?
In your first setup, you start in the image root directory / and run git clone to check out the repository into /CAMISIM. You never change the current directory, though, so when you try to run python3 metagenomesimulation.py --help it's looking in / and not /CAMISIM, hence the "not found" error.
You can fix this just by changing the current directory. At any point after you check out the repository, run
WORKDIR /CAMISIM
You should also delete the ENTRYPOINT line. For each of the scripts you could run as a top-level entry point, check two things:
Is it executable? If you ls -l metagenomesimulation.py, are there x bits in the permission listing? If not, on the host system, run chmod +x metagenomesimulation.py and commit that to source control. (Or you could RUN chmod ... in the Dockerfile if you really can't change the repository.)
Does it have a "shebang" line? The very first line of the script should be
#!/usr/bin/env python3
If both of these things are true, then you can just run ./metagenomesimulation.py without explicitly saying python3; since you add the directory to $PATH as well, you can probably run it without specifying the ./... file location.
(Probably deleting the ENTRYPOINT line on its own is enough, given that ENV PATH setting, but your script still might be confused by starting up in the wrong directory.)
The long "help" output just suggests to me that the script is expecting a configuration file name as a parameter and you haven't provided it, or else you've repeated the script name in both the entrypoint and command parts of the container command string.
In the end very little was required and the original Dockerfile was correct; the same error is displayed anyway, and that is due to the script itself.
What was missing was a link to the interpreter, so I could remove the ENTRYPOINT and actually interpret the script instead of having Python look for it in its own path. The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENV PATH="/CAMISIM:${PATH}"
Trying WORKDIR as suggested instead of the PATH yielded an error.
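For reference, with that image the scripts are resolved through PATH, so a run can look like this (assuming the scripts in the repository are executable and use an env-style shebang, which is why the python symlink was needed):
sudo docker run camisim:latest metagenomesimulation.py --help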
I am working on a Docker image I created using firesh/nginx-lua (the Linux distribution is Alpine):
FROM firesh/nginx-lua
COPY ./nginx.conf /etc/nginx
COPY ./handler.lua /etc/nginx/
COPY ./env_var_echo.py /etc/nginx/
RUN apk update
RUN apk add python3
RUN nginx -s reload
I run the image and then get into the container:
docker run -it -d -p 8080:80 --name my-ngx-lua nginx-lua
docker exec -it my-ngx-lua sh
Then I define a new environment variable from inside the container:
/etc/nginx # export SECRET=thisIsMySecret
/etc/nginx # echo $SECRET
thisIsMySecret
/etc/nginx #
EDIT: After defining the new env var, I exit the container and then get into it again and it is not there anymore:
/etc/nginx # exit
iy#MacBook-Pro ~ % docker exec -it my-ngx-lua sh
/etc/nginx # echo $SECRET
/etc/nginx #
I run the Python script and I expect to receive "thisIsMySecret", which is the value I defined.
import os
secret_key = os.environ.get('SECRET')
print(secret_key + '\n')
But I get None instead.
Only if I query an env var that already came with the container (PATH, for example) does Python return its value. If it is an env var that I just defined, it returns None.
BTW, I tried the same with Lua and received nil, hence I am pretty sure the issue comes from Alpine.
I am not looking for a solution like defining the env var from docker build.
Thanks.
This:
After defining the new env var, I exit the container
is the cause. An exported variable only exists for the lifetime of the shell process it was declared in, and it is only visible to that process and its children.
What you need is to use the -e option when you start the container:
docker run -e SECRET=mysecret ...
With this, Docker will add the environment variable to the main process (NGINX in this case) and to docker exec sessions as well.
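Applied to the image from this question, that might look like the following (names and paths are taken from the question; the secret value is the example one):
docker run -it -d -p 8080:80 -e SECRET=thisIsMySecret --name my-ngx-lua nginx-lua
# the variable is now visible to exec sessions and to the script
docker exec -it my-ngx-lua python3 /etc/nginx/env_var_echo.py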
I am running a Python application that reads two paths from Windows env vars and proceeds to use the executables in those paths to do OCR on some documents. Since the POPPLER and TESSERACT env vars are already set in Windows, this Python snippet works for me:
popplerPath = os.environ.get('POPPLER')
tesseractPath = os.environ.get('TESSERACT')
Now I am trying to dockerize the app and, to my understanding, since my container will need access to those paths, I need to mount them as volumes at run time. My Dockerfile looks like this:
FROM python:3.7.7-slim
WORKDIR ./
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY documents/ .
COPY src/ ./src
CMD [ "python", "./src/run.py" ]
I build the image using:
docker build -t ocr .
And I try to run my container using:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% ocr
... but my app still gets a None value for these paths and can't use the executable files. Is my approach correct, and beyond that, is it good dev practice?
See the docs; the switch for environment variables is -e:
$ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
and in a Dockerfile you can use
ENV FOO=/bar
If I understand your statement correctly, your paths are mounted in the container at the same path as on the host. The only problem is your Python script, which expects the paths to be provided by environment variables. These will not exist unless you pass them on from your host system to your container.
Once you have verified that the volumes mounted with -v are there correctly, you can try
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% --env POPPLER=%POPPLER% --env TESSERACT=%TESSERACT% ocr
or, if you always run it this way, you can consider putting them in your Dockerfile to save some keystrokes.
Any executable you call must be built into the image. Containers can't usually call executables on the host or in other containers. In the specific example you show, a Linux container can't run a Windows executable, even if you do use a bind mount to inject it into the container.
The "slim" python images are built on Debian GNU/Linux, and you need to use its APT tool to install these executable dependencies in your Dockerfile. (https://www.debian.org/distrib/packages has a search box to help you find the right package name; Ubuntu Linux also uses Debian packages.)
FROM python:3.7-slim
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install -y \
poppler-utils \
tesseract-ocr-all
COPY requirements.txt .
...
I'd suggest putting reasonable defaults in your code if these environment variables aren't set. The apt-get install command will put them in the system path inside the image.
popplerPath = os.environ.get('POPPLER', 'poppler')
tesseractPath = os.environ.get('TESSERACT', 'tesseract')
If you really need them as environment variables you could use the Dockerfile ENV directive
ENV POPPLER=poppler TESSERACT=tesseract
Environment variables from the host don't automatically get passed through to the container; you need a Dockerfile ENV or docker run -e option. Also remember that the container has an isolated filesystem (and Windows-syntax paths don't make sense in Linux containers) so these environment variables would need to be container paths, the second half of your proposed docker run -v option.
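If you do keep the environment variables, a hedged example of passing container-side values at run time (the /usr/bin paths are an assumption based on where the Debian packages put their binaries):
docker run -e POPPLER=/usr/bin -e TESSERACT=/usr/bin ocr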
I would like to use the existing container multiple times by providing different arguments. I have a docker-compose.yml file with entrypoint: ["/bin/bash", "entrypoint.sh"].
To run the container I use the command docker-compose run foo --database=foo --schema=boo --tables=puah. It works perfectly. The container does the job.
Here is the docker-compose.yml
version: "3"
services:
bcp:
image: ubuntu:18.04
restart: always
tty: true
entrypoint: ["/bin/bash", "/ingestion/bcp-entrypoint.sh"]
volumes:
- ./services/bcp:/ingestion/services/bcp
- ./bcp-entrypoint.sh:/ingestion/bcp-entrypoint.sh
Here is the bcp-entrypoint.sh
#!/bin/bash
INGESTION_DIR=/ingestion
TMP_DIR=/tmp/ingestion
apt-get update
apt-get upgrade -y
apt-get clean -y
apt-get install -y python3-pip
apt-get install -y curl
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | tee /etc/apt/sources.list.d/msprod.list
apt-get update
ACCEPT_EULA=Y apt-get install -y mssql-tools unixodbc-dev
cd ${INGESTION_DIR}
mkdir ${TMP_DIR}
python3 -m --database $database --schmema $schema --tables ${TABLES}
My problems:
The container restarts all the time and keeps re-running the job with the same arguments that were provided in docker-compose run bcp ....
I would like to use one container and override the arguments, so I can skip the costly installation.
Maybe there is a combination of entrypoint and command in docker-compose.yml? Basically, I would like to execute python3 -m --database ... --schema ... --tables .... Ideally I would do it purely in docker-compose, without a Dockerfile.
I would like to use the existing container multiple times by providing different arguments.
If you want to change the entrypoint or cmd of an already existing container, you can't. Once a container is created, most of its configuration cannot be changed (see docker update for the container configs that can be updated).
Keep in mind:
docker-compose run will create and start a container with given arguments (you can then override entrypoint or cmd)
docker-compose exec runs a command in a running container. It won't work in a stopped container, and won't create a new container.
docker start starts a stopped container; the container will start with its already defined cmd and entrypoint. You won't be able to change those.
You can do something like docker-compose run --entrypoint 'sleep 9999' foo which will start your container and ensure it's running for 9999 seconds, then execute commands with docker-compose exec such as
# similar to what would happen with 'docker-compose run foo --database=foo --schema=boo',
# given that the entrypoint is ["/bin/bash", "entrypoint.sh"]:
# '--database=foo --schema=boo' is passed as arguments to entrypoint.sh
docker-compose exec foo /bin/bash -c 'entrypoint.sh --database=foo --schema=boo'
docker-compose exec foo /bin/bash -c 'entrypoint.sh --database=blah --schema=wooow'
docker-compose run uses an image to create a container with the provided arguments.
Use docker-compose run --name foo1 foo [other arguments] to end up with a container for each set of arguments (note that --name goes before the service name).
If you don't want to keep the container once the job is done, include the --rm option to remove the container on exit.
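A sketch of that throwaway-container pattern, using the service name and arguments from the question (the second set of values is just a placeholder):
docker-compose run --rm foo --database=foo --schema=boo --tables=puah
docker-compose run --rm foo --database=other --schema=other --tables=other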
I'm new to Docker. I have a Python program that I run in the following way.
python main.py --s=aws --d=scylla --p=4 --b=15 --e=local -w
Please note the double hyphen -- for the first five parameters and the single hyphen - for the last one.
I'm trying to run this inside a Docker container. Here's my Dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python","app.py","--s","(source)", "--d","(database)","--w", "(workers)", "--b", "(bucket)", "--e", "(env)", "-w"]
I'm not sure if this will work, as I don't know exactly how to test and run it. I want to run the Docker image with the following port mappings.
docker run --name=something -p 9042:9042 -p 7000:7000 -p 7001:7001 -p 7199:7199 -p 9160:9160 -p 9180:9180 -p 10000:10000 -d user/something
How can I correct the Dockerfile? Once I build an image, how do I run it?
First, fix the Dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
# optional: it is better to chain commands to reduce the number of created layers
RUN pip install --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
# mandatory: "--s=smth" is one argument
# optional: it's better to use environment variables for source, database etc
CMD ["python","app.py","--s=(source)", "--d=(database)","--w=(workers)", "--b=(bucket)", "--e=(env)", "-w"]
Then, build it:
docker build -f "<dockerfile path>" -t "<tag to assign>" "<build dir (eg .)>"
Then, you can just use the assigned tag as an image name:
docker run ... <tag assigned>
UPD: I got it wrong the first time; the tag should be used in place of the image name, not the instance name.
UPD2: With my first response, I assumed you were going to hardcode the parameters and only mentioned that it is better to use environment variables. Here is an example of how to do it.
The quickest, dirty way is to keep the command-line arguments and fill them in from environment variables by replacing CMD with something like:
CMD ["sh", "-c", "python app.py --s=$SOURCE --d=$DATABASE --w=$WORKERS ... -w"]
(it is common to use CAPS names for environment variables)
It will be better, however, to read environment variables directly in your Python script instead of command line arguments, or use them as defaults:
# somewere in app.py
import os
...
DATABASE = os.environ.get('DATABASE', default_value) # can default ot args.d
SOURCE = os.environ.get('SOURCE') # None by default
# etc
Don't forget to update the Dockerfile as well in this case:
# Dockerfile:
...
CMD ["python","app.py"]
Finally, pass environment variables to your run command:
docker run --name=something ... -e DATABASE=<dbname> -e SOURCE=<source> ... <tag assigned at build>
There are more ways to pass environment variables; I'll just refer to the official documentation here:
https://docs.docker.com/compose/environment-variables/
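For completeness, a sketch of supplying the same variables through a Compose file instead of -e flags (the service and image names follow the question; the values are the ones from the original command line):
# docker-compose.yml (sketch)
services:
  something:
    image: user/something
    environment:
      - SOURCE=aws
      - DATABASE=scylla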