I've been trying to pass an environment variable to a Docker container via the -e option. The variable is meant to be used in a supervisor script within the container. Unfortunately, the variable does not get resolved (i.e. it stays as, for instance, $INSTANCENAME). I tried ${var} and "${var}", but this didn't help either. Is there anything I can do, or is this just not possible?
The docker run command:
sudo docker run -d -e "INSTANCENAME=instance-1" -e "FOO=2" -v /var/app/tmp:/var/app/tmp -t myrepos/app:tag
and the supervisor file:
[program:app]
command=python test.py --param1=$FOO
stderr_logfile=/var/app/log/$INSTANCENAME.log
directory=/var/app
autostart=true
The variable is being passed to your container, but supervisor doesn't let you use environment variables like this inside its configuration files.
You should review the supervisor documentation, and specifically the parts about string expressions. For example, for the command option:
Note that the value of command may include Python string expressions, e.g. /path/to/programname --port=80%(process_num)02d might expand to /path/to/programname --port=8000 at runtime.
String expressions are evaluated against a dictionary containing the keys group_name, host_node_name, process_num, program_name, here (the directory of the supervisord config file), and all supervisord’s environment variables prefixed with ENV_.
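So instead of shell-style $VARS, reference the values with the %(ENV_...)s syntax. A minimal sketch of the adjusted supervisor file, assuming a supervisor version that supports ENV_ expansion for these particular options:
[program:app]
command=python test.py --param1=%(ENV_FOO)s
stderr_logfile=/var/app/log/%(ENV_INSTANCENAME)s.log
directory=/var/app
autostart=true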
Related
Let's say I have the following Python script:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--host", required=True)
parser.add_argument("--enabled", default=False, action="store_true")
args = parser.parse_args()
print("host: " + args.host)
print("enabled: " + str(args.enabled))
$ python3 test.py --host test.com
host: test.com
enabled: False
$ python3 test.py --host test.com --enabled
host: test.com
enabled: True
Now the script is used in a Docker image and I want to pass the variables via docker run. For the host parameter this is quite easy:
FROM python:3.10-alpine
ENV MY_HOST=default.com
#ENV MY_ENABLED=
ENV TZ=Europe/Berlin
WORKDIR /usr/src/app
COPY test.py .
CMD ["sh", "-c", "python test.py --host ${MY_HOST}"]
But how can I make the --enabled flag work? When the ENV is unset, or set to 0 or off or something similar, --enabled should be suppressed; otherwise it should be included in the CMD.
Is this possible without modifying the Python script?
For exactly the reasons you're showing here, I'd suggest modifying your script to be able to accept command-line options from environment variables. If you add a call like
parser.set_defaults(
    # requires `import os` at the top of the script
    host=os.environ.get('MY_HOST'),
    enabled=(os.environ.get('MY_ENABLED') == 'true')
)
then you can use docker run -e options to provide these values, without the complexity of trying to reconstruct the command line based on which options are and aren't present. (Also see Setting options from environment variables when using argparse.)
CMD ["./test.py"] # a fixed string, environment variables specified separately
docker run -e MY_HOST=example.com -e MY_ENABLED=true my-image
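Putting those pieces together, a minimal sketch of the modified test.py (dropping required=True so the environment can supply the host; which strings count as "enabled" for MY_ENABLED is my own choice):
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--host")  # no longer required=True; the environment can supply it
parser.add_argument("--enabled", default=False, action="store_true")
parser.set_defaults(
    host=os.environ.get("MY_HOST", "default.com"),  # fallback mirrors the Dockerfile's ENV default
    # Any of these strings counts as "enabled"; adjust to taste.
    enabled=os.environ.get("MY_ENABLED", "").lower() in ("1", "true", "yes", "on"),
)
args = parser.parse_args()

print("host: " + args.host)
print("enabled: " + str(args.enabled))
Command-line flags still override the environment-derived defaults when they are given explicitly.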
Conversely, you can provide the entire command line and its options when you run the container. (But depending on the context you might just be pushing the "how to construct the command" question up a layer.)
docker run my-image \
./test.py --host=example.com --enabled
In principle you can construct this using a separate shell script without modifying your Python script, but it will be somewhat harder and significantly less safe. That script could look something like this:
#!/bin/sh
TEST_ARGS="--host $MY_HOST"
if [ -n "$MY_ENABLED" ]; then
  TEST_ARGS="$TEST_ARGS --enabled"
fi
exec ./test.py $TEST_ARGS
# ^^^^^^^^^^ without double quotes (usually a bug)
Expanding $TEST_ARGS without putting it in double quotes causes the shell to split the string's value on whitespace. This is usually a bug since it would cause directory names like /home/user/My Files to get split into multiple words. You're still at some risk if the environment variable values happen to contain whitespace or other punctuation, intentionally or otherwise.
There are safer but more obscure ways to approach this in shells with extensions like GNU bash, but not all Docker images contain these. Rather than double-checking that your image has bash, figuring out bash array syntax, and writing a separate script to do the argument handling, I'd suggest that handling it exclusively at the Python layer is the better approach.
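For completeness, a sketch of that bash-array variant, assuming the image does have bash installed:
#!/bin/bash
# Build the argument list as an array so each element stays a single word,
# even if a value contains spaces.
test_args=(--host "$MY_HOST")
if [ -n "$MY_ENABLED" ]; then
  test_args+=(--enabled)
fi
exec ./test.py "${test_args[@]}"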
I would like to set a list of environment variables as specified in an env.list file during the build process, i.e. have a respective command in the Dockerfile. Like this:
FROM python:3.9.4-slim-buster
COPY env.list env.list
# Here I need a corresponding command:
ENV env.list
The file looks like this:
FOO=foo
BAR=bar
My book of already failed attempts / ruled out options:
On Linux, one can usually set environment variables from a file env.list by running:
source env.list
export $(cut -d= -f1 env.list)
However, executing those commands via RUN in the Dockerfile does not work, because env variables defined with RUN export FOO=foo do not persist beyond that single RUN instruction: each RUN runs in its own shell and produces its own layer.
I do not want to explicitly set those variables in the Dockerfile using ENV FOO=foo because they contain login credentials. It's also easier to automate/maintain the project if the variables are defined in one place.
I also don't want to set those variables during docker run --env-file env.list because I need them for a development container which does not "run".
The ENV directive does not allow parsing a file like env.list, as pointed out. But even if it did, the resulting environment variables would still be saved in the final image, passwords included.
The correct approach, to my knowledge, is to set the passwords at runtime with docker run, either when this image runs or when a child image runs via docker run.
If the credentials are required while the image is built, I would pass them via the ARG directive so that they can be referenced as shell variables in the Dockerfile but are not saved in the final image:
FROM image
ARG VAR
RUN echo ${VAR}
etc...
which can run as:
docker build --build-arg VAR=value ...
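If you do go the ARG route, one way to feed the values from env.list into the build without repeating them in the Dockerfile is to turn each line into a --build-arg flag (a sketch; it assumes simple KEY=value lines with no spaces or comments, and each key still needs a matching ARG declaration in the Dockerfile):
docker build $(sed 's/^/--build-arg /' env.list) -t my-image .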
If you use docker-compose, you can pass a variables.env file.
docker-compose.yml:
version: "3.7"
services:
  service_name:
    build: folder/.
    ports:
      - '5001:5000'
    env_file:
      - folder/variables.env
folder/Dockerfile:
FROM python:3.9.4-slim-buster
folder/variables.env:
FOO=foo
BAR=bar
For more info on compose: https://docs.docker.com/compose/
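With those files in place, building and starting the service picks up the env_file automatically:
docker-compose up --build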
I have a Dockerfile where a few commands need to be executed in a row, not in parallel or asynchronously, so cmd1 finishes, cmd2 starts, and so on.
Dockerfile's RUN is perfect for that. However, one of those RUN commands uses environment variables, meaning I'm calling os.getenv at some point. Sadly, it seems like when passing environment variables, be it through the CLI itself or with the help of a .env file, only CMD works instead of RUN. But CMD launches concurrently, so the container executes this command and goes right on to the next one, which I definitely don't want.
In conclusion, is there even a way to pass environment variables to RUN commands in a Dockerfile?
To help understand a bit better, here's an excerpt from my Dockerfile:
FROM python:3.8
# Install python dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# Create working directory
RUN mkdir -p /usr/src/my_directory
WORKDIR /usr/src/my_directory
# Copy contents
COPY . /usr/src/my_directory
# RUN calling a method that calls os.getenv at some point (THIS IS THE PROBLEM)
RUN ["python3" ,"some_script.py"]
# RUN some other commands (this needs to run AFTER the command above finishes)
# if I replace the RUN above with CMD, this gets called right after
RUN ["python3", "some_other_script.py","--param","1","--param2", "config.yaml"]
Excerpt from some_script.py:
if __name__ == "__main__":
    abc = os.getenv("my_env_var")  # this is where I get a ReferenceError if I use RUN
    do_some_other_stuff(abc)
The .env file I'm using with the Dockerfile (or docker-compose):
my_env_var=some_url_i_need_for_stuff
Do not use the exec form of a RUN instruction if you want variable substitution, or else use it to execute a shell explicitly. From the documentation:
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
This is how I solved my problem:
write a bash script that executes all the relevant commands in the order I want
use ENTRYPOINT instead of CMD or RUN
the bash script will already see the ENV vars, since they are set when the container starts rather than at build time, but you can double-check with positional arguments passed to that bash script
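A minimal sketch of that wrapper, using the script names from the Dockerfile above (the set -e and the exact ordering are my assumptions):
#!/bin/sh
# Runs at container start, so -e / --env-file variables are visible to both scripts.
set -e
python3 some_script.py
python3 some_other_script.py --param 1 --param2 config.yaml
In the Dockerfile, the two problematic RUN lines are then replaced by (the script is already copied in by COPY . /usr/src/my_directory; remember to make it executable):
ENTRYPOINT ["./entrypoint.sh"]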
The Python modules that I downloaded are inside the user's home directory. I need to add the user's Python bin directory to the PATH. I tried the two approaches shown below in my Dockerfile, but to no avail. When I check the environment variables in the running container, in the first case PY_USER_BIN is literally $(python -c 'import site; print(site.USER_BASE + "/bin")'), and in the second case PY_USER_BIN is blank. However, when I manually export the PY_USER_BIN variable, it works.
ENV PY_USER_BIN $(python -c 'import site; print(site.USER_BASE + "/bin")')
ENV PATH $PY_USER_BIN:$PATH
and
RUN export PY_USER_BIN=$(python -c 'import site; print(site.USER_BASE + "/bin")')
ENV PATH $PY_USER_BIN:$PATH
It looks to me like you are mixing different contexts of execution.
The ENV command that you use is a Dockerfile instruction; it defines an environment variable at the Docker level that is forwarded into the container.
The RUN command executes a command inside the container being built, here export. Whatever is done inside the container stays inside the container, and Docker itself does not have access to it.
To me there is no point in putting into a Docker ENV variable where Python lives on the host, as host and container do not share the same file system. If you need this in the container context, then run these commands inside the container with standard shell commands.
Try it first by connecting to your container and running a shell inside it; once the commands work, put them in your Dockerfile. It's as simple as that. To do that, run:
docker run -ti [your container name/tag] [your shell]
if you use sh as shell:
docker run -ti [your container name/tag] sh
Then try your commands.
To me it seems the commands you want would look like this, with both exports combined into a single RUN, since each RUN starts a fresh shell and exported variables do not survive to the next instruction:
RUN export PY_USER_BIN=$(python -c 'import site; print(site.USER_BASE + "/bin")') && \
    export PATH=$PY_USER_BIN:$PATH && \
    <whatever command actually needs the updated PATH>
Anyway, the point of a container is to have a fixed file system, fixed user names and so on. So the user bin directory will always be at the same path inside the container; in 99% of cases you could just as well hardcode it.
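For example, if the image runs as root, USER_BASE is typically /root/.local, so a hedged sketch would be (verify the actual path first with a one-off python -c inside the container):
ENV PATH=/root/.local/bin:$PATH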
I have a Docker image with bash and Python scripts inside it:
1) entrypoint.sh (this script runs python file);
2) parser.py
When developers run a container, they can pass env variables with a prefix like MYPREFIX_*.
docker run name -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ...
There are more than 100 possible keys, and they change from time to time (depending on a remote configuration file).
I'd like to pass all of the variables to the bash script and then on to the Python script.
I can't define all the variables inside the Dockerfile (the keys can change). I also can't use env_file.
Are there any suggestions?
Content of entrypoint:
/usr/bin/python3 "/var/hocon-parser.py"
/usr/bin/curl -sLo /var/waves.jar "https://github.com/wavesplatform/Waves/releases/download/v$WAVES_VERSION/waves-all-$WAVES_VERSION.jar"
/usr/bin/java -jar /var/waves.jar /waves-config.conf
The problem was in the run command: you can't pass env variables after the image name, because everything after the image name is treated as the command to run. This command works:
docker run -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ... name
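Once the options are in the right place, the variables are inherited by every process the entrypoint starts, so nothing needs to be forwarded explicitly. A sketch of how the Python parser script could pick up all the prefixed keys (the dictionary name prefixed is my own):
import os

# Collect every variable that was passed with the MYPREFIX_ prefix.
prefixed = {k: v for k, v in os.environ.items() if k.startswith("MYPREFIX_")}
print(prefixed)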