I have a Dockerfile with an ENTRYPOINT that uses an ENV variable. I can't get the ENTRYPOINT structured so the container can also accept additional command line arguments. Here is the relevant part of the Dockerfile:
ARG MODULE_NAME
ENV MODULE_NAME=$MODULE_NAME
ENTRYPOINT /usr/bin/python3 -m ${MODULE_NAME}
That works fine if I just want to launch the container without additional arguments:
docker run my-image
But I need to be able to pass additional command line arguments (e.g., a "--debug" flag) to the python process like this:
docker run my-image --debug
With the form of ENTRYPOINT above, the "--debug" arg is not passed to the python process. I've tried both the exec form and the shell form of ENTRYPOINT but can't get it to work with both the ENV variable and command line args. A few other forms I tried:
This runs but doesn't accept additional args:
ENTRYPOINT ["/bin/bash", "-c", "/usr/bin/python3 -m ${MODULE_NAME}"]
This gives "/usr/bin/python3: No module named ${MODULE_NAME}":
ENTRYPOINT ["/usr/bin/python3", "-m ${MODULE_NAME}"]
This gives "/usr/bin/python3: No module named ${MODULE_NAME}":
ENTRYPOINT ["/usr/bin/python3", "-m", "${MODULE_NAME}"]
It appears it isn't possible to create an ENTRYPOINT that directly supports both variable expansion and additional command line arguments. While the shell form of ENTRYPOINT will expand ENV variables at run time, it does not accept additional (appended) arguments from the docker run command. While the exec form of ENTRYPOINT does support additional command line arguments, it does not create a shell environment by default so ENV variables are not expanded.
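That said, the exec form can invoke a shell explicitly and still forward appended arguments in one line, using sh -c with "$@". A minimal sketch of the idea, runnable outside Docker (json.tool is a stand-in module name, and echo stands in for the real python3 call):

```shell
# Simulates what Docker would run for:
#   ENTRYPOINT ["/bin/sh", "-c", "exec python3 -m ${MODULE_NAME} \"$@\"", "--"]
# sh -c expands MODULE_NAME at run time; the "--" operand becomes $0,
# and any appended docker run arguments land in $1, $2, ... where "$@"
# forwards them to the process.
MODULE_NAME=json.tool   # hypothetical module name for the demo
export MODULE_NAME
sh -c 'echo python3 -m ${MODULE_NAME} "$@"' -- --debug
```

This prints `python3 -m json.tool --debug`, showing both the variable expansion and the appended flag surviving.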
To get around this, bash can be called explicitly in the exec form to execute a script that then expands ENV variables and passes command line args to the python process. Here is an example Dockerfile that does this:
FROM ubuntu:16.04
ARG MODULE_NAME=foo
ENV MODULE_NAME=${MODULE_NAME}
RUN apt-get update -y && apt-get install -y python3.5
# Create the module to be run
RUN echo "import sys; print('Args are', sys.argv)" > /foo.py
# Create a script to pass command line args to python
RUN echo "/usr/bin/python3.5 -m $MODULE_NAME \"\$@\"" > /run_module.sh
ENTRYPOINT ["/bin/bash", "/run_module.sh"]
Output from the docker image:
$ docker run my-image
Args are ['/foo.py']
$ docker run my-image a b c
Args are ['/foo.py', 'a', 'b', 'c']
Note that variable expansion occurs during the RUN commands (since they are using shell form) so the contents of run_module.sh in the image are:
/usr/bin/python3.5 -m foo "$@"
If the final RUN command is replaced with this:
RUN echo "/usr/bin/python3.5 -m \$MODULE_NAME \"\$@\"" > /run_module.sh
then run_module.sh would contain
/usr/bin/python3.5 -m $MODULE_NAME "$@"
But output from the running container would be the same since variable expansion will occur at run time. A potential benefit of the second version is that one could override the module to be run at run time without replacing the ENTRYPOINT.
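The "$@" forwarding in such a wrapper can be checked outside Docker as well; in this sketch echo stands in for the real interpreter:

```shell
# Recreate the generated wrapper script, then run it with extra
# arguments; "$@" forwards them to the command verbatim.
printf '%s\n' 'echo python3.5 -m $MODULE_NAME "$@"' > /tmp/run_module.sh
MODULE_NAME=foo sh /tmp/run_module.sh a b c
```

This prints `python3.5 -m foo a b c`: the module name comes from the environment at run time, and the positional arguments are appended.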
I was able to get the ENV variables resolved inside ENTRYPOINT. The mistake I was making was that the ARGs used to populate ENV were written before the FROM statement and were hence out of scope for the current Docker stage. Once I moved them below FROM, it worked great.
My working Dockerfile:
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
ARG NEO4J_USERNAME
ARG NEO4J_URL
ARG NEO4J_PASSWORD
ENV NEO4J_USERNAME=$NEO4J_USERNAME \
NEO4J_URL=$NEO4J_URL \
NEO4J_PASSWORD=$NEO4J_PASSWORD
COPY ./ /
WORKDIR /
ENTRYPOINT python3 -u myscript.py ${NEO4J_URL} ${NEO4J_USERNAME} ${NEO4J_PASSWORD}
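The shell form above expands the ENV variables but, as noted earlier, discards any extra docker run arguments. A sketch of what that shell form resolves to at run time (the NEO4J_* values are placeholders, and echo stands in for python3):

```shell
# Shell form is equivalent to: /bin/sh -c 'python3 -u myscript.py ...'
# so the shell expands the variables when the container starts.
NEO4J_URL=bolt://db:7687 NEO4J_USERNAME=neo4j NEO4J_PASSWORD=secret \
  sh -c 'echo python3 -u myscript.py ${NEO4J_URL} ${NEO4J_USERNAME} ${NEO4J_PASSWORD}'
```

This prints `python3 -u myscript.py bolt://db:7687 neo4j secret`, the command the container would actually execute.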
Related
I have a very simple docker file
FROM python:3
WORKDIR /usr/src/app
ENV CODEPATH=default_value
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
Here is my container command
docker run -e TOKEN="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
when I see container logs it shows
python3: can't open file '/usr/src/app/${TOKEN}': [Errno 2] No such file or directory
It looks like what you want to do is override the default path to the python file which is run when you launch the container. Rather than passing this option in as an environment variable, you can just pass the path to the file as an argument to docker run, which is the purpose of CMD in your dockerfile. What you set as the CMD option is the default, which users of your image can easily override by passing an argument to the docker run command.
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest "subfolder/testmypython.py"
The environment variable is named CODEPATH, but you are setting TOKEN as the environment variable.
Could you please try setting CODEPATH as the env variable in the following way:
docker run -e CODEPATH="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
The way you've split ENTRYPOINT and CMD doesn't make sense, and it makes it impossible to do variable expansion here. You should combine the two parts together into a single CMD, and then use the shell form to run it:
# no ENTRYPOINT
CMD python3 /usr/src/app/${CODEPATH}
(Having done this, better still is to use the approach in allan's answer and directly docker run python-image python3 other-script-name.py.)
The Dockerfile syntax doesn't allow environment expansion in RUN, ENTRYPOINT, or CMD commands. Instead, these commands have two forms.
Exec form requires you to format the command as a JSON array, and doesn't do any processing on what you give it; it runs the command with an exact set of shell words and the exact strings in the command. Shell form doesn't have any special syntax, but wraps the command in sh -c, and that shell handles all of the normal things you'd expect a shell to do.
Using RUN as an example:
# These are the same:
RUN ["ls", "-la", "some directory"]
RUN ls -la 'some directory'
# These are the same (and print a dollar sign):
RUN ["echo", "$FOO"]
RUN echo \$FOO
# These are the same (and a shell does variable expansion):
RUN echo $FOO
RUN ["/bin/sh", "-c", "echo $FOO"]
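The same contrast can be reproduced outside Docker; FOO here is a placeholder variable:

```shell
FOO=hello
export FOO
# Exec form does no processing: the argument stays the literal string $FOO
printf '%s\n' '$FOO'
# Shell form wraps the command in sh -c, and that shell expands the variable
sh -c 'echo $FOO'
```

The first command prints the literal `$FOO`; the second prints `hello`.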
If you have both ENTRYPOINT and CMD this expansion happens separately for each half. This is where the split you have causes trouble: none of these options will work:
# Docker doesn't expand variables at all in exec form
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
# ["python3", "/usr/src/app/${CODEPATH}"] with no expansion
# The "sh -c" wrapper gets interpreted as an argument to Python
ENTRYPOINT ["python3"]
CMD /usr/src/app/${CODEPATH}
# ["python3", "/bin/sh", "-c", "/usr/src/app/${CODEPATH}"]
# "sh -c" only takes one argument and ignores the rest
ENTRYPOINT python3
CMD ["/usr/src/app/${CODEPATH}"]
# ["/bin/sh", "-c", "python3", ...]
The only real effect of this ENTRYPOINT/CMD split is to make a container that can only run Python scripts, without special configuration (an awkward docker run --entrypoint option); you're still providing most of the command line in CMD, but not all of it. I tend to recommend that the whole command go in CMD, and you reserve ENTRYPOINT for a couple of more specialized uses; there is also a pattern of putting the complete command in ENTRYPOINT and trying to use the CMD part to pass it options. Either way, things will work better if you put the whole command in one directive or the other.
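The "sh -c only takes one argument" behavior in the last case can be seen directly; the operands after the command string become $0, $1, ... and are otherwise ignored:

```shell
# sh -c executes only the string after -c; the remaining operands are
# just positional parameters, not additional commands.
sh -c 'echo extras: $0 $1' python3 /usr/src/app/x.py
```

This prints `extras: python3 /usr/src/app/x.py`; neither python3 nor the script is ever run.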
I have a dockerfile where a few commands need to be executed in a row, not in parallel or asynchronously, so cmd1 finishes, cmd2 starts, etc. etc.
Dockerfile's RUN is perfect for that. However, one of those RUN commands uses environment variables, meaning I'm calling os.getenv at some point. Sadly, it seems like when passing environment variables, be it through the CLI itself or with the help of a .env file, only CMD works instead of RUN. But CMD is launching concurrently, so the container executes this command but goes right over to the next one, which I definitely don't want.
In conclusion, is there even a way to pass environment variables to RUN commands in a dockerfile?
To help understand a bit better, here's an excerpt from my dockerfile:
FROM python:3.8
# Install python dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# Create working directory
RUN mkdir -p /usr/src/my_directory
WORKDIR /usr/src/my_directory
# Copy contents
COPY . /usr/src/my_directory
# RUN calling method that uses calls os.getenv at some point (THIS IS THE PROBLEM)
RUN ["python3" ,"some_script.py"]
# RUN some other commands (this needs to run AFTER the command above finishes)
#if i replace the RUN above with CMD, this gets called right after
RUN ["python3", "some_other_script.py","--param","1","--param2", "config.yaml"]
Excerpt from some_script.py:
if __name__ == "__main__":
    abc = os.getenv("my_env_var")  # this is where I get a ReferenceError if I use RUN
    do_some_other_stuff(abc)
The .env file I'm using with the dockerfile (or docker-compose):
my_env_var=some_url_i_need_for_stuff
Do not use the exec form of a RUN instruction if you want variable substitution; or, if you do use it, use it to execute a shell explicitly. From the documentation:
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
This is how I solved my problem:
write a bash script that executes all relevant commands in the nice order that I want
use ENTRYPOINT instead of CMD or RUN
the bash script will already have the ENV vars, but you can double-check with positional arguments passed to that bash script
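The steps above can be sketched as an entrypoint script; the script names come from the question, echo stands in for the real work, and my_env_var is set inline only for the demo (normally it arrives via docker run -e or the .env file):

```shell
#!/bin/sh
# entrypoint.sh sketch: run the steps strictly in order.
# set -e aborts on the first failure, so a later step never starts
# after an earlier one breaks.
set -e
my_env_var=some_url_i_need_for_stuff
export my_env_var
echo "step 1 would run: python3 some_script.py (sees my_env_var=$my_env_var)"
echo "step 2 would run: python3 some_other_script.py --param 1 --param2 config.yaml"
```

With `ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]`, both steps execute at container start, when the environment variables actually exist.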
I have a python script that i run with the following command :
python3 scan.py --api_token 5563ff177863e97a70a45dd4 --base_api_url http://101.102.34.66:4242/scanjob/ --base_report_url http://101.102.33.66:4242/ --job_id 42
This works perfectly when I run it on the command line
In my Dockerfile, I have tried ARG and ENV. None seem to work.
#ARG api_token
#ARG username
#ARG password
# Configure AWS arguments
#RUN aws configure set aws_access_key_id $AWS_KEY \
# && aws configure set aws_secret_access_key $AWS_SECRET_KEY \
# && aws configure set default.region $AWS_REGION
### copy bash script and change permission
RUN mkdir workspace
COPY scan-api.sh /workspace
RUN chmod +x /workspace/scan-api.py
CMD ["/python3", "/workspace/scan-api.py"]
so how do i define this flagged argument in docker file ?
And what's the command to run when running the image?
You can do this in two ways, as you want to override at run time:
As args to the docker run command
As an ENV to the docker run command
The 1st is the simplest, and you will not need to change anything in the Dockerfile:
docker run --rm my_image python3 /workspace/scan-api.py --bar tet --api_token 5563ff177863e97a70a45dd4 --base_api_url http://101.102.34.66:4242/scanjob/ --base_report_url http://101.102.33.66:4242/ --job_id
and my simple script
import sys
print ("All ARGs",sys.argv[1:])
Using ENV you will need to change Dockerfile
I am showing the way for one arg; you can do this for all args.
FROM python:3.7-alpine3.9
ENV API_TOKEN=default_token
CMD ["sh", "-c", "python /workspace/scan-api.py $API_TOKEN"]
So you can override them during run time or have the ability to run with some default value.
docker run -it --rm -e API_TOKEN=new_token my_image
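What the sh -c wrapper does at start-up can be seen outside Docker (echo stands in for python):

```shell
# With the ENV default, the command resolves to default_token;
# docker run -e API_TOKEN=new_token overrides it the same way.
API_TOKEN=new_token sh -c 'echo python /workspace/scan-api.py $API_TOKEN'
```

This prints `python /workspace/scan-api.py new_token`; omitting the override would substitute the ENV default instead.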
CMD takes exactly the same arguments you used from the command line.
CMD ["/python3", "scan.py", "--api_token", "5563ff177863e97a70a45dd4", "--base_api_url", "http://101.102.34.66:4242/scanjob/", "--base_report_url", "http://101.102.33.66:4242/", "--job_id", "42"]
It's confusing.
You will need to use the SHELL form of ENTRYPOINT (or CMD) in order to have environment variable substitution, e.g.
ENTRYPOINT /python3 /workspace/scan-api.py --api-token=${TOKEN} ...
And then run the container using something of the form:
docker run --interactive --tty --env=TOKEN=${TOKEN} ...
HTH!
I'm new to Docker. I have a Python program that I run in the following way.
python main.py --s=aws --d=scylla --p=4 --b=15 --e=local -w
Please note the double hyphen '--' for the first four parameters and the single hyphen '-' for the last one.
I'm trying to run this inside a Docker container. Here's my Dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python","app.py","--s","(source)", "--d","(database)","--w", "(workers)", "--b", "(bucket)", "--e", "(env)", "-w"]
I'm not sure if this will work as I don't know exactly how to test and run this. I want to run the Docker image with the following port mappings.
docker run --name=something -p 9042:9042 -p 7000:7000 -p 7001:7001 -p 7199:7199 -p 9160:9160 -p 9180:9180 -p 10000:10000 -d user/something
How can I correct the Docker file? Once I build an image how to run it?
First, fix the dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
# optional: it is better to chain commands to reduce the number of created layers
RUN pip install --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
# mandatory: "--s=smth" is one argument
# optional: it's better to use environment variables for source, database etc
CMD ["python","app.py","--s=(source)", "--d=(database)","--w=(workers)", "--b=(bucket)", "--e=(env)", "-w"]
then, build it:
docker build -f "<dockerfile path>" -t "<tag to assign>" "<build dir (eg .)>"
Then, you can just use the assigned tag as an image name:
docker run ... <tag assigned>
UPD: I got it wrong the first time; the tag should be used in place of the image name, not the container name.
UPD2: With the first response, I assumed you're going to hardcode parameters and only mentioned it is better to use environment variables. Here is an example how to do it:
First, make your Python script read environment variables.
The quickest dirty way to do so is to replace CMD with something like:
CMD ["sh", "-c", "python app.py --s=$SOURCE --d=$DATABASE --w=$WORKERS ... -w"]
(it is common to use CAPS names for environment variables)
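The shell in that CMD expands the variables when the container starts; a quick sketch with placeholder values (echo stands in for python):

```shell
# The -e values given to docker run reach the sh -c command string,
# which substitutes them before the real process would start.
SOURCE=aws DATABASE=scylla WORKERS=4 \
  sh -c 'echo python app.py --s=$SOURCE --d=$DATABASE --w=$WORKERS -w'
```

This prints `python app.py --s=aws --d=scylla --w=4 -w`, i.e. the fully resolved command line.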
It will be better, however, to read environment variables directly in your Python script instead of command line arguments, or use them as defaults:
# somewere in app.py
import os
...
DATABASE = os.environ.get('DATABASE', default_value) # can default to args.d
SOURCE = os.environ.get('SOURCE') # None by default
# etc
Don't forget to update the Dockerfile as well in this case:
# Dockerfile:
...
CMD ["python","app.py"]
Finally, pass environment variables to your run command:
docker run --name=something ... -e DATABASE=<dbname> -e SOURCE=<source> ... <tag assigned at build>
There are more ways to pass environment variables, I'll just refer to the official documentation here:
https://docs.docker.com/compose/environment-variables/
I have docker image with bash and python scripts inside it:
1) entrypoint.sh (this script runs python file);
2) parser.py
When developers run a container, they can pass env variables with a prefix like MYPREFIX_*.
docker run name -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ...
There are more than 100 possible keys, they change from time to time (depending on remote configuration file).
I'd like to pass all variables to the bash script and then to the python script.
I can't define all variables inside Dockerfile (keys can change). I also can't use env_file.
Are there any suggestions?
Content of entrypoint:
/usr/bin/python3 "/var/hocon-parser.py"
/usr/bin/curl -sLo /var/waves.jar "https://github.com/wavesplatform/Waves/releases/download/v$WAVES_VERSION/waves-all-$WAVES_VERSION.jar"
/usr/bin/java -jar /var/waves.jar /waves-config.conf
The problem was in the run command. You can't pass env variables after the image name; docker run treats everything after the image as the container's command. This command works:
docker run -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ... name