Pass Python arguments (argparse) within a Docker container

I have a Python script that I run with the following command:
python3 scan.py --api_token 5563ff177863e97a70a45dd4 --base_api_url http://101.102.34.66:4242/scanjob/ --base_report_url http://101.102.33.66:4242/ --job_id 42
This works perfectly when I run it on the command line.
In my Dockerfile, I have tried ARG and ENV; neither seems to work.
#ARG api_token
#ARG username
#ARG password
# Configure AWS arguments
#RUN aws configure set aws_access_key_id $AWS_KEY \
# && aws configure set aws_secret_access_key $AWS_SECRET_KEY \
# && aws configure set default.region $AWS_REGION
### copy the script and make it executable
RUN mkdir /workspace
COPY scan-api.py /workspace
RUN chmod +x /workspace/scan-api.py
CMD ["python3", "/workspace/scan-api.py"]
So how do I define these flagged arguments in the Dockerfile?
And what is the command to run when running the image?

You can do this in two ways, since you want to override at run time:
As args to the docker run command
As an ENV to the docker run command
The first is the simplest and does not need any change to the Dockerfile:
docker run --rm my_image python3 /workspace/scan-api.py --api_token 5563ff177863e97a70a45dd4 --base_api_url http://101.102.34.66:4242/scanjob/ --base_report_url http://101.102.33.66:4242/ --job_id 42
and my simple script:
import sys
print("All ARGs", sys.argv[1:])
Using ENV, you will need to change the Dockerfile.
I am showing the approach for one variable; you can do the same for all the args:
FROM python:3.7-alpine3.9
ENV API_TOKEN=default_token
CMD ["sh", "-c", "python /workspace/scan-api.py --api_token $API_TOKEN"]
So you can override it at run time or run with the default value:
docker run -it --rm -e API_TOKEN=new_token my_image
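For completeness, here is a minimal sketch of what the argparse side of scan-api.py might look like. The flag names come from the question; the env-var fallback for --api_token is an assumption, added to show how the two approaches can be combined:
import argparse
import os

parser = argparse.ArgumentParser()
# Fall back to the API_TOKEN env var if the flag is not given (assumption)
parser.add_argument("--api_token", default=os.environ.get("API_TOKEN"))
parser.add_argument("--base_api_url")
parser.add_argument("--base_report_url")
parser.add_argument("--job_id")
args = parser.parse_args()
print(args)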

CMD takes exactly the same arguments you used on the command line:
CMD ["python3", "scan.py", "--api_token", "5563ff177863e97a70a45dd4", "--base_api_url", "http://101.102.34.66:4242/scanjob/", "--base_report_url", "http://101.102.33.66:4242/", "--job_id", "42"]

It's confusing.
You will need to use the shell form of ENTRYPOINT (or CMD) in order to get environment variable substitution, e.g.
ENTRYPOINT python3 /workspace/scan-api.py --api_token=${TOKEN} ...
And then run the container using something of the form:
docker run --interactive --tty --env=TOKEN=${TOKEN} ...
HTH!

Related

Env var is defined in docker but returns None

I am working on a docker image I created using firesh/nginx-lua (the Linux distribution is Alpine):
FROM firesh/nginx-lua
COPY ./nginx.conf /etc/nginx
COPY ./handler.lua /etc/nginx/
COPY ./env_var_echo.py /etc/nginx/
RUN apk update
RUN apk add python3
RUN nginx -s reload
I run the image and then exec into the container:
docker run -it -d -p 8080:80 --name my-ngx-lua nginx-lua
docker exec -it my-ngx-lua sh
Then I define a new environment variable from inside the container:
/etc/nginx # export SECRET=thisIsMySecret
/etc/nginx # echo $SECRET
thisIsMySecret
/etc/nginx #
EDIT: After defining the new env var, I exit the container and then get into it again and it is not there anymore:
/etc/nginx # exit
iy#MacBook-Pro ~ % docker exec -it my-ngx-lua sh
/etc/nginx # echo $SECRET
/etc/nginx #
I run the python script and I expect to receive "thisIsMySecret", which is the value I defined.
import os
secret_key = os.environ.get('SECRET')
print(secret_key)
But I get None instead.
Python only returns a value for env vars that already came with the image (PATH, for example); for an env var that I just defined, it returns None.
BTW, I tried the same with Lua and received nil; hence I am pretty sure the issue comes from Alpine.
I am not looking for a solution like defining the env var from docker build.
Thanks.
This:
After defining the new env var, I exit the container
is the cause. An exported variable only exists while the shell it was declared in does, and it is only visible to that process and its children.
What you need is to use the -e option when you start the container:
docker run -e SECRET=mysecret ...
With this, Docker will add the environment variable to the main process (nginx in this case) and to docker exec sessions as well.
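As a quick check, a sketch reusing the names from the question: start the container with the variable set, then read it back from a docker exec session:
docker run -it -d -p 8080:80 -e SECRET=thisIsMySecret --name my-ngx-lua nginx-lua
docker exec -it my-ngx-lua python3 /etc/nginx/env_var_echo.py
# thisIsMySecret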

ENV in Dockerfile not getting replaced

I have a very simple docker file
FROM python:3
WORKDIR /usr/src/app
ENV CODEPATH=default_value
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
Here is my docker run command:
docker run -e TOKEN="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
When I check the container logs, I see:
python3: can't open file '/usr/src/app/${TOKEN}': [Errno 2] No such file or directory
It looks like what you want to do is override the default path to the python file which is run when you launch the container. Rather than passing this option in as an environment variable, you can just pass the path to the file as an argument to docker run, which is the purpose of CMD in your dockerfile. What you set as the CMD option is the default, which users of your image can easily override by passing an argument to the docker run command.
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest "subfolder/testmypython.py"
The environment variable name is CODEPATH, but you are setting TOKEN as the environment variable.
Could you please try setting CODEPATH in the following way:
docker run -e CODEPATH="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
The way you've split ENTRYPOINT and CMD doesn't make sense, and it makes it impossible to do variable expansion here. You should combine the two parts together into a single CMD, and then use the shell form to run it:
# no ENTRYPOINT
CMD python3 /usr/src/app/${CODEPATH}
(Having done this, better still is to use the approach in #allan's answer and directly docker run python-image python3 other-script-name.py.)
The Dockerfile syntax doesn't allow environment expansion in RUN, ENTRYPOINT, or CMD commands. Instead, these commands have two forms.
Exec form requires you to format the command as a JSON array, and doesn't do any processing on what you give it; it runs the command with an exact set of shell words and the exact strings in the command. Shell form doesn't have any special syntax, but wraps the command in sh -c, and that shell handles all of the normal things you'd expect a shell to do.
Using RUN as an example:
# These are the same:
RUN ["ls", "-la", "some directory"]
RUN ls -la 'some directory'
# These are the same (and print a dollar sign):
RUN ["echo", "$FOO"]
RUN echo \$FOO
# These are the same (and a shell does variable expansion):
RUN echo $FOO
RUN ["/bin/sh", "-c", "echo $FOO"]
If you have both ENTRYPOINT and CMD this expansion happens separately for each half. This is where the split you have causes trouble: none of these options will work:
# Docker doesn't expand variables at all in exec form
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
# ["python3", "/usr/src/app/${CODEPATH}"] with no expansion
# The "sh -c" wrapper gets interpreted as an argument to Python
ENTRYPOINT ["python3"]
CMD /usr/src/app/${CODEPATH}
# ["python3", "/bin/sh", "-c", "/usr/src/app/${CODEPATH}"]
# "sh -c" only takes one argument and ignores the rest
ENTRYPOINT python3
CMD ["/usr/src/app/${CODEPATH}"]
# ["/bin/sh", "-c", "python3", ...]
The only real effect of this ENTRYPOINT/CMD split is to make a container that can only run Python scripts, without special configuration (an awkward docker run --entrypoint option); you're still providing most of the command line in CMD, but not all of it. I tend to recommend that the whole command go in CMD, and you reserve ENTRYPOINT for a couple of more specialized uses; there is also a pattern of putting the complete command in ENTRYPOINT and trying to use the CMD part to pass it options. Either way, things will work better if you put the whole command in one directive or the other.
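Putting that together, a minimal corrected sketch of the Dockerfile from the question, with the whole command in a single shell-form CMD:
FROM python:3
WORKDIR /usr/src/app
ENV CODEPATH=default_value
# Shell form: sh -c expands ${CODEPATH} at run time
CMD python3 /usr/src/app/${CODEPATH}
which can then be run with the volume mount from the question:
docker run -e CODEPATH="subfolder/testmypython.py" -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest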

Override Dockerfile ENTRYPOINT with Python and parameters

in my Dockerfile:
ENTRYPOINT ["python3", "start1.py"]
When I run the Docker image, I want to override it with start2.py and a parameter year=2020. So I run:
docker run -it --entrypoint python3 start2.py year 2020 b43ssssss
It still runs start1.py, what am i doing wrong?
It still runs start1.py, what am i doing wrong?
Because anything passed as CMD with the entrypoint ENTRYPOINT ["python3", "start1.py"] will be passed as an argument to the Python file start1.py.
You can verify this by doing the following
import sys
print("All ARGs", sys.argv[1:])
So the output will be
All ARGs ['start2.py', 'year', '2020', 'b43ssssss']
So convert the entrypoint to python3 only, with a default CMD (start1.py), so you have control over which file to run:
ENTRYPOINT ["python3"]
# Default file to run
CMD ["start1.py"]
and then override at run time
docker run -it --rm my_image start2.py year 2020 b43ssssss
Now the args should be
All ARGs ['year', '2020', 'b43ssssss']
For a couple of reasons, I tend to recommend using CMD over ENTRYPOINT as a default. This question is one of them: if you need to override the command at run time, it's much easier to do if you specify CMD.
# Change ENTRYPOINT to CMD
CMD ["python3", "start1.py"]
# Run an alternate script
docker run -it myimage \
python3 start2.py year 2020 b43ssssss
# Run a debugging shell
docker run --rm -it myimage \
bash
# Quickly double-check file contents
docker run --rm -it myimage \
ls -l /app
# This is what you're trying to avoid
docker run --rm -it \
--entrypoint /bin/ls \
myimage \
-l app
There is also a useful pattern of using ENTRYPOINT to run a secondary script that does some initial setup (waits for a database, rewrites config files, bootstraps a data store, ...) and then does exec "$@" to launch the CMD. I tend to reserve ENTRYPOINT for this pattern and default to CMD even if I don't specifically need it.
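A minimal sketch of that wrapper pattern (the script name entrypoint.sh and the setup step are hypothetical):
#!/bin/sh
# entrypoint.sh: do one-time setup, then hand off to the CMD
echo "doing setup..."   # hypothetical setup step
# Replace this shell with whatever command was passed as CMD
exec "$@"
and in the Dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python3", "start1.py"]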
I do not recommend splitting the command with ENTRYPOINT ["python3"]. In the very specific case of wanting to run an alternate Python script it saves one word in the docker run command, but you still need to repeat the script name (unlike the "entrypoint-as-command" pattern) and you still need the --entrypoint option if you want to run something non-Python.

Get all Docker env variables in python script inside a container

I have a docker image with bash and python scripts inside it:
1) entrypoint.sh (this script runs the python file);
2) parser.py
When developers run a container, they can pass env variables with a prefix like MYPREFIX_*.
docker run name -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ...
There are more than 100 possible keys, they change from time to time (depending on remote configuration file).
I'd like to pass all variables to the bash script and then to the python script.
I can't define all variables inside Dockerfile (keys can change). I also can't use env_file.
Are there any suggestions?
Content of entrypoint:
/usr/bin/python3 "/var/hocon-parser.py"
/usr/bin/curl -sLo /var/waves.jar "https://github.com/wavesplatform/Waves/releases/download/v$WAVES_VERSION/waves-all-$WAVES_VERSION.jar"
/usr/bin/java -jar /var/waves.jar /waves-config.conf
The problem was in the run command. You can't pass env variables after the image name; anything after it is treated as the command. This command works:
docker run -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ... name
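Once the variables reach the container this way, collecting all of them in the Python script is straightforward; a minimal sketch (the MYPREFIX_ prefix comes from the question):
import os

# Gather every environment variable that starts with the agreed prefix
prefix_vars = {k: v for k, v in os.environ.items() if k.startswith("MYPREFIX_")}
print(prefix_vars)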

Docker ENTRYPOINT with ENV variable and optional arguments

I have a Dockerfile with an ENTRYPOINT that uses an ENV variable. I can't get the ENTRYPOINT structured so the container can also accept additional command line arguments. Here is the relevant part of the Dockerfile:
ARG MODULE_NAME
ENV MODULE_NAME=$MODULE_NAME
ENTRYPOINT /usr/bin/python3 -m ${MODULE_NAME}
That works fine if I just want to launch the container without additional arguments:
docker run my-image
But I need to be able to pass additional command line arguments (e.g., a "--debug" flag) to the python process like this:
docker run my-image --debug
With the form of ENTRYPOINT above, the "--debug" arg is not passed to the python process. I've tried both the exec form and the shell form of ENTRYPOINT but can't get it to work with both the ENV variable and command line args. A few other forms I tried:
This runs but doesn't accept additional args:
ENTRYPOINT ["/bin/bash", "-c", "/usr/bin/python3 -m ${MODULE_NAME}"]
This gives "/usr/bin/python3: No module named ${MODULE_NAME}":
ENTRYPOINT ["/usr/bin/python3", "-m ${MODULE_NAME}"]
This gives "/usr/bin/python3: No module named ${MODULE_NAME}":
ENTRYPOINT ["/usr/bin/python3", "-m", "${MODULE_NAME}"]
It appears it isn't possible to create an ENTRYPOINT that directly supports both variable expansion and additional command line arguments. While the shell form of ENTRYPOINT will expand ENV variables at run time, it does not accept additional (appended) arguments from the docker run command. While the exec form of ENTRYPOINT does support additional command line arguments, it does not create a shell environment by default so ENV variables are not expanded.
To get around this, bash can be called explicitly in the exec form to execute a script that then expands ENV variables and passes command line args to the python process. Here is an example Dockerfile that does this:
FROM ubuntu:16.04
ARG MODULE_NAME=foo
ENV MODULE_NAME=${MODULE_NAME}
RUN apt-get update -y && apt-get install -y python3.5
# Create the module to be run
RUN echo "import sys; print('Args are', sys.argv)" > /foo.py
# Create a script to pass command line args to python
RUN echo "/usr/bin/python3.5 -m $MODULE_NAME \$@" > /run_module.sh
ENTRYPOINT ["/bin/bash", "/run_module.sh"]
Output from the docker image:
$ docker run my-image
Args are ['/foo.py']
$ docker run my-image a b c
Args are ['/foo.py', 'a', 'b', 'c']
Note that variable expansion occurs during the RUN commands (since they use shell form), so the contents of /run_module.sh in the image are:
/usr/bin/python3.5 -m foo $@
If the final RUN command is replaced with this:
RUN echo "/usr/bin/python3.5 -m \$MODULE_NAME \$@" > /run_module.sh
then /run_module.sh would contain
/usr/bin/python3.5 -m $MODULE_NAME $@
But output from the running container would be the same since variable expansion will occur at run time. A potential benefit of the second version is that one could override the module to be run at run time without replacing the ENTRYPOINT.
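For example, with the second version the module could be swapped at run time without touching the ENTRYPOINT (bar is a hypothetical second module baked into the image):
docker run -e MODULE_NAME=bar my-image a b c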
I was able to get the ENV variables resolved inside ENTRYPOINT. The mistake I was making was that the ARGs used to populate the ENV were declared before the FROM statement and were hence out of scope for the current build stage. Once I moved them below FROM, it worked great.
My working Dockerfile:
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
ARG NEO4J_USERNAME
ARG NEO4J_URL
ARG NEO4J_PASSWORD
ENV NEO4J_USERNAME=$NEO4J_USERNAME \
NEO4J_URL=$NEO4J_URL \
NEO4J_PASSWORD=$NEO4J_PASSWORD
COPY ./ /
WORKDIR /
ENTRYPOINT python3 -u myscript.py ${NEO4J_URL} ${NEO4J_USERNAME} ${NEO4J_PASSWORD}
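For reference, the matching build command would pass the ARGs in with --build-arg (the values here are placeholders):
docker build \
  --build-arg BASE_IMAGE=python:3.9 \
  --build-arg NEO4J_URL=bolt://localhost:7687 \
  --build-arg NEO4J_USERNAME=neo4j \
  --build-arg NEO4J_PASSWORD=changeme \
  -t my-image .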
