Get all Docker env variables in python script inside a container - python

I have a Docker image with bash and python scripts inside it:
1) entrypoint.sh (this script runs python file);
2) parser.py
When developers run a container, they can pass env variables with a prefix, like MYPREFIX_*:
docker run name -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ...
There are more than 100 possible keys, they change from time to time (depending on remote configuration file).
I'd like to pass all variables to the bash script and then to the python script.
I can't define all variables inside Dockerfile (keys can change). I also can't use env_file.
Are there any suggestions?
Content of entrypoint:
/usr/bin/python3 "/var/hocon-parser.py"
/usr/bin/curl -sLo /var/waves.jar "https://github.com/wavesplatform/Waves/releases/download/v$WAVES_VERSION/waves-all-$WAVES_VERSION.jar"
/usr/bin/java -jar /var/waves.jar /waves-config.conf

The problem was in the run command: you can't pass env variables after the image name, because everything that follows the image name is treated as the container's command. This command works:
docker run -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ... name
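Once the container is started this way, the entrypoint's child processes inherit the environment automatically, so parser.py can collect everything with the prefix straight from os.environ. A minimal sketch (the MYPREFIX_ names come from the question; stripping the prefix for the dict keys is an assumption):

```python
import os

# Gather every variable passed with `docker run -e MYPREFIX_...=...`.
# The entrypoint does not need to forward anything explicitly:
# child processes inherit the container's environment.
PREFIX = "MYPREFIX_"
config = {
    name[len(PREFIX):]: value
    for name, value in os.environ.items()
    if name.startswith(PREFIX)
}
print(config)
```

This also keeps working when the set of keys changes, since nothing is hardcoded.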


Env var is defined on docker but returns None

I am working on a docker image I created using firesh/nginx-lua (The Linux distribution is Alpine):
FROM firesh/nginx-lua
COPY ./nginx.conf /etc/nginx
COPY ./handler.lua /etc/nginx/
COPY ./env_var_echo.py /etc/nginx/
RUN apk update
RUN apk add python3
RUN nginx -s reload
I run the image and then get a shell inside the container:
docker run -it -d -p 8080:80 --name my-ngx-lua nginx-lua
docker exec -it my-ngx-lua sh
Then I define a new environment variable from inside the container:
/etc/nginx # export SECRET=thisIsMySecret
/etc/nginx # echo $SECRET
thisIsMySecret
/etc/nginx #
EDIT: After defining the new env var, I exit the container and then get into it again and it is not there anymore:
/etc/nginx # exit
iy#MacBook-Pro ~ % docker exec -it my-ngx-lua sh
/etc/nginx # echo $SECRET
/etc/nginx #
I run the python script and I expect to receive "thisIsMySecret", which is the value I defined.
import os
secret_key = os.environ.get('SECRET')
print(secret_key)
But I get None instead.
Python only returns a value for env vars that already came with the container (PATH, for example). For an env var that I just defined, it returns None.
BTW, I tried the same with Lua and received nil, hence I am pretty sure the issue is with Alpine.
I am not looking for a solution like defining the env var from docker build.
Thanks.
This:
After defining the new env var, I exit the container
is the cause. An exported variable only lives as long as the shell that declared it, and it is only visible to that process and its children. When you exit the docker exec shell, the variable is gone.
What you need is to use the -e option when you start the container:
docker run -e SECRET=mysecret ...
With this, docker adds the environment variable to the main process (NGINX in this case) and to docker exec sessions as well.
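On the Python side nothing else changes; the script just reads the inherited variable, ideally with a default so a missing -e doesn't crash it. A minimal sketch (the fallback string is an assumption):

```python
import os

# Set by `docker run -e SECRET=...`; the default avoids a crash
# when the variable is absent.
secret_key = os.environ.get("SECRET", "no-secret-set")
print(secret_key)
```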

Setting Docker ENV using python -c "command"

The python modules that I downloaded are inside the user's home directory, so I need to add the user's bin directory to the PATH. I tried the two approaches shown below in my Dockerfile, but to no avail. When I check the environment variable in the running container, in the first case PY_USER_BIN is literally $(python -c 'import site; print(site.USER_BASE + "/bin")'), and in the second case PY_USER_BIN is blank. However, when I manually export the PY_USER_BIN variable, it works.
ENV PY_USER_BIN $(python -c 'import site; print(site.USER_BASE + "/bin")')
ENV PATH $PY_USER_BIN:$PATH
and
RUN export PY_USER_BIN=$(python -c 'import site; print(site.USER_BASE + "/bin")')
ENV PATH $PY_USER_BIN:$PATH
You are mixing up two execution contexts.
The ENV instruction is a Dockerfile instruction: it sets an env variable at build time, in docker's context, to be forwarded to the container. Its value is a plain string; the $(...) command substitution is never evaluated, which is why you see it verbatim.
The RUN instruction executes a command inside the container, here export. Whatever is done inside a RUN step stays inside that step's shell: docker has no access to it, and it does not survive into the next instruction, which is why PY_USER_BIN ends up blank.
There is also no point in putting a host path into a docker ENV variable, since host and container do not share a file system. If you need the value in the container context, run these commands inside the container with standard shell commands.
Try it first by connecting to your container and running a shell inside it; once the commands work, put them in your Dockerfile. It's as simple as that. To do that, run:
docker run -ti [your container name/tag] [your shell]
if you use sh as shell:
docker run -ti [your container name/tag] sh
Then try your commands.
To me it seems the commands you want would look like this; note that an export does not survive from one RUN step to the next, so everything that needs the value has to be chained in a single RUN:
RUN PY_USER_BIN=$(python -c 'import site; print(site.USER_BASE + "/bin")') && echo "PATH=$PY_USER_BIN:$PATH" >> /etc/profile
Anyway, the point of a container is to have a fixed file system, fixed user names and all, so USER_BASE will always be at the same path inside the container; in 99% of cases you could just as well hardcode it in an ENV instruction.
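For reference, the value the ENV line was trying to capture can be computed at run time from inside Python itself, which sidesteps the Dockerfile quoting problem entirely. A minimal sketch:

```python
import os
import site

# site.USER_BASE is where `pip install --user` puts packages;
# its bin/ subdirectory is what the Dockerfile wanted on PATH.
user_bin = os.path.join(site.USER_BASE, "bin")
print(user_bin)
```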

Pass python arguments (argparse) within Docker container

I have a python script that I run with the following command:
python3 scan.py --api_token 5563ff177863e97a70a45dd4 --base_api_url http://101.102.34.66:4242/scanjob/ --base_report_url http://101.102.33.66:4242/ --job_id 42
This works perfectly when I run it on the command line.
In my Dockerfile I have tried ARG and ENV; none seem to work:
#ARG api_token
#ARG username
#ARG password
# Configure AWS arguments
#RUN aws configure set aws_access_key_id $AWS_KEY \
# && aws configure set aws_secret_access_key $AWS_SECRET_KEY \
# && aws configure set default.region $AWS_REGION
### copy bash script and change permission
RUN mkdir workspace
COPY scan-api.py /workspace
RUN chmod +x /workspace/scan-api.py
CMD ["python3", "/workspace/scan-api.py"]
so how do i define this flagged argument in docker file ?
And whats the command run when running the image ?
You can do this in two ways, since you want to override at run time:
As args to the docker run command
As an ENV passed to the docker run command
The 1st is the simplest; you will not need to change anything in the Dockerfile.
docker run --rm my_image python3 /workspace/scan-api.py --bar tet --api_token 5563ff177863e97a70a45dd4 --base_api_url http://101.102.34.66:4242/scanjob/ --base_report_url http://101.102.33.66:4242/ --job_id 42
and my simple script
import sys
print("All ARGs", sys.argv[1:])
Using ENV, you will need to change the Dockerfile.
I am showing the way for one arg; you can do the same for all args:
FROM python:3.7-alpine3.9
ENV API_TOKEN=default_token
CMD ["sh", "-c", "python /workspace/scan-api.py --api_token $API_TOKEN"]
So you can override them during run time or have the ability to run with some default value.
docker run -it --rm -e API_TOKEN=new_token my_image
CMD takes exactly the same arguments you used from the command line.
CMD ["python3", "/workspace/scan-api.py", "--api_token", "5563ff177863e97a70a45dd4", "--base_api_url", "http://101.102.34.66:4242/scanjob/", "--base_report_url", "http://101.102.33.66:4242/", "--job_id", "42"]
It's confusing.
You will need to use the shell form of ENTRYPOINT (or CMD) in order to get environment variable substitution, e.g.
ENTRYPOINT python3 /workspace/scan-api.py --api_token "${TOKEN}" ...
And then run the container using something of the form:
docker run --interactive --tty --env=TOKEN=${TOKEN} ...
HTH!
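Either way, the script itself can accept both styles by letting each argparse flag fall back to an environment variable. A minimal sketch (the env-var names and the explicit argv list are illustrative; in the real script you would call parse_args() with no arguments):

```python
import argparse
import os

parser = argparse.ArgumentParser()
# Each flag falls back to an environment variable, so the container can be
# driven by CLI args, by `docker run -e ...`, or a mix of both.
parser.add_argument("--api_token", default=os.environ.get("API_TOKEN"))
parser.add_argument("--job_id", default=os.environ.get("JOB_ID"))

# Explicit argv shown for illustration; normally use parser.parse_args().
args = parser.parse_args(["--api_token", "5563ff177863e97a70a45dd4", "--job_id", "42"])
print(args.api_token, args.job_id)
```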

Using docker environment -e variable in supervisor

I've been trying to pass an environment variable to a Docker container via the -e option. The variable is meant to be used in a supervisor script within the container. Unfortunately, the variable does not get resolved (i.e. it stays as, for instance, $INSTANCENAME). I tried ${var} and "${var}", but neither helped. Is there anything I can do, or is this just not possible?
The docker run command:
sudo docker run -d -e "INSTANCENAME=instance-1" -e "FOO=2" -v /var/app/tmp:/var/app/tmp -t myrepos/app:tag
and the supervisor file:
[program:app]
command=python test.py --param1=$FOO
stderr_logfile=/var/app/log/$INSTANCENAME.log
directory=/var/app
autostart=true
The variable is being passed to your container, but supervisor doesn't let you use environment variables like this inside its configuration files.
You should review the supervisor documentation, and specifically the parts about string expressions. For example, for the command option:
Note that the value of command may include Python string expressions, e.g. /path/to/programname --port=80%(process_num)02d might expand to /path/to/programname --port=8000 at runtime.
String expressions are evaluated against a dictionary containing the keys group_name, host_node_name, process_num, program_name, here (the directory of the supervisord config file), and all supervisord’s environment variables prefixed with ENV_.
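Concretely, with the docker run command from the question, the config would reference the variables through the ENV_ prefix using that string-expression syntax; a sketch (untested, and assuming supervisord inherits the container's environment):

```ini
[program:app]
command=python test.py --param1=%(ENV_FOO)s
stderr_logfile=/var/app/log/%(ENV_INSTANCENAME)s.log
directory=/var/app
autostart=true
```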

execute python script local to docker client - no volumes

I can run a bash script local to my docker client (not local to the docker host or targeted container), without using volumes or copying the script to the container:
docker run debian bash -c "`cat script.sh`"
Q1: How do I do the equivalent with a django container? The following have not worked, but they may help demonstrate what I'm asking for (the bash script printfs the python script with the expanded args):
docker run django shell < `cat script.py`
cat script.py | docker run django shell
Q2: How do I pass arguments to a script.py handed to a dockerized manage.py? Again, examples of what does not work (for me):
./script.sh arg1 arg2 | docker run django shell
docker run django shell < echo "$(./script.sh arg1 arg2)"
I think the best way for you is to use a custom Dockerfile with a COPY or ADD instruction to move whatever scripts you need into the container.
As for passing arguments, you can use an ENTRYPOINT in your image, like the example below:
ENTRYPOINT django shell /home/script.sh
Then you can use docker run <image> arg1 arg2 to pass the arguments.
This is a link about passing command line arguments to python: http://www.tutorialspoint.com/python/python_command_line_arguments.htm
eg: python script.py -param1
If the script is already available inside the image, you can trigger it from the Dockerfile (passing parameters):
RUN python /script.py -param1 <value>
Extra:
Having said that, it is awkward to keep editing the Dockerfile when parameters change frequently. Hence a small shell script can be written as a wrapper around the build, using build args, like this:
Dockerwrapper.sh (passes the parameter through to the build):
docker build --build-arg PARAM1=<value> --tag <name> .
Dockerfile:
ARG PARAM1
RUN python script.py -param1 $PARAM1
If the script is not present inside the image, you can copy it in and later delete it using COPY and RUN instructions (since docker is an isolated environment, running it from outside is not possible, I guess).
Hope it answered your question. All the best.
