Setting Docker ENV using python -c "command" - python

The Python modules that I downloaded are inside the user's home directory, so I need to add the user's bin directory to the PATH. I tried the two approaches shown below in my Dockerfile, but to no avail. When I check the environment variable in the running container, in the first case PY_USER_BIN is the literal string $(python -c 'import site; print(site.USER_BASE + "/bin")'), and in the second case PY_USER_BIN is blank. However, when I manually export the PY_USER_BIN variable, it works.
ENV PY_USER_BIN $(python -c 'import site; print(site.USER_BASE + "/bin")')
ENV PATH $PY_USER_BIN:$PATH
and
RUN export PY_USER_BIN=$(python -c 'import site; print(site.USER_BASE + "/bin")')
ENV PATH $PY_USER_BIN:$PATH

To me, you are mixing up different contexts of execution.
The ENV command you use is a Dockerfile instruction; it sets an environment variable in the image that is forwarded to the container.
The RUN command executes a command inside the container being built, here export. Whatever is done inside that shell stays inside that shell; Docker will not have access to it once the RUN step finishes.
There is no point in setting a Docker ENV variable to where Python lives on the host, as the host and the container do not share the same file system. If you need to do this in the container context, then run these commands inside the container with standard shell commands.
Try that first by connecting to your container and running a shell inside it; once the commands work, put them in your Dockerfile. It's as simple as that. To do that, run:
docker run -ti [your container name/tag] [your shell]
If you use sh as your shell:
docker run -ti [your container name/tag] sh
Then try your commands.
To me it seems the commands you want would look like this. Note that an exported variable does not survive from one RUN instruction to the next (each RUN runs in a fresh shell), so both steps have to happen in the same RUN:
RUN export PY_USER_BIN=$(python -c 'import site; print(site.USER_BASE + "/bin")') && \
    export PATH=$PY_USER_BIN:$PATH
Anyway, the point of a container is to have a fixed file system, fixed user names and so on, so the USER_BASE bin directory will always be in the same path inside the container; in 99% of cases you could just as well hardcode it.
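For example, a minimal Dockerfile sketch that hardcodes the path (this assumes an official python image running as root, where site.USER_BASE is /root/.local; check the path once in your own image before relying on it):

```dockerfile
FROM python:3
# Hardcoded user bin path; verify it once with:
#   docker run --rm python:3 python -c 'import site; print(site.USER_BASE + "/bin")'
ENV PY_USER_BIN=/root/.local/bin
ENV PATH=$PY_USER_BIN:$PATH
```

Since ENV values persist into the running container, a PATH set this way is visible to docker exec sessions as well.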

Related

Env var is defined on docker but returns None

I am working on a docker image I created using firesh/nginx-lua (The Linux distribution is Alpine):
FROM firesh/nginx-lua
COPY ./nginx.conf /etc/nginx
COPY ./handler.lua /etc/nginx/
COPY ./env_var_echo.py /etc/nginx/
RUN apk update
RUN apk add python3
RUN nginx -s reload
I run the image and then get a shell inside the container:
docker run -it -d -p 8080:80 --name my-ngx-lua nginx-lua
docker exec -it my-ngx-lua sh
Then I define a new environment variable from inside the container:
/etc/nginx # export SECRET=thisIsMySecret
/etc/nginx # echo $SECRET
thisIsMySecret
/etc/nginx #
EDIT: After defining the new env var, I exit the container and then get into it again and it is not there anymore:
/etc/nginx # exit
iy#MacBook-Pro ~ % docker exec -it my-ngx-lua sh
/etc/nginx # echo $SECRET
/etc/nginx #
I run the python script and I expect to receive "thisIsMySecret", which is the value I defined.
import os
secret_key = os.environ.get('SECRET')
print(secret_key + '\n')
But I get None instead.
Python only returns a value for env vars that already came with the container (PATH, for example). For an env var I just defined, it returns None.
BTW, I tried the same with Lua and received nil, hence I am pretty sure the issue comes from Alpine.
I am not looking for a solution like defining the env var from docker build.
Thanks.
This
After defining the new env var, I exit the container
is the cause. An exported variable exists only as long as the shell it was declared in, and it is visible only to that shell and its children; every docker exec starts a fresh shell.
What you need is to use the -e option when you start the container:
docker run -e SECRET=mysecret ...
With this, Docker adds the environment variable to the main process (NGINX in this case) and to docker exec sessions as well.
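The script then reads the variable straight from the process environment. A minimal sketch (the setdefault call merely simulates what docker run -e SECRET=… does, so the snippet runs standalone):

```python
import os

# Simulate `docker run -e SECRET=thisIsMySecret ...`: the variable is already
# in the environment before the script starts.
os.environ.setdefault("SECRET", "thisIsMySecret")

secret_key = os.environ.get("SECRET")
print(secret_key)
```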

ENV in docker file not getting replaced

I have a very simple docker file
FROM python:3
WORKDIR /usr/src/app
ENV CODEPATH=default_value
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
Here is my container command
docker run -e TOKEN="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
when I see container logs it shows
python3: can't open file '/usr/src/app/${TOKEN}': [Errno 2] No such file or directory
It looks like what you want to do is override the default path to the python file which is run when you launch the container. Rather than passing this option in as an environment variable, you can just pass the path to the file as an argument to docker run, which is the purpose of CMD in your dockerfile. What you set as the CMD option is the default, which users of your image can easily override by passing an argument to the docker run command.
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest "subfolder/testmypython.py"
The environment variable's name is CODEPATH, but you are setting TOKEN as the environment variable.
Could you please try setting CODEPATH in the following way:
docker run -e CODEPATH="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
The way you've split ENTRYPOINT and CMD doesn't make sense, and it makes it impossible to do variable expansion here. You should combine the two parts together into a single CMD, and then use the shell form to run it:
# no ENTRYPOINT
CMD python3 /usr/src/app/${CODEPATH}
(Having done this, better still is to use the approach in #allan's answer and directly docker run python-image python3 other-script-name.py.)
The Dockerfile syntax doesn't do environment expansion in RUN, ENTRYPOINT, or CMD commands by itself. Instead, these commands have two forms.
Exec form requires you to format the command as a JSON array, and doesn't do any processing on what you give it; it runs the command with an exact set of shell words and the exact strings in the command. Shell form doesn't have any special syntax, but wraps the command in sh -c, and that shell handles all of the normal things you'd expect a shell to do.
Using RUN as an example:
# These are the same:
RUN ["ls", "-la", "some directory"]
RUN ls -la 'some directory'
# These are the same (and print a dollar sign):
RUN ["echo", "$FOO"]
RUN echo \$FOO
# These are the same (and a shell does variable expansion):
RUN echo $FOO
RUN ["/bin/sh", "-c", "echo $FOO"]
If you have both ENTRYPOINT and CMD this expansion happens separately for each half. This is where the split you have causes trouble: none of these options will work:
# Docker doesn't expand variables at all in exec form
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
# ["python3", "/usr/src/app/${CODEPATH}"] with no expansion
# The "sh -c" wrapper gets interpreted as an argument to Python
ENTRYPOINT ["python3"]
CMD /usr/src/app/${CODEPATH}
# ["python3", "/bin/sh", "-c", "/usr/src/app/${CODEPATH}"]
# "sh -c" only takes one argument and ignores the rest
ENTRYPOINT python3
CMD ["/usr/src/app/${CODEPATH}"]
# ["/bin/sh", "-c", "python3", ...]
The only real effect of this ENTRYPOINT/CMD split is to make a container that can only run Python scripts, without special configuration (an awkward docker run --entrypoint option); you're still providing most of the command line in CMD, but not all of it. I tend to recommend that the whole command go in CMD, and you reserve ENTRYPOINT for a couple of more specialized uses; there is also a pattern of putting the complete command in ENTRYPOINT and trying to use the CMD part to pass it options. Either way, things will work better if you put the whole command in one directive or the other.
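The exec-form versus shell-form difference is easy to reproduce with plain sh, outside Docker (a sketch; FOO is an arbitrary example variable):

```shell
export FOO=bar

# Exec form passes arguments through untouched, like ["echo", "$FOO"]:
printf '%s\n' '$FOO'   # prints the literal text $FOO

# Shell form wraps the command in `sh -c`, so the shell expands the variable:
sh -c 'echo $FOO'      # prints bar
```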

Passing of environment variables with Dockerfile RUN instead of CMD

I have a dockerfile where a few commands need to be executed in a row, not in parallel or asynchronously, so cmd1 finishes, cmd2 starts, etc. etc.
Dockerfile's RUN is perfect for that. However, one of those RUN commands uses environment variables, meaning I'm calling os.getenv at some point. Sadly, it seems that when passing environment variables, be it through the CLI itself or with the help of a .env file, only CMD works instead of RUN. But CMD launches concurrently: the container executes this command but goes right on to the next one, which I definitely don't want.
In conclusion, is there even a way to pass environment variables to RUN commands in a dockerfile?
To help understand a bit better, here's an excerpt from my dockerfile:
FROM python:3.8
# Install python dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# Create working directory
RUN mkdir -p /usr/src/my_directory
WORKDIR /usr/src/my_directory
# Copy contents
COPY . /usr/src/my_directory
# RUN calling a method that calls os.getenv at some point (THIS IS THE PROBLEM)
RUN ["python3", "some_script.py"]
# RUN some other commands (this needs to run AFTER the command above finishes)
# if I replace the RUN above with CMD, this gets called right after
RUN ["python3", "some_other_script.py", "--param", "1", "--param2", "config.yaml"]
Excerpt from some_script.py:
if __name__ == "__main__":
    abc = os.getenv("my_env_var")  # this is where I get a ReferenceError if I use RUN
    do_some_other_stuff(abc)
The .env file I'm using with the dockerfile (or docker-compose):
my_env_var=some_url_i_need_for_stuff
Do not use the exec form of a RUN instruction if you want variable substitution; either use the shell form, or use the exec form to execute a shell directly. From the documentation:
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
This is how I solved my problem:
write a bash script that executes all relevant commands in the order I want
use ENTRYPOINT instead of CMD or RUN
the bash script will already have the ENV vars, but you can double-check with positional arguments passed to that bash script

Get all Docker env variables in python script inside a container

I have docker image with bash and python scripts inside it:
1) entrypoint.sh (this script runs the Python file);
2) parser.py
When developers run a container, they can pass env variables with prefix like MYPREFIX_*.
docker run name -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ...
There are more than 100 possible keys, they change from time to time (depending on remote configuration file).
I'd like to pass all variables to the bash script and then to the python script.
I can't define all variables inside Dockerfile (keys can change). I also can't use env_file.
Are there any suggestions?
Content of entrypoint:
/usr/bin/python3 "/var/hocon-parser.py"
/usr/bin/curl -sLo /var/waves.jar "https://github.com/wavesplatform/Waves/releases/download/v$WAVES_VERSION/waves-all-$WAVES_VERSION.jar"
/usr/bin/java -jar /var/waves.jar /waves-config.conf
The problem was in the run command: everything after the image name is treated as the command to run inside the container, so the -e flags were never parsed as options. This command works:
docker run -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ... name
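Inside parser.py, the prefixed variables can then be collected generically, so new keys require no code change (a sketch; the two setdefault calls just simulate the docker run -e flags so the snippet runs standalone):

```python
import os

# Simulate `docker run -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ... name`
os.environ.setdefault("MYPREFIX_1", "true")
os.environ.setdefault("MYPREFIX_DEMO", "100")

PREFIX = "MYPREFIX_"
# Keep every variable with the agreed prefix, stripping the prefix itself.
settings = {key[len(PREFIX):]: value
            for key, value in os.environ.items()
            if key.startswith(PREFIX)}
print(sorted(settings.items()))
```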

Using docker environment -e variable in supervisor

I've been trying to pass an environment variable to a Docker container via the -e option. The variable is meant to be used in a supervisor script within the container. Unfortunately, the variable does not get resolved (i.e. it stays literal, for instance $INSTANCENAME). I tried ${var} and "${var}", but this didn't help either. Is there anything I can do, or is this just not possible?
The docker run command:
sudo docker run -d -e "INSTANCENAME=instance-1" -e "FOO=2" -v /var/app/tmp:/var/app/tmp -t myrepos/app:tag
and the supervisor file:
[program:app]
command=python test.py --param1=$FOO
stderr_logfile=/var/app/log/$INSTANCENAME.log
directory=/var/app
autostart=true
The variable is being passed to your container, but supervisor doesn't let you use environment variables like this inside its configuration files.
You should review the supervisor documentation, and specifically the parts about string expressions. For example, for the command option:
Note that the value of command may include Python string expressions, e.g. /path/to/programname --port=80%(process_num)02d might expand to /path/to/programname --port=8000 at runtime.
String expressions are evaluated against a dictionary containing the keys group_name, host_node_name, process_num, program_name, here (the directory of the supervisord config file), and all supervisord’s environment variables prefixed with ENV_.
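With a sufficiently recent supervisord (3.2 or newer), the configuration from the question can therefore be rewritten using the documented %(ENV_x)s expressions (a sketch):

```ini
[program:app]
command=python test.py --param1=%(ENV_FOO)s
stderr_logfile=/var/app/log/%(ENV_INSTANCENAME)s.log
directory=/var/app
autostart=true
```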