Docker file for running a Python program with parameters - python

I'm new to Docker. I have a Python program that I run in the following way.
python main.py --s=aws --d=scylla --p=4 --b=15 --e=local -w
Please note the double hyphen (--) for the first five parameters and the single hyphen (-) for the last one.
I'm trying to run this inside a Docker container. Here's my Dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python","app.py","--s","(source)", "--d","(database)","--w", "(workers)", "--b", "(bucket)", "--e", "(env)", "-w"]
I'm not sure if this will work, as I don't know exactly how to test and run it. I want to run the Docker image with the following port mappings.
docker run --name=something -p 9042:9042 -p 7000:7000 -p 7001:7001 -p 7199:7199 -p 9160:9160 -p 9180:9180 -p 10000:10000 -d user/something
How can I correct the Dockerfile? Once I build the image, how do I run it?

First, fix the dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
# optional: it is better to chain commands to reduce the number of created layers
RUN pip install --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
# mandatory: "--s=smth" is one argument
# optional: it's better to use environment variables for source, database etc
CMD ["python","app.py","--s=(source)", "--d=(database)","--w=(workers)", "--b=(bucket)", "--e=(env)", "-w"]
Then, build it:
docker build -f "<dockerfile path>" -t "<tag to assign>" "<build dir (eg .)>"
Then, you can just use the assigned tag as an image name:
docker run ... <tag assigned>
UPD: I got it wrong the first time; the tag should be used in place of the image name, not the container (instance) name.
UPD2: With the first response, I assumed you were going to hardcode the parameters and only mentioned that it is better to use environment variables. Here is an example of how to do it:
The better option is to read environment variables directly in your Python script instead of command-line arguments.
The quickest (and dirtiest) way, which keeps the script unchanged, is to replace CMD with something like:
CMD ["sh", "-c", "python app.py --s=$SOURCE --d=$DATABASE --w=$WORKERS ... -w"]
(it is common to use CAPS names for environment variables)
It will be better, however, to read environment variables directly in your Python script instead of command line arguments, or use them as defaults:
# somewhere in app.py
import os
...
DATABASE = os.environ.get('DATABASE', default_value) # can default to args.d
SOURCE = os.environ.get('SOURCE') # None by default
# etc
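For example, here is a minimal sketch that uses environment variables as defaults for argparse options (the flag names follow the question's command line; the variable names and defaults are illustrative, not the actual app.py):
# app.py (sketch): environment variables provide defaults for the CLI flags
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--s', default=os.environ.get('SOURCE', 'aws'))
parser.add_argument('--d', default=os.environ.get('DATABASE', 'scylla'))
parser.add_argument('--p', type=int, default=int(os.environ.get('WORKERS', '4')))
parser.add_argument('--b', type=int, default=int(os.environ.get('BUCKET', '15')))
parser.add_argument('--e', default=os.environ.get('ENV', 'local'))
parser.add_argument('-w', action='store_true')
args = parser.parse_args()
print(args.s, args.d, args.p, args.b, args.e, args.w)
With this, docker run -e DATABASE=<dbname> ... overrides the default without rebuilding the image, and explicit command-line flags still take precedence.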
Don't forget to update the Dockerfile as well in this case:
# Dockerfile:
...
CMD ["python","app.py"]
Finally, pass environment variables to your run command:
docker run --name=something ... -e DATABASE=<dbname> -e SOURCE=<source> ... <tag assigned at build>
There are more ways to pass environment variables; I'll just refer to the official documentation here:
https://docs.docker.com/compose/environment-variables/

Related

Docker: Add a valid entrypoint for multiple Python scripts

Hello, I have to build a Docker image for the following bioinformatics tool: https://github.com/CAMI-challenge/CAMISIM. Their Dockerfile works but takes a long time to build, and I would like to build my own, slightly differently, to learn. I face an issue: there are several Python scripts that I should be able to choose to run, not only a main one. If I add one script in particular as an ENTRYPOINT, then the behavior isn't exactly what I should have.
The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
#COPY ./install_docker.sh ./
#RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENTRYPOINT ["python3"]
ENV PATH="/CAMISIM/:${PATH}"
This yields:
sudo docker run camisim:latest metagenomesimulation.py --help
python3: can't open file 'metagenomesimulation.py': [Errno 2] No such file or directory
Adding that script as an ENTRYPOINT after python3 allows me to use it, with two drawbacks: I cannot use another script (I could build a second Docker image, but that would be a bad solution), and it outputs:
ERROR: 0
usage: python metagenomesimulation.py configuration_file_path
#######################################
#    MetagenomeSimulationPipeline     #
#######################################
Pipeline for the simulation of a metagenome
optional arguments:
  -h, --help            show this help message and exit
  -silent, --silent     Hide unimportant Progress Messages.
  -debug, --debug_mode  more information, also temporary data will not be deleted
  -log LOGFILE, --logfile LOGFILE
                        output will also be written to this log file
optional config arguments:
  -seed SEED            seed for random number generators
  -s {0,1,2}, --phase {0,1,2}
                        available options: 0,1,2. Default: 0
                        0 -> Full run,
                        1 -> Only Comunity creation,
                        2 -> Only Readsimulator
  -id DATA_SET_ID, --data_set_id DATA_SET_ID
                        id of the dataset, part of prefix of read/contig sequence ids
  -p MAX_PROCESSORS, --max_processors MAX_PROCESSORS
                        number of available processors
required:
  config_file           path to the configuration file
You can see there is an error that shouldn't be there; it actually does not use the help flag. The original Dockerfile is:
FROM ubuntu:20.04
RUN apt update
RUN apt install -y python3 python3-pip perl libncursesw5
RUN perl -MCPAN -e 'install XML::Simple'
ADD requirements.txt /requirements.txt
RUN cat requirements.txt | xargs -n 1 pip install
ADD *.py /usr/local/bin/
ADD scripts /usr/local/bin/scripts
ADD tools /usr/local/bin/tools
ADD defaults /usr/local/bin/defaults
WORKDIR /usr/local/bin
ENTRYPOINT ["python3"]
It works, but it shows the same error as above, so not quite. Said error is not present when using the tool outside of Docker. Last time I made a Docker image, I just pulled the git repo and added the main .sh script as an ENTRYPOINT, and everything worked despite being more complex (see https://github.com/Louis-MG/Metadbgwas).
Why would I need ADD and to move everything? I added the git folder to the PATH, so why can't I find the scripts? How is it different from the Metadbgwas image?
In your first setup, you start in the image root directory / and run git clone to check out the repository into /CAMISIM. You never change the current directory, though, so when you try to run python3 metagenomesimulation.py --help it's looking in / and not /CAMISIM, hence the "not found" error.
You can fix this just by changing the current directory. At any point after you check out the repository, run
WORKDIR /CAMISIM
You should also delete the ENTRYPOINT line. For each of the scripts you could run as a top-level entry point, check two things:
Is it executable? If you run ls -l metagenomesimulation.py, are there x's in the permission listing? If not, on the host system, run chmod +x metagenomesimulation.py and commit that to source control. (Or you could RUN chmod ... in the Dockerfile if you really can't change the repository.)
Does it have a "shebang" line? The very first line of the script should be
#!/usr/bin/env python3
If both of these things are true, then you can just run ./metagenomesimulation.py without explicitly saying python3; since you add the directory to $PATH as well, you can probably run it without specifying the ./... file location.
(Probably deleting the ENTRYPOINT line on its own is enough, given that ENV PATH setting, but your script still might be confused by starting up in the wrong directory.)
The long "help" output just suggests to me that the script is expecting a configuration file name as a parameter and you haven't provided it, or else you've repeated the script name in both the entrypoint and command parts of the container command string.
In the end, very little was required and the original Dockerfile was correct; the same error is displayed anyway, as it is due to the script itself.
What was missing was a link to the interpreter, so I could remove the ENTRYPOINT and actually interpret the script instead of having python look for it in its own path. The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENV PATH="/CAMISIM:${PATH}"
Trying WORKDIR as suggested instead of the PATH yielded an error.

Is it possible to share a volume with 2 Docker containers?

I can't run 2 containers together, whereas I can run each of them separately.
I have this 1st container/image related to this Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test1.py /app/container1/test1.py
WORKDIR /app/
CMD python3 container1/test1.py
I have this 2nd container/image related to this Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test2.py /app/container2/test2.py
WORKDIR /app/
CMD python3 container2/test2.py
No issues to create images:
docker image build ./authentif -t test1:latest
docker image build ./authoriz -t test2:latest
When I run the 1st container with this command:
docker container run -it --network my_network --name test1_container \
  --mount type=volume,src=my_volume,dst=/app -e LOG=1 \
  --rm test1:latest
it works.
And if I want to check my volume:
sudo ls /var/lib/docker/volumes/my_volume/_data
I can see data in my volume
However, when I want to run the 2nd container:
docker container run -it --network my_network --name test2_container \
  --mount type=volume,src=my_volume,dst=/app -e LOG=1 \
  --rm test2:latest
I have this error:
python3: can't open file '/app/container2/test2.py': [Errno 2] No such file or directory
If I delete everything and start over: if I start by running the 2nd container, it works, but then if I want to run the 1st container, I have the error again.
Why is that?
In my container1, let's assume that my Python script writes data to a file, for example:
import os
print("test111111111")
if os.environ.get('LOG') == "1":
    print("1111111")
    with open('record.log', 'a') as file:
        file.write("file11111")
I can't reproduce your issue. When I start 2 containers using
docker run -d --rm -v myvolume:/app --name container1 debian tail -f /dev/null
docker run -d --rm -v myvolume:/app --name container2 debian tail -f /dev/null
and then do
docker exec container1 /bin/sh -c 'echo hello > /app/hello.txt'
docker exec container2 cat /app/hello.txt
it prints out 'hello' as expected.
You are mounting the volume over /app, the directory that contains your application code. That hides the code and replaces it with something else.
The absolute best approach here, if you can handle it, is to avoid sharing files at all. Keep the data somewhere like a relational database (which may be stateful). Don't mount anything on to your containers. Especially if you're looking forward to a clustered environment like Kubernetes, sharing files can be surprisingly tricky.
If you can't get rid of the shared directory, then put it somewhere other than /app. You might need to configure the alternate directory using an environment variable.
docker container run ... \
--mount type=volume,src=my_volume,dst=/data \ # /data, not /app
...
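On the application side, here is a small sketch of how the script could read the alternate directory from an environment variable (the DATA_DIR name and /data default are assumptions for illustration, not from the question):
# test1.py (sketch): write the log into a configurable shared directory
import os

data_dir = os.environ.get('DATA_DIR', '/data')  # assumed name; matches the /data mount above
os.makedirs(data_dir, exist_ok=True)

print("test111111111")
if os.environ.get('LOG') == "1":
    print("1111111")
    with open(os.path.join(data_dir, 'record.log'), 'a') as file:
        file.write("file11111")
You would then pass -e DATA_DIR=/data (or simply rely on the default) alongside the --mount option shown above.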
What's actually happening in your setup is that Docker has a feature to copy the contents of the image into an empty named volume on first use. This only happens if the volume is completely empty, this only happens with a named Docker volume and not bind mounts, and this doesn't happen on other container systems like Kubernetes. (I'd discourage actually relying on this behavior.)
So when you run the first container, it sees that my_volume is empty and copies the test1 image content into it; then the container sees the code it expects in /app and it apparently works fine. The second container sees that my_volume is non-empty, and so the volume contents (with the first image's code) hide what was in the image (the second image's code). I'd expect that, if you started from scratch, whichever of the two containers you started first would work but not the other, and if you change the code in the working image, a new container won't see that change (it will use the code out of the volume).

Pass Windows environment variables to dockerized Python app

I am running a Python application that reads two paths from Windows env vars and proceeds to use the executables in those paths to do OCR on some documents. Since the POPPLER and TESSERACT env vars are already set in Windows, this Python snippet works for me:
popplerPath = os.environ.get('POPPLER')
tesseractPath = os.environ.get('TESSERACT')
Now I am trying to dockerize the app, and, to my understanding, since my container will need access to those paths, I need to mount them using VOLUME during run. My dockerfile looks like this:
FROM python:3.7.7-slim
WORKDIR ./
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY documents/ .
COPY src/ ./src
CMD [ "python", "./src/run.py" ]
I build the image using:
docker build -t ocr .
And I try to run my container using:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% ocr
... but my app still gets a None value for these paths and can't use the executable files. Is my approach correct and beyond that, is it a good dev practice?
See the docs; the switch for environment variables is -e:
$ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
and in the Dockerfile, you can use
ENV FOO=/bar
If I understand your statement correctly, your paths are mounted in the container at the same path as on the host. The only problem is your Python script, which expects the paths to be provided by environment variables. These will not exist unless you pass them on from your host system to your container.
Once you have verified that your volume mounted with -v is there correctly, you can try
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% --env POPPLER=%POPPLER% --env TESSERACT=%TESSERACT% ocr
or, if you always run this, you can consider putting them in your Dockerfile to save some keystrokes.
Any executable you call must be built into the image. Containers can't usually call executables on the host or in other containers. In the specific example you show, a Linux container can't run a Windows executable, even if you do use a bind mount to inject it into the container.
The "slim" python images are built on Debian GNU/Linux, and you need to use its APT tool to install these executable dependencies in your Dockerfile. (https://www.debian.org/distrib/packages has a search box to help you find the right package name; Ubuntu Linux also uses Debian packages.)
FROM python:3.7-slim
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install -y \
      poppler-utils \
      tesseract-ocr-all
COPY requirements.txt .
...
I'd suggest putting reasonable defaults in your code if these environment variables aren't set. The apt-get install command will put them in the system path inside the image.
popplerPath = os.environ.get('POPPLER', 'poppler')
tesseractPath = os.environ.get('TESSERACT', 'tesseract')
If you really need them as environment variables you could use the Dockerfile ENV directive
ENV POPPLER=poppler TESSERACT=tesseract
Environment variables from the host don't automatically get passed through to the container; you need a Dockerfile ENV or docker run -e option. Also remember that the container has an isolated filesystem (and Windows-syntax paths don't make sense in Linux containers) so these environment variables would need to be container paths, the second half of your proposed docker run -v option.
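As a hedged sketch of that last point (paths and names here are illustrative, not a definitive implementation): the defaults below point at the tools installed by apt-get inside the image, and any override passed with -e must be a path that exists inside the container:
# src/run.py (sketch): prefer the apt-installed tools, allow container-path overrides
import os
import subprocess

tesseract_cmd = os.environ.get('TESSERACT', 'tesseract')  # on $PATH after apt-get install
poppler_dir = os.environ.get('POPPLER', '/usr/bin')        # pdftoppm, pdftotext, ... live here

subprocess.run([tesseract_cmd, '--version'], check=True)
print('poppler tools expected in:', poppler_dir)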

How to parse volume in Dockerfile for an application to be pushed on Docker Hub

FROM python:3
WORKDIR /Users/vaibmish/Documents/new/graph-report
RUN pip install graphreport==1.2.1
CMD [ cd /Users/vaibmish/Documents/new/graph-report/graphreport_metrics ]
CMD [ graphreport ]
This is part of the Dockerfile.
I wish to remove the cd/volume paths from that file and have something like -v there, so that whoever runs it can give their own volume path.
The line
CMD [ cd /Users/vaibmish/Documents/new/graph-report/graphreport_metrics ]
is wrong. You achieve the same with WORKDIR:
WORKDIR /Users/vaibmish/Documents/new/graph-report/graphreport_metrics
WORKDIR creates the path if it doesn't exist and then changes the current directory to that path (same as mkdir -p /path/new && cd /path/new)
You can also declare the path as a volume and instruct who runs the container to provide their own path (docker run -v host_path:container_path ...)
VOLUME /Users/vaibmish/Documents/new/graph-report
A final note: It looks like these paths are from the host. Remember that the paths in the Dockerfile are not host paths. They are paths inside the container.
Typical practice here is to pick some fixed path inside the Docker container. It should be a different path from where your application is installed; it does not need to match any particular host path at all.
FROM python:3
RUN pip3 install graphreport==1.2.1
WORKDIR /data
CMD ["graphreport"]
docker build -t me/graphreport:1.2.1 .
docker run --rm \
-v /Users/vaibmish/Documents/new/graph-report:/data \
me/graphreport:1.2.1
(Remember that only the last CMD has an effect, and if it's not a well-formed JSON array, Docker will interpret it as a shell command. What you show in the question would run the test(1) command and not the program you're installing.)
If you're trying to install a single package from PyPI and just run it on local files, a Python virtual environment will be much easier to set up than anything based on Docker, and will essentially work as you expect:
python3 -m venv graphreport
. graphreport/bin/activate
pip3 install graphreport==1.2.1
cd /Users/vaibmish/Documents/new/graph-report
graphreport
deactivate # switch back to system Python/pip
All of the installed Python code is inside the graphreport virtual environment directory, and if you don't need this application again, you can just delete the directory tree.

Passing PIP_EXTRA_INDEX_URL to docker build

I am building an app that has a dependency available on a private PyPI server.
My Dockerfile looks like this:
FROM python:3.6
WORKDIR /src/mylib
COPY . ./
RUN pip install .
I want pip to use the extra server to install the dependencies. So I'm trying to pass the PIP_EXTRA_INDEX_URL environment variable during the build phase like so:
"docker build --pull -t $IMAGE_TAG --build-arg PIP_EXTRA_INDEX_URL=$PIP_EXTRA_INDEX_URL ."
For some reason it is not working as intended, and RUN echo $PIP_EXTRA_INDEX_URL returns nothing.
What is wrong?
You should add ARG to your Dockerfile. Your Dockerfile should look like this:
FROM python:3.6
ARG PIP_EXTRA_INDEX_URL
# YOU CAN ALSO SET A DEFAULT VALUE:
# ARG PIP_EXTRA_INDEX_URL=DEFAULT_VALUE
RUN echo "PIP_EXTRA_INDEX_URL = $PIP_EXTRA_INDEX_URL"
# you could also use braces - ${PIP_EXTRA_INDEX_URL}
WORKDIR /src/mylib
COPY . ./
RUN pip install .
If you want to know more, take a look at this article.
