Docker: Add a valid entrypoint for multiple Python scripts - python

Hello, I have to build a Docker image for the following bioinformatics tool: https://github.com/CAMI-challenge/CAMISIM. Their Dockerfile works but takes a long time to build, and I would like to build my own, slightly differently, to learn. I face an issue: there are several Python scripts that I should be able to choose to run, not only a main one. If I add one script in particular as an ENTRYPOINT, then the behavior isn't exactly what I should have.
The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
#COPY ./install_docker.sh ./
#RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENTRYPOINT ["python3"]
ENV PATH="/CAMISIM/:${PATH}"
This yields:
sudo docker run camisim:latest metagenomesimulation.py --help
python3: can't open file 'metagenomesimulation.py': [Errno 2] No such file or directory
Adding that script as an ENTRYPOINT after python3 lets me use it, but with 2 drawbacks: I cannot use another script (I could build a second Docker image, but that would be a bad solution), and it outputs:
ERROR: 0
usage: python metagenomesimulation.py configuration_file_path
#######################################
# MetagenomeSimulationPipeline #
#######################################
Pipeline for the simulation of a metagenome
optional arguments:
-h, --help show this help message and exit
-silent, --silent Hide unimportant Progress Messages.
-debug, --debug_mode more information, also temporary data will not be deleted
-log LOGFILE, --logfile LOGFILE
output will also be written to this log file
optional config arguments:
-seed SEED seed for random number generators
-s {0,1,2}, --phase {0,1,2}
available options: 0,1,2. Default: 0
0 -> Full run,
1 -> Only Comunity creation,
2 -> Only Readsimulator
-id DATA_SET_ID, --data_set_id DATA_SET_ID
id of the dataset, part of prefix of read/contig sequence ids
-p MAX_PROCESSORS, --max_processors MAX_PROCESSORS
number of available processors
required:
config_file path to the configuration file
You can see there is an error that shouldn't be there; it actually does not use the help flag. The original Dockerfile is:
FROM ubuntu:20.04
RUN apt update
RUN apt install -y python3 python3-pip perl libncursesw5
RUN perl -MCPAN -e 'install XML::Simple'
ADD requirements.txt /requirements.txt
RUN cat requirements.txt | xargs -n 1 pip install
ADD *.py /usr/local/bin/
ADD scripts /usr/local/bin/scripts
ADD tools /usr/local/bin/tools
ADD defaults /usr/local/bin/defaults
WORKDIR /usr/local/bin
ENTRYPOINT ["python3"]
It works, but it shows the same error as above, so not quite. Said error is not present when using the tool outside of Docker. Last time I made a Docker image I just pulled the git repo and added the main .sh script as an ENTRYPOINT, and everything worked despite being more complex (see https://github.com/Louis-MG/Metadbgwas).
Why would I need to ADD and move everything? I added the git folder to the PATH, so why can't the scripts be found? How is it different from the Metadbgwas image?

In your first setup, you start in the image root directory / and run git clone to check out the repository into /CAMISIM. You never change the current directory, though, so when you try to run python3 metagenomesimulation.py --help it's looking in / and not /CAMISIM, hence the "not found" error.
You can fix this just by changing the current directory. At any point after you check out the repository, run
WORKDIR /CAMISIM
You should also delete the ENTRYPOINT line. For each of the scripts you could run as a top-level entry point, check two things:
Is it executable? If you ls -l metagenomesimulation.py, are there x bits in the permission listing? If not, on the host system, run chmod +x metagenomesimulation.py and commit that to source control. (Or you could RUN chmod ... in the Dockerfile if you really can't change the repository.)
Does it have a "shebang" line? The very first line of the script should be
#!/usr/bin/env python3
If both of these things are true, then you can just run ./metagenomesimulation.py without explicitly saying python3; since you add the directory to $PATH as well, you can probably run it without specifying the ./... file location.
(Probably deleting the ENTRYPOINT line on its own is enough, given that ENV PATH setting, but your script still might be confused by starting up in the wrong directory.)
The long "help" output just suggests to me that the script is expecting a configuration file name as a parameter and you haven't provided it, or else you've repeated the script name in both the entrypoint and command parts of the container command string.

In the end very little was required, and the original Dockerfile was correct; the same error is displayed either way, and it is due to the script itself.
What was missing was a link to the interpreter, so I could remove the ENTRYPOINT and have the scripts interpreted via their shebang lines instead of having python3 look for them in its working directory. The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENV PATH="/CAMISIM:${PATH}"
Trying WORKDIR as suggested instead of the PATH yielded an error.
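For reference, usage then looks like this; the script is chosen at run time (the second file name is a placeholder, not necessarily a real CAMISIM script):
docker run camisim:latest metagenomesimulation.py --help
docker run camisim:latest some_other_script.py --help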

Related

Is it possible to share a volume with 2 docker containers?

I can't run 2 containers together, whereas I can run each one of them separately.
I have this 1st container/image related to this Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test1.py /app/container1/test1.py
WORKDIR /app/
CMD python3 container1/test1.py
I have this 2nd container/image related to this Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test2.py /app/container2/test2.py
WORKDIR /app/
CMD python3 container2/test2.py
No issues creating the images:
docker image build ./authentif -t test1:latest
docker image build ./authoriz -t test2:latest
When I run the 1st container with this command:
docker container run -it --network my_network --name test1_container \
  --mount type=volume,src=my_volume,dst=/app -e LOG=1 \
  --rm test1:latest
it works.
And if I want to check my volume:
sudo ls /var/lib/docker/volumes/my_volume/_data
I can see data in my volume
However, when I want to run the 2nd container:
docker container run -it --network my_network --name test2_container \
  --mount type=volume,src=my_volume,dst=/app -e LOG=1 \
  --rm test2:latest
I have this error:
python3: can't open file '/app/container2/test2.py': [Errno 2] No such file or directory
If I delete everything and start over: if I start by running the 2nd container it works, but then if I want to run the 1st container, I have the error again.
Why is that?
In my container1, let's assume that my Python script writes data to a file, for example:
import os
print("test111111111")
if os.environ.get('LOG') == "1":
    print("1111111")
    with open('record.log', 'a') as file:
        file.write("file11111")
I can't reproduce your issue. When I start 2 containers using
docker run -d --rm -v myvolume:/app --name container1 debian tail -f /dev/null
docker run -d --rm -v myvolume:/app --name container2 debian tail -f /dev/null
and then do
docker exec container1 /bin/sh -c 'echo hello > /app/hello.txt'
docker exec container2 cat /app/hello.txt
it prints out 'hello' as expected.
You are mounting the volume over /app, the directory that contains your application code. That hides the code and replaces it with something else.
The absolute best approach here, if you can handle it, is to avoid sharing files at all. Keep the data somewhere like a relational database (which may be stateful). Don't mount anything on to your containers. Especially if you're looking forward to a clustered environment like Kubernetes, sharing files can be surprisingly tricky.
If you can't get rid of the shared directory, then put it somewhere other than /app. You might need to configure the alternate directory using an environment variable.
docker container run ... \
  --mount type=volume,src=my_volume,dst=/data \
  ...
(note dst=/data, not /app)
What's actually happening in your setup is that Docker has a feature that copies the contents of the image into an empty named volume on first use. This only happens if the volume is completely empty, only with a named Docker volume and not a bind mount, and not on other container systems like Kubernetes. (I'd discourage actually relying on this behavior.)
So when you run the first container, it sees that my_volume is empty and copies the test1 image's /app into it; then the container sees the code it expects in /app and it apparently works fine. The second container sees my_volume is non-empty, so the volume contents (with the first image's code) hide what was in the second image. I'd expect that, starting from scratch, whichever of the two containers you started first would work but not the other, and if you change the code in the working image, a new container won't see that change (it will use the code out of the volume).
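A minimal sketch of that environment-variable approach, mirroring the logging script above (the variable name DATA_DIR is invented for illustration):
import os

# where to write shared data; default to the volume mount point
data_dir = os.environ.get('DATA_DIR', '/data')
with open(os.path.join(data_dir, 'record.log'), 'a') as file:
    file.write("file11111\n")
Started with --mount type=volume,src=my_volume,dst=/data -e DATA_DIR=/data, both containers share /data while each keeps its own code in /app.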

Does pyinstaller have any parameters like gcc -static?

I have a similar question to this: Is there a way to compile a Python program to binary and use it with a scratch Dockerfile?
On that page, I saw someone say that a C application runs well when compiled with -static.
So I have a new question: does pyinstaller have any parameter like gcc -static to make a Python application run well in a scratch Docker image?
From the comments on the question Docker Minimal Image PyInstaller Binary File?, I got links about how to make a Python binary static so that, like the Go application demo, it can say hello world in a scratch image.
So I made a single, easy demo, app.py:
print("test")
Then I do a docker build with this Dockerfile:
FROM bigpangl/python:3.6-slim AS complier
WORKDIR /app
COPY app.py ./app.py
RUN apt-get update \
&& apt-get install -y build-essential patchelf \
&& pip install staticx pyinstaller \
&& pyinstaller -F app.py \
&& staticx /app/dist/app /tu2k1ed
FROM scratch
WORKDIR /
COPY --from=complier /tu2k1ed /
COPY --from=complier /tmp /tmp
CMD ["/tu2k1ed"]
I get the image below, just 7.22M (I am not sure if you can see the pic):
Trying to run it with docker run test succeeds.
PS: from my tests:
the CMD must be written as ["xxx"], not xxx directly.
the /tmp directory is required in the demo.
other Python applications were not tested, just the demo code that prints.
The -F (an alias for --onefile) parameter should do what you are looking for. You'll likely want to take a look at your spec file and tweak accordingly.
Using --onefile will compile it into (you guessed it) one file, and you can include binaries with the --add-binary parameter.
These pages in the docs may have some useful details on all of the parameters: https://pyinstaller.readthedocs.io/en/stable/spec-files.html#adding-binary-files
https://pyinstaller.readthedocs.io/en/stable/usage.html
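For example, a sketch of those flags together (the .so path is a placeholder):
pyinstaller --onefile --add-binary '/usr/lib/libfoo.so:.' app.py
# output lands in dist/app as a single self-extracting executable
Note that --onefile alone does not statically link libc, which is why the Dockerfile above still runs staticx on the PyInstaller output before copying it into the scratch stage.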

Docker file for running a Python program with parameters

I'm new to Docker. I have a Python program that I run in the following way.
python main.py --s=aws --d=scylla --p=4 --b=15 --e=local -w
Please note the double hyphen -- for the first four parameters and the single hyphen - for the last one.
I'm trying to run this inside a Docker container. Here's my Dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python","app.py","--s","(source)", "--d","(database)","--w", "(workers)", "--b", "(bucket)", "--e", "(env)", "-w"]
I'm not sure if this will work, as I don't know exactly how to test and run it. I want to run the Docker image with the following port mappings.
docker run --name=something -p 9042:9042 -p 7000:7000 -p 7001:7001 -p 7199:7199 -p 9160:9160 -p 9180:9180 -p 10000:10000 -d user/something
How can I correct the Dockerfile? And once I build an image, how do I run it?
First, fix the Dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
# optional: it is better to chain commands to reduce the number of created layers
RUN pip install --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
# mandatory: "--s=smth" is one argument
# optional: it's better to use environment variables for source, database etc
CMD ["python","app.py","--s=(source)", "--d=(database)","--w=(workers)", "--b=(bucket)", "--e=(env)", "-w"]
then, build it:
docker build -f "<dockerfile path>" -t "<tag to assign>" "<build dir (eg .)>"
Then, you can just use the assigned tag as an image name:
docker run ... <tag assigned>
UPD: I got it wrong the first time; the tag should be used in place of the image name, not the container (instance) name.
UPD2: In the first response, I assumed you were going to hardcode the parameters, and only mentioned that it is better to use environment variables. Here is an example of how to do that.
The better option is to read environment variables directly in your Python script instead of command line arguments. The quickest and dirtiest way, though, is to keep the arguments and replace CMD with something like:
CMD ["sh", "-c", "python app.py --s=$SOURCE --d=$DATABASE --w=$WORKERS ... -w"]
(it is common to use CAPS names for environment variables)
It will be better, however, to read environment variables directly in your Python script instead of command line arguments, or use them as defaults:
# somewhere in app.py
import os
...
DATABASE = os.environ.get('DATABASE', default_value) # can default to args.d
SOURCE = os.environ.get('SOURCE') # None by default
# etc
Don't forget to update the Dockerfile as well in this case:
# Dockerfile:
...
CMD ["python","app.py"]
Finally, pass environment variables to your run command:
docker run --name=something ... -e DATABASE=<dbname> -e SOURCE=<source> ... <tag assigned at build>
There are more ways to pass environment variables, I'll just refer to the official documentation here:
https://docs.docker.com/compose/environment-variables/
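One more variant worth knowing is docker run --env-file, sketched here with an invented file name app.env (one VAR=value per line, e.g. DATABASE=..., SOURCE=...):
docker run --name=something --env-file ./app.env -d <tag assigned at build>
It keeps the run command short when there are many variables.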

Retrieving .csv file written by docker python program

I am trying to access the .csv file which my dockerized python program is making.
Here is my docker file:
# Use an official Python runtime as a parent image
FROM python:3.7
# Set the working directory to /BotCloud
WORKDIR /BotCloud
# Copy the current directory contents into the container at /BotCloud
ADD . /BotCloud
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
RUN pip install ta-lib
# Run BotLiveSnake.py when the container launches
CMD ["python","-u", "BotLiveSnake.py"]
Here is the code snippet from my Python file BotLiveSnake.py:
def write(string):
    with open('outfile.csv', 'w') as f:
        f.write(string)
        f.write("\n")
write(str("Starting Time: "+datetime.datetime.utcfromtimestamp(int(df.tail(1)['Open Time'])/10**3).strftime('%Y-%m-%d,%H:%M:%SUTC'))+",Trading:"+str(pairing)+",Starting Money:"+str(money)+",SLpercent:"+str(SLpercent)+",TPpercent,"+str(TPpercent))
Running my Python program locally, outfile.csv is created in the same folder as the program. However, with Docker, I'm not sure where this outfile ends up. Any help would be appreciated.
In general, references to file paths that don't start with / are always interpreted relative to the current working directory. Unless you've changed that somehow (os.chdir, an entrypoint script running cd, the docker run -w option) that will be the WORKDIR you declared in the Dockerfile.
So: your file should be in /BotCloud/outfile.csv, in the container's filesystem space.
Note that containers have their own isolated filesystem space that is destroyed when the container is deleted. If the primary way your application communicates is via files, it may be much easier to use a non-Docker mechanism, such as Python virtual environments, to isolate your application from the rest of the system. You can mount a host directory into the container with docker run -v, or docker cp files out. (Note with docker run -v in particular it is helpful if the data is written to someplace that isn't the same directory as your application.)
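Concretely, two ways to get at the file (container and image names below are placeholders):
# copy the file out of a (possibly stopped) container after a run:
docker cp mybot:/BotCloud/outfile.csv .
# or bind-mount a host directory and have the script write there instead:
docker run -v "$PWD/out:/data" mybot-image
The second form assumes the script is changed to write /data/outfile.csv.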

What should I put for Docker CMD and ENTRYPOINT for Flask app running "python myapp.py images/*"

I am trying to run a Flask app using Docker.
Normally, to execute the Flask app, I run this inside of my Terminal:
python myapp.py images/*
I am unsure of how to convert that to Docker CMD syntax (or if I need to edit ENTRYPOINT).
Here is my Dockerfile:
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential hdf5-tools
COPY . ~/myapp/
WORKDIR ~/myapp/
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["myapp.py"]
Inside of requirements.txt:
flask
numpy
h5py
tensorflow
keras
When I run the docker image:
person#person:~/Projects/$ docker run -d -p 5001:5000 myapp
19645b69b68284255940467ffe81adf0e32a8027f3a8d882b7c024a10e60de46
docker ps:
Up 24 seconds 0.0.0.0:5001->5000/tcp hardcore_edison
When I go to localhost:5001 I get no response.
Is it an issue with my CMD parameter?
EDIT:
New Dockerfile:
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential hdf5-tools
COPY . ~/myapp/
WORKDIR ~/myapp/
EXPOSE 5000
RUN pip install -r requirements.txt
CMD ["python myapp.py images/*.jpg "]
With this new configuration, when I run:
docker run -d -p 5001:5000 myapp
I get:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"python myapp.py images/*.jpg \": stat python myapp.py images/*.jpg : no such file or directory": unknown.
When I run:
docker run -d -p 5001:5000 myapp python myapp.py images/*.jpg
I get the Docker image to run, but now when I go to localhost:5001, it complains that the connection was reset.
I'm glad you've already solved this issue. I put up this answer just for those who still have the same confusion you did about ENTRYPOINT and CMD directives.
In a Dockerfile, ENTRYPOINT and CMD are two similar directives, but there is a strong difference between them. The most important one (at least to me) is that CMD can be overridden at run time but ENTRYPOINT cannot (short of the --entrypoint flag).
To explain this, consider the command below:
docker run -tid --name=container_name image_name [command]
As we can see, command is optional, and if present it overrides the CMD defined in the Dockerfile.
Let's get back to your issue. You have two ways to achieve your purpose:
ENTRYPOINT ["python"] and CMD ["/path/to/myapp.py", "/path/to/images/*.jpg"].
CMD python /path/to/myapp.py /path/to/images/*.jpg. This is mentioned by @David Maze in another answer.
To understand the first one, you may take CMD as arguments for ENTRYPOINT.
A simple example below.
Dockerfile-->
FROM ubuntu:18.04
ENTRYPOINT ["cat"]
CMD ["/etc/hosts"]
Build image named test-cmd-show and start a container from it.
docker run test-cmd-show
This would show the content in /etc/hosts file. And go on...
docker run test-cmd-show /etc/resolv.conf
And this would show us the content of /etc/resolv.conf file. And go on ...
docker run test-cmd-show --help
This would show the help information for command cat.
Fantastic, right?
From here, we could explore this functionality further.
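One more run-time detail: even the ENTRYPOINT can be replaced, using the --entrypoint flag:
docker run --entrypoint /bin/ls test-cmd-show /etc
This drops cat entirely and runs /bin/ls /etc instead; the trailing argument still arrives as the command.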
Add a relevant question: What's the difference between CMD and ENTRYPOINT?
The important thing is that you need a shell to expand your command line, so I'd write:
CMD python myapp.py images/*
When you just write CMD like this (without the not-really-JSON brackets and quotes) Docker will implicitly feed the command line through a shell for you.
(You also might consider changing your application to support taking a directory name as configuration in some form and “baking it in” to your application, if these images will be in a fixed place in the container filesystem.)
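To illustrate that last idea, a hypothetical sketch (IMAGES_DIR is an invented name, not part of Flask or Docker):
import glob
import os

# fixed default inside the image, overridable per container with -e IMAGES_DIR=...
images_dir = os.environ.get('IMAGES_DIR', '/myapp/images')
image_files = glob.glob(os.path.join(images_dir, '*.jpg'))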
I would only set ENTRYPOINT when (a) you are setting it to a wrapper shell script that does some first-time setup and then exec "$@"; or (b) when you have a FROM scratch image with a static binary and you literally cannot do anything with the container besides run the one binary in it.
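A sketch of such a wrapper, case (a) (the name entrypoint.sh is just a convention; it must be executable and present in the image):
#!/bin/sh
# one-time setup goes here: wait for a database, render config files, etc.
exec "$@"
And in the Dockerfile:
ENTRYPOINT ["./entrypoint.sh"]
CMD ["python", "myapp.py"]
The exec "$@" line replaces the shell with whatever CMD (or the docker run command) supplies.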
One issue I found was that the app wasn't reachable from outside the container. I added this to app.run:
host='0.0.0.0'
According to this:
Deploying a minimal flask app in docker - server connection issues
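For completeness, a minimal sketch of that change (the route and return value are placeholders):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'ok'

if __name__ == '__main__':
    # bind to all interfaces so the published port is reachable from the host
    app.run(host='0.0.0.0', port=5000)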
Next, Docker errors out when you put the glob in CMD's exec form, since there is no shell there to expand it.
So, I removed ENTRYPOINT and CMD and manually added the command to the docker run:
docker run -d -p 5001:5000 myapp python myapp.py images/*.jpg
