Docker container entry point "No such file" error with mounted volume - Python

The image stores the application source code in /app. When running a container off the image without volume mapping, it works just fine.
If I set up a mount point of /app:/opt/test, I get the following error:
python: can't open file 'run.py': [Errno 2] No such file or directory
I can't seem to figure out what exactly the problem is. Can the application source code not live directly in a volume? I need to be able to mount the /app directory to the host and still run the code inside /app, or some alternative.
Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
# Install SCIP requirements
RUN apt-get update && apt-get install -y wget libgfortran4 libblas3 liblapack3 libtbb-dev libgsl-dev libboost-all-dev build-essential g++ python-dev autotools-dev libicu-dev build-essential libbz2-dev libgmp3-dev libreadline-dev
RUN wget https://www.scipopt.org/download/release/SCIPOptSuite-7.0.1-Linux.sh -O scip.sh && chmod +x scip.sh && ./scip.sh --skip-license && mv bin/scip /app/scip
VOLUME ["/app"]
WORKDIR /app
# Install pip requirements
ADD requirements.txt .
RUN python -m pip install -r requirements.txt
ADD . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
ENTRYPOINT ["python", "run.py"]
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# Hypixel API key
ENV API_KEY key
# Bot Discord token
ENV DISCORD_TOKEN token

How to solve it:
Just remove this line from your Dockerfile:
VOLUME ["/app"]
Explanation:
You're creating an unnamed volume in your Dockerfile before copying files into it. So the files added with ADD . /app are not being saved in your image, but in your volume.
When you create a VOLUME in a Dockerfile (as opposed to docker volume create), it is unnamed: Docker assigns it an arbitrary name (69e64d18f338whatever in the example below) and saves its data in /var/lib/docker/volumes/69e64d18f338whatever/_data.
So if you create a container with docker run without mounting that volume, you can't find the data in your image.
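You can see this in practice by inspecting a container started from the image (a minimal sketch; my-python-app stands in for your image name):
docker run -d --name test my-python-app
docker inspect -f '{{ json .Mounts }}' test
# [{"Type":"volume","Name":"69e64d18f338...","Destination":"/app",
#   "Source":"/var/lib/docker/volumes/69e64d18f338.../_data",...}]
The Name field is the arbitrary identifier mentioned above, and Source is where the volume's data actually lives on the host.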
Some good practices:
A good practice is to use VOLUME in a Dockerfile only for logs and other volatile data.
VOLUME in a Dockerfile is not recommended for configuration either; named volumes are a better fit there (see the sketch after this list).
Binaries used as the entrypoint or command should be baked directly into the image, never placed in a volume.
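A named volume for configuration is created and attached at run time rather than declared in the Dockerfile; a minimal sketch, where app-config, /etc/myapp and my-python-app are illustrative names:
docker volume create app-config
docker run -v app-config:/etc/myapp my-python-app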

I think your mount point is reversed.
The correct syntax is host_folder:container_folder, not the other way around.
Try mounting /opt/test:/app instead.
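That is, assuming the image is called my-python-app:
docker run -v /opt/test:/app my-python-app
Note that the host directory /opt/test then needs to contain run.py, since a bind mount hides whatever the image itself has in /app.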

Related

Docker: Add a valid entrypoint for multiple Python scripts

Hello, I have to build a Docker image for the following bioinformatics tool: https://github.com/CAMI-challenge/CAMISIM. Their Dockerfile works but takes a long time to build, and I would like to build my own, slightly differently, to learn. I'm facing an issue: there are several Python scripts that I should be able to choose to run, not only a main one. If I add one script in particular as an ENTRYPOINT, then the behavior isn't exactly what I should have.
The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
#COPY ./install_docker.sh ./
#RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENTRYPOINT ["python3"]
ENV PATH="/CAMISIM/:${PATH}"
This yields:
sudo docker run camisim:latest metagenomesimulation.py --help
python3: can't open file 'metagenomesimulation.py': [Errno 2] No such file or directory
Adding that script as an ENTRYPOINT after python3 allows me to use it, with two drawbacks: I cannot use another script (I could build a second Docker image, but that would be a bad solution), and it outputs:
ERROR: 0
usage: python metagenomesimulation.py configuration_file_path

#######################################
#    MetagenomeSimulationPipeline     #
#######################################

Pipeline for the simulation of a metagenome

optional arguments:
  -h, --help            show this help message and exit
  -silent, --silent     Hide unimportant Progress Messages.
  -debug, --debug_mode  more information, also temporary data will not be deleted
  -log LOGFILE, --logfile LOGFILE
                        output will also be written to this log file

optional config arguments:
  -seed SEED            seed for random number generators
  -s {0,1,2}, --phase {0,1,2}
                        available options: 0,1,2. Default: 0
                        0 -> Full run,
                        1 -> Only Comunity creation,
                        2 -> Only Readsimulator
  -id DATA_SET_ID, --data_set_id DATA_SET_ID
                        id of the dataset, part of prefix of read/contig sequence ids
  -p MAX_PROCESSORS, --max_processors MAX_PROCESSORS
                        number of available processors

required:
  config_file           path to the configuration file
You can see there is an error that shouldn't be there; it actually does not use the help flag. The original Dockerfile is:
FROM ubuntu:20.04
RUN apt update
RUN apt install -y python3 python3-pip perl libncursesw5
RUN perl -MCPAN -e 'install XML::Simple'
ADD requirements.txt /requirements.txt
RUN cat requirements.txt | xargs -n 1 pip install
ADD *.py /usr/local/bin/
ADD scripts /usr/local/bin/scripts
ADD tools /usr/local/bin/tools
ADD defaults /usr/local/bin/defaults
WORKDIR /usr/local/bin
ENTRYPOINT ["python3"]
It works, but it shows the error above, so not quite. Said error is not present when using the tool outside of Docker. Last time I made a Docker image I just pulled the git repo and added the main .sh script as an ENTRYPOINT, and everything worked despite being more complex (see https://github.com/Louis-MG/Metadbgwas).
Why would I need ADD and to move everything? I added the git folder to the PATH, so why can't the scripts be found? How is this different from the Metadbgwas image?
In your first setup, you start in the image root directory / and run git clone to check out the repository into /CAMISIM. You never change the current directory, though, so when you try to run python3 metagenomesimulation.py --help it's looking in / and not /CAMISIM, hence the "not found" error.
You can fix this just by changing the current directory. At any point after you check out the repository, run
WORKDIR /CAMISIM
You should also delete the ENTRYPOINT line. For each of the scripts you could run as a top-level entry point, check two things:
Is it executable? If you ls -l metagenomesimulation.py, are there x's in the permission listing? If not, on the host system run chmod +x metagenomesimulation.py and commit that to source control. (Or you could RUN chmod ... in the Dockerfile if you really can't change the repository.)
Does it have a "shebang" line? The very first line of the script should be
#!/usr/bin/env python3
If both of these things are true, then you can just run ./metagenomesimulation.py without explicitly saying python3; and since you add the directory to $PATH as well, you can probably run it without the ./ prefix.
(Probably deleting the ENTRYPOINT line on its own is enough, given that ENV PATH setting, but your script still might be confused by starting up in the wrong directory.)
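Putting those pieces together, a sketch of the revised Dockerfile might look like this (same packages as your original, with the ENTRYPOINT dropped and the working directory set):
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
WORKDIR /CAMISIM
ENV PATH="/CAMISIM:${PATH}"
# no ENTRYPOINT: the script to run is passed as the command, e.g.
# docker run camisim:latest ./metagenomesimulation.py --help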
The long "help" output just suggests to me that the script is expecting a configuration file name as a parameter and you haven't provided it, or else you've repeated the script name in both the entrypoint and command parts of the container command string.
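For instance, if the image kept ENTRYPOINT ["python3", "metagenomesimulation.py"] and you also named the script on the command line, Docker concatenates the entrypoint and the command:
docker run camisim:latest metagenomesimulation.py --help
# actually runs: python3 metagenomesimulation.py metagenomesimulation.py --help
The script then sees its own filename as a stray positional argument, which might explain output like the "ERROR: 0" line above.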
In the end very little was required and the original Dockerfile was correct; the same error is displayed anyway, as it is due to the script itself.
What was missing was a link to the interpreter, so I could remove the ENTRYPOINT and have the script interpreted directly instead of having python look for it in its own path. The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENV PATH="/CAMISIM:${PATH}"
Trying WORKDIR as suggested instead of the PATH yielded an error.
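For what it's worth, this is presumably how the pieces fit together (assuming the CAMISIM scripts are executable and start with a #!/usr/bin/env python shebang):
docker run camisim:latest metagenomesimulation.py --help
# 1. the script is found via PATH (/CAMISIM)
# 2. its shebang asks env for "python"
# 3. env now resolves it, thanks to: ln -s /usr/bin/python3 /usr/bin/python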

Streamlit showing me "Welcome to Streamlit" message when executing it with Docker

I'm trying to run a Docker container created from this Dockerfile
FROM selenium/standalone-chrome
WORKDIR /app
# Install dependencies
USER root
RUN apt-get update && apt-get install python3-distutils -y
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
RUN pip install selenium==4.1
# Copy src contents
COPY /src /app/
# Expose the port
EXPOSE 8501
# Execution
ENTRYPOINT [ "streamlit", "run" ]
CMD ["app.py"]
Building the image works, but when I run it, I obtain the following message:
👋 Welcome to Streamlit!
If you're one of our development partners or you're interested in getting
personal technical support or Streamlit updates, please enter your email
address below. Otherwise, you may leave the field blank.
Email: 2022-06-06 09:20:27.690
Therefore I am not able to press Enter and continue; the execution halts. Do you know how I should change my Dockerfile so that it directly executes the streamlit run command and gets past this problem?
That welcome message is displayed when there is no ~/.streamlit/credentials.toml file with the following content:
[general]
email=""
You can either create the above file (.streamlit/credentials.toml) within your app directory and copy it into the container image in your Dockerfile, or create the file with RUN commands along these lines:
mkdir -p ~/.streamlit/
echo "[general]" > ~/.streamlit/credentials.toml
echo "email = \"\"" >> ~/.streamlit/credentials.toml
I would suggest the former approach to reduce the number of layers and thereby reduce the final image size.
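For the COPY approach, a minimal sketch (assuming a .streamlit/credentials.toml next to the Dockerfile; adjust the destination if the container runs as a non-root user with a different home directory):
COPY .streamlit/credentials.toml /root/.streamlit/credentials.toml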

Docker & Oracle Autonomous DB - TLS connection NOT working

Before starting to explain the issue, all the code I'm going to mention is also publicly available in my repository.
I'm trying to connect to my Autonomous Database instance without a wallet, by disabling mTLS and using TLS instead. I have configured it properly, so I can connect with only a username, password and DSN string.
If I execute this code from my python environment, I can successfully see the result:
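(The snippet isn't reproduced here; as a rough sketch, a wallet-less TLS connection with the python-oracledb driver looks like the following, where the user, password and DSN are placeholders to be taken from the TLS connection strings in the OCI console.)
import oracledb

conn = oracledb.connect(
    user="ADMIN",
    password="<password>",
    dsn="<tls-connection-string-from-the-oci-console>",
)
with conn.cursor() as cur:
    cur.execute("SELECT sysdate FROM dual")
    print(cur.fetchone())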
However, when I execute this same exact code inside a Docker container, it fails for some reason, saying "failure to open file". This is weird, I think, since I shouldn't be referencing any file, having avoided mutual TLS as the authentication mechanism for the Autonomous Database.
The Dockerfile is as follows:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM ghcr.io/oracle/oraclelinux8-instantclient:21
#RUN dnf -y install oracle-instantclient-release-el8 && \
# dnf -y install oracle-instantclient-basic oracle-instantclient-devel oracle-instantclient-sqlplus && \
# rm -rf /var/cache/dnf
RUN yum -y update && yum install -y python3 python3-pip
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python3 -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Wallet is no longer needed as we're using TLS to connect to ADB. See src/testing_db_tls.py for more info.
COPY wallet /home/appuser/wallets/Wallet_forza
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN useradd appuser && chown -R appuser /app
USER appuser
EXPOSE 65530/udp
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
ENTRYPOINT "python3" "src/oracledb.py"
Note that this Dockerfile is otherwise correct: executing the connection code inside the Docker container with a wallet, and the corresponding code referencing the wallet location, has always worked.
So my question is: does Docker have something I'm missing that impedes connecting to a database through TLS, or some other restrictive parameter preventing me from connecting properly inside a Docker container? The executed code is exactly the same, and both inside and outside the Docker image I have Oracle's Instant Client.

Container: VOLUME in Dockerfile does not work

I'm not able to launch a new jupyter-notebook from my project.
Below is my Dockerfile.
FROM python:3.9.0
ARG WORK_DIR=/opt/dir1
RUN apt-get update && apt-get install cron -y && apt-get install -y default-jre
# Install python libraries
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
WORKDIR $WORK_DIR
EXPOSE 8888
# Copy etl code
# copy code on container under your workdir "/opt/dir1"
COPY . .
ENTRYPOINT ["sh", "-c"]
CMD ["jupyter-notebook --ip 0.0.0.0 --no-browser --allow-root]
VOLUME /home/data/dir1/
Then in my terminal I did:
#build
docker build -t my-python-app .
#run
docker run -it -p 8888:8888 my-python-app
#in container i did
jupyter notebook --ip 0.0.0.0 --no-browser --allow-root
I think my VOLUME doesn't work, because when I modify a file in the container, nothing happens on the host in /home/data/dir1/.
Does anyone know why, and how to solve it?
You can use docker run -it <image name> /bin/bash
and try to navigate to the folder you have set the volume to, to see if an error occurs, and check permissions.
When using volumes, check that you can access the folder on the host system. Afterwards, check which user you are; Docker allows you to pass your USER_ID and GROUP_ID into the container.
From there you can use the same user and group as on the host system. If you want to access the same folder on the host system, you can otherwise run into permission problems.
More information on this on the following webpage:
https://jtreminio.com/blog/running-docker-containers-as-current-host-user/
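As a sketch with the names from the question (the bind mount and --user flags are assumptions about the desired setup):
docker run -it -p 8888:8888 \
--user "$(id -u):$(id -g)" \
-v /home/data/dir1:/opt/dir1 \
my-python-app
Files created under /opt/dir1 in the container then appear in /home/data/dir1 on the host, owned by your host user.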
Maybe this could help you: try another way to attach the volume to the container, for example by adding it in the docker run command at container creation. I've worked with Docker and I've never added volumes the way you did (which doesn't mean your way is wrong).
Here are two examples of working with volumes; I recommend the second link:
Docker official docs, working with volumes example
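A minimal sketch of both forms, using the paths from the question:
# bind mount: the host directory appears inside the container
docker run -it -p 8888:8888 -v /home/data/dir1:/opt/dir1 my-python-app
# named volume: managed by Docker, survives container removal
docker volume create dir1-data
docker run -it -p 8888:8888 -v dir1-data:/opt/dir1 my-python-app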

Retrieving .csv file written by docker python program

I am trying to access the .csv file which my dockerized python program is making.
Here is my docker file:
# Use an official Python runtime as a parent image
FROM python:3.7
# Set the working directory to /BotCloud
WORKDIR /BotCloud
# Copy the current directory contents into the container at /BotCloud
ADD . /BotCloud
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
RUN pip install ta-lib
# Run BotLiveSnake.py when the container launches
CMD ["python","-u", "BotLiveSnake.py"]
Here is the code snippet from my Python file BotLiveSnake.py:
def write(string):
    with open('outfile.csv', 'w') as f:
        f.write(string)
        f.write("\n")

write(str("Starting Time: "+datetime.datetime.utcfromtimestamp(int(df.tail(1)['Open Time'])/10**3).strftime('%Y-%m-%d,%H:%M:%SUTC'))+",Trading:"+str(pairing)+",Starting Money:"+str(money)+",SLpercent:"+str(SLpercent)+",TPpercent,"+str(TPpercent))
Running my Python program locally, outfile.csv is created in the same folder as the program. However, with Docker, I'm not sure where this outfile ends up. Any help would be appreciated.
In general, references to file paths that don't start with / are always interpreted relative to the current working directory. Unless you've changed that somehow (os.chdir, an entrypoint script running cd, the docker run -w option) that will be the WORKDIR you declared in the Dockerfile.
So: your file should be in /BotCloud/outfile.csv, in the container's filesystem space.
Note that containers have their own isolated filesystem space that is destroyed when the container is deleted. If the primary way your application communicates is via files, it may be much easier to use a non-Docker mechanism, such as Python virtual environments, to isolate your application from the rest of the system. You can mount a host directory into the container with docker run -v, or docker cp files out. (Note with docker run -v in particular it is helpful if the data is written to someplace that isn't the same directory as your application.)
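Concretely, two ways to get outfile.csv out (a sketch; the container and image names are placeholders):
# copy it out of a running or stopped container:
docker cp <container>:/BotCloud/outfile.csv .
# or bind-mount a host directory when starting the container
# (the program would then need to write into /BotCloud/data instead):
docker run -v "$PWD/data:/BotCloud/data" <image>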
