docker-compose: Why is my python application being invoked here?

I've been scratching my head for a while with this. I have the following Dockerfile for my python application:
# Use an official Python runtime as a parent image
FROM frankwolf/rpi-python3
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
RUN chmod 777 docker-entrypoint.sh
# Install any needed packages specified in requirements.txt
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
# Run __main__.py when the container launches
CMD ["sudo", "python3", "__main__.py", "-debug"] # Not sure if I need sudo here
docker-compose file:
version: "3"
services:
mongoDB:
restart: unless-stopped
volumes:
- "/data/db:/data/db"
ports:
- "27017:27017"
- "28017:28017"
image: "andresvidal/rpi3-mongodb3:latest"
mosquitto:
restart: unless-stopped
ports:
- "1883:1883"
image: "mjenz/rpi-mosquitto"
FG:
privileged: true
network_mode: "host"
depends_on:
- "mosquitto"
- "mongoDB"
volumes:
- "/home/pi:/home/pi"
#image: "arkfreestyle/fg:v1.8"
image: "test:latest"
entrypoint: /app/docker-entrypoint.sh
restart: unless-stopped
And this is what docker-entrypoint.sh looks like:
#!/bin/sh
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    echo "Creating .initialized"
    # Create .initialized hidden file
    touch /home/pi/.initialized
else
    echo "Initialized already!"
    sudo python3 __main__.py -debug
fi
Here's what I am trying to do:
(This stuff already works)
1) I need a docker image which runs my python application when I run it in a container. (this works)
2) I need a docker-compose file which runs 2 services + my python application, BUT before running my python application I need to do some initialization work. For this I created a shell script, docker-entrypoint.sh. I want to do this initialization work ONLY ONCE, when I deploy my application on a machine for the first time, so I create a .initialized hidden file which I use as a check in my shell script.
I read that using entrypoint in a docker-compose file overrides any entrypoint/cmd given in the Dockerfile. That's why, in the else portion of my shell script, I'm manually running my code with "sudo python3 __main__.py -debug"; this else portion works fine.
(This is the main question)
In the if portion, I do not run my application in the shell script. I've tested the shell script separately, and both the if and else branches work as I expect. But when I run "sudo docker-compose up", the first time my shell script hits the if portion it echoes the two statements, creates the hidden file, and THEN RUNS MY APPLICATION. The console output for the application appears in purple/pink/mauve, while the other two services print their logs in yellow and cyan. I'm not sure if the colors matter, but normally my application logs are always green; in fact the first two echoes, "Initializing" and "Creating .initialized", are also green, so I thought I'd mention this detail. After those two echoes, my application mysteriously starts and logs console output in purple...
Why/how is my application being invoked in the if branch of the shell script?
(This only happens when I run through docker-compose, not when I just run the shell script with sh docker-entrypoint.sh.)

Problem 1
Using ENTRYPOINT and CMD at the same time has some surprising effects: the CMD is passed as arguments to the ENTRYPOINT, and overriding entrypoint in docker-compose also discards the image's CMD.
Problem 2
Here is what happens to your container (a possible fix is sketched after this list):
It is started the first time. The .initialized file does not exist.
The if case is executed. The file is created.
The script and therefore the container ends.
The restart: unless-stopped option restarts the container.
The .initialized file exists now, the else case is run.
python3 __main__.py -debug is executed.
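If the intent is "initialize only once, but run the application on every start", one way to restructure the script is to move the application start out of the if/else so it runs in both cases. A minimal sketch, keeping your file-based check (and assuming the app does not really need sudo, see below):
#!/bin/sh
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    echo "Creating .initialized"
    # one-time initialization work goes here
    touch /home/pi/.initialized
else
    echo "Initialized already!"
fi
# Start the application in both cases; exec replaces the shell so the
# Python process becomes PID 1 and receives stop signals directly.
exec python3 __main__.py -debug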
By the way, the USER instruction in the Dockerfile or the user option in Docker Compose is a better option than sudo.
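For example, a sketch of the Compose variant, assuming the application does not actually need root (your service sets privileged: true, so it may) and that the pi user's UID:GID is 1000:1000:
# under services: in your compose file
FG:
  image: "test:latest"
  entrypoint: /app/docker-entrypoint.sh
  user: "1000:1000"   # run the entrypoint and the app as this UID:GID instead of root + sudo
The Dockerfile equivalent is a USER instruction after creating an unprivileged user, with the sudo prefix dropped from the CMD and from the entrypoint script.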

Related

command in docker-compose.yaml throwing errors

version: "3.4"
services:
app:
build:
context: ./
target: project_build
image: sig-project
environment:
PROJECT_NAME: "${PROJECT_NAME:-NullProject}"
command: /bin/bash -c "init_project.sh"
ports:
- 7777:7777
expose:
- 7777
volumes:
- ./$PROJECT_NAME:/app/$PROJECT_NAME
- .:/home/jovyan/work
working_dir: /app
entrypoint: "jupyter notebook --ip=0.0.0.0 --port=7777 --allow-root --no-browser"
Above is my docker-compose.yaml.
The command doesn't run; I get this error:
Unrecognized alias: 'c', it will have no effect
Furthermore, it runs the jupyter notebook out of /bin instead of /app.
If I change the command to
command: "init_project.sh"
it fails silently, and if I try to do something more complicated like:
command: python init_project.py
then it gives me this:
No such file or directory: /app/python
note: init_project.sh is just a bash script wrapper for init_project.py
So it seems that, for some reason I don't understand, the commands are run from within the /app directory but without a shell or bash.
I've been hitting my head against the wall trying to figure out what I'm missing here, and I don't know what else to try.
I've found a variety of issues and discussions that are similar, but nothing seems to resolve it.
These are the contents of the working dir /app:
#ls
Dockerfile docker-compose.yaml docker_compose.sh init_project.sh poetry.lock pyproject.toml
Makefile create_poetry_lock.sh docker_build.sh init_project.py install-poetry.py poetry_lock_update.sh test.sh
What am I missing?
Thanks!
Your compose file looks weird to me!
You either have a Docker image that the container will be created from, or you have a Dockerfile that builds the image, and the container is then created from that image.
Based on that:
Why do you have both the image and build attributes in your compose file?
If the image is already available (e.g. postgreSQL, rabbitmq) then you don't need to build anything, just provide the image.
If there's no image, then please also add your Dockerfile to the question.
Why are you trying to run /bin/bash -c?
You can simply use:
bash /path/to/your/shell_script
Lastly, why don't you add your command to the Dockerfile with CMD?
What are init_project.sh and init_project.py? Maybe share the contents of those files as well.
It might also be good to add the tree output so we know from where the different commands are being executed.
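For what it's worth, the error in the question is consistent with how Compose combines these two keys: command is appended to entrypoint as arguments, so /bin/bash -c "init_project.sh" ends up as extra arguments to jupyter notebook (hence the complaint about the unknown alias 'c'). A sketch of one way to untangle it, assuming init_project.sh is meant to run once before Jupyter starts (paths taken from the question's directory listing):
version: "3.4"
services:
  app:
    build:
      context: ./
      target: project_build
    image: sig-project
    environment:
      PROJECT_NAME: "${PROJECT_NAME:-NullProject}"
    ports:
      - 7777:7777
    volumes:
      - ./$PROJECT_NAME:/app/$PROJECT_NAME
      - .:/home/jovyan/work
    working_dir: /app
    # no entrypoint override: run the init script, then start Jupyter in the same shell
    command: /bin/bash -c "./init_project.sh && exec jupyter notebook --ip=0.0.0.0 --port=7777 --allow-root --no-browser"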

Running pytest command outside of docker container fails because container stopped

I use docker selenium grid and pytest to execute tests. What I do now is:
Spin up selenium grid via a Makefile
Spin up the docker container (with a volume pointing to my local PC for the tests). The container also runs the pytest command.
This all works well, except that I would rather split off the second step and be able to run the tests on an already running container. Preferred setup:
Spin up selenium grid + docker container with python+pytest
A command to run the tests (with the container as interpreter)
When I tried to do this, I ran into the issue that the python+pytest container stops running once its commands are done. There is no long-lived process.
Dockerfile
FROM python:3.9.0-alpine
RUN apk add tk
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN ls ..
CMD pytest --junitxml ../r/latest.xml
My docker-compose file looks like:
docker-compose.yml
version: "3.0"
services:
pytest:
container_name: pytest
build:
context: .
dockerfile: Dockerfile
volumes:
- ./t:/t
- ./r:/r
working_dir: /t/
networks:
default:
name: my_local_network #same as selenium grid
It does not 'feel' good to have this pytest command in the container settings itself.
Container shutting down
That's because the CMD pytest --junitxml ../r/latest.xml line will execute once and, when it completes, the container will exit.
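If you want the container to stay up so tests can be triggered later, one common idiom (an assumption on my part, not something from your setup) is to override the command with something long-lived, e.g.:
services:
  pytest:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./t:/t
      - ./r:/r
    working_dir: /t/
    # keep the container alive so tests can be run later with docker exec
    command: tail -f /dev/null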
To run a cmd on an existing container
You can run commands on an existing docker container using this command:
docker exec <container_name> python -m pytest
Where <container_name> would be pytest in your case, since that is what the container is called in your docker-compose.yml file.
See here for more info: https://docs.docker.com/engine/reference/commandline/exec/
Using Make
If you want to extend this to a makefile command:
docker:
	docker-compose up -d

ci-tests: docker
	docker exec <container_name> python -m pytest
To both spin up AND run tests you can use:
make ci-tests
You could run selenium-grid in docker too if you wanted to make this solution completely portable: https://www.conductor.com/nightlight/running-selenium-grid-using-docker-compose/

File created by docker is uneditable on host

I have a Python script that creates a folder and writes a file in that folder. I can open the file and see its contents, but unfortunately I cannot edit it. I tried adding the command RUN chmod -R 777 . but that didn't help either. On the created files and folders I see a lock icon.
I have been able to reproduce the issue with a small demo. The contents are as follows:
demo.py
from pathlib import Path

Path("./created_folder").mkdir(parents=True, exist_ok=True)
with open("./created_folder/dummy.txt", "w") as f:
    f.write("Cannot edit the contents of this file")
Dockerfile
FROM python:buster
COPY . ./data/
WORKDIR /data
RUN chmod -R 777 .
CMD ["python", "demo.py"]
docker-compose.yaml
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
After creating these files, run docker-compose up --build to see the results, then try to edit and save the created file dummy.txt, which should fail.
Any idea how to make sure that the created files can be edited and saved on the host?
EDIT:
I am running docker-compose rootless. I had read that it is not a good idea to run docker with sudo, so I followed the official instructions on adding my user to the docker group, etc.
I actually run the command docker-compose up --build not just docker compose up
I am on Ubuntu 20.04
My username is in both groups:
~$ grep /etc/group -e "docker"
docker:x:999:username
~$ grep /etc/group -e "sudo"
sudo:x:27:username
I tried using the PUID and PGID environment variables... but I still have the same issue.
Current docker-compose file:
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
environment:
- PUID=1000
- PGID=1000
It seems like the Docker group is different from your user group and you're misunderstanding the way Docker RUN works.
Here's what happens when you run docker build:
You pull the python:buster image
You copy the contents of the current directory into the Docker image
You set the working directory to the existing /data directory
You set the permissions of the /data directory to 777
Finally, the CMD is set to indicate what program should run
When you do a docker-compose up, the RUN command has no further effect; it's a Dockerfile instruction, not a runtime command. When your container runs, it writes the files as the container's user (root by default), which your host user doesn't have permission to edit.
A Docker container runs in an isolated environment, so the UID/GID to user name/group name mapping is not shared with the processes inside the container. You are facing two problems:
The files created inside the container are owned by root:root (by default, or by a UID chosen by the image)
The files are created with permission 644 or 755, which is not editable by the host user
You could refer to the images provided by linuxserver.io to learn how they solve these two problems, for example qBittorrent. They provide two environment variables, PUID and PGID, to solve problem 1, and a UMASK option to solve problem 2.
Below is the related implementation in their custom Ubuntu base image. You could use that as your image base as well.
The place where PUID and PGID are handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1/root/etc/cont-init.d/10-adduser
The place where UMASK is handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1438aa81e68a5d87eff39ade0f1c879/root/usr/bin/with-contenv#L2-L5
Personally, I'd prefer setting PUID and PGID to match the host user (you can get them from id -u and id -g) and NOT touching UMASK unless absolutely necessary.
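For illustration, a compose sketch of that preference. The image name here is hypothetical and assumed to be built on the linuxserver.io base shown above so that PUID/PGID are actually honoured; a plain python image, as in the question, simply ignores them:
version: "3.3"
services:
  python:
    # hypothetical image built FROM the linuxserver.io Ubuntu base
    image: my-app-on-lsio-base
    working_dir: /data
    volumes:
      - .:/data/
    environment:
      - PUID=1000   # match `id -u` on the host
      - PGID=1000   # match `id -g` on the host
For images that don't support PUID/PGID, Compose's user option (e.g. user: "1000:1000") is the closest built-in way to run the process as the host user.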

View Docker Swarm CMD Line Output

I am trying to incorporate a python container and a dynamodb container into one stack file to experiment with Docker swarm. I have done Docker swarm tutorials before, watching web apps run across multiple nodes, but I have never built anything independently. I am able to run docker-compose up with no issues, but I am struggling with swarm.
My docker-compose.yml looks like
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    links:
      - "dynamodb:localhost"
Running docker stack deploy -c docker-compose.yml trial_stack brings up no errors; however, the 'hello world' printed by the first line of my Python code is not displayed in the terminal. I get the following as command-line output:
Ignoring unsupported options: links
Creating network trial_stack_default
Creating service trial_stack_dynamodb
Creating service trial_stack_track-count
My question is:
1) Why is the deploy ignoring the links? I have noticed this is mentioned in the docs https://docs.docker.com/engine/reference/commandline/stack_deploy/ but I am unsure whether this will cause my stack to fail.
2) Assuming the links issue is fixed, where will any command line output be shown, to confirm the system is running? Currently I only have one node, my local machine, which is the manager.
For reference, my python image is being built by the following Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
WORKDIR /app
RUN pip install --upgrade pip
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
COPY / /
COPY /resources/secrets.py /resources/secrets.py
CMD [ "python", "/main.py" ]
You can update docker-compose.yaml to enable tty for the services whose stdout you want to see on the console.
The updated docker-compose.yaml should look like this:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    tty: true
    links:
      - "dynamodb:localhost"
and then, once you have the stack deployed, you can check the service logs by running:
# get the service name
docker stack services <STACK_NAME>
# display the service logs, edited based on user's suggestion
docker service logs --follow --raw <SERVICE_NAME>
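For example, with the stack from the question (service names taken from the deploy output above), that would be:
# list the services in the stack
docker stack services trial_stack
# follow the raw logs of the python service
docker service logs --follow --raw trial_stack_track-count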

docker entrypoint behaviour with django

I'm trying to make my first django container with uwsgi. It works as follows:
FROM python:3.5
RUN apt-get update && \
    apt-get install -y && \
    pip3 install uwsgi
COPY ./projects.thux.it/requirements.txt /opt/app/requirements.txt
RUN pip3 install -r /opt/app/requirements.txt
COPY ./projects.thux.it /opt/app
COPY ./uwsgi.ini /opt/app
COPY ./entrypoint /usr/local/bin/entrypoint
ENV PYTHONPATH=/opt/app:/opt/app/apps
WORKDIR /opt/app
ENTRYPOINT ["entrypoint"]
EXPOSE 8000
#CMD ["--ini", "/opt/app/uwsgi.ini"]
entrypoint here is a script that decides whether to call uwsgi (when there are no args) or python manage.py (in all other cases).
I'd like to use this container both as an executable (dj migrate, dj shell, ... where dj is an alias for python manage.py, the handler for Django interaction) and as a long-running container (uwsgi --ini uwsgi.ini). I use docker-compose as follows:
web:
  image: thux-projects:3.5
  build: .
  ports:
    - "8001:8000"
  volumes:
    - ./projects.thux.it/web/settings:/opt/app/web/settings
    - ./manage.py:/opt/app/manage.py
    - ./uwsgi.ini:/opt/app/uwsgi.ini
    - ./logs:/var/log/django
I do in fact manage to serve the project correctly, but to interact with Django (e.g. to run check) I need to issue:
docker-compose exec web entrypoint check
while, reading the docs, I would have imagined I just needed the arguments (without entrypoint):
Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d argument to the entry point.
The working situation with "repeated" entrypoint:
$ docker-compose exec web entrypoint check
System check identified no issues (0 silenced).
The failing one if I avoid 'entrypoint':
$ docker-compose exec web check
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"check\": executable file not found in $PATH": unknown
docker exec never uses a container's entrypoint; it just directly runs the command you give it.
When you docker run a container, the entrypoint and command you give to start it are combined to produce a single command line, and that command becomes the main container process. On the other hand, when you docker exec a command in a running container, it's interpreted literally; there aren't two parts of the command line to assemble, and the container's entrypoint isn't considered at all.
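(As an aside, not something the recommendation below relies on: if you do want a one-off command to go through the container's entrypoint, docker-compose run creates a new container and therefore does combine the entrypoint with the command you pass, unlike exec.)
# starts a one-off container; the service's entrypoint receives "check" as its argument
docker-compose run --rm web check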
For the use case you describe, you don't need an entrypoint script to process the command in an unusual way. You can create a symlink to the manage.py script to give a shorter alias to run it, but make the default command be the uwsgi runner.
RUN chmod +x manage.py
RUN ln -s /opt/app/manage.py /usr/local/bin/dj
CMD ["uwsgi", "--ini", "/opt/app/uwsgi.ini"]
# Runs uwsgi:
docker run -p 8000:8000 myimage
# Manually trigger database migrations:
docker run --rm myimage dj migrate
