Flask container unusual behavior - Python

I am running a sample Flask app backed by MySQL using docker-compose. Here's my compose file:
version: "2"
services:
webapp:
build:
context: ./flask/
dockerfile: Dockerfile
ports:
- "8000:5000"
env_file:
- ./.env
depends_on:
- mysqldb
networks:
- my-bridge1
volumes:
- "./flask/flask-data:/usr/src"
mysqldb:
build:
context: ./mysql/
dockerfile: Dockerfile
env_file:
- ./.env
networks:
- my-bridge1
volumes:
- "./mysql/db-data:/var/lib/mysql"
networks:
my-bridge1:
driver: bridge
The issue is that when I mount my application directory from outside the container, I get an error saying the __init__.py file is not found, even though it exists in the WORKDIR. This only happens when I bind-mount my code directory; if I mount any other directory, the app works fine.
Here's my Dockerfile for the app:
FROM python:3
RUN mkdir /usr/src/FlaskApp
RUN mkdir /usr/src/FlaskApp/code
WORKDIR /usr/src/FlaskApp/code
COPY ./code ./
RUN pip install -r ./requirements.txt
COPY FlaskApp.wsgi /usr/src/FlaskApp/
EXPOSE 5000
VOLUME /usr/src
CMD [ "python", "__init__.py" ]
I have tested the MySQL container; it retains the files created from within the container. But the Python container does not.
EDIT1:
When I changed the CMD arg to "ls", the output was empty. When I changed the CMD arg to "pwd", the output was "/usr/src/FlaskApp/code".
EDIT2:
What's stranger is that the directories inside the bind mount are created on the host. But they are empty!

Data is copied into the Python container as well, but you are obscuring it with the bind mount.
First you copy the files into /usr/src/FlaskApp/code in your Dockerfile, but then you create a bind mount at the same location, which means that /usr/src will now hold only the contents of ./flask/flask-data on your host (the source of the bind mount).
As a result, you end up with /usr/src/<contents of ./flask/flask-data>, so if ./flask/flask-data on your host doesn't contain the __init__.py file (and the whole directory substructure required by your application), neither will the container.
So all of these lines in your Dockerfile are effectively irrelevant as long as you are using that bind mount:
RUN mkdir /usr/src/FlaskApp
RUN mkdir /usr/src/FlaskApp/code
WORKDIR /usr/src/FlaskApp/code
COPY ./code ./
COPY FlaskApp.wsgi /usr/src/FlaskApp/
I am not sure what exactly you are trying to achieve or how your application resolves its paths, but a quick fix would be to create another folder under /usr/src (maybe /usr/src/FlaskData) and mount the local directory there:
volumes:
  - "./flask/flask-data:/usr/src/FlaskData"
Now you will have both FlaskApp and FlaskData in your /usr/src, but you will need to update the file paths in your application accordingly.
From the Docker docs:
Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory's existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising and this behavior differs from that of docker volumes.
And to answer why the bind mount behaves differently for the MySQL container: it doesn't.
You are mounting an empty folder to a location where data is written by MySQL only after the container starts, so there is nothing to be obscured, because the destination is empty to start with. The same applies to the Python container: if you wrote something to /usr/src after the container starts, you would see that data appear on the host in ./flask/flask-data.
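To see the obscuring behavior for yourself, you can compare the directory listing with and without the mount (a minimal sketch; the image tag myapp and the empty-dir host directory are hypothetical names):
# Build the image, then list the code directory baked in at build time
docker build -t myapp ./flask/
docker run --rm myapp ls /usr/src/FlaskApp/code
# Bind-mount an empty host directory over /usr/src: the build-time
# contents are obscured and only the (empty) WORKDIR is recreated
mkdir -p empty-dir
docker run --rm -v "$PWD/empty-dir:/usr/src" myapp ls /usr/src/FlaskApp/code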


File created by Docker is uneditable on host

I have a Python script that creates a folder and writes a file in that folder. I can open the file and see its contents, but unfortunately I cannot edit it. I tried adding the command RUN chmod -R 777 . but that didn't help either. On the created files and folders I see a lock icon.
I have been able to reproduce the issue with a small demo. The contents are as follows:
demo.py
from pathlib import Path
Path("./created_folder").mkdir(parents=True, exist_ok=True)
with open("./created_folder/dummy.txt", "w") as f:
    f.write("Cannot edit the contents of this file")
Dockerfile
FROM python:buster
COPY . ./data/
WORKDIR /data
RUN chmod -R 777 .
CMD ["python", "demo.py"]
docker-compose.yaml
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
After creating these files, run docker-compose up --build, observe the results, and then try to edit and save the created file dummy.txt - which should fail.
Any idea how to make sure that the created files can be edited and saved on the host?
EDIT:
I am running docker-compose rootless: I had read that it is not a good idea to run Docker with sudo, so I followed the official instructions on adding the user group etc.
I actually run the command docker-compose up --build, not just docker compose up.
I am on Ubuntu 20.04
The username is the same in both groups:
~$ grep /etc/group -e "docker"
docker:x:999:username
~$ grep /etc/group -e "sudo"
sudo:x:27:username
I tried using the PUID and PGID environment variables... but still the same issue.
Current docker-compose file:
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
environment:
- PUID=1000
- PGID=1000
It seems like the Docker group is different from your user group and you're misunderstanding the way Docker RUN works.
Here's what happens when you run docker build:
1. You pull the python:buster image
2. You copy the contents of the current directory into the Docker image
3. You set the work directory to the existing /data directory
4. You set the permissions of the /data directory to 777
5. Finally, the CMD is set to indicate what program should run
When you run docker-compose, the RUN command has no effect at runtime, because it is a Dockerfile instruction, not a runtime command. When your container runs, it writes the files with the user/group of the Docker daemon, which your user doesn't have permission to edit.
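As a quick one-off fix on the host (my addition, not part of this answer), you can take back ownership of the files the container created:
# Reassign the container-created files to your own user and group
sudo chown -R "$(id -u):$(id -g)" created_folder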
A Docker container runs in an isolated environment, so the UID/GID to user-name/group-name mapping is not shared with the processes inside the container. You are facing two problems:
1. The files created inside the container are owned by root:root (by default, or by a UID chosen by the image).
2. The files are created with permissions 644 or 755, which are not editable by the host user.
You could refer to the images provided by linuxserver.io to see how they solve these two problems, for example qBittorrent: they provide the PUID and PGID environment variables to solve problem 1, and a UMASK option to solve problem 2.
Below is the related implementation in their custom Ubuntu base image. You could use that as your image base as well.
Where PUID and PGID are handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1/root/etc/cont-init.d/10-adduser
Where UMASK is handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1438aa81e68a5d87eff39ade0f1c879/root/usr/bin/with-contenv#L2-L5
Personally, I'd prefer setting PUID and PGID to match the host user (you can get them from id -u and id -g) and NOT touching UMASK unless absolutely necessary.
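If you'd rather not switch base images, a lighter-weight sketch (my suggestion, not from the linked images) is to run the container process directly as your host UID/GID via Compose's user option, so the files it creates are owned by you:
version: "3.3"
services:
  python:
    working_dir: /data
    build:
      context: .
      dockerfile: Dockerfile
    # Run as the host user; substitute the output of `id -u` and `id -g`
    user: "1000:1000"
    volumes:
      - .:/data/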

Docker & Python, permission denied on Linux, but works when runnning on Windows

I'm trying to prepare a development container with Python + Flask and Postgre.
Since it is a development container, it is meant to be productive: I don't want to run a build each time I change a file, so I can't COPY the files in the build phase. Instead, I mount a volume with all the source files, so when I change a Python file on the host machine, the Flask server automatically detects the change and restarts itself, even though it is running inside the container.
So far so good: with docker-compose up these containers run fine on Windows, but when I tried to run them on Linux, I got:
/bin/sh: 1: ./start.sh: Permission denied
Everywhere I searched tells me to RUN chmod +x start.sh, which doesn't work because the file doesn't exist at the build phase, so I tried changing RUN to CMD... but still the same error.
Any ideas why? Aren't containers supposed to help with "works on my machine"? These files work on a Windows host, but not on a Linux host.
Is what I am doing the right approach to making file changes on the host machine reflect in the container (without a rebuild)?
Thanks in advance!!
Below are my files:
docker-compose.yml:
version: '3'
services:
  postgres-docker:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: "Postgres2019!"
    ports:
      - "9091:5432"
    expose:
      - "5432"
    volumes:
      - volpostgre:/var/lib/postgresql/data
    networks:
      - app-network
  rest-server:
    build:
      context: ./projeto
    ports:
      - "9092:5000"
    depends_on:
      - postgres-docker
    volumes:
      - ./projeto:/app
    networks:
      - app-network
volumes:
  volpostgre:
networks:
  app-network:
    driver: bridge
And inside the projeto folder I have the following Dockerfile:
FROM python:3.8.5
WORKDIR /app
CMD ./start.sh
And in start.sh:
#!/bin/bash
pip install -r requirements.txt
python setupdatabase.py
python run.py
One option you can try is to override the CMD in docker-compose.yml to first set the execute permission on the file and then run the script.
By doing this you do not need to build a Docker image at all, as the only thing the image adds is the CMD ./start.sh:
webapp:
  image: python:3.8.5
  volumes:
    - $PWD/:/app
  working_dir: /app
  command: bash -c 'chmod +x start.sh && ./start.sh'
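Alternatively, since a bind mount preserves the host file's permission bits on Linux, making the script executable once on the host is enough (assuming nothing on the host resets the bit afterwards):
# Set the execute bit on the host; the bind mount carries it into the container
chmod +x projeto/start.sh
docker-compose up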

How to make FastAPI pick up changes in an API routing file automatically while running inside a Docker container?

I am running FastAPI via Docker by creating a service called ingestion-data in docker-compose. My Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
# Environment variable for directory containing our app
ENV APP /var/www/app
ENV PYTHONUNBUFFERED 1
# Define working directory
RUN mkdir -p $APP
WORKDIR $APP
COPY . $APP
# Install missing dependencies
RUN pip install -r requirements.txt
And my docker-compose.yml file:
version: '3.8'
services:
  ingestion-service:
    build:
      context: ./app
      dockerfile: Dockerfile
    ports:
      - "80:80"
    volumes:
      - .:/app
    restart: always
I am not sure why this is not picking up changes automatically when I modify any endpoint of my application. I have to rebuild my image and container every time.
Quick answer: yes :)
In the Dockerfile, you are copying your app into /var/www/app.
The instructions from the Dockerfile are executed when you build your image (docker build -t <imgName>:<tag>).
If you change the code later on, how could the image be aware of that?
However, you can mount a volume (a directory) from your host machine into the container, right at /var/www/app, when you execute the docker run / docker-compose up command. You'll then be able to change the code in your local directory, and the changes will automatically be visible in the container as well.
Perhaps you want to mount the current working directory (the one containing your app) at /var/www/app?
volumes:
  - .:/var/www/app
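Note that the server inside the container must also watch for file changes. With the tiangolo/uvicorn-gunicorn-fastapi base image, development live-reload is provided by the bundled /start-reload.sh script (per that image's documentation), so a development setup could look like this sketch (the ./app mount source is an assumption based on your build context):
services:
  ingestion-service:
    build:
      context: ./app
      dockerfile: Dockerfile
    # Development only: start uvicorn with --reload so code changes are picked up
    command: /start-reload.sh
    ports:
      - "80:80"
    volumes:
      - ./app:/var/www/app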

Django in Docker - Entrypoint to initiate Django App Files

At the moment I am trying to build a Django app that other users should be able to run as a Docker container. I want them to be able to start it easily with a docker run command or a prewritten docker-compose file.
Now, I have problems with the persistence of the data. I am using the volumes flag in docker-compose, for example, to bind-mount a local folder of the host into the container, where the app data and config files are located in the container. The host folder is empty on the first run, as the user has just installed Docker and is starting the docker-compose file for the first time.
As it is a bind mount, the empty folder overrides the folder in the container as far as I understood, so the container folder holding the Django app is now empty and the app cannot start.
I searched a bit, and as far as I understood, I need to create an entrypoint.sh file that copies the app data into the mounted folder after startup.
Now to my questions:
1. Is there a best practice for copying the files via an entrypoint.sh file?
2. What about a second run, after 1. worked and the files already exist: how do I avoid overriding possibly changed config files with the default ones from the temp folder?
My example code for now:
Dockerfile
# pull official base image
FROM python:3.6
# set work directory
RUN mkdir /app
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# copy project
COPY . /app/
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
#one of my tries to make data persistent
VOLUME /app
docker-compose.yml
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    command: python manage.py runserver 0.0.0.0:8000
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
      - /folder/to/app/data:/app
    networks:
      - overlay-core
networks:
  overlay-core:
    external: true
entrypoint.sh
#empty for now
You should restructure your application to store the application code and its data in different directories. Even if the data is a subdirectory of the application, that's good enough. Once you do that, you can bind-mount only the data directory and leave the application code from the image intact.
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    volumes:
      - ./data:/app/data # not /app
There's no particular reason to put a VOLUME declaration in your Dockerfile, but you should declare the CMD your image should run there.
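If you do end up needing to seed a mounted data directory on first run, a common entrypoint pattern (a sketch; the /app/default-data path for image-baked defaults is a name I'm inventing here) is to copy the defaults only when the target is empty, which also answers question 2: existing, possibly edited files are never overwritten:
#!/bin/sh
# entrypoint.sh: seed the mounted data dir from baked-in defaults,
# but only on the first run (i.e. when the mount target is empty)
set -e
if [ -z "$(ls -A /app/data 2>/dev/null)" ]; then
    cp -R /app/default-data/. /app/data/
fi
exec "$@"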

How to properly define services in compose file

I have to run a simple service on Docker Compose. The first image is to host the previously created service, while the second image, which depends on the first, is to run the tests. So I created this Dockerfile:
FROM python:2.7-slim
WORKDIR /flask
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "routes.py"]
Everything works. I created some simple tests, which also work, and placed the file in the same directory as routes.py.
So I tried to create a docker-compose.yml file and did something like this:
version: '2'
services:
  app:
    build: .
    command: 'python MyTest.py'
    ports:
      - "5000:5000"
  tests:
    build:
      context: Mytest.py
    depends_on:
      - app
When I ran it, I received an error:
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
So how should I specify this directory, and where should I place it: in the app or the tests service?
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
The above error tells you that context: should be the folder containing your Dockerfile. But since it seems you can use the same image to run your tests, I think there is no need to specify a separate one.
And I guess your MyTest.py will call port 5000 of your app container to run its test. So what you need is this:
version: '2'
services:
  app:
    build: .
    container_name: my_app
    ports:
      - "5000:5000"
  tests:
    build: .
    depends_on:
      - app
    command: python MyTest.py
Here, what you need to pay attention to is that your test in MyTest.py should target http://my_app:5000.
Meanwhile, I suggest you sleep for some time in MyTest.py, because depends_on can only ensure that tests starts after app; it cannot guarantee that Flask is actually ready by then. You can also consider this to enforce the order.
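A minimal way to do that wait at the top of MyTest.py (a sketch; it assumes the requests package is in your requirements and uses the my_app container name from the compose file above):
import time
import requests

# Poll the app until it answers, instead of sleeping a fixed amount
for _ in range(30):
    try:
        requests.get("http://my_app:5000", timeout=1)
        break  # the Flask app is up, tests can proceed
    except requests.exceptions.ConnectionError:
        time.sleep(1)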
You need to specify the dockerfile field, as you are using a version 2 Compose file.
Check this out.
Modify your build section:
...
build:
  context: .
  dockerfile: Dockerfile
...
