At the moment I am trying to build a Django app that other users should be able to run as a Docker container. I want them to be able to start it with a single docker run command or a prewritten docker-compose file.
Now I have problems with the persistence of the data. I am using the volumes key in docker-compose, for example, to bind-mount a local folder of the host into the container at the path where the app's data and config files are located. On the first run this host folder is empty, because the user has just installed Docker and is starting the docker-compose file for the first time.
As it is a bind mount, the empty host folder hides the folder in the container, as far as I understand, so the container folder holding the Django app is now empty and the app cannot start.
I searched a bit, and as far as I understood, I need to create an entrypoint.sh file that copies the app data into the volume-mounted folder after startup.
Now to my questions:
Is there a best practice for copying the files via an entrypoint.sh file?
What about a second run, after question 1's approach worked and the files already exist? How do I avoid overriding possibly changed config files with the default ones from the temp folder?
My example code for now:
Dockerfile
# pull official base image
FROM python:3.6
# set work directory
RUN mkdir /app
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# copy project
COPY . /app/
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# one of my attempts to make the data persistent
VOLUME /app
docker-compose.yml
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    command: python manage.py runserver 0.0.0.0:8000
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
      - /folder/to/app/data:/app
    networks:
      - overlay-core
networks:
  overlay-core:
    external: true
entrypoint.sh
#empty for now
You should restructure your application to store the application code and its data in different directories. Even if the data is a subdirectory of the application, that's good enough. Once you do that, you can bind-mount only the data directory and leave the application code from the image intact.
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    volumes:
      - ./data:/app/data  # not /app
There's no particular reason to put a VOLUME declaration in your Dockerfile, but you should declare the CMD your image should run there.
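If you do still need to seed a bind-mounted data directory on first run, the usual pattern is an entrypoint script that copies defaults only when the target directory is empty, so a second run never clobbers config files the user has changed. A minimal sketch, where the paths and the defaults directory are assumptions about your layout, not something from the image above:

```shell
#!/bin/sh
set -eu

# Seed data_dir from defaults_dir only when data_dir is empty, so a
# second run never overwrites config files the user already changed.
seed_if_empty() {
    data_dir="$1"
    defaults_dir="$2"
    mkdir -p "$data_dir"
    if [ -z "$(ls -A "$data_dir")" ]; then
        cp -a "$defaults_dir/." "$data_dir/"
    fi
}

# In a real entrypoint you would end with something like:
#   seed_if_empty /app/data /app/default-data
#   exec "$@"          # hand off to the image's CMD
```

The `exec "$@"` hand-off matters so that the CMD process becomes PID 1 and receives signals from docker stop.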
Related
I am running a sample Flask app backed by MySQL using docker-compose. Here's my compose file:
version: "2"
services:
  webapp:
    build:
      context: ./flask/
      dockerfile: Dockerfile
    ports:
      - "8000:5000"
    env_file:
      - ./.env
    depends_on:
      - mysqldb
    networks:
      - my-bridge1
    volumes:
      - "./flask/flask-data:/usr/src"
  mysqldb:
    build:
      context: ./mysql/
      dockerfile: Dockerfile
    env_file:
      - ./.env
    networks:
      - my-bridge1
    volumes:
      - "./mysql/db-data:/var/lib/mysql"
networks:
  my-bridge1:
    driver: bridge
The issue is that when I bind-mount my application directory into the container, there is an
**error**: __init__.py file is not found
even though the file is in the WORKDIR. This issue only occurs when I mount my code directory; if I mount any other directory, the app works fine.
Here's my docker file for the app:
FROM python:3
RUN mkdir /usr/src/FlaskApp
RUN mkdir /usr/src/FlaskApp/code
WORKDIR /usr/src/FlaskApp/code
COPY ./code ./
RUN pip install -r ./requirements.txt
COPY FlaskApp.wsgi /usr/src/FlaskApp/
EXPOSE 5000
VOLUME /usr/src
CMD [ "python", "__init__.py" ]
I have tested the MySQL container: it copies the files out from within the container. But the Python container does not.
EDIT1:
When I changed the CMD arg to "ls", the directory was empty. When I changed the CMD arg to "pwd", the output was: "/usr/src/FlaskApp/code".
EDIT2:
What's stranger is that the directories inside the bind volume do get created on the host, but they are empty!
Data is copied into the Python container as well, but you are obscuring that data with the bind mount.
First you copy the files into /usr/src/FlaskApp/code in your Dockerfile, but then you create a bind mount at /usr/src, which means that /usr/src will now hold only the contents of ./flask/flask-data from your localhost (the source of the bind mount).
As a result, you end up with /usr/src/&lt;contents of ./flask/flask-data&gt;, so if ./flask/flask-data on your localhost doesn't contain the __init__.py file (and the whole directory substructure required by your application), neither will the container.
So all these lines in your Dockerfile are effectively irrelevant as long as you are using that bind mount:
RUN mkdir /usr/src/FlaskApp
RUN mkdir /usr/src/FlaskApp/code
WORKDIR /usr/src/FlaskApp/code
COPY ./code ./
COPY FlaskApp.wsgi /usr/src/FlaskApp/
I am not sure what exactly you are trying to achieve or how your application resolves its paths, but a quick fix would be to create another folder under /usr/src (say /usr/src/FlaskData) and mount the local directory there:
volumes:
  - "./flask/flask-data:/usr/src/FlaskData"
Now you will have both FlaskApp and FlaskData under /usr/src, but you will need to update the file paths in your application accordingly.
From the Docker docs:
Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the
directory’s existing contents are obscured by the bind mount. This can
be beneficial, such as when you want to test a new version of your
application without building a new image. However, it can also be
surprising and this behavior differs from that of docker volumes.
And to answer why the bind mount behaves differently for the MySQL container: it doesn't.
You are mounting an empty folder to a location where MySQL writes its data only after the container starts, so there is nothing to be obscured, because the destination is empty to begin with. The same applies to the Python container: if you wrote something to /usr/src after the container starts, you would see that data appear on the localhost in ./flask/flask-data.
I am running FastAPI via Docker by creating a service called ingestion-data in docker-compose. My Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
# Environment variable for directory containing our app
ENV APP /var/www/app
ENV PYTHONUNBUFFERED 1
# Define working directory
RUN mkdir -p $APP
WORKDIR $APP
COPY . $APP
# Install missing dependencies
RUN pip install -r requirements.txt
And my docker-compose.yml file:
version: '3.8'
services:
  ingestion-service:
    build:
      context: ./app
      dockerfile: Dockerfile
    ports:
      - "80:80"
    volumes:
      - .:/app
    restart: always
I am not sure why this is not picking up changes automatically when I modify any endpoint of my application. I have to rebuild my image and container every time.
Quick answer: Yes :)
In the Dockerfile, you are copying your app into /var/www/app.
The instructions from the Dockerfile are executed when you build your image (docker build -t &lt;imgName&gt;:&lt;tag&gt;).
If you change the code later on, how could the image be aware of that?
However, you can mount a volume (a directory) from your host machine into the container, right at /var/www/app, when you execute the docker run / docker-compose up command. You'll then be able to change the code in your local directory, and the changes will automatically be visible in the container as well.
Perhaps you want to mount the current working directory (the one containing your app) at /var/www/app?
volumes:
  - .:/var/www/app
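Note that mounting the code is only half of it: the gunicorn workers in the tiangolo base image do not watch files by default, so for live reload during development you also need to run the server with a reload flag. A sketch of what the compose service could look like, where the module path main:app and the mount target are assumptions about your project layout:

```yaml
services:
  ingestion-service:
    build:
      context: ./app
      dockerfile: Dockerfile
    ports:
      - "80:80"
    volumes:
      - .:/var/www/app          # mount over the COPY destination, not /app
    # development only: single uvicorn worker with file watching
    command: uvicorn main:app --host 0.0.0.0 --port 80 --reload
```

For production you would drop both the bind mount and the --reload command and rely on the code baked into the image.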
I have a React app which communicates with a Flask API and displays its data. I had both of these projects in separate folders and everything worked fine.
Then I wanted to containerize the Flask + React app with docker-compose for practice, so I created a folder in which I have my middleware (Flask) and frontend (React) folders. Then I created a virtual environment and installed Flask. Now when I import flask inside a Python file, I get an error.
I do not understand why simply moving the folders inside another folder would affect my project. You can see the project structure and the error in the picture below.
Dockerfile react app
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
CMD [ "npm", "start" ]
Dockerfile flask api
FROM python:3.7.2
# set working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# add requirements (to leverage Docker cache)
ADD ./requirements.txt /usr/src/app/requirements.txt
# install requirements
RUN pip install -r requirements.txt
# add app
ADD . /usr/src/app
# run server
CMD python app.py runserver -h 0.0.0.0
docker-compose.yml
version: '3'
services:
middleware:
build: ./middleware
expose:
- 5000
ports:
- 5000:5000
volumes:
- ./middleware:/usr/src/app
environment:
- FLASK_ENV=development
- FLASK_APP=app.py
- FLASK_DEBUG=1
frontend:
build: ./frontend
expose:
- 3000
ports:
- 3000:3000
volumes:
- ./frontend/src:/usr/src/app/src
- ./frontend/public:/usr/src/app/public
links:
- "middleware:middleware"
When moving folders around, you should update the Python interpreter path in your .vscode/settings.json file. Otherwise you'll be using the wrong Python interpreter, one without Flask installed.
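For example, a minimal .vscode/settings.json sketch; the virtualenv location under middleware/ is an assumption about where you created it:

```json
{
    // point VS Code at the interpreter of the virtualenv that has Flask
    // installed ("middleware/.venv" is an assumed location - adjust it)
    "python.pythonPath": "${workspaceFolder}/middleware/.venv/bin/python"
}
```

VS Code's settings.json accepts comments; in newer versions of the Python extension the setting is chosen via the "Python: Select Interpreter" command instead.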
I have to run a simple service on Docker Compose. The first image is to host the previously created service, while the second image, which depends on the first one, is to run the tests. So I created this Dockerfile:
FROM python:2.7-slim
WORKDIR /flask
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "routes.py"]
Everything works. I created some simple tests, which also work, and placed the file in the same directory as routes.py.
So I tried to create a docker-compose.yml file and did something like this:
version: '2'
services:
  app:
    build: .
    command: 'python MyTest.py'
    ports:
      - "5000:5000"
  tests:
    build:
      context: Mytest.py
    depends_on:
      - app
When I run it, I receive this error:
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
So how should I specify this directory, and where should I place it: in the app service or the tests service?
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
The above error tells you that context: should be a folder containing your Dockerfile. But since it seems you can use the same image to test your product, I think there is no need to specify a separate one.
And I guess your MyTest.py will call port 5000 of your app container to run the test. So what you need is this:
version: '2'
services:
  app:
    build: .
    container_name: my_app
    ports:
      - "5000:5000"
  tests:
    build: .
    depends_on:
      - app
    command: python MyTest.py
Here, what you need to pay attention to is that you should access http://my_app:5000 in MyTest.py for your test.
Meanwhile, I suggest you sleep for some time in MyTest.py, because depends_on can only ensure that tests starts after app; it cannot guarantee that Flask is already ready by then. You could also use a wait-for script to enforce the order.
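Instead of a fixed sleep, MyTest.py can poll the service until it answers. A minimal sketch; the URL http://my_app:5000 comes from the compose file above, while the helper itself (name, timeouts) is just an illustration:

```python
import time
import urllib.error
import urllib.request


def wait_for(url, timeout=30.0, interval=1.0):
    """Poll `url` until the server answers or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=interval)
            return True
        except urllib.error.HTTPError:
            return True  # server answered, even if with an error status
        except OSError:  # connection refused, DNS failure, timeout, ...
            time.sleep(interval)
    return False

# In MyTest.py, before the assertions:
#   if not wait_for("http://my_app:5000"):
#       raise SystemExit("app never became ready")
```

This keeps the test start fast when the app comes up quickly, while still tolerating a slow first boot.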
You need to specify the dockerfile field, as you are using a version 2 Compose file.
Check this out.
Modify your build command:
...
build:
  context: .
  dockerfile: Dockerfile
...
The Django server runs fine on localhost. However, when I try to run the server in a Docker container, it can't find the manage.py file when started via the docker-compose file, and even when I run the container manually and start the server, it doesn't appear in the browser. How can I solve this problem?
So I wrote all the code, testing on my local server, and built the image of my project with the Dockerfile.
Then I tried to run the server in the Docker container, and suddenly it doesn't run.
What's worse, if I use docker-compose to run the server, it doesn't find manage.py, though I already checked that the file is there with 'docker run -it $image_name sh'.
here is the code of my project
I am new to docker and new to programming.
hope you can give me a help. thanks!
file structure
current directory
└─example
└─db.sqlite3
└─docker-compose.yml
└─Dockerfile
└─manage.py
└─Pipfile
└─Pipfile.lock
Dockerfile
# Base image - Python version
FROM python:3.6-alpine
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Copy Pipfile
COPY Pipfile /code
COPY Pipfile.lock /code
# Install dependencies
RUN pip install pipenv
RUN pipenv install --system
# Copy files
COPY . /code/
docker-compose.yml
# docker-compose.yml
version: '3.3'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
Expected result: the server running in a web browser like Chrome.
Actual result:
When using docker-compose, an error like this in the prompt:
web_1 | python: can't open file '/code/manage.py': [Errno 2] No such file or directory
When running the container manually with 'docker run -it $image_name sh' and 'python manage.py runserver' in the shell:
the server is running, but it doesn't connect to the web browser (it doesn't show up in a browser like Chrome).
You have done the same thing in two ways: you copied the source files with a COPY command, and then you mounted a host volume over them in your docker-compose.yml file. In the first place you don't need a volume here, because volume mounts are for persisting data generated by and used by Docker containers.
The following simplified Dockerfile and docker-compose file should fix the problem.
# Base image - Python version
FROM python:3.6-alpine
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Copy files
COPY . /code/
# Set work directory
WORKDIR /code
# Install dependencies
RUN pip install pipenv
RUN pipenv install --system
docker-compose.yml:
# docker-compose.yml
version: '3.3'
services:
  web:
    build: .
    command: python ./manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000