I have a docker-compose project with two containers running NGINX and gunicorn with my django files.
I also have a database outside of docker in AWS RDS.
My question is similar to this one. But that question is about a database that lives inside docker-compose; mine is outside.
So, if I were to open a bash terminal in my container and run py manage.py makemigrations, the problem is that the migration files in the Django project (for example /my-django-project/my-app/migrations/001-xxx.py) would get out of sync with the database table that records which migrations have been applied. This will happen because my containers can shut down and a new container can start at any time, so the migration files would not be saved.
My ideas are to either:
Use a volume inside docker-compose, but since the migrations folders are spread out over all Django apps, that could be hard to achieve.
Handle migrations outside of Docker, which would require some kind of "master" project where the migration files would be stored. This does not seem like a good idea, since then the whole project would depend on some local files existing.
I'm looking for suggestions on a good practice for handling migrations.
EDIT:
Here is docker-compose.yml. I'm running this locally with docker-compose up, and in production on AWS ECS with docker compose up. I left out some aws-cloudformation config, which I think should not matter.
docker-compose.yml
version: '3'
services:
  web:
    image: <secret>.dkr.ecr.eu-west-3.amazonaws.com/api-v2/django:${IMAGE_TAG}
    build:
      context: .
      dockerfile: ./Dockerfile
    networks:
      - demoapp
    environment:
      - DEBUG=${DEBUG}
      - SECRET_KEY=${SECRET_KEY}
  nginx:
    image: <secret>.dkr.ecr.eu-west-3.amazonaws.com/api-v2/nginx:${IMAGE_TAG}
    build:
      context: .
      dockerfile: ./nginx.Dockerfile
    ports:
      - 80:80
    depends_on:
      - web
    networks:
      - demoapp
The problem boiled down to where I would store the migration files that Django generates with py manage.py makemigrations, and when/where I would run py manage.py migrate. As 404pio suggested, you can simply store these in your code repo, like GitHub.
So my workflow goes like this:
In my local development environment, run py manage.py makemigrations and py manage.py migrate (targeting a development database like SQLite).
If everything is OK, commit and push to git.
(I'm using CircleCI to test and deploy my Django project, but this could be done manually as well.) CircleCI runs the pipeline after git push. The very last step in the pipeline runs py manage.py migrate, roughly as sketched below. This must happen after the app has been deployed, because the deployment might fail, and in that case you don't want to migrate.
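For illustration, that last pipeline step is essentially just a migrate against the RDS database. This is only a sketch; the way credentials are injected and the exact dependency setup are assumptions, not my literal CircleCI config:
# final CI step, executed only after the deploy step has succeeded;
# the database settings/credentials must point at the AWS RDS instance
pip install -r requirements.txt
python manage.py migrate --noinput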
Related
At the moment I am trying to build a Django app that other users should be able to run as a Docker container. I want them to easily start it with a run command or a prewritten docker-compose file.
Now I have problems with the persistence of the data. I am using the volumes entry in docker-compose, for example, to bind mount a local folder of the host into the container, where the app data and config files are located in the container. The host folder is empty on the first run, since the user has just installed Docker and is starting the docker-compose file for the first time.
As it is a bind mount, the empty host folder overrides the folder in the container (as far as I understood), so the container folder containing the Django app is now empty and the app cannot start.
I searched a bit, and as far as I understood, I need to create an entrypoint.sh file that copies the app data into the volume folder after startup.
Now to my questions:
Is there a best practice for how to copy the files via an entrypoint.sh file?
What about a second run, after 1. worked and the files already exist: how do I avoid overriding possibly changed config files with the default ones from the temp folder?
My example code for now:
Dockerfile
# pull official base image
FROM python:3.6
# set work directory
RUN mkdir /app
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# copy project
COPY . /app/
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
#one of my tries to make data persistent
VOLUME /app
docker-compose.yml
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    command: python manage.py runserver 0.0.0.0:8000
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
      - /folder/to/app/data:/app
    networks:
      - overlay-core
networks:
  overlay-core:
    external: true
entrypoint.sh
#empty for now
You should restructure your application to store the application code and its data in different directories. Even if the data is a subdirectory of the application, that's good enough. Once you do that, you can bind-mount only the data directory and leave the application code from the image intact.
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    volumes:
      - ./data:/app/data # not /app
There's no particular reason to put a VOLUME declaration in your Dockerfile, but you should declare the CMD your image should run there.
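If you still want the copy-on-first-start behaviour described in the question (for example to seed default config files into the mounted data directory), a minimal entrypoint.sh sketch could look like the following. The paths /app/default-data and /app/data are made up for illustration, and the script only seeds the mount when it is empty, so config files the user changed survive later runs:
#!/bin/sh
set -e

# Seed the bind-mounted data directory only if it is still empty,
# so user-modified config files are not overwritten on later runs.
if [ -z "$(ls -A /app/data 2>/dev/null)" ]; then
    cp -R /app/default-data/. /app/data/
fi

# Hand control over to whatever command the container was started with.
exec "$@"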
I'm trying to use a Django server run/debug configuration in PyCharm with a docker compose interpreter and the 'backend' service. Everything works fine, however when I restart the server, only one container ('backend') is restarted:
xxxxx_redis is up-to-date
xxxxx_frontend_1 is up-to-date
xxxxx_postgresql is up-to-date
xxxxx_celery_1 is up-to-date
Starting xxxxx_backend_1 ...
How can I make some linked services (e.g. 'celery') restart as well via PyCharm? The definition of 'backend' looks like this:
backend:
  build:
    # build args
  command: python manage.py runserver 0.0.0.0:8000 --settings=<settings.module>
  user: root
  volumes:
    # volumes definition
  links:
    - postgresql
    - redis
    - frontend
    - celery
Simply adding the name of the service to the end of the default up command in Command and options did the trick for me:
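In plain docker-compose terms, the effect is the same as naming the extra service explicitly when bringing the stack up, for example:
# restart/update both the backend and the linked celery worker
docker-compose up backend celery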
Now both backend and celery are restarted when I run the configuration.
I'm trying to find out how to set up my current docker-compose YAML file to run my dev environment. I'm new to Docker, but I was given a project that uses it.
version: '3'
services:
  database:
    image: someinfo:9.5
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - db-data:/var/lib/postgresql/data
  backend:
    build: .
    command: bash /somepath/server/django_devserver.sh
    volumes:
      - .:/volumeinfo
    links:
      - database
    ports:
      - "8000:8000"
    environment:
      DJANGO_SETTINGS_MODULE: projectname.settings.production
      SCHEMA: https
      DB_HOST: database
      PYTHONUNBUFFERED: 1
volumes:
  db-data:
Currently this runs the Django production settings in my dev environment. I want to keep that, but I also want to tell Docker to use my dev settings when running the dev server. How can I do that? Would I create a new container called dev-backend with the dev vars?
Then would I run docker-compose up dev-backend or something like that? Forgive my ignorance; today is my first day with Docker.
The easiest way is to create a separate compose file for your development environment. A good start would be to copy this file and change the appropriate settings (such as DJANGO_SETTINGS_MODULE).
By default, docker-compose searches for a file called docker-compose.yml and uses it to bring up the containers; but you can pass in a custom file name with -f.
[~]$ docker-compose -f dev.yml up
dev.yml is the name of your development settings file. It can be called anything, as long as it's proper YAML.
It would be good to bookmark the compose file reference from the documentation, as there is a very comprehensive list of directives and options you can add here.
I suggest that you try out the officially encouraged approach of Docker Compose configuration overriding:
# your_config.dev.yml
version: '3'
services:
  database:
    environment:
      - POSTGRES_USER=dev_user
      - POSTGRES_PASSWORD=dev_pass
  backend:
    environment:
      DJANGO_SETTINGS_MODULE: projectname.settings.development
      # ...
And this is how you override your production environment configuration with the one set for development:
docker-compose -f your_config.yml -f your_config.dev.yml (build|up|...)
N.B. This is assuming your_config.yml is the one presented in the question.
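As a small convenience (not required for the approach above), docker-compose also reads the COMPOSE_FILE environment variable, so you can set the file list once per shell instead of repeating both -f flags:
# colon-separated on Linux/macOS (use ';' on Windows)
export COMPOSE_FILE=your_config.yml:your_config.dev.yml
docker-compose up --build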
I have looked through the questions on this site, but I have not been able to fix this problem.
I created and ran an image of my django app, but when I try to view the app from the browser, the page does not load (can't establish a connection to the server)
I am using Docker Toolbox on OS X El Capitan, and the MacBook is from 2009.
The container IP is: 192.168.99.100
The django project root is called "Web app" and is the directory containing manage.py. My Dockerfile and my requirements.txt files are in this directory.
My dockerfile is:
FROM python:3.5
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My requirements.txt has django and mysqlclient
My Django app uses MySQL, and I tried to view the dockerized Django app in the browser both with and without linking it to the standard mysql image. In both cases, I only see the following error:
problem loading page couldn't establish connection to server
When I did try linking the django container to the mysql container I used:
docker run --link mysqlapp:mysql -d app
Where mysqlapp is my mysql image and 'app' is my django image.
In my django settings.py, the allowed hosts are:
ALLOWED_HOSTS = ['localhost', '127.0.0.1', '0.0.0.0', '192.168.99.100']
Again, the image is successfully created when I used docker build, and it is successfully run as a container. Why is the page not loading in the browser?
I suggest using a yml file and Docker Compose. Below is a template to get you started:
[Dockerfile]
FROM python:2.7
RUN pip install Django
RUN mkdir /code
WORKDIR /code
COPY code/ /code/
where your files are located in the code directory.
[docker-compose.yml]
version: '2'
services:
  db:
    image: mysql
  web0:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
There might also be a problem with the working directory path defined in your Dockerfile. Hope the above helps.
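Assuming the two files above are saved as Dockerfile and docker-compose.yml in the project root, bringing the stack up is just:
# build the web0 image and start both services
docker-compose up --build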
The solution provided by salehinejad seems good enough, although I have not tested it personally. But if you do not want to use a yml file and want to go your own way, then you should publish the port by adding
-p 0:8000
to your run command.
So your command should look like this:
docker run -p 0:8000 --link mysqlapp:mysql -d app
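You can then check which host port ended up mapped to the container's port 8000 with docker port (the container name below is just a placeholder):
# show the host port mapped to container port 8000
docker port <container-name> 8000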
I suspect you have not told Docker to talk to your VM, and that your containers are running on your host machine (if you can access the app at localhost, this is the issue).
Please see this post for resolution:
Connect to docker container using IP
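With Docker Toolbox the containers run inside a VirtualBox VM, so the browser has to use the VM's IP rather than localhost; you can confirm that 192.168.99.100 really is the machine's address with (assuming the default machine name):
docker-machine ip default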
I'm a newbie with docker-compose, and I have a Docker setup with my Django instance and a MySQL database. I would like to create a self-configuring container which runs a command only on the first docker run. In this command I would like to do the following tasks:
make initial database migrations
create the admin superuser
import a mysql backup into the database
After this, the system should continue launching the Django test webserver.
Is there any way to tell docker-compose to run a command just on its first run, or is there any alternative in Django to check whether the system is already configured and up to date?
In order to clarify, here are my Dockerfile and docker-compose.yml:
FROM python:3.4
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
####################
version: '2'
services:
  db:
    image: "mysql:5.6"
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: xxxxxx
      MYSQL_DATABASE: xxxxxx
      MYSQL_USER: xxxxx
      MYSQL_PASSWORD: xxxxxxx
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Thanks.
Following the comments of @cricket_007, I have finally found a somewhat tricky solution to the problem. I have created an sh script for the database service and for my web service. Additionally, I have created two version files in my folder, web_local.version and web_server.version.
web_local.version has been added to my .gitignore because this file is used to store the current app version.
The web_start.sh script is a simple script that checks whether the folder contains a web_local.version file. If it does, the project has been configured in the past, and the script checks whether the current app version is up to date compared with the server version. If everything is up to date, it simply runs the webserver; otherwise it runs migrate to update the models and then starts the webserver.
Here is the web_start.sh script for reference:
#!/bin/bash
FILE="web_local.version"
if [ -f "$FILE" ];
then
    echo "File $FILE exist."
    if diff ./web_server.version ./web_local.version > /dev/null;
    then
        echo "model version up to date :)"
    else
        echo "model updated!!"
        python manage.py migrate
        cp ./web_server.version ./$FILE
    fi
else
    echo "File $FILE does not exist"
    sleep 10 #added because the first time db take a long time to init and the script doesn't wait until db is finished
    cp ./web_server.version ./$FILE
    python manage.py migrate
fi
python manage.py runserver 0.0.0.0:8000
I suppose there are more formal solutions, but this one works for my case because it allows our team to keep the same mock database and the same models synced through git, and we get a zero-configuration environment running with just one command.
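If you want to get rid of the fixed sleep 10, one possible refinement (just a sketch, not part of the solution above) is to poll the database port before migrating; the host name db and port 3306 match the docker-compose.yml in the question:
# wait until the db service accepts TCP connections before migrating
until python -c "import socket; socket.create_connection(('db', 3306), timeout=2)" > /dev/null 2>&1; do
    echo "waiting for db..."
    sleep 2
done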