I have an app developed in Django (2.2.7) with Python (3.8.0), Docker (19.03.5) and docker-compose (1.25.2), running on Windows 10 Pro. I want to Dockerize it, switching the SQLite3 database for a MySQL database. I've already written this Dockerfile:
FROM python:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install --upgrade pip && pip install -r requirements.txt
RUN pip install mysqlclient
COPY . /code/
And this docker-compose.yml file:
version: '3'
services:
  db:
    image: mysql:5.7
    ports:
      - '3306:3306'
    environment:
      MYSQL_DATABASE: 'my-app-db'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    volumes:
      - ./setup.sql:/docker-entrypoint-initdb.d/setup.sql
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db
I have also changed the default database configuration in settings.py to this:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'my-app-db',
        'USER': 'root',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': 3306,
    }
}
After all of this, docker-compose works and the app starts, but the problem is that the tables in the database are not created. I've tried these: How do I add a table in MySQL using docker-compose, Seeding a MySQL DB for a Dockerized Django App, and Seeding a MySQL DB for a Dockerized Django App, but I couldn't fix it yet.
How can I create the required tables in the MySQL db container while running docker-compose? Must I add every single table by hand, or is there a way to do it from the Django app automatically?
Thanks
Hi, I think this answer will help you.
1. Reset all your migrations:
find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
find . -path "*/migrations/*.pyc" -delete
2. Review and apply your migrations again:
python manage.py showmigrations
python manage.py makemigrations
python manage.py migrate
You should not reset your migrations unless you want to wipe all of the data completely and start over; your migrations should exist beforehand. So, if you don't mind losing the old migrations, you can delete them, run python manage.py makemigrations, and then proceed to the following steps:
So, given that your application starts at the moment, we need to update your docker-compose file so that it uses an entrypoint.sh. An ENTRYPOINT allows you to configure a container that will run as an executable.
First things first, create your entrypoint.sh file at the same level as docker-compose.yaml.
Next, don't forget to run chmod +x entrypoint.sh so the entrypoint can be executed.
entrypoint.sh file:
#!/bin/bash
set -e
echo "${0}: running migrations."
python manage.py migrate --noinput
echo "${0}: collecting statics."
python manage.py collectstatic --noinput
python manage.py runserver 0.0.0.0:8000
Afterwards update your docker-compose.yaml file. Change your command line to:
command:
  - /bin/sh
  - '-c'
  - '/code/entrypoint.sh'
Additionally, you should store all of your pip requirements in a requirements.txt file, and in your Dockerfile you should run pip install -r requirements.txt.
You can dump your pip requirements with the command pip freeze > requirements.txt.
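One more caveat worth noting: depends_on only waits for the db container to start, not for MySQL to actually accept connections, so the migrate step in entrypoint.sh can still fail on the very first run. A hedged sketch of a compose healthcheck that addresses this (service names and the password are taken from the question; note that depends_on with a condition is not supported in every compose file format version, so adapt this to yours):

```yaml
services:
  db:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-ppassword"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    depends_on:
      db:
        condition: service_healthy
```

With this in place, the web container is only started once mysqladmin ping succeeds inside the db container.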
Related
I would like to run my Django project in a Docker container, with its database in another Docker container, on a Debian host.
When I run my docker container, I get some errors, like: Lost connection to MySQL server during query ([Errno 104] Connection reset by peer).
The command mysql > SET GLOBAL log_bin_trust_function_creators = 1 is very important, because the database's Django user creates triggers.
Moreover, I use a .env file (the same one used to create the DB image) to store the DB user and password. Its path is settings/.env.
My code:
docker-compose.yml:
version: '3.3'
services:
  db:
    image: mysql:8.0.29
    container_name: db_mysql_container
    environment:
      MYSQL_DATABASE: $DB_NAME
      MYSQL_USER: $DB_USER
      MYSQL_PASSWORD: $DB_PASSWORD
      MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
    command: ["--log_bin_trust_function_creators=1"]
    ports:
      - '3306:3306'
    expose:
      - '3306'
  api:
    build: .
    container_name: django_container
    command: bash -c "pip install -q -r requirements.txt &&
                      python manage.py migrate &&
                      python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - '8000:8000'
    depends_on:
      - db
Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.9.14-buster
ENV PYTHONUNBUFFERED=1
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
How do I start my Django project? Is it possible to start only the DB container?
What commands do I need to execute, and what changes do I need to make? I'm a novice with Docker, so if you help me, please explain your commands and actions!
You can find this project on my GitHub
Thanks!
To run a command in your dockerized Django project, you can simply use the command below:
docker-compose run projectname bash -c "python manage.py createsuperuser"
The command above is used to create a superuser.
I was reading an article here about setting up a project using Docker, Django and MySQL together.
These are the files in my project:
Dockerfile
FROM python:3.7
MAINTAINER masoud masoumi moghadam
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
ADD . /app
ADD requirements.txt /app/requirements.txt
RUN pip install --upgrade pip && pip install -r requirements.txt
Docker-compose
version: "3"
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=localhost
      - DB_NAME=contact_list
      - DB_USER=root
      - DB_PASS=secretpassword
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      - MYSQL_DATABASE=contact_list
      - MYSQL_USER=root
      - MYSQL_PASSWORD=secretpassword
requirements
Django>=2.0,<3.0
djangorestframework<3.10.0
mysqlclient==1.3.13
django-mysql==2.2.0
and also these settings in my settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'HOST': os.environ.get('DB_HOST'),
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('DB_USER'),
        'PASSWORD': os.environ.get('DB_PASS')
    }
}
When I run docker-compose build, I face no problem and everything is just fine. Then I run service mysql start. I can assure that the mysql service is running and works, because I have access to the datasets. The problem occurs when I do the migration using docker-compose run app sh -c "python manage.py makemigrations core"; I get this error:
django.db.utils.OperationalError: (2002,
"Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock' (2)")
When I change localhost to 127.0.0.1 I get this error:
django.db.utils.OperationalError:
(2002, "Can't connect to MySQL
server on '127.0.0.1' (115)")
I spent 20 hours looking for the best possible configuration for these technologies, but I still can't figure anything out. I also tried python-alpine, but I could not make it work for the project either, because I had the same mysql dependency problems when trying to run docker build. Does anybody have the same experience? I would appreciate it if you could help me here.
Your problem is that you use localhost as the MySQL host in Django's config.
But Docker containers have their own IPs; they are not localhost.
So first, in your docker-compose file, name your containers:
db:
  image: mysql:5.7
  container_name: db
  ...
Then, in your Django settings, set your db HOST to your db container name, "db":
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'HOST': 'db',
        'PORT': 3306,  # (?)
        ...
Also, you are missing the db 'PORT' in the Django settings; I think that for MySQL it is 3306 (I've added it above).
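To avoid this class of problem entirely, a common pattern is to resolve the connection details from the environment and fall back to the container name. A hedged sketch along the lines of the question's setup (the DB_* variable names match its compose file; DB_PORT is a hypothetical extra variable added here for illustration):

```python
import os

# Read connection details from the environment so the same settings.py
# works both inside docker-compose and on a local machine.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": os.environ.get("DB_NAME", "contact_list"),
        "USER": os.environ.get("DB_USER", "root"),
        "PASSWORD": os.environ.get("DB_PASS", ""),
        "HOST": os.environ.get("DB_HOST", "db"),  # service name, not localhost
        "PORT": int(os.environ.get("DB_PORT", "3306")),
    }
}
```

This way the compose file can set DB_HOST=db for the containerized run, while a developer without Docker can override it locally.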
According to this beautifully organized article, I found out that this configuration has no issues running on a server (I changed the mysql service to a postgres service):
Dockerfile
# pull official base image
FROM python:3.8.0-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY entrypoint.sh /usr/src/app/entrypoint.sh
# copy project
COPY . /usr/src/app/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Docker-compose.yml
version: '3.7'
services:
  web:
    build: app
    # container_name: web
    command: python manage.py runserver 0.0.0.0:8000
    restart: always
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - .env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    # container_name: postgres
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=123456
      - POSTGRES_DB=contact_list
volumes:
  postgres_data:
requirements
Django==2.2.6
gunicorn==19.9.0
djangorestframework>=3.9.2,<3.10.0
psycopg2-binary==2.8.3
settings
DATABASES = {
    'default': {
        'ENGINE': os.environ.get('SQL_ENGINE', 'django.db.backends.sqlite3'),
        'NAME': os.environ.get('SQL_DB_NAME', os.path.join(BASE_DIR, 'db.sqlite3')),
        'USER': os.environ.get('SQL_USER', 'user'),
        'PASSWORD': os.environ.get('SQL_PASSWORD', 'password'),
        'HOST': os.environ.get('SQL_HOST', 'localhost'),
        'PORT': os.environ.get('SQL_PORT', '5432'),
    }
}
I added a file named .env.dev for better access to environment variables:
DEBUG=1
SECRET_KEY=123456
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_HOST=db
SQL_DB_NAME=contact_list
SQL_USER=user
SQL_PASSWORD=123456
SQL_PORT=5432
DATABASE=postgres
First, try to log in to mysql with the provided credentials:
docker exec -it <CONTAINER_ID> /bin/sh
Inside the container, log in to mysql via the command line:
mysql -uroot -psecretpassword
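If the credentials check out but Django still can't connect, it's also worth verifying that the db host/port is reachable from the app container at all. A minimal, framework-independent sketch (the host name "db" and port 3306 are the values used in the compose files above):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or host name not resolvable
        return False

# Hypothetical usage from inside the app container:
#   can_connect("db", 3306)  -> True once MySQL is accepting connections
```

Running this from inside the web container quickly distinguishes "wrong host name" (name does not resolve) from "database not ready yet" (connection refused).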
I've seen in the docs and in many solutions that you can initialize a postgres container with a database using an init.sql, as mentioned in this question:
COPY init.sql /docker-entrypoint-initdb.d/10-init.sql
The problem is that my database data in .sql form is too large (8GB). This makes the dump process too long and the file too heavy. I was trying to do something similar with this approach, generating a .dump file instead (540MB).
I have 2 containers to run my Django project: a Dockerfile for my Python environment, and the postgres image. My Docker files are shown at the end of this question. So far I've managed to run the dump in my container with these steps:
I add my .dump file to my container in docker-compose
build my containers
Go inside my postgres container and execute a pg_restore command
to restore my database.
Then go to my python/django container to execute a migrate command.
And finally run server.
This works, but it's not automated. How can I automate this in my Docker files so I don't need to execute these commands manually?
Below are my files and the commands I needed to run to load my .dump file in the postgres container:
generate and add ./project-dump/latest.dump to my project
docker-compose up -d web db #build containers
docker-compose exec db bash # go in my postgres container
pg_restore --verbose --clean --no-acl --no-owner -h {pg_container_name} -U postgres -d database_dev
exit # exit postgres container
docker-compose exec web bash # go in my python/django project container
./manage.py migrate # migrate database
./manage.py runserver 0.0.0.0:3000 #run server
Dockerfile:
FROM python:3.7.3-stretch
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
COPY init.sql /docker-entrypoint-initdb.d/10-init.sql
RUN python manage.py migrate
EXPOSE 3000
docker-compose.yaml
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: database_dev
      POSTGRES_PASSWORD: secret
    ports:
      - "5432:5432"
    volumes:
      - ./project-dump/:/var/www/html
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:3000
    volumes:
      - .:/django-docker
    ports:
      - "3000:3000"
    depends_on:
      - db
init.sql
CREATE DATABASE database_dev;
GRANT ALL PRIVILEGES ON DATABASE database_dev TO postgres;
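One way to automate the manual steps above (a sketch, not tested against this exact setup): the official postgres image executes any *.sh files placed in /docker-entrypoint-initdb.d/ when the data directory is first initialized, and unlike *.sql files, a shell script can invoke pg_restore for a custom-format .dump. Assuming the dump is mounted at /var/www/html/latest.dump as in the compose file above, a script mounted into that directory could look like:

```bash
#!/bin/bash
# restore-dump.sh -- mount into /docker-entrypoint-initdb.d/ of the db service.
# The postgres entrypoint runs this only on first initialization of the
# data directory, so the restore happens exactly once.
set -e
pg_restore --verbose --clean --if-exists --no-acl --no-owner \
    -U "$POSTGRES_USER" -d database_dev /var/www/html/latest.dump
```

With the restore automated on the db side, the migrate step could then move out of the Dockerfile (where RUN python manage.py migrate executes at build time, before any database exists) and into the web service's command or an entrypoint script.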
I would like to create a PostgreSQL database in Docker with my Django project. I'm trying to do it using an init.sql file, but it doesn't work.
settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'aso',
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}
init.sql:
CREATE USER postgres;
CREATE DATABASE aso;
GRANT ALL PRIVILEGES ON DATABASE aso TO postgres;
My updated Dockerfile:
FROM python:3.6.1
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip3 install -r requirements.txt
ADD . /code/
FROM library/postgres
ADD init.sql /docker-entrypoint-initdb.d/
Unfortunately I get:
db_1 | LOG: database system was shut down at 2017-07-05 14:02:41 UTC
web_1 | /usr/local/bin/docker-entrypoint.sh: line 145: exec: python3: not found
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: autovacuum launcher started
db_1 | LOG: database system is ready to accept connections
dockerpri_web_1 exited with code 127
I was trying with this Dockerfile, too:
FROM python:3.6.1
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip3 install -r requirements.txt
ADD . /code/
FROM library/postgres
ENV POSTGRES_USER docker
ENV POSTGRES_PASSWORD docker
ENV POSTGRES_DB aso
My docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python3 PROJECT/backend/project/manage.py runserver 0.0.0.0:8001
    volumes:
      - .:/code
    ports:
      - "8001:8001"
    depends_on:
      - db
First, your Dockerfile example isn't valid; it should have just one FROM instruction. Remove the PostgreSQL stuff so it is just:
FROM python:3.6.1
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip3 install -r requirements.txt
ADD . /code/
Next, I am going to answer the question of how to create a PostgreSQL user and database since that is what your init.sql is doing.
According to the postgres image documentation at https://hub.docker.com/_/postgres/, there are POSTGRES_USER and POSTGRES_PASSWORD environment variables available to us. The POSTGRES_USER variable will create a user and a database with privileges granted to that user. An example amendment to your docker-compose.yml would be:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: aso
      POSTGRES_PASSWORD: somepass
  web:
    build: .
    command: python3 PROJECT/backend/project/manage.py runserver 0.0.0.0:8001
    volumes:
      - .:/code
    ports:
      - "8001:8001"
    depends_on:
      - db
On start, this will initialize the PostgreSQL database with a username of aso, a database of aso, and a password of somepass. Then, you can just amend your settings.py to use this username, database, and password.
I'll also add that if you want to run other arbitrary SQL or scripts during database startup, you can do so by adding sql or sh files to the postgres image at /docker-entrypoint-initdb.d/. Read more about this at https://github.com/docker-library/docs/tree/master/postgres#how-to-extend-this-image.
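For example, a hedged illustration of such an init script (the file name and the extra database are hypothetical); mounted into the container, it would be executed right after the POSTGRES_USER/POSTGRES_DB initialization:

```sql
-- 02-extra.sql, mounted at /docker-entrypoint-initdb.d/02-extra.sql
-- Runs only on first initialization of the data directory.
CREATE DATABASE reporting;
GRANT ALL PRIVILEGES ON DATABASE reporting TO aso;
```

Scripts in that directory run in lexicographic order, which is why a numeric prefix is a common convention.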
I'm a newbie with docker-compose, and I have a Docker setup with my Django instance and a MySQL database. I would like to create a self-configuring container which runs a command only on the first docker run. In this command I would like to do the following tasks:
make initial database migrations
create the admin superuser
import a mysql backup into the database
After this the system should continue launching the django test webserver.
Is there any way to tell docker-compose to run a command just on its first run, or is there any alternative in Django to check whether the system is already configured and up to date?
To clarify, here are my Dockerfile and docker-compose.yml:
FROM python:3.4
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
docker-compose.yml:
version: '2'
services:
  db:
    image: "mysql:5.6"
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: xxxxxx
      MYSQL_DATABASE: xxxxxx
      MYSQL_USER: xxxxx
      MYSQL_PASSWORD: xxxxxxx
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Thanks.
Following the comments of @cricket_007, I finally found a tricky solution to the problem. I created an sh script for the database service and for my web service. Additionally, I created two version files in my folder: web_local.version and web_server.version.
web_local.version has been added to my .gitignore, because this file is used to store the current app version.
web_start.sh is a simple script that checks whether the folder contains a web_local.version file. If it does, the project has been configured in the past, and the script checks whether the current app version is up to date compared with the server version. If everything is up to date, it simply runs the webserver; otherwise, it runs a migrate to update the models and then runs the webserver.
Here is the web_start.sh script for reference:
#!/bin/bash
FILE="web_local.version"
if [ -f "$FILE" ]; then
    echo "File $FILE exists."
    if diff ./web_server.version ./web_local.version > /dev/null; then
        echo "model version up to date :)"
    else
        echo "model updated!!"
        python manage.py migrate
        cp ./web_server.version ./$FILE
    fi
else
    echo "File $FILE does not exist"
    # added because the db takes a long time to init the first time,
    # and the script doesn't wait until the db is ready
    sleep 10
    cp ./web_server.version ./$FILE
    python manage.py migrate
fi
python manage.py runserver 0.0.0.0:8000
I suppose there are more formal solutions, but this solution works for my case: it allows our team to keep the same mock database and the same models synced through git, and we have a zero-configuration environment running with just one command.
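As a final note on the sleep 10 workaround above: a bounded retry loop is usually more robust than a fixed sleep, because it waits exactly as long as the database needs and fails loudly if it never comes up. A minimal, dependency-free sketch of the idea (the check_db callable mentioned in the comment is a placeholder for whatever connectivity test fits your stack):

```python
import time

def wait_for(check, attempts=30, delay=1.0):
    """Call `check` until it returns True; give up after `attempts` tries.

    Returns True as soon as `check` succeeds, False if it never does.
    """
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

# Hypothetical usage in place of `sleep 10`:
#   if not wait_for(check_db):
#       raise SystemExit("database never became ready")
```

The same pattern works from a shell entrypoint as an `until` loop around a client ping; the key point is bounding the retries so a misconfigured host fails fast instead of hanging forever.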