While working with Docker, where I dockerized Django with PostgreSQL, I ran into this problem: after I change a model and migrate it, opening the page gives an error saying there is no such relation in the database. After some research, I found the problem may come from creating a new migration every time and deleting the old one.
How can I fix this problem?
Below you can see my configuration.
docker-compose-prod.yml
services:
app:
volumes:
- static_data:/app/staticfiles
- media_data:/app/mediafiles
env_file:
- django.env
- words_az.env
- words_en.env
build:
context: .
ports:
- "8000:8000"
entrypoint: /app/script/entrypoint.sh
command: sh -c "python manage.py collectstatic --no-input &&
gunicorn --workers=3 --bind 0.0.0.0:8000 django.wsgi:application"
depends_on:
- db
nginx:
build: ./nginx
volumes:
- static_data:/app/staticfiles
- media_data:/app/mediafiles
ports:
- "80:80"
- "443:443"
depends_on:
- app
- flower
db:
image: postgres:14.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
env_file:
- db.env
ports:
- "5432:5432"
redis:
image: redis:alpine
ports:
- "6379:6379"
worker:
build:
context: .
command: celery -A django worker -l info
env_file:
- django.env
depends_on:
- db
- redis
- app
flower:
build: ./
command: celery -A django flower --basic_auth=$user:$password --address=0.0.0.0 --port=5555 --url-prefix=flower
env_file:
- django.env
ports:
- "5555:5555"
depends_on:
- redis
- worker
volumes:
postgres_data:
static_data:
media_data:
Dockerfile
FROM python:3.9-alpine
ENV PATH="/script:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc g++ libc-dev linux-headers \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql \
&& apk add postgresql-dev \
&& pip install psycopg2 \
&& apk add jpeg-dev zlib-dev libjpeg \
&& pip install Pillow \
&& apk del build-deps
RUN pip install --upgrade pip
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir /app
COPY /src /app
RUN mkdir /app/staticfiles
COPY /script /app/script
RUN chmod +x /app/script/*
WORKDIR /app
COPY django.env /app
RUN adduser -D user
RUN chown -R user:user /app
RUN chown -R user:user /var
RUN chmod -R 755 /var/
RUN chmod +x script/entrypoint.sh
USER user
CMD ["/script/entrypoint.sh"]
Related
I'm trying to create multiple containers for my Python/Django application called controller, and I would like the containers to run from a single image, not two. The problem is that my docker-compose.yml builds two services from source, which generates two separate images as a result. The application is composed of 5 services: a Django project, Celery (worker, beat, flower) and Redis.
How can I tell docker-compose to build the django and redis services from source and create all services from the same image?
I've tried replacing image: controller-redis with image: controller, and it creates a single image with all services, but most of them fail to start because files aren't found:
Log output:
$ docker-compose logs -f
controller-celery_beat-1 | /usr/local/bin/docker-entrypoint.sh: 24: exec: /start-celerybeat: not found
controller-django-1 | /usr/local/bin/docker-entrypoint.sh: 24: exec: /start: not found
controller-flower-1 | /usr/local/bin/docker-entrypoint.sh: 24: exec: /start-flower: not found
[...]
controller-celery_worker-1 | /usr/local/bin/docker-entrypoint.sh: 24: exec: /start-celeryworker: not found
Docker-compose ps
$ docker-compose ps
NAME COMMAND SERVICE STATUS PORTS
controller-celery_beat-1 "docker-entrypoint.s…" celery_beat exited (127)
controller-celery_worker-1 "docker-entrypoint.s…" celery_worker exited (127)
controller-django-1 "docker-entrypoint.s…" django exited (127)
controller-flower-1 "docker-entrypoint.s…" flower exited (127)
controller-redis-1 "docker-entrypoint.s…" redis running 6378-6379/tcp
docker-compose.yml
version: '3.8'
services:
django:
build:
context: .
dockerfile: ./compose/local/django/Dockerfile
image: controller
command: /start
volumes:
- .:/app
ports:
- "8001:8001"
env_file:
- controller/.env
depends_on:
- redis
networks:
- mynetwork
redis:
build:
context: .
dockerfile: ./compose/local/redis/Dockerfile
image: controller-redis # <------------------ modification was done here
expose:
- "6378"
networks:
- mynetwork
celery_worker:
image: controller
command: /start-celeryworker
volumes:
- .:/app:/controller
env_file:
- controller/.env
depends_on:
- redis
- controller
networks:
- mynetwork
celery_beat:
image: controller
command: /start-celerybeat
volumes:
- .:/app:/controller
env_file:
- controller/.env
depends_on:
- redis
- controller
networks:
- mynetwork
flower:
image: controller
command: /start-flower
volumes:
- .:/app:/controller
env_file:
- controller/.env
depends_on:
- redis
- controller
networks:
- mynetwork
networks:
mynetwork:
name: mynetwork
compose/local/django/Dockerfile
FROM python:3.10
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update \
&& apt-get install -y build-essential \
&& apt-get install -y libpq-dev \
&& apt-get install -y gettext \
&& apt-get install -y git \
&& apt-get install -y openssh-client \
&& apt-get install -y libcurl4-openssl-dev libssl-dev \
&& apt-get install -y nano \
&& rm -rf /var/lib/apt/lists/*
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./compose/local/django/entrypoint /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN chmod +x /start
COPY ./compose/local/django/celery/worker/start /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/django/celery/beat/start /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/django/celery/flower/start /start-flower
RUN chmod +x /start-flower
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
compose/local/redis/Dockerfile
FROM redis
RUN apt-get update \
&& apt-get install -y wget \
&& wget -O redis.conf 'http://download.redis.io/redis-stable/redis.conf' \
&& mkdir /usr/local/etc/redis \
&& cp redis.conf /usr/local/etc/redis/redis.conf
RUN sed -i '/protected-mode yes/c\protected-mode no' /usr/local/etc/redis/redis.conf \
&& sed -i '/bind 127.0.0.1 -::1/c\bind * -::*' /usr/local/etc/redis/redis.conf \
&& sed -i '/port 6379/c\port 6378' /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
WORKDIR /app
compose/local/django/start
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
python manage.py runserver 0.0.0.0:8001
compose/local/django/celery/beat/start
#!/bin/bash
set -o errexit
set -o nounset
rm -f './celerybeat.pid'
# watch only .py files
watchfiles \
--filter python \
'celery -A controller beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler'
compose/local/django/celery/worker/start
#!/bin/bash
set -o errexit
set -o nounset
# watch only .py files
watchfiles \
--filter python \
'celery -A controller worker --loglevel=info -Q controller_queue_1,controller_queue_2,default'
compose/local/django/celery/flower/start
#!/bin/bash
set -o errexit
set -o nounset
worker_ready() {
celery -A controller inspect ping
}
until worker_ready; do
>&2 echo 'Celery workers not available'
sleep 1
done
>&2 echo 'Celery workers are available'
celery -A controller \
--broker="${CELERY_BROKER}" \
flower
Project files
docker-compose.yml
controller/
compose/
local/
django/
Dockerfile
entrypoint
start
celery/
beat/
start
flower/
start
worker/
start
redis/
Dockerfile
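For reference, the exit-127 failures above are consistent with two services building different Dockerfiles but tagging them with the same image name: whichever build finishes last overwrites the controller tag, so containers can end up starting from the Redis image, which has none of the /start* scripts. A sketch of one way to arrange it, keeping Redis as its own image and reusing the single Django-built image for the Celery services (note that the depends_on entries would also need to name an existing service such as django, not controller):

services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: controller            # built once, from the Django Dockerfile
    command: /start
  redis:
    build:
      context: .
      dockerfile: ./compose/local/redis/Dockerfile
    image: controller-redis     # distinct tag, so it can't overwrite controller
  celery_worker:
    image: controller           # reuses the image built by the django service
    command: /start-celeryworker
    depends_on:
      - django
      - redis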
I'm getting an issue with my entrypoint.prod.sh file: Docker says it doesn't exist, even though I've echoed an ls command and it shows that the file is present in the right location with the right permissions. I have tried many solutions, but none are working. Any suggestion/help would be much appreciated. Let me know if you need any extra information from me.
This is my main docker-compose.staging.yml file:
version: '3'
services:
django:
build:
context: ./
dockerfile: docker-compose/django/Dockerfile.prod
expose:
- 8000
volumes:
- ./backend:/app
- static_volume:/app/django/staticfiles
environment:
CHOKIDAR_USEPOLLING: "true"
depends_on:
- postgresql
stdin_open: true
tty: true
env_file:
- ./.env.staging
postgresql:
image: postgres:13.1
environment:
- POSTGRES_USER=sparrowteams
- POSTGRES_PASSWORD=sparrowteams
- POSTGRES_DB=sparrowteams
ports:
- 5432:5432
volumes:
- .:/data
nginx-proxy:
container_name: nginx-proxy
build: nginx
restart: always
ports:
- 443:443
- 80:80
volumes:
- static_volume:/app/django/staticfiles
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
- /var/run/docker.sock:/tmp/docker.sock:ro
depends_on:
- django
nginx-proxy-letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
env_file:
- .env.staging.proxy-companion
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
depends_on:
- nginx-proxy
volumes:
static_volume:
certs:
html:
vhost:
Then I have my Dockerfile.prod:
###########
# BUILDER #
###########
# pull official base image
FROM python:3.9.1-buster as builder
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update && apt-get -y install libpq-dev gcc && pip install psycopg2 && apt-get -y install nginx
# lint
RUN pip install --upgrade pip
COPY ./backend .
# install dependencies
COPY backend/requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.1-buster
# create directory for the app user
RUN mkdir -p /app
# create the appropriate directories
ENV HOME=/app
ENV APP_HOME=/app/django
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev
COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY docker-compose/django/entrypoint.prod.sh $APP_HOME/entrypoint.prod.sh
RUN chmod +x $APP_HOME/entrypoint.prod.sh
# copy project
COPY ./backend $APP_HOME
RUN echo $(ls -la)
RUN sed -i 's/\r$//' $APP_HOME/entrypoint.prod.sh && \
chmod +x $APP_HOME/entrypoint.prod.sh
ENTRYPOINT ["/bin/bash", "/app/django/entrypoint.prod.sh"]
And then finally I have my entrypoint.prod.sh file (the one Docker reports as not existing):
#!/bin/bash
set -e
gunicorn SparrowTeams.wsgi:application --bind 0.0.0.0:8000
My nginx/vhost.d/default file:
location /staticfiles/ {
alias /app/django/staticfiles/;
add_header Access-Control-Allow-Origin *;
}
nginx/custom.conf:
client_max_body_size 10M;
nginx/dockerfile:
FROM jwilder/nginx-proxy
COPY vhost.d/default /etc/nginx/vhost.d/default
COPY custom.conf /etc/nginx/conf.d/custom.conf
My project structure looks something like this:
- SparrowTeams (Main folder)
- backend
- SparrowTeams (Django project folder)
- docker-compose
- django
- Dockerfile.prod
- entrypoint.prod.sh
- nginx
- vhost.d
- default
- custom.conf
- dockerfile
- .env.staging
- docker-compose.staging.yml (Docker compose file that I'm running)
Your issue is that you have a volume that you mount to /app in your docker-compose file. It overrides the /app directory in your container, and that's why it can't find the script.
django:
build:
context: ./
dockerfile: docker-compose/django/Dockerfile.prod
expose:
- 8000
volumes:
- ./backend:/app <==== This volume
- static_volume:/app/django/staticfiles
You can either change the directory you mount ./backend to (that's what I'd do), or place your app in a different directory in your final image. The problem is caused by both of them being called /app.
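A minimal sketch of the first option, using /src as a hypothetical new mount point (any target that doesn't shadow /app works):

django:
  build:
    context: ./
    dockerfile: docker-compose/django/Dockerfile.prod
  expose:
    - 8000
  volumes:
    - ./backend:/src                       # hypothetical path; no longer hides /app/django/entrypoint.prod.sh
    - static_volume:/app/django/staticfiles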
I'm running Django and Postgres with Docker. I just tried to add Celery to the project and I can't make it run.
Dockerfile:
FROM python:3.8.5-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
&& apk add --no-cache openssl-dev libffi-dev
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app/
docker-compose.yml:
version: '3'
volumes:
local_postgres_data: {}
services:
postgres:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
volumes:
- local_postgres_data:/var/lib/postgresql/data
env_file:
- ./.envs/.postgres
django: &django
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/app/
ports:
- "8000:8000"
depends_on:
- postgres
rabbitmq:
image: rabbitmq:3.8.6
celeryworker:
<<: *django
image: celeryworker
restart: always
depends_on:
- rabbitmq
- postgres
ports: []
command: celery -A pyrty worker -l INFO
celerybeat:
<<: *django
image: celerybeat
restart: always
depends_on:
- rabbitmq
- postgres
ports: []
command: celery -A pyrty beat -l INFO
Errors:
celerybeat_1 | [2020-08-16 17:12:54,206: WARNING/MainProcess] raise VerificationError('%s: %s' % (e.__class__.__name__, e))
celerybeat_1 | [2020-08-16 17:12:54,206: WARNING/MainProcess] cffi
celerybeat_1 | [2020-08-16 17:12:54,206: WARNING/MainProcess] .
celerybeat_1 | [2020-08-16 17:12:54,206: WARNING/MainProcess] VerificationError
celerybeat_1 | [2020-08-16 17:12:54,206: WARNING/MainProcess] :
celerybeat_1 | [2020-08-16 17:12:54,206: WARNING/MainProcess] CompileError: command 'gcc' failed with exit status 1
requirements.txt:
Django==3.1
psycopg2==2.8.3
djangorestframework==3.11.0
Celery==4.4.7
rabbitmq==0.2.0
Pillow==7.1.2
django-extensions==2.2.9
I ran docker-compose build and everything seemed to be OK, but when I ran docker-compose run I got the output above.
Both the Celery worker and beat throw the same error. Let me know if the entire trace is needed.
I want to dockerize my Django app.
I created my Dockerfile:
FROM python:3.6-alpine
RUN apk add --no-cache linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /DEV
WORKDIR /DEV
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . .
At this point I created my docker-compose.yml:
version: '3'
networks:
mynetwork:
driver: bridge
services:
db:
image: postgres
restart: always
ports:
- "5432:5432"
networks:
- mynetwork
environment:
POSTGRES_USER: myuser
POSTGRES_PASSWORD: mypass
POSTGRES_DB: mydb
volumes:
- ./data:/var/lib/postgresql/data
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
networks:
- mynetwork
volumes:
- .:/DEV
ports:
- "8000:8000"
depends_on:
- db
Then I created a .dockerignore file:
# Ignore
.DS_Store
.idea
.venv2
__pycache__
!manage.py
*.py[cod]
*$py.class
*.so
.Python
*.log
docker-compose.yml
Dockerfile
geckodriver.log
golog.py
golog.pyc
log.html
media
out
output.xml
report.html
startup.sh
templates
testlibs
.dockerignore
At this point I run:
docker-compose build --no-cache
The image builds correctly, but when I run:
docker-compose up
the system returns this error:
web_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
core_web_1 exited with code 2
Can someone help me with this issue? Many thanks in advance.
Try making your Dockerfile more explicit with the locations and then change your docker-compose as well:
FROM python:3.6-alpine
RUN apk add --no-cache linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /DEV
WORKDIR /DEV
COPY ./requirements.txt /DEV/
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . /DEV/
web:
build: .
command: python /DEV/manage.py runserver 0.0.0.0:8000
networks:
- mynetwork
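If it still fails, a quick way to check what actually ends up in /DEV at runtime (a diagnostic, not part of the fix) is to override the command for a one-off container:

docker-compose run --rm web ls -la /DEV

Because of the .:/DEV bind mount, this lists the host project directory, so manage.py must sit next to docker-compose.yml on the host for the runserver command to find it.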
I am using docker-compose to run 3 containers:
Django + Gunicorn, Nginx, and PostgreSQL.
Every time I change my Python code, I run docker-compose restart web, but it takes a long time to restart.
I tried to restart gunicorn with
`docker-compose exec web ps aux | grep gunicorn | awk '{ print $2 }' | xargs kill -HUP`
but it didn't work.
How can I reload .py code in a shorter time?
I know that gunicorn can be set up to hot-reload Python code. Can I trigger that manually with a command?
My docker-compose.yml:
version: '3'
services:
db:
build: ./db/
volumes:
- dbdata:/var/lib/postgresql/data/
environment:
- POSTGRES_DB
- POSTGRES_USER
- POSTGRES_PASSWORD
web:
build: .
command: >
sh -c "gunicorn abc.wsgi -b 0.0.0.0:8000"
# sh -c "python manage.py collectstatic --noinput &&
# python manage.py loaddata app/fixtures/masterData.json &&
# gunicorn abc.wsgi -b 0.0.0.0:8000"
volumes:
- .:/var/www/django
- ./static:/static/
expose:
- "8000"
environment:
- USE_S3
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_STORAGE_BUCKET_NAME
- POSTGRES_DB
- POSTGRES_USER
- POSTGRES_PASSWORD
- POSTGRES_HOST
- SENDGRID_API_KEY
- SECRET_KEY
depends_on:
- db
nginx:
restart: always
build: ./nginx/
volumes:
- ./static:/static/
ports:
- "8000:80"
links:
- web
backup:
image: prodrigestivill/postgres-backup-local:11-alpine
restart: always
volumes:
- /var/opt/pgbackups:/backups
links:
- db
depends_on:
- db
environment:
- POSTGRES_HOST
- POSTGRES_DB
- POSTGRES_USER
- POSTGRES_PASSWORD
- SCHEDULE
- BACKUP_KEEP_DAYS
- BACKUP_KEEP_WEEKS
- BACKUP_KEEP_MONTHS
- HEALTHCHECK_PORT
volumes:
dbdata:
Dockerfile - web:
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
ENV WKHTML2PDF_VERSION 0.12.4
# 0.12.5 wget not work
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN mkdir /var/www
RUN mkdir /var/www/django
WORKDIR /var/www/django
ADD requirements.txt /var/www/django/
RUN apt-get update && apt-get install -y \
libpq-dev \
python-dev \
gcc \
openssl \
build-essential \
xorg \
libssl1.0-dev \
wget
RUN apt-get install -y sudo
RUN pip install --upgrade pip
RUN pip install -r requirements.txt && pip3 install requests && pip3 install pdfkit
# & pip3 install sendgrid-django
ADD . /var/www/django/
WORKDIR /var/www
RUN wget "https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/${WKHTML2PDF_VERSION}/wkhtmltox-${WKHTML2PDF_VERSION}_linux-generic-amd64.tar.xz"
RUN tar -xJf "wkhtmltox-${WKHTML2PDF_VERSION}_linux-generic-amd64.tar.xz"
WORKDIR wkhtmltox
RUN sudo chown root:root bin/wkhtmltopdf
RUN sudo cp -r * /usr/
WORKDIR /var/www/django
Dockerfile - nginx:
FROM nginx
# Copy configuration files to the container
COPY default.conf /etc/nginx/conf.d/default.conf
After a while, I found that this line was the problem:
gunicorn abc.wsgi -b 0.0.0.0:8000
Run this way, gunicorn starts as a subprocess of the shell, and when I sent a HUP signal, only the shell (the container's main process) received it. Gunicorn itself was never signaled, so the code could not be reloaded.
The fix is to add "exec" before the command, so gunicorn runs as the main process. Then I can use "docker-compose kill -s HUP web" to gracefully restart gunicorn, and my code is reloaded inside the container.
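In compose terms, the working setup sketched from the description above looks like this:

web:
  build: .
  command: >
    sh -c "exec gunicorn abc.wsgi -b 0.0.0.0:8000"

and the code can then be reloaded without recreating the container:

docker-compose kill -s HUP web

For development, gunicorn also accepts a --reload flag that watches the source files and restarts workers automatically, which avoids the manual signal entirely.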