How can I reload Python code in dockerized Django?

I am using docker-compose to run 3 containers:
Django + Gunicorn, Nginx, and PostgreSQL.
Every time I change my Python .py code, I run docker-compose restart web, but it takes a long time to restart.
I tried to reload Gunicorn with
`docker-compose exec web ps aux | grep gunicorn | awk '{ print $2 }' | xargs kill -HUP`
but it didn't work.
How can I reload .py code in a shorter time?
I know that Gunicorn can be configured to hot-reload Python code. Can I trigger a reload manually with a command?
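For reference, Gunicorn's built-in auto-reload is a startup flag rather than a runtime command. A minimal sketch, reusing the gunicorn invocation from the compose file below (the --reload flag is intended for development only):
# development only: Gunicorn watches the source files and restarts its workers on changes
gunicorn abc.wsgi -b 0.0.0.0:8000 --reload
This pairs naturally with code that is bind-mounted into the container, as it is here.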
My docker-compose.yml:
version: '3'
services:
  db:
    build: ./db/
    volumes:
      - dbdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
  web:
    build: .
    command: >
      sh -c "gunicorn abc.wsgi -b 0.0.0.0:8000"
    # sh -c "python manage.py collectstatic --noinput &&
    #        python manage.py loaddata app/fixtures/masterData.json &&
    #        gunicorn abc.wsgi -b 0.0.0.0:8000"
    volumes:
      - .:/var/www/django
      - ./static:/static/
    expose:
      - "8000"
    environment:
      - USE_S3
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - AWS_STORAGE_BUCKET_NAME
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - SENDGRID_API_KEY
      - SECRET_KEY
    depends_on:
      - db
  nginx:
    restart: always
    build: ./nginx/
    volumes:
      - ./static:/static/
    ports:
      - "8000:80"
    links:
      - web
  backup:
    image: prodrigestivill/postgres-backup-local:11-alpine
    restart: always
    volumes:
      - /var/opt/pgbackups:/backups
    links:
      - db
    depends_on:
      - db
    environment:
      - POSTGRES_HOST
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - SCHEDULE
      - BACKUP_KEEP_DAYS
      - BACKUP_KEEP_WEEKS
      - BACKUP_KEEP_MONTHS
      - HEALTHCHECK_PORT
volumes:
  dbdata:
Dockerfile - web:
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
ENV WKHTML2PDF_VERSION 0.12.4
# 0.12.5: wget does not work
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN mkdir /var/www
RUN mkdir /var/www/django
WORKDIR /var/www/django
ADD requirements.txt /var/www/django/
RUN apt-get update && apt-get install -y \
    libpq-dev \
    python-dev \
    gcc \
    openssl \
    build-essential \
    xorg \
    libssl1.0-dev \
    wget
RUN apt-get install -y sudo
RUN pip install --upgrade pip
RUN pip install -r requirements.txt && pip3 install requests && pip3 install pdfkit
# && pip3 install sendgrid-django
ADD . /var/www/django/
WORKDIR /var/www
RUN wget "https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/${WKHTML2PDF_VERSION}/wkhtmltox-${WKHTML2PDF_VERSION}_linux-generic-amd64.tar.xz"
RUN tar -xJf "wkhtmltox-${WKHTML2PDF_VERSION}_linux-generic-amd64.tar.xz"
WORKDIR wkhtmltox
RUN sudo chown root:root bin/wkhtmltopdf
RUN sudo cp -r * /usr/
WORKDIR /var/www/django
Dockerfile - nginx:
FROM nginx
# Copy configuration files to the container
COPY default.conf /etc/nginx/conf.d/default.conf

After a while, I found that this line was the problem:
gunicorn abc.wsgi -b 0.0.0.0:8000
Wrapped in sh -c, this runs Gunicorn as a subprocess of the shell, and the shell is the container's main process. When I sent a HUP signal, only that main process could receive it, so Gunicorn was never signalled and the code was not reloaded.
What I did was add "exec" in front of the command, so that Gunicorn replaces the shell and runs as the main process. Now I can use "docker-compose kill -s HUP web" to gracefully restart Gunicorn, and my code is reloaded inside the container.
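Concretely, the two pieces of the fix look like this (a sketch, keeping the service name "web" and the bind-mounted code from the compose file above):
# compose command line: exec makes Gunicorn the container's main process instead of a child of sh
sh -c "exec gunicorn abc.wsgi -b 0.0.0.0:8000"
# graceful reload from the host; Gunicorn re-imports the mounted .py files
docker-compose kill -s HUP web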

Related

How to build multiple services from source with docker-compose and create a single image?

I'm trying to create multiple containers for my Python/Django application called controller, and I would like the containers to run from a single image, not two. The problem is that my docker-compose.yml builds two services from source, which generates two separate images as a result. The application is composed of 5 services: a Django project, Celery (worker, beat, flower), and Redis.
How can I tell docker-compose to build the django and redis services from source and to create all services from the same image?
I've tried replacing image: controller-redis with image: controller, and it does create a single image with all services, but most of them fail to start because files aren't found:
Logs output:
$ docker-compose logs -f
controller-celery_beat-1 | /usr/local/bin/docker-entrypoint.sh: 24: exec: /start-celerybeat: not found
controller-django-1 | /usr/local/bin/docker-entrypoint.sh: 24: exec: /start: not found
controller-flower-1 | /usr/local/bin/docker-entrypoint.sh: 24: exec: /start-flower: not found
[...]
controller-celery_worker-1 | /usr/local/bin/docker-entrypoint.sh: 24: exec: /start-celeryworker: not found
Docker-compose ps
$ docker-compose ps
NAME                         COMMAND                  SERVICE         STATUS         PORTS
controller-celery_beat-1     "docker-entrypoint.s…"   celery_beat     exited (127)
controller-celery_worker-1   "docker-entrypoint.s…"   celery_worker   exited (127)
controller-django-1          "docker-entrypoint.s…"   django          exited (127)
controller-flower-1          "docker-entrypoint.s…"   flower          exited (127)
controller-redis-1           "docker-entrypoint.s…"   redis           running        6378-6379/tcp
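A note on the likely mechanics (an editorial aside, not part of the original thread): when a service declares both build: and image:, Compose tags the image it builds with that name. With django and redis both building under image: controller, whichever build finishes last owns the controller tag, so the Celery services can end up starting from the Redis image, which contains none of the /start scripts. Two quick checks:
docker images controller          # which build currently owns the "controller" tag
docker run --rm controller ls /   # the /start* scripts are present only if the Django build won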
docker-compose.yml
version: '3.8'
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: controller
    command: /start
    volumes:
      - .:/app
    ports:
      - "8001:8001"
    env_file:
      - controller/.env
    depends_on:
      - redis
    networks:
      - mynetwork
  redis:
    build:
      context: .
      dockerfile: ./compose/local/redis/Dockerfile
    image: controller-redis # <------------------ modification was done here
    expose:
      - "6378"
    networks:
      - mynetwork
  celery_worker:
    image: controller
    command: /start-celeryworker
    volumes:
      - .:/app:/controller
    env_file:
      - controller/.env
    depends_on:
      - redis
      - controller
    networks:
      - mynetwork
  celery_beat:
    image: controller
    command: /start-celerybeat
    volumes:
      - .:/app:/controller
    env_file:
      - controller/.env
    depends_on:
      - redis
      - controller
    networks:
      - mynetwork
  flower:
    image: controller
    command: /start-flower
    volumes:
      - .:/app:/controller
    env_file:
      - controller/.env
    depends_on:
      - redis
      - controller
    networks:
      - mynetwork
networks:
  mynetwork:
    name: mynetwork
compose/local/django/Dockerfile
FROM python:3.10
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update \
    && apt-get install -y build-essential \
    && apt-get install -y libpq-dev \
    && apt-get install -y gettext \
    && apt-get install -y git \
    && apt-get install -y openssh-client \
    && apt-get install -y libcurl4-openssl-dev libssl-dev \
    && apt-get install -y nano \
    && rm -rf /var/lib/apt/lists/*
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./compose/local/django/entrypoint /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN chmod +x /start
COPY ./compose/local/django/celery/worker/start /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/django/celery/beat/start /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/django/celery/flower/start /start-flower
RUN chmod +x /start-flower
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
compose/local/redis/Dockerfile
FROM redis
RUN apt-get update \
    && apt-get install -y wget \
    && wget -O redis.conf 'http://download.redis.io/redis-stable/redis.conf' \
    && mkdir /usr/local/etc/redis \
    && cp redis.conf /usr/local/etc/redis/redis.conf
RUN sed -i '/protected-mode yes/c\protected-mode no' /usr/local/etc/redis/redis.conf \
    && sed -i '/bind 127.0.0.1 -::1/c\bind * -::*' /usr/local/etc/redis/redis.conf \
    && sed -i '/port 6379/c\port 6378' /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
WORKDIR /app
compose/local/django/start
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
python manage.py runserver 0.0.0.0:8001
compose/local/django/celery/beat/start
#!/bin/bash
set -o errexit
set -o nounset
rm -f './celerybeat.pid'
# watch only .py files
watchfiles \
    --filter python \
    'celery -A controller beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler'
compose/local/django/celery/worker/start
#!/bin/bash
set -o errexit
set -o nounset
# watch only .py files
watchfiles \
    --filter python \
    'celery -A controller worker --loglevel=info -Q controller_queue_1,controller_queue_2,default'
compose/local/django/celery/flower/start
#!/bin/bash
set -o errexit
set -o nounset
worker_ready() {
    celery -A controller inspect ping
}
until worker_ready; do
    >&2 echo 'Celery workers not available'
    sleep 1
done
>&2 echo 'Celery workers are available'
celery -A controller \
    --broker="${CELERY_BROKER}" \
    flower
Project files
docker-compose.yml
controller/
compose/
    local/
        django/
            Dockerfile
            entrypoint
            start
            celery/
                beat/
                    start
                flower/
                    start
                worker/
                    start
        redis/
            Dockerfile

How to run python manage.py migrate inside a Docker container that runs Django with apache2 [duplicate]

This question already has answers here:
How do you perform Django database migrations when using Docker-Compose?
(9 answers)
Closed 5 months ago.
I'm running a Django app inside a Docker container with apache2. I need to add the command python manage.py migrate to the Dockerfile or docker-compose, but I am unable to run it.
Dockerfile
FROM ubuntu
RUN apt-get update
# Avoid tzdata infinite waiting bug
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Africa/Cairo
RUN apt-get install -y apt-utils vim curl apache2 apache2-utils
RUN apt-get -y install python3 libapache2-mod-wsgi-py3
RUN ln /usr/bin/python3 /usr/bin/python
RUN apt-get -y install python3-pip
# Add -sf to avoid: ln: failed to create hard link '/usr/bin/pip': File exists
RUN ln -sf /usr/bin/pip3 /usr/bin/pip
RUN pip install --upgrade pip
RUN pip install django ptvsd
COPY www/demo_app/water_maps/requirements.txt requirements.txt
RUN pip install -r requirements.txt
ADD ./demo_site.conf /etc/apache2/sites-available/000-default.conf
EXPOSE 80
WORKDIR /var/www/html/demo_app
CMD ["apache2ctl", "-D", "FOREGROUND"]
CMD ["python", "manage.py", "migrate", "--no-input"]
docker-compose
version: "2"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=database_innvoentiq
- POSTGRES_USER=database_user_innvoentiq
- POSTGRES_PASSWORD=mypasswordhere
- PGDATA=/tmp
django-apache2:
build: .
container_name: water_maps
environment:
- POSTGRES_DB=database_innvoentiq
- POSTGRES_USER=database_user_innvoentiq
- POSTGRES_PASSWORD=mypasswordhere
- PGDATA=/tmp
ports:
- '80:80'
volumes:
- ./www/:/var/www/html
depends_on:
- db
What happens here is that the container exits after running the last CMD in the Dockerfile. (Only the final CMD in a Dockerfile takes effect, so apache2ctl is never started; once the migrate process finishes, the container stops.)
Do this:
django-apache2:
  build: .
  container_name: water_maps
  environment:
    - POSTGRES_DB=database_innvoentiq
    - POSTGRES_USER=database_user_innvoentiq
    - POSTGRES_PASSWORD=mypasswordhere
    - PGDATA=/tmp
  ports:
    - '80:80'
  volumes:
    - ./www/:/var/www/html
  command: >
    sh -c 'python manage.py migrate &&
           python manage.py runserver 0.0.0.0:8000'
  depends_on:
    - db
Or run a one-off command with docker-compose:
docker-compose run --rm projectname sh -c "python manage.py filename"
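Applied to this project, that pattern might look like this (a sketch; django-apache2 is the service name from the compose file above, and the Dockerfile's WORKDIR is already /var/www/html/demo_app):
docker-compose run --rm django-apache2 sh -c "python manage.py migrate --no-input"
This runs the migration in a throwaway container against the db service, while the long-running container keeps apache2ctl as its only CMD.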

Docker container keeps restarting in docker-compose, but not when run in isolation

I'm trying to run a Python program that uses MongoDB, and I want to deploy it on a server; that's why I wrote a docker-compose file. My problem is that when I run the Python project on its own with the docker build -t PROJECT_NAME . and docker run commands, everything works properly; however, when executing docker-compose up -d, the Python container restarts over and over again. What am I doing wrong?
I tried to check the logs, but nothing shows up.
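A few generic checks that may surface the crash reason (an editorial sketch; the service/container names are taken from the compose file below):
docker-compose logs --tail=100 app                                  # last output before the restart
docker inspect app --format '{{.State.ExitCode}} {{.State.Error}}'  # exit code of the most recent run
docker events --filter container=app                                # watch the restart events live
An exit code with no log output often means the process dies before it configures logging, e.g. on a failed import or an unreachable database.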
Here is the Dockerfile
FROM python:3.7
WORKDIR /app
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
ENV PROD=true
COPY . .
# Installing requirements
RUN pip install -r requirements.txt
RUN export PYTHONPATH=$PATHONPATH:`pwd`
CMD ["python3", "foo.py"]
And the docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      MONGODB_DATABASE: db
      MONGODB_USERNAME: appuser
      MONGODB_PASSWORD: mongopassword
      MONGODB_HOSTNAME: mongodb
    depends_on:
      - mongodb
    networks:
      - internal
  mongodb:
    image: mongo
    container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongodbuser
      MONGO_INITDB_ROOT_PASSWORD: mongodbrootpassword
      MONGO_INITDB_DATABASE: db
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db
    networks:
      - internal
networks:
  internal:
    driver: bridge
volumes:
  mongodbdata:
    driver: local

entrypoint.prod.sh file not found (Docker python buster image)

I'm getting an issue where Docker says my entrypoint.prod.sh file doesn't exist, even though I have echoed an ls command in the Dockerfile and it shows the file is present in the right location with the right permissions; Docker still isn't able to find it at runtime. I have tried many solutions but none are working. Any suggestion/help would be much appreciated. Let me know if you need any extra information from me.
This is my main docker-compose.staging.yml file: -
version: '3'
services:
  django:
    build:
      context: ./
      dockerfile: docker-compose/django/Dockerfile.prod
    expose:
      - 8000
    volumes:
      - ./backend:/app
      - static_volume:/app/django/staticfiles
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - postgresql
    stdin_open: true
    tty: true
    env_file:
      - ./.env.staging
  postgresql:
    image: postgres:13.1
    environment:
      - POSTGRES_USER=sparrowteams
      - POSTGRES_PASSWORD=sparrowteams
      - POSTGRES_DB=sparrowteams
    ports:
      - 5432:5432
    volumes:
      - .:/data
  nginx-proxy:
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/app/django/staticfiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - django
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.staging.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy
volumes:
  static_volume:
  certs:
  html:
  vhost:
Then I have my Dockerfile.prod: -
###########
# BUILDER #
###########
# pull official base image
FROM python:3.9.1-buster as builder
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update && apt-get -y install libpq-dev gcc && pip install psycopg2 && apt-get -y install nginx
# lint
RUN pip install --upgrade pip
COPY ./backend .
# install dependencies
COPY backend/requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.1-buster
# create directory for the app user
RUN mkdir -p /app
# create the appropriate directories
ENV HOME=/app
ENV APP_HOME=/app/django
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev
COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY docker-compose/django/entrypoint.prod.sh $APP_HOME/entrypoint.prod.sh
RUN chmod +x $APP_HOME/entrypoint.prod.sh
# copy project
COPY ./backend $APP_HOME
RUN echo $(ls -la)
RUN sed -i 's/\r$//' $APP_HOME/entrypoint.prod.sh && \
    chmod +x $APP_HOME/entrypoint.prod.sh
ENTRYPOINT ["/bin/bash", "/app/django/entrypoint.prod.sh"]
And then finally I have my entrypoint.prod.sh file (the one Docker reports as not existing): -
#!/bin/bash
set -e
gunicorn SparrowTeams.wsgi:application --bind 0.0.0.0:8000
My nginx/vhost.d/default file: -
location /staticfiles/ {
    alias /app/django/staticfiles/;
    add_header Access-Control-Allow-Origin *;
}
nginx/custom.conf: -
client_max_body_size 10M;
nginx/dockerfile: -
FROM jwilder/nginx-proxy
COPY vhost.d/default /etc/nginx/vhost.d/default
COPY custom.conf /etc/nginx/conf.d/custom.conf
My project structure looks something like this: -
- SparrowTeams (Main folder)
    - backend
        - SparrowTeams (Django project folder)
    - docker-compose
        - django
            - Dockerfile.prod
            - entrypoint.prod.sh
    - nginx
        - vhost.d
            - default
        - custom.conf
        - dockerfile
    - .env.staging
    - docker-compose.staging.yml (Docker compose file that I'm running)
Your issue is that you have a volume that you mount to /app in your docker-compose file. That overrides the /app directory in your container and that's why it can't find the script.
django:
  build:
    context: ./
    dockerfile: docker-compose/django/Dockerfile.prod
  expose:
    - 8000
  volumes:
    - ./backend:/app  # <==== This volume
    - static_volume:/app/django/staticfiles
You can either change the name of the directory you mount ./backend to (that's what I'd do), or you can place your app in another directory in your final image. The problem is caused by both of them being called /app.
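A quick way to see the collision (a sketch using the file and service names from this thread; the service's entrypoint is bypassed so a plain ls can run):
# with the ./backend bind mount applied, /app shows the host folder, not the image's files
docker-compose -f docker-compose.staging.yml run --rm --entrypoint /bin/ls django -la /app/django
If entrypoint.prod.sh is missing from that listing, the mount is hiding the copy that was baked in at build time.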

Run Python Console via docker-compose on Pycharm

I'm having some problems running PyCharm with a remote Python interpreter via docker-compose. Everything works great except the Python console; when I press the run button it just shows the following message:
"Error: Unable to locate container name for service "web" from
docker-compose output"
I really can't understand why it keeps showing that, since my docker-compose.yml provides a web service.
Any help?
EDIT:
docker-compose.yml
version: '2'
volumes:
  dados:
    driver: local
  media:
    driver: local
  static:
    driver: local
services:
  beat:
    build: Docker/beat
    depends_on:
      - web
      - worker
    restart: always
    volumes:
      - ./src:/app/src
  db:
    build: Docker/postgres
    ports:
      - 5433:5432
    restart: always
    volumes:
      - dados:/var/lib/postgresql/data
  jupyter:
    build: Docker/jupyter
    command: jupyter notebook
    depends_on:
      - web
    ports:
      - 8888:8888
    volumes:
      - ./src:/app/src
  python:
    build:
      context: Docker/python
      args:
        REQUIREMENTS_ENV: 'dev'
    image: helpdesk/python:3.6
  redis:
    image: redis:3.2.6
    ports:
      - 6379:6379
    restart: always
  web:
    build:
      context: .
      dockerfile: Docker/web/Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - python
      - db
    ports:
      - 8001:8000
    restart: always
    volumes:
      - ./src:/app/src
  worker:
    build: Docker/worker
    depends_on:
      - web
      - redis
    restart: always
    volumes:
      - ./src:/app/src
Dockerfile
FROM python:3.6
# Set requirements environment
ARG REQUIREMENTS_ENV
ENV REQUIREMENTS_ENV ${REQUIREMENTS_ENV:-prod}
# Set PYTHONUNBUFFERED so the output is displayed in the Docker log
ENV PYTHONUNBUFFERED=1
# Install apt-transport-https
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    apt-transport-https
# Configure yarn repo
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install APT dependencies
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    locales \
    openssl \
    yarn
# Set locale
RUN locale-gen pt_BR.UTF-8 && \
    localedef -i pt_BR -c -f UTF-8 -A /usr/share/locale/locale.alias pt_BR.UTF-8
ENV LANG pt_BR.UTF-8
ENV LANGUAGE pt_BR.UTF-8
ENV LC_ALL pt_BR.UTF-8
# Copy requirements files to the container
RUN mkdir -p /tmp/requirements
COPY requirements/requirements-common.txt \
    requirements/requirements-$REQUIREMENTS_ENV.txt \
    /tmp/requirements/
# Install requirements
RUN pip install \
    -i http://root:test@pypi.defensoria.to.gov.br:4040/root/pypi/+simple/ \
    --trusted-host pypi.defensoria.to.gov.br \
    -r /tmp/requirements/requirements-$REQUIREMENTS_ENV.txt
# Remove requirements temp folder
RUN rm -rf /tmp/requirements
This is the python image's Dockerfile; the web Dockerfile just declares FROM this image and copies the source folder into the container.
I think this is a dependency-chain problem: web depends on python, so when the python container comes up, the web container doesn't exist yet. That may cause the error.
Cheers
Installing the required libraries via the command line and running the Python interpreter from the PATH should suffice.
You can also refer to the JetBrains manual on how to configure interpreters for their IDEs.
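One generic sanity check worth adding (an editorial suggestion, not from the answers above): confirm that the compose file PyCharm points at actually lists the web service, and that its container comes up from the shell:
docker-compose config --services   # "web" should appear in the list
docker-compose up -d web
docker-compose ps                  # the web container should show as running
If these work from a terminal but PyCharm still fails, the mismatch is in the IDE's docker-compose interpreter configuration rather than in the compose file itself.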
