I had my Django web app running on Azure App Service in a single Docker container. Now I plan to add a second container to run the Celery service.
Before trying Celery and the Django web app together, I first tried Azure's docker-compose option with just the Django web app.
Following is my docker-compose configuration for Azure App Service
version: '3.3'
services:
  web:
    image: azureecr.azurecr.io/image_name:15102020155932
    command: gunicorn DjangoProj.wsgi:application --workers=4 --bind 0.0.0.0:8000 --log-level=DEBUG
    ports:
      - 8000:8000
However, the only thing that I see in my App Service logs is:
2020-10-16T07:02:31.653Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T13:26:20.047Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T14:51:07.482Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T16:40:49.109Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T16:43:05.980Z INFO - Stopping site MYSITE because it failed during startup.
I tried the combination of Celery and the Django app using docker-compose in my LOCAL environment and it works as expected.
Following is the docker-compose file that I am using to run it on local:
version: '3'
services:
  web:
    image: azureecr.azurecr.io/image_name:15102020155932
    build: .
    command: gunicorn DjangoProj.wsgi:application --workers=4 --bind 0.0.0.0:8000 --log-level=DEBUG
    ports:
      - 8000:8000
    env_file:
      - .env.file
  celery:
    image: azureecr.azurecr.io/image_name:15102020155932
    build: .
    command: celery -A DjangoProj worker -l DEBUG
    depends_on:
      - web
    restart: on-failure
    env_file:
      - .env.file
What am I missing?
I have checked multiple SO questions but they are all left without an answer.
I can provide more details if required.
P.S. there's an option to run both Django and Celery in the same container and call it a day, but I am looking for a cleaner, more scalable solution.
You have to change the port, because Azure does not support multi-container apps on port 8000.
Example Configuration-file.yaml:
version: '3.3'
services:
  api:
    image: containerdpt.azurecr.io/xxxxxxx
    command: python manage.py runserver 0.0.0.0:8080
    ports:
      - "8080:8080"
Is there any chance you can time the startup of your site? My first concern is that it's not starting up within 230 seconds, or that an external dependency such as the celery container is not ready within 230 seconds.
To see if this is the issue, can you try raising the startup time?
Set the WEBSITES_CONTAINER_START_TIME_LIMIT app setting to the value you want.
Default value: 230 seconds.
Max value: 1800 seconds.
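A hedged example of setting this with the Azure CLI (resource group and app name are placeholders):

az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_CONTAINER_START_TIME_LIMIT=1800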
I have a docker-compose file for a Django application.
Below is the structure of my docker-compose.yml
version: '3.8'
volumes:
  pypi-server:
services:
  backend:
    command: "bash ./install-ppr_an_run_dphi.sh"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    volumes:
      - ./backend:/usr/src/app
    expose:
      - 8000:8000
    depends_on:
      - db
  pypi-server:
    image: pypiserver/pypiserver:latest
    ports:
      - 8080:8080
    volumes:
      - type: volume
        source: pypi-server
        target: /data/packages
    command: -P . -a . /data/packages
    restart: always
  db:
    image: mysql:8
    ports:
      - 3306:3306
    volumes:
      - ~/apps/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=gary
      - MYSQL_PASSWORD=tempgary
      - MYSQL_USER=gary_user
      - MYSQL_DATABASE=gary_db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - backend
The Django app depends on a couple of private packages hosted on the private PyPI server, without which the app won't run.
I created a separate Dockerfile for the django-backend service alone, which installs the packages from requirements.txt and the packages from the private PyPI server. But the Dockerfile of the django-backend service builds even before the private PyPI server is running.
If I move the installation of the private packages into the command of the django-backend service in docker-compose.yml, then it works fine. The issue then is that, if the backend is running and I want to run some commands in django-backend (./manage.py migrate), it says that the private packages are not installed.
I'm not sure how to proceed with this; it would be really helpful if I could get all these services running at once just by running docker-compose up --build -d.
Created a separate docker-compose file for pypi-server, so it is up and running even before I build/start the other services.
Have you tried adding the pypi service to the depends_on of the backend app?
backend:
  command: "bash ./install-ppr_an_run_dphi.sh"
  build:
    context: ./backend
    dockerfile: ./Dockerfile
  volumes:
    - ./backend:/usr/src/app
  expose:
    - 8000:8000
  depends_on:
    - db
    - pypi-server
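Note that depends_on only controls start order; it does not wait for pypi-server to actually be ready to serve packages. If ordering is still a problem, here is a hedged sketch using a healthcheck together with depends_on conditions (supported by newer Docker Compose releases implementing the Compose Specification; the wget probe is an assumption about what is available in the pypiserver image):

pypi-server:
  image: pypiserver/pypiserver:latest
  healthcheck:
    test: ["CMD-SHELL", "wget -q -O /dev/null http://localhost:8080/ || exit 1"]
    interval: 5s
    timeout: 3s
    retries: 10
backend:
  depends_on:
    db:
      condition: service_started
    pypi-server:
      condition: service_healthy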
Your docker-compose file begs a few questions though.
Why install custom packages into the backend service at runtime? I can see many problems that might arise from this, such as latency during service restarts, possibly different environments between runs of the same version of the backend service, and any problem with the installation coming up during deployment and bringing it down. Installation should be done during the build of the Docker image (see the sketch at the end of this answer). Could you provide your Dockerfile, maybe?
Is there any reason why the PyPI server has to share a docker-compose file with the application? I'd suggest having it in a separate deployment, especially if it is to be shared among other projects.
Is the PyPI server supposed to be used for anything other than a source of the custom packages for the backend service? If not, I'd consider getting rid of it / using it for builds only.
Is there any good reason why you want to have all the ports exposed? This creates a significant attack surface. E.g. an attacker could bypass the reverse proxy and talk directly to the backend service on port 8000, or connect to the db on port 3306. N.B. docker-compose creates subnetworks among the containers, so they can access each other's ports even if those ports are not forwarded to the host machine.
Consider using docker secrets to store db credentials.
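As a hedged illustration of build-time installation (the build argument name, the /simple/ path, and host.docker.internal are assumptions; the private index must be reachable while the image builds, since the compose network does not exist yet at build time):

# backend/Dockerfile (sketch)
FROM python:3.10-slim
ARG PIP_EXTRA_INDEX_URL
WORKDIR /usr/src/app
COPY requirements.txt .
# Public packages come from PyPI; private ones from the extra index passed at build time.
# For a plain-http index you may also need --trusted-host <index host>.
RUN pip install --extra-index-url "$PIP_EXTRA_INDEX_URL" -r requirements.txt
COPY . .

and in docker-compose.yml:

backend:
  build:
    context: ./backend
    args:
      PIP_EXTRA_INDEX_URL: http://host.docker.internal:8080/simple/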
I've created a FastAPI app with a Postgres DB, both of which live in Docker containers.
So now I have a docker-compose.yml file with my app and the Postgres DB:
version: '3.9'
services:
  app:
    container_name: app_container
    build: .
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    depends_on:
      - my_database
    #networks:
    #  - postgres
  my_database:
    container_name: db_container
    image: postgres
    environment:
      POSTGRES_NAME: dbf
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/data/postgres
    ports:
      - '5432:5432'
    restart: unless-stopped
volumes:
  postgres:
And now I want to run pytest against my DB, testing the endpoints and the DB itself.
BUT, when I run the python -m pytest command I get the error "can not translate hostname "my_database"", because in my database.py file I set DATABASE_URL = 'postgresql://myuser:password@my_database'. According to the user guide, when using docker-compose, in DATABASE_URL I must put the name of the service instead of the hostname.
Anyone have an idea how to solve this?
The problem is that, if you use docker-compose to run your app in one container and the database in another container, and then run pytest from the host, it is as if your DB has not been launched and pytest can't connect to it. This is the wrong way to run pytest in this setup!
To run pytest correctly you should:
In DATABASE_URL, write the name of the service instead of the name of the host! In my case my_database is the name of the service in the docker-compose.yml file, so I should set it as the hostname, like: DATABASE_URL = postgresql://<username>:<password>@<name of service>
pytest must be run in the app container! What does that mean? First of all, start your containers: docker-compose up --build, where --build is optional (it just rebuilds your images if you made some changes to the code in your program files). After this, you should jump into the app container. It can be done from the Docker Desktop application on your computer or through the terminal. To do it from a terminal window:
run docker exec -it <name of container with your application> bash. You will land inside the container, and after this you can simply run pytest or python -m pytest. Your tests will run as always.
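For this question's compose file, the sequence would look roughly like this (container and service names taken from the docker-compose.yml above):

docker-compose up --build -d
docker exec -it app_container bash   # or: docker-compose exec app bash
python -m pytest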
If you have any questions, feel free to write me anytime :)
So, the reason for this error was that I ran pytest from the host, and it tried to connect to a DATABASE_URL that had not been started yet (as I understand it).
I have created a simple Django application that has one endpoint, /health/live, which returns a success message upon receiving a GET request.
I run the application locally with python manage.py runserver on port 8000
I also have a docker-compose and Dockerfile as below:
FROM python
ENV PYTHONUNBUFFERED 1
RUN mkdir /inventory
WORKDIR /inventory
COPY . /inventory
WORKDIR /inventory
RUN pip install -r requirements.txt
and
version: '3'
networks:
  kong-net:
    name: kong-net
    driver: bridge
    ipam:
      config:
        - subnet: 172.1.1.0/24
services:
  inventory:
    container_name: inventory
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      kong-net:
        ipv4_address: 172.1.1.11
    ports:
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: python manage.py runserver 0.0.0.0:8000
I then run docker-compose up (I don't detach it, so I can see the logs).
They both work. I send a GET request to http://127.0.0.1:8000/health/live:
Based on the logs I see, the request goes through the service running directly on the system, not the one in the Docker container.
If I stop the service running directly (without Docker) and send the request again, it goes through the one deployed in Docker.
Is there a reason this is happening? Why does the first one take priority?
And shouldn't I see an error when trying to run the Docker container or start the application locally? They are both listening on port 8000!
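One way to check what is actually bound to port 8000 on the host (just a diagnostic sketch; output differs per machine):

lsof -i :8000
# or, on Linux:
sudo ss -ltnp | grep ':8000'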
Every time I run the debugger, many things happen, but not what I expect.
I'm running the project with docker-compose up.
Checking localhost to see if the backend is okay: it's down.
What's funny is that the container is running, because I'm attached to it with VS Code's Remote Containers.
The debugpy library is installed.
The first attempt to run the debugger ends with this info in the debug console:
Attached!
System check identified some issues:
WARNINGS:
workflow.State.additional_values: (fields.W904) django.contrib.postgres.fields.JSONField is deprecated. Support for it (except in historical migrations) will be removed in Django 4.0.
HINT: Use django.db.models.JSONField instead.
Operations to perform:
Apply all migrations: accounts, auth, contenttypes, files, mambu, otp_totp, sessions, token_blacklist, workflow, zoho
Running migrations:
No migrations to apply.
and it's down. Backend is also down.
Second try:
Attached!
System check identified some issues:
WARNINGS:
workflow.State.additional_values: (fields.W904) django.contrib.postgres.fields.JSONField is deprecated. Support for it (except in historical migrations) will be removed in Django 4.0.
HINT: Use django.db.models.JSONField instead.
Zoho Configuration failed, check that you have all variables ZOHO_TOKEN_URL, ZOHO_REST_API_KEY, ZOHO_CURRENT_USER_EMAIL
and it's down, but the backend is up - I'm able to log in etc.
The third try ends with the error connect ECONNREFUSED 127.0.0.1:5678.
Any tips?
Code:
manage.py
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys


def initialize_debugger():
    import debugpy
    debugpy.listen(("0.0.0.0", 5678))
    debugpy.wait_for_client()
    print('Attached!')


def main():
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "xxx.settings")
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == "__main__":
    initialize_debugger()
    main()
The local docker-compose.yml
version: "3.2"
services:
backend:
container_name: xxx
build:
context: ./backend
dockerfile: ../build/backend.Dockerfile
volumes:
- ./backend:/opt/app
command: ./run.sh
ports:
- "8000:8000"
- "5678:5678"
env_file:
- build/.env-local
links:
- db:db
- rabbit:rabbit
- memcached:memcached
celery:
container_name: xxx
restart: always
build:
dockerfile: ../build/backend.Dockerfile
context: ./backend
command: ./run_celery.sh
env_file:
- build/.env-local
working_dir: /opt/app/
volumes:
- ./backend/:/opt/app
links:
- db:db
- rabbit:rabbit
frontend:
container_name: xxx
build:
context: frontend
dockerfile: ../build/frontend.Dockerfile
environment:
- BROWSER=none
- CI=true
volumes:
- ./frontend/src/:/frontend/src
- ./frontend/public/:/frontend/public
nginx:
container_name: xxx
build:
dockerfile: build/nginx.Dockerfile
context: .
args:
REACT_APP_GOOGLE_ANALYTICS_TOKEN: $REACT_APP_GOOGLE_ANALYTICS_TOKEN
REACT_APP_PAGESENSE_LINK: $REACT_APP_PAGESENSE_LINK
REACT_APP_CHATBOT_TOKEN: $REACT_APP_CHATBOT_TOKEN
REACT_APP_SENTRY_DSN: $REACT_APP_SENTRY_DSN
REACT_APP_SENTRY_ENVIRONMENT: $REACT_APP_SENTRY_ENVIRONMENT
REACT_APP_SENTRY_TRACES_SAMPLE_RATE: $REACT_APP_SENTRY_TRACES_SAMPLE_RATE
REACT_APP_THIRD_PARTY_API_URL: $REACT_APP_THIRD_PARTY_API_URL
ports:
- "5000:80"
depends_on:
- backend
- frontend
env_file:
- build/.env-local
volumes:
- ./build/nginx/nginx.conf:/etc/nginx.conf
db:
container_name: xxx
image: postgres:12
ports:
- "5432:5432"
restart: on-failure
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
rabbit:
container_name: xxx
image: rabbitmq
ports:
- "5672:5672"
memcached:
container_name: xxx
image: memcached
ports:
- "11211:11211"
restart: always
flower:
image: mher/flower:0.9.5
environment:
- CELERY_BROKER_URL=amqp://xxx-rabbitmq//
- FLOWER_PORT=8888
ports:
- 8888:8888
and the launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "CF: Remote Attach",
      "type": "python",
      "request": "attach",
      "connect": {
        "host": "localhost",
        "port": 5678
      },
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}/backend",
          "remoteRoot": "/opt/app/"
        }
      ],
      "django": true
    }
  ]
}
Django doesn't support debugging on its own.
This is what I found after surfing for 2 minutes; it might help you.
There could be many reasons why debugging does not work as intended. Troubleshooting is usually the reasonable thing to do: start from something simple and add complexity until you figure out which step is not working as intended. I would recommend starting with a simple debugging session using pdb before adding the VS Code complexity. To accomplish that, you just need to add a breakpoint() in your backend code where you want to debug. In your docker-compose.yaml, add the following additional configuration to your backend service:
services:
  backend:
    tty: true
    stdin_open: true
In your terminal, start your application with docker-compose up. Open a second terminal and attach to your container with docker attach <project name>_backend. You should normally get a prompt pdb> at the location where your breakpoint was hit.
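For example, a minimal sketch of placing a breakpoint in a Django view (the view and module names are placeholders):

# somewhere in the backend code, e.g. views.py (sketch)
from django.http import JsonResponse

def my_view(request):
    breakpoint()  # execution pauses here; the pdb prompt appears in the attached terminal
    return JsonResponse({"status": "ok"})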
Based on your description, here are the points I would investigate.
debugpy installation
Make sure debugpy is installed in the Docker image and not locally.
WSGI HTTP server
I presume you're using python manage.py runserver 0.0.0.0:8000 to start the WSGI HTTP server. Just in case you're using something like gunicorn, it's worth mentioning that you should only use 1 worker. As an example, if using gunicorn, you can provide the amount of workers at the command line: gunicorn --workers=1 --timeout=1200 --bind 0.0.0.0:8000 your_application.wsgi:application.
Note also the huge timeout. You might want to set a high value both for your WSGI HTTP server and for Nginx. If one of them times out while you're debugging, you will get a 502 or 504 error depending on which one timed out first and your debugging session will terminate.
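On the Nginx side, a hedged sketch of matching directives inside the location block that proxies to the backend (the upstream address is a placeholder; values mirror the gunicorn timeout above):

location / {
    proxy_pass http://backend:8000;
    proxy_connect_timeout 1200s;
    proxy_read_timeout 1200s;
}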
debugpy location
I usually place the code importing debugpy in wsgi.py, right before the call to get_wsgi_application()
"""
WSGI config for {{ project_name }} project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/{{ docs_version }}/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', '{{ project_name }}.settings')
import debugpy
debugpy.listen(('0.0.0.0', 5678))
debugpy.wait_for_client()
print('Attached!')
application = get_wsgi_application()
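One refinement (my own suggestion, not part of the original answer): guard the debugpy block behind an environment variable so the server only blocks waiting for a debugger when you explicitly ask for it. The variable name is a placeholder:

# wsgi.py variant (sketch)
if os.environ.get('ENABLE_DEBUGPY') == '1':
    import debugpy
    debugpy.listen(('0.0.0.0', 5678))
    debugpy.wait_for_client()
    print('Attached!')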
I have a dockerized setup running a Django app within which I use Celery tasks. Celery uses Redis as the broker.
Versioning:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.15.0, build e12f3b9
Django==1.9.6
django-celery-beat==1.0.1
celery==4.1.0
celery[redis]
redis==2.10.5
Problem:
My celery workers appear to be unable to connect to the redis container located at localhost:6379. I am able to telnet into the redis server on the specified port. I am able to verify redis-server is running on the container.
When I manually connect to the Celery docker instance and attempt to create a worker using the command celery -A backend worker -l info I get the notice:
[2017-11-13 18:07:50,937: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
Trying again in 4.00 seconds...
Notes:
I am able to telnet in to the redis container on port 6379. On the redis container, redis-server is running.
Is there anything else that I'm missing? I've gone pretty far down the rabbit hole, but feel like I'm missing something really simple.
DOCKER CONFIG FILES:
docker-compose.common.yml here
docker-compose.dev.yml here
When you use docker-compose, you aren't going to be using localhost for inter-container communication, you would be using the compose-assigned hostname of the container. In this case, the hostname of your redis container is redis. The top level elements under services: are your default host names.
So for celery to connect to redis, you should try redis://redis:6379/0. Since the protocol and the service name are the same, I'll elaborate a little more: if you named your redis service "butter-pecan-redis" in your docker-compose, you would instead use redis://butter-pecan-redis:6379/0.
Also, docker-compose.dev.yml doesn't appear to have celery and redis on a common network, which might cause them not to be able to see each other. I believe they need to share at least one network in common to be able to resolve their respective host names.
Networking in docker-compose has an example in the first handful of paragraphs, with a docker-compose.yml to look at.
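A hedged sketch of what that looks like on the Django side (setting names follow the common CELERY_ convention; adjust to however this project configures Celery):

# settings.py (sketch)
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'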
You may need to add the link and depends_on sections to your docker compose file, and then reference the containers by their hostname.
Updated docker-compose.yml:
version: '2.1'
services:
  db:
    image: postgres
  memcached:
    image: memcached
  redis:
    image: redis
    ports:
      - '6379:6379'
  backend-base:
    build:
      context: .
      dockerfile: backend/Dockerfile-base
    image: "/backend:base"
  backend:
    build:
      context: .
      dockerfile: backend/Dockerfile
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- gunicorn backend.wsgi:application -b 0.0.0.0:8000 -k gevent -w 3
    ports:
      - 8000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  celery:
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- celery worker -E -B --loglevel=INFO --concurrency=1
    environment:
      C_FORCE_ROOT: "yes"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend-base:
    build:
      context: .
      dockerfile: frontend/Dockerfile-base
      args:
        NPM_REGISTRY: http://.view.build
        PACKAGE_INSTALLER: yarn
    image: "/frontend:base"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    image: "/frontend:${ENV:-local}"
    command: 'bash -c ''gulp'''
    working_dir: /app/user
    environment:
      PORT: 3000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
Then configure the URLs to redis, postgres, memcached, etc. with:
redis://redis:6379/0
postgres://user:pass@db:5432/database
The issue for me was that all of the containers, including celery, had a network argument specified. If that is the case, the redis container must also have the same argument, otherwise you will get this error. See below; the fix was adding 'networks':
redis:
  image: redis:alpine
  ports:
    - '6379:6379'
  networks:
    - server
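For completeness, a sketch of the matching pieces implied by that fix: the celery service declaring the same network, and the top-level network definition (the celery image line is a placeholder):

celery:
  image: backend:local
  networks:
    - server

networks:
  server: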