I am trying to run my Django application, which involves Celery, using Docker. I am able to set everything up locally and it works perfectly fine. However, when I run it in Docker and my task gets executed, it throws the following error:
myapp.models.mymodel.DoesNotExist: mymodel matching query does not exist.
I am fairly new to Celery and Docker, so I am not sure what I am doing wrong.
Celery is set up correctly; I have made sure of that. These are the broker URL and result backend settings:
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'django-db'
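For reference, these CELERY_-prefixed settings are only picked up if the Celery app is told to read them from the Django settings. A minimal celery.py sketch of that wiring, assuming the project package is braython (as in the worker command below):

# braython/celery.py -- minimal sketch, assuming the project package is named "braython"
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'braython.settings')

app = Celery('braython')
# Load all CELERY_* settings (broker URL, serializers, result backend) from Django settings
app.config_from_object('django.conf:settings', namespace='CELERY')
# Find tasks.py modules in the installed Django apps
app.autodiscover_tasks()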
This is my docker-compose.yml file:
version: "3.8"
services:
redis:
image: redis:alpine
container_name: rz01
ports:
- "6379:6379"
networks:
- npm-nw
- braythonweb-network
braythonweb:
build: .
command: >
sh -c "python manage.py makemigrations &&
python manage.py migrate &&
gunicorn braython.wsgi:application -b 0.0.0.0:8000 --workers=1 --timeout 10000"
volumes:
- .:/code
ports:
- "8000:8000"
restart: unless-stopped
env_file: .env
networks:
- npm-nw
- braythonweb-network
celery:
build: .
restart: always
container_name: cl01
command: celery -A braython worker -l info
depends_on:
- redis
networks:
- npm-nw
- braythonweb-network
networks:
braythonweb-network:
npm-nw:
external: false
I have tried a few things from different Stack Overflow posts, like using apply_async. I have also made sure that my model instance exists.
Update: on investigating the issue further, I have noticed that the Celery task does not get created in the database in the first place. I don't know why; maybe I have to replace the following with something else:
CELERY_RESULT_BACKEND = 'django-db'
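One thing worth checking with the 'django-db' backend: it is provided by the django-celery-results package, so task results are only written to the database if that app is installed and its migrations have been applied. A minimal sketch of what that assumes:

# settings.py -- sketch; assumes django-celery-results is in requirements.txt
INSTALLED_APPS = [
    # ... existing apps ...
    'django_celery_results',  # provides the tables used by CELERY_RESULT_BACKEND = 'django-db'
]

# Then create the result tables inside the container:
#   python manage.py migrate django_celery_results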
The exception is telling you that you are looking for an entry in your database that does not exist (yet). Look for any function where you query the database and make sure you create the needed entry before looking it up. I'm assuming you have a table in your database for some configuration that is read in a function, but the database is empty at the beginning.
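In a Celery task this typically shows up when the worker queries for a row that was never created in the database the worker actually talks to. A minimal defensive sketch (the task name, the id argument, and the retry policy are made up for illustration, not taken from the question):

# tasks.py -- illustrative sketch only; process_item and item_id are hypothetical names
from celery import shared_task
from myapp.models import mymodel

@shared_task(bind=True, max_retries=3)
def process_item(self, item_id):
    try:
        item = mymodel.objects.get(pk=item_id)
    except mymodel.DoesNotExist:
        # The row may not be committed yet, or the worker may be pointed at a
        # different database; retry a few times instead of crashing immediately.
        raise self.retry(countdown=5)
    # ... do the actual work with item ...
    return item.pk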
I also had to add the following volume to the celery container, so that it has access to the application code:
volumes:
  - .:/code
Related
I've set up my Django project and now I'm trying to test it with pytest. The issue is that running pytest within my containers doesn't kill them at the end of the process. So at the end of the day I'm stuck with multiple running containers from pytest and, often, PostgreSQL connection problems.
My docker-compose file:
version: '3'
services:
  license_server:
    build: .
    command: bash -c "python manage.py migrate && gunicorn LicenseServer.wsgi --reload --bind 0.0.0.0:8000"
    depends_on:
      - postgres
    volumes:
      - .:/code
    environment:
      DATABASE_NAME: "${DATABASE_NAME}"
      DATABASE_USER: "${DATABASE_USER}"
      DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
      DATABASE_PORT: "${DATABASE_PORT}"
      DATABASE_HOST: "${DATABASE_HOST}"
    env_file: .env
    ports:
      - "8000:8000"
    restart: always
  postgres:
    build: ./postgres
    volumes:
      - ./postgres/postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_PASSWORD: postgres
      DATABASE_NAME: "${DATABASE_NAME}"
      DATABASE_USER: "${DATABASE_USER}"
      DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
      DATABASE_PORT: "${DATABASE_PORT}"
      DATABASE_HOST: "${DATABASE_HOST}"
    command: "-p 8005"
    env_file: .env
    ports:
      - "127.0.0.1:8005:8005"
    restart: always
  nginx:
    image: nginx:latest
    container_name: nginx1
    ports:
      - "8001:80"
    volumes:
      - .:/code
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - license_server
What I want to achieve is automatically closing containers after the testing process is finished.
When you have restart: always, the containers will just keep restarting after all the processes spawned by the command have exited. Even when you try to kill the running containers yourself, they will tend to restart (which can be a nuisance). Try removing restart: always from your service definitions.
For more info, check the docker-compose.yml reference.
docker-compose.yml:
python-api: &python-api
  build:
    context: /Users/AjayB/Desktop/python-api/
  ports:
    - "8000:8000"
  networks:
    - app-tier
  expose:
    - "8000"
  depends_on:
    - python-model
  volumes:
    - .:/python_api/
  environment:
    - PYTHON_API_ENV=development
  command: >
    sh -c "ls /python-api/ &&
           python_api_setup.sh development
           python manage.py migrate &&
           python manage.py runserver 0.0.0.0:8000"
python-model: &python-model
  build:
    context: /Users/AjayB/Desktop/Python/python/
  ports:
    - "8001:8001"
  networks:
    - app-tier
  environment:
    - PYTHON_API_ENV=development
  expose:
    - "8001"
  volumes:
    - .:/python_model/
  command: >
    sh -c "ls /python-model/
           python_setup.sh development
           cd /server/ &&
           python manage.py migrate &&
           python manage.py runserver 0.0.0.0:8001"
python-celery:
  <<: *python-api
  environment:
    - PYTHON_API_ENV=development
  networks:
    - app-tier
  links:
    - redis:redis
  depends_on:
    - redis
  command: >
    sh -c "celery -A server worker -l info"
redis:
  image: redis:5.0.8-alpine
  hostname: redis
  networks:
    - app-tier
  expose:
    - "6379"
  ports:
    - "6379:6379"
  command: ["redis-server"]
python-celery reuses the configuration of python-api via the YAML anchor, but it should run as a separate container. Instead it is trying to occupy the same port as python-api, which should never be the case.
The error that I'm getting is:
AjayB$ docker-compose up
Creating integrated_redis_1 ... done
Creating integrated_python-model_1 ... done
Creating integrated_python-api_1 ...
Creating integrated_python-celery_1 ... error
Creating integrated_python-api_1 ... done
ERROR: for python-celery Cannot start service python-celery: driver failed programming external connectivity on endpoint integrated_python-celery_1 (ab5e079dbc3a30223e16052f21744c2b5dfc56adbe1d1055165b1f85f179f69c): Bind for 0.0.0.0:8000 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
On doing docker ps -a, I get this:
AjayB$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2ff1277fb7a7 integrated_python-celery "sh -c 'celery -A se…" 10 seconds ago Created integrated_python-celery_1
5b60221b42a4 integrated_python-api "sh -c 'ls /crackd-a…" 11 seconds ago Up 9 seconds 0.0.0.0:8000->8000/tcp integrated_python-api_1
bacd8aa3268f integrated_python-model "sh -c 'ls /crackd-m…" 12 seconds ago Exited (2) 10 seconds ago integrated_python-model_1
9fdab833b436 redis:5.0.8-alpine "docker-entrypoint.s…" 12 seconds ago Up 10 seconds 0.0.0.0:6379->6379/tcp integrated_redis_1
I tried force-removing the containers and running docker-compose up again, but I get the same error. :/ Where am I making a mistake?
I'm also doubtful about the volumes: section. Can anyone please tell me if volumes is correct?
And please help me with this error. PS: this is my first try with Docker.
Thanks!
This is because you reuse the full config of python-api, including the ports section, which publishes port 8000 (by the way, expose is redundant since your ports section already exposes the port).
I would create a common section that can be reused by any service. In your case, it would be something like this:
version: '3.7'
x-common-python-api:
  &default-python-api
  build:
    context: /Users/AjayB/Desktop/python-api/
  networks:
    - app-tier
  environment:
    - PYTHON_API_ENV=development
  volumes:
    - .:/python_api/
services:
  python-api:
    <<: *default-python-api
    ports:
      - "8000:8000"
    depends_on:
      - python-model
    command: >
      sh -c "ls /python-api/ &&
             python_api_setup.sh development
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
  python-model: &python-model
    .
    .
    .
  python-celery:
    <<: *default-python-api
    links:
      - redis:redis
    depends_on:
      - redis
    command: >
      sh -c "celery -A server worker -l info"
  redis:
    .
    .
    .
There is a lot in that docker-compose.yml file, but much of it is unnecessary. expose: in a Dockerfile does almost nothing; links: aren't needed with the current networking system; Compose provides a default network for you; your volumes: try to inject code into the container that should already be present in the image. If you clean all of this up, the only part that you'd really want to reuse from one container to the other is its build: (or image:), at which point the YAML anchor syntax is unnecessary.
This docker-compose.yml should be functionally equivalent to what you show in the question:
version: '3'
services:
  python-api:
    build:
      context: /Users/AjayB/Desktop/python-api/
    ports:
      - "8000:8000"
    # No networks:, use `default`
    # No expose:, use what's in the Dockerfile (or nothing)
    depends_on:
      - python-model
    # No volumes:, use what's in the Dockerfile
    # No environment:, this seems to be a required setting in the Dockerfile
    # No command:, use what's in the Dockerfile
  python-model:
    build:
      context: /Users/AjayB/Desktop/Python/python/
    ports:
      - "8001:8001"
  python-celery:
    build: # copied from python-api
      context: /Users/AjayB/Desktop/python-api/
    depends_on:
      - redis
    command: celery -A server worker -l info # one line, no sh -c wrapper
  redis:
    image: redis:5.0.8-alpine
    # No hostname:, it doesn't do anything
    ports:
      - "6379:6379"
    # No command:, use what's in the image
Again, notice that the only thing we've actually copied from the python-api container to the python-celery container is the build: block; all of the other settings that would be shared across the two containers (code, exposed ports) are included in the Dockerfile that describes how to build the image.
The flip side of this is that you need to make sure all of these settings are in fact included in your Dockerfile:
# Copy the application code in
COPY . .
# Set the "development" environment variable
ENV PYTHON_API_ENV=development
# Document which port you'll use by default
EXPOSE 8000
# Specify the default command to run
# (Consider writing a shell script with this content instead)
CMD python_api_setup.sh development && \
python manage.py migrate && \
python manage.py runserver 0.0.0.0:8000
I have a dockerized setup running a Django app within which I use Celery tasks. Celery uses Redis as the broker.
Versioning:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.15.0, build e12f3b9
Django==1.9.6
django-celery-beat==1.0.1
celery==4.1.0
celery[redis]
redis==2.10.5
Problem:
My celery workers appear to be unable to connect to the redis container located at localhost:6379. I am able to telnet into the redis server on the specified port. I am able to verify redis-server is running on the container.
When I manually connect to the Celery Docker container and attempt to start a worker using the command celery -A backend worker -l info, I get the notice:
[2017-11-13 18:07:50,937: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
Trying again in 4.00 seconds...
Notes:
I am able to telnet in to the redis container on port 6379. On the redis container, redis-server is running.
Is there anything else that I'm missing? I've gone pretty far down the rabbit hole, but feel like I'm missing something really simple.
DOCKER CONFIG FILES:
docker-compose.common.yml here
docker-compose.dev.yml here
When you use docker-compose, you aren't going to be using localhost for inter-container communication; you would be using the Compose-assigned hostname of the container. In this case, the hostname of your redis container is redis. The top-level elements under services: are your default host names.
So for celery to connect to redis, you should try redis://redis:6379/0. Since the protocol and the service name are the same, I'll elaborate a little more: if you named your redis service "butter-pecan-redis" in your docker-compose, you would instead use redis://butter-pecan-redis:6379/0.
Also, docker-compose.dev.yml doesn't appear to have celery and redis on a common network, which might cause them not to be able to see each other. I believe they need to share at least one network in common to be able to resolve their respective host names.
Networking in docker-compose has an example in the first handful of paragraphs, with a docker-compose.yml to look at.
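To make the same settings work both on the host and inside Compose, one option (a sketch, not from the original answer; the REDIS_HOST variable name is made up) is to build the broker URL from an environment variable that defaults to the Compose service name:

# settings.py -- sketch; REDIS_HOST is a hypothetical environment variable
import os

REDIS_HOST = os.environ.get('REDIS_HOST', 'redis')  # 'redis' is the Compose service name
CELERY_BROKER_URL = 'redis://{}:6379/0'.format(REDIS_HOST)
CELERY_RESULT_BACKEND = 'redis://{}:6379/0'.format(REDIS_HOST)

With this in place, the worker inside Compose needs no extra configuration, and running it directly on the host only requires setting REDIS_HOST=localhost.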
You may need to add links and depends_on sections to your docker-compose file, and then reference the containers by their hostnames.
Updated docker-compose.yml:
version: '2.1'
services:
  db:
    image: postgres
  memcached:
    image: memcached
  redis:
    image: redis
    ports:
      - '6379:6379'
  backend-base:
    build:
      context: .
      dockerfile: backend/Dockerfile-base
    image: "/backend:base"
  backend:
    build:
      context: .
      dockerfile: backend/Dockerfile
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- gunicorn backend.wsgi:application -b 0.0.0.0:8000 -k gevent -w 3
    ports:
      - 8000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  celery:
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- celery worker -E -B --loglevel=INFO --concurrency=1
    environment:
      C_FORCE_ROOT: "yes"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend-base:
    build:
      context: .
      dockerfile: frontend/Dockerfile-base
      args:
        NPM_REGISTRY: http://.view.build
        PACKAGE_INSTALLER: yarn
    image: "/frontend:base"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    image: "/frontend:${ENV:-local}"
    command: 'bash -c ''gulp'''
    working_dir: /app/user
    environment:
      PORT: 3000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
Then configure the URLs to redis, postgres, memcached, etc. with:
redis://redis:6379/0
postgres://user:pass@db:5432/database
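For concreteness, here is a sketch (not part of the original answer) of how those URLs map onto Django settings for this stack, using the Compose service names as hosts; the credential and database names are just the placeholders from the URLs above:

# settings.py -- sketch; the hosts are the Compose service names (db, redis, memcached)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'database',
        'USER': 'user',
        'PASSWORD': 'pass',
        'HOST': 'db',       # service name, not localhost
        'PORT': '5432',
    }
}

CELERY_BROKER_URL = 'redis://redis:6379/0'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'memcached:11211',
    }
}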
The issue for me was that all of the containers, including celery, had a networks argument specified. If this is the case, the redis container must also have the same argument, otherwise you will get this error. See below; the fix was adding networks:
redis:
  image: redis:alpine
  ports:
    - '6379:6379'
  networks:
    - server
I have a Django application with some model. I have a manage.py command that creates n model instances and saves them to the database. It runs at a decent speed on my host machine.
But if I run it in Docker, it runs very slowly: one instance is created and saved in 40-50 seconds. I think I am missing something about how Docker works; can somebody point out why the performance is so low and what I can do about it?
docker-compose.yml:
version: '2'
services:
  db:
    restart: always
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    volumes:
      - /usr/local/var/postgres:/var/lib/postgresql
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_db
      - POSTGRES_USER=postgres
  web:
    build: .
    command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
dockerfile for web service:
FROM python:3.6
ENV PYTHONBUFFERED 1
ADD . .
WORKDIR .
RUN pip install -r requirements.txt
RUN chmod +x wait-for-it.sh
The problem here is most likely the volume /usr/local/var/postgres:/var/lib/postgresql, since you are using it on a Mac. As I understand the Docker for Mac solution, it uses file sharing to implement host volumes, which is a lot slower than native filesystem access.
A possible workaround is to use a docker volume instead of a host volume. Here is an example:
version: '2'
volumes:
  postgres_data:
services:
  db:
    restart: always
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_db
      - POSTGRES_USER=postgres
  web:
    build: .
    command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
Please note that this may complicate management of the postgres data, as you can't simply access the data from your Mac. You can only use the Docker CLI or containers to access, modify and back up this data. Also, I'm not sure what happens if you uninstall Docker from your Mac; it may be that you lose this data.
Two things can be a probable cause:
Starting a Docker container takes some time, so if you start a new container for each instance, this can add up.
Which storage driver do you use? Docker (often) defaults to the device-mapper loopback storage driver, which is slow. Here is some context. This will be painful, especially if you start this container often.
Other than that, your config looks sensible and there are no obvious causes of problems there. So if the above two points don't apply to you, please add some extra comments, like how you actually add these model instances.
I'm trying to find a good way to populate a database with initial data for a simple application. I'm using a tutorial from realpython.com as a starting point. I then run a simple Python script after the database is created to add a single entry, but when I do this the data is added multiple times, even though I only call the script once.
population script (test.py):
from app import db
from models import *
t = Post("Hello 3")
db.session.add(t)
db.session.commit()
Edit:
Here is the docker-compose file which I use to build the project:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"
postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"
It references two different Dockerfiles:
Dockerfile #1, which builds the app container, is one line:
FROM python:3.4-onbuild
Dockerfile #2 is used to build the nginx container
FROM tutum/nginx
RUN rm /etc/nginx/sites-enabled/default
ADD sites-enabled/ /etc/nginx/sites-enabled
Edit 2:
Some people have suggested that the data was persisting over several runs, and that was my initial thought as well. This is not the case, as I remove all active Docker containers via docker rm before testing. Also, the number of "extra" entries is not consistent, ranging randomly from 3 to 6 in the few tests that I have run so far.
It turns out this is a bug related to using the run command on containers that have the restart: always instruction in the docker-compose file. To work around the issue without waiting for a bug fix, I removed restart: always from the web container.
related issue: https://github.com/docker/compose/issues/1013
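Independently of the restart fix, a seed script like test.py can be made idempotent so that accidental re-runs do not insert duplicates. A minimal sketch, assuming the Post constructor's single argument maps to a text column (here called content, a made-up name):

# seed.py -- illustrative sketch; the "content" column name is an assumption
from app import db
from models import Post

def seed():
    # Only insert the row if an identical one is not already present
    if Post.query.filter_by(content="Hello 3").first() is None:
        db.session.add(Post("Hello 3"))
        db.session.commit()

if __name__ == "__main__":
    seed()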