I have this in my Celery configuration:
BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'
Yet whenever I run celeryd, I get this error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 2.00 seconds...
Why is it not connecting to the Redis broker I configured, which is running, by the way?
Import your Celery instance and set your broker like this:
celery = Celery('task', broker='redis://127.0.0.1:6379')
celery.config_from_object(celeryconfig)
This code belongs in celery.py
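A minimal sketch of that layout, assuming a celeryconfig.py sitting next to celery.py (the task name is a placeholder):

# celery.py -- sketch; the broker is set on the instance, everything else in celeryconfig
from celery import Celery

import celeryconfig  # assumed sibling module, shown below

celery = Celery('task', broker='redis://127.0.0.1:6379')
celery.config_from_object(celeryconfig)

@celery.task
def ping():  # placeholder task
    return 'pong'

# celeryconfig.py -- result backend (and any other options) live here
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'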
If you followed the First Steps with Celery tutorial, specifically:
app.config_from_object('django.conf:settings', namespace='CELERY')
then you need to prefix your settings with CELERY_, so change your BROKER_URL to:
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
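A rough sketch of how the two pieces fit together; 'proj' is a placeholder app name, and the URLs are the ones from the question:

# settings.py -- with the CELERY_ namespace, every Celery option needs the prefix
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'

# celery.py -- the namespace argument strips the CELERY_ prefix when loading,
# so the two settings above are read as broker_url and result_backend
from celery import Celery

app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY')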
I got this error because I was starting my Celery worker incorrectly in the terminal.
I was running:
celery -A celery worker
But because I defined celery inside of web/server.py, I needed to run:
celery -A web.server.celery worker
web.server indicates that my celery object is in a file server.py inside a directory web. Running the latter command connected to the broker I specified!
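For illustration, the relevant layout looked roughly like this (the task is a placeholder, not my real code):

# web/server.py -- the Celery instance the -A argument has to reach
from celery import Celery

celery = Celery('server', broker='redis://127.0.0.1:6379')

@celery.task
def example_task():  # placeholder
    return 'done'

# `celery -A web.server.celery worker` imports web.server and uses
# the `celery` attribute defined above.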
Related
I'm using Django and Celery with RabbitMQ as the message broker. While developing in Windows I installed RabbitMQ and configured Celery inside Django like this:
celery.py
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
app = Celery('DjangoExample')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
__init__.py
from .celery import app as celery_app
__all__ = ['celery_app']
When running Celery on my Windows development machine, everything works correctly and tasks are executed as expected.
Now I'm trying to deploy the app on a CentOS 7 machine.
I installed RabbitMQ and I tried running Celery with the following command:
celery -A main worker -l INFO
But I get a "connection refused" error:
[2021-02-24 17:39:58,221: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
I don't have any special configuration for Celery inside my settings.py since it was working fine in Windows without it.
You can find the settings.py here:
https://github.com/adrenaline681/DjangoExample/blob/master/main/settings.py
Here is a screenshot of the celery error:
And here is the status of my RabbitMQ Server that shows that it's currently installed and running.
Here is an image of the RabbitMQ Management Plugin web interface, where you can see the port used for amqp:
Does anyone know why this is happening?
How can I get Celery to work correctly with RabbitMQ on CentOS 7?
Many thanks in advance!
I had a similar problem, and it was SELinux blocking access between those two processes, i.e. RabbitMQ and Python. To check this guess, disable SELinux temporarily and see whether the connection succeeds. If it does, you then have to configure SELinux to grant Python access to connect to RabbitMQ. To disable SELinux temporarily, run in a shell:
# setenforce 0
See more here about disabling SELinux either temporarily or permanently. I would not recommend leaving SELinux disabled, though; it is better to configure it to grant access. See more about SELinux here.
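As a quick sanity check (a sketch, not part of the original answer), you can probe the broker port from the machine running the worker; if RabbitMQ reports itself as running but a plain TCP connection from Python fails, something between the two processes, such as SELinux or firewalld, is likely intervening. The host and port are the defaults from the error message:

# probe_broker.py -- minimal TCP reachability check for the broker
import socket

try:
    with socket.create_connection(('127.0.0.1', 5672), timeout=5):
        print('TCP connection to the broker succeeded')
except OSError as exc:
    # A refusal or permission error here, while RabbitMQ itself is up,
    # points at SELinux, a firewall, or a wrong host/port.
    print(f'TCP connection failed: {exc}')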
You said you are developing on Windows, but you showed some output that looks like Linux. Are you using Docker or some other container? I don't know whether you are, but you can likely adapt my advice to your setup.
If you are using Docker, you'll need Django's settings.py to point at the Docker container running RabbitMQ instead of 127.0.0.1. The URL you provided for your settings.py file doesn't work, so I cannot see what you have in there.
Here are my CELERY_... settings:
# Celery settings
CELERY_BROKER_URL = 'amqp://user:TheUserName@rabbitmq'
CELERY_RESULT_BACKEND = 'redis://redis:6379/'
I set the host to the container_name of each service, because my docker-compose file has these:
services:
rabbitmq:
...
container_name: rabbitmq
redis:
...
container_name: redis
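A sketch of the matching Django settings, assuming those container names and placeholder credentials; reading the host from an environment variable (CELERY_BROKER_HOST here is a made-up name) keeps the same file usable outside Compose:

# settings.py -- broker/backend hosts are the docker-compose container names
import os

broker_host = os.environ.get('CELERY_BROKER_HOST', 'rabbitmq')  # container name by default
CELERY_BROKER_URL = f'amqp://user:TheUserName@{broker_host}:5672//'  # placeholder credentials
CELERY_RESULT_BACKEND = 'redis://redis:6379/'  # 'redis' resolves to the Redis container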
Please help me get out of this problem. I am getting this error when I run:
celery -A app.celery worker --loglevel=info
Error:
Unable to load celery application.
The module app.celery was not found.
My code is:
# Celery Configuration
from celery import Celery
from app import app
print("App Name=",app.import_name)
celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
@celery.task
def download_content():
return "hello"
Directory structure:
newYoutube/app/auth/routes.py, and this function is present inside routes.py.
auth is a blueprint.
When invoking celery via
celery -A app.celery ...
celery will look for the name celery in the app namespace, expecting it to hold an instance of Celery. If you put that elsewhere (say, in app.auth.routes), then celery won't find it.
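So the instance needs to be importable as app.celery; a minimal sketch of one way to arrange that, assuming the Flask app is created in app/__init__.py:

# app/celery.py -- putting the instance here lets `celery -A app.celery worker` find it
from celery import Celery

from app import app  # the Flask application created in app/__init__.py

celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)

@celery.task
def download_content():  # the task from the question, registered on this instance
    return "hello"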
I have a working example you can crib from at https://github.com/davewsmith/flask-celery-starter
Or, refer to chapter 22 of the Flask Mega Tutorial, which uses rq instead of celery, but the general approach to structuring the code is similar.
I cannot run the Celery worker with Docker and Django. I pulled the rabbit image and linked the worker to it, and at run time I get the error: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
The error comes from worker_1. Django: 1.11, Celery: 4.1.0. What am I doing wrong?
docker-compose
rabbit:
image: rabbitmq:latest
ports:
- "5672:5672"
worker:
build: ./project
volumes:
- ./main:/src/app
depends_on:
- rabbit
links:
- web #django project
entrypoint: /src/app/celery.sh
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
app = Celery('app')
app.config_from_object('django.conf:settings', namespace='APP')
app.autodiscover_tasks()
@app.task(bind=True)
def add():
print('Task')
celery.sh
#!/bin/bash
cd app
celery -A app worker -l info
The error is caused by an invalid host in CELERY_BROKER_URL. Based on the error you provided, the host in your broker URL is 127.0.0.1; since you are using Docker, this will not work unless you provide an address that is reachable from inside the container. You need to update the host in your CELERY_BROKER_URL to use the service name in your compose file, which in your case is rabbit. Something like the below should work:
CELERY_BROKER_URL = 'amqp://guest:guest@rabbit:5672/%2F'
Change the user, password, and other details as needed.
If you can't connect with guest:guest, add your own user. This doc can help you set up a user, password, and virtual host inside your RabbitMQ server.
http://docs.celeryproject.org/en/latest/getting-started/brokers/rabbitmq.html#broker-rabbitmq
I have a Django application that I've deployed with Heroku. I'm trying to use Celery to create a periodic task that runs every minute. However, when I observe the logs for the worker using the following command:
heroku logs -t -p worker
I don't see my task being executed. Perhaps there is a step I'm missing? This is my configuration below...
Procfile
web: gunicorn activiist.wsgi --log-file -
worker: celery worker --app=trending.tasks.app
tasks.py
import celery
app = celery.Celery('activiist')
import os
from celery.schedules import crontab
from celery.task import periodic_task
from django.conf import settings
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
os.environ['DJANGO_SETTINGS_MODULE'] = 'activiist.settings'
from trending.views import *
@periodic_task(run_every=crontab())
def add():
getarticles(30)
One thing to add. When I run the task using the python shell and the "delay()" command, the task does indeed run (it shows in the logs) -- but it only runs once and only when executed.
You need a separate process for beat (which is responsible for scheduling periodic tasks):
web: gunicorn activiist.wsgi --log-file -
worker: celery worker --app=trending.tasks.app
beat: celery beat --app=trending.tasks.app
Worker isn't necessary for periodic tasks so the relevant line can be omitted. The other possibility is to embed beat inside the worker:
web: gunicorn activiist.wsgi --log-file -
worker: celery worker --app=trending.tasks.app -B
but to quote the celery documentation:
You can also start embed beat inside the worker by enabling workers -B option, this is convenient if you will never run more than one worker node, but it’s not commonly used and for that reason is not recommended for production use
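As an aside, and only a sketch: the same every-minute schedule can also be declared on the app with beat_schedule (the Celery 4+ name; older releases call it CELERYBEAT_SCHEDULE), assuming the task is importable as trending.tasks.add:

# sketch -- schedule declared on the app instead of via the periodic_task decorator
from celery.schedules import crontab

app.conf.beat_schedule = {
    'add-every-minute': {
        'task': 'trending.tasks.add',  # assumed dotted path to the task
        'schedule': crontab(),         # crontab() with no arguments fires every minute
    },
}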
Summary
I am running Celery as a daemon via celeryd (as per the instructions).
I specified Redis as the broker in the configuration file /etc/default/celeryd: BROKER_URL="redis://localhost:6379/0"
The worker log file indicates that BROKER_URL is being ignored, as it is still attempting to connect to the default broker:
ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@localhost:5672//: Error opening socket: a socket error occurred.
Question: Do I need to modify the /etc/init.d/celeryd file beyond the basic template that was provided in the online instructions in order for BROKER_URL to be passed as an argument?
/etc/default/celeryd is configuration for the daemon itself, and only daemon options go there. You configure your Celery instance with a settings file or by passing arguments when creating the instance.
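In other words, the broker belongs on the Celery app itself (or in the config module it loads) rather than in /etc/default/celeryd; a minimal sketch, with proj as a placeholder project name:

# proj/celery.py -- the daemon just runs this app; the app carries the broker
from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')
# or load it from a config module instead:
# app.config_from_object('proj.celeryconfig')  # where celeryconfig sets BROKER_URL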