Summary
I am running Celery as a daemon via celeryd (as per the instructions).
Specified Redis as the broker in the configuration file /etc/default/celeryd: BROKER_URL="redis://localhost:6379/0"
The worker log file indicates that BROKER_URL is being ignored, as the worker is still attempting to connect to the default broker:
ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@localhost:5672//: Error opening socket: a socket error occurred.
Question: Do I need to modify the /etc/init.d/celeryd file beyond the basic template that was provided in the online instructions in order for BROKER_URL to be passed as an argument?
/etc/default/celeryd is configuration for the daemon script itself, and only daemon options belong there. You configure your Celery instance with a settings file or by passing arguments when creating the instance.
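For example, here is a minimal sketch of the second option; the module name 'proj' is a placeholder, and the broker URL follows the question:
from celery import Celery

# Pass the broker directly when creating the instance, instead of
# relying on /etc/default/celeryd (which only configures the daemon).
app = Celery('proj', broker='redis://localhost:6379/0')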
Related
I'm using Django and Celery with RabbitMQ as the message broker. While developing on Windows I installed RabbitMQ and configured Celery inside Django like this:
celery.py
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
app = Celery('DjangoExample')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
__init__.py
from .celery import app as celery_app
__all__ = ['celery_app']
When running Celery on my Windows development machine, everything works correctly and tasks are executed as expected.
Now I'm trying to deploy the app on a CentOS 7 machine.
I installed RabbitMQ and I tried running Celery with the following command:
celery -A main worker -l INFO
But I get a "connection refused" error:
[2021-02-24 17:39:58,221: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
I don't have any special configuration for Celery inside my settings.py since it was working fine in Windows without it.
You can find the settings.py here:
https://github.com/adrenaline681/DjangoExample/blob/master/main/settings.py
Here is a screenshot of the celery error:
And here is the status of my RabbitMQ Server that shows that it's currently installed and running.
Here is an image of the RabbitMQ Management Plugin web interface, where you can see the port used for amqp:
Does anyone know why this is happening?
How can I get Celery to work correctly with RabbitMQ on CentOS 7?
Many thanks in advance!
I had a similar problem, and it was SELinux blocking access between those two processes (RabbitMQ and Python). To check this guess, please disable SELinux temporarily and see whether the connection succeeds. If it does, you have to configure SELinux to grant Python access to connect to RabbitMQ. To disable SELinux temporarily, you can run in a shell:
# setenforce 0
See more here about disabling SELinux either temporarily or permanently. That said, I would not recommend disabling SELinux; it is better to configure it to grant the needed access. See more about SELinux here.
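If SELinux does turn out to be the culprit, one standard way to grant the access is the audit2allow workflow; this is a sketch, and the module name celery_rabbitmq is an example:
# While still in permissive mode, reproduce the connection attempt so
# the denials get logged, then build and install a policy module from
# them (audit2allow comes from policycoreutils-python on CentOS 7):
grep denied /var/log/audit/audit.log | audit2allow -M celery_rabbitmq
semodule -i celery_rabbitmq.pp
# Re-enable enforcing mode afterwards:
setenforce 1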
You said you are developing on Windows, but you showed some output that looks like Linux. Are you using Docker or some other container? Whatever your setup, you can likely adapt my advice to it.
If you are using Docker, you'll need Django's settings.py to point at the Docker container running RabbitMQ instead of 127.0.0.1. The URL you provided for your settings.py file doesn't work, so I cannot see what you have in there.
Here are my CELERY_... settings:
# Celery settings
CELERY_BROKER_URL = 'amqp://user:TheUserName@rabbitmq'
CELERY_RESULT_BACKEND = 'redis://redis:6379/'
I set each host to the container_name I use for the service, because my docker-compose file has these:
services:
  rabbitmq:
    ...
    container_name: rabbitmq
  redis:
    ...
    container_name: redis
Celery task revocations are stored in memory, so they do not persist when the worker is restarted.
According to the Celery documentation, they can be persisted with the command celery -A proj worker -l info --statedb=/var/run/celery/worker.state
http://celery.readthedocs.io/en/latest/userguide/workers.html#worker-persistent-revokes
But when I run the command, I got a file-not-found error, so I created the file and ran the command again; then it tells me the db type could not be determined.
I tried to look up how to set the persistent database to use in Celery but got no results. Any help will be appreciated.
So it turns out I have to create the directory first, and the celery worker must be permitted to create a file in that directory. (The "db type could not be determined" error came from the manually created empty file, which is not a valid state database; the worker has to create the file itself.)
My solution was to create a celery directory in the project and then run:
celery -A proj worker -l info --statedb=celery/working.state
and it works
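For the /var/run/celery path from the documentation, the same idea looks like this; the celery user/group is an assumption:
# Create the state directory, make it writable by the worker's user,
# and let the worker create the state file itself:
sudo mkdir -p /var/run/celery
sudo chown celery:celery /var/run/celery
celery -A proj worker -l info --statedb=/var/run/celery/worker.state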
By following this tutorial, I now have a Celery-Django app that works fine if I launch the worker with this command:
celery -A myapp worker -n worker1.%h
In my Django settings.py, I set all the parameters for Celery (IP of the message broker, etc.). Everything is working well.
My next step now is to run this app as a daemon. So I followed this second tutorial, and everything is simple, except that now my Celery parameters from settings.py are not loaded. For example, the message broker IP falls back to 127.0.0.1, but in my settings.py I set it to another IP address.
In the tutorial, they say:
make sure that the module that defines your Celery app instance also sets a default value for DJANGO_SETTINGS_MODULE as shown in the example Django project in First steps with Django.
So I made sure of that. I have this in /etc/default/celeryd:
export DJANGO_SETTINGS_MODULE="myapp.settings"
Still not working... So I also added this line to /etc/init.d/celeryd; again, not working.
I don't know what to do anymore. Does someone have a clue?
EDIT:
Here is my celery.py:
from __future__ import absolute_import
import os
from django.conf import settings
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
EDIT #2:
Here is my /etc/default/celeryd:
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker1.%h"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="myapp"
# Where to chdir at start.
CELERYD_CHDIR="/home/ubuntu/myapp-folder/"
# Extra command-line arguments to the worker
CELERYD_OPTS=""
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="ubuntu"
CELERYD_GROUP="ubuntu"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE=myapp.settings
export PYTHONPATH=$PYTHONPATH:/home/ubuntu/myapp-folder
All the answers here could be part of the solution, but in the end, it was still not working.
I finally succeeded in making it work.
First of all, in /etc/init.d/celeryd, I have changed this line:
CELERYD_MULTI=${CELERYD_MULTI:-"celeryd-multi"}
by:
CELERYD_MULTI=${CELERYD_MULTI:-"celery multi"}
The first one was tagged as deprecated, which could be the problem.
Moreover, I added this option:
CELERYD_OPTS="--app=myapp"
And don't forget to export some environment variables:
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="myapp.settings"
export PYTHONPATH="$PYTHONPATH:/home/ubuntu/myapp-folder"
With all of this, it's now working on my side.
The problem is most likely that celeryd can't find your Django settings file because myapp.settings isn't on the $PYTHONPATH when the application runs.
From what I recall, Python will look in the $PYTHONPATH as well as the local folder when importing modules. When celeryd runs, it likely checks the path for the module, doesn't find it, then looks in the current folder for a folder with an __init__.py (i.e. a Python package).
I think all you should need to do is add this to your /etc/default/celeryd file:
export PYTHONPATH="$PYTHONPATH:/path/to/your/app"
The method below does not run celeryd; rather, it runs the celery worker as a service that is started at boot time.
Commands like sudo service celery status also work.
celery.conf
# This file sits in /etc/init
description "Celery for example"
start on runlevel [2345]
stop on runlevel [!2345]
# Send KILL after 10 seconds
kill timeout 10

script
    # project (working_ecm) and virtualenv (working_ecm/env) settings
    chdir /home/hemanth/working_ecm
    exec /home/hemanth/working_ecm/env/bin/python manage.py celery worker -B -c 2 -f /var/log/celery-ecm.log --loglevel=info >> /tmp/upstart-celery-job.log 2>&1
end script
respawn
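With this file in place, the job can be controlled like any other service (the job name follows the celery.conf file name):
sudo service celery start
sudo service celery status
sudo service celery stop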
In your second tutorial, they set the Django settings variable to:
export DJANGO_SETTINGS_MODULE="settings"
This could be a reason why your settings are not found, since the daemon changes to the directory
"/home/ubuntu/myapp-folder/"
Then you defined your app as "myapp", and you say the settings are in "myapp.settings".
This could lead it to search for the settings file in
"/home/ubuntu/myapp-folder/myapp/myapp/settings"
So my suggestion is to remove the "myapp." from the DJANGO_SETTINGS_MODULE variable, and don't forget the quotation marks.
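In /etc/default/celeryd, that suggestion would look like this; whether settings.py actually sits directly under the chdir directory is an assumption:
# Where to chdir at start.
CELERYD_CHDIR="/home/ubuntu/myapp-folder/"
# Settings module, resolved relative to CELERYD_CHDIR.
export DJANGO_SETTINGS_MODULE="settings"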
I'd like to add an answer for anyone stumbling on this more recently.
I followed the getting-started First Steps guide to a tee with Celery 4.4.7, as well as the Daemonization tutorial, without luck.
My initial issue:
celery -A app_name worker -l info works without issue (actual celery configuration is OK).
I could start celeryd as a daemon, and the status command would show OK, but it couldn't receive tasks. Checking the logs, I saw the following:
[2020-11-01 09:33:15,620: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
This was an indication that celeryd was not connecting to my broker (Redis). Given that CELERY_BROKER_URL was already set in my configuration, this meant my celery app settings were not being pulled in for the daemon process.
I tried sudo C_FAKEFORK=1 sh -x -l -E /etc/init.d/celeryd start to see whether any of my celery settings were pulled in, and I noticed that the app was set to default (not the app name specified as CELERY_APP in /etc/default/celeryd).
Since celery -A app_name worker -l info worked, I fixed the issue by exporting CELERY_APP in /etc/default/celeryd, instead of just setting the variable per the documentation.
TL;DR
If celery -A app_name worker -l info works (replace app_name with what you've defined in the Celery first steps guide), and sudo C_FAKEFORK=1 sh -x -l -E /etc/init.d/celeryd start does not show your celery app settings being pulled in, add the following to the end of your /etc/default/celeryd:
export CELERY_APP="app_name"
On localhost, I used these commands to run tasks and workers.
Run tasks:
python manage.py celery beat
Run workers:
python manage.py celery worker --loglevel=info
I used OTP, RabbitMQ server, and django-celery.
It is working fine.
I uploaded the project to an Ubuntu server and would like to daemonize these.
For that, I created a file /etc/default/celeryd with the config settings below.
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Where to chdir at start.
CELERYD_CHDIR="/home/sandbox/myprojrepo/myproj"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
I also created a file /etc/init.d/celeryd with a script I downloaded.
Now when I try to execute /etc/init.d/celeryd start, it fails with "Unrecognized command line argument".
I issued celeryd-multi start nodeN as a command and it said nodeN started, but task execution hasn't begun yet.
I am new to daemonizing and server hosting.
You can run celery within supervisor:
https://pypi.python.org/pypi/supervisor
http://thomassileo.com/blog/2012/08/20/how-to-keep-celery-running-with-supervisor/
hth.
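For reference, a minimal sketch of a supervisor program entry; the program name, paths, and user here are all assumptions, not taken from the question:
[program:celery]
; run the worker from the project directory
command=/home/ubuntu/env/bin/celery -A myapp worker --loglevel=INFO
directory=/home/ubuntu/myapp-folder
user=celery
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/celery/worker.log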
I have in my celery configuration
BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'
Yet whenever I run celeryd, I get this error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 2.00 seconds...
Why is it not connecting to the redis broker I set it up with, which is running btw?
Import your Celery app and set your broker like this:
import celeryconfig

from celery import Celery

celery = Celery('task', broker='redis://127.0.0.1:6379')
celery.config_from_object(celeryconfig)
This code belongs in celery.py
If you followed the First Steps with Celery tutorial, specifically:
app.config_from_object('django.conf:settings', namespace='CELERY')
then you need to prefix your settings with CELERY_, so change your BROKER_URL to:
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
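To illustrate how the namespace and the prefixed settings pair up, here is a minimal sketch (the Redis URL follows the question; the result backend line is an example):
# settings.py: every Celery option carries the CELERY_ prefix
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'

# celery.py: the namespace argument strips the prefix when loading
app.config_from_object('django.conf:settings', namespace='CELERY')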
I got this error because I was starting my celery worker incorrectly in the terminal.
I was running:
celery -A celery worker
But because I defined celery inside of web/server.py, I needed to run:
celery -A web.server.celery worker
web.server indicates that my celery object lives in the file server.py inside the directory web. Running the latter command connected to the broker I specified!
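For context, a minimal sketch of the layout this implies; the file contents and broker URL are assumptions:
# web/server.py -- "-A web.server.celery" points at this module's
# `celery` object (the broker URL here is only an example):
from celery import Celery

celery = Celery('server', broker='redis://127.0.0.1:6379/0')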