Celery-Django as Daemon: Settings not found - python

By following this tutorial, I now have a Celery-Django app that works fine if I launch the worker with this command:
celery -A myapp worker -n worker1.%h
In my Django settings.py, I set all the parameters for Celery (IP of the message broker, etc.). Everything works well.
My next step is to run this app as a daemon. So I followed this second tutorial and everything is simple, except that now the Celery parameters in my settings.py are not loaded. For example, the message broker IP is set to 127.0.0.1, but in my settings.py I set it to another IP address.
In the tutorial, they say:
make sure that the module that defines your Celery app instance also sets a default value for DJANGO_SETTINGS_MODULE as shown in the example Django project in First steps with Django.
So I made sure of that. I had this in /etc/default/celeryd:
export DJANGO_SETTINGS_MODULE="myapp.settings"
Still not working... So I also added this line to /etc/init.d/celeryd; again, not working.
I don't know what to do anymore. Does someone have a clue?
EDIT:
Here is my celery.py:
from __future__ import absolute_import
import os
from django.conf import settings
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
EDIT #2:
Here is my /etc/default/celeryd:
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker1.%h"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="myapp"
# Where to chdir at start.
CELERYD_CHDIR="/home/ubuntu/myapp-folder/"
# Extra command-line arguments to the worker
CELERYD_OPTS=""
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="ubuntu"
CELERYD_GROUP="ubuntu"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE=myapp.settings
export PYTHONPATH=$PYTHONPATH:/home/ubuntu/myapp-folder

All the answers here could be part of the solution, but in the end it was still not working.
I finally succeeded in making it work.
First of all, in /etc/init.d/celeryd, I changed this line:
CELERYD_MULTI=${CELERYD_MULTI:-"celeryd-multi"}
to:
CELERYD_MULTI=${CELERYD_MULTI:-"celery multi"}
The first one is tagged as deprecated, which could be the problem.
Moreover, I added this option:
CELERYD_OPTS="--app=myapp"
And don't forget to export some environment variables:
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="myapp.settings"
export PYTHONPATH="$PYTHONPATH:/home/ubuntu/myapp-folder"
With all of this, it's now working on my side.
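For reference, a minimal sketch of the relevant lines all in one place, assuming the same paths and app name as in the question (adapt them to your project):
# /etc/init.d/celeryd -- use the non-deprecated multi command
CELERYD_MULTI=${CELERYD_MULTI:-"celery multi"}
# /etc/default/celeryd -- pass the app explicitly and export the environment
CELERYD_OPTS="--app=myapp"
export DJANGO_SETTINGS_MODULE="myapp.settings"
export PYTHONPATH="$PYTHONPATH:/home/ubuntu/myapp-folder"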

The problem is most likely that celeryd can't find your Django settings file because myapp.settings isn't on the $PYTHONPATH when the application runs.
From what I recall, Python looks in the $PYTHONPATH as well as the local folder when importing modules. When celeryd runs, it likely checks the path for a module named myapp, doesn't find it, then looks in the current folder for a folder myapp with an __init__.py (i.e. a Python package).
I think that all you should need to do is add this to your /etc/default/celeryd file:
export PYTHONPATH="$PYTHONPATH:path/to/your/app"
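A quick sanity check, assuming the paths from the question (a sketch, not part of the official setup), is to apply the same export and confirm the settings module is importable:
export PYTHONPATH="$PYTHONPATH:/home/ubuntu/myapp-folder"
python -c "import myapp.settings; print('settings importable')"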

The method below does not help to run celeryd; rather, it runs the celery worker as a service that is started at boot time.
Commands like sudo service celery status also work.
celery.conf
# This file sits in /etc/init
description "Celery for example"
start on runlevel [2345]
stop on runlevel [!2345]
#Send KILL after 10 seconds
kill timeout 10
script
# project (working_ecm) and virtualenv (working_ecm/env) settings
chdir /home/hemanth/working_ecm
exec /home/hemanth/working_ecm/env/bin/python manage.py celery worker -B -c 2 -f /var/log/celery-ecm.log --loglevel=info >> /tmp/upstart-celery-job.log 2>&1
end script
respawn
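With the job file in place, the worker can be controlled like any other Upstart service; assuming the file is named celery.conf and sits in /etc/init as above, the following commands should work:
sudo service celery start
sudo service celery status
sudo service celery stop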

In your second tutorial they set the DJANGO_SETTINGS_MODULE variable to:
export DJANGO_SETTINGS_MODULE="settings"
This could be a reason why your settings are not found, since the script changes to the directory
"/home/ubuntu/myapp-folder/"
Then you defined your app as "myapp" and you say the settings are in "myapp.settings".
This could mean that it searches for the settings file in
"/home/ubuntu/myapp-folder/myapp/myapp/settings"
So my suggestion is to remove the "myapp." prefix from the DJANGO_SETTINGS_MODULE variable, and don't forget the quotation marks.
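In sketch form, the value has to match where settings.py actually lives relative to CELERYD_CHDIR (adapt to your layout):
# if settings.py sits directly in /home/ubuntu/myapp-folder/
export DJANGO_SETTINGS_MODULE="settings"
# if settings.py sits in /home/ubuntu/myapp-folder/myapp/
export DJANGO_SETTINGS_MODULE="myapp.settings"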

I'd like to add an answer for anyone stumbling on this more recently.
I followed the First Steps getting-started guide to a tee with Celery 4.4.7, as well as the daemonization tutorial, without luck.
My initial issue:
celery -A app_name worker -l info works without issue (actual celery configuration is OK).
I could start celeryd as a daemon and the status command would show OK, but it couldn't receive tasks. Checking the logs, I saw the following:
[2020-11-01 09:33:15,620: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
This was an indication that celeryd was not connecting to my broker (Redis). Given that CELERY_BROKER_URL was already set in my configuration, this meant my celery app settings were not being pulled in for the daemon process.
I tried sudo C_FAKEFORK=1 sh -x -l -E /etc/init.d/celeryd start to see if any of my celery settings were pulled in, and I noticed that the app was set to the default (not the app name specified as CELERY_APP in /etc/default/celeryd).
Since celery -A app_name worker -l info worked, I fixed the issue by exporting CELERY_APP in /etc/default/celeryd, instead of just setting the variable as in the documentation.
TL;DR
If celery -A app_name worker -l info works (replace app_name with what you've defined in the Celery first steps guide), and sudo C_FAKEFORK=1 sh -x -l -E /etc/init.d/celeryd start does not show your celery app settings being pulled in, add the following to the end of your /etc/default/celeryd:
export CELERY_APP="app_name"
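In sketch form, the difference in /etc/default/celeryd (app_name is whatever you pass to -A on the command line):
# documented form: the init script reads it, but in my case it did not reach the worker environment
CELERY_APP="app_name"
# exporting it as well made the app name reach the daemonized worker
export CELERY_APP="app_name"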

Related

How to start remote celery workers from django

I'm trying to use Django in combination with Celery.
Therefore I came across autodiscover_tasks() and I'm not fully sure how to use it. The celery workers get tasks added by other applications (in this case a Node backend).
So far I used this to start the worker:
celery worker -Q extraction --hostname=extraction_worker
which works fine.
Now I'm not sure what the general idea of the django-celery integration is. Should workers still be started externally (e.g. with the command above), or should they be managed and started from the Django application?
My celery.py looks like:
import os
from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
app = Celery('app')
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Then I have 2 apps containing a tasks.py file with:
from celery import shared_task

@shared_task
def extraction(total):
    return 'Task executed'
How can I now get Django to register the worker for those tasks?
You just start the worker process as documented; you don't need to register anything else:
In a production environment you'll want to run the worker in the background as a daemon - see Daemonization - but for testing and development it is useful to be able to start a worker instance by using the celery worker manage command, much as you'd use Django's manage.py runserver:
celery -A proj worker -l info
For a complete listing of the command-line options available, use the help command:
celery help
The celery worker collects/registers tasks when it runs and also consumes the tasks it finds.
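If you want to double-check that autodiscovery picked up your tasks, you can ask a running worker for its registered tasks; assuming the project from the question is called main and the queue is extraction, something like:
# 'main' is the project name assumed from the settings module above; replace with yours
celery -A main worker -Q extraction --hostname=extraction_worker --loglevel=info
celery -A main inspect registered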

Celery, what is the recommended way to daemonise as a worker and a scheduler?

I have Django and Celery set up. I am only using one node for the worker.
I want to use it as an asynchronous queue and as a scheduler.
I can launch the worker as follows, with the -B option, and it will do both:
celery worker start 127.0.0.1 --app=myapp.tasks -B
However, it is unclear how to do this in production when I want to daemonise the process. Do I need to set up both init scripts?
I have tried adding the -B option to the init.d script, but it doesn't seem to have any effect. The documentation is not very clear.
Personally I use Supervisord, which has some nice options and configurability. There are example supervisord config files here
A couple of ways to achieve this:
http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html
1. The Celery distribution comes with a generic init script located at path-to-celery/celery-3.1.10/extra/generic-init.d/celeryd.
This can be placed in /etc/init.d/celeryd-name and configured using a configuration file, also present in the distribution, which would look like the following:
# Names of nodes to start (space-separated)
#CELERYD_NODES="my_application-node_1"
# Where to chdir at start. This could be the root of a virtualenv.
#CELERYD_CHDIR="/path/to/my_application"
# How to call celeryd-multi
#CELERYD_MULTI="$CELERYD_CHDIR/bin/celeryd-multi"
# Extra arguments
#CELERYD_OPTS="--app=my_application.path.to.worker --time-limit=300 --concurrency=8 --loglevel=DEBUG"
# Create log/pid dirs, if they don't already exist
#CELERY_CREATE_DIRS=1
# %n will be replaced with the nodename
#CELERYD_LOG_FILE="/path/to/my_application/log/%n.log"
#CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers run as an unprivileged user
#CELERYD_USER=my_user
#CELERYD_GROUP=my_group
You can add the following celerybeat elements to the file for celery beat configuration:
# Where to chdir at start.
CELERYBEAT_CHDIR="/opt/Myproject/"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
This config should then be saved (at least for CentOS) in /etc/default/celeryd-config-name.
Look at the init file for the exact location.
Now you can run celery as a daemon by running:
/etc/init.d/celeryd start/restart/stop
2. Using supervisord, as mentioned in the other answer.
The supervisord configuration files are also in the distribution at path-to-dist/celery-version/extra/supervisord.
Configure using those files and use supervisorctl to run the service as a daemon.
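If you would rather keep a single daemon instead of a separate celerybeat service, another option (only sensible when you run exactly one worker node) is to embed the scheduler in the worker by adding -B to the worker options; a sketch for the same config file:
# run the beat scheduler inside the worker process
CELERYD_OPTS="--app=my_application.path.to.worker -B --schedule=/var/run/celery/celerybeat-schedule"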

Running Flower using Supervisor

I am having challenges starting flower using supervisor.
The following command works on the console in my development environment:
celery --app=celery_conf.celeryapp flower --conf=flowerconfig
but moving to production with supervisor, I am getting all sorts of errors.
/supervisor/conf.d/flower.conf
[program:flower]
command=/opt/apps/venv/my_app/bin/celery flower --app=celery_conf.celeryapp --conf=flowerconfig
directory=/opt/apps/my_app
user=www-data
autostart=true
autorestart=false
redirect_stderr=true
stderr_logfile=/var/log/celery/flower.err.log
stdout_logfile=/var/log/celery/flower.out.log
With the above configuration there is no error, but all celery does is give me help-like output. It's like it doesn't acknowledge the variables passed.
Type 'celery <command> --help' for help using a specific command.
Usage: celery <command> [options]
Show help screen and exit.
Options:
-A APP, --app=APP app instance to use (e.g. module.attr_name)
-b BROKER, --broker=BROKER
url to broker. default is 'amqp://guest@localhost//'
--loader=LOADER name of custom loader class to use.
etc..
etc..
etc...
Supervisor, on the other hand, throws INFO exited: flower (exit status 64; not expected).
I have other supervisor-initiated apps using celery_beat, based on the configuration file samples on GitHub, and they are working well with the same directory paths as above.
The flowerconfig is as below:
flowerconfig.py
# Broker settings
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
# RabbitMQ management api
broker_api = 'http://guest:guest@localhost:15672/api/'
#Port
port = 5555
# Enable debug logging
logging = 'INFO'
Solution:
Well, not really a solution, so I haven't put it as an answer. It turned out there was a problem with my virtual environment, so I removed flower and installed it again using pip3.4, as I am on Python 3.4.
Something to note, though, is that for flower to use your flowerconfig file, you need to add a directory=/path/to/your/celery_config/folder/ entry in supervisor's /etc/supervisor/conf.d/flower.conf file, else flower will launch with default settings.
/etc/supervisor/conf.d/flower.conf
; ==================================
; Flower: For monitoring Celery
; ==================================
[program:flower]
command=/opt/apps/venv/my_app/bin/celery flower --app=celery_conf.celeryapp --conf=flowerconfig
directory=/opt/apps/my_app/celery_conf
; this is key, as my configuration file was in the celery_conf folder
user=www-data
autostart=true
autorestart=false
redirect_stderr=true
stderr_logfile=/var/log/celery/flower.err.log
stdout_logfile=/var/log/celery/flower.out.log
Thanks.
Your supervisor is unable to locate celeryapp. Maybe your supervisor configuration file supervisor.conf is in a different path.
You can pass the directory option to the supervisor process. So you can try:
[program:flower]
directory = /opt/apps/venv/my_app/
command = celery --app=celery_conf.celeryapp flower
This starts a new flower instance.
Also note that the celery conf and flower conf are different.
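Whichever conf you end up with, supervisor has to be told about it before the program can start; these standard supervisorctl commands are useful when debugging (assuming the program is named flower as above):
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start flower
sudo supervisorctl tail flower stderr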

Celery: Start Worker Automatically (on boot)

I have tasks (for Celery) defined in /var/tasks/tasks.py.
I have a virtualenv at /var/tasks/venv which should be used to run /var/tasks/tasks.py.
I can manually start a worker to process tasks like this:
cd /var/tasks
. venv/bin/activate
celery worker -A tasks -Q queue_1
Now, I want to daemonize this.
I copied the init.d script from GitHub and am using the following config file in /etc/default/celeryd:
# name(s) of nodes to start
CELERYD_NODES="worker1"
# absolute or relative path to celery binary
CELERY_BIN="/var/tasks/venv/bin/celery"
# app instance
CELERY_APP="tasks"
# change to directory on upstart
CELERYD_CHDIR="/var/tasks"
# options
CELERYD_OPTS="-Q queue_1 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# unprivileged user/group
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# create pid and log directories, if missing
CELERY_CREATE_DIRS=1
When I start the service (via the init.d script), it says:
celery init v10.1.
Using config script: /etc/default/celeryd
But, it does not process any tasks from the queue, nor is there anything in the log file.
What am I doing wrong?
Supervisor might be a good option, but if you want to use the Celery init.d script, I recommend copying it from their GitHub source.
sudo vim /etc/init.d/celeryd
Copy the code from https://github.com/celery/celery/blob/master/extra/generic-init.d/celeryd into the file. See the daemonization tutorial for details.
sudo chmod 755 /etc/init.d/celeryd
sudo chown root:root /etc/init.d/celeryd
sudo nano /etc/default/celeryd
Copy-paste the config below and change it accordingly:
# Where your Celery binary is present
CELERY_BIN="/home/shivam/Desktop/deploy/bin/celery"
# App instance to use
CELERY_APP="app.celery"
# Where to chdir at start
CELERYD_CHDIR="/home/shivam/Desktop/Project/demo/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# A user/group combination that already exists (e.g., nobody).
CELERYD_USER="shivam"
CELERYD_GROUP="shivam"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
export SECRET_KEY="foobar"
Save and exit
sudo /etc/init.d/celeryd start
sudo /etc/init.d/celeryd status
This will auto-start Celery on boot:
sudo update-rc.d celeryd defaults
In case you use systemd, you should enable a celery service. It will activate your celery daemon on boot.
sudo systemctl enable yourcelery.service
I ended up using Supervisor and a script at /etc/supervisor/conf.d/celery.conf similar to this:
https://github.com/celery/celery/blob/3.1/extra/supervisord/celeryd.conf
This handles daemonization, among other things, quite well and automatically.

How to daemonize django celery periodic task on ubuntu server?

On localhost, I used these commands to run tasks and workers.
Run tasks:
python manage.py celery beat
Run workers:
python manage.py celery worker --loglevel=info
I used otp, rabbitmq server and django-celery.
It is working fine.
I uploaded the project to an Ubuntu server. I would like to daemonize these.
For that I created a file /etc/default/celeryd with the config settings below.
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Where to chdir at start.
CELERYD_CHDIR="/home/sandbox/myprojrepo/myproj"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
I also created a file /etc/init.d/celeryd with the script I downloaded.
Now when I try to execute /etc/init.d/celeryd start, it gives an "Unrecognized command line argument" error.
I issued "celeryd-multi start nodeN" as a command and it said nodeN started, but task execution hasn't started yet.
I am new to daemonizing and server hosting.
You can run celery within supervisor:
https://pypi.python.org/pypi/supervisor
http://thomassileo.com/blog/2012/08/20/how-to-keep-celery-running-with-supervisor/
hth.
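If you prefer the init.d route instead, the Celery distribution also ships a generic celerybeat script next to the celeryd one; a rough sketch of wiring it up (the source path is a placeholder for wherever the Celery distribution is unpacked):
sudo cp /path/to/celery-dist/extra/generic-init.d/celerybeat /etc/init.d/celerybeat
sudo chmod 755 /etc/init.d/celerybeat
sudo /etc/init.d/celeryd start
sudo /etc/init.d/celerybeat start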
