Django Celery periodic task example - python

I need a minimal example of a periodic task (run some function every 5 minutes, or run something at 12:00:00, etc.).
In my myapp/tasks.py, I have:
from celery.task.schedules import crontab
from celery.decorators import periodic_task
from celery import task

@periodic_task(run_every=(crontab(hour="*", minute=1)), name="run_every_1_minutes", ignore_result=True)
def return_5():
    return 5

@task
def test():
    return "test"
When I run the celery worker it does show the tasks (given below) but does not return any values (in either the terminal or flower).
[tasks]
. mathematica.core.tasks.test
. run_every_1_minutes
Please provide a minimal example or hints to achieve the desired result.
Background:
I have a config/celery.py which contains the following:
import os
from celery import Celery
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")
app = Celery('config')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
And in my config/__init__.py, I have
from .celery import app as celery_app
__all__ = ['celery_app']
I added a function like the one below in myapp/tasks.py:
from celery import task

@task
def test():
    return "test"
When I run test.delay() from the shell, it runs successfully and also shows the task information in flower.

To run periodic tasks you also need to run celery beat. You can run it with this command:
celery -A proj beat
Or if you are using one worker:
celery -A proj worker -B
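For completeness, here is a minimal sketch of how the "every 5 minutes" and "at 12:00:00" schedules from the question could be wired up for beat, assuming a Celery 4+ style configuration on the app object shown above; the schedule entry names are made up for illustration, and the task name should match exactly what the worker prints at startup (mathematica.core.tasks.test in the output above):
# config/celery.py -- minimal sketch, assuming Celery 4+ and the setup above
from celery import Celery
from celery.schedules import crontab

app = Celery('config')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

app.conf.beat_schedule = {
    # run the task every 5 minutes
    'run-test-every-5-minutes': {
        'task': 'mathematica.core.tasks.test',
        'schedule': crontab(minute='*/5'),
    },
    # run it once a day at 12:00:00
    'run-test-at-noon': {
        'task': 'mathematica.core.tasks.test',
        'schedule': crontab(hour=12, minute=0),
    },
}
With that in place, celery -A config beat (or a worker started with -B) submits the tasks on schedule, and the worker executes them.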

Related

Python - Celery autoreload

How do you develop when using celery?
It seems to require a reload for every change.
I'm using this command:
watchmedo auto-restart --directory=proj/ -p '*.py' --recursive -- celery -A proj worker --concurrency=1 --loglevel=INFO
celery.py
import os
from celery import Celery
from decouple import AutoConfig

cwd = os.getcwd()
DOTENV_FILE = cwd + '/proj/config/.env'
config = AutoConfig(search_path=DOTENV_FILE)

app = Celery('proj',
             broker=config('CELERY_BROKER_URL'),
             backend=config('CELERY_RESULT_BACKEND'),
             include=['proj.tasks'])

app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()
tasks.py
from .celery import app

@app.task
def add(x, y):
    return x + y
Even if there is a technical solution for this kind of reloading, I would suggest you don't involve the celery machinery while you develop your task function because, well, it's just a function! So my approach here is to get the function done first and add the celery pieces afterwards, to check that it integrates well with other things like tasks in a chain, django, etc. The same technique applies to unit testing.
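As a rough sketch of that approach for unit tests (assuming the add task above, with pytest used purely for illustration):
# tests/test_add.py -- sketch only: the task body is a plain function,
# so it can be exercised without a broker or a running worker.
from proj.tasks import add

def test_add_as_plain_function():
    # call the decorated function directly; no celery machinery involved
    assert add(2, 3) == 5

def test_add_applied_eagerly():
    # apply() runs the task locally and synchronously, still without a worker
    assert add.apply(args=(2, 3)).get() == 5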

Celery task group not being executed in background and results in exception

My Celery task isn't executing in the background in my Django 1.7/Python3 project.
# settings.py
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = BROKER_URL
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_ALWAYS_EAGER = False
I have celery.py in my root app module as such:
from __future__ import absolute_import
import os
import django
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')
django.setup()
app = Celery('my_app')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
and load the app in __init__.py in the root module:
from __future__ import absolute_import
from .celery import app as celery_app
My task is set up as a shared task in a tasks.py file in my app module:
from __future__ import absolute_import
from celery import shared_task
@shared_task
def update_statistics(profile, category):
    # more code
and I call the task as a group:
. . .
job = group([update_statistics(f.profile, category)
             for f in forecasts])
job.apply_async()
However, I'm not seeing any status updates in my task queue, which I am starting via:
$ celery -A my_app worker -l info
The task is being executed, just not in the background. If I add a print statement to the task code, I will see the output in my Django development server console instead of the Celery queue.
After the task runs in the foreground, I'm greeted with this exception:
'NoneType' object has no attribute 'app'
Here's the full traceback if you're interested: https://gist.github.com/alsoicode/0263d251e3744227ba46
You're calling the tasks directly in your list comprehension when you create the group, so they're executed then and there. You need to use the .subtask() method (or its shortcut, .s()) to create the subtasks without calling them:
job = group([update_statistics.s(f.profile, category) for f in forecasts])
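If you also need the results of the group, note that a result backend must be configured; a rough sketch of collecting them, using the same names as above:
# Rough sketch, assuming a result backend is configured.
job = group(update_statistics.s(f.profile, category) for f in forecasts)
result = job.apply_async()
results = result.get()  # blocks until every subtask in the group has finished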

Celery task routing doesn't work when using current_app.send_task

I have an issue with Celery queue routing when using current_app.send_task
I have two workers (one for each queue):
python manage.py celery worker -E -Q priority --concurrency=8 --loglevel=DEBUG
python manage.py celery worker -Q low --concurrency=8 -E -B --loglevel=DEBUG
I have two queues defined in celeryconfig.py file:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.core.exceptions import ImproperlyConfigured
from django.conf import settings
from celery import Celery
from kombu import Queue

try:
    app = Celery('proj', broker=getattr(settings, 'BROKER_URL', 'redis://'))
except ImproperlyConfigured:
    app = Celery('proj', broker='redis://')

app.conf.update(
    CELERY_TASK_SERIALIZER='json',
    CELERY_ACCEPT_CONTENT=['json'],
    CELERY_RESULT_SERIALIZER='json',
    CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend',
    CELERY_DEFAULT_EXCHANGE='tasks',
    CELERY_DEFAULT_EXCHANGE_TYPE='topic',
    CELERY_DEFAULT_ROUTING_KEY='task.priority',
    CELERY_QUEUES=(
        Queue('priority', routing_key='priority.#'),
        Queue('low', routing_key='low.#'),
    ),
    CELERY_IMPORTS=('mymodule.tasks',),
    CELERY_ENABLE_UTC=True,
    CELERY_TIMEZONE='UTC',
)

if __name__ == '__main__':
    app.start()
In the task definitions, we use the decorator to make the queue explicit:
@task(name='mymodule.mytask', routing_key='low.mytask', queue='low')
def mytask():
    # does something
    pass
This task indeed runs in the low queue when it is called using:
from mymodule.tasks import mytask
mytask.delay()
But that's not the case when it's run using the following (it ends up in the default queue, "priority"):
from celery import current_app
current_app.send_task('mymodule.mytask')
I wonder why this latter way doesn't route the task to the "low" queue!
p.s: I use redis.
send_task is a low-level method. It sends the task signature directly to the broker without going through your task decorator.
With this method, you can even send a task without loading the task code/module.
To solve your problem, you can fetch the routing_key/queue from configuration directly:
route = celery.amqp.routes[0].route_for_task("mymodule.mytask")
Out[10]: {'queue': 'low', 'routing_key': 'low.mytask'}
celery.send_task("mymodule.mytask", queue=route['queue'], routing_key=route['routing_key'])
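Another option, since send_task does go through the app's router, is to declare the route in the configuration rather than only on the decorator, so that tasks sent by name should be routed as well. A sketch using the same Celery 3.x setting names as the config above:
# Sketch: route by task name in the app config so that
# current_app.send_task('mymodule.mytask') lands in the 'low' queue too.
app.conf.update(
    CELERY_ROUTES={
        'mymodule.mytask': {'queue': 'low', 'routing_key': 'low.mytask'},
    },
)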

Celery with RabbitMQ: AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'

I'm running the First Steps with Celery Tutorial.
We define the following task:
from celery import Celery
app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y
Then call it:
>>> from tasks import add
>>> add.delay(4, 4)
But I get the following error:
AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
I'm running both the celery worker and the rabbit-mq server. Rather strangely, celery worker reports the task as succeeding:
[2014-04-22 19:12:03,608: INFO/MainProcess] Task test_celery.add[168c7d96-e41a-41c9-80f5-50b24dcaff73] succeeded in 0.000435483998444s: 19
Why isn't this working?
Just keep reading the tutorial. It is explained in the Keep Results chapter.
To start Celery you only need to provide the broker parameter, which is required to send messages about tasks. If you want to retrieve information about the state and results returned by finished tasks, you need to set the backend parameter. You can find the full list with descriptions in the Configuration docs: CELERY_RESULT_BACKEND.
I suggest having a look at:
http://www.cnblogs.com/fangwenyu/p/3625830.html
There you will see that
instead of
app = Celery('tasks', broker='amqp://guest@localhost//')
you should be writing
app = Celery('tasks', backend='amqp', broker='amqp://guest@localhost//')
This is it.
In case anyone made the same easy-to-make mistake as I did: the tutorial doesn't say so explicitly, but the line
app = Celery('tasks', backend='rpc://', broker='amqp://')
is an EDIT of the line in your tasks.py file. Mine now reads:
app = Celery('tasks', backend='rpc://', broker='amqp://guest@localhost//')
When I run python from the command line I get:
$ python
>>> from tasks import add
>>> result = add.delay(4,50)
>>> result.ready()
False
All tutorials should be easy to follow, even when a little drunk. So far this one doesn't reach that bar.
What is not made clear by the tutorial is that the tasks.py module needs to be edited so that you change the line:
app = Celery('tasks', broker='pyamqp://guest@localhost//')
to include the RPC result backend:
app = Celery('tasks', backend='rpc://', broker='pyamqp://')
Once done, Ctrl + C the celery worker process and restart it:
celery -A tasks worker --loglevel=info
The tutorial is confusing in that it leads you to assume that the creation of the app object is done in the client testing session, which it is not.
In your project directory find the settings file.
Then run the below command in your terminal:
sudo vim settings.py
copy/paste the below config into your settings.py:
CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend'
Note: this is the backend for storing task results if you are using the django-celery package for your Django project.
Celery relies on both a backend AND a broker.
This solved it for me using only Redis:
app = Celery("tasks", backend='redis://localhost', broker="redis://localhost")
Remember to restart the worker in your terminal after changing the config.
I solved this error by adding app after taskID:
response = AsyncResult(taskID, app=celery_app)
where celery_app = Celery('ANYTHING', broker=BROKER_URL, backend=BACKEND_URL)
If you want to get the status of the celery task, to know whether it is "PENDING", "SUCCESS", or "FAILURE":
status = response.status
My case was simple: I used the interactive Python console and Python cached the imported module. I killed the console and started it again, and everything worked as it should.
import celery

app = celery.Celery('tasks', broker='redis://localhost:6379',
                    backend='mongodb://localhost:27017/celery_tasks')

@app.task
def add(x, y):
    return x + y
In the Python console:
>>> from tasks import add
>>> result = add.delay(4, 4)
>>> result.ready()
True
Switching from Windows to Linux solved the issue for me.
Windows is not guaranteed to work; it's mentioned here.
I had the same issue; what resolved it for me was to import the celery file (celery.py) in the __init__.py of your app with something like:
from .celery import CELERY_APP as celery_app
__all__ = ('celery_app',)
if you use a celery.py file as described here

Can Python Celery be started in-process with a thread?

I want to make a test case for my celery code.
But usually celery needs to be started in a new process, like $ celery -A CELERY_MODULE worker. Does that mean I can't run my test case code directly?
I configured Celery with a memory store to avoid the extra I/O in the test case. That config can't simply share the task queue between different processes.
Here is my naive implementation.
The celery entry point is celery.bin.celeryd.WorkerCommand; it parses the args and runs the worker.
Use the solo pool to avoid multiprocessing in this case. Of course you need to install that lib first.
You could use this before your celery test case starts.
#!/usr/bin/env python
# vim: encoding=utf-8
import time
import unittest
from threading import Thread

from celery import Celery, states
from celery.bin.celeryd import WorkerCommand


class CELERY_CONFIG(object):
    BROKER_URL = "memory://"
    CELERY_CACHE_BACKEND = "memory"
    CELERY_RESULT_BACKEND = "cache"
    CELERYD_POOL = "solo"


class CeleryTestCase(unittest.TestCase):

    def test_inprocess(self):
        app = Celery(__name__)
        app.config_from_object(CELERY_CONFIG)

        @app.task
        def dumpy_task(dct):
            return 321

        worker = WorkerCommand(app)
        # worker.execute_from_commandline(["-P solo"])
        t = Thread(target=worker.execute_from_commandline, args=(["-c 1"],))
        t.daemon = True
        t.start()

        ar = dumpy_task.apply_async(({"a": 123},))
        while ar.status != states.SUCCESS:
            time.sleep(.01)
        self.assertEqual(states.SUCCESS, ar.status)
        self.assertEqual(ar.result, 321)
        t.join(0)
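On newer Celery versions (4+) a similar thing can be done with the bundled testing helpers instead of driving celery.bin directly; a rough sketch, assuming celery.contrib.testing is available in your install:
# Rough sketch using Celery's bundled test worker (Celery 4+).
from celery import Celery
from celery.contrib.testing.worker import start_worker

app = Celery(__name__, broker='memory://', backend='cache+memory://')

@app.task
def dumpy_task(dct):
    return 321

def test_inprocess():
    # runs a worker (solo pool) in a background thread for the with-block
    with start_worker(app, perform_ping_check=False):
        assert dumpy_task.delay({'a': 123}).get(timeout=10) == 321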
