I need to configure which queue Celery should put the result of a task execution into. I am doing it the way described in the documentation (the "reply_to" item):
@app.task(reply_to='export_task')  # <= configured the right way
def test_func():
    return "here is result of task"
Expected behavior
The task result should be in the queue named "export_task" (as configured in the decorator).
Actual behavior
The task result ends up in a queue with a name like:
d5587446-0149-3133-a3ed-d9a297d52a96
celery report:
python -m celery -A my_worker report
software -> celery:3.1.24 (Cipater) kombu:3.0.37 py:3.5.1
billiard:3.3.0.23 py-amqp:1.4.9
platform -> system:Windows arch:64bit, WindowsPE imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:rpc:///
CELERY_ACCEPT_CONTENT: ['json']
CELERY_RESULT_BACKEND: 'rpc:///'
CELERY_QUEUES:
(<unbound Queue main_check -> <unbound Exchange main_check(direct)> -> main_check>,)
CELERYD_CONCURRENCY: 10
CELERY_TASK_SERIALIZER: 'json'
CELERY_RESULT_PERSISTENT: True
CELERY_ROUTES: {
'my_worker.test_func': {'queue': 'main_check'}}
BROKER_TRANSPORT: 'amqp'
CELERYD_MAX_TASKS_PER_CHILD: 3
CELERY_RESULT_SERIALIZER: 'json'
Steps to reproduce
Please create the project files.
celery_app.py:
from celery import Celery
from kombu import Exchange, Queue
app = Celery('worker')
app.conf.update(
    CELERY_ROUTES={
        'my_worker.test_func': {'queue': 'main_check'},
    },
    BROKER_TRANSPORT='amqp',
    CELERY_RESULT_BACKEND='rpc://',
    CELERY_RESULT_PERSISTENT=True,
    # CELERY_DEFAULT_DELIVERY_MODE='persistent',
    # CELERY_RESULT_EXCHANGE='export_task',
    CELERYD_CONCURRENCY=10,
    CELERYD_MAX_TASKS_PER_CHILD=3,
    CELERY_TASK_SERIALIZER='json',
    CELERY_RESULT_SERIALIZER='json',
    CELERY_ACCEPT_CONTENT=['json'],
    CELERY_QUEUES=(
        Queue('main_check', Exchange('main_check', type='direct'), routing_key='main_check'),
    ),
)
my_worker.py:
from celery_app import app
@app.task(reply_to='export_task')
def test_func():
    return "here is result of task"
Then start Celery:
python -m celery -A my_worker worker --loglevel=info
Then, in a Python debug console, add a new task:
from my_worker import *
result = test_func.delay()
I asked for help on the official issue tracker, but nobody responded.
I don't see in your code where that queue (export_task) has been declared.
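For reference, here is a sketch of how such a queue could be declared alongside main_check, using kombu's Queue/Exchange in the same style as the existing CELERY_QUEUES entry; the export_task name mirrors the one in the question, and whether declaring it alone changes where the rpc:// backend delivers results is exactly what is in question here:
from kombu import Exchange, Queue

CELERY_QUEUES = (
    Queue('main_check', Exchange('main_check', type='direct'), routing_key='main_check'),
    # hypothetical extra queue matching the name used in the question
    Queue('export_task', Exchange('export_task', type='direct'), routing_key='export_task'),
)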
Related
I need a minimal example of a periodic task (run some function every 5 minutes, or run something at 12:00:00, etc.).
In my myapp/tasks.py, I have:
from celery.task.schedules import crontab
from celery.decorators import periodic_task
from celery import task
@periodic_task(run_every=(crontab(hour="*", minute=1)), name="run_every_1_minutes", ignore_result=True)
def return_5():
    return 5

@task
def test():
    return "test"
When I run the Celery workers, they do show the tasks (listed below) but do not return any values (in either the terminal or Flower).
[tasks]
. mathematica.core.tasks.test
. run_every_1_minutes
Please provide a minimal example or hints to achieve the desired results.
Background:
I have a config/celery.py which contains the following:
import os
from celery import Celery
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")
app = Celery('config')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
And in my config/__init__.py, I have
from .celery import app as celery_app
__all__ = ['celery_app']
I added a function like the one below in myapp/tasks.py:
from celery import task
@task
def test():
    return "test"
When I run test.delay() from the shell, it runs successfully and also shows the task information in Flower.
To run periodic tasks you also need to run celery beat. You can run it with this command:
celery -A proj beat
Or, if you are using a single worker:
celery -A proj worker -B
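As an illustration only (the schedule names, the 5-minute interval, and the task path myapp.tasks.test are assumptions based on the question), the periodic schedule can also be declared on the app itself, for example in config/celery.py after the app is created; on older Celery versions the equivalent setting name is CELERYBEAT_SCHEDULE:
from datetime import timedelta
from celery.schedules import crontab

app.conf.beat_schedule = {
    # run myapp.tasks.test every 5 minutes
    'run-test-every-5-minutes': {
        'task': 'myapp.tasks.test',
        'schedule': timedelta(minutes=5),
    },
    # run it every day at 12:00:00
    'run-test-at-noon': {
        'task': 'myapp.tasks.test',
        'schedule': crontab(hour=12, minute=0),
    },
}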
I have an issue with Celery queue routing when using current_app.send_task
I have two workers (one for each queue):
python manage.py celery worker -E -Q priority --concurrency=8 --loglevel=DEBUG
python manage.py celery worker -Q low --concurrency=8 -E -B --loglevel=DEBUG
I have two queues defined in the celeryconfig.py file:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.core.exceptions import ImproperlyConfigured
from celery import Celery
from django.conf import settings
from kombu import Queue

try:
    app = Celery('proj', broker=getattr(settings, 'BROKER_URL', 'redis://'))
except ImproperlyConfigured:
    app = Celery('proj', broker='redis://')

app.conf.update(
    CELERY_TASK_SERIALIZER='json',
    CELERY_ACCEPT_CONTENT=['json'],
    CELERY_RESULT_SERIALIZER='json',
    CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend',
    CELERY_DEFAULT_EXCHANGE='tasks',
    CELERY_DEFAULT_EXCHANGE_TYPE='topic',
    CELERY_DEFAULT_ROUTING_KEY='task.priority',
    CELERY_QUEUES=(
        Queue('priority', routing_key='priority.#'),
        Queue('low', routing_key='low.#'),
    ),
    CELERY_DEFAULT_QUEUE='priority',
    CELERY_IMPORTS=('mymodule.tasks',),
    CELERY_ENABLE_UTC=True,
    CELERY_TIMEZONE='UTC',
)

if __name__ == '__main__':
    app.start()
In the task definitions, we use the decorator to make the queue explicit:
@task(name='mymodule.mytask', routing_key='low.mytask', queue='low')
def mytask():
    # does something
    pass
This task does indeed run in the low queue when it is called using:
from mymodule.tasks import mytask
mytask.delay()
But that is not the case when it is run as follows (it goes to the default queue, "priority"):
from celery import current_app
current_app.send_task('mymodule.mytask')
I wonder why the latter way doesn't route the task to the "low" queue!
P.S.: I use Redis.
send_task is a low-level method. It sends the task signature directly to the broker without going through your task decorator.
With this method, you can even send a task without loading the task code/module.
To solve your problem, you can fetch the routing_key/queue from configuration directly:
route = celery.amqp.routes[0].route_for_task("mymodule.mytask")
Out[10]: {'queue': 'low', 'routing_key': 'low.mytask'}
celery.send_task("mymodule.mytask", queue=route['queue'], routing_key=route['routing_key'])
I have been playing with Celery on Windows 7. Right now, I am going through the Next Steps tutorial: http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
I created a celery.py file:
from __future__ import absolute_import
from celery import Celery
app = Celery('proj',
             broker='amqp://',
             backend='amqp://',
             include=['proj.tasks'])

# app.conf.update(
#     CELERY_TASK_RESULT_EXPIRES=3600,
# )

if __name__ == '__main__':
    app.start()
Then I created a tasks.py file:
from __future__ import absolute_import
from proj.celery import app
@app.task
def add(x, y):
    return x + y

@app.task
def mul(x, y):
    return x * y

@app.task
def xsum(numbers):
    return sum(numbers)
I then fired up a celery worker in one Powershell. Then in another Powershell I added a couple of integers:
>>> from proj.tasks import add
>>> res = add.delay(2, 2)
In the window running the worker, I got a result right away:
[2014-10-29 09:20:28,875: INFO/MainProcess] Received task: proj.tasks.add[3e5783ef-46a1-44d0-893f-0623e5bc0b09]
[2014-10-29 09:20:28,891: INFO/MainProcess] Task proj.tasks.add[3e5783ef-46a1-44d0-893f-0623e5bc0b09] succeeded in 0.0160000324249s: 4
However, when I try to retrieve the result in the other window with res.get(), the function just hangs. I've read the tutorial several times and looked on the web and cannot find what the issue is. Could the problem be with using amqp as the backend? I guess amqp sends states as messages instead of storing them.
Oddly enough, if I hit Ctrl+C and query the status of res, I get 'PENDING'.
>>> res.status
'PENDING'
I find this odd because I thought the task was completed. I double checked the ids to make sure.
Looks like the client is configured to use amqp as the backend:
>>> print(res.backend)
<celery.backends.amqp.AMQPBackend object at 0x00000000035C0358>
Looks like ignore_result is set to false.
>>> add
<@task: proj.tasks.add of proj:0x2298390>
>>> add.name
'proj.tasks.add'
>>> add.ignore_result
False
Turns out this is a windows issue. For the sake of honesty, I should say I got the answer from here: Celery 'Getting Started' not able to retrieve results; always pending
Basically, pass the worker the --pool=solo flag:
> C:\Python27\Scripts\celery.exe -A messaging.tasks worker --loglevel=info --pool=solo
I'm unsure what the pool implementation controls though.
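For what it's worth, the pool can also be selected in configuration instead of on the command line. This is just a sketch using the CELERYD_POOL setting (the same setting appears in the in-process test example further down):
app.conf.update(
    CELERYD_POOL='solo',  # single-threaded pool; the configuration equivalent of --pool=solo
)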
I'm running the First Steps with Celery Tutorial.
We define the following task:
from celery import Celery
app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y
Then call it:
>>> from tasks import add
>>> add.delay(4, 4)
But I get the following error:
AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
I'm running both the celery worker and the RabbitMQ server. Rather strangely, the celery worker reports the task as succeeding:
[2014-04-22 19:12:03,608: INFO/MainProcess] Task test_celery.add[168c7d96-e41a-41c9-80f5-50b24dcaff73] succeeded in 0.000435483998444s: 19
Why isn't this working?
Just keep reading the tutorial. It is explained in the Keep Results chapter.
To start Celery you only need to provide the broker parameter, which is required for sending messages about tasks. If you want to retrieve information about the state and results returned by finished tasks, you also need to set the backend parameter. You can find the full list with descriptions in the Configuration docs: CELERY_RESULT_BACKEND.
I suggest having a look at:
http://www.cnblogs.com/fangwenyu/p/3625830.html
There you will see that
instead of
app = Celery('tasks', broker='amqp://guest@localhost//')
you should be writing
app = Celery('tasks', backend='amqp', broker='amqp://guest@localhost//')
This is it.
In case anyone made the same easy-to-make mistake as I did: the tutorial doesn't say so explicitly, but the line
app = Celery('tasks', backend='rpc://', broker='amqp://')
is an EDIT of the line in your tasks.py file. Mine now reads:
app = Celery('tasks', backend='rpc://', broker='amqp://guest@localhost//')
When I run python from the command line I get:
$ python
>>> from tasks import add
>>> result = add.delay(4,50)
>>> result.ready()
>>> False
All tutorials should be easy to follow, even when a little drunk. So far this one doesn't reach that bar.
What is not clear by the tutorial is that the tasks.py module needs to be edited so that you change the line:
app = Celery('tasks', broker='pyamqp://guest@localhost//')
to include the RPC result backend:
app = Celery('tasks', backend='rpc://', broker='pyamqp://')
Once done, Ctrl + C the celery worker process and restart it:
celery -A tasks worker --loglevel=info
The tutorial is confusing in that it leads you to assume that the creation of the app object happens in the client testing session, which it does not.
In your project directory find the settings file.
Then run the below command in your terminal:
sudo vim settings.py
Copy/paste the config below into your settings.py:
CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend'
Note: This is your backend for storing the messages in the queue if you are using the django-celery package for your Django project.
Celery relies on both a backend AND a broker.
This solved it for me using only Redis:
app = Celery("tasks", backend='redis://localhost', broker="redis://localhost")
Remember to restart the worker in your terminal after changing the config.
I solved this error by passing the app after the task ID:
response = AsyncResult(taskID, app=celery_app)
where celery_app = Celery('ANYTHING', broker=BROKER_URL, backend=BACKEND_URL)
If you want to get the status of the Celery task to know whether it is "PENDING", "SUCCESS", or "FAILURE":
status = response.status
My case was simple: I was using an interactive Python console and Python had cached the imported module. I killed the console and started it again, and everything worked as it should.
import celery
app = celery.Celery('tasks', broker='redis://localhost:6379',
                    backend='mongodb://localhost:27017/celery_tasks')

@app.task
def add(x, y):
    return x + y
In the Python console:
>>> from tasks import add
>>> result = add.delay(4, 4)
>>> result.ready()
True
Switching from Windows to Linux solved the issue for me
Windows is not guaranteed to work; it's mentioned here.
I had the same issue; what resolved it for me was to import the celery file (celery.py) in your app's __init__.py with something like:
from .celery import CELERY_APP as celery_app
__all__ = ('celery_app',)
if you use a celery.py file as described here
I want to write a test case for my Celery code.
But usually Celery needs to start in a new process, like $ celery -A CELERY_MODULE worker. Does that mean I can't run my test case code directly?
I configured Celery with an in-memory store to avoid the extra I/O in the test case. That configuration can't simply share the task queue between different processes.
Here is my naive implementation.
The Celery entry point is celery.bin.celeryd.WorkerCommand; it parses the args and runs the worker.
Use the solo pool to avoid multiprocessing in the test case. Of course, you need to install that library first.
You can use this before your Celery test case starts.
#!/usr/bin/env python
# vim: encoding=utf-8
import time
import unittest
from threading import Thread

from celery import Celery, states
from celery.bin.celeryd import WorkerCommand


class CELERY_CONFIG(object):
    BROKER_URL = "memory://"
    CELERY_CACHE_BACKEND = "memory"
    CELERY_RESULT_BACKEND = "cache"
    CELERYD_POOL = "solo"


class CeleryTestCase(unittest.TestCase):

    def test_inprocess(self):
        app = Celery(__name__)
        app.config_from_object(CELERY_CONFIG)

        @app.task
        def dumpy_task(dct):
            return 321

        # run the worker in a daemon thread so the test process keeps control
        worker = WorkerCommand(app)
        # worker.execute_from_commandline(["-P solo"])
        t = Thread(target=worker.execute_from_commandline, args=(["-c 1"],))
        t.daemon = True
        t.start()

        # poll until the in-process worker has executed the task
        ar = dumpy_task.apply_async(({"a": 123},))
        while ar.status != states.SUCCESS:
            time.sleep(.01)
        self.assertEqual(states.SUCCESS, ar.status)
        self.assertEqual(ar.result, 321)
        t.join(0)
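For completeness, the module can be made runnable on its own by adding the standard unittest entry point at the bottom (a small addition, not part of the original snippet):
if __name__ == '__main__':
    unittest.main()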