How do I run celery status/flower without the -A option?

Consider this bash session:
$ export DJANGO_SETTINGS_MODULE=web.settings
$ celery status -b redis://redis.businessoptics.dev:6379/1 -t 10
Error: No nodes replied within time constraint.
$ celery status -b redis://redis.businessoptics.dev:6379/1 -t 10 -A scaffold.tasks.celery_app
celery@worker.9e2c39a1c42c: OK
Why do I need the -A option? As far as I can tell, Celery should be able to detect the necessary metadata from Redis.
Similarly if I run celery flower -b <redis url> it shows that it successfully connects to redis but doesn't show any real workers/tasks/queues and shows several messages like 'stats' inspect method failed. Again, adding -A causes it to work.
I want to run flower in a minimal standalone Docker container that doesn't contain any of my code or its dependencies. Several repos such as this one offer this kind of thing. So how can I do that? The linked repo offers many options but no way to specify the -A option, which suggests it is not necessary.
I'm a beginner to celery so I may be missing something stupid. What am I supposed to do?
The scaffold.tasks.celery_app module simply looks like this:
from celery import Celery
from django.conf import settings
app = Celery()
app.config_from_object(settings)
And these are the Django settings that involve celery:
{'BROKER_HEARTBEAT': 0,
'BROKER_TRANSPORT_OPTIONS': {'fanout_patterns': True,
'fanout_prefix': True,
'visibility_timeout': 172800},
'BROKER_URL': 'redis://redis.businessoptics.dev:6379/1',
'CELERYBEAT_SCHEDULE': {'journey-heartbeat': {'args': (),
'schedule': <crontab: * * * * * (m/h/d/dM/MY)>,
'task': 'kms.data.journey.tasks.heartbeat'}},
'CELERYD_CONCURRENCY': 1,
'CELERYD_HIJACK_ROOT_LOGGER': False,
'CELERYD_LOG_COLOR': False,
'CELERYD_MAX_TASKS_PER_CHILD': 1,
'CELERYD_PREFETCH_MULTIPLIER': 1,
'CELERY_ACCEPT_CONTENT': ['pickle'],
'CELERY_ACKS_LATE': True,
'CELERY_DEFAULT_EXCHANGE': 'default',
'CELERY_DEFAULT_EXCHANGE_TYPE': 'direct',
'CELERY_DEFAULT_QUEUE': 'default',
'CELERY_DEFAULT_ROUTING_KEY': 'default',
'CELERY_IGNORE_RESULT': False,
'CELERY_IMPORTS': ['kms.knowledge.query.tasks2',
# names of several more modules...
],
'CELERY_QUEUES': [<unbound Queue tablestore -> <unbound Exchange default(direct)> -> kms.data.table_store.tasks.#>,
# several more similar-looking Queues...
<unbound Queue default -> <unbound Exchange default(direct)> -> default>],
'CELERY_REDIRECT_STDOUTS': False,
'CELERY_RESULT_BACKEND': 'database',
'CELERY_RESULT_DBURI': 'mysql://businessoptics:businessoptics@mysql.businessoptics.dev:3306/product',
'CELERY_RESULT_DB_SHORT_LIVED_SESSIONS': True,
'CELERY_ROUTES': ['scaffold.tasks.routers.TaskNameRouter'],
'CELERY_SEND_EVENTS': True,
'CELERY_SEND_TASK_ERROR_EMAILS': False,
'CELERY_SEND_TASK_SENT_EVENT': True,
'CELERY_STORE_ERRORS_EVEN_IF_IGNORED': True,
'CELERY_TASKNAME_ROUTES': [('tablestore', 'kms.data.table_store.tasks.#'),
# bunch of routes...
],
'CELERY_TASK_RESULT_EXPIRES': None,
'CELERY_TIMEZONE': 'UTC',
'CELERY_TRACK_STARTED': True,
'CELERY_WORKER_DIRECT': True
}
Here are the relevant versions:
celery==3.1.19
Django==1.8
django-celery==3.1.0
redis==2.10.3

The -A option is what passes in the Celery instance and its configuration, including the package containing your tasks.
To use all of its features, Flower needs to be configured like a worker: it has to know the package where your Celery tasks live and be able to import them.
Adding the needed Python libs to your Docker container shouldn't be that hard; for example, you could add the CELERY_IMPORTS setting to that image's config file in the following way:
CELERY_IMPORTS = os.getenv('CELERY_IMPORTS', 'default.package')
UPDATE
As @asksol, the Celery creator, pointed out in the comments, here's a more detailed explanation of why you need the -A option:
Flower is also a message consumer and so will help recover unacked messages. Since you have a custom visibility defined, starting flower unconfigured means it will use the default visibility timeout and thus will redeliver unacked messages faster than your workers. Always use -A so that worker, flower and client configuration is in sync
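In practice that means giving Flower the same app module the workers use, rather than only a broker URL. A minimal sketch, assuming the Flower container has the project code (the scaffold package) and its dependencies installed:
export DJANGO_SETTINGS_MODULE=web.settings
celery -A scaffold.tasks.celery_app flower
With -A, the broker URL and BROKER_TRANSPORT_OPTIONS come from the app's own config, so Flower shares the 172800-second visibility_timeout with the workers instead of falling back to the default and redelivering their unacked messages.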

Related

Django Scheduled task using Django-q

I'm trying to run a scheduled task using Django-q. I followed the docs but it's not running.
Here's my config:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'db_cache_table',
    }
}
Q_CLUSTER = {
    'name': 'DjangORM',
    'workers': 4,
    'timeout': 90,
    'retry': 120,
    'queue_limit': 50,
    'bulk': 10,
    'orm': 'default'
}
Here's my scheduled task:
Nothing is executing. Please help.
I also had problems with getting scheduled tasks processed in the first place, but finally found a workflow.
I run django-q on a Windows machine, using the Django ORM as a broker.
Before talking about the execution routine I came up with, let's quickly check out my modules first, starting with ...
settings.py:
Q_CLUSTER = {
    "name": "austrian_energy_monthly",
    "workers": 1,
    "timeout": 10,
    "retry": 20,
    "queue_limit": 50,
    "bulk": 10,
    "orm": "default",
    "ack_failures": True,
    "max_attempts": 1,
    "attempt_count": 0,
}
... and my folder structure:
As you can see, the folder of my Django project is inside the src folder. Further, there's a folder for the app I created for this project, which is simply called "app". Inside the app folder I have another folder called "cron", which includes the following files and functions related to the scheduling:
tasks.py
I do not use the schedule() method provided by django-q; instead I create the Schedule table entries directly (see the django-q official schedule docs):
from django.utils import timezone
from austrian_energy_monthly.app.cron.func import create_text_file
from django_q.models import Schedule

Schedule.objects.create(
    func="austrian_energy_monthly.app.cron.func.create_text_file",
    kwargs={"content": "Insert this into a text file"},
    hooks="austrian_energy_monthly.app.cron.hooks.print_result",
    name="Text file creation process",
    schedule_type=Schedule.ONCE,
    next_run=timezone.now(),
)
Make sure you assign the "right" path to the "func" keyword. Just using "func.create_text_file" didn't work out for me, even though these files live in the same folder. The same goes for the "hooks" keyword.
(NOTE: I've set up my project as a development package via setup.py, so that I can call it from everywhere inside my src folder.)
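For comparison, django-q also ships a schedule() helper that creates the same table row; a minimal sketch using the same project-specific dotted paths as above (note that the helper's documented keyword for the hook is hook):
from django.utils import timezone
from django_q.models import Schedule
from django_q.tasks import schedule

schedule(
    "austrian_energy_monthly.app.cron.func.create_text_file",    # task function
    content="Insert this into a text file",                      # forwarded to the task as a kwarg
    hook="austrian_energy_monthly.app.cron.hooks.print_result",  # called with the finished task
    name="Text file creation process",
    schedule_type=Schedule.ONCE,
    next_run=timezone.now(),
)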
func.py:
Contains the function called by the schedule table object.
def create_text_file(content: str) -> str:
    file = open("copy.txt", "w")
    file.write(content)
    file.close()
    return "Created a text file"
hooks.py:
Contains the function called after the scheduled process finished.
def print_result(task):
    print(task.result)
Let's now see how I managed to get the executions running with the file examples described above:
First I scheduled the "Text file creation process". To do so I used "python manage.py shell" and imported the tasks.py module (you could probably schedule everything via the admin page as well, but I haven't tested that yet):
You can now see the scheduled task, with a question mark in the success column on the admin page (tab "Scheduled tasks", as in your picture):
After that I opened a new terminal and started the cluster with "python manage.py qcluster", resulting in the following output in the terminal:
The successful execution can be inspected by looking at "13:22:17 [Q] INFO Processed [ten-virginia-potato-high]", alongside the hook's print output "Created a text file" in the terminal. Furthermore, you can check it on the admin page under the tab "Successful Tasks", where you should see:
Hope that helped!
Django-q doesn't support Windows. :)

Celery: The module was not found

I am using Open Semantic Search (OSS) and I would like to monitor its processes using the Flower tool. The workers that Celery needs should already be provided, as OSS states on its website:
The workers will do tasks like analysis and indexing of the queued files. The workers are implemented by etl/tasks.py and will be started automatically on boot by the service opensemanticsearch.
This tasks.py file looks as follows:
#!/usr/bin/python3
# -*- coding: utf-8 -*-
#
# Queue tasks for batch processing and parallel processing
#

# Queue handler
from celery import Celery

# ETL connectors
from etl import ETL
from etl_delete import Delete
from etl_file import Connector_File
from etl_web import Connector_Web
from etl_rss import Connector_RSS

import time

verbose = True
quiet = False

app = Celery('etl.tasks')
app.conf.CELERYD_MAX_TASKS_PER_CHILD = 1

etl_delete = Delete()
etl_web = Connector_Web()
etl_rss = Connector_RSS()


#
# Delete document with URI from index
#

@app.task(name='etl.delete')
def delete(uri):
    etl_delete.delete(uri=uri)


#
# Index a file
#

@app.task(name='etl.index_file')
def index_file(filename, wait=0, config=None):
    if wait:
        time.sleep(wait)
    etl_file = Connector_File()
    if config:
        etl_file.config = config
    etl_file.index(filename=filename)


#
# Index file directory
#

@app.task(name='etl.index_filedirectory')
def index_filedirectory(filename):
    from etl_filedirectory import Connector_Filedirectory
    connector_filedirectory = Connector_Filedirectory()
    result = connector_filedirectory.index(filename)
    return result


#
# Index a webpage
#

@app.task(name='etl.index_web')
def index_web(uri, wait=0, downloaded_file=False, downloaded_headers=[]):
    if wait:
        time.sleep(wait)
    result = etl_web.index(uri, downloaded_file=downloaded_file, downloaded_headers=downloaded_headers)
    return result


#
# Index full website
#

@app.task(name='etl.index_web_crawl')
def index_web_crawl(uri, crawler_type="PATH"):
    import etl_web_crawl
    result = etl_web_crawl.index(uri, crawler_type)
    return result


#
# Index webpages from sitemap
#

@app.task(name='etl.index_sitemap')
def index_sitemap(uri):
    from etl_sitemap import Connector_Sitemap
    connector_sitemap = Connector_Sitemap()
    result = connector_sitemap.index(uri)
    return result


#
# Index RSS Feed
#

@app.task(name='etl.index_rss')
def index_rss(uri):
    result = etl_rss.index(uri)
    return result


#
# Enrich with / run plugins
#

@app.task(name='etl.enrich')
def enrich(plugins, uri, wait=0):
    if wait:
        time.sleep(wait)
    etl = ETL()
    etl.read_configfile('/etc/opensemanticsearch/etl')
    etl.read_configfile('/etc/opensemanticsearch/enhancer-rdf')
    etl.config['plugins'] = plugins.split(',')
    filename = uri
    # if present, strip the protocol prefix file://
    if filename.startswith("file://"):
        filename = filename.replace("file://", '', 1)
    parameters = etl.config.copy()
    parameters['id'] = uri
    parameters['filename'] = filename
    parameters, data = etl.process(parameters=parameters, data={})
    return data


#
# Read command line arguments and start
#

# if running directly (not imported to use its functions), run the main function
if __name__ == "__main__":
    from optparse import OptionParser
    parser = OptionParser("etl-tasks [options]")
    parser.add_option("-q", "--quiet", dest="quiet", action="store_true", default=False, help="Don't print status (filenames) while indexing")
    parser.add_option("-v", "--verbose", dest="verbose", action="store_true", default=False, help="Print debug messages")
    (options, args) = parser.parse_args()

    if options.verbose == False or options.verbose == True:
        verbose = options.verbose
        etl_delete.verbose = options.verbose
        etl_web.verbose = options.verbose
        etl_rss.verbose = options.verbose

    if options.quiet == False or options.quiet == True:
        quiet = options.quiet

    app.worker_main()
I read multiple tutorials about Celery, and from my understanding this line should do the job:
celery -A etl.tasks flower
but it doesn't. The result is the statement:
Error: Unable to load celery application. The module etl was not found.
Same for
celery -A etl.tasks worker --loglevel=debug
so Celery itself seems to be causing the trouble, not Flower. I also tried, e.g., celery -A etl.index_filedirectory worker --loglevel=debug, but with the same result.
What am I missing? Do I have to somehow tell Celery where to find etl.tasks? Online research doesn't really show a similar case, most of the "Module not found" errors seem to occur while importing stuff. So possibly it's a silly question but I couldn't find a solution anywhere. I hope you guys can help me. Unfortunately, I won't be able to respond until Monday though, sorry in advance.
I got the same issue. I installed and configured my queue as follows, and it works.
Install RabbitMQ
MacOS
brew install rabbitmq
sudo vim ~/.bash_profile
In bash_profile add the following line:
PATH=$PATH:/usr/local/sbin
Then reload bash_profile:
source ~/.bash_profile
Linux
sudo apt-get install rabbitmq-server
Configure RabbitMQ
Launch the queue:
sudo rabbitmq-server
In another Terminal, configure the queue:
sudo rabbitmqctl add_user myuser mypassword
sudo rabbitmqctl add_vhost myvhost
sudo rabbitmqctl set_user_tags myuser mytag
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
Launch Celery
I would suggest going into the folder that contains task.py and using the following command:
celery -A task worker -l info -Q celery --concurrency 5
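Note that this command alone still talks to the default guest@localhost broker; to actually use the user and vhost created above, the broker URL has to name them. A sketch, with the credentials taken from the rabbitmqctl commands above (the app/module name is illustrative):
from celery import Celery

app = Celery('tasks', broker='amqp://myuser:mypassword@localhost:5672/myvhost')
or, equivalently, on the command line with -b:
celery -A task worker -l info -Q celery --concurrency 5 -b amqp://myuser:mypassword@localhost:5672/myvhost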
Beware that this error can mean two things:
The module is missing.
The module exists but cannot be loaded, e.g. because it contains errors such as a SyntaxError.
To check that it's not the latter, run:
python -c "import <myModuleContainingTasksDotPyFile>"
In the context of this question:
python -c "import etl"
If it crashes, fix this first (unlike with celery, you'll get a detailed error message).
The solutions above did not work for me.
I had the same issue, and my problem was that in my main celery.py (which was in the SmartCalend folder) I had:
app = Celery('proj')
but instead I had to put:
app = Celery('SmartCalend')
where SmartCalend is the actual app name that celery.py belongs to (!), not any random word but precisely the app name. That's mentioned nowhere except in the official docs here:
Try export PYTHONPATH=<parent directory>, where the parent directory is the folder that contains etl. Run the Celery worker and see if it fixes your problem. This is probably one of the most common Celery "issues" (not really Celery, but Python in general). Alternatively, run the Celery worker from that folder.
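For example, a sketch assuming the etl package lives under /opt/oss (that path is hypothetical; substitute your own):
export PYTHONPATH=/opt/oss
celery -A etl.tasks worker --loglevel=debug
celery -A etl.tasks flower
If python -c "import etl.tasks" succeeds with that PYTHONPATH set, the celery commands should find the module too.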
Answer for MacOS Catalina:
When you install celery with pip (pip install celery), python can import celery, but you are not able to launch celery from the terminal because the terminal does not know of the celery executable.
Add celery to the path to fix:
nano ~/.bash_profile
In the file add: export PATH="/Users/gavinbelson/Library/Python/2.7/bin:$PATH"
To save the file in the nano editor: ctrl+o, then enter, then ctrl+x
To update the terminal with your change type: source ~/.bash_profile
Now you should be able to type celery in the terminal window
---- Note: this is for the default python command, which runs version 2.7. If you are using python3, you would need to alter the path variable accordingly.

Celery works but celeryd doesn't

I'm trying to get Django and Celery functioning in production. My Django project is laid out like so:
- project_root
  - manage.py
  - app1
    - settings.py
    - celery_config.py
    - __init__.py, models.py, etc...
  - app2
    - tasks.py
    - __init__.py, models.py, etc...
  - app3
    - tasks.py
    - __init__.py, models.py, etc...
  and so on...
Now, during development, I can run celery -A app1 worker -l info in the project_root. This autodetects tasks in the other apps and generally runs fine.
For production, I obviously need to run celery as a daemon. I've followed the celeryd instructions at the celery website.
When I run a task (either from python manage.py shell or from the running Django app), I get:
>>> from app2.tasks import add
>>> result = add.delay(1,1)
>>> result.ready()
False
>>> result.get(timeout=1)
TimeoutError
... traceback
add() is just a simple function for testing purposes:
@shared_task
def add(x, y):
    return x + y
In the celery logs, I get:
[2014-11-08 12:43:59,191: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2014-11-08 12:43:59,196: INFO/MainProcess] mingle: searching for neighbors
[2014-11-08 12:44:00,205: INFO/MainProcess] mingle: all alone
[2014-11-08 12:44:00,228: WARNING/MainProcess] w1@ubuntu ready.
[2014-11-08 12:44:09,216: ERROR/MainProcess] Received unregistered task of type 'app2.tasks.add'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
{'utc': True, 'chord': None, 'args': (1, 2), 'retries': 0, 'expires': None, 'task': 'app2.tasks.add', 'callbacks': None, 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, 'id': '82e59cbf-88be-4542-82a7-452f2fbafe95'} (213b)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 455, in on_task_received
strategies[name](message, body,
KeyError: 'app2.tasks.add'
Here is my /etc/default/celeryd for reference:
CELERYD_NODES="w1"
CELERYD_CHDIR="/var/django/project_root"
CELERYD_OPTS="--concurrency=1"
CELERY_CONFIG_MODULE="app1.celery_config"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_RESULT_BACKEND="amqp"
CELERY_CREATE_DIRS=1
And my project_root/app1/celery_config.py:
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app1.settings')
app = Celery('app1', backend='amqp', broker='amqp://')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
How can I get Celery working correctly as a daemon?
Your celery instance/app has naming problems.
Solution:
Since
celery -A app1 worker -l info
is working, if you add
CELERY_APP_ARG="app1"
to your celeryd file, everything should work fine.
Note:
Using celeryd is painful, and it is deprecated now. Instead you can use:
1. celery multi:
You can start the same worker in your project root, without any bash scripts, like this:
celery multi start my_awesome_worker -A app1 \
--pidfile="somewhere/celery/%n.pid" \
--logfile="somewhere/celery/%n.log"
One more advantage of this method is that you can start the daemon without sudo privileges.
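To stop it later, the matching multi command (a sketch, using the same pidfile as above) is:
celery multi stopwait my_awesome_worker -A app1 --pidfile="somewhere/celery/%n.pid"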
2. Supervisor:
If you are already using Supervisor, you can add one more program entry for Celery, which makes managing multiple workers super easy.
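A minimal sketch of such a program entry (paths, user and virtualenv location are illustrative):
[program:celery]
command=/var/django/project_root/venv/bin/celery -A app1 worker -l info --concurrency=1
directory=/var/django/project_root
user=celery
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/celery/worker.log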
Explanation:
When you run the worker from your terminal with
celery -A app1 worker -l info
somewhere in your log it shows a list of tasks which it will process, something like this:
[tasks]
. app1.tasks.add
Now if you do
In [1]: from app1.tasks import add
In [2]: add.name
Out[2]: 'app1.tasks.add' # attention please
In [3]: result = add.delay(1,1)
In [4]: result.ready()
Out[4]: True
In [5]: result.get(timeout=1)
Out[5]: 2
Everything works fine because the registered task name and the name of the task you run are the same. On the other hand, if you do:
In [1]: from app1 import tasks
In [2]: tasks.add.name
Out[2]: 'tasks.add' # attention please
In [3]: result = tasks.add.delay(1,1)
In [4]: result.ready()
Out[4]: False
In [5]: result.get(timeout=1)
Out[5]: TimeoutError Traceback (most recent call last)
<ipython-input-10-ade09ca12a13> in <module>()
----> 1 r.get(timeout=1)
It throws an error because the registered task name is app1.tasks.add but the task you queued is tasks.add, so your worker has no idea about the task you added. More about this here.
Warning:
Also, if you are running a Celery worker for another app, let's say foo:
celery worker -l info -A foo
which has registered task bar
[tasks]
. foo.tasks.bar
Now if you queue your old app1.tasks.add, this worker will throw a KeyError:
File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 455, in on_task_received
KeyError: 'app1.tasks.add'
because it has no idea about the task you have queued. You have to import and route tasks correctly.
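If both workers really do share one broker, one way to keep them out of each other's queues (a sketch; the queue names are illustrative) is to route each app's tasks to its own queue and start each worker with -Q:
CELERY_ROUTES = {
    'app1.tasks.add': {'queue': 'app1'},
    'foo.tasks.bar': {'queue': 'foo'},
}
and then:
celery -A app1 worker -l info -Q app1
celery -A foo worker -l info -Q foo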
The solution turned out to be adding
CELERY_APP="app1"
CELERY_NODES="app1"
to /etc/default/celeryd

In flask's documentation for celery, why do celery tasks need names?

In the documentation, the @celery.task decorator is not passed arguments, yet in the GitHub example it is named "tasks.add". Why is that? When I remove the name, the example no longer works, complaining of:
KeyError: '__main__.add'
[1] http://flask.pocoo.org/docs/0.10/patterns/celery/
[2] https://github.com/thrisp/flask-celery-example/blob/master/app.py#L25
In the Flask documentation the task name was not set because the code is assumed to be inside a tasks module, so the task's name will be automatically generated as tasks.add. From the Celery docs:
Every task must have a unique name, and a new name will be generated
out of the function name if a custom name is not provided
Check the Names section of the Celery docs for more info.
In the other example on GitHub, the author sets the name explicitly instead of relying on automatic naming, which would produce __main__.add when app.py runs as the main module, which is the case when running the Flask server.
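A quick way to see the automatic naming at work (a sketch; the broker URL is illustrative):
# tasks.py
from celery import Celery

celery = Celery(__name__, broker='redis://localhost:6379/0')  # broker URL is illustrative

@celery.task
def add(x, y):
    return x + y

# >>> from tasks import add; add.name
# 'tasks.add'
# but run the same file directly and __name__ is '__main__', so the generated
# name becomes '__main__.add' -- exactly the KeyError from the question.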
Update on why you're having this issue:
The task is being sent from the function hello_world when you access the /test page by passing x and y:
res = add.apply_async((x, y))
Because the task add is defined in the __main__ module, it will be named __main__.add and sent to the worker under this name. On the other hand, the worker that you started using:
celery worker -A app.celery
has this task registered as app.add, which is why you're getting this error:
[2014-10-10 10:32:29,540: ERROR/MainProcess] Received unregistered task of type '__main__.add'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://docs.celeryq.org/en/latest/userguide/tasks.html#task-names for more information.
The full contents of the message body was:
{'timelimit': (None, None), 'utc': True, 'chord': None, 'args': (2787476, 36096995), 'retries': 0, 'expires': None, 'task': '__main__.add', 'callbacks': None, 'errbacks': None, 'taskset': None, 'kwargs': {}, 'eta': None, 'id': '804e10a0-2569-4338-a5e3-f9e07689d1d1'} (218b)
Traceback (most recent call last):
File "/home/peter/env/celery/lib/python2.7/site-packages/celery/worker/consumer.py", line 455, in on_task_received
strategies[name](message, body,
KeyError: '__main__.add'
Check the output of the worker:
[tasks]
. app.add
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
Celery only sends the task name to the worker to execute it, so when you explicitly set the task name, the hello_world function will send the task under that name, which is the one registered in the worker.
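In other words, a minimal sketch of the fix the GitHub example uses (the exact name value is illustrative; what matters is that it matches what the worker registers):
@celery.task(name='app.add')
def add(x, y):
    return x + y
With the explicit name, the sender uses 'app.add' even when app.py runs as __main__, which matches the 'app.add' entry in the worker's task list shown above.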
Update:
The task name can be whatever you want; it can just be add, and your Celery tasks don't have to be in a tasks module at all. To understand more about task names, try this:
Remove the explicit task name and start a worker:
celery worker -A app.celery
In another terminal window, cd to the code directory, start an interactive Python shell, and try this:
>>> import app
>>> app
<module 'app' from 'app.pyc'>
>>> app.add
<@task: app.add of app:0xb6a29a6c>
>>> # check the name of the task
... app.add.name
'app.add'
>>> t = app.add.delay(2, 3)
>>> t.result
5
As you can see, we didn't use an explicit name and it worked as expected, because the name of the task where we sent it from is the same as the one registered in the worker (see above).
Now back to why you got this error when you removed the task name: the task is sent from app.py, right? In the same directory, run this:
$ python -i app.py
Then interrupt the Flask server with Ctrl + C, and try this:
>>> add.name
'__main__.add'
As you can see, this is why you got that error, and not simply because you removed the task name.

Why does Celery work in Python shell, but not in my Django views? (import problem)

I installed Celery (latest stable version.)
I have a directory called /home/myuser/fable/jobs. Inside this directory, I have a file called tasks.py:
from celery.decorators import task
from celery.task import Task
class Submitter(Task):
    def run(self, post, **kwargs):
        return "Yes, it works!!!!!!"
Inside this directory, I also have a file called celeryconfig.py:
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "abc"
BROKER_PASSWORD = "xyz"
BROKER_VHOST = "fablemq"
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("tasks", )
In my /etc/profile, I have these set as my PYTHONPATH:
PYTHONPATH=/home/myuser/fable:/home/myuser/fable/jobs
So I run my Celery worker using the console ($ celeryd --loglevel=INFO), and I try it out.
I open the Python console and import the tasks. Then, I run the Submitter.
>>> import fable.jobs.tasks as tasks
>>> s = tasks.Submitter()
>>> s.delay("abc")
<AsyncResult: d70d9732-fb07-4cca-82be-d7912124a987>
Everything works, as you can see in my console
[2011-01-09 17:30:05,766: INFO/MainProcess] Task tasks.Submitter[d70d9732-fb07-4cca-82be-d7912124a987] succeeded in 0.0398268699646s:
But when I go into my Django's views.py and run the exact 3 lines of code as above, I get this:
[2011-01-09 17:25:20,298: ERROR/MainProcess] Unknown task ignored: "Task of kind 'fable.jobs.tasks.Submitter' is not registered, please make sure it's imported.": {'retries': 0, 'task': 'fable.jobs.tasks.Submitter', 'args': ('abc',), 'expires': None, 'eta': None, 'kwargs': {}, 'id': 'eb5c65b4-f352-45c6-96f1-05d3a5329d53'}
Traceback (most recent call last):
File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/listener.py", line 321, in receive_message
eventer=self.event_dispatcher)
File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/job.py", line 299, in from_message
eta=eta, expires=expires)
File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/job.py", line 243, in __init__
self.task = tasks[self.task_name]
File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/registry.py", line 63, in __getitem__
raise self.NotRegistered(str(exc))
NotRegistered: "Task of kind 'fable.jobs.tasks.Submitter' is not registered, please make sure it's imported."
It's weird, because celeryd does show that the task is registered when I launch it.
[2011-01-09 17:38:27,446: WARNING/MainProcess]
Configuration ->
. broker -> amqp://GOGOme@localhost:5672/fablemq
. queues ->
. celery -> exchange:celery (direct) binding:celery
. concurrency -> 1
. loader -> celery.loaders.default.Loader
. logfile -> [stderr]@INFO
. events -> OFF
. beat -> OFF
. tasks ->
. tasks.Decayer
. tasks.Submitter
Can someone help?
This is what I did, which finally worked:
In settings.py I added:
CELERY_IMPORTS = ("myapp.jobs", )
Under the myapp folder I created a file called jobs.py:
from celery.decorators import task

@task(name="jobs.add")
def add(x, y):
    return x * y
Then I ran from the command line: python manage.py celeryd -l info
In another shell I ran python manage.py shell, then:
>>> from myapp.jobs import add
>>> result = add.delay(4, 4)
>>> result.result
and then I get:
16
The important point is that you have to rerun both command shells when you add a new function. You have to register the name both on the client and on the server.
:-)
I believe your tasks.py file needs to be in a Django app (one that's registered in settings.py) in order to be imported. Alternatively, you might try importing the tasks from an __init__.py file in your main project or one of the apps.
Also try starting celeryd from manage.py:
$ python manage.py celeryd -E -B -lDEBUG
(-E and -B may or may not be necessary, but that's what I use).
See Automatic Naming and Relative Imports, in the docs:
http://celeryq.org/docs/userguide/tasks.html#automatic-naming-and-relative-imports
The task's name is "tasks.Submitter" (as listed in the celeryd output),
but you import the task as "fable.jobs.tasks.Submitter".
I guess the best solution here is for the worker to also see it as "fable.jobs.tasks.Submitter";
it makes more sense from an app perspective:
CELERY_IMPORTS = ("fable.jobs.tasks", )
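With that setting (a sketch of the effect, assuming fable is on the PYTHONPATH for both the worker and Django), the worker imports the module under its full dotted path, so the registered name matches what the Django view sends:
>>> import fable.jobs.tasks as tasks
>>> s = tasks.Submitter()
>>> s.delay("abc")   # now registered and sent as fable.jobs.tasks.Submitter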
