I've been using supervisord to run Celery for my Django project for a while, but suddenly celerybeat won't start. It gives the following traceback:
Traceback (most recent call last):
File "[...]celery/apps/beat.py", line 112, in start_scheduler
beat.start()
File "[...]celery/beat.py", line 454, in start
humanize_seconds(self.scheduler.max_interval))
File "[...]kombu/utils/__init__.py", line 322, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "[...]celery/beat.py", line 494, in scheduler
return self.get_scheduler()
File "[...]celery/beat.py", line 489, in get_scheduler
lazy=lazy)
File "[...]celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File "[...]celery/beat.py", line 358, in __init__
Scheduler.__init__(self, *args, **kwargs)
File "[...]celery/beat.py", line 185, in __init__
self.setup_schedule()
File "[...]celery/beat.py", line 377, in setup_schedule
self._store['entries']
File "/usr/local/lib/python2.7/shelve.py", line 122, in __getitem__
value = Unpickler(f).load()
UnpicklingError: pickle data was truncated
Haven't been able to find anything on this.
I eventually pieced it together myself.
The issue was caused by a corrupt celerybeat-schedule file. To locate the file, run:
find ~/ -name celerybeat-schedule -print
Then delete or rename the file:
mv [filename] [newfilename]
Then restart your processes.
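For reference, the full recovery sequence looks roughly like this. This is only a sketch: the supervisor program name celerybeat is an assumption, so adjust it to match your supervisord config:
supervisorctl stop celerybeat                          # stop beat so it doesn't reopen the file mid-cleanup
find ~/ -name celerybeat-schedule -print               # locate the corrupt schedule file
mv celerybeat-schedule celerybeat-schedule.corrupt     # rename rather than delete, in case you want to inspect it later
supervisorctl start celerybeat                         # beat rebuilds the file from your schedule config on startup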
I'm trying to run Celery, but it won't start because of the following exception:
[2023-02-14 11:25:11,689: CRITICAL/MainProcess] Unrecoverable error: TypeError("unhashable type: 'dict'")
Traceback (most recent call last):
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/worker/consumer/consumer.py", line 332, in start
blueprint.start(self)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/worker/consumer/consumer.py", line 628, in start
c.loop(*c.loop_args())
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/worker/loops.py", line 94, in asynloop
update_qos()
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/kombu/common.py", line 435, in update
return self.set(self.value)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/kombu/common.py", line 428, in set
self.callback(prefetch_count=new_value)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/worker/consumer/tasks.py", line 43, in set_prefetch_count
return c.task_consumer.qos(
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/kombu/messaging.py", line 558, in qos
return self.channel.basic_qos(prefetch_size,
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/channel.py", line 1894, in basic_qos
return self.send_method(
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/abstract_channel.py", line 79, in send_method
return self.wait(wait, returns_tuple=returns_tuple)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/abstract_channel.py", line 99, in wait
self.connection.drain_events(timeout=timeout)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/connection.py", line 525, in drain_events
while not self.blocking_read(timeout):
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/connection.py", line 531, in blocking_read
return self.on_inbound_frame(frame)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/method_framing.py", line 77, in on_frame
callback(channel, msg.frame_method, msg.frame_args, msg)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/connection.py", line 537, in on_inbound_method
return self.channels[channel_id].dispatch_method(
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/abstract_channel.py", line 156, in dispatch_method
listener(*args)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/amqp/channel.py", line 1629, in _on_basic_deliver
fun(msg)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/kombu/messaging.py", line 626, in _receive_callback
return on_m(message) if on_m else self.receive(decoded, message)
File "/Users/shira/PycharmProjects/demo/venv/lib/python3.10/site-packages/celery/worker/consumer/consumer.py", line 591, in on_task_received
strategy = strategies[type_]
TypeError: unhashable type: 'dict'
I tried uninstalling Celery and stopping the RabbitMQ process, and I googled the error, but didn't find any solution.
I'm running simple, basic Celery code with only one task ("add", without any dictionary).
I thought maybe there was some issue with the libraries I import, so I put a breakpoint where the exception is thrown.
There I found out that I had sent the malformed task a few days earlier, so I understood I needed to remove it from the queue so other tasks could be processed.
I used this command:
celery -A tasks purge
This solved the issue for me :)
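Note that celery purge drops all waiting messages from the default queue. In recent Celery versions the purge command also accepts -Q/--queues and -f/--force, so if other queues hold tasks you want to keep, you can restrict the purge to the offending queue; the queue name celery below is just the default name and may differ in your setup:
celery -A tasks purge -f -Q celery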
I have 3 machines with Celery workers and RabbitMQ as a broker; one worker is running with the beat flag, all of this is managed by supervisor, and sometimes Celery dies with the error below.
The error appears only on the beat worker, but when it does, the workers on all machines die.
(celery==3.1.12, kombu==3.0.20)
[2014-07-05 08:37:04,297: INFO/MainProcess] Connected to amqp://user:**@192.168.15.106:5672//
[2014-07-05 08:37:04,311: ERROR/Beat] Process Beat
Traceback (most recent call last):
File "/var/projects/env/local/lib/python2.7/site-packages/billiard/process.py", line 292, in _bootstrap
self.run()
File "/var/projects/env/local/lib/python2.7/site-packages/celery/beat.py", line 527, in run
self.service.start(embedded_process=True)
File "/var/projects/env/local/lib/python2.7/site-packages/celery/beat.py", line 453, in start
humanize_seconds(self.scheduler.max_interval))
File "/var/projects/env/local/lib/python2.7/site-packages/kombu/utils/__init__.py", line 322, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/var/projects/env/local/lib/python2.7/site-packages/celery/beat.py", line 491, in scheduler
return self.get_scheduler()
File "/var/projects/env/local/lib/python2.7/site-packages/celery/beat.py", line 486, in get_scheduler
lazy=lazy)
File "/var/projects/env/local/lib/python2.7/site-packages/celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File "/var/projects/env/local/lib/python2.7/site-packages/celery/beat.py", line 357, in __init__
Scheduler.__init__(self, *args, **kwargs)
File "/var/projects/env/local/lib/python2.7/site-packages/celery/beat.py", line 184, in __init__
self.setup_schedule()
File "/var/projects/env/local/lib/python2.7/site-packages/celery/beat.py", line 376, in setup_schedule
self._store['entries']
File "/usr/lib/python2.7/shelve.py", line 121, in __getitem__
f = StringIO(self.dict[key])
File "/usr/lib/python2.7/bsddb/__init__.py", line 270, in __getitem__
return _DeadlockWrap(lambda: self.db[key]) # self.db[key]
File "/usr/lib/python2.7/bsddb/dbutils.py", line 68, in DeadlockWrap
return function(*_args, **_kwargs)
File "/usr/lib/python2.7/bsddb/__init__.py", line 270, in <lambda>
return _DeadlockWrap(lambda: self.db[key]) # self.db[key]
DBPageNotFoundError: (-30985, 'DB_PAGE_NOTFOUND: Requested page not found')
I've run into this issue, and the cause was a corrupted db file (usually named "celerybeat-schedule").
The solution is to delete the existing db file and restart the process.
Relevant: bsddb.db.DBPageNotFoundError
https://mail.python.org/pipermail/python-list/2009-October/554552.html
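One way to make this cleanup easier in the future is to give beat an explicit schedule path via its -s/--schedule option, so you always know exactly which file to remove; the path and app name below are just examples:
celery -A proj beat -s /var/run/celery/celerybeat-schedule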
I had to remove some temp files in the /tmp directory. One was named celeryd-<NAME_OF_WORKER>-state and another celeryd-<NAME_OF_WORKER>-state-renamed. After removing those, I was able to restart my affected worker.
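For what it's worth, that cleanup amounted to something like the following; substitute your worker's name for <NAME_OF_WORKER>, and note that the supervisor program name is an assumption:
rm /tmp/celeryd-<NAME_OF_WORKER>-state /tmp/celeryd-<NAME_OF_WORKER>-state-renamed
supervisorctl restart <name_of_worker_program>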
I just installed Pootle and I'm getting the message "Some data on this page is currently being calculated, and the page will be refreshed automatically x seconds". Upon going to the admin page, I found out that there is a failed job, so I ran pootle retry_failed_jobs on my command line.
And this is what it says :/
DoesNotExist: Directory matching query does not exist.
Traceback (most recent call last):
File "/var/www/pootle/env/local/lib/python2.7/site-packages/rq/worker.py", line 568, in perform_job
rv = job.perform()
File "/var/www/pootle/env/local/lib/python2.7/site-packages/rq/job.py", line 495, in perform
self._result = self.func(*self.args, **self.kwargs)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/core/mixins/treeitem.py", line 683, in update_cache_job
instance._update_cache_job(keys, decrement)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/core/mixins/treeitem.py", line 534, in _update_cache_job
create_update_cache_job_wrapper(p, keys_for_parent, decrement)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/core/mixins/treeitem.py", line 693, in create_update_cache_job_wrapper
connection.on_commit(_create_update_cache_job)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/transaction_hooks/mixin.py", line 31, in on_commit
func()
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/core/mixins/treeitem.py", line 692, in _create_update_cache_job
create_update_cache_job(queue, instance, keys, decrement=decrement)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/core/mixins/treeitem.py", line 707, in create_update_cache_job
last_job_key = instance.get_last_job_key()
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/core/mixins/treeitem.py", line 299, in get_last_job_key
key = self.get_cachekey()
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/apps/pootle_translationproject/models.py", line 373, in get_cachekey
return self.directory.pootle_path
File "/var/www/pootle/env/local/lib/python2.7/site-packages/django/db/models/fields/related.py", line 572, in __get__
rel_obj = qs.get()
File "/var/www/pootle/env/local/lib/python2.7/site-packages/django/db/models/query.py", line 357, in get
self.model._meta.object_name)
DoesNotExist: Directory matching query does not exist.
This actually happened after I deleted the project's language using the admin panel, which somehow also deleted that language's folder on the filesystem. What I did was create a new project and copy the translation files over. So I didn't resolve the underlying problem, but I did get rid of the endless refreshing.
The stats in Pootle are managed by Redis. Pootle can sometimes get into a state where the stats are broken. Issues like broken files can cause this. You can clean up the stats using this guide.
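If memory serves, the cleanup in that guide boils down to Pootle's own management commands, roughly as below; double-check the command names against the docs for your Pootle version, as they may differ:
pootle flush_rqdata      # clear pending and failed RQ jobs from Redis
pootle refresh_stats     # recalculate the statistics from scratch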
I'd also report the situation and any tracebacks to the Pootle developers so that they can make the stats calculations more robust.
I'm developing a Django app using Celery, with RabbitMQ as the broker. I'm starting the Celery worker with the following command (on Fedora):
python manage.py celery worker --loglevel=info
However, I'm getting the following error:
ImportError: No module named processes
In my office we use Ubuntu and don't get any errors like this.
Here's the full traceback:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/django/core/management/__init__.py", line 453, in execute_from_command_line
utility.execute()
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/djcelery/management/commands/celery.py", line 22, in run_from_argv
['%s %s' % (argv[0], argv[1])] + argv[2:],
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/bin/celery.py", line 901, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/bin/base.py", line 187, in execute_from_commandline
return self.handle_argv(prog_name, argv[1:])
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/bin/celery.py", line 893, in handle_argv
return self.execute(command, argv)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/bin/celery.py", line 868, in execute
return cls(app=self.app).run_from_argv(self.prog_name, argv)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/bin/celery.py", line 148, in run_from_argv
return self(*args, **options)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/bin/celery.py", line 118, in __call__
ret = self.run(*args, **kwargs)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/bin/celery.py", line 220, in run
return self.target.run(*args, **kwargs)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/bin/celeryd.py", line 141, in run
kwargs.get('pool_cls') or self.app.conf.CELERYD_POOL)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/celery/concurrency/__init__.py", line 26, in get_implementation
return symbol_by_name(cls, ALIASES)
File "/home/gurpinars/projects/github/Blog-Env/lib/python2.7/site-packages/kombu/utils/__init__.py", line 80, in symbol_by_name
module = imp(module_name, package=package, **kwargs)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named processes
Additionally, here is my pip freeze output:
Django==1.5.2
PIL==1.1.7
amqp==1.0.13
anyjson==0.3.3
billiard==2.7.3.32
celery==3.0.23
django-celery==3.0.23
django-debug-toolbar==0.9.4
ipdb==0.7
ipython==1.0.0
kombu==2.5.14
python-dateutil==2.1
pytz==2013d
redis==2.8.0
six==1.4.1
wsgiref==0.1.2
Any suggestions as to how I can resolve this issue?
Problem solved. On Ubuntu, RabbitMQ starts automatically, but on Fedora we had to start it manually and restart it whenever the config file changed.
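On Fedora that start/restart goes through systemd; assuming the package ships the standard rabbitmq-server unit, it's something like:
sudo systemctl enable rabbitmq-server    # start at boot, as Ubuntu does by default
sudo systemctl start rabbitmq-server
sudo systemctl restart rabbitmq-server   # run again after changing the config file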