I am setting up the following components on a CentOS server. The supervisord task that runs the web site works, but I am blocked on the supervisor configuration for Celery. The worker seems to recognize the tasks, but when I try to execute them, nothing connects. My Redis is up and running on port 6380
Django==1.10.3
amqp==1.4.9
billiard==3.3.0.23
celery==3.1.25
kombu==3.0.37
pytz==2016.10
my celeryd.ini
[program:celeryd]
command=/root/myproject/myprojectenv/bin/celery worker -A mb --loglevel=INFO
environment=PATH="/root/myproject/myprojectenv/bin/",VIRTUAL_ENV="/root/myproject/myprojectenv",PYTHONPATH="/root/myproject/myprojectenv/lib/python2.7:/root/myproject/myprojectenv/lib/python2.7/site-packages"
directory=/home/.../myapp/
user=nobody
numprocs=1
stdout_logfile=/home/.../myapp/log_celery/worker.log
stderr_logfile=/home/.../myapp/log_celery/worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 1200
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq(redis) is supervised, it will start first.
priority=1000
The process starts and when I go to the project folder and do:
>python manage.py celery status
celery@ssd-1v: OK
1 node online.
When I open the log file of celery I see that the tasks are loaded.
[tasks]
. mb.tasks.add
. mb.tasks.update_search_index
. orders.tasks.order_created
my mb/tasks.py
from mb.celeryapp import app
import django
django.setup()
@app.task
def add(x, y):
    print(x + y)
    return x + y
my mb/celeryapp.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mb.settings")
app = Celery('mb', broker='redis://localhost:6380/', backend='redis://localhost:6380/')
app.conf.broker_url = 'redis://localhost:6380/0'
app.conf.result_backend = 'redis://localhost:6380/'
app.conf.timezone = 'Europe/Sofia'
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
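(A note while reading this file: in Celery, config_from_object() resets any configuration applied to the app before it is called, and Celery 3.1 only understands the uppercase setting names, so the lowercase broker_url/result_backend lines above may never take effect; that would also explain why the worker banner later reports results: disabled://. A minimal sketch of the same file reordered with that in mind — an assumption about the intent, not the author's code:)
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mb.settings")

app = Celery('mb')
# Load Django settings first; this call resets anything set before it.
app.config_from_object('django.conf:settings')
# Overrides applied after config_from_object survive; Celery 3.1 uses
# the uppercase setting names.
app.conf.update(
    BROKER_URL='redis://localhost:6380/0',
    CELERY_RESULT_BACKEND='redis://localhost:6380/0',
    CELERY_TIMEZONE='Europe/Sofia',
)
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)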
my mb/settings.py:
...
WSGI_APPLICATION = 'mb.wsgi.application'
BROKER_URL = 'redis://localhost:6380/0'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
...
when I run:
python manage.py shell
>>> from mb.tasks import add
>>> add.name
'mb.tasks.add'
>>> result=add.delay(1,1)
>>> result.ready()
False
>>> result.status
'PENDING'
And, as mentioned earlier, nothing new appears in the log.
If I try to run from the command line:
/root/myproject/myprojectenv/bin/celery worker -A mb --loglevel=INFO
Running a worker with superuser privileges when the
worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
But I suppose that's normal, since supervisord later runs it as user nobody. The interesting thing is that bare celery status (without python manage.py celery status) fails with a connection error: judging by the pyamqp transport in the traceback, without -A mb it falls back to the default AMQP broker on localhost:5672 instead of my Redis. Yet the supervisord process starts normally, and when I call 'celery worker -A mb' it says it's OK. Any ideas?
(myprojectenv) [root@ssd-1v]# celery status
Traceback (most recent call last):
  File "/root/myproject/myprojectenv/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/__main__.py", line 30, in main
    main()
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line 81, in main
    cmd.execute_from_commandline(argv)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line 793, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/base.py", line 311, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line 785, in handle_argv
    return self.execute(command, argv)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line 717, in execute
    ).run_from_argv(self.prog_name, argv[1:], command=argv[0])
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/base.py", line 315, in run_from_argv
    sys.argv if argv is None else argv, command)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/base.py", line 377, in handle_argv
    return self(*args, **options)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/base.py", line 274, in __call__
    ret = self.run(*args, **kwargs)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line 473, in run
    replies = I.run('ping', **kwargs)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line 325, in run
    return self.do_call_method(args, **kwargs)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line 347, in do_call_method
    return getattr(i, method)(*args)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/app/control.py", line 100, in ping
    return self._request('ping')
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/app/control.py", line 71, in _request
    timeout=self.timeout, reply=True,
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/app/control.py", line 316, in broadcast
    limit, callback, channel=channel,
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/pidbox.py", line 283, in _broadcast
    chan = channel or self.connection.default_channel
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/connection.py", line 771, in default_channel
    self.connection
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/connection.py", line 756, in connection
    self._connection = self._establish_connection()
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/connection.py", line 711, in _establish_connection
    conn = self.transport.establish_connection()
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
    conn = self.Connection(**opts)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
    self.transport = self.Transport(host, connect_timeout, ssl)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/amqp/connection.py", line 186, in Transport
    return create_transport(host, connect_timeout, ssl)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/amqp/transport.py", line 299, in create_transport
    return TCPTransport(host, connect_timeout)
  File "/root/myproject/myprojectenv/lib/python2.7/site-packages/amqp/transport.py", line 95, in __init__
    raise socket.error(last_err)
socket.error: [Errno 111] Connection refused
Any help will be highly appreciated.
UPDATE:
when I run
$ python manage.py shell
>>> from mb.tasks import add
>>> add
<@task: mb.tasks.add of mb:0x2b3f6d0>
The address 0x2b3f6d0 is different from the one Celery reports as its app in the log, namely:
[config]
- ** ---------- .> app: mb:0x3495bd0
- ** ---------- .> transport: redis://localhost:6380/0
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 1 (prefork)
OK, the answer in this case was that the gunicorn config was actually starting the project from the system-wide Python installation instead of the virtualenv.
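A quick way to catch this kind of mismatch is to log which interpreter each process actually runs under; a minimal sketch, where report_interpreter is a hypothetical task added only for diagnosis:
import sys
from mb.celeryapp import app

@app.task
def report_interpreter():
    # Prints e.g. /usr/bin/python from a process started with the system
    # Python, vs /root/myproject/myprojectenv/bin/python from the venv.
    print(sys.executable)
    return sys.executable
Calling report_interpreter.delay() from the gunicorn-served app and comparing the worker log output with sys.executable in manage.py shell would expose the two different environments immediately.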
Related
I'm facing an issue with using an active I/O connection in a SIGTERM handler under the gunicorn eventlet server.
server.py
import signal

from django_redis import get_redis_connection  # assumed source of this helper

def exit_with_grace(*args):
    conn = get_redis_connection()
    conn.set('exited_gracefully', True)

signal.signal(signal.SIGTERM, exit_with_grace)
I also tried firing a celery task instead (using the amqp broker), but all my ideas failed. When I start the server in debug mode with python server.py, it works perfectly. Gunicorn + eventlet does not allow connecting to redis in the SIGTERM handler, failing with the following error:
Traceback (most recent call last):
File "/project/handlers/socketio/redis_context_backend.py", line 256, in publish_pattern
return conn.publish(pattern, serialized)
File "/project/venv/lib/python3.6/site-packages/redis/client.py", line 3098, in publish
return self.execute_command('PUBLISH', channel, message)
File "/project/venv/lib/python3.6/site-packages/redis/client.py", line 898, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/project/venv/lib/python3.6/site-packages/redis/connection.py", line 1192, in get_connection
connection.connect()
File "/project/venv/lib/python3.6/site-packages/redis/connection.py", line 559, in connect
sock = self._connect()
File "/project/venv/lib/python3.6/site-packages/redis/connection.py", line 603, in _connect
sock.connect(socket_address)
File "/project/venv/lib/python3.6/site-packages/eventlet/greenio/base.py", line 250, in connect
self._trampoline(fd, write=True)
File "/project/venv/lib/python3.6/site-packages/eventlet/greenio/base.py", line 210, in _trampoline
mark_as_closed=self._mark_as_closed)
File "/project/venv/lib/python3.6/site-packages/eventlet/hubs/__init__.py", line 142, in trampoline
assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'
Gunicorn command:
gunicorn --worker-class eventlet -w 1 server:ws --reload -b localhost:5001
I'm following Celery's First Steps with Django for my app running in a docker container. I have rabbitmq set up in a separate docker container. Opening a python shell to run the add task just hangs/freezes without any report or error from the celery interface. Below are the details.
Django version - 3.0.5
Celery - 5.0.2
amqp - 5.0.2
kombu - 5.0.2
Rabbitmq - 3.8.9
myapp/myapp/settings.py
CELERY_BROKER_URL = 'pyamqp://guest:guest@myhost.com//'
CELERY_RESULT_BACKEND = 'db+postgresql+psycopg2://postgres:111111#myhost.com/celery'
myapp/myapp/celery.py
import os
from celery import Celery
import logging
logger = logging.getLogger(__name__)
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
logger.error('in celery')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
myapp/myapp/__init__.py
from .celery import app as celery_app
import logging
logger = logging.getLogger(__name__)
logger.error('in init')
__all__ = ('celery_app',)
myapp/otherapp/tasks.py
from celery import shared_task
import logging
logger = logging.getLogger(__name__)
@shared_task
def add(x, y):
    logger.error('add')
    return x + y
Once I run celery -A demolists worker --loglevel=INFO in the terminal, I get the following.
in celery
in init
/usr/lib/python3.8/site-packages/celery/platforms.py:797: RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(RuntimeWarning(ROOT_DISCOURAGED.format(
-------------- celery@3c389cd683b2 v5.0.2 (singularity)
--- ***** -----
-- ******* ---- Linux-4.14.186-110.268.amzn1.x86_64-x86_64-with 2020-11-29 22:12:54
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: myapp:0x7f5a52f51c70
- ** ---------- .> transport: amqp://guest:**@myhost.com:5672//
- ** ---------- .> results: postgresql+psycopg2://postgres:**@myhost.com/celery
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. myapp.celery.debug_task
. otherapp.tasks.add
[2020-11-29 22:12:55,304: INFO/MainProcess] Connected to amqp://guest:**@myhost.com:5672//
[2020-11-29 22:12:55,337: INFO/MainProcess] mingle: searching for neighbors
[2020-11-29 22:12:56,388: INFO/MainProcess] mingle: all alone
[2020-11-29 22:12:56,412: WARNING/MainProcess] /usr/lib/python3.8/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
leak, never use this setting in production environments!
warnings.warn('''Using settings.DEBUG leads to a memory
[2020-11-29 22:12:56,412: INFO/MainProcess] celery@3c389cd683b2 ready.
In a separate terminal tab, I go into the python shell and do the following.
>>> from otherapp.tasks import add
>>> result = add.delay(4, 5)
It hangs here without any change. Control-C produces this error however.
^CTraceback (most recent call last):
File "/usr/lib/python3.8/site-packages/kombu/utils/functional.py", line 32, in __call__
return self.__value__
AttributeError: 'ChannelPromise' object has no attribute '__value__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/amqp/transport.py", line 143, in _connect
entries = socket.getaddrinfo(
File "/usr/lib/python3.8/socket.py", line 918, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name does not resolve
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/kombu/utils/functional.py", line 325, in retry_over_time
return fun(*args, **kwargs)
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 866, in _connection_factory
self._connection = self._establish_connection()
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 801, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/lib/python3.8/site-packages/kombu/transport/pyamqp.py", line 128, in establish_connection
conn.connect()
File "/usr/lib/python3.8/site-packages/amqp/connection.py", line 322, in connect
self.transport.connect()
File "/usr/lib/python3.8/site-packages/amqp/transport.py", line 84, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/lib/python3.8/site-packages/amqp/transport.py", line 152, in _connect
raise (e
File "/usr/lib/python3.8/site-packages/amqp/transport.py", line 168, in _connect
self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/celery/app/task.py", line 421, in delay
return self.apply_async(args, kwargs)
File "/usr/lib/python3.8/site-packages/celery/app/task.py", line 561, in apply_async
return app.send_task(
File "/usr/lib/python3.8/site-packages/celery/app/base.py", line 718, in send_task
amqp.send_task_message(P, name, message, **options)
File "/usr/lib/python3.8/site-packages/celery/app/amqp.py", line 523, in send_task_message
ret = producer.publish(
File "/usr/lib/python3.8/site-packages/kombu/messaging.py", line 175, in publish
return _publish(
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 525, in _ensured
return fun(*args, **kwargs)
File "/usr/lib/python3.8/site-packages/kombu/messaging.py", line 184, in _publish
channel = self.channel
File "/usr/lib/python3.8/site-packages/kombu/messaging.py", line 206, in _get_channel
channel = self._channel = channel()
File "/usr/lib/python3.8/site-packages/kombu/utils/functional.py", line 34, in __call__
value = self.__value__ = self.__contract__()
File "/usr/lib/python3.8/site-packages/kombu/messaging.py", line 221, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 884, in default_channel
self._ensure_connection(**conn_opts)
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 435, in _ensure_connection
return retry_over_time(
File "/usr/lib/python3.8/site-packages/kombu/utils/functional.py", line 339, in retry_over_time
sleep(1.0)
KeyboardInterrupt
I would appreciate any assistance; I am very confused, because on one hand the celery interface suggests the worker is connected to rabbitmq, but the error messages in the shell suggest there are issues with that connection. Thank you in advance.
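The innermost exception in the shell traceback above is socket.gaierror: [Errno -2] Name does not resolve, meaning the environment running the shell cannot resolve the broker hostname at all, even though the worker's container can. A minimal check from the environment where .delay() hangs, with the hostname and port taken from CELERY_BROKER_URL above:
import socket

# If this raises gaierror as well, the problem is DNS between the
# containers (e.g. they don't share a docker network), not Celery itself.
print(socket.getaddrinfo('myhost.com', 5672))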
I have to add a few periodic tasks. I'm using Celery with Redis on the Django platform.
When I execute the method from shell_plus, all is well. However, Celery Beat is unable to find the database instance properly.
Celery version = 4.1.0. I had previously installed django-celery-beat etc.
Database = MySQL
Where am I wrong?
Thanks in advance.
Celery Command
(venv)$:/data/project/(sesh/dev)$ celery -A freightquotes worker -B -E -l INFO --autoscale=2,1
settings.py
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_BROKER_TRANSPORT = 'redis'
CELERY_BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 604800}
CELERY_RESULT_BACKEND = BROKER_URL
CELERY_TASK_RESULT_EXPIRES = datetime.timedelta(days=1) # Take note of the CleanUp task in middleware/tasks.py
CELERY_MAX_CACHED_RESULTS = 1000
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
CELERY_TRACK_STARTED = True
CELERY_SEND_EVENTS = True
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
REDIS_CONNECT_RETRY = True
REDIS_DB = 0
BROKER_POOL_LIMIT = 2
CELERYD_CONCURRENCY = 1
CELERYD_TASK_TIME_LIMIT = 600
from celery.schedules import crontab  # needed for the schedule below

CELERY_BEAT_SCHEDULE = {
    'test': {
        'task': 'loads.tasks.test',
        'schedule': crontab(minute='*/1'),
    },
}
__init__.py
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ['celery_app']
celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings.base')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
loads/tasks.py
@task()
def test():
    x = [i.id for i in Load.objects.all()]
    print(x)
Error
[2017-11-30 03:52:00,032: ERROR/ForkPoolWorker-2] Task loads.tasks.test[0020e4ae-5e52-49d8-863f-e51c2acfd7a7] raised unexpected: OperationalError('no such table: loads_load',)
Traceback (most recent call last):
File "/data/project/venv/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/data/project/venv/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py", line 328, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: loads_load
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/data/project/venv/lib/python3.4/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/data/project/venv/lib/python3.4/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/data/project/loads/tasks.py", line 146, in test
x = [i.id for i in Load.objects.all()]
File "/data/project/venv/lib/python3.4/site-packages/django/db/models/query.py", line 250, in __iter__
self._fetch_all()
File "/data/project/venv/lib/python3.4/site-packages/django/db/models/query.py", line 1103, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/data/project/venv/lib/python3.4/site-packages/django/db/models/query.py", line 53, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch)
File "/data/project/venv/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 886, in execute_sql
raise original_exception
File "/data/project/venv/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 876, in execute_sql
cursor.execute(sql, params)
File "/data/project/venv/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/data/project/venv/lib/python3.4/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/data/project/venv/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/data/project/venv/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/data/project/venv/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py", line 328, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: loads_load
I found the answer.
We have several settings files: base, dev, prod, and local. The database configuration is different in each of them. Note that the traceback above shows the sqlite3 backend even though the project uses MySQL, so Celery was clearly not loading the settings file that holds the real database config.
It works when I point the celery app to the local settings, which have the full database config. In this case I had to copy all the celery config from base to local.
I tried to pass django.conf.settings to os.environ.setdefault, but that didn't work.
So the answer is: incorrect configuration. If we have everything in one settings file, we are fine; if we split it, we have to find a workaround.
Edit
Since the issue was finding the right settings file, I now start celery by setting the settings module explicitly:
DJANGO_SETTINGS_MODULE='project.settings.dev' celery -A project worker -B -E -l INFO --autoscale=2,1
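Equivalently, the settings module can be pinned inside celery.py using the same os.environ.setdefault pattern shown above, so that worker, beat, and shell all agree on the configuration; a sketch, assuming 'project.settings.dev' is the module from the command above:
import os
from celery import Celery

# setdefault only fills the value in when the variable is unset, so an
# explicit DJANGO_SETTINGS_MODULE on the command line still wins.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings.dev')

app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()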
UPDATE: I decided to try using Django as the broker for simplicity, as I assumed I did something wrong in the Redis setup. However, after making the changes described in the docs I get the same error as below when attempting to run a Celery task with .delay(). The Celery worker starts and shows it's connected to Django for transport. Could this be a firewall issue?
ORIGINAL
I'm working on a Django project and attempting to add background tasks. I've installed Celery and chosen Redis for the broker, and installed that as well (I'm on a Windows machine, fyi). The celery worker starts, connects to the Redis server, and discovers my shared_tasks
-------------- celery@GALACTICA v3.1.19 (Cipater)
---- **** -----
--- * *** * -- Windows-7-6.1.7601-SP1
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: proj:0x2dbf970
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. app.tasks.add
. app.tasks.mul
. app.tasks.xsum
. proj.celery.debug_task
[2016-01-16 11:53:05,586: INFO/MainProcess] Connected to redis://localhost:6379/0
[2016-01-16 11:53:06,611: INFO/MainProcess] mingle: searching for neighbors
[2016-01-16 11:53:09,628: INFO/MainProcess] mingle: all alone
c:\python34\lib\site-packages\celery\fixups\django.py:265: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-01-16 11:53:14,670: WARNING/MainProcess] c:\python34\lib\site-packages\celery\fixups\django.py:265: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments! warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-01-16 11:53:14,671: WARNING/MainProcess] celery@GALACTICA ready.
I'm following the intro docs so the tasks are very simple, including one called add. I can run the tasks by themselves in a python shell, but when I attempt to call add.delay() to have celery handle it, it appears the connection isn't successful:
>>> add.delay(2,2)
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\kombu\utils\__init__.py", line 423, in __call__
return self.__value__
AttributeError: 'ChannelPromise' object has no attribute '__value__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\kombu\connection.py", line 436, in _ensured
return fun(*args, **kwargs)
File "C:\Python34\lib\site-packages\kombu\messaging.py", line 177, in _publish
channel = self.channel
File "C:\Python34\lib\site-packages\kombu\messaging.py", line 194, in _get_channel
channel = self._channel = channel()
File "C:\Python34\lib\site-packages\kombu\utils\__init__.py", line 425, in __call__
value = self.__value__ = self.__contract__()
File "C:\Python34\lib\site-packages\kombu\messaging.py", line 209, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)
  File "C:\Python34\lib\site-packages\kombu\connection.py", line 756, in default_channel
self.connection
File "C:\Python34\lib\site-packages\kombu\connection.py", line 741, in connection
self._connection = self._establish_connection()
File "C:\Python34\lib\site-packages\kombu\connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "C:\Python34\lib\site-packages\kombu\transport\pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "C:\Python34\lib\site-packages\amqp\connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "C:\Python34\lib\site-packages\amqp\connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "C:\Python34\lib\site-packages\amqp\transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "C:\Python34\lib\site-packages\amqp\transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [WinError 10061] No connection could be made because the target machine actively refused it
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\site-packages\celery\app\task.py", line 453, in delay
return self.apply_async(args, kwargs)
File "C:\Python34\lib\site-packages\celery\app\task.py", line 560, in apply_async
**dict(self._get_exec_options(), **options)
File "C:\Python34\lib\site-packages\celery\app\base.py", line 354, in send_task
reply_to=reply_to or self.oid, **options
File "C:\Python34\lib\site-packages\celery\app\amqp.py", line 305, in publish_task
**kwargs
File "C:\Python34\lib\site-packages\kombu\messaging.py", line 172, in publish
routing_key, mandatory, immediate, exchange, declare)
File "C:\Python34\lib\site-packages\kombu\connection.py", line 457, in _ensured
interval_max)
File "C:\Python34\lib\site-packages\kombu\connection.py", line 369, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "C:\Python34\lib\site-packages\kombu\utils\__init__.py", line 246, in retry_over_time
return fun(*args, **kwargs)
File "C:\Python34\lib\site-packages\kombu\connection.py", line 237, in connect
return self.connection
File "C:\Python34\lib\site-packages\kombu\connection.py", line 741, in connection
self._connection = self._establish_connection()
File "C:\Python34\lib\site-packages\kombu\connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "C:\Python34\lib\site-packages\kombu\transport\pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "C:\Python34\lib\site-packages\amqp\connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "C:\Python34\lib\site-packages\amqp\connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "C:\Python34\lib\site-packages\amqp\transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "C:\Python34\lib\site-packages\amqp\transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [WinError 10061] No connection could be made because the target machine actively refused it
There's no output on the console with the celery worker running, so I don't think it ever gets the task. I believe my settings.py, celery.py and tasks.py are alright:
settings.py:
#celery settings
BROKER_URL = 'redis://localhost:6379/0'
celery.py:
from __future__ import absolute_import
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
from django.conf import settings # noqa
app = Celery('proj')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
tasks.py:
from __future__ import absolute_import
#from proj.celery import app
from celery import shared_task
@shared_task
def add(x, y):
    return x + y

@shared_task
def mul(x, y):
    return x * y

@shared_task
def xsum(numbers):
    return sum(numbers)
My project layout is nearly identical to the Celery example Django project layout on GitHub, as well as the example here. It looks like:
proj
├── proj
│ ├── celery.py
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
├── manage.py
└── app
├── __init__.py
├── models.py
├── tasks.py
├── tests.py
└── views.py
Apologies for the other app in my project being named 'app'; it makes things a bit confusing to read, and is the result of autogenerating the base project in Visual Studio with PTVS installed. I probably could have changed it early on, but I didn't realize the name was so vague.
Thanks for any thoughts- I've been stumped by this for a while.
I was getting the same error. After scrolling all over the internet I found no solution, because I had forgotten to add the following code:
from .celery import app as celery_app
__all__ = ('celery_app',)
Adding it to the __init__.py file of my project directory resolved the error.
I got around this, but I'm not sure how. I came back to this exact configuration the next day, and tasks were making it to the celery worker.
Perhaps one of the services I restarted was the key, but I'm not sure.
If anyone else runs into this, especially on Windows: make sure your redis-server is active and that you see the incoming connections from a ping as well as the task. I had done that before posting this question, but it seems like the likely candidate for being misconfigured.
Redis won't be started automatically after installing it, so starting the Redis server will solve your problem.
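A quick way to confirm the server is actually reachable before blaming Celery, assuming the redis-py client is installed and the default localhost:6379 from the settings above:
import redis

# PING returns True only if redis-server is running and accepting
# connections on this host/port/db.
r = redis.StrictRedis(host='localhost', port=6379, db=0)
print(r.ping())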
I have the setup from the docs
celery.py file:
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'cars.settings')
app = Celery('cars')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
supervisord config
[program:bbay-celery]
command = /opt/webapps/bbay/env/bin/celery worker -A cars.celery:app ; Command to start app
directory = /opt/webapps/bbay/
user = bbay ; User to run as
numprocs = 1
stdout_logfile = /opt/webapps/bbay/logs/celery.log ; Where to write log messages
redirect_stderr = true ; Save stderr in the same log
environment=LANG='en_US.UTF-8',LC_ALL='en_US.UTF-8',DJANGO_SETTINGS_MODULE='cars.settings',CELERYD_CHDIR='/opt/webapps/bbay/'
autostart = true
autorestart = true
startsecs = 10
django settings:
BROKER_URL = 'redis://localhost:6379/0'
when I start celery beat, all seems fine and the correct broker url is used:
(env)bbay@djproj:/opt/webapps/bbay$ celery -A cars.celery:app beat
celery beat v3.1.17 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> redis://localhost:6379/0
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@%INFO
. maxinterval -> now (0s)
Here is my task:
@shared_task
def send_mail_task(template, context, send_to):
    ...
Here is how I use it:
send_mail_task.delay('email/confirmation_message.html', context, [user.email, ])
But when the task is called, it tries to connect to the default broker (host '127.0.0.1:5672'). Here is the stacktrace:
Stacktrace (most recent call last):
File "django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "django/views/decorators/csrf.py", line 57, in wrapped_view
return view_func(*args, **kwargs)
File "django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "rest_framework/views.py", line 452, in dispatch
response = self.handle_exception(exc)
File "rest_framework/views.py", line 449, in dispatch
response = handler(request, *args, **kwargs)
File "accounts/api/views.py", line 132, in post
send_mail_task.delay('email/contact_seller.html', context, [profile.user.email, ])
File "celery/app/task.py", line 453, in delay
return self.apply_async(args, kwargs)
File "celery/app/task.py", line 555, in apply_async
**dict(self._get_exec_options(), **options)
File "celery/app/base.py", line 355, in send_task
reply_to=reply_to or self.oid, **options
File "celery/app/amqp.py", line 305, in publish_task
**kwargs
File "kombu/messaging.py", line 168, in publish
routing_key, mandatory, immediate, exchange, declare)
File "kombu/connection.py", line 457, in _ensured
interval_max)
File "kombu/connection.py", line 369, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "kombu/utils/__init__.py", line 243, in retry_over_time
return fun(*args, **kwargs)
File "kombu/connection.py", line 237, in connect
return self.connection
File "kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "kombu/transport/pyamqp.py", line 112, in establish_connection
conn = self.Connection(**opts)
File "amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
So what is wrong, how do I make celery connect to the specified broker, and where is this documented in the celery docs?
The problem was that I had missed the celery import in __init__.py. __init__.py should contain the following (from the docs):
from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
Ensure the broker url in your config or settings file has the prefix specified by the namespace in app.config_from_object('django.conf:settings', namespace='CELERY'),
so I changed my BROKER_URL to CELERY_BROKER_URL.
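In other words, with namespace='CELERY' only the Django settings that carry the CELERY_ prefix are read; a minimal sketch of the pairing:
from celery import Celery

app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')

# In settings.py this means:
#   CELERY_BROKER_URL = 'redis://localhost:6379/0'  # picked up
#   BROKER_URL = 'redis://localhost:6379/0'         # ignored under this namespace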
In my case the wrong-broker issue was caused by an incorrect celery start command.
I had used 'celery beat -A=myapp', and the broker was wrong. When I changed it to 'celery -A myapp beat', it picked up the correct broker from settings.
It may also be caused by kombu.
At first my environment was:
kombu-4.6.11
celery-4.4.7
and celery could not pick up settings via config_from_object.
After downgrading to:
kombu-4.0.2
celery-4.0.2
the problem was solved!