I want to use Celery with Pyramid. I have been trying to use the pyramid_celery package, but all my attempts have failed.
My development.ini has:
BROKER_URL = amqp://dev:dev@192.168.1.50:5672//test
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json', 'application/json']
CELERY_RESULT_BACKEND = amqp://dev:dev@192.168.1.50:5672//test
;CELERY_ACCEPT_CONTENT = json12
CELERY_IMPORTS = celerypythontest.celery_service
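The celerypythontest.celery_service module referenced by CELERY_IMPORTS is not shown here; for reference, a minimal pyramid_celery task module might look something like this (a hypothetical sketch, with made-up task names):
from pyramid_celery import celery_app as app

@app.task
def do_work(x, y):
    # Placeholder body; the real celery_service tasks are not shown in the question.
    return x + y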
When I try to run this command:
celery worker -A pyramid_celery.celery_app --ini development.ini
I get this output:
H:\Development\CeleryPythonTest>celery worker -A pyramid_celery.celery_app --ini development.ini
-------------- celery@CUBA v3.1.17 (Cipater)
---- **** -----
--- * *** * -- Windows-8-6.2.9200
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: __main__:0x3c550b8
- ** ---------- .> transport: amqp://dev:**@192.168.1.50:5672//test
- ** ---------- .> results: amqp://dev:dev@192.168.1.50:5672//test
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
2015-03-04 19:41:34,963 ERROR [celery.worker][MainThread] Unrecoverable error: PicklingError("Can't pickle <function ViewDeriver._response_resolved_view.<locals>.viewresult_to_response at 0x00000000051DF378>: attribute lookup ViewDeriver._response_resolved_view.<locals>.viewresult_to_response on pyramid.static failed",)
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\celery\worker\__init__.py", line 206, in start
self.blueprint.start(self)
File "C:\Python34\lib\site-packages\celery\bootsteps.py", line 123, in start
step.start(parent)
File "C:\Python34\lib\site-packages\celery\bootsteps.py", line 374, in start
return self.obj.start()
File "C:\Python34\lib\site-packages\celery\concurrency\base.py", line 131, in start
self.on_start()
File "C:\Python34\lib\site-packages\celery\concurrency\prefork.py", line 117, in on_start
**self.options)
File "C:\Python34\lib\site-packages\billiard\pool.py", line 966, in __init__
self._create_worker_process(i)
File "C:\Python34\lib\site-packages\billiard\pool.py", line 1062, in _create_worker_process
w.start()
File "C:\Python34\lib\site-packages\billiard\process.py", line 137, in start
self._popen = Popen(self)
File "C:\Python34\lib\site-packages\billiard\forking.py", line 263, in __init__
dump(process_obj, to_child, HIGHEST_PROTOCOL)
File "C:\Python34\lib\site-packages\billiard\py3\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function ViewDeriver._response_resolved_view.<locals>.viewresult_to_response at 0x00000000051DF378>: attribute lookup ViewDeriver._response_resolved_view.<locals>.viewresult_to_response on pyramid.static failed
H:\Development\CeleryPythonTest>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python34\lib\site-packages\billiard\forking.py", line 459, in main
self = load(from_parent)
EOFError: Ran out of input
And every time I try to run Celery, the error output is different:
_pickle.PicklingError: Can't pickle <InterfaceClass pyramid.request.__static/_IRequest>: attribute lookup __static/_IRequest on pyramid.request failed
_pickle.PicklingError: Can't pickle <function _compile_route.<locals>.matcher at 0x0000000004BB0620>: attribute lookup _compile_route.<locals>.matcher on pyramid.urldispatch failed
_pickle.PicklingError: Can't pickle <function beforerender_subscriber at 0x0000000003CD8D90>: it's not the same object as pyramid_debugtoolbar.toolbar.beforerender_subscriber
Can anyone tell me what I am doing wrong?
Thanks!
Run it under Linux, and all the problems are gone!
Don't use Celery on Windows!
Don't develop in Python on Windows!
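If you cannot move off Windows, one workaround that is sometimes suggested (untested with this pyramid_celery setup, and still unsupported) is to avoid the prefork pool entirely, since the error comes from pickling the Pyramid app while spawning child worker processes:
celery worker -A pyramid_celery.celery_app --ini development.ini --pool=solo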
Related
I'm following the First Steps with Django guide for my app, which runs in a Docker container. I have RabbitMQ set up in a separate Docker container. Opening a Python shell to run the add task just results in hanging/freezing, without any reports or errors from the Celery interface. Below are the details.
Django version - 3.0.5
Celery - 5.0.2
amqp - 5.0.2
kombu - 5.0.2
Rabbitmq - 3.8.9
myapp/myapp/settings.py
CELERY_BROKER_URL = 'pyamqp://guest:guest@myhost.com//'
CELERY_RESULT_BACKEND = 'db+postgresql+psycopg2://postgres:111111@myhost.com/celery'
myapp/myapp/celery.py
import os
from celery import Celery
import logging
logger = logging.getLogger(__name__)
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
logger.error('in celery')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
myapp/myapp/__init__.py
from .celery import app as celery_app
import logging
logger = logging.getLogger(__name__)
logger.error('in init')
__all__ = ('celery_app',)
myapp/otherapp/tasks.py
from celery import shared_task
import logging
logger = logging.getLogger(__name__)
@shared_task
def add(x, y):
    logger.error('add')
    return x + y
Once I run celery -A demolists worker --loglevel=INFO in the terminal, I get the following.
in celery
in init
/usr/lib/python3.8/site-packages/celery/platforms.py:797: RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(RuntimeWarning(ROOT_DISCOURAGED.format(
-------------- celery@3c389cd683b2 v5.0.2 (singularity)
--- ***** -----
-- ******* ---- Linux-4.14.186-110.268.amzn1.x86_64-x86_64-with 2020-11-29 22:12:54
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: myapp:0x7f5a52f51c70
- ** ---------- .> transport: amqp://guest:**@myhost.com:5672//
- ** ---------- .> results: postgresql+psycopg2://postgres:**@myhost.com/celery
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. myapp.celery.debug_task
. otherapp.tasks.add
[2020-11-29 22:12:55,304: INFO/MainProcess] Connected to amqp://guest:**@myhost.com:5672//
[2020-11-29 22:12:55,337: INFO/MainProcess] mingle: searching for neighbors
[2020-11-29 22:12:56,388: INFO/MainProcess] mingle: all alone
[2020-11-29 22:12:56,412: WARNING/MainProcess] /usr/lib/python3.8/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
leak, never use this setting in production environments!
warnings.warn('''Using settings.DEBUG leads to a memory
[2020-11-29 22:12:56,412: INFO/MainProcess] celery@3c389cd683b2 ready.
In a separate terminal tab, I go into the python shell and do the following.
>>> from otherapp.tasks import add
>>> result = add.delay(4, 5)
It hangs here without any change. Pressing Ctrl-C, however, produces this error:
^CTraceback (most recent call last):
File "/usr/lib/python3.8/site-packages/kombu/utils/functional.py", line 32, in __call__
return self.__value__
AttributeError: 'ChannelPromise' object has no attribute '__value__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/amqp/transport.py", line 143, in _connect
entries = socket.getaddrinfo(
File "/usr/lib/python3.8/socket.py", line 918, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name does not resolve
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/kombu/utils/functional.py", line 325, in retry_over_time
return fun(*args, **kwargs)
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 866, in _connection_factory
self._connection = self._establish_connection()
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 801, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/lib/python3.8/site-packages/kombu/transport/pyamqp.py", line 128, in establish_connection
conn.connect()
File "/usr/lib/python3.8/site-packages/amqp/connection.py", line 322, in connect
self.transport.connect()
File "/usr/lib/python3.8/site-packages/amqp/transport.py", line 84, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/lib/python3.8/site-packages/amqp/transport.py", line 152, in _connect
raise (e
File "/usr/lib/python3.8/site-packages/amqp/transport.py", line 168, in _connect
self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/celery/app/task.py", line 421, in delay
return self.apply_async(args, kwargs)
File "/usr/lib/python3.8/site-packages/celery/app/task.py", line 561, in apply_async
return app.send_task(
File "/usr/lib/python3.8/site-packages/celery/app/base.py", line 718, in send_task
amqp.send_task_message(P, name, message, **options)
File "/usr/lib/python3.8/site-packages/celery/app/amqp.py", line 523, in send_task_message
ret = producer.publish(
File "/usr/lib/python3.8/site-packages/kombu/messaging.py", line 175, in publish
return _publish(
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 525, in _ensured
return fun(*args, **kwargs)
File "/usr/lib/python3.8/site-packages/kombu/messaging.py", line 184, in _publish
channel = self.channel
File "/usr/lib/python3.8/site-packages/kombu/messaging.py", line 206, in _get_channel
channel = self._channel = channel()
File "/usr/lib/python3.8/site-packages/kombu/utils/functional.py", line 34, in __call__
value = self.__value__ = self.__contract__()
File "/usr/lib/python3.8/site-packages/kombu/messaging.py", line 221, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 884, in default_channel
self._ensure_connection(**conn_opts)
File "/usr/lib/python3.8/site-packages/kombu/connection.py", line 435, in _ensure_connection
return retry_over_time(
File "/usr/lib/python3.8/site-packages/kombu/utils/functional.py", line 339, in retry_over_time
sleep(1.0)
KeyboardInterrupt
I would appreciate any assistance. I am very confused by what is happening: on one hand, the Celery interface suggests it is connected to RabbitMQ, but the error messages in the shell suggest there is some issue with that connection. Thank you in advance.
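One way to narrow this down (a minimal sketch, assuming the broker URL from settings.py and the kombu version listed above) is to test the broker connection directly from the same shell, since the traceback ends in DNS and connection-refused errors rather than anything Celery-specific:
from kombu import Connection

# Hypothetical connectivity check; reuses the CELERY_BROKER_URL value from settings.py.
conn = Connection('pyamqp://guest:guest@myhost.com//')
try:
    conn.ensure_connection(max_retries=1)  # raises if the host cannot be resolved or reached
    print('broker reachable')
except Exception as exc:
    print('broker unreachable:', exc)
finally:
    conn.release()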
I am going through the setup of the following components on a CentOS server. I got the supervisord task for the web site up and running, but I am blocked on setting up supervisor for Celery. It seems that it recognizes the tasks, but when I try to execute them, it won't connect. My Redis is up and running on port 6380.
Django==1.10.3
amqp==1.4.9
billiard==3.3.0.23
celery==3.1.25
kombu==3.0.37
pytz==2016.10
my celeryd.ini
[program:celeryd]
command=/root/myproject/myprojectenv/bin/celery worker -A mb --loglevel=INFO
environment=PATH="/root/myproject/myprojectenv/bin/",VIRTUAL_ENV="/root/myproject/myprojectenv",PYTHONPATH="/root/myproject/myprojectenv/lib/python2.7:/root/myproject/myprojectenv/lib/python2.7/site-packages"
directory=/home/.../myapp/
user=nobody
numprocs=1
stdout_logfile=/home/.../myapp/log_celery/worker.log
stderr_logfile=/home/.../myapp/log_celery/worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 1200
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq(redis) is supervised, it will start first.
priority=1000
The process starts and when I go to the project folder and do:
>python manage.py celery status
celery@ssd-1v: OK
1 node online.
When I open the Celery log file, I see that the tasks are loaded.
[tasks]
. mb.tasks.add
. mb.tasks.update_search_index
. orders.tasks.order_created
my mb/tasks.py
from mb.celeryapp import app
import django
django.setup()
@app.task
def add(x, y):
    print(x + y)
    return x + y
my mb/celeryapp.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mb.settings")
app = Celery('mb', broker='redis://localhost:6380/', backend='redis://localhost:6380/')
app.conf.broker_url = 'redis://localhost:6380/0'
app.conf.result_backend = 'redis://localhost:6380/'
app.conf.timezone = 'Europe/Sofia'
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
my mb/settings.py:
...
WSGI_APPLICATION = 'mb.wsgi.application'
BROKER_URL = 'redis://localhost:6380/0'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
...
when I run:
python manage.py shell
>>> from mb.tasks import add
>>> add.name
'mb.tasks.add'
>>> result=add.delay(1,1)
>>> result.ready()
False
>>> result.status
'PENDING'
And as mentioned earlier I do not see any change in the log anymore.
If I try to run from the command line:
/root/myproject/myprojectenv/bin/celery worker -A mb --loglevel=INFO
Running a worker with superuser privileges when the
worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
But I suppose that's normal, since I later run it with user nobody. The interesting thing is that the bare command celery status (without python manage.py celery status) gives a connection error, probably because it is looking for Redis on a different port, yet the supervisord process starts normally... and when I call celery worker -A mb it says it's OK. Any ideas?
(myprojectenv) [root@ssd-1v]# celery status
Traceback (most recent call last):
File "/root/myproject/myprojectenv/bin/celery", line 11, in <module>
sys.exit(main())
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/__main__.py", line 3
0, in main
main()
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line
81, in main
cmd.execute_from_commandline(argv)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line
793, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/base.py", line 3
11, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line
785, in handle_argv
return self.execute(command, argv)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line
717, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/base.py", line 3
15, in run_from_argv
sys.argv if argv is None else argv, command)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/base.py", line 3
77, in handle_argv
return self(*args, **options)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/base.py", line 2
74, in __call__
ret = self.run(*args, **kwargs)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line
473, in run
replies = I.run('ping', **kwargs)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line
325, in run
return self.do_call_method(args, **kwargs)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/bin/celery.py", line
347, in do_call_method
return getattr(i, method)(*args)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/app/control.py", line 100, in ping
return self._request('ping')
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/app/control.py", line 71, in _request
timeout=self.timeout, reply=True,
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/celery/app/control.py", line 316, in broadcast
limit, callback, channel=channel,
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/pidbox.py", line 283, in _broadcast
chan = channel or self.connection.default_channel
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/connection.py", line 771, in default_channel
self.connection
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/connection.py", line 756, in connection
self._connection = self._establish_connection()
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/connection.py", line 711, in _establish_connection
conn = self.transport.establish_connection()
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/root/myproject/myprojectenv/lib/python2.7/site-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
socket.error: [Errno 111] Connection refused
Any help will be highly appreciated.
UPDATE:
when I run
$ python manage.py shell
>>> from mb.tasks import add
>>> add
<@task: mb.tasks.add of mb:0x2b3f6d0>
The 0x2b3f6d0 is different from what Celery reports as its app address in its log, namely:
[config]
- ** ---------- .> app: mb:0x3495bd0
- ** ---------- .> transport: redis://localhost:6380/0
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 1 (prefork)
OK, the answer in this case was that the gunicorn file was actually starting the project from the system-wide Python library instead of from the virtualenv.
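A side note on the bare celery status failure (an inference from the traceback above, not something confirmed in the thread): without -A mb the command never loads the app, so it falls back to the default amqp://guest@localhost:5672// broker instead of the Redis URL from the settings, which is why python manage.py celery status works while plain celery status does not. Pointing the command at the app, for example:
celery -A mb status
should make it use the same broker configuration.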
I am using Celery for an async queue in Flask. I have set up the queue using the following code.
from src import app
from celery import Celery
celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
In config.py for Flask I have:
# Celery config for queue
CELERY_BROKER_URL = 'redis://localhost:6379/1'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
I am trying to run the worker from the command line on Windows via the following command.
celery worker -A src.celery
It gives me the following stack trace.
-------------- celery@DESKTOP-F3RS3C9 v4.0.0 (latentcall)
---- **** -----
--- * *** * -- Windows-10-10.0.14393 2016-11-14 12:23:49
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: src:0x4e6e550
- ** ---------- .> transport: redis://localhost:6379/1
- ** ---------- .> results: redis://localhost:6379/1
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[2016-11-14 12:23:49,463: CRITICAL/MainProcess] Unrecoverable error: TypeError('must be integer<K>, not _subprocess_handle',)
Traceback (most recent call last):
File "c:\python27\lib\site-packages\celery\worker\worker.py", line 203, in start
self.blueprint.start(self)
File "c:\python27\lib\site-packages\celery\bootsteps.py", line 119, in start
step.start(parent)
File "c:\python27\lib\site-packages\celery\bootsteps.py", line 370, in start
return self.obj.start()
File "c:\python27\lib\site-packages\celery\concurrency\base.py", line 131, in start
self.on_start()
File "c:\python27\lib\site-packages\celery\concurrency\prefork.py", line 112, in on_start
**self.options)
File "c:\python27\lib\site-packages\billiard\pool.py", line 1008, in __init__
self._create_worker_process(i)
File "c:\python27\lib\site-packages\billiard\pool.py", line 1117, in _create_worker_process
w.start()
File "c:\python27\lib\site-packages\billiard\process.py", line 122, in start
self._popen = self._Popen(self)
File "c:\python27\lib\site-packages\billiard\context.py", line 383, in _Popen
return Popen(process_obj)
File "c:\python27\lib\site-packages\billiard\popen_spawn_win32.py", line 64, in __init__
_winapi.CloseHandle(ht)
TypeError: must be integer<K>, not _subprocess_handle
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\python27\lib\site-packages\billiard\spawn.py", line 159, in spawn_main
new_handle = steal_handle(parent_pid, pipe_handle)
File "c:\python27\lib\site-packages\billiard\reduction.py", line 121, in steal_handle
_winapi.PROCESS_DUP_HANDLE, False, source_pid)
WindowsError: [Error 87] The parameter is incorrect
I have installed the redis package, plus Redis for Windows as well.
I had the same problem after upgrading to 4.0.0.
The official documentation says that Microsoft Windows support has been removed in this version:
http://docs.celeryproject.org/en/latest/whatsnew-4.0.html#removed-features
The corresponding issue was closed because the platform is no longer supported: https://github.com/celery/celery/issues/3551
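If a worker still has to run on Windows with Celery 4.x, a workaround many people use (unsupported upstream, so treat it as an assumption rather than an official fix) is to replace the default prefork pool with eventlet, or with the in-process solo pool:
pip install eventlet
celery worker -A src.celery --pool=eventlet
or, for a single-threaded worker with no extra dependency:
celery worker -A src.celery --pool=solo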
Running a worker on a different machine results in the errors specified below. I have followed the configuration instructions and have synced the dags folder.
I would also like to confirm that RabbitMQ and PostgreSQL only need to be installed on the Airflow core machine and do not need to be installed on the workers (the workers only connect to the core).
The specification of the setup is detailed below:
Airflow core/server computer
Has the following installed:
Python 2.7 with
airflow (AIRFLOW_HOME = ~/airflow)
celery
psycopg2
RabbitMQ
PostgreSQL
Configurations made in airflow.cfg:
sql_alchemy_conn = postgresql+psycopg2://username:password@192.168.1.2:5432/airflow
executor = CeleryExecutor
broker_url = amqp://username:password@192.168.1.2:5672//
celery_result_backend = postgresql+psycopg2://username:password@192.168.1.2:5432/airflow
Tests performed:
RabbitMQ is running
Can connect to PostgreSQL and have confirmed that Airflow has created tables
Can start and view the webserver (including custom dags)
Airflow worker computer
Has the following installed:
Python 2.7 with
airflow (AIRFLOW_HOME = ~/airflow)
celery
psycopg2
Configurations made in airflow.cfg are exactly the same as in the server:
sql_alchemy_conn = postgresql+psycopg2://username:password@192.168.1.2:5432/airflow
executor = CeleryExecutor
broker_url = amqp://username:password@192.168.1.2:5672//
celery_result_backend = postgresql+psycopg2://username:password@192.168.1.2:5432/airflow
Output from commands run on the worker machine:
When running airflow flower:
ubuntu@airflow_client:~/airflow$ airflow flower
[2016-06-13 04:19:42,814] {__init__.py:36} INFO - Using executor CeleryExecutor
Traceback (most recent call last):
File "/home/ubuntu/anaconda2/bin/airflow", line 15, in <module>
args.func(args)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/airflow/bin/cli.py", line 576, in flower
os.execvp("flower", ['flower', '-b', broka, port, api])
File "/home/ubuntu/anaconda2/lib/python2.7/os.py", line 346, in execvp
_execvpe(file, args)
File "/home/ubuntu/anaconda2/lib/python2.7/os.py", line 382, in _execvpe
func(fullname, *argrest)
OSError: [Errno 2] No such file or directory
When running airflow worker:
ubuntu@airflow_client:~$ airflow worker
[2016-06-13 04:08:43,573] {__init__.py:36} INFO - Using executor CeleryExecutor
[2016-06-13 04:08:43,935: ERROR/MainProcess] Unrecoverable error: ImportError('No module named postgresql',)
Traceback (most recent call last):
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
self.blueprint.start(self)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
self.on_start()
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/apps/worker.py", line 169, in on_start
string(self.colored.cyan(' \n', self.startup_info())),
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/apps/worker.py", line 230, in startup_info
results=self.app.backend.as_uri(),
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/app/base.py", line 626, in backend
return self._get_backend()
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/app/base.py", line 444, in _get_backend
self.loader)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/backends/__init__.py", line 68, in get_backend_by_url
return get_backend_cls(backend, loader), url
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/backends/__init__.py", line 49, in get_backend_cls
cls = symbol_by_name(backend, aliases)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/kombu/utils/__init__.py", line 96, in symbol_by_name
module = imp(module_name, package=package, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named postgresql
When celery_result_backend is changed to the default db+mysql://airflow:airflow@localhost:3306/airflow and the airflow worker is run again, the result is:
ubuntu@airflow_client:~/airflow$ airflow worker
[2016-06-13 04:17:32,387] {__init__.py:36} INFO - Using executor CeleryExecutor
-------------- celery@airflow_client2 v3.1.23 (Cipater)
---- **** -----
--- * *** * -- Linux-3.19.0-59-generic-x86_64-with-debian-jessie-sid
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f5cb65cb510
- ** ---------- .> transport: amqp://username:**@192.168.1.2:5672//
- ** ---------- .> results: mysql://airflow:**@localhost:3306/airflow
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> default exchange=default(direct) key=celery
[2016-06-13 04:17:33,385] {__init__.py:36} INFO - Using executor CeleryExecutor
Starting flask
[2016-06-13 04:17:33,737] {_internal.py:87} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
[2016-06-13 04:17:34,536: WARNING/MainProcess] celery@airflow_client2 ready.
What am I missing? How can I diagnose this further?
The ImportError: No module named postgresql error is due to the invalid prefix used in your celery_result_backend. When using a database as a Celery backend, the connection URL must be prefixed with db+. See
https://docs.celeryproject.org/en/stable/userguide/configuration.html#conf-database-result-backend
So replace:
celery_result_backend = postgresql+psycopg2://username:password@192.168.1.2:5432/airflow
with something like:
celery_result_backend = db+postgresql://username:password@192.168.1.2:5432/airflow
You also need to make sure Celery Flower is installed, that is, pip install flower.
UPDATE: I decided to try using Django as the broker for simplicity, as I assumed I did something wrong in the Redis setup. However, after making the changes described in the docs, I get the same error as below when attempting to run a Celery task with .delay(). The Celery worker starts and shows it's connected to Django for transport. Could this be a firewall issue?
ORIGINAL
I'm working on a Django project and attempting to add background tasks. I've installed Celery and chosen Redis for the broker, and installed that as well (I'm on a Windows machine, fyi). The Celery worker starts, connects to the Redis server, and discovers my shared_tasks:
-------------- celery@GALACTICA v3.1.19 (Cipater)
---- **** -----
--- * *** * -- Windows-7-6.1.7601-SP1
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: proj:0x2dbf970
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. app.tasks.add
. app.tasks.mul
. app.tasks.xsum
. proj.celery.debug_task
[2016-01-16 11:53:05,586: INFO/MainProcess] Connected to redis://localhost:6379/0
[2016-01-16 11:53:06,611: INFO/MainProcess] mingle: searching for neighbors
[2016-01-16 11:53:09,628: INFO/MainProcess] mingle: all alone
c:\python34\lib\site-packages\celery\fixups\django.py:265: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-01-16 11:53:14,670: WARNING/MainProcess] c:\python34\lib\site-packages\celery\fixups\django.py:265: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-01-16 11:53:14,671: WARNING/MainProcess] celery@GALACTICA ready.
I'm following the intro docs so the tasks are very simple, including one called add. I can run the tasks by themselves in a python shell, but when I attempt to call add.delay() to have celery handle it, it appears the connection isn't successful:
>>> add.delay(2,2)
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\kombu\utils\__init__.py", line 423, in __call__
return self.__value__
AttributeError: 'ChannelPromise' object has no attribute '__value__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\kombu\connection.py", line 436, in _ensured
return fun(*args, **kwargs)
File "C:\Python34\lib\site-packages\kombu\messaging.py", line 177, in _publish
channel = self.channel
File "C:\Python34\lib\site-packages\kombu\messaging.py", line 194, in _get_channel
channel = self._channel = channel()
File "C:\Python34\lib\site-packages\kombu\utils\__init__.py", line 425, in __call__
value = self.__value__ = self.__contract__()
File "C:\Python34\lib\site-packages\kombu\messaging.py", line 209, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)
File "C:\Python34\lib\site-packages\kombu\connection.py", line 756, in default_channel
self.connection
File "C:\Python34\lib\site-packages\kombu\connection.py", line 741, in connection
self._connection = self._establish_connection()
File "C:\Python34\lib\site-packages\kombu\connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "C:\Python34\lib\site-packages\kombu\transport\pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "C:\Python34\lib\site-packages\amqp\connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "C:\Python34\lib\site-packages\amqp\connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "C:\Python34\lib\site-packages\amqp\transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "C:\Python34\lib\site-packages\amqp\transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [WinError 10061] No connection could be made because the target machine actively refused it
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\site-packages\celery\app\task.py", line 453, in delay
return self.apply_async(args, kwargs)
File "C:\Python34\lib\site-packages\celery\app\task.py", line 560, in apply_async
**dict(self._get_exec_options(), **options)
File "C:\Python34\lib\site-packages\celery\app\base.py", line 354, in send_task
reply_to=reply_to or self.oid, **options
File "C:\Python34\lib\site-packages\celery\app\amqp.py", line 305, in publish_task
**kwargs
File "C:\Python34\lib\site-packages\kombu\messaging.py", line 172, in publish
routing_key, mandatory, immediate, exchange, declare)
File "C:\Python34\lib\site-packages\kombu\connection.py", line 457, in _ensured
interval_max)
File "C:\Python34\lib\site-packages\kombu\connection.py", line 369, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "C:\Python34\lib\site-packages\kombu\utils\__init__.py", line 246, in retry_over_time
return fun(*args, **kwargs)
File "C:\Python34\lib\site-packages\kombu\connection.py", line 237, in connect
return self.connection
File "C:\Python34\lib\site-packages\kombu\connection.py", line 741, in connection
self._connection = self._establish_connection()
File "C:\Python34\lib\site-packages\kombu\connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "C:\Python34\lib\site-packages\kombu\transport\pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "C:\Python34\lib\site-packages\amqp\connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "C:\Python34\lib\site-packages\amqp\connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "C:\Python34\lib\site-packages\amqp\transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "C:\Python34\lib\site-packages\amqp\transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [WinError 10061] No connection could be made because the target machine actively refused it
There's no output on the console with the celery worker running, so I don't think it ever gets the task. I believe my settings.py, celery.py and tasks.py are alright:
settings.py:
#celery settings
BROKER_URL = 'redis://localhost:6379/0'
celery.py:
from __future__ import absolute_import
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
from django.conf import settings # noqa
app = Celery('proj')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
tasks.py:
from __future__ import absolute_import
#from proj.celery import app
from celery import shared_task
@shared_task
def add(x, y):
    return x + y

@shared_task
def mul(x, y):
    return x * y

@shared_task
def xsum(numbers):
    return sum(numbers)
My project layout is nearly identical to the Celery example Django project layout on GitHub, as well as the example here. It looks like:
proj
├── proj
│ ├── celery.py
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
├── manage.py
└── app
├── __init__.py
├── models.py
├── tasks.py
├── tests.py
└── views.py
Apologies for the other app in my project being named 'app'; it makes things a bit confusing to read, and is the result of autogenerating the base project in Visual Studio with PTVS installed. I probably could have changed it early on, but I didn't realize the name was so vague.
Thanks for any thoughts; I've been stumped by this for a while.
I was getting the same error. After scrolling all over the internet I found no solution, because I had forgotten to add the following code:
from .celery import app as celery_app
__all__ = ('celery_app',)
to the __init__.py file of my project directory. Adding it resolved my error.
I got around this, but I'm not sure how. I came back to this exact configuration the next day, and tasks were making it to the celery worker.
Perhaps one of the services I restarted was the key, but I'm not sure.
If anyone else runs into this, especially on Windows: make sure your redis-server is active and that you see the incoming connections from a ping as well as the task. I had done that before posting this question, but it seems like the likely candidate for being misconfigured.
Redis is not started automatically after you install it, so starting the Redis server should solve your problem.
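To confirm the broker is actually reachable before retrying add.delay() (a minimal sketch, assuming the redis-py client is installed and the redis://localhost:6379/0 broker from settings.py):
import redis  # redis-py client, assumed to be installed alongside the server

r = redis.StrictRedis(host='localhost', port=6379, db=0)
print(r.ping())  # True only if redis-server is running and accepting connections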