I keep getting [ERROR/MainProcess] consumer: Cannot connect to amqp://ec2celeryuser when I run celery -A tasks worker in the terminal.
Basically, what I'm trying to do is get Celery/RabbitMQ working properly across two EC2 instances, so I can pass a silly task in tasks.py to RabbitMQ for processing.
Instance 1 - Houses RabbitMQ
This currently runs RabbitMQ fine. If I run sudo rabbitmqctl status it outputs:
Status of node 'rabbit@ip-xx-xxx-xxx-xx' ...
[{pid,786},
Instance 2 - Houses Celery
I'm trying to run celery on instance 2 against Instance 1 using the following in terminal:
celery -A tasks worker
I have a file celeryconfig.py:
BROKER_URL = 'amqp://ec2celeryuser:mypasshere@xx.xxx.xx.xx:5672/celeryserver1/'
#CELERY SETTINGS
CELERY_IMPORTS = ("tasks",)
CELERY_RESULT_BACKEND = "amqp"
I have a file client.py:
from tasks import add
result = add.delay(4, 4) # call task
result_sum = result.get(timeout=5) # wait to get result for a maximum of 5 seconds
I have a file tasks.py:
from celery import Celery
app = Celery('tasks', broker='amqp://ec2celeryuser:mypasshere@xx.xxx.xx.xx:5672/celeryserver1/')

@app.task
def add(x, y):
    return x + y
I've properly set up a vhost, a user ec2celeryuser, and gave this user permissions with:
sudo rabbitmqctl set_permissions -p /celeryserver1 ec2celeryuser ".*" ".*" ".*"
If I run sudo rabbitmqctl list_users on RabbitMQ (instance 1), it shows:
ec2celeryuser []
guest [administrator]
I've tried both usernames with their passwords, but no change.
I've been following the Celery Guide, and a tutorial without much luck.
Clearly there is a connection issue, but what am I doing wrong here?
Thank you!
Thanks to user natdempk for helping me fix the configuration syntax of my queues.
The issue was creating a vhost in rabbitmq like:
sudo rabbitmqctl add_vhost /celeryserver1
when it should have been:
sudo rabbitmqctl add_vhost celeryserver1
I then had to reset the permissions for my user ec2celeryuser like:
sudo rabbitmqctl set_permissions -p celeryserver1 ec2celeryuser ".*" ".*" ".*"
The way I realized this was the issue: I visited /var/log/rabbitmq/<last log file.log> and saw:
=INFO REPORT==== 30-Apr-2014::12:45:58 ===
accepted TCP connection on [::]:5672 from xx.xxx.xxx.xxx:45964
=INFO REPORT==== 30-Apr-2014::12:45:58 ===
starting TCP connection <x.xxx.x> from xx.xxx.xxx.xxx:45964
=ERROR REPORT==== 30-Apr-2014::12:46:01 ===
exception on TCP connection <x.xxx.x> from xx.xxx.xxx.xxx:45964
{channel0_error,opening,
{amqp_error,access_refused,
"access to vhost 'celeryserver1/' refused for user 'ec2celeryuser'",
'connection.open'}}
Since fixing the vhost, I now pleasantly see:
[2014-04-30 13:08:10,101: WARNING/MainProcess] celery@ip-xx-xxx-xx-xxx ready.
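As a quick sanity check, a small script like the following (a sketch; host, user, and password are the same placeholders as above) should now connect without the access_refused error, since the vhost is named celeryserver1 with no leading slash:
# A sketch that verifies the corrected vhost and permissions (placeholders as above).
from kombu import Connection

url = 'amqp://ec2celeryuser:mypasshere@xx.xxx.xx.xx:5672/celeryserver1'
with Connection(url) as conn:
    conn.connect()  # raises if the credentials, vhost, or permissions are wrong
    print('Broker connection OK')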
So I see a few things wrong here. First, your broker URL for RabbitMQ in tasks.py doesn't seem correct. It should read something like the line below.
app = Celery('tasks', broker='amqp://ec2celeryuser:ec2celerypassword@xx.xxx.xx.xx/celeryserver1/')
Also you might want to specify the app you want celery to serve when you run the worker process. You can do this by running celery -A tasks worker from the directory tasks.py is located in.
Another thing: your code in client.py that calls your task seems incorrect. From the Celery documentation, you can call the task as follows:
from tasks import add
result = add.delay(4, 4) # call task
result_sum = result.get(timeout=5) # wait to get result for a maximum of 5 seconds
Fixing these might solve your issue, or at least get you closer.
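For reference, here is a minimal sketch of a tasks.py that loads celeryconfig.py via config_from_object, so the broker URL only has to live in one place (the file names come from the question; treat this as an illustration rather than a drop-in fix):
# A sketch: tie the Celery app to the existing celeryconfig.py module.
from celery import Celery

app = Celery('tasks')
app.config_from_object('celeryconfig')  # picks up BROKER_URL, CELERY_IMPORTS, etc.

@app.task
def add(x, y):
    return x + y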
I am trying to schedule a task that runs every 10 minutes using Django 1.9.8, Celery 4.0.2, RabbitMQ 2.1.4, and Redis 2.10.5, all running within Docker containers on Linux (Fedora 25). I have tried many combinations of things that I found in the Celery docs and on this site. The only combination that has worked so far is below; however, it only runs the periodic task once, when the application starts, and the schedule is ignored thereafter. I have absolutely confirmed that the scheduled task does not run again after the initial time.
My (almost-working) setup, which only runs once:
settings.py:
INSTALLED_APPS = (
...
'django_celery_beat',
...
)
BROKER_URL = 'amqp://{user}:{password}@{hostname}/{vhost}/'.format(
    user=os.environ['RABBIT_USER'],
    password=os.environ['RABBIT_PASS'],
    hostname=RABBIT_HOSTNAME,
    vhost=os.environ.get('RABBIT_ENV_VHOST', ''))

# We don't want to have dead connections stored on rabbitmq, so we have to negotiate using heartbeats
BROKER_HEARTBEAT = '?heartbeat=30'
if not BROKER_URL.endswith(BROKER_HEARTBEAT):
    BROKER_URL += BROKER_HEARTBEAT
BROKER_POOL_LIMIT = 1
BROKER_CONNECTION_TIMEOUT = 10
# Celery configuration
# configure queues, currently we have only one
CELERY_DEFAULT_QUEUE = 'default'
from kombu import Exchange, Queue  # Queue/Exchange used below come from kombu

CELERY_QUEUES = (
    Queue('default', Exchange('default'), routing_key='default'),
)
# Sensible settings for celery
CELERY_ALWAYS_EAGER = False
CELERY_ACKS_LATE = True
CELERY_TASK_PUBLISH_RETRY = True
CELERY_DISABLE_RATE_LIMITS = False
# By default we will ignore result
# If you want to see results and try out tasks interactively, change it to False
# Or change this setting on tasks level
CELERY_IGNORE_RESULT = True
CELERY_SEND_TASK_ERROR_EMAILS = False
CELERY_TASK_RESULT_EXPIRES = 600
# Set redis as celery result backend
CELERY_RESULT_BACKEND = 'redis://%s:%d/%d' % (REDIS_HOST, REDIS_PORT, REDIS_DB)
CELERY_REDIS_MAX_CONNECTIONS = 1
# Don't use pickle as serializer, json is much safer
CELERY_TASK_SERIALIZER = "json"
CELERY_RESULT_SERIALIZER = "json"
CELERY_ACCEPT_CONTENT = ['application/json']
CELERYD_HIJACK_ROOT_LOGGER = False
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYD_MAX_TASKS_PER_CHILD = 1000
celeryconf.py
# coding=UTF8
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "web_portal.settings")
app = Celery('web_portal')
CELERY_TIMEZONE = 'UTC'
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
tasks.py
from celery.schedules import crontab
from .celeryconf import app as celery_app
@celery_app.on_after_finalize.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls email_scanner every 10 minutes
    sender.add_periodic_task(
        crontab(hour='*',
                minute='*/10',
                second='*',
                day_of_week='*',
                day_of_month='*'),
        email_scanner.delay(),
    )
@app.task
def email_scanner():
    dispatch_list = scanning.email_scan()
    for dispatch in dispatch_list:
        validate_dispatch.delay(dispatch)
    return
run_celery.sh -- Used to start celery tasks from docker-compose.yml
#!/bin/sh
# wait for RabbitMQ server to start
sleep 10
cd web_portal
# run Celery worker for our project myproject with Celery configuration stored in Celeryconf
su -m myuser -c "celery beat -l info --pidfile=/tmp/celerybeat-web_portal.pid -s /tmp/celerybeat-schedule &"
su -m myuser -c "celery worker -A web_portal.celeryconf -Q default -n default#%h"
I have also tried using a CELERYBEAT_SCHEDULE in settings.py in lieu of the @celery_app.on_after_finalize.connect decorator and block in tasks.py, but the scheduled task never ran even once.
settings.py (not working at all scenario)
(same as before except also including the following)
from datetime import timedelta  # needed for the schedule below

CELERYBEAT_SCHEDULE = {
    'email-scanner-every-5-minutes': {
        'task': 'tasks.email_scanner',
        'schedule': timedelta(minutes=10)
    },
}
The Celery 4.0.2 documentation online presumes that I should instinctively know many givens, but I am new to this environment. If anybody knows where I can find a tutorial OTHER THAN docs.celeryproject.org and http://django-celery-beat.readthedocs.io/en/latest/, which both assume that I am already a Django master, I would be grateful. Or, of course, let me know if you see something obviously wrong in my setup. Thanks!
I found a solution that works. I could not get CELERYBEAT_SCHEDULE or the celery task decorators to work, and I suspect that it may be at least partially due to the manner in which I started the Celery beat task.
The working solution goes the whole nine yards and uses the Django database scheduler. I downloaded the GitHub project "https://github.com/celery/django-celery-beat" and incorporated all of the code as another "app" in my project. This enabled Django admin access to maintain the cron / interval / periodic task tables via a browser. I also modified my run_celery.sh as follows:
#!/bin/sh
# wait for RabbitMQ server to start
sleep 10
# run Celery worker for our project myproject with Celery configuration stored in Celeryconf
celery beat -A web_portal.celeryconf -l info --pidfile=/tmp/celerybeat-web_portal.pid -S django --detach
su -m myuser -c "celery worker -A web_portal.celeryconf -Q default -n default#%h -l info "
After adding a scheduled task via the django-admin web interface, the scheduler started working fine.
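For completeness, here is a rough sketch of creating the same 10-minute schedule programmatically with django-celery-beat's models instead of clicking through django-admin (the task path tasks.email_scanner is an assumption based on the question):
# A sketch: create (or reuse) a 10-minute interval, then attach a periodic task.
from django_celery_beat.models import IntervalSchedule, PeriodicTask

schedule, _ = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.MINUTES,
)
PeriodicTask.objects.get_or_create(
    name='Email scanner every 10 minutes',
    task='tasks.email_scanner',  # assumed dotted path to the task
    interval=schedule,
)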
I'm new to Celery. I followed the Django/Celery tutorial, and I'm using RabbitMQ. I have a simple function that uses Celery:
from celery.decorators import task
@task
def test_celery(x, y):
    print x + y
    return None
When I run it with delay it doesn't work; it gives me a "connection reset by peer" error:
test_celery.delay("one ", "dos")
I'm running RabbitMQ in another terminal. If I do
sudo rabbitmqctl list_users
I get
alejoss []
guest [administrator]
my BROKER_URL looks like this:
BROKER_URL = "amqp://alejoss:password#localhost://"
What am I missing? I'm new to Celery... please help.
Based on your debugging feedback, I think you have an authentication issue with the user you set up for yourself. You may want to read up more on access control here (https://www.rabbitmq.com/access-control.html).
Sounds like it could be a permissions issue.
Here's the spoiler for you in case the documentation is too confusing at first :)
sudo rabbitmqctl set_permissions -p / alejoss ".*" ".*" ".*"
The RabbitMQ gotcha here for newcomers is that newly created users by default have NO permissions.
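As an aside, the broker URL for the default vhost / is normally written with a trailing double slash. A quick connection test along these lines (a sketch; the password is a placeholder) should succeed once the permissions above are in place:
# The trailing '//' selects the default vhost '/', which the set_permissions
# command above grants access to.
from kombu import Connection

with Connection('amqp://alejoss:password@localhost:5672//') as conn:
    conn.connect()  # raises access_refused if permissions are still missing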
I have a Django project on an Ubuntu EC2 node, which I have been using to set up an asynchronous task queue using Celery.
I am following http://michal.karzynski.pl/blog/2014/05/18/setting-up-an-asynchronous-task-queue-for-django-using-celery-redis/ along with the docs.
I've been able to get a basic task working at the command line, using:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app worker --loglevel=INFO
I just realized that I have a bunch of tasks in my queue that have not executed:
[2015-03-28 16:49:05,916: WARNING/MainProcess] Restoring 4 unacknowledged message(s).
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery -A tp purge
WARNING: This will remove all tasks from queue: celery.
There is no undo for this operation!
(to skip this prompt use the -f option)
Are you sure you want to delete all tasks (yes/NO)? yes
Purged 81 messages from 1 known task queue.
How do I get a list of the queued items from the command line?
If you want to get all scheduled tasks,
celery inspect scheduled
To find all active queues
celery inspect active_queues
For status
celery inspect stats
For all commands
celery inspect
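The same information is also available from Python through the app's control interface (a sketch; the import path is a placeholder for wherever your Celery app instance lives):
# A sketch; replace the import with the path to your own Celery app instance.
from yourproject.celery import app

insp = app.control.inspect()
print(insp.scheduled())      # tasks with an ETA/countdown, per worker
print(insp.active_queues())  # queues each worker is consuming from
print(insp.stats())          # general worker statistics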
If you want to inspect it explicitly: since you are using Redis as the queue, connect with the Redis CLI:
redis-cli
> KEYS *       # find all keys
Then find the key related to your Celery queue and check its length:
> LLEN KEY     # gives the length of the list, i.e. the number of queued tasks
Here is a copy-paste solution for Redis:
def get_celery_queue_len(queue_name):
    from yourproject.celery import app as celery_app

    with celery_app.pool.acquire(block=True) as conn:
        return conn.default_channel.client.llen(queue_name)


def get_celery_queue_items(queue_name):
    import base64
    import json

    from yourproject.celery import app as celery_app

    with celery_app.pool.acquire(block=True) as conn:
        tasks = conn.default_channel.client.lrange(queue_name, 0, -1)
        decoded_tasks = []

    for task in tasks:
        j = json.loads(task)
        body = json.loads(base64.b64decode(j['body']))
        decoded_tasks.append(body)

    return decoded_tasks
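For example, against the default queue (named celery unless you changed CELERY_DEFAULT_QUEUE), usage would look roughly like this:
print(get_celery_queue_len('celery'))         # number of waiting messages
for body in get_celery_queue_items('celery'):
    print(body)                               # decoded payload of each queued task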
It works with Django. Just don't forget to change yourproject.celery.
I am currently using django with celery and everything works fine.
However I want to be able to give the users an opportunity to cancel a task if the server is overloaded by checking how many tasks are currently scheduled.
How can I achieve this ?
I am using redis as broker.
I just found this :
Retrieve list of tasks in a queue in Celery
It is somehow related to my issue, but I don't need to list the tasks, just count them :)
Here is a broker-agnostic way to get the number of messages in a queue using Celery.
By using connection_or_acquire, you can minimize the number of open connections to your broker by utilizing celery's internal connection pooling.
celery = Celery(app)

with celery.connection_or_acquire() as conn:
    message_count = conn.default_channel.queue_declare(
        queue='my-queue', passive=True).message_count
You can also extend Celery to provide this functionality:
from celery import Celery as _Celery
class Celery(_Celery):

    def get_message_count(self, queue):
        '''
        Raises: amqp.exceptions.NotFound: if queue does not exist
        '''
        with self.connection_or_acquire() as conn:
            return conn.default_channel.queue_declare(
                queue=queue, passive=True).message_count
celery = Celery(app)
num_messages = celery.get_message_count('my-queue')
If your broker is configured as redis://localhost:6379/1, and your tasks are submitted to the general celery queue, then you can get the length by the following means:
import redis
queue_name = "celery"
client = redis.Redis(host="localhost", port=6379, db=1)
length = client.llen(queue_name)
Or, from a shell script (good for monitors and such):
$ redis-cli -n 1 -h localhost -p 6379 llen celery
If you have already configured redis in your app, you can try this:
from celery import Celery
QUEUE_NAME = 'celery'
celery = Celery(app)
client = celery.connection().channel().client
length = client.llen(QUEUE_NAME)
Get a redis client instance used by Celery, then check the queue length. Don't forget to release the connection every time you use it (use .acquire):
# Get a configured instance of celery:
from project.celery import app as celery_app
def get_celery_queue_len(queue_name):
    with celery_app.pool.acquire(block=True) as conn:
        return conn.default_channel.client.llen(queue_name)
Always acquire a connection from the pool, don't create it manually. Otherwise, your redis server will run out of connection slots and this will kill your other clients.
I'll expand on the answer of @StephenFuhry around the not-found error, because a more or less broker-agnostic way of retrieving queue length is useful even if Celery suggests messing with the broker directly. In Celery 4 (with Redis broker) this error looks like:
ChannelError: Channel.queue_declare: (404) NOT_FOUND - no queue 'NAME' in vhost '/'
Observations:
ChannelError is a kombu exception (in fact, it's amqp's and kombu "re-exports" it).
On Redis broker Celery/Kombu represent queues as Redis lists
Redis collection type keys are removed whenever the collection becomes empty
If we look at what queue_declare does, it has these lines:
if passive and not self._has_queue(queue, **kwargs):
    raise ChannelError(...)
Kombu Redis virtual transport's _has_queue is this:
def _has_queue(self, queue, **kwargs):
    with self.conn_or_acquire() as client:
        with client.pipeline() as pipe:
            for pri in self.priority_steps:
                pipe = pipe.exists(self._q_for_pri(queue, pri))
            return any(pipe.execute())
The conclusion is that on a Redis broker ChannelError raised from queue_declare is okay (for an existing queue of course), and just means that the queue is empty.
Here's an example of how to output all active Celery queues' lengths (normally should be 0, unless your worker can't cope with the tasks).
from kombu.exceptions import ChannelError
def get_queue_length(name):
    with celery_app.connection_or_acquire() as conn:
        try:
            ok_nt = conn.default_channel.queue_declare(queue=name, passive=True)
        except ChannelError:
            return 0
        else:
            return ok_nt.message_count

for queue_info in celery_app.control.inspect().active_queues().values():
    print(queue_info[0]['name'], get_queue_length(queue_info[0]['name']))
Question
After running tasks via celery's periodic task scheduler, beat, why do I have so many unconsumed queues remaining in RabbitMQ?
Setup
Django web app running on Heroku
Tasks scheduled via celery beat
Tasks run via celery worker
Message broker is RabbitMQ from CloudAMQP
Procfile
web: gunicorn --workers=2 --worker-class=gevent --bind=0.0.0.0:$PORT project_name.wsgi:application
scheduler: python manage.py celery worker --loglevel=ERROR -B -E --maxtasksperchild=1000
worker: python manage.py celery worker -E --maxtasksperchild=1000 --loglevel=ERROR
settings.py
CELERYBEAT_SCHEDULE = {
    'do_some_task': {
        'task': 'project_name.apps.appname.tasks.some_task',
        'schedule': datetime.timedelta(seconds=60 * 15),
        'args': ''
    },
}
tasks.py
@celery.task
def some_task():
    # Get some data from external resources
    # Save that data to the database
    # No return value specified
    pass
Result
Every time the task runs, I get (via the RabbitMQ web interface):
An additional message in the "Ready" state under my "Queued Messages"
An additional queue with a single message in the "ready" state
This queue has no listed consumers
It ended up being my setting for CELERY_RESULT_BACKEND.
Previously, it was:
CELERY_RESULT_BACKEND = 'amqp'
I no longer had unconsumed messages / queues in RabbitMQ after I changed it to:
CELERY_RESULT_BACKEND = 'database'
What was happening, it would appear, is that after a task was executed, Celery was sending info about that task back via RabbitMQ, but there was nothing set up to consume these response messages, hence a bunch of unread ones ending up in the queue.
NOTE: This means that celery would be adding database entries recording the outcomes of tasks. To keep my database from getting loaded up with useless messages, I added:
# Delete result records ("tombstones") from database after 4 hours
# http://docs.celeryproject.org/en/latest/configuration.html#celery-task-result-expires
CELERY_TASK_RESULT_EXPIRES = 14400
Relevant parts from Settings.py
########## CELERY CONFIGURATION
import djcelery
# https://github.com/celery/django-celery/
djcelery.setup_loader()
INSTALLED_APPS = INSTALLED_APPS + (
'djcelery',
)
# Compress all the messages using gzip
# http://celery.readthedocs.org/en/latest/userguide/calling.html#compression
CELERY_MESSAGE_COMPRESSION = 'gzip'
# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-transport
BROKER_TRANSPORT = 'amqplib'
# Set this number to the amount of allowed concurrent connections on your AMQP
# provider, divided by the amount of active workers you have.
#
# For example, if you have the 'Little Lemur' CloudAMQP plan (their free tier),
# they allow 3 concurrent connections. So if you run a single worker, you'd
# want this number to be 3. If you had 3 workers running, you'd lower this
# number to 1, since 3 workers each maintaining one open connection = 3
# connections total.
#
# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-pool-limit
BROKER_POOL_LIMIT = 3
# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-connection-max-retries
BROKER_CONNECTION_MAX_RETRIES = 0
# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-url
BROKER_URL = os.environ.get('CLOUDAMQP_URL')
# Previously, had this set to 'amqp'; this resulted in many unread / unconsumed
# queues and messages in RabbitMQ
# See: http://docs.celeryproject.org/en/latest/configuration.html#celery-result-backend
CELERY_RESULT_BACKEND = 'database'
# Delete result records ("tombstones") from database after 4 hours
# http://docs.celeryproject.org/en/latest/configuration.html#celery-task-result-expires
CELERY_TASK_RESULT_EXPIRES = 14400
########## END CELERY CONFIGURATION
Looks like you are getting back responses from your consumed tasks.
You can avoid that by doing:
@celery.task(ignore_result=True)
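Applied to the task from the question, that would look roughly like this (a sketch):
# With ignore_result=True no result is stored, so no per-task
# result queue is left behind in RabbitMQ.
@celery.task(ignore_result=True)
def some_task():
    # Get some data from external resources and save it to the database;
    # nothing is returned, so there is no result worth keeping anyway.
    pass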