Stop celery workers from consuming from all queues - python

Cheers,
I have a celery setup running in a production environment (on Linux) where I need to consume two different task types from two dedicated queues (one for each). The problem is that all workers are always bound to both queues, even when I configure them to consume from only one of them.
TL;DR
Celery running with 2 queues
Messages are published in correct queue as designed
Workers keep consuming both queues
Leads to deadlock
General Information
Think of my two different task types as a hierarchical setup:
A task is a regular celery task that may take quite some time, because it dynamically dispatches other celery tasks and may be required to chain through their respective results
A node is a dynamically dispatched sub-task, which also is a regular celery task but itself can be considered an atomic unit.
My task can thus be a more complex arrangement of nodes, where the results of one or more nodes serve as input for one or more subsequent nodes, and so on. Since my tasks can take a long time and only finish when all of their nodes have been deployed, it is essential that they are handled by dedicated workers, so that a sufficient number of workers remains free to consume the nodes. Otherwise the system could get stuck: many tasks are dispatched, each consumed by a different worker, while their respective nodes are only queued and never consumed, because all workers are blocked.
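To make the hierarchy concrete, a minimal sketch of such a setup might look like the following (the task bodies, the run_node helper and the broker URL are illustrative assumptions; only the deploy_task/deploy_node names come from my routing config below):
from celery import Celery

app = Celery('my.deployment.system', broker='amqp://')  # placeholder broker URL

def run_node(payload):
    # Hypothetical helper standing in for the real node logic.
    return payload

@app.task(name='deploy_node')
def deploy_node(payload):
    # A "node": an atomic unit of work, routed to the prod_nodes queue.
    return run_node(payload)

@app.task(name='deploy_task')
def deploy_task(payloads):
    # A "task": dynamically dispatches nodes and waits for their results,
    # so it blocks its worker until every node has been consumed elsewhere.
    async_results = [deploy_node.delay(p) for p in payloads]
    # Blocking wait on sub-tasks; newer Celery versions require
    # disable_sync_subtasks=False to allow this pattern at all.
    return [r.get() for r in async_results]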
If this is a bad design in general, please suggest how I could improve it. I have not yet managed to build one of these processes using Celery's built-in canvas primitives. Help me, if you can?!
Configuration/Setup
I run celery with amqp and have set up the following queues and routes in the celery configuration:
from kombu import Exchange, Queue

CELERY_QUEUES = (
    Queue('prod_nodes', Exchange('prod'), routing_key='prod.node'),
    Queue('prod_tasks', Exchange('prod'), routing_key='prod.task'),
)
CELERY_ROUTES = {
    'deploy_node': {'queue': 'prod_nodes', 'routing_key': 'prod.node'},
    'deploy_task': {'queue': 'prod_tasks', 'routing_key': 'prod.task'},
}
When I launch my workers, I issue a call similar to the following:
celery multi start w_task_01 w_node_01 w_node_02 -A my.deployment.system \
-E -l INFO -P gevent -Q:1 prod_tasks -Q:2-3 prod_nodes -c 4 --autoreload \
--logfile=/my/path/to/log/%N.log --pidfile=/my/path/to/pid/%N.pid
The Problem
My queue and routing setup seems to work properly, as I can see messages being correctly queued in the RabbitMQ Management web UI.
However, all workers always consume celery tasks from both queues. I can see this when I open the Flower web UI and inspect one of the deployed tasks: e.g. w_node_01 starts consuming messages from the prod_tasks queue, even though it shouldn't.
The RabbitMQ management web UI furthermore shows that all started workers are registered as consumers on both queues.
Thus, I ask you...
... what did I do wrong?
Where is the issue with my setup or with my worker start call? How can I prevent the workers from always consuming from both queues? Do I really have to apply additional settings at runtime (which I certainly do not want to do)?
Thanks for your time and answers!

You can create 2 separate workers, one for each queue, and define for each one which queue it should consume tasks from using the -Q command-line argument.
If you want to keep the number of processes the same (by default one process is started per CPU core for each worker), you can use the --concurrency flag (see the Celery docs for more info).

Celery allows configuring a worker with a specific queue.
1) Specify the target queue with the queue argument when sending the different types of jobs:
celery.send_task('job_type1', args=[], kwargs={}, queue='queue_name_1')
celery.send_task('job_type2', args=[], kwargs={}, queue='queue_name_2')
2) Add the following entry to the configuration file:
CELERY_CREATE_MISSING_QUEUES = True
3) When starting a worker, pass -Q queue_name as an argument so that it consumes only from the desired queue:
celery -A proj worker -l info -Q queue_name_1 -n worker1
celery -A proj worker -l info -Q queue_name_2 -n worker2
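For send_task to resolve these jobs by name, the worker side has to register tasks under exactly those names, for example (a sketch; the module layout, function bodies and broker URL are assumptions):
from celery import Celery

app = Celery('proj', broker='amqp://')  # placeholder broker URL

@app.task(name='job_type1')
def job_type1():
    # Picked up by worker1, which consumes only queue_name_1.
    return 'handled job_type1'

@app.task(name='job_type2')
def job_type2():
    # Picked up by worker2, which consumes only queue_name_2.
    return 'handled job_type2'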

Related

Celery: How to separate the logic of Publisher and Consumer?

I am new to Celery. In this example, I am unable to figure out how to separate the logic of the publisher and the consumer. Is the command celery -A tasks worker --loglevel=INFO used to start a worker for publishing or for consuming?
If add.delay(4, 4) is to push data into a queue, how do I connect to the same queue in a separate code file and consume it?
Publishers are typically either Celery beat (scheduler), custom scripts that you develop, or other tasks executed by Celery workers in your cluster.
Consumers are EXCLUSIVELY Celery workers. Unless you dig really deep into Celery/Kombu and implement your own consumer, you are pretty much not able to write a consumer easily.
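In other words, the publisher and the consumer can share the same task definition but live in different files; the worker command is always the consumer, and any script that calls .delay()/.apply_async()/send_task() is a publisher. A minimal sketch (file names, broker and backend URLs are assumptions):
# tasks.py -- imported by both the worker (consumer) and your scripts (publishers)
from celery import Celery

app = Celery('tasks', broker='amqp://', backend='rpc://')  # placeholder broker/backend

@app.task
def add(x, y):
    return x + y

# publish.py -- a publisher: it only puts a message on the queue
from tasks import add

result = add.delay(4, 4)       # enqueues the task; no work happens in this process
print(result.get(timeout=10))  # optional: block until a worker has consumed and run it
The worker is then started separately with celery -A tasks worker --loglevel=INFO, and that process only ever consumes.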

Serial processing of specific tasks using Celery with concurrency

I have a python/celery setup: I have a queue named "task_queue" and multiple python scripts that feed it data from different sensors. There is a celery worker that reads from that queue and sends an alarm to the user if a sensor value changes from high to low. The worker has multiple threads (I have the autoscaling parameter enabled) and everything works fine until one sensor decides to send multiple messages at once. That's when I get a race condition and may send multiple alarms to the user, since before one thread stores the info that it has already sent an alarm, a few other threads send it as well.
I have n sensors (n can be more than 10000) and messages from any one sensor should be processed sequentially. So in theory I could have n threads, but that would be overkill. I'm looking for the simplest way to distribute the messages equally across x threads (usually 10 or 20), so I wouldn't have to (re)write the routing function and define new queues each time I want to increase (or decrease) x.
So is it possible to somehow mark the tasks that originate from the same sensor to be executed in a serial manner (when calling delay or apply_async)? Or is there a different queue/worker architecture I should be using to achieve that?
From what I understand, you have some tasks that can all run at the same time and a specific task that cannot (it needs to be executed one at a time).
There is no way (for now) to set the concurrency of a specific task queue, so I think the best approach in your situation would be to handle the problem with multiple workers.
Let's say you have the following queues:
queue_1: here we send tasks that can all run at the same time.
queue_2: here we send tasks that must run one at a time.
You could start celery with the following commands (if you want them on the same machine):
celery -A proj worker --loglevel=INFO --concurrency=10 -n worker1@%h -Q queue_1
celery -A proj worker --loglevel=INFO --concurrency=1 -n worker2@%h -Q queue_2
This makes worker1, which has concurrency 10, handle all the tasks that can run at the same time, while worker2 handles only the tasks that need to run one at a time.
Here is some documentation reference:
https://docs.celeryproject.org/en/stable/userguide/workers.html
NOTE: you will need to specify which queue each task runs in. This can be done when calling apply_async, directly from the task decorator, or in the route configuration.
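A sketch of those options, using the queue names from above (the task, module and handler names are assumptions):
from celery import Celery

app = Celery('proj', broker='amqp://')  # placeholder broker URL

def handle(sensor_id, value):
    # Hypothetical stand-in for the real alarm logic.
    return (sensor_id, value)

# Option 1: default queue set on the task decorator
@app.task(queue='queue_2')
def serial_task(sensor_id, value):
    return handle(sensor_id, value)

# Option 2: central routing configuration
app.conf.task_routes = {
    'proj.tasks.serial_task': {'queue': 'queue_2'},
}

# Option 3: choose the queue at call time
# serial_task.apply_async(args=(sensor_id, value), queue='queue_2')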

Broadcast Celery task is processed by only one subprocess per worker

Edit: For lack of an alternative, I just multiply the task. I am using Flask as a web server. When I call the train_network endpoint, the task is executed as many times as there are workers.
response = [fw.train_network.delay().get() for _ in range(Workers)]
For this to work, I also removed the -c 2 argument from the celery worker command and placed the concurrency in my config:
celery.conf.worker_concurrency = cfg.celery_workers
With this I always know the number of subprocesses and how many times the task should be repeated.
If there is a better option to solve this, I will update the post with an answer. Or maybe somebody else can provide insight.
Edit: Basically, I need all subprocesses to have access to a specific set of variables, which should be shared between these processes. Or, if every process has its own variable, I need to be able to modify all of these variables by executing a task.
Edit:
So, I've found out that the task is indeed broadcast to all workers, but not to the workers launched from the pool (the concurrency), only to those started from the terminal.
So, if I start multiple terminals with celery worker ..... -c 2, these celery workers do receive the broadcast task, which is good, I guess. Now I want to broadcast these tasks to the pool workers inside the celery workers too.
Basically, I load a model and I want to reload the model on all pool workers.
Original:
I've been reading through the Celery user guide so that I can send a single task to all of my workers.
I am using RabbitMQ and everything else works fine, but the broadcast tasks are only processed by a single worker.
I define the exchange and the queue:
from kombu import Exchange
from kombu.common import Broadcast

exchange = Exchange('broadcast_exchange', type='fanout')
celery.conf.task_queues = (Broadcast(name='broadcast_learning', exchange=exchange),)
And also the Task routes:
celery.conf.task_routes = {
    'fworker.train_network': {
        'queue': 'broadcast_learning',
        'exchange': 'broadcast_exchange',
    },
    # ...
}
But executing the task with .delay() or with .apply_async(queue='broadcast_learning') does not seem to send the task to ALL workers - instead only one is processing it.
After starting my worker, it listens to the broadcast and the default queue, and I see that they are registered in Celery (although with a strange internal name):
[queues]
.> bcast.13bebf5c-f69c-40d9-a0e8-73f74efb9114 exchange=broadcast_exchange(fanout) key=celery
.> celery exchange=celery(direct) key=celery
I already changed from the Redis backend to RabbitMQ, since some answers suggested that Redis does not work with broadcasting. But whatever I try, it does not seem to work.

How to limit the maximum number of running Celery tasks by name

How do you limit the number of instances of a specific Celery task that can run simultaneously?
I have a task that processes large files. I'm running into a problem where a user may launch several tasks, causing the server to run out of CPU and memory as it tries to process too many files at once. I want to ensure that only N instances of this one type of task run at any given time, and that other tasks will sit queued in the scheduler until the others complete.
I see there's a rate_limit option in the task decorator, but I don't think this does what I want. If I'm understanding the docs correctly, it will just limit how quickly the tasks are launched, but it won't restrict the overall number of tasks running, so this will just make my server crash more slowly... but it will still crash nonetheless.
You have to set up an extra queue and set the desired concurrency level for it. From Routing Tasks:
# Old config style
CELERY_ROUTES = {
    'app.tasks.limited_task': {'queue': 'limited_queue'}
}
or
from kombu import Exchange, Queue

default_exchange = Exchange('default', type='direct')  # assumed direct exchange

celery.conf.task_queues = (
    Queue('default', default_exchange, routing_key='default'),
    Queue('limited_queue', default_exchange, routing_key='limited_queue')
)
And start an extra worker, serving only limited_queue:
$ celery -A celery_app worker -Q limited_queue --loglevel=info -c 1 -n limited_queue
Then you can check that everything is running smoothly using Flower or the inspect command:
$ celery -A celery_app inspect --help
What you can do is push these tasks to a specific queue and have X workers processing them. Having two workers (each with concurrency 1) on a queue with 100 items ensures that only two tasks are processed at the same time.
I am not sure you can do that in Celery itself. What you can do is check how many tasks of that name are currently running when a request arrives, and if the count exceeds the maximum, either return an error or add a mechanism that periodically checks whether there are open slots for the tasks and runs them (if you add such a mechanism, you don't need to double-check; just add each request to its queue).
In order to check running tasks, you can use the inspect command.
In short:
from celery import Celery

app = Celery(...)
i = app.control.inspect()
i.active()
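A sketch of the gating idea described above, built on inspect().active() (the task name and the limit are placeholders):
from celery import Celery

app = Celery('proj', broker='amqp://')  # placeholder broker URL

MAX_RUNNING = 3                           # assumed limit
TASK_NAME = 'app.tasks.limited_task'      # assumed task name

def count_running(task_name):
    # active() returns {worker_name: [task_info, ...]} or None if no workers reply.
    active = app.control.inspect().active() or {}
    return sum(
        1
        for worker_tasks in active.values()
        for task_info in worker_tasks
        if task_info.get('name') == task_name
    )

def submit_if_slot_free(*args, **kwargs):
    # Note: inspect() is a snapshot, not a lock, so small races are still possible.
    if count_running(TASK_NAME) >= MAX_RUNNING:
        return None  # or defer and let a periodic check pick it up, as suggested above
    return app.send_task(TASK_NAME, args=args, kwargs=kwargs)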

How to make celery retry using the same worker?

I'm just starting out with celery in a Django project, and am kinda stuck at this particular problem: Basically, I need to distribute a long-running task to different workers. The task is actually broken into several steps, each of which takes considerable time to complete. Therefore, if some step fails, I'd like celery to retry this task using the same worker to reuse the results from the completed steps. I understand that celery uses routing to distribute tasks to certain server, but I can't find anything about this particular problem. I use RabbitMQ as my broker.
You could have every celeryd instance consume from a queue named after the hostname of the worker:
celeryd -l info -n worker1.example.com -Q celery,worker1.example.com
This sets the hostname to worker1.example.com and makes the worker consume from a queue with the same name, as well as from the default queue (named celery).
Then to direct a task to a specific worker you can use:
task.apply_async(args, kwargs, queue="worker1.example.com")
Similarly, to direct a retry:
task.retry(queue="worker1.example.com")
or to direct the retry to the same worker:
task.retry(queue=task.request.hostname)
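Putting it together, a bound task can send its own retry back to the hostname queue it is currently running on, assuming each worker also consumes from a queue named after its hostname as in the command above (a sketch; the step function, the error type and the broker URL are assumptions):
from celery import Celery

app = Celery('proj', broker='amqp://')  # placeholder broker URL

class TransientStepError(Exception):
    # Hypothetical error raised when a step fails but can be retried.
    pass

def run_next_step(job_id):
    # Hypothetical: runs the next uncompleted step, reusing results stored locally.
    return job_id

@app.task(bind=True, max_retries=3)
def long_pipeline(self, job_id):
    try:
        return run_next_step(job_id)
    except TransientStepError as exc:
        # Route the retry onto this worker's own hostname queue so it
        # lands on the same machine and can reuse the completed steps.
        raise self.retry(exc=exc, queue=self.request.hostname)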
