Single Celery task suspends all other Celery workers - python

I have a Celery configuration (over django-celery) with RabbitMQ as the broker and a concurrency of 20 threads.
One of the tasks takes a really long time (about an hour) to execute. A few minutes after that task starts running, all the other concurrent threads stop working until the task finishes. Why is this happening?
Thanks!

You need multiple workers to pick up the tasks. For example, the environment needs to have at least 2 CPUs.
http://docs.celeryproject.org/en/latest/userguide/workers.html#concurrency
You can use Celery Flower to inspect the workers.
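
As a rough sketch (proj here is a hypothetical app module), you can start extra named workers or raise concurrency from the command line, then watch them in Flower:

    # Two named workers, so a long task on one does not tie up the other:
    celery -A proj worker -n worker1@%h --concurrency=10
    celery -A proj worker -n worker2@%h --concurrency=10

    # Inspect the workers (requires the flower package; the exact
    # invocation varies between Celery versions):
    celery -A proj flower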

Related

How to force all celery workers to execute one task repeatedly?

I have a couple of workers deployed in Kubernetes. I want to write a custom exporter for Prometheus, so I need to check all the workers' availability.
I have some huge tasks in one queue, which take 200 seconds each (for example). The workers attached to this queue run with the eventlet pool and a concurrency of 1000, deployed in a workload with 2 pods.
Because of the huge tasks, light tasks sometimes get stuck in these workers and are not processed until the huge tasks are done (I have a separate queue for light tasks, but I still have to keep some light tasks in this queue).
How can I check all the workers' performance and liveness?
I came across bootsteps in Celery, but I do not know whether they help me, because I want a task that runs on every worker (and every queue), and I want it to run in between the huge tasks, not on its own.
For more detail: I want to save this data in Redis and read it in my exporter.
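
One simpler alternative to a custom bootstep is to ping the workers from outside via Celery's broadcast API and store the result in Redis for the exporter to read. A minimal sketch, assuming your Celery app is importable as app (the import path and Redis key name below are made up for illustration); a worker wedged on a huge task may simply miss the timeout, which is itself a useful signal:

    import json
    import time

    import redis

    from myproject.celery import app  # hypothetical import path to your Celery app

    r = redis.Redis(host="localhost", port=6379, db=0)

    def record_worker_health():
        # ping() broadcasts to all workers and collects their replies;
        # a worker that is stuck on a huge task may not answer in time.
        replies = app.control.inspect(timeout=5.0).ping() or {}
        payload = {
            "timestamp": time.time(),
            "workers": {name: reply.get("ok") == "pong" for name, reply in replies.items()},
        }
        # The Prometheus exporter can read this key and turn it into metrics.
        r.set("celery:worker_health", json.dumps(payload))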

What does it mean to have a Celery worker offline but active=1?

In my case I'm still trying to understand what is going on. Running long tasks (they take from 20 minutes to 2 hours), I see a weird scenario in which my Celery worker, after a while (15-20 minutes), goes from status=online to offline, yet still shows active=1.
After this I see the same task start on another Celery worker, and the process repeats. This happens until the same task is running three times simultaneously on different workers, all of them offline with active=1 after a while.
What does it mean for a Celery worker to have status=offline with active=1?
What can be the reason for a worker being in this state?

Celery distributing Queues and Workers

I am new to Celery and I am trying to understand how queues work. Suppose I have two tasks, task1 and task2, and I place them in different queues; task1 gets only a single worker and task2 gets multiple workers. Will task1 only run one at a time, since it only has a single worker? And will task2 run as many times as I have workers? Is my understanding correct?
You are close. You can route tasks to specific queues, configure workers to listen only to specific queues, and scale the number of workers listening to each queue independently. More workers generally means more tasks can execute concurrently.
However, having only a single worker assigned to a particular queue/task does not by itself guarantee that the task will only execute one at a time.
By default, workers have concurrency enabled, meaning a single worker can use multiple processes to execute tasks concurrently. There are other worker settings to consider too, like prefetching and early acknowledgement.
If you want to ensure that a task is only executed one at a time, you should not rely on the (lack of) availability of worker processes. Instead, a locking mechanism like the one described in the docs for ensuring a task is executed one at a time would be the recommended approach, as sketched below.
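
The pattern from the Celery docs ("ensuring a task is only executed one at a time") boils down to acquiring a lock before doing the work. A minimal sketch of that pattern with Redis, where the task name, lock key, and Redis URL are illustrative assumptions:

    import redis
    from celery import shared_task

    # Illustrative Redis connection; adjust the URL for your environment.
    r = redis.Redis.from_url("redis://localhost:6379/0")

    @shared_task(bind=True)
    def process_record(self, record_id):
        # SET with nx=True acquires the lock only if the key does not
        # already exist; the expiry guards against a crashed worker
        # holding the lock forever.
        lock_key = f"lock:process_record:{record_id}"
        if not r.set(lock_key, self.request.id, nx=True, ex=60 * 10):
            return  # another worker holds the lock; skip or retry later
        try:
            ...  # the actual work goes here
        finally:
            r.delete(lock_key)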
task2 will not be executed as many times as you have workers on that queue; each task will be dispatched to one of the workers available at that moment.

How to limit the number of tasks that run in Celery

I have an app running on Heroku and I'm using celery together with a worker dyno to process background work.
I'm running tasks that use quite a lot of memory. These tasks get started at roughly the same time, but I want only one or two tasks running at once; the others must wait in the queue. How can I achieve that?
If they run at the same time, I run out of memory and the system gets restarted. I know why it uses a lot of memory, and I'm not looking to decrease that.
Quite simply: limit your concurrency (number of celery worker processes) to the number of tasks that can safely run in parallel on this server.
Note that if you have tasks with wildly different resource needs (i.e. one task that eats a lot of RAM and takes minutes to complete, and a couple that are fast and don't require many resources at all), you might be better off using two distinct nodes to serve them (one for the heavy tasks and the other for the light ones), so heavy tasks don't block light ones. You can use queues to route tasks to different Celery nodes, as in the sketch below.
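
A minimal sketch of that two-queue setup; the task names, queue names, and concurrency values are illustrative:

    # celeryconfig.py -- route heavy and light tasks to separate queues
    # (task and queue names are hypothetical).
    task_routes = {
        "tasks.heavy_job": {"queue": "heavy"},
        "tasks.light_job": {"queue": "light"},
    }

    # Then run one worker per queue, capping concurrency on the heavy one:
    #   celery -A proj worker -Q heavy --concurrency=2
    #   celery -A proj worker -Q light --concurrency=20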

Force Celery to run next task in queue?

Is there any way to make Celery recheck if there are any tasks in the main queue ready to be started? Will the remote command add_consumer() do the job?
The reason: I am running several concurrent tasks, which spawn multiple sub-processes. When the tasks are done, the sub-processes sometimes take a few seconds to finish, so because the concurrency limit is maxed out by sub-processes, a new task from the queue is never started. And because Celery does not check again when the sub-processes finish, the queue gets stalled with no active tasks. Therefore I want to add a periodic task that tells Celery to recheck the queue and start the next task. How do I tell Celery this?
From the docs:
The add_consumer control command will tell one or more workers to start consuming from a queue. This operation is idempotent.
Yes, add_consumer does what you want. You could also combine that with a periodic task to "recheck the queue and start the next task" every so often (depending on your needs).
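
A rough sketch of that combination; the app module, broker URL, queue name, and 30-second interval are assumptions, not recommendations:

    from celery import Celery

    app = Celery("proj", broker="amqp://localhost")  # illustrative broker URL

    @app.task
    def recheck_queue():
        # Broadcast the add_consumer control command to all workers;
        # it is idempotent, so workers already consuming are unaffected.
        app.control.add_consumer("celery", reply=True)

    # Have Celery beat trigger the recheck periodically.
    app.conf.beat_schedule = {
        "recheck-default-queue": {
            "task": "proj.recheck_queue",  # hypothetical task name
            "schedule": 30.0,
        },
    }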
