I have two workers:
celery worker -l info --concurrency=2 -A o_broker -n main_worker
celery worker -l info --concurrency=2 -A o_broker -n second_worker
I am using flower to monitor and receive API requests for these workers:
flower -A o_broker
To launch tasks on these Celery workers from an API, I use Flower per the docs:
curl -X POST -d '{"args":[1,2]}' 'http://localhost:5555/api/task/async-apply/o_broker.add'
However, with this POST request the task runs on either one of the workers. I need to be able to choose which specific worker completes the task.
How do I specify or set this up so I can choose what worker to use for the add task? If you have a solution using another API without flower, that would also work.
The easiest way to achieve this is with separate queues. Start the first worker with -Q first_worker,celery and the second worker with -Q second_worker,celery. celery is the default queue name in Celery.
Now, when you want to send a task to just the first worker, you can route the task to the first_worker queue using celery's task_routes setting. You treat routing tasks to the second_worker queue symmetrically. You can also manually route a particular task call to a certain queue when using apply_async, e.g.:
add.apply_async(args=(1, 2), queue='first_worker')
n.b., last I checked, flower will only monitor one of your queues (by default it's the celery queue).
I have my API, and some endpoints need to forward requests to Celery. The idea is to have a dedicated API service that only instantiates a Celery client and uses its send_task() method, and a separate service (the workers) that consumes the tasks. The code for the task definitions would live in the worker service; essentially, the Celery app (API) and the Celery worker are split into two separate services.
I don't want my API to know about any Celery task definitions; the endpoints only need to call celery_client.send_task('some_task', (some_arguments)). So on one service I have my API, and on another service/host I have the Celery code base where my Celery worker will execute the tasks.
I came across this great article that describes what I want to do:
https://medium.com/@tanchinhiong/separating-celery-application-and-worker-in-docker-containers-f70fedb1ba6d
and this post Celery - How to send task from remote machine?
I need help creating routes for tasks from the API. I was expecting celery_client.send_task() to have a queue= keyword, but it does not. I need two queues, and two workers that will consume content from these two queues.
Commands for my workers:
celery -A <path_to_my_celery_file>.celery_client worker --loglevel=info -Q queue_1
celery -A <path_to_my_celery_file>.celery_client worker --loglevel=info -Q queue_2
I have also visited celery "Routing Tasks" documentation, but it is still unclear to me how to establish this communication.
Your API side should hold the router. I guess that's not an issue, because it is only a map of task -> queue (i.e. send task1 to queue1).
In other words, your celery_client should have task_routes like:
task_routes = {
    'mytasks.some_task': 'queue_1',
    'mytasks.some_other_task': 'queue_2',
}
I'm new to asynchronous tasks and I'm using django-celery and was hoping to use django-celery-beat to schedule periodic tasks.
However it looks like celery-beat doesn't pick up one-off tasks. Do I need two Celery instances, one as a worker for one off tasks and one as beat for scheduled tasks for this to work?
Pass the -B parameter to your worker; it tells the worker to also run the beat scheduler. That worker will still execute all other tasks, both the ones sent from beat and the "one-off" ones; it really doesn't matter to the worker.
So the full command looks like:
celery -A flock.celery worker -l DEBUG -B -E
If you have multiple periodic tasks executing, for example, every 10 seconds, they should all point to the same schedule object; see the Celery periodic-tasks documentation.
I have a celery setup running in a production environment (on Linux) where I need to consume two different task types from two dedicated queues (one for each). The problem that arises is, that all workers are always bound to both queues, even when I specify them to only consume from one of them.
TL;DR
Celery running with 2 queues
Messages are published in correct queue as designed
Workers keep consuming both queues
Leads to deadlock
General Information
Think of my two different task types as a hierarchical setup:
A task is a regular celery task that may take quite some time, because it dynamically dispatches other celery tasks and may be required to chain through their respective results
A node is a dynamically dispatched sub-task, which also is a regular celery task but itself can be considered an atomic unit.
My task thus can be a more complex setup of nodes where the results of one or more nodes serves as input for one or more subsequent nodes, and so on. Since my tasks can take longer and will only finish when all their nodes have been deployed, it is essential that they are handled by dedicated workers to keep a sufficient number of workers free to consume the nodes. Otherwise, this could lead to the system being stuck, when a lot of tasks are dispatched, each consumed by another worker, and their respective nodes are only queued but will never be consumed, because all workers are blocked.
If this is a bad design in general, please make any propositions on how I can improve it. I did not yet manage to build one of these processes using celery's built-in canvas primitives. Help me, if you can?!
Configuration/Setup
I run celery with amqp and have set up the following queues and routes in the celery configuration:
from kombu import Exchange, Queue

CELERY_QUEUES = (
    Queue('prod_nodes', Exchange('prod'), routing_key='prod.node'),
    Queue('prod_tasks', Exchange('prod'), routing_key='prod.task'),
)

CELERY_ROUTES = {
    'deploy_node': {'queue': 'prod_nodes', 'routing_key': 'prod.node'},
    'deploy_task': {'queue': 'prod_tasks', 'routing_key': 'prod.task'},
}
When I launch my workers, I issue a call similar to the following:
celery multi start w_task_01 w_node_01 w_node_02 -A my.deployment.system \
-E -l INFO -P gevent -Q:1 prod_tasks -Q:2-3 prod_nodes -c 4 --autoreload \
--logfile=/my/path/to/log/%N.log --pidfile=/my/path/to/pid/%N.pid
The Problem
My queue and routing setup seems to work properly, as I can see messages being correctly queued in the RabbitMQ Management web UI.
However, all workers always consume tasks from both queues. I can see this when I open the Flower web UI and inspect one of the deployed tasks: e.g. w_node_01 starts consuming messages from the prod_tasks queue, even though it shouldn't.
The RabbitMQ Management web UI furthermore tells me, that all started workers are set up as consumers for both queues.
Thus, I ask you...
... what did I do wrong?
Where is the issue with my setup or worker start call? How can I prevent the workers from always consuming from both queues? Do I really have to make additional settings at runtime (which I certainly do not want)?
Thanks for your time and answers!
You can create two separate workers, one for each queue, and have each one define which queue it should take tasks from using the -Q command-line argument.
If you want to keep the total number of processes the same (by default, each worker opens one process per CPU core), you can use the --concurrency flag (see the Celery docs for more info).
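For example, instead of one celery multi call that shares flags across all nodes, a sketch with independent worker invocations (names taken from the question; logging and event flags as in the original call):

```shell
celery worker -A my.deployment.system -n w_task_01 -Q prod_tasks -c 4 -l INFO &
celery worker -A my.deployment.system -n w_node_01 -Q prod_nodes -c 4 -l INFO &
celery worker -A my.deployment.system -n w_node_02 -Q prod_nodes -c 4 -l INFO &
```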
Celery allows configuring a worker with a specific queue.
1) Specify the name of the queue with 'queue' attribute for different types of jobs
celery.send_task('job_type1', args=[], kwargs={}, queue='queue_name_1')
celery.send_task('job_type2', args=[], kwargs={}, queue='queue_name_2')
2) Add the following entry in configuration file
CELERY_CREATE_MISSING_QUEUES = True
3) On starting the worker, pass -Q queue_name as an argument so it consumes from the desired queue.
celery -A proj worker -l info -Q queue_name_1 -n worker1
celery -A proj worker -l info -Q queue_name_2 -n worker2
I am using Celery in a project, where I use it as a scheduler (for periodic tasks).
My Celery task looks like:
@periodic_task(run_every=timedelta(seconds=300))
def update_all_feed():
    feed_1()
    feed_2()
    ...........
    feed_n()
But as the number of feeds increases, it takes a long time to get from one feed to the next (e.g. when Celery is working on feed n, it takes a long time to get to feed n+1). I want to use Celery's concurrency to run multiple feeds at once.
After going through the docs, I found I can call a celery task like below:
feed.delay()
How can I configure Celery so that it picks up all the feed ids and processes several of them at a time (e.g. 5 feeds at a time)? I realize that to achieve this I will have to run Celery as a daemon.
N.B.: I am using MongoDB as a broker; all I did was install it and add the URL to Celery's config.
You can schedule all your feeds like this:
@periodic_task(run_every=timedelta(seconds=300))
def update_all_feed():
    feed_1.delay()
    feed_2.delay()
    .......
    feed_n.delay()
or you can use a group to simplify it (note that the tasks are collected as signatures with .s() and dispatched together, rather than calling .delay() inside the group):
from celery import group
@periodic_task(run_every=timedelta(seconds=300))
def update_all_feed():
    group(feed.s(i) for i in range(10)).apply_async()
Now to run the tasks you can start a worker to execute tasks
celery worker -A your_app -l info --beat
This starts executing your task every five minutes. However, the default concurrency is equal to the number of CPU cores. You can change the concurrency as well; if you want to execute 10 tasks at a time concurrently:
celery worker -A your_app -l info --beat -c 10
From the Celery documentation:
from celery.task.sets import TaskSet
from .tasks import feed, get_feed_ids
job = TaskSet(tasks=[
    feed.subtask((feed_id,)) for feed_id in get_feed_ids()
])
result = job.apply_async()
results = result.join() # There's more in the documentation
I'm just starting out with Celery in a Django project, and am kinda stuck on this particular problem: basically, I need to distribute a long-running task to different workers. The task is actually broken into several steps, each of which takes considerable time to complete. Therefore, if some step fails, I'd like Celery to retry the task using the same worker, so it can reuse the results from the already-completed steps. I understand that Celery uses routing to distribute tasks to certain servers, but I can't find anything about this particular problem. I use RabbitMQ as my broker.
You could have every celeryd instance consume from a queue named after the hostname of the worker:
celeryd -l info -n worker1.example.com -Q celery,worker1.example.com
This sets the hostname to worker1.example.com and has the worker consume from a queue of the same name, as well as from the default queue (named celery).
Then to direct a task to a specific worker you can use:
task.apply_async(args, kwargs, queue="worker1.example.com")
Similarly, to direct a retry:
task.retry(queue="worker1.example.com")
or to direct the retry to the same worker:
task.retry(queue=task.request.hostname)