Restart celery if celery worker is down on Windows - python

I want to know if there is a way to automatically restart a Celery worker programmatically when it goes down due to some error or issue.

Check out this SO thread.
As you are using Windows, look into the ability to run Celery as a Windows service, as explained elsewhere here on SO.
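If you would rather handle the restart programmatically than rely on a service manager, here is a minimal watchdog sketch in Python (the myapp module name is a placeholder; --pool=solo is used because the default prefork pool is not supported on Windows in Celery 4+):

import subprocess
import time

# Placeholder command line; adjust the app module and options to your project.
CMD = ["celery", "-A", "myapp", "worker", "--loglevel=INFO", "--pool=solo"]

def run_forever():
    while True:
        worker = subprocess.Popen(CMD)  # start the worker
        worker.wait()                   # block until the worker process exits
        # The worker died (crash or clean exit); pause briefly, then restart.
        time.sleep(5)

if __name__ == "__main__":
    run_forever()

Run this script instead of the bare celery command (e.g. from Task Scheduler at boot) and the worker is respawned whenever it dies.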

Related

How to make Celery worker consume single task and exit

How do I make the celery -A app worker command consume only a single task and then exit?
I want to run celery workers as a kubernetes Job that finishes after handling a single task.
I'm using KEDA for autoscaling workers according to queue messages.
I want to run celery workers as jobs for long running tasks, as suggested in the documentation:
KEDA long running execution
There's not really anything specific for this. You would have to hack in your own driver program, probably via a custom concurrency module. Are you trying to use KEDA ScaledJobs or something? You would just use a ScaledObject instead.
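That said, a workaround often used for the Job-per-task pattern (it is not an official Celery feature) is to hook a worker signal and shut the worker down after the first task completes. A rough sketch, with placeholder app and broker names:

from celery import Celery
from celery.exceptions import WorkerShutdown
from celery.signals import task_postrun

app = Celery("myapp", broker="redis://localhost:6379/0")  # placeholder broker

@task_postrun.connect
def shutdown_after_one_task(**kwargs):
    # Raised in the worker after a task finishes, this triggers a warm
    # shutdown; the process then exits and the Kubernetes Job completes.
    raise WorkerShutdown()

Start the worker with --pool=solo --concurrency=1 so the signal fires in the main process and exactly one task is consumed; KEDA ScaledJobs can then spawn one such Job per queued message.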

Django: Celery worker in production, Ubuntu 18+

I'm learning Celery and I'd like to ask:
What is the absolute simplest way to get Celery to run automatically when Django starts in Ubuntu? Right now I manually start celery -A {prj name} worker -l INFO via the terminal.
Is there any configuration that makes Celery pick up changes in the tasks.py code without restarting Celery? Right now I Ctrl+C and retype celery -A {prj name} worker -l INFO every time I change something in tasks.py. I can foresee a problem with this approach in production: if Celery starts automatically, would I need to restart Ubuntu instead?
(Setup: VPS, Django, Ubuntu 18.10, no Docker, no external resources, using Redis, which starts automatically.)
I am aware this is similar to Django-Celery in production and How to ..., but those are still a bit unclear, as they refer to Amazon and to shell scripts and crontabs. It seems a bit peculiar that these things wouldn't work out of the box.
I give the benefit of the doubt that I may have misunderstood how Celery is meant to be set up.
I have a deploy script that launch Celery in production.
In production it's better to launch the workers like this:
celery multi stop 5
celery multi start 5 -A {prj name} -Q:1 default -Q:2,3 QUEUE1 -Q:4,5 QUEUE2 --pidfile="%n.pid"
This will stop and then launch 5 workers for the different queues.
At launch, Celery loads this instance of your code, which means you need to relaunch it to apply any modification; you cannot add a file watcher in production (memory cost). A systemd unit for starting the worker at boot is sketched below.
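For the "run automatically" part of the question, here is a minimal systemd unit sketch (paths, user, and virtualenv location are assumptions; adapt them to your VPS):

[Unit]
Description=Celery worker for {prj name}
After=network.target redis-server.service

[Service]
User=www-data
Group=www-data
WorkingDirectory=/opt/{prj name}/
ExecStart=/opt/{prj name}/venv/bin/celery -A {prj name} worker -l INFO
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it once with sudo systemctl enable --now celery.service and it starts on every boot; after changing tasks.py, sudo systemctl restart celery replaces the manual Ctrl+C cycle.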

Celery How to make a worker run only when other workers are broken?

I have two servers, there is one celery worker on each server. I use Redis as the broker to collaborate with workers.
My question is: how can I make only one worker run most of the time, and once this worker breaks, have the other worker turn on as a backup?
Basically, just take one worker as a back-up.
After reading the docs [http://docs.celeryproject.org/en/latest/userguide/routing.html#redis-message-priorities], I know how to route a task to a certain worker via a dedicated queue.
This is, in my humble opinion, completely against the point of having a distributed system: to off-load CPU-heavy or long-running tasks, or to handle thousands of small tasks that you can't run elsewhere...
- You are running two servers anyway, so why keep the other one idle? More workers mean you will be able to process more tasks concurrently.
If you are not convinced and still want to do this, you need to write a tiny service on the machine with the idle Celery worker. This service will periodically check the health of the active worker, and if that check fails, it will start the Celery worker on the backup server (a sketch follows below).
Here is a question for you: why shouldn't this service simply restart the Celery worker on the active server? That is entirely possible, so again, I see no justification for having a completely idle machine doing nothing. If you are on a cloud platform, you can easily spin up a new instance from an existing image of your Celery worker. This is the scenario I use in production.
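For completeness, here is a sketch of such a health-check service in Python, assuming both machines point at the same Redis broker (the app name and broker URL are placeholders):

import subprocess
import time

from celery import Celery

app = Celery("myapp", broker="redis://broker-host:6379/0")  # placeholder

backup = None
while True:
    # ping() returns one reply per live worker; an empty list means
    # no worker answered within the timeout.
    replies = app.control.ping(timeout=5.0)
    if not replies and backup is None:
        # The active worker looks dead: start the backup on this machine.
        backup = subprocess.Popen(["celery", "-A", "myapp", "worker", "--loglevel=INFO"])
    time.sleep(30)

The same loop could just as easily restart the worker on the active server over SSH, which is why a dedicated standby machine is hard to justify.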

Celery beat sometimes stops working

I'm using the latest stable Celery (4) with RabbitMQ within my Django project.
RabbitMQ is running on a separate server within the local network. Beat periodically just stops sending tasks to the worker without any errors, and only restarting it resolves the issue.
There are no exceptions in the worker (I checked the logs, and I'm also using Sentry to catch exceptions). It just stops sending tasks.
Service config:
[Unit]
Description=*** Celery Beat
After=network.target
[Service]
User=***
Group=***
WorkingDirectory=/opt/***/web/
Environment="PATH=/opt/***/bin"
ExecStart=/opt/***/bin/celery -A *** beat --max-interval 30
[Install]
WantedBy=multi-user.target
Is it possible to fix this? Or are there any good alternatives? (Cron does not seem like the best solution.)
Your description sounds a lot like this open bug: https://github.com/celery/celery/issues/3409
There are a lot of details there, but the high-level description is that if the connection to RabbitMQ is lost, beat is unable to regain it.
Unfortunately, I can't see that anyone has definitively solved this issue.
You could start by debugging with:
ExecStart=/opt/***/bin/celery -A *** beat --loglevel DEBUG --max-interval 30
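As a crude stopgap while the upstream bug is open, you could also let systemd restart beat automatically by adding to the [Service] section of the unit above (values are assumptions; RuntimeMaxSec needs systemd 229+):

Restart=always
RestartSec=10
RuntimeMaxSec=86400

With Restart=always, systemd respawns beat whenever it exits, and RuntimeMaxSec forces a kill-and-restart once a day, which also covers the case where beat merely hangs without exiting.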

How to run celeryd as a daemon in Ubuntu?

I am trying to install an init.d script to run celery for scheduling tasks. When I try to start it with sudo /etc/init.d/celeryd start, it throws the error "User does not exist: 'celery'".
My celery configuration file (/etc/default/celeryd) contains:
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
I know that these are wrong; that is why it throws the error.
The documentation just says this:
CELERYD_USER
User to run celeryd as. Default is current user.
Nothing more about it.
Any help will be appreciated.
I am adding a proper answer so that it is clearly visible:
Workers are Unix processes that run the various Celery tasks. As you can see in the documentation, CELERYD_USER and CELERYD_GROUP determine the user and group these workers will run as in your Unix environment.
So, what happened initially in your case is that celery tried to start the worker with a user named "celery", which did not exist in your system. When you commented out these two options, celery started the workers with the user that issued the command sudo /etc/init.d/celeryd start, which in this case is the root (administrator) user (the default is the current user).
However, it is recommended to run the workers as an unprivileged user and not as root, for obvious reasons. So I recommend actually adding the celery user and group, using the small tutorial found here: http://www.cyberciti.biz/faq/unix-create-user-account/, and uncommenting again the
CELERYD_USER="celery"
CELERYD_GROUP="celery"
options.
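For example, on Ubuntu you can create that unprivileged user and group as a system account with no login shell (exact flags may vary by distribution):

sudo groupadd celery
sudo useradd -r -g celery -s /bin/false celery

After that, sudo /etc/init.d/celeryd start should find the 'celery' user and run the workers under it.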
