I am running a celery worker like this:
celery worker --app=portalmq --logfile=/tmp/portalmq.log --loglevel=INFO -E --pidfile=/tmp/portalmq.pid
Now I want to run this worker in the background. I have tried several things, including:
nohup celery worker --app=portalmq --logfile=/tmp/portal_mq.log --loglevel=INFO -E --pidfile=/tmp/portal_mq.pid >> /tmp/portal_mq.log 2>&1 </dev/null &
But it is not working. I have checked the celery documentation, and I found this:
Running the worker as a daemon
Running the celery worker server
Especially this comment is relevant:
In production you will want to run the worker in the background as a daemon.
To do this you need to use the tools provided by your platform, or something
like supervisord (see Running the worker as a daemon for more information).
This is too much overhead just to run a process in the background. I would need to install supervisord on my servers and get familiar with it. That's a no-go at the moment. Is there a simple way of running a celery worker in the background?
Supervisor is really simple and requires very little work to set up; the same applies to Celery in combination with Supervisor.
It should not take more than 10 minutes to set it up :)
install supervisor with apt-get
create /etc/supervisor/conf.d/celery.conf config file
paste something like this in the celery.conf file:
[program:celery]
directory = /my_project/
command = /usr/bin/python manage.py celery worker
plus (if you need them) some optional and useful settings (with dummy values):
user = celery_user
group = celery_group
stdout_logfile = /var/log/celeryd.log
stderr_logfile = /var/log/celeryd.err
autostart = true
environment=PATH="/some/path/",FOO="bar"
restart supervisor (or do supervisorctl reread; supervisorctl add celery)
after that you get the nice ctl commands to manage the celery process:
supervisorctl start/restart/stop celery
supervisorctl tail [-f] celery [stderr]
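Putting the steps together, the whole setup is roughly the following (a sketch; package and program names assume the Debian/Ubuntu layout described above):
apt-get install supervisor
# create /etc/supervisor/conf.d/celery.conf with the [program:celery] section shown above
supervisorctl reread
supervisorctl update    # or: supervisorctl add celery
supervisorctl status celery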
celery worker -A app.celery --loglevel=info --detach
This one worked for me; I was using celery with django:
celery -A proj_name worker -l INFO --detach
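Note that with --detach the worker no longer logs to your terminal, so it can be worth pointing it at a log file and pid file as well (a sketch with placeholder paths; both options also appear in the question's original command):
celery -A proj_name worker -l INFO --detach --logfile=/var/log/celery/worker.log --pidfile=/var/run/celery/worker.pid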
I have faced the same problem; a lazy solution is to use & at the end of the command.
For example
celery worker -A <app>.celery --loglevel=info &
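If the worker also has to survive closing the terminal, a slightly more robust variant of the same lazy approach is to combine nohup, output redirection, and disown (a sketch; <app> stays whatever your application module is):
nohup celery worker -A <app>.celery --loglevel=info > /tmp/celery.log 2>&1 &
disown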
The command below, when executed in a terminal, will start celery as a background process.
celery -A app.celery worker --loglevel=info --detach
In case you want to stop it, run ps aux | grep celery as mentioned by @Kaiss B. in another answer's comment, and kill -9 <process id> to kill the process.
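As a sketch, those two steps can be combined into a single line (the [c] in the grep pattern simply stops grep from matching its own process):
kill -9 $(ps aux | grep '[c]elery' | awk '{print $2}')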
But first of all you need to install Celery, for example with
apt install python-celery-common
Some of you might be wondering why the other answers, although upvoted, are not working on your system: it is because Celery changed the command syntax from
celery worker -A app.celery --loglevel=info --detach
to
celery -A app.celery worker --loglevel=info --detach
Hope that helps.
Related
I'm trying to run the example Django+Celery app from the official celery repository:
https://github.com/celery/celery/tree/master/examples/django
I cloned the repo, ran RabbitMQ in my docker container:
docker run -d --hostname localhost -p 15672:15672 --name rabbit-test rabbitmq:3
and ran the celery worker like this:
celery -A proj worker -l INFO
When I try to execute a task:
python ./manage.py shell
>>> from demoapp.tasks import add, mul, xsum
>>> res = add.delay(2,3)
>>> res.ready()
False
I always get False from res.ready(). The output from the worker shows that the task is received:
[2022-12-14 14:43:20,283: INFO/MainProcess] Task demoapp.tasks.add[29743cee-744b-4fa6-ba68-36d17e4ac806] received
but it's never done.
What might be wrong? How can I track down the problem?
The solution is to run the worker using the --pool option, like this:
celery -A proj worker -l INFO --pool solo
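To confirm the fix, the same check from the question should now succeed, for example (assuming the demoapp tasks from the example project):
python ./manage.py shell
>>> from demoapp.tasks import add
>>> res = add.delay(2, 3)
>>> res.get(timeout=10)
5
>>> res.ready()
True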
I am using celery==4.1.0 and django-celery-beat==1.1.0.
I am running gunicorn + celery + rabbitmq with Django.
This is my config for creating beat and worker
celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
celery -A myproject worker -l info -f /var/log/celery/celery.log --detach
During Django deployment I am doing the following:
rm -f celerybeat.pid
rm -f celeryd.pid
celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
celery -A myproject worker -l info -f /var/log/celery/celery.log --detach
service nginx restart
service gunicorn stop
sleep 1
service gunicorn start
I want to restart both celery beat and the worker, and it seems that this logic works. But I noticed that celery starts to use more and more memory during deployment, and after several deployments I hit 100% memory use. I tried different server setups and the problem does not seem to be related to them.
RabbitMQ may be to blame for the high memory usage. Can you safely restart Rabbit?
Also, can you confirm that after a restart there is the expected number of workers?
You are starting 2 new workers for every deployment without stopping/killing the previous workers.
During deployment, stop the existing workers with
kill -9 $PID
kill -9 `cat /var/run/myProcess.pid`
Alternatively, you can just kill all the workers with
pkill -9 celery
Now you can start workers as usual.
celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
celery -A myproject worker -l info -f /var/log/celery/celery.log --detach
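Putting that together with the deployment steps from the question, a minimal sketch could look like this (it assumes the processes write celerybeat.pid and celeryd.pid into the working directory, as in the question):
for pidfile in celerybeat.pid celeryd.pid; do
    if [ -f "$pidfile" ]; then
        kill -TERM "$(cat "$pidfile")" || true   # ask the old process to exit; fall back to -9 only if it hangs
        rm -f "$pidfile"
    fi
done
# then start beat and the worker with --detach as shown above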
I am running celery in production using supervisord. My supervisor configuration is below.
[program:celeryd]
command=%(ENV_PROJECT_PATH)s/scripts/celery_worker.sh
stdout_logfile=%(ENV_PROJECT_PATH)s/celeryd.log
stderr_logfile=%(ENV_PROJECT_PATH)s/celeryd.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=1000
priority=1000
My command to run celery worker is
celery_path=$(which celery)
$celery_path -A Project_Name worker --loglevel=info
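For reference, a minimal sketch of what scripts/celery_worker.sh could look like (the exec is there so that supervisor's stop/restart signals reach celery itself rather than a wrapper shell; treat this as an assumption, not the actual script):
#!/bin/bash
celery_path=$(which celery)
exec "$celery_path" -A Project_Name worker --loglevel=info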
My question is: how do I restart the celery worker when my codebase changes in production?
The main issue I run into is that long-running tasks may get killed if you tell supervisor to killasgroup, which would result in lost data.
The solution I've moved to is to send the main process a TERM, which will kill off the workers as they finish their tasks. Supervisor will then restart the main process after all the workers finish.
ps aux | grep celery.*MainProcess | awk '{print $2}' | xargs kill -TERM
This is also related.
Celery Production Graceful Restart
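In other words, after deploying new code a restart could be as simple as the following (a sketch; celeryd is the program name from the config above, and supervisor restarts it because autorestart=true):
ps aux | grep 'celery.*MainProcess' | awk '{print $2}' | xargs kill -TERM
supervisorctl status celeryd    # check that a fresh main process has come back up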
Add the following to the supervisor file and restart supervisor.
killasgroup=true
I have this task:
# imports assumed from context (djcelery-era Celery and the basecrm client)
import basecrm
from celery import schedules
from celery.task import PeriodicTask
from django.conf import settings

class BasecrmSync(PeriodicTask):
    run_every = schedules.crontab(minute='*/1')
    def run(self, **kwargs):
        bc = basecrm.Client(access_token=settings.BASECRM_AUTH_TOKEN)
        sync = basecrm.Sync(client=bc, device_uuid=settings.BASECRM_DEVICE_UUID)
        sync.fetch(synchronize)  # synchronize is a callback defined elsewhere in the project
And a celery config with a database broker:
CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend'
BROKER_URL = 'django://'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
I run
celery -A renuval_api worker -B --loglevel=debug
But it doesn't run the task...
I've also tried running it with
python3 manage.py celery worker --loglevel=DEBUG -E -B -c 1 --settings=renuval_api.settings.local
But it uses the amqp transport and I can't understand why.
I run a separate process for the beat function itself. I could never get periodic tasks to fire otherwise. Of course, I may have this completely wrong, but it works for me and has for some time.
For example, I have the celery worker with its app running in one process:
celery worker --app=celeryapp:app -l info --logfile="/var/log/celery/worker.log"
And I have the beat pointed to the same app in its own process:
celery --app=celeryapp:app beat
They are pointed at the same app and settings, and beat fires off the task which the worker picks up and does. This app is in the same code tree as my Django apps, but the processes are not running in Django. Perhaps you could run something like:
python3 manage.py celery beat --loglevel=DEBUG -E -B -c 1 --settings=renuval_api.settings.local
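Adapted to the project in the question, the separate worker and beat processes might look something like this (a sketch using the renuval_api app name from the question; log paths are placeholders):
celery -A renuval_api worker --loglevel=debug --logfile=/var/log/celery/renuval_worker.log
celery -A renuval_api beat --loglevel=debug --logfile=/var/log/celery/renuval_beat.log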
I hope that helps.
I have celeryd daemons working on small tasks. The daemon was configured with this Upstart script:
start on starting cessna
stop on stopping cessna
respawn
script
chdir /home/ubuntu/projects/cessna
exec su -c 'cd /home/ubuntu/projects/cessna; export MAX_POOL_SIZE="50"; newrelic-admin run-program celeryd -A cessna.celeryconfig --loglevel=info --concurrency=50 --pool=eventlet --queue=cessna_celery -E --pidfile=/tmp/cessna-3.pid >> /home/ubuntu/logs/cessna-worker-3.log 2>> /home/ubuntu/errs/cessna-worker-3.log';
end script
Not long ago I saw a lot of unacked tasks in RabbitMQ, with no crashes in the log files, etc. We moved to the native /etc/init.d/celeryd daemon, and that solved the problem.
So how could this be? Is there any relation between starting Celery with Upstart and unacknowledged tasks in Celery?