How do you ensure celeryd only runs as a single process? When I run manage.py celeryd --concurrency=1 and then ps aux | grep celery I see 3 instances running:
www-data 8609 0.0 0.0 20744 1572 ? S 13:42 0:00 python manage.py celeryd --concurrency=1
www-data 8625 0.0 1.7 325916 71372 ? S 13:42 0:01 python manage.py celeryd --concurrency=1
www-data 8768 0.0 1.5 401460 64024 ? S 13:42 0:00 python manage.py celeryd --concurrency=1
I've noticed a similar problem with celerybeat, which always runs as 2 processes.
As per this link, the number of processes would be 4: one main process, two child processes, and one celerybeat process;
also, if you're using FORCE_EXECV, there's another process started to clean up semaphores.
If you use celery + django-celery in development, with RabbitMQ or Redis as a broker, then it shouldn't use more
than one extra thread (none if CELERY_DISABLE_RATE_LIMITS is set).
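If you want to check which of those is the main worker process and which are its pool children, something like the following should show it (a rough sketch; the 'manage.py celeryd' pattern is taken from your ps output):
# find the oldest matching process (the main worker), then list its children
MAIN_PID=$(pgrep -of 'manage.py celeryd')
ps --ppid "$MAIN_PID" -o pid,cmd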
I am using celery==4.1.0 and django-celery-beat==1.1.0.
I am running gunicorn + celery + rabbitmq with Django.
This is my config for creating beat and worker
celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
celery -A myproject worker -l info -f /var/log/celery/celery.log --detach
During Django deployment, I do the following:
rm -f celerybeat.pid
rm -f celeryd.pid
celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
celery -A myproject worker -l info -f /var/log/celery/celery.log --detach
service nginx restart
service gunicorn stop
sleep 1
service gunicorn start
I want to restart both celery beat and the worker, and this logic seems to work. But I noticed that celery starts to use more and more memory during deployment, and after several deployments I hit 100% memory use. I tried different server setups and it does not seem to be related to the server.
rabbitmq may be to blame for the high memory usage. Can you safely restart rabbit?
Also, can you confirm that after a restart there is the expected number of workers?
You are starting 2 new processes (beat and a worker) on every deployment without stopping/killing the previous ones.
During deployment, stop the existing workers with
kill -9 $PID
kill -9 `cat /var/run/myProcess.pid`
Alternatively, you can just kill all the workers with
pkill -9 celery
Now you can start workers as usual.
celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
celery -A myproject worker -l info -f /var/log/celery/celery.log --detach
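Putting it together, a deployment step could stop the old instances gracefully via their pidfiles before starting the new ones. This is only a sketch and assumes the pidfiles live in the directory you deploy from, as your rm -f lines suggest:
# stop the previous beat and worker instead of just deleting their pidfiles
[ -f celerybeat.pid ] && kill -TERM "$(cat celerybeat.pid)"
[ -f celeryd.pid ] && kill -TERM "$(cat celeryd.pid)"
sleep 5    # give the old worker a moment to finish its current tasks and exit
rm -f celerybeat.pid celeryd.pid
celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
celery -A myproject worker -l info -f /var/log/celery/celery.log --detach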
I am running celery in production using supervisord. My supervisor configuration is below.
[program:celeryd]
command=%(ENV_PROJECT_PATH)s/scripts/celery_worker.sh
stdout_logfile=%(ENV_PROJECT_PATH)s/celeryd.log
stderr_logfile=%(ENV_PROJECT_PATH)s/celeryd.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=1000
priority=1000
My command to run celery worker is
celery_path=$(which celery)
$celery_path -A Project_Name worker --loglevel=info
I want to ask: how do I restart the celery worker when my codebase changes in production?
The main issue I run into is that long-running tasks may get killed if you tell supervisor to killasgroup, which would result in lost data.
The solution I've moved to is sending TERM to the MainProcess, which lets the workers finish their current tasks before shutting down. supervisor will then restart the main process after all the workers finish.
ps aux | grep '[c]elery.*MainProcess' | awk '{print $2}' | xargs kill -TERM
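After sending TERM you can confirm that supervisor brought the worker back up under the new code, for example (assuming the program name celeryd from the config above):
supervisorctl status celeryd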
This is also related.
Celery Production Graceful Restart
Add the following to the supervisor file and restart supervisor.
killasgroup=true
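Then reload the configuration and restart the program, for example (the program name celeryd matches the config shown earlier):
supervisorctl reread
supervisorctl update
supervisorctl restart celeryd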
I am debugging an issue where every scheduled task is run twice. I saw two processes named celery. Is it normal for two celery processes to be running?
$ ps -ef | grep celery
hgarg 303 32764 0 17:24 ? 00:00:00 /home/hgarg/.pythonbrew/venvs/Python-2.7.3/hgarg_env/bin/python /data/hgarg/current/manage.py celeryd -B -s celery -E --scheduler=djcelery.schedulers.DatabaseScheduler -P eventlet -c 1000 -f /var/log/celery/celeryd.log -l INFO --pidfile=/var/run/celery/celeryd.pid --verbosity=1 --settings=settings
hgarg 307 21179 0 17:24 pts/1 00:00:00 grep celery
hgarg 32764 1 4 17:24 ? 00:00:00 /home/hgarg/.pythonbrew/venvs/Python-2.7.3/hgarg_env/bin/python /data/hgarg/current/manage.py celeryd -B -s celery -E --scheduler=djcelery.schedulers.DatabaseScheduler -P eventlet -c 1000 -f /var/log/celery/celeryd.log -l INFO --pidfile=/var/run/celery/celeryd.pid --verbosity=1 --settings=settings
There were two pairs of Celery processes, the older of which shouldn't have been there. Killing them all and restarting celery seems to have fixed it. Without any other recent changes, it's unlikely that anything else caused it.
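To spot stale instances in the future, you can compare every matching PID against the one recorded in the pidfile (a sketch; the pidfile path comes from the command line above):
pgrep -f 'manage.py celeryd'      # every matching process
cat /var/run/celery/celeryd.pid   # the PID of the instance that should be running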
root@www:~# ps aux | grep uwsgi
root 4660 0.0 0.0 10620 892 pts/1 S+ 19:13 0:00 grep --color=auto uwsgi
root 19372 0.0 0.6 51228 6628 ? Ss 06:41 0:03 uwsgi --master --die-on-term --emperor /var/www/*/uwsgi.ini
root 19373 0.0 0.1 40420 1292 ? S 06:41 0:03 uwsgi --master --die-on-term --emperor /var/www/*/uwsgi.ini
www-data 19374 0.0 1.9 82640 20236 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app2/uwsgi.ini
www-data 19375 0.0 2.4 95676 25324 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app3/uwsgi.ini
www-data 19385 0.0 2.1 90772 22248 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app2/uwsgi.ini
www-data 19389 0.0 2.0 95676 21244 ? S 06:41 0:00 /usr/local/bin uwsgi --ini /var/www/app3/uwsgi.ini
Above is the ps output of the uwsgi processes. The strange thing is that for each ini file there are two instances loaded, and I even have two uwsgi masters. Is this normal?
My deployment strategy for uwsgi is:
have the Emperor managed by Upstart
the Emperor searches for each uwsgi.ini in the apps folder
uwsgi.conf for upstart:
# simple uWSGI script
description "uwsgi tiny instance"
start on runlevel [2345]
stop on runlevel [06]
exec uwsgi --master --die-on-term --emperor "/var/www/*/uwsgi.ini"
uwsgi.ini (I have two apps, and both apps have the same ini except for the app# numbering):
[uwsgi]
# variables
uid = www-data
gid = www-data
projectname = myproject
projectdomain = www.myproject.com
base = /var/www/app2
# config
enable-threads
protocol = uwsgi
venv = %(base)/
pythonpath = %(base)/
wsgi-file = %(base)/app.wsgi
socket = /tmp/%(projectdomain).sock
logto = %(base)/logs/uwsgi.log
You started it with the --master option, which spawns a master process to control the workers.
From the official documentation https://uwsgi-docs.readthedocs.org/en/latest/Glossary.html?highlight=master
master
uWSGI’s built-in prefork+threading multi-worker management mode, activated by flicking the master switch on. For all practical serving deployments it’s not really a good idea not to use master mode.
You should read http://uwsgi-docs.readthedocs.org/en/latest/Options.html#master
And also this thread might have some info for you. uWSGI: --master with --emperor spawns two emperors
It is generally not recommended to use --master and --emperor together.
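If you want an Emperor-only setup, the Upstart exec line would drop --master and each vassal would enable its own master instead (a sketch using the same uwsgi options as above):
# Upstart: the Emperor supervises the vassals itself, so no --master here
exec uwsgi --die-on-term --emperor "/var/www/*/uwsgi.ini"
Then set master = true inside each app's uwsgi.ini so every vassal keeps its own worker manager.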
My educated guess on this topic is that it should indeed be transferred to Server Fault.
But here is the answer:
You must have started the upstart script two times ;-)
Just try to kill the main root process with a SIGTERM and see if the child processes die too.
If you have run the upstart script twice, you will have one root process and two children remaining.
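You can check from the shell how many Emperors Upstart actually started (a sketch; the job name uwsgi is an assumption based on the uwsgi.conf filename):
initctl status uwsgi                       # what Upstart believes is running
pgrep -fc 'uwsgi --master --die-on-term'   # how many matching emperor processes exist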
I have a celeryd daemon working on small tasks. This daemon was configured with the Upstart script below:
start on starting cessna
stop on stopping cessna
respawn
script
chdir /home/ubuntu/projects/cessna
exec su -c 'cd /home/ubuntu/projects/cessna; export MAX_POOL_SIZE="50"; newrelic-admin run-program celeryd -A cessna.celeryconfig --loglevel=info --concurrency=50 --pool=eventlet --queue=cessna_celery -E --pidfile=/tmp/cessna-3.pid >> /home/ubuntu/logs/cessna-worker-3.log 2>> /home/ubuntu/errs/cessna-worker-3.log';
end script
Not long ago I saw a lot of unacked tasks in rabbitmq, with no crashes in the log files, etc. We moved to the native /etc/init.d/celeryd daemon, and that solved the problem.
So how can this be? Is there any relation between starting Celery with Upstart and unacknowledged tasks in Celery?
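For reference, the /etc/default/celeryd file consumed by the generic init.d script could look roughly like this. This is only a sketch: the node name, user, and group are guesses, and the paths are taken from the Upstart job above.
# /etc/default/celeryd
CELERYD_NODES="cessna-worker"
CELERY_APP="cessna.celeryconfig"
CELERYD_CHDIR="/home/ubuntu/projects/cessna"
CELERYD_OPTS="--concurrency=50 --pool=eventlet -Q cessna_celery -E"
CELERYD_LOG_FILE="/home/ubuntu/logs/cessna-worker-3.log"
CELERYD_PID_FILE="/tmp/cessna-3.pid"
CELERYD_USER="ubuntu"
CELERYD_GROUP="ubuntu"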