Celery task revocations are stored in memory, so they do not persist when the worker is restarted.
According to the Celery documentation they can be persisted by starting the worker with a state database, e.g. celery -A proj worker -l info --statedb=/var/run/celery/worker.state
http://celery.readthedocs.io/en/latest/userguide/workers.html#worker-persistent-revokes
But when I ran the command I got a "file not found" error, so I created the file and ran the command again; then it told me the db type could not be determined.
I tried to look up how to set the persistent database to use in celery but found no results. Any help will be appreciated.
So it turns out I had to create the directory first, and the celery worker must be allowed to create a file in that directory.
My solution was to create a celery directory in the project and then run:
celery -A proj worker -l info --statedb=celery/working.state
and it works
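In case it helps, here is a minimal Python 3 sketch of the same idea (the launcher script itself is hypothetical; the path is relative to the project root, as in the command above):
import os
import subprocess
# the worker must be able to create and write the state file in this directory
state_dir = "celery"
if not os.path.isdir(state_dir):
    os.makedirs(state_dir)
# same worker command as above, just launched from Python
subprocess.call([
    "celery", "-A", "proj", "worker",
    "-l", "info",
    "--statedb=" + os.path.join(state_dir, "working.state"),
])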
I have some celery workers in a Heroku app. My app uses Python 3.6 and Django; these are the relevant dependencies and their versions:
celery==3.1.26.post2
redis==2.10.3
django-celery==3.2.2
I do not know if they are useful to this question, but just in case. On Heroku we are running the Heroku-18 stack.
As usual, we have our workers declared in a Procfile with the following content:
web: ... our django app ....
celeryd: python manage.py celery worker -Q celery --loglevel=INFO -O fair
one_type_of_worker: python manage.py celery worker -Q ... --maxtasksperchild=3 --loglevel=INFO -O fair
another_type: python manage.py celery worker -Q ... --maxtasksperchild=3 --loglevel=INFO -O fair
So, my current understanding of this process is the following:
Our celery queues run on multiple workers, each worker runs as a dyno on Heroku (not a server, but a “worker process” kind of thing, since servers aren’t a concept on Heroku). We also have multiple dynos running the same celery worker with the same queue, which results in multiple parallel “threads” for that queue to run more tasks simultaneously (scalability).
The web workers, celery workers, and celery queues can talk to each other because celery manages the orchestration between them. I think it's specifically the broker that handles this responsibility. But for example, this lets our web workers schedule a celery task on a specific queue and it is routed to the correct queue/worker, or a task running in one queue/worker can schedule a task on a different queue/worker.
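For context, this is roughly how one of our web processes puts a task on a specific queue today (the task and queue names here are just illustrative, not our real code):
# illustrative names only
from myapp.tasks import process_order
# as I understand it, the broker carries this message to whichever dyno
# runs a worker consuming the "orders" queue
process_order.apply_async(args=[42], queue="orders")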
Now here comes my question: how do the workers communicate? Do they use an API endpoint on localhost with a port? RPC? Do they use the broker URL? Magic?
I'm asking this because I'm trying to replicate this setup in ECS and I need to know how to set it up for celery.
This Heroku article explains how celery works on Heroku: https://devcenter.heroku.com/articles/celery-heroku
You can't run celery on Heroku without getting a Heroku dyno for it. Also, make sure you have Redis configured in your Django celery settings.
To run celery on Heroku, you just add this line to your Procfile:
worker: celery -A YOUR-PROJECT_NAME worker -l info -B
Note: the above command runs both the celery worker and celery beat.
If you want to run them separately you can use separate commands, but a single command is recommended.
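For example, with Celery 3.1 and django-celery you can point the broker at Redis in settings.py roughly like this (a sketch assuming the Heroku Redis add-on, which exposes a REDIS_URL config var; every dyno that loads these settings then talks to the same broker):
# settings.py (sketch)
import os
BROKER_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
CELERY_RESULT_BACKEND = BROKER_URL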
My colleague wrote the celery tasks, the necessary configuration in the settings file, and the supervisor config file. Everything is working perfectly fine. The project has been handed over to me and I am seeing some issues that I have to fix.
There are two projects running on a single machine. Both projects are almost the same; let's call them projA and projB.
The supervisord.conf file looks like this:
;for projA
[program:celeryd]
directory=/path_to_projA/
command=celery -A project worker -l info
...
[program:celerybeat]
directory=/path_to_projA/
command=celery -A project beat -l info
...
; For projB
[program:celerydB]
directory=/path_to_projB/
command=celery -A project worker -l info
...
[program:celerybeatB]
directory=/path_to_projB/
command=celery -A project beat -l info
...
The issue is that I am creating tasks in a loop and only one task is received by the celeryd of projA; the remaining tasks are not received (or may be received by the celeryd of projB).
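For reference, the tasks are created roughly like this (the task name and data are illustrative):
# illustrative only; the real task lives in the project code base
from project.tasks import sync_record
for record_id in [1, 2, 3]:
    sync_record.delay(record_id)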
But when I stop the celery programs for projB, everything works well. Please note that the actual name of the django app is project, hence celery -A project worker/beat -l info.
Please bear with me, I am new to celery; any help is appreciated. TIA.
As the Celery docs say,
Celery is an asynchronous task queue/job queue based on distributed message passing.
When multiple tasks are created in a loop, they are evenly distributed between the two workers, i.e. the worker of projA and the worker of projB, since both workers consume from the same queue.
If the projects are similar or, as you mentioned, almost the same, you can use Celery queues, but of course the queues must be different across the projects.
The Celery docs for this are provided here.
You need to set CELERY_DEFAULT_QUEUE, CELERY_DEFAULT_ROUTING_KEY and CELERY_QUEUES
in your settings.py file.
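For instance, a sketch of what projA's settings.py could contain (the queue and routing-key names are illustrative; projB would use different ones):
# settings.py for projA (names are illustrative)
from kombu import Queue
CELERY_DEFAULT_QUEUE = 'projA'
CELERY_DEFAULT_ROUTING_KEY = 'projA'
CELERY_QUEUES = (
    Queue('projA', routing_key='projA'),
)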
And your supervisord.conf file needs the queue name on the command line for the worker programs.
For example: command=celery -A project worker -l info -Q <queue_name>
And that should work, based on my experience.
Running the celery worker with the following command creates two files, w1.log and w1.pid, which I do not want:
celery multi start w1 -A destiPak.celery -l info
Output
celery multi v3.1.20 (Cipater)
> Starting nodes...
> w1@foo-bar: OK
Show worker
celery multi show w1
Output
/Users/foo/bar/bin/python -m celery worker --detach -n w1@foo-bar --pidfile=w1.pid --logfile=w1.log --executable=/Users/foo/bar/bin/python
Any suggestions on how to avoid creating those files while running the worker as a daemon?
It's hard to understand why you would not want these files - the pid file is only a few bytes and the log files will contain useful information, and you can use logrotate or whatever to ensure that they do not take up too much space.
That said, if you use supervisord to manage the workers instead of celery multi you can configure it to not generate log files, plus it does not use .pid files.
Here's a supervisord config file that should do what you want:
[program:celery]
command=celery -A destiPak.celery worker -n w1@foo-bar
autostart=true
stdout_logfile=/dev/null
redirect_stderr=true
I'm trying to process some tasks using celery, and I'm not having much luck. I'm running celeryd and celerybeat as daemons. I have a tasks.py file that looks like this, with a simple app and task defined:
from celery import Celery

app = Celery('tasks', broker='amqp://user:pass@hostname:5672/vhostname')

@app.task
def process_file(f):
    # do some stuff
    # and log results
    pass
And this file is referenced from another file, process.py, which I use to monitor for file changes; it looks like:
from tasks import process_file
file_name = '/file/to/process'
result = process_file.delay(file_name)
result.get()
And with that little code, celery is unable to see tasks and process them. I can execute similar code in the python interpreter and celery processes them:
>>> from tasks import process_file
>>> process_file.delay('/file/to/process')
<AsyncResult: 8af23a4e-3f26-469c-8eee-e646b9d28c7b>
When I run the tasks from the interpreter however, beat.log and worker1.log don't show any indication that the tasks were received, but using logging I can confirm the task code was executed. There are also no obvious errors in the .log files. Any ideas what could be causing this problem?
My /etc/default/celerybeat looks like:
CELERY_BIN="/usr/local/bin/celery"
CELERYBEAT_CHDIR="/opt/dirwithpyfiles"
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
And /etc/default/celeryd:
CELERYD_NODES="worker1"
CELERY_BIN="/usr/local/bin/celery"
CELERYD_CHDIR="/opt/dirwithpyfiles"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
CELERY_CREATE_DIRS=1
So I figured out my issue by running celery from the CLI instead of as a daemon, which let me see more detailed output of the errors that were happening. I did this by running:
user@hostname /opt/dirwithpyfiles $ su celery
celery@hostname /opt/dirwithpyfiles $ celery -A tasks worker --loglevel=info
There I could see that a permissions issue was occurring as the celery user, which did not happen when I ran the commands from the python interpreter as my normal user. I fixed this by changing the permissions of /file/to/process so that both users could read it.
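The actual fix boiled down to something like this (a sketch; adjust the mode bits to whatever your setup really needs):
import os
import stat
# let the owner read/write and everyone else, including the celery user, read
os.chmod('/file/to/process',
         stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)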
I've started a Celery 3 worker (Redis backend) on a dev machine with a command like:
celery -A tasks worker --loglevel=info -E
(and the celery screen says that events are enabled)
Then I try to get stats for this worker with the command:
celery status
which results in
Error: No nodes replied within time constraint
What can be a possible cause for this?
I've already tried restarting the worker and the machine.