Celery beat doesn't run periodic task - python

I have this task:
import basecrm
from celery import schedules
from celery.task import PeriodicTask
from django.conf import settings

class BasecrmSync(PeriodicTask):
    run_every = schedules.crontab(minute='*/1')
    def run(self, **kwargs):
        bc = basecrm.Client(access_token=settings.BASECRM_AUTH_TOKEN)
        sync = basecrm.Sync(client=bc, device_uuid=settings.BASECRM_DEVICE_UUID)
        sync.fetch(synchronize)  # synchronize is a sync callback defined elsewhere
And a Celery config with the Django database broker:
CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend'
BROKER_URL = 'django://'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
I run:
celery -A renuval_api worker -B --loglevel=debug
But it doesn't run the task...
I've also tried running it with:
python3 manage.py celery worker --loglevel=DEBUG -E -B -c 1 --settings=renuval_api.settings.local
But it uses the amqp transport and I can't understand why.

I run a separate process for the beat function itself. I could never get periodic tasks to fire otherwise. Of course, I may have this completely wrong, but it works for me and has for some time.
For example, I have the celery worker with its app running in one process:
celery worker --app=celeryapp:app -l info --logfile="/var/log/celery/worker.log"
And I have the beat pointed to the same app in its own process:
celery --app=celeryapp:app beat
They are pointed at the same app and settings, and beat fires off the task which the worker picks up and does. This app is in the same code tree as my Django apps, but the processes are not running in Django. Perhaps you could run something like:
python3 manage.py celery beat --loglevel=DEBUG --settings=renuval_api.settings.local
I hope that helps.
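Adapted to the project in the question (just a sketch, assuming the app name renuval_api from the worker command and keeping djcelery's DatabaseScheduler for the schedule), the two processes would look something like:
celery -A renuval_api worker --loglevel=debug
celery -A renuval_api beat -S djcelery.schedulers.DatabaseScheduler --loglevel=debug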

Related

Django + Celery task never done

I'm trying to run the example Django+Celery app from the official Celery repository:
https://github.com/celery/celery/tree/master/examples/django
I cloned the repo and started RabbitMQ in a Docker container:
docker run -d --hostname localhost -p 15672:15672 --name rabbit-test rabbitmq:3
and ran the Celery worker like this:
celery -A proj worker -l INFO
When I try to execute a task:
python ./manage.py shell
>>> from demoapp.tasks import add, mul, xsum
>>> res = add.delay(2,3)
>>> res.ready()
False
res.ready() is always False. The output from the worker shows that the task was received:
[2022-12-14 14:43:20,283: INFO/MainProcess] Task demoapp.tasks.add[29743cee-744b-4fa6-ba68-36d17e4ac806] received
but it never completes.
What might be wrong? How can I track down the problem?
The solution is to run the worker using the --pool option, like this:
celery -A proj worker -l INFO --pool solo
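To confirm the fix, you can repeat the shell test; a quick sketch, assuming the example project's result backend is configured as in the repo:
>>> from demoapp.tasks import add
>>> res = add.delay(2, 3)
>>> res.get(timeout=10)  # blocks until the worker has executed the task
5
>>> res.ready()
True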

How do celery workers communicate in Heroku

I have some Celery workers in a Heroku app. My app uses Python 3.6 and Django; these are the relevant dependencies and their versions:
celery==3.1.26.post2
redis==2.10.3
django-celery==3.2.2
I do not know if these are relevant to this question, but just in case. On Heroku we are running the Heroku-18 stack.
As usual, we have our workers declared in a Procfile with the following content:
web: ... our django app ....
celeryd: python manage.py celery worker -Q celery --loglevel=INFO -O fair
one_type_of_worker: python manage.py celery worker -Q ... --maxtasksperchild=3 --loglevel=INFO -O fair
another_type: python manage.py celery worker -Q ... --maxtasksperchild=3 --loglevel=INFO -O fair
So, my current understanding of this process is the following:
Our celery queues run on multiple workers, each worker runs as a dyno on Heroku (not a server, but a “worker process” kind of thing, since servers aren’t a concept on Heroku). We also have multiple dynos running the same celery worker with the same queue, which results in multiple parallel “threads” for that queue to run more tasks simultaneously (scalability).
The web workers, celery workers, and celery queues can talk to each other because celery manages the orchestration between them. I think it's specifically the broker that handles this responsibility. But for example, this lets our web workers schedule a celery task on a specific queue and it is routed to the correct queue/worker, or a task running in one queue/worker can schedule a task on a different queue/worker.
Now here comes my question: how do the workers communicate? Do they use an API endpoint on localhost with a port? RPC? Do they use the broker URL? Magic?
I'm asking this because I'm trying to replicate this setup in ECS and I need to know how to set it up for celery.
Here is a guide on how Celery works on Heroku: https://devcenter.heroku.com/articles/celery-heroku
You can't run Celery on Heroku without a dedicated dyno for it. Also, make sure you have Redis configured in your Django Celery settings.
To run Celery on Heroku, you just add this line to your Procfile:
worker: celery -A YOUR-PROJECT_NAME worker -l info -B
Note: the command above will run both the Celery worker and Celery beat.
If you want to run them separately, you can use separate commands, but one command is recommended.
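As for the communication question itself: the web dynos and the worker dynos never talk to each other directly; every process just connects to the same broker (Redis in this setup), publishes tasks to it, and consumes tasks from it. So on ECS each container simply needs the same broker URL. A minimal sketch of the relevant Django settings for Celery 3.x, assuming your Redis instance exposes its URL in a REDIS_URL environment variable:
import os

BROKER_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')
CELERY_RESULT_BACKEND = BROKER_URL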

Celery worker stops when console is closed [duplicate]

I am running a celery worker like this:
celery worker --app=portalmq --logfile=/tmp/portalmq.log --loglevel=INFO -E --pidfile=/tmp/portalmq.pid
Now I want to run this worker in the background. I have tried several things, including:
nohup celery worker --app=portalmq --logfile=/tmp/portal_mq.log --loglevel=INFO -E --pidfile=/tmp/portal_mq.pid >> /tmp/portal_mq.log 2>&1 </dev/null &
But it is not working. I have checked the celery documentation, and I found this:
Running the worker as a daemon
Running the celery worker server
Especially this comment is relevant:
In production you will want to run the worker in the background as a daemon.
To do this you need to use the tools provided by your platform, or something
like supervisord (see Running the worker as a daemon for more information).
This is too much overhead just to run a process in the background. I would need to install supervisord on my servers and get familiar with it. No go at the moment. Is there a simple way of running a celery worker in the background?
supervisor is really simple and requires very little work to set up; the same applies to celery in combination with supervisor.
It should not take more than 10 minutes to set it up :)
install supervisor with apt-get
create a /etc/supervisor/conf.d/celery.conf config file
paste something like this in the celery.conf file:
[program:celery]
directory = /my_project/
command = /usr/bin/python manage.py celery worker
plus (if you need them) some optional and useful settings (with dummy values):
user = celery_user
group = celery_group
stdout_logfile = /var/log/celeryd.log
stderr_logfile = /var/log/celeryd.err
autostart = true
environment=PATH="/some/path/",FOO="bar"
restart supervisor (or do supervisorctl reread; supervisorctl add celery)
after that you get the nice ctl commands to manage the celery process:
supervisorctl start/restart/stop celery
supervisorctl tail [-f] celery [stderr]
celery worker -A app.celery --loglevel=info --detach
For me the following worked; I was using Celery with Django:
celery -A proj_name worker -l INFO --detach
I have faced the same problem; a lazy solution is to use & at the end of the command.
For example:
celery worker -A <app>.celery --loglevel=info &
The command below, when executed in a terminal, will start Celery as a background process.
celery -A app.celery worker --loglevel=info --detach
In case you want to stop it, run ps aux | grep celery (as mentioned by @Kaiss B. in another answer's comment) to find the process id, then kill -9 <process id> to kill the process.
But first of all you need to install celery with
apt install python-celery-common.
Some of you might be wondering why the other answers, although upvoted, are not working on your system: it's because Celery changed the command syntax from
celery worker -A app.celery --loglevel=info --detach
to
celery -A app.celery worker --loglevel=info --detach
Hope that helps.

Running celery beat on windows - mission impossible ?

I'm stuck running Celery 3.1.17 on Windows 7 (and later on a 2013 server) using Redis as the backend.
In my celery.py file I defined an app with one scheduled task:
app = Celery('myapp',
             backend='redis://localhost',
             broker='redis://localhost',
             include=['tasks']
)
app.conf.update(
    CELERYBEAT_SCHEDULE={
        'dumdum': {
            'task': 'tasks.dumdum',
            'schedule': timedelta(seconds=5),
        }
    }
)
The task writes a line to a file:
@app.task
def dumdum():
    with open('c:/src/dumdum.txt','w') as f:
        f.write('dumdum actually ran !')
Running the beat service from the command line
(venv) celery beat -A tasks
celery beat v3.1.17 (Cipater) is starting.
Configuration ->
. broker -> redis://localhost:6379/1
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]#%INFO
. maxinterval -> now (0s)
[2015-03-15 10:50:33,265: INFO/MainProcess] beat: Starting...
[2015-03-15 10:50:35,496: INFO/MainProcess] Scheduler: Sending due task dumdum (tasks.dumdum)
[2015-03-15 10:50:40,513: INFO/MainProcess] Scheduler: Sending due task dumdum (tasks.dumdum)
Looks promising, BUT NOTHING HAPPENS. Nothing is being written to the file.
The Celery documentation on running beat on Windows references this article from 2011. The article explains how to run celeryd as a scheduled task on Windows. celeryd has been deprecated since then, and the command stated in the article no longer works (there is no celery.bin.celeryd module).
So, what is the solution here?
Thanks.
I used the following command to run celery beat on Windows:
python manage.py celery beat
after following these installation steps:
Run celery beat on windows
It worked perfectly fine for me!
Celery beat and the Celery worker cannot run from the same project, as Celery v4.0 stopped supporting Windows for the worker and beat.
One thing you can do: suppose your project is named recommendation_system and your project hierarchy is as below:
recommendation_system
--your_app_dir
--main.py  # or manage.py
--app.py
You define the scheduler (for beat) and the worker functions in main.py; then you have to make a copy of this project, let's say named recommendation_system_beat.
Now, to run the workers, go inside the recommendation_system directory and run:
python.exe -m celery -A main worker --pool=solo --concurrency=5 --loglevel=info -n main.%h --queues=recommendation
where recommendation is the queue name; set the concurrency according to your needs.
This will run your workers, but beat will not run. To run beat,
go to recommendation_system_beat and run the following command:
python.exe -m celery -A main beat --loglevel=info
This will run your beat (scheduler).
So ultimately you need to run the worker and beat from two different copies of the project.

Celerybeat not executing periodic tasks

How do you diagnose why manage.py celerybeat won't execute any tasks?
I'm running celerybeat via supervisord with the command:
/usr/local/myapp/src/manage.py celerybeat --schedule=/tmp/celerybeat-schedule-myapp --pidfile=/tmp/celerybeat-myapp.pid --loglevel=INFO
Supervisord appears to run celerybeat just fine, and the log file shows:
[2013-06-12 13:17:12,540: INFO/MainProcess] Celerybeat: Starting...
[2013-06-12 13:17:12,571: WARNING/MainProcess] Reset: Account for new __version__ field
[2013-06-12 13:17:12,571: WARNING/MainProcess] Reset: Account for new tz field
[2013-06-12 13:17:12,572: WARNING/MainProcess] Reset: Account for new utc_enabled field
I have several periodic tasks showing as enabled on http://localhost:8000/admin/djcelery/periodictask which should run every few minutes. However, the celerybeat log never shows anything being executed. Why would this be?
celerybeat will just schedule the task; it won't execute it.
To execute the task you also need to start a worker. You can start celery beat together with the worker.
I use "celeryd -B".
In your case it should look like:
/usr/local/myapp/src/manage.py celery worker --beat --schedule=/tmp/celerybeat-schedule-myapp --pidfile=/tmp/celerybeat-myapp.pid --loglevel=INFO
or
/usr/local/myapp/src/manage.py celeryd -B --schedule=/tmp/celerybeat-schedule-myapp --pidfile=/tmp/celerybeat-myapp.pid --loglevel=INFO
We recently upgraded from Celery 4 to Celery 5.
Apparently the -l flag has been removed, or renamed?
This works in Celery 4, but not in Celery 5:
celery -A pm -l info beat
Remove -l:
celery -A pm beat
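The flag itself still exists in Celery 5; the CLI was rewritten, and options like -l now have to come after the sub-command, so (a sketch based on the Celery 5 CLI) this form should also work:
celery -A pm beat -l info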
