django with celery: how to set up periodic tasks with admin interface - python

I have a problem with setting up periodic tasks with celery.
I got the scheduler running by:
celery -A myproject beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
It seems as if the scheduler is up and running my task:
In the admin interface, I can see/edit the task:
But it does nothing. As far as I can tell, the task in myproject.backend.tasks.importnewvideo.py should be executed.
But it is not.
In the celery manual, I could not find any further information on how to set up a task via the admin interface.
Any ideas?
Thanks in advance.

Related

How do celery workers communicate in Heroku

I have some celery workers in a Heroku app. My app is using Python 3.6 and Django; these are the relevant dependencies and their versions:
celery==3.1.26.post2
redis==2.10.3
django-celery==3.2.2
I do not know if these are useful to this question, but just in case. On Heroku we are running the Heroku-18 stack.
As is usual, we have our workers declared in a Procfile, with the following content:
web: ... our django app ....
celeryd: python manage.py celery worker -Q celery --loglevel=INFO -O fair
one_type_of_worker: python manage.py celery worker -Q ... --maxtasksperchild=3 --loglevel=INFO -O fair
another_type: python manage.py celery worker -Q ... --maxtasksperchild=3 --loglevel=INFO -O fair
So, my current understanding of this process is the following:
Our celery queues run on multiple workers, each worker runs as a dyno on Heroku (not a server, but a “worker process” kind of thing, since servers aren’t a concept on Heroku). We also have multiple dynos running the same celery worker with the same queue, which results in multiple parallel “threads” for that queue to run more tasks simultaneously (scalability).
The web workers, celery workers, and celery queues can talk to each other because celery manages the orchestration between them. I think it's specifically the broker that handles this responsibility. But for example, this lets our web workers schedule a celery task on a specific queue and it is routed to the correct queue/worker, or a task running in one queue/worker can schedule a task on a different queue/worker.
Now here comes my question: how do the workers communicate? Do they use an API endpoint on localhost with a port? RPC? Do they use the broker URL? Magic?
I'm asking this because I'm trying to replicate this setup in ECS and I need to know how to set it up for celery.
This guide explains how Celery works on Heroku: https://devcenter.heroku.com/articles/celery-heroku
You can't run Celery on Heroku without getting a Heroku dyno for it. Also, make sure you have Redis configured in your Django Celery settings.
To run Celery on Heroku, just add this line to your Procfile:
worker: celery -A YOUR-PROJECT_NAME worker -l info -B
Note: the above celery command runs both the celery worker and celery beat (because of -B).
If you want to run them separately you can use separate commands, but a single command is recommended.

How to start remote celery workers from django

I'm trying to use django in combination with celery.
Therefore I came across autodiscover_tasks() and I'm not fully sure how to use it. The celery workers get tasks added by other applications (in this case a node backend).
So far I used this to start the worker:
celery worker -Q extraction --hostname=extraction_worker
which works fine.
Now I'm not sure what the general idea of the django-celery integration is. Should workers still be started externally (e.g. with the command above), or should they be managed and started from the Django application?
My celery.py looks like:
import os
from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')

app = Celery('app')
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
then I have 2 apps containing a tasks.py file with:
from celery import shared_task

@shared_task
def extraction(total):
    return 'Task executed'
How can I now get Django to register the worker for those tasks?
You just start the worker process as documented; you don't need to register anything else.
In a production environment you'll want to run the worker in the background as a daemon (see Daemonization), but for testing and development it is useful to be able to start a worker instance by using the celery worker manage command, much as you'd use Django's manage.py runserver:
celery -A proj worker -l info
For a complete listing of the command-line options available, use the
help command:
celery worker --help
The celery worker collects/registers tasks when it starts, and it also consumes the tasks that it has discovered.

Running two celery workers in a server for two django application

I have a server on which two Django applications are running: appone and apptwo.
For them, two celery workers are started with the commands:
celery worker -A appone -B --loglevel=INFO
celery worker -A apptwo -B --loglevel=INFO
Both point to the same BROKER_URL = 'redis://localhost:6379'
Redis is set up with db 0 and 1.
I can see the tasks configured in these two apps in both apps' logs, which is leading to warnings and errors.
Can we configure the Django settings such that the celery workers run exclusively, without interfering with each other's tasks?
You can route tasks to different queues. Start each Celery worker with a different -Q myqueueX, and then use a different CELERY_DEFAULT_QUEUE in each of your two Django projects.
Depending on your Celery configuration, your Django setting should look something like:
CELERY_DEFAULT_QUEUE = 'myqueue1'
You can also have more fine grained control with:
@celery.task(queue="myqueue3")
def some_task(...):
    pass
More options here:
How to keep multiple independent celery queues?
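Putting the answer together: each project pins its own default queue in settings, and each worker is started bound only to that queue. A sketch under the assumption that the queue names are appone_queue/apptwo_queue (the setting name is the Celery 3.x style used in the question):

```python
# appone/settings.py
CELERY_DEFAULT_QUEUE = 'appone_queue'

# apptwo/settings.py
# CELERY_DEFAULT_QUEUE = 'apptwo_queue'

# then start each worker bound only to its own queue:
#   celery worker -A appone -B -Q appone_queue --loglevel=INFO
#   celery worker -A apptwo -B -Q apptwo_queue --loglevel=INFO
```

With distinct queues, each worker only sees messages meant for its own project even though both share the same Redis broker.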

Celery task not received by worker when two projects

My colleague has written the celery tasks, the necessary configuration in the settings file, and also the supervisor config file. Everything is working perfectly fine. The project was handed over to me and I am seeing some issues that I have to fix.
There are two projects running on a single machine; both projects are almost the same, let's call them projA and projB.
supervisord.conf file is as:
;for projA
[program:celeryd]
directory=/path_to_projA/
command=celery -A project worker -l info
...
[program:celerybeat]
directory=/path_to_projA/
command=celery -A project beat -l info
...
; For projB
[program:celerydB]
directory=/path_to_projB/
command=celery -A project worker -l info
...
[program:celerybeatB]
directory=/path_to_projB/
command=celery -A project beat -l info
...
The issue is: I am creating tasks through a loop, and only one task is received by the celeryd of projA; the remaining tasks are not received (or could be received by the celeryd of projB).
But when I stop the celery programs for projB, everything works well. Please note, the actual name of the django-app is project, hence celery -A project worker/beat -l info.
Please bear with me, I am new to celery; any help is appreciated. TIA.
As the Celery docs say,
Celery is an asynchronous task queue/job queue based on distributed message passing.
When multiple tasks are created through a loop, the tasks are evenly distributed to the two different workers, i.e. the worker of projA and the worker of projB, since your workers are identical and consume from the same default queue.
If the projects are similar, or as you mentioned almost the same, you can use Celery queues, but of course your queues across projects should be different.
The Celery docs for this are provided here.
You need to set CELERY_DEFAULT_QUEUE, CELERY_DEFAULT_ROUTING_KEY and CELERY_QUEUES
in your settings.py file.
And your supervisord.conf file needs the queue name on the command line for the worker programs.
For example: command=celery -A project worker -l info -Q <queue_name>
And that should work, based on my experience.
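A sketch of what those three settings might look like for projA — queue and routing-key names are assumptions, and the setting names are the Celery 3.x style used in the answer:

```python
# projA settings.py
from kombu import Queue

CELERY_DEFAULT_QUEUE = 'proja'
CELERY_DEFAULT_ROUTING_KEY = 'proja'
CELERY_QUEUES = (
    Queue('proja', routing_key='proja'),
)
# projB gets the same block with 'projb', and each supervisor worker
# command adds -Q proja / -Q projb, so the two projects stop competing
# for the shared default 'celery' queue
```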

Running celery beat on windows - mission impossible ?

I'm stuck with running celery 3.1.17 on windows 7 (and later on 2013 server) using redis as backend.
In my celery.py file I defined an app with one scheduled task:
from datetime import timedelta
from celery import Celery

app = Celery('myapp',
             backend='redis://localhost',
             broker='redis://localhost',
             include=['tasks'])
app.conf.update(
    CELERYBEAT_SCHEDULE={
        'dumdum': {
            'task': 'tasks.dumdum',
            'schedule': timedelta(seconds=5),
        },
    },
)
The task writes a line to a file:
@app.task
def dumdum():
    with open('c:/src/dumdum.txt', 'w') as f:
        f.write('dumdum actually ran !')
Running the beat service from the command line
(venv) celery beat -A tasks
celery beat v3.1.17 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> redis://localhost:6379/1
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]#%INFO
. maxinterval -> now (0s)
[2015-03-15 10:50:33,265: INFO/MainProcess] beat: Starting...
[2015-03-15 10:50:35,496: INFO/MainProcess] Scheduler: Sending due task dumdum (tasks.dumdum)
[2015-03-15 10:50:40,513: INFO/MainProcess] Scheduler: Sending due task dumdum (tasks.dumdum)
Looks promising, BUT NOTHING HAPPENS. Nothing is being written to the file.
The celery documentation on running beat on windows references this article from 2011. The article explains how to run celeryd as a scheduled task on windows. celeryd has since been deprecated, and the command stated in the article no longer works (there is no celery.bin.celeryd module).
So, what is the solution here?
Thanks.
I used the following command to run celery beat on windows:
python manage.py celery beat
after following these steps for installation:
Run celery beat on windows
It worked perfectly fine for me!
Celery beat and the celery worker cannot run from the same project on Windows, because Celery v4.0 dropped Windows support for the celery worker and celery beat.
One thing you can do: suppose your project name is recommendation_system and your project hierarchy is as below:
recommendation_system
--your_app_dir
--main.py  # or manage.py
--app.py
and you define the scheduler (for beat) and the worker functions in main.py. Then make a copy of this project, let's say named recommendation_system_beat.
Now, to run the workers, go inside the recommendation_system directory and run:
python.exe -m celery -A main worker --pool=solo --concurrency=5 --loglevel=info -n main.%h --queues=recommendation
where the recommendation parameter is the queue name. Set the concurrency according to your needs.
This will run your workers, but beat will not run. To run beat, go to recommendation_system_beat and run:
python.exe -m celery -A main beat --loglevel=info
This will run your beat (scheduler).
So ultimately you need to run the worker and beat from two different copies of the project.
