I've built a small web scraper function that fetches data from the web and populates my database, and on its own it works just fine.
Now I would like to fire this function periodically, every 20 seconds, using Celery periodic tasks.
I walked through the docs and everything seems to be set up for development (using Redis as the broker).
This is my tasks.py file in project/stocksapp, where my periodically fired functions live:
# Celery imports
from celery.task.schedules import crontab
from celery.decorators import periodic_task
from celery.utils.log import get_task_logger
from datetime import timedelta
logger = get_task_logger(__name__)
# periodic functions
@periodic_task(
    run_every=(timedelta(seconds=20)),
    name="getStocksDataDax",
    ignore_result=True
)
def getStocksDataDax():
    print("fired")
Now when I start the worker, the function seems to be fired once and only once (the database gets populated). But after that the function never fires again, even though the beat console output suggests it is still being sent:
C:\Users\Jonas\Desktop\CFD\CFD>celery -A CFD beat -l info
celery beat v4.4.2 (cliffs) is starting.
__ - ... __ - _
LocalTime -> 2020-05-15 23:06:29
Configuration ->
. broker -> redis://localhost:6379/0
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]#%INFO
. maxinterval -> 5.00 minutes (300s)
[2020-05-15 23:06:29,990: INFO/MainProcess] beat: Starting...
[2020-05-15 23:06:30,024: INFO/MainProcess] Scheduler: Sending due task getStocksDataDax (getStocksDataDax)
[2020-05-15 23:06:50,015: INFO/MainProcess] Scheduler: Sending due task getStocksDataDax (getStocksDataDax)
[2020-05-15 23:07:10,015: INFO/MainProcess] Scheduler: Sending due task getStocksDataDax (getStocksDataDax)
[2020-05-15 23:07:30,015: INFO/MainProcess] Scheduler: Sending due task getStocksDataDax (getStocksDataDax)
[2020-05-15 23:07:50,015: INFO/MainProcess] Scheduler: Sending due task getStocksDataDax (getStocksDataDax)
[2020-05-15 23:08:10,016: INFO/MainProcess] Scheduler: Sending due task getStocksDataDax (getStocksDataDax)
[2020-05-15 23:08:30,016: INFO/MainProcess] Scheduler: Sending due task getStocksDataDax (getStocksDataDax)
[2020-05-15 23:08:50,016: INFO/MainProcess] Scheduler: Sending due task getStocksDataDax (getStocksDataDax)
project/project/celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'CFD.settings')
app = Celery('CFD',
             broker='redis://localhost:6379/0',
             backend='amqp://',
             include=['CFD.tasks'])
app.conf.broker_transport_options = {'visibility_timeout': 3600}
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
The function itself takes about 1 second to run in total.
Where could the issue be in this setup that stops the worker/Celery from firing the function every 20 seconds as intended?
celery -A CFD beat -l info only starts the Celery beat process. You should have a separate Celery worker process - in a different terminal run something like celery -A CFD worker -c 8 -O fair -l info.
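As a side note, celery.decorators.periodic_task is a legacy API that newer Celery releases no longer ship. A minimal sketch of the same schedule using a plain @app.task plus app.conf.beat_schedule (assuming the app instance from project/project/celery.py is importable as CFD.celery, and with an illustrative schedule-entry name) could look like:

# CFD/tasks.py
from datetime import timedelta

from CFD.celery import app  # assumption: the Celery app defined in the question's celery.py


@app.task(name="getStocksDataDax", ignore_result=True)
def getStocksDataDax():
    print("fired")


# CFD/celery.py -- register the 20-second schedule with beat
app.conf.beat_schedule = {
    'get-stocks-dax-every-20-seconds': {
        'task': 'getStocksDataDax',
        'schedule': timedelta(seconds=20),
    },
}

With that in place, beat keeps sending the task every 20 seconds and a worker started with celery -A CFD worker -l info actually executes it.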
Related
I want to send a periodic email to a user after they have registered on the platform. I already send one-off emails with Celery and that works fine; now I want Django Celery to send the email periodically. For now I have set the period to 15 seconds. The relevant code is below.
celery.py
app.conf.beat_schedule = {
    'send_mail-every-day-at-4': {
        'task': 'apps.users.usecases.BaseUserUseCase().email_send_task()',
        'schedule': 15
    }
}
My class lives at apps.users.usecases.BaseUserUseCase.email_send_task.
Here is my use case class that sends the email:
class BaseUserUseCase:
    # other code is skipped

    @shared_task
    def email_send_task(self):
        print("done")
        return ConfirmationEmail(context=self.context).send(to=[self.receipent])
How do I call this email_send_task method? Am I doing this right? It is not working, so any help would be appreciated.
To enable class-based tasks:
1. Change your class to inherit from celery.Task.
2. Change your @shared_task method email_send_task() to run().
3. Call Celery.register_task() on an instance of your class. The result is the callable task.
4. Directly call the callable task from step 3 to manually enqueue tasks.
A full example follows under "Register Celery Class-based Task" below.
References:
https://docs.celeryproject.org/en/4.0/whatsnew-4.0.html#the-task-base-class-no-longer-automatically-register-tasks
https://docs.celeryproject.org/en/stable/userguide/application.html#breaking-the-chain
https://docs.celeryproject.org/en/stable/userguide/application.html#abstract-tasks
Register Celery Class-based Task
File structure
.
├── apps
│ └── users
│ └── usecases.py
└── my_proj
├── celery.py
└── settings.py
celery.py
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_proj.settings")  # Only if using Django. Otherwise remove this line.

app = Celery("my_app")
app.conf.update(
    imports=['apps.users.usecases'],
    beat_schedule={
        'send_mail-every-day-at-4': {
            'task': 'apps.users.usecases.BaseUserUseCase',
            'schedule': 5,
        },
    },
)
usecases.py
from celery import Task

from my_proj.celery import app


class BaseUserUseCase(Task):
    def __init__(self, context, recipient):
        self.context = context
        self.recipient = recipient

    def run(self, context=None, recipient=None):  # optional arguments, only if you need to explicitly override them for some calls
        target_context = context or self.context
        target_recipient = recipient or self.recipient
        print(f"Send email with {target_context} to {target_recipient}")


BaseUserUseCaseTask = app.register_task(
    BaseUserUseCase(
        context={'setting': 'default'},
        recipient="default@email.com",
    )
)
Logs (Producer)
$ celery --app=my_proj beat --loglevel=INFO
[2021-08-20 09:50:22,193: INFO/MainProcess] Scheduler: Sending due task send_mail-every-day-at-4 (apps.users.usecases.BaseUserUseCase)
[2021-08-20 09:50:27,181: INFO/MainProcess] Scheduler: Sending due task send_mail-every-day-at-4 (apps.users.usecases.BaseUserUseCase)
Logs (Consumer)
$ celery --app=my_proj worker --queues=celery --loglevel=INFO
[tasks]
. apps.users.usecases.BaseUserUseCase
[2021-08-20 09:50:22,206: INFO/MainProcess] Task apps.users.usecases.BaseUserUseCase[3bce46e8-98c0-410e-a156-83e293ba6337] received
[2021-08-20 09:50:22,207: WARNING/ForkPoolWorker-4] Send email with {'setting': 'default'} to default@email.com
[2021-08-20 09:50:22,207: WARNING/ForkPoolWorker-4]
[2021-08-20 09:50:22,207: INFO/ForkPoolWorker-4] Task apps.users.usecases.BaseUserUseCase[3bce46e8-98c0-410e-a156-83e293ba6337] succeeded in 0.0002498120002201176s: None
[2021-08-20 09:50:27,183: INFO/MainProcess] Task apps.users.usecases.BaseUserUseCase[e1d2de2a-3e7d-4253-9641-41fd328a17ce] received
[2021-08-20 09:50:27,183: WARNING/ForkPoolWorker-4] Send email with {'setting': 'default'} to default@email.com
[2021-08-20 09:50:27,184: WARNING/ForkPoolWorker-4]
[2021-08-20 09:50:27,184: INFO/ForkPoolWorker-4] Task apps.users.usecases.BaseUserUseCase[e1d2de2a-3e7d-4253-9641-41fd328a17ce] succeeded in 0.0001804589992389083s: None
If you want to change the context and recipient for the scheduled task:
celery.py
Just change the following lines.
...
beat_schedule={
    'send_mail-every-day-at-4': {
        'task': 'apps.users.usecases.BaseUserUseCase',
        'schedule': 5,
        'kwargs': {
            'context': {'setting': 'custom'},
            'recipient': "custom@email.com",
        },
    },
},
...
Logs (Consumer):
[2021-08-20 09:54:36,530: INFO/MainProcess] Task apps.users.usecases.BaseUserUseCase[6b69b79b-6764-4d83-ad24-9b0723dd8c79] received
[2021-08-20 09:54:36,531: WARNING/ForkPoolWorker-4] Send email with {'setting': 'custom'} to custom@email.com
[2021-08-20 09:54:36,531: WARNING/ForkPoolWorker-4]
[2021-08-20 09:54:36,532: INFO/ForkPoolWorker-4] Task apps.users.usecases.BaseUserUseCase[6b69b79b-6764-4d83-ad24-9b0723dd8c79] succeeded in 0.0012146830003985087s: None
[2021-08-20 09:54:41,498: INFO/MainProcess] Task apps.users.usecases.BaseUserUseCase[a20d34e7-3214-4130-aba2-5544238096d0] received
[2021-08-20 09:54:41,499: WARNING/ForkPoolWorker-4] Send email with {'setting': 'custom'} to custom@email.com
[2021-08-20 09:54:41,499: WARNING/ForkPoolWorker-4]
[2021-08-20 09:54:41,500: INFO/ForkPoolWorker-4] Task apps.users.usecases.BaseUserUseCase[a20d34e7-3214-4130-aba2-5544238096d0] succeeded in 0.00047696000001451466s: None
If you intend to call the task manually e.g. from one of the Django views:
>>> from apps.users.usecases import BaseUserUseCaseTask
>>> BaseUserUseCaseTask.apply_async()
<AsyncResult: fd347270-59b8-4cec-8772-cf82a79e60df>
>>> BaseUserUseCaseTask.apply_async(kwargs={'context': {'setting': 'manual'}, 'recipient': "manual@email.com"})
<AsyncResult: 13d9df7e-e1c4-4f50-847f-72a413404c82>
Logs (Consumer):
[2021-08-20 09:57:19,324: INFO/MainProcess] Task apps.users.usecases.BaseUserUseCase[fd347270-59b8-4cec-8772-cf82a79e60df] received
[2021-08-20 09:57:19,324: WARNING/ForkPoolWorker-4] Send email with {'setting': 'default'} to default@email.com
[2021-08-20 09:57:19,324: WARNING/ForkPoolWorker-4]
[2021-08-20 09:57:19,325: INFO/ForkPoolWorker-4] Task apps.users.usecases.BaseUserUseCase[fd347270-59b8-4cec-8772-cf82a79e60df] succeeded in 0.00026283199986210093s: None
[2021-08-20 12:35:29,238: INFO/MainProcess] Task apps.users.usecases.BaseUserUseCase[13d9df7e-e1c4-4f50-847f-72a413404c82] received
[2021-08-20 12:35:29,240: WARNING/ForkPoolWorker-4] Send email with {'setting': 'manual'} to manual@email.com
[2021-08-20 12:35:29,240: WARNING/ForkPoolWorker-4]
[2021-08-20 12:35:29,240: INFO/ForkPoolWorker-4] Task apps.users.usecases.BaseUserUseCase[13d9df7e-e1c4-4f50-847f-72a413404c82] succeeded in 0.000784056000156852s: None
I am not 100% sure, but the following should work for you.
Function class:
class BaseUserUseCase:
    def __init__(self):
        self.context = {}
        self.receipent = ""
Add a tasks.py file to your app folder:
import datetime
import logging

from rest_framework import status

from path_to.celery import app
from path_to import BaseUserUseCase


@app.task
def email_send_task():
    usecase = BaseUserUseCase()
    context = usecase.context
    receipent = usecase.receipent
    print("done")
    return ConfirmationEmail(context=context).send(to=[receipent])
Configure a celery.py file:
from __future__ import absolute_import, unicode_literals
import os

from celery import Celery
from celery.schedules import crontab

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery('project_name')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

# add the beat schedule
app.conf.beat_schedule = {
    'send-email-celery-task': {
        'task': 'app_name.tasks.email_send_task',
        'schedule': crontab(hour=0, minute=1)
    },
}
Could you please try and let me know if it works for you?
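One thing to double-check in the schedule above: crontab(hour=0, minute=1) fires once a day at 00:01. For the 15-second interval mentioned in the question, a plain number of seconds (as in the question's own beat_schedule snippet) is what you want, for example:

app.conf.beat_schedule = {
    'send-email-celery-task': {
        'task': 'app_name.tasks.email_send_task',
        'schedule': 15.0,  # run every 15 seconds
    },
}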
My django-celery code does not seem to reload: I keep seeing an error for a task that was supposedly already resolved. Can anyone tell me how to properly restart my Celery processes, or does the problem still exist?
I am running on Windows 10, by the way.
file structure
|-- manage.py
|-- nttracker\
|-- celery.py
|-- tasks.py
|-- settings.py
I have not added any separate configuration files yet.
nttracker/celery.py
import os

from celery import Celery
from celery.schedules import crontab

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'nttracker.settings')

postgres_broker = 'sqla+postgresql://user:pass@host/name'

app = Celery('nttracker', broker='amqp://', backend='rpc://', include=['nttracker.tasks'])
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

app.conf.update(
    result_expires=3600,
)

app.conf.beat_schedule = {
    'add-every-10-seconds': {
        'task': 'nttracker.tasks.add',
        'schedule': 10.0,
        'args': (16, 16)
    },
}

if __name__ == '__main__':
    app.start()
nttracker/tasks.py
from __future__ import absolute_import

import django
django.setup()

from celery import Celery
from celery.schedules import crontab

app = Celery()


@app.task
def add(x, y):
    print(x + y)
nttracker/settings.py
# Celery Configuration Options
CELERY_TIMEZONE = "valid/timezone"
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
# celery setting.
CELERY_CACHE_BACKEND = 'default'
# django setting.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'my_cache_table',
    }
}
terminal one output (celery -A nttracker worker --pool=solo -l INFO)
[2021-06-04 20:03:54,409: INFO/MainProcess] Received task: nttracker.tasks.add[4f9e0e15-82de-4cdb-be84-d3690ebe142e]
[2021-06-04 20:03:54,411: WARNING/MainProcess] 32
[2021-06-04 20:03:54,494: INFO/MainProcess] Task nttracker.tasks.add[4f9e0e15-82de-4cdb-be84-d3690ebe142e] succeeded in 0.09399999999732245s:
None
[2021-06-04 20:04:04,451: INFO/MainProcess] Received task: nttracker.tasks.add[da9c8999-3937-44fd-8d4b-15ff83977a4b]
[2021-06-04 20:04:04,452: WARNING/MainProcess] 32
[2021-06-04 20:04:04,529: INFO/MainProcess] Task nttracker.tasks.add[da9c8999-3937-44fd-8d4b-15ff83977a4b] succeeded in 0.07800000000861473s:
None
[2021-06-04 20:04:14,497: INFO/MainProcess] Received task: nttracker.tasks.add[c82b5099-e1dd-4f7b-a068-8041268571d1]
[2021-06-04 20:04:14,498: WARNING/MainProcess] 32
[2021-06-04 20:04:14,568: INFO/MainProcess] Task nttracker.tasks.add[c82b5099-e1dd-4f7b-a068-8041268571d1] succeeded in 0.0629999999946449s: None
[2021-06-04 20:04:23,187: ERROR/MainProcess] Received unregistered task of type 'tasks.add'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
Please see
http://docs.celeryq.org/en/latest/internals/protocol.html
for more information.
The full contents of the message body was:
b'[[16, 16], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (83b)
Traceback (most recent call last):
File "c:\users\xxxx\onedrive\desktop\github_new\nttracker\venv\lib\site-packages\celery\worker\consumer\consumer.py", line 555, in on_task_received
strategy = strategies[type_]
KeyError: 'tasks.add'
[2021-06-04 20:04:24,544: INFO/MainProcess] Received task: nttracker.tasks.add[66f050ac-b17d-4a7c-9bc6-564cc1d84ae1]
[2021-06-04 20:04:24,545: WARNING/MainProcess] 32
[2021-06-04 20:04:24,620: INFO/MainProcess] Task nttracker.tasks.add[66f050ac-b17d-4a7c-9bc6-564cc1d84ae1] succeeded in 0.07799999999406282s:
None
terminal two output (celery -A nttracker beat -S django)
celery beat v5.0.5 (singularity) is starting.
__ - ... __ - _
LocalTime -> 2021-06-04 19:58:21
Configuration ->
. broker -> redis://127.0.0.1:6379//
. loader -> celery.loaders.app.AppLoader
. scheduler -> django_celery_beat.schedulers.DatabaseScheduler
. logfile -> [stderr]#%WARNING
. maxinterval -> 5.00 seconds (5s)
I'd like to stress that, within this 30-second interval, 32 was printed three times (from add(16, 16)), but there is also one tasks.add error.
I've tried restarting my Redis server and Celery's worker and beat, but the initial import error is still not resolved.
Can anyone please help? Many thanks in advance.
Import your tasks module in your project's settings.py file, e.g. for the layout above:
CELERY_IMPORTS = (
    "nttracker.tasks",
)
How do I prevent duplicate celery logs in an application like this?
# test.py
from celery import Celery
import logging
import logging.handlers

app = Celery('tasks', broker='redis://localhost:6379/0')

app.logger = logging.getLogger("new_logger")
file_handler = logging.handlers.RotatingFileHandler("app.log", maxBytes=1024*1024, backupCount=1)
file_handler.setFormatter(logging.Formatter('custom_format %(message)s'))
app.logger.addHandler(file_handler)


@app.task
def foo(x, y):
    app.logger.info("log info from foo")
I start the application with: celery -A test worker --loglevel=info --logfile celery.log
Then I cause foo to be run with python -c "from test import foo; print foo.delay(4, 4)"
This results in the "log info from foo" being displayed in both celery.log and app.log.
Here is app.log contents:
custom_format log info from foo
And here is celery.log contents:
[2017-07-26 21:17:24,962: INFO/MainProcess] Connected to redis://localhost:6379/0
[2017-07-26 21:17:24,967: INFO/MainProcess] mingle: searching for neighbors
[2017-07-26 21:17:25,979: INFO/MainProcess] mingle: all alone
[2017-07-26 21:17:25,991: INFO/MainProcess] celery#jd-t430 ready.
[2017-07-26 21:17:38,224: INFO/MainProcess] Received task: test.foo[e2c5e6aa-0d2d-4a16-978c-388a5e3cf162]
[2017-07-26 21:17:38,225: INFO/ForkPoolWorker-4] log info from foo
[2017-07-26 21:17:38,226: INFO/ForkPoolWorker-4] Task test.foo[e2c5e6aa-0d2d-4a16-978c-388a5e3cf162] succeeded in 0.000783085000876s: None
I considered removing the custom logger handler from the Python code, but I don't want to rely only on celery.log because it doesn't support rotating files. I also considered starting Celery with --logfile /dev/null, but then I would lose the mingle and other logs that don't show up in app.log.
Can I prevent "log info from foo" from showing up in celery.log? Given that I created the logger from scratch and only setup logging to app.log why is "log info from foo" showing up in celery.log anyway?
Is it possible to get the celery MainProcess and Worker logs (e.g. Connected to redis://localhost:6379/0) to be logged by a RotatingFileHandler (e.g. go in my app.log)?
Why is "log info from foo" showing up in celery.log?
The logging system is basically a tree of logging.Logger objects, with the main logging.Logger at the root of the tree (you get the root by calling logging.getLogger() without parameters).
When you call logging.getLogger("child") you get a reference to the logging.Logger that processes the "child" logs. The problem is that when you call logging.getLogger("child").info(), the message is delivered to "child" but also to its parent, and to that parent's parent, until it reaches the root.
To avoid sending logs to the parent you have to set logging.getLogger("child").propagate = False.
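A minimal sketch of that fix, reusing the handler setup and the "new_logger" name from the question:

import logging
import logging.handlers

logger = logging.getLogger("new_logger")
logger.setLevel(logging.INFO)
logger.propagate = False  # stop records from bubbling up to the root logger

file_handler = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1024 * 1024, backupCount=1)
file_handler.setFormatter(logging.Formatter('custom_format %(message)s'))
logger.addHandler(file_handler)

logger.info("log info from foo")  # written to app.log only

With propagate = False the record never reaches the handlers Celery attached for --logfile celery.log, so the duplicate line disappears.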
I use Celery to make requests to a server (in tasks). I have a hard limit: only 1 request per second (from one IP).
I read this, so it's what I want: 1/s.
In celeryconfig.py I have:
CELERY_DISABLE_RATE_LIMITS = False
CELERY_DEFAULT_RATE_LIMIT = "1/s"
But I still get messages saying that I make too many requests per second.
In call.py I use groups.
I think rate_limits does not work because I have a mistake in celeryconfig.py.
How to fix that? Thanks!
When you start a celery worker with
celery -A your_app worker -l info
the default concurrency is equal to the number of cores your machine has. So even though you set a rate limit of '1/s', it tries to process multiple tasks concurrently.
Also, setting a default rate limit in the Celery config is a bad idea: right now you have only one task, but if you add new tasks to your app, the rate limits will affect each other.
A simple way to achieve one task per second is this.
tasks.py
import time

from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://guest@localhost//')


@app.task()
def task1():
    time.sleep(1)
    return 'task1'
Now start your worker with a concurrency of ONE:
celery -A tasks worker -l info -c 1
This will execute only one task per second. Here is my log with the above code.
[2014-10-13 19:27:41,158: INFO/MainProcess] Received task: task1[209008d6-bb9d-4ce0-80d4-9b6c068b770e]
[2014-10-13 19:27:41,161: INFO/MainProcess] Received task: task1[83dc18e0-22ec-4b2d-940a-8b62006e31cd]
[2014-10-13 19:27:41,168: INFO/MainProcess] Received task: task1[e1b25558-0bb2-405a-8009-a7b58bbfa4e1]
[2014-10-13 19:27:41,171: INFO/MainProcess] Received task: task1[2d864be0-c969-4c52-8a57-31dbd11eb2d8]
[2014-10-13 19:27:42,335: INFO/MainProcess] Task task1[209008d6-bb9d-4ce0-80d4-9b6c068b770e] succeeded in 1.170940883s: 'task1'
[2014-10-13 19:27:43,457: INFO/MainProcess] Task task1[83dc18e0-22ec-4b2d-940a-8b62006e31cd] succeeded in 1.119711205s: 'task1'
[2014-10-13 19:27:44,605: INFO/MainProcess] Task task1[e1b25558-0bb2-405a-8009-a7b58bbfa4e1] succeeded in 1.1454614s: 'task1'
[2014-10-13 19:27:45,726: INFO/MainProcess] Task task1[2d864be0-c969-4c52-8a57-31dbd11eb2d8] succeeded in 1.119111023s: 'task1'
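Alternatively, if the requirement is specifically "at most one call of this task per second", Celery also supports a per-task rate limit, which avoids the global CELERY_DEFAULT_RATE_LIMIT problem mentioned above. A rough sketch (same toy app as above; note that rate limits are enforced per worker instance, not cluster-wide):

from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://guest@localhost//')


@app.task(rate_limit='1/s')  # the worker will not start more than one task1 per second
def task1():
    return 'task1'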
I am starting celery via supervisord, see the entry below.
[program:celery]
user = foobar
autostart = true
autorestart = true
directory = /opt/src/slicephone/cloud
command = /opt/virtenvs/django_slice/bin/celery beat --app=cloud -l DEBUG -s /home/foobar/run/celerybeat-schedule --pidfile=/home/foobar/run/celerybeat.pid
priority = 100
stdout_logfile_backups = 0
stderr_logfile_backups = 0
stdout_logfile_maxbytes = 10MB
stderr_logfile_maxbytes = 10MB
stdout_logfile = /opt/logs/celery.stdout.log
stderr_logfile = /opt/logs/celery.stderr.log
pip freeze | grep celery
celery==3.1.0
But any usage of:
@celery.task
def test_rabbit_running():
    import logging
    from celery.utils.log import get_task_logger

    logger = get_task_logger(__name__)
    logger.setLevel(logging.DEBUG)
    logger.info("foobar")
doesn't show up in the logs. Instead I get entries like the following.
celery.stdout.log
celery beat v3.1.0 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> redis://localhost:6379//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> /home/foobar/run/celerybeat-schedule
. logfile -> [stderr]#%DEBUG
. maxinterval -> now (0s)
celery.stderr.log
[2013-11-12 05:42:39,539: DEBUG/MainProcess] beat: Waking up in 2.00 seconds.
INFO Scheduler: Sending due task test_rabbit_running (retail.tasks.test_rabbit_running)
[2013-11-12 05:42:41,547: INFO/MainProcess] Scheduler: Sending due task test_rabbit_running (retail.tasks.test_rabbit_running)
DEBUG retail.tasks.test_rabbit_running sent. id->34268340-6ffd-44d0-8e61-475a83ab3481
[2013-11-12 05:42:41,550: DEBUG/MainProcess] retail.tasks.test_rabbit_running sent. id->34268340-6ffd-44d0-8e61-475a83ab3481
DEBUG beat: Waking up in 6.00 seconds.
What do I have to do to make my logging calls appear in the log files?
It doesn't log anything because beat doesn't execute any tasks (and that is expected; beat only schedules them).
See also Celerybeat not executing periodic tasks
I'd try to put the logging call inside a task that actually runs on a worker (as the name of the utility function, get_task_logger, implies), or just start with a simple print, or set up your own logging as suggested in Django Celery Logging Best Practice (the best way to go, IMO).
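As a rough sketch of that first suggestion (the module path retail/tasks.py comes from the log output; importing the app from cloud.celery is an assumption based on the --app=cloud flag):

# retail/tasks.py
import logging

from celery.utils.log import get_task_logger

from cloud.celery import app  # assumption: this is where the --app=cloud instance lives

logger = get_task_logger(__name__)
logger.setLevel(logging.DEBUG)


@app.task
def test_rabbit_running():
    logger.info("foobar")  # appears in the worker's log, never in beat's log

Remember that a worker process (e.g. celery worker --app=cloud -l DEBUG) has to be running alongside beat for the task body, and therefore this log line, to execute at all.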