Celery tasks don't work - python

The Celery docs say that Celery 3.1 can work with Django out of the box, but my tasks are not running. I have tasks.py:
from celery import task
from datetime import timedelta

@task.periodic_task(run_every=timedelta(seconds=20), ignore_result=True)
def disable_not_confirmed_users():
    print "start"
Configs:
from kombu import Exchange, Queue
CELERY_SEND_TASK_ERROR_EMAILS = True
BROKER_URL = 'amqp://guest@localhost//'
CELERY_DEFAULT_QUEUE = 'project-queue'
CELERY_DEFAULT_EXCHANGE = 'project-queue'
CELERY_DEFAULT_ROUTING_KEY = 'project-queue'
CELERY_QUEUES = (
    Queue('project-queue', Exchange('project-queue'), routing_key='project-queue'),
)
project/celery.py
from __future__ import absolute_import
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
from django.conf import settings
app = Celery('project')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Run celery: celery -A project worker --loglevel=INFO
But nothing happens.

You should use celery beat to run periodic tasks.
celery -A project worker --loglevel=INFO
starts the worker, which does the actual work.
celery -A proj beat
starts the beat service, which asks the worker to do the job.
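For example, with Celery 3.1 the 20-second schedule can also be declared in the Django settings via CELERYBEAT_SCHEDULE (a sketch; the dotted task name below is an assumption and should match whatever the worker prints under [tasks]):
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'disable-not-confirmed-users': {
        # Assumed task name; check the [tasks] section of the worker output.
        'task': 'yourapp.tasks.disable_not_confirmed_users',
        'schedule': timedelta(seconds=20),
    },
}
Beat then runs alongside the worker, e.g. celery -A project beat, or celery -A project worker -B --loglevel=INFO to embed it during development.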

Related

Celery jobs not running on heroku (python/django app)

I have a Django app setup with some scheduled tasks. The app is deployed on Heroku with Redis. The task runs if invoked synchronously in the console, or locally when I also have redis and celery running. However, the scheduled jobs are not running on Heroku.
My task:
from celery import shared_task

@shared_task(name="send_emails")
def send_emails():
    .....
celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from celery.schedules import crontab
# set the default Django settings module for the 'celery' program.
# this is also used in manage.py
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')
# Get the base REDIS URL, default to redis' default
BASE_REDIS_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379')
app = Celery('my_app')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
app.conf.broker_url = BASE_REDIS_URL
# this allows you to schedule items in the Django admin.
app.conf.beat_scheduler = 'django_celery_beat.schedulers.DatabaseScheduler'
# These are the scheduled jobs
app.conf.beat_schedule = {
    'send_emails_crontab': {
        'task': 'send_emails',
        'schedule': crontab(hour=9, minute=0),
        'args': (),
    }
}
In Procfile:
worker: celery -A my_app worker --beat -S django -l info
I've spun up the worker with heroku ps:scale worker=1 -a my-app.
I can see the registered tasks under [tasks] in the worker logs.
However, the scheduled tasks are not running at their scheduled time. Calling send_emails.delay() in the production console does work.
How do I get the worker to stay alive and / or run the job at the scheduled time?
I have a workaround using a command and Heroku Scheduler; I'm just unsure if that's the best way to do it.
If you're on the free tier, you should know that Heroku dynos sleep, and if your scheduled task becomes due while your dyno is sleeping, it won't run.
Let me share some ideas.
Run a console and check the datetime of the dyno; the dyno uses a US local time.
A free dyno sleeps after 30 minutes of inactivity and only gets 450 hours/month.
Try switching from Celery to APScheduler (the example below uses a BlockingScheduler);
you need to add a clock.py script like this:
import os

from apscheduler.schedulers.blocking import BlockingScheduler

from myapp import myfunction

sched = BlockingScheduler()

hour = int(os.environ.get("SEARCH_HOUR"))
minutes = int(os.environ.get("SEARCH_MINUTES"))

@sched.scheduled_job('cron', day_of_week='mon-sun', hour=hour, minute=minutes)
def scheduled_job():
    print('This job: execute myfunction every day at', hour, ':', minutes)
    # My function
    myfunction()

sched.start()
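One small optional tweak (my assumption, not part of the original answer): int(os.environ.get(...)) raises a TypeError if SEARCH_HOUR or SEARCH_MINUTES is not set on the dyno, so defaults make the clock process a bit more forgiving:
# Hypothetical defaults; pick whatever schedule you actually want.
hour = int(os.environ.get("SEARCH_HOUR", "9"))
minutes = int(os.environ.get("SEARCH_MINUTES", "0"))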
In Procfile:
clock: python clock.py
and run:
heroku ps:scale clock=1 --app thenameapp
Regards.

celery task not sent or executed

I'm new to Celery; I was following tutorials and set up Celery with Docker.
I'm having an issue with sending and executing a Celery task.
I have 4 Docker containers: one for the RabbitMQ server, one for the Celery producer, and 2 workers.
Celery tasks file:
"""
CELERY MAIN FILE
"""
from celery import Celery
from time import sleep
celery_obj = Celery()
celery_obj.config_from_object('celery_config')  # config file we created in the same folder

@celery_obj.task
def add(num1, num2):
    print("executing add function")
    sleep(5)
    return num1 + num2
My celery config file for Producer:
"""
CELERY CONFIGURATION FILE
"""
from kombu import Exchange, Queue
broker_url = "pyamqp://rabbitmq_user:123#172.17.0.2/res_opt_rabbitmq_vhost"
result_backend = 'rpc://'
#celery_result_backend = ""
celery_imports = ('res_opt_code.tasks')
task_queues = (
Queue('worker_A_kombu_queue',Exchange('celery',type='direct'),routing_key='worker_A_rabbitmq_queue'),
Queue('worker_B_kombu_queue',Exchange('celery',type='direct'),routing_key='worker_B_rabbitmq_queue')
)
Config file for worker_A:
"""
CELERY CONFIGURATION FILE
"""
from kombu import Exchange, Queue
broker_url = "pyamqp://rabbitmq_user:123#172.17.0.2/res_opt_rabbitmq_vhost"
result_backend = 'rpc://'
#celery_result_backend = ""
celery_imports = ('worker_code.tasks')
task_queues = (
Queue('worker_A_kombu_queue',Exchange('celery',type='direct'),routing_key='worker_A_rabbitmq_queue'),
Queue('worker_B_kombu_queue',Exchange('celery',type='direct'),routing_key='worker_B_rabbitmq_queue')
)
Command for starting Celery on the producer:
celery -A tasks worker --loglevel=DEBUG -f log_file.txt
Command for starting Celery on the worker:
celery -A tasks worker -n celery_worker_A -Q worker_A_kombu_queue --loglevel=DEBUG
Function call from producer:
from tasks import add
add.apply_async([4,4],routing_key='worker_A_rabbitmq_queue')
# also tried executing the task locally, but there are no logs for the function and the result stays pending
add.delay(4,4)
Could you guys please help me with what I'm doing wrong here?
In the logs I'm able to see that worker_A is connected, but there are no logs for the function.
I did some further troubleshooting and changed the argument in apply_async from routing_key to queue, and it is working with the queue argument.
I was following this tutorial:
https://www.youtube.com/watch?v=TM1a3m65zaA
old:
add.apply_async([4,4],routing_key='worker_A_rabbitmq_queue')
new:
add.apply_async([4,4],queue='worker_A_rabbitmq_queue')
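A small sketch of an alternative (my assumption, not something from the original post): declaring task_routes in the producer's celery_config lets a plain add.delay(4, 4) land on worker A's queue without passing queue= or routing_key= at every call site:
# Hypothetical addition to celery_config.py on the producer side.
# 'tasks.add' is the default name Celery gives to add() defined in tasks.py.
task_routes = {
    'tasks.add': {'queue': 'worker_A_kombu_queue'},
}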

Celery Task not getting assigned through redis

Using Celery/Redis I tried creating a task, but on checking the Celery worker info with the command below,
celery -A intranet_project worker -l info
I am unable to see the task added there.
Settings.py
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
__init__.py
from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ('celery_app',)
celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'intranet_project.settings')
app = Celery('intranet_project')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
my_task.py
from celery.decorators import task
from celery import shared_task
@shared_task
def add(a, b):
    d = a + b
    return d
Below is the server log
[tasks]
. intranet_project.celery.debug_task
[2020-02-26 19:38:59,051: INFO/MainProcess] Connected to redis://localhost:6379//
[2020-02-26 19:38:59,160: INFO/MainProcess] mingle: searching for neighbors
[2020-02-26 19:39:00,379: INFO/MainProcess] mingle: all alone
I don't see the code where you actually execute your task.
view.py (or wherever you want to trigger the task from)
from .my_tasks import add

def action(request):
    add.delay(2, 2)
You should then see a line like this appear in your celery log:
[2020-02-26 14:11:40,765: INFO/MainProcess] Received task: intranet_project.tasks.add[xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx]
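To confirm the wiring outside of a view, you can also fire the task from the Django shell; the import path below is a guess based on the file name my_task.py, so adjust it to your project layout:
# Run inside `python manage.py shell` while the worker is running.
from intranet_project.my_task import add   # hypothetical import path

result = add.delay(2, 2)
print(result.id)               # this id should appear in the worker log
print(result.get(timeout=10))  # 4, once a worker has picked the task up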

Reload celery beat config

I'm using celery and celery-beat without Django, and I have a task which needs to modify the celery-beat schedule when it runs.
Now I have the following code (module called celery_tasks):
# __init__.py
from .celery import app as celery_app
__all__ = ['celery_app']
# celery.py
from celery import Celery
import config
celery_config = config.get_celery_config()
app = Celery(
    __name__,
    include=[
        'celery_tasks.tasks',
    ],
)
app.conf.update(celery_config)
# tasks.py
from celery_tasks import celery_app
from celery import shared_task
@shared_task
def start_game():
    celery_app.conf.beat_schedule = {
        'process_round': {
            'task': 'celery_tasks.tasks.process_round',
            'schedule': 5,
        },
    }
I start celery with the following command:
celery worker -A celery_tasks -E -l info --beat
start_game executes and exits normally, but the beat process_round task never runs.
How can I force-reload the beat schedule (restarting all workers doesn't seem like a good idea)?
The problem with the default celery beat scheduler is that when you start the celerybeat process, it creates a schedule file and writes all tasks and schedules into that file, so the schedule cannot be changed dynamically.
You can use the celerybeat-sqlalchemy-scheduler package so you can edit the schedule in the database itself, and celerybeat will pick up the new schedule from the DB.
There is also another package, celery-redbeat, which uses a Redis server as the backend.
You can refer to this as well.
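For example, a minimal sketch of pointing beat at celery-redbeat (the Redis URL is an assumption; any reachable Redis instance works):
# Hypothetical snippet for celery_tasks/celery.py: with RedBeat the schedule
# entries live in Redis and can be changed while beat is running.
app.conf.redbeat_redis_url = 'redis://localhost:6379/1'  # assumed Redis URL
app.conf.beat_scheduler = 'redbeat.RedBeatScheduler'
# Then start beat with that scheduler, e.g.:
#   celery worker -A celery_tasks -E -l info --beat -S redbeat.RedBeatScheduler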
Using the schedule config also seems like a bad idea. What if the process_round task is active from the start and just checks whether the game has started, doing nothing if it hasn't?

Django celery beat task not working

celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'TwitterApiProxy.settings')
app = Celery('TwitterApiProxy')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls hello_test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, hello_test.s('hello'), name='add every 10')

@app.task
def hello_test(arg):
    print(arg)
settings.py
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'America/Los_Angeles'
I want to print "hello" every 10 seconds. So on running celery -A TwitterApiProxy beat in my terminal, I see the output below:
LocalTime -> 2018-04-06 23:27:09
Configuration ->
. broker -> redis://localhost:6379//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@%WARNING
. maxinterval -> 5.00 minutes (300s)
It did not print anything related to the task that I scheduled. Where did I go wrong?
Nothing is wrong with your setup.
Start your worker and celery beat in two separate terminal windows:
celery -A TwitterApiProxy worker -l info
celery -A TwitterApiProxy beat -l info
If you are using Celery 4.0+ on Windows, you have to install eventlet first, then start your worker like this:
celery -A TwitterApiProxy worker -l info -P eventlet
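As a cross-check (a sketch, not part of the original answer), the same 10-second job can also be written as a static entry in settings.py, since celery.py calls app.config_from_object('django.conf:settings', namespace='CELERY'); the dotted task name is an assumption based on where @app.task is applied, so compare it with the [tasks] list the worker prints:
CELERY_BEAT_SCHEDULE = {
    'hello-every-10-seconds': {
        'task': 'TwitterApiProxy.celery.hello_test',  # assumed registered name
        'schedule': 10.0,
        'args': ('hello',),
    },
}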
Task Admin Backend
If you want a task admin backend, you can install and use django-celery
