I am trying to write a custom django-admin command that executes a Celery task, however the task doesn't execute and Django just hangs when I try.
from django.core.management.base import BaseCommand

from myapp.tasks import my_celery_task


class Command(BaseCommand):

    def handle(self, *args, **options):
        print "starting task"
        my_celery_task.delay()
        print "task has been sent"
The output I receive when calling the command is:
starting task
I never reach the "task has been sent" line. It just hangs, and I'm not sure why the task isn't running. Celery tasks work perfectly when called from a view.
This seems to work for me with Celery 5.1.2, but not if the task is a shared_task.
====== tasks.py
from celery import current_app
from celery.schedules import crontab

app = current_app._get_current_object()

@app.task
def my_celery_task():
    # Do something interesting
    return 0  # Return the result of this operation
====== my_command.py
from accounts.tasks import my_celery_task
...
def handle(self, *args, **options):
    self.stdout.write("Queueing request")
    result = my_celery_task.delay()
    self.stdout.write("Waiting for task to end")
    # Note: result.wait() needs a result backend configured to retrieve the return value
    updated = result.wait(timeout=120.0)
    self.stdout.write(f"Returned {updated}")
And if I put the task in the celery.py file, I ended up with a deadlock that was a pain to debug.
The issue was actually with RabbitMQ on Mac after upgrading to High Sierra.
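If you suspect the broker rather than your code, a quick sanity check (assuming RabbitMQ was installed through Homebrew, which the answer does not state) is to restart it and confirm the node reports as running:
brew services restart rabbitmq
rabbitmqctl status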
I have a scenario where I want to trigger an event anytime a new task is created or finished. I'm currently trying to use Celery task signals to do so, but can't figure out how to trigger a certain socket event from within the task signal functions.
from .extensions import celery, socketio
from celery.signals import task_prerun, task_postrun

@celery.task
def myTask():
    """ do some things """
    return {"things"}

@task_prerun.connect(sender=myTask)
def task_prerun_notifier(sender=None, **kwargs):
    print("From task_prerun_notifier ==> Running just before add() executes")
    socketio.emit("trigger me")  # doesn't work

@task_postrun.connect(sender=myTask)
def task_postrun_notifier(sender=None, **kwargs):
    print("From task_postrun_notifier ==> Ok, done!")
    socketio.emit("trigger me")  # doesn't work
The task signals run at the correct time, but I am not able to trigger the trigger me event. I have also tried plain emit("trigger me") instead of socketio.emit("trigger me"), but without any luck.
How do you execute emits from within a Celery task signal?
I have a FastAPI app from which I want to call a Celery task.
I cannot import the task, as the two live in different code bases, so I have to call it by name.
In tasks.py:
import os
from typing import Any, Dict

from celery import Celery

imagery = Celery(
    "imagery", broker=os.getenv("BROKER_URL"), backend=os.getenv("REDIS_URL")
)
...

@imagery.task(bind=True, name="filter")
def filter_task(self, **kwargs) -> Dict[str, Any]:
    print('running task')
The celery worker is running with this command:
celery worker -A worker.imagery -P threads --loglevel=INFO --queues=imagery
Now in my FastAPI code base I want to run the filter task.
So my understanding is that I have to use the celery.send_task() function.
In app.py I have:
import os

from celery import Celery, states
from celery.execute import send_task
from fastapi import FastAPI
from starlette.responses import JSONResponse, PlainTextResponse

from app import models

app = FastAPI()
tasks = Celery(broker=os.getenv("BROKER_URL"), backend=os.getenv("REDIS_URL"))

@app.post("/filter", status_code=201)
async def upload_images(data: models.FilterProductsModel):
    """
    TODO: use a celery task(s) to query the database and upload the results to S3
    """
    data = ['ok', 'un test']
    result = tasks.send_task('workers.imagery.filter', args=list(data))
    return PlainTextResponse(f"here is the id: {str(result.ready())}")
After calling the /filter endpoint, I don't see any task being picked up by the worker.
So I tried different names in send_task():
filter
imagery.filter
worker.imagery.filter
How come my task never gets picked up by the worker and nothing shows in the log?
Is my task name wrong?
Edit:
The worker process runs in Docker. Here is the full path of the file on its disk.
tasks.py : /workers/worker.py
So if I follow the import schema, the name of the task would be workers.worker.filter, but this does not work; nothing gets printed in the Docker logs. Is a print supposed to appear in the STDOUT of the Celery CLI?
Your Celery worker is subscribed to the imagery queue only. On the other hand, you try to send the task to the default queue (if you did not change the configuration, the name of that queue is celery) with result = tasks.send_task('workers.imagery.filter', args=list(data)). It is not surprising that you do not see the task being executed by your worker, as you have been sending tasks to the default queue the whole time.
To fix this, try the following:
result = tasks.send_task('workers.imagery.filter', args=list(data), queue='imagery')
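Alternatively, you could leave send_task() as it is and have the worker consume from the default queue as well, by listing both queues on the command line (this assumes the default queue name celery has not been changed):
celery worker -A worker.imagery -P threads --loglevel=INFO --queues=imagery,celery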
OP Here.
This is the solution I used.
from celery import signature

task = signature("filter", kwargs=data.dict(), queue="imagery")
res = task.delay()
As mentioned by @DejanLekic, I had to specify the queue.
I am using the below packages.
celery==5.1.2
Django==3.1
I have 2 periodic Celery tasks: I want the first task to run every 15 minutes and the second to run every 20 minutes. The problem is that the first task runs on time, while the second never runs.
Although I'm getting this message on the console:
Scheduler: Sending due task <task_name> (<task_name>)
But the log statements inside that task are not showing up on the console.
Please find the following files:
celery.py
from celery import Celery, Task

app = Celery('settings')
...

class PeriodicTask(Task):

    @classmethod
    def on_bound(cls, app):
        app.conf.beat_schedule[cls.name] = {
            "schedule": cls.run_every,
            "task": cls.name,
            "args": cls.args if hasattr(cls, "args") else (),
            "kwargs": cls.kwargs if hasattr(cls, "kwargs") else {},
            "options": cls.options if hasattr(cls, "options") else {}
        }
tasks.py
from celery.schedules import crontab
from settings.celery import app, PeriodicTask
...

@app.task(
    base=PeriodicTask,
    run_every=crontab(minute='*/15'),
    name='task1',
    options={'queue': 'queue_name'}
)
def task1():
    logger.info("task1 called")

@app.task(
    base=PeriodicTask,
    run_every=crontab(minute='*/20'),
    name='task2'
)
def task2():
    logger.info("task2 called")
Please help me to find the bug here. Thanks!
Have you enabled logging in Celery? The docs say that you have to configure logging for it to show up; see the logging section of the Celery docs.
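For task-level logging specifically, a minimal sketch (reusing the app and PeriodicTask from the question) would use Celery's get_task_logger helper, with the worker started at --loglevel=INFO:
from celery.utils.log import get_task_logger
from celery.schedules import crontab
from settings.celery import app, PeriodicTask

logger = get_task_logger(__name__)

@app.task(base=PeriodicTask, run_every=crontab(minute='*/20'), name='task2')
def task2():
    logger.info("task2 called")  # appears in the worker output once logging is configured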
TLDR:
I need to set up a Flask app for multiprocessing such that the API and the STOMP queue listener run in separate processes and therefore do not interfere with each other's operations.
Details:
I am building a Python Flask app that has API endpoints and also creates a message queue listener that connects to an ActiveMQ queue using the stomp package.
I need to implement multiprocessing such that the API and listener do not block each other's operation. That way the API will accept new requests and the listener will continue to listen for new messages and carry out tasks accordingly.
A simplified version of the code is shown below (some details are omitted for brevity).
Problem: The multiprocessing is causing the application to get stuck. The worker's run method is not called consistently, and therefore the listener never gets created.
# Start the worker as a subprocess -- this is not working -- app gets stuck before the worker's run method is called
m = Manager()
shared_state = m.dict()
worker = MyWorker(shared_state=shared_state)
worker.start()
After several days of troubleshooting I suspect the problem is due to the multiprocessing not being set up correctly. I was able to confirm this because when I stripped out all of the multiprocessing code and called the worker's run method directly, all of the queue management code worked correctly: the CustomWorker module creates the listener, creates the message, and picks up the message. I think this indicates that the queue management code is working correctly and that the source of the problem is most likely the multiprocessing.
# Removing the multiprocessing and calling the worker's run method directly works without getting stuck so the issue is likely due to multiprocessing not being setup correctly
worker = MyWorker()
worker.run()
Here is the code I have so far:
App
This part of the code creates the API and attempts to create a new process to create the queue listener. The 'custom_worker_utils' module is a custom module that creates the stomp listener in the CustomWorker() class run method.
from flask import Flask, request, make_response, jsonify
from flask_restx import Resource, Api
import sys, os, logging, time

basedir = os.path.dirname(os.getcwd())
sys.path.append('..')

from custom_worker_utils.custom_worker_utils import *
from multiprocessing import Manager

# app.py
def create_app():
    app = Flask(__name__)
    app.config['BASE_DIR'] = basedir
    api = Api(app, version='1.0', title='MPS Worker', description='MPS Common Worker')
    logger = get_logger()

    '''
    This is a placeholder to trigger the sending of a message to the first queue
    '''
    @api.route('/initialapicall', endpoint="initialapicall", methods=['GET', 'POST', 'PUT', 'DELETE'])
    class InitialApiCall(Resource):
        # Sends a message to the queue
        def get(self, *args, **kwargs):
            mqconn = get_mq_connection()
            message = create_queue_message(initial_tracker_file)
            mqconn.send('/queue/test1', message, headers={"persistent": "true"})
            return make_response(jsonify({'message': 'Initial Test Call Worked!'}), 200)

    # Start the worker as a subprocess -- this is not working -- app gets stuck before the worker's run method is called
    m = Manager()
    shared_state = m.dict()
    worker = MyWorker(shared_state=shared_state)
    worker.start()

    # Removing the multiprocessing and calling the worker's run method directly works without getting stuck so the issue is likely due to multiprocessing not being setup correctly
    # worker = MyWorker()
    # worker.run()

    return app
Custom worker utils
The run() method is called, connects to the queue, and creates the listener with the stomp package.
# custom_worker_utils.py
from multiprocessing import Manager, Process
from _datetime import datetime
import os, time, json, stomp, requests, logging, random

'''
The listener
'''
class MyListener(stomp.ConnectionListener):

    def __init__(self, p):
        self.process = p
        self.logger = p.logger
        self.conn = p.mqconn
        self.conn.connect(_user, _password, wait=True)
        self.subscribe_to_queue()

    def on_message(self, headers, message):
        message_data = json.loads(message)
        ticket_id = message_data[constants.TICKET_ID]
        prev_status = message_data[constants.PREVIOUS_STEP_STATUS]
        task_name = message_data[constants.TASK_NAME]
        # Run the service
        if prev_status == "success":
            resp = self.process.do_task(ticket_id, task_name)
        elif hasattr(self, 'revert_task'):
            resp = self.process.revert_task(ticket_id, task_name)
        else:
            resp = True
        if resp:
            self.logger.debug('Acknowledging')
            self.logger.debug(resp)
            self.conn.ack(headers['message-id'], self.process.conn_id)
        else:
            self.conn.nack(headers['message-id'], self.process.conn_id)

    def on_disconnected(self):
        self.conn.connect('admin', 'admin', wait=True)
        self.subscribe_to_queue()

    def subscribe_to_queue(self):
        queue = os.getenv('QUEUE_NAME')
        self.conn.subscribe(destination=queue, id=self.process.conn_id, ack='client-individual')

def get_mq_connection():
    conn = stomp.Connection([(_host, _port)], heartbeats=(4000, 4000))
    conn.connect(_user, _password, wait=True)
    return conn

class CustomWorker(Process):

    def __init__(self, **kwargs):
        super(CustomWorker, self).__init__()
        self.logger = logging.getLogger("Worker Log")
        log_level = os.getenv('LOG_LEVEL', 'WARN')
        self.logger.setLevel(log_level)
        self.mqconn = get_mq_connection()
        self.conn_id = random.randrange(1, 100)
        for k, v in kwargs.items():
            setattr(self, k, v)

    def revert_task(self, ticket_id, task_name):
        # If the subclass does not implement this,
        # then there is nothing to undo so just return True
        return True

    def run(self):
        lst = MyListener(self)
        self.mqconn.set_listener('queue_listener', lst)
        while True:
            pass
Seems like Celery is exactly what you need.
Celery is a task queue that can distribute work across worker-processes and even across machines.
Miguel Grinberg wrote a great post about that, showing how to accept tasks via Flask and spawn them using Celery.
Good Luck!
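For illustration, here is a minimal sketch of that pattern (the route, task name and Redis broker URL are assumptions, not taken from the post):
# sketch: hand the long-running work to a Celery worker instead of a home-grown multiprocessing worker
from flask import Flask, jsonify
from celery import Celery

flask_app = Flask(__name__)
celery_app = Celery(__name__, broker='redis://localhost:6379/0')  # broker URL is an assumption

@celery_app.task
def long_running_job(payload):
    # do the queue/listener work here instead of inside the Flask process
    return payload

@flask_app.route('/jobs', methods=['POST'])
def submit_job():
    result = long_running_job.delay({"example": True})
    return jsonify({"task_id": result.id}), 202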
To resolve this issue I have decided to run the Flask API and the message queue listener as two entirely separate applications in the same Docker container. I have installed and configured supervisord to start the two processes individually.
[supervisord]
nodaemon=true
logfile=/home/appuser/logs/supervisord.log
[program:gunicorn]
command=gunicorn -w 1 -c gunicorn.conf.py "app:create_app()" -b 0.0.0.0:8081 --timeout 10000
directory=/home/appuser/app
user=appuser
autostart=true
autorestart=true
stdout_logfile=/home/appuser/logs/supervisord_worker_stdout.log
stderr_logfile=/home/appuser/logs/supervisord_worker_stderr.log
[program:mqlistener]
command=python3 start_listener.py
directory=/home/appuser/mqlistener
user=appuser
autostart=true
autorestart=true
stdout_logfile=/home/appuser/logs/supervisord_mqlistener_stdout.log
stderr_logfile=/home/appuser/logs/supervisord_mqlistener_stderr.log
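The config references a start_listener.py entry point that is not shown above; a minimal sketch (assuming it simply runs the CustomWorker from the question in the foreground) could look like this:
# start_listener.py -- a minimal sketch; assumes CustomWorker from custom_worker_utils is importable here
from custom_worker_utils.custom_worker_utils import CustomWorker

if __name__ == '__main__':
    worker = CustomWorker()
    worker.run()  # run in the foreground; supervisord keeps this process alive and restarts it on exit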
I am building a website using python Flask. Everything is going good and now I am trying to implement celery.
That was going well too, until I tried to send an email using Flask-Mail from Celery. Now I am getting a "working outside of application context" error.
The full traceback is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/ryan/www/CG-Website/src/util/mail.py", line 28, in send_forgot_email
    msg = Message("Recover your Crusade Gaming Account")
  File "/usr/lib/python2.7/site-packages/flask_mail.py", line 178, in __init__
    sender = current_app.config.get("DEFAULT_MAIL_SENDER")
  File "/usr/lib/python2.7/site-packages/werkzeug/local.py", line 336, in __getattr__
    return getattr(self._get_current_object(), name)
  File "/usr/lib/python2.7/site-packages/werkzeug/local.py", line 295, in _get_current_object
    return self.__local()
  File "/usr/lib/python2.7/site-packages/flask/globals.py", line 26, in _find_app
    raise RuntimeError('working outside of application context')
RuntimeError: working outside of application context
This is my mail function:
@celery.task
def send_forgot_email(email, ref):
    global mail
    msg = Message("Recover your Crusade Gaming Account")
    msg.recipients = [email]
    msg.sender = "Crusade Gaming stuff@cg.com"
    msg.html = \
        """
        Hello Person,<br/>
        You have requested your password be reset. <a href="{0}" >Click here recover your account</a> or copy and paste this link in to your browser: {0} <br />
        If you did not request that your password be reset, please ignore this.
        """.format(url_for('account.forgot', ref=ref, _external=True))
    mail.send(msg)
This is my celery file:
from __future__ import absolute_import

from celery import Celery

celery = Celery('src.tasks',
                broker='amqp://',
                include=['src.util.mail'])

if __name__ == "__main__":
    celery.start()
Here is a solution which works with the Flask application factory pattern and also creates Celery tasks with context, without needing to use app.app_context(). It is really tricky to get hold of the app while avoiding circular imports, but this solves it. This is for Celery 4.2, which is the latest at the time of writing.
Structure:
repo_name/
    manage.py
    base/
        base/__init__.py
        base/app.py
        base/runcelery.py
        base/celeryconfig.py
        base/utility/celery_util.py
        base/tasks/workers.py
So base is the main application package in this example. In base/__init__.py we create the Celery instance as below:
from celery import Celery
celery = Celery('base', config_source='base.celeryconfig')
The base/app.py file contains the flask app factory create_app and note the init_celery(app, celery) it contains:
from flask import Flask

from base import celery
from base.utility.celery_util import init_celery

def create_app(config_obj):
    """An application factory, as explained here:
    http://flask.pocoo.org/docs/patterns/appfactories/.

    :param config_object: The configuration object to use.
    """
    app = Flask('base')
    app.config.from_object(config_obj)
    init_celery(app, celery=celery)
    register_extensions(app)
    register_blueprints(app)
    register_errorhandlers(app)
    register_app_context_processors(app)
    return app
Moving on to base/runcelery.py contents:
from flask.helpers import get_debug_flag
from base.settings import DevConfig, ProdConfig
from base import celery
from base.app import create_app
from base.utility.celery_util import init_celery
CONFIG = DevConfig if get_debug_flag() else ProdConfig
app = create_app(CONFIG)
init_celery(app, celery)
Next, the base/celeryconfig.py file (as an example):
# -*- coding: utf-8 -*-
"""
Configure Celery. See the configuration guide at ->
http://docs.celeryproject.org/en/master/userguide/configuration.html#configuration
"""

## Broker settings.
broker_url = 'pyamqp://guest:guest@localhost:5672//'
broker_heartbeat = 0

# List of modules to import when the Celery worker starts.
imports = ('base.tasks.workers',)

## Using the database to store task state and results.
result_backend = 'rpc'
# result_persistent = False

accept_content = ['json', 'application/text']
result_serializer = 'json'
timezone = "UTC"

# define periodic tasks / cron here
# beat_schedule = {
#     'add-every-10-seconds': {
#         'task': 'workers.add_together',
#         'schedule': 10.0,
#         'args': (16, 16)
#     },
# }
Now define the init_celery in the base/utility/celery_util.py file:
# -*- coding: utf-8 -*-
def init_celery(app, celery):
    """Add flask app context to celery.Task"""
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
For the workers in base/tasks/workers.py:
from base import celery as celery_app
from flask_security.utils import config_value, send_mail
from base.bp.users.models.user_models import User
from base.extensions import mail  # this is the flask-mail

@celery_app.task
def send_async_email(msg):
    """Background task to send an email with Flask-mail."""
    # with app.app_context():
    mail.send(msg)

@celery_app.task
def send_welcome_email(email, user_id, confirmation_link):
    """Background task to send a welcome email with flask-security's mail.

    You don't need to use with app.app_context() here. Task has context.
    """
    user = User.query.filter_by(id=user_id).first()
    print(f'sending user {user} a welcome email')
    send_mail(config_value('EMAIL_SUBJECT_REGISTER'),
              email,
              'welcome', user=user,
              confirmation_link=confirmation_link)
Then, you need to start the celery beat and celery worker in two different cmd prompts from inside the repo_name folder.
In one cmd prompt do celery -A base.runcelery:celery beat and in the other celery -A base.runcelery:celery worker.
Then, run through your task that needed the flask context. Should work.
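For example (the arguments here are placeholders for illustration), one of the workers above can then be dispatched from a view or a Flask shell:
from base.tasks.workers import send_welcome_email

# placeholder arguments, for illustration only
send_welcome_email.delay('user@example.com', 42, 'https://example.com/confirm/abc123')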
Flask-mail needs the Flask application context to work correctly. Instantiate the app object on the celery side and use app.app_context like this:
with app.app_context():
    celery.start()
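A sketch of how that could look in the celery file from the question, assuming your Flask app can be imported there (the src.app import path is an assumption):
# src/tasks.py -- sketch only; the import path of the Flask app is an assumption
from __future__ import absolute_import

from celery import Celery
from src.app import app  # your Flask application object

celery = Celery('src.tasks', broker='amqp://', include=['src.util.mail'])

if __name__ == "__main__":
    with app.app_context():
        celery.start()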
I don't have any points, so I couldn't upvote @codegeek's answer above, so I decided to write my own, since my search for an issue like this was helped by this question/answer. I've just had some success tackling a similar problem in a Python/Flask/Celery scenario. Even though your error came from trying to use mail while mine came from trying to use url_for in a Celery task, I suspect the two are related to the same problem, and you would have had errors from url_for if you had tried to use it before mail.
With no app context present in a Celery task (even after importing app from my_app_module) I was getting errors, too. You'll need to perform the mail operation in the context of the app:
from module_containing_my_app_and_mail import app, mail  # Flask app, Flask mail
from flask.ext.mail import Message  # Message class

@celery.task
def send_forgot_email(email, ref):
    with app.app_context():  # This is the important bit!
        msg = Message("Recover your Crusade Gaming Account")
        msg.recipients = [email]
        msg.sender = "Crusade Gaming stuff@cg.com"
        msg.html = \
            """
            Hello Person,<br/>
            You have requested your password be reset. <a href="{0}" >Click here recover your account</a> or copy and paste this link in to your browser: {0} <br />
            If you did not request that your password be reset, please ignore this.
            """.format(url_for('account.forgot', ref=ref, _external=True))
        mail.send(msg)
If anyone is interested, my solution for the problem of using url_for in celery tasks can be found here
In your mail.py file, import your "app" and "mail" objects. Then, use request context. Do something like this:
from whateverpackagename import app
from whateverpackagename import mail

@celery.task
def send_forgot_email(email, ref):
    with app.test_request_context():
        msg = Message("Recover your Crusade Gaming Account")
        msg.recipients = [email]
        msg.sender = "Crusade Gaming stuff@cg.com"
        msg.html = \
            """
            Hello Person,<br/>
            You have requested your password be reset. <a href="{0}" >Click here recover your account</a> or copy and paste this link in to your browser: {0} <br />
            If you did not request that your password be reset, please ignore this.
            """.format(url_for('account.forgot', ref=ref, _external=True))
        mail.send(msg)
The answer provided by Bob Jordan is a good approach, but I found it very hard to read and understand, so I completely ignored it, only to arrive at the same solution later myself. In case anybody feels the same, I'd like to explain the solution in a much simpler way. You need to do 2 things:
create a file which initializes a Celery app
# celery_app_file.py
from celery import Celery

celery_app = Celery(__name__)
create another file which initializes a Flask app and uses it to monkey-patch the Celery app created earlier
# flask_app_file.py
from flask import Flask
from celery_app_file import celery_app

flask_app = Flask(__name__)

class ContextTask(celery_app.Task):
    def __call__(self, *args, **kwargs):
        with flask_app.app_context():
            return super().__call__(*args, **kwargs)

celery_app.Task = ContextTask
Now, any time you import the Celery application inside different files (e.g. mailing/tasks.py containing email-related stuff, or database/tasks.py containing database-related stuff), it'll be the already monkey-patched version that works within a Flask context.
The important thing to remember is that this monkey-patching must happen at some point when you start Celery through the command line. This means that (using my example) you have to run celery -A flask_app_file.celery_app worker because flask_app_file.py is the file that contains the celery_app variable with a monkey-patched Celery application assigned to it.
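For example, a hypothetical mailing/tasks.py only needs to import the shared Celery instance; the context patch is applied because the worker is started through flask_app_file:
# mailing/tasks.py -- hypothetical module name from the explanation above
from celery_app_file import celery_app

@celery_app.task
def send_password_reset(email):
    # when the worker (started via flask_app_file) runs this, celery_app.Task is already
    # the ContextTask subclass, so Flask extensions that need an app context work here
    ...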
Without using app.app_context(), just configure Celery before you register blueprints, like below:
celery = Celery('myapp', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')
From the blueprint where you wish to use Celery, import the Celery instance already created and use it to create your task.
It will work as expected.
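A minimal sketch of what that ordering could look like inside an app factory (the blueprint and route below are made up for illustration):
from flask import Blueprint, Flask
from celery import Celery

# configure Celery first, at module level, before any blueprint is registered
celery = Celery('myapp', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')

bp = Blueprint('main', __name__)

@bp.route('/ping')
def ping():
    # inside the blueprint, use the module-level celery instance to define or call tasks
    return 'pong'

def create_app():
    app = Flask(__name__)
    app.register_blueprint(bp)  # Celery was configured above, before this registration
    return app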