I am building a website using Python Flask. Everything was going well, and now I am trying to implement Celery.
That was going well too, until I tried to send an email using Flask-Mail from Celery. Now I am getting a "working outside of application context" error.
The full traceback is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/ryan/www/CG-Website/src/util/mail.py", line 28, in send_forgot_email
    msg = Message("Recover your Crusade Gaming Account")
  File "/usr/lib/python2.7/site-packages/flask_mail.py", line 178, in __init__
    sender = current_app.config.get("DEFAULT_MAIL_SENDER")
  File "/usr/lib/python2.7/site-packages/werkzeug/local.py", line 336, in __getattr__
    return getattr(self._get_current_object(), name)
  File "/usr/lib/python2.7/site-packages/werkzeug/local.py", line 295, in _get_current_object
    return self.__local()
  File "/usr/lib/python2.7/site-packages/flask/globals.py", line 26, in _find_app
    raise RuntimeError('working outside of application context')
RuntimeError: working outside of application context
This is my mail function:
@celery.task
def send_forgot_email(email, ref):
    global mail
    msg = Message("Recover your Crusade Gaming Account")
    msg.recipients = [email]
    msg.sender = "Crusade Gaming stuff@cg.com"
    msg.html = \
        """
        Hello Person,<br/>
        You have requested your password be reset. <a href="{0}">Click here to recover your account</a> or copy and paste this link into your browser: {0} <br />
        If you did not request that your password be reset, please ignore this.
        """.format(url_for('account.forgot', ref=ref, _external=True))
    mail.send(msg)
This is my celery file:
from __future__ import absolute_import
from celery import Celery

celery = Celery('src.tasks',
                broker='amqp://',
                include=['src.util.mail'])

if __name__ == "__main__":
    celery.start()
Here is a solution that works with the Flask application factory pattern and creates Celery tasks with context, without needing to use app.app_context() explicitly. Getting hold of the app while avoiding circular imports is really tricky, but this solves it. This is for Celery 4.2, which is the latest at the time of writing.
Structure:
repo_name/
manage.py
base/
base/__init__.py
base/app.py
base/runcelery.py
base/celeryconfig.py
base/utility/celery_util.py
base/tasks/workers.py
So base is the main application package in this example. In the base/__init__.py file we create the celery instance as below:
from celery import Celery
celery = Celery('base', config_source='base.celeryconfig')
The base/app.py file contains the Flask app factory create_app; note the init_celery(app, celery) call inside it:
from flask import Flask

from base import celery
from base.utility.celery_util import init_celery

def create_app(config_obj):
    """An application factory, as explained here:
    http://flask.pocoo.org/docs/patterns/appfactories/.

    :param config_obj: The configuration object to use.
    """
    app = Flask('base')
    app.config.from_object(config_obj)
    init_celery(app, celery=celery)
    register_extensions(app)
    register_blueprints(app)
    register_errorhandlers(app)
    register_app_context_processors(app)
    return app
Moving on to base/runcelery.py contents:
from flask.helpers import get_debug_flag
from base.settings import DevConfig, ProdConfig
from base import celery
from base.app import create_app
from base.utility.celery_util import init_celery
CONFIG = DevConfig if get_debug_flag() else ProdConfig
app = create_app(CONFIG)
init_celery(app, celery)
Next, the base/celeryconfig.py file (as an example):
# -*- coding: utf-8 -*-
"""
Configure Celery. See the configuration guide at ->
http://docs.celeryproject.org/en/master/userguide/configuration.html#configuration
"""

## Broker settings.
broker_url = 'pyamqp://guest:guest@localhost:5672//'
broker_heartbeat = 0

# List of modules to import when the Celery worker starts.
imports = ('base.tasks.workers',)

## Using the database to store task state and results.
result_backend = 'rpc'
# result_persistent = False

accept_content = ['json', 'application/text']
result_serializer = 'json'
timezone = "UTC"

# Define periodic tasks / cron here, e.g.:
# beat_schedule = {
#     'add-every-10-seconds': {
#         'task': 'workers.add_together',
#         'schedule': 10.0,
#         'args': (16, 16)
#     },
# }
Now define the init_celery in the base/utility/celery_util.py file:
# -*- coding: utf-8 -*-
def init_celery(app, celery):
    """Add flask app context to celery.Task"""
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
For the workers in base/tasks/workers.py:
from base import celery as celery_app
from flask_security.utils import config_value, send_mail

from base.bp.users.models.user_models import User
from base.extensions import mail  # this is the Flask-Mail instance

@celery_app.task
def send_async_email(msg):
    """Background task to send an email with Flask-Mail."""
    # No need for `with app.app_context():` here -- the task already has context.
    mail.send(msg)

@celery_app.task
def send_welcome_email(email, user_id, confirmation_link):
    """Background task to send a welcome email with flask-security's mail.
    You don't need to use with app.app_context() here. The task has context.
    """
    user = User.query.filter_by(id=user_id).first()
    print(f'sending user {user} a welcome email')
    send_mail(config_value('EMAIL_SUBJECT_REGISTER'),
              email,
              'welcome', user=user,
              confirmation_link=confirmation_link)
Then, you need to start the celery beat and celery worker in two different cmd prompts from inside the repo_name folder.
In one cmd prompt run celery -A base.runcelery:celery beat and in the other celery -A base.runcelery:celery worker.
Then run the task that needed the Flask context. It should work.
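For example, with the setup above, send_welcome_email could be enqueued from any Flask view. A rough sketch (the blueprint, route, and the variables inside the view are made up for illustration):

# in some Flask view module (hypothetical example)
from flask import Blueprint
from base.tasks.workers import send_welcome_email

bp = Blueprint('auth', __name__)

@bp.route('/register', methods=['POST'])
def register():
    # ... create the user and build the confirmation link (made-up variables) ...
    send_welcome_email.delay(email, user.id, confirmation_link)
    return 'registration email queued', 202

The .delay() call returns immediately; the worker picks the task up and runs it inside the app context that ContextTask pushes.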
Flask-Mail needs the Flask application context to work correctly. Instantiate the app object on the celery side and use app.app_context, like this:
with app.app_context():
    celery.start()
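A minimal sketch of such an entry point, using the question's module layout (the src.app module and its app object are assumptions, not from the question):

# run_worker.py (hypothetical entry point)
from src.app import app       # assumed: wherever your Flask app is created
from src.tasks import celery  # the Celery instance from your celery file

if __name__ == '__main__':
    # Push an application context before starting the worker so that
    # Flask-Mail (and url_for) can resolve current_app inside tasks.
    with app.app_context():
        celery.start()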
I don't have enough reputation to upvote @codegeek's answer above, so I decided to write my own, since my search for an issue like this was helped by this question/answer. I've just had some success tackling a similar problem in a Python/Flask/Celery scenario. Although your error came from trying to use mail while mine came from trying to use url_for in a celery task, I suspect the two stem from the same problem, and you would likely have had errors from url_for too had you used it before mail.
With no app context present in the celery task (even after importing app from my_app_module), I was getting errors too. You'll need to perform the mail operation in the context of the app:
from module_containing_my_app_and_mail import app, mail  # Flask app, Flask-Mail
from flask import url_for
from flask_mail import Message  # Message class

@celery.task
def send_forgot_email(email, ref):
    with app.app_context():  # This is the important bit!
        msg = Message("Recover your Crusade Gaming Account")
        msg.recipients = [email]
        msg.sender = "Crusade Gaming stuff@cg.com"
        msg.html = \
            """
            Hello Person,<br/>
            You have requested your password be reset. <a href="{0}">Click here to recover your account</a> or copy and paste this link into your browser: {0} <br />
            If you did not request that your password be reset, please ignore this.
            """.format(url_for('account.forgot', ref=ref, _external=True))
        mail.send(msg)
If anyone is interested, my solution for the problem of using url_for in celery tasks can be found here
In your mail.py file, import your "app" and "mail" objects. Then use a test request context, like this:
from whateverpackagename import app
from whateverpackagename import mail

@celery.task
def send_forgot_email(email, ref):
    with app.test_request_context():
        msg = Message("Recover your Crusade Gaming Account")
        msg.recipients = [email]
        msg.sender = "Crusade Gaming stuff@cg.com"
        msg.html = \
            """
            Hello Person,<br/>
            You have requested your password be reset. <a href="{0}">Click here to recover your account</a> or copy and paste this link into your browser: {0} <br />
            If you did not request that your password be reset, please ignore this.
            """.format(url_for('account.forgot', ref=ref, _external=True))
        mail.send(msg)
The answer provided by Bob Jordan is a good approach, but I found it very hard to read and understand, so I completely ignored it, only to arrive at the same solution later myself. In case anybody feels the same, I'd like to explain the solution in a much simpler way. You need to do two things:
create a file which initializes a Celery app
# celery_app_file.py
from celery import Celery
celery_app = Celery(__name__)
create another file which initializes a Flask app and uses it to monkey-patch the Celery app created earlier
# flask_app_file.py
from flask import Flask
from celery_app_file import celery_app

flask_app = Flask(__name__)

class ContextTask(celery_app.Task):
    def __call__(self, *args, **kwargs):
        with flask_app.app_context():
            return super().__call__(*args, **kwargs)

celery_app.Task = ContextTask
Now, any time you import the Celery application inside different files (e.g. mailing/tasks.py containing email-related stuff, or database/tasks.py containing database-related stuff), it'll be the already monkey-patched version that works within a Flask context.
The important thing to remember is that this monkey-patching must happen at some point when you start Celery through the command line. This means that (using my example) you have to run celery -A flask_app_file.celery_app worker because flask_app_file.py is the file that contains the celery_app variable with a monkey-patched Celery application assigned to it.
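For example, the mailing/tasks.py mentioned above might look something like this (a sketch; the task body is made up, but anything that touches current_app now works thanks to the patched Task class):

# mailing/tasks.py (hypothetical example)
from flask import current_app
from celery_app_file import celery_app

@celery_app.task
def report_sender():
    # current_app resolves because ContextTask pushes an app context
    return current_app.config.get('MAIL_DEFAULT_SENDER')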
Without using app.app_context(), just configure Celery before you register your blueprints, like below:
celery = Celery('myapp', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')
In any blueprint where you wish to use Celery, import the instance already created and use it to define your celery tasks.
It will work as expected.
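A rough sketch of that arrangement, with hypothetical module names (this only shows the wiring; the broker and backend URLs are the ones from the answer above):

# myapp/__init__.py (hypothetical)
from celery import Celery
from flask import Flask

celery = Celery('myapp', broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')

def create_app():
    app = Flask(__name__)
    from myapp.views import bp  # this blueprint imports `celery` above
    app.register_blueprint(bp)
    return app

# myapp/views.py (hypothetical)
from flask import Blueprint
from myapp import celery

bp = Blueprint('views', __name__)

@celery.task
def add_together(x, y):
    return x + y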
I have a Flask server running within Gunicorn.
In my Flask application I want to handle large file uploads (>20 GB), so I plan on letting a celery task do the handling of the large file.
The problem is that retrieving the file from request.files already takes quite long; in the meantime, Gunicorn terminates the worker handling that request. I could increase the timeout, but the maximum file size is currently unknown, so I don't know how much time I would need.
My plan was to make the request context available to the celery task, as described here: http://xion.io/post/code/celery-include-flask-request-context.html, but I cannot make it work.
Q1: Is the signature right?
I set the signature with
celery.signature(handle_large_file, args={}, kwargs={})
and nothing is complaining. I get the arguments I pass from the flask request handler to the celery task, but that's it. Should I somehow get a handle to the context here?
Q2: How to use the context?
I would have thought if the flask request context was available I could just use request.files in my code, but then I get the warning that I am out of context.
Using celery 4.4.0
Code:
# in celery.py:
from flask import request
from celery import Celery

celery = Celery('celery_worker',
                backend=Config.CELERY_RESULT_BACKEND,
                broker=Config.CELERY_BROKER_URL)

@celery.task(bind=True)
def handle_large_file(task_object, data):
    # do something with the large file...
    # what I'd like to do:
    files = request.files['upfile']
    ...

celery.signature(handle_large_file, args={}, kwargs={})
# in main.py
def create_app():
    app = Flask(__name__.split('.')[0])
    ...
    celery_worker.conf.update(app.config)

    # copied from the blog post
    class RequestContextTask(Task): ...
    celery_worker.Task = RequestContextTask
# in Controller.py
@FILE.route("", methods=['POST'])
def upload():
    data = dict()
    ...
    handle_large_file.delay(data)
What am I missing?
I have a Flask app where I use Flask-APScheduler to run a scheduled query on my database and send an email via a cron job.
I'm running my app via Gunicorn with the following config, controlled via supervisor:
[program:myapp]
command=/home/path/to/venv/bin/gunicorn -b localhost:8000 -w 4 myapp:app --preload
directory=/home/path/here
user=myuser
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
The job details are stored in my config.py:
...config stuff

JOBS = [
    {
        'id': 'sched_email',
        'func': 'app.tasks:sched_email',
        'trigger': 'cron',
        'hour': 9,
    },
]

SCHEDULER_API_ENABLED = True
Originally the email was being sent 4 times due to the 4 workers initialising the app and the scheduler. I found a similar article which suggested opening a socket when the app is initialised so the other workers can't grab the job.
My __init__.py:
# Third-party imports
import logging
from logging.handlers import SMTPHandler, RotatingFileHandler
import os
from flask import Flask
from flask_mail import Mail, Message
from flask_sqlalchemy import SQLAlchemy
from flask_apscheduler import APScheduler
from flask_migrate import Migrate
from flask_login import LoginManager
import sys, socket

# Local imports
from config import app_config

# Create initial instances of extensions
mail = Mail()
db = SQLAlchemy()
scheduler = APScheduler()
migrate = Migrate()
login_manager = LoginManager()

# Construct the Flask app instance
def create_app(config_name):
    app = Flask(__name__)
    app.config.from_object(app_config[config_name])
    app_config[config_name].init_app(app)

    migrate.init_app(app, db)
    mail.init_app(app)
    db.init_app(app)

    # Fix to ensure only one Gunicorn worker grabs the scheduled task
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(("127.0.0.1", 47200))
    except socket.error:
        pass
    else:
        scheduler.init_app(app)
        scheduler.start()

    login_manager.init_app(app)
    login_manager.login_message = "You must be logged in to access this page."
    login_manager.login_message_category = 'danger'
    login_manager.login_view = "admin.login"

    # Initialize blueprints
    from .errors import errors as errors_blueprint
    app.register_blueprint(errors_blueprint)
    from .main import main as main_blueprint
    app.register_blueprint(main_blueprint)
    from .admin import admin as admin_blueprint
    app.register_blueprint(admin_blueprint)

    # Setup logging when not in debug mode
    if not app.debug:
        ... logging email config
        ... log file config

    return app
Now the email is being sent twice!
Can anyone suggest why this is happening? Are there any logs I can dive into to work out what's going on?
I also read about using the @app.before_first_request decorator, but as I'm using the app factory pattern, I'm not sure how to incorporate it.
Thanks!
So it turns out the issue was a silly mistake I made.
I didn't configure supervisor correctly, so my --preload flag was not actually being applied.
After I fixed supervisor and reloaded, my task is now running correctly and I'm receiving one email.
I hope my setup above helps others; being a beginner, this took me a long time to get working.
I'd like to call generate_async_audio_service from a view and have it asynchronously generate audio files for a list of words using a thread pool, then commit them to a database.
I keep running into an error saying that I'm working outside of the application context, even though I'm creating a new Polly and S3 instance each time.
How can I generate/upload multiple audio files at once?
from flask import current_app
from multiprocessing.pool import ThreadPool

from Server.database import db
import boto3
import io
import uuid

def upload_audio_file_to_s3(file):
    app = current_app._get_current_object()
    with app.app_context():
        s3 = boto3.client(service_name='s3',
                          aws_access_key_id=app.config.get('BOTO3_ACCESS_KEY'),
                          aws_secret_access_key=app.config.get('BOTO3_SECRET_KEY'))
        extension = file.filename.rsplit('.', 1)[1].lower()
        file.filename = f"{uuid.uuid4().hex}.{extension}"
        s3.upload_fileobj(file,
                          app.config.get('S3_BUCKET'),
                          f"{app.config.get('UPLOADED_AUDIO_FOLDER')}/{file.filename}",
                          ExtraArgs={"ACL": 'public-read', "ContentType": file.content_type})
        return file.filename

def generate_polly(voice_id, text):
    app = current_app._get_current_object()
    with app.app_context():
        polly_client = boto3.Session(
            aws_access_key_id=app.config.get('BOTO3_ACCESS_KEY'),
            aws_secret_access_key=app.config.get('BOTO3_SECRET_KEY'),
            region_name=app.config.get('AWS_REGION')).client('polly')
        response = polly_client.synthesize_speech(VoiceId=voice_id,
                                                  OutputFormat='mp3', Text=text)
        return response['AudioStream'].read()

def generate_polly_from_term(vocab_term, gender='m'):
    app = current_app._get_current_object()
    with app.app_context():
        audio = generate_polly('Celine', vocab_term.term)
        file = io.BytesIO(audio)
        file.filename = 'temp.mp3'
        file.content_type = 'mp3'
        return vocab_term.id, upload_audio_file_to_s3(file)

def generate_async_audio_service(terms):
    pool = ThreadPool(processes=12)
    results = pool.map(generate_polly_from_term, terms)
    # do something w/ results
This is not necessarily a fleshed-out answer, but rather than putting things into comments I'll put it here.
Celery is a task manager for Python. The reason you would want to use it is this: if tasks pinging Flask take longer to finish than the interval at which new tasks come in, some tasks will be blocked and you won't get all of your results. To fix this, you hand the work to another process. This goes like so:
1) Client sends a request to Flask to process audio files
2) The files land in Flask to be processed; Flask sends an asynchronous task to Celery.
3) Celery is notified of the task and stores its state in some sort of messaging system (RabbitMQ and Redis are the canonical examples)
4) Flask is now unburdened from that task and can receive more
5) Celery finishes the task, including the upload to your database
Celery and Flask are then two separate Python processes communicating with one another. That should satisfy your multithreaded approach. You can also retrieve the state of a task through Flask if you want the client to verify that the task was or was not completed. The route in your Flask app.py would look like:
@app.route('/my-route', methods=['POST'])
def process_audio():
    # Get your files and save to common temp storage
    save_my_files(target_dir, files)

    response = celery_app.send_task('celery_worker.files', args=[target_dir])

    return jsonify({'task_id': response.task_id})
Where celery_app comes from another module worker.py:
import os
from celery import Celery

env = os.environ

# This is for a RabbitMQ broker
CELERY_BROKER_URL = env.get('CELERY_BROKER_URL', 'amqp://0.0.0.0:5672/0')
CELERY_RESULT_BACKEND = env.get('CELERY_RESULT_BACKEND', 'rpc://')

celery_app = Celery('tasks', broker=CELERY_BROKER_URL, backend=CELERY_RESULT_BACKEND)
Then, your celery process would have a worker configured something like:
import os

from celery import Celery
from celery.signals import after_task_publish

env = os.environ

CELERY_BROKER_URL = env.get('CELERY_BROKER_URL')
CELERY_RESULT_BACKEND = env.get('CELERY_RESULT_BACKEND', 'rpc://')

# Set up celery_app with name 'tasks' using the above broker and backend
celery_app = Celery('tasks', broker=CELERY_BROKER_URL, backend=CELERY_RESULT_BACKEND)

@celery_app.task(name='celery_worker.files')
def async_files(path):
    # Get file from path
    # Process
    # Upload to database
    # This is just if you want to return an actual result; fill it in with whatever
    return {'task_state': "FINISHED"}
This is relatively basic, but could serve as a starting point. I will say that some of Celery's behavior and setup are not always the most intuitive, but this will leave your Flask app available to whoever wants to send files to it without blocking anything else.
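As a follow-up to the state-retrieval point above, a minimal sketch of such a status route (assuming the celery_app and Flask app from the snippets above):

# in the Flask app (hypothetical status route)
from flask import jsonify
from celery.result import AsyncResult

@app.route('/status/<task_id>', methods=['GET'])
def task_status(task_id):
    result = AsyncResult(task_id, app=celery_app)
    return jsonify({'task_id': task_id, 'state': result.state})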
Hopefully that's somewhat helpful
I'm trying to deploy a Flask app on Heroku that uses background tasks in Celery. I've implemented the application factory pattern so that the celery processes are not bound to any one instance of the Flask app.
This works locally, and I have yet to see an error. But when deployed to Heroku, the same result always occurs: the celery task (I'm only using one) succeeds the first time it runs, but any subsequent celery calls to that task fail with sqlalchemy.exc.DatabaseError: (psycopg2.DatabaseError) SSL error: decryption failed or bad record mac. If I restart the celery worker, the cycle continues.
There are multiple issues that report this same error, but none specifies a proper solution. I initially believed implementing the application factory pattern would have prevented this error from manifesting, but it's not quite there.
In app/__init__.py I create the celery and db objects:
celery = Celery(__name__, broker=Config.CELERY_BROKER_URL)
db = SQLAlchemy()

def create_app(config_name):
    app = Flask(__name__)
    app.config.from_object(config[config_name])
    db.init_app(app)
    return app
My flask_celery.py file creates the actual Flask app object:
import os
from app import celery, create_app
app = create_app(os.getenv('FLASK_CONFIG', 'default'))
app.app_context().push()
And I start celery with this command:
celery worker -A app.flask_celery.celery --loglevel=info
This is what the actual celery task looks like:
@celery.task()
def task_process_stuff(stuff_id):
    stuff = Stuff.query.get(stuff_id)
    stuff.processed = True
    db.session.add(stuff)
    db.session.commit()
    return stuff
Which is invoked by:
task_process_stuff.apply_async(args=[stuff.id], countdown=10)
Library Versions
Flask 0.12.2
SQLAlchemy 1.1.11
Flask-SQLAlchemy 2.2
Celery 4.0.2
The solution was to add db.engine.dispose() at the beginning of the task, disposing of all db connections before any work begins (forked worker processes can otherwise reuse SSL connections inherited from the parent process, which appears to be what triggers this error):
@celery.task()
def task_process_stuff(stuff_id):
    db.engine.dispose()
    stuff = Stuff.query.get(stuff_id)
    stuff.processed = True
    db.session.commit()
    return stuff
As I need this functionality across all of my tasks, I added it to task_prerun:
@task_prerun.connect
def on_task_init(*args, **kwargs):
    db.engine.dispose()
I have a project which is structured similarly to the overholt and fbone examples. I can send emails from all my blueprints fine, but I struggle to send them outside, e.g. from within a cron script or celery task.
I keep getting the error working outside of application context.
app/factory.py
from flask import Flask

from .extensions import mail

def create_app(package_name, package_path, settings_override=None,
               register_security_blueprint=True):
    app = Flask(package_name, instance_relative_config=True)

    mail.init_app(app)

    register_blueprints(app, package_name, package_path)
    app.wsgi_app = HTTPMethodOverrideMiddleware(app.wsgi_app)

    return app
app/extensions.py
from flask_mail import Mail
mail = Mail()
app/frontend/admin.py
bp = Blueprint('admin', __name__, url_prefix='/admin', static_folder='static')
#bp.route('/')
def admin():
msg = Message(......)
mail.send(msg)
Everything up until here works fine. Now I have a separate module in app which contains some cron scripts, and these now fail.
app/cron/alerts.py
from ..extensions import mail
from flask_mail import Message

def alert():
    msg = Message('asdfasdf', sender='hello@example.com', recipients=['hello@example.com'])
    msg.body = 'hello'
    mail.send(msg)
Which produces the error. How can I get around this?
    raise RuntimeError('working outside of application context')
RuntimeError: working outside of application context
You need to use Flask-Mail:
from flask_mail import Mail
mail = Mail(app)
I would recommend going with Celery. For the context problem, I found my solution in the following.
Have a look at this:
Miguel Grinberg's blog post on Celery
Or, if you are using the factory pattern in your application:
Celery with Factory Pattern
The second one is further/extended reading. Both of them helped me a lot (the second one also solved a context issue for me).
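For reference, the pattern both of those links describe boils down to something like this (a sketch of the factory helper, not a drop-in implementation; the CELERY_BROKER_URL config key is an assumption):

from celery import Celery

def make_celery(app):
    """Create a Celery instance whose tasks run inside the Flask app context."""
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            # Every task invocation gets an app context, so Flask-Mail,
            # url_for, and the database all work inside tasks.
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery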