I am trying to use Celery (and RabbitMQ) to send emails asynchronously with Flask-Mail. Initially I had an issue with render_template from Flask breaking Celery - Flask-Mail breaks Celery (the celery task would still execute successfully, but no emails were being sent). While I was trying to fix that issue (which is still not fixed!) I stumbled upon another problem: this pickling error, which is due to a thread lock. I noticed that the problem started when I changed the way I called the celery task (from delay to apply_async). Since then I have tried reverting my changes, but I still can't get rid of the error. Any help regarding either of the issues would be highly appreciated.
The traceback:
File "/Users/.../python2.7/site-packages/celery/app/amqp.py", line 250, in publish_task
**kwargs
File "/Users/.../lib/python2.7/site-packages/kombu/messaging.py", line 157, in publish
compression, headers)
File "/Users/.../lib/python2.7/site-packages/kombu/messaging.py", line 233, in _prepare
body) = encode(body, serializer=serializer)
File "/Users/.../lib/python2.7/site-packages/kombu/serialization.py", line 170, in encode
payload = encoder(data)
File "/Users/.../lib/python2.7/site-packages/kombu/serialization.py", line 356, in dumps
return dumper(obj, protocol=pickle_protocol)
PicklingError: Can't pickle <type 'thread.lock'>: attribute lookup thread.lock failed
tasks.py
from __future__ import absolute_import
from flask import render_template
from flask.ext.mail import Message
from celery import Celery
celery = Celery('tasks',
broker = 'amqp://tester:testing@localhost:5672/test_host')
@celery.task(name = "send_async_email")
def send_auth_email(app, nickname, email):
with app.test_request_context("/"):
recipients = []
recipients.append(email)
subject = render_template("subject.txt")
msg = Message(subject, recipients = recipients)
msg.html = render_template("test.html", name = nickname)
app.mail.send(msg)
In the test case I just call:
send_auth_email.delay(test_app, nick, email)
FYI: The API works perfectly fine if I don't use celery (i.e. synchronously). Thanks in advance!
When you invoke send_auth_email.delay(test_app, nick, email), all function arguments are sent to the task queue. To do so, Celery pickles them.
Short answer: test_app, being a Flask application, uses some magic and cannot be pickled. See the docs for details on what can and cannot be pickled.
One solution is to pass only the necessary arguments (in your case it seems that this is only the name) and re-instantiate test_app inside send_auth_email.
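As a minimal sketch of that idea (create_app() here is an assumed application factory, not part of the original code), the task would accept only plain, picklable arguments and rebuild the app in the worker:
# tasks.py - sketch only; create_app() is an assumed factory
from __future__ import absolute_import
from flask import render_template
from flask.ext.mail import Message
from celery import Celery
from myapp import create_app  # hypothetical module providing the factory

celery = Celery('tasks',
                broker='amqp://tester:testing@localhost:5672/test_host')

@celery.task(name="send_async_email")
def send_auth_email(nickname, email):
    app = create_app()  # rebuild the Flask app here instead of pickling it
    with app.test_request_context("/"):
        subject = render_template("subject.txt")
        msg = Message(subject, recipients=[email])
        msg.html = render_template("test.html", name=nickname)
        app.mail.send(msg)

# the caller then passes only picklable values:
# send_auth_email.delay(nick, email)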
Related
I am currently trying to set up celery to handle responses from a chatbot and forward those responses to a user.
The chatbot hits the /response endpoint of my server, which triggers the following function in my server.py module:
def handle_response(user_id, message):
"""Endpoint to handle the response from the chatbot."""
tasks.send_message_to_user.apply_async(args=[user_id, message])
return ('OK', 200,
{'Content-Type': 'application/json; charset=utf-8'})
In my tasks.py file, I import celery and create the send_message_to_user function:
from celery import Celery
celery_app = Celery('tasks', broker='redis://')
@celery_app.task(name='send_message_to_user')
def send_message_to_user(user_id, message):
"""Send the message to a user."""
# Here is the logic to send the message to a specific user
My problem is that my chatbot may send multiple messages to a user, so each send_message_to_user task is properly put in the queue, but then a race condition arises and the messages sometimes reach the user in the wrong order.
How could I make each send_message_to_user task wait for the previous task with the same name and the same user_id argument before executing?
I have looked at this thread Running "unique" tasks with celery but a lock isn't my solution, as I don't want to implement ugly retries when the lock is released.
Does anyone have any idea how to solve that issue in a clean(-ish) way?
Also, it's my first post here so I'm open to any suggestions to improve my request.
Thanks!
I'm using Python 2.7 (sigh), celery==3.1.19, librabbitmq==1.6.1, rabbitmq-server-3.5.6-1.noarch, and redis 2.8.24 (from redis-cli info).
I'm attempting to send a message from a celery producer to a celery consumer, and obtain the result back in the producer. There is 1 producer and 1 consumer, but 2 rabbitmq's (as brokers) and 1 redis (for results) in between.
The problem I'm facing is:
In the producer, I get back an AsyncResult via async_result = ZipUp.delay(unique_directory), but async_result.ready() never returns True (at least for 9 seconds it doesn't) - even for a consumer task that does essentially nothing but return a string.
I can see, in the rabbitmq management web interface, my message being received by the rabbitmq exchange, but it doesn't show up in the corresponding rabbitmq queue. Also, a log message sent by the very beginning of the ZipUp task doesn't appear to be getting logged.
Things work if I don't try to get a result back from the AsyncResult! But I'm kinda hoping to get the result of the call - it's useful :).
Below are configuration specifics.
We're setting up Celery as follows for returns:
CELERY_RESULT_BACKEND = 'redis://%s' % _SHARED_WRITE_CACHE_HOST_INTERNAL
CELERY_RESULT = Celery('TEST', broker=CELERY_BROKER)
CELERY_RESULT.conf.update(
BROKER_HEARTBEAT=60,
CELERY_RESULT_BACKEND=CELERY_RESULT_BACKEND,
CELERY_TASK_RESULT_EXPIRES=100,
CELERY_IGNORE_RESULT=False,
CELERY_RESULT_PERSISTENT=False,
CELERY_ACCEPT_CONTENT=['json'],
CELERY_TASK_SERIALIZER='json',
CELERY_RESULT_SERIALIZER='json',
)
We have another Celery configuration that doesn't expect a return value, and that works - in the same program. It looks like:
CELERY = Celery('TEST', broker=CELERY_BROKER)
CELERY.conf.update(
BROKER_HEARTBEAT=60,
CELERY_RESULT_BACKEND=CELERY_BROKER,
CELERY_TASK_RESULT_EXPIRES=100,
CELERY_STORE_ERRORS_EVEN_IF_IGNORED=False,
CELERY_IGNORE_RESULT=True,
CELERY_ACCEPT_CONTENT=['json'],
CELERY_TASK_SERIALIZER='json',
CELERY_RESULT_SERIALIZER='json',
)
The celery producer's stub looks like:
@CELERY_RESULT.task(name='ZipUp', exchange='cognition.workflow.ZipUp_%s' % INTERNAL_VERSION)
def ZipUp(directory): # pylint: disable=invalid-name
""" Task stub """
_unused_directory = directory
raise NotImplementedError
It's been mentioned that using queue= instead of exchange= in this stub would be simpler. Can anyone confirm that (I googled but found exactly nothing on the topic)? Apparently you can just use queue= unless you want to use fanout or something fancy like that, since not all celery backends have the concept of an exchange.
Anyway, the celery consumer starts out with:
@task(queue='cognition.workflow.ZipUp_%s' % INTERNAL_VERSION, name='ZipUp')
@StatsInstrument('workflow.ZipUp')
def ZipUp(directory): # pylint: disable=invalid-name
'''
Zip all files in directory, password protected, and return the pathname of the new zip archive.
:param directory Directory to zip
'''
try:
LOGGER.info('zipping up {}'.format(directory))
But "zipping up" doesn't get logged anywhere. I searched every (disk-backed) file on the celery server for that string, and got two hits: /usr/bin/zip, and my celery task's code - and no log messages.
Any suggestions?
Thanks for reading!
It appears that using the following task stub in the producer solved the problem:
@CELERY_RESULT.task(name='ZipUp', queue='cognition.workflow.ZipUp_%s' % INTERNAL_VERSION)
def ZipUp(directory): # pylint: disable=invalid-name
""" Task stub """
_unused_directory = directory
raise NotImplementedError
In short, it's using queue= instead of exchange=.
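With the queue-based stub in place, the producer can then (as a rough sketch) fire the task and read the result back from the Redis result backend:
# producer side - sketch of obtaining the result
async_result = ZipUp.delay(unique_directory)
zip_path = async_result.get(timeout=30)  # blocks until the consumer stores the result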
When executing an async task, celery raises an exception:
# case where the error is thrown
send_registration_email.delay("test", "test@gmail.com", {})
This error does not appear when I execute the code without celery:
# case where the code executes correctly
send_registration_email("test", "test@gmail.com", {})
How can I execute my async task with celery and get rid of this error?
Error
[2015-10-12 14:50:57,176: ERROR/MainProcess] Task tasks.core.email.send_registration_email[5f96bee3-9df7-42ce-b726-c7086e82b954] raised unexpected: NameError("global name 'Mailer' is not defined",)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/srv/www/compare/htdocs/tasks/core/email.py", line 6, in send_registration_email
@shared_task
NameError: global name 'Mailer' is not defined
Celery Task
# email.py
from __future__ import absolute_import
from celery import shared_task
from utilities.helpers.mailer import Mailer
@shared_task
def send_registration_email(email_type="", recipient="", data={}):
Mailer.send_email(email_type, recipient, data)
Mailer Class
# mailer.py
from __future__ import absolute_import
from django.core.mail.message import EmailMultiAlternatives
from django.template.loader import get_template
class Mailer():
@staticmethod
def send_email(email_type="", recipient="", data={}):
try:
email = Mailer.create_email(email_type, recipient, data)
email.send()
return True
except Exception as e:
return False
@classmethod
def create_email(self, email_type, recipient, data):
subject = ""
message = ""
sender_email = "mailgun@xxx.mailgun.org"
if email_type == "test":
subject = "Test subject"
content_html = "<html><body><h1>Test</h1></body></html>"
email = EmailMultiAlternatives(subject, None, sender_email, [recipient])
email.attach_alternative(content_html, "text/html")
return email
You will need to restart your worker in order for changes to your files to be reflected in your worker instances. Celery does not monitor for file changes, so if you change any of the files while your workers are running, you will need to restart the workers.
For development purposes this can be sidestepped by using the auto-reloading feature:
When auto-reload is enabled the worker starts an additional thread that watches for changes in the file system. New modules are imported, and already imported modules are reloaded whenever a change is detected, and if the prefork pool is used the child processes will finish the work they are doing and exit, so that they can be replaced by fresh processes effectively reloading the code.
This is an experimental feature and should only be used in development environments
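For example (a sketch only - this assumes Celery 3.1, where the experimental flag exists, and a project module named proj):
celery -A proj worker -l info --autoreload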
As @grrrrrr suggested, restarting the celery worker fixes the issue. In my case the celery worker is run by supervisor, so I had to restart supervisor.
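For example (assuming the worker is registered under a hypothetical supervisor program name such as celery_worker):
sudo supervisorctl restart celery_worker
# or, to restart everything supervisor manages:
sudo supervisorctl restart all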
I've programmed a server-side service using Spyne. I want to use the Spyne client code, but I can't do it without getting exceptions.
The server side code is something like (I've removed the imports and unified files):
class NotificationsRPC(ServiceBase):
@rpc(Uuid, DateTime, _returns=Integer)
def new_player(ctx, player_uuid, birthday):
# A lot of irrelevant code
return 0
@rpc(Uuid, _returns=Integer)
def deleted_player(ctx, player_uuid):
# A lot of irrelevant code
return 0
# Many other similar methods
radiante_app = Application(
[NotificationsRPC],
tns="radiante.rpc",
in_protocol=JsonDocument(validator="soft"),
out_protocol=JsonDocument()
)
wsgi_app = WsgiApplication(radiante_app)
server = make_server('127.0.0.1', 27182, wsgi_app)
server.serve_forever()
This code runs properly, I can make requests to it through CURL (the real code is implemented using uWSGI, but in this example I'm using the python embedded WSGI server).
The problem appears in the client-side code. It's something like this (RadianteRPC is the same class as on the server side, but with pass in the method bodies):
radiante_app = Application(
[RadianteRPC],
tns="radiante.rpc",
in_protocol=JsonDocument(validator="soft"),
out_protocol=JsonDocument()
)
rad_client = HttpClient("http://127.0.0.1:27182", radiante_app)
# In the real code, the parameters make more sense.
rad_client.service.new_player(uuid.UUID(), datetime.utcnow())
Then, when the code is executed I have the following error:
File "/vagrant/apps/radsync/signal_hooks/player.py", line 74, in _player_post_save
created
File "/home/vagrant/devenv/local/lib/python2.7/site-packages/spyne/client/http.py", line 64, in __call__
self.get_in_object(self.ctx)
File "/home/vagrant/devenv/local/lib/python2.7/site-packages/spyne/client/_base.py", line 144, in get_in_object
message=self.app.in_protocol.RESPONSE)
File "/home/vagrant/devenv/local/lib/python2.7/site-packages/spyne/protocol/dictdoc.py", line 278, in decompose_incoming_envelope
raise ValidationError("Need a dictionary with exactly one key "
ValidationError: Fault(Client.ValidationError: 'The value "\'Need a dictionary with exactly one key as method name.\'" could not be validated.')
It's worth noting that the client is implemented in Django U_u (not my decision), but I don't think that's related to the problem.
I've followed some indications from this question (adapting the example of ZeroMQ transport protocol to HTTP transport protocol): There is an example of Spyne client?
Thank you for your attention.
I am building a website using python Flask. Everything is going well, and now I am trying to implement celery.
That was going well as well, until I tried to send an email using flask-mail from celery. Now I am getting a "working outside of application context" error.
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
return self.run(*args, **kwargs)
File "/home/ryan/www/CG-Website/src/util/mail.py", line 28, in send_forgot_email
msg = Message("Recover your Crusade Gaming Account")
File "/usr/lib/python2.7/site-packages/flask_mail.py", line 178, in __init__
sender = current_app.config.get("DEFAULT_MAIL_SENDER")
File "/usr/lib/python2.7/site-packages/werkzeug/local.py", line 336, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/lib/python2.7/site-packages/werkzeug/local.py", line 295, in _get_current_object
return self.__local()
File "/usr/lib/python2.7/site-packages/flask/globals.py", line 26, in _find_app
raise RuntimeError('working outside of application context')
RuntimeError: working outside of application context
This is my mail function:
@celery.task
def send_forgot_email(email, ref):
global mail
msg = Message("Recover your Crusade Gaming Account")
msg.recipients = [email]
msg.sender = "Crusade Gaming stuff@cg.com"
msg.html = \
"""
Hello Person,<br/>
You have requested your password be reset. <a href="{0}" >Click here recover your account</a> or copy and paste this link in to your browser: {0} <br />
If you did not request that your password be reset, please ignore this.
""".format(url_for('account.forgot', ref=ref, _external=True))
mail.send(msg)
This is my celery file:
from __future__ import absolute_import
from celery import Celery
celery = Celery('src.tasks',
broker='amqp://',
include=['src.util.mail'])
if __name__ == "__main__":
celery.start()
Here is a solution which works with the flask application factory pattern and also creates the celery task with context, without needing to use app.app_context(). It is really tricky to get hold of that app while avoiding circular imports, but this solves it. This is for celery 4.2, which is the latest at the time of writing.
Structure:
repo_name/
manage.py
base/
base/__init__.py
base/app.py
base/runcelery.py
base/celeryconfig.py
base/utility/celery_util.py
base/tasks/workers.py
So base is the main application package in this example. In the base/__init__.py we create the celery instance as below:
from celery import Celery
celery = Celery('base', config_source='base.celeryconfig')
The base/app.py file contains the flask app factory create_app and note the init_celery(app, celery) it contains:
from base import celery
from base.utility.celery_util import init_celery
def create_app(config_obj):
"""An application factory, as explained here:
http://flask.pocoo.org/docs/patterns/appfactories/.
:param config_obj: The configuration object to use.
"""
app = Flask('base')
app.config.from_object(config_obj)
init_celery(app, celery=celery)
register_extensions(app)
register_blueprints(app)
register_errorhandlers(app)
register_app_context_processors(app)
return app
Moving on to base/runcelery.py contents:
from flask.helpers import get_debug_flag
from base.settings import DevConfig, ProdConfig
from base import celery
from base.app import create_app
from base.utility.celery_util import init_celery
CONFIG = DevConfig if get_debug_flag() else ProdConfig
app = create_app(CONFIG)
init_celery(app, celery)
Next, the base/celeryconfig.py file (as an example):
# -*- coding: utf-8 -*-
"""
Configure Celery. See the configuration guide at ->
http://docs.celeryproject.org/en/master/userguide/configuration.html#configuration
"""
## Broker settings.
broker_url = 'pyamqp://guest:guest@localhost:5672//'
broker_heartbeat=0
# List of modules to import when the Celery worker starts.
imports = ('base.tasks.workers',)
## Using the database to store task state and results.
result_backend = 'rpc'
#result_persistent = False
accept_content = ['json', 'application/text']
result_serializer = 'json'
timezone = "UTC"
# define periodic tasks / cron here
# beat_schedule = {
# 'add-every-10-seconds': {
# 'task': 'workers.add_together',
# 'schedule': 10.0,
# 'args': (16, 16)
# },
# }
Now define the init_celery in the base/utility/celery_util.py file:
# -*- coding: utf-8 -*-
def init_celery(app, celery):
"""Add flask app context to celery.Task"""
TaskBase = celery.Task
class ContextTask(TaskBase):
abstract = True
def __call__(self, *args, **kwargs):
with app.app_context():
return TaskBase.__call__(self, *args, **kwargs)
celery.Task = ContextTask
For the workers in base/tasks/workers.py:
from base import celery as celery_app
from flask_security.utils import config_value, send_mail
from base.bp.users.models.user_models import User
from base.extensions import mail # this is the flask-mail
@celery_app.task
def send_async_email(msg):
"""Background task to send an email with Flask-mail."""
#with app.app_context():
mail.send(msg)
@celery_app.task
def send_welcome_email(email, user_id, confirmation_link):
"""Background task to send a welcome email with flask-security's mail.
You don't need to use with app.app_context() here. Task has context.
"""
user = User.query.filter_by(id=user_id).first()
print(f'sending user {user} a welcome email')
send_mail(config_value('EMAIL_SUBJECT_REGISTER'),
email,
'welcome', user=user,
confirmation_link=confirmation_link)
Then, you need to start the celery beat and celery worker in two different cmd prompts from inside the repo_name folder.
In one cmd prompt run celery -A base.runcelery:celery beat and in the other celery -A base.runcelery:celery worker.
Then run the task that needed the flask context. It should work.
Flask-mail needs the Flask application context to work correctly. Instantiate the app object on the celery side and use app.app_context like this:
with app.app_context():
celery.start()
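A slightly fuller sketch of what that can look like in the celery entry-point file (create_app() here is an assumed Flask application factory, not code from the question):
# celery entry point - sketch only
from __future__ import absolute_import
from celery import Celery
from src.app import create_app  # assumed factory that builds the Flask app

app = create_app()
celery = Celery('src.tasks',
                broker='amqp://',
                include=['src.util.mail'])

if __name__ == "__main__":
    with app.app_context():
        celery.start()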
I don't have any points, so I couldn't upvote @codegeek's answer above, so I decided to write my own, since my search for an issue like this was helped by this question/answer. I've just had some success trying to tackle a similar problem in a python/flask/celery scenario. Even though your error came from trying to use mail while my error was around trying to use url_for in a celery task, I suspect the two are related to the same problem, and that you would have had errors stemming from the use of url_for if you had tried to use that before mail.
With no context of the app present in a celery task (even after including an import app from my_app_module) I was getting errors, too. You'll need to perform the mail operation in the context of the app:
from module_containing_my_app_and_mail import app, mail # Flask app, Flask mail
from flask.ext.mail import Message # Message class
@celery.task
def send_forgot_email(email, ref):
with app.app_context(): # This is the important bit!
msg = Message("Recover your Crusade Gaming Account")
msg.recipients = [email]
msg.sender = "Crusade Gaming stuff@cg.com"
msg.html = \
"""
Hello Person,<br/>
You have requested your password be reset. <a href="{0}" >Click here recover your account</a> or copy and paste this link in to your browser: {0} <br />
If you did not request that your password be reset, please ignore this.
""".format(url_for('account.forgot', ref=ref, _external=True))
mail.send(msg)
If anyone is interested, my solution for the problem of using url_for in celery tasks can be found here
In your mail.py file, import your "app" and "mail" objects. Then, use request context. Do something like this:
from whateverpackagename import app
from whateverpackagename import mail
@celery.task
def send_forgot_email(email, ref):
with app.test_request_context():
msg = Message("Recover your Crusade Gaming Account")
msg.recipients = [email]
msg.sender = "Crusade Gaming stuff@cg.com"
msg.html = \
"""
Hello Person,<br/>
You have requested your password be reset. <a href="{0}" >Click here recover your account</a> or copy and paste this link in to your browser: {0} <br />
If you did not request that your password be reset, please ignore this.
""".format(url_for('account.forgot', ref=ref, _external=True))
mail.send(msg)
The answer provided by Bob Jordan is a good approach, but I found it very hard to read and understand, so I completely ignored it, only to arrive at the same solution later myself. In case anybody feels the same, I'd like to explain the solution in a much simpler way. You need to do two things:
create a file which initializes a Celery app
# celery_app_file.py
from celery import Celery
celery_app = Celery(__name__)
create another file which initializes a Flask app and uses it to monkey-patch the Celery app created earlier
# flask_app_file.py
from flask import Flask
from celery_app_file import celery_app
flask_app = Flask(__name__)
class ContextTask(celery_app.Task):
def __call__(self, *args, **kwargs):
with flask_app.app_context():
return super().__call__(*args, **kwargs)
celery_app.Task = ContextTask
Now, any time you import the Celery application inside different files (e.g. mailing/tasks.py containing email-related stuff, or database/tasks.py containing database-related stuff), it'll be the already monkey-patched version that will work within a Flask context.
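For instance, a minimal sketch of one of those files (mailing/tasks.py is just the illustrative name used above):
# mailing/tasks.py - sketch; the task body runs inside flask_app.app_context()
from flask_app_file import celery_app

@celery_app.task
def send_email(address):
    # current_app, url_for, Flask-Mail etc. work here thanks to the patched Task class
    ...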
The important thing to remember is that this monkey-patching must happen at some point when you start Celery through the command line. This means that (using my example) you have to run celery -A flask_app_file.celery_app worker because flask_app_file.py is the file that contains the celery_app variable with a monkey-patched Celery application assigned to it.
Without using app.app_context(), just configure celery before you register blueprints, like below:
celery = Celery('myapp', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')
From your blueprint where you wish to use celery, call the instance of celery already created to create your celery task.
It will work as expected.
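A minimal sketch of that ordering (the module and task names are illustrative assumptions, not taken from the answer):
# sketch: configure celery first, then create the app and register blueprints
from celery import Celery
from flask import Flask

celery = Celery('myapp', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')

app = Flask(__name__)
# app.register_blueprint(some_blueprint)  # blueprints are registered after celery exists

# inside a blueprint module, reuse the celery instance created above
@celery.task
def send_async_email(recipient):
    ...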