How to wire up Huey to a Flask application

I've read the official docs, yet I'm not quite sure how to apply what they describe. I've also seen this Q&A; like there, I use a factory pattern. I just can't see the whole picture.
The connection pool, as well as other Redis/Huey settings, may differ depending on the environment (development, production). How do we wire Huey up so it can be configured the same way as the Flask application?
As far as I understand, to fire a task from a view we need to import the tasks module and call the specific task (call a function, passing it the relevant parameters). Where should we instantiate and keep the huey instance?
Should tasks know about the application dependencies? Should we consider a stripped-down secondary Flask app for this purpose?
Can you help a little bit?

Here's how I wired it all up.
First off, here's the contents of my project folder:
Get a stripped-down Flask application to be used by your tasks. As suggested in the post linked above, I created a secondary application factory:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

from config import config  # the existing config-name -> config-object mapping

# global dependencies
db = SQLAlchemy()

def create_app_huey(config_name):
    app = Flask(__name__)
    # apply configuration
    app.config.from_object(config[config_name])
    # init extensions
    db.init_app(app)
    return app
Create a tasking package. The two important files here are config.py and tasks.py. This post helped a lot. Let's start with the configuration. Note that this is a very simple approach.
# config.py (app.tasking.config)
import os

from huey import RedisHuey

settings__development = {
    'host': 'localhost',
}
settings__testing = {
    'host': 'localhost',
}
settings__production = {
    'host': 'production_server',
}

settings = {
    'development': settings__development,
    'testing': settings__testing,
    'production': settings__production,
    'default': settings__development,
}

huey = RedisHuey(**settings[os.getenv('FLASK_ENV') or 'default'])
Then the tasks.py module will look like this:
# tasks.py (app.tasking.tasks)
import os

from app.tasking.config import huey
from app import create_app_huey

app = create_app_huey(config_name=os.getenv('FLASK_ENV') or 'default')

@huey.task()
def create_thumbnails(document):
    pass
Run the consumer. Activate your virtual environment. Then run from cmd (I'm on Windows):
huey_consumer.py app.tasking.config.huey
Here app.tasking.config is the package.package.module path (in my case!) and
huey is the name of the huey instance available in that config module. Check your huey instance name.
Reading this helped.
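As for firing a task from a view: importing the task function and calling it is enough, since a function decorated with @huey.task() is enqueued rather than run inline. A rough sketch (the blueprint, route, and argument are illustrative):

from flask import Blueprint, jsonify
from app.tasking.tasks import create_thumbnails

documents = Blueprint('documents', __name__)  # illustrative blueprint in the main app

@documents.route('/documents/<int:document_id>/thumbnails', methods=['POST'])
def request_thumbnails(document_id):
    # calling the decorated function enqueues it; the consumer executes it later
    create_thumbnails(document_id)
    return jsonify({'status': 'queued'}), 202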

Related

Celery tasks don't send email to admins on logger critical messages

My celery tasks don't send an email to application admins each time I call logger.critical.
I'm building a Django application. The current configuration of my project allows the admins of the application to receive an email each time a logger.critical message is created. This was pretty straightforward to set up; I just followed the documentation of both projects (Celery and Django). For some reason, which I can't figure out, code that runs inside a Celery task does not behave the same way: it doesn't send an email to the application admins when a logger.critical message is created.
Does Celery even allow this to be done?
Am I missing some configuration?
Has anyone run into this problem and managed to solve it?
Using:
Django 1.11
Celery 4.3
Thanks for any help.
As stated in the documentation, Celery overrides the current logging configuration to apply its own. It also says that you can set CELERYD_HIJACK_ROOT_LOGGER to False in your Django settings to prevent this behavior; what is not well documented is that this doesn't really work at the moment.
In my opinion you have 2 options:
1. Actually prevent Celery from overriding your configuration, using the setup_logging signal
Open your celery.py file and add the following:
from celery.signals import setup_logging

@setup_logging.connect
def config_loggers(*args, **kwargs):
    pass
After that your file should look more or less like this:
from __future__ import absolute_import, unicode_literals
import os

from celery import Celery
from celery.signals import setup_logging

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@setup_logging.connect
def config_loggers(*args, **kwargs):
    pass
However, I would avoid this option unless you have a really good reason, because this way you lose the default task logging handled by Celery, which is quite good to have.
2. Use a specific logger
You can define a custom logger in your Django LOGGING configuration and use it in your tasks, e.g.:
Django settings:
LOGGING = {
    # ... other configs ...
    'handlers': {
        'my_email_handler': {
            # ... handler configuration ...
        },
    },
    'loggers': {
        # ... other loggers ...
        'my_custom_logger': {
            'handlers': ['my_email_handler'],
            'level': 'CRITICAL',
            'propagate': True,
        },
    },
}
Tasks:
import logging

from celery import shared_task

logger = logging.getLogger('my_custom_logger')

@shared_task
def log():
    logger.critical('Something bad happened!')
I believe this is the best approach for you because, as far as I understand, you need to manually log messages, and this allows you to keep using the Celery logging system.
From Celery docs:
By default any previously configured handlers on the root logger will be removed. If you want to customize your own logging handlers, then you can disable this behavior by setting worker_hijack_root_logger = False.
Celery installs its own logger, which you can obtain with a get_task_logger() call. I assume you wrote your own logger that implements the logic you described in the original question. Read more about Celery logging to find out how to disable this behaviour and tweak Celery to your needs.
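For reference, the per-task logger Celery installs is obtained roughly like this (a minimal sketch):

from celery import shared_task
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@shared_task
def log():
    # this goes through Celery's task logger rather than the root logger
    logger.critical('Something bad happened!')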

Django: How to pass arguments to settings.py to set variables before running the server

I have a few services, such as a Redis server and a Centrifuge server, that run alongside Django. I would like to start the services and the servers using a single command.
Something like:
python setup.py run
By default, this will run all the services on 127.0.0.1, and the variables below from settings.py make sense.
# Sample settings.py

# Django RQ configs
RQ_SHOW_ADMIN_LINK = True
RQ_QUEUES = {
    'default': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
        'DEFAULT_TIMEOUT': 360,
    },
}

# centrifuge configuration
CENTRIFUGE_WEBSOCKET = 'ws://127.0.0.1:8080/connection/websocket'
CENTRIFUGE_ADDRESS = 'http://127.0.0.1:8080'
I would like to run
python setup.py run 172.19.78.179
Here, the services will be running on 172.19.78.179, and I would like to change the settings to this IP:
# centrifuge configuration
CENTRIFUGE_WEBSOCKET = 'ws://172.19.78.179:8080/connection/websocket'
CENTRIFUGE_ADDRESS = 'http://172.19.78.179:8080'
I don't want to set it in live settings or through the admin interface. How do I go about solving this problem?
If the IP is going to be dynamic, you can try using os.environ, like this:
import os
IP = os.environ['IP']
...
So you can run it like: IP=172.19.78.179 python setup.py run
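Applied to the settings from the question, that could look roughly like this (a sketch; the localhost fallback is just an assumption):

# settings.py (sketch)
import os

IP = os.environ.get('IP', '127.0.0.1')  # falls back to localhost when IP is not set

CENTRIFUGE_WEBSOCKET = 'ws://%s:8080/connection/websocket' % IP
CENTRIFUGE_ADDRESS = 'http://%s:8080' % IP

RQ_QUEUES = {
    'default': {
        'HOST': IP,
        'PORT': 6379,
        'DB': 0,
        'DEFAULT_TIMEOUT': 360,
    },
}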
Or if you prefer, you can use a separate settings file and call it with --settings flag:
django-admin runserver --settings=mysite.settings
You can read more about Django environment settings here.
Hope this helps.

Combining resources in Python with Flask

I'm trying to combine two independent Flask apps like in the example below:
from geventwebsocket import WebSocketServer, Resource
...
server = WebSocketServer(('', 8080), Resource({
    '/': frontend,
    '/one': flask_app_one,
    '/two': flask_app_two}))
server.serve_forever()
Inside each Flask app I have to declare the full path; isn't that supposed to be a relative path? Inside flask_app_one:
from flask import Flask

app = Flask(__name__)

@app.route('/one/ping')
def ping():
    return 'hello\n'
Why should I specify @app.route('/one/ping') instead of just @app.route('/ping'), since all traffic to /one is forwarded to the corresponding app?
Let me know if you need any additional info; I kept my example clean.
Thank you
Finally, I managed to do it with so-called Application Dispatching, using the resources found on this page:
http://flask.pocoo.org/docs/0.10/patterns/appdispatch/#app-dispatch
Thanks
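For reference, a minimal sketch of that dispatching pattern with Werkzeug's DispatcherMiddleware (the module and app names are illustrative; in newer Werkzeug the import is werkzeug.middleware.dispatcher):

from werkzeug.wsgi import DispatcherMiddleware
from werkzeug.serving import run_simple

from frontend import app as frontend          # illustrative imports
from app_one import app as flask_app_one
from app_two import app as flask_app_two

# each mounted app sees paths relative to its mount point,
# so flask_app_one can use @app.route('/ping') instead of '/one/ping'
application = DispatcherMiddleware(frontend, {
    '/one': flask_app_one,
    '/two': flask_app_two,
})

if __name__ == '__main__':
    run_simple('localhost', 8080, application)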

How to see if a Flask app is being run on localhost?

I want my Flask app to have different behaviors when it is being run on localhost and when it is being hosted online. How can I detect from a flask app when it is on localhost and when it is deployed?
You'll want to look at the configuration handling section of the docs, most specifically, the part on dev / production. To summarize here, what you want to do is:
Load a base configuration which you keep in source control, with sensible defaults for anything that needs a value; those defaults should be what makes sense for production, not for development.
Load an additional configuration from a path discovered via an environment variable that provides environment-specific settings (e.g. the database URL).
An example in code:
from __future__ import absolute_import

from flask import Flask

from . import config  # This is our default configuration

app = Flask(__name__)

# First, set the default configuration
app.config.from_object(config)

# Then, load the environment-specific information
app.config.from_envvar("MYAPP_CONFIG_PATH")

# Set up routes and then ...

if __name__ == "__main__":
    app.run()
See also: The docs for Flask.config
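As a sketch of what that split might look like (file names and values are illustrative):

# config.py -- defaults kept in source control, safe for production
DEBUG = False
SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:'  # placeholder; overridden per environment

# /path/to/dev.cfg -- pointed to by MYAPP_CONFIG_PATH on a developer machine
# DEBUG = True
# SQLALCHEMY_DATABASE_URI = 'postgresql://localhost/myapp_dev'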
Here is one way of doing it. The key is comparing the current root URL, flask.request.url_root, against a known URL value you want to match.
Excerpt taken from the GitHub repo https://github.com/nueverest/vue_flask:
from flask import Flask, request

def is_production():
    """ Determines if app is running on the production server or not.

    Get current URI.
    Extract root location.
    Compare root location against developer server value 127.0.0.1:5000.
    :return: (bool) True if code is running on the production server, and False otherwise.
    """
    root_url = request.url_root
    developer_url = 'http://127.0.0.1:5000/'
    return root_url != developer_url
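Note that request.url_root is only available while handling a request, so is_production() has to be called from inside a view (or another request-bound function). A small illustrative usage, assuming the usual application object:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # is_production() relies on request.url_root, so it only works inside a request
    if is_production():
        return 'production behaviour'
    return 'development behaviour'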

How can I choose a Config based on a server variable in Flask?

I am running nginx + gunicorn + flask
My nginx config looks like:
...
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header Stage "development";
proxy_redirect off;
...
My flask app looks like:
from flask import Flask, request
from werkzeug.contrib.fixers import ProxyFix
app = Flask(__name__)
# configuration settings
if request.headers.get('Stage') == 'production':
app.config.from_object('config.production_config')
else:
app.config.from_object('config.development_config')
#app.route('/')
def index():
return "hello"
app.wsgi_app = ProxyFix(app.wsgi_app)
However, that doesn't seem to work.
I get: RuntimeError: working outside of request context
My nginx is set up so that I can have development/production environments, but I want to be able to say that this particular server location is a development environment, and have Flask use the appropriate configuration.
The application config is for the whole application, while request headers are for just one request; the same application generally handles many requests. Therefore you cannot set the config based on request headers.
Your code at the module level is executed at server start-up, when no request has reached the application yet, so there is no current request. This is what the "working outside of request context" message means.
What you're trying to do (prod vs. dev config) is better done with an environment variable in the script starting your gunicorn server. If you want both at the same time, the easiest approach is to run two gunicorn servers.
Alternatively, make two application objects, run them both in the same process, and dispatch with a WSGI middleware similar to these: http://flask.pocoo.org/docs/patterns/appdispatch/
This is a bit old, but I wanted to add how we accomplish this with Flask. The majority of this is adapted from http://flask.pocoo.org/docs/config/.
In our config.py we define multiple classes (one per environment):
class Config(object):
    FOO = 1
    BAR = 2

class Development(Config):
    BAR = 3
Then in each of our application nodes we set an environment variable in the gunicorn init scripts (for us this lives within a supervisor config but it doesn't have to be).
APPLICATION_ENV='Development'
Then within the flask application during initialization (only runs on server start, not within the request context):
import logging
import os

try:
    env = os.environ['APPLICATION_ENV']
except KeyError as e:
    logging.error('Unknown environment key, defaulting to Development')
    env = 'Development'

app.config.from_object('config.%s' % env)
now app.config['BAR'] will be 3.
We also wanted support for local config files (e.g. on a developer machine, or for passwords that are deployed from Chef directly to the machine and not stored in git). To do that, we've expanded on the above to also load a local config based on the app.config['LOCAL_CONFIG'] parameter.
class Development(Config):
    BAR = 3
    LOCAL_CONFIG = '/etc/localConfig.py'
And then in /etc/localConfig.py
BAR = 4
And again in our app initialization code after the code above loads the initial app.config for the environment:
if 'LOCAL_CONFIG' in app.config:
    # try to load the local configuration overrides
    if app.config.from_pyfile(app.config['LOCAL_CONFIG'], silent=True):
        logging.info('Loaded local config file at %s' % app.config['LOCAL_CONFIG'])
    else:
        logging.warning('Failed to load local config file at %s - does it exist?' % app.config['LOCAL_CONFIG'])
At this point app.config['BAR'] is 4.
This isn't perfect, because if you have dicts in your config you can only override the entire dict, not individual keys within it. It does accomplish most of what we need, though.
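To make that dict caveat concrete, a small illustrative example (the CACHE setting is hypothetical):

# config.py
class Development(Config):
    CACHE = {'backend': 'redis', 'host': 'localhost', 'timeout': 300}

# /etc/localConfig.py
CACHE = {'host': 'cache.internal'}

# after from_pyfile() runs, app.config['CACHE'] == {'host': 'cache.internal'};
# 'backend' and 'timeout' are gone, because the whole dict was replaced.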
