Background task in Flask + Gunicorn without Celery

I want to send a telegram notification when the user performs a specific task in my flask application. I'm using python-telegram-bot to handle telegram. Here's the simplified code:
@app.route('/route')
def foo():
    # do some stuff...
    # if stuff is completed successfully - send the notification
    app.telegram_notifier.send_notification(some_data)
    return ''
I'm using messagequeue from python-telegram-bot to avoid flood limits. As you might have expected, that's not working and I'm getting the following error:
telegram.ext.messagequeue.DelayQueueError: Could not process callback in stopped thread
I tried launching it in a separate daemon thread, but ended up with the same error.
This functionality is used only once in the entire application, so I want to keep things simple and avoid installing more dependencies like Celery.
Is there a way to achieve this using threads or some other simple way?
EDIT (more code)
Here's a simplified implementation of the Telegram bot:
from telegram import Bot, ParseMode
from telegram.ext import messagequeue as mq

class Notifier(Bot):
    def __init__(self):
        super(Notifier, self).__init__('my token')
        # Message queue setup
        self._is_messages_queued_default = True
        self._msg_queue = mq.MessageQueue(all_burst_limit=3, all_time_limit_ms=3500)
        self.chat_id = 'my chat ID'

    @mq.queuedmessage
    def _send_message(self, *args, **kwargs):
        return super(Notifier, self).send_message(*args, **kwargs)

    def send_notification(self, data: str):
        msg = f'Notification content: <b>{data}</b>'
        self._send_message(self.chat_id, msg, ParseMode.HTML)
In the app factory method:
from flask import Flask

from notifier import Notifier

def init_app():
    app = Flask(__name__)
    app.telegram_notifier = Notifier()
    # some other init stuff...
    return app
Here's what I tried with threads:
from threading import Thread

@app.route('/route')
def foo():
    # do some stuff...

    # First attempt: fire-and-forget daemon thread
    t = Thread(target=app.telegram_notifier.send_notification, args=('some notification data',), daemon=True)
    t.start()

    # Second attempt: start the thread and wait for it
    t = Thread(target=app.telegram_notifier.send_notification, args=('some notification data',))
    t.start()
    t.join()
    return ''
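For what it's worth (not from the original thread), a dependency-free pattern that avoids tearing the sending thread down between requests is a single long-lived worker thread draining a queue.Queue. A minimal sketch, assuming the Notifier above; the names notification_queue and start_worker are illustrative:

import logging
import queue
import threading

# Long-lived worker: the thread that talks to Telegram is started once and
# never stopped, so there is no stopped thread for the queue to trip over.
notification_queue = queue.Queue()

def _drain_queue(notifier):
    while True:
        data = notification_queue.get()        # blocks until a job arrives
        try:
            notifier.send_notification(data)
        except Exception:
            logging.exception('notification failed')  # keep the worker alive
        finally:
            notification_queue.task_done()

def start_worker(notifier):
    threading.Thread(target=_drain_queue, args=(notifier,), daemon=True).start()

The route would then call notification_queue.put(some_data) instead of sending directly, and start_worker(app.telegram_notifier) would run once in init_app().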

Related

Python Flask API with stomp listener multiprocessing

TLDR:
I need to set up a Flask app for multiprocessing such that the API and the stomp queue listener run in separate processes and therefore do not interfere with each other's operations.
Details:
I am building a Python Flask app that has API endpoints and also creates a message queue listener that connects to an ActiveMQ queue with the stomp package.
I need to implement multiprocessing such that the API and listener do not block each other's operation. That way the API will accept new requests and the listener will continue to listen for new messages and carry out tasks accordingly.
A simplified version of the code is shown below (some details are omitted for brevity).
Problem: The multiprocessing is causing the application to get stuck. The worker's run method is not called consistently, and therefore the listener never gets created.
# Start the worker as a subprocess -- this is not working -- app gets stuck before the worker's run method is called
m = Manager()
shared_state = m.dict()
worker = MyWorker(shared_state=shared_state)
worker.start()
After several days of troubleshooting I suspect the problem is due to the multiprocessing not being set up correctly. I was able to confirm this: when I stripped out all of the multiprocessing code and called the worker's run method directly, all of the queue management code worked correctly; the CustomWorker module created the listener, created the message, and picked up the message. This indicates that the queue management code is fine and the source of the problem is most likely the multiprocessing.
# Removing the multiprocessing and calling the worker's run method directly works without getting stuck so the issue is likely due to multiprocessing not being setup correctly
worker = MyWorker()
worker.run()
Here is the code I have so far:
App
This part of the code creates the API and attempts to create a new process to create the queue listener. The 'custom_worker_utils' module is a custom module that creates the stomp listener in the CustomWorker() class run method.
from flask import Flask, request, make_response, jsonify
from flask_restx import Resource, Api
import sys, os, logging, time

basedir = os.path.dirname(os.getcwd())
sys.path.append('..')

from custom_worker_utils.custom_worker_utils import *
from multiprocessing import Manager

# app.py
def create_app():
    app = Flask(__name__)
    app.config['BASE_DIR'] = basedir
    api = Api(app, version='1.0', title='MPS Worker', description='MPS Common Worker')
    logger = get_logger()

    '''
    This is a placeholder to trigger the sending of a message to the first queue
    '''
    @api.route('/initialapicall', endpoint="initialapicall", methods=['GET', 'POST', 'PUT', 'DELETE'])
    class InitialApiCall(Resource):
        # Sends a message to the queue
        def get(self, *args, **kwargs):
            mqconn = get_mq_connection()
            message = create_queue_message(initial_tracker_file)
            mqconn.send('/queue/test1', message, headers={"persistent": "true"})
            return make_response(jsonify({'message': 'Initial Test Call Worked!'}), 200)

    # Start the worker as a subprocess -- this is not working -- the app gets
    # stuck before the worker's run method is called
    m = Manager()
    shared_state = m.dict()
    worker = MyWorker(shared_state=shared_state)
    worker.start()

    # Removing the multiprocessing and calling the worker's run method directly
    # works without getting stuck, so the issue is likely the multiprocessing setup
    #worker = MyWorker()
    #worker.run()

    return app
Custom worker utils
The run() method connects to the queue and creates the listener with the stomp package:
# custom_worker_utils.py
from multiprocessing import Manager, Process
from datetime import datetime
import os, time, json, stomp, requests, logging, random

'''
The listener
'''
class MyListener(stomp.ConnectionListener):
    def __init__(self, p):
        self.process = p
        self.logger = p.logger
        self.conn = p.mqconn
        self.conn.connect(_user, _password, wait=True)
        self.subscribe_to_queue()

    def on_message(self, headers, message):
        message_data = json.loads(message)
        ticket_id = message_data[constants.TICKET_ID]
        prev_status = message_data[constants.PREVIOUS_STEP_STATUS]
        task_name = message_data[constants.TASK_NAME]
        # Run the service
        if prev_status == "success":
            resp = self.process.do_task(ticket_id, task_name)
        elif hasattr(self, 'revert_task'):
            resp = self.process.revert_task(ticket_id, task_name)
        else:
            resp = True
        if resp:
            self.logger.debug('Acknowledging')
            self.logger.debug(resp)
            self.conn.ack(headers['message-id'], self.process.conn_id)
        else:
            self.conn.nack(headers['message-id'], self.process.conn_id)

    def on_disconnected(self):
        self.conn.connect('admin', 'admin', wait=True)
        self.subscribe_to_queue()

    def subscribe_to_queue(self):
        queue = os.getenv('QUEUE_NAME')
        self.conn.subscribe(destination=queue, id=self.process.conn_id, ack='client-individual')

def get_mq_connection():
    conn = stomp.Connection([(_host, _port)], heartbeats=(4000, 4000))
    conn.connect(_user, _password, wait=True)
    return conn

class CustomWorker(Process):
    def __init__(self, **kwargs):
        super(CustomWorker, self).__init__()
        self.logger = logging.getLogger("Worker Log")
        log_level = os.getenv('LOG_LEVEL', 'WARN')
        self.logger.setLevel(log_level)
        self.mqconn = get_mq_connection()
        self.conn_id = random.randrange(1, 100)
        for k, v in kwargs.items():
            setattr(self, k, v)

    def revert_task(self, ticket_id, task_name):
        # If the subclass does not implement this,
        # then there is nothing to undo so just return True
        return True

    def run(self):
        lst = MyListener(self)
        self.mqconn.set_listener('queue_listener', lst)
        while True:
            pass  # busy-wait just to keep the process alive
Seems like Celery is exactly what you need.
Celery is a task queue that can distribute work across worker processes and even across machines.
Miguel Grinberg wrote a great post about this, showing how to accept tasks via Flask and run them as Celery tasks.
Good Luck!
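For reference, the basic shape of that approach looks roughly like this (a minimal sketch, not from the answer; the broker URL and task body are placeholders):

from celery import Celery
from flask import Flask

app = Flask(__name__)
celery = Celery(app.name, broker='redis://localhost:6379/0')  # placeholder broker

@celery.task
def send_notification(data):
    # do the slow work here, outside the request/response cycle
    ...

@app.route('/route')
def foo():
    send_notification.delay('some notification data')  # returns immediately
    return ''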
To resolve this issue I decided to run the Flask API and the message queue listener as two entirely separate applications in the same Docker container. I installed and configured supervisord to start and manage the two processes individually.
[supervisord]
nodaemon=true
logfile=/home/appuser/logs/supervisord.log
[program:gunicorn]
command=gunicorn -w 1 -c gunicorn.conf.py "app:create_app()" -b 0.0.0.0:8081 --timeout 10000
directory=/home/appuser/app
user=appuser
autostart=true
autorestart=true
stdout_logfile=/home/appuser/logs/supervisord_worker_stdout.log
stderr_logfile=/home/appuser/logs/supervisord_worker_stderr.log
[program:mqlistener]
command=python3 start_listener.py
directory=/home/appuser/mqlistener
user=appuser
autostart=true
autorestart=true
stdout_logfile=/home/appuser/logs/supervisord_mqlistener_stdout.log
stderr_logfile=/home/appuser/logs/supervisord_mqlistener_stderr.log
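The answer does not show start_listener.py; a plausible minimal version, assuming the CustomWorker class from the question (hypothetical, for illustration only):

# start_listener.py -- hypothetical sketch; the original answer does not
# show this file. Runs the worker in the foreground so supervisord owns it.
from custom_worker_utils.custom_worker_utils import CustomWorker

if __name__ == '__main__':
    CustomWorker().run()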

How can I run a coroutine in Python?

My webscraper takes about 10 minutes to run. I am trying to use the threading library so the scrape can run in the background after data has been returned to whoever called the API I created with Flask.
My code looks something like this:
import subprocess

from threading import Thread
from flask import Flask, request, jsonify

application = Flask(__name__)

class Compute(Thread):
    def __init__(self, request):
        print("init")
        Thread.__init__(self)
        self.request = request

    def run(self):
        print("RUN")
        command = './webscrape.py -us "{user}" -p "{password}" -url "{url}"'.format(**self.request.json)
        output = subprocess.call(['bash', '-c', command])

@application.route('/scraper/run', methods=['POST'])
def init_scrape():
    thread_a = Compute(request.__copy__())
    thread_a.start()
    return jsonify({'Scraping this site: ': request.json["url"]}), 201

if __name__ == '__main__':
    application.run(host="0.0.0.0", port="8080")
Now I am testing my API with Postman. When I make a POST request it prints out "init", but it doesn't seem to go any further and never starts the run() function. What am I doing wrong?
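One illustrative variant (not a confirmed fix): Flask's request object is bound to the request context, so handing a copy of it to a thread is fragile. Passing the plain JSON payload instead sidesteps that:

# Illustrative variant: give the thread a plain dict instead of the Flask
# request object; Compute.run would then format the command from the dict
# rather than from self.request.json.
@application.route('/scraper/run', methods=['POST'])
def init_scrape():
    payload = request.json           # plain data, safe to pass between threads
    thread_a = Compute(payload)
    thread_a.start()
    return jsonify({'Scraping this site: ': payload["url"]}), 201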

How can I dynamically change the type of tweets that are being streamed and figure out which message to send to who?

App Description
So I'm trying to create an application that does real-time sentiment analysis on tweets (as close to real time as I can get), and these tweets have to be based on user input. On the main page of my application, I have a simple search bar where the user can enter a topic they would like to perform sentiment analysis on; when they press enter, it takes them to another page where they see a line chart displaying all the data in real time.
Problem 1
The first problem I'm facing is that I don't know how to get tweepy to change what it is tracking when two or more people make a request. If I have a single global stream that I simply disconnect and reconnect every time a user makes a new query, it will also disconnect for all other users, which I don't want. On the other hand, if I allocate a streaming object for each user that connects, that strategy should work, but it poses its own problem: Twitter does not seem to allow you to hold more than one connection at a time, given this StackOverflow post.
Does Tweepy support running multiple Streams to collect data?
If I went along with this anyway, I would risk getting my IP banned. So neither of these solutions is any good.
Problem 2
The last problem is figuring out who each message belongs to. At the moment, I'm using RabbitMQ to store all incoming messages in one single queue called twitter_topic_feed. For every tweet that I receive from tweepy, I publish it to that queue. RabbitMQ then consumes the message and sends it to every open connection. Obviously, that behaviour is not what I'm looking for. Consider two users who search for pizza and sports: both users will receive tweets pertaining to both football and pizza, when one asked for sports tweets and the other asked for pizza tweets.
One idea is to create a queue with a unique identifier for each open connection. The identifier would have the form {Search Term}_{Hash ID}.
For generating the hash ID, I can use the uuid module available in Python, create the ID when the connection opens, and delete it when it closes. Of course, when the connection closes I also need to delete the queue. I'm not sure how well this solution would scale: if we had 10,000 connections, we would have 10,000 queues, and each queue could potentially have a lot of messages stored in it. It seems like it would be very memory intensive.
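For concreteness, the naming scheme described above is a one-liner with the standard library (illustrative only):

from uuid import uuid4

def make_queue_name(search_term):
    # {Search Term}_{Hash ID}, e.g. "pizza_9f1c2ab4..."; created when the
    # connection opens, deleted (along with its queue) when it closes
    return '{0}_{1}'.format(search_term, uuid4().hex)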
Design
tornado framework for WebSockets
tweepy API for streaming tweets
RabbitMQ for publishing messages to the queue whenever tweepy receives a new tweet; RabbitMQ then consumes those messages and sends them to the WebSocket.
Attempt (what I currently have so far)
TweetStreamListener uses the tweepy API to listen for tweets based on the user's input. For every tweet it receives, it calculates the polarity and publishes it to the RabbitMQ twitter_topic_feed queue.
import logging

from tweepy import StreamListener, OAuthHandler, Stream, API
from sentiment_analyzer import calculate_polarity_score
from constants import SETTINGS

auth = OAuthHandler(
    SETTINGS["TWITTER_CONSUMER_API_KEY"], SETTINGS["TWITTER_CONSUMER_API_SECRET_KEY"])
auth.set_access_token(
    SETTINGS["TWITTER_ACCESS_KEY"], SETTINGS["TWITTER_ACCESS_SECRET_KEY"])

api = API(auth, wait_on_rate_limit=True)

class TweetStreamListener(StreamListener):
    def __init__(self):
        self.api = api
        self.stream = Stream(auth=self.api.auth, listener=self)

    def start_listening(self):
        pass

    def on_status(self, status):
        if not hasattr(status, 'retweeted_status'):
            polarity = calculate_polarity_score(status.text)
            message = {
                'polarity': polarity,
                'timestamp': status.created_at
            }
            # TODO(Luis) Need to figure out who to send this message to.
            logging.debug("Message received from Twitter: {0}".format(message))

    # limit handling
    def on_limit(self, status):
        logging.info(
            'Limit threshold exceeded. Status code: {0}'.format(status))

    def on_timeout(self, status):
        logging.error('Stream disconnected. continuing...')
        return True  # Don't kill the stream

    def on_error(self, status_code):
        """
        Callback that executes for any error that may occur. Whenever we get a
        420 error code, we simply stop streaming tweets, as we have reached our
        rate limit by making too many requests.

        Returns: False if we are sending too many requests, otherwise True to
        keep the stream going.
        """
        if status_code == 420:
            logging.error(
                'Encountered error code 420. Disconnecting the stream')
            # returning False in on_data disconnects the stream
            return False
        else:
            logging.error('Encountered error with status code: {}'.format(
                status_code))
            return True  # Don't kill the stream
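The snippet stops at the TODO; the publish step the description mentions (pushing each scored tweet onto twitter_topic_feed) might look roughly like this with pika's blocking API (hypothetical, not the asker's code):

# Hypothetical sketch of the elided publish step: push each scored tweet
# onto the twitter_topic_feed queue using pika's blocking API.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='twitter_topic_feed')

def publish_tweet(message):
    channel.basic_publish(exchange='',
                          routing_key='twitter_topic_feed',
                          body=json.dumps(message, default=str))  # default=str handles datetimes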
WSHandler is in charge of maintaining a list of open connections and sending any message it receives back to every client (this is the behaviour I don't want).
import logging
import json

from uuid import uuid4
from tornado.web import RequestHandler
from tornado.websocket import WebSocketHandler

class WSHandler(WebSocketHandler):
    def check_origin(self, origin):
        return True

    @property
    def sess_id(self):
        return self._sess_id

    def open(self):
        self._sess_id = uuid4().hex
        logging.debug('Connection established.')
        self.application.pc.register_websocket(self._sess_id, self)

    # When a message arrives via RabbitMQ, write it to the websocket
    def on_message(self, message):
        logging.debug('Message received: {0}'.format(message))
        self.application.pc.redirect_incoming_message(
            self._sess_id, json.dumps(message))

    def on_close(self):
        logging.debug('Connection closed.')
        self.application.pc.unregister_websocket(self._sess_id)
The PikaClient module contains the PikaClient class, which keeps track of the inbound and outbound channels as well as of the websockets that are currently running.
import logging
import pika

from constants import SETTINGS
from pika import PlainCredentials, ConnectionParameters
from pika.adapters.tornado_connection import TornadoConnection

pika.log = logging.getLogger(__name__)

class PikaClient(object):
    INPUT_QUEUE_NAME = 'in_queue'

    def __init__(self):
        self.connected = False
        self.connecting = False
        self.connection = None
        self.in_channel = None
        self.out_channels = {}
        self.websockets = {}

    def connect(self):
        if self.connecting:
            return
        self.connecting = True
        # Set up the RabbitMQ connection
        credentials = PlainCredentials(
            SETTINGS['RABBITMQ_USERNAME'], SETTINGS['RABBITMQ_PASSWORD'])
        param = ConnectionParameters(
            host=SETTINGS['RABBITMQ_HOST'], port=SETTINGS['RABBITMQ_PORT'], virtual_host='/', credentials=credentials)
        return TornadoConnection(param, on_open_callback=self.on_connected)

    def run(self):
        self.connection = self.connect()
        self.connection.ioloop.start()

    def stop(self):
        self.connected = False
        self.connecting = False
        self.connection.ioloop.stop()

    def on_connected(self, unused_Connection):
        self.connected = True
        self.in_channel = self.connection.channel(self.on_conn_open)

    def on_conn_open(self, channel):
        self.in_channel.exchange_declare(
            exchange='tornado_input', exchange_type='topic')
        channel.queue_declare(
            callback=self.on_input_queue_declare, queue=self.INPUT_QUEUE_NAME)

    def on_input_queue_declare(self, queue):
        self.in_channel.queue_bind(
            callback=None, exchange='tornado_input', queue=self.INPUT_QUEUE_NAME, routing_key="#")

    def register_websocket(self, sess_id, ws):
        self.websockets[sess_id] = ws
        self.create_out_channel(sess_id)

    def unregister_websocket(self, sess_id):
        self.websockets.pop(sess_id)
        if sess_id in self.out_channels:
            self.out_channels[sess_id].close()

    def create_out_channel(self, sess_id):
        def on_output_channel_creation(channel):
            def on_output_queue_declaration(queue):
                channel.basic_consume(self.on_message, queue=sess_id)
            self.out_channels[sess_id] = channel
            channel.queue_declare(callback=on_output_queue_declaration,
                                  queue=sess_id, auto_delete=True, exclusive=True)
        self.connection.channel(on_output_channel_creation)

    def redirect_incoming_message(self, sess_id, message):
        self.in_channel.basic_publish(
            exchange='tornado_input', routing_key=sess_id, body=message)

    def on_message(self, channel, method, header, body):
        sess_id = method.routing_key
        if sess_id in self.websockets:
            self.websockets[sess_id].write_message(body)
            channel.basic_ack(delivery_tag=method.delivery_tag)
        else:
            channel.basic_reject(delivery_tag=method.delivery_tag)
Server.py is the main entry point of the application.
import logging
import os

from tornado import web, ioloop
from tornado.options import define, options, parse_command_line

from client import PikaClient
from handlers import WSHandler, MainHandler

define("port", default=3000, help="run on the given port.", type=int)
define("debug", default=True, help="run in debug mode.", type=bool)

def main():
    parse_command_line()
    settings = {
        "debug": options.debug,
        "static_path": os.path.join(os.path.dirname(__file__), "web/static")
    }
    app = web.Application(
        [
            (r"/", MainHandler),
            (r"/stream", WSHandler),
        ],
        **settings
    )

    # Set up the PikaClient
    app.pc = PikaClient()
    app.listen(options.port)
    logging.info("Server running on http://localhost:3000")
    try:
        app.pc.run()
    except KeyboardInterrupt:
        app.pc.stop()

if __name__ == "__main__":
    main()

How can I create several websocket chats in Tornado?

I am trying to create a Tornado application with several chats. The chats should be based on HTML5 websockets. The websockets communicate nicely, but I always run into the problem that each message is posted twice.
The application uses four classes to handle the chat:
Chat contains all written messages so far and a list with all waiters which should be notified
ChatPool serves as a lookup for new websockets - it creates a new chat when none exists with the required scratch_id, or returns an existing chat instance.
ScratchHandler is the entry point for all HTTP requests - it parses the base template and returns all details of client side.
ScratchWebSocket queries the database for user information, sets up the connection and notifies the chat instance if a new message has to be spread.
How can I prevent the messages from being posted several times?
How can I build a multi chat application with tornado?
import uuid

import tornado.websocket
import tornado.web
import tornado.template
import tornado.escape

from site import models
from site.handler import auth_handler

class ChatPool(object):
    # contains all chats
    chats = {}

    @classmethod
    def get_or_create(cls, scratch_id):
        if scratch_id in cls.chats:
            return cls.chats[scratch_id]
        else:
            chat = Chat(scratch_id)
            cls.chats[scratch_id] = chat
            return chat

    @classmethod
    def remove_chat(cls, chat_id):
        if chat_id not in cls.chats: return
        del cls.chats[chat_id]

class Chat(object):
    def __init__(self, scratch_id):
        self.scratch_id = scratch_id
        self.messages = []
        self.waiters = []

    def add_websocket(self, websocket):
        self.waiters.append(websocket)

    def send_updates(self, messages, sending_websocket):
        print "WAITERS", self.waiters
        for waiter in self.waiters:
            waiter.write_message(messages)
        self.messages.append(messages)

class ScratchHandler(auth_handler.BaseHandler):
    @tornado.web.authenticated
    def get(self, scratch_id):
        chat = ChatPool.get_or_create(scratch_id)
        return self.render('scratch.html', messages=chat.messages,
                           scratch_id=scratch_id)

class ScratchWebSocket(tornado.websocket.WebSocketHandler):
    def allow_draft76(self):
        # for iOS 5.0 Safari
        return True

    def open(self, scratch_id):
        self.scratch_id = scratch_id
        scratch = models.Scratch.objects.get(scratch_id=scratch_id)
        if not scratch:
            self.set_status(404)
            return
        self.scratch_id = scratch.scratch_id
        self.title = scratch.title
        self.description = scratch.description
        self.user = scratch.user
        self.chat = ChatPool.get_or_create(scratch_id)
        self.chat.add_websocket(self)

    def on_close(self):
        # this is buggy - it should only remove this websocket from the chat,
        # not drop the whole chat
        ChatPool.remove_chat(self.scratch_id)

    def on_message(self, message):
        print 'I got a message'
        parsed = tornado.escape.json_decode(message)
        chat = {
            "id": str(uuid.uuid4()),
            "body": parsed["body"],
            "from": self.user,
        }
        chat["html"] = tornado.escape.to_basestring(
            self.render_string("chat-message.html", message=chat))
        self.chat.send_updates(chat, self)
NOTE: After the feedback from @A. Jesse I changed the send_updates method of Chat. Unfortunately, it still delivers each message twice.
class Chat(object):
    def __init__(self, scratch_id):
        self.scratch_id = scratch_id
        self.messages = []
        self.waiters = []

    def add_websocket(self, websocket):
        self.waiters.append(websocket)

    def send_updates(self, messages, sending_websocket):
        for waiter in self.waiters:
            if waiter == sending_websocket:
                continue
            waiter.write_message(messages)
        self.messages.append(messages)
2nd EDIT: I compared my code with the provided demos. In the websocket example a new message is spread to the waiters through the WebSocketHandler subclass and a class method. In my code, it is done with a separate object:
From the demos:
class ChatSocketHandler(tornado.websocket.WebSocketHandler):
    @classmethod
    def send_updates(cls, chat):
        logging.info("sending message to %d waiters", len(cls.waiters))
        for waiter in cls.waiters:
            try:
                waiter.write_message(chat)
            except:
                logging.error("Error sending message", exc_info=True)
My application, using an object rather than a subclass of WebSocketHandler:
class Chat(object):
    def send_updates(self, messages, sending_websocket):
        for waiter in self.waiters:
            if waiter == sending_websocket:
                continue
            waiter.write_message(messages)
        self.messages.append(messages)
If you want to create a multi-chat application based on Tornado, I recommend using some kind of message queue to distribute new messages. This way you will be able to launch multiple application processes behind a load balancer like nginx. Otherwise you will be stuck with one process only and thus severely limited in scaling.
I updated my old Tornado Chat Example to support multi-room chats as you asked for. Have a look at the repository:
Tornado-Redis-Chat
Live Demo
This simple Tornado application uses the Redis Pub/Sub feature and websockets to distribute chat messages to clients. It was very easy to add multi-room functionality, simply by using the chat room ID as the Pub/Sub channel name.
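The gist of the room-ID-as-channel pattern, as a minimal sketch (illustrative, not the repository's actual code; it uses the tornadoredis client covered in the next question):

import tornadoredis
import tornado.gen
import tornado.websocket

class RoomSocket(tornado.websocket.WebSocketHandler):
    @tornado.gen.engine
    def open(self, room_id):
        self.client = tornadoredis.Client()
        self.client.connect()
        # one Pub/Sub channel per chat room
        yield tornado.gen.Task(self.client.subscribe, room_id)
        self.client.listen(self.on_redis_message)

    def on_redis_message(self, msg):
        if msg.kind == 'message':
            self.write_message(str(msg.body))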
on_message sends the message to all connected websockets, including the websocket that sent the message. Is that the problem: that messages are echoed back to the sender?

What is the proper way to handle a Redis connection in Tornado? (Async - Pub/Sub)

I am using Redis along with my Tornado application via the async client Brukva. When I looked at the sample apps on the Brukva site, they make a new connection in the __init__ method of the websocket handler:
class MessagesCatcher(tornado.websocket.WebSocketHandler):
    def __init__(self, *args, **kwargs):
        super(MessagesCatcher, self).__init__(*args, **kwargs)
        self.client = brukva.Client()
        self.client.connect()
        self.client.subscribe('test_channel')

    def open(self):
        self.client.listen(self.on_message)

    def on_message(self, result):
        self.write_message(str(result.body))

    def close(self):
        self.client.unsubscribe('test_channel')
        self.client.disconnect()
That's fine in the websocket case, but how do I handle it in a plain Tornado RequestHandler post method, say for a long-polling operation (publish-subscribe model)? I am making a new client connection in every post method of the update handler; is this the right approach? When I check the Redis console I see the client count increasing with every new post operation.
Here is a sample of my code.
c = brukva.Client(host='127.0.0.1')
c.connect()

class MessageNewHandler(BaseHandler):
    @tornado.web.authenticated
    def post(self):
        self.listing_id = self.get_argument("listing_id")
        message = {
            "id": str(uuid.uuid4()),
            "from": str(self.get_secure_cookie("username")),
            "body": str(self.get_argument("body")),
        }
        message["html"] = self.render_string("message.html", message=message)
        if self.get_argument("next", None):
            self.redirect(self.get_argument("next"))
        else:
            c.publish(self.listing_id, message)
            logging.info("Writing message : " + json.dumps(message))
            self.write(json.dumps(message))

class MessageUpdatesHandler(BaseHandler):
    @tornado.web.authenticated
    @tornado.web.asynchronous
    def post(self):
        self.listing_id = self.get_argument("listing_id", None)
        self.client = brukva.Client()
        self.client.connect()
        self.client.subscribe(self.listing_id)
        self.client.listen(self.on_new_messages)

    def on_new_messages(self, messages):
        # Closed client connection
        if self.request.connection.stream.closed():
            return
        logging.info("Getting update : " + json.dumps(messages.body))
        self.finish(json.dumps(messages.body))
        self.client.unsubscribe(self.listing_id)

    def on_connection_close(self):
        # unsubscribe the user from the channel
        self.client.unsubscribe(self.listing_id)
        self.client.disconnect()
I'd appreciate some sample code for a similar case.
A little late, but I've been using tornado-redis. It works with Tornado's ioloop and the tornado.gen module.
Install tornadoredis
It can be installed from pip
pip install tornadoredis
or with setuptools
easy_install tornadoredis
but you really shouldn't do that. You could also clone the repository and extract it. Then run
python setup.py build
python setup.py install
Connect to redis
The following code goes in your main.py or equivalent
redis_conn = tornadoredis.Client('hostname', 'port')
redis_conn.connect()
redis_conn.connect() is called only once. It is a blocking call, so it should be made before starting the main ioloop. The same connection object is shared between all the handlers.
You could add it to your application settings like this:
settings = {
    'redis': redis_conn,
}
app = tornado.web.Application([('/.*', Handler),],
                              **settings)
Use tornadoredis
The connection can be used in handlers as self.settings['redis'] or it can be added as a property of the BaseHandler class. Your request handlers subclass that class and access the property.
class BaseHandler(tornado.web.RequestHandler):
    @property
    def redis(self):
        return self.settings['redis']
To communicate with redis, the tornado.web.asynchronous and the tornado.gen.engine decorators are used
class SomeHandler(BaseHandler):
    @tornado.web.asynchronous
    @tornado.gen.engine
    def get(self):
        foo = yield tornado.gen.Task(self.redis.get, 'foo')
        self.render('sometemplate.html', foo=foo)
Extra information
More examples and other features like connection pooling and pipelines can be found at the github repo.
You should pool the connections in your app. Since it seems like Brukva doesn't support this automatically (redis-py does, but it is blocking by nature so it doesn't go well with Tornado), you need to write your own connection pool.
The pattern is pretty simple, though. Something along these lines (this is not real operational code):
class BrukvaPool(object):
    __conns = {}

    def get(self, host, port, db):
        ''' Get a client for host, port, db '''
        key = "%s:%s:%s" % (host, port, db)
        conns = self.__conns.get(key, [])
        if conns:
            ret = conns.pop()
            return ret
        else:
            ## Init a brukva client here, connect it and return it
            ...

    def release(self, client):
        ''' Release a client at the end of a request '''
        key = "%s:%s:%s" % (client.connection.host, client.connection.port, client.connection.db)
        self.__conns.setdefault(key, []).append(client)
It can be a bit more tricky than that, but that's the main idea.
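Usage would bracket each request with get/release (again illustrative, matching the sketch above):

# Illustrative usage of the pool sketch: borrow a client for one request,
# then hand it back so the connection is reused.
pool = BrukvaPool()

def handle_request():
    client = pool.get('127.0.0.1', 6379, 0)
    try:
        pass  # use the client here (publish/subscribe/get/set)
    finally:
        pool.release(client)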
