I use a simple Flask application with gunicorn's gevent worker to serve server-sent events.
To stream the content, I use:
response = Response(eventstream(), mimetype="text/event-stream")
which streams events from Redis:
def eventstream():
    for message in pubsub.listen():
        # ...
        yield str(event)
deployed with:
gunicorn -k gevent -b 127.0.0.1:50008 flaskapplication
But after it has been used for a while, I have 50 open Redis connections, even when no one is connected to the server-sent events stream anymore.
It seems like the view does not terminate, because gunicorn is non-blocking and pubsub.listen() is blocking.
How can I fix this? Should I limit the number of processes gunicorn may spawn, or should Flask kill the view after some timeout? If possible, it should stop the view/Redis connections on inactivity, without disconnecting users who are still connected to the SSE stream.
You can run gunicorn with -t <seconds> to specify a timeout for your workers, which will kill them if they are silent for that number of seconds; 30 is a typical value. I think this should work for your issue, but I am not completely sure.
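For example, reusing the deploy command from the question (the 30-second value here is only illustrative):
gunicorn -k gevent -t 30 -b 127.0.0.1:50008 flaskapplication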
From what I've seen, it seems like you could also rewrite your worker to use Timeout from gevent.
This might look something like the following:
from gevent import Timeout

def eventstream():
    pubsub = redis.pubsub()
    try:
        with Timeout(30) as timeout:
            pubsub.subscribe(channel)
            for message in pubsub.listen():
                # ...
                yield str(event)
    except Timeout as t:
        if t is not timeout:
            raise
        else:
            pubsub.unsubscribe(channel)
This example was helpful for getting the hang of how this should work.
Using the Timeout object from natdempk's solution, the most elegant fix is to send a heartbeat to detect dead connections:
while True:
    pubsub = redis.pubsub()
    pubsub.subscribe(channel)  # re-subscribe on the fresh connection
    try:
        with Timeout(30) as timeout:
            for message in pubsub.listen():
                # ...
                yield str(event)
                timeout.cancel()
                timeout.start()
    except Timeout as t:
        if t is not timeout:
            raise
        else:
            yield ":\n\n"  # heartbeat
Note that you need to call redis.pubsub() again, because the Redis connection is lost after the exception; otherwise you will get the error NoneType object has no attribute readline.
Related
I am trying to detect a failed connection when using the Twisted endpoint connect() function. What is odd is that the following works under Windows and gives the expected result, but on a Linux/Mac OS system I am never seeing the print statement from errBack. Is my code incorrect or does Windows Twisted work differently from the rest?
class Gateway():
    def __init__(self):
        from twisted.internet.endpoints import TCP4ClientEndpoint
        endpoint = TCP4ClientEndpoint(reactor, 'localhost', 8000)
        authInterfaceFactory = AuthInterfaceFactory(self.__authMsgProcessor)
        d = endpoint.connect(authInterfaceFactory)
        d.addErrback(self.ConnFailed)
        print("WAITING...")

    def ConnFailed(self, msg):
        print("[DEBUG] Errback : {0}".format(msg))
Windows Result
WAITING...
[DEBUG] Errback : [Failure instance: Traceback (failure with no frames): : Connection was refused by other side: 10061: No connection could be made because the target machine actively refused it..]
I created a client that uses endpoint connect and it returned immediately, although when I use it in the same setup as my code it doesn't:
self.__networkThread = threading.Thread(target=reactor.run,
                                        kwargs={"installSignalHandlers": False})
self.__networkThread.start()
from twisted.internet.endpoints import TCP4ClientEndpoint
endpoint = TCP4ClientEndpoint(reactor, 'localhost', 8000)
d = endpoint.connect(authInterfaceFactory)
d.addErrback(self.ConnFailed)
d.addCallback(self.ConnOK)
Is the logic incorrect when running a reactor in a thread (I have to, as I want it started at the beginning)?
You can't run the reactor in one thread and use Twisted APIs in another. Apart from a couple of APIs dedicated specifically to interacting with threads, you must use all Twisted APIs from a single thread.
"I want it started at the beginning" doesn't sound like a reason to use threads. Many many Twisted-using programs start the reactor "at the beginning" without threads.
(Also please take this as an excellent example of the need for complete examples.)
So I am using RabbitMQ + Celery to create a simple RPC architecture. I have one RabbitMQ message broker and one remote worker which runs the Celery daemon.
There is a third server which exposes a thin RESTful API. When it receives an HTTP request, it sends a task to the remote worker, waits for the response and returns it.
This works great most of the time. However, I have noticed that after a longer period of inactivity (say 5 minutes with no incoming requests), the Celery worker behaves strangely. The first 3 tasks received after such a period return this error:
exchange.declare: connection closed unexpectedly
After three erroneous tasks it works again. If there are no tasks for a longer period of time, the same thing happens. Any ideas?
My init script for the Celery worker:
# description "Celery worker using sync broker"
console log
start on runlevel [2345]
stop on runlevel [!2345]
setuid richard
setgid richard
script
chdir /usr/local/myproject/myproject
exec /usr/local/myproject/venv/bin/celery worker -n celery_worker_deamon.%h -A proj.sync_celery -Q sync_queue -l info --autoscale=10,3 --autoreload --purge
end script
respawn
My celery config:
# Synchronous blocking tasks
BROKER_URL_SYNC = 'amqp://guest:guest@localhost:5672//'
# Asynchronous non-blocking tasks
BROKER_URL_ASYNC = 'amqp://guest:guest@localhost:5672//'
#: Only add pickle to this list if your broker is secured
#: from unwanted access (see userguide/security.html)
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'UTC'
CELERY_ENABLE_UTC = True
CELERY_BACKEND = 'amqp'
# http://docs.celeryproject.org/en/latest/userguide/tasks.html#disable-rate-limits-if-they-re-not-used
CELERY_DISABLE_RATE_LIMITS = True
# http://docs.celeryproject.org/en/latest/userguide/routing.html
CELERY_DEFAULT_QUEUE = 'sync_queue'
CELERY_DEFAULT_EXCHANGE = "tasks"
CELERY_DEFAULT_EXCHANGE_TYPE = "topic"
CELERY_DEFAULT_ROUTING_KEY = "sync_task.default"
CELERY_QUEUES = {
    'sync_queue': {
        'binding_key': 'sync_task.#',
    },
    'async_queue': {
        'binding_key': 'async_task.#',
    },
}
Any ideas?
EDIT:
OK, now it appears to happen randomly. I noticed this in the RabbitMQ logs:
=WARNING REPORT==== 6-Jan-2014::17:31:54 ===
closing AMQP connection <0.295.0> (some_ip_address:36842 -> some_ip_address:5672):
connection_closed_abruptly
Is your RabbitMQ server or your Celery worker behind a load balancer by any chance? If so, the load balancer is closing the TCP connection after some period of inactivity, in which case you will have to enable heartbeats from the client (worker) side. If you do, I would not recommend using the pure-Python amqp library for this; replace it with librabbitmq instead.
The connection_closed_abruptly warning is logged when clients disconnect without the proper AMQP shutdown protocol:
channel.close(...)
Request a channel close.
This method indicates that the sender wants to close the channel.
This may be due to internal conditions (e.g. a forced shut-down) or due to
an error handling a specific method, i.e. an exception.
When a close is due to an exception, the sender provides the class and method id of
the method which caused the exception.
After sending this method, any received methods except Close and Close-OK MUST be discarded. The response to receiving a Close after sending Close must be to send Close-Ok.
channel.close-ok():
Confirm a channel close.
This method confirms a Channel.Close method and tells the recipient
that it is safe to release resources for the channel.
A peer that detects a socket closure without having received a
Channel.Close-Ok handshake method SHOULD log the error.
Here is an issue about that.
Can you set custom values for BROKER_HEARTBEAT and BROKER_HEARTBEAT_CHECKRATE in your configuration and check again? For example:
BROKER_HEARTBEAT = 10
BROKER_HEARTBEAT_CHECKRATE = 2.0
I have a Python script that will run on a local machine and needs to access a message queue (RabbitMQ) or receive subscribed events over HTTP. I've researched several solutions, but none seem natively designed to let desktop clients access them over HTTP. I'm thinking that using Twisted as a proxy is an option as well. Any guidance or recommendations would be greatly appreciated. Thanks in advance.
I've read this tutorial on the RabbitMQ site, and it names some libraries that can handle receiving messages.
Sender: send.py
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print " [x] Sent 'Hello World!'"
connection.close()
Receiver: receive.py
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

print ' [*] Waiting for messages. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

channel.start_consuming()
Now we can try out our programs in a terminal. First, let's send a message using our send.py program:
$ python send.py
[x] Sent 'Hello World!'
The producer program send.py will stop after every run. Let's receive it:
$ python receive.py
[*] Waiting for messages. To exit press CTRL+C
[x] Received 'Hello World!'
Hurray! We were able to send our first message through RabbitMQ. As you might have noticed, the receive.py program doesn't exit. It will stay ready to receive further messages, and may be interrupted with Ctrl-C.
Try to run send.py again in a new terminal.
We've learned how to send and receive a message from a named queue. It's time to move on to part 2 and build a simple work queue.
I've decided to use wamp http://wamp.ws/. Still experimenting with it, but it's working quite well at the moment.
Choice #1
You may be interested in RabbitHub.
Choice #2
If you want it to be on port 80, can't you do port forwarding using a proxy? It could be challenging, though.
Choice #3
If your script is not tightly coupled to the RMQ message format, you can try Celery (which uses RMQ underneath); then you can use the Celery HTTP gateway or Celery webhooks if you want any other application to be triggered directly.
It might be time-consuming to get it up and running. However, Celery opens up loads of flexibility.
Choice #4
For one of my projects, I developed an intermediate web service (a Flask service) to talk to RMQ.
Not ideal, but it served the purpose at the time.
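A minimal sketch of that kind of bridge, assuming a local RabbitMQ and an 'events' queue (the route, queue name, and routing are illustrative, not the original service):

#!/usr/bin/env python
import pika
from flask import Flask, request

app = Flask(__name__)

@app.route('/publish', methods=['POST'])
def publish():
    # Open a connection per request for simplicity; a pooled or persistent
    # connection would be more efficient in a real service.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='events')
    channel.basic_publish(exchange='',
                          routing_key='events',
                          body=request.data)
    connection.close()
    return 'queued\n'

if __name__ == '__main__':
    app.run(port=5000)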
I have a flask application running with gevent-socketio that I create this way:
server = SocketIOServer(('localhost', 2345), app, resource='socket.io')
gevent.spawn(send_queued_messages_loop, server)
server.serve_forever()
I launch send_queued_messages_loop in a gevent thread that keeps polling a gevent.Queue where my program stores data to send to the socket.io-connected clients.
I tried different approaches to stop the server (such as using sys.exit), either from the socket.io handler (when the client sends a socket.io message) or from a normal route (when the client makes a request to /shutdown), but in every case sys.exit seems to fail because of the presence of greenlets.
I tried calling gevent.shutdown() first, but this does not seem to change anything.
What would be the proper way to shutdown the server?
Instead of using serve_forever(), create a gevent.event.Event and wait for it. To actually initiate shutdown, trigger the event using its set() method:
from gevent.event import Event

stopper = Event()

server = SocketIOServer(('localhost', 2345), app, resource='socket.io')
server.start()
gevent.spawn(send_queued_messages_loop)

try:
    stopper.wait()
except KeyboardInterrupt:
    print
No matter where you want to terminate your process from, all you need to do is call stopper.set().
The try..except is not really necessary, but I prefer not getting a stack trace on a clean CTRL-C exit.
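For example, a minimal sketch of triggering it from the /shutdown route mentioned in the question (the route body is illustrative):

@app.route('/shutdown')
def shutdown():
    # Setting the event wakes up stopper.wait() in the main greenlet,
    # which then falls through and lets the process exit cleanly.
    stopper.set()
    return 'shutting down\n'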
I'm using Python 2.6 with HTTPServer and the ThreadingMixIn, which will handle each request in a separate thread. I'm also using HTTP/1.1 persistent connections ('Connection: keep-alive'), so neither the server nor the client will close a connection after a request.
Here's roughly what the request handler looks like:
request, client_address = sock.accept()
rfile = request.makefile('rb', rbufsize)
wfile = request.makefile('wb', wbufsize)

global server_stopping
while not server_stopping:
    request_line = rfile.readline()  # 'GET / HTTP/1.1'
    # etc - parse the full request, write to wfile with server response, etc

wfile.close()
rfile.close()
request.close()
The problem is that if I stop the server, there will still be a few threads waiting on rfile.readline().
I would put a select([rfile, closefile], [], []) above the readline() and write to closefile when I want to shut down the server, but I don't think it would work on Windows because select only works with sockets there.
My other idea is to keep track of all the running requests and call rfile.close() on each, but I get broken pipe errors.
Ideas?
You're almost there: the correct approach is to call rfile.close(), catch the broken pipe errors, and exit your loop when that happens.
If you set daemon_threads to true in your HTTPServer subclass, the activity of the threads will not prevent the server from exiting.
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True
You could work around the Windows problem by making closefile a socket, too -- after all, since it's presumably something that's opened by your main thread, it's up to you to decide whether to open it as a socket or a file;-).
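A minimal sketch of that idea, assuming Python 2.6 where socket.socketpair() is unavailable on Windows, so the pair is built from a loopback listener (all names here are illustrative):

import socket
import select

def make_close_socketpair():
    # Emulate socket.socketpair() so it also works on Windows: connect a
    # client socket to a one-shot listener on the loopback interface.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(listener.getsockname())
    server, _ = listener.accept()
    listener.close()
    return server, client

close_recv, close_send = make_close_socketpair()

# In each handler thread, wait on the connection socket itself (not rfile,
# since on Windows select only accepts sockets) plus the shutdown socket.
readable, _, _ = select.select([request, close_recv], [], [])
if close_recv in readable:
    pass  # server is stopping: close wfile/rfile/request and leave the loop
else:
    request_line = rfile.readline()

# From the main thread, wake every blocked handler:
# close_send.sendall('x')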