Fully stopping a threaded tornado WebSocket server - python

For testing a WebSocket client, I am writing a small tornado WebSocket server, which resides in a designated thread and can be started and stopped
during test runtime. Here is what I have come up with so far:
import threading

import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket


class SocketHandler(tornado.websocket.WebSocketHandler):
    def __init__(self, application, request, **kwargs):
        super().__init__(application, request, **kwargs)
        self.authenticated = False

    def check_origin(self, origin):
        return True

    def open(self):
        pass

    def on_message(self, message):
        pass

    def on_close(self):
        pass


application = tornado.web.Application([
    (r"/ws", SocketHandler),
])
class WebSocketServer(threading.Thread):
    def __init__(self, port):
        threading.Thread.__init__(self, name='WebServer')
        self.port = port
        self.ioloop = None

    def run(self):
        self.ioloop = tornado.ioloop.IOLoop()
        http_server_api = tornado.httpserver.HTTPServer(application)
        http_server_api.listen(self.port)
        self.ioloop.start()
        http_server_api.stop()
        self.ioloop.clear_instance()

    def stop(self):
        self.ioloop.add_callback(self.ioloop.stop)
Starting the server works well:
server = WebSocketServer(8888)
server.start()
I can connect to the server using any WebSocket client. Unfortunately, when I stop the server:
server.stop()
the thread exits and the server's listening port is closed, but all established WebSocket connections remain open.
How can I close all established connections as well? Thanks for the help!

In Tornado 5.1 there's no easy way to do this for websocket connections (for regular HTTP there's HTTPServer.close_all_connections). You'll just have to keep track of all your connections and explicitly close them. Tornado's own test suite jumps through a lot of ugly hoops to make this work without spamming the logs with warnings about unclean shutdown.
In Tornado 6.0 I want to fix this so that HTTPServer.close_all_connections is aware of websocket connections and closes them too.
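A minimal sketch of that bookkeeping, reusing the classes from the question (the connections set and the shutdown helper are illustrative additions, not Tornado API; WebSocketHandler.close() is):
class SocketHandler(tornado.websocket.WebSocketHandler):
    connections = set()  # every handler instance with a currently open connection

    def open(self):
        SocketHandler.connections.add(self)

    def on_close(self):
        SocketHandler.connections.discard(self)


class WebSocketServer(threading.Thread):
    # __init__ and run() as in the question

    def stop(self):
        def shutdown():
            # close every tracked websocket, then stop the loop;
            # close() only schedules the close frame, so a fully graceful
            # shutdown may want a short delay before stopping the loop
            for conn in list(SocketHandler.connections):
                conn.close()
            self.ioloop.stop()
        self.ioloop.add_callback(shutdown)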

Related

Python-twisted: Trying to make UDP and websockets work together?

I have a websocket server written using Twisted and Autobahn. It is an echo server, and I want to add the ability to forward messages received from a multicast UDP port to the clients of the websocket server.
I tried what I did for the exact same functionality in a TCP server here, but that doesn't seem to work.
class SomeServerProtocol(WebSocketServerProtocol):
    def onOpen(self):
        self.factory.register(self)
        # Adding this line in TCP protocol's on connection method worked.
        self.port = reactor.listenMulticast(6027, Listener(self), listenMultiple=True)

    def connectionLost(self, reason):
        self.factory.unregister(self)

    def onMessage(self, payload, isBinary):
        self.sendMessage(payload)

class PriceListener(DatagramProtocol):
    def __init__(self, stream):
        self.stream = stream

    def startProtocol(self):
        self.transport.setTTL(5)
        self.transport.joinGroup("0.0.0.0")

    def datagramReceived(self, datagram, address):
        # Do some processing
        # Send the data
        pass

class SomeServerFactory(WebSocketServerFactory):
    def __init__(self, *args, **kwargs):
        super(SomeServerFactory, self).__init__(*args, **kwargs)
        self.clients = {}

    def register(self, client):
        self.clients[client.peer] = {"object": client, "id": k}

    def unregister(self, client):
        self.clients.pop(client.peer)

if __name__ == "__main__":
    log.startLogging(sys.stdout)
    # static file server serving index.html as root
    root = File(".")
    factory = SomeServerFactory(u"ws://127.0.0.1:8080")
    factory.protocol = SomeServerProtocol
    resource = WebSocketResource(factory)
    # websockets resource on "/ws" path
    root.putChild(u"ws", resource)
    site = Site(root)
    reactor.listenTCP(8080, site)
    reactor.run()
I have marked the line in SomeServerProtocol that I added to make it listen for UDP. When I remove this line, everything works fine. I am receiving data on the UDP socket, and I want to push whatever arrives there to all clients connected to the websocket server.
I have already checked that the server works and clients are able to connect.
How do I do this? It would also be great if someone could clarify why the TCP solution doesn't work here.
PS
I am getting the following error on the client side.
WebSocket connection to 'ws://localhost:8080/ws' failed: One or more reserved bits are on: reserved1 = 1, reserved2 = 0, reserved3 = 1
I found some discussion of this issue here. Is receiving multicast data inside the websocket protocol causing this?

Twisted close web server connection

I'm writing a simple web server application using Twisted. The application receives a string and returns the reverse of it.
It all works fine. Now I need to close the socket connection after 5 minutes of inactivity.
Here is my server code:
from twisted.internet import reactor, protocol

class Echo(protocol.Protocol):
    """This is just about the simplest possible protocol"""
    def dataReceived(self, data):
        "As soon as any data is received, write it back."
        self.transport.write(data[::-1])

def main():
    """This runs the protocol on port 8000"""
    factory = protocol.ServerFactory()
    factory.protocol = Echo
    reactor.listenTCP(8000, factory)
    reactor.run()

# this only runs if the module was *not* imported
if __name__ == '__main__':
    main()
Add these methods to your class:
def connectionMade(self):
    def terminate():
        self.terminateLater = None
        self.transport.abortConnection()
    self.terminateLater = reactor.callLater(60 * 5, terminate)

def connectionLost(self, reason):
    delayedCall = self.terminateLater
    self.terminateLater = None
    if delayedCall is not None:
        delayedCall.cancel()
This makes it so that when a connection is established, your protocol will schedule a timed call in 5 minutes to close the connection. If the connection is closed some other way first, it will cancel the timeout.
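Note that as written the timer fires five minutes after the connection is made, not five minutes after the last activity. If you want a true inactivity timeout, a small variation (a sketch, not part of the original answer) is to reset the delayed call whenever data arrives; Twisted also ships twisted.protocols.policies.TimeoutMixin for exactly this pattern.
from twisted.internet import reactor, protocol

class Echo(protocol.Protocol):
    TIMEOUT = 60 * 5  # five minutes of inactivity

    def connectionMade(self):
        # schedule the first deadline when the connection is established
        self.terminateLater = reactor.callLater(self.TIMEOUT, self.transport.abortConnection)

    def dataReceived(self, data):
        # any activity pushes the deadline back by another TIMEOUT seconds
        self.terminateLater.reset(self.TIMEOUT)
        self.transport.write(data[::-1])

    def connectionLost(self, reason):
        if self.terminateLater.active():
            self.terminateLater.cancel()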

Communicate between asyncio protocol/servers

I'm trying to write a Server-Sent Events (SSE) server which I can connect to with telnet and have the telnet content pushed to a browser. The idea behind using Python and asyncio is to use as little CPU as possible, as this will be running on a Raspberry Pi.
So far I have the following, which uses the asyncio-based library found here: https://pypi.python.org/pypi/asyncio-sse/0.1
And I have also copied a telnet server which uses asyncio as well.
Both work separately, but I have no idea how to tie both together. As I understand it, I need to call send() in the SSEHandler class from inside Telnet.data_received, but I don't know how to access it. Both of these 'servers' need to be running in a loop to accept new connections, or push data.
Can anyone help, or point me in another direction?
import asyncio
import sse

# Get an instance of the asyncio event loop
loop = asyncio.get_event_loop()

# Setup SSE address and port
sse_host, sse_port = '192.168.2.25', 8888

class Telnet(asyncio.Protocol):
    def connection_made(self, transport):
        print("Connection received!")
        self.transport = transport

    def data_received(self, data):
        print(data)
        self.transport.write(b'echo:')
        self.transport.write(data)
        # This is where I want to send data via SSE
        # SSEHandler.send(data)
        # Things I've tried :(
        #loop.call_soon_threadsafe(SSEHandler.handle_request());
        #loop.call_soon_threadsafe(sse_server.send("PAH!"));

    def connection_lost(self, esc):
        print("Connection lost!")
        telnet_server.close()

class SSEHandler(sse.Handler):
    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')

# SSE server
sse_server = sse.serve(SSEHandler, sse_host, sse_port)

# Telnet server
telnet_server = loop.run_until_complete(loop.create_server(Telnet, '192.168.2.25', 7777))
#telnet_server.something = sse_server;

loop.run_until_complete(sse_server)
loop.run_until_complete(telnet_server.wait_closed())
Server-sent events are a kind of HTTP protocol, and you may have any number of concurrent HTTP requests in flight at a given moment: zero if nobody is connected, or dozens. This nuance is all wrapped up in the two constructs sse.serve and sse.Handler; the former represents a single listening port, which dispatches each separate client request to the latter.
Additionally, sse.Handler.handle_request() is called once for each client, and the client is disconnected once that coroutine terminates. In your code, that coroutine terminates immediately, so the client sees a single "Working" event. So, we need to wait, more or less forever. We can do that by yielding from an asyncio.Future().
The second issue is that we need to get hold of all of the separate SSEHandler() instances and call send() on each of them. We can have each one register itself in its handle_request() method, by adding itself to a dict that maps each handler instance to the future it is waiting on.
class SSEHandler(sse.Handler):
    _instances = {}

    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')
        my_future = asyncio.Future()
        SSEHandler._instances[self] = my_future
        yield from my_future
Now, to send an event to every listener, we just visit all of the SSEHandler instances registered in the dict we created and call send() on each one.
class SSEHandler(sse.Handler):
    # ...
    @classmethod
    def broadcast(cls, message):
        for instance, future in cls._instances.items():
            instance.send(message)

class Telnet(asyncio.Protocol):
    # ...
    def data_received(self, data):
        # ...
        SSEHandler.broadcast(data.decode('ascii'))
Lastly, your code exits when the telnet connection closes. That's fine, but we should clean up at that time, too. Fortunately, that's just a matter of setting a result on each handler's future:
class SSEHandler(sse.Handler):
    # ...
    @classmethod
    def abort(cls):
        for instance, future in cls._instances.items():
            future.set_result(None)
        cls._instances = {}

class Telnet(asyncio.Protocol):
    # ...
    def connection_lost(self, esc):
        print("Connection lost!")
        SSEHandler.abort()
        telnet_server.close()
Here's a full, working example in case my illustration is not obvious.
import asyncio
import sse

loop = asyncio.get_event_loop()
sse_host, sse_port = '0.0.0.0', 8888

class Telnet(asyncio.Protocol):
    def connection_made(self, transport):
        print("Connection received!")
        self.transport = transport

    def data_received(self, data):
        SSEHandler.broadcast(data.decode('ascii'))

    def connection_lost(self, esc):
        print("Connection lost!")
        SSEHandler.abort()
        telnet_server.close()

class SSEHandler(sse.Handler):
    _instances = {}

    @classmethod
    def broadcast(cls, message):
        for instance, future in cls._instances.items():
            instance.send(message)

    @classmethod
    def abort(cls):
        for instance, future in cls._instances.items():
            future.set_result(None)
        cls._instances = {}

    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')
        my_future = asyncio.Future()
        SSEHandler._instances[self] = my_future
        yield from my_future

sse_server = sse.serve(SSEHandler, sse_host, sse_port)
telnet_server = loop.run_until_complete(loop.create_server(Telnet, '0.0.0.0', 7777))
loop.run_until_complete(sse_server)
loop.run_until_complete(telnet_server.wait_closed())

How to handle asyncore within a class in python, without blocking anything?

I need to create a class that can receive and store SMTP messages, i.e. emails. To do so, I am using asyncore, following an example posted here. However, asyncore.loop() is blocking, so I cannot do anything else in the code.
So I thought of using threads. Here is example code that shows what I have in mind:
class MyServer(smtpd.SMTPServer):
    # derive from the python server class
    def process_message(..):
        # overwrite a smtpd.SMTPServer method to be able to handle the received messages
        ...
        self.list_emails.append(this_email)

    def get_number_received_emails(self):
        """Return the current number of stored emails"""
        return len(self.list_emails)

    def start_receiving(self):
        """Start the actual server to listen on port 25"""
        self.thread = threading.Thread(target=asyncore.loop)
        self.thread.start()

    def stop(self):
        """Stop listening now to port 25"""
        # close the SMTPserver from itself
        self.close()
        self.thread.join()
I hope you get the picture. The class MyServer should be able to start and stop listening on port 25 in a non-blocking way, and be queryable for messages while listening (or not). The start method starts the asyncore.loop() listener, which appends to an internal list whenever an email is received. Similarly, the stop method should be able to stop this server, as suggested here.
This code does not work as I expect (asyncore seems to run forever, even when I call the stop method above; the error I raise is caught within stop, but not within the target function containing asyncore.loop()). Beyond that, I am not sure my approach to the problem is sensible. Any suggestions for fixing the above code, or for a more solid implementation (without third-party software), are appreciated.
The solution provided might not be the most sophisticated, but it works reasonably well and has been tested.
First of all, the issue with asyncore.loop() is that it blocks until all asyncore channels are closed, as user Wessie pointed out in a comment. Referring to the SMTP example mentioned earlier, it turns out that smtpd.SMTPServer inherits from asyncore.dispatcher (as described in the smtpd documentation), which answers the question of which channel has to be closed.
Therefore, the original question can be answered with the following updated example code:
import asyncore
import smtpd
import threading

class CustomSMTPServer(smtpd.SMTPServer):
    # store the emails in any form inside the custom SMTP server
    emails = []

    # overwrite the method that is used to process the received
    # emails, putting them into self.emails for example
    def process_message(self, peer, mailfrom, rcpttos, data):
        # email processing
        pass

class MyReceiver(object):
    def start(self):
        """Start the listening service"""
        # here I create an instance of the SMTP server, derived from asyncore.dispatcher
        self.smtp = CustomSMTPServer(('0.0.0.0', 25), None)
        # and here I also start the asyncore loop, listening for SMTP connections, within a thread
        # the timeout parameter is important, otherwise the code will block for 30 seconds
        # after the smtp channel has been closed
        self.thread = threading.Thread(target=asyncore.loop, kwargs={'timeout': 1})
        self.thread.start()

    def stop(self):
        """Stop listening now to port 25"""
        # close the SMTPserver to ensure no channels connect to asyncore
        self.smtp.close()
        # now it is safe to wait for the thread to finish, i.e. for asyncore.loop() to exit
        self.thread.join()

    # now it finally is possible to use an instance of this class to check for emails
    # or whatever in a non-blocking way
    def count(self):
        """Return the number of emails received"""
        return len(self.smtp.emails)

    def get(self):
        """Return all emails received so far"""
        return self.smtp.emails

....
So in the end, I have a start and a stop method to start and stop listening on port 25 within a non-blocking environment.
Coming from the other question, asyncore.loop doesn't terminate when there are no more connections.
I think you are slightly overthinking the threading. Using the code from the other question, you can start a new thread that runs asyncore.loop with the following code snippet:
import threading
loop_thread = threading.Thread(target=asyncore.loop, name="Asyncore Loop")
# If you want to make the thread a daemon
# loop_thread.daemon = True
loop_thread.start()
This will run it in a new thread and will keep going till all asyncore channels are closed.
You should consider using Twisted, instead. http://twistedmatrix.com/trac/browser/trunk/doc/mail/examples/emailserver.tac demonstrates how to set up an SMTP server with a customizable on-delivery hook.
Alex's answer is the best, but it was incomplete for my use case. I wanted to test SMTP as part of a unit test, which meant building the fake SMTP server inside my test objects. The server would not terminate the asyncore thread on its own, so I had to set it as a daemon thread to allow the rest of the unit test to complete without blocking on that thread's join. I also added complete logging of all email data so that I could make assertions about anything sent through the SMTP server.
Here is my fake SMTP class:
import asyncore
import smtpd
import threading

class TestingSMTP(smtpd.SMTPServer):
    def __init__(self, *args, **kwargs):
        super(TestingSMTP, self).__init__(*args, **kwargs)
        self.emails = []

    def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
        msg = {'peer': peer,
               'mailfrom': mailfrom,
               'rcpttos': rcpttos,
               'data': data}
        msg.update(kwargs)
        self.emails.append(msg)

class TestingSMTP_Server(object):
    def __init__(self):
        self.smtp = TestingSMTP(('0.0.0.0', 25), None)
        self.thread = threading.Thread()

    def start(self):
        self.thread = threading.Thread(target=asyncore.loop, kwargs={'timeout': 1})
        self.thread.daemon = True
        self.thread.start()

    def stop(self):
        self.smtp.close()
        self.thread.join()

    def count(self):
        return len(self.smtp.emails)

    def get(self):
        return self.smtp.emails
And here is how it is called by the unittest classes:
smtp_server = TestingSMTP_Server()
smtp_server.start()

# send some emails

assertTrue(smtp_server.count() == 1)  # or however many you intended to send
assertEqual(smtp_server.get()[0]['mailfrom'], 'first@fromaddress.com')

# stop it when done testing
smtp_server.stop()
In case anyone else needs this fleshed out a bit more, here's what I ended up using. This uses smtpd for the email server and smtplib for the email client, with Flask as an HTTP server [gist]:
app.py
from flask import Flask, render_template
from smtp_client import send_email
from smtp_server import SMTPServer

app = Flask(__name__)

@app.route('/send_email')
def email():
    server = SMTPServer()
    server.start()
    try:
        send_email()
    finally:
        server.stop()
    return 'OK'

@app.route('/')
def index():
    return 'Woohoo'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
smtp_server.py
# smtp_server.py
import smtpd
import asyncore
import threading

class CustomSMTPServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data):
        print('Receiving message from:', peer)
        print('Message addressed from:', mailfrom)
        print('Message addressed to:', rcpttos)
        print('Message length:', len(data))
        return

class SMTPServer():
    def __init__(self):
        self.port = 1025

    def start(self):
        '''Start listening on self.port'''
        # create an instance of the SMTP server, derived from asyncore.dispatcher
        self.smtp = CustomSMTPServer(('0.0.0.0', self.port), None)
        # start the asyncore loop, listening for SMTP connections, within a thread
        # the timeout parameter is important, otherwise the code will block 30 seconds
        # after the smtp channel has been closed
        kwargs = {'timeout': 1, 'use_poll': True}
        self.thread = threading.Thread(target=asyncore.loop, kwargs=kwargs)
        self.thread.start()

    def stop(self):
        '''Stop listening to self.port'''
        # close the SMTPserver to ensure no channels connect to asyncore
        self.smtp.close()
        # now it is safe to wait for asyncore.loop() to exit
        self.thread.join()

    # check for emails in a non-blocking way
    def get(self):
        '''Return all emails received so far'''
        return self.smtp.emails

if __name__ == '__main__':
    server = CustomSMTPServer(('0.0.0.0', 1025), None)
    asyncore.loop()
smtp_client.py
import smtplib
import email.utils
from email.mime.text import MIMEText

def send_email():
    sender = 'author@example.com'
    recipient = '6142546977@tmomail.net'
    msg = MIMEText('This is the body of the message.')
    msg['To'] = email.utils.formataddr(('Recipient', recipient))
    msg['From'] = email.utils.formataddr(('Author', 'author@example.com'))
    msg['Subject'] = 'Simple test message'
    client = smtplib.SMTP('127.0.0.1', 1025)
    client.set_debuglevel(True)  # show communication with the server
    try:
        client.sendmail('author@example.com', [recipient], msg.as_string())
    finally:
        client.quit()
Then start the server with python app.py and, in another terminal, simulate a request to /send_email with curl localhost:5000/send_email. Note that to actually send the email (or SMS) you'll need to jump through other hoops detailed here: https://blog.codinghorror.com/so-youd-like-to-send-some-email-through-code/.

How can I ensure all connections in a load balancer are closed when something fails or hangs?

I'm trying to write a simple load balancer. It works OK until one of the servers (BalanceServer) doesn't close its connection; then...
The client (ReverseProxy) disconnects, but the connection with the BalanceServer stays open.
I tried adding a callback (#3) in ReverseProxy.connectionLost to close the connection with one of the servers, just as I close the connection when a server disconnects (clientLoseConnection), but at that point the ServerWriter is None, so I cannot terminate it at #1 and #2.
How can I ensure that all connections are closed when one side disconnects? Some kind of timeout would also be nice for when both the client and one of the servers hang, but how can I add one so that it works on both connections?
from twisted.internet.protocol import Protocol, Factory, ClientCreator
from twisted.internet import reactor, defer
from collections import namedtuple

BalanceServer = namedtuple('BalanceServer', 'host port')

SERVER_LIST = [BalanceServer('127.0.0.1', 8000), BalanceServer('127.0.0.1', 8001)]

def getServer(servers):
    while True:
        for server in servers:
            yield server

# this writes to one of the balance servers and responds to the client with callback 'clientWrite'
class ServerWriter(Protocol):
    def sendData(self, data):
        self.transport.write(data)

    def dataReceived(self, data):
        self.clientWrite(data)

    def connectionLost(self, reason):
        self.clientLoseConnection()

# callback for reading data from the client to send it to the server and get the response back to the client
def transferData(serverWriter, clientWrite, clientLoseConnection, data):
    if serverWriter:
        serverWriter.clientWrite = clientWrite
        serverWriter.clientLoseConnection = clientLoseConnection
        serverWriter.sendData(data)

def closeConnection(serverWriter):
    if serverWriter:  #1 this is null
        #2 So connection is not closed and hangs there, till BalanceServer closes it
        serverWriter.transport.loseConnection()

# accepts clients
class ReverseProxy(Protocol):
    def connectionMade(self):
        server = self.factory.getServer()
        self.serverWriter = ClientCreator(reactor, ServerWriter)
        self.client = self.serverWriter.connectTCP(server.host, server.port)

    def dataReceived(self, data):
        self.client.addCallback(transferData, self.transport.write,
                                self.transport.loseConnection, data)

    def connectionLost(self, reason):
        self.client.addCallback(closeConnection)  #3 adding close doesn't work

class ReverseProxyFactory(Factory):
    protocol = ReverseProxy

    def __init__(self, serverGenerator):
        self.getServer = serverGenerator

plainFactory = ReverseProxyFactory(getServer(SERVER_LIST).next)
reactor.listenTCP(7777, plainFactory)
reactor.run()
You may want to look at twisted.protocols.portforward for an example of hooking up two connections and then disconnecting them. Or just use txloadbalancer and don't even write your own code.
However, loseConnection will never forcibly terminate the connection if there is never any traffic going over it. So if you don't have an application-level ping or any data going over your connections, they may still never shut down. This is a long-standing bug in Twisted. Actually, the longest-standing bug. Perhaps you'd like to help work on the fix :).
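If you need connections torn down even when no traffic is flowing, one workaround (a sketch, assuming TCP transports, which support abortConnection()) is an inactivity timeout via twisted.protocols.policies.TimeoutMixin, for example on the ServerWriter side of the proxy:
from twisted.internet.protocol import Protocol
from twisted.protocols.policies import TimeoutMixin

class TimedServerWriter(Protocol, TimeoutMixin):
    """ServerWriter variant that aborts the upstream connection after inactivity."""
    IDLE_TIMEOUT = 30  # seconds; pick whatever fits your deployment

    def connectionMade(self):
        self.setTimeout(self.IDLE_TIMEOUT)

    def dataReceived(self, data):
        self.resetTimeout()  # any traffic pushes the deadline back
        self.clientWrite(data)

    def timeoutConnection(self):
        # abortConnection() tears the TCP connection down immediately,
        # unlike loseConnection(), which waits for pending writes
        self.transport.abortConnection()

    def connectionLost(self, reason):
        self.setTimeout(None)  # cancel the pending timeout
        self.clientLoseConnection()
The same mixin can be applied to ReverseProxy so that a hung client is also torn down.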
