I'm trying to write a Server-Sent Events (SSE) server which I can connect to with telnet and have the telnet content pushed to a browser. The idea behind using Python and asyncio is to use as little CPU as possible, as this will be running on a Raspberry Pi.
So far I have the following, which uses an asyncio-based library found here: https://pypi.python.org/pypi/asyncio-sse/0.1.
I have also copied a telnet server that uses asyncio.
Both work separately, but I have no idea how to tie them together. As I understand it, I need to call send() in the SSEHandler class from inside Telnet.data_received, but I don't know how to access it. Both of these 'servers' need to be running in a loop to accept new connections or push data.
Can anyone help, or point me in another direction?
import asyncio
import sse

# Get an instance of the asyncio event loop
loop = asyncio.get_event_loop()

# Setup SSE address and port
sse_host, sse_port = '192.168.2.25', 8888

class Telnet(asyncio.Protocol):
    def connection_made(self, transport):
        print("Connection received!")
        self.transport = transport

    def data_received(self, data):
        print(data)
        self.transport.write(b'echo:')
        self.transport.write(data)
        # This is where I want to send data via SSE
        # SSEHandler.send(data)
        # Things I've tried :(
        #loop.call_soon_threadsafe(SSEHandler.handle_request())
        #loop.call_soon_threadsafe(sse_server.send("PAH!"))

    def connection_lost(self, esc):
        print("Connection lost!")
        telnet_server.close()

class SSEHandler(sse.Handler):
    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')

# SSE server
sse_server = sse.serve(SSEHandler, sse_host, sse_port)

# Telnet server
telnet_server = loop.run_until_complete(loop.create_server(Telnet, '192.168.2.25', 7777))
#telnet_server.something = sse_server

loop.run_until_complete(sse_server)
loop.run_until_complete(telnet_server.wait_closed())
Server-Sent Events are a sort of HTTP protocol, and you may have any number of concurrent HTTP requests in flight at any given moment; you may have zero if nobody is connected, or dozens. This nuance is all wrapped up in the two constructs sse.serve and sse.Handler: the former represents a single listening port, which dispatches each separate client request to the latter.
Additionally, sse.Handler.handle_request() is called once for each client, and the client is disconnected once that coroutine terminates. In your code, the coroutine terminates immediately, so the client sees a single "Working" event. We therefore need to wait, more or less forever. We can do that by yielding from an asyncio.Future().
The second issue is that we somehow need to get hold of all of the separate SSEHandler() instances and call send() on each of them. We can have each one self-register in its handle_request() method, by adding itself to a dict that maps each handler instance to the future it is waiting on.
class SSEHandler(sse.Handler):
    _instances = {}

    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')
        my_future = asyncio.Future()
        SSEHandler._instances[self] = my_future
        yield from my_future
Now, to send an event to every listener we just visit all of the SSEHandler instances registered in the dict we created and call send() on each one.
class SSEHandler(sse.Handler):
    #...

    @classmethod
    def broadcast(cls, message):
        for instance, future in cls._instances.items():
            instance.send(message)

class Telnet(asyncio.Protocol):
    #...

    def data_received(self, data):
        #...
        SSEHandler.broadcast(data.decode('ascii'))
Lastly, your code exits when the telnet connection closes. That's fine, but we should clean up at that point, too. Fortunately, that's just a matter of setting a result on the futures of all the handlers.
class SSEHandler(sse.Handler):
    #...

    @classmethod
    def abort(cls):
        for instance, future in cls._instances.items():
            future.set_result(None)
        cls._instances = {}

class Telnet(asyncio.Protocol):
    #...

    def connection_lost(self, esc):
        print("Connection lost!")
        SSEHandler.abort()
        telnet_server.close()
Here's a full, working dump in case my illustration is not obvious.
import asyncio
import sse

loop = asyncio.get_event_loop()

sse_host, sse_port = '0.0.0.0', 8888

class Telnet(asyncio.Protocol):
    def connection_made(self, transport):
        print("Connection received!")
        self.transport = transport

    def data_received(self, data):
        SSEHandler.broadcast(data.decode('ascii'))

    def connection_lost(self, esc):
        print("Connection lost!")
        SSEHandler.abort()
        telnet_server.close()

class SSEHandler(sse.Handler):
    _instances = {}

    @classmethod
    def broadcast(cls, message):
        for instance, future in cls._instances.items():
            instance.send(message)

    @classmethod
    def abort(cls):
        for instance, future in cls._instances.items():
            future.set_result(None)
        cls._instances = {}

    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')
        my_future = asyncio.Future()
        SSEHandler._instances[self] = my_future
        yield from my_future

sse_server = sse.serve(SSEHandler, sse_host, sse_port)
telnet_server = loop.run_until_complete(loop.create_server(Telnet, '0.0.0.0', 7777))

loop.run_until_complete(sse_server)
loop.run_until_complete(telnet_server.wait_closed())
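To try it out, connect a telnet session to port 7777 and subscribe to the SSE endpoint from another terminal (substitute the machine's own address if you are not testing locally); anything typed into the telnet session should then show up as events:
telnet 127.0.0.1 7777
curl -N http://127.0.0.1:8888/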
I'm trying to simulate a situation where data is received by a server periodically. In my setup, I run one process that sets up the server and another process that sets up a bunch of clients (it suffices to think of a single client). I have put the code together from bits and pieces of existing examples. The server and clients communicate by sending messages using transport.write. First, the server tells the clients to start (this works fine, as far as I know). The clients report back to the server as they make progress. What has me confused is that I only get these intermittent progress messages at the very end, when the client is done. This could be a problem with buffer flushing, and I have tried a few fixes for that without success. Also, each message is pretty large, and I tried sending the same message multiple times so the buffer would get cleared.
I suspect what I am seeing is a problem with returning control to the transport, but I can't figure out how to get around it.
Any help with this, or with any other issues that jump out at you, is much appreciated.
Server:
from twisted.internet import reactor, protocol
import time
import serverSideAnalysis
import pdb
#import bson, json, msgpack
import _pickle as pickle  # I expect the users to authenticate and not
                          # do anything malicious.

PORT = 9000
NUM = 1
local_scratch = "/local/scratch"

class Hub(protocol.Protocol):
    def __init__(self, factory, clients, nclients):
        self.clients = clients
        self.nclients = nclients
        self.factory = factory
        self.dispatcher = serverSideAnalysis.ServerTalker(NUM, self,
                                                          local_scratch)

    def connectionMade(self):
        print("connected to user", (self))
        if len(self.clients) < self.nclients:
            self.factory.clients.append(self)
        else:
            self.factory.clients[self.nclients] = self
        if len(self.clients) == NUM:
            val = input("Looks like everyone is here, shall we start? (Y/N)")
            while (val.upper() != "Y"):
                time.sleep(20)
                val = input("Looks like everyone is here, shall we start??? (Y/N)")
            message = pickle.dumps({"TASK": "INIT", "SUBTASK": "STORE"})
            self.message(message)  # This reaches the client as I had expected

    def message(self, command):
        for c in self.factory.clients:
            c.transport.write(command)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)
        self.nclients -= 1

    def dataReceived(self, data):
        if len(self.clients) == NUM:
            self.dispatcher.dispatch(data)

class PauseTransport(protocol.Protocol):
    def makeConnection(self, transport):
        transport.pauseProducing()

class HubFactory(protocol.Factory):
    def __init__(self, num):
        self.clients = set([])
        self.nclients = 0
        self.totConnections = num

    def buildProtocol(self, addr):
        print(self.nclients)
        if self.nclients < self.totConnections:
            self.nclients += 1
            return Hub(self, self.clients, self.nclients)
        protocol = PauseTransport()
        protocol.factory = self
        return protocol

factory = HubFactory(NUM)
reactor.listenTCP(PORT, factory)
factory.clients = []
reactor.run()
Client:
from twisted.internet import reactor, protocol
import time
import clientSideAnalysis
import sys

HOST = 'localhost'
PORT = 9000
local_scratch = "/local/scratch"

class MyClient(protocol.Protocol):
    def connectionMade(self):
        print("connected!")
        self.factory.clients.append(self)
        print("clients are ", self.factory.clients)
        self.cdispatcher = clientSideAnalysis.ServerTalker(analysis_file_name, local_scratch, self)

    def clientConnectionLost(self, reason):
        #TODO send warning
        self.factory.clients.remove(self)

    def dataReceived(self, data):  # This is the problematic part, I think
        self.cdispatcher.dispatch(data)
        print("1 sent")
        time.sleep(10)
        self.cdispatcher.dispatch(data)
        print("2 sent")
        time.sleep(10)
        self.cdispatcher.dispatch(data)
        time.sleep(10)

    def message(self, data):
        self.transport.write(data)

class MyClientFactory(protocol.ClientFactory):
    protocol = MyClient

if __name__ == "__main__":
    analysis_file_name = sys.argv[1]
    factory = MyClientFactory()
    reactor.connectTCP(HOST, PORT, factory)
    factory.clients = []
    reactor.run()
The last bit of relevant information is about what the dispatchers do.
In both cases, they load the message that has arrived (a dictionary) and do a few computations based on its content. Every once in a while, they use the message method to communicate their current values.
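For illustration, a dispatcher along the lines described might look roughly like this; ServerTalker and dispatch are the names used above, but the body here is invented since the real modules aren't shown:
import _pickle as pickle

class ServerTalker:
    def __init__(self, num, connection, local_scratch):
        self.connection = connection       # the protocol instance (Hub or MyClient)

    def dispatch(self, data):
        task = pickle.loads(data)          # messages are pickled dictionaries
        # ... do some computation based on the task contents ...
        progress = pickle.dumps({"TASK": task.get("TASK"), "STATUS": "IN PROGRESS"})
        self.connection.message(progress)  # report back through the transport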
Finally, I'm using Python 3.6 and Twisted 18.9.0.
The way you return control to the reactor from a Protocol.dataReceived method is simply to return from that method. For example:
def dataReceived(self, data):
    self.cdispatcher.dispatch(data)
    print("1 sent")
If you want more work to happen after this, you have some options. If you want the work to happen after some amount of time has passed, use reactor.callLater. If you want the work to happen after it is dispatched to another thread, use twisted.internet.threads.deferToThread. If you want the work to happen in response to some other event (for example, data being received), put it in the callback that handles that event (for example, dataReceived).
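As a rough sketch, adapting the dataReceived above (some_blocking_work is a hypothetical stand-in for whatever the dispatcher needs to do later):
from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def dataReceived(self, data):
    self.cdispatcher.dispatch(data)
    print("1 sent")
    # run another dispatch 10 seconds later without blocking the reactor
    reactor.callLater(10, self.cdispatcher.dispatch, data)
    # or push a blocking piece of work onto a thread pool and get a Deferred back
    d = deferToThread(self.some_blocking_work, data)
    d.addCallback(lambda result: print("blocking work finished:", result))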
I have a module which makes blocking network requests to some TCP server and receives responses. I must integrate it into an asyncio application. My module looks like this:
import socket

# class providing transport facilities
# can be implemented in any manner
# for example it can manage an asyncio connection
class CustomTransport:
    def __init__(self, host, port):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.host = host
        self.port = port

    def write(self, data):
        left = len(data)
        while left:
            written = self.sock.send(data.encode('utf-8'))
            left = left - written
            data = data[written:]

    def read(self, sz):
        return self.sock.recv(sz)

    def open(self):
        self.sock.connect((self.host, self.port))

    def close(self):
        self.sock.shutdown(2)

# generated. shouldn't be modified
# however any transport can be passed
class HelloNetClient:
    def __init__(self, transport):
        self.transport = transport

    def say_hello_net(self):
        self.transport.write('hello')
        response = self.transport.read(5)
        return response

# can be modified
class HelloService:
    def __init__(self):
        # create transport for connection to echo TCP server
        self.transport = CustomTransport('127.0.0.1', 6789)
        self.hello_client = HelloNetClient(self.transport)

    def say_hello(self):
        print('Saying hello...')
        return self.hello_client.say_hello_net()

    def __enter__(self):
        self.transport.open()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.transport.close()
Usage:
def start_conversation():
    with HelloService() as hs:
        answer = hs.say_hello()
        print(answer.decode('utf-8'))

if __name__ == "__main__":
    start_conversation()
Now I see that the only way to make my module compatible with asyncio is to convert everything to coroutines and replace the regular socket with an asyncio-provided transport. But I don't want to touch the generated code (HelloNetClient). Is that possible?
P.S. I want it to be used like this:
async def start_conversation():
    async with HelloService() as hs:
        answer = await hs.say_hello()
        print(answer.decode('utf-8'))

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(start_conversation())
HelloService will probably need to use run_in_executor (which manages a thread pool) to run HelloNetClient methods in the background. For example:
async def say_hello(self):
    print('Saying hello...')
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, self.hello_client.say_hello_net)
This is not an idiomatic use of asyncio, and you're missing out on some of its features - for example, you won't be able to create thousands of clients that all work in parallel, and you won't get reliable cancellation (ability to cancel() any task you wish). Nonetheless, simple usage will work just fine.
Unfortunately, the ability to provide custom transports is of no help here, because the middle tier, HelloNetClient, expects synchronous behavior. Even if you were to write a custom transport that hooked into asyncio, methods like say_hello_net would still block for as long as it takes for the response to arrive, so HelloService would have to schedule them in a separate thread. For this reason, your best bet is to keep the default blocking transport and bridge the service code to asyncio as shown above.
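To also get the async with usage from your question, __aenter__ and __aexit__ can delegate to the executor in the same way. A minimal sketch, assuming the HelloService shown above (the AsyncHelloService name is made up here):
import asyncio

class AsyncHelloService(HelloService):
    async def say_hello(self):
        print('Saying hello...')
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, self.hello_client.say_hello_net)

    async def __aenter__(self):
        # open the blocking transport in a worker thread
        loop = asyncio.get_event_loop()
        await loop.run_in_executor(None, self.transport.open)
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        loop = asyncio.get_event_loop()
        await loop.run_in_executor(None, self.transport.close)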
I need your help please.
This code (attached below) only works once; a second wget times out.
wget http://localhost:9090
#!/usr/bin/env python
import trollius as asyncio
from trollius import From
import os

class Client(asyncio.Protocol):
    def connection_made(self, transport):
        self.connected = True
        # save the transport
        self.transport = transport

    def data_received(self, data):
        # forward data to the server
        self.server_transport.write(data)

    def connection_lost(self, *args):
        self.connected = False

class Server(asyncio.Protocol):
    clients = {}

    def connection_made(self, transport):
        # save the transport
        self.transport = transport

    @asyncio.coroutine
    def send_data(self, data):
        # get a client by its peername
        peername, port = self.transport.get_extra_info('peername')
        client = self.clients.get(peername)

        # create a client if peername is not known or the client disconnected
        if client is None or not client.connected:
            protocol, client = yield From(loop.create_connection(Client, 'google.com', 80))
            client.server_transport = self.transport
            self.clients[peername] = client

        # forward data to the client
        client.transport.write(data)

    def data_received(self, data):
        # use a task so this is executed async
        asyncio.Task(self.send_data(data))

@asyncio.coroutine
def initialize(loop):
    # use a coroutine to use yield from and get the async result of
    # create_server
    server = yield From(loop.create_server(Server, '127.0.0.1', 9090))

loop = asyncio.get_event_loop()

# main task to initialize everything
asyncio.Task(initialize(loop))

# run
loop.run_forever()
Does anyone know the reason?
Thanks!
You need a real 'loop' in your server when writing socket servers with asyncio. Note that despite the synchronous-looking coding style, infinite loops do not block execution here. You need an infinite while loop within your server. There are many samples; I recommend the websockets library samples.
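As a minimal sketch of the pattern (written with modern asyncio streams rather than the trollius backport used in the question), the handler keeps serving one connection inside a while loop instead of returning after the first read:
import asyncio

async def handle_client(reader, writer):
    # keep serving this client until it disconnects
    while True:
        data = await reader.read(1024)
        if not data:          # the peer closed the connection
            break
        writer.write(data)    # echo the data back
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, '127.0.0.1', 9090)
    async with server:
        await server.serve_forever()

asyncio.run(main())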
Working with Python, Twisted, Redis and txredisapi.
How can I get hold of the SubscriberProtocol so that I can subscribe and unsubscribe to channels after the connection has been made?
I guess I need to get the instance of the SubscriberProtocol, and then I can use its "subscribe" and "unsubscribe" methods, but I don't know how to get it.
Code example:
import txredisapi as redis

class RedisListenerProtocol(redis.SubscriberProtocol):
    def connectionMade(self):
        self.subscribe("channelName")

    def messageReceived(self, pattern, channel, message):
        print "pattern=%s, channel=%s message=%s" % (pattern, channel, message)

    def connectionLost(self, reason):
        print "lost connection:", reason

class RedisListenerFactory(redis.SubscriberFactory):
    maxDelay = 120
    continueTrying = True
    protocol = RedisListenerProtocol
Then from outside of these classes:
# I need to sub/unsub from here! (not from inside the protocol)
protocolInstance = RedisListenerProtocol  # Here is the problem
protocolInstance.subscribe("newChannelName")
protocolInstance.unsubscribe("channelName")
Any suggestion?
Thanks!
The following code solves the problem:
@defer.inlineCallbacks
def subUnsub():
    protocol = yield ClientCreator(reactor, RedisListenerProtocol).connectTCP(HOST, PORT)
    protocol.subscribe("newChannelName")
    protocol.unsubscribe("channelName")
Explanation:
Use ClientCreator to get an instance of SubscriberProtocol inside a function decorated with @defer.inlineCallbacks, and don't forget the yield keyword: it waits for the connection deferred to fire and hands back the connected protocol instance, on which you can then call subscribe and unsubscribe.
In my case I had forgotten the yield keyword, so I was holding the unfired deferred rather than the protocol, and the subscribe and unsubscribe calls didn't work.
connecting = ClientCreator(reactor, RedisListenerProtocol).connectTCP(HOST, PORT)

def connected(listener):
    listener.subscribe("newChannelName")
    listener.unsubscribe("channelName")

connecting.addCallback(connected)
I need to create a class that can receive and store SMTP messages, i.e. e-mails. To do so, I am using asyncore, following an example I found. However, asyncore.loop() is blocking, so I cannot do anything else in the code.
So I thought of using threads. Here is an example-code that shows what I have in mind:
class MyServer(smtpd.SMTPServer):
    # derive from the python server class

    def process_message(..):
        # overwrite a smtpd.SMTPServer method to be able to handle the received messages
        ...
        self.list_emails.append(this_email)

    def get_number_received_emails(self):
        """Return the current number of stored emails"""
        return len(self.list_emails)

    def start_receiving(self):
        """Start the actual server to listen on port 25"""
        self.thread = threading.Thread(target=asyncore.loop)
        self.thread.start()

    def stop(self):
        """Stop listening now to port 25"""
        # close the SMTPserver from itself
        self.close()
        self.thread.join()
I hope you get the picture. The class MyServer should be able to start and stop listening on port 25 in a non-blocking way, and be queryable for messages while listening (or not). The start method starts the asyncore.loop() listener which, when an email is received, appends it to an internal list. Similarly, the stop method should be able to stop this server.
Despite the fact that this code does not work as I expect (asyncore seems to run forever, even when I call the stop method above; the error I raise is caught within stop, but not within the target function containing asyncore.loop()), I am not sure whether my approach to the problem makes sense. Any suggestions for fixing the above code, or for a more solid implementation (without using third-party software), are appreciated.
The solution provided might not be the most sophisticated, but it works reasonably well and has been tested.
First of all, the issue with asyncore.loop() is that it blocks until all asyncore channels are closed, as user Wessie pointed out in a comment. Referring to the SMTP example mentioned earlier, it turns out that smtpd.SMTPServer inherits from asyncore.dispatcher (as described in the smtpd documentation), which answers the question of which channel needs to be closed.
Therefore, the original question can be answered with the following updated example code:
class CustomSMTPServer(smtpd.SMTPServer):
    # store the emails in any form inside the custom SMTP server
    emails = []

    # overwrite the method that is used to process the received
    # emails, putting them into self.emails for example
    def process_message(self, peer, mailfrom, rcpttos, data):
        # email processing
        self.emails.append(data)

class MyReceiver(object):
    def start(self):
        """Start the listening service"""
        # here I create an instance of the SMTP server, derived from asyncore.dispatcher
        self.smtp = CustomSMTPServer(('0.0.0.0', 25), None)
        # and here I also start the asyncore loop, listening for SMTP connections, within a thread
        # the timeout parameter is important, otherwise the code will block 30 seconds after the smtp channel has been closed
        self.thread = threading.Thread(target=asyncore.loop, kwargs={'timeout': 1})
        self.thread.start()

    def stop(self):
        """Stop listening now to port 25"""
        # close the SMTPserver to ensure no channels connect to asyncore
        self.smtp.close()
        # now it is safe to wait for the thread to finish, i.e. for asyncore.loop() to exit
        self.thread.join()

    # now it finally is possible to use an instance of this class to check for emails or whatever in a non-blocking way
    def count(self):
        """Return the number of emails received"""
        return len(self.smtp.emails)

    def get(self):
        """Return all emails received so far"""
        return self.smtp.emails
....
So in the end, I have a start and a stop method to start and stop listening on port 25 in a non-blocking way.
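Usage then looks roughly like this (assuming the classes above and enough privileges to bind port 25):
receiver = MyReceiver()
receiver.start()
# ... mail arrives in the background while the rest of the program keeps running ...
print(receiver.count())      # poll the number of stored emails at any time
for mail in receiver.get():
    print(mail)
receiver.stop()              # closes the channel and joins the asyncore thread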
Coming from the other question, "asyncore.loop doesn't terminate when there are no more connections":
I think you are slightly overthinking the threading. Using the code from the other question, you can start a new thread that runs asyncore.loop with the following code snippet:
import threading
loop_thread = threading.Thread(target=asyncore.loop, name="Asyncore Loop")
# If you want to make the thread a daemon
# loop_thread.daemon = True
loop_thread.start()
This will run it in a new thread and will keep going till all asyncore channels are closed.
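Once every channel is closed (for example by calling close() on the dispatcher, as in the answer above), the loop returns and the thread can be joined:
server.close()       # 'server' is whatever asyncore.dispatcher you created
loop_thread.join()   # returns once asyncore.loop() has exited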
You should consider using Twisted, instead. http://twistedmatrix.com/trac/browser/trunk/doc/mail/examples/emailserver.tac demonstrates how to set up an SMTP server with a customizable on-delivery hook.
Alex's answer is the best, but it was incomplete for my use case. I wanted to test SMTP as part of a unit test, which meant building the fake SMTP server inside my test objects, and the server would not terminate the asyncore thread. So I had to add a line to set it to a daemon thread, allowing the rest of the unit test to complete without blocking while waiting for that asyncore thread to join. I also added complete logging of all email data so that I could assert anything sent through the SMTP server.
Here is my fake SMTP class:
class TestingSMTP(smtpd.SMTPServer):
    def __init__(self, *args, **kwargs):
        super(TestingSMTP, self).__init__(*args, **kwargs)
        self.emails = []

    def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
        msg = {'peer': peer,
               'mailfrom': mailfrom,
               'rcpttos': rcpttos,
               'data': data}
        msg.update(kwargs)
        self.emails.append(msg)

class TestingSMTP_Server(object):
    def __init__(self):
        self.smtp = TestingSMTP(('0.0.0.0', 25), None)
        self.thread = threading.Thread()

    def start(self):
        self.thread = threading.Thread(target=asyncore.loop, kwargs={'timeout': 1})
        self.thread.daemon = True
        self.thread.start()

    def stop(self):
        self.smtp.close()
        self.thread.join()

    def count(self):
        return len(self.smtp.emails)

    def get(self):
        return self.smtp.emails
And here is how it is called by the unittest classes:
smtp_server = TestingSMTP_Server()
smtp_server.start()
# send some emails
assertTrue(smtp_server.count() == 1)  # or however many you intended to send
assertEqual(smtp_server.get()[0]['mailfrom'], 'first@fromaddress.com')
# stop it when done testing
smtp_server.stop()
In case anyone else needs this fleshed out a bit more, here's what I ended up using. It uses smtpd for the email server and smtplib for the email client, with Flask as the HTTP server [gist]:
app.py
from flask import Flask, render_template
from smtp_client import send_email
from smtp_server import SMTPServer

app = Flask(__name__)

@app.route('/send_email')
def email():
    server = SMTPServer()
    server.start()
    try:
        send_email()
    finally:
        server.stop()
    return 'OK'

@app.route('/')
def index():
    return 'Woohoo'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
smtp_server.py
# smtp_server.py
import smtpd
import asyncore
import threading

class CustomSMTPServer(smtpd.SMTPServer):
    # keep the received messages around so that SMTPServer.get() below has something to return
    emails = []

    def process_message(self, peer, mailfrom, rcpttos, data):
        print('Receiving message from:', peer)
        print('Message addressed from:', mailfrom)
        print('Message addressed to:', rcpttos)
        print('Message length:', len(data))
        self.emails.append(data)
        return

class SMTPServer():
    def __init__(self):
        self.port = 1025

    def start(self):
        '''Start listening on self.port'''
        # create an instance of the SMTP server, derived from asyncore.dispatcher
        self.smtp = CustomSMTPServer(('0.0.0.0', self.port), None)
        # start the asyncore loop, listening for SMTP connections, within a thread
        # the timeout parameter is important, otherwise the code will block 30 seconds
        # after the smtp channel has been closed
        kwargs = {'timeout': 1, 'use_poll': True}
        self.thread = threading.Thread(target=asyncore.loop, kwargs=kwargs)
        self.thread.start()

    def stop(self):
        '''Stop listening to self.port'''
        # close the SMTPserver to ensure no channels connect to asyncore
        self.smtp.close()
        # now it is safe to wait for asyncore.loop() to exit
        self.thread.join()

    # check for emails in a non-blocking way
    def get(self):
        '''Return all emails received so far'''
        return self.smtp.emails

if __name__ == '__main__':
    server = CustomSMTPServer(('0.0.0.0', 1025), None)
    asyncore.loop()
smtp_client.py
import smtplib
import email.utils
from email.mime.text import MIMEText

def send_email():
    sender = 'author@example.com'
    recipient = '6142546977@tmomail.net'

    msg = MIMEText('This is the body of the message.')
    msg['To'] = email.utils.formataddr(('Recipient', recipient))
    msg['From'] = email.utils.formataddr(('Author', 'author@example.com'))
    msg['Subject'] = 'Simple test message'

    client = smtplib.SMTP('127.0.0.1', 1025)
    client.set_debuglevel(True)  # show communication with the server
    try:
        client.sendmail('author@example.com', [recipient], msg.as_string())
    finally:
        client.quit()
Then start the server with python app.py and, in another terminal, simulate a request to /send_email with curl localhost:5000/send_email. Note that to actually send the email (or SMS) you'll need to jump through other hoops, detailed here: https://blog.codinghorror.com/so-youd-like-to-send-some-email-through-code/.