Subscribing to a websocket and publishing on another - python

I'm attempting to do the following:
- connect as a client to an existing websocket
- process the streaming data received from this socket, and publish it on another websocket
I'm using Twisted and Autobahn to do so. I have managed to get the two parts working separately, by deriving from WebSocketClientProtocol for the client and from ApplicationSession for the publishing side. The two run with the same reactor.
However, I am not sure how to make them communicate. I would like to send a message from my server whenever the client receives one, but I don't know how to get hold of the running WebSocketClientProtocol instance...
Perhaps this isn't the right approach either. What's the right way to do this?

I've been trying to solve a similar problem recently; here's what worked for me:
# This goes inside the `if __name__ == "__main__":` block
f = XLeagueBotFactory()
app = Application(f)
reactor.connectTCP("irc.gamesurge.net", 6667, f)
reactor.listenTCP(port, app, interface=host)
class Application(web.Application):
    def __init__(self, botfactory):
        self.botfactory = botfactory
        # (the usual web.Application.__init__ call with the handlers list goes here)

Save the factory instance on the application like this; in my case I then hand it to the handler for the HTTP POST request (using cyclone):
class requestvouch(web.RequestHandler):
    def __init__(self, application, request, **kwargs):
        super(requestvouch, self).__init__(application, request, **kwargs)
        self.botfactory = application.botfactory

    def msg(self, channel, msg):
        bot = self.botfactory.getProtocolByName("XLeagueBot")
        # sendmsg() runs the message through things like encoding and logging,
        # then calls bot.msg(), which posts it to IRC (the endpoint in my case)
        sendmsg(bot, channel, msg)

    def post(self):
        msg = "What I'm sending to the protocol of the other thing"
        self.msg("#xleague", msg)
Now the important part, which lives in the factory:
class XLeagueBotFactory(protocol.ClientFactory):
    protocol = XLeagueBot

    def __init__(self):
        self.protocols = {}

    def getProtocolByName(self, name):
        return self.protocols.get(name)

    def registerProtocol(self, protocol):
        self.protocols[protocol.nickname] = protocol

    def unregisterProtocol(self, protocol):
        del self.protocols[protocol.nickname]
Finally, in my client class:
class XLeagueBot(irc.IRCClient):
    nickname = "XLeagueBot"

    def connectionMade(self):
        irc.IRCClient.connectionMade(self)
        self.factory.registerProtocol(self)

    def connectionLost(self, reason):
        self.factory.unregisterProtocol(self)
        irc.IRCClient.connectionLost(self, reason)
I'm not entirely sure this code is perfect or bug-free, but it should more or less show you how to get hold of a protocol instance and call it. As far as I know, the core problem is that the protocol instance is created inside its factory and a reference to it is never handed out anywhere else.
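For the Autobahn/Twisted setup in the question, the same registration idea applies: let the WebSocket client factory hold a reference to whatever object publishes on the other websocket, so the client protocol can reach it when a message arrives. A minimal sketch, assuming a hypothetical publisher object with a publish() method stands in for your server-side session (only WebSocketClientProtocol and WebSocketClientFactory come from Autobahn; the other names are illustrative):

from autobahn.twisted.websocket import WebSocketClientFactory, WebSocketClientProtocol

class RelayClientProtocol(WebSocketClientProtocol):
    def onMessage(self, payload, isBinary):
        # forward every received frame to whatever publishes on the other websocket
        if self.factory.publisher is not None:
            self.factory.publisher.publish(payload)

class RelayClientFactory(WebSocketClientFactory):
    protocol = RelayClientProtocol

    def __init__(self, publisher, *args, **kwargs):
        WebSocketClientFactory.__init__(self, *args, **kwargs)
        # e.g. your ApplicationSession (or a wrapper around it), set once it has joined
        self.publisher = publisher

The direction can also be reversed (the factory registers its live protocol and the publishing side looks it up); either way, the factory is the natural meeting point, because both sides can be handed a reference to it before the reactor starts.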

Related

Tornado websocket messages aren't being received

I have a very simple setup inspired by this question: Tornado - Listen to multiple clients simultaneously over websockets
Essentially, I have one websocket handler, TestHandler, that may be connected to many websocket clients, and another websocket handler, DataHandler, that should broadcast a message every time it receives one.
So I made a global list of TestHandler instances and use it to broadcast messages to all of them:
ws_clients = []

class TestHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print('open test!')
        ws_clients.append(self)
        self.random_number = random.randint(0, 101)

    def on_message(self, message):
        print(message)
        print('received', message, self, self.random_number)
        self.write_message('Message received')

    def on_close(self):
        print('closed')

class DataHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print('data open!')

    def on_message(self, message):
        for c in ws_clients:
            c.write_message('hello!')

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/test_service/", TestHandler),
            (r"/data/", DataHandler),
            (r"/", httpHandler)
        ]
        tornado.web.Application.__init__(self, handlers)

ws_app = Application()
ws_app.listen(8000)
tornado.ioloop.IOLoop.instance().start()
TestHandler receives messages fine at ws://127.0.0.1/test_service/, and DataHandler receives messages fine at ws://127.0.0.1/data/, but when I loop through ws_clients, the clients connected to TestHandler never receive anything.
Am I doing something wrong?
Here's what I'd do - create a new method on TestHandler that serves one single purpose: take a message and send it to all the connected clients.
Before going into the code, I'd like to point out that it's conventionally better to keep ws_clients inside the class instead of in a global object, and to use a set instead of a list.
class TestHandler(...):
    ws_clients = set()  # use a set instead of a list to avoid duplicate connections

    def open(self):
        self.ws_clients.add(self)

    @classmethod
    def broadcast(cls, message):
        """Take a message and send it to all connected clients."""
        for client in cls.ws_clients:
            # here you can calculate `var` depending on each client
            client.write_message(message)

    def on_close(self):
        # remove the client from `ws_clients`
        self.ws_clients.remove(self)

# Then you can call TestHandler.broadcast from anywhere in your code.
# For example:

class DataHandler(...):
    ...
    def on_message(self, message):
        # pass the message on to TestHandler
        # to send out to the connected clients
        TestHandler.broadcast(message)
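As the comments say, TestHandler.broadcast can be called from anywhere that runs on the same IOLoop, not just from another handler. One concrete (made-up) example is a periodic heartbeat pushed to every /test_service/ client; create the callback just before starting the IOLoop in the main block:

import tornado.ioloop

def heartbeat():
    # push a keep-alive message to every client connected to /test_service/
    TestHandler.broadcast('heartbeat')

# the interval is in milliseconds; start() schedules it on the current IOLoop
tornado.ioloop.PeriodicCallback(heartbeat, 30 * 1000).start()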

Communicate between asyncio protocol/servers

I'm trying to write a Server-Sent Events (SSE) server which I can connect to with telnet and have the telnet content pushed to a browser. The idea behind using Python and asyncio is to use as little CPU as possible, as this will be running on a Raspberry Pi.
So far I have the following, which uses this asyncio-based library: https://pypi.python.org/pypi/asyncio-sse/0.1
I have also copied a telnet server that uses asyncio.
Both work separately, but I have no idea how to tie them together. As I understand it, I need to call send() in the SSEHandler class from inside Telnet.data_received, but I don't know how to access it. Both of these 'servers' need to be running in a loop to accept new connections or push data.
Can anyone help, or point me in another direction?
import asyncio
import sse

# Get an instance of the asyncio event loop
loop = asyncio.get_event_loop()

# Setup SSE address and port
sse_host, sse_port = '192.168.2.25', 8888

class Telnet(asyncio.Protocol):
    def connection_made(self, transport):
        print("Connection received!")
        self.transport = transport

    def data_received(self, data):
        print(data)
        self.transport.write(b'echo:')
        self.transport.write(data)
        # This is where I want to send data via SSE
        # SSEHandler.send(data)
        # Things I've tried :(
        # loop.call_soon_threadsafe(SSEHandler.handle_request())
        # loop.call_soon_threadsafe(sse_server.send("PAH!"))

    def connection_lost(self, esc):
        print("Connection lost!")
        telnet_server.close()

class SSEHandler(sse.Handler):
    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')

# SSE server
sse_server = sse.serve(SSEHandler, sse_host, sse_port)

# Telnet server
telnet_server = loop.run_until_complete(loop.create_server(Telnet, '192.168.2.25', 7777))
# telnet_server.something = sse_server

loop.run_until_complete(sse_server)
loop.run_until_complete(telnet_server.wait_closed())
Server-sent events are a sort of HTTP protocol, and you may have any number of concurrent HTTP requests in flight at any given moment - zero if nobody is connected, or dozens. This nuance is all wrapped up in the two sse.serve and sse.Handler constructs; the former represents a single listening port, which dispatches each separate client request to the latter.
Additionally, sse.Handler.handle_request() is called once for each client, and the client is disconnected once that coroutine terminates. In your code, that coroutine terminates immediately, so the client sees a single "Working" event. So we need to wait, more or less forever. We can do that by yield from-ing an asyncio.Future() that will never fire on its own.
The second issue is that we somehow need to get hold of all of the separate SSEHandler instances and call the send() method on each of them. We can have each one register itself in its handle_request() method, by adding itself to a dict that maps each handler instance to the future it is waiting on.
class SSEHandler(sse.Handler):
    _instances = {}

    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')
        my_future = asyncio.Future()
        SSEHandler._instances[self] = my_future
        yield from my_future
Now, to send an event to every listening client, we just visit all of the SSEHandler instances registered in that dict and call send() on each one.
class SSEHandler(sse.Handler):
    # ...

    @classmethod
    def broadcast(cls, message):
        for instance, future in cls._instances.items():
            instance.send(message)

class Telnet(asyncio.Protocol):
    # ...

    def data_received(self, data):
        # ...
        SSEHandler.broadcast(data.decode('ascii'))
Lastly, your code exits when the telnet connection closes. That's fine, but we should clean up at that point, too. Fortunately, that's just a matter of setting a result on each handler's future:
class SSEHandler(sse.Handler):
    # ...

    @classmethod
    def abort(cls):
        for instance, future in cls._instances.items():
            future.set_result(None)
        cls._instances = {}

class Telnet(asyncio.Protocol):
    # ...

    def connection_lost(self, esc):
        print("Connection lost!")
        SSEHandler.abort()
        telnet_server.close()
Here's a full, working dump in case my illustration is not obvious:
import asyncio
import sse

loop = asyncio.get_event_loop()
sse_host, sse_port = '0.0.0.0', 8888

class Telnet(asyncio.Protocol):
    def connection_made(self, transport):
        print("Connection received!")
        self.transport = transport

    def data_received(self, data):
        SSEHandler.broadcast(data.decode('ascii'))

    def connection_lost(self, esc):
        print("Connection lost!")
        SSEHandler.abort()
        telnet_server.close()

class SSEHandler(sse.Handler):
    _instances = {}

    @classmethod
    def broadcast(cls, message):
        for instance, future in cls._instances.items():
            instance.send(message)

    @classmethod
    def abort(cls):
        for instance, future in cls._instances.items():
            future.set_result(None)
        cls._instances = {}

    @asyncio.coroutine
    def handle_request(self):
        self.send('Working')
        my_future = asyncio.Future()
        SSEHandler._instances[self] = my_future
        yield from my_future

sse_server = sse.serve(SSEHandler, sse_host, sse_port)
telnet_server = loop.run_until_complete(loop.create_server(Telnet, '0.0.0.0', 7777))
loop.run_until_complete(sse_server)
loop.run_until_complete(telnet_server.wait_closed())
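Side note: @asyncio.coroutine and yield from are the legacy coroutine style, and the decorator has been removed from recent Python releases. On a modern interpreter the same handler would be written with async/await - assuming the SSE library in use can drive a native coroutine, which this old asyncio-sse release may not, so treat this as a sketch of the idea rather than a drop-in change:

class SSEHandler(sse.Handler):
    _instances = {}

    async def handle_request(self):
        self.send('Working')
        # park this client until broadcast()/abort() resolves its future
        my_future = asyncio.get_running_loop().create_future()
        SSEHandler._instances[self] = my_future
        await my_future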

Subscribe and unsubscribe to channels after the connection has been made with txredisapi

Working with Python, Twisted, Redis and txredisapi.
How can I get the SubscriberProtocol in order to subscribe and unsubscribe to channels after the connection has been made?
I guess I need to get the SubscriberProtocol instance so that I can use its "subscribe" and "unsubscribe" methods, but I don't know how to get it.
Code example:
import txredisapi as redis

class RedisListenerProtocol(redis.SubscriberProtocol):
    def connectionMade(self):
        self.subscribe("channelName")

    def messageReceived(self, pattern, channel, message):
        print "pattern=%s, channel=%s, message=%s" % (pattern, channel, message)

    def connectionLost(self, reason):
        print "lost connection:", reason

class RedisListenerFactory(redis.SubscriberFactory):
    maxDelay = 120
    continueTrying = True
    protocol = RedisListenerProtocol
Then from outside of these classes:
# I need to sub/unsub from here! (not from inside the protocol)
protocolInstance = RedisListenerProtocol  # Here is the problem: this is the class, not a connected instance
protocolInstance.subscribe("newChannelName")
protocolInstance.unsubscribe("channelName")
Any suggestion?
Thanks!
The following code solves the problem:
from twisted.internet import defer, reactor
from twisted.internet.protocol import ClientCreator

@defer.inlineCallbacks
def subUnsub():
    # yield waits for the connection; the result is the connected protocol instance
    listener = yield ClientCreator(reactor, RedisListenerProtocol).connectTCP(HOST, PORT)
    listener.subscribe("newChannelName")
    listener.unsubscribe("channelName")
Explanation:
Use "ClientCreator" to get an instance of SubscriberProtocol inside a function with the flag "#defer.inlineCallbacks" and don't forget the "yield" keyword for wait to complete the deferred data. Then you can use this deferred to suscribe and unsubscribe.
In my case I forgot the yield keyword, so the deferred wasn't complete and the method suscribe and unsubscribe didn't work.
# Equivalently, without inlineCallbacks:
connecting = ClientCreator(reactor, RedisListenerProtocol).connectTCP(HOST, PORT)

def connected(listener):
    listener.subscribe("newChannelName")
    listener.unsubscribe("channelName")

connecting.addCallback(connected)
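If you need to subscribe and unsubscribe long after the connection was made, rather than right inside the connection callback, one option is to stash the connected protocol somewhere the rest of your code can reach it - the same factory-registration idea used elsewhere on this page. A minimal sketch; the listener_holder dict is just an illustration, not part of txredisapi:

from twisted.internet import reactor
from twisted.internet.protocol import ClientCreator

listener_holder = {}  # will hold the live SubscriberProtocol once connected

def connected(listener):
    listener_holder['protocol'] = listener
    listener.subscribe("channelName")

ClientCreator(reactor, RedisListenerProtocol).connectTCP(HOST, PORT).addCallback(connected)

def switch_channels():
    # call this from anywhere else in the program once the connection is up
    listener = listener_holder.get('protocol')
    if listener is not None:
        listener.unsubscribe("channelName")
        listener.subscribe("newChannelName")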

Implementing Referenceable objects client-side with Twisted Perspective Broker

I am trying to implement a simple server reply in Perspective Broker.
Possible implementation (please suggest others if possible):
The client asks the server to execute a server method; the server executes it and then replies (by calling a client method whose sole purpose is to print a message):
[Client-side]:
class ClientPrint(pb.Referenceable):
    def remote_clientprint(self, message):
        print "Printing the message from the server: ", message

[Server-side]:
class RootServerObject(pb.Root):
    def remote_OneFunc(self, ...):
        ...
        print "Now sending the reply..."
        # *get ClientPrint object?*
        clientprintobj.callRemote("clientprint", "this is the reply!")
How can I implement the grabbing of client-side objects? Is there a better way to implement server replies than grabbing a client-side object and calling a print-only client method?
Here is the full code where I am trying to implement the replies:
[Client-side]:
from twisted.internet import reactor
from twisted.spread import pb

class Client():
    def __init__(self, addr, port, spec):
        self.addr = None
        self.port = None
        self.SomeData = None

    def connect(self, addr, port):
        factory = pb.PBClientFactory()
        reactor.connectTCP(addr, port, factory)
        def1 = factory.getRootObject()
        def1.addCallbacks(self.got_obj, self.err_obj)

    def got_obj(self, rootsrvobj):
        print "Got root server obj:", rootsrvobj
        self.server = rootsrvobj
        def2 = self.server.callRemote("SomeFunc", SomeData)

    def err_obj(self, reason):
        print "Error getting root server obj:", reason
        self.quit()

def cmdsub(addr, port, SomeData):
    c = Client(addr, port, SomeData)
    c.connect(addr, port)

[Server-side]:
class RootServerObject(pb.Root):
    def __init__(self):
        self.DataOut = None

    def remote_SomeFunc(self, SomeData):
        self.DataOut = hash(SomeData)
        print "Now sending reply..."
        # *implement a reply?*
Perhaps there are some more advanced Twisted (or Twisted PB) features that will make this simpler.
Documentation: https://twistedmatrix.com/documents/12.3.0/core/howto/pb-usage.html#auto3
Thanks.
The simplest way to do this is to take the client-side object that the server needs to use and pass it to the server. Almost any solution I can think of has this at its core.
Change your client's got_obj method to be something more like this:
def got_obj(self, rootsrvobj):
    print "Got root server obj:", rootsrvobj
    self.server = rootsrvobj
    # note: for `self` to cross the wire by reference, Client must subclass
    # pb.Referenceable and expose a remote_client_print method
    def2 = self.server.callRemote("SomeFunc", self, SomeData)
And change the implementation of remote_SomeFunc to be something more like this:
def remote_SomeFunc(self, client, SomeData):
    self.DataOut = hash(SomeData)
    print "Now sending reply..."
    client.callRemote("client_print", "Here is your reply")
You might want to investigate Twisted Cred as a more structured way to manage references to your client object - but cred is just building on this exact feature of Perspective Broker to provide its more abstract, more featureful interface.
However, notice that I said "almost" above...
Keep in mind that Twisted's implementation of Perspective Broker has well-integrated support for Deferreds. If a remote_ method returns a Deferred, no response is sent for that call until the Deferred fires, and the Deferred's result is then sent as the result of the method call. You might consider putting the logic of client_print into a callback on the Deferred returned by self.server.callRemote("SomeFunc", SomeData) and making the server's remote_SomeFunc return the reply, either synchronously or asynchronously (as a Deferred).
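A minimal sketch of that Deferred-based approach, reusing the names from the question (only the print_reply callback name is new here):

# [Server-side]: whatever remote_SomeFunc returns (or the result of a
# Deferred it returns) becomes the result of the client's callRemote
class RootServerObject(pb.Root):
    def remote_SomeFunc(self, SomeData):
        return "Here is your reply: %s" % hash(SomeData)

# [Client-side]: these are Client methods; handle the reply in a callback
# instead of exposing a Referenceable print method
def got_obj(self, rootsrvobj):
    self.server = rootsrvobj
    d = self.server.callRemote("SomeFunc", SomeData)
    d.addCallback(self.print_reply)

def print_reply(self, reply):
    print "Printing the message from the server: ", reply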

how to kill twisted protocol instances python

I have a server application written in Python using Twisted, and I'd like to know how to kill instances of my protocol (bottalk). Every time I get a new client connection, I see the instance in memory (print Factory.clients), but say I want to kill one of those instances from the server side (drop a specific client connection) - is this possible? I've tried watching for a phrase in lineReceived and, if it matches, calling self.transport.loseConnection(), but that doesn't seem to get rid of the instance.
class bottalk(LineReceiver):
    from os import linesep as delimiter

    def connectionMade(self):
        Factory.clients.append(self)
        print Factory.clients

    def lineReceived(self, line):
        for bots in Factory.clients[1:]:
            bots.message(line)
        if line == "killme":
            self.transport.loseConnection()

    def message(self, message):
        self.transport.write(message + '\n')

class botfactory(Factory):
    def buildProtocol(self, addr):
        return bottalk()

Factory.clients = []
stdio.StandardIO(bottalk())
reactor.listenTCP(8123, botfactory())
reactor.run()
You closed the TCP connection by calling loseConnection. But there's no code anywhere in your application that removes items from the clients list on the factory.
Try adding this to your protocol:
def connectionLost(self, reason):
    Factory.clients.remove(self)
This will remove the protocol instance from the clients list when the protocol's connection is lost.
Also, you should consider not using the global Factory.clients to implement this functionality. It's bad for all the usual reasons globals are bad. Instead, give each protocol instance a reference to its factory and use that:
class botfactory(Factory):
    def buildProtocol(self, addr):
        protocol = bottalk()
        protocol.factory = self
        return protocol

factory = botfactory()
factory.clients = []
StandardIO(factory.buildProtocol(None))
reactor.listenTCP(8123, factory)
Now each bottalk instance can use self.factory.clients instead of Factory.clients.
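To address the "drop a specific client from the server side" part directly: once every instance keeps self.factory.clients up to date, any server-side code that can reach the factory can pick a client and close its transport; connectionLost then removes it from the list. A minimal sketch - the selection criterion (matching the peer's host) is just for illustration:

def kick_clients_from(factory, host):
    # iterate over a copy, since connectionLost mutates factory.clients
    for client in list(factory.clients):
        peer = client.transport.getPeer()
        if getattr(peer, 'host', None) == host:
            client.transport.loseConnection()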
