I'm trying to make an application that normally sends a request to a server and receives a response. If that were all, I'd go with HTTP and call it a day. But some requests to the server change things for other clients, so I want the server to send all affected clients a message telling them to update.
For that, I've chosen the WebSocket protocol and the Tornado library to work with it from Python. The simple message exchange was pretty straightforward, despite the asynchrony. However, the WebSocket client is really not that configurable, and I've been struggling to make a client listen for incoming notifications without interrupting the main message exchange.
The server part is represented by the tornado.websocket.WebSocketHandler, which has an on_message method:
from tornado.websocket import WebSocketHandler

class MyHandler(WebSocketHandler):
    def on_message(self, message):
        print('message:', message)
And I'd like something like that in the client part, which is only represented by the function tornado.websocket.websocket_connect (source). This function creates a tornado.websocket.WebSocketClientConnection (source) object, which has an on_message method, but due to the entangled asynchronous structure I haven't been able to override it properly without breaking the main message exchange.
Another approach I tried was the on_message_callback argument. It sounded like something I could use, but I couldn't figure out how to combine it with read_message. This was my best attempt:
import tornado.websocket
import tornado.ioloop

ioloop = tornado.ioloop.IOLoop.current()

def clbk(message):
    print('received', message)

async def main():
    url = 'server_url_here'
    conn = await tornado.websocket.websocket_connect(url, io_loop=ioloop,
                                                     on_message_callback=clbk)
    while True:
        print(await conn.read_message())  # The execution hangs here
        st = input()
        conn.write_message(st)

ioloop.run_sync(main)
With this being the server code:
import tornado.ioloop
import tornado.web
import tornado.websocket
import os

class EchoWebSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        self.write_message('hello')

    def on_message(self, message):
        self.write_message(message)
        self.write_message('notification')

if __name__ == "__main__":
    app = tornado.web.Application([(r"/", EchoWebSocket)])
    app.listen(os.getenv('PORT', 8080))
    tornado.ioloop.IOLoop.current().start()
I don't know what's going on here. Am I even going in the right direction with this?
There are two issues here:
Use on_message_callback or loop on await read_message(), but not both. If you pass a callback, incoming messages are only handed to that callback and are not saved for use by read_message.
input() is blocking and doesn't play well with Tornado. It's fine in this little toy demo, but if you want to do something like this in production you'll probably want to wrap a PipeIOStream around sys.stdin and use stream.read_until(b'\n').
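For example, a sketch of the client with both changes (incoming frames go only to on_message_callback; stdin is read through a PipeIOStream, which is POSIX-only; the io_loop argument is dropped since recent Tornado versions no longer accept it; the URL placeholder is kept from the question):

import sys

import tornado.ioloop
import tornado.websocket
from tornado.iostream import PipeIOStream, StreamClosedError

def on_message(message):
    # The callback also gets None when the connection is closed.
    if message is None:
        print('connection closed')
        return
    print('received', message)

async def main():
    url = 'server_url_here'
    conn = await tornado.websocket.websocket_connect(
        url, on_message_callback=on_message)
    stdin = PipeIOStream(sys.stdin.fileno())
    try:
        while True:
            line = await stdin.read_until(b'\n')
            await conn.write_message(line.decode().rstrip('\r\n'))
    except StreamClosedError:
        pass  # stdin reached EOF

tornado.ioloop.IOLoop.current().run_sync(main)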
I have code like this:
from tornadoredis import Client
from tornado.ioloop import IOLoop
from tornado.gen import coroutine, Task

rds = Client()

@coroutine
def listen_pub():
    def handle(msg):
        print msg
    yield Task(rds.subscribe, channels='pub')
    rds.listen(handle)

@coroutine
def listen_list():
    while True:
        res = yield Task(rds.brpop, keys='list')
        print res

def test():
    listen_pub()
    listen_list()

test()
IOLoop.current().start()
When I run the code above, only listen_list receives messages.
Why doesn't listen_pub work?
How can I listen for messages from the LIST and from PUB/SUB at the same time?
Take a look at the redis documentation:
A client subscribed to one or more channels should not issue
commands, although it can subscribe and unsubscribe to and from
other channels. The reply of the SUBSCRIBE and UNSUBSCRIBE operations
are sent in the form of messages, so that the client can just read a
coherent stream of messages where the first element indicates the type
of message.
You have to use two client connections: one dedicated to the subscription and one for the BRPOP.
Source: http://redis.io/topics/pubsub
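As a rough sketch based on the code from the question (assuming a local Redis on the default port), keep one tornadoredis client dedicated to SUBSCRIBE/listen and a second one for the blocking BRPOP:

from tornadoredis import Client
from tornado.ioloop import IOLoop
from tornado.gen import coroutine, Task

rds_pub = Client()   # connection used only for SUBSCRIBE/listen
rds_list = Client()  # separate connection for blocking BRPOP

@coroutine
def listen_pub():
    def handle(msg):
        print msg
    yield Task(rds_pub.subscribe, channels='pub')
    rds_pub.listen(handle)

@coroutine
def listen_list():
    while True:
        res = yield Task(rds_list.brpop, keys='list')
        print res

listen_pub()
listen_list()
IOLoop.current().start()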
I have an application where every websocket connection (within tornado's open callback) creates a zmq.SUB socket to an existing zmq.FORWARDER device. The idea is to receive data from zmq via callbacks, which can then be relayed to frontend clients over the websocket connection.
https://gist.github.com/abhinavsingh/6378134
ws.py
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream
ioloop.install()

from tornado.websocket import WebSocketHandler
from tornado.web import Application
from tornado.ioloop import IOLoop

ioloop = IOLoop.instance()

class ZMQPubSub(object):

    def __init__(self, callback):
        self.callback = callback

    def connect(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.SUB)
        self.socket.connect('tcp://127.0.0.1:5560')
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.callback)

    def subscribe(self, channel_id):
        self.socket.setsockopt(zmq.SUBSCRIBE, channel_id)

class MyWebSocket(WebSocketHandler):

    def open(self):
        self.pubsub = ZMQPubSub(self.on_data)
        self.pubsub.connect()
        self.pubsub.subscribe("session_id")
        print 'ws opened'

    def on_message(self, message):
        print message

    def on_close(self):
        print 'ws closed'

    def on_data(self, data):
        print data

def main():
    application = Application([(r'/channel', MyWebSocket)])
    application.listen(10001)
    print 'starting ws on port 10001'
    ioloop.start()

if __name__ == '__main__':
    main()
forwarder.py
import logging
import zmq

# Logger used by the generic exception handler below.
logger = logging.getLogger(__name__)

def main():
    try:
        context = zmq.Context(1)

        frontend = context.socket(zmq.SUB)
        frontend.bind('tcp://*:5559')
        frontend.setsockopt(zmq.SUBSCRIBE, '')

        backend = context.socket(zmq.PUB)
        backend.bind('tcp://*:5560')

        print 'starting zmq forwarder'
        zmq.device(zmq.FORWARDER, frontend, backend)
    except KeyboardInterrupt:
        pass
    except Exception as e:
        logger.exception(e)
    finally:
        frontend.close()
        backend.close()
        context.term()

if __name__ == '__main__':
    main()
publish.py
import zmq

if __name__ == '__main__':
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.connect('tcp://127.0.0.1:5559')
    socket.send('session_id helloworld')
    print 'sent data for channel session_id'
However, my ZMQPubSub class doesn't seem to be receiving any data at all.
I experimented further and realized that I need to call ioloop.IOLoop.instance().start() after registering the on_recv callback within ZMQPubSub. But that just blocks execution.
I also tried passing the main ioloop instance to the ZMQStream constructor, but that doesn't help either.
Is there a way I can bind the ZMQStream to the existing main ioloop instance without blocking the flow inside MyWebSocket.open?
In your now complete example, simply change frontend in your forwarder to a PULL socket and your publisher socket to PUSH, and it should behave as you expect.
The general principles of socket choice that are relevant here:
use PUB/SUB when you want to send a message to everyone who is ready to receive it (may be no one)
use PUSH/PULL when you want to send a message to exactly one peer, waiting for them to be ready
It may appear initially that you just want PUB-SUB, but once you start looking at each socket pair, you realize that they are very different. The forwarder-to-websocket connection is definitely PUB-SUB: you may have zero-to-many receivers, and you just want to send messages to everyone who happens to be available when a message comes through. But the publisher side is different: there is only one receiver (the forwarder), and it definitely wants every message from the publishers.
So there you have it: in your forwarder.py, frontend should be PULL and backend should stay PUB. All your sockets:
PUSH -> [PULL-PUB] -> SUB
publish.py: socket is PUSH, connected to frontend in forwarder.py
forwarder.py: frontend is PULL, backend is PUB
ws.py: SUB connects and subscribes to forwarder's backend
The relevant behavior that makes PUB/SUB fail on the publisher side in your case is the slow-joiner syndrome, which is described in the Guide. Essentially, subscribers take a finite time to tell publishers about their subscriptions, so if you send a message immediately after opening a PUB socket, the odds are it hasn't been told that it has any subscribers yet, so it's just discarding messages.
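A minimal sketch of that change, keeping the variable names from the question's forwarder.py (frontend is still the socket publishers connect to on port 5559, backend is still the PUB socket on 5560):

import zmq

def main():
    context = zmq.Context(1)

    # Publishers now PUSH into a PULL socket; the SUBSCRIBE option from
    # the original SUB frontend is no longer needed (it is SUB-only).
    frontend = context.socket(zmq.PULL)
    frontend.bind('tcp://*:5559')

    # The websocket-facing side stays PUB, fanning out to every SUB.
    backend = context.socket(zmq.PUB)
    backend.bind('tcp://*:5560')

    zmq.device(zmq.FORWARDER, frontend, backend)

if __name__ == '__main__':
    main()

publish.py then creates its socket with context.socket(zmq.PUSH) instead of zmq.PUB, still connecting to port 5559, and ws.py is unchanged. In newer pyzmq, zmq.proxy(frontend, backend) does the same job as the zmq.device call.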
ZeroMQ subscribers have to subscribe to the messages they wish to receive; I don't see that in your code. I believe the Python way is this:
self.socket.setsockopt(zmq.SUBSCRIBE, "")
I've written code in Tornado that connects to a server that pushes an infinite data stream, processes the stream, and sends the result out on a websocket server.
The problem is that, the way I implemented it, the server becomes blocked in a particular function and doesn't accept any more clients, since it never exits the function serving data to the websocket. I want the connection to the upstream server made, and the data retrieved from it processed, only once, but the processed data sent to all the clients that connect to my tornado server. Could someone please help me? I can't figure out a way to do it. Here's my code with the processing of the data removed:
import socket
import ssl
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web

websockets = []

class WSHandler(tornado.websocket.WebSocketHandler):

    def readData(self):
        while True:
            line = self.ssl_sock.read()
            # PROCESS THE READ LINE AND CONVERT INTO RESULTING DATA
            if(toSend):
                self.write_message(result)

    def makeConnection(self):
        self.ssl_sock.connect(self.address)
        self.readData()

    def open(self):
        print 'New connection was opened'
        self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.ssl_sock = ssl.wrap_socket(self.s, cert_reqs=ssl.CERT_NONE)
        self.address = ('SERVER_ADDRESS', 5000)
        self.nodes = []
        self.edges = []
        if self not in websockets:
            print('added')
            websockets.append(self)
        if(len(websockets) == 1):
            print('executing make conn')
            self.makeConnection()
        else:
            self.readData()
            print('executing read data')

    def on_message(self, message):
        print 'Incoming message:', message
        self.write_message("You said: " + message)

    def on_close(self):
        print 'Connection was closed...'

application = tornado.web.Application([
    (r'/ws', WSHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Tornado is an asynchronous framework: all your I/O must run within its event loop, otherwise the whole server gets stuck.
Try having a look at Tornado Async Client.
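To give a more concrete idea, here is a minimal sketch of that structure for a recent Tornado on Python 3: a single upstream connection is read asynchronously through an SSLIOStream, and every processed line is relayed to all connected websocket clients. It assumes the upstream pushes newline-delimited text; SERVER_ADDRESS and the ports are the placeholders from the question.

import socket
import ssl

import tornado.ioloop
import tornado.web
import tornado.websocket
from tornado.iostream import SSLIOStream

websockets = []

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        # Just register the client; no per-client upstream connection.
        websockets.append(self)

    def on_close(self):
        if self in websockets:
            websockets.remove(self)

def broadcast(message):
    # Relay one processed line to every connected client.
    for ws in list(websockets):
        try:
            ws.write_message(message)
        except tornado.websocket.WebSocketClosedError:
            pass

async def consume_upstream():
    # One shared upstream connection for the whole process.
    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    stream = SSLIOStream(raw, ssl_options={'cert_reqs': ssl.CERT_NONE})
    await stream.connect(('SERVER_ADDRESS', 5000))
    while True:
        line = await stream.read_until(b'\n')
        # ...process the line here, then fan it out...
        broadcast(line.decode())

if __name__ == '__main__':
    app = tornado.web.Application([(r'/ws', WSHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().spawn_callback(consume_upstream)
    tornado.ioloop.IOLoop.current().start()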
I'm trying to add a sockjs-tornado server to my site, and everything worked fine until I decided to connect it to my other apps via MsgPack (using msgpack-rpc-python). Now either the sockjs server or the RPC server works, depending on which of them starts its loop first.
I think I need to use one tornado.ioloop for both of them, but I don't know how to achieve that. Or maybe there is another way to add RPC to a tornado server?
Here is a sockjs-tornado sample code with msgpack-rpc-python:
import tornado.ioloop
import tornado.web
import sockjs.tornado
import msgpackrpc

class RPCServer(object):
    def sum(self, x, y):
        return x + y

class IndexHandler(tornado.web.RequestHandler):
    """Regular HTTP handler to serve the chatroom page"""
    def get(self):
        self.render('index.html')

class ChatConnection(sockjs.tornado.SockJSConnection):
    """Chat connection implementation"""
    # Class level variable
    participants = set()

    def on_open(self, info):
        # Send that someone joined
        self.broadcast(self.participants, "Someone joined.")
        # Add client to the clients list
        self.participants.add(self)

    def on_message(self, message):
        # Broadcast message
        self.broadcast(self.participants, message)

    def on_close(self):
        # Remove client from the clients list and broadcast leave message
        self.participants.remove(self)
        self.broadcast(self.participants, "Someone left.")

if __name__ == "__main__":
    # 1. Create chat router
    ChatRouter = sockjs.tornado.SockJSRouter(ChatConnection, '/chat')

    # 1.5 Create MsgPack RPC Server
    rpc = msgpackrpc.Server(RPCServer())

    # 2. Create Tornado application
    app = tornado.web.Application(
        [(r"/", IndexHandler)] + ChatRouter.urls
    )

    # 3. Make Tornado app listen on port 5000
    app.listen(5000)

    # 3.5 Make MsgPack RPC Server listen on port 5001
    rpc.listen(msgpackrpc.Address('localhost', 5001))

    # 4. Start IOLoop
    tornado.ioloop.IOLoop.instance().start()

    # 5. Never executed
    rpc.start()
Any suggestions or examples are welcome!
This happens because both start() calls start a Tornado IOLoop, and neither returns until that IOLoop is stopped.
Yes, you have to use one IOLoop. Since msgpackrpc.Server accepts a Loop instance, and Loop encapsulates an IOLoop, try this:
if __name__ == '__main__':
    io_loop = tornado.ioloop.IOLoop.instance()
    loop = msgpackrpc.Loop(io_loop)
    rpc = msgpackrpc.Server(RPCServer(), loop=loop)
    # ... sockjs-tornado initialisation. No need to call rpc.start()
    io_loop.start()
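Folding that into the question's __main__ block might look roughly like this (the msgpackrpc.Loop usage follows the snippet above; untested sketch):

if __name__ == "__main__":
    io_loop = tornado.ioloop.IOLoop.instance()

    # Let the MsgPack RPC server reuse Tornado's IOLoop.
    loop = msgpackrpc.Loop(io_loop)
    rpc = msgpackrpc.Server(RPCServer(), loop=loop)
    rpc.listen(msgpackrpc.Address('localhost', 5001))

    # sockjs-tornado application, exactly as in the question.
    ChatRouter = sockjs.tornado.SockJSRouter(ChatConnection, '/chat')
    app = tornado.web.Application([(r"/", IndexHandler)] + ChatRouter.urls)
    app.listen(5000)

    # A single loop now drives both servers; rpc.start() is not called.
    io_loop.start()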
I have a server where I have implemented a subclass of the NetstringReceiver protocol. I want it to perform an asynchronous operation (using txredisapi) based on the client's request and then respond with the results of the operation. A generalization of my code:
class MyProtocol(NetstringReceiver):

    def stringReceived(self, request):
        d = async_function_that_returns_deferred(request)
        d.addCallback(self.respond)
        # self.sendString(myString)

    def respond(self, result_of_async_function):
        self.sendString(result_of_async_function)
In the above code, the client connecting to my server does not get a response. However, it does get myString if I uncomment
# self.sendString(myString)
I also know that result_of_async_function is a non-empty string because I print it to stdout.
What can I do that will allow me to respond to the client with the result of the asynchronous function?
Update: Runnable source code
from twisted.internet import reactor, defer, protocol
from twisted.protocols.basic import NetstringReceiver
from twisted.internet.task import deferLater

def f():
    return "RESPONSE"

class MyProtocol(NetstringReceiver):

    def stringReceived(self, _):
        d = deferLater(reactor, 5, f)
        d.addCallback(self.reply)
        # self.sendString(str(f()))  # Note that this DOES send the string.

    def reply(self, response):
        self.sendString(str(response))  # Why does this not send the string, and how to fix it?

class MyFactory(protocol.ServerFactory):
    protocol = MyProtocol

def main():
    factory = MyFactory()
    from twisted.internet import reactor
    port = reactor.listenTCP(8888, factory)
    print 'Serving on %s' % port.getHost()
    reactor.run()

if __name__ == "__main__":
    main()
There's one specific feature about NetstringReceiver:
The connection is lost if an illegal message is received
Are you sure that your messages conform to djb's netstring protocol?
Obviously the client sends an illegal string that cannot be parsed, and the connection is lost per the protocol's rules. Everything else in your code looks good.
If you don't need that specific protocol, you'd be better off inheriting from LineReceiver instead of NetstringReceiver.
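For example, a rough LineReceiver version of the same handler (newline-framed messages, so the trailing newline that nc sends is harmless; mirrors the Python 2 style of the question and is a sketch only):

from twisted.internet import reactor, protocol
from twisted.internet.task import deferLater
from twisted.protocols.basic import LineReceiver

def f():
    return "RESPONSE"

class MyLineProtocol(LineReceiver):

    def lineReceived(self, line):
        # Same asynchronous pattern as the netstring version: fire the
        # deferred, send the reply from its callback.
        d = deferLater(reactor, 5, f)
        d.addCallback(self.reply)

    def reply(self, response):
        self.sendLine(str(response))

class MyLineFactory(protocol.ServerFactory):
    protocol = MyLineProtocol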
The reason you never get the response is that by the time it's sent, the connection is already closed. The reason the connection is closed is that the message you send with nc is:
1:a,\n
because you have to type a newline to get nc to send the message, and nc includes it as part of the message. That violates the netstring protocol...
I worked around it (with your code modified with some additional prints) by sending this message instead:
1:a,40:\n
blahblahblahDon't hit return here, just wait for the reply
8:RESPONSE,
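Alternatively, a small Twisted client avoids hand-crafting netstrings in nc altogether. A minimal sketch (the port matches the server above; NetstringReceiver.sendString adds the framing for you):

from twisted.internet import reactor, protocol
from twisted.protocols.basic import NetstringReceiver

class TestClient(NetstringReceiver):

    def connectionMade(self):
        # Framed as "1:a," automatically, with no stray newline.
        self.sendString(b'a')

    def stringReceived(self, string):
        print('server replied: %s' % string)
        self.transport.loseConnection()
        reactor.stop()

if __name__ == '__main__':
    factory = protocol.ClientFactory()
    factory.protocol = TestClient
    reactor.connectTCP('localhost', 8888, factory)
    reactor.run()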