I have an application where every websocket connection (within the Tornado open callback) creates a zmq.SUB socket to an existing zmq.FORWARDER device. The idea is to receive data from ZMQ as callbacks, which can then be relayed to frontend clients over the websocket connection.
https://gist.github.com/abhinavsingh/6378134
ws.py
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream
ioloop.install()

from tornado.websocket import WebSocketHandler
from tornado.web import Application
from tornado.ioloop import IOLoop

ioloop = IOLoop.instance()

class ZMQPubSub(object):

    def __init__(self, callback):
        self.callback = callback

    def connect(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.SUB)
        self.socket.connect('tcp://127.0.0.1:5560')
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.callback)

    def subscribe(self, channel_id):
        self.socket.setsockopt(zmq.SUBSCRIBE, channel_id)

class MyWebSocket(WebSocketHandler):

    def open(self):
        self.pubsub = ZMQPubSub(self.on_data)
        self.pubsub.connect()
        self.pubsub.subscribe("session_id")
        print 'ws opened'

    def on_message(self, message):
        print message

    def on_close(self):
        print 'ws closed'

    def on_data(self, data):
        print data

def main():
    application = Application([(r'/channel', MyWebSocket)])
    application.listen(10001)
    print 'starting ws on port 10001'
    ioloop.start()

if __name__ == '__main__':
    main()
forwarder.py
import logging
import zmq

logger = logging.getLogger(__name__)

def main():
    try:
        context = zmq.Context(1)

        frontend = context.socket(zmq.SUB)
        frontend.bind('tcp://*:5559')
        frontend.setsockopt(zmq.SUBSCRIBE, '')

        backend = context.socket(zmq.PUB)
        backend.bind('tcp://*:5560')

        print 'starting zmq forwarder'
        zmq.device(zmq.FORWARDER, frontend, backend)
    except KeyboardInterrupt:
        pass
    except Exception as e:
        logger.exception(e)
    finally:
        frontend.close()
        backend.close()
        context.term()

if __name__ == '__main__':
    main()
publish.py
import zmq

if __name__ == '__main__':
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.connect('tcp://127.0.0.1:5559')
    socket.send('session_id helloworld')
    print 'sent data for channel session_id'
However, my ZMQPubSub class doesn't seem to be receiving any data at all.
I experimented further and realized that I need to call ioloop.IOLoop.instance().start() after registering the on_recv callback within ZMQPubSub. But that just blocks execution.
I also tried passing the main.ioloop instance to the ZMQStream constructor, but that doesn't help either.
Is there a way to bind ZMQStream to the existing main.ioloop instance without blocking the flow within MyWebSocket.open?
In your now complete example, simply change frontend in your forwarder to a PULL socket and your publisher socket to PUSH, and it should behave as you expect.
The general principles of socket choice that are relevant here:
use PUB/SUB when you want to send a message to everyone who is ready to receive it (may be no one)
use PUSH/PULL when you want to send a message to exactly one peer, waiting for them to be ready
It may appear initially that you just want PUB-SUB, but once you start looking at each socket pair, you realize that they are very different. The frontend-websocket connection is definitely PUB-SUB: you may have zero-to-many receivers, and you just want to send messages to everyone who happens to be available when a message comes through. But the backend side is different: there is only one receiver, and it definitely wants every message from the publishers.
So there you have it: backend should be PULL and frontend PUB. (Note that this answer names the sockets by role as seen from the subscribers, which is the reverse of your forwarder.py, where the publisher-facing socket is called frontend.) All your sockets:
PUSH -> [PULL-PUB] -> SUB
publisher.py: socket is PUSH, connected to the backend in forwarder.py
forwarder.py: backend is PULL, frontend is PUB
ws.py: SUB connects and subscribes to forwarder.frontend.
The relevant behavior that makes PUB/SUB fail on the backend in your case is the slow joiner syndrome, which is described in The Guide. Essentially, subscribers take a finite time to tell publishers about their subscriptions, so if you send a message immediately after opening a PUB socket, the odds are it hasn't been told that it has any subscribers yet, so it's just discarding messages.
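Concretely, a minimal sketch of the changed lines, keeping the variable names and ports from the question's forwarder.py and publish.py (where the publisher-facing socket is called frontend); the zmq.device(...) call stays the same:

# forwarder.py: the publisher-facing socket becomes PULL
frontend = context.socket(zmq.PULL)
frontend.bind('tcp://*:5559')
# no SUBSCRIBE option is needed (or allowed) on a PULL socket

# publish.py: the publisher becomes PUSH
socket = context.socket(zmq.PUSH)
socket.connect('tcp://127.0.0.1:5559')
socket.send('session_id helloworld')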
ZeroMQ subscribers have to subscribe to the messages they wish to receive; I don't see that in your code. I believe the Python way is this:
self.socket.setsockopt(zmq.SUBSCRIBE, "")
Related
I have my Tornado client continuously listening to my Tornado server in a loop, as mentioned here: http://tornadoweb.org/en/stable/websocket.html#client-side-support. It looks like this:
import tornado.ioloop
import tornado.websocket
from tornado import gen

@gen.coroutine
def test():
    client = yield tornado.websocket.websocket_connect("ws://localhost:9999/ws")
    client.write_message("Hello")
    while True:
        msg = yield client.read_message()
        if msg is None:
            break
        print msg
    client.close()

if __name__ == "__main__":
    tornado.ioloop.IOLoop.instance().run_sync(test)
I'm not able to get multiple instances of clients to connect to the server. The second client always waits for the first client process to end before it connects to the server. The server is set up as follows, with reference from Websockets with Tornado: Get access from the "outside" to send messages to clients and Tornado - Listen to multiple clients simultaneously over websockets.
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket

class WSHandler(tornado.websocket.WebSocketHandler):
    clients = set()

    def open(self):
        print 'new connection'
        WSHandler.clients.add(self)

    def on_message(self, message):
        print 'message received %s' % message
        # process received message
        # pass it to a thread which updates a variable
        while True:
            output = updated_variable
            self.write_message(output)

    def on_close(self):
        print 'connection closed'
        WSHandler.clients.remove(self)

application = tornado.web.Application([(r'/ws', WSHandler),])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(9999)
    tornado.ioloop.IOLoop.instance().start()
But this has not worked: for some reason, even after the first connection succeeds, the second connection simply fails to connect, i.e. it does not even get added to the clients set.
I initially thought the while True loop would not block the server from receiving and handling more clients, but it does: without it, multiple clients are able to connect. How can I send back continuously updated information from my internal thread without using the while True loop?
Any help would be greatly appreciated!
To write messages to the client in a while loop, you can yield None inside the loop. This will pause the while loop, and Tornado's IOLoop will then be free to accept new connections.
Here's an example:
@gen.coroutine
def on_message(self):
    while True:
        self.write_message("Hello")
        yield None
Thanks for your answer, @xyres! I was able to get it to work by starting a thread in the on_message method that handed the processing and the while True loop to a function outside the WSHandler class. I believe this allowed the method to run outside Tornado's IOLoop, unblocking new connections.
This is how my server looks now:
def on_message(self, message):
    print 'message received %s' % message
    sendThread = threading.Thread(target=send, args=(self, message))
    sendThread.start()

def send(client, msg):
    # process received msg
    # pass it to a thread which updates a variable
    while True:
        output = updated_variable
        client.write_message(output)
Where send is a function defined outside the class which does the required computation for me and writes back to the client inside the while True loop.
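One caveat worth adding: Tornado's write_message is not thread-safe, so calling it directly from a worker thread can misbehave. A safer sketch (my addition, not part of the original answer) schedules the write on the IOLoop instead:

import tornado.ioloop

def send(client, msg):
    io_loop = tornado.ioloop.IOLoop.instance()
    while True:
        output = updated_variable  # computed/updated elsewhere, as above
        # hand the write back to the IOLoop thread; add_callback is the
        # one IOLoop method documented as safe to call from other threads
        io_loop.add_callback(lambda out=output: client.write_message(out))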
I started working with the Twisted framework; I wrote a TCP server and I connect to it through Telnet, and it works fine. Now I want to manage connections and connected clients (sending data, cutting connections, etc.) from a GUI like PyUI or GTK.
This is my code:
import sys
import os
from twisted.internet import reactor, protocol
from twisted.python import log

class Server(protocol.Protocol):

    def dataReceived(self, data):
        log.msg("data received: %s" % data)
        self.transport.write("you sent: %s" % data)

    def connectionMade(self):
        self.client_host = self.transport.getPeer().host
        self.client_port = self.transport.getPeer().port
        if len(self.factory.clients) >= self.factory.clients_max:
            log.msg("Too many connections !!")
            self.transport.write("Too many connections, sorry\n")
            self.transport.loseConnection()
        else:
            self.factory.clients.append((self.client_host, self.client_port))
            log.msg("connection from %s:%s\n" % (self.client_host, str(self.client_port)))
            self.transport.write(
                "Welcome %s:%s\n" % (self.client_host, str(self.client_port)))

    def connectionLost(self, reason):
        log.msg('Connection lost from %s:%s. Reason: %s\n' % (self.client_host, str(self.client_port), reason.getErrorMessage()))
        if (self.client_host, self.client_port) in self.factory.clients:
            self.factory.clients.remove((self.client_host, self.client_port))

class MyFactory(protocol.ServerFactory):
    protocol = Server

    def __init__(self, clients_max=10):
        self.clients_max = clients_max
        self.clients = []

def main():
    """This runs the protocol on port 8000"""
    log.startLogging(sys.stdout)
    reactor.listenTCP(8000, MyFactory())  # listenTCP needs a factory instance, not the class
    reactor.run()

if __name__ == '__main__':
    main()
Thanks.
If you want to write a single Python program (process) that runs both your UI and your networking, you will first need to choose an appropriate Twisted reactor that integrates with the UI toolkit's event loop. See here.
Next, you might start with something simple, like have a button that when pressed will send a text message to all currently connected clients.
Another thing: which clients will connect? Browsers (also)? If so, you might consider using WebSocket instead of raw TCP.
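For instance, a rough sketch with the gtk2reactor that ships with Twisted (the broadcast helper is hypothetical, and assumes factory.clients is changed to hold protocol instances rather than (host, port) tuples):

# install the GTK-integrated reactor BEFORE importing twisted.internet.reactor
from twisted.internet import gtk2reactor
gtk2reactor.install()

import gtk
from twisted.internet import reactor

def broadcast(factory, text):
    # hypothetical helper: requires factory.clients to store protocols
    for client in factory.clients:
        client.transport.write(text + "\n")

factory = MyFactory()  # the factory from the question

button = gtk.Button("Send to all")
button.connect("clicked", lambda _widget: broadcast(factory, "hello everyone"))

window = gtk.Window()
window.add(button)
window.show_all()

reactor.listenTCP(8000, factory)
reactor.run()  # drives both the GTK main loop and Twisted's reactor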
I've written code in Tornado that connects to a server pushing an infinite data stream, processes the data stream, and sends it out on a websocket server.
The problem is that, the way I implemented it, the server becomes blocked in a particular function and doesn't accept any more clients, since it never exits the function serving the data to the websocket. I want the connection to the upstream server made, and the data retrieved from it processed, only once, but the processed data sent to all the clients that connect to my Tornado server. Could someone please help me? I can't figure out a way to do it. Here's my code with the data processing removed:
import socket
import ssl

import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web

websockets = []

class WSHandler(tornado.websocket.WebSocketHandler):

    def readData(self):
        while True:
            line = self.ssl_sock.read()
            # PROCESS THE READ LINE AND CONVERT INTO RESULTING DATA
            if toSend:
                self.write_message(result)

    def makeConnection(self):
        self.ssl_sock.connect(self.address)
        self.readData()

    def open(self):
        print 'New connection was opened'
        self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.ssl_sock = ssl.wrap_socket(self.s, cert_reqs=ssl.CERT_NONE)
        self.address = ('SERVER_ADDRESS', 5000)
        self.nodes = []
        self.edges = []
        if self not in websockets:
            print ('added')
            websockets.append(self)
        if len(websockets) == 1:
            print('executing make conn')
            self.makeConnection()
        else:
            self.readData()
            print('executing read data')

    def on_message(self, message):
        print 'Incoming message:', message
        self.write_message("You said: " + message)

    def on_close(self):
        print 'Connection was closed...'

application = tornado.web.Application([
    (r'/ws', WSHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Tornado is an asynchronous framework: all your IO must run within its event loop, otherwise the whole server gets stuck.
Try having a look at Tornado Async Client.
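For illustration, here is a sketch of that direction (my code, not the answerer's): the upstream connection is opened once with tornado.iostream.SSLIOStream, and each line read is fanned out to every connected websocket, so no handler ever blocks. SERVER_ADDRESS, port 5000, and newline-delimited framing are assumptions carried over from the question.

import socket
import ssl

import tornado.ioloop
import tornado.iostream
import tornado.web
import tornado.websocket

websockets = []

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        websockets.append(self)

    def on_close(self):
        websockets.remove(self)

def on_line(line):
    # process the line here, then fan it out to all connected clients
    for ws in websockets:
        ws.write_message(line)
    stream.read_until('\n', on_line)  # keep reading the infinite stream

def on_connect():
    stream.read_until('\n', on_line)

# one upstream connection, shared by every websocket client
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
stream = tornado.iostream.SSLIOStream(
    s, ssl_options={'cert_reqs': ssl.CERT_NONE})
stream.connect(('SERVER_ADDRESS', 5000), on_connect)

application = tornado.web.Application([(r'/ws', WSHandler)])

if __name__ == '__main__':
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()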
I need to create a class that can receive and store SMTP messages, i.e. e-mails. To do so, I am using asyncore, following an example posted here. However, asyncore.loop() blocks, so I cannot do anything else in the code.
So I thought of using threads. Here is example code that shows what I have in mind:
class MyServer(smtpd.SMTPServer):
    # derive from the python server class

    def process_message(self, peer, mailfrom, rcpttos, data):
        # overwrite an smtpd.SMTPServer method to be able to handle
        # the received messages
        ...
        self.list_emails.append(this_email)

    def get_number_received_emails(self):
        """Return the current number of stored emails"""
        return len(self.list_emails)

    def start_receiving(self):
        """Start the actual server to listen on port 25"""
        self.thread = threading.Thread(target=asyncore.loop)
        self.thread.start()

    def stop(self):
        """Stop listening now to port 25"""
        # close the SMTPserver from itself
        self.close()
        self.thread.join()
I hope you get the picture. The class MyServer should be able to start and stop listening on port 25 in a non-blocking way, and be queryable for messages while listening (or not). The start method starts the asyncore.loop() listener which, when an email is received, appends it to an internal list. Similarly, the stop method should be able to stop this server, as suggested here.
Despite the fact that this code does not work as I expect (asyncore seems to run forever, even when I call the above stop method; the error I raise is caught within stop, but not within the target function containing asyncore.loop()), I am not sure my approach to the problem makes sense. Any suggestions for fixing the above code, or for a more solid implementation (without using third-party software), are appreciated.
The solution provided might not be the most sophisticated, but it works reasonably well and has been tested.
First of all, the issue with asyncore.loop() is that it blocks until all asyncore channels are closed, as user Wessie pointed out in a comment before. Referring to the SMTP example mentioned earlier, it turns out that smtpd.SMTPServer inherits from asyncore.dispatcher (as described in the smtpd documentation), which answers the question of which channel needs to be closed.
Therefore, the original question can be answered with the following updated example code:
class CustomSMTPServer(smtpd.SMTPServer):
    # store the emails in any form inside the custom SMTP server
    emails = []

    # overwrite the method that is used to process the received
    # emails, putting them into self.emails for example
    def process_message(self, peer, mailfrom, rcpttos, data):
        # email processing
        pass

class MyReceiver(object):
    def start(self):
        """Start the listening service"""
        # here I create an instance of the SMTP server, derived from asyncore.dispatcher
        self.smtp = CustomSMTPServer(('0.0.0.0', 25), None)
        # and here I also start the asyncore loop, listening for SMTP connections, within a thread
        # the timeout parameter is important, otherwise the code will block
        # for 30 seconds after the smtp channel has been closed
        self.thread = threading.Thread(target=asyncore.loop, kwargs={'timeout': 1})
        self.thread.start()

    def stop(self):
        """Stop listening now to port 25"""
        # close the SMTPserver to ensure no channels connect to asyncore
        self.smtp.close()
        # now it is safe to wait for the thread to finish, i.e. for asyncore.loop() to exit
        self.thread.join()

    # now it finally is possible to use an instance of this class
    # to check for emails or whatever in a non-blocking way
    def count(self):
        """Return the number of emails received"""
        return len(self.smtp.emails)

    def get(self):
        """Return all emails received so far"""
        return self.smtp.emails
....
So in the end, I have a start and a stop method to start and stop listening on port 25 within a non-blocking environment.
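For illustration, usage of the class above would look something like this (my sketch, assuming smtpd, asyncore, and threading are imported):

receiver = MyReceiver()
receiver.start()             # now listening on port 25 in a background thread

# ... the rest of the program keeps running; poll whenever convenient
print receiver.count()       # number of emails received so far
for email in receiver.get():
    pass                     # inspect the stored emails here

receiver.stop()              # close the channel and join the loop thread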
Coming from the other question, "asyncore.loop doesn't terminate when there are no more connections":
I think you are slightly overthinking the threading. Using the code from the other question, you can start a new thread that runs asyncore.loop with the following code snippet:
import threading

loop_thread = threading.Thread(target=asyncore.loop, name="Asyncore Loop")
# If you want to make the thread a daemon
# loop_thread.daemon = True
loop_thread.start()
This will run it in a new thread and will keep going till all asyncore channels are closed.
You should consider using Twisted, instead. http://twistedmatrix.com/trac/browser/trunk/doc/mail/examples/emailserver.tac demonstrates how to set up an SMTP server with a customizable on-delivery hook.
Alex's answer is the best, but it was incomplete for my use case. I wanted to test SMTP as part of a unit test, which meant building the fake SMTP server inside my test objects, and the server would not terminate the asyncore thread, so I had to add a line to set it as a daemon thread to allow the rest of the unit test to complete without blocking while waiting for that asyncore thread to join. I also added complete logging of all email data so that I could assert anything sent through the SMTP server.
Here is my fake SMTP class:
class TestingSMTP(smtpd.SMTPServer):
    def __init__(self, *args, **kwargs):
        super(TestingSMTP, self).__init__(*args, **kwargs)
        self.emails = []

    def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
        msg = {'peer': peer,
               'mailfrom': mailfrom,
               'rcpttos': rcpttos,
               'data': data}
        msg.update(kwargs)
        self.emails.append(msg)

class TestingSMTP_Server(object):
    def __init__(self):
        self.smtp = TestingSMTP(('0.0.0.0', 25), None)
        self.thread = threading.Thread()

    def start(self):
        self.thread = threading.Thread(target=asyncore.loop, kwargs={'timeout': 1})
        self.thread.daemon = True
        self.thread.start()

    def stop(self):
        self.smtp.close()
        self.thread.join()

    def count(self):
        return len(self.smtp.emails)

    def get(self):
        return self.smtp.emails
And here is how it is called by the unittest classes:
smtp_server = TestingSMTP_Server()
smtp_server.start()

# send some emails

assertTrue(smtp_server.count() == 1)  # or however many you intended to send
assertEqual(smtp_server.get()[0]['mailfrom'], 'first@fromaddress.com')

# stop it when done testing
smtp_server.stop()
In case anyone else needs this fleshed out a bit more, here's what I ended up using. This uses smtpd for the email server and smtplib for the email client, with Flask as an HTTP server [gist]:
app.py
from flask import Flask, render_template
from smtp_client import send_email
from smtp_server import SMTPServer

app = Flask(__name__)

@app.route('/send_email')
def email():
    server = SMTPServer()
    server.start()
    try:
        send_email()
    finally:
        server.stop()
    return 'OK'

@app.route('/')
def index():
    return 'Woohoo'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
smtp_server.py
# smtp_server.py
import smtpd
import asyncore
import threading

class CustomSMTPServer(smtpd.SMTPServer):
    # collect received messages so SMTPServer.get() below has something to return
    emails = []

    def process_message(self, peer, mailfrom, rcpttos, data):
        print('Receiving message from:', peer)
        print('Message addressed from:', mailfrom)
        print('Message addressed to:', rcpttos)
        print('Message length:', len(data))
        self.emails.append(data)
        return

class SMTPServer():
    def __init__(self):
        self.port = 1025

    def start(self):
        '''Start listening on self.port'''
        # create an instance of the SMTP server, derived from asyncore.dispatcher
        self.smtp = CustomSMTPServer(('0.0.0.0', self.port), None)
        # start the asyncore loop, listening for SMTP connections, within a thread
        # the timeout parameter is important, otherwise the code will block
        # 30 seconds after the smtp channel has been closed
        kwargs = {'timeout': 1, 'use_poll': True}
        self.thread = threading.Thread(target=asyncore.loop, kwargs=kwargs)
        self.thread.start()

    def stop(self):
        '''Stop listening to self.port'''
        # close the SMTPserver to ensure no channels connect to asyncore
        self.smtp.close()
        # now it is safe to wait for asyncore.loop() to exit
        self.thread.join()

    # check for emails in a non-blocking way
    def get(self):
        '''Return all emails received so far'''
        return self.smtp.emails

if __name__ == '__main__':
    server = CustomSMTPServer(('0.0.0.0', 1025), None)
    asyncore.loop()
smtp_client.py
import smtplib
import email.utils
from email.mime.text import MIMEText

def send_email():
    sender = 'author@example.com'
    recipient = '6142546977@tmomail.net'

    msg = MIMEText('This is the body of the message.')
    msg['To'] = email.utils.formataddr(('Recipient', recipient))
    msg['From'] = email.utils.formataddr(('Author', 'author@example.com'))
    msg['Subject'] = 'Simple test message'

    client = smtplib.SMTP('127.0.0.1', 1025)
    client.set_debuglevel(True)  # show communication with the server
    try:
        client.sendmail('author@example.com', [recipient], msg.as_string())
    finally:
        client.quit()
Then start the server with python app.py and, in another terminal, simulate a request to /send_email with curl localhost:5000/send_email. Note that to actually send the email (or SMS) you'll need to jump through other hoops detailed here: https://blog.codinghorror.com/so-youd-like-to-send-some-email-through-code/.
Is it possible to add additional subscriptions to a Redis connection? I have a listening thread, but it appears not to be influenced by new SUBSCRIBE commands.
If this is the expected behavior, what pattern should be used if users add a stock ticker feed to their interests or join a chatroom?
I would like to implement a Python class similar to:
import threading
import redis

class RedisPubSub(object):
    def __init__(self):
        self._redis_pub = redis.Redis(host='localhost', port=6379, db=0)
        self._redis_sub = redis.Redis(host='localhost', port=6379, db=0)
        self._sub_thread = threading.Thread(target=self._listen)
        self._sub_thread.setDaemon(True)
        self._sub_thread.start()

    def publish(self, channel, message):
        self._redis_pub.publish(channel, message)

    def subscribe(self, channel):
        self._redis_sub.subscribe(channel)

    def _listen(self):
        for message in self._redis_sub.listen():
            print message
The python-redis Redis and ConnectionPool classes inherit from threading.local, and this is producing the "magical" effects you're seeing.
Summary: your main thread and worker threads' self._redis_sub clients end up using two different connections to the server, but only the main thread's connection has issued the SUBSCRIBE command.
Details: Since the main thread creates self._redis_sub, that client ends up being placed into the main thread's local storage. Next, I presume the main thread does a client.subscribe(channel) call. Now the main thread's client is subscribed on connection 1. Then you start the self._sub_thread worker thread, which ends up with its own self._redis_sub attribute set to a new instance of redis.Client; this constructs a new connection pool and establishes a new connection to the Redis server.
This new connection has not yet been subscribed to your channel, so listen() returns immediately. So with python-redis you cannot pass an established connection with outstanding subscriptions (or any other stateful commands) between threads.
Depending on how you plan to implement your app you may need to switch to using a different client, or come up with some other way to communicate subscription state to the worker threads, e.g. send subscription commands through a queue.
One other issue is that python-redis uses blocking sockets, which prevents your listening thread from doing other work while waiting for messages, and it cannot signal it wishes to unsubscribe unless it does so immediately after receiving a message.
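One possible shape for the queue/control idea (my sketch, not part of the original answer): let the worker thread own the subscriber connection outright, and feed it new channel names through a control channel it is already subscribed to. The channel name __control__ is made up for illustration, and the dict message format with 'type', 'channel', and 'data' keys is assumed from python-redis's listen():

import threading
import redis

CONTROL = '__control__'  # hypothetical control channel name

class RedisPubSub(object):
    def __init__(self):
        self._redis_pub = redis.Redis(host='localhost', port=6379, db=0)
        self._sub_thread = threading.Thread(target=self._listen)
        self._sub_thread.setDaemon(True)
        self._sub_thread.start()

    def publish(self, channel, message):
        self._redis_pub.publish(channel, message)

    def subscribe(self, channel):
        # ask the listener thread to subscribe, instead of touching
        # its connection from this thread
        self._redis_pub.publish(CONTROL, channel)

    def _listen(self):
        # this thread creates and owns the subscriber connection, so
        # every SUBSCRIBE command is issued on the same connection
        sub = redis.Redis(host='localhost', port=6379, db=0)
        sub.subscribe(CONTROL)
        for message in sub.listen():
            if message['type'] != 'message':
                continue  # skip subscribe/unsubscribe confirmations
            if message['channel'] == CONTROL:
                sub.subscribe(message['data'])  # safe: same thread
            else:
                print message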
The async way:
The Twisted framework and the txredisapi plugin.
Example code (subscriber):
import txredisapi as redis

from twisted.application import internet
from twisted.application import service
from twisted.internet import reactor

class myProtocol(redis.SubscriberProtocol):
    def connectionMade(self):
        print "waiting for messages..."
        print "use the redis client to send messages:"
        print "$ redis-cli publish chat test"
        print "$ redis-cli publish foo.bar hello world"
        self.subscribe("chat")
        self.psubscribe("foo.*")
        reactor.callLater(10, self.unsubscribe, "chat")
        reactor.callLater(15, self.punsubscribe, "foo.*")
        # self.continueTrying = False
        # self.transport.loseConnection()

    def messageReceived(self, pattern, channel, message):
        print "pattern=%s, channel=%s message=%s" % (pattern, channel, message)

    def connectionLost(self, reason):
        print "lost connection:", reason

class myFactory(redis.SubscriberFactory):
    # SubscriberFactory is a wrapper for the ReconnectingClientFactory
    maxDelay = 120
    continueTrying = True
    protocol = myProtocol

application = service.Application("subscriber")
srv = internet.TCPClient("127.0.0.1", 6379, myFactory())
srv.setServiceParent(application)
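Since this builds a twisted.application service (a .tac file), it is run with twistd rather than python directly; assuming the example is saved as subscriber.tac, something like:
$ twistd -ny subscriber.tac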
Only one thread, no headache :)
It depends on what kind of app you're coding, of course. For networking, go with Twisted.