I have an Autobahn WebSocket server with the typical onX functions in its protocol. My problem is that I can't find a way to exit onX while still doing the various things I wanted to do when the specific message arrived. More specifically, in my onMessage function I sometimes perform an HTTP request to an API which is very slow. As a result, the client that sent the WebSocket message is blocked until the server's onMessage finishes. Even if I call self.sendMessage, reactor.callFromThread(<http request here>), or self.transport.loseConnection() from the server side inside the onMessage block, onMessage still executes the HTTP request and my client waits.
This is my client's code:
import asyncio
import json
import ssl

import websockets

@asyncio.coroutine
def send(command, userPath, token):
    websocket = yield from websockets.connect('wss://127.0.0.1:7000', ssl=ssl.SSLContext(protocol=ssl.PROTOCOL_TLSv1_2))
    data = json.dumps({"api_command": "session", "body": command, "headers": {'X-User-Path': userPath, 'X-User-Token': token}})
    response = {}
    try:
        yield from websocket.send(data)
    finally:
        yield from websocket.close()
    if 'command' in response:
        if response['command'] == 'ACK_SESSION_COMMAND' or response['command'] == 'ACK_INITIALIZATION':
            return ('OK', 200)
        else:
            return ('', 400)
I even tried a plain websocket.send(data) from the client, but for some reason it doesn't send the data (I don't see it arriving at the server). I don't understand how I can return from the onMessage block and keep doing my HTTP request.
To explain my situation: I just want to send one SSL WebSocket message to my server and immediately close the connection. Anything that can do that suits me.
Using reactor.callInThread instead of reactor.callFromThread releases the application flow: the HTTP request is performed independently in a thread from the reactor's thread pool, and onMessage returns immediately. This is described in the Twisted documentation: http://twistedmatrix.com/documents/13.2.0/core/howto/threading.html
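As a rough sketch of that approach (the protocol class, the slow_api_call helper, and its URL are placeholders, not taken from the question), the handler could hand the blocking request off to the reactor's thread pool and return immediately:

import json

import requests
from twisted.internet import reactor
from autobahn.twisted.websocket import WebSocketServerProtocol

def slow_api_call(payload):
    # runs in a worker thread, so the blocking HTTP request no longer stalls the reactor
    requests.post("https://example.com/slow-api", json=payload, timeout=30)

class MyServerProtocol(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        data = json.loads(payload.decode("utf8"))
        # schedule the slow call on the thread pool; onMessage returns right away,
        # so the client is not kept waiting
        reactor.callInThread(slow_api_call, data)
        self.sendMessage(b'{"command": "ACK_SESSION_COMMAND"}')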
I am trying to read messages from an MQTT server. In some cases the connection is unstable and requires a reconnect, but after reconnecting I am not able to receive any messages from the topic that I previously subscribed to. I am using paho's Python package to handle the MQTT connection. Here is the code I am using:
TopicName = 'some/topic/name'

class Counter:
    def __init__(self, mqttClient):
        self.messages_received = 0
        self.mqttClient = mqttClient
        self.mqttClient.subscribe(TopicName)
        self.mqttClient.on_message = self.on_message
        self.mqttClient.on_disconnect = self.on_disconnect
        self.mqttClient.loop_start()

    def on_message(self, client, userdata, message):
        self.messages_received += 1

    def on_disconnect(self, client, userdata, rc):
        if rc != 0:
            print("Trying to reconnect")
            while not self.mqttClient.is_connected():
                try:
                    self.mqttClient.reconnect()
                except OSError:
                    pass
If my internet goes down, I am no longer able to receive messages. I have tried to subscribe to the topic again, and I have also tried to call loop_start in the on_disconnect method; neither of those worked. Any solution would be helpful. To point out that the messages are indeed being sent: I can see them in the browser on MQTT wall.
You have not shown where you are calling connect, but the usual safe pattern is to put the calls to subscribe() in the on_connect() callback attached to the client, as sketched below.
This means that calls to subscribe() will:
- always wait until the connection has completed
- get called again automatically when a reconnect has happened
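A minimal sketch of that pattern with the paho-mqtt 1.x callback API used in the question (the broker address is a placeholder; the topic name is taken from the question):

import paho.mqtt.client as mqtt

TopicName = 'some/topic/name'

def on_connect(client, userdata, flags, rc):
    # runs on the initial connect and on every automatic reconnect,
    # so the subscription is always re-established
    client.subscribe(TopicName)

def on_message(client, userdata, message):
    print(message.topic, message.payload)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect('broker.example.com', 1883)  # placeholder broker address
client.loop_forever()  # the network loop handles reconnects internally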
Not sure what module you are using, but most will require you to re-subscribe after a disconnect. Add your subscribe() call after your .reconnect() call and you should be good to go. Also keep in mind that at QoS level 0 your client will NOT receive any messages the broker received while you were disconnected; only messages published while the client is subscribed will be delivered to it. If messages are published with the Retain flag, your client will receive the LAST one the broker received, even if the client received it previously.
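Applied to the question's on_disconnect handler, that suggestion would look roughly like this (same attribute and topic names as in the question):

def on_disconnect(self, client, userdata, rc):
    if rc != 0:
        print("Trying to reconnect")
        while not self.mqttClient.is_connected():
            try:
                self.mqttClient.reconnect()
            except OSError:
                pass
        # the broker dropped the old subscription, so register it again
        self.mqttClient.subscribe(TopicName)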
So I'm writing a unit test for a Tornado server I programmed which has a WebSocketHandler. This is my testing code:
def test_unauthorized_websocket(self):
    message = "Hey bro"
    ws = websocket.create_connection('ws://localhost:8888/ws', header=['Authorization: false'])
    send_dict = {'command': test_command, 'message': message}
    serialized_dict = json.dumps(send_dict)
    ws.send(serialized_dict)
    response = ws.recv()
    # test response with assert
My goal with this test was to prove that my Tornado server correctly refuses and closes this WebSocket connection because of the wrong authentication header.
This is my Tornado WebSocketHandler code:
class WebSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        # some code
        headers = self.request.headers
        try:
            auth = headers['Authorization']
        except KeyError:
            self.close(code=1002, reason="Unauthorized websocket")
            print("IT'S CLOSED")
            return
        if auth == "true":
            print("Authorized Websocket!")
            # some code
        else:
            print("Unauthorized Websocket... :-(")
            self.close(code=1002, reason="Unauthorized websocket")
So when the authentication is wrong, self.close() is called (not sure I need code and reason). This should close the WebSocket, but it doesn't actually happen on the "client" side. After I call create_connection() in the "client", the ws.connected variable is still True; when I do ws.send(), the on_message method of the WebSocketHandler is still called, and when it tries to respond with self.write_message() it raises a WebSocketClosedError. Only then does the "client" actually close its side of the WebSocket: ws.recv() returns nothing and ws.connected turns to False after that.
Is there a way I can communicate to the client side (through handshake headers or something) that the WebSocket is meant to be closed, so it closes earlier on its side?
You can override prepare() and raise a tornado.web.HTTPError, instead of overriding open() and calling self.close(). This will reject the connection at the first opportunity, and it will be reported on the client side as part of create_connection() instead of on a later read.
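A minimal sketch of that suggestion, reusing the header check from the question (the 401 status code is my own choice):

import tornado.web
import tornado.websocket

class WebSocketHandler(tornado.websocket.WebSocketHandler):
    def prepare(self):
        # runs before the WebSocket handshake, so raising here rejects the
        # HTTP upgrade request itself and create_connection() fails immediately
        if self.request.headers.get('Authorization') != "true":
            raise tornado.web.HTTPError(401, "Unauthorized websocket")

    def open(self):
        print("Authorized Websocket!")
        # some code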
I am trying to get messages from ActiveMQ using stomp.py and then do some processing on them. But there are cases where the processing fails for certain messages, and those messages are lost.
How can I prevent the deletion of a message until it is fully processed?
For example, in my code, when there is a new entry in the queue the on_message function is called and the processing starts, but if it is interrupted in between, the message is lost. How do I stop that?
Here is my code:
import time

import stomp

conn = stomp.Connection([(host, 61613)])
conn.set_listener('ML', MyListener())
conn.start()
conn.connect('admin', 'admin', wait=True)
conn.subscribe(destination='/queue/someque', id=1, ack='auto')

print "running"
while 1:
    print 'waiting'
    time.sleep(2.5)
Here is my Listener class:
class MyListener(stomp.ConnectionListener):
    def on_message(self, headers, message):
        print headers
        print message
        do_something()
Thanks in advance.
The issue appears to be that you are using the 'auto' ack mode, so the message is acknowledged by the broker before delivery to the client, meaning that even if you fail to process it, it's too late: it is already forgotten on the broker side. You need to use either the 'client' or 'client-individual' ack mode, as described in the STOMP specification. With one of the client ack modes, you control when a message (or messages) is actually acknowledged and dropped by the broker.
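A sketch of the 'client-individual' mode with stomp.py (the exact ack()/nack() arguments depend on your stomp.py version; this follows the 4.x-style API the question appears to use, and 'localhost' stands in for the question's host variable):

import stomp

class MyListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_message(self, headers, message):
        try:
            do_something()  # the processing from the question
        except Exception:
            # leave the message unacknowledged (nack it) so the broker can redeliver it
            self.conn.nack(headers['message-id'], headers['subscription'])
        else:
            # only now is the message removed from the queue
            self.conn.ack(headers['message-id'], headers['subscription'])

conn = stomp.Connection([('localhost', 61613)])
conn.set_listener('ML', MyListener(conn))
conn.start()
conn.connect('admin', 'admin', wait=True)
conn.subscribe(destination='/queue/someque', id=1, ack='client-individual')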
I created a zmq_forwarder.py that runs separately and passes messages from the app to a SockJS connection, and I'm currently working on how a Flask app could receive a message from SockJS via ZMQ. I'm pasting the contents of my zmq_forwarder.py below. I'm new to ZMQ and I don't know why, every time I run it, it uses 100% CPU.
import zmq

# Prepare our context and sockets
context = zmq.Context()
receiver_from_server = context.socket(zmq.PULL)
receiver_from_server.bind("tcp://*:5561")
forwarder_to_server = context.socket(zmq.PUSH)
forwarder_to_server.bind("tcp://*:5562")
receiver_from_websocket = context.socket(zmq.PULL)
receiver_from_websocket.bind("tcp://*:5563")
forwarder_to_websocket = context.socket(zmq.PUSH)
forwarder_to_websocket.bind("tcp://*:5564")

# Process messages from both sockets
# We prioritize traffic from the server
while True:
    # forward messages from the server
    while True:
        try:
            message = receiver_from_server.recv(zmq.DONTWAIT)
        except zmq.Again:
            break
        print "Received from server: ", message
        forwarder_to_websocket.send_string(message)
    # forward messages from the websocket
    while True:
        try:
            message = receiver_from_websocket.recv(zmq.DONTWAIT)
        except zmq.Again:
            break
        print "Received from websocket: ", message
        forwarder_to_server.send_string(message)
As you can see, I've set up 4 sockets. The app connects to port 5561 to push data to ZMQ and to port 5562 to receive from ZMQ (although I'm still figuring out how to actually set it up to listen for messages sent by ZMQ). On the other hand, SockJS receives data from ZMQ on port 5564 and sends data to it on port 5563.
I've read that zmq.DONTWAIT makes receiving a message asynchronous and non-blocking, so I added it.
Is there a way to improve the code so that I don't overload the CPU? The goal is to be able to pass messages between the Flask app and the websocket using ZMQ.
You are polling your two receiver sockets in a tight loop, without any blocking (zmq.DONTWAIT), which will inevitably max out the CPU.
Note that there is some support in ZMQ for polling multiple sockets in a single thread - see this answer. I think you can adjust the timeout in poller.poll(millis) so that your code only uses lots of CPU if there are lots of incoming messages, and idles otherwise.
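A sketch of the Poller approach applied to the two receiver sockets set up in the question's script; poll() blocks for up to 100 ms instead of spinning, so the process idles when there is no traffic:

poller = zmq.Poller()
poller.register(receiver_from_server, zmq.POLLIN)
poller.register(receiver_from_websocket, zmq.POLLIN)

while True:
    # wait up to 100 ms for either socket to become readable
    sockets = dict(poller.poll(100))
    if receiver_from_server in sockets:
        message = receiver_from_server.recv()
        forwarder_to_websocket.send(message)  # forward the raw bytes
    if receiver_from_websocket in sockets:
        message = receiver_from_websocket.recv()
        forwarder_to_server.send(message)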
Your other option is to use the ZMQ event loop to respond to incoming messages asynchronously, using callbacks. See the PyZMQ documentation on this topic, from which the following "echo" example is adapted:
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream

ctx = zmq.Context()

# set up the socket, and a stream wrapped around the socket
s = ctx.socket(zmq.REP)
s.bind('tcp://localhost:12345')
stream = ZMQStream(s)

# Define a callback to handle incoming messages
def echo(msg):
    # in this case, just echo the message back again
    stream.send_multipart(msg)

# register the callback
stream.on_recv(echo)

# start the ioloop to start waiting for messages
ioloop.IOLoop.instance().start()
I have my software running on a bunch of clients around my network. I've been playing around with RabbitMQ as a solution for passing messages between the clients.
My test code is this:
#!/usr/bin/python2
import pika
import time

connection = pika.AsyncoreConnection(pika.ConnectionParameters(
    'localhost'))
channel = connection.channel()

def callback(ch, method, properties, body):
    # send messages back on certain events
    if body == '5':
        channel.basic_publish(exchange='',
                              routing_key='test',
                              body='works')
    print body

channel.queue_declare(queue='test')
channel.basic_consume(callback, queue='test', no_ack=True)

for i in range(0, 8):
    channel.basic_publish(exchange='',
                          routing_key='test',
                          body='{}'.format(i))
    time.sleep(0.5)

channel.close()
Picture this as kind of a 'chat program'. Each client will need to constantly listen for messages. At times, the client will need to send messages back to the server.
This code works, but I've run into an issue. When the code sends out the message 'works', it then retrieves it again from the RabbitMQ queue. Is there a way to have my client, which is both a producer and a consumer, not receive the message it just sent?
I can't see this functionality built into RabbitMQ so I figured I'd send messages in the form of:
body='{"client_id" : 1, "message" : "this is the message"}'
Then I can parse that string and check the client_id. The client can then ignore all messages not destined for it.
Is there a better way? Should I look for an alternative to RabbitMQ?
You can have as many queues as you like in RabbitMQ. Why not have a queue for messages to the server, as well as a queue for each client?
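A minimal sketch of that layout with the older pika API used in the question (queue names and the client_id value are placeholders):

import pika

client_id = 1  # placeholder identifier for this client

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# one queue for messages addressed to the server, plus one queue per client
channel.queue_declare(queue='to_server')
channel.queue_declare(queue='client_{}'.format(client_id))

def callback(ch, method, properties, body):
    # only messages published to this client's own queue arrive here,
    # so the client never sees the messages it sends to the server
    print(body)

channel.basic_consume(callback, queue='client_{}'.format(client_id), no_ack=True)

# publish to the server's queue without ever receiving it back
channel.basic_publish(exchange='', routing_key='to_server', body='works')

channel.start_consuming()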