Durable subscriber with client-individual ACK not working - python

I have a durable subscriber for a topic (e.g. topic_a) and I am trying to use client-individual ACK. At the end of the try block I send a manual ack, and in the exception block I send no acknowledgment. Whenever an error occurs, the consumer (subscriber) hangs and eventually stops.
I am trying to do manual ACK in a topic (pub-sub) based implementation.
1. Is it possible?
2. Will the message be redelivered to the same durable subscriber?
The execute method inside the main class:
self.conn = stomp.Connection11(self.conn_param, encoding=self.ENCODE_FORMAT)
self.conn.start()
self.conn.connect(wait=True, headers={'client-id': self.CLIENT_ID})
self.conn.set_listener('', CustomListener(self.conn))
Listener class:
class CustomListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_message(self, headers, message):
        try:
            message = json.loads(message)
            # ... do some business logic ...
            self.conn.ack(headers.get("message-id"), int(headers.get("subscription")))
            print("message ack done..!")
        except Exception as ex:
            print("Exception in processing message :: %s" % ex)
If any exception occurs while processing inside the on_message listener, the message needs to be redelivered.

If you are using client-individual ack mode then your code has a responsibility to acknowledge the messages sent to it, and if you fail to ack enough of them the broker will stop sending you more because you've exhausted the available credit that is configured. The broker assumes the unacknowledged messages are pending until you either ACK or NACK them. You can use NACK to poison the message and either send it to a DLQ or (if broker-side redelivery is configured) have the broker redeliver the message.
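For reference, a minimal sketch of what that could look like for a durable topic subscriber, assuming stomp.py 4.x (where the listener receives headers and a body); the destination, client id and the ActiveMQ-specific 'activemq.subscriptionName' header below are illustrative, not taken from the question:

import json
import stomp

class AckNackListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_message(self, headers, message):
        message_id = headers.get("message-id")
        subscription = headers.get("subscription")
        try:
            payload = json.loads(message)  # parsed payload used by the business logic
            # ... business logic ...
            self.conn.ack(message_id, subscription)    # broker may now forget the message
        except Exception as ex:
            print("Exception in processing message :: %s" % ex)
            # NACK so the broker can redeliver or dead-letter the message,
            # depending on how redelivery is configured broker-side.
            self.conn.nack(message_id, subscription)

conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("", AckNackListener(conn))
conn.connect("admin", "admin", wait=True, headers={"client-id": "my-client"})
conn.subscribe(
    destination="/topic/topic_a",
    id=1,
    ack="client-individual",
    headers={"activemq.subscriptionName": "my-durable-sub"},
)

With client-individual ack the broker keeps each message as pending until it sees an ACK or NACK for that specific message id, so a failure in the try block no longer blocks the messages that follow.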

Related

Connection to two RabbitMQ servers

I'm using Python with pika, and have the following two similar use cases:
Connect to RabbitMQ server A and server B (at different IP addrs with different credentials), listen on exchange A1 on server A; when a message arrives, process it and send to an exchange on server B
Open an HTTP listener and connect to RabbitMQ server B; when a specific HTTP request arrives, process it and send to an exchange on server B
Alas, in both these cases using my usual techniques, by the time I get to sending to server B the connection throws ConnectionClosed or ChannelClosed.
I assume this is the cause: while waiting on the incoming messages, the connection to server B (its "driver") is starved of CPU cycles, and it never gets a chance to service its connection socket, thus it can't respond to heartbeats from server B, thus the server shuts down the connection.
But I can't noodle out the fix. My current workaround is lame: I catch the ConnectionClosed, reopen a connection to server B, and retry sending my message.
But what is the "right" way to do this? I've considered these, but don't really feel I have all the parts to solve this:
Don't just sit forever in server A's basic_consume (my usual pattern), but rather use a timeout, and when I catch the timeout somehow "service" heartbeats on server B's driver before returning to a "consume with timeout"... but how do I do that? How do I "let server B's connection driver service its heartbeats"?
I know the socket library's select() call can wait for messages on several sockets at once, then service the socket that has packets waiting. So maybe this is what pika's SelectConnection is for? a) I'm not sure, this is just a hunch. b) Even if right, while I can find examples of how to create this connection, I can't find examples of how to use it to solve my multi-connection case.
Set up the two server connections in different processes... and use Python interprocess queues to get the processed message from one process to the next. The concept is "two different RabbitMQ connections in two different processes should thus be able to independently service their heartbeats". Except... I think this has a fatal flaw: the process with "server B" is, instead, going to be "stuck" waiting on the interprocess queue, and the same "starvation" is going to happen.
I've checked StackOverflow and Googled this for an hour last night: I can't for the life of me find a blog post or sample code for this.
Any input? Thanks a million!
I managed to work it out, basing my solution on the documentation and an answer in the pika-python Google group.
First of all, your assumption is correct — the client process that's connected to server B, responsible for publishing, cannot reply to heartbeats if it's already blocking on something else, like waiting for a message from server A or blocking on an internal communication queue.
The crux of the solution is that the publisher should run as a separate thread and use BlockingConnection.process_data_events to service heartbeats and such. It looks like that method is supposed to be called in a loop that checks if the publisher still needs to run:
def run(self):
    while self.is_running:
        # Block at most 1 second before returning and re-checking
        self.connection.process_data_events(time_limit=1)
Proof of concept
Since proving the full solution requires having two separate RabbitMQ instances running, I have put together a Git repo with an appropriate docker-compose.yml, the application code and comments to test this solution.
https://github.com/karls/rabbitmq-two-connections
Solution outline
Below is a sketch of the solution, minus imports and such. Some notable things:
Publisher runs as a separate thread
The only "work" that the publisher does is servicing heartbeats and such, via Connection.process_data_events
The publisher registers a callback whenever the consumer wants to publish a message, using Connection.add_callback_threadsafe
The consumer takes the publisher as a constructor argument so it can publish the messages it receives, but it can work via any other mechanism as long as you have a reference to an instance of Publisher
The code is taken from the linked Git repo, which is why certain details are hardcoded, e.g. the queue name. It will work with any RabbitMQ setup needed (direct-to-queue, topic exchange, fanout, etc.).
class Publisher(threading.Thread):
    def __init__(
        self,
        connection_params: ConnectionParameters,
        *args,
        **kwargs,
    ):
        super().__init__(*args, **kwargs)
        self.daemon = True
        self.is_running = True
        self.name = "Publisher"
        self.queue = "downstream_queue"
        self.connection = BlockingConnection(connection_params)
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=self.queue, auto_delete=True)
        self.channel.confirm_delivery()

    def run(self):
        while self.is_running:
            self.connection.process_data_events(time_limit=1)

    def _publish(self, message):
        logger.info("Calling '_publish'")
        self.channel.basic_publish("", self.queue, body=message.encode())

    def publish(self, message):
        logger.info("Calling 'publish'")
        self.connection.add_callback_threadsafe(lambda: self._publish(message))

    def stop(self):
        logger.info("Stopping...")
        self.is_running = False
        # Call .process_data_events one more time to block
        # and allow the while-loop in .run() to break.
        # Otherwise the connection might be closed too early.
        self.connection.process_data_events(time_limit=1)
        if self.connection.is_open:
            self.connection.close()
            logger.info("Connection closed")
        logger.info("Stopped")
class Consumer:
    def __init__(
        self,
        connection_params: ConnectionParameters,
        publisher: Optional["Publisher"] = None,
    ):
        self.publisher = publisher
        self.queue = "upstream_queue"
        self.connection = BlockingConnection(connection_params)
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=self.queue, auto_delete=True)
        self.channel.basic_qos(prefetch_count=1)

    def start(self):
        self.channel.basic_consume(
            queue=self.queue, on_message_callback=self.on_message
        )
        try:
            self.channel.start_consuming()
        except KeyboardInterrupt:
            logger.info("Warm shutdown requested...")
        except Exception:
            traceback.print_exception(*sys.exc_info())
        finally:
            self.stop()

    def on_message(self, _channel: Channel, m, _properties, body):
        try:
            message = body.decode()
            logger.info(f"Got: {message!r}")
            if self.publisher:
                self.publisher.publish(message)
            else:
                logger.info(f"No publisher provided, printing message: {message!r}")
            self.channel.basic_ack(delivery_tag=m.delivery_tag)
        except Exception:
            traceback.print_exception(*sys.exc_info())
            self.channel.basic_nack(delivery_tag=m.delivery_tag, requeue=False)

    def stop(self):
        logger.info("Stopping consuming...")
        if self.connection.is_open:
            logger.info("Closing connection...")
            self.connection.close()
        if self.publisher:
            self.publisher.stop()
        logger.info("Stopped")

Reading subscribed MQTT messages after reconnect

I am trying to read messages from an MQTT server. In some cases the connection is unstable and requires a reconnect. But after reconnecting, I am not able to receive any messages from the topic I previously subscribed to. I am using paho's Python package to handle the MQTT connection. Here is some of the code I am using:
TopicName = 'some/topic/name'

class Counter:
    def __init__(self, mqttClient):
        self.messages_received = 0
        self.mqttClient = mqttClient
        self.mqttClient.subscribe(TopicName)
        self.mqttClient.on_message = self.on_message
        self.mqttClient.on_disconnect = self.on_disconnect
        self.mqttClient.loop_start()

    def on_message(self, client, userdata, message):
        self.messages_received += 1

    def on_disconnect(self, client, userdata, rc):
        if rc != 0:
            print("Trying to reconnect")
            while not self.mqttClient.is_connected():
                try:
                    self.mqttClient.reconnect()
                except OSError:
                    pass
If my internet goes down, I am no longer able to receive messages. I have tried subscribing to the topic again, and I have also tried calling loop_start in the on_disconnect method; neither of those worked. Any solution would be helpful. To point out that messages are indeed being sent: I can see them in the browser on MQTT wall.
You have not shown where you are calling connect, but the usual safe pattern is to put the calls to subscribe() in the on_connect() callback attached to the client.
This means that calls to subscribe will
Always wait until the connection has completed
Get called again automatically when a reconnect has happened
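A minimal sketch of that pattern, assuming paho-mqtt 1.x; the broker host and topic name here are placeholders:

import paho.mqtt.client as mqtt

TOPIC = "some/topic/name"

def on_connect(client, userdata, flags, rc):
    # Runs on every (re)connect, so the subscription is restored automatically.
    if rc == 0:
        client.subscribe(TOPIC)

def on_message(client, userdata, message):
    print("Got:", message.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.loop_forever()   # handles reconnects and re-fires on_connect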
Not sure what module you are using, but most will require you to re-subscribe if you disconnect. Add your subscribe() call after your .reconnect() call and you should be good to go. Also keep in mind that at QoS level 0, your client will NOT receive any messages the broker received while you were disconnected... only messages published while the client is subscribed will be received by your client. If messages are published with the Retain flag, your client will receive the LAST one received by the broker... even if the client previously received it.

Don't let stomp.py delete a packet from ActiveMQ until the whole packet is processed

I am trying to get messages from ActiveMQ using stomp.py and then do some processing on them. But there is a case where that processing fails for certain messages and the message is lost.
How can I prevent the deletion of the message until it is fully processed?
For example, in my code the on_message function is called when there is a new entry in the queue and processing starts, but if it is interrupted in between, the message is lost. How do I stop that?
Here is my code:
conn = stomp.Connection([(host, 61613)])
conn.set_listener('ML', MyListener())
conn.start()
conn.connect('admin', 'admin', wait=True)
conn.subscribe(destination='/queue/someque', id=1, ack='auto')
print "running"
while 1:
    print 'waiting'
    time.sleep(2.5)
Here is my Listener class:
class MyListener(stomp.ConnectionListener):
    def on_message(self, headers, message):
        print headers
        print message
        do_something()
Thanks in advance.
The issue appears to be that you are using the 'auto' ack mode, so the message is acknowledged by the broker before delivery to the client, meaning that even if you fail to process it, it's too late: the message is already forgotten on the broker side. You'd need to use either the 'client' or 'client-individual' ack mode as described in the STOMP specification. Using one of the client ack modes, you control when a message or messages are actually acknowledged and dropped by the broker.
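A rough sketch of how the code from the question could be adapted, assuming stomp.py 4.x (where the listener receives headers and a body); do_something() and the destination are taken from the question:

class MyListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_message(self, headers, message):
        try:
            do_something()
            # Only acknowledge once processing has finished, so the broker
            # keeps the message if we crash part-way through.
            self.conn.ack(headers['message-id'], headers['subscription'])
        except Exception:
            # NACK (or simply don't ack) so the message is not lost.
            self.conn.nack(headers['message-id'], headers['subscription'])

conn = stomp.Connection([(host, 61613)])
conn.set_listener('ML', MyListener(conn))
conn.start()
conn.connect('admin', 'admin', wait=True)
conn.subscribe(destination='/queue/someque', id=1, ack='client-individual')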

How to avoid high CPU usage?

I created a zmq_forwarder.py that runs separately and passes messages from the app to a sockJS connection, and I'm currently working on how a Flask app could receive a message from sockJS via ZMQ. I'm pasting the contents of my zmq_forwarder.py. I'm new to ZMQ and I don't know why it uses 100% CPU every time I run it.
import zmq

# Prepare our context and sockets
context = zmq.Context()
receiver_from_server = context.socket(zmq.PULL)
receiver_from_server.bind("tcp://*:5561")
forwarder_to_server = context.socket(zmq.PUSH)
forwarder_to_server.bind("tcp://*:5562")
receiver_from_websocket = context.socket(zmq.PULL)
receiver_from_websocket.bind("tcp://*:5563")
forwarder_to_websocket = context.socket(zmq.PUSH)
forwarder_to_websocket.bind("tcp://*:5564")

# Process messages from both sockets
# We prioritize traffic from the server
while True:
    # forward messages from the server
    while True:
        try:
            message = receiver_from_server.recv(zmq.DONTWAIT)
        except zmq.Again:
            break
        print "Received from server: ", message
        forwarder_to_websocket.send_string(message)

    # forward messages from the websocket
    while True:
        try:
            message = receiver_from_websocket.recv(zmq.DONTWAIT)
        except zmq.Again:
            break
        print "Received from websocket: ", message
        forwarder_to_server.send_string(message)
As you can see, I've set up 4 sockets. The app connects to port 5561 to push data to ZMQ, and to port 5562 to receive from ZMQ (although I'm still figuring out how to actually set it up to listen for messages sent by ZMQ). On the other hand, sockJS receives data from ZMQ on port 5564 and sends data to it on port 5563.
I've read that zmq.DONTWAIT makes receiving a message asynchronous and non-blocking, so I added it.
Is there a way to improve the code so that I don't overload the CPU? The goal is to be able to pass messages between the Flask app and the websocket using ZMQ.
You are polling your two receiver sockets in a tight loop, without any blocking (zmq.DONTWAIT), which will inevitably max out the CPU.
Note that there is some support in ZMQ for polling multiple sockets in a single thread - see this answer. I think you can adjust the timeout in poller.poll(millis) so that your code only uses lots of CPU if there are lots of incoming messages, and idles otherwise.
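As a rough illustration of that approach (a sketch only, reusing the socket names from the question; the 1000 ms timeout is an arbitrary choice):

# receiver_from_server, receiver_from_websocket, forwarder_to_server and
# forwarder_to_websocket are the sockets created in the question's code.
poller = zmq.Poller()
poller.register(receiver_from_server, zmq.POLLIN)
poller.register(receiver_from_websocket, zmq.POLLIN)

while True:
    # Block for up to 1000 ms; returns as soon as either socket has data.
    events = dict(poller.poll(1000))
    if receiver_from_server in events:
        message = receiver_from_server.recv()
        forwarder_to_websocket.send(message)
    if receiver_from_websocket in events:
        message = receiver_from_websocket.recv()
        forwarder_to_server.send(message)

With the poller, the loop sleeps inside poll() instead of spinning, so the CPU stays idle when no messages are arriving.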
Your other option is to use the ZMQ event loop to respond to incoming messages asynchronously, using callbacks. See the PyZMQ documentation on this topic, from which the following "echo" example is adapted:
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream

# set up the context and socket, and a stream wrapped around the socket
ctx = zmq.Context()
s = ctx.socket(zmq.REP)
s.bind('tcp://localhost:12345')
stream = ZMQStream(s)

# Define a callback to handle incoming messages
def echo(msg):
    # in this case, just echo the message back again
    stream.send_multipart(msg)

# register the callback
stream.on_recv(echo)

# start the ioloop to start waiting for messages
ioloop.IOLoop.instance().start()

Is RabbitMQ capable of passing messages to specific clients? Or must I perform those checks client-side?

I have my software running on a bunch of clients around my network. I've been playing around with RabbitMQ as a solution for passing messages between each client.
My test code is this:
#!/usr/bin/python2
import pika
import time

connection = pika.AsyncoreConnection(pika.ConnectionParameters(
    'localhost'))
channel = connection.channel()

def callback(ch, method, properties, body):
    # send messages back on certain events
    if body == '5':
        channel.basic_publish(exchange='',
                              routing_key='test',
                              body='works')
    print body

channel.queue_declare(queue='test')
channel.basic_consume(callback, queue='test', no_ack=True)

for i in range(0, 8):
    channel.basic_publish(exchange='',
                          routing_key='test',
                          body='{}'.format(i))
    time.sleep(0.5)

channel.close()
Picture this as kind of a 'chat program'. Each client will need to constantly listen for messages. At times, the client will need to send messages back to the server.
This code works, but I've run into an issue. When the code above sends out the message 'works', it then retrieves that same message again from the RabbitMQ queue. Is there a way to have my client, which is both a producer and a consumer, not receive the message it just sent?
I can't see this functionality built into RabbitMQ so I figured I'd send messages in the form of:
body='{"client_id" : 1, "message" : "this is the message"}'
Then I can parse that string and check the client_id. The client can then ignore all messages not destined for it.
Is there a better way? Should I look for an alternative to RabbitMQ?
You can have as many queues as you like in RabbitMQ. Why not have a queue for messages to the server as well as a queue for each client?
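For illustration, a minimal sketch of that layout using pika's BlockingConnection API (the queue names and the client id here are hypothetical):

import pika

CLIENT_ID = 1  # hypothetical id for this client

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# One shared queue for messages addressed to the server, plus one queue per client.
channel.queue_declare(queue='to_server')
channel.queue_declare(queue='client_{}'.format(CLIENT_ID))

def on_message(ch, method, properties, body):
    print(body)

# Each client consumes only from its own queue, so it never sees the
# messages it publishes to 'to_server' or to other clients' queues.
channel.basic_consume(queue='client_{}'.format(CLIENT_ID),
                      on_message_callback=on_message,
                      auto_ack=True)

# Messages for the server go to the shared server queue.
channel.basic_publish(exchange='', routing_key='to_server', body='hello from client 1')

channel.start_consuming()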
