Kafka producer difference between flush and poll - python

We have a Kafka consumer that reads messages, does some processing, and republishes them to a Kafka topic using the script below.
Producer config:
{
"bootstrap.servers": "localhost:9092"
}
I haven't configured any other settings such as queue.buffering.max.messages, queue.buffering.max.ms, or batch.num.messages.
I assume these will all take their default values:
queue.buffering.max.messages : 100000
queue.buffering.max.ms : 0
batch.num.messages : 10000
My understanding: when the internal queue reaches either the queue.buffering.max.ms or the batch.num.messages limit, messages are published to Kafka in a separate thread. In my configuration queue.buffering.max.ms is 0, so every message will be published as soon as I call produce(). Correct me if I am wrong.
My producer snippet:
def send(topic, message):
    # Pass the callback itself, not its result: delivery_callback(err, msg)
    # would call it immediately with undefined arguments.
    p.produce(topic, json.dumps(message), callback=delivery_callback)
    p.flush()
From this post I understand that by calling flush() after every message, the producer becomes a sync producer. With the script above it takes ~45 ms to publish each message to Kafka.
If I change the above snippet to:
def send(topic, message):
    p.produce(topic, json.dumps(message), callback=delivery_callback)
    p.poll(0)
will performance improve? Can you clarify my understanding?
Thanks

The difference between flush() and poll() is explained in the client's documentation.
For flush(), it states:
Wait for all messages in the Producer queue to be delivered. This is a
convenience method that calls poll() until len() is zero or the
optional timeout elapses.
For poll():
Polls the producer for events and calls the corresponding callbacks
(if registered).
Calling poll() immediately after produce() does not make the producer synchronous, as it's unlikely that the message just sent has already reached the broker and that a delivery report has already been sent back to the client.
Instead, flush() will block until all previously sent messages have been delivered (or have errored), effectively making the producer synchronous.
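The behavioural difference can be sketched with a toy producer that mimics the produce()/poll()/flush() buffering model. This is an illustration only, not the confluent-kafka API: the DemoProducer class and its delivery thread are hypothetical stand-ins.

```python
import queue
import threading
import time

class DemoProducer:
    """Toy buffering producer: produce() merely enqueues, a background
    thread "delivers" the message, and delivery callbacks run on the
    caller's thread from poll()/flush()."""

    def __init__(self):
        self._buffer = queue.Queue()    # messages waiting to be delivered
        self._reports = queue.Queue()   # delivery reports waiting for poll()
        self._pending = 0               # produced but not yet reported
        threading.Thread(target=self._deliver, daemon=True).start()

    def produce(self, topic, value, callback):
        self._pending += 1
        self._buffer.put((topic, value, callback))  # non-blocking enqueue

    def _deliver(self):
        while True:
            topic, value, callback = self._buffer.get()
            time.sleep(0.01)            # simulated broker round-trip
            self._reports.put((callback, None, (topic, value)))

    def poll(self, timeout):
        """Serve whatever delivery callbacks are ready within `timeout` seconds."""
        deadline = time.monotonic() + timeout
        while True:
            try:
                remaining = max(0.0, deadline - time.monotonic())
                callback, err, msg = self._reports.get(timeout=remaining)
            except queue.Empty:
                return
            callback(err, msg)
            self._pending -= 1

    def flush(self):
        """Block until every produced message has a delivery report."""
        while self._pending > 0:
            self.poll(0.05)

reports = []
p = DemoProducer()
p.produce("topic", "m1", callback=lambda err, msg: reports.append(msg))
p.poll(0)     # returns at once; the report is almost certainly not ready yet
p.flush()     # blocks until delivery is confirmed, like a sync producer
```

With this model the question's two snippets differ exactly as the answer says: poll(0) only serves reports that happen to be ready, while flush() waits for the outstanding count to reach zero.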

Related

How to stop consumer in Rabbit MQ once the message queue is empty in pika library?

I am basically trying to send messages from a producer to a consumer in RabbitMQ using the Python client (the pika library). By default the receiver keeps running even after reading the messages, because it waits for further messages. What I want, per my requirement, is for the receiver to stop once it has read all the messages from the queue, i.e. when the queue is empty; or at least it should read messages one by one, and when I turn it on again after a defined period it should read the remaining messages. The main concern is stopping the receiver. How can I do that with Python's pika library?
the receiver should stop once after it reads all the messages from the queue and basically when the queue is empty
Since queues can always be published to, are they ever really "empty"? You need to come up with a condition that defines "empty", something like "has not had a message within the last 5 seconds" or "consumer saw a particular STOP message".
I recently answered a similar question:
Closing idle consumer which handles long running task in rabbitmq pika
Please see this code, which demonstrates a consumer that stops after 5 seconds of inactivity:
https://github.com/lukebakken/so-pika-idle-consumer-72792217/blob/master/consumer.py
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
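The "stop after N seconds of inactivity" condition can be sketched broker-free with a plain queue; the same shape maps onto pika by scheduling a timeout (e.g. with BlockingConnection.call_later) that cancels the consumer, as the linked consumer.py does. The helper below is illustrative, not pika's API:

```python
import queue

def consume_until_idle(q, handle, idle_timeout=5.0):
    """Consume messages from q, stopping once no message has arrived
    for idle_timeout seconds -- the "queue is empty" condition made
    concrete as an inactivity window."""
    while True:
        try:
            msg = q.get(timeout=idle_timeout)
        except queue.Empty:
            return          # idle for idle_timeout seconds: stop the consumer
        handle(msg)

seen = []
q = queue.Queue()
for i in range(3):
    q.put(i)
consume_until_idle(q, seen.append, idle_timeout=0.2)
```

After the call returns, seen holds [0, 1, 2] and the consumer has shut itself down instead of blocking forever.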

integration testing rabbitmq consumer

I have a simple consumer written in python using pika and rabbitmq.
The consumer connects to rabbitmq and listens on a queue. When a message arrives it transforms the message and publishes it on another queue.
Presented here: https://bitbucket.org/snippets/fbanke/8e7zbX
I would like to make test cases to test the interaction between the consumer and the queue. For example, I would like to make sure that when a message is consumed the "basic_ack" function is called to let rabbitmq know that the message was processed.
Another test case is if the consumer reconnects to rabbitmq if the connection gets dropped.
And so on. I want to test the interaction between the consumer and the queue not the actual business logic in the consumer.
If I mock the pika objects it requires me to understand 100% how the API behaves and any misunderstanding of the API will result in faulty code. Code that passes the tests, but actually doesn't work.
I would rather test the consumer using a live queue, and manipulate it from the test to see if the consumer behaves as expected.
For example
1. setup the queue
2. start the consumer
3. publish a valid message to the queue
4. assert that the message was consumed by the worker
or
1. setup the queue
2. start the consumer
3. kill the queue
4. assert that the worker terminated as expected
Does any best practice exist for this? I can find many examples of similar tests against databases, but not against queues. It seems I need to start the consumer in a separate thread and drive it from the test, but there appears to be no infrastructure to support this.
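The consumer-in-a-thread pattern from the numbered steps can be sketched without a live broker by substituting in-process queues for the RabbitMQ queues; with a real server you would point the same Worker at a test vhost instead. The Worker class, its upper-casing "transform", and the acked counter are all illustrative stand-ins:

```python
import queue
import threading

class Worker:
    """Minimal consumer under test: reads from an input queue, transforms,
    publishes to an output queue, and counts an 'ack' per message."""

    _STOP = object()   # sentinel used by the test to shut the worker down

    def __init__(self, inbox, outbox):
        self.inbox, self.outbox = inbox, outbox
        self.acked = 0

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is Worker._STOP:
                return
            self.outbox.put(msg.upper())   # the "transform"
            self.acked += 1                # stands in for basic_ack

    def stop(self):
        self.inbox.put(Worker._STOP)

# Steps 1-4 from the question, as a test:
inbox, outbox = queue.Queue(), queue.Queue()   # 1. set up the queues
worker = Worker(inbox, outbox)
t = threading.Thread(target=worker.run)
t.start()                                      # 2. start the consumer
inbox.put("hello")                             # 3. publish a valid message
result = outbox.get(timeout=1)                 # 4. assert it was consumed
worker.stop()
t.join(timeout=1)
```

The timeout on outbox.get doubles as the test's failure condition: if the worker never consumes the message, the test raises instead of hanging.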

how to Asynchronously consume from 2 RabbitMQ queues from one consumer using pika

I am writing a consumer that needs to consume from two different queues:
1-> for the actual messages (queue declared beforehand).
2-> for command messages to control the behavior of the consumer (declared dynamically by the consumer and bound to an existing exchange with a routing key in a specific format; one is needed for each running consumer instance).
I am using a SelectConnection to consume asynchronously.
self.channel.basic_qos(prefetch_count=self.prefetch_count)
log.info("Establishing channel with the Queue: " + self.commandQueue)
print("declaring command queue")
self.channel.queue_declare(queue=self.commandQueue,
                           durable=True,
                           exclusive=False,
                           auto_delete=True,
                           callback=self.on_command_queue_declared)
The queue is not being declared, or the callback is not getting called.
On the other hand, messages from the actual message queue are no longer consumed since I added this block of code.
The pika logs show no errors, and the consumer app does not crash.
Does anybody know why this is happening, or is there a better way to do this?
Have you looked at the example here: http://pika.readthedocs.org/en/latest/examples/asynchronous_consumer_example.html ?
And some blocking examples:
http://pika.readthedocs.org/en/latest/examples/blocking_consume.html
http://pika.readthedocs.org/en/latest/examples/blocking_consumer_generator.html
Blocking and Select connection comparison: http://pika.readthedocs.org/en/latest/examples/comparing_publishing_sync_async.html
Blocking and Select connections in pika 0.10.0 pre-release are faster and there are a number of bug fixes in that version.
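Independent of pika, the shape of consuming from two queues inside one event loop can be sketched with asyncio; this is a stand-in illustration of one-task-per-queue, not pika's SelectConnection API:

```python
import asyncio

async def consume(name, q, out):
    # One consumer task per queue; both run concurrently in a single
    # event loop, the same shape as registering two consumers on one
    # SelectConnection.
    while True:
        msg = await q.get()
        if msg is None:          # sentinel: stop this consumer
            return
        out.append((name, msg))

async def main():
    data_q, command_q = asyncio.Queue(), asyncio.Queue()
    out = []
    tasks = [
        asyncio.create_task(consume("data", data_q, out)),
        asyncio.create_task(consume("command", command_q, out)),
    ]
    await data_q.put("payload-1")
    await command_q.put("pause")
    await data_q.put(None)       # shut both consumers down
    await command_q.put(None)
    await asyncio.gather(*tasks)
    return out

out = asyncio.run(main())
```

Both consumers make progress without blocking each other, which is what the question's second (command) queue needs alongside the data queue.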

RabbitMQ: can both consuming and publishing be done in one thread?

can both consuming and publishing be done in one Python thread using RabbitMQ channels?
Actually this isn't a problem at all, and you can do it quite easily with, for example, pika. The catch is that you'd have to stop consuming, since it's a blocking loop, or do the producing while handling a consumed message.
Consuming and producing together is a normal use case, especially in pika since it isn't thread-safe: for example, when you implement some form of filter on the messages, or perhaps a smart router, which in turn passes the messages on to another queue.
I don't think you should want to. MQ means async processing. Doing both consuming and producing in the same thread defeats the purpose, in my opinion.
I'd recommend taking a look at Celery (http://celery.readthedocs.org/en/latest/) to manage worker tasks. With that, you won't need to integrate with RMQ directly, as it will handle the producing and consuming for you.
But, if you do desire to integrate with RMQ directly and manage your own workers, check out Kombu (http://kombu.readthedocs.org/en/latest/) for the integration. There are non-blocking consumers and producers that would permit you to have both in the same event loop.
I think the simple answer to your question is yes, but it depends on what you want to do. My guess is you have a loop consuming from your thread on one channel, and after some (small or large) processing it decides to send the message on to another queue (or exchange) on a different channel; I do not see any problem with that at all. Though it might be preferable to dispatch it to a different thread, it is not necessary.
If you give more details about your process then it might help give a more specific answer.
Kombu is a common python library for working with RabbitMQ (Celery uses it under the hood). It is worth pointing out here that the answer to your question for the simplest use of Kombu that I tried is "No - you can't receive and publish on the same consumer callback thread."
Specifically, if there are several messages in the queue for a consumer that has registered a callback for that topic, and that callback does some processing and publishes the results, then publishing the result will cause the 2nd message in the queue to hit the callback before it has returned from publishing for the 1st message, so you end up with a recursive call to the callback. If you have n messages on the queue, your call stack will end up n messages deep before it unwinds. Obviously that explodes pretty quickly.
One solution (not necessarily the best) is to have the callback just post the message into a simple queue internal to the consumer that could be processed on the main process thread (i.e. off the callback thread)
def process_message(self, body: str, message: Message):
    # Queue the message for processing off this thread:
    print("Start process_message ----------------")
    if self.publish_on_callback:
        self.do_process_message(body, message)
    else:
        self.queue.put((body, message))
    print("End process_message ------------------")

def do_process_message(self, body: str, message: Message):
    # Deserialize and "process" the message:
    print(f"Process message: {body}")
    # ... msg processing code...
    # Publish a processing output:
    processing_output = self.get_processing_output()
    print(f"Publishing processing output: {processing_output}")
    self.rabbit_msg_transport.publish(Topics.ProcessingOutputs, processing_output)
    # Acknowledge the message:
    message.ack()

def run_message_loop(self):
    while True:
        print("Waiting for incoming message")
        self.rabbit_connection.drain_events()
        while not self.queue.empty():
            body, message = self.queue.get(block=False)
            self.do_process_message(body, message)
In the snippet above, process_message is the callback. If publish_on_callback is True, you'll see the callback recurse n levels deep for n messages on the rabbit queue. If publish_on_callback is False, it runs correctly, with no recursion in the callback.
Another approach is to use a second Connection for the producer exchange, separate from the Connection used for the consumer. This also works: the callback that consumes a message and publishes the result completes before the callback is fired again for the next message on the queue.

How to delete or postpone a message in the AMQP queue

I am using txamqp python library to connect to an AMQP broker (RabbitMQ) and i have a consumer with the following callback:
@defer.inlineCallbacks
def message_callback(self, message, queue, chan):
    """This callback is a queue listener:
    it is called whenever a message is consumed from the queue.
    c.f. test_amqp.ConsumeTestCase for use cases
    """
    # Re-register the callback here to keep getting further messages from the queue
    queue.get().addCallback(self.message_callback, queue, chan).addErrback(self.message_errback)
    print " [x] Received a valid message: [%r]" % (message.content.body,)
    yield self.smpp.sendDataRequest(SubmitSmPDU)
    # ACK the message; this removes it from the queue
    chan.basic_ack(message.delivery_tag)
When ACKing a message, it is deleted from the queue (to confirm?). But what happens when the message is not ACKed? I need a retry mechanism where I can postpone the message so it is delivered to the callback again later, and keep track of how many retries it took.
And how can I list/delete messages from a queue?
RabbitMQ has a nice management plugin; however, even that doesn't allow one to delete individual messages from queues.
You would basically have to write your own application, or figure out which of the third-party management applications can delete messages.
It's resolved: in order to retry a message from the queue, reject it with the requeue flag set; it will then be enqueued back onto the queue.
If I reject it with a timer (callLater in Twisted), the re-enqueuing is postponed for however long I want.
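The postpone-and-count-retries idea can be sketched broker-free; the dict-shaped message, the requeue_later helper, and the MAX_RETRIES policy below are illustrative, not txamqp's API, but the shape matches rejecting with requeue after a callLater timer:

```python
import queue
import threading

MAX_RETRIES = 3

def requeue_later(q, msg, delay):
    """Put msg back on q after `delay` seconds, bumping its retry counter --
    the same effect as a delayed reject-with-requeue."""
    msg["retries"] += 1
    threading.Timer(delay, q.put, args=(msg,)).start()

def handle(q, msg, results):
    if msg["retries"] < MAX_RETRIES:
        requeue_later(q, msg, delay=0.05)   # not acked: postpone and retry
    else:
        results.append(msg)                 # "ack": done, drop from the queue

q = queue.Queue()
q.put({"body": "pdu-1", "retries": 0})
results = []
while not results:
    handle(q, q.get(timeout=1), results)
```

Carrying the retry count on the message itself (in AMQP you would typically use a header) is what lets the consumer decide when to give up.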
