I want to create a RabbitMQ receiver/consumer in Python and am not sure how to check for messages. I am trying to do this in my own loop, not using the callbacks in pika.
If I understand things, in the Java client I can use basicGet() to check whether any messages are available without blocking. I don't mind blocking while getting messages, but I don't want to block until there is a message.
I don't find any clear examples and haven't yet figured out the corresponding call in pika.
If you want to do it synchronously then you will need to look at pika's BlockingConnection.
The BlockingConnection creates a layer on top of Pika’s asynchronous
core providing methods that will block until their expected response
has returned. Due to the asynchronous nature of the Basic.Deliver and
Basic.Return calls from RabbitMQ to your application, you are still
required to implement continuation-passing style asynchronous methods
if you’d like to receive messages from RabbitMQ using basic_consume or
if you want to be notified of a delivery failure when using
basic_publish.
More info and an example here
https://pika.readthedocs.org/en/0.9.12/connecting.html#blockingconnection
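For the non-blocking "is there a message?" check the question asks about, pika's closest equivalent to Java's basicGet is BlockingChannel.basic_get. A minimal sketch, assuming pika 1.x and a queue named "test":

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# basic_get polls the queue once and returns immediately.
method, properties, body = channel.basic_get(queue="test", auto_ack=False)
if method is None:
    print("No message available right now")
else:
    print("Received: %r" % body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection.close()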
You can periodically check the queue size using the example from this answer: Get Queue Size in Pika (AMQP Python)
The queue-processing loop can be done iteratively with the help of process_data_events():
import pika

# A stubborn callback that still wants to be in the code.
def mq_callback(ch, method, properties, body):
    print(" Received: %r" % body)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
queue_state = channel.queue_declare(queue="test")

# Configure the callback (pika 1.x keyword arguments).
channel.basic_consume(queue="test", on_message_callback=mq_callback)

try:
    # My own loop here:
    while True:
        # Do other processing

        # Process message queue events, returning as soon as possible.
        # Issues mq_callback() when applicable.
        connection.process_data_events(time_limit=0)
finally:
    connection.close()
I want to share a BlockingChannel across multiple Python processes, in order to send basic_ack from another Python process. How can I share the BlockingChannel across multiple Python processes?
Following is the code:
self.__connection__ = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
self.__channel__ = self.__connection__.channel()
I have tried to dump it using pickle, but it doesn't allow dumping the Channel and gives the error can't pickle select.epoll objects when using the following code:
filepath = "temp/" + "merger_channel.sav"
pickle.dump(self.__channel__, open(filepath, 'wb'))
GOAL:
The goal is to send basic_ack on the channel from other Python processes.
It is an antipattern to share a channel between multiple threads, and it's quite unlikely you will manage to share it between processes.
The rule of thumb is 1 connection per process and 1 channel per thread.
You can read more on this matter at the following links:
13 common RabbitMQ mistakes
RabbitMQ best practices
This SO thread gives an in-depth analysis of RabbitMQ and concurrent consumption
If you want to pair message consumption together with multiprocessing, the usual pattern is to let the main process receive the messages, deliver their payload to a pool of worker processes, and acknowledge them once they are done.
A simple example using pika.BlockingChannel and concurrent.futures.ProcessPoolExecutor:
import functools
from concurrent.futures import ProcessPoolExecutor
import pika

def ack_message(channel, delivery_tag, _future):
    """Called once the message has been processed.
    Acknowledge the message to RabbitMQ.
    """
    channel.basic_ack(delivery_tag=delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
pool = ProcessPoolExecutor()

# process_message is your worker function, executed in the pool of processes.
for method, properties, body in channel.consume(queue='example'):
    future = pool.submit(process_message, body)
    # Use partial to pass the channel and delivery tag to the callback function.
    ack_message_callback = functools.partial(ack_message, channel, method.delivery_tag)
    future.add_done_callback(ack_message_callback)
The above loop will endlessly consume messages from the example queue and submit them to the pool of processes. You can control how many messages are processed concurrently via the RabbitMQ consumer prefetch parameter. Check basic_qos to see how to do it in Python.
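For example, assuming pika 1.x, setting the prefetch limit is a single call on the channel before starting the consume loop:

# Allow at most 10 unacknowledged messages to be in flight at once.
channel.basic_qos(prefetch_count=10)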
I have a Python server that is available through a websocket endpoint.
While serving a connection, it also communicates with some backend services. This communication is asynchronous and may trigger the send() method of the websocket.
When a single client is served, it seems to work fine. However, when multiple clients are served in parallel, some of the routines that handle the connections occasionally get stuck. More precisely, they seem to block in the recv() method.
The actual code is somewhat complex and the issue is slightly more complicated than I have described; nevertheless, I provide a minimal skeleton of code that sketches the way in which I use the websockets:
import asyncio
import logging
import random
import time

logger = logging.getLogger(__name__)


class MinimalConversation(object):
    def __init__(self, ws, worker_sck, messages, should_continue_conversation, should_continue_listen):
        self.ws = ws
        self.messages = messages
        self.worker_sck = worker_sck
        self.should_continue_conversation = should_continue_conversation
        self.should_continue_listen = should_continue_listen

    async def run_conversation(self):
        serving_future = asyncio.ensure_future(self.serve_connection())
        listening_future = asyncio.ensure_future(self.handle_worker())
        await asyncio.wait([serving_future, listening_future], return_when=asyncio.ALL_COMPLETED)

    async def serve_connection(self):
        while self.should_continue_conversation():
            await self.ws.recv()
            logger.debug("Message received")
            self.sleep_randomly(10, 5)
            await self.worker_sck.send(b"Dummy")

    async def handle_worker(self):
        while self.should_continue_listen():
            self.sleep_randomly(50, 40)
            await self.worker_sck.recv()
            await self.ws.send(self.messages.pop())

    def sleep_randomly(self, mean, dev):
        delta = random.randint(1, dev) / 1000
        if random.random() < .5:
            delta *= -1
        time.sleep(mean / 1000 + delta)
Obviously, in the real code I do not sleep for random intervals and don't use a given list of messages, but this sketches the way I handle the websockets. In the real setting, some errors may occur that are sent over the websocket too, so parallel send()s may occur in theory, but I have never encountered such a situation.
The code is run from a handler function which is passed as a parameter to websockets.serve(); it initializes the MinimalConversation object and calls the run_conversation() method.
My questions are:
Is there something fundamentally wrong with such usage of the websockets?
Are concurrent calls of the send() methods dangerous?
Can you suggest some good practices regarding usage of websockets and asyncio?
Thank you.
The recv function yields back only when a message is received, and it seems that there are 2 connections awaiting messages from each other, so there might be a situation similar to a "deadlock" where they are waiting for each other's messages and can't send anything. Maybe you should try to rethink the overall algorithm to be safer from this.
And, of course, try adding more debug output and see what really happens.
are concurrent calls of the send() methods dangerous?
If by concurrent you mean in the same thread but in independently scheduled coroutines, then parallel send is just fine. But be careful with "parallel" recv on the same connection, because the order of coroutine scheduling might be far from obvious, and it's what decides which call to recv will get a message first.
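A small illustration of that point, assuming ws is an already-open websockets connection: two independently scheduled coroutines may both call send() on it.

import asyncio

async def notify(ws, text):
    await ws.send(text)

async def send_both(ws):
    # Both sends run in the same thread, in separately scheduled coroutines.
    await asyncio.gather(notify(ws, "from coroutine A"),
                         notify(ws, "from coroutine B"))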
Can you suggest some good practices regarding usage of websockets and asyncio?
In my experience, the easiest way is to create a dedicated task for incoming connections which will repeatedly call recv on the connection until the connection is closed. You can store the connection somewhere and delete it in a finally block; then it can be used from other coroutines to send something.
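A minimal sketch of that pattern with the websockets library (the handler and connection-set names are illustrative; depending on your websockets version the handler may also receive a path argument):

import asyncio
import websockets

connected = set()

async def handler(ws):
    """Dedicated per-connection task: the only place that calls recv()."""
    connected.add(ws)
    try:
        async for message in ws:          # loops until the connection is closed
            print("received:", message)   # hand off to application logic here
    finally:
        connected.discard(ws)             # forget the connection once it closes

async def broadcast(payload):
    """Other coroutines can use the stored connections to send something."""
    for ws in set(connected):
        await ws.send(payload)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()            # run forever

# asyncio.run(main())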
I am writing a consumer that needs to consume from two different queues:
1 -> for the actual messages (queue declared beforehand).
2 -> for command messages to control the behavior of the consumer (dynamically declared by the consumer and bound to an existing exchange with a routing key in a specific format; one is needed for each instance of the consumer running).
I am using a SelectConnection to consume asynchronously.
self.channel.basic_qos(prefetch_count=self.prefetch_count)
log.info("Establishing channel with the Queue: " + self.commandQueue)
print "declaring command queue"
self.channel.queue_declare(queue=self.commandQueue,
                           durable=True,
                           exclusive=False,
                           auto_delete=True,
                           callback=self.on_command_queue_declared)
The queue is not being declared, or the callback is not getting called.
On the other hand, the messages from the actual message queue have not been consumed since I added this block of code.
The pika logs do not show any errors, nor does the consumer app crash.
Does anybody know why this is happening, or is there a better way to do this?
Have you looked at the example here: http://pika.readthedocs.org/en/latest/examples/asynchronous_consumer_example.html ?
And some blocking examples:
http://pika.readthedocs.org/en/latest/examples/blocking_consume.html
http://pika.readthedocs.org/en/latest/examples/blocking_consumer_generator.html
Blocking and Select connection comparison: http://pika.readthedocs.org/en/latest/examples/comparing_publishing_sync_async.html
Blocking and Select connections in pika 0.10.0 pre-release are faster and there are a number of bug fixes in that version.
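For the two-queue case specifically, a rough sketch of the asynchronous flow (assuming pika 1.x keyword arguments; the messageQueue, on_message and on_command names are illustrative) is to start consuming the pre-declared queue right away and start the second basic_consume only inside the queue_declare callback:

def on_channel_open(self, channel):
    self.channel = channel
    self.channel.basic_qos(prefetch_count=self.prefetch_count)
    # Consume the pre-declared message queue right away.
    self.channel.basic_consume(queue=self.messageQueue,
                               on_message_callback=self.on_message)
    # Declare the per-instance command queue; consume it from the callback.
    self.channel.queue_declare(queue=self.commandQueue,
                               durable=True,
                               exclusive=False,
                               auto_delete=True,
                               callback=self.on_command_queue_declared)

def on_command_queue_declared(self, frame):
    self.channel.basic_consume(queue=self.commandQueue,
                               on_message_callback=self.on_command)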
Can both consuming and publishing be done in one Python thread using RabbitMQ channels?
Actually this isn't a problem at all, and you can do it quite easily with, for example, pika. The problem, however, is that you'd have to stop the consuming, since it's a blocking loop, or do the producing during the consumption of a message.
Consuming and producing is a normal use case, especially in pika since it isn't thread-safe: for example, when you want to implement some form of filter on the messages, or perhaps a smart router, which in turn will pass the messages on to another queue.
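A rough sketch of that filter/router idea with pika's BlockingConnection (assuming pika 1.x and queues named "in" and "out" that already exist): the callback consumes from one queue and publishes to another on the same channel, all in one thread.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

def route(ch, method, properties, body):
    # Publish from within the consume callback: same channel, same thread.
    if b"keep" in body:
        ch.basic_publish(exchange="", routing_key="out", body=body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="in", on_message_callback=route)
channel.start_consuming()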
I don't think you should want to. MQ means asynchronous processing; doing both consuming and producing in the same thread defeats the purpose, in my opinion.
I'd recommend taking a look at Celery (http://celery.readthedocs.org/en/latest/) to manage worker tasks. With that, you won't need to integrate with RMQ directly, as it will handle the producing and consuming for you.
But if you do want to integrate with RMQ directly and manage your own workers, check out Kombu (http://kombu.readthedocs.org/en/latest/) for the integration. It has non-blocking consumers and producers that would permit you to have both in the same event loop.
I think the simple answer to your question is yes, but it depends on what you want to do. My guess is you have a loop that is consuming from your thread on one channel and, after some (small or large) processing, decides to send it on to another queue (or exchange) on a different channel; I do not see any problem with that at all. Though it might be preferable to dispatch it to a different thread, it is not necessary.
If you give more details about your process then it might help give a more specific answer.
Kombu is a common Python library for working with RabbitMQ (Celery uses it under the hood). It is worth pointing out here that the answer to your question, for the simplest use of Kombu that I tried, is "No - you can't receive and publish on the same consumer callback thread."
Specifically, if there are several messages in the queue for a consumer that has registered a callback for that topic, and that callback does some processing and publishes the results, then publishing the result will cause the 2nd message in the queue to hit the callback before it has returned from the publish of the 1st message - so you end up with a recursive call to the callback. If you have n messages on the queue, your call stack will end up n messages deep before it unwinds. Obviously that explodes pretty quickly.
One solution (not necessarily the best) is to have the callback just post the message into a simple queue internal to the consumer, which can then be processed on the main process thread (i.e. off the callback thread):
def process_message(self, body: str, message: Message):
    # Queue the message for processing off this thread:
    print("Start process_message ----------------")
    self.do_process_message(body, message) if self.publish_on_callback else self.queue.put((body, message))
    print("End process_message ------------------")

def do_process_message(self, body: str, message: Message):
    # Deserialize and "Process" the message:
    print(f"Process message: {body}")
    # ... msg processing code...

    # Publish a processing output:
    processing_output = self.get_processing_output()
    print(f"Publishing processing output: {processing_output}")
    self.rabbit_msg_transport.publish(Topics.ProcessingOutputs, processing_output)

    # Acknowledge the message:
    message.ack()

def run_message_loop(self):
    while True:
        print("Waiting for incoming message")
        self.rabbit_connection.drain_events()
        while not self.queue.empty():
            body, message = self.queue.get(block=False)
            self.do_process_message(body, message)
In the snippet above, process_message is the callback. If publish_on_callback is True, you'll see recursion in the callback n deep for n messages on the rabbit queue. If publish_on_callback is False, it runs correctly without recursion in the callback.
Another approach is to use a second Connection for the producer exchange - separate from the Connection used for the consumer. This also works, so that the callback from consuming a message and publishing the result completes before the callback is fired again for the next message on the queue.
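A rough sketch of that second-connection idea with Kombu (the broker URL, exchange and routing key are illustrative): the producer gets its own Connection, separate from the one whose drain_events() drives the consumer callback.

from kombu import Connection, Exchange, Producer

# This connection is used only for publishing; the consumer keeps its own.
producer_connection = Connection("amqp://guest:guest@localhost//")
outputs_exchange = Exchange("processing_outputs", type="topic")
producer = Producer(producer_connection.channel(), exchange=outputs_exchange)

def publish_output(processing_output):
    # Publishing on a separate connection lets the consumer callback return
    # before the next queued message is delivered to it.
    producer.publish(processing_output, routing_key="processing.outputs")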
Currently, I am using ZeroRPC; I have "workers" connect to the "server" and do the work that the server sends them.
Currently, calls are made over ZeroRPC as soon as there is a call to make; as far as I can tell, it uses a FIFO queue.
I would like to use my own queue so that I throttle/prioritize the calls.
I'm hoping that ZeroRPC exposes a gevent Event that triggers when its internal queue runs empty.
What you want to do is create your own work queue in your server, and dispatch the calls yourself in the priorities you wish.
Since a few lines of code express more than any vampire story in 3 volumes, let's see in pseudo-code what the server could look like:
myqueue = MySuperBadAssQueue()

def myqueueprocessor():
    for request in myqueue:            # blocks until next request
        gevent.spawn(request.process)  # do the job asynchronously

gevent.spawn(myqueueprocessor)         # do that at startup

class Server:
    def dosomething(args...blabla...):  # what users are calling
        request = Request(args...blabla...)
        myqueue.put(request)            # something to do buddy!
        return request.future.get()     # return when request is completed
                                        # (can also raise an exception)

# An example of what a request could look like:
class Request:
    def __init__(self, ....blablabla...):
        self.future = gevent.event.AsyncResult()

    def process(self):
        try:
            result = someworker(*self.args)  # call some worker
            self.future.set(result)          # complete the initial request
        except Exception as e:
            self.future.set_exception(e)
It's up to MySuperBadAssQueue to do all the smart work: throttle if you want, cancel a request with an exception if necessary, etc.
ZeroRPC does not expose any event to let you know if its 'internal' queue runs empty:
In fact, there is no explicit queue in ZeroRPC. What happens is simply first come, first served, and the exact order depends on both ZeroMQ and the gevent IOLoop (libevent or libev depending on the version). It happens that in practice, this conveniently plays like a FIFO queue.
I haven't tried this myself, but I have read through the source. I am motivated because I want to do this myself.
Seems like what you would do is inherit zerorpc.Server and override the _acceptor method. According to the source, _acceptor is what receives messages and then spawns threads to run them. So if you change up the logic/loop to incorporate your queue, you can use that to throttle.