I have a class that contains a websocket-client WebSocketApp, that is started in a thread.
The websocket connection receives messages and sets some class variables accordingly.
One example is the login:
A login message is sent, and after a while the on_message function receives a successful login message. When the function catches this message, a self.logged_in variable is set to True.
Currently I'm "waiting" for the variable to become true using a busy wait, which is obviously not very good.
while not websocket.logged_in:
    pass
What I need is something like this
wait(websocket.logged_in=True, timeout=100)
Found a nice solution using synchronized Queues
You just need to call queue.put(var) in the thread where the message arrives.
In the main thread you call queue.get(), which blocks until an element shows up in the queue.
In my case I will probably use multiple queues, each holding just one element, for the different responses I'm going to get.
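For reference, a minimal sketch of that queue-based wait, assuming a websocket-client WebSocketApp; the URL and the "login_ok" check are made up for the example:

import queue
import threading
import websocket

login_events = queue.Queue()

def on_message(ws, message):
    # However you detect the successful login reply:
    if "login_ok" in message:
        login_events.put(True)

ws = websocket.WebSocketApp("wss://example.com/socket", on_message=on_message)
threading.Thread(target=ws.run_forever, daemon=True).start()

# ... send the login message here ...

try:
    logged_in = login_events.get(timeout=100)  # blocks until put() or timeout
except queue.Empty:
    logged_in = False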
I have a callback associated with a rabbitmq queue through pika's basic_consume like so.
channel.basic_consume(queue=REQUESTS_QUEUE,
                      on_message_callback=request_callback,
                      auto_ack=False)
And the request callback function is:
def request_callback(channel, method, properties, body):
    try:
        readings = json_util.loads(body)
        location_updater.update_location(readings)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        logger.exception('EXCEPTION: ')
Whenever the code inside the except block is executed, this particular callback stops working (i.e. it stops being called when a message is sent to its associated queue). All the other callbacks I have associated with other queues keep working fine. If I comment out the try...except logic, the callback keeps working fine for further requests, even after an exception occurs.
I'm still getting used to Python, so it might be something simple. Can anyone help?
I'm assuming the exception comes from a statement before channel.basic_ack, and I'm also assuming you're calling channel.basic_qos to set a prefetch value.
The exception prevents the call to basic_ack, which prevents RabbitMQ from removing the message from the queue. If you have reached the prefetch value, no further messages will be delivered to that client because RabbitMQ assumes your client is still processing them.
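For reference, the prefetch value mentioned above is usually set with basic_qos; the count of 1 here is just an example:

# Deliver at most one unacknowledged message to this consumer at a time.
channel.basic_qos(prefetch_count=1)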
You need to decide what to do with that message when an exception happens. I'm assuming that the message can't be considered to be processed, so you should reject (nack) the message (https://www.rabbitmq.com/nack.html). Do this in the except block. This will cause the message to be re-enqueued and re-delivered, potentially to a different consumer.
Closing the channel and/or the connection will also have the same effect. This ensures that clients that crash do not permanently "hold onto" messages.
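For illustration, a sketch of the same callback with an explicit reject added to the except block; whether to requeue (requeue=True) or let the message be dropped or dead-lettered (requeue=False) is your decision:

def request_callback(channel, method, properties, body):
    try:
        readings = json_util.loads(body)
        location_updater.update_location(readings)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        logger.exception('EXCEPTION: ')
        # Reject the message so RabbitMQ can re-deliver it (possibly
        # to another consumer) instead of leaving it unacked forever.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)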
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Let's say I have an infinite while loop that awaits the read_message() method of a WebSocket Tornado client connection. Then, I externally trigger a function that sends a message and should get an immediate response.
Since everything is asynchronous, I would assume that when that external call takes place, the execution goes over to it. But when I try to listen for a response inside that call, it throws an AssertionError stating that self.read_future is not None, when it should be.
Here are the methods of the client application. A little earlier, it connects to a server and places the connection in the self.conn variable:
async def loop(self):
    while True:
        print(await self.conn.read_message())

async def ext_call(self):
    self.conn.write_message('Hello, World!')
    response = await self.conn.read_message()  # This line fails
Why can't I listen for messages in two separate places?
What you're asking for is ambiguous - which messages would go to which place? It looks like you probably mean for the next message to arrive after you send "Hello world" to be handled in ext_call, and all others to be printed in loop. But how could the system know that? Consider that the "Hello world" message could be sent to the server and the response received before the Python interpreter has executed the read_message call in ext_call, so it would be routed to the waiting read_message in loop.
In general, you want to use one of these patterns but not both at the same time. If you always have matched request/response pairs of messages, you can read after you send as in ext_call. But if you may have messages that are not a part of request/response pairs, you need one loop that reads all the messages and decides what to do with them (perhaps splitting them up by type and emitting them on one or more queues using the tornado.queues module).
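As a rough sketch of that single-reader pattern, assuming JSON messages where replies are marked with a "type" field (that routing rule is an assumption, not something from your code):

import json
from tornado import queues

class Client:
    def __init__(self, conn):
        self.conn = conn
        self.responses = queues.Queue()

    async def loop(self):
        while True:
            msg = await self.conn.read_message()
            if msg is None:  # connection closed
                break
            data = json.loads(msg)
            if data.get("type") == "response":
                await self.responses.put(data)  # hand off to ext_call
            else:
                print(data)

    async def ext_call(self):
        await self.conn.write_message('Hello, World!')
        response = await self.responses.get()  # wait for the routed reply
        return response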
I have a tornado coroutine handler that looks in part like:
class QueryHandler(tornado.web.RequestHandler):
    queryQueues = defaultdict(tornado.queues.Queue)

    @tornado.gen.coroutine
    def get(self, network):
        qq = self.queryQueues[network]
        query = yield qq.get()
        # do some work with the dequeued query
        self.write(response)
On the client side, I use python-requests to long poll it:
fetched = session.get(QueryURL)
I can make a query, the server blocks waiting on the queue until the queue coughs up something to process, and finally responds.
This works pretty slick until... the long poll gets shut down and restarted while the handler is blocking on the queue. When I stop the query on the client side, the handler stays happily blocked. Worse, if I restart the query on the client side, I now have a second handler instance blocking on the queue. So when the queue DOES have data show up, the stale handler processes it and replies into the bit bucket, and the restarted query is now blocked indefinitely.
Is there a pattern I can use to avoid this? I had hoped that when the client side closed, the handler would receive some sort of exception indicating that things have gone south. The queue.get() can have a timeout, but what I really want is not a timeout but a sort of "unless I close" exception.
You want a "queue with guaranteed delivery" which is a hard problem in distributed systems. After all, even if "self.write" succeeds, you can't be certain the other end really received the message.
A basic approach would look like this (a rough sketch follows the list):
each entry in the queue gets an id greater than all previous ids
when the client connects it asks to subscribe to the queue
when a client is disconnected, it reconnects and asks for all entries with ids greater than the last id it saw
when your QueryHandler receives a get with a non-None id, it first serves all entries with ids greater than id, then begins waiting on the queue
when your QueryHandler raises an exception from self.write, ignore it: the client is responsible for retrieving the lost entry
keep all past entries in a list
expire the oldest list entries after some time (hours?)
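A hedged sketch of that replay-then-wait idea, assuming each queued entry is a JSON-serializable dict carrying a monotonically increasing "id" assigned by whoever enqueues it; the last_id argument and entryLog attribute are illustrative names, not part of your handler:

from collections import defaultdict

import tornado.gen
import tornado.queues
import tornado.web

class QueryHandler(tornado.web.RequestHandler):
    queryQueues = defaultdict(tornado.queues.Queue)
    entryLog = defaultdict(list)  # per-network history of delivered entries

    @tornado.gen.coroutine
    def get(self, network):
        last_seen = int(self.get_argument("last_id", "-1"))
        # Replay anything the client missed while it was disconnected.
        missed = [e for e in self.entryLog[network] if e["id"] > last_seen]
        if missed:
            self.write({"entries": missed})
            return
        # Otherwise block on the queue as before. The entry is logged
        # before writing, so if the write goes to a stale connection the
        # client can simply re-ask with its last_id and get it from the log.
        entry = yield self.queryQueues[network].get()
        self.entryLog[network].append(entry)
        self.write({"entries": [entry]})
        # Expiring old entryLog items after some hours is left out here.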
I am wanting to create a RabbitMQ receiver/consumer in Python and am not sure how to check for messages. I am trying to do this in my own loop, not using the call-backs in pika.
If I understand things, in the Java client I can use basicGet() to check whether there are any messages available without blocking. I don't mind blocking while getting messages, but I don't want to block until there is a message.
I don't find any clear examples and haven't yet figured out the corresponding call in pika.
If you want to do it synchronously then you will need to look at the pika BlockingConnection
The BlockingConnection creates a layer on top of Pika’s asynchronous
core providing methods that will block until their expected response
has returned. Due to the asynchronous nature of the Basic.Deliver and
Basic.Return calls from RabbitMQ to your application, you are still
required to implement continuation-passing style asynchronous methods
if you’d like to receive messages from RabbitMQ using basic_consume or
if you want to be notified of a delivery failure when using
basic_publish.
More info and an example here
https://pika.readthedocs.org/en/0.9.12/connecting.html#blockingconnection
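For a one-shot, non-blocking check - roughly the pika counterpart of the Java client's basicGet - a BlockingConnection also offers basic_get, which returns immediately with (None, None, None) when the queue is empty. A small sketch (queue name is just an example):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="test")

# basic_get returns immediately; method is None when the queue is empty.
method, properties, body = channel.basic_get(queue="test", auto_ack=False)
if method is None:
    print("no message waiting")
else:
    print("got: %r" % body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection.close()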
You can also periodically check the queue size using the approach in this answer: Get Queue Size in Pika (AMQP Python).
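That check boils down to a passive queue_declare and reading the message count from the reply, something like this sketch (queue name illustrative):

# Passive declare does not modify the queue; it just returns its state.
queue_state = channel.queue_declare(queue="test", passive=True)
message_count = queue_state.method.message_count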
Queue processing loop can be done iteratively with the help of process_data_events():
import pika

# A stubborn callback that still wants to be in the code.
def mq_callback(ch, method, properties, body):
    print(" Received: %r" % body)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
queue_state = channel.queue_declare(queue="test")

# Configure a callback.
channel.basic_consume(mq_callback, queue="test")

try:
    # My own loop here:
    while True:
        # Do other processing

        # Process message queue events, returning as soon as possible.
        # Issues mq_callback() when applicable.
        connection.process_data_events(time_limit=0)
finally:
    connection.close()
Can both consuming and publishing be done in one Python thread using RabbitMQ channels?
Actually this isn't a problem at all, and you can do it quite easily with, for example, pika. The catch is that you either have to stop consuming (since it's a blocking loop) or do the producing while handling a consumed message.
Consuming and producing together is a normal use case, especially in pika since it isn't thread-safe: for example, when you want to implement some form of filter on the messages, or perhaps a smart router, which in turn passes the messages on to another queue.
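As a rough pika sketch of that filter/router idea, publishing from inside the consume callback on the same channel; queue names are made up for the example:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="incoming")
channel.queue_declare(queue="filtered")

def route(ch, method, properties, body):
    # Publish on the same channel, inside the consume callback,
    # then acknowledge the original message.
    if b"keep" in body:
        ch.basic_publish(exchange="", routing_key="filtered", body=body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="incoming", on_message_callback=route)
channel.start_consuming()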
I don't think you should want to. MQ means asynch processing. Doing both consuming and producing in the same thread defeats the purpose in my opinion.
I'd recommend taking a look at Celery (http://celery.readthedocs.org/en/latest/) to manage worker tasks. With that, you won't need to integrate with RMQ directly as it will handle the producing and consuming for you.
But, if you do desire to integrate with RMQ directly and manage your own workers, check out Kombu (http://kombu.readthedocs.org/en/latest/) for the integration. There are non-blocking consumers and producers that would permit you to have both in the same event loop.
I think the simple answer to your question is yes, but it depends on what you want to do. My guess is you have a loop that is consuming from your thread on one channel, and after some (small or large) processing it decides to send the result on to another queue (or exchange) on a different channel; I do not see any problem with that at all. Though it might be preferable to dispatch it to a different thread, it is not necessary.
If you give more details about your process then it might help give a more specific answer.
Kombu is a common python library for working with RabbitMQ (Celery uses it under the hood). It is worth pointing out here that the answer to your question for the simplest use of Kombu that I tried is "No - you can't receive and publish on the same consumer callback thread."
Specifically, if there are several messages in the queue for a consumer that has registered a callback for that topic, and that callback does some processing and publishes the results, then publishing the result will cause the 2nd message in the queue to hit the callback before it has returned from the publish for the 1st message - so you end up with a recursive call to the callback. If you have n messages on the queue, your call stack will end up n messages deep before it unwinds. Obviously that explodes pretty quickly.
One solution (not necessarily the best) is to have the callback just post the message into a simple queue internal to the consumer, which can then be processed on the main process thread (i.e. off the callback thread):
def process_message(self, body: str, message: Message):
    # Queue the message for processing off this thread:
    print("Start process_message ----------------")
    if self.publish_on_callback:
        self.do_process_message(body, message)
    else:
        self.queue.put((body, message))
    print("End process_message ------------------")

def do_process_message(self, body: str, message: Message):
    # Deserialize and "Process" the message:
    print(f"Process message: {body}")
    # ... msg processing code...

    # Publish a processing output:
    processing_output = self.get_processing_output()
    print(f"Publishing processing output: {processing_output}")
    self.rabbit_msg_transport.publish(Topics.ProcessingOutputs, processing_output)

    # Acknowledge the message:
    message.ack()

def run_message_loop(self):
    while True:
        print("Waiting for incoming message")
        self.rabbit_connection.drain_events()
        while not self.queue.empty():
            body, message = self.queue.get(block=False)
            self.do_process_message(body, message)
In the snippet above, process_message is the callback. If publish_on_callback is True, you'll see recursion in the callback n deep for n messages on the rabbit queue. If publish_on_callback is False, it runs correctly without recursion in the callback.
Another approach is to use a second Connection for the producer exchange - separate from the Connection used for the consumer. This also works, in that the callback from consuming a message and publishing the result completes before the callback is fired again for the next message on the queue.
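A hedged Kombu sketch of that two-connection approach; the broker URL, queue and exchange names are illustrative:

from kombu import Connection, Exchange, Queue

broker_url = "amqp://guest:guest@localhost//"
inputs = Queue("inputs", Exchange("inputs", type="direct"), routing_key="inputs")
outputs = Exchange("outputs", type="direct")

consume_conn = Connection(broker_url)  # used only for consuming
publish_conn = Connection(broker_url)  # used only for publishing
producer = publish_conn.Producer()

def on_message(body, message):
    result = {"processed": body}  # stand-in for real processing
    # Publishing on the separate connection keeps the consumer callback
    # from being re-entered before it has returned.
    producer.publish(result, exchange=outputs, routing_key="outputs",
                     declare=[outputs])
    message.ack()

with consume_conn.Consumer(inputs, callbacks=[on_message]):
    while True:
        consume_conn.drain_events()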