I have a task queue in RabbitMQ with multiple producers (12) and a single consumer for heavy tasks in a web app. When I run the consumer, it starts dequeuing some of the messages before crashing with this error:
Traceback (most recent call last):
File "jobs.py", line 42, in <module> jobs[job](config)
File "/home/ec2-user/project/queue.py", line 100, in init_queue
channel.start_consuming()
File "/usr/lib/python2.7/site-packages/pika/adapters/blocking_connection.py", line 1822, in start_consuming
self.connection.process_data_events(time_limit=None)
File "/usr/lib/python2.7/site-packages/pika/adapters/blocking_connection.py", line 749, in process_data_events
self._flush_output(common_terminator)
File "/usr/lib/python2.7/site-packages/pika/adapters/blocking_connection.py", line 477, in _flush_output
result.reason_text)
pika.exceptions.ConnectionClosed: (-1, "error(104, 'Connection reset by peer')")
The producer's code is:
message = {'image_url': image_url, 'image_name': image_name, 'notes': notes}
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='tasks_queue')
channel.basic_publish(exchange='', routing_key=queue_name, body=json.dumps(message))
connection.close()
And the code for the only consumer (the one that is crashing):
def callback(self, ch, method, properties, body):
    """Callback when receive a message."""
    message = json.loads(body)
    try:
        image = _get_image(message['image_url'])
    except:
        sys.stderr.write('Error getting image in note %s' % note['id'])

    # Crop image with PIL. Not so expensive
    box_path = _crop(image, message['image_name'], box)
    # API call. Long time function
    result = long_api_call(box_path)
    if result is None:
        sys.stderr.write('Error in note %s' % note['id'])
        return
    # update the db
    db.update_record(result)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='tasks_queue')
channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback_obj.callback, queue='tasks_queue', no_ack=True)
channel.start_consuming()
As you can see, there are three expensive operations per message: a crop, an API call and a database update. Without the API call, the consumer runs smoothly.
Thanks in advance
Your RabbitMQ log shows a message that I thought we might see:
missed heartbeats from client, timeout: 60s
What's happening is that your long_api_call blocks Pika's I/O loop. Pika is a very lightweight library and does not start background threads for you, so you must code in such a way as to not block Pika's I/O loop for longer than the heartbeat interval. RabbitMQ then thinks your client has died or is unresponsive and forcibly closes the connection.
Please see my answer here which links to this example code showing how to properly execute a long-running task in a separate thread. You can still use no_ack=True, you will just skip the ack_message call.
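The linked example is not reproduced here, but the pattern looks roughly like the sketch below, assuming a reasonably recent pika (one that provides add_callback_threadsafe); long_api_call and the queue name are placeholders taken from the question:

import functools
import threading

import pika

def do_work(connection, channel, delivery_tag, body):
    # Runs in a worker thread, so the long call no longer blocks pika's I/O loop
    # (heartbeats keep flowing on the main thread).
    long_api_call(body)  # placeholder for the long-running work from the question
    # Channel methods are not thread-safe: hand the ack back to the I/O loop.
    # With no_ack=True / auto_ack=True you would simply skip this step.
    connection.add_callback_threadsafe(
        functools.partial(channel.basic_ack, delivery_tag=delivery_tag))

def on_message(channel, method, properties, body, connection):
    threading.Thread(target=do_work,
                     args=(connection, channel, method.delivery_tag, body)).start()

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='tasks_queue')
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue='tasks_queue',
                      on_message_callback=functools.partial(on_message, connection=connection))
channel.start_consuming()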
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Starting with RabbitMQ 3.5.5, the broker’s default heartbeat timeout decreased from 580 seconds to 60 seconds.
See pika: Ensuring well-behaved connection with heartbeat and blocked-connection timeouts.
The simplest fix is to increase the heartbeat timeout:
rabbit_url = host + "?heartbeat=360"
conn = pika.BlockingConnection(pika.URLParameters(rabbit_url))
# or
params = pika.ConnectionParameters(host, heartbeat=360)
conn = pika.BlockingConnection(params)
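The pika guide linked above pairs the heartbeat with a blocked-connection timeout; here is a minimal sketch with illustrative values:

import pika

# Illustrative values; make the heartbeat comfortably longer than your longest task,
# or hand long tasks off to a worker thread as described above.
params = pika.ConnectionParameters(
    host='localhost',
    heartbeat=600,
    blocked_connection_timeout=300,
)
conn = pika.BlockingConnection(params)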
Related
I'm using the example provided in the Python docs on asyncio streams to write data to and read from a socket.
I am consuming from RabbitMQ, sending the messages through the socket and waiting for a response. I've set up the reader and writer in __init__():
self.reader, self.writer = await asyncio.open_connection(self.host, self.port, loop=self.loop)
When consuming a message, I just send it to the socket and read the response, then publish the response to another queue (after some processing):
async def process_airtime(self, message: aio_pika.IncomingMessage):
    async with message.process():
        logger.info('Send: %r' % message.body)
        self.writer.write(message.body)
        data = await self.reader.read(4096)
        logger.info('Received: %r' % data)
        await self.publish(data)  # publishing to some other queue
The problem is that when I try to consume, say, 10 messages, all the other messages raise this error, although the last message successfully gets a response:
RuntimeError: read() called while another coroutine is already waiting for incoming data
This is the output I get (I've truncated some of the responses):
2022-02-05 10:59:17,123 INFO [__main__:130] request_consumer [*] waiting for messages...
Send: b'\x00\x00\x00"000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'
Send: b'\x00\x00\x00"000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'
Send: b'\x00\x00\x00"000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'
...
2022-02-05 10:59:17,194 ERROR [asyncio:1707] base_events Task exception was never retrieved
future: <Task finished name='Task-53' coro=<consumer() done, defined at /home/steven/workspace/python/onfon/cheka/venv/lib/python3.8/site-packages/aio_pika/queue.py:25> exception=RuntimeError('read() called while another coroutine is already waiting for incoming data')>
Traceback (most recent call last):
File "/home/steven/workspace/python/onfon/cheka/venv/lib/python3.8/site-packages/aio_pika/queue.py", line 27, in consumer
return await create_task(callback, message, loop=loop)
File "request_consumer.py", line 122, in process_airtime
data = await self.reader.read(4096)
File "/usr/lib/python3.8/asyncio/streams.py", line 684, in read
await self._wait_for_data('read')
File "/usr/lib/python3.8/asyncio/streams.py", line 503, in _wait_for_data
raise RuntimeError(
RuntimeError: read() called while another coroutine is already waiting for incoming data
2022-02-05 10:59:17,194 ERROR [asyncio:1707] base_events Task exception was never retrieved
future: <Task finished name='Task-54' coro=<consumer() done, defined at /home/steven/workspace/python/onfon/cheka/venv/lib/python3.8/site-packages/aio_pika/queue.py:25> exception=RuntimeError('read() called while another coroutine is already waiting for incoming data')>
Traceback (most recent call last):
File "/home/steven/workspace/python/onfon/cheka/venv/lib/python3.8/site-packages/aio_pika/queue.py", line 27, in consumer
return await create_task(callback, message, loop=loop)
File "request_consumer.py", line 122, in process_airtime
data = await self.reader.read(4096)
File "/usr/lib/python3.8/asyncio/streams.py", line 684, in read
await self._wait_for_data('read')
File "/usr/lib/python3.8/asyncio/streams.py", line 503, in _wait_for_data
raise RuntimeError(
RuntimeError: read() called while another coroutine is already waiting for incoming data
Received: b'\xf2>D\x95\n\xe0\x80 \x00\x00\x00\x00\x00\x00\x00"0000000000000000000000000000000000000000000000'
My question is: what should I do to make read() be called again only after the current read has finished? Will that affect performance, or is there some way I can read on, say, different threads?
I would appreciate it if someone could point me in the right direction.
I'm using Python 3.8 on Linux.
The simple answer is to only call read() from one task.
It sounds like you are using a callback to consume RMQ messages. If so, aio_pika will consume messages asynchronously (i.e. concurrently) if it has multiple messages: it will create a new task for each callback/message and leave it to its own devices.
Given that you call read() whilst processing a message, that doesn't really work for your read calls: how will you know which read is for which message? You need to find some way to sync your reads to each message. There are a few ways you can do this:
- Put a lock around calls to read() (a minimal sketch of this follows the list).
- Create a separate task that is solely responsible for calling read() and puts the results onto a queue, from which any task can read; asyncio queues are task-safe (unlike read()).
- Or, perhaps most simply, use the queue as an iterator (and don't spawn a new task to handle each message).
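For illustration, here is a minimal sketch of the first option: an asyncio.Lock serialising access to the socket, reusing the reader/writer and process_airtime from the question (connection setup omitted):

import asyncio
import aio_pika

class Processor:
    def __init__(self):
        # self.reader / self.writer are assumed to be set up as in the question
        self._io_lock = asyncio.Lock()

    async def process_airtime(self, message: aio_pika.IncomingMessage):
        async with message.process():
            async with self._io_lock:  # only one coroutine uses the socket at a time
                self.writer.write(message.body)
                await self.writer.drain()
                data = await self.reader.read(4096)
        await self.publish(data)

This keeps each request/response pair together, but it still serialises the slow socket round trips, so the iterator approach shown further down achieves much the same effect with less machinery.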
I am assuming your current code looks something like this:
async def init_rmq_consumer():
    # connect to RMQ and create a queue
    ...
    queue = ...
    # start consuming messages off of the queue
    await queue.consume(process_airtime)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(init_rmq_consumer())
    loop.run_forever()
Using the queue as an iterator might look something like:
async def main():
    # connect to RMQ and create a queue
    ...
    queue = ...
    # start consuming messages off of the queue *serially*
    async with queue.iterator() as queue_iter:
        async for message in queue_iter:
            # we will not fetch a new message from the queue until we are
            # finished with this one.
            await process_airtime(message)

if __name__ == '__main__':
    asyncio.run(main())
I'm working on a microservice endpoint which "only" consumes messages from RabbitMQ and then serves those messages as SSE events.
This is my code for the endpoint:
data = queue.Queue()

def sseRouteHandler(reqId):
    def consume():
        connection = pika.BlockingConnection(pika.ConnectionParameters(
            host=rmqHostName,
            port=rmqPort,
            virtual_host='/',
            credentials=pika.PlainCredentials(username=rmqUserName, password=rmqPassword),
            connection_attempts=retryCount,
            retry_delay=retryInterval))
        channel = connection.channel()
        channel.queue_declare(queue=consumerQueues, auto_delete=False, exclusive=False, arguments=None)
        channel.queue_bind(queue=consumerQueues, exchange="eis.ds", routing_key=consumerQueues)

        def callback(ch, method, properties, body):
            # print(body)
            data.put(body)
            # ch.basic_ack(delivery_tag = method.delivery_tag)

        channel.basic_consume(queue=consumerQueues, on_message_callback=callback, exclusive=False, arguments=None)
        channel.start_consuming()

    thread = Thread(target=consume)
    thread.start()

    # Check if queue is empty, if not then pop the element else continue.
    # Not working... I only get values after I close the server using keyboard interrupt
    def xcallback():
        while True:
            if not data.empty():
                yield data.get(block=False)
            else:
                continue

    return Response(xcallback(), mimetype="text/event-stream")
I check whether the queue is empty and, if not, get an element, otherwise continue. This is not working: I only get values after I close the server with a keyboard interrupt.
^C^CTraceback (most recent call last):
File "python3.6/site-packages/waitress/server.py", line 307, in run
use_poll=self.adj.asyncore_use_poll,
File "python3.6/site-packages/waitress/wasyncore.py", line 222, in loop
poll_fun(timeout, map)
File "python3.6/site-packages/waitress/wasyncore.py", line 152, in poll
r, w, e = select.select(r, w, e, timeout)
KeyboardInterrupt
curl -X GET 'http://localhost:9080/v1/req/1ljhzckm3'
curl: (18) transfer closed with outstanding read data remaining
{"requestId": "1212122", "message": "Dat"}
What could be the solution for this?
In my code I create threads which call publish.single multiple times over an MQTT connection. However, this error is raised and I cannot understand or find its origin. The only place it mentions my code is line 75, in send_on_sensor.
Exception in thread Thread-639:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "/Users//PycharmProjects//V3_multiTops/mt_GenPub.py", line 75, in send_on_sensor
publish.single(topic, payload, hostname=hostname)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/publish.py", line 223, in single
protocol, transport)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/publish.py", line 159, in multiple
client.connect(hostname, port, keepalive)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/client.py", line 839, in connect
return self.reconnect()
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/client.py", line 962, in reconnect
sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused
This is the code in question; the line 75 mentioned above is the one with time.sleep(delay). This method is called on a new thread whenever a new set of data (a queue of points) is to be sent.
def send_on_sensor(q, topic, delay):
    while q.empty is not True:
        payload = json.dumps(q.get())
        publish.single(topic, payload, hostname=hostname)
        time.sleep(delay)
I get the feeling I am doing something that is not thread-safe. This issue occurs especially when the delay is a short interval (< 1 sec). From my output I can see that the next set of data (100 points) starts sending in a new thread before the first one has finished sending. I can fix that, and this error, by increasing the time interval between two sets of data. E.g. if I set the delay between sets using the relation set_delay = 400 * point_delay, I can safely use a delay of 0.1 secs. However, the same relation won't work for smaller delays, so this solution really does not satisfy me.
What can I do about this issue? I really want to get my delay below 0.1 secs and be able to adjust it.
EDIT
This is the method that creates the threads:
def send_dataset(data, labels, secs=0):
    qs = []
    for i in range(8):
        qs.append(queue.Queue())
    for value in data:
        msg = {
            "key": value,
        }
        # c is set accordingly
        qs[c].put(msg)
    for q in qs:
        topic = sensors[qs.index(q)]
        t = threading.Thread(target=send_on_sensor, args=(q, topic, secs))
        t.start()
        time.sleep(secs)
and this is where I start everything off:
output_interval = 0.01
while True:
    X, y = give_dataset()
    send_dataset(X, y, output_interval)
    time.sleep(output_interval * 2000)
Even though you added extra code, it doesn't reveal much. However, I have experience with something similar happening to me. I was building a heavily threaded app with MQTT and it is fairly safe; not totally, but it is.
The reason you get the error when lowering the delay is that you have ONE client. When publishing a message (I can't be sure because I don't see all of your code), you connect, send the message and disconnect. Since you are threading this process, you most probably have one message still being sent while you are about to publish a new one in a new thread. However, the first thread finishes and disconnects the client, so the new thread tries to publish but can't, because the previous thread has disconnected you.
Solution:
1) Don't disconnect the client upon publishing (see the sketch after this list).
2) Risky, and you need more code: for every publish create a new client, but be sure to handle it correctly. That means: create a client, publish and disconnect, again and again, but make sure you close the connections correctly and delete the clients so you don't accumulate dead clients.
3) A cleaner take on 2): write a function that does it all (creates a client, connects, publishes and dies at the end). If you thread such a function, I guess you will not have to take care of the problems arising in solution 2).
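A minimal sketch of option 1: each sender thread keeps one connected paho client for its whole run instead of calling publish.single per message (hostname, topic and the queue of points are taken from the question):

import json
import time

import paho.mqtt.client as mqtt

def send_on_sensor(q, topic, delay, hostname):
    client = mqtt.Client()
    client.connect(hostname)
    client.loop_start()          # network loop runs in a background thread
    while not q.empty():
        payload = json.dumps(q.get())
        client.publish(topic, payload)
        time.sleep(delay)
    client.loop_stop()
    client.disconnect()          # disconnect once, after the whole set is sent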
Update:
In case your problem is something else, I still think it is not because of the threads themselves, but because multiple threads are trying to control something that should be controlled by only one thread, like the client object.
Update: template code
Be aware that this is my old code and I don't use it anymore, because my applications need a particular threading approach and so on, so I rewrite it for each application individually. But this one works like a charm for non-threaded apps, and possibly for threaded ones too. It can publish only with qos=0.
import paho.mqtt.client as mqtt
import json

# Define Variables
MQTT_BROKER = ""
MQTT_PORT = 1883
MQTT_KEEPALIVE_INTERVAL = 5
MQTT_TOPIC = ""

class pub:
    def __init__(self, MQTT_BROKER, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL, MQTT_TOPIC, transport=''):
        self.MQTT_TOPIC = MQTT_TOPIC
        self.MQTT_BROKER = MQTT_BROKER
        self.MQTT_PORT = MQTT_PORT
        self.MQTT_KEEPALIVE_INTERVAL = MQTT_KEEPALIVE_INTERVAL

        # Initiate MQTT Client
        if transport == 'websockets':
            self.mqttc = mqtt.Client(transport='websockets')
        else:
            self.mqttc = mqtt.Client()

        # Register Event Handlers
        self.mqttc.on_publish = self.on_publish
        self.mqttc.on_connect = self.on_connect

        self.connect()

    # Define on_connect event Handler
    def on_connect(self, mosq, obj, rc):
        print("mqtt.thingstud.io")

    # Define on_publish event Handler
    def on_publish(self, client, userdata, mid):
        print("Message Published...")

    def publish(self, MQTT_MSG):
        MQTT_MSG = json.dumps(MQTT_MSG)
        # Publish message to MQTT Topic
        self.mqttc.publish(self.MQTT_TOPIC, MQTT_MSG)

    # Connect to MQTT_Broker
    def connect(self):
        self.mqttc.connect(self.MQTT_BROKER, self.MQTT_PORT, self.MQTT_KEEPALIVE_INTERVAL)

    def disconnect(self):
        self.mqttc.disconnect()

p = pub(MQTT_BROKER, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL, MQTT_TOPIC)
p.publish('some messages')
p.publish('more messages')
Note that on object creation I connect automatically, but I don't disconnect. That is something you have to do manually.
I suggest you try to create as many pub objects as you have sensors and publish with them.
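A hypothetical usage along those lines (the topic names are made up):

# One pub instance per sensor topic, reusing the class above.
publishers = {topic: pub(MQTT_BROKER, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL, topic)
              for topic in ("sensor/0", "sensor/1", "sensor/2")}

publishers["sensor/0"].publish({"key": 42})
publishers["sensor/1"].publish({"key": 17})

# Disconnect manually when you are done, as noted above.
for p in publishers.values():
    p.disconnect()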
I am using Kombu in Python to consume a durable RabbitMQ queue.
There is only one consumer consuming the queue in Windows. This consumer produces the below error:
Traceback (most recent call last):
File ".\consumer_windows.py", line 66, in <module>
message.ack()
File "C:\Users\Administrator\Anaconda2\lib\site-packages\kombu\message.py", line 88, in ack
self.channel.basic_ack(self.delivery_tag)
File "C:\Users\Administrator\Anaconda2\lib\site-packages\amqp\channel.py", line 1584, in basic_ack
self._send_method((60, 80), args)
File "C:\Users\Administrator\Anaconda2\lib\site-packages\amqp\abstract_channel.py", line 56, in _send_method
self.channel_id, method_sig, args, content,
File "C:\Users\Administrator\Anaconda2\lib\site-packages\amqp\method_framing.py", line 221, in write_method
write_frame(1, channel, payload)
File "C:\Users\Administrator\Anaconda2\lib\site-packages\amqp\transport.py", line 182, in write_frame
frame_type, channel, size, payload, 0xce,
File "C:\Users\Administrator\Anaconda2\lib\socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 10054] An existing connection was forcibly closed by the remote host
There are at most 500 messages in the queue at any one time. Each message is small in size; however, it is a task and takes up to 10 minutes to complete (although it usually takes less than 5 minutes per message).
I have tried restarting the consumer, RabbitMQ server and deleting the queue however the error still persists.
I've seen this question, however the answer is from 2010 and my rabbitmq.log has different entries:
=ERROR REPORT==== 24-Apr-2016::08:26:20 ===
closing AMQP connection <0.6716.384> (192.168.X.X:59602 -> 192.168.Y.X:5672):
{writer,send_failed,{error,timeout}}
There were no recent events in the rabbitmq-sasl.log.
Why is this error happening and how can I prevent it from occurring?
I'm still looking for an answer. In the meantime I restart the connection to my rabbit server:
while True:
    try:
        connection = pika.BlockingConnection(params)
        channel = connection.channel()  # start a channel
        channel.queue_declare(queue=amqp_q, durable=True)  # Declare a queue
        ...
    except pika.exceptions.ConnectionClosed:
        print('connection closed... and restarted')
I had the same issue with a hosted MySQL server.
I came to understand that it happens if the connection is kept open, or left idle, for a long time.
If your program keeps the DB (or any connection) open for the whole run, restructure it so that it opens the connection, writes everything, closes it, and repeats.
I don't know RabbitMQ in detail, but I think the error in your title may be happening for the same reason.
I had the same error (using the pure pika library) while trying to connect to a RabbitMQ broker through Amazon MQ.
The problem was resolved by setting up the SSL configuration correctly.
Please see the full AWS guide here: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-rabbitmq-pika.html
Core snippets that I used:
Define Pika Client:
import ssl
import pika

class BasicPikaClient:
    def __init__(self, rabbitmq_broker_id, rabbitmq_user, rabbitmq_password, region):
        # SSL Context for TLS configuration of Amazon MQ for RabbitMQ
        ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
        ssl_context.set_ciphers('ECDHE+AESGCM:!ECDSA')

        url = f"amqps://{rabbitmq_user}:{rabbitmq_password}@{rabbitmq_broker_id}.mq.{region}.amazonaws.com:5671"
        parameters = pika.URLParameters(url)
        parameters.ssl_options = pika.SSLOptions(context=ssl_context)

        self.connection = pika.BlockingConnection(parameters)
        self.channel = self.connection.channel()
Producer:
from basicClient import BasicPikaClient

class BasicMessageSender(BasicPikaClient):
    def declare_queue(self, queue_name, durable):
        print(f"Trying to declare queue({queue_name})...")
        self.channel.queue_declare(queue=queue_name, durable=durable)

    def send_message(self, exchange, routing_key, body):
        channel = self.connection.channel()
        channel.basic_publish(exchange=exchange,
                              routing_key=routing_key,
                              body=body)
        print(f"Sent message. Exchange: {exchange}, Routing Key: {routing_key}, Body: {body}")

    def close(self):
        self.channel.close()
        self.connection.close()
Calling Producer:
# Initialize Basic Message Sender which creates a connection
# and channel for sending messages.
basic_message_sender = BasicMessageSender(
    credentials["broker_id"],
    credentials["username"],
    credentials['password'],
    credentials['region']
)

# Declare a queue
basic_message_sender.declare_queue("q_name", durable=True)

# Send a message to the queue.
basic_message_sender.send_message(exchange="", routing_key="q_name", body=b'Hello World 2!')

# Close connections.
basic_message_sender.close()
Define Consumer:
class BasicMessageReceiver(BasicPikaClient):
    def get_message(self, queue):
        method_frame, header_frame, body = self.channel.basic_get(queue)
        if method_frame:
            print(method_frame, header_frame, body)
            self.channel.basic_ack(method_frame.delivery_tag)
            return method_frame, header_frame, body
        else:
            print('No message returned')

    def close(self):
        self.channel.close()
        self.connection.close()
Calling Consumer:
# Create Basic Message Receiver which creates a connection
# and channel for consuming messages.
basic_message_receiver = BasicMessageReceiver(
    credentials["broker_id"],
    credentials["username"],
    credentials['password'],
    credentials['region']
)

# Consume the message that was sent.
basic_message_receiver.get_message("q_name")

# Close connections.
basic_message_receiver.close()
I hope the above helps.
Thanks
Simple question, but Google or the Pika open source code did not help. Is there a way to query the current queue size (item counter) in Pika?
I know that this question is a bit old, but here is an example of doing this with pika.
Regarding AMQP and RabbitMQ, if you have already declared the queue, you can re-declare it with the passive flag on, keeping all other queue parameters identical. The response to this declaration, declare-ok, will include the number of messages in the queue.
Here is an example with pika 0.9.5:
import pika

def on_callback(msg):
    print msg

params = pika.ConnectionParameters(
    host='localhost',
    port=5672,
    credentials=pika.credentials.PlainCredentials('guest', 'guest'),
)

# Open a connection to RabbitMQ on localhost using all default parameters
connection = pika.BlockingConnection(parameters=params)

# Open the channel
channel = connection.channel()

# Declare the queue
channel.queue_declare(
    callback=on_callback,
    queue="test",
    durable=True,
    exclusive=False,
    auto_delete=False
)

# ...

# Re-declare the queue with passive flag
res = channel.queue_declare(
    callback=on_callback,
    queue="test",
    durable=True,
    exclusive=False,
    auto_delete=False,
    passive=True
)

print 'Messages in queue %d' % res.method.message_count
This will print the following:
<Method(['frame_type=1', 'channel_number=1', "method=<Queue.DeclareOk(['queue=test', 'message_count=0', 'consumer_count=0'])>"])>
<Method(['frame_type=1', 'channel_number=1', "method=<Queue.DeclareOk(['queue=test', 'message_count=0', 'consumer_count=0'])>"])>
Messages in queue 0
You get the number of messages from the message_count member.
Here is how you can get the queue length using pika (assuming you are using the default user and password on localhost).
Replace q_name with your queue name.
import pika
connection = pika.BlockingConnection()
channel = connection.channel()
q = channel.queue_declare(q_name)
q_len = q.method.message_count
Have you tried PyRabbit? It has a get_queue_depth() method which sounds like what you're looking for.
There are two ways to get the queue size in the AMQP protocol. You can either use Queue.Declare or Basic.Get.
If you are consuming messages as they arrive using Basic.Consume, then you can't get this info unless you disconnect (timeout) and redeclare the queue, or else get one message but don't ack it. In newer versions of AMQP you can actively requeue the message.
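For example, with pika you can peek at the count via Basic.Get and put the message straight back (a sketch, assuming an open BlockingConnection channel and a queue named 'test'):

method_frame, header_frame, body = channel.basic_get(queue='test')
if method_frame is None:
    print('queue is empty')
else:
    # message_count is the number of messages remaining after this one
    print('messages left in queue:', method_frame.message_count)
    # put the message back instead of acking it
    channel.basic_nack(method_frame.delivery_tag, requeue=True)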
As for Pika, I don't know the specifics but Python clients for AMQP have been a thorn in my side. Often you will need to monkeypatch classes in order to get the info you need, or to allow a queue consumer to timeout so that you can do other things at periodic intervals like record stats or find out how many messages are in a queue.
Another way around this is to give up, and use the Pipe class to run sudo rabbitmqctl list_queues -p my_vhost. Then parse the output to find the size of all queues. If you do this you will need to configure /etc/sudoers to not ask for the usual sudo password.
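A rough sketch of that approach using subprocess (standing in for whatever "Pipe class" refers to); it assumes passwordless sudo is configured as described:

import subprocess

def queue_sizes(vhost='my_vhost'):
    out = subprocess.check_output(
        ['sudo', 'rabbitmqctl', 'list_queues', '-p', vhost], text=True)
    sizes = {}
    for line in out.splitlines():
        parts = line.split()
        # data rows look like "<queue name> <message count>"
        if len(parts) == 2 and parts[1].isdigit():
            sizes[parts[0]] = int(parts[1])
    return sizes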
I pray that someone else with more Pika experience answers this by pointing out how you can do all the things that I mentioned, in which case I will download Pika and kick the tires. But if that doesn't happen and you are having difficulty with monkeypatching the Pika code, then have a look at haigha. I found their code to be much more straightforward than other Python AMQP client libraries because they stick closer to the AMQP protocol.
I am late to the party, but here is an example of getting the queue count using pyrabbit or pyrabbit2 against AWS Amazon MQ over HTTPS; it should work on plain RabbitMQ as well:
from pyrabbit2.api import Client

cl = Client('b-xxxxxx.mq.ap-southeast-1.amazonaws.com', 'user', 'password', scheme='https')
if not cl.is_alive():
    raise Exception("Failed to connect to rabbitmq")

for i in cl.get_all_vhosts():
    print(i['name'])

queues = [q['name'] for q in cl.get_queues('/')]
print(queues)

itemCount = cl.get_queue_depth('/', 'event.stream.my-api')
print(itemCount)
Just posting this in case anyone else comes across this discussion. The answer with the most votes, i.e.:
# Re-declare the queue with passive flag
res = channel.queue_declare(
    callback=on_callback,
    queue="test",
    durable=True,
    exclusive=False,
    auto_delete=False,
    passive=True
)
was very helpful for me, but it comes with a serious caveat. According to the pika documentation, the passive flag is used to "Only check to see if the queue exists." As such, one would imagine you can use queue_declare with the passive flag to check whether a queue exists in situations where there is a chance it was never declared. From my testing, if you call this function with the passive flag and the queue does not exist, not only does the API throw an exception; it also causes the broker to close your channel, so even if you catch the exception gracefully, that channel is gone. I tested this with two different Python scripts against a plain vanilla RabbitMQ container running in minikube. I've run this test many times and I get the same behavior every time.
My test code:
import logging

import pika

logging.basicConfig(level="INFO")
logger = logging.getLogger(__name__)
logging.getLogger("pika").setLevel(logging.WARNING)

def on_callback(msg):
    logger.info(f"Callback msg: {msg}")

queue_name = "testy"
credentials = pika.PlainCredentials("guest", "guest")
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", port=5672, credentials=credentials)
)
logger.info("Connection established")

channel = connection.channel()
logger.info("Channel created")

channel.exchange_declare(exchange="svc-exchange", exchange_type="direct", durable=True)

response = channel.queue_declare(
    queue=queue_name, durable=True, exclusive=False, auto_delete=False, passive=True
)
logger.info(f"queue_declare response: {response}")

channel.queue_delete(queue=queue_name)
connection.close()
The output:
INFO:__main__:Connection established
INFO:__main__:Channel created
WARNING:pika.channel:Received remote Channel.Close (404): "NOT_FOUND - no queue 'testy' in vhost '/'" on <Channel number=1 OPEN conn=<SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x1047e2700> params=<ConnectionParameters host=localhost port=5672 virtual_host=/ ssl=False>>>
Traceback (most recent call last):
File "check_queue_len.py", line 29, in <module>
response = channel.queue_declare(
File "/Users/dbailey/dev/asc-service-deployment/venv/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 2521, in queue_declare
self._flush_output(declare_ok_result.is_ready)
File "/Users/dbailey/dev/asc-service-deployment/venv/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 1354, in _flush_output
raise self._closing_reason # pylint: disable=E0702
pika.exceptions.ChannelClosedByBroker: (404, "NOT_FOUND - no queue 'testy' in vhost '/'")
When I set passive to False:
scripts % python check_queue_len.py
INFO:__main__:Connection established
INFO:__main__:Channel created
INFO:__main__:queue_declare response: <METHOD(['channel_number=1', 'frame_type=1', "method=<Queue.DeclareOk(['consumer_count=0', 'message_count=0', 'queue=testy'])>"])>
Please let me know if I'm somehow missing something here.
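For what it's worth, one workaround is to do the passive declare on a throwaway channel, so that the 404 only kills that channel (a sketch, assuming pika >= 1.0, where the error surfaces as ChannelClosedByBroker; the AMQP close happens at channel level, so the underlying connection can still open new channels):

import pika

def queue_message_count(connection, queue_name):
    channel = connection.channel()  # disposable channel just for the probe
    try:
        res = channel.queue_declare(queue=queue_name, passive=True)
        count = res.method.message_count
        channel.close()
        return count
    except pika.exceptions.ChannelClosedByBroker:
        # 404 NOT_FOUND: the queue does not exist and this channel is now unusable,
        # but the connection itself can still be used to open new channels.
        return None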