Is it possible to receive only a fixed number of messages from ActiveMQ?
Say I need to receive only 100 messages from a queue. Is that possible?
I am using the message listener method; is there another way to receive messages?
example code snippet:
import time

import stomp

queue_messages = []

class SampleListener(object):
    def on_message(self, headers, msg):
        queue_messages.append(msg)

def read_messages():
    queue_connection = stomp.Connection([(activemq_host, int(activemq_port))])
    queue_connection.start()
    queue_connection.connect('admin', 'admin')
    queue_connection.set_listener('SampleListener', SampleListener())
    queue_connection.subscribe(destination=activemq_input_q, id=1, ack='auto')
    time.sleep(1)
    queue_connection.disconnect()

read_messages()
Why don't you share your actual problem rather than the solution you have in mind? Chances are the problem is not what you think, or there may be better solutions.
To answer your question: yes, you can. In ActiveMQ's case, you can add an extra header such as {'activemq.prefetchSize': 100} and set ack='client' when you subscribe to the queue, but never acknowledge the messages. The consequence is that you will not receive any more than 100 messages.
It is an awkward solution, I must say. Your code will end up consuming the first 100 messages in the queue and that's it. You can then disconnect and resubscribe to the same queue to receive the next 100 messages.
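To illustrate why the unacknowledged-prefetch trick caps consumption, here is a minimal listener sketch. The class and its limit are illustrative (only the on_message signature comes from stomp.py); the broker-side capping itself is done by the {'activemq.prefetchSize': 100} header together with ack='client'.

```python
class CountingListener(object):
    """Collects at most `limit` messages locally. With ack='client' and
    {'activemq.prefetchSize': limit} on the subscription, the broker stops
    dispatching once `limit` messages sit unacknowledged, so the local
    guard is just a belt-and-braces check."""

    def __init__(self, limit=100):
        self.limit = limit
        self.messages = []

    def on_message(self, headers, msg):
        # Ignore anything past the limit; with the prefetch header set,
        # the broker should never dispatch beyond it anyway.
        if len(self.messages) < self.limit:
            self.messages.append(msg)
```

An instance of this would be passed to `queue_connection.set_listener(...)` in place of `SampleListener` from the question's snippet.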
With ack='client', if I don't acknowledge the message in the on_message event, when will the acknowledgement actually be sent to the server? Will it be sent on a successful disconnect?
Also, if I abruptly kill the script, will the acknowledgement still be sent, and will I miss the messages?
This is my code:
def _poll_for_messages(self, poller: Poller):
    sockets = dict(poller.poll(3000))
    if not sockets:
        self._reconnect_if_necessary(poller)
        return
    if self._command_handler.command_socket in sockets:
        encoded_message = self._command_handler.command_socket.recv_multipart()
This should communicate with my service bus and potentially reconnect if the bus gets restarted. When the bus gets shut down, sometimes the last line is still reached, but the socket cannot receive a message and waits for one indefinitely.
For normal receives there is zmq.DONTWAIT, but as far as I'm aware this does not work for multipart messages. Is there an easy way around this, or am I polling for messages the wrong way in general?
If anyone stumbles over this and has the same problem, mine got fixed by adding the zmq.POLLIN flag when registering a socket to my poller:
poller.register(self._command_handler._command_socket, zmq.POLLIN)
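For completeness, a minimal self-contained sketch of that fix. It uses an inproc PUSH/PULL pair as a stand-in for the real bus socket (the endpoint name and socket types here are assumptions, not from the original code):

```python
import zmq

ctx = zmq.Context.instance()
receiver = ctx.socket(zmq.PULL)
receiver.bind("inproc://demo")      # inproc: bind must happen before connect
sender = ctx.socket(zmq.PUSH)
sender.connect("inproc://demo")

poller = zmq.Poller()
poller.register(receiver, zmq.POLLIN)   # only wake for readable sockets

sender.send_multipart([b"header", b"body"])

parts = None
events = dict(poller.poll(1000))        # bounded wait instead of blocking forever
if receiver in events:
    # Safe: poll() confirmed a complete message is queued,
    # so recv_multipart() will not block.
    parts = receiver.recv_multipart()
```

The key point is that `recv_multipart()` is only called after `poll()` has reported `zmq.POLLIN` for that socket, so the indefinite wait cannot occur.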
I'm trying to implement a basic pubsub using redis-py client.
The idea is, the publisher is actually a callback that gets called periodically and will publish some information on channel1 in the callback function.
The subscriber will listen on that channel for this message and do some processing accordingly.
The subscriber is actually a basic bare-bones webserver that is deployed on k8s and it simply should show up the messages that it receives via the event_handler function.
subscriber.py
class Sub(object):
    def __init__(self):
        redis = Redis(host=...,
                      port=...,
                      password=...,
                      db=0)
        ps = redis.pubsub(ignore_subscribe_messages=True)
        ps.subscribe(**{'channel1': Sub.event_handler})
        ps.run_in_thread(sleep_time=0.01, daemon=True)

    @staticmethod
    def event_handler(msg):
        print("Hello from event handler")
        if msg and msg.get('type') == 'message':  # interested only in messages, not subscribe/unsubscribe/pmessages
            pass  # process the message
publisher.py
redis = Redis(host=...,
              port=...,
              password=...,
              db=0)

def call_back(msg):
    global redis
    redis.publish('channel1', msg)
At the beginning, the messages are published and the subscriber's event handler prints and processes them correctly.
The problem is that after a few hours, the subscriber stops showing those messages. I've checked the publisher logs and the messages definitely get sent out, but I can't figure out why event_handler is no longer being called.
The print statement in it stops printing, which is why I say the handler stops firing after a few hours.
Initially I suspected the thread must have died, but on exec'ing into the container I still see it listed among the running threads.
I've read through a lot of blogs and documentation but haven't found much help.
All I can deduce is that the event handler stops getting called after some time.
Can anyone help me understand what's going on, and what the best way is to reliably consume pubsub messages in a non-blocking way?
Really appreciate any insights you guys have! :(
Could you post the whole publisher.py, please? It could be the case that call_back(msg) isn't being called anymore.
To check whether a client is still subscribed, you can use the command PUBSUB CHANNELS in redis-cli.
Regards, Martin
I'm using MQTT to distribute messages in my network and have a question about the cleanest way to publish and subscribe multiple messages to the broker.
First of all, I've got two lists:
request_list = [('sensors/system/temperature', 0),
('sensors/system/gyroscope', 1),
('sensors/system/acceleration', 2)]
This contains the topics I have to publish my messages to.
My second list defines the messages I want to publish and the topics where I want to get my response (i.e. the topics I have to subscribe to in order to get my answers).
request_value = ['{"response":"similarity/sensors/system/temperature","duration":"60s"}',
                 '{"response":"similarity/sensors/system/gyroscope","duration":"60s"}',
                 '{"response":"similarity/sensors/system/acceleration","duration":"60s"}']
My broker is the same for every topic, defined with HOST = "192.168.137.1" on PORT = "8083".
For now I'm using a for loop to subscribe to one topic, publish my message, and wait for the response to come in. Because I have to wait for every subscription and publish to succeed, this is very time consuming. The pseudocode of my current approach looks like the following:
list_measurements = []
for topic in request_list:
    client.connect("Broker", "Host")
    client.loop_start()
    client.subscribe("sub_topic")
    client.publish("pub_topic", "pub_message")
    client.callback("append list_measurements")
    client.loop_stop()  # stop the loop
    client.disconnect()
I tried to use threads, following my earlier question here, but it turned out that the normal use of threads would be to publish the same message to a lot of different brokers. I also thought about multiple subscriptions.
If anybody could give me a hint about the cleanest and fastest approach, I'd be very thankful.
You should only connect to the broker and start the client loop once outside the for loop.
Setting up and tearing down the connection to the broker every time will add a huge amount of overhead and leaves lots of room to miss messages.
You should also just subscribe to all the topics you want once at startup. OK, you can add more or unsubscribe if needed, but if they are always the same just subscribe when you connect.
The basic general approach should look like this.
def on_connect(client, userdata, flags, rc):
    for i in request_value:
        client.subscribe(i.response)
    for i in request_list:
        # this loop needs to be a bit more complicated,
        # as you need to pull the topic from request_list
        # and the payload from request_value
        client.publish(i)

def on_message(client, userdata, message):
    if message.topic == "similarity/sensors/system/temperature":
        pass  # update temperature
    elif message.topic == "similarity/sensors/system/gyroscope":
        pass  # update gyro
    elif message.topic == "similarity/sensors/system/acceleration":
        pass  # update accel

client.on_connect = on_connect
client.on_message = on_message
client.connect("BrokerIP")
client.loop_start()
You can also run the publish loop again if needed (as it looks like you are only requesting 60s of data at a time). You would probably do better to combine the request_list and request_value data structures into one list.
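As a sketch of that combined structure, the two lists from the question can be zipped into one list of dicts, pulling each response topic out of the JSON payload. The dict keys are assumptions chosen here for illustration:

```python
import json

request_list = [('sensors/system/temperature', 0),
                ('sensors/system/gyroscope', 1),
                ('sensors/system/acceleration', 2)]

request_value = ['{"response":"similarity/sensors/system/temperature","duration":"60s"}',
                 '{"response":"similarity/sensors/system/gyroscope","duration":"60s"}',
                 '{"response":"similarity/sensors/system/acceleration","duration":"60s"}']

# One entry per request: the topic to publish to, the payload to send,
# and the topic to subscribe to for the answer (taken from the payload's
# "response" field).
requests = [{'topic': topic,
             'payload': payload,
             'response_topic': json.loads(payload)['response']}
            for (topic, _), payload in zip(request_list, request_value)]
```

`on_connect` can then subscribe to every `response_topic` and publish every `payload` in a single pass over `requests`, instead of indexing two parallel lists.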
My idea is also described here, in case I express myself incorrectly: Send images with their names in one message - RabbitMQ (Python 3.X).
I currently have a problem with RabbitMQ:
I made a work queue on which several consumers work at the same time; it is containerized image processing that produces a str output with the requested information.
The results must be sent to another queue when the processing is finished,
but how do I know whether the queue containing the images is empty and there is no more work to do? Roughly speaking, I'm looking for something like "if the queue is empty, then send the results...".
Thank you for your time, have a good day.
You can do a passive declare of the queue to get the count of messages, but that may not be reliable as the count returned does not include messages in the "unacked" state. You could query the queue's counts via the HTTP API.
Or, whatever application publishes the images could send a "no more images" message to indicate no more work to do. The consumer that receives that message could then query the HTTP API to confirm that no messages are in the Ready or Unacked state, then send the results to the next queue.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
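The "no more images" idea can be sketched without a broker by treating the queue as a plain iterable. The sentinel value and the stand-in processing below are illustrative, not part of RabbitMQ or pika:

```python
SENTINEL = b"no-more-images"

def consume(messages):
    """Process image bodies until the sentinel arrives, then return results."""
    results = []
    for body in messages:
        if body == SENTINEL:
            break                      # publisher signalled end of work
        results.append(len(body))      # stand-in for the real image processing
    return results
```

In the real consumer, the sentinel check would live in the pika delivery callback, and hitting it would trigger the HTTP API check for Ready/Unacked counts before publishing the results to the next queue.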
Hi, I think you can solve this with queue_declare:
status = channel.queue_declare(queue_name, passive=True)
if status.method.message_count > 5:
    return True
log.error(f'{queue_name} has no messages, or fewer than 5')
return False
I have my own Jabber bot, and today I made a new plugin which sends a message to all users. My code was working well, but I have a small problem: when I give my bot the command to send a message, it gets stuck and disconnects.
I know why it gets stuck and disconnects: I have more than 2000 users, so my bot cannot send a message to all of them at the same time. Is there any method in Python to make my code send the message to each user after N seconds? I mean, have the bot send the message to user1, then wait N seconds and send it to user2, etc.
I hope my idea is clear. This is my code:
def send_msg(type, source, parameters):
    ADMINFILE = 'modules/xmpp/users.cfg'
    with open(ADMINFILE, 'r') as fp:
        users = eval(fp.read())
    if parameters:
        for z in users:
            msg(z, u"MSG from Admin:\n" + parameters)
        reply(type, source, u"MSG has been sent!")
    else:
        reply(type, source, u"Error! please try again.")

register_command_handler(send_msg, 'msg', ['all', 'amsg'], 0, 'Sends a message to all users')
I believe you are looking for time.sleep(secs). From the docs:
Suspend execution for the given number of seconds. The argument may be
a floating point number to indicate a more precise sleep time. The
actual suspension time may be less than that requested because any
caught signal will terminate the sleep() following execution of that
signal’s catching routine. Also, the suspension time may be longer
than requested by an arbitrary amount because of the scheduling of
other activity in the system.
After each send you can delay for time.sleep(seconds) before sending your next message.
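Applied to the bot's loop, a small throttling helper might look like this. The `send` callable stands in for the bot's own `msg(user, text)` function, and the default delay is an arbitrary assumption:

```python
import time

def send_throttled(users, message, send, delay=0.5):
    """Send `message` to each user, sleeping `delay` seconds between sends
    so the server is not hit with 2000+ stanzas at once."""
    for user in users:
        send(user, message)
        time.sleep(delay)
```

Inside `send_msg`, the `for z in users:` loop would become `send_throttled(users, u"MSG from Admin:\n" + parameters, msg)`.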