My idea is also described here, in case I express myself poorly: (Send images with their names in one message - RabbitMQ (Python 3.X))
I currently have a problem with RabbitMQ.
I made a work queue on which several consumers work at the same time; it is a containerized image-processing job that produces a str output with the requested information.
The results must be sent to another queue when the processing is finished,
but how do I know whether the queue containing the images is empty and there is no more work to do? Roughly speaking, I'm looking for something like "if the queue is empty, then send the results...".
Thank you for your time, have a good day.
You can do a passive declare of the queue to get the count of messages, but that may not be reliable as the count returned does not include messages in the "unacked" state. You could query the queue's counts via the HTTP API.
Or, whatever application publishes the images could send a "no more images" message to indicate no more work to do. The consumer that receives that message could then query the HTTP API to confirm that no messages are in the Ready or Unacked state, then send the results to the next queue.
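For illustration, here is a rough sketch of that HTTP API check using the requests library. It assumes the management plugin is enabled on the default port 15672 and the default "/" vhost; the host, credentials, queue names and the sentinel body are placeholders to adapt to your setup.

import requests

def queue_is_drained(host, user, password, queue, vhost='%2F'):
    """Return True when the queue has no messages in the Ready or Unacked state."""
    url = f'http://{host}:15672/api/queues/{vhost}/{queue}'
    stats = requests.get(url, auth=(user, password)).json()
    return (stats.get('messages_ready', 0) == 0
            and stats.get('messages_unacknowledged', 0) == 0)

# Consumer side (sketch): on receiving the "no more images" sentinel message,
# confirm the image queue is really drained before publishing the results.
# if body == b'no-more-images' and queue_is_drained('localhost', 'guest', 'guest', 'images'):
#     channel.basic_publish(exchange='', routing_key='results', body=results)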
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Hi, I think you can solve this with queue_declare. Something like:

def queue_has_messages(channel, queue_name):
    # passive=True only inspects the queue; message_count does not include unacked messages
    status = channel.queue_declare(queue=queue_name, passive=True)
    if status.method.message_count > 5:
        return True
    log.error(f'{queue_name} has no messages or fewer than 5 messages')
    return False
I'm trying to implement a basic pub/sub using the redis-py client.
The idea is, the publisher is actually a callback that gets called periodically and will publish some information on channel1 in the callback function.
The subscriber will listen on that channel for this message and do some processing accordingly.
The subscriber is actually a basic, bare-bones webserver deployed on k8s, and it should simply display the messages it receives via the event_handler function.
subscriber.py
from redis import Redis

class Sub(object):
    def __init__(self):
        redis = Redis(host=...,
                      port=...,
                      password=...,
                      db=0)
        ps = redis.pubsub(ignore_subscribe_messages=True)
        ps.subscribe(**{'channel1': Sub.event_handler})
        ps.run_in_thread(sleep_time=0.01, daemon=True)

    @staticmethod
    def event_handler(msg):
        print("Hello from event handler")
        if msg and msg.get('type') == 'message':  # interested only in messages, not subscribe/unsubscribe/pmessages
            pass  # process the message
publisher.py
from redis import Redis

redis = Redis(host=...,
              port=...,
              password=...,
              db=0)

def call_back(msg):
    global redis
    redis.publish('channel1', msg)
At the beginning, the messages are published and the subscriber's event handler prints and processes them correctly.
The problem is that after a few hours, the subscriber stops showing those messages. I've checked the publisher logs and the messages definitely get sent out, but I'm not able to figure out why event_handler is no longer getting called after a few hours.
The print statement in it stops appearing, which is why I say the handler is not getting fired after a few hours.
Initially I suspected the thread must have died, but on exec-ing into the system I see it listed among the running threads.
I've read through a lot of blogs and documentation but haven't found much help.
All I can deduce is that the event handler stops getting called after some time.
Can anyone help me understand what's going on, and what the best way is to reliably consume pub/sub messages in a non-blocking way?
Really appreciate any insights you guys have! :(
Could you post the whole publisher.py, please? It could be the case that call_back(msg) isn't being called anymore.
To check whether a client is still subscribed, you can use the command PUBSUB CHANNELS in redis-cli.
Regards, Martin
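For what it's worth, the same check can be done from redis-py itself. A small sketch (connection details are placeholders):

from redis import Redis

r = Redis(host='localhost', port=6379, db=0)

# Channels that currently have at least one subscriber
# (the same data as PUBSUB CHANNELS in redis-cli).
print(r.pubsub_channels())

# Number of subscribers on channel1 (same as PUBSUB NUMSUB channel1).
print(r.pubsub_numsub('channel1'))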
I have code that looks like this:
def message_reader(consumer):
    consumed_message = consumer.consume_batch()
    if consumed_message:
        pass  # do something with the consumed messages

def run_reader():
    process_consumer = get_consumer()  # gets a SimpleConsumer()
    message_reader(process_consumer)
    process_consumer.commit()
    process_consumer.close()
So my question is: suppose there are no messages in the topic and nothing is consumed. Does commit() advance the offset?
Also, does the producer check for the latest offset before producing a message?
I'm not an expert on the Python client, but the Java one just re-commits the same position if nothing has actually been consumed between commit calls.
I'm certain, however, that all clients do the same (commit the same position), as doing otherwise would cause you to skip records. There are also entire Kafka monitoring systems that have been written to rely on this behavior, for example Burrow.
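For illustration, a small sketch of that behavior using the newer kafka-python KafkaConsumer (the question uses the older SimpleConsumer; topic, group, and broker names here are placeholders):

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer('my_topic',
                         group_id='my_group',
                         bootstrap_servers='localhost:9092',
                         enable_auto_commit=False)
tp = TopicPartition('my_topic', 0)

before = consumer.committed(tp)   # last committed offset for this group, if any
consumer.poll(timeout_ms=1000)    # may return nothing if the topic is empty
consumer.commit()                 # nothing consumed, so the same position is committed
after = consumer.committed(tp)    # before == after: the offset did not advance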
Is it possible to receive only a fixed number of messages from ActiveMQ?
Let's say I need to receive only 100 messages from the queue. Is that possible?
I am using the message listener method. Is there any other method to receive messages?
example code snippet:
import time
import stomp

# activemq_host, activemq_port and activemq_input_q are defined elsewhere
queue_messages = []

class SampleListener(object):
    def on_message(self, headers, msg):
        queue_messages.append(msg)

def read_messages():
    queue_connection = stomp.Connection([(activemq_host, int(activemq_port))])
    queue_connection.start()
    queue_connection.connect('admin', 'admin')
    queue_connection.set_listener('SampleListener', SampleListener())
    queue_connection.subscribe(destination=activemq_input_q, id=1, ack='auto')
    time.sleep(1)
    queue_connection.disconnect()

read_messages()
Why don't you share your actual problem rather than the solution you have in mind? Chances are the problem isn't what you think it is, or there may be better solutions.
To answer your question: yes, you can. In the ActiveMQ case, you can add an extra header like {'activemq.prefetchSize': 100} and set ack='client' when you subscribe to the queue, but never acknowledge the messages. The consequence is that you will not receive more than 100 messages.
It is an awkward solution, I must say. Your code will end up consuming the first 100 messages in the queue and that's it. You can apparently disconnect and resubscribe to the same queue to receive the next 100 messages.
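A sketch of what that looks like with stomp.py, following the question's style (activemq_host, activemq_port, and activemq_input_q are assumed to be defined as in the question):

import time
import stomp

received = []

class CappedListener(stomp.ConnectionListener):
    def on_message(self, headers, body):
        # With ack='client' and no ack() calls, the broker stops pushing
        # messages once the prefetch window of 100 is full of unacked messages.
        received.append(body)

conn = stomp.Connection([(activemq_host, int(activemq_port))])
conn.start()
conn.connect('admin', 'admin', wait=True)
conn.set_listener('CappedListener', CappedListener())
conn.subscribe(destination=activemq_input_q, id=1, ack='client',
               headers={'activemq.prefetchSize': 100})
time.sleep(1)   # crude wait, mirroring the question's snippet
conn.disconnect()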
When ack='client' and I don't acknowledge the message in the on_message event, when will the acknowledgement actually be sent to the server? Will it be sent on a successful disconnect?
Also, if I abruptly kill the script, will the acknowledgement still be sent, and will I miss the messages?
I have a script that, in the end, executes two functions. It polls for data on a time interval (it runs as a daemon, and the data is retrieved from a shell command run on the local system) and, once it receives this data, will: 1.) function 1 - write this data to a log file, and 2.) function 2 - inspect the data and then send an email IF that data meets certain criteria.
The logging will happen every time, but the alert may not. The issue is that, in cases where an alert needs to be sent, if the email connection stalls or takes a long time to connect to the server, it obviously causes the next polling of the data to stall (for an indeterminate amount of time, depending on the server), and in my case it is very important that the polling interval remains consistent (for analytics purposes).
What is the most efficient way, if any, to keep the email process working independently of the logging process while still operating within the same application and depending on the same data? I was considering creating a separate thread for the mailer, but that kind of seems like overkill in this case.
I'd rather not set a short timeout on the email connection, because I want to give the process some chance to connect to the server, while still allowing the logging to be written consistently on the given interval. Some code:
def send(self, msg_):
    """
    Send the alert message
    :param str msg_: the message to send
    """
    self.msg_ = msg_
    ar = alert.Alert()
    ar.send_message(msg_)

def monitor(self):
    """
    Post to the log file and
    send the alert message when
    applicable
    """
    read = r.SensorReading()
    msg_ = read.get_message()  # the data
    if msg_:  # if there is data in general...
        x = read.get_failed()  # store bad data
        msg_ += self.write_avg(read)
        msg_ += "==============================================="
        self.ctlog.update_templog(msg_)  # write general data to log
        if x:
            self.send(x)  # if bad data, send...
This is exactly the kind of case you want to use threading/subprocesses for. Fork off a thread for the email, which times out after a while, and keep your daemon running normally.
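For example, a minimal sketch reusing the send() method from the question (send_async is a name I made up):

import threading

def send_async(self, msg_):
    """Fire the alert off on its own daemon thread so a slow SMTP
    connection cannot delay the next polling/logging cycle."""
    threading.Thread(target=self.send, args=(msg_,), daemon=True).start()

# in monitor(), call self.send_async(x) instead of self.send(x)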
Possible approaches that come to mind:
Multiprocessing
Multithreading
Parallel Python
My personal choice would be multiprocessing as you clearly mentioned independent processes; you wouldn't want a crashing thread to interrupt the other function.
You may also refer to this before making your design choice: Multiprocessing vs Threading Python
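If you go the multiprocessing route, the equivalent sketch would look something like this (send_in_process is a made-up name, again reusing send() from the question; it assumes a fork-based start method so self does not need to be picklable):

from multiprocessing import Process

def send_in_process(self, msg_):
    """Run the mailer in a separate process so a stall or crash there
    cannot affect the polling/logging loop."""
    Process(target=self.send, args=(msg_,), daemon=True).start()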
Thanks everyone for the responses. It helped very much. I went with threading, but also updated the code to be sure it handled failing threads. I ran some regressions and found that the subsequent processes were no longer being interrupted by stalled connections and the log was being updated on a consistent schedule. Thanks again!!
I have an application with two threads. It's a network-controlled game.
Thread 1 (Server)
Accept socket connections and receive messages
When a message arrives, create an event and add it to the event queue
Code:
class SingleTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        try:
            while True:
                sleep(0.06)
                message = self.rfile.readline().strip()
                my_event = pygame.event.Event(USEREVENT, {'control': message})
                print message
                pygame.event.post(my_event)
Thread 2 (pygame)
In charge of game rendering
Receives messages via the event queue, which the Server populates
Renders the game based on messages every 60ms
This is how the game looks. The control messages are just speeds for the little square.
For debugging purposes, I connect to the server from a virtual machine with:
ncat 192.168.56.1 2000
And then send control messages. In production, these messages will be sent every 50ms by an Android device.
The problem
In my debug environment, I manually type messages every few seconds. During the time I don't type anything, the game gets rendered many times. What happens is that the game is constantly rendered using the previously received value (the message from the server code).
I send the following:
1:0.5
On the console where the app is started, I see the following, thanks to the print message line in the Server code:
alan#alan ~/.../py $ python main.py
1:0.5
The game acts as if it were constantly receiving this value (at the rate it renders, not every few seconds as I type).
Since that is happening, I would expect the print message inside the while True loop to also output constantly, so that the output would be:
alan#alan ~/.../py $ python main.py
1:0.5
1:0.5
1:0.5
1:0.5
....
However, that is not the case. Please advise. (I'm also open to suggestions for changing the title if it isn't explanatory enough.)
Your while True loop is polling the socket, which is only going to get messages when they are sent; it has no idea or care what the downstream event consumer is doing with those messages. It is just going to dispatch an event for, and print the contents of, the next record on the socket queue every .06 seconds. If you want the game to print the current command every render loop, you'll have to put the print statement in the render loop itself, not in the socket poller. Also, since you seem to want the last command to "stick" and not post a new event unless the user actually inputs something, you might want to put an if message: block around the event dispatch code in the socket handler you have here. Right now, you'll post an empty event every .06 seconds if the user hasn't provided any input since the last time you checked.
I also don't think it's advisable to put a sleep, or the loop you have, in your socket handler. The SocketServer is going to call it every time you receive data on the socket, so that loop is effectively being done for you, and all doing it here does is open you up to overflowing the buffer, I think. If you want to control how often you post events to pygame, you probably want to do that by either blocking events of a certain type from being added if there is already one queued, or by grabbing all events of a given type from the queue each game loop and then ignoring all but the first or last one. You could also control it by checking in the handler whether some amount of time has passed since the last event was posted, but then you have to make sure the event consumer can handle an event queue with multiple events waiting on it, and does the appropriate queue flushing when needed.
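For illustration, a minimal sketch of the handler with the sleep removed and an if message: guard around the dispatch, keeping the question's names (the read loop is kept here in case the client holds the connection open):

class SingleTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # readline() blocks until a full line arrives, so no sleep() is needed
        while True:
            line = self.rfile.readline()
            if not line:      # EOF: the client disconnected
                break
            message = line.strip()
            if message:       # only post an event when something was actually sent
                pygame.event.post(
                    pygame.event.Event(USEREVENT, {'control': message}))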
Edit:
Docs:
The difference is that the readline() call in the second handler will call recv() multiple times until it encounters a newline character, while the single recv() call in the first handler will just return what has been sent from the client in one sendall() call.
So yes, reading the whole line is guaranteed. In fact, I don't think the try is necessary either, since this won't even be called unless there is input to handle.