I'm trying to implement a basic pub/sub using the redis-py client.
The idea is that the publisher is actually a callback that gets called periodically and publishes some information on channel1 from inside the callback function.
The subscriber listens on that channel for this message and does some processing accordingly.
The subscriber is a basic bare-bones webserver deployed on k8s, and it should simply display the messages it receives via the event_handler function.
subscriber.py
from redis import Redis

class Sub(object):
    def __init__(self):
        redis = Redis(host=...,
                      port=...,
                      password=...,
                      db=0)
        ps = redis.pubsub(ignore_subscribe_messages=True)
        # register event_handler for channel1 and consume in a background thread
        ps.subscribe(**{'channel1': Sub.event_handler})
        ps.run_in_thread(sleep_time=0.01, daemon=True)

    @staticmethod
    def event_handler(msg):
        print("Hello from event handler")
        if msg and msg.get('type') == 'message':  # interested only in messages, not subscribe/unsubscribe/pmessages
            pass  # process the message
publisher.py
from redis import Redis

redis = Redis(host=...,
              port=...,
              password=...,
              db=0)

def call_back(msg):
    redis.publish('channel1', msg)
At the beginning, the messages are published and the subscriber's event handler prints and processes them correctly.
The problem is that after a few hours the subscriber stops showing those messages. I've checked the publisher logs and the messages definitely get sent out, but I'm not able to figure out why event_handler is no longer being called.
The print statement in it stops appearing, which is why I say the handler is no longer being fired.
Initially I suspected the thread must have died, but on exec'ing into the pod I still see it in the list of threads.
I've read through a lot of blogs and documentation but haven't found much help.
All I can deduce is that the event handler stops getting called after some time.
Can anyone help me understand what's going on, and what the best way is to reliably consume pub/sub messages in a non-blocking way?
Really appreciate any insights you guys have! :(
Could you post the whole publisher.py, please? It could be the case that call_back(msg) isn't being called anymore.
To check whether a client is still subscribed, you can use the command PUBSUB CHANNELS in redis-cli.
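If you'd rather check from Python than from redis-cli, redis-py exposes the same command as pubsub_channels(); a minimal sketch (the connection details are placeholders):

from redis import Redis

r = Redis(host=..., port=..., password=..., db=0)
# PUBSUB CHANNELS lists channels that have at least one subscriber;
# if 'channel1' drops out of this list, the subscriber's connection is gone.
print(r.pubsub_channels())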
Regards, Martin
I want to implement Erlang-like messaging, unless something like it already exists.
The idea is to create a multiprocess application (I'm using Ray).
I can imagine how to do the send/recv:
import ray
from collections import deque

@ray.remote
class Module:
    def __init__(self):
        self.inbox = {}  # folder name -> deque of messages
    def recv(self, folder, msg):
        if folder not in self.inbox: self.inbox[folder] = deque()
        self.inbox[folder].append(msg)  # deque uses append(), not push()
    def send(self, mod, folder, msg):
        mod.recv.remote(folder, msg)  # actor methods are invoked via .remote()
You call .send(), which remotely calls the target module's .recv() method.
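Usage might look roughly like this (a sketch, assuming Ray has been initialized):

ray.init()
a = Module.remote()
b = Module.remote()
# ask actor a to deliver a message into actor b's 'jobs' folder
ray.get(a.send.remote(b, 'jobs', 'hello'))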
My problem is I don't know how to do the internal event loop that REACTS to messages.
It has to be lightweight too, because it runs in every process.
One idea is a while-loop with a sleep, but that seems inefficient.
Probably, when a message arrives it should trigger some registered FILTER-HOOK if the message matches? So maybe no event loop is needed, just routines triggered by the FILTER.
What I did for now is trigger a check routine every time I get a message, which goes through rules defined as (key: regex, method-to-call) filters, as sketched below.
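A minimal sketch of that dispatch idea (the rule table and handlers here are hypothetical):

import re

# hypothetical rule table: (regex over the folder/key, handler to call)
RULES = [
    (re.compile(r'^jobs$'), lambda msg: print('job:', msg)),
    (re.compile(r'^log\..*'), lambda msg: print('log:', msg)),
]

def check(folder, msg):
    # called from recv(): run every handler whose pattern matches the key
    for pattern, handler in RULES:
        if pattern.match(folder):
            handler(msg)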
I have run the below code in the Python Shell:
from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers='localhost:9092')
future = producer.send('hello-topic', b'Hello, World!')
This works perfectly in that the Kafka consumer picks up the messages.
BUT...
Running it via a script does nothing.
Am I missing something obvious?
The only way to get it working as a script is to add this line...
future.get(timeout=10)
Any help would be appreciated.
From the kafka-python documentation on send(): send() is asynchronous. When called it adds the record to a buffer of pending record sends and immediately returns. This allows the producer to batch together individual records for efficiency.
You can use the flush() method to send any buffered messages immediately.
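In script form, that means a flush before the script exits should be enough; a minimal sketch:

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('hello-topic', b'Hello, World!')
# flush() blocks until every buffered record has actually been sent,
# so the script no longer exits before the asynchronous send completes.
producer.flush()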
My idea is also described here in case I express myself poorly (Send images with their names in one message - RabbitMQ (Python 3.X)).
I currently have a problem with RabbitMQ:
I made a work queue on which several consumers work at the same time; it is a containerized image-processing job that produces a str output with the requested information.
The results must be sent to another queue when the processing is finished,
but how do I know when the queue containing the images is empty and there is no more work to do? Roughly speaking, I would like something like "if the queue is empty, then send the results...".
Thank you for your time, have a good day.
You can do a passive declare of the queue to get the count of messages, but that may not be reliable as the count returned does not include messages in the "unacked" state. You could query the queue's counts via the HTTP API.
Or, whatever application publishes the images could send a "no more images" message to indicate no more work to do. The consumer that receives that message could then query the HTTP API to confirm that no messages are in the Ready or Unacked state, then send the results to the next queue.
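A rough sketch of that HTTP API check, assuming the management plugin is enabled on its default port with the default vhost and credentials (all of these are assumptions, adjust to your setup):

import requests

def queue_is_drained(queue_name):
    # management API endpoint for a queue on the default vhost '/' (%2F)
    url = f'http://localhost:15672/api/queues/%2F/{queue_name}'
    counts = requests.get(url, auth=('guest', 'guest')).json()
    # drained only when nothing is waiting and nothing is still being processed
    return counts['messages_ready'] == 0 and counts['messages_unacknowledged'] == 0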
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Hi, I think you can solve this with queue_declare:

import logging
log = logging.getLogger(__name__)

def has_enough_messages(channel, queue_name):
    # a passive declare does not create the queue; it just returns its stats
    status = channel.queue_declare(queue=queue_name, passive=True)
    if status.method.message_count > 5:
        return True
    log.error(f'{queue_name} has no messages or fewer than 5 messages')
    return False
Is it possible to receive only a fixed number of messages from ActiveMQ?
Let's say I need to receive only 100 messages from the queue; is that possible?
I am using the message listener method; is there any other way to receive messages?
example code snippet:
import time
import stomp

queue_messages = []

class SampleListener(object):
    def on_message(self, headers, msg):
        queue_messages.append(msg)

def read_messages():
    queue_connection = stomp.Connection([(activemq_host, int(activemq_port))])
    queue_connection.start()
    queue_connection.connect('admin', 'admin')
    queue_connection.set_listener('SampleListener', SampleListener())
    queue_connection.subscribe(destination=activemq_input_q, id=1, ack='auto')
    time.sleep(1)
    queue_connection.disconnect()

read_messages()
Why don't you share your problem rather than the solution you have in mind? Chances are the problem is not what you think it is, or there may be better solutions.
To answer your question: yes, you can. In ActiveMQ's case, you can add an extra header like {'activemq.prefetchSize': 100} and set ack='client' when you subscribe to the queue, but never acknowledge the messages. The consequence is that you will not receive more than 100 messages.
It is an awkward solution, I must say. Your code will end up consuming the first 100 messages in the queue and that's it. You can apparently disconnect and resubscribe to the same queue to receive the next 100 messages.
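Applied to your snippet, the subscribe call might look roughly like this (an untested sketch):

queue_connection.subscribe(destination=activemq_input_q,
                           id=1,
                           ack='client',  # messages deliberately stay unacked
                           headers={'activemq.prefetchSize': 100})  # broker stops after 100 unacked messages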
When ack='client' and I don't acknowledge the message in the on_message event, when will the acknowledgement actually be sent to the server? Will it be sent on a successful disconnect?
Also, if I abruptly kill the script, will the acknowledgement still be sent, and will I miss the messages?
I'm sending some data to a Kafka topic using kafka-python. For a while I struggled with not being able to send data to my Kafka topic, until I found out that it works if I briefly delay the code.
from kafka import KafkaProducer
from time import sleep
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("topic", "foo")
sleep(.1)
This code does not work for me without the sleep(.1). It's as if sending data needs time to settle before it works properly. Is there anything in the kafka-python client that deals with this? Or is there a better solution?
A year later, but for anyone seeing this, a solution is below. The issue here is a race between the end of the script and the asynchronous send call, which is why the sleep() call works: send() only enqueues the record, and the script can exit before the background thread has actually transmitted it.
The kafka module should handle the Python exit better, or at a minimum output something to stdout/stderr, so this behavior isn't silent.
From the kafka-python github:
# Block until a single message is sent (or timeout)
future = producer.send('foobar', b'another_message')
result = future.get(timeout=60)
Now you can guarantee that your script will block until a message has been confirmed published.
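If you are sending many messages and don't need each individual result, a single producer.flush() before exit achieves the same thing for everything still in the buffer:

producer.send('foobar', b'another_message')
producer.flush()  # block until the entire send buffer has been delivered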