I'm trying to implement a client-server architecture in Python, where I have:
A server application
A list of clients, who can subscribe to updates via the API (by sending POST requests to the /subscribe endpoint)
It works fine. On the server side, I have a list of the subscribers' URLs.
The main idea is to send requests from the server to all subscribed clients every X seconds (something similar to a monitoring system).
I'm trying to do this part using threads:
import threading
import time

import requests

class Monitor(threading.Thread):
    def __init__(self):
        super(Monitor, self).__init__()
        self.setDaemon(True)

    def send_notifications(self, subscribers):
        for subscriber in subscribers:
            requests.post(subscriber["url"], json=subscriber["data"], timeout=0.5)

    def run(self):
        subscribers = get_subscribers()  # getting the list of subscribers via an API call
        while True:
            self.send_notifications(subscribers)
            time.sleep(Y)
More or less it works, but I need to improve it a little.
The expected behaviour is:
The server should send notifications every X seconds to each subscribed client. If sending notifications to some client keeps failing for Y minutes (5 minutes, for example), it should unsubscribe that unresponsive client.
Are there any best practices for this?
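For illustration, one way to get this behavior is to record, per subscriber, the timestamp of the first failure in the current run of consecutive failures, reset it on success, and unsubscribe once failures have persisted past the limit. A minimal sketch, assuming hypothetical get_subscribers() and unsubscribe() helpers and placeholder interval constants:

import threading
import time

import requests

CHECK_INTERVAL = 10     # X: seconds between notification rounds (placeholder)
FAILURE_LIMIT = 5 * 60  # Y: seconds of continuous failure before unsubscribing

class Monitor(threading.Thread):
    def __init__(self):
        super(Monitor, self).__init__()
        self.setDaemon(True)
        self.first_failure = {}  # subscriber URL -> time of first consecutive failure

    def send_notifications(self, subscribers):
        for subscriber in list(subscribers):
            url = subscriber["url"]
            try:
                r = requests.post(url, json=subscriber["data"], timeout=0.5)
                r.raise_for_status()  # treat HTTP error codes as failures too
                self.first_failure.pop(url, None)  # success resets the window
            except requests.RequestException:
                first = self.first_failure.setdefault(url, time.time())
                if time.time() - first >= FAILURE_LIMIT:
                    unsubscribe(subscriber)  # hypothetical API call
                    subscribers.remove(subscriber)
                    del self.first_failure[url]

    def run(self):
        subscribers = get_subscribers()  # hypothetical API call, as in the post
        while True:
            self.send_notifications(subscribers)
            time.sleep(CHECK_INTERVAL)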
I'm using python with pika, and have the following two similar use cases:
Connect to RabbitMQ server A and server B (at different IP addrs with different credentials), listen on exchange A1 on server A; when a message arrives, process it and send to an exchange on server B
Open an HTTP listener and connect to RabbitMQ server B; when a specific HTTP request arrives, process it and send to an exchange on server B
Alas, in both these cases, using my usual techniques, by the time I get to sending to server B the connection throws ConnectionClosed or ChannelClosed.
I assume this is the cause: while waiting on the incoming messages, the connection to server B (its "driver") is starved of CPU cycles and never gets a chance to service its connection socket, so it can't respond to heartbeats from server B, and thus the server shuts down the connection.
But I can't noodle out the fix. My current workaround is lame: I catch the ConnectionClosed exception, reopen a connection to server B, and retry sending my message.
But what is the "right" way to do this? I've considered these, but don't really feel I have all the parts to solve this:
Don't just sit forever in server A's basic_consume (my usual pattern), but rather use a timeout, and when I catch the timeout, somehow "service" heartbeats on server B's driver before returning to a "consume with timeout"... but how do I do that? How do I "let server B's connection driver service its heartbeats"?
I know the socket library's select() call can wait for messages on several sockets at once, then service the socket that has packets waiting. So maybe this is what pika's SelectConnection is for? a) I'm not sure; this is just a hunch. b) Even if I'm right, while I can find examples of how to create this connection, I can't find examples of how to use it to solve my multi-connection case.
Set up the two server connections in different processes, and use Python interprocess queues to get the processed message from one process to the next. The idea is that two different RabbitMQ connections in two different processes should be able to independently service their heartbeats. Except... I think this has a fatal flaw: the process holding the connection to server B is, instead, going to be "stuck" waiting on the interprocess queue, and the same starvation is going to happen.
I've checked StackOverflow and Googled this for an hour last night: I can't for the life of me find a blog post or sample code for this.
Any input? Thanks a million!
I managed to work it out, basing my solution on the documentation and an answer in the pika-python Google group.
First of all, your assumption is correct: the client process that's connected to server B, responsible for publishing, cannot reply to heartbeats if it's already blocking on something else, like waiting for a message from server A or blocking on an internal communication queue.
The crux of the solution is that the publisher should run as a separate thread and use BlockingConnection.process_data_events to service heartbeats and such. It looks like that method is supposed to be called in a loop that checks whether the publisher still needs to run:
def run(self):
    while self.is_running:
        # Block at most 1 second before returning and re-checking
        self.connection.process_data_events(time_limit=1)
Proof of concept
Since proving the full solution requires having two separate RabbitMQ instances running, I have put together a Git repo with an appropriate docker-compose.yml, the application code and comments to test this solution.
https://github.com/karls/rabbitmq-two-connections
Solution outline
Below is a sketch of the solution. Some notable things:
Publisher runs as a separate thread
The only "work" that the publisher does is servicing heartbeats and such, via Connection.process_data_events
The publisher registers a callback whenever the consumer wants to publish a message, using Connection.add_callback_threadsafe
The consumer takes the publisher as a constructor argument so it can publish the messages it receives, but it can work via any other mechanism as long as you have a reference to an instance of Publisher
The code is taken from the linked Git repo, which is why certain details are hardcoded, e.g. the queue name. It will work with any RabbitMQ setup needed (direct-to-queue, topic exchange, fanout, etc.).
import logging
import sys
import threading
import traceback
from typing import Optional

from pika import BlockingConnection, ConnectionParameters
from pika.channel import Channel

logger = logging.getLogger(__name__)


class Publisher(threading.Thread):
    def __init__(
        self,
        connection_params: ConnectionParameters,
        *args,
        **kwargs,
    ):
        super().__init__(*args, **kwargs)
        self.daemon = True
        self.is_running = True
        self.name = "Publisher"
        self.queue = "downstream_queue"

        self.connection = BlockingConnection(connection_params)
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=self.queue, auto_delete=True)
        self.channel.confirm_delivery()

    def run(self):
        while self.is_running:
            self.connection.process_data_events(time_limit=1)

    def _publish(self, message):
        logger.info("Calling '_publish'")
        self.channel.basic_publish("", self.queue, body=message.encode())

    def publish(self, message):
        logger.info("Calling 'publish'")
        self.connection.add_callback_threadsafe(lambda: self._publish(message))

    def stop(self):
        logger.info("Stopping...")
        self.is_running = False
        # Call .process_data_events one more time to block
        # and allow the while-loop in .run() to break.
        # Otherwise the connection might be closed too early.
        self.connection.process_data_events(time_limit=1)
        if self.connection.is_open:
            self.connection.close()
            logger.info("Connection closed")
        logger.info("Stopped")


class Consumer:
    def __init__(
        self,
        connection_params: ConnectionParameters,
        publisher: Optional["Publisher"] = None,
    ):
        self.publisher = publisher
        self.queue = "upstream_queue"

        self.connection = BlockingConnection(connection_params)
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=self.queue, auto_delete=True)
        self.channel.basic_qos(prefetch_count=1)

    def start(self):
        self.channel.basic_consume(
            queue=self.queue, on_message_callback=self.on_message
        )
        try:
            self.channel.start_consuming()
        except KeyboardInterrupt:
            logger.info("Warm shutdown requested...")
        except Exception:
            traceback.print_exception(*sys.exc_info())
        finally:
            self.stop()

    def on_message(self, _channel: Channel, m, _properties, body):
        try:
            message = body.decode()
            logger.info(f"Got: {message!r}")
            if self.publisher:
                self.publisher.publish(message)
            else:
                logger.info(f"No publisher provided, printing message: {message!r}")
            self.channel.basic_ack(delivery_tag=m.delivery_tag)
        except Exception:
            traceback.print_exception(*sys.exc_info())
            self.channel.basic_nack(delivery_tag=m.delivery_tag, requeue=False)

    def stop(self):
        logger.info("Stopping consuming...")
        if self.connection.is_open:
            logger.info("Closing connection...")
            self.connection.close()
        if self.publisher:
            self.publisher.stop()
        logger.info("Stopped")
I have a Python Kafka consumer application where I consume messages and then call an external web service synchronously. The web service takes a minute to process a message and send the response.
Is there a way to consume a message, send the request to the web service, and consume the next message without waiting for the response?
from kafka import KafkaConsumer
from json import loads

consumer = KafkaConsumer(
    'spring_test',
    bootstrap_servers=['localhost:9092'],
    auto_offset_reset='earliest',
    enable_auto_commit=True,
    group_id='my-group',
    value_deserializer=lambda x: loads(x.decode('utf-8')))
This is how I wait for messages and send the external web request:
def consume_msgs():
    for message in consumer:
        message = message.value
        send('{}'.format(message))

consume_msgs()
The function send() takes one minute before I get the response. I want to start consuming the next message in the meantime, asynchronously, but I don't know where to start:
import requests

def send(pload):
    r = requests.post('someurl', data=pload)
    print(r)
Not sure if this is what you need, but could you just spin each call to send() out into a thread? Something like the below. This way the for loop will continue without waiting for send() to return. You may have to throttle the number of threads somehow if you are consuming data far more quickly than you are processing it; one way to do that is sketched after the code.
from threading import Thread

def consume_msgs():
    for message in consumer:
        message = message.value
        Thread(target=send, args=('{}'.format(message),)).start()

consume_msgs()
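As a sketch of the throttling idea, a concurrent.futures.ThreadPoolExecutor caps the number of in-flight requests instead of starting an unbounded number of threads; the worker count of 8 is an arbitrary placeholder:

from concurrent.futures import ThreadPoolExecutor

def consume_msgs():
    # At most 8 send() calls run concurrently; extra messages queue up
    # inside the executor until a worker is free.
    with ThreadPoolExecutor(max_workers=8) as executor:
        for message in consumer:
            executor.submit(send, '{}'.format(message.value))

consume_msgs()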
In my Tornado app, clients sometimes disconnect from the server, but my current code doesn't detect that a client has disconnected. I currently use ping to find out whether a client is disconnected.
Here is my ping/pong code:
from threading import Timer

from tornado import websocket
from tornado.websocket import WebSocketClosedError

class SocketHandler(websocket.WebSocketHandler):
    def __init__(self, application, request, **kwargs):
        # some code here
        self.ping_counter = 0
        Timer(5.0, self.do_ping).start()

    def do_ping(self):
        try:
            self.ping_counter += 1
            self.ping("")
            if self.ping_counter > 2:
                self.close()
            Timer(60, self.do_ping).start()
        except WebSocketClosedError:
            pass

    def on_pong(self, data):
        self.ping_counter = 0
Now I want to set SO_RCVTIMEO in Tornado instead of using the ping/pong method.
Something like this:
sock.setsockopt(socket.SO_RCVTIMEO)
Is it possible to set SO_RCVTIMEO in Tornado to close clients from the server after a specific timeout?
SO_RCVTIMEO does not do anything in an asynchronous framework like Tornado. You probably want to wrap your reads in tornado.gen.with_timeout. You'll still need to use pings to test the connection and make sure it is still working; if the connection is idle there are few guarantees about how long it will take for the system to notice. (TCP keepalives are a possibility, but these are not configurable on all platforms and generally use very long timeouts).
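Since the question is about the server side, where messages arrive through the on_message callback, ping/pong remains the practical watchdog there. For illustration, here is a minimal sketch of tornado.gen.with_timeout wrapped around a client-side read; the URL and the 60-second limit are placeholders:

import datetime

from tornado import gen
from tornado.websocket import websocket_connect

@gen.coroutine
def read_with_timeout(url, timeout_seconds=60):
    conn = yield websocket_connect(url)
    while True:
        try:
            # gen.TimeoutError is raised if no message arrives in time.
            msg = yield gen.with_timeout(
                datetime.timedelta(seconds=timeout_seconds),
                conn.read_message())
        except gen.TimeoutError:
            conn.close()
            return
        if msg is None:  # the connection was closed by the other side
            return
        print(msg)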
I created a zmq_forwarder.py that's run separately, and it passes messages from the app to a sockJS connection. I'm currently working on how a Flask app could receive a message from sockJS via ZMQ. I'm pasting the contents of my zmq_forwarder.py. I'm new to ZMQ and I don't know why it uses 100% CPU every time I run it.
import zmq

# Prepare our context and sockets
context = zmq.Context()
receiver_from_server = context.socket(zmq.PULL)
receiver_from_server.bind("tcp://*:5561")
forwarder_to_server = context.socket(zmq.PUSH)
forwarder_to_server.bind("tcp://*:5562")
receiver_from_websocket = context.socket(zmq.PULL)
receiver_from_websocket.bind("tcp://*:5563")
forwarder_to_websocket = context.socket(zmq.PUSH)
forwarder_to_websocket.bind("tcp://*:5564")

# Process messages from both sockets
# We prioritize traffic from the server
while True:
    # forward messages from the server
    while True:
        try:
            message = receiver_from_server.recv_string(zmq.DONTWAIT)
        except zmq.Again:
            break
        print("Received from server: ", message)
        forwarder_to_websocket.send_string(message)

    # forward messages from the websocket
    while True:
        try:
            message = receiver_from_websocket.recv_string(zmq.DONTWAIT)
        except zmq.Again:
            break
        print("Received from websocket: ", message)
        forwarder_to_server.send_string(message)
As you can see, I've set up 4 sockets. The app connects to port 5561 to push data to ZMQ, and to port 5562 to receive from ZMQ (although I'm still figuring out how to actually set it up to listen for messages sent by ZMQ). On the other hand, sockJS receives data from ZMQ on port 5564 and sends data to it on port 5563.
I've read that zmq.DONTWAIT makes receiving a message asynchronous and non-blocking, so I added it.
Is there a way to improve the code so that I don't overload the CPU? The goal is to be able to pass messages between the Flask app and the websocket using ZMQ.
You are polling your two receiver sockets in a tight loop, without any blocking (zmq.DONTWAIT), which will inevitably max out the CPU.
Note that there is some support in ZMQ for polling multiple sockets in a single thread - see this answer. I think you can adjust the timeout in poller.poll(millis) so that your code only uses lots of CPU when there are lots of incoming messages, and idles otherwise.
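For illustration, here is the forwarder from the question reworked around zmq.Poller; the 1000 ms timeout is an arbitrary placeholder:

import zmq

context = zmq.Context()
receiver_from_server = context.socket(zmq.PULL)
receiver_from_server.bind("tcp://*:5561")
forwarder_to_server = context.socket(zmq.PUSH)
forwarder_to_server.bind("tcp://*:5562")
receiver_from_websocket = context.socket(zmq.PULL)
receiver_from_websocket.bind("tcp://*:5563")
forwarder_to_websocket = context.socket(zmq.PUSH)
forwarder_to_websocket.bind("tcp://*:5564")

poller = zmq.Poller()
poller.register(receiver_from_server, zmq.POLLIN)
poller.register(receiver_from_websocket, zmq.POLLIN)

while True:
    # Sleep inside poll() until a socket is readable, for at most 1000 ms,
    # instead of spinning in a tight loop.
    ready = dict(poller.poll(1000))
    if receiver_from_server in ready:
        message = receiver_from_server.recv_string()
        forwarder_to_websocket.send_string(message)
    if receiver_from_websocket in ready:
        message = receiver_from_websocket.recv_string()
        forwarder_to_server.send_string(message)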
Your other option is to use the ZMQ event loop to respond to incoming messages asynchronously, using callbacks. See the PyZMQ documentation on this topic, from which the following "echo" example is adapted:
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream

ctx = zmq.Context()

# set up the socket, and a stream wrapped around the socket
s = ctx.socket(zmq.REP)
s.bind('tcp://localhost:12345')
stream = ZMQStream(s)

# Define a callback to handle incoming messages
def echo(msg):
    # in this case, just echo the message back again
    stream.send_multipart(msg)

# register the callback
stream.on_recv(echo)

# start the ioloop to start waiting for messages
ioloop.IOLoop.instance().start()
Is it possible to add additional subscriptions to a Redis connection? I have a listening thread, but it appears not to be influenced by new SUBSCRIBE commands.
If this is the expected behavior, what pattern should be used if users add a stock ticker feed to their interests or join a chatroom?
I would like to implement a Python class similar to:
import threading
import redis

class RedisPubSub(object):
    def __init__(self):
        self._redis_pub = redis.Redis(host='localhost', port=6379, db=0)
        self._redis_sub = redis.Redis(host='localhost', port=6379, db=0)
        self._sub_thread = threading.Thread(target=self._listen)
        self._sub_thread.setDaemon(True)
        self._sub_thread.start()

    def publish(self, channel, message):
        self._redis_pub.publish(channel, message)

    def subscribe(self, channel):
        self._redis_sub.subscribe(channel)

    def _listen(self):
        for message in self._redis_sub.listen():
            print(message)
The python-redis Redis and ConnectionPool classes inherit from threading.local, and this is producing the "magical" effects you're seeing.
Summary: your main thread and worker threads' self._redis_sub clients end up using two different connections to the server, but only the main thread's connection has issued the SUBSCRIBE command.
Details: since the main thread is creating self._redis_sub, that client ends up being placed into the main thread's thread-local storage. Next, I presume the main thread does a client.subscribe(channel) call. Now the main thread's client is subscribed on connection 1. Next you start the self._sub_thread worker thread, which ends up having its own self._redis_sub attribute set to a new instance of redis.Redis, which constructs a new connection pool and establishes a new connection to the Redis server.
This new connection has not yet been subscribed to your channel, so listen() returns immediately. So with python-redis you cannot pass an established connection with outstanding subscriptions (or any other stateful commands) between threads.
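To make the thread-local behavior concrete, here is a tiny standalone sketch (plain threading.local, no Redis) showing that an attribute set in the main thread is invisible from a worker thread:

import threading

local = threading.local()
local.conn = "connection-1"  # set in the main thread

def worker():
    # This thread gets its own empty threading.local storage,
    # so 'conn' is not set here.
    print(getattr(local, "conn", "no connection in this thread"))

t = threading.Thread(target=worker)
t.start()
t.join()  # prints: no connection in this thread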
Depending on how you plan to implement your app you may need to switch to using a different client, or come up with some other way to communicate subscription state to the worker threads, e.g. send subscription commands through a queue.
One other issue is that python-redis uses blocking sockets, which prevents your listening thread from doing other work while waiting for messages, and it cannot signal it wishes to unsubscribe unless it does so immediately after receiving a message.
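As an illustration of communicating subscription state to the worker thread, here is a minimal sketch using the modern redis-py PubSub API, in which the listener thread owns the subscription connection and subscription requests reach it over a dedicated control channel; the channel name and class shape are hypothetical:

import threading
import redis

class RedisPubSub(object):
    CONTROL = 'control'  # hypothetical channel carrying subscribe requests

    def __init__(self):
        self._redis_pub = redis.Redis(host='localhost', port=6379, db=0)
        self._sub_thread = threading.Thread(target=self._listen, daemon=True)
        self._sub_thread.start()

    def publish(self, channel, message):
        self._redis_pub.publish(channel, message)

    def subscribe(self, channel):
        # Ask the listener thread to subscribe; only that thread ever
        # touches the subscription connection.
        self._redis_pub.publish(self.CONTROL, channel)

    def _listen(self):
        # The subscription client is created inside the worker thread.
        pubsub = redis.Redis(host='localhost', port=6379, db=0).pubsub()
        pubsub.subscribe(self.CONTROL)
        for message in pubsub.listen():
            if message['type'] != 'message':
                continue
            if message['channel'] == self.CONTROL.encode():
                pubsub.subscribe(message['data'].decode())
            else:
                print(message)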
Async way: use the Twisted framework and the txredisapi plugin.
Example code (subscriber):
import txredisapi as redis
from twisted.application import internet
from twisted.application import service
from twisted.internet import reactor

class myProtocol(redis.SubscriberProtocol):
    def connectionMade(self):
        print("waiting for messages...")
        print("use the redis client to send messages:")
        print("$ redis-cli publish chat test")
        print("$ redis-cli publish foo.bar hello world")
        self.subscribe("chat")
        self.psubscribe("foo.*")
        reactor.callLater(10, self.unsubscribe, "chat")
        reactor.callLater(15, self.punsubscribe, "foo.*")

        # self.continueTrying = False
        # self.transport.loseConnection()

    def messageReceived(self, pattern, channel, message):
        print("pattern=%s, channel=%s message=%s" % (pattern, channel, message))

    def connectionLost(self, reason):
        print("lost connection:", reason)


class myFactory(redis.SubscriberFactory):
    # SubscriberFactory is a wrapper for the ReconnectingClientFactory
    maxDelay = 120
    continueTrying = True
    protocol = myProtocol


application = service.Application("subscriber")
srv = internet.TCPClient("127.0.0.1", 6379, myFactory())
srv.setServiceParent(application)
Only one thread, no headache :)
It depends on what kind of app you're coding, of course. For networking, go with Twisted.