How to add a timeout to method start_consuming() on pika library - python

I have a BlockingConnection, and I follow the examples of the pika documentation. But in all of them, the example code to start consuming messages is:
connection = pika.BlockingConnection()
channel = connection.channel()
channel.basic_consume('test', on_message)
try:
    channel.start_consuming()
except KeyboardInterrupt:
    channel.stop_consuming()
connection.close()
(with more or less details).
I have to write many scripts, and I want to run one after another (for test/research purposes). But the above code requires that I press ^C to stop each one.
I tried to add some of the timeouts explained in the documentation, but I had no luck. For example, I would like a parameter so that if the client doesn't consume any message in the last X seconds, the script finishes. Is this possible in the pika lib, or do I have to change the approach?

Don't use start_consuming if you don't want your code to block. Either use SelectConnection or this method that uses consume. You can add a timeout to the parameters passed to consume.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

import pika

parameters = pika.ConnectionParameters(host="localhost")
connection = pika.BlockingConnection(parameters)
channel = connection.channel()

def ack_message(channel, method):
    """Note that `channel` must be the same pika channel instance via which
    the message being ACKed was retrieved (AMQP protocol constraint).
    """
    if channel.is_open:
        channel.basic_ack(method.delivery_tag)
    else:
        # Channel is already closed, so we can't ACK this message;
        # log and/or do something that makes sense for your app in this case.
        pass

def callback(channel, method, properties, body):
    ack_message(channel, method)
    print("body", body, flush=True)

channel.basic_consume(queue="hello", on_message_callback=callback)
channel.start_consuming()
connection.close()
The original code is from Luke Bakken's answer, but I have edited it a little bit. :)

It may be too late, but perhaps someone benefits from this. You can use the blocked_connection_timeout argument of pika.ConnectionParameters() as follows:
connection = pika.BlockingConnection(
    pika.ConnectionParameters(
        heartbeat=600,
        blocked_connection_timeout=600,
        host=self.queue_host,
        port=constants.RABBTIMQ_PORT,
        virtual_host=self.rabbitmq_virtual_host,
        credentials=pika.PlainCredentials(
            username=self.rabbitmq_username,
            password=self.rabbitmq_password
        )
    )
)

Related

Python socket sendall blocks and I'm not sure how to handle bad clients / slow consumers

To simplify things, assume a TCP client-server app where the client sends a request and the server responds. The server uses sendall to respond to each client.
Now assume a bad client that sends requests to the server but doesn't really handle the responses. I.e. the client never calls socket.recv. (It doesn't have to be a bad client btw...it may be a slow consumer on the other end).
What ends up happening, is that the server keeps sending responses using sendall, until I'm assuming a buffer gets full, and then at some point sendall blocks and never returns.
This seems like a common problem to me so what would be the recommended solution?
Is there something like a try-send that would raise or return an EWOULDBLOCK (or similar) if the recipient's buffer is full? I'd like to avoid non-blocking select type calls if possible (happy to go that way if there are no alternatives).
Thank you in advance.
Following rveed's comment, here's a solution that works for my case:
def send_to_socket(self, sock: socket.socket, message: bytes) -> bool:
    try:
        sock.settimeout(10.0)  # protect against bad clients / slow consumers by timing out instead of blocking
        sock.sendall(message)  # sendall returns None on success and raises on failure
        sock.settimeout(None)  # back to blocking (if needed for subsequent calls to recv, etc. on this socket)
        return True
    except socket.timeout:
        # do whatever you need to here
        return False
    except Exception:
        # handle other exceptions here
        return False
If needed, instead of setting the timeout to None afterwards (i.e. back to blocking), you can store the previous timeout value (using gettimeout) and restore to that.
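As for the "try-send" idea itself: the stdlib can do this without select by putting the socket in non-blocking mode, where a full peer buffer raises BlockingIOError (EWOULDBLOCK/EAGAIN) instead of blocking. A sketch under that assumption; the helper name and the bytes-queued/-1 convention are my own, and it uses send rather than sendall because a partially completed sendall cannot tell you how much was delivered:

```python
import socket

def try_send(sock: socket.socket, data: bytes) -> int:
    """Attempt one non-blocking send; return the number of bytes queued,
    or -1 if the peer's buffer is full and the send would have blocked."""
    sock.setblocking(False)
    try:
        return sock.send(data)
    except BlockingIOError:  # raised for EWOULDBLOCK / EAGAIN
        return -1
    finally:
        sock.setblocking(True)  # restore blocking mode for other calls
```

A return of -1 is the point where you would drop the message, disconnect the slow client, or buffer the data yourself.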

How to interrupt in python

My subscribe function calls the on_message function every time a message arrives from the websocket,
and it unsubscribes when the message is STOP.
is_sub = True

def on_message(msg):
    global is_sub  # without this, the assignment below would create a local variable
    print(msg)
    if msg == "STOP":
        unsubscribe(key)
        is_sub = False
    # continue to code

print("Start")
key = subscribe(on_message)
while is_sub:
    print("on sub")
    time.sleep(1)
# continue here
print("End")
Without the while loop the code ends and doesn't receive any more messages.
I want to find a better way (without time.sleep) to wait for the interrupt and then continue the code.
P.S. I can't edit the subscribe and unsubscribe functions.
You could run the instance of your websocket server in a separate thread. This will allow you to still continue to run your main thread while your webserver is listening to incoming messages.
I think you can't do it without a while loop. If you want to free your main thread you can do your message subscription on a separate thread, but that's not what you want, I guess. Most WebSocket clients provide a method to keep the connection alive; for example, this one provides a run_forever() method which does the 'while loop' part for you. If you are using a client, there is a good chance it provides methods for keeping the connection alive. I suggest you go through the documentation once more.
If you want to continue the loop, you can use the continue statement, and if you want to stop, you can use the break statement.
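If the goal is just to park the main thread until on_message says stop, without polling, a threading.Event gives a blocking wait with no time.sleep. A sketch: subscribe/unsubscribe are the question's own functions and appear only in comments, and the worker thread below merely simulates the websocket client delivering a STOP message on its own thread:

```python
import threading

stopped = threading.Event()

def on_message(msg):
    print(msg)
    if msg == "STOP":
        # unsubscribe(key) would go here
        stopped.set()  # wakes the waiting main thread immediately

print("Start")
# key = subscribe(on_message)  # the real client delivers messages on its own thread
worker = threading.Thread(target=on_message, args=("STOP",))  # simulation only
worker.start()
stopped.wait()  # blocks here, with no busy loop, until stopped.set() is called
worker.join()
print("End")
```

stopped.wait() also accepts a timeout argument if you want a periodic wake-up anyway.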

How can I reject and close connections in django-channels 2?

It's been 1 month since I started using django-channels and now I have a feeling that I am not disconnecting websockets properly.
When I disconnect I want to destroy the group completely if no one is there and it should be no sign of existence.
When I'm rejecting connections I raise channels.exceptions.DenyConnection or send {'accepted': 'False'}
I was just wondering if this is the right way to do things that I've mentioned or not.
Try calling self.close()
From the channels Documentation:
class MyConsumer(WebsocketConsumer):
    def connect(self):
        # Called on connection.
        # To accept the connection call:
        self.accept()
        # Or accept the connection and specify a chosen subprotocol.
        # A list of subprotocols specified by the connecting client
        # will be available in self.scope['subprotocols']
        self.accept("subprotocol")
        # To reject the connection, call:
        self.close()
As far as I've understood this, the way to close a group is by using group_discard.
def disconnect(self, close_code):
    async_to_sync(self.channel_layer.group_discard)("yourgroupname", self.channel_name)
Without having tested this, I would assume that raising an exception would result in an error 500 at the client. And a client receiving an error would probably interpret that not as "closed normally".
See channel docs here: https://channels.readthedocs.io/en/latest/topics/channel_layers.html#groups

Recovering from zmq.error.Again on a zmq.PAIR socket

I have a single client talking to a single server using a pair socket:
context = zmq.Context()
socket = context.socket(zmq.PAIR)
socket.setsockopt(zmq.SNDTIMEO, 1000)
socket.connect("tcp://%s:%i" % (host, port))
...
if msg is not None:
    try:
        socket.send(msg)
    except Exception as e:
        print(e, e.errno)
The program sends approximately one 10-byte message every second. We were seeing issues where the program would eventually start to hang infinitely waiting for a message to send, so we added a SNDTIMEO. However, now we are starting to get zmq.error.Again instead. Once we get this error, the resource never becomes available again. I'm looking into which error code exactly is occurring, but I was generally wondering what techniques people use to recover from zmq.error.Again inside their programs. Should I destroy the socket connection and re-establish it?
Fact#0: PAIR/PAIR is different from other ZeroMQ archetypes
RFC 31 explicitly defines:
Overall Goals of this Pattern
PAIR is not a general-purpose socket but is intended for specific use cases where the two peers are architecturally stable. This usually limits PAIR to use within a single process, for inter-thread communication.
Next, if the SNDHWM size is not set correctly (and, when the PAIR is to operate over the tcp:// transport class, also the O/S-related L3/L2 attributes), any next .send() will likewise yield the EAGAIN error.
There are a few additional counter-measures ( CONFLATE, IMMEDIATE, HEARTBEAT_{IVL|TTL|TIMEOUT} ), but there remains the above-mentioned principal limit on PAIR/PAIR, which sets what not to expect if using this archetype.
The main suspect:
Given the said design-side limits, after a damaged transport path, the PAIR access point will not re-negotiate the reconstruction of the socket into an operational (RTO) state.
For this reason, if your code indeed wants to keep using PAIR/PAIR, it may be wise to assemble an emergency SIG/flag path as well, so that the distributed system can robustly survive the L3/L2/L1 incidents that PAIR/PAIR is known not to take care of automatically.
Epilogue:
your code does not use the non-blocking .send() mode, while the EAGAIN error state is exactly what signals a blocked capability (the inability of the access point to .send() at this very moment).
Better use the published API details:
try:
    socket.send( msg, zmq.DONTWAIT )  # non-blocking .send(): never hangs
except zmq.Again:
    ...                               # ZeroMQ SIG'd EAGAIN: handle / retry / escalate
except zmq.ZMQError:
    ...                               # handle other error states
finally:
    ...
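On the "destroy and re-establish" part of the question: with PAIR over tcp:// that is, in practice, the way back to an operational state once the path is damaged. A sketch of the swap, assuming the endpoint string is yours and that LINGER=0 is acceptable (it makes close() drop undeliverable messages instead of blocking):

```python
import zmq

def reset_pair_socket(context: zmq.Context, old_socket: zmq.Socket,
                      endpoint: str) -> zmq.Socket:
    """Close a stuck PAIR socket and return a fresh one connected to the same endpoint."""
    old_socket.setsockopt(zmq.LINGER, 0)  # discard queued messages rather than block on close
    old_socket.close()
    new_socket = context.socket(zmq.PAIR)
    new_socket.setsockopt(zmq.SNDTIMEO, 1000)  # keep the original 1 s send timeout
    new_socket.connect(endpoint)
    return new_socket
```

Call this once zmq.error.Again persists across retries; connects in ZeroMQ are asynchronous, so the call returns immediately and delivery resumes if and when the peer comes back.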

Pika python asynchronous publisher: how to send data from user via console?

I am using the standard asynchronous publisher example, and I noticed that the publisher will keep publishing the same message in a loop forever.
So I commented out the schedule_next_message call in publish_message to stop that loop.
But what I really want is for the publisher to start and publish only when a user gives it a "message_body" and "key";
basically, for the publisher to publish the user's inputs.
I was not able to find any examples or hints of how to make the publisher take inputs from the user in real time.
I am new to RabbitMQ, pika, Python, etc.
Here is the snippet of code I am talking about:
def publish_message(self):
    """If the class is not stopping, publish a message to RabbitMQ,
    appending a list of deliveries with the message number that was sent.
    This list will be used to check for delivery confirmations in the
    on_delivery_confirmations method.

    Once the message has been sent, schedule another message to be sent.
    The main reason I put scheduling in was just so you can get a good idea
    of how the process is flowing by slowing down and speeding up the
    delivery intervals by changing the PUBLISH_INTERVAL constant in the
    class.
    """
    if self._stopping:
        return
    message = {"service": "sendgrid", "sender": "nutshi#gmail.com",
               "receiver": "nutshi#gmail.com",
               "subject": "test notification", "text": "sample email"}
    routing_key = "email"
    properties = pika.BasicProperties(app_id='example-publisher',
                                      content_type='application/json',
                                      headers=message)
    self._channel.basic_publish(self.EXCHANGE, routing_key,
                                json.dumps(message, ensure_ascii=False),
                                properties)
    self._message_number += 1
    self._deliveries.append(self._message_number)
    LOGGER.info('Published message # %i', self._message_number)
    # self.schedule_next_message()
    # self.stop()

def schedule_next_message(self):
    """If we are not closing our connection to RabbitMQ, schedule another
    message to be delivered in PUBLISH_INTERVAL seconds.
    """
    if self._stopping:
        return
    LOGGER.info('Scheduling next message for %0.1f seconds',
                self.PUBLISH_INTERVAL)
    self._connection.add_timeout(self.PUBLISH_INTERVAL,
                                 self.publish_message)

def start_publishing(self):
    """This method will enable delivery confirmations and schedule the
    first message to be sent to RabbitMQ
    """
    LOGGER.info('Issuing consumer related RPC commands')
    self.enable_delivery_confirmations()
    self.schedule_next_message()
The site does not let me add the solution as an answer, but I was able to solve my issue using raw_input().
Thanks
I know I'm a bit late to answer the question but have you looked at this one?
Seems to be a bit more related to what you need than using a full async publisher. Normally you use those with a Python Queue to pass messages between threads.
