I am modifying pika headers using
properties.headers = {
    'myheader': myheader
}
But I am acking and nacking with the delivery_tag
channel.basic_nack(delivery_tag=delivery_tag, requeue=False)
How can I pass the updated properties with the headers to the ack and nack response functions? Or what is the pika way of doing this?
It is correct that basic_nack cannot change the headers.
The way to do this is not to use NACK at all, but to generate and return a 'new' message (which is simply the current message you are handling, with the new headers added to it).
It appears that a NACK basically does this anyway, according to the AMQP spec.
So my logic uses basic_ack on success, and message generation with updated headers on failure. In my case I 'redirect' the new message to a dead letter exchange which a dead letter queue is bound to, as sketched below.
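A rough sketch of that failure path; the 'my-dlx' exchange name and the extra headers are illustrative, not part of the question's setup:
import pika

def republish_with_headers(channel, method, properties, body, reason):
    # Copy the incoming message's headers and add/update our own.
    headers = dict(properties.headers or {})
    headers['x-failure-reason'] = reason                           # illustrative header
    headers['x-retry-count'] = headers.get('x-retry-count', 0) + 1

    # Publish the copy to a pre-declared dead letter exchange...
    channel.basic_publish(
        exchange='my-dlx',                                         # hypothetical DLX name
        routing_key=method.routing_key,
        body=body,
        properties=pika.BasicProperties(
            headers=headers,
            content_type=properties.content_type))

    # ...and ack the original so it leaves the source queue, instead of nacking it.
    channel.basic_ack(delivery_tag=method.delivery_tag)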
Here is an extract from the pika docs. By the way, you wrote basic_nck in places... is that just a typo in the question, or is it your actual issue?
def basic_nack(self, delivery_tag=None, multiple=False, requeue=True):
    """This method allows a client to reject one or more incoming messages.
    It can be used to interrupt and cancel large incoming messages, or
    return untreatable messages to their original queue.

    :param integer delivery-tag: int/long The server-assigned delivery tag
    :param bool multiple: If set to True, the delivery tag is treated as
                          "up to and including", so that multiple messages
                          can be acknowledged with a single method. If set
                          to False, the delivery tag refers to a single
                          message. If the multiple field is 1, and the
                          delivery tag is zero, this indicates
                          acknowledgement of all outstanding messages.
    :param bool requeue: If requeue is true, the server will attempt to
                         requeue the message. If requeue is false or the
                         requeue attempt fails the messages are discarded or
                         dead-lettered.

    """
    self._raise_if_not_open()
    return self._send_method(
        spec.Basic.Nack(delivery_tag, multiple, requeue))
Sorry about that, but as far as I know it is not possible for basic_nack (or basic_ack) to modify the headers. The catch is that the re-published message ends up in the dead letter queue as a brand new message with a new ID.
I have an AWS SQS queue which receives messages; I iterate through them, printing the details, and then attempt to delete them. Unfortunately they are not deleted, even though I get a success response. I can't figure out why they are not being removed, as I am sure I've used similar code before.
The basic example I'm trying is like this:
import boto3

# Create SQS client
sqs = boto3.client('sqs',
                   region_name='',
                   aws_access_key_id='',
                   aws_secret_access_key=''
                   )

queue_url = ''

# Receive message from SQS queue
response = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=[
        'All'
    ],
    MaxNumberOfMessages=10,
    MessageAttributeNames=[
        'All'
    ],
    VisibilityTimeout=0,
    WaitTimeSeconds=0
)

print(len(response['Messages']))

for index, message in enumerate(response['Messages']):
    print("Index Number: ", index)
    print(message)
    receipt_handle = message['ReceiptHandle']

    # do some function

    sqs.delete_message(
        QueueUrl=queue_url,
        ReceiptHandle=receipt_handle
    )
Probably because you are using VisibilityTimeout=0. This means that the message immediately goes back to the SQS queue, so there is nothing left for you to delete.
You are setting VisibilityTimeout=0 and WaitTimeSeconds=0 - the message will timeout and become visible again after zero seconds.
This is probably not what you want - you should try with higher values here and read the docs about them: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
You can measure your usual processing time and set the values accordingly, with some safety margin, so that messages are still redelivered in case of errors. An adjusted sketch follows.
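For illustration, a version of the call with non-zero timeouts and a guard for the empty-queue case; the specific values are placeholders to tune:
# A minimal sketch of the adjusted call; the timeout values are
# illustrative and should match your actual processing time.
response = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=['All'],
    MaxNumberOfMessages=10,
    MessageAttributeNames=['All'],
    VisibilityTimeout=60,   # hide the message long enough to process and delete it
    WaitTimeSeconds=20,     # long polling reduces empty responses
)

for message in response.get('Messages', []):  # 'Messages' is absent when the queue is empty
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=message['ReceiptHandle'])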
Key points to understand regarding SQS:
Visibility timeout
Wait time seconds
Receipt handle
When a message is received by a Lambda function, it gets a 'ReceiptHandle' for that receive.
Now, this ReceiptHandle changes whenever a new Lambda thread receives the message, making the previous handle invalid.
Since VisibilityTimeout = 0 and WaitTimeSeconds = 0 (as pointed out by @Mandraenke), the message is immediately made available to another Lambda thread with a new 'ReceiptHandle'.
The 'ReceiptHandle' held by the previous Lambda thread then becomes invalid, so its delete call has no effect.
I am using the standard asynchronous publisher example, and I noticed that the publisher will keep publishing the same message in a loop forever.
So I commented out the schedule_next_message call in publish_message to stop that loop.
But what I really want is for the publisher to start and publish only when a user gives it a "message_body" and "key";
basically, for the publisher to publish the user's inputs.
I was not able to find any examples or hints of how to make the publisher take inputs from a user in real time.
I am new to RabbitMQ, pika, Python, etc.
Here is the snippet of code I am talking about:
def publish_message(self):
    """If the class is not stopping, publish a message to RabbitMQ,
    appending a list of deliveries with the message number that was sent.
    This list will be used to check for delivery confirmations in the
    on_delivery_confirmations method.

    Once the message has been sent, schedule another message to be sent.
    The main reason I put scheduling in was just so you can get a good idea
    of how the process is flowing by slowing down and speeding up the
    delivery intervals by changing the PUBLISH_INTERVAL constant in the
    class.

    """
    if self._stopping:
        return

    message = {"service": "sendgrid", "sender": "nutshi@gmail.com",
               "receiver": "nutshi@gmail.com",
               "subject": "test notification", "text": "sample email"}
    routing_key = "email"
    properties = pika.BasicProperties(app_id='example-publisher',
                                      content_type='application/json',
                                      headers=message)

    self._channel.basic_publish(self.EXCHANGE, routing_key,
                                json.dumps(message, ensure_ascii=False),
                                properties)
    self._message_number += 1
    self._deliveries.append(self._message_number)
    LOGGER.info('Published message # %i', self._message_number)
    #self.schedule_next_message()
    #self.stop()

def schedule_next_message(self):
    """If we are not closing our connection to RabbitMQ, schedule another
    message to be delivered in PUBLISH_INTERVAL seconds.
    """
    if self._stopping:
        return
    LOGGER.info('Scheduling next message for %0.1f seconds',
                self.PUBLISH_INTERVAL)
    self._connection.add_timeout(self.PUBLISH_INTERVAL,
                                 self.publish_message)

def start_publishing(self):
    """This method will enable delivery confirmations and schedule the
    first message to be sent to RabbitMQ
    """
    LOGGER.info('Issuing consumer related RPC commands')
    self.enable_delivery_confirmations()
    self.schedule_next_message()
The site does not let me add the solution as an answer, but I was able to solve my issue using raw_input(). A rough sketch of the idea is below.
Thanks
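For reference, a minimal sketch of that idea using pika's BlockingConnection instead of the full async publisher; the host and exchange name are placeholders:
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))  # placeholder host
channel = connection.channel()

while True:
    body = raw_input('message body (blank to quit): ')   # use input() on Python 3
    if not body:
        break
    key = raw_input('routing key: ')
    channel.basic_publish(exchange='message',            # hypothetical exchange name
                          routing_key=key,
                          body=json.dumps({'text': body}),
                          properties=pika.BasicProperties(
                              content_type='application/json'))

connection.close()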
I know I'm a bit late to answer the question, but have you looked at this one?
It seems to be a bit more related to what you need than using a full async publisher. Normally you use those with a Python Queue to pass messages between threads.
I am playing around with a small project to get a better understanding of web technologies.
One requirement is that if multiple clients have access to my site and one makes a change, all the others should be notified. From what I have gathered, Server-Sent Events seem to do what I want.
However, when I open my site in both Firefox and Chrome and try to send an event, only one of the browsers gets it. If I send an event again, only one of the browsers gets the new event, usually the browser that did not get event number one.
Here is the relevant code snippets.
Client:
console.log("setting sse handlers")
viewEventSource = new EventSource("{{ url_for('viewEventRequest') }}");
viewEventSource.onmessage = handleViewEvent;

function handleViewEvent(event){
    console.log("called handle view event")
    console.log(event);
}
Server:
@app.route('/3-3-3/view-event')
def view_event_request():
    return Response(handle_view_event(), mimetype='text/event-stream')

def handle_view_event():
    while True:
        for message in pubsub_view.listen():
            if message['type'] == 'message':
                data = 'retry: 1\n'
                data += 'data: ' + message['data'] + '\n\n'
                return data

@app.route('/3-3-3/test')
def test():
    red.publish('view-event', "This is a test message")
    return make_response("success", 200)
My question is: how do I get the event sent to all connected clients, and not just one?
Here are some gists that may help (I've been meaning to release something like 'flask-sse', based on 'django-sse'):
https://gist.github.com/3680055
https://gist.github.com/3687523
also useful - https://github.com/jkbr/chat/blob/master/app.py
The first thing I notice about your code is that 'handle_view_event' is not a generator.
Though it is in a 'while' loop, the use of 'return' will always exit the function the first time it returns data; a function can only return once. I think you want it to 'yield' instead.
In any case, the above links should give you an example of a working setup, and a minimal generator version is sketched below.
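For example, a generator version of the handler from the question; it assumes 'red' is the redis client used in the /3-3-3/test route:
def handle_view_event():
    # A generator: each connected client gets its own subscription,
    # so every client receives every published event.
    pubsub = red.pubsub()
    pubsub.subscribe('view-event')
    for message in pubsub.listen():
        if message['type'] == 'message':
            yield 'retry: 1\n' + 'data: ' + message['data'] + '\n\n'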
As Anarov says, websockets and socket.io are also an option, but SSE should work anyway. I think socket.io supports using SSE if ws are not needed.
I am using the boto library in Python to get Amazon SQS messages. In exceptional cases I don't delete messages from the queue, in order to give a couple more chances to recover from temporary failures. But I don't want to keep receiving failed messages constantly. What I would like to do is either delete messages after receiving them more than 3 times, or not receive a message whose receive count is more than 3.
What is the most elegant way of doing it?
There are at least a couple of ways of doing this.
When you read a message in boto, you receive a Message object or some subclass thereof. The Message object has an "attributes" field that is a dict containing all message attributes known by SQS. One of the things SQS tracks is the approximate # of times the message has been read. So, you could use this value to determine whether the message should be deleted or not but you would have to be comfortable with the "approximate" nature of the value.
Alternatively, you could record message ID's in some sort of database and increment a count field in the database each time you read the message. This could be done in a simple Python dict if the messages are always being read within a single process or it could be done in something like SimpleDB if you need to record readings across processes.
Hope that helps.
Here's some example code:
>>> import boto.sqs
>>> c = boto.sqs.connect_to_region()
>>> q = c.lookup('myqueue')
>>> messages = c.receive_message(q, num_messages=1, attributes='All')
>>> messages[0].attributes
{u'ApproximateFirstReceiveTimestamp': u'1365474374620',
 u'ApproximateReceiveCount': u'2',
 u'SenderId': u'419278470775',
 u'SentTimestamp': u'1365474360357'}
>>>
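Building on that session, the delete-after-three-reads check could look like this (the threshold is illustrative):
>>> msg = messages[0]
>>> if int(msg.attributes['ApproximateReceiveCount']) > 3:
...     q.delete_message(msg)  # give up on the message after the third read
...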
Another way could be to put an extra identifier at the end of the message in your SQS queue. This identifier can keep the count of the number of times the message has been read.
Also, if you want your service to stop polling these messages again and again, you can create one more queue, say a "dead message queue", and transfer messages which have crossed the threshold to that queue, along the lines of the sketch below.
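A rough sketch of that hand-off, reusing the 'c' connection, 'q' queue and 'msg' message objects from the boto example above; the 'dead-message-queue' name is hypothetical:
# Move a message that has crossed the threshold to a separate queue,
# then remove it from the source queue.
dead_q = c.lookup('dead-message-queue')          # hypothetical queue name
if int(msg.attributes['ApproximateReceiveCount']) > 3:
    dead_q.write(dead_q.new_message(msg.get_body()))
    q.delete_message(msg)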
AWS has built-in support for this; just follow the steps below:
create a dead letter queue
enable Redrive policy for the source queue by checking "Use Redrive Policy"
select the dead letter queue you created in step#1 for "Dead Letter Queue"
Set "Maximum Receives" as "3" or any value between 1 and 1000
How it works: whenever a message is received by a worker, its receive count increments. Once it reaches the "Maximum Receives" count, the message is pushed to the dead letter queue. Note that the receive count increments even if you access the message via the AWS console.
Source: Using Amazon SQS Dead Letter Queues
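If you prefer to configure this programmatically rather than through the console, a boto3 sketch of the same redrive policy; the queue URL, ARN and region are placeholders:
import json
import boto3

sqs = boto3.client('sqs', region_name='us-east-1')   # region is illustrative

sqs.set_queue_attributes(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/my-source-queue',  # placeholder
    Attributes={
        'RedrivePolicy': json.dumps({
            'deadLetterTargetArn': 'arn:aws:sqs:us-east-1:123456789012:my-dlq',   # placeholder
            'maxReceiveCount': '3'    # matches "Maximum Receives" above
        })
    }
)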
Get the ApproximateReceiveCount attribute from the message you read.
Then move it to another queue (where you can manage error messages) or just delete it.
foreach (var message in response.Messages){
    try{
        var notifyMessage = JsonConvert.DeserializeObject<NotificationMessage>(message.Body);
        Global.Sqs.DeleteMessageFromQ(message.ReceiptHandle);
    }
    catch (Exception ex){
        var receiveMessageCount = int.Parse(message.Attributes["ApproximateReceiveCount"]);
        if (receiveMessageCount > 3)
            Global.Sqs.DeleteMessageFromQ(message.ReceiptHandle);
    }
}
It can be done in a few steps.
Create an SQS connection:
sqsconnrec = SQSConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
Create the queue object:
request_q = sqsconnrec.create_queue("queue_Name")
Load the queue messages:
messages = request_q.get_messages()
Now you get an array of message objects; to find the total number of messages, just do len(messages).
Should work like a charm.
I need to set up a jabber bot, using python, that will send messages based on the online/offline availability of several contacts.
I've been looking into pyxmpp and xmpppy, but couldn't find any way (at least nothing straightforward) to check the status of a given contact.
Any pointers on how to achieve this?
Ideally I would like something like e.g. bot.status_of("contact1@gmail.com") returning "online".
I don't think it is possible in the way you want, because the presence of contacts (which carries the information about their availability) is received asynchronously by the bot.
You will have to write a presence handler function and register it with the connection. This function will get called whenever a presence is received from a contact, and its parameters will tell you whether the contact is online or not. Depending on that, you can send the message to the contact.
Using xmpppy, you do it something like this:
def connect(jid, password, res, server, proxy, use_srv):
    conn = xmpp.Client(jid.getDomain())
    if not conn.connect(server=server, proxy=proxy, use_srv=use_srv):
        log('unable to connect to server.')
        return None
    if not conn.auth(jid.getNode(), password, res):
        log('unable to authorize with server.')
        return None
    conn.RegisterHandler('presence', callback_presence)
    return conn

conn = connect(...)

def callback_presence(sess, pres):
    if pres.getStatus() == "online":
        msg = xmpp.Message(pres.getFrom(), "Hi!")
        conn.send(msg)
PS: I have not tested the code but it should be something very similar to this.
What you want is done via a <presence type="probe"/>. This is done by the server on behalf of the client, and SHOULD NOT be done by clients themselves (as per the RFC for XMPP IM). Since this is a bot, you could implement the presence probe and receive the current presence of a given entity. Remember to send the probe to the bare JID (sans resource), because the server responds on behalf of clients for presence probes. This means your workflow will look like:
<presence/>                                          // I'm online!     BOT
<presence from="juliet@capulet.org/balcony"/>                           RESPONSE
<presence from="romeo@montague.net/hallway"/>        // and so on...    RESPONSE
<presence type="probe" to="benvolio@montague.net"/>                     BOT
<presence from="benvolio@montague.net/hallway">                         RESPONSE
    <status>Huzzah!</status>
    <priority>3</priority>
</presence>
Take a look at that portion of the RFC for more in-depth information on how your call flow should behave.
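A minimal sketch of sending such a probe with xmpppy, assuming 'conn' is an authenticated xmpp.Client as in the earlier answer (the JID is from the example flow above):
import xmpp

# Probe the bare JID (no resource); the server answers on the contact's behalf.
conn.send(xmpp.Presence(to='benvolio@montague.net', typ='probe'))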
What you need to do is:
Connect.
Declare a presence handler. That handler maintains a cache of every contact's presence (see details below)
Send initial presence to the server, which will provoke the reception of presence statuses from all of your online contacts, which will in turn trigger the handler.
The status_of() method reads the cache and deduces the contact's presence status immediately.
Now, it depends on what presence information you need. For the sake of simplicity, let's pretend you just need an "online"/"offline" value. The cache would be a dictionary whose keys are the bare (no resource) JIDs, and the values are a set of connected resources for this JID. For example:
{'foo#bar.com': set(['work', 'notebook']), 'bob#example.net': set(['gtalk'])}
Now, when you receive an "available" presence from a certain JID/resource:
if jid not in cache:
    cache[jid] = set()
cache[jid].add(resource)
Reciprocally, when you receive an "unavailable" presence:
if jid in cache:  # bad people send "unavailable" just to make your app crash
    cache[jid].discard(resource)
    if 0 == len(cache[jid]):
        del cache[jid]
And now:
def is_online(jid):
    return jid in cache
Of course, if you want more detailed information, you could maintain not only the list of available resources for a contact but also the status, status message, priority, etc. of each resource. A sketch tying these pieces together with xmpppy follows.
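A sketch putting the pieces together; the server, credentials and resource are placeholders:
import xmpp

cache = {}

def callback_presence(sess, pres):
    jid = pres.getFrom().getStripped()        # bare JID, e.g. 'foo@bar.com'
    resource = pres.getFrom().getResource()
    if pres.getType() == 'unavailable':
        if jid in cache:
            cache[jid].discard(resource)
            if not cache[jid]:
                del cache[jid]
    else:                                      # an available presence has no type
        cache.setdefault(jid, set()).add(resource)

def is_online(jid):
    return jid in cache

conn = xmpp.Client('example.com')              # placeholder server
conn.connect()
conn.auth('bot', 'secret', 'status-bot')       # placeholder credentials/resource
conn.RegisterHandler('presence', callback_presence)
conn.sendInitPresence()                        # triggers the initial burst of presences

while True:
    conn.Process(1)                            # keep pumping stanzas so the handler fires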