Amazon Lex v2 Welcome message - python

Hello everyone, I have almost finished a bot with Lex v2 that uses Lambda, and I have already integrated it with the Kommunicate client. Now I need the chatbot to start with an intent automatically; this is my first problem.
I would also like to know whether, in the Lambda responses, there is a way to send a response that elicits a slot and then, after about 5 seconds, send another one that moves to intent confirmation in Lex v2. I tried with time and asyncio, but the code does not seem to continue: I only ever get the first response, the one with the ElicitSlot. What I want is that, roughly 5 seconds later, the bot moves on to confirming the intent. This is my code:
if carFound is not None:
    # if resp['total'] < 30:
    #     print('Confirm intent')
    #     carDialog = {
    #         "type": "ConfirmIntent"
    #     }
    # else:
    #     print('Filter args')
    #     carDialog = {
    #         "type": "ElicitSlot",
    #         "slotToElicit": "Args"
    #     }
    response = {
        "sessionState": {
            "dialogAction": {
                "type": "ElicitSlot",
                "slotToElicit": "Args"
            },
            "intent": {
                "name": intent,
                "slots": slots,
                "state": "Fulfilled",
            }
        },
        "messages": carFound,
    }
Basically, after every call to my API, which returns carFound as a payload with all the cars, I need to check whether resp['total'] is less than 30 and, if so, send the messages and also move to intent confirmation after some time.
As I said, I already tried Python's sleep() function, and I still only get the response containing the ElicitSlot; maybe sleep and asyncio don't play well with Lex v2...
I verified through the Lambda test console, with my inputs, that the < 30 condition is true, so I think the problem is in the response.
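For what it's worth, a Lex v2 code hook returns a single response per invocation, so the Lambda cannot sleep and push a second message on its own; the usual pattern is to decide the dialog action up front. A minimal sketch of that branch, assuming resp, intent, slots and carFound come from the surrounding handler as in the snippet above:
if carFound is not None:
    # Decide the dialog action in a single response: Lex only accepts
    # one response object per Lambda invocation.
    if resp['total'] < 30:
        dialog_action = {"type": "ConfirmIntent"}
    else:
        dialog_action = {"type": "ElicitSlot", "slotToElicit": "Args"}

    response = {
        "sessionState": {
            "dialogAction": dialog_action,
            "intent": {
                "name": intent,
                "slots": slots,
                # Keep the intent "InProgress" while still eliciting or confirming;
                # "Fulfilled" is normally set only once the dialog is done.
                "state": "InProgress",
            },
        },
        "messages": carFound,
    }
    return response
If a genuinely delayed follow-up message is required, it would have to be triggered from outside the code hook (for example by the client), since the Lambda response is returned only once.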
For the welcome message, I don't know how to proceed: I want the bot to start with, for example, the Welcome intent without the user having to type anything in my Kommunicate client, which is embedded in a website.

Related

Python - Azure function service bus trigger batch processing

I am using an Azure Functions Service Bus trigger in Python to receive messages in batches from a Service Bus queue. Even though this process is not well documented for Python, I managed to enable batch processing by following the GitHub PR below.
https://github.com/Azure/azure-functions-python-library/pull/73
Here is the sample code I am using -
function.json
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "msg",
            "type": "serviceBusTrigger",
            "direction": "in",
            "cardinality": "many",
            "queueName": "<some queue name>",
            "dataType": "binary",
            "connection": "SERVICE_BUS_CONNECTION"
        }
    ]
}
__init__.py
import logging
import azure.functions as func
from typing import List

def main(msg: List[func.ServiceBusMessage]):
    message_length = len(msg)
    if message_length > 1:
        logging.warn('Handling multiple requests')
    for m in msg:
        # some call to external web api
        pass
host.json
{
    "version": "2.0",
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[3.3.0, 4.0.0)"
    },
    "extensions": {
        "serviceBus": {
            "prefetchCount": 100,
            "messageHandlerOptions": {
                "autoComplete": true,
                "maxConcurrentCalls": 32,
                "maxAutoRenewDuration": "00:05:00"
            },
            "batchOptions": {
                "maxMessageCount": 100,
                "operationTimeout": "00:01:00",
                "autoComplete": true
            }
        }
    }
}
After using this code, I can see that the Service Bus trigger picks up messages in batches of 100 (or sometimes fewer than 100) based on maxMessageCount, but I have also observed that most of the messages end up in the dead-letter queue with the MaxDeliveryCountExceeded reason code. I have tried different values of MaxDeliveryCount from 10 to 20, but with the same result. So my questions are: do we need to adjust/optimize MaxDeliveryCount when batch-processing Service Bus messages? How are the two related? And what configuration change could avoid this dead-letter issue?
From what we discussed in the comments, this is what you encounter:
Your function app is fetching 100 messages from ServiceBus (prefetchCount) and locking them for a maximum of maxAutoRenewDuration
Your function code is processing messages one at a time at a slow rate because of the API you call.
By the time you finish a batch of messages (maxMessageCount), the lock has already expired, which is why you get exceptions and the messages are redelivered. This eventually causes MaxDeliveryCountExceeded errors.
What can you do to improve this?
Reduce maxMessageCount and prefetchCount
Increase maxAutoRenewDuration
Increase the performance of your API (how to do that would be a different question)
Your current code would be much better off by using a "normal" single message trigger instead of the batch trigger
PS: Beware that your function app may scale horizontally if you are running in a consumption plan, further increasing the load on your struggling API.
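To illustrate the last suggestion, a rough sketch of the single-message version of the trigger, assuming the same queue and connection setting names as in the question (cardinality "one" is the default for a Service Bus trigger):
function.json
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "msg",
            "type": "serviceBusTrigger",
            "direction": "in",
            "cardinality": "one",
            "queueName": "<some queue name>",
            "connection": "SERVICE_BUS_CONNECTION"
        }
    ]
}
__init__.py
import logging
import azure.functions as func

def main(msg: func.ServiceBusMessage):
    # Each invocation now handles exactly one message, so the lock only has to
    # survive a single API call instead of a whole batch of up to 100.
    logging.info('Processing message id %s', msg.message_id)
    # some call to external web api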

Vonage transfer of call via inline ncco not working properly python

I am currently having an issue with Vonage's Voice API. The issue specifically occurs when handling the transfer of a call via an inline NCCO. I am using the Python code snippet from the documentation, which you can find here. When I paste the snippet into my script with no changes, the call just plays the first action and hangs up, and when I try to update that call, I receive the following error:
vonage.errors.ClientError: 400 response from api.nexmo.com
I've been searching for hours but can't find anyone with a similar problem, nor anyone with a working implementation of this feature.
My code looks as follows:
import vonage

client = vonage.Client(
    application_id="<ID>",
    private_key=r"private.key",
)
voice = vonage.Voice(client)

response = voice.create_call({
    'to': [{'type': 'phone', 'number': "<mynumber>"}],
    'from': {'type': 'phone', 'number': "<vonagenumber>"},
    "ncco": [
        {
            "action": "talk",
            "text": "This is just a text whilst you tranfer to another NCCO"
        }
    ]
})

response = voice.update_call(
    response["uuid"], {
        "action": "transfer",
        "destination": {
            "type": "ncco",
            "ncco": [{"action": "talk", "text": "hello world"}]
        }
    }
)
print(response)
I don't know what to do with this error since it isn't documented by Vonage, but my guess is that it occurs because the call is already over by the time the script tries to update it. Sadly, Vonage doesn't give any information on how to deal with this, and the documentation only has this code snippet, which is not working or is at the very least incomplete.
You have a few race condition issues. First, the initial NCCO, and therefore the call, could end before your transfer happens. If you are just testing, you can add a stream action to the first NCCO to keep the call alive:
[
    {
        "action": "talk",
        "text": "This is just a text whilst you tranfer to another NCCO"
    },
    {
        "action": "stream",
        "streamUrl": [
            "onhold2go.co.uk/song-demos/free/a-new-life-preview.mp3"
        ],
        "loop": "0"
    }
]
Secondly, if you call transfer immediately after the call is made, it is possible the call has not been set up yet. You can add a sleep between the two calls to remedy this. You wouldn't really run into these issues when working with normal calls. We will update the python snippet to reflect this.
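A rough sketch of both suggestions combined, based on the code from the question (the 5-second pause and the hold-music URL are arbitrary placeholders, not values Vonage prescribes):
import time
import vonage

client = vonage.Client(
    application_id="<ID>",
    private_key=r"private.key",
)
voice = vonage.Voice(client)

# First NCCO: talk, then stream hold music so the call stays alive
# long enough for the transfer to happen.
response = voice.create_call({
    'to': [{'type': 'phone', 'number': "<mynumber>"}],
    'from': {'type': 'phone', 'number': "<vonagenumber>"},
    'ncco': [
        {"action": "talk", "text": "This is just a text whilst you transfer to another NCCO"},
        {"action": "stream", "streamUrl": ["<url-to-some-hold-music>.mp3"], "loop": "0"}
    ]
})

# Give the call a few seconds to be set up before transferring it.
time.sleep(5)

result = voice.update_call(
    response["uuid"], {
        "action": "transfer",
        "destination": {
            "type": "ncco",
            "ncco": [{"action": "talk", "text": "hello world"}]
        }
    }
)
print(result)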

Responding to on_adaptive_card_invoke

I'm struggling to respond to an action sent by an Adaptive Card with a Teams bot. The action is being sent like this:
"actions": [
{
"type": "Action.Execute",
"title": "Approve",
"verb": "APPROVE",
"data": {
"USER_ID": 13
}
},
]
This is being handled by the on_adaptive_card_invoke method in our bot:
async def on_adaptive_card_invoke(
    self, turn_context: TurnContext, invoke_value: AdaptiveCardInvokeValue
) -> AdaptiveCardInvokeResponse:
    return AdaptiveCardInvokeResponse(status_code=200)
However, Teams always shows 'Something went wrong. Please try again'.
How should the bot respond: with another post, or with an actual returned response? I've tried both with no luck, and there are no samples for this method in Python.
TIA
I'm muddling through this with Python as well.
According to Microsoft's Universal Action Model, the response requires a specific format.
If you're not looking to send or replace the card after it has been invoked, you can return an empty message.
return AdaptiveCardInvokeResponse(
    status_code=HTTPStatus.OK,
    type="application/vnd.microsoft.activity.message")
In my opinion, to prevent the card from being invoked multiple times, it's best to replace the card once it has been invoked.
return AdaptiveCardInvokeResponse(
    status_code=HTTPStatus.OK,
    type="application/vnd.microsoft.card.adaptive",
    value=card
)
In my case, the card was a JSON-formatted Adaptive Card that I deserialized.
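Putting the pieces together, here is a self-contained sketch of a handler that replaces the card. The import paths assume a recent botbuilder release, and build_updated_card is a hypothetical helper that returns the replacement card as deserialized JSON:
from http import HTTPStatus

from botbuilder.core import ActivityHandler, TurnContext
from botbuilder.schema import AdaptiveCardInvokeResponse, AdaptiveCardInvokeValue


class ApprovalBot(ActivityHandler):
    async def on_adaptive_card_invoke(
        self, turn_context: TurnContext, invoke_value: AdaptiveCardInvokeValue
    ) -> AdaptiveCardInvokeResponse:
        # The verb and data from the Action.Execute button arrive on invoke_value.action.
        user_id = None
        if invoke_value.action.verb == "APPROVE":
            user_id = invoke_value.action.data.get("USER_ID")
            # ... perform the approval for user_id here ...

        # Replace the invoked card so it cannot be submitted twice.
        card = build_updated_card()  # hypothetical: returns the new card as a dict
        return AdaptiveCardInvokeResponse(
            status_code=HTTPStatus.OK,
            type="application/vnd.microsoft.card.adaptive",
            value=card,
        )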

Slack Bot | Get datepicker value

I'm trying to create my first Slack bot with Python.
I need your help to understand how I can get the value of the datepicker.
This is my code :
import os
from slack import WebClient
from slack.errors import SlackApiError
import time

client = WebClient(token=os.environ['SLACK_KEY'])

message = "Hey ! Pourrais-tu saisir la date de tes congés ce mois-ci ?"
attachments = [{
    "blocks": [
        {
            "type": "actions",
            "elements": [
                {
                    "type": "datepicker",
                    "initial_date": "1990-04-28",
                    "placeholder": {
                        "type": "plain_text",
                        "text": "Select a date",
                    }
                },
                {
                    "type": "datepicker",
                    "initial_date": "1990-04-28",
                    "placeholder": {
                        "type": "plain_text",
                        "text": "Select a date",
                    }
                }
            ]
        }
    ]
}]


def list_users():
    users_call = client.api_call("users.list")
    if users_call.get('ok'):
        return users_call['members']
    return None


def send_message(userid):
    response = client.chat_postMessage(channel=userid, text=message, username='groupadamin', attachments=attachments)


if __name__ == '__main__':
    users = list_users()
    if users:
        for u in users:
            send_message(u['id'])
    print("Success!")
My bot sends a private message to all users of the Slack workspace. I want to get each user's answer from the datepicker.
If you want more details, ask me.
In general, you need to be listening for the action.
When the user clicks the datepicker, a so-called interaction payload is sent to your Python Slack bot.
That payload is documented here: https://api.slack.com/reference/interaction-payloads/block-actions; there you can see that you can get the picked date's value by accessing actions.value.
In particular, for your use case: given your code sample, it seems you've only built a script that can send messages, which will not allow you to listen for actions. What you need instead is a Python service (an API) that can both send and receive messages.
For this, I suggest having a look at Bolt, a Slack-maintained library that takes care of a lot of the heavy lifting for you. Specifically, regarding actions, see https://slack.dev/bolt-python/concepts#action-listening
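A minimal Bolt sketch of such an action listener. It assumes you add an action_id to the datepicker element (here "leave_start_date", a name made up for this example) so the listener can be matched, and that the usual SLACK_BOT_TOKEN / SLACK_SIGNING_SECRET environment variables are set:
import os

from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)


@app.action("leave_start_date")
def handle_datepicker(ack, body, logger):
    # Acknowledge the interaction first so Slack doesn't show an error to the user.
    ack()
    # For a datepicker, the chosen value arrives as selected_date in the actions array.
    selected_date = body["actions"][0]["selected_date"]
    user_id = body["user"]["id"]
    logger.info("User %s picked %s", user_id, selected_date)


if __name__ == "__main__":
    app.start(port=3000)
Slack then needs your app's Request URL (or Socket Mode) pointed at this service so the interaction payloads actually reach it.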

Minimum example for replying to a message via BotFramework's REST API?

Within my Python web app for the Microsoft Bot Framework, I want to reply to a message with a REST API call to /bot/v1.0/messages.
When experimenting with the emulator on my local machine, I realized that the minimum payload for the REST call is something like:
{
    "text": "Hello, Hello!",
    "from": {
        "address": "MyBot"
    },
    "channelConversationId": "ConvId"
}
where "ConvId" is the id given by my local emulator in the original message (Note, that I have to send channelConversationId not conversationId).
Obviously, this is not enough for the live bot connector site. But what is a (minimum) example for replying to a message with the REST API call /bot/v1.0/messages?
I've tested different payload data, for example with the attributes from, to, channelConversationId, text and language as indicated in the docs. But usually I get a 500 error:
{
    "error": {
        "message": "Expression evaluation failed. Object reference not set to an instance of an object.",
        "code": "ServiceError"
    }
}
When I tried to send back the original message, just with to and from swapped, I got a different 500 error:
{
    "error": {
        "code": "ServiceError",
        "message": "*Sorry, Web Chat is having a problem responding right now.*",
        "statusCode": 500
    }
}
The minimum payload for an inline reply (returned as the response) is:
{ "text": "Hello, Hello!" }
If you're posting a reply out-of-band using a POST to /bot/v1.0/messages, then you're correct that you need a little more. Here's what I do in the Node version of the Bot Builder SDK:
// Post an additional reply
reply.from = ses.message.to;
reply.to = ses.message.replyTo ? ses.message.replyTo : ses.message.from;
reply.replyToMessageId = ses.message.id;
reply.conversationId = ses.message.conversationId;
reply.channelConversationId = ses.message.channelConversationId;
reply.channelMessageId = ses.message.channelMessageId;
reply.participants = ses.message.participants;
reply.totalParticipants = ses.message.totalParticipants;
this.emit('reply', reply);
post(this.options, '/bot/v1.0/messages', reply, (err) => {
    if (err) {
        this.emit('error', err);
    }
});
Sending a reply to an existing conversation is a little complicated because you have to include all of the routing bits needed to get it back to the source conversation. Starting a new conversation is significantly easier:
// Start a new conversation
reply.from = ses.message.from;
reply.to = ses.message.to;
this.emit('send', reply);
post(this.options, '/bot/v1.0/messages', reply, (err) => {
    if (err) {
        this.emit('error', err);
    }
});
Both examples assume that reply.text and reply.language have already been set.
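Translated into Python, a rough sketch of the out-of-band case using the requests library. It mirrors the fields set in the Node snippet above; base_url, the auth header and the original dict (the incoming message as a dict) are assumptions about your setup, not part of the original answer:
import requests

def post_reply(base_url, auth_header, original, text, language="en"):
    # Copy the routing fields from the original message, as in the Node example.
    reply = {
        "text": text,
        "language": language,
        "from": original["to"],
        "to": original.get("replyTo") or original["from"],
        "replyToMessageId": original.get("id"),
        "conversationId": original.get("conversationId"),
        "channelConversationId": original.get("channelConversationId"),
        "channelMessageId": original.get("channelMessageId"),
        "participants": original.get("participants"),
        "totalParticipants": original.get("totalParticipants"),
    }
    resp = requests.post(
        base_url + "/bot/v1.0/messages",
        json=reply,
        # Supply whatever credentials your connector registration requires here.
        headers={"Authorization": auth_header},
    )
    resp.raise_for_status()
    return resp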
Meanwhile, the answer has been posted to a GitHub issue; quoting wiltodelta:
Experimentally found the necessary parameters for Slack, Skype, Telegram: ... ChannelConversationId is only necessary for Slack; otherwise the message will be added to #userAddress.
Message message = new Message
{
    ChannelConversationId = channelConversationId,
    From = new ChannelAccount
    {
        ChannelId = channelId,
        Address = botAddress,
        IsBot = true
    },
    To = new ChannelAccount
    {
        ChannelId = channelId,
        Address = userAddress
    },
    Text = text
};
In particular, the attributes replyToMessageId and channelConversationId (mentioned earlier) are not necessary: they refer to the last seen message in the conversation and thus will change during a conversation.
