Vonage transfer of call via inline NCCO not working properly - Python

I am currently having an issue with Vonage's Voice API. It occurs specifically while transferring a call via an inline NCCO. I am using the Python code snippet from the documentation, which you can find here. When I paste the snippet into my script unchanged, the call just plays the first action and hangs up, and when the script tries to update that call, I receive the following error:
vonage.errors.ClientError: 400 response from api.nexmo.com
I've been searching for hours but can't find anyone with a similar problem, nor a working implementation of this feature.
My code looks as follows:
import vonage

client = vonage.Client(
    application_id="<ID>",
    private_key=r"private.key",
)
voice = vonage.Voice(client)

response = voice.create_call({
    'to': [{'type': 'phone', 'number': "<mynumber>"}],
    'from': {'type': 'phone', 'number': "<vonagenumber>"},
    "ncco": [
        {
            "action": "talk",
            "text": "This is just a text whilst you transfer to another NCCO"
        }
    ]
})

response = voice.update_call(
    response["uuid"], {
        "action": "transfer",
        "destination": {
            "type": "ncco",
            "ncco": [{"action": "talk", "text": "hello world"}]
        }
    }
)
print(response)
I don't know what to do with this error since it isn't described in Vonage's documentation, but my guess is that it occurs because the call is already over by the time the script tries to update it. Sadly, Vonage doesn't give any information on how to deal with this, and the documentation only has this code snippet, which is not working, or at the very least incomplete.

You have a couple of race conditions here. First, the initial NCCO, and therefore the call, could end before your transfer happens. If you are just testing, you can add a stream action to the first NCCO to keep that call alive:
[
    {
        "action": "talk",
        "text": "This is just a text whilst you transfer to another NCCO"
    },
    {
        "action": "stream",
        "streamUrl": [
            "https://onhold2go.co.uk/song-demos/free/a-new-life-preview.mp3"
        ],
        "loop": 0
    }
]
Secondly, if you call transfer immediately after the call is created, it is possible the call has not been set up yet. You can add a short sleep between the two requests to remedy this (see the sketch below). You wouldn't really run into these issues when working with normal calls. We will update the Python snippet to reflect this.
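Putting both suggestions together, a minimal sketch might look like the following. The hold-music URL is a placeholder, and in a real application you would trigger the transfer from the answered event on your event webhook rather than sleeping:

import time
import vonage

client = vonage.Client(application_id="<ID>", private_key=r"private.key")
voice = vonage.Voice(client)

response = voice.create_call({
    'to': [{'type': 'phone', 'number': "<mynumber>"}],
    'from': {'type': 'phone', 'number': "<vonagenumber>"},
    'ncco': [
        {"action": "talk", "text": "This is just a text whilst you transfer to another NCCO"},
        # keep the first leg alive until the transfer arrives
        {"action": "stream", "streamUrl": ["https://example.com/hold-music.mp3"], "loop": 0},
    ],
})

# crude fix for the second race: give the platform a moment to set the call up
time.sleep(5)

voice.update_call(response["uuid"], {
    "action": "transfer",
    "destination": {"type": "ncco", "ncco": [{"action": "talk", "text": "hello world"}]},
})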

Related

Amazon Lex v2 Welcome message

Hello everyone. I have almost finished a bot with Lex V2 that uses Lambda, and I have already integrated it with the Kommunicate client. Now I need the chatbot to start with an intent automatically; this is my first problem.
I would also like to know whether, in the Lambda responses, there is a way to send a response that elicits a slot and then, after about 5 seconds, send another that moves to a confirmation intent on Lex V2. I tried with time and asyncio, but the code does not seem to go on; I only get the first answer with the slot elicitation. I want the bot to move to the confirmation intent about 5 seconds later. This is my code:
if carFound is not None:
    # if resp['total'] < 30:
    #     print('Conferma intento')
    #     carDialog = {
    #         "type": "ConfirmIntent"
    #     }
    # else:
    #     print('Filtri args')
    #     carDialog = {
    #         "type": "ElicitSlot",
    #         "slotToElicit": "Args"
    #     }
    response = {
        "sessionState": {
            "dialogAction": {
                "type": "ElicitSlot",
                "slotToElicit": "Args"
            },
            "intent": {
                "name": intent,
                "slots": slots,
                "state": "Fulfilled",
            }
        },
        "messages": carFound,
    }
Basically, after every call to my API (which sends me carFound as a payload with all the cars), I need to check whether resp['total'] is less than 30, and if so, in addition to the answers, move to the confirmation intent after some time.
As I said, I already tried with Python's sleep() function, and I still only get the response with the ElicitSlot; maybe sleep and asyncio don't work well with Lex V2.
I verified through the Lambda test, with my inputs, that the < 30 condition is true, so I think the problem is in the response.
As for the welcome, I don't know how to do it; I want my bot to start with the Welcome intent, for example, without typing anything in my Kommunicate client, which is on a website.
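For reference, a response that hands the turn over to intent confirmation in the Lex V2 Lambda format would look roughly like the sketch below; it reuses the same intent, slots and carFound names as the handler above, and the exact state value is an assumption:

# hedged sketch: ConfirmIntent variant of the response above
response = {
    "sessionState": {
        "dialogAction": {
            "type": "ConfirmIntent"
        },
        "intent": {
            "name": intent,
            "slots": slots,
            "state": "InProgress"  # assumption: intent is still in progress while confirming
        }
    },
    "messages": carFound
}

Note that a Lex code-hook Lambda returns a single response per invocation, which is why sleep() and asyncio cannot produce a second, delayed message; the confirmation prompt has to be returned on a subsequent turn.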

Change the status of an issue in Jira using a REST call from Python

I am trying to update the status of an issue from the 'Request Info' state to 'Submitted' via the REST API in Python.
After digging a lot in the documentation, I ran a REST call to get the allowed transitions for the issue ID, and I can see that the status 'Submitted' is allowed:
"expand": "transitions",
"transitions": [{
"id": "381",
"name": "Resubmit",
"to": {
"self": "https://ies-valor-jira.ies.mentorg.com/rest/api/2/status/10000",
"description": "",
"iconUrl": "https://ies-valor-jira.ies.mentorg.com/",
"name": "Submitted",
"id": "10000",
"statusCategory": {
"self": "https://ies-valor-jira.ies.mentorg.com/rest/api/2/statuscategory/2",
"id": 2,
"key": "new",
"colorName": "blue-gray",
"name": "To Do"
So now I would like to change the status with the following code:
from jira import JIRA

JIRA_SERVER = "https://ies-valor-jira.ies.mentorg.com/"
jira_user_name = "myuser"
jira_password = "mypassword!"

jira_connection = JIRA(basic_auth=(jira_user_name, jira_password),
                       server=JIRA_SERVER)

issue_key = 'SF-6831'
jira_connection.add_comment(issue_key, body="Resubmit issues")
jira_connection.transition_issue(issue_key, "Resubmit")
But I get an error message that indicates: customfield_10509: "You must define "Resubmit Note: before you moving to "CCB Review" state"}
This error is expected because this field is mandatory and must be filled with a reason for the status change, so I need to know how to update this custom field in the REST call to allow the issue to change its status.
I tried to use the add_comment function, but I don't know where I should specify the field name.
Or is there a different way to do it?
Thanks.
I do most of this work in straight Python using the API; I have a lot of what you need in this repo: https://github.com/dren79/JiraScripting_public
I'll add a transition function in the helpers file now.
Let me know if it helps!
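A rough sketch of such a transition helper with the jira package: transition_issue also accepts a fields argument, which should let you set the mandatory note as part of the transition. Here customfield_10509 is taken from the error message above, and the exact value format depends on how the field is configured:

from jira import JIRA

jira_connection = JIRA(
    basic_auth=("myuser", "mypassword!"),
    server="https://ies-valor-jira.ies.mentorg.com/",
)

# supply the mandatory "Resubmit Note" custom field together with the transition
jira_connection.transition_issue(
    "SF-6831",
    "Resubmit",
    fields={"customfield_10509": "Resubmitting after adding the requested info"},
)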

AWS Lambda Python script. How do I return an object and keep the script running in the background?

I have an AWS Lambda script that is triggered by a POST request. The script takes about 10 minutes to run. When the POST request/trigger is received, I want to return a GUID/identifier before, or at the same time as, running the script. That way, a user can use that GUID to check the progress of the script via a DB config I have not yet made.
My problem is that as soon as my lambda_handler returns an object, the script stops running. I want to return the GUID and keep the script running.
You can't do that with only a Lambda function.
If you want to achieve your goal, you must use Lambda with Step Functions to build a complete workflow:
https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
You have to break your Lambda into 2 parts. The first one returns the guid/identifier. The other continues processing the logic after the first Lambda returns its result.
Step Functions configuration example:
{
    "Comment": "LambdaFunction1 is the first Lambda, which returns the guid/identifier",
    "StartAt": "LambdaFunction1",
    "States": {
        "LambdaFunction1": {
            "Type": "Task",
            "Resource": "arn:aws:lambda1:here",
            "Next": "LambdaFunction2"
        },
        "LambdaFunction2": {
            "Type": "Task",
            "Resource": "arn:aws:lambda2:here",
            "End": true
        }
    }
}
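A minimal sketch of the two handlers under that configuration; the payload shape and the progress-tracking step are illustrative, not a fixed contract:

import uuid

def lambda_function_1(event, context):
    # returns immediately; Step Functions passes this output to LambdaFunction2
    job_id = str(uuid.uuid4())
    return {"guid": job_id, "request": event}

def lambda_function_2(event, context):
    # the long-running work (up to the 15-minute Lambda limit) happens here,
    # recording progress under the guid so the user can poll it via the DB
    job_id = event["guid"]
    # ... ~10 minutes of processing, writing progress keyed by job_id ...
    return {"guid": job_id, "status": "done"}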

Azure Function - Python - ServiceBus Output Binding - Setting Custom Properties

I have an Azure Function written in Python that has a Service Bus (Topic) output binding. The function is triggered by another queue; we process some files from blob storage and then put another message in a queue.
My function.json file looks like this:
{
    "bindings": [
        {
            "type": "serviceBus",
            "connection": "Omnibus_Input_Send_Servicebus",
            "name": "outputMessage",
            "queueName": "validation-output-queue",
            "accessRights": "send",
            "direction": "out"
        }
    ],
    "disabled": false
}
In my function, I can send a message to another queue like this:
import os

with open(os.environ['outputMessage'], 'w') as output_message:
    output_message.write('This is my output test message !')
It is working fine. Now I'd like to send a message to a topic. I've created a subscription with a SQL filter, and I need to set some custom properties on the BrokeredMessage.
From the Azure SDK for Python, I've found that I can add custom properties like this (I've installed the azure module using pip):
from azure.servicebus import Message

sent_msg = Message(b'This is the third message',
                   broker_properties={'Label': 'M3'},
                   custom_properties={'Priority': 'Medium',
                                      'Customer': 'ABC'}
                   )
My new function.json file looks like this:
{
    "bindings": [
        {
            "type": "serviceBus",
            "connection": "Omnibus_Input_Send_Servicebus",
            "name": "outputMessage",
            "topicName": "validation-output-topic",
            "accessRights": "send",
            "direction": "out"
        }
    ],
    "disabled": false
}
And I've modified my function like this:
from azure.servicebus import Message

sent_msg = Message(b'This is the third message',
                   broker_properties={'Label': 'M3'},
                   custom_properties={'Priority': 'Medium',
                                      'Customer': 'ABC'}
                   )

with open(os.environ['outputMessage'], 'w') as output_message:
    output_message.write(sent_msg)
When I run the function, I get this exception:
TypeError: expected a string or other character buffer object
I tried to use the buffer and memoryview functions, but I still get another exception:
TypeError: cannot make memory view because object does not have the buffer interface
I am wondering whether the binding actually supports BrokeredMessage, and how to deal with it?
The ServiceBus output binding for Python (and other script languages) only supports a simple string mapping, where the string you specify becomes the content of the BrokeredMessage created behind the scenes. To set any extended properties or do anything more sophisticated, you'll have to drop down to using the Azure Python SDK yourself in your function.
In the same situation, where I needed to add user properties to the output Service Bus queue/topic, I used azure.servicebus.ServiceBusClient directly.
The sb.Message class has a user_properties setter:
import os
import azure.functions as func
import azure.servicebus as sb

def main(httpreq: func.HttpRequest, context: func.Context):
    sbClient: sb.ServiceBusClient = sb.ServiceBusClient.from_connection_string(
        os.getenv("AzureWebJobsServiceBus"))
    topicClient: sb.TopicClient = sbClient.get_topic('scoring-testtopic')

    message = sb.Message(httpreq.get_body().decode('UTF-8'))
    message.user_properties = {
        '#AzureWebJobsParentId': context.invocation_id,
        'Prom': '31000001'
    }
    topicClient.send(message)

ATDD Google App Engine with Python - Trouble with Keys

My approach to this question might be completely wrong, so please don't hesitate to correct it. I also added ATDD to the question title because I am trying to test the output of my web API, which is bigger than just a unit test.
I am using:
Google App Engine 1.7.1
GAE High Replication Datastore
Python 2.7.3
I set up my tests using the boilerplate code:
self.testbed = testbed.Testbed()
self.testbed.activate()
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=0)
self.testbed.init_datastore_v3_stub()
Then I call a mock object setup to create my entities and persist them to the testbed datastore. I use a counter to add 5 artists with artist_id 1 - 5:
def add_mock_artist_with(self, artist_id, link_id, name):
    new_artist = dto.Artist(key_name=str(artist_id),
                            name=name,
                            link_id=str(link_id))
    new_artist.put()
My test asserts that my web API returns:
{
    "artists": [
        {
            "artist": {
                "name": "Artist 1",
                "links": {
                    "self": {
                        "href": "https://my_server.com/artist/1/"
                    }
                }
            }
        },
        ...
        {
            "artist": {
                "name": "Artist 5",
                "links": {
                    "self": {
                        "href": "https://my_server.com/artist/5/"
                    }
                }
            }
        }
    ],
    "links": {
        "self": {
            "href": "https://my_server.com/artists/"
        }
    }
}
Initially I thought that if I started a new testbed every time I ran the tests, I could count on my artists being entered sequentially into the datastore and therefore getting ids 1 - 5. My tests passed initially, but over time they started to fail because the ids did not match (I would get a link like: "href": "https://my_server.com/artist/78/"). I felt a little guilty relying on ids being generated sequentially, so I resolved to fix it. I stumbled upon the concept of a key being either a name or a generated id. I updated my templates for the returned JSON to use:
artist.key().id_or_name()
In the case of a mock object, I supplied the key name on construction with:
key_name=str(artist_id)
For non-mock construction, I did not include that line of code and let GAE assign the id.
Since my template used key().id_or_name() to output the property, all went well and the tests passed.
However, now when I test the return of an individual artist which would be available by following http://my_server.com/artist/5/, my test fails. To grab the artist out of the datastore, I use the following code:
def retrieve_artist_by(id):
    artist = dto.Artist.get_by_id(id)
In production, this will work fine because it will all be id based. However, in my test, it is not finding anything because I have used the key_name=str(artist_id) in my mock construction, and the name is not the same as the id.
I was hoping for something similar to:
artist = dto.Artist.get_by_id_or_name()
Any ideas?
Perhaps not the answer you are looking for, but it's possible to find out whether you are running on the production or development servers and execute a different code path.
In Python, how can I test if I'm in Google App Engine SDK?
http://code.google.com/appengine/docs/python/runtime.html#The%5FEnvironment
os.environ['SERVER_SOFTWARE'].startswith('Development')
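Wrapped in a small helper (a sketch based on the SERVER_SOFTWARE convention linked above):

import os

def is_dev_server():
    # the local SDK reports something like "Development/1.0";
    # production reports "Google App Engine/x.y.z"
    return os.environ.get('SERVER_SOFTWARE', '').startswith('Development')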
Here is what I am using right now. I don't like it, but it should be performant in prod as I will be using generated ids. In test, if it doesn't find it by id, it will attempt to look up by name.
def retrieve_artist_by(id):
    artist = dto.Artist.get_by_id(id)
    if artist is None:
        artist = dto.Artist.get_by_key_name(str(id))
    return artist
