I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the "The remote endpoint could not be called, or the response it returned was invalid." errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the Lambda request, head to the Lambda console for ARN:A, set the test event by pasting in the request from the service simulator, run the test, and get a perfectly fine ASK response. Then I head to the Lambda console for ARN:B and make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator, and I get the anticipated response (therefore, the response is well formatted), albeit static. I head to the Lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. The problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system; I'm not sure whether I can share an IAM role between different Lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in CloudWatch Logs related to this particular Lambda or to the skill.
Not sure if relevant: I'm using Python for my Lambda runtime, the code is (for now) inline in the web editor, and I'm using boto3 for persisting to DynamoDB.
tl;dr: "The remote endpoint could not be called, or the response it returned was invalid." can also mean there was a timeout waiting for the endpoint.
I was able to narrow it down to a timeout.
Seems like the Alexa service simulator (and Alexa itself) is less tolerant of long responses than the Lambda testing console. During development I had increased the timeout of ARN:A to 30 seconds (whereas I believe the default is 3 seconds). The DynamoDB table used by ARN:A has more data and takes slightly longer to process than ARN:C, which has an almost empty table. As soon as I commented out some of the data loading, the function ran slightly faster and the Alexa service simulator was working again. I can't find the time budget documented anywhere; I'm guessing 3 seconds? I most likely need to move to another backend; DynamoDB + Python on Lambda is too slow for very trivial requests.
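For anyone hitting the same wall, this is roughly how I timed the DynamoDB load inside the handler to confirm where the budget was going (a minimal sketch; the table name and key are placeholders for my own schema):

import time
import logging
import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Placeholder table name; substitute your own.
table = boto3.resource('dynamodb').Table('user_data')

def load_user_state(user_id):
    start = time.time()
    item = table.get_item(Key={'user_id': user_id}).get('Item')
    # The elapsed time ends up in CloudWatch Logs, so you can see how much
    # of the skill's response budget the DynamoDB call is eating.
    logger.info('DynamoDB get_item took %.3f s', time.time() - start)
    return item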
Unrelated to python, but I have found the same issue occurs for me if I do not have a handler for a specified intent:
// let's pretend intentName is actually 'FooBarIntent'
if (intentName === 'TestIntent') {
    handleTestRequest(intent, session, callback);
} else {
    throw "Invalid intent";
}
From here, Amazon was barking that my Lambda was invalid. For others, it could indicate an error being thrown earlier in the stack.
Your Lambda logs also end up in AWS CloudWatch, which will reveal any warnings or errors.
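For a Python runtime, a minimal sketch of making sure errors actually reach CloudWatch could look like this (handle_request stands in for whatever your own dispatch code is):

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Anything logged here shows up in the function's CloudWatch log group.
    logger.info('Received event: %s', event)
    try:
        return handle_request(event)  # handle_request is your own code
    except Exception:
        # logger.exception records the full traceback before re-raising.
        logger.exception('Unhandled error while building the response')
        raise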
Check out my repo, alexa lambda starter kit, for a simple hello world ASK/Lambda example.
I think the problem you're having with ARN:A is that you probably didn't set a trigger to Alexa Skills Kit in your Lambda function.
Or it could be the Alexa session timeout, which is set to 8 seconds by default.
My guess would be that you missed a step during setup. There's one where you have to set the "event source". If you don't do that, I think you get that message.
But the debug options are limited. I wrote EchoSim (the original one on GitHub) before the service simulator was written and, although it is a bit out of date, it does a better job of giving diagnostics.
Lacking debug options, the best approach is to do what you've done: partition and re-test. Do static replies until you can work out where the problem is.
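As a concrete example of the "static reply" step, a Python handler stripped down to nothing but a hard-coded, well-formed ASK response might look like the sketch below; if this version works from the service simulator, the plumbing is fine and the problem is in your real handler code:

def lambda_handler(event, context):
    # Hard-coded, minimal ASK response with no external calls at all.
    return {
        'version': '1.0',
        'response': {
            'outputSpeech': {
                'type': 'PlainText',
                'text': 'Static test reply.'
            },
            'shouldEndSession': True
        }
    }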
I have a chatbot on Dialogflow connected to a Python webhook hosted on Heroku (code on GitHub). How can I prevent Heroku from sleeping, using Python? Could you share the code with me?
If you are using the free plan you cannot prevent the dyno from sleeping (it sleeps after 30 minutes of inactivity).
There is a workaround if your chatbot runs on a web site: in this case you could send a dummy request (just to start the dyno) to your webhook hosted on Heroku when the user accesses the page, for example:
<body onload="wakeUpCall();">
  <script>
    // Fire-and-forget request whose only purpose is to wake up the dyno.
    function wakeUpCall() {
      var xhr2 = new XMLHttpRequest();
      xhr2.open("GET", "https://mywebhook.herokuapp.com/", true);
      xhr2.send(null);
    }
  </script>
</body>
It is obviously not a perfect approach (it works only if you control the client and it relies on the dyno starting before the chatbot sends data to the webhook), but it is an option if you want to keep working with the free plan.
First some things to keep in mind before you try to use the free dyno for something it wasn't intended to be used for:
Heroku provides 1000 free hours a month. This is only enough to run a single Heroku dyno at the free tier level. If you need to avoid the startup delay for two apps, then you'll need to pay for at least one of them.
Heroku still only allows a single free dyno to run on your app. This means you might lose traffic when you are pushing new code (since the free web dyno has to shut down so a new one can be built).
There are undoubtedly other issues as well, but those are the main ones I can think of offhand.
Now the solution:
Really, you just need something to ping your site at least once every 30 minutes. You could write a script for this yourself, but there is an extremely useful kind of tool that already does something like this and provides more benefit to you.
That would be an availability (or uptime) monitoring tool. This is a tool that ensures your site is "still up and running" by pinging a URL every X minutes and checking that the response is a valid, expected one (e.g. a 200 status code and/or certain text on the page). These often also provide the benefit of contacting you if an unexpected response (almost certainly an error) persists for too long.
Here is an example of an availability monitor:
https://uptimerobot.com/
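If you do want to roll your own instead, a minimal sketch of the ping script could look like this (the URL is the placeholder webhook address from the earlier answer, and the third-party requests library is assumed to be installed); run it from any always-on machine, or adapt it into a cron job:

import time
import requests

WEBHOOK_URL = 'https://mywebhook.herokuapp.com/'  # placeholder URL

while True:
    try:
        # Any request is enough to keep the free dyno awake.
        requests.get(WEBHOOK_URL, timeout=10)
    except requests.RequestException as exc:
        print('Ping failed: %s' % exc)
    time.sleep(25 * 60)  # stay under the 30-minute sleep threshold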
I built and deployed an app on GAE. Yesterday all seemed to be working fine, sending requests every few seconds to the app would be successful with a response time of about 2.5 seconds. Today GAE keeps deploying a new instance for every request, or fails to create even one, resulting in unacceptably high response times (and much higher charges) or even 500 server errors.
I tried to suspend and restart the app a few times, works again for a couple of requests, then reverts to the same behavior. On the console I can see that a new instance is immediately shut down after serving a request, or in case of server error, that GAE was unable to deploy a new instance.
I checked the quotas on the console, nothing seems to hint that I cannot send multiple requests from the same IP.
Has anyone experienced such issues, and if yes, what could be the cause(s) and remedies? Please note, I am very new to GAE so have no further clue right now on where to start.
EDIT: Just realized the average memory used by an instance (F2 in my case, which gives you 256MB) is very close to the max (250MB). Could it be the issue? I will upgrade to F4 (512MB) and see what happens.
As per the documentation - a new instance may be created based on request rate, response latencies, and other application metrics.
Therefore, it’s expected behaviour for the GAE Standard instances to scale up and down depending on the traffic they receive.
Also, if the maximum memory usage for the instance class is reached, a shutdown process will be triggered as explained here.
As for the failures to create a new instance, it's hard to tell what may be causing it without the Stackdriver Logging information. Off the top of my head, you may receive HTTP 500 errors due to having reached the response limit, but it could indeed happen for any other reason as well.
Finally, taking into account the nature of the issues, I think it's a good idea to test the GAE app's behaviour using a better instance class and compare the results. If you no longer experience this with an F4 instance class, it's safe to assume that the previous instance class was simply not enough to satisfy the app's requirements.
I am using Python (2.7) and Selenium (3.4.3) to drive Firefox (52.2.0 ESR) via geckodriver (0.19.0) to automate a process on a CentOS 7 machine.
I need totally unattended operation of this automation with user credentials passed through; no storage allowed and no breaking in.
One piece of drama is being caused by the fact that the internal website required for the process is within an Active Directory domain while the machine running my automation is not. I have no need to validate the user, only pass the credentials to the website in such a way as to not require human interaction or for the person to be a local user on the machine.
I have tried various permutations of:
[protocol]://[user:pass]@[url]
driver.switch_to_alert() + send_keys
It seems some of those only work on IE, something I have no access to.
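Concretely, the kind of thing I tried looks roughly like this (placeholder URL and credentials; neither variant worked reliably for me outside IE):

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()

# Variant 1: credentials embedded in the URL
driver.get('https://user:password@intranet.example.com/')

# Variant 2: typing into the authentication popup
alert = driver.switch_to.alert
alert.send_keys('user' + Keys.TAB + 'password')
alert.accept()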
I have checked for libraries to handle this and all to no avail.
I can add libraries to Python and I have sudo access to the machine, but I can't touch authentication, so AD integration is not possible.
How can I give this AD website the credentials of an arbitrary user such that no local storage of their credentials happens and no user interaction is required?
Thank you
EDIT
I think something like a proxy which could authenticate the user then retain that authentication for selenium to do its thing ...
Is there a simple LDAP/AD proxy available?
EDIT 2
Perhaps a very simple way of stating this is that I want to pass user credentials and prevent the authentication popup from happening.
Solution Found:
I needed to use a browser extension.
My solution has been built for Chromium but it should port almost unchanged to Firefox and maybe Edge.
First up, you need 2 APIs to be available for your browser:
webRequest.onAuthRequired - Chrome & Firefox
runtime.nativeMessaging - Chrome & Firefox
While both browser APIs are very similar, they do have some significant differences - such as Chrome's implementation lacking Promises.
If you set up your Native Messaging Host to send a properly-formed JSON string, you need only poll it once. This means you can use a single call to runtime.sendNativeMessage() and be assured that your credentials are parseable. Pun intended.
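For reference, a minimal sketch of such a host in Python 2 (the credential field names and values are placeholders; native messaging frames every message with a 4-byte native-order length prefix followed by UTF-8 JSON):

#!/usr/bin/env python
import sys
import json
import struct

def read_message():
    # The browser sends a 4-byte length, then that many bytes of JSON.
    raw_length = sys.stdin.read(4)
    if not raw_length:
        sys.exit(0)
    length = struct.unpack('=I', raw_length)[0]
    return json.loads(sys.stdin.read(length))

def send_message(obj):
    encoded = json.dumps(obj)
    sys.stdout.write(struct.pack('=I', len(encoded)))
    sys.stdout.write(encoded)
    sys.stdout.flush()

if __name__ == '__main__':
    request = read_message()  # e.g. the {"text": "Ready"} poll from the extension
    # Placeholder credentials; fetch them however your environment allows,
    # never hard-code real secrets like this.
    send_message({'user': 'someuser', 'pass': 'somepass'})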
Next, we need to look at how we're supposed to handle the webRequest.onAuthRequired event.
Since I'm working in Chromium, I need to use the promise-less Chrome API.
chrome.webRequest.onAuthRequired.addListener(
    callbackFunctionHere,
    {urls: [targetUrls]},
    ['asyncBlocking']  // --> this line is important, too. Very.
);
The Change:
I'll be calling my function provideCredentials because I'm a big stealy-stealer and used an example from this source. Look for the asynchronous version.
The example code fetches the credentials from storage.local ...
chrome.storage.local.get(null, gotCredentials);
We don't want that. Nope.
We want to get the credentials from a single call to sendNativeMessage so we'll change that one line.
chrome.runtime.sendNativeMessage(hostName, { text: "Ready" }, gotCredentials);
That's all it takes. Seriously. As long as your Host plays nice, this is the big secret. I won't even tell you how long it took me to find it!
Links:
My questions with helpful links:
Here - Workaround for Authenticating against Active Directory
Here - Also has some working code for a functional NM Host
Here - Some enlightening material on promises
So this turns out to be a non-trivial problem.
I haven't implemented the solution, yet, but I know how to get there...
Passing values to an extension is the first step - this can be done in both Chrome and Firefox. Watch the browser version to make sure the required API, nativeMessaging, actually exists in your version. I had to switch to Chromium for this reason.
Alternatively, one can use the storage API to put values in browser storage first. [edit: I did not go this way for security concerns]
Next is to use the onAuthRequired event from the webRequest API. Set up a listener on the event and pass in the values you need.
Caveats: I have built everything right up to the extension itself for the nativeMessaging API solution and there's still a problem with getting the script to recognise the data. This is almost certainly my JavaScript skills clashing with the arcane knowledge required to make these APIs make much sense ...
I have yet to attempt the storage method as it's less secure (in my mind) but it does seem to be simpler.
I simply want to receive notifications from dropbox that a change has been made. I am currently following this tutorial:
https://www.dropbox.com/developers/reference/webhooks#tutorial
The GET method is done, verification is good.
However, when trying to mimic their implementation of POST, I am struggling because of a few things:
I have no idea what redis_url means in the def_process function of the tutorial.
I can't actually verify if anything is really being sent from dropbox.
Also, any advice on how I can debug? I can't print anything from my program since it has to be run on a site rather than in an IDE.
Redis is a key-value store; it's just a way to cache your data throughout your application.
For example, the access token that is received after the OAuth callback is stored:
redis_client.hset('tokens', uid, access_token)
only to be used later in process_user:
token = redis_client.hget('tokens', uid)
(code from https://github.com/dropbox/mdwebhook/blob/master/app.py as suggested by their documentation: https://www.dropbox.com/developers/reference/webhooks#webhooks)
The same goes for per-user delta cursors that are also stored.
However, there are plenty of resources on how to install Redis, for example:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-redis
In this case your redis_url would be something like:
"redis://localhost:6379/"
There are also hosted solutions, e.g. http://redistogo.com/
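Whichever URL you end up with, creating the client in Python is a one-liner with the redis-py package (the environment variable name below is just an example):

import os
import redis

# Fall back to a local instance if no hosted Redis URL is configured.
redis_url = os.environ.get('REDIS_URL', 'redis://localhost:6379/')
redis_client = redis.from_url(redis_url)

# Same kind of calls as in the tutorial code above.
redis_client.hset('tokens', 'some-uid', 'some-access-token')
print(redis_client.hget('tokens', 'some-uid'))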
A possible workaround would be to use a database for this purpose.
As for debugging, you could use the logging facility for Python; it's thread-safe and capable of writing output to a file stream, and it should provide you with plenty of information if properly used.
More info here:
https://docs.python.org/2/howto/logging.html
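A minimal sketch of that, assuming the webhook runs under some web framework and you just want everything written to a file you can tail (the file name is arbitrary):

import logging

# stdout isn't visible when the app runs behind a web server,
# so write debug output to a file instead.
logging.basicConfig(
    filename='webhook.log',
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
)

def handle_webhook(request_body):
    # Log the raw payload so you can confirm Dropbox is really calling you.
    logging.debug('Webhook payload: %r', request_body)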
I currently have a django app that is deployed to EC2. I was going to add some extra logging info in using boto.utils to get things like the instance id. However when running the code locally, the call to boto.utils.get_instance_metadata()['instance-id'] just hangs, rather than return None or an empty string.
I can't seem to see in boto whether there is a flag or function to check if you are running on an EC2 instance.
Does anyone know of any?
Thanks!
Checking for the existence of the metadata server is the best (only?) way I know of to detect if you are running on an EC2 instance or not. The get_instance_metadata function takes a couple of optional parameters that you can supply to control the timeouts and retry strategy. For example:
>>> boto.utils.get_instance_metadata(timeout=1, num_retries=1)
{}
>>>
Will use a timeout of one second and will retry one time. You could also specify num_retries of zero if you want the call to return even faster. Note however that you do occasionally get failed requests on an actual EC2 instance so having at least one retry would be safer.
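Wrapped up as a small helper, that check might look something like this (just a sketch using the parameters described above):

import boto.utils

def running_on_ec2():
    # An empty dict means the metadata service never answered,
    # so we are almost certainly not on an EC2 instance.
    metadata = boto.utils.get_instance_metadata(timeout=1, num_retries=1)
    return bool(metadata)

instance_id = None
if running_on_ec2():
    instance_id = boto.utils.get_instance_metadata()['instance-id']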