I'm getting an error "Unable to determine handler from trigger event" when hitting an AWS API Gateway endpoint. The endpoint triggers a lambda, which is using FastAPI and Mangum.
The Lambda integration on the GET method in API Gateway needs to have "Use Lambda Proxy integration" ticked.
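For context, here is a minimal sketch of what the Lambda side typically looks like (module and route names are placeholders, not the asker's code). Mangum inspects the incoming event to decide how to translate it; without proxy integration, API Gateway sends a mapping-template event that Mangum cannot recognize, hence the error.

from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/")
def root():
    return {"message": "ok"}

# With "Use Lambda Proxy integration" enabled, API Gateway sends the
# proxy-format event that Mangum knows how to translate into an ASGI request.
handler = Mangum(app)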
Background
I have a Google Cloud Composer 1 environment running on GCP.
I have written a Google Cloud Function that, when run in the cloud, successfully triggers a DAG Run in my Composer environment. My code is based on and almost identical to the code in the Trigger DAGs with Cloud Functions guide from GCP documentation.
Here is the section of code most relevant to my question (source):
from google.auth.transport.requests import Request
from google.oauth2 import id_token
import requests


def make_iap_request(url, client_id, method='GET', **kwargs):
    """Makes a request to an application protected by Identity-Aware Proxy.

    Args:
        url: The Identity-Aware Proxy-protected URL to fetch.
        client_id: The client ID used by Identity-Aware Proxy.
        method: The request method to use
            ('GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE')
        **kwargs: Any of the parameters defined for the request function:
            https://github.com/requests/requests/blob/master/requests/api.py
            If no timeout is provided, it is set to 90 by default.

    Returns:
        The page body, or raises an exception if the page couldn't be retrieved.
    """
    # Set the default timeout, if missing
    if 'timeout' not in kwargs:
        kwargs['timeout'] = 90

    # Obtain an OpenID Connect (OIDC) token from the metadata server or using
    # a service account.
    google_open_id_connect_token = id_token.fetch_id_token(Request(), client_id)

    # Fetch the Identity-Aware Proxy-protected URL, including an
    # Authorization header containing "Bearer " followed by a
    # Google-issued OpenID Connect token for the service account.
    resp = requests.request(
        method, url,
        headers={'Authorization': 'Bearer {}'.format(
            google_open_id_connect_token)}, **kwargs)
    if resp.status_code == 403:
        raise Exception('Service account does not have permission to '
                        'access the IAP-protected application.')
    elif resp.status_code != 200:
        raise Exception(
            'Bad response from application: {!r} / {!r} / {!r}'.format(
                resp.status_code, resp.headers, resp.text))
    else:
        return resp.text
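For completeness, the guide invokes this helper roughly like this to create the DAG run (a sketch; the client ID, tenant project, and DAG name below are placeholders for my environment's values):

# Placeholders: in the guide these come from the Composer environment's IAP
# client ID and the Airflow web server URL.
client_id = 'YOUR_IAP_CLIENT_ID.apps.googleusercontent.com'
webserver_id = 'YOUR_TENANT_PROJECT'
dag_name = 'composer_sample_trigger_response_dag'

url = 'https://{}.appspot.com/api/experimental/dags/{}/dag_runs'.format(
    webserver_id, dag_name)
make_iap_request(url, client_id, method='POST',
                 json={'conf': {}, 'replace_microseconds': 'false'})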
Challenge
I want to be able to run the same Cloud Function locally on my dev machine. When I try to do that, the function crashes with this error message:
google.auth.exceptions.DefaultCredentialsError: Neither metadata server or valid service account credentials are found.
This makes sense because the line that throws the error is:
google_open_id_connect_token = id_token.fetch_id_token(Request(), client_id)
Indeed, when running locally the metadata server is not available, and I don't know how to make valid service account credentials available to the call to fetch_id_token().
Question
My question is - What do I need to change in order to be able to securely obtain the OpenID Connect token when I run my function locally?
I've been able to run my code locally without changing it. Below are the details, though I'm not sure this is the most secure way to do it.
In the Google Cloud Console I browsed to the Service Accounts module.
I clicked on the "App Engine default service account" to see its details.
I switched to the "KEYS" tab.
I clicked on the "Add Key" button and generated a new JSON key.
I downloaded the JSON file and placed it outside of my source code folder.
Finally, on my dev machine*, I set the GOOGLE_APPLICATION_CREDENTIALS environment variable to be the path to where I placed the JSON file. More details here: https://cloud.google.com/docs/authentication/production
Once I did this, the call to id_token.fetch_id_token() picked up the service account details from the key file and returned the token successfully.
* - In my case I set the environment variable inside my PyCharm Debug Configuration.
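Putting the steps together, this minimal sketch works for me locally (the key path and client ID are placeholders; fetch_id_token() reads the environment variable at call time, so setting it in the shell or in the debug configuration works equally well):

import os

from google.auth.transport.requests import Request
from google.oauth2 import id_token

# Placeholder path: the JSON key downloaded above, kept outside the source tree.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/outside/repo/key.json'

client_id = 'YOUR_IAP_CLIENT_ID.apps.googleusercontent.com'  # placeholder
token = id_token.fetch_id_token(Request(), client_id)
print('Fetched an OIDC token of length', len(token))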
I want to send a message to a websocket client when it connects to the server on AWS Lambda and API Gateway. Currently, I use wscat as a client. Since the response 'connected' is not shown on the wscat console when I connect to the server, I added post_to_connection to send a 'hello world' message to the client. However, it raises a GoneException.
An error occurred (GoneException) when calling the PostToConnection operation
How can I solve this problem and send some message to wscat when connecting to the server?
My Python code is below. I use Python 3.8.5.
import os

import boto3
import botocore

dynamodb = boto3.resource('dynamodb')
connections = dynamodb.Table(os.environ['TABLE_NAME'])


def lambda_handler(event, context):
    domain_name = event.get('requestContext', {}).get('domainName')
    stage = event.get('requestContext', {}).get('stage')
    connection_id = event.get('requestContext', {}).get('connectionId')
    result = connections.put_item(Item={'id': connection_id})
    apigw_management = boto3.client('apigatewaymanagementapi',
                                    endpoint_url=f"https://{domain_name}/{stage}")
    ret = "hello world"
    try:
        _ = apigw_management.post_to_connection(ConnectionId=connection_id,
                                                Data=ret)
    except botocore.exceptions.ClientError as e:
        print(e)
        return {'statusCode': 500,
                'body': 'something went wrong'}
    return {'statusCode': 200,
            'body': 'connected'}
Self-answer: you cannot call post_to_connection for the connection itself in the $connect handler.
I have found that the GoneException can occur when the client that initiated the websocket has disconnected from the socket and its connectionId can no longer be found. Is there something causing the originating client to disconnect from the socket before it can receive your return message?
My use case is different, but I am basically using a DB to check the state of a connection before replying to it, rather than using the request context. The error's appearance was reduced by writing connectionIds to DynamoDB on connect and deleting them from the table upon disconnect events; messaging now writes to connectionIds in the table instead of the id in the request context. Most messages go through, but some errors are still emitted when the client leaves the socket without emitting a proper disconnect event, which leaves orphans in the table. The next step is to enforce item deletes when irregular disconnections occur. Involving a DB may be overkill for your situation; I'm just sharing what helped me make progress on the GoneException error.
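For reference, the $disconnect side of that pattern looks roughly like this (a sketch; the table name mirrors the question's TABLE_NAME variable and is an assumption):

import os

import boto3

dynamodb = boto3.resource('dynamodb')
connections = dynamodb.Table(os.environ['TABLE_NAME'])


def lambda_handler(event, context):
    connection_id = event.get('requestContext', {}).get('connectionId')
    # Remove the id so later broadcasts don't target a connection that is gone.
    connections.delete_item(Key={'id': connection_id})
    return {'statusCode': 200}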
We need to post to the connection after connecting (i.e. when the routeKey is not $connect):
routeKey = event.get('requestContext', {}).get('routeKey')
print(routeKey)  # for debugging
if routeKey != '$connect':  # if we have defined multiple route keys we can choose the right one here
    apigw_management.post_to_connection(ConnectionId=connection_id, Data=ret)
@nemy's answer is correct, but it doesn't explain the reason, so I just want to explain...
First of all, what is a GoneException (HTTP 410 Gone)?
A 410 Gone error occurs when a user tries to access an asset that no longer exists on the requested server. For a request to return a 410 Gone status, the resource must also have no forwarding address and be considered permanently gone.
You can find more details about GoneException in this article.
Here, the GoneException means that the connection we are trying to POST to doesn't exist, which fits the scenario perfectly, because we still haven't established the connection between client and server. The way API Gateway WebSocket APIs work is that you request an endpoint (route), and that endpoint invokes a Lambda function (in our case, the connection Lambda function for the $connect route).
Now, if the Lambda function resolves with statusCode: 200, then and only then will API Gateway allow the connection to be established. So, until we return statusCode: 200 from our Lambda function, we are not connected and completely unknown to the server; that's why a post_to_connection call made before the return statement itself will throw an error.
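To make that concrete, here is a minimal sketch (not the asker's exact code) where $connect only records the connection and returns 200, and the greeting is sent from any other route once the handshake has completed:

import os

import boto3

dynamodb = boto3.resource('dynamodb')
connections = dynamodb.Table(os.environ['TABLE_NAME'])


def lambda_handler(event, context):
    ctx = event.get('requestContext', {})
    connection_id = ctx.get('connectionId')

    if ctx.get('routeKey') == '$connect':
        # Only record the connection; API Gateway completes the handshake
        # after we return 200, so posting back here would raise GoneException.
        connections.put_item(Item={'id': connection_id})
        return {'statusCode': 200}

    # Any other route (e.g. $default) runs after the handshake, so the
    # connection exists and post_to_connection succeeds.
    endpoint = f"https://{ctx.get('domainName')}/{ctx.get('stage')}"
    apigw = boto3.client('apigatewaymanagementapi', endpoint_url=endpoint)
    apigw.post_to_connection(ConnectionId=connection_id, Data=b'hello world')
    return {'statusCode': 200}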
I have published a Lex bot and a Lambda function in the same region. I am trying to interact with Lex from Lambda using the following code.
import boto3

client = boto3.client('lex-runtime')


def lambda_handler(event, context):
    response = client.post_text(
        botName='string',
        botAlias='string',
        userId='string',
        # sessionAttributes={
        #     'string': 'string'
        # },
        # requestAttributes={
        #     'string': 'string'
        # },
        inputText='entity list'
    )
    return response
While testing the code, my Lambda function times out. Kindly let me know if you need anything else.
Error Message:
"errorMessage": "2021-04-30T07:09:45.715Z <req_id> Task timed out after 183.10 seconds"
A Lambda function in a VPC, including the default VPC, does not have internet access. Since your Lambda is in the default VPC, it fails to connect to any lex-runtime AWS endpoint, resulting in a timeout.
Since Lex does not support VPC endpoints, the only ways for your Lambda to reach the lex-runtime are:
Disassociate your function from the VPC. If you do this, your function will be able to connect to the internet, and subsequently to lex-runtime.
Create a private subnet in the default VPC and set up a NAT gateway, then place your function in the new subnet. This way your function will be able to reach the internet through the NAT.
More info is in:
How do I give internet access to a Lambda function that's connected to an Amazon VPC?
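As a side note, timeouts like this are easier to diagnose if the client fails fast instead of hanging until the Lambda deadline. A small sketch (the exact timeout values are arbitrary):

import boto3
from botocore.config import Config

# Fail fast: surface a connect timeout after a few seconds instead of
# hanging until the Lambda itself is killed at its configured timeout.
client = boto3.client(
    'lex-runtime',
    config=Config(connect_timeout=5, read_timeout=5, retries={'max_attempts': 1})
)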
I have this stack:
A Laravel app hosted with Docker as a webapp on Azure;
An external SOAP API;
A Queue Trigger Azure Function App in Python.
The workflow is simple:
Laravel dispatches a message to an Azure Storage Queue;
The Queue Trigger starts and hits (with requests.get()) a Laravel endpoint (i.e. [host]/queue/queue-name), like this:
laravel_endpoint = os.environ['laravel_endpoint']
r = requests.get(laravel_endpoint)
This route is connected to a controller that processes the queue, like this:
$queue = $request->route('queue', null);
if ($queue) {
    \Artisan::call('queue:work --queue='.$queue.' --stop-when-empty');
}
Logging the status code of the request (r.status_code) sometimes shows a 502.
In that case, the code never reaches the controller, so the message is not processed.
We noticed this behaviour when the SOAP API returns a large amount of data.
There are no signs of this problem in the Apache logs, nor in the Laravel logs.
I've already tried raising the timeout and the memory for the Laravel queue, but, as said, when the endpoint returns a 502 error, the controller can't be reached.
Any ideas?
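Not a root-cause fix, but one mitigation sketch on the function side: Azure's front end enforces a request timeout of roughly 230 seconds, and a transient 502 currently drops the message outright, so retrying with an explicit timeout may help (the retry policy below is an assumption; laravel_endpoint mirrors the snippet above):

import os
import time

import requests

laravel_endpoint = os.environ['laravel_endpoint']

# Retry a couple of times on 502 with simple exponential backoff; the
# explicit timeout keeps the function from waiting on a dead connection.
for attempt in range(3):
    r = requests.get(laravel_endpoint, timeout=230)
    if r.status_code != 502:
        break
    time.sleep(2 ** attempt)

print(r.status_code)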
I am using an Azure HTTP-triggered function to perform a task. I pass the function key as an HTTP header parameter, and my payload is JSON with some data that invokes downstream procedures. I am using urllib (the Python library) for this request, and this is the response I am getting, even though the function does get triggered.
urllib.error.HTTPError: HTTP Error 417: Expectation Failed
This was more of a firewall issue. We have been trying to connect to an Azure Analysis Services instance from ADW, and we had added IP filtering (our corporate public IP) for AAS. When the function's procedure then tried to connect to AAS, it was coming from a different IP (NOT the corporate public IP) and was rejected. We added that IP as well, and now things are working fine.
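For reference, the client call in the question would look roughly like this with urllib (a sketch; the URL, key, and payload are placeholders, and x-functions-key is the standard header for passing an Azure Functions key):

import json
import urllib.request

url = 'https://<app-name>.azurewebsites.net/api/<function-name>'  # placeholder
payload = json.dumps({'some': 'data'}).encode('utf-8')

req = urllib.request.Request(
    url,
    data=payload,
    headers={
        'x-functions-key': '<function-key>',  # placeholder
        'Content-Type': 'application/json',
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode('utf-8'))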