My AWS Lambda function code works fine when I run it outside of an Amazon Virtual Private Cloud (Amazon VPC). However, when I configure my function to connect to a VPC, I get function timeout errors. How do I fix these?
import base64
import json
import logging
import sys

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# region_name and secret_name are assumed to be defined elsewhere (for example, from environment variables).

def get_db_connection_config():
    # Create a Secrets Manager client.
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
    # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
    # The AWS sample rethrows the exception by default; here we log it and exit instead.
    try:
        logger.info("Retrieving MySQL database configuration...")
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as error:
        logger.error(error)
        sys.exit()
    else:
        # Decrypts the secret using the associated KMS key.
        # Depending on whether the secret is a string or binary, one of these fields will be populated.
        if 'SecretString' in get_secret_value_response:
            secret = get_secret_value_response['SecretString']
            return json.loads(secret)
        else:
            return base64.b64decode(get_secret_value_response['SecretBinary'])
When a Lambda function runs outside of a VPC, it uses the AWS-managed network and can reach these services over the internet. Once you attach the function to your VPC, its outbound traffic is routed through your VPC instead, and since your VPC presumably has no outbound internet connectivity, the function can no longer reach the internet.
If your function needs internet access, use network address translation (NAT). Connecting a function to a public subnet doesn't give it internet access or a public IP address.
For your Lambda to be able to communicate with other AWS services when it resides within a VPC, one of the following must be in place.
The first option is to create either a NAT gateway or a NAT instance and add it as the 0.0.0.0/0 target in the route table of the subnet your Lambda function uses. To be clear, this should be a private subnet only: routing 0.0.0.0/0 through a NAT breaks inbound traffic to any instances in the same subnet that rely on a public IP address.
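If you prefer to script that routing setup, here is a minimal boto3 sketch; the region, subnet, Elastic IP allocation, and route table IDs are placeholders you would replace with your own.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region

# Create a NAT gateway in a *public* subnet, backed by an existing Elastic IP allocation.
nat = ec2.create_nat_gateway(
    SubnetId='subnet-PUBLIC-PLACEHOLDER',
    AllocationId='eipalloc-PLACEHOLDER'
)
nat_id = nat['NatGateway']['NatGatewayId']

# Wait until the NAT gateway is available before routing through it.
ec2.get_waiter('nat_gateway_available').wait(NatGatewayIds=[nat_id])

# Point the *private* subnet's route table (the one your Lambda uses) at the NAT gateway.
ec2.create_route(
    RouteTableId='rtb-PRIVATE-PLACEHOLDER',
    DestinationCidrBlock='0.0.0.0/0',
    NatGatewayId=nat_id
)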
The second option is to use VPC endpoints for the services you call. With endpoints in place, traffic that would previously have traversed the public internet uses a private connection directly to the AWS service instead. Please note that not every AWS service offers a VPC endpoint yet.
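A similarly hedged sketch for the endpoint option, creating an interface endpoint for Secrets Manager (the region, VPC, subnet, and security group IDs are placeholders):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region

# Interface endpoint for Secrets Manager; with private DNS enabled, the SDK's default
# endpoint (secretsmanager.us-east-1.amazonaws.com) resolves to the endpoint's private IPs.
ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-PLACEHOLDER',
    ServiceName='com.amazonaws.us-east-1.secretsmanager',
    SubnetIds=['subnet-PRIVATE-PLACEHOLDER'],
    SecurityGroupIds=['sg-PLACEHOLDER'],  # must allow HTTPS (443) from the Lambda's security group
    PrivateDnsEnabled=True
)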
I have the following code
conn_str = "HostName=my_host.azure-devices.net;DeviceId=MY_DEVICE;SharedAccessKey=MY_KEY"
device_conn = IoTHubDeviceClient.create_from_connection_string(conn_str)
await device_conn.connect()
This works fine, but only because I've manually retrieved this from the IoT hub and pasted it into the code. We are going to have hundreds of these devices, so is there a way to retrieve this connection string programmatically?
It'll be the equivalent of the following
az iot hub device-identity connection-string show --device-id MY_DEVICE --hub-name MY_HUB --subscription ABCD1234
How do I do this?
The device ID and key are values you give to each device, and you choose where to store them and how to load them. The connection string is just a convenience to make getting started easy; at the technical level it has no special meaning.
You can use create_from_symmetric_key(symmetric_key, hostname, device_id, **kwargs) to pass the key, device ID, and hub hostname directly to the SDK.
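For example, a minimal sketch (the hostname, device ID, and key are placeholders):

from azure.iot.device import IoTHubDeviceClient, Message

# Build the client directly from the values the device already has,
# without ever assembling a connection string.
client = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key="MY_DEVICE_PRIMARY_KEY",
    hostname="my_host.azure-devices.net",
    device_id="MY_DEVICE",
)
client.connect()
client.send_message(Message("hello from device"))
client.disconnect()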
I found that it's not possible to retrieve the actual connection string, but an equivalent one can be built from the device's primary key:
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.device import IoTHubDeviceClient

# HUB_HOST is YOURHOST.azure-devices.net
# SHARED_ACCESS_KEY is from the registryReadWrite connection string
reg_str = "HostName={0};SharedAccessKeyName=registryReadWrite;SharedAccessKey={1}".format(
    HUB_HOST, SHARED_ACCESS_KEY)

# Look up the device in the identity registry and read its primary key.
device = IoTHubRegistryManager(reg_str).get_device("MY_DEVICE_ID")
device_key = device.authentication.symmetric_key.primary_key

# Build the device connection string and connect.
conn_str = "HostName={0};DeviceId={1};SharedAccessKey={2}".format(
    HUB_HOST, "MY_DEVICE_ID", device_key)
client = IoTHubDeviceClient.create_from_connection_string(conn_str)
client.connect()

# Remaining code here...
Other options you could consider include:
Use the Device Provisioning Service (DPS) to manage provisioning and connecting your devices to your IoT hub. You won't need to generate connection strings manually in this case (a brief sketch follows after this list).
Use X.509 certificates (recommended for production environments instead of SAS keys). Each device has an X.509 certificate that chains up to a CA certificate registered with your hub. See: https://learn.microsoft.com/azure/iot-hub/tutorial-x509-introduction
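If you go the DPS route, the flow looks roughly like the sketch below, assuming symmetric-key attestation; the ID scope, registration ID, and key are placeholders.

from azure.iot.device import IoTHubDeviceClient, ProvisioningDeviceClient

# Register the device with the Device Provisioning Service...
provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="MY_DEVICE",
    id_scope="0ne00XXXXXX",          # placeholder ID scope
    symmetric_key="MY_DEVICE_KEY",   # placeholder enrollment key
)
registration_result = provisioning_client.register()

# ...then connect to whichever hub DPS assigned, again without a connection string.
client = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key="MY_DEVICE_KEY",
    hostname=registration_result.registration_state.assigned_hub,
    device_id=registration_result.registration_state.device_id,
)
client.connect()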
I am trying to connect to AWS Secrets Manager. On my local machine it works fine, but on the Python server it gives the error "botocore.exceptions.NoCredentialsError: Unable to locate credentials".
session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name
)
So far I have found two ways to work around this:
First Method:
session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name,
    aws_access_key_id=Xxxxxx,
    aws_secret_access_key=xxxxxx
)
Second Method: keep the keys in a config file (which again exposes the keys):
session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name,
    aws_access_key_id=confg.access,
    aws_secret_access_key=confg.key
)
Aren't we exposing our access key and secret key on GitHub if we specify them here?
What is the correct way to access Secrets Manager without specifying them in the code?
You are correct: you shouldn't pass your access key and secret key to any server or service running in AWS, to avoid exposing them. On your local machine it worked because your environment picks up your user's credentials configured via the AWS CLI.
What you need to do for a server is add a policy to its service role that allows access to Secrets Manager; then you won't face credential issues anymore (a sketch follows after the links below).
On Permissions policy examples - AWS Secrets Manager you can find examples of what those policies should look like.
And on Assign an IAM Role to an EC2 Instance you can see how to attach a role with a specific policy to an EC2 instance.
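As an illustration only (the role name, policy name, region, account ID, and secret name are placeholders), attaching such a policy with boto3 could look roughly like this; in practice you may prefer to do it in the console or in your infrastructure-as-code tool.

import json
import boto3

iam = boto3.client('iam')

# Minimal inline policy allowing the role to read one specific secret.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret-*"
        }
    ]
}

iam.put_role_policy(
    RoleName='my-service-role',   # placeholder: the role your server or Lambda runs as
    PolicyName='read-my-secret',
    PolicyDocument=json.dumps(policy)
)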
I'm trying to fetch my bookings collection from an AWS DocumentDB cluster. I keep the credentials in AWS Secrets Manager and pass them when connecting. However, the function gives me a task timed out error after 900 seconds. (I increased the time limit to 15 minutes because it gave the same error at shorter durations.)
The error is as such:
{
"errorMessage": "2021-08-19T09:05:22.872Z a96e95cb-4c42-4880-b339-9cb29e83c1ec Task timed out after 900.10 seconds"
}
Code snippet:
import json
import os
from traceback import format_exc

from pymongo import MongoClient

# get_secret(), which retrieves the secret from Secrets Manager, is defined elsewhere (not shown).

def lambda_handler(event, context):
    db = create_mongo_connection()
    print(db)
    print("aaaaa")                   # this gets printed -- debugging
    bookings = db.bookings.find({})  # bookings collection not fetched
    print("bbbbb")                   # this does not get printed -- debugging

# Configuration settings maintained in environment variables
# (the variable is assumed to contain JSON with a 'db_name' key).
mongoconfig = json.loads(os.environ['mongoconfig'])

def create_mongo_connection():
    try:
        secretsmanager = get_secret()
        SecretString = json.loads(secretsmanager)
        username = SecretString['username']
        password = SecretString['password']
        host = SecretString['host']
        port = SecretString['port']
        mongoclient = MongoClient(host, port, username=username, password=password,
                                  authSource='admin', ssl_ca_certs='rds-combined-ca-bundle.pem',
                                  retryWrites='false')
        dbname = mongoconfig['db_name']
        print(dbname)
        return mongoclient[dbname]
    except Exception as e:
        print("Exception : ", e, "\nTraceback : ", format_exc())
The API endpoints for the Secrets Manager live on the Internet. It sounds like the Lambda function is not able to access the Internet.
When an AWS Lambda function is connected to a VPC, it can access resources in the VPC. However, to access the Internet:
The Lambda function must be in a private subnet, and
A NAT Gateway must be in a public subnet
If the Lambda function does not require access to resources in the VPC, then simply disconnect the Lambda function from the VPC and it will receive Internet access automatically.
There is another option, which is to Use Secrets Manager with VPC endpoints - AWS Secrets Manager; this creates a private connection between your VPC and Secrets Manager.
I have published a Lex bot and a Lambda function in the same region. I am trying to interact with Lex from Lambda using the following code.
import boto3

client = boto3.client('lex-runtime')

def lambda_handler(event, context):
    response = client.post_text(
        botName='string',
        botAlias='string',
        userId='string',
        # sessionAttributes={
        #     'string': 'string'
        # },
        # requestAttributes={
        #     'string': 'string'
        # },
        inputText='entity list'
    )
    return response
While testing the code, my Lambda function times out. Kindly let me know if you need any more information.
Error Message:
"errorMessage": "2021-04-30T07:09:45.715Z <req_id> Task timed out after 183.10 seconds"
A Lambda function in a VPC, including the default VPC, does not have internet access. Since your Lambda is in a default VPC, it will fail to connect to the lex-runtime endpoint, resulting in a timeout.
Since Lex does not support VPC endpoints, the only ways for your Lambda to access lex-runtime are:
Disassociate your function from the VPC. If you do this, your function will be able to connect to the internet, and subsequently to lex-runtime.
Create a private subnet in the default VPC and set up a NAT gateway, then place your function in the new subnet. This way your function will be able to connect to the internet through the NAT.
More info is in:
How do I give internet access to a Lambda function that's connected to an Amazon VPC?
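If you want to confirm the VPC attachment programmatically, a quick check like the sketch below (the region and function name are placeholders) prints the subnets and security groups the function is wired into; an empty or missing VpcConfig means it is not attached to a VPC.

import boto3

lambda_client = boto3.client('lambda', region_name='us-east-1')  # assumed region

config = lambda_client.get_function_configuration(FunctionName='my-lex-caller')  # placeholder name
vpc_config = config.get('VpcConfig', {})
print("VPC:", vpc_config.get('VpcId'))
print("Subnets:", vpc_config.get('SubnetIds'))
print("Security groups:", vpc_config.get('SecurityGroupIds'))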
I'm developing a Lambda function that writes to DynamoDB. On one hand, I have created a layer with a script that contains the DynamoDB helper functions:
import boto3

class DynamoHandler():
    def __init__(self):
        self.resource = boto3.resource('dynamodb', region_name='eu-west-1')
        self.__table = None

    def set_table(self, table_name: str):
        table = self.resource.Table(table_name)
        table.table_arn  # accessing an attribute loads the table metadata from DynamoDB
        self.__table = table

    def insert(self, item, **kwargs):
        self.__check_table()  # __check_table() is defined elsewhere in the layer (not shown)
        return self.__table.put_item(
            Item=item,
            **kwargs
        )
In the lambda I write the following code:
from dynamo_class import DynamoHandler

db = DynamoHandler()
db.set_table(TABLE_NAME)  # TABLE_NAME is a placeholder for the actual table name
db.insert(msg)
And I get the error:
[ERROR] EndpointConnectionError: Could not connect to the endpoint URL: "https://dynamodb.eu-west-1.amazonaws.com/"
Do you know how I can solve this problem?
I have searched for similar errors, but they occurred when the region was not specified; in my case the DynamoDB class assigns the region "eu-west-1".
The error most likely occurs because a Lambda function in a VPC has no internet access and no public IP address. From the docs:
Connecting a function to a public subnet doesn't give it internet access or a public IP address.
Consequently, the Lambda function can't connect to the DynamoDB endpoint.
There are two ways to rectify the issue:
Place the Lambda in a private subnet and set up a NAT gateway so the Lambda can access the internet.
Use a VPC gateway endpoint for DynamoDB, which would be better in this case, as gateway endpoints for DynamoDB incur no extra charges (a sketch follows below).
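A minimal sketch of the gateway-endpoint option (the VPC and route table IDs are placeholders); once the endpoint is associated with the route table used by the Lambda's subnet, calls to dynamodb.eu-west-1.amazonaws.com stay inside the VPC.

import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-PLACEHOLDER',
    ServiceName='com.amazonaws.eu-west-1.dynamodb',
    RouteTableIds=['rtb-PLACEHOLDER'],  # the route table(s) of the Lambda's subnet(s)
)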
In addition to the great answer by Marcin above: have you checked that the security group associated with the function has egress rules that allow its network interface to connect to either DynamoDB or the NAT gateway?