I suspect this might be an IAM/VPC issue, but perhaps not. My Lambda function calls Cognito's sign_up to register a user. The Lambda function is behind an API GET endpoint.
I am able to call the endpoint locally and it creates the Cognito user successfully. However, when I deploy it to a staging environment, it no longer seems able to call Cognito. I checked the CloudWatch logs and no error is reported.
The staging environment has the Lambda function inside a VPC that also contains a MySQL database. I am able to pull data from that database from this Lambda function. From that same environment, I am now trying to call the Cognito user pool.
I also checked that the IAM role for this Lambda function has the CognitoPowerUser permissions attached.
I am not sure what other IAM roles/permissions I am missing here; it works fine when I call it locally but not from that staging environment.
My assumption is also that this is an issue with your Lambda function in the VPC being unable to reach the public internet due to your VPC setup.
I would encourage you to review this blog to ensure you are set up correctly from a network perspective.
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
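As a side note, the "no error, just a timeout" symptom is typical of this: the cognito-idp endpoint is public, so a VPC-attached Lambda without a NAT gateway simply cannot reach it and the SDK keeps retrying until the function times out. A minimal sketch (the app client ID and user details are placeholders) that sets short botocore timeouts so the failure shows up in the CloudWatch logs instead of hanging:

import boto3
from botocore.config import Config

# Short timeouts and a single attempt so a blocked network path fails fast
# and is logged, rather than silently running into the Lambda timeout.
cfg = Config(connect_timeout=5, read_timeout=5, retries={"max_attempts": 1})
cognito = boto3.client("cognito-idp", config=cfg)

def handler(event, context):
    # ClientId/Username/Password are placeholder values for illustration.
    resp = cognito.sign_up(
        ClientId="YOUR_APP_CLIENT_ID",
        Username="user@example.com",
        Password="S0me-Temp-Passw0rd!",
    )
    return resp["UserSub"]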
I'm building an application that can run user-submitted python code. I'm considering the following approaches:
Spinning up a new AWS Lambda function for each user's request to run the submitted code in it, then deleting the Lambda function afterwards. I'm aware of AWS Lambda's time limit, so this would be used to run only small functions.
Spinning up a new EC2 instance to run a user's code. One instance per user. Keep the instance running while the user is still interacting with my application, and kill the instance after the user is done.
Same as the 2nd approach, but also spin up a Docker container inside the EC2 instance to add an additional layer of isolation (is this necessary?)
Are there any security vulnerabilities I need to be aware of? Will the user be able to do anything if they gain access to environment variables in their own lambda function/ec2 machine? Are there any better solutions?
Any code which you run on AWS Lambda will have the permissions of the associated function's execution role. Be very careful what you supply.
Even logging and metrics access can be manipulated to incur additional costs.
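If you do go the EC2/container route, one cheap layer of defence is to strip the environment and bound the runtime before executing anything user-supplied. A rough sketch of that idea (the path and limit are arbitrary placeholders, and this is not a complete sandbox on its own):

import subprocess

def run_untrusted(path_to_user_script: str) -> str:
    # Run the user's script in Python's isolated mode with an empty
    # environment (so no AWS keys or other secrets leak in) and a hard
    # wall-clock limit. It does NOT restrict filesystem or network access.
    result = subprocess.run(
        ["python3", "-I", path_to_user_script],
        env={},
        capture_output=True,
        text=True,
        timeout=10,  # arbitrary limit for illustration
    )
    return result.stdout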
I am getting an error while deploying an Azure Function from my local system.
I went through some blogs, and they state that my function is unable to connect to the Azure storage account which holds the function's metadata.
Also, the function in the portal is showing the error: Azure Functions runtime is unreachable.
Earlier my function was running, but after integrating the function with an Azure Premium App Service plan it has stopped working. My assumption is that my App Service plan has some restriction on the inbound/outbound traffic rules, and due to this it is unable to establish a connection with the function's associated storage account.
Also, I would like to highlight that if a function is using the Premium plan then we have to add a few other configuration properties.
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "DefaultEndpointsProtocol=https;AccountName=blob_container_storage_acc;AccountKey=dummy_value==;EndpointSuffix=core.windows.net"
WEBSITE_CONTENTSHARE = "my-function-name"
For the WEBSITE_CONTENTSHARE property I have added the function app name, but I am not sure about the value.
Following is the Microsoft document reference for the function properties
Microsoft Function configuration properties link
Can you please help me resolve the issue?
Note: I am using Python for the Azure Functions.
I have created a new function app with a Premium plan and selected Python as the runtime stack. When you select Python, the OS is automatically Linux.
Below is the message you get when creating functions for a Premium plan function app:
Your app is currently in read only mode because Elastic Premium on Linux requires running from a package.
We need to create, deploy, and run function apps from a package; refer to the documentation on how to run functions from a package.
Documentation
Make sure to add all your local.settings.json settings to the Application Settings of the function app.
I am not sure what kind of Azure Function you are using, but usually when there is a storage account associated, we need to specify the AzureWebJobsStorage field in the serviceDependencies.json file inside the Properties folder. When I faced the same error, the cause was that, while publishing the Azure Function from local, some settings from local.settings.json were missing in the Application Settings of the app service under the Configuration blade.
There are a few more things you can recheck:
Does the storage account you are trying to use still exist, or has it been deleted by any chance? (A quick way to check this is sketched after this list.)
While publishing the application from local using the web deploy method, is the publish profile correct, or does it have any issues?
Try disabling the function app and then stopping the app service before redeploying it.
Hope one of the points above helps you diagnose and solve the issue.
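For the first point above, one quick way to check whether the storage account referenced by AzureWebJobsStorage / WEBSITE_CONTENTAZUREFILECONNECTIONSTRING still exists and is reachable is to try the connection string locally, for example with the azure-storage-blob package (the connection string below is a placeholder for the real value from your app settings):

from azure.storage.blob import BlobServiceClient

# Placeholder: paste the real connection string from your application settings.
conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(conn_str)
# Listing containers forces an authenticated round trip; it raises an error if
# the account is gone, the key is wrong, or network rules block the call.
for container in service.list_containers():
    print(container.name)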
The thing is that there is a difference in how the function is deployed on a Consumption vs. a Premium service plan.
Consumption - works out of the box.
Premium - you need to add WEBSITE_RUN_FROM_PACKAGE = 1 to the function's Application settings (see https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package for full details).
When trying to use requests in my function, it only times out without any additional error. I was trying to use RapidAPI for Amazon. I already have the host and key, but when I test my function it always times out. My zip file and my function were in the same directory, and I knew that my code was correct.
I just figured out that with my VPC configuration the Lambda could only access resources within the VPC. I removed the VPC and it now runs. But if your Lambda function needs to connect to your database, you will need to add and configure the VPC.
What I understand is that, in order to access AWS services such as Redshift, the way to do it is:
import boto3

client = boto3.client("redshift", region_name="someRegion", aws_access_key_id="foo", aws_secret_access_key="bar")
response = client.describe_clusters(ClusterIdentifier="mycluster")
print(response)
This code runs fine both locally through PyCharm and on AWS Lambda.
However, am I correct that the aws_access_key_id and aws_secret_access_key are both mine, i.e. my IAM user's security access keys? Is this supposed to be the case, or am I supposed to create a different user/role in order to access Redshift via boto3?
The more important question is: how do I properly store and retrieve the aws_access_key_id and aws_secret_access_key? I understand that this could potentially be done via Secrets Manager, but I am still faced with the problem that, if I run the code below, I get an error saying that it is unable to locate credentials.
import boto3

client = boto3.client("secretsmanager", region_name="someRegion")
# Met with the problem that it is unable to locate my credentials.
The proper way to do this would be to create an IAM role which allows the desired Redshift functionality, and then attach that role to your Lambda.
When you create the role, you have the flexibility to write a policy that grants fine-grained permissions for certain actions and/or certain resources.
After you have attached the IAM role to your lambda, you will simply be able to do:
>>> client = boto3.client("redshift")
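The same applies to the Secrets Manager part of your question: once the execution role is attached (and allows secretsmanager:GetSecretValue), you create the client with no explicit keys and Boto3 picks up the role's temporary credentials automatically. A small sketch, where the secret name is a made-up placeholder:

import boto3

# No access keys passed: inside Lambda, Boto3 uses the execution role.
secrets = boto3.client("secretsmanager", region_name="someRegion")

# "redshift/credentials" is a hypothetical secret name for illustration.
secret = secrets.get_secret_value(SecretId="redshift/credentials")
print(secret["SecretString"])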
From the docs: the first and second options are not secure, since you mix the credentials with the code.
If the code runs on AWS EC2, the best way is using an "assume role" setup where you grant the EC2 instance permissions. If the code runs outside AWS, you will have to select an option like using ~/.aws/credentials.
Boto3 will look in several locations when searching for credentials: it searches through a list of possible locations and stops as soon as it finds credentials. The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
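So, for example, when running locally you can rely on the shared credentials file instead of hard-coding keys, by pointing Boto3 at a named profile (the profile name here is made up):

import boto3

# Uses the [staging] profile from ~/.aws/credentials; no keys in the code.
session = boto3.Session(profile_name="staging")
client = session.client("redshift", region_name="someRegion")
print(client.describe_clusters(ClusterIdentifier="mycluster"))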
I have two simple Lambda functions. Lambda 1 invokes Lambda 2 (both do a simple print of text).
If both Lambdas are outside of a VPC then the invocation succeeds; however, as soon as I set them both to access a VPC (I need to test within a VPC, as the full process will be within a VPC), the invocation times out.
Do I have to give my Lambda access to the internet to be able to invoke a second Lambda within the same VPC?
If your Lambda functions are inside a VPC, you need to configure both Lambda functions in private subnets, not public subnets. That is the AWS-recommended way.
If you are invoking the second Lambda from the first using Amazon API Gateway, then your Lambda will need to have access to the internet. Follow this guide to configure a NAT Gateway (last step).
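If you are instead invoking the second function directly with the AWS SDK, the same constraint applies: the call goes to the regional Lambda API endpoint, which a VPC-attached function can only reach through a NAT gateway (or a VPC interface endpoint for Lambda, if you prefer to keep the traffic private). A minimal sketch of the direct invocation, with the function name as a placeholder:

import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # "lambda-two" is a placeholder for the second function's name or ARN.
    response = lambda_client.invoke(
        FunctionName="lambda-two",
        InvocationType="RequestResponse",  # use "Event" for async invocation
        Payload=json.dumps({"message": "hello from lambda one"}),
    )
    return json.loads(response["Payload"].read())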
Regarding the VPC: in order to connect to your VPC and access resources there, the Lambdas must reside in the same region as your VPC and also be configured with access to your VPC.
Please follow the steps provided in this AWS guide: Configuring a Lambda Function to Access Resources in an Amazon VPC. This guide advises using AWS CLI commands to do this and does not show how to configure it through the console.
You will need to be familiar with Amazon networking particulars (VPCs, Security Groups and Subnets), IAM security for the VPC and have a CLI environment setup. You are going to grant the Lambda Function access to this VPC using IDs and IAM execution roles via the CLI.