Is it possible to develop a Python-based Docker image that I can run both as an AWS Lambda function and in a non-Lambda environment? The Python script should be able to detect whether it is running in AWS Lambda or not and follow different logic flows in the code. I am not sure how to go about it. One way would be to check the handler function parameters: if they are null, it's a non-Lambda environment, and vice versa. Not sure if there is a better way?
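For concreteness, the kind of check I have in mind looks roughly like this (run_lambda_flow and run_local_flow are placeholders for the two code paths):

def handler(event=None, context=None):
    # When invoked by AWS Lambda, event and context are populated;
    # when the script is started directly (e.g. running the container
    # locally), we call handler() ourselves with no arguments.
    if context is None:
        return run_local_flow()
    return run_lambda_flow(event)

if __name__ == "__main__":
    handler()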
I have come across answers saying that it is not entirely possible to clone a cluster using boto3 in Lambda, and some saying that it is possible only through the AWS CLI. I have also come across the run_job_flow function, but it involves passing all the parameters separately, and I couldn't figure out how to use the terminated cluster along with run_job_flow to get a new clone. If you can, please suggest a way to do this. Thank you.
Option 1: If you have enough permissions in the AWS console, then with a single click you should be able to get a new cluster that is an exact clone of the terminated one.
I am attaching the screenshot for your reference.
Option 2: You can also use the AWS CLI export highlighted in the above image so that you can recreate the cluster from the command line. Paste the output of the CLI export into a file and run it from a place that has enough access to launch the EMR cluster.
Option 3: You can also write an AWS Lambda function that is responsible for spawning the EMR cluster. You can find multiple examples of this online; a sketch of the general idea follows.
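A minimal sketch of that third option, assuming the ID of the terminated cluster is still known. It reads the old cluster's configuration with describe_cluster and list_instance_groups and maps a few of those fields onto run_job_flow; depending on your setup you may need to carry over more settings (bootstrap actions, configurations, security groups, and so on):

import boto3

emr = boto3.client("emr", region_name="us-east-1")  # adjust the region as needed

OLD_CLUSTER_ID = "j-XXXXXXXXXXXXX"  # placeholder: the terminated cluster's ID

old = emr.describe_cluster(ClusterId=OLD_CLUSTER_ID)["Cluster"]
groups = emr.list_instance_groups(ClusterId=OLD_CLUSTER_ID)["InstanceGroups"]

# Rebuild the instance group definitions from the old cluster.
instance_groups = [
    {
        "Name": g["Name"],
        "InstanceRole": g["InstanceGroupType"],   # MASTER / CORE / TASK
        "InstanceType": g["InstanceType"],
        "InstanceCount": g["RequestedInstanceCount"],
    }
    for g in groups
]

response = emr.run_job_flow(
    Name=old["Name"] + "-clone",
    ReleaseLabel=old["ReleaseLabel"],
    Applications=[{"Name": a["Name"]} for a in old["Applications"]],
    Instances={
        "InstanceGroups": instance_groups,
        "Ec2SubnetId": old["Ec2InstanceAttributes"]["Ec2SubnetId"],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole=old["Ec2InstanceAttributes"]["IamInstanceProfile"],
    ServiceRole=old["ServiceRole"],
)
print("New cluster:", response["JobFlowId"])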
I would like to use a serverless Lambda that will execute commands from a tool called WSO2 API CTL (apictl), just as I would on a Linux CLI. I am not sure how to mimic the downloading and calling of the commands, as if I were on a Linux machine, using either Node.js or Python via the Lambda.
I am okay with creating and setting up the Lambda, and even getting it into the right VPC so that the commands can reach an application on an EC2 instance, but I am stuck on how to actually execute the Linux commands using either Node.js or Python, and which one would be better, if any.
After adding the following I get an error trying to download:
os.system("curl -O https://apim.docs.wso2.com/en/latest/assets/attachments/learn/api-controller/apictl-3.2.1-linux-x64.tar.gz")
Warning: Failed to create the file apictl-3.2.1-linux-x64.tar.gz: Read-only
It looks like there is no specific reason to download apictl during the initialisation of your Lambda. Therefore, I would propose bundling it with your deployment package.
The advantages of this approach are:
Quicker initialisation
Less code in your Lambda
You could extend your CI/CD pipeline to download the application during build and then add it to your ZIP archive that you deploy.
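In Python, invoking the bundled binary could look something like the sketch below. It assumes apictl has been unpacked into a bin/ folder inside the deployment package (a hypothetical layout) and sets HOME to /tmp, since /tmp is the only writable path in the Lambda filesystem and apictl keeps its configuration under the home directory; apictl version is used purely as a smoke test:

import os
import subprocess

def handler(event, context):
    # Path to the binary bundled in the deployment package (read-only at runtime).
    apictl = os.path.join(os.environ["LAMBDA_TASK_ROOT"], "bin", "apictl")

    # apictl writes config/state under $HOME; only /tmp is writable in Lambda.
    env = {**os.environ, "HOME": "/tmp"}

    result = subprocess.run(
        [apictl, "version"],
        env=env,
        capture_output=True,
        text=True,
        check=False,
    )
    print(result.stdout, result.stderr)
    return {"returncode": result.returncode}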
I would like to know if there is a Lambda function or Python script I can run that would turn on EBS encryption for all AWS regions, instead of me having to enable it manually in each one.
You could certainly write a Python script that could run locally, or in an AWS Lambda environment, that loops through all the AWS regions, calling the boto3 EC2 client's enable_ebs_encryption_by_default() method in each region.
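A minimal sketch of that loop, using describe_regions to enumerate the regions and a per-region EC2 client (error handling for regions that are disabled in the account is left out):

import boto3

def enable_ebs_default_encryption_everywhere():
    # Enumerate all regions visible to this account.
    regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        # Encryption-by-default is a per-region setting, so call it in every region.
        result = ec2.enable_ebs_encryption_by_default()
        print(region, "EbsEncryptionByDefault =", result["EbsEncryptionByDefault"])

def handler(event, context):
    # Lambda entry point; the same function can be called directly
    # when running the script locally.
    enable_ebs_default_encryption_everywhere()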
I have built a deployment package with pandas, numpy, etc. for my sample code to run. The size is some 46 MB. My doubt is: do I have to zip my code and upload the entire deployment package to AWS S3 again for every simple code update?
Is there any other way by which I can avoid uploading ~45 MB to S3 every time and just upload the few KBs of code?
I would recommend creating a layer in AWS Lambda.
First you need to create an instance of Amazon Linux (using the AMI specified at https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html - at this time (26th of March 2019) it is amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2) or a Docker container with the same environment as the Lambda execution environment.
I personally do it with Docker.
For example, to create a layer for Python 3.6, I would run
sudo docker run --rm -it -v "$PWD":/var/task lambci/lambda:build-python3.6 bash
Then I would create a folder python/lib/python3.6/site-packages in /var/task in the Docker container (so it will be accessible later in the directory on the host machine where I started Docker),
run pip3 install <your packages here> -t python/lib/python3.6/site-packages
and then zip up the folder python, upload it as a layer, and use it in my AWS Lambda function.
NB! The paths in the zip file should look like "python/lib/python3.6/site-packages/{your package names}"
Now the heavy dependencies are in a separate layer and you don't have to re-upload them every time you update the function; you only need to update the code.
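If you prefer to script the layer upload instead of using the console, a rough sketch with boto3 could look like this (the layer and function names are placeholders):

import boto3

lam = boto3.client("lambda")

# Publish the zipped "python/" folder as a new layer version.
with open("my-deps-layer.zip", "rb") as f:
    layer = lam.publish_layer_version(
        LayerName="my-pandas-numpy-deps",      # placeholder name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.6"],
    )

# Attach the new layer version to the function.
lam.update_function_configuration(
    FunctionName="my-function",                # placeholder name
    Layers=[layer["LayerVersionArn"]],
)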
Split the application into two parts. The first part is the Lambda function, which includes only your application code. The second part is a Lambda layer. The Lambda layer will include only the dependencies and be uploaded once.
A Lambda layer can be uploaded and attached to the Lambda function. When your function is invoked, AWS combines the Lambda function with the Lambda layer and then executes the entire package.
When updating your code, you only need to update the Lambda function. Since that package is much smaller, you can edit it using the web editor, or you can zip it and upload it directly to Lambda using the CLI tools.
Example: aws lambda update-function-code --function-name Yourfunctionname --zip-file fileb://Lambda_package.zip
Here are video instructions and examples on creating a Lambda layer for dependencies. It demonstrates using the pymysql library, but you can install any of your libraries there.
https://geektopia.tech/post.php?blogpost=Create_Lambda_Layer_Python
I'm fairly new to both Python and AWS, so I'm trying to get some advice on how to best approach this problem.
I have a Python script that I run locally and it targets a production AWS environment. The script will show me certain errors. I have a read-only account to be able to run this script.
I'd like to be able to automate this so it runs the script maybe hourly and sends an email with the output.
After some research, I thought maybe a Lambda function would work. However, I need to be able to run the script from an AWS environment separate from the one I'm targeting. The reason is that I am not able (and don't want) to add or change anything in the production environment. However, I do have access to a separate environment.
Is Lambda even the best way? If not, what is the most efficient way to achieve this?
To run the job hourly, you can create a CloudWatch Events Rule with a schedule (cron expression) and add the Lambda function as the target.
This Lambda function can execute the Python script in question.
If, from the Python script, you are invoking AWS API actions on resources in your production account, you will need to allow cross-account access. You can find more details on that here: Cross account role for an AWS Lambda function
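As a rough sketch of how that could fit together inside the Lambda handler, assuming a read-only role in the production account that your separate account is allowed to assume, and SES for the email (the role ARN, the email addresses, and run_checks are placeholders):

import boto3

PROD_ROLE_ARN = "arn:aws:iam::111111111111:role/read-only-checker"  # placeholder

def handler(event, context):
    # Assume the read-only role in the production account.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=PROD_ROLE_ARN,
        RoleSessionName="hourly-check",
    )["Credentials"]

    prod_session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Run the existing checks against production with this session,
    # e.g. prod_session.client("ec2"), prod_session.client("s3"), ...
    report = run_checks(prod_session)  # placeholder for your script's logic

    # Email the output via SES from the separate account.
    ses = boto3.client("ses")
    ses.send_email(
        Source="reports@example.com",                     # placeholder sender
        Destination={"ToAddresses": ["me@example.com"]},  # placeholder recipient
        Message={
            "Subject": {"Data": "Hourly production check"},
            "Body": {"Text": {"Data": report}},
        },
    )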