I would like to know if there's a Lambda function or Python script I can run that would turn on EBS encryption for all AWS regions, instead of me having to enable it manually.
You could certainly write a Python script, run either locally or in an AWS Lambda environment, that loops through all the AWS regions and calls the boto3 EC2 client's enable_ebs_encryption_by_default() method in each region.
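A minimal sketch of that loop, assuming credentials with the necessary EC2 permissions are already configured and using describe_regions() to discover the regions enabled for the account:

import boto3

# Discover all regions enabled for this account
ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = boto3.client("ec2", region_name=region)
    # Turn on default EBS encryption for this region
    client.enable_ebs_encryption_by_default()
    print(f"Enabled EBS encryption by default in {region}")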
I'm trying to run a Python script file from the AWS CLI. Does anyone have the syntax for that, please? I've tried a few variations without success:
aws ssm send-command --document-name "AWS-RunShellScript" --parameters commands=["/Documents/aws_instances_summary.py"]
I'm not looking to connect to a particular EC2 instance as the script gathers information about all instances
aws ssm send-command runs the command on an EC2 instance, not on your local computer.
From your comments, it looks like you are actually trying to determine how to configure the AWS SDK for Python (Boto3) with AWS API credentials, so you can run the script from your local computer and get information about the AWS account.
You would not use the AWS CLI tool at all for this purpose. Instead you would simply run the Python script directly, having configured the appropriate environment variables, or ~/.aws/credentials file, on your local computer with the API credentials. Please see the official documentation for configuring AWS API credentials for Boto3.
A minimal example would look something like this:
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
python aws_instances_summary.py
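The script itself is not shown in the question, but assuming it uses Boto3's EC2 client, a minimal sketch of what aws_instances_summary.py could contain looks like this; Boto3 will pick up the credentials configured above automatically:

import boto3

# Boto3 reads credentials from the environment or ~/.aws/credentials automatically
ec2 = boto3.client("ec2", region_name="us-east-1")

# Summarise every instance in the region
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"], instance["State"]["Name"])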
I have three instances in AWS. I have to start each instance by logging into the account and then starting it manually. I want to start them using Python by just running a script, so that I don't need to log in to the AWS account and start them manually. Here are my instance types.
Is there any way I could do it? I am new to AWS so not finding a way to do it.
The easiest method would be to use the AWS Command-Line Interface (CLI):
aws ec2 start-instances --instance-ids i-11111 i-2222 i-3333
If you have not previously used the AWS CLI, you will first need to run aws configure and provide your IAM User credentials (Access Key + Secret Key).
You can also use a Python script to do this, using the boto3 SDK and the start_instances() command.
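A minimal sketch of such a script, where the instance IDs and region are placeholders to replace with your own:

import boto3

# Placeholder instance IDs -- replace with your own
INSTANCE_IDS = ["i-11111", "i-2222", "i-3333"]

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start all three instances in one call
response = ec2.start_instances(InstanceIds=INSTANCE_IDS)
for item in response["StartingInstances"]:
    print(item["InstanceId"], item["CurrentState"]["Name"])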
I have code on AWS EC2. Right now, it accepts input and output files from S3. It's an inefficient process: I have to upload the input file to S3, copy it from S3 to EC2, run the program, copy the output files from EC2 to S3, then download them locally.
Is there a way to run the code on ec2 and accept a local file as input and then have the output saved on my local machine?
It appears that your scenario is:
Some software on an Amazon EC2 instance is used to process data on the local disk
You are manually transferring that data to/from the instance via Amazon S3
An Amazon EC2 instance is just like any other computer. It runs the same operating system and the same software as you would on a server in your company. However, it does benefit from being in the cloud in that it has easy access to other services (such as Amazon S3) and resources can be turned off to save expense.
Optimize current process
If sticking with the current process, you could improve it with some simple automation (a sketch of the local steps follows the list):
Upload your data to Amazon S3 via an AWS Command-Line Interface (CLI) command, such as: aws s3 cp file.txt s3://my-bucket/input/
Execute a script on the EC2 instance that will:
Download the file, eg: aws s3 cp s3://my-bucket/input/file.txt .
Process the file
Copy the results to S3, eg: aws s3 cp file.txt s3://my-bucket/output/
Download the results to your own computer, eg: aws s3 cp s3://my-bucket/output/file.txt .
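For example, the local upload and download steps could be scripted with Boto3; the bucket name and file paths below are placeholders:

import boto3

s3 = boto3.client("s3")

# Placeholder bucket and file names -- adjust to your own
BUCKET = "my-bucket"

# Upload the input file
s3.upload_file("file.txt", BUCKET, "input/file.txt")

# ... wait for the EC2-side script to process the file ...

# Download the results
s3.download_file(BUCKET, "output/file.txt", "results.txt")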
Use scp to copy files
Assuming that you are connecting to a Linux instance, you could automate it via the following (a sketch follows the list):
Use scp to copy the file to the EC2 instance (which is very similar to the SSH command)
Use ssh with a [remote command](https://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html) parameter to trigger the remote process
Use scp to copy the file down once complete
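A minimal sketch of that flow, driving scp and ssh from Python with subprocess; the host name, key path and remote script path are assumptions to replace with your own:

import os
import subprocess

# Placeholder connection details -- replace with your own
HOST = "ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com"
KEY = os.path.expanduser("~/.ssh/my-key.pem")

# Copy the input file up to the instance
subprocess.run(["scp", "-i", KEY, "file.txt", f"{HOST}:/tmp/file.txt"], check=True)

# Trigger the remote processing script
subprocess.run(["ssh", "-i", KEY, HOST, "/home/ec2-user/process.sh /tmp/file.txt"], check=True)

# Copy the result back down
subprocess.run(["scp", "-i", KEY, f"{HOST}:/tmp/output.txt", "output.txt"], check=True)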
Re-architect to use AWS Lambda
If the job that runs on the data is suitable for being run as an AWS Lambda function, then the flow would be:
Upload the data to Amazon S3
This automatically triggers the Lambda function, which processes the data and stores the result
Download the result from Amazon S3
Please note that an AWS Lambda function runs for a maximum of 15 minutes and has a limit of 512MB of temporary disk space. (This can be expanded by using Amazon EFS if needed.)
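A minimal sketch of such a Lambda handler, triggered by the S3 upload event; the output bucket name and the process_data() step are hypothetical stand-ins for your real job:

import boto3

s3 = boto3.client("s3")

def process_data(path):
    # Hypothetical processing step -- replace with the real job
    out = path + ".out"
    with open(path) as src, open(out, "w") as dst:
        dst.write(src.read().upper())
    return out

def handler(event, context):
    # Each record describes an object that was just uploaded
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download to Lambda's temporary disk, process, and upload the result
        local_path = "/tmp/" + key.split("/")[-1]
        s3.download_file(bucket, key, local_path)
        result_path = process_data(local_path)
        s3.upload_file(result_path, "my-output-bucket", "output/" + key.split("/")[-1])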
Something in-between
There are other ways to upload/download data, such as running a web server on the EC2 instance and interacting via a web browser, or using AWS Systems Manager Run Command to trigger the process on the EC2 instance. Such a choice would be based on how much you are permitted to modify what is running on the instance and your technical capabilities.
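For instance, triggering the EC2-side job with Systems Manager Run Command could be sketched with Boto3 as below; the instance ID and script path are placeholders, and the instance must be running the SSM agent with an appropriate instance role:

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Run a shell command on the instance without opening an SSH session
response = ssm.send_command(
    InstanceIds=["i-1234567890abcdef0"],                      # placeholder instance ID
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["/home/ec2-user/process.sh"]},   # placeholder script path
)
print(response["Command"]["CommandId"])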
@John Rotenstein we have solved the problem of loading 60MB+ models into Lambdas by attaching AWS EFS volumes via a VPC. This also solves the problem with large libs such as TensorFlow, OpenCV, etc. Basically, Lambda layers almost become redundant and you can really sit back and relax; this saved us days if not weeks of tweaking, building and cherry-picking library components from source, allowing us to concentrate on the real problem. It beats loading from S3 every time too. The EFS approach would obviously require an EC2 instance.
I'm fairly new to both Python and AWS, so I'm trying to get some advice on how to best approach this problem.
I have a Python script that I run locally and it targets a production AWS environment. The script will show me certain errors. I have a read-only account to be able to run this script.
I'd like to be able to automate this so it runs the script maybe hourly and sends an email with the output.
After some research, I thought maybe a Lambda function would work. However, I need to be able to run the script from an AWS environment separate from the one I'm targeting, because I am not able (or willing) to add or change anything in the production environment. I do have access to a separate environment.
Is Lambda even the best way? If not, what is the most efficient way to achieve this?
To run the job hourly, you can create a CloudWatch Events Rule with a schedule (cron expression) and add the Lambda function as the target.
This Lambda function can then execute the Python script in question.
If from the Python script, you are invoking some AWS API actions on the resources of your production account, you would need to allow cross-account access. You can find more details around that here: Cross account role for an AWS Lambda function
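A sketch of the cross-account call from inside the Lambda function, assuming a read-only role (the account ID and role name below are hypothetical) already exists in the production account and trusts the account running the function:

import boto3

# Placeholder account ID and role name -- both are assumptions
ROLE_ARN = "arn:aws:iam::111111111111:role/ProductionReadOnlyRole"

def handler(event, context):
    # Assume the read-only role in the production account
    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="hourly-check")["Credentials"]

    # Use the temporary credentials to query the production account
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
        region_name="us-east-1",
    )
    instances = ec2.describe_instances()
    # ... run the existing checks against the response and email the results ...
    return {"reservations": len(instances["Reservations"])}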
I am looking to launch an AWS instance by deploying a script. However, I do not fully understand what this means. What should be in the script in order to launch it and how do I approach this in order to meet the following requirements?
User specifies AWS credentials in a separate key file;
User invokes the termination script and passes the instance ID from the command line;
Termination script shuts down the AWS instance;
Upon completion, the termination script returns a message indicating whether the termination process completed successfully.
I would appreciate some help in understanding what exactly a deployment script is and what language I should write it in. I have been coding in Python thus far and have created a script that creates an instance, but I am not sure how this is different from deploying an instance.
The expressions "create an instance" and "deploy an instance" can mean the same thing or different things, depending on the engineer's viewpoint.
Basically, creating an EC2 instance means launching an EC2 instance in the AWS sense. Deploying an EC2 instance may include additional configuration details such as patching the OS, installing software and applications, etc. It is up to you to decide which is which and how each should be done.
When deploying an EC2 instance, I prefer to configure a machine exactly the way I want it, with OS patches, software and my applications. Then I create an AMI. When I then launch a new EC2 instance, I use my hand-crafted AMI, and the new EC2 instance is exactly what I want. No long deployment phase.
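A sketch of launching an instance from such an AMI with Boto3; the AMI ID, instance type, key pair name and region below are all placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance from a pre-built AMI -- all values below are placeholders
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])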
A best practice when writing scripts: do not store your Amazon credentials in your scripts, source code, random files, etc. Install the AWS CLI (Command Line Interface) tool and then configure the CLI with your credentials. Your credentials are then stored in a well-defined location, with the added benefit that the Amazon SDKs, scripts, etc. will know how to find the credentials and will automatically load and use them.
The easiest way of writing scripts to manage AWS services is to use the AWS CLI. Just about anything that you can do in the Amazon Management Console, you can do with the CLI. The CLI works on Windows, Linux and Mac OS.
AWS Command Line Interface
Here is a CLI example that will terminate an EC2 instance. Replace the ID with your own instance ID:
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
Writing your scripts in Python is another good idea. Managing AWS services with Python is very easy; there are lots of examples available on the Internet, and Python makes it quick and easy to develop AWS apps. Use the Boto3 library, not the older Boto library. I use Python 3.x for all new development, but be aware that a lot of existing AWS work on the Internet runs under Python 2.x.
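Tying this back to the stated requirements, here is a sketch of a Python termination script that takes the instance ID on the command line and reports whether termination succeeded; the script name and region are assumptions:

import sys
import boto3

def main():
    if len(sys.argv) != 2:
        sys.exit("Usage: python terminate_instance.py <instance-id>")
    instance_id = sys.argv[1]

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
    try:
        ec2.terminate_instances(InstanceIds=[instance_id])
        # Optionally wait until the instance is fully terminated
        ec2.get_waiter("instance_terminated").wait(InstanceIds=[instance_id])
        print(f"Instance {instance_id} terminated successfully.")
    except Exception as error:
        print(f"Failed to terminate {instance_id}: {error}")
        sys.exit(1)

if __name__ == "__main__":
    main()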
CLI EC2 Commands