AWS SES - Connect to boto SES without exposing access keys in code - python

I am using python to send emails via an AWS Simple Email Service.
In an attempt to have the best security possible, I would like to
make a boto SES connection without exposing my access keys inside the code.
Right now I am establishing a connection like this:
import boto.ses

ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id='<ACCESS_KEY>',
    aws_secret_access_key='<SECRET_ACCESS_KEY>'
)
Is there a way to do this without exposing my access keys inside the script?

The simplest solution is to use environment variables, which you can retrieve in your Python code with os.environ:
export AWS_ACCESS_KEY_ID=<YOUR REAL ACCESS KEY>
export AWS_SECRET_ACCESS_KEY=<YOUR REAL SECRET KEY>
And in the Python code:
import boto.ses
from os import environ as os_env

ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=os_env['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os_env['AWS_SECRET_ACCESS_KEY']
)

Attach an IAM role that has SES privileges to your EC2 instance; then you do not have to pass the credentials explicitly. Your script will get the credentials automatically from the metadata server.
See: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console. Your code then becomes:
ses = boto.ses.connect_to_region('us-west-2')
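The same pattern carries over to boto3 (the legacy boto library is no longer maintained); with the role attached, the client needs no keys at all. A minimal sketch:
import boto3

# With an IAM role attached to the instance, boto3 resolves temporary
# credentials from the instance metadata service automatically.
ses = boto3.client('ses', region_name='us-west-2')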

The preferred method of authentication is to use boto3's ability to read your AWS credentials file.
Configure your AWS CLI using the aws configure command.
Then, in your script, you can create a Session that picks up those credentials:
session = boto3.Session(profile_name='default')
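From that session you can build an SES client and send mail with no keys in the script. A minimal sketch; the addresses are placeholders, and the Source must be an identity you have verified in SES:
import boto3

session = boto3.Session(profile_name='default')
ses = session.client('ses', region_name='us-west-2')

ses.send_email(
    Source='sender@example.com',  # placeholder; must be a verified SES identity
    Destination={'ToAddresses': ['recipient@example.com']},  # placeholder
    Message={
        'Subject': {'Data': 'Test'},
        'Body': {'Text': {'Data': 'Hello from SES'}},
    },
)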

Two options: either set an environment variable named ACCESS_KEY and another named SECRET_ACCESS_KEY, then in your code you would have:
import os
import boto.ses

ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=os.environ['ACCESS_KEY'],
    aws_secret_access_key=os.environ['SECRET_ACCESS_KEY']
)
or use a JSON file:
import json
import boto.ses

path_to_json = 'your/path/here.json'
with open(path_to_json, 'r') as f:
    keys = json.load(f)

ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=keys['ACCESS_KEY'],
    aws_secret_access_key=keys['SECRET_ACCESS_KEY']
)
the JSON file would contain:
{"ACCESS_KEY": "<ACCESS_KEY>", "SECRET_ACCESS_KEY": "<SECRET_ACCESS_KEY>"}

Related

Python Generating an IAM authentication token boto3.session

I am trying to use the documentation on https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Python.html. Right now I am stuck at session = boto3.Session(profile_name='RDSCreds'). What is profile_name and how do I find that in my RDS?
import os
import sys
import boto3

ENDPOINT = "mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT = "3306"
USR = "jane_doe"
REGION = "us-east-1"
os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'
# gets the credentials from .aws/credentials
session = boto3.Session(profile_name='RDSCreds')
client = session.client('rds')
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USR, Region=REGION)
session = boto3.Session(profile_name='RDSCreds')
profile_name here means the name of the profile you have configured for your AWS CLI.
Usually, running aws configure creates a default profile. But sometimes users want to manage the AWS CLI with another account's credentials, or send requests to another region, so they configure a separate profile. See the docs for creating and configuring multiple profiles.
aws configure --profile RDSCreds #enter your access keys for this profile
If you think you have already created the RDSCreds profile, you can check with: less ~/.aws/config
The documentation you mentioned for RDS with boto3 also says: "The code examples use profiles for shared credentials. For information about specifying credentials, see Credentials in the AWS SDK for Python (Boto3) documentation."
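To complete the picture, the generated token is then used as the database password. A sketch under assumptions not in the original: that the client library is PyMySQL and that the RDS CA bundle has been downloaded locally (IAM authentication requires SSL):
import pymysql  # assumed client library, not part of the original example

conn = pymysql.connect(
    host=ENDPOINT,
    user=USR,
    password=token,  # the IAM auth token acts as the password
    port=int(PORT),
    ssl={'ca': 'rds-ca-bundle.pem'},  # hypothetical local path to the RDS CA bundle
)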

AWS Glue - Python Shell Jobs Secret Manager Connectivity Issues

I am using Python Shell Jobs under AWS Glue, which have boto3 and a few other libraries built in. I am facing issues trying to access Secrets Manager to get credentials for my RDS instance running MySQL; the job keeps running forever without any error or success message, nor does it time out.
Below is the simple code that runs fine from my local machine or a Python 3.7 Lambda, but not in a Glue Python Shell job:
import boto3
import base64
from botocore.exceptions import ClientError
secret_name = "secret_name"
region_name = "eu-west-1"
session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name
)
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
print(get_secret_value_response)
It would be very helpful if someone could point out whether anything additional needs to be done in Python Shell jobs under AWS Glue in order to access Secrets Manager.
Make sure the IAM role used by the Glue job has the SecretsManagerReadWrite policy, as well as AWSGlueServiceRole and AmazonS3FullAccess.
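If you prefer least privilege to the broad SecretsManagerReadWrite managed policy, here is a sketch of attaching a minimal inline policy to the job's role; the role name and secret ARN are placeholders:
import json
import boto3

iam = boto3.client('iam')
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:secret_name-*",  # placeholder ARN
    }],
}
iam.put_role_policy(
    RoleName='my-glue-job-role',  # placeholder role name
    PolicyName='ReadOneSecret',
    PolicyDocument=json.dumps(policy),
)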
According to the documentation:
When you create a job without any VPC configuration, Glue tries to reach Secrets Manager over the internet; if the role's policies allow it and an internet route exists, it can connect.
But when a Glue job is created with a VPC configuration/connection, all requests are made from the VPC/subnet that the connection points to. In that case, make sure a Secrets Manager VPC endpoint is present in the route table of the subnet where Glue launches its resources.
https://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/vpc-endpoint-overview.html
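Once the get_secret_value call succeeds, the secret comes back as a JSON string in the SecretString field. A sketch, assuming the secret was stored with these key names (they must match however the secret was created):
import json

secret = json.loads(get_secret_value_response['SecretString'])
db_user = secret['username']  # assumed key name
db_pass = secret['password']  # assumed key name
db_host = secret['host']      # assumed key name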

AWS S3 bucket access issue with switching role

My login to the AWS console uses MFA, for which I am using Google Authenticator.
I have an S3 DEV bucket, and to access it I have to switch roles; after switching I can access the DEV bucket.
I need help achieving the same in Python with boto3.
There are many CSV files that I need to open into dataframes, and without resolving that access I cannot proceed.
I tried configuring AWS credentials & config and using them in my Python code, but it didn't help.
The AWS documentation is not clear about how to do the role switch in Python.
import boto3
import s3fs
import pandas as pd
import boto.s3.connection
access_key = 'XXXXXXXXXXX'
secret_key = 'XXXXXXXXXXXXXXXXX'
# bucketName = 'XXXXXXXXXXXXXXXXX'
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
Expected result should be to access that bucket after switching role in python code along with MFA.
In general, it is bad for security to put credentials in your program code. It is better to store them in a configuration file, which you can create with the AWS Command-Line Interface (CLI) aws configure command.
Once the credentials are stored this way, any AWS SDK (eg boto3) will automatically retrieve the credentials without having to reference them in code.
See: Configuring the AWS CLI - AWS Command Line Interface
There is an additional capability with the configuration file, that allows you to store a role that you wish to assume. This can be done by specifying a profile with the Role ARN:
# In ~/.aws/credentials:
[development]
aws_access_key_id=foo
aws_secret_access_key=bar
# In ~/.aws/config
[profile crossaccount]
role_arn=arn:aws:iam:...
source_profile=development
The source_profile points to the profile that contains credentials that will be used to make the AssumeRole() call, and role_arn specifies the Role to assume.
See: Assume Role Provider
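Since the console login here uses MFA: the assume-role profile can also carry an mfa_serial entry, in which case boto3 prompts for the token code (from Google Authenticator) before assuming the role. A sketch with placeholder ARNs:
# In ~/.aws/config
[profile crossaccount]
role_arn=arn:aws:iam::123456789012:role/dev-role
source_profile=development
mfa_serial=arn:aws:iam::123456789012:mfa/jane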
Finally, you can tell boto3 to use that particular profile for credentials:
session = boto3.Session(profile_name='crossaccount')
# Any clients created from this session will use credentials
# from the [crossaccount] section of ~/.aws/credentials.
dev_s3_client = session.client('s3')
An alternative to all of the above (which boto3 otherwise does for you) is to call assume_role() yourself, then use the temporary credentials that are returned to define a new session that you can use to connect to a service. However, the profile-based method above is a lot easier.
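For completeness, a sketch of that manual path; the ARNs are placeholders, and SerialNumber/TokenCode are only needed when the role requires MFA:
import boto3

sts = boto3.client('sts')
response = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/dev-role',  # placeholder
    RoleSessionName='dev-session',
    SerialNumber='arn:aws:iam::123456789012:mfa/jane',  # placeholder MFA device
    TokenCode='123456',  # current code from the authenticator app
)
creds = response['Credentials']
session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
dev_s3_client = session.client('s3')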

boto3 ec2 & django [duplicate]

On boto I used to specify my credentials when connecting to S3 in such a way:
import boto
from boto.s3.connection import Key, S3Connection
S3 = S3Connection( settings.AWS_SERVER_PUBLIC_KEY, settings.AWS_SERVER_SECRET_KEY )
I could then use S3 to perform my operations (in my case deleting an object from a bucket).
With boto3 all the examples I found are such:
import boto3
S3 = boto3.resource( 's3' )
S3.Object( bucket_name, key_name ).delete()
I couldn't specify my credentials, and thus all attempts fail with an InvalidAccessKeyId error.
How can I specify credentials with boto3?
You can create a session:
import boto3
session = boto3.Session(
    aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY,
    aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY,
)
Then use that session to get an S3 resource:
s3 = session.resource('s3')
You can also get a client with a new session directly, like below:
s3_client = boto3.client(
    's3',
    aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY,
    aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY,
    region_name=REGION_NAME
)
This is older, but placing this here for my reference too. boto3.resource just uses the default Session, so you can pass session details through boto3.resource.
Help on function resource in module boto3:
resource(*args, **kwargs)
    Create a resource service client by name using the default session.
    See :py:meth:`boto3.session.Session.resource`.
https://github.com/boto/boto3/blob/86392b5ca26da57ce6a776365a52d3cab8487d60/boto3/session.py#L265
You can see that it just takes the same arguments as boto3.Session:
import boto3

S3 = boto3.resource('s3', region_name='us-west-2',
                    aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY,
                    aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY)
S3.Object(bucket_name, key_name).delete()
I'd like to expand on @JustAGuy's answer. The method I prefer is to use the AWS CLI to create a config file, because with the config file the CLI or the SDK will automatically look for credentials in the ~/.aws folder. And the good thing is that the AWS CLI is written in Python.
You can get the CLI from PyPI if you don't have it already. Here are the steps to get the CLI set up from the terminal:
$> pip install awscli #can add user flag
$> aws configure
AWS Access Key ID [****************ABCD]:[enter your key here]
AWS Secret Access Key [****************xyz]:[enter your secret key here]
Default region name [us-west-2]:[enter your region here]
Default output format [None]:
After this you can access boto and any of the APIs without having to specify keys (unless you want to use different credentials).
If you rely on your .aws/credentials to store id and key for a user, it will be picked up automatically.
For instance
session = boto3.Session(profile_name='dev')
s3 = session.resource('s3')
This will pick up the dev profile (user) if your credentials file contains the following:
[dev]
aws_access_key_id = AAABBBCCCDDDEEEFFFGG
aws_secret_access_key = FooFooFoo
region=ap-southeast-2
There are numerous ways to store credentials while still using boto3.resource().
I'm using the AWS CLI method myself. It works perfectly.
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
You can set the default AWS environment variables for the secret and access keys; that way you don't need to change the default client-creation code, though it is better to pass them as parameters if you have non-default creds.
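For reference, these are the variable names boto3 looks for:
export AWS_ACCESS_KEY_ID=<YOUR REAL ACCESS KEY>
export AWS_SECRET_ACCESS_KEY=<YOUR REAL SECRET KEY>
export AWS_DEFAULT_REGION=us-west-2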

Cannot read a key from S3 with boto, but can with aws cli

Using this aws cli command (with access keys configured), I'm able to copy a key from S3 locally:
aws s3 cp s3://<bucketname>/test.txt test.txt
Using the following code in boto, I get S3ResponseError: 403 Forbidden, whether I allow boto to use configured credentials, or explicitly pass it keys.
import boto
c = boto.connect_s3()
b = c.get_bucket('<bucketname>')
k = b.get_key('test.txt')
d = k.get_contents_as_string() # exception thrown here
I've seen the other SO posts about not validating the key with validate=False etc, but none of these are my issue. I get similar results when copying the key to another location in the same bucket. Succeeds with the cli, but not with boto.
I've looked at the boto source to see if it's doing anything that requires extra permissions, but nothing stands out to me.
Does anyone have any suggestions? How does boto resolve its credentials?
Explicitly set your credentials so that they are the same ones the CLI uses, via the env variables:
echo $ACCESS_KEY
echo $SECRET_KEY
import os
import boto3

client = boto3.client(
    's3',
    aws_access_key_id=os.environ['ACCESS_KEY'],
    aws_secret_access_key=os.environ['SECRET_KEY']
)
# boto3 clients have no get_bucket/get_key; fetch the object directly
obj = client.get_object(Bucket='<bucketname>', Key='test.txt')
d = obj['Body'].read()
How boto resolves its credentials:
The mechanism by which boto3 looks for credentials is to search through a list of possible locations and stop as soon as it finds credentials. The order in which boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
http://boto3.readthedocs.io/en/latest/guide/configuration.html#guide-configuration
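To check which of these locations actually supplied your credentials, a quick sketch using the session's credential resolver:
import boto3

session = boto3.Session()
creds = session.get_credentials()
# .method names the provider that won, e.g. 'env',
# 'shared-credentials-file', or 'iam-role'
print(creds.method)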
