I am trying to write a Python script to access S3 objects. I am able to get access to S3 (using assume role with boto3; MFA is enabled on my account).
My question is: do I need to enter the MFA token every single time I run the script? Is it possible to cache the session for a selected duration and reuse it when running the script a second time?
At the moment I have created a setUpClass in my test suite:
import unittest

import boto3


class TestS3Buckets(unittest.TestCase):
    s3 = None

    @classmethod
    def setUpClass(cls):
        # get_role_credentials is a custom helper that calls
        # sts.assume_role with the role ARN, MFA serial, and token.
        credentials = get_role_credentials(
            "arn:aws:iam::123456:role/admin-assume",
            "test-env",
            "arn:aws:iam::111111111:mfa/test@test.com",
            "MFA token"
        )
        session = boto3.Session(
            aws_access_key_id=credentials["Credentials"]["AccessKeyId"],
            aws_secret_access_key=credentials["Credentials"]["SecretAccessKey"],
            aws_session_token=credentials["Credentials"]["SessionToken"]
        )
        cls.s3 = session.client('s3')
Is there a way I can skip entering the MFA token when running the script a second time?
Any help would be appreciated.
Thanks,
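For what it's worth, one pattern for this (a sketch only, not a vetted implementation) is to cache the temporary credentials returned by assume_role in a local file and only prompt for a fresh MFA token once they expire. The cache path and helper names below are made up for illustration:

```python
import json
import os
from datetime import datetime, timezone

# Hypothetical cache location; pick whatever suits your setup.
CACHE_FILE = os.path.expanduser("~/.aws/cached_session.json")


def save_credentials(credentials, path=CACHE_FILE):
    """Cache the Credentials dict from sts.assume_role (Expiration stored as ISO-8601)."""
    creds = dict(credentials)
    creds["Expiration"] = credentials["Expiration"].isoformat()
    with open(path, "w") as f:
        json.dump(creds, f)


def load_cached_credentials(path=CACHE_FILE):
    """Return cached temporary credentials, or None if missing or expired."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        creds = json.load(f)
    if datetime.fromisoformat(creds["Expiration"]) <= datetime.now(timezone.utc):
        return None  # stale; caller must re-run assume_role with a new MFA token
    return creds
```

In setUpClass you would first try load_cached_credentials() and only call get_role_credentials (prompting for a new MFA token) when it returns None. Note that the AWS CLI caches assumed-role credentials under ~/.aws/cli/cache in a similar way.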
I am coming here after searching Google, but I was not able to find an answer I could understand. Kindly help me with my query.
If I want to access a GCP resource using an impersonated service account, I know I can do it with commands such as the following, which lists buckets in a project:
gsutil -i service-account-id ls -p project-id
But how can I run Python code (e.g. test1.py) to access the resources using an impersonated service account?
Is there a package or class that I need to use? If yes, how do I use it? Please find below the scenario and code:
I have a Pub/Sub topic hosted in project-A, where the owner is xyz@gmail.com, and I have Python code hosted in project-B, where the owner is abc@gmail.com.
In project-A I have created a service account, which has the Pub/Sub admin role, and allowed abc@gmail.com to impersonate it. Now how can I access the Pub/Sub topic from my Python code in project-B without using keys?
"""Publishes multiple messages to a Pub/Sub topic with an error handler."""
import os
from collections.abc import Callable
from concurrent import futures
from google.cloud import pubsub_v1
# os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:\gcp_poc\key\my-GCP-project.JSON"
project_id = "my-project-id"
topic_id = "myTopic1"
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
publish_futures = []
def get_callback(publish_future: pubsub_v1.publisher.futures.Future, data: str) -> Callable[[pubsub_v1.publisher.futures.Future], None]:
def callback(publish_future: pubsub_v1.publisher.futures.Future) -> None:
try:
# Wait 60 seconds for the publish call to succeed.
print(f"Printing the publish future result here: {publish_future.result(timeout=60)}")
except futures.TimeoutError:
print(f"Publishing {data} timed out.")
return callback
for i in range(4):
data = str(i)
# When you publish a message, the client returns a future.
publish_future = publisher.publish(topic_path, data.encode("utf-8"))
# Non-blocking. Publish failures are handled in the callback function.
publish_future.add_done_callback(get_callback(publish_future, data))
publish_futures.append(publish_future)
# Wait for all the publish futures to resolve before exiting.
futures.wait(publish_futures, return_when=futures.ALL_COMPLETED)
print(f"Published messages with error handler to {topic_path}.")
Create a service account with the appropriate role, then create a service account key file and download it. Put the path of the key file in the GOOGLE_APPLICATION_CREDENTIALS environment variable; the client library will pick up that key file and use it for authentication/authorization. Please read the official documentation to learn more about how Application Default Credentials work.
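That said, the question specifically asks to avoid key files. As a sketch (assuming the google-auth and google-cloud-pubsub packages are installed, and that your own Application Default Credentials identity has the Service Account Token Creator role on the target service account), impersonation can also be done directly in code with google.auth.impersonated_credentials; the service account name below is a placeholder:

```python
def impersonated_publisher(target_principal: str):
    """Return a Pub/Sub publisher acting as `target_principal`, with no key file.

    Requires google-auth and google-cloud-pubsub; your ADC identity must hold
    roles/iam.serviceAccountTokenCreator on the target service account.
    """
    # Imported inside the function so the sketch can be read/imported
    # without the GCP packages installed.
    import google.auth
    from google.auth import impersonated_credentials
    from google.cloud import pubsub_v1

    source_credentials, _ = google.auth.default()
    target_credentials = impersonated_credentials.Credentials(
        source_credentials=source_credentials,
        target_principal=target_principal,
        target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    )
    return pubsub_v1.PublisherClient(credentials=target_credentials)


# Hypothetical service account in project-A:
# publisher = impersonated_publisher("my-sa@project-a.iam.gserviceaccount.com")
```

The returned publisher can then be used with topic_path and publish() exactly as in the snippet above.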
I am trying to use the documentation on https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Python.html. Right now I am stuck at session = boto3.session(profile_name='RDSCreds'). What is profile_name and how do I find that in my RDS?
import sys
import os

import boto3
ENDPOINT="mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT="3306"
USR="jane_doe"
REGION="us-east-1"
os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'
#gets the credentials from .aws/credentials
session = boto3.Session(profile_name='RDSCreds')
client = session.client('rds')
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USR, Region=REGION)
session = boto3.Session(profile_name='RDSCreds')
profile_name here means the name of the profile you have configured for your AWS CLI.
Usually, when you run aws configure, it creates a default profile. But sometimes users want to manage the AWS CLI with another account's credentials, or direct requests to another region, so they configure a separate profile. See the docs for creating and configuring multiple profiles:
aws configure --profile RDSCreds #enter your access keys for this profile
In case you think you have already created an RDSCreds profile, you can check with less ~/.aws/config.
The documentation you mentioned for RDS with boto3 also says: "The code examples use profiles for shared credentials. For information about specifying credentials, see Credentials in the AWS SDK for Python (Boto3) documentation."
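For illustration only (all values are placeholders), after running aws configure --profile RDSCreds your ~/.aws/credentials file would contain something like:

```ini
[default]
aws_access_key_id = <default access key>
aws_secret_access_key = <default secret key>

[RDSCreds]
aws_access_key_id = <RDSCreds access key>
aws_secret_access_key = <RDSCreds secret key>
```

boto3.Session(profile_name='RDSCreds') then picks up the second block.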
I am writing a python program using boto3 that grabs all of the queries made by a master account and pushes them out to all of the master account's sub accounts.
Grabbing the query IDs from the master instance is done, but I'm having trouble pushing them out to the sub accounts. With my authentication information, AWS connects to the master account by default, and I can't figure out how to get it to connect to a sub account. Generally AWS services do this by switching roles, but Athena doesn't have a built-in method for this. I could manually create different profiles, but I'm not sure how to switch between them in the middle of code execution.
Here's Amazon's code example for switching in STS, which does support assuming different roles https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html
Here's what my program looks like so far
#!/usr/bin/env python3
import boto3
dev = boto3.session.Session(profile_name='dev')
#Function for executing athena queries
client = dev.client('athena')
s3_input = 's3://dev/test/'
s3_output = 's3://dev/testOutput'
database = 'ex_athena_db'
table = 'test_data'
response = client.list_named_queries(
MaxResults=50,
WorkGroup='primary'
)
print(response)
So I have the "dev" profile, but I'm not sure how to differentiate this profile to indicate to AWS that I'd like to access one of the child accounts. Is it just the name, or do I need some other paramter? I don't think I can (or need to) generate a seperate authentication token for this
I solved this by creating a new user profile for the sub account with a new ARN
sample config
[default]
region = us-east-1
[profile ecr-dev]
role_arn = arn:aws:iam::76532435:role/AccountRole
source_profile = default
sample code
#!/usr/bin/env python3
import boto3
dev = boto3.session.Session(profile_name='name', region_name="us-east-1")
#Function for executing athena queries
client = dev.client('athena')
s3_input = 's3://test/'
s3_output = 's3://test'
database = 'ex_athena_db'
response = client.list_named_queries(
    MaxResults=50,
    WorkGroup='primary'
)
print(response)
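For completeness, roles can also be switched at runtime with STS instead of via config-file profiles, which lets one process loop over several sub accounts. A sketch (assuming boto3 is installed; the role ARN would be whatever cross-account role your sub accounts expose):

```python
def athena_client_for_role(role_arn, session_name="athena-cross-account", region="us-east-1"):
    """Assume `role_arn` in a sub account and return an Athena client bound to it."""
    # Imported here so the sketch can be read without boto3 installed.
    import boto3

    sts = boto3.client("sts")
    assumed = sts.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
    creds = assumed["Credentials"]
    session = boto3.session.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
        region_name=region,
    )
    return session.client("athena")


# Hypothetical usage, one client per sub account:
# client = athena_client_for_role("arn:aws:iam::76532435:role/AccountRole")
# client.list_named_queries(MaxResults=50, WorkGroup='primary')
```

This is the same assume-role flow the linked STS documentation describes, just wrapped so each sub account gets its own session and client.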
I am using python to send emails via an AWS Simple Email Service.
In an attempt to have the best security possible, I would like to make a boto SES connection without exposing my access keys inside the code.
Right now I am establishing a connection like this:
ses = boto.ses.connect_to_region(
'us-west-2',
aws_access_key_id='<ACCESS_KEY>',
aws_secret_access_key='<SECRET_ACCESS_KEY>'
)
Is there a way to do this without exposing my access keys inside the script?
The simplest solution is to use environment variables, which you can retrieve in your Python code with os.environ:
export AWS_ACCESS_KEY_ID=<YOUR REAL ACCESS KEY>
export AWS_SECRET_ACCESS_KEY=<YOUR REAL SECRET KEY>
And in the Python code:
from os import environ as os_env
ses = boto.ses.connect_to_region(
'us-west-2',
aws_access_key_id=os_env['AWS_ACCESS_KEY_ID'],
aws_secret_access_key=os_env['AWS_SECRET_ACCESS_KEY']
)
Attach an IAM role that has SES privileges to your EC2 instance; then you do not have to pass the credentials explicitly. Your script will get the credentials automatically from the metadata server.
See: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console. Then your code will be like:
ses = boto.ses.connect_to_region('us-west-2')
The preferred method of authentication is to use boto3's ability to read your AWS credentials file.
Configure your AWS CLI using the aws configure command.
Then, in your script you can use the Session call to get the credentials:
session = boto3.Session(profile_name='default')
Two options are to set an environment variable named ACCESS_KEY and another named SECRET_ACCESS_KEY, then in your code you would have:
import os
ses = boto.ses.connect_to_region(
'us-west-2',
aws_access_key_id=os.environ['ACCESS_KEY'],
aws_secret_access_key=os.environ['SECRET_ACCESS_KEY']
)
or use a json file:
import json
path_to_json = 'your/path/here.json'
with open(path_to_json, 'r') as f:
    keys = json.load(f)
ses = boto.ses.connect_to_region(
'us-west-2',
aws_access_key_id=keys['ACCESS_KEY'],
aws_secret_access_key=keys['SECRET_ACCESS_KEY']
)
the JSON file would contain:
{"ACCESS_KEY": "<ACCESS_KEY>", "SECRET_ACCESS_KEY": "<SECRET_ACCESS_KEY>"}
I am using tkinter to create a GUI application that returns security groups. Currently, if you want to change your credentials (e.g. if you accidentally entered the wrong ones), you have to restart the application; otherwise boto3 carries on using the old credentials.
I'm not sure why it keeps using the old credentials because I am running everything again using the currently entered credentials.
This is a snippet of the code that sets the environment variables and launches boto3. It works perfectly fine if you enter the right credentials the first time.
os.environ['AWS_ACCESS_KEY_ID'] = self.accessKey
os.environ['AWS_SECRET_ACCESS_KEY'] = self.secretKey
self.sts_client = boto3.client('sts')
self.assumedRoleObject = self.sts_client.assume_role(
    RoleArn=self.role,
    RoleSessionName="AssumeRoleSession1"
)
self.credentials = self.assumedRoleObject['Credentials']
self.ec2 = boto3.resource(
    'ec2',
    region_name=self.region,
    aws_access_key_id=self.credentials['AccessKeyId'],
    aws_secret_access_key=self.credentials['SecretAccessKey'],
    aws_session_token=self.credentials['SessionToken'],
)
The credentials variables are set using:
self.accessKey = str(self.AWS_ACCESS_KEY_ID_Form.get())
self.secretKey = str(self.AWS_SECRET_ACCESS_KEY_Form.get())
self.role = str(self.AWS_ROLE_ARN_Form.get())
self.region = str(self.AWS_REGION_Form.get())
self.instanceID = str(self.AWS_INSTANCE_ID_Form.get())
Is there a way to use different credentials in boto3 without restarting the program?
You need boto3.session.Session to overwrite the access credentials.
Just do this (reference: http://boto3.readthedocs.io/en/latest/reference/core/session.html):
import boto3
# Assign your own access keys
mysession = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')
# Or, if you want to use a different profile called foobar from .aws/credentials
mysession = boto3.session.Session(profile_name="foobar")
# Afterwards, just declare your AWS client/resource services
sqs_resource=mysession.resource("sqs")
# or client
s3_client=mysession.client("s3")
Basically, only a small change to your code is needed: you just create clients and resources from the session instead of from boto3.client/boto3.resource directly.
self.sts_client = mysession.client('sts')
Sure, just create a different boto3.session.Session object for each set of credentials:
import boto3
s1 = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')
s2 = boto3.session.Session(aws_access_key_id='foo2', aws_secret_access_key='bar2')
You can also leverage the set_credentials method to keep one session and change credentials on the fly:
>>> import botocore
>>> session = botocore.session.Session()
>>> session.set_credentials('foo', 'bar')
>>> client = session.create_client('s3')
>>> client._request_signer._credentials.access_key
u'foo'
>>> session.set_credentials('foo1', 'bar')
>>> client = session.create_client('s3')
>>> client._request_signer._credentials.access_key
u'foo1'
The answers given by @mootmoot and @Vor clearly state the way of dealing with multiple credentials using a session.
@Vor's answer:
import boto3
s1 = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')
s2 = boto3.session.Session(aws_access_key_id='foo2', aws_secret_access_key='bar2')
But some of you might be curious about
why the boto3 client or resource behaves that way in the first place.
Let's clear up a few points about Session and Client, as they'll lead us to the answer to that question.
Session
A 'Session' stores configuration state and allows you to create service clients and resources
Client
if the credentials are not passed explicitly as arguments to the boto3.client method, then the credentials configured for the session will automatically be used. You only need to provide credentials as arguments if you want to override the credentials used for this specific client
Now let's get to the code and see what actually happens when you call boto3.client()
def client(*args, **kwargs):
    return _get_default_session().client(*args, **kwargs)

def _get_default_session():
    if DEFAULT_SESSION is None:
        setup_default_session()
    return DEFAULT_SESSION

def setup_default_session(**kwargs):
    global DEFAULT_SESSION
    DEFAULT_SESSION = Session(**kwargs)
Learnings from the above
The function boto3.client() is really just a proxy for the boto3.Session.client() method
Once you use the client, the DEFAULT_SESSION is set up, and each subsequent client creation keeps using that DEFAULT_SESSION
The credentials configured for the DEFAULT_SESSION are used if the credentials are not explicitly passed as arguments while creating the boto3 client.
Answer
The first call to boto3.client() sets up the DEFAULT_SESSION and configures the session with oldCredsAccessKey and oldCredsSecretKey, the values already set for the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively.
So even if you set new credential values in the environment, i.e. do this:
os.environ['AWS_ACCESS_KEY_ID'] = newCredsAccessKey
os.environ['AWS_SECRET_ACCESS_KEY'] = newCredsSecretKey
subsequent boto3.client() calls will still pick up the old credentials configured for the DEFAULT_SESSION.
NOTE
A boto3.client() call in this whole answer means a call with no arguments passed to the client method.
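A sketch of the practical consequence (assuming boto3 is installed): after changing the environment variables, build a fresh Session yourself rather than calling bare boto3.client(), so the cached DEFAULT_SESSION and its stale credentials are bypassed. The function name below is made up for illustration:

```python
import os


def fresh_s3_client(access_key, secret_key):
    """Build an S3 client from a brand-new Session so the module-level
    DEFAULT_SESSION (holding the old credentials) is never consulted."""
    # Imported here so the sketch can be read without boto3 installed.
    import boto3

    os.environ["AWS_ACCESS_KEY_ID"] = access_key
    os.environ["AWS_SECRET_ACCESS_KEY"] = secret_key
    # A new Session re-reads the environment; bare boto3.client() would
    # keep reusing DEFAULT_SESSION and its old credentials.
    session = boto3.session.Session()
    return session.client("s3")
```

This is exactly why the session-based answers above fix the tkinter credential-switching problem.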
References
https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3.html#client
https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3/session.html#Session
https://ben11kehoe.medium.com/boto3-sessions-and-why-you-should-use-them-9b094eb5ca8e