I am using tkinter to create a GUI application that returns the security groups. Currently, if you want to change your credentials (e.g. if you accidentally entered the wrong ones), you have to restart the application; otherwise boto3 carries on using the old credentials.
I'm not sure why it keeps using the old credentials, because I re-run everything with the currently entered credentials.
This is a snippet of the code that sets the environment variables and launches boto3. It works perfectly fine if you enter the right credentials the first time.
os.environ['AWS_ACCESS_KEY_ID'] = self.accessKey
os.environ['AWS_SECRET_ACCESS_KEY'] = self.secretKey
self.sts_client = boto3.client('sts')
self.assumedRoleObject = self.sts_client.assume_role(
    RoleArn=self.role,
    RoleSessionName="AssumeRoleSession1"
)
self.credentials = self.assumedRoleObject['Credentials']
self.ec2 = boto3.resource(
    'ec2',
    region_name=self.region,
    aws_access_key_id=self.credentials['AccessKeyId'],
    aws_secret_access_key=self.credentials['SecretAccessKey'],
    aws_session_token=self.credentials['SessionToken'],
)
The credentials variables are set using:
self.accessKey = str(self.AWS_ACCESS_KEY_ID_Form.get())
self.secretKey = str(self.AWS_SECRET_ACCESS_KEY_Form.get())
self.role = str(self.AWS_ROLE_ARN_Form.get())
self.region = str(self.AWS_REGION_Form.get())
self.instanceID = str(self.AWS_INSTANCE_ID_Form.get())
Is there a way to use different credentials in boto3 without restarting the program?
You need boto3.session.Session to override the access credentials.
Just do this:
Reference: http://boto3.readthedocs.io/en/latest/reference/core/session.html
import boto3

# Assign your own access keys
mysession = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')

# If you want to use a different profile, e.g. one called foobar inside .aws/credentials
mysession = boto3.session.Session(profile_name="foobar")

# Afterwards, just declare your AWS client/resource services from the session
sqs_resource = mysession.resource("sqs")
# or client
s3_client = mysession.client("s3")
Basically, it's a little change to your code: you just create clients and resources from the session instead of calling boto3.client/boto3.resource directly.
self.sts_client = mysession.client('sts')
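Applied to the question's code, a minimal sketch might look like the following (the connect() method name is hypothetical; the attribute names are taken from the question, and error handling is omitted):

def connect(self):
    # Build a brand-new session from whatever is currently in the form fields,
    # so stale credentials from an earlier attempt are never reused
    mysession = boto3.session.Session(
        aws_access_key_id=self.accessKey,
        aws_secret_access_key=self.secretKey,
    )
    self.sts_client = mysession.client('sts')
    self.assumedRoleObject = self.sts_client.assume_role(
        RoleArn=self.role,
        RoleSessionName="AssumeRoleSession1"
    )
    self.credentials = self.assumedRoleObject['Credentials']
    self.ec2 = mysession.resource(
        'ec2',
        region_name=self.region,
        aws_access_key_id=self.credentials['AccessKeyId'],
        aws_secret_access_key=self.credentials['SecretAccessKey'],
        aws_session_token=self.credentials['SessionToken'],
    )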
Sure, just create a different boto3.session.Session object for each set of credentials:
import boto3
s1 = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')
s2 = boto3.session.Session(aws_access_key_id='foo2', aws_secret_access_key='bar2')
You can also leverage the set_credentials method to keep one session and change credentials on the fly:
>>> import botocore
>>> session = botocore.session.Session()
>>> session.set_credentials('foo', 'bar')
>>> client = session.create_client('s3')
>>> client._request_signer._credentials.access_key
u'foo'
>>> session.set_credentials('foo1', 'bar')
>>> client = session.create_client('s3')
>>> client._request_signer._credentials.access_key
u'foo1'
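(Note that _request_signer and _credentials are private botocore attributes; they're poked at here only to demonstrate which access key the client will sign with, not something to rely on in production code.)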
The answers given by @mootmoot and @Vor clearly state the way of dealing with multiple credentials using a session.
@Vor's answer:
import boto3
s1 = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')
s2 = boto3.session.Session(aws_access_key_id='foo2', aws_secret_access_key='bar2')
But some of you may be curious:
why does the boto3 client or resource behave in that manner in the first place?
Let's clear up a few points about Session and Client, as they'll actually lead us to the answer to the aforementioned question.
Session
A 'Session' stores configuration state and allows you to create service clients and resources
Client
if the credentials are not passed explicitly as arguments to the boto3.client method, then the credentials configured for the session will automatically be used. You only need to provide credentials as arguments if you want to override the credentials used for this specific client
Now let's get to the code and see what actually happens when you call boto3.client()
def client(*args, **kwargs):
    return _get_default_session().client(*args, **kwargs)

def _get_default_session():
    if DEFAULT_SESSION is None:
        setup_default_session()
    return DEFAULT_SESSION

def setup_default_session(**kwargs):
    global DEFAULT_SESSION
    DEFAULT_SESSION = Session(**kwargs)
Learnings from the above
The function boto3.client() is really just a proxy for the boto3.Session.client() method.
Once you create a client, the DEFAULT_SESSION is set up, and every subsequent client creation keeps reusing that DEFAULT_SESSION.
The credentials configured for the DEFAULT_SESSION are used if the credentials are not explicitly passed as arguments while creating the boto3 client.
Answer
The first call to boto3.client() sets up the DEFAULT_SESSION and configures the session with oldCredsAccessKey and oldCredsSecretKey, the values already set for the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively.
So even if you set new values of credentials in the environment, i.e. do this
os.environ['AWS_ACCESS_KEY_ID'] = newCredsAccessKey
os.environ['AWS_SECRET_ACCESS_KEY'] = newCredsSecretKey
the subsequent boto3.client() calls still pick up the old credentials configured for the DEFAULT_SESSION.
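A practical consequence: boto3 also exposes setup_default_session() as a public function, so one way to force later boto3.client() calls to re-read the environment is to rebuild the default session after updating it. A minimal sketch, reusing the newCreds* placeholders from above:

import os
import boto3

os.environ['AWS_ACCESS_KEY_ID'] = newCredsAccessKey
os.environ['AWS_SECRET_ACCESS_KEY'] = newCredsSecretKey

# Replace the cached DEFAULT_SESSION so credential resolution runs again
boto3.setup_default_session()

sts = boto3.client('sts')  # now signed with the new credentials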
NOTE
Throughout this answer, a boto3.client() call means a call with no arguments passed to the client method.
References
https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3.html#client
https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3/session.html#Session
https://ben11kehoe.medium.com/boto3-sessions-and-why-you-should-use-them-9b094eb5ca8e
Related
I am trying to access my S3 bucket daily using Python, but my session expires every so often. Someone on this site advised I use an "Assumed Role" STS script to re-establish the connection. I found a script that uses it, but I am getting the following error. FYI, I have my credentials file in the .aws folder.
"botocore.exceptions.NoCredentialsError: Unable to locate credentials"
Below is my code:
import boto3
# The calls to AWS STS AssumeRole must be signed with the access key ID
# and secret access key of an existing IAM user or by using existing temporary
# credentials such as those from another role. (You cannot call AssumeRole
# with the access key for the root account.) The credentials can be in
# environment variables or in a configuration file and will be discovered
# automatically by the boto3.client() function. For more information, see the
# Python SDK documentation:
# http://boto3.readthedocs.io/en/latest/reference/services/sts.html#client
# create an STS client object that represents a live connection to the
# STS service
sts_client = boto3.client('sts')
# Call the assume_role method of the STSConnection object and pass the role
# ARN and a role session name.
assumed_role_object = sts_client.assume_role(
    RoleArn="ARNGOESHERE",
    RoleSessionName="AssumeRoleSession1"
)

# From the response that contains the assumed role, get the temporary
# credentials that can be used to make subsequent API calls
credentials = assumed_role_object['Credentials']

# Use the temporary credentials that AssumeRole returns to make a
# connection to Amazon S3
s3_resource = boto3.resource(
    's3',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken'],
)

# Use the Amazon S3 resource object that is now configured with the
# credentials to access your S3 buckets.
for bucket in s3_resource.buckets.all():
    print(bucket.name)
You have 2 options here:
Create a separate user with programmatic access. This would be permanent and the credentials would not expire. Usually this is not allowed for developers in organizations due to security concerns. See the steps here:
https://aws.amazon.com/premiumsupport/knowledge-center/create-access-key/
If you are not allowed to have a permanent access token through the above method, you can get the token expiration duration increased from the default (1 hour) up to a maximum of 12 hours, so you don't have to re-run the PowerShell script every hour or so. For that, you would need to modify the 'saml2aws' PowerShell script you run to get credentials.
Add the 'DurationSeconds' argument to the assume_role_with_saml() method. See: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role_with_saml
response = client.assume_role_with_saml(
    RoleArn='string',
    PrincipalArn='string',
    SAMLAssertion='string',
    PolicyArns=[
        {
            'arn': 'string'
        },
    ],
    Policy='string',
    DurationSeconds=123
)
The max duration you can enter here is capped by the maximum session duration setting for your role. You can view it in the AWS console under IAM > Roles > {RoleName} > Summary > Maximum session duration.
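(As far as I know, if you request a DurationSeconds higher than the role's maximum session duration, STS rejects the call with a validation error rather than silently capping it, so it's worth checking that setting first.)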
I am trying to authenticate to a Google API without a config file; I can't even find proof that it is possible, other than old code in my service that hasn't been used in years.
My class receives this dict:
self._connection_data = {
    "type": args,
    "project_id": args,
    "private_key_id": args,
    "private_key": args,
    "client_email": args,
    "client_id": args,
    "auth_uri": args,
    "token_uri": args,
    "auth_provider_x509_cert_url": args,
    "client_x509_cert_url": args
}
and the code is:
from google.cloud import bigquery
from google.oauth2 import service_account

def _get_client(self):
    credentials = service_account.Credentials.from_service_account_info(self._connection_data)
    return bigquery.Client(project=self._project_id, credentials=credentials, location='US')
I receive the error:
{"error":"invalid_grant","error_description":"Invalid grant: account not found"}
However, everything works when I use a helper file for the configs called config.json and an OS environment variable:
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "config.json"
self.job_config = bigquery.QueryJobConfig()
self.job_config.use_legacy_sql = True
return bigquery.Client()
I don't want a solution with the env variable; I would like to use the Credentials class without a file path.
Well, in the end I managed to make my code work without any need for the global variable or a file path. I had a problem with my configured credentials...
This is the code:
# init class here
self.job_config = bigquery.QueryJobConfig()
self.job_config.use_legacy_sql = True

def _get_client(self):
    credentials = service_account.Credentials.from_service_account_info(self._connection_data)
    return bigquery.Client(project=self._project_id, credentials=credentials)

# function to get columns
query_job = self._get_client().query(query, job_config=self.job_config)
results = query_job.result(timeout=self._current_timeout)
The only part I was missing was to send the QueryJobConfig class with legacy SQL set to true in all of my queries.
Unfortunately, there are no other methods to authenticate your API request without either using an environment variable or specifying the key file path. There are some ways of authenticating your request with GCP using a key JSON file. Before anything, you should set up your service account and download the JSON file with your key, as described here.
Then, the first method is using default credentials, according to the documentation:
If you don't specify credentials when constructing the client, the
client library will look for credentials in the environment.
That means, you just need to set your environment variable. Then, the Google Client Library will determine the credentials implicitly. In addition, it also allows you to provide credentials separately from your application, which eases the process of making changes in the code. You can set the environment variable as follows:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"
After setting it, you would be able to run the following code:
def implicit():
    from google.cloud import storage

    # If you don't specify credentials when constructing the client, the
    # client library will look for credentials in the environment.
    storage_client = storage.Client()

    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    print(buckets)
Secondly, you can specify the file path within your code using the google.oauth2.service_account module. It is stated in the documentation that:
An OAuth 2.0 client identifies the application and lets end users
authenticate your application with Google. It allows your application
to access Google Cloud APIs on behalf of the end user.
In order to use the module, you can use either of the following snippets:
# It creates credentials using your .json file and the Credentials.from_service_account_file constructor
credentials = service_account.Credentials.from_service_account_file(
    'service-account.json')
Or
# If you set the environment variable, you can also use
# info = json.loads(os.environ['GOOGLE_APPLICATION_CREDENTIALS_JSON_STRING'])
# Otherwise, you specify the path inside json.load() as below
service_account_info = json.load(open('service_account.json'))
credentials = service_account.Credentials.from_service_account_info(
    service_account_info)
Finally, I encourage you to check the Authentication strategies in the documentation.
I'm trying to get the CPU utilization for EC2 instances for an account. My code is like the following.
def GetRegions():
    # returns an array of regions
    ...

def getEC2InstanceID(RegionName):
    cloudwatch = boto3.client('cloudwatch', region_name=RegionName)
    response = cloudwatch.get_metric_statistics(
        .
        .
        .)
    # returns an array of EC2 instance IDs
    ...

def EC2_Average_Utilization(InstanceID, RegionName):
    # returns the average CPU usage
    ...

def main():
    regions = GetRegions()
    for i in range(len(regions)):
        print(regions[i])
        instance_id = getEC2InstanceID(regions[i])
        print(instance_id)  # prints all the instances if there are any
        if type(instance_id) == list:
            for j in range(len(instance_id)):
                print(instance_id[j])
                print("For InstanceID " + instance_id[j] + ":")
                EC2_Average_Utilization(instance_id[j], regions[i])
This code executes perfectly for all the regions under one account. If I want to do the same thing for multiple AWS accounts, what is the procedure?
N.B. I've seen suggestions to configure .aws/config by creating multiple profiles, one per account, in .aws/credentials, but since I'm generating the regions in the code, I don't want to specify them.
You will need to use a boto3 Session object, the Security Token Service (STS), and a call to assume_role for each account/region combo. The effect is the same as a named profile: you need a role in each account with adequate permissions to call the API methods (EC2, CloudWatch, etc.). Also, the target roles need a trust relationship back to the original account's credentials.
sts = boto3.client('sts')
# this is called with your default credentials. Target roles need to trust this identity.
creds = sts.assume_role(RoleArn='...', RoleSessionName='...')

# set up a session w/ the temporary credentials
session = boto3.Session(
    aws_access_key_id=creds['Credentials']['AccessKeyId'],
    aws_secret_access_key=creds['Credentials']['SecretAccessKey'],
    aws_session_token=creds['Credentials']['SessionToken'],
    region_name='...')

# all subsequent clients/resources should be instantiated from the session object
cloudwatch = session.client('cloudwatch')
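To cover several accounts, a minimal sketch (the role ARNs and the session_for() helper below are hypothetical; GetRegions() is the question's own function) would assume the role once per account/region pair and build a session from each set of temporary credentials:

ACCOUNT_ROLE_ARNS = [  # hypothetical: one cross-account role per target account
    'arn:aws:iam::111111111111:role/MonitoringRole',
    'arn:aws:iam::222222222222:role/MonitoringRole',
]

def session_for(role_arn, region):
    creds = sts.assume_role(RoleArn=role_arn,
                            RoleSessionName='cross-account')['Credentials']
    return boto3.Session(
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
        region_name=region)

for role_arn in ACCOUNT_ROLE_ARNS:
    for region in GetRegions():
        cloudwatch = session_for(role_arn, region).client('cloudwatch')
        # ...then reuse the existing per-region logic from main()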
Hope this helps.
See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role
I'm using this code to assume an Amazon Web Services role via SAML authentication:
client = boto3.client('sts', region_name = region)
token = client.assume_role_with_saml(role, principal, saml)
As documented here, the assume_role_with_saml call does not require the use of AWS security credentials; all the auth info is contained in the parameters to the call itself. Nonetheless, if I have auth-related AWS_ environment variables set, the call to boto3.client() immediately tries to use them to authenticate. Usually, I have AWS_PROFILE set, and the reason I'm running this code is that the named profile's security token has expired, so the call fails, and I have to unset AWS_PROFILE and try again.
I can of course manually go through os.environ looking for and deleting relevant variables before the call to boto3.client(), but I'm wondering if there's any cleaner way to say "Hey, Boto, just give me an STS client object without trying to authenticate anything, OK?"
From this response on GitHub, here's how to set up a client that won't attempt to sign outgoing requests with IAM credentials:
import boto3
from botocore import UNSIGNED
from botocore.config import Config
client = boto3.client('sts', region_name=region, config=Config(signature_version=UNSIGNED))
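With that config, the subsequent token = client.assume_role_with_saml(...) call goes out unsigned, so it should succeed even when no valid credentials are resolvable from the environment, which is exactly what this call permits.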
By examining the boto3 and botocore code, I worked out a solution, but I'm not sure it's an improvement over unsetting the environment variables:
import boto3, botocore
bs = botocore.session.get_session({ 'profile': ( None, ['', ''], None, None ) })
bs.set_credentials('','','')
s = boto3.session.Session(botocore_session = bs)
client = s.client('sts', region_name = region)
Accepting my own answer for now, but if anyone has a better idea, I'm all ears.
I'm currently making a GUI YouTube video uploader for my community, but since I don't want all of my users to get my client_id and client_secret, I encoded them. The problem is that whenever the program runs (it's not run from the command line with parameters; it gets that information from the Tkinter GUI), it starts to authenticate users via a web link, which contains the real client_id and client_secret. I tried to use the --noauth_local_webserver parameter but without success, since nothing is run from the command line (I haven't found a way to set this parameter without the command line). As I saw in the official docs, this parameter is set to "False" by default; is there any way to change that, or is there any way to disable web authentication? This is the code I use to authenticate and start uploading a video (it's pretty much the default one from the official docs, with a few changes so it fits my needs):
def get_authenticated_service():
    makeitreal()  # this is the function which decodes the encoded client_id and client_secret
    flow = flow_from_clientsecrets(os.path.abspath(os.path.join(os.path.dirname(__file__), "client_secrets.json")),
                                   scope=YOUTUBE_UPLOAD_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)
    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run(flow, storage)
    return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                 http=credentials.authorize(httplib2.Http()))

def initialize_upload():
    makeitreal()  # decodes the encoded client_id and client_secret
    youtube = get_authenticated_service()
    # remove this json since it's no longer used and it contains the client_id and client_secret
    os.remove(os.path.join(os.path.dirname(__file__), "upload_video.py-oauth2.json"))
    tags = None
    insert_request = youtube.videos().insert(
        part="snippet,status",
        body=dict(
            snippet=dict(
                # These 3 parameters are not gathered through the command line as in the
                # default code; I changed it so they come from the Tkinter GUI instead
                title=video_title,
                description=video_desc,
                tags=video_keywords,
                categoryId="22"
            ),
            status=dict(
                privacyStatus=VALID_PRIVACY_STATUSES[0]
            )
        ),
        # chunksize=-1 means that the entire file will be uploaded in a single
        # HTTP request. (If the upload fails, it will still be retried where it
        # left off.) This is usually a best practice, but if you're using Python
        # older than 2.6 or if you're running on App Engine, you should set the
        # chunksize to something like 1024 * 1024 (1 megabyte).
        media_body=MediaFileUpload(filename, chunksize=-1, resumable=True)
    )
    makeitfake()  # re-encodes the previously decoded client_id and client_secret
    resumable_upload(insert_request)  # this function uploads the video
Thanks in advance, Amar!
You're missing some code. Update to the latest API and examples, and then it's as simple as: args.noauth_local_webserver = True
Anyway, here's some of the code if you want to try adding argparser support yourself. There's no longer a run but a run_flow; however, you can pass args as the third parameter to your existing run function.
from oauth2client.tools import argparser, run_flow
args = argparser.parse_args()
args.noauth_local_webserver = True
credentials = run_flow(flow, storage, args)
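Since nothing in your app comes from the command line, you can also parse an empty argument list, argparser.parse_args([]), so argparse never touches sys.argv, and then flip the flag manually as above.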
Alternatively, if you must make it work with your current versions, you can modify oauth2client/tools.py: search for if not flags.noauth_local_webserver and right above it add flags.noauth_local_webserver = True. However, I must point out that modifying core packages is not recommended, as your changes will be clobbered the next time you update your packages. The cleanest solution is to update to the latest versions of everything, which makes it easier to do what you want to do.