Python - Create AWS Signature with temporary security credentials

I have read the AWS documentation, but I couldn't find an example of using Temporary Security Credentials to authenticate to AWS with Python.
I would like an example of using temporary security credentials provided by the AWS Security Token Service (AWS STS) to sign a request.

There are several ways you can use STS to get temporary credentials. The two most common ones are:
get_session_token - used to get temporary credentials for an existing IAM user or account
assume_role - used to get temporary credentials when assuming an IAM role
In both cases, the call to these functions will give you temporary credentials, e.g.:
{
    "Credentials": {
        "AccessKeyId": "AddsdfsdfsdxxxxxxKJ",
        "SecretAccessKey": "TEdsfsdfSfdsfsdfsdfsdclkb/",
        "SessionToken": "FwoGZXIvYXdzEFkaDGgIUSvDdfgsdfgsdfgsMaVYgsSxO8OqRfjHc4se90WbaspOwCtdgZNgeasdfasdfasdf5wrtChz2QCTnR643exObm/zOJzXe9TUkcdODajHtxcgR8r+unzMo+7WxgQYyKGN9kfbCqv3kywk0EvOBCapusYo81fpv8S7j4JQxEwOGC9JZQL6umJ8=",
        "Expiration": "2021-02-17T11:53:31Z"
    }
}
Having these credentials, you create a new boto3 session, e.g.:
new_session = boto3.session.Session(<temp credentials>)
The new_session will allow you to create new boto3 clients or resources, e.g.:
ec2 = new_session.client('ec2')
s3r = new_session.resource('s3')
And then you can use these new clients/resources as you would normally use them.
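Putting it all together, a minimal sketch using get_session_token (assuming boto3 can already discover your long-term credentials, e.g. from ~/.aws/credentials):

import boto3

# Long-term credentials are discovered automatically (env vars, config file).
sts = boto3.client('sts')

# get_session_token() and assume_role() both return the "Credentials"
# structure shown above.
response = sts.get_session_token()
creds = response['Credentials']

# Create a new session from the temporary credentials.
new_session = boto3.session.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

ec2 = new_session.client('ec2')
s3r = new_session.resource('s3')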

Related

AWS Assume Role STS

I am trying to access my S3 bucket daily using Python, but my session expires every so often. Someone on this site advised that I use an "Assumed Role" STS script to re-establish the connection. I found a script that uses it, but I am getting the following error. FYI, I have my credentials file in the .aws folder.
"botocore.exceptions.NoCredentialsError: Unable to locate credentials"
Below is my code:
import boto3

# The calls to AWS STS AssumeRole must be signed with the access key ID
# and secret access key of an existing IAM user or by using existing temporary
# credentials such as those from another role. (You cannot call AssumeRole
# with the access key for the root account.) The credentials can be in
# environment variables or in a configuration file and will be discovered
# automatically by the boto3.client() function. For more information, see the
# Python SDK documentation:
# http://boto3.readthedocs.io/en/latest/reference/services/sts.html#client

# Create an STS client object that represents a live connection to the
# STS service.
sts_client = boto3.client('sts')

# Call the assume_role method of the STSConnection object and pass the role
# ARN and a role session name.
assumed_role_object = sts_client.assume_role(
    RoleArn="ARNGOESHERE",
    RoleSessionName="AssumeRoleSession1"
)

# From the response that contains the assumed role, get the temporary
# credentials that can be used to make subsequent API calls.
credentials = assumed_role_object['Credentials']

# Use the temporary credentials that AssumeRole returns to make a
# connection to Amazon S3.
s3_resource = boto3.resource(
    's3',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken'],
)

# Use the Amazon S3 resource object that is now configured with the
# credentials to access your S3 buckets.
for bucket in s3_resource.buckets.all():
    print(bucket.name)
You have 2 options here:
1. Create a separate user with programmatic access. This would be permanent and the credentials would not expire. Usually this is not allowed for developers in organizations because of security concerns. Refer to the steps here:
https://aws.amazon.com/premiumsupport/knowledge-center/create-access-key/
2. If you are not allowed to have a permanent access token through the above method, you can get the token expiration duration increased from the default (1 hour) to a maximum of 12 hours, to avoid re-running the PowerShell script every hour or so. For that, you would need to modify the PowerShell script 'saml2aws' you run to get credentials.
Add the 'DurationSeconds' argument to the assume_role_with_saml() method. Refer: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role_with_saml
response = client.assume_role_with_saml(
    RoleArn='string',
    PrincipalArn='string',
    SAMLAssertion='string',
    PolicyArns=[
        {
            'arn': 'string'
        },
    ],
    Policy='string',
    DurationSeconds=123
)
The maximum duration you can enter here is determined by the maximum session duration setting for your role. You can view it in the AWS console at IAM > Roles > {RoleName} > Summary > Maximum session duration.
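If you prefer to check that limit programmatically, here is a small sketch (assuming you have iam:GetRole permission; the role name is a placeholder):

import boto3

iam = boto3.client('iam')
role = iam.get_role(RoleName='MyRole')  # placeholder role name

# MaxSessionDuration is expressed in seconds (3600-43200).
print(role['Role']['MaxSessionDuration'])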

Using Secrets Manager to authenticate for Google API

I'm running a Flask app that will access BigQuery on behalf of users using a service account they upload.
To store those service account credentials, I thought the following might be a good setup:
ENV var: stores my credentials for accessing Google Secret Manager
Secret & secret version: in Google Secret Manager for each user of the application. This will access the user's own BigQuery instance on behalf of the user.
--
I'm still learning about secrets, but this seemed more appropriate than any way of storing credentials in my own database?
--
The Google function for accessing secrets is:
from google.cloud import secretmanager

def access_secret_version(secret_id, version_id=version_id):
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
    # Access the secret version.
    response = client.access_secret_version(name=name)
    # Return the decoded payload.
    return response.payload.data.decode('UTF-8')
However, this returns the JSON as a string. When I then use this for BigQuery:
credentials = access_secret_version(secret_id, version_id=version_id)
BigQuery_client = bigquery.Client(credentials=json.dumps(credentials),
                                  project=project_id)
I get the error:
File "/Users/Desktop/application_name/venv/lib/python3.8/site-packages/google/cloud/client/__init__.py", line 167, in __init__
    raise ValueError(_GOOGLE_AUTH_CREDENTIALS_HELP)
ValueError: This library only supports credentials from google-auth-library-python. See https://google-auth.readthedocs.io/en/latest/ for help on authentication with this library.
Locally I'm storing the credentials and accessing them via an env variable, but as I intend for this application to have multiple users from different organisations, I don't think that scales.
I think my question boils down to two pieces:
1. Is this a sensible method for storing and accessing credentials?
2. Can you authenticate to BigQuery using a string rather than the .json file indicated here?
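On the second piece: google-auth can construct credentials from an in-memory dict, so the secret string never needs to touch disk. A sketch, assuming the secret payload is the complete service-account JSON (this replaces the json.dumps() call, which produces a string rather than a credentials object):

import json
from google.oauth2 import service_account
from google.cloud import bigquery

# Parse the secret string into a dict and build credentials from it.
info = json.loads(access_secret_version(secret_id, version_id=version_id))
credentials = service_account.Credentials.from_service_account_info(info)

# Pass the credentials object (not a string) to the BigQuery client.
bigquery_client = bigquery.Client(credentials=credentials, project=project_id)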

AWS S3 bucket access issue with switching role

My login to the AWS console uses MFA, and for that I am using Google Authenticator.
I have an S3 DEV bucket, and to access that DEV bucket I have to switch roles; after switching, I can access the DEV bucket.
I need help achieving the same in Python with boto3.
There are many CSV files that I need to open in a dataframe, and without resolving that access, I cannot proceed.
I tried configuring AWS credentials & config and using them in my Python code, but it didn't help.
The AWS documentation is not clear about how to switch roles in Python.
import boto3
import s3fs
import pandas as pd
import boto.s3.connection

access_key = 'XXXXXXXXXXX'
secret_key = 'XXXXXXXXXXXXXXXXX'
# bucketName = 'XXXXXXXXXXXXXXXXX'

s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
The expected result is to access that bucket after switching roles in the Python code, along with MFA.
In general, it is bad for security to put credentials in your program code. It is better to store them in a configuration file. You can do this by using the AWS Command-Line Interface (CLI) aws configure command.
Once the credentials are stored this way, any AWS SDK (eg boto3) will automatically retrieve the credentials without having to reference them in code.
See: Configuring the AWS CLI - AWS Command Line Interface
There is an additional capability with the configuration file, that allows you to store a role that you wish to assume. This can be done by specifying a profile with the Role ARN:
# In ~/.aws/credentials:
[development]
aws_access_key_id=foo
aws_secret_access_key=bar

# In ~/.aws/config
[profile crossaccount]
role_arn=arn:aws:iam:...
source_profile=development
The source_profile points to the profile that contains credentials that will be used to make the AssumeRole() call, and role_arn specifies the Role to assume.
See: Assume Role Provider
Finally, you can tell boto3 to use that particular profile for credentials:
session = boto3.Session(profile_name='crossaccount')
# Any clients created from this session will use credentials
# from the [crossaccount] section of ~/.aws/credentials.
dev_s3_client = session.client('s3')
An alternative to all the above (which boto3 does for you) is to call assume_role() in your code, then use the temporary credentials that are returned to define a new session that you can use to connect to a service. However, the above method using profiles is a lot easier.
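If you do call assume_role() yourself, and since the question involves MFA, note that assume_role() also accepts the MFA device serial number and the current token code. A sketch (the ARNs and the token code below are placeholders you would supply yourself):

import boto3

sts = boto3.client('sts')

response = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/DevRole',      # placeholder role ARN
    RoleSessionName='dev-session',
    SerialNumber='arn:aws:iam::123456789012:mfa/my-user',  # placeholder MFA device ARN
    TokenCode='123456',                                    # current code from Google Authenticator
)

creds = response['Credentials']
session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

for bucket in session.resource('s3').buckets.all():
    print(bucket.name)

If you stay with the config-file approach instead, adding an mfa_serial entry to the profile achieves the same thing, and boto3 will prompt for the code.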

AWS Boto / Warrant library: SRP authentication and credentials error

I have been stuck on the following issue for quite some time now. Within Python, I want users to retrieve a token, based upon their username and password, from the AWS Cognito identity pool, making use of SRP authentication. With this token I want the users to upload data to S3.
This is part of the code I use (from the warrant library): https://github.com/capless/warrant
self.client = boto3.client('cognito-idp', region_name="us-east-1")

response = boto_client.initiate_auth(
    AuthFlow='USER_SRP_AUTH',
    AuthParameters=auth_params,
    ClientId=self.client_id
)
def get_auth_params(self):
    auth_params = {'USERNAME': self.username,
                   'SRP_A': long_to_hex(self.large_a_value)}
    if self.client_secret is not None:
        auth_params.update({
            "SECRET_HASH":
                self.get_secret_hash(self.username, self.client_id, self.client_secret)})
    return auth_params
However, I keep getting:
botocore\auth.py", line 352, in add_auth
    raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I was able to get rid of this error by adding credentials to the .aws/credentials file, but that is not in line with the purpose of this program. It seems like there is a mistake in the warrant or botocore library: it keeps attempting to use the AWS Access Key ID and AWS Secret Access Key from the credentials file, rather than using the given credentials (username and password).
Any help is appreciated.
I am on the Cognito team. InitiateAuth is an unauthenticated call, so it shouldn't require you to provide AWS credentials. The service endpoint will not validate the SigV4 signature for these calls.
That being said, some client libraries have certain peculiarities, in the sense that you need to provide some dummy credentials or the client library will throw an exception. However, you can provide anything for the credentials.
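For example, a sketch of the dummy-credentials workaround described above (the values only need to be non-empty; they are never validated for this unauthenticated call):

import boto3

client = boto3.client(
    'cognito-idp',
    region_name='us-east-1',
    aws_access_key_id='dummy',   # any non-empty value works
    aws_secret_access_key='dummy',
)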
I too ran into this, using warrant.
The problem is that the boto3 libraries are trying to sign the request to AWS, but this request is not supposed to be signed. To prevent that, create the cognito-idp client with a config that specifies no signing.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

client = boto3.client('cognito-idp', region_name='us-east-1',
                      config=Config(signature_version=UNSIGNED))
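With that config, the client can make the unauthenticated call without boto3 trying to locate or sign with credentials. A sketch, reusing auth_params and the client id from the question:

response = client.initiate_auth(
    AuthFlow='USER_SRP_AUTH',
    AuthParameters=auth_params,   # as built by get_auth_params() above
    ClientId=client_id,           # your app client id
)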
The AWS Access Key ID and AWS Secret Access Key are totally different from a username and password.
The boto3 client has to connect to the AWS service endpoint (in your case: cognito-idp.us-east-1.amazonaws.com) to execute any API. Before executing an API, the API credentials (key + secret) have to be provided to authenticate your AWS account. Without authenticating your account, you cannot call cognito-idp APIs.
There is one AWS account (key/secret), but there can be multiple users (username/password).

boto3 and connecting to custom url

I have a test environment that mimics the S3 environment, and I want to write some test scripts using boto3. How can I connect to that service?
I tried:
client = boto3.client('s3', region_name="us-east-1", endpoint_url="http://mymachine")
client = boto3.client('iam', region_name="us-east-1", endpoint_url="http://mymachine")
Both fail to work.
The service is setup to use IAM authentication.
My error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Any ideas?
Thanks
Please use the following:
import boto3

client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)
Please check this link for more ways to configure AWS credentials.
http://boto3.readthedocs.io/en/latest/guide/configuration.html
1. The boto API always looks for credentials to pass to the services it connects to; there is no way to access AWS resources with boto without an access key and secret key. If you intend to use some other method, e.g. Temporary Security Credentials, your AWS admin must set up roles etc. to allow the VM instance to connect to AWS using the AWS Security Token Service. Otherwise, you must request a restricted credential key from your AWS account admin.
2. On the other hand, if you want to mimic S3 and rapidly test uploading/downloading huge amounts of data during development, then you should set up FakeS3 (see the sketch after this list). It will accept any dummy access key. However, there are a few drawbacks to FakeS3: you can't set up and test S3 bucket policies.
3. Even if you configure your S3 bucket to allow anyone to take a file, that only applies to access through the URL; it is a file access permission, not a bucket access permission.
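As an illustration of point 2, a sketch of pointing boto3 at a local S3-compatible test endpoint (the URL and port are assumptions for a typical FakeS3 setup, and the dummy keys just satisfy boto3's credential lookup):

import boto3

client = boto3.client(
    's3',
    region_name='us-east-1',
    endpoint_url='http://localhost:4567',   # wherever FakeS3 is listening
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy',
)
print(client.list_buckets())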
