AWS Boto / Warrant library: SRP authentication and credentials error - python

I have been stuck on the following issue for quite some time now. In Python, I want users to retrieve a token from the AWS Cognito identity pool based on their username and password, using SRP authentication. With this token, I want the users to upload data to S3.
This is part of the code I use (from the warrant library): https://github.com/capless/warrant
self.client = boto3.client('cognito-idp', region_name="us-east-1")

response = self.client.initiate_auth(
    AuthFlow='USER_SRP_AUTH',
    AuthParameters=auth_params,
    ClientId=self.client_id
)
def get_auth_params(self):
    auth_params = {'USERNAME': self.username,
                   'SRP_A': long_to_hex(self.large_a_value)}
    if self.client_secret is not None:
        auth_params.update({
            "SECRET_HASH":
                self.get_secret_hash(self.username, self.client_id, self.client_secret)})
    return auth_params
However, I keep on getting:
File "botocore\auth.py", line 352, in add_auth
    raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I was able to get rid of this error by adding credentials to the .aws/credentials file, but that defeats the purpose of this program. It seems like there is a mistake in the warrant or botocore library: it keeps attempting to use the AWS Access Key ID and AWS Secret Access Key from the credentials file, rather than using the given credentials (username and password).
Any help is appreciated

I am on the Cognito team. InitiateAuth is an unauthenticated call, so it shouldn't require you to provide AWS credentials. The service endpoint will not validate the SigV4 signature for these calls.
That being said, some client libraries have certain peculiarities, in the sense that you need to provide dummy credentials or the client library will throw an exception. However, you can provide anything for the credentials.
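For example, a minimal sketch of that workaround (the key values here are arbitrary placeholders, not real credentials):

import boto3

# Any non-empty strings satisfy the client library's credential lookup;
# the endpoint does not validate them for unauthenticated calls
client = boto3.client('cognito-idp', region_name='us-east-1',
                      aws_access_key_id='dummy',
                      aws_secret_access_key='dummy')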

I too ran into this, using warrant.
The problem is that the boto3 libraries are trying to sign the request to AWS, but this request is not supposed to be signed. To prevent that, create the Cognito client with a config that specifies no signing.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Build a Cognito client whose requests are not SigV4-signed,
# so botocore never looks for credentials
client = boto3.client('cognito-idp', region_name='us-east-1',
                      config=Config(signature_version=UNSIGNED))
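With signing disabled, the USER_SRP_AUTH flow from the question should go through without a credentials file. A minimal sketch, assuming auth_params and client_id are built as in the question:

response = client.initiate_auth(
    AuthFlow='USER_SRP_AUTH',
    AuthParameters=auth_params,  # as returned by get_auth_params() above
    ClientId=client_id
)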

AWS Access Key ID and AWS Secret Access Key are totally different from username and password.
The Boto3 client has to connect to the AWS service endpoint (in your case: cognito-idp.us-east-1.amazonaws.com) to execute any API. Before executing an API, the API credentials (key + secret) have to be provided to authenticate your AWS account. Without authenticating your account, you cannot call cognito-idp APIs.
There is one AWS account (key/secret) but there can be multiple users (username/password).

Using Secrets Manager to authenticate for Google API

I'm running a flask app that will access Bigquery on behalf of users using a service account they upload.
To store those service account credentials, I thought the following might be a good setup:
ENV Var: Stores my credentials for accessing google secrets manager
Secret & secret version: in google secrets manager for each user of the application. This will access the user's own bigquery instance on behalf of the user.
I'm still learning about secrets, but this seemed more appropriate than storing credentials in my own database?
The google function for accessing secrets is:
from google.cloud import secretmanager

def access_secret_version(secret_id, version_id=version_id):
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version
    # (project_id and version_id are module-level settings here).
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
    # Access the secret version.
    response = client.access_secret_version(name=name)
    # Return the decoded payload.
    return response.payload.data.decode('UTF-8')
However, this returns the JSON as a string. When I then use this for BigQuery:

credentials = access_secret_version(secret_id, version_id=version_id)
BigQuery_client = bigquery.Client(credentials=json.dumps(credentials),
                                  project=project_id)
I get the error:
File "/Users/Desktop/application_name/venv/lib/python3.8/site-
packages/google/cloud/client/__init__.py", line 167, in __init__
raise ValueError(_GOOGLE_AUTH_CREDENTIALS_HELP)
ValueError: This library only supports credentials from google-auth-library-python.
See https://google-auth.readthedocs.io/en/latest/ for help on authentication with
this library.
Locally, I'm storing the credentials and accessing them via an env variable, but as I intend for this application to have multiple users from different organisations, I don't think that scales.
I think my question boils down to two pieces:
Is this a sensible method for storing and accessing credentials?
Can you authenticate to BigQuery using a string rather than the .json file indicated here?
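On the second point: a minimal sketch of what should work, assuming the secret payload is a full service-account JSON document. The string needs json.loads (not json.dumps), and the resulting dict has to be wrapped in a google-auth credentials object before being passed to bigquery.Client:

import json
from google.cloud import bigquery
from google.oauth2 import service_account

payload = access_secret_version(secret_id, version_id=version_id)
# Parse the JSON string into a dict, then build a google-auth credentials object
info = json.loads(payload)
credentials = service_account.Credentials.from_service_account_info(info)
bigquery_client = bigquery.Client(credentials=credentials, project=info["project_id"])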

Python Generating an IAM authentication token boto3.session

I am trying to use the documentation on https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Python.html. Right now I am stuck at session = boto3.session(profile_name='RDSCreds'). What is profile_name and how do I find that in my RDS?
import os
import sys
import boto3

ENDPOINT = "mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT = "3306"
USR = "jane_doe"
REGION = "us-east-1"

os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

# gets the credentials from .aws/credentials
session = boto3.Session(profile_name='RDSCreds')
client = session.client('rds')
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USR, Region=REGION)
session = boto3.Session(profile_name='RDSCreds')
profile_name here means the name of the profile you have configured to use for the AWS CLI.
Usually, when you run aws configure, it creates a default profile. But sometimes users want to manage the AWS CLI with another account's credentials, or manage requests for another region, so they configure a separate profile. See the docs for creating and configuring multiple profiles.
aws configure --profile RDSCreds #enter your access keys for this profile
If you think you have already created the RDSCreds profile, you can check for it with less ~/.aws/config.
The documentation you mentioned for RDS with boto3 also says: "The code examples use profiles for shared credentials. For information about specifying credentials, see Credentials in the AWS SDK for Python (Boto3) documentation."
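For completeness, a sketch of using the generated token as the database password, assuming the pymysql driver and an RDS CA bundle (the database name and CA path are placeholders):

import pymysql

# The IAM auth token replaces the password; RDS requires SSL for IAM auth
connection = pymysql.connect(
    host=ENDPOINT,
    user=USR,
    password=token,
    port=int(PORT),
    database='mydb',  # placeholder database name
    ssl={'ca': '/path/to/rds-ca-bundle.pem'}  # placeholder CA bundle path
)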

How to get authenticated identity response from AWS Cognito using boto3

I would like to use boto3 to get temporary credentials for access AWS services. The use case is this: A user in my Cognito User Pool logs in to my server and I want the server code to provide that user with temporary credentials to access other AWS services.
I have a Cognito User Pool where my users are stored. I have a Cognito Identity Pool that does NOT allow unauthorized access, only access by users from the Cognito User Pool.
So here is the code I am starting with:
import boto3
client = boto3.client('cognito-identity','us-west-2')
resp = client.get_id(AccountId='<ACCNTID>',
                     IdentityPoolId='<IDPOOLID>')
However, just running these three lines of code throws an exception:
botocore.errorfactory.NotAuthorizedException: An error occurred (NotAuthorizedException) when calling the GetId operation: Unauthenticated access is not supported for this identity pool.
Since my Cognito Identity Pool is not set up for unauthenticated access, it seems that I cannot call get_id until I somehow authenticate somewhere.
How do I solve this? What exactly do I need to do to authenticate so I can call get_id?
UPDATE: Looks like I need to pass a Logins field and data to the get_id function call, but to do that I need the login JWT token. If I am running this inside a webapp (e.g. a Django backend) where I use the AWS Cognito prepackaged login screens, then yes, I can get this from the homepage URL after redirection from a successful login. But now I am writing some test scripts that have nothing to do with a website. Is there a way to use boto or boto3 or some other Python package to log in with a username and password and get the JWT token?
Just to add to the answer from Arka Mukherjee above, to get the token I do this:
import boto3

auth_data = {'USERNAME': username, 'PASSWORD': password}
provider_client = boto3.client('cognito-idp', region_name=region)
resp = provider_client.admin_initiate_auth(
    UserPoolId=user_pool_id,
    AuthFlow='ADMIN_NO_SRP_AUTH',
    AuthParameters=auth_data,
    ClientId=client_id)
token = resp['AuthenticationResult']['IdToken']
Here I have to use the username and password of the Cognito user, client_id is the app client ID for the app client that I set up through Cognito, and user_pool_id is the user pool ID.
Note that my app client has this option checked/selected: Enable sign-in API for server-based authentication (ADMIN_NO_SRP_AUTH), and I created that app client with no secret key (apparently that is important, especially for web clients).
To pass the Cognito User Pool JWT Token, you would need to use the Logins Map in the GetId API call. You could try the following Python code out on your end, after replacing the necessary placeholders.
response = client.get_id(
    AccountId='string',
    IdentityPoolId='string',
    Logins={
        'cognito-idp.<region>.amazonaws.com/<YOUR_USER_POOL_ID>': '<JWT ID Token>'
    }
)
If you do not provide a Logins Map, Amazon Cognito treats the authentication event as Unauthenticated, and hence, you are facing this error.
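To then obtain the temporary AWS credentials the question asks about, the identity ID returned by GetId can be exchanged via GetCredentialsForIdentity. A sketch, reusing the same Logins map:

# Exchange the identity ID plus user pool token for temporary AWS credentials
creds_response = client.get_credentials_for_identity(
    IdentityId=response['IdentityId'],
    Logins={
        'cognito-idp.<region>.amazonaws.com/<YOUR_USER_POOL_ID>': '<JWT ID Token>'
    }
)
credentials = creds_response['Credentials']
# credentials contains AccessKeyId, SecretKey, SessionToken and Expiration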

Using Azure Key Vault and Active Directory to Retrieve Secrets

For a Python code base I would like to have developers accessing application secrets using Azure Key Vault, with the idea that when we deploy, the application also should be able to connect. Hence, I'm thinking Active Directory.
However, I cannot find any examples on the interweb that show this with the Python SDK. Initially, I would think to retrieve the CLI user:
from azure.common.credentials import get_azure_cli_credentials
credentials, subscription_id, tenant_id = get_azure_cli_credentials(with_tenant=True)
and then use this retrieved set of credentials to access the key vault:
from azure.keyvault import KeyVaultClient
vault_url = "https://########.vault.azure.net/"
secret_name = "########"
secret_version = "########"
client = KeyVaultClient(credentials)
secret = client.get_secret(vault_url, secret_name, secret_version)
print(secret)
However, I get the following error:
azure.keyvault.v7_0.models.key_vault_error_py3.KeyVaultErrorException: Operation returned an invalid status code 'Unauthorized'
I can confirm that credentials, subscription_id and tenant_id are correct, and that using the CLI, I can successfully retrieve the secret content. So it must be something Python SDK-specific.
Any ideas?
It looks like this is a bug in the Python SDK.
https://github.com/Azure/azure-sdk-for-python/issues/5096
You can use your own AD username and password with the UserPassCredentials class. It's not the logged-in user, but it's probably as close as you'll get for now.
E.g.:
from azure.common.credentials import UserPassCredentials
credentials = UserPassCredentials('username','password')
client = KeyVaultClient(credentials)
secret = client.get_secret(vault_url, secret_name, secret_version)
print(secret)
I tried the same thing and had a different error ("...audience is invalid...") until I changed your first function call adding the resource parameter:
credentials, subscription_id, tenant_id = get_azure_cli_credentials(resource='https://vault.azure.net', with_tenant=True)
With this change I was able to access secrets using the same code you show.
What about this code snippet? Comparing your code to the example, I don't see where you're setting the client_id or the tenant.
You’ll want to set the access policy for the key vault to allow the authenticated user to access secrets. This can be done in the portal. Bear in mind that key vault has an upper limit of 16 access definitions, so you’ll probably want to grant access to a group and add your users to that group.
As #8forty pointed out, adding a resource='https://vault.azure.net' parameter to your get_azure_cli_credentials call will resolve the issue.
However, there are new packages for working with Key Vault in Python that replace azure-keyvault:
azure-keyvault-certificates (Migration guide)
azure-keyvault-keys (Migration guide)
azure-keyvault-secrets (Migration guide)
azure-identity is also the package that should be used with these for authentication.
If you want to authenticate your Key Vault client with the credentials of the logged in CLI user, you can use the AzureCliCredential class:
from azure.identity import AzureCliCredential
from azure.keyvault.secrets import SecretClient
credential = AzureCliCredential()
vault_url = "https://{vault-name}.vault.azure.net"
secret_name = "secret-name"
client = SecretClient(vault_url, credential)
secret = client.get_secret(secret_name)
print(secret.value)
(I work on the Azure SDK in Python)

boto3 and connecting to custom url

I have a test environment that mimics the S3 environment, and I want to write some test scripts using boto3. How can I connect to that service?
I tried:
client = boto3.client('s3', region_name="us-east-1", endpoint_url="http://mymachine")
client = boto3.client('iam', region_name="us-east-1", endpoint_url="http://mymachine")
Both fail to work.
The service is setup to use IAM authentication.
My error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Any ideas?
Thanks
Please use as below:

import boto3

client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY
)
Please check this link for more ways to configure AWS credentials.
http://boto3.readthedocs.io/en/latest/guide/configuration.html
1. The boto API always looks for credentials to pass to the service it connects to; there is no way to access AWS resources with boto3 without an access key and secret. If you intend to use some other method, e.g. Temporary Security Credentials, your AWS admin must set up roles etc. to allow your VM instance to connect to AWS using the AWS Security Token Service. Otherwise, you must request a restricted credential key from your AWS account admin.
2. On the other hand, if you want to mimic S3 and rapidly test uploading/downloading huge amounts of data during development, you should set up FakeS3. It will take any dummy access key (see the sketch after this list). However, FakeS3 has a few drawbacks: you can't set up and test S3 bucket policies.
3. Even if you configure your S3 bucket to allow anyone to take the file, that only applies through the URL; it is a file access permission, not a bucket access permission.
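A minimal sketch of pointing boto3 at a FakeS3 endpoint (the endpoint URL and key values are placeholders; FakeS3 accepts any non-empty key pair):

import boto3

# Dummy keys satisfy botocore's credential lookup; FakeS3 does not verify them
client = boto3.client(
    's3',
    region_name='us-east-1',
    endpoint_url='http://localhost:4567',  # placeholder FakeS3 endpoint
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy'
)
client.create_bucket(Bucket='test-bucket')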
