Unable to list all secrets from AWS secret manager - python

I am trying to list all secrets available in AWS Secrets Manager using a Lambda function; the following is the Python code snippet:
import boto3

region = 'us-west-2'
session = boto3.session.Session(region_name=region)
client = session.client('secretsmanager')
secrets = client.list_secrets()
secrets_manager = secrets['SecretList']
for secret in secrets_manager:
    print(secret['Name'])
The above code lists only a few secrets, not all of them, but running the following CLI command returns all secrets:
aws secretsmanager list-secrets | grep "Name"
What am I missing in the Python code? Please advise.

The API is paginated. You need to send multiple requests to get all pages of responses. The CLI does this for you by default.
The easiest way is to use the paginator API in boto3, which implements pagination correctly for you (the details can differ slightly between AWS services/APIs):
client = session.client('secretsmanager')
paginator = client.get_paginator('list_secrets')
page_iterator = paginator.paginate()
for page in page_iterator:
    print(page)
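Each page has the same shape as a single list_secrets response, so to print only the names, loop over each page's SecretList:
for page in paginator.paginate():
    for secret in page['SecretList']:
        print(secret['Name'])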
Or you can do this 'manually' for the same effect:
secrets = []
response = client.list_secrets()
secrets.extend(response['SecretList'])
while 'NextToken' in response:
    response = client.list_secrets(NextToken=response['NextToken'])
    secrets.extend(response['SecretList'])

for secret in secrets:
    print(secret['Name'])

Related

Flask web app on Cloud Run - google.auth.exceptions.DefaultCredentialsError:

I'm hosting a Flask web app on Cloud Run. I'm also using Secret Manager to store Service Account keys. (I previously downloaded a JSON file with the keys)
In my code, I'm accessing the payload and then using os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = payload to authenticate. When I deploy the app and try to visit the page, I get an Internal Server Error. Reviewing the logs, I see:
File "/usr/local/lib/python3.10/site-packages/google/auth/_default.py", line 121, in load_credentials_from_file
raise exceptions.DefaultCredentialsError(
google.auth.exceptions.DefaultCredentialsError: File {"
I can access the secret through gcloud just fine with: gcloud secrets versions access 1 --secret="<secret_id>" while acting as the Service Account.
Here is my Python code:
# Grabbing keys from Secret Manager
def access_secret_version():
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = "projects/{project_id}/secrets/{secret_id}/versions/1"
    # Access the secret version.
    response = client.access_secret_version(request={"name": name})
    payload = response.payload.data.decode("UTF-8")
    return payload

@app.route('/page/page_two')
def some_random_func():
    # New way
    payload = access_secret_version()  # <---- calling the payload
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = payload
    # Old way
    # os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "service-account-keys.json"
I'm not technically accessing a JSON file like I was before. The payload variable is storing the entire key. Is this why it's not working?
Your approach is incorrect.
When you run on a Google compute service like Cloud Run, the code runs under the identity of the compute service.
In this case, by default, Cloud Run uses the Compute Engine default service account, but it's good practice to create a Service Account for your service and specify it when you deploy to Cloud Run (see Service accounts).
This mechanism is one of the "legs" of Application Default Credentials. When your code is running on Google Cloud, you don't specify the environment variable (you also don't need to create a key); the Cloud Run service acquires the credentials from the Metadata service:
import google.auth
credentials, project_id = google.auth.default()
See google.auth package
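The Google Cloud client libraries use Application Default Credentials automatically, so on Cloud Run you can typically construct a client with no explicit credentials or environment variable at all:
from google.cloud import secretmanager

# On Cloud Run this picks up the service's identity from the Metadata service;
# no key file and no GOOGLE_APPLICATION_CREDENTIALS are needed.
client = secretmanager.SecretManagerServiceClient()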
It is bad practice to define or set an environment variable within code. By their nature, environment variables should be provided by the environment. Doing this with GOOGLE_APPLICATION_CREDENTIALS means that your code always sets this value, when it should be set only when the code is running off Google Cloud.
For completeness, if you need to create Credentials from a JSON string rather than from a file containing a JSON string, you can use from_service_account_info (see google.oauth2.service_account).
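A minimal sketch, assuming payload holds the JSON key string returned by the question's access_secret_version:
import json
from google.oauth2 import service_account

# Parse the key JSON and build Credentials directly, with no file on disk.
info = json.loads(payload)
credentials = service_account.Credentials.from_service_account_info(info)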

Using Secrets Manager to authenticate for Google API

I'm running a flask app that will access Bigquery on behalf of users using a service account they upload.
To store those service account credentials, I thought the following might be a good set up:
ENV Var: Stores my credentials for accessing google secrets manager
Secret & secret version: in google secrets manager for each user of the application. This will access the user's own bigquery instance on behalf of the user.
--
I'm still learning about secrets, but this seemed more appropriate than any way of storing credentials in my own database?
--
The google function for accessing secrets is:
def access_secret_version(secret_id, version_id=version_id):
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
    # Access the secret version.
    response = client.access_secret_version(name=name)
    # Return the decoded payload.
    return response.payload.data.decode('UTF-8')
However, this returns the JSON as a string. When I then use this for BigQuery:
credentials = access_secret_version(secret_id, version_id=version_id)
BigQuery_client = bigquery.Client(credentials=json.dumps(credentials),
                                  project=project_id)
I get the error:
File "/Users/Desktop/application_name/venv/lib/python3.8/site-
packages/google/cloud/client/__init__.py", line 167, in __init__
raise ValueError(_GOOGLE_AUTH_CREDENTIALS_HELP)
ValueError: This library only supports credentials from google-auth-library-python.
See https://google-auth.readthedocs.io/en/latest/ for help on authentication with
this library.
Locally I'm storing the credentials and accessing them via an env variable. But as I intend for this application to have multiple users from different organisations, I don't think that scales.
I think my question boils down to two pieces:
Is this a sensible method for storing and accessing credentials?
Can you authenticate to BigQuery using a string rather than the .json file indicated here?
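For what it's worth, a minimal sketch of the string-based approach (assuming the secret payload is a service-account key in JSON form, and reusing the names from the snippets above): parse the string with json.loads, build Credentials with from_service_account_info, and pass those to the BigQuery client instead of the raw string:
import json
from google.oauth2 import service_account
from google.cloud import bigquery

# json.loads (not json.dumps) turns the secret string back into a dict,
# which from_service_account_info accepts.
info = json.loads(access_secret_version(secret_id, version_id=version_id))
credentials = service_account.Credentials.from_service_account_info(info)
bigquery_client = bigquery.Client(credentials=credentials, project=project_id)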

Authenticate to Azure Functions App using Azure Active Directory in Python daemon application with MSAL

I built a REST API as an Azure Functions App, and a Python console application should be able to authenticate and make requests to that API.
Here are the steps I have taken so far:
I created the REST API as an Azure Functions App (in C#) where I used AuthorizationLevel.Anonymous.
In the Azure Active Directory I created an app registration (Azure Active Directory > App registrations) where I added a client secret under Certificates & secrets > Client secrets.
In the Azure Functions App I added the registration from step 2 under Authentication > Identity provider where I provided the registration's client ID as well as the value of the registration's client secret.
Here is the code of the Python console application (as described here: https://github.com/AzureAD/microsoft-authentication-library-for-python):
import requests
import msal
import json

with open('configuration.json') as json_file:
    configuration = json.load(json_file)

app = msal.ConfidentialClientApplication(
    configuration["client_id"],
    authority=configuration["authority"],
    client_credential=configuration["secret"]
)

result = None
result = app.acquire_token_silent(configuration["scope"], account=None)

if not result:
    result = app.acquire_token_for_client(scopes=configuration["scope"])

if "access_token" in result:
    url = ...  # the URL of a specific function of the Azure Functions App
    parameters = {...}
    response = requests.get(url=url, params=parameters, headers={'Authorization': 'Bearer ' + result['access_token']})
    print("Status code: {}".format(response.status_code))
    print("Message: {}".format(response.text))
registration["scope"] equals ["https://graph.microsoft.com/.default"] and registration["secret"] equals the value of the client secret created in step 2.
The code returns:
Status code: 401
Message: You do not have permission to view this directory or page.
What am I missing? I know there are similar issues on Stackoverflow but they did not solve my problem.
If you use the newest Authentication feature, check the issuer URL. Either adjust the Issuer URL configured for the identity provider, or change accessTokenAcceptedVersion to 2 in the app registration's manifest. Either way, make sure the issuer URL version and the token version are the same.
(I found there are still many problems with Authentication, and many things will not be configured automatically.)
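One way to see which token version you are actually receiving (a sketch, assuming result['access_token'] from the question's code) is to decode the JWT's payload segment, which is plain base64url with no signature check needed, and inspect the iss and ver claims:
import base64
import json

def decode_jwt_payload(token):
    # The claims are the second dot-separated segment; re-pad for base64.
    payload = token.split('.')[1]
    payload += '=' * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

claims = decode_jwt_payload(result['access_token'])
print(claims.get('iss'), claims.get('ver'))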

Python Generating an IAM authentication token boto3.session

I am trying to use the documentation on https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Python.html. Right now I am stuck at session = boto3.session(profile_name='RDSCreds'). What is profile_name and how do I find that in my RDS?
import sys
import os
import boto3

ENDPOINT = "mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT = "3306"
USR = "jane_doe"
REGION = "us-east-1"

os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

# gets the credentials from .aws/credentials
session = boto3.Session(profile_name='RDSCreds')
client = session.client('rds')
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USR, Region=REGION)
session = boto3.Session(profile_name='RDSCreds')
profile_name here is the name of the profile you have configured for your AWS CLI.
Usually, when you run aws configure, it creates a default profile. But sometimes users want to use the AWS CLI with another account's credentials, or send requests to another region, so they configure a separate profile. See the docs for creating and configuring multiple profiles.
aws configure --profile RDSCreds #enter your access keys for this profile
In case you think you have already created the RDSCreds profile, you can check it with: less ~/.aws/config
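For reference, a profile typically spans the two files below (all values are placeholders):
~/.aws/credentials
[RDSCreds]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

~/.aws/config
[profile RDSCreds]
region = us-east-1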
The documentation you mentioned for RDS with boto3 also says: "The code examples use profiles for shared credentials. For information about specifying credentials, see Credentials in the AWS SDK for Python (Boto3) documentation."

Switching IAM-user roles with Athena and boto3

I am writing a python program using boto3 that grabs all of the queries made by a master account and pushes them out to all of the master account's sub accounts.
Grabbing the query IDs from the master instance is done, but I'm having trouble pushing them out to the sub accounts. With my authentication information AWS is connecting to the master account by default, but I can't figure out how to get it to connect to a sub account. Generally AWS services do this by switching roles, but Athena doesn't have a built in method for this. I could manually create different profiles but I'm not sure how to switch them manually in the middle of code execution
Here's Amazon's code example for switching roles with STS, which does support assuming different roles: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html
Here's what my program looks like so far
#!/usr/bin/env python3
import boto3

dev = boto3.session.Session(profile_name='dev')

# Function for executing athena queries
client = dev.client('athena')
s3_input = 's3://dev/test/'
s3_output = 's3://dev/testOutput'
database = 'ex_athena_db'
table = 'test_data'

response = client.list_named_queries(
    MaxResults=50,
    WorkGroup='primary'
)
print(response)
So I have the "dev" profile, but I'm not sure how to differentiate this profile to indicate to AWS that I'd like to access one of the child accounts. Is it just the name, or do I need some other paramter? I don't think I can (or need to) generate a seperate authentication token for this
I solved this by creating a new profile for the sub account with the role's ARN.
Sample config:
[default]
region = us-east-1
[profile ecr-dev]
role_arn = arn:aws:iam::76532435:role/AccountRole
source_profile = default
Sample code:
#!/usr/bin/env python3
import boto3

dev = boto3.session.Session(profile_name='name', region_name="us-east-1")

# Function for executing athena queries
client = dev.client('athena')
s3_input = 's3://test/'
s3_output = 's3://test'
database = 'ex_athena_db'

response = client.list_named_queries(
    MaxResults=50,
    WorkGroup='primary'
)
print(response)
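For completeness, a sketch of the STS-based alternative mentioned in the question (the role ARN is a placeholder): assume the sub account's role explicitly and build a session from the temporary credentials:
import boto3

sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/AccountRole',  # placeholder ARN
    RoleSessionName='athena-sub-account'
)
creds = assumed['Credentials']
sub_session = boto3.session.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
    region_name='us-east-1'
)
athena = sub_session.client('athena')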
