In AWS Organizations, use paginate to list all accounts - Python

This is really a generic question about how to use paginate in boto3.
In my case, I have a lot of accounts (100+) under AWS Organizations. If I call list_accounts() directly without NextToken, I can't list all of them:
response = client.list_accounts()
I know the correct way is to add NextToken and MaxResults, but that needs more coding:
response = client.list_accounts(
    NextToken='string',
    MaxResults=123
)
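For reference, a rough sketch of the manual NextToken loop I would rather avoid (assuming an already-configured Organizations client) looks like this:

accounts = []
response = client.list_accounts()
accounts.extend(response['Accounts'])
# keep calling list_accounts until no NextToken is returned
while 'NextToken' in response:
    response = client.list_accounts(NextToken=response['NextToken'])
    accounts.extend(response['Accounts'])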
So I switched to another method, paginate (see class Organizations.Paginator.ListAccounts). It reports more accounts than list_accounts(), but still can't list all of them.
The request syntax has MaxItems and PageSize parameters similar to those in list_accounts():
response_iterator = paginator.paginate(
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
So I have two questions:
If paginate can't list all accounts, what is the point of it?
How can I list all accounts with paginate? Any sample code would be appreciated.

So I was wrong, probably from the beginning.
The paginator does handle the loop automatically. I can't test Organizations list_accounts() because I don't have that many accounts, but I did a test on S3 bucket objects.
AWS CLI to count the S3 objects:
$ aws s3api list-objects --bucket bucket-demo |jq '.Contents|length'
8696
So I can confirm there are 8000+ objects in this bucket.
Get S3 objects with the Python SDK paginator:
>>> import boto3
>>> client = boto3.client('s3')
>>> paginator = client.get_paginator('list_objects')
>>> response_iterator = paginator.paginate(Bucket="bucket-demo")
>>> for i in response_iterator:
...     print(len(i['Contents']))
...
1000
1000
1000
1000
1000
1000
1000
1000
696
This proves the paginator does the loop automatically.
For comparison, here is the problem that makes the paginator necessary:
>>> import boto3
>>> client = boto3.client('s3')
>>> response = client.list_objects(Bucket="bucket-demo")
>>> len(response['Contents'])
1000
So the paginator can simplify your code a lot and saves you from writing your own pagination loop, as shown in the sketch below.
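Applying the same approach back to the original Organizations question, here is a minimal sketch (I could not verify it against 100+ accounts myself):

import boto3

client = boto3.client('organizations')
paginator = client.get_paginator('list_accounts')

# the paginator issues repeated list_accounts calls and handles NextToken for us
accounts = []
for page in paginator.paginate():
    accounts.extend(page['Accounts'])
print(len(accounts))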

Related

How do I paginate my query in Athena using Lambda and Boto3?

I am querying my data in Athena from Lambda using Boto3.
My result is in JSON format.
When I run my Lambda function I get the whole record set.
Now how can I paginate this data?
I only want a small amount of data per page and
to send that small dataset to the UI to display.
Here is my Python code:
import boto3

def lambda_handler(event, context):
    athena = boto3.client('athena')
    s3 = boto3.client('s3')
    query = event['query']
    # Execution (DATABASE and output are defined elsewhere in my code)
    query_id = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={'Database': DATABASE},
        ResultConfiguration={'OutputLocation': output}
    )['QueryExecutionId']
I use Postman to pass my query and get the data.
I am aware of the SQL LIMIT and OFFSET clauses,
but I want to know if there is a better way than passing LIMIT and OFFSET parameters into my function.
Please help me with this case.
Thanks.
A quick Google search turned up this in the Athena docs, which seems promising. Example from the docs:
response_iterator = paginator.paginate(
    QueryExecutionId='string',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    })
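To tie this back to the Lambda in the question, a minimal sketch might look like the following (assuming the query execution has already finished; in a real Lambda you would poll get_query_execution until the state is SUCCEEDED first, and results_per_page is a hypothetical value):

import boto3

athena = boto3.client('athena')
paginator = athena.get_paginator('get_query_results')

results_per_page = 100  # hypothetical page size
pages = paginator.paginate(
    QueryExecutionId=query_id,
    PaginationConfig={'PageSize': results_per_page}
)
for page in pages:
    rows = page['ResultSet']['Rows']  # the first row of the first page holds the column headers
    print(len(rows))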
I hope this helps!

Boto3 S3 Paginator not returning filtered results

I'm trying to list the items in my S3 bucket from the last few months. I'm able to get results from the normal paginator (page_iterator), but the filtered_iterator isn't yielding anything when I iterate over it.
I've been referencing the documentation here. I've double-checked my filter string using both the JMESPath site and the AWS CLI, and it works in both places. I'm at a loss at this point on what I need to do.
Current Code:
client = boto3.client('s3', region_name='us-west-2')
paginator = client.get_paginator('list_objects_v2')
operation_parameters = {'Bucket': self.bucket_name,
                        'Prefix': file_path_prefix}
page_iterator = paginator.paginate(**operation_parameters)
filtered_iterator = page_iterator.search("Contents[?LastModified>='2022-10-31'][]")
for key_data in filtered_iterator:
    print('page2', key_data)
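No accepted answer appears in this thread, but one likely cause (an assumption on my part, not confirmed here) is that botocore parses LastModified into a datetime object, so a plain string comparison in JMESPath silently matches nothing; wrapping the field in to_string and comparing against a JSON-quoted literal is the usual workaround:

# hypothetical fix: stringify the datetime before comparing against a quoted date literal
filtered_iterator = page_iterator.search(
    "Contents[?to_string(LastModified) >= '\"2022-10-31\"'][]"
)
for key_data in filtered_iterator:
    print(key_data)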

Alternative to pagination for AWS client Cognito list_users() function

In order to list all users of a Cognito user pool, I thought of using boto3's client.list_users() function with pagination.
However, if I call print(client.can_paginate('list_users')), False is returned, since list_users() is not pageable.
Is there an alternative for listing all users of a Cognito user pool without having to filter out the users that have already been retrieved?
My current code without pagination looks like this:
client = boto3.client('cognito-idp',
                      region_name=aws_region,
                      aws_access_key_id=aws_access_key,
                      aws_secret_access_key=aws_secret_key,
                      config=config)
response = client.list_users(
    UserPoolId=userpool_id,
    AttributesToGet=[
        'email', 'sub'
    ]
)
Many thanks in advance!
I faced the same problem and was also surprised that there is no paginator for the Cognito list_users API, so I built something like this:
import boto3

def boto3_paginate(method_to_paginate, **params_to_pass):
    response = method_to_paginate(**params_to_pass)
    yield response
    while response.get('PaginationToken', None):
        response = method_to_paginate(PaginationToken=response['PaginationToken'], **params_to_pass)
        yield response

class CognitoIDPClient:
    def __init__(self):
        self.client = boto3.client('cognito-idp', region_name=settings.COGNITO_AWS_REGION)
    ...
    def get_all_users(self):
        """
        Fetch all users from Cognito.
        """
        # sadly, there is no paginator for 'list_users', which is ... weird
        # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-idp.html?highlight=list_users#paginators
        users = []
        # if `Limit` is not provided, the API returns 60 items, which is the maximum
        for page in boto3_paginate(self.client.list_users, UserPoolId=settings.COGNITO_USER_POOL):
            users += page['Users']
        return users
This worked for me; it seems there is now a Paginator for list_users in the boto3 Cognito IDP documentation:
def get_cognito_users(**kwargs) -> [dict]:
    # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-idp.html?highlight=list_users#CognitoIdentityProvider.Paginator.ListUsers
    paginator = cognito_idp_client.get_paginator('list_users')
    pages = paginator.paginate(**kwargs)
    for page in pages:
        users = []
        for obj in page.get('Users', []):
            users.append(obj)
        yield users  # yields one list of users per page
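Usage might look something like this (the client setup and the UserPoolId value are placeholders):

import boto3

cognito_idp_client = boto3.client('cognito-idp')

all_users = []
for page_of_users in get_cognito_users(UserPoolId='us-east-1_EXAMPLE'):
    all_users.extend(page_of_users)
print(len(all_users))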

Get URL of a publicly accessible S3 object using boto3 without expiration or security info

Running a line like:
s3_obj = boto3.resource('s3').Object(bucket, key)
s3_obj.meta.client.generate_presigned_url('get_object', ExpiresIn=0, Params={'Bucket':bucket,'Key':key})
Yields a result like:
https://my-bucket.s3.amazonaws.com/my-key/my-object-name?AWSAccessKeyId=SOMEKEY&Expires=SOMENUMBER&x-amz-security-token=SOMETOKEN
For an S3 object with a public-read ACL, all the GET params are unnecessary.
I could cheat and just rewrite the URL without the GET params, but that feels unclean and hacky.
How do I use boto3 to provide me with just the public link, e.g. https://my-bucket.s3.amazonaws.com/my-key/my-object-name? In other words, how do I skip the signing step in generate_presigned_url? I don't see anything like a generated_unsigned_url function.
The best solution I found is still to use generate_presigned_url, just with the client's Config.signature_version set to botocore.UNSIGNED.
The following returns the public link without any of the signing parameters:
import boto3, botocore
from botocore.client import Config
config = Config(signature_version=botocore.UNSIGNED)
boto3.client('s3', config=config).generate_presigned_url('get_object', ExpiresIn=0, Params={'Bucket': bucket, 'Key': key})
The relevant discussions on the boto3 repository are:
https://github.com/boto/boto3/issues/110
https://github.com/boto/boto3/issues/169
https://github.com/boto/boto3/issues/1415

How to read data from Azure's CosmosDB in Python

I have a trial account with Azure and have uploaded some JSON files into CosmosDB. I am creating a Python program to review the data, but I am having trouble doing so. This is the code I have so far:
import pydocumentdb.documents as documents
import pydocumentdb.document_client as document_client
import pydocumentdb.errors as errors
url = 'https://ronyazrak.documents.azure.com:443/'
key = '' # primary key
# Initialize the Python DocumentDB client
client = document_client.DocumentClient(url, {'masterKey': key})
collection_link = '/dbs/test1/colls/test1'
collection = client.ReadCollection(collection_link)
result_iterable = client.QueryDocuments(collection)
query = { 'query': 'SELECT * FROM server s' }
I read somewhere that the key should be the primary key that I can find under my Azure account's Keys. I have filled the key string with my primary key; it is left empty here just for privacy.
I also read somewhere that the collection_link should be '/dbs/test1/colls/test1' if my data is in a collection named 'test1'.
My code gets an error at the client.ReadCollection() call.
This is the error I get: "pydocumentdb.errors.HTTPFailure: Status code: 401
{"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ncolls\ndbs/test1/colls/test1\nmon, 29 may 2017 19:47:28 gmt\n\n'\r\nActivityId: 03e13e74-8db4-4661-837a-f8d81a2804cc"}"
Once this error is fixed, what is there left to do? I want to get the JSON files as a big dictionary so that I can review the data.
Am I in the right path? Am I approaching this the wrong way? How can I read documents that are in my database? Thanks.
According to your error information, it seems to be caused by authentication failing with your key, as the official explanation linked here describes.
So please check your key, but I think the real issue is that pydocumentdb is being used incorrectly. The IDs of a Database, Collection & Document are different from their links. APIs such as ReadCollection and QueryDocuments need to be passed the related link. You need to retrieve resources in Azure Cosmos DB via the resource link, not the resource ID.
According to your description, I think you want to list all documents under the collection ID path /dbs/test1/colls/test1. For reference, here is my sample code:
from pydocumentdb import document_client
uri = 'https://ronyazrak.documents.azure.com:443/'
key = '<your-primary-key>'
client = document_client.DocumentClient(uri, {'masterKey': key})
db_id = 'test1'
db_query = "select * from r where r.id = '{0}'".format(db_id)
db = list(client.QueryDatabases(db_query))[0]
db_link = db['_self']
coll_id = 'test1'
coll_query = "select * from r where r.id = '{0}'".format(coll_id)
coll = list(client.QueryCollections(db_link, coll_query))[0]
coll_link = coll['_self']
docs = client.ReadDocuments(coll_link)
print(list(docs))
Please see the details of DocumentDB Python SDK from here.
For those using azure-cosmos, the current library (as of 2019): I opened a doc bug and provided a sample on GitHub.
Sample
from azure.cosmos import cosmos_client
import json

CONFIG = {
    "ENDPOINT": "ENDPOINT_FROM_YOUR_COSMOS_ACCOUNT",
    "PRIMARYKEY": "KEY_FROM_YOUR_COSMOS_ACCOUNT",
    "DATABASE": "DATABASE_ID",  # Probably looks more like a name to you
    "CONTAINER": "YOUR_CONTAINER_ID"  # Probably looks more like a name to you
}

CONTAINER_LINK = f"dbs/{CONFIG['DATABASE']}/colls/{CONFIG['CONTAINER']}"
FEEDOPTIONS = {}
FEEDOPTIONS["enableCrossPartitionQuery"] = True
# There is also a partitionKey feed option, but I was unable to figure out how to use it.
QUERY = {
    "query": "SELECT * from c"
}

# Initialize the Cosmos client
client = cosmos_client.CosmosClient(
    url_connection=CONFIG["ENDPOINT"], auth={"masterKey": CONFIG["PRIMARYKEY"]}
)

# Query for some data
results = client.QueryItems(CONTAINER_LINK, QUERY, FEEDOPTIONS)

# Look at your data
print(list(results))

# You can also use the list as JSON
json.dumps(list(results), indent=4)
