I'm new to AWS and the boto3 Python SDK. I configured the Access Key ID, Secret Access Key, and region name through the AWS CLI.
import boto3
client = boto3.client('cognito-idp')
response = client.admin_get_user(
    UserPoolId='us-east-2_hJpikme9T',
    Username='wasdkiller'
)
Here are my user pool details:
I provided the correct UserPoolId, but when I run the above code sample I get the error below for every function in CognitoIdentityProvider; as an example, I used admin_get_user(**kwargs).
ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the AdminGetUser operation: User pool us-east-2_hJpikme9T does not exist.
We can provide more arguments to boto3.client(*args, **kwargs) than just service_name (the only required parameter). As you can see from client() in the Session reference, we can provide aws_access_key_id, aws_secret_access_key, and region_name without using the AWS CLI.
If you are relying on the defaults you have already configured through the AWS CLI, that's fine: you don't need to pass aws_access_key_id or aws_secret_access_key when calling boto3.client(). But for some reason I don't understand, you do have to pass the region_name you already configured through the AWS CLI when calling boto3.client().
client = boto3.client('cognito-idp', region_name='us-east-2')
This is how I cleared up my problem. But I still don't know why we have to explicitly pass the region_name argument when calling boto3.client(); please update this answer or comment below if you know anything about it.
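For what it's worth, region_name on the client is not the only way to supply the region; a minimal sketch of the alternatives I know of (a Session, or the AWS_DEFAULT_REGION environment variable) is below.
import os

import boto3

# Option 1: pass the region explicitly, as above.
client = boto3.client('cognito-idp', region_name='us-east-2')

# Option 2: build a Session with the region and create clients from it.
session = boto3.session.Session(region_name='us-east-2')
client = session.client('cognito-idp')

# Option 3: export AWS_DEFAULT_REGION before the client is created and
# let boto3 pick the region up from the environment.
os.environ['AWS_DEFAULT_REGION'] = 'us-east-2'
client = boto3.client('cognito-idp')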
I had the same error. Another option is to create the User Pool in region us-east-1. I did that and cognito identity was authenticated successfully.
I'm helping a colleague troubleshoot an S3 put "access denied" error when he uses a Python library that internally calls boto3. To simplify the troubleshooting process I'd like to know which IAM principal is getting denied. It's not the instance's IAM role because that role has full S3 access, so I'm trying to consider other possibilities.
Normally, if I were calling the boto3 code directly, I wouldn't have this question, because either 1) I would be explicit about the principal, or 2) boto3 would check for the presence of an environment variable such as AWS_ACCESS_KEY_ID and, if that wasn't set, fall back to the EC2 instance's associated role / default profile (please correct me if I am wrong about this order of precedence).
How can I determine the denied principal used by a third party library other than by diving into the library's source code?
You can use this code to identify which credentials are being used:
import boto3

# get_caller_identity() reports the account, user ID and ARN
# of whichever credentials boto3 ended up using.
sts_client = boto3.client('sts')
response = sts_client.get_caller_identity()
print(response['Arn'])
It will show an ARN like: arn:aws:iam::123456789012:user/User-Name
If you run it from an EC2 instance it would show an ARN like: arn:aws:sts::123456789012:assumed-role/Role-Name/i-1234abcd
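If you also want to know where boto3 found those credentials (environment, shared file, instance role, and so on), the default Session can tell you; a small sketch, assuming the method attribute on the botocore credentials object:
import boto3

session = boto3.session.Session()
creds = session.get_credentials()

# method names the provider that supplied the credentials,
# e.g. 'env', 'shared-credentials-file', 'iam-role' or 'assume-role'.
print(creds.method)
print(creds.access_key)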
What I understand is that, in order to access an AWS service such as Redshift, the way to do it is:
client = boto3.client("redshift", region_name="someRegion", aws_access_key_id="foo", aws_secret_access_key="bar")
response = client.describe_clusters(ClusterIdentifier="mycluster")
print(response)
This code runs fine both locally through PyCharm and on AWS Lambda.
However, am I correct that the aws_access_key_id and aws_secret_access_key are both mine, i.e. my IAM user's security access keys? Is this supposed to be the case? Or am I supposed to create a different user / role in order to access Redshift via boto3?
The more important question is: how do I properly store and retrieve aws_access_key_id and aws_secret_access_key? I understand that this could potentially be done via Secrets Manager, but I am still faced with the problem that, if I run the code below, I get an error saying that it is unable to locate credentials.
client = boto3.client("secretsmanager", region_name="someRegion")
# Met with the problem that it is unable to locate my credentials.
The proper way to do this would be to create an IAM role which allows the desired Redshift functionality, and then attach that role to your Lambda.
When you create the role, you have the flexibility to create a policy to fine-grain access permissions to certain actions and/or certain resources.
After you have attached the IAM role to your lambda, you will simply be able to do:
>>> client = boto3.client("redshift")
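From there, the rest of the original snippet should work unchanged, since the temporary credentials come from the Lambda execution role; a minimal sketch, keeping the placeholder names from the question:
import boto3

# No keys in the code: the Lambda execution role supplies temporary
# credentials, and the region can stay explicit or come from the runtime.
client = boto3.client("redshift", region_name="someRegion")
response = client.describe_clusters(ClusterIdentifier="mycluster")
print(response)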
From the docs: the first and second options are not secure, since you mix the credentials with the code.
If the code runs on an AWS EC2 instance, the best way is using an "assumed role", where you grant the EC2 instance permissions. If the code runs outside AWS, you will have to select an option like using ~/.aws/credentials.
Boto3 will look in several locations when searching for credentials. The mechanism in which Boto3 looks for credentials is to search through a list of possible locations and stop as soon as it finds credentials. The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
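As a quick illustration of the top of that chain, here is a hedged sketch: explicitly passed keys win over everything else, and with no parameters boto3 falls through the locations listed above. The key values shown are placeholders.
import boto3

# 1. Explicit parameters take precedence over everything else.
client = boto3.client(
    's3',
    aws_access_key_id='AKIA...',          # placeholder values
    aws_secret_access_key='...',
    region_name='us-east-1',
)

# 2. With no parameters, boto3 walks the chain above: environment
#    variables, ~/.aws/credentials, ~/.aws/config, assume-role,
#    and finally the EC2 instance metadata service.
client = boto3.client('s3', region_name='us-east-1')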
I want to access the AWS Comprehend API from a Python script, but I'm not getting any leads on how to remove this error. One thing I know is that I have to get a session security token.
import json

import boto3
from botocore.exceptions import ClientError

try:
    client = boto3.client(service_name='comprehend', region_name='us-east-1',
                          aws_access_key_id='KEY ID', aws_secret_access_key='ACCESS KEY')
    text = "It is raining today in Seattle"
    print('Calling DetectEntities')
    print(json.dumps(client.detect_entities(Text=text, LanguageCode='en'), sort_keys=True, indent=4))
    print('End of DetectEntities\n')
except ClientError as e:
    print(e)
Error : An error occurred (UnrecognizedClientException) when calling the DetectEntities operation: The security token included in the request is invalid.
This error suggests that you have provided invalid credentials.
It is also worth noting that you should never put credentials inside your source code. This can lead to potential security problems if other people obtain access to the source code.
There are several ways to provide valid credentials to an application that uses an AWS SDK (such as boto3).
If the application is running on an Amazon EC2 instance, assign an IAM Role to the instance. This will automatically provide credentials that can be retrieved by boto3.
If you are running the application on your own computer, store credentials in the ~/.aws/credentials file. The easiest way to create this file is with the aws configure command.
See: Credentials — Boto 3 documentation
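Once the credentials are coming from the instance role or from ~/.aws/credentials, the original snippet can drop the hard-coded keys entirely; a minimal sketch:
import json

import boto3

# Credentials are resolved from the IAM role or ~/.aws/credentials,
# so only the region needs to be stated here.
client = boto3.client('comprehend', region_name='us-east-1')

text = "It is raining today in Seattle"
response = client.detect_entities(Text=text, LanguageCode='en')
print(json.dumps(response, sort_keys=True, indent=4))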
Create a profile using aws configure or by updating ~/.aws/config. If you only have one profile to work with (the default), you can omit the profile_name parameter from the Session() invocation (see the example below). Then create a service-specific client using the session object. Example:
import boto3
session = boto3.session.Session(profile_name="test")
ec2_client = session.client('ec2')
ec2_client.describe_instances()
ec2_resource = session.resource('ec2')
One useful tool I use daily is this: https://github.com/atward/aws-profile/blob/master/aws-profile
This makes assuming role so much easier!
After you set up your access key in ~/.aws/credentials and your ~/.aws/config,
you can do something like:
AWS_PROFILE=your-profile aws-profile [python x.py]
The part in [] can be substituted with anything that you want to use AWS credentials. e.g., terraform plan
Essentially, this utility simply puts your AWS credentials into OS environment variables. Then, in your boto3 script, you don't need to worry about setting aws_access_key_id, etc.
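So the script wrapped by aws-profile can rely entirely on the default credential chain; a minimal sketch of what x.py might contain (the bucket listing is only an illustration):
import boto3

# aws-profile has already exported the AWS_* credential variables for the
# assumed role, so the default chain picks them up with no extra code.
s3 = boto3.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])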
I'd like to write a file to S3 from my lambda function written in Python. But I’m struggling to pass my S3 ID and Key.
The following works on my local machine after I set my local Python environment variables AWS_SHARED_CREDENTIALS_FILE and AWS_CONFIG_FILE to point to the local files I created with the AWS CLI.
session = boto3.session.Session(region_name='us-east-2')
s3 = session.client('s3',
                    config=boto3.session.Config(signature_version='s3v4'))
And the following works on Lambda where I hand code my ID and Key (using *** here):
AWS_ACCESS_KEY_ID = '***'
AWS_SECRET_ACCESS_KEY = '***'
session = boto3.session.Session(region_name='us-east-2')
s3 = session.client('s3',
                    config=boto3.session.Config(signature_version='s3v4'),
                    aws_access_key_id=AWS_ACCESS_KEY_ID,
                    aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
But I understand this is insecure after reading best practices from Amazon. So I try:
AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
session = boto3.session.Session(region_name='us-east-2')
s3 = session.client('s3',
                    config=boto3.session.Config(signature_version='s3v4'),
                    aws_access_key_id=AWS_ACCESS_KEY_ID,
                    aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
But I get an error: "The AWS Access Key Id you provided does not exist in our records." I also tried to define these variables in the Lambda console, but then I get: "Lambda was unable to configure your environment variables because the environment variables you have provided contains reserved keys."
I am a little surprised I need to pass an ID or Key at all since I believe my account for authoring the Lambda function also has permission to write to the S3 account (the key and secret I hand code are from IAM for this same account). I got the same sense from reading the following post:
AWS Lambda function write to S3
You never need to use AWS access keys when you are using one AWS resource within another. Just allow the Lambda function to access the S3 bucket and any actions that you want to take (e.g. PutObject). If you make sure the Lambda function receives the role with the policy to allow for that kind of access, the SDK takes all authentication out of your hands.
If you do need to use any secret keys in Lambda, e.g. for 3rd-party systems or an AWS RDS database (non-Aurora), you may want to have a look at AWS KMS. This works nicely together with Lambda. But again: using S3 from Lambda should be handled with the right role/policy in IAM.
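To make the PutObject point above concrete, here is a minimal sketch of a handler that writes to S3 with no keys in the code at all; the bucket name and object key are placeholders:
import boto3

s3 = boto3.client('s3')  # credentials come from the Lambda execution role

def lambda_handler(event, context):
    # 'my-example-bucket' and the key below are placeholders.
    s3.put_object(
        Bucket='my-example-bucket',
        Key='output/result.txt',
        Body=b'hello from lambda',
    )
    return {'status': 'written'}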
The fact that your account has permissions to "write" to the S3 bucket does not mean that your Lambda function can do it. AWS has a service called IAM which handles the permissions that your Lambda (among many other services) has to perform actions against other AWS resources.
Most probably you are lacking the relevant IAM role/policy associated with your Lambda to write to the S3 bucket.
As pointed out in the AWS Lambda Permissions Model, you need to create and associate an IAM role when creating the Lambda. You can do this from the console or using CloudFormation.
Once you have the relevant permissions in place for your Lambda, you do not need to deal with keys or authentication.
Incidentally, I was curious to see if it took less time for AWS to authenticate me if I explicitly passed ACCESS_KEY and SECRET_ACCESS_KEY when accessing AWS DynamoDB from AWS Lambda instead of letting Lambda find my credentials automatically via environment variables. Here's the log entry from AWS Cloudwatch for example:
[INFO] 2018-12-23T15:34:56.174Z 4c399add-06c8-11e9-8970-3b3cbb83cd9c Found credentials in environment variables.
I did this because when I switched from using AWS RDS with MySQL to DynamoDB, it took nearly 1000 ms longer to complete some simple table reads and updates. For reference, my calls to AWS RDS MySQL passed credentials explicitly, e.g.:
conn = pymysql.connect(host=db_host, port=db_port, user=db_user,
                       passwd=db_pass, db=db_name, connect_timeout=5)
so, I thought this might be the problem because when connecting to DynamoDB I was using:
db = boto3.resource('dynamodb')
table = db.Table(dynamo_db_table)
I decided to try the following to see if my authentication time decreased:
session = boto3.Session(
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY
)
db = session.resource('dynamodb')
table = db.Table(dynamo_db_table)
The end result was that explicitly providing my access keys saved less than 100 ms on average, so I went back to letting the Lambda execution environment determine my credentials dynamically. I'm still working to understand why MySQL is so much faster for my simple table {key: value} query and update use case.
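For anyone who wants to repeat the comparison, a rough sketch of how the two paths could be timed; the table name, key, and environment variable names are placeholders, and the first request on each path is what actually triggers credential resolution:
import os
import time

import boto3

def timed(label, fn):
    # Crude wall-clock timing; CloudWatch logs give a similar picture.
    start = time.perf_counter()
    fn()
    print(f"{label}: {(time.perf_counter() - start) * 1000:.1f} ms")

# Path 1: let the execution environment resolve credentials.
table = boto3.resource('dynamodb').Table('my-example-table')
timed("implicit credentials", lambda: table.get_item(Key={'id': 'example'}))

# Path 2: pass keys explicitly (read from non-reserved env vars here).
session = boto3.Session(
    aws_access_key_id=os.environ.get('MY_ACCESS_KEY_ID'),
    aws_secret_access_key=os.environ.get('MY_SECRET_ACCESS_KEY'),
)
table2 = session.resource('dynamodb').Table('my-example-table')
timed("explicit credentials", lambda: table2.get_item(Key={'id': 'example'}))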
I'm trying to use AWS' recently announced "IAM roles for EC2" feature, which lets security credentials automatically get delivered to EC2 instances. (see http://aws.amazon.com/about-aws/whats-new/2012/06/11/Announcing-IAM-Roles-for-EC2-instances/).
I've set up an instance with an IAM role as described. I can also get (seemingly) proper access key / credentials with curl.
However, boto fails to do a simple call like "get_all_buckets", even though I've turned on ALL S3 permissions for the role.
The error I get is "The AWS Access Key Id you provided does not exist in our records"
However, the access key listed in the error matches the one I get from curl.
Here is the failing script, run on an EC2 instance with an IAM role attached that gives all S3 permissions:
import urllib2
import ast
from boto.s3.connection import S3Connection
resp=urllib2.urlopen('http://169.254.169.254/latest/meta-data/iam/security-credentials/DatabaseApp').read()
resp=ast.literal_eval(resp)
print "access:" + resp['AccessKeyId']
print "secret:" + resp['SecretAccessKey']
conn = S3Connection(resp['AccessKeyId'], resp['SecretAccessKey'])
rs= conn.get_all_buckets()
If you are using boto 2.5.1 or later it's actually much easier than this. Boto will automatically find the credentials in the instance metadata for you and use them as long as no other credentials are found in environment variables or in a boto config file. So, you should be able to simply do this on the EC2 instance:
>>> import boto
>>> c = boto.connect_s3()
>>> rs = c.get_all_buckets()
The reason your manual approach is failing is that the credentials associated with the IAM role are temporary session credentials, consisting of an access_key, a secret_key, and a security_token, and you need to supply all three of those values to the S3Connection constructor.
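If you do want to build the connection by hand from the metadata response, something like the sketch below should work; it reuses the question's endpoint and role name, and assumes boto 2's security_token keyword and the Token field of the metadata document:
import json
import urllib2

from boto.s3.connection import S3Connection

url = ('http://169.254.169.254/latest/meta-data/'
       'iam/security-credentials/DatabaseApp')
creds = json.loads(urllib2.urlopen(url).read())

# All three values from the temporary session credentials are required.
conn = S3Connection(creds['AccessKeyId'],
                    creds['SecretAccessKey'],
                    security_token=creds['Token'])
rs = conn.get_all_buckets()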
I don't know if this answer will help anyone but I was getting the same error, I had to solve my problem a little differently.
First, my amazon instance did not have any IAM roles. I thought I could just use the access key and the secret key but I kept getting this error with only those two keys. I read I needed a security token as well, but I didn't have one because I didn't have any IAM roles. This is what I did to correct the issue:
Create an IAM role with AmazonS3FullAccess permissions.
Start a new instance and attach my newly created role.
Even after doing this it still didn't work. I had to also connect to the proper region with the code below:
import boto.s3.connection
conn = boto.s3.connect_to_region('your-region')
conn.get_all_buckets()