How do I access the security token for Python SDK boto3 - python

I want to access the AWS Comprehend API from a Python script, but I can't figure out how to get rid of this error. One thing I do know is that I have to get a session security token.
import json
import boto3
from botocore.exceptions import ClientError

try:
    client = boto3.client(service_name='comprehend', region_name='us-east-1',
                          aws_access_key_id='KEY ID', aws_secret_access_key='ACCESS KEY')
    text = "It is raining today in Seattle"
    print('Calling DetectEntities')
    print(json.dumps(client.detect_entities(Text=text, LanguageCode='en'), sort_keys=True, indent=4))
    print('End of DetectEntities\n')
except ClientError as e:
    print(e)
Error : An error occurred (UnrecognizedClientException) when calling the DetectEntities operation: The security token included in the request is invalid.

This error suggests that you have provided invalid credentials.
It is also worth noting that you should never put credentials inside your source code. This can lead to security problems if other people obtain access to the source code.
There are several ways to provide valid credentials to an application that uses an AWS SDK (such as boto3).
If the application is running on an Amazon EC2 instance, assign an IAM Role to the instance. This will automatically provide credentials that can be retrieved by boto3.
If you are running the application on your own computer, store credentials in the .aws/credentials file. The easiest way to create this file is with the aws configure command.
See: Credentials — Boto 3 documentation
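For example, here is a minimal sketch (assuming an instance role or a configured ~/.aws/credentials file is already in place) that calls Comprehend without any keys in the source:

import json

import boto3
from botocore.exceptions import ClientError

# No keys in the code: boto3 resolves credentials from the instance role,
# environment variables, or ~/.aws/credentials automatically.
client = boto3.client('comprehend', region_name='us-east-1')

try:
    response = client.detect_entities(Text="It is raining today in Seattle",
                                      LanguageCode='en')
    print(json.dumps(response, sort_keys=True, indent=4))
except ClientError as e:
    print(e)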

Create a profile using aws configure or by updating ~/.aws/config. If you only have one profile to work with (default), you can omit the profile_name parameter from the Session() invocation (see the example below). Then create the AWS service-specific client from the session object. Example:
import boto3
session = boto3.session.Session(profile_name="test")
ec2_client = session.client('ec2')
ec2_client.describe_instances()
ec2_resource = session.resource('ec2')

One useful tool I use daily is this: https://github.com/atward/aws-profile/blob/master/aws-profile
This makes assuming a role so much easier!
After you set up your access key in .aws/credentials and your .aws/config
you can do something like:
AWS_PROFILE=your-profile aws-profile [python x.py]
The part in [] can be substituted with anything that needs AWS credentials, e.g. terraform plan.
Essentially, this utility simply puts your AWS credentials into OS environment variables. Then, in your boto3 script, you don't need to worry about setting aws_access_key_id, etc.
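If you want to confirm which credentials boto3 actually picked up (from aws-profile's environment variables or anywhere else), one quick check is STS GetCallerIdentity; a small sketch, assuming any valid credentials are in place:

import os

import boto3

# aws-profile exports AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and
# AWS_SESSION_TOKEN before running the wrapped command.
print("Keys in environment:", "AWS_ACCESS_KEY_ID" in os.environ)

# GetCallerIdentity returns the account and ARN of whatever credentials
# boto3 resolved, which makes it easy to confirm the assumed role.
sts = boto3.client("sts")
print(sts.get_caller_identity()["Arn"])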

Related

unable to locate credentials for boto3.client both locally and on Lambda

What I understand is that, in order to access AWS services such as Redshift, the way to do it is:
client = boto3.client("redshift", region_name="someRegion", aws_access_key_id="foo", aws_secret_access_key="bar")
response = client.describe_clusters(ClusterIdentifier="mycluster")
print(response)
This code runs fine both locally through PyCharm and on AWS Lambda.
However, am I correct that the aws_access_key_id and aws_secret_access_key are both mine, i.e. my IAM user's access keys? Is this supposed to be the case, or am I supposed to create a different user/role in order to access Redshift via boto3?
The more important question is: how do I properly store and retrieve aws_access_key_id and aws_secret_access_key? I understand that this could potentially be done via Secrets Manager, but I am still faced with the problem that, if I run the code below, I get an error saying that it is unable to locate credentials.
client = boto3.client("secretsmanager", region_name="someRegion")
# Met with the problem that it is unable to locate my credentials.
The proper way to do this would be to create an IAM role which allows the desired Redshift functionality, and then attach that role to your Lambda.
When you create the role, you have the flexibility to create a policy granting fine-grained access to certain actions and/or certain resources.
After you have attached the IAM role to your lambda, you will simply be able to do:
>>> client = boto3.client("redshift")
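As a rough sketch of what that looks like in the Lambda itself (the cluster identifier is a placeholder, and the execution role is assumed to allow redshift:DescribeClusters):

import boto3

# Credentials come from the Lambda execution role; no keys anywhere.
client = boto3.client("redshift")

def lambda_handler(event, context):
    # "mycluster" is a placeholder cluster identifier.
    response = client.describe_clusters(ClusterIdentifier="mycluster")
    return response["Clusters"][0]["ClusterStatus"]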
From the docs: the first and second options are not secure, since you mix the credentials with the code.
If the code runs on AWS EC2, the best way is to use an IAM role ("assume role") that grants the EC2 instance permissions. If the code runs outside AWS, you will have to select an option such as using ~/.aws/credentials.
Boto3 will look in several locations when searching for credentials: it searches through a list of possible locations and stops as soon as it finds credentials. The order in which Boto3 searches for credentials is (a short sketch of the Session option follows the list):
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
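As a sketch of the second option above, here is how temporary credentials (for example, from STS AssumeRole) can be passed to a Session; all three values are placeholders, and note that temporary keys always need the session token:

import boto3

session = boto3.session.Session(
    aws_access_key_id="ASIA...",      # placeholder
    aws_secret_access_key="...",      # placeholder
    aws_session_token="...",          # placeholder: the "security token"
    region_name="us-east-1",
)

comprehend = session.client("comprehend")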

Google cloud get bucket - works with cli but not in python

I was asked to perform an integration with an external Google Storage bucket, and I received a credentials JSON.
After configuring myself with the creds JSON, running
gsutil ls gs://bucket_name returned a valid response, as did uploading a file into the bucket.
When trying to do it with Python 3, it does not work.
Using google-cloud-storage==1.16.0 (I also tried newer versions), I'm doing:
project_id = credentials_dict.get("project_id")
credentials = service_account.Credentials.from_service_account_info(credentials_dict)
client = storage.Client(credentials=credentials, project=project_id)
bucket = client.get_bucket(bucket_name)
But on the get_bucket line, I get:
google.api_core.exceptions.Forbidden: 403 GET https://www.googleapis.com/storage/v1/b/BUCKET_NAME?projection=noAcl: USERNAME#PROJECT_ID.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket.
The external partner I'm integrating with says that the user is set up correctly, and to prove it they point out that I can perform the action with gsutil.
Can you please assist? Any idea what the problem might be?
The answer was that the creds were indeed wrong, but it did work when I called client.bucket(bucket_name) instead of client.get_bucket(bucket_name).
Please follow these steps in order to correctly set up the Cloud Storage client library for Python. In general, the Cloud Storage libraries can use Application Default Credentials or environment variables for authentication.
Note that the recommended method is to set up authentication using environment variables (for example, on Linux export GOOGLE_APPLICATION_CREDENTIALS="/path/to/[service-account-credentials].json" should work) and avoid using the service_account.Credentials.from_service_account_info() method altogether:
from google.cloud import storage
storage_client = storage.Client(project='project-id-where-the-bucket-is')
bucket_name = "your-bucket"
bucket = storage_client.get_bucket(bucket_name)
should simply work because the authentication is handled by the client library via the environment variable.
Now, if you are interested in explicitly using the service account, then instead of the service_account.Credentials.from_service_account_info() method you can use the from_service_account_json() method directly, in the following way:
from google.cloud import storage
# Explicitly use service account credentials by specifying the private key
# file.
storage_client = storage.Client.from_service_account_json(
'/[service-account-credentials].json')
bucket_name = "your-bucket"
bucket = storage_client.get_bucket(bucket_name)
Find all the relevant details as to how to provide credentials to your application here.
tl;dr: don't use client.get_bucket at all; get_bucket issues an API request that requires the storage.buckets.get permission, while client.bucket() only builds a local reference.
See for detailed explanation and solution https://stackoverflow.com/a/51452170/705745
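A rough sketch of that approach with the google-cloud-storage library (the credentials path, bucket name and object names are placeholders):

from google.cloud import storage
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "/path/to/creds.json")  # placeholder path
client = storage.Client(credentials=credentials,
                        project=credentials.project_id)

# client.bucket() only builds a local reference, so no storage.buckets.get
# permission is needed (unlike client.get_bucket(), which calls the API).
bucket = client.bucket("bucket_name")   # placeholder bucket name
blob = bucket.blob("some/object.txt")   # placeholder object name
blob.upload_from_filename("local_file.txt")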

Trouble updating IAM to allow AWS Glue to access the AWS Secrets Manager

I am working on a project that requires that an AWS Glue Python script access the AWS Secrets Manager.
I tried giving Glue permissions to do this via IAM, but I don't see how; I can see the permissions strings showing that Lambda has access but I don't see a way to edit them.
I tried creating a new role that had the right permissions but when I went to attach it seemed to have disappeared ...
My fallback workaround is to grab the secret via a tiny Lambda and xfer it via S3 to Glue ... but this should be doable directly. Any help you can give is very welcome!
You may need to add the SecretsManagerReadWrite policy to the IAM role that is associated with AWS Glue. Please check; we use Secrets Manager from our AWS Glue jobs this way.
After adding the policy to the IAM role associated with AWS Glue, add the following code snippet to read the credentials from Secrets Manager:
import json
import boto3

# Getting DB credentials from Secrets Manager
client = boto3.client("secretsmanager", region_name="us-west-2")
get_secret_value_response = client.get_secret_value(
    SecretId="mysecrets-info"  # the name as configured in Secrets Manager
)
secret = get_secret_value_response['SecretString']
secret = json.loads(secret)
uname = secret.get('username')
pwd = secret.get('password')
url = secret.get('host')
By the way, you need to be an AWS admin user to modify the IAM role. If you are a power user, please reach out to the admin team to have the policy added to the role.
The SecretsManagerReadWrite policy does not give permissions only to Lambda. I think you may be looking at the second statement which grants the Role permissions to create Lambdas (used to create Lambdas to rotate secrets).
Looking at the Glue blog post for setting up ETL jobs, they also say you should only need to add the SecretsManagerReadWrite policy to the Glue role. However, they also say that this is just for testing and you should use a policy that grants only the necessary permissions needed (e.g. use an inline policy that grants secretsmanager:GetSecretValue with Resource being the secret in question).
You don't actually say what error message you are seeing. That might be helpful in figuring out what is going wrong.
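If you do go the fine-grained route, here is a hedged sketch of attaching such an inline policy to the Glue role with boto3 (the role name, policy name and secret ARN are all placeholders):

import json

import boto3

iam = boto3.client("iam")

# Grants only GetSecretValue on one specific secret; replace the ARN with
# the ARN of your own secret and the role name with your Glue job's role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "arn:aws:secretsmanager:us-west-2:123456789012:secret:mysecrets-info*",
    }],
}

iam.put_role_policy(
    RoleName="MyGlueJobRole",       # placeholder
    PolicyName="AllowGetMySecret",  # placeholder
    PolicyDocument=json.dumps(policy),
)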

How to fix user pool ******** does not exist in boto3

I'm new to AWS and the boto3 Python SDK. I configured the access key ID, secret access key and the region name through the AWS CLI.
import boto3
client = boto3.client('cognito-idp')
response = client.admin_get_user(
    UserPoolId='us-east-2_hJpikme9T',
    Username='wasdkiller'
)
I provided the correct UserPoolId from my user pool details, but when I run the above code sample I get the error below for every function in CognitoIdentityProvider; as an example, I used admin_get_user(**kwargs).
ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the AdminGetUser operation: User pool us-east-2_hJpikme9T does not exist.
We can provide more arguments to boto3.client(*args, **kwargs) than just service_name (the only required parameter). As you can see from client() in the Session reference, we can provide aws_access_key_id, aws_secret_access_key, and region_name without using the AWS CLI.
If you have already configured those values through the AWS CLI, that's fine: you don't need to pass aws_access_key_id or aws_secret_access_key when calling boto3.client(). But for some reason I still had to pass region_name, even though it was already configured through the AWS CLI:
client = boto3.client('cognito-idp', region_name='us-east-2')
That cleared up my problem. But I still don't know why we have to explicitly pass the region_name argument when calling boto3.client(); please update this answer or comment below if you know anything about it.
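One way to avoid the mismatch altogether is to derive the region from the pool ID prefix; a small sketch using the pool ID and username from the question:

import boto3

USER_POOL_ID = "us-east-2_hJpikme9T"

# The part before the underscore is the pool's home region; the client must
# be created in that same region or Cognito reports the pool as missing.
region = USER_POOL_ID.split("_")[0]

client = boto3.client("cognito-idp", region_name=region)
response = client.admin_get_user(UserPoolId=USER_POOL_ID, Username="wasdkiller")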
I had the same error. Another option is to create the user pool in region us-east-1. I did that and Cognito identity authentication succeeded.

boto issue with IAM role

I'm trying to use AWS' recently announced "IAM roles for EC2" feature, which lets security credentials automatically get delivered to EC2 instances. (see http://aws.amazon.com/about-aws/whats-new/2012/06/11/Announcing-IAM-Roles-for-EC2-instances/).
I've set up an instance with an IAM role as described. I can also get (seemingly) proper access key / credentials with curl.
However, boto fails to do a simple call like "get_all_buckets", even though I've turned on ALL S3 permissions for the role.
The error I get is "The AWS Access Key Id you provided does not exist in our records"
However, the access key listed in the error matches the one I get from curl.
Here is the failing script, run on an EC2 instance with an IAM role attached that gives all S3 permissions:
import urllib2
import ast
from boto.s3.connection import S3Connection
resp=urllib2.urlopen('http://169.254.169.254/latest/meta-data/iam/security-credentials/DatabaseApp').read()
resp=ast.literal_eval(resp)
print "access:" + resp['AccessKeyId']
print "secret:" + resp['SecretAccessKey']
conn = S3Connection(resp['AccessKeyId'], resp['SecretAccessKey'])
rs= conn.get_all_buckets()
If you are using boto 2.5.1 or later it's actually much easier than this. Boto will automatically find the credentials in the instance metadata for you and use them as long as no other credentials are found in environment variables or in a boto config file. So, you should be able to simply do this on the EC2 instance:
>>> import boto
>>> c = boto.connect_s3()
>>> rs = c.get_all_buckets()
The reason your manual approach is failing is that the credentials associated with the IAM role are temporary session credentials, consisting of an access key, a secret key, and a security token, and you need to supply all three of those values to the S3Connection constructor.
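For completeness, here is a sketch of the manual route, passing all three values from the instance metadata to boto 2 (the role name is the one from the question, and the Token field is the piece missing from the original script):

import json
import urllib2

from boto.s3.connection import S3Connection

url = ('http://169.254.169.254/latest/meta-data/'
       'iam/security-credentials/DatabaseApp')
creds = json.loads(urllib2.urlopen(url).read())

# Temporary role credentials only work when the security token is included.
conn = S3Connection(aws_access_key_id=creds['AccessKeyId'],
                    aws_secret_access_key=creds['SecretAccessKey'],
                    security_token=creds['Token'])
print(conn.get_all_buckets())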
I don't know if this answer will help anyone but I was getting the same error, I had to solve my problem a little differently.
First, my amazon instance did not have any IAM roles. I thought I could just use the access key and the secret key but I kept getting this error with only those two keys. I read I needed a security token as well, but I didn't have one because I didn't have any IAM roles. This is what I did to correct the issue:
Create an IAM role with AmazonS3FullAccess permissions.
Start a new instance and attach my newly created role.
Even after doing this it still didn't work. I had to also connect to the proper region with the code below:
import boto.s3.connection
conn = boto.s3.connect_to_region('your-region')
conn.get_all_buckets()
