AWS IoT connection using Cognito credentials - Python

I am having problems connecting to AWS IoT from a Python script using Cognito credentials. I am using the AWS IoT Python SDK as well as the boto3 package. Here is what I am doing:
First, I set up a Cognito User Pool with a couple of users who log in with a username and password. I also set up a Cognito Identity Pool with my Cognito User Pool as the one and only authentication provider; I do not provide access to unauthenticated identities. The Identity Pool has an auth role I will just call "MyAuthRole". In IAM, that role has two policies attached. The first is the default policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
and the second is a policy for IoT access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "iot:*",
            "Resource": "*"
        }
    ]
}
Next, I have Python code that uses my AWS IAM account credentials (access key and secret key) to authenticate the Cognito user and get an ID token, like this:
auth_data = {'USERNAME': username, 'PASSWORD': password}
provider_client = boto3.client('cognito-idp', region_name=region)
resp = provider_client.admin_initiate_auth(UserPoolId=user_pool_id,
                                           AuthFlow='ADMIN_NO_SRP_AUTH',
                                           AuthParameters=auth_data,
                                           ClientId=client_id)
id_token = resp['AuthenticationResult']['IdToken']
Next, I use this token to obtain temporary AWS credentials for AWS IoT via the Identity Pool, using this function:
def _get_aws_cognito_temp_credentials(aws_access_key_id=None, aws_secret_access_key=None,
                                      region_name='us-west-2', account_id=None, user_pool_id=None,
                                      identity_pool_id=None, id_token=None):
    boto3.setup_default_session(aws_access_key_id=aws_access_key_id,
                                aws_secret_access_key=aws_secret_access_key,
                                region_name=region_name)
    identity_client = boto3.client('cognito-identity', region_name=region_name)
    loginkey = "cognito-idp.%s.amazonaws.com/%s" % (region_name, user_pool_id)
    print("loginkey is %s" % loginkey)
    loginsdict = {
        loginkey: id_token
    }
    identity_response = identity_client.get_id(AccountId=account_id,
                                               IdentityPoolId=identity_pool_id,
                                               Logins=loginsdict)
    identity_id = identity_response['IdentityId']
    #
    # Get the identity's credentials
    #
    credentials_response = identity_client.get_credentials_for_identity(
        IdentityId=identity_id,
        Logins=loginsdict)
    credentials = credentials_response['Credentials']
    access_key_id = credentials['AccessKeyId']
    secret_key = credentials['SecretKey']
    session_token = credentials['SessionToken']
    expiration = credentials['Expiration']
    return access_key_id, secret_key, session_token, expiration
Finally, I create the AWS IoT client and try to connect like this:
myAWSIoTMQTTClient = AWSIoTMQTTClient(clientId, useWebsocket=True)
myAWSIoTMQTTClient.configureEndpoint(host, port)
myAWSIoTMQTTClient.configureIAMCredentials(temp_access_key_id,
                                           temp_secret_key,
                                           temp_session_token)
myAWSIoTMQTTClient.configureAutoReconnectBackoffTime(1, 32, 20)
myAWSIoTMQTTClient.configureOfflinePublishQueueing(-1)
myAWSIoTMQTTClient.configureDrainingFrequency(2)
myAWSIoTMQTTClient.configureConnectDisconnectTimeout(10)
myAWSIoTMQTTClient.configureMQTTOperationTimeout(5)
log.info("create_aws_iot_client", pre_connect=True)
myAWSIoTMQTTClient.connect()
log.info("create_aws_iot_client", post_connect=True, myAWSIoTMQTTClient=myAWSIoTMQTTClient)
The problem is that it gets to pre_connect and then just hangs and eventually times out. The error message I get is this:
AWSIoTPythonSDK.exception.AWSIoTExceptions.connectTimeoutException
I also read somewhere that there may be other policies that somehow have to be attached:
"According to the Policies for HTTP and WebSocketClients documentation, in order to authenticate an Amazon Cognito identity to publish MQTT messages over HTTP, you must specify two policies. The first policy must be attached to an Amazon Cognito identity pool role. This is the managed policy AWSIoTDataAccess that was defined earlier in the IdentityPoolAuthRole.
The second policy must be attached to an Amazon Cognito user using the AWS IoT AttachPrincipalPolicy API."
However, I have no clue how to achieve the above in Python or using the AWS console.
How do I fix this issue?

You are right that the step you've missed is using the AttachPrincipalPolicy API (which is now deprecated and has been replaced with Iot::AttachPolicy).
To do this:
Create an IoT policy (IoT Core > Secure > Policies > Create)
Give the policy the permissions you want any user attached to it to have; based on the code you shared, that would mean just copying the second IAM policy. You'll want to lock that down in production, though!
Using the AWS CLI, you can attach this policy to your Cognito identity with:
aws iot attach-policy --policy-name <iot-policy-name> --target <cognito-identity-id>
There is a significantly more involved AWS example at aws-samples/aws-iot-chat-example, but it is in JavaScript. I wasn't able to find an equivalent in Python; however, the Cognito/IAM/IoT configuration steps and required API calls are the same whatever the language.
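If you would rather do this from Python than the CLI, boto3 exposes the same call as attach_policy on the iot client. A minimal sketch, assuming the IoT Core policy already exists and identity_id holds the Cognito Identity ID returned by get_id in your question (the policy name here is illustrative):
import boto3

iot_client = boto3.client('iot', region_name='us-west-2')

# Attach the IoT Core policy to the authenticated Cognito identity.
# identity_id looks like "us-west-2:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx".
iot_client.attach_policy(policyName='my-iot-policy',
                         target=identity_id)
You would run this once per identity (with credentials allowed to call iot:AttachPolicy), after which the WebSocket connect in your script should stop timing out.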


Using Python, how to fetch data from the AWS Parameter Store in a CI Jenkins server (which runs my tests nightly)

I have spent a lot of time searching the web for this answer.
I know how to get my QA environment's aws_access_key_id and aws_secret_access_key while running my code locally on my PC; they are stored in my C:\Users\[name]\.aws config file:
[profile qa]
aws_access_key_id = ABC
aws_secret_access_key = XYZ
[profile crossaccount]
role_arn=arn:aws:ssm:us-east-1:12345678:parameter/A/B/secrets/C
source_profile=qa
With this Python code I actually see the correct values:
import boto3
session = boto3.Session(profile_name='qa')
s3client = session.client('s3')
credentials = session.get_credentials()
accessKey = credentials.access_key
secretKey = credentials.secret_key
print("accessKey= " + str(accessKey) + " secretKey="+ secretKey)
A) How do I get these when my code is running in CI? Do I pass the "aws_access_key_id" and "aws_secret_access_key" as arguments to the code?
B) How exactly do I assume a role and get the parameters from the AWS Systems Manager Parameter Store?
Given that I know:
secretsExternalId: 'DEF',
secretsRoleArn: 'arn:aws:ssm:us-east-1:12345678:parameter/A/B/secrets/C',
Install boto3 and botocore on the CI server.
As you mentioned, the CI (Jenkins) server is also running on AWS, so you can attach a role to the CI server (EC2) that grants SSM read permission.
Below is a sample permission policy for the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:Describe*",
                "ssm:Get*",
                "ssm:List*"
            ],
            "Resource": "*"
        }
    ]
}
Then from your Python code you can simply call SSM to get the parameter values. First export the parameter paths as environment variables:
$ export parameter_value1="/path/to/ssm/parameter_1"
$ export parameter_value2="/path/to/ssm/parameter_2"
and read them in Python, passing the paths to get_parameter:
import os
import boto3

session = boto3.Session(region_name='eu-west-1')
ssm = session.client('ssm')

# These environment variables hold the SSM parameter paths, not the values
MYSQL_HOSTNAME = os.environ.get('parameter_value1')
MYSQL_USERNAME = os.environ.get('parameter_value2')

hostname = ssm.get_parameter(Name=MYSQL_HOSTNAME, WithDecryption=True)
username = ssm.get_parameter(Name=MYSQL_USERNAME, WithDecryption=True)
print("Param1: {}".format(hostname['Parameter']['Value']))
print("Param2: {}".format(username['Parameter']['Value']))

Copy from S3 bucket in one account to S3 bucket in another account using Boto3 in AWS Lambda

I have created a S3 bucket and created a file under my aws account. My account has trust relationship established with another account and I am able to put objects into the bucket in another account using Boto3. How can I copy objects from bucket in my account to bucket in another account using Boto3?
I see "access denied" when I use the code below -
source_session = boto3.Session(region_name='us-east-1')
source_conn = source_session.resource('s3')
src_conn = source_session.client('s3')
dest_session = __aws_session(role_arn=assumed_role_arn, session_name='dest_session')
dest_conn = dest_session.client('s3')

copy_source = {'Bucket': bucket_name, 'Key': key_value}
dest_conn.copy(copy_source, dest_bucket_name, dest_key,
               ExtraArgs={'ServerSideEncryption': 'AES256'},
               SourceClient=src_conn)
In my case, src_conn has access to the source bucket and dest_conn has access to the destination bucket.
I believe the only way to achieve this is by downloading and uploading the files.
AWS session (assuming the destination account's role):
client = boto3.client('sts')
response = client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
session = boto3.Session(
    aws_access_key_id=response['Credentials']['AccessKeyId'],
    aws_secret_access_key=response['Credentials']['SecretAccessKey'],
    aws_session_token=response['Credentials']['SessionToken'])
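A minimal sketch of the download-then-upload approach, assuming source_session (from your question) can read the source bucket and session (from the assume_role call above) can write to the destination; the local path is illustrative:
source_s3 = source_session.client('s3')  # source-account session
dest_s3 = session.client('s3')           # assumed-role session from above

local_path = '/tmp/object-copy'          # temporary local copy
source_s3.download_file(bucket_name, key_value, local_path)
dest_s3.upload_file(local_path, dest_bucket_name, dest_key,
                    ExtraArgs={'ServerSideEncryption': 'AES256'})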
Another approach is to attach a policy to the destination bucket permitting access from the account hosting the source bucket. For example, something like the following should work (although you may want to tighten up the permissions as appropriate):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<source account ID>:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::dst_bucket",
                "arn:aws:s3:::dst_bucket/*"
            ]
        }
    ]
}
Then your Lambda hosted in your source AWS account should have no problems writing to the bucket(s) in the destination AWS account.

How to authenticate serverless web request using AWS Web API and Lambda?

A little background info:
I have built an interactive website where users can upload images to S3. I built it so the image upload goes right from the browser to AWS S3 using a signed request (Python/Django backend).
Now the issue is that users want to be able to rotate the images. Similarly, I would like this set up so the user's request goes straight from the browser. I built an AWS Lambda function and attached it to a Web API, which accepts POST requests. I have been testing and finally got it working. The function takes two inputs, key and rotate_direction, which are passed as POST variables to the Web API; they come into the Python function in the event variable. Here is the simple Lambda function:
from __future__ import print_function
import boto3
import os
import sys
import uuid
from PIL import Image

s3_client = boto3.client('s3')

def rotate_image(image_path, upload_path, rotate_direction):
    with Image.open(image_path) as image:
        if rotate_direction == "right":
            image.rotate(-90).save(upload_path)
        else:
            image.rotate(90).save(upload_path)

def handler(event, context):
    bucket = 'the-s3-bucket-name'
    key = event['key']
    rotate_direction = event['rotate_direction']
    download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
    upload_path = '/tmp/rotated_small-{}'.format(key)
    s3_client.download_file(bucket, key, download_path)
    rotate_image(download_path, upload_path, rotate_direction)
    s3_client.delete_object(Bucket=bucket, Key=key)
    s3_client.upload_file(upload_path, bucket, key)
    return {'message': 'rotated'}
Everything is working. So now my issue is: how do I enforce some kind of authentication for this system? The ownership details for each image reside on the Django web server. While all the images are considered "public", I wish to enforce that only the owner of each image is allowed to rotate it.
With this project I have been venturing into new territory by making content requests right from the browser. I understand how I could control access by making the POST requests only from the web server, where I could validate ownership of the images. Would it still be possible with the request coming from the browser?
TL;DR solution: create a Cognito Identity Pool and assign a policy so users can only upload files prefixed with their Identity ID.
If I understand your question correctly, you want to setup a way for an image stored on S3 to be viewable by public, yet only editable by the user who uploaded it. Actually, you can verify file ownership, rotate the image and upload the rotated image to S3 all in the browser without going through a Lambda function.
Step 1: Create a Cognito User Pool to create a user directory. If you already have a user login/sign-up authentication system, you can skip this step.
Step 2: Create a Cognito Identity Pool to enable federated identities, so your users can get temporary AWS credentials from the Identity Pool and use them to upload files to S3 without going through your server/Lambda.
Step 3: When creating the Cognito Identity Pool, you can define a policy for which S3 resources a user is allowed to access. Here is a sample policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/${cognito-identity.amazonaws.com:sub}*"
            ]
        }
    ]
}
Note that the second block grants "s3:GetObject" on all files in your S3 bucket, while the third block grants "s3:PutObject" ONLY on files prefixed with the user's Cognito Identity ID.
Step 4: In your frontend JS, get temporary credentials from the Cognito Identity Pool:
export function getAwsCredentials(userToken) {
  const authenticator = `cognito-idp.${config.cognito.REGION}.amazonaws.com/${config.cognito.USER_POOL_ID}`;
  AWS.config.update({ region: config.cognito.REGION });
  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: config.cognito.IDENTITY_POOL_ID,
    Logins: {
      [authenticator]: userToken
    }
  });
  return new Promise((resolve, reject) => (
    AWS.config.credentials.get((err) => {
      if (err) {
        reject(err);
        return;
      }
      resolve();
    })
  ));
}
Step 5: Upload files to S3 with the credentials, prefixing the file name with the user's Cognito Identity ID:
export async function s3Upload(file, userToken) {
  await getAwsCredentials(userToken);
  const s3 = new AWS.S3({
    params: {
      Bucket: config.s3.BUCKET,
    }
  });
  const filename = `${AWS.config.credentials.identityId}-${Date.now()}-${file.name}`;
  return new Promise((resolve, reject) => (
    s3.putObject({
      Key: filename,
      Body: file,
      ContentType: file.type,
      ACL: 'public-read',
    },
    (error, result) => {
      if (error) {
        reject(error);
        return;
      }
      resolve(`${config.s3.DOMAIN}/${config.s3.BUCKET}/${filename}`);
    })
  ));
}
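Since the rest of this thread is in Python, here is a rough boto3 equivalent of steps 4 and 5 for a server-side check or test, assuming you already have the user's ID token from the User Pool (as in the first question above); names such as region, user_pool_id, identity_pool_id, bucket, file_name, file_bytes and content_type are illustrative:
import time
import boto3

# Step 4 (Python): exchange the User Pool ID token for temporary credentials
identity_client = boto3.client('cognito-identity', region_name=region)
logins = {'cognito-idp.%s.amazonaws.com/%s' % (region, user_pool_id): id_token}
identity_id = identity_client.get_id(IdentityPoolId=identity_pool_id,
                                     Logins=logins)['IdentityId']
creds = identity_client.get_credentials_for_identity(
    IdentityId=identity_id, Logins=logins)['Credentials']

# Step 5 (Python): upload with the identity ID as the key prefix,
# matching the ${cognito-identity.amazonaws.com:sub} policy variable
s3 = boto3.client('s3',
                  aws_access_key_id=creds['AccessKeyId'],
                  aws_secret_access_key=creds['SecretKey'],
                  aws_session_token=creds['SessionToken'])
key = '%s-%d-%s' % (identity_id, int(time.time() * 1000), file_name)
s3.put_object(Bucket=bucket, Key=key, Body=file_bytes,
              ACL='public-read', ContentType=content_type)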

Boto3: aws credentials with limited permissions

I was provisioned some AWS keys. These keys give me access to certain directories in an S3 bucket. I want to use boto3 to interact with the directories that were exposed to me, but it seems that I can't actually do anything with the bucket at all, since I don't have access to the entire bucket.
This works for me from my terminal:
aws s3 ls s3://the_bucket/and/this/specific/path/
but if I do:
aws s3 ls s3://the_bucket/
I get:
An error occurred (AccessDenied) when calling the ListObjects
operation: Access Denied
which also happens when I try to access the directory via boto3:
session = boto3.Session(profile_name=my_creds)
client = session.client('s3')
list_of_objects = client.list_objects(Bucket='the_bucket',
                                      Prefix='and/this/specific/path',
                                      Delimiter='/')
Do I need to request access to the entire bucket for boto3 to be usable?
You need a bucket policy statement like this:
{
    "Sid": "<SID>",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::<account>:user/<user_name>"
    },
    "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
    ],
    "Resource": "arn:aws:s3:::<bucket_name>"
}
For more information, see Specifying Permissions in a Policy in the S3 documentation.
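One other thing worth checking: when ListBucket is granted only under an s3:prefix condition (which would explain why the path-scoped CLI call works and the bucket-wide one does not), the boto3 call succeeds only if the Prefix argument matches the allowed path, usually including the trailing slash. A minimal sketch under that assumption, reusing the session from your question:
client = session.client('s3')

# With a prefix-restricted ListBucket grant, Prefix must match the
# allowed path exactly, typically with the trailing slash
resp = client.list_objects_v2(Bucket='the_bucket',
                              Prefix='and/this/specific/path/',
                              Delimiter='/')
for obj in resp.get('Contents', []):
    print(obj['Key'])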

S3 Boto 403 Forbidden Unless Access Given to "Any Authenticated AWS User"

I am using Python and Boto to upload images to S3. I can get it to work if I add a grantee of "Any Authenticated AWS User" and give this grantee permission to upload/delete. However, my impression from the documentation and several different posts on this site is that this would allow literally any authenticated AWS user, not just those authenticated to my account, to access the bucket, which I do not want. However, I am unable to upload files (403) if I only give upload/delete permission to the owner of the account, even though I authenticate like this:
s3 = boto.connect_s3(aws_access_key_id=AWS_ACCESS_KEY_ID,
                     aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
im = Image.open(BytesIO(urllib.urlopen(self.url).read()))
filename = self.url.split('/')[-1].split('.')[0]
extension = self.url.split('.')[-1]
out_im2 = cStringIO.StringIO()
im.save(out_im2, im.format)
key = bucket.new_key(filename + "." + extension)
key.set_contents_from_string(out_im2.getvalue(), headers={
    "Content-Type": extension_contenttype_mapping[extension],
})
key.set_acl('public-read')
self.file = bucket_url + filename + "." + extension
What am I doing wrong in this situation?
I found an answer at least, if not the one that I was looking for. I created a user specific to this bucket and added that user to a group with AmazonS3FullAccess permissions, which I also had to create. Then I modified my boto requests to use this user instead of the account owner, and I added this bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::media.example.com",
                "arn:aws:s3:::media.example.com/*"
            ]
        }
    ]
}
This worked for me, although I don't know whether the bucket policy was part of the solution, and I still don't know why it did not work when I was trying as the owner user. This is, however, the more proper and secure way to do things anyway.
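As a side note, if you later move from legacy boto to boto3, the same authenticated upload with the dedicated user's keys would look roughly like this sketch (the bucket, key and mapping names are taken from the question and are illustrative):
import boto3

s3 = boto3.client('s3',
                  aws_access_key_id=AWS_ACCESS_KEY_ID,  # the bucket-specific user's keys
                  aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

s3.put_object(Bucket='media.example.com',
              Key=filename + "." + extension,
              Body=out_im2.getvalue(),
              ContentType=extension_contenttype_mapping[extension],
              ACL='public-read')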
