How to authenticate a serverless web request using AWS API Gateway and Lambda? - python

A little background info:
I have built an interactive website where users can upload images to S3. The image upload goes straight from the browser to AWS S3 using a signed request (Python Django backend).
Now the issue is that users want to be able to rotate their images. Similarly, I would like to set this up so the user's request goes straight from the browser. I built an AWS Lambda function and attached it to an API Gateway endpoint that accepts POST requests. I have been testing and I finally got it working. The function takes two inputs, key and rotate_direction, which are passed as POST variables to the API. They come into the Python function in the event variable. Here is the simple Lambda function:
from __future__ import print_function
import boto3
import os
import sys
import uuid
from PIL import Image

s3_client = boto3.client('s3')

def rotate_image(image_path, upload_path, rotate_direction):
    with Image.open(image_path) as image:
        if rotate_direction == "right":
            image.rotate(-90).save(upload_path)
        else:
            image.rotate(90).save(upload_path)

def handler(event, context):
    bucket = 'the-s3-bucket-name'
    key = event['key']
    rotate_direction = event['rotate_direction']
    download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
    upload_path = '/tmp/rotated_small-{}'.format(key)
    s3_client.download_file(bucket, key, download_path)
    rotate_image(download_path, upload_path, rotate_direction)
    s3_client.delete_object(Bucket=bucket, Key=key)
    s3_client.upload_file(upload_path, bucket, key)
    return {'message': 'rotated'}
Everything is working. My issue now is how to enforce some kind of authentication for this system. The ownership details for each image live on the Django web server. While all the images are considered "public", I want to enforce that only the owner of each image is allowed to rotate their own images.
With this project I have been venturing into new territory by making content requests straight from the browser. I understand how I could control access by only making the POST requests from the web server, where I could validate ownership of the images. Would it still be possible with the request coming from the browser?

TL;DR solution: create a Cognito Identity Pool and assign a policy so users can only upload files prefixed with their Identity ID.
If I understand your question correctly, you want to set up a way for an image stored on S3 to be viewable by the public, yet only editable by the user who uploaded it. Actually, you can verify file ownership, rotate the image and upload the rotated image to S3 all in the browser, without going through a Lambda function.
Step 1: Create a Cognito User Pool to create a user directory. If you already have a user login/sign up authentication system, you could skip this step.
Step 2: Create a Cognito Identity Pool to enable federated identities, so your users can get temporary AWS credentials from the Identity Pool and use them to upload files to S3 without going through your server/Lambda.
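If you want to experiment with this exchange from Python first, here is a minimal boto3 sketch; the region, pool IDs and id_token below are placeholders, not values from your setup (the token would come from a Cognito User Pool sign-in):
import boto3

# Placeholders - substitute your own region, pool IDs and a real ID token
# obtained from a Cognito User Pool sign-in.
REGION = 'us-east-1'
USER_POOL_ID = 'us-east-1_XXXXXXXXX'
IDENTITY_POOL_ID = 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
id_token = '...'

identity_client = boto3.client('cognito-identity', region_name=REGION)
logins = {f'cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}': id_token}

# Exchange the User Pool token for an Identity ID and temporary AWS credentials.
identity_id = identity_client.get_id(IdentityPoolId=IDENTITY_POOL_ID, Logins=logins)['IdentityId']
creds = identity_client.get_credentials_for_identity(IdentityId=identity_id, Logins=logins)['Credentials']
# creds contains AccessKeyId, SecretKey, SessionToken and Expiration.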
Step 3: When creating the Cognito Identity Pool, you can define a policy for which S3 resources a user is allowed to access. Here is a sample policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "mobileanalytics:PutEvents",
        "cognito-sync:*",
        "cognito-identity:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/${cognito-identity.amazonaws.com:sub}*"
      ]
    }
  ]
}
Note that the second statement grants "s3:GetObject" on all files in your S3 bucket, while the third statement grants "s3:PutObject" only on files prefixed with the user's Cognito Identity ID.
Step 4: In the frontend JS, get temporary credentials from the Cognito Identity Pool:
export function getAwsCredentials(userToken) {
  const authenticator = `cognito-idp.${config.cognito.REGION}.amazonaws.com/${config.cognito.USER_POOL_ID}`;

  AWS.config.update({ region: config.cognito.REGION });

  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: config.cognito.IDENTITY_POOL_ID,
    Logins: {
      [authenticator]: userToken
    }
  });

  return new Promise((resolve, reject) => (
    AWS.config.credentials.get((err) => {
      if (err) {
        reject(err);
        return;
      }
      resolve();
    })
  ));
}
Step 5: Upload files to S3 with the credentials, prefixing the file name with the user's Cognito Identity ID:
export async function s3Upload(file, userToken) {
  await getAwsCredentials(userToken);

  const s3 = new AWS.S3({
    params: {
      Bucket: config.s3.BUCKET,
    }
  });
  const filename = `${AWS.config.credentials.identityId}-${Date.now()}-${file.name}`;

  return new Promise((resolve, reject) => (
    s3.putObject({
      Key: filename,
      Body: file,
      ContentType: file.type,
      ACL: 'public-read',
    },
    (error, result) => {
      if (error) {
        reject(error);
        return;
      }
      resolve(`${config.s3.DOMAIN}/${config.s3.BUCKET}/${filename}`);
    })
  ));
}
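If you do decide to keep the rotate operation in your Lambda behind API Gateway, one possible ownership check is to compare the key prefix against the caller's Cognito Identity ID. This is only a sketch, assuming you switch the method to AWS_IAM authorization with a Lambda proxy integration and sign the browser request with the Identity Pool credentials; verify the request-context field names for your setup:
import json

def handler(event, context):
    # With AWS_IAM auth and a Lambda proxy integration, API Gateway passes the
    # caller's Cognito Identity ID in the request context.
    identity_id = event['requestContext']['identity']['cognitoIdentityId']
    body = json.loads(event['body'])
    key = body['key']
    rotate_direction = body['rotate_direction']

    # Keys are prefixed with the uploader's Identity ID (see Step 5), so only
    # the owner's requests pass this check.
    if not key.startswith(identity_id):
        return {'statusCode': 403, 'body': json.dumps({'message': 'not your image'})}

    # ...download, rotate and re-upload the object as in the original handler...
    return {'statusCode': 200, 'body': json.dumps({'message': 'rotated'})}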

Related

Integrate AWS API Gateway and AWS S3 Using AWS CDK for binary files

I am trying to set up a Content-Type integration for my API Gateway integration with S3.
s3_bucket_name = Fn.import_value("s3-bucket-name")

# Creating the policy document for API Gateway to access S3
s3_access_document = aws_iam.PolicyDocument(
    statements=[
        aws_iam.PolicyStatement(
            actions=[
                "s3:GetObject",
            ],
            effect=aws_iam.Effect.ALLOW,
            resources=[f"arn:aws:s3:::{s3_bucket_name}/script.ps1"],
        )
    ]
)

# Create the role for API Gateway to access S3
api_s3_role = aws_iam.Role(
    self,
    id=f"{stack_id}-s3-access-role",
    assumed_by=aws_iam.ServicePrincipal("apigateway.amazonaws.com"),
    role_name="remote-debugging-s3-access-role",
    inline_policies={
        "s3-access": s3_access_document,
    },
)

# Integrating S3 with API Gateway
s3_integration = aws_apigateway.AwsIntegration(
    service="s3",
    integration_http_method="GET",
    path=f"{s3_bucket_name}" + "{proxy}",
    options=aws_apigateway.IntegrationOptions(
        credentials_role=api_s3_role,
        integration_responses=[
            aws_apigateway.IntegrationResponse(
                status_code="200",
                response_parameters={
                    "method.response.header.Content-Type": "integration.response.header.Content-Type",
                },
            )
        ],
        request_parameters={
            "integration.request.path.proxy": "method.request.path.proxy",
        },
    ),
)

script_resource = api_gateway.root.add_resource("script")
script_resource.add_method(
    "GET",
    s3_integration,
    method_responses=[
        aws_apigateway.MethodResponse(
            status_code="200",
            response_parameters={
                "method.response.header.Content-Type": True,
            },
        )
    ],
    request_parameters={
        "method.request.path.proxy": True,
        "method.request.header.Content-Type": True,
    },
    authorizer=authorizer,
    authorization_type=aws_apigateway.AuthorizationType.COGNITO,
)
There are two things to fix here:
1. Something is not right with path=f"{s3_bucket_name}"+"{proxy}": I get an Access Denied error when I use it. If I use path=f"{s3_bucket_name}/script.ps1", it gives me a response.
2. The Content-Type is not being propagated to the method response and integration response.
I am trying to find any documents that would help me. I followed this link.
I feel I am missing some important details. Could you please help me with this, as I am new to CDK?

cross-account file upload in S3 bucket using boto3 and python

I have an S3 bucket with a given access_key and secret_access_key. I use the following code to upload files into my S3 bucket successfully.
import boto3
import os

client = boto3.client('s3',
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_access_key)

upload_file_bucket = 'my-bucket'
upload_file_key = 'my_folder/' + str(my_file)
client.upload_file(file, upload_file_bucket, upload_file_key)
Now, I want to upload my_file into another bucket that is owned by a new team. Therefore, I do not have access to access_key and secret_access_key. What is the best practice to do cross-account file upload using boto3 and Python?
You can actually use the same code, but the owner of the other AWS Account would need to add a Bucket Policy to the destination bucket that permits access from your IAM User. It would look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::their-bucket/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::YOUR-ACCOUNT-ID:user/username"
        ]
      }
    }
  ]
}
When uploading objects to a bucket owned by another AWS account, I recommend adding ACL='bucket-owner-full-control', like this:
client.upload_file(file, upload_file_bucket, upload_file_key, ExtraArgs={'ACL':'bucket-owner-full-control'})
This grants the bucket owner full control of the object, rather than leaving it accessible only to the account that performed the upload.

AWS IoT connection using Cognito credentials

I am having problems connecting to AWS IoT from a Python script using Cognito credentials. I am using the AWS IoT SDK for Python as well as the boto3 package. Here is what I am doing:
First, I set up a Cognito User Pool with a couple of users who have a username and password to login. I also set up a Cognito Identity Pool with my Cognito User Pool as the one and only Authentication Provider. I do not provide access to unauthenticated identities. The Identity Pool has an Auth Role I will just call "MyAuthRole" and when I go to IAM, that role has two policies attached: One is the default policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "mobileanalytics:PutEvents",
        "cognito-sync:*",
        "cognito-identity:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
and the second is a policy for IoT access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "iot:*",
      "Resource": "*"
    }
  ]
}
Next, I have Python code that uses my AWS IAM account credentials (access key and secret key) to get a temporary auth token, like this:
auth_data = {'USERNAME': username, 'PASSWORD': password}
provider_client = boto3.client('cognito-idp', region_name=region)
resp = provider_client.admin_initiate_auth(UserPoolId=user_pool_id, AuthFlow='ADMIN_NO_SRP_AUTH',
                                           AuthParameters=auth_data, ClientId=client_id)
id_token = resp['AuthenticationResult']['IdToken']
Then I try to use this token to connect to AWS IoT, using this function:
def _get_aws_cognito_temp_credentials(aws_access_key_id=None, aws_secret_access_key=None,
                                      region_name='us-west-2', account_id=None, user_pool_id=None,
                                      identity_pool_id=None, id_token=None):
    boto3.setup_default_session(aws_access_key_id=aws_access_key_id,
                                aws_secret_access_key=aws_secret_access_key,
                                region_name=region_name)
    identity_client = boto3.client('cognito-identity', region_name=region_name)
    loginkey = "cognito-idp.%s.amazonaws.com/%s" % (region_name, user_pool_id)
    print("loginkey is %s" % loginkey)
    loginsdict = {
        loginkey: id_token
    }
    identity_response = identity_client.get_id(AccountId=account_id,
                                               IdentityPoolId=identity_pool_id,
                                               Logins=loginsdict)
    identity_id = identity_response['IdentityId']
    #
    # Get the identity's credentials
    #
    credentials_response = identity_client.get_credentials_for_identity(
        IdentityId=identity_id,
        Logins=loginsdict)
    credentials = credentials_response['Credentials']
    access_key_id = credentials['AccessKeyId']
    secret_key = credentials['SecretKey']
    service = 'execute-api'
    session_token = credentials['SessionToken']
    expiration = credentials['Expiration']
    return access_key_id, secret_key, session_token, expiration
Finally I create the AWS IoT client and try to connect like this:
myAWSIoTMQTTClient = AWSIoTMQTTClient(clientId, useWebsocket=True)
myAWSIoTMQTTClient.configureEndpoint(host, port)
myAWSIoTMQTTClient.configureIAMCredentials(temp_access_key_id,
                                           temp_secret_key,
                                           temp_session_token)
myAWSIoTMQTTClient.configureAutoReconnectBackoffTime(1, 32, 20)
myAWSIoTMQTTClient.configureOfflinePublishQueueing(-1)
myAWSIoTMQTTClient.configureDrainingFrequency(2)
myAWSIoTMQTTClient.configureConnectDisconnectTimeout(10)
myAWSIoTMQTTClient.configureMQTTOperationTimeout(5)
log.info("create_aws_iot_client", pre_connect=True)
myAWSIoTMQTTClient.connect()
log.info("create_aws_iot_client", post_connect=True, myAWSIoTMQTTClient=myAWSIoTMQTTClient)
The problem is that it gets to pre_connect and then just hangs and eventually times out. The error message I get is this:
AWSIoTPythonSDK.exception.AWSIoTExceptions.connectTimeoutException
I also read somewhere that there may be other policies that somehow have to be attached:
"According to the Policies for HTTP and WebSocketClients documentation, in order to authenticate an Amazon Cognito identity to publish MQTT messages over HTTP, you must specify two policies. The first policy must be attached to an Amazon Cognito identity pool role. This is the managed policy AWSIoTDataAccess that was defined earlier in the IdentityPoolAuthRole.
The second policy must be attached to an Amazon Cognito user using the AWS IoT AttachPrincipalPolicy API."
However, I have no clue how to achieve the above in Python or using the AWS console.
How do I fix this issue?
You are right that the step you've missed is using the AttachPrincipalPolicy API (which is now deprecated and has been replaced by the IoT AttachPolicy API).
To do this:
Create an IoT policy (IoT Core > Secure > Policies > Create)
Give the policy the permissions you want any user attached to it to have; from your code as shared, that would mean just copying the second IAM policy (although you'll want to lock that down in production!).
Using the AWS CLI, you can attach this policy to your Cognito identity with:
aws iot attach-policy --policy-name <iot-policy-name> --target <cognito-identity-id>
There is a significantly more involved AWS example at aws-samples/aws-iot-chat-example; however, it is in JavaScript. I wasn't able to find an equivalent in Python, but the Cognito/IAM/IoT configuration steps and required API calls remain the same whatever the language.
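For reference, the same attach step can also be done from Python with boto3. A minimal sketch, where the policy name and Cognito Identity ID are placeholders for the values from your own setup:
import boto3

# Attach the IoT policy to the Cognito identity.
# 'my-iot-policy' and the identity ID below are placeholders.
iot_client = boto3.client('iot', region_name='us-west-2')
iot_client.attach_policy(
    policyName='my-iot-policy',
    target='us-west-2:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
)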

Copy from S3 bucket in one account to S3 bucket in another account using Boto3 in AWS Lambda

I have created a S3 bucket and created a file under my aws account. My account has trust relationship established with another account and I am able to put objects into the bucket in another account using Boto3. How can I copy objects from bucket in my account to bucket in another account using Boto3?
I see "access denied" when I use the code below -
source_session = boto3.Session(region_name='us-east-1')
source_conn = source_session.resource('s3')
src_conn = source_session.client('s3')

dest_session = __aws_session(role_arn=assumed_role_arn, session_name='dest_session')
dest_conn = dest_session.client('s3')

copy_source = {'Bucket': bucket_name, 'Key': key_value}
dest_conn.copy(copy_source, dest_bucket_name, dest_key,
               ExtraArgs={'ServerSideEncryption': 'AES256'}, SourceClient=src_conn)
In my case, src_conn has access to the source bucket and dest_conn has access to the destination bucket.
I believe the only way to achieve this is by downloading and uploading the files.
AWS Session
client = boto3.client('sts')
response = client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)

session = boto3.Session(
    aws_access_key_id=response['Credentials']['AccessKeyId'],
    aws_secret_access_key=response['Credentials']['SecretAccessKey'],
    aws_session_token=response['Credentials']['SessionToken'])
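A rough sketch of the download-then-upload approach described above, reusing the assumed-role session from the snippet; the bucket, key and destination names are the placeholders from the question:
import os
import boto3

# Download from the source bucket with the default (source-account) credentials,
# then upload with the assumed-role session created above.
src_client = boto3.client('s3', region_name='us-east-1')
dest_client = session.client('s3')

local_path = os.path.join('/tmp', os.path.basename(key_value))
src_client.download_file(bucket_name, key_value, local_path)
dest_client.upload_file(local_path, dest_bucket_name, dest_key,
                        ExtraArgs={'ServerSideEncryption': 'AES256'})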
Another approach is to attach a policy to the destination bucket permitting access from the account hosting the source bucket, e.g. something like the following should work (although you may want to tighten up the permissions as appropriate):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<source account ID>:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::dst_bucket",
        "arn:aws:s3:::dst_bucket/*"
      ]
    }
  ]
}
Then your Lambda hosted in your source AWS account should have no problems writing to the bucket(s) in the destination AWS account.

S3 Boto 403 Forbidden Unless Access Given to "Any Authenticated AWS User"

I am using Python and Boto to upload images to S3. I can get it to work if I add a grantee of "Any Authenticated AWS User" and give this grantee permission to upload/delete. However, my impression from the documentation and several different posts on this site is that this would allow literally any authenticated AWS user, not just those authenticated to my account, to access the bucket, which I do not want. However, I am unable to upload files (403) if I only give upload/delete permission to the owner of the account, even though I authenticate like this:
s3 = boto.connect_s3(aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

im = Image.open(BytesIO(urllib.urlopen(self.url).read()))
filename = self.url.split('/')[-1].split('.')[0]
extension = self.url.split('.')[-1]
out_im2 = cStringIO.StringIO()
im.save(out_im2, im.format)

key = bucket.new_key(filename + "." + extension)
key.set_contents_from_string(out_im2.getvalue(), headers={
    "Content-Type": extension_contenttype_mapping[extension],
})
key.set_acl('public-read')

self.file = bucket_url + filename + "." + extension
What am I doing wrong in this situation?
I found an answer at least, if not the one that I was looking for. I created a user specific to this bucket and added that user to a group with AmazonS3FullAccess permissions, which I also had to create. Then I modified my boto requests so that they use this user instead of the owner of the account, and I added this bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::media.example.com",
        "arn:aws:s3:::media.example.com/*"
      ]
    }
  ]
}
This worked for me, although I don't know if the bucket policy was part of the solution or not, and I still don't know why it did not work when I was trying as the owner user. This is, however, the more proper and secure way to do things anyway.
