Unable to upload image to amazon s3 from boto [duplicate] - python

I get the error "AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256." when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.
Script:
backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'
s3 = AWS::S3.new(
  access_key_id: AMAZONS3['access_key_id'],
  secret_access_key: AMAZONS3['secret_access_key']
)
s3_bucket = s3.buckets['test-frankfurt']
# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"
file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)
aws-sdk (1.56.0)
How can I fix it?
Thank you.

AWS4-HMAC-SHA256, also known as Signature Version 4, ("V4") is one of two authentication schemes supported by S3.
All regions support V4, but US-Standard¹, and many -- but not all -- other regions, also support the other, older scheme, Signature Version 2 ("V2").
According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html ... new S3 regions deployed after January, 2014 will only support V4.
Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.
I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written,
"Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.

With node, try
var s3 = new AWS.S3({
  endpoint: 's3-eu-central-1.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-central-1'
});

You should set signatureVersion: 'v4' in the config to use the new signing version:
AWS.config.update({
signatureVersion: 'v4'
});
This works for the JS SDK.

For people using boto3 (the Python SDK), use the code below:
import boto3
from botocore.client import Config

s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    config=Config(signature_version='s3v4')
)

I have been using Django, and I had to add these extra config variables to make this work (in addition to the settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html).
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"

Similar issue with the PHP SDK, this works:
$s3Client = S3Client::factory(array('key'=>YOUR_AWS_KEY, 'secret'=>YOUR_AWS_SECRET, 'signature' => 'v4', 'region'=>'eu-central-1'));
The important bits are the signature and the region.

AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
This also saved my time after searching for 24 hours.

Code for Flask (boto3)
Don't forget to import Config. Also, if you have your own config class, change its name.
import boto3
from botocore.client import Config

s3 = boto3.client('s3',
    config=Config(signature_version='s3v4'),
    region_name=app.config["AWS_REGION"],
    aws_access_key_id=app.config['AWS_ACCESS_KEY'],
    aws_secret_access_key=app.config['AWS_SECRET_KEY'])
s3.upload_fileobj(file, app.config["AWS_BUCKET_NAME"], file.filename)
url = s3.generate_presigned_url('get_object', Params={'Bucket': app.config["AWS_BUCKET_NAME"], 'Key': file.filename}, ExpiresIn=10000)

In Java I had to set a property
System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")
and add the region to the s3Client instance.
s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))

With boto3, this is the code:
s3_client = boto3.resource('s3', region_name='eu-central-1')
or
s3_client = boto3.client('s3', region_name='eu-central-1')

For thumbor-aws, which used the boto config, I needed to put this into the $AWS_CONFIG_FILE:
[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3
So this may be useful for anything that uses boto directly without changes.

Supernova's answer for django/boto3/django-storages worked for me:
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Just add them to your settings.py and change the region code accordingly.
You can look up AWS region codes in the AWS documentation.

For the Android SDK, setEndpoint solves the problem, although it has been deprecated.
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
context, "identityPoolId", Regions.US_EAST_1);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");

Basically, I was using an old version of aws-sdk and this error occurred after I updated the version. In my case with Node.js, I was passing signatureVersion inside the params object, like this:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    signatureVersion: 'v4',
    region: process.env.AWS_S3_REGION
  }
});
Then I moved signatureVersion out of the params object and it worked like a charm:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    region: process.env.AWS_S3_REGION
  },
  signatureVersion: 'v4'
});

Check your AWS S3 bucket region and pass the proper region in the connection request.
In my scenario I have set 'APSouth1' for Asia Pacific (Mumbai):
using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
    GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
    {
        BucketName = bucketName,
        Key = keyName,
        Expires = DateTime.Now.AddMinutes(50),
    };
    urlString = client.GetPreSignedURL(request1);
}

In my case, the request type was wrong. I was using GET (dumb); it must be PUT.

Here is the function I used with Python:
import os
import boto3
# settings and logger come from the surrounding application

def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client('s3',
        endpoint_url=settings.BUCKET_ENDPOINT_URL,
        aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
        aws_secret_access_key=settings.BUCKET_SECRET_KEY,
        region_name=settings.BUCKET_REGION_NAME
    )
    try:
        s3.upload_file(
            filePath,
            settings.BUCKET_NAME,
            s3FileName
        )
        # remove file from local to free up space
        os.remove(filePath)
        return True
    except Exception as e:
        logger.error('uploadFileToS3#Error')
        logger.error(e)
        return False

Sometimes the default version will not update. Add this setting
AWS_S3_SIGNATURE_VERSION = "s3v4"
in settings.py.

For Boto3, use this code:
import boto3
from botocore.client import Config

s3 = boto3.resource('s3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    region_name='us-south-1',  # replace with your bucket's region
    config=Config(signature_version='s3v4')
)

Try this combination.
const s3 = new AWS.S3({
  endpoint: 's3-ap-south-1.amazonaws.com', // Bucket region
  accessKeyId: 'A-----------------U',
  secretAccessKey: 'k------ja----------------soGp',
  Bucket: 'bucket_name',
  useAccelerateEndpoint: true,
  signatureVersion: 'v4',
  region: 'ap-south-1' // Bucket region
});

I was stuck for 3 days and finally, after reading a ton of blogs and answers, I was able to configure an Amazon AWS S3 bucket.
On the AWS Side
I am assuming you have already
Created an s3-bucket
Created a user in IAM
Steps
Configure CORS settings
your bucket > permissions > CORS configuration
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Generate A bucket policy
your bucket > permissions > bucket policy
It should be similar to this one
{
  "Version": "2012-10-17",
  "Id": "Policy1602480700663",
  "Statement": [
    {
      "Sid": "Stmt1602480694902",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
    }
  ]
}
PS: Bucket policy should say `public` after this
Configure Access Control List
your bucket > permissions > access control list
give public access
PS: Access Control List should say public after this
Unblock public Access
your bucket > permissions > Block Public Access
Edit and turn all options Off
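If you prefer to script that last console step, a minimal boto3 sketch (the bucket name is taken from the policy above and is just a placeholder) might look like this:
import boto3

s3 = boto3.client("s3")

# Turn all four "Block Public Access" options off for the bucket
s3.put_public_access_block(
    Bucket="harshit-portfolio-bucket",  # placeholder, use your own bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)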
On a side note, if you are working on Django, add the following lines to the settings.py file of your project:
#S3 BUCKETS CONFIG
AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# look for files first in aws
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"

Also coming from: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
For me this was the solution:
AWS_S3_REGION_NAME = "eu-central-1"
AWS_S3_ADDRESSING_STYLE = 'virtual'
This needs to be added to settings.py in your Django project

If you are using the PHP SDK, follow the example below.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$client = S3Client::factory(
    array(
        'signature' => 'v4',
        'region' => 'me-south-1',
        'key' => YOUR_AWS_KEY,
        'secret' => YOUR_AWS_SECRET
    )
);

Nodejs
var aws = require("aws-sdk");
aws.config.update({
region: process.env.AWS_REGION,
secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
});
var s3 = new aws.S3({
signatureVersion: "v4",
});
let data = await s3.getSignedUrl("putObject", {
ContentType: mimeType, //image mime type from request
Bucket: "MybucketName",
Key: folder_name + "/" + uuidv4() + "." + mime.extension(mimeType),
Expires: 300,
});
console.log(data);
AWS S3 bucket permission configuration:
Deselect "Block all public access"
Add the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::MybucketName/*"]
    }
  ]
}
Then paste the returned URL and make a PUT request to it with the binary file of the image.
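As a hedged illustration of that last step in Python (the URL and file name are placeholders), the PUT request could look like this; the Content-Type header must match the ContentType the URL was signed with:
import requests

# placeholder: the URL returned by getSignedUrl above
presigned_url = "https://MybucketName.s3.amazonaws.com/folder/uuid.jpg?X-Amz-..."

with open("photo.jpg", "rb") as f:
    resp = requests.put(presigned_url, data=f, headers={"Content-Type": "image/jpeg"})
resp.raise_for_status()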

Full working nodejs version:
const AWS = require('aws-sdk');
var s3 = new AWS.S3({
  endpoint: 's3.eu-west-2.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-west-2'
});

const getPreSignedUrl = async () => {
  const params = {
    // folder prefixes belong in the Key, not in the bucket name
    Bucket: 'some-bucket-name',
    Key: 'some-folder/some-filename.json',
    Expires: 60 * 60 * 24 * 7
  };
  try {
    const presignedUrl = await new Promise((resolve, reject) => {
      s3.getSignedUrl('getObject', params, (err, url) => {
        err ? reject(err) : resolve(url);
      });
    });
    console.log(presignedUrl);
  } catch (err) {
    if (err) {
      console.log(err);
    }
  }
};
getPreSignedUrl();

Related

how to save image to aws-S3 by aws-lambda

The target image is on the website below:
https://www.blockchaincenter.net/en/bitcoin-rainbow-chart/
I want to save the rainbow image (/html/body/div/div[4]/div/div[3]/div/canvas) to my AWS S3 bucket.
I use screenshot_as_png on my Windows desktop, but I don't know how to apply this on AWS Lambda with Python.
How do I upload the screenshot to S3?
I tried the code below but it does not work:
s3_read = boto3.client('s3')
s3_write = boto3.resource('s3')
rainbow = driver.find_element('xpath', '//*[@id="rainbow"]').screenshot
s3_write.Bucket('mybucket').put_object(Key='rainbow.png', Body=rainbow)
To save an image to AWS S3 using AWS Lambda, you can follow the steps below:
Set up an AWS Lambda function and give it permission to access your S3 bucket.
Install the AWS SDK for Node.js in your AWS Lambda function.
Add the following code to your Lambda function to save the image to your S3 bucket:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const bucketName = 'your-bucket-name';
const objectKey = 'your-object-key';
const contentType = 'image/jpeg'; // Replace with your image content type
const body = 'your-image-data'; // Replace with your image data
const params = {
  Bucket: bucketName,
  Key: objectKey,
  Body: body,
  ContentType: contentType
};

s3.putObject(params, function(err, data) {
  if (err) {
    console.log(err, err.stack);
  } else {
    console.log(data);
  }
});
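For the Python/Selenium case in the question, a minimal boto3 sketch (bucket and key names are placeholders, and it assumes Selenium 4 and boto3 are available in the Lambda environment) might look like this; screenshot_as_png returns the element screenshot as raw PNG bytes, which can be passed straight to put_object:
import boto3

s3 = boto3.client('s3')

def save_rainbow_chart(driver, bucket='mybucket', key='rainbow.png'):
    # grab the canvas element and take its screenshot as PNG bytes
    element = driver.find_element('xpath', '//*[@id="rainbow"]')
    png_bytes = element.screenshot_as_png
    # upload the bytes directly to S3
    s3.put_object(Bucket=bucket, Key=key, Body=png_bytes, ContentType='image/png')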

Bug in Boto3 AWS S3 generate_presigned_url in Lambda Python 3.X with specified region?

I tried to write a python lambda function that returns a pre-signed url to put an object.
import os
import json
import boto3

session = boto3.Session(region_name=os.environ['AWS_REGION'])
s3 = session.client('s3', region_name=os.environ['AWS_REGION'])

upload_bucket = 'BUCKET_NAME' # Replace this value with your bucket name!
URL_EXPIRATION_SECONDS = 30000 # Specify how long the pre-signed URL will be valid for

# Main Lambda entry point
def lambda_handler(event, context):
    return get_upload_url(event)

def get_upload_url(event):
    key = 'testimage.jpg' # Random filename we will use when uploading files
    # Get signed URL from S3
    s3_params = {
        'Bucket': upload_bucket,
        'Key': key,
        'Expires': URL_EXPIRATION_SECONDS,
        'ContentType': 'image/jpeg' # Change this to the media type of the files you want to upload
    }
    # Get signed URL
    upload_url = s3.generate_presigned_url(
        'put_object',
        Params=s3_params,
        ExpiresIn=URL_EXPIRATION_SECONDS
    )
    return {
        'statusCode': 200,
        'isBase64Encoded': False,
        'headers': {
            'Access-Control-Allow-Origin': '*'
        },
        'body': json.dumps(upload_url)
    }
The code itself works and returns a signed URL in the format "https://BUCKET_NAME.s3.amazonaws.com/testimage.jpg?[...]"
However, when using Postman to try to PUT an object, it loads without ending.
Originally I thought it was because of my code, and after a while I wrote a NodeJS function that does the same thing:
const AWS = require('aws-sdk')
AWS.config.update({ region: process.env.AWS_REGION })
const s3 = new AWS.S3()

const uploadBucket = 'BUCKET_NAME' // Replace this value with your bucket name!
const URL_EXPIRATION_SECONDS = 30000 // Specify how long the pre-signed URL will be valid for

// Main Lambda entry point
exports.handler = async (event) => {
  return await getUploadURL(event)
}

const getUploadURL = async function(event) {
  const randomID = parseInt(Math.random() * 10000000)
  const Key = 'testimage.jpg' // Random filename we will use when uploading files

  // Get signed URL from S3
  const s3Params = {
    Bucket: uploadBucket,
    Key,
    Expires: URL_EXPIRATION_SECONDS,
    ContentType: 'image/jpeg' // Change this to the media type of the files you want to upload
  }

  return new Promise((resolve, reject) => {
    // Get signed URL
    let uploadURL = s3.getSignedUrl('putObject', s3Params)
    resolve({
      "statusCode": 200,
      "isBase64Encoded": false,
      "headers": {
        "Access-Control-Allow-Origin": "*"
      },
      "body": JSON.stringify(uploadURL)
    })
  })
}
The NodeJs version gives me a url in the format of "https://BUCKET_NAME.s3.eu-west-1.amazonaws.com/testimage.jpg?"
The main difference between the two is the aws sub domain. When using NodeJS it gives me "BUCKET_NAME.s3.eu-west-1.amazonaws.com" and when using Python "https://BUCKET_NAME.s3.amazonaws.com"
When using python the region does not appear.
I tried taking the signed URL generated in Python and adding the "s3.eu-west-1" part manually, and it works!
Is this a bug in the AWS Boto3 Python library?
As you can see, in the Python code I tried to specify the region, but it does not do anything.
Any ideas?
I wanna solve this mystery :)
Thanks a lot in advance,
Léo
I was able to reproduce the issue in us-east-1. There are a few bugs in Github (e.g., this and this), but the proposed resolutions are inconsistent.
The workaround is to create an Internet-facing access point for the bucket and then assign the full ARN of the access point to your upload_bucket variable.
Please note that the Lambda will create a pre-signed URL, but it will only work if the Lambda has an appropriate permissions policy attached to its execution role.
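A hedged sketch of that workaround (all ARN, region and key values are placeholders; whether presigning against an access point ARN works depends on your botocore version):
import boto3

s3 = boto3.client('s3', region_name='eu-west-1')

# Use the access point's full ARN instead of the bare bucket name
upload_bucket = 'arn:aws:s3:eu-west-1:123456789012:accesspoint/my-upload-access-point'

upload_url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': upload_bucket, 'Key': 'testimage.jpg', 'ContentType': 'image/jpeg'},
    ExpiresIn=3600,
)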

Using Python, how to fetch data from AWS Parameter Store in a CI Jenkins server (which runs my tests nightly)

I have spent a lot of time searching the web for this answer.
I know how to get my QA env aws_access_key_id & aws_secret_access_key while running my code locally on my PC; they are stored in my C:\Users\[name]\.aws config file:
[profile qa]
aws_access_key_id = ABC
aws_secret_access_key = XYZ
[profile crossaccount]
role_arn=arn:aws:ssm:us-east-1:12345678:parameter/A/B/secrets/C
source_profile=qa
With this Python code I actually see the correct values:
import boto3
session = boto3.Session(profile_name='qa')
s3client = session.client('s3')
credentials = session.get_credentials()
accessKey = credentials.access_key
secretKey = credentials.secret_key
print("accessKey= " + str(accessKey) + " secretKey="+ secretKey)
A) How do I get these when my code is running in CI? Do I pass the "aws_access_key_id" and "aws_secret_access_key" as arguments to the code?
B) How exactly do I assume a role and get the parameters from the AWS Systems Manager Parameter Store?
Given I know:
secretsExternalId: 'DEF',
secretsRoleArn: 'arn:aws:ssm:us-east-1:12345678:parameter/A/B/secrets/C',
Install boto3 and botocore on the CI server.
As you mentioned, the CI (Jenkins) server is also running on AWS, so you can attach a role to the CI server (EC2) which provides SSM read permission.
Below is a sample permission policy for the role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:Describe*",
        "ssm:Get*",
        "ssm:List*"
      ],
      "Resource": "*"
    }
  ]
}
And from your Python code, you can simply call SSM to get the parameter values.
First export the parameter paths as environment variables, for example:
$ export parameter_value1="/path/to/ssm/parameter_1"
$ export parameter_value2="/path/to/ssm/parameter_2"
Then read them and fetch the parameters:
import os
import boto3

session = boto3.Session(region_name='eu-west-1')
ssm = session.client('ssm')

# these environment variables hold the SSM parameter paths exported above
MYSQL_HOSTNAME = os.environ.get('parameter_value1')
MYSQL_USERNAME = os.environ.get('parameter_value2')

hostname = ssm.get_parameter(Name=MYSQL_HOSTNAME, WithDecryption=True)
username = ssm.get_parameter(Name=MYSQL_USERNAME, WithDecryption=True)

print("Param1: {}".format(hostname['Parameter']['Value']))
print("Param2: {}".format(username['Parameter']['Value']))

Copy from S3 bucket in one account to S3 bucket in another account using Boto3 in AWS Lambda

I have created a S3 bucket and created a file under my aws account. My account has trust relationship established with another account and I am able to put objects into the bucket in another account using Boto3. How can I copy objects from bucket in my account to bucket in another account using Boto3?
I see "access denied" when I use the code below -
source_session = boto3.Session(region_name = 'us-east-1')
source_conn = source_session.resource('s3')
src_conn = source_session.client('s3')
dest_session = __aws_session(role_arn=assumed_role_arn, session_name='dest_session')
dest_conn = dest_session.client('s3')
copy_source = {'Bucket': bucket_name, 'Key': key_value}
dest_conn.copy(copy_source, dest_bucket_name, dest_key, ExtraArgs={'ServerSideEncryption': 'AES256'}, SourceClient=src_conn)
In my case, src_conn has access to the source bucket and dest_conn has access to the destination bucket.
I believe the only way to achieve this is by downloading and uploading the files.
AWS Session
client = boto3.client('sts')
response = client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
session = boto3.Session(
    aws_access_key_id=response['Credentials']['AccessKeyId'],
    aws_secret_access_key=response['Credentials']['SecretAccessKey'],
    aws_session_token=response['Credentials']['SessionToken'])
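A hedged sketch of the download-then-upload approach described above (bucket and key names are placeholders; 'session' is the assumed-role session from the snippet above):
import boto3

src_s3 = boto3.client('s3')        # credentials of the source account
dest_s3 = session.client('s3')     # assumed-role session for the destination account

# download from the source bucket, then upload to the destination bucket
src_s3.download_file('source-bucket', 'path/key.txt', '/tmp/key.txt')
dest_s3.upload_file('/tmp/key.txt', 'dest-bucket', 'path/key.txt',
                    ExtraArgs={'ServerSideEncryption': 'AES256'})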
Another approach is to attach a policy to the destination bucket permitting access from the account hosting the source bucket, e.g. something like the following should work (although you may want to tighten up the permissions as appropriate):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<source account ID>:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::dst_bucket",
        "arn:aws:s3:::dst_bucket/*"
      ]
    }
  ]
}
Then your Lambda hosted in your source AWS account should have no problems writing to the bucket(s) in the destination AWS account.
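With that bucket policy in place, a hedged sketch of the copy using only the source account's credentials (bucket and key names are placeholders):
import boto3

s3 = boto3.client('s3')  # credentials from the source account

# server-side copy from the source bucket into the destination bucket
s3.copy(
    {'Bucket': 'source-bucket', 'Key': 'path/key.txt'},
    'dst_bucket',
    'path/key.txt',
    ExtraArgs={'ServerSideEncryption': 'AES256'},
)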

How to authenticate serverless web request using AWS Web API and Lambda?

Little background info,
I have built an interactive website where users can upload images to S3. I built it so the image upload goes right from the browser to AWS S3 using a signed request (Python Django backend).
Now the issue is that the users wish to be able to rotate the image. Similarly, I would like this set up so the user's request goes straight from the browser. I built an AWS Lambda function and attached it to a web API, which will accept POST requests. I have been testing and I finally got it working. The function takes 2 inputs, key and rotate_direction, which are passed as POST variables to the web API. They come into the Python function in the event variable. Here is the simple Lambda function:
from __future__ import print_function
import boto3
import os
import sys
import uuid
from PIL import Image

s3_client = boto3.client('s3')

def rotate_image(image_path, upload_path, rotate_direction):
    with Image.open(image_path) as image:
        if rotate_direction == "right":
            image.rotate(-90).save(upload_path)
        else:
            image.rotate(90).save(upload_path)

def handler(event, context):
    bucket = 'the-s3-bucket-name'
    key = event['key']
    rotate_direction = event['rotate_direction']
    download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
    upload_path = '/tmp/rotated_small-{}'.format(key)
    s3_client.download_file(bucket, key, download_path)
    rotate_image(download_path, upload_path, rotate_direction)
    s3_client.delete_object(Bucket=bucket, Key=key)
    s3_client.upload_file(upload_path, bucket, key)
    return { 'message':'rotated' }
Everything is working. So now my issue is how to enforce some kind of authentication for this system? The ownership details about each image reside on the django web server. While all the images are considered "public", I wish to enforce that only the owner of each image is allowed to rotate their own images.
With this project I have been venturing into new territory by making content requests right from the browser. I could understand how I could control access by only making the POST requests from the web server, where I could validate the ownership of the images. Would it still be possible having the request come from the browser?
TL;DR solution: create a Cognito Identity Pool and assign a policy so users can only upload files prefixed by their Identity ID.
If I understand your question correctly, you want to setup a way for an image stored on S3 to be viewable by public, yet only editable by the user who uploaded it. Actually, you can verify file ownership, rotate the image and upload the rotated image to S3 all in the browser without going through a Lambda function.
Step 1: Create a Cognito User Pool to create a user directory. If you already have a user login/sign up authentication system, you could skip this step.
Step 2: Create a Cognito Identity Pool to enable federated identity, so your users can get a temporary AWS credential from the Identity Pool and use it to upload files to S3 without going through your server/lambda.
Step 3: When creating the Cognito Identity Pool, you can define a policy on what S3 resources a user is allowed to access. Here is a sample policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "mobileanalytics:PutEvents",
        "cognito-sync:*",
        "cognito-identity:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/${cognito-identity.amazonaws.com:sub}*"
      ]
    }
  ]
}
Note the second block assigns "S3:GetObject" to all files in your S3 bucket; and the third block assigns "S3:PutObject" to ONLY FILES prefixed with the user's Cognito Identity ID.
Step 4: In frontend JS, get a temporary credential from Cognito Identity Pool
export function getAwsCredentials(userToken) {
  const authenticator = `cognito-idp.${config.cognito.REGION}.amazonaws.com/${config.cognito.USER_POOL_ID}`;

  AWS.config.update({ region: config.cognito.REGION });

  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: config.cognito.IDENTITY_POOL_ID,
    Logins: {
      [authenticator]: userToken
    }
  });

  return new Promise((resolve, reject) => (
    AWS.config.credentials.get((err) => {
      if (err) {
        reject(err);
        return;
      }
      resolve();
    })
  ));
}
Step 5: Upload files to S3 with the credential, prefix the file name with the user's Cognito Identity ID.
export async function s3Upload(file, userToken) {
  await getAwsCredentials(userToken);

  const s3 = new AWS.S3({
    params: {
      Bucket: config.s3.BUCKET,
    }
  });
  const filename = `${AWS.config.credentials.identityId}-${Date.now()}-${file.name}`;

  return new Promise((resolve, reject) => (
    s3.putObject({
      Key: filename,
      Body: file,
      ContentType: file.type,
      ACL: 'public-read',
    },
    (error, result) => {
      if (error) {
        reject(error);
        return;
      }
      resolve(`${config.s3.DOMAIN}/${config.s3.BUCKET}/${filename}`);
    })
  ));
}
