I'm writing a Lambda function that attaches a policy to the most recently created bucket. The script attaches the policy to a named bucket, but I'm not sure how to get the name of the latest bucket created. Any ideas are appreciated :). The current code is below.
import boto3
import json

client = boto3.client('s3')

bucket_name = '*'

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyS3PublicObjectACL",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::%s/*" % bucket_name,
        "Condition": {
            "StringEqualsIgnoreCaseIfExists": {
                "s3:x-amz-acl": [
                    "public-read",
                    "public-read-write",
                    "authenticated-read"
                ]
            }
        }
    }]
}

# Convert the policy into a JSON string.
bucket_policy = json.dumps(bucket_policy)

def lambda_handler(event, context):
    response = client.put_bucket_policy(
        Bucket=bucket_name,
        ConfirmRemoveSelfBucketAccess=True,
        Policy=bucket_policy
    )
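One possible way to get the name of the most recently created bucket (a sketch, untested; get_latest_bucket_name is a hypothetical helper) is to call list_buckets() and pick the entry with the newest CreationDate:

def get_latest_bucket_name():
    # List all buckets owned by this account and return the name of the
    # one with the most recent CreationDate.
    buckets = client.list_buckets()['Buckets']
    latest = max(buckets, key=lambda b: b['CreationDate'])
    return latest['Name']

Note that the Lambda's execution role would also need the s3:ListAllMyBuckets permission for the ListBuckets call.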
I am trying to generate a presigned URL using boto3 in Django. When I try to upload a file using the URL through Postman or curl, it fails with "SignatureDoesNotMatch". But when I try the same link after removing the "X-Amz-Algorithm" parameter from the generated URL, it works fine. Below are my S3 bucket policy, CORS configuration, and code. Please note I masked a few confidential fields with "###<name>###".
Bucket policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "###<arn>###",
                "###<arn>###/*"
            ]
        }
    ]
}
CORS configuration
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 2592000
    }
]
Python Code
import boto3
import botocore.client
from django.conf import settings

s3_client = boto3.client(
    settings.S3_RESOURCE,
    region_name=settings.AWS_REGION,  # eu-west-1
    config=botocore.client.Config(
        signature_version=settings.AWS_S3_SIGNATURE_VERSION,  # s3v4
        s3={'addressing_style': settings.AWS_S3_ADDRESSING_STYLE},  # virtual
    ),
)
# file_upload_folder and audiofile_name are defined elsewhere in the view.
presigned_url = s3_client.generate_presigned_url(
    ClientMethod='put_object',
    ExpiresIn=settings.AWS_PRESIGNED_EXPIRY,
    Params={
        'Bucket': settings.AWS_OPEN_UPLOAD_BUCKET,
        'Key': file_upload_folder + audiofile_name,
        'ACL': settings.AWS_DEFAULT_ACL,  # private
        'ContentType': 'audio/mpeg',
    },
)
Generated URL
https://s3.eu-west-1.amazonaws.com/###<bucket/key>###?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=###<credential>###eu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221220T114721Z&X-Amz-Expires=604799&X-Amz-SignedHeaders=content-type%3Bhost%3Bx-amz-acl&X-Amz-Signature=###Signature###
When I try to upload with the above URL using Postman or CURL, I am receiving the below error.
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
But it works fine if I remove the "X-Amz-Algorithm" key/value from the URL.
https://s3.eu-west-1.amazonaws.com/###<bucket/key>###?X-Amz-Credential=###<credential>###eu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221220T114721Z&X-Amz-Expires=604799&X-Amz-SignedHeaders=content-type%3Bhost%3Bx-amz-acl&X-Amz-Signature=###Signature###
Is there anything I'm missing while generating the presigned URL or while uploading?
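For what it's worth, since X-Amz-SignedHeaders in the generated URL includes content-type and x-amz-acl, the upload request must send those headers with exactly the values that were signed; the same applies to a Postman or curl upload. A minimal sketch of such an upload using the requests library (the file name is a placeholder; the ACL value is assumed to be 'private' as in the settings comment above):

import requests

# presigned_url is the value returned by generate_presigned_url above.
with open('audio.mp3', 'rb') as f:  # placeholder file name
    resp = requests.put(
        presigned_url,
        data=f,
        headers={
            'Content-Type': 'audio/mpeg',  # must match the ContentType that was signed
            'x-amz-acl': 'private',        # must match the ACL that was signed
        },
    )
print(resp.status_code, resp.text)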
I've used Terraform to set up infra for an S3 bucket and my containerised Lambda. I want to trigger the Lambda to list the items in my S3 bucket. When I run the AWS CLI it's fine:
aws s3 ls
returns
2022-11-08 23:04:19 bucket-name
This is my lambda:
import logging

import boto3

LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.DEBUG)

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    LOGGER.info('Executing function...')
    bucket = s3.Bucket('bucket-name')
    total_objects = 0
    for i in bucket.objects.all():
        total_objects = total_objects + 1
    return {'total_objects': total_objects}
When I run the test in the AWS console, I'm getting this:
[ERROR] ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
No idea why this is happening. These are my Terraform Lambda policies, roles, and the S3 setup:
resource "aws_s3_bucket" "statements_bucket" {
bucket = "bucket-name"
acl = "private"
}
resource "aws_s3_object" "object" {
bucket = aws_s3_bucket.statements_bucket.id
key = "excel/"
}
resource "aws_iam_role" "lambda" {
name = "${local.prefix}-lambda-role"
path = "/"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_policy" "lambda" {
name = "${local.prefix}-lambda-policy"
description = "S3 specified access"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::bucket-name"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::bucket-name/*"
]
}
]
}
EOF
}
As far as I can tell, your Lambda function has the correct IAM role (the one indicated in your Terraform template) but that IAM role has no attached policies.
You need to attach the S3 policy, and any other IAM policies needed, to the IAM role. For example:
resource "aws_iam_role_policy_attachment" "lambda-attach" {
role = aws_iam_role.role.name
policy_arn = aws_iam_policy.policy.arn
}
In order to run aws s3 ls, you would need to authorize the action s3:ListAllMyBuckets. This is because aws s3 ls lists all of your buckets.
You should be able to list the contents of your bucket using aws s3 ls s3://bucket-name. However, you're probably going to have to add "arn:aws:s3:::bucket-name/*" to the resource list for your role as well. Edit: nevermind!
I'm trying to upload a file to my S3 bucket using a Postman PUT request with an Excel file (binary). The URL for the PUT request is generated automatically using boto3 (Python):
upload_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={'Bucket': bucket_name, 'Key': key},
    ExpiresIn=3600,
    HttpMethod='PUT',
)
When I try to upload a file to that URL using PUT and generated URL in Postman I get this:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>NumbersAndLetters</RequestId>
<HostId>NumbersAndLetters</HostId>
</Error>
The S3 bucket policy is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::12digitnumber:root"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket-name"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::12digitnumber:root"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
Eugene,
according to this discussion, you should use the generate_presigned_post function to generate a URL to use to upload your files.
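A rough sketch of that approach (the bucket name and key below are placeholders, not values from the question):

import boto3

s3 = boto3.client('s3')
post = s3.generate_presigned_post(
    Bucket='bucket-name',     # placeholder
    Key='uploads/file.xlsx',  # placeholder
    ExpiresIn=3600,
)
# post['url'] is the endpoint to POST to; post['fields'] must be sent as
# form fields alongside the file in a multipart/form-data request.

In Postman that means using a form-data body containing all the returned fields plus the file, rather than a raw binary PUT.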
Just to make sure, have you double-checked that the credentials used in the boto3 script to generate the presigned URL are the same ones referenced in your S3 policy?
The problem was in the template.yaml. You have to add the following policies to your function in the template.yaml:
Policies:
  - S3ReadPolicy:
      BucketName: "bucket-name"
  - S3WritePolicy:
      BucketName: "bucket-name"
I am trying to read a JSON file from my S3 bucket using a Lambda function.
I am getting Access Denied with the below error:
Starting new HTTPS connection (1): test-dev-cognito-settings-us-west-2.s3.us-west-2.amazonaws.com
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied: ClientError
My Code snippet is below:
import boto3
import logging

def trigger_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    s3 = boto3.resource('s3')
    obj = s3.Object('test-dev-cognito-settings-us-west-2', 'test/map.json')  # This line works
    regions = obj.get()['Body'].read()  # This line gives Access Denied :(
    logger.info('received event: %s ', obj)
    return event
The IAM policy attached to my Lambda function's role is below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Test",
            "Effect": "Allow",
            "Action": "s3:Get*",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
The bucket policy statement attached to the S3 bucket is below.
{
    "Sid": "AllowForSpecificLambda",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:role/lambda_allow_pretoken_generation_jdtest"
    },
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::test-dev-cognito-settings-us-west-2/*",
        "arn:aws:s3:::test-dev-cognito-settings-us-west-2"
    ]
},
Any help?
Thanks
I've developed a Python script using the boto library to upload files to my S3 bucket. When uploading a file using the Key.set_contents_from_filename method, I specify the Cache-Control and Expires headers to enable proper browser caching. This all works fine: the files appear in my bucket with the correct headers set in the metadata field.
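For context, a rough sketch of the upload described above (boto 2.x; the bucket name, key, and header values here are placeholders):

import boto
from boto.s3.key import Key

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')  # placeholder bucket name

key = Key(bucket)
key.key = 'images/photo.jpg'  # placeholder object key
key.set_contents_from_filename(
    'photo.jpg',
    headers={
        'Cache-Control': 'max-age=31536000',
        'Expires': 'Thu, 31 Dec 2037 23:55:55 GMT',
    },
)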
To prevent hotlinking of my files, I added the following bucket policy in S3:
{
    "Version": "2008-10-17",
    "Id": "MySite",
    "Statement": [
        {
            "Sid": "Deny access if not specified referrer",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "http://www.mysite.com/*",
                        "http://mysite.com/*"
                    ]
                }
            }
        }
    ]
}
This policy works to prevent hotlinking, but now when I upload files using boto, the Cache-Control and Expires headers are not being set. Removing the bucket policy fixes the problem, so I'm clearly not specifying the bucket policy properly.
Any ideas on how to modify my bucket policy to allow uploading metadata fields with boto while still preventing hotlinking?
I haven't tested the headers myself, but maybe the problem is with the Referer header. I suggest you add this policy, which allows getting and putting objects in your bucket based on the Referer.
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowFromReferer",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::745684876799:user/IAM_USER"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://mysite.com/*",
                        "http://www.mysite.com/*"
                    ]
                }
            }
        }
    ]
}
If this fails, you can assume the problem is with the Referer. Use just * for the Referer, and if it then works fine, it definitely is your Referer problem.