I am trying to read a JSON file from my S3 bucket using a Lambda function.
I am getting Access Denied with the error below:
Starting new HTTPS connection (1): test-dev-cognito-settings-us-west-2.s3.us-west-2.amazonaws.com
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied: ClientError
My code snippet is below:
import boto3
import logging

def trigger_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    s3 = boto3.resource('s3')
    obj = s3.Object('test-dev-cognito-settings-us-west-2', 'test/map.json')  # This line works
    regions = obj.get()['Body'].read()  # This line gives Access Denied :(
    logger.info('received event: %s', obj)
    return event
The IAM policy attached to the Lambda function's role is below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Test",
      "Effect": "Allow",
      "Action": "s3:Get*",
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
The bucket policy statement attached to the S3 bucket is below:
{
  "Sid": "AllowForSpecificLambda",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::XXXXXXXXXX:role/lambda_allow_pretoken_generation_jdtest"
  },
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::test-dev-cognito-settings-us-west-2/*",
    "arn:aws:s3:::test-dev-cognito-settings-us-west-2"
  ]
}
Any help?
Thanks
Related
I've used Terraform to set up infra for an S3 bucket and my containerised Lambda. I want to trigger the Lambda to list the items in my S3 bucket. When I run the AWS CLI it's fine:
aws s3 ls
returns
2022-11-08 23:04:19 bucket-name
This is my lambda:
import logging
import boto3

LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.DEBUG)

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    LOGGER.info('Executing function...')
    bucket = s3.Bucket('bucket-name')
    total_objects = 0
    for i in bucket.objects.all():
        total_objects = total_objects + 1
    return {'total_objects': total_objects}
When I run the test in the AWS console, I'm getting this:
[ERROR] ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
No idea why this is happening. These are my Terraform Lambda policies, roles and the S3 setup:
resource "aws_s3_bucket" "statements_bucket" {
bucket = "bucket-name"
acl = "private"
}
resource "aws_s3_object" "object" {
bucket = aws_s3_bucket.statements_bucket.id
key = "excel/"
}
resource "aws_iam_role" "lambda" {
name = "${local.prefix}-lambda-role"
path = "/"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_policy" "lambda" {
name = "${local.prefix}-lambda-policy"
description = "S3 specified access"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::bucket-name"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::bucket-name/*"
]
}
]
}
EOF
}
As far as I can tell, your Lambda function has the correct IAM role (the one defined in your Terraform template), but that IAM role has no policies attached to it.
You need to attach the S3 policy, and any other IAM policies needed, to the IAM role. For example:
resource "aws_iam_role_policy_attachment" "lambda-attach" {
role = aws_iam_role.role.name
policy_arn = aws_iam_policy.policy.arn
}
In order to run aws s3 ls, you would need to authorize the action s3:ListAllMyBuckets. This is because aws s3 ls lists all of your buckets.
You should be able to list the contents of your bucket using aws s3 ls s3://bucket-name. However, you're probably going to have to add "arn:aws:s3:::bucket-name/*" to the resource list for your role as well. Edit: nevermind!
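To make the distinction concrete, here is a minimal boto3 sketch (bucket-name is the same placeholder as in your question): list_buckets needs s3:ListAllMyBuckets, while listing a single bucket's contents only needs s3:ListBucket on that bucket's ARN.

import boto3

s3_client = boto3.client('s3')

# Needs s3:ListAllMyBuckets on "*" -- this is what `aws s3 ls` does under the hood.
bucket_names = [b['Name'] for b in s3_client.list_buckets()['Buckets']]

# Needs s3:ListBucket on arn:aws:s3:::bucket-name -- this is all the Lambda above requires.
listing = s3_client.list_objects_v2(Bucket='bucket-name')
print(bucket_names, listing.get('KeyCount', 0))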
After creating an IAM user, I am not able to perform a DeleteObject action. The necessary info (Access key ID, Secret access key, etc.) has been inserted as env variables. Upload and Download operations can be performed without issue.
IAM user policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::************"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl",
        "s3:GetObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::************",
        "arn:aws:s3:::************/*"
      ]
    }
  ]
}
Bucket Permissions
Block all public access: off (all 4 options)
Error Message
Performing s3.Object('BUCKET_NAME','fol.jpeg').delete()
gets me this error message:
botocore.exceptions.ClientError: An error occurred (AllAccessDisabled) when calling the DeleteObject operation: All access to this object has been disabled
The typical reason that you would see AllAccessDisabled is that AWS has suspended the underlying account. If that turns out not to be the cause, then read this answer for other possibilities.
Also, information on reactivating a suspended account is here.
Please note: I am not using AWS as the S3 provider but something called Cegedim S3.
The following operation is not working with the MinIO client, but it works with boto3.
When I try to set up the following policy with the MinIO client, it works for bucket-level operations but not for object-level operations.
policy = {
    "Version": "2012-10-17",
    "Id": "Policy1639139464683",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::test",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "{user_access_key}"
                ]
            },
            "Sid": "Stmt1639139460416"
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::test/*",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "{user_access_key}"
                ]
            },
            "Sid": "Stmt1639139460415"
        }
    ]
}
Minio connection
from minio import Minio

def minio(self) -> Minio:
    return Minio(
        endpoint=f"{self.config.s3.host}:{self.config.s3.port}",
        access_key=Secrets.S3_USER.get_value(),
        secret_key=Secrets.S3_PASSWORD.get_value(),
        secure=(self.config.s3.schema != "http"),
    )
After setting up this policy I can't perform get/put or any other operation on objects.
Is there any workaround for this with the MinIO client?
If you share the actual error that you get, it would be easier to figure out the problem. However, I would venture to guess that you also need "s3:GetBucketLocation" in your bucket level permissions statement in the policy.
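If you are applying this as a bucket policy, a minimal sketch of that adjustment might look like the following, assuming you attach it with the MinIO client's set_bucket_policy, that the bucket is called test as in your snippet, and that `policy` is the dict from your question (the endpoint and credentials below are placeholders):

import json
from minio import Minio

# Placeholder endpoint and credentials -- substitute the values from your config.
client = Minio("minio.example.com:9000", access_key="ACCESS_KEY", secret_key="SECRET_KEY", secure=True)

# `policy` is the dict from the question; add the missing bucket-level action, then re-apply it.
policy["Statement"][0]["Action"].append("s3:GetBucketLocation")
client.set_bucket_policy("test", json.dumps(policy))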
I'm trying to upload an Excel file (binary) to my S3 bucket using a Postman PUT request. The URL for the PUT request is generated using boto3 (Python):
upload_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={'Bucket': bucket_name, 'Key': key},
    ExpiresIn=3600,
    HttpMethod='PUT',
)
When I try to upload a file to the generated URL using a PUT request in Postman, I get this:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>NumbersAndLetters</RequestId>
  <HostId>NumbersAndLetters</HostId>
</Error>
The S3 bucket policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::12digitnumber:root"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucket-name"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::12digitnumber:root"
      },
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
Eugene,
according to this discussion, you should use the generate_presigned_post function to generate a URL to use to upload your files.
Just to make sure, have you double-checked that the credentials used in the boto3 script to generate the pre-signed URL are the same ones granted access in your S3 policy?
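For reference, a minimal sketch of the presigned-POST flow could look like this (bucket_name and key are the placeholders from your snippet, report.xlsx is just an example filename, and the upload side uses the requests library rather than Postman):

import boto3
import requests

s3 = boto3.client('s3')

# Generate the presigned POST; it returns a URL plus form fields that must accompany the upload.
post = s3.generate_presigned_post(Bucket=bucket_name, Key=key, ExpiresIn=3600)

# Upload the file as multipart/form-data, passing the returned fields alongside it.
with open('report.xlsx', 'rb') as f:
    resp = requests.post(post['url'], data=post['fields'], files={'file': f})
print(resp.status_code)  # S3 returns 204 on success by default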
The problem was in the template.yaml. You have to add the following policies to your function in the template.yaml:
Policies:
  - S3ReadPolicy:
      BucketName: "bucket-name"
  - S3WritePolicy:
      BucketName: "bucket-name"
I'm writing a Lambda function which attaches a policy to the latest bucket. The script attaches the policy to the named bucket, but I'm not too sure how to get the name of the latest bucket created. Any ideas are appreciated :). The current code is below.
import boto3
import json

client = boto3.client('s3')
bucket_name = '*'

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyS3PublicObjectACL",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::%s/*" % bucket_name,
        "Condition": {
            "StringEqualsIgnoreCaseIfExists": {
                "s3:x-amz-acl": [
                    "public-read",
                    "public-read-write",
                    "authenticated-read"
                ]
            }
        }
    }]
}

bucket_policy = json.dumps(bucket_policy)  # Convert the policy into a JSON string.

def lambda_handler(event, context):
    response = client.put_bucket_policy(Bucket=bucket_name, ConfirmRemoveSelfBucketAccess=True, Policy=bucket_policy)
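One way to pick the most recently created bucket is to sort the output of list_buckets by CreationDate. A minimal sketch, assuming the Lambda role is allowed s3:ListAllMyBuckets and s3:PutBucketPolicy on the target bucket:

import boto3
import json

client = boto3.client('s3')

def latest_bucket_name():
    # list_buckets returns every bucket with its CreationDate; take the newest one.
    buckets = client.list_buckets()['Buckets']
    return max(buckets, key=lambda b: b['CreationDate'])['Name']

def lambda_handler(event, context):
    bucket_name = latest_bucket_name()
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyS3PublicObjectACL",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::%s/*" % bucket_name,
            "Condition": {
                "StringEqualsIgnoreCaseIfExists": {
                    "s3:x-amz-acl": ["public-read", "public-read-write", "authenticated-read"]
                }
            }
        }]
    }
    # Attach the deny-public-ACL policy to the newest bucket.
    return client.put_bucket_policy(
        Bucket=bucket_name,
        ConfirmRemoveSelfBucketAccess=True,
        Policy=json.dumps(bucket_policy),
    )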