AWS S3 PutObject Access Denied problem in Python

I'm trying to upload an image to AWS S3. This code previously worked fine (and is still working for another project). This is a brand new project with a new AWS S3 bucket. I noticed AWS has changed a lot again, and maybe that's the problem.
This is the code:
s3_client.upload_fileobj(
    uploaded_file,
    files_bucket_name,
    key_name,
    ExtraArgs={
        'ContentType': uploaded_file.content_type
    }
)
This is the permission policy for the bucket:
{
    "Version": "2012-10-17",
    "Id": "Policy1204",
    "Statement": [
        {
            "Sid": "Stmt15612",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
The upload did not work until I added "s3:PutObject" here, even though it worked for another project without it. What I don't like about this policy is that PutObject is now publicly available.
How can I make it so that:
all images are publicly available,
but only the owner can upload files?
These are screenshots of the AWS permissions for this bucket:

The problem went away as soon as I created an IAM user and granted it full access to S3. I'm not sure whether this solution is good or not, but at least it's working now.

It appears that your requirement is:
Allow everyone to see the files
Only allow the owner to upload them
There is a distinction within "seeing the files": ListObjects (the s3:ListBucket permission) allows listing the objects in a bucket, while GetObject allows downloading an object.
If you want to make all objects available for download, assuming that the user knows the name of the object, then you could use a policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
Note that this policy will not permit viewing the contents of the bucket.
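If it helps, here is a minimal sketch of attaching that public-read bucket policy with boto3 (the bucket name is a placeholder, and the bucket's Block Public Access settings would have to allow public bucket policies for it to take effect):

import boto3
import json

# Placeholder bucket name; the policy is the public-read document shown above.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::bucket-name/*",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="bucket-name", Policy=json.dumps(public_read_policy))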
If you wish to allow a specific IAM User permission to upload files to the bucket, then put this policy on the IAM User (not on the Bucket):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
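As a rough sketch of how this splits in practice, uploads then run under the IAM user's credentials while downloads need none. This assumes the IAM user's access keys are available to boto3 (environment variables or a shared credentials file), and the bucket/key names below are placeholders:

import boto3

# The bucket policy grants anonymous s3:GetObject; uploads use the IAM user
# that has the s3:PutObject policy attached. Names below are placeholders.
s3_client = boto3.client("s3")  # picks up the IAM user's keys from the environment

with open("local-image.jpg", "rb") as f:
    s3_client.upload_fileobj(
        f,
        "bucket-name",
        "images/local-image.jpg",
        ExtraArgs={"ContentType": "image/jpeg"},
    )

# Anyone can then fetch the object without credentials, e.g.
# https://bucket-name.s3.amazonaws.com/images/local-image.jpg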


IAM user unable to DeleteObject

After creating an IAM user, I am not able to perform a DeleteObject action. The necessary info (access key ID, secret access key, etc.) has been inserted as environment variables. Upload and download operations can be performed without issue.
IAM user policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::************"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl",
                "s3:GetObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::************",
                "arn:aws:s3:::************/*"
            ]
        }
    ]
}
Bucket Permissions
Block all public access: off (all 4 options)
Error Message
Performing s3.Object('BUCKET_NAME','fol.jpeg').delete()
gets me this error message:
botocore.exceptions.ClientError: An error occurred (AllAccessDisabled) when calling the DeleteObject operation: All access to this object has been disabled
The typical reason that you would see AllAccessDisabled is that AWS has suspended the underlying account. If that turns out not to be the cause, then read this answer for other possibilities.
Also, information on reactivating a suspended account is here.
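If you want to confirm the exact error code programmatically before digging further, a small sketch like this (bucket and key names are placeholders) surfaces it:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.resource("s3")

try:
    s3.Object("BUCKET_NAME", "fol.jpeg").delete()
except ClientError as e:
    code = e.response["Error"]["Code"]
    if code == "AllAccessDisabled":
        print("AWS has disabled access to this bucket/account; check the account status.")
    elif code == "AccessDenied":
        print("An IAM or bucket policy is blocking s3:DeleteObject.")
    else:
        print(f"Unexpected error code: {code}")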

Bucket policy with Minio

Please note: I am not using AWS as the S3 provider, but something called Cegedim S3.
The following operation is not working with the Minio client, although it works with boto3.
When I try to set up the following policy with the Minio client, it works for bucket-level operations but not for object-level operations.
policy = {
    "Version": "2012-10-17",
    "Id": "Policy1639139464683",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::test",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "{user_access_key}"
                ]
            },
            "Sid": "Stmt1639139460416"
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::test/*",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "{user_access_key}"
                ]
            },
            "Sid": "Stmt1639139460415"
        }
    ]
}
Minio connection
def minio(self) -> Minio:
    return Minio(
        endpoint=f"{self.config.s3.host}:{self.config.s3.port}",
        access_key=Secrets.S3_USER.get_value(),
        secret_key=Secrets.S3_PASSWORD.get_value(),
        secure=(self.config.s3.schema != "http"),
    )
After setting up this policy, I can't perform get/put or any other operation on objects.
Is there any workaround for this with the Minio client?
If you share the actual error that you get, it would be easier to figure out the problem. However, I would venture to guess that you also need "s3:GetBucketLocation" in your bucket level permissions statement in the policy.
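For reference, a minimal sketch of how that adjusted policy might be applied through minio-py, assuming placeholder endpoint/credentials and that policy is the dict from the question (minio-py expects the policy as a JSON string):

import json
from minio import Minio

# Placeholder endpoint and credentials.
client = Minio("s3.example.com:9000", access_key="ACCESS_KEY", secret_key="SECRET_KEY", secure=True)

# Add the suggested bucket-level permission, then apply the policy as JSON text.
policy["Statement"][0]["Action"].append("s3:GetBucketLocation")
client.set_bucket_policy("test", json.dumps(policy))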

S3 permissions for PUT file upload

I'm trying to upload an Excel file (binary) to my S3 bucket using a Postman PUT request. The URL for the PUT request is generated automatically using boto3 (Python):
upload_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={'Bucket': bucket_name, 'Key': key},
    ExpiresIn=3600,
    HttpMethod='PUT',
)
When I try to upload a file to that generated URL using PUT in Postman, I get this:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>AccessDenied</Code>
    <Message>Access Denied</Message>
    <RequestId>NumbersAndLetters</RequestId>
    <HostId>NumbersAndLetters</HostId>
</Error>
The S3 bucket permissions are:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::12digitnumber:root"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket-name"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::12digitnumber:root"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
Eugene,
according to this discussion, you should use the generate_presigned_post function to generate a URL for uploading your files (see the sketch below).
Just to make sure: have you double-checked that the credentials used in the boto3 script to generate the pre-signed URL are the same ones that are granted access in your S3 policy?
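A rough sketch of the generate_presigned_post approach, with placeholder bucket/key names and using the requests library instead of Postman:

import boto3
import requests

s3 = boto3.client("s3")

# Returns a URL plus signed form fields that must accompany the upload.
presigned = s3.generate_presigned_post(
    Bucket="bucket-name",
    Key="uploads/report.xlsx",
    ExpiresIn=3600,
)

with open("report.xlsx", "rb") as f:
    resp = requests.post(
        presigned["url"],
        data=presigned["fields"],  # signed form fields
        files={"file": f},
    )
print(resp.status_code)  # S3 returns 204 on success by default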
The problem was in the template.yaml. You have to add the following policies to your function in the template.yaml:
Policies:
  - S3ReadPolicy:
      BucketName: "bucket-name"
  - S3WritePolicy:
      BucketName: "bucket-name"

How to assume an AWS role from another AWS role?

I have two AWS accounts, let's say A and B.
In account B, I have a role defined that allows access from a role in account A. Let's call it Role-B:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::********:role/RoleA"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
In account A, I have defined a role that allows the root user to assume the role. Let's call it Role-A:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::********:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Role-A has the following permissions policy attached to it:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::****:role/RoleB",
            "Effect": "Allow"
        }
    ]
}
As a user in account A, I assumed Role-A. Now, using those temporary credentials, I want to assume Role-B and access the resources owned by account B. I have the code below:
client = boto3.client('sts')

firewall_role_object = client.assume_role(
    RoleArn=INTERMEDIARY_IAM_ROLE_ARN,
    RoleSessionName="default",
    DurationSeconds=3600)

firewall_credentials = firewall_role_object['Credentials']

firewall_client = boto3.client(
    'sts',
    aws_access_key_id=firewall_credentials['AccessKeyId'],
    aws_secret_access_key=firewall_credentials['SecretAccessKey'],
    aws_session_token=firewall_credentials['SessionToken'],
)

optimizely_role_object = firewall_client.assume_role(
    RoleArn=CUSTOMER_IAM_ROLE_ARN,
    RoleSessionName="default",
    DurationSeconds=3600)

print(optimizely_role_object['Credentials'])
This code works for the set of roles I got from my client, but it is not working for the roles I defined between the two AWS accounts I have access to.
Finally got this working. The above configuration is correct. There was a spelling mistake in the policy.
I will keep this question here, as it may help someone who wants to achieve double-hop authentication using roles.
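For completeness, a small sketch of using the second-hop credentials against a resource in account B (the bucket name is a placeholder):

import boto3

# Build an S3 client in account B from the credentials returned by the second assume_role call.
customer_credentials = optimizely_role_object['Credentials']

s3_in_account_b = boto3.client(
    's3',
    aws_access_key_id=customer_credentials['AccessKeyId'],
    aws_secret_access_key=customer_credentials['SecretAccessKey'],
    aws_session_token=customer_credentials['SessionToken'],
)

# "account-b-bucket" is a placeholder for a bucket that Role-B can access.
print(s3_in_account_b.list_objects_v2(Bucket='account-b-bucket').get('KeyCount'))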

S3 bucket policy preventing boto from setting cache headers

I've developed a Python script using the boto library to upload files to my S3 bucket. When uploading a file with the Key.set_contents_from_filename method, I specify the Cache-Control and Expires headers to enable proper browser caching. This all works fine; the files appear in my bucket with the correct headers set in the metadata field.
To prevent hotlinking of my files, I added the following bucket policy in S3:
{
    "Version": "2008-10-17",
    "Id": "MySite",
    "Statement": [
        {
            "Sid": "Deny access if not specified referrer",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "http://www.mysite.com/*",
                        "http://mysite.com/*"
                    ]
                }
            }
        }
    ]
}
This policy works to prevent hotlinking, but now when I upload files using boto, the Cache-Control and Expires headers are not being set. Removing the bucket policy fixes the problem, so I'm clearly not specifying the bucket policy properly.
Any ideas on how to modify my bucket policy to allow uploading metadata fields with boto while still preventing hotlinking?
I haven't tested the headers myself, but maybe the problem is with the Referer header. I suggest you add this policy, which allows getting and putting objects in your bucket based on the Referer:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowFromReferer",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::745684876799:user/IAM_USER"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://mysite.com/*",
                        "http://www.mysite.com/*"
                    ]
                }
            }
        }
    ]
}
If this fails, you can assume the problem is with the Referer. Use just * for the Referer, and if it then works fine, it is definitely a Referer problem.
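Unrelated to the Referer condition: if you ever move from the legacy boto library to boto3, a sketch of setting the same caching headers at upload time (bucket and key names are placeholders) looks roughly like this:

import boto3
from datetime import datetime, timedelta

s3 = boto3.client("s3")

# CacheControl and Expires are supported ExtraArgs for boto3 managed uploads.
s3.upload_file(
    "image.jpg",
    "mybucket",
    "static/image.jpg",
    ExtraArgs={
        "CacheControl": "max-age=31536000, public",
        "Expires": datetime.utcnow() + timedelta(days=365),
        "ContentType": "image/jpeg",
    },
)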
