I'm trying to change an S3 file's metadata from Python and test it with the Chalice local server (I need Chalice for other things). When running this (without the Chalice server):
# change file metadata in s3:
import boto3

# bucket and key are defined elsewhere in the app
s3 = boto3.resource('s3')
s3_object = s3.Object(bucket, key)
s3_object.metadata.update({'metadata-key': 'something'})
s3_object.copy_from(CopySource={'Bucket': bucket, 'Key': key},
                    Metadata=s3_object.metadata,
                    MetadataDirective='REPLACE')
In Python locally (with AWS configured locally) everything works fine and the metadata changes in S3.
The problem is that when using the Chalice local server, I'm getting this error:
An error occurred (404) when calling the HeadObject operation: Not Found
(I need Chalice to simulate a Lambda, and the S3 metadata code runs inside it.)
Any ideas why this happens?
Thanks :-)
An error occurred (404) when calling the HeadObject operation: Not Found
If the object exists, this error usually means the role you are using to make the request doesn't have the required permissions.
Your local tests with Python work because they use your default AWS profile, but those credentials are not passed to the Chalice app.
You need to add read/write permissions to that bucket for that Lambda in Chalice.
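As a quick check, you can print the caller identity from inside the Chalice app to see which credentials or role it actually resolves. This is just a debugging sketch (the app name and route path are made up), not part of the fix itself:
import boto3
from chalice import Chalice

app = Chalice(app_name='metadata-demo')  # hypothetical app name

@app.route('/whoami')
def whoami():
    # Shows which identity boto3 is signing requests with inside Chalice.
    identity = boto3.client('sts').get_caller_identity()
    return {'account': identity['Account'], 'arn': identity['Arn']}
If the ARN shown is not the identity you expect, that points to the credentials/permissions issue described above.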
I'm trying to view the S3 bucket list through a Python script using boto3. The credentials and config files are available in the C:\Users\user1\.aws location, and the secret access key and access key ID for the user "vscode" are stored there. But I'm unable to run the script, which returns the exception message
"botocore.exceptions.NoCredentialsError: Unable to locate credentials".
Code sample follows:
import boto3
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
Do I need to specify the user mentioned above ("vscode")?
I also copied the credentials and config files to the folder the Python script runs from, but the same exception occurs.
When I got this error, I replaced resource with client and also passed the credentials explicitly during initialization:
client = boto3.client('s3', region_name=settings.AWS_REGION,
                      aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
                      aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
You can also try boto3.client('s3') instead of boto3.resource('s3').
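Another option, rather than hard-coding keys, is to point boto3 at a named profile from the shared credentials/config files. A minimal sketch, assuming the profile in C:\Users\user1\.aws\credentials is actually named vscode (adjust to whatever name the file uses):
import boto3

# Uses the [vscode] profile from the shared credentials/config files.
session = boto3.Session(profile_name='vscode')
s3 = session.resource('s3')

for bucket in s3.buckets.all():
    print(bucket.name)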
I have been given access to a subfolder of an S3 bucket and want to access all the files inside it using Python and boto3. I am new to S3 and have read the docs to death, but haven't been able to figure out how to access just this one subfolder. I understand that S3 does not use a Unix-like directory structure, but I don't have access to the root of the bucket.
How can I configure boto3 to just connect to this subfolder?
I have successfully used this AWS CLI command to download the entire subfolder to my machine:
aws s3 cp --recursive s3://s3-bucket-name/SUB_FOLDER/ /Local/Path/Where/Files/Download/To --profile my-profile
This code:
import boto3

AWS_BUCKET = 's3-bucket-name'
s3 = boto3.client('s3', region_name='us-east-1',
                  aws_access_key_id=AWS_KEY_ID, aws_secret_access_key=AWS_SECRET)
response = s3.list_objects(Bucket=AWS_BUCKET)
Returns this error:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I have also tried specifying the Prefix option in the call to list_objects, but this produces the same error.
You want to run aws configure and save your credentials and region; then using boto3 is simple and easy.
Use boto3.resource and get the client like this:
s3_resource = boto3.resource('s3')
s3_client = s3_resource.meta.client
s3_client.list_objects(Bucket=AWS_BUCKET)
You should be good to go.
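Since access is limited to the subfolder, it may also help to restrict the listing to that prefix. A minimal sketch, reusing the bucket name and SUB_FOLDER/ prefix from the CLI command above (credentials are resolved from aws configure as described):
import boto3

s3_client = boto3.resource('s3').meta.client

# List only the keys under the subfolder prefix (paginated, in case there
# are more than 1000 objects).
paginator = s3_client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='s3-bucket-name', Prefix='SUB_FOLDER/'):
    for obj in page.get('Contents', []):
        print(obj['Key'])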
I have seen examples for checking whether an S3 bucket exists and have implemented them below. My bucket is located in the us-east-1 region, but the following code doesn't throw an exception. Is there a way to make the check region-specific depending on my session?
import boto3
from botocore.exceptions import ClientError

session = boto3.Session(
    profile_name='TEST',
    region_name='ap-south-1'
)
s3 = session.resource('s3')
bucket_name = 'TEST_BUCKET'

try:
    s3.meta.client.head_bucket(Bucket=bucket_name)
except ClientError as c:
    print(c)
It does not matter which S3 regional endpoint you send the request to. The underlying SDK (boto3) will redirect as needed. It's preferable, however, to target the correct region if you know it in advance, to save on redirects.
You can see this in detail if you use the awscli in debug mode:
aws s3api head-bucket --bucket mybucket --region ap-south-1 --debug
You will see debug output similar to this:
DEBUG - S3 client configured for region ap-south-1 but the bucket mybucket is in region us-east-1; Please configure the proper region to avoid multiple unnecessary redirects and signing attempts.
DEBUG - Switching signature version for service s3 to version s3v4 based on config file override.
DEBUG - Updating URI from https://s3.ap-south-1.amazonaws.com/mybucket to https://s3.us-east-1.amazonaws.com/mybucket
Note that the awscli uses the boto3 SDK, as does your Python script.
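If you want to resolve the bucket's real region up front and build the session against it, here is a rough sketch, using the mybucket name from the CLI example and the TEST profile from the question:
import boto3

# get_bucket_location returns None for buckets in us-east-1, so normalise it.
location = boto3.client('s3').get_bucket_location(Bucket='mybucket')
region = location.get('LocationConstraint') or 'us-east-1'

session = boto3.Session(profile_name='TEST', region_name=region)
s3 = session.resource('s3')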
I am new to Flask and Python. I am trying to upload a file to my AWS S3 bucket. While this works fine locally, I get the following exception when I try to do the same after deploying on Elastic Beanstalk.
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
app.py
@app.route('/snap/ingredient', methods=['POST'])
def findIngredient():
    s3 = boto3.resource('s3')
    response = s3.Bucket('<bucket-name>').put_object(Key="image.jpeg", Body=request.files['myFile'], ACL='public-read')
    print(response.key)
    return response
I am not sure what I am missing. My bucket access is public.
I have been following this post:
https://forums.aws.amazon.com/message.jspa?messageID=484342#484342
Just so that whenever I generate a presigned URL, I don't expose my AWS access key ID.
url = s3_client.generate_presigned_url('get_object', Params={'Bucket': Bucket, 'Key': Key})
s3_client.put_object(Bucket="dummybucket", Key=other_key, WebsiteRedirectLocation=url)
My "dummybucket" has ACL='public-read'
So whenever I try to access http://dummybucket.s3.amazonaws.com/other_key, I get Access Denied rather than the original object I'm trying to get.
I've also uploaded a file into the "other_bucket" and I can access that fine from the browser.
Things I haven't done:
Add policy to S3 bucket I'm trying to access
Enable website configuration for S3 bucket
EDIT: I cleared my browser cache too
I realized I had the wrong S3 bucket URL for doing the redirect. The correct one is the website endpoint:
bucket_name.s3-website.region.amazonaws.com
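For reference, a rough sketch of the whole flow under the assumptions above: the redirect object lives in dummybucket, the bucket has static website hosting enabled, and the real object's bucket/key names are placeholders. The exact website endpoint format (dot vs. dash before the region) varies by region.
import boto3

s3_client = boto3.client('s3')

# Presigned URL for the real object (placeholder bucket/key names).
url = s3_client.generate_presigned_url(
    'get_object', Params={'Bucket': 'real-bucket', 'Key': 'real-key'})

# Redirect object in the public dummy bucket.
s3_client.put_object(Bucket='dummybucket', Key='other_key',
                     WebsiteRedirectLocation=url, ACL='public-read')

# The redirect is only honoured through the website endpoint, not the REST
# endpoint (dummybucket.s3.amazonaws.com).
print('http://dummybucket.s3-website.<region>.amazonaws.com/other_key')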