Upload image to S3 with Python - python

I am trying to upload an image to S3 through Python. My code looks like this:
import os
from PIL import Image
import boto
from boto.s3.key import Key
def upload_to_s3(aws_access_key_id, aws_secret_access_key, file, bucket, key,
                 callback=None, md5=None, reduced_redundancy=False, content_type=None):
    conn = boto.connect_s3(aws_access_key_id, aws_secret_access_key)
    bucket = conn.get_bucket(bucket, validate=False)
    k = Key(bucket)
    k.key = key
    k.set_contents_from_file(file)
AWS_ACCESS_KEY = "...."
AWS_ACCESS_SECRET_KEY = "....."
filename = "images/image_0.jpg"
file = Image.open(filename)
key = "image"
bucket = 'images'
upload_to_s3(AWS_ACCESS_KEY, AWS_ACCESS_SECRET_KEY, file, bucket, key)
I am getting this error message:
S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidRequest</Code><Message> The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.</Message>
<RequestId>90593132BA5E6D6C</RequestId>
<HostId>...</HostId></Error>
This code is based on the tutorial from this website: http://stackabuse.com/example-upload-a-file-to-aws-s3/
I have tried k.set_contents_from_file as well as k.set_contents_from_filename, but neither seems to work for me.
The error says something about using AWS4-HMAC-SHA256, but I am not sure how to do that. Is there another way to solve this problem besides using AWS4-HMAC-SHA256? If anyone can help me out, I would really appreciate it.
Thank you!

Just use:
import boto3
client = boto3.client('s3', region_name='us-west-2')
client.upload_file('images/image_0.jpg', 'mybucket', 'image_0.jpg')
Try to avoid putting your credentials in the code. Instead:
If you are running the code from an Amazon EC2 instance, simply assign an IAM Role to the instance with appropriate permissions. The credentials will automatically be used.
If you are running the code on your own computer, use the AWS Command-Line Interface (CLI) aws configure command to store your credentials in a file, which will be automatically used by your code.
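Once credentials are in place through either mechanism, the upload needs no keys in the code at all. A minimal sketch, assuming the bucket name from the earlier example and that you also want to set the image's MIME type (the ExtraArgs line is optional):
import boto3

# Credentials are resolved automatically (IAM role or ~/.aws/credentials);
# nothing sensitive appears in the source.
client = boto3.client('s3', region_name='us-west-2')
client.upload_file(
    'images/image_0.jpg',                      # local file to upload
    'mybucket',                                # target bucket (example name)
    'image_0.jpg',                             # object key in the bucket
    ExtraArgs={'ContentType': 'image/jpeg'},   # optional: set the MIME type
)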

Related

Unable to locate credentials in boto3 AWS

I'm trying to view the S3 bucket list through a Python script using boto3. The credentials file and config file are available in the C:\Users\user1\.aws location, and the secret access key and access key are there for user "vscode". But I am unable to run the script, which raises the exception
"botocore.exceptions.NoCredentialsError: Unable to locate credentials".
Code sample follows:
import boto3
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
Do I need to specify the user mentioned above ("vscode")?
I also copied the credentials and config files to the folder where the Python script runs, but the same exception occurs.
When I got this error, I replaced resource with client and also added the secrets during initialization:
client = boto3.client('s3', region_name=settings.AWS_REGION,
                      aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
                      aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
You can try with boto3.client('s3') instead of boto3.resource('s3')
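Another thing worth checking: if the keys in the credentials file sit under a named profile such as [vscode] rather than [default], boto3 will not find them unless you select that profile explicitly. A sketch, assuming the profile is named after the user mentioned in the question:
import boto3

# Select the named profile from the shared credentials file
session = boto3.Session(profile_name='vscode')
s3 = session.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)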

Ceph radosgw - bucket policy - make all objects public-read by default

I work with a group of non-developers who are uploading objects to an s3-style bucket through radosgw. All uploaded objects need to be publicly available, but they cannot do this programmatically. Is there a way to make the default permission of an object public-read so this does not have to be set manually every time? There has to be a way to do this with boto, but I've yet to find any examples. There are a few floating around using AWS' GUI, but that is not an option for me. :(
I am creating a bucket like this:
#!/usr/bin/env python
import boto
import boto.s3.connection
access_key = "SAMPLE3N84XBEHSAMPLE"
secret_key = "SAMPLEc4F3kfvVqHjMAnsALY8BCQFwTkI3SAMPLE"
conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='10.1.1.10',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.create_bucket('public-bucket', policy='public-read')
I am setting the policy to public-read, which seems to allow people to browse the bucket as a directory, but the objects within the bucket do not inherit this permission.
>>> print bucket.get_acl()
<Policy: http://acs.amazonaws.com/groups/global/AllUsers = READ, S3 Newbie (owner) = FULL_CONTROL>
To clarify, I do know I can resolve this on a per-object basis like this:
key = bucket.new_key('thefile.tgz')
key.set_contents_from_filename('/home/s3newbie/thefile.tgz')
key.set_canned_acl('public-read')
But my end users are not capable of doing this, so I need a way to make this the default permission of an uploaded file.
I found a solution to my problem.
First, many thanks to joshbean who posted this: https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/python/example_code/s3/s3-python-example-put-bucket-policy.py
I noticed he was using the boto3 library, so I started using it for my connection.
import boto3
import json
access_key = "SAMPLE3N84XBEHSAMPLE"
secret_key = "SAMPLEc4F3kfvVqHjMAnsALY8BCQFwTkI3SAMPLE"
conn = boto3.client('s3', 'us-east-1',
                    endpoint_url="http://mycephinstance.net",
                    aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key)

bucket_name = "public-bucket"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::{0}/*".format(bucket_name)]
        }
    ]
}
bucket_policy = json.dumps(bucket_policy)

conn.put_bucket_policy(Bucket=bucket_name, Policy=bucket_policy)
Now when an object is uploaded to public-bucket, it can be downloaded anonymously without explicitly setting the key permission to public-read or generating a download URL.
If you're doing this, be REALLY REALLY certain that it's ok for ANYONE to download this stuff. Especially if your radosgw service is publicly accessible on the internet.
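To confirm the policy took effect, you could fetch an object with a deliberately unsigned client, which takes the same anonymous path your users' downloads will. A sketch, reusing the endpoint from above with a hypothetical object name:
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# signature_version=UNSIGNED disables request signing, so this request
# behaves exactly like an anonymous download
anon = boto3.client('s3',
                    endpoint_url="http://mycephinstance.net",
                    config=Config(signature_version=UNSIGNED))
anon.download_file('public-bucket', 'thefile.tgz', '/tmp/thefile.tgz')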

Python S3 Amazon Code with 'Access Denied' Error

I am trying to download a specific S3 file off a server using Python boto and am getting "403 Forbidden" and "Access Denied" error messages. It says the error is occurring at line 24 (the get_contents command). I have tried it with and without the "aws s3 cp" at the start of the source file path and received the same error message both times. My code is below; any advice would be helpful.
# Code to append csv:
import csv
import boto
from boto.s3.key import Key
keyId ="key"
sKeyId="secretkey"
srcFileName="aws s3 cp s3://...."
destFileName="C:\\Users...."
bucketName="bucket00001"
conn = boto.connect_s3(keyId,sKeyId)
bucket = conn.get_bucket(bucketName, validate = False)
#Get the Key object of the given key, in the bucket
k = Key(bucket, srcFileName)
#Get the contents of the key into a file
k.get_contents_to_filename(destFileName)
AWS is very vague with the errors that it outputs. This is intentional, but it definitely doesn't help with debugging. You are receiving an Access Denied error because the source file name you are using is not the correct path for the file.
aws s3 cp
This is the CLI command to copy one or more files from a source to a destination (with either being an S3 bucket). It should not be a part of the source file name.
s3://...
This prefix is prepended to your bucket name to denote that the path refers to an S3 object; however, it is not necessary in your source file path when using boto3.
To download an s3 file using boto3, perform the following:
import boto3
BUCKET_NAME = 'my-bucket' # does not include s3://
KEY = 'image.jpg' # the file you want to download
s3 = boto3.resource('s3')
s3.Bucket(BUCKET_NAME).download_file(KEY, 'image.jpg')
Documentation for this command can be found here:
https://boto3.readthedocs.io/en/latest/guide/s3-example-download-file.html
In general, boto3 (and the other AWS SDKs) are simply wrappers around AWS API requests. You can also use the AWS CLI, as I mentioned earlier, to download a file from S3. That command would be:
aws s3 cp s3://my-bucket/my-file.jpg C:\location\my-file.jpg
srcFileName="aws s3 cp s3://...."
This has to be a key like somefolder/somekey or somekey as string.
You are providing a path or command to it.
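Putting that together, a corrected version of the original download might look like this (a sketch; the key and local path are hypothetical, since the real ones are elided in the question):
import boto
from boto.s3.key import Key

conn = boto.connect_s3(keyId, sKeyId)
bucket = conn.get_bucket("bucket00001", validate=False)

# The key is just the object's path inside the bucket:
# no "aws s3 cp" prefix, no "s3://bucket/" scheme
k = Key(bucket, "somefolder/somekey.csv")
k.get_contents_to_filename("C:\\Users\\me\\somekey.csv")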

scrapy store images to amazon s3

I store images on my local server and then upload them to S3.
Now I want to change this to store images directly to Amazon S3.
But there is an error:
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
here is my settings.py
AWS_ACCESS_KEY_ID = "XXXX"
AWS_SECRET_ACCESS_KEY = "XXXX"
IMAGES_STORE = 's3://how.are.you/'
Do I need to add something?
My Scrapy version: Scrapy==0.22.2
Please guide me, thank you!
AWS_ACCESS_KEY_ID = "xxxxxx"
AWS_SECRET_ACCESS_KEY = "xxxxxx"
IMAGES_STORE = "s3://bucketname/virtual_path/"
how.are.you should be an S3 bucket that exists in your S3 account, and it will store the images you upload. If you want to store images inside a virtual_path, then you need to create this folder in your S3 bucket.
I found that the cause of the problem is the upload policy. The function Key.set_contents_from_string() takes a policy argument, which defaults to S3FileStore.POLICY. So modify the code in scrapy/contrib/pipeline/files.py, changing
return threads.deferToThread(k.set_contents_from_string, buf.getvalue(),
                             headers=h, policy=self.POLICY)
to
return threads.deferToThread(k.set_contents_from_string, buf.getvalue(),
                             headers=h)
Maybe you can try it, and share the result here.
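An untested alternative that avoids editing the installed Scrapy source would be to override the policy attribute from your own project at startup. Treat this as a sketch: the class and module names are based on the files.py mentioned above and may vary between Scrapy versions:
from scrapy.contrib.pipeline.files import S3FilesStore

# Swap the default 'public-read' canned ACL for one your gateway accepts
S3FilesStore.POLICY = 'private'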
I think the problem is not in your code; it actually lies in permissions. Check your credentials first, and make sure you have permission to access and write to the S3 bucket:
import boto
s3 = boto.connect_s3('access_key', 'secret_key')
bucket = s3.lookup('bucket_name')
key = bucket.new_key('testkey')
key.set_contents_from_string('This is a test')
key.delete()
If the test runs successfully, then look into your permissions; for setting permissions, see the Amazon configuration documentation.

s3 connection to 'folder' via boto

I'm trying to upload a file to a specific location using boto and Python. I'm connecting with something to this effect:
from boto.s3.connection import S3Connection
from boto.s3.key import Key
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket('the_bucket_name')
for key in bucket:
    print key.name
Here's the trick: I have been provisioned credentials to a 'folder' within a bucket. Per this - Amazon S3 boto - how to create a folder? - I understand that there actually aren't folders in S3, rather keys like "foo/bar/my_key.txt". When I try to execute get_bucket() I get
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
because I don't actually have credentials to the base bucket, only to a subset of the bucket's keys: my_bucket/the_area_I_have_permission/*.
Does anyone know how I could pass the specific 'area' of the bucket I have access to in the connection step? Or is there an alternative method I can use to access my_bucket/the_area_I_have_permission/*?
Does this help:
bucket = conn.get_bucket('the_bucket_name', validate=False)
key = bucket.get_key("ur_key_name")
if key is not None:
    print key.get_contents_as_string()

keys = bucket.list(prefix='the_area_I_have_permission')
for key in keys:
    print key.name
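For what it's worth, the boto3 equivalent of the prefix listing also works with prefix-limited permissions (a sketch, reusing the placeholder names above):
import boto3

s3 = boto3.client('s3')
resp = s3.list_objects_v2(Bucket='the_bucket_name',
                          Prefix='the_area_I_have_permission/')
for obj in resp.get('Contents', []):
    print(obj['Key'])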
Found the problem: RequestTimeTooSkewed Error using PHP S3 Class.
The issue was that my VM's date was off, and Amazon uses the date to validate the request. +1 to @bdonlan.
