Boto3 not generating correct signed-url - python

I have a use case where I use a Lambda function to generate a signed URL for uploading to an S3 bucket; I also set metadata values when generating the signed URL. My boto3 version is boto3==1.18.35. Previously, the signed URL generated for uploading to the bucket looked like this:
https://bucket-name.s3.amazonaws.com/scanned-file-list/cf389880-09ff-4301-8fa7-b4054941685b/6919de95-b795-4cac-a2d3-f88ed87a0d08.zip?AWSAccessKeyId=ASIAVK6XU35LOIUAABGC&Signature=xxxx%3D&content-type=application%2Fx-zip-compressed&x-amz-meta-scan_id=6919de95-b795-4cac-a2d3-f88ed87a0d08&x-amz-meta-collector_id=2e8672a1-72fd-41cc-99df-1ae3c581d31a&x-amz-security-token=xxxx&Expires=1641318176
But now the URL looks like this:
https://bucket-name.s3.amazonaws.com/scanned-file-list/f479e304-a2e4-47e7-b1c8-058e3012edac/3d349bab-c814-4aa7-b227-6ef86dd4b0a7.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIA2BIILAZ55MATXAGA%2F20220105%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20220105T001950Z&X-Amz-Expires=36000&X-Amz-SignedHeaders=content-type%3Bhost%3Bx-amz-meta-collector_id%3Bx-amz-meta-scan_id&X-Amz-Security-Token=xxxxx&X-Amz-Signature=xxxx
Notice that the URL generated now no longer carries the metadata values: x-amz-meta-collector_id and x-amz-meta-scan_id appear only as header names in X-Amz-SignedHeaders, not with their values.
The code I'm using to generate the signed URL is:
bucket_name = os.environ['S3_UPLOADS_BUCKET_NAME']
metadata = {
    'scan_id': scan_id,
    'collector_id': collector_id
}
params = {
    'Bucket': bucket_name,
    'Key': path + file_obj['fileName'],
    'ContentType': file_obj.get('contentType') or '',
    'Metadata': metadata
}
logger.info('metadata used for generating URL: ' + str(metadata))
s3 = boto3.client('s3')
presigned_url = s3.generate_presigned_url('put_object', Params=params, ExpiresIn=36000)
logger.info(f'Presigned URL: {presigned_url}')
return presigned_url
Because of the change in the URL, I'm getting a SignatureDoesNotMatch error. Thanks for the help in advance!

The problem is on the AWS side: the URL generated from us-west-2 is different from the URL generated in ap-south-1.
More:
The signed URL was generated from a Lambda deployed in the ap-south-1 region, where the SigV4-style X-Amz-* parameters were automatically added to the URL; but when I deployed the same Lambda in a different region, us-west-2, I got a different format of signed URL, which in my case was the correct one!

Related

S3 presigned url: 403 error when reading a file but OK when downloading

I am working with an S3 presigned URL.
OK: the link works well for downloading.
NOT OK: using the presigned URL to read a file in the bucket.
I am getting the following error in the console:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationQueryParametersError</Code><Message>Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.</Message>
and here is how I generate the URL with boto3:
s3_client = boto3.client('s3', config=boto3.session.Config(signature_version='s3v4'), region_name='eu-west-3')
bucket_name = config("AWS_STORAGE_BUCKET_NAME")
file_name = '{}/{}/{}/{}'.format(user, 'projects', building.building_id, file.file)
ifc_url = s3_client.generate_presigned_url(
    'get_object',
    Params={
        'Bucket': bucket_name,
        'Key': file_name,
    },
    ExpiresIn=1799
)
I am using IFC.js, which allows loading IFC-formatted models from their URLs. Basically the URL of the bucket acts as the path to the file. Accessing files in a public bucket has been working well; however, it won't work with private buckets.
Something to note as well is that the presigned URL copied from the clipboard in the AWS S3 console works.
It looks like this:
"https://bucket.s3.eu-west-3.amazonaws.com/om1/projects/1/v1.ifc?response-content-disposition=inline&X-Amz-Security-Token=qqqqzdfffrA%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230105T224241Z&X-Amz-SignedHeaders=host&X-Amz-Expires=60&X-Amz-Credential=5%2Feu-west-3%2Fs3%2Faws4_request&X-Amz-Signature=c470c72b3abfb99"
the one I obtain with boto3 is the following:
"https://bucket.s3.amazonaws.com/om1/projects/1/v1.ifc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKI230105%2Feu-west-3%2Fs3%2Faws4_request&X-Amz-Date=20230105T223404Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=1b4f277b85639b408a9ee16e"
I am fairly new to using S3 buckets, so I am not sure what is wrong here, and searching around on SO and online has not been very helpful so far. Could anyone point me in the right direction?

AWS s3 presigned url not downloading

I am trying to download a file from an S3 presigned URL and I am getting an access denied error. The URL looks like this:
https://s3.us.amazonaws.com/test1/xxxxxxxx
Here is my Python code:
def getfile(self, url):
    url2 = str(url).replace("['", "")
    s3 = boto3.client('s3')
    key = "xxxxxxxxxxdd"
    filename = "file_name.hex"
    s3.download_file(key, url2, filename)
The file is created on my disk, but it looks like this:
<Error><Code>AccessDenied</Code><Message>Invalid date (should be seconds since epoch): 1666657825']</Message><RequestId>DDNEB34VSFF124GT</RequestId><HostId>Rby3Bz/pqz2p6l7nMf4=</HostId></Error>%
Any help with what I am doing wrong?
Can you generate the pre-signed URL like this:
s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'bucket-1', 'Key': key},
    ExpiresIn=SIGNED_URL_TIMEOUT
)
Make sure ExpiresIn is in seconds (a duration, not an absolute epoch timestamp).
For more: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html
or,
Use this format: https://<bucket-name>.s3.amazonaws.com/<key>
url = 'https://bucket-name.s3.amazonaws.com/' + key

S3 object returns octet-stream but was uploaded as png

I have this existing piece of code that is used to upload files to my s3 bucket.
def get_user_upload_url(customer_id, filename, content_type):
    s3_client = boto3.client('s3')
    object_name = "userfiles/uploads/{}/{}".format(customer_id, filename)
    try:
        url = s3_client.generate_presigned_url('put_object',
                                               Params={'Bucket': BUCKET,
                                                       'Key': object_name,
                                                       'ContentType': content_type  # set to "image/png"
                                                       },
                                               ExpiresIn=100)
    except Exception as e:
        print(e)
        return None
    return url
This returns to my client a presigned URL that I use to upload files without an issue. I have added a new use of it where I upload a PNG, and I have a behave test that uploads to the presigned URL just fine. The problem is that if I go look at the file in S3, I can't preview it; if I download it, it won't open either. The S3 web console shows it has Content-Type image/png. I visually compared the binary of the original file and the downloaded file, and I can see differences. A file-type tool detects that it is an octet-stream.
signature_file_name = "signature.png"
with open("features/steps/{}".format(signature_file_name), 'rb') as f:
    files = {'file': (signature_file_name, f)}
    headers = {
        'Content-Type': "image/png"  # without this, or with a different value, the presigned URL errors with SignatureDoesNotMatch
    }
    context.upload_signature_response = requests.put(response, files=files, headers=headers)
I would have expected to get back a PNG instead of an octet stream, but I'm not sure what I have done wrong. Googling this generally turns up people having a problem with the signature because they're not properly setting or passing the content type, and I feel I've effectively done that here, proven by the fact that everything fails if I change the content type. I'm guessing there's something wrong with the way I'm uploading the file, or maybe with how I read the file for the upload?
So it is to do with how I'm uploading. Instead, it works if I upload like this:
context.upload_signature_response = requests.put(response, data=open("features/steps/{}".format(signature_file_name), 'rb'), headers=headers)
So this must be down to the use of put_object: it expects the request body to be the file itself, with the declared content type. This approach accomplishes that, whereas the prior one turned the request into a multipart/form-data upload. So I think it's safe to say a multipart form upload is not compatible with a presigned URL for put_object.
I'm still piecing it all together, so feel free to fill in the blanks.
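The difference described above can be shown without touching S3 at all, by preparing (not sending) the two requests. This is a hedged illustration with placeholder bytes and URL: files= wraps the payload in a multipart/form-data body with its own Content-Type, while data= sends the raw PNG bytes that a put_object-signed URL expects.

```python
import requests

png_bytes = b'\x89PNG\r\n\x1a\n'  # stand-in for the real file contents
url = 'https://bucket.s3.amazonaws.com/userfiles/uploads/1/signature.png'  # placeholder

# files= builds a multipart body: boundary framing is added around the bytes,
# and the Content-Type becomes multipart/form-data.
multipart = requests.Request(
    'PUT', url, files={'file': ('signature.png', png_bytes)}
).prepare()

# data= sends the bytes verbatim as the request body.
raw = requests.Request(
    'PUT', url, data=png_bytes, headers={'Content-Type': 'image/png'}
).prepare()

print(multipart.headers['Content-Type'])  # multipart/form-data; boundary=...
print(raw.headers['Content-Type'])        # image/png
```

S3 stores the PUT body byte for byte, which is why the multipart version produces a file that no longer opens as a PNG.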

python AWS boto3 create presigned url for file upload

I'm writing a Django backend for an application in which the client will upload a video file to S3. I want to use presigned URLs, so the Django server will sign a URL and pass it back to the client, who will then upload their video to S3. The problem is, the generate_presigned_url method does not seem to know about the S3 client's upload_file method...
Following this example, I use the following code to generate the url for upload:
s3_client = boto3.client('s3')
try:
    s3_object_name = str(uuid4()) + file_extension
    params = {
        "file_name": local_filename,
        "bucket": settings.VIDEO_UPLOAD_BUCKET_NAME,
        "object_name": s3_object_name,
    }
    response = s3_client.generate_presigned_url(ClientMethod="upload_file",
                                                Params=params,
                                                ExpiresIn=500)
except ClientError as e:
    logging.error(e)
    return HttpResponse(503, reason="Could not retrieve upload url.")
except ClientError as e:
logging.error(e)
return HttpResponse(503, reason="Could not retrieve upload url.")
When running it I get the error:
File "/Users/bridgedudley/.local/share/virtualenvs/ShoMe/lib/python3.6/site-packages/botocore/signers.py", line 574, in generate_presigned_url
operation_name = self._PY_TO_OP_NAME[client_method]
KeyError: 'upload_file'
which triggers the exception:
botocore.exceptions.UnknownClientMethodError: Client does not have method: upload_file
After debugging I found that the self._PY_TO_OP_NAME dictionary only contains a subset of the S3 client commands offered here:
scrolling down to "upload"...
No upload_file method! I tried the same code using "list_buckets" and it worked perfectly, giving me a presigned url that listed the buckets under the signer's credentials.
So without the upload_file method available in the generate_presigned_url function, how can I achieve my desired functionality?
Thanks!
In addition to the already mentioned usage of:
boto3.client('s3').generate_presigned_url('put_object', Params={'Bucket':'your-bucket-name', 'Key':'your-object-name'})
You can also use:
boto3.client('s3').generate_presigned_post('your-bucket_name', 'your-object_name')
Reference: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html#generating-a-presigned-url-to-upload-a-file
Sample generation of URL:
import boto3
bucket_name = 'my-bucket'
key_name = 'any-name.txt'
s3_client = boto3.client('s3')
upload_details = s3_client.generate_presigned_post(bucket_name, key_name)
print(upload_details)
Output:
{'url': 'https://my-bucket.s3.amazonaws.com/', 'fields': {'key': 'any-name.txt', 'AWSAccessKeyId': 'QWERTYUOP123', 'x-amz-security-token': 'a1s2d3f4g5h6j7k8l9', 'policy': 'z0x9c8v7b6n5m4', 'signature': 'qaz123wsx456edc'}}
Sample uploading of file:
import requests

filename_to_upload = './some-file.txt'
with open(filename_to_upload, 'rb') as file_to_upload:
    files = {'file': (filename_to_upload, file_to_upload)}
    upload_response = requests.post(upload_details['url'], data=upload_details['fields'], files=files)
print(f"Upload response: {upload_response.status_code}")
Output:
Upload response: 204
Additional notes:
As documented:
The credentials used by the presigned URL are those of the AWS user who generated the URL.
Thus, make sure that the entity generating the presigned URL is allowed the s3:PutObject action, so that the signed URL can actually be used to upload a file to S3. The credentials can be configured in different ways. Some of them are:
As an allowed policy for a Lambda function
Or through boto3:
s3_client = boto3.client('s3',
                         aws_access_key_id="your-access-key-id",
                         aws_secret_access_key="your-secret-access-key",
                         aws_session_token="your-session-token",  # only for credentials that have it
                         )
Or on the working environment:
# Run in the Linux environment
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_SESSION_TOKEN="your-session-token"  # only for credentials that have it
Or through libraries e.g. django-storages for Django
You should be able to use the put_object method here. It is a pure client method, rather than a meta-client method like upload_file; that is why upload_file does not appear in client._PY_TO_OP_NAME. The two functions take different inputs, which may necessitate a slight refactor of your code.
put_object: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.put_object
The accepted answer doesn't let you post your data to S3 from your client. This will:
import boto3

s3_client = boto3.client('s3',
                         aws_access_key_id="AKIA....",
                         aws_secret_access_key="G789...",
                         )
url = s3_client.generate_presigned_url('put_object', Params={
    'Bucket': 'cat-pictures',
    'Key': 'whiskers.png',
    'ContentType': 'image/png',  # required!
})
Send that to your front end, then in JavaScript on the frontend:
fetch(url, {
  method: "PUT",
  body: file,
})
Where file is a File object.

boto3 signed url resulting in SignatureDoesNotMatch

My code is successfully uploading documents into the correct bucket. I can log in and see the docs in the buckets on AWS S3. When I try to use the generate_presigned_url method in boto3 to obtain a URL for these documents that can then be sent to users for accessing the doc, I get a SignatureDoesNotMatch error with the message "The request signature we calculated does not match the signature you provided. Check your key and signing method."
When I save the object (verified to be working correctly by logging into AWS and downloading the files manually), I use:
s3 = boto3.client('s3', region_name='us-east-2', aws_access_key_id='XXXX', aws_secret_access_key='XXXX', config=Config(signature_version='s3v4'))
s3.put_object(Bucket=self.bucket_name,Key=path, Body=temp_file.getvalue(),ACL='public-read')
Then, when I try to get the URL, I'm using:
s3 = boto3.client('s3', region_name='us-east-2', aws_access_key_id='XXXX', aws_secret_access_key='XXXX', config=Config(signature_version='s3v4'))
url = s3.generate_presigned_url(
    'put_object', Params={
        'Bucket': self.pdffile_storage_bucket_name,
        'Key': self.pdffile_url
    },
    ExpiresIn=604799,
)
I saw quite a few users on the web talking about making sure your AWS access key doesn't include any special characters. I checked this, and I still get the same issue.
You are generating a pre-signed URL for an upload, not a download. You don't want put_object here... it's get_object.
Also, as @ThijsvanDien pointed out, ACL='public-read' may not be what you want when uploading -- this makes the object accessible to anyone with an unsigned URL.
I tried many solutions from many different places. None of them worked for me. Then I found a solution on GitHub that worked for me. Here is the original solution, copy-pasted:
s3 = boto3.client('s3', region_name='us-east-2',
                  aws_access_key_id='XXXX', aws_secret_access_key='XXXX',
                  config=Config(signature_version='s3v4'),
                  endpoint_url='https://s3.us-east-2.amazonaws.com')
(after 3+ hours of debugging and almost smashing the keyboard....)
In the response, S3 tells you which header is missing:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<!-- .... -->
<CanonicalRequest>PUT (your pre-signed url)
content-type:image/jpeg
host:s3.eu-west-2.amazonaws.com
x-amz-acl:
content-type;host;x-amz-acl
UNSIGNED-PAYLOAD</CanonicalRequest>
An x-amz-acl header was needed, matching the ACL set when generating the pre-signed URL:
def python_presign_url():
    return s3.generate_presigned_url('put_object', Params={
        'Bucket': bucket_name,
        'Key': filename,
        'ContentType': type,
        'ACL': 'public-read'  # your x-amz-acl
    })
curl -X PUT \
  -H "content-type: image/jpeg" \
  -H "Host: s3.eu-west-2.amazonaws.com" \
  -H "x-amz-acl: public-read" \
  --data-binary "@/path/to/upload/file.jpg" "$PRE_SIGNED_URL"
I had a trailing space in my secret access key variable. That was causing the issue. E.g.:
AWS_SECRET_KEY = "xyz "
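A trivial guard against that pitfall (variable and environment-variable names here are hypothetical): strip whitespace from credentials before handing them to boto3.

```python
import os

# Strip stray whitespace that may have crept into the stored credential.
AWS_SECRET_KEY = os.environ.get('AWS_SECRET_KEY', 'xyz ').strip()
print(repr(AWS_SECRET_KEY))
```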
