Failed to load object with presigned url Minio Python

I'm using MinIO Server to handle files in my Flask API. I generate presigned URLs so that images can be uploaded directly from the Angular front end, saving backend resources.
Presigned URL generation works fine, but when I upload a file from Postman or the Angular code, the file seems corrupted.
The same happens in the MinIO web browser.
I use simple code for the presigned URL generation:
def get_presigned_get_url(self, bucket: str, object_path: str) -> str:
    url = self.client.presigned_get_object(
        bucket_name=bucket,
        object_name=object_path,
    )
    return url

def get_presigned_put_url(self, bucket: str, object_path: str) -> str:
    url = self.client.presigned_put_object(
        bucket_name=bucket,
        object_name=object_path,
    )
    return url
Then I send a PUT request from Postman.
Thanks for your help.

The key in this case is how the file is uploaded from Postman. When uploading the file, you need to use Body > Binary > Select File rather than Body > Form-Data. A presigned PUT URL expects the raw file bytes as the request body; a multipart form wraps those bytes in form-data boundaries, which is why the stored object appears corrupted.
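The same rule applies when uploading from code. Here is a minimal sketch with the Python requests library, assuming url holds the presigned PUT URL returned by get_presigned_put_url (the file name and content type are placeholders):

import requests

# Upload the raw bytes with data=, NOT files=; files= would send a
# multipart/form-data body and corrupt the stored object.
with open("photo.png", "rb") as f:
    resp = requests.put(url, data=f, headers={"Content-Type": "image/png"})
resp.raise_for_status()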

Related

S3 presigned url: 403 error when reading a file but OK when downloading

I am working with an S3 presigned URL.
OK: the link works well for downloading.
NOT OK: using the presigned URL to read a file in the bucket.
I am getting the following error in the console:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationQueryParametersError</Code><Message>Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.</Message>
and here is how I generate the URL with boto3:
s3_client = boto3.client(
    's3',
    config=boto3.session.Config(signature_version='s3v4'),
    region_name='eu-west-3',
)
bucket_name = config("AWS_STORAGE_BUCKET_NAME")
file_name = '{}/{}/{}/{}'.format(user, 'projects', building.building_id, file.file)
ifc_url = s3_client.generate_presigned_url(
    'get_object',
    Params={
        'Bucket': bucket_name,
        'Key': file_name,
    },
    ExpiresIn=1799,
)
I am using IFC.js, which can load IFC-formatted models from their URLs. Basically, the URL of the bucket acts as the path to the file. Accessing files in a public bucket has been working well; however, it won't work with private buckets.
Something to note as well: a presigned URL copied to the clipboard from the AWS S3 console works.
It looks like this:
"https://bucket.s3.eu-west-3.amazonaws.com/om1/projects/1/v1.ifc?response-content-disposition=inline&X-Amz-Security-Token=qqqqzdfffrA%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230105T224241Z&X-Amz-SignedHeaders=host&X-Amz-Expires=60&X-Amz-Credential=5%2Feu-west-3%2Fs3%2Faws4_request&X-Amz-Signature=c470c72b3abfb99"
the one I obtain with boto3 is the following:
"https://bucket.s3.amazonaws.com/om1/projects/1/v1.ifc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKI230105%2Feu-west-3%2Fs3%2Faws4_request&X-Amz-Date=20230105T223404Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=1b4f277b85639b408a9ee16e"
I am fairly new to using S3 buckets, so I am not sure what is wrong here, and searching around on SO and online has not been very helpful so far. Could anyone point me in the right direction?
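One way to narrow this down (a debugging suggestion, not a confirmed fix): the AuthorizationQueryParametersError says the X-Amz-* query parameters never reached S3 intact, which can happen if a client re-encodes or truncates the URL before requesting it. Fetching the URL verbatim from Python separates a signing problem from a URL-handling problem:

import requests

# Request the presigned URL exactly as boto3 produced it. If this succeeds
# but IFC.js fails, the URL is being altered before it reaches S3.
resp = requests.get(ifc_url)
print(resp.status_code)
print(resp.text[:500])  # on failure, S3 returns an XML error body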

Signed URL for video upload causing problem in GCP

I am working on a project in which I need to upload video files to a GCS bucket using a V4 signed URL. Currently I am generating the signed URL in a Python script that is part of a Flask API. Here is the method I am using to generate the URL:
def GenerateURL(self, bucket_name, blob_name, method, timeout, content_type=None):
    bucket = StoreCon.get_con(bucket_name)
    blob = bucket.blob(blob_name)
    url = blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=timeout),
        method=method,
        content_type=content_type,
    )
    resp = jsonify({'message': {'%s URL' % method: url}})
    resp.status_code = 200
    return resp
Now this is being called inside a blueprint route. Here is the snippet:
@CloudStoreEnd.route('/uploadMedia', methods=['POST'])
def uploadMedia():
    blob_name = request.get_json()['FILE_NAME']
    return StoreOperator.postMediaURL(blob_name)
When I make the call to this API route from the client-side code, the video files are uploaded successfully to the GCS bucket. But when I download the same video file from the bucket, the file is corrupted, and the player reports error 0xc00d36c4.
Here is a sample function for client side:
def upload_file(path):
    file_name = path.split('\\')[-1]
    data = {'FILE_NAME': file_name}
    # GET SIGNED URL FOR MEDIA UPLOAD
    get_signed_url = 'https://CLOUD-RUN-SERVICE/uploadMedia'
    headers = {'Content-Type': 'application/json'}
    resp = requests.post(url=get_signed_url, data=json.dumps(data), headers=headers)
    upload_url = json.loads(resp.content)['message']['PUT URL']
    # SEND A PUT REQUEST WITH MEDIA FILE
    headers = {'Content-Type': MimeTypes().guess_type(file_name)[0]}
    file = {'file': open(path, 'rb')}
    resp = requests.put(url=upload_url, headers=headers, files=file)
    return resp
I am not sure why the media files (.mp4, .mov) get corrupted when I retrieve them, whereas other files (.pdf, .png) are fine. Is there an extra request parameter I need to add to get a proper signed URL? Or is the client application sending the files to the signed URL the wrong way?
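A likely cause of the corruption (an assumption based on the snippet, not a confirmed diagnosis): passing files= to requests.put sends a multipart/form-data body, so the multipart boundaries are stored as part of the object. A minimal sketch of the fix, assuming the URL was signed for a plain PUT with a matching content type:

import json
import requests
from mimetypes import MimeTypes

def upload_file(path):
    file_name = path.split('\\')[-1]

    # Get the signed URL from the Flask API (endpoint from the question)
    resp = requests.post(
        'https://CLOUD-RUN-SERVICE/uploadMedia',
        data=json.dumps({'FILE_NAME': file_name}),
        headers={'Content-Type': 'application/json'},
    )
    upload_url = json.loads(resp.content)['message']['PUT URL']

    # PUT the raw bytes with data=; files= would wrap them in multipart
    # boundaries. The Content-Type must match the content_type the URL
    # was signed with.
    with open(path, 'rb') as f:
        return requests.put(
            upload_url,
            data=f,
            headers={'Content-Type': MimeTypes().guess_type(file_name)[0]},
        )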

python AWS boto3 create presigned url for file upload

I'm writing a Django backend for an application in which the client will upload a video file to S3. I want to use presigned URLs, so the Django server will sign a URL and pass it back to the client, who will then upload their video to S3. The problem is, the generate_presigned_url method does not seem to know about the s3 client upload_file method...
Following this example, I use the following code to generate the URL for upload:
s3_client = boto3.client('s3')
try:
    s3_object_name = str(uuid4()) + file_extension
    params = {
        "file_name": local_filename,
        "bucket": settings.VIDEO_UPLOAD_BUCKET_NAME,
        "object_name": s3_object_name,
    }
    response = s3_client.generate_presigned_url(
        ClientMethod="upload_file",
        Params=params,
        ExpiresIn=500,
    )
except ClientError as e:
    logging.error(e)
    return HttpResponse(503, reason="Could not retrieve upload url.")
When running it I get the error:
File "/Users/bridgedudley/.local/share/virtualenvs/ShoMe/lib/python3.6/site-packages/botocore/signers.py", line 574, in generate_presigned_url
operation_name = self._PY_TO_OP_NAME[client_method]
KeyError: 'upload_file'
which triggers the exception:
botocore.exceptions.UnknownClientMethodError: Client does not have method: upload_file
After debugging I found that the self._PY_TO_OP_NAME dictionary only contains a subset of the s3 client commands offered here:
scrolling down to "upload"...
No upload_file method! I tried the same code using "list_buckets" and it worked perfectly, giving me a presigned url that listed the buckets under the signer's credentials.
So without the upload_file method available in the generate_presigned_url function, how can I achieve my desired functionality?
Thanks!
In addition to the already mentioned usage of:
boto3.client('s3').generate_presigned_url('put_object', Params={'Bucket':'your-bucket-name', 'Key':'your-object-name'})
You can also use:
boto3.client('s3').generate_presigned_post('your-bucket_name', 'your-object_name')
Reference: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html#generating-a-presigned-url-to-upload-a-file
Sample generation of URL:
import boto3
bucket_name = 'my-bucket'
key_name = 'any-name.txt'
s3_client = boto3.client('s3')
upload_details = s3_client.generate_presigned_post(bucket_name, key_name)
print(upload_details)
Output:
{'url': 'https://my-bucket.s3.amazonaws.com/', 'fields': {'key': 'any-name.txt', 'AWSAccessKeyId': 'QWERTYUOP123', 'x-amz-security-token': 'a1s2d3f4g5h6j7k8l9', 'policy': 'z0x9c8v7b6n5m4', 'signature': 'qaz123wsx456edc'}}
Sample uploading of file:
import requests

filename_to_upload = './some-file.txt'
with open(filename_to_upload, 'rb') as file_to_upload:
    files = {'file': (filename_to_upload, file_to_upload)}
    upload_response = requests.post(
        upload_details['url'],
        data=upload_details['fields'],
        files=files,
    )
print(f"Upload response: {upload_response.status_code}")
Output:
Upload response: 204
Additional notes:
As documented:
The credentials used by the presigned URL are those of the AWS user
who generated the URL.
Thus, make sure that the entity generating the presigned URL is allowed the s3:PutObject action, so that the signed URL can actually be used to upload a file to S3. The credentials can be configured in different ways. Some of them are:
As an allowed policy for a Lambda function
Or through boto3:
s3_client = boto3.client(
    's3',
    aws_access_key_id="your-access-key-id",
    aws_secret_access_key="your-secret-access-key",
    aws_session_token="your-session-token",  # only for credentials that have one
)
Or in the working environment:
# Run in the Linux environment
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_SESSION_TOKEN="your-session-token"  # only for credentials that have one
Or through libraries e.g. django-storages for Django
You should be able to use the put_object method here. It is a plain client method, whereas upload_file is a higher-level helper built on top of the client; that is why upload_file does not appear in client._PY_TO_OP_NAME. The two functions take different inputs, which may necessitate a slight refactor of your code.
put_object: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.put_object
The accepted answer doesn't let you post your data to S3 from your client. This will:
import boto3

s3_client = boto3.client(
    's3',
    aws_access_key_id="AKIA....",
    aws_secret_access_key="G789...",
)
url = s3_client.generate_presigned_url('put_object', Params={
    'Bucket': 'cat-pictures',
    'Key': 'whiskers.png',
    'ContentType': 'image/png',  # required!
})
Send that to your front end, then in JavaScript on the frontend:
fetch(url, {
    method: "PUT",
    body: file,
})
Where file is a File object.

Cloudinary Image upload from django

I am trying to upload an image/file to Cloudinary to get back the URL, using this code:
medical_file = request.FILES['medical_file']
out = cloudinary.uploader.upload(File.open(medical_file, "rb"))
url = out['url']
medical_file_url = url
And I am successfully getting the URL as well (I have printed it to my console).
But then I am getting this error: cloudinary.api.Error: Empty file
Per Cloudinary's documentation:
In cases where images are uploaded by users of your Django application through a web form, you can pass the parameter of your Django's request.FILES to the upload method
So in your case, you can upload the file on the server-side by passing request.FILES['medical_file'] directly to the upload method:
out = cloudinary.uploader.upload(request.FILES['medical_file'])
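For context, a minimal sketch of the whole view under that approach; the view name upload_medical_file is illustrative, while the medical_file field name comes from the question:

import cloudinary.uploader
from django.http import JsonResponse

def upload_medical_file(request):
    # Pass the Django UploadedFile straight through; no need to re-open it.
    out = cloudinary.uploader.upload(request.FILES['medical_file'])
    return JsonResponse({'medical_file_url': out['url']})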

Webapp2 request application/octet-stream file upload

I am trying to upload a file to S3 from a form via AJAX. I am using Fine Uploader (http://fineuploader.com/) on the client side and webapp2 on the server side. It sends the parameter as qqfile in the request, and I can see the image data in the request headers, but I have no idea how to get it back in browsers that do not use multipart encoding.
This is how I was doing it with a standard HTML form POST with multipart:
image = self.request.POST["image"]
This gives me the image name and the image file.
Currently I have only been able to get the image filename back, not the data, with:
image = self.request.get_all('image')
[u'image_name.png']
When using the POST I get a warning about the content headers being application/octet-stream:
<NoVars: Not an HTML form submission (Content-Type: application/octet-stream)>
How do I implement a BlobStoreHandler in webapp2 outside of GAE?
Your question's code is not very clear to me, but you can work from an example.
Have a look at this article from Nick Johnson. He implements a dropbox service using app engine blobstore and Plupload in the client : http://blog.notdot.net/2010/04/Implementing-a-dropbox-service-with-the-Blobstore-API-part-2
I ended up using Fine Uploader (http://fineuploader.com/), which sends a multipart-encoded form to my handler endpoint.
Inside the handler I could simply reference the POST data and then read the FieldStorage object into a cStringIO object.
image = self.request.POST["qqfile"]
imgObj = cStringIO.StringIO(image.file.read())
# Connect to S3...
# create s3 Key
key = bucket.new_key("%s" % uuid.uuid4());
# guess mimetype for headers
file_mime_type = mimetypes.guess_type(image.filename)
if file_mime_type[0]:
file_headers = {"Content-Type": "%s" % file_mime_type[0]}
else:
file_headers = None
key.set_contents_from_string(imgObj.getvalue(), headers=file_headers)
key_str = key.key
#return JSON response with key and append to s3 url on front end.
Note: qqfile is the parameter Fine Uploader uses.
I am faking the progress, but that is OK for my use case; there is no need for a BlobStoreUploadHandler.
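For reference, a rough sketch of the same handler logic using the newer boto3 API instead of the legacy boto calls above; the bucket name is a placeholder and the function wrapper is illustrative:

import mimetypes
import uuid

import boto3

def store_qqfile(request):
    # Fine Uploader posts the file under the "qqfile" field
    image = request.POST["qqfile"]
    body = image.file.read()

    content_type = mimetypes.guess_type(image.filename)[0]
    extra = {"ContentType": content_type} if content_type else {}

    s3 = boto3.client("s3")
    key = str(uuid.uuid4())
    s3.put_object(Bucket="my-bucket", Key=key, Body=body, **extra)  # placeholder bucket
    return key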
