I'm trying to access a blob uploaded to a Google Cloud Storage bucket via the official Python client (google-cloud-storage).
I'm not managing to retrieve the owner of the file (the one who uploaded it), and I can't find anything useful on the internet.
I've tried using the client with something like:
storage.Client(project='project').get_bucket('bucket').get_blob('blob')
But the blob properties like "owner" are empty!
So I tried using a Cloud Function and accessing the event and context.
The Google documentation (https://cloud.google.com/storage/docs/json_api/v1/objects#resource) describes the structure of the event, and it seems to have an owner property. But when I print it or try to access it, I get an error because it is not set.
Can someone help me? I just need to get the user's email... thanks in advance!
EDIT:
It doesn't seem to be a permissions error, because I get the correct results when testing the API from the Google site: https://cloud.google.com/storage/docs/json_api/v1/objects/get?apix_params=%7B%22bucket%22%3A%22w3-dp-prod-technical-accounts%22%2C%22object%22%3A%22datagovernance.pw%22%2C%22projection%22%3A%22full%22%7D
By default, owner and ACL are not fetched by get_blob. You will have to explicitly fetch this info:
blob = storage.Client(project='project').get_bucket('bucket').get_blob('blob')
blob.reload(projection='full')
Note that if you use uniform bucket-level access, owner doesn't have any meaning and will be unset even with the above change.
EDIT: this is actually not the most efficient option, because it makes an extra, unnecessary call to GCS. The most efficient option is:
blob = Blob('blob', client.bucket('bucket'))
blob.reload(projection='full', client=client)
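Putting it together, here is a minimal end-to-end sketch (project, bucket, and blob names are placeholders), assuming a reasonably recent google-cloud-storage that supports the projection argument to reload, and a bucket with fine-grained (non-uniform) ACLs so that owner is populated at all:

    from google.cloud import storage
    from google.cloud.storage import Blob

    # Placeholder names; a single GET with full projection fetches owner/ACL.
    client = storage.Client(project='project')
    blob = Blob('blob', client.bucket('bucket'))
    blob.reload(projection='full', client=client)
    print(blob.owner)  # e.g. {'entity': 'user-...', 'entityId': '...'}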
I am currently working on uploading to some S3 buckets. I am able to get a JSON containing the name of the bucket, access key, and secret key. I need the signature.
The Authorization header is currently supposed to look like this:
Authorization: AWS XYZ:signature
I have done some research on how to generate this signature and what signature type it is; my understanding is that this is a v2 signature, not a v4. That said, I have tried following the documentation for generating the signature, but I'm having some trouble reproducing their results.
Does anyone know of a library or tutorial that I can use?
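If it is indeed a v2 signature, the scheme (per the old AWS REST authentication docs) is Base64(HMAC-SHA1(secret_key, string_to_sign)). Below is a minimal sketch using only the standard library, assuming a plain GET with no Content-MD5, no Content-Type, and no x-amz- headers (the CanonicalizedAmzHeaders part is omitted); all keys and paths are placeholders:

    import base64
    import hashlib
    import hmac
    from email.utils import formatdate

    access_key = 'XYZ'     # placeholder
    secret_key = 'SECRET'  # placeholder
    date = formatdate(usegmt=True)  # RFC 1123 date, also sent as the Date header

    # StringToSign = Verb \n Content-MD5 \n Content-Type \n Date \n CanonicalizedResource
    string_to_sign = 'GET\n\n\n{}\n/MyBucketName/pic1.jpg'.format(date)
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    ).decode()

    headers = {'Date': date, 'Authorization': 'AWS {}:{}'.format(access_key, signature)}

If your computed signature doesn't match, the StringToSign almost always differs from what the server built (missing headers, wrong resource path), so print it and compare it character by character against the documentation's examples.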
import boto3

if __name__ == "__main__":
    bucket = 'MyBucketName'
    sourceFile = 'pic1.jpg'
    targetFile = 'pic2.jpg'

    # Rekognition client pinned to the bucket's region
    client = boto3.client('rekognition', 'us-east-1')

    # Compare the two S3-hosted images, keeping matches above 70% similarity
    response = client.compare_faces(
        SimilarityThreshold=70,
        SourceImage={'S3Object': {'Bucket': bucket, 'Name': sourceFile}},
        TargetImage={'S3Object': {'Bucket': bucket, 'Name': targetFile}})

    for faceMatch in response['FaceMatches']:
        position = faceMatch['Face']['BoundingBox']
        confidence = str(faceMatch['Face']['Confidence'])
        print('The face at ' +
              str(position['Left']) + ' ' +
              str(position['Top']) +
              ' matches with ' + confidence + '% confidence')
I am trying to compare two images present in my bucket, but no matter which region I select I always get the following error:
botocore.errorfactory.InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the CompareFaces operation: Unable to get object metadata from S3. Check object key, region and/or access permissions.
My bucket's region is us-east-1 and I have configured the same in my code.
What am I doing wrong?
I had the same problem. What I did to fix it was rearrange my bucket and its folders. Make sure that your image is directly in your bucket and not in a folder within it. Also double-check that the names of the images are correct and that everything is on point.
Check whether the S3 bucket and Rekognition are in the same region. I know it's not nice or documented (I guess), but these guys are talking about it here and here.
Ensure the bucket's region is the same as the calling region. If you are using the AWS CLI, make sure to include a profile with the appropriate region.
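As a sanity check, you can look up the bucket's region programmatically and create the Rekognition client there; a hedged sketch with boto3 (bucket name is a placeholder):

    import boto3

    bucket = 'MyBucketName'  # placeholder
    # get_bucket_location returns None for us-east-1, a region string otherwise
    location = boto3.client('s3').get_bucket_location(Bucket=bucket)
    region = location['LocationConstraint'] or 'us-east-1'
    client = boto3.client('rekognition', region_name=region)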
I faced a similar problem:
botocore.errorfactory.InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the CompareFaces operation: Unable to get object metadata from S3. Check object key, region and/or access permissions.
It may be due to a wrong AWS region or key, or to permissions not being granted properly.
In my case the wrong region was set as an environment variable.
It happened to me using the AWS Rekognition SDK for Android. The problem was that the region of my request was not the same as that of the S3 bucket, so I had to put the correct region (the same as the S3 bucket) in the request:
rekognitionClient.setRegion(Region.getRegion(Regions.US_WEST_1));//replace with your S3 region
It seems to me that you don't have enough permissions with that access key and secret key! If the credentials belong to an IAM user, make sure the IAM user has permission to perform Rekognition compare_faces and S3 read operations! Also check that your S3 source and target object keys are correct.
It is also better to create roles with the required permissions and assume a role to request temporary security credentials, instead of using permanent access keys; see the sketch below.
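For example, a hedged sketch of getting temporary credentials via STS (the role ARN and session name are placeholders):

    import boto3

    sts = boto3.client('sts')
    creds = sts.assume_role(
        RoleArn='arn:aws:iam::123456789012:role/rekognition-role',  # placeholder
        RoleSessionName='compare-faces-session',
    )['Credentials']

    # Build the Rekognition client from the temporary credentials.
    rekognition = boto3.client(
        'rekognition',
        region_name='us-east-1',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )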
For me the problem was that the name of the file in the S3 bucket contained spaces. Make sure the key doesn't contain spaces when the object is stored.
Ran into a similar issue, and figured out it was due to having a space in one of the folder names.
Please ensure the AWS environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are configured in your script before running it.
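For instance, something like this before creating the client (all values are placeholders; in practice keep real credentials out of source code):

    import os

    os.environ['AWS_ACCESS_KEY_ID'] = 'AKIA...'       # placeholder
    os.environ['AWS_SECRET_ACCESS_KEY'] = '...'       # placeholder
    os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'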
Also ran into this issue. I noticed my IAM role had only the bucket name as the resource; I had to add a slash and a wildcard to the end, changing it to "Resource": "arn:aws:s3:::bucketname/*".
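A hedged sketch of attaching such a policy with boto3 (role, policy, and bucket names are placeholders); the trailing /* is what makes the statement cover objects rather than the bucket itself:

    import json
    import boto3

    policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['s3:GetObject'],
            'Resource': 'arn:aws:s3:::bucketname/*',  # placeholder bucket name
        }],
    }
    boto3.client('iam').put_role_policy(
        RoleName='my-rekognition-role',  # placeholder
        PolicyName='AllowS3Read',
        PolicyDocument=json.dumps(policy),
    )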
For me changing the file permissions in the S3 bucket worked.
I had the same error: I checked and found that the image was in a subfolder of the bucket. Make sure the image is in the root of the bucket.
In my case I had the path to my object prefixed with a slash (/). Removing it did the trick.
Although this is a very old question, I also had the same issue. In my case I was using Lambda, and my Lambda role didn't have access to S3. If you are doing this through Lambda, you need to grant it S3 access in addition to Rekognition.
Same error message, but using Textract functions. There was no problem with permissions, but my files in S3 contained special characters; once I renamed the files, the problem went away.
I'm trying to get the user owner of any particular AWS resource, be it an instance, volume, security group, etc.
I searched but could not find any helpful information apart from this Link.
This is the closest thing I found to what I'm looking for [the answer suggests we can get the user who created an instance], but I still couldn't use it, as the code mentioned is not complete.
If I consider the code from the above-mentioned post's answer, the following line
ct_conn = sess.client(service_name='cloudtrail',region_name='us-east-1')
is used without "sess" being defined.
AWS will not record any AWS user activity unless you set up CloudTrail and send that info to an S3 repository.
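Once CloudTrail is enabled, a hedged sketch of filling in the missing sess and looking up who touched a given resource (region and resource ID are placeholders; CloudTrail lookup only covers roughly the last 90 days of management events):

    import boto3

    sess = boto3.Session()  # the "sess" the linked answer leaves undefined
    ct_conn = sess.client(service_name='cloudtrail', region_name='us-east-1')

    # Look up management events for a resource, e.g. an EC2 instance ID.
    events = ct_conn.lookup_events(LookupAttributes=[
        {'AttributeKey': 'ResourceName', 'AttributeValue': 'i-0123456789abcdef0'},
    ])
    for event in events['Events']:
        print(event['EventName'], event.get('Username'))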
I have configured my bucket to be public, which means everyone can view the bucket at:
http://bucket.s3-website-us-east-1.amazonaws.com/
Now I need to be able to get the list of objects and download them if required.
I found the answers on this question very helpful in getting me set up in Python:
Quick way to list all files in Amazon S3 bucket?
This works fine if I input the access key and secret access key.
The problem, though, is that we might have people accessing the bucket whom we don't want to have any keys at all. If the keys are not provided, I get a 400 Bad Request error.
At first I thought this might be impossible, but extensive searching led me to this R package:
Cloudyr R package
Using this I am able to pull the objects without needing the keys:
get_bucket(bucket = 'bucket')
in R, but the functionality for listing/downloading the files is limited. Any ideas how I go about doing this in boto?
The default S3 policy is deny-all, so you need to attach a permission policy:
choose your bucket and click Properties
add more permissions: grantee "Everyone" can list
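The console steps above roughly correspond to this hedged boto3 call (bucket name is a placeholder); note that newer accounts may also need the bucket's public-access block settings relaxed before public ACLs take effect:

    import boto3

    # Grants the AllUsers group READ (list) on the bucket itself.
    boto3.client('s3').put_bucket_acl(Bucket='bucket', ACL='public-read')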
I think what you need is a bucket policy that will allow anonymous users to read the stored objects.
Granting Read-Only Permission to an Anonymous User should help.
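Once anonymous read is allowed, boto3 can make unsigned requests, so no keys are needed at all; a minimal sketch, with bucket and key names as placeholders:

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # An unsigned client sends no credentials at all.
    s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))

    # List the bucket's objects (empty list if the bucket has none).
    for obj in s3.list_objects_v2(Bucket='bucket').get('Contents', []):
        print(obj['Key'])

    # Download one object to a local file.
    s3.download_file('bucket', 'some/key.txt', 'key.txt')  # placeholder key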