I am using the Drive API v3 to look for files I have shared with others (anyone), to list them and potentially stop sharing them.
I know that in the Drive search box you can type 'to:' and it will retrieve these files, but I could not find a way to do the same through the API.
My current try:
query="'me' in owners and trashed=false and not mimeType = 'application/vnd.google-apps.folder' and visibility != 'limited'"
Thanks in advance
As tanaike said, a workaround that solves the problem is to loop through the files using the files.list() method and include id, owners, permissions in the fields parameter. This returns a list of file objects, and from there we can check whether any permission has type set to anyone.
We can also check attributes like shared: true and ownedByMe: true.
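A minimal sketch of that workaround with the google-api-python-client (assuming service is an authenticated Drive v3 service object; the query and the returned fields are just one reasonable choice):

# Sketch of the workaround: list my files and keep those shared with 'anyone'.
shared_with_anyone = []
page_token = None
while True:
    response = service.files().list(
        q="'me' in owners and trashed=false",
        fields="nextPageToken, files(id, name, shared, ownedByMe, permissions)",
        pageToken=page_token,
    ).execute()
    for f in response.get('files', []):
        perms = f.get('permissions', [])
        if f.get('shared') and f.get('ownedByMe') and any(
                p.get('type') == 'anyone' for p in perms):
            shared_with_anyone.append(f)
    page_token = response.get('nextPageToken')
    if not page_token:
        break

To stop sharing one of these files, you could then call service.permissions().delete(fileId=..., permissionId=...) on the matching 'anyone' permission.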
This is just a workaround, and surely not the best solution, since with Drive search we can do all of this by typing to:, which lists all our shared files. I hope we get an API for this.
Thanks again tanaike
I have configured my bucket to be public, which means everyone can view the bucket at:
http://bucket.s3-website-us-east-1.amazonaws.com/
Now I need to be able to get the list of objects and download them if required.
I found the answers on this question very helpful in getting me set up with Python:
Quick way to list all files in Amazon S3 bucket?
This works fine if I input the access key and secret access key.
The problem, though, is that we might have people accessing the bucket whom we don't want to have any keys at all. So if the keys are not provided, it gives me a 400 Bad Request error.
At first I thought this might be impossible. But extensive search led me to this R-package:
Cloudyr R package
Using this I am able to pull the objects without needing the keys:
get_bucket(bucket = 'bucket')
in R, but the functionality for listing/downloading the files is limited. Any ideas how I go about doing this in boto?
The default S3 policy is to deny everything, so you need to set a permission policy on the bucket:
choose your bucket and click Properties
add more permissions: grantee Everyone, allowed to List (a programmatic sketch of the equivalent follows below)
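The same grant can be made programmatically; a rough sketch (boto3 rather than the older boto is assumed, and the bucket name is a placeholder):

import boto3

# Make the bucket publicly readable: the owner keeps full control,
# and the AllUsers group gets READ, i.e. the ability to list the bucket.
s3 = boto3.client('s3')
s3.put_bucket_acl(Bucket='bucket', ACL='public-read')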
I think what you need is a Bucket Policy, which will allow anonymous users to read the stored objects.
Granting Read-Only Permission to an Anonymous User should help.
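For the reading side, boto3 can talk to a public bucket without any keys at all by disabling request signing; a minimal sketch, assuming the bucket allows anonymous list and read (the bucket and key names are placeholders):

import boto3
from botocore import UNSIGNED
from botocore.client import Config

# Anonymous (unsigned) client: no access key or secret key required.
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))

# List the objects in the public bucket.
response = s3.list_objects_v2(Bucket='bucket')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])

# Download one of them.
s3.download_file('bucket', 'some/key.txt', 'local_copy.txt')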
I'm creating an application to back up files stored in Google Drive using Python. The first problem I encountered is that for some reason it only lists up to 100 files in every folder. I think I'm using the correct scope ('https://www.googleapis.com/auth/drive') and here is the line of code I'm using to list the contents of a folder:
folder_contents = service.files().list( q="'%s' in parents" % folder['id'] ).execute()
Is there any way to change the number of listed files? I found in this thread Google Drive List API - not listed all the documents? that there is a way to do this, but NumberToRetrieve does not work. I tried to test it in the Drive API reference webpage (https://developers.google.com/drive/v2/reference/files/list) but it gives me an error. I also can't figure out how to pass this parameter when I request the list from my code (I'm new to the Drive API).
Thanks in advance.
EDIT: I found a way to increase the number of files using the maxResults parameter. But it only allows listing up to 1000, and that's not quite enough.
It's pretty standard for all REST APIs to paginate their output lists. The Drive API is no different. You simply need to monitor the presence of a nextPageToken in the JSON response and recurse through the successive pages.
NB. Don't make the mistake, as some have, of assuming that a response with fewer than maxResults items is the last page. You should rely solely on nextPageToken.
As an aside, once you have pagination working in your code, consider setting maxResults back to its default. I have a suspicion that the non-default values are less well tested, and also more prone to timeouts.
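A minimal sketch of that loop (v3-style names are used here, i.e. pageSize and files; in v2 the equivalents are maxResults and items):

# Page through every child of a folder, following nextPageToken.
def list_folder(service, folder_id):
    files = []
    page_token = None
    while True:
        response = service.files().list(
            q="'%s' in parents" % folder_id,
            fields="nextPageToken, files(id, name)",
            pageToken=page_token,
        ).execute()
        files.extend(response.get('files', []))
        # Only the absence of nextPageToken marks the last page.
        page_token = response.get('nextPageToken')
        if not page_token:
            break
    return files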
"results = service.files().list(
pageSize = 1000, q="'xxx' in parents",fields="nextPageToken, files(id, name)").execute()"
This worked for me.
The default maximum number of results per query is 100. You must use pageToken/nextPageToken to keep paging through them. See Python Google Drive API - list the entire drive file tree.
I'd like to build a site which contains a few folders for different teams. However, there is one team that is common to one folder on all sites. I do not want that team to be allowed to see the content of the other folders. I tried creating a folder in a site and giving permission to a user via CMIS (in Python); however, that folder doesn't seem to be accessible from their Share UI.
I'm not even sure this is the best way to do this. The organisation of the information requires that the areas are in the same place (i.e. the same site); however, if you have access to the site you seem to have access to all the folders (I can't figure out a way of removing a single user's access to a folder on a site).
Also, the requirement here is that it needs to be done programmatically. I'm not particularly bothered about using CMIS, and I don't mind rewriting the file/folder code, but in my head the best thing to do would be to add a widget on the Share UI that shows all the folders a user has access to, in the absence of being able to deny access to a folder.
As Gagravarr says, you are going to have to break inheritance on the documentLibrary folder to get it to work the way you want. Breaking inheritance is not supported by CMIS, so you'll have to write your own web script to do that.
I would manually set the permissions until it works like you want it to, then once you get that working, write a web script that puts it into effect for all of your sites.
A Share site comes with a security model where every person goes into at least one of four groups: Manager, Collaborator, Contributor and Consumer, either directly or indirectly via another group. Access is generally managed by access control lists. You might want to look at Alfresco: Folder permission by role to get an idea of how that works. The site security model does not work for you if you need more than those four groups to map access control. It can still be done, but I would strongly discourage implementing that, because it quickly becomes very hard to understand.
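As an alternative to writing a custom web script, newer Alfresco versions expose permission handling through the v1 REST API; the following is only a rough sketch under that assumption, with placeholder host, credentials, node id and authority name:

import requests

# Rough sketch only: assumes an Alfresco version that ships the v1 REST API
# (older versions need a custom web script instead, as noted above).
BASE = 'http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1'
AUTH = ('admin', 'admin')                       # placeholder credentials
folder_node_id = 'node-id-of-the-team-folder'   # placeholder node id

payload = {
    'permissions': {
        # Stop inheriting permissions from the site's documentLibrary.
        'isInheritanceEnabled': False,
        'locallySet': [
            # Allow only the team that should see this folder.
            {'authorityId': 'GROUP_team_that_may_see_this',
             'name': 'SiteCollaborator',
             'accessStatus': 'ALLOWED'},
        ],
    },
}
r = requests.put('%s/nodes/%s' % (BASE, folder_node_id), json=payload, auth=AUTH)
r.raise_for_status()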
Using the following API allows you to obtain multiple properties assigned to a file:
props = service.properties().list(fileId=fileId).execute().get('items', [])
However, I don't see any way to set multiple properties. Is this just missing from the documentation, or have Google really overlooked this?
Think of properties as a list, rather than a map. So the answer is no.
To save HTTP traffic you could batch your requests, as described here: https://code.google.com/p/google-api-java-client/wiki/Batch
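With the Python client that batching looks roughly like this (Drive v2 properties collection, matching the list() call above; the file id and key/value pairs are placeholders):

# Insert several properties on a file with a single batched HTTP request.
def insert_properties(service, file_id, props):
    def callback(request_id, response, exception):
        if exception is not None:
            print('Request %s failed: %s' % (request_id, exception))
    batch = service.new_batch_http_request(callback=callback)
    for key, value in props.items():
        body = {'key': key, 'value': value, 'visibility': 'PRIVATE'}
        batch.add(service.properties().insert(fileId=file_id, body=body))
    batch.execute()

insert_properties(service, 'your-file-id', {'backedUp': 'true', 'reviewer': 'me'})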
I want to collect some names, like location names, organization names, etc., from Freebase. I did some research and found some Python scripts, but it looks like I need to register for a Google API key or something? Is that a must, and if yes, do you know how I could get one?
Is there any example that does something similar? Many thanks!
You can download the entire Freebase data dump.
You need a key to use the API. The documentation for the Freebase API can be found at:
https://developers.google.com/freebase/v1/getting-started#api-keys
and
To obtain a key: https://code.google.com/apis/console/
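Once you have a key, a call to the Freebase search API looked roughly like this (the type filter and the key value are placeholders; adjust the filter for organizations, etc.):

import requests

# Sketch of a keyed call to the Freebase search API, following the
# getting-started docs linked above.
resp = requests.get(
    'https://www.googleapis.com/freebase/v1/search',
    params={
        'query': 'london',
        'filter': '(any type:/location/location)',  # locations only
        'key': 'your-api-key',
    },
)
for result in resp.json().get('result', []):
    print(result.get('name'), result.get('mid'))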