Google Drive API: How to download files from Google Drive? - python

import json
import requests

access_token = ''

# reuse one HTTP connection for all requests
session = requests.Session()

# list the files in my Drive
r = session.request('get', 'https://www.googleapis.com/drive/v3/files?access_token=%s' % access_token)
response_text = str(r.content, encoding='utf-8')
files_list = json.loads(response_text).get('files')
files_id_list = []
for item in files_list:
    files_id_list.append(item.get('id'))

# download each file's content
for item in files_id_list:
    file_r = session.request('get', 'https://www.googleapis.com/drive/v3/files/%s?alt=media&access_token=%s' % (item, access_token))
    print(file_r.content)
I use the above code and Google shows:
We're sorry ...
... but your computer or network may be sending automated queries. To protect our users, we can't process your request right now.
I don't know whether files simply can't be downloaded this way, or where the problem is.

The reason you are getting this error is that you are requesting the data in a loop, which sends a large number of requests to Google's server in quick succession, and hence the error:
We're sorry ... but your computer or network may be sending automated queries
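If the loop itself is the trigger, one simple mitigation (not part of this answer, just an illustrative sketch) is to space the download requests out with a short delay:

import time

# pause between downloads so the requests are not fired in a tight loop
for item in files_id_list:
    file_r = session.get('https://www.googleapis.com/drive/v3/files/%s?alt=media&access_token=%s' % (item, access_token))
    print(file_r.content)
    time.sleep(1)  # one second between requests; tune as needed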

The access_token should not be passed as a request parameter; we should put the access_token in the request header instead. You can try this out on the oauthplayground site.
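A minimal sketch of what that looks like with the requests library (the token value is a placeholder; the calls mirror the Drive v3 requests from the question):

import requests

access_token = '<your OAuth 2.0 access token>'  # placeholder

session = requests.Session()
# send the token in the Authorization header rather than in the URL
session.headers.update({'Authorization': 'Bearer %s' % access_token})

# list files
files_list = session.get('https://www.googleapis.com/drive/v3/files').json().get('files', [])

# download each file's content
for item in files_list:
    file_r = session.get('https://www.googleapis.com/drive/v3/files/%s' % item['id'], params={'alt': 'media'})
    print(file_r.content)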


Resumable Upload to Google Cloud Storage using Python?

I've been testing resumable upload of a file (500 MB) to Google Cloud Storage using Python, but it doesn't seem to be working.
As per the official documentation (https://cloud.google.com/storage/docs/resumable-uploads#python): "Resumable uploads occur when the object is larger than 8 MiB, and multipart uploads occur when the object is smaller than 8 MiB. This threshold cannot be changed. The Python client library uses a buffer size that's equal to the chunk size. 100 MiB is the default buffer size used for a resumable upload, and you can change the buffer size by setting the blob.chunk_size property."
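For reference, a minimal sketch (not part of the original question) of what tuning that buffer size looks like; the bucket and object names are placeholders:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-bucket')   # placeholder bucket name
blob = bucket.blob('my-object')       # placeholder object name

# use a 10 MiB buffer instead of the 100 MiB default (must be a multiple of 256 KiB)
blob.chunk_size = 10 * 1024 * 1024

blob.upload_from_filename('path/to/file')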
This is the Python code I've written to test resumable upload:
from google.cloud import storage

def upload_to_bucket(blob_name, path_to_file, bucket_name):
    """Upload a file to the bucket"""
    # RAW_DATA_BUCKET_PERMISSIONS_FILEPATH points at my service account JSON key
    storage_client = storage.Client.from_service_account_json(RAW_DATA_BUCKET_PERMISSIONS_FILEPATH)
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    blob.upload_from_filename(path_to_file)
The time to upload the file using this function was about 84 s. I then deleted the file and re-ran the function, but cut off my internet connection after about 40 s. After re-establishing the internet connection, I re-ran the upload function expecting the upload time to be much shorter; instead it took about 84 s again.
Is this how resumable upload is supposed to work?
We have field units in remote locations with spotty cellular connections running Raspberry Pis. We sometimes have trouble getting data out. The data is about 0.2-1 MB in size. A resumable solution that works with small file sizes and doesn't have to try to upload the whole file again after an initial failure would be great.
Perhaps there is a better way? Thanks for any help, Rich :)
I believe that the documentation is trying to say that the client will, within that one function call, resume an upload in the event of a transient network failure. It does not mean that if you re-run the program and attempt to upload the same file to the same blob name a second time, that the client library will be able to detect your previous attempt and resume the operation.
In order to resume an operation, you'll need a session ID for an upload session. You can create one by calling blob.create_resumable_upload_session(). That'll give you a URL to which you can upload data, and which you can query for the progress recorded on the server. You'll need to save it somewhere your program will notice it on the next run.
You can either use an HTTP utility to do a PUT directly to the URL, or you could use the ResumableUpload class of the google-resumable-media package to manage the upload to that URL for you.
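A rough sketch of that approach (the bucket/object names, file path, and the way the session URI is persisted are placeholders, not part of the original answer):

import requests
from google.cloud import storage

client = storage.Client()
blob = client.bucket('my-bucket').blob('my-object')   # placeholders

# start a resumable upload session and persist its URI for later runs
session_uri = blob.create_resumable_upload_session(content_type='application/octet-stream')
with open('upload_session_uri.txt', 'w') as f:
    f.write(session_uri)

# later (possibly on a different run), PUT the data straight to the saved session URI
with open('path/to/file', 'rb') as data:
    response = requests.put(session_uri, data=data)
print(response.status_code)   # 200/201 once the upload has completed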
There is little info out there that demonstrates how this is done. This is how I ended up getting it to work. I'm sure there is a better way, so let me know.
import os

import requests
from google.auth.transport.requests import AuthorizedSession
from google.resumable_media.requests import ResumableUpload

# CHUNK_SIZE, CLIENT, add_resumable_url_to_file and remove_resumable_url_from_file
# are defined elsewhere in my project.

def upload_to_bucket(blob_name, path_to_file, bucket_name):
    """Upload a file to the bucket"""
    upload_url = f"https://www.googleapis.com/upload/storage/v1/b/{bucket_name}/o?uploadType=resumable&name={blob_name}"
    file_total_bytes = os.path.getsize(path_to_file)
    print('total bytes of file ' + str(file_total_bytes))
    # initiate a resumable upload session
    upload = ResumableUpload(upload_url, CHUNK_SIZE)
    # provide authentication
    transport = AuthorizedSession(credentials=CLIENT._credentials)
    metadata = {'name': blob_name}
    with open(path_to_file, "rb") as file_to_transfer:
        response = upload.initiate(
            transport, file_to_transfer, metadata,
            'application/octet-stream', total_bytes=file_total_bytes)
        print('Resumable Upload URL ' + response.headers['Location'])
        # save resumable url to json file in case there is an issue
        add_resumable_url_to_file(path_to_file, upload.resumable_url)
        while True:
            try:
                response = upload.transmit_next_chunk(transport)
                if response.status_code == 200:
                    # upload complete
                    break
                if response.status_code != 308:
                    # save Resumable URL and try next time
                    raise Exception('Failed to upload chunk')
                print(upload.bytes_uploaded)
            except Exception as ex:
                print(ex)
                # keep the saved resumable URL and retry on the next run
                return
    print('cloud upload complete')
    remove_resumable_url_from_file(path_to_file)


def resume_upload_to_bucket(resumable_upload_url, path_to_file):
    # check resumable upload status
    response = requests.post(resumable_upload_url, timeout=60)
    if response.status_code == 200:
        print('Resumable upload completed successfully')
        remove_resumable_url_from_file(path_to_file)
        return
    # get the amount of bytes previously uploaded
    range_header = response.headers.get('Range')  # e.g. 'bytes=0-524287'
    previous_amount_bytes_uploaded = int(range_header.split('-')[-1]) + 1 if range_header else 0
    file_total_bytes = os.path.getsize(path_to_file)
    with open(path_to_file, "rb") as file_to_transfer:
        # Upload the remaining data
        for i in range(previous_amount_bytes_uploaded, file_total_bytes, CHUNK_SIZE):
            file_byte_location = file_to_transfer.seek(i)
            print(file_byte_location)
            chunk = file_to_transfer.read(CHUNK_SIZE)
            headers = {'Content-Range': f'bytes {i}-{i + len(chunk) - 1}/{file_total_bytes}'}
            response = requests.put(resumable_upload_url, data=chunk, headers=headers, timeout=60)
            if response.status_code == 200:
                # upload complete
                break
            if response.status_code != 308:
                # save Resumable URL and try next time
                raise Exception('Failed to upload chunk')
    print('resumable upload completed')
    remove_resumable_url_from_file(path_to_file)
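For completeness, a hedged sketch of how these two functions could be driven on the next run; get_resumable_url_from_file is a hypothetical counterpart to the helpers above that returns the previously saved session URI, or None:

# hypothetical driver for the functions above
saved_url = get_resumable_url_from_file('/data/sample.bin')
if saved_url:
    # a previous attempt was interrupted; continue from where it stopped
    resume_upload_to_bucket(saved_url, '/data/sample.bin')
else:
    # fresh upload
    upload_to_bucket('sample.bin', '/data/sample.bin', 'my-field-data-bucket')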

create email with python

I'm working on a personal project that needs a new email address at the start, and I want to create that email address with Python. I also don't want to run a complicated SMTP server (I don't know much about that yet). I want to do something like a temp-mail service with an API. I tried the Temp Mail API but got an error. I did something like this:
import requests

url = "https://privatix-temp-mail-v1.p.rapidapi.com/request/mail/id/<md5 of my temp mail>"
req = requests.get(url)
print(req)
but I got a 401 status code saying my API key is invalid.
Then I went to the RapidAPI website and looked at the examples; there was a header for the request, so I added it to my code, which then looked like this:
import requests

url = "https://privatix-temp-mail-v1.p.rapidapi.com/request/mail/id/md5"
headers = {
    'x-rapidapi-host': "privatix-temp-mail-v1.p.rapidapi.com",
    'x-rapidapi-key': "that was a key"
}
req = requests.get(url, headers=headers)
Then I got this:
{"message":"You are not subscribed to this API."}
Now I'm confused and I don't know what the problem is. If you know the Temp Mail API, a similar service, or have any other suggestion, please help me.
In order to use any API from RapidAPI Hub, you need to subscribe to that particular API. It's pretty simple.
Go to the pricing page of this API and choose a plan according to your needs. Click the subscribe button and you will be good to go. The Basic plan is free, but a soft limit is associated with it, so it may ask for your card details.

Google Pubsub Push to AppEngine 502 and 504 Errors

I am basing this task on the documentation provided here by Google:
https://cloud.google.com/appengine/docs/flexible/python/writing-and-responding-to-pub-sub-messages
I ended up coming up with this code for my main, which drops the globally stored variable for messages and instead writes each message out to a flat file.
import base64
import json
from datetime import datetime

from flask import current_app, request
from google.cloud.storage import Blob

# `app`, `bucket` (a google.cloud.storage bucket) and `parse_dict` (a helper
# that flattens nested dicts) are defined elsewhere in main.py.

# [START push]
@app.route('/pubsub/push', methods=['POST'])
def pubsub_push():
    if (request.args.get('token', '') !=
            current_app.config['PUBSUB_VERIFICATION_TOKEN']):
        return 'Invalid request', 400

    # Decode the data
    envelope = json.loads(request.data.decode('utf-8'))

    # Current time in UTC
    current_date = datetime.utcnow()

    payload = json.loads(base64.b64decode(envelope['message']['data']))

    # Normalize and flatten the data
    if "events" in payload and payload['events']:
        payload['events'] = payload['events'][0]['data']
    payload = parse_dict(init=payload, sep='_')

    # Now jsonify all remaining lists and dicts
    for key in payload.keys():
        value = payload[key]
        if isinstance(value, (list, dict)):
            if value:
                value = json.dumps(value)
            else:
                value = None
        payload[key] = value

    # Custom id with the message id and date string
    id = "{}.{}".format(
        payload['timestamp_unixtime_ms'],
        payload['message_id']
    )
    filename_hourly = 'landing_path/{date}/{hour}/{id}.json'.format(
        date=current_date.strftime("%Y%m%d"),
        hour=current_date.strftime("%H"),
        id=id
    )

    blob = bucket.get_blob(filename_hourly)
    if blob:  # We already have this file, skip this message
        print('Already have {} stored.'.format(filename_hourly))
        return 'OK', 200

    blob_hourly = Blob(bucket=bucket, name=filename_hourly)
    blob_hourly.upload_from_string(json.dumps(payload, indent=2, sort_keys=True))

    # Returning any 2xx status indicates successful receipt of the message.
    return 'OK', 200
# [END push]
This works perfectly, but I am getting a ton of 502 and 504 errors; they show up in the Stackdriver dashboard I created.
My guess is that it's taking too long to upload the files, but I am unsure what to do otherwise. Resource usage on the App Engine instances is quite low, and I am nowhere near my API limits.
Any suggestions?
This issue seems to have been addressed on the Google Cloud Platform issue tracker link. As explained there, 5xx responses usually come from nginx and are commonly caused by the application being too busy to respond; the nginx web server sits in front of the App Engine application on each Google App Engine instance.
However, further review suggests avoiding gevent async workers with the Google Cloud Client Library, as they cause requests to hang, and using more gunicorn workers instead.
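Purely as an illustration (not from the original answer), a gunicorn config along those lines could look like the sketch below; the worker count and timeout are placeholders to tune per instance size:

# gunicorn.conf.py (hypothetical example)
# use plain sync workers rather than gevent async workers,
# and scale out with more worker processes instead
worker_class = 'sync'
workers = 8      # placeholder; tune to the instance's CPU and memory
timeout = 60     # seconds before an unresponsive worker is restarted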

Python spotipy Oauth : Bad request

This is my code
import spotipy
import spotipy.util as util

UserScope = 'user-library-read'
util.prompt_for_user_token(username='vulrev1', scope=UserScope, client_id="533adb3f925b488za9d3772640ec6403", client_secret='66054b185c7541fcabce67afe522449b', redirect_uri="http://127.0.0.1/callback")

lz_uri = 'spotify:artist:36QJpDe2go2KgaRleHCDTp'
spotify = spotipy.Spotify()
results = spotify.artist_top_tracks(lz_uri)

for track in results['tracks'][:10]:
    print('track : ' + track['name'])
I'm getting this
spotipy.oauth2.SpotifyOauthError: Bad Request
I'm not quite sure what's going on here. Is there something I need to do with the hosts file? Because http://127.0.0.1 refuses to connect.
token = util.prompt_for_user_token(username='vulrev1',scope=UserScope,client_id="533adb3f925b488za9d3772640ec6403",client_secret='66054b185c7541fcabce67afe522449b',redirect_uri="http://127.0.0.1/callback")
spotify = spotipy.Spotify(auth=token)
You have to pass your token to the client as auth. Also, I think you may need to change your username to the numeric user ID, which you can find in the link to your profile.
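Putting the question and this fix together, a minimal sketch of the corrected flow (client ID, secret, and username are placeholders):

import spotipy
import spotipy.util as util

# placeholder credentials; use your own app's values
token = util.prompt_for_user_token(username='vulrev1',
                                   scope='user-library-read',
                                   client_id='<client id>',
                                   client_secret='<client secret>',
                                   redirect_uri='http://127.0.0.1/callback')

# pass the token so the client makes authenticated requests
spotify = spotipy.Spotify(auth=token)

results = spotify.artist_top_tracks('spotify:artist:36QJpDe2go2KgaRleHCDTp')
for track in results['tracks'][:10]:
    print('track : ' + track['name'])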

Writing code using graph APIs

I am extremely new to Python, scripting, and APIs; I am just learning. I came across a very cool piece of code which uses the Facebook API to reply to birthday wishes.
I will add my questions, numbered, so that it will be easier for someone else later too. I hope this question will clear up a lot of newbie doubts.
1) Talking about APIs, what format are they usually in? Is it a library file which we need to download and later import? For instance, for the Twitter API, do we need to import twitter?
Here is the code:
import requests
import json

AFTER = 1353233754
TOKEN = ' <insert token here> '


def get_posts():
    """Returns dictionary of id, first names of people who posted on my wall
    between start and end time"""
    query = ("SELECT post_id, actor_id, message FROM stream WHERE "
             "filter_key = 'others' AND source_id = me() AND "
             "created_time > 1353233754 LIMIT 200")
    payload = {'q': query, 'access_token': TOKEN}
    r = requests.get('https://graph.facebook.com/fql', params=payload)
    result = json.loads(r.text)
    return result['data']


def commentall(wallposts):
    """Comments thank you on all posts"""
    # TODO convert to batch request later
    for wallpost in wallposts:
        r = requests.get('https://graph.facebook.com/%s' %
                         wallpost['actor_id'])
        url = 'https://graph.facebook.com/%s/comments' % wallpost['post_id']
        user = json.loads(r.text)
        message = 'Thanks %s :)' % user['first_name']
        payload = {'access_token': TOKEN, 'message': message}
        s = requests.post(url, data=payload)
        print("Wall post %s done" % wallpost['post_id'])


if __name__ == '__main__':
    commentall(get_posts())
Questions:
Importing json: why is json imported here? To give a structured reply?
What are the 'AFTER' and the empty 'TOKEN' variables here?
What are the 'query' and 'payload' variables inside the get_posts() function?
Roughly, what does each method and function do?
I know I am extremely naive, but this could be a good start. With a little hint, I can carry on.
If you're not going to explain the code, which is pretty boring, I understand; in that case, please tell me how to link to an API after the code is written, meaning how does a script communicate with the desired API.
This is not my code; I copied it from a source.
json is needed to interpret the data that the web service sends back over HTTP.
The 'AFTER' variable is meant to be used as a cutoff timestamp, on the assumption that all posts after it are birthday wishes.
To make the program work, you need a token, which you can obtain from the Graph API Explorer with the appropriate permissions.
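On the question of how the script communicates with the API: it is just HTTPS requests, as in the code above. A minimal hedged sketch (the token is a placeholder, and note that the FQL endpoint used above has since been retired by Facebook):

import requests

TOKEN = '<insert token here>'   # placeholder token from the Graph API Explorer

# the "API" is simply an HTTP endpoint: the script sends a request with its
# access token and parses the JSON that comes back
r = requests.get('https://graph.facebook.com/me', params={'access_token': TOKEN})
profile = r.json()   # equivalent to json.loads(r.text) in the code above
print(profile)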
