I need to upload an image string (like the one you get from requests.get(url).content) to Google Drive using the PyDrive package. I checked a similar question, but the accepted answer there was to save it to a temporary file on a local drive and then upload that.
However, I cannot do that because of local storage and permission restrictions.
The previously accepted answer was to use SetContentString(image_string.decode('utf-8')), since
SetContentString requires a parameter of type str, not bytes.
However, that raises UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte, as noted in the comments on that answer.
Is there any way to do this without using a temporary file, using PIL/BytesIO/anything that can convert it to be uploaded correctly as a string or somehow using PIL manipulated as an image and uploaded using SetContentFile()?
A basic example of what I'm trying to do is:
img_content = requests.get('https://i.imgur.com/A5gIh7W.jpeg').content
file = drive.CreateFile({...})
file.SetContentString(img_content.decode('utf-8'))
file.Upload()
When I looked at the pydrive documentation (Upload and update file content), it says the following:
Managing file content is as easy as managing file metadata. You can set file content with either SetContentFile(filename) or SetContentString(content) and call Upload() just as you did to upload or update file metadata.
I also searched for a method to directly upload binary data to Google Drive, but couldn't find one, so it seems there may be no such method in pydrive. In this answer, I would therefore like to propose uploading the binary data using the requests module. In this case, the access token is retrieved from the authorization script of pydrive. The sample script is as follows.
Sample script:
from pydrive.auth import GoogleAuth
import io
import json
import requests
url = 'https://i.imgur.com/A5gIh7W.jpeg' # Please set the direct link of the image file.
filename = 'sample file' # Please set the filename on Google Drive.
folder_id = 'root' # Please set the folder ID. The file is put to this folder.
gauth = GoogleAuth()
gauth.LocalWebserverAuth()
metadata = {
    "name": filename,
    "parents": [folder_id]
}
files = {
    'data': ('metadata', json.dumps(metadata), 'application/json'),
    'file': io.BytesIO(requests.get(url).content)
}
r = requests.post(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart",
    headers={"Authorization": "Bearer " + gauth.credentials.access_token},
    files=files
)
print(r.text)
Note:
This script assumes that your URL is the direct link to the image file. Please be careful about this.
In this case, uploadType=multipart is used. The official document says as follows. Ref
Use this upload type to quickly transfer a small file (5 MB or less) and metadata that describes the file, in a single request. To perform a multipart upload, refer to Perform a multipart upload.
When you want to upload data of a large size, please use a resumable upload. Ref
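As a minimal sketch of that approach, continuing the sample script above (it reuses gauth, filename, folder_id and url; for truly large files you would PUT the data in chunks with Content-Range headers rather than in one request):
# Step 1: initiate a resumable session with the file metadata only.
init = requests.post(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable",
    headers={
        "Authorization": "Bearer " + gauth.credentials.access_token,
        "Content-Type": "application/json; charset=UTF-8",
    },
    data=json.dumps({"name": filename, "parents": [folder_id]}),
)
# Step 2: PUT the binary data to the session URL returned in the Location header.
session_url = init.headers["Location"]
r = requests.put(session_url, data=requests.get(url).content)
print(r.text)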
References:
Upload and update file content of pydrive
Upload file data of Drive API
Related
I have the following Python function to write the given content to a bucket in Cloud Storage:
import gzip
from google.cloud import storage
def upload_to_cloud_storage(json):
    """Write to Cloud Storage."""
    # The contents to upload as a JSON string.
    contents = json
    storage_client = storage.Client()
    # Path and name of the file to upload (file doesn't yet exist).
    destination = "path/to/name.json.gz"
    # Gzip the contents before uploading
    with gzip.open(destination, "wb") as f:
        f.write(contents.encode("utf-8"))
    # Bucket
    my_bucket = storage_client.bucket('my_bucket')
    # Blob (content)
    blob = my_bucket.blob(destination)
    blob.content_encoding = 'gzip'
    # Write to storage
    blob.upload_from_string(contents, content_type='application/json')
However, I receive an error when running the function:
FileNotFoundError: [Errno 2] No such file or directory: 'path/to/name.json.gz'
Highlighting this line as the cause:
with gzip.open(destination, "wb") as f:
I can confirm that the bucket and path both exist although the file itself is new and to be written.
I can also confirm that if I remove the gzipping part, the file is successfully written to Cloud Storage.
How can I gzip a new file and upload to Cloud Storage?
Other answers I've used for reference:
https://stackoverflow.com/a/54769937
https://stackoverflow.com/a/67995040
Although @David's answer wasn't complete at the time I solved my problem, it got me on the right track. Here's what I ended up using, along with explanations I found out along the way.
import gzip
from google.cloud import storage
from google.cloud.storage import fileio
def upload_to_cloud_storage(json_string):
    """Gzip and write to Cloud Storage."""
    storage_client = storage.Client()
    bucket = storage_client.bucket('my_bucket')
    # Filename (include path)
    blob = bucket.blob('path/to/file.json')
    # Set blob metadata for decompressive transcoding
    blob.content_encoding = 'gzip'
    blob.content_type = 'application/json'
    writer = fileio.BlobWriter(blob)
    # Must write as bytes
    gz = gzip.GzipFile(fileobj=writer, mode="wb")
    # When writing as bytes we must encode our JSON string.
    gz.write(json_string.encode('utf-8'))
    # Close connections
    gz.close()
    writer.close()
We use the GzipFile() class instead of the convenience function (gzip.compress) so that we can pass in the mode. When trying to write using w or wt, you will receive the error:
TypeError: memoryview: a bytes-like object is required, not 'str'
So we must write in binary mode (wb), which is also what lets .write() accept bytes. When doing so, however, we need to encode our JSON string, which can be done with str.encode('utf-8'). Failing to do this will also result in the same error.
Finally, I wanted to enable decompressive transcoding, where the requester (a browser, in my case) receives the uncompressed version of the file when requested. To enable this, google.cloud.storage.blob allows you to set some metadata, including content_type and content_encoding, so we can follow best practices.
This sees the JSON object in memory written to your chosen destination in Cloud Storage in a compressed format and decompressed on the fly (without needing to download a gzip archive).
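As a quick sanity check, here is a hedged sketch of reading the object back (assuming a reasonably recent google-cloud-storage client; with content_encoding='gzip' set, download_as_bytes returns the decompressed payload by default):
# read the object back; the client transcodes the gzip payload to plain JSON
data = bucket.blob('path/to/file.json').download_as_bytes()
print(data.decode('utf-8'))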
Thanks also to @JohnHanley for the troubleshooting advice.
The best solution is not to write the gzip to a file at all, and directly compress and stream to GCS.
import gzip
from google.cloud import storage
from google.cloud.storage import fileio
storage_client = storage.Client()
bucket = storage_client.bucket('my_bucket')
blob = bucket.blob('my_object')
writer = fileio.BlobWriter(blob)
gz = gzip.GzipFile(fileobj=writer, mode="wb")
gz.write(contents)  # contents must be bytes; encode a str with contents.encode('utf-8')
gz.close()
writer.close()
I exported some images from Google Earth Engine to Google Drive, and I need to download those images to a local drive using a Python script. I tried to use oauth2client and apiclient, as I saw here:
I got a list of files in Drive and the corresponding IDs, then I used an ID to try to download the corresponding file with the gdown lib:
gdown.download(f'https://drive.google.com/uc?id={file_data["id"]}',
               f'{download_path}{os.sep}{filename_to_download}.tif')
I got the following error message:
Access denied with the following error:
Cannot retrieve the public link of the file. You may need to change
the permission to 'Anyone with the link', or have had many accesses.
You may still be able to access the file from the browser:
https://drive.google.com/uc?id=<id>
Since I got the Drive file list, I suppose the Drive authentication is OK. If I open the link suggested in the error message in a browser, I can download the file. If I check the file's properties in Drive, I see:
Who can access: not shared.
What should I do to download the files?
This is the complete code:
# https://medium.com/swlh/google-drive-api-with-python-part-i-set-up-credentials-1f729cb0372b
# https://levelup.gitconnected.com/google-drive-api-with-python-part-ii-connect-to-google-drive-and-search-for-file-7138422e0563
# https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import os
from apiclient import discovery
from httplib2 import Http
from oauth2client import client, file, tools
import gdown
class GoogleDrive(object):
    # define API scope
    def __init__(self, secret_credentials_file_path='./credentials'):
        self.DriveFiles = None
        SCOPE = 'https://www.googleapis.com/auth/drive'
        self.store = file.Storage(f'{secret_credentials_file_path}{os.sep}credentials.json')
        self.credentials = self.store.get()
        if not self.credentials or self.credentials.invalid:
            flow = client.flow_from_clientsecrets(
                f'{secret_credentials_file_path}{os.sep}client_secret.json', SCOPE)
            self.credentials = tools.run_flow(flow, self.store)
        oauth_http = self.credentials.authorize(Http())
        self.drive = discovery.build('drive', 'v3', http=oauth_http)

    def RetrieveAllFiles(self):
        results = []
        page_token = None
        while True:
            try:
                param = {}
                if page_token:
                    param['pageToken'] = page_token
                files = self.drive.files().list(**param).execute()
                # append the files from the current result page to our list
                results.extend(files.get('files'))
                # Google Drive API shows our files in multiple pages when the number of files exceeds 100
                page_token = files.get('nextPageToken')
                if not page_token:
                    break
            except Exception as error:
                print(f'An error has occurred: {error}')
                break
        self.DriveFiles = results

    def GetFileData(self, filename_to_search):
        for file_data in self.DriveFiles:
            if file_data.get('name') == filename_to_search:
                return file_data
        else:
            return None

    def DownloadFile(self, filename_to_download, download_path):
        file_data = self.GetFileData(f'{filename_to_download}.tif')
        gdown.download(f'https://drive.google.com/uc?id={file_data["id"]}',
                       f'{download_path}{os.sep}{filename_to_download}.tif')
Google Drive may not be the best tool for this. You may want to upload the images to a raw file hosting service like Imgur and download them to a file using requests; you can then read the file from your script, or skip writing the file entirely and use image.content directly. Here's an example:
image = requests.get("https://i.imgur.com/5SMNGtv.png")
with open("image.png", 'wb') as file:
file.write(image.content)
(You can specify where you want the file to be downloaded by adding the path before the file name, like this:)
image = requests.get("https://i.imgur.com/5SMNGtv.png")
with open("C://Users//Admin//Desktop//image.png", 'wb') as file:
file.write(image.content)
Solution 1.
Access denied with the following error:
Cannot retrieve the public link of the file. You may need to change
the permission to 'Anyone with the link', or have had many accesses.
You may still be able to access the file from the browser:
https://drive.google.com/uc?id=<id>
In the sharing tab on Google Drive (right-click on the image and open Share or Get link), change the privacy to "Anyone with the link". Your code should then work.
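If you would rather grant that permission programmatically, here is a hedged sketch using the authenticated Drive service from the question's class (self.drive and file_data are assumed from there):
# grant "anyone with the link" read access to the file
self.drive.permissions().create(
    fileId=file_data['id'],
    body={'role': 'reader', 'type': 'anyone'},
).execute()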
Solution 2.
If you can use Google Colab, then you can mount gdrive easily and access files there using
from google.colab import drive
drive.mount('/content/gdrive')
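Once mounted, the exported images are ordinary paths under /content/gdrive, so a plain copy is enough. A small sketch (MyDrive and the filename are placeholders):
import shutil

# copy an exported image out of the mounted Drive folder
shutil.copy('/content/gdrive/MyDrive/exported_image.tif', '/content/exported_image.tif')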
Google has a policy of not accepting your regular Google/Gmail password: they only accept so-called "App Passwords" that you create for your Google account in order to authenticate from third-party apps.
I want to read a file directly from Google Drive with the Google Drive API in Visual Studio Code, using Python.
Here is a part of codes:
file2 = drive.CreateFile({'id': file1['<the file ID of my file that is inside the Google Drive>']})
file2.GetContentString('testing.csv')
Upon running this, I get a
KeyError: KeyError('<the file ID of my file that is inside the Google Drive>')
I searched on the internet the possible ways to solve this but nothing seems to work so far...
I followed this tutorial: Hands-on tutorial for managing Google Drive files with Python
{'id': file1['id']} implies that you want to retrieve the id of file1, a file that you are expected to have uploaded in previous steps.
If you did not define file1 in your code, you can instead hardcode the valid id of any file on your Drive as the parameter.
Sample:
file2 = drive.CreateFile({'id': '<the file ID of a csv file on your Google Drive>'})
content = file2.GetContentString()
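If you want the file saved to disk rather than read into a string, PyDrive's GetContentFile does that:
# writes the file's content to ./testing.csv instead of returning a string
file2.GetContentFile('testing.csv')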
Uploading a file to Google Drive
Then you need to send a MediaFileUpload for the file to be uploaded:
file_metadata = {'name': 'photo.jpg'}
media = MediaFileUpload('files/photo.jpg', mimetype='image/jpeg')
file = drive_service.files().create(body=file_metadata,
                                    media_body=media,
                                    fields='id').execute()
print('File ID: %s' % file.get('id'))
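For larger files, a hedged variant of the same call that uploads in chunks by passing resumable=True and polling next_chunk():
media = MediaFileUpload('files/photo.jpg', mimetype='image/jpeg', resumable=True)
request = drive_service.files().create(body=file_metadata,
                                       media_body=media,
                                       fields='id')
response = None
while response is None:
    # status reports upload progress for each completed chunk
    status, response = request.next_chunk()
print('File ID: %s' % response.get('id'))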
Viewing the uploaded file in the Google Drive web application
Using files.get you get a File resource response. If the file you uploaded is of a type that Google Drive can open and display, it will have a property called:
webViewLink string A link for opening the file in a relevant Google editor or viewer in a browser.
You can use that link to open the file in the Google Drive web application. Note, however, that the user opening the file must have permission on the file to be able to view it.
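A hedged sketch of requesting that property at creation time (the fields parameter controls which properties come back in the response):
# ask for webViewLink alongside the id when creating the file
file = drive_service.files().create(body=file_metadata,
                                    media_body=media,
                                    fields='id, webViewLink').execute()
print(file.get('webViewLink'))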
Reading the data programmatically
Remember that the Google Drive API is just a file storage API; it doesn't have the ability to open files, it just stores them. If you're working with a CSV, you should consider converting it to a Google Sheet and then using the Google Sheets API to access the data programmatically, as in the sketch below.
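A hedged sketch of that conversion, assuming drive_service and a sheets_service built with googleapiclient.discovery, with placeholder file and range names:
# upload the CSV and have Drive convert it to a Google Sheet on upload
file_metadata = {
    'name': 'photo-data',
    'mimeType': 'application/vnd.google-apps.spreadsheet',  # convert on upload
}
media = MediaFileUpload('files/data.csv', mimetype='text/csv')
sheet = drive_service.files().create(body=file_metadata,
                                     media_body=media,
                                     fields='id').execute()
# read the values back with the Sheets API
values = sheets_service.spreadsheets().values().get(
    spreadsheetId=sheet['id'], range='A1:Z100').execute().get('values', [])
print(values)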
A Cloud Function is triggered once a file gets uploaded to the storage.
My File Name : PubSubMessage.
Inside Text : Hi, this this the first message
from google.cloud import storage
storage_client = storage.Client()
def hello_gcs(event, context):
    file = event
    bucket = storage_client.get_bucket(file['bucket'])
    blob = bucket.blob(file['name'])
    contents = blob.download_as_string()
    print('contents: {}'.format(contents))
    decodedstring = contents.decode(encoding="utf-8", errors="ignore")
    print('decodedstring: \n{}'.format(decodedstring))
The decoded string printed by the function looks like this:
------WebKitFormBoundaryAWAKqDaYZB3fJBhx
Content-Disposition: form-data; name="file"; filename="PubSubMessage.txt"
Content-Type: text/plain
Hi, this this the first line.
Hi ,this is the second line.
hi this is the space after.
------WebKitFormBoundaryAWAKqDaYZB3fJBhx--
My Requirements.txt file
google-cloud-storage
requests==2.20.0
requests-toolbelt==0.9.1
How do I get the actual string inside the file, "Hi, I am the first message....."?
What is the best possible way to get the text from a file?
TIA
The string you read from Google Storage is a string representation of a multipart form. It contains not only the uploaded file contents but also some metadata. The same kind of request may be used to represent more than one file and/or form fields along with a file.
To access the file contents you want, you can use a library which supports that, such as requests-toolbelt. Check out this SO answer for an example. You'll need the Content-Type header, which includes the boundary, or to manually parse the boundary just from the content, if you absolutely must.
EDIT: from your answer, it seems that the Content-Type header was available in the Storage Metadata in Google Storage, which is a common scenario. For future readers of this answer, the specifics of where to read this header from will depend on your particular case.
Since this library is present in PyPI (the Python Package Index), you can use it even in Cloud Functions by specifying it as a dependency in the requirements.txt file.
The code below will print the actual text present inside the file.
from requests_toolbelt.multipart import decoder
from google.cloud import storage
storage_client = storage.Client()
def hello_gcs(event, context):
    file = event
    bucket = storage_client.bucket(file['bucket'])
    #print('Bucket Name : {}'.format(file['bucket']))
    #print('Object Name : {}'.format(file['name']))
    #print('Bucket Object : {}'.format(bucket))
    blob = bucket.get_blob(file['name'])
    #print('Blob Object : {}'.format(blob))
    contentType = blob.content_type
    print('Blob ContentType: {}'.format(contentType))
    # To download the file as a bytes object
    content = blob.download_as_string()
    print('content: {}'.format(content))
    for part in decoder.MultipartDecoder(content, contentType).parts:
        print(part.text)
I am a python developer and somewhat new to using Google's gMail API to import .eml files into a gMail account.
I've gotten all of the groundwork done getting my oAuth credentials working, etc.
However, I am stuck where I load in the data file. I need help loading the message data into a variable.
How do I create the message_data variable reference - in the appropriate format - from my sample email file (which is stored in rfc822 format) that is on disk?
Assuming I have a file on disk at /path/to/file/sample.eml, how do I load that into message_data in the proper format for the gMail API import call?
...
# how do I properly load message_data from the rfc822 disk file?
media = MediaIoBaseUpload(message_data, mimetype='message/rfc822')
message_response = service.users().messages().import_(
    userId='me',
    fields='id',
    neverMarkSpam=True,
    processForCalendar=False,
    internalDateSource='dateHeader',
    media_body=media).execute(num_retries=2)
...
You want to import an eml file using Gmail API.
You have already been able to get and put values for Gmail API.
You want to achieve this using google-api-python-client.
service in your script can be used for uploading the eml file.
If my understanding is correct, how about this answer? Please think of this as just one of several possible answers.
Modification point:
In this case, the method of "Users.messages: insert" is used.
Modified script:
Before you run the script, please set the filename with the path of the eml file.
eml_file = "###" # Please set the filename with the path of the eml file.
user_id = "me"
f = open(eml_file, "r", encoding="utf-8")
eml = f.read()
f.close()
message_data = io.BytesIO(eml.encode('utf-8'))
media = MediaIoBaseUpload(message_data, mimetype='message/rfc822', resumable=True)
metadata = {'labelIds': ['INBOX']}
res = service.users().messages().insert(userId=user_id, body=metadata, media_body=media).execute()
print(res)
In the above script, the following modules are also required.
import io
from googleapiclient.http import MediaIoBaseUpload
Note:
In the above modified script, {'labelIds': ['INBOX']} is used as the metadata. In this case, the imported eml file can be seen in the INBOX of Gmail. If you want to change this, please modify it.
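If you prefer the import_ call from the question's snippet, here is a hedged variant with the same metadata and media objects built above (import_ supports options like internalDateSource and neverMarkSpam that insert does not):
# import the eml instead of inserting it
res = service.users().messages().import_(
    userId=user_id,
    body=metadata,
    internalDateSource='dateHeader',
    neverMarkSpam=True,
    processForCalendar=False,
    media_body=media).execute()
print(res)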
Reference:
Users.messages: insert
If I misunderstood your question and this was not the result you want, I apologize.