Can I read from the AppEngine BlobStore using the remote api - python

I am trying to read (and subsequently save) blobs from the blobstore using the remote api. I get the error: "No api proxy found for service "blobstore"" when I execute the read.
Here is the stub code:
for b in bs.BlobInfo.all().fetch(100):
    # BlobInfo.key() is a method; calling it returns the blob's BlobKey
    blob_reader = bs.BlobReader(str(b.key()))
    file = blob_reader.read()
the error occurs on the line: file = blob_reader.read()
I am reading the file from my personal appspot via terminal with:
python tools/remote_api/blobstore_download.py --servername=myinstance.appspot.com --appid=myinstance
So, is reading from the blobstore possible via the remote API, or is my code bad? Any suggestions?

We recently added blobstore support to remote_api. Make sure you're using the latest version of the SDK, and your error should go away.
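For reference, a minimal sketch of what the setup in a script like blobstore_download.py might look like once the SDK supports blobstore over remote_api; the app id, handler path, and auth prompt below are placeholders/assumptions, not taken from the original script:

import getpass

from google.appengine.ext import blobstore as bs
from google.appengine.ext.remote_api import remote_api_stub


def auth_func():
    # Prompts for the credentials the remote_api handler expects.
    return raw_input('Email: '), getpass.getpass('Password: ')


# Point local API calls (datastore, blobstore, ...) at the deployed app.
# Exact arguments can differ between SDK releases.
remote_api_stub.ConfigureRemoteApi(
    'myinstance',                      # app id (placeholder)
    '/_ah/remote_api',                 # remote_api handler path
    auth_func,
    servername='myinstance.appspot.com')

for b in bs.BlobInfo.all().fetch(100):
    blob_reader = bs.BlobReader(str(b.key()))
    data = blob_reader.read()
    with open('%s.blob' % b.key(), 'wb') as out:
        out.write(data)

This assumes the remote_api handler is enabled in the deployed app (for example via the remote_api builtin in app.yaml).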

Related

How to upload a video file directly to Cloud Storage from a Flask form without BlobStore?

I am using Flask, and I have a form on my web app's index page, which requires users to upload MP4 videos. I expect my users to upload 30min long videos, so the video sizes are likely going to be in the hundreds of megabytes. The issue now is that I intend to deploy this Flask application to Google App Engine, and apparently I cannot work with any static file above 32MB. Somehow, when I try to upload any video in the deployed version that is above 32MB, I get a Request Too Large error.
I see that the BlobStore Python API used to be a recommended solution to work with really large files on the server in the past. But that was for Python 2.7: https://cloud.google.com/appengine/docs/standard/python/blobstore/
I'm using Python 3.7, and Google now recommends that files get uploaded directly to Cloud Storage, and I am not exactly sure how to do that.
Below is a snippet showing how I'm currently storing my users' uploaded videos through the form into Cloud Storage. Unfortunately, I'm still restricted from uploading large files because I get error messages. So again, my question is: How can I make my users upload their files directly to Cloud Storage in a way that won't let the server time out or give me a Request Too Large error?
form = SessionForm()
blob_url = ""
if form.validate_on_submit():
    f = form.video.data
    video_string = f.read()
    filename = secure_filename(f.filename)
    try:
        # The custom function upload_session_video() uploads the file to a Cloud Storage bucket
        # It uses the Storage API's upload_from_string() method.
        blob_url = upload_session_video(video_string, filename)
    except FileNotFoundError as error:
        flash(error, 'alert')
    # Create the Cloud Storage bucket (same name as the video file)
    user_bucket = create_bucket(form.patient_name.data.lower())
You cannot upload files of more than 32MB to Cloud Storage through Google App Engine because of its request size limit. However, you can work around that by doing a resumable upload to Cloud Storage; in Python, use the "google-resumable-media" library. Resumable (chunked) uploads are intended for cases where:
the size of the resource is not known (i.e. it is generated on the fly)
requests must be short-lived
the client has request size limitations
the resource is too large to fit into memory
Example code is included below.
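For illustration, here is a rough sketch of a chunked resumable upload with google-resumable-media; the bucket name, object name, and the use of the Flask/WTForms file stream are assumptions rather than details from the original answer, and the endpoint and arguments should be checked against the library's documentation:

import google.auth
from google.auth.transport.requests import AuthorizedSession
from google.resumable_media.requests import ResumableUpload

BUCKET = 'my-video-bucket'            # placeholder bucket name
BLOB_NAME = 'uploads/session.mp4'     # placeholder object name
CHUNK_SIZE = 10 * 256 * 1024          # must be a multiple of 256 KiB

# Application Default Credentials; on App Engine this is the app's service account.
credentials, _ = google.auth.default()
transport = AuthorizedSession(credentials)

upload_url = ('https://www.googleapis.com/upload/storage/v1/b/'
              '{0}/o?uploadType=resumable'.format(BUCKET))
upload = ResumableUpload(upload_url, CHUNK_SIZE)

stream = form.video.data.stream       # werkzeug FileStorage stream from the form (assumption)
upload.initiate(transport, stream, {'name': BLOB_NAME}, 'video/mp4',
                stream_final=False)

# Send the video one chunk at a time instead of as a single huge request.
while not upload.finished:
    upload.transmit_next_chunk(transport)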

How to upload HTML file to an Azure Web App using Python?

I have a Python application which creates an HTML file that I then want to upload to an Azure Web App.
What is the best way to do this?
I originally tried to do it using FTP and then switched to pushing with Git. Neither of these really felt right. How should I be doing this?
UPDATE
I have this 99% working. I'm using a Storage Account to host a static site (which feels like the right way to do this).
This is how I am uploading:
blob_service_client = BlobServiceClient.from_connection_string(az_string)
# Create a blob client using the local file name as the name for the blob
blob_client = blob_service_client.get_blob_client(container=container_name, blob=local_file_name)
print("\nUploading to Azure Storage as blob:\n\t" + local_file_name)
# Upload the created file
with open('populated.html', "rb") as data:
    blob_client.upload_blob(data)
The only problem that I have now, is that the file is downloading instead of opening in the browser. I think I need to set the content type somewhere.
Update 2
Working now, I added:
my_content_settings = ContentSettings(content_type='text/html')
test = blob_client.upload_blob(data, overwrite=True, content_settings=my_content_settings)
Cheers,
Mick
The best way to do this is up to you.
Generally, there are two ways to upload an HTML file to an Azure Web App for Windows, as below.
Following the Kudu wiki page Accessing files via ftp to upload a file via FTP.
Following the sections VFS, Zip and Zip Deployment of Kudu wiki page REST API to call the related PUT REST API to upload a file via HTTP client.
However, based on my understanding of your scenario, the two ways above are not simple. So I recommend using the Static website feature of Azure Blob Storage Gen 2 to host the static HTML file generated by your Python application, and uploading files via the Azure Storage SDK for Python. I think it's simple enough for you; you can even bind a custom domain to the default host name of the static website of Azure Blob Storage via a DNS CNAME.
The steps are below.
Refer to the official document Host a static website in Azure Storage to create an Azure Blob Storage Gen 2 account and enable the Static website feature.
Refer to the other official document Quickstart: Azure Blob storage client library v12 for Python to write the upload code in your current Python application. The default container named $web hosts the static website; you just need to upload your files to it, then browse the static website's primary endpoint (shown in the official document) to see them.
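As a rough sketch pulling those pieces together (the connection string and file names are placeholders, not values from the question):

from azure.storage.blob import BlobServiceClient, ContentSettings

connection_string = '<storage-account-connection-string>'  # placeholder
blob_service_client = BlobServiceClient.from_connection_string(connection_string)

# $web is the container that the static website endpoint serves from.
blob_client = blob_service_client.get_blob_client(container='$web', blob='index.html')

with open('populated.html', 'rb') as data:
    blob_client.upload_blob(
        data,
        overwrite=True,
        # Without an explicit content type the browser downloads the file
        # instead of rendering it.
        content_settings=ContentSettings(content_type='text/html'))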

Allow Google Cloud Compute Engine Instance to write file to Google Storage Bucket - Python

My Python server script, which runs on a Google Cloud VM instance, tries to save an image (JPEG) to storage, but it throws the following error.
File "/home/thamindudj_16/server/object_detection/object_detector.py", line 109, in detectHand
    new_img.save("slicedhand/{}#sliced_image{}.jpeg".format(threadname, i))
File "/home/thamindudj_16/.local/lib/python3.5/site-packages/PIL/Image.py", line 2004, in save
    fp = builtins.open(filename, "w+b")
OSError: [Errno 5] Input/output error: 'slicedhand/thread_1#sliced_image0.jpeg'
All the files, including the Python scripts, are in a Google Storage bucket that has been mounted on the VM instance using gcsfuse. The app tries to save the new image in the slicedhand folder.
Here is the Python code snippet where the image saving happens:
from PIL import Image
...
...
i = 0
new_img = Image.fromarray(bounding_box_img) ## conversion to an image
new_img.save("slicedhand/{}#sliced_image{}.jpeg".format(threadname, i))
I think the problem may be with permission access. The docs say to use --key_file, but what key file should I use, and where can I find it? I'm not clear whether this is the problem or something else.
Any help would be appreciated.
I understand that you are using gcsfuse on your Linux VM instance to access Google Cloud Storage.
The key file is a Service Account credentials key that lets you initialize the Cloud SDK or a Client Library as another Service Account. You can download a key file from the Cloud Console. However, if you are using a VM instance, you are automatically using the Compute Engine default service account. You can check it using the console command: $ gcloud init.
To configure your credentials properly, please follow the documentation.
The Compute Engine default service account needs the access scope Storage > Full enabled. Access scopes are the mechanism that limits the access level to Cloud APIs. They can be set during machine creation or while the VM instance is stopped.
Please note that access scopes are defined explicitly for the service account that you select for the VM instance.
Cloud Storage object names have requirements; it is strongly recommended to avoid the hash symbol "#" in object names.
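For example, a small, hypothetical adjustment to the saving code from the question that drops the "#" from the object name (it reuses the bounding_box_img, threadname and i variables from the question's snippet, assumes the same gcsfuse mount, and still requires the Storage > Full scope described above):

from PIL import Image

# Same conversion as in the question, but the object name avoids "#",
# which gcsfuse/Cloud Storage handles more predictably.
new_img = Image.fromarray(bounding_box_img)
safe_name = "slicedhand/{}_sliced_image{}.jpeg".format(threadname, i)
new_img.save(safe_name)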

How to read HTTP response from Azure Python SDK

I'm trying to push a file (a Put Blob request) to the Azure CDN Blob storage using the Python SDK. It works with no problem; I just can't figure out how to read the header information in the response. According to the docs, it's supposed to send back a 201 status if it is successful.
http://msdn.microsoft.com/en-us/library/azure/dd179451.aspx
http://azure.microsoft.com/en-us/documentation/articles/storage-python-how-to-use-blob-storage/
from azure.storage import BlobService
blob_service = BlobService(account_name='accountnamehere', account_key='apikeyhere')
file_contents = open('path/to/image.jpg', 'rb').read()
blob_service.put_blob(CONTAINER, 'filename.jpg', file_contents, x_ms_blob_type='BlockBlob', x_ms_blob_content_type='image/jpeg')
Any help is greatly appreciated.
Thanks
You can't read the response code.
The source code for the SDK is available on GitHub, and in the current version the put_blob() function does not return anything.
Do you need to read it, though? If put_blob completes successfully, your code simply continues from the next statement. If it fails, the SDK will raise an exception which you can then catch.
You could verify your exception/error handling by using a wrong access key for example.
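A sketch of that pattern against the legacy SDK used in the question (the container name is a placeholder, and the exact exception type depends on the SDK version, so this catches broadly):

from azure.storage import BlobService

CONTAINER = 'mycontainer'  # placeholder container name
blob_service = BlobService(account_name='accountnamehere', account_key='apikeyhere')

try:
    with open('path/to/image.jpg', 'rb') as f:
        # put_blob() returns None; a successful call simply raises no exception.
        blob_service.put_blob(CONTAINER, 'filename.jpg', f.read(),
                              x_ms_blob_type='BlockBlob',
                              x_ms_blob_content_type='image/jpeg')
    print('Upload succeeded')
except Exception as err:
    # Older SDK releases raise their own error types (e.g. WindowsAzureError);
    # catching Exception keeps this sketch version-agnostic.
    print('Upload failed: {0}'.format(err))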

Upload a large blob from Appengine blobstore to Google Drive using Python using Drive SDK

The Python Drive API requires a "local file" to perform a resumable file upload to Google Drive. How can this be accomplished on Google App Engine, which only has blobs and no access to a local file system?
Under the old doclist API (now deprecated) you could upload files from the Google App Engine blobstore to Google Drive using the code below:
CHUNK_SIZE = 524288
uploader = gdata.client.ResumableUploader(
    client, blob_info.open(), blob_info.content_type, blob_info.size,
    chunk_size=CHUNK_SIZE, desired_class=gdata.docs.data.DocsEntry)
The key part is using blob_info.open() rather than providing a reference to a local file.
How can we accomplish the same using the new Drive API?
Note that the files are fairly big, so a resumable upload is required. I also know this can be accomplished in Java, but I am looking for a Python solution.
Many thanks,
Ian.
It looks like you are using the older GData client library and the Documents List API. If you use the new Drive SDK and the Google APIs Python client library, you can use the MediaIoBaseUpload class to create a media upload object from memory instead of from a file.
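A rough sketch of that approach, assuming `service` is an already-authorized Drive v2 client built with the Google APIs Python client and `blob_info` is the BlobInfo for the blob to upload:

from apiclient.http import MediaIoBaseUpload  # googleapiclient.http in newer releases

CHUNK_SIZE = 524288

# The BlobReader returned by blob_info.open() is file-like, so it can back a
# resumable media upload without ever touching a local file system.
media = MediaIoBaseUpload(blob_info.open(),
                          mimetype=blob_info.content_type,
                          chunksize=CHUNK_SIZE,
                          resumable=True)

body = {'title': blob_info.filename, 'mimeType': blob_info.content_type}
request = service.files().insert(body=body, media_body=media)

response = None
while response is None:
    status, response = request.next_chunk()
    if status:
        # status.progress() is a float in [0, 1]
        print('Uploaded %d%%' % int(status.progress() * 100))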
