Including Google Cloud SDK in virtualenv using Python Flexible Environment - python

I want to use the google.appengine.api images package, but I do not know how to install the tools for virtualenv. The package works fine when I use dev_appserver.py in my normal environment, but when I use the flexible environment with Flask it cannot find the package. Is there a way to add the images library to my virtualenv?
I tried using Pillow to resize the image before uploading it to the server, but when I did, the image would arrive in Cloud Storage at 0 B.
if file and allowed_file(file.filename):
    filename = '%s_%s.jpg' % (item.id, len(item.photos))
    # Resize the file using Pillow
    image = Image.open(file)
    image.thumbnail((300, 300))
    resized_image = io.BytesIO()
    image.save(resized_image, format='JPEG')
    # If I call image.show() here, the image is
    # properly shown resized
    gcs = storage.Client()
    bucket = gcs.get_bucket(CLOUD_STORAGE_BUCKET)
    blob = bucket.blob(filename)
    blob.upload_from_file(resized_image,
                          content_type=file.content_type)
    # I would then view the image in the bucket and it shows up as 0 bytes
    # and blank.
    # If I just use the regular file, it uploads fine.

You may be out of luck: the Images service is not available outside the standard environment.
From the guide Migrating Services from the Standard Environment to the Flexible Environment:
The Images service is not available outside of the standard
environment. However, you can easily serve images directly from your
application or directly from Cloud Storage.
If you need to do image processing, you can install and use any image
processing library such as Pillow.
The Images service also provided functionality to avoid dynamic
requests to your application by handling image resizing using a
serving URL. If you want similar functionality, you can generate the
re-sized images ahead of time and upload them to Cloud Storage for
serving. Alternatively, you could use a third-party content delivery
network (CDN) service that offers image resizing.
For more resources, see the following guides:
Using Cloud Storage
Serving Static Files
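As for the zero-byte uploads in the question's snippet: that is a separate issue from the Images service. Image.save() leaves the BytesIO stream position at the end of the written data, so the subsequent upload reads nothing. A minimal sketch of the Pillow-only resize step with the needed rewind (the function name here is illustrative):

```python
import io
from PIL import Image

def resize_to_jpeg_buffer(file_obj, size=(300, 300)):
    """Resize an image and return an in-memory JPEG ready for upload."""
    image = Image.open(file_obj)
    image.thumbnail(size)
    buffer = io.BytesIO()
    image.save(buffer, format='JPEG')
    # image.save() leaves the stream position at the end of the data;
    # without this rewind, blob.upload_from_file(buffer) reads 0 bytes.
    buffer.seek(0)
    return buffer
```

The returned buffer can then be passed to blob.upload_from_file(...) exactly as in the question's code.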

Related

Load folder of images (dataset) stored in Google Cloud Storage using python

I am trying to load a folder of images from a Google Cloud Storage bucket into a Python script (version 3.9) in different ways, but none of them worked. My folder contains approximately 930,000 images at the path: bucket-name/main-folder/image-folder.
I tried this solution, and it takes a long time without any result.
I defined the Google credentials and service account, but still no response.
Thank you in advance.
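With ~930,000 objects, one common pitfall is materializing the whole listing (or downloading everything) up front. A sketch of iterating lazily and processing in batches instead; the batching helper is plain Python, while the Cloud Storage calls are shown as comments because they need credentials, and the bucket/prefix names are placeholders:

```python
from itertools import islice

def batched(iterable, n):
    """Yield successive lists of n items from any (possibly huge) iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, n))
        if not batch:
            return
        yield batch

# Hypothetical usage with the google-cloud-storage client:
# from google.cloud import storage
# client = storage.Client()
# blobs = client.list_blobs('bucket-name', prefix='main-folder/image-folder/')
# for batch in batched(blobs, 1000):
#     for blob in batch:
#         data = blob.download_as_bytes()  # process one image at a time
```

list_blobs returns a lazy iterator that pages through results, so this never holds all 930,000 object names in memory at once.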

How to download images on Colab / AWS ML instance for ML purposes when the images are already hosted on AWS links

So I have my images links like:
https://my_website_name.s3.ap-south-1.amazonaws.com/XYZ/image_id/crop_image.png
and I have almost 10M images which I want to use for deep learning purposes. I already have a script that downloads the images and saves them in the desired directories using requests and PIL.
The most naïve idea, which I have been using my whole life, is to first download all the images to my local machine, zip them, and upload the archive to Google Drive, where I can just use gdown to download it anywhere based on my network speed, or copy it to Colab using the terminal.
But that data was never so big, always under 200K images. Now the data is huge, so downloading and re-uploading the images would take days, and on top of that, 10M images would make Google Drive run out of space. So I am thinking about using AWS ML (SageMaker) or something else from AWS. Is there a better approach to this? How can I import the data directly to my SSD-backed virtual machine?
You can use the AWS python library boto3 to connect to the S3 bucket from Colab: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html
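Since the links are virtual-hosted-style S3 URLs, the bucket and key boto3 needs can be derived from the URL itself; a sketch, where the helper name and the URL shape are assumptions based on the example link in the question:

```python
from urllib.parse import urlparse

def s3_parts(url):
    """Split a virtual-hosted-style S3 URL into (bucket, key)."""
    parsed = urlparse(url)
    # 'my_website_name.s3.ap-south-1.amazonaws.com' -> 'my_website_name'
    bucket = parsed.netloc.split('.s3.')[0]
    return bucket, parsed.path.lstrip('/')

# Hypothetical usage with boto3 (needs AWS credentials configured):
# import boto3
# s3 = boto3.client('s3')
# bucket, key = s3_parts(url)
# s3.download_file(bucket, key, '/tmp/crop_image.png')
```

Fetching straight from S3 into the instance avoids the local-machine and Google Drive round trips entirely.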

Upload image file to cache on Pyramid

I have a Python web project based on Pyramid. I am unsure which tools to use to enable image uploading. I used pyramid_storage (https://github.com/danjac/pyramid_storage) to handle image uploading before, but I haven't figured out how to expire the uploaded files.

Importing images Azure Machine Learning Studio

Is it possible to import images from your Azure storage account from within a Python script module, as opposed to using the Import Images module that Azure ML Studio provides? Ideally I would like to use cv2.imread(). I only want to read grayscale data, but the Import Images module reads RGB.
Can I use the BlockBlobService library as if I were calling it from an external Python script?
Yes, you should be able to do that using Python. At the very least, straight REST calls should work.
Based on my understanding, I think you want to get the grayscale data of an image that comes from Azure Blob Storage, via the cv2.imread method of python-opencv2.
I tried to write a Python script using the azure-storage==0.20.3 package to do it. Here is my sample code:
from azure.storage.blob import BlobService
import numpy
import cv2
service = BlobService(account_name='<your storage account name>', account_key='<your storage account key>')
blob = service.get_blob_to_bytes('mycontainer', 'test.jpg')
print(type(blob))
np_array = numpy.fromstring(blob, numpy.uint8)
print(np_array)
img = cv2.imdecode(np_array, cv2.CV_LOAD_IMAGE_COLOR)
If using the latest azure-storage package, make sure to use the code below instead.
from azure.storage.blob import BlockBlobService
service = BlockBlobService(account_name='<your storage account name>', account_key='<your storage account key>')
The code above works fine in a local environment, but it doesn't work as an Execute Python Script module in an Experiment on Azure ML Studio, because the required Python packages azure-storage and cv2 are missing. I tried to follow the document Adding Python Script as a Custom Resource to add these packages, but failed, because I realized the python-opencv2 package depends on the native C library opencv2.
So in my experience, the simple workaround is to compute the grayscale data from the RGB data in the dataframe produced by the Import Images module of the OpenCV Library Modules.
Hope it helps.
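Once the Import Images module has produced RGB arrays, the grayscale step itself needs neither azure-storage nor cv2; a minimal numpy sketch using the ITU-R BT.601 luma weights (the same weighting cv2.cvtColor applies for RGB-to-gray), with the function name being illustrative:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB uint8 array to an H x W grayscale array."""
    weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights
    # Weighted sum over the channel axis, rounded back to uint8.
    return np.rint(rgb[..., :3] @ weights).astype(np.uint8)
```

This runs inside an Execute Python Script module with no native dependencies beyond numpy.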

Server-side conversion of SVG to PNG or JPG images with Python on Google App Engine

How can I convert an SVG to a PNG or JPG image with Python on Google App Engine? Any ideas?
Google App Engine has PIL support. However, PIL doesn't support SVG.
When working with Google App Engine you will need a pure Python library, because you can't install anything that's compiled (e.g. Cairo). While there are pure Python libraries for creating SVG files (e.g. pySVG), I don't think there are any for rasterizing SVG images.
If you manage to find (or write) a pure Python library to do this, it is likely to be prohibitive to run it on GAE due to the amount of computation required and the request time restrictions.
I would consider hosting the image conversion service elsewhere and having GAE fetch the PNGs from there.
