How to display image stored in Google Cloud bucket - python

I can successfully access the Google Cloud bucket from my Python code running on my PC using the following code.
client = storage.Client()
bucket = client.get_bucket('bucket-name')
blob = bucket.get_blob('images/test.png')
How can I retrieve and display the image from the "blob" without writing it to a file on the hard drive?

You could, for example, generate a temporary signed URL:
from datetime import timedelta
from google.cloud import storage

client = storage.Client()  # Implicit environment set-up
bucket = client.bucket('my-bucket')
blob = bucket.blob('my-blob')
url_lifetime = timedelta(hours=1)  # URL valid for one hour
serving_url = blob.generate_signed_url(url_lifetime)
Otherwise, you can make the image public in your bucket and use the permanent link that you can find in the object details:
https://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME
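A minimal sketch of the public-link route (bucket and object names are the placeholders from the question; note that make_public() relies on object ACLs, so it won't work on buckets with uniform bucket-level access enabled):
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket('bucket-name')
blob = bucket.get_blob('images/test.png')

# Make this single object publicly readable and print its permanent URL
blob.make_public()
print(blob.public_url)  # https://storage.googleapis.com/bucket-name/images/test.png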

Download the image from GCS as bytes, wrap it in a BytesIO object to make the bytes file-like, then read it in as a PIL Image object.
from io import BytesIO
from PIL import Image
img = Image.open(BytesIO(blob.download_as_bytes()))
Then you can do whatever you want with img -- for example, to display it, use plt.imshow(img).
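For example, a short matplotlib snippet (this assumes matplotlib is installed in your environment):
import matplotlib.pyplot as plt

plt.imshow(img)
plt.axis('off')  # hide the axis ticks for a cleaner image view
plt.show()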

In Jupyter notebooks you can display the image directly with download_as_bytes:
from google.cloud import storage
from IPython.display import Image
client = storage.Client() # Implicit environment set up
# with explicit set up:
# client = storage.Client.from_service_account_json('key-file-location')
bucket = client.get_bucket('bucket-name')
blob = bucket.get_blob('images/test.png')
Image(blob.download_as_bytes())

Related

How to load fonts from GCS

I want to load fonts from Google Cloud Storage. I've tried two ways, but neither of them works. Any pointers? Any advice is appreciated.
First:
I followed the instruction load_font_from_gcs(uri) given in the answer here, but I received a NameError: name 'load_font_from_gcs' is not defined message. I installed the google-cloud-storage dependency and executed from google.cloud import storage.
Second:
I tried to execute the following code (reference #1), and ran into a blob has no attribute open() error, the same as the answer I got here, although the linked reference reports that it works.
reference #1
bucket = storage_client.bucket(bucket_name)
blob = bucket.get_blob(blob_name)
with blob.open("rb") as img:
    imgblob = Image.open(img)
    draw = ImageDraw.Draw(imgblob)
According to the provided links, your code must use BytesIO in order to work with the font file loaded from GCS.
load_font_from_gcs is a custom function written by the author of the question you are referencing; it is not part of the google-cloud-storage package.
Next, according to the official Google Cloud Storage documentation here, files from storage can be accessed this way (this example loads the font file into PIL.ImageFont.truetype):
# Import PIL
from PIL import Image, ImageFont, ImageDraw
# Import the Google Cloud client library
from google.cloud import storage
# Import BytesIO module
from io import BytesIO
# Instantiate a client
storage_client = storage.Client()
# The name of the bucket
bucket_name = "my-new-bucket"
# Required blob
blob_name = "somefont.otf"
# Creates the bucket & blob instance
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(blob_name)
# Download the given blob
blob_content = blob.download_as_string()
# Make ImageFont out of it (or whatever you want)
font = ImageFont.truetype(BytesIO(blob_content), 18)
So, your reference code can be changed accordingly:
bucket = storage_client.bucket(bucket_name)
blob_content = bucket.get_blob(blob_name).download_as_string()
img_bytes = BytesIO(blob_content)
imgblob = Image.open(img_bytes)
draw = ImageDraw.Draw(imgblob)
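With the image and the font both loaded from GCS this way, you can put them together; a small sketch continuing from the variables above (the text and coordinates are made up for illustration):
# Render some text onto the image with the font downloaded from GCS
draw.text((10, 10), "Hello from GCS", font=font, fill="white")
imgblob.save("annotated.png")  # or keep working with the image in memory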
You can read more about PIL here.
Also, don't forget to check the official Google Cloud Storage documentation.
(There are plenty of examples using Python code.)

Download a picture from a blob using python ( azure Blob storage)

I want to download an image from a blob that is in a container.
I searched and only found how to download a whole container, but I don't want to download the entire container, just a single image.
(container/blob/image.png)
This is the code that I found (it downloads the whole container):
import os
from azure.storage.blob import BlobServiceClient, BlobClient
from azure.storage.blob import ContentSettings, ContainerClient

# IMPORTANT: Replace connection string with your storage account connection string
# Usually starts with DefaultEndpointsProtocol=https;...
MY_CONNECTION_STRING = "CONNECTION_STRING"
# Replace with blob container
MY_BLOB_CONTAINER = "name"
# Replace with the local folder where you want files to be downloaded
LOCAL_BLOB_PATH = "Blobsss"
BLOBNAME = "test"

class AzureBlobFileDownloader:
    def __init__(self):
        print("Initializing AzureBlobFileDownloader")
        # Initialize the connection to Azure storage account
        self.blob_service_client = BlobServiceClient.from_connection_string(MY_CONNECTION_STRING)
        self.my_container = self.blob_service_client.get_container_client(MY_BLOB_CONTAINER)

    def save_blob(self, file_name, file_content):
        # Get full path to the file
        download_file_path = os.path.join(LOCAL_BLOB_PATH, file_name)
        # for nested blobs, create local path as well!
        os.makedirs(os.path.dirname(download_file_path), exist_ok=True)
        with open(download_file_path, "wb") as file:
            file.write(file_content)

    def download_all_blobs_in_container(self):
        my_blobs = self.my_container.list_blobs()
        for blob in my_blobs:
            print(blob.name)
            bytes = self.my_container.get_blob_client(blob).download_blob().readall()
            self.save_blob(blob.name, bytes)

# Initialize class and download files
azure_blob_file_downloader = AzureBlobFileDownloader()
azure_blob_file_downloader.download_all_blobs_in_container()
Could you please help me?
Thank you.
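Since only one object is needed, a minimal sketch of downloading just that blob with the v12 SDK (the connection string, container name, and blob path below are placeholders):
from azure.storage.blob import BlobServiceClient

MY_CONNECTION_STRING = "CONNECTION_STRING"
MY_BLOB_CONTAINER = "name"
BLOB_NAME = "blob/image.png"  # path of the blob inside the container
LOCAL_FILE = "image.png"

blob_service_client = BlobServiceClient.from_connection_string(MY_CONNECTION_STRING)
blob_client = blob_service_client.get_blob_client(container=MY_BLOB_CONTAINER, blob=BLOB_NAME)

# Download the single blob and write it to a local file
with open(LOCAL_FILE, "wb") as f:
    f.write(blob_client.download_blob().readall())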

Copy Azure Blob as BlockBlob from Remote URL

I'm using the azure-sdk-for-python BlobClient start_copy_from_url to copy a remote file to my local storage.
However, the file always ends up as an AppendBlob instead of BlockBlob. I can't see how I can force the destination BlockType to be BlockBlob.
connection_string = "connection string to my dest blob storage account"
container_name = "myContainerName"
dest_file_name = "myDestFile.csv"
remote_blob_url = "http://path/to/remote/blobfile.csv"
client = BlobServiceClient.from_connection_string(connection_string)
dest_blob = client.get_blob_client(container_name, dest_file_name)
dest_blob.start_copy_from_url(remote_blob_url)
You can't change a blob's type once it has been created. Please see the Copy Blob From URL REST API: there is no header for specifying the destination blob type.
You could refer to my code below to create a block blob from the append blob:
from azure.storage.blob import BlobPermissions
from datetime import datetime, timedelta
from azure.storage.blob import BlockBlobService
import requests
from io import BytesIO
account_name = "***"
account_key = "***"
container_name = "test"
blob_name = "test2.csv"
block_blob_service = BlockBlobService(account_name, account_key)
sas_token = block_blob_service.generate_blob_shared_access_signature(
    container_name, blob_name,
    permission=BlobPermissions.READ,
    expiry=datetime.utcnow() + timedelta(hours=1))
blob_url_with_sas = block_blob_service.make_blob_url(container_name, blob_name, sas_token=sas_token)
r = requests.get(blob_url_with_sas, stream=True)
block_blob_service.create_blob_from_stream("test", "jay.block", stream=BytesIO(r.content))
Here is what you want to do using the latest version (v12)
According to the documentation,
The source blob for a copy operation may be a block blob, an append blob,
or a page blob. If the destination blob already exists, it must be of the
same blob type as the source blob.
Right now, you cannot use start_copy_from_url to specify a blob type. However, you can use the synchronous copy APIs to do so in some cases.
For example, for block blob to page blob, create the destination page blob first and invoke update_range_from_url on the destination with each 4 MB chunk from the source.
Similarly, in your case, create an empty block blob first and then use the stage_block_from_url method.
from azure.storage.blob import ContainerClient
import os
conn_str = os.getenv("AZURE_STORAGE_CONNECTION_STRING")
dest_blob_name = "mynewblob"
source_url = "http://www.gutenberg.org/files/59466/59466-0.txt"
container_client = ContainerClient.from_connection_string(conn_str, "testcontainer")
blob_client = container_client.get_blob_client(dest_blob_name)
# upload the empty block blob
blob_client.upload_blob(b'')
# this will only stage your block
blob_client.stage_block_from_url(block_id=1, source_url=source_url)
# now it is committed
blob_client.commit_block_list(['1'])
# if you want to verify it's committed now
committed, uncommitted = blob_client.get_block_list('all')
assert len(committed) == 1
Let me know if this doesn't work.
EDIT:
You can leverage the source_offset and source_length params to upload blocks in chunks.
For example,
stage_block_from_url(block_id, source_url, source_offset=0, source_length=10)
will upload the first 10 bytes, i.e. bytes 0 through 9.
So, you can use a counter to keep incrementing the block_id and track your offset and length till you exhaust all your chunks.
EDIT2:
# Chunked version (assuming source_length, the total size of the source in bytes, is known)
chunk_size = 4 * 1024 * 1024  # 4 MB per staged block
block_ids = []
for i, offset in enumerate(range(0, source_length, chunk_size)):
    block_ids.append(str(i))
    # stage each chunk; do not commit inside the loop
    blob_client.stage_block_from_url(str(i), source_url, source_offset=offset,
                                     source_length=min(chunk_size, source_length - offset))
# outside the for loop, commit all staged blocks in order
blob_client.commit_block_list(block_ids)
As far as I know, there is no direct conversion between blob types. To do this you need to download the blob and re-upload it as a block blob.
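A minimal sketch of that download-and-re-upload approach with the v12 SDK (the source blob name is a placeholder; container and destination names are taken from the question):
from azure.storage.blob import BlobServiceClient

client = BlobServiceClient.from_connection_string(connection_string)
source_blob = client.get_blob_client("myContainerName", "sourceAppendBlob.csv")
dest_blob = client.get_blob_client("myContainerName", "myDestFile.csv")

# Read the whole append blob into memory, then write it back as a block blob
data = source_blob.download_blob().readall()
dest_blob.upload_blob(data, blob_type="BlockBlob", overwrite=True)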

How to read audio file from google cloud storage bucket and play with ipd in a datalab notebook

I want to play a sound file that I read from a Google Cloud Storage bucket in a Datalab notebook. How can I do this?
import numpy as np
import IPython.display as ipd
import librosa
import soundfile as sf
import io
from google.cloud import storage
BUCKET = 'some-bucket'
# Create a Cloud Storage client.
gcs = storage.Client()
# Get the bucket that the file will be uploaded to.
bucket = gcs.get_bucket(BUCKET)
# specify a filename
file_name = 'some_dir/some_audio.wav'
# read a blob
blob = bucket.blob(file_name)
file_as_string = blob.download_as_string()
# wrap the downloaded bytes in a file-like object and decode into audio samples (floats)
# plus the audio sample rate
data, sample_rate = sf.read(io.BytesIO(file_as_string))
left_channel = data[:, 0]  # for a stereo file, column zero is the left channel
# enable play button in datalab notebook
ipd.Audio(left_channel, rate=sample_rate)
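If the file happens to be mono, sf.read returns a 1-D array and there is no column to select; a small sketch handling both cases (IPython's Audio also accepts a (channels, samples) array for stereo playback):
if data.ndim == 1:
    # mono: play the samples directly
    ipd.Audio(data, rate=sample_rate)
else:
    # stereo: IPython expects shape (channels, samples), so transpose
    ipd.Audio(data.T, rate=sample_rate)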

Python: upload Pillow Image to Firebase storage bucket

I'm trying to figure out how to upload a Pillow Image instance to a Firebase storage bucket. Is this possible?
Here's some code:
from PIL import Image
image = Image.open(file)
# how to upload to a firebase storage bucket?
I know there's a gcloud-python library but does this support Image instances? Is converting the image to a string my only option?
The gcloud-python library is the correct library to use. It supports uploads from strings, file pointers, and local files on the file system (see the docs).
from PIL import Image
from google.cloud import storage
client = storage.Client()
bucket = client.get_bucket('bucket-id-here')
blob = bucket.blob('image.png')
# use pillow to open and transform the file
image = Image.open(file)
# perform transforms
image.save(outfile)
of = open(outfile, 'rb')
blob.upload_from_file(of)
# or... (no need to use pillow if you're not transforming)
blob.upload_from_filename(filename=outfile)
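If you'd rather not write the transformed image to disk at all, you can also upload straight from memory; a sketch using a BytesIO buffer (continuing from the image and blob above):
from io import BytesIO

buffer = BytesIO()
image.save(buffer, format="PNG")  # serialize the Pillow image into the in-memory buffer
blob.upload_from_string(buffer.getvalue(), content_type="image/png")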
This is how to upload the Pillow image directly to Firebase Storage:
import io
from PIL import Image
from firebase_admin import credentials, initialize_app, storage
# Init firebase with your credentials
cred = credentials.Certificate("YOUR DOWNLOADED CREDENTIALS FILE (JSON)")
initialize_app(cred, {'storageBucket': 'YOUR FIREBASE STORAGE PATH (without gs://)'})
bucket = storage.bucket()
blob = bucket.blob('image.jpg')
bs = io.BytesIO()
im = Image.open("test_image.jpg")
im.save(bs, "jpeg")
blob.upload_from_string(bs.getvalue(), content_type="image/jpeg")
