How to fix the Image reference? - python

I'm using the Google Cloud Vision API with Python 3, but I'm getting the error
"Cannot find reference 'Image' in types.py" when I use:
image = vision.types.Image(content=content)
I made the correct imports, and the documentation tells me to use this function to get an image. Can anyone help me?
Code:
import io
import os
from google.cloud import vision
from google.cloud.vision import types

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:/Keys/key.json"

client = vision.ImageAnnotatorClient()  # note: the client must be instantiated with ()
path = os.path.join(os.path.dirname(__file__), "image.jpg")
with io.open(path, "rb") as image_file:
    content = image_file.read()
image = types.Image(content=content)
Error Message:
Google Cloud Vision API version: 0.36.0

You should be importing like this:
from google.cloud import vision
from google.cloud.vision import types
and then you will be able to do this:
image = types.Image(content=content)
There is a full tutorial here. It works perfectly fine on my machine with Python 3.7.
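Note: in newer releases of google-cloud-vision (2.0 and later) the types module was removed, and Image lives directly on the vision package. A minimal sketch of the newer style (the file path and credential setup are assumptions):

```python
def detect_labels(path):
    """Label an image with the Vision API using the >= 2.0 style imports."""
    # Third-party import kept inside the function so the sketch stays
    # self-contained; google-cloud-vision must be installed to call it.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as image_file:
        content = image_file.read()
    image = vision.Image(content=content)  # no vision.types in >= 2.0
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]
```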

Related

How to load fonts from GCS

I want to load "fonts" from Google Cloud Storage. I've tried two ways, but neither of them works. Any pointers? Any advice is appreciated.
First:
I followed the instruction load_font_from_gcs(uri) given in the answer here, but I received a NameError: name 'load_font_from_gcs' is not defined message. I installed the Google Storage dependency and executed from google.cloud import storage.
Second:
I tried to execute the following code (reference #1), but I ran into a blob has no attribute open() error, the same answer I got here, although the reference in that link gives a positive answer.
reference #1
bucket = storage_client.bucket(bucket_name)
blob = bucket.get_blob(blob_name)
with blob.open("r") as img:
    imgblob = Image.open(img)
    draw = ImageDraw.Draw(imgblob)
According to the provided links, your code must use BytesIO in order to work with the font file loaded from GCS.
load_font_from_gcs is a custom function written by the author of the question you are referencing; it is not part of the google-cloud-storage package.
Next, according to the official Google Cloud Storage documentation here, files from storage can be accessed this way (this example loads the font file into PIL.ImageFont.truetype):
# Import PIL
from PIL import Image, ImageFont, ImageDraw
# Import the Google Cloud client library
from google.cloud import storage
# Import the BytesIO class
from io import BytesIO

# Instantiate a client
storage_client = storage.Client()
# The name of the bucket
bucket_name = "my-new-bucket"
# The required blob
blob_name = "somefont.otf"
# Create the bucket & blob instances
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(blob_name)
# Download the given blob
blob_content = blob.download_as_string()
# Make an ImageFont out of it (or whatever you want)
font = ImageFont.truetype(BytesIO(blob_content), 18)
So your reference code can be changed accordingly:
bucket = storage_client.bucket(bucket_name)
blob_content = bucket.get_blob(blob_name).download_as_string()
data = BytesIO(blob_content)
imgblob = Image.open(data)
draw = ImageDraw.Draw(imgblob)
You can read more about PIL here.
Also, don't forget to check the official Google Cloud Storage documentation.
(There are plenty of examples using Python code.)
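For completeness, the missing helper from the first attempt could be defined along these lines (a sketch only; the function name comes from the linked answer, and the gs:// URI parsing and the size parameter are my assumptions):

```python
def load_font_from_gcs(uri, size=18):
    """Hypothetical helper: load a TrueType/OpenType font from a gs:// URI."""
    # Third-party imports kept inside the function; google-cloud-storage
    # and Pillow are assumed to be installed.
    from io import BytesIO
    from google.cloud import storage
    from PIL import ImageFont

    # Split "gs://bucket/path/to/font.otf" into bucket and blob names
    # (assumed URI format)
    bucket_name, _, blob_name = uri.removeprefix("gs://").partition("/")
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)
    # Wrap the downloaded bytes in BytesIO so truetype gets a file-like object
    return ImageFont.truetype(BytesIO(blob.download_as_string()), size)
```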

Upload a modified XML file to Google Cloud Storage after editing it with ElementTree (Python)

I've modified a piece of code for merging two or more XML files into one. I got it working locally without using or storing files on Google Cloud Storage.
I'd like to use it via Cloud Functions, which seems to work mostly fine, apart from uploading the final XML file to Google Cloud Storage.
import os
import wget
import logging
from io import BytesIO
from google.cloud import storage
from xml.etree import ElementTree as ET

def merge(event, context):
    client = storage.Client()
    bucket = client.get_bucket('mybucket')
    test1 = bucket.blob("xml-file1.xml")
    inputxml1 = test1.download_as_string()
    root1 = ET.fromstring(inputxml1)
    test2 = bucket.blob("xml-file2.xml")
    inputxml2 = test2.download_as_string()
    root2 = ET.fromstring(inputxml2)
    copy_files = [e for e in root1.findall('./SHOPITEM')]
    src_files = set(e.find('./CODE').text for e in copy_files)
    copy_files.extend(e for e in root2.findall('./SHOPITEM') if e.find('./CODE').text not in src_files)
    files = ET.Element('SHOP')
    files.extend(copy_files)
    blob = bucket.blob("test.xml")
    blob.upload_from_string(files)
I've tried the functions .write and .tostring, but unsuccessfully.
Sorry for the incomplete question. I've already found a solution, and I can't recall the error message I got.
Here is my solution:
blob.upload_from_string(ET.tostring(files, encoding='UTF-8', xml_declaration=True, method='xml').decode('UTF-8'), content_type='application/xml')
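The underlying issue is that upload_from_string expects a string or bytes, while files is an Element object. ET.tostring returns bytes when an encoding such as 'UTF-8' is given, and upload_from_string accepts bytes as well, so the .decode('UTF-8') step is optional. A stdlib-only illustration (the SHOP/SHOPITEM element names mirror the question):

```python
from xml.etree import ElementTree as ET

# Build a tiny element tree like the merged SHOP element in the question
files = ET.Element("SHOP")
item = ET.SubElement(files, "SHOPITEM")
ET.SubElement(item, "CODE").text = "A1"

# With an encoding given, tostring returns bytes (the XML declaration is
# included when xml_declaration=True, which requires Python 3.8+)
payload = ET.tostring(files, encoding="UTF-8", xml_declaration=True)
print(type(payload).__name__)  # → bytes
print(b"<SHOP>" in payload)    # → True
```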

Read and use image requested from url in Python

I'm currently trying to work with Google's Python vision library, but I'm stuck on how to read images from the web. So far I've got the code below. My issue is that the content always seems to be empty; when I check it in PyCharm, it says it only contains b''.
How can I open this image so I can use it with Google's library?
from google.cloud import vision
from google.cloud.vision import types
from urllib import request
import io

client = vision.ImageAnnotatorClient.from_service_account_json('cred.json')
url = "https://cdn.getyourguide.com/img/location_img-59-1969619245-148.jpg"
img = request.urlopen(url)
with io.open('location_img-59-1969619245-148.jpg', 'rb') as fhand:
    content = fhand.read()
image = types.Image(content=content)
response = client.label_detection(image=image)
labels = response.label_annotations
print('Labels:')
for label in labels:
    print(label.description)
Have you tried getting the image via the requests library? Your current code opens a local file that was never written to, which is why content is b''. With requests you can download the file first:
import requests

r = requests.get("https://cdn.getyourguide.com/img/location_img-59-1969619245-148.jpg")
with open("location_img.jpg", "wb") as o:
    o.write(r.content)
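Alternatively, the download can stay entirely in memory and the response bytes can be passed to types.Image without a temporary file. A stdlib-only sketch (the URL is assumed to point at an image):

```python
from urllib import request

def fetch_image_bytes(url):
    """Download a URL and return the raw response body as bytes."""
    # urlopen returns a file-like response; read() yields the body bytes
    with request.urlopen(url) as resp:
        return resp.read()

# The returned bytes could then be used directly, e.g.:
# content = fetch_image_bytes(url)
# image = types.Image(content=content)
```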

How to display image stored in Google Cloud bucket

I can successfully access the google cloud bucket from my python code running on my PC using the following code.
client = storage.Client()
bucket = client.get_bucket('bucket-name')
blob = bucket.get_blob('images/test.png')
Now I don't know how to retrieve and display the image from the blob without writing it to a file on the hard drive.
You could, for example, generate a temporary signed URL:
from google.cloud import storage

client = storage.Client()  # Implicit environment set-up
bucket = client.bucket('my-bucket')
blob = bucket.blob('my-blob')
url_lifetime = 3600  # Seconds in an hour
serving_url = blob.generate_signed_url(url_lifetime)
Otherwise, you can set the image as public in your bucket and use the permanent link, which you can find in your object details:
https://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME
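That permanent link follows a fixed pattern, so it can be built from the bucket and object names (a stdlib-only sketch; percent-encoding the object name is my addition, for names with spaces or special characters):

```python
from urllib.parse import quote

def public_gcs_url(bucket_name, object_name):
    """Build the public https URL for an object in a publicly readable bucket."""
    # quote() percent-encodes special characters; "/" is kept as-is
    # so folder-style object names stay readable
    return f"https://storage.googleapis.com/{bucket_name}/{quote(object_name)}"

print(public_gcs_url("my-bucket", "images/test.png"))
# → https://storage.googleapis.com/my-bucket/images/test.png
```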
Download the image from GCS as bytes, wrap it in BytesIO object to make the bytes file-like, then read in as a PIL Image object.
from io import BytesIO
from PIL import Image
img = Image.open(BytesIO(blob.download_as_bytes()))
Then you can do whatever you want with img -- for example, to display it, use plt.imshow(img).
In Jupyter notebooks you can display the image directly with download_as_bytes:
from google.cloud import storage
from IPython.display import Image
client = storage.Client() # Implicit environment set up
# with explicit set up:
# client = storage.Client.from_service_account_json('key-file-location')
bucket = client.get_bucket('bucket-name')
blob = bucket.get_blob('images/test.png')
Image(blob.download_as_bytes())

How to upload a bytes image on Google Cloud Storage from a Python script

I want to upload an image on Google Cloud Storage from a python script. This is my code:
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient import discovery

scopes = ['https://www.googleapis.com/auth/devstorage.full_control']
credentials = ServiceAccountCredentials.from_json_keyfile_name('serviceAccount.json', scopes)
service = discovery.build('storage', 'v1', credentials=credentials)
body = {'name': 'my_image.jpg'}
req = service.objects().insert(
    bucket='my_bucket', body=body,
    media_body=googleapiclient.http.MediaIoBaseUpload(
        gcs_image, 'application/octet-stream'))
resp = req.execute()
If gcs_image = open('img.jpg', 'r'), the code works and correctly saves my image to Cloud Storage. How can I directly upload a bytes image? (For example, from an OpenCV/NumPy array: gcs_image = cv2.imread('img.jpg').)
In my case, I wanted to upload a PDF document to Cloud Storage from bytes.
When I tried the below, it created a text file with my byte string in it.
blob.upload_from_string(bytedata)
In order to create an actual PDF file using the byte string I had to do:
blob.upload_from_string(bytedata, content_type='application/pdf')
My byte data was b64encoded, so I also had to b64decode it first.
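The b64decode step mentioned above is a one-liner from the standard library; for example (the payload here is dummy data, not a real PDF):

```python
import base64

# Pretend this base64 string arrived from an API or message payload
encoded = base64.b64encode(b"%PDF-1.4 dummy content")

# Decode back to raw bytes before uploading with upload_from_string
bytedata = base64.b64decode(encoded)
print(bytedata)  # → b'%PDF-1.4 dummy content'
# blob.upload_from_string(bytedata, content_type='application/pdf')
```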
If you want to upload your image from a file:
import os
from google.cloud import storage

def upload_file_to_gcs(bucket_name, local_path, local_file_name, target_key):
    try:
        client = storage.Client()
        bucket = client.bucket(bucket_name)
        full_file_path = os.path.join(local_path, local_file_name)
        bucket.blob(target_key).upload_from_filename(full_file_path)
        return bucket.blob(target_key).public_url
    except Exception as e:
        print(e)
        return None
but if you want to upload bytes directly:
import os
from google.cloud import storage

def upload_data_to_gcs(bucket_name, data, target_key):
    try:
        client = storage.Client()
        bucket = client.bucket(bucket_name)
        bucket.blob(target_key).upload_from_string(data)
        return bucket.blob(target_key).public_url
    except Exception as e:
        print(e)
        return None
Note that target_key is the prefix plus the name of the uploaded file.
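To get from the asker's OpenCV array to the bytes that upload_data_to_gcs expects, the image has to be encoded in memory first. A sketch using cv2.imencode (OpenCV is assumed to be installed; the .jpg format choice is arbitrary):

```python
def numpy_image_to_jpeg_bytes(img):
    """Encode an OpenCV/NumPy image array as JPEG bytes, entirely in memory."""
    # Third-party import kept inside the function so the sketch is self-contained
    import cv2

    ok, buf = cv2.imencode(".jpg", img)  # buf is a NumPy array of encoded bytes
    if not ok:
        raise ValueError("JPEG encoding failed")
    return buf.tobytes()

# Usage (hypothetical):
# data = numpy_image_to_jpeg_bytes(cv2.imread("img.jpg"))
# upload_data_to_gcs("my_bucket", data, "my_image.jpg")
```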
MediaIoBaseUpload expects an io.Base-like object and raises the following error upon receiving an ndarray object:
'numpy.ndarray' object has no attribute 'seek'
To solve it, I am using TemporaryFile and numpy.ndarray.tofile():
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient import discovery
import googleapiclient
import numpy as np
import cv2
from tempfile import TemporaryFile

scopes = ['https://www.googleapis.com/auth/devstorage.full_control']
credentials = ServiceAccountCredentials.from_json_keyfile_name('serviceAccount.json', scopes)
service = discovery.build('storage', 'v1', credentials=credentials)
body = {'name': 'my_image.jpg'}
with TemporaryFile() as gcs_image:
    cv2.imread('img.jpg').tofile(gcs_image)
    req = service.objects().insert(
        bucket='my_bucket', body=body,
        media_body=googleapiclient.http.MediaIoBaseUpload(
            gcs_image, 'application/octet-stream'))
    resp = req.execute()
Be aware that googleapiclient is non-idiomatic and maintenance-only (it is not actively developed anymore). I would recommend using the idiomatic google-cloud-storage library instead.
Here is how to directly upload a PIL Image from memory:
from google.cloud import storage
import io
from PIL import Image

# Define variables
bucket_name = XXXXX
destination_blob_filename = XXXXX

# Configure bucket and blob
client = storage.Client()
bucket = client.bucket(bucket_name)
blob = bucket.blob(destination_blob_filename)

# Save the image into an in-memory buffer and upload its bytes
im = Image.open("test.jpg")
bs = io.BytesIO()
im.save(bs, "jpeg")
blob.upload_from_string(bs.getvalue(), content_type="image/jpeg")
In addition to that, here is how to download blobfiles directly to memory as PIL Images:
blob = bucket.blob(destination_blob_filename)
downloaded_im_data = blob.download_as_bytes()
downloaded_im = Image.open(io.BytesIO(downloaded_im_data))
