Python custom vision predictor fails

I'm using Python to retrieve a Blob image from Azure storage and then send it to Custom Vision for a prediction.
This is the code:
import io
from azure.storage.blob import BlockBlobService
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

block_blob_service = BlockBlobService(
    account_name=account_name,
    account_key=account_key
)

image_data = io.BytesIO()
block_blob_service.get_blob_to_stream(
    container_name,
    blob_name,
    image_data,
    max_connections=2
)

predictor = CustomVisionPredictionClient(
    cv_prediction_key,
    endpoint=cv_endpoint
)

# This call breaks with the below error message
results = predictor.predict_image(
    cv_project_id,
    image_data.getvalue(),
    iteration_id=cv_iteration_id
)
However, executing the predict_image function results in the following error:
System.Private.CoreLib: Exception while executing function: Functions.ReloadPostgres. System.Private.CoreLib: Result: Failure
Exception: HttpOperationError: Operation returned an invalid status code 'Resource Not Found'
Stack:
  File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 288, in _handle__invocation_request
    self.__run_sync_func, invocation_id, fi.func, args)
  File "~/.pyenv/versions/3.6.8/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 347, in __run_sync_func
    return func(**params)
  File "~/py_func_app/ReloadPostgres/__init__.py", line 14, in main
    data_handler.fetch_prediction_data()
  File "~/py_func_app/Shared_Code/data_handler.py", line 127, in fetch_prediction_data
    cv_handler.predict_image(image_data.getvalue(), cv_model)
  File "~/py_func_app/Shared_Code/custom_vision.py", line 30, in predict_image
    raise e
  File "~/py_func_app/Shared_Code/custom_vision.py", line 26, in predict_image
    iteration_id=cv_model.cv_iteration_id
  File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/cognitiveservices/vision/customvision/prediction/custom_vision_prediction_client.py", line 215, in predict_image
    raise HttpOperationError(self._deserialize, response)

Below is a similar example of a Custom Vision prediction using an image URL; you can change it to use an image file:
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 19 11:04:54 2019
@author: moverm
"""
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

predictor = CustomVisionPredictionClient(
    "prediction-key",
    endpoint="https://southcentralus.api.cognitive.microsoft.com"
)

test_img_url = "https://pointsprizes-blog.s3-accelerate.amazonaws.com/316.jpg"
results = predictor.predict_image_url("project-Id", "Iteration-Id", url=test_img_url)

# Display the results.
for prediction in results.predictions:
    print("\t" + prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100))
Basically, the issue is related to the endpoint. Use https://southcentralus.api.cognitive.microsoft.com as the endpoint.
It should work, and you should be able to see the prediction probabilities.
Hope it helps.

I tried to reproduce your issue and got a similar one, caused by using the incorrect endpoint from the Azure portal when I created a Cognitive Service in the Japan East region.
The portal shows the endpoint as https://japaneast.api.cognitive.microsoft.com/customvision/training/v1.0 for version 1, but the azure-cognitiveservices-vision-customvision PyPI page points out that the correct endpoint should be https://{AzureRegion}.api.cognitive.microsoft.com.
So I hit a similar issue to yours when using the incorrect endpoint. The code I used is the same as yours; the only difference is the running environment - yours is on Azure Functions, mine is a console script.
Meanwhile, according to the source code of custom_vision_prediction_client.py in the Azure Cognitive Services SDK for Custom Vision, the line base_url = '{Endpoint}/customvision/v2.0/Prediction' concatenates the endpoint you pass with /customvision/v2.0/Prediction to generate the real endpoint for calling the prediction API.
Therefore, as @MohitVerma-MSFT said, use https://<your cognitive service region>.api.cognitive.microsoft.com for the current version of the Python package.
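For example, a minimal sketch of constructing the client with just the regional base URL (the region and variable names follow the question's code and are illustrative):

predictor = CustomVisionPredictionClient(
    cv_prediction_key,
    endpoint="https://japaneast.api.cognitive.microsoft.com"
)
# The SDK appends /customvision/v2.0/Prediction to this base URL itself,
# so do not include any /customvision/... path in the endpoint you pass.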
As an additional note, there is an announcement of an important update for customvision.ai that you need to know about; it may affect your current code soon.

Related

Can't deploy python script to google cloud functions due to issue with flairnlp import

I am trying to deploy a Google Cloud Function that performs sentiment analysis on tweets using a flair nlp model. The code deploys perfectly fine without the line 'import flair' or alternatives like 'from flair import x,y,z'. As soon as I include the import statement for flair the function fails to deploy. Below is the error I get when deploying with the import statement (error is copied from Firebase logs). This is my first time posting on StackOverflow so please pardon me if the post looks ugly.
{"#type":"type.googleapis.com/google.cloud.audit.AuditLog","status":{"code":3,"message":"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Code in file main.py can't be loaded.\nDetailed stack trace:\nTraceback (most recent call last):\n File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py\", line 359, in check_or_load_user_function\n _function_handler.load_user_function()\n File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py\", line 236, in load_user_function\n spec.loader.exec_module(main_module)\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n File \"/user_code/main.py\", line 5, in <module>\n from flair import models, data\n File \"/env/local/lib/python3.7/site-packages/flair/__init__.py\", line 20, in <module>\n from . import models\n File \"/env/local/lib/python3.7/site-packages/flair/models/__init__.py\", line 1, in <module>\n from .sequence_tagger_model import SequenceTagger, MultiTagger\n File \"/env/local/lib/python3.7/site-packages/flair/models/sequence_tagger_model.py\", line 21, in <module>\n from flair.embeddings import TokenEmbeddings, StackedEmbeddings, Embeddings\n File \"/env/local/lib/python3.7/site-packages/flair/embeddings/__init__.py\", line 6, in <module>\n from .token import TokenEmbeddings\n File \"/env/local/lib/python3.7/site-packages/flair/embeddings/token.py\", line 10, in <module>\n from transformers import AutoTokenizer, AutoConfig, AutoModel, CONFIG_MAPPING, PreTrainedTokenizer\nImportError: cannot import name 'AutoModel' from 'transformers' (unknown location)\n. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."},"authenticationInfo":
And here is the script I am trying to deploy, as well as the requirements.txt file
main.py
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
from datetime import datetime, timedelta
import flair
# or from flair import models, data

class FirestoreHandler():
    cred = credentials.Certificate("serviceAccountKey.json")
    firebase_admin.initialize_app(cred)
    db = firestore.client()

    def analysis_on_create(self):
        docs = self.db.collection('tweets').order_by(u'time', direction=firestore.Query.DESCENDING).limit(1).get()
        data = docs[0].to_dict()
        most_recent_tweet = data['full-text']
        sentiment_model = flair.models.TextClassifier.load('en-sentiment')
        sentence = flair.data.Sentence(str(most_recent_tweet))
        sentiment_model.predict(sentence)
        result = sentence.labels[0]
        if result.value == "POSITIVE":
            val = 1 * result.score
        else:
            val = -1 * result.score
        self.db.collection('sentiment').add({'sentiment': val, 'timestamp': datetime.now() + timedelta(hours=3)})

    def add_test(self):
        self.db.collection('test3').add({"status": "success", 'timestamp': datetime.now() + timedelta(hours=3)})

def hello_firestore(event, context):
    """Triggered by a change to a Firestore document.
    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    resource_string = context.resource
    # print out the resource string that triggered the function
    print(f"Function triggered by change to: {resource_string}.")
    # now print out the entire event object
    print(str(event))
    fire = FirestoreHandler()
    fire.add_test()
    fire.analysis_on_create()
requirements.txt
# Function dependencies, for example:
# package>=version
firebase-admin==5.0.1
https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl
flair
I included the url to pytorch download because flair is built on pytorch, and the function would not deploy without the url (even when I didn't import flair in main.py). I have also tried specifying different versions for flair to no avail.
Any intuition as to what may be causing this issue would be greatly appreciated! I am new to the Google Cloud ecosystem, this being my first project. If there is any additional information I can provide please let me know.
Edit: I am deploying from the website (not using CLI)
I am not sure that the provided requirements.txt is OK for GCP Cloud Functions deployment; in particular, I'm not sure an explicit https URL is going to be handled correctly.
The Specifying dependencies in Python documentation page describes how dependencies should be stated: using the pip package manager's requirements.txt file, or packaging local dependencies alongside your function.
Can you simply specify flair with the necessary version in the requirements.txt file? Does that work?
In addition, the error you provided highlights that the transformers package is required. Could it be that some specific version of it is required?
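For example, a requirements.txt along these lines might let pip resolve a consistent dependency set instead of pinning a wheel URL (the version pins below are illustrative assumptions, not tested):

# Let pip resolve flair together with its torch/transformers dependencies
# (versions are illustrative - adjust to a combination that resolves cleanly)
firebase-admin==5.0.1
torch==1.7.1
transformers==4.5.1
flair==0.8.1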
====
As a side comment - I don't know your context and requirements, but I am not sure that, in order to work with Firestore from inside a cloud function, all of this
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
is required. It might be better to avoid using serviceAccountKey.json at all, and simply assign the relevant IAM roles to the service account which is used for the given cloud function execution.
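A minimal sketch of that approach, assuming the function's runtime service account has already been granted the needed Firestore IAM roles:

import firebase_admin
from firebase_admin import firestore

# No key file: without arguments, initialize_app() falls back to the
# application default credentials of the runtime service account.
firebase_admin.initialize_app()
db = firestore.client()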

How to describe the configuration of a Kafka topic?

I want to describe the configuration of one topic. I developed a script using the confluent-kafka-python library (version 1.5.0), and my version of Python is 2.7.
My end goal is to change the retention time of my topic (retention.ms), but to do that I need to extract the full configuration of my topic, change just what I want, and leave the other settings as they are currently defined.
My script:
def describe_topic(admin_client, topic):
    resources = [ConfigResource(confluent_kafka.admin.RESOURCE_TOPIC, topic)]
    fs = admin_client.describe_configs(resources)
    for resource, f in fs.items():
        remote_config = f.result()
        print(remote_config)
    return resource, remote_config
But I have this error:
Error
  File "kafka.py", line 192, in describe_topic
    resources = [ConfigResource(confluent_kafka.admin.RESOURCE_TOPIC, topic)]
  File "environment/mamba-6Od8R-HF-py2.7/lib/python2.7/site-packages/kafka/admin/config_resource.py", line 33, in __init__
    resource_type = ConfigResourceType[str(resource_type).upper()]  # pylint: disable-msg=unsubscriptable-object
  File "/product/tedhdev/environment/mamba-6Od8R-HF-py2.7/lib/python2.7/site-packages/enum/__init__.py", line 394, in __getitem__
    return cls._member_map_[name]
KeyError: '2'
Can someone help me?
Thanks a lot
import kafka
from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType
client_conn = kafka.KafkaAdminClient(bootstrap_servers="localhost:9092")
print(client_conn.describe_configs(config_resources=[ConfigResource(ConfigResourceType.TOPIC, "topic")]))
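To then change retention.ms (the stated end goal), here is a rough sketch using the confluent-kafka AdminClient; the broker address, topic name, and retention value are illustrative, and since AlterConfigs replaces the whole topic configuration, the current non-default entries are copied over first:

import confluent_kafka
from confluent_kafka.admin import AdminClient, ConfigResource

admin_client = AdminClient({"bootstrap.servers": "localhost:9092"})  # illustrative address
resource = ConfigResource(confluent_kafka.admin.RESOURCE_TOPIC, "my-topic")

# Copy the current non-default entries so they survive the alter call,
# then override only retention.ms.
described = admin_client.describe_configs([resource])
current = list(described.values())[0].result()  # dict of name -> ConfigEntry
for name, entry in current.items():
    if not entry.is_default:
        resource.set_config(name, entry.value)
resource.set_config("retention.ms", "604800000")  # 7 days, illustrative

fs = admin_client.alter_configs([resource])
list(fs.values())[0].result()  # returns None on success, raises on failure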

Getting KeyError: 'Endpoint' error in Python when calling Custom Vision API

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.customvision import prediction
from PIL import Image

endpoint = "https://southcentralus.api.cognitive.microsoft.com/"
project_id = "projectidhere"
prediction_key = "predictionkeyhere"

predict = CustomVisionPredictionClient(prediction_key, endpoint)

with open("c:/users/paul.barbin/pycharmprojects/hw3/TallowTest1.jpg", mode="rb") as image_data:
    tallowresult = predict.detect_image(project_id, "test1", image_data)
Python 3.7, using azure-cognitiveservices-vision-customvision 3.1.0.
Note that I've seen the same question on SO, but with no real solution; the posted answer on the other question says to use the REST API instead.
I believe the error is in the endpoint (as stated in the error), and I've tried a few variants - with the slash, without, using an environment variable, without, appending various strings to my endpoint - but I keep getting the same message. Any help is appreciated.
Full error here:
Traceback (most recent call last):
  File "GetError.py", line 15, in <module>
    tallowresult = predict.detect_image(project_id, "test1", image_data)
  File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\azure\cognitiveservices\vision\customvision\prediction\operations\_custom_vision_prediction_client_operations.py", line 354, in detect_image
    request = self._client.post(url, query_parameters, header_parameters, form_content=form_data_content)
  File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 193, in post
    request = self._request('POST', url, params, headers, content, form_content)
  File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 108, in _request
    request = ClientRequest(method, self.format_url(url))
  File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 155, in format_url
    base = self.config.base_url.format(**kwargs).rstrip('/')
KeyError: 'Endpoint'
CustomVisionPredictionClient takes two required, positional parameters: endpoint and credentials. The endpoint needs to be passed in before the credentials, so try swapping the order:
predict = CustomVisionPredictionClient(endpoint, prediction_key)
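If the client also insists on a credentials object rather than a raw key string, a sketch following the pattern in the Azure quickstarts (variable names from the question, values assumed):

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

# Wrap the raw prediction key in an ApiKeyCredentials object;
# the endpoint is still passed first.
credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predict = CustomVisionPredictionClient(endpoint, credentials)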

Can anyone give me the documentation of "Get started: write and deploy your first functions" with python?

There is this document from Firebase about how to write and deploy a Cloud Function in Node.js, but can anyone help me find that very document for Python? I am getting confused as I am a newbie in this field.
Meanwhile, I tried to write my cloud function, which looks like the below, but I keep getting the error I mention further down:
import json
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

def go_firebase(request):
    cred = credentials.Certificate('firebasesdk.json')
    firebase_admin.initialize_app(cred, {
        'databaseURL': 'https://firebaseio.com/'
    })
    ref = db.reference('agents')
    snapshot = ref.order_by_key().get()
    for key, val in snapshot.items():
        kw = val
    dictfilt = lambda x, y: dict([(i, x[i]) for i in x if i in set(y)])
    wanted_keys = ("address", "name", "phone", "uid")
    result = dictfilt(kw, wanted_keys)
    data = json.dumps(result, sort_keys=True)
    return data
And after deploying the function with http trigger, in the log, it is saying:
severity: "ERROR"
textPayload: "Traceback (most recent call last):
File "/env/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 313, in run_http_function
result = _function_handler.invoke_user_function(flask.request)
File "/env/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 215, in invoke_user_function
return call_user_function(request_or_event)
File "/env/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 208, in call_user_function
return self._user_function(request_or_event)
File "/user_code/main.py", line 6, in go_firebase
cred = credentials.Certificate('firebasesdk.json')
File "/env/lib/python3.7/site-packages/firebase_admin/credentials.py", line 83, in __init__
with open(cert) as json_file:
FileNotFoundError: [Errno 2] No such file or directory: 'firebasesdk.json'
I have no idea why it says the file is not found, because I have that json file in the same path from which I am deploying the function! I am using Google Cloud Shell!
Can anyone be kind enough to tell me where I am going wrong?
The .json file that you are referring to is probably the dependencies file for a Node.js Cloud Function.
How it works
Every Google Cloud Function has an extra file (in addition to the main code) that lists all the libraries to be installed. For example, if you are using the requests library in your code, there is no way to run pip install requests before executing the main code. Instead, you add this library to the additional file, and the Cloud Function will read that file during deployment and install all the libraries mentioned there.
For Node.js code the file with the libraries is a .json file. For Python it is a requirements.txt file. For more information, you can refer to The Python Runtime documentation.
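As an illustrative sketch, a Python deployment ships a requirements.txt in the same directory as main.py (the package versions below are assumptions, not a tested set):

# requirements.txt, deployed alongside main.py; one pip requirement per line,
# installed automatically at deployment time
firebase-admin==4.3.0
requests==2.24.0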

google.gax.errors.RetryError StatusCode.DEADLINE_EXCEEDED

From this GitHub repository:
https://github.com/GoogleCloudPlatform/python-docs-samples
I am trying to test the Video Intelligence API and do label analysis.
import argparse
import sys
import time
import io
import base64

from google.cloud.gapic.videointelligence.v1beta1 import enums
from google.cloud.gapic.videointelligence.v1beta1 import (
    video_intelligence_service_client)
# [END imports]

#python labels.py /Users/rockbaek/tildawatch-contents/EpicSkillShot/M7-_VukSueY/SKT\ vs\ KT\ Game\ 3\ _\ Grand\ Finals\ S7\ LCK\ Spring\ 2017\ _\ KT\ Rolster\ vs\ SK\ Telecom\ T1\ G3\ 1080p-M7-_VukSueY.mp4
def analyze_labels_file(path):
    """ Detects labels given a file path. """
    video_client = (video_intelligence_service_client.
                    VideoIntelligenceServiceClient())
    features = [enums.Feature.LABEL_DETECTION]

    with io.open(path, "rb") as movie:
        content_base64 = base64.b64encode(movie.read())

    operation = video_client.annotate_video(
        '', features, input_content=content_base64)
    print('\nProcessing video for label annotations:')

    while not operation.done():
        sys.stdout.write('.')
        sys.stdout.flush()
        time.sleep(15)

    print('\nFinished processing.')

    # first result is retrieved because a single video was processed
    results = operation.result().annotation_results[0]

    for i, label in enumerate(results.label_annotations):
        print('Label description: {}'.format(label.description))
        print('Locations:')
        for l, location in enumerate(label.locations):
            positions = 'Entire video'
            if (location.segment.start_time_offset != -1 or
                    location.segment.end_time_offset != -1):
                positions = '{} to {}'.format(
                    location.segment.start_time_offset / 1000000.0,
                    location.segment.end_time_offset / 1000000.0)
            print('\t{}: {}'.format(l, positions))
        print('\n')

if __name__ == '__main__':
    # [START running_app]
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('path', help='GCS file path for label detection.')
    args = parser.parse_args()
    analyze_labels_file(args.path)
    # [END running_app]
# [END full_tutorial]
And then I run it from the terminal:
python labels.py MP4_FILE_PATH
After quite a while, it fails with this error:
Traceback (most recent call last):
  File "labels.py", line 123, in <module>
    analyze_labels_file(args.path)
  File "labels.py", line 52, in analyze_labels_file
    '', features, input_content=content_base64)
  File "/Library/Python/2.7/site-packages/google/cloud/gapic/videointelligence/v1beta1/video_intelligence_service_client.py", line 237, in annotate_video
    self._annotate_video(request, options), self.operations_client,
  File "/Library/Python/2.7/site-packages/google/gax/api_callable.py", line 428, in inner
    return api_caller(api_call, this_settings, request)
  File "/Library/Python/2.7/site-packages/google/gax/api_callable.py", line 416, in base_caller
    return api_call(*args)
  File "/Library/Python/2.7/site-packages/google/gax/api_callable.py", line 376, in inner
    return a_func(*args, **kwargs)
  File "/Library/Python/2.7/site-packages/google/gax/retry.py", line 144, in inner
    raise exc
google.gax.errors.RetryError: GaxError(Retry total timeout exceeded with exception, caused by <_Rendezvous of RPC that terminated with (StatusCode.DEADLINE_EXCEEDED, Deadline Exceeded)>)
Please help me figure out why it is not working! :(
For visitors from the future: I faced the same error with the Cloud Speech API.
I just increased the timeout value when calling operation.result, and that solved it. Though this fragment isn't in the OP's code, it should be in the Google example code the OP mentioned.
operation.result(timeout=90) # increase this numeric value
I tried your code with a small video and it seems to work just fine for me. Maybe you are hitting some form of quota or limit (refer to https://cloud-dot-devsite.googleplex.com/video-intelligence/limits)? I have also run large videos loaded in Google Storage using the Python client library without issue.
Another step to try: send the request to the service via a curl command.
