I want to describe the configuration of one topic. I developed a script using the confluent-kafka-python library (version 1.5.0), and my Python version is 2.7.
My final goal is to be able to change the retention time (retention.ms) of my topic, but for this I need to extract the whole configuration of the topic, change just the setting I want, and leave the others as they were defined.
My script:
def describe_topic(admin_client, topic):
    resources = [ConfigResource(confluent_kafka.admin.RESOURCE_TOPIC, topic)]
    fs = admin_client.describe_configs(resources)
    for resource, f in fs.items():
        remote_config = f.result()
        print(remote_config)
    return resource, remote_config
But I get this error:
Error
File "kafka.py", line 192, in describe_topic
resources = [ConfigResource(confluent_kafka.admin.RESOURCE_TOPIC, topic)]
File "environment/mamba-6Od8R-HF-py2.7/lib/python2.7/site-packages/kafka/admin/config_resource.py", line 33, in init
resource_type = ConfigResourceType[str(resource_type).upper()] # pylint: disable-msg=unsubscriptable-object
File "/product/tedhdev/environment/mamba-6Od8R-HF-py2.7/lib/python2.7/site-packages/enum/init.py", line 394, in getitem
return cls.member_map[name]
KeyError: '2'\
Can someone help me?
Thanks a lot
Note what the traceback shows: ConfigResource is being loaded from kafka/admin/config_resource.py, i.e. from the kafka-python package, not from confluent-kafka-python. It therefore cannot interpret confluent_kafka.admin.RESOURCE_TOPIC (the integer 2) and raises KeyError: '2'. If you stay within kafka-python, describing the topic works:

import kafka
from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

client_conn = kafka.KafkaAdminClient(bootstrap_servers="localhost:9092")
print(client_conn.describe_configs(config_resources=[ConfigResource(ConfigResourceType.TOPIC, "topic")]))
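If you would rather stay on confluent-kafka-python, here is a minimal sketch of the original goal (overriding retention.ms while keeping the other explicitly set configs). It assumes a broker at localhost:9092 and that your version's alter_configs replaces the full topic config rather than patching it, which is why the current entries are copied first:

from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

def set_retention_ms(topic, retention_ms):
    resource = ConfigResource(ConfigResource.Type.TOPIC, topic)
    # alter_configs is not incremental: it replaces the whole topic config,
    # so first copy every non-default, writable entry into the request.
    current = list(admin.describe_configs([resource]).values())[0].result()
    for name, entry in current.items():
        if not entry.is_default and not entry.is_read_only:
            resource.set_config(name, entry.value)
    resource.set_config("retention.ms", str(retention_ms))
    # Wait for the broker to confirm; result() raises on failure.
    list(admin.alter_configs([resource]).values())[0].result()

set_retention_ms("my-topic", 86400000)  # e.g. one day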
I am writing code to make a translator with a GUI. My program runs, but when I try to translate text it throws the error AttributeError: 'NoneType' object has no attribute 'group'.
My code
from tkinter import *
from tkinter import ttk
from googletrans import Translator, LANGUAGES

def change(text="type", src="English", dest="Hindi"):
    text1 = text
    src1 = src
    dest1 = dest
    trans = Translator()
    trans1 = trans.translate(text, src=src1, dest=dest1)
    return trans1.text

def data():
    s = comb_sor.get()
    d = comb_dest.get()
    msg = Sor_txt.get(1.0, END)
    textget = change(text=msg, src=s, dest=d)
    dest_txt.delete(1.0, END)
    dest_txt.insert(END, textget)

root = Tk()
root.title("Translater")
root.geometry("500x800")
root.config(bg="#FFE1F3")

lab_txt = Label(root, text="Translator", font=("Time New Roman", 40, "bold"), fg="#478C5C")
lab_txt.place(x=100, y=40, height=50, width=300)

frame = Frame(root).pack(side=BOTTOM)

lab_txt = Label(root, text="Source Text", font=("Time New Roman", 20, "bold"), fg="#FFFF8A", bg="#FDA172")
lab_txt.place(x=100, y=100, height=20, width=300)

Sor_txt = Text(frame, font=("Time New Roman", 20, "bold"), wrap=WORD)
Sor_txt.place(x=10, y=130, height=150, width=480)

list_text = list(LANGUAGES.values())

comb_sor = ttk.Combobox(frame, value=list_text)
comb_sor.place(x=10, y=300, height=20, width=100)
comb_sor.set("English")

button_change = Button(frame, text="Translate", relief=RAISED, command=data)
button_change.place(x=120, y=300, height=40, width=100)

comb_dest = ttk.Combobox(frame, value=list_text)
comb_dest.place(x=230, y=300, height=20, width=100)
comb_dest.set("English")

lab_txt = Label(root, text="Dest Text", font=("Time New Roman", 20, "bold"), fg="#2E2EFF")
lab_txt.place(x=100, y=360, height=50, width=300)

dest_txt = Text(frame, font=("Time New Roman", 20, "bold"), wrap=WORD)
dest_txt.place(x=10, y=400, height=150, width=480)

root.mainloop()
The error and stack trace
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\praful pawar\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "C:\Users\praful pawar\AppData\Local\Temp\ipykernel_9920\1422625581.py", line 19, in data
textget = change(text=msg,src=s,dest=d)
File "C:\Users\praful pawar\AppData\Local\Temp\ipykernel_9920\1422625581.py", line 10, in change
trans1 = trans.translate(text,src=src1,dest=dest1)
File "C:\Users\praful pawar\AppData\Local\Programs\Python\Python310\lib\site-packages\googletrans\client.py", line 182, in translate
data = self._translate(text, dest, src, kwargs)
File "C:\Users\praful pawar\AppData\Local\Programs\Python\Python310\lib\site-packages\googletrans\client.py", line 78, in _translate
token = self.token_acquirer.do(text)
File "C:\Users\praful pawar\AppData\Local\Programs\Python\Python310\lib\site-packages\googletrans\gtoken.py", line 194, in do
self._update()
File "C:\Users\praful pawar\AppData\Local\Programs\Python\Python310\lib\site-packages\googletrans\gtoken.py", line 62, in _update
code = self.RE_TKK.search(r.text).group(1).replace('var ', '')
AttributeError: 'NoneType' object has no attribute 'group'
What it looks like
(Screenshot: the application window opens and runs, but the translation fails with the error above.)
This library's own documentation says (bold added by me):
I eventually figure out a way to generate a ticket by reverse engineering on the obfuscated and minified code used by Google to generate such token, and implemented on the top of Python. However, this could be blocked at any time.
It is designed to work around limitations that Google deliberately installed in Google Translate to make sure you use their official API (and presumably pay) to connect to their service programmatically.
I believe that what you are seeing today is that, as the author warned could happen at any time, it got blocked. The last release was two years ago, so Google has had plenty of time to patch the hole this library exploited.
PS: there's a ticket about this...
Well, I still think this library could get blocked at any time, but it turns out it is still actively maintained, and they have a ticket open about this issue: https://github.com/ssut/py-googletrans/issues/354. I suggest you watch that ticket; maybe they will fix it.
I just tested the release candidate one reply mentions in the ticket, and it fixed the problem for me:
pip install googletrans==4.0.0rc1
Installing that release candidate might make your code work.
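As a quick sanity check after installing the release candidate, a minimal sketch using the same Translator API as your change() function:

from googletrans import Translator

# Verify that translation works again on googletrans==4.0.0rc1
trans = Translator()
result = trans.translate("hello world", src="en", dest="hi")
print(result.text)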
I am trying to deploy a Google Cloud Function that performs sentiment analysis on tweets using a flair NLP model. The code deploys perfectly fine without the line 'import flair' or alternatives like 'from flair import x,y,z'. As soon as I include the import statement for flair, the function fails to deploy. Below is the error I get when deploying with the import statement (copied from the Firebase logs). This is my first time posting on StackOverflow, so please pardon me if the post looks ugly.
{"#type":"type.googleapis.com/google.cloud.audit.AuditLog","status":{"code":3,"message":"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Code in file main.py can't be loaded.\nDetailed stack trace:\nTraceback (most recent call last):\n File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py\", line 359, in check_or_load_user_function\n _function_handler.load_user_function()\n File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py\", line 236, in load_user_function\n spec.loader.exec_module(main_module)\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n File \"/user_code/main.py\", line 5, in <module>\n from flair import models, data\n File \"/env/local/lib/python3.7/site-packages/flair/__init__.py\", line 20, in <module>\n from . import models\n File \"/env/local/lib/python3.7/site-packages/flair/models/__init__.py\", line 1, in <module>\n from .sequence_tagger_model import SequenceTagger, MultiTagger\n File \"/env/local/lib/python3.7/site-packages/flair/models/sequence_tagger_model.py\", line 21, in <module>\n from flair.embeddings import TokenEmbeddings, StackedEmbeddings, Embeddings\n File \"/env/local/lib/python3.7/site-packages/flair/embeddings/__init__.py\", line 6, in <module>\n from .token import TokenEmbeddings\n File \"/env/local/lib/python3.7/site-packages/flair/embeddings/token.py\", line 10, in <module>\n from transformers import AutoTokenizer, AutoConfig, AutoModel, CONFIG_MAPPING, PreTrainedTokenizer\nImportError: cannot import name 'AutoModel' from 'transformers' (unknown location)\n. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."},"authenticationInfo":
And here is the script I am trying to deploy, as well as the requirements.txt file
main.py
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
from datetime import datetime, timedelta
import flair
# or from flair import models, data

class FirestoreHandler():
    cred = credentials.Certificate("serviceAccountKey.json")
    firebase_admin.initialize_app(cred)
    db = firestore.client()

    def analysis_on_create(self):
        docs = self.db.collection('tweets').order_by(u'time', direction=firestore.Query.DESCENDING).limit(1).get()
        data = docs[0].to_dict()
        most_recent_tweet = data['full-text']
        sentiment_model = flair.models.TextClassifier.load('en-sentiment')
        sentence = flair.data.Sentence(str(most_recent_tweet))
        sentiment_model.predict(sentence)
        result = sentence.labels[0]
        if result.value == "POSITIVE":
            val = 1 * result.score
        else:
            val = -1 * result.score
        self.db.collection('sentiment').add({'sentiment': val, 'timestamp': datetime.now() + timedelta(hours=3)})

    def add_test(self):
        self.db.collection('test3').add({"status": "success", 'timestamp': datetime.now() + timedelta(hours=3)})

def hello_firestore(event, context):
    """Triggered by a change to a Firestore document.
    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    resource_string = context.resource
    # print out the resource string that triggered the function
    print(f"Function triggered by change to: {resource_string}.")
    # now print out the entire event object
    print(str(event))
    fire = FirestoreHandler()
    fire.add_test()
    fire.analysis_on_create()
requirements.txt
# Function dependencies, for example:
# package>=version
firebase-admin==5.0.1
https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl
flair
I included the URL to the PyTorch wheel because flair is built on PyTorch, and the function would not deploy without it (even when I didn't import flair in main.py). I have also tried specifying different versions for flair, to no avail.
Any intuition as to what may be causing this issue would be greatly appreciated! I am new to the Google Cloud ecosystem, this being my first project. If there is any additional information I can provide, please let me know.
Edit: I am deploying from the website (not using CLI)
I am not sure that the provided requirements.txt is OK for a GCP Cloud Functions deployment; in particular, I'm not sure an explicit https URL is going to be handled correctly...
The Specifying dependencies in Python documentation page describes how the dependencies are to be stated - using the pip package manager's requirements.txt file or packaging local dependencies alongside your function.
Can you simply mention flair with the necessary version in the requirements.txt file? Will it work?
In addition, the error you provided highlights that the transformers package is required. Can it be that some specific version is needed?
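For example, a hypothetical requirements.txt along these lines (the flair version is only an assumed example; the point is to pin flair and let pip resolve a compatible torch and transformers rather than hand-picking a torch wheel by URL):

# Function dependencies, for example:
# package>=version
firebase-admin==5.0.1
flair==0.8.1  # assumed version; pip pulls matching torch/transformers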
====
As a side comment: I don't know your context and requirements, but I am not sure that all of this is required in order to work with Firestore from inside a cloud function:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
It might be better to avoid using serviceAccountKey.json at all, and simply assign the relevant IAM roles to the service account used for the given cloud function's execution.
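A minimal sketch of that approach (assuming the function's runtime service account already has a Firestore role such as roles/datastore.user):

import firebase_admin
from firebase_admin import firestore

# With no explicit credentials, the Admin SDK falls back to Application
# Default Credentials, i.e. the function's runtime service account,
# so no serviceAccountKey.json has to be bundled with the function.
firebase_admin.initialize_app()
db = firestore.client()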
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.customvision import prediction
from PIL import Image

endpoint = "https://southcentralus.api.cognitive.microsoft.com/"
project_id = "projectidhere"
prediction_key = "predictionkeyhere"

predict = CustomVisionPredictionClient(prediction_key, endpoint)
with open("c:/users/paul.barbin/pycharmprojects/hw3/TallowTest1.jpg", mode="rb") as image_data:
    tallowresult = predict.detect_image(project_id, "test1", image_data)
Python 3.7, and I'm using Azure Custom Vision 3.1 (azure-cognitiveservices-vision-customvision 3.1.0).
Note that I've seen the same question on SO but no real solution. The posted answer on the other question says to use the REST API instead.
I believe the error is in the endpoint (as stated in the error), and I've tried a few variants: with the trailing slash, without it, using an environment variable, and appending various strings to my endpoint, but I keep getting the same message. Any help is appreciated.
Full error here:
Traceback (most recent call last):
  File "GetError.py", line 15, in <module>
    tallowresult = predict.detect_image(project_id, "test1", image_data)
  File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\azure\cognitiveservices\vision\customvision\prediction\operations\_custom_vision_prediction_client_operations.py", line 354, in detect_image
    request = self._client.post(url, query_parameters, header_parameters, form_content=form_data_content)
  File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 193, in post
    request = self._request('POST', url, params, headers, content, form_content)
  File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 108, in _request
    request = ClientRequest(method, self.format_url(url))
  File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 155, in format_url
    base = self.config.base_url.format(**kwargs).rstrip('/')
KeyError: 'Endpoint'
CustomVisionPredictionClient takes two required positional parameters: endpoint and credentials. Endpoint needs to be passed in before credentials; try swapping the order:
predict = CustomVisionPredictionClient(endpoint, prediction_key)
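For recent releases of the SDK (around 3.1.0), the key also needs to be wrapped in a credentials object rather than passed as a bare string; a sketch, assuming your existing endpoint and prediction_key values:

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

# Wrap the prediction key in ApiKeyCredentials; endpoint comes first.
credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predict = CustomVisionPredictionClient(endpoint, credentials)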
I'm using Python to retrieve a Blob image from Azure storage and then send it to Custom Vision for a prediction.
This is the code:
import io
from azure.storage.blob import BlockBlobService
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
block_blob_service = BlockBlobService(
    account_name=account_name,
    account_key=account_key
)

fp = io.BytesIO()
block_blob_service.get_blob_to_stream(
    container_name,
    blob_name,
    fp,
    max_connections=2
)

predictor = CustomVisionPredictionClient(
    cv_prediction_key,
    endpoint=cv_endpoint
)

# This call breaks with the below error message
results = predictor.predict_image(
    cv_project_id,
    image_data.getvalue(),
    iteration_id=cv_iteration_id
)
However, executing the predict_image function results in the following error:
System.Private.CoreLib: Exception while executing function: Functions.ReloadPostgres. System.Private.CoreLib: Result: Failure
Exception: HttpOperationError: Operation returned an invalid status code 'Resource Not Found'
Stack: File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 288, in _handle__invocation_request
    self.__run_sync_func, invocation_id, fi.func, args)
  File "~/.pyenv/versions/3.6.8/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 347, in __run_sync_func
    return func(**params)
  File "~/py_func_app/ReloadPostgres/__init__.py", line 14, in main
    data_handler.fetch_prediction_data()
  File "~/py_func_app/Shared_Code/data_handler.py", line 127, in fetch_prediction_data
    cv_handler.predict_image(image_data.getvalue(), cv_model)
  File "~/py_func_app/Shared_Code/custom_vision.py", line 30, in predict_image
    raise e
  File "~/py_func_app/Shared_Code/custom_vision.py", line 26, in predict_image
    iteration_id=cv_model.cv_iteration_id
  File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/cognitiveservices/vision/customvision/prediction/custom_vision_prediction_client.py", line 215, in predict_image
    raise HttpOperationError(self._deserialize, response)
Below I am providing a similar example of a Custom Vision prediction using an image URL; you can change it to use an image file:
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 19 11:04:54 2019

@author: moverm
"""
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
predictor = CustomVisionPredictionClient(
    "prediction-key",
    endpoint="https://southcentralus.api.cognitive.microsoft.com"
)
test_img_url = "https://pointsprizes-blog.s3-accelerate.amazonaws.com/316.jpg"
results = predictor.predict_image_url("project-Id", "Iteration-Id", url=test_img_url)
# Display the results.
for prediction in results.predictions:
    print("\t" + prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100))
Basically, the issue is related to the endpoint. Use https://southcentralus.api.cognitive.microsoft.com as the endpoint.
It should work, and you should be able to see the prediction probability.
Hope it helps.
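Applied back to the original blob-based code, a minimal sketch under the same assumptions (your account_name, account_key, container_name, blob_name and Custom Vision IDs are placeholders; the stream fp, not image_data, is what gets passed to the prediction call):

import io

from azure.storage.blob import BlockBlobService
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

# Download the blob into an in-memory stream.
block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
fp = io.BytesIO()
block_blob_service.get_blob_to_stream(container_name, blob_name, fp, max_connections=2)

# Plain regional endpoint, without any /customvision/... suffix.
predictor = CustomVisionPredictionClient(
    cv_prediction_key,
    endpoint="https://southcentralus.api.cognitive.microsoft.com"
)
results = predictor.predict_image(cv_project_id, fp.getvalue(), iteration_id=cv_iteration_id)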
I tried to reproduce your issue and got a similar one, caused by using the incorrect endpoint from the Azure portal when I created a Cognitive Service in the Japan East region.
The portal shows the endpoint as https://japaneast.api.cognitive.microsoft.com/customvision/training/v1.0 for version 1, but the azure-cognitiveservices-vision-customvision PyPI page points out the correct endpoint, which should be https://{AzureRegion}.api.cognitive.microsoft.com.
So I got a similar issue to yours when using the incorrect endpoint. My code was the same as yours; the only difference is the running environment: yours is on Azure Functions, mine is a console script.
Meanwhile, according to the source code custom_vision_prediction_client.py of the Azure Cognitive Services SDK for Custom Vision, you can see the code base_url = '{Endpoint}/customvision/v2.0/Prediction', which concatenates your passed endpoint with /customvision/v2.0/Prediction to generate the real endpoint for calling the prediction API.
Therefore, as @MohitVerma-MSFT said, use https://<your cognitive service region>.api.cognitive.microsoft.com for the current version of the Python package.
As an additional note, there is an announcement of an important update for customvision.ai that you should know about; it may affect your current code soon.
From this GitHub repository:
https://github.com/GoogleCloudPlatform/python-docs-samples
I am trying to test the Video Intelligence API and do label analysis.
import argparse
import sys
import time
import io
import base64

from google.cloud.gapic.videointelligence.v1beta1 import enums
from google.cloud.gapic.videointelligence.v1beta1 import (
    video_intelligence_service_client)
# [END imports]

# python labels.py /Users/rockbaek/tildawatch-contents/EpicSkillShot/M7-_VukSueY/SKT\ vs\ KT\ Game\ 3\ _\ Grand\ Finals\ S7\ LCK\ Spring\ 2017\ _\ KT\ Rolster\ vs\ SK\ Telecom\ T1\ G3\ 1080p-M7-_VukSueY.mp4

def analyze_labels_file(path):
    """ Detects labels given a file path. """
    video_client = (video_intelligence_service_client.
                    VideoIntelligenceServiceClient())
    features = [enums.Feature.LABEL_DETECTION]
    with io.open(path, "rb") as movie:
        content_base64 = base64.b64encode(movie.read())
    operation = video_client.annotate_video(
        '', features, input_content=content_base64)
    print('\nProcessing video for label annotations:')
    while not operation.done():
        sys.stdout.write('.')
        sys.stdout.flush()
        time.sleep(15)
    print('\nFinished processing.')

    # first result is retrieved because a single video was processed
    results = operation.result().annotation_results[0]
    for i, label in enumerate(results.label_annotations):
        print('Label description: {}'.format(label.description))
        print('Locations:')
        for l, location in enumerate(label.locations):
            positions = 'Entire video'
            if (location.segment.start_time_offset != -1 or
                    location.segment.end_time_offset != -1):
                positions = '{} to {}'.format(
                    location.segment.start_time_offset / 1000000.0,
                    location.segment.end_time_offset / 1000000.0)
            print('\t{}: {}'.format(l, positions))
        print('\n')

if __name__ == '__main__':
    # [START running_app]
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('path', help='GCS file path for label detection.')
    args = parser.parse_args()
    analyze_labels_file(args.path)
    # [END running_app]
# [END full_tutorial]
and then I run it from the terminal:
python labels.py MP4_FILE_PATH
After quite a while, it fails with this error:
Traceback (most recent call last):
File "labels.py", line 123, in <module>
analyze_labels_file(args.path)
File "labels.py", line 52, in analyze_labels_file
'', features, input_content=content_base64)
File "/Library/Python/2.7/site-packages/google/cloud/gapic/videointelligence/v1beta1/video_intelligence_service_client.py", line 237, in annotate_video
self._annotate_video(request, options), self.operations_client,
File "/Library/Python/2.7/site-packages/google/gax/api_callable.py", line 428, in inner
return api_caller(api_call, this_settings, request)
File "/Library/Python/2.7/site-packages/google/gax/api_callable.py", line 416, in base_caller
return api_call(*args)
File "/Library/Python/2.7/site-packages/google/gax/api_callable.py", line 376, in inner
return a_func(*args, **kwargs)
File "/Library/Python/2.7/site-packages/google/gax/retry.py", line 144, in inner
raise exc
google.gax.errors.RetryError: GaxError(Retry total timeout exceeded with exception, caused by <_Rendezvous of RPC that terminated with (StatusCode.DEADLINE_EXCEEDED, Deadline Exceeded)>)
Please help me figure out why it is not working! :(
For visitors from the future: I faced the same error with the Cloud Speech API.
I just increased the timeout value when calling operation.result, and that solved it. Though this fragment isn't in the OP's code, it should be in the Google example code the OP mentioned.
operation.result(timeout=90) # increase this numeric value
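In the OP's script, the equivalent change would be at the result() call, along these lines (600 is an arbitrary example value, assuming the operation future accepts a timeout as in the Speech example above):

# Give the long-running operation up to 10 minutes before timing out.
results = operation.result(timeout=600).annotation_results[0]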
I tried your code with a small video and it seems to work just fine for me. Maybe you are hitting some form of quota or limit (refer to: https://cloud-dot-devsite.googleplex.com/video-intelligence/limits)? I have also run large videos loaded into Google Storage through the Python client library without issue.
Another step to try: send the request to the service via a curl command.