I have the following Python code to label images using Clarifai. It had been working for the past 6-8 months, but for the last few days I have been getting the error below. Note that I have not made any changes to the working version of the code.
# python program to analyse an image and label it
'''
Dependencies:
pip install flask
pip install clarifai-grpc
(logging and os are part of the Python standard library; no pip install needed)
'''
from flask import Flask, render_template, request
import logging
import os
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_json_channel()
stub = service_pb2_grpc.V2Stub(channel)

# This will be used by every Clarifai endpoint call.
# The word 'Key' is required to precede the authentication key.
metadata = (('authorization', 'Key API_KEY_HERE'),)

webapp = Flask(__name__)  # creating a web application for the current python file

# decorators
@webapp.route('/')
def index():
    return render_template('index.html', len=0)

@webapp.route('/', methods=['POST'])
def search():
    if request.form.get('searchByURL'):
        url = request.form['searchByURL']
        my_request = service_pb2.PostModelOutputsRequest(
            # This is the model ID of a publicly available General model.
            # You may use any other public or custom model ID.
            model_id='aaa03c23b3724a16a56b629203edc62c',
            inputs=[
                resources_pb2.Input(data=resources_pb2.Data(image=resources_pb2.Image(url=url)))
            ])
        response = stub.PostModelOutputs(my_request, metadata=metadata)
        if response.status.code != status_code_pb2.SUCCESS:
            message = ["You have reached the limit for today!"]
            return render_template('index.html', len=1, searchResults=message)
        concepts = []
        for concept in response.outputs[0].data.concepts:
            concepts.append(concept.name)
        concepts = concepts[0:10]
        return render_template('index.html', len=len(concepts), searchResults=concepts)
    elif request.files.get('searchByImage'):
        file = request.files['searchByImage']
        file.save(file.filename)
        # IMAGE INPUT:
        with open(file.filename, "rb") as f:
            file_bytes = f.read()
        post_model_outputs_response = stub.PostModelOutputs(
            service_pb2.PostModelOutputsRequest(
                model_id="aaa03c23b3724a16a56b629203edc62c",
                version_id="aa7f35c01e0642fda5cf400f543e7c40",  # This is optional. Defaults to the latest model version.
                inputs=[
                    resources_pb2.Input(
                        data=resources_pb2.Data(
                            image=resources_pb2.Image(
                                base64=file_bytes
                            )
                        )
                    )
                ]
            ),
            metadata=metadata
        )
        if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
            message = ["You have reached the limit for today!"]
            return render_template('index.html', len=1, searchResults=message)
        # Since we have one input, one output will exist here.
        output = post_model_outputs_response.outputs[0]
        os.remove(file.filename)
        concepts = []
        # Predicted concepts:
        for concept in output.data.concepts:
            concepts.append(concept.name)
        concepts = concepts[0:10]
        return render_template('index.html', len=len(concepts), searchResults=concepts)
    else:
        return render_template('index.html', len=1, searchResults=["No Image entered!"])

# run the server
if __name__ == "__main__":
    logging.basicConfig(filename='error.log', level=logging.DEBUG)
    webapp.run(debug=True)
Error:
Exception has occurred: ImportError
dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/grpc/_cython/cygrpc.cpython-310-darwin.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/grpc/_cython/cygrpc.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))
File "/Users/eshaangupta/Desktop/Python-Level-4/Image Analyser.py", line 15, in <module>
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
It looks like you are trying to run this on a different architecture than you have in the past. You've been running on x86_64 (likely macOS) and have now moved to an ARM architecture. I'm guessing you've upgraded to an Apple Silicon (M1) MacBook, although you may have moved to a different ARM-based chip.
(mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))
This is a problem with a file in grpc, specifically the library file cygrpc.cpython-310-darwin.so. I'd recommend removing gRPC and re-installing it to see if that resolves the issue.
Something like this might work:
python -m pip install --force-reinstall grpcio
(This is assuming python points to python3.10 in your system).
although I'm not sure how you've installed it so that is just a guess.
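If the force-reinstall alone doesn't fix it, pip may still be reusing a cached x86_64 wheel. Purging the cache first (the pip cache command exists in pip 20.1+) is worth a try; this is a suggestion, not something I've verified on an Apple Silicon machine:
python -m pip cache purge
python -m pip install --no-cache-dir --force-reinstall grpcio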
I want to call a Python function that uses numpy and pandas from my Flutter app and get the output of this function.
I found a way to do that by using the ffi package, but I don't know how.
Some say I can do this by making a .dylib file from the Python project and then using this code to call it:
final path = absolute('native/libadd.dylib');
final dylib = DynamicLibrary.open(path);
final add = dylib.lookupFunction('add');
but I am getting this error:
: Error: Expected type 'NativeFunction<Function>' to be a valid and instantiated subtype of 'NativeType'.
lib/home_screen.dart:32
- 'NativeFunction' is from 'dart:ffi'.
- 'Function' is from 'dart:core'.
final add = dylib.lookupFunction('add');
so I think it's not available on Android
You should try using Flet for this. It is written entirely in Python and still provides complete Flutter functionality and code. A basic app in it looks something like this:
import flet as ft

def main(page: ft.Page):
    page.title = "Flet counter example"
    page.vertical_alignment = ft.MainAxisAlignment.CENTER

    txt_number = ft.TextField(value="0", text_align=ft.TextAlign.RIGHT, width=100)

    def minus_click(e):
        txt_number.value = str(int(txt_number.value) - 1)
        page.update()

    def plus_click(e):
        txt_number.value = str(int(txt_number.value) + 1)
        page.update()

    page.add(
        ft.Row(
            [
                ft.IconButton(ft.icons.REMOVE, on_click=minus_click),
                txt_number,
                ft.IconButton(ft.icons.ADD, on_click=plus_click),
            ],
            alignment=ft.MainAxisAlignment.CENTER,
        )
    )

ft.app(target=main)
You can import it right into your project, just like numpy or pandas.
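Since your original goal was to call a function that uses numpy and pandas, note that a Flet event handler is ordinary Python, so you can call such a function directly. A minimal sketch (the DataFrame contents and control names here are made up for illustration):

import flet as ft
import pandas as pd

def main(page: ft.Page):
    result = ft.Text(value="?")

    def compute(e):
        # Any Python library works inside a Flet handler.
        df = pd.DataFrame({"x": [1, 2, 3]})
        result.value = str(df["x"].mean())
        page.update()

    page.add(ft.ElevatedButton("Compute mean", on_click=compute), result)

ft.app(target=main)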
I am following the tutorial found here: https://www.geeksforgeeks.org/youtube-data-api-set-1/. After I run the code below, I get a "No module named 'apiclient'" error. I also tried "from googleapiclient import discovery", but that gave an error as well. Does anyone have alternatives I can try?
I have already run pip install --upgrade google-api-python-client.
Would appreciate any help/suggestions!
Here is the code:
from apiclient.discovery import build

# Arguments that need to be passed to the build function
DEVELOPER_KEY = "your_API_Key"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# creating YouTube resource object
youtube_object = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                       developerKey=DEVELOPER_KEY)

def youtube_search_keyword(query, max_results):
    # calling the search.list method to
    # retrieve youtube search results
    search_keyword = youtube_object.search().list(q=query, part="id, snippet",
                                                  maxResults=max_results).execute()
    # extracting the results from search response
    results = search_keyword.get("items", [])

    # empty lists to store video,
    # channel, playlist metadata
    videos = []
    playlists = []
    channels = []

    # extracting required info from each result object
    for result in results:
        # video result object
        if result['id']['kind'] == "youtube#video":
            videos.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                          result["id"]["videoId"], result['snippet']['description'],
                          result['snippet']['thumbnails']['default']['url']))
        # playlist result object
        elif result['id']['kind'] == "youtube#playlist":
            playlists.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                             result["id"]["playlistId"],
                             result['snippet']['description'],
                             result['snippet']['thumbnails']['default']['url']))
        # channel result object
        elif result['id']['kind'] == "youtube#channel":
            channels.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                            result["id"]["channelId"],
                            result['snippet']['description'],
                            result['snippet']['thumbnails']['default']['url']))

    print("Videos:\n", "\n".join(videos), "\n")
    print("Channels:\n", "\n".join(channels), "\n")
    print("Playlists:\n", "\n".join(playlists), "\n")

if __name__ == "__main__":
    youtube_search_keyword('Geeksforgeeks', max_results=10)
With this information it's hard to say what the problem is. But sometimes I've been banging my head against the wall after installing something with pip (Python 2) and then trying to import the module in Python 3, or vice versa.
So if you are running your script with Python 3, try installing the package with pip3 install --upgrade google-api-python-client.
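It may also simply be the import path: in recent versions of google-api-python-client the module is exposed as googleapiclient, and the old apiclient alias has been removed. Something like this should work (a sketch, with your own key in place of the placeholder):

# The modern package name is googleapiclient; 'apiclient' was a legacy
# alias that has been removed in recent releases.
from googleapiclient.discovery import build

youtube_object = build("youtube", "v3", developerKey="your_API_Key")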
Try the YouTube docs here:
https://developers.google.com/youtube/v3/code_samples
They worked for me on a recently updated Slackware_64 14.2
I use them with Python 3.8. Since there may also be a version 2 of Python installed, I make sure to use this shebang line:
#!/usr/bin/python3.8
Likewise with pip, I use pip3.8 to install dependencies.
I installed Python from source: python3.8 --version reports Python 3.8.2.
You can also look at this video here:
https://www.youtube.com/watch?v=qDWtB2q_09g
It explains how to use YouTube's API Explorer, and you can copy code samples directly from there. The video covers Android, but the same concept applies to Python when using YouTube's API Explorer.
I concur with the previous answer regarding Python versions.
I am using Google Cloud / JupyterLab / Python.
I'm trying to run a sample sentiment analysis, following the guide here.
However, on running the example, I get this error:
AttributeError: 'SpeechClient' object has no attribute 'analyze_sentiment'
Below is the code I'm trying:
def sample_analyze_sentiment(gcs_content_uri):
    gcs_content_uri = 'gs://converted_audiofiles/Converted_Audio/200315_1633 1.txt'
    client = language_v1.LanguageServiceClient()
    type_ = enums.Document.Type.PLAIN_TEXT
    language = "en"
    document = {
        "gcs_content_uri": 'gs://converted_audiofiles/Converted_Audio/200315_1633 1.txt',
        "type": 'enums.Document.Type.PLAIN_TEXT',
        "language": 'en'
    }
    response = client.analyze_sentiment(document,
                                        encoding_type=encoding_type)
I had no problem generating the transcript using Speech-to-Text, but no success getting a document sentiment analysis!?
I had no problem performing analyze_sentiment following the documentation example.
I see some issues with your code. To me it should be:
from google.cloud import language_v1
from google.cloud.language import enums
from google.cloud.language import types

def sample_analyze_sentiment(path):
    # path = 'gs://converted_audiofiles/Converted_Audio/200315_1633 1.txt'
    # If path is passed into the function, it does not need to be specified
    # inside it. You can always set path = "default-path" when defining the function.
    client = language_v1.LanguageServiceClient()
    document = types.Document(
        gcs_content_uri=path,
        type=enums.Document.Type.PLAIN_TEXT,
        language='en',
    )
    response = client.analyze_sentiment(document)
    return response
I then tried the previous code with a path of my own to a text file inside a bucket in Google Cloud Storage:
response = sample_analyze_sentiment("<my-path>")
sentiment = response.document_sentiment
print(sentiment.score)
print(sentiment.magnitude)
I got a successful run with sentiment score -0.5 and magnitude 1.5. I performed the run in JupyterLab with Python 3, which I assume is the setup you have.
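One caveat: the enums and types modules used above only exist in google-cloud-language 1.x. If you are on 2.x or later they have been removed, and the equivalent call looks like this (a sketch, untested against your exact setup):

from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    gcs_content_uri="<my-path>",  # placeholder, as above
    type_=language_v1.Document.Type.PLAIN_TEXT,  # note the trailing underscore in 2.x
    language="en",
)
response = client.analyze_sentiment(document=document)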
Hi, I'm trying to set up a toolchain for the Fn project. The approach is to set up a toolchain per binary available in GitHub and then, in theory, use it in a rule.
I have a common package which has the available binaries:
default_version = "0.5.44"

os_list = [
    "linux",
    "mac",
    "windows",
]

def get_bin_name(os):
    return "fn_cli_%s_bin" % os
The download part looks like this:
load(":common.bzl", "get_bin_name", "os_list", "default_version")
_url = "https://github.com/fnproject/cli/releases/download/{version}/{file}"
_os_to_file = {
"linux" : "fn_linux",
"mac" : "fn_mac",
"windows" : "fn.exe",
}
def _fn_binary(os):
name = get_bin_name(os)
file = _os_to_file.get(os)
url = _url.format(
file = file,
version = default_version
)
native.http_file(
name = name,
urls = [url],
executable = True
)
def fn_binaries():
"""
Installs the hermetic binary for Fn.
"""
for os in os_list:
_fn_binary(os)
Then I set up the toolchain like this:
load(":common.bzl", "get_bin_name", "os_list")
_toolchain_type = "toolchain_type"
FnInfo = provider(
doc = "Information about the Fn Framework CLI.",
fields = {
"bin" : "The Fn Framework binary."
}
)
def _fn_cli_toolchain(ctx):
toolchain_info = platform_common.ToolchainInfo(
fn_info = FnInfo(
bin = ctx.attr.bin
)
)
return [toolchain_info]
fn_toolchain = rule(
implementation = _fn_cli_toolchain,
attrs = {
"bin" : attr.label(mandatory = True)
}
)
def _add_toolchain(os):
toolchain_name = "fn_cli_%s" % os
native_toolchain_name = "fn_cli_%s_toolchain" % os
bin_name = get_bin_name(os)
compatibility = ["#bazel_tools//platforms:%s" % os]
fn_toolchain(
name = toolchain_name,
bin = ":%s" % bin_name,
visibility = ["//visibility:public"]
)
native.toolchain(
name = native_toolchain_name,
toolchain = ":%s" % toolchain_name,
toolchain_type = ":%s" % _toolchain_type,
target_compatible_with = compatibility
)
def setup_toolchains():
"""
Macro te set up the toolchains for the different platforms
"""
native.toolchain_type(name = _toolchain_type)
for os in os_list:
_add_toolchain(os)
def fn_register():
"""
Registers the Fn toolchains.
"""
path = "//tools/bazel_rules/fn/internal/cli:fn_cli_%s_toolchain"
for os in os_list:
native.register_toolchains(path % os)
In my BUILD file I call setup_toolchains:
load(":toolchain.bzl", "setup_toolchains")
setup_toolchains()
With this set up I have a rule which looks like this:
_toolchain = "//tools/bazel_rules/fn/cli:toolchain_type"

def _fn(ctx):
    print("HEY")
    bin = ctx.toolchains[_toolchain].fn_info.bin
    print(bin)

# TEST RULE
fn = rule(
    implementation = _fn,
    toolchains = [_toolchain],
)
Workspace:
workspace(name = "basicwindow")
load("//tools/bazel_rules/fn:defs.bzl", "fn_binaries", "fn_register")
fn_binaries()
fn_register()
When I query for the different binaries with bazel query //tools/bazel_rules/fn/internal/cli:fn_cli_linux_bin they are there, but calling bazel build //... results in an error which complains:
ERROR: /Users/marcguilera/Code/Marc/basicwindow/tools/bazel_rules/fn/internal/cli/BUILD.bazel:2:1: in bin attribute of fn_toolchain rule //tools/bazel_rules/fn/internal/cli:fn_cli_windows: rule '//tools/bazel_rules/fn/internal/cli:fn_cli_windows_bin' does not exist. Since this rule was created by the macro 'setup_toolchains', the error might have been caused by the macro implementation in /Users/marcguilera/Code/Marc/basicwindow/tools/bazel_rules/fn/internal/cli/toolchain.bzl:35:15
ERROR: Analysis of target '//tools/bazel_rules/fn/internal/cli:fn_cli_windows' failed; build aborted: Analysis of target '//tools/bazel_rules/fn/internal/cli:fn_cli_windows' failed; build aborted
INFO: Elapsed time: 0.079s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 0 targets configured)
I tried to follow the toolchain tutorial in the documentation but I can't get it to work. Another interesting thing is that I'm actually using a Mac, so the toolchain compatibility also seems to be wrong.
I'm using this toolchain in a repo, so the paths vary, but here's a repo containing only the Fn stuff for ease of reading.
Two things:
One, I suspect this is your actual issue: https://github.com/bazelbuild/bazel/issues/6828
The core of the problem is that, if the toolchain_type target is in an external repository, it always needs to be referred to by its fully-qualified name, never by the locally-qualified name.
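For example, inside _add_toolchain from the toolchain.bzl above, the registration would refer to the type by its full label. A sketch only, assuming the workspace name "basicwindow" from the WORKSPACE shown in the question:

# Fully-qualified label instead of the package-relative ":toolchain_type".
_toolchain_type = "@basicwindow//tools/bazel_rules/fn/internal/cli:toolchain_type"

def _add_toolchain(os):
    ...
    native.toolchain(
        name = native_toolchain_name,
        toolchain = ":%s" % toolchain_name,
        toolchain_type = _toolchain_type,
        target_compatible_with = compatibility,
    )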
The second is a little more fundamental: you have a lot of Starlark macros here that are generating other targets, and it's very hard to read. It would actually be a lot simpler to remove a lot of the macros, such as _fn_binary, fn_binaries, and _add_toolchains. Just have setup_toolchains directly create the needed native.toolchain targets, and have a repository macro that calls http_archive three times to declare the three different sets of binaries. This will make the code much easier to read and thus easier to debug.
For debugging toolchains, I follow two steps: first, I verify that the tool repositories exist and can be accessed directly, and then I check the toolchain registration and resolution.
After going several levels deep, it looks like you're calling http_archive, naming the new repository @linux, and downloading a specific binary file. This isn't how http_archive works: it expects to fetch a zip file (or tar.gz file), extract it, and find a WORKSPACE and at least one BUILD file inside.
My suggestions: simplify your macros, get the external repositories clearly defined, and then explore using toolchain resolution to choose the right one.
I'm happy to help answer further questions as needed.
I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the model_config and trigger the server to reload it.
This functionality appears to exist (based on https://github.com/tensorflow/serving/pull/885 and https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself).
So it took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question, here is an example of how to do this. (You'll need the tensorflow_serving package: pip install tensorflow-serving-api.)
Based on this pull request (which at the time of writing hadn't been accepted and was closed since it needed review): https://github.com/tensorflow/serving/pull/1065
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
import grpc

def add_model_config(host, name, base_path, model_platform):
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()

    # Create a config to add to the list of served models
    config_list = model_server_config_pb2.ModelConfigList()
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.CopyFrom(config_list)
    request.config.CopyFrom(model_server_config)

    print(request.IsInitialized())
    print(request.ListFields())

    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        print("Reloaded successfully")
    else:
        print("Reload failed!")
        print(response.status.error_code)
        print(response.status.error_message)

add_model_config(host="localhost:8500",
                 name="my_model",
                 base_path="/models/my_model",
                 model_platform="tensorflow")
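Note that the second argument to HandleReloadConfigRequest (10 above) is the gRPC timeout for the call in seconds; it is not part of the request itself.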
This adds a model to the TF Serving server and to the existing config file conf_filepath: it uses the arguments name, base_path, and model_platform for the new model, and keeps the original models intact.
Note a small difference from @Karl's answer: using MergeFrom instead of CopyFrom.
# pip install tensorflow-serving-api

import grpc
from google.protobuf import text_format
from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2
from tensorflow_serving.config import model_server_config_pb2

def add_model_config(conf_filepath, host, name, base_path, model_platform):
    with open(conf_filepath, 'r+') as f:
        config_ini = f.read()
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()
    config_list = model_server_config_pb2.ModelConfigList()
    model_server_config = text_format.Parse(text=config_ini, message=model_server_config)

    # Create a config to add to the list of served models
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.MergeFrom(config_list)
    request.config.CopyFrom(model_server_config)

    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        with open(conf_filepath, 'w+') as f:
            f.write(request.config.__str__())
        print("Updated TF Serving conf file")
    else:
        print("Failed to update model_config_list!")
        print(response.status.error_code)
        print(response.status.error_message)
While the solutions mentioned here work fine, there is one more method that you can use to hot-reload your models: the --model_config_file_poll_wait_seconds flag.
As mentioned in the documentation:
Setting the --model_config_file_poll_wait_seconds flag instructs the server to periodically check for a new config file at the --model_config_file filepath.
So you just have to update the config file at the --model_config_file path, and tf-serving will load any new models and unload any models removed from the config file.
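For instance, the server launch could look something like this (a sketch; the port, config path, and the 60-second interval are placeholders to adapt to your deployment):
tensorflow_model_server --port=8500 \
    --model_config_file=/models/models.config \
    --model_config_file_poll_wait_seconds=60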
Edit 1: I looked at the source code and it seems that the flag has been present since very early versions of tf-serving, but there have been instances where some users were not able to use it (see this). So try to use the latest version if possible.
If you're using the method described in this answer, please note that you're actually launching multiple tensorflow model server instances instead of a single model server, effectively making the servers compete for resources instead of working together to optimize tail latency.