TensorFlow Serving: Update model_config (add additional models) at runtime - python

I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the model_config and trigger the server to reload it.
This functionality appears to exist (based on https://github.com/tensorflow/serving/pull/885 and https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself).

So it took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question as me, here is an example of how to do this. (You'll need the tensorflow_serving package for this; pip install tensorflow-serving-api).
Based on this pull request (which at the time of writing hadn't been accepted and was closed since it needed review): https://github.com/tensorflow/serving/pull/1065
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
import grpc

def add_model_config(host, name, base_path, model_platform):
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()

    # Create a config to add to the list of served models
    config_list = model_server_config_pb2.ModelConfigList()
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.CopyFrom(config_list)
    request.config.CopyFrom(model_server_config)

    print(request.IsInitialized())
    print(request.ListFields())

    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        print("Reloaded successfully")
    else:
        print("Reload failed!")
        print(response.status.error_code)
        print(response.status.error_message)

add_model_config(host="localhost:8500",
                 name="my_model",
                 base_path="/models/my_model",
                 model_platform="tensorflow")

Adds a model to the TF Serving server and to the existing config file conf_filepath, using the arguments name, base_path, and model_platform for the new model, and keeps the original models intact.
Note a small difference from Karl's answer: using MergeFrom instead of CopyFrom, so the new model is appended to the models already in the config file rather than replacing them (the reload request describes the full desired set of models, and any model missing from it gets unloaded).
pip install tensorflow-serving-api
import grpc
from google.protobuf import text_format
from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2
from tensorflow_serving.config import model_server_config_pb2

def add_model_config(conf_filepath, host, name, base_path, model_platform):
    with open(conf_filepath, 'r+') as f:
        config_ini = f.read()

    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()
    config_list = model_server_config_pb2.ModelConfigList()
    model_server_config = text_format.Parse(text=config_ini, message=model_server_config)

    # Create a config to add to the list of served models
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.MergeFrom(config_list)
    request.config.CopyFrom(model_server_config)

    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        with open(conf_filepath, 'w+') as f:
            f.write(str(request.config))
        print("Updated TF Serving conf file")
    else:
        print("Failed to update model_config_list!")
        print(response.status.error_code)
        print(response.status.error_message)
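For completeness, a call to this version might look like the following (the host and paths here are illustrative, mirroring the first answer):

add_model_config(conf_filepath="/models/models.config",
                 host="localhost:8500",
                 name="my_model_2",
                 base_path="/models/my_model_2",
                 model_platform="tensorflow")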

While the solutions mentioned here work fine, there is one more method that you can use to hot-reload your models: the --model_config_file_poll_wait_seconds flag.
As mentioned in the documentation:
Set the --model_config_file_poll_wait_seconds flag to instruct the server to periodically check for a new config file at the --model_config_file filepath.
So you just have to update the config file at the --model_config_file path, and tf-serving will load any new models and unload any models removed from the config file, as the sketch below illustrates.
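A minimal sketch of that, assuming the server was already launched with the polling flag (the paths, port, and model name below are illustrative, not from the answer); the file is edited on disk in the same protobuf text format the server reads, and the server picks the change up on its next poll:

# Sketch: append a model to the on-disk config that the server polls.
# Assumes the server was launched with something like:
#   tensorflow_model_server --port=8500 \
#       --model_config_file=/models/models.config \
#       --model_config_file_poll_wait_seconds=60
from google.protobuf import text_format
from tensorflow_serving.config import model_server_config_pb2

def append_model_to_config(conf_filepath, name, base_path, model_platform="tensorflow"):
    config = model_server_config_pb2.ModelServerConfig()
    with open(conf_filepath, 'r') as f:
        text_format.Parse(f.read(), config)
    new_model = config.model_config_list.config.add()
    new_model.name = name
    new_model.base_path = base_path
    new_model.model_platform = model_platform
    with open(conf_filepath, 'w') as f:
        f.write(text_format.MessageToString(config))

append_model_to_config("/models/models.config", "my_model", "/models/my_model")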
Edit 1: I looked at the source code and it seems that the flag has been present since a very early version of tf-serving, but there have been instances where some users were not able to use it (see this). So, try to use the latest version if possible.

If you're using the method described in this answer, please note that you're actually launching multiple tensorflow model server instances instead of a single model server, effectively making the servers compete for resources instead of working together to optimize tail latency.

Related

pass extra file argument to azureml inference config class

Currently I'm creating inf_conf from an entry script (score.py) and an environment; however, I have a JSON file that I also want to include in this.
Is there a way I can do this?
I have seen the source_directory argument, but the JSON file is not in the same folder as the score.py file. https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py
inf_conf = InferenceConfig(entry_script="score.py", environment=environment)
Currently, it is required that all the necessary files and objects related to the endpoint be placed in the source_directory:
inference_config = InferenceConfig(
    environment=env,
    source_directory='./endpoint_source',
    entry_script="./score.py",
)
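If you can move the JSON file into the source directory, a minimal sketch of reading it from the entry script looks like this (the file and variable names are illustrative):

# score.py, assuming my_configs.json was placed inside ./endpoint_source
import json
import os

def init():
    global configs
    # files from source_directory are deployed alongside the entry script
    base_dir = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(base_dir, 'my_configs.json')) as f:
        configs = json.load(f)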
One workaround is to upload your JSON file somewhere else, e.g. to Blob Storage, and download it in the init() function of your entry script. For example:
score.py:
import requests

def init():
    """
    This function is called when the container is initialized/started,
    typically after create/update of the deployment.
    """
    global model
    # things related to initializing the model
    model = ...

    # download your JSON file
    json_file_url = 'https://sampleazurestorage.blob.core.windows.net/data/my-configs.json'
    response = requests.get(json_file_url)
    with open('my_configs.json', 'wb') as f:
        f.write(response.content)

Extract Keyfile JSON from saved connection of type "google_cloud_platform"

I have saved a connection of type "google_cloud_platform" in Airflow as described here https://cloud.google.com/composer/docs/how-to/managing/connections
Now, in my DAG, I need to extract the Keyfile JSON from the saved connection.
What is the correct hook to be used?
Use airflow.contrib.hooks.gcp_api_base_hook.GoogleCloudBaseHook to get the stored connection. For example:
from airflow.contrib.hooks.gcp_api_base_hook import GoogleCloudBaseHook
gcp_hook = GoogleCloudBaseHook(gcp_conn_id="<your-conn-id>")
keyfile_dict = gcp_hook._get_field('keyfile_dict')
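The field typically comes back as the raw JSON string; a hedged sketch of turning it into usable credentials (this assumes the google-auth package is installed):

import json
from google.oauth2 import service_account

keyfile = gcp_hook._get_field('keyfile_dict')
if isinstance(keyfile, str):
    keyfile = json.loads(keyfile)  # stored as a JSON string on the connection
credentials = service_account.Credentials.from_service_account_info(keyfile)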
You can just use BaseHook as follows:
from airflow.hooks.base_hook import BaseHook

GCP_CONNECTION_ID = "my-gcp-connection"
BaseHook.get_connection(GCP_CONNECTION_ID).extra_dejson["extra__google_cloud_platform__keyfile_dict"]
The other solutions no longer work. Here's a way that's working in 2023:
from airflow.models import Connection

conn = Connection.get_connection_from_secrets(conn_id='my-gcp-connection')
json_key = conn.extra_dejson['keyfile_dict']
with open('gcp_svc_acc.json', 'w') as f:
    f.write(json_key)
Mostly because the imports moved around I think.
The question specifically refers to keyfile JSON, but this is a quick addendum for those who configured keyfile path instead: take care to check if it's keyfile_dict OR keyfile_path that the Airflow admin configured, as they're two different ways to set up the connection.
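A small sketch of that check, building on the Connection object from the answer above (these field names are the ones Airflow's GCP connection form uses; older connections may carry the extra__google_cloud_platform__ prefix instead):

extras = conn.extra_dejson
if extras.get('keyfile_dict'):
    json_key = extras['keyfile_dict']            # inline JSON on the connection
elif extras.get('keyfile_path'):
    with open(extras['keyfile_path']) as f:      # path to a file on the worker
        json_key = f.read()
else:
    raise ValueError('Connection defines neither keyfile_dict nor keyfile_path')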

FileUploadMiscError while persisting output file from Azure Batch

I'm facing the following error while trying to persist log files to Azure Blob storage from an Azure Batch execution: "FileUploadMiscError - A miscellaneous error was encountered while uploading one of the output files". This error doesn't give much information about what might be going wrong. I tried checking the Microsoft documentation, but it doesn't mention this particular error code.
Below is the relevant code for adding the task to Azure Batch that I have ported from C# to Python for persisting the log files.
Note: The container that I have configured gets created when the task is added, but there's no blob inside.
import datetime
import logging
import os

import azure.storage.blob.models as blob_model
import yaml
from azure.batch import models
from azure.storage.blob.baseblobservice import BaseBlobService
from azure.storage.common.cloudstorageaccount import CloudStorageAccount
from dotenv import load_dotenv

LOG = logging.getLogger(__name__)

def add_tasks(batch_client, job_id, task_id, io_details, blob_details):
    task_commands = "This is a placeholder. Actual code has an actual task. This gets completed successfully."

    LOG.info("Configuring the blob storage details")
    base_blob_service = BaseBlobService(
        account_name=blob_details['account_name'],
        account_key=blob_details['account_key'])
    LOG.info("Base blob service created")

    base_blob_service.create_container(
        container_name=blob_details['container_name'], fail_on_exist=False)
    LOG.info("Container present")

    container_sas = base_blob_service.generate_container_shared_access_signature(
        container_name=blob_details['container_name'],
        permission=blob_model.ContainerPermissions(write=True),
        expiry=datetime.datetime.now() + datetime.timedelta(days=1))
    LOG.info(f"Container SAS created: {container_sas}")

    container_url = base_blob_service.make_container_url(
        container_name=blob_details['container_name'], sas_token=container_sas)
    LOG.info(f"Container URL created: {container_url}")

    # fpath = task_id + '/output.txt'
    fpath = task_id

    LOG.info("Creating output file object:")
    out_files_list = list()
    out_files = models.OutputFile(
        file_pattern=r"../stderr.txt",
        destination=models.OutputFileDestination(
            container=models.OutputFileBlobContainerDestination(
                container_url=container_url, path=fpath)),
        upload_options=models.OutputFileUploadOptions(
            upload_condition=models.OutputFileUploadCondition.task_completion))
    out_files_list.append(out_files)
    LOG.info(f"Output files: {out_files_list}")

    LOG.info(f"Creating the task now: {task_id}")
    task = models.TaskAddParameter(
        id=task_id, command_line=task_commands, output_files=out_files_list)
    batch_client.task.add(job_id=job_id, task=task)
    LOG.info(f"Added task: {task_id}")
There is a bug in Batch's OutputFile handling which causes it to fail to upload to containers if the full container URL includes any query-string parameters other than the ones included in the SAS token. Unfortunately, the azure-storage-blob Python module includes an extra query string parameter when generating the URL via make_container_url.
This issue was just raised to us, and a fix will be released in the coming weeks, but an easy workaround is, instead of using make_container_url to craft the URL, to craft it yourself like so: container_url = 'https://{}/{}?{}'.format(blob_service.primary_endpoint, blob_details['container_name'], container_sas).
The resulting URL should look something like this: https://<account>.blob.core.windows.net/<container>?se=2019-01-12T01%3A34%3A05Z&sp=w&sv=2018-03-28&sr=c&sig=<sig> - specifically, it shouldn't have restype=container in it (which is what the azure-storage-blob package is including).
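In the context of the question's add_tasks, the workaround would replace the make_container_url call with something like this (a sketch using the question's variable names; primary_endpoint comes from the BaseBlobService instance):

# Instead of:
#   container_url = base_blob_service.make_container_url(...)
# craft the URL by hand so it carries only the SAS query parameters:
container_url = 'https://{}/{}?{}'.format(
    base_blob_service.primary_endpoint,
    blob_details['container_name'],
    container_sas)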

How to retrieve a listening history object by spotipy?

I'm working on a recommendation system for Spotify and I'm using spotipy in Python. I can't use the function current_user_recently_played, because Python says that the attribute current_user_recently_played isn't valid.
I don't know how to solve this problem, and I absolutely need this information to continue with my work.
This is my code:
import spotipy
import spotipy.util as util
import json

def current_user_recently_played(self, limit=50):
    return self._get('me/player/recently-played', limit=limit)

token = util.prompt_for_user_token(
    username="212887#studenti.unimore.it",
    scope="user-read-recently-played user-read-private user-top-read user-read-currently-playing",
    client_id="xxxxxxxxxxxxxxxxxxxxxx",
    client_secret="xxxxxxxxxxxxxxxxxxxxxx",
    redirect_uri="https://www.google.it/")

spotify = spotipy.Spotify(auth=token)
canzonirecenti = spotify.current_user_recently_played(limit=50)

out_file = open("canzonirecenti.json", "w")
out_file.write(json.dumps(canzonirecenti, sort_keys=True, indent=2))
out_file.close()

print json.dumps(canzonirecenti, sort_keys=True, indent=2)
and the response is:
AttributeError: 'Spotify' object has no attribute 'current_user_recently_played'
The method current_user_recently_played exists in the source code on GitHub, but I don't seem to have it in my local installation. I think the version on the Python Package Index is out of date: the last change to the source code was 8 months ago, while the last change to the PyPI version was over a year ago.
I've gotten the code example to work by patching the Spotify client object to add the method myself, but this generally isn't the best way to do it, as it adds custom behaviour to a particular instance rather than to the class as a whole.
import spotipy
import spotipy.util as util
import json
import types

def current_user_recently_played(self, limit=50):
    return self._get('me/player/recently-played', limit=limit)

token = util.prompt_for_user_token(
    username="xxxxxxxxxxxxxx",
    scope="user-read-recently-played user-read-private user-top-read user-read-currently-playing",
    client_id="xxxxxxxxxxxxxxxxxxxxxx",
    client_secret="xxxxxxxxxxxxxxxxxxxxxxxx",
    redirect_uri="https://www.google.it/")

spotify = spotipy.Spotify(auth=token)
spotify.current_user_recently_played = types.MethodType(current_user_recently_played, spotify)

canzonirecenti = spotify.current_user_recently_played(limit=50)

out_file = open("canzonirecenti.json", "w")
out_file.write(json.dumps(canzonirecenti, sort_keys=True, indent=2))
out_file.close()

print(json.dumps(canzonirecenti, sort_keys=True, indent=2))
More correct ways of getting it to work are:
- installing it from the source on GitHub, instead of through pip
- poking Plamere to request he update the version on PyPI
- subclassing the Spotify client class and adding the missing methods to the subclass (probably the quickest and simplest)
Here's a partial snippet of the way I've subclassed it in my own project:
class SpotifyConnection(spotipy.Spotify):
    """Modified version of the spotipy.Spotify class

    Main changes are:
    - implementing additional API endpoints (currently_playing, recently_played)
    - updating the main internal call method to update the session and retry once on error,
      due to an issue experienced when performing actions which require an extended time
      connected.
    """

    def __init__(self, client_credentials_manager, auth=None, requests_session=True, proxies=None,
                 requests_timeout=None):
        super().__init__(auth, requests_session, client_credentials_manager, proxies, requests_timeout)

    def currently_playing(self):
        """Gets whatever the authenticated user is currently listening to"""
        return self._get("me/player/currently-playing")

    def recently_played(self, limit=50):
        """Gets the last 50 songs the user has played

        This doesn't include whatever the user is currently listening to, and no more than the
        last 50 songs are available.
        """
        return self._get("me/player/recently-played", limit=limit)

    # <...more stuff>
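Usage of the subclass might look like this (a sketch: the token is obtained as in the question, and client_credentials_manager is unused when authenticating with a token):

spotify = SpotifyConnection(client_credentials_manager=None, auth=token)
now_playing = spotify.currently_playing()
recent = spotify.recently_played(limit=50)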

Link generator using django or any python module

I want to generate temporary download links for my users.
Is it OK if I use Django to generate the links using URL patterns?
Would that be the correct way to do it? I ask because there may be parts of the process I don't understand, and it could overflow my memory or cause other problems. Some kind of example or tools would be appreciated. Some nginx or Apache modules, probably?
So, what I want to achieve is a URL pattern which depends on user and time: decrypt it and, in the view, return a file.
A simple scheme might be to use a hash digest of username and timestamp:

from datetime import datetime
from hashlib import sha1

user = 'bob'
time = datetime.now().isoformat()
plain = user + '\0' + time
token = sha1(plain.encode())
print(token.hexdigest())
# '1e2c5078bd0de12a79d1a49255a9bff9737aa4a4'
Next you store that token in a memcache with an expiration time. This way any of your webservers can reach it and the token will auto-expire. Finally add a Django url handler for '^download/.+' where the controller just looks up that token in the memcache to determine if the token is valid. You can even store the filename to be downloaded as the token's value in memcache.
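A minimal sketch of that scheme using Django's cache framework in place of a raw memcache client (make_download_token, TOKEN_TTL, and the URL route are illustrative names, not part of the answer):

from datetime import datetime
from hashlib import sha1

from django.core.cache import cache
from django.http import FileResponse, Http404

TOKEN_TTL = 3600  # seconds until the link expires

def make_download_token(username, filename):
    plain = username + '\0' + datetime.now().isoformat()
    token = sha1(plain.encode()).hexdigest()
    cache.set(token, filename, timeout=TOKEN_TTL)  # entry auto-expires
    return token

# urls.py would route '^download/.+' here, e.g. path('download/<str:token>/', download)
def download(request, token):
    filename = cache.get(token)
    if filename is None:  # unknown or expired token
        raise Http404
    return FileResponse(open(filename, 'rb'))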
Yes, it would be OK to have Django generate the URLs, but that is separate from handling them in urls.py. Typically you don't want Django to handle the serving of files (see the static file docs[1] about this), so get the notion of using URL patterns out of your head.
What you might want to do is generate a random key using a hash like md5/sha1. Store the file, the key, and the datetime it was added in the database, and create the download directory under a root directory that's servable from your webserver, like Apache or nginx (I suggest nginx). Since the link is temporary, you'll want to add a cron job that checks whether the time since the URL was generated has expired, cleans up the file, and removes the db entry. This should be a Django command for manage.py.
Please note this example code was written just for this and not tested! It may not work the way you were planning on achieving this goal, but it works. If you want the download to be password protected as well, then look into HTTP basic auth. You can generate and remove entries on the fly in an httpd.auth file using htpasswd and the subprocess module when you create the link, or at registration time.
import hashlib, random, datetime, os, shutil

# model to hold link info. has these fields: key (charfield), filepath (filepathfield),
# date (datetimefield), url (charfield), orgpath (filepathfield of the original path,
# or a foreignkey to the files model).
from models import MyDlLink
# settings.py for the app
from myapp import settings as myapp_settings

# full path and name of file to dl.
def genUrl(filepath):
    # create a one-time salt for randomness
    salt = ''.join(['{0}'.format(random.randrange(10)) for i in range(10)])
    key = hashlib.sha1('{0}{1}'.format(salt, filepath).encode()).hexdigest()
    newpath = os.path.join(myapp_settings.DL_ROOT, key)
    shutil.copy2(filepath, newpath)
    newlink = MyDlLink()
    newlink.key = key
    newlink.date = datetime.datetime.now()
    newlink.orgpath = filepath
    newlink.newpath = newpath
    newlink.url = "{0}/{1}/{2}".format(myapp_settings.DL_URL, key, os.path.basename(filepath))
    newlink.save()
    return newlink

# in commands
def check_url_expired():
    maxage = datetime.timedelta(days=7)
    now = datetime.datetime.now()
    for link in MyDlLink.objects.all():
        if (now - link.date) > maxage:
            os.remove(link.newpath)
            link.delete()
[1] http://docs.djangoproject.com/en/1.2/howto/static-files/
It sounds like you are suggesting using some kind of dynamic url conf.
Why not forget your concerns by simplifying and setting up a single url that captures a large encoded string that depends on user/time?
(r'^download/(?P<encrypted_id>.*)/$', 'download_file'),  # use your own regexp

def download_file(request, encrypted_id):
    decrypted = decrypt(encrypted_id)
    _file = get_file(decrypted)
    return _file
A lot of sites just use a get param too.
www.example.com/download_file/?09248903483o8a908423028a0df8032
If you are concerned about performance, look at the answers in this post: Having Django serve downloadable files, where the use of the Apache X-Sendfile module is highlighted.
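A minimal sketch of the X-Sendfile idea (this assumes Apache with mod_xsendfile enabled; decrypt is the hypothetical helper from the snippet above):

import os
from django.http import HttpResponse

def download_file(request, encrypted_id):
    filepath = decrypt(encrypted_id)  # hypothetical: recover the filesystem path
    response = HttpResponse()
    # Django sends only headers; Apache streams the file itself
    response['X-Sendfile'] = filepath
    response['Content-Disposition'] = 'attachment; filename="{0}"'.format(os.path.basename(filepath))
    return response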
Another alternative is to simply redirect to the static file, served by whatever means, from Django.
