pass extra file argument to azureml inference config class - python

Currently I am creating inf_conf from an entry script (score.py) and an environment; however, I have a JSON file that I also want to include in this.
Is there a way I can do this?
I have seen the source_directory argument, but the JSON file is not in the same folder as the score.py file. https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py
inf_conf = InferenceConfig(entry_script="score.py",environment=environment)

Currently, it is required that all the necessary files and objects related to the endpoint be placed in the source_directory:
inference_config = InferenceConfig(
    environment=env,
    source_directory='./endpoint_source',
    entry_script="./score.py",
)
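If the JSON file currently lives outside that folder, one simple option (a sketch; the paths below are hypothetical) is to copy it into source_directory before building the InferenceConfig above, so that it gets packaged with the deployment:
import shutil

# hypothetical location of the JSON file outside the endpoint source folder
shutil.copy('../configs/my-configs.json', './endpoint_source/my-configs.json')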
One workaround is to upload your JSON file somewhere else, e.g., to Blob Storage, and download it in the init() function of your entry script. For example:
score.py:
import requests


def init():
    """
    This function is called when the container is initialized/started,
    typically after create/update of the deployment.
    """
    global model
    # things related to initializing the model
    model = ...

    # download your JSON file
    json_file_url = 'https://sampleazurestorage.blob.core.windows.net/data/my-configs.json'
    response = requests.get(json_file_url)
    with open('my_configs.json', 'wb') as f:
        f.write(response.content)

Related

Name error when trying to import API key from .env

I am trying to store my API keys in a .env file.
I created the file as a "File containing settings for editor" file type and stored my API keys:
TWILIO_ACCOUNT_SID=***
TWILIO_AUTH_TOKEN=***
TWIML_APPLICATION_SID=***
TWILIO_API_KEY=***
TWILIO_API_SECRET=***
I installed decouple, then imported and used config to retrieve my API tokens in my settings.py file:
from decouple import config
...
TWILIO_ACCOUNT_SID = config(TWILIO_ACCOUNT_SID)
TWILIO_AUTH_TOKEN = config(TWILIO_AUTH_TOKEN)
TWIML_APPLICATION_SID = config(TWIML_APPLICATION_SID)
TWILIO_API_KEY = config(TWILIO_API_KEY)
TWILIO_API_SECRET = config(TWILIO_API_SECRET)
However, I am getting the error message:
TWILIO_ACCOUNT_SID = config(TWILIO_ACCOUNT_SID)
NameError: name 'TWILIO_ACCOUNT_SID' is not defined
You don't need to use the decouple library to read your environment variables.
First, download the .env plug-in support for PyCharm (if that's what you're using):
https://www.codestudyblog.com/cs2112pyc/1224021812.html
This will allow you to set and get variables from your file. Make sure your configuration has the correct .env file set.
My .env file has the variable set to:
TWILIO_ACCOUNT_SID=SUPER SECRET KEY
Then all you need to do is:
import os
twilio_key = os.environ.get('TWILIO_ACCOUNT_SID')
print(twilio_key)
>>>SUPER SECRET KEY
Process finished with exit code 0
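As a side note, the NameError in the question comes from passing bare (undefined) Python names to config(); python-decouple expects the key as a string. So if you do prefer to keep decouple, quoting the keys should also work (a sketch):
from decouple import config

TWILIO_ACCOUNT_SID = config('TWILIO_ACCOUNT_SID')
TWILIO_AUTH_TOKEN = config('TWILIO_AUTH_TOKEN')
TWIML_APPLICATION_SID = config('TWIML_APPLICATION_SID')
TWILIO_API_KEY = config('TWILIO_API_KEY')
TWILIO_API_SECRET = config('TWILIO_API_SECRET')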

TensorFlow Serving: Update model_config (add additional models) at runtime

I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the model_config and trigger the server to reload it.
This functionality appears to exist (based on https://github.com/tensorflow/serving/pull/885 and https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself).
So it took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question as me, here is an example of how to do this. (You'll need the tensorflow_serving package for this; pip install tensorflow-serving-api).
Based on this pull request (which at the time of writing hadn't been accepted and was closed since it needed review): https://github.com/tensorflow/serving/pull/1065
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
import grpc


def add_model_config(host, name, base_path, model_platform):
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()

    # Create a config to add to the list of served models
    config_list = model_server_config_pb2.ModelConfigList()
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.CopyFrom(config_list)
    request.config.CopyFrom(model_server_config)

    print(request.IsInitialized())
    print(request.ListFields())

    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        print("Reloaded successfully")
    else:
        print("Reload failed!")
        print(response.status.error_code)
        print(response.status.error_message)


add_model_config(host="localhost:8500",
                 name="my_model",
                 base_path="/models/my_model",
                 model_platform="tensorflow")
This adds a model to the TF Serving server and to the existing config file conf_filepath, using the arguments name, base_path, and model_platform for the new model, while keeping the original models intact.
Notice a small difference from Karl's answer above: it uses MergeFrom instead of CopyFrom.
pip install tensorflow-serving-api
import grpc
from google.protobuf import text_format
from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2
from tensorflow_serving.config import model_server_config_pb2


def add_model_config(conf_filepath, host, name, base_path, model_platform):
    with open(conf_filepath, 'r+') as f:
        config_ini = f.read()

    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()
    config_list = model_server_config_pb2.ModelConfigList()
    model_server_config = text_format.Parse(text=config_ini, message=model_server_config)

    # Create a config to add to the list of served models
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.MergeFrom(config_list)
    request.config.CopyFrom(model_server_config)

    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        with open(conf_filepath, 'w+') as f:
            f.write(request.config.__str__())
        print("Updated TF Serving conf file")
    else:
        print("Failed to update model_config_list!")
        print(response.status.error_code)
        print(response.status.error_message)
While the solutions mentioned here work fine, there is one more method you can use to hot-reload your models: the --model_config_file_poll_wait_seconds flag.
As mentioned in the documentation:
Setting the --model_config_file_poll_wait_seconds flag instructs the server to periodically check for a new config file at the --model_config_file filepath.
So, you just have to update the config file at the --model_config_file path, and tf-serving will load any new models and unload any models removed from the config file.
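For example, a server started along these lines (a sketch; the port, paths, and interval are placeholders) would re-read the config file every 60 seconds:
tensorflow_model_server \
    --port=8500 \
    --model_config_file=/models/models.config \
    --model_config_file_poll_wait_seconds=60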
Edit 1: I looked at the source code, and it seems that the flag has been present since very early versions of tf-serving, but there have been instances where some users were not able to use it (see this). So, try to use the latest version if possible.
If you're using the method described in this answer, please note that you're actually launching multiple tensorflow model server instances instead of a single model server, effectively making the servers compete for resources instead of working together to optimize tail latency.

How to make secure connection with python?

I'm using Python 3. I need to use certificate files to make a secure connection.
In this case, I used the HTTPSConnection class from http.client.
This class takes the paths of the cert files and uses them, like this:
import http.client
client = http.client.HTTPSConnection(
    "epp.nic.ir", key_file="filepath\\nic.pem", cert_file="filepath\\nic.crt")
As you can see, this class takes the paths of the files and works correctly.
But I need to provide the contents of these files instead, because I want to put the contents of the .crt and .pem files into a DB; the reason is that the file paths may change.
So I tried this:
import http.client
import base64
cert = b'''
content of cert file
'''
pem = b'''
content of pem file
'''
client = http.client.HTTPSConnection("epp.nic.ir", pem, cert)
As expected, I got this error:
TypeError: certfile should be a valid filesystem path
Is there any way to make this class accept the contents of the files instead of file paths?
Or is it possible to make changes in the source code of http for this purpose?
It is possible to modify the Python source code, but it is not the recommended way, as it brings portability, maintainability, and other issues.
If you want to update your Python version, you have to reapply your modification each time you update it.
If you want to run your code on another machine, you have the same problem again.
Instead of modifying the source code, there is a better and preferable way: extending the API.
You can subclass the existing HTTPSConnection class and override its constructor method with your own implementation.
There are plenty of ways to achieve what you need.
Here is a possible solution with subclassing:
import http.client
import os
import tempfile


class MyHTTPSConnection(http.client.HTTPSConnection):
    """HTTPSConnection with key and cert passed as contents rather than file names"""

    def __init__(self, host, key_content=None, cert_content=None, **kwargs):
        # the additional parameters are optional, so that this class can be used
        # with or without cert/key contents as a replacement for the standard
        # HTTPSConnection
        self.key_file = None
        self.cert_file = None

        # here we write the content of the key into a temporary file;
        # delete=False keeps the file in the file system,
        # but this time we need to remove it manually when we are done
        if key_content:
            self.key_file = tempfile.NamedTemporaryFile(delete=False)
            self.key_file.write(key_content)
            self.key_file.close()
            # the NamedTemporaryFile object provides a 'name' attribute,
            # which is a valid file name in the file system,
            # so we can use those file names to initiate the actual HTTPSConnection
            kwargs['key_file'] = self.key_file.name

        # same as above, but this time for the cert content and cert file
        if cert_content:
            self.cert_file = tempfile.NamedTemporaryFile(delete=False)
            self.cert_file.write(cert_content)
            self.cert_file.close()
            kwargs['cert_file'] = self.cert_file.name

        # initialize the super class with the host and keyword arguments
        super().__init__(host, **kwargs)

    def clean(self):
        # remove the temp files from the file system;
        # you need to decide when to call this method
        if self.cert_file:
            os.unlink(self.cert_file.name)
        if self.key_file:
            os.unlink(self.key_file.name)


host = "epp.nic.ir"
key_content = b'''content of key file'''
cert_content = b'''content of cert file'''

client = MyHTTPSConnection(host, key_content=key_content, cert_content=cert_content)
# ...
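A possible follow-up on usage (just a sketch; the request itself is a placeholder): once the connection is no longer needed, call clean() to remove the temporary files:
try:
    client.request("GET", "/")
    response = client.getresponse()
    print(response.status, response.reason)
finally:
    client.close()
    client.clean()  # removes the temporary key/cert files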

How to pass variable values to another config parameters?

[plugin_jira]
maxuser=
finduser=
endpoint="https://nepallink.atlassian.net/rest/api/latest/user/search?startAt=0&maxResults={maxi}&username={manche}%".format(maxi=maxresult,manche=user_find)
This is my config file. In the endpoint item I am using format() so that I can pass variables to it from my script. The script where it is used is below (my main.py):
maxresult = config.get('plugin_jira', 'maxuser')
user_find = config.get('plugin_jira', 'finduser')
endpoint = config.get('plugin_jira', 'endpoint')
What I am confused about is that when I read the endpoint value in the script, it just fetches what is in the config, without the variable values that got defined just above it.
How can I make the variable values of maxresult and user_find get added to the endpoint?
In your config file, import the variables from main.py as follows.
Config file:
from main import maxresult, user_find
[plugin_jira]
endpoint="https://nepallink.atlassian.net/rest/api/latest/user/search?startAt=0&maxResults={maxi}&username={manche}%".format(maxi=maxresult,manche=user_find)
This will allow you to access the required variables to your endpoints.
Hope it helps!
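Another option worth noting (a sketch, assuming a standard configparser setup; the file name plugin.conf is hypothetical): keep plain {maxi}/{manche} placeholders in the config value and call .format() in main.py after reading the other two values:
import configparser

# Hypothetical config file (plugin.conf):
#   [plugin_jira]
#   maxuser = 50
#   finduser = john
#   endpoint = https://nepallink.atlassian.net/rest/api/latest/user/search?startAt=0&maxResults={maxi}&username={manche}

config = configparser.ConfigParser(interpolation=None)  # avoid '%' interpolation issues
config.read('plugin.conf')

maxresult = config.get('plugin_jira', 'maxuser')
user_find = config.get('plugin_jira', 'finduser')
endpoint = config.get('plugin_jira', 'endpoint').format(maxi=maxresult, manche=user_find)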

Nose Tests - File Uploads

How would one go about testing a Pylons controller (using Nose Tests) that takes a file upload as a POST parameter?
Like this:
class TestUploadController(TestController):
    # ....
    def test_upload_files(self):
        """ Check that upload of a text file works. """
        files = [("Filedata", "filename.txt", "contents of the file")]
        res = self.app.post("/my/upload/path", upload_files=files)
Uploading a file usually requires an authenticated user, so you may also need to pass the "extra_environ" argument to self.app.post() to work around that.
See the paste.fixture documentation for details on the arguments accepted by self.app.post().
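For instance, a minimal sketch inside the same test class (the REMOTE_USER value is just a placeholder for whatever your auth setup expects):
files = [("Filedata", "filename.txt", "contents of the file")]
res = self.app.post(
    "/my/upload/path",
    upload_files=files,
    extra_environ={"REMOTE_USER": "testuser"},  # simulate an authenticated user
)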
