ValueError while deploying tensorflow model to Amazon SageMaker - python

I want to deploy my trained TensorFlow model to Amazon SageMaker. I am following the official guide here: https://aws.amazon.com/blogs/machine-learning/deploy-trained-keras-or-tensorflow-models-using-amazon-sagemaker/ to deploy my model from a Jupyter notebook.
But when I run the following code:
predictor = sagemaker_model.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
It gives me the following error message:
ValueError: Error hosting endpoint sagemaker-tensorflow-2019-08-07-22-57-59-547: Failed Reason: The image '520713654638.dkr.ecr.us-west-1.amazonaws.com/sagemaker-tensorflow:1.12-cpu-py3 ' does not exist.
The tutorial does not seem to mention creating an image, and I do not know what to do.
import boto3, re
from sagemaker import get_execution_role

role = get_execution_role()

# make a tarball of the model data files
import tarfile
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('export', recursive=True)

# create a new S3 bucket and upload the tarball to it
import sagemaker
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz', key_prefix='model')

from sagemaker.tensorflow.model import TensorFlowModel
sagemaker_model = TensorFlowModel(model_data='s3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
                                  role=role,
                                  framework_version='1.12',
                                  entry_point='train.py',
                                  py_version='py3')

%%time
# here I fail to deploy the model and get the error message
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')

As mentioned in this issue: https://github.com/aws/sagemaker-python-sdk/issues/912#issuecomment-510226311
Python 3 isn't supported using the TensorFlowModel object, as the container uses the TensorFlow Serving API library in conjunction with the gRPC client to handle inference; however, the TensorFlow Serving API isn't officially supported in Python 3, so there are only Python 2 versions of the containers when using the TensorFlowModel object.
If you need Python 3 then you will need to use the Model object defined in #2 above. The inference script format will change if you need to handle pre- and post-processing: https://github.com/aws/sagemaker-tensorflow-serving-container#prepost-processing.
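For reference, a minimal sketch of that alternative path, assuming the TFS-based Model class from sagemaker.tensorflow.serving (which appears to be the "#2" the issue refers to); the S3 path is the same placeholder layout used above, and the tarball must follow TF Serving conventions (SavedModel under a numbered version directory, as shown in the multi-input answer further down this page):

# Sketch only: TFS-based deployment, which has Python 3 containers.
from sagemaker import get_execution_role, Session
from sagemaker.tensorflow.serving import Model

sagemaker_session = Session()
role = get_execution_role()

# model.tar.gz must contain the SavedModel under e.g. model_folder/1/
sagemaker_model = Model(model_data='s3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
                        role=role,
                        framework_version='1.12')

predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')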

Related

Permission Denied using Google AiPlatform ModelServiceClient

I am following a guide to get a Vertex AI pipeline working:
https://codelabs.developers.google.com/vertex-pipelines-intro#5
I have implemented the following custom component:
from google.cloud import aiplatform as aip
from google.oauth2 import service_account

project = "project-id"
region = "us-central1"
display_name = "lookalike_model_pipeline_1646929843"
model_name = f"projects/{project}/locations/{region}/models/{display_name}"
api_endpoint = "us-central1-aiplatform.googleapis.com"  # europe-west2
model_resource_path = model_name
client_options = {"api_endpoint": api_endpoint}

# Initialize client that will be used to create and send requests.
client = aip.gapic.ModelServiceClient(
    credentials=service_account.Credentials.from_service_account_file('..\\service_accounts\\aiplatform_sa.json'),
    client_options=client_options)

# get model evaluation
response = client.list_model_evaluations(parent=model_name)
And I get the following error:
(<class 'google.api_core.exceptions.PermissionDenied'>, PermissionDenied("Permission 'aiplatform.modelEvaluations.list' denied on resource '//aiplatform.googleapis.com/projects/project-id/locations/us-central1/models/lookalike_model_pipeline_1646929843' (or it may not exist)."), <traceback object at 0x000002414D06B9C0>)
The model definitely exists and has finished training. I have given myself admin rights on the aiplatform service account. In the guide, they do not use a service account but only pass client_options instead. The client_options value also has the wrong type, since it is a dict(str, str) when it should be Optional['ClientOptions'], but this does not cause an error.
My main question is: how do I get around this permission issue?
My subquestions are:
How can I use my model_name variable in a URL to get to the model?
How can I create an Optional['ClientOptions'] object to pass as client_options?
Is there another way I can list_model_evaluations from a model that is in VertexAI, trained using automl?
Thanks
With the caveat from my comment that, while familiar with GCP, I'm less familiar with the AI/ML stuff: the following should work, though I don't have a model to deploy to test it.
BILLING=[[YOUR-BILLING]]

export PROJECT=[[YOUR-PROJECT]]
export LOCATION="us-central1"
export MODEL=[[YOUR-MODEL]]

ACCOUNT="tester"

gcloud projects create ${PROJECT}

gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}

# Unsure whether ML is needed
for SERVICE in "aiplatform" "ml"
do
  gcloud services enable ${SERVICE}.googleapis.com \
  --project=${PROJECT}
done

gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}

EMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com

gcloud projects add-iam-policy-binding ${PROJECT} \
--role=roles/aiplatform.admin \
--member=serviceAccount:${EMAIL}

gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}

export GOOGLE_APPLICATION_CREDENTIALS=${PWD}/${ACCOUNT}.json

python3 -m venv venv
source venv/bin/activate

python3 -m pip install google-cloud-aiplatform

python3 main.py
main.py:
import os

from google.cloud import aiplatform

project = os.getenv("PROJECT")
location = os.getenv("LOCATION")
model = os.getenv("MODEL")

aiplatform.init(
    project=project,
    location=location,
    experiment="test",
)

parent = f"projects/{project}/locations/{location}/models/{model}"

model = aiplatform.Model(parent)
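Since the original question was about listing evaluations, a hedged follow-up, assuming the high-level SDK in use exposes list_model_evaluations() on the Model object (recent google-cloud-aiplatform releases do; verify against your installed version):

# Assumption: aiplatform.Model.list_model_evaluations() is available in the
# installed google-cloud-aiplatform version.
evaluations = model.list_model_evaluations()
for evaluation in evaluations:
    print(evaluation.resource_name)  # full evaluation resource path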
I tried using your code; it did not work for me either, and I got a different error. As @DazWilkin mentioned, it is recommended to use the Cloud Client.
I used aiplatform_v1 and it worked fine. One thing I noticed is that you should always define a value for client_options so it will point to the correct endpoint. Checking the code for ModelServiceClient, if I'm not mistaken the endpoint defaults to "aiplatform.googleapis.com", which does not have a location prepended. AFAIK the endpoint should be prefixed with a location.
See code below. I used AutoML models and it returns their model evaluations.
from google.cloud import aiplatform_v1 as aiplatform
from typing import Optional

def get_model_eval(
        project_id: str,
        model_id: str,
        client_options: dict,
        location: str = 'us-central1',
        ):
    client_model = aiplatform.services.model_service.ModelServiceClient(client_options=client_options)
    model_name = f'projects/{project_id}/locations/{location}/models/{model_id}'
    list_eval_request = aiplatform.types.ListModelEvaluationsRequest(parent=model_name)
    list_eval = client_model.list_model_evaluations(request=list_eval_request)
    print(list_eval)

api_endpoint = 'us-central1-aiplatform.googleapis.com'
client_options = {"api_endpoint": api_endpoint}  # api_endpoint is required for client_options
project_id = 'project-id'
location = 'us-central1'
model_id = '99999999999'  # aiplatform_v1 uses the model_id

get_model_eval(
    client_options=client_options,
    project_id=project_id,
    location=location,
    model_id=model_id,
)
This is an output snippet from my AutoML Text Classification:

How do I load the model artifact from AWS Sagemaker built-in container?

I'm using the linear-learner container from SageMaker to train a model. The training has completed and the model artifact is saved in S3. I downloaded it; it is a .tar.gz file, and inside it is the actual model file, called model-algo-1, with no format extension. I'm trying to load this model and inspect the model coefficients, but I'm not sure how to do so.
I tried pickle and joblib but they didn't work. Does anyone know how to load the model file trained by a SageMaker built-in container? Or is there any other way I can check the model coefficients? It's a logistic regression model.
Thanks
I managed to get this working.
From the SageMaker documentation, there are two classes that can be used to load an already deployed Linear Learner
Option 1: Use sagemaker.LinearLearnerModel to build a predictor from the S3 artifact
According to the documentation, this class:
"Reference LinearLearner s3 model data"
You can use it as follows:
Step 1: Build the LinearLearner model using the artifacts hosted on S3
from sagemaker import LinearLearnerModel, get_execution_role, Session
import boto3

sess = boto3.Session(region_name=region_name)
sagemaker_session = Session(boto_session=sess)
role = get_execution_role(sagemaker_session)

model = LinearLearnerModel(model_data, role, sagemaker_session=sagemaker_session)
Step 2: Deploy the model to an endpoint (in this example it is a serverless endpoint)
from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer

my_serverless_inference_config = ServerlessInferenceConfig(memory_size_in_mb=2048, max_concurrency=1)

linear_predictor = model.deploy(endpoint_name=my_endpoint_name,  # my_endpoint_name defined elsewhere
                                serverless_inference_config=my_serverless_inference_config,
                                serializer=CSVSerializer(),
                                deserializer=JSONDeserializer())

# Alternatively, with the older (v1) SDK serializer objects:
from sagemaker.predictor import csv_serializer, json_deserializer
linear_predictor.serializer = csv_serializer
linear_predictor.deserializer = json_deserializer
Step 3: use the sagemaker.predictor object returned by the deploy function
import numpy as np

result = linear_predictor.predict(X_test)

# Iterate the result JSON to get an NP array of all the predictions so we can compare to Y test
predictions = np.array([res['score'] for res in result['predictions']])
Option 2: Use LinearLearnerPredictor to build a predictor from an already deployed endpoint
You can follow this option if your model already has a deployed endpoint.
from sagemaker import LinearLearnerPredictor, get_execution_role, Session
import boto3
sess = boto3.Session(region_name=region_name)
sagemaker_session = Session(boto_session=sess)
role = get_execution_role(sagemaker_session)
predictor = LinearLearnerPredictor(endpoint_name, sagemaker_session=sagemaker_session)
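As in Option 1, the returned predictor can then be used for inference. A minimal hedged usage sketch, assuming the endpoint accepts CSV and returns JSON as configured above, and that X_test is defined elsewhere:

# Sketch only: mirrors the serialization set up in Option 1.
import numpy as np
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer

predictor.serializer = CSVSerializer()
predictor.deserializer = JSONDeserializer()

result = predictor.predict(X_test)
predictions = np.array([res['score'] for res in result['predictions']])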

No such file or directory Error with Google Cloud Storage

I'm quite new to Google Cloud Platform and I am trying to train a model with a TPU. I followed this tutorial to set up the TPU with Google Colab. All the code below follows the tutorial.
These are the steps I have done:
import datetime
import json
import os
import pprint
import random
import string
import sys
import tensorflow as tf

assert 'COLAB_TPU_ADDR' in os.environ, 'ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!'
TPU_ADDRESS = 'grpc://' + os.environ['COLAB_TPU_ADDR']
print('TPU address is => ', TPU_ADDRESS)

from google.colab import auth
auth.authenticate_user()
with tf.Session(TPU_ADDRESS) as session:
    print('TPU devices:')
    pprint.pprint(session.list_devices())

    # Upload credentials to TPU.
    with open('/content/adc.json', 'r') as f:
        auth_info = json.load(f)
    tf.contrib.cloud.configure_gcs(session, credentials=auth_info)
    # Now credentials are set for all future sessions on this TPU.
Output:
TPU address is => grpc://10.4.89.154:8470
Provide my BUCKET name and OUTPUT DIRECTORY name:
BUCKET = 'my_xlnet' #@param {type:"string"}
assert BUCKET, '*** Must specify an existing GCS bucket name ***'

output_dir_name = 'xlnet_output' #@param {type:"string"}
BUCKET_NAME = 'gs://{}'.format(BUCKET)
OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, output_dir_name)
tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
Move the pretrained model to GCS bucket:
!gsutil mv /content/xlnet_extension_tf/model/xlnet_cased_L-24_H-1024_A-16 $BUCKET_NAME
Output:
...
Operation completed over 5 objects/1.3 GiB.
Then run the main code:
!python /content/xlnet_extension_tf/run_coqa.py \
--use_tpu=True \
--tpu_name=grpc://10.4.89.154:8470 \
--spiece_model_file=$BUCKET_NAME/xlnet_cased_L-24_H-1024_A-16/spiece.model \
--model_config_path=$BUCKET_NAME/xlnet_cased_L-24_H-1024_A-16/xlnet_config.json \
--init_checkpoint=$BUCKET_NAME/xlnet_cased_L-24_H-1024_A-16/xlnet_model.ckpt \
...
Then I got this error:
OSError: Not found: "gs://my_xlnet/xlnet_cased_L-24_H-1024_A-16/spiece.model": No such file or directory Error #2
This is the GCS bucket screen:
I don't know why this error occurs, because I can move my pretrained model to the bucket successfully.
Do you guys know how to fix this?
Update:
The run_coqa.py file: https://github.com/stevezheng23/xlnet_extension_tf/blob/master/run_coqa.py
Can you post the part where run_coqa.py is opening the file?
It seems like you're trying to open it with a regular os-level call where you should be using GCP's SDK.
This tutorial was created by a third party. I cannot see any common issue going on right now that would stop this code from running.
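For illustration, a minimal hedged sketch of the difference, assuming TF 1.x (matching the tf.gfile call used earlier); the path is the one from the error message:

import tensorflow as tf

path = "gs://my_xlnet/xlnet_cased_L-24_H-1024_A-16/spiece.model"

# A plain open() (or any other os-level call) cannot resolve gs:// paths:
# open(path, 'rb')  # -> No such file or directory

# tf.gfile understands GCS paths, provided credentials are configured:
with tf.gfile.GFile(path, 'rb') as f:
    data = f.read()
print(len(data))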

Python ML Deployment Fails on Azure Container Instance

I have the same problem as
Why does my ML model deployment in Azure Container Instance still fail?
but the above solution does not work for me. Besides, I get additional errors like below:
code": "AciDeploymentFailed",
"message": "Aci Deployment failed with exception: Your container application
crashed. This may be caused by errors in your scoring file's init()
function.\nPlease check the logs for your container instance: anomaly-detection-2.
From the AML SDK, you can run print(service.get_logs()) if you have service object
to fetch the logs. \nYou can also try to run image
mlad046a4688.azurecr.io/anomaly-detection-
2#sha256:fcbba67cf683626291c1bd084f31438fcd641ddaf80f9bdf8cea274d22d1fcb5 locally.
Please refer to http://aka.ms/debugimage#service-launch-fails for more
information.",
"details": [
{
"code": "CrashLoopBackOff",
"message": "Your container application crashed. This may be caused by errors in
your scoring file's init() function.\nPlease check the logs for your container
instance: anomaly-detection-2. From the AML SDK, you can run
print(service.get_logs()) if you have service object to fetch the logs. \nYou can
also try to run image mlad046a4688.azurecr.io/anomaly-detection-
2#sha256:fcbba67cf683626291c1bd084f31438fcd641ddaf80f9bdf8cea274d22d1fcb5 locally.
Please refer to http://aka.ms/debugimage#service-launch-fails for more
information."
}
]
}
It keeps pointing to the scoring file, but I am not sure what is wrong here:
import json
import numpy as np
import os
import pickle
import joblib
#from sklearn.externals import joblib
from sklearn.linear_model import LogisticRegression
from azureml.core.authentication import AzureCliAuthentication
from azureml.core import Model, Workspace
import logging

logging.basicConfig(level=logging.DEBUG)

def init():
    global model
    from sklearn.externals import joblib
    # retrieve the path to the model file using the model name
    model_path = Model.get_model_path(model_name='admlpkl')
    print(model_path)
    model = joblib.load(model_path)
    #ws = Workspace.from_config(auth=cli_auth)
    #logging.basicConfig(level=logging.DEBUG)
    #modeld = ws.models['admlpkl']
    #model=Model.deserialize(ws, modeld)

def run(raw_data):
    # data = np.array(json.loads(raw_data)['data'])
    # make prediction
    data = json.loads(raw_data)
    y_hat = model.predict(data)
    #r = json.dumps(y_hat.tolist())
    r = json.dumps(y_hat)  # note: y_hat may need .tolist() to be JSON-serializable
    return r
The model has a dependency on another file, which I have added in:
image_config = ContainerImage.image_configuration(execution_script="score.py",
                                                  runtime="python",
                                                  conda_file='conda_dependencies.yml',
                                                  dependencies=['modeling.py'])
The logs are too abstract and do not really help with debugging. I am able to create the image, but provisioning the service fails.
Any inputs will be appreciated
Have you registered the model 'admlpkl' in your workspace using the register() function on the model object? If not, there will be no model path and that can cause failure.
See this section on model registration: https://learn.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#registermodel
Please follow the steps below to register and deploy the model to ACI.
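A hedged sketch of what that registration and ACI deployment could look like with the azureml-core (v1) SDK, using the environment-based deployment flow rather than the older ContainerImage approach shown in the question; the workspace config, model path and service name below are placeholders:

# Sketch only: azureml-core SDK v1; paths and names are placeholders.
from azureml.core import Workspace, Model
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# Register the trained model file so Model.get_model_path() can resolve it in score.py
model = Model.register(workspace=ws,
                       model_path="outputs/admlpkl.pkl",  # local path to the serialized model
                       model_name="admlpkl")

# Environment and inference configuration (score.py is the scoring file above)
env = Environment.from_conda_specification(name="admlpkl-env",
                                           file_path="conda_dependencies.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy to Azure Container Instances
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(workspace=ws,
                       name="anomaly-detection-2",
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=aci_config)
service.wait_for_deployment(show_output=True)
print(service.get_logs())

If the deployment still crashes, print(service.get_logs()) usually shows the traceback raised inside init().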

How to use multiple inputs for custom Tensorflow model hosted by AWS Sagemaker

I have a trained Tensorflow model that uses two inputs to make predictions. I have successfully set up and deployed the model on AWS Sagemaker.
from sagemaker.tensorflow.model import TensorFlowModel

sagemaker_model = TensorFlowModel(model_data='s3://' + sagemaker_session.default_bucket()
                                             + '/R2-model/R2-model.tar.gz',
                                  role=role,
                                  framework_version='1.12',
                                  py_version='py2',
                                  entry_point='train.py')

predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')

predictor.predict([data_scaled_1.to_csv(),
                   data_scaled_2.to_csv()]
                  )
I always receive an error. I could use an AWS Lambda function, but I don't see any documentation on specifying multiple inputs to deployed models. Does anyone know how to do this?
You need to actually build a correct signature when deploying the model first.
Also, you need to deploy with TensorFlow Serving.
At inference time, you also need to provide a proper input format when making requests: basically, the SageMaker Docker server takes the request input and passes it on to TensorFlow Serving, so the input needs to match the TF Serving inputs.
Here is a simple example of deploying a Keras multi-input multi-output model in TensorFlow Serving using SageMaker, and how to make inferences afterwards:
import tarfile

from tensorflow.python.saved_model import builder
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
from tensorflow.python.saved_model import tag_constants

from keras import backend as K

import sagemaker
#nano ~/.aws/config
#get_ipython().system('nano ~/.aws/config')
from sagemaker import get_execution_role
from sagemaker.tensorflow.serving import Model


def serialize_to_tf_and_dump(model, export_path):
    """
    serialize a Keras model to a TF model
    :param model: compiled Keras model
    :param export_path: str, the export path contains the name and the version of the model
    :return:
    """
    # Build the Protocol Buffer SavedModel at 'export_path'
    save_model_builder = builder.SavedModelBuilder(export_path)
    # Create prediction signature to be used by TensorFlow Serving Predict API
    signature = predict_signature_def(
        inputs={
            "input_type_1": model.input[0],
            "input_type_2": model.input[1],
        },
        outputs={
            "decision_output_1": model.output[0],
            "decision_output_2": model.output[1],
            "decision_output_3": model.output[2]
        }
    )
    with K.get_session() as sess:
        # Save the meta graph and variables
        save_model_builder.add_meta_graph_and_variables(
            sess=sess, tags=[tag_constants.SERVING], signature_def_map={"serving_default": signature})
        save_model_builder.save()

# instantiate model
model = ....

# convert to tf model
serialize_to_tf_and_dump(model, 'model_folder/1')

# tar the tf model
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('model_folder', recursive=True)

# upload it to s3
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz')

# convert to sagemaker model
role = get_execution_role()
sagemaker_model = Model(model_data=inputs,
                        name='DummyModel',
                        role=role,
                        framework_version='1.12')

predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.t2.medium',
                                   endpoint_name='MultiInputMultiOutputModel')
At inference time, here is how to request predictions:
import json
import boto3

x_inputs = ...  # list with 2 np arrays of size (batch_size, ...)
data = {
    'inputs': {
        "input_type_1": x_inputs[0].tolist(),
        "input_type_2": x_inputs[1].tolist()
    }
}

endpoint_name = 'MultiInputMultiOutputModel'
client = boto3.client('runtime.sagemaker')
response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(data), ContentType='application/json')
predictions = json.loads(response['Body'].read())
You likely need to customize the inference functions loaded in the endpoints. In the SageMaker TF SDK doc here you can find that there are two options for SageMaker TensorFlow deployment:
1. Python endpoint (the default): check whether modifying the input_fn can accommodate your inference scheme (see the sketch below)
2. TF Serving endpoint
You can diagnose errors in CloudWatch (accessible through the SageMaker endpoint UI), choose the most appropriate serving architecture among the two mentioned above, and customize the inference functions if need be.
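A rough, hedged sketch of what such an input_fn override in the entry-point script might look like; the hook name and signature follow the legacy SageMaker TensorFlow container docs, and input_type_1 / input_type_2 are just the example names from the answer above, so verify both against your container version:

import json

# Illustrative only: deserializes a JSON request body into the dict of named
# inputs expected by the model signature.
def input_fn(serialized_data, content_type):
    if content_type == 'application/json':
        data = json.loads(serialized_data)
        return {
            "input_type_1": data["input_type_1"],
            "input_type_2": data["input_type_2"],
        }
    raise ValueError("Unsupported content type: {}".format(content_type))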
Only the TF serving endpoint supports multiple inputs in one inference request. You can follow the documentation here to deploy a TFS endpoint -
https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst
