How to deploy an XGBoost model on Amazon SageMaker? - python

Is there a way to deploy an XGBoost model trained locally using Amazon SageMaker? I have only seen tutorials that cover both training and deploying the model with Amazon SageMaker.
Thanks.

This example notebook is a good starting point showing how to bring a pre-existing scikit-learn/XGBoost model into Amazon SageMaker and create a hosted endpoint for that model.
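As a rough sketch of the flow (the bucket name, role ARN, entry point, and framework version below are placeholders, not values from the answer): you package the locally trained model into a model.tar.gz archive, upload it to S3, and point a SageMaker model object at it.

```python
import pickle
import tarfile

def package_model(model, archive_path="model.tar.gz"):
    """Serialize a locally trained model and wrap it in the
    model.tar.gz layout that SageMaker hosting expects.
    `model` stands in for your trained XGBoost Booster."""
    with open("model.pkl", "wb") as f:
        pickle.dump(model, f)
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add("model.pkl")
    return archive_path

# After uploading model.tar.gz to S3, a hedged deploy sketch
# (all names and versions below are illustrative):
#
# from sagemaker.xgboost import XGBoostModel
# xgb_model = XGBoostModel(
#     model_data="s3://my-bucket/model.tar.gz",             # hypothetical bucket
#     role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
#     entry_point="inference.py",                           # your serving script
#     framework_version="1.5-1",                            # illustrative version
# )
# predictor = xgb_model.deploy(initial_instance_count=1,
#                              instance_type="ml.m5.large")
```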

Related

Firebase deploy - Cannot publish a model that is not verified

I'm trying to deploy a .tflite to Firebase ML so that I can distribute it from there.
I used transfer learning on this TF Hub model. I then followed this tutorial to convert the model to the .tflite format.
The model gives good results in the Python TFLite interpreter and can be used on Android if I package it with the app.
However, I want to serve the model via Firebase, so I used this tutorial to deploy the .tflite file to Firebase. Following it, I get the error firebase_admin.exceptions.FailedPreconditionError: Cannot publish a model that is not verified.
I can't find any information about this error anywhere, and given that the model works on both Android and Python, I'm at a loss as to what could be causing this.
Did you solve this issue? I had the same one, and it turned out the model size must be under 40 MB. That was causing the error; the detailed message is only reported when uploading the model manually through the Firebase web dashboard.
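A quick guard like the following can catch this before calling the Firebase upload. The 40 MB limit is the one reported in the answer above; the function name is just for illustration.

```python
import os

FIREBASE_ML_MAX_BYTES = 40 * 1024 * 1024  # ~40 MB limit reported above

def tflite_small_enough(path):
    """Return True if the .tflite file is under the size limit that,
    per the answer above, Firebase ML enforces before publishing."""
    return os.path.getsize(path) < FIREBASE_ML_MAX_BYTES
```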

How to run your own python code on amazon sagemaker

I have Python code that uses Keras with the TensorFlow backend. My machine doesn't have enough memory to train this model, so I want to make use of Amazon SageMaker.
However, all the tutorials I find are about deploying an already-trained model in Docker containers. My model isn't trained yet, and I want to train it on Amazon SageMaker.
Is there a way to do this?
EDIT: Also, can I turn my Python code into a script and run it on AWS SageMaker?
SageMaker lets you bring in your own training script and train your algorithm on SageMaker using one of the pre-built containers for frameworks like TensorFlow, MXNet, and PyTorch.
Please take a look at https://github.com/aws/amazon-sagemaker-examples/blob/master/frameworks/tensorflow/get_started_mnist_train.ipynb
It walks through how you can bring in your training script using TensorFlow and train it using SageMaker.
There are several other examples in the repository which will help you answer other questions you might have as you progress on with your SageMaker journey.
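In this "script mode" style of training, the entry point is an ordinary Python script: SageMaker passes hyperparameters as command-line arguments and exposes paths such as the model output directory through environment variables like SM_MODEL_DIR. A minimal skeleton (the hyperparameter names here are illustrative, not from the linked notebook):

```python
import argparse
import os

def parse_args(argv=None):
    """Parse hyperparameters passed by SageMaker as CLI arguments,
    plus the standard SM_MODEL_DIR env var for the model output path."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=10)      # illustrative
    parser.add_argument("--batch-size", type=int, default=32)  # illustrative
    parser.add_argument("--model-dir", type=str,
                        default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    return parser.parse_args(argv)

# In the training script you would then do:
# args = parse_args()
# ... build and train the Keras model ...
# model.save(args.model_dir)  # SageMaker uploads this directory to S3
```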

With AWS SageMaker, is it possible to deploy a pre-trained model using the sagemaker SDK?

I'm trying to avoid migrating an existing model training process to SageMaker and avoid creating a custom Docker container to host our trained model.
My hope was to inject our existing, trained model into the pre-built scikit learn container that AWS provides via the sagemaker-python-sdk. All of the examples that I have found require training the model first which creates the model/model configuration in SageMaker. This is then deployed with the deploy method.
Is it possible to provide a trained model to the deploy method and have it hosted in the pre-built scikit learn container that AWS provides?
For reference, the examples I've seen follow this order of operations:
1. Create an instance of sagemaker.sklearn.estimator.SKLearn, providing a training script
2. Call the fit method on it
3. This creates the model/model configuration in SageMaker
4. Call the deploy method on the SKLearn instance, which automagically takes the model created in steps 2/3 and deploys it in the pre-built scikit-learn container as an HTTPS endpoint.
Yes, you can import existing models to SageMaker.
For scikit-learn, you would use the SKLearnModel() object to load the model from S3 and create it in SageMaker. Then you can deploy it as usual.
https://sagemaker.readthedocs.io/en/latest/sagemaker.sklearn.html
Here's a full example based on MXNet that will point you in the right direction:
https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_onnx_superresolution/mxnet_onnx.ipynb
Struggled with the same use case for a couple of days.
We used the sagemaker.model.Model class and sagemaker.pipeline.PipelineModel.
Our solution is outlined here:
How to handle custom transformation/ inference and requirements in sagemaker endpoints

Can i deploy pretrained sklearn model (pickle in s3) on sagemaker?

I have already developed a scikit-learn based machine learning model and have it in a pickle file. I am trying to deploy it for inference only, and found SageMaker on AWS. I do not see scikit-learn among their available libraries, and I also do not want to train the model all over again. Is it possible to deploy only the model that is already trained and present in AWS S3 on SageMaker?
You need to containerize it before deploying it to SageMaker.
This might be a good start: https://aws.amazon.com/blogs/machine-learning/train-and-host-scikit-learn-models-in-amazon-sagemaker-by-building-a-scikit-docker-container/
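Inside such a container, the serving code's job at startup is just to fetch the pickle from S3 and deserialize it. A minimal sketch, where the bucket, key, and paths are hypothetical:

```python
import pickle

def load_model(local_path):
    """Unpickle a pre-trained scikit-learn model from local disk."""
    with open(local_path, "rb") as f:
        return pickle.load(f)

# In the container entrypoint you would first pull the artifact from S3,
# e.g. (hypothetical bucket/key/paths):
#
# import boto3
# boto3.client("s3").download_file("my-bucket", "model.pkl",
#                                  "/opt/ml/model/model.pkl")
# model = load_model("/opt/ml/model/model.pkl")
```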

deploy a pipeline with tensorflow in DSX

I have a model built using python and tensorflow.
The model is trained and works well, but I don't understand how to deploy it. How can I call this model to obtain a score on actual data?
I cannot use Watson ML deploy because of TensorFlow.
DSX supports training TensorFlow models (without GPUs). I hear DSX will support training TensorFlow with GPUs and then deploying into Watson Machine Learning (WML) in early 2018.
For other models that you've built in DSX using SparkML, scikit-learn, XGBoost, and SPSS, see the following for details on how to deploy using WML:
Scala Jupyter Notebook end-to-end tutorial: Train and deploy a SparkML model
Python Jupyter Notebook end-to-end tutorial: Train and deploy a SparkML model
Python Jupyter Notebook, recognition of hand-written digits: Train and deploy a scikit-learn model
