I have already developed a scikit-learn-based machine learning model and have it in a pickle file. I am trying to deploy it for inference only, and found SageMaker on AWS. I do not see scikit-learn among their available libraries, and I also do not want to train the model all over again. Is it possible to deploy, on SageMaker, a model that is already trained and present in AWS S3?
You need to containerize it before deploying it to SageMaker.
This might be a good start: https://aws.amazon.com/blogs/machine-learning/train-and-host-scikit-learn-models-in-amazon-sagemaker-by-building-a-scikit-docker-container/
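As the linked post shows, a custom SageMaker container must expose two HTTP routes: GET /ping for health checks and POST /invocations for inference. A minimal stdlib sketch of that contract follows; real containers usually use Flask behind gunicorn, and the model path `/opt/ml/model/model.pkl` and the JSON payload shape here are assumptions for illustration:

```python
# Minimal sketch of the web server a custom SageMaker inference container
# must run. SageMaker calls GET /ping for health checks and POST /invocations
# for predictions. The model location and request format are assumptions.
import json
import pickle
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL_PATH = "/opt/ml/model/model.pkl"  # hypothetical path inside the container

class InferenceHandler(BaseHTTPRequestHandler):
    model = None  # loaded lazily so the sketch can be read without the file

    def do_GET(self):
        # SageMaker pings this route to check that the container is healthy.
        if self.path == "/ping":
            self.send_response(200)
            self.end_headers()

    def do_POST(self):
        # SageMaker forwards each inference request to /invocations.
        if self.path == "/invocations":
            length = int(self.headers["Content-Length"])
            payload = json.loads(self.rfile.read(length))
            if InferenceHandler.model is None:
                with open(MODEL_PATH, "rb") as f:
                    InferenceHandler.model = pickle.load(f)
            preds = InferenceHandler.model.predict(payload["instances"])
            body = json.dumps({"predictions": list(preds)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

# To actually serve inside the container (SageMaker expects port 8080):
# HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```

The container image then only needs a `serve` entry point that starts this process; the pickled model is delivered by SageMaker under /opt/ml/model at startup.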
Related
I have Python code which uses Keras with the TensorFlow backend. My system doesn't support training this model due to low memory, so I want to make use of Amazon SageMaker.
However, all the tutorials I find are about deploying your model in Docker containers. My model isn't trained yet, and I want to train it on Amazon SageMaker.
Is there a way to do this?
EDIT: Also, can I turn my Python code into a script and run it on AWS SageMaker?
SageMaker lets users bring in custom training scripts and train their algorithms on SageMaker using one of the pre-built containers for frameworks like TensorFlow, MXNet, and PyTorch.
Please take a look at https://github.com/aws/amazon-sagemaker-examples/blob/master/frameworks/tensorflow/get_started_mnist_train.ipynb
It walks through how you can bring in your own TensorFlow training script and train it on SageMaker.
There are several other examples in the repository which will help you answer other questions you might have as you progress on with your SageMaker journey.
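In "script mode", the training script you hand to the pre-built container receives hyperparameters as command-line flags and data/model locations through SageMaker's SM_* environment variables. A sketch of that skeleton (the environment variable names are standard SageMaker conventions; the Keras training body itself is omitted):

```python
# Skeleton of a SageMaker "script mode" training entry point. Hyperparameters
# passed to the estimator arrive as CLI flags; SageMaker injects the SM_*
# environment variables inside the training container.
import argparse
import os

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # Hyperparameters show up as command-line flags.
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--batch-size", type=int, default=32)
    # Data and model locations come from SageMaker environment variables,
    # with local-testing fallbacks.
    parser.add_argument("--model-dir",
                        default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    parser.add_argument("--train",
                        default=os.environ.get("SM_CHANNEL_TRAINING",
                                               "/opt/ml/input/data/training"))
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print("training for %d epochs, reading data from %s" % (args.epochs, args.train))
    # ... build and fit the Keras model here, then save it under args.model_dir
```

Anything the script saves under the model directory is packaged by SageMaker and uploaded to S3 when the training job finishes.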
I'm using Azure ML Studio to create an automated ML pipeline. I've successfully gotten my model trained and tested in Azure, but it fails on model.to_json() and model.save_weights().
I believe these functions do not exist on my model, as scikit-multilearn is a wrapper around Keras. However, I want to be able to save my model and weights so I can deploy them to a web service. The scikit-multilearn model I'm using is Binary Relevance.
Thanks to anyone who helps.
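For what it's worth, to_json() and save_weights() are Keras-specific APIs; a scikit-multilearn estimator such as BinaryRelevance follows the scikit-learn estimator interface and is typically persisted with pickle (or joblib) instead. A minimal sketch, with a stand-in dict substituting for the real trained model:

```python
# Persisting a fitted scikit-multilearn/scikit-learn estimator with pickle.
# A plain dict stands in for the real trained BinaryRelevance model here.
import os
import pickle
import tempfile

trained_model = {"estimator": "BinaryRelevance", "n_labels": 4}  # stand-in

path = os.path.join(tempfile.gettempdir(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(trained_model, f)

# Later, e.g. inside the web service, restore the model and call predict().
with open(path, "rb") as f:
    restored = pickle.load(f)
```

The restored object is the same fitted estimator, so the web service can call its predict() method directly.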
I'm trying to avoid migrating an existing model training process to SageMaker and avoid creating a custom Docker container to host our trained model.
My hope was to inject our existing, trained model into the pre-built scikit-learn container that AWS provides via the sagemaker-python-sdk. All of the examples I have found require training the model first, which creates the model/model configuration in SageMaker; this is then deployed with the deploy method.
Is it possible to provide a trained model to the deploy method and have it hosted in the pre-built scikit learn container that AWS provides?
For reference, the examples I've seen follow this order of operations:
Creating an instance of sagemaker.sklearn.estimator.SKLearn and providing a training script
Call the fit method on it
This creates the model/model configuration in SageMaker
Call the deploy method on the SKLearn instance, which automagically takes the model created in steps 2 and 3 and deploys it in the pre-built scikit-learn container as an HTTPS endpoint.
Yes, you can import existing models to SageMaker.
For scikit-learn, you would use the SKLearnModel() object to load the model from S3 and create it in SageMaker. Then you can deploy it as usual.
https://sagemaker.readthedocs.io/en/latest/sagemaker.sklearn.html
Here's a full example based on MXNet that will point you in the right direction:
https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_onnx_superresolution/mxnet_onnx.ipynb
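Before SKLearnModel can load a pre-trained model from S3, the pickle has to be packaged as a model.tar.gz archive with the model file at the archive root. A stdlib sketch of that packaging step (the file names are illustrative):

```python
# Wrap a pickled model in the model.tar.gz layout SageMaker expects,
# ready to be uploaded to S3. File names here are illustrative.
import os
import pickle
import tarfile

def package_model(model, workdir):
    """Pickle `model` and wrap it in a model.tar.gz archive under `workdir`."""
    pkl_path = os.path.join(workdir, "model.pkl")
    with open(pkl_path, "wb") as f:
        pickle.dump(model, f)
    tar_path = os.path.join(workdir, "model.tar.gz")
    with tarfile.open(tar_path, "w:gz") as tar:
        # arcname keeps the file at the archive root, where SageMaker looks.
        tar.add(pkl_path, arcname="model.pkl")
    return tar_path
```

After uploading the archive to S3, its S3 URI is passed as model_data to SKLearnModel, along with an entry_point script whose model_fn unpickles the file; calling deploy() on that object then hosts the model in the pre-built scikit-learn container, with no training step.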
Struggled with the same use case for a couple of days.
We used the sagemaker.model.Model class and sagemaker.pipeline.PipelineModel.
Outlined our solution here:
How to handle custom transformation/ inference and requirements in sagemaker endpoints
Is there a way to deploy an XGBoost model trained locally using Amazon SageMaker? I have only seen tutorials covering both training and deploying a model with Amazon SageMaker.
Thanks.
This example notebook is a good starting point, showing how to use a pre-existing scikit-learn XGBoost model with Amazon SageMaker to create a hosted endpoint for that model.
I have a model built using Python and TensorFlow.
The model is trained and works well, but I don't understand how I can deploy it. I mean, how can I call this model in order to obtain a score on actual data?
I cannot use Watson ML deploy because of TensorFlow.
DSX supports training TensorFlow models (without GPUs). I hear DSX will support training TensorFlow with GPUs, and then deploying into Watson Machine Learning (WML), in early 2018.
For other models that you've built in DSX using SparkML, scikit-learn, XGBoost, and SPSS, see the following for details on how to deploy using WML:
Scala Jupyter Notebook end-to-end tutorial: Train and deploy a SparkML model
Python Jupyter Notebook end-to-end tutorial: Train and deploy a SparkML model
Python Jupyter Notebook, recognition of hand-written digits: Train and deploy a scikit-learn model