Saving Keras model and weights - python

I'm using Azure ML Studio to create an automated ML pipeline. I've successfully gotten my model trained and tested in Azure, but it fails on model.to_json() and model.save_weights().
I believe these functions don't exist on my model, since it's a scikit-multilearn wrapper around Keras rather than a plain Keras model. However, I want to be able to save my model and weights so I can deploy them to a web service. The scikit-multilearn model I'm using is Binary Relevance.
Thanks to anyone who helps.
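One possible direction, sketched below under assumptions: a fitted BinaryRelevance instance keeps its per-label estimators in classifiers_, and if each of those is a KerasClassifier (the tf.keras scikit-learn wrapper), the trained Keras model is exposed as .model, which does support to_json() and save_weights(). The variable br and the file names are placeholders, and both attribute names are assumptions about the wrappers' internals, so verify them for your versions.

```python
# Sketch only: `br` is a fitted BinaryRelevance whose base classifier
# is a KerasClassifier; `classifiers_` and `.model` are assumptions.
for i, clf in enumerate(br.classifiers_):
    keras_model = clf.model  # the trained tf.keras model inside the wrapper
    with open(f"model_{i}.json", "w") as f:
        f.write(keras_model.to_json())           # architecture
    keras_model.save_weights(f"weights_{i}.h5")  # weights
```

To restore, rebuild each model with model_from_json() and load_weights() in the same label order.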

Related

Firebase deploy - Cannot publish a model that is not verified

I'm trying to deploy a .tflite model to Firebase ML so that I can distribute it from there.
I used transfer learning on this TF Hub model, then followed this tutorial to convert the model to .tflite format.
The model gives good results in the Python TFLite interpreter and can be used on Android if I package it with the app.
However, I want to serve the model via Firebase, so I followed this tutorial to deploy the .tflite file to Firebase. Doing so, I get the error firebase_admin.exceptions.FailedPreconditionError: Cannot publish a model that is not verified.
I can't find any information about this error anywhere, and given that the model works on both Android and Python, I'm at a loss as to what could be causing it.
Did you solve this issue? I had the same one, and it turned out the model size has to be under 40 MB. That was what caused the error; the detailed error is only reported when uploading a model manually through the Firebase web dashboard.
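For reference, a minimal sketch of the upload flow with the Firebase Admin SDK, with a size check up front based on the ~40 MB limit mentioned above; the bucket name, display name, and file path are placeholders, and credentials are assumed to come from GOOGLE_APPLICATION_CREDENTIALS.

```python
import os

import firebase_admin
from firebase_admin import ml

MAX_BYTES = 40 * 1024 * 1024  # the ~40 MB limit reported above

tflite_path = "model.tflite"
if os.path.getsize(tflite_path) > MAX_BYTES:
    raise ValueError("Model exceeds ~40 MB; Firebase ML will refuse to publish it")

# Placeholder bucket; credentials are picked up from the environment.
firebase_admin.initialize_app(options={"storageBucket": "your-project.appspot.com"})

source = ml.TFLiteGCSModelSource.from_tflite_model_file(tflite_path)
model = ml.Model(display_name="my_model",
                 model_format=ml.TFLiteFormat(model_source=source))
created = ml.create_model(model)    # uploads and registers the model
ml.publish_model(created.model_id)  # this is the step that fails when too large
```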

Do I need the Tensorflow Object Detection API to use a trained model I made on a different computer?

Working on my local computer, I've created a TensorFlow Object Detector. I have exported the model (which I've tested using the checkpoints) to a protobuf file as well as several other formats (TFLite, TF.js, etc.). I now need to transfer this trained model to another computer that doesn't have the Object Detection API or the other things I needed to build the model.
Do I need all these dependencies on the new machine? Or does the protobuf file contain everything that the machine will need? The new machine only has the basic Anaconda environment packages as well as TensorFlow.
Protobuf files most commonly contain both the model and the weights, so in theory you can load your model on any machine with TensorFlow.
The only problems I can think of are custom layers/losses/optimizers and data pre-/post-processing.
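To illustrate the point, a small sketch of loading a TF2 Object Detection export with stock TensorFlow only; the directory path is a placeholder, and the uint8 batch input is what the TF2 exporter typically produces.

```python
import tensorflow as tf

# Load an exported Object Detection SavedModel with plain TensorFlow;
# the Object Detection API itself is not needed for inference.
detect_fn = tf.saved_model.load("exported_model/saved_model")

# TF2 OD exports typically take a uint8 batch of images.
dummy = tf.zeros([1, 320, 320, 3], dtype=tf.uint8)
detections = detect_fn(dummy)
print(detections["detection_boxes"].shape)  # e.g. (1, 100, 4)
```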

How to run your own Python code on Amazon SageMaker

I have Python code which uses Keras with the TensorFlow backend. My system can't train this model due to low memory. I want to make use of Amazon SageMaker.
However, all the tutorials I find are about deploying an already-trained model in Docker containers, while my model isn't trained yet and I want to train it on Amazon SageMaker.
Is there a way to do this?
EDIT: Also, can I turn my Python code into a script and run it on AWS SageMaker?
SageMaker lets users bring in their own training scripts and train their algorithms on SageMaker using one of the pre-built containers for frameworks like TensorFlow, MXNet, and PyTorch.
Please take a look at https://github.com/aws/amazon-sagemaker-examples/blob/master/frameworks/tensorflow/get_started_mnist_train.ipynb
It walks through how you can bring in your TensorFlow training script and train it using SageMaker.
There are several other examples in the repository which will help you answer other questions you might have as you progress on with your SageMaker journey.
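A minimal "script mode" sketch of that pattern, assuming the SageMaker Python SDK v2; the entry point, IAM role, instance type, framework versions, and S3 path are all placeholders to adapt.

```python
from sagemaker.tensorflow import TensorFlow

# train.py is your own Keras training script; SageMaker runs it inside
# the pre-built TensorFlow container on the instance type you choose.
estimator = TensorFlow(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.p3.2xlarge",  # GPU instance, since local memory was the issue
    framework_version="2.4",
    py_version="py37",
)
estimator.fit({"training": "s3://your-bucket/training-data"})
```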

With AWS SageMaker, is it possible to deploy a pre-trained model using the sagemaker SDK?

I'm trying to avoid migrating an existing model training process to SageMaker and avoid creating a custom Docker container to host our trained model.
My hope was to inject our existing trained model into the pre-built scikit-learn container that AWS provides via the sagemaker-python-sdk. All the examples I have found require training the model first, which creates the model/model configuration in SageMaker; this is then deployed with the deploy method.
Is it possible to provide a trained model to the deploy method and have it hosted in the pre-built scikit learn container that AWS provides?
For reference, the examples I've seen follow this order of operations:
1. Create an instance of sagemaker.sklearn.estimator.SKLearn and provide a training script.
2. Call the fit method on it.
3. This creates the model/model configuration in SageMaker.
4. Call the deploy method on the SKLearn instance, which automagically takes the model created in steps 2-3 and deploys it in the pre-built scikit-learn container as an HTTPS endpoint.
Yes, you can import existing models to SageMaker.
For scikit-learn, you would use the SKLearnModel() object to load the model from S3 and create it in SageMaker. Then you can deploy it as usual.
https://sagemaker.readthedocs.io/en/latest/sagemaker.sklearn.html
Here's a full example based on MXNet that will point you in the right direction:
https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_onnx_superresolution/mxnet_onnx.ipynb
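A minimal sketch of that flow, assuming the trained model has been packed into a model.tar.gz on S3 and that inference.py implements model_fn() to unpickle it; the paths, role, and framework version below are placeholders.

```python
from sagemaker.sklearn.model import SKLearnModel

# Wrap an existing artifact and deploy it to the pre-built scikit-learn
# container without ever calling fit().
model = SKLearnModel(
    model_data="s3://your-bucket/model/model.tar.gz",  # placeholder artifact
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    entry_point="inference.py",  # must define model_fn() to load the pickle
    framework_version="0.23-1",
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```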
Struggled with the same use case for a couple of days.
We used the sagemaker.model.Model class and sagemaker.pipeline.PipelineModel.
Outlined our solution here:
How to handle custom transformation/inference and requirements in SageMaker endpoints

Can I deploy a pretrained sklearn model (pickle in S3) on SageMaker?

I have already developed a scikit-learn based machine learning model and have it in a pickle file. I am trying to deploy it for inference only, and found SageMaker on AWS. I do not see scikit-learn among their available libraries, and I also do not want to train the model all over again. Is it possible to deploy only the already-trained model sitting in AWS S3 on SageMaker?
You need to containerize it before deploying to SageMaker.
This might be a good start: https://aws.amazon.com/blogs/machine-learning/train-and-host-scikit-learn-models-in-amazon-sagemaker-by-building-a-scikit-docker-container/
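Either way (custom container, or the pre-built container as in the previous question), SageMaker expects the artifact as a model.tar.gz in S3; a minimal packing-and-upload sketch, with placeholder file, bucket, and key names.

```python
import tarfile

import boto3

# Pack the existing pickle the way SageMaker expects, then upload it.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model.pkl")

boto3.client("s3").upload_file("model.tar.gz", "your-bucket", "model/model.tar.gz")
```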
