Deploy a pipeline with TensorFlow in DSX - Python

I have a model built using python and tensorflow.
The model is trained and works well, but I don't understand how I can deploy it. I mean, how can I call this model in order to score actual data?
I cannot use Watson ML deploy because of TensorFlow.

DSX supports training TensorFlow models (without GPUs). I hear DSX will support training TensorFlow with GPUs and then deploying into Watson Machine Learning (WML) in early 2018.
For other models that you've built in DSX using SparkML, scikit-learn, XGBoost, or SPSS, see the following for details on how to deploy using WML:
Scala Jupyter Notebook end-to-end tutorial: Train and deploy a SparkML model
Python Jupyter Notebook end-to-end tutorial: Train and deploy a SparkML model
Python Jupyter Notebook: Recognition of hand-written digits (train and deploy a scikit-learn model)
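
If you do go the WML route from a notebook, the flow looked roughly like the sketch below, using the watson-machine-learning-client Python package of that era. Method signatures varied across client versions, and my_trained_model, the credentials, and the payload shape are all placeholders, so treat this as an outline rather than a drop-in recipe:

    from watson_machine_learning_client import WatsonMachineLearningAPIClient

    # Credentials come from your WML service instance on IBM Cloud/Bluemix.
    wml_credentials = {
        "url": "https://ibm-watson-ml.mybluemix.net",
        "username": "***",
        "password": "***",
        "instance_id": "***",
    }
    client = WatsonMachineLearningAPIClient(wml_credentials)

    # Store the trained model in the WML repository, then create an
    # online (REST) deployment for it.
    metadata = {client.repository.ModelMetaNames.NAME: "my-model"}
    stored = client.repository.store_model(model=my_trained_model,
                                           meta_props=metadata)
    model_uid = client.repository.get_model_uid(stored)
    deployment = client.deployments.create(model_uid, name="my-model-deployment")

    # Score new records against the deployed endpoint.
    scoring_url = client.deployments.get_scoring_url(deployment)
    payload = {"values": [[5.1, 3.5, 1.4, 0.2]]}  # shape depends on your model
    print(client.deployments.score(scoring_url, payload))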

Related

Deploy yolo3 keras .h5 model to Raspberry Pi

I have trained a tiny yolo3 model with custom data on Keras and want to deploy the model onto a Raspberry Pi.
I have converted the Keras .h5 model to an int-quantized .tflite model and wanted to run inference with tflite_support.task, but that API seems to support only SSD-style detection networks, so it doesn't work for YOLO and raises an error that the model requires metadata.
So my question now is: what would be the best way to deploy a .h5 Keras model onto a Raspberry Pi? I have also tried to convert it to a frozen .pb and use OpenCV's dnn module, but the conversion doesn't seem to work, even following this: How to export Keras .h5 to tensorflow .pb?
Running Keras on the Raspberry Pi isn't really an option, since it would require a full TensorFlow installation.
Is there a lightweight way to deploy it using opencv.dnn or the tflite interpreter?
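
A minimal sketch of the tflite-interpreter route mentioned above: the raw interpreter (via the lightweight tflite-runtime package, which avoids a full TensorFlow install) can run any converted model directly, sidestepping tflite_support.task. The catch is that YOLO post-processing (box decoding, NMS) has to be done by hand on the output tensors. Model path and input handling here are placeholders:

    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

    interpreter = Interpreter(model_path="tiny_yolo3.tflite")  # your converted model
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # For an int-quantized model the input must match the quantized dtype.
    h, w = input_details[0]["shape"][1:3]
    frame = np.zeros((1, h, w, 3), dtype=input_details[0]["dtype"])  # stand-in for a real image

    interpreter.set_tensor(input_details[0]["index"], frame)
    interpreter.invoke()

    # Raw YOLO head outputs; decoding boxes/scores and applying NMS is up to you.
    outputs = [interpreter.get_tensor(d["index"]) for d in output_details]
    print([o.shape for o in outputs])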

How to run your own python code on amazon sagemaker

I have Python code that uses Keras with the TensorFlow backend. My system doesn't have enough memory to train this model, so I want to use Amazon SageMaker.
However, all the tutorials I find are about deploying your model in Docker containers. My model isn't trained yet, and I want to train it on Amazon SageMaker.
Is there a way to do this?
EDIT: Also, can I turn my Python code into a script and run it on AWS SageMaker?
SageMaker lets you bring your own training script and run it on SageMaker using one of the pre-built containers for frameworks like TensorFlow, MXNet, and PyTorch.
Please take a look at https://github.com/aws/amazon-sagemaker-examples/blob/master/frameworks/tensorflow/get_started_mnist_train.ipynb
It walks through how to bring in a TensorFlow training script and train it on SageMaker.
There are several other examples in the repository which will help you answer other questions you might have as you progress on with your SageMaker journey.
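
In outline, this "script mode" flow looks like the sketch below (parameter names follow SageMaker Python SDK v2): your existing Keras/TensorFlow code moves into a training script, and the SDK launches it in a pre-built TensorFlow container. The script name, bucket, instance type, and framework versions are placeholders to adapt:

    import sagemaker
    from sagemaker.tensorflow import TensorFlow

    estimator = TensorFlow(
        entry_point="train.py",             # your existing training code, as a script
        role=sagemaker.get_execution_role(),
        instance_count=1,
        instance_type="ml.p3.2xlarge",      # GPU instance; choose per budget/model size
        framework_version="2.11",           # match the TF version your script expects
        py_version="py39",
    )

    # Each input channel is exposed inside the container (e.g. SM_CHANNEL_TRAINING).
    estimator.fit({"training": "s3://my-bucket/training-data"})  # hypothetical bucket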

Can I deploy a pretrained sklearn model (pickle in S3) on SageMaker?

I have already developed a scikit-learn based machine learning model and have it in a pickle file. I am trying to deploy it for inference only, and found SageMaker on AWS. I do not see scikit-learn among their available libraries, and I also do not want to train the model all over again. Is it possible to deploy a model that is already trained and sitting in AWS S3 on SageMaker?
You need to containerize it before deploying to SageMaker.
This might be a good start: https://aws.amazon.com/blogs/machine-learning/train-and-host-scikit-learn-models-in-amazon-sagemaker-by-building-a-scikit-docker-container/
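
As an alternative to building your own image: newer versions of the SageMaker Python SDK ship a managed scikit-learn inference container, so it may be enough to package the existing pickle as a model.tar.gz in S3 plus a small inference script. A hedged sketch, with paths, versions, and the inference script name as assumptions:

    import sagemaker
    from sagemaker.sklearn.model import SKLearnModel

    model = SKLearnModel(
        model_data="s3://my-bucket/models/model.tar.gz",  # tarball holding your pickle
        role=sagemaker.get_execution_role(),
        entry_point="inference.py",    # defines model_fn() that unpickles the model
        framework_version="1.2-1",     # pick the container matching your local sklearn
    )
    predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
    print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))  # example feature vector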

How to deploy an XGBoost model on Amazon SageMaker?

Is there a way to deploy an XGBoost model trained locally using Amazon SageMaker? I have only seen tutorials covering both training and deploying a model with Amazon SageMaker.
Thanks.
This example notebook is a good starting point, showing how to use a pre-existing scikit-learn/XGBoost model with Amazon SageMaker to create a hosted endpoint for that model.
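
The gist of that approach, sketched below under placeholder names: save the locally trained booster, tar it up, upload it to S3, and point the managed XGBoost serving container at the artifact. The file names, container version, and inference script are assumptions to adapt:

    import tarfile
    import sagemaker
    from sagemaker.xgboost.model import XGBoostModel

    # bst = xgboost.train(params, dtrain)  # trained locally beforehand
    # bst.save_model("xgboost-model")
    with tarfile.open("model.tar.gz", "w:gz") as tar:
        tar.add("xgboost-model")

    session = sagemaker.Session()
    model_data = session.upload_data("model.tar.gz", key_prefix="xgb")  # default bucket

    model = XGBoostModel(
        model_data=model_data,
        role=sagemaker.get_execution_role(),
        entry_point="inference.py",    # defines model_fn() that loads the booster
        framework_version="1.5-1",     # container version matching your local xgboost
    )
    predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")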

Results of training a Keras model different on Google Cloud

I've created a script to train a Keras neural net and have run it successfully on my machine (at the end of training there is roughly 0.8 validation accuracy). However, when I try to run the exact same code (on the same data) on a Google Cloud VM instance, I get drastically worse results (~0.2 validation accuracy).
Git status confirms that the repo on the VM is up to date with master (as is my local machine), and I have verified that its versions of tf and keras are up to date (and the same as on my local machine). I've also set the numpy and tensorflow random seeds before importing Keras.
Has anyone run into a problem like this before? I'm at a loss as to what could be causing this... the only difference I can think of is that my machine is running Python 3.6 whereas the VM is running Python 2.7. Could that account for the vast difference in training results?
I found a buggy interaction between Keras and the Estimator API in TensorFlow 1.10 (the current gcloud version at the time), but not in >=1.11 (what I was using locally).
Not sure if it applies to you (do you use Keras+Estimator, and TensorFlow >=1.11 locally?).
I filed a bug report here: https://github.com/tensorflow/tensorflow/issues/24299
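
For reference, the usual TF 1.x-era Keras reproducibility setup looks like the sketch below: it pins the Python, NumPy, and TensorFlow seeds and forces single-threaded ops before Keras is imported. Even then, identical results are only expected on an identical software stack, which is why the 1.10-vs-1.11 mismatch above matters:

    import os
    import random

    import numpy as np
    import tensorflow as tf

    # PYTHONHASHSEED must really be set before the interpreter starts to
    # affect hash randomization; it is shown here for completeness.
    os.environ["PYTHONHASHSEED"] = "0"
    random.seed(42)
    np.random.seed(42)
    tf.set_random_seed(42)  # TF 1.x API; use tf.random.set_seed in TF 2.x

    # Multi-threaded op scheduling is a source of nondeterminism.
    config = tf.ConfigProto(intra_op_parallelism_threads=1,
                            inter_op_parallelism_threads=1)

    from keras import backend as K  # import Keras only after seeding
    K.set_session(tf.Session(graph=tf.get_default_graph(), config=config))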
