What is the difference between Google Cloud Vision Auto ML and Google Tensor Flow?
https://cloud.google.com/vision/automl/docs/train
https://www.tensorflow.org/hub/tutorials/object_detection
They are both made by the same company, both offer object detection through an API, both let the user train image models, and both can be driven from Python. Does Google Cloud AutoML use Google TensorFlow under the hood, just as a hosted cloud service, so the user does not need to download any software?
Just trying to understand both components.
Google Cloud AutoML Vision runs on Google Cloud Platform, while the Google TensorFlow Object Detection tutorial you linked runs on Google Colab.
There are a few main differences. I will try to cover the most important ones here:
A Colab instance session times out after 12 hours, whereas the maximum lifespan of a custom AutoML Vision model is 18 months.
Colab runs on Google Drive and its basic version is free, whereas AutoML Vision costs $3.15 per node hour.
Note: "You can try AutoML Vision Object Detection for free by using 40 free node hours each for training and online prediction, and 1 free node hour for batch prediction, per billing account."
Colab has easy authentication and pre-built connectors, whereas AutoML Vision is fully integrated with Google Cloud Platform.
Colab is a consumer product without enterprise support, whereas AutoML Vision has enterprise support to resolve your issues and queries.
For the highest level of programming control, you are better off creating a custom object detection model using TensorFlow. Some examples can be found here. To reduce time to production, you can use transfer learning with one of the model architectures available on TensorFlow Hub, or download model weights from other sources; a minimal sketch of loading a TF Hub detector follows below.
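For illustration, here is a rough sketch along the lines of the TF Hub object detection tutorial linked above; the module handle and image file name are placeholders you would adapt:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load a pre-trained object detector from TensorFlow Hub
# (module handle taken from the tutorial linked above).
detector = hub.load(
    "https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1"
).signatures["default"]

# Run detection on one image; the model expects a float32 batch in [0, 1].
img = tf.io.decode_jpeg(tf.io.read_file("example.jpg"), channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]
result = detector(img)

# Boxes, class labels and scores come back as tensors.
for label, score in zip(result["detection_class_entities"][:5],
                        result["detection_scores"][:5]):
    print(label.numpy().decode(), float(score))
```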
AutoML Vision, on the other hand, doesn't provide granular control over model building, so you won't be able to tune the resulting model programmatically. AutoML uses Neural Architecture Search to design an optimal neural network for the task at hand. This approach is faster since all that's needed is training data and a task to build a neural network for. You can find more details and examples in the AutoML Vision documentation.
If you want to use ready-made APIs (with zero customization of the model itself), you can directly leverage the Vision API. The API serves predictions for common tasks (such as object detection, OCR, etc.) without having to bring any data or train any ML model, but it allows no control. The downside is that the model behind the API is static and cannot be retrained like an AutoML Vision model. More details on the difference between the Vision API and AutoML Vision can be found here.
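For the Vision API route, a minimal sketch assuming the google-cloud-vision client library (v2+); the file name is a placeholder:

```python
from google.cloud import vision

# Ready-made object detection via the Cloud Vision API;
# no training data or custom model involved.
client = vision.ImageAnnotatorClient()

with open("example.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.object_localization(image=image)
for obj in response.localized_object_annotations:
    print(obj.name, obj.score)
```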
Regarding which product does a better job of object detection - you will have to perform your own experiments and see which product fits your needs best.
Finally, in increasing order of programmatic control, it is as follows:
Vision API → AutoML Vision → TensorFlow (with transfer learning) → TensorFlow (custom model)
We currently have a system running on AWS Sagemaker whereby several units have their own trained machine learning model artifact (using an SKLearn training script with the Sagemaker SKLearn estimator).
Through the use of Sagemaker's multi-model endpoints, we are able to host all of these units on a single instance.
The problem we have is that we need to scale this system up such that we can train individual models for hundreds of thousands of units and then host the resulting model artifacts on a multi-model endpoint. But SageMaker has a limit on the number of models you can train in parallel (our limit is 30).
Aside from training our models in batches, does anyone have any ideas how to go about implementing a system in AWS Sagemaker whereby for hundreds of thousands of units, we can have a separate trained model artifact for each unit?
Is there a way to output multiple model artifacts for 1 sagemaker training job with the use of an SKLearn estimator?
Furthermore, how does Sagemaker make use of multiple CPUs when a training script is submitted? Does this have to be specified in the training script/estimator object or is this handled automatically?
Here are some ideas:
1. does anyone have any ideas how to go about implementing a system in AWS Sagemaker whereby for hundreds of thousands of units, we can have a separate trained model artifact for each unit? Is there a way to output multiple model artifacts for 1 sagemaker training job with the use of an SKLearn estimator?
I don't know whether the 30-concurrent-training-job limit is a hard limit; if it is a blocker, open a support ticket to ask and try to get it raised. Otherwise, as you point out, you can train multiple models in one job and produce multiple artifacts that you can either (a) send to S3 manually, or (b) save to /opt/ml/model so that they all get packed into the model.tar.gz artifact in S3. Note that if this artifact gets too big, this could become impractical. A sketch of option (b) is below.
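A minimal sketch of option (b), assuming SageMaker script mode with the SKLearn estimator; the data layout, column names and the LinearRegression model are placeholders:

```python
# train.py - one training job that fits one model per unit and writes every
# artifact under /opt/ml/model, so they all end up inside model.tar.gz.
import os
import joblib
import pandas as pd
from sklearn.linear_model import LinearRegression

MODEL_DIR = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
TRAIN_DIR = os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train")

df = pd.read_csv(os.path.join(TRAIN_DIR, "train.csv"))

for unit_id, unit_df in df.groupby("unit_id"):
    model = LinearRegression()
    model.fit(unit_df[["feature_1", "feature_2"]], unit_df["target"])
    # One artifact per unit; SageMaker tars the whole directory afterwards.
    joblib.dump(model, os.path.join(MODEL_DIR, f"model_{unit_id}.joblib"))
```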
2. how does Sagemaker make use of multiple CPUs when a training script is submitted? Does this have to be specified in the training script/estimator object or is this handled automatically?
This depends on the type of training container you are using. SageMaker built-in containers are developed by Amazon teams and designed to use the available resources efficiently. If you use your own code, such as custom Python in the SKLearn container, you are responsible for making sure that your code is efficiently written and uses the available hardware; hence the framework choice is quite important :) For example, some scikit-learn models support using multiple CPUs explicitly (e.g. the n_jobs parameter in the random forest, as in the snippet below), but I don't think scikit-learn natively supports GPU, multi-GPU or multi-node training.
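A small illustration of the n_jobs point (synthetic data for brevity):

```python
# scikit-learn parallelises some estimators across CPUs, but only if asked.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# n_jobs=-1 uses every CPU the training instance exposes; the default
# trains trees on a single core.
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X, y)
```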
Is there a way to train your machine learning model in the cloud? Or does it really have to be batch training, i.e. pull some data from SQL, then feed that to the model?
What I was thinking of is implementing my own model from scratch and using Stochastic Gradient Descent to update the parameters for every row from the database.
I think you are looking for something like the GCP AI Platform.
You can use BigQuery to store your data, do some analytics, and run its built-in ML models.
AI Platform Notebooks to manage your notebooks.
Check this list for the built-in algorithms in GCP.
Or, if you have your own model, you can use cloud resources to run it - check this link on how to use GCP resources for your model. If you do want the row-by-row SGD approach you describe, a sketch is below.
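For the incremental-training idea from the question, here is a minimal sketch assuming scikit-learn and a local SQLite database; the table and column names are placeholders:

```python
# Pull rows from a SQL database and update a linear model incrementally
# with SGD via partial_fit, one gradient step per row.
import sqlite3
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = [0, 1]  # partial_fit needs the full label set up front

conn = sqlite3.connect("customer_data.db")
cursor = conn.execute("SELECT feature_1, feature_2, label FROM training_rows")

for feature_1, feature_2, label in cursor:
    # No need to hold the whole table in memory.
    model.partial_fit([[feature_1, feature_2]], [label], classes=classes)

conn.close()
```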
I built my first convnet using the process described in this colab. Now I would like to run the model on Google's shiny new Edge TPU.
But according to the Model Requirements described here, I need to use quantization-aware training (post-training quantization is not supported) to be able to convert the model into a format that I can use on the Edge TPU.
How do I modify the example colab to do this quantization-aware training thing?
Well, because the Keras API does not support quantization in the current edition, you are left with 3 options:
wait for keras to have the required functionality
rewrite your model with a different API that has this functionality
find a different accelerator that does not require you to quantize your model
Either way, the solution is not great. For option 2, a rough sketch of what quantization-aware training looks like with the TF 1.x graph-level API is below.
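This is only a sketch, assuming TF 1.x with tf.contrib available; the toy layers and loss are placeholders, not the model from the colab:

```python
import tensorflow as tf  # TF 1.x

# Toy graph standing in for your model.
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
labels = tf.placeholder(tf.float32, [None, 10])
net = tf.layers.conv2d(inputs, 32, 3, activation=tf.nn.relu)
net = tf.reduce_mean(net, axis=[1, 2])
logits = tf.layers.dense(net, 10)
loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)

# Rewrite the graph in place with fake-quantization ops; quant_delay lets
# the model train in float for some steps before quantization kicks in.
tf.contrib.quantize.create_training_graph(quant_delay=2000)

train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```

After training, the corresponding inference graph would be rewritten with tf.contrib.quantize.create_eval_graph() before freezing and converting it for the Edge TPU.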
Does anyone know the difference between using Google Cloud Machine Learning compared to a virtual machine instance in Google Compute Engine?
I am using Keras with Python 3 and feel like Cloud ML is more restricting (it uses Python 2.7, an older version of TensorFlow, you must follow the given structure, ...). I guess there are benefits of using Cloud ML over a VM in GCE, but I would like to know what they are.
Google Cloud ML is a fully managed service whereas Google Compute Engine is not (the latter is IaaS).
Assuming that you just want to know some differences for the case when you have your own model, here you have some:
The most noticeable feature of Google CloudML is the deployment itself. You don't have to take care of things like setting up your cluster (that is, scaling), launching it, installing the packages and deploying your model for training. This is all done automatically, and you would have to do it yourself in Compute Engine, although you would be unrestricted in what you can install.
Although you can automate all that deployment more or less, there is no magic to it. In fact, you can see in the logs of a CloudML training job that it is quite rudimentary, in the sense that a cluster of instances is launched and thereafter TF is installed and your model is run with the options you set. This is due to TensorFlow being a framework decoupled from Google's systems.
However, there is a substantial difference between CloudML and Compute Engine when it comes to prediction, and that is mostly what you pay for with CloudML, I would say. You can have a model deployed in CloudML for online and batch prediction pretty much out of the box. In Compute Engine, you would have to take care of all the quirks of TensorFlow Serving, which are not that trivial (compared to training your model).
Another advantage of CloudML is hyper-parameter tuning. It is no more than a somewhat smart brute-forcing tool to find the best combination of hyper-parameters for your given model. You could possibly automate this in Compute Engine, but you would have to figure out the optimisation algorithm yourself to find the combination of parameter values that improves the objective function (usually maximising your accuracy or reducing your loss). A sketch of how a training script receives tuned parameters is below.
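As a rough illustration (the flag names and trainer are hypothetical, not a specific CloudML API): the tuning service launches your training script once per trial, passing that trial's hyper-parameter values as command-line flags, so the script only has to accept and use them.

```python
# trainer/task.py - accept hyper-parameters chosen by the tuning service.
import argparse

def train(learning_rate, batch_size):
    # ... build and fit your Keras/TensorFlow model here ...
    # Return or report the metric the tuning service should optimise.
    return 0.0

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--learning-rate", type=float, default=0.01)
    parser.add_argument("--batch-size", type=int, default=32)
    args = parser.parse_args()
    train(args.learning_rate, args.batch_size)
```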
Finally, pricing is slightly different in either service. Until recently, CloudML pricing was on a par with other competitors (you paid for computing time in both training and prediction, but also per prediction, which you could compare with the computing time in Compute Engine). However, now you only pay for that computing time (and it is even cheaper than before), which probably renders the idea of managing and scaling your own cluster (with TensorFlow) in Compute Engine pointless in most scenarios.
I'm currently developing a prediction model using TensorFlow and my model works well for a customer, so I'm trying to turn it into a real product.
My model needs to be retrained using the customer's input as time passes, and it has to be deployed on the customer's infrastructure (not SaaS or cloud). Moreover, I'd like to protect my code and models.
From my understanding of TensorFlow, a trained model can be exported as a protobuf, frozen, and stripped down to the nodes required for prediction (freeze_graph.py in the TensorFlow repo). I tried it and successfully ran my prediction model using Golang + the libtensorflow.so runtime. (Or I could use TensorFlow Serving & C++.)
If I could train my model on our company's infra, I could say "Okay, let's get some beers". However, my model has to be trained on the customer's infra, and without Python code it seems I cannot train my model.
https://www.tensorflow.org/versions/r0.12/how_tos/language_bindings/index.html
At this time, support for gradients, functions and control flow operations ("if" and "while") is not available in languages other than Python. This will be updated when the C API provides necessary support.
Is there any workaround for deploying a TF app without exposing Python code or the model? Thanks in advance.
You can still use Python with a pre-trained model, without exposing all the code you needed to build it in the first place. As an example of this, have a look at the Inception retraining code, which loads a pretrained GraphDef and then retrains a new top layer:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py
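In that spirit, here is a rough sketch of the pattern (TF 1.x; the frozen-graph path, bottleneck tensor name and label count are assumptions you would adapt to your own model): import the frozen GraphDef, attach a new trainable layer on top, and train only that layer while the original graph stays opaque.

```python
import tensorflow as tf  # TF 1.x

# Load the frozen, pre-trained GraphDef (path and tensor name are assumptions).
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it and grab the "bottleneck" output to feed a new head.
bottleneck, = tf.import_graph_def(
    graph_def, return_elements=["bottleneck_tensor:0"], name="")

# New trainable top layer; only its variables receive gradients, so the
# shipped model and the code that built it remain hidden.
labels = tf.placeholder(tf.float32, [None, 5])
logits = tf.layers.dense(bottleneck, 5)
loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```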