I am a newbie working on the LPRNet provided by the OpenVINO toolkit:
https://github.com/openvinotoolkit/training_extensions
I want to get the probability of the predicted result, but it seems that tf.nn.ctc_greedy_decoder only returns neg_sum_logits, and I'm not sure how to convert that into a probability.
Does anyone know how can I get that?
Any suggestion will be appreciated!
Thanks.
You may try using the validation app.
The Inference Engine Validation Application is a tool that lets you infer deep learning models with a standard input/output configuration and collect simple validation metrics for topologies. It supports the top-1 and top-5 metrics for classification networks and the 11-point mAP metric for object detection networks.
You may refer here for some insight: https://docs.openvinotoolkit.org/2019_R1/_inference_engine_samples_validation_app_README.html
And here is a step-by-step explanation of how to use the validation app: https://www.youtube.com/watch?v=4WAQSx3LC-M
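If what you want is the probability of the decoded sequence itself, one commonly suggested approach (a sketch I have not verified against LPRNet specifically; logits and seq_len are placeholder names) is to feed log-softmax scores into the decoder, so that neg_sum_logits becomes a negative log-probability:

import tensorflow as tf

# Assumes `logits` has shape [max_time, batch_size, num_classes] and
# `seq_len` is an int32 vector of length batch_size.
log_probs = tf.nn.log_softmax(logits, axis=-1)
decoded, neg_sum_logits = tf.nn.ctc_greedy_decoder(log_probs, seq_len)

# With log-softmax inputs, neg_sum_logits is the negative log-probability
# of the greedy frame-wise path, so:
path_prob = tf.exp(-neg_sum_logits)  # shape [batch_size, 1]

Note that this is the probability of the single best frame-wise alignment, not the CTC probability of the label sequence summed over all alignments.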
From what I can see in the docs, H2O supports calibration for GBM, DRF, and XGBoost models only, and it has to be specified prior to the training phase.
I find it confusing. If calibration is a post-processing step and is model-agnostic, shouldn't it be possible to calibrate any model trained using H2O, even after the training process is finished?
Currently, I'm dealing with a model that I've trained using AutoML. Even though it is a GBM model, I'm not able to easily calibrate it by providing the calibrate_model parameter, as that parameter is not supported by AutoML. I don't see any option to calibrate the model after it's trained either.
Does anyone know an easy way to calibrate already-trained H2O models? Is it necessary to "manually" calibrate them using algorithms such as Platt scaling, or is there a way to do it without using any extra libraries?
Thanks
I find it confusing. If calibration is a post-processing step
The reason it is part of model training right now is to have it in the MOJO (our deployment artifact).
and is model-agnostic, shouldn't it be possible to calibrate any model trained using H2O, even after the training process is finished?
Calibrating a model ex post makes a lot of sense, and all the code is already in place; it "just" needs to be exposed to users. We created a ticket for this here.
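Until that is exposed, a manual Platt-scaling sketch along the lines you describe (assuming a binary classifier with a numeric 0/1 target; model, calib, and the column name "y" are placeholders, while "p1" is the usual probability column in H2O binomial predictions):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Raw P(class 1) from the already-trained H2O model on a held-out calibration frame.
raw = model.predict(calib)["p1"].as_data_frame().values.ravel()
y = calib["y"].as_data_frame().values.ravel()

# Platt scaling: fit a logistic regression on the log-odds of the raw scores.
raw = np.clip(raw, 1e-6, 1 - 1e-6)  # guard against log(0)
log_odds = np.log(raw / (1.0 - raw)).reshape(-1, 1)
platt = LogisticRegression().fit(log_odds, y)

# Calibrated probabilities:
calibrated = platt.predict_proba(log_odds)[:, 1]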
So, I've googled this prior to asking, obviously; however, there doesn't seem to be much direct mention of these modes. The TensorFlow documentation mentions "test" mode in passing, which, upon further reading, didn't make much sense to me.
From what I've gathered, my best guess is this: to reduce RAM, when your model is in prediction mode, you just use the already-trained model to make predictions based on your input?
If someone could help with this and help me understand, I would be extremely grateful.
Training refers to the part where your neural network learns. By learning, I mean how your model changes its weights to improve its performance on a task given a dataset. This is achieved using the backpropagation algorithm.
Predicting, on the other hand, does not involve any learning. It is only used to see how well your model performs after it has been trained. No changes are made to the model when it is in prediction mode.
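A small Keras sketch of the distinction (illustrative only; note that layers such as Dropout also behave differently in the two modes):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),  # active in training mode, disabled at prediction time
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((32, 10))
y = tf.random.normal((32, 1))

model.fit(x, y, epochs=1)  # training mode: weights are updated via backpropagation
preds = model.predict(x)   # prediction mode: no weight updates, dropout is off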
I am following this tutorial:
https://www.tensorflow.org/tutorials/keras/text_classification_with_hub
It only goes up to fitting the model, but I couldn't find how to use the model on a new dataset to classify unlabeled data. I tried following other tutorials, but I couldn't get them to work, since they might not be text-based. The tutorial's final layer is:
model.add(tf.keras.layers.Dense(1))
I ran into an issue when I tried to set that layer to 2 units (for positive and negative), but that doesn't work either.
I think you misunderstood the purpose of that tutorial. That tutorial applies what is known as "transfer learning": taking an already-trained model and training it further on other data. What you are doing is creating an entirely new model, which is not the purpose of that specific tutorial. Furthermore, that model needs a labeled dataset, which the tutorial provides via the TensorFlow Datasets library. To accomplish what you are trying to do, you should look at a different tutorial explaining how to train an LSTM model for text classification from scratch.
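That said, once the tutorial's model is fitted, you can still run it on unlabeled text with model.predict. A sketch (assuming model is the fitted model from that tutorial, whose first layer embeds raw strings; the reviews below are made up):

import numpy as np
import tensorflow as tf

new_reviews = np.array([
    "This movie was fantastic, I loved every minute.",
    "Terrible plot and wooden acting.",
])

logits = model.predict(new_reviews)  # the final Dense(1) emits one logit per review
probs = tf.sigmoid(logits)           # logit -> P(positive)
labels = ["positive" if p > 0.5 else "negative" for p in probs.numpy().ravel()]

This is also why Dense(1) is enough for positive/negative: a single sigmoid unit already encodes both classes, so there is no need to change it to 2.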
First, I got a model trained, which is kinda cool.
I am also able, after some headache and confusion about why the model is in us-central, to make single API predictions.
I am wondering: is it possible to do batch predictions as well?
Cheers
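If this is AI Platform (which the us-central1 region suggests), batch prediction is normally submitted as a job rather than an online call. A hypothetical sketch using the Google API Python client; the project, model, and bucket names are all placeholders, and the field values should be double-checked against the current docs:

from googleapiclient import discovery

ml = discovery.build("ml", "v1")
body = {
    "jobId": "my_batch_prediction_job",
    "predictionInput": {
        "dataFormat": "JSON",  # newline-delimited JSON, one instance per line
        "inputPaths": ["gs://my-bucket/inputs/*"],
        "outputPath": "gs://my-bucket/outputs/",
        "region": "us-central1",
        "modelName": "projects/my-project/models/my_model",
    },
}
ml.projects().jobs().create(parent="projects/my-project", body=body).execute()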
I have reached the point where I have deployed my trained model, built with Keras on top of TensorFlow. I have TensorFlow Serving running, and when I feed it the test data I get exactly the output I expect. Wonderful!
However, in the real world (the deployed scenario) I need to pass the model new data it has never seen before, and in the training/testing setup I did categorization and one-hot encoding. So I need to transform the submitted data set the same way first. This I might be able to do.
I also did normalization (StandardScaler from sklearn), and now I have no clue what best practice is here. To normalize the same way, it seems I would need to run through the training data again plus the one submitted data set.
I believe this can be solved in an elegant way. Any ideas?
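One standard pattern is to persist the fitted scaler next to the model and load it at serving time, so the training data never has to be touched again. A sketch (X_train and X_new are placeholders):

import joblib
from sklearn.preprocessing import StandardScaler

# At training time: fit the scaler once on the training data and save it.
scaler = StandardScaler().fit(X_train)
joblib.dump(scaler, "scaler.joblib")

# At serving time: load it and transform incoming data with the stored
# training statistics (mean and variance); no training data needed.
scaler = joblib.load("scaler.joblib")
X_new_scaled = scaler.transform(X_new)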