First, I got a model trained, which is kinda cool.
I am also able, after some headache and confusion about why the model is in us-central, to make single API predictions.
I am wondering: is it also possible to do batch predictions?
Cheers
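For anyone landing here with the same question: AI Platform did support batch prediction as a submitted job rather than an online call. A hedged sketch, assuming a model already deployed to AI Platform; the job name, model name, version, and bucket paths below are all placeholders, not values from the original post:

```shell
# Hypothetical names and paths -- substitute your own model, version, and bucket.
gcloud ai-platform jobs submit prediction my_batch_job_001 \
  --model=my_model \
  --version=v1 \
  --data-format=text \
  --region=us-central1 \
  --input-paths=gs://my-bucket/inputs/instances.jsonl \
  --output-path=gs://my-bucket/outputs/
```

The job runs asynchronously and writes prediction results to the output path in Cloud Storage, which fits the batch use case better than looping over single API calls.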
I have created a model using TensorFlow for detecting any type of violence in a video. I trained the model on approximately 2,000 videos by splitting them into frames.
But when I use the model on an unseen or real-time video, it does not predict correctly.
I just wanted to ask whether anyone can tell me if I have chosen suitable hidden layers, and whether there are any tweaks I can make to get correct predictions.
neural_v2.ipynb is used to train the model. test_v2.py loads the model, captures video, and makes predictions.
If you need any more technical clarification please ask me.
If anyone can help in any way, I would really appreciate it.
Dataset Link
Code Link
Ideally, you would split your data into three sets: training, validation, and test (at the moment you are using your test data as your validation data).
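A minimal sketch of such a three-way split, assuming the data fits in memory; the 70/15/15 proportions and the toy arrays are illustrative, not from the original answer:

```python
import numpy as np

# Toy stand-ins for real frames and labels.
X = np.arange(100).reshape(100, 1)
y = np.arange(100) % 2

n = len(X)
rng = np.random.default_rng(0)
idx = rng.permutation(n)  # shuffle once so the three sets are disjoint

# 70% train, 15% validation, 15% test (illustrative proportions).
train_end = int(0.70 * n)
val_end = int(0.85 * n)

X_train, y_train = X[idx[:train_end]], y[idx[:train_end]]
X_val, y_val = X[idx[train_end:val_end]], y[idx[train_end:val_end]]
X_test, y_test = X[idx[val_end:]], y[idx[val_end:]]

print(len(X_train), len(X_val), len(X_test))  # 70 15 15
```

The validation set steers choices like epoch count and architecture; the test set is touched only once, at the very end.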
As in @finko's answer, I would try more epochs but, more importantly, a deeper model. Experiment with some state-of-the-art architectures (such as VGG16, ResNet152, or MobileNet); all of these are available as Keras applications (https://www.tensorflow.org/api_docs/python/tf/keras/applications).
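A sketch of wiring one of those Keras applications up as a frozen backbone with a small binary head (violence vs. no violence, per the question). The head sizes are assumptions; in practice you would pass weights="imagenet" to get the pretrained weights, but weights=None is used here only so the sketch runs offline:

```python
import tensorflow as tf

# MobileNetV2 as a frame-level feature extractor. Use weights="imagenet"
# in real training; weights=None avoids a download in this sketch.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None, pooling="avg")
base.trainable = False  # freeze the backbone, train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Two dummy frames just to check the forward pass and output shape.
frames = tf.zeros((2, 224, 224, 3))
probs = model(frames)
print(probs.shape)  # (2, 1)
</```

With the backbone frozen, only the head's weights are trained on your frames, which is usually far more data-efficient than training a deep network from scratch on ~2,000 videos.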
You could set epochs=50 and train again; it should give better results.
So, I've googled this before asking, obviously; however, there doesn't seem to be much direct discussion of these modes. The TensorFlow documentation mentions "test" mode in passing, which, upon further reading, didn't make much sense to me.
From what I've gathered, my best guess is that, to reduce RAM usage, prediction mode simply uses an already-trained model to make predictions on your input?
If someone could help with this and help me understand, I would be extremely grateful.
Training refers to the phase in which your neural network learns. By learning, I mean how your model changes its weights to improve its performance on a task, given a dataset. This is achieved using the backpropagation algorithm.
Predicting, on the other hand, does not involve any learning. It is only to see how well your model performs after it has been trained. No changes are made to the model when it is in prediction mode.
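The distinction can be sketched numerically. A toy linear model (not from the answer above): a training step computes a gradient and updates the weights, while a prediction is a pure forward pass that leaves them untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)                   # model weights
x = rng.normal(size=(5, 3))              # a small batch of inputs
y = x @ np.array([1.0, -2.0, 0.5])       # targets

def predict(w, x):
    # Prediction mode: forward pass only; w is read, never modified.
    return x @ w

def train_step(w, x, y, lr=0.1):
    # Training mode: gradient of the squared error updates the weights.
    grad = 2 * x.T @ (predict(w, x) - y) / len(x)
    return w - lr * grad

w_before = w.copy()
_ = predict(w, x)                        # predicting: weights unchanged
assert np.allclose(w, w_before)

w = train_step(w, x, y)                  # training: weights move
print(np.allclose(w, w_before))          # False
```

The RAM intuition in the question is partly right: because prediction never needs gradients, frameworks can skip storing the intermediate values that backpropagation would require.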
I am a newbie working on the LPRNet provided by the OpenVINO toolkit:
https://github.com/openvinotoolkit/training_extensions
I want to get the probability of the predicted result, but it seems that tf.nn.ctc_greedy_decoder only returns neg_sum_logits, and I'm not sure how to convert it into a probability.
Does anyone know how can I get that?
Any suggestion will be appreciated!
Thanks.
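One common interpretation, hedged because it depends on what is fed to the decoder: if the inputs to tf.nn.ctc_greedy_decoder are log-softmax values, then neg_sum_logits is the negative log-probability of the greedy path, and exp(-neg_sum_logits) recovers a probability. A numpy sketch of that arithmetic, with made-up values rather than LPRNet outputs:

```python
import numpy as np

# Fake per-frame logits for a 4-frame, 3-class (incl. blank) sequence.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.8, 0.3],
                   [1.5, 0.4, 0.2],
                   [0.1, 0.3, 2.2]])

# Log-softmax over classes for each frame.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# Greedy decoding: best class per frame; sum its log-probability.
greedy_log_prob = log_probs.max(axis=1).sum()

# If the decoder was fed log-softmax inputs, neg_sum_logits equals
# -greedy_log_prob, so the path probability is:
neg_sum_logits = -greedy_log_prob
path_prob = np.exp(-neg_sum_logits)
print(0.0 < path_prob < 1.0)  # True
```

If the decoder is fed raw logits instead, neg_sum_logits is just a negated sum of scores, not a log-probability, so applying log-softmax first is the step that makes the conversion meaningful.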
You could try the Validation Application.
The Inference Engine Validation Application is a tool that lets you run inference on deep learning models with standard input and output configurations and collect simple validation metrics for topologies. It supports the top-1 and top-5 metrics for classification networks and the 11-point mAP metric for object detection networks.
You may refer here for some insight: https://docs.openvinotoolkit.org/2019_R1/_inference_engine_samples_validation_app_README.html
And this is a step-by-step explanation of the Validation Application: https://www.youtube.com/watch?v=4WAQSx3LC-M
I am following this tutorial:
https://www.tensorflow.org/tutorials/keras/text_classification_with_hub
It only goes as far as fitting the model, and I couldn't find how to use the model on a new dataset to classify unlabeled data. I tried following other tutorials, but I couldn't get them to work, since they might not be text-based.
model.add(tf.keras.layers.Dense(1))
I run into an issue when I try to set the layer to 2 units, for positive and negative, but that doesn't work either.
I think you misunderstood the purpose of that tutorial. That tutorial applies what is known as "transfer learning". Transfer learning is when you take an already-trained model and train it further on other data. What you are doing is creating an entirely new model, which is not the purpose of that specific tutorial. Furthermore, that model needs a labeled dataset, which the tutorial provides using the TensorFlow Datasets library. To accomplish what you are trying to do, you should look at a different tutorial that explains how to train an LSTM model for text classification from scratch.
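On the narrower question of classifying unlabeled data with a Dense(1) head (not covered in the answer above): a single-unit output is a logit, so one common pattern is to apply a sigmoid and threshold at 0.5. A numpy sketch with made-up logits standing in for what model.predict(new_texts) would return:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these came from model.predict(new_texts) on a Dense(1) head.
logits = np.array([-2.3, 0.1, 4.0, -0.5])

probs = sigmoid(logits)  # probability of the "positive" class
labels = np.where(probs >= 0.5, "positive", "negative")
print(labels.tolist())  # ['negative', 'positive', 'positive', 'negative']
```

This is also why the Dense(1) layer does not need 2 units for positive/negative: one sigmoid output already encodes both classes, whereas 2 units would call for a softmax and integer labels instead.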
I have a model trained on a large corpus. I also have a feedback loop that provides me with feedback from users. The model is built on top of Theano and Python.
How can I add this feedback into my model? Right now I am thinking about two approaches :
Add the mini-batch to the training corpus and train again. This is straightforward, but it will take a lot of time to train.
Use the saved state of the trained model and train only on the mini-batch. This looks promising, but I am currently stuck on how to do it with Theano.
Can someone help me with the second case?
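Theano specifics aside, the second approach is essentially a warm start: restore the saved parameters and run ordinary gradient steps on just the feedback mini-batch, often with a smaller learning rate so the model is not pulled too far from what the large corpus taught it. A framework-agnostic numpy sketch with made-up data and weights:

```python
import numpy as np

def predict(w, x):
    return x @ w

def sgd_step(w, x, y, lr):
    # One gradient step on the squared error over the mini-batch.
    grad = 2 * x.T @ (predict(w, x) - y) / len(x)
    return w - lr * grad

# Pretend w_saved was serialized after training on the large corpus.
w_saved = np.array([0.9, -1.8, 0.4])

# Feedback mini-batch collected from users (made-up values).
rng = np.random.default_rng(1)
x_fb = rng.normal(size=(8, 3))
y_fb = x_fb @ np.array([1.0, -2.0, 0.5])

# Warm start: begin from the saved weights, fine-tune with a small lr.
w = w_saved.copy()
for _ in range(100):
    w = sgd_step(w, x_fb, y_fb, lr=0.05)

err_before = np.mean((predict(w_saved, x_fb) - y_fb) ** 2)
err_after = np.mean((predict(w, x_fb) - y_fb) ** 2)
print(err_after < err_before)  # True
```

In Theano terms, the same idea means reloading the values of the shared variables that hold the parameters and calling the compiled training function only on the feedback mini-batch; the risk to watch for is catastrophic forgetting if the feedback data is updated on too aggressively.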