PyTorch model doesn't fit for some reason - python

I'm learning PyTorch, so I decided to participate in the Kaggle DigitRecognition competition. Everything seems correct to me, but during training the model's accuracy kept getting worse and worse. I think I've made some unnoticeable mistake that spoils the model, maybe connected with the custom Dataset class (MyDataset), but I can't find it myself. Please, can somebody help me?
Here is the Colab notebook with the full solution.
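For context, a custom Dataset for the Kaggle digit CSV is usually written along the lines below. This is only a hedged sketch (the class and variable names are assumptions, since the notebook itself is not shown here), but forgetting the pixel scaling or using the wrong label dtype in this class is a common way to make training diverge:

```python
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    """Hypothetical sketch of a Dataset over the Kaggle digit pixels."""
    def __init__(self, images, labels=None):
        # images: (N, 784) array of pixel values 0-255 from train.csv / test.csv
        # scale to [0, 1] and reshape to (N, 1, 28, 28) for a CNN
        self.images = torch.tensor(images, dtype=torch.float32).view(-1, 1, 28, 28) / 255.0
        # labels must be int64 for nn.CrossEntropyLoss
        self.labels = None if labels is None else torch.tensor(labels, dtype=torch.long)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        if self.labels is None:
            return self.images[idx]
        return self.images[idx], self.labels[idx]
```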

Related

Deep neural network making wrong predictions on real-time videos

I have created a model using TensorFlow for detecting any type of violence in a video. I have trained the model on approximately 2000 videos by splitting them into frames.
But when I use that model on an unseen or real-time video, the predictions are not correct.
I just wanted to ask whether I have chosen the right hidden layers, and whether there are any tweaks I can make to get correct predictions.
neural_v2.ipynb is used to train the model. test_v2.py is the file that loads the model, captures video, and makes predictions.
If you need any more technical clarification please ask me.
If anyone can help in any way, I would really appreciate it.
Dataset Link
Code Link
Ideally, you would split your data into three sets: training, validation, and test (you are currently using your test data as your validation data).
As in #finko's answer, I would try more epochs, but more importantly a deeper model. Experiment with some state-of-the-art models (like VGG16, ResNet152, MobileNet, etc.). All of these are available as Keras applications (https://www.tensorflow.org/api_docs/python/tf/keras/applications).
You could set epochs=50 and train again; it should work better.
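As a rough illustration of those two suggestions combined, a transfer-learning setup might look like this; the choice of backbone, the input size, the binary label setup, and the epoch count are assumptions here, not taken from the original code:

```python
import tensorflow as tf

# Hypothetical sketch: reuse a pretrained backbone from tf.keras.applications
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained weights frozen at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # violence / no violence
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would come from your own training/validation split,
# with the test set held back for the final evaluation only
# model.fit(train_ds, validation_data=val_ds, epochs=50)
```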

Can you plot the accuracy graph of a pre-trained model? Deep Learning

I am new to Deep Learning. I finished training a model that took 8 hours to run, but I forgot to plot the accuracy graph before closing the jupyter notebook.
I need to plot the graph, and I did save the model to my hard-disk. But how do I plot the accuracy graph of a pre-trained model? I searched online for solutions and came up empty.
Any help would be appreciated! Thanks!
Which framework did you use, and which version? For any problem you may face in the future, this information can play a key role in how we can help you.
Unfortunately, for PyTorch/TensorFlow the model you saved most likely contains only the weights of the neurons, not the training history. Once the Jupyter notebook is closed, the memory is cleared (and with it, the data of your training history).
The only thing you can extract is the final loss/accuracy you had.
However, if you regularly saved versions of the model during training, you can load them and manually compute the accuracy/loss you need, then use matplotlib to reconstruct the graph.
I understand this is probably not the answer you were looking for. However, if the hardware is yours, I would recommend restarting the training; 8 hours is not that long to train a deep learning model.
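If you do have such intermediate checkpoints, the reconstruction looks roughly like this. The file layout, the Keras save format, and the x_val/y_val validation arrays are all assumptions; it also assumes each saved model was compiled with an accuracy metric:

```python
import glob
import matplotlib.pyplot as plt
import tensorflow as tf

# Hypothetical layout: one saved model per epoch, e.g. checkpoints/epoch_01.h5, epoch_02.h5, ...
accuracies = []
for path in sorted(glob.glob("checkpoints/epoch_*.h5")):
    model = tf.keras.models.load_model(path)
    # x_val / y_val: the same validation data you used during training
    loss, acc = model.evaluate(x_val, y_val, verbose=0)
    accuracies.append(acc)

plt.plot(range(1, len(accuracies) + 1), accuracies, marker="o")
plt.xlabel("epoch")
plt.ylabel("validation accuracy")
plt.show()
```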

TensorFlow - What do training and prediction modes mean when making a model?

I've googled this prior to asking, obviously; however, there doesn't seem to be much direct mention of these modes. The TensorFlow documentation mentions "test" mode in passing, which, upon further reading, didn't make much sense to me.
From what I've gathered, my best guess is that, to reduce RAM usage, when your model is in prediction mode you just use a pretrained model to make predictions based on your input?
If someone could help with this and help me understand, I would be extremely grateful.
Training refers to the part where your neural network learns. By learning I mean how your model changes its weights to improve its performance on a task, given a dataset. This is achieved using the backpropagation algorithm.
Predicting, on the other hand, does not involve any learning. It is only to see how well your model performs after it has been trained. There are no changes made to the model when it is in prediction mode.
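In Keras terms, the difference roughly boils down to the following; the toy model and random data are only there to make the contrast concrete:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(100, 10)
y = np.random.randint(0, 2, size=(100, 1))

# Training: weights are updated by backpropagation on every batch
model.fit(x, y, epochs=5, verbose=0)

# Prediction: the trained model is only evaluated, no weights change
preds = model.predict(np.random.rand(5, 10), verbose=0)
```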

How to train an nn4.small2 model in Keras for a face recognition system using a triplet loss function?

I am trying to train an nn4.small2 model for a face recognition app, but I am not sure how to do it correctly using a triplet loss function.
I have found a lot of documentation about this, the most relevant is here: https://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_089.pdf.
So, I am going to reproduce it with my dataset.
Has anyone done this before who would like to share their experience with me?
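For reference, the triplet loss from that paper is commonly written like this in TensorFlow. This is only the loss on already-computed embeddings; the nn4.small2 embedding network itself is not shown, and the margin follows the paper's alpha = 0.2:

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss from the FaceNet paper: pull the anchor and positive
    embeddings together and push the anchor and negative embeddings apart
    by at least `margin`."""
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))
```

In practice, the harder part is the triplet selection (the online semi-hard negative mining the paper describes); the loss itself is the simple part.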

Brief explanation of the TensorFlow object detection working mechanism

I've searched Google for the working mechanism of TensorFlow object detection, and for how TensorFlow trains models with a dataset. The results give me suggestions about how to implement it rather than explanations of how it works.
Can anyone explain how a dataset is fit into a model during training?
You can't "simply" understand how TensorFlow works without a good background in Artificial Intelligence and Machine Learning.
I suggest you start working on those topics. TensorFlow will get much easier to understand and to handle after that.
