I've searched Google for the working mechanism of TensorFlow object detection, and for how TensorFlow trains models on a dataset. The results tell me how to implement it rather than how it works.
Can anyone explain how a dataset is fit into a model during training?
You can't "simply" understand how TensorFlow works without a good background in Artificial Intelligence and Machine Learning.
I suggest you start working on those topics. Tensorflow will get much easier to understand and to handle after that.
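That said, the core idea behind "fitting a dataset into a model" is the same everywhere: loop over the data, compute predictions, measure a loss, and nudge the parameters against the gradient of that loss. Below is a minimal numpy sketch of that loop for a toy linear model; the dataset (`y = 2x + 1` plus noise) and learning rate are made up for illustration, and this is not TensorFlow's actual internals, just the principle it automates.

```python
import numpy as np

# Toy dataset: y = 2x + 1 plus a little noise (made up for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X + 1.0 + 0.01 * rng.normal(size=(100, 1))

# Model parameters: a single linear layer, y_hat = X @ w + b
w = np.zeros((1, 1))
b = np.zeros(1)

lr = 0.1
for epoch in range(200):
    pred = X @ w + b                      # forward pass: predictions
    err = pred - y
    loss = float(np.mean(err ** 2))       # mean squared error
    grad_w = 2.0 * X.T @ err / len(X)     # backward pass: gradients
    grad_b = 2.0 * np.mean(err, axis=0)
    w -= lr * grad_w                      # update step against the gradient
    b -= lr * grad_b

print(float(w[0, 0]), float(b[0]))  # should land near 2.0 and 1.0
```

Frameworks like TensorFlow do exactly this under the hood, except the gradients are derived automatically (autodiff) and the loop is batched and optimized.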
I'm learning PyTorch, so I decided to participate in the Kaggle competition DigitRecognition. Everything seems correct to me, but during training the model's accuracy got worse and worse. I think I've made some unnoticeable mistake that spoils the model, maybe connected with the custom Dataset class (MyDataset), but I can't find it myself. Please, somebody help me.
Here is a Colab notebook with the whole solution.
I have implemented a Faster R-CNN in Keras with my own custom dataset, following this very useful guide:
https://medium.com/analytics-vidhya/a-practical-implementation-of-the-faster-r-cnn-algorithm-for-object-detection-part-2-with-cac45dada619
I would like to ask about some details of that implementation, for those of you who already have similar experience with it:
Once the code is running, it prints: "Could not load pretrained model weights. Weights can be found in the keras application folder https://github.com/fchollet/keras/tree/master/keras/applications". Does this mean that I am not exploiting transfer learning?
Do you know something on that?
Thanks in advance
I'm a complete beginner. I checked the official tutorial Transfer learning with a pretrained ConvNet, and I'd like to make predictions with the model trained there. Is the following code wrong? I'd also like to know how to get the "image" and "class name" from the prediction. How should I do this? Please give me some advice.
predictions = model.predict(test_batches)[1]
print(predictions)
#[-0.11642772]
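For context on that printed number: the model in that tutorial ends in a single output unit, so `predict` returns one raw logit per image (and indexing with `[1]` picks out the logit for the second image in the batch, not a class). A logit can be turned into a class name by applying a sigmoid and thresholding at 0.5. A minimal sketch, assuming the cats-vs-dogs class order from that tutorial (0 = cat, 1 = dog):

```python
import math

# Assumed class order from the cats_vs_dogs dataset used in the tutorial
class_names = ["cat", "dog"]

def logit_to_label(logit):
    """Convert one raw logit to (class name, probability of class 1)."""
    prob = 1.0 / (1.0 + math.exp(-logit))   # sigmoid squashes logit into (0, 1)
    return class_names[int(prob > 0.5)], prob

# The value printed in the question: negative logit -> class 0
name, prob = logit_to_label(-0.11642772)
print(name, round(prob, 3))  # cat 0.471
```

So `-0.11642772` is a slightly-below-0.5 probability of class 1, i.e. a weak "cat" prediction. To pair predictions with their images, iterate over `test_batches` and keep the images alongside the logits rather than calling `predict` separately.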
First of all, TensorFlow is horrible: incredibly error-prone and difficult to install and use. I would recommend the PyTorch tutorials instead. Second, the link you posted has a Colab button. Did you try clicking it? Colab makes installation easier because it runs online, not on your computer.
Also, transfer learning is NOT a beginner topic. Maybe try an easier set of notebooks, if that works for you: https://github.com/sgrvinod/Deep-Tutorials-for-PyTorch/blob/master/README.md
I am trying to train an nn4.small2 model for a face recognition app, but I am not sure how to do it right using the triplet loss function.
I have found a lot of documentation about this, the most relevant is here: https://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_089.pdf.
So, I am going to reproduce it with my dataset.
Has anyone done that before and would like to share their experience with me?
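The loss itself is simple to state: for an anchor embedding, a positive (same identity), and a negative (different identity), penalize the model unless the anchor is closer to the positive than to the negative by at least a margin. A minimal numpy sketch, following the formulation in the FaceNet paper linked above (the margin value and toy embeddings are made up):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss per FaceNet (Schroff et al., 2015):
    max(||a - p||^2 - ||a - n||^2 + margin, 0), per row."""
    pos_dist = np.sum((anchor - positive) ** 2, axis=-1)
    neg_dist = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(pos_dist - neg_dist + margin, 0.0)

# Toy 2-D embeddings: anchor near the positive, far from the negative
a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])
n = np.array([[1.0, 0.0]])

easy = triplet_loss(a, p, n)  # margin already satisfied -> loss 0.0
hard = triplet_loss(a, n, p)  # roles swapped -> positive loss 1.19
print(easy, hard)
```

In practice the hard part is not the formula but triplet mining: the paper selects "semi-hard" negatives within each mini-batch, since random triplets mostly satisfy the margin and contribute zero gradient.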
I am trying to implement basic NLP tasks in TensorFlow without using the built-in modules as much as possible (just for learning's sake).
I have been trying to implement a Part of Speech tagger using data from http://www.cnts.ua.ac.be/conll2000/chunking/
I am having a little difficulty implementing an RNN from scratch with an embedding layer in front, and was wondering if there are examples and implementations of the same.
I have seen tons of examples using Theano and MNIST data, but I haven't been able to find a concrete implementation of RNNs from scratch in TensorFlow on textual data.
Any suggestions?
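For what it's worth, the forward pass of a vanilla RNN tagger with an embedding lookup is only a few lines. A minimal numpy sketch (all sizes and the random initialization are made up; training would add a loss over the gold tags and backpropagation through time, which the built-in layers normally handle):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a small POS-tagging setup
vocab_size, emb_dim, hidden_dim, n_tags = 50, 8, 16, 5

# Parameters, randomly initialized for the sketch
E   = rng.normal(0, 0.1, (vocab_size, emb_dim))    # embedding table
Wxh = rng.normal(0, 0.1, (emb_dim, hidden_dim))    # input -> hidden
Whh = rng.normal(0, 0.1, (hidden_dim, hidden_dim)) # hidden -> hidden
Why = rng.normal(0, 0.1, (hidden_dim, n_tags))     # hidden -> tag scores
bh  = np.zeros(hidden_dim)
by  = np.zeros(n_tags)

def rnn_forward(token_ids):
    """Vanilla RNN forward pass: one vector of tag logits per token."""
    h = np.zeros(hidden_dim)
    outputs = []
    for t in token_ids:
        x = E[t]                             # embedding lookup for token id t
        h = np.tanh(x @ Wxh + h @ Whh + bh)  # recurrent state update
        outputs.append(h @ Why + by)         # per-token tag logits
    return np.stack(outputs)

logits = rnn_forward([3, 7, 12])  # a toy 3-token sentence
print(logits.shape)               # (3, 5): one score per tag per token
```

The same structure translates to TensorFlow almost line for line (`tf.nn.embedding_lookup` for the table, `tf.tanh` for the cell, a loop or `tf.scan` over time steps), with the advantage that the gradients come for free.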
After searching for a long time, I decided to do it on my own.
Here are a couple of repos I created:
recurrent-neural-nets-tensorflow
lstm-tensorflow