I'm new to Tensorflow and AI, so I'm having trouble researching my question. Either that, or my question hasn't been answered.
I'm trying to make a text classifier to put websites into categories based on their keywords. I have at minimum 5,000 and at most 37,000 sites to train with.
What I'm trying to accomplish is: after the model is trained, I want it to continue to train as it makes predictions about the category a website belongs in.
The keywords that the model is trained on are chosen by clients, so they can always be different from those of the other websites in the same category.
How can I make TensorFlow retrain its model based on corrections made by me when its prediction is inaccurate? Basically, I want it to keep training forever.
The key phrase you lack is fine-tuning. This is when you take a model that has finished its customary training (whatever that may be) but needs more work for the application you have in mind. You then give it additional training with new input; when that training has completed (training accuracy plateaus and is close to test accuracy), you deploy the enhanced model for your purposes.
This is often used in commercial applications -- for instance, when a large predictive model is updated to include the most recent week of customer activity. Another common use is to find a model in a model zoo that is trained for something related to the application you want -- perhaps cats v dogs -- and use its recognition of facial features to shorten training for a model that identifies two classes of cartoon characters -- perhaps Pokemon v Tiny Toons.
In this latter case, your fine-tuning will almost entirely overwrite what was learned by the last few layers of the model. What you gain is the early-layer ability to find edges, regions, and feature combinations such as eyes-nose-mouth. This saves at least 30% of the overall training time.
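As a minimal Keras sketch of that freeze-and-retrain pattern (the choice of base model, the two-class head, and the learning rate are illustrative assumptions, not a prescription):

import tensorflow as tf

# Load a pretrained base model without its classification head.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze the early-layer feature extractors

# Attach a fresh head for the new task (here: two hypothetical classes).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(new_images, new_labels, epochs=5)  # your new task's data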
I am working with the YOLOv3 model for an object detection task. I am using pre-trained weights that were generated for the COCO dataset; however, I have my own data for the problem I am working on. As far as I know, using those trained weights as a starting point for my own model should not have any effect on the performance of the model once it is trained on an entirely different dataset (right?).
My question is: will the model give "honest" results if I train it multiple times and test it on the same test set each time, or would it have better performance since it has already been exposed to those test images during an earlier experiment? I've heard people say things like "the model has already seen that data", does that apply in my case?
For hyper-parameter selection, evaluation of different models, or evaluating during training, you should always use a validation set.
You are not allowed to use the test set until the end!
The whole purpose of test data is to get an estimate of performance after deployment. When you use it during training, to train or evaluate your model, you expose that data. For example, based on the accuracy on the test set, you decide to increase the number of layers.
Now, your performance on the test set will increase. However, it will come at a price!
Your estimate on the test set becomes biased, and you are no longer able to use that estimate to talk about data that your model sees after deployment.
For example, you want to train an object detector for self-driving cars, and you exposed the test set during training. Therefore, you cannot use the accuracy on the test set to talk about the performance of the object detector when you put it in a car and sell it to a customer.
There is an old saying related to this matter:
If you torture the data enough, it will confess.
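To make the separation concrete, a minimal sketch of keeping the three sets apart (the split ratios and the x/y array names are placeholders):

from sklearn.model_selection import train_test_split

# Carve off a held-out test set first, then split the rest into train/validation.
x_tmp, x_test, y_tmp, y_test = train_test_split(x, y, test_size=0.15, random_state=0)
x_train, x_val, y_train, y_val = train_test_split(x_tmp, y_tmp, test_size=0.15, random_state=0)

# Tune hyperparameters against (x_val, y_val) only;
# touch (x_test, y_test) exactly once, at the very end.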
I'm totally new to NLP and BERT models.
What I'm trying to do right now is sentiment analysis on trending Twitter hashtags ("neg", "neu", "pos") using a DistilBERT model, but the accuracy was about 50% (I tried with labelled data taken from Kaggle).
So here is my idea:
(1) First, I will fine-tune a DistilBERT model (Model 1) with the IMDB dataset.
(2) After that, since I've got some data taken from Twitter posts, I will run sentiment analysis on them with Model 1 and get Result 2.
(3) Then I will fine-tune Model 1 again with Result 2, expecting to get Model 3.
I'm not really sure whether this process does anything to make the model more accurate.
Thanks for reading my post.
I'm a little skeptical about your first step. Since the IMDB dataset is different from your target data, I do not think it will positively affect the outcome of your work. I would therefore suggest fine-tuning on a dataset of tweets or other social media posts; however, if you are only focusing on hashtags and do not care about the text, that might work! My limited experience fine-tuning transformers like BART and BERT shows that the dataset you fine-tune on should be very similar to your actual data. In general, though, you can fine-tune a model with different datasets, and if the datasets are structured toward one goal, it can improve the model's accuracy.
If you want to fine-tune a sentiment classification head of BERT for classifying tweets, then I'd recommend a different strategy:
The IMDB dataset covers a different kind of sentiment - movie ratings do not really correspond to the sentiment of short posts, unless you want to focus on tweets about movies.
Using classifier output as input for further training of that classifier is not really a good approach, because if the classifier made many mistakes while classifying, these will be reflected in the training, and the errors will deepen. This is basically creating endogenous labels, which will not really improve your real-world classification.
You should consider other ways of obtaining labelled training data. There are a few good examples for twitter:
Twitter datasets on Kaggle - there are plenty of datasets available containing millions of various tweets. Some of those even contain sentiment labels (usually inferred from emoticons, as these were proven to be more accurate than words in predicting sentiment - for explanation see e.g. Frasincar 2013). So that's probably where you should look.
StockTwits (if you're interested in financial sentiment) - contains posts that authors can label for sentiment themselves, and is thus a perfect way of mining labelled data, if stocks/crypto are what you're looking for.
Another thing is picking a model that's better for your language, I'd recommend this one. It has been pretrained on 80M tweets, so should provide strong improvements. I believe it even contains a sentiment classification head that you can use.
Roberta Twitter Base
Check out the model page for guidance on loading the model in your code - it's very easy; just use the following code (this is for sentiment classification):
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Download the pretrained tokenizer and classification model from the Hugging Face hub.
MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
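For completeness, a hedged inference sketch (the example tweet is made up; the negative/neutral/positive label order follows the model card):

# Classify one hypothetical tweet.
text = "finally some good news today!"
inputs = tokenizer(text, return_tensors="pt")
scores = model(**inputs).logits[0].softmax(dim=-1)
labels = ["negative", "neutral", "positive"]  # order per the model card
print(labels[int(scores.argmax())], scores.tolist())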
Another benefit of this model is that it was pretrained from scratch with a vocabulary that contains emojis, meaning it has a deep understanding of them, their typical contexts and co-occurrences. This can greatly benefit social media classification, as many researchers would agree that emojis/emoticons are better predictors of sentiment than normal words.
I am looking to train a large model (resnet or vgg) for face identification.
Is it a valid strategy to train on a few faces (1-3) to validate a model?
In other words - if a model learns one face well, is that evidence that the model is good for the task?
The point here is that I don't want to spend a week of expensive GPU time only to find out that my model is no good, the data has errors, or my TF code has a bug.
Short answer: No, because deep learning works well on huge amounts of data.
Long answer: No. The problem is that learning only one face could overfit your model to that specific face, without learning features not present in your examples. For example, the model may have learned to detect your face thanks to a specific, very simple pattern in that face (that's called overfitting).
As a deliberately simple example: your model has learned to detect that face because there is a mole on your right cheek, and that mole is all it has learned to identify.
To make your model perform well in the general case, you need a huge amount of data, so that the model becomes capable of learning different kinds of patterns.
Suggestion:
Because training a deep neural network is a time-consuming task, usually one does not train a single neural network at a time; instead, many neural networks are trained in parallel with different hyperparameters (layers, nodes, activation functions, learning rate, etc.), as in the sketch below.
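As a sketch of that idea (build_model is a hypothetical helper returning a compiled network; the grids are illustrative, and the loop runs sequentially here only for brevity -- in practice the runs are farmed out in parallel):

from itertools import product

results = {}
for lr, units in product([1e-2, 1e-3, 1e-4], [64, 128, 256]):
    model = build_model(learning_rate=lr, hidden_units=units)  # hypothetical factory
    model.fit(x_train, y_train, epochs=10, verbose=0)
    # Keep the validation loss/metrics for each hyperparameter combination.
    results[(lr, units)] = model.evaluate(x_val, y_val, verbose=0)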
Edit because of the discussion below:
If your dataset is small, it is quite impossible to get good performance in the general case, because the neural network will learn the easiest pattern, which is usually not the general/best one.
By adding data, you force the neural network to extract good patterns that work in the general case.
It's a tradeoff, but usually training on a small dataset will not lead to a good classifier for the general case.
Edit 2: rephrasing everything to make it clearer. Good performance on a small dataset doesn't tell you whether your model, when trained on the whole dataset, will be a good model. That's why you train on the majority of your dataset and test/validate on a smaller portion.
For face recognition, a siamese network or triplet loss is usually used. This is an approach for one-shot learning, which means it can perform really well given only a few examples per class (a person's face here), but you still need to train it on many examples (different people's faces). See for example:
https://towardsdatascience.com/one-shot-learning-with-siamese-networks-using-keras-17f34e75bb3d
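A minimal sketch of the triplet loss itself, assuming the network already produces L2-normalized embeddings (the margin value is an assumption):

import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared distances between the anchor and the positive/negative embeddings.
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    # Pull same-person pairs closer than different-person pairs by at least `margin`.
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))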
You wouldn't train your model from scratch anyway; you would use a pretrained model and fine-tune it for your task.
You could also have a look at pretrained face recognition models, like FaceNet, for better results:
https://github.com/davidsandberg/facenet
I have a use case where I have about 300 images of 300 different items. I need machine learning to detect an item about once a minute.
I've been using Keras with a Sequential model to detect images, but I'm wondering what I should take into consideration when I have 300 labels and only one image per label for learning.
So in short:
1) Can you do machine learning image detection with one training image per label?
2) Are there any special things I should take into consideration?
If this were a special case -- say, one class in 100 was represented by a single training image -- then you might get away with it. However, a unique image per class is asking for trouble.
A neural network learns by iterative correction, figuring out what features and combinations are important, and which are not, in discriminating the classes from one another. Training starts as a chaotic process that has some similarities to research: look at the available data, form hypotheses, and test them against the real world.
In a NN, the "hypotheses" are the various kernels it develops. Each kernel is a pattern to recognize something important to the discrimination process. If you lack enough examples for the model to generalize and discriminate for each class, then you run the risk (actually, you have the likelihood) of the model making a conclusion that is valid for the one input image, but not others in the same class.
For instance, one acquaintance of mine did the canonical cat-or-dog model, using his own photos, showing the pets of his own household and those of a couple of friends. The model trained well, identified cats and dogs with 100% accuracy on the test data, and he brought it into work ...
... where it failed, having an accuracy of about 65% (random guessing is 50%). He did some analysis and found the problem: his friends have indoor cats, but their preferred dog photos were out of doors. Very simply, the model had learned to identify not cats vs dogs, but rather couches and kitchen cabinets vs outdoor foliage. One of the main filters was of large, textured, green areas. Yes, a dog is a large, textured, green being. :-)
The only way your one-shot training would work is if each of your training images was specifically designed to include exactly those features that differentiate this class from the other 299, and no other visual information. Unfortunately, to identify what features those might be, and to provide canonical training photos, you'd have to know in advance what patterns the model needed to pick.
This entirely defeats the use case of deep learning and model training.
If you were to train on each image only once, the model probably wouldn't be able to detect it yet. If you train it more, it will probably overfit and only recognize that one image. If that is what you are trying to do, then you should write an algorithm that searches the screen for that image instead (it will be more efficient).
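If searching the screen for a fixed image really is the goal, a hedged OpenCV template-matching sketch (the file names and confidence threshold are assumptions):

import cv2

screen = cv2.imread("screenshot.png")   # hypothetical screen capture
template = cv2.imread("item.png")       # the single reference image
# Slide the template over the screen and score each position.
result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.8:                        # assumed match threshold
    print("item found at", max_loc)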
1) You'll probably have problems with the generalization of your model because of the lack of training data. In other words, your model will not "learn" anything about each class.
2) It's good to have a better training set in order to create a better model.
I'm wondering whether the result on the test set can be used to optimize the model's weights. I'm trying to build a model, but the issue is that I don't have much data, because the subjects are medical research patients. The number of patients in my case is limited (61), and I have 5 feature vectors per patient. What I tried is to create a deep learning model by excluding one subject and using the excluded subject as the test set. My problem is that there is large variability in subject features, and my model fits the training set (60 subjects) well, but not the one excluded subject.
So I'm wondering whether the test set (in my case the excluded subject) could be used in some way to make the model converge so that it classifies the excluded subject better?
You should not use the test portion of your dataset in your training process. If your training data is not enough, one approach used a lot these days (especially for medical images) is data augmentation, so I highly recommend you use this technique in your training process. "How to use Deep Learning when you have Limited Data" is a good tutorial about data augmentation.
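A minimal Keras sketch of image-style augmentation (the transform parameters are illustrative; since this question involves feature vectors rather than images, something simpler such as adding small Gaussian noise may be more appropriate):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generate randomly perturbed copies of the training images on the fly.
augmenter = ImageDataGenerator(rotation_range=15,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               zoom_range=0.1,
                               horizontal_flip=True)
# model.fit(augmenter.flow(x_train, y_train, batch_size=32), epochs=50)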
No, you shouldn't use your test set for training, in order to prevent overfitting. If you follow cross-validation principles, you need to split your data into three datasets: a training set, which you'll use to train your model; a validation set, to try different values of your hyperparameters; and a test set, to finally test your model. If you use all your data for training, your model will obviously overfit.
One thing to remember: deep learning works well when you have a large and very rich dataset.
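Since the question describes a leave-one-subject-out setup, here is a hedged scikit-learn sketch of evaluating that way (build_model, the x/y arrays, and patient_ids are placeholders; it assumes the model is compiled with an accuracy metric):

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(x, y, groups=patient_ids):
    model = build_model()  # hypothetical factory returning a fresh compiled model
    model.fit(x[train_idx], y[train_idx], epochs=20, verbose=0)
    # evaluate() returns [loss, accuracy] for a model compiled with metrics=["accuracy"].
    scores.append(model.evaluate(x[test_idx], y[test_idx], verbose=0)[1])
print("mean held-out accuracy:", np.mean(scores))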