gensim doc2vec - How to infer label - python

I am using gensim's doc2vec implementation and I have a few thousand documents tagged with four labels.
yield TaggedDocument(text_tokens, [labels])
I'm training a Doc2Vec model with a list of these TaggedDocuments. However, I'm not sure how to infer the tag for a document that was not seen during training. I see that there is an infer_vector method which returns the embedding vector. But how can I get the most likely label from that?
An idea would be to infer the vectors for every label that I have and then calculate the cosine similarity between these vectors and the vector for the new document I want to classify. Is this the way to go? If so, how can I get the vectors for each of my four labels?

The infer_vector() method will train-up a doc-vector for a new text, which should be a list of tokens that were preprocessed just like the training texts.
And, as you've noted, model.docvecs['my_tag'] will get the pre-trained doc-vector for one of the tags that was known during training.
Checking the similarity of a new vector, against the vectors for all known-tags, is a reasonable baseline way to see what existing tags a new document is similar-to. The closest tag, or closest few tags, might be reasonable labels for an unknown document, as a sort of 'nearest-neighbor' approach.
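For example, a minimal sketch of that nearest-tag check (assuming a trained model, and a hypothetical preprocess() that tokenizes exactly like your training texts):

new_tokens = preprocess("text of the unseen document")   # same tokenization as training
new_vector = model.infer_vector(new_tokens)

# Rank the known tags by cosine similarity to the inferred vector.
# (model.docvecs in older gensim versions; model.dv in gensim 4.x.)
sims = model.docvecs.most_similar(positive=[new_vector], topn=4)
print(sims)   # e.g. [('label_a', 0.71), ('label_c', 0.55), ...] – illustrative values only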
But, note that the original/usual Doc2Vec approach is to give each document a unique ID, and let each ID-tag get its own vector. And then, perhaps, use those vectors with known-labels to train some other classifier that maps vectors to labels. (This might work better in some cases, if the "areas of the doc-vector space" that humans associate with a particular label aren't neat radiuses around a single centroid point for each label.)
Your approach of using, or adding, known-labels as doc-tags can often help. But also note that if you're only using 4 unique tags across thousands of documents, that's functionally very similar to just training the model with 4 giant documents – which may not be good at positioning those 4 vectors in a large-dimensional space (>4 dimensions), because there's not so much of the variety/subtle-contrasts that are needed to nudge the trained vectors into useful arrangements. (Typical published Doc2Vec work uses tens-of-thousands to millions of unique docs and doc-tags.)

I found the solution:
model.docvecs['my_tag']
gives me the vector for a given tag. Easy

Related

Why does KNN algorithm perform better on Word2Vec than on TF-IDF vector representation?

I am doing a project on multi-class text classification and could do with some advice.
I have a dataset of reviews which are classified into 7 product categories.
Firstly, I create a term-document matrix using TF-IDF (TfidfVectorizer from sklearn). This generates an n x m matrix where n is the number of reviews in my dataset and m is the number of features.
Then, after splitting the term-document matrix into an 80:20 train:test split, I pass it through the K-Nearest Neighbours (KNN) algorithm and achieve an accuracy of 53%.
In another experiment, I used the Google News Word2Vec pretrained embeddings (300-dimensional) and averaged all the word vectors for each review. So, each review consists of x words and each of the words has a 300-dimensional vector. These vectors are averaged to produce one 300-dimensional vector per review.
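Roughly, the averaging step looks like this (a sketch; the file path and the reviews variable, a list of token lists, are assumptions):

import numpy as np
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

def review_vector(tokens, dim=300):
    # average the vectors of all in-vocabulary words; zero vector if none are covered
    vecs = [kv[w] for w in tokens if w in kv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

X = np.vstack([review_vector(tokens) for tokens in reviews])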
Then I pass this matrix through KNN. I get an accuracy of 72%.
As for other classifiers that I tested on the same dataset, all of them performed better on the TF-IDF method of vectorization. However, KNN performed better on word2vec.
Can anyone help me understand why there is a jump in accuracy for KNN in using the word2vec method as compared to when using the tfidf method?
By using the external word-vectors, you've introduced extra info about the words into the word2vec-derived features – info that simply may not be deducible at all from the plain word-occurrence (TF-IDF) model.
For example, imagine that just a single review in your train set, and another single review in your test set, use some less-common word for car like jalopy – but then zero other car-associated words.
A TFIDF model will have a weight for that unique term in a particular slot - but may have no other hints in the training dataset that jalopy is related to cars at all. In TFIDF space, that weight will just make those 2 reviews more-distant from all other reviews (which have a 0.0 in that dimension). It doesn't help or hurt much. A review 'nice jalopy' will be no closer to 'nice car' than it is to 'nice movie'.
On the other hand, if the GoogleNews set has a vector for that word, and that vector is fairly close to car, auto, wheels, etc., then reviews with all those words will be shifted a little in the same direction in the word2vec space, giving an extra hint to some classifiers – especially, perhaps, the KNN one. Now, 'nice jalopy' is quite a bit closer to 'nice car' than to 'nice movie' or most other 'nice [X]' reviews.
Word-vectors from an outside source may not have great coverage of your dataset's domain words. (Words in GoogleNews, from a circa-2013 training run on news articles, might miss both words and word-senses in your different & more-recent reviews.) And summarizing a text by averaging all its words is a very crude method: it can learn nothing from word-ordering/grammar (which can often reverse the intended sense), and aspects of words may all cancel out or dilute each other in longer texts.
But still, it's bringing in more language info that otherwise wouldn't be in the data at all, so in some cases it may help.
If your dataset is sufficiently large, training your own word-vectors may help a bit, too. (Though, the gain you've seen so far suggests some useful patterns of word-similarities may not be well-taught from your limited dataset.)
Of course, also note that you can use blended techniques. Perhaps each text can be even better represented by a concatenation of the N TF-IDF dimensions and the M word2vec-average dimensions. (If your texts have many significant 2-word phrases that mean different things than the individual words, adding in word-2-gram features may help. If your texts have many typos or rare word variants that still share word-roots with other words, then adding in character n-grams – word fragments – may help.)
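For example, one way that blended representation could be built (a sketch; tfidf_matrix from a fitted TfidfVectorizer and w2v_matrix holding the per-review averaged word-vectors are placeholders):

from scipy.sparse import hstack, csr_matrix

# concatenate the N TF-IDF dimensions with the M word2vec-average dimensions per review
X_blended = hstack([tfidf_matrix, csr_matrix(w2v_matrix)]).tocsr()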

Gensim pretrained model similarity

Problem :
I'm using the GloVe pre-trained vectors to retrain my model on a specific domain, say #cars. After training, I want to find similar words within my domain, but I get words that are not in my domain corpus; I believe they come from GloVe's vectors.
model_2.most_similar(positive=['spacious'], topn=10)
[('bedrooms', 0.6275501251220703),
('roomy', 0.6149100065231323),
('luxurious', 0.6105825901031494),
('rooms', 0.5935696363449097),
('furnished', 0.5897485613822937),
('cramped', 0.5892841219902039),
('courtyard', 0.5721820592880249),
('bathrooms', 0.5618442893028259),
('opulent', 0.5592212677001953),
('expansive', 0.555268406867981)]
Here I expect something like leg-room – the car's spacious features mentioned in the domain's corpus. How can we exclude the GloVe vectors while still getting similar vectors?
Thanks
There may not be enough info in a simple set of generic word-vectors to filter neighbors by domain-of-use.
You could try using a mixed-weighting: combine the similarities to 'spacious', and to 'cars', and return the top results in that combination – and it might help a little.
Supplying more than one positive word to the most_similar() method might approximate this. If you're sure of some major sources of interference/overlap, you might even be able to use negative word examples, similar to how word2vec finds candidate answers for analogies (though this might also suppress useful results that are legitimately related to both domains, like 'roomy'). For example:
candidates = vec_model.most_similar(positive=['spacious', 'car'],
                                    negative=['house'])
(Instead of using single words like 'car' or 'house' you could also try using vectors combined from many words that define a domain.)
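For example (a sketch; the word list is illustrative, and vec_model is assumed to be the word-vectors object, e.g. a gensim KeyedVectors):

import numpy as np

car_words = ['car', 'vehicle', 'sedan', 'suv', 'interior']
# average several car-related words into a single "domain" vector
domain_vec = np.mean([vec_model[w] for w in car_words if w in vec_model], axis=0)

# most_similar() accepts raw vectors alongside words in its positive list
candidates = vec_model.most_similar(positive=['spacious', domain_vec], topn=10)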
But a sharp distinction sounds like a research project, rather than something easily possible with off-the-shelf libraries/vectors – and may require more sophisticated approaches and datasets.
You could also try using a set of vectors trained only on a dataset of text from the domain of interest – thus ensuring the vocabulary, and senses, of words are all in that domain.
You cannot exclude the words from an already-trained model. I don't know which framework you're working in, but I'll give an example in Keras as it makes the intention simple to understand.
What you could do is use an Embedding layer, populate it with the GloVe "knowledge", and then resume training on your corpus so that the layer learns the words and fits them to your specific domain. You can read more about it on the Keras blog.
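A rough sketch of that idea (assuming vocab_size and an embedding_matrix of shape vocab_size x 300 built from the GloVe file; the pooling/output layers and the domain_* training data are placeholders):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

model = Sequential([
    Embedding(input_dim=vocab_size,
              output_dim=300,
              weights=[embedding_matrix],   # start from the GloVe vectors
              trainable=True),              # keep trainable so the domain corpus can adjust them
    GlobalAveragePooling1D(),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(domain_sequences, domain_labels, epochs=5)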

Looking to cluster short descriptions of reports. Should I use Word2Vec or Doc2Vec

So, I have close to 2000 reports and each report has an associated short description of the problem. My goal is to cluster all of these so that we can find distinct trends within these reports.
One of the features I'd like to use is some sort of contextual text vector. Now, I've used Word2Vec and think it would be a good option, but I also saw Doc2Vec and I'm not quite sure which would be the better option for this use case.
Any feedback would be greatly appreciated.
They're very similar, so just as you'd try tuning parameters in some rigorous manner to improve results within a single approach, you should try them both and compare the results.
Your dataset sounds tiny compared to what either needs to induce good vectors – Word2Vec is best trained on corpuses of many millions to billions of words, while Doc2Vec's published results rely on tens-of-thousands to millions of documents.
If composing some summary-vector-of-the-document from word-vectors, you could potentially leverage word-vectors that are reused from elsewhere, but that will work best if the vectors' original training corpus is similar in vocabulary/domain-language-usage to your corpus. For example, don't expect words trained on formal news writing to work well with, or even cover the same vocabulary as, informal tweets, or vice-versa.
If you had a larger similar-text corpus of documents to train a Doc2Vec model, you could potentially train a good model on the full set of documents, but then just use your small subset, or re-infer vectors for your small subset, and get better results than a model that was only trained on your subset.
Strictly for clustering, and with your current small corpus of short texts, if you have good word-vectors from elsewhere, it may be worth looking at the "Word Mover's Distance" method of calculating pairwise document-to-document similarity. It can be expensive to calculate on larger docs and large document-sets, but might support clustering well.
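For example, a minimal Word Mover's Distance check with gensim (assuming kv holds word-vectors that cover your report vocabulary; depending on the gensim version this needs an extra package such as pyemd or POT):

doc_a = ['pump', 'failure', 'during', 'startup']   # illustrative tokenized descriptions
doc_b = ['motor', 'failed', 'on', 'startup']

distance = kv.wmdistance(doc_a, doc_b)   # lower distance = more similar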

Formatting and combining word frequency with other data machine learning python

I'm new to machine learning algorithms. I extensively read the scikit-learn website and other SO posts, which led me to build my first machine learning algorithm using RandomForestClassifier and LinearSVC.
I'm working on medical notes. Each stay of a patient is associated (or not) with a code corresponding to a complication (bleeding, infection, heart attack...).
Using the notes, fitted and transformed with CountVectorizer and TfidfTransformer, I can accurately predict most of the codes. However, I'd like to add more data to my training dataset: length of stay, number of operations, titles of operations, ICU stay duration... etc.
After parsing the web and SO, I ended up adding all continuous/binary/scaled values to my word-frequency array.
e.g. [0, 0, 0.34, 0, 0.45, 0, 2, 45] (the last 2 numbers are the added data, whereas the previous ones match CountVectorizer and tfidf.fit_transform(train_set))
However, this seems to me a gross way to combine data, and the huge number of words could mask the other data.
I tried to set my data like: [[0,0,0.34,0,0.45,0],[2],[45]] but it doesn't work.
I searched the web but found no real clue, even though I might not be the first one facing this issue... :p
Thanks for your help
Edit:
Thanks for your detailed, valuable answer. I really appreciate it. However, what exactly is the range 0-1: is it the predict_proba value (http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier.predict)? I understood that the score is the accuracy of the prediction model. Then, when you have all your predictions depending on each variable, do you average all of them? Finally, I'm working with multiple outputs; I guess it's not a problem since I can get a prediction for each of the outputs (btw, predict_proba(X) gives me an array like [array([[0.,1.]]), array([[0.2,0.8]]) ...] with a random forest classifier. I guess one of the numbers is the probability of the output, but I haven't explored this yet!)
Your first solution of just appending to the list is the correct one. However, you should think about what this implies. If you have 100 words and add two additional features, each specific word will get the same "weight" as the added features – i.e. your added features won't be treated very strongly in the model. Additionally, you're saying that the last feature with a value of 45 is 100x the value of the feature 4th from the end (0.45).
One common way to get around that is to use an ensemble model. Instead of adding those features to your list of words and predicting, first build a prediction model just using the words. That prediction will be in the range 0-1 and will capture the "sentiment" of the article. Then, scale your other variables (minmax scaler, normal distribution, etc.). Finally, combine the score from the words with the last two scaled variables and run another prediction on a list like this [.86,.2,.65]. In this way, you have transformed all of the words to a sentiment score, which you can use as a feature.
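A sketch of that flow (the names X_words_train, X_other_train and y_train are placeholders for your TF-IDF matrix, the extra stay-level variables and the target code):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier

# stage 1: a model on the word features alone, giving a 0-1 score per note
word_model = LogisticRegression(max_iter=1000).fit(X_words_train, y_train)
word_score = word_model.predict_proba(X_words_train)[:, 1]

# stage 2: scale the other variables to 0-1 and combine with the word score
other_scaled = MinMaxScaler().fit_transform(X_other_train)
stacked = np.column_stack([word_score, other_scaled])   # rows like [.86, .2, .65]

final_model = RandomForestClassifier().fit(stacked, y_train)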
Hope that helps.
EDIT PER YOUR UPDATE ABOVE
Yes, in this instance you could use predict_proba, but really, if everything is scaled correctly and you are using 1/0 as your targets for a class, you don't need predict_proba. The idea is to take the prediction from the words and combine it with the other variables. You do not average the predictions, you make a prediction from the predictions! This is called ensemble learning. Train another model with the output of your predictions as the features. Here is a flow of what you need to do.
Thanks for your time and your detailed answer. I think I get it. In short:
Prediction based on words: for each bag of words of the training set (t1), you pull out a "sentiment".
Create a new array for each training-set row with the sentiment and the other values -> new training set (t2).
Make a prediction based on t2.
Apply previous steps to the test.
One more question though !
What is the "sentiment" value?! For each bag of words, I have a sparse matrix (CountVectorizer + tf-idf). So how do you calculate the sentiment? Do you run each row of the test against the rest of the test? And is your sentiment the clf.predict(X) value?

NLTK - Multi-labeled Classification

I am using NLTK to classify documents, each having 1 label, with there being 10 types of documents.
For text extraction, I am cleaning the text (punctuation removal, HTML tag removal, lowercasing) and removing nltk.corpus.stopwords as well as my own collection of stopwords.
For my document features, I am looking across all 50k documents and gathering the top 2k words by frequency (frequency_words), then for each document identifying which of its words are also in the global frequency_words.
I am then passing each document as a hashmap of {word: boolean} into nltk.NaiveBayesClassifier(...). I have a 20:80 test:training ratio in regards to the total number of documents.
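Roughly, that pipeline looks like this (a sketch with placeholder names; documents is assumed to be a list of (token_list, label) pairs after cleaning):

import nltk

all_words = nltk.FreqDist(w for tokens, _ in documents for w in tokens)
top_words = [w for w, _ in all_words.most_common(2000)]

def doc_features(tokens):
    token_set = set(tokens)
    return {w: (w in token_set) for w in top_words}

featuresets = [(doc_features(tokens), label) for tokens, label in documents]
split = int(0.8 * len(featuresets))                      # 80% train, 20% test
train_set, test_set = featuresets[:split], featuresets[split:]

classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))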
The issues I am having:
Is this classifier by NLTK suitable for multi-labelled data? All examples I have seen are more about 2-class classification, such as whether something is declared positive or negative.
The documents are such that they should have a set of key skills in them – unfortunately I haven't got a corpus of where these skills lie. So I have taken this approach with the understanding that a word count per document would not be a good feature extractor – is this correct? Each document has been written by an individual, so I need to leave room for individual variation in the documents. I am aware of sklearn's MultinomialNB, which deals with word counts.
Is there an alternative library I should be using, or variation of this algorithm?
Thanks!
Terminology: documents are to be classified into 10 different classes, which makes it a multi-class classification problem. Along with that, if you want to classify documents with multiple labels, then you can call it multi-class multi-label classification.
For the issues which you are facing,
nltk.NaiveBayesClassifier() is an out-of-the-box multi-class classifier, so yes, you can use it to solve this problem. As for multi-labelled data, if your labels are a,b,c,d,e,f,g,h,i,j then you have to define label 'b' of a particular document as '0,1,0,0,0,0,0,0,0,0'.
Feature extraction is the hardest part of classification (machine learning). I recommend you look into different algorithms to understand and select the one that best suits your data (without looking at your data, it is tough to recommend which algorithm/implementation to use).
There are many different libraries out there for classification. I personally used scikit-learn and I can say it was a good out-of-the-box classifier.
Note: using scikit-learn, I was able to achieve results within a week, even though the dataset was huge and there were other setbacks.
