Calculating similarity between two documents using word2vec, doc2vec (Python)

I am trying to calculate the similarity between two documents, each comprising more than a thousand sentences.
The baseline would be calculating cosine similarity using BOW.
However, I want to capture more of the semantic difference between the documents.
Hence, I built word embeddings and calculated document similarity by generating document vectors (simply averaging all the word vectors in each document) and measuring the cosine similarity between these document vectors.
However, since each input document is rather large, the results I get from the method above are very similar to simple BOW cosine similarity.
I have two questions:
Q1. I found that the gensim module offers soft cosine similarity. But I am having a hard time understanding how it differs from the method I used above, and I suspect it may not be a feasible mechanism for calculating similarity between millions of pairs of documents.
Q2. I found that Doc2Vec from gensim would be more appropriate for my purpose. But I realized that training Doc2Vec requires more RAM than I have (32GB); the size of my entire corpus is about 100GB. Would there be any way to train the model on a small part (say 20GB) of the entire corpus and use this model to calculate pairwise similarities across the entire corpus?
If yes, what would be a desirable training-set size, and is there any tutorial I can follow?

Ad Q1: If the similarity matrix contains the cosine similarities of the word embeddings (which it more or less does, see Equation 4 in SimBow at SemEval-2017 Task 3) and if the word embeddings are L2-normalized, then the SCM (Soft Cosine Measure) is equivalent to averaging the word embeddings (i.e. your baseline). For a proof, see Lemma 3.3 in the Implementation Notes for the SCM. My Gensim implementation of the SCM (1, 2) additionally sparsifies the similarity matrix to keep the memory footprint small and to regularize the embeddings, so you will get slightly different results compared to vanilla SCM. If embedding averaging gives you similar results to simple BOW cosine similarity, I would question the quality of the embeddings.
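For reference, a minimal sketch of computing the SCM with Gensim's soft cosine classes (assuming gensim 4.x; the tokenized documents and the vector file are placeholders):

from gensim.corpora import Dictionary
from gensim.models import KeyedVectors
from gensim.similarities import (SoftCosineSimilarity,
                                 SparseTermSimilarityMatrix,
                                 WordEmbeddingSimilarityIndex)

# Hypothetical tokenized documents; in practice, stream your own corpus.
docs = [["machine", "learning", "models"],
        ["deep", "neural", "networks"]]

word_vectors = KeyedVectors.load("my_word2vec.kv")   # placeholder vector file

dictionary = Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

# Sparse term-similarity matrix built from the word embeddings.
termsim_index = WordEmbeddingSimilarityIndex(word_vectors)
similarity_matrix = SparseTermSimilarityMatrix(termsim_index, dictionary)

# Index the corpus and query it with the first document.
index = SoftCosineSimilarity(bow_corpus, similarity_matrix)
print(index[dictionary.doc2bow(docs[0])])  # SCM of doc 0 against all indexed docs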
Ad Q2: Training a Doc2Vec model on the entire dataset for one epoch is equivalent to training a Doc2Vec model on smaller segments of the entire dataset, one epoch for each segment. Just be aware that Doc2Vec uses document ids as a part of the training process, so you must ensure that the ids are still unique after the segmentation (i.e. the first document of the first segment must have a different id than the first document of the second segment).
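To make the unique-id requirement concrete, here is a minimal sketch of a re-iterable corpus that streams TaggedDocuments from several segment files while keeping the tags globally unique (file names and model parameters are placeholders):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

class SegmentStream:
    """Stream TaggedDocuments from several segment files with globally unique tags."""
    def __init__(self, segment_paths):
        self.segment_paths = segment_paths

    def __iter__(self):
        doc_id = 0
        for path in self.segment_paths:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    yield TaggedDocument(words=line.lower().split(),
                                         tags=["doc_%d" % doc_id])
                    doc_id += 1

corpus = SegmentStream(["segment_00.txt", "segment_01.txt"])  # placeholder files
model = Doc2Vec(vector_size=300, min_count=5, workers=4, epochs=1)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)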

Related

What's the best way to classify text data in ML?

Let's say I have a dataset consisting of a review column with exactly 100 words for each review; then it may be easy to train my model, as I can simply tokenize each of the 100 words of each review, convert them into a numerical array, and feed it into a Sequential model with input_shape=(1,100). But in the real world, reviews are never the same size. If I use a function such as CountVectorizer, then the structure of the sentence is not preserved, and one-hot encoding may not be efficient enough.
So what is the proper way to preprocess this particular dataset so that I can feed it into a trainable NN?
A common way to represent text as vectors is by utilizing word embeddings. The main idea is that you use a large text corpus to compute vector representations of all words occurring in that corpus. For each review, you can then run the following algorithm to compute its vector representation:
1. For each word in the review, check if a word embedding exists (in other words, whether that word occurred in the large training corpus) and, if it does, add its vector representation to the representation of the review.
2. Once you have summed up the vector representations of all words, compute the average embedding by dividing the summed review vector by the number of words in the review; this is the final vector representation for that document.
3. This vector can now be fed into a trainable NN.
Before performing steps 1-3, you could also apply more preprocessing steps and remove filler words such as "and", "or", etc., as they usually carry no meaning; you could convert words to lower case and apply other standard NLP (natural language processing) techniques, which could affect the vector representation of the reviews. But the key idea is to sum up the word vectors of a review and use its averaged vector as the representation of the review (see the sketch below). By averaging, the length of the reviews becomes unimportant. Similarly, in word embeddings, the dimensionality of the word vectors is fixed (100D, 200D, ...), so you can experiment to find the most suitable dimensionality.
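A minimal sketch of steps 1-3, assuming word_vectors is a dict-like mapping from word to NumPy vector (for example a gensim KeyedVectors object); the function name is just for illustration:

import numpy as np

def review_vector(tokens, word_vectors, dim=100):
    """Average the embeddings of all in-vocabulary tokens of one review."""
    summed = np.zeros(dim)
    n_known = 0
    for token in tokens:
        if token in word_vectors:          # step 1: skip words without an embedding
            summed += word_vectors[token]  # step 1: accumulate the word vector
            n_known += 1
    if n_known == 0:
        return summed                      # all-zero vector if no word was known
    return summed / n_known                # step 2: average over the known words

# Step 3: the resulting fixed-length vector can be fed into a trainable NN.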
Note that there are many different models available that compute word embeddings, so you could choose any of them. One that is nicely integrated into Python is word2vec.
And a state-of-the-art model that is currently being used by Google is called BERT.

Use tf-idf with FastText vectors

I'm interested in using tf-idf with the FastText library, but haven't found a logical way to handle the n-grams. I have already used tf-idf with SpaCy vectors, for which I found several examples like these:
http://dsgeek.com/2018/02/19/tfidf_vectors.html
https://www.aclweb.org/anthology/P16-1089
http://nadbordrozd.github.io/blog/2016/05/20/text-classification-with-word2vec/
But for the FastText library it is not that clear to me, since it has a granularity that isn't as intuitive, e.g.
For a general word2vec approach I will have one vector for each word; I can count the term frequency of that word and divide its value accordingly.
But in fastText the same word will have several n-grams:
"Listen to the latest news summary" will have n-grams generated by a sliding window, like:
lis ist ste ten tot het...
These n-grams are handled internally by the model, so when I try:
model["Listen to the latest news summary"]
I get the final vector directly. Hence, what I have thought of is to split the text into n-grams before feeding the model, like:
model['lis']
model['ist']
model['ten']
and make the tf-idf from there, but that seems like an inefficient approach. Is there a standard way to apply tf-idf to n-gram vectors like these?
I would let FastText deal with the trigrams, but keep building the tfidf-weighted embeddings at the word level.
That is, you send
model["Listen"]
model["to"]
model["the"]
...
to FastText, and then use your old code to get the tf-idf weights.
In any case, it would be good to know whether FastText itself considers the word construct when processing a sentence, or whether it truly only works on it as a sequence of trigrams (blending consecutive words). If the latter is true, then for FastText you would lose information by breaking the sentence into separate words.
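A minimal sketch of the word-level combination suggested above, assuming gensim 4.x and scikit-learn >= 1.0 (the corpus, the model parameters, and the helper name are placeholders):

import numpy as np
from gensim.models import FastText
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical corpus; in practice, use your own documents.
corpus = ["listen to the latest news summary",
          "the latest football news"]
tokenized = [doc.split() for doc in corpus]

# Train (or load) a FastText model; the parameters here are illustrative only.
ft_model = FastText(tokenized, vector_size=100, min_count=1, epochs=10)

# Fit tf-idf at the word level, as suggested above.
tfidf = TfidfVectorizer()
tfidf.fit(corpus)
idf_lookup = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def weighted_doc_vector(tokens):
    """Average FastText word vectors, weighting each word by its idf."""
    vectors, weights = [], []
    for token in tokens:
        vectors.append(ft_model.wv[token])          # FastText sums subword n-grams internally
        weights.append(idf_lookup.get(token, 1.0))  # default weight for unseen words
    return np.average(vectors, axis=0, weights=weights)

doc_vec = weighted_doc_vector("listen to the latest news".split())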
You are talking about the fastText tokenization step (not the fastText embeddings), which is a (3,6) char-n-gram tokenization that is compatible with tf-idf. The full step can be computed outside of fastText quite easily; see Calculate TF-IDF using sklearn for n-grams in Python.
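A minimal sketch of that outside-of-fastText computation with scikit-learn's character n-gram analyzer (the documents are placeholders, and analyzer="char_wb" only approximates fastText's own subword scheme):

from sklearn.feature_extraction.text import TfidfVectorizer

# Character n-grams of length 3 to 6, mirroring fastText's default subword range;
# analyzer="char_wb" builds n-grams within word boundaries only.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 6))

docs = ["Listen to the latest news summary",
        "The latest football news"]            # placeholder documents
tfidf_matrix = vectorizer.fit_transform(docs)

print(tfidf_matrix.shape)                       # (n_docs, n_char_ngrams)
print(vectorizer.get_feature_names_out()[:10])  # a few of the extracted n-grams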
From what I understand of your question, you are confusing word embedding methods (such as word2vec and many others) with Tf-Idf:
Basically, word embedding methods are unsupervised models for generating word vectors. The word vectors generated by this kind of model are now very popular in NLP tasks. This is because a word embedding representation of a word captures more information about the word than just a one-hot representation, since the former captures the semantic similarity of that word to other words, whereas the latter representation is equidistant from all other words. FastText is another way to implement word embeddings (recently open-sourced by Facebook researchers).
Tf-idf, instead, is a scoring scheme for words, that is, a measure of how important a word is to a document.
From a practical usage standpoint, while tf-idf is a simple scoring scheme and that is its key advantage, word embeddings may be a better choice for most tasks where tf-idf is used, particularly when the task can benefit from the semantic similarity captured by word embeddings (e.g. in information retrieval tasks).
Unlike Word2Vec, which learns a vector representation of the entire word, FastText learns a representation for each character n-gram of the word, as you have already seen, so the overall word embedding is the sum of its n-gram representations. Because the FastText model works at this finer granularity (the number of n-grams is larger than the number of words), it performs better than Word2Vec and allows rare words to be represented appropriately.
From my standpoint, in general it does not make sense to use FastText (or any word embedding method) together with Tf-Idf. But if you want to use Tf-Idf with FastText, you must sum all the n-grams that compose each word and use that representation to calculate the Tf-Idf.

gensim doc2vec - How to infer label

I am using gensim's doc2vec implementation and I have a few thousand documents tagged with four labels.
yield TaggedDocument(text_tokens, [labels])
I'm training a Doc2Vec model with a list of these TaggedDocuments. However, I'm not sure how to infer the tag for a document that was not seen during training. I see that there is an infer_vector method which returns the embedding vector. But how can I get the most likely label from that?
An idea would be to infer the vectors for every label that I have and then calculate the cosine similarity between these vectors and the vector for the new document I want to classify. Is this the way to go? If so, how can I get the vectors for each of my four labels?
The infer_vector() method will train up a doc-vector for a new text, which should be a list of tokens that were preprocessed just like the training texts.
And, as you've noted, model.docvecs['my_tag'] will get the pre-trained doc-vector for one of the tags that was known during training.
Checking the similarity of a new vector, against the vectors for all known-tags, is a reasonable baseline way to see what existing tags a new document is similar-to. The closest tag, or closest few tags, might be reasonable labels for an unknown document, as a sort of 'nearest-neighbor' approach.
But, note that the original/usual Doc2Vec approach is to give each document a unique ID, and let each ID-tag get its own vector. And then, perhaps, use those vectors with known-labels to train some other classifier that maps vectors to labels. (This might work better in some cases, if the "areas of the doc-vector space" that humans associate with a particular label aren't neat radiuses around a single centroid point for each label.)
Your approach of using, or adding, known-labels as doc-tags can often help. But also note that if you're only using 4 unique tags across thousands of documents, that's functionally very similar to just training the model with 4 giant documents – which may not be good at positioning those 4 vectors in a large-dimensional space (>4 dimensions), because there's not so much of the variety/subtle-contrasts that are needed to nudge the trained vectors into useful arrangements. (Typical published Doc2Vec work uses tens-of-thousands to millions of unique docs and doc-tags.)
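A minimal sketch of that nearest-tag baseline, assuming a trained model saved to disk (the model path and tag names are hypothetical; in gensim 4.x model.docvecs is accessed as model.dv):

import numpy as np
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load("my_doc2vec.model")                # placeholder model path
labels = ["label_a", "label_b", "label_c", "label_d"]   # your four training tags

# Infer a vector for a new document, preprocessed just like the training texts.
new_tokens = "tokens of the unseen document".split()
new_vec = model.infer_vector(new_tokens)

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Compare against the trained vector of each known tag and pick the closest.
similarities = {tag: cosine(new_vec, model.docvecs[tag]) for tag in labels}
print(max(similarities, key=similarities.get), similarities)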
I found the solution:
model.docvecs['my_tag']
gives me the vector for a given tag. Easy

Weighting specific features in TF-IDF feature vectors for k-means clustering and cosine similarity

I have an array of TF-IDF feature vectors. I'd like to find similar vectors in the array using two methods:
Cosine similarity
k-means clustering
Using Scikit Learn, this process is pretty simple.
Now I'd like to weight certain features so that they will influence the results more than the other features. For example, I might like to weight the first 100 elements of the TF-IDF vectors so that those features are more indicative of similarity than the rest of the features.
How can I meaningfully weight certain features in my feature vectors? Is the process for weighting certain features the same for each of the similarity algorithms I listed above?
As I understand it, low values in the TF-IDF matrix mean that the words are less significant. So one approach is to lower the values in the matrix for the columns you consider less important (or, conversely, scale up the columns you want to emphasize).
The arrays in scikit-learn are sparse, so for testing and debugging you might want to convert them to a regular matrix. I also used xlsxwriter to get an overview of what is really happening when applying TF-IDF and KMeans++; see https://www.dbc-enterprise-it-consulting.com/text-classifier/.
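A sketch of that column-scaling idea (the documents, the boost factor, and the choice of the first 100 features are placeholders):

import numpy as np
from scipy import sparse
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["first example document", "another example document text"]  # placeholders
X = TfidfVectorizer().fit_transform(docs)        # sparse, shape (n_docs, n_features)

# Boost the first 100 features (slicing is safe even if there are fewer than 100).
weights = np.ones(X.shape[1])
weights[:100] = 2.0                                  # illustrative boost factor
X_weighted = sparse.csr_matrix(X.multiply(weights))  # column-wise scaling

# The re-weighted matrix can be used for both cosine similarity and k-means.
sims = cosine_similarity(X_weighted)
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_weighted)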

Unsupervised Clustering of Words in a document semantically

I want to cluster words based on their semantic similarity. Currently I have a list of documents with detected noun phrases in them. I want to take the nouns obtained from these documents and cluster them semantically, in an unsupervised way.
I have looked at the wordnet and gensim libraries. Any suggestions as to which of them can really help in getting the required clusters of words based on their semantic similarity?
For similarity based on phrase co-occurrence (phrases appearing more often together in documents will be more similar), you can use gensim.
Check out the Latent Semantic Analysis and Latent Dirichlet Allocation there: http://radimrehurek.com/gensim/tut2.html#available-transformations
Depending on what exactly you want your clusters to do, you can either use the LSI/LDA topics directly as clusters, or cluster the obtained latent phrase vectors yourself, etc.
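A minimal sketch of the LSI route in gensim (the phrase lists and the number of topics are placeholders); the resulting topic vectors can either be read as soft cluster memberships or fed to a separate clustering step:

from gensim import corpora, models

# Hypothetical noun phrases extracted per document.
documents = [["neural", "network", "training"],
             ["football", "match", "score"],
             ["gradient", "descent", "optimizer"]]

dictionary = corpora.Dictionary(documents)
bow = [dictionary.doc2bow(doc) for doc in documents]

# Latent Semantic Indexing on tf-idf-weighted bags of words (num_topics is illustrative).
tfidf = models.TfidfModel(bow)
lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=2)

# Each document (or phrase) now has a dense topic vector that can be read as a
# soft cluster assignment or passed to a clustering algorithm such as KMeans.
for doc_bow in bow:
    print(lsi[tfidf[doc_bow]])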
