I'm trying to write a program to evaluate semantic similarity between texts. I have already compared n-gram frequencies between texts (a lexical measure). I wanted something a bit less shallow than this, and I figured that looking at similarity in sentence construction would be one way to evaluate text similarity.
However, all I can figure out how to do is to count the POS tags (for example, 4 nouns per text, 2 verbs, etc.). This ends up similar to just counting n-grams (and actually works less well than the n-grams).
import nltk
from collections import Counter

postags = nltk.pos_tag(tokens)
self.pos_freq_dist = Counter(tag for word, tag in postags)
for pos, freq in self.pos_freq_dist.items():  # .iteritems() is Python 2 only
    self.pos_freq_dist_relative[pos] = freq / self.token_count  # normalise POS freq by token count
Lots of people (Pearson, ETS Research, IBM, academics, etc.) use parts of speech for deeper measures, but no one says how they have done it. How can parts of speech be used for a 'deeper' measure of semantic text similarity?
A more sophisticated tagger is required, such as the one described at http://phpir.com/part-of-speech-tagging/.
You will need to write algorithms and build word banks to determine the meaning or intention of sentences; semantic analysis is an artificial-intelligence problem.
Nouns and capitalized nouns will be the subjects of the content. Adjectives will give some hint as to the polarity of the content: vagueness, clarity, power, weakness, and so on, depending on the types of words used. The possibilities are endless.
Take a look at chapter 6 of the NLTK Book. It should give you plenty of ideas for features you can use to classify text.
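One concrete step beyond counting individual tags is to compare POS tag *sequences* (POS n-grams), which captures some sentence construction rather than just tag totals. Below is a minimal sketch: it assumes you already have the tag sequences (e.g. the second element of each pair from nltk.pos_tag) and compares them with cosine similarity over POS bigram counts. The sample tag lists are made up for illustration.

```python
from collections import Counter
from math import sqrt

def pos_ngram_similarity(tags_a, tags_b, n=2):
    """Cosine similarity between POS n-gram frequency vectors.

    tags_a / tags_b are lists of POS tags, e.g. [t for w, t in nltk.pos_tag(tokens)].
    """
    grams_a = Counter(tuple(tags_a[i:i + n]) for i in range(len(tags_a) - n + 1))
    grams_b = Counter(tuple(tags_b[i:i + n]) for i in range(len(tags_b) - n + 1))
    dot = sum(grams_a[g] * grams_b[g] for g in grams_a)
    norm_a = sqrt(sum(c * c for c in grams_a.values()))
    norm_b = sqrt(sum(c * c for c in grams_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical tag sequences, in the shape nltk.pos_tag would produce:
a = ["DT", "NN", "VBZ", "DT", "NN"]
b = ["DT", "NN", "VBZ", "DT", "JJ", "NN"]
print(pos_ngram_similarity(a, b))
```

Texts with similar sentence construction share many tag transitions even when their vocabulary differs, which is exactly what plain tag counts miss.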
I am trying to find, efficiently, a similarity between a short phrase and a large corpus. For example, suppose my corpus is the book Moby Dick, which has tens of thousands of words.
In addition to that, I have a few short phrases, for example:
phrase1 = "Call me Ishmael"  # This is exactly the first sentence in the book.
phrase2 = "Call me Isabel"   # Like the first, but with a few letters changed in the third word.
phrase3 = "Call me Is mael"  # A similar sentence, but with one word split in two.
In addition, I have of course many other sentences that are not similar to sentences from the book.
I wonder what the generic and efficient way is to identify phrases that have a similar sentence in the book.
What I have tried so far (which seems inadequate to me):
I split all the input sentences into 3/4/5/6 n-grams.
I split all the corpus sentences into 3/4/5/6 n-grams.
Then I tried to find an approximate match (with FuzzySet) between all possible combinations of corpus n-grams and input n-grams. (The combinations are needed to catch cases where words have been split or merged.)
It is quite clear to me that this method is very wasteful and probably also not the most accurate for my needs. I would love to understand how best to do it.
You can use corpus-based spell correction followed by FuzzySet. For spell correction, you can use a Python implementation of the SymSpell algorithm; here you can find the list of repositories implementing SymSpell. Use SymSpell with its compound configuration.
Use a speller trained on the corpus to spell-correct the short sentences.
Use FuzzySet/fuzzywuzzy to find a similarity score between each spell-corrected sentence and each sentence in the corpus.
Define a threshold by experiment; if the similarity is above the threshold, call it a match.
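The thresholded fuzzy-matching step can be sketched with the standard library alone. Here difflib.SequenceMatcher stands in for FuzzySet/fuzzywuzzy (its character-level ratio also tolerates split or merged words like "Is mael"); the tiny corpus and the 0.8 threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

def best_corpus_match(phrase, corpus_sentences, threshold=0.8):
    """Return (best_sentence, score) if some corpus sentence matches the
    (ideally spell-corrected) phrase above the threshold, else (None, score)."""
    best, best_score = None, 0.0
    for sent in corpus_sentences:
        # Character-level ratio, so "Is mael" still lines up with "Ishmael".
        score = SequenceMatcher(None, phrase.lower(), sent.lower()).ratio()
        if score > best_score:
            best, best_score = sent, score
    return (best, best_score) if best_score >= threshold else (None, best_score)

corpus = ["Call me Ishmael.", "Some years ago, never mind how long precisely."]
print(best_corpus_match("Call me Is mael", corpus))
```

Matching against every sentence is O(phrases x sentences); for a book-sized corpus you would normally index the sentences (e.g. by character n-grams, as FuzzySet does internally) and only score candidates that share n-grams with the phrase.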
I am trying to get unique words for each topic.
I am using gensim, and this is the line that generates my model:
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=2, id2word=dictionary)
But I get repeated words across the two topics; I would like to have distinct words per topic.
You cannot enforce word uniqueness across topics in LDA, since each topic is a distribution over all the words in the vocabulary. This distribution measures the probability that words co-occur inside a topic. Thus, nothing ensures that a word won't co-occur with different words in different contexts, which leads to the same word being represented in different topics.
Let's take an example by considering these two documents:
doc1: The python is a beautiful snake living in the forest.
doc2: Python is a beautiful language used by programmer and data scientist.
In doc1 the word python co-occurs with snake, forest and living, which might give this word a good probability of appearing in a topic about, say, biology.
In doc2, the word python co-occurs with language, programmer and data, which, in this case, will associate this word with a topic about computer science.
What you can do instead is look at which topic gives each word its highest probability, and keep the word only there, in order to achieve what you want.
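That post-processing step can be sketched as follows. The topic-word probabilities here are made-up numbers in the shape you could build from ldamodel.show_topics(formatted=False); each word is assigned only to the topic where its probability is highest.

```python
# Hypothetical topic-word probabilities: {topic_id: {word: probability}}
topics = {
    0: {"python": 0.12, "snake": 0.10, "forest": 0.08, "living": 0.05},
    1: {"python": 0.15, "language": 0.11, "programmer": 0.09, "data": 0.07},
}

def unique_top_words(topics, top_n=3):
    """Keep each word only in the topic where its probability is highest,
    then return the top_n remaining words per topic."""
    best_topic = {}
    for tid, dist in topics.items():
        for word, prob in dist.items():
            if word not in best_topic or prob > topics[best_topic[word]][word]:
                best_topic[word] = tid
    result = {}
    for tid, dist in topics.items():
        kept = {w: p for w, p in dist.items() if best_topic[w] == tid}
        result[tid] = sorted(kept, key=kept.get, reverse=True)[:top_n]
    return result

print(unique_top_words(topics))
```

With these numbers, "python" survives only in topic 1 (0.15 > 0.12), so the displayed topics no longer share words, even though the underlying LDA distributions still do.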
Words that are grouped into one topic are not necessarily semantically similar (i.e. close together in a word2vec embedding space); they just co-occur more often.
I am working on natural language processing in German, in which I need to categorize words according to their meaning. E.g. 'Communication', 'Social skills' and 'Interpersonal Skills' belong to 'Communication skills', and so forth.
Basically, the words need to be sorted based on the similarity of their meaning to a given set of standard words.
I have tried Levenshtein distance, edit distance and open-source fuzzy string matching techniques, but the results are not satisfying.
The best results so far come from using Longest Common Subsequence on the list of words, but I want to match words based on their underlying meaning.
What you are looking for is "semantic similarity". One possible option is to use spaCy or another NLP framework. You would want to explore word2vec-style algorithms to help with your task.
Semantic Similarity with Spacy
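Under the hood, spaCy's .similarity() is essentially cosine similarity between learned word vectors. A minimal sketch of that idea, with tiny made-up 3-d vectors standing in for real German embeddings (e.g. from a spaCy German model):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity -- the measure spaCy's .similarity() computes
    over its word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Toy vectors standing in for real embeddings (values are illustrative only):
vectors = {
    "Kommunikation": [0.9, 0.1, 0.2],
    "Sozialkompetenz": [0.8, 0.2, 0.3],
    "Datenbank": [0.1, 0.9, 0.4],
}

def closest_category(word_vec, categories):
    """Assign a word to the standard category whose vector is nearest."""
    return max(categories, key=lambda c: cosine(word_vec, categories[c]))

print(closest_category(vectors["Sozialkompetenz"],
                       {"Kommunikation": vectors["Kommunikation"],
                        "Datenbank": vectors["Datenbank"]}))
```

This is why it beats Levenshtein or LCS for your task: the comparison happens in meaning space, not in character space, so "Sozialkompetenz" lands next to "Kommunikation" even though the strings share few letters.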
I am using Gensim Phrases to identify important n-grams in my text as follows.
from gensim.models import Phrases

bigram = Phrases(documents, min_count=5)
trigram = Phrases(bigram[documents], min_count=5)

for sent in documents:
    bigrams_ = bigram[sent]
    trigrams_ = trigram[bigram[sent]]
However, this detects uninteresting n-grams such as special issue, important matter, high risk, etc. I am particularly interested in detecting concepts in the text, such as machine learning, human computer interaction, etc.
Is there a way to stop Phrases from detecting uninteresting n-grams like the ones in my example?
Phrases has a configurable threshold parameter which adjusts the statistical cutoff for promoting word-pairs into phrases. (Larger thresholds mean fewer pairs become phrases.)
You can adjust that to try to make a greater proportion of its promoted phrases match your own ad hoc intuition about "interesting" phrases – but this class is still using a fairly crude method, without any awareness of grammar or domain knowledge beyond what's in the corpus. So any value that gets all/most of the phrases you want will likely include many uninteresting ones, or vice-versa.
If you have a priori knowledge that certain word-groups are of importance, you could preprocess the corpus yourself to combine those into single tokens, before (or instead of) the collocation-statistics-based Phrases process.
If I understand what you're trying to do, you could compare the tf-idf of your corpus to the tf-idf of a larger, say, standard corpus (Wikipedia or something).
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=500, min_df=0.2,
                                   stop_words='english', use_idf=True,
                                   ngram_range=(1, 2))
X = tfidf_vectorizer.fit_transform(docs)  # fit on the documents, then transform them to tf-idf vectors
Look only at the n-grams whose values differ greatly between the two corpora; this of course will only work if you have a large enough number of documents.
How can you determine the semantic similarity between two texts in python using WordNet?
The obvious preprocessing would be removing stop words and stemming, but then what?
The only way I can think of would be to calculate the WordNet path distance between each word in the two texts. This is standard for unigrams. But these are large (400 word) texts, that are natural language documents, with words that are not in any particular order or structure (other than those imposed by English grammar). So, which words would you compare between texts? How would you do this in python?
One thing that you can do is:
Kill the stop words
Find as many words as possible that have maximal intersections of synonyms and antonyms with those of other words in the same doc. Let's call these "the important words"
Check how close the sets of important words of the two documents are. The more they overlap, the more semantically similar your documents are.
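The final comparison step can be sketched with a simple Jaccard overlap between the two documents' important-word sets. The sets below are made-up placeholders for whatever the synonym-intersection step produces.

```python
def jaccard(set_a, set_b):
    """Jaccard overlap between two 'important word' sets:
    |intersection| / |union|, in [0, 1]."""
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

# Hypothetical important-word sets extracted from two documents:
doc1_words = {"whale", "ship", "sea", "captain"}
doc2_words = {"whale", "sea", "harpoon", "captain"}
print(jaccard(doc1_words, doc2_words))
```

For 400-word documents a graded overlap like this is more informative than asking whether the sets are exactly equal, and you could weight each shared word by its WordNet similarity to refine the score further.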
There is another way: compute sentence parse trees for the sentences in each doc, then compare the two forests. I did some similar work for a course a long time ago. Here's the code (keep in mind this was a long time ago and it was for a class, so the code is extremely hacky, to say the least).
Hope this helps