All synonyms for word in python? [duplicate] - python

This question already has answers here:
How to get synonyms from nltk WordNet Python
(8 answers)
Closed 7 years ago.
The code to get the synonyms of a word in Python is, say:
from nltk.corpus import wordnet
dog = wordnet.synset('dog.n.01')
print dog.lemma_names
>>['dog', 'domestic_dog', 'Canis_familiaris']
However, dog.n.02 gives different words. For an arbitrary word I can't know in advance how many senses there may be. How can I return all of the synonyms for a word?

Using wn.synset('dog.n.1').lemma_names is the correct way to access the synonyms of a sense. This is because a word has many senses, and it is more appropriate to list the synonyms of a particular meaning/sense. To enumerate words with similar meanings, you can possibly also look at the hyponyms.
Sadly, the size of WordNet is very limited, so there are very few lemma_names available for each sense.
Using WordNet as a dictionary/thesaurus is not very apt per se, because it was developed as an inventory of senses/meanings rather than an inventory of words. However, you can access a particular sense and the several (not many) words related to that sense. One can use WordNet as a:
Dictionary: given a word, what are the different meanings of the word
from nltk.corpus import wordnet as wn

for i, j in enumerate(wn.synsets('dog')):
    print "Meaning", i, "NLTK ID:", j.name
    print "Definition:", j.definition
Thesaurus: given a word, what are the different words for each meaning of the word
for i, j in enumerate(wn.synsets('dog')):
    print "Meaning", i, "NLTK ID:", j.name
    print "Definition:", j.definition
    print "Synonyms:", ", ".join(j.lemma_names)
    print
Ontology: given a word, what are the hyponyms (i.e. sub-types) and hypernyms (i.e. super-types).
from itertools import chain

for i, j in enumerate(wn.synsets('dog')):
    print "Meaning", i, "NLTK ID:", j.name
    print "Hypernyms:", ", ".join(list(chain(*[l.lemma_names for l in j.hypernyms()])))
    print "Hyponyms:", ", ".join(list(chain(*[l.lemma_names for l in j.hyponyms()])))
    print
[Ontology Output]
Meaning 0 NLTK ID: dog.n.01
Hypernyms: domestic_animal, domesticated_animal, canine, canid
Hyponyms: puppy, Great_Pyrenees, basenji, Newfoundland, Newfoundland_dog, lapdog, poodle, poodle_dog, Leonberg, toy_dog, toy, spitz, pooch, doggie, doggy, barker, bow-wow, cur, mongrel, mutt, Mexican_hairless, hunting_dog, working_dog, dalmatian, coach_dog, carriage_dog, pug, pug-dog, corgi, Welsh_corgi, griffon, Brussels_griffon, Belgian_griffon
Meaning 1 NLTK ID: frump.n.01
Hypernyms: unpleasant_woman, disagreeable_woman
Hyponyms:
Meaning 2 NLTK ID: dog.n.03
Hypernyms: chap, fellow, feller, fella, lad, gent, blighter, cuss, bloke
Hyponyms:
Meaning 3 NLTK ID: cad.n.01
Hypernyms: villain, scoundrel
Hyponyms: perisher

Note this other answer:
>>> wn.synsets('small')
[Synset('small.n.01'),
Synset('small.n.02'),
Synset('small.a.01'),
Synset('minor.s.10'),
Synset('little.s.03'),
Synset('small.s.04'),
Synset('humble.s.01'),
Synset('little.s.07'),
Synset('little.s.05'),
Synset('small.s.08'),
Synset('modest.s.02'),
Synset('belittled.s.01'),
Synset('small.r.01')]
Keep in mind that in your code you were trying to get the lemmas, but that's one level too deep for what you want. The synset level is about meaning, while the lemma level gives you words. In other words:
In WordNet (and I'm speaking of English WordNet here, though I think the ones in other languages are similarly organized) a lemma has senses. Specifically, a lemma (that is, a base word form that is indexed in WordNet) has exactly as many senses as the number of synsets that it participates in. Conversely, and as you say, synsets contain one or more lemmas, which means that multiple lemmas (words) can represent the same sense, or meaning.
Also have a look at the NLTK's WordNet how to for a few more ways of exploring around a meaning or a word.
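A tiny sketch of that lemma/synset distinction (using the newer method-style accessors; older NLTK versions expose these as attributes):
from nltk.corpus import wordnet as wn

print(len(wn.synsets('dog')))              # senses that the word 'dog' participates in
print(len(wn.lemmas('dog')))               # one Lemma object per synset that contains 'dog'
print(wn.synsets('dog')[0].lemma_names())  # the words that can express the sense dog.n.01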

The documentation suggests
wordnet.synsets('dog')
to get all synsets for dog.
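So, to collect every synonym of a word regardless of sense, a minimal sketch is to union the lemma names of all its synsets (lemma_names() is a method in current NLTK; older versions expose it as an attribute):
from nltk.corpus import wordnet as wn

synonyms = set()
for synset in wn.synsets('dog'):
    synonyms.update(synset.lemma_names())  # every word attached to this particular sense
print(synonyms)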

Related

Python Count Number of Phrases in Text

I have a list of product reviews/descriptions in Excel and I am trying to classify them using Python, based on words that appear in the reviews.
I import both the reviews and a list of words that would indicate the product falling into a certain classification into Python using Pandas, and then count the number of occurrences of the classification words.
This all works fine for single classification words, e.g. 'computer', but I am struggling to make it work for phrases, e.g. 'laptop case'.
I have looked through a few answers but none of them worked for me, including:
using just text.count(['laptop case', 'laptop bag']), as per the answer here: Counting phrase frequency in Python 3.3.2, but because you need to split the text up that does not work (and I think text.count does not work for lists either?)
Other answers I have found only look at the occurrence of a single word. Is there something I can do to count words and phrases that does not involve the splitting of the body of text into individual words?
The code I currently have (that works for individual terms) is:
for i in df1.index:
    descriptions = df1['detaileddescription'][i]
    if type(descriptions) is str:
        descriptions = descriptions.split()
        pool.append(sum(map(descriptions.count, df2['laptop_bag'])))
    else:
        pool.append(0)
print(pool)
You're on the right track! You're currently splitting into single words, which facilitates finding occurrences of single words as you pointed out. To find phrases of length n you should split the text into chunks of length n, which are called n-grams.
To do that, check out the NLTK package:
from nltk import ngrams
sentence = 'I have a laptop case and a laptop bag'
n = 2
bigrams = ngrams(sentence.split(), n)
for gram in bigrams:
    print(gram)
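If it helps, a rough sketch of the counting step built on those bigrams might look like this (count_phrases is a made-up helper, and it assumes every phrase in the list is exactly n words long):
from collections import Counter
from nltk import ngrams

def count_phrases(text, phrases, n=2):
    """Count how often any of the given n-word phrases occurs in the text."""
    grams = Counter(' '.join(g) for g in ngrams(text.split(), n))
    return sum(grams[p] for p in phrases)

print(count_phrases('I have a laptop case and a laptop bag',
                    ['laptop case', 'laptop bag']))  # prints 2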
Sklearn's CountVectorizer is the standard way
from sklearn.feature_extraction import text
vectorizer = text.CountVectorizer()
vec = vectorizer.fit_transform(descriptions)
And if you want to see the counts as a dict:
count_dict = {k:v for k,v in zip(vectorizer.get_feature_names(), vec.toarray()[0]) if v>0}
print (count_dict)
The default is unigrams; you can get bigrams or higher-order n-grams with the ngram_range parameter.
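A small sketch of that (the sample descriptions are invented; get_feature_names_out() is the newer spelling of get_feature_names() in recent scikit-learn versions):
from sklearn.feature_extraction import text

descriptions = ['black laptop case with zipper', 'red laptop bag']  # made-up sample data
vectorizer = text.CountVectorizer(ngram_range=(1, 2))  # count unigrams and bigrams
vec = vectorizer.fit_transform(descriptions)

# non-zero counts for the first description
count_dict = {k: v for k, v in zip(vectorizer.get_feature_names_out(),
                                   vec.toarray()[0]) if v > 0}
print(count_dict)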

Extracting collocates for a given word from a text corpus - Python

I am trying to find out how to extract the collocates of a specific word out of a text. As in: what are the words that make a statistically significant collocation with e.g. the word "hobbit" in the entire text corpus? I am expecting a result similar to a list of words (collocates ) or maybe tuples (my word + its collocate).
I know how to make bi- and tri-grams using nltk, and also how to select only the bi- or trigrams that contain my word of interest. I am using the following code (adapted from this StackOverflow question).
import nltk
from nltk.collocations import *
corpus = nltk.Text(text) # "text" is a list of tokens
trigram_measures = nltk.collocations.TrigramAssocMeasures()
tri_finder = TrigramCollocationFinder.from_words(corpus)
# Only trigrams that appear 3+ times
tri_finder.apply_freq_filter(3)
# Only the ones containing my word
my_filter = lambda *w: 'Hobbit' not in w
tri_finder.apply_ngram_filter(my_filter)
print tri_finder.nbest(trigram_measures.likelihood_ratio, 20)
This works fine and gives me a list of trigrams (one element of which is my word), each with its log-likelihood value. But I don't really want to select words only from a list of trigrams. I would like to make all possible n-gram combinations in a window of my choice (for example, all words in a window of 3 left and 3 right from my word - that would mean a 7-gram), and then check which of those n-gram words has a statistically relevant frequency paired with my word of interest. I would like to take the log-likelihood value for that.
My idea would be:
1) Calculate all n-gram combinations of different sizes containing my word (not necessarily using nltk, unless it allows calculating units larger than trigrams, but I haven't found that option),
2) Compute the log-likelihood value for each of the words composing my n-grams, and somehow compare it against the frequency of the n-gram they appear in (?). Here is where I get lost a bit... I am not experienced in this and I don't know how to think about this step.
Does anyone have suggestions on how I should do this?
And assuming I use the pool of trigrams provided by nltk for now: does anyone have ideas how to proceed from there to get a list of the most relevant words near my search word?
Thank you
Interesting problem ...
Related to 1), take a look at this thread... there are several nice solutions for making n-grams; the simplest one is basically:
from nltk import ngrams
sentence = 'this is a foo bar sentences and i want to ngramize it'
n = 6
sixgrams = ngrams(sentence.split(), n)
for grams in sixgrams:
    print(grams)
The other way could be gensim's Phrases model:
from gensim import models
from gensim.models import Phrases

phrases = Phrases(doc, min_count=2)            # doc: an iterable of tokenized sentences
bigram = models.phrases.Phraser(phrases)
phrases = Phrases(bigram[doc], min_count=2)
trigram = models.phrases.Phraser(phrases)
phrases = Phrases(trigram[doc], min_count=2)
quadgram = models.phrases.Phraser(phrases)
... (you could continue indefinitely)
min_count sets the minimum number of times a word must appear in the corpus to be considered.
Related to 2), calculating log-likelihood for more than two variables is somewhat tricky, since you have to account for all the permutations. Look at this thesis, where the author proposes a solution (page 26 contains a good explanation).
However, in addition to the log-likelihood function there is the PMI (Pointwise Mutual Information) metric, which takes the co-occurrence count of a pair of words divided by their individual frequencies in the text. PMI is easy to understand and calculate, and you can use it for each pair of words.
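For what it's worth, NLTK's BigramCollocationFinder can already approximate the windowed approach from the question: from_words() takes a window_size argument, so pairs need not be adjacent. A rough sketch (the corpus file name and the target word 'hobbit' are just placeholders):
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = nltk.word_tokenize(open('corpus.txt').read().lower())  # hypothetical corpus file

measures = BigramAssocMeasures()
# pair words that co-occur within a 7-token window, not just adjacent ones
finder = BigramCollocationFinder.from_words(tokens, window_size=7)
finder.apply_freq_filter(3)
# keep only pairs that contain the word of interest
finder.apply_ngram_filter(lambda *w: 'hobbit' not in w)
print(finder.nbest(measures.likelihood_ratio, 20))  # or measures.pmi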

iterate through nltk dictionaries

I'd like to know whether it's possible to iterate through some of the available NLTK dictionaries, i.e. the Spanish dictionary. I'd like to find certain words matching some requirements.
Let's say I got this list ["tv", "tb", "tp", "dv", "db", "dp"], the algorithm would give me words like ["tapa", "tubo", "tuba", ...]. As you can see, if you get rid of the vowels in those words they'll be in the initial list:
tapa => tp
tubo => tb
tuba => tb
Anyway, I just want to know whether it's possible to iterate through Spanish words in the NLTK dictionaries, and how; that's pretty much it.
The nltk has plenty of Spanish language resources, but I'm not aware of a dictionary. So I'll leave the choice of wordlist up to you, and go on from there.
In general, the nltk represents wordlists as corpus readers with the usual method words() for the individual words. So here's how you could find words matching your template in the English wordlist:
templates = set(["tv", "tb", "tp", "dv", "db", "dp"])
vowels = set("aeiou")
for w in nltk.corpus.words.words("en"):
    # remove the vowels and check whether the skeleton is in `templates`
    if "".join(ch for ch in w.lower() if ch not in vowels) in templates:
        print(w)
I notice there's a Spanish stopwords list; here's how you would iterate over it:
for w in nltk.corpus.stopwords.words("spanish"):
    ...
You could also create your own "wordlist" from a Spanish-language corpus. I used the scare quotes because the best data structure for this purpose is a set. In python, iterating over a set or dict will give you its keys:
mywords = set(w.lower() for w in nltk.corpus.conll2002.words("esp.train"))
for w in mywords:
    ...
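Putting the two pieces together, a minimal sketch along those lines (assuming the conll2002 corpus is installed; it ignores accented vowels, so treat it only as a starting point):
import nltk

templates = set(["tv", "tb", "tp", "dv", "db", "dp"])
vowels = set("aeiou")  # note: accented vowels (á, é, í, ó, ú) are not handled here

# build a Spanish "wordlist" from the CoNLL 2002 corpus
mywords = set(w.lower() for w in nltk.corpus.conll2002.words("esp.train"))

matches = [w for w in mywords
           if "".join(ch for ch in w if ch not in vowels) in templates]
print(sorted(matches))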

comparing synonyms NLTK [duplicate]

This question already has answers here:
All synonyms for word in python? [duplicate]
(3 answers)
Closed 7 years ago.
I couldn't come up with a stranger problem - I hope you can help me.
for p in wn.synsets('change'):
    print(p)
Getting:
Synset('change.n.01')
Synset('change.n.02')
Synset('change.n.03')
Synset('change.n.04')
Synset('change.n.05')
Synset('change.n.06')
Synset('change.n.07')
Synset('change.n.08')
Synset('change.n.09')
Synset('variety.n.06')
Synset('change.v.01')
Synset('change.v.02')
Synset('change.v.03')
Synset('switch.v.03')
Synset('change.v.05')
Synset('change.v.06')
Synset('exchange.v.01')
Synset('transfer.v.06')
Synset('deepen.v.04')
Synset('change.v.10')
For example, I have a string:
a = 'transfer'
I'd like to be able to identify all kinds of synonyms of the word 'change' and know, for example, that 'transfer' is one of them. How can I ask my program:
"Is 'transfer' one of the synonyms of 'change'?"
Firstly, WordNet indexes concepts (aka Synsets) and links the possible words for each concept; the following code shows the concepts linked to the word 'change':
>>> from nltk.corpus import wordnet as wn
>>> wn.synsets('change')
[Synset('change.n.01'), Synset('change.n.02'), Synset('change.n.03'), Synset('change.n.04'), Synset('change.n.05'), Synset('change.n.06'), Synset('change.n.07'), Synset('change.n.08'), Synset('change.n.09'), Synset('variety.n.06'), Synset('change.v.01'), Synset('change.v.02'), Synset('change.v.03'), Synset('switch.v.03'), Synset('change.v.05'), Synset('change.v.06'), Synset('exchange.v.01'), Synset('transfer.v.06'), Synset('deepen.v.04'), Synset('change.v.10')]
A synset has several properties:
ID number
Part-of-Speech label
definition
lemma names, i.e. the possible words that can be used to instantiate the concept
links to other synsets by N-nymy relations (e.g. hypernym, hyponym, meronym)
Here's how to interface the above properties in NLTK:
>>> wn.synsets('change')[0]
Synset('change.n.01')
>>> wn.synsets('change')[0].offset()
7296428
>>> wn.synsets('change')[0].pos()
u'n'
>>> wn.synsets('change')[0].definition()
u'an event that occurs when something passes from one state or phase to another'
>>> wn.synsets('change')[0].lemma_names()
[u'change', u'alteration', u'modification']
>>> wn.synsets('change')[0].hypernyms()
[Synset('happening.n.01')]
But a synset doesn't necessarily have synonym relations. If we define synonyms as words that have similar meanings, it is the words (i.e. lemmas) that have synonymy relations. In addition, the context of the words defines whether a word is a synonym of another. A single word has limited meaning; it's the "concept" that contains meaning and instantiates that meaning through human words. At least that's the typical theory of semantics, see chapter 2 in http://goo.gl/ZHzlNF
So when you want to ask whether 'transfer' is a synonym of 'change', you first have to:
define/select the concept you're referring to and provide the context in which 'transfer' is used (google Word Sense Disambiguation), and
define which concept of 'change' you are referring to.
Then comparison of meaning is possible.
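As a rough sketch of that workflow, NLTK ships a simple Lesk-based disambiguator in nltk.wsd; the example sentence below is made up, and Lesk is only a crude stand-in for proper WSD:
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

context = "please change trains at the next station".split()  # invented context
sense = lesk(context, 'change', pos='v')   # pick one sense of 'change' for this context
if sense is not None:
    print(sense, '-', sense.definition())
    print("'transfer' can express this sense:", 'transfer' in sense.lemma_names())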
See also:
All synonyms for word in python?
How to get synonyms from nltk WordNet Python
You need to first get the lemmas, then iterate over them and get their names, then check membership with the in operator:
>>> a in [j.name() for i in wn.synsets('change') for j in i.lemmas()]
True
>>> [j.name() for i in wn.synsets('change') for j in i.lemmas()]
[u'change', u'alteration', u'modification', u'change', u'change', u'change', u'change', u'change', u'change', u'change', u'change', u'variety', u'change', u'change', u'alter', u'modify', u'change', u'change', u'alter', u'vary', u'switch', u'shift', u'change', u'change', u'change', u'exchange', u'commute', u'convert', u'exchange', u'change', u'interchange', u'transfer', u'change', u'deepen', u'change', u'change']
wn.synsets gives you the list of meanings; each meaning has a list of words.
for sense in wn.synsets('change'):
    if "transfer" in sense.lemma_names:
        print "'transfer' is a synonym of 'change'"
        break
Those are different senses of the word. You can obtain the synonyms of each sense using synset('xxx').lemma_names, then check whether the word is present in them.

How to find collocations in text, python

How do you find collocations in text?
A collocation is a sequence of words that occurs together unusually often.
NLTK has a bigrams function that returns word pairs:
>>> bigrams(['more', 'is', 'said', 'than', 'done'])
[('more', 'is'), ('is', 'said'), ('said', 'than'), ('than', 'done')]
>>>
What's left is to find bigrams that occur more often based on the frequency of individual words. Any ideas how to put it in the code?
Try NLTK. You will mostly be interested in nltk.collocations.BigramCollocationFinder, but here is a quick demonstration to show you how to get started:
>>> import nltk
>>> def tokenize(sentences):
...     for sent in nltk.sent_tokenize(sentences.lower()):
...         for word in nltk.word_tokenize(sent):
...             yield word
...
>>> nltk.Text(tkn for tkn in tokenize('mary had a little lamb.'))
<Text: mary had a little lamb ....>
>>> text = nltk.Text(tkn for tkn in tokenize('mary had a little lamb.'))
There are none in this small segment, but here goes:
>>> text.collocations(num=20)
Building collocations list
Here is some code that takes a list of lowercase words and returns a list of all bigrams with their respective counts, starting with the highest count. Don't use this code for large lists.
from itertools import izip
words = ["more", "is", "said", "than", "done", "is", "said"]
words_iter = iter(words)
next(words_iter, None)
count = {}
for bigram in izip(words, words_iter):
    count[bigram] = count.get(bigram, 0) + 1
print sorted(((c, b) for b, c in count.iteritems()), reverse=True)
(words_iter is introduced to avoid copying the whole list of words, as you would do with izip(words, words[1:]).)
import itertools
from collections import Counter
words = ['more', 'is', 'said', 'than', 'done']
nextword = iter(words)
next(nextword)
freq=Counter(zip(words,nextword))
print(freq)
A collocation is a sequence of tokens that are better treated as a single token when parsing; e.g. "red herring" has a meaning that can't be derived from its components. Deriving a useful set of collocations from a corpus involves ranking the n-grams by some statistic (n-gram frequency, mutual information, log-likelihood, etc.) followed by judicious manual editing.
Points that you appear to be ignoring:
(1) the corpus must be rather large ... attempting to get collocations from one sentence as you appear to suggest is pointless.
(2) n can be greater than 2 ... e.g. analysing texts written about 20th century Chinese history will throw up "significant" bigrams like "Mao Tse" and "Tse Tung".
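To make the ranking step concrete for n greater than 2, here is a minimal sketch with NLTK's collocation finders (it assumes the Brown corpus has been downloaded; any large token list works):
import nltk
from nltk.collocations import TrigramAssocMeasures, TrigramCollocationFinder

tokens = nltk.corpus.brown.words()      # assumes nltk.download('brown') has been run
measures = TrigramAssocMeasures()

finder = TrigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(3)             # ignore trigrams seen fewer than 3 times
# top 10 trigrams by PMI; likelihood_ratio or raw_freq are alternatives
print(finder.nbest(measures.pmi, 10))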
What are you actually trying to achieve? What code have you written so far?
I agree with Tim McNamara on using nltk and the problems with unicode. However, I like the Text class a lot - there is a hack you can use to get the collocations as a list; I discovered it looking at the source code. Apparently, whenever you invoke the collocations method it saves the result as an instance variable!
import nltk
def tokenize(sentences):
    for sent in nltk.sent_tokenize(sentences.lower()):
        for word in nltk.word_tokenize(sent):
            yield word

text = nltk.Text(tkn for tkn in tokenize('mary had a little lamb.'))
text.collocations(num=20)
collocations = [" ".join(el) for el in list(text._collocations)]
enjoy !
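As a small footnote: newer NLTK releases (around 3.4.5 onward, if memory serves) expose this without the private attribute via Text.collocation_list(); whether it returns tuples or pre-joined strings has varied between versions, so this sketch handles both:
colls = text.collocation_list(num=20)
# depending on the NLTK version, entries are (w1, w2) tuples or already-joined strings
collocations = [" ".join(c) if isinstance(c, tuple) else c for c in colls]
print(collocations)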
