From the SemCor corpus (http://www.cse.unt.edu/~rada/downloads.html), there are senses that weren't mapped to later versions of WordNet. And magically, the mapping can be found in the NLTK WordNet API, like this:
>>> from nltk.corpus import wordnet as wn
# Enumerate the possible senses for the lemma 'delayed'
>>> wn.synsets('delayed')
[Synset('delay.v.01'), Synset('delay.v.02'), Synset('stay.v.06'), Synset('check.v.07'), Synset('delayed.s.01')]
>>> wn.synset('delay.v.01')
Synset('delay.v.01')
# Magically, there is a 0th sense of the word!!!
>>> wn.synset('delayed.a.0')
Synset('delayed.s.01')
I've checked the code and the API docs (http://nltk.googlecode.com/svn/trunk/doc/api/nltk.corpus.reader.wordnet.Synset-class.html, http://nltk.org/_modules/nltk/corpus/reader/wordnet.html) but I can't find how they do the magical mapping that shouldn't exist (e.g. delayed.a.0 -> delayed.s.01).
Does anyone know which part of the NLTK WordNet API code does the magical mapping?
It's a bug, I guess. When you do wn.synset('delayed.a.0'), the first two lines of the method are:
lemma, pos, synset_index_str = name.lower().rsplit('.', 2)
synset_index = int(synset_index_str) - 1
So in this case the value of synset_index is -1, which is a valid index in Python, and the lookup won't fail when indexing into the list of synsets whose lemma is delayed and whose POS is a.
With this behavior you can do tricky things like:
>>> wn.synset('delay.v.-1')
Synset('stay.v.06')
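If you want to guard against this in your own code, here is a minimal sketch of a stricter lookup (the strict_synset wrapper is my own, not part of NLTK):
from nltk.corpus import wordnet as wn

def strict_synset(name):
    # Like wn.synset(), but rejects sense numbers below 1, so that
    # 'delayed.a.0' or 'delay.v.-1' raise instead of wrapping around.
    lemma, pos, synset_index_str = name.lower().rsplit('.', 2)
    if int(synset_index_str) < 1:
        raise ValueError('Sense numbers start at 1, got: %r' % name)
    return wn.synset(name)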
I am new to NLTK. I want to use NLTK to extract hyponyms for a given list of words, specifically for some combined (multiword) terms.
My code:
import nltk
from nltk.corpus import wordnet as wn

list = ["real_time", 'Big_data', "Healthcare",
        'Fuzzy_logic', 'Computer_vision']

def get_synset(a_list):
    synset_list = []
    for word in a_list:
        a = wn.synsets(word)[:1]  # The index is to ensure each word gets assigned 1st synset only
        synset_list.append(a)
    return synset_list

lst_synsets = get_synset(list)
lst_synsets
Here is the output:
[[Synset('real_time.n.01')],
[],
[Synset('healthcare.n.01')],
[Synset('fuzzy_logic.n.01')],
[]]
How can I find NLTK WordNet synsets for such combined items? If not, is there a suggested way to handle these combined terms?
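One common workaround is to normalize the term before the lookup, since WordNet stores multiword entries lowercased and underscore-joined (e.g. 'fuzzy_logic'), and to fall back to the head word when the full expression is missing. A minimal sketch (the function name and the fallback strategy are my own, not an official NLTK recipe):
from nltk.corpus import wordnet as wn

def first_synset(term):
    # WordNet entries are lowercase with underscores, e.g. 'fuzzy_logic'.
    normalized = term.lower().replace(' ', '_')
    synsets = wn.synsets(normalized)
    if not synsets:
        # Fall back to the last component, usually the head word
        # in English compounds (e.g. 'data' for 'Big_data').
        synsets = wn.synsets(normalized.split('_')[-1])
    return synsets[:1]
With the list above, 'Big_data' would then fall back to 'data' and 'Computer_vision' to 'vision'; whether that approximation is acceptable depends on your application.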
I am fairly new to NLTK. I am trying to find a solution to the problem I am currently working on:
Given two words w1 and w2, is there a way to find out whether they belong to the same synset in the WordNet database?
Also, is it possible to find the list of synsets that contain a given word?
Thanks.
Also, is it possible to find the list of synsets that contain a given word?
Yes:
>>> from nltk.corpus import wordnet as wn
>>> auto, car = 'auto', 'car'
>>> wn.synsets(auto)
[Synset('car.n.01')]
>>> wn.synsets(car)
[Synset('car.n.01'), Synset('car.n.02'), Synset('car.n.03'), Synset('car.n.04'), Synset('cable_car.n.01')]
If we look at the lemmas in every synset from wn.synsets(car), we'll find that "car" exists as one of the lemmas:
>>> for ss in wn.synsets(car):
... assert 'car' in ss.lemma_names()
...
>>> for ss in wn.synsets(car):
... print 'car' in ss.lemma_names(), ss.lemma_names()
...
True [u'car', u'auto', u'automobile', u'machine', u'motorcar']
True [u'car', u'railcar', u'railway_car', u'railroad_car']
True [u'car', u'gondola']
True [u'car', u'elevator_car']
True [u'cable_car', u'car']
Note: a lemma is not exactly a surface word; see Stemmers vs Lemmatizers. You might also find this helpful: https://github.com/alvations/pywsd/blob/master/pywsd/utils.py#L66 (disclaimer: shameless plug).
Given two words w1 and w2, is there a way to find out whether they belong to the same synset in the WordNet database?
Yes:
>>> from nltk.corpus import wordnet as wn
>>> auto, car = 'auto', 'car'
>>> wn.synsets(auto)
[Synset('car.n.01')]
>>> wn.synsets(car)
[Synset('car.n.01'), Synset('car.n.02'), Synset('car.n.03'), Synset('car.n.04'), Synset('cable_car.n.01')]
>>> auto_ss = set(wn.synsets(auto))
>>> car_ss = set(wn.synsets(car))
>>> car_ss.intersection(auto_ss)
set([Synset('car.n.01')])
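Wrapped up as a small helper (a sketch; the name in_same_synset is mine):
from nltk.corpus import wordnet as wn

def in_same_synset(w1, w2):
    # True if the two words share at least one synset,
    # i.e. they are synonyms in at least one sense.
    return bool(set(wn.synsets(w1)) & set(wn.synsets(w2)))
So in_same_synset('auto', 'car') returns True, because both contain Synset('car.n.01').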
I spotted some problems with WordNet's hierarchy for verbs.
For example,
a.lowest_common_hypernyms(wn.synset('love.v.02')) returns [].
Isn't there a common ancestor like entity for verbs as well?
Are verbs even connected to nouns in the same hierarchy?
To find the top hypernym of any synset, use the Synset.root_hypernyms() function, e.g.:
>>> from nltk.corpus import wordnet as wn
>>> wn.synsets('car')[0].root_hypernyms()
[Synset('entity.n.01')]
>>> wn.synsets('love')[0].root_hypernyms()
[Synset('entity.n.01')]
>>> wn.synsets('love', 'v')[0].root_hypernyms()
[Synset('love.v.01')]
It seems that there's no overarching/umbrella hypernym that covers all verbs, unlike nouns, which are all covered by entity.n.01:
>>> from collections import Counter
>>> from itertools import chain
>>> root_hypernyms_of_nouns = Counter(chain(*[ss.root_hypernyms() for ss in wn.all_synsets(pos='n')]))
>>> len(root_hypernyms_of_nouns)
1
>>> root_hypernyms_of_nouns.items()
[(Synset('entity.n.01'), 82115)]
But you can try to iterate through all verbs, e.g.:
wn.all_synsets(pos='v')
And try to find the topmost hypernyms for verbs (it will be a rather large list):
>>> root_hypernyms_of_verbs = Counter(chain(*[ss.root_hypernyms() for ss in wn.all_synsets(pos='v')]))
>>> root_hypernyms_of_verbs.most_common(10)
[(Synset('change.v.01'), 1704), (Synset('change.v.02'), 1295), (Synset('act.v.01'), 1083), (Synset('move.v.02'), 1027), (Synset('make.v.03'), 659), (Synset('travel.v.01'), 526), (Synset('think.v.03'), 451), (Synset('transfer.v.05'), 420), (Synset('move.v.03'), 329), (Synset('connect.v.01'), 262)]
>>> root_hypernyms_of_verbs.keys() # This will return all root_hypernyms.
Visuwords has a very pretty interactive graph that you can use to browse the WordNet hierarchy manually: http://visuwords.com/entity
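Depending on your NLTK version, lowest_common_hypernyms() and path_similarity() also accept a simulate_root flag that places a fake common root above the verb hierarchies; a small sketch (hate.v.01 is my stand-in for the undefined a in the question):
from nltk.corpus import wordnet as wn

a = wn.synset('hate.v.01')  # stand-in for the synset `a` from the question
b = wn.synset('love.v.02')

# With simulate_root=True a fake root sits above all verb trees,
# so these calls no longer come back empty.
print(a.lowest_common_hypernyms(b, simulate_root=True))
print(a.path_similarity(b, simulate_root=True))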
I want to do the following in Python (I have the NLTK library, but I'm not great with Python, so I've written the following in a weird pseudocode):
from nltk.corpus import wordnet as wn #Import the WordNet library
for each adjective as adj in wn #Get all adjectives from the wordnet dictionary
print adj & antonym #List all antonyms for each adjective
once list is complete then export to txt file
This is so I can generate a complete dictionary of antonyms for adjectives. I think it should be doable, but I don't know how to write the Python script. I'd like to do it in Python, as that's NLTK's native language.
from nltk.corpus import wordnet as wn

for i in wn.all_synsets():
    if i.pos() in ['a', 's']:  # If the synset is an adjective or satellite adjective.
        for j in i.lemmas():   # Iterate through the lemmas of each synset.
            if j.antonyms():   # If the adjective has an antonym,
                # print the adjective-antonym pair.
                print j.name(), j.antonyms()[0].name()
Note that there will be reverse duplicates.
[out]:
able unable
unable able
abaxial adaxial
adaxial abaxial
acroscopic basiscopic
basiscopic acroscopic
abducent adducent
adducent abducent
nascent dying
dying nascent
abridged unabridged
unabridged abridged
absolute relative
relative absolute
absorbent nonabsorbent
nonabsorbent absorbent
adsorbent nonadsorbent
nonadsorbent adsorbent
absorbable adsorbable
adsorbable absorbable
abstemious gluttonous
gluttonous abstemious
abstract concrete
...
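To drop the reverse duplicates and export the result to a text file, as the question asks, one option is to store each pair as a sorted tuple (a sketch; the output file name is arbitrary):
from nltk.corpus import wordnet as wn

pairs = set()
for ss in wn.all_synsets():
    if ss.pos() in ('a', 's'):  # adjectives and satellite adjectives
        for lemma in ss.lemmas():
            for antonym in lemma.antonyms():
                # Sorting makes (able, unable) and (unable, able) identical.
                pairs.add(tuple(sorted([lemma.name(), antonym.name()])))

with open('adjective_antonyms.txt', 'w') as fout:
    for adj, ant in sorted(pairs):
        fout.write('%s\t%s\n' % (adj, ant))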
The following function uses WordNet to return a set of adjective-only antonyms for a given word:
from nltk.corpus import wordnet as wn

def antonyms_for(word):
    antonyms = set()
    for ss in wn.synsets(word):
        for lemma in ss.lemmas():
            any_pos_antonyms = [antonym.name() for antonym in lemma.antonyms()]
            for antonym in any_pos_antonyms:
                antonym_synsets = wn.synsets(antonym)
                # Keep only antonyms that have at least one adjective sense.
                if wn.ADJ not in [ss.pos() for ss in antonym_synsets]:
                    continue
                antonyms.add(antonym)
    return antonyms
Usage:
print(antonyms_for("good"))
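With a standard WordNet install, the set for "good" should include adjectives such as 'bad' and 'evil', though the exact contents depend on your WordNet version.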
Is there a way to get the adjective corresponding to a given adverb in NLTK or another Python library?
For example, for the adverb "terribly", I need to get "terrible".
Thanks.
There is a relation in WordNet (pertainyms) that connects adjectives to adverbs and vice versa.
>>> from itertools import chain
>>> from nltk.corpus import wordnet as wn
>>> from difflib import get_close_matches as gcm
>>> possible_adjectives = [k.name for k in chain(*[j.pertainyms() for j in chain(*[i.lemmas for i in wn.synsets('terribly')])])]
>>> possible_adjectives
['terrible', 'atrocious', 'awful', 'rotten']
>>> gcm('terribly',possible_adjectives)
['terrible']
A more human-readable way to compute possible_adjectives is as follows:
possible_adj = []
for ss in wn.synsets('terribly'):
    for lemma in ss.lemmas:                   # all possible lemmas (an attribute in older NLTK)
        for pertainym in lemma.pertainyms():  # all possible pertainyms
            possible_adj.append(pertainym.name)  # .name is an attribute in older NLTK
EDIT: In the newer version of NLTK:
possible_adj = []
for ss in wn.synsets('terribly'):
    for lemma in ss.lemmas():                 # all possible lemmas
        for pertainym in lemma.pertainyms():  # all possible pertainyms
            possible_adj.append(pertainym.name())
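Putting the collection and ranking steps together into a reusable function (a sketch; the name adverb_to_adjectives is mine):
from difflib import get_close_matches
from nltk.corpus import wordnet as wn

def adverb_to_adjectives(adverb):
    # Collect pertainym lemma names across all senses of the adverb,
    # then keep the ones closest in spelling to the adverb itself.
    candidates = []
    for ss in wn.synsets(adverb):
        for lemma in ss.lemmas():
            candidates.extend(p.name() for p in lemma.pertainyms())
    return get_close_matches(adverb, candidates)
adverb_to_adjectives('terribly') should then give ['terrible'], matching the gcm() call above.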
As MKoosej mentioned, NLTK's lemmas is no longer an attribute but a method. I also made a small simplification to pick the most plausible word. I hope someone else can use it too:
from nltk.corpus import wordnet as wn

wordtoinv = 'unduly'
s = []
winner = ""
for ss in wn.synsets(wordtoinv):
    for lemma in ss.lemmas():  # all possible lemmas
        s.append(lemma)

for pers in s:
    if not pers.pertainyms():  # skip lemmas that have no pertainyms
        continue
    posword = pers.pertainyms()[0].name()
    if posword[0:3] == wordtoinv[0:3]:  # crude check: same first three letters
        winner = posword
        break

print winner  # undue