I have a script for getting Italian synonyms from WordNet, like this:
from nltk.corpus import wordnet as wn
it_lemmas = wn.lemmas("problema", lang="ita")
hypernyms = it_lemmas[0].synset().hypernyms()
print(hypernyms[0].lemmas(lang="ita"))
When I try to loop over it I get the error message "list indices must be integers or slices, not Lemma".
How should I write the loop so that I get not just one value ([0]) but all the values (the number of lemmas and hypernyms can vary) and print them all?
from nltk.corpus import wordnet as wn

it_lemmas = wn.lemmas("problema", lang="ita")
for lemma in it_lemmas:                          # iterate over the Lemma objects directly
    for hypernym in lemma.synset().hypernyms():  # all hypernyms, however many there are
        syn = hypernym.lemmas(lang="ita")        # Italian lemmas of each hypernym synset
        print(syn)
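If you only need the Italian lemma names rather than the Lemma objects, here is a compact variant of the same loop (a sketch; it collects the names into a set, so duplicates are dropped):

from nltk.corpus import wordnet as wn

italian_hypernym_names = set()
for lemma in wn.lemmas("problema", lang="ita"):
    for hypernym in lemma.synset().hypernyms():
        # lemma_names() returns plain strings instead of Lemma objects
        italian_hypernym_names.update(hypernym.lemma_names(lang="ita"))

print(italian_hypernym_names)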
I am new to NLTK. I want to use nltk to extract hyponyms for a given list of words, specifically for some combined words.
My code:
import nltk
from nltk.corpus import wordnet as wn

word_list = ["real_time", 'Big_data', "Healthcare",
             'Fuzzy_logic', 'Computer_vision']  # avoid shadowing the built-in name list

def get_synset(a_list):
    synset_list = []
    for word in a_list:
        a = wn.synsets(word)[:1]  # the slice ensures each word gets its first synset only
        synset_list.append(a)
    return synset_list

lst_synsets = get_synset(word_list)
lst_synsets
Here is the output:
[[Synset('real_time.n.01')],
[],
[Synset('healthcare.n.01')],
[Synset('fuzzy_logic.n.01')],
[]]
How can I find NLTK WordNet synsets for combined terms? If that is not possible, is there any suggestion for handling such combined terms?
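Terms like 'Big_data' and 'Computer_vision' simply have no entry in WordNet, which is why those slots come back empty. One possible workaround, sketched here only as an assumption rather than a definitive answer (the helper name first_synsets is just for illustration), is to fall back to looking up the individual components of an underscore-joined term when the combined lookup fails:

from nltk.corpus import wordnet as wn

def first_synsets(term):
    # Try the combined term first (WordNet stores multiword entries with underscores).
    synsets = wn.synsets(term)
    if synsets:
        return synsets[:1]
    # Otherwise fall back to the first synset of each component word.
    fallback = []
    for part in term.split("_"):
        part_synsets = wn.synsets(part)
        if part_synsets:
            fallback.append(part_synsets[0])
    return fallback

print(first_synsets("Big_data"))  # falls back to synsets for "big" and "data"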
I am trying to get all the words in the WordNet dictionary that are nouns of category food.
I have found a way to check whether a word is a noun.food, but I need the reverse method:
import nltk
nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def if_food(word):
    syns = wn.synsets(word, pos=wn.NOUN)
    for syn in syns:
        print(syn.lexname())
        if 'food' in syn.lexname():
            return 1
    return 0
So I think I have found a solution:
# Using the NLTK WordNet dictionary, check if the word is a noun and a food.
import nltk
nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def if_food(word):
    syns = wn.synsets(str(word), pos=wn.NOUN)
    for syn in syns:
        if 'food' in syn.lexname():
            return 1
    return 0
Then, using the qdapDictionaries::GradyAugmented R English word dictionary, I checked whether each word is a noun.food:
import pandas as pd

en_dict = pd.read_csv("GradyAugmentedENDict.csv")
en_dict['is_food'] = en_dict.word.apply(if_food)
en_dict[en_dict.is_food == 1].to_csv("en_dict_is_food.csv")
It actually did the job.
Hope it will help others.
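A more direct way to get the reverse mapping, without an external word list, is to walk over all noun synsets and keep those whose lexicographer file is noun.food. A minimal sketch (the collected values are WordNet lemma names, with underscores in multiword entries):

from nltk.corpus import wordnet as wn

food_words = set()
for syn in wn.all_synsets(pos=wn.NOUN):
    if syn.lexname() == 'noun.food':
        food_words.update(syn.lemma_names())

print(len(food_words))          # number of distinct noun.food lemma names
print(sorted(food_words)[:10])  # a small sample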
I have a list of stems in NLTK/Python and want to get the possible words that produce each stem.
Is there a way to take a stem and get a list of words that will stem to it in Python?
To the best of my knowledge the answer is no. Depending on the stemmer, it may be difficult to come up with an exhaustive search that reverses the stemming rules, and the results would be mostly invalid words by any standard. For example, with the Porter stemmer:
from nltk.stem.porter import *
stemmer = PorterStemmer()
stemmer.stem('grabfuled')
# results in "grab"
So a reverse function would generate "grabfuled" as one of the valid words, since the "-ed" and "-ful" suffixes are removed consecutively in the stemming process.
However, given a valid lexicon, you can do the following which is independent of the stemming method:
from nltk.stem.porter import *
from collections import defaultdict

vocab = set(['grab', 'grabbing', 'grabbed', 'run', 'running', 'eat'])
# Here the Porter stemmer, but it can be any other stemmer too
stemmer = PorterStemmer()

d = defaultdict(set)
for v in vocab:
    d[stemmer.stem(v)].add(v)

print(d)
# defaultdict(<class 'set'>, {'grab': {'grab', 'grabbing', 'grabbed'}, 'eat': {'eat'}, 'run': {'run', 'running'}})
Now we have a dictionary that maps stems to the valid words that can generate them. For any stem we can do the following:
print(d['grab'])
# {'grab', 'grabbed', 'grabbing'}
For building the vocabulary you can tokenize a corpus or use nltk's built-in dictionary of English words.
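For the built-in option mentioned above, a minimal sketch using nltk's words corpus (it needs nltk.download('words') the first time):

import nltk
from collections import defaultdict
from nltk.corpus import words
from nltk.stem.porter import PorterStemmer

nltk.download('words')          # English word list shipped with NLTK
vocab = set(words.words())

stemmer = PorterStemmer()
stem_to_words = defaultdict(set)
for w in vocab:
    stem_to_words[stemmer.stem(w)].add(w)

print(stem_to_words['run'])     # words in the corpus that stem to 'run'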
Is there built-in functionality to find the lowest word in a word hierarchy using NLTK? For example, if there were no edge between 'placenta' and 'carnivore' in the first graph at http://www.randomhacks.net/2009/12/29/visualizing-wordnet-relationships-as-graphs/, the lowest words would be 'placenta' and 'carnivore' (both having distance 10 from 'entity').
You can find the synsets with no hyponyms, e.g.
from nltk.corpus import wordnet as wn

lowest_level = set()
for ss in wn.all_synsets():
    if ss.hyponyms() == []:
        lowest_level.add(ss)

len(lowest_level)  # 97651
If you would like to exclude synsets with instance hyponyms:
from nltk.corpus import wordnet as wn

lowest_level = set()
for ss in wn.all_synsets():
    if ss.hyponyms() == ss.instance_hyponyms() == []:
        lowest_level.add(ss)

len(lowest_level)  # 97187
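If the goal is the deepest synsets (largest distance from a root such as 'entity') rather than the leaves, one possible sketch, not part of the original answer, uses Synset.max_depth(), the length of the longest hypernym path from the synset to a root:

from nltk.corpus import wordnet as wn

# Depth of every noun synset, measured along its longest hypernym path.
depths = {ss: ss.max_depth() for ss in wn.all_synsets('n')}
deepest = max(depths.values())
print(deepest)
print([ss for ss, d in depths.items() if d == deepest][:5])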
I want to do the following in Python (I have the NLTK library, but I'm not great with Python, so I've written the following in a weird pseudocode):
from nltk.corpus import wordnet as wn #Import the WordNet library
for each adjective as adj in wn #Get all adjectives from the wordnet dictionary
print adj & antonym #List all antonyms for each adjective
once list is complete then export to txt file
This is so I can generate a complete dictionary of antonyms for adjectives. I think it should be doable, but I don't know how to create the Python script. I'd like to do it in Python as that's NLTK's native language.
from nltk.corpus import wordnet as wn

for i in wn.all_synsets():
    if i.pos() in ['a', 's']:    # if the synset is an adjective or satellite adjective
        for j in i.lemmas():     # iterate through the lemmas of each synset
            if j.antonyms():     # if the adjective has an antonym
                # print the adjective-antonym pair
                print(j.name(), j.antonyms()[0].name())
Note that there will be reverse duplicates.
[out]:
able unable
unable able
abaxial adaxial
adaxial abaxial
acroscopic basiscopic
basiscopic acroscopic
abducent adducent
adducent abducent
nascent dying
dying nascent
abridged unabridged
unabridged abridged
absolute relative
relative absolute
absorbent nonabsorbent
nonabsorbent absorbent
adsorbent nonadsorbent
nonadsorbent adsorbent
absorbable adsorbable
adsorbable absorbable
abstemious gluttonous
gluttonous abstemious
abstract concrete
...
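The question also asks to export the complete list to a txt file once it is done. A minimal sketch of that step (the file name adjective_antonyms.txt is only a placeholder):

from nltk.corpus import wordnet as wn

pairs = []
for synset in wn.all_synsets():
    if synset.pos() in ['a', 's']:
        for lemma in synset.lemmas():
            if lemma.antonyms():
                pairs.append((lemma.name(), lemma.antonyms()[0].name()))

# Write one "adjective antonym" pair per line.
with open('adjective_antonyms.txt', 'w') as f:
    for adj, ant in pairs:
        f.write('{} {}\n'.format(adj, ant))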
The following function uses WordNet to return a set of adjective-only antonyms for a given word:
from nltk.corpus import wordnet as wn

def antonyms_for(word):
    antonyms = set()
    for ss in wn.synsets(word):
        for lemma in ss.lemmas():
            any_pos_antonyms = [antonym.name() for antonym in lemma.antonyms()]
            for antonym in any_pos_antonyms:
                antonym_synsets = wn.synsets(antonym)
                if wn.ADJ not in [ss.pos() for ss in antonym_synsets]:
                    continue
                antonyms.add(antonym)
    return antonyms
Usage:
print(antonyms_for("good"))