NLTK words lemmatizing - python

I am trying to do lemmatization on words with NLTK.
What I have found so far is that I can use the stem package to get some results, like transforming "cars" to "car" and "women" to "woman"; however, I cannot lemmatize words with derivational affixes like "acknowledgement".
When using WordNetLemmatizer() on "acknowledgement", it returns "acknowledgement", and when using PorterStemmer(), it returns "acknowledg" rather than "acknowledge".
Can anyone tell me how to eliminate the affixes of words?
Say, when the input is "acknowledgement", the output should be "acknowledge".

Lemmatization does not (and should not) return "acknowledge" for "acknowledgement". The former is a verb, while the latter is a noun. Porter's stemming algorithm, on the other hand, simply uses a fixed set of rules. So, your only way there is to change the rules at source. (NOT the right way to fix your problem).
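For reference, a minimal snippet (using only what the question already reports) that reproduces this behaviour:
from nltk.stem import PorterStemmer, WordNetLemmatizer

# Outputs as reported in the question
print(WordNetLemmatizer().lemmatize("acknowledgement"))  # acknowledgement
print(PorterStemmer().stem("acknowledgement"))           # acknowledg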
What you are looking for is the derivationally related form of "acknowledgement", and for this, your best source is WordNet. You can look this up in WordNet's online search interface.
There are quite a few WordNet-based libraries that you can use for this (e.g. JWNL in Java). In Python, NLTK should be able to get the derivationally related form you saw online:
from nltk.corpus import wordnet as wn

acknowledgment_synset = wn.synset('acknowledgement.n.01')
# lemmas() is a method in current NLTK versions; [1] selects the
# 'acknowledgement' spelling among this synset's lemmas
acknowledgment_lemma = acknowledgment_synset.lemmas()[1]
print(acknowledgment_lemma.derivationally_related_forms())
# [Lemma('admit.v.01.acknowledge'), Lemma('acknowledge.v.06.acknowledge')]

Related

How to lemmatise nouns?

I am trying to lemmatise words like "Escalation" to "Escalate" using the WordNetLemmatizer from nltk.stem.
from nltk.stem import WordNetLemmatizer

word_lem = WordNetLemmatizer()
print(word_lem.lemmatize("escalation", pos="n"))
Which POS tag should be used to get a result like "escalate"?
First, please notice that:
Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes. Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.
Now, if you desire to obtain a canonical form for both "escalation" and "escalate", you can use a stemmer, e.g., the Porter stemmer.
from nltk.stem import PorterStemmer

ps = PorterStemmer()
print(ps.stem("escalate"))    # escal
print(ps.stem("escalation"))  # escal
Although the result, escal, is not a real word, it is the same for both words.
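If you need an actual dictionary word rather than a shared stem, a hedged alternative is the derivationally-related-forms approach from the first answer above; the loop below simply inspects every noun sense of "escalation" rather than assuming a particular sense index:
from nltk.corpus import wordnet as wn

# For each noun sense of "escalation", list the derivationally related
# forms of its lemmas; the verb "escalate" should appear among them
for synset in wn.synsets('escalation', pos=wn.NOUN):
    for lemma in synset.lemmas():
        print(lemma, lemma.derivationally_related_forms())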

How to exclude certain names and terms from stemming (Python NLTK SnowballStemmer (Porter2))

I am newly getting into NLP, Python, and posting on Stack Overflow at the same time, so please be patient with me if I seem ignorant :).
I am using SnowballStemmer in Python's NLTK in order to stem words for textual analysis. While lemmatization seems to understem my tokens, the snowball porter2 stemmer, which I read is mostly preferred to the basic porter stemmer, overstems my tokens. I am analyzing tweets including many names and probably also places and other words which should not be stemmed, like: hillary, hannity, president, which are now reduced to hillari, hanniti, and presid (you probably guessed already whose tweets I am analyzing).
Is there an easy way to exclude certain terms from stemming? Conversely, I could also merely lemmatize tokens and include a rule for common suffixes like -ed, -s, …. Another idea might be to merely stem verbs and adjectives as well as nouns ending in s. That might also be close enough…
I am using the code below as of now:
# LEMMATIZE AND STEM WORDS
import nltk
from nltk.stem.snowball import EnglishStemmer

lemmatizer = nltk.stem.WordNetLemmatizer()
snowball = EnglishStemmer()

def lemmatize_text(text):
    return [lemmatizer.lemmatize(w) for w in text]

def snowball_stemmer(text):
    return [snowball.stem(w) for w in text]

# APPLY FUNCTIONS
tweets['text_snowball'] = tweets.text_processed.apply(snowball_stemmer)
tweets['text_lemma'] = tweets.text_processed.apply(lemmatize_text)
I hope someone can help… Contrary to my past experience with all kinds of issues, I have not been able to find adequate help for my issue online so far.
Thanks!
Do you know NER? It means named entity recognition. You can preprocess your text and locate all named entities, which you then exclude from stemming. After stemming, you can merge the data again.
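A minimal sketch of that idea with NLTK's built-in tagger and chunker (this assumes the averaged_perceptron_tagger, maxent_ne_chunker and words resources are downloaded; stem_except_entities is a made-up name, and the chunker's accuracy on tweets will vary):
import nltk
from nltk.stem.snowball import EnglishStemmer

snowball = EnglishStemmer()

def stem_except_entities(tokens):
    # ne_chunk wraps named entities in Tree nodes;
    # other tokens remain plain (word, tag) tuples
    tree = nltk.ne_chunk(nltk.pos_tag(tokens))
    result = []
    for node in tree:
        if isinstance(node, nltk.Tree):
            # named entity: keep its words unstemmed
            result.extend(word for word, tag in node.leaves())
        else:
            word, tag = node
            result.append(snowball.stem(word))
    return result

print(stem_except_entities(["Hannity", "criticized", "the", "presidency"]))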

Getting a Large List of Nouns (or Adjectives) in Python with NLTK; or Python Mad Libs

Like this question, I am interested in getting a large list of words by part of speech (a long list of nouns; a list of adjectives) to be used programmatically elsewhere. This answer has a solution using the WordNet database (in SQL) format.
Is there a way to get at such a list using the corpora/tools built into the Python NLTK? I could take a large bunch of text, parse it and then store the nouns and adjectives. But given the dictionaries and other tools built in, is there a smarter way to simply extract the words that are already present in the NLTK datasets, encoded as nouns/adjectives (whatever)?
Thanks.
It's worth noting that WordNet is actually one of the corpora included in the NLTK downloader by default. So you could conceivably just use the solution you already found without having to reinvent any wheels.
For instance, you could just do something like this to get all noun synsets:
from nltk.corpus import wordnet as wn

for synset in wn.all_synsets('n'):
    print(synset)

# Or, equivalently
for synset in wn.all_synsets(wn.NOUN):
    print(synset)
That example will give you every noun that you want and it will even group them into their synsets so you can try to be sure that they're being used in the correct context.
If you want to get them all into a list you can do something like the following (though this will vary quite a bit based on how you want to use the words and synsets):
all_nouns = []
for synset in wn.all_synsets('n'):
    all_nouns.extend(synset.lemma_names())
Or as a one-liner:
all_nouns = [word for synset in wn.all_synsets('n') for word in synset.lemma_names()]
You should use the Moby Parts of Speech Project data. Don't be fixated on using only what is directly in NLTK by default. It would be little work to download the files for this and pretty easy to parse them with NLTK once loaded.
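A rough sketch of that parsing step (the file name mobypos.txt, the '×' field delimiter, and the one-letter code 'N' for nouns are assumptions taken from the Moby documentation; double-check them against your copy of the data):
# Collect every word the Moby POS list tags as a noun
nouns = set()
with open("mobypos.txt", encoding="latin-1") as f:
    for line in f:
        word, sep, codes = line.rstrip("\n").partition("\u00d7")  # '×'
        if sep and "N" in codes:
            nouns.add(word)
print(len(nouns))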
I saw a similar question earlier this week (can't find the link), but like I said then, I don't think maintaining a list of nouns/adjectives/whatever is a great idea. This is primarily because the same word can have different parts of speech, depending on the context.
However, if you are still dead set on using these lists, then here's how I would do it (I don't have a working NLTK install on this machine, but I remember the basics):
nouns = set()
for sentence in my_corpus.sents():
    # each sentence is either a list of words or a list of (word, POS tag) tuples;
    # remove the call to nltk.pos_tag if `sentence` is already a list of tuples
    for word, pos in nltk.pos_tag(sentence):
        if pos in ['NN', 'NNP']:  # feel free to add any other noun tags
            nouns.add(word)
Hope this helps

Performing Stemming outputs gibberish/concatenated words

I am experimenting with the python library NLTK for Natural Language Processing.
My problem: I'm trying to perform stemming, i.e. reduce words to their normalised form, but it's not producing correct words. Am I using the stemming class correctly? And how can I get the results I want?
I want to normalise the following words:
words = ["forgot","forgotten","there's","myself","remuneration"]
...into this:
words = ["forgot","forgot","there","myself","remunerate"]
My code:
from nltk import stem

# the question doesn't say which stemmer was used; Porter produces the output below
stemmer = stem.PorterStemmer()
words = ["forgot", "forgotten", "there's", "myself", "remuneration"]
for word in words:
    print(stemmer.stem(word))

# output is:
# forgot forgotten there' myself remuner
There are two types of normalization you can do at a word level.
Stemming - a quick and dirty hack to convert words into some token which is not guaranteed to be an actual word, but generally different forms of the same word should map to the same stemmed token
Lemmatization - converting a word into some base form (singular, present tense, etc) which is always a legitimate word on its own. This can obviously be slower and more complicated and is generally not required for a lot of NLP tasks.
You seem to be looking for a lemmatizer instead of a stemmer. Searching Stack Overflow for 'lemmatization' should give you plenty of clues about how to set one of those up. I have played with this one called morpha and have found it to be pretty useful and cool.
Like adi92, I too believe you're looking for lemmatization. Since you're using NLTK you could probably use its WordNet interface.
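A minimal sketch of that suggestion (note that lemmatize() defaults to treating words as nouns, so a POS hint is needed for the verb forms):
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
words = ["forgot", "forgotten", "there's", "myself", "remuneration"]
for word in words:
    # pos='v' asks WordNet for the verb lemma; without it, every
    # word is looked up as a noun
    print(word, "->", lemmatizer.lemmatize(word, pos='v'))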

Using NLTK and WordNet; how do I convert simple tense verb into its present, past or past participle form?

Using NLTK and WordNet, how do I convert a simple tense verb into its present, past or past participle form?
For example:
I want to write a function which would give me verb in expected form as follows.
v = 'go'
present = present_tense(v)
print(present)  # prints "going"
past = past_tense(v)
print(past)  # prints "went"
This can also be done with the help of NLTK. It gives the base form of the verb rather than the exact tense, but it can still be useful. Try the following code.
from nltk.stem.wordnet import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
words = ['gave', 'went', 'going', 'dating']
for word in words:
    print(word + "-->" + lemmatizer.lemmatize(word, 'v'))
The output is:
gave-->give
went-->go
going-->go
dating-->date
Have a look at Stack Overflow question NLTK WordNet Lemmatizer: Shouldn't it lemmatize all inflections of a word?.
I think what you're looking for is the NodeBox::Linguistics library. It does exactly that:
import en  # NodeBox::Linguistics exposes its English tools as the 'en' module

print(en.verb.present("gave"))
# give
For Python 3:
pip install pattern
then
from pattern.en import conjugate, lemma, lexeme, PRESENT, SG

print(lemma('gave'))
print(lexeme('gave'))
print(conjugate(verb='give', tense=PRESENT, number=SG))  # he / she / it
yields
give
['give', 'gives', 'giving', 'gave', 'given']
gives
Thanks to @Agargara for the pointer and to the authors of Pattern for their beautiful work; go support them ;-)
PS. To use most of Pattern's functionality in Python 3.7+, you might want to use the trick described here.
JWI (the WordNet library by MIT) also has a stemmer (WordNetStemmer) which converts different morphological forms of a word, like "written", "writes" and "wrote", to their base form. It seems to work only for nouns (like plurals) and verbs, though.
Word Stemming in Java with WordNet and JWNL also shows how to do this kind of stemming using JWNL, another Java-based WordNet library.
