Is there a way to reverse stem in Python NLTK?

I have a list of stems in NLTK/python and want to get the possible words that create that stem.
Is there a way to take a stem and get a list of words that will stem to it in python?

To the best of my knowledge the answer is no. Depending on the stemmer, it may be difficult to construct an exhaustive search that reverts the effect of the stemming rules, and the results would be mostly invalid words by any standard. For example, with the Porter stemmer:
from nltk.stem.porter import *
stemmer = PorterStemmer()
stemmer.stem('grabfuled')
# results in "grab"
So a reverse function would generate "grabfuled" as one of the "valid" words, since the "-ed" and "-ful" suffixes are removed consecutively during stemming.
However, given a valid lexicon, you can do the following which is independent of the stemming method:
from nltk.stem.porter import *
from collections import defaultdict
vocab = set(['grab', 'grabbing', 'grabbed', 'run', 'running', 'eat'])
# Here porter stemmer, but can be any other stemmer too
stemmer = PorterStemmer()
d = defaultdict(set)
for v in vocab:
    d[stemmer.stem(v)].add(v)
print(d)
# defaultdict(<class 'set'>, {'grab': {'grab', 'grabbing', 'grabbed'}, 'eat': {'eat'}, 'run': {'run', 'running'}})
Now we have a dictionary that maps stems to the valid words that can generate them. For any stem we can do the following:
print(d['grab'])
# {'grab', 'grabbed', 'grabbing'}
For building the vocabulary you can tokenize a corpus or use nltk's built-in dictionary of English words.
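For instance, a minimal sketch using NLTK's built-in English word list (this assumes the 'words' corpus has already been downloaded):
from nltk.corpus import words
# nltk.download('words')  # one-time download of the English word list, if not already present
vocab = set(w.lower() for w in words.words())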

Related

Find NLTK Wordnet Synsets for combined items in a list

I am new to NLTK. I want to use nltk to extract hyponyms for a given list of words, specifically for some combined words.
My code:
import nltk
from nltk.corpus import wordnet as wn
list = ["real_time", 'Big_data', "Healthcare",
'Fuzzy_logic', 'Computer_vision']
def get_synset(a_list):
synset_list = []
for word in a_list:
a = wn.synsets(word)[:1] #The index is to ensure each word gets assigned 1st synset only
synset_list.append(a)
return synset_list
lst_synsets = get_synset(list)
lst_synsets
Here is the output:
[[Synset('real_time.n.01')],
[],
[Synset('healthcare.n.01')],
[Synset('fuzzy_logic.n.01')],
[]]
How can I find NLTK WordNet synsets for combined items? If that is not possible, is there any suggestion for using one of these methods on combined terms?
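One possible workaround (a sketch of my own, not from the original post): when the full compound has no synset, fall back to its head word.
def get_synset_with_fallback(a_list):
    synset_list = []
    for word in a_list:
        syns = wn.synsets(word)[:1]
        if not syns:
            # hypothetical fallback: look up the last word of the compound, e.g. "Big_data" -> "data"
            syns = wn.synsets(word.split('_')[-1])[:1]
        synset_list.append(syns)
    return synset_list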

In Python, is there a way in any NLP library to combine words to state them as positive?

I have tried looking into this and couldn't find any possible way to do it the way I imagine. As an example, the term I am trying to group is 'no complaints': the 'no' is normally picked up as a stopword, which I have manually removed from the stopword list to ensure it stays in the data. However, both words are then treated as negative during sentiment analysis. I want to combine them so that they can be categorised as either Neutral or Positive. Is it possible to manually group these words or terms together and decide how they are analysed in the sentiment analysis?
I have found a way to group words using POS tagging and chunking, but this combines tags (or Multi-Word Expressions) and doesn't necessarily pick them up correctly in the sentiment analysis.
Current code (using POS Tagging):
from nltk.corpus import stopwords
from nltk.sentiment import SentimentIntensityAnalyzer
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize, sent_tokenize, MWETokenizer
import re, gensim, nltk
from gensim.utils import simple_preprocess
import pandas as pd
d = {'text': ['no complaints', 'not bad']}
df = pd.DataFrame(data=d)
stop = stopwords.words('english')
stop.remove('no')
stop.remove('not')
def sent_to_words(sentences):
    for sentence in sentences:
        yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))  # deacc=True removes punctuation
data_words = list(sent_to_words(df))
def remove_stopwords(texts):
    return [[word for word in simple_preprocess(str(doc)) if word not in stop] for doc in texts]
data_words_nostops = remove_stopwords(data_words)
txt = df
txt = txt.apply(str)
#pos tag
words = [word_tokenize(i) for i in sent_tokenize(txt['text'])]
pos_tag= [nltk.pos_tag(i) for i in words]
#chunking
tokenized_text = word_tokenize(str(txt['text']))  # not defined in the original snippet; inferred from the output below
tagged_token = nltk.pos_tag(tokenized_text)
grammar = "NP: {<DT>+<NNS>}"
phrases = nltk.RegexpParser(grammar)
result = phrases.parse(tagged_token)
print(result)
sia = SentimentIntensityAnalyzer()
def find_sentiment(post):
    if sia.polarity_scores(post)["compound"] > 0:
        return "Positive"
    elif sia.polarity_scores(post)["compound"] < 0:
        return "Negative"
    else:
        return "Neutral"
df['sentiment'] = df['text'].apply(lambda x: find_sentiment(x))
df['compound'] = [sia.polarity_scores(x)['compound'] for x in df['text']]
df
Output:
(S
  0/CD
  (NP no/DT complaints/NNS)
  1/CD
  not/RB
  bad/JJ
  Name/NN
  :/:
  text/NN
  ,/,
  dtype/NN
  :/:
  object/NN)
|   | text          | sentiment | compound |
|:--|:--------------|:----------|:---------|
| 0 | no complaints | Negative  | -0.5994  |
| 1 | not bad       | Positive  | 0.4310   |
I understand that my current code does not incorporate the POS tagging and chunking into the sentiment analysis, but you can see the combined term 'no complaints'; however, its current sentiment and sentiment score are negative (-0.5994). The aim is to use POS tagging and assign the sentiment as positive... somehow, if possible!
Option 1
Use VADER sentiment analysis instead, which seems to handle such idioms better than nltk does (NLTK actually incorporates VADER, but it seems to behave differently in such situations). No need to change anything in your code except to install VADER, as described in its instructions, and then import the library as follows (while removing the import from nltk.sentiment):
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
Using VADER, you should get the following results. I've added one extra idiom (i.e., "no worries"), which would also be given a negative score if nltk's sentiment was used.
|   | text          | sentiment | compound |
|:--|:--------------|:----------|:---------|
| 0 | no complaints | Positive  | 0.3089   |
| 1 | not bad       | Positive  | 0.4310   |
| 2 | no worries    | Positive  | 0.3252   |
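A minimal sketch of reproducing the table above, reusing the df, find_sentiment and sia from the question (the 'no worries' row is the extra idiom mentioned):
d = {'text': ['no complaints', 'not bad', 'no worries']}
df = pd.DataFrame(data=d)
sia = SentimentIntensityAnalyzer()  # vaderSentiment's analyzer, after the import change above
df['sentiment'] = df['text'].apply(find_sentiment)
df['compound'] = [sia.polarity_scores(x)['compound'] for x in df['text']]
print(df)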
Option 2
Modify NLTK's lexicon, as described here; however, it might not always work (it probably accepts only single words, not idioms). Example below:
new_words = {
    'no complaints': 3.0
}
sia = SentimentIntensityAnalyzer()
sia.lexicon.update(new_words)
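A quick sanity check for the updated entry (a sketch; as noted, the multi-word key may not actually be matched, since the lexicon is looked up per token):
print(sia.polarity_scores('no complaints'))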

Find most common multi words in an input file in Python

Say I have a text file, I can find the most frequent words easily using Counter. However, I would also like to find multi-word phrases like "tax year", "fly fishing", "u.s. capitol", etc. — words that occur together the most.
import re
from collections import Counter
with open('full.txt') as f:
    passage = f.read()
words = re.findall(r'\w+', passage)
cap_words = [word for word in words]
word_counts = Counter(cap_words)
for k, v in word_counts.most_common():
    print(k, v)
I have this currently; however, it only finds single words. How do I find multiple words?
What you're looking for is a way to count bigrams (strings containing two words).
The nltk library is great for doing lots of language related tasks, and you can use Counter from collections for all your counting-related activities!
import nltk
from nltk import bigrams
from collections import Counter
tokens = nltk.word_tokenize(passage)
print(Counter(bigrams(tokens)))
What you call multiwords (there is no such thing) are actually called bigrams. You can get a list of bigrams from a list of words by zipping it with itself with a displacement of one:
bigrams = [f"{x} {y}" for x, y in zip(words, words[1:])]
P.S. NLTK would indeed be a better tool for getting bigrams.
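Either way, the counting step is the same; a minimal sketch, assuming words is the token list built in the question:
from collections import Counter
bigrams = [f"{x} {y}" for x, y in zip(words, words[1:])]
bigram_counts = Counter(bigrams)
print(bigram_counts.most_common(10))  # the ten most frequent two-word phrases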

How to remove stop words from documents in gensim?

I'm building an NLP chat application using the Doc2Vec technique in Python with its gensim package. I have already done tokenizing and stemming. I want to remove the stop words (to test if it works better) from both the training set and the question the user asks.
Here is my code.
import gensim
import nltk
from gensim import models
from gensim import utils
from gensim import corpora
from nltk.stem import PorterStemmer
ps = PorterStemmer()
sentence0 = models.doc2vec.LabeledSentence(words=[u'sampl',u'what',u'is'],tags=["SENT_0"])
sentence1 = models.doc2vec.LabeledSentence(words=[u'sampl',u'tell',u'me',u'about'],tags=["SENT_1"])
sentence2 = models.doc2vec.LabeledSentence(words=[u'elig',u'what',u'is',u'my'],tags=["SENT_2"])
sentence3 = models.doc2vec.LabeledSentence(words=[u'limit', u'what',u'is',u'my'],tags=["SENT_3"])
sentence4 = models.doc2vec.LabeledSentence(words=[u'claim',u'how',u'much',u'can',u'I'],tags=["SENT_4"])
sentence5 = models.doc2vec.LabeledSentence(words=[u'retir',u'i',u'am',u'how',u'much',u'can',u'elig',u'claim'],tags=["SENT_5"])
sentence6 = models.doc2vec.LabeledSentence(words=[u'resign',u'i',u'have',u'how',u'much',u'can',u'i',u'claim',u'elig'],tags=["SENT_6"])
sentence7 = models.doc2vec.LabeledSentence(words=[u'promot',u'what',u'is',u'my',u'elig',u'post',u'my'],tags=["SENT_7"])
sentence8 = models.doc2vec.LabeledSentence(words=[u'claim',u'can,',u'i',u'for'],tags=["SENT_8"])
sentence9 = models.doc2vec.LabeledSentence(words=[u'product',u'coverag',u'cover',u'what',u'all',u'are'],tags=["SENT_9"])
sentence10 = models.doc2vec.LabeledSentence(words=[u'hotel',u'coverag',u'cover',u'what',u'all',u'are'],tags=["SENT_10"])
sentence11 = models.doc2vec.LabeledSentence(words=[u'onlin',u'product',u'can',u'i',u'for',u'bought',u'through',u'claim',u'sampl'],tags=["SENT_11"])
sentence12 = models.doc2vec.LabeledSentence(words=[u'reimburs',u'guidelin',u'where',u'do',u'i',u'apply',u'form',u'sampl'],tags=["SENT_12"])
sentence13 = models.doc2vec.LabeledSentence(words=[u'reimburs',u'procedur',u'rule',u'and',u'regul',u'what',u'is',u'the',u'for'],tags=["SENT_13"])
sentence14 = models.doc2vec.LabeledSentence(words=[u'can',u'i',u'submit',u'expenditur',u'on',u'behalf',u'of',u'my',u'friend',u'and',u'famili',u'claim',u'and',u'reimburs'],tags=["SENT_14"])
sentence15 = models.doc2vec.LabeledSentence(words=[u'invoic',u'bills',u'procedur',u'can',u'i',u'submit',u'from',u'shopper stop',u'claim'],tags=["SENT_15"])
sentence16 = models.doc2vec.LabeledSentence(words=[u'invoic',u'bills',u'can',u'i',u'submit',u'from',u'pantaloon',u'claim'],tags=["SENT_16"])
sentence17 = models.doc2vec.LabeledSentence(words=[u'invoic',u'procedur',u'can',u'i',u'submit',u'invoic',u'from',u'spencer',u'claim'],tags=["SENT_17"])
# User asks a question.
document = input("Ask a question:")
tokenized_document = list(gensim.utils.tokenize(document, lowercase = True, deacc = True))
#print(type(tokenized_document))
stemmed_document = []
for w in tokenized_document:
    stemmed_document.append(ps.stem(w))
sentence19 = models.doc2vec.LabeledSentence(words= stemmed_document, tags=["SENT_19"])
# Building vocab.
sentences = [sentence0,sentence1,sentence2,sentence3, sentence4, sentence5,sentence6, sentence7, sentence8, sentence9, sentence10, sentence11, sentence12, sentence13, sentence14, sentence15, sentence16, sentence17, sentence19]
#I tried to remove the stop words but it didn't work out as LabeledSentence object has no attribute lower.
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in sentences]
..
Is there a way I can remove stop words from sentences directly and get a new set of vocab without stop words?
Your sentences object is already a list of LabeledSentence objects. You construct these above; each includes a list of strings in words and a list of strings in tags.
So each item in that list (document in your list-comprehension) can't have a string method like .lower() applied to it. (Nor would it need to be .split(), as its words are already separate tokens.)
The cleanest approach would be to remove stop-words from the lists of words before they're used to construct LabeledSentence objects. For example, you could define a helper function remove_stopwords() at the top. Then your lines creating LabeledSentence objects could instead look like:
sentence0 = LabeledSentence(words=remove_stopwords([u'sampl', u'what', u'is']),
tags=["SENT_0"])
Alternatively, you could mutate the existing LabeledSentence objects so that each of their words attributes now lack stop-words. This would replace your last line with something more like:
for doc in sentences:
    doc.words = [word for word in doc.words if word not in stoplist]
texts = sentences
Separately, things you didn't ask but should know:
TaggedDocument is now the preferred example-class name for Doc2Vec text objects (see the sketch after this list) – but in fact any object that has the two required properties, words and tags, will work fine.
Doc2Vec doesn't show many of the desired properties on tiny, toy-sized datasets – don't be surprised if a model built on dozens of sentences does not do anything useful, or misleads about what preprocessing/meta-parameter options are best. (Tens of thousands of texts, and texts at least tens-of-words long, are much better for meaningful results.)
Much Word2Vec/Doc2Vec work doesn't bother with stemming or stop-word removal, but it may sometimes be helpful.
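For reference, a minimal sketch of the same construction using TaggedDocument (an illustration, not from the original answer):
from gensim.models.doc2vec import TaggedDocument
sentence0 = TaggedDocument(words=remove_stopwords([u'sampl', u'what', u'is']),
                           tags=["SENT_0"])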

How to generate a list of antonyms for adjectives in WordNet using Python

I want to do the following in Python (I have the NLTK library, but I'm not great with Python, so I've written the following in a weird pseudocode):
from nltk.corpus import wordnet as wn #Import the WordNet library
for each adjective as adj in wn #Get all adjectives from the wordnet dictionary
print adj & antonym #List all antonyms for each adjective
once list is complete then export to txt file
This is so I can generate a complete dictionary of antonyms for adjectives. I think it should be doable, but I don't know how to create the Python script. I'd like to do it in Python as that's NLTK's native language.
from nltk.corpus import wordnet as wn
for i in wn.all_synsets():
    if i.pos() in ['a', 's']:  # If synset is an adjective or satellite adjective.
        for j in i.lemmas():  # Iterate through the lemmas of each synset.
            if j.antonyms():  # If the lemma has an antonym.
                # Print the adjective-antonym pair.
                print(j.name(), j.antonyms()[0].name())
Note that there will be reverse duplicates.
[out]:
able unable
unable able
abaxial adaxial
adaxial abaxial
acroscopic basiscopic
basiscopic acroscopic
abducent adducent
adducent abducent
nascent dying
dying nascent
abridged unabridged
unabridged abridged
absolute relative
relative absolute
absorbent nonabsorbent
nonabsorbent absorbent
adsorbent nonadsorbent
nonadsorbent adsorbent
absorbable adsorbable
adsorbable absorbable
abstemious gluttonous
gluttonous abstemious
abstract concrete
...
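Since the question also asks for export to a txt file, here is a minimal sketch of writing the same pairs out with that loop (the filename is my assumption):
with open('adjective_antonyms.txt', 'w') as out:
    for i in wn.all_synsets():
        if i.pos() in ['a', 's']:
            for j in i.lemmas():
                if j.antonyms():
                    out.write('{} {}\n'.format(j.name(), j.antonyms()[0].name()))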
The following function uses WordNet to return a set of adjective-only antonyms for a given word:
from nltk.corpus import wordnet as wn
def antonyms_for(word):
    antonyms = set()
    for ss in wn.synsets(word):
        for lemma in ss.lemmas():
            any_pos_antonyms = [antonym.name() for antonym in lemma.antonyms()]
            for antonym in any_pos_antonyms:
                antonym_synsets = wn.synsets(antonym)
                if wn.ADJ not in [ss.pos() for ss in antonym_synsets]:
                    continue
                antonyms.add(antonym)
    return antonyms
Usage:
print(antonyms_for("good"))
