In Python, using NLTK, how would I find a count of the number of non-stopwords in a document, filtered by category?
I can figure out how to get the words in a corpus filtered by category, e.g. all the words in the Brown corpus for the category 'news':
text = nltk.corpus.brown.words(categories=category)
And separately I can figure out how to get all the words for a particular document, e.g. all the words in the document 'cj47' in the Brown corpus:
text = nltk.corpus.brown.words(fileids='cj47')
And then I can loop through the results and count up the words that are not stopwords e.g.
stopwords = nltk.corpus.stopwords.words('english')
for w in text:
    if w.lower() not in stopwords:
        # found a non-stopword
But how do I put it together so that I am filtering by category for a particular document? If I try to specify a category and a fileid at the same time, e.g.
text = nltk.corpus.brown.words(categories=category, fileids='cj47')
I get an error saying:
ValueError: Specify fileids or categories, not both
Get fileids for a category:
fileids = nltk.corpus.brown.fileids(categories=category)
For each file, count the non-stopwords:
for f in fileids:
    words = nltk.corpus.brown.words(fileids=f)
    count = sum(1 for w in words if w.lower() not in stopwords)
    print("Document %s: %d non-stopwords." % (f, count))
Related
I'm analyzing a book of 1167 pages (a txt file). So far I've done preprocessing of the data (cleaning, removing punctuation, stop word removal, tokenization).
Now, how can I cluster the text and visualize the similarity (plot the clusters)?
For example -
text1 = "there is one unshakable conviction that people—whatever the degree of development of their understanding and whatever the form taken by the factors present in their individuality for engendering all kinds of ideal"
text2 = "That is why I now also, in setting forth on this venture quite new for me, namely authorship, begin by pronouncing this invocation"
In my task, I divided the whole book into chapters: text1 is chapter one, text2 is chapter two, and so on. Now I want to compare chapter 1 and chapter 2 like this.
Example code that I used for data preprocessing:
# split into words
from nltk.tokenize import word_tokenize
tokens = word_tokenize(pages[1])
# convert to lower case
tokens = [w.lower() for w in tokens]
# remove punctuation from each word
import string
table = str.maketrans('', '', string.punctuation)
stripped = [w.translate(table) for w in tokens]
# remove remaining tokens that are not alphabetic
words = [word for word in stripped if word.isalpha()]
# filter out stop words
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
words = [w for w in words if not w in stop_words]
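The thread above stops at preprocessing, so the following is only one possible direction, sketched under the assumption that each chapter has been joined back into a single string (text1, text2, ... as in the question): vectorize the chapters with TF-IDF, look at pairwise cosine similarity, and cluster with k-means using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

# Assumed: each chapter is one preprocessed string, e.g. ' '.join(words) per chapter
chapters = [text1, text2]  # extend with the remaining chapters

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(chapters)   # one TF-IDF vector per chapter

print(cosine_similarity(X))              # pairwise chapter-similarity matrix

kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
print(kmeans.labels_)                    # cluster label assigned to each chapter
To plot the clusters, the TF-IDF matrix can be reduced to two dimensions (for example with sklearn.decomposition.TruncatedSVD) and drawn as a scatter plot coloured by kmeans.labels_.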
I have a list of sentences. Each sentence has to be converted to a JSON, and a unique 'name' for each sentence is also specified in that JSON. The problem is that the number of sentences is large, so giving each one a name manually is tedious. The name should reflect the meaning of the sentence, e.g. if the sentence is "do you like cake?" then the name should be something like "likeCake". I want to automate the creation of a name for each sentence. I googled text summarization, but the results were about paragraph summarization, not sentence summarization. How do I go about this?
This sort of task falls under natural language processing. You can get a result close to what you want by removing stop words. Based on this article, you can use the Natural Language Toolkit to deal with the stop words. After installing the library (pip install nltk), you can do something along the lines of:
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import string

# load data
with open('yourFileWithSentences.txt', 'rt') as file:
    lines = file.readlines()

stop_words = set(stopwords.words('english'))
table = str.maketrans('', '', string.punctuation)

for line in lines:
    # split into words
    tokens = word_tokenize(line)
    # remove punctuation from each word
    stripped = [w.translate(table) for w in tokens]
    # filter out stop words (and tokens left empty after stripping)
    words = [w for w in stripped if w and w.lower() not in stop_words]
    print(f"Var name is {''.join(words)}")
Note that you can extend the stop_words set by adding any other words you might want to remove.
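The question's example output is camelCase ("likeCake" for "do you like cake?"), which the plain join above does not produce; here is a small additional sketch (not part of the answer above) that lower-cases the first remaining word and capitalizes the rest:
def to_camel_case(words):
    # words: the non-stopword tokens for one sentence, e.g. ['like', 'cake']
    if not words:
        return ''
    return words[0].lower() + ''.join(w.capitalize() for w in words[1:])

print(to_camel_case(['like', 'cake']))  # likeCake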
I am trying to rewrite an algorithm that takes an input text file, compares it with different documents, and returns the similarities.
Now I want to print the unmatched words and write them out to a new text file.
In this code, "hello force" is the input, which is checked against raw_documents, and the result is a rank between 0 and 1 for each document (the word "force" matches the second document, so the output ranks the second document higher, but "hello" is not in any raw_document). What I want is to print any input word, such as "hello", that did not match any of the raw_documents.
import gensim
import nltk
from nltk.tokenize import word_tokenize
raw_documents = ["I'm taking the show on the road",
"My socks are a force multiplier.",
"I am the barber who cuts everyone's hair who doesn't cut their own.",
"Legend has it that the mind is a mad monkey.",
"I make my own fun."]
gen_docs = [[w.lower() for w in word_tokenize(text)]
for text in raw_documents]
dictionary = gensim.corpora.Dictionary(gen_docs)
corpus = [dictionary.doc2bow(gen_doc) for gen_doc in gen_docs]
tf_idf = gensim.models.TfidfModel(corpus)
s = 0
for i in corpus:
    s += len(i)
sims = gensim.similarities.Similarity('/usr/workdir/',tf_idf[corpus],
num_features=len(dictionary))
query_doc = [w.lower() for w in word_tokenize("hello force")]
query_doc_bow = dictionary.doc2bow(query_doc)
query_doc_tf_idf = tf_idf[query_doc_bow]
result = sims[query_doc_tf_idf]
print(result)
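The thread does not include an answer here, but one straightforward way to get the unmatched words (a sketch building on the code above) is to check which query tokens never made it into the gensim Dictionary, since doc2bow silently drops tokens it has not seen:
unmatched = [w for w in query_doc if w not in dictionary.token2id]
print("Unmatched words:", unmatched)  # e.g. ['hello']

# write the unmatched words to a new text file
with open('unmatched_words.txt', 'w') as f:
    f.write('\n'.join(unmatched))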
I have a corpus of sentences in a specific domain.
I am looking for open-source code or a package that I can give the data to, and it will generate a good, reliable language model (meaning: given a context, know the probability of each word).
Is there such a code/project?
I saw this github repo: https://github.com/rafaljozefowicz/lm, but it didn't work.
I recommend writing your own basic implementation. First, we need some sentences:
import nltk
from nltk.corpus import brown
words = brown.words()
total_words = len(words)
sentences = list(brown.sents())
sentences is now a list of lists. Each sublist represents a sentence with each word as an element. Now you need to decide whether or not you want to include punctuation in your model. If you want to remove it, try something like the following:
punctuation = [",", ".", ":", ";", "!", "?"]
for i, sentence in enumerate(sentences.copy()):
    new_sentence = [word for word in sentence if word not in punctuation]
    sentences[i] = new_sentence
Next, you need to decide whether or not you care about capitalization. If you don't care about it, you could remove it like so:
for i, sentence in enumerate(sentences.copy()):
    new_sentence = list()
    for j, word in enumerate(sentence.copy()):
        new_word = word.lower()  # Lower case all characters in word
        new_sentence.append(new_word)
    sentences[i] = new_sentence
Next, we need special start and end words to represent words that are valid at the beginning and end of sentences. You should pick start and end words that don't exist in your training data.
start = ["<<START>>"]
end = ["<<END>>"]
for i, sentence in enumerate(sentences.copy()):
    new_sentence = start + sentence + end
    sentences[i] = new_sentence
Now, let's count unigrams. A unigram is a sequence of one word in a sentence. Yes, a unigram model is just a frequency distribution of each word in the corpus:
new_words = list()
for sentence in sentences:
    for word in sentence:
        new_words.append(word)
unigram_fdist = nltk.FreqDist(new_words)
And now it's time to count bigrams. A bigram is a sequence of two words in a sentence. So, for the sentence "i am the walrus", we have the following bigrams: "<<START>> i", "i am", "am the", "the walrus", and "walrus <<END>>".
bigrams = list()
for sentence in sentences:
    new_bigrams = nltk.bigrams(sentence)
    bigrams += new_bigrams
Now we can create a frequency distribution:
bigram_fdist = nltk.ConditionalFreqDist(bigrams)
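To sanity-check the conditional distribution (a small usage example, not part of the original answer), you can look at the most common words that follow a given word:
# most common continuations of 'the' in the (lower-cased) Brown sentences
print(bigram_fdist['the'].most_common(5))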
Finally, we want to know the probability of each word in the model:
def getUnigramProbability(word):
    if word in unigram_fdist:
        return unigram_fdist[word] / total_words
    else:
        return -1  # You should figure out how you want to handle out-of-vocabulary words

def getBigramProbability(word1, word2):
    if word1 not in bigram_fdist:
        return -1  # You should figure out how you want to handle out-of-vocabulary words
    elif word2 not in bigram_fdist[word1]:
        # i.e. "word1 word2" never occurs in the corpus
        return getUnigramProbability(word2)
    else:
        bigram_frequency = bigram_fdist[word1][word2]
        unigram_frequency = unigram_fdist[word1]
        bigram_probability = bigram_frequency / unigram_frequency
        return bigram_probability
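A quick usage example (my addition; the lookups assume you kept the lower-casing step above, so all words are lower case):
print(getUnigramProbability('the'))               # P(the)
print(getBigramProbability('in', 'the'))          # P(the | in)
print(getBigramProbability('<<START>>', 'the'))   # P(the | start of sentence)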
While this isn't a framework/library that just builds the model for you, I hope seeing this code has demystified what goes on in a language model.
You might try word_language_model from the PyTorch examples. There might just be an issue if you have a big corpus: they load all the data into memory.
I have a list of processed text files that looks somewhat like this:
text = 'this is the first text document " this is the second text document " this is the third document '
I've been able to successfully tokenize the sentences:
from nltk.tokenize import sent_tokenize, word_tokenize

sentences = sent_tokenize(text)
for ii, sentence in enumerate(sentences):
    sentences[ii] = remove_punctuation(sentence)  # user-defined helper
sentence_tokens = [word_tokenize(sentence) for sentence in sentences]
And now I would like to remove stopwords from this list of tokens. However, because it's a list of sentences within a list of text documents, I can't seem to figure out how to do this.
This is what I've tried so far, but it returns no results:
sentence_tokens_no_stopwords = [w for w in sentence_tokens if w not in stopwords]
I'm assuming achieving this will require some sort of for loop, but what I have now isn't working. Any help would be appreciated!
You can use a nested list comprehension like this:
sentence_tokens_no_stopwords = [[w for w in s if w not in stopwords] for s in sentence_tokens]
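One caveat worth adding (my addition, not part of the original answer): stopwords here needs to be the actual word list, membership tests are faster against a set, and the NLTK stop word list is lower case, so it is safest to lower-case each token before comparing:
from nltk.corpus import stopwords

stop_set = set(stopwords.words('english'))
sentence_tokens_no_stopwords = [[w for w in s if w.lower() not in stop_set]
                                for s in sentence_tokens]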