How can I import a specific stopword dictionary (an Excel sheet) into Python and use it in addition to the nltk stopword list? Currently my stopword section looks like this:
# filter out stop words
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
words = [w for w in words if w not in stop_words]
Thanks in advance!
You can import an Excel sheet using the pandas library. This example assumes that your stopwords are located in the first column, one word per row. Afterwards, create the union of the nltk stopwords and your own stopwords:
import pandas as pd
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
# check pandas docs for more info on usage of read_excel
custom_words = pd.read_excel('your_file.xlsx', header=None, names=['mywords'])
# union of two sets
stop_words = stop_words | set(custom_words['mywords'])
words = [w for w in words if w not in stop_words]
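For example, with a hypothetical token list (the sample words below are made up), the filter drops both the nltk stopwords and the custom ones from your Excel sheet:
# hypothetical sample; 'mycustomword' stands in for a word from your Excel sheet
words = ['this', 'is', 'a', 'sample', 'review', 'with', 'mycustomword', 'inside']
filtered = [w for w in words if w not in stop_words]
print(filtered)  # -> ['sample', 'review', 'inside'] if 'mycustomword' is in the Excel sheet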
Related
I have a text file with comments and I need to do classification with it. The text file is read into a list with a length of 1000. But after tokenization, stemming, and stopword removal, the length gets bigger. How can I tokenize without changing the length?
Here is my code:
!pip install nltk
import nltk
import pandas as pd
data = open('X_train.txt',encoding='utf8').readlines()
nltk.download('punkt')
nltk.download('stopwords')
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
from nltk import word_tokenize
ps = PorterStemmer()
for word in data:
    if word in stopwords.words('english'):
        data.remove(word)
words_after = []
for w in data:
    tokens = word_tokenize(w)
    for token in tokens:
        stemmed_word = ps.stem(token)
        words_after.append(stemmed_word)
print(len(words_after))
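If the goal is to keep one entry per comment so the list stays at 1000, one option is to clean each comment separately and join the processed tokens back into a single string. A minimal sketch, assuming data and ps are defined as above:
stop_words = set(stopwords.words('english'))
cleaned = []
for comment in data:
    tokens = word_tokenize(comment)
    # drop stopwords, stem what remains, and rejoin so each comment stays one item
    kept = [ps.stem(t) for t in tokens if t.lower() not in stop_words]
    cleaned.append(' '.join(kept))
print(len(cleaned))  # same length as data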
Here I am trying to read the content of, let's say, 'book1.txt', and I have to remove all the special characters and punctuation marks and word-tokenize the content using nltk's word tokenizer.
Lemmatize those tokens using WordNetLemmatizer.
And write those tokens into a csv file one by one.
Here is the code I am using, which obviously is not working, but I just need some suggestions on this please.
import nltk
from nltk.stem import WordNetLemmatizer
import csv
from nltk.tokenize import word_tokenize
file_out=open('data.csv','w')
with open('book1.txt','r') as myfile:
    for s in myfile:
        words = nltk.word_tokenize(s)
        words = [word.lower() for word in words if word.isalpha()]
        for word in words:
            token = WordNetLemmatizer().lemmatize(words, 'v')
            filtered_sentence = [""]
            for n in words:
                if n not in token:
                    filtered_sentence.append("" + n)
                    file_out.writelines(filtered_sentence + ["\n"])
There are some issues here, most notably with the last two for loops.
The way you are doing it writes the output as follows:
word1
word1word2
word1word2word3
word1word2word3word4
........etc
I'm guessing that is not the expected output. I'm assuming the expected output is:
word1
word2
word3
word4
........etc (without creating duplicates)
I applied the code below to a 3 paragraph Cat Ipsum file. Note that I changed some variable names due to my own naming conventions.
import nltk
nltk.download('punkt')
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from pprint import pprint
# read the text into a single string.
with open("book1.txt") as infile:
text = ' '.join(infile.readlines())
words = word_tokenize(text)
words = [word.lower() for word in words if word.isalpha()]
# create the lemmatized word list
results = []
for word in words:
    # you were using words instead of word below
    token = WordNetLemmatizer().lemmatize(word, "v")
    # check if token not already in results.
    if token not in results:
        results.append(token)
# sort results, just because :)
results.sort()
# print and save the results
pprint(results)
print(len(results))
with open("nltk_data.csv", "w") as outfile:
outfile.writelines(results)
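Since the question already imports the csv module, the final write could also go through csv.writer to get a proper one-column CSV (one token per row); a minimal sketch:
import csv

with open("nltk_data.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    for word in results:
        # one lemmatized token per row
        writer.writerow([word])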
I am working on e-commerce data in Python. I have loaded the data and converted it into a pandas data frame. Now I want to perform text processing on it, like removing unwanted characters, stopwords, stemming etc. Currently the code I have is working fine, but it takes a lot of time. I have around 2 million rows of data to process and it takes forever. I tried the code on 10,000 rows and it took around 240 seconds. I am working on this kind of project for the first time. Any help to reduce the time would be very helpful.
Thanks in advance.
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
import re
def textprocessing(text):
    stemmer = PorterStemmer()
    # Remove unwanted characters
    re_sp = re.sub(r'\s*(?:([^a-zA-Z0-9._\s "])|\b(?:[a-z])\b)', " ", text.lower())
    # Remove single characters
    no_char = ' '.join([w for w in re_sp.split() if len(w) > 1]).strip()
    # Removing Stopwords
    filtered_sp = [w for w in no_char.split(" ") if w not in stopwords.words('english')]
    # Perform Stemming
    stemmed_sp = [stemmer.stem(item) for item in filtered_sp]
    # Converting it to string
    stemmed_sp = ' '.join([x for x in stemmed_sp])
    return stemmed_sp
I am calling this method on that dataframe:
files['description'] = files.loc[:,'description'].apply(lambda x: textprocessing(str(x)))
You can take any data as per your convenience. Due to some policy, I am not able to share the data.
You could try to finish it in one loop and avoid recreating the stemmer and stopword list on every call:
STEMMER = PorterStemmer()
# a set makes the stopword membership test much faster than a list
STOP_WORD = set(stopwords.words('english'))

def textprocessing(text):
    cleaned = re.sub(r'\s*(?:([^a-zA-Z0-9._\s "])|\b(?:[a-z])\b)', " ", text.lower())
    return ' '.join(STEMMER.stem(token) for token in cleaned.split()
                    if token not in STOP_WORD and len(token) > 1)
You could also use nltk to remove unwanted words:
from nltk.tokenize import RegexpTokenizer

STEMMER = PorterStemmer()
STOP_WORD = set(stopwords.words('english'))
TOKENIZER = RegexpTokenizer(r'\w+')

def textprocessing(text):
    return ' '.join(STEMMER.stem(token) for token in TOKENIZER.tokenize(text.lower())
                    if token not in STOP_WORD and len(token) > 1)
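In the same spirit of not rebuilding objects on every call, the regex from the first snippet could be compiled once at module level; this is my own addition rather than part of the original suggestion, and it reuses the STEMMER and STOP_WORD defined above:
import re

PATTERN = re.compile(r'\s*(?:([^a-zA-Z0-9._\s "])|\b(?:[a-z])\b)')

def textprocessing(text):
    # substitute with the precompiled pattern instead of recompiling per call
    cleaned = PATTERN.sub(" ", text.lower())
    return ' '.join(STEMMER.stem(token) for token in cleaned.split()
                    if token not in STOP_WORD and len(token) > 1)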
I have the following code. I have to add more words to the nltk stopword list. After I run this, it does not add the words to the list.
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
import string
stop = set(stopwords.words('english'))
new_words = open("stopwords_en.txt", "r")
new_stopwords = stop.union(new_word)
exclude = set(string.punctuation)
lemma = WordNetLemmatizer()
def clean(doc):
stop_free = " ".join([i for i in doc.lower().split() if i not in new_stopwords])
punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
return normalized
doc_clean = [clean(doc).split() for doc in emails_body_text]
Don't do things blindly. Read in your new list of stopwords, inspect it to see that it's right, then add it to the other stopword list. Start with the code suggested by #greg_data, but you'll need to strip newlines and maybe do other things -- who knows what your stopwords file looks like?
This might do it, for example:
new_words = open("stopwords_en.txt", "r").read().split()
new_stopwords = stop.union(new_words)
PS. Don't keep splitting and joining your document; tokenize once and work with the list of tokens.
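A rough sketch of that PS, reusing string.punctuation, new_stopwords, and lemma from the question's code (this token-based rewrite is my own illustration, not code from the original answer):
import string

def clean(doc):
    # tokenize once, then filter and lemmatize the token list
    tokens = doc.lower().split()
    tokens = [t.strip(string.punctuation) for t in tokens]
    return [lemma.lemmatize(t) for t in tokens if t and t not in new_stopwords]

doc_clean = [clean(doc) for doc in emails_body_text]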
I keep getting this error
sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: expected string or buffer
when I try to run this script. I am not sure what is wrong. I am essentially reading from a text file, filtering out the stopwords, and tokenizing them using NLTK.
import nltk
from nltk.collocations import *
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
stopset = set(stopwords.words('english'))
bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()
text_file=open('sentiment_test.txt', 'r')
lines=text_file.readlines()
filtered_words = [w for w in lines if not w in stopwords.words('english')]
print filtered_words
tokens = word_tokenize(str(filtered_words))
print tokens
finder = BigramCollocationFinder.from_words(tokens)
Any help would be much appreciated.
I am presuming that sentiment_test.txt is just plain text, and not a specific format.
You are trying to filter lines and not words. You should first tokenize and then filter the stopwords.
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
stopset = set(stopwords.words('english'))
with open('sentiment_test.txt', 'r') as text_file:
    text = text_file.read()
tokens = word_tokenize(str(text))
tokens = [w for w in tokens if w not in stopset]
print tokens
Hope this helps.
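If the end goal is the collocation step from the original script, the filtered tokens can be fed straight into the finder; a rough sketch continuing from the code above (the PMI scoring and top-10 cutoff are just example choices):
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

bigram_measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
# top 10 bigrams ranked by pointwise mutual information
print(finder.nbest(bigram_measures.pmi, 10))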