I'm just starting to use NLTK and I don't quite understand how to get a list of words from text. If I use nltk.word_tokenize(), I get a list of words and punctuation, but I need only the words. How can I get rid of the punctuation? Also, word_tokenize doesn't seem to work with multiple sentences: dots get added to the last word.
Take a look at the other tokenizing options that nltk provides here. For example, you can define a tokenizer that picks out sequences of alphanumeric characters as tokens and drops everything else:
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
tokenizer.tokenize('Eighty-seven miles to go, yet. Onward!')
Output:
['Eighty', 'seven', 'miles', 'to', 'go', 'yet', 'Onward']
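Note that \w+ also splits contractions ("don't" becomes "don" and "t") and hyphenated words, as the output above shows. If you want to keep internal apostrophes, a pattern along these lines should work (a sketch, not part of the original answer):
from nltk.tokenize import RegexpTokenizer
# Keep runs of letters/digits, optionally followed by an apostrophe part, so "Don't" stays whole.
tokenizer = RegexpTokenizer(r"[A-Za-z0-9]+(?:'[A-Za-z]+)*")
tokenizer.tokenize("Eighty-seven miles to go, yet. Don't stop!")
# ['Eighty', 'seven', 'miles', 'to', 'go', 'yet', "Don't", 'stop']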
You do not really need NLTK to remove punctuation; plain Python can do it. For plain (byte) strings on Python 2:
import string
s = '... some string with punctuation ...'
s = s.translate(None, string.punctuation)
Or for unicode strings (this form also works on Python 3):
import string
translate_table = dict((ord(char), None) for char in string.punctuation)
s.translate(translate_table)
and then use this string in your tokenizer.
P.S. The string module has some other sets of characters that can be removed as well (such as string.digits).
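For reference, on Python 3 str.translate takes a single mapping built with str.maketrans; a minimal sketch that strips punctuation and digits in one pass:
import string
s = 'Some string, with punctuation... and 3 digits!'
s = s.translate(str.maketrans('', '', string.punctuation + string.digits))
# 'Some string with punctuation and  digits'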
The code below removes all punctuation marks as well as non-alphabetic tokens. It is adapted from the NLTK book:
http://www.nltk.org/book/ch01.html
import nltk
s = "I can't do this now, because I'm so tired. Please give me some time. # sd 4 232"
words = nltk.word_tokenize(s)
words = [word.lower() for word in words if word.isalpha()]
print(words)
Output:
['i', 'ca', 'do', 'this', 'now', 'because', 'i', 'so', 'tired', 'please', 'give', 'me', 'some', 'time', 'sd']
As noted in the comments, start with sent_tokenize(), because word_tokenize() works only on a single sentence. You can filter out punctuation with filter(). And if you have unicode strings, make sure they are unicode objects (not 'str' objects encoded with some encoding like 'utf-8').
from nltk.tokenize import word_tokenize, sent_tokenize
text = '''It is a blue, small, and extraordinary ball. Like no other'''
tokens = [word for sent in sent_tokenize(text) for word in word_tokenize(sent)]
print filter(lambda word: word not in ',-', tokens)
I just used the following code, which removed all the punctuation:
tokens = nltk.wordpunct_tokenize(raw)
type(tokens)
text = nltk.Text(tokens)
type(text)
words = [w.lower() for w in text if w.isalpha()]
Sincerely asking, what is a word? If your assumption is that a word consists of alphabetic characters only, you are wrong, since words such as can't will be split into pieces (such as can and t) if you remove punctuation before tokenisation, which is very likely to affect your program negatively.
Hence the solution is to tokenise and then remove punctuation tokens.
import string
from nltk.tokenize import word_tokenize
tokens = word_tokenize("I'm a southern salesman.")
# ['I', "'m", 'a', 'southern', 'salesman', '.']
tokens = list(filter(lambda token: token not in string.punctuation, tokens))
# ['I', "'m", 'a', 'southern', 'salesman']
...and then if you wish, you can replace certain tokens such as 'm with am.
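For example, with a small hand-made mapping (the table here is only illustrative, not exhaustive), continuing from the tokens above:
# Hypothetical expansion table for a few clitic tokens produced by word_tokenize.
expansions = {"'m": 'am', "n't": 'not', "'re": 'are', "'ll": 'will'}
tokens = [expansions.get(token, token) for token in tokens]
# ['I', 'am', 'a', 'southern', 'salesman']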
I think you need some sort of regular expression matching (the following code is in Python 3):
import string
import re
import nltk
s = "I can't do this now, because I'm so tired. Please give me some time."
l = nltk.word_tokenize(s)
ll = [x for x in l if not re.fullmatch('[' + string.punctuation + ']+', x)]
print(l)
print(ll)
Output:
['I', 'ca', "n't", 'do', 'this', 'now', ',', 'because', 'I', "'m", 'so', 'tired', '.', 'Please', 'give', 'me', 'some', 'time', '.']
['I', 'ca', "n't", 'do', 'this', 'now', 'because', 'I', "'m", 'so', 'tired', 'Please', 'give', 'me', 'some', 'time']
This should work well in most cases, since it removes punctuation while preserving tokens like "n't", which cannot be obtained from regex tokenizers such as wordpunct_tokenize.
I use this code to remove punctuation:
import nltk
def getTerms(sentences):
    tokens = nltk.word_tokenize(sentences)
    words = [w.lower() for w in tokens if w.isalnum()]
    print(tokens)
    print(words)
getTerms("hh, hh3h. wo shi 2 4 A . fdffdf. A&&B ")
And if you want to check whether a token is a valid English word or not, you may need PyEnchant.
Tutorial:
import enchant
d = enchant.Dict("en_US")
d.check("Hello")
d.check("Helo")
d.suggest("Helo")
You can do it in one line without NLTK (Python 3.x):
import string
string_text = string_text.translate(str.maketrans('', '', string.punctuation))
Just adding to the solution by @rmalouf: this version will not include any numbers, because \w+ is equivalent to [a-zA-Z0-9_] and therefore matches digits as well.
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'[a-zA-Z]+')
tokenizer.tokenize('Eighty-seven miles to go, yet. Onward!')
Remove punctuation (this removes '.' as well, as part of the punctuation handling) using the code below:
import sys
import unicodedata
from nltk.tokenize import word_tokenize

tbl = dict.fromkeys(i for i in range(sys.maxunicode) if unicodedata.category(chr(i)).startswith('P'))
text_string = text_string.translate(tbl)  # text_string no longer has punctuation
w = word_tokenize(text_string)  # now tokenize the string
Sample Input/Output:
direct flat in oberoi esquire. 3 bhk 2195 saleable 1330 carpet. rate of 14500 final plus 1% floor rise. tax approx 9% only. flat cost with parking 3.89 cr plus taxes plus possession charger. middle floor. north door. arey and oberoi woods facing. 53% paymemt due. 1% transfer charge with buyer. total cost around 4.20 cr approx plus possession charges. rahul soni
['direct', 'flat', 'oberoi', 'esquire', '3', 'bhk', '2195', 'saleable', '1330', 'carpet', 'rate', '14500', 'final', 'plus', '1', 'floor', 'rise', 'tax', 'approx', '9', 'flat', 'cost', 'parking', '389', 'cr', 'plus', 'taxes', 'plus', 'possession', 'charger', 'middle', 'floor', 'north', 'door', 'arey', 'oberoi', 'woods', 'facing', '53', 'paymemt', 'due', '1', 'transfer', 'charge', 'buyer', 'total', 'cost', 'around', '420', 'cr', 'approx', 'plus', 'possession', 'charges', 'rahul', 'soni']
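One advantage of this Unicode table over string.punctuation-based approaches, shown in a small sketch below: it also strips curly quotes and other non-ASCII punctuation that string.punctuation does not cover.
import sys
import unicodedata
# Same idea as above: every character whose Unicode category starts with 'P' is punctuation.
tbl = dict.fromkeys(i for i in range(sys.maxunicode) if unicodedata.category(chr(i)).startswith('P'))
print('“a potential terrorist attack.”'.translate(tbl))  # a potential terrorist attack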
I would like to count the occurrences of a list of words for every article contained in a single text file.
Each article can be identified since they all start with a common tag, "<p>Advertisement".
This is a sample of the text file:
"[<p>Advertisement , By TIM ARANGO , SABRINA TAVERNISE and CEYLAN YEGINSU JUNE 28, 2016
,Credit Ilhas News Agency, via Agence France-Presse — Getty Images,ISTANBUL ......]
[<p>Advertisement , By MILAN SCHREUER and ALISSA J. RUBIN OCT. 5, 2016
, BRUSSELS — A man wounded two police officers with a knife in Brussels around noon
on Wednesday in what the authorities called “a potential terrorist attack.” ,
The two ......]"
What I would like to do is count the frequency of each word I have in a csv file (20 words) and write the output like this:
id, attack, war, terrorism, people, killed, said
article_1, 45, 5, 4, 6, 2,1
article_2, 10, 3, 2, 1, 0,0
The words in the csv are stored like this:
attack
people
killed
attacks
state
islamic
As suggested, I am first trying to split the whole text file by the tag <p> before starting to count the words. Then I tokenized the resulting list.
This is what I have so far:
opener = open("News_words_most_common.csv")
words = opener.read()
my_pattern = ('\w+')
x = re.findall(my_pattern, words)
file_open = open("Training_News_6.csv")
files = file_open.read()
r = files.lower()
stops = set(stopwords.words("english"))
words = r.split("<p>")
# token = word_tokenize(words)  # fails: word_tokenize expects a string, not a list
string = str(words)
token = word_tokenize(string)
print(token)
This is the output:
['[', "'", "''", '|', '[', "'", ',', "'advertisement",
',', 'by', 'milan', 'schreuer'.....']', '|', "''", '\\n', "'", ']']
The next step will be looping over the split articles (now turned into lists of tokenized words) and counting the frequency of the words from the first file. If you have any suggestions on how to iterate and count, please let me know!
I am using Python 3.5 on Anaconda
You can try to use pandas and sklearn:
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
vocabulary = [word.strip() for word in open('vocabulary.txt').readlines()]
corpus = open('articles.txt').read().split('<p>Advertisement')
vectorizer = CountVectorizer(min_df=1, vocabulary=vocabulary)
words_matrix = vectorizer.fit_transform(corpus)
df = pd.DataFrame(data=words_matrix.todense(),
index=('article_%s' % i for i in range(words_matrix.shape[0])),
columns=vectorizer.get_feature_names())
df.index.name = 'id'
df.to_csv('articles.csv')
In file articles.csv:
$ cat articles.csv
id,attack,people,killed,attacks,state,islamic
article_0,0,0,0,0,0,0
article_1,0,0,0,0,0,0
article_2,1,0,0,0,0,0
You could try reading your text file, then splitting at the '<p>' (if, as you say, they are used to mark the beginning of new articles) and then you have a list of articles. A simple loop with count will do.
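A minimal sketch of that loop, reusing the file names from the question and assuming the vocabulary file holds one word per line:
from collections import Counter
import csv
import re

vocabulary = [line.strip().lower() for line in open('News_words_most_common.csv') if line.strip()]
articles = open('Training_News_6.csv').read().lower().split('<p>advertisement')[1:]

with open('word_counts.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['id'] + vocabulary)
    for i, article in enumerate(articles, start=1):
        counts = Counter(re.findall(r'[a-z]+', article))
        # Counter returns 0 for words that never appear, so indexing is safe.
        writer.writerow(['article_%d' % i] + [counts[word] for word in vocabulary])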
I would recommend you take a look at the nltk module. I am not sure what your end goal is, but nltk has really easy-to-use functions for this sort of thing and much more (for example, instead of just looking at the number of times a word appears in each article, you could calculate its frequency, and even scale it by inverse document frequency, known as tf-idf).
Perhaps I didn't understand the task well...
If you are doing text categorisation, it could be handy to use the standard scikit-learn vectorizers, for example a bag-of-words CountVectorizer, which takes texts and returns an array of word counts. You can use it directly in classifiers or write it out to csv if you really need csv.
It is already included in scikit-learn and Anaconda.
Another way is to split manually.
You can load the data, split it into words, count them, exclude stopwords (if you need to), and put the result into an output file. Something like:
import re
from collections import Counter
txt = open('file.txt', 'r').read()
stopwords = set()  # fill in your stop word list here
words = re.findall('[a-z]+', txt, re.I)
cnt = Counter(w for w in words if w not in stopwords)
How about this:
import re
from collections import Counter
csv_data = [["'", "\\n", ","], ['fox'],
['the', 'fox', 'jumped'],
['over', 'the', 'fence'],
['fox'], ['fence']]
key_words = ['over', 'fox']
words_list = []
for i in csv_data:
    for j in i:
        line_of_words = ",".join(re.findall("[a-zA-Z]+", j))
        words_list.append(line_of_words)

word_count = Counter(words_list)
match_dict = {}
for aword, freq in word_count.items():
    if aword in key_words:
        match_dict[aword] = freq
Which results in:
print('Article words: ', words_list)
print('Article Word Count: ', word_count)
print('Matches: ', match_dict)
Article words: ['', 'n', '', 'fox', 'the', 'fox', 'jumped', 'over', 'the', 'fence', 'fox', 'fence']
Article Word Count: Counter({'fox': 3, '': 2, 'the': 2, 'fence': 2, 'n': 1, 'over': 1, 'jumped': 1})
Matches: {'over': 1, 'fox': 3}
I am trying to filter out stopwords in my text like so:
clean = ' '.join([word for word in text.split() if word not in (stopwords)])
The problem is that text.split() has elements like 'word.' that don't match the stopword 'word'.
I later use clean in sent_tokenize(clean), however, so I don't want to get rid of the punctuation altogether.
How do I filter out stopwords while retaining punctuation, but filtering words like 'word.'?
I thought it would be possible to change the punctuation:
text = text.replace('.',' . ')
and then
clean = ' '.join([word for word in text.split() if word not in stopwords or word == "."])
But is there a better way?
Tokenize the text first, then remove the stopwords. A tokenizer usually recognizes punctuation.
import nltk
text = 'Son, if you really want something in this life,\
you have to work for it. Now quiet! They are about\
to announce the lottery numbers.'
stopwords = ['in', 'to', 'for', 'the']
sents = []
for sent in nltk.sent_tokenize(text):
    tokens = nltk.word_tokenize(sent)
    sents.append(' '.join([w for w in tokens if w not in stopwords]))

print(sents)
['Son , if you really want something this life , you have work it .', 'Now quiet !', 'They are about announce lottery numbers .']
You could use something like this:
import re
clean = ' '.join([word for word in text.split() if re.match('([a-z]|[A-Z])+', word).group().lower() not in (stopwords)])
This strips each token down to its leading run of ASCII letters and matches that against the words in your stopwords set or list. It also assumes that all of your words in stopwords are lowercase, which is why I convert the word to lowercase. Take that out if I made too great an assumption.
Also, I'm not proficient with regex, so sorry if there's a cleaner or more robust way of doing this.
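One caveat (the guard below is a sketch, not part of the original answer): re.match() returns None for tokens with no leading letters, such as a bare "-", and calling .group() on that raises an AttributeError. A slightly more defensive version:
import re

stopwords = {'the', 'a', 'is'}  # hypothetical stopword set
text = 'The fence. A fox - is jumping.'

def strip_token(word):
    # Keep only the leading run of ASCII letters; fall back to the raw token.
    m = re.match('[A-Za-z]+', word)
    return m.group().lower() if m else word.lower()

clean = ' '.join([word for word in text.split() if strip_token(word) not in stopwords])
print(clean)  # fence. fox - jumping.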
I collected some tweets through the Twitter API. Then I counted the words using split(' ') in Python. However, some words appear like this:
correct!
correct.
,correct
blah"
...
So how can I format the tweets without punctuation? Or maybe I should try another way to split tweets? Thanks.
You can do the split on multiple characters using re.split...
from string import punctuation
import re
puncrx = re.compile(r'[{}\s]'.format(re.escape(punctuation)))
print filter(None, puncrx.split(your_tweet))
Or, just find words that contain certain contiguous characters:
print re.findall(r'[\w#]+', your_tweet)
eg:
print re.findall(r'[\w#]+', 'talking about #python with #someone is so much fun! Is there a 140 char limit? So not cool!')
# ['talking', 'about', '#python', 'with', '#someone', 'is', 'so', 'much', 'fun', 'Is', 'there', 'a', '140', 'char', 'limit', 'So', 'not', 'cool']
I did originally have a smiley in the example, but of course these end up getting filtered out with this method, so that's something to be wary of.
Try removing the punctuation from the string before doing the split.
import string
s = "Some nice sentence. This has punctuation!"
out = s.translate(string.maketrans("",""), string.punctuation)
Then do the split on out.
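Note that the two-argument string.maketrans call only exists on Python 2. On Python 3 the equivalent (a small adaptation, not part of the original answer) would be:
import string

s = "Some nice sentence. This has punctuation!"
out = s.translate(str.maketrans('', '', string.punctuation))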
I would advise cleaning the text of special symbols before splitting it, using this code:
tweet_object["text"] = re.sub(u'[!?##$.,#:\u2026]', '', tweet_object["text"])
You would need to import re before using re.sub:
import re