This is my code (note that import re is needed for re.compile):
import re
from whoosh.analysis import RegexAnalyzer

rex = RegexAnalyzer(re.compile(ur"([\u4e00-\u9fa5])|(\w+(\.?\w+)*)"))
a = [token.text for token in rex(u"hi 中 000 中文测试中文 there 3.141 big-time under_score")]
self.render_template('index.html', {'a': a})
and it shows this on the web page:
[u'hi', u'\u4e2d', u'000', u'\u4e2d', u'\u6587', u'\u6d4b', u'\u8bd5', u'\u4e2d', u'\u6587', u'there', u'3.141', u'big', u'time', u'under_score']
but I want the Chinese characters to display readably, so I changed it to this:
a = [token.text.encode('utf-8') for token in rex(u"hi 中 000 中文测试中文 there 3.141 big-time under_score")]
and it shows:
['hi', '\xe4\xb8\xad', '000', '\xe4\xb8\xad', '\xe6\x96\x87', '\xe6\xb5\x8b', '\xe8\xaf\x95', '\xe4\xb8\xad', '\xe6\x96\x87', 'there', '3.141', 'big', 'time', 'under_score']
So how can I show the Chinese words in my code?
Thanks.
By default, printing a built-in container such as a list gives the repr() of each of its elements. If you want str()/unicode() instead, you need to iterate over the sequence yourself and build the string:
a = u"['" + u"', '".join(token.text for token in ...) + u"']"
print a
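For example, a minimal sketch with the analyzer from the question (Python 2, since the code uses ur"" literals); building the string from the unicode token texts bypasses repr(), so the Chinese characters render as characters instead of \uXXXX escapes:
import re
from whoosh.analysis import RegexAnalyzer

rex = RegexAnalyzer(re.compile(ur"([\u4e00-\u9fa5])|(\w+(\.?\w+)*)"))
a = u"['" + u"', '".join(token.text for token in rex(u"hi 中 000 中文测试中文 there 3.141 big-time under_score")) + u"']"
print a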
I am relatively new to NLP so please be gentle. I have a complete list of the text from Trump's tweets since taking office, and I am tokenizing the text to analyze the content.
I am using the TweetTokenizer from the nltk library in Python, and I'm trying to get everything tokenized except for numbers and punctuation. The problem is that my code removes all the tokens except one.
I have tried using the .isalpha() method, but this did not work; I thought it would, since it should only be True for strings composed from the alphabet.
# Create a collection of text from the tweets
text = non_re['text']
# Make all text lowercase
low_txt = [l.lower() for l in text]
# Iteratively tokenize the tweets
TokTweet = TweetTokenizer()
tokens = [TokTweet.tokenize(t) for t in low_txt
          if t.isalpha()]
My output from this is just one token.
If I remove the if t.isalpha() statement then I get all of the tokens, including numbers and punctuation, which suggests isalpha() is to blame for the over-trimming.
What I would like is a way to get the tokens from the tweet text without punctuation and numbers.
Thanks for your help!
Try something like this:
import string
import re
import nltk
from nltk.tokenize import TweetTokenizer
tweet = "first think another Disney movie, might good, it's kids movie. watch it, can't help enjoy it. ages love movie. first saw movie 10 8 years later still love it! Danny Glover superb could play"
def clean_text(text):
    # remove numbers
    text_nonum = re.sub(r'\d+', '', text)
    # remove punctuation and convert characters to lower case
    text_nopunct = "".join([char.lower() for char in text_nonum if char not in string.punctuation])
    # substitute multiple whitespace with single whitespace;
    # also removes leading and trailing whitespace
    text_no_doublespace = re.sub(r'\s+', ' ', text_nopunct).strip()
    return text_no_doublespace
cleaned_tweet = clean_text(tweet)
tt = TweetTokenizer()
print(tt.tokenize(cleaned_tweet))
output:
['first', 'think', 'another', 'disney', 'movie', 'might', 'good', 'its', 'kids', 'movie', 'watch', 'it', 'cant', 'help', 'enjoy', 'it', 'ages', 'love', 'movie', 'first', 'saw', 'movie', 'years', 'later', 'still', 'love', 'it', 'danny', 'glover', 'superb', 'could', 'play']
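Alternatively, note that the questioner's bug is that isalpha() was applied to each whole tweet, and any string containing a space or a digit fails it, so only single-word tweets survived. A minimal sketch that filters per token instead (assuming, as in the question, that non_re['text'] is an iterable of tweet strings):
from nltk.tokenize import TweetTokenizer

TokTweet = TweetTokenizer()
# tokenize each tweet, then keep only the purely alphabetic tokens;
# note this also drops tokens with apostrophes or digits, e.g. "can't" or "140"
tokens = [[w for w in TokTweet.tokenize(t.lower()) if w.isalpha()]
          for t in non_re['text']]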
# Function for removing punctuation from text; it also gives the total number of punctuation marks removed
# Input: the function takes an existing file name and a new file name as strings, i.e. 'existingFileName.txt' and 'newFileName.txt'
# Return: it returns two things: the punctuation-free file opened in read mode and a punctuation count variable.
def removePunctuation(tokenizeSampleText, newFileName):
    from nltk.tokenize import word_tokenize
    import string

    existingFile = open(tokenizeSampleText, 'r')
    read_existingFile = existingFile.read()
    tokenize_existingFile = word_tokenize(read_existingFile)

    puncRemovedFile = open(newFileName, 'w+')
    stringPun = list(string.punctuation)

    count_pun = 0
    for word in tokenize_existingFile:
        if word in stringPun:
            count_pun += 1
        else:
            # write the word followed by a space
            puncRemovedFile.write(word + ' ')

    existingFile.close()
    puncRemovedFile.close()
    return open(newFileName, 'r'), count_pun
punRemoved, punCount = removePunctuation('Macbeth.txt', 'Macbeth-punctuationRemoved.txt')
print(f'Total Punctuation : {punCount}')
print(punRemoved.read())
I'm new to Python. I have a big data set from Twitter and I want to tokenize it.
But I don't know how to tokenize phrasal verbs like "look for", "take off", "grow up", etc. as single units, and it's important to me.
my code is :
>>> from nltk.tokenize import word_tokenize
>>> s = "I'm looking for the answer"
>>> word_tokenize(s)
['I', "'m", 'looking', 'for', 'the', 'answer']
My data set is big, so I can't use the code from this page:
Find multi-word terms in a tokenized text in Python
So, how can I solve my problem?
You need to use part-of-speech tags for that, or actually dependency parsing would be more accurate. I haven't tried it with nltk, but with spaCy you can do it like this:
import spacy
nlp = spacy.load('en_core_web_lg')
def chunk_phrasal_verbs(lemmatized_sentence):
    ph_verbs = []
    for word in nlp(lemmatized_sentence):
        # a preposition whose head is a verb marks a phrasal/prepositional verb
        if word.dep_ == 'prep' and word.head.pos_ == 'VERB':
            ph_verb = word.head.text + ' ' + word.text
            ph_verbs.append(ph_verb)
    return ph_verbs
I also suggest lemmatizing the sentence first to get rid of conjugations. Also, if you need noun phrases, you can use the compound relationship in a similar way.
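For example, a quick usage sketch (the chunks you get depend on the model's parse, and en_core_web_lg must be downloaded first):
print(chunk_phrasal_verbs(u'I look for the answer'))
# expected, given the usual parse: ['look for']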
How to extract Apple Recipe, 3, pages, 29.4KB from the following string?
'\r\n\t\t\t\t\t\r\n\t\t\t\t\tApple Recipe\r\r\n\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t3\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t\tpages\r\n
\t\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\t29.4KB\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\r\n\t\t\t\t'
I've tried re.compile('\w+') but I can only get results like:
Apple
Recipe
29
.
4
KB
However, I want to get them together as they appear, not separately. For example, I want Apple Recipe as a single token, not as two separate tokens.
data = """\r\n\t\t\t\t\t\r\n\t\t\t\t\tApple Recipe\r\r\n\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t3\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t\tpages\r\n
\t\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\t29.4KB\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\r\n\t\t\t\t"""
import re
g = re.findall(r'[^\r\n\t]+', data)
print(g)
Prints:
['Apple Recipe', '3', 'pages', '29.4KB']
The [^\r\n\t]+ pattern matches any run of characters that contains no \r, \n, or \t characters.
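If the fields in your data also carry stray leading or trailing spaces, a follow-up strip pass is a simple fix (a sketch reusing data from above):
g = [s.strip() for s in re.findall(r'[^\r\n\t]+', data)]
g = [s for s in g if s]  # drop entries that were whitespace only
print(g)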
txt = """\r\n\t\t\t\t\t\r\n\t\t\t\t\tApple Recipe\r\r\n\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t3\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t\tpages\r\n
\t\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\t29.4KB\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\r\n\t\t\t\t"""
import re
output = re.findall(r'\w+[.\d]?\w*', txt)
print(output)
You will get the required output:
['Apple', 'Recipe', '3', 'pages', '29.4KB']
The purpose of this code is to make a program that searches a person's name (on Wikipedia, specifically) and uses keywords to come up with reasons why that person is significant.
I'm having issues with this specific line, "if fact_amount < 5 and (terms in sentence.lower()):", because I get this error: "TypeError: coercing to Unicode: need string or buffer, list found".
If you could offer some guidance it would be greatly appreciated. Thank you.
import requests
import nltk
import re
#You will need to install requests and nltk
terms = ['pronounced',
         'was a significant',
         'major/considerable influence',
         'one of the (X) most important',
         'major figure',
         'earliest',
         'known as',
         'father of',
         'best known for',
         'was a major']
names = ["Nelson Mandela","Bill Gates","Steve Jobs","Lebron James"]
#List of people that you need to get info from
for name in names:
    print name
    print '==============='
    #Goes to the wikipedia page of the person
    r = requests.get('http://en.wikipedia.org/wiki/%s' % (name))
    #Parses the raw html into text
    raw = nltk.clean_html(r.text)
    #Tries to split each sentence.
    #Sort of buggy, though.
    #For example St. Mary will split after St.
    sentences = re.split('[?!.][\s]*', raw)
    fact_amount = 0
    for sentence in sentences:
        #I noticed that important things came after 'he was' and 'she was'
        #Seems to work for my sample list
        #Also there may be buggy sentences, so I return 5 instead of 3
        if fact_amount < 5 and (terms in sentence.lower()):
            #remove the reference notation that wikipedia has
            #ex [ 33 ]
            sentence = re.sub('[ [0-9]+ ]', '', sentence)
            #removes newlines
            sentence = re.sub('\n', '', sentence)
            #removes trailing and leading whitespace
            sentence = sentence.strip()
            fact_amount += 1
            #sentence is formatted. Print it out
            print sentence + '.'
            print
You should be checking it the other way around:
sentence.lower() in terms
terms is a list and sentence.lower() is a string. You can check if a particular string is in a list, but you cannot check if a list is in a string.
You might mean if any(t in sentence_lower for t in terms), to check whether any term from the terms list is in the sentence string.
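A minimal sketch of the corrected condition inside the questioner's loop (sentence_lower is just a local name introduced here for the lowercased sentence):
for sentence in sentences:
    sentence_lower = sentence.lower()
    # True if at least one keyword phrase occurs in this sentence
    if fact_amount < 5 and any(t in sentence_lower for t in terms):
        # ... clean up and print the sentence as before
        fact_amount += 1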
I collected some tweets through the Twitter API. Then I counted the words using split(' ') in Python. However, some words appear like this:
correct!
correct.
,correct
blah"
...
So how can I strip the punctuation from the tweets? Or maybe I should try another way to split the tweets? Thanks.
You can do the split on multiple characters using re.split:
from string import punctuation
import re
puncrx = re.compile(r'[{}\s]'.format(re.escape(punctuation)))
print filter(None, puncrx.split(your_tweet))
Or, just find contiguous runs of the characters you want to keep:
print re.findall(r'[\w#]+', your_tweet)
eg:
print re.findall(r'[\w#]+', 'talking about #python with #someone is so much fun! Is there a 140 char limit? So not cool!')
# ['talking', 'about', '#python', 'with', '#someone', 'is', 'so', 'much', 'fun', 'Is', 'there', 'a', '140', 'char', 'limit', 'So', 'not', 'cool']
I did originally have a smiley in the example, but of course these end up getting filtered out with this method, so that's something to be wary of.
Try removing the punctuation from the string before doing the split.
import string
s = "Some nice sentence. This has punctuation!"
out = s.translate(string.maketrans("",""), string.punctuation)
Then do the split on out.
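Note that this two-argument translate() is Python 2 only. A sketch of the Python 3 equivalent:
import string

s = "Some nice sentence. This has punctuation!"
# the three-argument str.maketrans maps each punctuation character to None
out = s.translate(str.maketrans('', '', string.punctuation))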
I would advise cleaning the text of special symbols before splitting it, using this code:
tweet_object["text"] = re.sub(u'[!?##$.,#:\u2026]', '', tweet_object["text"])
You need to import re before using the sub function:
import re
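For example, applied to the broken words from the question (the quote character is added to the class here, since one of the examples ends with one):
import re

words = ['correct!', 'correct.', ',correct', 'blah"']
cleaned = [re.sub(u'[!?#$.,:"\u2026]', '', w) for w in words]
print(cleaned)  # ['correct', 'correct', 'correct', 'blah']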