I have sentences (logs) that look like this: ['GET http://10.0.0.0:1000/ HTTP/X.X']
I want to have them in this form:
['GET', 'http://10.0.0.0:1000/', 'HTTP/X.X']
but that's not what I get. I've used this code:
import re
import nltk
sentences = ['GET http://10.0.0.0:1000/ HTTP/X.X']
rx = re.compile(r'\b(\w{3})\s+(\d{1,2})\s+(\d{2}:\d{2}:\d{2})\s+(\w+)\W+(\d{1,3}(?:\.\d{1,3}){3})(?:\s+\S+){2}\s+\[([^][\s]+)\s+([+\d]+)]\s+"([A-Z]+)\s+(\S+)\s+(\S+)"\s+(\d+)\s+(\d+)\s+\S+\s+"([^"]*)')
words = []
for sent in sentences:
    m = rx.search(sent)
    if m:
        words.append(list(m.groups()))
    else:
        words.append(nltk.word_tokenize(sent))
print(words)
I get this output:
[['GET', 'http', ':', '//10.0.0.0:1000/', 'HTTP/X.X']]
Does anyone know where the error is, or why it's not working the way I want?
Thank you
import re
sentences = ['GET http://10.0.0.0:1000/ HTTP/X.X']
words = []
for sent in sentences:
    words.append(list(sent.split(' ')))
print(words)
Can you use a simple space split? Your regex expects a full log line (month, day, time, IP address, and so on), so rx.search never matches this string and the code falls through to nltk.word_tokenize, which is what splits the URL apart and gives you the wrong output!
Looks like you want to split it on spaces. So,
sentences = ['GET http://10.0.0.0:1000/ HTTP/X.X']
words = [x.split(" ") for x in sentences]
print(words)
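For the sample input this prints [['GET', 'http://10.0.0.0:1000/', 'HTTP/X.X']], i.e. the form you asked for, wrapped in an outer list with one entry per sentence.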
I have allow_wd as the words I want to search for.
sentench is an array from the main database.
The output I need:
Newsentench = ['one three','']
Please help
sentench=['one from twooo or three people are here','he is here']
allow_wd=['one','two','three','four']
It is difficult to understand what you are asking. Assuming you want any word in sentench to be kept if it contains anything in allow_wd, something like the following will work:
sentench=['one from twooo or three people are here','he is here']
allow_wd=['one','two','three','four']
result = []
for sentence in sentench:
    filtered = []
    for word in sentence.split():
        for allowed_word in allow_wd:
            if allowed_word.lower() in word.lower():
                filtered.append(word)
    result.append(" ".join(filtered))
print(result)
If you want the word in the sentence to be exactly equal to an allowed word instead of just containing it, change if allowed_word.lower() in word.lower(): to if allowed_word.lower() == word.lower():
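A minimal sketch of that exact-match version (same sample data; the lowercased set is just my own addition for a fast lookup):
sentench = ['one from twooo or three people are here', 'he is here']
allow_wd = ['one', 'two', 'three', 'four']

allowed = {w.lower() for w in allow_wd}  # lowercased set for exact lookups
result = []
for sentence in sentench:
    # keep a word only when it is exactly one of the allowed words
    filtered = [word for word in sentence.split() if word.lower() in allowed]
    result.append(" ".join(filtered))
print(result)  # ['one three', '']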
Using regex word boundaries with \b will ensure that two is matched strictly and won't match inside twooo.
import re
sentench=['one from twooo or three people are here','he is here']
allow_wd=['one','two','three','four']
newsentench = []
for sent in sentench:
    output = []
    for wd in allow_wd:
        if re.findall(r'\b' + wd + r'\b', sent):
            output.append(wd)
    newsentench.append(' '.join(output))
print(newsentench)
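For the sample data this prints ['one three', '']: \btwo\b does not match inside twooo, which is exactly what the boundaries are for.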
Thanks for your clarification, this should be what you want.
sentench=['one from twooo or three people are here','he is here']
allow_wd=['one','two','three','four']
print([" ".join([word for word in s.split(" ") if word in allow_wd]) for s in sentench])
returning: ['one three', '']
I am relatively new to NLP so please be gentle. I have a complete list of the text from Trump's tweets since taking office and I am tokenizing the text to analyze the content.
I am using the TweetTokenizer from the nltk library in Python and I'm trying to get everything tokenized except for numbers and punctuation. The problem is my code removes all the tokens except one.
I have tried using the .isalpha() method, which I thought would work since it should only be True for strings composed of alphabetic characters, but it did not.
from nltk.tokenize import TweetTokenizer

# Get the text content from the tweets
text = non_re['text']
# Make all text lowercase
low_txt = [l.lower() for l in text]
# Iteratively tokenize the tweets
TokTweet = TweetTokenizer()
tokens = [TokTweet.tokenize(t) for t in low_txt
          if t.isalpha()]
My output from this is just one token.
If I remove the if t.isalpha() statement then I get all of the tokens, including numbers and punctuation, suggesting that isalpha() is to blame for the over-trimming.
What I would like, is a way to get the tokens from the tweet text without punctuation and numbers.
Thanks for your help!
Try something like below:
import string
import re
import nltk
from nltk.tokenize import TweetTokenizer
tweet = "first think another Disney movie, might good, it's kids movie. watch it, can't help enjoy it. ages love movie. first saw movie 10 8 years later still love it! Danny Glover superb could play"
def clean_text(text):
    # remove numbers
    text_nonum = re.sub(r'\d+', '', text)
    # remove punctuation and convert characters to lower case
    text_nopunct = "".join([char.lower() for char in text_nonum if char not in string.punctuation])
    # substitute multiple whitespace with a single whitespace
    # also removes leading and trailing whitespace
    text_no_doublespace = re.sub(r'\s+', ' ', text_nopunct).strip()
    return text_no_doublespace
cleaned_tweet = clean_text(tweet)
tt = TweetTokenizer()
print(tt.tokenize(cleaned_tweet))
output:
['first', 'think', 'another', 'disney', 'movie', 'might', 'good', 'its', 'kids', 'movie', 'watch', 'it', 'cant', 'help', 'enjoy', 'it', 'ages', 'love', 'movie', 'first', 'saw', 'movie', 'years', 'later', 'still', 'love', 'it', 'danny', 'glover', 'superb', 'could', 'play']
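For reference, the reason your original filter kept only one entry is that .isalpha() is applied to the whole tweet string, and it is True only when every single character is a letter, so any space, digit or punctuation mark makes the entire tweet fail the test (the one survivor was presumably a single-word tweet). A quick check:
print("looking".isalpha())         # True
print("looking for it".isalpha())  # False: the spaces alone make it fail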
# Function for removing punctuation from text; it also gives the total number of punctuation marks removed
# Input: the function takes an existing file name and a new file name as strings, i.e. 'existingFileName.txt' and 'newFileName.txt'
# Return: it returns two things, the punctuation-free file opened in read mode and a punctuation count variable
def removePunctuation(tokenizeSampleText, newFileName):
    from nltk.tokenize import word_tokenize
    import string

    existingFile = open(tokenizeSampleText, 'r')
    read_existingFile = existingFile.read()
    tokenize_existingFile = word_tokenize(read_existingFile)

    puncRemovedFile = open(newFileName, 'w+')
    stringPun = list(string.punctuation)
    count_pun = 0

    for word in tokenize_existingFile:
        if word in stringPun:
            count_pun += 1
        else:
            # write the word followed by a space
            puncRemovedFile.write(word + ' ')

    existingFile.close()
    puncRemovedFile.close()
    return open(newFileName, 'r'), count_pun
punRemoved, punCount = removePunctuation('Macbeth.txt', 'Macbeth-punctuationRemoved.txt')
print(f'Total Punctuation : {punCount}')
punRemoved.read()
I'm new to Python. I have a big data set from Twitter and I want to tokenize it.
But I don't know how to tokenize phrasal verbs like "look for", "take off", "grow up", etc., and this is important to me.
My code is:
>>> from nltk.tokenize import word_tokenize
>>> s = "I'm looking for the answer"
>>> word_tokenize(s)
['I', "'m", 'looking', 'for', 'the', 'answer']
My data set is big, so I can't use the approach from this page:
Find multi-word terms in a tokenized text in Python
So, how can I solve my problem?
You need to use parts of speech tags for that, or actually dependency parsing would be more accurate. I haven't tried with nltk, but with spaCy you can do it like this:
import spacy
nlp = spacy.load('en_core_web_lg')
def chunk_phrasal_verbs(lemmatized_sentence):
    ph_verbs = []
    for word in nlp(lemmatized_sentence):
        if word.dep_ == 'prep' and word.head.pos_ == 'VERB':
            ph_verb = word.head.text + ' ' + word.text
            ph_verbs.append(ph_verb)
    return ph_verbs
I also suggest lemmatizing the sentence first to get rid of conjugations. Also, if you need noun phrases, you can use the compound relation in the same way.
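A quick usage sketch with your example sentence; the exact chunks depend on the model's parse, so treat the output as indicative:
print(chunk_phrasal_verbs("I'm looking for the answer"))
# something like: ['looking for']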
I want to clean my reviews data. Here's my code:
def processData(data):
    data = data.lower()                       # casefold
    data = re.sub('<[^>]*>', ' ', data)       # remove any html
    data = re.sub(r'#([^\s]+)', r'\1', data)  # replace #word with word
    remove = string.punctuation
    remove = remove.replace("'", "")          # don't remove '
    p = r"[{}]".format(remove)                # create the pattern
    data = re.sub(p, "", data)
    data = re.sub(r'[\s]+', ' ', data)        # remove additional whitespace
    pp = re.compile(r"(.)\1{1,}", re.DOTALL)  # pattern for removing repetitions
    data = pp.sub(r"\1\1", data)
    return data
This code almost works well, but there is still a problem.
For the sentence "she work in public-service",
I got "she work in publicservice".
The problem is that no whitespace is left where the punctuation was.
I want my sentence to be like this: "she work in public service".
Can you help me with my code?
I think you want this:
>>> st = 'she works in public-service'
>>> import re
>>> import string
>>> re.sub(r'([{}])'.format(string.punctuation),r' ',st)
'she works in public service'
>>>
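Note that this replaces every punctuation character, including the ' you were keeping. If you want to keep that exclusion, a small tweak to your own processData is enough: substitute a space instead of the empty string for the punctuation pattern and let the whitespace-collapse step clean up any doubles. A minimal sketch of just that part (the function name is my own):
import re
import string

def remove_punct_keep_apostrophe(data):
    remove = string.punctuation.replace("'", "")  # keep ' as in your code
    p = r"[{}]".format(re.escape(remove))         # escape to be safe inside the class
    data = re.sub(p, " ", data)                   # a space, not an empty string
    data = re.sub(r"\s+", " ", data).strip()      # collapse the extra whitespace
    return data

print(remove_punct_keep_apostrophe("she work in public-service"))
# she work in public service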
I am trying to filter out stopwords in my text like so:
clean = ' '.join([word for word in text.split() if word not in (stopwords)])
The problem is that text.split() has elements like 'word.' that don't match the stopword 'word'.
I later use clean in sent_tokenize(clean), however, so I don't want to get rid of the punctuation altogether.
How do I filter out stopwords while retaining punctuation, but filtering words like 'word.'?
I thought it would be possible to change the punctuation:
text = text.replace('.',' . ')
and then
clean = ' '.join([word for word in text.split() if word not in stopwords or word == "."])
But is there a better way?
Tokenize the text first, then clean it from stopwords. A tokenizer usually recognizes punctuation.
import nltk
text = 'Son, if you really want something in this life,\
you have to work for it. Now quiet! They are about\
to announce the lottery numbers.'
stopwords = ['in', 'to', 'for', 'the']
sents = []
for sent in nltk.sent_tokenize(text):
    tokens = nltk.word_tokenize(sent)
    sents.append(' '.join([w for w in tokens if w not in stopwords]))
print(sents)
['Son , if you really want something this life , you have work it .', 'Now quiet !', 'They are about announce lottery numbers .']
You could use something like this:
import re
clean = ' '.join([word for word in text.split() if re.match('([a-z]|[A-Z])+', word).group().lower() not in (stopwords)])
This pulls out everything except lowercase and uppercase ASCII letters and matches it against the words in your stopword set or list. It also assumes that all of the words in stopwords are lowercase, which is why I converted the word to lowercase; take that out if I made too great an assumption.
Also, I'm not proficient in regex, sorry if there's a cleaner or more robust way of doing this.
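One caveat: re.match returns None for a token that doesn't start with an ASCII letter (a bare number, a lone punctuation mark, a word opening with a quote), and calling .group() on None raises an AttributeError. A slightly more defensive sketch of the same idea, reusing a shortened version of the sample text and the stopword list from the answer above:
import re

text = 'Son, if you really want something in this life, you have to work for it.'
stopwords = ['in', 'to', 'for', 'the']

def letters_only(word):
    # keep only the leading run of ASCII letters; fall back to the raw word
    m = re.match(r'[A-Za-z]+', word)
    return m.group().lower() if m else word.lower()

clean = ' '.join(word for word in text.split() if letters_only(word) not in stopwords)
print(clean)
# Son, if you really want something this life, you have work it.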