Python String split by specific pattern with Indices

I'm trying to split a string into sentences from different speakers, where each sentence keeps its speaker tag, and to store each piece together with its start and end indices. Speaker names can vary in length (e.g. Mike or Steve), and the content can be in multiple languages such as Chinese or Japanese.
content = "A:Hello.B:How are you?A:I'm fine."
which I want to be like:
[0]A:Hello. , 0:7
[1]B:How are you? , 8:21
[2]A:I'm fine. ,22:33

You can use re.split as follows:
import re
s = "A:Hello.B:How are you?A:I'm fine."
t = re.split(r'[.?]', s)
print(t)
that gives
['A:Hello', 'B:How are you', "A:I'm fine", '']
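Note that re.split drops the delimiters, leaves a trailing empty string, and gives no indices. If you only need the punctuation kept (not the positions), a minimal sketch using a capturing group:
import re
s = "A:Hello.B:How are you?A:I'm fine."
# Splitting with a capturing group keeps each delimiter as its own list element.
parts = re.split(r'([.?])', s)
# Re-attach each delimiter to the sentence before it; zip drops the trailing ''.
print([a + b for a, b in zip(parts[0::2], parts[1::2])])
which gives
['A:Hello.', 'B:How are you?', "A:I'm fine."]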

You can use re.finditer for the task:
import re
content = "A:Hello.B:How are you?A:I'm fine."
for idx, i in enumerate(re.finditer(r'(.*?[.?])(?=[A-Z]|\Z)', content)):
    print('[{}]{:<20}, {}:{}'.format(idx, i.group(1), i.start(), i.end()-1))
Prints:
[0]A:Hello. , 0:7
[1]B:How are you? , 8:21
[2]A:I'm fine. , 22:32
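Since the question mentions speaker names of different lengths (Mike, Steve), note that the (?=[A-Z]|\Z) lookahead assumes single-letter tags. A sketch that instead looks ahead for a name followed by a colon, assuming names consist of word characters:
import re
content = "Mike:Hello.Steve:How are you?Mike:I'm fine."
# Split after '.' or '?' only where the next speaker tag (word chars + ':')
# or the end of the string follows.
for idx, m in enumerate(re.finditer(r'(.*?[.?])(?=\w+:|\Z)', content)):
    print('[{}]{:<20}, {}:{}'.format(idx, m.group(1), m.start(), m.end() - 1))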

Related

python regex: extract string from escaped sequences

How to extract Apple Recipe, 3, pages, 29.4KB from the following string?
'\r\n\t\t\t\t\t\r\n\t\t\t\t\tApple Recipe\r\r\n\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t3\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t\tpages\r\n
\t\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\t29.4KB\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\r\n\t\t\t\t'
I've tried re.compile(r'\w+') but it only gives results like:
Apple
Recipe
29
.
4
KB
However, I want to get them together as they are, not separately. For example, I want to get Apple Recipe together but not as two separate tokens.
data = """\r\n\t\t\t\t\t\r\n\t\t\t\t\tApple Recipe\r\r\n\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t3\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t\tpages\r\n
\t\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\t29.4KB\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\r\n\t\t\t\t"""
import re
g = re.findall(r'[^\r\n\t]+', data)
print(g)
Prints:
['Apple Recipe', '3', 'pages', '29.4KB']
The [^\r\n\t]+ pattern matches any maximal run of characters that contains no \r, \n, or \t characters.
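For reference, a regex-free sketch of the same idea: split data on lines and strip the surrounding whitespace (str.strip removes the \t and space characters):
tokens = [line.strip() for line in data.splitlines()]
# Drop the lines that contained only whitespace.
print([t for t in tokens if t])
which also prints
['Apple Recipe', '3', 'pages', '29.4KB']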
txt = """\r\n\t\t\t\t\t\r\n\t\t\t\t\tApple Recipe\r\r\n\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t3\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t\tpages\r\n
\t\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\t29.4KB\r\n
\t\t\t\t\t\r\n\t\t\t\t\t\r\n\t\t\t\t"""
import re
output = re.findall(r'\w+(?:[. ]\w+)*', txt)
print(output)
You will get the required output:
['Apple Recipe', '3', 'pages', '29.4KB']
(Note that a pattern like \w+[.\d]?\w+ needs at least two word characters, so it would miss the lone 3 and would also split Apple Recipe in two; allowing optional groups of a space or dot followed by more word characters keeps multi-word tokens together.)

Removing stopwords from list using python3

I have been trying to remove stopwords from a CSV file that I'm reading with Python, but my code does not seem to work. I have tried using sample text in the code to validate it, but the result is the same. Below is my code; I would appreciate it if anyone could help me rectify the issue.
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import csv
article = ['The computer code has a little bug',
           'im learning python',
           'thanks for helping me',
           'this is trouble',
           'this is a sample sentence'
           'cat in the hat']
tokenized_models = [word_tokenize(str(i)) for i in article]
stopset = set(stopwords.words('english'))
stop_models = [i for i in tokenized_models if str(i).lower() not in stopset]
print('token:'+str(stop_models))
Your tokenized_models is a list of tokenized sentences, so a list of lists. Ergo, the following line tries to match a list of words to a stopword:
stop_models = [i for i in tokenized_models if str(i).lower() not in stopset]
Instead, iterate again through words. Something like:
clean_models = []
for m in tokenized_models:
    stop_m = [i for i in m if str(i).lower() not in stopset]
    clean_models.append(stop_m)
print(clean_models)
Off-topic useful hint:
To define a multi-line string, use parentheses and no commas:
article = ('The computer code has a little bug'
           'im learning python'
           'thanks for helping me'
           'this is trouble'
           'this is a sample sentence'
           'cat in the hat')
This version would work with your original code, since adjacent string literals are concatenated into a single string.
word_tokenize(str(i)) returns a list of words, so tokenized_models is a list of lists. You need to flatten that list, or better yet just make article a single string, since I don't see why it's a list at the moment.
This is because the in operator won't search through a list and then through strings in that list at the same time, e.g.:
>>> 'a' in 'abc'
True
>>> 'a' in ['abc']
False
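If you do want to keep the original flat-filtering style, a minimal sketch of the flattening approach mentioned above, reusing tokenized_models and stopset from the question:
# Flatten the list of lists into one list of words, then filter out stopwords.
flat_tokens = [w for sent in tokenized_models for w in sent]
stop_models = [w for w in flat_tokens if w.lower() not in stopset]
print('token:' + str(stop_models))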

Tagging words based on a dictionary/list in Python

I have the following dictionary of gene names:
gene_dict = {"repA1":1, "leuB":1}
# the actual dictionary is longer, around ~30K entries.
# or in list format
# gene_list = ["repA1", "leuB"]
What I want to do is, given any sentence, search for terms that are contained in the above dictionary and tag them.
For example given this sentence:
mytext = "xxxxx repA1 yyyy REPA1 zzz."
It will be then tagged as:
xxxxx <GENE>repA1</GENE> yyyy <GENE>REPA1</GENE> zzz.
Is there any efficient way to do that? In practice we would process a couple of million sentences.
If your gene_list is not really, really long, you could use a compiled regular expression, like:
import re
gene_list = ["repA1", "leuB"]
# Escape the names in case any of them contain regex metacharacters.
regexp = re.compile('|'.join(map(re.escape, gene_list)), flags=re.IGNORECASE)
result = regexp.sub(r'<GENE>\g<0></GENE>', 'xxxxx repA1 yyyy REPA1 zzz.')
and put in a loop for all your sentences. I think this should be quite fast.
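A minimal sketch of that loop (the sentences list here is hypothetical):
sentences = ["xxxxx repA1 yyyy REPA1 zzz.", "leuB is essential."]
# Apply the compiled pattern to every sentence.
tagged = [regexp.sub(r'<GENE>\g<0></GENE>', s) for s in sentences]
print(tagged)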
If most of the sentences are short and separated by single spaces, something like:
gene_dict = {"repA1":1, "leuB":1}
format_gene = "<GENE>{}</GENE>".format
mytext = " ".join(format_gene(word) if word in gene_dict else word for word in mytext.split())
is going to be faster.
For slightly longer sentences, or sentences you cannot rebuild with " ".join, it might be more efficient or more correct to use several .replace calls:
gene_dict = {"repA1":1, "leuB":1}
genes = set(gene_dict)
format_gene = "<GENE>{}</GENE>".format
to_replace = genes.intersection(mytext.split())
for gene in to_replace:
    mytext = mytext.replace(gene, format_gene(gene))
Each of these assumes that splitting the sentences will not take an exorbitant amount of time, which is fair assuming gene_dict is much larger than the sentences.

Python Error "TypeError: coercing to Unicode: need string or buffer, list found"

The purpose of this code is to make a program that searches a persons name (on Wikipedia, specifically) and uses keywords to come up with reasons why that person is significant.
I'm having issues with this specific line "if fact_amount < 5 and (terms in sentence.lower()):" because I get this error ("TypeError: coercing to Unicode: need string or buffer, list found")
If you could offer some guidance it would be greatly appreciated, thank you.
import requests
import nltk
import re
#You will need to install requests and nltk
terms = ['pronounced',
         'was a significant',
         'major/considerable influence',
         'one of the (X) most important',
         'major figure',
         'earliest',
         'known as',
         'father of',
         'best known for',
         'was a major']
names = ["Nelson Mandela","Bill Gates","Steve Jobs","Lebron James"]
#List of people that you need to get info from
for name in names:
    print name
    print '==============='
    #Goes to the wikipedia page of the person
    r = requests.get('http://en.wikipedia.org/wiki/%s' % (name))
    #Parses the raw html into text
    raw = nltk.clean_html(r.text)
    #Tries to split each sentence.
    #sort of buggy though
    #For example St. Mary will split after St.
    sentences = re.split('[?!.][\s]*', raw)
    fact_amount = 0
    for sentence in sentences:
        #I noticed that important things came after 'he was' and 'she was'
        #Seems to work for my sample list
        #Also there may be buggy sentences, so I return 5 instead of 3
        if fact_amount < 5 and (terms in sentence.lower()):
            #remove the reference notation that wikipedia has
            #ex [ 33 ]
            sentence = re.sub('[ [0-9]+ ]', '', sentence)
            #removes newlines
            sentence = re.sub('\n', '', sentence)
            #removes trailing and leading whitespace
            sentence = sentence.strip()
            fact_amount += 1
            #sentence is formatted. Print it out
            print sentence + '.'
    print
You should be checking it the other way around:
sentence.lower() in terms
terms is a list and sentence.lower() is a string. You can check whether a particular string is in a list, but you cannot check whether a list is in a string.
You probably mean if any(t in sentence.lower() for t in terms), to check whether any term from the terms list is in the sentence string.
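For example, a quick sketch of the corrected check with made-up data:
terms = ['known as', 'father of']
sentence = 'He was known as the father of the computer.'
# True, because at least one term occurs in the lowercased sentence.
print(any(t in sentence.lower() for t in terms))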

Extract words between the 2nd and the 3rd comma

I am a total newbie to regex, so this question might seem trivial to many of you.
I would like to extract the words between the second and the third comma, like in the sentence:
Chateau d'Arsac, Bordeaux blanc, Cuvee Celine, 2012
I have tried (?<=,\s)[^,]+(?=,), but this doesn't return what I want...
data = "Chateau d'Arsac, Bordeaux blanc, Cuvee Celine, 2012"
import re
print re.match(r".*?,.*?,\s*(.*?),.*", data).group(1)
Output
Cuvee Celine
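As an aside, your (?<=,\s)[^,]+(?=,) pattern is not wrong; it just matches the field after the first comma first. With re.findall you can simply take the second hit:
# findall returns ['Bordeaux blanc', 'Cuvee Celine']; index 1 is the field we want.
print re.findall(r'(?<=,\s)[^,]+(?=,)', data)[1]
which also prints Cuvee Celine.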
But for this simple task, you can simply split the string on , like this:
data.split(",")[2].strip()
In this case I find it easier to use a simple split by comma.
>>> s = "Chateau d'Arsac, Bordeaux blanc, Cuvee Celine, 2012"
>>> s.split(',')[2]
' Cuvee Celine'
Why not just split the string by commas using str.split()?
data.split(",")[2]
