Regex .search function to ignore name suffixes - python

I created a regex that should detect suffixes (e.g., jr/sr/etc.) at the end of a name (preceded by a space or comma), return the name unchanged if a suffix is present, and otherwise move on to the next part of the if-then-else statement, which splits and does a reverse join on names in last name, first name format. I can't figure out what the problem is... but the re.search function is returning all values, instead of just the ones that contain a name suffix. Please help!
import re
import pandas as pd

d = {'Person': ['red robin, jr', 'bluejay, bluie', 'finch, mustard e', 'awing blackcrow']}
df = pd.DataFrame(data=d)

def separatetypes(name):
    if re.search(r'(?:\,|\s+(?:i|ii|iii|iv|jr|sr))*$', name):
        return name
    elif ',' in name:
        namesplit = name.split(',', 1)
        newname = str(namesplit[1]) + ' ' + str(namesplit[0])
        return newname
    else:
        return name

df['Person'] = df['Person'].apply(separatetypes)

You have a * in the pattern, which means "zero or more repetitions"; as a result, it returns a match even when it finds zero suffixes.
The pattern you probably want is r'(?:,|\s+(?:i|ii|iii|iv|jr|sr))$' (without the * and omitting the unnecessary \ before the comma) or r'(?:,|\s+)(?:i|ii|iii|iv|jr|sr)$' (which allows a suffix separated by comma, rather than a trailing comma).
As a general tool, sites like https://regex101.com/ (there are a bunch of them) can help develop regexes by explaining what's going on and by immediately showing results.
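To see the difference, here is a minimal sketch of the corrected function using the second suggested pattern (the .strip() on the reversed name is my addition, to avoid a doubled space after the join):
import re
import pandas as pd

d = {'Person': ['red robin, jr', 'bluejay, bluie', 'finch, mustard e', 'awing blackcrow']}
df = pd.DataFrame(data=d)

def separatetypes(name):
    # without the trailing *, this only matches when a real suffix is present
    if re.search(r'(?:,|\s+)(?:i|ii|iii|iv|jr|sr)$', name):
        return name
    elif ',' in name:
        namesplit = name.split(',', 1)
        return namesplit[1].strip() + ' ' + namesplit[0]
    else:
        return name

df['Person'] = df['Person'].apply(separatetypes)
print(df)  # 'red robin, jr' is kept as-is; 'bluejay, bluie' becomes 'bluie bluejay'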

Regular expression to capture different lines

I'm trying to find a better way to capture variable values from a file that stores some information, but I'm facing a problem with line breaks and spaces. For example, a DataSetList variable is given that stores a value in two different ways:
Input
Templates = <
item
Name = 'fruits'
TemplateList = '7,12'
end>
Surveys = <
item
ID = 542
Name = 'apple'
end
item
ID = 872
Name = 'banana'
DataSetList = '873,887,971,1055'
PluginInfo = {something}
end
item
ID = 437
Name = 'cherry'
DataSetList =
'438,452,536,620,704,788,1143,1179,1563,1647,1731,1839,1875,1851,' +
'1863,2060,2359,2443,2469,2620'
PluginInfo = {something}
end>
The only way I've found to capture the values of the ID, Name, and DataSetList variables stored in an 'item ... end' block is this (my approach):
Expression
ID[\s\=]*(?P<UID>\d*)\s*Name[\s\=]*'(?P<Name>.*)'\s*DataSetList[\s\=]*(?P<DataSetList>(?:'[\d\,]*'[\s\+]*)*)
ID[\s\=]*(?P<UID>\d*) # capture ID
\s* # match spaces
Name[\s\=]*'(?P<Name>.*)' # capture Name
\s* # match spaces
DataSetList[\s\=]*(?P<DataSetList>(?:'[\d\,]*'[\s\+]*)*) # capture DataSetList
My approach output
{'UID': '872',
'Name': 'banana',
'DataSetList': "'873,887,971,1055'\n "}
{'UID': '437',
'Name': 'cherry',
'DataSetList': "'438,452,536,620,704,788,1143,1179,1563,1647,1731,1839,1875,1851,' +\n '1863,2060,2359,2443,2469,2620'\n "}
Problem
I don't think my approach is good because named capturing group DataSetList also captures spaces, line breaks, literal + and finally requires postprocessing of values.
Any approaches or ideas to improve my regular expression would be very helpful. Unfortunately my knowledge of regex isn't as deep as I would like it to be. It's very interesting to see how it's done in other ways.
You can improve the regex a bit.
ID[\s=]*(?P<UID>\d*)\s*Name[\s=]*'(?P<Name>.*)'\s*DataSetList[\s=]*(?P<DataSetList>'(?:[\d,]|'[\s+]*')*')
This gets rid of the unnecessary = and , escapes. The last part now won't match the whitespace after the final bit of the DataSetList.
I can't see a nice way to avoid having to post-process the DataSetList, if you stick to regular expressions.
If you need to do anything more complicated with this, I'd advise moving away from regexes. They are great for simple things, but it looks like in this case you'd be better off with a proper parser. If none already exists for the language you have here, you can use a parsing library such as Lark to create one without too much difficulty.
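For illustration, here is a sketch of the improved regex plus the unavoidable post-processing step, run over one 'item ... end' block (the cleanup re.sub that keeps only digits and commas is my addition, not part of the answer above):
import re

text = """item
ID = 437
Name = 'cherry'
DataSetList =
'438,452,536,620,704,788,' +
'1863,2060,2359'
end"""

pattern = re.compile(
    r"ID[\s=]*(?P<UID>\d*)\s*"
    r"Name[\s=]*'(?P<Name>.*)'\s*"
    r"DataSetList[\s=]*(?P<DataSetList>'(?:[\d,]|'[\s+]*')*')")

for m in pattern.finditer(text):
    # keep only digits and commas, stripping quotes, '+' and whitespace
    datasets = re.sub(r"[^\d,]", "", m.group("DataSetList"))
    print(m.group("UID"), m.group("Name"), datasets)
# -> 437 cherry 438,452,536,620,704,788,1863,2060,2359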

Searching for a word in a column of a pandas dataframe - python

I have two text columns and I would like to find whether a word from one column is present in another. I wrote the below code, which works very well, but it detects if a word is present anywhere in the string. For example, it will find "ha" in "ham". I want to use regex expression instead, but I am stuck. I came across this post and looked at the second answer, but I haven't been able to modify it for my purpose. I would like to do something similar.
I would appreciate help and/or any pointers
import pandas as pd

d = {'emp': ['abc d. efg', 'za', 'sdfadsf '], 'vendor': ['ABCD enterprise', 'za industries', '']}
df = pd.DataFrame(data=d)
df['clean_empy_name'] = df["emp"].str.lower().str.replace(r'\W', ' ')

def check_subset(vendor, employee):
    s = []
    for n in employee.split():
        # n = " " + n + "[^a-zA-Z\d:]"
        if (str(n) in vendor.lower()) & (len(str(n)) > 1):
            s.append(n)
    return s

check_subset("ABC-xy 54", "54 xy")
df['emp_name_find_in_vendor'] = df.apply(lambda row: check_subset(row['vendor'], row['clean_empy_name']), axis=1)
df
Update 2
I updated my dataframe as below:
d = {'emp': ['abc d. efg', 'za', 'sdfadsf ', 'abc', 'yuma'], 'vendor': ['ABCD enterprise', 'za industries', '', 'Person Vue\Cisco', 'U OF M CONTLEARNING']}
df = pd.DataFrame(data=d)
df['clean_empy_name'] = df["emp"].str.lower().str.replace(r'\W', ' ')
I used the code provided in the first answer and it fails:
in the case of 'Person Vue\Cisco' it throws the error "error: bad escape \c"; if I remove the \ in 'Person Vue\Cisco', the code runs fine
in the case of 'U OF M CONTLEARNING' it returns u and m when they are clearly not a match
Yes, you can! It is going to be a little bit messy, so let me construct it in a few steps:
First, let's just create a regular expression for the single case of check_subset("ABC-xy 54", "54 xy"):
We will use re.findall(pattern, string) to find all the occurrences of pattern in string
The regex pattern will basically say "any of the words":
for the "any" we use the | (or) operator
for constructing words we need to use parentheses to group characters together. However, plain parentheses (word) create a capturing group that the regex engine keeps track of, so the group can be reused later; since we are not interested in that, we can create a non-capturing group by adding ?:, as follows: (?:word)
import re
re.findall('(?:54)|(?:xy)', 'ABC-xy 54')
# -> ['xy', '54']
Now, we have to construct the pattern each time:
Split into words
Wrap each word inside a non-capturing group (?:)
Join all of these groups by |
re.findall('|'.join(['(?:'+x+')' for x in '54 xy'.split()]), 'ABC-xy 54')
One minor thing: since the last row's vendor is empty and you seem to want no matches there (technically, the empty string matches everything), we have to add a small check. So we can rewrite your function as:
def check_subset_regex(vendor, employee):
    if vendor == '':
        return []
    pattern = '|'.join(['(?:' + x + ')' for x in vendor.lower().split(' ')])
    return re.findall(pattern, employee)
And then we can apply the same way:
df['emp_name_find_in_vendor_regex'] = df.apply(lambda row: check_subset_regex(row['vendor'],row['clean_empy_name']), axis=1)
One final comment is that your solution matches partial words, so employee Tom Sawyer would match "Tom" to the vendor "Atomic S.A.". The regex function I provided here will not give this as a match, should you want to do this the regex would become a little more complicated.
EDIT: Removing punctuation marks from vendors
You could either add a new column as you did with clean_empy_name, or simply add the removal to the function, like so (you will need to import string to get string.punctuation, or just put there a string with all the symbols you want to substitute):
def check_subset_regex(vendor, employee):
    if vendor == '':
        return []
    clean_vnd = re.sub('[' + string.punctuation + ']', '', vendor)
    pattern = '|'.join(['(?:' + x + ')' for x in clean_vnd.lower().split(' ')])
    return re.findall(pattern, employee)
In the spirit of teaching you to fish :), in regex the [] denotes a character class, which matches any one of the characters inside it... So [abc] would be the same as a|b|c.
So the re.sub line will substitute any occurrence of the string.punctuation characters (which evaluates to !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~) with '' (removing them).
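For instance, a quick sanity check of that substitution (the re.escape call is my defensive addition to keep the character class well-formed; it is not in the original answer):
import re
import string

print(re.sub('[' + re.escape(string.punctuation) + ']', '', 'Person Vue\Cisco'))
# -> 'Person VueCisco' (the backslash that caused "bad escape \c" is gone)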
EDIT2: Adding the possibility of a single non-alphanumeric character at the end of each search word:
def check_subset_regex(vendor, employee):
    if vendor == '':
        return []
    clean_vnd = re.sub('[' + string.punctuation + ']', '', vendor)
    pattern = '|'.join(['(?:' + x + '[^a-zA-Z0-9]?)' for x in clean_vnd.lower().split(' ')])
    return re.findall(pattern, employee)
In this case we are using:
- ^ as the first character inside a [] (called a character class) denotes any character except those specified in the character class, e.g. [^abc] would match anything that is not a or b or c (so d, or a white space, or #)
- and the ?, which means the previous symbol is optional...
So, [^a-zA-Z0-9]? means an optional single non-alphanumeric character.
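Putting the pieces together, here is a minimal end-to-end sketch against the updated dataframe from the question (the regex=True flag and re.escape call are my additions for current pandas and regex safety, not part of the original answer):
import re
import string
import pandas as pd

d = {'emp': ['abc d. efg', 'za', 'sdfadsf ', 'abc', 'yuma'],
     'vendor': ['ABCD enterprise', 'za industries', '', 'Person Vue\\Cisco', 'U OF M CONTLEARNING']}
df = pd.DataFrame(data=d)
df['clean_empy_name'] = df['emp'].str.lower().str.replace(r'\W', ' ', regex=True)

def check_subset_regex(vendor, employee):
    if vendor == '':
        return []
    # removing punctuation also drops the backslash that caused "bad escape \c"
    clean_vnd = re.sub('[' + re.escape(string.punctuation) + ']', '', vendor)
    pattern = '|'.join(['(?:' + x + '[^a-zA-Z0-9]?)' for x in clean_vnd.lower().split(' ')])
    return re.findall(pattern, employee)

df['emp_name_find_in_vendor_regex'] = df.apply(
    lambda row: check_subset_regex(row['vendor'], row['clean_empy_name']), axis=1)
print(df)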

Python re.sub() optimization

I have a Python list where each string is one of the following 4 possible options (of course the names would be different):
Mr: Smith\n
Mr: Smith; John\n
Smith\n
Smith; John\n
I want these to be corrected to:
Mr,Smith,fname\n
Mr,Smith,John\n
title,Smith,fname\n
title,Smith,John\n
Easy enough to do with four re.sub() calls:
import re

with open("path/to/file", 'r') as fileset:
    dataset = fileset.readlines()
dataset = [item.strip() for item in dataset]  # removes some misc. white noise
for item in dataset:
    item = re.sub(r'(.*):\W(.*);\W', r'\g<1>' + ',' + r'\g<2>' + ',', item)
    item = re.sub(r'(.*);\W(.*)', 'title,' + r'\g<1>' + ',' + r'\g<2>', item)
    item = re.sub(r'(.*):\W(.*)', r'\g<1>' + ',' + r'\g<2>' + ',fname', item)
    item = re.sub(r'(.*)', 'title,' + r'\g<1>' + ',fname', item)
While this is fine for the dataset I'm using, I want to be more efficient.
Is there a single operation that can simplify this process?
Please pardon if I forgot a quote or some such; I'm not at my workstation now and I'm aware I've stripped the newline (\n).
Thank you,
Brief
Instead of running two loops, you can reduce it to just one line. Adapted from How to iterate over the file in Python (and using the code in my Code section):
f = open("path/to/file", 'r')
while True:
    x = f.readline()
    if not x: break
    print(re.sub(r, repl, x))
See Python - How to use regexp on file, line by line, in Python for other alternatives.
Code
For viewing sake I've changed your file to an array.
See regex in use here
^(?:([^:\r\n]+):\W*)?([^;\r\n]+)(?:;\W*(.+))?
Note: You don't need all that in python, I do in order to show it on regex101, so your regex would actually just be ^(?:([^:]+):\W*)?([^;]+)(?:;\W*(.+))?
Usage
See code in use here
import re

a = [
    "Mr: Smith",
    "Mr: Smith; John",
    "Smith",
    "Smith; John"
]
r = r"^(?:([^:]+):\W*)?([^;]+)(?:;\W*(.+))?"

def repl(m):
    return (m.group(1) or "title") + "," + m.group(2) + "," + (m.group(3) or "fname")

for s in a:
    print(re.sub(r, repl, s))
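For the sample array above, this prints exactly the four target lines from the question:
Mr,Smith,fname
Mr,Smith,John
title,Smith,fname
title,Smith,John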
Explanation
^ Assert position at the start of the line
(?:([^:]+):\W*)? Optionally match the following
([^:]+) Capture any character except : one or more times into capture group 1
: Match this literally
\W* Match any number of non-word characters (copied from OP's original code, I assume \s* can be used instead)
([^;]+) Group any character except ; one or more times into capture group 2
(?:;\W*(.+))? Optionally match the following
; Match this literally
\W* Match any number of non-word characters (copied from OP's original code, I assume \s* can be used instead)
(.+) Capture any character one or more times into capture group 3
Given the above explanation of the regex, re.sub(r, repl, s) works as follows:
repl is a callback to the repl function which returns:
group 1 if it captured anything, title otherwise
group 2 (it's supposedly always set - using OP's logic here again)
group 3 if it captured anything, fname otherwise
IMHO, regexes are just too complex here; you can use classic string functions to split your item string into chunks. For that, you can use partition (or rpartition).
First, split your item string into "records", like this:
item = "Mr: Smith\n Mr: Smith; John\n Smith\n Smith; John\n"
records = item.splitlines()
# -> ['Mr: Smith', ' Mr: Smith; John', ' Smith', ' Smith; John']
Then, you can create a short function to normalize each "record".
Here is an example:
def normalize_record(record):
    # type: (str) -> str
    name, _, fname = record.partition(';')
    title, _, name = name.rpartition(':')
    title = title.strip() or 'title'
    name = name.strip()
    fname = fname.strip() or 'fname'
    return "{0},{1},{2}".format(title, name, fname)
This function is easier to understand than a collection of regexes. And, in most cases, it is faster.
For a better integration, you can define another function to handle each item:
def normalize(row):
    records = row.splitlines()
    return "\n".join(normalize_record(record) for record in records) + "\n"
Demo:
item = "Mr: Smith\n Mr: Smith; John\n Smith\n Smith; John\n"
item = normalize(item)
You get:
'Mr,Smith,fname\nMr,Smith,John\ntitle,Smith,fname\ntitle,Smith,John\n'

How can I use regex to search inside a sentence - not case sensitive

I'm a newbie to regular expressions in Python.
I have a list that I want to search to see if it contains an employee name.
The employee name can be:
at the beginning, followed by a space
followed by ®
OR followed by a space
OR at the end, with a space before it
and the match should not be case sensitive
ListSentence = ["Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel","Done daniel"]
ListEmployee = ["Steve", "Rob", "daniel"]
The output from the ListSentence is:
["Steve®", "Rob spring", "Car Daniel", "Done daniel"]
First take all your employee names and join them with a | character and wrap the string so it looks like:
(?:^|\s)((?:Steve|Rob|Daniel)(?:®)?)(?=\s|$)
By first joining all the names together you avoid the performance overhead of using a nested set of for next loops.
I don't know python well enough to offer a python example, however in powershell I'd write it like this
[array]$names = @("Steve", "Rob", "daniel")
[array]$ListSentence = @("Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel", "Done daniel")
# build the regex, and insert the names as a "|" delimited string
$Regex = "(?:^|\s)((?:" + $($names -join "|") + ")(?:®)?)(?=\s|$)"
# use case insensitive match to find any matching array values
$ListSentence -imatch $Regex
Yields
Steve®
Rob spring
Car Daniel
Done daniel
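Since the answer above stops at PowerShell, here is a rough Python equivalent of the same pattern (my sketch, not the original answerer's; re.escape is added in case a name ever contains regex metacharacters):
import re

ListSentence = ["Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel", "Done daniel"]
ListEmployee = ["Steve", "Rob", "daniel"]

# same idea: names joined by "|", optional ®, bounded by start/whitespace and a lookahead
pattern = r'(?:^|\s)((?:' + '|'.join(map(re.escape, ListEmployee)) + r')(?:®)?)(?=\s|$)'
print([s for s in ListSentence if re.search(pattern, s, re.IGNORECASE)])
# -> ['Steve®', 'Rob spring', 'Car Daniel', 'Done daniel']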
Why do you want to use regular expressions? I'd generally recommend avoiding them in Python - you can use string methods instead.
For example:
def string_has_employee_name_in_it(test_string):
    test_string = test_string.lower()  # case insensitive
    for name in ListEmployee:
        name = name.lower()
        if name == test_string:
            return True
        elif name + '®' == test_string:
            return True
        elif test_string.endswith(' ' + name):
            return True
        elif test_string.startswith(name + ' '):
            return True
        elif (' ' + name + ' ') in test_string:
            return True
    return False

final_list = []
for string in ListSentence:
    if string_has_employee_name_in_it(string):
        final_list.append(string)
final_list is the list you want. This is longer than a regex, but it's also a lot easier to parse and maintain. You can make it a lot shorter in various ways (e.g. combining the tests in the function, and using a list comprehension instead of a loop), but as you're starting out with Python it's a good idea to be clear with what's going on.
I don't think you need to check for all of those scenarios. I think all you need to do is check for word breaks.
You can join the ListEmployee list with | to make an either or regex (also, lowercase it for case-insensitivity), surrounded by \b for word breaks, and that should work:
regex = '|'.join(ListEmployee).lower()
import re
[l for l in ListSentence if re.search(r'\b(%s)\b' % regex, l.lower())]
Should output:
['Steve\xc2\xae', 'Rob spring', 'Car Daniel', 'Done daniel']
If you're just looking for strings containing a space, as your example indicates, it should be something like this:
[i for i in ListSentence if i.endswith('®') or (' ' in i)]
A possible solution:
import re
ListSentence = ["Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel","Done daniel"]
ListEmployee = ["Steve", "Rob", "daniel"]
def findEmployees(employees, sentence):
    retval = []
    for employee in employees:
        expr = re.compile(r'(^%(employee)s(®)?(\s|$))|((^|\s)%(employee)s(®)?(\s|$))|((^|\s)%(employee)s(®)?$)'
                          % {'employee': employee},
                          re.IGNORECASE)
        for part in sentence:
            if expr.search(part):
                retval.append(part)
    return retval
findEmployees(ListEmployee, ListSentence)
>> Returns ['Steve\xc3\x82\xc2\xae', 'Rob spring', 'Car Daniel', 'Done daniel']

Keyword Matching in Pyparsing: non-greedy slurping of tokens

Pythonistas:
Suppose you want to parse the following string using Pyparsing:
'ABC_123_SPEED_X 123'
where ABC_123 is an identifier, SPEED_X is a parameter, and 123 is a value. I thought of the following BNF using Pyparsing:
Identifier = Word( alphanums + '_' )
Parameter = Keyword('SPEED_X') or Keyword('SPEED_Y') or Keyword('SPEED_Z')
Value = # assume I already have an expression valid for any value
Entry = Identifier + Literal('_') + Parameter + Value
tokens = Entry.parseString('ABC_123_SPEED_X 123')
#Error: pyparsing.ParseException: Expected "_" (at char 16), (line:1, col:17)
If I remove the underscore from the middle (and adjust the Entry definition accordingly) it parses correctly.
How can I make this parser a bit lazier, so that it waits until it matches the Keyword (as opposed to slurping the entire string as an Identifier and waiting for the _, which does not exist)?
Thank you.
[Note: This is a complete rewrite of my question; I had not realized what the real problem was]
I based my answer off of this one, since what you're trying to do is get a non-greedy match. It seems like this is difficult to make happen in pyparsing, but not impossible with some cleverness and compromise. The following seems to work:
from pyparsing import *
Parameter = Literal('SPEED_X') | Literal('SPEED_Y') | Literal('SPEED_Z')
UndParam = Suppress('_') + Parameter
Identifier = SkipTo(UndParam)
Value = Word(nums)
Entry = Identifier + UndParam + Value
When we run this from the interactive interpreter, we can see the following:
>>> Entry.parseString('ABC_123_SPEED_X 123')
(['ABC_123', 'SPEED_X', '123'], {})
Note that this is a compromise; because I use SkipTo, the Identifier can be full of evil, disgusting characters, not just beautiful alphanums with the occasional underscore.
EDIT: Thanks to Paul McGuire, we can concoct a truly elegant solution by setting Identifier to the following:
Identifier = Combine(Word(alphanums) +
                     ZeroOrMore('_' + ~Parameter + Word(alphanums)))
Let's inspect how this works. First, ignore the outer Combine; we'll get to it later. Starting with Word(alphanums), we know we'll get the 'ABC' part of the reference string, 'ABC_123_SPEED_X 123'. It's important to note that we didn't allow the "word" to contain underscores in this case; we build that separately into the logic.
Next, we need to capture the '_123' part without also sucking in '_SPEED_X'. Let's also skip over ZeroOrMore at this point and return to it later. We start with the underscore as a Literal, but we can shortcut with just '_', which will get us the leading underscore, but not all of '_123'. Instinctively, we would place another Word(alphanums) to capture the rest, but that's exactly what will get us in trouble by consuming all of the remaining '_123_SPEED_X'. Instead, we say, "So long as what follows the underscore is not the Parameter, parse that as part of my Identifier." We state that in pyparsing terms as '_' + ~Parameter + Word(alphanums). Since we assume we can have an arbitrary number of underscore + WordButNotParameter repeats, we wrap that expression in a ZeroOrMore construct. (If you always expect at least one underscore + WordButNotParameter after the initial Word, you can use OneOrMore.)
Finally, we need to wrap the initial Word and the special underscore + Word repeats together so that it's understood they are contiguous, not separated by whitespace, so we wrap the whole expression up in a Combine construct. This way 'ABC _123_SPEED_X' will raise a parse error, but 'ABC_123_SPEED_X' will parse correctly.
Note also that I had to change Keyword to Literal because the ways of the former are far too subtle and quick to anger. I do not trust Keywords, nor could I get matching with them.
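For completeness, here is that elegant version assembled into a runnable sketch (Literal is used throughout, per the note above; the print() call is Python 3 style, which is my update):
from pyparsing import *

Parameter = Literal('SPEED_X') | Literal('SPEED_Y') | Literal('SPEED_Z')
Identifier = Combine(Word(alphanums) +
                     ZeroOrMore('_' + ~Parameter + Word(alphanums)))
Value = Word(nums)
Entry = Identifier + Suppress('_') + Parameter + Value

print(Entry.parseString('ABC_123_SPEED_X 123'))
# -> ['ABC_123', 'SPEED_X', '123']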
If you are sure that the identifier never ends with an underscore, you can enforce it in the definition:
from pyparsing import *
my_string = 'ABC_123_SPEED_X 123'
Identifier = Combine(Word(alphanums) + Literal('_') + Word(alphanums))
Parameter = Literal('SPEED_X') | Literal('SPEED_Y') | Literal('SPEED_Z')
Value = Word(nums)
Entry = Identifier + Literal('_').suppress() + Parameter + Value
tokens = Entry.parseString(my_string)
print(tokens)  # prints: ['ABC_123', 'SPEED_X', '123']
If that's not the case, but the identifier length is fixed, you can define Identifier like this:
Identifier = Word( alphanums + '_' , exact=7)
You can also parse the identifier and parameter as one token, and split them in a parse action:
from pyparsing import *
import re

def split_ident_and_param(tokens):
    mo = re.match(r"^(.*?_.*?)_(.*?_.*?)$", tokens[0])
    return [mo.group(1), mo.group(2)]

ident_and_param = Word(alphanums + "_").setParseAction(split_ident_and_param)
value = Word(nums)
entry = ident_and_param + value

print(entry.parseString("APC_123_SPEED_X 123"))
The example above assumes that the identifiers and parameters always have the format XXX_YYY (containing one single underscore).
If this is not the case, you need to adjust the split_ident_and_param() method.
This answers a question that you have probably also asked yourself: "What's a real-world application for reduce?":
>>> keys = ['CAT', 'DOG', 'HORSE', 'DEER', 'RHINOCEROS']
>>> p = reduce(lambda x, y: x | y, [Keyword(x) for x in keys])
>>> p
{{{{"CAT" | "DOG"} | "HORSE"} | "DEER"} | "RHINOCEROS"}
Edit:
This was a pretty good answer to the original question. I'll have to work on the new one.
Further edit:
I'm pretty sure you can't do what you're trying to do. The parser that pyparsing creates doesn't do lookahead. So if you tell it to match Word(alphanums + '_'), it's going to keep matching characters until it finds one that's not a letter, number, or underscore.
