Doing multiple, successive regex replacements in Python. Inefficient?

First off - my code works. It just runs slowly, and I'm wondering if I'm missing something that will make it more efficient. I'm parsing PDFs with Python (and yes, I know that this should be avoided if at all possible).
My problem is that I have to do several rather complex regex substitutions - and when I say substitution, I really mean deletion. I have done the ones that strip out the most data first so that the later expressions don't need to analyze too much text, but that's all I can think of to speed things up.
I'm pretty new to python and regexes, so it's very conceivable this could be done better.
Thanks for reading.
import re

# The four deletion patterns, applied to the raw PDF text in sequence:
regexPagePattern = r"(Wk)\d{1,2}.\d{2}(\d\.\d{1,2})"
regexCleanPattern = r"(\(continued\))?((II)\d\.\d{1,2}|\d\.\d{1,2}(II)|\d\.\d{1,2})"
regexStartPattern = r".*(II)(\s)?(INDEX OF CHARTS AFFECTED)"
regexEndPattern = r"(II.)\d{1,5}\((P|T)\).*"

contentRaw = re.sub(regexStartPattern, "", contentRaw)
contentRaw = re.sub(regexEndPattern, "", contentRaw)
contentRaw = re.sub(regexPagePattern, "", contentRaw)
contentRaw = re.sub(regexCleanPattern, "", contentRaw)

I'm not sure whether you do this inside a loop. If not, the following does not apply.
If you use a pattern multiple times, you should compile it with re.compile(...). That way the pattern is only compiled once, which can give a noticeable speed-up. Minimal example:
>>> a="a b c d e f"
>>> re.sub(' ', '-', a)
'a-b-c-d-e-f'
>>> p=re.compile(' ')
>>> re.sub(p, '-', a)
'a-b-c-d-e-f'
Another idea: use re.split( ... ) instead of re.sub and operate on the list of fragments it returns. I'm not entirely sure how it is implemented, but I think re.sub creates text fragments and merges them into one string at the end, which is expensive. After the last step you can join the fragments with " ".join(fragments). Obviously, this method will not work if your patterns overlap somewhere.
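As a rough illustration of that split-and-join idea, here is a sketch with a toy input string and just one of the question's patterns, rewritten with non-capturing groups so that re.split doesn't also return the group text:
import re

contentRaw = "keep this Wk12.345.67 and keep this too Wk3.210.1 end"  # toy input

# Note: re.split() also returns the text of any capturing groups, so use
# non-capturing (?:...) groups when you only want the fragments between matches.
page = re.compile(r"(?:Wk)\d{1,2}.\d{2}\d\.\d{1,2}")

fragments = page.split(contentRaw)   # the pieces of text between the matches
# ...run the remaining (pre-compiled) patterns on each fragment here...
contentRaw = "".join(fragments)      # same result as page.sub("", contentRaw)
print(contentRaw)                    # keep this  and keep this too  end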
It would be interesting to get timing information for your program before and after your changes.

Regexes are always the last choice when trying to decode strings, so if you see another possibility to solve your problem, use that.
That said, you could use re.compile to precompile your regex patterns:
regexPagePattern = re.compile(r"(Wk)\d{1,2}.\d{2}(\d\.\d{1,2})")
contentRaw = regexPagePattern.sub("", contentRaw)
That should speed things up a bit (a pretty nice bit ;) )
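Applied to all four patterns from the question, that might look roughly like this (a sketch; the strip_noise helper is a made-up name and contentRaw stands for the text extracted from the PDF):
import re

# Compile each pattern once (e.g. at module level), then reuse them for every PDF.
regexStartPattern = re.compile(r".*(II)(\s)?(INDEX OF CHARTS AFFECTED)")
regexEndPattern = re.compile(r"(II.)\d{1,5}\((P|T)\).*")
regexPagePattern = re.compile(r"(Wk)\d{1,2}.\d{2}(\d\.\d{1,2})")
regexCleanPattern = re.compile(r"(\(continued\))?((II)\d\.\d{1,2}|\d\.\d{1,2}(II)|\d\.\d{1,2})")

def strip_noise(contentRaw):
    # Run the deletions in the same order as the original code.
    for pattern in (regexStartPattern, regexEndPattern,
                    regexPagePattern, regexCleanPattern):
        contentRaw = pattern.sub("", contentRaw)
    return contentRaw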

Related

What is the _qualitative_ difference between re.findall and an iterative re.search?

When one wants to find and process all occurrences of a regex in a string, what is the qualitative difference between using re.findall and iterating over its returned list, versus iterating directly with re.search?
In code, what is the qualitative difference between the findall version:
sbls = "some big ... long string"
for i in some_regex.findall( sbls ):
    process_item( i )
And the iterative search version:
sbls = "some big ... long string"
m = some_regex.search( sbls )
while m:
    process_item( m.group() )
    m = some_regex.search(sbls, m.end())
With a very large string and a poorly chosen (low-cardinality) regex, would the findall version consume more memory (perhaps via the list it builds)? Conversely, would the search version take noticeably more time?
Within the constraints of the hardware I have access to, I have not been able to discern a non-negligible difference, so I would much appreciate others' insights.
Like @Jan says, findall is marginally faster; however, the latter gives you finer control over what to do with each match, e.g. if you find what you want early, you can break and save the effort of finding the remaining matches.
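As a rough sketch of that finer control (placeholder pattern, input, and stopping condition):
import re

some_regex = re.compile(r"\d+")            # placeholder pattern
sbls = "some big 12 long 345 string 678"   # placeholder input

# findall builds the whole result list up front...
print(some_regex.findall(sbls))            # ['12', '345', '678']

# ...while the search loop can bail out early and never scans the rest of the string.
pos = 0
while True:
    m = some_regex.search(sbls, pos)
    if m is None:
        break
    print(m.group())                       # stand-in for process_item(m.group())
    if m.group() == "345":                 # hypothetical "found what I wanted" test
        break
    pos = m.end()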

best way to extract data using re.compile

I need to extract (a lot of) info from different text files.
I wonder if there is a shorter and more efficient way than the following:
First part: (N lines long)
N1 = re.compile(r'')
N2 = re.compile(r'')
.
Nn = re.compile(r'')
Second part: (2N lines long)
with open(filename) as f:
    for line in f:
        if N1.match(line):
            var1 = N1.match(line).group(x).strip()
        elif N2.match(line):
            var2 = N2.match(line).group(x).strip()
        elif Nn.match(line):
            varn = Nn.match(line).group(x).strip()
Do you recommend keeping the re.compile variables (part 1) separate from part 2? What do you use in these cases? Perhaps a function that takes the regex as an argument and is called every time?
In my case N is 30, meaning I have 90 lines just for feeding a dictionary, with very little or no logic at all.
I’m going to attempt to answer this without really knowing what you are actually doing there. So this answer might help you, or it might not.
First of all, what re.compile does is pre-compile a regular expression, so you can use it later and do not have to compile it every time you use it. This is primarily useful when you have a regular expression that is used multiple times throughout your program. But if the expression is only used a few times, then there is not really that much of a benefit to compiling it up front.
So you should ask yourself how often the code that attempts to match all those expressions runs. Is it just once during the script execution? Then you can make your code simpler by inlining the expressions. Since you're running the matches for each line in a file, pre-compiling likely makes sense here.
But just because you pre-compiled the expression, that does not mean that you should be sloppy and match the same expression too often. Look at this code:
if N1.match(line):
    var1 = N1.match(line).group(x).strip()
Assuming there is a match, this will run N1.match() twice. That’s an overhead you should avoid since matching expressions can be relatively expensive (depending on the expression), even if the expression is already pre-compiled.
Instead, just match it once, and then reuse the result:
n1_match = N1.match(line)
if n1_match:
    var1 = n1_match.group(x).strip()
Looking at your code, your regular expressions also appear to be mutually exclusive (or at least you only ever use the first match and skip the remaining ones). In that case, you should order your checks so that the most common ones are done first. That way, you avoid running too many expressions that won't match anyway. Also, try to order them so that the more complex expressions are run less often.
Finally, you are collecting the match results in separate variables varN. At this point, I'm questioning what exactly you are doing there, since after all your if checks, you do not have a clear way of figuring out what the result was and which variable to use. It might make more sense to just collect the results in a single variable, or to move the specific logic into the condition bodies. But it's difficult to tell with the amount of information you gave.
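A sketch of the single-collection idea, assuming the patterns can be given names (the pattern names, the file name, and the use of group(1) below are all made up for illustration):
import re

filename = "input.txt"                     # hypothetical input file

# Hypothetical named patterns standing in for N1..Nn from the question.
patterns = {
    "date": re.compile(r"Date:\s*(\S+)"),
    "author": re.compile(r"Author:\s*(.+)"),
}

results = {}
with open(filename) as f:
    for line in f:
        for name, pattern in patterns.items():
            m = pattern.match(line)        # match once, reuse the result
            if m:
                results[name] = m.group(1).strip()
                break                      # only the first matching pattern counts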
As mentioned in the re module documentation, the patterns you pass to re functions are cached: depending on the number of expressions you have, caching them yourself might not be useful.
That being said, you should make a list of your regexes, so that a simple for loop would allow you to test all your patterns.
regexes = [re.compile(p) for p in ['', '', '', '', ...]]
vars = [''] * len(regexes)
with open(filename) as f:
    for line in f:
        for i, regex in enumerate(regexes):
            m = regex.match(line)
            if m:
                vars[i] = m.group(x).strip()
                break  # break here if you only want the first match for any given line

Simple regular expression not working

I am trying to match a string with a regular expression but it is not working.
What I am trying to do is simple: it is the typical situation where a user introduces a range of pages, or single pages. I am reading the string and checking whether it is correct or not.
Expressions I am expecting, for a range of pages are like: 1-3, 5-6, 12-67
Expressions I am expecting, for single pages are like: 1,5,6,9,10,12
This is what I have done so far:
pagesOption1 = re.compile(r'\b\d\-\d{1,10}\b')
pagesOption2 = re.compile(r'\b\d\,{1,10}\b')
Seems like the first expression works, but not the second.
Also, would it be possible to merge both of them into a single regular expression, so that if the user introduces either something like 1-2, 7-10 or something like 3,5,6,7, the input will be recognized as good?
Simpler is better
Matching the entire input isn't simple, as the proposed solutions show; at least it is not as simple as it could or should be. Such a pattern becomes read-only very quickly and will probably be scrapped in favour of a simpler, more explicit solution by anyone who isn't regex savvy when they need to modify it.
Simplest
First split the entire string on "," into individual data entries; you have to do this anyway to parse out the usable numbers.
Then the test for each entry becomes very simple:
^(\d+)(?:-(\d+))?$
It says that the entry must start with one or more digits, optionally followed by a single - and one or more digits, and then end.
This makes your logic as simple and maintainable as possible. You also get the benefit of knowing exactly what part of the input is wrong and why so you can report it back to the user.
The capturing groups are there because you are going to need the parsed-out numbers to actually use the input anyway; this way you get them if the entry matches, without having to add more code to parse them again.
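A sketch of the whole approach, assuming comma-separated input with optional spaces (the parse_pages helper name is made up):
import re

entry_re = re.compile(r"^(\d+)(?:-(\d+))?$")   # single page or page range

def parse_pages(user_input):
    # Validate each comma-separated entry and return (start, end) tuples.
    pages = []
    for raw in user_input.split(","):
        m = entry_re.match(raw.strip())
        if m is None:
            raise ValueError("bad page entry: %r" % raw)
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else start
        pages.append((start, end))
    return pages

print(parse_pages("1-3, 5-6, 12-67"))   # [(1, 3), (5, 6), (12, 67)]
print(parse_pages("1,5,6,9"))           # [(1, 1), (5, 5), (6, 6), (9, 9)]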
This regex should work -
^(?:(\d+\-\d+)|(\d+))(?:\,[ ]*(?:(\d+\-\d+)|(\d+)))*$
Testing this -
>>> test_vals = [
...     '1-3, 5-6, 12-67',
...     '1,5,6,9,10,12',
...     '1-3,1,2,4.5',
...     'abcd',
... ]
>>> regex = re.compile(r'^(?:(\d+\-\d+)|(\d+))(?:\,[ ]*(?:(\d+\-\d+)|(\d+)))*$')
>>> for val in test_vals:
...     print val
...     if regex.match(val) is None:
...         print "Fail"
...     else:
...         print "Pass"
1-3, 5-6, 12-67
Pass
1,5,6,9,10,12
Pass
1-3,1,2,4.5
Fail
abcd
Fail

Efficient way to do a large number of search/replaces in Python?

I'm fairly new to Python, and am writing a series of scripts to convert between some proprietary markup formats. I'm iterating line by line over files and then basically doing a large number (100-200) of substitutions that basically fall into 4 categories:
line = line.replace("-","<EMDASH>") # Replace single character with tag
line = line.replace("<\\#>","#") # tag with single character
line = line.replace("<\\n>","") # remove tag
line = line.replace("\xe1","•") # replace non-ascii character with entity
The str.replace() function seems to be pretty efficient (it's fairly low in the numbers when I examine profiling output), but is there a better way to do this? I've seen the re.sub() method with a function as an argument, but am unsure whether this would be better. I guess it depends on what kind of optimizations Python does internally. Thought I would ask for some advice before creating a large dict that might not be very helpful!
Additionally I do some parsing of tags (that look somewhat like HTML, but are not HTML). I identify tags like this:
m = re.findall('(<[^>]+>)',line)
And then do ~100 search/replaces (mostly removing matches) within the matched tags as well, e.g.:
m = re.findall('(<[^>]+>)', line)
for tag in m:
    tag_new = re.sub(r"\*t\([^\)]*\)", "", tag)
    tag_new = re.sub(r"\*p\([^\)]*\)", "", tag_new)
    # do many more searches...
    if tag != tag_new:
        line = line.replace(tag, tag_new, 1)  # potentially problematic
Any thoughts of efficiency here?
Thanks!
str.replace() is more efficient if you're going to do basic search and replaces, and re.sub is (obviously) more efficient if you need complex pattern matching (because otherwise you'd have to use str.replace several times).
I'd recommend you use a combination of both. If you have several patterns that all get replaced by one thing, use re.sub. If you just have some cases where you just need to replace one specific tag with another, use str.replace.
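For example, several remove-this-tag cases could be collapsed into a single alternation, while one-to-one literal swaps stay as str.replace (the tag names below are invented):
import re

# Several tags that should all simply be deleted, handled by one re.sub call.
remove_tags = re.compile(r"<\\n>|<\\t>|<\\page>")

line = "text<\\n>more<\\page>text-here"
line = remove_tags.sub("", line)           # one pass instead of three replace() calls
line = line.replace("-", "<EMDASH>")       # one-to-one literal swap stays as str.replace
print(line)                                # textmoretext<EMDASH>here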
You can also improve efficiency by operating on larger strings (call re.sub once on the whole text instead of once per line). That increases memory use, but it shouldn't be a problem unless the file is huge, and it also improves execution time.
If you don't actually need the regex and are just doing literal replacing, string.replace() will almost certainly be faster. But even so, your bottleneck here will be file input/output, not string manipulation.
The best solution, though, would probably be to use cStringIO.
Depending on the ratio of relevant-to-not-relevant portions of the text you're operating on (and whether or not the parts each substitution operates on overlap), it might be more efficient to try to break down the input into tokens and work on each token individually.
Since each replace() in your current implementation has to examine the entire input string, that can be slow. If you instead broke down that stream into something like...
[<normal text>, <tag>, <tag>, <normal text>, <tag>, <normal text>]
# from an original "<normal text><tag><tag><normal text><tag><normal text>"
...then you could simply look to see if a given token is a tag, and replace it in the list (and then ''.join() at the end).
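A sketch of that token approach, reusing the question's tag pattern as the split pattern (the sample line and tag contents are made up):
import re

line = "plain text <*t(x)tag> more text <*p(y)other> end"   # made-up sample line

# Splitting on a capturing group keeps the tags themselves in the token list:
# [<normal text>, <tag>, <normal text>, <tag>, <normal text>]
tokens = re.split(r"(<[^>]+>)", line)

for i, token in enumerate(tokens):
    if token.startswith("<"):                        # only tags get the tag rules
        token = re.sub(r"\*t\([^\)]*\)", "", token)
        token = re.sub(r"\*p\([^\)]*\)", "", token)
        tokens[i] = token

line = "".join(tokens)
print(line)   # plain text <tag> more text <other> end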
You can pass a function object to re.sub instead of a substitution string; it takes the match object and returns the substitution. For example:
>>> r = re.compile(r'<(\w+)>|(-)')
>>> r.sub(lambda m: '(%s)' % (m.group(1) if m.group(1) else 'emdash'), '<atag>-<anothertag>')
'(atag)(emdash)(anothertag)'
Of course you can use a more complex function object, this lambda is just an example.
Using a single regex that does all the substitutions should be slightly faster than iterating over the string many times, but if a lot of substitutions are performed, the overhead of calling the function object that computes the substitution may be significant.

Python regular expressions with more than 100 groups?

Is there any way to beat the 100-group limit for regular expressions in Python? Also, could someone explain why there is a limit?
There is a limit because it would take too much memory to store the complete state machine efficiently. I'd say that if you have more than 100 groups in your re, something is wrong either in the re itself or in the way you are using them. Maybe you need to split the input and work on smaller chunks or something.
I found the easiest way was to
import regex as re
instead of
import re
The default _MAXCACHE for regex is 500 instead of 100 I believe. This is one of the many reasons I find regex to be a better module than re.
If I'm not mistaken, the "new" regex module (currently third-party, but intended to eventually replace the re module in the stdlib) does not have this limit, so you might give that a try.
I'm not sure what you're doing exactly, but try using a single group, with a lot of OR clauses inside... so (this)|(that) becomes (this|that). You can do clever things with the results by passing a function that does something with the particular word that is matched:
newContents, num = cregex.subn(lambda m: replacements[m.string[m.start():m.end()]], contents)
If you really need so many groups, you'll probably have to do it in stages... one pass for a dozen big groups, then another pass inside each of those groups for all the details you want.
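A self-contained sketch of the single-group-plus-function idea above (the word list and replacement mapping are made up):
import re

# Map each matchable word to its replacement (hypothetical data).
replacements = {"cat": "feline", "dog": "canine"}

# One alternation with a single group instead of one group per word.
cregex = re.compile(r"(%s)" % "|".join(map(re.escape, replacements)))

contents = "the cat chased the dog"
newContents, num = cregex.subn(lambda m: replacements[m.group(0)], contents)
print(newContents)   # the feline chased the canine
print(num)           # 2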
I doubt you really need to process 100 named groups in subsequent commands or use them in a replacement; that would be quite impractical. If you just need groups to express rich alternatives in the regexp, you can use non-capturing groups:
(?:word1|word2)(?:word3|word4)
etc. Complex scenarios, including nested groups, are possible.
There is no limit on the number of non-capturing groups.
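For instance, a pattern built from hundreds of alternatives compiles fine under the old limit as long as the groups are non-capturing (a toy sketch):
import re

words = ["word%d" % i for i in range(500)]      # 500 alternatives

# One capturing group per alternative, e.g. (word0)|(word1)|..., would blow the
# 100-group limit of older re versions; non-capturing groups are not counted.
pattern = re.compile("|".join("(?:%s)" % re.escape(w) for w in words))

print(pattern.search("xx word7 yy").group())    # word7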
First, as others have said, there are probably good alternatives to using 100 groups. The re.findall method might be a useful place to start. If you really need more than 100 groups, the only workaround I see is to modify the core Python code.
In [python-install-dir]/lib/sre_compile.py simply modify the compile() function by removing the following lines:
# in lib/sre_compile.py
if pattern.groups > 100:
    raise AssertionError(
        "sorry, but this version only supports 100 named groups"
        )
For a slightly more flexible version, just define a constant at the top of the sre_compile module, and have the above line compare to that constant instead of 100.
Funnily enough, in the (Python 2.5) source there is a comment indicating that the 100 group limit is scheduled to be removed in future versions.
I've found that Python 3 doesn't have this limitation, whereas the same code run in the latest 2.7 displays this error.
When I ran into this, I had a really complex pattern that was actually composed of a bunch of high-level patterns joined by ORs, like this:
pattern_string = u"pattern1|" \
                 u"pattern2|" \
                 u"patternN"
pattern = re.compile(pattern_string, re.UNICODE)
for match in pattern.finditer(string_to_search):
    pass  # Extract data from the groups in the match.
As a workaround, I turned the pattern into a list and I used that list as follows:
pattern_strings = [
    u"pattern1",
    u"pattern2",
    u"patternN",
]
patterns = [re.compile(pattern_string, re.UNICODE) for pattern_string in pattern_strings]
for pattern in patterns:
    for match in pattern.finditer(string_to_search):
        pass  # Extract data from the groups in the match.
    string_to_search = pattern.sub(u"", string_to_search)
I would say you could reduce the number of groups by using non-capturing parentheses, but whatever it is that you're doing, it seems like you want all these groups.
In my case, I have a dictionary of n words and want to create a single regex that matches all of them, i.e. if my dictionary is
hello
goodbye
my regex would be: (^|\s)hello($|\s)|(^|\s)goodbye($|\s) ... it's the only way to do it, and it works fine on small dictionaries, but when you have more than 50 words, well...
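For what it's worth, that same pattern can be generated from the word list with non-capturing groups, so no capturing groups are created however large the dictionary gets (a sketch):
import re

dictionary = ["hello", "goodbye", "see you"]   # toy dictionary

# Same shape as (^|\s)hello($|\s)|(^|\s)goodbye($|\s)..., but with (?:...)
# so no capturing groups are created, no matter how many words there are.
pattern = re.compile("|".join(
    r"(?:^|\s)%s(?:$|\s)" % re.escape(word) for word in dictionary
))

print(bool(pattern.search("well goodbye then")))   # True
print(bool(pattern.search("hellothere")))          # False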
It's very easy to resolve this error:
Open the re module and you'll see this constant: _MAXCACHE = 100.
Change the value to 1000, for example, and do a test.
