I would like to generate strings matching my regexes using Python 3. For this I am using the handy rstr library.
My regexes:
^[abc]+.
[a-z]+
My task:
I must find a generic way to create a string that matches both of my regexes.
What I cannot do:
Modify the regexes or join them in any way. I consider this an ineffective solution, especially in the case of incompatible regexes:
import re
import rstr

regex1 = re.compile(r'^[abc]+.')
regex2 = re.compile(r'[a-z]+')

for index in range(0, 1000):
    generated_string = rstr.xeger(regex1)
    if re.fullmatch(regex2, generated_string):
        break
else:
    raise Exception('Regexes are probably incompatible.')

print('String matching both regexes is: {}'.format(generated_string))
Is there any workaround or any magical library that can handle this? Any insights appreciated.
Questions which are seemingly similar, but not helpful in any way:
Match a line with multiple regex using Python
The asker already has the string and just wants to check it against multiple regexes in the most elegant way. In my case the string must be generated in a smart way so that it matches the regexes.
If you want a really generic way, you can't use a brute-force approach.
What you are looking for is to build some kind of representation of the regex (as rstr does through its call to sre_parse.py) and then hand both constraints to an SMT solver.
For Haskell there is https://github.com/audreyt/regex-genex, which uses the Yices SMT solver to do just that, but I doubt there is anything like it for Python. If I were you, I'd bite the bullet and call it as an external program from your Python program.
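For example, a rough sketch of shelling out to the genex binary built from that project; the executable name, argument style, and output format are assumptions here, so check the project's README before relying on this:

import subprocess

# Hypothetical invocation of the `genex` executable from regex-genex;
# it is expected to print strings satisfying all the given regexes.
result = subprocess.run(
    ['genex', '^[abc]+.', '[a-z]+'],
    capture_output=True, text=True, check=True,
)
print(result.stdout)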
I don't know if there is something that can fulfill your needs much more smoothly.
But I would do something like this (as you've already done):
Create a Regex object with the re.compile() function.
Generate a string based on the 1st regex.
Pass the string you've got into the 2nd regex object using the search() method.
If that passes, you're done; the string matched both regexes.
Maybe you can create a function that takes both regexes as parameters and tests them "2 by 2" using the same logic (see the sketch after the list below).
And then if you have 8 regexes to match...
Just do:
call (regex1, regex2)
call (regex2, regex3)
call (regex3, regex4)
...
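A sketch of that pairwise helper, reusing the generate-and-check logic from the question (the 1000-attempt cap is arbitrary):

import re
import rstr

def generate_matching(gen_regex, check_regex, attempts=1000):
    # Generate candidates from the first regex until one also satisfies
    # the second, or give up after `attempts` tries
    for _ in range(attempts):
        candidate = rstr.xeger(gen_regex)
        if re.fullmatch(check_regex, candidate):
            return candidate
    raise Exception('Regexes are probably incompatible.')

print(generate_matching(re.compile(r'^[abc]+.'), re.compile(r'[a-z]+')))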
I solved this using a slightly different approach. Notice that the second regex is basically insurance that only lowercase letters appear in the generated string.
I used Google's Python package sre_yield, which allows limiting the charset. The package is also available on PyPI. My code:
import sre_yield
import string
sre_yield.AllStrings(r'^[abc]+.', charset=string.ascii_lowercase)[0]
# returns `aa`
My input string contains various entities like this:
conn_type://host:port/schema#login#password
I want to find all of them using a regex in Python.
As of now, I am able to find them one by one, like this:
conn_type = re.search(r'[a-zA-Z]+', test_string)
if conn_type:
    print("conn_type:", conn_type.group())
    next_substr_len = conn_type.end()
    host = re.search(r'[^:/]+', test_string[next_substr_len:])
and so on.
Is there a way to do it without if and else?
I expect there to be some way, but I am not able to find it. Please note that every entity's regex is different.
Please help; I don't want to write boring code.
Why don't you use re.findall?
Here is an example:
import re

s = 'conn_type://host:port/schema#login#password asldasldasldasdasdwawwda conn_type://host:port/schema#login#email'

def get_all_matches(s):
    matches = re.findall(r'[a-zA-Z]+_[a-zA-Z]+:/+[a-zA-Z]+:+[a-zA-Z]+/+[a-zA-Z]+#+[a-zA-Z]+#[a-zA-Z]+', s)
    return matches

print(get_all_matches(s))
This will return a list of all matches of your current regex, which in this case would be:
['conn_type://host:port/schema#login#password', 'conn_type://host:port/schema#login#email']
If you need help making regex patterns in Python I would recommend using the following website:
A pretty neat online regex tester
Also check the re module's documentation for more on re.findall
Documentation for re.findall
Hope this helps!
>>> import re
>>> uri = "conn_type://host:port/schema#login#password"
>>> res = re.findall(r'(\w+)://(.*?):([A-Za-z0-9]+)/(\w+)#(\w+)#(\w+)', uri)
>>> res
[('conn_type', 'host', 'port', 'schema', 'login', 'password')]
No need for ifs. Use findall or finditer to search through your collection of connection strings, and filter the list of tuples as needed.
If you like to DIY, consider writing a tokenizer. That is a very elegant, Pythonic solution.
Or use the standard library: https://docs.python.org/3/library/urllib.parse.html. Note, though, that your sample URL is not fully valid: 'conn_type' is not a real scheme and there are two anchors in the string, so urlparse wouldn't work as expected here. But for real-life URLs I highly recommend this approach.
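For illustration, here is roughly what urlparse makes of the sample string on CPython 3; the underscore in 'conn_type' is not legal in a URL scheme, which is why it "wouldn't work as expected":

from urllib.parse import urlparse

parts = urlparse('conn_type://host:port/schema#login#password')
# urlparse refuses to treat 'conn_type' as a scheme, and both
# anchors end up together in the fragment
print(parts.scheme)    # ''
print(parts.path)      # 'conn_type://host:port/schema'
print(parts.fragment)  # 'login#password'

# With a valid scheme the components split as expected:
print(urlparse('http://host:80/schema').netloc)  # 'host:80'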
I have researched a lot of questions about this on Stack Overflow and Google, but none of them solved my problem.
Let's say I have strings of irregular length (like phone numbers or other region-specific documents), for example:
docA="A123B456"
docB="A123B456CD"
docC="6CD"
I'm writing a function to print documents. Since they don't have a definite pattern, my approach was to use a default argument with the most common pattern and leave the responsibility for corner cases to the programmer.
e.g:
def printDoc(text, pattern="{}{}{}{}#{}{}{}-{}")
print(pattern.format(*text))
But it would be much cleaner and more explicit if there were a way to simplify the pattern, like:
def printDoc(text, pattern="{0:3}#{4:-1}-{-1}"):
    print(pattern.format(*text))
Then I could use It like:
printDoc(docA)
printDoc(docB)
printDoc(docC, "{0:1}-{2}")
But that's not valid syntax. Is there a way of doing this properly?
If my approach is wrong, is there a better way of doing this?
You could use a regular expression to parse the indices/slices out of the format string and use those to index the given text. You'd also have to remove the indices from the format string before using it with str.format. The only tricky part is actually pulling the format parameters out of text, but if you consider eval acceptable you could do the following:
import re
def printDoc(text, pattern="{0:3}#{4:-1}-{-1}"):
params = []
# Find occurrences of '{}' from format string and extract format parameter
for m in re.finditer(r'\{([-:\d]+)\}', pattern):
params.append(eval('text[{}]'.format(m.group(1))))
# Remove indeces from format string
pattern = re.sub(r'\{([-:\d]+)\}', '{}', pattern)
print(pattern.format(*params))
printDoc('A123B456')
Output:
A12#B45-6
Note that using eval is generally considered bad and unsafe practice. Although the potential risks are limited here because of the restricted character set passed to eval, you might want to consider other alternatives unless you're the one who controls the format strings.
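One eval-free alternative, sketched here, is to build the index or slice object yourself from the matched specifier:

def parse_spec(spec):
    # Turn '0:3' into slice(0, 3) and '-1' into the plain index -1,
    # without evaluating arbitrary code
    parts = spec.split(':')
    if len(parts) == 1:
        return int(parts[0])
    return slice(*(int(p) if p else None for p in parts))

# In the loop above: params.append(text[parse_spec(m.group(1))])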
I cannot find a way to serially search a string and append replacements. Let's say I am implementing a templating language. A simplified template looks something like this:
Hello words on #DATE# in #COUNTRY# on this beautiful day.
Imagine a very long template with many #SOMETHING# tags. Now I want to use a regex to parse through it and, every time I find #SOMETHING#, run some Python logic, replace the tag with some string, append it, and continue. All I have found is that I can break the string up into tokens and matches and then reassemble it. Is there something better, without generating all those string chunks? Maybe I am optimizing too early, but in Java we have the
appendReplacement(StringBuffer,String) and appendTail(StringBuffer)
methods and I was wondering if something similar can be done in Python.
See http://docs.oracle.com/javase/tutorial/essential/regex/matcher.html
You can use a function as the "replacement" in re.sub. Then re.sub will invoke your function for every match in the string, and the return value of the function will be the replacement in the string.
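For example, a sketch using the template from the question (the tag-to-value mapping is made up for illustration):

import re

template = 'Hello words on #DATE# in #COUNTRY# on this beautiful day.'
values = {'DATE': '1 May', 'COUNTRY': 'France'}  # illustrative data

def expand_tag(match):
    # re.sub calls this once per #SOMETHING# tag; the return value is
    # spliced into the result, so no manual chunk reassembly is needed
    return values.get(match.group(1), match.group(0))

print(re.sub(r'#(\w+)#', expand_tag, template))
# Hello words on 1 May in France on this beautiful day.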
I am converting some Matlab code to C. Currently I have some lines that compute powers using ^, which is rather easy to convert with something along the lines of \(?(\w*)\)?\^\(?(\w*)\)?
This works fine for converting (glambda)^(galpha) using Python's sub routine: pattern.sub(r'pow(\g<1>,\g<2>)', '(glambda)^(galpha)')
My problem comes with nested parentheses.
So I have a string like:
glambdastar^(1-(1-gphi)*galpha)*(glambdaq)^(-(1-gphi)*galpha);
And I cannot figure out how to convert that line to:
pow(glambdastar,(1-(1-gphi)*galpha))*pow(glambdaq,-(1-gphi)*galpha);
Unfortunately, regular expressions aren't the right tool for handling nested structures. There are some regular expression engines (such as .NET's) which have some support for recursion, but most, including Python's, do not, and can only handle as many levels of nesting as you build into the expression (which gets ugly fast).
What you really need for this is a simple parser. For example, iterate over the string counting parentheses and storing their locations in a list. When you find a ^ character, put the most recently closed parenthesis group into a "left" variable, then watch the group formed by the next opening parenthesis. When it closes, use it as the "right" value and print the pow(left, right) expression.
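A rough sketch of that approach, assuming each operand of ^ is either a parenthesized group or a bare identifier; it leaves any redundant parentheses around operands in place rather than stripping them:

def operand_start(s, end):
    # Scan backwards from `end` to find where the left operand begins
    if s[end - 1] == ')':
        depth = 0
        for i in range(end - 1, -1, -1):
            if s[i] == ')':
                depth += 1
            elif s[i] == '(':
                depth -= 1
                if depth == 0:
                    return i
    i = end
    while i > 0 and (s[i - 1].isalnum() or s[i - 1] == '_'):
        i -= 1
    return i

def operand_end(s, start):
    # Scan forwards from `start` to find where the right operand ends
    i = start
    if i < len(s) and s[i] in '+-':
        i += 1  # allow a unary sign on the exponent
    if i < len(s) and s[i] == '(':
        depth = 0
        for j in range(i, len(s)):
            if s[j] == '(':
                depth += 1
            elif s[j] == ')':
                depth -= 1
                if depth == 0:
                    return j + 1
    while i < len(s) and (s[i].isalnum() or s[i] == '_'):
        i += 1
    return i

def convert_pow(s):
    # Rewrite each a^b as pow(a,b), rescanning until no '^' remains
    while '^' in s:
        k = s.index('^')
        a = operand_start(s, k)
        b = operand_end(s, k + 1)
        s = s[:a] + 'pow(' + s[a:k] + ',' + s[k + 1:b] + ')' + s[b:]
    return s

print(convert_pow('glambdastar^(1-(1-gphi)*galpha)*(glambdaq)^(-(1-gphi)*galpha);'))
# pow(glambdastar,(1-(1-gphi)*galpha))*pow((glambdaq),(-(1-gphi)*galpha));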
I think you can use recursion here.
Once you figure out the Left and Right parts, pass each of those to your function again.
The base case would be that no ^ operator is found, so you will not need to add the pow() function to your result string.
The function will return a string with all the correct pow()'s in place.
I'll come up with an example of this if you want.
Nested parentheses cannot be described by a regexp and require a full parser (one that understands a grammar, which is something more powerful than a regexp). I do not think there is a regexp-based solution.
See the recent discussion function-parser-with-regex-in-python (one of many similar discussions), then follow its suggestion to use pyparsing.
An alternative would be to iterate until all ^ have been exhausted, no?
Ruby code:
# assuming str contains the string of data with the expressions you wish to convert
while str.include?('^')
  str.gsub!(/(\w+)\^(\w+)/, 'pow(\1,\2)')
end
Is there any way to beat the 100-group limit for regular expressions in Python? Also, could someone explain why there is a limit?
There is a limit because it would take too much memory to store the complete state machine efficiently. I'd say that if you have more than 100 groups in your re, something is wrong either in the re itself or in the way you are using them. Maybe you need to split the input and work on smaller chunks or something.
I found the easiest way was to
import regex as re
instead of
import re
The default _MAXCACHE for regex is 500 instead of 100, I believe. This is one of the many reasons I find regex to be a better module than re.
If I'm not mistaken, the "new" regex module (currently third-party, but intended to eventually replace the re module in the stdlib) does not have this limit, so you might give that a try.
I'm not sure what you're doing exactly, but try using a single group, with a lot of OR clauses inside... so (this)|(that) becomes (this|that). You can do clever things with the results by passing a function that does something with the particular word that is matched:
newContents, num = cregex.subn(lambda m: replacements[m.group(0)], contents)
If you really need so many groups, you'll probably have to do it in stages... one pass for a dozen big groups, then another pass inside each of those groups for all the details you want.
I doubt you really need to process 100 named groups in later commands or use them in a regexp replacement; that would be quite impractical. If you just need groups to express rich conditions in the regexp, you can use non-capturing groups.
(?:word1|word2)(?:word3|word4)
etc. Complex scenarios, including nested groups, are possible.
There is no limit on non-capturing groups.
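A quick illustration (the commented-out line is what trips the limit on interpreters that still enforce it, e.g. Python 2.x):

import re

# Non-capturing groups don't count toward the capture-group limit,
# so this compiles fine even with 200 of them:
pattern = re.compile('(?:a)' * 200)

# The capturing version raises "sorry, but this version only supports
# 100 named groups" where the limit is enforced:
# pattern = re.compile('(a)' * 200)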
First, as others have said, there are probably good alternatives to using 100 groups. The re.findall method might be a useful place to start. If you really need more than 100 groups, the only workaround I see is to modify the core Python code.
In [python-install-dir]/lib/sre_compile.py simply modify the compile() function by removing the following lines:
# in lib/sre_compile.py
if pattern.groups > 100:
    raise AssertionError(
        "sorry, but this version only supports 100 named groups"
    )
For a slightly more flexible version, just define a constant at the top of the sre_compile module, and have the above line compare to that constant instead of 100.
Funnily enough, in the (Python 2.5) source there is a comment indicating that the 100 group limit is scheduled to be removed in future versions.
I've found that Python 3 doesn't have this limitation, whereas the same code run on the latest 2.7 displays this error.
When I ran into this, I had a really complex pattern that was actually composed of a bunch of high-level patterns joined by ORs, like this:
pattern_string = u"pattern1|" \
u"pattern2|" \
u"patternN"
pattern = re.compile(pattern_string, re.UNICODE)
for match in pattern.finditer(string_to_search):
pass # Extract data from the groups in the match.
As a workaround, I turned the pattern into a list and used that list as follows:
pattern_strings = [
    u"pattern1",
    u"pattern2",
    u"patternN",
]

patterns = [re.compile(pattern_string, re.UNICODE) for pattern_string in pattern_strings]

for pattern in patterns:
    for match in pattern.finditer(string_to_search):
        pass  # Extract data from the groups in the match.
    string_to_search = pattern.sub(u"", string_to_search)
I would say you could reduce the number of groups by using non-capturing parentheses, but whatever it is that you're doing seems like you want all these groupings.
In my case, I have a dictionary of n words and want to create a single regex that matches all of them, i.e. if my dictionary is
hello
goodbye
my regex would be: (^|\s)hello($|\s)|(^|\s)goodbye($|\s) ... it's the only way to do it, and it works fine on small dictionaries, but when you have more than 50 words, well...
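In that situation a single non-capturing alternation keeps the group count at zero. A sketch (\b word boundaries are close to, though not identical to, the (^|\s) and ($|\s) behaviour):

import re

words = ['hello', 'goodbye']
# One non-capturing group for the whole dictionary instead of
# several capturing groups per word
pattern = re.compile(r'\b(?:' + '|'.join(map(re.escape, words)) + r')\b')
print(pattern.findall('hello there, goodbye'))  # ['hello', 'goodbye']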
It's very easy to resolve this error:
Open the re module and you'll see the constant _MAXCACHE = 100.
Change the value to 1000, for example, and run a test.