Extract a number from a string, after a certain character - python

I need some help. I have the following strings, always in "char,num" format:
s = "abcdef,12"
v = "gbhjjj,699"
I want to get just the digits after the comma, how do I do that without splitting the string with the comma as a delimiter?
I tried s[-2:] and v[-3:] which works, but how do I make it work without knowing the number of digits?

Assuming:
You know there is a comma in the string, so you don't have to search the entire string to find out if there is or not.
You know the pattern is 'many_not_digits,few_digits' so there is a big imbalance between the size of the left/right parts either side of the comma.
You can get to the end of the string without walking it, which you can in Python because string indexing is constant time
Then you could start from the end and walk backwards looking for the comma, which would be less overall work for your examples than walking from the left looking for the comma.
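A minimal sketch of that right-walk, assuming the comma is always present (the function name is my own):
def digits_after_comma(s):
    # Walk backwards from the end until we hit the comma.
    i = -1
    while s[i] != ',':
        i -= 1
    # Slice from just after the comma to the end.
    return s[i+1:]

print(digits_after_comma("gbhjjj,699"))  # 699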
Doing work in Python code is way slower than using Python engine code written in C, right? So would it really be faster?
Make a string "aaaaa....,12"
Use the timeit module to compare each approach: split, or right-walk.
Timeit does a million runs of some code.
Extend the length of "aaaaaaaaaaaaaaaa....,12" to make it extreme.
How do they compare?
String split: 1400 "a"'s run a million times took 1 second.
String split: 4000 "a"'s run a million times took 2 seconds.
Right walk: 1400 "a"'s run a million times took 0.4 seconds.
Right walk: 999,999 "a"'s run a million times took ... 0.4 seconds(!)
from timeit import timeit

_split = """num = x.split(',')[-1]"""
_rwalk = """
i = -1
while x[i] != ',':
    i -= 1
num = x[i+1:]
"""
print(timeit(_split, setup='x="a"*1400 + ",12"'))
print(timeit(_rwalk, setup='x="a"*999999 + ",12"'))
e.g.
1.0063155219977489 # "aaa...,12" for 1400 chars, string split
0.4027107510046335 # "aaa...,12" for 999999 chars, rwalked. Faster.
Try it online at repl.it
I don't think this is algorithmically better than O(n), but given the assumptions above you have more knowledge than str.split() has, and you can leverage that to skip walking most of the string and beat it in practice. The longer the text part and the shorter the digit part, the more you benefit.

If you are worried about using split from the left because of lots of unwanted characters in the beginning, use rsplit.
s = "abcdef,12"
s.rsplit(",", 1)[-1]
Here, rsplit starts splitting the string from the right, and the optional second argument we used (maxsplit=1) stops it from proceeding past the first comma it encounters.
For example:
s = "abc,def,12"
s.rsplit(",", 1)[-1]
# Outputs 12
s = "abcdef12"
s.rsplit(",", 1)[-1]
# Outputs abcdef12
This is a lot simpler and cleaner than doing anything manually to get the string of digits at the end.
Not to mention, it also makes it a lot easier to check whether we really got only digits, even for a whole list of strings:
def get_numbers(string_list, skip_on_error=True):
    numbers_list = []
    for input_string in string_list:
        the_number = input_string.rsplit(",", 1)[-1]
        if the_number.isdigit():
            numbers_list.append(the_number)
        elif skip_on_error:
            numbers_list.append("")
        else:
            raise Exception("Wrong Format occurred: %s" % (input_string))
    return numbers_list
And if you are looking for further optimization and are sure that most (if not all) strings will be in the correct format, you can use try/except instead, if you are going to go with an integer list rather than a string list. Like this:
# Instead of the if.. elif.. else construct
try:
    numbers_list.append(int(the_number))
except ValueError:
    if skip_on_error:
        numbers_list.append(0)
    else:
        raise Exception("Wrong Format occurred: %s" % (input_string))
But always remember the Zen of Python; using split/rsplit follows these lines of it:
Beautiful is better than ugly
Explicit is better than implicit
Simple is better than complex
Readability counts
There should be one-- and preferably only one --obvious way to do it
And also remember Donald Knuth:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%

Using split is superior because it is very clear and fast:
>>> s = "abcdef,12"
>>> s.split(',')[1]
'12'
Another way is with index or find:
>>> s = "abcdef,12"
>>> s[s.find(',')+1:]
'12'
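str.index works the same way here; the difference worth knowing is that it raises ValueError when there is no comma, whereas find returns -1 and the slice would quietly give back the whole string:
>>> s[s.index(',')+1:]
'12'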
And another way with re:
>>> import re
>>> s = "abcdef,12"
>>> re.search(r',(.*)', s).group(1)
'12'
And with csv (and io so I don't have to write a file to the hard drive):
>>> import csv
>>> import io
>>> s = "abcdef,12"
>>> r = csv.reader(io.StringIO(s))
>>> for line in r:
...     print(line[1])
...
12
I'm sure there are other ways to accomplish this task. This is just a small sample.

Maybe you can try with a regular expression
import re

input_strings = ["abcdef,12", "gbhjjj,699"]
matcher = re.compile(r"\d+$")
for input_string in input_strings:
    is_matched = matcher.search(input_string)
    if is_matched:
        print(is_matched.group())

I like .partition() for this kind of thing:
for text in ('gbhjjj,699', 'abcdef,12'):
    x, y, z = text.partition(',')
    number = int(z)
    print(number)
Unlike .split() it will always return three values.
I'll sometimes do this to emphasize that I don't care about certain values:
_, _, z = text.partition(',')
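A small sketch of why the fixed three-tuple is convenient (the input is my own): when the separator is missing, .partition() still returns three values instead of raising:
>>> x, y, z = "abcdef12".partition(',')
>>> (x, y, z)
('abcdef12', '', '')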

Related

Printing substrings' patterns from a string in Python

The input to this problem is a string with a specific form. For example, s can be '3(a)2(b)', '3(aa)2(bbb)' or '4(aaaa)'. The output should be a string: the substring inside each pair of brackets repeated as many times as the numerical value that precedes it.
For example,
Input ='3(a)2(b)'
Output='aaabb'
Input='4(aaa)'
Output='aaaaaaaaaaaa'
and similarly for other inputs. The program should print an empty string for wrong or invalid inputs.
This is what I've tried so far
s='3(aa)2(b)'
p=''
q=''
for i in range(0,len(s)):
    #print(s[i],end='')
    if s[i]=='(':
        k=int(s[i-1])
        while(s[i+1]!=')'):
            p+=(s[i+1])
            i+=1
    if s[i]==')':
        q+=k*p
print(q)
Can anyone tell what's wrong with my code?
A oneliner would be:
''.join(int(y[0])*y[1] for y in (x.split('(') for x in Input.split(')')[:-1]))
It works like this. We take the input and split on the close paren:
In [1]: Input ='3(a)2(b)'
In [2]: a = Input.split(')')[:-1]
In [3]: a
Out[3]: ['3(a', '2(b']
This gives us the integer/character pairs we're looking for, but we still need to get rid of the open paren. So, for each x in a, we split on the open paren to get a two-element list whose first element is the int (still as a string) and whose second is the character. You'll see this in b:
In [4]: b = [x.split('(') for x in a]
In [5]: b
Out[5]: [['3', 'a'], ['2', 'b']]
So for each element in b, we need to cast the first element as an integer with int() and multiply by the character.
In [6]: c = [int(y[0])*y[1] for y in b]
In [7]: c
Out[7]: ['aaa', 'bb']
Now we join on the empty string to combine them into one string with
In [8]: ''.join(c)
Out[8]: 'aaabb'
Try this:
import re

s = '3(a)2(b)'
a = re.findall(r'[\d]+', s)
b = re.findall(r'[a-zA-Z]+', s)
c = ''
for i, j in zip(a, b):
    c += int(i) * str(j)
print(c)
Here is how you could do it:
Step 1: Simple case, getting the data out of a really simple template
Let's assume your template string is 3(a). That's the simplest case I could think of. We'll need to extract pieces of information from that string. The first one is the count of chars that will have to be rendered. The second is the char that has to be rendered.
You are in a case where regexes are more than suited (hence the use of the re module from Python's standard library).
I won't do a full course on regex; you'll have to do that on your own. However, I'll quickly explain the steps I used. The count (the variable that holds the number of times we should render the char) is one or more digits, hence our first capturing group will be (\d+). Then we have a char to extract that is enclosed in parentheses, hence \((\w+)\) (I actually allow several chars to be rendered at once). Putting them together, we get (\d+)\((\w+)\). For testing you can check this out.
Applied to our case, a straight forward use of the re module is:
import re

# Our template
template = '3(a)'
# Run the regex
match = re.search(r'(\d+)\((\w+)\)', template)
if match:
    # Get the count from the first capturing group
    count = int(match.group(1))
    # Get the string to render from the second capturing group
    string = match.group(2)
    # Print as many times the string as count was given
    print count * string
Output:
aaa
Yeah!
Step 2: Full case, with several templates
Okay, we know how to do it for one template; how do we do the same for several, for instance 3(a)4(b)? Well... how would we do it "by hand"? We'd read the full template from left to right and apply each sub-template one by one. And that is exactly what we'll do in Python!
Luckily for us, the re module has a function just for that: finditer. It does exactly what we described above.
So, we'll do something like:
import re

# Our template
template = '3(a)4(b)'
# Iterate through found templates
for match in re.finditer(r'(\d+)\((\w+)\)', template):
    # Get the count from the first capturing group
    count = int(match.group(1))
    # Get the string to render from the second capturing group
    string = match.group(2)
    print count * string
Output:
aaa
bbbb
Okay... All that remains is combining that stuff. We can put the result of each step in a list, and then join the items of this list at the end, no?
Let's do it!
import re

template = '3(a)4(b)'
parts = []
for match in re.finditer(r'(\d+)\((\w+)\)', template):
    parts.append(int(match.group(1)) * match.group(2))
print ''.join(parts)
Output:
aaabbb
Yeah!
Step 3: Final step, optimization
Because we can always do better, we won't stop. for loops are cool. But what I love (it's personal) about Python is that there is so much stuff you can actually write in one line! Is that the case here? Well, yes :).
First we can remove the for loop and the append using a list comprehension:
parts = [int(match.group(1)) * match.group(2) for match in re.finditer(r'(\d+)\((\w+)\)', template)]
rendered = ''.join(parts)
Finally, let's remove the two lines with parts populating and then join and let's do all that in a single line:
import re

template = '3(a)4(b)'
rendered = ''.join(
    int(match.group(1)) * match.group(2)
    for match in re.finditer(r'(\d+)\((\w+)\)', template))
print rendered
Output:
aaabbb
Yeah! Still the same output :).
Hope it helped!
The value of 'p' should be refreshed after each iteration.
s='1(aaa)2(bb)'
p=''
q=''
i=0
while i<len(s):
    if s[i]=='(':
        k=int(s[i-1])
        p=''
        while(s[i+1]!=')'):
            p+=(s[i+1])
            i+=1
    if s[i]==')':
        q+=k*p
    i+=1
print(q)
The code is not behaving the way I want it to. The problem here is the placement of 'p'. 'p' is the variable that accumulates the substring inside the ( )s, and the code keeps appending to it even after sufficient adding is done. Resetting 'p' inside the 'if' block will do the job.
s='2(aa)2(bb)'
q=''
for i in range(0,len(s)):
    if s[i]=='(':
        k=int(s[i-1])
        p=''
        while(s[i+1]!=')'):
            #print(i,'first time')
            p+=s[i+1]
            i+=1
        q+=p*k
        #print(i,'second time')
print(q)
What you want is not really to print substrings; the real purpose is most likely to generate text based on a regular-expression-like command. You can parametrize a function to read it, or use something like this:
The python library rstr has the function xeger() to do what you need by using random strings and only returning ones that match:
Example
Install with pip install rstr
In [1]: from __future__ import print_function
In [2]: import rstr
In [3]: for dummy in range(10):
   ...:     print(rstr.xeger(r"(a|b)[cd]{2}\1"))
   ...:
acca
bddb
adda
bdcb
bccb
bcdb
adca
bccb
bccb
acda
Warning
For complex re patterns this might take a long time to generate any matches.

regex or other way to fetch data from a line with variable entry

In Python, I am trying to fetch data from a line.
The lines look like:
1. cpasite=5 nsubl=4 cpatypes=3,4,5,6
2. cpasite=6 nsubl=2 cpatypes=7,8
3. cpasite=7 nsubl=4 cpatypes=9,10
4. cpasite=8 nsubl=2 cpatypes=11,12
5. cpasite=9 nsubl=6 cpatypes=13,14,15,16,17,18
I have regexed it as:
pattern=r'(\d+)\. \s* cpasite=(.*)\s* nsubl=(.*)\s* cpatypes=(.*)'
The problem is, I need those sites (e.g. 3,4,5,6) separately, so that I can use them for my purpose. But given that their number is not fixed, I can't split (to my knowledge).
How I can use those cpasites ?
Why not use your regex and then take the 4th captured group, which will be:
'3,4,5,6'
You can then split that string on , to get your list of values that you can work with separately:
>>> s = '3,4,5,6'
>>> s = map(int, s.split(','))
>>> print s
[3, 4, 5, 6]
>>> print s[2]
5
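Putting both steps together, here is a sketch (the pattern is the question's, lightly tightened with \S+; untested against every input variant):
import re

line = '1. cpasite=5 nsubl=4 cpatypes=3,4,5,6'
m = re.match(r'(\d+)\.\s*cpasite=(\S+)\s+nsubl=(\S+)\s+cpatypes=(.*)', line)
if m:
    # Group 4 holds '3,4,5,6'; split it into a list of ints
    cpatypes = map(int, m.group(4).split(','))
    print cpatypes   # [3, 4, 5, 6]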
Split is your friend. Less regex == less complexity.
s = '1. cpasite=5 nsubl=4 cpatypes=3,4,5,6'
sites = s.split('=')[-1]
sites = sites.split(',')
It should be:
pattern=r'(\d+)\. \s* cpasite=(.*)\s* nsubl=(.*)\s* cpatypes=(\d+)(?:,(\d+))*'
however, it seems that
(\d+)(?:,(\d+))*
only captures the first cpatype and the last one. This is actually documented behaviour of the re module rather than a bug: a repeated capturing group keeps only its last match.
Workaround:
If you know the maximum number of cpatypes you can have, then
pattern=r'(\d+)\. \s* cpasite=(.*)\s* nsubl=(.*)\s* cpatypes=(\d+)(?:,(\d+))?(?:,(\d+))?'
repeating
(?:,(\d+))?
as many times as needed (the maximum expected cpatypes), worked for me. Even so it is a bit ugly...
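If you do not want to hard-code a maximum, a hedged alternative is to capture the whole tail and pull the numbers out with findall, since findall returns every occurrence rather than only a group's last match:
import re

line = '1. cpasite=5 nsubl=4 cpatypes=3,4,5,6'
# Grab everything after 'cpatypes=' and then extract each number
tail = re.search(r'cpatypes=([\d,]+)', line).group(1)
print re.findall(r'\d+', tail)   # ['3', '4', '5', '6']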

Python: Dividing a string into substrings

I have a bunch of mathematical expressions stored as strings. Here's a short one:
stringy = "((2+2)-(3+5)-6)"
I want to break this string up into a list that contains ONLY the information in each "sub-parenthetical phrase" (I'm sure there's a better way to phrase that). So the output I want is:
['2+2','3+5']
I have a couple of ideas about how to do this, but I keep running into a "okay, now what" issue.
For example:
stringlist = []
for x in stringy:
    substring = stringy[stringy.find('(')+1 : stringy.find(')')]
    stringlist.append(substring)
Works just peachy to return 2+2, but that's about as far as it goes, and I am completely blanking on how to move through the remainder...
One way using regex:
import re

stringy = "((2+2)-(3+5)-6)"
for exp in re.findall(r"\(([\s\d+*/-]+)\)", stringy):
    print exp
Output
2+2
3+5
You could use regular expressions like the following:
import re
x = "((2+2)-(3+5)-6)"
re.findall(r"(?<=\()[0-9+/*-]+(?=\))", x)
Result:
['2+2', '3+5']

Matching 2 regular expressions in Python

Is it possible to match 2 regular expressions in Python?
For instance, I have a use-case wherein I need to compare 2 expressions like this:
re.match('google\.com\/maps', 'google\.com\/maps2', re.IGNORECASE)
I would expect to be returned a RE object.
But obviously, Python expects a string as the second parameter.
Is there a way to achieve this, or is it a limitation of the way regex matching works?
Background: I have a list of regular expressions [r1, r2, r3, ...] that match a string and I need to find out which expression is the most specific match of the given string. The way I assumed I could make it work was by:
(1) matching r1 with r2.
(2) then match r2 with r1.
If both match, we have a 'tie'. If only (1) worked, r1 is a 'better' match than r2 and vice-versa.
I'd loop (1) and (2) over the entire list.
I admit it's a bit much to wrap one's head around (mostly because my description is probably incoherent), but I'd really appreciate it if somebody could give me some insight into how I can achieve this. Thanks!
Outside of the syntax clarification on re.match, I think I understand that you are struggling with taking two or more unknown (user-input) regex expressions and classifying which is a more 'specific' match against a string.
Recall for a moment that a Python regex really is a type of computer program. Most modern forms, including Python's regex, are based on Perl. Perl's regex's have recursion, backtracking, and other forms that defy trivial inspection. Indeed a rogue regex can be used as a form of denial of service attack.
To see this on your own computer, try:
>>> re.match(r'^(a+)+$','a'*24+'!')
That takes about 1 second on my computer. Now increase the 24 in 'a'*24 to a slightly larger number, say 28. That takes a lot longer. Try 48... you will probably need to CTRL+C now. The time increases exponentially as the number of a's increases.
You can read more about this issue in Russ Cox's wonderful paper 'Regular Expression Matching Can Be Simple And Fast'. Russ Cox is the Google engineer who built Google Code Search in 2006. As Cox observes, consider matching the regex 'a?'*33 + 'a'*33 against the string 'a'*99 with awk and Perl (or Python or PCRE or Java or PHP or ...): awk matches in 200 microseconds, but Perl would require 10^15 years because of exponential backtracking.
So the conclusion is: it depends! What do you mean by a more specific match? Look at some of Cox's regex simplification techniques in RE2. If your project is big enough to write your own libraries (or use RE2) and you are willing to restrict the regex grammar used (i.e., no backtracking or recursive forms), I think the answer is that you would classify 'a better match' in a variety of ways.
If you are looking for a simple way to state that (regex_3 < regex_1 < regex_2) when matched against some string using Python or Perl's regex language, I think that the answer is it is very very hard (i.e., this problem is NP Complete)
Edit
Everything I said above is true! However, here is a stab at sorting matching regular expressions based on one form of 'specific': How many edits to get from the regex to the string. The greater number of edits (or the higher the Levenshtein distance) the less 'specific' the regex is.
You be the judge if this works (I don't know what 'specific' means to you for your application):
import re
def ld(a,b):
    "Calculates the Levenshtein distance between a and b."
    n, m = len(a), len(b)
    if n > m:
        # Make sure n <= m, to use O(min(n,m)) space
        a,b = b,a
        n,m = m,n
    current = range(n+1)
    for i in range(1,m+1):
        previous, current = current, [i]+[0]*n
        for j in range(1,n+1):
            add, delete = previous[j]+1, current[j-1]+1
            change = previous[j-1]
            if a[j-1] != b[i-1]:
                change = change + 1
            current[j] = min(add, delete, change)
    return current[n]

s='Mary had a little lamb'
d={}
regs=[r'.*', r'Mary', r'lamb', r'little lamb', r'.*little lamb', r'\b\w+mb',
      r'Mary.*little lamb', r'.*[lL]ittle [Ll]amb', r'\blittle\b', s, r'little']
for reg in regs:
    m=re.search(reg,s)
    if m:
        print "'%s' matches '%s' with sub group '%s'" % (reg, s, m.group(0))
        ld1=ld(reg,m.group(0))
        ld2=ld(m.group(0),s)
        score=max(ld1,ld2)
        print " %i edits regex->match(0), %i edits match(0)->s" % (ld1,ld2)
        print " score: ", score
        d[reg]=score
        print
    else:
        print "'%s' does not match '%s'" % (reg, s)

print " ===== %s ===== === %s ===" % ('RegEx'.center(10),'Score'.center(10))
for key, value in sorted(d.iteritems(), key=lambda (k,v): (v,k)):
    print " %22s %5s" % (key, value)
The program is taking a list of regex's and matching against the string Mary had a little lamb.
Here is the sorted ranking from "most specific" to "least specific":
===== RegEx ===== === Score ===
Mary had a little lamb 0
Mary.*little lamb 7
.*little lamb 11
little lamb 11
.*[lL]ittle [Ll]amb 15
\blittle\b 16
little 16
Mary 18
\b\w+mb 18
lamb 18
.* 22
This is based on the (perhaps simplistic) assumption that: a) the number of edits (the Levenshtein distance) to get from the regex itself to the matching substring is the result of wildcard expansions or replacements; b) the edits to get from the matching substring to the initial string. The score simply takes the larger of the two.
As two simple examples:
.* (or .*.* or .*?.*, etc.) against any string requires a large number of edits to get to the string, in fact equal to the string length. This is the maximum possible number of edits, the highest score, and the least 'specific' regex.
The regex of the string itself against the string is as specific as possible. No edits to change one to the other resulting in a 0 or lowest score.
As stated, this is simplistic. Anchors should increase specificity but they do not in this case. Very short strings don't work well because the wildcard may be longer than the string.
Edit 2
I got anchor parsing to work pretty darn well using the undocumented sre_parse module in Python. Type >>> help(sre_parse) if you want to read more...
This is the go-to worker module underlying the re module. It has been in every Python distribution since 2001, including all the P3k versions. It may go away, but I don't think that is likely...
Here is the revised listing:
import re
import sre_parse

def ld(a,b):
    "Calculates the Levenshtein distance between a and b."
    n, m = len(a), len(b)
    if n > m:
        # Make sure n <= m, to use O(min(n,m)) space
        a,b = b,a
        n,m = m,n
    current = range(n+1)
    for i in range(1,m+1):
        previous, current = current, [i]+[0]*n
        for j in range(1,n+1):
            add, delete = previous[j]+1, current[j-1]+1
            change = previous[j-1]
            if a[j-1] != b[i-1]:
                change = change + 1
            current[j] = min(add, delete, change)
    return current[n]

s='Mary had a little lamb'
d={}
regs=[r'.*', r'Mary', r'lamb', r'little lamb', r'.*little lamb', r'\b\w+mb',
      r'Mary.*little lamb', r'.*[lL]ittle [Ll]amb', r'\blittle\b', s, r'little',
      r'^.*lamb', r'.*.*.*b', r'.*?.*', r'.*\b[lL]ittle\b \b[Ll]amb',
      r'.*\blittle\b \blamb$', '^'+s+'$']
for reg in regs:
    m=re.search(reg,s)
    if m:
        ld1=ld(reg,m.group(0))
        ld2=ld(m.group(0),s)
        score=max(ld1,ld2)
        for t, v in sre_parse.parse(reg):
            if t=='at':  # anchor...
                # note: `v=='at_beginning' or 'at_end'` is always truthy,
                # so every anchor gets at least the 1-edit adjustment
                if v=='at_beginning' or 'at_end':
                    score-=1  # ^ or $, adj 1 edit
                if v=='at_boundary':  # all other anchors are 2 char
                    score-=2
        d[reg]=score
    else:
        print "'%s' does not match '%s'" % (reg, s)

print
print " ===== %s ===== === %s ===" % ('RegEx'.center(15),'Score'.center(10))
for key, value in sorted(d.iteritems(), key=lambda (k,v): (v,k)):
    print " %27s %5s" % (key, value)
And the sorted RegEx's:
===== RegEx ===== === Score ===
Mary had a little lamb 0
^Mary had a little lamb$ 0
.*\blittle\b \blamb$ 6
Mary.*little lamb 7
.*\b[lL]ittle\b \b[Ll]amb 10
\blittle\b 10
.*little lamb 11
little lamb 11
.*[lL]ittle [Ll]amb 15
\b\w+mb 15
little 16
^.*lamb 17
Mary 18
lamb 18
.*.*.*b 21
.* 22
.*?.* 22
It depends on what kind of regular expressions you have; as @carrot-top suggests, if you actually aren't dealing with "regular expressions" in the CS sense, and instead have crazy extensions, then you are definitely out of luck.
However, if you do have traditional regular expressions, you might make a bit more progress. First, we could define what "more specific" means. Say R is a regular expression, and L(R) is the language generated by R. Then we might say R1 is more specific than R2 if L(R1) is a (strict) subset of L(R2) (L(R1) < L(R2)). That only gets us so far: in many cases, L(R1) is neither a subset nor a superset of L(R2), and so we might imagine that the two are somehow incomparable. An example, trying to match "mary had a little lamb", we might find two matching expressions: .*mary and lamb.*.
One non-ambiguous solution is to define specificity via implementation. For instance, convert your regular expression in a deterministic (implementation-defined) way to a DFA and simply count states. Unfortunately, this might be relatively opaque to a user.
Indeed, you seem to have an intuitive notion of how you want two regular expressions to compare, specificity-wise. Why not simply write down a definition of specificity, based on the syntax of regular expressions, that matches your intuition reasonably well?
Totally arbitrary rules follow (a toy sketch of them appears after the list):
Characters = 1.
Character ranges of n characters = n (and let's say \b = 5, because I'm not sure how you might choose to write it out long-hand).
Anchors are 5 each.
* divides its argument by 2.
+ divides its argument by 2, then adds 1.
. = -10.
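To make that concrete, here is a toy scorer sketching those rules on top of the undocumented sre_parse module (used elsewhere on this page); it only handles a few token types, and the numbers are as arbitrary as the rules above:
import sre_parse

def specificity(pattern):
    def score(tokens):
        total = 0
        for op, av in tokens:
            name = str(op).lower()
            if name == 'literal':
                total += 1                # characters = 1
            elif name == 'in':
                total += len(av)          # item count as a rough stand-in for n
            elif name == 'at':
                total += 5                # anchors are 5 each
            elif name == 'any':
                total -= 10               # . = -10
            elif name == 'max_repeat':
                lo, hi, sub = av
                total += score(sub) // 2  # * divides its argument by 2...
                if lo >= 1:
                    total += 1            # ...and + adds 1 on top
            elif name == 'subpattern':
                total += score(av[-1])    # recurse into groups
        return total
    return score(sre_parse.parse(pattern))

print(specificity(r'Mary'))  # 4: higher means 'more specific' under these rules
print(specificity(r'.*'))    # -5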
Anyway, just food for thought, as the other answers do a good job of outlining some of the issues you're facing; hope it helps.
I don't think it's possible.
An alternative would be to try to calculate the number of strings of length n that the regular expression also matches. A regular expression that matches 1,000,000,000 strings of length 15 characters is less specific than one that matches only 10 strings of length 15 characters.
Of course, calculating the number of possible matches is not trivial unless the regular expressions are simple.
Option 1:
Since users are supplying the regexes, perhaps ask them to also submit some test strings which they think are illustrative of their regex's specificity. (i.e. that show their regex is more specific than a competitor's regex.) Collect all the user's submitted test strings, and then test all the regexes against the complete set of test strings.
To design a good regex, the author must have put thought into what strings match and don't match their regex, so it should be easy for them to supply good test strings.
Option 2:
You might try a Monte Carlo approach: starting with a string that both regexes match, write a generator which produces mutations of that string (permute characters, add/remove characters, etc.). If both regexes match or fail to match each mutation the same way, then the regexes "probably tie". If one matches a mutation that the other doesn't, and vice versa, then they "absolutely tie".
But if one matches a strict superset of mutations then it is "probably less specific" than the other.
The verdict after a large number of mutations may not always be correct, but may be reasonable.
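A rough sketch of that Monte Carlo idea, with all names and mutation choices my own:
import random
import re
import string

def mutate(s):
    i = random.randrange(len(s))
    r = random.random()
    if r < 0.33:
        return s[:i] + random.choice(string.printable) + s[i+1:]  # replace
    elif r < 0.66:
        return s[:i] + s[i+1:]                                    # delete
    return s[:i] + random.choice(string.printable) + s[i:]        # insert

def compare(r1, r2, seed, trials=10000):
    only1 = only2 = 0
    for _ in range(trials):
        m = mutate(seed)
        m1 = re.search(r1, m) is not None
        m2 = re.search(r2, m) is not None
        only1 += m1 and not m2
        only2 += m2 and not m1
    if only1 and not only2:
        return 'r2 is probably more specific'   # r1 matches a superset
    if only2 and not only1:
        return 'r1 is probably more specific'
    return 'tie (or incomparable)'

print(compare(r'.*maps', r'google\.com/maps', 'google.com/maps'))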
Option 3:
Use ipermute or pyParsing's invert to generate strings which match each regex. This will only work on regexes that use a limited subset of regex syntax.
I think you could do it by looking at which regex gives the longest match:
>>> m = re.match(r'google\.com\/maps','google.com/maps/hello')
>>> len(m.group(0))
15
>>> m = re.match(r'google\.com\/maps2','google.com/maps/hello')
>>> print (m)
None
>>> m = re.match(r'google\.com\/maps','google.com/maps2/hello')
>>> len(m.group(0))
15
>>> m = re.match(r'google\.com\/maps2','google.com/maps2/hello')
>>> len(m.group(0))
16
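Wrapped into the list-based selection the question's background describes (a sketch; most_specific is my own name):
import re

def most_specific(patterns, s):
    # Keep the pattern whose match consumes the most characters of s.
    best, best_len = None, -1
    for p in patterns:
        m = re.match(p, s)
        if m and len(m.group(0)) > best_len:
            best, best_len = p, len(m.group(0))
    return best

print(most_specific([r'google\.com\/maps', r'google\.com\/maps2'],
                    'google.com/maps2/hello'))
# google\.com\/maps2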
re.match('google\.com\/maps', 'google\.com\/maps2', re.IGNORECASE)
The second item to re.match() above is a string; that's why it's not working: the regex says to match a period after google, but instead it finds a backslash. What you need to do is double up the backslashes in the regex that's being used as the pattern:
import re

def compare_regexes(regex1, regex2):
    """returns regex2 if regex1 is 'smaller' than regex2
    returns regex1 if they are the same
    returns regex1 if regex1 is 'bigger' than regex2
    otherwise returns None"""
    regex1_mod = regex1.replace('\\', '\\\\')
    regex2_mod = regex2.replace('\\', '\\\\')
    if regex1 == regex2:
        return regex1
    if re.match(regex1_mod, regex2):
        return regex2
    if re.match(regex2_mod, regex1):
        return regex1
You can change the returns to whatever suits your needs best. Oh, and make sure you are using raw strings with re. r'like this, for example'
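A quick usage sketch with the patterns from the question (my own example, not the answerer's):
print(compare_regexes(r'google\.com\/maps', r'google\.com\/maps2'))
# google\.com\/maps2 -- regex1 matches a prefix of regex2, so regex2 'wins'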
Is it possible to match 2 regular expressions in Python?
That certainly is possible. Use parenthetical match groups joined by | for alternation. If you arrange the parenthetical match groups from most specific regex to least specific, the rank in the returned tuple from m.groups() will show how specific your match is. You can also use named groups to name how specific your match is, such as s10 for a very specific match and s0 for a not so specific match.
>>> s1='google.com/maps2text'
>>> s2='I forgot my goggles at the house'
>>> s3='blah blah blah'
>>> m1=re.match(r'(^google\.com\/maps\dtext$)|(.*go[a-z]+)',s1)
>>> m2=re.match(r'(^google\.com\/maps\dtext$)|(.*go[a-z]+)',s2)
>>> m1.groups()
('google.com/maps2text', None)
>>> m2.groups()
(None, 'I forgot my goggles')
>>> patt=re.compile(r'(?P<s10>^google\.com\/maps\dtext$)|'
...                 r'(?P<s5>.*go[a-z]+)|(?P<s0>[a-z]+)')
>>> m3=patt.match(s3)
>>> m3.groups()
(None, None, 'blah')
>>> m3.groupdict()
{'s10': None, 's0': 'blah', 's5': None}
If you do not know ahead of time which regex is more specific, this is a much harder problem to solve. You want to have a look at this paper covering security of regex matches against file system names.
I realize that this is a non-solution, but as there is no unambiguous way to tell which is the "most specific match", certainly when it depends on what your users "meant", the easiest thing to do would be to ask them to provide their own priority. For example just by putting the regexes in the right order. Then you can simply take the first one that matches. If you expect the users to be comfortable with regular expressions anyway, this is maybe not too much to ask?

Replacing leading text in Python

I use Python 2.6 and I want to replace each instance of certain leading characters (., _ and $ in my case) in a string with another character or string. Since in my case the replacement string is the same, I came up with this:
def replaceLeadingCharacters(string, old, new = ''):
    t = string.lstrip(old)
    return new * (len(string) - len(t)) + t
which seems to work fine:
>>> replaceLeadingCharacters('._.!$XXX$._', '._$', 'Y')
'YYY!$XXX$._'
Is there a better (simpler or more efficient) way to achieve the same effect in Python ?
Is there a way to achieve this effect with a string instead of characters? Something like str.replace() that stops once something different than the string-to-be-replaced comes up in the input string? Right now I've come up with this:
def replaceLeadingString(string, old, new = ''):
    n = 0
    o = 0
    s = len(old)
    while string.startswith(old, o):
        n += 1
        o += s
    return new * n + string[o:]
I am hoping that there is a way to do this without an explicit loop
EDIT:
There are quite a few answers using the re module. I have a couple of questions/issues with it:
Isn't it significantly slower than the str methods when used as a replacement for them?
Is there an easy way to properly quote/escape strings that will be used in a regular expression? For example if I wanted to use re for replaceLeadingCharacters, how would I ensure that the contents of the old variable will not mess things up in ^[old]+ ? I'd rather have a "black-box" function that does not require its users to pay attention to the list of characters that they provide.
Your replaceLeadingCharacters() seems fine as is.
Here's a replaceLeadingString() implementation that uses the re module (without the while loop):
#!/usr/bin/env python
import re

def lreplace(s, old, new):
    """Return a copy of string `s` with leading occurrences of
    substring `old` replaced by `new`.

    >>> lreplace('abcabcdefabc', 'abc', 'X')
    'XXdefabc'
    >>> lreplace('_abc', 'abc', 'X')
    '_abc'
    """
    return re.sub(r'^(?:%s)+' % re.escape(old),
                  lambda m: new * (m.end() / len(old)),
                  s)
Isn't it significantly slower than the str methods when used as a replacement for them?
Don't guess. Measure it for expected input.
Is there an easy way to properly quote/escape strings that will be used in a regular expression?
re.escape()
re.sub(r'^[._$]+', lambda m: 'Y' * m.end(0), '._.!$XXX$._')
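Applied to the character-class version (a sketch; the wrapper name is mine): re.escape() also defuses anything special inside the [...]:
import re

def replace_leading_chars(s, chars, new=''):
    # Escaped characters are safe inside a character class too.
    return re.sub(r'^[%s]+' % re.escape(chars),
                  lambda m: new * m.end(0),
                  s)

print replace_leading_chars('._.!$XXX$._', '._$', 'Y')   # YYY!$XXX$._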
But IMHO your first solution is good enough.
