I'm looking to extract,
ID=(?P<group>.+?);
from a string. The 'ID=' is a constant; group can be anything. The match's position will vary within the string.
This is fine in most cases; however, occasionally the match will be at the end of the string and the semi-colon will be missing. In this case, how do I optionally match the end of the string? I tried the following:
ID=(?P<group>.+?)[;$]
But this didn't seem to work, I imagine because $ is not a character (it's an anchor?).
This is being done in Python using the re module, and all normal behaviour such as using raw strings has been accounted for (I think!).
You can use (;|$) to match it, or (?:;|$) if you don't want a capturing group.
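For example (a quick sketch; the sample strings are made up):
import re

pattern = re.compile(r"ID=(?P<group>.+?)(?:;|$)")
print(pattern.search("a=1;ID=abc123;b=2").group("group"))  # abc123 (semicolon present)
print(pattern.search("a=1;ID=abc123").group("group"))      # abc123 (match at the end of the string)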
if the option re.MULTILINE is NOT enabled ($ means end of string):
ID=(?P<group>[^;]+)
if the option re.MULTILINE IS enabled ($ means end of line):
ID=(?P<group>[^;\r\n]+)
What are you capturing, by the way: what exactly appears between 'ID=' and the ';'?
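A quick sketch of that negated-class approach (the sample strings are made up):
import re

pattern = re.compile(r"ID=(?P<group>[^;\r\n]+)")
print(pattern.search("a=1;ID=abc123;b=2").group("group"))  # abc123
print(pattern.search("a=1;ID=abc123").group("group"))      # abc123, no trailing semicolon needed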
I am trying to get a substring between two markers using re in Python, for example:
import re
test_str = "#$ -N model_simulation 2022"
# these two lines work
# the output is: model_simulation
print(re.search("-N(.*)2022",test_str).group(1))
print(re.search(" -N(.*)2022",test_str).group(1))
# these two lines give the error: 'NoneType' object has no attribute 'group'
print(re.search("$ -N(.*)2022",test_str).group(1))
print(re.search("#$ -N(.*)2022",test_str).group(1))
I read the documentation of re here. It says that "#" is intentionally ignored so that the outputs look neater.
But in my case, I do need to include "#" and "$". I need them to identify the part of the string that I want, because the "-N" is not unique in my entire text string for real work.
Is there a way to force re to include those? Or is there a different way without using re?
Thanks.
You can escape both with \, for example,
print(re.search("\#\$ -N(.*)2022",test_str).group(1))
# output model_simulation
You can get rid of the special meaning by using the backslash prefix: \$. This way, you can match the dollar symbol in a given string.
# add backslash before # and $
# the output is: model_simulation
print(re.search("\$ -N(.*)2022",test_str).group(1))
print(re.search("\#\$ -N(.*)2022",test_str).group(1))
In regular expressions, $ signals the end of the string. So 'foo' would match foo anywhere in the string, but 'foo$' only matches foo if it appears at the end. To solve this, you need to escape it by prefixing it with a backslash. That way it will match a literal $ character.
# is only the start of a comment in verbose mode using re.VERBOSE (which also ignores spaces); otherwise it just matches a literal #.
In general, it is also good practice to use raw string literals for regular expressions (r'foo'), which means Python will leave backslashes alone so they don't conflict with the regex's own escaping (that way you don't have to type \\\\ to match a single backslash \).
Instead of re.search, it looks like you actually want re.fullmatch, which matches only if the whole string matches.
So I would write your code like this:
print(re.search(r"\$ -N(.*)2022", test_str).group(1)) # This one would not work with fullmatch, because it doesn't match at the start
print(re.fullmatch(r"#\$ -N(.*)2022", test_str).group(1))
In a comment you mentioned that the string you need to match changes all the time. In that case, re.escape may prove useful.
Example:
prefix = '#$ -N'
postfix = '2022'
print(re.fullmatch(re.escape(prefix) + '(.*)' + re.escape(postfix), test_str).group(1))
This is a follow-up to this SO post, which gives a solution to replace text in a string column:
How to replace text in a column of a Pandas dataframe?
df['range'] = df['range'].str.replace(',','-')
However, this doesn't seem to work with double periods or a question mark followed by a period
import pandas as pd

testList = ['this is a.. test stence', 'for which is ?. was a time']
testDf = pd.DataFrame(testList, columns=['strings'])
testDf['strings'].str.replace('..', '.').head()
results in
0 ...........e
1 .............
Name: strings, dtype: object
and
testDf['strings'].str.replace('?.', '?').head()
results in
error: nothing to repeat at position 0
Add the regex=False parameter because, as you can see in the docs, regex is True by default:
regex : bool, default True
Determines if the passed-in pattern is a regular expression:
If True, assumes the passed-in pattern is a regular expression.
And ? and . are special characters in regular expressions.
So one way to do it without regex is this double replace:
testDf['strings'].str.replace('..', '.', regex=False).str.replace('?.', '?', regex=False)
Output:
strings
0 this is a. test stence
1 for which is ? was a time
Replace using a regular expression. In this case, replace any special character '.' immediately followed by whitespace. This is a bit convoluted; I advise you to go with @Mark Reed's answer.
testDf.replace(regex=r'([.](?=\s))', value=r'')
strings
0 this is a. test stence
1 for which is ? was a time
str.replace() works with a Regex where . is a special character which denotes "any" character. If you want a literal dot, you need to escape it: "\.". Same for other special Regex characters like ?.
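For example, escaping the dots and enabling regex explicitly (a sketch continuing the testDf example above):
testDf['strings'].str.replace(r'\.\.', '.', regex=True).str.replace(r'\?\.', '?', regex=True)
# gives 'this is a. test stence' and 'for which is ? was a time'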
First, be aware that the Pandas replace method is different from the standard Python one, which operates only on fixed strings. The Pandas one can behave as either the regular string.replace or re.sub (the regular-expression substitute method), depending on the value of a flag, and the default is to act like re.sub. So you need to treat your first argument as a regular expression. That means you do have to change the string, but it also has the benefit of allowing you to do both substitutions in a single call.
A regular expression isn't a string to be searched for literally, but a pattern that acts as instructions telling Python what to look for. Most characters just ask Python to match themselves, but some are special, and both . and ? happen to be in the special category.
The easiest thing to do is to use a character class to match either . or ? followed by a period, and remember which one it was so that it can be included in the replacement, just without the following period. That looks like this:
testDf.replace(regex=r'([.?])\.', value=r'\1')
The [.?] means "match either a period or a question mark"; since they're inside the [...], those normally-special characters don't need to be escaped. The parentheses around the square brackets tell Python to remember which of those two characters is the one it actually found. The next thing that has to be there in order to match is the period you're trying to get rid of, which has to be escaped with a backslash because this one's not inside [...].
In the replacement, the special sequence \1 means "whatever you found that matched the pattern between the first set of parentheses", so that's either the period or question mark. Since that's the entire replacement, the following period is removed.
Now, you'll notice I used raw strings (r'...') for both; that keeps Python from doing its own interpretation of the backslashes before replace can. If the replacement were just '\1' without the r it would replace them with character code 1 (control-A) instead of the first matched group.
To replace both the ? and . at the same time you can separate by | (the regex OR operator).
testDf['strings'].str.replace(r'\?\.|\.\.', '.', regex=True)
Escape the dots with a \, because . is a regex special character:
testDf['strings'].str.replace(r'\.\.', '.', regex=True)
You can do the same with the ?, which is another regex special character.
testDf['strings'].str.replace(r'\?\.', '.', regex=True)
I have this expression
:([^"]*) \(([^"]*)\)
and this text
:chkpf_uid ("{4astr-hn389-918ks}")
:"#cert" ("false")
I'm trying to match it so that on the first line I'll get these groups:
chkpf_uid
{4astr-hn389-918ks}
and on the second, I'll get these:
#cert
false
I want to avoid getting the quotes.
I can't seem to understand why the expression I use won't match these, especially if I switch the [^"]* to a (.*).
with ([^"]*): won't match
with (.*): does match, but with quotes
This is using the re module in python 2.7
Side note: your input may require a dedicated parser to handle it properly, especially if it can contain escape sequences.
Answering the question itself, remember that a regex is processed from left to right sequentially, and the string is processed the same here. A match is returned if the pattern matches a portion/whole string (depending on the method used).
If there are quotation marks in the string and your pattern does not allow matching those quotes, the match will fail and nothing will be returned.
A possible solution is to add the quotes as optional subpatterns:
:"?([^"]*)"? \("?([^"]*)"?\)
 ^^       ^^   ^^       ^^
See the regex demo
The parts you need are captured into groups, and the quotes, whether present or not, are just matched and left out of your re.findall results.
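A quick sketch of that pattern against the two sample lines from the question:
import re

text = ''':chkpf_uid ("{4astr-hn389-918ks}")
:"#cert" ("false")'''

print(re.findall(r':"?([^"]*)"? \("?([^"]*)"?\)', text))
# [('chkpf_uid', '{4astr-hn389-918ks}'), ('#cert', 'false')]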
I'm having trouble validating these input strings in Python.
The weekday has a variable number of characters.
Regular: 16Mar2009(mon), 17Mar2009(tues), 18Mar2009(wed)
Regular: 20Mar2009(fri), 21Mar2009(sat), 22Mar2009(sun)
Rewards: 26Mar2009(thur), 27Mar2009(fri), 28Mar2009(sat)
I want to validate the whole line, every line should have this specific format:
<name>: <date>(<weekday>), <date>(<weekday>), <date>(<weekday>)
Thanks in advance!
Try this: \w+: \d+\w+\(\w+\)(?:,\s*\d+\w+\(\w+\))*
Using programs like kiki-re you can test regexps easily.
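A rough sketch of that pattern in Python (using re.fullmatch so the whole line must match; the sample lines are from the question):
import re

pattern = re.compile(r'\w+: \d+\w+\(\w+\)(?:,\s*\d+\w+\(\w+\))*')
print(bool(pattern.fullmatch("Regular: 20Mar2009(fri), 21Mar2009(sat), 22Mar2009(sun)")))  # True
print(bool(pattern.fullmatch("not a schedule line")))  # False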
Regular: 20Mar2009(fri), 21Mar2009(sat), 22Mar2009(sun)
your_regex = (r'^[A-Za-z]+:\s+(?:\d{1,2}[A-Za-z]{3}\d{4}\([A-Za-z]{3,4}\),\s+){2}'
              r'\d{1,2}[A-Za-z]{3}\d{4}\([A-Za-z]{3,4}\)$')
To see how this works, see the explanations below:
NB: You could use \w, but the character class corresponding to \w is [a-zA-Z0-9_].
r means interpret as a raw string
^ means begins with
[A-Za-z] means any character that is in the character class A,B,C...Z and a, b, c, ...z
+ means 1 or more of the preceding
: matches the literal colon
\s means whitespace
(?:...) means group, but do not capture (see capturing groups for the alternative (...))
{x,y} means that there must exist [x,y] of whatever precedes this
{x} means that there must exist exactly 'x' of whatever came before this
\( and \) mean ( and ), but need to be preceded by \ since parenthesis are special characters in regular expressions.
$ means ends with
While this may not be exactly what you want, it works for your input and you now hopefully have the tools to change it to fit your needs. You should consider edge cases, though; for example, with the proposed solution above, you could easily match: blah: 99zzz0000... Good luck!
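A quick sketch of the regex above in use (the {3,4} weekday part covers forms like tues and thur; the nonsense line just illustrates the edge-case caveat):
import re

your_regex = (r'^[A-Za-z]+:\s+(?:\d{1,2}[A-Za-z]{3}\d{4}\([A-Za-z]{3,4}\),\s+){2}'
              r'\d{1,2}[A-Za-z]{3}\d{4}\([A-Za-z]{3,4}\)$')

for line in ["Regular: 16Mar2009(mon), 17Mar2009(tues), 18Mar2009(wed)",
             "blah: 99zzz0000(xxx), 99zzz0000(xxx), 99zzz0000(xxx)"]:
    print(bool(re.match(your_regex, line)))  # True for both - including the nonsense line mentioned above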
I'm trying to find all instances of the keyword "public" in some Java code (with a Python script) that are not in comments or strings, a.k.a. not found following //, in between a /* and a */, and not in between double or single quotes, and which are not part of variable names-- i.e. they must be preceded by a space, tab, or newline, and must be followed by the same.
So here's what I have at the moment--
//.*\spublic\s.*\n
/\*.*\spublic\s.*\*/
".*\spublic\s.*"
'.*\spublic\s.*'
Am I messing this up at all?
But that finds exactly what I'm NOT looking for. How can I turn it around and search the inverse of the sum of those four expressions, as a single regex?
I've figured out this probably uses negative look-ahead and look-behind, but I still can't quite piece it together. Also, for the /**/ regex, I'm concerned that .* doesn't match newlines, so it would fail to recognize that this public is in a comment:
/*
public
*/
Everything below this point is me thinking on paper and can be disregarded. These thoughts are not fully accurate.
Edit:
I daresay (?<!//).*public.* would match anything not in single line comments, so I'm getting the hang of things. I think. But still unsure how to combine everything.
Edit2:
So then-- following that idea, I |ed them all to get--
(?<!//).*public.*|(?<!/\*).*public.\*/(?!\*/)|(?<!").*public.*(?!")|(?<!').*public.*(?!')
But I'm not sure about that. //public will not be matched by the first alternate, but it will be matched by the second. I need to AND the look-aheads and look-behinds, not OR the whole thing.
I'm sorry, but I'll have to break the news to you that what you are trying to do is impossible. The reason is mostly that Java is not a regular language. As we all know by now, most regex engines provide non-regular features, but Python in particular is lacking something like recursion (PCRE) or balancing groups (.NET) which could do the trick. But let's look into that in more depth.
First of all, why are your patterns not as good as you think they are? (for the task of matching public inside those literals; similar problems will apply to reversing the logic)
As you have already recognized, you will have problems with line breaks (in the case of /*...*/). This can be solved by either using the modifier/option/flag re.S (which changes the behavior of .) or by using [\s\S] instead of . (because the latter matches any character).
But there are other problems. You only want to find surrounding occurrences of the string or comment literals. You are not actually making sure that they are specifically wrapped around the public in question. I'm not sure how much you can put onto a single line in Java, but if you had an arbitrary string, then later a public and then another string on a single line, then your regex would match the public because it can find the " before and after it. Even if that is not possible, if you have two block comments in the same input, then any public between those two block comments would cause a match. So you would need to find a way to assert only that your public is really inside "..." or /*...*/ and not just that these literals can be found anywhere to the left or right of it.
Next thing: matches cannot overlap. But your match includes everything from the opening literal until the ending literal. So if you had "public public" that would cause only one match. And capturing cannot help you here. Usually the trick to avoid this is to use lookarounds (which are not included in the match). But (as we will see later) the lookbehind doesn't work as nicely as you would think, because it cannot be of arbitrary length (only in .NET is that possible).
Now the worst of all. What if you have " inside a comment? That shouldn't count, right? What if you have // or /* or */ inside a string? That shouldn't count, right? What about ' inside "-strings and " inside '-strings? Even worse, what about \" inside "-string? So for 100% robustness you would have to do a similar check for your surrounding delimiters as well. And this is usually where regular expressions reach the end of their capabilities and this is why you need a proper parser that walks the input string and builds a whole tree of your code.
But say you never have comment literals inside strings and you never have quotes inside comments (or only matched quotes, because they would constitute a string, and we don't want public inside strings anyway). So we are basically assuming that every one of the literals in question is correctly matched, and they are never nested. In that case you can use a lookahead to check whether you are inside or outside one of the literals (in fact, multiple lookaheads). I'll get to that shortly.
But there is one more thing left. Why does (?<!//).*public.* not work? For this to match it is enough for (?<!//) to match at any single position. e.g. if you just had input // public the engine would try out the negative lookbehind right at the start of the string (to the left of the start of the string), would find no //, then use .* to consume // and the space and then match public. What you actually want is (?<!//.*)public. This will start the lookbehind from the starting position of public and look all the way to the left through the current line. But... this is a variable-length lookbehind, which is only supported by .NET.
But let's look into how we can make sure we are really outside of a string. We can use a lookahead to look all the way to the end of the input, and check that there is an even number of quotes on the way.
public(?=[^"]*("[^"]*"[^"]*)*$)
Now if we try really hard we can also ignore escaped quotes when inside of a string:
public(?=[^"]*("(?:[^"\\]|\\.)*"[^"]*)*$)
So once we encounter a " we will accept either non-quote, non-backslash characters, or a backslash character and whatever follows it (that allows escaping of backslash-characters as well, so that in "a string\\" we won't treat the closing " as being escaped). We can use this with multi-line mode (re.M) to avoid going all the way to the end of the input (because the end of the line is enough):
public(?=[^"\r\n]*("(?:[^"\r\n\\]|\\.)*"[^"\r\n]*)*$)
(re.M is implied for all following patterns)
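As a quick illustration (a sketch with a made-up Java-ish line; re.M as noted above):
import re

pattern = re.compile(r'public(?=[^"\r\n]*("(?:[^"\r\n\\]|\\.)*"[^"\r\n]*)*$)', re.M)

line = 'public String s = "a public field";'
print([m.start() for m in pattern.finditer(line)])  # only the public outside the string matches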
This is what it looks like for single-quoted strings:
public(?=[^'\r\n]*('(?:[^'\r\n\\]|\\.)*'[^'\r\n]*)*$)
For block comments it's a bit easier, because we only need to look for /* or the end of the string (this time really the end of the entire string), without ever encountering */ on the way. That is done with a negative lookahead at every single position until the end of the search:
public(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))
But as I said, we're stumped on the single-line comments for now. But anyway, we can combine the last three regular expressions into one, because lookaheads don't actually advance the position of the regex engine on the target string:
public(?=[^"\r\n]*("(?:[^"\r\n\\]|\\.)*"[^"\r\n]*)*$)(?=[^'\r\n]*('(?:[^'\r\n\\]|\\.)*'[^'\r\n]*)*$)(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))
Now what about those single-line comments? The trick to emulate variable-length lookbehinds is usually to reverse the string and the pattern - which makes the lookbehind a lookahead:
cilbup(?!.*//)
Of course, that means we have to reverse all other patterns, too. The good news is, if we don't care about escaping, they look exactly the same (because both quotes and block comments are symmetrical). So you could run this pattern on a reversed input:
cilbup(?=[^"\r\n]*("[^"\r\n]*"[^"\r\n]*)*$)(?=[^'\r\n]*('[^'\r\n]*'[^'\r\n]*)*$)(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))(?!.*//)
You can then find the match positions in your actual input by using inputLength - foundMatchPosition - foundMatchLength.
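As a rough sketch of that position arithmetic, using only the single-line-comment part of the pattern (the sample code string is made up):
import re

code = 'int x = 1; // public in a comment\npublic void run() {}'
reversed_code = code[::-1]

for m in re.finditer(r'cilbup(?!.*//)', reversed_code):
    start = len(code) - m.end()  # inputLength - foundMatchPosition - foundMatchLength
    print(start, code[start:start + len('public')])  # only the un-commented public is reported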
Now what about escaping? That gets quite annoying now, because we have to skip quotes if they are followed by a backslash. Because of some backtracking issues we need to take care of that in five places: three times when consuming non-quote characters (because we need to allow "\ as well now), and twice when consuming quote characters (using a negative lookahead to make sure there is no backslash after them). Let's look at double quotes:
cilbup(?=(?:[^"\r\n]|"\\)*(?:"(?!\\)(?:[^"\r\n]|"\\)*"(?!\\)(?:[^"\r\n]|"\\)*)*$)
(It looks horrible, but if you compare it with the pattern that disregards escaping, you will notice the few differences.)
So incorporating that into the above pattern:
cilbup(?=(?:[^"\r\n]|"\\)*(?:"(?!\\)(?:[^"\r\n]|"\\)*"(?!\\)(?:[^"\r\n]|"\\)*)*$)(?=(?:[^'\r\n]|'\\)*(?:'(?!\\)(?:[^'\r\n]|'\\)*'(?!\\)(?:[^'\r\n]|'\\)*)*$)(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))(?!.*//)
So this might actually do it for many cases. But as you can see it's horrible, almost impossible to read, and definitely impossible to maintain.
What were the caveats? No comment literals inside strings, no string literals inside strings of the other type, no string literals inside comments. Plus, we have four independent lookaheads, which will probably take some time (at least I think I have avoided most of the backtracking).
In any case, I believe this is as close as you can get with regular expressions.
EDIT:
I just realised I forgot the condition that public must not be part of a longer literal. You included spaces, but what if it's the first thing in the input? The easiest thing would be to use \b. That matches a position (without including surrounding characters) that is between a word character and a non-word character. However, Java identifiers may contain any Unicode letter or digit, and I'm not sure whether Python's \b is Unicode-aware. Also, Java identifiers may contain $. Which would break that anyway. Lookarounds to the rescue! Instead of asserting that there is a space character on every side, let's assert that there is no non-space character. Because we need negative lookarounds for that, we will get the advantage of not including those characters in the match for free:
(?<!\S)cilbup(?!\S)(?=(?:[^"\r\n]|"\\)*(?:"(?!\\)(?:[^"\r\n]|"\\)*"(?!\\)(?:[^"\r\n]|"\\)*)*$)(?=(?:[^'\r\n]|'\\)*(?:'(?!\\)(?:[^'\r\n]|'\\)*'(?!\\)(?:[^'\r\n]|'\\)*)*$)(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))(?!.*//)
And because just from scrolling this code snippet to the right one cannot quite grasp how ridiculously huge this regex is, here it is in freespacing mode (re.X) with some annotations:
(?<!\S) # make sure there is no trailing non-whitespace character
cilbup # public
(?!\S) # make sure there is no leading non-whitespace character
(?= # lookahead (effectively lookbehind!) to ensure we are not inside a
# string
(?:[^"\r\n]|"\\)*
# consume everything except for line breaks and quotes, unless the
# quote is followed by a backslash (preceded in the actual input)
(?: # subpattern that matches two (unescaped) quotes
"(?!\\) # a quote that is not followed by a backslash
(?:[^"\r\n]|"\\)*
# we've seen that before
"(?!\\) # a quote that is not followed by a backslash
(?:[^"\r\n]|"\\)*
# we've seen that before
)* # end of subpattern - repeat 0 or more times (ensures even no. of ")
$ # end of line (start of line in actual input)
) # end of double-quote lookahead
(?=(?:[^'\r\n]|'\\)*(?:'(?!\\)(?:[^'\r\n]|'\\)*'(?!\\)(?:[^'\r\n]|'\\)*)*$)
# the same horrible bastard again for single quotes
(?= # lookahead (effectively lookbehind) for block comments
(?: # subgroup to consume anything except */
(?![*]/) # make sure there is no */ coming up
[\s\S] # consume an arbitrary character
)* # repeat
(?:/[*]|\Z)# require to find either /* or the end of the string
) # end of lookahead for block comments
(?!.*//) # make sure there is no // on this line
Have you considered replacing all comments and single- and double-quoted string literals with empty strings using the re.sub() method?
That would at least give you the line numbers where the word is located. You may be able to use that information to edit the original file.
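A rough sketch of that idea (only the simple literal forms are handled, no nesting; padding with same-length spaces keeps the offsets and line numbers intact):
import re

java_source = 'public class A { /* public */ String s = "public"; } // public'

literals = r'//.*|/\*[\s\S]*?\*/|"(?:\\.|[^"\\])*"'
stripped = re.sub(literals, lambda m: ' ' * len(m.group()), java_source)
print([m.start() for m in re.finditer(r'\bpublic\b', stripped)])  # only the keyword outside the literals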
You could use pyparsing to find public keyword outside a comment or a double quoted string:
from pyparsing import Keyword, javaStyleComment, dblQuotedString
keyword = "public"
expr = Keyword(keyword).ignore(javaStyleComment | dblQuotedString)
Example
for [token], start, end in expr.scanString(r"""{keyword} should match
/*
{keyword} should not match "
*/
// this {keyword} also shouldn't match
"neither this \" {keyword}"
but this {keyword} will
re{keyword} is ignored
'{keyword}' - also match (only double quoted strings are ignored)
""".format(keyword=keyword)):
    assert token == keyword and len(keyword) == (end - start)
    print("Found at %d" % start)
Output
Found at 0
Found at 146
Found at 187
To ignore also single quoted string, you could use quotedString instead of dblQuotedString.
To do it with only regexes, see regex-negation tag on SO e.g., Regular expression to match string not containing a word? or using even less regex capabilities Regex: Matching by exclusion, without look-ahead - is it possible?. The simple way would be to use a positive match and skip matched comments, quoted strings. The result is the rest of the matches.
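For instance, one common way to express that skip-the-literals idea in a single pattern is to put the literals first in an alternation and keep only the matches where the keyword group actually participated (a sketch that handles only the simple cases):
import re

pattern = re.compile(r'//.*|/\*[\s\S]*?\*/|"(?:\\.|[^"\\])*"|(\bpublic\b)')

code = '"public" /* public */ public int x; // public'
print([m.start(1) for m in pattern.finditer(code) if m.group(1)])  # position of the bare keyword only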
It's finding the opposite because that's just what you're asking for. :)
I don't know a way to match them all in a single regex (though it should be theoretically possible, since the regular languages are closed under complements and intersections). But you could definitely search for all instances of public, and then remove any instances that are matched by one of your "bad" regexes. Try using, for example, set.difference on the match.start() and match.end() values from re.finditer.
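A sketch of that suggestion, filtering the keyword matches by the spans of the "bad" matches (a slight variation on set.difference; only the simple literal forms are handled and the sample code string is made up):
import re

code = 'public int x; // public\nString s = "public";\npublic void f() {}'

bad_spans = [(m.start(), m.end())
             for bad in (r'//.*', r'/\*[\s\S]*?\*/', r'"[^"\n]*"', r"'[^'\n]*'")
             for m in re.finditer(bad, code)]

keepers = [m.start() for m in re.finditer(r'\bpublic\b', code)
           if not any(s <= m.start() < e for s, e in bad_spans)]
print(keepers)  # positions of the two keyword uses only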