I am currently writing a Python script that will search an entire .c or .h file and find all the function definitions so I can then make some slight edits. To find all of the function definitions I am trying to use Regular Expressions.
What I currently have is:
"r'\w+?\s+?\w+?\s*?\(.*?\)[\n\s]*?{"
The problem with this logic is that it currently will accept certain if statements in some cases. For example:
else
if(//herpderp){}
It does this because \s includes \n. I feel that I wouldn't have this issue if my expression looked only for spaces instead of any whitespace, but I can't test that theory, as there seems to be no \<letter> escape for a plain space.
So there is the problem. If you have any advice on how to fix my regular expression, or if there is a better way of writing the script in general please let me know.
A single space can be matched by using a single space, the same way you'd match any other character that isn't a metacharacter:
"r'\w+? +?\w+? *?\(.*?\)\s*?{"
The ' +?' sequence matches one or more spaces, non-greedily. I also replaced [\n\s] with \s, since \n is already included in \s.
You can expand to a character class with more types of whitespace:
[ \t]
which would match a space or a tab.
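As a quick sanity check of the suggested pattern (a small sketch; the inputs are hypothetical):

```python
import re

# the suggested pattern: literal spaces between the words instead of \s
pattern = re.compile(r'\w+? +?\w+? *?\(.*?\)\s*?{')

print(bool(pattern.search("int main(void) {")))  # matches a normal definition
print(bool(pattern.search("else\nif(x) {")))     # no longer matches else/if across a newline
```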
It does this because that \s includes \n
I'm not sure that this is a good theory since writing something like this in C is allowed:
int
main()
One possibility is to use a blacklist or a whitelist to ensure that what you obtain is a function. Example:
r'\b(int|float|double|char)\s+(\w+)\s*\([^)]*\)\s*{' // whitelist
or
r'(?!(?:else)\b)\b(\w+)\s+(\w+)\s*\([^)]*\)\s*{' // blacklist
Note: no need to use lazy quantifiers.
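For instance, the whitelist variant applied to a hypothetical C snippet (only the listed return types will match):

```python
import re

c_code = """
int add(int a, int b) {
    return a + b;
}
else
if (x) {
}
"""

# whitelist of return types, as suggested above; group 2 is the function name
pattern = re.compile(r'\b(int|float|double|char)\s+(\w+)\s*\([^)]*\)\s*{')
for match in pattern.finditer(c_code):
    print(match.group(2))  # prints: add
```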
Related
I'm trying to automatically translate some simple Perl code with a regex to Python, and I'm having an issue. Here is the Perl code:
$stamp='[stamp]';
$message = "message\n";
$message =~ s/^/$stamp/gm;
print "$message";
[stamp]message
Here is my Python equivalent:
>>> import re
>>> re.sub(re.compile("^", re.M), "[stamp]", "message\n", count=0)
'[stamp]message\n[stamp]'
Note the answer is different (it has an extra [stamp] at the end). How do I generate code that has the same behavior for the regex?
Perl and Python's regex engines differ slightly on the definition of a "line"; Perl does not consider the empty string following a trailing newline in the input string to be a line, Python does.
Best solution I can come up with is to change "^" to r"^(?=.|\n)" (note r prefix on string to make it a raw literal; all regex should use raw literals). You can also simplify a bit by just calling methods on the compiled regex or call re.sub with the uncompiled pattern, and since count=0 is already the default, you can omit it. Thus, the final code would be either:
re.compile(r"^(?=.|\n)", re.M).sub("[stamp]", "message\n")
or:
re.sub(r"^(?=.|\n)", "[stamp]", "message\n", flags=re.M)
Even better would be:
start_of_line = re.compile(r"^(?=.|\n)", re.M) # Done once up front
start_of_line.sub("[stamp]", "message\n") # Done on demand
avoiding recompiling/rechecking compiled regex cache each time, by creating the compiled regex just once and reusing it.
Alternative solutions:
Split up the lines in a way that will match Perl's definition of a line, then use the non-re.MULTILINE version of the regex per line, then shove them back together, e.g.:
start_of_line = re.compile(r"^") # Compile once up front without re.M
# Split lines, keeping ends, in a way that matches Perl's definition of a line
# then substitute on line-by-line basis
''.join([start_of_line.sub("[stamp]", line) for line in "message\n".splitlines(keepends=True)])
Strip a single trailing newline, if it exists, up-front, perform regex substitution, add back newline (if applicable):
start_of_line = re.compile(r"^", re.M)  # plain multiline pattern; no lookahead needed here
message = '...'
if message.endswith('\n'):
    result = start_of_line.sub("[stamp]", message[:-1]) + '\n'
else:
    result = start_of_line.sub("[stamp]", message)
Neither option is as succinct/efficient as trying to tweak the regex, but if arbitrary user-supplied regex must be handled, there's always going to be a corner case, and pre-processing to something that removes the Perl/Python incompatibility is a lot safer.
Perl's multiline mode doesn't consider an empty string after the last newline to be a line of its own. That is, it treats A\nB and A\nB\n as both being two lines, while A\nB\nC is three lines. This differs from Python's multiline mode, which treats every newline as starting a new line:
re.M: When specified, the pattern character '^' matches at the beginning of the string and at the beginning of each line (immediately following each newline)
You can mimic the behavior of Perl's multiline mode by adding a lookahead assertion for at least one character at the start of the line:
(?=.|\n)
Note that we need to explicitly permit \n with an alternative |'d pattern since by default . does not match \n. Without this, the pattern would fail to match starts of lines that are immediately followed by a \n.
Here's how this pattern behaves in your example:
>>> re.sub(re.compile(r"^(?=.|\n)", re.M), "[stamp]", "message\n", count=0)
'[stamp]message\n'
>>> re.sub(re.compile(r"^(?=.|\n)", re.M), "[stamp]", "message\nmessage", count=0)
'[stamp]message\n[stamp]message'
>>> re.sub(re.compile(r"^(?=.|\n)", re.M), "[stamp]", "message\nmessage\n", count=0)
'[stamp]message\n[stamp]message\n'
>>> re.sub(re.compile(r"^(?=.|\n)", re.M), "[stamp]", "message\n\n", count=0)
'[stamp]message\n[stamp]\n'
The problem is that Perl doesn't consider the empty string after the \n to be a line of text, but Python does. So Perl's regex engine sees "message\n" as one line of text, but Python's sees it as two.
You can resolve this difference by having the Python code check for a final \n. If it detects one, remove that \n before running the regular expression, and then add it back in after.
You will probably want to check all edge cases, too. For example, how do the Perl and Python code behave if the entire message itself is an empty string? (Will the Perl code do anything in that case? How about the Python code?)
If all your messages are guaranteed to be non-zero-length text ending in a newline, then you can probably get away with just removing the final \n, applying the Python regex code, and then appending that \n back in. But it would still be good form to consider all edge cases.
ADDITION:
Although it's tempting to come up with a regular expression in Python that exactly mimics the one in Perl, I wouldn't necessarily recommend it -- especially if the Python regex is so complicated that it's not easy to tell what it's doing at first or second glance.
Regular expressions don't always handle algorithmic logic very gracefully. So if you can eliminate regexp complexity by introducing some simple algorithmic logic, I would recommend doing that instead.
In other words, you won't win any awards by using overly-complicated regular expressions, so why not use a simple regex paired with simple non-regex algorithmic logic instead? (The future maintainers of your code will thank you!)
Here's a recommendation:
import re
message = "message\n"
message = re.sub(r"(?m:^)", "[stamp]", message)
# Remove the final "[stamp]" if it appears alone at the end:
message = message.removesuffix("[stamp]")
It's simple and easy to follow; the only part that might be confusing is the inline flag syntax, (?m:...), if you've never seen the m flag embedded inside a regular expression before.
Just be sure to test this out on edge cases (like empty messages) before you decide to use it. You don't want to leave your program's logic to behavior you didn't realize existed.
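Here is the recommendation wrapped up as a small function, checked against a couple of inputs (str.removesuffix requires Python 3.9+; the inputs are just examples):

```python
import re

def stamp_lines(message, stamp="[stamp]"):
    stamped = re.sub(r"(?m:^)", stamp, message)
    # drop the stamp that lands on Python's extra "line" after a trailing newline
    return stamped.removesuffix(stamp)

print(repr(stamp_lines("message\n")))  # matches the Perl output: '[stamp]message\n'
print(repr(stamp_lines("a\nb")))       # '[stamp]a\n[stamp]b'
```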
I have a string like this:
<name:john student male age=23 subject=\computer\sience_{20092973}>
I am confused by the ":" and "=" separators. I want to parse this string and split it into a list like this:
name:john
job:student
sex:male
age:23
subject:{20092973}
How can I parse a string with specific field names (name, job, sex, etc.) in Python? I have already searched but couldn't find an answer. How can I do this? Thank you.
It's generally a good idea to give more than one example of the strings you're trying to parse. But I'll take a guess. It looks like your format is pretty simple, and primarily whitespace-separated. It's simple enough that using regular expressions should work, like this, where line_to_parse is the string you want to parse:
import re
matchval = re.match(r"<name:(\S+)\s+(\S+)\s+(\S+)\s+age=(\S+)\s+subject=[^\{]*(\{\S+\})", line_to_parse)
matchgroups = matchval.groups()
Now matchgroups will be a tuple of the values you want. It should be trivial for you to take those and get them into the desired format.
If you want to do many of these, it may be worth compiling the regular expression; take a look at the re documentation for more on this.
As for the way the expression works: I won't go into regular expressions in general (that's what the re docs are for) but in this case, we want to get a bunch of strings that don't have any whitespace in them, and have whitespace between them, and we want to do something odd with the subject, ignoring all the text except the part between { and }.
Each "(...)" in the expression saves whatever is inside it as a group. Each "\S+" stands for one or more ("+") characters that aren't whitespace ("\S"), so "(\S+)" will match and save a string of length at least one that has no whitespace in it. Each "\s+" does the opposite: it has not parentheses around it, so it doesn't save what it matches, and it matches at one or more ("+") whitespace characters ("\s"). This suffices for most of what we want. At the end, though, we need to deal with the subject. "[...]" allows us to list multiple types of characters. "[^...]" is special, and matches anything that isn't in there. {, like [, (, and so on, needs to be escaped to be normal in the string, so we escape it with \, and in the end, that means "[^{]*" matches zero or more ("*") characters that aren't "{" ("[^{]"). Since "*" and "+" are "greedy", and will try to match as much as they can and still have the expression match, we now only need to deal with the last part. From what I've talked about before, it should be pretty clear what "({\S+})" does.
I have an expression like:
int(i[10])
I want to change this as a whole into
int(arg_dict["count"])
Both contain lots of special characters.
Is there a way to ignore all the special characters in these two snippets and treat them as literal text in Vim?
for example, I want to make the following block work with your answer
:%s/#your answer# int(i[10])/#your answer# int(arg_dict["count"])
The only "special" characters, here, are [] so they are the only ones you need to escape:
:s/int(i\[10\])/int(arg_dict["count"])/g
If you want to make use of capturing you will need to escape the () of the capture group as well:
:s/\(int(\)i\[10\])/\1arg_dict["count"])/g
I'm trying to find all instances of the keyword "public" in some Java code (with a Python script) that are not in comments or strings, a.k.a. not found following //, in between a /* and a */, and not in between double or single quotes, and which are not part of variable names-- i.e. they must be preceded by a space, tab, or newline, and must be followed by the same.
So here's what I have at the moment--
//.*\spublic\s.*\n
/\*.*\spublic\s.*\*/
".*\spublic\s.*"
'.*\spublic\s.*'
Am I messing this up at all?
But that finds exactly what I'm NOT looking for. How can I turn it around and search the inverse of the sum of those four expressions, as a single regex?
I've figured out this probably uses negative look-ahead and look-behind, but I still can't quite piece it together. Also, for the /**/ regex, I'm concerned that .* doesn't match newlines, so it would fail to recognize that this public is in a comment:
/*
public
*/
Everything below this point is me thinking on paper and can be disregarded. These thoughts are not fully accurate.
Edit:
I daresay (?<!//).*public.* would match anything not in single line comments, so I'm getting the hang of things. I think. But still unsure how to combine everything.
Edit2:
So then-- following that idea, I |ed them all to get--
(?<!//).*public.*|(?<!/\*).*public.\*/(?!\*/)|(?<!").*public.*(?!")|(?<!').*public.*(?!')
But I'm not sure about that. //public will not be matched by the first alternate, but it will be matched by the second. I need to AND the look-aheads and look-behinds, not OR the whole thing.
I'm sorry, but I'll have to break the news to you, that what you are trying to do is impossible. The reason is mostly because Java is not a regular language. As we all know by now, most regex engines provide non-regular features, but Python in particular is lacking something like recursion (PCRE) or balancing groups (.NET) which could do the trick. But let's look into that in more depth.
First of all, why are your patterns not as good as you think they are? (for the task of matching public inside those literals; similar problems will apply to reversing the logic)
As you have already recognized, you will have problems with line breaks (in the case of /*...*/). This can be solved by either using the modifier/option/flag re.S (which changes the behavior of .) or by using [\s\S] instead of . (because the former matches any character).
But there are other problems. You only want to find surrounding occurrences of the string or comment literals. You are not actually making sure that they are specifically wrapped around the public in question. I'm not sure how much you can put onto a single line in Java, but if you had an arbitrary string, then later a public and then another string on a single line, then your regex would match the public because it can find the " before and after it. Even if that is not possible, if you have two block comments in the same input, then any public between those two block comments would cause a match. So you would need to find a way to assert only that your public is really inside "..." or /*...*/ and not just that these literals can be found anywhere to the left or right of it.
Next thing: matches cannot overlap. But your match includes everything from the opening literal until the ending literal. So if you had "public public" that would cause only one match. And capturing cannot help you here. Usually the trick to avoid this is to use lookarounds (which are not included in the match). But (as we will see later) the lookbehind doesn't work as nicely as you would think, because it cannot be of arbitrary length (only in .NET that is possible).
Now the worst of all. What if you have " inside a comment? That shouldn't count, right? What if you have // or /* or */ inside a string? That shouldn't count, right? What about ' inside "-strings and " inside '-strings? Even worse, what about \" inside "-string? So for 100% robustness you would have to do a similar check for your surrounding delimiters as well. And this is usually where regular expressions reach the end of their capabilities and this is why you need a proper parser that walks the input string and builds a whole tree of your code.
But say you never have comment literals inside strings and you never have quotes inside comments (or only matched quotes, because they would constitute a string, and we don't want public inside strings anyway). So we are basically assuming that each of the literals in question is correctly matched, and they are never nested. In that case you can use a lookahead to check whether you are inside or outside one of the literals (in fact, multiple lookaheads). I'll get to that shortly.
But there is one more thing left. Why does (?<!//).*public.* not work? For this to match, it is enough for (?<!//) to match at any single position. E.g. with the input // public, the engine would try the negative lookbehind right at the start of the string (to the left of the first character), find no //, then use .* to consume // and the space, and then match public. What you actually want is (?<!//.*)public. This would start the lookbehind from the position of public and look all the way to the left through the current line. But... this is a variable-length lookbehind, which is only supported by .NET.
But let's look into how we can make sure we are really outside of a string. We can use a lookahead to look all the way to the end of the input, and check that there is an even number of quotes on the way.
public(?=[^"]*("[^"]*"[^"]*)*$)
Now if we try really hard we can also ignore escaped quotes when inside of a string:
public(?=[^"]*("(?:[^"\\]|\\.)*"[^"]*)*$)
So once we encounter a " we will accept either non-quote, non-backslash characters, or a backslash character and whatever follows it (that allows escaping of backslash-characters as well, so that in "a string\\" we won't treat the closing " as being escaped). We can use this with multi-line mode (re.M) to avoid going all the way to the end of the input (because the end of the line is enough):
public(?=[^"\r\n]*("(?:[^"\r\n\\]|\\.)*"[^"\r\n]*)*$)
(re.M is implied for all following patterns)
This is what it looks for single-quoted strings:
public(?=[^'\r\n]*('(?:[^'\r\n\\]|\\.)*'[^'\r\n]*)*$)
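As a quick sanity check of the double-quote pattern above on a hypothetical one-line input (only the public outside the string should match):

```python
import re

# the double-quote lookahead pattern from above, with re.M
pattern = re.compile(r'public(?=[^"\r\n]*("(?:[^"\r\n\\]|\\.)*"[^"\r\n]*)*$)', re.M)

line = 'String s = "a public word"; public int x;'
print([m.start() for m in pattern.finditer(line)])  # only the occurrence outside the quotes
```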
For block comments it's a bit easier, because we only need to look for /* or the end of the string (this time really the end of the entire string), without ever encountering */ on the way. That is done with a negative lookahead at every single position until the end of the search:
public(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))
But as I said, we're stumped on the single-line comments for now. But anyway, we can combine the last three regular expressions into one, because lookaheads don't actually advance the position of the regex engine on the target string:
public(?=[^"\r\n]*("(?:[^"\r\n\\]|\\.)*"[^"\r\n]*)*$)(?=[^'\r\n]*('(?:[^'\r\n\\]|\\.)*'[^'\r\n]*)*$)(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))
Now what about those single-line comments? The trick to emulate variable-length lookbehinds is usually to reverse the string and the pattern - which makes the lookbehind a lookahead:
cilbup(?!.*//)
Of course, that means we have to reverse all other patterns, too. The good news is, if we don't care about escaping, they look exactly the same (because both quotes and block comments are symmetrical). So you could run this pattern on a reversed input:
cilbup(?=[^"\r\n]*("[^"\r\n]*"[^"\r\n]*)*$)(?=[^'\r\n]*('[^'\r\n]*'[^'\r\n]*)*$)(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))(?!.*//)
You can then find the match positions in your actual input by using inputLength - foundMatchPosition - foundMatchLength.
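A minimal sketch of this reverse-and-match idea, using only the single-line-comment check on a hypothetical input (the full combined pattern would add the other lookaheads):

```python
import re

code = 'public int x; // public in a comment'

# "public" reversed; the lookahead now plays the role of a variable-length lookbehind
pattern = re.compile(r'cilbup(?!.*//)')

for m in pattern.finditer(code[::-1]):
    start = len(code) - m.end()  # map the reversed position back to the original
    print(start, code[start:start + len('public')])  # only the first public matches
```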
Now what about escaping? That gets quite annoying now, because we have to skip quotes if they are followed by a backslash. Because of some backtracking issues we need to take care of that in five places: three times when consuming non-quote characters (because we now need to allow "\ as well), and twice when consuming quote characters (using a negative lookahead to make sure there is no backslash after them). Let's look at double quotes:
cilbup(?=(?:[^"\r\n]|"\\)*(?:"(?!\\)(?:[^"\r\n]|"\\)*"(?!\\)(?:[^"\r\n]|"\\)*)*$)
(It looks horrible, but if you compare it with the pattern that disregards escaping, you will notice the few differences.)
So incorporating that into the above pattern:
cilbup(?=(?:[^"\r\n]|"\\)*(?:"(?!\\)(?:[^"\r\n]|"\\)*"(?!\\)(?:[^"\r\n]|"\\)*)*$)(?=(?:[^'\r\n]|'\\)*(?:'(?!\\)(?:[^'\r\n]|'\\)*'(?!\\)(?:[^'\r\n]|'\\)*)*$)(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))(?!.*//)
So this might actually do it for many cases. But as you can see it's horrible, almost impossible to read, and definitely impossible to maintain.
What were the caveats? No comment literals inside strings, no string literals inside strings of the other type, no string literals inside comments. Plus, we have four independent lookaheads, which will probably take some time (at least I think I have avoided most of the backtracking).
In any case, I believe this is as close as you can get with regular expressions.
EDIT:
I just realised I forgot the condition that public must not be part of a longer literal. You included spaces, but what if it's the first thing in the input? The easiest thing would be to use \b. That matches a position (without including surrounding characters) that is between a word character and a non-word character. However, Java identifiers may contain any Unicode letter or digit, and I'm not sure whether Python's \b is Unicode-aware. Also, Java identifiers may contain $. Which would break that anyway. Lookarounds to the rescue! Instead of asserting that there is a space character on every side, let's assert that there is no non-space character. Because we need negative lookarounds for that, we will get the advantage of not including those characters in the match for free:
(?<!\S)cilbup(?!\S)(?=(?:[^"\r\n]|"\\)*(?:"(?!\\)(?:[^"\r\n]|"\\)*"(?!\\)(?:[^"\r\n]|"\\)*)*$)(?=(?:[^'\r\n]|'\\)*(?:'(?!\\)(?:[^'\r\n]|'\\)*'(?!\\)(?:[^'\r\n]|'\\)*)*$)(?=(?:(?![*]/)[\s\S])*(?:/[*]|\Z))(?!.*//)
And because just from scrolling this code snippet to the right one cannot quite grasp how ridiculously huge this regex is, here it is in freespacing mode (re.X) with some annotations:
(?<!\S) # make sure there is no trailing non-whitespace character
cilbup # public
(?!\S) # make sure there is no leading non-whitespace character
(?= # lookahead (effectively lookbehind!) to ensure we are not inside a
# string
(?:[^"\r\n]|"\\)*
# consume everything except for line breaks and quotes, unless the
# quote is followed by a backslash (preceded in the actual input)
(?: # subpattern that matches two (unescaped) quotes
"(?!\\) # a quote that is not followed by a backslash
(?:[^"\r\n]|"\\)*
# we've seen that before
"(?!\\) # a quote that is not followed by a backslash
(?:[^"\r\n]|"\\)*
# we've seen that before
)* # end of subpattern - repeat 0 or more times (ensures even no. of ")
$ # end of line (start of line in actual input)
) # end of double-quote lookahead
(?=(?:[^'\r\n]|'\\)*(?:'(?!\\)(?:[^'\r\n]|'\\)*'(?!\\)(?:[^'\r\n]|'\\)*)*$)
# the same horrible bastard again for single quotes
(?= # lookahead (effectively lookbehind) for block comments
(?: # subgroup to consume anything except */
(?![*]/) # make sure there is no */ coming up
[\s\S] # consume an arbitrary character
)* # repeat
(?:/[*]|\Z)# require to find either /* or the end of the string
) # end of lookahead for block comments
(?!.*//) # make sure there is no // on this line
Have you considered replacing all comments and single- and double-quoted string literals with empty strings using the re.sub() method? Then just do a simple search/match/find of the resulting file for the word you're looking for.
That would at least give you the line numbers where the word is located. You may be able to use that information to edit the original file.
You could use pyparsing to find public keyword outside a comment or a double quoted string:
from pyparsing import Keyword, javaStyleComment, dblQuotedString
keyword = "public"
expr = Keyword(keyword).ignore(javaStyleComment | dblQuotedString)
Example
for [token], start, end in expr.scanString(r"""{keyword} should match
/*
{keyword} should not match "
*/
// this {keyword} also shouldn't match
"neither this \" {keyword}"
but this {keyword} will
re{keyword} is ignored
'{keyword}' - also match (only double quoted strings are ignored)
""".format(keyword=keyword)):
assert token == keyword and len(keyword) == (end - start)
print("Found at %d" % start)
Output
Found at 0
Found at 146
Found at 187
To ignore also single quoted string, you could use quotedString instead of dblQuotedString.
To do it with only regexes, see regex-negation tag on SO e.g., Regular expression to match string not containing a word? or using even less regex capabilities Regex: Matching by exclusion, without look-ahead - is it possible?. The simple way would be to use a positive match and skip matched comments, quoted strings. The result is the rest of the matches.
It's finding the opposite because that's just what you're asking for. :)
I don't know a way to match them all in a single regex (though it should be theoretically possible, since the regular languages are closed under complements and intersections). But you could definitely search for all instances of public, and then remove any instances that are matched by one of your "bad" regexes. Try using for example set.difference on the match.start and match.end properties from re.finditer.
So I have one variable that has all the code from some file. I need to remove all comments from this file. One of my regexp lines is this
x=re.sub('\/\*.*\*\/','',x,re.M,re.S);
What I want this to do is remove all multi-line comments. For some odd reason, though, it's skipping two instances of */ and removing everything up to the third instance of */.
I'm pretty sure the reason is that this third instance of */ has code after it, while the first two are alone on their lines. I'm not sure why this matters, but I'm pretty sure that's why.
Any ideas?
.* will always match as many characters as possible. Try (.*?) instead - most implementations will then try to match as few characters as possible (it should also work without the parentheses). So your whole pattern should look like this: \/\*.*?\*\/ or \/\*(.*?)\*\/
The expression .* is greedy, meaning that it will attempt to match as many characters as possible. Instead, use (.*?) which will stop matching characters as soon as possible.
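To see the difference on a hypothetical input (note also that re.sub's fourth and fifth positional arguments are count and flags, so flags such as re.S should be passed by keyword, not positionally as in the question's code):

```python
import re

x = "int a; /* one */ int b; /* two */ int c;"

# greedy: eats everything from the first /* to the last */
print(re.sub(r'/\*.*\*/', '', x, flags=re.S))   # int b; disappears too

# non-greedy: each comment is removed separately
print(re.sub(r'/\*.*?\*/', '', x, flags=re.S))  # int b; survives
```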
The regular expression is "greedy" and when presented with several stopping points will take the farthest one. Regex has some patterns to help control this, in particular the
(?>!...)
which matches the following expression only if it is Not preceeded by a match of the pattern in parens. (put in a pointy brace for > in the above - I don't know the forum convention for getting on in my answer).
(?*...) was not in Python 2.4 but is a good choice if you are using a later version.