Currently, I have a Boolean expression that supports the operators & (logical AND), | (logical OR), and ( ) (parentheses), along with status codes like s, f, d, n, t and job names.
The status codes represent the status of a job (e.g. s = success, f = failure, etc.), and the job name is enclosed within parentheses with an optional argument, which is a number within quotes.
Example i/p:
( s(job_A, "11:00") & f(job_B) ) | ( s(job_C) & t(job_D) )
My requirement: for such a given string in Python, I need to replace the existing job names with new job names carrying a prefix, and everything else should remain the same:
Example o/p:
( s(prefix_job_A, "11:00") & f(prefix_job_B) ) | ( s(prefix_job_C) & t(prefix_job_D) )
This logical expression can be arbitrarily nested, like any Boolean expression, and since it is a non-regular language we can't parse it with regexes.
Please note: the job names are NOT known beforehand, so we can't statically store the names in a dictionary and perform a replacement.
The approach I have thought of is to generate an expression tree and perform the replacements in the OPERAND nodes of that tree; however, I am not sure how to proceed. Is there any library in Python that can help me define the grammar to build this tree? How do I specify the grammar?
Can someone help me with the approach?
Edit: The job names don't have any particular form. The minimum length of a job name is 6, and the job names are alphanumeric with underscores.
Given that we can assume job names are alphanumeric + _ and at least 6 characters long, we should be able to do this with just a regex, since nothing else in the given strings looks like that.
import re

exp = '( s(job__A, "11:00") & f(job__B) ) | ( s(job__C) & t(job__D) )'
name_regex = r"([a-zA-Z\d_]{6,})"  # at least 6 alphanumeric/underscore characters
prefix = "prefix_"
new_exp = re.sub(name_regex, f"{prefix}\\1", exp)
print(new_exp)
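Running this prints the expression with every job name prefixed, while the operators, status codes, and the quoted "11:00" argument are left untouched:
( s(prefix_job__A, "11:00") & f(prefix_job__B) ) | ( s(prefix_job__C) & t(prefix_job__D) )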
I'm trying to find a better way to capture variable values from a file that stores some information, but I'm facing problems with line breaks and spaces. For example, the DataSetList variable stores its value in two different ways:
Input
Templates = <
item
Name = 'fruits'
TemplateList = '7,12'
end>
Surveys = <
item
ID = 542
Name = 'apple'
end
item
ID = 872
Name = 'banana'
DataSetList = '873,887,971,1055'
PluginInfo = {something}
end
item
ID = 437
Name = 'cherry'
DataSetList =
'438,452,536,620,704,788,1143,1179,1563,1647,1731,1839,1875,1851,' +
'1863,2060,2359,2443,2469,2620'
PluginInfo = {something}
end>
The only way I've found to capture the values of the variables ID, Name, and DataSetList that are stored in an 'item ... end' block is this (my approach):
Expression
ID[\s\=]*(?P<UID>\d*)\s*Name[\s\=]*'(?P<Name>.*)'\s*DataSetList[\s\=]*(?P<DataSetList>(?:'[\d\,]*'[\s\+]*)*)
ID[\s\=]*(?P<UID>\d*) # capture ID
\s* # match spaces
Name[\s\=]*'(?P<Name>.*)' # capture Name
\s* # match spaces
DataSetList[\s\=]*(?P<DataSetList>(?:'[\d\,]*'[\s\+]*)*) # capture DataSetList
My approach output
{'UID': '872',
'Name': 'banana',
'DataSetList': "'873,887,971,1055'\n "}
{'UID': '437',
'Name': 'cherry',
'DataSetList': "'438,452,536,620,704,788,1143,1179,1563,1647,1731,1839,1875,1851,' +\n '1863,2060,2359,2443,2469,2620'\n "}
Problem
I don't think my approach is good, because the named capturing group DataSetList also captures spaces, line breaks, and the literal +, and so the values require post-processing.
Any approaches or ideas to improve my regular expression would be very helpful. Unfortunately, my knowledge of regex isn't as deep as I would like it to be. It's very interesting to see how this is done in other ways.
You can improve the regex a bit.
ID[\s=]*(?P<UID>\d*)\s*Name[\s=]*'(?P<Name>.*)'\s*DataSetList[\s=]*(?P<DataSetList>'(?:[\d,]|'[\s+]*')*')
This gets rid of the unnecessary = and , escapes. The last part now won't match the whitespace after the final bit of the DataSetList.
I can't see a nice way to avoid having to post-process the DataSetList, if you stick to regular expressions.
If you need to do anything more complicated with this, I'd advise moving away from regexes. They are great for simple things, but it looks like in this case you'd be better off with a proper parser. If none already exists for the language you have here, you can use a parsing library such as Lark to create one without too much difficulty.
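For illustration, here is a minimal sketch of what a Lark grammar for this format could look like; everything below (the rule names, the PLUGIN terminal for the {something} values) is my own guess based solely on the sample input above:

from lark import Lark

grammar = r"""
    start: section+
    section: NAME "=" "<" item* ">"
    item: "item" field* "end"
    field: NAME "=" value
    value: NUMBER | strings | PLUGIN
    strings: STRING ("+" STRING)*    // covers the multi-line quoted lists
    NAME: /[A-Za-z_][A-Za-z0-9_]*/
    STRING: /'[^']*'/
    NUMBER: /\d+/
    PLUGIN: /\{[^}]*\}/
    %import common.WS
    %ignore WS
"""

parser = Lark(grammar)
tree = parser.parse(text)  # 'text' holds the input from the question

From the resulting tree you can walk the item nodes and read off ID, Name, and DataSetList fields without any regex post-processing.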
Guys, I have the input string +00000000995510.32 and I need to remove the + sign and the leading zeros; my output number should be 995510.32.
Is there a regular expression to do this in regexp_replace?
My current code:
df.withColumn("vl_fat",regexp_replace(col("vl_fat"),"^([0-9]|[1-9][0-9])$+", ""))
but that didn't work.
If you want to practise regex, try https://regex101.com/. The pattern you describe starts with one + and then zero or more 0s, which in Python regex would be [+][0]*. You also need to consider regex lookaheads, which can get a little weird. This should work, however:
(?![+])(?![0]).*
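As a quick sanity check in plain Python (Spark's regexp functions use the Java regex engine, but lookaheads behave the same way there):

import re

m = re.search(r'(?![+])(?![0]).*', '+00000000995510.32')
print(m.group())  # prints: 995510.32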
You can use the regex \+0+ to catch the leading +000...
Explanation from regex101:
\+ matches the character + literally (case sensitive)
0 matches the character 0 literally (case sensitive)
+ matches the previous token between one and unlimited times, as many times as possible, giving back as needed (greedy)
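Applied to your regexp_replace call, that could look like the following sketch (assuming the same df and vl_fat column from your snippet; I anchor the pattern with ^ so only the leading run is removed):

from pyspark.sql.functions import col, regexp_replace

df = df.withColumn("vl_fat", regexp_replace(col("vl_fat"), r"^\+0+", ""))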
My two cents: you can use regexp_extract (which seems to suit your use case better) and convert the input string into a float:
from pyspark.sql import functions as F, types as T
df = spark.createDataFrame(
    [('+00000000995510.32',),
     ('34.32',),
     ('+00000.34',),
     ('+0444444',),
     ('9.',)],
    T.StructType([T.StructField('input_string', T.StringType())])
)
df.withColumn(
    'parsed_float',
    F.regexp_extract('input_string', r'^(\+0+|)(\d+(\.\d*|))$', 2).cast(T.FloatType())
).show()
This is what you get:
+------------------+------------+
| input_string|parsed_float|
+------------------+------------+
|+00000000995510.32| 995510.3|
| 34.32| 34.32|
| +00000.34| 0.34|
| +0444444| 444444.0|
| 9.| 9.0|
+------------------+------------+
For the regex:
(\+0+|): this captures the initial (optional) + followed by one or more 0
(\d+(\.\d*|)): this captures the whole figure, described as a sequence of digits followed by an (optional) sequence composed of a . followed by any number of decimals
The third argument of regexp_extract is the index of the group you are interested in; in this case it is the second group, i.e., (\d+(\.\d*|)).
Instead of regex, you might like to use TRIM. I find this easier to read, and it better conveys the intention of the code. Note that this will also remove any + signs directly after your leading zeros, since TRIM(LEADING '+0' FROM ...) treats '+0' as a set of characters to strip, not as a literal prefix.
import pyspark.sql.functions as F
df = spark.createDataFrame([('+00000000995510.32',)], ['number'])
df.withColumn('trimmed', F.expr("TRIM(LEADING '+0' FROM number)")).show()
+------------------+---------+
| number| trimmed|
+------------------+---------+
|+00000000995510.32|995510.32|
+------------------+---------+
Or if you want an actual number, you could simply cast it to float (or decimal). Note any value which cannot be cast will become NULL.
df.withColumn('trimmed', F.col('number').cast('float')).show()
+------------------+--------+
| number| trimmed|
+------------------+--------+
|+00000000995510.32|995510.3|
+------------------+--------+
I need to parse lines having multiple language codes, as below:
008800002 Bruxelles-Nord$Brüssel Nord$<deu>$Brussel Noord$<nld>
008800002 being an id
Bruxelles-Nord$Brüssel Nord$ being name one
deu being language one
$Brussel Noord$ being name two
nld being language two.
So, the idea is that the name and language can appear any number of times, and I need to collect them all.
the language in <> is 3 characters in length (fixed)
and all names end with a $ sign.
I tried this one, but it is not giving the expected output:
x = re.compile(r'(?P<stop_id>\d{9})\s(?P<authority>[[\x00-\x7F]{3}|\s{3}])\s(?P<stop_name>.*)(?P<lang_code>(?:[<]\S{0,4}))', flags=re.UNICODE)
I have no idea how to get repeated elements.
It takes Bruxelles-Nord$Brüssel Nord$<deu>$Brussel Noord$ as stop_name and <nld> as language.
Do it in two steps. First separate ID from name/language pairs; then use re.finditer on the name/language section to iterate over the pairs and stuff them into a dict.
import re

line = "008800002 Bruxelles-Nord$Brüssel Nord$<deu>$Brussel Noord$<nld>"
m = re.search(r"(\d+)\s+(.*)", line, re.UNICODE)
stop_id = m.group(1)
names = {}
for m in re.finditer(r"(.*?)<(.*?)>", m.group(2), re.UNICODE):
    names[m.group(2)] = m.group(1)  # language code -> name
print(stop_id, names)
\b(\d+)\b\s*|(.*?)(?=<)<(.*?)>
Try this. Just grab the captures; see the demo:
http://regex101.com/r/hS3dT7/4
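For illustration, grabbing the captures with finditer might look like this (my own sketch, using the sample line from the question):

import re

line = "008800002 Bruxelles-Nord$Brüssel Nord$<deu>$Brussel Noord$<nld>"
pattern = re.compile(r"\b(\d+)\b\s*|(.*?)(?=<)<(.*?)>")
for m in pattern.finditer(line):
    print(m.groups())
# ('008800002', None, None)
# (None, 'Bruxelles-Nord$Brüssel Nord$', 'deu')
# (None, '$Brussel Noord$', 'nld')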
I would like to fill regex variables with strings.
import re
hReg = re.compile("/robert/(?P<action>([a-zA-Z0-9]*))/$")
hMatch = hReg.match("/robert/delete/")
args = hMatch.groupdict()
The args variable is now a dict: {"action": "delete"}.
How can I reverse this process? Given the args dict and the regex pattern, how can I obtain the string "/robert/delete/"?
Is it possible to have a function like this?
def reverse(pattern, dictArgs):
Thank you
This function should do it:
import re

def reverse(regex, groupdict):
    replacer_regex = re.compile(r'''
        \(\?P\<      # match the opening (?P<
        (.+?)        # capture the group name into group 1
        \>\(.*?\)\)  # match the rest of the named group
        ''', re.VERBOSE)
    return replacer_regex.sub(lambda m: groupdict[m.group(1)], regex)
You basically match the (?P<name>(...)) block and replace it with a value from the dict.
EDIT: regex here is the regex pattern string from my example. You can get it from a compiled pattern via
regex_compiled.pattern
EDIT2: verbose regex added
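For example, with the pattern from the question (my own illustration; note that metacharacters outside the named groups, such as the trailing $, survive the substitution and would need stripping separately):

print(reverse(hReg.pattern, {"action": "delete"}))  # prints: /robert/delete/$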
Actually, I think it's doable for some narrow cases, but it's a pretty complex thing in the general case.
You'd need to write some sort of finite state machine that parses your regex string, splits it into its different parts, and then takes the appropriate action for each part:
For regular symbols, simply put the symbols as-is into the result string.
For named groups, put the values from dictArgs in their place.
For optional blocks, put one of their possible values.
And so on.
One regular expression can often match a large (or even infinite) set of strings, so this "reverse" function wouldn't be very useful.
Building upon #Dimitri's answer, more sanitisation is possible.
import re

retype = type(re.compile('hello, world'))

def reverse(ptn, groupdict):
    # accept either a compiled pattern object or a pattern string
    if isinstance(ptn, retype):
        ptn = ptn.pattern
    ptn = ptn.replace(r'\.', '.')  # un-escape literal dots
    replacer_regex = re.compile(r'''
        \(\?P       # Match the opening
        \<(.+?)\>   # Capture the group name
        (.*?)       # The group body
        \)          # Match the rest
        ''', re.VERBOSE)
    return replacer_regex.sub(lambda m: groupdict[m.group(1)], ptn)
Pythonistas:
Suppose you want to parse the following string using Pyparsing:
'ABC_123_SPEED_X 123'
where ABC_123 is an identifier, SPEED_X is a parameter, and 123 is a value. I thought of the following BNF using Pyparsing:
Identifier = Word( alphanums + '_' )
Parameter = Keyword('SPEED_X') | Keyword('SPEED_Y') | Keyword('SPEED_Z')
Value = # assume I already have an expression valid for any value
Entry = Identifier + Literal('_') + Parameter + Value
tokens = Entry.parseString('ABC_123_SPEED_X 123')
#Error: pyparsing.ParseException: Expected "_" (at char 16), (line:1, col:17)
If I remove the underscore from the middle (and adjust the Entry definition accordingly) it parses correctly.
How can I make this parser a bit lazier, so that it waits until it matches the Keyword (as opposed to slurping the entire string as an Identifier and then looking for the _, which does not exist)?
Thank you.
[Note: This is a complete rewrite of my question; I had not realized what the real problem was]
I based my answer off of this one, since what you're trying to do is get a non-greedy match. It seems like this is difficult to make happen in pyparsing, but not impossible with some cleverness and compromise. The following seems to work:
from pyparsing import *
Parameter = Literal('SPEED_X') | Literal('SPEED_Y') | Literal('SPEED_Z')
UndParam = Suppress('_') + Parameter
Identifier = SkipTo(UndParam)
Value = Word(nums)
Entry = Identifier + UndParam + Value
When we run this from the interactive interpreter, we can see the following:
>>> Entry.parseString('ABC_123_SPEED_X 123')
(['ABC_123', 'SPEED_X', '123'], {})
Note that this is a compromise; because I use SkipTo, the Identifier can be full of evil, disgusting characters, not just beautiful alphanums with the occasional underscore.
EDIT: Thanks to Paul McGuire, we can concoct a truly elegant solution by setting Identifier to the following:
Identifier = Combine(Word(alphanums) +
                     ZeroOrMore('_' + ~Parameter + Word(alphanums)))
Let's inspect how this works. First, ignore the outer Combine; we'll get to this later. Starting with Word(alphanums) we know we'll get the 'ABC' part of the reference string, 'ABC_123_SPEED_X 123'. It's important to note that we didn't allow the "word" to contain underscores in this case. We build that separately in to the logic.
Next, we need to capture the '_123' part without also sucking in '_SPEED_X'. Let's also skip over ZeroOrMore at this point and return to it later. We start with the underscore as a Literal, but we can shortcut with just '_', which will get us the leading underscore, but not all of '_123'. Instinctively, we would place another Word(alphanums) to capture the rest, but that's exactly what would get us in trouble by consuming all of the remaining '_123_SPEED_X'. Instead, we say, "So long as what follows the underscore is not the Parameter, parse that as part of my Identifier." We state that in pyparsing terms as '_' + ~Parameter + Word(alphanums). Since we assume we can have an arbitrary number of underscore + WordButNotParameter repeats, we wrap that expression in a ZeroOrMore construct. (If you always expect at least one underscore + WordButNotParameter following the initial word, you can use OneOrMore.)
Finally, we need to wrap the initial Word and the special underscore + Word repeats together so that it's understood they are contiguous, not separated by whitespace, so we wrap the whole expression up in a Combine construct. This way 'ABC _123_SPEED_X' will raise a parse error, but 'ABC_123_SPEED_X' will parse correctly.
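Dropping this Identifier into the earlier definitions, a quick interpreter check (same setup as above) shows the non-greedy behaviour we wanted:
>>> Identifier = Combine(Word(alphanums) + ZeroOrMore('_' + ~Parameter + Word(alphanums)))
>>> Entry = Identifier + UndParam + Value
>>> Entry.parseString('ABC_123_SPEED_X 123')
(['ABC_123', 'SPEED_X', '123'], {})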
Note also that I had to change Keyword to Literal because the ways of the former are far too subtle and quick to anger. I do not trust Keywords, nor could I get matching with them.
If you are sure that the identifier never ends with an underscore, you can enforce it in the definition:
from pyparsing import *
my_string = 'ABC_123_SPEED_X 123'
Identifier = Combine(Word(alphanums) + Literal('_') + Word(alphanums))
Parameter = Literal('SPEED_X') | Literal('SPEED_Y') | Literal('SPEED_Z')
Value = Word(nums)
Entry = Identifier + Literal('_').suppress() + Parameter + Value
tokens = Entry.parseString(my_string)
print(tokens)  # prints: ['ABC_123', 'SPEED_X', '123']
If it's not the case but if the identifier length is fixed you can define Identifier like this:
Identifier = Word( alphanums + '_' , exact=7)
You can also parse the identifier and parameter as one token, and split them in a parse action:
from pyparsing import *
import re
def split_ident_and_param(tokens):
    mo = re.match(r"^(.*?_.*?)_(.*?_.*?)$", tokens[0])
    return [mo.group(1), mo.group(2)]
ident_and_param = Word(alphanums + "_").setParseAction(split_ident_and_param)
value = Word(nums)
entry = ident_and_param + value
print(entry.parseString("APC_123_SPEED_X 123"))
The example above assumes that the identifiers and parameters always have the format XXX_YYY (containing one single underscore).
If this is not the case, you need to adjust the split_ident_and_param() method.
This answers a question that you have probably also asked yourself: "What's a real-world application for reduce?":
>>> from functools import reduce
>>> keys = ['CAT', 'DOG', 'HORSE', 'DEER', 'RHINOCEROS']
>>> p = reduce(lambda x, y: x | y, [Keyword(x) for x in keys])
>>> p
{{{{"CAT" | "DOG"} | "HORSE"} | "DEER"} | "RHINOCEROS"}
Edit:
This was a pretty good answer to the original question. I'll have to work on the new one.
Further edit:
I'm pretty sure you can't do what you're trying to do. The parser that pyparsing creates doesn't do lookahead. So if you tell it to match Word(alphanums + '_'), it's going to keep matching characters until it finds one that's not a letter, number, or underscore.
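A quick interpreter check illustrating that greedy behaviour:
>>> from pyparsing import Word, alphanums
>>> Word(alphanums + '_').parseString('ABC_123_SPEED_X 123')
(['ABC_123_SPEED_X'], {})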