Extract the data specified in brackets '[ ]' from a string message in Python

I want to extract fields from below Log message.
Example:
Ignoring entry, Affected columns [column1:column2], reason[some reason], Details[some entry details]
I need to extract the data inside the brackets [ ] for "Affected columns", "reason", and "Details".
What would be an efficient way to extract these fields in Python?
Note: I can modify the log message format if needed.

If you are free to change the log format, it's easiest to use a common data format - I'd recommend JSON for such data. It is structured, but lightweight enough to write even from custom bash scripts. The json module allows you to convert it directly to native Python objects:
import json # python has a default parser
# assume this is your log message
log_line = '{"Ignoring entry" : {"Affected columns": [1, 3], "reason" : "some reason", "Details": {}}}'
data = json.loads(log_line)
print("Columns to ignore:", data["Ignoring entry"]["Affected columns"])
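The writing side of this round-trip is just as simple; a minimal sketch (the key names here mirror the example above):

```python
import json

# build the log record as a plain dict, then serialize it to one line
entry = {"Ignoring entry": {"Affected columns": [1, 3],
                            "reason": "some reason",
                            "Details": {}}}
log_line = json.dumps(entry)

# the round-trip gives back an equivalent structure
assert json.loads(log_line)["Ignoring entry"]["Affected columns"] == [1, 3]
```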
If you want to work with the current format, you'll have to work with str methods or the re module.
For example, you could do this:
log_msg = "Ignoring entry, Affected columns [column1:column2], reason[some reason], Details[some entry details]"
def parse_log_line(log_line):
    if log_line.startswith("Ignoring entry"):
        log_data = {}
        for element in log_line.split(',')[1:]:  # parse all elements but the header
            key, _, value = element.partition('[')
            if not value or value[-1] != ']':
                raise ValueError('Malformed content. Expected %r to end with "]"' % element)
            log_data[key.strip()] = value[:-1]
        return log_data
    raise ValueError('Unrecognized log line type')
Many parsing tasks are handled most compactly by the re module. It allows you to use regular expressions. They are very powerful, but difficult to maintain if you are not used to them. In your case, the following would work:
import re

log_data = {key: value for key, value in re.findall(r',\s*(.+?)\s*\[(.+?)\]', log_line)}
The regex works like this:
, a literal comma, separating your entries
\s* an arbitrary amount of whitespace after the comma, before the next element
(.+?) the shortest possible sequence of characters (the key, captured via '()')
\s* an arbitrary amount of whitespace between key and value
\[ a literal [
(.+?) the shortest possible sequence of characters before the closing bracket (the value, captured via '()')
\] a literal ]
The symbols * and + mean "zero or more" and "one or more" repetitions, and a ? after them means "as few as possible" (non-greedy).
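Putting the pieces together, a quick check of this pattern against the example line:

```python
import re

log_line = ("Ignoring entry, Affected columns [column1:column2], "
            "reason[some reason], Details[some entry details]")

# each ", key [value]" or ", key[value]" pair becomes one dict entry
log_data = {key: value
            for key, value in re.findall(r',\s*(.+?)\s*\[(.+?)\]', log_line)}
print(log_data)
# {'Affected columns': 'column1:column2', 'reason': 'some reason',
#  'Details': 'some entry details'}
```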

Related

Python regex parse file name with underscore separated fields

I have the following format which parameterises a file name.
"{variable}_{domain}_{GCMsource}_{scenario}_{member}_{RCMsource}_{RCMversion}_{frequency}_{start}-{end}_{fid}.nc"
e.g.
"pr_EUR-11_CNRM-CERFACS-CNRM-CM5_rcp45_r1i1p1_CLMcom-CCLM4-8-17_v1_day_20060101-20101231.nc"
(Note that {start}-{end} is meant to be hyphen separated instead of underscore separated.)
The various fields are always separated by underscores and contain a predictable (but variable) format. In the example file name I have left out the final {fid} field as I would like that to be optional.
I'd like to use regex in python to parse such a file name to give me a dict or similar with keys for the field names in the format string and the corresponding values of the parsed file name. e.g.
{
    "variable": "pr",
    "domain": "EUR-11",
    "GCMsource": "CNRM-CERFACS-CNRM-CM5",
    "scenario": "rcp45",
    "member": "r1i1p1",
    "RCMsource": "CLMcom-CCLM4-8-17",
    "RCMversion": "v1",
    "frequency": "day",
    "start": "20060101",
    "end": "20101231",
    "fid": None
}
The regex pattern for each field can be constrained depending on the field, e.g.:
"domain" is always 3 letters, a hyphen, then 2 numbers.
"member" is always rWiXpY, where W, X and Y are numbers.
"scenario" is always the letters "rcp" followed by 2 numbers.
"start" and "end" are always 8-digit numbers (YYYYMMDD).
There are never underscores within a field, underscores are only used to separate fields.
Note that I have used https://github.com/r1chardj0n3s/parse with some success but I don't think it is flexible enough for my needs (trying to parse other similar filenames with similar formats can often get confused with one another).
It would be great if the answer can explain some regex principles which will allow me to do this.
Documentation for regular expressions in Python: https://docs.python.org/3/howto/regex.html#regex-howto
Named groups in Python regular expressions:
https://docs.python.org/3/howto/regex.html#non-capturing-and-named-groups
import re
test_string = """pr_EUR-11_CNRM-CERFACS-CNRM-CM5_rcp45_r1i1p1_CLMcom-CCLM4-8-17_v1_day_20060101-20101231.nc"""
pattern = r"""
(?P<variable>\w+)_
(?P<domain>[a-zA-Z]{3}-\d{2})_
(?P<GCMsource>([A-Z0-9]+[-]?)+)_
(?P<scenario>rcp\d{2})_
(?P<member>([rip]\d)+)_
(?P<RCMsource>([a-zA-Z0-9]-?)+)_
(?P<RCMversion>[a-zA-Z0-9]+)_
(?P<frequency>[a-zA-Z-0-9]+)_
(?P<start>\d{8})-
(?P<end>\d{8})
_?
(?P<fid>[a-zA-Z0-9]+)?
\.nc
"""
re_object = re.compile(pattern, re.VERBOSE) # we use VERBOSE flag
search_result = re_object.match(test_string)
print(search_result.groupdict())
# result:
"""
{'variable': 'pr', 'domain': 'EUR-11', 'GCMsource': 'CNRM-CERFACS-CNRM-CM5', 'scenario': 'rcp45', 'member': 'r1i1p1', 'RCMsource': 'CLMcom-CCLM4-8-17', 'RCMversion': 'v1', 'frequency': 'day', 'start': '20060101', 'end': '20101231', 'fid': None}
"""
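To confirm the optional fid group, the same pattern can be run against a hypothetical file name that does include a trailing fid field (the value abc123 is made up for illustration):

```python
import re

pattern = r"""
(?P<variable>\w+)_
(?P<domain>[a-zA-Z]{3}-\d{2})_
(?P<GCMsource>([A-Z0-9]+[-]?)+)_
(?P<scenario>rcp\d{2})_
(?P<member>([rip]\d)+)_
(?P<RCMsource>([a-zA-Z0-9]-?)+)_
(?P<RCMversion>[a-zA-Z0-9]+)_
(?P<frequency>[a-zA-Z-0-9]+)_
(?P<start>\d{8})-
(?P<end>\d{8})
_?
(?P<fid>[a-zA-Z0-9]+)?
\.nc
"""
re_object = re.compile(pattern, re.VERBOSE)

# hypothetical name with the optional fid present
with_fid = ("pr_EUR-11_CNRM-CERFACS-CNRM-CM5_rcp45_r1i1p1_"
            "CLMcom-CCLM4-8-17_v1_day_20060101-20101231_abc123.nc")
print(re_object.match(with_fid).groupdict()["fid"])
# abc123
```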

Parsing Regular expression from YAML file adds extra \

I have a bunch of regular expressions I am using to scrape a lot of specific fields from a text document. They all work fine when used directly inside the Python script.
But I thought of putting them in a YAML file and reading from there. Here's how it looks:
# Document file for Regular expression patterns for a company invoice
---
issuer: ABCCorp
fields:
  invoice_number: INVOICE\s*(\S+)
  invoice_date: INVOICE DATE\s*(\S+)
  cusotmer_id: CUSTOMER ID\s*(\S+)
  origin: ORIGIN\s*(.*)ETD
  destination: DESTINATION\s*(.*)ETA
  sub_total: SUBTOTAL\s*(\S+)
  add_gst: SUBTOTAL\s*(\S+)
  total_cost: TOTAL USD\s*(\S+)
  description_breakdown: (?s)(DESCRIPTION\s*GST IN USD\s*.+?TOTAL CHARGES)
  package_details_fields: (?s)(WEIGHT\s*VOLUME\s*.+?FLIGHT|ROAD REFERENCE)
  mawb_hawb: (?s)((FLIGHT|ROAD REFERENCE).*(MAWB|MASTER BILL)\s*.+?GOODS COLLECTED FROM)
When I retrieve it using PyYAML in Python, it adds string quotes around each value (which is OK, as I can add r'' later), but I see it also adds an extra \ inside the regex. That would make the regex go wrong when used in code.
import os
import yaml

with open(os.path.join(TEMPLATES_DIR, "regex_template.yml")) as f:
    my_dict = yaml.safe_load(f)
print(my_dict)
{'issuer': 'ABCCorp', 'fields': {'invoice_number': 'INVOICE\\s*(\\S+)', 'invoice_date': 'INVOICE DATE\\s*(\\S+)', 'cusotmer_id': 'CUSTOMER ID\\s*(\\S+)', 'origin': 'ORIGIN\\s*(.*)ETD', 'destination': 'DESTINATION\\s*(.*)ETA', 'sub_total': 'SUBTOTAL\\s*(\\S+)', 'add_gst': 'SUBTOTAL\\s*(\\S+)', 'total_cost': 'TOTAL USD\\s*(\\S+)', 'description_breakdown': '(?s)(DESCRIPTION\\s*GST IN USD\\s*.+?TOTAL CHARGES)', 'package_details_fields': '(?s)(WEIGHT\\s*VOLUME\\s*.+?FLIGHT|ROAD REFERENCE)', 'mawb_hawb'
How do I read the regex exactly as I have it in the YAML file? Does every string written in a YAML file get quotation marks around it when read into Python, because it is a string?
EDIT:
The main regex in yaml file is:
INVOICE\s*(\S+)
Output in dict is:
'INVOICE\\s*(\\S+)'
This is too long to do as a comment.
The backslash character is used to escape special characters. For example:
'\n': newline
'\a': alarm
When you use it before a letter that has no special meaning it is just taken to be a backslash character:
'\s': backslash followed by 's'
But to be sure, whenever you want to enter a backslash character in a string and not have it interpreted as the start of an escape sequence, you double it up:
'\\s': also a backslash followed by a 's'
'\\a': a backslash followed by a 'a'
If you use a r'' type literal, then a backslash is never interpreted as the start of an escape sequence:
r'\a': a backslash followed by 'a' (not an alarm character)
r'\n': a backslash followed by 'n' (not a newline -- however, when used in a regex, it will match a newline)
Now here is the punchline:
When you print out these Python objects, such as:
d = {'x': 'ab\sd'}
print(d)
Python will print the string representation of the dictionary, and the string will print as 'ab\\sd'. If you just did:
print('ab\sd')
you would see ab\sd. Quite a difference.
Why the difference? See if this makes sense:
d = {'x': 'ab\ncd'}
print(d)
print('ab\ncd')
Results:
{'x': 'ab\ncd'}
ab
cd
The bottom line is that when you print a Python object other than a string, it prints a representation of the object showing how you would have created it. And if the object contains a string and that string contains a backslash, you would have doubled up on that backslash when entering it.
Update
To process your my_dict: Since you did not provide the complete value of my_dict, I can only use a truncated version for demo purposes. But this will demonstrate that my_dict has perfectly good regular expressions:
import re
my_dict = {'issuer': 'ABCCorp', 'fields': {'invoice_number': 'INVOICE\\s*(\\S+)', 'invoice_date': 'INVOICE DATE\\s*(\\S+)'}}
fields = my_dict['fields']
invoice_number_re = fields['invoice_number']
m = re.search(invoice_number_re, 'blah-blah INVOICE 12345 blah-blah')
print(m[1])
Prints:
12345
If you are going to be using the same regular expressions over and over again, then it is best to compile them:
import re
my_dict = {'issuer': 'ABCCorp', 'fields': {'invoice_number': 'INVOICE\\s*(\\S+)', 'invoice_date': 'INVOICE DATE\\s*(\\S+)'}}
#compile the strings to regular expressions
fields = my_dict['fields']
for k, v in fields.items():
    fields[k] = re.compile(v)
invoice_number_re = fields['invoice_number']
m = invoice_number_re.search('blah-blah INVOICE 12345 blah-blah')
print(m[1])

Python regex match anything enclosed in either quotations brackets braces or parenthesis

UPDATE
This is still not the entire solution. So far it only handles repeated closing characters (e.g. )), ]], }}). I'm still looking for a way to capture the enclosed contents and will update this.
Code:
>>> import re
>>> re.search(r'(\(.+?[?<!)]\))', '((x(y)z))', re.DOTALL).groups()
('((x(y)z))',)
Details:
r'(\(.+?[?<!)]\))'
() - Capturing group special characters.
\( and \) - The open and closing characters (e.g ', ", (), {}, [])
.+? - Match any character content (use with re.DOTALL flag)
[?<!)] - intended as a negative lookbehind for ) (the correct lookbehind syntax would be (?<!\)) ); as written, this is actually a character class matching any one of the characters ?, <, ! or ), which happens to work here because it can match the inner closing ) (more info here).
I was trying to parse something like a variable assignment statement for this lexer thing I'm working with, just trying to get the basic logic behind interpreters/compilers.
Here's the basic assignment statements and literals I'm dealing with:
az = none
az_ = true
az09 = false
az09_ = +0.9
az_09 = 'az09_'
_az09 = "az09_"
_az = [
"az",
0.9
]
_09 = {
0: az
1: 0.9
}
_ = (
true
)
Somehow, I managed to parse the simple assignments like none, true, false, and numeric literals. Here's where I'm currently stuck:
import sys
import re

# validate command-line arguments
if len(sys.argv) != 2:
    raise ValueError('usage: parse <script>')

# parse the variable name and its value
def handle_assignment(index, source):
    # TODO: handle quotations, brackets, braces, and parenthesis values
    variable = re.search(r'[\S\D]([\w]+)\s+?=\s+?(none|true|false|[-+]?\d+\.?\d+|[\'\"].*[\'\"])', source[index:])
    if variable is not None:
        print('{}={}'.format(variable.group(1), variable.group(2)))
        index += source[index:].index(variable.group(2))
    return index  # moved outside the if-block so a failed search cannot return None

# parse through the source element by element
with open(sys.argv[1]) as file:
    source = file.read()

index = 0
while index < len(source):
    # checks if the line matches a variable assignment statement
    if re.match(r'[\S\D][\w]+\s+?=', source[index:]):
        index = handle_assignment(index, source)
    index += 1
I was looking for a way to capture those values with enclosed quotations, brackets, braces, and parenthesis.
Probably will update this post if I find an answer.
Use a regexp with multiple alternatives for each matching pair.
re.match(r'\'.*?\'|".*?"|\(.*?\)|\[.*?\]|\{.*?\}', s)
Note, however, that if there are nested brackets, this will match the first ending bracket, e.g. if the input is
(words (and some more words))
the result will be
(words (and some more words)
Regular expressions are not appropriate for matching nested structures, you should use a more powerful parsing technique.
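As a minimal sketch of such a technique, a depth counter (a degenerate stack) is enough to pull out balanced groups for a single bracket pair; the function name here is made up:

```python
def extract_balanced(text, open_ch='(', close_ch=')'):
    """Collect top-level balanced groups by tracking nesting depth."""
    groups = []
    depth = 0
    start = None
    for i, ch in enumerate(text):
        if ch == open_ch:
            if depth == 0:
                start = i       # remember where the outermost group opens
            depth += 1
        elif ch == close_ch and depth > 0:
            depth -= 1
            if depth == 0:      # outermost group just closed
                groups.append(text[start:i + 1])
    return groups

print(extract_balanced('(words (and some more words))'))
# ['(words (and some more words))']
```

Unlike the non-greedy regex above, this correctly keeps nested brackets together, and it extends naturally to a real stack if you need to mix bracket types.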
Solution for @Barmar's recursive characters using the regex third-party module:
pip install regex
python3
>>> import regex
>>> recurParentheses = regex.compile(r'[(](?:[^()]|(?R))*[)]')
>>> recurParentheses.findall('(z(x(y)z)x) ((x)(y)(z))')
['(z(x(y)z)x)', '((x)(y)(z))']
>>> recurCurlyBraces = regex.compile(r'[{](?:[^{}]|(?R))*[}]')
>>> recurCurlyBraces.findall('{z{x{y}z}x} {{x}{y}{z}}')
['{z{x{y}z}x}', '{{x}{y}{z}}']
>>> recurSquareBrackets = regex.compile(r'[[](?:[^][]|(?R))*[]]')
>>> recurSquareBrackets.findall('[z[x[y]z]x] [[x][y][z]]')
['[z[x[y]z]x]', '[[x][y][z]]']
For string literal recursion, I suggest taking a look at this.

What's a better way to process inconsistently structured strings?

I have an output string like this:
read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec
And I want to just extract one of the numerical values for computation, say iops. I'm processing it like this:
if 'read ' in key:
my_read_iops = value.split(",")[2].split("=")[1]
result['test_details']['read'] = my_read_iops
But there are slight inconsistencies in some of the strings I'm reading in, and my code is getting super complicated and verbose. Instead of manually counting the number of commas vs "=" chars, what's a better way to handle this?
You can use the regular expression \s* to handle inconsistent spacing; it matches zero or more whitespace characters:
import re
s = 'read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec'
for m in re.finditer(r'\s*(?P<name>\w*)\s*=\s*(?P<value>[\w/]*)\s*', s):
print(m.group('name'), m.group('value'))
# io 131220KB
# bw 14016KB/s
# iops 3504
# runt 9362msec
Using group names, you can construct the pattern string from a list of column names:
names = ['io', 'bw', 'iops', 'runt']
name_val_pat = r'\s*{name}\s*=\s*(?P<{group_name}>[\w/]*)\s*'
pattern = ','.join([name_val_pat.format(name=name, group_name=name) for name in names])
# '\s*io\s*=\s*(?P<io>[\w/]*)\s*,\s*bw\s*=\s*(?P<bw>[\w/]*)\s*,\s*iops\s*=\s*(?P<iops>[\w/]*)\s*,\s*runt\s*=\s*(?P<runt>[\w/]*)\s*'
match = re.search(pattern, s)
data_dict = {name: match.group(name) for name in names}
print(data_dict)
# {'io': '131220KB', 'bw': '14016KB/s', 'runt': '9362msec', 'iops': '3504'}
In this way, you only need to change names and keep the order correct.
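Since the goal is a numerical value for computation, a hedged variation is to grab all key=value pairs in one pass and convert just the field you need (field names as in the sample line):

```python
import re

s = 'read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec'

# every "name=value" pair, tolerating optional spaces around '='
fields = dict(re.findall(r'(\w+)\s*=\s*([\w/]+)', s))
iops = int(fields['iops'])
print(iops)
# 3504
```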
If I were you, I'd use a regex (regular expression) as the first choice.
import re
s= "read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec"
re.search(r"iops=(\d+)",s).group(1)
This code finds the string pattern that starts with 'iops=' and continues with at least one digit, then extracts the target string (3504) using the capturing group (the round brackets).
You can find more information about regex at
https://docs.python.org/3.6/library/re.html#module-re
Regex is a powerful language for complex pattern matching with simple syntax.
from re import match

string = 'read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec'
iops = match(r'.+(iops=)([0-9]+)', string).group(2)
print(iops)  # 3504

How to fill a regex string with parameters

I would like to fill a regex's named groups with strings.
import re
hReg = re.compile("/robert/(?P<action>([a-zA-Z0-9]*))/$")
hMatch = hReg.match("/robert/delete/")
args = hMatch.groupdict()
args variable is now a dict with {"action":"delete"}.
How can I reverse this process? Given the args dict and the regex pattern, how can I obtain the string "/robert/delete/"?
Is it possible to have a function like this?
def reverse(pattern, dictArgs):
Thank you
This function should do it:
def reverse(regex, dict_args):
    replacer_regex = re.compile(r'''
        \(\?P\<      # Match the opening
        (.+?)        # Match the group name into group 1
        \>\(.*?\)\)  # Match the rest
        ''', re.VERBOSE)
    return replacer_regex.sub(lambda m: dict_args[m.group(1)], regex)
You basically match the (?P<name>...) block and replace it with a value from the dict.
EDIT: regex is the regex string in my example. You can get it from a compiled pattern via
regex_compiled.pattern
EDIT2: verbose regex added
Actually, I think it's doable for some narrow cases, but pretty complex "in the general case".
You'd need to write some sort of finite state machine that parses your regex string, splits it into parts, and takes the appropriate action for each part:
For regular symbols, put the symbols "as is" into the result string.
For named groups, put the values from dictArgs in their place.
For optional blocks, put some of their values.
And so on.
One regular expression can often match a big (or even infinite) set of strings, so this "reverse" function wouldn't be very useful.
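An alternative that sidesteps the general problem: keep a plain format template alongside the compiled regex and use str.format for the reverse direction. This is a sketch under the assumption that you control both strings; route_re and route_fmt are names made up here:

```python
import re

# the regex parses URLs; the template rebuilds them
route_re = re.compile(r"/robert/(?P<action>[a-zA-Z0-9]*)/$")
route_fmt = "/robert/{action}/"

def reverse(fmt, dict_args):
    return fmt.format(**dict_args)

m = route_re.match("/robert/delete/")
print(reverse(route_fmt, m.groupdict()))
# /robert/delete/
```

This avoids parsing regex syntax at all, at the cost of maintaining the two strings in sync.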
Building upon @Dimitri's answer, more sanitisation is possible.
retype = type(re.compile('hello, world'))

def reverse(ptn, dict_args):
    if isinstance(ptn, retype):
        ptn = ptn.pattern          # accept compiled patterns as well as strings
    ptn = ptn.replace(r'\.', '.')  # un-escape literal dots
    replacer_regex = re.compile(r'''
        \(\?P        # Match the opening
        \<(.+?)\>
        (.*?)
        \)           # Match the rest
        ''', re.VERBOSE)
    res = replacer_regex.sub(lambda m: dict_args[m.group(1)], ptn)
    return res