pyparsing conditional parser - python

I need to parse the following three lines:
Uptime is 1w2d
Last reset at 23:05:56
Reason: reload
But the last two lines are not always there; the output could look like this prior to the first reboot:
Uptime is 1w2d
Last reset
My parser looks like this:
parser = SkipTo(Literal('is'), include=True)('uptime') + \
    delimitedList(Suppress(SkipTo(Literal('at'), include=True))('reset') +
                  SkipTo(Literal(':'), include=True) +
                  SkipTo(lineEnd)('reason'), combine=True)
It works in the first case with three lines, but it doesn't work in the second case.

For the file you've shown, I would use this syntax (assuming the order of the lines is relevant):
from pyparsing import Literal, Word, alphanums, nums, alphas, Optional, delimitedList

def createParser():
    firstLine = Literal('Uptime is') + Word(alphanums)
    secLine = Literal('Last reset at') + delimitedList(Word(nums) + Literal(':') + Word(nums) + Literal(':') + Word(nums))
    thirdLine = Literal('Reason:') + Word(alphas)
    return firstLine + secLine + Optional(thirdLine)

if __name__ == '__main__':
    parser = createParser()
    firstText = """Uptime is 1w2d\n
Last reset at 23:05:56\n
Reason: reload"""
    print(parser.parseString(firstText))
By declaring a parsing element Optional, you let the parser skip it when it is not present, without raising any errors.
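If you also want the time and the Reason: line themselves to be optional, so that the two-line pre-reboot output parses with the same grammar, a minimal sketch along the same lines would be (the results names such as reset_time are just illustrative, not from the original parser):

from pyparsing import Literal, Word, alphanums, nums, Optional, Suppress

uptime = Literal('Uptime is') + Word(alphanums)('uptime')
timeOfDay = Word(nums) + ':' + Word(nums) + ':' + Word(nums)
reset = Literal('Last reset') + Optional(Suppress('at') + timeOfDay('reset_time'))
reason = Optional(Literal('Reason:') + Word(alphanums)('reason'))
parser = uptime + reset + reason

print(parser.parseString('Uptime is 1w2d\nLast reset at 23:05:56\nReason: reload'))
print(parser.parseString('Uptime is 1w2d\nLast reset'))

Both calls should succeed, because everything after 'Last reset' is wrapped in Optional.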


Python: Remove character only from end of string if character is ="/"

I add different values to the Houdini variables with Python.
Some of these variables are file paths and end with a "/" - others are just names and do not end with a "/".
In my current code I use [:-1] to remove the last character of the file path, so I don't have the "/".
The problem is that if I add a value like "Var_ABC", the result will be "Var_AB", since it also removes the last character.
How can I remove the last character only if the last character is a "/"?
That's what I have, and it works so far:
def set_vars():
    count = hou.evalParm('vars_names')
    user_name = hou.evalParm('user_name')
    for idx in range(1, count + 1):
        output = hou.evalParm('vars_' + str(idx))
        vars_path_out = hou.evalParm('vars_path_' + str(idx))
        vars_path = vars_path_out[:-1]
        hou.hscript("setenv -g " + output + "=" + vars_path)
        final_vars = hou.hscript("setenv -g " + output + "=" + vars_path)
    hou.ui.displayMessage(user_name + ", " + "all variables are set.")
Thank you
As @jasonharper mentioned in the comments, you should probably use rstrip here. It is built-in and IMO more readable than the conditional one-liner:
vars_path_out.rstrip('/')
This will strip the trailing / from strings that end with one and return the rest; otherwise it will return your string as-is.
Try this in your code:
vars_path_out = hou.evalParm('vars_path_' + str(idx))
if vars_path_out[-1] == '/':
    vars_path = vars_path_out[:-1]
else:
    vars_path = vars_path_out
Or, based on the comment of jasonharper:
vars_path = vars_path_out.rstrip('/')
This is much better than the first approach.
Use the endswith method to check if it ends with /:
if vars_path_out.endswith('/'):
Or simply check the last character:
if vars_path_out[-1] == '/':
Like this:
vars_path = vars_path_out[:-1] if vars_path_out.endswith('/') else vars_path_out
Or like this:
if vars_path_out.endswith('/'):
    vars_path = vars_path_out[:-1]
else:
    vars_path = vars_path_out
Another way is the rstrip method:
vars_path = vars_path_out.rstrip('/')
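One detail worth knowing when choosing between the two: rstrip('/') removes every trailing slash, not just one, while the endswith/[:-1] form removes at most one character:

path = 'Var_ABC//'
path.rstrip('/')                             # 'Var_ABC'
path[:-1] if path.endswith('/') else path    # 'Var_ABC/'

For the typical single trailing slash the two behave the same.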

How to use pyparsing LineStart?

I'm trying to use pyparsing to parse key:value pairs from the comments in a document. A key starts at the beginning of a line, and a value follows. Values may be continued on multiple lines that begin with whitespace.
import pyparsing as pp
instring = """
-- This is (a) #%^& comment
/*
name1: val
name2: val2 with $*&##) junk
name3: val3: with #)(*% multi-
line: content
*/
"""
comment1 = pp.Literal("--") + pp.originalTextFor(pp.SkipTo(pp.LineEnd())).setDebug()
identifier = pp.Word(pp.alphanums + "_").setDebug()
meta1 = pp.LineStart() + identifier + pp.Literal(":") + pp.SkipTo(pp.LineEnd())
meta2 = pp.LineStart() + pp.White() + pp.SkipTo(pp.LineEnd())
metaval = meta1 + pp.ZeroOrMore(meta2)
metalist = pp.ZeroOrMore(comment1) + pp.Literal("/*") + pp.OneOrMore(metaval) + pp.Literal("*/")
if __name__ == "__main__":
    p = metalist.parseString(instring)
    print(p)
Fails with:
Matched {Empty SkipTo:(LineEnd) Empty} -> ['This is (a) #%^& comment']
File "C:\Users\user\py3\lib\site-packages\pyparsing.py", line 2305, in parseImpl
raise ParseException(instring, loc, self.errmsg, self)
pyparsing.ParseException: Expected start of line (at char 32), (line:4, col:1)
The answer to pyparsing whitespace match issues says
LineStart has always been difficult to work with, but ...
If the parser is at line 4 column 1 (the first key:value pair), then why is it not finding a start of line? What is the correct pyparsing syntax to recognize lines beginning with no whitespace and lines beginning with whitespace?
I think the confusion I have with LineStart is that, for LineEnd, I can look for a '\n' character, but there is no separate character for LineStart. So in LineStart I look to see if the current parser location is positioned just after a '\n'; or if it is currently on a '\n', move past it and still continue. Unfortunately, I implemented this in a place that messes up the reporting location, so you get those weird errors that read like "failed to find a start of line on line X col 1," which really does sound like it should be a successfully matched start of a line. Also, I think I need to revisit this implicit newline-skipping, or for that matter, all whitespace-skipping in general for LineStart.
For now, I've gotten your code to work by expanding your line-starting expression slightly, as:
LS = pp.Optional(pp.LineEnd()) + pp.LineStart()
and replaced the LineStart references in meta1 and meta2 with LS:
comment1 = pp.Literal("--") + pp.originalTextFor(pp.SkipTo(pp.LineEnd())).setDebug()
identifier = pp.Word(pp.alphanums + "_").setDebug()
meta1 = LS + identifier + pp.Literal(":") + pp.SkipTo(pp.LineEnd())
meta2 = LS + pp.White() + pp.SkipTo(pp.LineEnd())
metaval = meta1 + pp.ZeroOrMore(meta2)
metalist = pp.ZeroOrMore(comment1) + pp.Literal("/*") + pp.OneOrMore(metaval) + pp.Literal("*/")
If this situation with LineStart leaves you uncomfortable, here is another tactic you can try: using a parse-time condition to only accept identifiers that start in column 1:
comment1 = pp.Literal("--") + pp.originalTextFor(pp.SkipTo(pp.LineEnd())).setDebug()
identifier = pp.Word(pp.alphanums + "_").setName("identifier")
identifier.addCondition(lambda instring,loc,toks: pp.col(loc,instring) == 1)
meta1 = identifier + pp.Literal(":") + pp.SkipTo(pp.LineEnd()).setDebug()
meta2 = pp.White().setDebug() + pp.SkipTo(pp.LineEnd()).setDebug()
metaval = meta1 + pp.ZeroOrMore(meta2, stopOn=pp.Literal('*/'))
metalist = pp.ZeroOrMore(comment1) + pp.Literal("/*") + pp.LineEnd() + pp.OneOrMore(metaval) + pp.Literal("*/")
This code does away with LineStart completely, while I figure out just what I want this particular token to do. I also had to modify the ZeroOrMore repetition in metaval so that */ would not be accidentally processed as continued comment content.
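If it helps to see what that condition is checking, pp.col simply maps a parse location back to its 1-based column number in the input string, so the condition accepts an identifier only when it starts in column 1. A quick illustration (the sample string is arbitrary):

import pyparsing as pp

s = "name1: val\n  name2: val2"
print(pp.col(0, s))    # 1 -> start of the first line
print(pp.col(13, s))   # 3 -> 'name2' starts at column 3 of the second line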
Thanks for your patience with this - I am not keen to quickly put out a patched LineStart change and then find that I have overlooked compatibility issues or other edge cases that just put me back in the current less-than-great state of this class. But I'll put some effort into clarifying this behavior before putting out 2.1.10.

Replace single quotes with double quotes in python, for use with insert into database

Was wondering whether anyone has a clever solution for fixing bad insert statements in Python, exported by a not-so-clever program. It didn't add two single quotes for strings with a single quote in the string. To make it a bit easier, all the values being inserted are strings.
So it has:
INSERT INTO addresses VALUES ('1','1','CUCKOO'S NEST','CUCKOO'S NEST STREET');
instead of:
INSERT INTO addresses VALUES ('1','1','CUCKOO''S NEST','CUCKOO''S NEST STREET');
Obviously there are multiple lines of this, and I don't want to replace the enclosing single quotes as well.
I was thinking of using split and join, but I'm not sure how to easily update the split values while looping in a loop. Sorry, I'm a noob. Something like the below, where I'm not sure how to do the #update bit:
import sys

fileIN = open('a.sql', "r")
line = fileIN.readline()
while line:
    bits = line.split("','")
    for bit in bits:
        if bit.find("'") > -1:
            pass  # update bit
    line_out = "','".join(bits)
    sys.stdout.write(line_out)
    line = fileIN.readline()
Thanks
Based on katrielalex's suggestion, how about this:
>>> import re
>>> s = "INSERT INTO addresses VALUES ('1','1','CUCKOO'S NEST','CUCKOO'S NEST STREET');"
>>> def repl(m):
        if m.group(1) in ('(', ',') or m.group(2) in (',', ')'):
            return m.group(0)
        return m.group(1) + "''" + m.group(2)
>>> re.sub("(.)'(.)", repl, s)
"INSERT INTO addresses VALUES ('1','1','CUCKOO''S NEST','CUCKOO''S NEST STREET');"
and if you're into negative lookbehind assertions, this is the headache inducing pure regex version:
re.sub("((?<![(,])'(?![,)]))", "''", s)
while line:
    # Restrain line2 to inside the parentheses
    line1, rest = line.split('(')
    line2, line3 = rest.split(')')
    # A bit cleaner
    new_bits = []
    for bit in line2.split(','):
        # Remove the border ' characters
        bit = bit[1:-1]
        # Duplicate the ones inside
        if "'" in bit:
            bit = bit.replace("'", "''")
        # Re-add the border '
        new_bits.append("'" + bit + "'")
    sys.stdout.write(line1 + '(' + ','.join(new_bits) + ')' + line3)
    line = fileIN.readline()
Warning: This depends way too much on the formatting of the SQL statement. However, if your input is only ever going to have the format "statements (params) end" then this will work every time.
import sys

fileIN = open('a.sql', "r")
line = fileIN.readline()
while line:
    # split out the parameters (between the ()'s)
    start, temp = line.split("(")
    params, end = temp.split(")")
    # replace the "'"s in the parameters (without the start and end quote)
    newParams = "','".join([x.replace("'", "''") for x in params[1:-1].split("','")])
    # join the statement back together
    line_out = start + "('" + newParams + "')" + end
    # next line
    sys.stdout.write(line_out)
    line = fileIN.readline()
Explanation:
Split the string into three parts: the query start, the parameters, and the end.
The list comprehension takes the parameters (without the starting/ending 's), splits them on "','", and, for every element the split generates (the individual data entries), replaces each ' with ''.
The last line then joins the query start, the new params (with the parentheses and quotes that were removed previously), and the end of the statement.
Another answer:
a = "INSERT INTO addresses VALUES ('1','1','CUCKOO'S NEST','CUCKOO'S NEST STREET');"
open_par = a.find("(")
close_par = a.find(")")
b = a[open_par+1:close_par]
c = b.split(",")
d = map(lambda x: '"' + x.strip().strip("'") + '"',c)
result = a[:open_par+1] + ",".join(d) + a[close_par:]
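Note that this variant changes the quoting style rather than doubling the embedded single quotes: on the sample line it should produce something like

INSERT INTO addresses VALUES ("1","1","CUCKOO'S NEST","CUCKOO'S NEST STREET");

which is only useful if the target database accepts double-quoted string literals.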
Went with:
import sys
import re

def repl(m):
    if m.group(1) in ('(', ',') or m.group(2) in (',', ')'):
        return m.group(0)
    return m.group(1) + "''" + m.group(2)

fileIN = open('a.sql', "r")
line = fileIN.readline()
while line:
    line_out = re.sub("(.)'(.)", repl, line)
    sys.stdout.write(line_out)
    # Next line.
    line = fileIN.readline()

Python RegEx Woes

I'm not sure why this isn't working:
import re
import csv
def check(q, s):
    match = re.search(r'%s' % q, s, re.IGNORECASE)
    if match:
        return True
    else:
        return False
tstr = []
# test strings
tstr.append('testthisisnotworking')
tstr.append('This is a TEsT')
tstr.append('This is a TEST mon!')
f = open('testwords.txt', 'rU')
reader = csv.reader(f)
for type, term, exp in reader:
    for i in range(2):
        if check(exp, tstr[i]):
            print exp + " hit on " + tstr[i]
        else:
            print exp + " did NOT hit on " + tstr[i]
f.close()
testwords.txt contains this line:
blah, blah, test
So essentially 'test' is the RegEx pattern. Nothing complex, just a simple word. Here's the output:
test did NOT hit on testthisisnotworking
test hit on This is a TEsT
test hit on This is a TEST mon!
Why does it NOT hit on the first string? I also tried \s*test\s* with no luck. Help?
By default, the csv module preserves the blank spaces around words in the input (this can be changed by using a different "dialect"). So exp contains " test" with a leading space.
A quick way to fix this would be to add:
exp = exp.strip()
after you read from the CSV file.
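Alternatively, you can fix it at the reader instead of stripping afterwards: csv.reader accepts a skipinitialspace argument that ignores the whitespace immediately following each delimiter:

reader = csv.reader(f, skipinitialspace=True)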
Adding a print repr(exp) to the top of the first for loop shows that exp is ' test', note the leading space.
This isn't that surprising since csv.reader() splits on commas, try changing your code to the following:
for type, term, exp in reader:
    exp = exp.strip()
    for s in tstr:
        if check(exp, s):
            print exp + " hit on " + s
        else:
            print exp + " did NOT hit on " + s
Note that in addition to the strip() call, which removes the leading and trailing whitespace, I changed your second for loop to loop directly over the strings in tstr instead of over a range. There was actually a bug in your current code: tstr contains three values, but you only checked the first two, because for i in range(2) only gives you i=0 and i=1.

Pyparsing 'no such attribute _ParseResults__tokdict' on multi-line inputs

The following code gives me the error 'no such attribute _ParseResults__tokdict' when run on an input with more than one line.
With single-line files, there is no error. If I comment out either the second or third line shown here, then I don't get that error either, no matter how long the file is.
for line in input:
    final = delimitedList(expr).parseString(line)
    notid = delimitedList(notid).parseString(line)
    dash_tags = ', '.join(format_tree(notid))
    print final.lineId + ": " + dash_tags
Does anyone know what's going on here?
EDIT: As suggested, I'm adding the complete code to allow others to reproduce the error.
from pyparsing import *
#first are the basic elements of the expression
#number at the beginning of the line, unique for each line
#top-level category for a sentiment
#semicolon should eventually become a line break
lineId = Word(nums)
topicString = Word(alphanums+'-'+' '+"'")
semicolon = Literal(';')
#call variable early to allow for recursion
#recursive function allowing for a line id at first, then the topic,
#then any subtopics, and so on. Finally, optional semicolon and repeat.
#set results name lineId.lineId here
expr = Forward()
expr << Optional(lineId.setResultsName("lineId")) + topicString.setResultsName("topicString") + \
        Optional(nestedExpr(content=delimitedList(expr))).setResultsName("parenthetical") + \
        Optional(Suppress(semicolon).setResultsName("semicolon") + expr.setResultsName("subsequentlines"))

notid = Suppress(lineId) + topicString + \
        Optional(nestedExpr(content=delimitedList(expr))) + \
        Optional(Suppress(semicolon) + expr)
#naming the parenthetical portion for independent reference later
parenthetical = nestedExpr(content=delimitedList(expr))
#open files for read and write
input = open('parserinput.txt')
output = open('parseroutput.txt', 'w')
#defining functions
#takes nested list output of parser grammer and translates it into
#strings suited for the final output
def format_tree(tree):
    prefix = ''
    for node in tree:
        if isinstance(node, basestring):
            prefix = node
            yield node
        else:
            for elt in format_tree(node):
                yield prefix + '_' + elt
#function for passing tokens from setResultsName
def id_number(tokens):
    #print tokens.dump()
    lineId = tokens
    lineId["lineId"] = lineId.lineId

def topic_string(tokens):
    topicString = tokens
    topicString["topicString"] = topicString.topicString

def parenthetical_fun(tokens):
    parenthetical = tokens
    parenthetical["parenthetical"] = parenthetical.parenthetical
#function for splitting line at semicolon and appending numberId
#not currently in use
def split_and_prepend(tokens):
    return '\n' + final.lineId
#setting parse actions
lineId.setParseAction(id_number)
topicString.setParseAction(topic_string)
parenthetical.setParseAction(parenthetical)
#reads each line in the input file
#calls the grammar expressed in 'expr' and uses it to read the line and assign names to the tokens for later use
#calls the 'notid' variant to easily return the other elements in the line aside from the lineId
#applies the format tree function and joins the tokens in a comma-separated string
#prints the lineId + the tokens from that line
for line in input:
    final = delimitedList(expr).parseString(line)
    notid = delimitedList(notid).parseString(line)
    dash_tags = ', '.join(format_tree(notid))
    print final.lineId + ": " + dash_tags
The input file is a txt document with the following two lines:
1768 dummy; data
1768 dummy data; price
Reassigning notid breaks the second iteration when it is used in delimitedList. Your third line destroys the notid expression defined earlier in the code, so it only works on the first iteration. Use a different name for the notid assignment.
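For reference, a minimal sketch of that fix in the final loop (the name notid_result is just illustrative): keep the grammar object and the per-line parse result in separate variables:

for line in input:
    final = delimitedList(expr).parseString(line)
    notid_result = delimitedList(notid).parseString(line)  # don't overwrite the notid grammar
    dash_tags = ', '.join(format_tree(notid_result))
    print final.lineId + ": " + dash_tags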
