PLY Parse C Files for curly brace construct - python

I want to parse some C Code with PLY.
What I want to extract is the following:
{ARGUMENT1, ARGUMENT2, ARGUMENT3, ARGUMENT4}
This structure can be hidden in some more curly braces.
{SOME, RANDOM, STUFF {ARGUMENT1, ARGUMENT2, ARGUMENT3, ARGUMENT4}, SOME, MORE, RANDOM, STUFF }
Currently I am able to lex the structure I want to extract (ARGUMENT1, ARGUMENT2, ARGUMENT3, ARGUMENT4), but only if it is the only match.
{SOME, RANDOM, STUFF {ARGUMENT1, ARGUMENT2, ARGUMENT3, ARGUMENT4}, SOME, MORE, RANDOM, STUFF }{Argument1, Argument2, Argument3, Argument4}
This is where my current approach fails, as the lexing output for the above example would be:
ARGUMENT1, ARGUMENT2, ARGUMENT3, ARGUMENT4}, SOME, MORE, RANDOM, STUFF }{Argument1, Argument2, Argument3, Argument4
How can I receive only the following:
ARGUMENT1, ARGUMENT2, ARGUMENT3, ARGUMENT4
Argument1, Argument2, Argument3, Argument4
Short explanation:
I have a conditional lexer which searches for left curly braces and saves their positions.
For each new left brace I increment a counter.
For each right brace I decrement the counter.
When the counter reaches zero, I set t.value to everything from the latest left brace to the following right brace.
I guess that should work for more than one hit in an example string.
I think I fail to switch back from the ccode state to the INITIAL state.
Now to my actual code (in this example I left out the commas inside the curly braces to make it a bit simpler to program):
import ply.lex as lex
import ply.yacc as yacc

# Declare the state
states = (
    ('ccode', 'exclusive'),
)

tokens = [
    'TEXT',
    'CCODE'
]

# this saves all rbrace positions
# to get the inner curly brace construct you want to use first element
# text lib call should always be the inner curly brace construct
rbrace_positions = []

def t_ANY_TEXT(t):
    r'\w+'
    t.value = str(t.value)
    return t

# Match the first {. Enter ccode state.
def t_ccode(t):
    r'\{'
    t.lexer.code_start = t.lexer.lexpos    # Record the starting position
    print(t.lexer.code_start)
    t.lexer.level = 1                      # Initial brace level
    t.lexer.begin('ccode')                 # Enter 'ccode' state

def t_lbrace(t):
    r'\{'
    t.lexer.level += 1

def t_rbrace(t):
    r'\}'
    t.lexer.level -= 1

# Rules for the ccode state
def t_ccode_lbrace(t):
    r'\{'
    t.lexer.current_lbrace = t.lexer.lexpos
    t.lexer.level += 1

def t_ccode_rbrace(t):
    r'\}'
    rbrace_positions.append(t.lexer.lexpos)
    t.lexer.level -= 1
    # If closing brace, return the code fragment
    if t.lexer.level == 0:
        t.value = t.lexer.lexdata[t.lexer.current_lbrace:rbrace_positions[0]-1]
        t.type = "CCODE"
        t.lexer.lineno += t.value.count('\n')
        t.lexer.begin('INITIAL')
        for _ in reversed(rbrace_positions):
            rbrace_positions.pop()
        return t

# C or C++ comment (ignore)
def t_ccode_comment(t):
    r'(/\*(.|\n)*?\*/)|(//.*)'
    pass

# C string
def t_ccode_string(t):
    r'\"([^\\\n]|(\\.))*?\"'

# C character literal
def t_ccode_char(t):
    r'\'([^\\\n]|(\\.))*?\''

# Any sequence of non-whitespace characters (not braces, strings)
def t_ccode_nonspace(t):
    r'[^\s\{\}\'\"]+'

# Ignored characters (whitespace)
t_ccode_ignore = " \t\n"

# For bad characters, we just skip over it
def t_ccode_error(t):
    t.lexer.skip(1)

def t_error(t):
    t.lexer.skip(1)

lexer = lex.lex()

data = '''{ I DONT WANT TO RECEIVE THIS
{THIS IS WHAT I WANT TO SEE}
AS WELL AS I DONT WANT TO RECEIVE THIS}
OUTSIDE OF CURLY BRACES
{I WANT TO SEE THIS AGAIN}
'''

lexer.input(data)
for tok in lexer:
    print(tok)
data is just a test string to keep the example simple.
But in my C source files there are constructs from which I want to extract Argument1, Argument2, Argument3, Argument4.
Those C files will not compile on their own, but they don't need to, since they are included in other files.
Thank you for all of your input!

Your description is not really clear. Your example seems to indicate that you want to find a braced list which doesn't contain any sublists. So that's the question I'm addressing.
Note that trying to do all this work in the lexer is not generally recommended. Lexers should normally return simple atomic tokens, leaving it to the parser's grammar to do the work of putting the tokens together into a useful structure. But if I've got your use case right, it is possible to do this with the lexer.
Your code decides whether or not to return a CCODE token based on whether the depth counter is 0 when it hits a close brace. But that's apparently not what you want: you don't care how deeply nested the braces are; rather, when a closing brace is encountered, you want to know whether it is the innermost brace or not. You don't need a stack for that, since you only ever need the position of the last open brace read, and you only need that while it is unclosed. So every time you see an open brace, you set the last open brace position, and when you see a closing brace, you check whether the last open brace position is set. If it is, you can return the string starting at that position and reset the last open brace position to None. If it is not set, just continue the scan.
Here's a simplified example based on your code:
import ply.lex as lex

# Declare the state
states = (
    ('ccode', 'exclusive'),
)

tokens = [
    'TEXT',
    'CCODE'
]

# Changed from t_ANY_TEXT because otherwise you get all the text inside
# braces as well. Perhaps that's what you wanted but it makes the output less
# clear.
def t_TEXT(t):
    r'\w+'
    t.value = str(t.value)
    return t

# Match the first {. Enter ccode state.
def t_ccode(t):
    r'\{'
    t.lexer.current_open = t.lexer.lexpos    # Record the starting position
    t.lexer.level = 1                        # Initial brace level
    t.lexer.begin('ccode')                   # Enter 'ccode' state

# t_lbrace and t_rbrace deleted because they never match

# Rules for the ccode state
def t_ccode_lbrace(t):
    r'\{'
    t.lexer.current_open = t.lexer.lexpos
    t.lexer.level += 1

def t_ccode_rbrace(t):
    r'\}'
    t.lexer.level -= 1
    if t.lexer.level == 0:
        t.lexer.begin('INITIAL')
    if t.lexer.current_open is not None:
        t.value = t.lexer.lexdata[t.lexer.current_open:t.lexer.lexpos - 1]
        t.type = "CCODE"
        t.lexer.current_open = None
        return t

# C or C++ comment (ignore)
def t_ccode_comment(t):
    r'(/\*(.|\n)*?\*/)|(//.*)'

# C string
def t_ccode_string(t):
    r'\"([^\\\n]|(\\.))*?\"'

# C character literal
def t_ccode_char(t):
    r'\'([^\\\n]|(\\.))*?\''

# Any sequence of non-whitespace characters (not braces, strings)
def t_ccode_nonspace(t):
    r'''[^\s{}'"]+'''    # No need to escape inside a character class

# Ignored characters (whitespace)
t_ccode_ignore = " \t\n"

# For bad characters, we just skip over it
def t_ccode_error(t):
    t.lexer.skip(1)

def t_error(t):
    t.lexer.skip(1)

lexer = lex.lex()

data = '''{ I DONT WANT TO RECEIVE THIS
{THIS IS WHAT I WANT TO SEE}
AS WELL AS I DONT WANT TO RECEIVE THIS}
OUTSIDE OF CURLY BRACES
{I WANT TO SEE THIS AGAIN}
'''

lexer.input(data)
for tok in lexer:
    print(tok)
Sample run:
$ python3 nested_brace.py
LexToken(CCODE,'THIS IS WHAT I WANT TO SEE',1,58)
LexToken(TEXT,'OUTSIDE',1,102)
LexToken(TEXT,'OF',1,110)
LexToken(TEXT,'CURLY',1,113)
LexToken(TEXT,'BRACES',1,119)
LexToken(CCODE,'I WANT TO SEE THIS AGAIN',1,152)
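(Not from the original answer, just a sketch for comparison.) The more conventional division of labour mentioned above would keep the lexer trivial and let a yacc grammar collect the innermost braced groups. Something roughly like the following could work; the token names, the innermost list and the p_* rules are all invented for this illustration:

import ply.lex as lex
import ply.yacc as yacc

tokens = ('TEXT', 'LBRACE', 'RBRACE', 'COMMA')

t_LBRACE = r'\{'
t_RBRACE = r'\}'
t_COMMA = r','
t_TEXT = r'[^\s{},]+'
t_ignore = ' \t\n'

def t_error(t):
    t.lexer.skip(1)

innermost = []    # collects braced groups that contain no nested braces

def p_items(p):
    '''items : items item
             | item'''
    if len(p) == 3:
        p[0] = p[1] + [p[2]]
    else:
        p[0] = [p[1]]

def p_item(p):
    '''item : TEXT
            | COMMA
            | group'''
    p[0] = p[1]

def p_group(p):
    'group : LBRACE items RBRACE'
    # A group whose members are all plain strings contains no nested braces,
    # so it is one of the innermost lists we are after.
    if all(isinstance(x, str) for x in p[2]):
        innermost.append(' '.join(x for x in p[2] if x != ','))
    p[0] = ('group', p[2])

def p_error(p):
    pass

lexer = lex.lex()
parser = yacc.yacc()

data = '''{ I DONT WANT TO RECEIVE THIS
{THIS IS WHAT I WANT TO SEE}
AS WELL AS I DONT WANT TO RECEIVE THIS}
OUTSIDE OF CURLY BRACES
{I WANT TO SEE THIS AGAIN}
'''

parser.parse(data, lexer=lexer)
print(innermost)
# ['THIS IS WHAT I WANT TO SEE', 'I WANT TO SEE THIS AGAIN']

Because groups are reduced bottom-up, a group whose members are all plain words (no nested group) can be recorded as innermost the moment its closing brace is reduced.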

Related

Regular expression matching with re but not lex

I am trying to parse a file in order to reformat it. For this, I need to be able to distinguish between full line comments and end of line comments. I have been able to get lex to recognize full line comments properly, but am having issues with end of line comments.
For example: "a = 0; //This; works; fine" but "a = 0; //This, does; not;".
What confuses me the most is that re is able to recognise both comments without issue, and yet lex cannot.
Here is the relevant code (FL=full line, EL=end of line):
import ply.lex as lex

tokens = (
    'EQUAL',
    'SEMICOLON',
    'FL_COMMENT',
    'EL_COMMENT',
    'STRING'
)

t_EQUAL = r'='
t_SEMICOLON = r';'

def t_FL_COMMENT(t):
    r"""(^|\n)\s*(//|\#).*"""
    return t

def t_EL_COMMENT(t):
    r"""(?<=;)\s*(//|\#).*"""
    return t

def t_STRING(t):
    r"""(".*")|([a-zA-Z0-9\</][\w.\-\+/]*)"""
    return t

def t_newline(t):
    r"""\n"""
    t.lexer.lineno += len(t.value)

t_ignore = ' \t'

def t_error(t):
    print("Illegal character '%s' on line %d" % (t.value[0], t.lineno))
    t.lexer.skip(1)

def t_eof(t):
    return None

lexer = lex.lex()
lexer.input(file_contents)
for token in lexer:
    print(token)
Lex (including the Ply variety) builds lexical analysers, not regular expression searchers. Unlike a regular expression library, which generally attempts to scan the entire input to find a pattern, lex tries to decide what pattern matches at the current input point. It then advances the input to the point immediately following, and tries to find the matching pattern at that point. And so on. Every character in the text is contained in some matched token. (Although some tokens might be discarded.)
You can actually take advantage of this fact to simplify your regular expressions. In this case, for example, since you can count on t_FL_COMMENT to match a comment which does occur at the beginning of a line, any other comment must not be at the start of a line. So no lookbehind is needed:
def t_FL_COMMENT(t):
    r"""(^|\n)\s*(//|\#).*"""
    return t

def t_EL_COMMENT(t):
    r"""(//|\#).*"""
    return t
An alternative to (\n|^) is (?m)^ (which enables multiline mode so that the ^ can match right after a newline, as well as matching at the beginning of the string).
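If you prefer the multiline form with PLY, one option (a sketch only, not from the original answer) is to keep ^ in the rule and pass re.MULTILINE globally through lex.lex()'s reflags argument, since PLY compiles all token rules into a single master regular expression; re.VERBOSE is included here because PLY's master regex normally relies on it:

import re
import ply.lex as lex

tokens = ('EQUAL', 'SEMICOLON', 'FL_COMMENT', 'EL_COMMENT', 'STRING')

t_EQUAL = r'='
t_SEMICOLON = r';'
t_ignore = ' \t'

def t_FL_COMMENT(t):
    r'^\s*(//|\#).*'    # ^ matches at each line start thanks to re.MULTILINE below
    return t

def t_EL_COMMENT(t):
    r'(//|\#).*'
    return t

def t_STRING(t):
    r'(".*")|([a-zA-Z0-9\</][\w.\-\+/]*)'
    return t

def t_newline(t):
    r'\n'
    t.lexer.lineno += 1

def t_error(t):
    t.lexer.skip(1)

lexer = lex.lex(reflags=re.VERBOSE | re.MULTILINE)
lexer.input('a = 0; //end comment\n// full line comment\n')
for tok in lexer:
    print(tok)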

how to define two tokens as one token?

I am trying to define two words separated by a space as one token in my lexical analyzer,
but when I pass an input like in out it produces LexToken(KEYIN,'in',1,0)
and LexToken(KEYOUT,'out',1,3).
I need it to be like this: LexToken(KEYINOUT,'in out',1,0)
PS: KEYIN and KEYOUT are two different tokens per the grammar's definition.
Following is the test which causes the problem:
import lex

reserved = {'in': 'KEYIN', 'out': 'KEYOUT', 'in\sout': 'KEYINOUT'}    # the problem is in here

tokens = ['PLUS', 'MINUS', 'IDENTIFIER'] + list(reserved.values())

t_MINUS = r'-'
t_PLUS = r'\+'
t_ignore = ' \t'

def t_IDENTIFIER(t):
    r'[a-zA-Z]+([(a-zA-Z)*|(\d+)*|(_*)])*'
    t.type = reserved.get(t.value, 'IDENTIFIER')    # Check for reserved words
    return t

def t_error(t):
    print("Illegal character '%s'" % t.value[0], "at line", t.lexer.lineno, "at position", t.lexer.lexpos)
    t.lexer.skip(1)

lex.lex()
lex.input("in out inout + - ")
while True:
    tok = lex.token()
    print(tok)
    if not tok:
        break
Output:
LexToken(KEYIN,'in',1,0)
LexToken(KEYOUT,'out',1,3)
LexToken(IDENTIFIER,'inout',1,7)
LexToken(PLUS,'+',1,13)
LexToken(MINUS,'-',1,15)
None
This is your function which recognizes IDENTIFIERs and keywords:
def t_IDENTIFIER(t):
    r'[a-zA-Z]+([(a-zA-Z)*|(\d+)*|(_*)])*'
    t.type = reserved.get(t.value, 'IDENTIFIER')    # Check for reserved words
    return t
First, it is clear that the keywords it can recognize are precisely the keys of the dictionary reserved, which are:
in
out
in\sout
Since in out is not a key in that dictionary (in\sout is not the same string), it cannot be recognised as a keyword no matter what t.value happens to be.
But t.value cannot be in out either, because t.value will always match the regular expression which controls t_IDENTIFIER:
[a-zA-Z]+([(a-zA-Z)*|(\d+)*|(_*)])*
and that regular expression never matches anything with a space character. (That regular expression has various problems; the characters *, (, ), | and + inside the second character class are treated as ordinary characters. See below for a correct regex.)
You could certainly match in out as a token in a manner similar to that suggested in your original question, prior to the edit. However,
t_KEYINOUT = r'in\sout'
will not work, because Ply does not use the common "maximum munch" algorithm for deciding which regular expression pattern to accept. Instead, it simply orders all of the patterns and picks the first one which matches, where the order consists of all of the tokenizing functions (in the order in which they are defined), followed by the token variables sorted in reverse order of regex length. Since t_IDENTIFIER is a function, it will be tried before the variable t_KEYINOUT. To ensure that t_KEYINOUT is tried first, it must be made into a function and placed before t_IDENTIFIER.
However, that is still not exactly what you want, since it will tokenize
in outwards
as
LexToken(KEYINOUT,'in out',1,0)
LexToken(IDENTIFIER,'wards',1,6)
rather than
LexToken(KEYIN,'in',1,0)
LexToken(IDENTIFIER,'outwards',1,3)
To get the correct analysis, you need to ensure that in out only matches if out is a complete word; in other words, if there is a word boundary at the end of the match. So one solution is:
reserved = {'in': 'KEYIN', 'out': 'KEYOUT'}

def t_KEYINOUT(t):
    r'in\sout\b'
    return t

def t_IDENTIFIER(t):
    r'[a-zA-Z][a-zA-Z0-9_]*'
    t.type = reserved.get(t.value, 'IDENTIFIER')    # Check for reserved words
    return t
However, it is almost certainly not necessary for the lexer to recognize in out as a single token. Since both in and out are keywords, it is easy to leave it to the parser to notice when they are used together as an in out designator:
parameter: KEYIN IDENTIFIER
         | KEYOUT IDENTIFIER
         | KEYIN KEYOUT IDENTIFIER
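For completeness, here is a rough sketch (not from the original answer) of what those rules might look like in ply.yacc; the p_parameter_* names and the tuple actions are purely illustrative:

import ply.lex as lex
import ply.yacc as yacc

reserved = {'in': 'KEYIN', 'out': 'KEYOUT'}
tokens = ['IDENTIFIER'] + list(reserved.values())

t_ignore = ' \t'

def t_IDENTIFIER(t):
    r'[a-zA-Z][a-zA-Z0-9_]*'
    t.type = reserved.get(t.value, 'IDENTIFIER')    # keywords win over identifiers
    return t

def t_error(t):
    t.lexer.skip(1)

def p_parameter_inout(p):
    'parameter : KEYIN KEYOUT IDENTIFIER'
    p[0] = ('in out', p[3])

def p_parameter_single(p):
    '''parameter : KEYIN IDENTIFIER
                 | KEYOUT IDENTIFIER'''
    p[0] = (p[1], p[2])

def p_error(p):
    print("Syntax error")

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse('in out count'))    # -> ('in out', 'count')
print(parser.parse('out total'))       # -> ('out', 'total')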

Does ply.lex parse the same token once?

I was reading a document about lexical parsing so that I could parse some arguments, and I followed it exactly to create a parser. This is the whole code:
#!/usr/bin/env python
#-*- coding: utf-8 -*-
import ply.lex as lex

args = ['[watashi]', '[anata]>500', '[kare]>400&&[kare]<800']

tokens = ('NUMBER', 'EXPRESSION', 'AND', 'LESS', 'MORE')

t_EXPRESSION = r'\[.*\]'
t_AND = r'&&'
t_LESS = r'<'
t_MORE = r'>'
t_ignore = '\t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

def t_error(t):
    print 'Illegal character "%s"' % t.value[0]
    t.lexer.skip(1)

lexer = lex.lex()

for i in args:
    lexer.input(i)
    while True:
        tok = lexer.token()
        if not tok: break
        print tok
    print '#############'
I simply created a list of sample arguments and I got this output:
LexToken(EXPRESSION,'[watashi]',1,0)
#############
LexToken(EXPRESSION,'[anata]',1,0)
LexToken(MORE,'>',1,7)
LexToken(NUMBER,500,1,8)
#############
LexToken(EXPRESSION,'[kare]>400&&[kare]',1,0)
LexToken(LESS,'<',1,18)
LexToken(NUMBER,800,1,19)
#############
The first and second sample arguments are parsed correctly, but the third one is not. The third sample argument comes out as EXPRESSION+LESS+NUMBER whereas it must be EXPRESSION+MORE+NUMBER+AND+EXPRESSION+LESS+NUMBER. So I thought the problem could be one of these:
ply.lex is only parsing one token: In the code above, ply.lex cannot parse two separate expressions and it returns the latest token as its type. "[kare]>400&&[kare]" is EXPRESSION because it ends with the latest EXPRESSION token, which is the second [kare], and 800 is NUMBER because it is the latest NUMBER token.
!!! OR !!!
There is a mistake in the t_EXPRESSION variable: I defined this variable as "[.*]" to get all characters between the two brackets ([]). The first token of the third sample argument is "[kare]>400&&[kare]" since it simply starts and ends with those brackets and contains .* (every single character) in between, but I thought the interpreter would stop at the first ] character since it comes first.
So I could not find a way to solve it, and asked here.
In general, this is what I am struggling with:
lexer.input("[kare]>400&&[kare]<800")
while True:
    tok = lexer.token()
    if not tok: break
    print tok
I get
LexToken(EXPRESSION,'[kare]>400&&[kare]',1,0)
LexToken(LESS,'<',1,18)
LexToken(NUMBER,800,1,19)
but I expected something more like
LexToken(EXPRESSION,'[kare]',1.0)
LexToken(LESS,'>',?)
LexToken(NUMBER,400,?)
LexToken(AND,'&&',?)
LexToken(EXPRESSION,'[kare]',1,0)
LexToken(LESS,'<',1,18)
LexToken(NUMBER,800,1,19)
I think I see your problem:
t_EXPRESSION = r'\[.*\]'
is greedy and will match the longest match it can, i.e. '[kare]>400&&[kare]'.
Instead, try
t_EXPRESSION = r'\[[^\]]*\]'
This will match only one bracketed group, since it looks for any character that is not a closing bracket ([^\]]) instead of anything (.).
You can also use non-greedy matching:
t_EXPRESSION = r'\[.*?\]'
The ? makes it match as few characters as possible rather than as many as possible.
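Plugging the fixed pattern back into the lexer from the question should give the expected token stream; a quick sketch (the non-greedy variant behaves the same way on this input):

import ply.lex as lex

tokens = ('NUMBER', 'EXPRESSION', 'AND', 'LESS', 'MORE')

t_EXPRESSION = r'\[[^\]]*\]'    # stop at the first closing bracket
t_AND = r'&&'
t_LESS = r'<'
t_MORE = r'>'
t_ignore = '\t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input('[kare]>400&&[kare]<800')
for tok in lexer:
    print(tok)

# LexToken(EXPRESSION,'[kare]',1,0)
# LexToken(MORE,'>',1,6)
# LexToken(NUMBER,400,1,7)
# LexToken(AND,'&&',1,10)
# LexToken(EXPRESSION,'[kare]',1,12)
# LexToken(LESS,'<',1,18)
# LexToken(NUMBER,800,1,19)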

Python, how do I parse key=value list ignoring what is inside parentheses?

Suppose I have a string like this:
"key1=value1;key2=value2;key3=(key3.1=value3.1;key3.2=value3.2)"
I would like to get a dictionary corresponding to the above, where the value for key3 is the string
"(key3.1=value3.1;key3.2=value3.2)"
and eventually the corresponding sub-dictionary.
I know how to split the string at the semicolons, but how can I tell the parser to ignore the semicolon between parentheses?
This includes potentially nested parentheses.
Currently I am using an ad-hoc routine that looks for pairs of matching parentheses, "clears" their content, gets the split positions and applies them to the original string, but this does not appear very elegant; there must be some prepackaged, pythonic way to do this.
If anyone is interested, here is the code I am currently using:
def pparams(parameters, sep=';', defs='=', brc='()'):
    '''
    unpackages parameter string to struct
    for example, pippo(a=21;b=35;c=pluto(h=zzz;y=mmm);d=2d3f) becomes:
        a: '21'
        b: '35'
        c.fn: 'pluto'
        c.h='zzz'
        d: '2d3f'
        fn_: 'pippo'
    '''
    ob=strfind(parameters,brc[0])
    dp=strfind(parameters,defs)
    out={}
    if len(ob)>0:
        if ob[0]<dp[0]:
            #opening function
            out['fn_']=parameters[:ob[0]]
            parameters=parameters[(ob[0]+1):-1]
    if len(dp)>0:
        temp=smart_tokenize(parameters,sep,brc);
        for v in temp:
            defp=strfind(v,defs)
            pname=v[:defp[0]]
            pval=v[1+defp[0]:]
            if len(strfind(pval,brc[0]))>0:
                out[pname]=pparams(pval,sep,defs,brc);
            else:
                out[pname]=pval
    else:
        out['fn_']=parameters
    return out

def smart_tokenize( instr, sep=';', brc='()' ):
    '''
    tokenize string ignoring separators contained within brc
    '''
    tstr=instr;
    ob=strfind(instr,brc[0])
    while len(ob)>0:
        cb=findclsbrc(tstr,ob[0])
        tstr=tstr[:ob[0]]+'?'*(cb-ob[0]+1)+tstr[cb+1:]
        ob=strfind(tstr,brc[1])
    sepp=[-1]+strfind(tstr,sep)+[len(instr)+1]
    out=[]
    for i in range(1,len(sepp)):
        out.append(instr[(sepp[i-1]+1):(sepp[i])])
    return out

def findclsbrc(instr, brc_pos, brc='()'):
    '''
    given a string containing an opening bracket, finds the
    corresponding closing bracket
    '''
    tstr=instr[brc_pos:]
    o=strfind(tstr,brc[0])
    c=strfind(tstr,brc[1])
    p=o+c
    p.sort()
    s1=[1 if v in o else 0 for v in p]
    s2=[-1 if v in c else 0 for v in p]
    s=[s1v+s2v for s1v,s2v in zip(s1,s2)]
    s=[sum(s[:i+1]) for i in range(len(s))] #cumsum
    return p[s.index(0)]+brc_pos

def strfind(instr, substr):
    '''
    returns starting position of each occurrence of substr within instr
    '''
    i=0
    out=[]
    while i<=len(instr):
        try:
            p=instr[i:].index(substr)
            out.append(i+p)
            i+=p+1
        except:
            i=len(instr)+1
    return out
If you want to build a real parser, use one of the Python parsing libraries, like PLY or PyParsing. If you figure such a full-fledged library is overkill for the task at hand, go for some hack like the one you already have. I'm pretty sure there is no clean few-line solution without an external library.
Expanding on Sven Marnach's answer, here's an example of a pyparsing grammar that should work for you:
from pyparsing import (ZeroOrMore, Word, printables, Forward,
                       Group, Suppress, Dict)

collection = Forward()
simple_value = Word(printables, excludeChars='()=;')
key = simple_value
inner_collection = Suppress('(') + collection + Suppress(')')
value = simple_value ^ inner_collection
key_and_value = Group(key + Suppress('=') + value)
collection << Dict(key_and_value + ZeroOrMore(Suppress(';') + key_and_value))

coll = collection.parseString(
    "key1=value1;key2=value2;key3=(key3.1=value3.1;key3.2=value3.2)")

print coll['key1']             # value1
print coll['key2']             # value2
print coll['key3']['key3.1']   # value3.1
You could use a regex to capture the groups:
>>> import re
>>> s = "key1=value1;key2=value2;key3=(key3.1=value3.1;key3.2=value3.2)"
>>> r = re.compile('(\w+)=(\w+|\([^)]+\));?')
>>> dict(r.findall(s))
This regex says:
(\w+)        # Find and capture a group with 1 or more word characters (letters, digits, underscores)
=            # Followed by the literal character '='
(\w+         # Followed by a group with 1 or more word characters
|\([^)]+\)   # or a group that starts with an open paren (escaped as '\(' and '\)'), followed by anything up until a closed paren, which terminates the alternate grouping
);?          # optionally this grouping might be followed by a semicolon.
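On the sample string, dict(r.findall(s)) should therefore give roughly:

{'key1': 'value1',
 'key2': 'value2',
 'key3': '(key3.1=value3.1;key3.2=value3.2)'}

Note that to build the sub-dictionary for key3 you would still have to strip the parentheses and apply a pattern that also allows dots in the keys and values, since \w does not match '.'.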
Gotta say, kind of a strange grammar. You should consider using a more standard format. If you need guidance choosing one maybe ask another question. Good luck!

Splitting strings in python

I have a string which is like this:
this is [bracket test] "and quotes test "
I'm trying to write something in Python to split it up by space while ignoring spaces within square braces and quotes. The result I'm looking for is:
['this','is','bracket test','and quotes test ']
Here's a simplistic solution that works with your test input:
import re
re.findall('\[[^\]]*\]|\"[^\"]*\"|\S+',s)
This will return anything that matches either:
an open bracket followed by zero or more non-close-bracket characters followed by a close bracket,
a double-quote followed by zero or more non-quote characters followed by a quote, or
any group of non-whitespace characters.
This works with your example, but might fail for many real-world strings you may encounter. For example, you didn't say what you expect with unbalanced brackets or quotes,or how you want single quotes or escape characters to work. For simple cases, though, the above might be good enough.
To complete Bryan's post and exactly match the requested answer:
>>> import re
>>> txt = 'this is [bracket test] "and quotes test "'
>>> [x[1:-1] if x[0] in '["' else x for x in re.findall('\[[^\]]*\]|\"[^\"]*\"|\S+', txt)]
['this', 'is', 'bracket test', 'and quotes test ']
Don't be misled by the syntax used: this is not several statements on a single line but a single functional statement (more bug-proof).
Here's a simplistic parser (tested against your example input) that introduces the State design pattern.
In the real world, you probably want to build a real parser using something like PLY.
class SimpleParser(object):

    def __init__(self):
        self.mode = None
        self.result = None

    def parse(self, text):
        self.initial_mode()
        self.result = []
        for word in text.split(' '):
            self.mode.handle_word(word)
        return self.result

    def initial_mode(self):
        self.mode = InitialMode(self)

    def bracket_mode(self):
        self.mode = BracketMode(self)

    def quote_mode(self):
        self.mode = QuoteMode(self)


class InitialMode(object):

    def __init__(self, parser):
        self.parser = parser

    def handle_word(self, word):
        if word.startswith('['):
            self.parser.bracket_mode()
            self.parser.mode.handle_word(word[1:])
        elif word.startswith('"'):
            self.parser.quote_mode()
            self.parser.mode.handle_word(word[1:])
        else:
            self.parser.result.append(word)


class BlockMode(object):

    end_marker = None

    def __init__(self, parser):
        self.parser = parser
        self.result = []

    def handle_word(self, word):
        if word.endswith(self.end_marker):
            self.result.append(word[:-1])
            self.parser.result.append(' '.join(self.result))
            self.parser.initial_mode()
        else:
            self.result.append(word)


class BracketMode(BlockMode):
    end_marker = ']'


class QuoteMode(BlockMode):
    end_marker = '"'
Here's a more procedural approach:
#!/usr/bin/env python

a = 'this is [bracket test] "and quotes test "'
words = a.split()
wordlist = []
while True:
    try:
        word = words.pop(0)
    except IndexError:
        break
    if word[0] in '"[':
        buildlist = [word[1:]]
        while True:
            try:
                word = words.pop(0)
            except IndexError:
                break
            if word[-1] in '"]':
                buildlist.append(word[:-1])
                break
            buildlist.append(word)
        wordlist.append(' '.join(buildlist))
    else:
        wordlist.append(word)
print wordlist
Well, I've encountered this problem quite a few times, which led me to write my own system for parsing any kind of syntax.
The result of this can be found here; note that this may be overkill, and it will provide you with something that lets you parse statements with both brackets and parentheses, single and double quotes, as nested as you want. For example, you could parse something like this (example written in Common Lisp):
(defun hello_world (&optional (text "Hello, World!"))
(format t text))
You can use nesting, brackets (square) and parentheses (round), single- and double-quoted strings, and it's very extensible.
The idea is basically a configurable implementation of a Finite State Machine which builds up an abstract syntax tree character-by-character. I recommend you look at the source code (see link above) to get an idea of how to do it. This would also be possible with regular expressions, but try writing a system using REs and then extending it (or even understanding it) later.
Works for quotes only.
rrr = []
qqq = s.split('\"')
[ rrr.extend( ( qqq[x].split(), [ qqq[x] ] )[ x%2 ] ) for x in range( len( qqq ) ) ]
print rrr
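Assuming s is the example string from the question, the snippet above gives roughly:

['this', 'is', '[bracket', 'test]', 'and quotes test ']

i.e. the quoted section is kept whole, but the bracketed one is still split.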
