How is the Python grammar generated and how does the interpreter understand it?

I wonder how the grammar of the Python language is generated and how it is understood by the interpreter.
In Python, the file graminit.c seems to implement the grammar, but I don't clearly understand it.
More broadly, what are the different ways to generate a grammar, and are there differences in how the grammar is implemented in languages such as Perl, Python or Lua?

Grammars are generally of the same form: Backus-Naur Form (BNF) is typical.
Lexer/parsers can take very different forms.
The lexer breaks up the input file into tokens. The parser uses the grammar to see if the stream of tokens is "valid" according to its rules.
Usually the outcome is an abstract syntax tree (AST) that can then be used to generate whatever you want, such as byte code or assembly.
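To see both stages concretely, Python's standard library exposes its own tokenizer and AST builder, so you can watch the token stream and the resulting tree for a snippet (a quick illustration using the stdlib tokenize and ast modules):

import ast
import io
import tokenize

source = "x = 1 + 2\n"

# Stage 1: the lexer turns characters into tokens.
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))

# Stage 2: the parser turns the token stream into an abstract syntax tree.
print(ast.dump(ast.parse(source)))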

There are many ways to implement lexing/parsing; it really comes down to identifying the patterns and how they fit together. There are a few very nice Python packages for doing this, ranging from pure Python to wrapped C code. Pyparsing in particular has many excellent examples. One thing worth noting: finding a straight EBNF/BNF parser is kind of hard; writing a parser with Python code isn't awful, but it is one step further from the raw grammar, which might be important to you.
Code Talker
SimpleParse
Python Lex Yacc
pyparsing

Related

Use PLY to read YACC file (file.y)

PLY has a somewhat complex system of defining tokens, lexemes, grammar, etc., but I would like to create a parse tree using Ruby's already existing grammar file, parse.y.
Is there a way to read the file parse.y and create a parse tree for Ruby program in PLY?
Short answer: no.
That file contains 13,479 lines; the actual grammar is 769 lines, including 46 Mid-Rule Actions (MRAs), so there are close to 13 thousand lines of C code which would have to be rewritten in Python in order to reproduce the functionality. That functionality includes the lexical analyser, which is about a thousand lines of C code plus supporting functions. (If you think Ply's method of defining lexical analysis is complicated, wait until you try to reproduce a hand-written analyser written in C. :-) )
I extracted the grammar from that file using bison (although I had to edit the file a bit so that bison wouldn't choke on it; I don't know where the Makefile is in that source repository, but I presume it includes a preprocessing step to make a valid bison grammar file out of parse.y). So you could do that, too, and use the result as the basis of a Ply grammar. You might be able to automate the construction of the grammar, but my guess is that you would still have to do quite a lot of work by hand, and if you don't have at least some experience in writing parsers, that work is not going to be simple. (It may be educational, though.)
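For a sense of scale, here is roughly what declaring tokens and grammar rules looks like in Ply. This is a minimal, hypothetical toy grammar, nothing to do with Ruby's; it just shows the style that every one of parse.y's rules and actions would have to be re-expressed in:

import ply.lex as lex
import ply.yacc as yacc

tokens = ('NUMBER', 'PLUS')

t_PLUS = r'\+'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    print('illegal character %r' % t.value[0])
    t.lexer.skip(1)

def p_expr_plus(p):
    'expr : expr PLUS NUMBER'
    p[0] = p[1] + p[3]   # the rule body is Ply's analogue of a yacc action

def p_expr_number(p):
    'expr : NUMBER'
    p[0] = p[1]

def p_error(p):
    print('syntax error')

lex.lex()
parser = yacc.yacc()
print(parser.parse('1 + 2 + 3'))   # -> 6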
Good luck with your project.

Generating parser in Python language from JavaCC source?

I really do mean the question mark in the title, because I'm not exactly sure myself. Let me explain the situation.
I'm not a computer science student and I never took a compilers course. Until now I thought that compiler writers, or students who took a compilers course, were outstanding because they had to write the parser component of the compiler in whatever language they were writing the compiler in. That's not an easy job, right?
I'm dealing with Information Retrieval problem. My desired programming language is Python.
Parser Nature:
http://ir.iit.edu/~dagr/frDocs/fr940104.0.txt is the sample corpus. This file contains around 50 documents with some XML-style markup (you can see it at the link above). I need to note down some other values like <DOCNO> FR940104-2-00001 </DOCNO> and <PARENT> FR940104-2-00001 </PARENT>, and I only need to index the <TEXT> </TEXT> portion of each document, which contains some varying tags that I need to strip out, a lot of <!-- --> comments to be ignored, and some &hyph; and &space; character entities. I don't know why the corpus has things like this when it's known that it's neither meant to be rendered by a browser nor a proper XML document.
I thought of using a Python XML parser to extract the desired text. But after a little searching I found JavaCC parser source code (Parser.jj) for the same corpus I'm using here. A quick lookup on JavaCC, followed by compiler-compiler, revealed that compiler writers aren't as superhuman as I thought: they use a compiler-compiler to generate parser code in the desired language. Wikipedia says the input to a compiler-compiler is a grammar (usually in BNF). This is where I'm lost.
Is Parser.jj the grammar (the input to the compiler-compiler called JavaCC)? It's definitely not BNF. What is this grammar called? Why does this grammar contain Java code? Isn't there any universal grammar language?
I want a Python parser for parsing the corpus. Is there any way I can translate Parser.jj to get a Python equivalent? If yes, how? If no, what are my other options?
By any chance, does anyone know what this corpus is? Where is its original source? I would like to see some description of it. It is distributed on the internet under the name frDocs.tar.gz.
Why do you call this "XML-style" markup? This looks like pretty standard/basic XML to me.
Try ElementTree or lxml. Instead of writing a parser, use one of the stable, well-hardened libraries that are already out there.
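As a rough sketch (the entity names, the encoding, and the wrapping root are guesses from the description above, so treat it only as a starting point), pulling the <TEXT> sections out with lxml might look like:

import re
from lxml import etree

raw = open('fr940104.0.txt', encoding='latin-1').read()

# Replace the nonstandard entities the corpus uses (&hyph;, &space;, ...).
raw = re.sub(r'&(hyph|space|blank);', ' ', raw)

# The file is a sequence of <DOC> elements with no single root, so wrap it;
# recover=True lets lxml skate over any remaining non-XML glitches.
parser = etree.XMLParser(recover=True)
root = etree.fromstring(('<CORPUS>%s</CORPUS>' % raw).encode('latin-1'), parser)

for doc in root.iter('DOC'):
    docno = doc.findtext('DOCNO', default='').strip()
    body = doc.find('TEXT')
    if docno and body is not None:
        # itertext() flattens whatever varying tags appear inside <TEXT>.
        print(docno, ' '.join(''.join(body.itertext()).split())[:60])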
You can't build a parser, let alone a whole compiler, from a(n E)BNF grammar alone: it's just the grammar, i.e. syntax (and some syntax, like Python's indentation-based block rules, can't be modeled in it at all), not the semantics. Either you use separate tools for these aspects, or you use a more advanced framework (like Boost::Spirit in C++ or Parsec in Haskell) that unifies both.
JavaCC (like yacc) is responsible for generating a parser, i.e. the subprogram that makes sense of the tokens read from the source code. For this, they mix an (E)BNF-like notation with code written in the language the resulting parser will be in (e.g. for building a parse tree); in this case, Java. Of course it would be possible to make up another language, but since the existing languages can handle those tasks relatively well, it would be rather pointless. And since other parts of the compiler might be written by hand in the same language, it makes sense to leave the "I got ze tokens, what do I do wit them?" part to the person who will write those other parts ;)
I had never heard of "PythonCC", and Google hadn't either (well, there's a "pythoncc" project on Google Code, but its description just says "pythoncc is a program that tries to generate optimized machine code for Python scripts" and there have been no commits since March). Do you mean any of these Python parsing libraries/tools? I don't think there's a way to automatically convert the JavaCC code to a Python equivalent, but the whole thing looks rather simple, so if you dive in and learn a bit about parsing via JavaCC and [the Python library/tool of your choice], you might be able to translate it.

Mini-languages in Python

I'm after creating a simple mini-language parser in Python, programming close to the problem domain and all that.
Anyway, I was wondering how the people on here would go about doing that: what are the preferred ways of doing this kind of thing in Python?
I'm not going to give specific details of what I'm after because at the moment I'm just investigating how easy this whole field is in Python.
Pyparsing is handy for writing "little languages". I gave a presentation at PyCon'06 on writing a simple adventure game engine, in which the language being parsed and interpreted was the game command set ("inventory", "take sword", "drop book", etc.). (Source code here.)
You can also find links to other pyparsing articles at the pyparsing wiki.
I have limited but positive experience with PLY (Python Lex-Yacc). It combines Lex and Yacc functionality in a single Python class. You may want to check it out.
Fellow Stackoverflow'er Ned Batchelder has a nice overview of available tools on his website. There's also an overview on the Python website itself.
I would recommend funcparserlib. It was written especially for parsing little languages and DSLs and it is faster and smaller than pyparsing (see stats on its homepage). Minimalists and functional programmers should like funcparserlib.
Edit: By the way, I'm the author of this library, so my opinion may be biased.
Python is such a wonderfully simple and extensible language that I'd suggest merely creating a comprehensive python module, and coding against that.
I see that while I typed up the above, PLY has already been mentioned.
If you asked me this now, I would try the textX library for Python. You can very easily create a DSL with it in Python! Advantages are that it creates an AST for you, and that lexing and parsing are combined.
http://igordejanovic.net/textX/
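For example, a throwaway DSL in textX can be as small as this (a hypothetical toy grammar; textX builds both the parser and the model classes from the grammar string):

from textx import metamodel_from_str

grammar = """
Model: commands+=Command;
Command: 'move' direction=Direction amount=INT;
Direction: 'up' | 'down' | 'left' | 'right';
"""

mm = metamodel_from_str(grammar)
model = mm.model_from_str("move up 10 move left 3")
for cmd in model.commands:
    print(cmd.direction, cmd.amount)   # -> "up 10", then "left 3"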
In order to be productive, I'd always use a parser generator like CocoPy (Tutorial) to have your grammar transformed into a (correct) parser, unless you want to implement the parser manually for the sake of learning.
The rest is writing the actual interpreter/compiler (create stack-based bytecode or an in-memory AST to be interpreted, and then evaluate it).

Python implementation of Parsec?

I recently wrote a parser in Python using PLY (a Python reimplementation of yacc). When I was almost done with the parser, I discovered that the grammar I need to parse requires me to do some lookup during parsing to inform the lexer. Without doing a lookup to inform the lexer, I cannot correctly parse the strings in the language.
Given that I can control the state of the lexer from the grammar rules, I think I'll solve my use case using a lookup table in the parser module, but it may become too difficult to maintain/test. So I want to know about some of the other options.
In Haskell I would use Parsec, a library of parsing functions (known as combinators). Is there a Python implementation of Parsec? Or perhaps some other production quality library full of parsing functionality so I can build a context sensitive parser in Python?
EDIT: All my attempts at context free parsing have failed. For this reason, I don't expect ANTLR to be useful here.
I believe that pyparsing is based on the same principles as parsec.
PySec is another monadic parser. I don't know much about it, but it's worth looking at.
An option you may consider, if an LL parser is OK for you, is to give ANTLR a try; it can generate Python too (actually it is LL(*), as they call it; the * stands for the amount of lookahead it can cope with).
Nothing prevents you from diverting your parser off the "context-free" path using PLY. You can pass information to the lexer during parsing, and in this way achieve full flexibility. I'm pretty sure you can parse anything you want with PLY this way.
For a hands-on example, consider pycparser, a parser for ANSI C written in Python with PLY. It solves the classic C typedef-identifier problem (which makes C's grammar not context-free) by populating a symbol table in the parser that the lexer then uses to resolve symbol names as either types or plain identifiers.
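In Ply the trick looks roughly like this (a stripped-down sketch of the idea, not pycparser's actual code): the parser records names declared as types, and the lexer consults that set when classifying identifiers.

import ply.lex as lex
import ply.yacc as yacc

tokens = ('NAME', 'TYPENAME', 'TYPEDEF', 'SEMI')

t_SEMI = r';'
t_ignore = ' \t\n'

reserved = {'typedef': 'TYPEDEF'}
type_names = set()   # filled in by the parser while parsing

def t_NAME(t):
    r'[A-Za-z_][A-Za-z0-9_]*'
    if t.value in reserved:
        t.type = reserved[t.value]
    elif t.value in type_names:
        t.type = 'TYPENAME'   # feedback from the parser changes the token type
    return t

def t_error(t):
    t.lexer.skip(1)

start = 'program'

def p_program(p):
    """program : program stmt
               | stmt"""

def p_stmt_typedef(p):
    'stmt : TYPEDEF NAME SEMI'
    type_names.add(p[2])   # later occurrences of this name now lex as TYPENAME

def p_stmt_decl(p):
    'stmt : TYPENAME NAME SEMI'
    print('declared %s of type %s' % (p[2], p[1]))

def p_error(p):
    print('syntax error at %r' % (p,))

lex.lex()
yacc.yacc().parse('typedef myint; myint x;')   # -> declared x of type myint

Note that this depends on the typedef rule reducing before the lexer reads the following identifier; a real parser like pycparser has to handle those lookahead subtleties carefully.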
There's ANTLR, which is LL(*); there's PyParsing, which is more object-friendly and is sort of like a DSL; and then there's Parsing, which is like OCaml's Menhir.
ANTLR is great and has the added benefit of working across multiple languages.

Resources for lexing, tokenising and parsing in python

Can people point me to resources on lexing, parsing and tokenising with Python?
I'm doing a little hacking on an open source project (hotwire) and wanted to make a few changes to the code that lexes, parses and tokenises the commands entered into it. As it is real working code, it is fairly complex and a bit hard to work out.
I haven't worked on code to lex/parse/tokenise before, so I was thinking one approach would be to work through a tutorial or two on this aspect. I would hope to learn enough to navigate around the code I actually want to alter. Is there anything suitable out there? (Ideally it could be done in an afternoon without having to buy and read the dragon book first ...)
Edit (7 Oct 2008): None of the answers below quite gives what I want. With them I could generate parsers from scratch, but I want to learn how to write my own basic parser from scratch, not using lex and yacc or similar tools. Having done that, I can then understand the existing code better.
So could someone point me to a tutorial where I can build a basic parser from scratch, using just python?
I'm a happy user of PLY. It is a pure-Python implementation of Lex & Yacc, with lots of small niceties that make it quite Pythonic and easy to use. Since Lex & Yacc are the most popular lexing and parsing tools and are used for most projects, PLY has the advantage of standing on giants' shoulders. A lot of knowledge about Lex & Yacc exists online, and you can freely apply it to PLY.
PLY also has a good documentation page with some simple examples to get you started.
For a listing of lots of Python parsing tools, see this.
This question is pretty old, but maybe my answer will help someone who wants to learn the basics. I find this resource to be very good. It is a simple interpreter written in Python without the use of any external libraries, so it will help anyone who would like to understand the internal workings of parsing, lexing and tokenising:
"A Simple Interpreter from Scratch in Python": Part 1, Part 2, Part 3, and Part 4.
For medium-complex grammars, PyParsing is brilliant. You can define grammars directly within Python code, with no need for code generation:
>>> from pyparsing import Word, alphas
>>> greet = Word( alphas ) + "," + Word( alphas ) + "!" # <-- grammar defined here
>>> hello = "Hello, World!"
>>> print(hello, "->", greet.parseString( hello ))
Hello, World! -> ['Hello', ',', 'World', '!']
(Example taken from the PyParsing home page).
With parse actions (functions that are invoked when a certain grammar rule is triggered), you can convert parses directly into abstract syntax trees, or any other representation.
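For instance, a parse action can turn matched number strings into ints while the parse is still running (a tiny sketch):

from pyparsing import Word, nums

integer = Word(nums).setParseAction(lambda t: int(t[0]))
print((integer + '+' + integer).parseString('2 + 3'))   # -> [2, '+', 3]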
There are many helper functions that encapsulate recurring patterns, like operator hierarchies, quoted strings, nesting or C-style comments.
pygments is a source code syntax highlighter written in Python. It has lexers and formatters, and it may be interesting to peek at its source.
Here are a few things to get you started (roughly from simplest to most complex, least to most powerful):
http://en.wikipedia.org/wiki/Recursive_descent_parser
http://en.wikipedia.org/wiki/Top-down_parsing
http://en.wikipedia.org/wiki/LL_parser
http://effbot.org/zone/simple-top-down-parsing.htm
http://en.wikipedia.org/wiki/Bottom-up_parsing
http://en.wikipedia.org/wiki/LR_parser
http://en.wikipedia.org/wiki/GLR_parser
When I learned this stuff, it was in a semester-long 400-level university course. We did a number of assignments where we did parsing by hand; if you want to really understand what's going on under the hood, I'd recommend the same approach.
This isn't the book I used, but it's pretty good: Principles of Compiler Design.
Hopefully that's enough to get you started :)
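If you want to see how small "by hand" can be, here is a minimal recursive-descent parser and evaluator for arithmetic, using nothing but the standard library (a sketch for study, not production code):

import re

TOKEN = re.compile(r'\s*(?:(\d+)|(.))')

def tokenize(text):
    # Yield ('NUM', value) or ('OP', char) pairs, skipping whitespace.
    for number, op in TOKEN.findall(text):
        yield ('NUM', int(number)) if number else ('OP', op)

class Parser:
    def __init__(self, text):
        self.tokens = list(tokenize(text))
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else (None, None)

    def eat(self, expected=None):
        kind, value = self.peek()
        if expected is not None and value != expected:
            raise SyntaxError('expected %r, got %r' % (expected, value))
        self.pos += 1
        return value

    def expr(self):
        # expr := term (('+' | '-') term)*
        left = self.term()
        while self.peek() in (('OP', '+'), ('OP', '-')):
            op = self.eat()
            left = left + self.term() if op == '+' else left - self.term()
        return left

    def term(self):
        # term := factor (('*' | '/') factor)*
        left = self.factor()
        while self.peek() in (('OP', '*'), ('OP', '/')):
            op = self.eat()
            left = left * self.factor() if op == '*' else left / self.factor()
        return left

    def factor(self):
        # factor := NUM | '(' expr ')'
        kind, value = self.peek()
        if kind == 'NUM':
            return self.eat()
        self.eat('(')
        value = self.expr()
        self.eat(')')
        return value

print(Parser('2 * (3 + 4) - 5').expr())   # -> 9

Each grammar rule becomes one method, which is the whole idea of recursive descent.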
Have a look at the standard module shlex and modify a copy of it to match the syntax you use for your shell; it is a good starting point.
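shlex already understands shell-style quoting and comments out of the box, for example:

import shlex

print(shlex.split('cp "my file.txt" /tmp  # a comment', comments=True))
# -> ['cp', 'my file.txt', '/tmp']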
If you want all the power of a complete solution for lexing/parsing, ANTLR can generate python too.
Federico Tomassetti has a good, concise write-up on everything from BNF to binary deciphering, covering:
lexical analysis,
parsing,
abstract syntax trees (ASTs), and
code generation.
He even mentions Parsing Expression Grammars (PEG).
https://tomassetti.me/parsing-in-python/
I suggest http://www.canonware.com/Parsing/, since it is pure Python and you don't need to learn a grammar notation, but it isn't widely used and has comparatively little documentation. The heavyweights are ANTLR and PyParsing. ANTLR can generate Java and C++ parsers too, as well as AST walkers, but you will have to learn what amounts to a new language.
