Out of curiosity, are there many compilers out there which target .pyc files?
After a bit of Googling, the only two I can find are:
unholy: _why's Ruby-to-pyc compiler
Python: The PSF's Python to pyc compiler
So… Are there any more?
(as a side note, I got thinking about this because I want to write a Scheme-to-pyc compiler)
(as a second side note, I'm not under any illusion that a Scheme-to-pyc compiler would be useful, but it would give me an incredible excuse to learn some internals of both Scheme and Python)
"I want to write a Scheme-to-pyc compiler".
My brain hurts! Why would you want to do that? Python bytecode is an intermediate language specifically designed to meet the needs of the Python language, and designed to run on Python virtual machines that, again, have been tailored to the needs of Python. Some of the most important areas of Python development these days are moving Python to other "virtual machines", such as Jython (JVM), IronPython (.NET), PyPy and the Unladen Swallow project (moving CPython to an LLVM-based representation). Trying to squeeze the syntax and semantics of a very different language (Scheme) into the intermediate representation of another high-level language seems to be attacking the problem (whatever the problem is) at the wrong level. So, in general, it doesn't seem like there would be many .pyc compilers out there, and there's a good reason for that.
I wrote a compiler several years ago which accepted a lisp-like language called "Noodle" and produced Python bytecode. While it never became particularly useful, it was a tremendously good learning experience both for understanding Common Lisp better (I copied several of its features) and for understanding Python better.
I can think of two particular cases when it might be useful to target Python bytecode directly, instead of producing Python and passing it on to a Python compiler:
Full closures: in Python before 3.0 (before the nonlocal keyword), you can't rebind a closed-over variable without resorting to bytecode hackery. You can mutate values, though, so it's common practice to have a closure reference a list, for example, and change its first element from the inner scope (see the sketch after these two cases). That can get really annoying. The restriction is part of the syntax, though, not the Python VM. My language had explicit variable declaration, so it successfully provided "normal" closures with modifiable closed-over values.
Getting at a traceback object without referencing any builtins. Real niche case, for sure, but I used it to break an early version of the "safelite" jail. See my posting about it.
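For reference, here is the list-cell workaround the first case describes, as a minimal sketch in ordinary Python (no bytecode hackery involved):
def make_counter():
    count = [0]            # one-element list acting as a mutable cell
    def increment():
        count[0] += 1      # mutating the list is allowed even pre-3.0;
        return count[0]    # rebinding count itself would not be
    return increment

counter = make_counter()
print(counter(), counter())  # prints: 1 2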
So yeah, it's probably way more work than it's worth, but I enjoyed it, and you might too.
I suggest you focus on CPython.
http://www.network-theory.co.uk/docs/pytut/CompiledPythonfiles.html
Rather than a Scheme to .pyc translator, I suggest you write a Scheme to Python translator, and then let CPython handle the conversion to .pyc. (There is precedent for doing it this way; the first C++ compiler was Cfront which translated C++ into C, and then let the system C compiler do the rest.)
From what I know of Scheme, it wouldn't be that difficult to translate Scheme to Python.
One warning: the Python virtual machine is probably not as fast for Scheme as Scheme itself. For example, Python doesn't automatically turn tail recursion into iteration, and Python has a relatively shallow stack, so your translator would actually need to turn tail recursion into iteration itself.
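To illustrate the kind of transformation such a translator would need to perform, here is a hand-written sketch (not translator output):
# Naive translation of a tail-recursive Scheme factorial: this overflows
# CPython's call stack for large n, since there is no tail-call elimination.
def fact_recursive(n, acc=1):
    if n == 0:
        return acc
    return fact_recursive(n - 1, acc * n)

# What the translator must emit instead: the tail call becomes a loop
# that rebinds the argument variables.
def fact_iterative(n):
    acc = 1
    while n != 0:
        n, acc = n - 1, acc * n
    return acc

fact_iterative(100000)    # fine
# fact_recursive(100000)  # would raise RecursionError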
As a bonus, once Unladen Swallow speeds up Python, your Scheme-to-Python translator would benefit, and at that point might even become practical!
If this seems like a fun project to you, I say go for it. Not every project has to be immediately practical.
P.S. If you want a project that is somewhat more practical, you might want to write an AWK to Python translator. That way, people with legacy AWK scripts could easily make the leap forward to Python!
Just for your interest, I have written a toy compiler from a simple LISP to Python. Practically, this is a LISP to pyc compiler.
Have a look: sinC - The tiniest LISP compiler
Probably a bit late at the party but if you're still interested the clojure-py project (https://github.com/halgari/clojure-py) is now able to compile a significant subset of clojure to python bytecode -- but some help is always welcome.
Targeting bytecode is not that hard in itself, except for one thing: it is not stable across Python versions (e.g. MAKE_FUNCTION pops 2 elements from the stack in Python 3 but only 1 in Python 2), and these differences are not clearly documented in a single spot (afaict) -- so you probably need some kind of abstraction layer.
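The standard library's dis module can at least tell you, on the running interpreter, what an opcode does to the stack, which is one possible building block for such an abstraction layer. A sketch (dis.stack_effect exists from Python 3.4 on; the details keep shifting between versions, which is rather the point):
import dis

# Net stack effect (pushes minus pops) of MAKE_FUNCTION on this interpreter;
# oparg=0 means no default-args/annotations/closure flags are set.
print(dis.stack_effect(dis.opmap["MAKE_FUNCTION"], 0))

# See the opcode in context: how this interpreter compiles a def statement.
dis.dis(compile("def f(x): return x", "<string>", "exec"))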
I'm currently developing, in Python, a very simple, stack-oriented programming language intended to introduce complete novices to programming concepts. The language does allow users to craft their own functions. While speed isn't a big concern for my language, I thought of creating a "simple" JIT compiler to generate Python byte code for the user's functions.
I was listening to an excellent talk from PyCon on how to hand-craft byte code and make functions from them. However, the speakers did add a caveat that the specific byte values of Python byte code are in no way portable and can even change between, say, 3.5.1 and 3.5.2.
So, I brought up the documentation for the dis module and saw dis.opmap, described as
Dictionary mapping operation names to bytecodes.
Therefore, if I wanted to put a BINARY_ADD into a byte code object, I wouldn't need to know its specific value. I could just look it up in dis.opmap.
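For example (on versions where BINARY_ADD exists at all -- CPython 3.11 replaced it with a generic BINARY_OP, which is exactly the kind of churn to plan for):
import dis

binary_add = dis.opmap["BINARY_ADD"]  # numeric value varies across releases
print(binary_add)                     # e.g. 23 on CPython 3.5
print(dis.opname[binary_add])         # reverse lookup: 'BINARY_ADD'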
This finally brings me to my question: Are there any other portability pitfalls of which I need to be aware (e.g., Endianness, sizes/numbers of arguments per opcode) in order to make my JIT compiler compatible with any version of Python 3? I imagine that there will be certain opcodes that were only made available in a specific version. However, as I mentally work out my JIT compiler, I can't see myself using anything but the most basic instructions.
I am fairly certain that Python bytecode is undocumented. It's a messy place and it's a scary place. I'll offer an alternative at the end, but first... why is it scary? First of all, Python source is compiled to bytecode, and that bytecode gets run on a virtual machine. That virtual machine is definitely undocumented. You can take a look here at the opcode commit history. Notice that it changes... a lot. Beyond that, you also have things like f-strings getting implemented, which means the underlying C code is going to change. It's a very messy place because so many people are changing it.
Now, here is where my suggestion comes in. The reason that stuff is complicated is that many people are changing it. Your daughter is 11 weeks old; she ain't gonna be programming for at least another 3 weeks ;). So instead, why not make your own language? I recommend reading https://craftinginterpreters.com/contents.html. It's completely free and walks you through making a tree-walking interpreter for an AST in Java, followed by a bytecode virtual machine with various chunk operations (just like Python has). It's a very easy-to-read book with good, thought-provoking questions at the end of each chapter. You could make a completely customizable language that you ultimately control. Want to change an opcode? Go for it. Want all users to be on the same playing field and guarantee backwards compatibility? It's your programming language, do whatever you want.
At the end of the day this is something that is going to be fun for you. And if you have to worry about opcodes being added or changed or overloaded, you're probably not going to be having fun. And when something eventually goes wrong you're going to have to debug your interpreted language, your JIT compiler and Python's source. That's just a headache in the making.
I have read that Julia has access to the AST of the code it runs. What exactly does this mean? Is it that the runtime can access it, that code itself can access it, or both?
Building on this:
Is this a key difference of Julia with respect to other dynamic languages, specifically Python?
What are the practical benefits of being able to access the AST?
What would be a good example of something that you can't easily do in Python, but that you can do in Julia, because of this?
What distinguishes Julia from languages like Python is that Julia allows you to intercept code before it is evaluated. Macros are just functions, written in Julia, which let you access that code and manipulate it before it runs. Furthermore, rather than treating code as a string (like "f(x)"), it's provided as a Julian object (like Expr(:call, :f, :x)).
There are plenty of things this allows which just aren't possible in Python. The main ones are:
You can do more work at compile time, increasing performance
Two good examples of this are regexes and printf. Both of these take a format specification of some kind and interpret it in some way. Now, these can fairly straightforwardly be implemented as functions, which might look like this:
match(Regex(".*"), str)
printf("%d", num)
The problem with this is that these specifications must be re-interpreted every time the statement is run. Every time the interpreter goes over this block, the regex must be re-compiled into a state machine, and the format must be run through a mini-interpreter. On the other hand, if we implement these as macros:
match(r".*", str)
@printf("%d", num)
Then the r and @printf macros will intercept the code at compile time, and run their respective interpreters then. The regex turns into a fast state machine, and the @printf statement turns into a simple println(num). At run time the minimum of work is done, so the code is blazing fast. Now, other languages are able to provide fast regexes, for example, by providing special syntax for it – but the fact that they're not special-cased in Julia means that developers can use the same techniques in their own code.
You can make mini-compilers for, well, pretty much anything
Languages with macros tend to have more capable embedded DSLs, because you can change the semantics of the language at will. For example, the algebraic modelling language JuMP.jl. Clojure has some neat examples of this too, like its embedded logic programming language. Mathematica.jl even embeds Mathematica's semantics in Julia, so that you can write really natural symbolic expressions like @Integrate(log(x), {x,0,2}). You can fake this to a point in Python (SymPy does a good job), but not as cleanly or as efficiently.
If that doesn't convince you, consider that someone managed to implement an interactive Julia debugger in pure Julia using macros. Try that in Python.
Edit: Another great example of something that's difficult in other languages is Cartesian.jl, which lets you write generic algorithms across arrays of any number of dimensions.
I am not familiar with Julia and only first heard of it with your question, but this sounded an awful lot like Lisp (and indeed Julia seems to be a new grandchild/dialect of Lisp, from what I'm reading) and its powerful macros. The ability to access the AST at run/compile time brings a whole new dimension to the programmer's code: metaprogramming.
See http://docs.julialang.org/en/latest/manual/metaprogramming/ and especially http://docs.julialang.org/en/latest/manual/metaprogramming/#macros for some of the practical uses. Basically you can 'inject/modify' code in places where it would be impossible for python/R to do the same.
Example: loop unrolling without any copy & paste, which takes a compile time argument to easily vary how much you want to unroll the loop.
Here's an excellent resource on Julia metaprogramming: https://en.wikibooks.org/wiki/Introducing_Julia/Metaprogramming
I've found that when I ask more of Python, it doesn't use my machine's resources at 100% and it's not really fast. It's fast compared to many other interpreted languages, but compared to compiled languages I think the difference is really remarkable.
Is it possible to speed things up with a Just In Time (JIT) compiler in Python 3?
Usually a JIT compiler is the only thing that can improve performance in an interpreted language, so that's what I'm asking about; if other solutions are available, I would love to accept new answers.
First off, Python 3(.x) is a language, for which there can be any number of implementations. Okay, to this day no implementation except CPython actually implements those versions of the language. But that will change (PyPy is catching up).
To answer the question you meant to ask: CPython, 3.x or otherwise, does not, never did, and likely never will, contain a JIT compiler. Some other Python implementations (PyPy natively, Jython and IronPython by re-using JIT compilers for the virtual machines they build on) do have a JIT compiler. And there is no reason their JIT compilers would stop working when they add Python 3 support.
But while I'm here, also let me address a misconception:
Usually a JIT compiler is the only thing that can improve performances in interpreted languages
This is not correct. A JIT compiler, in its most basic form, merely removes interpreter overhead, which accounts for some of the slowdown you see, but not for the majority. A good JIT compiler also performs a host of optimizations which remove the overhead needed to implement numerous Python features in general (by detecting special cases which permit a more efficient implementation), prominent examples being dynamic typing, polymorphism, and various introspective features.
Just implementing a compiler does not help with that. You need very clever optimizations, most of which are only valid in very specific circumstances and for a limited time window. JIT compilers have it easy here, because they can generate specialized code at run time (it's their whole point), can analyze the program easier (and more accurately) by observing it as it runs, and can undo optimizations when they become invalid. They can also interact with interpreters, unlike ahead of time compilers, and often do it because it's a sensible design decision. I guess this is why they are linked to interpreters in people's minds, although they can and do exist independently.
There are also other approaches to make Python implementation faster, apart from optimizing the interpreter's code itself - for example, the HotPy (2) project. But those are currently in research or experimentation stage, and are yet to show their effectiveness (and maturity) w.r.t. real code.
And of course, a specific program's performance depends on the program itself much more than the language implementation. The language implementation only sets an upper bound for how fast you can make a sequence of operations. Generally, you can improve the program's performance much better simply by avoiding unnecessary work, i.e. by optimizing the program. This is true regardless of whether you run the program through an interpreter, a JIT compiler, or an ahead-of-time compiler. If you want something to be fast, don't go out of your way to get at a faster language implementation. There are applications which are infeasible with the overhead of interpretation and dynamicness, but they aren't as common as you'd think (and often, solved by calling into machine code-compiled code selectively).
The only Python implementation that has a JIT is PyPy. But PyPy is both a Python 2 implementation and a Python 3 implementation.
The Numba project should work on Python 3. Although it is not exactly what you asked, you may want to give it a try:
https://github.com/numba/numba/blob/master/docs/source/doc/userguide.rst
It does not support all Python syntax at this time.
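A minimal sketch of what using Numba looks like (assuming Numba is installed; the decorator compiles the function to machine code the first time it is called):
from numba import jit

@jit(nopython=True)     # nopython mode: no fall-back to Python objects
def sum_of_squares(n):
    total = 0
    for i in range(n):  # this loop runs as compiled machine code
        total += i * i
    return total

print(sum_of_squares(10000000))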
You can try the PyPy py3 branch, which is more or less Python 3 compatible, but the official CPython implementation has no JIT.
This will best be answered by some of the remarkable Python developer folks on this site.
Still I want to comment: When discussing speed of interpreted languages, I just love to point to a project hosted at this location: Computer Language Benchmarks Game
It's a site dedicated to running benchmarks. There are specified tasks to do. Anybody can submit a solution in his/her preferred language and then the tests compare the runtime of each solution. Solutions can be peer reviewed, are often further improved by others, and results are checked against the spec. In the long run this is the most fair benchmarking system to compare different languages.
As you can see from indicative summaries like this one, compiled languages are quite fast compared to interpreted languages. However, the difference is probably not so much in the exact type of compilation; it's the fact that Python (and the other languages in the graph slower than Python) are fully dynamic. Objects can be modified on the fly. Types can be modified on the fly. So some type checking has to be deferred to runtime, instead of compile time.
So while you can argue about compiler benefits, you have to take into account that there are different features in different languages. And those features may come at an intrinsic price.
Finally, when talking about speed: most often it's not the language and the perceived slowness of a language that's causing the issue, it's a bad algorithm. I never had to switch languages because one was too slow: when there's a speed issue in my code, I fix the algorithm. However, if there are time-consuming, computationally intensive loops in your code, it is usually worthwhile to recompile those. A prominent example is libraries coded in C used by scripting languages (Perl XS libs, or e.g. numpy/scipy for Python; lapack/blas are examples of libs available with bindings for many scripting languages).
If you mean a JIT as in a just-in-time compiler to a bytecode representation, then it has such a feature (since 2.2). If you mean a JIT to machine code, then no. Yet the compilation to bytecode provides a lot of performance improvement. If you want it to compile to machine code, then PyPy is the implementation you're looking for.
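To see that bytecode compilation step explicitly (it normally happens transparently on import; "my_script.py" below is a placeholder for any Python file on disk):
import py_compile
import dis

py_compile.compile("my_script.py")  # writes a .pyc without importing the module

# Or inspect the bytecode a small expression compiles to:
dis.dis(compile("x + 1", "<string>", "eval"))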
Note: pypy doesn't work with Python 3.x
If you are looking for speed improvements in a block of code, then you may want to have a look at rpythonic, which compiles down to C using PyPy. It uses a decorator that turns it into a JIT for Python.
After reading How do I protect Python code?, I decided to try a really simple extension module on Windows. I had compiled my own extension module on Linux before, but this is the first time I've compiled one on Windows. I was expecting to get a .dll file, but instead I got a .pyd file. The docs say they are kind of the same, but a .pyd must have an init[insert-module-name]() function.
Is it safe to assume they are as hard to reverse-engineer as .dll files? If not, how hard are they to reverse-engineer, on a scale from .pyc files to .dll files?
They are, as you already found out, equivalent to DLL files with a certain structure. In principle, they are equally hard to reverse-engineer: they are machine code, need very little metadata, and the code may have been optimized beyond recognition.
However, the required structure, and knowing that many functions will be handling PyObject *s and other well-defined CPython types, may have some effect. It won't really help with mapping the assembly code to C (if anything, it gets harder due to CPython-specific macros). Code that mostly interacts with Python types will look quite different from code manipulating C structs (and comparatively bloated). This may make it even harder to comprehend, or it may give away code which does nothing interesting and allow a reverse engineer to skip over it and get to your trade secrets earlier.
None of these concerns apply to pieces of code which are pure C code (i.e. do not interact with Python). And you probably have a lot of those. So it shouldn't make a significant difference in the end.
They are basically native code. But because every function has a funny argument list, it might be harder to see what each function does. I would say they are as hard as DLLs, if not harder.
Right now I'm developing mostly in C/C++, but I wrote some small utilities in Python to automate some tasks and I really love it as a language (especially the productivity).
Except for performance (a problem that can sometimes be solved thanks to the ease of interfacing Python with C modules), do you think it is proper for production use in the development of stand-alone complex applications (think, for example, of a word processor or a graphic tool)?
What IDE would you suggest? The IDLE provided with Python is not enough even for small projects in my opinion.
We've used IronPython to build our flagship spreadsheet application (40kloc production code - and it's Python, which IMO means loc per feature is low) at Resolver Systems, so I'd definitely say it's ready for production use of complex apps.
There are two ways in which this might not be a useful answer to you :-)
We're using IronPython, not the more usual CPython. This gives us the huge advantage of being able to use .NET class libraries. I may be setting myself up for flaming here, but I would say that I've never really seen a CPython application that looked "professional" - so having access to the WinForms widget set was a huge win for us. IronPython also gives us the advantage of being able to easily drop into C# if we need a performance boost. (Though to be honest we have never needed to do that. All of our performance problems to date have been because we chose dumb algorithms rather than because the language was slow.) Using C# from IP is much easier than writing a C Extension for CPython.
We're an Extreme Programming shop, so we write tests before we write code. I would not write production code in a dynamic language without writing the tests first; the lack of a compile step needs to be covered by something, and as other people have pointed out, refactoring without it can be tough. (Greg Hewgill's answer suggests he's had the same problem. On the other hand, I don't think I would write - or especially refactor - production code in any language these days without writing the tests first - but YMMV.)
Re: the IDE - we've been pretty much fine with each person using their favourite text editor; if you prefer something a bit more heavyweight then WingIDE is pretty well-regarded.
You'll find mostly two answers to that – the religious one (Yes! Of course! It's the best language ever!) and the other religious one (You've got to be kidding me! Python? No... it's not mature enough). I will maybe skip the third religion (Python?! Use Ruby!). The truth, as always, is far from obvious.
Pros: it's easy, readable, batteries included, and has lots of good libraries for pretty much everything. It's expressive, and dynamic typing makes it more concise in many cases.
Cons: as a dynamic language, it has way worse IDE support (proper syntax completion requires static typing, whether explicit as in Java or inferred as in SML), its object system is far from perfect (interfaces, anyone?), and it is easy to end up with messy code that has methods returning either int or boolean or object of some sort under unknown circumstances.
My take – I love Python for scripting, automation, tiny webapps and other simple, well-defined tasks. In my opinion it is by far the best dynamic language on the planet. That said, I would never use it, or any other dynamically typed language, to develop an application of substantial size.
Say – it would be fine to use it for Stack Overflow, which has three developers and, I guess, no more than 30k lines of code. For bigger things, your development would be super fast at first, and then, once the team and codebase grow, things slow down more than they would with Java or C#. You need to offset the lack of compile-time checks by writing more unit tests, refactorings get harder because you never know what your refactoring broke until you run all the tests or even the whole big app, etc.
Now – decide on how big your team is going to be and how big the app is supposed to be once it is done. If you have 5 or fewer people and the target size is roughly Stack Overflow, go ahead, write it in Python. You will finish in no time and be happy with a good codebase. But if you want to write the second Google or Yahoo, you will be much better off with C# or Java.
Side note on the C/C++ you mentioned: if you are not writing performance-critical software (say, a massively parallel raytracer that will run for three months rendering a film) or a very mission-critical system (say, a Mars lander that will fly three years straight and has only one chance to land right or you lose $400mln), do not use it. For web apps, most desktop apps, and most apps in general, it is not a good choice. You will die debugging pointers and memory allocation in complex business logic.
In my opinion Python is more than ready for developing complex applications. I see Python's strength more on the server side than in writing graphical clients. But have a look at http://www.resolversystems.com/. They develop a whole spreadsheet in Python using the .NET IronPython port.
If you are familiar with Eclipse, have a look at PyDev, which provides auto-completion and debugging support for Python with all the other Eclipse goodies like SVN support. The guy developing it has just been hired by Aptana, so this will be a solid choice for the future.
@Marcin:
"Cons: as a dynamic language, it has way worse IDE support (proper syntax completion requires static typing, whether explicit as in Java or inferred as in SML)"
You are right that static analysis may not provide full syntax completion for dynamic languages, but I think PyDev gets the job done very well. Furthermore, I have a different development style when programming Python. I always have an IPython session open, and with one F5 I get not only perfect completion from IPython, but object introspection and manipulation as well.
"But if you want to write the second Google or Yahoo, you will be much better off with C# or Java."
Google just rewrote Jaiku to work on top of App Engine, all in Python. And as far as I know they use a lot of Python inside Google, too.
I really like python, it's usually my language of choice these days for small (non-gui) stuff that I do on my own.
However, for some larger Python projects I've tackled, I'm finding that it's not quite the same as programming in say, C++. I was working on a language parser, and needed to represent an AST in Python. This is certainly within the scope of what Python can do, but I had a bit of trouble with some refactoring. I was changing the representation of my AST and changing methods and classes around a lot, and I found I missed the strong typing that would be available to me in a C++ solution. Python's duck typing was almost too flexible and I found myself adding a lot of assert code to try to check my types as the program ran. And then I couldn't really be sure that everything was properly typed unless I had 100% code coverage testing (which I didn't at the time).
Actually, that's another thing that I miss sometimes. It's possible to write syntactically correct code in Python that simply won't run. The compiler is incapable of telling you about it until it actually executes the code, so in infrequently-used code paths such as error handlers you can easily have unseen bugs lurking around. Even code that's as simple as printing an error message with a % format string can fail at runtime because of mismatched types.
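For instance, here is a tiny made-up example of the kind of bug that only surfaces when the line actually runs:
def report(error_count):
    # Syntactically fine, but %d requires a number; if a caller passes a
    # string, this line raises TypeError only when it actually executes.
    print("Found %d errors" % error_count)

report(3)          # works
report("three")    # TypeError: %d format: a number is required, not str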
I haven't used Python for any GUI stuff so I can't comment on that aspect.
Python is considered (among Python programmers :) to be a great language for rapid prototyping. There's not a lot of extraneous syntax getting in the way of your thought processes, so most of the work you do tends to go into the code. (Far fewer idioms are required to write good Python code than to write good C++.)
Given this, most Python (CPython) programmers subscribe to the "premature optimization is the root of all evil" philosophy. By writing high-level (and significantly slower) Python code, you can optimize the bottlenecks out using C/C++ bindings when your application is nearing completion. At that point it becomes much clearer, through proper profiling, what your processor-intensive algorithms are. This way, you write most of the code in a very readable and maintainable manner while allowing for speedups down the road. You'll see several Python library modules written in C for this very reason.
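A minimal sketch of that profiling step, using the standard-library cProfile module (hot_loop is just a placeholder for your real workload):
import cProfile

def hot_loop():
    return sum(i * i for i in range(1000000))

# Prints per-function call counts and cumulative times, which is how you
# find the bottlenecks worth pushing down into C/C++.
cProfile.run("hot_loop()")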
Most graphics libraries in Python (e.g. wxPython) are just Python wrappers around C++ libraries anyway, so you're pretty much writing to a C++ backend.
To address your IDE question, SPE (Stani's Python Editor) is a good IDE that I've used and Eclipse with PyDev gets the job done as well. Both are OSS, so they're free to try!
[Edit] @Marcin: Have you had experience writing > 30k LOC in Python? It's also funny that you should mention Google's scalability concerns, since they're among Python's biggest supporters! And a small organization called NASA also uses Python frequently ;) see "One coder and 17,000 Lines of Code Later".
Nothing to add to the other answers, besides that if you choose Python you must use something like pylint, which nobody has mentioned so far.
One way to judge what python is used for is to look at what products use python at the moment. This wikipedia page has a long list including various web frameworks, content management systems, version control systems, desktop apps and IDEs.
As it says here - "Some of the largest projects that use Python are the Zope application server, YouTube, and the original BitTorrent client. Large organizations that make use of Python include Google, Yahoo!, CERN and NASA. ITA uses Python for some of its components."
So in short, yes, it is "proper for production use in the development of stand-alone complex applications". So are many other languages, with various pros and cons. Which is the best language for your particular use case is too subjective to answer, so I won't try, but often the answer will be "the one your developers know best".
Refactoring is inevitable on larger codebases, and the lack of static typing makes this much harder in Python than in statically typed languages.
"And as far as I know they use a lot of Python inside Google too."
Well, I'd hope so; the maker of Python still works at Google, if I'm not mistaken?
As for the use of Python, I think it's a great language for stand-alone apps. It's heavily used in a lot of Linux programs, and there are a few nice widget sets out there to aid in the development of GUIs.
Python is a delight to use. I use it routinely and also write a lot of code for work in C#. There are two drawbacks to writing UI code in Python. One is that there is not a single UI framework that is accepted by the majority of the community. When you write in C#, the .NET runtime and class libraries are all meant to work together. With Python, every UI library has its own semantics, which are often at odds with the pythonic mindset in which you are trying to write your program. I am not blaming the library writers; I've tried several libraries (wxWidgets, PythonWin [a wrapper around MFC], Tkinter), and when doing so I often felt that I was writing code in a language other than Python (despite the fact that it was Python), because the libraries aren't exactly pythonic: they are ports from another language, be it C, C++ or Tk.
So for me I will write UI code in .NET (for me C#) because of the IDE & the consistency of the libraries. But when I can I will write business logic in python because it is more clear and more fun.
I know I'm probably stating the obvious, but don't forget that the quality of the development team and their familiarity with the technology will have a major impact on your ability to deliver.
If you have a strong team, then it's probably not an issue if they're familiar. But if you have people who are more 9 to 5'rs who aren't familiar with the technology, they will need more support and you'd need to make a call if the productivity gains are worth whatever the cost of that support is.
I have had only one Python experience, my trash-cli project.
I know that probably some or all of these problems depend on my inexperience with Python.
I found these things frustrating:
the difficulty of finding a good free IDE
the limited support for automatic refactoring
Moreover:
the need to introduce two levels of grouping, packages and modules, confuses me.
it seems to me that there is no widely adopted code naming convention
it seems to me that some standard library API docs are incomplete
the fact that some standard libraries are not fully object-oriented annoys me
Some Python coders tell me that they do not have these problems, though, or they say these are not problems.
Try Django or Pylons, write a simple app with both of them, and then decide which one suits you best. There are others (like TurboGears or Werkzeug) but those are the most used.