Why wasn't PyPy included in standard Python?

I was looking at PyPy and I was just wondering why it hasn't been adopted into the mainline Python distributions. Wouldn't things like JIT compilation and lower memory footprint greatly improve the speeds of all Python code?
In short, what are the main drawbacks of PyPy that cause it to remain a separate project?

PyPy is not a fork of CPython, so it could never be merged directly into CPython.
Theoretically the Python community could universally adopt PyPy, PyPy could be made the reference implementation, and CPython could be discontinued. However, PyPy has its own weaknesses:
CPython is easy to integrate with Python modules written in C, which is traditionally the way Python applications have handled CPU-intensive tasks (see for instance the SciPy project).
The PyPy JIT compilation step itself costs CPU time -- it's only through repeated running of compiled code that it becomes faster overall. This means startup times can be higher, and therefore PyPy isn't necessarily as efficient for running glue code or trivial scripts (see the timing sketch below).
PyPy and CPython behavior is not identical in all respects, especially when it comes to "implementation details" (behavior that is not specified by the language but is still important at a practical level).
CPython runs on more architectures than PyPy and has been successfully adapted to run in embedded architectures in ways that may be impractical for PyPy.
CPython's reference counting scheme for memory management arguably has more predictable performance impacts than PyPy's various GC systems, although this isn't necessarily true of all "pure GC" strategies.
PyPy does not yet fully support Python 3.x, although that is an active work item.
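A rough way to see the warm-up cost yourself is a sketch like the following (illustrative only; absolute numbers depend entirely on your machine and interpreter version). Under PyPy, the first runs typically include tracing and compilation overhead before the compiled trace kicks in; under CPython, the times stay roughly flat:

```python
import time

def work():
    s = 0
    for i in range(1_000_000):
        s += i
    return s

# Time the same function repeatedly: under PyPy the early runs pay
# the JIT's tracing/compilation cost, later runs reuse the machine code.
for run in range(5):
    t0 = time.perf_counter()
    work()
    print(f"run {run}: {time.perf_counter() - t0:.4f}s")
```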
PyPy is a great project, but runtime speed on CPU-intensive tasks isn't everything, and in many applications it's the least of many concerns. For instance, Django can run on PyPy and that makes templating faster, but CPython's database drivers are faster than PyPy's; in the end, which implementation is more efficient depends on where the bottleneck in a given application is.
Another example: you'd think PyPy would be great for games, but most GC strategies like those used in PyPy cause noticeable jitter. For CPython, most of the CPU-intensive game stuff is offloaded to the PyGame library, which PyPy can't take advantage of since PyGame is primarily implemented as a C extension (though see: pygame-cffi). I still think PyPy can be a great platform for games, but I've never seen it actually used.
PyPy and CPython have radically different approaches to fundamental design questions and make different tradeoffs, so neither one is "better" than the other in every case.

For one, it's not 100% compatible with Python 2.x, and has only preliminary support for 3.x.
It's also not something that could be merged - the Python implementation that PyPy provides is generated using a framework they have created, which is extremely cool, but also completely separate from the existing CPython implementation. It would have to be a complete replacement.
There are some very concrete differences between PyPy and CPython, a big one being how extension modules are supported - which, if you want to go beyond the standard library, is a big deal.
It's also worth noting that PyPy isn't universally faster.

See this video by Guido van Rossum. He addresses this exact question at 12:33.
Highlights:
lack of Python 3 compatibility
lack of extension support
not appropriate as glue code
speed is not everything
After all, he's the one to decide...

One reason might be that, according to the PyPy site, it currently runs only on 32- and 64-bit Intel x86 architectures, while CPython runs on other platforms as well. This is probably due to platform-specific speed enhancements in PyPy. While speed is a good thing, people often want language implementations to be as "platform-independent" as possible.

I recommend watching this keynote by David Beazley for more insight. It answers your question by clarifying the nature and intricacies of PyPy.

In addition to everything that's been said here, PyPy is not nearly as rock solid as CPython in terms of bugs. With SymPy, we've found about a dozen bugs in PyPy over the past couple of years, both in released versions and in the nightlies.
On the other hand, we've only ever found one bug in CPython, and that was in a prerelease.
Plus, don't discount the lack of Python 3 support. No one in the core Python community even cares about Python 2 any more. They are working on the next big things in Python 3.4, which will be the fifth major release of Python 3. The PyPy guys still haven't shipped support for a single one of them, so they've got some catching up to do before they can start to be contenders.
Don't get me wrong. PyPy is awesome. But it's still far from being better than CPython in a lot of very important ways.
And by the way, if you use SymPy in PyPy, you won't see a smaller memory footprint (or a speedup either). See https://bitbucket.org/pypy/pypy/issues/1447/.

Related

What's the difference between PyPy and PyPI?

This is probably a really stupid question, but what's the difference between 'PyPy' and 'PyPI'? Are they the same thing?
PyPy is an alternative implementation of Python:

PyPy is a fast, compliant alternative implementation of the Python language (2.7.13 and 3.5.3). It has several advantages and distinct features:
Speed: thanks to its Just-in-Time compiler, Python programs often run faster on PyPy. (What is a JIT compiler?) "If you want your code to run faster, you should probably just use PyPy." - Guido van Rossum (creator of Python)
Memory usage: memory-hungry Python programs (several hundreds of MBs or more) might end up taking less space than they do in CPython.
Compatibility: PyPy is highly compatible with existing Python code. It supports cffi and can run popular Python libraries like twisted and django.
Stackless: PyPy comes by default with support for stackless mode, providing micro-threads for massive concurrency.
As well as other features.
PyPI is the repository for Python packages, modules and libraries that you can install.

The Python Package Index is a repository of software for the Python programming language. There are currently 120970 packages.

What exactly is PyOpenGL-accelerate?

The title is the main question here. I had some PyOpenGL code that was running somewhat slowly on my computer, and I realized I hadn't installed PyOpenGL-accelerate. Installing it didn't change the speed at all, but most tutorials for the Python OpenGL bindings suggest that PyOpenGL-accelerate should be installed.
What exactly does this module do?
First of all, note that PyOpenGL-accelerate isn't a silver bullet: if your application is already poorly optimized, then PyOpenGL-accelerate won't gain you much, if any, additional performance.
That being said, PyOpenGL-accelerate consists of Cython accelerator modules which attempt to speed up various aspects of PyOpenGL 3.x. Thus if you're drawing with glBegin() and glEnd(), you won't gain any performance from this.
So what are Cython accelerator modules?
These modules are completely self-contained, and are created solely to run faster than the equivalent pure Python code runs in CPython. Ideally, accelerator modules will always have a pure Python equivalent to use as a fallback if the accelerated version isn’t available on a given system. The CPython standard library makes extensive use of accelerator modules.
– Python – Binary Extensions
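To see the fallback idea concretely: this is the pattern the CPython standard library itself uses (the snippet mirrors the end of Lib/heapq.py, where pure-Python definitions appear earlier in the module):

```python
# Pure-Python heappush/heappop/etc. are defined above this point;
# then CPython tries to shadow them with the C accelerator versions.
try:
    from _heapq import *
except ImportError:
    pass  # no C accelerator built; the pure-Python versions remain
```

PyOpenGL follows roughly the same idea: with PyOpenGL-accelerate installed, the Cython implementations take over; without it, everything still works, just more slowly.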
In more layman's terms: Cython is, so to speak, a mix between Python and C, with optimization and execution speed as its goals.
In relation to PyOpenGL-accelerate, this means that the various helper classes PyOpenGL offers are instead implemented in a manner that offers more performance.
From the documentation:
This set of C (Cython) extensions provides acceleration of common operations for slow points in PyOpenGL 3.x. For code which uses large arrays extensively speed-up is around 10% compared to unaccelerated code.
You could dig through the code if you want to know precisely which optimizations are defined, but OpenGL is usually built around surprisingly coarse optimizations to account for different hardware - I suppose that extends to running off an interpreter as well.

Difference between pyOpenCL and opencl4py

Today I stumbled upon a post on Stack Overflow (see also here):
We are developing opencl4py, higher level bindings. This project uses CFFI, so it works on Pypy.
The major issue we encountered with pyopencl is that 'import pyopencl' does OpenCL initialization and takes the whole virtual memory in case of NVIDIA driver, preventing from correct forking and effectively disabling multiprocessing (yes, we claim that using pyopencl disables multiprocessing at least with NVIDIA). opencl4py uses lazy OpenCL initialization, resolving this "import hell".
Later, it gained some nice features like super easy binary program caching, etc. Unfortunately, the documentation is somewhat brief. The best way to learn how it works is go through the tests.
As there is also pyOpenCL, I was wondering what the difference between these two packages is. Does anybody know where I can find an overview of the pros and cons of both packages?
Edit: To include benshope's comment as I would also be interested: what does "disable[s] multiprocessing" mean? Like, it can't run kernels on several devices at one time?
As far as I know, there is no such overview. I'll try to list some key points:
pyOpenCL is a mature project with a relatively large user base. There are tutorials, an FAQ, etc. opencl4py appeared in March 2014; it has no tutorials, FAQ and so on - only unit tests and docstrings.
pyOpenCL is a native CPython extension, whereas opencl4py uses cffi, so it works on PyPy (pyOpenCL does NOT) and does not need to be recompiled each time CPython changes version (see the cffi sketch below).
PyOpenCL has extras, such as random number generator and OpenGL interoperability.
opencl4py is extensively tested in real-world Samsung production scenarios and is being actively developed.
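To make the cffi point concrete, here is essentially the minimal ABI-mode example from cffi's documentation (binding printf rather than anything OpenCL-specific). Because the C signature is declared at runtime and nothing is compiled against CPython's C API, the same binding runs unmodified on PyPy:

```python
from cffi import FFI

ffi = FFI()
ffi.cdef("""
    int printf(const char *format, ...);   // copy-pasted from the man page
""")
C = ffi.dlopen(None)                 # None loads the standard C library (POSIX only)
arg = ffi.new("char[]", b"world")    # like C: char arg[] = "world";
C.printf(b"hi there, %s.\n", arg)    # call the real C printf
```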
what does "disable[s] multiprocessing" mean? Like, it can't run kernels on several devices at one time?
Of course it can. I was trying to say that after importing pyopencl, os.fork() or multiprocessing.Process() leads to crashes inside NVIDIA's OpenCL userspace library. It is always a bad idea to do work during import.
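The fix opencl4py is described as using, lazy initialization, boils down to not touching the driver at import time. A minimal sketch of the pattern (the names here are illustrative, not opencl4py's real API):

```python
_platforms = None   # module-level cache; nothing runs at import time

def _expensive_driver_init():
    # Hypothetical stand-in for the real driver/platform discovery call.
    return ["platform0"]

def get_platforms():
    global _platforms
    if _platforms is None:                    # first caller pays the cost
        _platforms = _expensive_driver_init()
    return _platforms
```

Because nothing expensive happens at import, a process can safely os.fork() or spawn multiprocessing workers before the first get_platforms() call.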

Python CPU and OS

Wouldn't it be possible to have an OS entirely in Python if the Python VM itself were built into hardware? Something like the good old Lisp Machine?
Suppose I have a CPU that is a hardware implementation of the Python virtual machine; then all programs written in Python would perform at the speed of assembly, wouldn't they? (Python is mostly interpreted, but we can compile it.)
If we have such a 'python-microprocessor', what about the memory and other subsystems? Would it be compatible with current memory?
Is there any information on the registers and the Python VM architecture, something similar to what we have for the 8086?
Wouldn't it be possible to have an OS entirely in Python if the Python VM itself were built into hardware? Something like the good old Lisp Machine?
Yes, theoretically it would be possible.
Suppose I have a CPU that is a hardware implementation of the Python virtual machine; then all programs written in Python would perform at the speed of assembly, wouldn't they? (Python is mostly interpreted, but we can compile it.)
Python doesn't have a speed; it's a language. The speed of the interpreter (in this case the processor) is what can be tested. But just as it's difficult to compare the performance of a RISC and a CISC processor, comparing assembly with Python will be difficult too.
If we have such a 'python-microprocessor', what about the memory and other subsystems? Would it be compatible with current memory?
The Python microprocessor would have to do the memory management (and thus the garbage collection) itself. Since that's normally done by the interpreter, the microprocessor now has to do it.
Is there any information on the registers and the Python VM architecture, something similar to what we have for the 8086?
Normally you don't access the memory directly in Python, so the registers shouldn't be relevant here.
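As for VM documentation: CPython's virtual machine is a stack machine, not a register machine, and the standard dis module lets you inspect its bytecode directly. A quick sketch (exact opcodes vary by CPython version):

```python
import dis

def add(a, b):
    return a + b

# Disassemble to see the stack-machine instructions CPython runs:
# LOAD_FAST a, LOAD_FAST b, BINARY_ADD (BINARY_OP on 3.11+), RETURN_VALUE.
dis.dis(add)
```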
Similar things were tried for Java, but none really took the world by storm.
Yeah, it might be possible, but designing new hardware is expensive. Would the return on investment justify building such a toy? I'd guess not, otherwise someone would have tried it by now. :)
Suppose I have a CPU that is a hardware implementation of the Python virtual machine; then all programs written in Python would perform at the speed of assembly, wouldn't they? (Python is mostly interpreted, but we can compile it.)
Yes, it would run at assembly speed. See this link for a comparison with AVR microcontroller assembly code: http://pycpu.wordpress.com/code-examples/speed-pycpu-vs-8bit-avr/.
It is a hardware implementation of a CPU that can execute a very limited subset of Python bytecode, but enough for if conditions and while loops with simple integers.
In the 1970s, such ideas were quite popular. The idea was to close the semantic gap between compilers/virtual machines and instruction set architectures, and thereby bring programming languages and hardware closer together. However, when Patterson and Ditzel published The Case for the Reduced Instruction Set Computer (PDF, 672KB), and after the success of RISC and the microprocessor, the idea of closing the semantic gap was basically dead.
Now, with ever increasing transistor counts the idea may become interesting again. But, as others already noted, designing chips is costly. You need a very good reason to sink so much money. But it is definitely possible. IBM and Azul have shown this with their massively parallel Java Chips.
I guess you should call Google and convince them that they urgently need a Python processor. ;-)
New operating systems are interesting and cool, and basing one on Python would be cool. Then again, Linux is so good and has so much development behind it already. It would have to be the "right time".

PyPy -- How can it possibly beat CPython?

From the Google Open Source Blog:
PyPy is a reimplementation of Python in Python, using advanced techniques to try to attain better performance than CPython. Many years of hard work have finally paid off. Our speed results often beat CPython, ranging from being slightly slower, to speedups of up to 2x on real application code, to speedups of up to 10x on small benchmarks.
How is this possible? Which Python implementation was used to implement PyPy? CPython? And what are the chances of a PyPyPy or PyPyPyPy beating their score?
(On a related note... why would anyone try something like this?)
"PyPy is a reimplementation of Python in Python" is a rather misleading way to describe PyPy, IMHO, although it's technically true.
There are two major parts of PyPy.
The translation framework
The interpreter
The translation framework is a compiler. It compiles RPython code down to C (or other targets), automatically adding in aspects such as garbage collection and a JIT compiler. It cannot handle arbitrary Python code, only RPython.
RPython is a subset of normal Python; all RPython code is Python code, but not the other way around. There is no formal definition of RPython, because RPython is basically just "the subset of Python that can be translated by PyPy's translation framework". But in order to be translated, RPython code has to be statically typed (the types are inferred, you don't declare them, but it's still strictly one type per variable), and you can't do things like declaring/modifying functions/classes at runtime either.
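A quick illustration of the kind of restriction involved (a sketch only; RPython's actual rules are subtler than this):

```python
def ok(n):
    # Fine in spirit for RPython: every variable keeps one
    # inferable type (ints throughout).
    total = 0
    for i in range(n):
        total += i
    return total

def not_ok(flag):
    # Not RPython: x is an int on one branch and a str on the other,
    # so no single static type can be inferred for it.
    if flag:
        x = 1
    else:
        x = "one"
    return x
```

Both functions are valid Python, but only the first has the type stability the translation framework needs.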
The interpreter then is a normal Python interpreter written in RPython.
Because RPython code is normal Python code, you can run it on any Python interpreter. But none of PyPy's speed claims come from running it that way; this is just for a rapid test cycle, because translating the interpreter takes a long time.
With that understood, it should be immediately obvious that speculations about PyPyPy or PyPyPyPy don't actually make any sense. You have an interpreter written in RPython. You translate it to C code that executes Python quickly. There the process stops; there's no more RPython to speed up by processing it again.
So "How is it possible for PyPy to be faster than CPython" also becomes fairly obvious. PyPy has a better implementation, including a JIT compiler (it's generally not quite as fast without the JIT compiler, I believe, which means PyPy is only faster for programs susceptible to JIT-compilation). CPython was never designed to be a highly optimising implementation of the Python language (though they do try to make it a highly optimised implementation, if you follow the difference).
The really innovative bit of the PyPy project is that they don't write sophisticated GC schemes or JIT compilers by hand. They write the interpreter relatively straightforwardly in RPython, and for all RPython is lower level than Python it's still an object-oriented garbage collected language, much more high level than C. Then the translation framework automatically adds things like GC and JIT. So the translation framework is a huge effort, but it applies equally well to the PyPy python interpreter however they change their implementation, allowing for much more freedom in experimentation to improve performance (without worrying about introducing GC bugs or updating the JIT compiler to cope with the changes).

It also means when they get around to implementing a Python 3 interpreter, it will automatically get the same benefits. And any other interpreters written with the PyPy framework (of which there are a number at varying stages of polish). And all interpreters using the PyPy framework automatically support all platforms supported by the framework.
So the true benefit of the PyPy project is to separate out (as much as possible) all the parts of implementing an efficient platform-independent interpreter for a dynamic language. And then come up with one good implementation of them in one place, that can be re-used across many interpreters. That's not an immediate win like "my Python program runs faster now", but it's a great prospect for the future.
And it can run your Python program faster (maybe).
Q1. How is this possible?
Manual memory management (which is what CPython does with its reference counting) can be slower than automatic management in some cases.
Limitations in the implementation of the CPython interpreter preclude certain optimisations that PyPy can do (e.g. fine-grained locks).
As Marcelo mentioned, the JIT. Being able to confirm the type of an object on the fly can save you multiple pointer dereferences on the way to the method you want to call.
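As a hedged illustration of that point: in a type-stable loop like the sketch below, a tracing JIT can observe that the operands are always floats and compile the addition down to a machine instruction, whereas CPython repeats the generic protocol (check the type, find its addition slot, box the result) on every iteration:

```python
def total(xs):
    acc = 0.0
    for x in xs:
        acc = acc + x   # always float + float: easy for a JIT to specialize
    return acc

print(total([0.5] * 4))  # 2.0
```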
Q2. Which Python implementation was used to implement PyPy?
The PyPy interpreter is implemented in RPython, which is a statically typed subset of Python (the language, not the CPython interpreter). Refer to https://pypy.readthedocs.org/en/latest/architecture.html for details.
Q3. And what are the chances of a PyPyPy or PyPyPyPy beating their score?
That would depend on the implementation of these hypothetical interpreters. If one of them, for example, took the source, did some kind of analysis on it, and converted it directly into tight, target-specific assembly code after running for a while, I imagine it would be quite a bit faster than CPython.
Update: Recently, on a carefully crafted example, PyPy outperformed a similar C program compiled with gcc -O3. It's a contrived case but does exhibit some ideas.
Q4. Why would anyone try something like this?
From the official site (https://pypy.readthedocs.org/en/latest/architecture.html#mission-statement):
We aim to provide:

a common translation and support framework for producing implementations of dynamic languages, emphasizing a clean separation between language specification and implementation aspects. We call this the RPython toolchain.

a compliant, flexible and fast implementation of the Python language which uses the above toolchain to enable new advanced high-level features without having to encode the low-level details.

By separating concerns in this way, our implementation of Python - and other dynamic languages - is able to automatically generate a Just-in-Time compiler for any dynamic language. It also allows a mix-and-match approach to implementation decisions, including many that have historically been outside of a user's control, such as target platform, memory and threading models, garbage collection strategies, and optimizations applied, including whether or not to have a JIT in the first place.
The C compiler gcc is implemented in C, and the Haskell compiler GHC is written in Haskell. Do you have any reason for the Python interpreter/compiler not to be written in Python?
PyPy is implemented in Python, but it implements a JIT compiler to generate native code on the fly.
The reason to implement PyPy on top of Python is probably that it is simply a very productive language, especially since the JIT compiler makes the host language's performance somewhat irrelevant.
PyPy is written in RPython (Restricted Python), a subset of the Python language. It does not run on top of the CPython interpreter, as far as I know; AFAIK, the PyPy interpreter is compiled to machine code, so when installed it does not use a Python interpreter at runtime.
Your question seems to assume that the PyPy interpreter runs on top of CPython while executing code.
Edit: Yes, to use PyPy you first translate the PyPy Python code, either to C (built with gcc), to JVM bytecode, or to .NET CLI code. See Getting Started.
