Why aren't there any real lightweight threads for Python? - python

I'm new to Python, and it seems that the multiprocessing and threading modules are not very interesting and suffer from the same problems as threads in Perl. Is there a technical reason why the interpreter can't use lightweight threads such as POSIX threads to make an efficient thread implementation that really runs on several cores?

It is using POSIX threads. The problem is the GIL.
Note that the GIL is not part of the Python spec --- it's part of the CPython reference implementation. Jython, for example, does not suffer from this problem.
That said, have you looked into Stackless?

Piotr,
You might want to take a look at Stackless (http://www.stackless.com/), which is a modified version of Python that runs lightweight tasklets in a message-passing (Erlang-style) fashion.
I'm not sure if you're looking for a multicore solution, but poking around in Stackless might give you what you're looking for.
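For a taste, here's a minimal sketch of the tasklet/channel style (assuming a Stackless interpreter; tasklet and channel are its core primitives):

    import stackless

    def pinger(ch):
        ch.send("ping")                 # blocks until a receiver is ready
        print(ch.receive())

    def ponger(ch):
        print(ch.receive())
        ch.send("pong")

    ch = stackless.channel()
    stackless.tasklet(pinger)(ch)       # schedule two cooperative tasklets
    stackless.tasklet(ponger)(ch)
    stackless.run()                     # run the scheduler until all finish

Note these tasklets are cooperatively scheduled inside one OS thread -- cheap concurrency, not multicore parallelism.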
Ben

Related

Python modules "Processing", "Multiprocessing" and other concurrency modules: what are the differences?

I am starting to read up over possible ways to parallelise Python code.
DISCLAIMER. This is NOT a question about Multiprocessing vs Multithreading.
At this link https://ipyparallel.readthedocs.io/en/latest/demos.html one finds references to several concurrency packages for Python that avoid the GIL (see also https://scipy.github.io/old-wiki/pages/ParallelProgramming):
- IPython1
- mpi4py
- parallel python
- Numba
There is also a multiprocessing package:
https://docs.python.org/3/library/multiprocessing.html
And another one called processing:
https://pypi.org/project/processing/
First of all, the difference between the latter two is not at all clear to me; what is the difference between using the multiprocessing module and the processing module?
In general, I fail to understand the differences between them all -- and differences must exist, given that some developers made the effort to create mpi4py, a Python version of the MPI used in C++. I guess this is not just about the dualism between "threading" and "multiprocessing" approaches, where in one case memory is shared while in the other each process has its own memory and interpreter; something more must distinguish all of these different packages.
Thanks to all of those who will dedicate time to answer this!
The difference is that the last version of processing was released in April of 2008 and multiprocessing was added in Python 2.6 in October 2008.
processing was a library that was used before multiprocessing was distributed with Python.
As for the specific differences between the other modules designed for multiprocessing: the scipy page you linked says that "This is a subject for graduate courses in computer science, and I'm not going to address it here....there are some python tools you can use to implement the things you learn in that graduate course." While they admit that may be a bit of an exaggeration, some independent study of multiprocessing in general will be required to discern the differences between these libraries. You should probably just stick to the built-in multiprocessing module for your initial experiments while you learn how it works. Once you're more comfortable with multiprocessing, you might want to check out the pathos framework.
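As a starting point, here's a minimal sketch of the built-in module in action -- mapping a CPU-bound function across a pool of worker processes, which sidesteps the GIL by using separate interpreters:

    from multiprocessing import Pool

    def square(n):
        return n * n

    if __name__ == "__main__":          # guard required where workers are spawned
        with Pool(processes=4) as pool:
            print(pool.map(square, range(10)))   # [0, 1, 4, ..., 81]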
But here are the basics for the packages you mention:
Numba adds decorators that automatically compile functions to make them run faster; it isn't really a multiprocessing tool so much as a JIT compilation tool (see the sketch after this list).
Parallel Python overcomes the GIL to utilize multiple cores or multiple computers; it's designed to be easy to use and to handle all the complex stuff behind the scenes.
MPI for Python is like Parallel Python, with less emphasis on simplicity.
IPython is a toolkit with many features, including a shell and Jupyter kernel; it's also not really a multiprocessing tool.
Keep in mind that plenty of libraries/modules do the same thing; there doesn't need to be a reason that more than one exists. Use whatever works for you.
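To make the Numba point above concrete, here's a minimal sketch (assuming numba is installed; sum_of_squares is just an illustrative function):

    from numba import njit

    @njit                               # JIT-compiles to machine code on first call
    def sum_of_squares(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    print(sum_of_squares(10_000_000))   # the first call pays the compile cost

Note this speeds up a single thread of numeric code; it doesn't by itself distribute work across cores.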

Could pypy speed up parts of my python code?

I need PyPy to speed up my Python code, but PyPy doesn't support a lot of the modules I need (e.g. GNU Radio). Could I use PyPy to speed up just parts of my Python code, i.e. only some of my Python files? How can I do that?
No, you can't. You can only have one interpreter instance running all of the code in a single program at a time. The exception is if you break out some of your functionality into a totally separate program that communicates with the rest of your code through some form of inter-process communication; then you can run those separate programs however you like (a sketch follows below). But for code that is not separated like that, it's not possible.
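As a rough sketch of that exception, you could run the hot part as a separate script under PyPy and exchange JSON over a pipe (here "pypy" on the PATH and worker.py are assumptions for illustration):

    import json
    import subprocess

    def compute_in_pypy(numbers):
        # worker.py, run under PyPy, would read JSON from stdin,
        # compute, and print JSON to stdout.
        proc = subprocess.run(
            ["pypy", "worker.py"],
            input=json.dumps(numbers),
            capture_output=True,
            text=True,
            check=True,
        )
        return json.loads(proc.stdout)

The serialization and process-startup overhead can easily swamp any speedup, so this only pays off for chunky, long-running work.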
It will probably be more straightforward to adapt the entirety of your code to work with PyPy one way or another, instead of trying to break out bits and pieces. If that's absolutely not possible, then PyPy probably can't help you.
No, you can't. And GNU Radio does the signal processing and scheduling in C++, so that part is totally opaque to your Python interpreter anyway. Also, GNU Radio itself is highly optimized and contains specialized implementations of most of the CPU-intensive tasks for SSE, SSE4, and some NEON.
I need pypy to speed up my python code.
I doubt that. If your program runs too slowly, it's probably nothing your Python interpreter can solve -- you might have to look into what is taking so much time, and solve it at a higher level.

Confusion about multithreading in Python and C

AFAIK, Python (using import thread) and C# don't do "real" multithreading, meaning all threads run on one CPU core.
But in C, using pthreads on Linux, you get real multithreading.
Is this true?
Assuming it is true, is there any difference between them when you have only one CPU core (which is what I have in a VM)?
Python uses something called a Global Interpreter Lock (GIL), which means that although Python threads are real native threads, only one of them can execute Python bytecode at a time.
There is more documentation on the Python wiki: https://wiki.python.org/moin/GlobalInterpreterLock
There shouldn't be a real performance difference on single-core systems. On multicore systems the difference will vary based on what you do. (I/O is for the most part not affected by the GIL.)
I'm not aware of how C# works internally, but for CPython (the "official" Python interpreter) it is true: threads are not really parallel due to the GIL.
Other implementations of the Python interpreter do not suffer from this problem, and neither does C's pthreads library.
However, if you only have one CPU you won't notice any difference.
As a side note: if you need real parallelism in CPython you could use the multiprocessing module, which uses processes instead of threads (see the sketch below).
EDIT:
Also, the thread module is deprecated; you should consider using threading instead.
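To make the threads-vs-processes distinction concrete, here's a rough benchmark sketch (timings will vary by machine; under CPython's GIL the threaded run is effectively serialized, while the process run can use multiple cores):

    import time
    from threading import Thread
    from multiprocessing import Process

    def burn():
        total = 0
        for i in range(10_000_000):     # pure-Python CPU-bound loop
            total += i

    def time_four(worker_cls):
        workers = [worker_cls(target=burn) for _ in range(4)]
        start = time.perf_counter()
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        return time.perf_counter() - start

    if __name__ == "__main__":
        print("threads:  ", time_four(Thread))    # serialized by the GIL
        print("processes:", time_four(Process))   # can run in parallel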

Why wasn't PyPy included in standard Python?

I was looking at PyPy and I was just wondering why it hasn't been adopted into the mainline Python distributions. Wouldn't things like JIT compilation and lower memory footprint greatly improve the speeds of all Python code?
In short, what are the main drawbacks of PyPy that cause it to remain a separate project?
PyPy is not a fork of CPython, so it could never be merged directly into CPython.
Theoretically the Python community could universally adopt PyPy, PyPy could be made the reference implementation, and CPython could be discontinued. However, PyPy has its own weaknesses:
CPython is easy to integrate with Python modules written in C, which is traditionally the way Python applications have handled CPU-intensive tasks (see for instance the SciPy project).
The PyPy JIT compilation step itself costs CPU time -- it's only through repeated running of compiled code that it becomes faster overall. This means startup times can be higher, and therefore PyPy isn't necessarily as efficient for running glue code or trivial scripts.
PyPy and CPython behavior is not identical in all respects, especially when it comes to "implementation details" (behavior that is not specified by the language but is still important at a practical level).
CPython runs on more architectures than PyPy and has been successfully adapted to run in embedded architectures in ways that may be impractical for PyPy.
CPython's reference counting scheme for memory management arguably has more predictable performance impacts than PyPy's various GC systems, although this isn't necessarily true of all "pure GC" strategies.
PyPy does not yet fully support Python 3.x, although that is an active work item.
PyPy is a great project, but runtime speed on CPU-intensive tasks isn't everything, and in many applications it's the least of many concerns. For instance, Django can run on PyPy and that makes templating faster, but CPython's database drivers are faster than PyPy's; in the end, which implementation is more efficient depends on where the bottleneck in a given application is.
Another example: you'd think PyPy would be great for games, but most GC strategies like those used in PyPy cause noticeable jitter. For CPython, most of the CPU-intensive game stuff is offloaded to the PyGame library, which PyPy can't take advantage of since PyGame is primarily implemented as a C extension (though see: pygame-cffi). I still think PyPy can be a great platform for games, but I've never seen it actually used.
PyPy and CPython have radically different approaches to fundamental design questions and make different tradeoffs, so neither one is "better" than the other in every case.
For one, it's not 100% compatible with Python 2.x, and has only preliminary support for 3.x.
It's also not something that could be merged -- the Python implementation provided by PyPy is generated using a framework they have created, which is extremely cool, but also completely disparate from the existing CPython implementation. It would have to be a complete replacement.
There are some very concrete differences between PyPy and CPython, a big one being how extension modules are supported - which, if you want to go beyond the standard library, is a big deal.
It's also worth noting that PyPy isn't universally faster.
See this video by Guido van Rossum. He talks about the same question you asked at 12 min 33 secs.
Highlights:
lack of Python 3 compatibility
lack of extension support
not appropriate as glue code
speed is not everything
After all, he's the one to decide...
One reason might be that according to PyPy site, it currently runs only on 32- and 64-bit Intel x86 architecture, while CPython runs on other platforms as well. This is probably due to platform-specific speed enhancements in PyPy. While speed is a good thing, people often want language implementations to be as "platform-independent" as possible.
I recommend watching this keynote by David Beazley for more insights. It answers your question by giving clarity on nature & intricacies of PyPy.
In addition to everything that's been said here, PyPy is not nearly as rock solid as CPython in terms of bugs. With SymPy, we've found about a dozen bugs in PyPy over the past couple of years, both in released versions and in the nightlies.
On the other hand, we've only ever found one bug in CPython, and that was in a prerelease.
Plus, don't discount the lack of Python 3 support. No one in the core Python community even cares about Python 2 any more. They are working on the next big things in Python 3.4, which will be the fifth major release of Python 3. The PyPy guys still haven't fully supported even one of them. So they've got some catching up to do before they can start to be contenders.
Don't get me wrong. PyPy is awesome. But it's still far from being better than CPython in a lot of very important ways.
And by the way, if you use SymPy in PyPy, you won't see a smaller memory footprint (or a speedup either). See https://bitbucket.org/pypy/pypy/issues/1447/.

Named semaphores in Python?

I have a script in Python which uses a resource that cannot be used by more than a certain number of concurrently running scripts.
Classically, this would be solved with named semaphores, but I cannot find those in the documentation of the multiprocessing or threading modules.
Am I missing something, or are named semaphores simply not implemented/exposed by Python? And more importantly, if they are not, what is the best way to emulate one?
Thanks,
Boaz
PS. For reasons which are not so relevant to this question, I cannot aggregate the tasks into a continuously running process/daemon or work with spawned processes -- both of which, it seems, would have worked with the Python API.
I suggest a third-party extension, ideally the posix_ipc one -- see in particular the semaphore section in its docs.
These modules are mostly about exposing "System V IPC" (including semaphores) in a unixy way, but at least one of them (posix_ipc specifically) is claimed to work with Cygwin on Windows (I haven't verified that claim). There are some documented limitations on FreeBSD 7.2 and Mac OSX 10.5, so take care if those platforms are important to you.
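A minimal usage sketch with posix_ipc (the semaphore name and the limit of 3 are illustrative; POSIX semaphore names must start with a slash):

    import posix_ipc

    sem = posix_ipc.Semaphore(
        "/script_slots",                # hypothetical name shared by all instances
        flags=posix_ipc.O_CREAT,        # create the semaphore if it doesn't exist
        initial_value=3,                # at most 3 scripts may hold it at once
    )
    sem.acquire()                       # blocks while 3 instances are running
    try:
        pass                            # ... use the limited resource ...
    finally:
        sem.release()
        sem.close()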
You can emulate them by using the filesystem instead of a kernel path (named semaphores are implemented this way on some platforms anyhow). You'll have to implement sem_[open|wait|post|unlink] yourself, but it ought to be relatively trivial to do so. Your synchronization overhead might be significant (depending on how often you have to fiddle with the semaphore in your app), so you might want to initialize a ramdisk when you launch your process in which to store named semaphores.
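One way to sketch that emulation: keep N slot files and claim one with a non-blocking flock (the paths and slot count here are assumptions; POSIX-only, since it uses fcntl):

    import fcntl
    import os
    import time

    SLOTS = 3                           # illustrative concurrency limit
    SEM_DIR = "/tmp/my_semaphore"       # a ramdisk-backed path would cut overhead

    def acquire_slot():
        os.makedirs(SEM_DIR, exist_ok=True)
        while True:
            for i in range(SLOTS):
                fd = os.open(os.path.join(SEM_DIR, "slot%d" % i),
                             os.O_CREAT | os.O_RDWR)
                try:
                    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                    return fd           # holding the fd holds the slot
                except BlockingIOError:
                    os.close(fd)        # slot taken; try the next one
            time.sleep(0.1)             # all slots busy; poll and retry

    def release_slot(fd):
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

A nice property of flock is that the kernel releases the lock if the holding process dies, so crashed scripts don't leak slots.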
Alternatively if you're not comfortable rolling your own, you could probably wrap boost::interprocess::named_semaphore (docs here) in a simple extension module.
