Is it safe if I just use the put and get_nowait functions on a queue that is shared between threads? When do I need to use a thread lock?
The essential idea of a Queue is to share it between multiple threads.
The Queue class implements all the required locking semantics.
So you don't have to acquire lock explicitly.
http://docs.python.org/library/queue.html#module-Queue
The Queue module (called queue in Python 3) is specifically designed to work in multithreaded environments.
If that's what you're using, you don't need any additional locking.
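For example, here is a minimal sketch (the producer/consumer names are just illustrative) of one thread calling put() while another polls with get_nowait(), with no explicit lock around the queue:

import queue
import threading

q = queue.Queue()  # the Queue's internal lock makes put()/get_nowait() safe from any thread

def producer():
    for i in range(5):
        q.put(i)  # no external lock required

def consumer():
    received = 0
    while received < 5:
        try:
            item = q.get_nowait()  # non-blocking; raises queue.Empty if nothing is available
        except queue.Empty:
            continue  # nothing yet, poll again
        print("got", item)
        received += 1

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()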
Related
I have been using queue.Queue extensively in situations where I execute multiple threads, e.g. by using concurrent.futures.ThreadPoolExecutor.
I've read in blogs that queue.Queue should be thread-safe, but does that mean it's thread-safe only under the assumption that the Python interpreter executes one thread at a time (the GIL), or is it also safe in situations using multiprocessing, which side-steps the GIL by using subprocesses instead of threads?
https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor
ProcessPoolExecutor uses a multiprocessing.queues.Queue for the call queue and an mp_context.SimpleQueue (from multiprocessing) for the result queue; these are used to communicate between a local thread and the worker processes.
Nice graphic of ProcessPoolExecutor
The concurrent.futures.ProcessPoolExecutor machinery uses multiprocessing queues to communicate between threads and processes.
The multiprocessing.queues.Queue docs specifically state that it is thread- and process-safe.
At the bottom of the queue documentation there is a note referring to the multiprocessing.Queue object as being "... for use in a multi-processing (rather than multi-threading) context".
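As a rough sketch of what that means in practice (the square worker below is purely illustrative, not part of the executor's API), the executor hides all of that queueing, so the calling code never touches a lock or a queue directly:

from concurrent.futures import ProcessPoolExecutor

def square(n):
    # Illustrative worker; it runs in a separate worker process.
    return n * n

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Arguments and results travel over the executor's internal
        # multiprocessing queues; no user-level locking is needed here.
        print(list(pool.map(square, range(10))))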
There is a Queue designed for this in the multiprocessing library:
from multiprocessing import Queue
It pickles the objects and sends the byte data over an underlying pipe, and it is both thread- and process-safe.
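A minimal sketch of handing data back from a child process through multiprocessing.Queue (the worker function name is illustrative):

from multiprocessing import Process, Queue

def worker(q):
    # Runs in the child process; putting on the shared queue needs no extra lock.
    q.put("result from child")

if __name__ == "__main__":
    q = Queue()  # documented as both thread- and process-safe
    p = Process(target=worker, args=(q,))
    p.start()
    print(q.get())  # blocks until the child has put something
    p.join()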
I have a system which contains a Queue and two types of instances:
1. instances that push to the Queue
2. instances that pull from the Queue
I want to push to and pull from the Queue at the same time, but I'm not sure (I didn't find it in the documentation and didn't read the implementation) whether the Queue protects against collisions when the same memory is accessed at the same time.
For example:
There are zero elements in the Queue -> I push and pull at the same time.
My question is: if the Queue does not protect against this, is there any way to lock only the entrance or the exit of the Queue?
The Queue class knows about concurrent access and handles it correctly. If you pull from the queue (queue.get()) and there is nothing in the queue then the call will block or time out. If you push to the queue (queue.put()) then this will be correctly handled and the call will only block or time out if you have set a maximum size for the queue and it is full.
Documentation says:
The queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics. It depends on the availability of thread support in Python; see the threading module.
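To make the empty-queue scenario from the question concrete, here is a small sketch: the pulling thread calls get() on an empty queue and simply blocks until the pushing thread's put() lands, with no collision and no extra lock:

import queue
import threading
import time

q = queue.Queue()

def puller():
    print("waiting for an item...")
    item = q.get()  # queue is empty, so this blocks until a put() arrives
    print("pulled:", item)

def pusher():
    time.sleep(1)  # make sure the puller is already waiting
    q.put("hello")  # safely wakes the blocked get()

t1 = threading.Thread(target=puller)
t2 = threading.Thread(target=pusher)
t1.start(); t2.start()
t1.join(); t2.join()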
I would like to combine threading and asyncio with some synchronisation.
For example: A thread write-combines frames from a camera into some variable or buffer. Multiple readers (asyncio or threads) are woken on each write to take the latest available frame.
I have tried deriving from asyncio.Event, to no avail:
import asyncio

class EventThreadSafe(asyncio.Event):
    def set(self):
        self._loop.call_soon_threadsafe(super().set)
Is there a mechanism that does this already (https://github.com/aio-libs/janus?), or what is the best way to implement it?
You can use asyncio.Queue, but it is not thread-safe, only task-safe. If you want thread safety, use queue.Queue, but that is not task-safe, as it will block your thread. Personally, for multiprocess work I use a 0MQ push-pull pattern, which feeds to/from an asyncio.Queue adapter.
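As one hedged sketch of the thread-to-asyncio handoff (it shows a single consumer rather than the one-to-many fan-out from the question, and camera_thread is an illustrative name): the thread schedules the put onto the event loop with call_soon_threadsafe(), and asyncio code awaits an ordinary asyncio.Queue:

import asyncio
import threading
import time

def camera_thread(loop, frames):
    # Plain thread: it must not call asyncio.Queue methods directly,
    # so it schedules the put on the loop's own thread instead.
    for i in range(3):
        time.sleep(0.5)  # pretend to grab a frame
        loop.call_soon_threadsafe(frames.put_nowait, f"frame-{i}")

async def reader(frames):
    for _ in range(3):
        frame = await frames.get()  # ordinary asyncio consumption
        print("got", frame)

async def main():
    frames = asyncio.Queue()
    loop = asyncio.get_running_loop()
    t = threading.Thread(target=camera_thread, args=(loop, frames))
    t.start()
    await reader(frames)
    t.join()

asyncio.run(main())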
If a software project supports a version of Python to which multiprocessing has been backported, is there any reason to use threading.Lock over multiprocessing.Lock? Would a multiprocessing lock not be thread-safe as well?
For that matter, is there any reason to use synchronization primitives from threading that also exist in multiprocessing?
The threading module's synchronization primitives are lighter and faster than multiprocessing's, because they don't have to deal with shared semaphores and the like. If you are using threads, use threading's locks; processes should use multiprocessing's locks.
I would expect the thread-synchronization primitives to be considerably faster, as they can easily use a shared memory area. But I suppose you will have to run a speed test to be sure of it. Also, you might get side effects that are quite unwanted (and unspecified in the docs).
For example, a process-wide lock could very well block all threads of the process. And if it doesn't, releasing a lock might not wake up the threads of the process.
In short, if you want your code to work for sure, use the thread-synchronization primitives when you are using threads and the process-synchronization primitives when you are using processes. Otherwise it might work only on your platform, or even only with your specific version of Python.
The multiprocessing and threading packages have slightly different aims, though both are concurrency related. threading coordinates threads within one process, while multiprocessing provides a thread-like interface for coordinating multiple processes.
If your application doesn't spawn new processes that require data synchronization, multiprocessing is a bit more heavyweight, and the threading package is better suited.
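A short sketch of that rule of thumb (the bump/shout helpers are illustrative): threads within one process share a threading.Lock, while child processes get a multiprocessing.Lock passed to them explicitly:

import threading
import multiprocessing

counter = 0
tlock = threading.Lock()

def bump():
    # Threads share memory, so a threading.Lock is enough (and cheaper).
    global counter
    with tlock:
        counter += 1

def shout(plock, i):
    # Runs in a child process; the multiprocessing.Lock was passed in explicitly.
    with plock:
        print("process", i)

if __name__ == "__main__":
    threads = [threading.Thread(target=bump) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("counter =", counter)

    plock = multiprocessing.Lock()  # backed by an OS-level semaphore
    procs = [multiprocessing.Process(target=shout, args=(plock, i)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()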
Are the locks from the threading module interchangeable with those from the multiprocessing module?
You can typically use the two interchangeably, but you need to be cognizant of the differences. For example, multiprocessing.Event is backed by a named semaphore, which makes it sensitive to the platform the application runs on.
multiprocessing.Lock is backed by multiprocessing.SemLock, so it needs named semaphores too. In essence, you can use them interchangeably, but using multiprocessing's locks introduces some platform requirements on the application (namely, it doesn't run on BSD :))
I don't think so. Threading locks live within a single process, while a multiprocessing lock would likely live in shared memory.
Last time I checked, multiprocessing doesn't let you share the lock inside a Queue, which is a threading lock.
Yes, you can use locks from the multiprocessing module as normal in your one-process application, but if you're using multiprocessing, you should use its locks.