Correctness of modified consumer/producer - python

I am creating a Sound class to play notes and would like feedback on the correctness and conciseness of my design. This class differs from the typical consumer/producer in two ways:
The consumer should respond to events, such as a request to shut down the thread, and otherwise continue forever. The typical consumer/producer exits when the queue is empty; a thread blocked in queue.get, for example, cannot handle additional notifications.
Each set of notes submitted by the producer should overwrite any unprocessed notes remaining on the queue.
Originally I had the consumer process one note at a time using the queue module. I found continually acquiring and releasing the lock without any competition to be inefficient, and as previously noted, queue.get prevents waiting on additional events. So instead of building on that, I rewrote it as:
import threading

queue = []                          # holds at most one pending batch of notes
condition = threading.Condition()
interrupt = threading.Event()       # tells the consumer to abandon the current batch
stop = threading.Event()            # tells the consumer to shut down

def producer():
    while some_condition:
        ns = get_notes()            # [(float, float)]
        with condition:
            queue.clear()           # overwrite any unprocessed notes
            queue.append(ns)
            interrupt.set()
            condition.notify()
    with condition:
        stop.set()
        condition.notify()
    thread.join()

def consumer():
    while not stop.is_set():
        with condition:
            while not (queue or stop.is_set()):
                condition.wait()
            if stop.is_set():
                break
            interrupt.clear()
            ns = queue.pop()
        ss = gen_samples(ns)        # iterator/fast
        for b in grouper(ss, size / 2):
            if interrupt.is_set() or stop.is_set():
                break
            stream.write(b)

thread = threading.Thread(target=consumer)
thread.start()
producer()
My questions are as follows:
Is this thread-safe? I want to specifically point out my use of is_set without locks or synchronization (in the for-loop).
Can the events be replaced with boolean variables? I believe so, as conflicting writes in both threads (a data race) are guarded by the condition variable. There is a race condition between setting and checking events, but I do not believe it affects program flow.
Is there a more efficient approach/algorithm utilizing different synchronization primitives from the threading module?
edit: Found and fixed a possible deadlock described in Why does Python threading.Condition() notify() require a lock?

Analyzing thread-safety in Python can take into account the Global Interpreter Lock (GIL): no two threads will execute Python code simultaneously. Assignments to variables or object fields are effectively atomic (there are no half-assigned variables) and changes propagate effectively immediately to other threads.
This means that your use of Event.is_set() is already equivalent to using plain booleans. An Event is a bool guarded by a Condition. The is_set() method checks the boolean directly. The set() method acquires the Condition, sets the boolean, and notifies all waiting threads. The wait() method waits until set() is invoked. The clear() method acquires the Condition and unsets the boolean. Since you never wait() for any Event, and setting the boolean is atomic, the Condition inside the Event is effectively unused.
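To make this concrete, here is a minimal sketch (my illustration, not the original code; submit() and shutdown() are hypothetical names) of the producer side with the Events replaced by plain booleans. All writes still happen under condition, and the GIL makes the consumer's unguarded reads safe:

import threading

queue = []
condition = threading.Condition()
interrupt = False  # was threading.Event()
stop = False       # was threading.Event()

def submit(ns):
    # Producer side: overwrite pending notes and signal the consumer.
    global interrupt
    with condition:
        queue.clear()
        queue.append(ns)
        interrupt = True   # a plain assignment; atomic under the GIL
        condition.notify()

def shutdown():
    global stop
    with condition:
        stop = True
        condition.notify()

The consumer would then read interrupt and stop directly instead of calling is_set().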
This might get rid of a couple of locks, but isn't really a huge efficiency win. A Condition is still an abstraction over a lock, but the built-in Queue type uses locks directly. Thus, I would assume that the built-in queue is no less performant than your solution, even for a single consumer.
Your main issue with the built-in queue is that “continually acquiring and releasing the lock without any competition [is] inefficient”. This is wrong on two counts:
Due to Python's GIL, there is little competition in either case.
Acquiring uncontested locks is very efficient.
So while your solution is probably sufficiently correct (I can see no opportunity for deadlock), it is unlikely to be particularly efficient. (The code as originally posted also had some small mistakes, like a missing colon and joining the consumer function instead of the thread.)
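For comparison, here is a sketch (my own, reusing the OP's helper names) of the same overwrite-and-shutdown semantics on top of the built-in queue: a one-slot Queue plays the role of the list, a None sentinel replaces the stop event, and polling q.empty() per buffer replaces the interrupt event:

import queue

q = queue.Queue(maxsize=1)

def producer():
    while some_condition:
        ns = get_notes()
        try:
            q.get_nowait()        # discard an unprocessed batch, if any
        except queue.Empty:
            pass
        q.put(ns)
    q.put(None)                   # sentinel: ask the consumer to shut down

def consumer():
    while True:
        ns = q.get()
        if ns is None:
            break
        for b in grouper(gen_samples(ns), size / 2):
            if not q.empty():     # a newer batch (or the sentinel) is pending
                break
            stream.write(b)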
If you are seeing poor performance with Python threads that's probably because of CPython, not because of the Queue type. I already mentioned that only one thread can run at a time due to the GIL. If multiple threads want to run, they must be scheduled by the operating system to do so and acquire the GIL. Each thread will wait for 5ms before asking the running thread to give up the GIL (in a manner quite similar to your interrupt flag). And then the thread can do useful work like acquiring a lock for a critical section that must not be interrupted by other threads.
Possibly, the solution could be to avoid CPython's threads.
If you have multiple CPU-bound tasks, you must use multiple processes. CPython's threads will not run in parallel. However, communication between processes is more expensive.
Consider whether you can combine the producer+consumer directly, possibly using features such as generators.
For an easier time with juggling multiple tasks in the same thread, consider using async/await. Event loops are provided by the asyncio module. This is just as fast as Python's threads, with the caveat that tasks don't pre-empt (interrupt) each other. But this can be an advantage: since a task can only be suspended at an await, you don't need most locks and it is easier to reason about the correctness of the code. The downside is that async/await might have even higher latency than using threads.
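As a rough sketch (my own, assuming an awaitable audio write and an async source of notes, neither of which is in the original code), the overwrite semantics map naturally onto task cancellation:

import asyncio

async def play(ns):
    for b in grouper(gen_samples(ns), size / 2):
        await stream.write(b)          # assumed: an awaitable audio write

async def producer():
    task = None
    while some_condition:
        ns = await get_notes_async()   # hypothetical async version of get_notes()
        if task is not None and not task.done():
            task.cancel()              # overwrite: abandon the pending notes
        task = asyncio.create_task(play(ns))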
Python has a concept of “executors” that make it easy and efficient to run tasks in separate threads (for I/O-bound tasks) or separate processes (for CPU-bound tasks).
For communicating between multiple processes, use the types from the multiprocessing module (e.g. Queue, Connection, or Value).
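For instance, a minimal generic sketch (not tied to the OP's code) of pushing CPU-bound work into worker processes with an executor:

from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Worker processes, so the GIL is not a bottleneck for the computation.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(cpu_bound, [10**6] * 4))
    print(results)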

Related

Using async-await over Thread + Queue or ThreadPoolExecutor?

I've never used the async-await syntax, but I do often need to make HTTP/S requests and parse responses while awaiting future responses. To accomplish this task, I currently use the ThreadPoolExecutor class, which executes the calls asynchronously anyway; effectively I'm achieving (I believe) the same result I would get with more lines of code using async-await.
Operating under the assumption that my current implementations work asynchronously, I am wondering how the async-await implementation would differ from that of my original one which used Threads and a Queue to manage workers; it also used a Semaphore to limit workers.
That implementation was devised under the following conditions:
There may be any number of requests
Total number of active requests may be 4
Only send next request when a response is received
The basic flow of the implementation was as follows:
Generate container of requests
Create a ListeningQueue
For each request create a Thread and pass the URL, ListeningQueue and Semaphore
Each Thread attempts to acquire the Semaphore (limited to 4 Threads)
Main Thread continues in a while loop, checking the ListeningQueue
When a Thread receives a response, place in ListeningQueue and release Semaphore
A waiting Thread acquires Semaphore (process repeats)
Main Thread processes responses until count equals number of requests
Because I need to limit the number of active Threads I use a Semaphore, and if I were to try this using async-await I would have to devise some logic in the Main Thread or in the async def that prevents a request from being sent if the limit has been reached. Apart from that constraint, I don't see where using async-await would be any more useful. Is it that it lowers overhead and the chance of race conditions by eliminating Threads? Is that the main benefit? If so, even though a ThreadPoolExecutor makes asynchronous calls, it uses a pool of Threads; does that make async-await the better option?
Operating under the assumption that my current implementations work asynchronously, I am wondering how the async-await implementation would differ from that of my original one which used Threads and a Queue to manage workers
It would not be hard to implement very similar logic using asyncio and async-await, which has its own version of a semaphore that is used in much the same way. See answers to this question for examples of limiting the number of parallel requests with a fixed number of tasks or by using a semaphore.
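For illustration, a sketch of the same four-at-a-time constraint in asyncio; fetch() here stands in for whatever async HTTP call is used (e.g. from aiohttp):

import asyncio

sem = asyncio.Semaphore(4)           # at most 4 requests in flight

async def bounded_fetch(url):
    async with sem:                  # blocks here once 4 tasks hold the semaphore
        return await fetch(url)      # hypothetical async request

async def main(urls):
    return await asyncio.gather(*(bounded_fetch(u) for u in urls))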
As for advantages of asyncio over equivalent code using threads, there are several:
Everything runs in a single thread regardless of the number of active connections. Your program can scale to a large number of concurrent tasks without swamping the OS with an unreasonable number of threads or the downloads having to wait for a free slot in the thread pool before they even start.
As you pointed out, single-threaded execution is less susceptible to race conditions because the points where a task switch can occur are clearly marked with await, and everything in-between is effectively atomic. The advantage of this is less obvious in small threaded programs where the executor just hands tasks to threads in a fire-and-collect fashion, but as the logic grows more complex and the threads begin to share more state (e.g. due to caching or some synchronization logic), this becomes more pronounced.
async/await allows you to easily create additional independent tasks for things like monitoring, logging and cleanup. When using threads, those do not fit the executor model and require additional threads, always with a design smell that suggests threads are being abused. With asyncio, each task can be written as if it were running in its own thread, and use await to wait for something to happen (and yield control to others) - e.g. a timer-based monitoring task would consist of a loop that awaits asyncio.sleep(), but the logic could be arbitrarily complex. Despite the code looking sequential, each task is lightweight and costs the OS no more than a small allocated object.
async/await supports reliable cancellation, something threads never supported and likely never will. This is often overlooked, but in asyncio it is perfectly possible to cancel a running task, which causes it to wake up from await with an exception that terminates it. Cancellation makes it straightforward to implement timeouts, task groups, and other patterns that are impossible or a huge chore when using threads; a timeout, for instance, is just a cancellation triggered by a timer, as sketched below.
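A sketch of such a timeout, with asyncio.wait_for cancelling the task at its await point (fetch() as in the earlier sketch):

import asyncio

async def fetch_with_timeout(url):
    try:
        return await asyncio.wait_for(fetch(url), timeout=10)
    except asyncio.TimeoutError:
        return None                  # the task was cancelled while suspended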
On the flip side, the disadvantage of async/await is that all your code must be async. Among other things, it means that you cannot use libraries like requests, you have to switch to asyncio-aware alternatives like aiohttp.

Does the lock in asyncio.Condition have other purpose besides compatibility with threading.Condition?

I'd like to ask about asyncio.Condition. I'm not familiar with the concept, but I know and understand locks, semaphores, and queues since my student years.
I could not find a good explanation or typical use cases, just this example. I looked at the source. The core functionality is achieved with a FIFO of futures. Each waiting coroutine adds a new future and awaits it. Another coroutine may call notify(), which sets the result of one (or optionally more) futures from the FIFO, and that wakes up the same number of waiting coroutines. Really simple up to this point.
However, the implementation and the usage are more complicated than this. A waiting coroutine must first acquire a lock associated with the condition in order to be able to wait (and wait() releases it while waiting). Also, the notifier must acquire the lock to be able to notify(). This leads to a with statement before each operation:

async with condition:
    # condition operation (wait or notify)

or else a RuntimeError occurs.
I do not understand the point of having this lock. What resource do we need to protect with the lock? In asyncio there could be always only one coroutine executing in the event loop, there are no "critical sections" as known from threading.
Is this lock really needed (why?) or is it for compatibility with threading code only?
My first idea was that it is for compatibility, but in that case, why didn't they remove the lock while preserving the usage? I.e., making

async with condition:

basically an optional no-op.
The answer for this is essentially the same as for threading.Condition vs threading.Event: a condition without a lock is an event, not a condition (*).
Conditions are used to signal that a resource is available. Whoever was waiting for the condition can then use that resource until they are done with it. To ensure that no-one else can use the resource, you need to lock it:
resource = get_some_resource()

async with resource.condition:
    await resource.condition.wait()
    # this resource is mine, no-one will touch it
    await resource.do_something_async()
# lock released, resource is available again for the next user
Note how the lock is not released after wait() resumes! Until the lock is released, no other co-routine waiting for the same condition can proceed, access to the resource is made exclusive by virtue of the lock. Note that the lock is released while waiting, so other coroutines can add themselves to the queue, but for wait() to finally return the lock must first be re-acquired.
If you don't need to coordinate access to a shared resource, use an event; a condition is basically a lock and event combined into one primitive, avoiding common implementation pitfalls.
Note that multiple conditions can share locks. This would let you signal specific stages, and other coroutines can wait for that specific stage to arrive. The shared lock would coordinate access to a single resource, but different conditions are signalled when each stage is initiated.
For threading, the typical use-case for conditions offered is that of a single producer, and multiple consumers all waiting on items from the producer to process. The work queue is the shared resource, the producer acquires the condition lock to push an item into the queue and then call notify(), at which point the next consumer waiting on the condition is given the lock (as it returns from wait()) and can remove the item from the queue to work on. This doesn't quite translate to a coroutine-based application, as coroutines don't have the sitting-idle-waiting-for-work-to-be-done problems threading systems have, it's much easier to just spin up consumer co-routines as needed (with perhaps a semaphore to impose a ceiling).
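As a minimal sketch of that threading idiom (my own illustration; process() stands in for the actual work):

import threading
from collections import deque

work = deque()
cond = threading.Condition()

def produce(item):
    with cond:
        work.append(item)
        cond.notify()            # wake one waiting consumer

def consume():
    while True:
        with cond:
            while not work:
                cond.wait()      # releases the lock while waiting
            item = work.popleft()
        process(item)            # hypothetical; do the work outside the lock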
Perhaps a better example is the aioimaplib library, which supports IMAP4 transactions in full. These transactions are asynchronous, but you need to have access to the shared connection resource. So the library uses a single Condition object and wait_for() to wait for a specific state to arrive and thus give exclusive connection access to the coroutine waiting for that transaction state.
(*): Events have a different use-case from conditions, and thus behave a little differently from a condition without locking. Once set, an event needs to be cleared explicitly, while a condition 'auto-clears' when used, and is never 'set' when no-one is waiting on it. But if you want to signal between tasks and don't need to control access to a shared resource, then you probably want an event.

Why does Python threading.Condition() notify() require a lock?

My question refers specifically to why it was designed that way, due to the unnecessary performance implication.
When thread T1 has this code:
cv.acquire()
cv.wait()
cv.release()
and thread T2 has this code:
cv.acquire()
cv.notify() # requires that lock be held
cv.release()
what happens is that T1 waits and releases the lock, then T2 acquires it, notifies cv which wakes up T1. Now, there is a race-condition between T2's release and T1's reacquiring after returning from wait(). If T1 tries to reacquire first, it will be unnecessarily resuspended until T2's release() is completed.
Note: I'm intentionally not using the with statement, to better illustrate the race with explicit calls.
This seems like a design flaw. Is there any rationale known for this, or am I missing something?
This is not a definitive answer, but it's supposed to cover the relevant details I've managed to gather about this problem.
First, Python's threading implementation is based on Java's. Java's Condition.signal() documentation reads:
An implementation may (and typically does) require that the current thread hold the lock associated with this Condition when this method is called.
Now, the question was why enforce this behavior in Python in particular. But first I want to cover the pros and cons of each approach.
As to why some think it's often a better idea to hold the lock, I found two main arguments:
From the minute a waiter acquire()s the lock—that is, before releasing it on wait()—it is guaranteed to be notified of signals. If the corresponding release() happened prior to signalling, this would allow the sequence (where P=Producer and C=Consumer) P: release(); C: acquire(); P: notify(); C: wait(), in which case the wait() corresponding to the acquire() of the same flow would miss the signal. There are cases where this doesn't matter (and could even be considered more accurate), but there are cases where it's undesirable. This is one argument.
When you notify() outside a lock, this may cause a scheduling priority inversion; that is, a low-priority thread might end up taking priority over a high-priority thread. Consider a work queue with one producer and two consumers (LC=Low-priority consumer and HC=High-priority consumer), where LC is currently executing a work item and HC is blocked in wait().
The following sequence may occur:
P               LC                 HC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                execute(item)      (in wait())
lock()
wq.push(item)
release()
                acquire()
                item = wq.pop()
                release();
notify()
                                   (wake-up)
                                   while (wq.empty())
                                       wait();
Whereas if the notify() happened before release(), LC wouldn't have been able to acquire() before HC had been woken-up. This is where the priority inversion occurred. This is the second argument.
The argument in favor of notifying outside of the lock is for high-performance threading, where a thread need not go back to sleep just to wake up again on the very next time-slice it gets—a scenario I already explained in my question.
Python's threading Module
In Python, as I said, you must hold the lock while notifying. The irony is that the internal implementation does not allow the underlying OS to avoid priority inversion, because it enforces a FIFO order on the waiters. Of course, the fact that the order of waiters is deterministic could come in handy, but the question remains why enforce such a thing when it could be argued that it would be more precise to differentiate between the lock and the condition variable: in some flows that require optimized concurrency and minimal blocking, acquire() should not by itself register a preceding waiting state; only the wait() call itself should.
Arguably, Python programmers would not care about performance to this extent anyway—although that still doesn't answer the question of why, when implementing a standard library, one should not allow several standard behaviors to be possible.
One thing which remains to be said is that the developers of the threading module might have specifically wanted a FIFO order for some reason, found that this was somehow the best way of achieving it, and wanted to establish that behavior for Condition at the expense of the other (probably more prevalent) approaches. For this, they deserve the benefit of the doubt until they account for it themselves.
There are several reasons which are compelling (when taken together).
1. The notifier needs to take a lock
Pretend that Condition.notifyUnlocked() exists.
The standard producer/consumer arrangement requires taking locks on both sides:
def unlocked(qu, cv):  # qu is a thread-safe queue
    qu.push(make_stuff())
    cv.notifyUnlocked()

def consume(qu, cv):
    with cv:
        while True:  # vs. other consumers or spurious wakeups
            if qu: break
            cv.wait()
        x = qu.pop()
    use_stuff(x)
This fails because both the push() and the notifyUnlocked() can intervene between the if qu: and the wait().
Writing either of
def lockedNotify(qu, cv):
    qu.push(make_stuff())
    with cv: cv.notify()

def lockedPush(qu, cv):
    x = make_stuff()  # don't hold the lock here
    with cv: qu.push(x)
    cv.notifyUnlocked()
works (which is an interesting exercise to demonstrate). The second form has the advantage of removing the requirement that qu be thread-safe, but since the lock is already taken there, extending it around the call to notify() as well costs nothing more.
It remains to explain the preference for doing so, especially given that (as you observed) CPython does wake up the notified thread to have it switch to waiting on the mutex (rather than simply moving it to that wait queue).
2. The condition variable itself needs a lock
The Condition has internal data that must be protected in case of concurrent waits/notifications. (Glancing at the CPython implementation, I see the possibility that two unsynchronized notify()s could erroneously target the same waiting thread, which could cause reduced throughput or even deadlock.) It could protect that data with a dedicated lock, of course; since we need a user-visible lock already, using that one avoids additional synchronization costs.
3. Multiple wake conditions can need the lock
(Adapted from a comment on the blog post linked below.)
def setSignal(box, cv):
    signal = False
    with cv:
        if not box.val:
            box.val = True
            signal = True
    if signal: cv.notifyUnlocked()

def waitFor(box, v, cv):
    v = bool(v)  # to use ==
    while True:
        with cv:
            if box.val == v: break
            cv.wait()
Suppose box.val is False and thread #1 is waiting in waitFor(box,True,cv). Thread #2 calls setSignal; when it releases cv, #1 is still blocked on the condition. Thread #3 then calls waitFor(box,False,cv), finds that box.val is True, and waits. Then #2 calls notify(), waking #3, which is still unsatisfied and blocks again. Now #1 and #3 are both waiting, despite the fact that one of them must have its condition satisfied.
def setTrue(box, cv):
    with cv:
        if not box.val:
            box.val = True
            cv.notify()
Now that situation cannot arise: either #3 arrives before the update and never waits, or it arrives during or after the update and has not yet waited, guaranteeing that the notification goes to #1, which returns from waitFor.
4. The hardware might need a lock
With wait morphing and no GIL (in some alternate or future implementation of Python), the memory ordering (cf. Java's rules) imposed by the lock-release after notify() and the lock-acquire on return from wait() might be the only guarantee of the notifying thread's updates being visible to the waiting thread.
5. Real-time systems might need it
Immediately after the POSIX text you quoted we find:
however, if predictable scheduling behavior is required, then that mutex
shall be locked by the thread calling pthread_cond_broadcast() or
pthread_cond_signal().
One blog post contains further discussion of the rationale and history of this recommendation (as well as of some of the other issues here).
A couple of months ago exactly the same question occurred to me. But since I had IPython open, looking at the output of threading.Condition.wait?? (the source for the method), it didn't take long to answer it myself.
In short, the wait method creates another lock called waiter, acquires it, appends it to a list, and then, surprise, releases the lock on the condition itself. After that it tries to acquire the waiter once again; that is, it starts to wait until someone releases the waiter. Then it reacquires the lock on the condition and returns.
The notify method pops a waiter from the waiter list (a waiter is a lock, as we remember) and releases it, allowing the corresponding wait method to continue.
That is, the trick is that the wait method is not holding the lock on the condition itself while waiting for the notify method to release the waiter.
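In simplified form (omitting timeouts, error handling, and the saved-state dance CPython actually performs), the implementation looks roughly like this:

def wait(self):
    waiter = _allocate_lock()
    waiter.acquire()              # take the waiter lock once...
    self._waiters.append(waiter)
    self.release()                # drop the condition's own lock
    waiter.acquire()              # ...and block trying to take it again
    self.acquire()                # reacquire the condition's lock before returning

def notify(self):
    waiter = self._waiters.popleft()
    waiter.release()              # lets the corresponding wait() proceed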
UPD1: I seem to have misunderstood the question. Is it correct that you are bothered that T1 might try to reacquire the lock before T2 releases it?
But is that possible in the context of Python's GIL? Or do you think that one can insert an IO call before releasing the condition, which would allow T1 to wake up and wait forever?
It's explained in Python 3 documentation: https://docs.python.org/3/library/threading.html#condition-objects.
Note: the notify() and notify_all() methods don’t release the lock; this means that the thread or threads awakened will not return from their wait() call immediately, but only when the thread that called notify() or notify_all() finally relinquishes ownership of the lock.
What happens is that T1 waits and releases the lock, then T2 acquires it, notifies cv which wakes up T1.
Not quite. The cv.notify() call does not wake the T1 thread: It only moves it to a different queue. Before the notify(), T1 was waiting for the condition to be true. After the notify(), T1 is waiting to acquire the lock. T2 does not release the lock, and T1 does not "wake up" until T2 explicitly calls cv.release().
There is no race condition; this is how condition variables work.
When wait() is called, the underlying lock is released until a notification occurs. It is guaranteed that the caller of wait() will reacquire the lock before the function returns (i.e., after the wait completes).
You're right that there could be some inefficiency if T1 was directly woken up when notify() is called. However, condition variables are typically implemented via OS primitives, and the OS will often be smart enough to realize that T2 still has the lock, so it won't immediately wake up T1 but instead queue it to be woken.
Additionally, in Python this doesn't matter much anyway: because of the GIL, only one thread can run at a time, so the threads wouldn't be able to run concurrently anyway.
Additionally, it's preferred to use the following forms instead of calling acquire/release directly:
with cv:
    cv.wait()
And:
with cv:
    cv.notify()
This ensures that the underlying lock is released even if an exception occurs.

Is this multi-threaded function asynchronous

I'm afraid I'm still a bit confused (despite checking other threads) whether:
all asynchronous code is multi-threaded
all multi-threaded functions are asynchronous
My initial guess is no to both, and that proper asynchronous code should be able to run in one thread; it can however be improved by adding threads. So I constructed this toy example:
from threading import Thread, current_thread
from queue import Queue
import time

def do_something_with_io_lag(in_work):
    out = in_work
    # Imagine we do some work that involves sending
    # something over the internet and processing the output
    # once it arrives
    time.sleep(0.5)  # simulate IO lag
    print("Hello, bee number: ",
          str(current_thread().name).replace("Thread-", ""))

class WorkerBee(Thread):
    def __init__(self, q):
        Thread.__init__(self)
        self.q = q

    def run(self):
        while True:
            # Get some work from the queue
            work_todo = self.q.get()
            # This function will simulate I/O lag
            do_something_with_io_lag(work_todo)
            # Remove task from the queue
            self.q.task_done()

if __name__ == '__main__':
    def time_me(nmbr):
        number_of_worker_bees = nmbr
        worktodo = ['some input for work'] * 50
        # Create a queue
        q = Queue()
        # Fill with work
        [q.put(onework) for onework in worktodo]
        # Launch worker threads
        for _ in range(number_of_worker_bees):
            t = WorkerBee(q)
            t.start()
        # Block until queue is empty
        q.join()

    # Run this code in serial mode (just one worker)
    %time time_me(nmbr=1)
    # Wall time: 25 s
    # Basically 50 requests * 0.5 seconds IO lag
    # For me everything gets processed by bee number: 59
    # Run this code using multi-tasking (launch 50 workers)
    %time time_me(nmbr=50)
    # Wall time: 507 ms
    # Basically the 0.5 second IO lag + 0.07 seconds it took to launch them
    # Now everything gets processed by different bees
Is it asynchronous?
To me this code does not seem asynchronous, because it matches Figure 3 in my example diagram (not reproduced here): the I/O call blocks the thread (although we don't feel it because the threads are blocked in parallel).
However, if this is the case, I am confused why requests-futures is considered asynchronous, since it is a wrapper around ThreadPoolExecutor:
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
    future_to_url = {executor.submit(load_url, url, 10): url for url in get_urls()}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
Can this run on just one thread? Especially when compared to asyncio, which can run single-threaded:
There are only two ways to have a program on a single processor do
“more than one thing at a time.” Multi-threaded programming is the
simplest and most popular way to do it, but there is another very
different technique, that lets you have nearly all the advantages of
multi-threading, without actually using multiple threads. It’s really
only practical if your program is largely I/O bound. If your program
is processor bound, then pre-emptive scheduled threads are probably
what you really need. Network servers are rarely processor bound,
however.
First of all, one note: concurrent.futures.Future is not the same as asyncio.Future. Basically it's just an abstraction - an object that allows you to refer to a job's result (or exception, which is also a result) in your program after you have assigned the job, but before it is completed. It's similar to assigning a common function's result to some variable.
Multithreading: Regarding your example, when using multiple threads you can say that your code is "asynchronous", as several operations are performed in different threads at the same time without waiting for each other to complete, and you can see it in the timing results. And you're right, your function due to sleep is blocking: it blocks the worker thread for the specified amount of time, but when you use several threads those threads are blocked in parallel. So if you had one job with sleep and another one without and ran multiple threads, the one without sleep would perform calculations while the other slept. When you use a single thread, the jobs are performed in a serial manner one after the other, so when one job sleeps the other jobs wait for it; actually they just don't exist until it's their turn. All this is pretty much proven by your time tests. The thing that happened with print has to do with "thread safety": print uses standard output, which is a single shared resource. So when your multiple threads tried to print at the same time, the switching happened inside and you got your strange output. (This also shows the "asynchronicity" of your multithreaded example.) To prevent such errors there are locking mechanisms, e.g. locks, semaphores, etc.
Asyncio: To better understand the purpose, note the "IO" part: it's not 'async computation', but 'async input/output'. When talking about asyncio you usually don't think about threads at first. Asyncio is about the event loop and generators (coroutines). The event loop is the arbiter that governs the execution of coroutines (and their callbacks) that were registered to the loop. Coroutines are implemented as generators, i.e. functions that allow performing some actions iteratively, saving state at each iteration, 'returning', and, on the next call, continuing with the saved state. So basically the event loop is a while True: loop that calls all the coroutines/generators assigned to it, one after another, and they provide a result or no result on each such call - this provides the possibility for "asynchronicity". (A simplification; there are scheduling mechanisms that optimize this behavior.) The event loop in this situation can run in a single thread, and if the coroutines are non-blocking it will give you true "asynchronicity", but if they are blocking then it's basically linear execution.
You can achieve the same thing with explicit multithreading, but threads are costly: they require memory to be allocated, switching them takes time, etc. On the other hand, the asyncio API allows you to abstract from the actual implementation and just consider your jobs to be performed asynchronously. The implementation may differ; it includes calling the OS API, and the OS decides what to do, e.g. DMA, additional threads, some specific microcontroller use, etc. The thing is, it works well for IO due to lower-level mechanisms, hardware stuff. On the other hand, performing computation will require explicitly breaking the computation algorithm into pieces to use as asyncio coroutines, so a separate thread might be a better decision, as you can launch the whole computation as one there. (I'm not talking about algorithms that are special to parallel computing.) But an asyncio event loop can be explicitly set to use separate threads for coroutines, so this would be asyncio with multithreading.
Regarding your example, if you implement your function with sleep as an asyncio coroutine, and schedule and run 50 of them single-threaded, you'll get a time similar to the first time test, i.e. around 25s, as it is blocking. If you change it to something like yield from asyncio.sleep(0.5) (which is a coroutine itself), and schedule and run 50 of them single-threaded, they will be called asynchronously. So while one coroutine sleeps, another will be started, and so on. The jobs will complete in a time similar to your second multithreaded test, i.e. close to 0.5s. If you add print here you'll get good output, as it will be used by a single thread in a serial manner, but the output might be in a different order than the order of coroutine assignment to the loop, as the coroutines could be run in a different order. If you use multiple threads, then the result will obviously be close to the last one anyway.
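As a sketch in modern async/await syntax (the yield from form above predates it), the coroutine version of the bee example looks something like this and finishes in about half a second on a single thread:

import asyncio
import time

async def bee(number):
    await asyncio.sleep(0.5)            # non-blocking: yields to the event loop
    print("Hello, bee number:", number)

async def main():
    await asyncio.gather(*(bee(n) for n in range(50)))

start = time.perf_counter()
asyncio.run(main())
print(f"took {time.perf_counter() - start:.2f}s")   # ~0.5s, one thread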
Simplification: the difference between multithreading and asyncio is blocking vs. non-blocking, so blocking multithreading somewhat comes close to non-blocking asyncio, but there are a lot of differences.
Multithreading for computations (i.e. CPU bound code)
Asyncio for input/output (i.e. I/O bound code)
Regarding your original statement:
all asynchronous code is multi-threaded
all multi-threaded functions are asynchronous
I hope that I was able to show, that:
asynchronous code might be both single threaded and multi-threaded
all multi-threaded functions could be called "asynchronous"
I think the main confusion comes from the meaning of asynchronous. From the Free Online Dictionary of Computing, "A process [...] whose execution can proceed independently" is asynchronous. Now, apply that to what your bees do:
Retrieve an item from the queue. Only one at a time can do that, while the order in which they get an item is undefined. I wouldn't call that asynchronous.
Sleep. Each bee does so independently of all the others, i.e. the sleep durations run concurrently for all of them; otherwise the time wouldn't go down with multiple bees. I'd call that asynchronous.
Call print(). While the calls are independent, at some point the data is funneled into the same output target, and at that point a sequence is enforced. I wouldn't call that asynchronous. Note however that the two arguments to print() and also the trailing newline are handled independently, which is why they can be interleaved.
Lastly, the call to q.join(). Here of course the calling thread is blocked until the queue is empty, so some kind of synchronization is enforced and wanted. I don't see why this "seems to break" for you.
