Putting a thread to sleep until event X occurs - Python

I'm writing to many files in a threaded app and I'm creating one handler per file. I have a HandlerFactory class that manages the distribution of these handlers. What I'd like is this:
thread A requests and gets foo.txt's file handle from the HandlerFactory class
thread B requests foo.txt's file handler
handler class recognizes that this file handle has been checked out
handler class puts thread B to sleep
thread A closes the file handle using a wrapper method from HandlerFactory
HandlerFactory notifies sleeping threads
thread B wakes and successfully gets foo.txt's file handle
This is what I have so far:
def get_handler(self, file_path, type):
    self.lock.acquire()
    if file_path not in self.handlers:
        self.handlers[file_path] = open(file_path, type)
    elif not self.handlers[file_path].closed:
        time.sleep(1)
    self.lock.release()
    return self.handlers[file_path]
I believe this covers the sleeping and handler retrieval successfully, but I am unsure how to wake up all threads, or even better wake up a specific thread.

What you're looking for is known as a condition variable: see threading.Condition in the Python library reference (it is documented under the threading module for both Python 2 and Python 3).
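As a hedged sketch of how that could look for the handler checkout described above (the checked_out set and the release_handler method are illustrative additions, not part of the asker's class; mode replaces the type parameter to avoid shadowing the built-in):

import threading

class HandlerFactory:
    def __init__(self):
        self.handlers = {}                      # file_path -> open file object
        self.checked_out = set()                # paths currently in use
        self.condition = threading.Condition()

    def get_handler(self, file_path, mode):
        with self.condition:
            # Sleep until the handle is checked back in.
            while file_path in self.checked_out:
                self.condition.wait()
            if file_path not in self.handlers:
                self.handlers[file_path] = open(file_path, mode)
            self.checked_out.add(file_path)
            return self.handlers[file_path]

    def release_handler(self, file_path):
        with self.condition:
            self.checked_out.discard(file_path)
            self.condition.notify_all()         # wake all sleeping threads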

Looks like you want a threading.Semaphore associated with each handler (other synchronization objects like Events and Conditions are also possible, but a Semaphore seems simplest for your needs). Specifically, use a BoundedSemaphore: for your use case, it will immediately raise an exception for programming errors that erroneously release the semaphore more times than they acquire it -- and that's exactly the reason for being of the bounded version of semaphores ;-).
Initialize each semaphore to a value of 1 when you build it (so that means the handler is available). Each using-thread calls acquire on the semaphore to get the handler (that may block it), and release on it when it's done with the handler (that will unblock exactly one of the waiting threads). That's simpler than the acquire/wait/notify/release lifecycle of a Condition, and more future-proof too, since as the docs for Condition say:
The current implementation wakes up exactly one thread, if any are waiting. However, it’s not safe to rely on this behavior. A future, optimized implementation may occasionally wake up more than one thread.
while with a Semaphore you're playing it safe (its semantics are safe to rely on: if a semaphore is initialized to N, there are at all times between 0 and N, inclusive, threads that have successfully acquired the semaphore and not yet released it).
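A minimal sketch of that approach, assuming a release_handler wrapper on the factory (the pairing of handle and semaphore in the dict is illustrative):

import threading

class HandlerFactory:
    def __init__(self):
        self.lock = threading.Lock()            # guards the handlers dict
        self.handlers = {}                      # file_path -> (handle, semaphore)

    def get_handler(self, file_path, mode):
        with self.lock:
            if file_path not in self.handlers:
                # value=1: at most one thread may hold the handler at a time
                self.handlers[file_path] = (open(file_path, mode),
                                            threading.BoundedSemaphore(1))
        handle, sem = self.handlers[file_path]
        sem.acquire()                           # blocks while checked out
        return handle

    def release_handler(self, file_path):
        _, sem = self.handlers[file_path]
        sem.release()       # unblocks exactly one waiter; raises ValueError
                            # if released more times than acquired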

You do realize that Python has a giant lock (the Global Interpreter Lock), so that you do not get most of the benefits of multi-threading, right?
Unless there is some reason for the master thread to do something with the results of each worker, you may wish to consider just forking off another process for each request. You won't have to deal with locking issues then. Have the children do what they need to do, then die. If they do need to communicate back, do it over a pipe, with XML-RPC, or through a SQLite database (which is thread-safe).
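For instance, a rough sketch of the fork-and-communicate-over-a-pipe idea (handle_request and do_work are illustrative placeholders, not part of the asker's app):

import multiprocessing

def do_work(request):
    return request.upper()      # trivial stand-in for real per-request work

def handle_request(conn, request):
    result = do_work(request)   # the child does its work...
    conn.send(result)           # ...reports back over the pipe, then dies
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=handle_request,
                                args=(child_conn, "some request"))
    p.start()
    print(parent_conn.recv())   # result from the child
    p.join()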

Correctness of modified consumer/producer

I am creating a Sound class to play notes and would like feedback on the correctness and conciseness of my design. This class differs from the typical consumer/producer in two ways:
The consumer should respond to events, such as to shut down the thread, or otherwise continue forever. The typical consumer/producer exits when the queue is empty. For example, a thread waiting in queue.get cannot handle additional notifications.
Each set of notes submitted by the producer should overwrite any unprocessed notes remaining on the queue.
Originally I had the consumer process one note at a time using the queue module. I found continually acquiring and releasing the lock without any competition to be inefficient, and as previously noted, queue.get prevents waiting on additional events. So instead of building upon that, I rewrote it into:
import threading

queue = []
condition = threading.Condition()
interrupt = threading.Event()
stop = threading.Event()

def producer():
    while some_condition:
        ns = get_notes()  # [(float,float)]
        with condition:
            queue.clear()
            queue.append(ns)
            interrupt.set()
            condition.notify()
    with condition:
        stop.set()
        condition.notify()
    thread.join()

def consumer():
    while not stop.is_set():
        with condition:
            while not (queue or stop.is_set()):
                condition.wait()
            if stop.is_set():
                break
            interrupt.clear()
            ns = queue.pop()
        ss = gen_samples(ns)  # iterator/fast
        for b in grouper(ss, size // 2):
            if interrupt.is_set() or stop.is_set():
                break
            stream.write(b)

thread = threading.Thread(target=consumer)
thread.start()
producer()
My questions are as follows:
Is this thread-safe? I want to specifically point out my use of is_set without locks or synchronization (in the for-loop).
Can the events be replaced with boolean variables? I believe so, as conflicting writes in both threads (a data race) are guarded by the condition variable. There is a race condition between setting and checking events, but I do not believe it affects program flow.
Is there a more efficient approach/algorithm utilizing different synchronization primitives from the threading module?
edit: Found and fixed a possible deadlock described in Why does Python threading.Condition() notify() require a lock?
Analyzing thread-safety in Python can take into account the Global Interpreter Lock (GIL): no two threads will execute Python code simultaneously. Assignments to variables or object fields are effectively atomic (there are no half-assigned variables) and changes propagate effectively immediately to other threads.
This means that your use of Event.is_set() is already equivalent to using plain booleans. An Event is a bool guarded by a Condition. The is_set() method checks the boolean directly. The set() method acquires the Condition, sets the boolean, and notifies all waiting threads. The wait() method waits until set() is invoked. The clear() method acquires the Condition and unsets the boolean. Since you never wait() for any Event, and setting the boolean is atomic, the Condition inside the Event is effectively unused.
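Roughly, an Event behaves like this simplified sketch (the same shape as CPython's implementation, though not its exact source):

import threading

class SimpleEvent:
    def __init__(self):
        self._cond = threading.Condition()
        self._flag = False

    def is_set(self):
        return self._flag               # reads the bool directly, no lock

    def set(self):
        with self._cond:
            self._flag = True
            self._cond.notify_all()     # wake all waiting threads

    def clear(self):
        with self._cond:
            self._flag = False

    def wait(self, timeout=None):
        with self._cond:
            if not self._flag:
                self._cond.wait(timeout)
            return self._flag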
This might get rid of a couple of locks, but isn't really a huge efficiency win. A Condition is still an abstraction over a lock, but the built-in Queue type uses locks directly. Thus, I would assume that the built-in queue is no less performant than your solution, even for a single consumer.
Your main issue with the built-in queue is that “continually acquiring and releasing the lock without any competition [is] inefficient”. This is wrong on two counts:
Due to Python's GIL, there is little competition in either case.
Acquiring uncontested locks is very efficient.
So while your solution is probably sufficiently correct (I can see no opportunity for deadlock), it is unlikely to be particularly efficient. (There were also a few small mistakes in the code as originally posted, such as using stop instead of stop.is_set() and some syntax errors.)
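For comparison, a hedged sketch of the same consumer on top of the built-in queue, using a sentinel for shutdown; the drain loop approximates your "newest notes win" requirement, and gen_samples, grouper, size and stream are the same placeholders as in your code:

import queue

notes_queue = queue.Queue()
STOP = object()                          # sentinel pushed on shutdown

def consumer():
    while True:
        ns = notes_queue.get()           # blocks; no busy-waiting
        while not notes_queue.empty():   # drain so only the newest batch plays
            ns = notes_queue.get()
        if ns is STOP:
            return
        for b in grouper(gen_samples(ns), size // 2):
            if not notes_queue.empty():  # new notes arrived: abandon batch
                break
            stream.write(b)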
If you are seeing poor performance with Python threads that's probably because of CPython, not because of the Queue type. I already mentioned that only one thread can run at a time due to the GIL. If multiple threads want to run, they must be scheduled by the operating system to do so and acquire the GIL. Each thread will wait for 5ms before asking the running thread to give up the GIL (in a manner quite similar to your interrupt flag). And then the thread can do useful work like acquiring a lock for a critical section that must not be interrupted by other threads.
Possibly, the solution could be to avoid CPython's threads.
If you have multiple CPU-bound tasks, you must use multiple processes. CPython's threads will not run in parallel. However, communication between processes is more expensive.
Consider whether you can combine the producer+consumer directly, possibly using features such as generators.
For an easier time juggling multiple tasks in the same thread, consider using async/await. Event loops are provided by the asyncio module. This is just as fast as Python's threads, with the caveat that tasks don't pre-empt (interrupt) each other. But this can be an advantage: since a task can only be suspended at an await, you don't need most locks and it is easier to reason about the correctness of the code. The downside is that async/await might have even higher latency than using threads.
Python has a concept of “executors” that make it easy and efficient to run tasks in separate threads (for I/O-bound tasks) or separate processes (for CPU-bound tasks); see the sketch after this list.
For communicating between multiple processes, use the types from the multiprocessing module (e.g. Queue, Connection, or Value).
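As a sketch of the executor approach mentioned above (cpu_heavy here is just an illustrative stand-in for real work):

from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:                   # one worker per core
        results = list(pool.map(cpu_heavy, [10**6] * 8))  # runs in parallel
    print(results)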

Does the lock in asyncio.Condition have other purpose besides compatibility with threading.Condition?

I'd like to ask about asyncio.Condition. I'm not familiar with the concept, but I know and understand locks, semaphores, and queues since my student years.
I could not find a good explanation or typical use cases, just this example. I looked at the source. The core functionality is achieved with a FIFO of futures. Each waiting coroutine adds a new future and awaits it. Another coroutine may call notify(), which sets the result of one (or optionally more) futures from the FIFO, and that wakes up the same number of waiting coroutines. Really simple up to this point.
However, the implementation and the usage are more complicated than this. A waiting coroutine must first acquire a lock associated with the condition in order to be able to wait (and wait() releases it while waiting). Also, the notifier must acquire the lock to be able to notify(). This leads to a with statement before each operation:
async with condition:
    # condition operation (wait or notify)
or else a RuntimeError occurs.
I do not understand the point of having this lock. What resource do we need to protect with the lock? In asyncio there is always only one coroutine executing in the event loop, so there are no "critical sections" as known from threading.
Is this lock really needed (why?) or is it for compatibility with threading code only?
My first idea was that it is for compatibility, but in that case why didn't they remove the lock while preserving the usage, i.e. make

async with condition:

basically an optional no-op?
The answer for this is essentially the same as for threading.Condition vs threading.Event; a condition without a lock is an event, not a condition(*).
Conditions are used to signal that a resource is available. Whoever was waiting for the condition can use that resource until they are done with it. To ensure that no-one else can use the resource, you need to lock the resource:
resource = get_some_resource()

async with resource.condition:
    await resource.condition.wait()
    # this resource is mine, no-one will touch it
    await resource.do_something_async()
# lock released, resource is available again for the next user
Note how the lock is not released after wait() resumes! Until the lock is released, no other co-routine waiting for the same condition can proceed, access to the resource is made exclusive by virtue of the lock. Note that the lock is released while waiting, so other coroutines can add themselves to the queue, but for wait() to finally return the lock must first be re-acquired.
If you don't need to coordinate access to a shared resource, use an event; a condition is basically a lock and event combined into one primitive, avoiding common implementation pitfalls.
Note that multiple conditions can share locks. This would let you signal specific stages, and other coroutines can wait for that specific stage to arrive. The shared lock would coordinate access to a single resource, but different conditions are signalled when each stage is initiated.
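A small sketch of that, assuming two illustrative stages named parsed and written:

import asyncio

lock = asyncio.Lock()
parsed = asyncio.Condition(lock)    # signalled when parsing completes
written = asyncio.Condition(lock)   # signalled when writing completes

async def after_parse():
    async with parsed:
        await parsed.wait()         # resumes holding the shared lock
        # exclusive access to the shared resource at this stage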
For threading, the typical use-case offered for conditions is that of a single producer and multiple consumers, all waiting on items from the producer to process. The work queue is the shared resource; the producer acquires the condition lock to push an item into the queue and then calls notify(), at which point the next consumer waiting on the condition is given the lock (as it returns from wait()) and can remove the item from the queue to work on. This doesn't quite translate to a coroutine-based application, as coroutines don't have the sitting-idle-waiting-for-work-to-be-done problems threading systems have; it's much easier to just spin up consumer coroutines as needed (with perhaps a semaphore to impose a ceiling).
Perhaps a better example is the aioimaplib library, which supports IMAP4 transactions in full. These transactions are asynchronous, but you need to have access to the shared connection resource. So the library uses a single Condition object and wait_for() to wait for a specific state to arrive and thus give exclusive connection access to the coroutine waiting for that transaction state.
(*): Events have a different use-case from conditions, and thus behave a little differently from a condition without locking. Once set, an event needs to be cleared explicitly, while a condition 'auto-clears' when used, and is never 'set' when no-one is waiting on the condition. But if you want to signal between tasks and don't need to control access to a shared resource, then you probably want an event.

Trying to stop a QThread gracefully, what's wrong with this implementation?

When running my code I start a thread that runs for around 50 seconds and does a lot of background stuff. If I run this program and then close it soon after, the stuff still goes on in the background for a while because the thread never dies. How can I kill the thread gracefully in my closeEvent method in my MainWindow class? I've tried setting up a method called exit(), creating a signal 'quitOperation' in the thread in question, and then tried to use
myThread.quitOperation.emit()
I expected that this would call my exit() function in my thread because I have this line in my constructor:
self.quitOperation.connect(self.exit)
However, when I use the first line it breaks, saying that 'myThread' has no attribute 'quitOperation'. Why is this? Is there a better way?
I'm not sure about Python, but I assume myThread.quitOperation.emit() emits a signal for the thread to exit. The point is that while your worker is using the thread and does not return or run QCoreApplication::processEvents(), myThread will never have a chance to actually process your request (this is called thread starvation).
The correct answer may depend on the situation, and on the nature of the "stuff" your thread is doing. The most common practice is that the main thread sends a signal to the worker thread, where a slot sets a flag. In the blocking process you regularly check this flag. If it is set, you stop whatever "stuff" you are doing, tell your worker thread that it can quit (with a signal, preferably with a queued connection), call deleteLater() on the worker object itself, and return from any functions you are currently in, so that the thread's event handler can run, clean your worker object and itself up, and then finally quit.
In case your "stuff" is a huge cycle of very fast operations, like simple mathematics or one-by-one directory navigation steps that take only a few milliseconds each, this will be enough.
In case your "stuff" contains huge blocking parts that you have no control over (and thus you can't place this flag-checking call in them), you may need to wait in the main thread until the worker thread quits.
In case you use a direct connection to set the flag, or you set it directly, be sure to protect the read/write access of the flag with a QMutex to prevent inconsistent reads, or use a queued connection to ensure single-thread access to the flag.
While highly discouraged, you can optionally use QThread's terminate() method to kill the thread instantaneously. You should never do this, as it may cause memory leaks, heap corruption, resource leaks and other nasty things, since destructors and clean-up code will not run and execution can be halted in an undesired state.
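For the flag-checking pattern, a minimal PyQt5-flavoured sketch; Qt's built-in interruption flag saves you the QMutex, and work_items and process are hypothetical placeholders:

from PyQt5.QtCore import QThread

class Worker(QThread):
    def run(self):
        for chunk in work_items:                # fast steps, a few ms each
            if self.isInterruptionRequested():  # flag set from the main thread
                return                          # leave run() so the thread can quit
            process(chunk)

# In MainWindow.closeEvent:
#     self.worker.requestInterruption()
#     self.worker.wait()    # block until the thread exits gracefully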

python: waiting on multiple objects (Queue, Lock, Condition, etc.)

I'm using the Python threading library. Works fine (subject to the Global Interpreter Lock, of course).
Now I have a conundrum. I have two separate sources of concurrency: either two Queues, or a Queue and a Condition. How can I wait for whichever one is ready first? (They have to be separate objects since they are owned by different modular parts of my application.)
Windows has the WaitForMultipleObjects function; is there something similar for Python concurrency primitives?
There is no existing function that I know of that does what you ask. However, there is threading.enumerate(), which I think returns a list of all currently running threads. Once you have that list you could iterate over it looking for the condition you want. To make a thread a daemon, each thread has a method that can be called like thread.setDaemon(True) before the thread is started.
I can't say for sure that this is your answer. I don't have as much experience as you apparently do, but I looked this up in a book I have, The Python Standard Library by Example, by Doug Hellmann. He has 23 pages on managing concurrent operations in the section on threading, and enumerate seemed to be something that would help.
You could create a new synchronization object (an Event, say), call it ready_event, plus one helper Thread for each sync object you want to watch. Each helper thread waits for its sync object to be ready; when it is, the thread signals that via ready_event. After you have created and started the helper threads, you can simply wait on ready_event.
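A hedged sketch of that idea (wait_any and the watcher threads are illustrative, not a standard API):

import queue
import threading

def wait_any(waiters):
    """Block until the first of several blocking waits completes."""
    ready = threading.Event()
    results = {}

    def watch(name, wait_fn):
        results[name] = wait_fn()   # blocks until this source is ready
        ready.set()                 # signal the waiting caller

    for name, wait_fn in waiters.items():
        threading.Thread(target=watch, args=(name, wait_fn),
                         daemon=True).start()
    ready.wait()
    return results

# Usage: wait on whichever of two queues produces an item first.
q1, q2 = queue.Queue(), queue.Queue()
q2.put("hello")
print(wait_any({"q1": q1.get, "q2": q2.get}))   # {'q2': 'hello'}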

Does Python's main thread get garbage collected when it stops?

In a multi-threaded Python process I have a number of non-daemon threads, by which I mean threads which keep the main process alive even after the main thread has exited / stopped.
My non-daemon threads hold weak references to certain objects in the main thread, but when the main thread ends (control falls off the bottom of the file) these objects do not appear to be garbage collected, and my weak reference finaliser callbacks don't fire.
Am I wrong to expect the main thread to be garbage collected? I would have expected that the thread-locals would be deallocated (i.e. garbage collected)...
What have I missed?
Supporting materials
Output from pprint.pprint( threading.enumerate() ) showing the main thread has stopped while others soldier on.
[<_MainThread(MainThread, stopped 139664516818688)>,
<LDQServer(testLogIOWorkerThread, started 139664479889152)>,
<_Timer(Thread-18, started 139663928870656)>,
<LDQServer(debugLogIOWorkerThread, started 139664437925632)>,
<_Timer(Thread-17, started 139664463103744)>,
<_Timer(Thread-19, started 139663937263360)>,
<LDQServer(testLogIOWorkerThread, started 139664471496448)>,
<LDQServer(debugLogIOWorkerThread, started 139664446318336)>]
And since someone always asks about the use-case...
My network service occasionally misses its real-time deadlines (which causes a total system failure in the worst case). This turned out to be because logging of (important) DEBUG data would block whenever the file-system has a tantrum. So I am attempting to retrofit a number of established specialised logging libraries to defer blocking I/O to a worker thread.
Sadly the established usage pattern is a mix of short-lived logging channels which log overlapping parallel transactions, and long-lived module-scope channels which are never explicitly closed.
So I created a decorator which defers method calls to a worker thread. The worker thread is non-daemon to ensure that all (slow) blocking I/O completes before the interpreter exits, and holds a weak reference to the client-side (where method calls get enqueued). When the client-side is garbage collected the weak reference's callback fires and the worker thread knows no more work will be enqueued, and so will exit at its next convenience.
This seems to work fine in all but one important use-case: when the logging channel is in the main thread. When the main thread stops / exits, the logging channel is not finalised, and so my (non-daemon) worker thread lives on, keeping the entire process alive.
It's a bad idea for your main thread to end without calling join on all non-daemon threads, or to make any assumptions about what happens if you don't.
If you don't do anything very unusual, CPython (at least 2.0-3.3) will cover for you by automatically calling join on all non-daemon threads as part of _MainThread._exitfunc. This isn't actually documented, so you shouldn't rely on it, but it's what's happening to you.
Your main thread hasn't actually exited at all; it's blocking inside its _MainThread._exitfunc trying to join some arbitrary non-daemon thread. Its objects won't be finalized until the atexit handler is called, which doesn't happen until after it finishes joining all non-daemon threads.
Meanwhile, if you avoid this (e.g., by using thread/_thread directly, or by detaching the main thread from its object or forcing it into a normal Thread instance), what happens? It isn't defined. The threading module makes no reference to it at all, but in CPython 2.0-3.3, and likely in any other reasonable implementation, it falls to the thread/_thread module to decide. And, as the docs say:
When the main thread exits, it is system defined whether the other threads survive. On SGI IRIX using the native thread implementation, they survive. On most other systems, they are killed without executing try ... finally clauses or executing object destructors.
So, if you manage to avoid joining all of your non-daemon threads, you have to write code that can handle both having them hard-killed like daemon threads, and having them continue running until exit.
If they do continue running, at least in CPython 2.7 and 3.3 on POSIX systems, the main thread's OS-level thread handle, and various higher-level Python objects representing it, may still be retained, and not get cleaned up by the GC.
On top of that, even if everything were released, you can't rely on the GC ever deleting anything. If your code depends on deterministic GC, there are many cases you can get away with it in CPython (although your code will then break in PyPy, Jython, IronPython, etc.), but at exit time is not one of them. CPython can, and will, leak objects at exit time and let the OS sort 'em out. (This is why writable files that you never close may lose the last few writes—the __del__ method never gets called, and therefore there's nobody to tell them to flush, and at least on POSIX the underlying FILE* doesn't automatically flush either.)
If you want something to be cleaned up when the main thread finishes, you have to use some kind of close function rather than relying on __del__, and you have to make sure it gets triggered via a with block around the main block of code, an atexit function, or some other mechanism.
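For example, a tiny sketch of the atexit route, using a plain file as a stand-in for the logging channel described above:

import atexit

log_file = open("deferred.log", "a")   # illustrative channel
atexit.register(log_file.close)        # runs at normal interpreter exit,
                                       # unlike __del__, which may never run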
One last thing:
I would have expected that the thread-locals would be deallocated (i.e. garbage collected)...
Do you actually have thread locals somewhere? Or do you just mean locals and/or globals that are only accessed in one thread?
