I'm currently monitoring a folder using fsevents. Every time a file is added, code is executed on that file. A new file is added to the folder every second:
from fsevents import Observer, Stream

def file_event_callback(event):
    # code 256 for adding file to folder
    if event.mask == 256:
        fileChanged = event.name
        # do stuff with fileChanged file

if __name__ == "__main__":
    observer = Observer()
    observer.start()
    stream = Stream(file_event_callback, 'folder', file_events=True)
    observer.schedule(stream)
    observer.join()
This works quite well. The only problem is that the library builds a queue of events for every file added to the folder. The code executed within file_event_callback can take more than a second. When that happens, the other items in the queue should be skipped, so that only the newest one is used.
How can I skip items from the queue so that only the latest addition to the folder is processed after the last one finishes?
I tried using watchdog first, but as this has to run on a Mac, I had some trouble making it work the way I wanted.
I don't know exactly what library you're using, and when you say "this is building a queue…" I have no idea what "this" you're referring to… but an obvious answer is to stick your own queue in front of whatever it's using, so you can manipulate that queue directly. For example:
import queue
import threading

def skip_get(q):
    value = q.get(block=True)
    try:
        while True:
            value = q.get(block=False)
    except queue.Empty:
        return value
q = queue.Queue()

def file_event_callback(event):
    # code 256 for adding file to folder
    if event.mask == 256:
        fileChanged = event.name
        q.put(fileChanged)

def consumer():
    while True:
        fileChanged = skip_get(q)
        if fileChanged is None:
            return
        # do stuff with fileChanged
Now, before you start up the observer, do this:
t = threading.Thread(target=consumer)
t.start()
And at the end:
observer.join()
q.put(None)
t.join()
So, how does this work?
First, let's look at the consumer side. When you call q.get(), this pops the first thing off the queue. But what if nothing is there? That's what the block argument is for. If it's false, the get will raise a queue.Empty exception. If it's true, the get will wait forever (in a thread-safe way) until something appears to be popped. So, by blocking once, we handle the case where there's nothing to read yet. By then looping without blocking, we consume anything else on the queue, to handle the case where there are too many things to read. Because we keep reassigning value to whatever we popped, what we end up with is the last thing put on the queue.
Now, let's look at the producer side. When you call q.put(value), that just puts value on the queue. Unless you've put a size limit on the queue (which I haven't), there's no way this could block, so you don't have to worry about any of that. But now, how do you signal the consumer thread that you're finished? It's going to be waiting in q.get(block=True) forever; the only way to wake it up is to give it some value to pop. By pushing a sentinel value (in this case, None is fine, because it's not valid as a filename), and making the consumer handle that None by quitting, we give ourselves a nice, clean way to shut down. (And because we never push anything after the None, there's no chance of accidentally skipping it.) So, we can just push None, then be sure that (barring any other bugs) the consumer thread will eventually quit, which means we can do t.join() to wait until it does without fear of deadlock.
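To see what skip_get does in isolation, here's a quick sketch (the filenames are made up) that pre-fills a queue and drains it, using the skip_get defined above:

import queue

q = queue.Queue()
for name in ('a.txt', 'b.txt', 'c.txt'):
    q.put(name)

print(skip_get(q))  # prints 'c.txt': the backlog is drained and only the newest item survives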
You could also do this more simply with a Condition. If you think about how a queue actually works, it's just a list (or deque, or whatever) protected by a condition: the consumer waits on the condition until there's something available, and the producer makes something available by adding it to the list and signaling the condition. If you only ever want the last value, there's really no reason for the list. So, you can do this:
class OneQueue(object):
    def __init__(self):
        self.value = None
        self.condition = threading.Condition()
        self.sentinel = object()

    def get(self):
        with self.condition:
            while self.value is None:
                self.condition.wait()
            value, self.value = self.value, None
            return value

    def put(self, value):
        with self.condition:
            self.value = value
            self.condition.notify()

    def close(self):
        self.put(self.sentinel)
(Because I'm now using None to signal that nothing is available, I had to create a separate sentinel to signal that we're done.)
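A consumer on top of OneQueue would look almost the same as the one above, except that it checks for the sentinel by identity. A minimal sketch:

oq = OneQueue()

def consumer():
    while True:
        fileChanged = oq.get()
        if fileChanged is oq.sentinel:
            return
        # do stuff with fileChanged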
The problem with this design is that if the producer puts multiple values while the consumer is too busy to handle them, it can miss some of them—but in this case, that "problem" is exactly what you were looking for.
Still, using lower-level tools always means there's a lot more to get wrong, and this is especially dangerous with threading synchronization, because it involves problems that are hard to wrap your head around, and hard to debug even when you understand them, so you might be better off using a Queue anyway.
Related
I'm very familiar with Python queue.Queue. This is definitely the thing you want when you want to have a reliable stream between consumer and producer threads.
However, sometimes you have producers that are faster than consumers and are forced to drop data (as with live video frame capture, for example, where we may typically want to buffer just the last one or two frames).
Does Python provide an asynchronous buffer class, similar to queue.Queue?
It's not exactly obvious how to correctly implement one using queue.Queue.
I could, for example:
buf = queue.Queue(maxsize=3)

def produce(msg):
    if buf.full():
        buf.get(block=False)  # Make space
    buf.put(msg, block=False)

def consume():
    msg = buf.get(block=True)
    work(msg)
although I don't particularly like that produce is not a locked, queue-atomic operation. A consume may start between full and get, for example, and it would probably be broken in a multi-producer scenario.
Is there an out-of-the-box solution?
There's nothing built in for this, but it appears straightforward enough to build your own buffer class that wraps a Queue, provides mutual exclusion between .put() and .get() with its own lock, and uses a Condition variable to wake up would-be consumers whenever an item is added. Like so:
import queue
import threading

class SBuf:
    def __init__(self, maxsize):
        self.q = queue.Queue()
        self.maxsize = maxsize
        self.nonempty = threading.Condition()

    def get(self):
        with self.nonempty:
            while not self.q.qsize():
                self.nonempty.wait()
            assert self.q.qsize()
            return self.q.get()

    def put(self, v):
        with self.nonempty:
            while self.q.qsize() >= self.maxsize:
                self.q.get()  # discard the oldest entry
            self.q.put(v)
            assert 0 < self.q.qsize() <= self.maxsize
            self.nonempty.notify_all()
BTW, I advise against trying to build this kind of logic out of raw locks. Of course it can be done, but Condition variables are very carefully designed to save you from universes of unintended race conditions. There's a learning curve for Condition variables, but one well worth climbing: they often make things easy instead of brain-busting. Indeed, Python's threading module uses them internally to implement all sorts of things.
An Alternative
In the above, we only invoke queue.Queue methods under the protection of our own lock, so there's really no need to use a thread-safe container - we're supplying all the thread safety already.
So it would be a bit leaner to use a simpler container. Happily, a collections.deque can be configured to discard all but the most recent N entries itself, but "at C speed". Like so:
import collections
import threading

class SBuf:
    def __init__(self, maxsize):
        self.q = collections.deque(maxlen=maxsize)
        self.maxsize = maxsize
        self.nonempty = threading.Condition()

    def get(self):
        with self.nonempty:
            while not self.q:
                self.nonempty.wait()
            assert self.q
            return self.q.popleft()

    def put(self, v):
        with self.nonempty:
            self.q.append(v)  # discards oldest, if needed
            assert 0 < len(self.q) <= self.maxsize
            self.nonempty.notify()
This also changed .notify_all() to .notify(). In this use case, either works correctly, but we're only adding one item so there's no need to notify more than one consumer. If there are multiple consumers waiting, .notify_all() will wake all of them up but only the first will find a non-empty queue. The others will see that it's empty, and just .wait() again.
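To see the lossy behavior in action, here's a small, illustrative demo (it uses None as an ad-hoc stop sentinel, which works here because the producer never stores None as data) driving the deque-based SBuf with a fast producer and a slow consumer:

import threading
import time

buf = SBuf(maxsize=2)

def consumer():
    while True:
        v = buf.get()
        if v is None:  # ad-hoc stop sentinel
            return
        print('consumed', v)
        time.sleep(0.05)  # deliberately slow consumer

t = threading.Thread(target=consumer)
t.start()
for i in range(10):
    buf.put(i)  # fast producer; older entries are silently discarded
buf.put(None)
t.join()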
Queue is already multiprocessing- and multithreading-safe, in that you can't write to and read from the queue at the same time. However, you are correct that there's nothing stopping the queue from getting modified between the full() and get() calls.
As such, you can use a lock, which is how you can control thread access across a multi-line critical section. The lock can only be acquired once, so if it's currently locked, all other threads will wait until it has been released before they continue.
import queue
import threading
from time import sleep

lock = threading.Lock()

def produce(msg):
    with lock:
        if buf.full():
            buf.get(block=False)  # Make space
        buf.put(msg, block=False)

def consume():
    msg = None
    while msg is None:
        with lock:
            try:
                msg = buf.get(block=False)
            except queue.Empty:
                pass
        if msg is None:
            # buffer is empty; release the lock, wait briefly, and try again
            sleep(0.01)
    work(msg)
I know there are a few questions and answers related to hanging threads in Python, but my situation is slightly different as the script is hanging AFTER all the threads have been completed. The threading script is below, but obviously the first 2 functions are simplified massively.
When I run the script shown, it works. When I use my real functions, the script hangs AFTER THE LAST LINE. So all the scenarios are processed (and a message is printed to confirm each), logStudyData() then collates all the results and writes them to a CSV, and "Script Complete" is printed. And THEN it hangs.
The script with threading functionality removed runs fine.
I have tried enclosing the main script in try...except but no exception gets logged. If I use a debugger with a breakpoint on the final print and then step it forward, it hangs.
I know there is not much to go on here, but short of including the whole 1500-line script, I don't know what else to do. Any suggestions welcome!
import threading
from queue import Queue

def runScenario(scenario):
    # Do a bunch of stuff
    with lock:
        # access global variables
        pass
    pass

def logStudyData():
    # Combine results from all scenarios into a df and write to csv
    pass

def worker():
    global q
    while True:
        next_scenario = q.get()
        if next_scenario is None:
            break
        runScenario(next_scenario)
        print(next_scenario, " is complete")
        q.task_done()

global q, lock
q = Queue()
threads = []
scenario_list = ['s1','s2','s3','s4','s5','s6','s7','s8','s9','s10','s11','s12']
num_worker_threads = 6
lock = threading.Lock()

for i in range(num_worker_threads):
    print("Thread number ", i)
    this_thread = threading.Thread(target=worker)
    this_thread.start()
    threads.append(this_thread)

for scenario_name in scenario_list:
    q.put(scenario_name)

q.join()
print("q.join completed")
logStudyData()
print("script complete")
As the docs for Queue.get say:
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
In other words, there is no way get can ever return None, except by you calling q.put(None) on the main thread, which you don't do.
Notice that the example directly below those docs does this:
for i in range(num_worker_threads):
    q.put(None)
for t in threads:
    t.join()
The second loop is technically optional (the interpreter waits for non-daemon threads at exit anyway), so you can usually get away with skipping it.
But the first one is absolutely necessary. You need to either do this, or come up with some other mechanism to tell your workers to quit. Without that, your main thread just tries to exit, which means it tries to join every worker, but those workers are all blocked forever on a get that will never return, so your program hangs forever.
Building a thread pool may not be rocket science (if only because rocket scientists tend to need their calculations to be deterministic and hard real-time…), but it's not trivial, either, and there are plenty of things you can get wrong. You may want to consider using one of the two already-built threadpools in the Python standard library, concurrent.futures.ThreadPoolExecutor or multiprocessing.dummy.Pool. This would reduce your entire program to:
import concurrent.futures

def work(scenario):
    runScenario(scenario)
    print(scenario, " is complete")

scenario_list = ['s1','s2','s3','s4','s5','s6','s7','s8','s9','s10','s11','s12']

with concurrent.futures.ThreadPoolExecutor(max_workers=6) as x:
    results = list(x.map(work, scenario_list))
print("q.join completed")
logStudyData()
print("script complete")
Obviously you'll still need a lock around any mutable variables you change inside runScenario—although if you're only using a mutable variable there because you couldn't figure out how to return values to the main thread, that's trivial with an Executor: just return the values from work, and then you can use them like this:
for result in x.map(work, scenario_list):
    do_something(result)
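For instance, a sketch assuming runScenario can be refactored to return its result rather than writing to a shared global:

def work(scenario):
    result = runScenario(scenario)  # hypothetical: runScenario now returns its result
    print(scenario, " is complete")
    return scenario, result

with concurrent.futures.ThreadPoolExecutor(max_workers=6) as x:
    all_results = dict(x.map(work, scenario_list))
# each result crosses threads via its future, so no lock is needed to collect them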
I have a concurrent.futures.ThreadPoolExecutor and a list. And with the following code I add futures to the ThreadPoolExecutor:
for id in id_list:
    future = self._thread_pool.submit(self.myfunc, id)
    self._futures.append(future)
And then I wait upon the list:
concurrent.futures.wait(self._futures)
However, self.myfunc does some network I/O and thus there will be some network exceptions. When errors occur, self.myfunc submits a new self.myfunc with the same id to the same thread pool and adds a new future to the same list, just as above:
try:
    do_stuff(id)
except:
    future = self._thread_pool.submit(self.myfunc, id)
    self._futures.append(future)
    return None
Here comes the problem: I got an error on the line of concurrent.futures.wait(self._futures):
File "/usr/lib/python3.4/concurrent/futures/_base.py", line 277, in wait
f._waiters.remove(waiter)
ValueError: list.remove(x): x not in list
How should I properly add new Futures to a list while already waiting upon it?
Looking at the implementation of wait(), it certainly doesn't expect that anything outside concurrent.futures will ever mutate the list passed to it. So I don't think you'll ever get that "to work". It's not just that it doesn't expect the list to mutate, it's also that significant processing is done on list entries, and the implementation has no way to know that you've added more entries.
Untested, I'd suggest trying this instead: skip all that, and just keep a running count of threads still active. A straightforward way is to use a Condition guarding a count.
Initialization:
self._count_cond = threading.Condition()
self._thread_count = 0
When my_func is entered (i.e., when a new thread starts):
with self._count_cond:
    self._thread_count += 1
When my_func is done (i.e., when a thread ends), for whatever reason (exceptional or not):
with self._count_cond:
    self._thread_count -= 1
    self._count_cond.notify()  # wake up the waiting logic
And finally the main waiting logic:
with self._count_cond:
    while self._thread_count:
        self._count_cond.wait()
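Putting those pieces together, an untested consolidated sketch (the class and method names are mine):

import threading

class ActiveCount:
    def __init__(self):
        self._cond = threading.Condition()
        self._count = 0

    def enter(self):  # call at the top of my_func (or just before submit; see below)
        with self._cond:
            self._count += 1

    def leave(self):  # call when my_func ends, exceptional or not
        with self._cond:
            self._count -= 1
            self._cond.notify()

    def wait_for_all(self):
        with self._cond:
            while self._count:
                self._cond.wait()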
POSSIBLE RACE
It seems possible that the thread count could reach 0 while work for a new thread has been submitted, but before its my_func invocation starts running (and so before _thread_count is incremented to account for the new thread).
So the:
with self._count_cond:
    self._thread_count += 1
part should really be done instead right before each occurrence of
self._thread_pool.submit(self.myfunc, id)
Or write a new method to encapsulate that pattern; e.g., like so:
def start_new_thread(self, id):
    with self._count_cond:
        self._thread_count += 1
    self._thread_pool.submit(self.myfunc, id)
A DIFFERENT APPROACH
Offhand, I expect this could work too (but, again, haven't tested it): keep all your code the same except change how you're waiting:
while self._futures:
    self._futures.pop().result()
So this simply waits for one thread at a time, until none remain.
Note that .pop() and .append() on lists are atomic in CPython, so no need for your own lock. And because your my_func() code appends before the thread it's running in ends, the list won't become empty before all threads really are done.
AND YET ANOTHER APPROACH
Keep the original waiting code, but rework the rest not to create new threads in case of exception. For example, rewrite my_func to return True if it quits due to an exception and False otherwise, and start threads running a wrapper instead:
def my_func_wrapper(self, id):
    keep_going = True
    while keep_going:
        keep_going = self.my_func(id)
This may be especially attractive if you someday decide to use multiple processes instead of multiple threads (creating new processes can be a lot more expensive on some platforms).
AND A WAY USING cf.wait()
Another way is to change just the waiting code:
while self._futures:
    fs = self._futures[:]
    for f in fs:
        self._futures.remove(f)
    concurrent.futures.wait(fs)
Clear? This makes a copy of the list to pass to .wait(), and the copy is never mutated. New threads show up in the original list, and the whole process is repeated until no new threads show up.
Which of these ways makes most sense seems to me to depend mostly on pragmatics, but there's not enough info about all you're doing for me to make a guess about that.
This has been discussed many, many times, but I still don't have a good grasp on how to best accomplish this.
Suppose I have two threads: a main app thread and a worker thread. The main app thread (say it's a WXWidgets GUI thread, or a thread that is looping and accepting user input at the console) could have a reason to stop the worker thread - the user's closing the application, a stop button was clicked, some error occurred in the main thread, whatever.
Commonly suggested is to set up a flag that the thread checks frequently to determine whether to exit. I have two problems with the suggested ways to approach this, however:
First, writing constant checks of a flag into my code makes my code really ugly, and it's very, very prone to problems due to the huge amount of code duplication. Take this example:
def WorkerThread():
    while (True):
        doOp1() # assume this takes say 100ms.
        if (exitThread == True):
            safelyEnd()
            return
        doOp2() # this one also takes some time, say 200ms
        if (exitThread == True):
            safelyEnd()
            return
        if (somethingIsTrue == True):
            doSomethingImportant()
            if (exitThread == True): return
            doSomethingElse()
            if (exitThread == True): return
        doOp3() # this blocks for an indeterminate amount of time - say, it's waiting on a network response
        if (exitThread == True):
            safelyEnd()
            return
        doOp4() # this is doing some math
        if (exitThread == True):
            safelyEnd()
            return
        doOp5() # This calls a buggy library that might block forever. We need a way to detect this and kill this thread if it's stuck for long enough...
        saveSomethingToDisk() # might block while the disk spins up, or while a network share is accessed...whatever
        if (exitThread == True):
            safelyEnd()
            return

def safelyEnd():
    cleanupAnyUnfinishedBusiness() # do whatever is needed to get things to a workable state even if something was interrupted
    writeWhatWeHaveToDisk() # it's OK to wait for this since it's so important
If I add more code or change code, I have to make sure I'm adding those check blocks all over the place. If my worker thread is a very lengthy thread, I could easily have tens or even hundreds of those checks. Very cumbersome.
Think of the other problems. If doOp4() does accidentally deadlock, my app will spin forever and never exit. Not a good user experience!
Using daemon threads isn't really a good option either because it denies me the opportunity to execute the safelyEnd() code. This code might be important - flushing disk buffers, writing log data for debugging purposes, etc.
Second, my code might call functions that block where I don't have the opportunity to check frequently. Let's say this function exists but it's in code that I don't have access to - say part of a library:
def doOp4():
    time.sleep(60) # imagine that this is a network call that waits 60 seconds for a reply before returning.
If that timeout is 60 seconds, even if my main thread gives the signal for the thread to end, it still might sit there for 60 seconds, when it would be perfectly reasonable for it to just stop waiting for a network response and exit. If that code is part of a library I didn't write, however, I have no control over how that works.
Even if I did write the code for a network check, I'd basically have to refactor it so that rather than waiting 60 seconds, it loops 60 times and waits 1 second between checks of the exit flag! Again, very messy!
The upshot of all of this, is it feels like a good way to be able to implement this easily would be to somehow cause an exception on a specific thread. If I could do that, I could wrap the entire worker thread's code in a try block, and put the safelyEnd() code in the exception handler, or even a finally block.
Is there a way to either accomplish this, or refactor this code with a different technique that will make things work? The thing is, ideally, when the user requests a quit, we want to make them wait the minimum possible amount. It seems that there has to be a simple way to accomplish this, as this is a very common thing in apps!
Most of the thread communication objects don't allow for this type of setup. They might allow for a cleaner way to have an exit flag, but it still doesn't eliminate the need to constantly check that exit flag, and it still won't deal with the thread blocking because of an external call or because it's simply in a busy loop.
The biggest thing for me is really that if I have a long worker thread procedure I have to litter it with hundreds of checks of the flag. This just seems way too messy and doesn't feel like it's very good coding practice. There has to be a better way...
Any advice would be greatly appreciated.
First, you can make this a lot less verbose and repetitive by using an exception, without needing the ability to raise exceptions into the thread from outside, or any other new tricks or language features:
def WorkerThread():
    class ExitThreadError(Exception):
        pass

    def CheckEnd():
        if exitThread:
            raise ExitThreadError()

    try:
        while True:
            doOp1() # assume this takes say 100ms.
            CheckEnd()
            doOp2() # this one also takes some time, say 200ms
            CheckEnd()
            # etc.
    except ExitThreadError:
        safelyEnd()
Note that you really ought to be guarding exitThread with a Lock or Condition—which is another good reason to wrap up the check, so you only need to fix that in one place.
Anyway, I've taken out some excessive parentheses, == True checks, etc. that added nothing to the code; hopefully you can still see how it's equivalent to the original.
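For example, threading.Event packages the flag and its locking for you, so the check could be written like this (a sketch, assuming you replace the bare exitThread global with the event):

import threading

exit_event = threading.Event()

class ExitThreadError(Exception):
    pass

def CheckEnd():
    if exit_event.is_set():
        raise ExitThreadError()

# In the main thread, request shutdown with:
#     exit_event.set()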
You can take this even farther by restructuring your function into a simple state machine; then you don't even need an exception. I'll show a ridiculously trivial example, where every state always implicitly transitions to the next state no matter what. For this case, the refactor is obviously reasonable; whether it's reasonable for your real code, only you can really tell.
def WorkerThread():
    states = (doOp1, doOp2, doOp3, doOp4, doOp5)
    current = 0
    while not exitThread:
        states[current]()
        current = (current + 1) % len(states)  # wrap around, like the original while-True loop
    safelyEnd()
Neither of these does anything to help you interrupt in the middle of one of your steps.
If you have some function that takes 60 seconds and there's not a damn thing you can do about it, then there's no way to cancel your thread during those 60 seconds and there's not a damn thing you can do about it. That's just the way it is.
But usually, things that take 60 seconds are really doing something like blocking on a select, and there is something you can do about that—create a pipe, stick its read end in the select, and write on the other end to wake up the thread.
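A sketch of that self-pipe trick, assuming the worker is blocked in select waiting on a socket:

import os
import select

wake_r, wake_w = os.pipe()  # read end joins the select; write end wakes it up

def wait_readable(sock):
    # Returns True when sock is readable, False if we were woken up to exit.
    ready, _, _ = select.select([sock, wake_r], [], [])
    if wake_r in ready:
        os.read(wake_r, 1)  # drain the wakeup byte
        return False
    return True

# From the main thread, to interrupt the blocking select:
#     os.write(wake_w, b'x')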
Or, if you're feeling hacky, just closing/deleting/etc. a file or other object that the function is waiting on/processing/otherwise using will often make it fail quickly with an exception. Of course sometimes it guarantees a segfault, or corrupted data, or a 50% chance of exiting and a 50% chance of hanging forever, or… So, even if you can't control that doOp4 function, you'd better be able to analyze its source and/or whitebox test it.
If worst comes to worst, then yes, you do have to change that one 60-second timeout into 60 1-second timeouts. But usually it won't come to that.
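(And when the 60-second wait is just a sleep you control, threading.Event.wait already gives you the interruptible version for free; a sketch, reusing the exit_event idea from above:)

# instead of: time.sleep(60)
if exit_event.wait(timeout=60):
    # exit_event was set while we were waiting, so bail out early
    raise ExitThreadError()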
Finally, if you really do need to be able to kill a thread, don't use a thread, use a child process. Those are killable.
Just make sure that your process is always in a state where it's safe to kill it—or, if you only care about Unix, use a USR signal and mask it out when the process isn't in a safe-to-kill state.
But if it's not safe to kill your process in the middle of that 60-second doOp4 call, this isn't really going to help you, because you still won't be able to kill it during those 60 seconds.
In some cases, you can have the child process arrange for the parent to clean up for it if it gets killed unexpectedly, or even arrange for it to be cleaned up on the next run (e.g., think of a typical database journal).
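A minimal sketch of the kill-a-process approach using the standard multiprocessing module:

import multiprocessing
import time

def worker():
    while True:
        time.sleep(1)  # stands in for the real doOp1()..doOp5() work

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.start()
    time.sleep(3)
    p.terminate()  # hard kill (SIGTERM on Unix); no cleanup code runs in the child
    p.join()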
But what you're asking for is ultimately a contradiction: you want to hard-kill a thread without giving it a chance to finish what it's doing, but you want to guarantee that it finishes what it's doing, and you don't want to rewrite the code to make that possible. So, you need to rethink your design so that it requires something that isn't impossible.
If you do not mind your code running about ten times slower, you can use the Thread2 class implemented below. An example follows that shows how calling the new stop method should kill the thread on the next bytecode instruction. Implementing a cleanup system is left as an exercise for the reader.
import threading
import sys

class StopThread(StopIteration):
    pass

threading.SystemExit = SystemExit, StopThread

class Thread2(threading.Thread):
    def stop(self):
        self.__stop = True

    def _bootstrap(self):
        if threading._trace_hook is not None:
            raise ValueError('Cannot run thread with tracing!')
        self.__stop = False
        sys.settrace(self.__trace)
        super()._bootstrap()

    def __trace(self, frame, event, arg):
        if self.__stop:
            raise StopThread()
        return self.__trace

class Thread3(threading.Thread):
    def _bootstrap(self, stop_thread=False):
        def stop():
            nonlocal stop_thread
            stop_thread = True
        self.stop = stop

        def tracer(*_):
            if stop_thread:
                raise StopThread()
            return tracer
        sys.settrace(tracer)
        super()._bootstrap()
################################################################################
import time

def main():
    test = Thread2(target=printer)
    test.start()
    time.sleep(1)
    test.stop()
    test.join()

def printer():
    while True:
        print(time.time() % 1)
        time.sleep(0.1)

if __name__ == '__main__':
    main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I have a queue that always needs to be ready to process items when they are added to it. The function that runs on each item in the queue creates and starts a thread to execute the operation in the background so the program can go do other things.
However, the function I am calling on each item in the queue simply starts the thread and then completes execution, regardless of whether or not the thread it started completed. Because of this, the loop will move on to the next item in the queue before the program is done processing the last item.
Here is code to better demonstrate what I am trying to do:
import threading
import Queue

queue = Queue.Queue()

def addTask():
    queue.put(SomeObject())

def worker():
    while True:
        try:
            # If an item is put onto the queue, immediately execute it (unless
            # an item on the queue is still being processed, in which case wait
            # for it to complete before moving on to the next item in the queue)
            item = queue.get()
            runTests(item)
            # I want to wait for 'runTests' to complete before moving past this point
        except Queue.Empty, err:
            # If the queue is empty, just keep running the loop until something
            # is put on top of it.
            pass

def runTests(args):
    op_thread = SomeThread(args)
    op_thread.start()
    # My problem is once this last line 'op_thread.start()' starts the thread,
    # the 'runTests' function completes operation, but the operation executed
    # by some thread is not yet done executing because it is still running in
    # the background. I do not want the 'runTests' function to actually complete
    # execution until the operation in that thread is done executing.
    """op_thread.join()"""
    # I tried putting this line after 'op_thread.start()', but that did not solve anything.
    # I have commented it out because it is not necessary to demonstrate what
    # I am trying to do, but I just wanted to show that I tried it.

t = threading.Thread(target=worker)
t.start()
Some notes:
This is all running in a PyGTK application. Once the 'SomeThread' operation is complete, it sends a callback to the GUI to display the results of the operation.
I do not know how much this affects the issue I am having, but I thought it might be important.
A fundamental issue with Python threads is that you can't just kill them - they have to agree to die.
What you should do is:
Implement the thread as a class
Add a threading.Event member which the join method clears and the thread's main loop occasionally checks. If it sees it's cleared, it returns. For this, override threading.Thread.join to clear the event and then call Thread.join on itself
To allow (2), make the read from Queue block with some small timeout. This way your thread's "response time" to the kill request will be the timeout, and OTOH no CPU choking is done
Here's some code from a socket client thread I have that has the same issue with blocking on a queue:
class SocketClientThread(threading.Thread):
    """ Implements the threading.Thread interface (start, join, etc.) and
        can be controlled via the cmd_q Queue attribute. Replies are placed in
        the reply_q Queue attribute.
    """
    def __init__(self, cmd_q=Queue.Queue(), reply_q=Queue.Queue()):
        super(SocketClientThread, self).__init__()
        self.cmd_q = cmd_q
        self.reply_q = reply_q
        self.alive = threading.Event()
        self.alive.set()
        self.socket = None

        self.handlers = {
            ClientCommand.CONNECT: self._handle_CONNECT,
            ClientCommand.CLOSE: self._handle_CLOSE,
            ClientCommand.SEND: self._handle_SEND,
            ClientCommand.RECEIVE: self._handle_RECEIVE,
        }

    def run(self):
        while self.alive.isSet():
            try:
                # Queue.get with timeout to allow checking self.alive
                cmd = self.cmd_q.get(True, 0.1)
                self.handlers[cmd.type](cmd)
            except Queue.Empty as e:
                continue

    def join(self, timeout=None):
        self.alive.clear()
        threading.Thread.join(self, timeout)
Note self.alive and the loop in run.
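So, from the outside, a clean shutdown is just this (a sketch; the ClientCommand plumbing is assumed from the class above):

client = SocketClientThread()
client.start()
# ... put ClientCommand objects on client.cmd_q, read replies from client.reply_q ...
client.join()  # clears self.alive; run() notices within ~0.1s and the thread exits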