Does python provide a synchronized buffer? - python

I'm very familiar with Python queue.Queue. This is definitely the thing you want when you want to have a reliable stream between consumer and producer threads.
However, sometimes you have producers that are faster than consumers and are forced to drop data (as with live video frame capture, for example, where we may typically want to buffer only the last frame or two).
Does Python provide a synchronized buffer class of this kind, similar to queue.Queue?
It's not exactly obvious how to correctly implement one using queue.Queue.
I could, for example:
buf = queue.Queue(maxsize=3)

def produce(msg):
    if buf.full():
        buf.get(block=False)  # Make space
    buf.put(msg, block=False)

def consume():
    msg = buf.get(block=True)
    work(msg)
although I don't particularly like that produce is not a locked, queue-atomic operation. A consume may start between full and get, for example, and it would be (probably) broken for a multi-producer scenario.
Is there an out-of-the-box solution?

There's nothing built in for this, but it appears straightforward enough to build your own buffer class that wraps a Queue and provides mutual exclusion between .put() and .get() with its own lock, and using a Condition variable to wake up would-be consumers whenever an item is added. Like so:
import threading

class SBuf:
    def __init__(self, maxsize):
        import queue
        self.q = queue.Queue()
        self.maxsize = maxsize
        self.nonempty = threading.Condition()

    def get(self):
        with self.nonempty:
            while not self.q.qsize():
                self.nonempty.wait()
            assert self.q.qsize()
            return self.q.get()

    def put(self, v):
        with self.nonempty:
            while self.q.qsize() >= self.maxsize:
                self.q.get()
            self.q.put(v)
            assert 0 < self.q.qsize() <= self.maxsize
            self.nonempty.notify_all()
BTW, I advise against trying to build this kind of logic out of raw locks. Of course it can be done, but Condition variables are very carefully designed to save you from universes of unintended race conditions. There's a learning curve for Condition variables, but one well worth climbing: they often make things easy instead of brain-busting. Indeed, Python's threading module uses them internally to implement all sorts of things.
An Alternative
In the above, we only invoke queue.Queue methods under the protection of our own lock, so there's really no need to use a thread-safe container - we're supplying all the thread safety already.
So it would be a bit leaner to use a simpler container. Happily, a collections.deque can be configured to discard all but the most recent N entries itself, but "at C speed". Like so:
class SBuf:
    def __init__(self, maxsize):
        import collections
        self.q = collections.deque(maxlen=maxsize)
        self.maxsize = maxsize
        self.nonempty = threading.Condition()

    def get(self):
        with self.nonempty:
            while not self.q:
                self.nonempty.wait()
            assert self.q
            return self.q.popleft()

    def put(self, v):
        with self.nonempty:
            self.q.append(v)  # discards oldest, if needed
            assert 0 < len(self.q) <= self.maxsize
            self.nonempty.notify()
This also changed .notify_all() to .notify(). In this use case, either works correctly, but we're only adding one item so there's no need to notify more than one consumer. If there are multiple consumers waiting, .notify_all() will wake all of them up but only the first will find a non-empty queue. The others will see that it's empty, and just .wait() again.
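For concreteness, here's a minimal usage sketch of either SBuf variant, assuming a fast producer and a slow consumer (the frame numbers and sleeps are just placeholders):

import threading, time

buf = SBuf(maxsize=2)  # keep only the two most recent frames

def producer():
    for frame in range(1000):
        buf.put(frame)      # never blocks; silently discards the oldest frame
        time.sleep(0.001)   # fast producer

def consumer():
    while True:
        frame = buf.get()   # blocks until a frame is available
        time.sleep(0.01)    # slow consumer: misses frames, but always sees recent ones

threading.Thread(target=producer, daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
time.sleep(1)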

Queue is already multiprocessing- and multithreading-safe, in that you can't write to and read from the queue at the same time. However, you are correct that there's nothing stopping the queue from getting modified between the full() and get() calls.
You can therefore use a lock, which is how you control thread access across multiple statements. The lock can only be held by one thread at a time, so if it's currently held, all other threads will wait until it has been released before they continue.
import queue
import threading
from time import sleep

lock = threading.Lock()

def produce(msg):
    lock.acquire()
    if buf.full():
        buf.get(block=False)  # Make space
    buf.put(msg, block=False)
    lock.release()

def consume():
    msg = None
    while not msg:
        lock.acquire()
        try:
            msg = buf.get(block=False)
        except queue.Empty:
            # buffer is empty, wait and try again
            sleep(0.01)
        lock.release()
    work(msg)
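As a small variant (my suggestion, not part of the answer above), the lock can be taken with a with statement so it is released even if an exception is raised mid-block:

def produce(msg):
    with lock:                     # released automatically, even on error
        if buf.full():
            buf.get(block=False)   # Make space
        buf.put(msg, block=False)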

Related

How to make Python's multiprocessing Queue's .empty() method return the correct value? Or alternatives?

I have this snippet that uses the Queue class from the multiprocessing module. I am very confused that the .empty() method of an instance of Queue does not give me the value I would expect. This is my code:
from time import sleep
from multiprocessing import Queue, Lock

foo = Queue()
locker = Lock()

with locker:  # even with this, still True
    foo.put("bar")
    print(foo.empty())  # True, obviously not

print(foo.empty())  # True
print(foo.empty())  # True
print(foo.qsize())  # 1L
print(foo.empty())  # True
However, if I use the sleep function from time to introduce a short delay in the execution, it works.
from time import sleep
from multiprocessing import Queue, Lock
foo = Queue()
locker = Lock()
foo.put("bar")
sleep(0.01)
print(foo.empty()) # False
print(foo.empty()) # False
print(foo.empty()) # False
print(foo.qsize()) # 1L
print(foo.empty()) # False
I know my alternative is the .qsize() > 0 expression, but I am sure that I am just doing this the wrong way.
What am i doing wrong?
*EDIT*
I understand now that it is unreliable, thank you @Mathias Ettinger. Any clean alternatives? I need to know how to reliably tell if my Queue is empty or not.
Unfortunately, the Queue's implementation is complex enough that .empty() and .qsize() check different things to make their judgments. That means they may disagree for a while, as you've seen.
Since .qsize() is supported on your platform (which is not true everywhere), you can re-implement the .empty() check in terms of .qsize(), and this will work for you:
# mp.Queue() is a function, not a class, so we need to find the true class
# to subclass
import multiprocessing.queues

class XQueue(multiprocessing.queues.Queue):
    def empty(self):
        try:
            return self.qsize() == 0
        except NotImplementedError:  # OS X -- see qsize() implementation
            return super(XQueue, self).empty()
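One hedged caveat: on Python 3, the class in multiprocessing.queues takes its multiprocessing context as a keyword-only argument, so instantiating the subclass directly looks roughly like this:

import multiprocessing

q = XQueue(ctx=multiprocessing.get_context())
q.put("bar")
print(q.empty())  # now answered via qsize() on platforms that support it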
Under the hood, the Queue .put() is a complex process: the Queue places objects in a buffer and acquires an interprocess semaphore, while a hidden daemon thread is responsible for draining the buffer and serializing its contents to a pipe. (Consumers then .get() by reading from this pipe and releasing the interprocess semaphore.) So, that's why sleeping in your example works: the daemon thread has enough time to move the object from in-memory buffer to I/O representation before you call .empty().
As an aside, I find this behavior surprising: a Queue in the very same internal state can give two different answers to the question, "do you have any elements enqueued?" (qsize will say "yes", and empty "no".)
I think I understand how this came about. Since not all platforms support sem_getvalue(), not all platforms can implement qsize, but empty can be reasonably implemented by just polling the FIFO. I'd have expected empty to be implemented in terms of qsize on platforms that support the latter.
As per the documentation, none of empty(), full(), or qsize() is reliable.
Alternatives include:
Reading the exact amount of items going through the Queue:
AMT = 8

for _ in range(AMT):
    queue.put('some stuff')

for _ in range(AMT):
    print(queue.get())
This is useful if you know beforehand how many items must be processed in total or how many will be processed by each thread.
Reading items until a guardian appears:
num_threads = 8
guardian = 'STUFF DONE'

while num_threads:
    item = queue.get()
    if item == guardian:
        num_threads -= 1
    else:
        process(item)
This is helpful if every thread has a variable amount of work to do (and you don't know the total beforehand) but can determine when it's done.
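The producer side of the guardian pattern is symmetric; a hedged sketch, assuming each of the num_threads producer threads runs something like this:

def producer(items):
    for item in items:
        queue.put(item)
    queue.put(guardian)  # exactly one guardian per producer thread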

Data type for a "closable" queue to handle a stream of items for multiple producers and consumers

Is there a specific type of Queue that is "closable", and is suitable for when there are multiple producers and consumers, and the data comes from a stream (so it's not known when it will end)?
I've been unable to find a queue that implements this sort of behavior, or a name for one, but it seems like an integral type for producer-consumer type problems.
As an example, ideally I could write code where (1) each producer would tell the queue when it was done, (2) consumers would blindly call a blocking get(), and (3) when all consumers were done, and the queue was empty, all the producers would unblock and receive a "done" notification:
As code, it'd look something like this:
def produce():
    for x in range(randint()):
        queue.put(x)
        sleep(randint())
    queue.close()  # called once for every producer

def consume():
    while True:
        try:
            print queue.get()
        except ClosedQueue:
            print 'done!'
            break

num_producers = randint()
queue = QueueTypeThatICantFigureOutANameFor(num_producers)

[Thread(target=produce).start() for _ in range(num_producers)]
[Thread(target=consume).start() for _ in range(random())]
Also, I'm not looking for the "Poison Pill" solution, where a "done" value is added to the queue for every consumer -- I don't like the inelegance of producers needing to know how many consumers there are.
I'd call that a self-latching queue.
For your primary requirement, combine the queue with a condition variable check that gracefully latches (shuts down) the queue when all producers have vacated:
class SelfLatchingQueue(LatchingQueue):
    ...

    def __init__(self, num_producers):
        ...

    def close(self):
        '''Called by a producer to indicate that it is done producing'''
        ... perhaps check that current thread is a known producer? ...
        with self.a_mutex:
            self._num_active_producers -= 1
            if self._num_active_producers <= 0:
                # Future put()s throw QueueLatched. get()s will empty the queue
                # and then throw QueueEmpty thereafter
                self.latch()  # Guess what superclass implements this?
For your secondary requirement (#3 in the original post, finished producers apparently block until all consumers are finished), I'd perhaps use a barrier or just another condition variable. This could be implemented in a subclass of the SelfLatchingQueue, of course, but without knowing the codebase I'd keep this behavior separate from the automatic latching.
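The LatchingQueue superclass isn't shown above, so here is a self-contained, hedged sketch of the whole idea built directly on a Condition (the class and exception names are mine, not from the answer):

import threading
from collections import deque

class QueueClosed(Exception):
    """Raised by get() once the queue is latched and drained."""

class ClosableQueue:
    def __init__(self, num_producers):
        self._items = deque()
        self._producers_left = num_producers
        self._cond = threading.Condition()

    def put(self, item):
        with self._cond:
            if self._producers_left <= 0:
                raise QueueClosed("queue already latched")
            self._items.append(item)
            self._cond.notify()

    def close(self):
        """Called once by each producer when it is done producing."""
        with self._cond:
            self._producers_left -= 1
            if self._producers_left <= 0:
                self._cond.notify_all()  # wake all blocked consumers

    def get(self):
        with self._cond:
            while not self._items and self._producers_left > 0:
                self._cond.wait()
            if self._items:
                return self._items.popleft()
            raise QueueClosed()  # latched and empty: this consumer is done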

multithreading check membership in Queue and stop the threads

I want to iterate over a list using 2 threads, one from the front and the other from the back, and put the elements in a Queue on each iteration. But before putting a value in the Queue I need to check whether it is already there (meaning the other thread already put it in); when that happens I need to stop the thread and return the list of values traversed by each thread.
This is what I have tried so far :
from Queue import Queue
from threading import Thread, Event

class ThreadWithReturnValue(Thread):
    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs={}, Verbose=None):
        Thread.__init__(self, group, target, name, args, kwargs, Verbose)
        self._return = None

    def run(self):
        if self._Thread__target is not None:
            self._return = self._Thread__target(*self._Thread__args,
                                                **self._Thread__kwargs)

    def join(self):
        Thread.join(self)
        return self._return

main_path = Queue()

def is_in_queue(x, q):
    with q.mutex:
        return x in q.queue

def a(main_path, g, l=[]):
    for i in g:
        l.append(i)
        print 'a'
        if is_in_queue(i, main_path):
            return l
        main_path.put(i)

def b(main_path, g, l=[]):
    for i in g:
        l.append(i)
        print 'b'
        if is_in_queue(i, main_path):
            return l
        main_path.put(i)

g = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l']

t1 = ThreadWithReturnValue(target=a, args=(main_path, g))
t2 = ThreadWithReturnValue(target=b, args=(main_path, g[::-1]))
t2.start()
t1.start()

# Wait for all produced items to be consumed
print main_path.join()
I used ThreadWithReturnValue that will create a custom thread that returns the value.
And for membership checking I used the following function :
def is_in_queue(x, q):
    with q.mutex:
        return x in q.queue
Now if I first start t1 and then t2, I get 12 a's, then one b, then it doesn't do anything and I need to terminate Python manually!
But if I first run the t2 then t1 I will get the following result:
b
b
b
b
ab
ab
b
b
b
b
a
a
So my questions are: why does Python treat the threads differently in these cases? And how can I terminate the threads and make them communicate with each other?
Before we get into bigger problems, you're not using Queue.join right.
The whole point of this function is that a producer who adds a bunch of items to a queue can wait until the consumer or consumers have finished working on all of those items. This works by having the consumer call task_done after they finish working on each item that they pulled off with get. Once there have been as many task_done calls as put calls, the queue is done. You're not doing a get anywhere, much less a task_done, so there's no way the queue can ever be finished. So, that's why you block forever after the two threads finish.
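To make the task_done/join protocol concrete, here is a minimal sketch of correct usage (Python 3 names; on Python 2 the module is spelled Queue), with a placeholder worker body:

import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()       # blocks until an item is available
        try:
            pass             # ... process item here ...
        finally:
            q.task_done()    # exactly one task_done per item pulled off

threading.Thread(target=worker, daemon=True).start()

for i in range(10):
    q.put(i)

q.join()  # returns only after every put has been matched by a task_done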
The first problem here is that your threads are doing almost no work outside of the actual synchronization. If the only thing they do is fight over a queue, only one of them is going to be able to run at a time.
Of course that's common in toy problems, but you have to think through your real problem:
If you're doing a lot of I/O work (listening on sockets, waiting for user input, etc.), threads work great.
If you're doing a lot of CPU work (calculating primes), threads don't work in Python because of the GIL, but processes do.
If you're actually primarily dealing with synchronizing separate tasks, neither one is going to work well (and processes will be worse). It may still be simpler to think in terms of threads, but it'll be the slowest way to do things. You may want to look into coroutines; Greg Ewing has a great demonstration of how to use yield from to use coroutines to build things like schedulers or many-actor simulations.
Next, as I alluded to in your previous question, making threads (or processes) work efficiently with shared state requires holding locks for as short a time as possible.
So, if you have to search a whole queue under a lock, that had better be a constant-time search, not a linear-time search. That's why I suggested using something like an OrderedSet recipe rather than the deque that the stdlib's Queue.Queue keeps internally. Then this function:
def is_in_queue(x, q):
    with q.mutex:
        return x in q.queue
… is only blocking the queue for a tiny fraction of a second—just long enough to look up a hash value in a table, instead of long enough to compare every element in the queue against x.
Finally, I tried to explain about race conditions on your other question, but let me try again.
You need a lock around every complete "transaction" in your code, not just around the individual operations.
For example, if you do this:
with queue locked:
    see if x is in the queue
if x was not in the queue:
    with queue locked:
        add x to the queue
… then it's always possible that x was not in the queue when you checked, but in the time between when you unlocked it and relocked it, someone added it. This is exactly why it's possible for both threads to stop early.
To fix this, you need to put a lock around the whole thing:
with queue locked:
    if x is not in the queue:
        add x to the queue
Of course this goes directly against what I said before about locking the queue for as short a time as possible. Really, that's what makes multithreading hard in a nutshell. It's easy to write safe code that just locks everything for as long as might conceivably be necessary, but then your code ends up only using a single core, while all the other threads are blocked waiting for the lock. And it's easy to write fast code that just locks everything as briefly as possible, but then it's unsafe and you get garbage values or even crashes all over the place. Figuring out what needs to be a transaction, and how to minimize the work inside those transactions, and how to deal with the multiple locks you'll probably need to make that work without deadlocking them… that's not so easy.
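In concrete Python, such a transaction might look like this hedged sketch (the class and method names are mine): one lock protects both a set used for O(1) membership tests and the queue, so the check-and-put can never interleave with another thread's.

import queue
import threading

class UniqueQueue:
    def __init__(self):
        self._q = queue.Queue()
        self._seen = set()        # O(1) membership test
        self._lock = threading.Lock()

    def put_if_new(self, item):
        with self._lock:          # the whole transaction is atomic
            if item in self._seen:
                return False      # another thread already enqueued it
            self._seen.add(item)
            self._q.put(item)
            return True

    def get(self, block=True, timeout=None):
        return self._q.get(block, timeout)   # Queue is already thread-safe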
A couple of things that I think can be improved:
Due to the GIL, you might want to use the multiprocessing (rather than threading) module. In general, CPython threading will not cause CPU intensive work to speed up. (Depending on what exactly is the context of your question, it's also possible that multiprocessing won't, but threading almost certainly won't.)
A function like your is_in_queue would likely lead to high contention.
The locked time seems linear in the number of items that need to be traversed:
def is_in_queue(x, q):
    with q.mutex:
        return x in q.queue
So, instead, you could possibly do the following.
Use multiprocessing with a shared dict:
from multiprocessing import Process, Manager
manager = Manager()
d = manager.dict()
# Fn definitions and such
p1 = Process(target=p1, args=(d,))
p2 = Process(target=p2, args=(d,))
within each function, check for the item like this:
def p1(d):
    # Stuff
    if 'foo' in d:
        return
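Put together, a runnable sketch of that multiprocessing approach might look like this (the worker body and values are hypothetical):

from multiprocessing import Process, Manager

def worker(d, values):
    for v in values:
        if v in d:        # fast lookup in the shared dict
            continue      # likely already handled elsewhere
        # note: check-then-set is not atomic; add a manager.Lock() if that matters
        d[v] = True

if __name__ == '__main__':
    manager = Manager()
    d = manager.dict()
    p1 = Process(target=worker, args=(d, ['a', 'b', 'c']))
    p2 = Process(target=worker, args=(d, ['c', 'b', 'a']))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print(sorted(d.keys()))  # ['a', 'b', 'c']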

Skip steps in fsevents queue

I'm currently monitoring a folder using fsevents. Every time a file is added, some code is executed on that file. A new file is added to the folder every second.
from fsevents import Observer, Stream

def file_event_callback(event):
    # code 256 for adding file to folder
    if event.mask == 256:
        fileChanged = event.name
        # do stuff with fileChanged file

if __name__ == "__main__":
    observer = Observer()
    observer.start()
    stream = Stream(file_event_callback, 'folder', file_events=True)
    observer.schedule(stream)
    observer.join()
This works quite well. The only problem is that the library builds up a queue entry for every file added to the folder. The code executed within file_event_callback can take more than a second. When that happens, the other items in the queue should be skipped so that only the newest one is used.
How can I skip items in the queue so that only the latest addition to the folder is used after the last one is finished?
I tried using watchdog first but as this has to run on a mac I had some troubles making it work the way I wanted.
I don't know exactly what library you're using, and when you say "this is building a queue…" I have no idea what "this" you're referring to… but an obvious answer is to stick your own queue in front of whatever it's using, so you can manipulate that queue directly. For example:
import queue
import threading

def skip_get(q):
    value = q.get(block=True)
    try:
        while True:
            value = q.get(block=False)
    except queue.Empty:
        return value

q = queue.Queue()

def file_event_callback(event):
    # code 256 for adding file to folder
    if event.mask == 256:
        fileChanged = event.name
        q.put(fileChanged)

def consumer():
    while True:
        fileChanged = skip_get(q)
        if fileChanged is None:
            return
        # do stuff with fileChanged
Now, before you start up the observer, do this:
t = threading.Thread(target=consumer)
t.start()
And at the end:
observer.join()
q.put(None)
t.join()
So, how does this work?
First, let's look at the consumer side. When you call q.get(), this pops the first thing off the queue. But what if nothing is there? That's what the block argument is for. If it's false, the get will raise a queue.Empty exception. If it's true, the get will wait forever (in a thread-safe way) until something appears to be popped. So, by blocking once, we handle the case where there's nothing to read yet. By then looping without blocking, we consume anything else on the queue, to handle the case where there are too many things to read. Because we keep reassigning value to whatever we popped, what we end up with is the last thing put on the queue.
Now, let's look at the producer side. When you call q.put(value), that just puts value on the queue. Unless you've put a size limit on the queue (which I haven't), there's no way this could block, so you don't have to worry about any of that. But now, how do you signal the consumer thread that you're finished? It's going to be waiting in q.get(block=True) forever; the only way to wake it up is to give it some value to pop. By pushing a sentinel value (in this case, None is fine, because it's not valid as a filename), and making the consumer handle that None by quitting, we give ourselves a nice, clean way to shutdown. (And because we never push anything after the None, there's no chance of accidentally skipping it.) So, we can just push None, then be sure that (barring any other bugs) the consumer thread will eventually quit, which means we can do t.join() to wait until it does without fear of deadlock.
I mentioned above that you could do this more simply with a Condition. If you think about how a queue actually works, it's just a list (or deque, or whatever) protected by a condition: the consumer waits on the condition until there's something available, and the producer makes something available by adding it to the list and signaling the condition. If you only ever want the last value, there's really no reason for the list. So, you can do this:
class OneQueue(object):
    def __init__(self):
        self.value = None
        self.condition = threading.Condition()
        self.sentinel = object()

    def get(self):
        with self.condition:
            while self.value is None:
                self.condition.wait()
            value, self.value = self.value, None
            return value

    def put(self, value):
        with self.condition:
            self.value = value
            self.condition.notify()

    def close(self):
        self.put(self.sentinel)
(Because I'm now using None to signal that nothing is available, I had to create a separate sentinel to signal that we're done.)
The problem with this design is that if the producers puts multiple values while the consumer is too busy to handle them, it can miss some of them—but in this case, that "problem" is exactly what you were looking for.
Still, using lower-level tools always means there's a lot more to get wrong, and this is especially dangerous with threading synchronization, because it involves problems that are hard to wrap your head around, and hard to debug even when you understand them, so you might be better off using a Queue anyway.
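For completeness, here is a hedged usage sketch of OneQueue in the same callback/consumer setup as the Queue-based version above:

oq = OneQueue()

def file_event_callback(event):
    if event.mask == 256:
        oq.put(event.name)      # an older, unconsumed name is simply overwritten

def consumer():
    while True:
        fileChanged = oq.get()
        if fileChanged is oq.sentinel:   # close() was called
            return
        # do stuff with fileChanged

t = threading.Thread(target=consumer)
t.start()
# ... run the observer ...
oq.close()
t.join()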

Is it possible to manually lock/unlock a Queue?

I'm curious if there is a way to lock a multiprocessing.Queue object manually.
I have a pretty standard Producer/Consumer pattern set up in which my main thread is constantly producing a series of values, and a pool of multiprocessing.Process workers is acting on the values produced.
It is all controlled via a sole multiprocessing.Queue().
import time
import multiprocessing

class Reader(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            item = self.queue.get()
            if isinstance(item, str):
                break

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    reader = Reader(queue)
    reader.start()

    start_time = time.time()
    while time.time() - start_time < 10:
        queue.put(1)

    queue.put('bla bla bla sentinal')
    queue.join()
The issue I'm running into is that my worker pool cannot consume and process the queue as fast as the main thread insert values into it. So after some period of time, the Queue is so unwieldy that it pops a MemoryError.
An obvious solution would be to simply add a wait check in the producer to stall it from putting any more values into the queue. Something along the lines of:
while time.time() - start_time < 10:
    queue.put(1)
    while queue.qsize() > some_size:
        time.sleep(.1)

queue.put('bla bla bla sentinal')
queue.join()
However, because of the funky nature of the program, I'd like to dump everything in the Queue to a file for later processing. But! Without being able to temporarily lock the queue, the worker can't consume everything in it as the producer is constantly filling it back up with junk -- conceptually anyway. After numerous tests it seems that at some point one of the locks wins (but usually the one adding to the queue).
Edit: Also, I realize it'd be possible to simply stop the producer and consume it from that thread... but that makes the Single Responsibility guy in me feel sad, as the producer is a Producer, not a Consumer.
Edit:
After looking through the source of Queue, I came up with this:
def dump_queue(q):
    q._rlock.acquire()
    try:
        res = []
        while not q.empty():
            res.append(q._recv())
            q._sem.release()
        return res
    finally:
        q._rlock.release()
However, I'm too scared to use it! I have no idea if this is "correct" or not. I don't have a firm enough grasp to know whether this will hold up without blowing up any of the Queue's internals.
Anyone know if this'll break? :)
Given what was said in the comments, a Queue is simply a wrong data structure for your problem - but is likely part of a usable solution.
It sounds like you have only one Producer. Create a new, Producer-local (not shared across processes) class implementing the semantics you really need. For example,
class FlushingQueue:
    def __init__(self, mpqueue, path_to_spill_file, maxsize=1000, dumpsize=1000000):
        from collections import deque
        self.q = mpqueue  # a shared `multiprocessing.Queue`
        self.dump_path = path_to_spill_file
        self.maxsize = maxsize
        self.dumpsize = dumpsize
        self.d = deque()  # buffer for overflowing values

    def put(self, item):
        if self.q.qsize() < self.maxsize:
            self.q.put(item)
            # in case consumers have made real progress
            while self.d and self.q.qsize() < self.maxsize:
                self.q.put(self.d.popleft())
        else:
            self.d.append(item)
            if len(self.d) >= self.dumpsize:
                self.dump()

    def dump(self):
        # code to flush self.d to the spill file; no
        # need to look at self.q at all
        ...
I bet you can make this work :-)
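The dump() body is left open above; one possible (hypothetical) way to fill it in is to append the buffered overflow items to the spill file with pickle, oldest first:

import pickle

def dump(self):
    # append everything buffered in self.d to the spill file, oldest first
    with open(self.dump_path, 'ab') as f:
        while self.d:
            pickle.dump(self.d.popleft(), f)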
