In one of my asyncio projects I use one synchronisation method quite a lot, and I was wondering whether it is some kind of standard tool with a name I could give to Google to learn more. I used the term "1-item queue" only because I don't have a better name. It is a degraded queue and it is NOT related to Queue(maxsize=1).
[controller] ---- commands ---> [worker]
The controller sends commands to a worker (queue.put, actually put_nowait) and the worker waits for them (queue.get) and executes them, but the special rule is that only the last command is important and immediately replaces all prior unfinished commands. For this reason, there is never more than one command waiting for execution in the queue.
To implement this, the controller clears the queue before the put. There is no queue.clear, so it must discard (with get_nowait) the waiting item, if any. (The absence of queue.clear started my doubts resulting in this question.)
On the worker's side, if a command's execution requires a sleep, the sleep is replaced by a newcmd=queue.get with a timeout. When the timeout occurs, it acted as the sleep; when the get succeeds, the current work is aborted and the execution of newcmd starts.
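In code, the two halves look roughly like this (a simplified sketch, not the exact code from my project; the helper names are made up):

import asyncio

async def send_command(q: asyncio.Queue, cmd):
    # Controller side: discard the waiting command, if any, then put the new one,
    # so at most one command is ever queued.
    try:
        q.get_nowait()
    except asyncio.QueueEmpty:
        pass
    q.put_nowait(cmd)

async def interruptible_sleep(q: asyncio.Queue, delay):
    # Worker side: the "sleep" is really a get() with a timeout.
    # Returns the new command if one arrived, or None if the sleep ran to completion.
    try:
        return await asyncio.wait_for(q.get(), timeout=delay)
    except asyncio.TimeoutError:
        return None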
The type of queue you are using is not standard - there is such a thing as a one-shot queue, but it's a different thing altogether.
The queue doesn't really fit your use case, though you made it work with some effort. You don't really need queuing of any kind; you need a slot that holds a single object (which can be replaced) and a wakeup mechanism. asyncio.Event can be used for the wakeup, and you can attach the payload object (the command) to an attribute of the event. For example:
import asyncio
import itertools

async def worker(evt):
    while True:
        await evt.wait()
        evt.clear()
        if evt.last_command is None:
            continue
        last_command = evt.last_command
        evt.last_command = None
        # execute last_command, possibly with timeout
        print(last_command)

async def main():
    evt = asyncio.Event()
    workers = [asyncio.create_task(worker(evt)) for _ in range(5)]
    for i in itertools.count():
        await asyncio.sleep(1)
        evt.last_command = f"foo {i}"
        evt.set()

asyncio.run(main())
One difference between this and the queue-based approach is that setting the event will wake up all workers (if there is more than one), even if the first worker immediately calls evt.clear(). A queue item, on the other hand, will be guaranteed to be handed off to a single awaiter of queue.get().
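If you do need the single-delivery guarantee while keeping the "only the latest command matters" behaviour, a condition variable works too. A hedged sketch (the class name and API are my own invention, not a standard library facility):

import asyncio

class LatestOnlySlot:
    # A single slot: put() silently replaces any unconsumed value,
    # get() hands the value to exactly one waiter.
    _EMPTY = object()

    def __init__(self):
        self._value = self._EMPTY
        self._cond = asyncio.Condition()

    async def put(self, value):
        async with self._cond:
            self._value = value      # overwrite whatever was not yet consumed
            self._cond.notify()      # wake at most one waiter

    async def get(self):
        async with self._cond:
            while self._value is self._EMPTY:
                await self._cond.wait()
            value, self._value = self._value, self._EMPTY
            return value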
Using Python, I was trying to demonstrate how a producer/consumer multi-threading scenario can lead to a deadlock when a consumer thread ends up waiting on an empty queue that will stay empty for the rest of the execution, and how to solve this while avoiding starvation or a sudden "dirty interruption" of the program.
So I took the producer/consumer threading code using a queue from this nice RealPython article; here are the original code excerpts:
def consumer(queue, event):
    """Pretend we're saving a number in the database."""
    while not event.is_set() or not queue.empty():
        message = queue.get()
        logging.info(
            "Consumer storing message: %s (size=%d)", message, queue.qsize()
        )
    logging.info("Consumer received event. Exiting")

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    pipeline = queue.Queue(maxsize=10)
    event = threading.Event()
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        executor.submit(producer, pipeline, event)
        executor.submit(consumer, pipeline, event)

        time.sleep(0.1)
        logging.info("Main: about to set event")
        event.set()
I noticed that, however unlikely it is to occur, the code as it is could lead to the situation I described, in the case where the 'main' thread sets the 'event' and the 'producer' finishes while the 'consumer' is still waiting to get a message from the queue.
To solve this in the single-'consumer' case, a simple check that the queue is not empty before calling 'get' would suffice (for instance: if not q.empty(): message = q.get()). BUT the problem would still persist in a multi-consumer scenario, because a thread could be swapped out immediately after the emptiness check, and another consumer (a second one) could get the message, leaving the queue empty; when execution swaps back to the previous consumer (the first one), it would call get on an empty queue and... that's it.
I wanted to go for a solution that would potentially work even in a hypothetical multi-consumer scenario. So I modified the 'consumer' code this way, essentially adding a timeout on the queue get call and handling the exception:
def consumer(q, event, n):
    while not event.is_set() or not q.empty():
        print("Consumer"+n+": Q-get")
        try:
            time.sleep(0.1)  # (I don't really need this, I just want to force a consumer-thread swap
                             #  at this precise point => showing that, just as expected, things will
                             #  work well in a multi-consumer scenario also)
            message = q.get(True, 1)
        except queue.Empty:
            print("Consumer"+n+": looping on empty queue")
            time.sleep(0.1)  # (I don't really need this at all... just hoping -unfortunately without
                             #  success- _main_ to swap on ThreadPoolExecutor)
            continue
        logging.info("Consumer%s storing message: %s (size=%d)", n, message, q.qsize())
    print("Consumer"+n+": ended")
and I also modified the "main" part to make it put a message in the queue and spawn a second consumer instead of a producer...
if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO, datefmt="%H:%M:%S")

    pipeline = queue.Queue(maxsize=10)
    event = threading.Event()
    pipeline.put("XxXxX")
    print("Let's start (ThreadPoolExecutor)")
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        #executor.submit(producer, pipeline, event)
        executor.submit(consumer, pipeline, event, '1')
        executor.submit(consumer, pipeline, event, '2')
    print("(_main_ won't even get this far... Why?)") #!#
    time.sleep(2)
    logging.info("Main: about to set event")
    event.set()
(Please note that my purpose here is to thwart the consumer deadlock risk and to show that it is actually avoided; at this stage I don't really need the producer, which is why I made the code not spawn it.)
Now, the problem is that I can't understand why everything seems to work well if the threads are spawned with threading.Thread(...).start(), for instance:
print("Let's start (simple Thread)")
for i in range(1,3): threading.Thread(target=consumer, args=(pipeline, event, str(i))).start()
whereas using concurrent.futures.ThreadPoolExecutor seems to make the 'main' thread never resume (it seems it doesn't even get to its sleep call), so the execution never ends and the consumers loop forever...
Can you help me understand why this difference? Knowing the reason would almost certainly help me understand whether it can be solved somehow or whether I'll necessarily be forced to avoid ThreadPoolExecutor. Thank you in advance for your precious help on this!
The problem is that you put event.set() outside the with block that manages the ThreadPoolExecutor. When used with with, on exiting the with, ThreadPoolExecutor performs the equivalent of .shutdown(wait=True). So you're waiting for the workers to finish, which they won't, because you haven't yet set the event.
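Given that, the most direct fix is simply to set the event before the with block exits, for example (a sketch reusing your variable names):

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    executor.submit(consumer, pipeline, event, '1')
    executor.submit(consumer, pipeline, event, '2')
    time.sleep(2)
    logging.info("Main: about to set event")
    event.set()  # set *inside* the with block, so the implicit
                 # shutdown(wait=True) on exit can actually complete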
If you want to be able to tell it to shutdown when it can, but not wait immediately, you could do:
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    try:
        #executor.submit(producer, pipeline, event)
        executor.submit(consumer, pipeline, event, '1')
        executor.submit(consumer, pipeline, event, '2')
        executor.shutdown(wait=False)  # No new work can be submitted after this point, and
                                       # workers will opportunistically exit if no work left
        time.sleep(2)
        logging.info("Main: about to set event")
    finally:
        event.set()  # Set the event *before* we block awaiting shutdown
                     # Done in a finally block to ensure it's set even on exception
                     # so the with doesn't try to block in a case where
                     # it will never finish
# Blocks for full shutdown here; after this, the pool is definitely finished and cleaned up
I have two threads in a producer/consumer pattern. When the consumer receives data it calls a time-consuming function expensive() and then enters a for loop.
But if new data arrives while the consumer is working, it should abort the current work (exit the loop) and start over with the new data.
I tried with a queue.Queue something like this:
q = queue.Queue()

def producer():
    while True:
        ...
        q.put(d)

def consumer():
    while True:
        d = q.get()
        expensive(d)
        for i in range(10000):
            ...
            if not q.empty():
                break
But the problem with this code is that if the producer puts data too fast and the queue comes to hold many items, the consumer will do the expensive(d) call plus one loop iteration and then abort, for each item, which is time-consuming. The code should work, but it is not optimized.
Without modifying the code in expensive, one solution could be to run it as a separate process, which gives you the ability to terminate it prematurely. Since there's no mention of how long expensive runs, this may or may not be more time-efficient, however.
import multiprocessing as mp
import queue

q = queue.Queue()

def producer():
    while True:
        ...
        q.put(d)

def consumer():
    while True:
        d = q.get()
        exp = mp.Process(target=expensive, args=(d,))  # run expensive() in a separate process
        exp.start()
        for i in range(10000):
            ...
            if not q.empty():
                exp.terminate()  # or exp.kill()
                break
Well, one way is to use a queue design that keeps internal lists of waiting and working threads. You can then create several consumer threads to wait on the queue and, when work arrives, assign a known consumer thread to do the work. When the thread has finished, it calls into the queue to remove itself from the working list and add itself to the waiting list.
The consumer threads each have an 'abort' atomic flag that can signal the thread to finish early. There will be some latency while the thread completes its inner loop, but that will not matter...
If new work arrives at the queue from the producer, and the working queue is not empty, the 'abort' bool of the working thread/s can be set and their priority set to the minimum possible. The new work can then be dispatched onto one of the waiting threads from the pool, so setting it working.
The waiting threads will need a 'start' function that signals an event/semaphore/condvar that the waiting thread, well, waits on. That allows the producer that supplied the work to set that specific thread running, rather than the 'usual' practice where any thread from a pool may pick up work.
Such a design allows new work to be started 'immediately', makes the previous work thread irrelevant by de-prioritizing it and avoids the overheads of thread/process termination.
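A stripped-down sketch of just the per-thread 'start' event and 'abort' flag described above (pool bookkeeping, priorities and latching are omitted; all names are my own):

import threading
import time

class AbortableWorker(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.start_evt = threading.Event()  # dispatcher sets this to hand work to *this* thread
        self.abort = threading.Event()      # dispatcher sets this to cancel the current work
        self.work = None

    def run(self):
        while True:
            self.start_evt.wait()           # wait until this specific thread is chosen
            self.start_evt.clear()
            self.abort.clear()
            for _ in range(10000):          # inner loop polls the abort flag
                if self.abort.is_set():
                    break                   # newer work arrived; drop this one
                time.sleep(0.001)           # stand-in for one unit of real work on self.work

# Dispatching new work while another worker is busy:
busy, idle = AbortableWorker(), AbortableWorker()
busy.start()
idle.start()
busy.work = "old job"
busy.start_evt.set()
time.sleep(0.1)
busy.abort.set()                            # make the in-progress job irrelevant
idle.work = "new job"
idle.start_evt.set()                        # hand the new job to a waiting thread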
I have some test cases where I start a webserver process and then run some URL tests to check that every function runs fine.
The server process's start-up time depends on the system it is executed on. It's a matter of seconds, and I work with a time.sleep(5) for now.
But honestly I'm not a huge fan of sleep(), since it might work on my systems, but what if the test runs on a system where the server needs 6 seconds to start? (So it's never really safe to go that way.) Tests will fail for no reason at all.
So the question is: is there a nice way to check whether the process really started?
I use the python multiprocessing module
Example:
from multiprocessing import Process
import testapp.server
import requests
import testapp.config as cfg
import time
p = Process(target=testapp.server.main)
p.start()
time.sleep(5)
testurl=cfg.server_settings["protocol"] + cfg.server_settings["host"] + ":" +str(cfg.server_settings["port"]) + "/test/12"
r = requests.get(testurl)
p.terminate()
assert int(r.text)==12
So it would be nice to avoid the sleep() and really check when the process started ...
You should use is_alive (docs), but that will almost always return True right after you initiate start() on the process. If you want to make sure the process is already doing something important, there's no getting around time.sleep (at least from this end; see the last paragraph for another idea).
In any case, you could implement is_alive like this:
p = Process(target=testapp.server.main)
p.start()
while not p.is_alive():
    time.sleep(0.1)
do_something_once_alive()
As you can see, we still need to "sleep" and check again (just 0.1 seconds at a time), but it will probably take much less than 5 seconds until is_alive returns True.
If both is_alive and time.sleep aren't accurate enough for you to know whether the process is really doing that specific thing yet, and if you're controlling the other program as well, you should have it raise another kind of flag so you know you're good to go.
I suggest creating your process with a connection object as argument (other synchronization primitives may work) and use the send() method within your child process to notify your parent process that business can go on. Use the recv() method on the parent end of the connection object.
import multiprocessing as mp

def worker(conn):
    conn.send(0)  # argument object must be picklable
    # your worker is ready to do work and just signaled it to the parent

out_conn, in_conn = mp.Pipe()
process = mp.Process(target=worker, args=(out_conn,))
process.start()
in_conn.recv()  # Will block until something is received
# worker in child process signaled it is ready. Business can go on
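As mentioned, other synchronization primitives work just as well; for instance, roughly the same idea with multiprocessing.Event (a sketch, with arbitrary names):

import multiprocessing as mp

def worker(ready_evt):
    # ... perform start-up work here (e.g. bind the server socket) ...
    ready_evt.set()         # signal the parent that start-up has finished
    # ... serve requests ...

ready = mp.Event()
p = mp.Process(target=worker, args=(ready,))
p.start()
ready.wait(timeout=30)      # block until the child reports ready (or give up after 30 s)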
Is there a specific type of queue that is "closable" and is suitable for when there are multiple producers and consumers and the data comes from a stream (so it's not known when it will end)?
I've been unable to find a queue that implements this sort of behavior, or a name for one, but it seems like an integral type for producer-consumer type problems.
As an example, ideally I could write code where (1) each producer would tell the queue when it was done, (2) consumers would blindly call a blocking get(), and (3) when all producers were done and the queue was empty, all the consumers would unblock and receive a "done" notification:
As code, it'd look something like this:
def produce():
    for x in range(randint()):
        queue.put(x)
        sleep(randint())
    queue.close()  # called once for every producer

def consume():
    while True:
        try:
            print queue.get()
        except ClosedQueue:
            print 'done!'
            break
num_producers = randint()
queue = QueueTypeThatICantFigureOutANameFor(num_producers)
[Thread(target=produce).start() for _ in range(num_producers)]
[Thread(target=consume).start() for _ in range(randint())]
Also, I'm not looking for the "Poison Pill" solution, where a "done" value is added to the queue for every consumer -- I don't like the inelegance of producers needing to know how many consumers there are.
I'd call that a self-latching queue.
For your primary requirement, combine the queue with a condition variable check that gracefully latches (shuts down) the queue when all producers have vacated:
class SelfLatchingQueue(LatchingQueue):
    ...
    def __init__(self, num_producers):
        ...

    def close(self):
        '''Called by a producer to indicate that it is done producing'''
        ... perhaps check that current thread is a known producer? ...
        with self.a_mutex:
            self._num_active_producers -= 1
            if self._num_active_producers <= 0:
                # Future put()s throw QueueLatched. get()s will empty the queue
                # and then throw QueueEmpty thereafter
                self.latch()  # Guess what superclass implements this?
For your secondary requirement (#3 in the original post, finished producers apparently block until all consumers are finished), I'd perhaps use a barrier or just another condition variable. This could be implemented in a subclass of the SelfLatchingQueue, of course, but without knowing the codebase I'd keep this behavior separate from the automatic latching.
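For concreteness, here is a minimal self-contained sketch of the same idea built directly on queue.Queue (class and exception names are mine; it uses a single internally re-queued sentinel, so producers still don't need to know how many consumers exist):

import queue
import threading

class ClosedQueue(Exception):
    pass

class SelfLatchingQueue:
    _LATCH = object()  # internal sentinel, re-queued so every consumer eventually sees it

    def __init__(self, num_producers):
        self._q = queue.Queue()
        self._lock = threading.Lock()
        self._active_producers = num_producers

    def put(self, item):
        self._q.put(item)

    def close(self):
        '''Called once by each producer when it is done producing.'''
        with self._lock:
            self._active_producers -= 1
            if self._active_producers <= 0:
                self._q.put(self._LATCH)  # queue is latched once it drains

    def get(self):
        item = self._q.get()
        if item is self._LATCH:
            self._q.put(self._LATCH)      # keep the latch visible to the other consumers
            raise ClosedQueue
        return item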
As almost everyone is aware when they first look at threading in Python, there is the GIL that makes life miserable for people who actually want to do processing in parallel - or at least give it a chance.
I am currently looking at implementing something like the Reactor pattern. Effectively I want to listen for incoming socket connections on one thread-like, and when someone tries to connect, accept that connection and pass it along to another thread-like for processing.
I'm not (yet) sure what kind of load I might be facing. I know there is currently setup a 2MB cap on incoming messages. Theoretically we could get thousands per second (though I don't know if practically we've seen anything like that). The amount of time spent processing a message isn't terribly important, though obviously quicker would be better.
I was looking into the Reactor pattern, and developed a small example using the multiprocessing library that (at least in testing) seems to work just fine. However, now/soon we'll have the asyncio library available, which would handle the event loop for me.
Is there anything that could bite me by combining asyncio and multiprocessing?
You should be able to safely combine asyncio and multiprocessing without too much trouble, though you shouldn't be using multiprocessing directly. The cardinal sin of asyncio (and any other event-loop based asynchronous framework) is blocking the event loop. If you try to use multiprocessing directly, any time you block to wait for a child process, you're going to block the event loop. Obviously, this is bad.
The simplest way to avoid this is to use BaseEventLoop.run_in_executor to execute a function in a concurrent.futures.ProcessPoolExecutor. ProcessPoolExecutor is a process pool implemented using multiprocessing.Process, but asyncio has built-in support for executing a function in it without blocking the event loop. Here's a simple example:
import time
import asyncio
from concurrent.futures import ProcessPoolExecutor

def blocking_func(x):
    time.sleep(x)  # Pretend this is expensive calculations
    return x * 5

@asyncio.coroutine
def main():
    #pool = multiprocessing.Pool()
    #out = pool.apply(blocking_func, args=(10,))  # This blocks the event loop.
    executor = ProcessPoolExecutor()
    out = yield from loop.run_in_executor(executor, blocking_func, 10)  # This does not
    print(out)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
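(On modern Python (3.7+) the same example would normally be written with async def/await and asyncio.run instead of the @asyncio.coroutine/yield from style used at the time; a rough equivalent sketch:)

import time
import asyncio
from concurrent.futures import ProcessPoolExecutor

def blocking_func(x):
    time.sleep(x)  # Pretend this is expensive calculations
    return x * 5

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as executor:
        out = await loop.run_in_executor(executor, blocking_func, 10)
    print(out)

if __name__ == "__main__":
    asyncio.run(main())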
For the majority of cases, this function alone is good enough. If you find yourself needing other constructs from multiprocessing, like Queue, Event, Manager, etc., there is a third-party library called aioprocessing (full disclosure: I wrote it) that provides asyncio-compatible versions of all the multiprocessing data structures. Here's an example demoing that:
import time
import asyncio
import aioprocessing
import multiprocessing

def func(queue, event, lock, items):
    with lock:
        event.set()
        for item in items:
            time.sleep(3)
            queue.put(item+5)
    queue.close()

@asyncio.coroutine
def example(queue, event, lock):
    l = [1,2,3,4,5]
    p = aioprocessing.AioProcess(target=func, args=(queue, event, lock, l))
    p.start()
    while True:
        result = yield from queue.coro_get()
        if result is None:
            break
        print("Got result {}".format(result))
    yield from p.coro_join()

@asyncio.coroutine
def example2(queue, event, lock):
    yield from event.coro_wait()
    with (yield from lock):
        yield from queue.coro_put(78)
        yield from queue.coro_put(None)  # Shut down the worker

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    queue = aioprocessing.AioQueue()
    lock = aioprocessing.AioLock()
    event = aioprocessing.AioEvent()
    tasks = [
        asyncio.async(example(queue, event, lock)),
        asyncio.async(example2(queue, event, lock)),
    ]
    loop.run_until_complete(asyncio.wait(tasks))
    loop.close()
Yes, there are quite a few bits that may (or may not) bite you.
When you run something like asyncio it expects to run on one thread or process. This does not (by itself) work with parallel processing. You somehow have to distribute the work while leaving the IO operations (specifically those on sockets) in a single thread/process.
While your idea to hand off individual connections to a different handler process is nice, it is hard to implement. The first obstacle is that you need a way to pull the connection out of asyncio without closing it. The next obstacle is that you cannot simply send a file descriptor to a different process unless you use platform-specific (probably Linux) code from a C-extension.
Note that the multiprocessing module is known to create a number of threads for communication. Most of the time when you use communication structures (such as Queues), a thread is spawned. Unfortunately those threads are not completely invisible. For instance they can fail to tear down cleanly (when you intend to terminate your program), but depending on their number the resource usage may be noticeable on its own.
If you really intend to handle individual connections in individual processes, I suggest to examine different approaches. For instance you can put a socket into listen mode and then simultaneously accept connections from multiple worker processes in parallel. Once a worker is finished processing a request, it can go accept the next connection, so you still use less resources than forking a process for each connection. Spamassassin and Apache (mpm prefork) can use this worker model for instance. It might end up easier and more robust depending on your use case. Specifically you can make your workers die after serving a configured number of requests and be respawned by a master process thereby eliminating much of the negative effects of memory leaks.
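For illustration, a minimal sketch of that shared-listening-socket worker model (assuming a POSIX system with the fork start method; the port, worker count and the trivial echo handler are all placeholders of mine):

import multiprocessing
import socket

def worker(listen_sock):
    # Each worker blocks in accept() on the *same* listening socket;
    # the kernel hands every incoming connection to exactly one worker.
    while True:
        conn, addr = listen_sock.accept()
        try:
            data = conn.recv(4096)
            conn.sendall(data)  # stand-in for the real request handling
        finally:
            conn.close()

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", 8888))
    sock.listen(128)
    workers = [multiprocessing.Process(target=worker, args=(sock,)) for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()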
Based on dano's answer above, I wrote this function to replace the places where I used to use a multiprocessing pool + map.
import asyncio
from typing import Callable
from concurrent.futures import ProcessPoolExecutor

def asyncio_friendly_multiproc_map(fn: Callable, l: list):
    """
    This is designed to replace the use of this pattern:
        with multiprocessing.Pool(5) as p:
            results = p.map(analyze_day, list_of_days)
    By letting caller drop in replace:
        asyncio_friendly_multiproc_map(analyze_day, list_of_days)
    """
    tasks = []
    with ProcessPoolExecutor(5) as executor:
        for e in l:
            tasks.append(asyncio.get_event_loop().run_in_executor(executor, fn, e))
    res = asyncio.get_event_loop().run_until_complete(asyncio.gather(*tasks))
    return res
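Note that this helper drives the loop itself via run_until_complete, so it is meant to be called from ordinary synchronous code; calling it from inside an already running event loop will raise, because run_until_complete cannot be invoked while the loop is running.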
See PEP 3156, in particular the section on Thread interaction:
http://www.python.org/dev/peps/pep-3156/#thread-interaction
This clearly documents the new asyncio methods you might use, including run_in_executor(). Note that the Executor is defined in concurrent.futures; I suggest you also have a look there.