Python event handler with Async (non-blocking while loop)

import asyncio
import queue

qq = queue.Queue()
qq.put('hi')

class MyApp():
    def __init__(self, q):
        self._queue = q

    def _process_item(self, item):
        print(f'Processing this item: {item}')

    def get_item(self):
        try:
            item = self._queue.get_nowait()
            self._process_item(item)
        except queue.Empty:
            pass

    async def listen_for_orders(self):
        '''
        Asynchronously check the orders queue for new incoming orders
        '''
        while True:
            self.get_item()
            await asyncio.sleep(0)

a = MyApp(qq)
loop = asyncio.get_event_loop()
loop.run_until_complete(a.listen_for_orders())
Using Python 3.6.
I'm trying to write an event handler that constantly listens for messages in the queue, and processes them (prints them in this case). But it must be asynchronous - I need to be able to run it in a terminal (IPython) and manually feed things to the queue (at least initially, for testing).
This code does not work - it blocks forever.
How do I make this run forever but return control after each iteration of the while loop?
Thanks.
Side note: to make the event loop work with IPython (version 7.2), I'm using the code from the ib_insync library; that's also the library I'm using for the real-world problem behind this example.

You need to make your queue an asyncio.Queue, and add things to the queue in a thread-safe manner. For example:
import asyncio

qq = asyncio.Queue()

class MyApp():
    def __init__(self, q):
        self._queue = q

    def _process_item(self, item):
        print(f'Processing this item: {item}')

    async def get_item(self):
        item = await self._queue.get()
        self._process_item(item)

    async def listen_for_orders(self):
        '''
        Asynchronously check the orders queue for new incoming orders
        '''
        while True:
            await self.get_item()

a = MyApp(qq)
loop = asyncio.get_event_loop()
loop.run_until_complete(a.listen_for_orders())
Your other thread must put stuff in the queue like this:
loop.call_soon_threadsafe(qq.put_nowait, <item>)
call_soon_threadsafe will ensure correct locking, and also that the event loop is woken up when a new queue item is ready.
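For instance, here is a minimal self-contained sketch of a producer thread feeding the queue this way (the feed helper, the five-item count and the one-second cadence are illustrative, not part of the answer):

import asyncio
import threading
import time

async def consume(q: asyncio.Queue):
    while True:
        item = await q.get()
        print(f'Processing this item: {item}')

def feed(loop, q):
    # runs in a plain thread: hand items to the loop thread-safely
    for i in range(5):
        time.sleep(1)
        loop.call_soon_threadsafe(q.put_nowait, i)

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)  # on pre-3.10 Pythons the Queue binds to this loop
q = asyncio.Queue()
threading.Thread(target=feed, args=(loop, q), daemon=True).start()
loop.create_task(consume(q))
loop.run_forever()  # prints the five items, one per second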

This:

qq = queue.Queue()

is not an async queue. You need to use asyncio.Queue.
asyncio is built around an event loop. Calling the loop transfers control to it, and it runs until your function completes, which here never happens:
loop.run_until_complete(a.listen_for_orders())
You commented:
I have another Thread that polls an external network resource for data (I/O intensive) and dumps the incoming messages into this thread.
Write that code async - so you'd have:
async def run():
    while True:
        item = await get_item_from_network()
        process_item(item)

loop = asyncio.get_event_loop()
loop.run_until_complete(run())
If you don't want to do that, you can step through the loop manually, though this is generally discouraged:
import asyncio

def run_once(loop):
    loop.call_soon(loop.stop)
    loop.run_forever()

loop = asyncio.get_event_loop()
for x in range(100):
    print(x)
    run_once(loop)
Then you simply schedule your async function, and each time you call run_once the loop will check your asyncio queue and pass control to your listen_for_orders function if the queue has an item in it.
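For example, a rough interactive sketch, assuming MyApp and qq from the first answer (the asyncio.Queue version) and run_once from above; the 'hello' item is illustrative:

loop = asyncio.get_event_loop()
a = MyApp(qq)
task = loop.create_task(a.listen_for_orders())

qq.put_nowait('hello')  # safe here: same thread as the loop
run_once(loop)          # prints "Processing this item: hello", then returns control
run_once(loop)          # further passes are no-ops until new items arrive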

Related

Run event loop until all tasks are blocked in Python

I am writing code that has some long-running coroutines that interact with each other. These coroutines can be blocked on await until something external happens. I want to be able to drive these coroutines in a unittest. The regular way of doing await on the coroutine doesn't work, because I want to be able to intercept something in the middle of their operation. I would also prefer not to mess with the coroutine internals either, unless there is something generic/reusable that can be done.
Ideally I would want to run an event loop until all tasks are currently blocked. This should be fairly easy to tell in an event loop implementation. Once everything is blocked, the event loop yields back control, where I can assert some state about the coroutines, and poke them externally. Then I can resume the loop until it gets blocked again. This would allow for deterministic simulation of tasks in an event loop.
Minimal example of the desired API:
import asyncio
from asyncio import Event

# Imagine this is a complicated "main" with many coroutines.
# But event is some external "mockable" event
# that can be used to drive in unit tests
async def wait_on_event(event: Event):
    print("Waiting on event")
    await event.wait()
    print("Done waiting on event")

def test_deterministic():
    loop = asyncio.get_event_loop()
    event = Event()
    task = loop.create_task(wait_on_event(event))
    run_until_blocked_or_complete(loop)  # define this magic function
    # Should print "Waiting on event"
    # can make some test assertions here
    event.set()
    run_until_blocked_or_complete(loop)
    # Should print "Done waiting on event"
Anything like that possible? Or would this require writing a custom event loop just for tests?
Additionally, I am currently on Python 3.9 (AWS runtime limitation). If it's not possible to do this in 3.9, what version would support this?
This question has puzzled me since I first read it, because it's almost do-able with standard asyncio functions. The key is Alexander's "magic" is_not_blocked method, which I give verbatim below (except for moving it to the outer indentation level). I also use his wait_on_event method, and his test_deterministic_loop function. I added some extra tests to show how to start and stop other tasks, and how to drive the event loop step-by-step until all tasks are finished.
Instead of his DeterministicLoop class, I use a function run_until_blocked that makes only standard asyncio function calls. The two lines of code:
loop.call_soon(loop.stop)
loop.run_forever()
are a convenient means of advancing the loop by exactly one cycle. And asyncio already provides a method for obtaining all the tasks that run within a given event loop, so there is no need to store them independently.
A comment on Alexander's "magic" method: if you look at the comments in the asyncio.Task code, the "private" variable _fut_waiter is described as an important invariant. That's very unlikely to change in future versions, so I think it's quite safe in practice.
import asyncio
from typing import Optional, cast

def _is_not_blocked(task: asyncio.Task):
    # pylint: disable-next=protected-access
    wait_for = cast(Optional[asyncio.Future], task._fut_waiter)  # type: ignore
    if wait_for is None:
        return True
    return wait_for.done()

def run_until_blocked():
    """Runs steps of the event loop until all tasks are blocked."""
    loop = asyncio.get_event_loop()
    # Always run one step.
    loop.call_soon(loop.stop)
    loop.run_forever()
    # Continue running until all tasks are blocked
    while any(_is_not_blocked(task) for task in asyncio.all_tasks(loop)):
        loop.call_soon(loop.stop)
        loop.run_forever()

# This coroutine could spawn many others. Keeping it simple here
async def wait_on_event(event: asyncio.Event) -> int:
    print("Waiting")
    await event.wait()
    print("Done")
    return 42

def test_deterministic_loop():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    event = asyncio.Event()
    task = loop.create_task(wait_on_event(event))
    assert not task.done()
    run_until_blocked()
    print("Task done", task.done())
    assert not task.done()
    print("Tasks running", asyncio.all_tasks(loop))
    assert asyncio.all_tasks(loop)
    event.set()
    # You can start and stop tasks
    loop.run_until_complete(asyncio.sleep(2.0))
    run_until_blocked()
    print("Task done", task.done())
    assert task.done()
    print("Tasks running", asyncio.all_tasks(loop))
    assert task.result() == 42
    assert not asyncio.all_tasks(loop)
    # If you create a task you must loop run_until_blocked until
    # the task is done.
    task2 = loop.create_task(asyncio.sleep(2.0))
    assert not task2.done()
    while not task2.done():
        assert asyncio.all_tasks(loop)
        run_until_blocked()
    assert task2.done()
    assert not asyncio.all_tasks(loop)

test_deterministic_loop()
Yes, you can achieve this by creating a custom event loop policy and using a mock event loop in your test. The basic idea is to create a loop that only runs until all the coroutines are blocked, then yield control back to the test code to perform any necessary assertions or external pokes, and then continue running the loop until everything is blocked again, and so on.
import asyncio

class DeterministicEventLoopPolicy(asyncio.DefaultEventLoopPolicy):
    def new_event_loop(self):
        loop = super().new_event_loop()
        loop._blocked = set()
        return loop

    def get_event_loop(self):
        loop = super().get_event_loop()
        if not hasattr(loop, "_blocked"):
            loop._blocked = set()
        return loop

    def _enter_task(self, task):
        super()._enter_task(task)
        if not task._source_traceback:
            task._source_traceback = asyncio.Task.current_task().get_stack()
        task._loop._blocked.add(task)

    def _leave_task(self, task):
        super()._leave_task(task)
        task._loop._blocked.discard(task)

    def run_until_blocked(self, coro):
        loop = self.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            task = loop.create_task(coro)
            while loop._blocked:
                loop.run_until_complete(asyncio.sleep(0))
        finally:
            task.cancel()
            loop.run_until_complete(task)
            asyncio.set_event_loop(None)
This policy creates a new event loop with a _blocked set attribute that tracks the tasks that are currently blocked. When a new task is scheduled on the loop, the _enter_task method is called, and we add it to the _blocked set. When a task is completed or canceled, the _leave_task method is called, and we remove it from the _blocked set.
The run_until_blocked method takes a coroutine and runs the event loop until all the tasks are blocked. It creates a new event loop using the custom policy, schedules the coroutine on the loop, and then repeatedly runs the loop until the _blocked set is empty. This is the point where you can perform any necessary assertions or external pokes.
Here's an example usage of this policy:
async def wait_on_event(event: asyncio.Event):
    print("Waiting on event")
    await event.wait()
    print("Done waiting on event")

def test_deterministic():
    asyncio.set_event_loop_policy(DeterministicEventLoopPolicy())
    policy = asyncio.get_event_loop_policy()
    event = asyncio.Event()
    policy.run_until_blocked(wait_on_event(event))
    assert not event.is_set()  # assert that the event has not been set yet
    event.set()  # set the event
    policy.run_until_blocked(wait_on_event(event))
    assert event.is_set()  # assert that the event has been set
    asyncio.get_event_loop().close()
In this test, we create a new Event object and pass it to the wait_on_event coroutine. We use the run_until_blocked method to run the coroutine until it blocks on the event.wait() call. At this point, we can perform any necessary assertions, such as checking that the event has not been set yet. We then set the event, and call run_until_blocked again to resume the coroutine until it completes.
This pattern allows for deterministic simulation of tasks in an event loop and can be used to test coroutines that block on external events.
Hope this helps!
The default event loop simply runs everything that is scheduled in each "pass". If you schedule your pause with loop.call_soon after getting your tasks running, you will be called back at the desired point:
import asyncio

def event():
    print("blah")
    breakpoint()
    print("bleh")

async def worker(id):
    print(f"starting task {id}")
    await asyncio.sleep(0.1)
    print(f"ending task {id}")

async def main():
    t = []
    for id in (1, 2, 3):
        t.append(asyncio.create_task(worker(id)))
    loop = asyncio.get_running_loop()
    loop.call_soon(event)
    await asyncio.sleep(0.2)
And running this on the REPL:
In [8]: asyncio.run(main())
starting task 1
starting task 2
starting task 3
blah
> <ipython-input-3-450374919d79>(4)event()
-> print("bleh")
(Pdb)
Exception in callback event() at <ipython-input-3-450374919d79>:1
[...]
bdb.BdbQuit
ending task 1
ending task 2
ending task 3
After some experimenting I came up with something. Here is the usage first:
# This coroutine could spawn many others. Keeping it simple here
async def wait_on_event(event: asyncio.Event) -> int:
    print("Waiting")
    await event.wait()
    print("Done")
    return 42

def test_deterministic_loop():
    loop = DeterministicLoop()
    event = asyncio.Event()
    task = loop.add_coro(wait_on_event(event))
    assert not task.done()
    loop.step()
    # prints Waiting
    assert not task.done()
    assert not loop.done()
    event.set()
    loop.step()
    # prints Done
    assert task.done()
    assert task.result() == 42
    assert loop.done()
The implementation:
"""Module for testing facilities. Don't use these in production!"""
import asyncio
from enum import IntEnum
from typing import Any, Optional, TypeVar, cast
from collections.abc import Coroutine, Awaitable
def _get_other_tasks(loop: Optional[asyncio.AbstractEventLoop]) -> set[asyncio.Task]:
"""Get a set of currently scheduled tasks in an event loop that are not the current task"""
current = asyncio.current_task(loop)
tasks = asyncio.all_tasks(loop)
if current is not None:
tasks.discard(current)
return tasks
# Works on python 3.9, cannot guarantee on other versions
def _get_unblocked_tasks(tasks: set[asyncio.Task]) -> set[asyncio.Task]:
"""Get the subset of tasks that can make progress. This is the most magic
function, and is heavily dependent on eventloop implementation and python version"""
def is_not_blocked(task: asyncio.Task):
# pylint: disable-next=protected-access
wait_for = cast(Optional[asyncio.Future], task._fut_waiter) # type: ignore
if wait_for is None:
return True
return wait_for.done()
return set(filter(is_not_blocked, tasks))
class TasksState(IntEnum):
RUNNING = 0
BLOCKED = 1
DONE = 2
def _get_tasks_state(
prev_tasks: set[asyncio.Task], cur_tasks: set[asyncio.Task]
) -> TasksState:
"""Given set of tasks for previous and current pass of the event loop,
determine the overall state of the tasks. Are the tasks making progress,
blocked, or done?"""
if not cur_tasks:
return TasksState.DONE
unblocked: set[asyncio.Task] = _get_unblocked_tasks(cur_tasks)
# check if there are tasks that can make progress
if unblocked:
return TasksState.RUNNING
# if no tasks appear to make progress, check if this and last step the state
# has been constant
elif prev_tasks == cur_tasks:
return TasksState.BLOCKED
return TasksState.RUNNING
async def _stop_when_blocked():
"""Schedule this task to stop the event loop when all other tasks are
blocked, or they all complete"""
prev_tasks: set[asyncio.Task] = set()
loop = asyncio.get_running_loop()
while True:
tasks = _get_other_tasks(loop)
state = _get_tasks_state(prev_tasks, tasks)
prev_tasks = tasks
# stop the event loop if all other tasks cannot make progress
if state == TasksState.BLOCKED:
loop.stop()
# finish this task too, if no other tasks exist
if state == TasksState.DONE:
break
# yield back to the event loop
await asyncio.sleep(0.0)
loop.stop()
T = TypeVar("T")
class DeterministicLoop:
"""An event loop for writing deterministic tests."""
def __init__(self):
self.loop = asyncio.get_event_loop_policy().new_event_loop()
asyncio.set_event_loop(self.loop)
self.stepper_task = self.loop.create_task(_stop_when_blocked())
self.tasks: list[asyncio.Task] = []
def add_coro(self, coro: Coroutine[Any, Any, T]) -> asyncio.Task[T]:
"""Add a coroutine to the set of running coroutines, so they can be stepped through"""
if self.done():
raise RuntimeError("No point in adding more tasks. All tasks have finished")
task = self.loop.create_task(coro)
self.tasks.append(task)
return task
def step(self, awaitable: Optional[Awaitable[T]] = None) -> Optional[T]:
if self.done() or not self.tasks:
raise RuntimeError(
"No point in stepping. No tasks to step or all are finished"
)
step_future: Optional[asyncio.Future[T]] = None
if awaitable is not None:
step_future = asyncio.ensure_future(awaitable, loop=self.loop)
# stepper_task should halt us if we're blocked or all tasks are done
self.loop.run_forever()
if step_future is not None:
assert (
step_future.done()
), "Can't step the event loop, where the step function itself might get blocked"
return step_future.result()
return None
def done(self) -> bool:
return self.stepper_task.done()
Here is a simple, generalized implementation: if the loop finds that all tasks have been stuck on some async primitive (an Event, a Semaphore, whatever) for a constant number of iterations, it exits the loop context until run_until_blocked is called once again.
import asyncio

MAX_LOOP_ITER = 100

def run_until_blocked(loop):
    global _tasks
    _tasks = {}
    while True:
        loop.call_soon(loop.stop)
        loop.run_forever()
        for task in asyncio.all_tasks(loop=loop):
            if task.done():
                _tasks.pop(task, None)
                continue
            lasti = task.get_stack()[-1].f_lasti
            if task in _tasks and _tasks[task]["lasti"] == lasti:
                _tasks[task]["iter"] += 1
            else:
                _tasks[task] = {"iter": 0, "lasti": lasti}
        # exit once every live task has been stuck on the same
        # instruction for MAX_LOOP_ITER passes (or no live tasks remain)
        if not _tasks or all(val["iter"] >= MAX_LOOP_ITER for val in _tasks.values()):
            break

async def wait_on_event(event: asyncio.Event):
    print("Waiting on event")
    await event.wait()
    print("Done waiting on event")
    return 42

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
event = asyncio.Event()
coro = wait_on_event(event)
task = loop.create_task(coro)
run_until_blocked(loop)
event.set()
run_until_blocked(loop)
print(task.result())

Cancelling asyncio task run in executor

I'm scraping some websites, parallelizing the requests library using asyncio:
def run():
    asyncio.run(scrape())

def check_link(link):
    # .... code code code ...
    response = requests.get(link)
    # .... code code code ...
    write_some_stats_into_db()

async def scrape():
    # .... code code code ...
    task = asyncio.get_event_loop().run_in_executor(None, check_link, link)
    # .... code code code ...
    if done:
        for task in all_tasks:
            task.cancel()
I only need to find one 'correct' link, and after that I can stop the program. However, because check_link runs in an executor, its threads are automatically daemonized, so even after calling task.cancel() I have to wait for all of the other still-running check_link calls to complete.
Do you have any ideas how to 'force-kill' the other running checks in the thread executor?
You can do it the following way. Actually, from my point of view, if you do not have to use asyncio for the task, use plain threads without any async loop, since asyncio makes your code more complicated (a threads-only sketch follows the asyncio version below).
import asyncio
from random import randint
import time
from functools import partial

# imagine that this is links array
LINKS = list(range(1000))

# how many thread-workers you want to have simultaneously
WORKERS_NUM = 10

# stops the app
STOP_EVENT = asyncio.Event()
STOP_EVENT.clear()

def check_link(link: str) -> int:
    """checks link in another thread and returns result"""
    time.sleep(3)
    r = randint(1, 11)
    print(f"{link}____{r}\n")
    return r

async def check_link_wrapper(q: asyncio.Queue):
    """Async wrapper around sync function"""
    loop = asyncio.get_event_loop()
    while not STOP_EVENT.is_set():
        link = await q.get()
        if not link:
            break
        value = await loop.run_in_executor(None, func=partial(check_link, link))
        if value == 10:
            STOP_EVENT.set()
            print("Hurray! We got TEN !")

async def feeder(q: asyncio.Queue):
    """Send tasks and "poison pill" to all workers"""
    # send tasks to workers
    for link in LINKS:
        await q.put(link)
    # ask workers to stop
    for _ in range(WORKERS_NUM):
        await q.put(None)

async def amain():
    """Main async function of the app"""
    # maxsize is one since we want the app
    # to stop as fast as possible if stop condition is met
    q = asyncio.Queue(maxsize=1)
    # we create separate task, since we do not want to await feeder
    # we are interested only in workers
    asyncio.create_task(feeder(q))
    await asyncio.gather(
        *[check_link_wrapper(q) for _ in range(WORKERS_NUM)],
    )

if __name__ == '__main__':
    asyncio.run(amain())
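For comparison, a threads-only variant of the same structure (my sketch, not from the answer above: the same poison pills and stop flag, but with queue.Queue and threading.Event instead of their asyncio counterparts):

import queue
import threading
import time
from random import randint

LINKS = list(range(1000))
WORKERS_NUM = 10
STOP_EVENT = threading.Event()
q = queue.Queue()

def check_link(link) -> int:
    time.sleep(3)
    return randint(1, 11)

def worker():
    while True:
        link = q.get()
        # exit on a poison pill, or once another worker found the answer
        if link is None or STOP_EVENT.is_set():
            break
        if check_link(link) == 10:
            STOP_EVENT.set()
            print("Hurray! We got TEN !")

threads = [threading.Thread(target=worker) for _ in range(WORKERS_NUM)]
for t in threads:
    t.start()
for link in LINKS:
    q.put(link)
for _ in range(WORKERS_NUM):
    q.put(None)  # poison pills
for t in threads:
    t.join()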

Is it possible to run multiple asyncio loops at the same time in Python?

Based on the solution that I got in Running multiple sockets using asyncio in python, I tried to also add the computation part using asyncio.
Setup: Python 3.7.4
import msgpack
import threading
import os
import asyncio
import concurrent.futures
import functools
import numpy as np
import nest_asyncio
nest_asyncio.apply()

class ThreadSafeElem(bytes):
    def __init__(self, *p_arg, **n_arg):
        self._lock = threading.Lock()
    def __enter__(self):
        self._lock.acquire()
        return self
    def __exit__(self, type, value, traceback):
        self._lock.release()

elem = ThreadSafeElem()

async def serialize(data):
    return msgpack.packb(data, use_bin_type=True)

async def serialize1(data1):
    return msgpack.packb(data1, use_bin_type=True)

async def process_data(data, data1):
    loop = asyncio.get_event_loop()
    future = await loop.run_in_executor(None, functools.partial(serialize, data))
    future1 = await loop.run_in_executor(None, functools.partial(serialize1, data1))
    return await asyncio.gather(future, future1)

################ Calculation #############################
def calculate_data():
    global elem
    while True:
        try:
            # ... data is calculated (some dictionary) ...
            elem, elem1 = asyncio.run(process_data(data, data1))
        except:
            pass
#####################################################################

def get_data():
    return elem

def get_data1():
    return elem1

########### START SERVER AND get data continuously ################
async def client_thread(reader, writer):
    while True:
        try:
            bytes_received = await reader.read(100)
            package_type = np.frombuffer(bytes_received, dtype=np.int8)
            if package_type == 1:
                nn_output = get_data1()
            if package_type == 2:
                nn_output = get_data()
            writer.write(nn_output)
            await writer.drain()
        except:
            pass

async def start_servers(host, port):
    server = await asyncio.start_server(client_thread, host, port)
    await server.serve_forever()

async def start_calculate():
    await asyncio.run(calculate_data())

def enable_sockets():
    try:
        host = '127.0.0.1'
        port = 60000
        sockets_number = 6
        loop = asyncio.get_event_loop()
        for i in range(sockets_number):
            loop.create_task(start_servers(host, port + i))
        loop.create_task(start_calculate())
        loop.run_forever()
    except:
        print("weird exceptions")
##############################################################################

enable_sockets()
The issue is that when I make a call from a client, the server does not give me anything.
I tested the program with dummy data and no asyncio on the calculation part, i.e. without this loop.create_task(start_calculate()), and the server responded correctly. I also ran the calculation without adding it to enable_sockets and it worked. It runs with this implementation as well, but the problem is that the server is not returning anything.
I did it like this because I need the calculation part to run continuously, and when one of the clients calls, to return the data available at that point.
An asyncio event loop cannot be nested inside another, and there is no point in doing so: asyncio.run (and similar) blocks the current thread until done. This does not increase parallelism, and merely disables any outer event loop.
If you want to nest another asyncio task, directly run it in the current event loop (a sketch of this follows the executor examples below). If you want to run a non-cooperative, blocking task, run it in the event loop executor.
async def start_calculate():
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(None, calculate_data)
The default executor uses threads – this allows running blocking tasks, but does not increase parallelism. Use a custom ProcessPoolExecutor to use additional cores:
import concurrent.futures

async def start_calculate():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ProcessPoolExecutor() as pool:
        await loop.run_in_executor(pool, calculate_data)
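And for the first case, nesting the work directly in the running loop, a minimal sketch using the question's own serialize/serialize1 coroutines (data and data1 are assumed to come from the elided calculation; the sleep(0) is there so the server tasks get a turn):

async def calculate_data_async():
    global elem, elem1
    while True:
        # ... data is calculated (some dictionary) ...
        elem = await serialize(data)
        elem1 = await serialize1(data1)
        await asyncio.sleep(0)  # yield to the event loop

# scheduled on the same loop as the servers:
# loop.create_task(calculate_data_async())

If the calculation itself is CPU-heavy, prefer the executor variants above; this version only cooperates, it does not add parallelism.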
Why do you call asyncio.run() multiple times?
This function always creates a new event loop and closes it at the end. It should be used as a main entry point for asyncio programs, and should ideally only be called once.
I would advise you to read the docs.
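As a sketch of what a single entry point means here (the names are illustrative, not from the question):

import asyncio

async def main():
    asyncio.ensure_future(background_work())  # hypothetical coroutine
    await serve_clients()                     # hypothetical; runs until shutdown

asyncio.run(main())  # the one place a loop is created and closed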

Python 3: How to submit an async function to a threadPool?

I want to use both ThreadPoolExecutor from concurrent.futures and async functions.
My program repeatedly submits a function with different input values to a thread pool. The final sequence of tasks that are executed in that larger function can be in any order, and I don't care about the return value, just that they execute at some point in the future.
So I tried to do this
async def startLoop():
    while 1:
        for item in clients:
            arrayOfFutures.append(await config.threadPool.submit(threadWork, obj))
        wait(arrayOfFutures, timeout=None, return_when=ALL_COMPLETED)
where the function submitted is:
async def threadWork(obj):
    bool = do_something()  # needs to execute before next functions
    if bool:
        do_a()  # can be executed at any time
        do_b()  # ^
where do_a and do_b are async functions. The problem with this is that I get the error TypeError: object Future can't be used in 'await' expression, and if I remove the await, I get another error saying I need to add await.
I guess I could make everything use threads, but I don't really want to do that.
I recommend a careful readthrough of Python 3's asyncio development guide, particularly the "Concurrency and Multithreading" section.
The main conceptual issue in your example is that event loops are single-threaded, so it doesn't make sense to execute an async coroutine in a thread pool. There are a few ways for event loops and threads to interact:
Event loop per thread. For example:
async def threadWorkAsync(obj):
    b = do_something()
    if b:
        # Run a and b as concurrent tasks
        task_a = asyncio.create_task(do_a())
        task_b = asyncio.create_task(do_b())
        await task_a
        await task_b

def threadWork(obj):
    # Create run loop for this thread and block until completion
    asyncio.run(threadWorkAsync(obj))

def startLoop():
    while 1:
        arrayOfFutures = []
        for item in clients:
            arrayOfFutures.append(config.threadPool.submit(threadWork, item))
        wait(arrayOfFutures, timeout=None, return_when=ALL_COMPLETED)
Execute blocking code in an executor. This allows you to use async futures instead of concurrent futures as above.
async def startLoop():
    loop = asyncio.get_running_loop()
    while 1:
        arrayOfFutures = []
        for item in clients:
            arrayOfFutures.append(loop.run_in_executor(
                config.threadPool, threadWork, item))
        await asyncio.gather(*arrayOfFutures)
Use threadsafe functions to submit tasks to event loops across threads. For example, instead of creating a run loop for each thread you could run all async coroutines in the main thread's run loop:
def threadWork(obj, loop):
    b = do_something()
    if b:
        future_a = asyncio.run_coroutine_threadsafe(do_a(), loop)
        future_b = asyncio.run_coroutine_threadsafe(do_b(), loop)
        concurrent.futures.wait([future_a, future_b])

async def startLoop():
    loop = asyncio.get_running_loop()
    while 1:
        arrayOfFutures = []
        for item in clients:
            arrayOfFutures.append(loop.run_in_executor(
                config.threadPool, threadWork, item, loop))
        await asyncio.gather(*arrayOfFutures)
Note: This example should not be used literally as it will result in all coroutines executing in the main thread while the thread pool workers just block. This is just to show an example of the run_coroutine_threadsafe() method.

How to run a coroutine and wait for its result from a sync func when the loop is running?

I have code like the following:
def render():
    loop = asyncio.get_event_loop()

    async def test():
        await asyncio.sleep(2)
        print("hi")
        return 200

    if loop.is_running():
        result = asyncio.ensure_future(test())
    else:
        result = loop.run_until_complete(test())
When the loop is not running it is quite easy: just use loop.run_until_complete and it returns the coro's result. But if the loop is already running (my blocking code runs in an app which is already running the loop) I cannot use loop.run_until_complete, since it will raise an exception; and when I call asyncio.ensure_future the task gets scheduled and run, but I want to wait there for the result. Does anybody know how to do this? The docs are not very clear on it.
I tried passing a concurrent.futures.Future, calling set_result inside the coro and then calling Future.result() in my blocking code, but it doesn't work: it blocks there and does not let anything else run. Any help would be appreciated.
To implement runner with the proposed design, you would need a way to single-step the event loop from a callback running inside it. Asyncio explicitly forbids recursive event loops, so this approach is a dead end.
Given that constraint, you have two options:
make render() itself a coroutine;
execute render() (and its callers) in a thread different than the thread that runs the asyncio event loop.
Assuming #1 is out of the question, you can implement the #2 variant of render() like this:
def render():
    loop = _event_loop  # can't call get_event_loop()

    async def test():
        await asyncio.sleep(2)
        print("hi")
        return 200

    future = asyncio.run_coroutine_threadsafe(test(), loop)
    result = future.result()
Note that you cannot use asyncio.get_event_loop() in render because the event loop is not (and should not be) set for that thread. Instead, the code that spawns the runner thread must call asyncio.get_event_loop() and send it to the thread, or just leave it in a global variable or a shared structure.
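A sketch of that spawning side (the _event_loop global matches the snippet above; the daemon thread is illustrative):

import asyncio
import threading

_event_loop = asyncio.new_event_loop()

def _loop_thread():
    asyncio.set_event_loop(_event_loop)
    _event_loop.run_forever()

threading.Thread(target=_loop_thread, daemon=True).start()

render()  # may now be called from this (non-loop) thread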
Waiting Synchronously for an Asynchronous Coroutine
If an asyncio event loop is already running by calling loop.run_forever, it will block the executing thread until loop.stop is called [see the docs]. Therefore, the only way for a synchronous wait is to run the event loop on a dedicated thread, schedule the asynchronous function on the loop and wait for it synchronously from another thread.
For this I have composed my own minimal solution following the answer by user4815162342. I have also added the parts for cleaning up the loop when all work is finished [see loop.close].
The main function in the code below runs the event loop on a dedicated thread, schedules several tasks on the event loop, plus the task the result of which is to be awaited synchronously. The synchronous wait will block until the desired result is ready. Finally, the loop is closed and cleaned up gracefully along with its thread.
The dedicated thread and the functions stop_loop, run_forever_safe, and await_sync can be encapsulated in a module or a class.
For thread-safety considerations, see the section "Concurrency and Multithreading" in the asyncio docs.
import asyncio
import threading

#----------------------------------------
def stop_loop(loop):
    ''' stops an event loop '''
    loop.stop()
    print(".: LOOP STOPPED:", loop.is_running())

def run_forever_safe(loop):
    ''' run a loop for ever and clean up after being stopped '''
    loop.run_forever()
    # NOTE: loop.run_forever returns after calling loop.stop
    #-- cancel all tasks and close the loop gracefully
    print(".: CLOSING LOOP...")
    # source: <https://xinhuang.github.io/posts/2017-07-31-common-mistakes-using-python3-asyncio.html>
    loop_tasks_all = asyncio.Task.all_tasks(loop=loop)
    for task in loop_tasks_all:
        task.cancel()
    # NOTE: `cancel` does not guarantee that the Task will be cancelled
    for task in loop_tasks_all:
        if not (task.done() or task.cancelled()):
            try:
                # wait for task cancellations
                loop.run_until_complete(task)
            except asyncio.CancelledError:
                pass
    print(".: ALL TASKS CANCELLED.")
    loop.close()
    print(".: LOOP CLOSED:", loop.is_closed())

def await_sync(task):
    ''' synchronously waits for a task '''
    while not task.done():
        pass
    print(".: AWAITED TASK DONE")
    return task.result()
#----------------------------------------

async def asyncTask(loop, k):
    ''' asynchronous task '''
    print("--start async task %s" % k)
    await asyncio.sleep(3, loop=loop)
    print("--end async task %s." % k)
    key = "KEY#%s" % k
    return key

def main():
    loop = asyncio.new_event_loop()  # construct a new event loop
    #-- closures for running and stopping the event-loop
    run_loop_forever = lambda: run_forever_safe(loop)
    close_loop_safe = lambda: loop.call_soon_threadsafe(stop_loop, loop)
    #-- make dedicated thread for running the event loop
    thread = threading.Thread(target=run_loop_forever)
    #-- add some tasks along with my particular task
    myTask = asyncio.run_coroutine_threadsafe(asyncTask(loop, 100200300), loop=loop)
    otherTasks = [asyncio.run_coroutine_threadsafe(asyncTask(loop, i), loop=loop)
                  for i in range(1, 10)]
    #-- begin the thread to run the event-loop
    print(".: EVENT-LOOP THREAD START")
    thread.start()
    #-- _synchronously_ wait for the result of my task
    result = await_sync(myTask)  # blocks until task is done
    print("* final result of my task:", result)
    # ... do lots of work ...
    print("*** ALL WORK DONE ***")
    #========================================
    # close the loop gracefully when everything is finished
    close_loop_safe()
    thread.join()
#----------------------------------------

main()
Here is my case: my whole program is async, but it calls a sync lib, which then calls back into my async func.
Following the answer by user4815162342:
import asyncio

async def asyncTask(k):
    ''' asynchronous task '''
    print("--start async task %s" % k)
    # await asyncio.sleep(3, loop=loop)
    await asyncio.sleep(3)
    print("--end async task %s." % k)
    key = "KEY#%s" % k
    return key

def my_callback():
    print("here i want to call my async func!")
    future = asyncio.run_coroutine_threadsafe(asyncTask(1), LOOP)
    return future.result()

def sync_third_lib(cb):
    print("here will call back to your code...")
    cb()

async def main():
    print("main start...")
    print("call sync third lib ...")
    await asyncio.to_thread(sync_third_lib, my_callback)
    # await loop.run_in_executor(None, func=sync_third_lib)
    print("another work...keep async...")
    await asyncio.sleep(2)
    print("done!")

LOOP = asyncio.get_event_loop()
LOOP.run_until_complete(main())
