Understanding asyncio: an already running run_forever loop and pending tasks - Python

I'm having problems understanding how to add a new task to an already running event loop.
This code:
import asyncio
import logging

@asyncio.coroutine
def blocking(cmd):
    while True:
        logging.info("in blocking coroutine")
        yield from asyncio.sleep(0.01)
        print("ping")

def main():
    logging.info("in main function")
    loop = asyncio.get_event_loop()
    logging.info("new loop created")
    logging.info("loop running forever")
    loop.run_forever()
    asyncio.async(blocking("ls"))

logging.basicConfig(level=logging.INFO)
main()
Changing run_forever() to run_until_complete(asyncio.async(blocking("ls"))) works fine. But I'm really confused: why can't I add a task to the already running loop?

The problem is that the call to loop.run_forever() blocks; it starts the event loop, and won't return until you explicitly stop the loop - hence the forever part of run_forever. Your program never explicitly stops the event loop, so your asyncio.async(blocking("ls")) call is never reached.
Using asyncio.async to add a new task to an already running loop is fine; you just need to make sure the call actually happens from inside a coroutine or callback running in the event loop. Here are some examples:
Schedule blocking to run as soon as the event loop starts:
def main():
    logging.info("in main function")
    loop = asyncio.get_event_loop()
    logging.info("new loop created")
    logging.info("loop running forever")
    asyncio.async(blocking("ls"))  # scheduled now, runs once run_forever() starts
    loop.run_forever()
Schedule blocking from a callback executed by the event loop:
def start_blocking():
    asyncio.async(blocking("ls"))

def main():
    logging.info("in main function")
    loop = asyncio.get_event_loop()
    logging.info("new loop created")
    logging.info("loop running forever")
    loop.call_soon(start_blocking)  # calls start_blocking once the event loop starts
    loop.run_forever()
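For completeness, a sketch of the remaining variant: scheduling blocking() from inside another coroutine that is already running on the loop. This uses asyncio.ensure_future, the later spelling of asyncio.async; the launcher coroutine is made up for the example.
@asyncio.coroutine
def launcher():
    # this code runs inside the event loop, so scheduling another task here is safe
    asyncio.ensure_future(blocking("ls"))
    yield from asyncio.sleep(0)

def main():
    loop = asyncio.get_event_loop()
    asyncio.ensure_future(launcher())
    loop.run_forever()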

Related

Why does nest_asyncio make asyncio.run not block the main thread where the event loop runs, even though it is a blocking function?

Consider the following program:
import asyncio
import signal
import nest_asyncio

nest_asyncio.apply()

async_event_obj = asyncio.Event()
shutdown_command_issued = False

async def async_exit_handler():
    await async_event_obj.wait()

def exit_handler(signal, frame):
    global shutdown_command_issued
    shutdown_command_issued = True
    asyncio.run(async_exit_handler())
    quit()

signal.signal(signal.SIGINT, exit_handler)

async def coroutine_one():
    while True:
        if not shutdown_command_issued:
            print('coroutine one works')
            await asyncio.sleep(1)
        else:
            break
    print('Coroutine one finished.')
    async_event_obj.set()

loop = asyncio.new_event_loop()
loop.create_task(coroutine_one())
loop.run_forever()
What I have done is add a sync signal handler (exit_handler) to gently wait for the running tasks to complete in the only event loop, which runs in the main thread. asyncio.run is a synchronous, blocking call, and because it runs in the main thread (where the signal handlers run and where my event loop lives), it should block the main thread and stop the other coroutines. But when I use the nest_asyncio module, asyncio.run becomes non-blocking and the other coroutines in the event loop (i.e. coroutine_one) continue their execution. What does nest_asyncio do under the hood? I know it lets multiple event loops run in a single thread, but how does it make a blocking function non-blocking?
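As a rough illustration of the "re-entrant" idea behind nest_asyncio: a patched run_until_complete keeps driving the already-running loop until the nested future is done, instead of raising "This event loop is already running". This is a conceptual sketch only, not nest_asyncio's actual code, and it leans on the private _run_once method.
import asyncio

def reentrant_run_until_complete(loop, coro):
    # conceptual sketch only; nest_asyncio's real patch is more involved
    task = asyncio.ensure_future(coro, loop=loop)
    while not task.done():
        loop._run_once()  # private API, shown purely for illustration
    return task.result()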

Running multiple async tasks and waiting for them all to complete in Django

I have a function which is
data = []

async def connect(id):
    d = await database_sync_to_async(model.objects.filter())
    data.append(d)
and I call the connect function like
import asyncio

loop = asyncio.get_event_loop()
try:
    # run_forever() returns after calling loop.stop()
    tasks = [connect(1), connect(2), connect(3), connect(4), connect(5)]
    a, b = loop.run_until_complete(asyncio.gather(*tasks))
finally:
    loop.close()
But this is not working; it fails with "There is no current event loop in thread 'Thread-3'".
How can I implement it?
Quoting the doc for asyncio.get_event_loop():
If there is no current event loop set in the current OS thread, the OS thread is main, and set_event_loop() has not yet been called, asyncio will create a new event loop and set it as the current one.
A Django application typically runs multiple threads, in which case asyncio.get_event_loop() raises the exception you get when you are not in the main thread.
A possibility would be the following:
import asyncio
try:
loop = asyncio.get_event_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
tasks =[connect(1),connect(2),connect(3),connect(4),connect(5)]
results = loop.run_until_complete(asyncio.gather(*tasks))
finally:
loop.close()
Depending on which Python version you are using (>= 3.7) and what you are trying to achieve, you could also use asyncio.run().
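For instance, a minimal sketch of the asyncio.run() variant; it creates and closes its own event loop, so no manual loop management is needed (the run_all wrapper is just for illustration):
import asyncio

async def run_all():
    tasks = [connect(i) for i in range(1, 6)]
    return await asyncio.gather(*tasks)

results = asyncio.run(run_all())  # new loop per call, closed automatically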

How to run a coroutine and wait for its result from a sync function when the loop is already running?

I have code like the following:
import asyncio

def render():
    loop = asyncio.get_event_loop()

    async def test():
        await asyncio.sleep(2)
        print("hi")
        return 200

    if loop.is_running():
        result = asyncio.ensure_future(test())
    else:
        result = loop.run_until_complete(test())
When the loop is not running it's quite easy: just use loop.run_until_complete and it returns the coroutine's result. But if the loop is already running (my blocking code runs in an app that is already running the loop), I cannot use loop.run_until_complete since it will raise an exception; when I call asyncio.ensure_future the task gets scheduled and run, but I want to wait there for the result. Does anybody know how to do this? The docs are not very clear on this.
I tried passing a concurrent.futures.Future, calling set_result inside the coro and then calling Future.result() in my blocking code, but it doesn't work: it blocks there and doesn't let anything else run. Any help would be appreciated.
To implement render() with the proposed design, you would need a way to single-step the event loop from a callback running inside it. Asyncio explicitly forbids recursive event loops, so this approach is a dead end.
Given that constraint, you have two options:
1. make render() itself a coroutine;
2. execute render() (and its callers) in a thread different from the one that runs the asyncio event loop.
Assuming #1 is out of the question, you can implement the #2 variant of render() like this:
def render():
    loop = _event_loop  # can't call get_event_loop()

    async def test():
        await asyncio.sleep(2)
        print("hi")
        return 200

    future = asyncio.run_coroutine_threadsafe(test(), loop)
    result = future.result()
Note that you cannot use asyncio.get_event_loop() in render because the event loop is not (and should not be) set for that thread. Instead, the code that spawns the runner thread must call asyncio.get_event_loop() and send it to the thread, or just leave it in a global variable or a shared structure.
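A minimal sketch of that surrounding setup, assuming the event loop runs in the main thread and render() runs in a helper thread; the global variable and thread wiring are illustrative only:
import asyncio
import threading

def render():
    loop = _event_loop  # handed over by the main thread via a global

    async def test():
        await asyncio.sleep(2)
        return 200

    future = asyncio.run_coroutine_threadsafe(test(), loop)
    print(future.result())                 # blocks this thread, not the loop
    loop.call_soon_threadsafe(loop.stop)   # let the sketch exit

_event_loop = asyncio.get_event_loop()     # main thread owns the loop
threading.Thread(target=render).start()    # render() runs off the loop thread
_event_loop.run_forever()                  # serves the scheduled coroutine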
Waiting Synchronously for an Asynchronous Coroutine
If an asyncio event loop is already running by calling loop.run_forever, it will block the executing thread until loop.stop is called [see the docs]. Therefore, the only way for a synchronous wait is to run the event loop on a dedicated thread, schedule the asynchronous function on the loop and wait for it synchronously from another thread.
For this I have composed my own minimal solution following the answer by user4815162342. I have also added the parts for cleaning up the loop when all work is finished [see loop.close].
The main function in the code below runs the event loop on a dedicated thread, schedules several tasks on the event loop, plus the task the result of which is to be awaited synchronously. The synchronous wait will block until the desired result is ready. Finally, the loop is closed and cleaned up gracefully along with its thread.
The dedicated thread and the functions stop_loop, run_forever_safe, and await_sync can be encapsulated in a module or a class.
For thread-safety considerations, see the section "Concurrency and Multithreading" in the asyncio docs.
import asyncio
import threading

#----------------------------------------
def stop_loop(loop):
    ''' stops an event loop '''
    loop.stop()
    print(".: LOOP STOPPED:", loop.is_running())

def run_forever_safe(loop):
    ''' run a loop forever and clean up after being stopped '''
    loop.run_forever()
    # NOTE: loop.run_forever returns after calling loop.stop
    #-- cancel all tasks and close the loop gracefully
    print(".: CLOSING LOOP...")
    # source: <https://xinhuang.github.io/posts/2017-07-31-common-mistakes-using-python3-asyncio.html>
    # NOTE: asyncio.Task.all_tasks was removed in Python 3.9; newer code uses asyncio.all_tasks
    loop_tasks_all = asyncio.Task.all_tasks(loop=loop)
    for task in loop_tasks_all:
        task.cancel()
    # NOTE: `cancel` does not guarantee that the Task will be cancelled
    for task in loop_tasks_all:
        if not (task.done() or task.cancelled()):
            try:
                # wait for task cancellations
                loop.run_until_complete(task)
            except asyncio.CancelledError:
                pass
    #END for
    print(".: ALL TASKS CANCELLED.")
    loop.close()
    print(".: LOOP CLOSED:", loop.is_closed())

def await_sync(task):
    ''' synchronously waits for a task '''
    while not task.done():
        pass  # busy-wait; task.result() on the concurrent future would also block
    print(".: AWAITED TASK DONE")
    return task.result()

#----------------------------------------
async def asyncTask(loop, k):
    ''' asynchronous task '''
    print("--start async task %s" % k)
    # NOTE: the loop argument to asyncio.sleep was removed in Python 3.10
    await asyncio.sleep(3, loop=loop)
    print("--end async task %s." % k)
    key = "KEY#%s" % k
    return key

def main():
    loop = asyncio.new_event_loop()  # construct a new event loop
    #-- closures for running and stopping the event-loop
    run_loop_forever = lambda: run_forever_safe(loop)
    close_loop_safe = lambda: loop.call_soon_threadsafe(stop_loop, loop)
    #-- make dedicated thread for running the event loop
    thread = threading.Thread(target=run_loop_forever)
    #-- add some tasks along with my particular task
    myTask = asyncio.run_coroutine_threadsafe(asyncTask(loop, 100200300), loop=loop)
    otherTasks = [asyncio.run_coroutine_threadsafe(asyncTask(loop, i), loop=loop)
                  for i in range(1, 10)]
    #-- begin the thread to run the event-loop
    print(".: EVENT-LOOP THREAD START")
    thread.start()
    #-- _synchronously_ wait for the result of my task
    result = await_sync(myTask)  # blocks until task is done
    print("* final result of my task:", result)
    #... do lots of work ...
    print("*** ALL WORK DONE ***")
    #========================================
    # close the loop gracefully when everything is finished
    close_loop_safe()
    thread.join()

#----------------------------------------
main()
Here is my case: my whole program is async, but it calls a sync library, which then calls back into my async function.
Following the answer by user4815162342:
import asyncio

async def asyncTask(k):
    ''' asynchronous task '''
    print("--start async task %s" % k)
    # await asyncio.sleep(3, loop=loop)
    await asyncio.sleep(3)
    print("--end async task %s." % k)
    key = "KEY#%s" % k
    return key

def my_callback():
    print("here i want to call my async func!")
    future = asyncio.run_coroutine_threadsafe(asyncTask(1), LOOP)
    return future.result()

def sync_third_lib(cb):
    print("here will call back to your code...")
    cb()

async def main():
    print("main start...")
    print("call sync third lib ...")
    await asyncio.to_thread(sync_third_lib, my_callback)  # asyncio.to_thread requires Python 3.9+
    # await loop.run_in_executor(None, func=sync_third_lib)
    print("another work...keep async...")
    await asyncio.sleep(2)
    print("done!")

LOOP = asyncio.get_event_loop()
LOOP.run_until_complete(main())

Schedule task to running event loop from synchronous code

Consider this program, where the mainloop and the coroutine to stop it are actually implemented by a library I'm using.
import asyncio
import signal

running = True

async def stop():
    global running
    print("setting false")
    running = False
    await asyncio.sleep(3)
    print("reached end")

async def mainloop():
    while running:
        print("loop")
        await asyncio.sleep(1)

def handle_signal():
    loop.create_task(stop())

loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGINT, handle_signal)
loop.run_until_complete(mainloop())
loop.close()
I need to call the stop coroutine to stop the mainloop when the program receives a signal. However, when I schedule the stop coroutine with asyncio.BaseEventLoop.create_task, the mainloop stops first, which stops the event loop, so the stop coroutine can't finish:
$ ./test.py
loop
loop
loop
^Csetting false
Task was destroyed but it is pending!
task: <Task pending coro=<stop() done, defined at ./test.py:7> wait_for=<Future pending cb=[Task._wakeup()]>>
How do I add the coroutine to the running event loop while making the event loop wait until it is complete?
As you discovered, the problem is that the event loop is only waiting for mainloop() to complete, leaving stop() pending, which asyncio correctly complains about.
If handle_signal and the top-level code are under your control, you can easily replace looping until mainloop completes with looping until a custom coroutine completes. This coroutine would invoke mainloop and then wait for the cleanup code to finish:
# ... omitted definitions of mainloop() and stop()

# list of tasks that must be waited for before we can actually exit
_cleanup = []

async def run():
    await mainloop()
    # wait for all _cleanup tasks to finish
    if _cleanup:  # asyncio.wait() rejects an empty set
        await asyncio.wait(_cleanup)

def handle_signal():
    # schedule stop() to run, and also add it to the list of
    # tasks run() must wait for before it is done
    _cleanup.append(loop.create_task(stop()))

loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGINT, handle_signal)
loop.run_until_complete(run())
loop.close()
Another option, which doesn't require the new run() coroutine (but still requires the modified handle_signal), is to issue a second run_until_complete() after mainloop completes:
# handle_signal and _cleanup defined as above

loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGINT, handle_signal)
loop.run_until_complete(mainloop())
if _cleanup:
    loop.run_until_complete(asyncio.wait(_cleanup))
loop.close()

Should I use two asyncio event loops in one program?

I want to use the Python 3 asyncio module to create a server application.
I use a main event loop to listen to the network, and when new data is received it will do some computation and send the result to the client. Does the 'do some compute' part need a new event loop, or can it use the main event loop?
You can do the compute work in the main event loop, but the whole event loop will be blocked while that happens - no other requests can be served, and anything else you have running in the event loop will be blocked. If this isn't acceptable, you probably want to run the compute work in a separate process, using BaseEventLoop.run_in_executor. Here's a very simple example demonstrating it:
import time
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_bound_worker(x, y):
    print("in worker")
    time.sleep(3)
    return x + y

@asyncio.coroutine
def some_coroutine():
    yield from asyncio.sleep(1)
    print("done with coro")

@asyncio.coroutine
def main():
    loop = asyncio.get_event_loop()
    loop.set_default_executor(ProcessPoolExecutor())
    asyncio.async(some_coroutine())
    out = yield from loop.run_in_executor(None, cpu_bound_worker, 3, 4)
    print("got {}".format(out))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Output:
in worker
done with coro
got 7
cpu_bound_worker gets executed in a child process, and the event loop will wait for the result like it would any other non-blocking I/O operation, so it doesn't block other coroutines from running.
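For newer Python (3.7+), a rough sketch of the same idea using async/await syntax; it mirrors the example above, with asyncio.run and an explicit executor passed to run_in_executor:
import time
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_bound_worker(x, y):
    print("in worker")
    time.sleep(3)  # stands in for CPU-heavy work
    return x + y

async def some_coroutine():
    await asyncio.sleep(1)
    print("done with coro")

async def main():
    loop = asyncio.get_running_loop()
    asyncio.ensure_future(some_coroutine())
    with ProcessPoolExecutor() as pool:
        # runs in a child process; the event loop keeps serving other coroutines
        out = await loop.run_in_executor(pool, cpu_bound_worker, 3, 4)
    print("got {}".format(out))

if __name__ == "__main__":  # required for ProcessPoolExecutor child processes on spawn-based platforms
    asyncio.run(main())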
