Consider this program, where the mainloop and the coroutine to stop it are actually implemented by a library I'm using.
import asyncio
import signal

running = True

async def stop():
    global running
    print("setting false")
    running = False
    await asyncio.sleep(3)
    print("reached end")

async def mainloop():
    while running:
        print("loop")
        await asyncio.sleep(1)

def handle_signal():
    loop.create_task(stop())

loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGINT, handle_signal)
loop.run_until_complete(mainloop())
loop.close()
I need to call the stop coroutine to stop the mainloop when the program receives a signal. However, when I schedule the stop coroutine with asyncio.BaseEventLoop.create_task, it first stops the mainloop, which in turn stops the event loop, so the stop coroutine never finishes:
$ ./test.py
loop
loop
loop
^Csetting false
Task was destroyed but it is pending!
task: <Task pending coro=<stop() done, defined at ./test.py:7> wait_for=<Future pending cb=[Task._wakeup()]>>
How can I add the coroutine to the running event loop while making the event loop wait until it is complete?
As you discovered, the problem is that the event loop is only waiting for mainloop() to complete, leaving stop() pending, which asyncio correctly complains about.
If handle_signal and the top-level code are under your control, you can easily replace running the loop until mainloop() completes with running it until a custom coroutine completes. That coroutine would invoke mainloop and then wait for the cleanup code to finish:
# ... omitted definition of mainloop() and stop()

# list of tasks that must be waited for before we can actually exit
_cleanup = []

async def run():
    await mainloop()
    # wait for all _cleanup tasks to finish
    # (guarded because asyncio.wait() rejects an empty collection)
    if _cleanup:
        await asyncio.wait(_cleanup)

def handle_signal():
    # schedule stop() to run, and also add it to the list of
    # tasks run() must wait for before it is done
    _cleanup.append(loop.create_task(stop()))

loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGINT, handle_signal)
loop.run_until_complete(run())
loop.close()
Another option, which doesn't require the new run() coroutine (but still requires the modified handle_signal), is to issue a second run_until_complete() after mainloop completes:
# handle_signal and _cleanup defined as above
loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGINT, handle_signal)
loop.run_until_complete(mainloop())
if _cleanup:
    loop.run_until_complete(asyncio.wait(_cleanup))
loop.close()
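The pattern can be exercised without an actual signal. In this self-contained sketch (timings, the events list, and the call_later trigger are mine), a call_later callback stands in for the signal handler, and the last two events show that run() really does outlive stop():

```python
import asyncio

events = []
running = True
_cleanup = []

async def stop():
    global running
    running = False
    await asyncio.sleep(0.05)  # simulated async cleanup work
    events.append("stop finished")

async def mainloop():
    while running:
        events.append("loop")
        await asyncio.sleep(0.01)

def handle_signal(loop):
    # same body a real add_signal_handler callback would have
    _cleanup.append(loop.create_task(stop()))

async def run():
    await mainloop()
    if _cleanup:
        await asyncio.wait(_cleanup)  # don't exit before cleanup finishes
    events.append("run finished")

loop = asyncio.new_event_loop()
loop.call_later(0.035, handle_signal, loop)  # fire the fake "signal" after 35 ms
loop.run_until_complete(run())
loop.close()
```

Without the asyncio.wait step, "stop finished" would never be appended and asyncio would report the pending task on shutdown.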
Consider the following program:
import asyncio
import signal
import nest_asyncio

nest_asyncio.apply()

async_event_obj = asyncio.Event()
shutdown_command_issued = False

async def async_exit_handler():
    await async_event_obj.wait()

def exit_handler(signal, frame):
    global shutdown_command_issued
    shutdown_command_issued = True
    asyncio.run(async_exit_handler())
    quit()

signal.signal(signal.SIGINT, exit_handler)

async def coroutine_one():
    while True:
        if not shutdown_command_issued:
            print('coroutine one works')
            await asyncio.sleep(1)
        else:
            break
    print('Coroutine one finished.')
    async_event_obj.set()

loop = asyncio.new_event_loop()
loop.create_task(coroutine_one())
loop.run_forever()
What I have done is add a sync signal handler (i.e. exit_handler) that gently waits for the running tasks to complete in the only event loop, which runs in the main thread. Normally, asyncio.run is a synchronous, blocking function; because it runs in the main thread, where the signal handlers run and where my event loop lives, it should block the main thread and stall the other coroutines. But magically, when I use the nest_asyncio module, asyncio.run becomes non-blocking and the other coroutines in the event loop (i.e. coroutine_one) continue executing. What exactly does nest_asyncio do under the hood? I know it lets multiple event loops run in a single thread, but how does it make a blocking function non-blocking?
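I can't vouch for nest_asyncio's exact code, but conceptually its patched run_until_complete stops delegating to the (forbidden) nested run_forever and instead single-steps the loop's machinery until the awaited future is done, so already-scheduled tasks keep making progress in the meantime. Below is a rough, illustrative sketch of that idea using asyncio's private _run_once() and _set_running_loop() (not nest_asyncio's actual implementation; all names here are mine):

```python
import asyncio

results = []

async def background():
    # stands in for coroutine_one: keeps running while we "block"
    for i in range(3):
        results.append("bg %d" % i)
        await asyncio.sleep(0.01)

async def cleanup():
    # stands in for async_exit_handler
    await asyncio.sleep(0.02)
    results.append("cleanup done")
    return "cleanup result"

def nested_wait(loop, coro):
    # roughly what a patched, re-entrant run_until_complete boils down to
    task = loop.create_task(coro)
    asyncio.events._set_running_loop(loop)  # what run_forever normally does
    try:
        while not task.done():
            loop._run_once()  # single-step the loop: other tasks progress too
    finally:
        asyncio.events._set_running_loop(None)
    return task.result()

loop = asyncio.new_event_loop()
bg = loop.create_task(background())
value = nested_wait(loop, cleanup())   # "blocks", yet background() keeps going
loop.run_until_complete(bg)            # let the background task finish
loop.close()
```

So the call still blocks the calling frame, but instead of parking the thread it keeps pumping the loop by hand, which is why the other coroutines keep running.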
I'm having some trouble debugging an issue: I have an asyncio project and I would like it to shut down gracefully.
import asyncio
import signal

async def clean_loop(signal, loop):
    print("something")
    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    [task.cancel() for task in tasks]
    await asyncio.gather(*tasks, return_exceptions=True)
    loop.stop()

def main():
    loop = asyncio.get_event_loop()
    signals = (signal.SIGTERM, signal.SIGINT)
    for s in signals:
        loop.add_signal_handler(s, lambda s=s: asyncio.create_task(clean_loop(s, loop)))
    task = loop.create_task(run(some_code))
    loop.call_later(60, task.cancel)
    try:
        loop.run_until_complete(task)
    except asyncio.CancelledError:
        pass
    finally:
        loop.close()

if __name__ == "__main__":
    main()
When I run my code and send a KeyboardInterrupt or a TERM signal from a different screen, nothing seems to happen, and it doesn't look like clean_loop is being called.
EDIT: I think I was able to isolate the problem a bit more. The code I'm running within some_code has an infinite loop and contains another asyncio.gather(*tasks) within it; when I comment it out, I am able to catch the signal and clean_loop runs. Could anyone explain why this conflict is happening?
If you're on a Unix-like system, add_signal_handler should work fine; its documentation says:
Add a handler for a signal. UNIX only.
The only thing is that clean_loop itself is a task that gets destroyed by loop.stop(), and because of that you see:
Task was destroyed but it is pending!
task: <Task pending name='Task-2' coro=<clean_loop() running at ...> wait_for=<_GatheringFuture finished result=[CancelledError('')]>>
That shouldn't matter, though, because it's only there to cancel the other tasks.
I've made some slight changes that have nothing to do with the actual question:
import asyncio
import signal

async def run():
    for i in range(4):
        print(i)
        await asyncio.sleep(2)

async def clean_loop(signal, loop):
    print("-----------------------handling signal")
    tasks = asyncio.all_tasks() - {asyncio.current_task()}
    for task in tasks:
        task.cancel()
    print("--------------------------reached here")
    await asyncio.gather(*tasks, return_exceptions=True)
    loop.stop()

def main():
    loop = asyncio.new_event_loop()
    signals = (signal.SIGTERM, signal.SIGINT)
    for s in signals:
        loop.add_signal_handler(s, lambda s=s: asyncio.create_task(clean_loop(s, loop)))
    task = loop.create_task(run())
    loop.call_later(60, task.cancel)
    try:
        loop.run_until_complete(task)
    except asyncio.CancelledError:
        pass
    finally:
        loop.close()

if __name__ == "__main__":
    main()
output:
0
1
2
^C-----------------------handling signal
--------------------------reached here
Task was destroyed but it is pending!
task: <Task pending name='Task-2' coro=<clean_loop() running at ...> wait_for=<_GatheringFuture finished result=[CancelledError('')]>>
According to your edit:
The documentation of add_signal_handler says:
The callback will be invoked by loop, along with other queued
callbacks and runnable coroutines of that event loop.
Just like other coroutines, it is invoked by the loop in a cooperative way. In other words, a coroutine must give control back to the event loop so that the event loop can run other coroutines.
In your case, the coroutine with the infinite loop prevents the event loop from running other coroutines, in particular the callback registered through add_signal_handler. It doesn't cooperate! That's why you thought it didn't work: the callback sat idle in the queue, waiting for a chance to run.
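The cooperation point is easy to demonstrate in miniature: a coroutine that never awaits keeps a queued callback (such as the one add_signal_handler registers) from running until it finishes, while one that yields with await asyncio.sleep(0) lets the callback in immediately. All names below are illustrative:

```python
import asyncio

order = []

async def greedy():
    for _ in range(1000):
        pass  # never awaits: the loop cannot run queued callbacks meanwhile
    order.append("greedy done")

async def polite():
    for _ in range(3):
        await asyncio.sleep(0)  # yields control back to the loop each time
    order.append("polite done")

async def scenario(coro_fn):
    order.clear()
    loop = asyncio.get_running_loop()
    loop.call_soon(order.append, "callback")  # stands in for a signal callback
    await coro_fn()
    return list(order)  # snapshot before the loop shuts down

greedy_order = asyncio.run(scenario(greedy))
polite_order = asyncio.run(scenario(polite))
```

With greedy(), the callback only gets to run after the coroutine has already finished; with polite(), the very first sleep(0) lets the loop run the callback first.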
I have code like the following:
def render():
    loop = asyncio.get_event_loop()

    async def test():
        await asyncio.sleep(2)
        print("hi")
        return 200

    if loop.is_running():
        result = asyncio.ensure_future(test())
    else:
        result = loop.run_until_complete(test())
When the loop is not running it is quite easy: just use loop.run_until_complete and it returns the coro's result. But if the loop is already running (my blocking code runs in an app that is already running the loop) I cannot use loop.run_until_complete, since it will raise an exception; when I call asyncio.ensure_future the task gets scheduled and run, but I want to wait there for the result. Does anybody know how to do this? The docs are not very clear on this.
I tried passing a concurrent.futures.Future, calling set_result inside the coro and then calling Future.result() from my blocking code, but it doesn't work: it blocks there and doesn't let anything else run. Any help would be appreciated.
To implement render() with the proposed design, you would need a way to single-step the event loop from a callback running inside it. Asyncio explicitly forbids recursive event loops, so this approach is a dead end.
Given that constraint, you have two options:
make render() itself a coroutine;
execute render() (and its callers) in a thread different than the thread that runs the asyncio event loop.
Assuming #1 is out of the question, you can implement the #2 variant of render() like this:
def render():
    loop = _event_loop  # can't call get_event_loop()

    async def test():
        await asyncio.sleep(2)
        print("hi")
        return 200

    future = asyncio.run_coroutine_threadsafe(test(), loop)
    result = future.result()
Note that you cannot use asyncio.get_event_loop() in render because the event loop is not (and should not be) set for that thread. Instead, the code that spawns the runner thread must call asyncio.get_event_loop() and send it to the thread, or just leave it in a global variable or a shared structure.
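Here is one way the whole arrangement might look end to end (the thread setup, the global variable name, and the shortened sleep are mine): the loop runs on a dedicated thread, and render() on the main thread blocks on the concurrent.futures.Future returned by run_coroutine_threadsafe:

```python
import asyncio
import threading

_event_loop = asyncio.new_event_loop()

def _run_loop():
    asyncio.set_event_loop(_event_loop)
    _event_loop.run_forever()

_thread = threading.Thread(target=_run_loop, daemon=True)
_thread.start()

async def test():
    await asyncio.sleep(0.05)
    return 200

def render():
    # submit to the loop thread and block this (non-loop) thread
    future = asyncio.run_coroutine_threadsafe(test(), _event_loop)
    return future.result()

result = render()

# shut the loop down from outside its thread
_event_loop.call_soon_threadsafe(_event_loop.stop)
_thread.join()
_event_loop.close()
```

Note that blocking on future.result() is safe precisely because render() runs on a different thread than the loop; doing the same from inside the loop thread would deadlock.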
Waiting Synchronously for an Asynchronous Coroutine
If an asyncio event loop is already running (started by calling loop.run_forever), it will block the executing thread until loop.stop is called [see the docs]. Therefore, the only way to wait synchronously is to run the event loop on a dedicated thread, schedule the asynchronous function on the loop, and wait for it synchronously from another thread.
For this I have composed my own minimal solution following the answer by user4815162342. I have also added the parts for cleaning up the loop when all work is finished [see loop.close].
The main function in the code below runs the event loop on a dedicated thread, schedules several tasks on the event loop, plus the task the result of which is to be awaited synchronously. The synchronous wait will block until the desired result is ready. Finally, the loop is closed and cleaned up gracefully along with its thread.
The dedicated thread and the functions stop_loop, run_forever_safe, and await_sync can be encapsulated in a module or a class.
For thread-safety considerations, see the section “Concurrency and Multithreading” in the asyncio docs.
import asyncio
import threading

#----------------------------------------
def stop_loop(loop):
    ''' stops an event loop '''
    loop.stop()
    print(".: LOOP STOPPED:", loop.is_running())

def run_forever_safe(loop):
    ''' run a loop for ever and clean up after being stopped '''
    loop.run_forever()
    # NOTE: loop.run_forever returns after calling loop.stop
    #-- cancel all tasks and close the loop gracefully
    print(".: CLOSING LOOP...")
    # source: <https://xinhuang.github.io/posts/2017-07-31-common-mistakes-using-python3-asyncio.html>
    loop_tasks_all = asyncio.Task.all_tasks(loop=loop)
    for task in loop_tasks_all:
        task.cancel()
    # NOTE: `cancel` does not guarantee that the task will be cancelled
    for task in loop_tasks_all:
        if not (task.done() or task.cancelled()):
            try:
                # wait for task cancellations
                loop.run_until_complete(task)
            except asyncio.CancelledError:
                pass
    print(".: ALL TASKS CANCELLED.")
    loop.close()
    print(".: LOOP CLOSED:", loop.is_closed())

def await_sync(task):
    ''' synchronously waits for a task '''
    while not task.done():
        pass
    print(".: AWAITED TASK DONE")
    return task.result()

#----------------------------------------
async def asyncTask(loop, k):
    ''' asynchronous task '''
    print("--start async task %s" % k)
    await asyncio.sleep(3, loop=loop)
    print("--end async task %s." % k)
    key = "KEY#%s" % k
    return key

def main():
    loop = asyncio.new_event_loop()  # construct a new event loop
    #-- closures for running and stopping the event loop
    run_loop_forever = lambda: run_forever_safe(loop)
    close_loop_safe = lambda: loop.call_soon_threadsafe(stop_loop, loop)
    #-- make a dedicated thread for running the event loop
    thread = threading.Thread(target=run_loop_forever)
    #-- add some tasks along with my particular task
    myTask = asyncio.run_coroutine_threadsafe(asyncTask(loop, 100200300), loop=loop)
    otherTasks = [asyncio.run_coroutine_threadsafe(asyncTask(loop, i), loop=loop)
                  for i in range(1, 10)]
    #-- begin the thread to run the event loop
    print(".: EVENT-LOOP THREAD START")
    thread.start()
    #-- _synchronously_ wait for the result of my task
    result = await_sync(myTask)  # blocks until task is done
    print("* final result of my task:", result)
    #... do lots of work ...
    print("*** ALL WORK DONE ***")
    #========================================
    # close the loop gracefully when everything is finished
    close_loop_safe()
    thread.join()

#----------------------------------------
main()
Here is my case: my whole program is async, but it calls a sync lib, which then calls back into my async func. This follows the answer by user4815162342.
import asyncio

async def asyncTask(k):
    ''' asynchronous task '''
    print("--start async task %s" % k)
    # await asyncio.sleep(3, loop=loop)
    await asyncio.sleep(3)
    print("--end async task %s." % k)
    key = "KEY#%s" % k
    return key

def my_callback():
    print("here i want to call my async func!")
    future = asyncio.run_coroutine_threadsafe(asyncTask(1), LOOP)
    return future.result()

def sync_third_lib(cb):
    print("here will call back to your code...")
    cb()

async def main():
    print("main start...")
    print("call sync third lib ...")
    await asyncio.to_thread(sync_third_lib, my_callback)
    # await loop.run_in_executor(None, func=sync_third_lib)
    print("another work...keep async...")
    await asyncio.sleep(2)
    print("done!")

LOOP = asyncio.get_event_loop()
LOOP.run_until_complete(main())
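A trimmed, testable variant of the same round trip (requires Python 3.9+ for asyncio.to_thread; sync_lib is my stand-in for the real third-party library):

```python
import asyncio

async def async_task(k):
    await asyncio.sleep(0.01)
    return "KEY#%s" % k

def sync_lib(callback):
    # stands in for the third-party sync library that calls back
    return callback()

async def main():
    loop = asyncio.get_running_loop()

    def callback():
        # runs on the worker thread, so blocking on .result() is safe:
        # the event loop keeps running on the main thread meanwhile
        fut = asyncio.run_coroutine_threadsafe(async_task(1), loop)
        return fut.result()

    return await asyncio.to_thread(sync_lib, callback)

value = asyncio.run(main())
```

The key point is that asyncio.to_thread moves the sync library off the loop thread, which is what makes blocking on fut.result() inside the callback safe.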
I am learning asyncio with Python 3.4.2 and I use it to continuously listen on an IPC bus, while gbulb listens on the DBus.
I created a function listen_to_ipc_channel_layer that continuously listens for incoming messages on the IPC channel and passes the message to message_handler.
I am also listening to SIGTERM and SIGINT. When I send a SIGTERM to the python process running the code you find at the bottom, the script should terminate gracefully.
The problem I am having is the following warning:
got signal 15: exit
Task was destroyed but it is pending!
task: <Task pending coro=<listen_to_ipc_channel_layer() running at /opt/mainloop-test.py:23> wait_for=<Future cancelled>>
Process finished with exit code 0
…with the following code:
import asyncio
import gbulb
import signal
import asgi_ipc as asgi

def main():
    asyncio.async(listen_to_ipc_channel_layer())
    loop = asyncio.get_event_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, ask_exit)
    # Start listening on the Linux IPC bus for incoming messages
    loop.run_forever()
    loop.close()

@asyncio.coroutine
def listen_to_ipc_channel_layer():
    """Listens to the Linux IPC bus for messages"""
    while True:
        message_handler(message=channel_layer.receive(["my_channel"]))
        try:
            yield from asyncio.sleep(0.1)
        except asyncio.CancelledError:
            break

def ask_exit():
    loop = asyncio.get_event_loop()
    for task in asyncio.Task.all_tasks():
        task.cancel()
    loop.stop()

if __name__ == "__main__":
    gbulb.install()
    # Connect to the IPC bus
    channel_layer = asgi.IPCChannelLayer(prefix="my_channel")
    main()
I still understand very little of asyncio, but I think I know what is going on: while waiting on yield from asyncio.sleep(0.1), the signal handler caught the SIGTERM and in that process called task.cancel().
Shouldn't this trigger the CancelledError within the while True: loop? (It doesn't, but that is how I understand "Calling cancel() will throw a CancelledError to the wrapped coroutine".)
Eventually loop.stop() is called, which stops the loop without waiting for yield from asyncio.sleep(0.1) to return a result, or even for the whole coroutine listen_to_ipc_channel_layer to finish.
Please correct me if I am wrong.
I think the only thing I need to do is to make my program wait for the yield from asyncio.sleep(0.1) to return a result and/or for the coroutine to break out of the while loop and finish.
I believe I'm confusing a lot of things. Please help me get those things straight so that I can figure out how to gracefully close the event loop without the warning.
The problem comes from closing the loop immediately after cancelling the tasks. As the cancel() docs state
"This arranges for a CancelledError to be thrown into the wrapped coroutine on the next cycle through the event loop."
Take this snippet of code:
import asyncio
import signal

async def pending_doom():
    await asyncio.sleep(2)
    print(">> Cancelling tasks now")
    for task in asyncio.Task.all_tasks():
        task.cancel()
    print(">> Done cancelling tasks")
    asyncio.get_event_loop().stop()

def ask_exit():
    for task in asyncio.Task.all_tasks():
        task.cancel()

async def looping_coro():
    print("Executing coroutine")
    while True:
        try:
            await asyncio.sleep(0.25)
        except asyncio.CancelledError:
            print("Got CancelledError")
            break
        print("Done waiting")
    print("Done executing coroutine")
    asyncio.get_event_loop().stop()

def main():
    asyncio.async(pending_doom())
    asyncio.async(looping_coro())
    loop = asyncio.get_event_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, ask_exit)
    loop.run_forever()
    # I had to manually remove the handlers to
    # avoid an exception on BaseEventLoop.__del__
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.remove_signal_handler(sig)

if __name__ == '__main__':
    main()
Notice that ask_exit cancels the tasks but does not stop the loop; on the next cycle, looping_coro() stops it. The output if you cancel it is:
Executing coroutine
Done waiting
Done waiting
Done waiting
Done waiting
^CGot CancelledError
Done executing coroutine
Notice how pending_doom cancels the tasks and then stops the loop immediately afterwards. If you let it run until the pending_doom coroutine awakes from its sleep, you can see the same warning you're getting:
Executing coroutine
Done waiting
Done waiting
Done waiting
Done waiting
Done waiting
Done waiting
Done waiting
>> Cancelling tasks now
>> Done cancelling tasks
Task was destroyed but it is pending!
task: <Task pending coro=<looping_coro() running at canceling_coroutines.py:24> wait_for=<Future cancelled>>
The gist of the issue is that the loop doesn't have time to finish all the tasks.
This arranges for a CancelledError to be thrown into the wrapped coroutine on the next cycle through the event loop.
There is no chance to do a "next cycle" of the loop in your approach. To do it properly you should move the stop operation to a separate, non-cyclic coroutine, to give your loop a chance to finish.
Second significant thing is CancelledError raising.
Unlike Future.cancel(), this does not guarantee that the task will be cancelled: the exception might be caught and acted upon, delaying cancellation of the task or preventing cancellation completely. The task may also return a value or raise a different exception.
Immediately after this method is called, cancelled() will not return True (unless the task was already cancelled). A task will be marked as cancelled when the wrapped coroutine terminates with a CancelledError exception (even if cancel() was not called).
So after its cleanup, your coroutine must re-raise CancelledError to be marked as cancelled.
Using an extra coroutine to stop the loop is not an issue, because it is not cyclic and will be done immediately after execution.
def main():
    loop = asyncio.get_event_loop()
    asyncio.ensure_future(listen_to_ipc_channel_layer())
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, ask_exit)
    loop.run_forever()
    print("Close")
    loop.close()

@asyncio.coroutine
def listen_to_ipc_channel_layer():
    while True:
        try:
            print("Running")
            yield from asyncio.sleep(0.1)
        except asyncio.CancelledError as e:
            print("Break it out")
            raise e  # Raise a proper error

# Stop the loop concurrently
@asyncio.coroutine
def exit():
    loop = asyncio.get_event_loop()
    print("Stop")
    loop.stop()

def ask_exit():
    for task in asyncio.Task.all_tasks():
        task.cancel()
    asyncio.ensure_future(exit())

if __name__ == "__main__":
    main()
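The difference the quoted paragraphs describe can be checked directly. In this sketch (all names are mine), a task that swallows CancelledError is never marked as cancelled, while one that re-raises it is:

```python
import asyncio

async def swallows():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        pass  # swallowed: the task will NOT be marked as cancelled

async def propagates():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        raise  # re-raised after cleanup: the task IS marked as cancelled

async def main():
    t1 = asyncio.ensure_future(swallows())
    t2 = asyncio.ensure_future(propagates())
    await asyncio.sleep(0)  # let both tasks reach their await
    t1.cancel()
    t2.cancel()
    await asyncio.gather(t1, t2, return_exceptions=True)
    return t1.cancelled(), t2.cancelled()

swallowed_cancelled, propagated_cancelled = asyncio.run(main())
```

This is why the fixed listen_to_ipc_channel_layer above re-raises the exception after printing instead of just breaking out of the loop.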
I had this message and I believe it was caused by garbage collection of a pending task. The Python developers debated whether tasks created in asyncio should keep strong references and decided they shouldn't (after 2 days of looking into this problem I strongly disagree! ... see the discussion here: https://bugs.python.org/issue21163).
I created this utility for myself to keep strong references to tasks and automatically clean them up (it still needs thorough testing)...
import asyncio

# Create a strong reference to tasks, since asyncio doesn't do this for you.
task_references = set()

def register_ensure_future(coro):
    task = asyncio.ensure_future(coro)
    task_references.add(task)

    # Set up cleanup of the strong reference on task completion...
    def _on_completion(f):
        task_references.remove(f)
    task.add_done_callback(_on_completion)

    return task
It seems to me that tasks should be strongly referenced for as long as they are active! But asyncio doesn't do that for you, so you can get some bad surprises once gc happens, and long hours of debugging.
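A self-contained usage sketch of the same idea (redefined here so it runs standalone; set.discard as the done callback is equivalent to the _on_completion closure above, just tolerant of a missing entry):

```python
import asyncio

task_references = set()

def register_ensure_future(coro):
    task = asyncio.ensure_future(coro)
    task_references.add(task)                         # strong ref held
    task.add_done_callback(task_references.discard)   # dropped on completion
    return task

async def work():
    await asyncio.sleep(0.01)
    return "ok"

async def main():
    t = register_ensure_future(work())
    held = t in task_references     # strong ref present while running
    result = await t
    await asyncio.sleep(0)          # make sure the done callback has run
    return held, result, t in task_references

held, result, still_held = asyncio.run(main())
```

The task is pinned in the set for its whole lifetime and removed automatically once it completes, which is exactly the guarantee plain ensure_future doesn't give you.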
The reason this happens is as explained by @Yeray Diaz Diaz.
In my case, I wanted to cancel all the tasks that were not done after the first one finished, so I ended up cancelling the extra jobs, then using loop._run_once() to run the loop a bit more and allow them to stop:
loop = asyncio.get_event_loop()
job = asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
tasks_finished, tasks_pending = loop.run_until_complete(job)
tasks_done = [t for t in tasks_finished if t.exception() is None]
if len(tasks_done) == 0:
    raise Exception("Failed for all tasks.")
assert len(tasks_done) == 1
data = tasks_done[0].result()
for t in tasks_pending:
    t.cancel()
while not all([t.done() for t in tasks_pending]):
    loop._run_once()
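If you'd rather avoid the private _run_once(), the same effect can be had with public API only: after cancelling, run the loop once more until the pending tasks have processed their cancellation. A sketch with made-up worker tasks:

```python
import asyncio

async def worker(i):
    await asyncio.sleep(10)  # will be cancelled long before finishing
    return i

loop = asyncio.new_event_loop()
tasks = [loop.create_task(worker(i)) for i in range(3)]
tasks.append(loop.create_task(asyncio.sleep(0.01, result="fast")))

done, pending = loop.run_until_complete(
    asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED))

for t in pending:
    t.cancel()
# public API instead of loop._run_once(): run the loop until the
# cancellations have actually been delivered and processed
loop.run_until_complete(asyncio.gather(*pending, return_exceptions=True))
loop.close()

results = [t.result() for t in done]
all_cancelled = all(t.cancelled() for t in pending)
```

Because gather is given return_exceptions=True, the CancelledError from each pending task is collected rather than propagated, and no "Task was destroyed but it is pending!" warning is emitted at close.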
I'm having problems understanding how to pend a new task to an already running event loop.
This code:
import asyncio
import logging

@asyncio.coroutine
def blocking(cmd):
    while True:
        logging.info("in blocking coroutine")
        yield from asyncio.sleep(0.01)
        print("ping")

def main():
    logging.info("in main function")
    loop = asyncio.get_event_loop()
    logging.info("new loop created")
    logging.info("loop running forever")
    loop.run_forever()
    asyncio.async(blocking("ls"))

logging.basicConfig(level=logging.INFO)
main()
Changing run_forever() to run_until_complete(asyncio.async(blocking("ls"))) works fine. But I'm really confused: why can't I pend a task on the already running loop?
The problem is that the call to loop.run_forever() blocks; it starts the event loop, and won't return until you explicitly stop the loop - hence the forever part of run_forever. Your program never explicitly stops the event loop, so your asyncio.async(blocking("ls")) call is never reached.
Using asyncio.async to add a new task to an already running loop is fine; you just need to make sure the function is actually called from inside a coroutine or a callback running in the event loop. Here are some examples:
Schedule blocking to run as soon as the event loop starts:
def main():
    logging.info("in main function")
    loop = asyncio.get_event_loop()
    logging.info("new loop created")
    logging.info("loop running forever")
    asyncio.async(blocking("ls"))
    loop.run_forever()
Schedule blocking from a callback executed by the event loop:
def start_blocking():
    asyncio.async(blocking("ls"))

def main():
    logging.info("in main function")
    loop = asyncio.get_event_loop()
    logging.info("new loop created")
    logging.info("loop running forever")
    loop.call_soon(start_blocking)  # calls start_blocking once the event loop starts
    loop.run_forever()
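For completeness, here is a self-contained, runnable version of the second pattern on current Python (asyncio.async was renamed asyncio.ensure_future in 3.4.4, since async later became a keyword); the loop is stopped from inside the task so that run_forever can return. The pings list and the iteration count are mine:

```python
import asyncio

pings = []

async def blocking(cmd, n=3):
    for _ in range(n):
        await asyncio.sleep(0.01)
        pings.append("ping")
    asyncio.get_running_loop().stop()  # let run_forever return

def start_blocking():
    # runs inside the loop, so ensure_future finds the running loop
    asyncio.ensure_future(blocking("ls"))

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.call_soon(start_blocking)  # schedules the task once the loop starts
loop.run_forever()
loop.close()
```

The task completes before the loop stops, so there is no "Task was destroyed but it is pending!" warning at close.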