I have been trying to understand the asyncio module in order to
implement a server. I was looking at this question, which seems
to ask a similar, if not the same, question; however, I am still trying
to grasp the workflow happening behind the scenes.
I have the following simple program with two coroutines: one reads from the terminal
and puts the input into a Queue, and one waits for items in the queue and
simply prints them back to the screen.
import asyncio

q = asyncio.Queue()

async def put():
    while True:
        await q.put(input())  # input() would normally be something like client.recv()
        await asyncio.sleep(1)  # This is necessary, but I don't understand why

async def get():
    while True:
        print(await q.get())

def run():
    loop = asyncio.get_event_loop()
    task1 = loop.create_task(put())
    task2 = loop.create_task(get())
    loop.run_forever()

run()
This program works as expected; however, when one removes the await asyncio.sleep(1)
statement from the put coroutine, it stops working. I assume this is because the loop keeps eating up the thread and the message never gets pushed through. I don't understand why, though, because I would think input() is a blocking function, so the coroutine should suspend until a new line is available on the tty.
The second problem is that if I use asyncio.get_event_loop() in the run() call, the interpreter warns me that there is no active loop; as stated in the documentation, this call is deprecated, so I tried replacing it with asyncio.new_event_loop(). The program still works the same, but I get a traceback upon KeyboardInterrupt (which does not happen when calling asyncio.get_event_loop()):
Task was destroyed but it is pending!
task: <Task pending name='Task-1' coro=<put() running at test.py:10> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-2' coro=<get() running at test.py:15> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Exception ignored in: <coroutine object get at 0x7f32f51369d0>
Traceback (most recent call last):
File "test.py", line 15, in get
File "/usr/lib64/python3.10/asyncio/queues.py", line 161, in get
File "/usr/lib64/python3.10/asyncio/base_events.py", line 745, in call_soon
File "/usr/lib64/python3.10/asyncio/base_events.py", line 510, in _check_closed
RuntimeError: Event loop is closed
A third variant I tried was to make the run method itself async and invoke it via asyncio.run(run()).
import asyncio

q = asyncio.Queue()

async def put():
    while True:
        await q.put(input())
        await asyncio.sleep(1)

async def get():
    while True:
        print(await q.get())

async def run():
    loop = asyncio.get_event_loop()
    task1 = loop.create_task(put())
    task2 = loop.create_task(get())
    await task1

asyncio.run(run())
This works just fine as well; however, if I replace await task1 with await task2,
I again get errors when I interrupt the program.
Why does the execution differ between these variants, and how is it supposed to be done in the end?
As stated in the comments, with a StreamReader (your original code) problem #1 works flawlessly and causes no issue. input() does not give asyncio a chance to switch coroutines; and if the StreamReader constantly has data, you can limit the queue to a certain length.
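For illustration, here is a minimal sketch of problem #1 rewritten around a StreamReader (assuming a Unix-like platform, where sys.stdin can be wired into the loop via connect_read_pipe):

import asyncio
import sys

q = asyncio.Queue()

async def put():
    # Wrap sys.stdin in a StreamReader so reads become awaitable
    loop = asyncio.get_running_loop()
    reader = asyncio.StreamReader()
    await loop.connect_read_pipe(
        lambda: asyncio.StreamReaderProtocol(reader), sys.stdin)
    while True:
        line = await reader.readline()  # suspends here, so get() can run
        await q.put(line.decode().rstrip())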
For problem #2: during cleanup, Python uses the loop assigned to the current thread. Under the hood, asyncio.run() and asyncio.get_event_loop() assign the loop to the main thread. When no loop is found, all hell breaks loose.
If you wish to assign it yourself, you can do so like this:
def run():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    task1 = loop.create_task(put())
    task2 = loop.create_task(get())
    loop.run_forever()
Keep in mind you're still missing some manual cleanup (e.g. shutdown_asyncgens and shutdown_default_executor), but that's a different topic. Overall, using asyncio.run() is usually the correct choice.
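For reference, a rough sketch of what that manual cleanup could look like (assuming Python 3.9+, where loop.shutdown_default_executor() is available):

def run():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.create_task(put())
    loop.create_task(get())
    try:
        loop.run_forever()
    finally:
        # Roughly what asyncio.run() would do for you on exit
        loop.run_until_complete(loop.shutdown_asyncgens())
        loop.run_until_complete(loop.shutdown_default_executor())
        loop.close()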
I'm unable to reproduce problem #3; both await task1 and await task2 work flawlessly.
Related
I am reading a book and came across this code snippet, which doesn't make sense to me. Can someone clarify it for me?
import asyncio
import time

async def main():
    print(f'{time.ctime()} Hello!')
    await asyncio.sleep(1.0)
    print(f'{time.ctime()} Goodbye!')

loop = asyncio.get_event_loop()
task = loop.create_task(main())
# Blocks the current thread (MainThread in my case) until every coroutine is finished.
loop.run_until_complete(task)
# asyncio.all_tasks() returns a set of not yet finished Task objects run by
# the loop. Based on that definition, pending will always be an empty set.
pending = asyncio.all_tasks(loop=loop)
for task in pending:
    task.cancel()
group = asyncio.gather(*pending, return_exceptions=True)
loop.run_until_complete(group)
loop.close()
I think asyncio.all_tasks() should be called before loop.run_until_complete(). I can see many other places where it is useful, but this example absolutely does not make sense to me. I am really interested in why the author did that. What was the point?
What you are thinking is correct. There is no point in having .all_tasks() here, as it always returns an empty set. You only have one task, and you pass it to .run_until_complete(), so it blocks until that task gets done.
But things change when you have another task that takes longer than your main coroutine:
import asyncio
import time

async def longer_task():
    print("inside longer coroutine")
    await asyncio.sleep(2)

async def main():
    print(f"{time.ctime()} Hello!")
    await asyncio.sleep(1.0)
    print(f"{time.ctime()} Goodbye!")

loop = asyncio.new_event_loop()
task1 = loop.create_task(main())
task2 = loop.create_task(longer_task())
loop.run_until_complete(task1)
pending = asyncio.all_tasks(loop=loop)
print(pending)
for task in pending:
    task.cancel()
group = asyncio.gather(*pending, return_exceptions=True)
loop.run_until_complete(group)
loop.close()
The event loop only cares about finishing task1, so task2 is going to be left in a pending state.
I think asyncio.all_tasks() should be used before
loop.run_until_complete() function.
As soon as you call create_task(), the task is included in the set returned by all_tasks(), even if the loop has not started yet.
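A quick sketch demonstrating this:

import asyncio

loop = asyncio.new_event_loop()
task = loop.create_task(asyncio.sleep(1))
print(asyncio.all_tasks(loop=loop))  # contains the pending task; loop not running yet
loop.run_until_complete(task)
print(asyncio.all_tasks(loop=loop))  # set(), since the task has finished
loop.close()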
Just a side note (version 3.10): since you don't have a running event loop, .get_event_loop() will warn you; use .new_event_loop() instead.
My Source Code:
import asyncio

async def mycoro(number):
    print(f'Starting {number}')
    await asyncio.sleep(1)
    print(f'Finishing {number}')
    return str(number)

c = mycoro(3)
task = asyncio.create_task(c)
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
loop.close()
Error:
RuntimeError: no running event loop
sys:1: RuntimeWarning: coroutine 'mycoro' was never awaited
I was following along with a tutorial; according to my output, the coroutine was never awaited, even though the very same code clearly works in the video I was watching.
Simply run the coroutine directly without creating a task for it:
import asyncio

async def mycoro(number):
    print(f'Starting {number}')
    await asyncio.sleep(1)
    print(f'Finishing {number}')
    return str(number)

c = mycoro(3)
loop = asyncio.get_event_loop()
loop.run_until_complete(c)
loop.close()
The purpose of asyncio.create_task is to create an additional task from inside a running task. Since it directly starts the new task, it must be used inside a running event loop – hence the error when using it outside.
Use loop.create_task(c) if a task must be created from outside a task.
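Putting that together, a minimal sketch of the corrected code using loop.create_task:

import asyncio

async def mycoro(number):
    print(f'Starting {number}')
    await asyncio.sleep(1)
    print(f'Finishing {number}')
    return str(number)

loop = asyncio.new_event_loop()
task = loop.create_task(mycoro(3))  # create the task on the loop itself
loop.run_until_complete(task)
loop.close()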
In more recent versions of asyncio, use asyncio.run to avoid having to handle the event loop explicitly:
c = mycoro(3)
asyncio.run(c)
In general, use asyncio.create_task only to increase concurrency. Avoid using it when another task would block immediately.
# bad task usage: concurrency stays the same due to blocking
async def bad_task():
    task = asyncio.create_task(mycoro(0))
    await task

# no task usage: concurrency stays the same due to stacking
async def no_task():
    await mycoro(0)

# good task usage: concurrency is increased via multiple tasks
async def good_task():
    tasks = [asyncio.create_task(mycoro(i)) for i in range(3)]
    print('Starting done, sleeping now...')
    await asyncio.sleep(1.5)
    await asyncio.gather(*tasks)  # ensure subtasks finished
Change the line
task = asyncio.create_task(c)
to
task = asyncio.Task(c)
which constructs the Task directly instead of requiring a running event loop.
I am trying to cancel a process on timeout, but asyncio.wait_for is not working. How do I cancel the process when the timeout is reached? My code is below:
import asyncio

async def process():
    # do something that takes a long time, like this
    for i in range(0, 10000000000, 1):
        for j in range(0, 10000000000, 1):
            continue
    print('done!')

async def main():
    # I want to cancel process when the timeout is reached
    try:
        await asyncio.wait_for(process(), timeout=1.0)
    except asyncio.TimeoutError:
        print('timeout!')

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
This doesn't work because your process function is async in name only - it doesn't await anything. That means that it finishes in its entirety without giving the event loop a chance to interrupt it. Since asyncio is cooperative (as are other async/await based systems), such a function is not a correctly written async function and cannot be interrupted.
If you add an await asyncio.sleep(0.001) into the inner loop (or anything else that awaits something that actually suspends), your code will work fine.
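For example, a sketch of a cooperative version of process() (with the loop bounds shortened for illustration):

async def process():
    for i in range(10_000_000):
        if i % 10_000 == 0:
            # Yield to the event loop so wait_for() gets a chance
            # to deliver the cancellation
            await asyncio.sleep(0)
    print('done!')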
I'm trying to asynchronously download a file in Python, using wget in a subprocess. My code looks like this:
import asyncio
import time

async def download(url, filename):
    wget = await asyncio.create_subprocess_exec(
        'wget', url,
        '-O', filename
    )
    await wget.wait()

def main(url):
    loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(download(url, 'test.zip'), loop=loop)
    print("Downloading..")
    time.sleep(15)
    print("Still downloading...")
    loop.run_until_complete(future)
    loop.close()
What I'm trying to do is witness the printing of "Downloading.." and then, 15 seconds later, "Still downloading...", all while the download of the file has started. What I'm actually seeing is that the download only starts when the code hits loop.run_until_complete(future).
My understanding is that asyncio.ensure_future should start running the code of the download coroutine, but apparently I'm missing something.
When passed a coroutine, asyncio.ensure_future converts it to a task - a special kind of future that knows how to drive the coroutine - and enqueues it in the event loop. "Enqueue" means that the code inside the coroutine will be executed by a running event loop that schedules the coroutines. If the event loop is not running, then none of the coroutines will get a chance to run either. The loop is told to run by a call to loop.run_forever() or loop.run_until_complete(some_future). In the question the event loop is only started after the call to time.sleep(), so the beginning of the download is delayed by 15 seconds.
time.sleep should never be called in a thread that runs the asyncio event loop. The correct way to sleep is with asyncio.sleep, which yields the control to the event loop while waiting. asyncio.sleep returns a future that can be submitted to the event loop or awaited from a coroutine:
# ... definition of download omitted ...

async def report():
    print("Downloading..")
    await asyncio.sleep(15)
    print("Still downloading...")

def main(url):
    loop = asyncio.get_event_loop()
    dltask = loop.create_task(download(url, 'test.zip'))
    loop.create_task(report())
    loop.run_until_complete(dltask)
    loop.close()
The above code has a different problem. When the download is shorter than 15 seconds, it results in a Task was destroyed but it is pending! warning being printed. The problem is that the report task was never canceled when the download task finished and the loop closed, it was just abandoned. This happening often indicates a bug or a misunderstanding of how asyncio works, so asyncio flags it with a warning.
The obvious way to eliminate the warning is by explicitly canceling the task of the report coroutine, but the resulting code ends up being verbose and not very elegant. A simpler and shorter fix is to change report to await the download task, specifying a timeout for displaying the "Still downloading..." message:
async def dl_and_report(dltask):
    print("Downloading..")
    try:
        await asyncio.wait_for(asyncio.shield(dltask), 15)
    except asyncio.TimeoutError:
        print("Still downloading...")
        # assuming we want the download to continue; otherwise
        # remove the shield(), and dltask will be canceled
        await dltask

def main(url):
    loop = asyncio.get_event_loop()
    dltask = loop.create_task(download(url, 'test.zip'))
    loop.run_until_complete(dl_and_report(dltask))
    loop.close()
I am learning asyncio with Python 3.4.2 and I use it to continuously listen on an IPC bus, while gbulb listens on the DBus.
I created a function listen_to_ipc_channel_layer that continuously listens for incoming messages on the IPC channel and passes the message to message_handler.
I am also listening to SIGTERM and SIGINT. When I send a SIGTERM to the python process running the code you find at the bottom, the script should terminate gracefully.
The problem I am having is the following warning:
got signal 15: exit
Task was destroyed but it is pending!
task: <Task pending coro=<listen_to_ipc_channel_layer() running at /opt/mainloop-test.py:23> wait_for=<Future cancelled>>
Process finished with exit code 0
…with the following code:
import asyncio
import gbulb
import signal
import asgi_ipc as asgi

def main():
    asyncio.async(listen_to_ipc_channel_layer())
    loop = asyncio.get_event_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, ask_exit)
    # Start listening on the Linux IPC bus for incoming messages
    loop.run_forever()
    loop.close()

@asyncio.coroutine
def listen_to_ipc_channel_layer():
    """Listens to the Linux IPC bus for messages"""
    while True:
        message_handler(message=channel_layer.receive(["my_channel"]))
        try:
            yield from asyncio.sleep(0.1)
        except asyncio.CancelledError:
            break

def ask_exit():
    loop = asyncio.get_event_loop()
    for task in asyncio.Task.all_tasks():
        task.cancel()
    loop.stop()

if __name__ == "__main__":
    gbulb.install()
    # Connect to the IPC bus
    channel_layer = asgi.IPCChannelLayer(prefix="my_channel")
    main()
I still understand very little of asyncio, but I think I know what is going on. While waiting for yield from asyncio.sleep(0.1), the signal handler catches the SIGTERM and in that process calls task.cancel().
Shouldn't this trigger the CancelledError within the while True: loop? (It does not, but that is how I understand "Calling cancel() will throw a CancelledError to the wrapped coroutine".)
Eventually loop.stop() is called, which stops the loop without waiting for either yield from asyncio.sleep(0.1) to return a result or even the whole coroutine listen_to_ipc_channel_layer to finish.
Please correct me if I am wrong.
I think the only thing I need to do is to make my program wait for the yield from asyncio.sleep(0.1) to return a result and/or for the coroutine to break out of the while loop and finish.
I believe I am confusing a lot of things. Please help me get those things straight so that I can figure out how to gracefully close the event loop without the warning.
The problem comes from closing the loop immediately after cancelling the tasks. As the cancel() docs state
"This arranges for a CancelledError to be thrown into the wrapped coroutine on the next cycle through the event loop."
Take this snippet of code:
import asyncio
import signal

async def pending_doom():
    await asyncio.sleep(2)
    print(">> Cancelling tasks now")
    for task in asyncio.Task.all_tasks():
        task.cancel()
    print(">> Done cancelling tasks")
    asyncio.get_event_loop().stop()

def ask_exit():
    for task in asyncio.Task.all_tasks():
        task.cancel()

async def looping_coro():
    print("Executing coroutine")
    while True:
        try:
            await asyncio.sleep(0.25)
        except asyncio.CancelledError:
            print("Got CancelledError")
            break
        print("Done waiting")
    print("Done executing coroutine")
    asyncio.get_event_loop().stop()

def main():
    asyncio.async(pending_doom())
    asyncio.async(looping_coro())
    loop = asyncio.get_event_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, ask_exit)
    loop.run_forever()
    # I had to manually remove the handlers to
    # avoid an exception on BaseEventLoop.__del__
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.remove_signal_handler(sig)

if __name__ == '__main__':
    main()
Notice that ask_exit cancels the tasks but does not stop the loop; on the next cycle, looping_coro() stops it. The output if you cancel it is:
Executing coroutine
Done waiting
Done waiting
Done waiting
Done waiting
^CGot CancelledError
Done executing coroutine
Notice how pending_doom cancels the tasks and then stops the loop immediately after. If you let it run until the pending_doom coroutine wakes from its sleep, you can see the same warning you're getting:
Executing coroutine
Done waiting
Done waiting
Done waiting
Done waiting
Done waiting
Done waiting
Done waiting
>> Cancelling tasks now
>> Done cancelling tasks
Task was destroyed but it is pending!
task: <Task pending coro=<looping_coro() running at canceling_coroutines.py:24> wait_for=<Future cancelled>>
The root of the issue is that the loop doesn't have time to finish all the tasks.
This arranges for a CancelledError to be thrown into the wrapped coroutine on the next cycle through the event loop.
There is no chance to do a "next cycle" of the loop in your approach. To do it properly, you should move the stop operation into a separate, non-looping coroutine to give your loop a chance to finish.
The second significant thing is the raising of CancelledError.
Unlike Future.cancel(), this does not guarantee that the task will be cancelled: the exception might be caught and acted upon, delaying cancellation of the task or preventing cancellation completely. The task may also return a value or raise a different exception.
Immediately after this method is called, cancelled() will not return True (unless the task was already cancelled). A task will be marked as cancelled when the wrapped coroutine terminates with a CancelledError exception (even if cancel() was not called).
So after its cleanup, your coroutine must re-raise CancelledError to be marked as cancelled.
Using an extra coroutine to stop the loop is not an issue, because it is not cyclic and will be done immediately after execution.
def main():
    loop = asyncio.get_event_loop()
    asyncio.ensure_future(listen_to_ipc_channel_layer())
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, ask_exit)
    loop.run_forever()
    print("Close")
    loop.close()

@asyncio.coroutine
def listen_to_ipc_channel_layer():
    while True:
        try:
            print("Running")
            yield from asyncio.sleep(0.1)
        except asyncio.CancelledError as e:
            print("Break it out")
            raise e  # Raise a proper error

# Stop the loop concurrently
@asyncio.coroutine
def exit():
    loop = asyncio.get_event_loop()
    print("Stop")
    loop.stop()

def ask_exit():
    for task in asyncio.Task.all_tasks():
        task.cancel()
    asyncio.ensure_future(exit())

if __name__ == "__main__":
    main()
I had this message and I believe it was caused by garbage collection of a pending task. The Python developers were debating whether tasks created in asyncio should keep strong references, and decided they shouldn't (after two days of looking into this problem, I strongly disagree! ... see the discussion here: https://bugs.python.org/issue21163).
I created this utility for myself to keep strong references to tasks and automatically clean them up (it still needs thorough testing)...
import asyncio

# Create a strong reference to tasks, since asyncio doesn't do this for you
task_references = set()

def register_ensure_future(coro):
    task = asyncio.ensure_future(coro)
    task_references.add(task)

    # Set up cleanup of the strong reference on task completion...
    def _on_completion(f):
        task_references.remove(f)
    task.add_done_callback(_on_completion)
    return task
It seems to me that tasks should have a strong reference for as long as they are active! But asyncio doesn't do that for you, so you can be in for some bad surprises once gc kicks in, and long hours of debugging.
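A hypothetical usage sketch (worker() is just a stand-in coroutine, not part of the original code):

import asyncio

async def worker():
    await asyncio.sleep(2)  # stand-in for real background work

async def main():
    # The task survives even though we drop our local reference,
    # because task_references keeps it alive until it completes
    register_ensure_future(worker())
    await asyncio.sleep(3)

asyncio.run(main())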
The reason this happens is as explained by @Yeray Diaz Diaz.
In my case, I wanted to cancel all the tasks that were not done after the first finished, so I ended up cancelling the extra jobs, then using loop._run_once() to run the loop a bit more and allow them to stop:
loop = asyncio.get_event_loop()
job = asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
tasks_finished, tasks_pending = loop.run_until_complete(job)
tasks_done = [t for t in tasks_finished if t.exception() is None]
if len(tasks_done) == 0:
    raise Exception("Failed for all tasks.")
assert len(tasks_done) == 1
data = tasks_done[0].result()
for t in tasks_pending:
    t.cancel()
while not all([t.done() for t in tasks_pending]):
    loop._run_once()
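As an aside, a sketch of an alternative that avoids the private loop._run_once(): drain the cancelled tasks with gather, using the same return_exceptions pattern seen earlier in this thread:

for t in tasks_pending:
    t.cancel()
# Give the loop a chance to process the cancellations; CancelledError
# is collected rather than raised thanks to return_exceptions=True
loop.run_until_complete(
    asyncio.gather(*tasks_pending, return_exceptions=True))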