How to use async/await in Python 3.5+

I was trying to explain an example of async programming in Python, but I failed. Here is my code:
import asyncio
import time

async def asyncfoo(t):
    time.sleep(t)
    print("asyncFoo")

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncfoo(10))  # I think the problem is here
print("Foo")
loop.close()
My expectation is that I would see:
Foo
asyncFoo
With a wait of 10s before asyncFoo was displayed.
But instead I got nothing for 10s, and then they both displayed.
What am I doing wrong, and how can I explain it?

run_until_complete will block until asyncfoo is done. Instead, you would need two coroutines executed in the loop. Use asyncio.gather to easily start more than one coroutine with run_until_complete.
Here is an example:
import asyncio

async def async_foo():
    print("asyncFoo1")
    await asyncio.sleep(3)
    print("asyncFoo2")

async def async_bar():
    print("asyncBar1")
    await asyncio.sleep(1)
    print("asyncBar2")

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(async_foo(), async_bar()))
loop.close()

Your expectation would work in a context where you run your coroutine as a Task, independent of the flow of the rest of the code. It would also work if you were running multiple coroutines side by side, in which case the event loop juggles execution from one await statement to the next.
Within the context of your example, you can achieve the behaviour you anticipated by wrapping your coroutine in a Task object, which will carry on in the background without holding up the rest of the code block it was called from.
For example:
import asyncio

async def asyncfoo(t):
    await asyncio.sleep(t)
    print("asyncFoo")

async def my_app(t):
    my_task = asyncio.ensure_future(asyncfoo(t))
    print("Foo")
    await asyncio.wait([my_task])

loop = asyncio.get_event_loop()
loop.run_until_complete(my_app(10))
loop.close()
Note that you should use asyncio.sleep() instead of the time module.

run_until_complete is blocking: even though the coroutine finishes in 10 seconds, it waits for it, and only after it completes does the other print run.
You should launch your loop.run_until_complete(asyncfoo(10)) in a thread or a subprocess if you want "Foo" to be printed first.
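A minimal sketch of the thread-based variant (the run_loop helper and the thread structure are illustrative, not part of the original answer):

import asyncio
import threading

async def asyncfoo(t):
    await asyncio.sleep(t)
    print("asyncFoo")

def run_loop():
    # Each thread needs its own event loop.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(asyncfoo(10))
    loop.close()

# run_until_complete blocks only the worker thread,
# so the main thread can print "Foo" immediately.
t = threading.Thread(target=run_loop)
t.start()
print("Foo")
t.join()  # asyncFoo appears roughly 10 seconds later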

Related

Python: how to make an early return with an asyncio generator

I want to return the first element of an async generator and handle the remaining values without returning, fire-and-forget style. How do I make an early return from a coroutine in Python?
After passing the iterator to asyncio.create_task, it doesn't print the remaining values.
import asyncio
import time

async def async_iter(num):
    for i in range(num):
        await asyncio.sleep(0.5)
        yield i

async def handle_remains(it):
    async for i in it:
        print(i)

async def run() -> None:
    it = async_iter(10)
    async for i in it:
        print(i)
        break
    # await handle_remains(it)
    # want to make this `fire and forget` (no await), expecting it to just print the remaining values.
    asyncio.create_task(handle_remains(it))
    return i

if __name__ == '__main__':
    asyncio.run(run())
    time.sleep(10)
You’re close with the code, but not quite there yet (see also my comments above). In short, creating the Task isn’t enough: the Task needs to run:
task = asyncio.create_task(handle_remains(it)) # Creates a Task in `pending` state.
await task # Run the task, i.e. execute the wrapped coroutine.
A Task, along with coroutines and Futures, is an “Awaitable”. In fact:
When a coroutine is wrapped into a Task with functions like asyncio.create_task() the coroutine is automatically scheduled to run soon.
Notice the "scheduled to run soon": you still have to make sure the task actually runs, by applying await, a keyword which…
is used to obtain a result of coroutine execution.
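Applied to the question's run(), a minimal sketch (an assumption based on the answer, not code from the original post; asyncio.run() would otherwise tear the loop down while the task is still pending):

async def run() -> None:
    it = async_iter(10)
    async for i in it:
        print(i)
        break
    # Schedule the consumer and keep a reference to it...
    task = asyncio.create_task(handle_remains(it))
    # ...then await it before returning, so the remaining values get printed.
    await task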

How asyncio understands that a task is complete for non-blocking operations

I'm trying to understand how asyncio works. For I/O operations I understand that when await is called, we register a Future object in the event loop, then call epoll to get the sockets belonging to Future objects that are ready to give us data. After that we run the registered callback and resume function execution.
But the thing I can't understand is what happens if we use await on something that is not an I/O operation. How does the event loop understand that the task is complete? Does it create a socket for that, or use another kind of loop? Does it use epoll? Or is it not added to the loop at all and just driven as a generator?
Here is an example:
import asyncio

async def test():
    return 10

async def my_coro(delay):
    loop = asyncio.get_running_loop()
    end_time = loop.time() + delay
    while True:
        print("Blocking...")
        await test()
        if loop.time() > end_time:
            print("Done.")
            break

async def main():
    await my_coro(3.0)

asyncio.run(main())
await doesn't automatically yield to the event loop; that happens only when an async function (anywhere in the chain of awaits) requests suspension, typically because an IO or timeout it depends on is not ready.
In your example the event loop is never returned to, which you can easily verify by moving the "Blocking" print before the while loop and changing main to await asyncio.gather(my_coro(3.0), my_coro(3.0)). What you'll observe is that the coroutines execute in series ("blocking" followed by "done", all repeated twice), not in parallel ("blocking" followed by another "blocking" and then "done" twice). The reason is that there is simply no opportunity for a context switch: my_coro runs in one go, as if it were an ordinary function, because none of its awaits ever chooses to suspend.
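A sketch of that verification (the rearranged my_coro and main are assumptions based on the answer's description, not code from the question):

import asyncio

async def test():
    return 10  # returns immediately; never suspends

async def my_coro(delay):
    loop = asyncio.get_running_loop()
    end_time = loop.time() + delay
    print("Blocking...")  # moved before the while loop, per the suggestion above
    while True:
        await test()  # never yields control to the event loop
        if loop.time() > end_time:
            print("Done.")
            break

async def main():
    # Prints "Blocking... Done. Blocking... Done." in series, because
    # my_coro never gives the event loop a chance to switch coroutines.
    await asyncio.gather(my_coro(3.0), my_coro(3.0))

asyncio.run(main())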

How to run async function forever (Python)

How do I use asyncio to run a function forever? I know there's run_until_complete(function_name), but how do I use run_forever? How do I call the async function?
async def someFunction():
    async with something as some_variable:
        # do something
I'm not sure how to start the function.
run_forever doesn't mean that an async function will magically run forever; it means that the loop will run forever, or at least until someone calls loop.stop(). To literally run an async function forever, you need to create an async function that does that. For example:
async def some_function():
    async with something as some_variable:
        # do something

async def forever():
    while True:
        await some_function()

loop = asyncio.get_event_loop()
loop.run_until_complete(forever())
This is why run_forever() doesn't accept an argument: it doesn't care about any particular coroutine. The typical pattern is to add some coroutines using loop.create_task or equivalent before invoking run_forever(), as sketched below. But even an event loop that runs no tasks whatsoever and sits idle can be useful, since another thread can call asyncio.run_coroutine_threadsafe and give it work.
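A minimal sketch of that pattern (the tick coroutine is illustrative):

import asyncio

async def tick():
    # Illustrative coroutine; runs until the loop is stopped.
    while True:
        print("tick")
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.create_task(tick())  # schedule work before starting the loop
loop.run_forever()        # runs until loop.stop() is called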
I'm unsure exactly what you mean when you say "I'm not sure how to start the function". If you're asking in the literal sense:
loop = asyncio.get_event_loop()
loop.run_forever()
If you wish to add a function to the loop before starting it, the following line prior to loop.run_forever() will suffice:
asyncio.async(function())
To add a function to a loop that is already running you'll need ensure_future:
asyncio.ensure_future(function(), loop=loop)
In both cases the function you intend to call must be designated in some way as asynchronous, i.e. defined with the async def prefix or decorated with @asyncio.coroutine.
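A tiny sketch of the two styles the answer mentions (names are illustrative; @asyncio.coroutine is the legacy generator-based form, deprecated since Python 3.8 and removed in 3.11):

import asyncio

async def modern():  # native coroutine, Python 3.5+
    await asyncio.sleep(1)

@asyncio.coroutine  # legacy generator-based coroutine, pre-3.5 style
def legacy():
    yield from asyncio.sleep(1)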

Is there a difference between 'await future' and 'await asyncio.wait_for(future, None)'?

With Python 3.5 or later, is there any difference between directly applying await to a future or task, and wrapping it with asyncio.wait_for? The documentation is unclear on when it is appropriate to use wait_for, and I'm wondering if it's a vestige of the old generator-based library. The test program below appears to show no difference, but that doesn't really prove anything.
import asyncio

async def task_one():
    await asyncio.sleep(0.1)
    return 1

async def task_two():
    await asyncio.sleep(0.1)
    return 2

async def test(loop):
    t1 = loop.create_task(task_one())
    t2 = loop.create_task(task_two())
    print(repr(await t1))
    print(repr(await asyncio.wait_for(t2, None)))

def main():
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(test(loop))
    finally:
        loop.close()

main()
wait_for gives you two extra capabilities:
it lets you define a timeout,
it lets you specify the loop.
In your example:
await f1
await asyncio.wait_for(f1, None)
besides the overhead of calling an additional wrapper (wait_for), they're the same (https://github.com/python/cpython/blob/master/Lib/asyncio/tasks.py#L318).
Both awaits will wait indefinitely for the result (or an exception). In this case a plain await is more appropriate.
On the other hand, if you provide the timeout argument, it will wait for the result under a time constraint; if it takes longer than the timeout, asyncio.TimeoutError is raised and the future is cancelled.
async def my_func():
    await asyncio.sleep(10)
    return 'OK'

# will wait 10s
await my_func()

# will wait only 5 seconds and then raise TimeoutError
await asyncio.wait_for(my_func(), 5)
Another thing is the loop argument. In most cases you shouldn't bother with it; the use cases are limited: injecting a different loop for tests, running another loop, and so on.
The problem with this parameter is that all subsequent tasks/functions should also have that loop passed along...
More info https://github.com/python/asyncio/issues/362
Passing asyncio loop by argument or using default asyncio loop
Why use explicit loop parameter with aiohttp?
Unfortunately the Python documentation is a little unclear here, but if you have a look into the sources it's pretty obvious:
In contrast to await, the coroutine asyncio.wait_for() allows waiting only a limited time for the future/task to complete. If it does not complete within this time, a concurrent.futures.TimeoutError is raised.
The timeout can be specified as the second parameter. In your sample code this timeout parameter is None, which results in exactly the same functionality as directly applying await/yield from.
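A short sketch of the timeout behaviour described above (names are illustrative):

import asyncio

async def slow():
    await asyncio.sleep(10)
    return 'OK'

async def main():
    try:
        # Raises after 1 second, well before slow() finishes; slow() is cancelled.
        result = await asyncio.wait_for(slow(), timeout=1)
        print(result)
    except asyncio.TimeoutError:
        print("timed out")

asyncio.run(main())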

Asyncio two loops for different I/O tasks?

I am using the Python 3 asyncio module to create a load-balancing application. I have two heavy IO tasks:
An SNMP polling module, which determines the best possible server
A "proxy-like" module, which balances the requests to the selected server.
Both processes are going to run forever, are independent from each other, and should not be blocked by one another.
I can't use one event loop because they would block each other; is there any way to have two event loops, or do I have to use multithreading/multiprocessing?
I tried using asyncio.new_event_loop() but haven't managed to make it work.
The whole point of asyncio is that you can run many thousands of I/O-heavy tasks concurrently, so you don't need threads at all; this is exactly what asyncio is made for. Just run the two coroutines (SNMP and proxy) in the same loop and that's it.
You have to make both of them available to the event loop BEFORE calling loop.run_forever(). Something like this:
import asyncio

async def snmp():
    print("Doing the snmp thing")
    await asyncio.sleep(1)

async def proxy():
    print("Doing the proxy thing")
    await asyncio.sleep(2)

async def main():
    while True:
        await snmp()
        await proxy()

loop = asyncio.get_event_loop()
loop.create_task(main())
loop.run_forever()
I don't know the structure of your code, and the different modules might have their own infinite loop or something; in that case you can run something like this:
import asyncio

async def snmp():
    while True:
        print("Doing the snmp thing")
        await asyncio.sleep(1)

async def proxy():
    while True:
        print("Doing the proxy thing")
        await asyncio.sleep(2)

loop = asyncio.get_event_loop()
loop.create_task(snmp())
loop.create_task(proxy())
loop.run_forever()
Remember, both snmp and proxy need to be coroutines (async def) written in an asyncio-aware manner; asyncio will not make simple blocking Python functions suddenly "async".
In your specific case, I suspect you are a little bit confused (no offense!), because well-written async modules will never block each other in the same loop. If that is the case, you don't need asyncio at all: simply run one of them in a separate thread without dealing with any asyncio stuff.
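If one of the pieces really is a plain blocking function, a common bridge (a minimal sketch; the blocking_poll name is illustrative, not from the question) is to push it onto a thread pool with run_in_executor instead of creating a second loop:

import asyncio
import time

def blocking_poll():
    # Illustrative blocking function; asyncio cannot suspend this.
    time.sleep(1)
    return "polled"

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool so the
    # event loop stays free for other coroutines.
    result = await loop.run_in_executor(None, blocking_poll)
    print(result)

asyncio.run(main())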
Answering my own question to post my solution:
What I ended up doing was creating a thread, and a new event loop inside that thread, for the polling module, so now every module runs in a different loop. It is not a perfect solution, but it is the only one that made sense to me (I wanted to avoid threads, but since it is only one...). Example:
import asyncio
import threading

def worker():
    second_loop = asyncio.new_event_loop()
    execute_polling_coroutines_forever(second_loop)
    return

threads = []
t = threading.Thread(target=worker)
threads.append(t)
t.start()

loop = asyncio.get_event_loop()
execute_proxy_coroutines_forever(loop)
Asyncio requires that every loop runs its coroutines in the same thread. Using this method you have one event loop for each thread, and they are totally independent: every loop executes its coroutines on its own thread, so that is not a problem.
As I said, it's probably not the best solution, but it worked for me.
Though in most cases you don't need multiple event loops when using asyncio, people shouldn't assume their assumptions apply to all cases, or just give you what they think is better without directly targeting your original question.
Here's a demo of how to create new event loops in threads. Compared to your own answer, calling set_event_loop means you don't have to pass the loop object around every time you do an asyncio-based operation.
import asyncio
import threading

async def print_env_info_async():
    # As you can see, each worker thread has its own asyncio event loop.
    print(f"Thread: {threading.get_ident()}, event loop: {id(asyncio.get_running_loop())}")

async def work():
    while True:
        await print_env_info_async()
        await asyncio.sleep(1)

def worker():
    new_loop = asyncio.new_event_loop()
    asyncio.set_event_loop(new_loop)
    new_loop.run_until_complete(work())
    return

number_of_threads = 2
for _ in range(number_of_threads):
    threading.Thread(target=worker).start()
Ideally, you'll want to put heavy work in worker threads and keep the asyncio thread as light as possible. Think of the asyncio thread as the GUI thread of a desktop or mobile app: you don't want to block it. Worker threads are usually very busy, which is one of the reasons you don't want to create separate asyncio event loops in worker threads. Here's an example of how to manage heavy worker threads with a single asyncio event loop, which is the most common practice for this kind of use case:
import asyncio
import concurrent.futures
import threading
import time

def print_env_info(source_thread_id):
    # This will be called in the main thread, where the default asyncio event loop lives.
    print(f"Thread: {threading.get_ident()}, event loop: {id(asyncio.get_running_loop())}, source thread: {source_thread_id}")

def work(event_loop):
    while True:
        # The following line would fail because there's no asyncio event loop running in this worker thread.
        # print(f"Thread: {threading.get_ident()}, event loop: {id(asyncio.get_running_loop())}")
        event_loop.call_soon_threadsafe(print_env_info, threading.get_ident())
        time.sleep(1)

async def worker():
    print(f"Thread: {threading.get_ident()}, event loop: {id(asyncio.get_running_loop())}")
    loop = asyncio.get_running_loop()
    number_of_threads = 2
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=number_of_threads)
    for _ in range(number_of_threads):
        asyncio.ensure_future(loop.run_in_executor(executor, work, loop))

loop = asyncio.get_event_loop()
loop.create_task(worker())
loop.run_forever()
I know it's an old thread, but it might still be helpful for someone.
I'm not great with asyncio, but here is a slightly improved version of @kissgyorgy's answer. Instead of awaiting each coroutine separately, we create a list of tasks and fire them together (Python 3.9):
import asyncio

async def snmp():
    while True:
        print("Doing the snmp thing")
        await asyncio.sleep(0.4)

async def proxy():
    while True:
        print("Doing the proxy thing")
        await asyncio.sleep(2)

async def main():
    tasks = []
    tasks.append(asyncio.create_task(snmp()))
    tasks.append(asyncio.create_task(proxy()))
    await asyncio.gather(*tasks)

asyncio.run(main())
Result:
Doing the snmp thing
Doing the proxy thing
Doing the snmp thing
Doing the snmp thing
Doing the snmp thing
Doing the snmp thing
Doing the proxy thing
The asyncio event loop runs in a single thread and will not run anything in parallel; that is how it is designed. The closest thing I can think of is using asyncio.wait.
from asyncio import coroutine
import asyncio

@coroutine
def some_work(x, y):
    print("Going to do some heavy work")
    yield from asyncio.sleep(1.0)
    print(x + y)

@coroutine
def some_other_work(x, y):
    print("Going to do some other heavy work")
    yield from asyncio.sleep(3.0)
    print(x * y)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.wait([asyncio.ensure_future(some_work(3, 4)),
                                          asyncio.ensure_future(some_other_work(3, 4))]))
    loop.close()
An alternate way is to use asyncio.gather(): it returns a single future aggregating the results of the given futures.
tasks = [asyncio.Task(some_work(3, 4)), asyncio.Task(some_other_work(3, 4))]
loop.run_until_complete(asyncio.gather(*tasks))
If the proxy server is running all the time, it cannot switch back and forth. The proxy listens for client requests and handles them asynchronously, but the other task cannot execute, because this one is serving forever.
If the proxy is a coroutine and is starving the SNMP poller (never awaits), aren't the client requests being starved as well?
Every coroutine will run forever; they will not end.
This should be fine, as long as they await/yield from. An echo server will also run forever; that doesn't mean you can't run several servers (on different ports, though) in the same loop.
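For instance, a minimal sketch of two forever-running servers sharing one loop (the ports and the echo handler are illustrative):

import asyncio

async def handle_echo(reader, writer):
    # Echo one line back and close; the awaits keep the loop free.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    server_a = await asyncio.start_server(handle_echo, "127.0.0.1", 8001)
    server_b = await asyncio.start_server(handle_echo, "127.0.0.1", 8002)
    # Both servers run forever, side by side, in the same loop.
    async with server_a, server_b:
        await asyncio.gather(server_a.serve_forever(), server_b.serve_forever())

asyncio.run(main())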
