PEP 0492 adds the async keyword to Python 3.5.
How does Python benefit from the use of this keyword? The example given for a coroutine is
async def read_data(db):
    data = await db.fetch('SELECT ...')
According to the docs this achieves
suspend[ing] execution of read_data coroutine until db.fetch awaitable completes and returns the result data.
Does this async keyword actually involve creation of new threads or perhaps the use of an existing reserved async thread?
In the event that async does use a reserved thread, is it a single shared thread, or does each coroutine get its own?
No, co-routines do not involve any kind of threads. Co-routines allow for cooperative multi-tasking, in that each co-routine yields control voluntarily. Threads, on the other hand, are switched by the operating system at arbitrary points.
Up to Python 3.4, it was possible to write co-routines using generators; by using yield or yield from expressions in a function body you create a generator object instead, where code is only executed when you iterate over the generator. Together with additional event loop libraries (such as asyncio) you could write co-routines that would signal to an event loop that they were going to be busy (waiting for I/O perhaps) and that another co-routine could be run in the meantime:
import asyncio
import datetime

@asyncio.coroutine
def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        yield from asyncio.sleep(1)
Every time the above code advances to the yield from asyncio.sleep(1) line, the event loop is free to run a different co-routine, because this routine is not going to do anything for the next second anyway.
Because generators can be used for all sorts of tasks, not just co-routines, and because writing a co-routine using generator syntax can be confusing to new-comers, the PEP introduces new syntax that makes it clearer that you are writing a co-routine.
With the PEP implemented, the above sample could be written instead as:
async def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)
The resulting coroutine object still needs an event loop to drive it; the event loop resumes each co-routine in turn, running those co-routines that are not currently waiting for something to complete.
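For example, a minimal sketch of driving the coroutine above, assuming the pre-3.10 get_event_loop API that the surrounding examples use:

import asyncio

loop = asyncio.get_event_loop()
# run the coroutine to completion; the loop resumes it each time its await finishes
loop.run_until_complete(display_date(loop))
loop.close()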
A further advantage of native support is that additional syntax can be introduced for asynchronous context managers and iterators. Entering and exiting a context manager, or looping over an iterator, then become more points in your co-routine that signal that other code can run instead, because something is waiting again.
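As an illustration, here is a small self-contained sketch; the AsyncTicker class and its delays are invented for this example and are not part of the PEP:

import asyncio

class AsyncTicker:
    """Both an async context manager and an async iterator (illustrative only)."""
    def __init__(self, count, delay):
        self.count, self.delay = count, delay

    async def __aenter__(self):
        await asyncio.sleep(self.delay)   # suspension point on enter
        return self

    async def __aexit__(self, *exc):
        await asyncio.sleep(self.delay)   # suspension point on exit
        return False

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.count == 0:
            raise StopAsyncIteration
        self.count -= 1
        await asyncio.sleep(self.delay)   # suspension point per item
        return self.count

async def main():
    async with AsyncTicker(3, 0.1) as ticker:
        async for value in ticker:
            print(value)

asyncio.run(main())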
I'm trying to understand how asyncio works. For I/O operations I understand that when await is reached, a Future object is registered with the event loop, and epoll is then called to get the sockets belonging to Future objects that are ready to give us data. After that, the registered callback is run and the function's execution resumes.
But the thing I can't understand is what happens if we use await for something that is not an I/O operation. How does the event loop understand that the task is complete? Does it create a socket for it, or use another kind of loop? Does it use epoll? Or is it not added to the loop at all and simply driven as a generator?
Here is an example:
import asyncio

async def test():
    return 10

async def my_coro(delay):
    loop = asyncio.get_running_loop()
    end_time = loop.time() + delay
    while True:
        print("Blocking...")
        await test()
        if loop.time() > end_time:
            print("Done.")
            break

async def main():
    await my_coro(3.0)

asyncio.run(main())
await doesn't automatically yield to the event loop; that happens only when an async function (anywhere in the chain of awaits) requests suspension, typically because the I/O or timeout it is waiting for is not yet ready.
In your example the event loop is never returned to, which you can easily verify by moving the "Blocking" print before the while loop and changing main to await asyncio.gather(my_coro(3.0), my_coro(3.0)). What you'll observe is that the coroutines are executed in series ("blocking" followed by "done", all repeated twice), not in parallel ("blocking" followed by another "blocking" and then twice "done"). The reason is that there is simply no opportunity for a context switch: each my_coro executes in one go, as if it were an ordinary function, because none of its awaits ever chooses to suspend.
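To make that concrete, here is a hedged variation of the example above: if test() awaits something that actually suspends (asyncio.sleep(0) here, purely for illustration), the event loop does get a chance to interleave the two coroutines:

import asyncio

async def test():
    # unlike the bare `return 10` version, this await actually suspends,
    # handing control back to the event loop
    await asyncio.sleep(0)
    return 10

async def my_coro(delay):
    loop = asyncio.get_running_loop()
    end_time = loop.time() + delay
    print("Blocking...")                 # moved before the loop, as suggested above
    while loop.time() <= end_time:
        await test()
    print("Done.")

async def main():
    # the two coroutines now interleave: both "Blocking..." lines appear
    # before either "Done."
    await asyncio.gather(my_coro(1.0), my_coro(1.0))

asyncio.run(main())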
Let's say I have a C++ function result_type compute(input_type input), which I have made available to Python using Cython. My Python code executes multiple computations like this:
def compute_total_result():
    inputs = ...
    total_result = ...
    for input in inputs:
        result = compute_python_wrapper(input)
        update_total_result(total_result)
    return total_result
Since the computation takes a long time, I have implemented a C++ thread pool (like this) and written a function std::future<result_type> compute_threaded(input_type input), which returns a future that becomes ready as soon as the thread pool is done executing.
What I would like to do is to use this C++ function in Python as well. A simple way to do this would be to wrap the std::future<result_type>, including its get() function, and wait for all results like this:
def compute_total_results_parallel():
    inputs = ...
    total_result = ...
    futures = []
    for input in inputs:
        futures.append(compute_threaded_python_wrapper(input))
    for future in futures:
        update_total_result(future.get())
    return total_result
I suppose this works well enough in this case, but it becomes very complicated very fast, because I have to pass futures around.
However, I think that conceptually, waiting for these C++ results is no different from waiting for file or network I/O.
To facilitate I/O operations, the Python developers introduced the async/await keywords. If my compute_threaded_python_wrapper would be part of asyncio, I could simply rewrite it as
async def compute_total_results_async():
    inputs = ...
    total_result = ...
    for input in inputs:
        result = await compute_threaded_python_wrapper(input)
        update_total_result(total_result)
    return total_result
And I could execute the whole code via result = asyncio.run(compute_total_results_async()).
There are a lot of tutorials on async programming in Python, but most of them deal with coroutines whose bedrock seems to be some call into the asyncio package, mostly asyncio.sleep(delay) as a proxy for I/O.
My question is: (how) can I implement coroutines in Python, enabling Python to await the wrapped future object? (There is some mention of an __await__ method returning an iterator.)
First, an inaccuracy in the question needs to be corrected:
If my compute_threaded_python_wrapper would be part of asyncio, I could simply rewrite it as [...]
The rewrite is incorrect: await means "wait until the computation finishes", so the loop as written would execute the code sequentially. A rewrite that actually runs the tasks in parallel would be something like:
# a direct translation of the "parallel" version
async def compute_total_results_async():
    inputs = ...
    total_result = ...
    tasks = []
    # first spawn all the tasks
    for input in inputs:
        tasks.append(
            asyncio.create_task(compute_threaded_python_wrapper(input))
        )
    # and then await them
    for task in tasks:
        update_total_result(await task)
    return total_result
This spawn-all-await-all pattern is so ubiquitous that asyncio provides a helper function, asyncio.gather(), which makes it much shorter, especially when combined with a list comprehension:
# a more idiomatic version
async def compute_total_results_async():
    inputs = ...
    total_result = ...
    results = await asyncio.gather(
        *[compute_threaded_python_wrapper(input) for input in inputs]
    )
    for result in results:
        update_total_result(result)
    return total_result
With that out of the way, we can proceed to the main question:
My question is: (how) can I implement coroutines in Python, enabling Python to await the wrapped future object? (There is some mention of an __await__ method returning an iterator.)
Yes, awaitable objects are implemented using iterators that yield to indicate suspension. But that is way too low-level a tool for what you actually need. You don't need just any awaitable, but one that works with the asyncio event loop, which has specific expectations of the underlying iterator. You also need a mechanism to resume the awaitable when the result is ready, and for that you again depend on asyncio.
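For reference, a minimal (and deliberately not asyncio-specific) sketch of the __await__ protocol mentioned in the question might look like this; the Immediate class is invented here and never suspends, so it needs no event-loop cooperation:

import asyncio

class Immediate:
    """A trivial awaitable: __await__ returns a generator that finishes
    immediately, so awaiting it produces the value without ever suspending."""
    def __init__(self, value):
        self.value = value

    def __await__(self):
        if False:          # makes this method a generator without ever yielding
            yield
        return self.value

async def main():
    print(await Immediate(42))   # prints 42

asyncio.run(main())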
Asyncio already provides awaitable objects that can be externally assigned a value: futures. An asyncio future represents an async value that will become available at some point in the future. They are related to, but not semantically equivalent to, C++ futures, and should not be confused with the multi-threaded futures from the concurrent.futures stdlib module.
To create an awaitable object that is activated by something that happens in another thread, you need to create a future, and then start your off-thread task, instructing it to mark the future as completed when it finishes execution. Since asyncio futures are not thread-safe, this must be done using the call_soon_threadsafe event loop method provided by asyncio for such situations. In Python it would be done like this:
def run_async():
    loop = asyncio.get_event_loop()
    future = loop.create_future()

    def on_done(result):
        # when done, notify the future in a thread-safe manner
        loop.call_soon_threadsafe(future.set_result, result)

    # start the worker in a thread owned by the pool
    pool.submit(_worker, on_done)

    # returning a future makes run_async() awaitable, and
    # passable to asyncio.gather() etc.
    return future

def _worker(on_done):
    # this runs in a different thread
    ...  # processing goes here
    result = ...
    on_done(result)
In your case, the worker would presumably be implemented in Cython combined with C++.
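The calling side would then presumably await many of these futures at once; a rough usage sketch, with the inputs plumbing elided as in the question:

import asyncio

async def compute_total_results_async(inputs):
    # each run_async() call (from above) schedules one off-thread computation
    # and returns an asyncio future, so gather() can await them all at once
    results = await asyncio.gather(*[run_async() for _ in inputs])
    total_result = ...
    for result in results:
        update_total_result(result)
    return total_result

# total = asyncio.run(compute_total_results_async(inputs))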
I'm studying Python's asyncio library and I'm having a hard time understanding how some Python functions work.
For example:
import asyncio
async def coroutine():
    print('Initialize coroutine')
    await asyncio.sleep(1)
    print('Done !')

loop = asyncio.get_event_loop()
asyncio.ensure_future(coroutine())
# loop.create_task(coroutine())   # the alternative mentioned below
loop.run_forever()
In the code above, if I make use of ensure_future it automatically starts executing my task, and so does create_task; in the case of create_task I at least understand that it uses the loop on which loop.create_task was called. My question is whether using ensure_future already starts executing the task, and if it does, why I sometimes see code like this:
import asyncio
async def coroutine():
    print('Initialize coroutine')
    await asyncio.sleep(1)
    print('Done !')

loop = asyncio.get_event_loop()
task = asyncio.ensure_future(coroutine())
loop.run_until_complete(asyncio.wait([task]))
My question is whether using ensure_future already starts executing the task, and if it does
Yes, ensure_future starts executing the coroutine as soon as the event loop is resumed, even if no one awaits the returned future.
This feature of ensure_future follows from the implementation, but it also makes sense conceptually.
On the implementation level, it is a consequence of ensure_future internally calling create_task when given a coroutine object. In other words, for coroutines there is no difference between loop.create_task and asyncio.ensure_future. Otherwise the difference is that ensure_future is more general: it takes any awaitable object and converts it to a Future. If it receives a coroutine object, it calls create_task to perform the appropriate conversion, and the new Task (itself a subclass of Future) is returned. If the argument to ensure_future is already a Future or its subclass, it is returned unchanged.
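A short sketch illustrating both behaviours (the compute coroutine is made up for this example):

import asyncio

async def compute():
    await asyncio.sleep(0.1)
    return 42

async def main():
    # a coroutine object is wrapped in a Task and scheduled right away
    task = asyncio.ensure_future(compute())
    # a Future (or Task) passed to ensure_future is returned unchanged
    assert asyncio.ensure_future(task) is task
    print(await task)   # -> 42

asyncio.run(main())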
On the conceptual level, ensure_future returns an object that represents the "future" value of a computation, i.e. it encapsulates the result of execution that happens in the background. In the case of a coroutine, running in the background means being scheduled with the event loop. Without ensuring the coroutine's execution, returning a Future would be misleading, because that future would never arrive.
I'm getting the flow of using asyncio in Python 3.5, but I haven't seen a description of what things I should be awaiting, what things I should not, or where the difference would be negligible. Do I just have to use my best judgement, as in "this is an I/O operation and thus should be awaited"?
By default all your code is synchronous. You can make it asynchronous by defining functions with async def and "calling" these functions with await. A more correct question would be "When should I write asynchronous code instead of synchronous?". The answer is "When you can benefit from it". In cases where you work with I/O operations, as you noted, you will usually benefit:
# Synchronous way:
download(url1)  # takes 5 sec.
download(url2)  # takes 5 sec.
# Total time: 10 sec.

# Asynchronous way:
await asyncio.gather(
    async_download(url1),  # takes 5 sec.
    async_download(url2),  # takes 5 sec.
)
# Total time: only 5 sec. (+ little overhead for using asyncio)
Of course, if you create a function that uses asynchronous code, this function should be asynchronous too (defined with async def). But any asynchronous function can freely use synchronous code. It makes no sense to turn synchronous code into asynchronous code without some reason:
# extract_links(url) should be async because it uses the async func async_download() inside
async def extract_links(url):
    # async_download() was made async to benefit from I/O
    html = await async_download(url)
    # parse() doesn't do any I/O, so there's no sense in making it async
    links = parse(html)
    return links
One very important thing is that any long-running synchronous operation (> 50 ms, for example; it's hard to say exactly) will freeze all your asynchronous operations for that time:
async def extract_links(url):
    data = await download(url)
    links = parse(data)
    # if search_in_very_big_file() takes much time to process,
    # all your running async funcs (somewhere else in code) will be frozen;
    # you need to avoid this situation
    links_found = search_in_very_big_file(links)
You can avoid this by calling long-running synchronous functions in a separate process (and awaiting the result):
from concurrent.futures import ProcessPoolExecutor

executor = ProcessPoolExecutor(2)

async def extract_links(url):
    data = await download(url)
    links = parse(data)
    # Now your main process can handle other async functions while the separate process is running
    links_found = await loop.run_in_executor(executor, search_in_very_big_file, links)
One more example: when you need to use requests with asyncio. requests.get is just a synchronous, long-running function, which you shouldn't call inside async code (again, to avoid freezing). But it runs long because of I/O, not because of long calculations. In that case, you can use ThreadPoolExecutor instead of ProcessPoolExecutor to avoid some multiprocessing overhead:
import requests
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(2)

async def download(url):
    response = await loop.run_in_executor(executor, requests.get, url)
    return response.text
You do not have much freedom. If you need to call a function, you need to find out whether it is an ordinary function or a coroutine. You must use the await keyword if and only if the function you are calling is a coroutine.
If async functions are involved, there should be an "event loop" which orchestrates these async functions. Strictly speaking it's not necessary; you can "manually" run an async method by sending values to it, but you probably don't want to do that. The event loop keeps track of not-yet-finished coroutines and chooses the next one to continue running. The asyncio module provides an implementation of an event loop, but it is not the only possible implementation.
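For illustration, a minimal sketch of driving a coroutine "manually", without any event loop; the names manual_sleep and work are made up:

import types

@types.coroutine
def manual_sleep():
    # yield a plain object; a real event loop would interpret it as a request
    yield "pretend to sleep"

async def work():
    await manual_sleep()
    return 42

coro = work()
print(coro.send(None))       # -> "pretend to sleep": the coroutine yielded to us
try:
    coro.send(None)          # resume it; it runs to completion
except StopIteration as exc:
    print(exc.value)         # -> 42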
Consider these two lines of code:
x = get_x()
do_something_else()
and
x = await aget_x()
do_something_else()
The semantics are exactly the same: call a method which produces some value, and when the value is ready, assign it to the variable x and do something else. In both cases the do_something_else function will be called only after the previous line of code has finished. It doesn't even mean that control will be yielded to the event loop before, after, or during the execution of the asynchronous aget_x method.
Still there are some differences:
the second snippet can appear only inside another async function
the aget_x function is not an ordinary function but a coroutine (either declared with the async keyword or decorated as a coroutine)
aget_x is able to "communicate" with the event loop, that is, to yield some objects to it. The event loop should be able to interpret these objects as requests to perform some operations (e.g. to send a network request and wait for the response, or just to suspend this coroutine for n seconds). An ordinary get_x function is not able to communicate with the event loop.