Is it possible to start a function like this
async def foo():
    while True:
        print("Hello!")
without importing the asyncio package (and getting the event loop)?
I am looking for a principle similar to Go's goroutines, where one can launch a coroutine with only a go statement.
Edit: The reason why I'm not importing the asyncio package is simply that I think it should be possible to launch a coroutine without an explicit event loop. I don't understand why async def and similar statements are part of the core language (even part of the syntax) while the way to launch the coroutines they create is available only through a package.
Of course it is possible to start an async function without explicitly using asyncio. After all, asyncio is written in Python, so all it does, you can do too (though sometimes you might need other modules like selectors or threading if you intend to wait for external events concurrently, or to execute some other code in parallel).
In this case, since your function has no await points inside, it just needs a single push to get going. You push a coroutine by sending None into it.
>>> foo().send(None)
Hello!
Hello!
...
Of course, if your function (coroutine) had yield expressions inside, it would suspend execution at each yield point, and you would need to push additional values into it (by coro.send(value) or next(gen)) - but you already know that if you know how generators work.
import types

@types.coroutine
def bar():
    to_print = yield 'What should I print?'
    print('Result is', to_print)
    to_return = yield 'And what should I return?'
    return to_return
>>> b = bar()
>>> next(b)
'What should I print?'
>>> b.send('Whatever you want')
Result is Whatever you want
'And what should I return?'
>>> b.send(85)
Traceback...
StopIteration: 85
Now, if your function had await expressions inside, it would suspend at evaluating each of them.
async def baz():
    first_bar, second_bar = bar(), bar()
    print('Sum of two bars is', await first_bar + await second_bar)
    return 'nothing important'
>>> t = baz()
>>> t.send(None)
'What should I print?'
>>> t.send('something')
Result is something
'And what should I return?'
>>> t.send(35)
'What should I print?'
>>> t.send('something else')
Result is something else
'And what should I return?'
>>> t.send(21)
Sum of two bars is 56
Traceback...
StopIteration: nothing important
Now, all these .sends are starting to get tedious. It would be nice to have them semiautomatically generated.
import random, string

def run_until_complete(t):
    prompt = t.send(None)
    try:
        while True:
            if prompt == 'What should I print?':
                prompt = t.send(random.choice(string.ascii_uppercase))
            elif prompt == 'And what should I return?':
                prompt = t.send(random.randint(10, 50))
            else:
                raise ValueError(prompt)
    except StopIteration as exc:
        print(t.__name__, 'returned', exc.value)
        t.close()
>>> run_until_complete(baz())
Result is B
Result is M
Sum of two bars is 56
baz returned nothing important
Congratulations, you just wrote your first event loop! (Didn't expect it to happen, did you?;) Of course, it is horribly primitive: it only knows how to handle two types of prompts, it doesn't enable t to spawn additional coroutines that run concurrently with it, and it fakes events with a random generator.
(In fact, if you want to get philosophical: what we did manually above could also be called an event loop: the Python REPL was printing prompts to a console window, and it was relying on you to provide events by typing t.send(whatever) into it.:)
asyncio is just an immensely generalized variant of the above: prompts are replaced by Futures, multiple coroutines are kept in queues so each of them eventually gets its turn, and events are much richer, including network/socket communication, filesystem reads/writes, signal handling, thread/process side-execution, and so on. But the basic idea is still the same: you grab some coroutines and juggle them in the air, routing the Futures from one to another, until they all raise StopIteration. When all coroutines have nothing to do, you go to the external world and grab some additional events for them to chew on, and continue.
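As a hedged illustration of the first step of that generalization (my own toy code, not asyncio's), here is the same prompt-answering logic as run_until_complete above, but with a queue so several coroutines are juggled round-robin:
import random, string
from collections import deque

def toy_loop(*coros):
    # Pair each coroutine with the value to send into it next
    # (None primes a coroutine that hasn't started yet).
    ready = deque((coro, None) for coro in coros)
    while ready:
        coro, value = ready.popleft()
        try:
            prompt = coro.send(value)
        except StopIteration as exc:
            print(coro.__name__, 'returned', exc.value)
            continue
        # "Answer" the prompt as run_until_complete did, but requeue the
        # coroutine at the back so the others get a turn in between.
        if prompt == 'What should I print?':
            ready.append((coro, random.choice(string.ascii_uppercase)))
        elif prompt == 'And what should I return?':
            ready.append((coro, random.randint(10, 50)))
        else:
            raise ValueError(prompt)

>>> toy_loop(baz(), baz())  # two baz() instances now make progress interleaved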
I hope it's all much clearer now. :-)
Python coroutines are syntactic sugar for generators, with some added restrictions on their behavior (so that their purpose is explicitly different and the two don't mix). You can't do:
next(foo())
TypeError: 'coroutine' object is not an iterator
because it's disabled explicitly. However you can do:
foo().send(None)
Hello
Hello
Hello
...
Which is equivalent to calling next() on a generator.
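If you want that equivalence packaged up, here is a minimal sketch of a generic driver (an illustrative helper of my own, not a library function); note that calling it on foo() would never return, since foo never finishes:
def drive(coro):
    # Repeatedly push None into the coroutine, exactly as next() does
    # for a generator, until StopIteration delivers the return value.
    try:
        while True:
            coro.send(None)
    except StopIteration as exc:
        return exc.value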
Coroutines should be able to:
run
yield control to the caller (optionally producing some intermediate results)
get some information from the caller and resume
So, here is a small demo of async functions (aka native coroutines) which do all of that without using asyncio or any other module/framework that provides an event loop. At least Python 3.5 is required. See the comments inside the code.
#!/usr/bin/env python
import types

# two simple async functions
async def outer_af(x):
    print("- start outer_af({})".format(x))
    val = await inner_af(x)  # Normal way to call a native coroutine.
                             # Without the `await` keyword it wouldn't
                             # actually start.
    print("- inner_af result: {}".format(val))
    return "outer_af_result"

async def inner_af(x):
    print("-- start inner_af({})".format(x))
    val = await receiver()  # 'await' can be used not only with native
                            # coroutines, but also with 'generator-based'
                            # coroutines!
    print("-- received val {}".format(val))
    return "inner_af_result"

# To yield execution control to the caller it's necessary to use a
# 'generator-based' coroutine: one created with the types.coroutine
# decorator.
@types.coroutine
def receiver():
    print("--- start receiver")
    # suspend execution / yield control / communicate with caller
    r = yield "value request"
    print("--- receiver received {}".format(r))
    return r

def main():
    # We want to call the 'outer_af' async function (aka native coroutine).
    # The 'await' keyword can't be used here!
    # It can only be used inside another async function.
    print("*** test started")
    c = outer_af(42)  # Just prepare the coroutine object. It's not running yet.
    print("*** c is {}".format(c))
    # To start coroutine execution, call the 'send' method.
    w = c.send(None)  # The first call must have the argument None.
    # Execution of the coroutine is now suspended. The execution point is
    # on the 'yield' statement inside the 'receiver' coroutine.
    # It is waiting for another 'send' call to continue.
    # The yielded value can give us a hint about what exactly the
    # coroutine expects to receive from us.
    print("*** w = {}".format(w))
    # After the next 'send' the coroutine's execution will finish.
    # Even though the native coroutine object is not iterable, it will
    # throw a StopIteration exception on exit!
    try:
        w = c.send(25)
        # w would not get any value here. This line is unreachable.
    except StopIteration as e:
        print("*** outer_af finished. It returned: {}".format(e.value))

if __name__ == '__main__':
    main()
Output looks like:
*** test started
*** c is <coroutine object outer_af at 0x7f4879188620>
- start outer_af(42)
-- start inner_af(42)
--- start receiver
*** w = value request
--- receiver received 25
-- received val 25
- inner_af result: inner_af_result
*** outer_af finished. It returned: outer_af_result
Additional comment.
It looks like it's not possible to yield control from inside a native coroutine: yield is not permitted inside async functions! So it is necessary to import types and use the coroutine decorator, which does some black magic. Frankly speaking, I do not understand why yield is prohibited, so that a mixture of native and generator-based coroutines is required.
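For completeness: there is one other way to suspend from "inside" native-coroutine code without types.coroutine, namely a plain class whose __await__ method is a generator. Here is a hedged sketch of a drop-in replacement for receiver() above (the class name is mine):
class Receiver:
    def __await__(self):
        # `await Receiver()` iterates this generator, so the yield below
        # suspends the awaiting native coroutine just like receiver() does.
        print("--- start Receiver")
        r = yield "value request"
        print("--- Receiver received {}".format(r))
        return r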
No, that is not possible. You need an event loop. Take a look at what happens if you just call foo():
>>> f = foo()
>>> print(f)
<coroutine object foo at 0x7f6e13edac50>
So you get a coroutine object; nothing gets executed right now! Only by passing it to an event loop does it get executed. You can use asyncio or another event loop, like Curio.
Related
I would like to know if the following is a safe use of futures to bridge
callback-based code to asynchronous code.
I hope the following illustrates what I mean.
I would like to communicate between two coroutines. One coroutine is mainly
running a library (not my code, represented by b method below), which provides
a synchronous callback which is called whenever an event happens. (Note that the library is using asyncio, but this particular API it provides uses a synchronous callback). I would like
to get data from that callback into a second coroutine (my code, represented by
a method below).
I don't care about missing messages (X objects). callback is called
frequently, much more often than we actually need X objects. It is no problem to
just wait for the next if we miss it. And we don't always need the latest X,
there are only certain situations when we need to refresh it (get a new one),
the rest of the time we just ignore them.
Can I share a future across coroutines like this, or do I need some kind of locking around setting/accessing self.fut? I can't actually use an asyncio.Lock in callback (a synchronous function) because I would need to use await lock.acquire() or async with lock: .... As mentioned above, I can't change this API to make callback an async function.
Also, are there any better alternative approaches? I could use an unlimited asyncio.Queue and use put_nowait synchronously from
the callback function. But it seems like overkill to use a Queue for the reasons
above (I only care about the latest, I don't care about missing messages, I
ignore most of them). The queue would (should) only ever have at most 1 item.
import asyncio
from typing import Optional

class C:
    # X is the (placeholder) type of the objects the library produces
    fut: Optional[asyncio.Future[X]]

    def __init__(self):
        self.fut = None

    async def a(self):
        while True:
            ...
            # Some condition comes up where we need an X.
            # We signal to b() that we want an X object by setting
            # self.fut to a Future (making it not None).
            fut = asyncio.get_running_loop().create_future()
            self.fut = fut
            x = await fut
            ...
            # (use the X object)

    async def b(self):
        while True:
            ...
            # Calls the synchronous callback when it has an X object ready
            self.callback(x)

    def callback(self, x: X):
        ...
        # callback does other things as well as sending X objects over the Future.
        # But if the future is not None, it will use it to send the X object.
        fut = self.fut
        # Reset it to None because we only want one X object
        self.fut = None
        if fut is not None:
            fut.set_result(x)

async def main():
    c = C()
    tasks = asyncio.create_task(c.a()), asyncio.create_task(c.b())
    for coro in asyncio.as_completed(tasks):
        await coro

if __name__ == '__main__':
    asyncio.run(main())
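For comparison, here is a hedged sketch of the Queue-based alternative mentioned above (the class name and details are mine, not a recommendation): with maxsize=1 and put_nowait, the callback never blocks and surplus X objects are simply dropped:
import asyncio

class CWithQueue:
    def __init__(self):
        # At most one X waits here; extras are dropped by the callback.
        self.q: asyncio.Queue = asyncio.Queue(maxsize=1)

    async def a(self):
        while True:
            ...
            x = await self.q.get()  # wait for the next X when we need one
            ...

    def callback(self, x):
        try:
            self.q.put_nowait(x)
        except asyncio.QueueFull:
            pass  # an unconsumed X is already waiting; skip this one
One behavioral difference from the Future version: an X put in the queue before it was wanted can linger there, so a() may receive a slightly stale value unless it drains the queue first.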
My problem: starting a threaded function and, asynchronously, acting upon the returned value.
I know how to:
start a threaded function with threading. The problem: there is no simple way to get the result back
get the return value from a threaded function. The problem: it is synchronous
What I would like to achieve is similar to JavaScript's
aFunctionThatReturnsAPromise()
.then(r => {// do something with the returned value when it is available})
// the code here runs synchronously right after aFunctionThatReturnsAPromise is started
In pseudo-Python, I am thinking of something like this (modifying the example from the answer in the linked thread):
import time
import concurrent.futures

def foo(bar):
    print('hello {}'.format(bar))
    time.sleep(10)
    return 'foo'

def the_callback(something):
    print(f"the thread returned {something}")

with concurrent.futures.ThreadPoolExecutor() as executor:
    # submit the threaded call ...
    future = executor.submit(foo, 'world!')
    # ... and set a callback
    future.callback(the_callback, future.result())  # ← this is the made-up part
    # or, all in one: future = executor.submit(foo, 'world!', callback=the_callback)
    # (in which case the parameters would probably need to be passed the JS way)
    # the threaded call runs at its own pace
    # the following line is run right after the call above
    print("after submit")
    # after some time (~10 seconds) the callback is finished (and has printed what was passed to it)
    # there should probably be some kind of join() so that the script waits until the thread is done
I want to stay, if possible, with threads (which do things at their own pace, and I do not care when they are done) rather than asyncio (where I have to explicitly await things in a single thread).
You can use Future.add_done_callback from concurrent.futures, as shown below. The callback must be a callable taking a single argument, the Future instance, and it must get the result from that future as shown. The example also attaches some additional information to each future, which the callback function uses when printing its messages.
Note that the callback functions will be called concurrently, so the usual mutex precautions should be taken if there are shared resources involved. That hasn't been done in the example below, so sometimes the printed output will be jumbled.
from concurrent import futures
import random
import time

def foo(bar, delay):
    print(f'hello {bar} - {delay}')
    time.sleep(delay)
    return bar

def the_callback(fn):
    if fn.cancelled():
        print(f'args {fn.args}: canceled')
    elif fn.done():
        error = fn.exception()
        if error:
            print(f'args {fn.args}: caused error {error}')
        else:
            print(f'args {fn.args}: returned: {fn.result()}')

with futures.ThreadPoolExecutor(max_workers=2) as executor:
    for name in ('foo', 'bar', 'bas'):
        delay = random.randint(1, 5)
        f = executor.submit(foo, name, delay)
        f.args = name, delay  # attach the arguments so the callback can report them
        f.add_done_callback(the_callback)

print('fini')
Sample output:
hello foo - 5
hello bar - 3
args ('bar', 3): returned: bar
hello bas - 4
args ('foo', 5): returned: foo
args ('bas', 4): returned: bas
fini
You can use add_done_callback from the concurrent.futures library and modify your example like this:
def the_callback(something):
    print(f"the thread returned {something.result()}")

with concurrent.futures.ThreadPoolExecutor() as executor:
    future = executor.submit(foo, 'world!')
    future.add_done_callback(the_callback)
I was wondering what exactly happens when we await a coroutine in async Python code, for example:
await send_message(string)
(1) send_message is added to the event loop, and the calling coroutine gives up control to the event loop, or
(2) We jump directly into send_message
Most explanations I read point to (1), as they describe the calling coroutine as exiting. But my own experiments suggest (2) is the case: I tried to have a coroutine run after the caller but before the callee and could not achieve this.
Disclaimer: Open to correction (particularly as to details and correct terminology) since I arrived here looking for the answer to this myself. Nevertheless, the research below points to a pretty decisive "main point" conclusion:
Correct OP answer: No, await (per se) does not yield to the event loop, yield yields to the event loop, hence for the case given: "(2) We jump directly into send_message". In particular, certain yield expressions are the only points, at bottom, where async tasks can actually be switched out (in terms of nailing down the precise spot where Python code execution can be suspended).
To be proven and demonstrated: 1) by theory/documentation, 2) by implementation code, 3) by example.
By theory/documentation
PEP 492: Coroutines with async and await syntax
While the PEP is not tied to any specific Event Loop implementation, it is relevant only to the kind of coroutine that uses yield as a signal to the scheduler, indicating that the coroutine will be waiting until an event (such as IO) is completed. ...
[await] uses the yield from implementation [with an extra step of validating its argument.] ...
Any yield from chain of calls ends with a yield. This is a fundamental mechanism of how Futures are implemented. Since, internally, coroutines are a special kind of generators, every await is suspended by a yield somewhere down the chain of await calls (please refer to PEP 3156 for a detailed explanation). ...
Coroutines are based on generators internally, thus they share the implementation. Similarly to generator objects, coroutines have throw(), send() and close() methods. ...
The vision behind existing generator-based coroutines and this proposal is to make it easy for users to see where the code might be suspended.
In context, "easy for users to see where the code might be suspended" seems to refer to the fact that in synchronous code yield is the place where execution can be "suspended" within a routine allowing other code to run, and that principle now extends perfectly to the async context wherein a yield (if its value is not consumed within the running task but is propagated up to the scheduler) is the "signal to the scheduler" to switch out tasks.
More succinctly: where does a generator yield control? At a yield. Coroutines (including those using async and await syntax) are generators, hence likewise.
And it is not merely an analogy, in implementation (see below) the actual mechanism by which a task gets "into" and "out of" coroutines is not anything new, magical, or unique to the async world, but simply by calling the coro's <generator>.send() method. That was (as I understand the text) part of the "vision" behind PEP 492: async and await would provide no novel mechanism for code suspension but just pour async-sugar on Python's already well-beloved and powerful generators.
And
PEP 3156: The "asyncio" module
The loop.slow_callback_duration attribute controls the maximum execution time allowed between two yield points before a slow callback is reported [emphasis in original].
That is, an uninterrupted segment of code (from the async perspective) is demarcated as that between two successive yield points (whose values reached up to the running Task level (via an await/yield from tunnel) without being consumed within it).
And this:
The scheduler has no public interface. You interact with it by using yield from future and yield from task.
Objection: "That says 'yield from', but you're trying to argue that the task can only switch out at a yield itself! yield from and yield are different things, my friend, and yield from itself doesn't suspend code!"
Ans: Not a contradiction. The PEP is saying you interact with the scheduler by using yield from future/task. But as noted above in PEP 492, any chain of yield from (~aka await) ultimately reaches a yield (the "bottom turtle"). In particular (see below), yield from future does in fact yield that same future after some wrapper work, and that yield is the actual "switch out point" where another task takes over. But it is incorrect for your code to directly yield a Future up to the current Task because you would bypass the necessary wrapper.
The objection having been answered, and its practical coding considerations being noted, the point I wish to make from the above quote remains: that a suitable yield in Python async code is ultimately the one thing which, having suspended code execution in the standard way that any other yield would do, now further engages the scheduler to bring about a possible task switch.
By implementation code
asyncio/futures.py
class Future:
    ...
    def __await__(self):
        if not self.done():
            self._asyncio_future_blocking = True
            yield self  # This tells Task to wait for completion.
        if not self.done():
            raise RuntimeError("await wasn't used with future")
        return self.result()  # May raise too.

    __iter__ = __await__  # make compatible with 'yield from'.
Paraphrase: The line yield self is what tells the running task to sit out for now and let other tasks run, coming back to this one sometime after self is done.
Almost all of your awaitables in asyncio world are (multiple layers of) wrappers around a Future. The event loop remains utterly blind to all higher level await awaitable expressions until the code execution trickles down to an await future or yield from future and then (as seen here) calls yield self, which yielded self is then "caught" by none other than the Task under which the present coroutine stack is running thereby signaling to the task to take a break.
Possibly the one and only exception to the above "code suspends at yield self within await future" rule, in an asyncio context, is the potential use of a bare yield such as in asyncio.sleep(0). And since the sleep function is a topic of discourse in the comments of this post, let's look at that.
asyncio/tasks.py
@types.coroutine
def __sleep0():
    """Skip one event loop run cycle.

    This is a private helper for 'asyncio.sleep()', used
    when the 'delay' is set to 0. It uses a bare 'yield'
    expression (which Task.__step knows how to handle)
    instead of creating a Future object.
    """
    yield

async def sleep(delay, result=None, *, loop=None):
    """Coroutine that completes after a given time (in seconds)."""
    if delay <= 0:
        await __sleep0()
        return result
    if loop is None:
        loop = events.get_running_loop()
    else:
        warnings.warn("The loop argument is deprecated since Python 3.8, "
                      "and scheduled for removal in Python 3.10.",
                      DeprecationWarning, stacklevel=2)
    future = loop.create_future()
    h = loop.call_later(delay,
                        futures._set_result_unless_cancelled,
                        future, result)
    try:
        return await future
    finally:
        h.cancel()
Note: We have here the two interesting cases at which control can shift to the scheduler:
(1) The bare yield in __sleep0 (when called via an await).
(2) The yield self immediately within await future.
The crucial line (for our purposes) in asyncio/tasks.py is when Task._step runs its top-level coroutine via result = self._coro.send(None) and recognizes fourish cases:
(1) result = None is generated by the coro (which, again, is a generator): the task "relinquishes control for one event loop iteration".
(2) result = future is generated within the coro, with further magic member field evidence that the future was yielded in a proper manner from out of Future.__iter__ == Future.__await__: the task relinquishes control to the event loop until the future is complete.
(3) A StopIteration is raised by the coro indicating the coroutine completed (i.e. as a generator it exhausted all its yields): the final result of the task (which is itself a Future) is set to the coroutine return value.
(4) Any other Exception occurs: the task's set_exception is set accordingly.
Modulo details, the main point for our concern is that coroutine segments in an asyncio event loop ultimately run via coro.send(). Initial startup and final termination aside, send() proceeds precisely from the last yield value it generated to the next one.
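To pin that dispatch down, here is a hedged, toy rendering of it (my own sketch, not asyncio's real code; reschedule_soon and is_future are hypothetical helpers standing in for the loop's internals):
def toy_step(task, coro, send_value=None):
    try:
        result = coro.send(send_value)
    except StopIteration as exc:
        task.set_result(exc.value)      # case (3): coroutine completed
    except Exception as exc:
        task.set_exception(exc)         # case (4): any other exception
    else:
        if result is None:
            reschedule_soon(task)       # case (1): bare yield, wait one loop cycle
        elif is_future(result):
            # case (2): run this task again once the future is done
            result.add_done_callback(
                lambda fut: toy_step(task, coro, fut.result()))
        else:
            raise RuntimeError('coroutine yielded something illegal: {!r}'.format(result))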
By example
import asyncio
import types

def task_print(s):
    print(f"{asyncio.current_task().get_name()}: {s}")

async def other_task(s):
    task_print(s)

class AwaitableCls:
    def __await__(self):
        task_print(" 'Jumped straight into' another `await`; the act of `await awaitable` *itself* doesn't 'pause' anything")
        yield
        task_print(" We're back to our awaitable object because that other task completed")
        asyncio.create_task(other_task("The event loop gets control when `yield` points (from an iterable coroutine) propagate up to the `current_task` through a suitable chain of `await` or `yield from` statements"))

async def coro():
    task_print(" 'Jumped straight into' coro; the `await` keyword itself does nothing to 'pause' the current_task")
    await AwaitableCls()
    task_print(" 'Jumped straight back into' coro; we have another pending task, but leaving an `__await__` doesn't 'pause' the task any more than entering the `__await__` does")

@types.coroutine
def iterable_coro(context):
    task_print(f"`{context} iterable_coro`: pre-yield")
    yield None  # None or a Future object are the only legitimate yields to the task in asyncio
    task_print(f"`{context} iterable_coro`: post-yield")

async def original_task():
    asyncio.create_task(other_task("Aha, but a (suitably unconsumed) *`yield`* DOES 'pause' the current_task allowing the event scheduler to `_wakeup` another task"))
    task_print("Original task")
    await coro()
    task_print("'Jumped straight out of' coro. Leaving a coro, as with leaving/entering any awaitable, doesn't give control to the event loop")
    res = await iterable_coro("await")
    assert res is None
    asyncio.create_task(other_task("This doesn't run until the very end because the generated None following the creation of this task is consumed by the `for` loop"))
    for y in iterable_coro("for y in"):
        task_print(f"But 'ordinary' `yield` points (those which are consumed by the `current_task` itself) behave as ordinary without relinquishing control at the async/task-level; `y={y}`")
    task_print("Done with original task")

asyncio.get_event_loop().run_until_complete(original_task())
Run under Python 3.8, this produces:
Task-1: Original task
Task-1: 'Jumped straight into' coro; the await keyword itself does nothing to 'pause' the current_task
Task-1: 'Jumped straight into' another await; the act of await awaitable itself doesn't 'pause' anything
Task-2: Aha, but a (suitably unconsumed) yield DOES 'pause' the current_task allowing the event scheduler to _wakeup another task
Task-1: We're back to our awaitable object because that other task completed
Task-1: 'Jumped straight back into' coro; we have another pending task, but leaving an __await__ doesn't 'pause' the task any more than entering the __await__ does
Task-1: 'Jumped straight out of' coro. Leaving a coro, as with leaving/entering any awaitable, doesn't give control to the event loop
Task-1: await iterable_coro: pre-yield
Task-3: The event loop gets control when yield points (from an iterable coroutine) propagate up to the current_task through a suitable chain of await or yield from statements
Task-1: await iterable_coro: post-yield
Task-1: for y in iterable_coro: pre-yield
Task-1: But 'ordinary' yield points (those which are consumed by the current_task itself) behave as ordinary without relinquishing control at the async/task-level; y=None
Task-1: for y in iterable_coro: post-yield
Task-1: Done with original task
Task-4: This doesn't run until the very end because the generated None following the creation of this task is consumed by the for loop
Indeed, exercises such as the following can help one's mind decouple the functionality of async/await from the notion of "event loops" and such. The former is conducive to nice implementations and usages of the latter, but you can use async and await just as specially syntaxed generator stuff without any "loop" (whether asyncio or otherwise) whatsoever:
import types  # no asyncio, nor any other loop framework

async def f1():
    print(1)
    print(await f2(), '= await f2()')
    return 8

@types.coroutine
def f2():
    print(2)
    print((yield 3), '= yield 3')
    return 7

class F3:
    def __await__(self):
        print(4)
        print((yield 5), '= yield 5')
        print(10)
        return 11

task1 = f1()
task2 = F3().__await__()

"""You could say calls to send() represent our
"manual task management" in this script.
"""
print(task1.send(None), '= task1.send(None)')
print(task2.send(None), '= task2.send(None)')
try:
    print(task1.send(6), 'try task1.send(6)')
except StopIteration as e:
    print(e.value, '= except task1.send(6)')
try:
    print(task2.send(9), 'try task2.send(9)')
except StopIteration as e:
    print(e.value, '= except task2.send(9)')
produces
1
2
3 = task1.send(None)
4
5 = task2.send(None)
6 = yield 3
7 = await f2()
8 = except task1.send(6)
9 = yield 5
10
11 = except task2.send(9)
Yes, await passes control back to the asyncio event loop and allows it to schedule other async functions.
Another way is
await asyncio.sleep(0)
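A minimal check of that claim (assuming Python 3.7+ for asyncio.run): two tasks that each await asyncio.sleep(0) interleave their prints, because each sleep(0) hands control back to the loop for one cycle:
import asyncio

async def worker(name):
    for i in range(2):
        print(name, i)
        await asyncio.sleep(0)  # yield one loop iteration to the other task

async def main():
    await asyncio.gather(worker("A"), worker("B"))

asyncio.run(main())  # prints: A 0, B 0, A 1, B 1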
I have a little bit of experience with promises in Javascript. I am quite experienced with Python, but new to its coroutines, and there is a bit that I just fail to understand: where does the asynchronicity kick in?
Let's consider the following minimal example:
async def gen():
    await something
    return 42
As I understand it, await something puts execution of our function aside and lets the main program run other bits. At some point something has a new result and gen will have a result soon after.
If gen and something are coroutines, then by all internet wisdom they are generators. And the only way to know when a generator has a new item available, afaik, is by polling it: x=gen(); next(x). But this is blocking! How does the scheduler "know" when x has a result? The answer can't be "when something has a result" because something must be a generator, too (for it is a coroutine). And this argument applies recursively.
I can't get past this idea that at some point the process will just have to sit and wait synchronously.
The secret sauce here is the asyncio module. Your something object has to be an awaitable object itself, and either depend on more awaitable objects, or must yield from a Future object.
For example, the asyncio.sleep() coroutine yields a Future:
@coroutine
def sleep(delay, result=None, *, loop=None):
    """Coroutine that completes after a given time (in seconds)."""
    if delay == 0:
        yield
        return result
    if loop is None:
        loop = events.get_event_loop()
    future = loop.create_future()
    h = future._loop.call_later(delay,
                                futures._set_result_unless_cancelled,
                                future, result)
    try:
        return (yield from future)
    finally:
        h.cancel()
(The syntax here uses the older generator syntax, to remain backwards compatible with older Python 3 releases).
Note that futures don't use await or yield from themselves; they simply yield self until some condition is met. In the above asyncio.sleep() coroutine, that condition is met when a result has been produced (via the futures._set_result_unless_cancelled() function called after a delay).
An event loop then keeps pulling in the next 'result' from each pending future it manages (polling them efficiently) until the future signals it is done (by raising a StopIteration exception holding the result; a return from a coroutine would do that, for example). At that point the coroutine that yielded the future can be signalled to continue (either by sending in the future's result, or by throwing an exception if the future raised anything other than StopIteration).
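To make that mechanism concrete, here is a hedged toy version of it (FakeFuture is mine and much cruder than asyncio's real Future): a future-like object that yields itself until a result arrives, and a gen() like the one in the question, driven by hand the way the loop would drive it:
class FakeFuture:
    def __init__(self):
        self.done, self.result = False, None

    def __await__(self):
        while not self.done:
            yield self          # "poll me again later"
        return self.result

something = FakeFuture()

async def gen():
    await something
    return 42

coro = gen()
print(coro.send(None) is something)  # True: the future bubbled up to the driver
something.done = True                # the "event" arrives
try:
    coro.send(None)                  # resume; the future is now done
except StopIteration as exc:
    print(exc.value)                 # 42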
So for your example, the loop will kick off your gen() coroutine, and await something then (directly or indirectly) yields a future. That future is polled until it raises StopIteration (signalling it is done) or raises some other exception. If the future is done, coroutine.send(result) is executed, allowing it to then advance to the return 42 line, triggering a new StopIteration exception with that value, allowing a calling coroutine awaiting on gen() to continue, etc.