When will yield actually yield in the function call stack? - python

I am working with tornado and motor in Python 3.4.3.
I have three files; let's call them main.py, model.py, and core.py.
I have three functions, one in each...
main.py
def getLoggedIn(request_handler):
    # request_handler = tornado.web.RequestHandler()
    db = request_handler.settings["db"]
    uid = request_handler.get_secure_cookie("uid")
    result = model.Session.get(db, uid=uid)
    return result.get("_id", None) if result else None
model.py
@classmethod
def get(cls, db, user_id=None, **kwargs):
    session = core.Session(db)
    return session.get(user_id, **kwargs)
core.py
@gen.coroutine
def get(self, user_id, **kwargs):
    params = kwargs
    if user_id:
        params.update({"_id": ObjectId(user_id)})  # This does not exist in DB
    future = self.collection.find_one(params)
    print(future)  # prints <tornado.concurrent.Future object at 0x04152A90>
    result = yield future
    print(result)  # prints None
    return result
The call chain is getLoggedIn => model.get => core.get.
core.get is decorated with @gen.coroutine and I call yield self.collection.find_one(params).
print(result) prints None, but if I return result and print the return value in the getLoggedIn function, it prints a tornado.concurrent.Future object.
I believe this is related to the asynchronous nature of tornado and that the print gets called before the yield, but I am not sure. It would be a great help if someone could explain the principles of coroutines/generators and their behavior in the different possible cases.

PEP 255 covers the original specification for generators.
However, tornado uses yield inside of coroutines in a very specific way: http://www.tornadoweb.org/en/stable/guide/coroutines.html#how-it-works
Your code doesn't really look or smell like an ordinary generator because the Python notion of generators is being co-opted by tornado to define coroutines.
I would say that you don't really want the principles of generator writing, but the principles of tornado generators -- a wholly different beast.
Assigning the value of the yield is a way for the wrapping @gen.coroutine decorator to pass the result of the future back into core.get.
That way, result is not assigned the future object, but future.result().
yield future essentially suspends your function and turns it into a callback that the future will invoke, resuming execution at the location of the yield.
The asynchronous nature of tornado does not allow the yield to run before the print, as you worried.
Most likely, your Future is not returning anything, or is returning None (semantically equivalent, I know).
It might be best to think of result = yield future as a specialized version of result = future.result()

Every call to a coroutine must be yielded, and the caller must also be a coroutine. So getLoggedIn must be a coroutine that calls:
result = yield model.Session.get(db, uid=uid)
And so on. See my article on refactoring Tornado coroutines for a detailed example and explanation.
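Put together, a corrected version of the chain from the question might look roughly like this (a sketch, assuming model.Session and core.Session keep the structure shown above):

from tornado import gen

# main.py
@gen.coroutine
def getLoggedIn(request_handler):
    db = request_handler.settings["db"]
    uid = request_handler.get_secure_cookie("uid")
    # yield the coroutine call to receive its result instead of a Future
    result = yield model.Session.get(db, uid=uid)
    return result.get("_id", None) if result else None

# model.py
class Session(object):
    @classmethod
    @gen.coroutine
    def get(cls, db, user_id=None, **kwargs):
        session = core.Session(db)
        result = yield session.get(user_id, **kwargs)
        return result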

Related

python - how to implement a C-function as awaitable (coroutine)

Environment: a cooperative RTOS in C, with the micropython virtual machine running as one of the tasks.
To keep the VM from blocking the other RTOS tasks, I insert RTOS_sleep() in vm.c:DISPATCH() so that after every bytecode is executed, the VM relinquishes control to the next RTOS task.
I created a uPy interface to asynchronously obtain data from a physical data bus (could be CAN, SPI, ethernet) using the producer-consumer design pattern.
Usage in uPy:
can_q = CANbus.queue()
message = can_q.get()
The implementation in C is such that can_q.get() does NOT block the RTOS: it polls a C-queue and, if no message has been received, calls RTOS_sleep() to give another task the chance to fill the queue. Things stay synchronized because the C-queue is only updated by another RTOS task and RTOS tasks only switch when RTOS_sleep() is called, i.e. the scheduling is cooperative.
The C-implementation is basically:
// gives chance for c-queue to be filled by other RTOS task
while(c_queue_empty() == true) RTOS_sleep();
return c_queue_get_message();
Although the Python statement can_q.get() does not block the RTOS, it does block the uPy script.
I'd like to rewrite it so I can use it with async def i.e. coroutine and have it not block the uPy script.
Not sure of the syntax but something like this:
can_q = CANbus.queue()
message = await can_q.get()
QUESTION
How do I write a C-function so I can await on it?
I would prefer a CPython and micropython answer but I would accept a CPython-only answer.
Note: this answer covers CPython and the asyncio framework. The concepts, however, should apply to other Python implementations as well as other async frameworks.
How do I write a C-function so I can await on it?
The simplest way to write a C function whose result can be awaited is to have it return a ready-made awaitable object, such as an asyncio.Future. Before returning the Future, the code must arrange for the future's result to be set by some asynchronous mechanism. All of these coroutine-based approaches assume that your program is running under some event loop that knows how to schedule the coroutines.
But returning a future isn't always enough - maybe we'd like to define an object with an arbitrary number of suspension points. Returning a future suspends only once (if the returned future is not complete), resumes once the future is completed, and that's it. An awaitable object equivalent to an async def that contains more than one await cannot be implemented by returning a future; it has to implement the protocol that coroutines normally implement. This is somewhat like an iterator implementing a custom __next__ and being used instead of a generator.
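For illustration, the Future-returning approach might look like this in pure Python (a sketch only; delayed_value is just an illustrative name, and the same steps can be performed from C through the corresponding C-API calls):

import asyncio

def delayed_value(value, delay):
    # An ordinary function (no async def) whose return value can still be awaited,
    # because it hands back an asyncio.Future.
    loop = asyncio.get_event_loop()
    future = loop.create_future()
    # Arrange for the future's result to be set later by the event loop.
    loop.call_later(delay, future.set_result, value)
    return future

async def main():
    print(await delayed_value(42, 1.0))  # prints 42 after about a second

asyncio.get_event_loop().run_until_complete(main())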
Defining a custom awaitable
To define our own awaitable type, we can turn to PEP 492, which specifies exactly which objects can be passed to await. Other than Python functions defined with async def, user-defined types can make objects awaitable by defining the __await__ special method, which Python/C maps to the tp_as_async.am_await part of the PyTypeObject struct.
What this means is that in Python/C, you must do the following:
specify a non-NULL value for the tp_as_async field of your extension type.
have its am_await member point to a C function that accepts an instance of your type and returns an instance of another extension type that implements the iterator protocol, i.e. defines tp_iter (trivially defined as a function that returns self, e.g. PyObject_SelfIter) and tp_iternext.
the iterator's tp_iternext must advance the coroutine's state machine. Each non-exceptional return from tp_iternext corresponds to a suspension, and the final StopIteration exception signifies the final return from the coroutine. The return value is stored in the value property of StopIteration.
For the coroutine to be useful, it must also be able to communicate with the event loop that drives it, so that it can specify when it is to be resumed after it has suspended. Most coroutines defined by asyncio expect to run under the asyncio event loop and internally use asyncio.get_event_loop() (and/or accept an explicit loop argument) to obtain its services.
Example coroutine
To illustrate what the Python/C code needs to implement, let's consider a simple coroutine expressed as a Python async def, such as this equivalent of asyncio.sleep():
async def my_sleep(n):
    loop = asyncio.get_event_loop()
    future = loop.create_future()
    loop.call_later(n, future.set_result, None)
    await future
    # we get back here after the timeout has elapsed, and
    # immediately return
my_sleep creates a Future, arranges for it to complete (its result to become set) in n seconds, and suspends itself until the future completes. The last part uses await, where await x means "allow x to decide whether we will now suspend or keep executing". An incomplete future always decides to suspend, and the asyncio Task coroutine driver special-cases yielded futures to suspend them indefinitely and connects their completion to resuming the task. Suspension mechanisms of other event loops (curio etc) can differ in details, but the underlying idea is the same: await is an optional suspension of execution.
__await__() that returns a generator
To translate this to C, we have to get rid of the magic async def function definition, as well as of the await suspension point. Removing the async def is fairly simple: the equivalent ordinary function simply needs to return an object that implements __await__:
def my_sleep(n):
    return _MySleep(n)

class _MySleep:
    def __init__(self, n):
        self.n = n

    def __await__(self):
        return _MySleepIter(self.n)
The __await__ method of the _MySleep object returned by my_sleep() will be automatically called by the await operator to convert an awaitable object (anything passed to await) to an iterator. This iterator will be used to ask the awaited object whether it chooses to suspend or to provide a value. This is much like how the for o in x statement calls x.__iter__() to convert the iterable x to a concrete iterator.
When the returned iterator chooses to suspend, it simply needs to produce a value. The meaning of the value, if any, will be interpreted by the coroutine driver, typically part of an event loop. When the iterator chooses to stop executing and return from await, it needs to stop iterating. Using a generator as a convenience iterator implementation, _MySleepIter would look like this:
def _MySleepIter(n):
    loop = asyncio.get_event_loop()
    future = loop.create_future()
    loop.call_later(n, future.set_result, None)
    # yield from future.__await__()
    for x in future.__await__():
        yield x
As await x maps to yield from x.__await__(), our generator must exhaust the iterator returned by future.__await__(). The iterator returned by Future.__await__ will yield if the future is incomplete, and return the future's result (which we here ignore, but yield from actually provides) otherwise.
__await__() that returns a custom iterator
The final obstacle for a C implementation of my_sleep is the use of a generator for _MySleepIter. Fortunately, any generator can be translated to a stateful iterator whose __next__ executes the piece of code up to the next await or return. __next__ implements a state machine version of the generator code, where yield is expressed by returning a value, and return by raising StopIteration. For example:
class _MySleepIter:
    def __init__(self, n):
        self.n = n
        self.state = 0

    def __iter__(self):  # an iterator has to define __iter__
        return self

    def __next__(self):
        if self.state == 0:
            loop = asyncio.get_event_loop()
            self.future = loop.create_future()
            loop.call_later(self.n, self.future.set_result, None)
            self.state = 1
        if self.state == 1:
            if not self.future.done():
                return next(iter(self.future))
            self.state = 2
        if self.state == 2:
            raise StopIteration
        raise AssertionError("invalid state")
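With those two classes in place, the pure-Python version can be driven like any other awaitable (a quick sanity check, assuming the definitions above):

import asyncio

async def main():
    print("sleeping...")
    await my_sleep(1)    # goes through _MySleep.__await__ and _MySleepIter
    print("done")

asyncio.get_event_loop().run_until_complete(main())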
Translation to C
The above is quite some typing, but it works, and only uses constructs that can be defined with native Python/C functions.
Actually translating the two classes to C is quite straightforward, but beyond the scope of this answer.

Python generators and reduce

I am working on a Python 3 tornado web server with asynchronous coroutines for GET requests, using the @gen.coroutine decorator. I want to use this function from a library:
@gen.coroutine
def foo(x):
    yield do_something(x)
The handler's get method that uses it is simple enough:
@gen.coroutine
def get(self):
    x = self.some_parameter
    yield response(foo(x))
Now assume there are multiple functions foo1, foo2, etc. of the same type. I want to do something like ...foo3(foo2(foo1(x).result()).result())... and yield that instead of just response(foo(x)) in the get method.
I thought this would be easy with reduce and the result method. However, because of how tornado works, I cannot force the foos to return something with the result method. This means that yield reduce(...) gives an error: "DummyFuture does not support blocking for results". From other answers on SO and elsewhere, I know I will have to use IOLoop or something, which I didn't really understand, and...
...my question is, how can I avoid evaluating all the foos and yield that unevaluated chunk from the get method?
Edit: This is not a duplicate of this question because I want to: 1. nest a lot of functions and 2. try not to evaluate immediately.
In Tornado, you must yield a Future inside a coroutine in order to get a result. Review Tornado's coroutine guide.
You could write a reducer that is a coroutine. It runs each coroutine to get a Future, calls yield with the Future to get a result, then runs the next coroutine on that result:
from tornado.ioloop import IOLoop
from tornado import gen

@gen.coroutine
def f(x):
    # Just to prove we're really a coroutine.
    yield gen.sleep(1)
    return x * 2

@gen.coroutine
def g(x):
    return x + 1

@gen.coroutine
def h():
    return 10

@gen.coroutine
def coreduce(*funcs):
    # Start by calling the last function in the list.
    result = yield funcs[-1]()
    # Call the remaining functions.
    for func in reversed(funcs[:-1]):
        result = yield func(result)
    return result

# Wrap in a lambda to satisfy your requirement to
# NOT evaluate immediately.
latent_result = lambda: coreduce(f, g, h)
final_result = IOLoop.current().run_sync(latent_result)
print(final_result)
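For what it's worth, running this sketch should print 22 after roughly one second: h() produces 10, g turns that into 11, and f doubles it to 22.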

what's the difference between yield from and yield in python 3.3.2+

Since Python 3.3, Python supports a new syntax for use in generator functions:
yield from <expression>
I made a quick try of it:
>>> def g():
...     yield from [1,2,3,4]
...
>>> for i in g():
...     print(i)
...
1
2
3
4
>>>
It seems simple to use, but the PEP document is complex. My question is: is there any other difference compared to the plain yield statement? Thanks.
For most applications, yield from just yields everything from the left iterable in order:
def iterable1():
    yield 1
    yield 2

def iterable2():
    yield from iterable1()
    yield 3

assert list(iterable2()) == [1, 2, 3]
For 90% of users who see this post, I'm guessing that this will be explanation enough for them. yield from simply delegates to the iterable on the right hand side.
Coroutines
However, there are some more esoteric generator circumstances that also matter here. A lesser-known fact about generators is that they can be used as co-routines. This isn't super common, but you can send data to a generator if you want:
def coroutine():
    x = yield None
    yield 'You sent: %s' % x

c = coroutine()
next(c)
print(c.send('Hello world'))
Aside: You might be wondering what the use-case is for this (and you're not alone). One example is the contextlib.contextmanager decorator. Co-routines can also be used to parallelize certain tasks. I don't know too many places where this is taken advantage of, but google app-engine's ndb datastore API uses it for asynchronous operations in a pretty nifty way.
Now, let's assume you send data to a generator that is yielding data from another generator ... How does the original generator get notified? The answer is that it doesn't in python2.x, where you need to wrap the generator yourself:
def python2_generator_wrapper():
    for item in some_wrapped_generator():
        yield item
At least not without a whole lot of pain:
def python2_coroutine_wrapper():
    """This doesn't work. Somebody smarter than me needs to fix it. . .

    Pain. Misery. Death lurks here :-("""
    # See https://www.python.org/dev/peps/pep-0380/#formal-semantics for an actual working implementation :-)
    g = some_wrapped_generator()
    for item in g:
        try:
            val = yield item
        except Exception as forward_exception:  # What exceptions should I not catch again?
            g.throw(forward_exception)
        else:
            if val is not None:
                g.send(val)  # Oops, we just consumed another cycle of g ... How do we handle that properly ...
This all becomes trivial with yield from:
def coroutine_wrapper():
    yield from coroutine()
Because yield from truly delegates (everything!) to the underlying generator.
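A quick demonstration of that full delegation: send() passes straight through the wrapper to the inner generator (a minimal sketch with illustrative names):

def inner():
    x = yield 'ready'
    yield 'inner saw: %s' % x

def outer():
    yield from inner()

g = outer()
print(next(g))           # ready
print(g.send('hello'))   # inner saw: hello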
Return semantics
Note that the PEP in question also changes the return semantics. While not directly in OP's question, it's worth a quick digression if you are up for it. In python2.x, you can't do the following:
def iterable():
    yield 'foo'
    return 'done'
It's a SyntaxError there. With the update to yield, the above function becomes legal. Again, the primary use-case is with coroutines (see above). You can send data to the generator and it can do its work magically (maybe using threads?) while the rest of the program does other things. When flow control passes back to the generator, StopIteration will be raised (as is normal for the end of a generator), but now the StopIteration will carry a data payload. It is the same thing as if the programmer had instead written:
raise StopIteration('done')
Now the caller can catch that exception and do something with the data payload to benefit the rest of humanity.
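A quick illustration of that payload (a sketch; in Python 3.3+ the returned value rides along as StopIteration.value, and yield from collects it for you):

def iterable():
    yield 'foo'
    return 'done'

gen = iterable()
print(next(gen))              # foo
try:
    next(gen)
except StopIteration as stop:
    print(stop.value)         # done

def delegator():
    result = yield from iterable()  # yield from captures the return value
    print('got:', result)

list(delegator())             # yields 'foo', then prints: got: done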
At first sight, yield from is an algorithmic shortcut for:
def generator1():
    for item in generator2():
        yield item
    # do more things in this generator
Which is then mostly equivalent to just:
def generator1():
    yield from generator2()
    # more things on this generator
In English: when used inside a generator, yield from yields each element of another iterable as if that item were coming from the first generator, from the point of view of the code calling the first generator.
The main reason for its creation was to allow easy refactoring of code that relies heavily on iterators. Code made of ordinary functions can always, at very little extra cost, have blocks of one function refactored into other functions that are then called; that divides tasks, simplifies reading and maintaining the code, and allows more reuse of small code snippets.
So, large functions like this:
def func1():
    # some calculation
    for i in somesequence:
        # complex calculation using i
        # ...
        # ...
        # ...
    # some more code to wrap up results
    # finalizing
    # ...
Can become code like this, without drawbacks:
def func2(i):
    # complex calculation using i
    # ...
    # ...
    # ...
    return calculated_value

def func1():
    # some calculation
    for i in somesequence:
        func2(i)
    # some more code to wrap up results
    # finalizing
    # ...
When getting to iterators however, the form
def generator1():
    for item in generator2():
        yield item
    # do more things in this generator

for item in generator1():
    # do things
requires that, for each item consumed from generator2, the running context first be switched to generator1, where nothing is done, and then the context be switched to generator2; when that one yields a value, there is another intermediate context switch to generator1 before the value reaches the code actually consuming those values.
With yield from these intermediate context switches are avoided, which can save quite a few resources when many iterators are chained: the context switches straight from the code consuming the outermost generator to the innermost generator, skipping the contexts of the intermediate generators altogether, until the inner ones are exhausted.
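A minimal sketch of such a chain (illustrative names); each value travels from the innermost generator to the consumer without any per-item plumbing written in the middle generators:

def innermost():
    yield from range(3)

def middle():
    yield from innermost()

def outer():
    yield from middle()

print(list(outer()))   # [0, 1, 2]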
Later on, the language took advantage of this "tunnelling" through intermediate contexts to use these generators as co-routines: functions that can make asynchronous calls. With the proper framework in place, as described in https://www.python.org/dev/peps/pep-3156/ , such a co-routine is written so that, when it calls a function that would take a long time to resolve (due to a network operation, or a CPU-intensive operation that can be offloaded to another thread), that call is made with a yield from statement. The framework's main loop then arranges for the expensive function to be properly scheduled and retakes execution (the framework main loop is always the code calling the co-routines themselves). When the expensive result is ready, the framework makes the called co-routine behave like an exhausted generator, and execution of the first co-routine resumes.
From the programmer's point of view it is as if the code was running straight forward, with no interruptions. From the process point of view, the co-routine was paused at the point of the expensive call, and other (possibly parallel calls to the same co-routine) continued running.
So, one might write as part of a web crawler some code along:
@asyncio.coroutine
def crawler(url):
    page_content = yield from async_http_fetch(url)
    urls = parse(page_content)
    ...
Which could fetch tens of html pages concurrently when called from the asyncio loop.
Python 3.4 added the asyncio module to the stdlib as the default provider for this kind of functionality. It worked so well that in Python 3.5 several new keywords were added to the language to distinguish co-routines and asynchronous calls from the generator usage described above. These are described in https://www.python.org/dev/peps/pep-0492/
Here is an example that illustrates it:
>>> def g():
...     yield from range(5)
...
>>> list(g())
[0, 1, 2, 3, 4]
>>> def g():
...     yield range(5)
...
>>> list(g())
[range(0, 5)]
>>>
yield from yields each item of the iterable, but yield yields the iterable itself.
The difference is simple:
yield:
[Extra info: if you already know how generators work, you can skip this.]
yield is used to produce a single value from a generator function. When the generator is iterated, its body starts executing, and when a yield statement is encountered, it temporarily suspends execution of the function, hands the value to the caller, and saves its current state. The next time the generator is advanced, it resumes execution from where it left off and continues until it hits the next yield statement.
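A tiny sketch of that suspend/resume behaviour (illustrative names):

def counter():
    print("starting")
    yield 1          # suspend here, hand 1 to the caller
    print("resumed")
    yield 2          # suspend again

g = counter()        # nothing runs yet
print(next(g))       # prints "starting", then 1
print(next(g))       # prints "resumed", then 2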
In the example below, generator1 and generator2 each return values wrapped in a generator object. combined_generator also returns a generator object, but its values come from the nested generator objects; to have it hand out those nested values directly, we use yield from.
class Gen:
    def generator1(self):
        yield 1
        yield 2
        yield 3

    def generator2(self):
        yield 'a'
        yield 'b'
        yield 'c'

    def combined_generator(self):
        """
        This function delegates to other generators, which in turn yield values,
        so we use `yield from` here so that the consuming code gets the values directly.
        """
        yield from self.generator1()
        yield from self.generator2()

    def run(self):
        print("Gen running ...")
        for item in self.combined_generator():
            print(item)

g = Gen()
g.run()
The output of the above is:
Gen running ...
1
2
3
a
b
c

Creating an asynchronous method with Google App Engine's NDB

I want to make sure I understand how to create tasklets and asynchronous methods. What I have is a method that returns a list. I want it to be called from somewhere and immediately allow other calls to be made. So I have this:
future_1 = get_updates_for_user(userKey, aDate)
future_2 = get_updates_for_user(anotherUserKey, aDate)
somelist.extend(future_1)
somelist.extend(future_2)
....
@ndb.tasklet
def get_updates_for_user(userKey, lastSyncDate):
    noteQuery = ndb.GqlQuery('SELECT * FROM Comments WHERE ANCESTOR IS :1 AND modifiedDate > :2', userKey, lastSyncDate)
    note_list = list()
    qit = noteQuery.iter()
    while (yield qit.has_next_async()):
        note = qit.next()
        noteDic = note.to_dict()
        note_list.append(noteDic)
    raise ndb.Return(note_list)
Is this code doing what I'd expect it to do? Namely, will the two calls run asynchronously? Am I using futures correctly?
Edit: Well after testing, the code does produce the desired results. I'm a newbie to Python - what are some ways to test to see if the methods are running async?
It's pretty hard to verify for yourself that the methods are running concurrently -- you'd have to put copious logging in. Also in the dev appserver it'll be even harder as it doesn't really run RPCs in parallel.
Your code looks okay, it uses yield in the right place.
My only recommendation is to name your function get_updates_for_user_async() -- that matches the convention NDB itself uses and is a hint to the reader of your code that the function returns a Future and should be yielded to get the actual result.
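For example, the calling code from the question could then kick off both queries before blocking on either result (a sketch; get_result() is how you read an NDB future's value outside a tasklet):

future_1 = get_updates_for_user_async(userKey, aDate)
future_2 = get_updates_for_user_async(anotherUserKey, aDate)
# Both queries are now in flight; block for their results only when needed.
somelist.extend(future_1.get_result())
somelist.extend(future_2.get_result())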
An alternative way to do this is to use the map_async() method on the Query object; it would let you write a callback that just contains the to_dict() call:
@ndb.tasklet
def get_updates_for_user_async(userKey, lastSyncDate):
    noteQuery = ndb.gql('...')
    note_list = yield noteQuery.map_async(lambda note: note.to_dict())
    raise ndb.Return(note_list)
Advanced tip: you can simplify this even more by dropping the #ndb.tasklet decorator and just returning the Future returned by map_async():
def get_updates_for_user_async(userKey, lastSyncDate):
    noteQuery = ndb.gql('...')
    return noteQuery.map_async(lambda note: note.to_dict())
This is a general slight optimization for async functions that contain only one yield and immediately return the value yielded. (If you don't immediately get this you're in good company, and it runs the risk of being broken by a future maintainer who doesn't either. :-)

Python generator's 'yield' in separate function

I'm implementing a utility library which is a sort-of task manager intended to run within the distributed environment of Google App Engine cloud computing service. (It uses a combination of task queues and memcache to execute background processing). I plan to use generators to control the execution of tasks, essentially enforcing a non-preemptive "concurrency" via the use of yield in the user's code.
The trivial example - processing a bunch of database entities - could be something like the following:
class EntityWorker(Worker):
    def setup():
        self.entity_query = Entity.all()

    def run():
        for e in self.entity_query:
            do_something_with(e)
            yield
As we know, yield is a two-way communication channel, allowing values to be passed to the code that uses the generator. This makes it possible to simulate a "preemptive API" such as the SLEEP call below:
def run():
    for e in self.entity_query:
        do_something_with(e)
        yield Worker.SLEEP, timedelta(seconds=1)
But this is ugly. It would be great to hide the yield within a separate function which could be invoked in a simple way:
self.sleep(timedelta(seconds=1))
The problem is that putting yield in the sleep function turns it into a generator function. The call above would therefore just return another generator. Only after adding .next() and yield back again would we obtain the previous result:
yield self.sleep(timedelta(seconds=1)).next()
which is of course even more ugly and unnecessarily verbose than before.
Hence my question: is there a way to put yield into a function without turning it into a generator function, while still allowing other generators to use it to yield values computed by it?
You seem to be missing the obvious:
class EntityWorker(Worker):
    def setup(self):
        self.entity_query = Entity.all()

    def run(self):
        for e in self.entity_query:
            do_something_with(e)
            yield self.sleep(timedelta(seconds=1))

    def sleep(self, wait):
        return Worker.SLEEP, wait
It's the yield that turns functions into generators, it's impossible to leave it out.
To hide the yield you need a higher order function, in your example it's map:
from itertools import imap

def slowmap(f, sleep, *iters):
    for row in imap(f, self.entity_query):
        yield Worker.SLEEP, wait

def run():
    return slowmap(do_something_with,
                   (Worker.SLEEP, timedelta(seconds=1)),
                   self.entity_query)
Alas, this won't work. But a "middle-way" could be fine:
def sleepjob(*a, **k):
    if a:
        return Worker.SLEEP, a[0]
    else:
        return Worker.SLEEP, timedelta(**k)
So
yield self.sleepjob(timedelta(seconds=1))
yield self.sleepjob(seconds=1)
looks OK to me.
I would suggest you have a look at ndb. It uses generators as co-routines (as you are proposing here), allowing you to write programs that work with RPCs asynchronously.
The api does this by wrapping the generator with another function that 'primes' the generator (it calls .next() immediately so that the code begins execution). The tasklets are also designed to work with App Engine's rpc infrastructure, making it possible to use any of the existing asynchronous api calls.
With the concurreny model used in ndb, you yield either a future object (similar to what is described in pep-3148) or an App Engine rpc object. When that rpc has completed, the execution in the function that yielded the object is allowed to continue.
If you are using a model derived from ndb.model.Model then the following will allow you to asynchronously iterate over a query:
from ndb import tasklets
@tasklets.tasklet
def run():
    it = iter(Entity.query())
    # Other tasklets will be allowed to run if the next call has to wait for an rpc.
    while (yield it.has_next_async()):
        entity = it.next()
        do_something_with(entity)
Although ndb is still considered experimental (some of its error handling code still needs some work), I would recommend you have a look at it. I have used it in my last 2 projects and found it to be an excellent library.
Make sure you read through the documentation linked from the main page, and also the companion documentation for the tasklet stuff.
