What is an idiomatic way to create an infinite iterator from a function? For example
from itertools import islice
import random
rand_characters = to_iterator( random.randint(0,256) )
print ' '.join( islice( rand_characters, 100))
would produce 100 random numbers
You want an iterator which continuously yields values until you stop asking it for new ones? Simply use
it = iter(function, sentinel)
which calls function() for each iteration step until the result == sentinel.
So choose a sentinel which can never be returned by your wanted function, such as None in your case. Note that iter() expects a callable, so defer the call with a lambda:
rand_iter = lambda start, end: iter(lambda: random.randint(start, end), None)
rand_bytes = rand_iter(0, 256)
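To tie this back to the question's islice example, a minimal usage sketch (only the standard library is assumed):

from itertools import islice

# take the first 100 values from the otherwise infinite iterator
print(' '.join(str(n) for n in islice(rand_bytes, 100)))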
If you want to monitor some state on your machine, you could do
iter_mystate = iter(getstate, None)
which, in turn, infinitely calls getstate() for each iteration step.
But beware of functions returning None as a valid value! In this case, you should choose a sentinel which is guaranteed to be unique, maybe an object created for exactly this job:
iter_mystate = iter(getstate, object())
Every time I see iter with 2 arguments, I need to scratch my head and look up the documentation to figure out exactly what is going on. Simply because of that, I would probably roll my own:
def call_forever(callback):
    while True:
        yield callback()
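For example, a short usage sketch drawing a few values from the infinite generator with islice:

import random
from itertools import islice

# take five values from the infinite generator built around a zero-argument callable
print(list(islice(call_forever(lambda: random.randint(0, 256)), 5)))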
Or, as stated in the comments by Jon Clements, you could use the itertools.repeatfunc recipe which allows you to pass arguments to the function as well:
import itertools as it
def repeatfunc(func, times=None, *args):
    """
    Repeat calls to func with specified arguments.
    Example: repeatfunc(random.random)
    """
    if times is None:
        return it.starmap(func, it.repeat(args))
    return it.starmap(func, it.repeat(args, times))
I think the function signature def repeatfunc(func, times=None, *args) is a little awkward, though. I'd prefer to pass a tuple as args (it seems more explicit to me, and "explicit is better than implicit"):
import itertools as it
def repeatfunc(func, args=(), times=None):
    """
    Repeat calls to func with specified arguments.
    Example: repeatfunc(random.random)
    """
    if times is None:
        return it.starmap(func, it.repeat(args))
    return it.starmap(func, it.repeat(args, times))
which allows it to be called like:
repeatfunc(func, (arg1, arg2, ..., argN), times=4)   # repeat 4 times
repeatfunc(func, (arg1, arg2, ...))                  # repeat infinitely
instead of the vanilla version from itertools:
repeatfunc(func, 4, arg1, arg2, ...)      # repeat 4 times
repeatfunc(func, None, arg1, arg2, ...)   # repeat infinitely
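For instance, a small runnable sketch of the tuple-args variant above (using random.randint and islice from the standard library):

import random
from itertools import islice

rand_nums = repeatfunc(random.randint, (0, 256))            # infinite iterator
print(list(islice(rand_nums, 5)))                           # first five values
print(list(repeatfunc(random.randint, (0, 256), times=5)))  # exactly five values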
I just started Python and I've got no idea what memoization is and how to use it. Also, may I have a simplified example?
Memoization effectively refers to remembering ("memoization" → "memorandum" → to be remembered) results of method calls based on the method inputs and then returning the remembered result rather than computing the result again. You can think of it as a cache for method results. For further details, see the definition on page 387 of Introduction to Algorithms (3rd edition), Cormen et al.
A simple example for computing factorials using memoization in Python would be something like this:
factorial_memo = {}

def factorial(k):
    if k < 2: return 1
    if k not in factorial_memo:
        factorial_memo[k] = k * factorial(k - 1)
    return factorial_memo[k]
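A quick usage note: because the cache persists between calls, later calls reuse earlier work:

print(factorial(10))   # computes and caches 2! .. 10!
print(factorial(12))   # only 11! and 12! are newly computed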
You can get more complicated and encapsulate the memoization process into a class:
class Memoize:
    def __init__(self, f):
        self.f = f
        self.memo = {}

    def __call__(self, *args):
        if args not in self.memo:
            self.memo[args] = self.f(*args)
        # Warning: You may wish to do a deepcopy here if returning objects
        return self.memo[args]
Then:
def factorial(k):
    if k < 2: return 1
    return k * factorial(k - 1)

factorial = Memoize(factorial)
A feature known as "decorators" was added in Python 2.4, which allows you to simply write the following to accomplish the same thing:
@Memoize
def factorial(k):
    if k < 2: return 1
    return k * factorial(k - 1)
The Python Decorator Library has a similar decorator called memoized that is slightly more robust than the Memoize class shown here.
functools.cache decorator:
Python 3.9 released a new function, functools.cache. It caches in memory the result of a function called with a particular set of arguments, which is memoization. It's easy to use:
import functools
import time

@functools.cache
def calculate_double(num):
    time.sleep(1)  # sleep for 1 second to simulate a slow calculation
    return num * 2
The first time you call calculate_double(5), it will take a second and return 10. The second time you call the function with the same argument calculate_double(5), it will return 10 instantly.
Adding the cache decorator ensures that if the function has been called recently for a particular value, it will not recompute that value, but use a cached previous result. In this case, it leads to a tremendous speed improvement, while the code is not cluttered with the details of caching.
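A small timing sketch (using the calculate_double defined above) makes the cache hit visible:

import time

start = time.perf_counter()
calculate_double(5)    # ~1 second: computed and cached
calculate_double(5)    # near-instant: served from the cache
print(f"elapsed: {time.perf_counter() - start:.2f} s")   # roughly 1 s, not 2 s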
(Edit: the previous example calculated a Fibonacci number using recursion, but I changed the example to prevent confusion.)
functools.lru_cache decorator:
If you need to support older versions of Python, functools.lru_cache works in Python 3.2+. By default, it only caches the 128 most recently used calls, but you can set the maxsize to None to indicate that the cache should never expire:
@functools.lru_cache(maxsize=None)
def calculate_double(num):
    # etc
The other answers cover what it is quite well. I'm not repeating that. Just some points that might be useful to you.
Usually, memoisation is an operation you can apply to any function that computes something (expensive) and returns a value. Because of this, it's often implemented as a decorator. The implementation is straightforward and would look something like this:
memoised_function = memoise(actual_function)
or expressed as a decorator
@memoise
def actual_function(arg1, arg2):
    # body
I've found this extremely useful
from functools import wraps

def memoize(function):
    memo = {}

    @wraps(function)
    def wrapper(*args):
        # add the new key to dict if it doesn't exist already
        if args not in memo:
            memo[args] = function(*args)
        return memo[args]
    return wrapper

@memoize
def fibonacci(n):
    if n < 2: return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(25)
Memoization is keeping the results of expensive calculations and returning the cached result rather than continuously recalculating it.
Here's an example:
def doSomeExpensiveCalculation(self, input):
    if input not in self.cache:
        result = ...  # <do expensive calculation>
        self.cache[input] = result
    return self.cache[input]
A more complete description can be found in the wikipedia entry on memoization.
Let's not forget the built-in hasattr function, for those who want to hand-craft. That way you can keep the mem cache inside the function definition (as opposed to a global).
def fact(n):
    if not hasattr(fact, 'mem'):
        fact.mem = {1: 1}
    if n not in fact.mem:
        fact.mem[n] = n * fact(n - 1)
    return fact.mem[n]
Memoization is basically saving the results of past operations done with recursive algorithms in order to reduce the need to traverse the recursion tree if the same calculation is required at a later stage.
see http://scriptbucket.wordpress.com/2012/12/11/introduction-to-memoization/
Fibonacci Memoization example in Python:
fibcache = {}

def fib(num):
    if num in fibcache:
        return fibcache[num]
    else:
        fibcache[num] = num if num < 2 else fib(num - 1) + fib(num - 2)
        return fibcache[num]
Memoization is the conversion of functions into data structures. Usually one wants the conversion to occur incrementally and lazily (on demand of a given domain element--or "key"). In lazy functional languages, this lazy conversion can happen automatically, and thus memoization can be implemented without (explicit) side-effects.
Well, I should answer the first part first: what's memoization?
It's just a method to trade memory for time. Think of a multiplication table.
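As a tiny illustration of that trade-off (this table is just an example, not part of the answer's code): precompute the products once and pay only a dictionary lookup afterwards:

# memory spent up front ...
table = {(i, j): i * j for i in range(10) for j in range(10)}
# ... time saved on every later lookup
print(table[(7, 8)])   # 56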
Using a mutable object as a default value in Python is usually considered bad practice. But if you use it wisely, it can actually be useful for implementing memoization.
Here's an example adapted from http://docs.python.org/2/faq/design.html#why-are-default-values-shared-between-objects
Using a mutable dict in the function definition, the intermediate computed results can be cached (e.g. when calculating factorial(10) after calculating factorial(9), we can reuse all the intermediate results).
def factorial(n, _cache={1: 1}):
    try:
        return _cache[n]
    except KeyError:  # a missing dict key raises KeyError, not IndexError
        _cache[n] = factorial(n - 1) * n
        return _cache[n]
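For instance, to see the reuse mentioned above:

print(factorial(9))    # fills _cache for 1 .. 9
print(factorial(10))   # one extra multiplication; everything else comes from _cache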
Here is a solution that will work with list or dict type arguments without whining:
def memoize(fn):
    """Returns a memoized version of any function that can be called
    with the same list of arguments.
    Usage: foo = memoize(foo)"""

    def handle_item(x):
        if isinstance(x, dict):
            return make_tuple(sorted(x.items()))
        elif hasattr(x, '__iter__'):
            return make_tuple(x)
        else:
            return x

    def make_tuple(L):
        return tuple(handle_item(x) for x in L)

    def foo(*args, **kwargs):
        items_cache = make_tuple(sorted(kwargs.items()))
        args_cache = make_tuple(args)
        if (args_cache, items_cache) not in foo.past_calls:
            foo.past_calls[(args_cache, items_cache)] = fn(*args, **kwargs)
        return foo.past_calls[(args_cache, items_cache)]

    foo.past_calls = {}
    foo.__name__ = 'memoized_' + fn.__name__
    return foo
Note that this approach can be naturally extended to any object by implementing your own hash function as a special case in handle_item. For example, to make this approach work for a function that takes a set as an input argument, you could add to handle_item:
if isinstance(x, set):
    return make_tuple(sorted(list(x)))
Solution that works with both positional and keyword arguments independently of order in which keyword args were passed (using inspect.getargspec):
import inspect
import functools

def memoize(fn):
    cache = fn.cache = {}

    @functools.wraps(fn)
    def memoizer(*args, **kwargs):
        kwargs.update(dict(zip(inspect.getargspec(fn).args, args)))
        key = tuple(kwargs.get(k, None) for k in inspect.getargspec(fn).args)
        if key not in cache:
            cache[key] = fn(**kwargs)
        return cache[key]
    return memoizer
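A side note: inspect.getargspec was deprecated and removed in Python 3.11. A sketch of the same idea on current Python, using inspect.signature to normalize positional and keyword arguments (assuming all argument values are hashable), might look like this:

import functools
import inspect

def memoize(fn):
    cache = fn.cache = {}
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def memoizer(*args, **kwargs):
        # bind() maps positional and keyword arguments onto parameter names,
        # so equivalent calls produce the same cache key
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        key = tuple(sorted(bound.arguments.items()))
        if key not in cache:
            cache[key] = fn(*args, **kwargs)
        return cache[key]
    return memoizer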
Similar question: Identifying equivalent varargs function calls for memoization in Python
Just wanted to add to the answers already provided, the Python decorator library has some simple yet useful implementations that can also memoize "unhashable types", unlike functools.lru_cache.
cache = {}

def fib(n):
    if n <= 1:
        return n
    else:
        if n not in cache:
            cache[n] = fib(n - 1) + fib(n - 2)
        return cache[n]
If speed is a consideration:
@functools.cache and @functools.lru_cache(maxsize=None) are equally fast, taking 0.122 seconds (best of 15 runs) to loop a million times on my system
a global cache variable is quite a lot slower, taking 0.180 seconds (best of 15 runs) to loop a million times on my system
a self.cache class variable is a bit slower still, taking 0.214 seconds (best of 15 runs) to loop a million times on my system
The latter two are implemented similarly to how it is described in the currently top-voted answer.
This is without memory-exhaustion prevention, i.e. I did not add code in the class or global methods to limit the cache's size; this is really the barebones implementation. The lru_cache method has that for free, if you need it.
One open question for me would be how to unit test something that has a functools decorator. Is it possible to empty the cache somehow? Unit tests seem like they would be cleanest using the class method (where you can instantiate a new class for each test) or, secondarily, the global variable method (since you can do yourimportedmodule.cachevariable = {} to empty it).
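On the open question about testing: both functools.cache and functools.lru_cache attach cache_clear() (and cache_info()) to the decorated function, so a test can reset the cache explicitly. A minimal sketch:

import functools

@functools.lru_cache(maxsize=None)
def calculate_double(num):
    return num * 2

def test_calculate_double():
    calculate_double.cache_clear()            # start the test with an empty cache
    assert calculate_double(5) == 10
    print(calculate_double.cache_info())      # hits=0, misses=1, currsize=1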
While doing programming exercises on codewars.com, I encountered an exercise on currying and partial functions.
Being a novice in programming and new to the topic, I searched on the internet for information on the topic and got quite far into solving the exercise. However I have now stumbled upon an obstacle I can't seem to overcome and am here looking for a nudge in the right direction.
The exercise is rather simple: write a function that can curry and/or partial any input function, and that evaluates the input function once enough input parameters are supplied. The input function can accept any number of input parameters. The curry/partial function should also be very flexible in how it is called, handling many different calling styles. It is also allowed to be called with more inputs than the input function requires, in which case all the excess inputs are ignored.
All the test cases the function needs to handle can be found by following the exercise link.
The code I came up with is the following:
from functools import partial
from inspect import signature
def curry_partial(func, *initial_args):
    """ Generates a 'curried' version of a function. """
    # Process any initial arguments that were given. If the number of arguments
    # given exceeds minArgs (the number of input arguments that func needs),
    # func is evaluated.
    minArgs = len(signature(func).parameters)
    if initial_args:
        if len(initial_args) >= minArgs:
            return func(*initial_args[:minArgs])
        func = partial(func, *initial_args)
        minArgs = len(signature(func).parameters)

    # Do the currying
    def g(*myArgs):
        nonlocal minArgs
        # Evaluate the function if we have the necessary number of input arguments
        if minArgs is not None and minArgs <= len(myArgs):
            return func(*myArgs[:minArgs])

        def f(*args):
            nonlocal minArgs
            newArgs = myArgs + args if args else myArgs
            if minArgs is not None and minArgs <= len(newArgs):
                return func(*newArgs[:minArgs])
            else:
                return g(*newArgs)
        return f
    return g
Now this code fails when the following test is executed:
test.assert_equals(curry_partial(curry_partial(curry_partial(add, a), b), c), sum)
where add = a + b + c (properly defined function), a = 1, b = 2, c = 3, and sum = 6.
The reason this fails is because curry_partial(add, a) returns a function handle to the function g. In the second call, curry_partial(<function_handle to g>, b), the calculation minArgs = len(signature(func).parameters) doesn't work like I want it to, because it will now calculate how many input arguments function g requires (which is 1: i.e. *myArgs), and not how many the original func still requires. So the question is, how can I write my code such that I can keep track of how many input arguments my original func still needs (reducing that number each time I am partialling the function with any given initial arguments).
I still have much to learn about programming and currying/partial, so most likely I have not chosen the most convenient approach. But I'd like to learn. The difficulty in this exercise for me is the combination of partial and curry, i.e. doing a curry loop while partialling any initial arguments that are encountered.
Try this out.
from inspect import signature

# Here `is_set` acts like a flip-flop
is_set = False
params = 0

def curry_partial(func, *partial_args):
    """
    Required argument: func
    Optional argument: partial_args
    Return:
        1) Result of the `func` if `partial_args`
           contains the required number of items.
        2) Function `wrapper` if `partial_args`
           contains less than the required number of items.
    """
    global is_set, params
    if not is_set:
        is_set = True
        # if func is already a value
        # we should return it
        try: params = len(signature(func).parameters)
        except: return func
    try:
        is_set = False
        return func(*partial_args[:params])
    except:
        is_set = True

    def wrapper(*extra_args):
        """
        Optional argument: extra_args
        Return:
            1) Result of the `func` if `args`
               contains the required number of items.
            2) Result of `curry_partial` if `args`
               contains less than the required number of items.
        """
        global is_set  # the flip-flop assignments below must update the module-level flag
        args = (partial_args + extra_args)
        try:
            is_set = False
            return func(*args[:params])
        except:
            is_set = True
            return curry_partial(func, *args)

    return wrapper
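A quick sanity check against the question's failing case (add is assumed to be defined as in the question):

def add(a, b, c):
    return a + b + c

print(curry_partial(add, 1, 2, 3))                                  # 6
print(curry_partial(curry_partial(curry_partial(add, 1), 2), 3))    # 6
print(curry_partial(add)(1)(2)(3))                                  # 6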
Admittedly, this isn't a very good design. Instead, you should use a class to do all the internal bookkeeping, like, for example, the flip-flop (don't worry, we wouldn't need any flip-flop there ;-)).
Whenever there's a function that takes arbitrary arguments, you can instantiate that class, passing in the function. This time, however, I'll leave that to you.
I am not sure about currying, but if you need a simple partial function generator, you could try something like this:
from functools import partial
from inspect import signature
def execute_or_partial(f, *args):
    max = len(signature(f).parameters)
    if len(args) >= max:
        return f(*args[:max])
    else:
        return partial(f, *args)
s = lambda x, y, z: x + y + z
t = execute_or_partial(s, 1)
u = execute_or_partial(t, 2)
v = execute_or_partial(u, 3)
print(v)
or
print(execute_or_partial(execute_or_partial(execute_or_partial(s, 1), 2), 3))
Even if it doesn't solve your original problem, see if you can use the above code to reduce code repetition (I am not sure, but I think there is some code repetition in the inner function?); that will make the subsequent problems easier to solve.
There could be functions in the standard library that already solve this problem. Many pure functional languages like Haskell have this feature built into the language.
I have a function that calls subprocess.check_call() twice. I want to test all of their possible outputs. I want to be able to set the first check_call() to return 1 and the second to return 0, and to do so for all possible combinations. Below is what I have so far; I am not sure how to adjust the expected return value:
@patch('subprocess.check_call')
def test_hdfs_dir_func(mock_check_call):
    for p, d in list(itertools.product([1, 0], repeat=2)):
        if p or d:
You can assign the side_effect of your mock to an iterable and that will return the next value in the iterable each time it's called. In this case, you could do something like this:
import copy
import itertools
import subprocess
from unittest.mock import patch
@patch('subprocess.check_call')
def test_hdfs_dir_func(mock_check_call):
    return_values = itertools.product([0, 1], repeat=2)
    # Flatten the list; only one return value per call
    mock_check_call.side_effect = itertools.chain.from_iterable(copy.copy(return_values))
    for p, d in return_values:
        assert p == subprocess.check_call()
        assert d == subprocess.check_call()
Note a few things:
I don't have your original functions so I put my own calls to check_call in the loop.
I'm using copy on the original itertools.product return value because if I don't, it uses the original iterator. This exhausts that original iterator when what we want is 2 separate lists: one for the mock's side_effect and one for you to loop through in your test.
You can do other neat stuff with side_effect, not just raise exceptions. As shown above, you can change the return value across multiple calls: https://docs.python.org/3/library/unittest.mock-examples.html#side-effect-functions-and-iterables
Not only that, but you can see from the link above that you can also give it a function. That allows you to do even more complex logic when keeping track of multiple mock calls.
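For example, a hedged sketch of side_effect as a function (the names here are made up for illustration, not taken from the question's code):

from unittest.mock import MagicMock

mock_check_call = MagicMock()
calls = []

def fake_check_call(*args, **kwargs):
    # record every call and alternate the return value: 1, 0, 1, 0, ...
    calls.append(args)
    return len(calls) % 2

mock_check_call.side_effect = fake_check_call
print(mock_check_call('first'), mock_check_call('second'))   # 1 0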
There are some cases where it's convenient to use a generator with yield to pass data back to the caller over an extended period. Is there a way to do something similar to yield without having to make the function into a generator?
The reason for this is that in some cases I end up having to make all callees into generators, even when those nested functions may have useful return values.
# currently this works fine, but requires a return arg
def nested(return_store):
    return_store[0] = some_test()
    yield from some_generator()

def do_stuff(return_store):
    yield some_data
    for more_data in data:
        yield more_data

    # Annoying workaround!
    return_store = [None]
    yield from nested(return_store)
    if return_store[0]:
        pass  # do anything

def main():
    return Reply(do_stuff())
Instead, I'd like to pass an object as an argument which I can pass the data to (instead of using yield):
# is something like this possible?
def nested(iter_obj):
    iter_obj.yield_replacement(some_generator())
    return some_test()

def do_stuff(iter_obj):
    iter_obj.yield_replacement(some_data)
    for more_data in data:
        iter_obj.yield_replacement(more_data)

    # No annoying workaround
    if nested(iter_obj):
        pass  # do anything

def main():
    iter_obj = yield_replacement_object(consumer=print)
    # sets up the generator (Reply should consume iter_obj)
    do_stuff(iter_obj)
    return Reply(iter_obj)
Generators are just one form of iterators. Anything that implements the iterator protocol will do.
This means you can replace your nested function with an object with more attributes:
class Nested:
    def __iter__(self):
        self.some_flag = some_test()
        yield data
I even implemented the __iter__ method as a generator function.
Then use the object in your generator at will:
n = Nested()
yield from n
if n.some_flag:
    # ...
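Putting the pieces together as a self-contained sketch (some_test and data are stand-ins for the question's names):

def some_test():
    return True

data = [1, 2, 3]

class Nested:
    def __iter__(self):
        self.some_flag = some_test()
        yield from data

def do_stuff():
    n = Nested()
    yield from n
    if n.some_flag:
        yield 'flag was set'

print(list(do_stuff()))   # [1, 2, 3, 'flag was set']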
Another method is to raise exceptions: if you are trying to communicate some out-of-band state change, raise an exception and catch it in the parent generator.
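A minimal sketch of that exception-based approach (the exception class and generator names are made up for illustration):

class OutOfBand(Exception):
    pass

def nested():
    yield 1
    raise OutOfBand('state changed')

def parent():
    try:
        yield from nested()
    except OutOfBand as exc:
        # react to the out-of-band state change in the parent generator
        yield f'caught: {exc}'

print(list(parent()))   # [1, 'caught: state changed']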