My understanding of yield from is that it is similar to yielding every item from an iterable. Yet, I observe different behavior in the following example.
I have Class1
class Class1:
    def __init__(self, gen):
        self.gen = gen

    def __iter__(self):
        for el in self.gen:
            yield el
and Class2, which differs only in replacing the yield inside the for loop with yield from:
class Class2:
    def __init__(self, gen):
        self.gen = gen

    def __iter__(self):
        yield from self.gen
The code below reads the first element from an instance of a given class and then reads the rest in a for loop:
a = Class1((i for i in range(3)))
print(next(iter(a)))
for el in iter(a):
    print(el)
This produces different outputs for Class1 and Class2. For Class1 the output is
0
1
2
and for Class2 the output is
0
What is the mechanism behind yield from that produces different behavior?
What Happened?
When you use next(iter(instance_of_Class2)), the iterator created by iter() calls .close() on the inner generator when it (the iterator, not the inner generator!) goes out of scope and is deleted, while with Class1 the discarded iterator only closes itself:
>>> g = (i for i in range(3))
>>> b = Class2(g)
>>> i = iter(b) # hold iterator open
>>> next(i)
0
>>> next(i)
1
>>> del(i) # closes g
>>> next(iter(b))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
This behavior is described in PEP 342 in two parts:
the new .close() method (well, new as of Python 2.5)
and this line from the Specification Summary:
Add support to ensure that close() is called when a generator iterator is garbage-collected.
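You can watch this happen (a sketch; the try/finally is only instrumentation, and the exact timing relies on CPython's reference counting) by giving a generator a finally clause and dropping its only reference:
>>> def noisy():
...     try:
...         yield 1
...     finally:
...         print("closed")
...
>>> g = noisy()
>>> next(g)
1
>>> del g  # on CPython the refcount hits zero here and close() runs
closed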
What happens is a little clearer (if perhaps surprising) when multiple generator delegations occur; only the generator currently being delegated to is closed when its wrapping iterator is deleted:
>>> g1 = (a for a in range(10))
>>> g2 = (a for a in range(10, 20))
>>> def test3():
...     yield from g1
...     yield from g2
...
>>> next(test3())
0
>>> next(test3())
10
>>> next(test3())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
Fixing Class2
What options are there to make Class2 behave more the way you expect?
Notably, the other strategies, though they lack the visually pleasing sugar of yield from and some of its potential benefits, give you a way to interact with the values, which seems like a primary benefit:
avoid creating a structure like this at all ("just don't do that!")
if you don't interact with the generator and don't intend to keep a reference to the iterator, why bother wrapping it at all? (see above comment about interacting)
create the iterator yourself internally (this may be what you expected)
>>> class Class3:
...     def __init__(self, gen):
...         self.iterator = iter(gen)
...
...     def __iter__(self):
...         return self.iterator
...
>>> c = Class3((i for i in range(3)))
>>> next(iter(c))
0
>>> next(iter(c))
1
make the whole class a "proper" Generator
while testing this, it plausibly highlights some iter() inconsistency - see comments below (i.e. why isn't e closed?)
also an opportunity to pass multiple generators with itertools.chain.from_iterable (see the sketch after the example below)
>>> import collections.abc
>>> class Class5(collections.abc.Generator):
...     def __init__(self, gen):
...         self.gen = gen
...     def send(self, value):
...         return next(self.gen)
...     def throw(self, value):
...         raise StopIteration
...     def close(self):  # optional, but more complete
...         self.gen.close()
...
>>> e = Class5((i for i in range(10)))
>>> next(e) # NOTE iter is not necessary!
0
>>> next(e)
1
>>> next(iter(e)) # but still works
2
>>> next(iter(e)) # doesn't close e?? (should it?)
3
>>> e.close()
>>> next(e)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.9/_collections_abc.py", line 330, in __next__
return self.send(None)
File "<stdin>", line 5, in send
StopIteration
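As mentioned in the list above, this is also an opportunity to flatten several generators into one before wrapping them (a sketch reusing Class5; the inputs are arbitrary examples):
>>> import itertools
>>> gens = ((i for i in range(3)), (i for i in range(10, 13)))
>>> e2 = Class5(itertools.chain.from_iterable(gens))
>>> list(e2)  # Generator ABC supplies __iter__ and __next__ via send()
[0, 1, 2, 10, 11, 12]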
Hunting the Mystery
A better clue is that if you immediately try again, next(iter(instance)) raises StopIteration, indicating the generator is permanently closed (whether through exhaustion or .close()), which is why iterating over it with a for loop yields no more values:
>>> a = Class1((i for i in range(3)))
>>> next(iter(a))
0
>>> next(iter(a))
1
>>> b = Class2((i for i in range(3)))
>>> next(iter(b))
0
>>> next(iter(b))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
However, if we name the iterator, it works as expected
>>> b = Class2((i for i in range(3)))
>>> i = iter(b)
>>> next(i)
0
>>> next(i)
1
>>> j = iter(b)
>>> next(j)
2
>>> next(i)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
To me, this suggests that when the iterator doesn't have a name, .close() is called on the delegated generator when the unnamed iterator goes out of scope:
>>> def gen_test(iterable):
...     yield from iterable
...
>>> g = gen_test((i for i in range(3)))
>>> next(iter(g))
0
>>> g.close()
>>> next(iter(g))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
Disassembling the two __iter__ methods, we find the internals are a little different:
>>> a = Class1((i for i in range(3)))
>>> dis.dis(a.__iter__)
6 0 LOAD_FAST 0 (self)
2 LOAD_ATTR 0 (gen)
4 GET_ITER
>> 6 FOR_ITER 10 (to 18)
8 STORE_FAST 1 (el)
7 10 LOAD_FAST 1 (el)
12 YIELD_VALUE
14 POP_TOP
16 JUMP_ABSOLUTE 6
>> 18 LOAD_CONST 0 (None)
20 RETURN_VALUE
>>> b = Class2((i for i in range(3)))
>>> dis.dis(b.__iter__)
6 0 LOAD_FAST 0 (self)
2 LOAD_ATTR 0 (gen)
4 GET_YIELD_FROM_ITER
6 LOAD_CONST 0 (None)
8 YIELD_FROM
10 POP_TOP
12 LOAD_CONST 0 (None)
14 RETURN_VALUE
Notably, the yield from version uses GET_YIELD_FROM_ITER:
If TOS is a generator iterator or coroutine object it is left as is. Otherwise, implements TOS = iter(TOS).
(subtly, the YIELD_FROM opcode appears to be removed in 3.11)
So if the iterable given to the class is a generator iterator, it'll be handed off directly, giving the result we (might) expect
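One consequence (a sketch; the wrap helper name is mine): closing a delegating generator also closes a delegated generator, while a plain iterator, which has no close() method, is left usable:
>>> def wrap(it):
...     yield from it
...
>>> inner = (i for i in range(3))
>>> w = wrap(inner)
>>> next(w)
0
>>> w.close()  # GeneratorExit reaches inner, which is closed too
>>> next(inner, 'closed')
'closed'
>>> li = iter([0, 1, 2])
>>> w2 = wrap(li)
>>> next(w2)
0
>>> w2.close()  # a list_iterator has no close(); it survives
>>> next(li, 'closed')
1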
Extras
Passing an iterable which isn't a generator iterator (here a list; iter() creates a fresh iterator each time, in both cases):
>>> a = Class1([i for i in range(3)])
>>> next(iter(a))
0
>>> next(iter(a))
0
>>> b = Class2([i for i in range(3)])
>>> next(iter(b))
0
>>> next(iter(b))
0
Expressly closing Class1's internal generator
>>> g = (i for i in range(3))
>>> a = Class1(g)
>>> next(iter(a))
0
>>> next(iter(a))
1
>>> a.gen.close()
>>> next(iter(a))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
the inner generator is only closed when the deleted iterator has actually been advanced (a never-started iterator is deleted harmlessly):
>>> g = (i for i in range(10))
>>> b = Class2(g)
>>> i = iter(b)
>>> next(i)
0
>>> j = iter(b)
>>> del(j) # next() not called on j
>>> next(i)
1
>>> j = iter(b)
>>> next(j)
2
>>> del(j) # generator closed
>>> next(i) # now fails, despite range(10) above
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
updated
I don't see it as that complicated, and the resulting behavior can be seen as actually unsurprising.
When the iterator goes out of scope, Python will throw a "GeneratorExit" exception in the (innermost) generator.
On the "classic" for form, the exception happens in the user-written __iter__ method, is not catch, and is suppressed when bubbling up by the generator mechanisms.
On the yield from form, the same exception is thrown in the inner self.gen, thus "killing" it, and bubbles up to the user-written __iter__ .
Writing another intermediate generator can make this easily visible:
def inner_gen(gen):
    try:
        for item in gen:
            yield item
    except GeneratorExit:
        print("Generator exit thrown in inner generator")

class Class1:
    def __init__(self, gen):
        self.gen = inner_gen(gen)

    def __iter__(self):
        try:
            for el in self.gen:
                yield el
        except GeneratorExit:
            print("Generator exit thrown in outer generator for 'classic' form")

class Class2(Class1):
    def __iter__(self):
        try:
            yield from self.gen
        except GeneratorExit:
            print("Generator exit thrown in outer generator for 'yield from' form")

first = lambda g: next(iter(g))
And now:
In [324]: c1 = Class1((i for i in range(2)))
In [325]: first(c1)
Generator exit thrown in outer generator for 'classic' form
Out[325]: 0
In [326]: first(c1)
Generator exit thrown in outer generator for 'classic' form
Out[326]: 1
In [327]: c2 = Class2((i for i in range(2)))
In [328]: first(c2)
Generator exit thrown in inner generator
Generator exit thrown in outer generator for 'yield from' form
Out[328]: 0
In [329]: first(c2)
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
Cell In[329], line 1
(...)
StopIteration:
update
I had a previous version of this answer speculating about how the call to close would take place, skipping the intermediate generator - it is not that simple regarding close, though: Python will always call __del__, not close, which is only called by the user or in certain circumstances that were hard to pin down. But it will always throw the GeneratorExit exception in a generator-function body (not in a class with explicit __next__ and throw, though - let's leave that for another question :-D )
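A sketch of that last distinction (the class and function names are mine): the generator body gets GeneratorExit delivered at the suspended yield, while a hand-written iterator class merely has __del__ called:
def genfunc():
    try:
        yield 1
    finally:
        print("GeneratorExit reached the generator body")

class HandWritten:
    def __iter__(self):
        return self
    def __next__(self):
        return 1
    def __del__(self):
        print("plain __del__; nothing is thrown into __next__")

g = genfunc()
next(g)
del g  # prints: GeneratorExit reached the generator body

h = iter(HandWritten())
next(h)
del h  # prints: plain __del__; nothing is thrown into __next__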
Suppose I have a function which produces a value, say:
import random

def f():
    return random.randint(1, 10)
Elsewhere in my program, I want to iterate over a sequence of values produced by this function:
def g(n):
    while True:
        k = f()
        if n == k:
            return
        else:
            print("x")
Using the while True statement seems inelegant, and I would prefer to use a generator, like the following:
def f2():
    while True:
        yield random.randint(1, 10)

def g(n):
    for k in f2():
        if n == k:
            return
        else:
            print("x")
However, this just pushes the issue of the while True into the definition of f2(). Is there a built-in function which takes in a function and returns a generator, like the following pseudocode:
def make_generator_from_function(f):
    def gen_f():
        while True:
            yield f()
    return gen_f()

def g3(n):
    for k in make_generator_from_function(f):
        if n == k:
            return
        else:
            print("x")
The built-in iter function can do this:
If the second argument, sentinel, is given, then object must be a callable object. The iterator created in this case will call object with no arguments for each call to its __next__() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned.
Example: the number 5 is the sentinel, so the iteration will stop when random.randint(1, 10) returns 5.
>>> import random
>>> def f():
...     return random.randint(1, 10)
...
>>> iter(f, 5)
<callable_iterator object at 0x7f3e11becc50>
>>> for x in iter(f, 5):
...     print(x)
...
3
3
1
10
9
7
9
6
10
10
9
7
7
3
2
7
2
3
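Applied to the question's g(n) (a sketch, assuming the same f as above), the sentinel form removes the while True entirely:
def g(n):
    for k in iter(f, n):  # stops as soon as f() returns n
        print("x")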
You could use itertools.repeat() in a generator expression like:
Code:
import itertools as it
import random

for i in (random.randint(1, 10) for _ in it.repeat(1)):
    if i == 5:
        break
    print(i)
Results:
6
4
1
I am new to Python, coming from a C++ background, and this is the first time I am seeing a language which contains nothing but objects. I have just learned that classes and functions are also just objects. So, is there a way to convert the following function to a class?
In [1]: def somefnc(a, b):
   ...:     return a+b
   ...:
I first tried assigning None to the __call__ attribute to take away the function's "callable nature". As you can see, __call__ was successfully replaced by None, but this didn't cause the function to stop adding numbers when called, even though somefnc.__call__(1, 3) was working before the assignment.
In [2]: somefnc.__dict__
Out[2]: {}
In [3]: somefnc.__call__
Out[3]: <method-wrapper '__call__' of function object at 0x7f282e8ff7b8>
In [4]: somefnc.__call__ = None
In [5]: x = somefnc(1, 2)
In [6]: print(x)
3
In [7]: somefnc.__call__
In [8]: print(somefnc.__call__(1, 2))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
ipython-input-8-407663da97ca
in <module>()
----> 1 print(somefnc.__call__(1, 2))
TypeError: 'NoneType' object is not callable
In [9]: print (somefnc(1,2))
3
In [10]:
I am not doing this for developing purpose here, so claiming this to be a bad practice will not make any sense. I am just trying to understand Python very well. Of course, for development purpose, I could rather create a class than to convert a function to one!
After robbing the function of its ability to add two numbers, I was thinking of assigning a valid function to the attribute somefnc.__init__ and adding some members by modifying somefnc.__dict__, to convert it to a class.
In Python, functions are instances of the function class, so I'll give you a general answer that applies to any class and instance.
In [10]: class Test:
    ...:     def __getitem__(self, i):
    ...:         return i
    ...:
In [11]: t = Test()
In [12]: t[0]
Out[12]: 0
In [13]: t.__getitem__(0)
Out[13]: 0
In [14]: t.__getitem__ = None
In [15]: t[0]
Out[15]: 0
In [16]: t.__getitem__(0)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-16-c72f91d2bfbc> in <module>()
----> 1 t.__getitem__(0)
TypeError: 'NoneType' object is not callable
In Python all special methods (the ones with double underscores in the prefix and the postfix) are accessed from the class when triggered via operators, not from the instance.
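So assigning on the class itself, rather than on the instance, does change the operator's behavior (a sketch; the class name Test3 is mine):
class Test3:
    def __getitem__(self, i):
        return i

t3 = Test3()
t3[0]                     # 0
Test3.__getitem__ = None  # replace on the class, not the instance
t3[0]                     # TypeError: 'NoneType' object is not callable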
In [17]: class Test2:
    ...:     def test(self, i):
    ...:         return i
    ...:
In [18]: t = Test2()
In [19]: t.test(1)
Out[19]: 1
In [20]: t.test = None
In [20]: t.test(1)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-261b43cb55fe> in <module>()
----> 1 t.test(1)
TypeError: 'NoneType' object is not callable
All methods are accessed via the instance first, when accessed by name. The difference is due to different search mechanics. When you access a method/attribute by name, you invoke __getattribute__ which will first search in the instance's namespace by default. When you trigger a method via operators, __getattribute__ is not invoked. You can see it in the disassembly.
In [22]: import dis
In [23]: def test():
    ...:     return Test()[0]
    ...:
In [24]: dis.dis(test)
2 0 LOAD_GLOBAL 0 (Test)
3 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
6 LOAD_CONST 1 (0)
9 BINARY_SUBSCR
10 RETURN_VALUE
In [25]: def test2():
    ...:     return Test().__getitem__(0)
    ...:
In [26]: dis.dis(test2)
2 0 LOAD_GLOBAL 0 (Test)
3 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
6 LOAD_ATTR 1 (__getitem__)
9 LOAD_CONST 1 (0)
12 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
15 RETURN_VALUE
As you can see, there is no LOAD_ATTR in the first case. The [] operator is compiled to a dedicated virtual-machine instruction, BINARY_SUBSCR.
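In other words, the operator is dispatched roughly as if the method were looked up on the type rather than the instance (a sketch; the real slot lookup has further subtleties):
t = Test()
# t[0] behaves approximately like:
type(t).__getitem__(t, 0)  # the instance namespace is never consulted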
I wanted to understand a bit more about iterators, so please correct me if I'm wrong.
An iterator is an object which has a pointer to the next object and is read as a buffer or stream (i.e. a linked list). They're particularly efficient because all they do is tell you what is next, by reference, instead of using indexing.
However I still don't understand why is the following behavior happening:
In [1]: iter = (i for i in range(5))
In [2]: for _ in iter:
   ....:     print _
   ....:
0
1
2
3
4
In [3]: for _ in iter:
   ....:     print _
   ....:
In [4]:
After a first loop through the iterator (In [2]) it's as if it was consumed and left empty, so the second loop (In [3]) prints nothing.
However I never assigned a new value to the iter variable.
What is really happening under the hood of the for loop?
Your suspicion is correct: the iterator has been consumed.
In actuality, your iterator is a generator, which is an object that can be iterated through only once.
type((i for i in range(5))) # says it's type generator
def another_generator():
    yield 1  # the yield expression makes it a generator, not a function

type(another_generator())  # also a generator
The reason they are efficient has nothing to do with telling you what is next "by reference." They are efficient because they only generate the next item upon request; all of the items are not generated at once. In fact, you can have an infinite generator:
def my_gen():
    while True:
        yield 1  # again: yield means it is a generator, not a function

for _ in my_gen(): print(_)  # hit ctrl+c to stop this infinite loop!
Some other corrections to help improve your understanding:
The generator is not a pointer, and does not behave like a pointer as you might be familiar with in other languages.
One of the differences from other languages: as said above, each result of the generator is generated on the fly. The next result is not produced until it is requested.
The keyword combination for in accepts an iterable object as its second argument.
The iterable object can be a generator, as in your example case, but it can also be any other iterable object, such as a list, or dict, or a str object (string), or a user-defined type that provides the required functionality.
The iter function is applied to the object to get an iterator (by the way: don't use iter as a variable name in Python, as you have done - it shadows the built-in function). Actually, to be more precise, the object's __iter__ method is called (which is, for the most part, all the iter function does anyway; __iter__ is one of Python's so-called "magic methods").
If the call to __iter__ is successful, the function next() is applied to the iterable object over and over again, in a loop, and the first variable supplied to for in is assigned to the result of the next() function. (Remember: the iterable object could be a generator, or a container object's iterator, or any other iterable object.) Actually, to be more precise: it calls the iterator object's __next__ method, which is another "magic method".
The for loop ends when next() raises the StopIteration exception (which usually happens when the iterable does not have another object to yield when next() is called).
You can "manually" implement a for loop in python this way (probably not perfect, but close enough):
try:
    temp = iterable.__iter__()
except AttributeError:
    raise TypeError("'{}' object is not iterable".format(type(iterable).__name__))
else:
    while True:
        try:
            _ = temp.__next__()
        except StopIteration:
            break
        except AttributeError:
            raise TypeError("iter() returned non-iterator of type '{}'".format(type(temp).__name__))
        # this is the "body" of the for loop
        continue
There is pretty much no difference between the above and your example code.
Actually, the more interesting part of a for loop is not the for, but the in. Using in by itself produces a different effect than for in, but it is very useful to understand what in does with its arguments, since for in implements very similar behavior.
When used by itself, the in keyword first calls the object's __contains__ method, which is yet another "magic method" (note that this step is skipped when using for in). Using in by itself on a container, you can do things like this:
1 in [1, 2, 3] # True
'He' in 'Hello' # True
3 in range(10) # True
'eH' in 'Hello'[::-1] # True
If the iterable object is NOT a container (i.e. it doesn't have a __contains__ method), in next tries to call the object's __iter__ method. As was said previously: the __iter__ method returns what is known in Python as an iterator. Basically, an iterator is an object that you can use the built-in generic function next() on1. A generator is just one type of iterator.
If the call to __iter__ is successful, the in keyword applies the function next() to the iterable object over and over again. (Remember: the iterable object could be a generator, or a container object's iterator, or any other iterable object.) Actually, to be more precise: it calls the iterator object's __next__ method).
If the object doesn't have a __iter__ method to return an iterator, in then falls back on the old-style iteration protocol using the object's __getitem__ method2.
If all of the above attempts fail, you'll get a TypeError exception.
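For example, an int supports none of __contains__, __iter__, or __getitem__:
>>> 1 in 5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: argument of type 'int' is not iterable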
If you wish to create your own object type to iterate over (i.e, you can use for in, or just in, on it), it's useful to know about the yield keyword, which is used in generators (as mentioned above).
class MyIterable():
    def __iter__(self):
        yield 1

m = MyIterable()
for _ in m: print(_)  # 1
1 in m  # True
The presence of yield turns a function or method into a generator instead of a regular function/method. You don't need the __next__ method if you use a generator (it brings __next__ along with it automatically).
If you wish to create your own container object type (i.e, you can use in on it by itself, but NOT for in), you just need the __contains__ method.
class MyUselessContainer():
    def __contains__(self, obj):
        return True

m = MyUselessContainer()
1 in m  # True
'Foo' in m  # True
TypeError in m  # True
None in m  # True
1 Note that, to be an iterator, an object must implement the iterator protocol. This only means that both the __next__ and __iter__ methods must be correctly implemented (generators come with this functionality "for free", so you don't need to worry about it when using them). Also note that the __next__ method is actually next (no underscores) in Python 2.
2 See this answer for the different ways to create iterable classes.
A for loop basically calls the next method of the object it is applied to (__next__ in Python 3).
You can simulate this simply by doing:
iter = (i for i in range(5))
print(next(iter))
print(next(iter))
print(next(iter))
print(next(iter))
print(next(iter))
# this prints 0 1 2 3 4
At this point there is no next element in the input object. So doing this:
print(next(iter))
Will result in a StopIteration exception being thrown, at which point the for loop stops. An iterator can be any object which responds to the next() function and throws StopIteration when there are no more elements. It does not have to be a pointer or reference (there are no such things in Python in the C/C++ sense anyway), a linked list, etc.
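For instance, here is a minimal hand-written iterator (a sketch in Python 3 spelling; the class name is mine), no generator involved:
class CountDown:
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self  # an iterator is its own iterable
    def __next__(self):
        if self.n <= 0:
            raise StopIteration  # tells the for loop to stop
        self.n -= 1
        return self.n

for x in CountDown(3):
    print(x)  # prints 2, 1, 0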
There is an iterator protocol in Python that defines how the for statement behaves with lists, dicts, and other things that can be looped over.
It's in the Python docs here and here.
The way the iterator protocol typically works is in the form of a Python generator: we yield a value as long as we have one, and raise StopIteration when we reach the end.
So let's write our own iterator:
def my_iter():
    yield 1
    yield 2
    yield 3
    raise StopIteration()  # note: unnecessary - falling off the end raises StopIteration automatically (and in Python 3.7+ an explicit raise here becomes a RuntimeError, per PEP 479)

for i in my_iter():
    print i
The result is:
1
2
3
A couple of things to note about that: my_iter is a function; my_iter() returns an iterator.
If I had used the iterator like this instead:
j = my_iter()  # j is the iterator that my_iter() returns
for i in j:
    print i  # this loop runs until the iterator is exhausted
for i in j:
    print i  # the iterator is exhausted so we never reach this line
And the result is the same as above. The iterator is exhausted by the time we enter the second for loop.
But that's rather simplistic. What about something more complicated? Perhaps a loop inside the generator, why not?
def capital_iter(name):
    for x in name:
        yield x.upper()
    raise StopIteration()  # again unnecessary; the generator stops by itself

for y in capital_iter('bobert'):
    print y
When it runs, we iterate over the string type (strings support iteration out of the box). This, in turn, allows us to run a for loop on it and yield the results until we are done.
B
O
B
E
R
T
So now this begs the question: what happens between the yields in the iterator?
j = capital_iter("bobert")
print j.next()
print j.next()
print j.next()
print("Hey there!")
print j.next()
print j.next()
print j.next()
print j.next()  # raises StopIteration
The answer is that the function is paused at the yield, waiting for the next call to next().
B
O
B
Hey there!
E
R
T
Traceback (most recent call last):
File "", line 13, in
StopIteration
Some additional details about the behaviour of iter() with __getitem__ classes that lack their own __iter__ method.
Before __iter__ there was __getitem__. If __getitem__ works with ints from 0 to len(obj) - 1, then iter() supports such objects. It constructs a new iterator that repeatedly calls __getitem__ with 0, 1, 2, ... until it gets an IndexError, which it converts to StopIteration.
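A sketch (the class name is mine) of this old-style protocol in action:
class Squares:
    # no __iter__ at all; only __getitem__ accepting 0, 1, 2, ...
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        if i >= self.n:
            raise IndexError  # iter() turns this into StopIteration
        return i * i

print(list(Squares(4)))  # prints [0, 1, 4, 9]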
See this answer for more details of the different ways to create an iterator.
Excerpt from the Python Practice book:
5. Iterators & Generators
5.1. Iterators
We use the for statement for looping over a list.
>>> for i in [1, 2, 3, 4]:
...     print i,
...
1
2
3
4
If we use it with a string, it loops over its characters.
>>> for c in "python":
... print c
...
p
y
t
h
o
n
If we use it with a dictionary, it loops over its keys.
>>> for k in {"x": 1, "y": 2}:
... print k
...
y
x
If we use it with a file, it loops over lines of the file.
>>> for line in open("a.txt"):
... print line,
...
first line
second line
So there are many types of objects which can be used with a for loop. These are called iterable objects.
There are many functions which consume these iterables.
>>> ",".join(["a", "b", "c"])
'a,b,c'
>>> ",".join({"x": 1, "y": 2})
'y,x'
>>> list("python")
['p', 'y', 't', 'h', 'o', 'n']
>>> list({"x": 1, "y": 2})
['y', 'x']
5.1.1. The Iteration Protocol
The built-in function iter takes an iterable object and returns an iterator.
>>> x = iter([1, 2, 3])
>>> x
<listiterator object at 0x1004ca850>
>>> x.next()
1
>>> x.next()
2
>>> x.next()
3
>>> x.next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
Each time we call the next method on the iterator, it gives us the next element. If there are no more elements, it raises StopIteration.
Iterators are implemented as classes. Here is an iterator that works like the built-in xrange function.
class yrange:
    def __init__(self, n):
        self.i = 0
        self.n = n

    def __iter__(self):
        return self

    def next(self):
        if self.i < self.n:
            i = self.i
            self.i += 1
            return i
        else:
            raise StopIteration()
The __iter__ method is what makes an object iterable. Behind the scenes, the iter function calls the __iter__ method on the given object.
The return value of __iter__ is an iterator. It should have a next method and raise StopIteration when there are no more elements.
Let's try it out:
>>> y = yrange(3)
>>> y.next()
0
>>> y.next()
1
>>> y.next()
2
>>> y.next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 14, in next
StopIteration
Many built-in functions accept iterators as arguments.
>>> list(yrange(5))
[0, 1, 2, 3, 4]
>>> sum(yrange(5))
10
In the above case, both the iterable and the iterator are the same object. Notice that the __iter__ method returned self. That need not always be the case.
class zrange:
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return zrange_iter(self.n)

class zrange_iter:
    def __init__(self, n):
        self.i = 0
        self.n = n

    def __iter__(self):
        # Iterators are iterables too.
        # Adding this function makes them so.
        return self

    def next(self):
        if self.i < self.n:
            i = self.i
            self.i += 1
            return i
        else:
            raise StopIteration()
If both the iterable and the iterator are the same object, it is consumed in a single iteration.
>>> y = yrange(5)
>>> list(y)
[0, 1, 2, 3, 4]
>>> list(y)
[]
>>> z = zrange(5)
>>> list(z)
[0, 1, 2, 3, 4]
>>> list(z)
[0, 1, 2, 3, 4]
5.2. Generators
Generators simplify the creation of iterators. A generator is a function that produces a sequence of results instead of a single value.
def yrange(n):
    i = 0
    while i < n:
        yield i
        i += 1
Each time the yield statement is executed the function generates a new value.
>>> y = yrange(3)
>>> y
<generator object yrange at 0x401f30>
>>> y.next()
0
>>> y.next()
1
>>> y.next()
2
>>> y.next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
So a generator is also an iterator. You don’t have to worry about the iterator protocol.
The word “generator” is confusingly used to mean both the function that generates and what it generates. In this chapter, I’ll use the word “generator” to mean the generated object and “generator function” to mean the function that generates it.
Can you think about how it is working internally?
When a generator function is called, it returns a generator object without even beginning execution of the function. When the next method is called for the first time, the function starts executing until it reaches a yield statement. The yielded value is returned by that next call.
The following example demonstrates the interplay between yield and calls to the next method on a generator object.
>>> def foo():
...     print "begin"
...     for i in range(3):
...         print "before yield", i
...         yield i
...         print "after yield", i
...     print "end"
...
>>> f = foo()
>>> f.next()
begin
before yield 0
0
>>> f.next()
after yield 0
before yield 1
1
>>> f.next()
after yield 1
before yield 2
2
>>> f.next()
after yield 2
end
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
Let's see an example:
def integers():
    """Infinite sequence of integers."""
    i = 1
    while True:
        yield i
        i = i + 1

def squares():
    for i in integers():
        yield i * i

def take(n, seq):
    """Returns first n values from the given sequence."""
    seq = iter(seq)
    result = []
    try:
        for i in range(n):
            result.append(seq.next())
    except StopIteration:
        pass
    return result

print take(5, squares())  # prints [1, 4, 9, 16, 25]
Concept 1
All generators are iterators, but not all iterators are generators.
Concept 2
An iterator is an object with a next (Python 2) or __next__ (Python 3) method.
Concept 3
Quoting from the wiki:
Generator functions allow you to declare a function that behaves like an iterator, i.e. it can be used in a for loop.
In your case
>>> it = (i for i in range(5))
>>> type(it)
<type 'generator'>
>>> callable(getattr(it, 'iter', None))
False
>>> callable(getattr(it, 'next', None))
True