Iterate twice through an object - python

I'm trying to create an iterable object, and when I do 1 loop it is okay, but when doing multiple loops, it doesn't work. Here is my simplified code:
class test():
    def __init__(self):
        self.n = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.n < len(self)-1:
            self.n += 1
            return self.n
        else:
            raise StopIteration
    def __len__(self):
        return 5
#this is an example iteration
test = test()
for i in test:
    for j in test:
        print(i, j)
#what it prints is
1 2
1 3
1 4
#What I expect is
1 1
1 2
1 3
1 4
2 1
2 2
2 3
...
4 3
4 4
How can I make this object (in this case test) iterable more than once, so that I get all the combinations of i and j in the example loop?

You want an instance of test to be iterable, but not its own iterator. What's the difference?
An iterable is something that, upon request, can supply an iterator. Lists are iterable, because iter([1,2,3]) returns a new listiterator object (not the list itself). To make test iterable, you just need to supply an __iter__ method (more on how to define it in a bit).
An iterator is something that, upon request, can produce a new element. It does this by calling its __next__ method. An iterator can be thought of as two pieces of information: a sequence of items to produce, and a cursor indicating how far along that sequence it currently is. When it reaches the end of its sequence, it raises a StopIteration exception to indicate that the iteration is at an end. To make an instance an iterator, you supply a __next__ method in its class. An iterator should also have a __iter__ method that just returns itself.
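A minimal sketch of that distinction, using a plain list:

```python
# A list is iterable; each call to iter() hands back a fresh iterator
# with its own cursor over the same underlying sequence.
nums = [1, 2, 3]
it1 = iter(nums)
it2 = iter(nums)

print(next(it1))    # 1
print(next(it1))    # 2
print(next(it2))    # 1 -- it2 keeps its own position, unaffected by it1
print(nums is it1)  # False -- the list is not its own iterator
```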
So how do you make test iterable without being an iterator? By having its __iter__ method return a new iterator each time it is called, and getting rid of its __next__ method. The simplest way to do that is to make __iter__ a generator function. Define your class something like:
class Test():
    def __init__(self):
        self._size = 5
    def __iter__(self):
        n = 0
        while n < self._size:
            yield n
            n += 1
    def __len__(self):
        return self._size
Now when you write
test = Test()
for i in test:      # implicit call to iter(test)
    for j in test:  # implicit call to iter(test)
        print(i, j)
i and j both draw values from separate iterators over the same iterable. Each call to test.__iter__ returns a different generator object that keeps track of its own n.
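A compact check of that behavior (a sketch using a smaller size so the result is easy to count):

```python
class Test:
    def __init__(self):
        self._size = 3
    def __iter__(self):
        # a brand-new generator object is created on every call
        n = 0
        while n < self._size:
            yield n
            n += 1

t = Test()
pairs = [(i, j) for i in t for j in t]  # nested loops over the same instance
print(len(pairs))  # 9 -- the full 3 x 3 product
print(pairs[:4])   # [(0, 0), (0, 1), (0, 2), (1, 0)]
```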

Take a look at itertools.product.
You should be able to accomplish what you're looking for:
from itertools import product
...
test = test()
for i, j in product(test, repeat=2):
    print(i, j)
I love this library!
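Worth noting: product materializes each input into a tuple up front, so it even copes with one-shot iterators. A small sketch of its equivalence with nested loops:

```python
from itertools import product

xs = [1, 2]
nested = [(i, j) for i in xs for j in xs]
print(list(product(xs, repeat=2)) == nested)  # True
print(nested)  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```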


When I create a class based generator, why do I have to call the next? [duplicate]

This question already has answers here:
How to write a generator class?
I'm new to Python and I'm reading the book Python Tricks. In the chapter about generators, it gives the following example (with some changes)
class BoundedGenerator:
    def __init__(self, value, max_times):
        self.value = value
        self.max_times = max_times
        self.count = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.count < self.max_times:
            self.count += 1
            yield self.value
After that, I write a loop, instantiate the generator and print the value:
for x in BoundedGenerator('Hello world', 4):
    print(next(x))
Why do I have to call the next(X) inside the loop?
I (think) I understand that the __iter__ function will be called in the loop line definition and the __next__ will be called in each iteration, but I don't understand why I have to call the next again inside the loop. Is this not redundant?
If I don't call the __next__ function, my loop will run forever.
Your __next__ method itself is a generator function due to using yield. It must be a regular function that uses return instead.
def __next__(self):
    if self.count < self.max_times:
        self.count += 1
        return self.value  # return to provide one value on call
    raise StopIteration    # raise to end iteration
When iterating, Python calls the iterator's __next__ method to receive the next value. If __next__ is a generator function, the call merely returns a generator object. This is the same behaviour as for any other generator function:
>>> def foo():
... yield 1
...
>>> foo()
<generator object foo at 0x106134ca8>
This requires you to call next on the generator to actually get a value. Similarly, since you defined BoundedGenerator.__next__ as a generator function, each iteration step provides only a new generator.
Using return instead of yield indeed returns the value, not a generator yielding said value. Additionally, you should raise StopIteration when done - this signals the end of the iteration.
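Putting both fixes together, a sketch of the corrected class:

```python
class BoundedGenerator:
    def __init__(self, value, max_times):
        self.value = value
        self.max_times = max_times
        self.count = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.count < self.max_times:
            self.count += 1
            return self.value  # return a value, don't yield
        raise StopIteration    # end the loop cleanly

print([x for x in BoundedGenerator('Hello world', 4)])
# ['Hello world', 'Hello world', 'Hello world', 'Hello world']
```

Note that `next(x)` is no longer needed inside the loop; the for loop drives `__next__` itself.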

Python function that produces both generator and aggregate results

What is the Pythonic way to make a generator that also produces aggregate results? In meta code, something like this (but not for real, as my Python version does not support mixing yield and return):
def produce():
    total = 0
    for item in find_all():
        total += 1
        yield item
    return total
As I see it, I could:
Not make produce() a generator, but pass it a callback function to call on every item.
With every yield, also yield the aggregate results up until now. I'd rather not calculate the intermediate results with every yield, only when finishing.
Send a dict as argument to produce() that will be populated with the aggregate results.
Use a global to store aggregate results.
None of these options seems very attractive...
NB. total is a simple example, my actual code requires complex aggregations. And I need intermediate results before produce() finishes, hence a generator.
Maybe you shouldn't use a generator but an iterator.
def findall():  # no idea what your "find_all" does, so I use this instead. :-)
    yield 1
    yield 2
    yield 3

class Produce(object):
    def __init__(self, iterable):
        self._it = iterable
        self.total = 0
    def __iter__(self):
        return self
    def __next__(self):
        value = next(self._it)  # propagates StopIteration when exhausted
        self.total += 1         # only count items actually produced
        return value
    next = __next__  # only necessary for Python 2 compatibility
Maybe better to see this with an example:
>>> it = Produce(findall())
>>> it.total
0
>>> next(it)
1
>>> next(it)
2
>>> it.total
2
you can use enumerate to count stuff, for example
i = 0
for i, v in enumerate(range(10), 1):
    print(v)
print("total", i)
(notice the start value of the enumerate)
for more complex stuff, you can use the same principle: make produce a generator that yields both the value and the running aggregate, ignore the aggregate during iteration, and use it when finished.
other alternative is passing a modifiable object, for example
def produce(mem):
    t = 0
    for x in range(10):
        t += 1
        yield x
    mem.append(t)

aggregate = []
for x in produce(aggregate):
    print(x)
print("total", aggregate[0])
in either case the result is the same for this example
0
1
2
3
4
5
6
7
8
9
total 10
Am I missing something? Why not:
def produce():
    total = 0
    for item in find_all():
        total += 1
        yield item
    yield total
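If you go this route, the caller has to know that the final yielded item is the aggregate and peel it off itself. One way to do that (a sketch, with a stand-in find_all since the original isn't shown):

```python
def find_all():  # hypothetical stand-in for the real find_all
    return iter(['a', 'b', 'c'])

def produce():
    total = 0
    for item in find_all():
        total += 1
        yield item
    yield total  # the last yielded value is the aggregate

*items, total = produce()  # everything but the last element is data
print(items)  # ['a', 'b', 'c']
print(total)  # 3
```

The drawback is that a plain for loop over produce() cannot distinguish real items from the trailing aggregate by itself.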

Equivalent code of __getitem__ in __iter__

I am trying to understand more about __iter__ in Python 3. For some reason I understand __getitem__ better than __iter__. I somehow don't get the corresponding __next__ implementation that goes along with __iter__.
I have this following code:
class Item:
    def __getitem__(self, pos):
        return range(0, 30, 10)[pos]

item1 = Item()
print(item1[1])  # 10
for i in item1:
    print(i)  # 0 10 20
I understand the code above, but then again how do i write the equivalent code using __iter__ and __next__() ?
class Item:
    def __iter__(self):
        return self
    #Lost here
    def __next__(self, pos):
        #Lost here
I understand when python sees a __getitem__ method, it tries iterating over that object by calling the method with the integer index starting with 0.
In general, a really good approach is to make __iter__ a generator by yielding values. This might be less intuitive but it is straight-forward; you just yield back the results you want and __next__ is then provided automatically for you:
class Item:
    def __iter__(self):
        for item in range(0, 30, 10):
            yield item
This just uses the power of yield to get the desired effect: when Python calls __iter__ on your object, it expects back an iterator (i.e. an object that supports __next__ calls). A generator does just that, producing each item as defined in your generator function (i.e. __iter__ in this case) when __next__ is called:
>>> i = iter(Item())
>>> print(i) # generator, supports __next__
<generator object __iter__ at 0x7f6aeaf9e6d0>
>>> next(i)
0
>>> next(i)
10
>>> next(i)
20
Now you get the same effect as __getitem__. The difference is that no index is passed in, you have to manually loop through it in order to yield the result:
>>> for i in Item():
... print(i)
0
10
20
Apart from this, there are two other alternatives for creating an object that supports iteration.
One time looping: Make item an iterator
Make Item an iterator by defining __next__ and returning self from __iter__. In this case, since you're not using yield, the __iter__ method returns self and __next__ handles the logic of returning values:
class Item:
    def __init__(self):
        self.val = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.val > 2:
            raise StopIteration
        res = range(0, 30, 10)[self.val]
        self.val += 1
        return res
This also uses an auxiliary val to get the result from the range and check if we should still be iterating (if not, we raise StopIteration):
>>> for i in Item():
... print(i)
0
10
20
The problem with this approach is that it is a one time ride, after iterating once, the self.val points to 3 and iteration can't be performed again. (using yield resolves this issue). (Yes, you could go and set val to 0 but that's just being sneaky.)
Many times looping: create custom iterator object.
The second approach is to use a custom iterator object specifically for your Item class and return it from Item.__iter__ instead of self:
class Item:
    def __iter__(self):
        return IterItem()

class IterItem:
    def __init__(self):
        self.val = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.val > 2:
            raise StopIteration
        res = range(0, 30, 10)[self.val]
        self.val += 1
        return res
Now every time you iterate a new custom iterator is supplied and you can support multiple iterations over Item objects.
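A quick sketch confirming that the separate-iterator version survives repeated iteration:

```python
class IterItem:
    def __init__(self):
        self.val = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.val > 2:
            raise StopIteration
        res = range(0, 30, 10)[self.val]
        self.val += 1
        return res

class Item:
    def __iter__(self):
        return IterItem()  # a fresh iterator per loop

item = Item()
print(list(item))  # [0, 10, 20]
print(list(item))  # [0, 10, 20] -- works a second time
```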
__iter__ returns an iterator, often a generator as @machineyearning said in the comments; with next you can iterate over the object. See the example:
class Item:
    def __init__(self):
        self.elems = range(10)
        self.current = 0
    def __iter__(self):
        return (x for x in self.elems)
    def __next__(self):
        if self.current >= len(self.elems):
            self.current = 0  # reset so the object can be stepped through again
            raise StopIteration
        value = self.elems[self.current]
        self.current += 1  # advance the cursor
        return value
>>> i = Item()
>>> a = iter(i)
>>> for x in a:
...     print(x)
...
0
1
2
3
4
5
6
7
8
9
>>> for x in i:
...     print(x)
...
0
1
2
3
4
5
6
7
8
9

can someone tell me how to understand the python program below(what do we use __iter__() to do?)

can someone tell me how that can happen?
I have a program like this:
class Fib(object):
    def __init__(self):
        self.a, self.b = 0, 1
    def __iter__(self):
        return self
    def next(self):
        self.a, self.b = self.b, self.a + self.b
        if self.a > 100000:
            raise StopIteration()
        return self.a
if I enter :
>>> for n in Fib():
...     print n
the output is:
1
1
2
3
5
...
46368
75025
The question is: I have no idea how __iter__ is related to next(self), or how the computer reads this program. Can someone explain it to me?
In Python, there is the concept of iterables and iterators. Loosely, an iterable is anything with an __iter__ method. An example is a list. When a for loop runs over a list, say:
for i in range(5):
    print i
Python obtains an iterator, which produces each result in turn.
Now an iterator (not to be confused with an iterable) is by itself any object with an __iter__ method and a next method. On every pass of the loop, the next method is called, which produces the next output, much like a for loop produces outputs within a range.
When the next call reaches the last element, a StopIteration error is raised, ending the loop. This is very analogous to how a for loop works behind the scenes.
In a nutshell, what your code is doing is that it is creating an iterator, and defining what happens every time the iterator wants to fetch the next value i.e the next() method.
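You can make that relationship concrete by driving the protocol by hand. A Python 3 sketch (so the method is spelled __next__, and the bound is lowered to keep the output short):

```python
class Fib:
    def __init__(self):
        self.a, self.b = 0, 1
    def __iter__(self):
        return self            # an iterator returns itself from __iter__
    def __next__(self):
        self.a, self.b = self.b, self.a + self.b
        if self.a > 100:
            raise StopIteration
        return self.a

it = iter(Fib())               # what the for loop does first
out = []
while True:
    try:
        out.append(next(it))   # what each loop step does
    except StopIteration:      # what ends the loop
        break
print(out)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```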
Check this out
class Fib:                           ①
    def __init__(self, max):         ②
        self.max = max
    def __iter__(self):              ③
        self.a = 0
        self.b = 1
        return self
    def next(self):                  ④
        fib = self.a
        if fib > self.max:
            raise StopIteration      ⑤
        self.a, self.b = self.b, self.a + self.b
        return fib                   ⑥
Thoroughly confused yet? Excellent. Let’s see how to call this
iterator:
from fibonacci2 import Fib
for n in Fib(1000):
    print n,
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
Here’s what happens:
The for loop calls Fib(1000), as shown. This returns an instance of the Fib class. Call this fib_inst.
Secretly, and quite cleverly, the for loop calls iter(fib_inst), which returns an iterator object. Call this fib_iter. In this case, fib_iter == fib_inst, because the __iter__() method returns self, but the for loop doesn't know (or care) about that.
To “loop through” the iterator, the for loop calls next(fib_iter), which calls the next() method on the fib_iter object, which does
the next-Fibonacci-number calculations and returns a value. The for
loop takes this value and assigns it to n, then executes the body of
the for loop for that value of n.
How does the for loop know when to stop? I’m glad you asked! When next(fib_iter) raises a StopIteration exception, the for loop will
swallow the exception and gracefully exit. (Any other exception will
pass through and be raised as usual.) And where have you seen a
StopIteration exception? In the next() method, of course!
Python has iterators, similar to Java's Iterator and C#'s IEnumerable<T>.
Read this:
https://docs.python.org/2/library/stdtypes.html#iterator-types
https://www.python.org/dev/peps/pep-0234/

How to build a basic iterator?

How would one create an iterative function (or iterator object) in python?
Iterator objects in python conform to the iterator protocol, which basically means they provide two methods: __iter__() and __next__().
The __iter__ returns the iterator object and is implicitly called
at the start of loops.
The __next__() method returns the next value and is implicitly called at each loop increment. This method raises a StopIteration exception when there are no more values to return, which is implicitly captured by looping constructs to stop iterating.
Here's a simple example of a counter:
class Counter:
    def __init__(self, low, high):
        self.current = low - 1
        self.high = high
    def __iter__(self):
        return self
    def __next__(self):  # Python 2: def next(self)
        self.current += 1
        if self.current < self.high:
            return self.current
        raise StopIteration
for c in Counter(3, 9):
    print(c)
This will print:
3
4
5
6
7
8
This is easier to write using a generator, as covered in a previous answer:
def counter(low, high):
    current = low
    while current < high:
        yield current
        current += 1

for c in counter(3, 9):
    print(c)
The printed output will be the same. Under the hood, the generator object supports the iterator protocol and does something roughly similar to the class Counter.
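You can verify that equivalence by poking at the generator with iter() and next() directly (a sketch):

```python
def counter(low, high):
    current = low
    while current < high:
        yield current
        current += 1

gen = counter(3, 6)
print(iter(gen) is gen)  # True -- a generator is its own iterator
print(next(gen))         # 3
print(next(gen))         # 4
print(list(gen))         # [5] -- list() drains whatever is left
```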
David Mertz's article, Iterators and Simple Generators, is a pretty good introduction.
There are four ways to build an iterative function:
create a generator (uses the yield keyword)
use a generator expression (genexp)
create an iterator (defines __iter__ and __next__ (or next in Python 2.x))
create a class that Python can iterate over on its own (defines __getitem__)
Examples:
# generator
def uc_gen(text):
    for char in text.upper():
        yield char

# generator expression
def uc_genexp(text):
    return (char for char in text.upper())

# iterator protocol
class uc_iter():
    def __init__(self, text):
        self.text = text.upper()
        self.index = 0
    def __iter__(self):
        return self
    def __next__(self):
        try:
            result = self.text[self.index]
        except IndexError:
            raise StopIteration
        self.index += 1
        return result

# getitem method
class uc_getitem():
    def __init__(self, text):
        self.text = text.upper()
    def __getitem__(self, index):
        return self.text[index]
To see all four methods in action:
for iterator in uc_gen, uc_genexp, uc_iter, uc_getitem:
    for ch in iterator('abcde'):
        print(ch, end=' ')
    print()
Which results in:
A B C D E
A B C D E
A B C D E
A B C D E
Note:
The two generator types (uc_gen and uc_genexp) cannot be reversed(); the plain iterator (uc_iter) would need the __reversed__ magic method (which, according to the docs, must return a new iterator, although returning self works, at least in CPython); and the getitem iterable (uc_getitem) must have the __len__ magic method:
# for uc_iter we add __reversed__ and update __next__
def __reversed__(self):
    self.index = -1
    return self

def __next__(self):
    try:
        result = self.text[self.index]
    except IndexError:
        raise StopIteration
    self.index += -1 if self.index < 0 else +1
    return result

# for uc_getitem
def __len__(self):
    return len(self.text)
To answer Colonel Panic's secondary question about an infinite lazily evaluated iterator, here are those examples, using each of the four methods above:
# generator
def even_gen():
    result = 0
    while True:
        yield result
        result += 2

# generator expression
def even_genexp():
    return (num for num in even_gen())  # or even_iter or even_getitem
                                        # not much value under these circumstances

# iterator protocol
class even_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value

# getitem method
class even_getitem():
    def __getitem__(self, index):
        return index * 2

import random
for iterator in even_gen, even_genexp, even_iter, even_getitem:
    limit = random.randint(15, 30)
    count = 0
    for even in iterator():
        print(even, end=' ')
        count += 1
        if count >= limit:
            break
    print()
Which results in (at least for my sample run):
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32
How to choose which one to use? This is mostly a matter of taste. The two methods I see most often are generators and the iterator protocol, as well as a hybrid (__iter__ returning a generator).
Generator expressions are useful for replacing list comprehensions (they are lazy and so can save on resources).
If one needs compatibility with earlier Python 2.x versions use __getitem__.
I see some of you doing return self in __iter__. I just wanted to note that __iter__ itself can be a generator (thus removing the need for __next__ and raising StopIteration exceptions)
class range:
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __iter__(self):
        i = self.a
        while i < self.b:
            yield i
            i += 1
Of course here one might as well directly make a generator, but for more complex classes it can be useful.
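A brief sketch of that pattern in use (renaming the class to Range here to avoid shadowing the builtin):

```python
class Range:  # renamed from `range` so the builtin stays usable
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __iter__(self):
        # calling iter() on a Range instance runs this generator function,
        # producing a fresh generator object each time
        i = self.a
        while i < self.b:
            yield i
            i += 1

r = Range(1, 4)
print(list(r))  # [1, 2, 3]
print(list(r))  # [1, 2, 3] -- re-iterable: each loop gets a new generator
```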
First of all, the itertools module is incredibly useful for all sorts of cases in which an iterator would be useful, but here is all you need to create an iterator in Python:
yield
Isn't that cool? Yield can be used to replace a normal return in a function. It returns the object just the same, but instead of destroying state and exiting, it saves state for when you want to execute the next iteration. Here is an example of it in action pulled directly from the itertools function list:
def count(n=0):
    while True:
        yield n
        n += 1
As stated in the function's description (it's the count() function from the itertools module...), it produces an iterator that returns consecutive integers starting with n.
Generator expressions are a whole other can of worms (awesome worms!). They may be used in place of a List Comprehension to save memory (list comprehensions create a list in memory that is destroyed after use if not assigned to a variable, but generator expressions can create a Generator Object... which is a fancy way of saying Iterator). Here is an example of a generator expression definition:
gen = (n for n in xrange(0,11))
This is very similar to our iterator definition above except the full range is predetermined to be between 0 and 10.
I just found xrange() (surprised I hadn't seen it before...) and added it to the above example. xrange() is an iterable version of range() which has the advantage of not prebuilding the list. It would be very useful if you had a giant corpus of data to iterate over and only had so much memory to do it in.
This question is about iterable objects, not about iterators. In Python, sequences are iterable too so one way to make an iterable class is to make it behave like a sequence, i.e. give it __getitem__ and __len__ methods. I have tested this on Python 2 and 3.
class CustomRange:
    def __init__(self, low, high):
        self.low = low
        self.high = high
    def __getitem__(self, item):
        if item >= len(self):
            raise IndexError("CustomRange index out of range")
        return self.low + item
    def __len__(self):
        return self.high - self.low

cr = CustomRange(0, 10)
for i in cr:
    print(i)
If you're looking for something short and simple, maybe this will be enough for you:
class A(object):
    def __init__(self, l):
        self.data = l
    def __iter__(self):
        return iter(self.data)
example of usage:
In [3]: a = A([2,3,4])
In [4]: [i for i in a]
Out[4]: [2, 3, 4]
All answers on this page are really great for a complex object. But for those containing built-in iterable types as attributes, like str, list, set or dict, or any implementation of collections.abc.Iterable, you can omit certain things in your class.
class Test(object):
    def __init__(self, string):
        self.string = string
    def __iter__(self):
        # since your string is already iterable
        return (ch for ch in self.string)
        # or simply:
        # return self.string.__iter__()
        # also:
        # return iter(self.string)
It can be used like:
for x in Test("abcde"):
    print(x)
# prints
# a
# b
# c
# d
# e
Include the following code in your class code.
def __iter__(self):
    for x in self.iterable:
        yield x
Make sure that you replace self.iterable with the iterable which you iterate through.
Here's an example code
class someClass:
    def __init__(self, list):
        self.list = list
    def __iter__(self):
        for x in self.list:
            yield x

var = someClass([1, 2, 3, 4, 5])
for num in var:
    print(num)
Output
1
2
3
4
5
Note: Since strings are also iterable, they can also be used as an argument for the class
foo = someClass("Python")
for x in foo:
    print(x)
print(x)
Output
P
y
t
h
o
n
This is an iterable function without yield. It makes use of the iter() function and a closure which keeps its state in a mutable (a list) in the enclosing scope, for Python 2.
def count(low, high):
    counter = [0]
    def tmp():
        val = low + counter[0]
        if val < high:
            counter[0] += 1
            return val
        return None
    return iter(tmp, None)
For Python 3, the closure state is kept in an immutable variable in the enclosing scope, and nonlocal is used in the local scope to update the state variable.
def count(low, high):
    counter = 0
    def tmp():
        nonlocal counter
        val = low + counter
        if val < high:
            counter += 1
            return val
        return None
    return iter(tmp, None)
Test:
for i in count(1, 10):
    print(i)
1
2
3
4
5
6
7
8
9
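The key piece here is the two-argument form iter(callable, sentinel): it calls the callable repeatedly and stops as soon as the sentinel value comes back. A standalone sketch:

```python
from collections import deque

data = deque([3, 1, 4])
# Call the lambda until it returns the sentinel (None here).
drained = list(iter(lambda: data.popleft() if data else None, None))
print(drained)  # [3, 1, 4]
```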
class uc_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value
Improving on the previous answer, one advantage of using a class is that you can add __call__ to return self.value or even next_value.
class uc_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value
    def __call__(self):
        next_value = self.value
        self.value += 2
        return next_value

c = uc_iter()
print([c() for _ in range(10)])
print([next(c) for _ in range(5)])
# [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
# [20, 22, 24, 26, 28]
Other example of a class based on Python Random that can be both called and iterated could be seen on my implementation here
