How can I limit iterations of a loop? - python

Say I have a list of items, and I want to iterate over the first few of it:
items = list(range(10)) # I mean this to represent any kind of iterable.
limit = 5
Naive implementation
The Python naïf coming from other languages would probably write this perfectly serviceable and performant (if unidiomatic) code:
index = 0
for item in items:  # Python's `for` loop is a for-each.
    print(item)     # or whatever function of that item.
    index += 1
    if index == limit:
        break
More idiomatic implementation
But Python has enumerate, which subsumes about half of that code nicely:
for index, item in enumerate(items):
    print(item)
    if index == limit:  # There's gotta be a better way.
        break
So we've about cut the extra code in half. But there's gotta be a better way.
Can we approximate the below pseudocode behavior?
If enumerate took an optional stop argument (it already takes a start argument, for example enumerate(items, start=1)), that would, I think, be ideal, but the code below doesn't exist (see the documentation on enumerate):
# hypothetical code, not implemented:
for _, item in enumerate(items, start=0, stop=limit):  # `stop` not implemented
    print(item)
Note that there would be no need to name the index because there is no need to reference it.
Is there an idiomatic way to write the above? How?
A secondary question: why isn't this built into enumerate?

How can I limit iterations of a loop in Python?
for index, item in enumerate(items):
    print(item)
    if index == limit:
        break
Is there a shorter, idiomatic way to write the above? How?
Including the index
zip stops on the shortest iterable of its arguments. (In contrast with the behavior of zip_longest, which uses the longest iterable.)
range can provide a limited iterable that we can pass to zip along with our primary iterable.
So we can pass a range object (with its stop argument) to zip and use it like a limited enumerate.
zip(range(limit), items)
In Python 3, zip and range return lazy iterables, which pipeline the data instead of materializing it in intermediate lists.
for index, item in zip(range(limit), items):
    print(index, item)
To get the same behavior in Python 2, just substitute xrange for range and itertools.izip for zip.
from itertools import izip

for index, item in izip(xrange(limit), items):
    print index, item  # Python 2 print statement
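As a quick aside illustrating the shortest-wins behavior mentioned above (Python 3; the names here are just for the demo):
from itertools import zip_longest

letters = ['a', 'b', 'c']
print(list(zip(range(5), letters)))
# [(0, 'a'), (1, 'b'), (2, 'c')] -- zip stops at the shortest input
print(list(zip_longest(range(5), letters)))
# [(0, 'a'), (1, 'b'), (2, 'c'), (3, None), (4, None)] -- zip_longest pads to the longest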
If you don't need the index: itertools.islice
You can use itertools.islice:
from itertools import islice

for item in islice(items, 0, limit):
    print(item)
which doesn't require assigning to the index.
Composing enumerate(islice(items, limit)) to get the index
As Pablo Ruiz Ruiz points out, we can also compose islice with enumerate.
for index, item in enumerate(islice(items, limit)):
    print(index, item)
Why isn't this built into enumerate?
Here's enumerate implemented in pure Python (with possible modifications to get the desired behavior in comments):
def enumerate(collection, start=0):  # could add stop=None
    i = start
    it = iter(collection)
    while 1:                         # could modify to `while i != stop:`
        yield (i, next(it))
        i += 1
The above would be less performant for those already using enumerate, because it would have to check whether it is time to stop on every iteration. We can just check once and fall back to the old enumerate if we don't get a stop argument:
_enumerate = enumerate

def enumerate(collection, start=0, stop=None):
    if stop is not None:
        return zip(range(start, stop), collection)
    return _enumerate(collection, start)
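A quick check of the wrapper sketched above (note that the stop keyword exists only in this wrapper, not in the built-in enumerate):
items = list(range(10))

for index, item in enumerate(items, stop=5):
    print(index, item)
# 0 0
# 1 1
# 2 2
# 3 3
# 4 4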
This extra check would have only a negligible performance impact.
As to why enumerate does not have a stop argument, this was originally proposed (see PEP 279):
This function was originally proposed with optional start and stop arguments. GvR [Guido van Rossum] pointed out that the function call enumerate(seqn, 4, 6) had an alternate, plausible interpretation as a slice that would return the fourth and fifth elements of the sequence. To avoid the ambiguity, the optional arguments were dropped even though it meant losing flexibility as a loop counter. That flexibility was most important for the common case of counting from one, as in:
for linenum, line in enumerate(source, 1): print linenum, line
So apparently start was kept because it was very valuable, and stop was dropped because it had fewer use-cases and contributed to confusion on the usage of the new function.
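To make the ambiguity concrete, here is a small sketch of the two readings the PEP describes for enumerate(seqn, 4, 6):
seqn = ['a', 'b', 'c', 'd', 'e', 'f', 'g']

# Reading 1: a loop counter running from 4 to 6, paired with the first elements.
print(list(zip(range(4, 6), seqn)))        # [(4, 'a'), (5, 'b')]

# Reading 2: a slice -- the fourth and fifth elements, with their real indices.
print(list(zip(range(4, 6), seqn[4:6])))   # [(4, 'e'), (5, 'f')]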
Avoid slicing with subscript notation
Another answer says:
Why not simply use
for item in items[:limit]: # or limit+1, depends
Here are a few downsides:
It only works for iterables that accept slicing, so it is more limited.
If they do accept slicing, it usually creates a new data structure in memory instead of iterating over the original one, so it wastes memory (all built-in sequences make copies when sliced; numpy arrays, for example, make a view instead).
Unsliceable iterables would require a different kind of handling, and if you later switch to a lazy evaluation model, you'll have to change the slicing code as well.
You should only use slicing with subscript notation when you understand the limitations and whether it makes a copy or a view.
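A minimal illustration of the copy-versus-view point, assuming numpy is installed:
import numpy as np

lst = [0, 1, 2, 3, 4]
lst_slice = lst[:3]
lst_slice[0] = 99
print(lst[0])      # 0  -- the list slice is an independent copy

arr = np.arange(5)
arr_slice = arr[:3]
arr_slice[0] = 99
print(arr[0])      # 99 -- the numpy slice is a view onto the same buffer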
Conclusion
I would presume that now that the Python community knows how enumerate is used, the cost of confusion would be outweighed by the value of the argument.
Until that time, you can use:
for index, element in zip(range(limit), items):
    ...
or
for index, item in enumerate(islice(items, limit)):
    ...
or, if you don't need the index at all:
for element in islice(items, 0, limit):
    ...
And avoid slicing with subscript notation, unless you understand the limitations.

You can use itertools.islice for this. It accepts start, stop and step arguments; if you pass only one argument, it is treated as stop. And it works with any iterable.
itertools.islice(iterable, stop)
itertools.islice(iterable, start, stop[, step])
Demo:
>>> from itertools import islice
>>> items = list(range(10))
>>> limit = 5
>>> for item in islice(items, limit):
...     print item,
...
0 1 2 3 4
Example from docs:
islice('ABCDEFG', 2) --> A B
islice('ABCDEFG', 2, 4) --> C D
islice('ABCDEFG', 2, None) --> C D E F G
islice('ABCDEFG', 0, None, 2) --> A C E G

Why not simply use
for item in items[:limit]:  # or limit+1, depends
    print(item)             # or whatever function of that item.
This will only work for some iterables, but since you specified a list, it works.
It doesn't work if you use sets or dicts, etc.
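For example (a quick sketch), a plain iterator cannot be sliced with subscript notation, but islice still works:
from itertools import islice

items = iter(range(10))            # a plain iterator, not a list
# items[:5]                        # TypeError: 'range_iterator' object is not subscriptable
print(list(islice(items, 5)))      # [0, 1, 2, 3, 4]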

Pass islice with the limit inside enumerate
from itertools import islice

a = [2, 3, 4, 2, 1, 4]
for i, v in enumerate(islice(a, 3)):
    print(i, v)
Output:
0 2
1 3
2 4

Why not loop until the limit or the end of the list, whichever occurs earlier, like this:
items = range(10)
limit = 5
for i in range(min(limit, len(items))):
    print items[i]
Output:
0
1
2
3
4

Short solution
items = range(10)
limit = 5
for i in items[:limit]: print(i)

Related

How to create a generator function that produces a specific number of values?

I am trying to create a generator function that accepts one or more arguments. It should return an iterable of the leftover values of the iterable with the most values, after the other iterables no longer produce values.
For example:
for i in func('dfg', 'dfghjk', [1, 2]):
    print(i)
prints ->
h
j
k
because the argument 'dfg', which has the second most values, stops producing values after 3 items.
The argument 'dfghjk', which has the most values, then yields 'hjk' (the leftover values).
The catch here is that the iterables may or may not be finite, so I am not able to compute the length of the argument or add it to other data structures.
Any help would be great, thanks!
To solve your problem I'd try to manually exhaust all the iterators until only one remains.
def foo(*iterables):
    iterators = [iter(x) for x in iterables]
    while len(iterators) > 1:
        for idx, it in enumerate(iterators[:]):
            try:
                peek = next(it)
            except StopIteration:
                # remove the iterator when exhausted
                iterators.pop(idx)
    yield peek
    yield from iterators[0]
Notice that for this to work you need to make sure the iterables are indeed iterators (converting them with iter), and you need to store the last element obtained from each call to next. Because we only use the iterator interface, infinite iterators work just fine:
>>> import itertools
>>> it = foo([1], itertools.count(), [2, 2])
>>> for _ in range(3): print(next(it))
...
2
3
4
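As an alternative sketch (not part of the answer above), itertools.zip_longest with a unique sentinel gives the same leftover behavior and stays lazy, so a single infinite argument is still fine:
from itertools import zip_longest

_SENTINEL = object()

def leftovers(*iterables):
    # Yield a value only when exactly one iterable is still producing values.
    for values in zip_longest(*iterables, fillvalue=_SENTINEL):
        alive = [v for v in values if v is not _SENTINEL]
        if len(alive) == 1:
            yield alive[0]

print(list(leftovers('dfg', 'dfghjk', [1, 2])))   # ['h', 'j', 'k']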

Can Python slicing be used to skip one specific element by index?

Suppose I am writing a recursive function where I want to pass a list to a function with a single element missing, as part of a loop. Here is one possible solution:
def Foo(input):
    if len(input) == 0: return
    for j in input:
        t = input[:]
        t.remove(j)
        Foo(t)
Is there a way to abuse the slice operator to pass the list minus the element j without explicitly copying the list and removing the item from it?
What about this?
for i in range(len(list_)):
    Foo(list_[:i] + list_[i+1:])
You are still copying items, though you skip the element at index i while copying.
BTW, you can avoid shadowing built-in names like list by appending an underscore.
If your lists are small, I recommend using the approach in the answer from #satoru.
If your lists are very large, and you want to avoid the "churn" of creating and deleting list instances, how about using a generator?
import itertools as it
def skip_i(seq, i):
    return it.chain(it.islice(seq, 0, i), it.islice(seq, i+1, None))
This pushes the work of skipping the i'th element down into the C guts of itertools, so this should be faster than writing the equivalent in pure Python.
To do it in pure Python I would suggest writing a generator like this:
def gen_skip_i(seq, i):
    for j, x in enumerate(seq):
        if i != j:
            yield x
EDIT: Here's an improved version of my answer, thanks to #Blckknght in comments below.
import itertools as it
def skip_i(iterable, i):
    itr = iter(iterable)
    return it.chain(it.islice(itr, 0, i), it.islice(itr, 1, None))
This is a big improvement over my original answer. My original answer only worked properly on indexable things like lists, but this will work correctly for any iterable, including iterators! It makes an explicit iterator from the iterable, then (in a "chain") pulls the first i values, and skips only a single value before pulling all remaining values.
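A quick check of the improved skip_i on inputs that are not lists (using the definition just above):
print(list(skip_i(iter(range(6)), 2)))              # [0, 1, 3, 4, 5]
print(list(skip_i((x * x for x in range(4)), 0)))   # [1, 4, 9]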
Thank you very much #Blckknght!
Here is code equivalent to satoru's, but faster, as it makes one copy of the list per iteration instead of two:
before = []
after = list_[:]
for x in range(0, len(list_)):
    v = after.pop(0)
    Foo(before + after)
    before.append(v)
(11ms instead of 18ms on my computer, for a list generated with list(range(1000)))

Is "for x in range(3): print x" guaranteed to print "0, 1, 2" in that order?

Is a loop of the form
for x in range(3):
print x
guaranteed to output
0
1
2
in that order? In other words, if you loop over a list with a for item in mylist statement, is the loop guaranteed to start at mylist[0] and proceed sequentially (mylist[1], mylist[2], ...)?
Yes, the builtin list and range will always iterate in the order you expect. Classes define their own iteration sequence, so the iteration order will vary between different classes. Due to their nature set and dict (amongst others) won't iterate in a predictable order.
You can define any iteration sequence you want for a class. For example, you can make a list that will iterate in reverse.
class reversedlist(list):
    def __iter__(self):
        self.current = len(self)
        return self
    def next(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self[self.current]

x = reversedlist([0, 1, 2, 3, 4, 5])
for i in x:
    print i,
# Outputs 5 4 3 2 1 0
Yes, it does. It is not the for loop that guarantees anything; it is the range function. range(3) gives you an iterator that returns 0, then 1, then 2. Iterators can only be accessed one element at a time, so that is the only order in which the for loop can access the elements.
Other iterators (ones not generated by the range function for example) could return elements in other orders.
is the loop guaranteed to start at mylist[0] and proceed sequentially (mylist[1], mylist[2], ...)?
When you use a for loop, the list gets used as an iterator. That is, the for loop does not actually index into it. It just keeps calling the next function until there are no more elements. In this way the for loop itself has no say in what order the elements get processed.
Yes, it is.
A Python for loop like this:
for e in list:
    print e
can be translated as:
iterator = list.__iter__()
while True:
    try:
        e = iterator.next()
    except StopIteration:
        break
    print e
So, as long as the "next" method of the iterator object returns values in the "correct" order, you will get the elements in the "correct" order.
For a Python list this is guaranteed to happen.
Yes. for loops in Python traverse a list or an iterator in order. range returns a list in Python 2.x and a lazy range object in Python 3.x, so your for x in range(3) loop will indeed always be ordered.
However, the same cannot be said for dicts or sets. They will not be traversed in order - this is because the order of their keys is undefined.
Yes, it starts with the first element of a list and goes to the last.
Not all data types in python do that, such as dicts, but lists certainly do. range(x) certainly will.
Yes.
But with a dictionary there's no order guaranteed.
Yes. Python docs about For Loop say:
Basically, any object with an iterable method can be used in a for loop in Python ... Having an iterable method basically means that the data can be presented in list form, where there's multiple values in an orderly fashion

for-in loop's upper limit changing in each loop

How can I update the upper limit of a loop in each iteration? In the following code, List is shortened in each loop iteration. However, lenList in the for ... in loop is not updated, even though I defined lenList as global. Any ideas how to solve this? (I'm using Python 2.something.)
Thanks!
def similarity(List):
    import difflib
    lenList = len(List)
    for i in range(1, lenList):
        import numpy as np
        global lenList
        a = List[i]
        idx = [difflib.SequenceMatcher(None, a, x).ratio() for x in List]
        z = idx > .9
        del List[z]
        lenList = len(List)

X = ['jim','jimmy','luke','john','jake','matt','steve','tj','pat','chad','don']
similarity(X)
Looping over indices is bad practice in python. You may be able to accomplish what you want like this though (edited for comments):
def similarity(alist):
    position = 0
    while position < len(alist):
        item = alist[position]
        position += 1
        # code here that modifies alist
A list will evaluate True if it has any entries, or False when it is empty. In this way you can consume a list that may grow during the manipulation of its items.
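As a hypothetical worked example of that pattern, here is a small task list that grows while it is being consumed (the numbers are arbitrary):
tasks = [3, 1, 4]
position = 0
while position < len(tasks):       # len() is re-evaluated on every pass
    task = tasks[position]
    position += 1
    if task > 2:
        tasks.append(task - 1)     # the list grows mid-loop and is still consumed

print(tasks)                       # [3, 1, 4, 2, 3, 2]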
Additionally, if you absolutely have to have indices, you can get those as well:
for idx, item in enumerate(alist):
    # code here, where item is the actual list entry, and
    # idx is the 0-based index of the item in the list
    pass
In ... 3.x (I believe) you can even pass an optional parameter to enumerate to control the starting value of idx.
The issue here is that range() is only evaluated once, at the start of the loop, and produces a range object (or a list in 2.x) at that time. You can't then change the range. Not to mention that numbers are immutable, so you are assigning a new value to lenList, but that wouldn't affect anything that already used the old value.
The best solution is to change the way your algorithm works not to rely on this behaviour.
The range is an object which is constructed before the first iteration of your loop, so you are iterating over the values in that object. You would instead need to use a while loop, although as Lattyware and g.d.d.c point out, it would not be very Pythonic.
What you are effectively looping over in the above code is a list that was generated in the first iteration itself.
You could as well have written the above as:
li = range(1, lenList)
for i in li:
    ... your code ...
Changing lenList after li has been created has no effect on li.
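A tiny illustration of that point:
lenList = 5
li = range(1, lenList)
lenList = 2              # rebinding the name later...
print(list(li))          # [1, 2, 3, 4] -- ...does not shrink the already-built range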
This problem will become quite a lot easier with one small modification to how your function works: instead of removing similar items from the existing list, create and return a new one with those items omitted.
For the specific case of just removing similarities to the first item, this simplifies down quite a bit, and removes the need to involve Numpy's fancy indexing (which you weren't actually using anyway, because of a missing call to np.array):
import difflib
def similarity(lst):
    a = lst[0]
    return [a] + \
        [x for x in lst[1:] if difflib.SequenceMatcher(None, a, x).ratio() > .9]
From this basis, repeating it for every item in the list can be done recursively - you need to pass the list comprehension at the end back into similarity, and deal with receiving an empty list:
def similarity(lst):
    if not lst:
        return []
    a = lst[0]
    return [a] + similarity(
        [x for x in lst[1:] if difflib.SequenceMatcher(None, a, x).ratio() > .9])
Also note that importing inside a function, and naming a variable list (shadowing the built-in list) are both practices worth avoiding, since they can make your code harder to follow.

Remove items from a list while iterating without using extra memory in Python

My problem is simple: I have a long list of elements that I want to iterate through and check every element against a condition. Depending on the outcome of the condition I would like to delete the current element of the list, and continue iterating over it as usual.
I have read a few other threads on this matter. Two solutions seem to be proposed: either make a dictionary out of the list (which implies making a copy of data that is already filling all the RAM in my case), or walk the list in reverse (which breaks the concept of the algorithm I want to implement).
Is there any better or more elegant way than this to do it ?
def walk_list(list_of_g):
    g_index = 0
    while g_index < len(list_of_g):
        g_current = list_of_g[g_index]
        if subtle_condition(g_current):
            list_of_g.pop(g_index)
        else:
            g_index = g_index + 1
li = [ x for x in li if condition(x)]
and also
li = filter(condition,li)
Thanks to Dave Kirby
Here is an alternative answer for if you absolutely have to remove the items from the original list, and you do not have enough memory to make a copy - move the items down the list yourself:
def walk_list(list_of_g):
    to_idx = 0
    for g_current in list_of_g:
        if not subtle_condition(g_current):
            list_of_g[to_idx] = g_current
            to_idx += 1
    del list_of_g[to_idx:]
This will move each item (actually a pointer to each item) exactly once, so it will be O(N). The del statement at the end of the function removes any unwanted items at the end of the list, and I think Python is intelligent enough to resize the list without allocating memory for a new copy of it.
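A quick check of the compaction above, assuming a hypothetical subtle_condition that flags odd numbers for removal:
def subtle_condition(x):
    return x % 2 == 1          # assumed condition: remove odd numbers

data = list(range(10))
walk_list(data)
print(data)                    # [0, 2, 4, 6, 8]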
Removing items from a list is expensive, since Python has to copy all the items above g_index down one place. If the number of items you want to remove is proportional to the length of the list N, then your algorithm is going to be O(N**2). If the list is long enough to fill your RAM, you will be waiting a very long time for it to complete.
It is more efficient to create a filtered copy of the list, either using a list comprehension as Marcelo showed, or use the filter or itertools.ifilter functions:
g_list = filter(not_subtle_condition, g_list)
If you do not need to use the new list and only want to iterate over it once, then it is better to use ifilter since that will not create a second list:
for g_current in itertools.ifilter(not_subtle_condition, g_list):
    # do stuff with g_current
    pass
The built-in filter function is made just to do this:
list_of_g = filter(lambda x: not subtle_condition(x), list_of_g)
How about this?
[x for x in list_of_g if not subtle_condition(x)]
It returns a new list with the items matching subtle_condition excluded.
For simplicity, use a list comprehension:
def walk_list(list_of_g):
    return [g for g in list_of_g if not subtle_condition(g)]
Of course, this doesn't alter the original list, so the calling code would have to be different.
If you really want to mutate the list (rarely the best choice), walking backwards is simpler:
def walk_list(list_of_g):
    for i in xrange(len(list_of_g) - 1, -1, -1):
        if subtle_condition(list_of_g[i]):
            del list_of_g[i]
Sounds like a really good use case for the filter function.
def should_be_removed(element):
    return element > 5

a = range(10)
a = filter(lambda x: not should_be_removed(x), a)
This, however, does not delete items from the list while iterating (nor do I recommend it). If for memory (or other performance) reasons you really need to, you can do the following:
i = 0
while i < len(a):
    if should_be_removed(a[i]):
        a.remove(a[i])
    else:
        i += 1
print a
If you perform a reverse iteration, you can remove elements on the fly without affecting the next indices you'll visit:
numbers = range(20)
# remove all numbers that are multiples of 3
l = len(numbers)
for i, n in enumerate(reversed(numbers)):
    if n % 3 == 0:
        del numbers[l - i - 1]
print numbers
The enumerate(reversed(numbers)) is just a stylistic choice. You may use a range if that's more legible to you:
l = len(numbers)
for i in range(l - 1, -1, -1):
    n = numbers[i]
    if n % 3 == 0:
        del numbers[i]
If you need to traverse the list in order, you can reverse it in place with .reverse() before and after the reversed iteration. This won't duplicate your list either.
