Python - Count Elements in Iterator Without Consuming - python

Given an iterator it, I would like a function it_count that returns the count of elements that iterator produces, without destroying the iterator. For example:
ita = iter([1, 2, 3])
print(it_count(ita))
print(it_count(ita))
should print
3
3
It has been pointed out that this may not be a well-defined question for all iterators, so I am not looking for a completely general solution, but it should function as anticipated on the example given.
Okay, let me clarify further to my specific case. Given the following code:
ita = iter([1, 2, 3])
itb, itc = itertools.tee(ita)
print(sum(1 for _ in itb))
print(sum(1 for _ in itc))
...can we write the it_count function described above, so that it will function in this manner? Even if the answer to the question is "That cannot be done," that's still a perfectly valid answer. It doesn't make the question bad. And the proof that it is impossible would be far from trivial...

Not possible. Until the iterator has been completely consumed, it doesn't have a concrete element count.

The only way to get the length of an arbitary iterator is by iterating over it, so the basic question here is ill-defined. You can't get the length of any iterator without iterating over it.
Also the iterator itself may change it's contents while being iterated over, so the count may not be constant anyway.
But there are possibilities that might do what you ask, be warned none of them is foolproof or really efficient:
When using python 3.4 or later you can use operator.length_hint and hope the iterator supports it (be warned: not many iterators do! And it's only meant as a hint, the actual length might be different!):
>>> from operator import length_hint
>>> it_count = length_hint
>>> ita = iter([1, 2, 3])
>>> print(it_count(ita))
3
>>> print(it_count(ita))
3
As alternative: You can use itertools.tee but read the documentation of that carefully before using it. It may solve your issue but it won't really solve the underlying problem.
import itertools
def it_count(iterator):
return sum(1 for _ in iterator)
ita = iter([1, 2, 3])
it1, it2 = itertools.tee(ita, 2)
print(it_count(it1)) # 3
print(it_count(it2)) # 3
But this is less efficient (memory and speed) than casting it to a list and using len on it.

I have not been able to come up with an exact solution (because iterators may be immutable types), but here are my best attempts. I believe the second should be faster, according to the documentation (final paragraph of itertools.tee).
Option 1
def it_count(it):
tmp_it, new_it = itertools.tee(it)
return sum(1 for _ in tmp_it), new_it
Option 2
def it_count2(it):
lst = list(it)
return len(lst), lst
It functions well, but has the slight annoyance of returning the pair rather than simply the count.
ita = iter([1, 2, 3])
count, ita = it_count(ita)
print(count)
Output: 3
count, ita = it_count2(ita)
print(count)
Output: 3
count, ita = it_count(ita)
print(count)
Output: 3
print(list(ita))
Output: [1, 2, 3]

There's no generic way to do what you want. An iterator may not have a well defined length (e.g. itertools.count which iterates forever). Or it might have a length that's expensive to calculate up front, so it won't let you know how far you have to go until you've reached the end (e.g. a file object, which can be iterated yielding lines, which are not easy to count without reading the whole file's contents).
Some kinds of iterators might implement a __length_hint__ method that returns an estimated length, but that length may not be accurate. And not all iterators will implement that method at all, so you probably can't rely upon it (it does work for list iterators, but not for many others).
Often the best way to deal with the whole contents of an iterator is to dump it into a list or other container. After you're done doing whatever operation you need (like calling len on it), you can iterate over the list again. Obviously this requires the iterator to be finite (and for all of its contents to fit into memory), but that's the limitation you have to deal with.
If you only need to peek ahead by a few elements, you might be able to use itertools.tee, but it's no better than dumping into a list if you need to consume the whole contents (since it keeps the values seen by one of its returned iterators but another in a data structure similar to a deque). It wouldn't be any use for finding the length of the iterator.

Related

Pythonic way to unpack an iterator inside of a list

I'm trying to figure out what is the pythonic way to unpack an iterator inside of a list.
For example:
my_iterator = zip([1, 2, 3, 4], [1, 2, 3, 4])
I have come with the following ways to unpack my iterator inside of a list:
1)
my_list = [*my_iterator]
2)
my_list = [e for e in my_iterator]
3)
my_list = list(my_iterator)
No 1) is my favorite way to do it since is less code, but I'm wondering if this is also the pythonic way. Or maybe there is another way to achieve this besides those 3 which is the pythonic way?
This might be a repeat of Fastest way to convert an iterator to a list, but your question is a bit different since you ask which is the most Pythonic. The accepted answer is list(my_iterator) over [e for e in my_iterator] because the prior runs in C under the hood. One commenter suggests [*my_iterator] is faster than list(my_iterator), so you might want to test that. My general vote is that they are all equally Pythonic, so I'd go with the faster of the two for your use case. It's also possible that the older answer is out of date.
After exploring more the subject I've come with some conclusions.
There should be one-- and preferably only one --obvious way to do it
(zen of python)
Deciding which option is the "pythonic" one should take into consideration some criteria :
how explicit,
simple,
and readable it is.
And the obvious "pythonic" option winning in all criteria is option number 3):
list = list(my_iterator)
Here is why is "obvious" that no 3) is the pythonic one:
Option 3) is close to natural language making you to 'instantly'
think what is the output.
Option 2) (using list comprehension) if you see for the first time
that line of code will take you to read a little bit more and to pay
a bit more attention. For example, I use list comprehension when I
want to add some extra steps(calling a function with the iterated
elements or having some checking using if statement), so when I see a
list comprehension I check for any possible function call inside or
for any if statment.
option 1) (unpacking using *) asterisk operator can be a bit confusing
if you don't use it regularly, there are 4 cases for using the
asterisk in Python:
For multiplication and power operations.
For repeatedly extending the list-type containers.
For using the variadic arguments. (so-called “packing”)
For unpacking the containers.
Another good argument is python docs themselves, I have done some statistics to check which options are chosen by the docs, for this I've chose 4 buil-in iterators and everything from the module itertools (that are used like: itertools.) to see how they are unpacked in a list:
map
range
filter
enumerate
itertools.
After exploring the docs I found: 0 iterators unpacked in a list using option 1) and 2) and 35 using option 3).
Conclusion :
The pythonic way to unpack an iterator inside of a list is: my_list = list(my_iterator)
While the unpacking operator * is not often used for unpacking a single iterable into a list (therefore [*it] is a bit less readable than list(it)), it is handy and more Pythonic in several other cases:
1. Unpacking an iterable into a single list / tuple / set, adding other values:
mixed_list = [a, *it, b]
This is more concise and efficient than
mixed_list = [a]
mixed_list.extend(it)
mixed_list.append(b)
2. Unpacking multiple iterables + values into a list / tuple / set
mixed_list = [*it1, *it2, a, b, ... ]
This is similar to the first case.
3. Unpacking an iterable into a list, excluding elements
first, *rest = it
This extracts the first element of it into first and unpacks the rest into a list. One can even do
_, *mid, last = it
This dumps the first element of it into a don't-care variable _, saves last element into last, and unpacks the rest into a list mid.
4. Nested unpacking of multiple levels of an iterable in one statement
it = (0, range(5), 3)
a1, (*a2,), a3 = it # Unpack the second element of it into a list a2
e1, (first, *rest), e3 = it # Separate the first element from the rest while unpacking it[1]
This can also be used in for statements:
from itertools import groupby
s = "Axyz123Bcba345D"
for k, (first, *rest) in groupby(s, key=str.isalpha):
...
If you're interested in the least amount of typing possible, you can actually do one character better than my_list = [*my_iterator] with iterable unpacking:
*my_list, = my_iterator
or (although this only equals my_list = [*my_iterator] in the number of characters):
[*my_list] = my_iterator
(Funny how it has the same effect as my_list = [*my_iterator].)
For the most Pythonic solution, however, my_list = list(my_iterator) is clearly the clearest and the most readable of all, and should therefore be considered the most Pythonic.
I tend to use zip if I need to convert a list to a dictionary or use it as a key-value pair in a loop or list comprehension.
However, if this is only for illustration to create an iterator. I will definitely vote for #3 for clarity.

Python: itertools.tee with dynamically-created iterators

Does anyone know of anything equivalent to itertools.tee but with the ability to dynamically add iterators?
The itertools.tee function does exactly what I want, except that the number of iterators must be fixed when the function is called. I'd like something with equivalent functionality, but which permits new iterators to be added later, potentially even after some of the iterators have started iterating.
I'd like to avoid calling itertools.tee multiple times, as this will potentially use a lot of memory (I could have millions of iterators). From the Python docs for itertools.tee:
The following Python code helps explain what tee does (although the
actual implementation is more complex and uses only a single
underlying FIFO queue).
...
If new iterators are added later, they only need to see newly-arriving data.
Here's a code example of what I'd like to have (may not be syntactically correct):
input_iterator = iter([1, 2, 3, 4, 5, 6])
tee = EnhancedTee(input_iterator)
it1 = iter(tee)
val = it1.next()
# val == 1
it2 = iter(tee)
val = it2.next()
# val == 2
val = it1.next()
# val == 2
This has almost been answered already in another question: In python, can I lazily generate copies of an iterator using tee?. However, the answer I needed was slightly different to the answer given to this question. I only require the newly-created iterators to see new data, so I only need to tee a single iterator initially, and make copies of this. So the following works well for me:
input_iterator = iter([1, 2, 3, 4, 5, 6])
it1 = itertools.tee(input_iterator, 1)[0]
print(next(it1))
# prints 1
it2 = copy(it1)
print(next(it2))
# prints 2
print(next(it1))
# prints 2

Python: List comprehension significantly faster than Filter? [duplicate]

I have a list that I want to filter by an attribute of the items.
Which of the following is preferred (readability, performance, other reasons)?
xs = [x for x in xs if x.attribute == value]
xs = filter(lambda x: x.attribute == value, xs)
It is strange how much beauty varies for different people. I find the list comprehension much clearer than filter+lambda, but use whichever you find easier.
There are two things that may slow down your use of filter.
The first is the function call overhead: as soon as you use a Python function (whether created by def or lambda) it is likely that filter will be slower than the list comprehension. It almost certainly is not enough to matter, and you shouldn't think much about performance until you've timed your code and found it to be a bottleneck, but the difference will be there.
The other overhead that might apply is that the lambda is being forced to access a scoped variable (value). That is slower than accessing a local variable and in Python 2.x the list comprehension only accesses local variables. If you are using Python 3.x the list comprehension runs in a separate function so it will also be accessing value through a closure and this difference won't apply.
The other option to consider is to use a generator instead of a list comprehension:
def filterbyvalue(seq, value):
for el in seq:
if el.attribute==value: yield el
Then in your main code (which is where readability really matters) you've replaced both list comprehension and filter with a hopefully meaningful function name.
This is a somewhat religious issue in Python. Even though Guido considered removing map, filter and reduce from Python 3, there was enough of a backlash that in the end only reduce was moved from built-ins to functools.reduce.
Personally I find list comprehensions easier to read. It is more explicit what is happening from the expression [i for i in list if i.attribute == value] as all the behaviour is on the surface not inside the filter function.
I would not worry too much about the performance difference between the two approaches as it is marginal. I would really only optimise this if it proved to be the bottleneck in your application which is unlikely.
Also since the BDFL wanted filter gone from the language then surely that automatically makes list comprehensions more Pythonic ;-)
Since any speed difference is bound to be miniscule, whether to use filters or list comprehensions comes down to a matter of taste. In general I'm inclined to use comprehensions (which seems to agree with most other answers here), but there is one case where I prefer filter.
A very frequent use case is pulling out the values of some iterable X subject to a predicate P(x):
[x for x in X if P(x)]
but sometimes you want to apply some function to the values first:
[f(x) for x in X if P(f(x))]
As a specific example, consider
primes_cubed = [x*x*x for x in range(1000) if prime(x)]
I think this looks slightly better than using filter. But now consider
prime_cubes = [x*x*x for x in range(1000) if prime(x*x*x)]
In this case we want to filter against the post-computed value. Besides the issue of computing the cube twice (imagine a more expensive calculation), there is the issue of writing the expression twice, violating the DRY aesthetic. In this case I'd be apt to use
prime_cubes = filter(prime, [x*x*x for x in range(1000)])
Although filter may be the "faster way", the "Pythonic way" would be not to care about such things unless performance is absolutely critical (in which case you wouldn't be using Python!).
I thought I'd just add that in python 3, filter() is actually an iterator object, so you'd have to pass your filter method call to list() in order to build the filtered list. So in python 2:
lst_a = range(25) #arbitrary list
lst_b = [num for num in lst_a if num % 2 == 0]
lst_c = filter(lambda num: num % 2 == 0, lst_a)
lists b and c have the same values, and were completed in about the same time as filter() was equivalent [x for x in y if z]. However, in 3, this same code would leave list c containing a filter object, not a filtered list. To produce the same values in 3:
lst_a = range(25) #arbitrary list
lst_b = [num for num in lst_a if num % 2 == 0]
lst_c = list(filter(lambda num: num %2 == 0, lst_a))
The problem is that list() takes an iterable as it's argument, and creates a new list from that argument. The result is that using filter in this way in python 3 takes up to twice as long as the [x for x in y if z] method because you have to iterate over the output from filter() as well as the original list.
An important difference is that list comprehension will return a list while the filter returns a filter, which you cannot manipulate like a list (ie: call len on it, which does not work with the return of filter).
My own self-learning brought me to some similar issue.
That being said, if there is a way to have the resulting list from a filter, a bit like you would do in .NET when you do lst.Where(i => i.something()).ToList(), I am curious to know it.
EDIT: This is the case for Python 3, not 2 (see discussion in comments).
I find the second way more readable. It tells you exactly what the intention is: filter the list.
PS: do not use 'list' as a variable name
generally filter is slightly faster if using a builtin function.
I would expect the list comprehension to be slightly faster in your case
Filter is just that. It filters out the elements of a list. You can see the definition mentions the same(in the official docs link I mentioned before). Whereas, list comprehension is something that produces a new list after acting upon something on the previous list.(Both filter and list comprehension creates new list and not perform operation in place of the older list. A new list here is something like a list with, say, an entirely new data type. Like converting integers to string ,etc)
In your example, it is better to use filter than list comprehension, as per the definition. However, if you want, say other_attribute from the list elements, in your example is to be retrieved as a new list, then you can use list comprehension.
return [item.other_attribute for item in my_list if item.attribute==value]
This is how I actually remember about filter and list comprehension. Remove a few things within a list and keep the other elements intact, use filter. Use some logic on your own at the elements and create a watered down list suitable for some purpose, use list comprehension.
Here's a short piece I use when I need to filter on something after the list comprehension. Just a combination of filter, lambda, and lists (otherwise known as the loyalty of a cat and the cleanliness of a dog).
In this case I'm reading a file, stripping out blank lines, commented out lines, and anything after a comment on a line:
# Throw out blank lines and comments
with open('file.txt', 'r') as lines:
# From the inside out:
# [s.partition('#')[0].strip() for s in lines]... Throws out comments
# filter(lambda x: x!= '', [s.part... Filters out blank lines
# y for y in filter... Converts filter object to list
file_contents = [y for y in filter(lambda x: x != '', [s.partition('#')[0].strip() for s in lines])]
It took me some time to get familiarized with the higher order functions filter and map. So i got used to them and i actually liked filter as it was explicit that it filters by keeping whatever is truthy and I've felt cool that I knew some functional programming terms.
Then I read this passage (Fluent Python Book):
The map and filter functions are still builtins
in Python 3, but since the introduction of list comprehensions and generator ex‐
pressions, they are not as important. A listcomp or a genexp does the job of map and
filter combined, but is more readable.
And now I think, why bother with the concept of filter / map if you can achieve it with already widely spread idioms like list comprehensions. Furthermore maps and filters are kind of functions. In this case I prefer using Anonymous functions lambdas.
Finally, just for the sake of having it tested, I've timed both methods (map and listComp) and I didn't see any relevant speed difference that would justify making arguments about it.
from timeit import Timer
timeMap = Timer(lambda: list(map(lambda x: x*x, range(10**7))))
print(timeMap.timeit(number=100))
timeListComp = Timer(lambda:[(lambda x: x*x) for x in range(10**7)])
print(timeListComp.timeit(number=100))
#Map: 166.95695265199174
#List Comprehension 177.97208347299602
In addition to the accepted answer, there is a corner case when you should use filter instead of a list comprehension. If the list is unhashable you cannot directly process it with a list comprehension. A real world example is if you use pyodbc to read results from a database. The fetchAll() results from cursor is an unhashable list. In this situation, to directly manipulating on the returned results, filter should be used:
cursor.execute("SELECT * FROM TABLE1;")
data_from_db = cursor.fetchall()
processed_data = filter(lambda s: 'abc' in s.field1 or s.StartTime >= start_date_time, data_from_db)
If you use list comprehension here you will get the error:
TypeError: unhashable type: 'list'
In terms of performance, it depends.
filter does not return a list but an iterator, if you need the list 'immediately' filtering and list conversion it is slower than with list comprehension by about 40% for very large lists (>1M). Up to 100K elements, there is almost no difference, from 600K onwards there starts to be differences.
If you don't convert to a list, filter is practically instantaneous.
More info at: https://blog.finxter.com/python-lists-filter-vs-list-comprehension-which-is-faster/
Curiously on Python 3, I see filter performing faster than list comprehensions.
I always thought that the list comprehensions would be more performant.
Something like:
[name for name in brand_names_db if name is not None]
The bytecode generated is a bit better.
>>> def f1(seq):
... return list(filter(None, seq))
>>> def f2(seq):
... return [i for i in seq if i is not None]
>>> disassemble(f1.__code__)
2 0 LOAD_GLOBAL 0 (list)
2 LOAD_GLOBAL 1 (filter)
4 LOAD_CONST 0 (None)
6 LOAD_FAST 0 (seq)
8 CALL_FUNCTION 2
10 CALL_FUNCTION 1
12 RETURN_VALUE
>>> disassemble(f2.__code__)
2 0 LOAD_CONST 1 (<code object <listcomp> at 0x10cfcaa50, file "<stdin>", line 2>)
2 LOAD_CONST 2 ('f2.<locals>.<listcomp>')
4 MAKE_FUNCTION 0
6 LOAD_FAST 0 (seq)
8 GET_ITER
10 CALL_FUNCTION 1
12 RETURN_VALUE
But they are actually slower:
>>> timeit(stmt="f1(range(1000))", setup="from __main__ import f1,f2")
21.177661532000116
>>> timeit(stmt="f2(range(1000))", setup="from __main__ import f1,f2")
42.233950221000214
I would come to the conclusion: Use list comprehension over filter since its
more readable
more pythonic
faster (for Python 3.11, see attached benchmark, also see )
Keep in mind that filter returns a iterator, not a list.
python3 -m timeit '[x for x in range(10000000) if x % 2 == 0]'
1 loop, best of 5: 270 msec per loop
python3 -m timeit 'list(filter(lambda x: x % 2 == 0, range(10000000)))'
1 loop, best of 5: 432 msec per loop
Summarizing other answers
Looking through the answers, we have seen a lot of back and forth, whether or not list comprehension or filter may be faster or if it is even important or pythonic to care about such an issue. In the end, the answer is as most times: it depends.
I just stumbled across this question while optimizing code where this exact question (albeit combined with an in expression, not ==) is very relevant - the filter + lambda expression is taking up a third of my computation time (of multiple minutes).
My case
In my case, the list comprehension is much faster (twice the speed). But I suspect that this varies strongly based on the filter expression as well as the Python interpreter used.
Test it for yourself
Here is a simple code snippet that should be easy to adapt. If you profile it (most IDEs can do that easily), you will be able to easily decide for your specific case which is the better option:
whitelist = set(range(0, 100000000, 27))
input_list = list(range(0, 100000000))
proximal_list = list(filter(
lambda x: x in whitelist,
input_list
))
proximal_list2 = [x for x in input_list if x in whitelist]
print(len(proximal_list))
print(len(proximal_list2))
If you do not have an IDE that lets you profile easily, try this instead (extracted from my codebase, so a bit more complicated). This code snippet will create a profile for you that you can easily visualize using e.g. snakeviz:
import cProfile
from time import time
class BlockProfile:
def __init__(self, profile_path):
self.profile_path = profile_path
self.profiler = None
self.start_time = None
def __enter__(self):
self.profiler = cProfile.Profile()
self.start_time = time()
self.profiler.enable()
def __exit__(self, *args):
self.profiler.disable()
exec_time = int((time() - self.start_time) * 1000)
self.profiler.dump_stats(self.profile_path)
whitelist = set(range(0, 100000000, 27))
input_list = list(range(0, 100000000))
with BlockProfile("/path/to/create/profile/in/profile.pstat"):
proximal_list = list(filter(
lambda x: x in whitelist,
input_list
))
proximal_list2 = [x for x in input_list if x in whitelist]
print(len(proximal_list))
print(len(proximal_list2))
Your question is so simple yet interesting. It just shows how flexible python is, as a programming language. One may use any logic and write the program according to their talent and understandings. It is fine as long as we get the answer.
Here in your case, it is just an simple filtering method which can be done by both but i would prefer the first one my_list = [x for x in my_list if x.attribute == value] because it seems simple and does not need any special syntax. Anyone can understands this command and make changes if needs it.
(Although second method is also simple, but it still has more complexity than the first one for the beginner level programmers)

How to prevent iterator getting exhausted? [duplicate]

This question already has answers here:
Why can't I iterate twice over the same iterator? How can I "reset" the iterator or reuse the data?
(5 answers)
Closed last month.
If I create two lists and zip them
a=[1,2,3]
b=[7,8,9]
z=zip(a,b)
Then I typecast z into two lists
l1=list(z)
l2=list(z)
Then the contents of l1 turn out to be fine [(1,7),(2,8),(3,9)], but the contents of l2 is just [].
I guess this is the general behavior of python with regards to iterables. But as a novice programmer migrating from the C family, this doesn't make sense to me. Why does it behave in such a way? And is there a way to get past this problem?
I mean, yeah in this particular example, I can just copy l1 into l2, but in general is there a way to 'reset' whatever Python uses to iterate 'z' after I iterate it once?
There's no way to "reset" a generator. However, you can use itertools.tee to "copy" an iterator.
>>> z = zip(a, b)
>>> zip1, zip2 = itertools.tee(z)
>>> list(zip1)
[(1, 7), (2, 8), (3, 9)]
>>> list(zip2)
[(1, 7), (2, 8), (3, 9)]
This involves caching values, so it only makes sense if you're iterating through both iterables at about the same rate. (In other words, don't use it the way I have here!)
Another approach is to pass around the generator function, and call it whenever you want to iterate it.
def gen(x):
for i in range(x):
yield i ** 2
def make_two_lists(gen):
return list(gen()), list(gen())
But now you have to bind the arguments to the generator function when you pass it. You can use lambda for that, but a lot of people find lambda ugly. (Not me though! YMMV.)
>>> make_two_lists(lambda: gen(10))
([0, 1, 4, 9, 16, 25, 36, 49, 64, 81], [0, 1, 4, 9, 16, 25, 36, 49, 64, 81])
I hope it goes without saying that under most circumstances, it's better just to make a list and copy it.
Also, as a more general way of explaining this behavior, consider this. The point of a generator is to produce a series of values, while maintaining some state between iterations. Now, at times, instead of simply iterating over a generator, you might want to do something like this:
z = zip(a, b)
while some_condition():
fst = next(z, None)
snd = next(z, None)
do_some_things(fst, snd)
if fst is None and snd is None:
do_some_other_things()
Let's say this loop may or may not exhaust z. Now we have a generator in an indeterminate state! So it's important, at this point, that the behavior of a generator is restrained in a well-defined way. Although we don't know where the generator is in its output, we know that a) all subsequent accesses will produce later values in the series, and b) once it's "empty", we've gotten all the items in the series exactly once. The more ability we have to manipulate the state of z, the harder it is to reason about it, so it's best that we avoid situations that break those two promises.
Of course, as Joel Cornett points out below, it is possible to write a generator that accepts messages via the send method; and it would be possible to write a generator that could be reset using send. But note that in that case, all we can do is send a message. We can't directly manipulate the generator's state, and so all changes to the state of the generator are well-defined (by the generator itself -- assuming it was written correctly!). send is really for implementing coroutines, so I wouldn't use it for this purpose. Everyday generators almost never do anything with values sent to them -- I think for the very reasons I give above.
If you need two copies of the list, which you do if you need to modify them, then I suggest you make the list once, and then copy it:
a=[1,2,3]
b=[7,8,9]
l1 = list(zip(a,b))
l2 = l1[:]
Just create a list out of your iterator using list() once, and use it afterwards.
It just happens that zip returns a generator, which is an iterator that you can only iterate once.
You can iterate a list as many times as you want.
No, there is no way to "reset them".
Generators generate their output once, one by one, on demand, and then are done when the output is exhausted.
Think of them like reading a file, once you are through, you'll have to restart if you want to have another go at the data.
If you need to keep the generator's output around, then consider storing it, for instance, in a list, and subsequently re-use it as often as you need. (Somewhat similar to the decisions that guided the use of xrange(), a generator vs range() which created a whole list of items in memory in v2)
Updated: corrected terminology, temporary brain-outage ...
Yet another explanation. As a programmer, you probably understand the difference between classes vs. instances (i.e. objects). The zip() is said to be a built-in function (in the official doc). Actually, it is a built-in generator function. It means it is rather the class. You can even try in the interactive mode:
>>> zip
<class 'zip'>
The classes are types. Because of that also the following should be clear:
>>> type(zip)
<class 'type'>
Your z is the instance of the class, and you can think about calling the zip() as about calling the class constructor:
>>> a = [1, 2, 3]
>>> b = [7, 8, 9]
>>> z = zip(a, b)
>>> z
<zip object at 0x0000000002342AC8>
>>> type(z)
<class 'zip'>
The z is an iterator (object) that keeps inside the iterators for the a and b. Because of its generic implementation, the z (or the zip class) has no mean to reset the iterators through the a or b or whatever sequences. Because of that there is no way to reset the z. The cleanest way to solve your concrete problem is to copy the list (as you mentioned in the question and Lennart Regebro suggests). Another understandable way is to use the zip(a, b) twice, thus constructing the two z-like iterators that behaves from the start the same way:
>>> lst1 = list(zip(a, b))
>>> lst2 = list(zip(a, b))
However, this cannot be used generally with the identical result. Think about a or b being unique sequences generated based on some current conditions (say temperatures read from several thermometers).

Most pythonic way to truncate a list to N indices when you can't guarantee the list is at least N length?

What is the most pythonic way to truncate a list to N indices when you can not guarantee the list is even N length? Something like this:
l = range(6)
if len(l) > 4:
l = l[:4]
I'm fairly new to python and am trying to learn to think pythonicly. The reason I want to even truncate the list is because I'm going to enumerate on it with an expected length and I only care about the first 4 elements.
Python automatically handles, the list indices that are out of range gracefully.
In [5]: k = range(2)
In [6]: k[:4]
Out[6]: [0, 1]
In [7]: k = range(6)
In [8]: k[:4]
Out[8]: [0, 1, 2, 3]
BTW, the degenerate slice behavior is explained in The Python tutorial. That is a good place to start because it covers a lot of concepts very quickly.
You've got it here:
lst = lst[:4]
This works regardless of the number of items in the list, even if it's less than 4.
If you want it to always have 4 elements, padding it with (say) zeroes or None if it's too short, try this:
lst = (lst + [0] * 4)[:4]
When you have a question like this, it's usually feasible to try it and see what happens, or look it up in the documentation.
It's bad idea to name a variable list, by the way; this will prevent you from referring to the built-in list type.
All of the answers so far don't truncate the list. They follow your example in assigning the name to a new list which contains the first up to 4 elements of the old list. To truncate the existing list, delete elements whose index is 4 or higher. This is done very simply:
del lst[4:]
Moving on to what you really want to do, one possibility is:
for i, value in enumerate(lst):
if i >= 4:
break
do_something_with(lst, i, value)
I think the most pythonic solution that answers your question is this:
for x in l[:4]:
do_something_with(x)
The advantages:
It's the most succinct and descriptive way of doing what you're after.
I suggest that the pythonic way to do this is to slice the source array rather than truncate it. Once you remove elements from the source array they are gone forever, and while in your trivial example that's fine, I think in general it's not a great practice.
Slicing the array to the first four is almost certainly more efficient than removing a possibly large number of elements from its tail.

Categories