How do I reverse an itertools.chain object?

My function creates a chain of generators:
def bar(num):
    import itertools
    some_sequence = (x*1.5 for x in range(num))
    some_other_sequence = (x*2.6 for x in range(num))
    chained = itertools.chain(some_sequence, some_other_sequence)
    return chained
My function sometimes needs to return chained in reversed order. Conceptually, the following is what I would like to be able to do:
if num < 0:
    return reversed(chained)
return chained
Unfortunately:
>>> reversed(chained)
TypeError: argument to reversed() must be a sequence
What are my options?
This is in some realtime graphic rendering code so I don't want to make it too complicated/slow.
EDIT:
When I first posed this question I hadn't thought about the reversibility of generators. As many have pointed out, generators can't be reversed.
I do in fact want to reverse the flattened contents of the chain; not just the order of the generators.
Based on the responses, there is no single call I can use to reverse an itertools.chain, so I think the only solution here is to use a list, at least for the reverse case, and perhaps for both.

if num < 0:
    lst = list(chained)
    lst.reverse()
    return lst
else:
    return chained
reversed() needs an actual sequence, because it iterates it backwards by index, and that wouldn't work for a generator (which only has the notion of "next" item).
Since you will need to consume the whole generator anyway in order to reverse it, the most efficient approach is to read it into a list and reverse the list in place with the .reverse() method.
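A compact sketch of that, reusing bar() from the question; .reverse() mutates the list in place, while reversed() on the materialized list returns a lazy iterator, which is handy when the reversed values are only consumed once:
chained = bar(10)

lst = list(chained)  # one pass: materialize the generator into a list
lst.reverse()        # reverse in place, no second copy

# Alternatively, keep it lazy: reversed() on the materialized list
# returns an iterator over it instead of another list.
rev_iter = reversed(list(bar(10)))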

You cannot reverse generators by definition. The interface of a generator is the iterator, which supports only forward iteration. When you want to reverse an iterator, you have to collect all its items first and reverse them after that.
Use lists instead or generate the sequences backwards from the start.

itertools.chain would need to implement __reversed__() (this would be best) or __len__() and __getitem__().
Since it doesn't, and there's not even a way to access the internal sequences, you'll need to expand the entire sequence to be able to reverse it:
reversed(list(CHAIN_INSTANCE))
It would be nice if chain made __reversed__() available when all the underlying sequences are reversible, but currently it does not. Perhaps you can write your own version of chain that does (see the sketch after the helper below).

def reversed2(iterable):
    return reversed(list(iterable))
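As a rough illustration of that idea, here is a minimal sketch (not the stdlib itertools.chain; the class name ReversibleChain is made up for this example) of a chain-like object that supports reversed() whenever every input is itself a reversible sequence:
class ReversibleChain:
    def __init__(self, *sequences):
        self.sequences = sequences

    def __iter__(self):
        for seq in self.sequences:
            for item in seq:
                yield item

    def __reversed__(self):
        # Walk the sequences last-to-first, each one back-to-front.
        for seq in reversed(self.sequences):
            for item in reversed(seq):
                yield item

ch = ReversibleChain([1.5, 3.0], [2.6, 5.2])
print(list(ch))            # [1.5, 3.0, 2.6, 5.2]
print(list(reversed(ch)))  # [5.2, 2.6, 3.0, 1.5]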

reversed only works on objects that support len and indexing (or that implement __reversed__). You have to generate all results of a generator first before wrapping reversed around them.
However, you could easily do this:
def bar(num):
    import itertools
    some_sequence = (x*1.5 for x in range(num - 1, -1, -1))
    some_other_sequence = (x*2.6 for x in range(num - 1, -1, -1))
    chained = itertools.chain(some_other_sequence, some_sequence)
    return chained

Does this work in your real app?
def bar(num):
    import itertools
    some_sequence = (x*1.5 for x in range(num))
    some_other_sequence = (x*2.6 for x in range(num))
    list_of_chains = [some_sequence, some_other_sequence]
    if num < 0:
        list_of_chains.reverse()
    chained = itertools.chain(*list_of_chains)
    return chained

In theory you can't because chained objects may even contain infinite sequences such as itertools.count(...).
You should try to reverse your generators/sequences or use reversed(iterable) for each sequence if applicable and then chain them together last-to-first. Of course this highly depends on your use case.
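For the bar() from the question that could look like the following sketch (bar_reversed is a hypothetical name; it relies on range objects being reversible, so nothing has to be materialized):
import itertools

def bar_reversed(num):
    # Build each sequence backwards and chain them last-to-first.
    some_sequence = (x*1.5 for x in reversed(range(num)))
    some_other_sequence = (x*2.6 for x in reversed(range(num)))
    return itertools.chain(some_other_sequence, some_sequence)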

Related

Using iter() with sentinel to replace while loops

Oftentimes the case arises where one needs to loop indefinitely until a certain condition has been met. For example, if I want to keep collecting random integers until I find a number == n, at which point I break, I'd do this:
import random
rlist = []
n = ...
low, high = ..., ...
while True:
    num = random.randint(low, high)
    if num == n:
        break
    rlist.append(num)
And this works, but is quite clunky. There is a much more pythonic alternative using iter:
iter(o[, sentinel])
Return an iterator object. The first argument is interpreted very differently depending on the presence of the second argument. [...] If the second argument, sentinel, is given, then o must be a callable object. The iterator created in this case will call o with no arguments for each call to its next() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned.
The loop above can be replaced with
import random
from functools import partial
f = partial(random.randint, low, high)
rlist = list(iter(f, n))
To extend this principle to lists that have already been created, a slight change is needed. I'll need to define a partial function like this:
f = partial(next, iter(x)) # where x is some list I want to keep taking items from until I hit a sentinel
The rest remains the same, but the main caveat with this approach versus the while loop is that I cannot apply generic boolean conditions.
For example, I cannot apply a "generate numbers until the first even number greater than 1000 is encountered".
The bottom line is this: Is there another alternative to the while loop and iter that supports a callback sentinel?
If you want generic boolean conditions, then iter(object, sentinel) is insufficiently expressive for your needs. itertools.takewhile(), in contrast, seems to be more or less what you want: It takes an iterator, and cuts it off once a given predicate stops being true.
rlist = list(itertools.takewhile(lambda x: x >= 20, inputlist))
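Applied to the random-number example above, a hedged sketch (low, high and n are illustrative values standing in for the question's placeholders) pairs an endless generator with takewhile:
import itertools
import random

low, high, n = 1, 100, 42  # illustrative values only

# Endless stream of random integers, cut off before the first value == n.
stream = (random.randint(low, high) for _ in itertools.count())
rlist = list(itertools.takewhile(lambda x: x != n, stream))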
Incidentally, partial is not very Pythonic, and neither is itertools. GvR is on record as disliking higher-order functional-style programming (note the downgrading of reduce from built-in to a module member in 3.0). Attributes like "elegant" and "readable" are in the eye of the beholder, but if you're looking for Pythonic in the purest sense, you want the while loop.

Is there a way to specify the reduce() accumulator in Python?

I've been learning a lot of Haskell lately and wanted to try out some of its neat tricks in Python. From what I can tell, Python's reduce automatically seeds the accumulator and the iteration variable of the passed function with the first two values of the given list. In Haskell, when I use its equivalent, fold, I can specify what I want the accumulator to be. Is there a way I can do this with Python's reduce?
Quoting the reduce docs, the interface is:
reduce(function, iterable[, initializer])
If the optional initializer is present, it is placed before the items of the iterable in the calculation, and serves as a default when the iterable is empty. If initializer is not given and iterable contains only one item, the first item is returned.
So, an (academic) example of using the initializer might be:
from functools import reduce  # in Python 3, reduce lives in functools
seq = ['s1', 's22', 's333']
len_sum_count = reduce(lambda accumulator, s: accumulator + len(s), seq, 0)
assert len_sum_count == 9
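The initializer also fixes the accumulator's type, just as choosing the seed value does for Haskell's fold; a small hedged sketch accumulating into a dict instead of an int:
from functools import reduce

seq = ['s1', 's22', 's333']

# Accumulator starts as an empty dict and maps each string to its length.
lengths = reduce(lambda acc, s: {**acc, s: len(s)}, seq, {})
assert lengths == {'s1': 2, 's22': 3, 's333': 4}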

Python list slicing efficiency

In the following code:
def listSum(alist):
    """Get sum of numbers in a list recursively."""
    sum = 0
    if len(alist) == 1:
        return alist[0]
    else:
        return alist[0] + listSum(alist[1:])
    return sum
is a new list created every time when I do listSum(alist[1:])?
If yes, is this the recommended way or can I do something more efficient? (Not for the specific function -this serves as an example-, but rather when I want to process a specific part of a list in general.)
Edit:
Sorry if I confused anyone, I am not interested in an efficient sum implementation, this served as an example to use slicing this way.
Yes, it creates a new list every time. If you can get away with using an iterable, you can use itertools.islice, or juggle iter(list) (if you only need to skip some items at the start). But this gets messy when you need to determine if the argument is empty or only has one element - you have to use try and catch StopIteration. Edit: You could also add an extra argument that determines where to start. Unlike #marcadian's version, you should make it a default argument to not burden the caller with that and avoid bugs from the wrong index being passed in from the outside.
It's often better not to get into that sort of situation in the first place: write your code so that a for loop handles the iteration (read: don't use recursion like that). Alternatively, if the slice is reasonably small (possibly because the list as a whole is small), bite the bullet and slice anyway - it's easier, and while slicing is linear time, the constant factor is really tiny.
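A hedged sketch of the iterator-juggling idea mentioned above (list_sum is a made-up name; note the try/except StopIteration dance for the empty case):
def list_sum(alist):
    # Work on an iterator so no sliced copies are ever created.
    def go(it):
        try:
            head = next(it)
        except StopIteration:  # nothing left to add
            return 0
        return head + go(it)
    return go(iter(alist))

print(list_sum([1, 2, 3, 4]))  # 10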
I can think of some options:
use the builtin function sum for this particular case
If you really need recursion (for some reason), pass the same list into the recursive call along with the index of the current element, so you don't need to slice the list (which creates a new list)
option 2 works like this:
def f_sum(alist, idx=0):
    if idx >= len(alist):
        return 0
    return alist[idx] + f_sum(alist, idx + 1)
f_sum([1, 2, 3, 4])
a = range(5)
f_sum(a)
This is another way if you must use recursion: a tail-recursive version. In many other languages tail recursion is more space-efficient than regular recursion, but unfortunately that isn't the case in Python, because it has no built-in support for optimizing tail calls. It's still a good pattern to be aware of when learning recursion.
def helper(l, s, i):
    if len(l) == 0:
        return 0
    elif i < len(l) - 1:
        return helper(l, s + l[i], i + 1)
    else:
        return s + l[i]
def listSum(l):
    return helper(l, 0, 0)

Inconsistent behavior of python generators

The following python code produces [(0, 0), (0, 7)...(0, 693)] instead of the expected list of tuples combining all of the multiples of 3 and multiples of 7:
multiples_of_3 = (i*3 for i in range(100))
multiples_of_7 = (i*7 for i in range(100))
list((i,j) for i in multiples_of_3 for j in multiples_of_7)
This code fixes the problem:
list((i,j) for i in (i*3 for i in range(100)) for j in (i*7 for i in range(100)))
Questions:
The generator object seems to play the role of an iterator instead of providing a fresh iterator each time the generated sequence is to be enumerated. The latter strategy seems to be the one adopted by .NET LINQ query objects. Is there an elegant way to get around this?
How come the second piece of code works? Shall I understand that the generator's iterator is not reset after looping through all multiples of 7?
Don't you think that this behavior is counter intuitive if not inconsistent?
A generator object is an iterator, and therefore one-shot. It's not an iterable which can produce any number of independent iterators. This behavior is not something you can change with a switch somewhere, so any workaround amounts to either using an iterable (e.g. a list) instead of a generator or repeatedly constructing generators.
The second snippet does the latter. It is by definition equivalent to the loops
for i in (i*3 for i in range(100)):
    for j in (i*7 for i in range(100)):
        ...
Hopefully it isn't surprising that here, the latter generator expression is evaluated anew on each iteration of the outer loop.
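The simplest variant of the first workaround, sketched here, is to materialize the inner generator expression as a list so it can be iterated once per outer value:
multiples_of_3 = (i * 3 for i in range(100))
multiples_of_7 = list(i * 7 for i in range(100))  # a list can be re-iterated

# Every multiple of 3 is now paired with every multiple of 7.
pairs = [(i, j) for i in multiples_of_3 for j in multiples_of_7]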
As you discovered, the object created by a generator expression is an iterator (more precisely a generator-iterator), designed to be consumed only once. If you need a resettable generator, simply create a real generator and use it in the loops:
def multiples_of_3(): # generator
    for i in range(100):
        yield i * 3
def multiples_of_7(): # generator
    for i in range(100):
        yield i * 7
list((i,j) for i in multiples_of_3() for j in multiples_of_7())
Your second code works because the expression list of the inner loop ((i*7 ...)) is evaluated on each pass of the outer loop. This results in creating a new generator-iterator each time around, which gives you the behavior you want, but at the expense of code clarity.
To understand what is going on, remember that there is no "resetting" of an iterator when the for loop iterates over it. (This is a feature; such a reset would break iterating over a large iterator in pieces, and it would be impossible for generators.) For example:
multiples_of_2 = iter(xrange(0, 100, 2)) # iterator
for i in multiples_of_2:
    print i
# prints nothing because the iterator is spent
for i in multiples_of_2:
    print i
...as opposed to this:
multiples_of_2 = xrange(0, 100, 2) # iterable sequence, converted to iterator
for i in multiples_of_2:
    print i
# prints again because a new iterator gets created
for i in multiples_of_2:
    print i
A generator expression is equivalent to an invoked generator and can therefore only be iterated over once.
The real issue, as I found out, is about single- versus multi-pass iterables, and the fact that there is currently no standard mechanism to determine whether an iterable is single- or multi-pass: see Single- vs. Multi-pass iterability.
If you want to convert a generator expression to a multipass iterable, then it can be done in a fairly routine fashion. For example:
class MultiPass(object):
    def __init__(self, initfunc):
        self.initfunc = initfunc
    def __iter__(self):
        return self.initfunc()
multiples_of_3 = MultiPass(lambda: (i*3 for i in range(20)))
multiples_of_7 = MultiPass(lambda: (i*7 for i in range(20)))
print list((i,j) for i in multiples_of_3 for j in multiples_of_7)
From the point of view of defining the thing it's a similar amount of work to typing:
def multiples_of_3():
    return (i*3 for i in range(20))
but from the point of view of the user, they write multiples_of_3 rather than multiples_of_3(), which means the object multiples_of_3 is polymorphic with any other iterable, such as a tuple or list.
The need to type lambda: is a bit inelegant, true. I don't suppose there would be any harm in introducing "iterable comprehensions" to the language, to give you what you want while maintaining backward compatibility. But there are only so many punctuation characters, and I doubt this would be considered worth one.

Python: List comprehension significantly faster than Filter? [duplicate]

I have a list that I want to filter by an attribute of the items.
Which of the following is preferred (readability, performance, other reasons)?
xs = [x for x in xs if x.attribute == value]
xs = filter(lambda x: x.attribute == value, xs)
It is strange how much beauty varies for different people. I find the list comprehension much clearer than filter+lambda, but use whichever you find easier.
There are two things that may slow down your use of filter.
The first is the function call overhead: as soon as you use a Python function (whether created by def or lambda) it is likely that filter will be slower than the list comprehension. It almost certainly is not enough to matter, and you shouldn't think much about performance until you've timed your code and found it to be a bottleneck, but the difference will be there.
The other overhead that might apply is that the lambda is being forced to access a scoped variable (value). That is slower than accessing a local variable and in Python 2.x the list comprehension only accesses local variables. If you are using Python 3.x the list comprehension runs in a separate function so it will also be accessing value through a closure and this difference won't apply.
The other option to consider is to use a generator instead of a list comprehension:
def filterbyvalue(seq, value):
    for el in seq:
        if el.attribute == value:
            yield el
Then in your main code (which is where readability really matters) you've replaced both list comprehension and filter with a hopefully meaningful function name.
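A hedged usage sketch (Item, items and the attribute values are made up for illustration):
from collections import namedtuple

Item = namedtuple('Item', 'attribute payload')
items = [Item('a', 1), Item('b', 2), Item('a', 3)]

# The call site now reads as plain English.
selected = list(filterbyvalue(items, 'a'))
print(selected)  # [Item(attribute='a', payload=1), Item(attribute='a', payload=3)]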
This is a somewhat religious issue in Python. Even though Guido considered removing map, filter and reduce from Python 3, there was enough of a backlash that in the end only reduce was moved from built-ins to functools.reduce.
Personally I find list comprehensions easier to read. It is more explicit what is happening from the expression [i for i in list if i.attribute == value] as all the behaviour is on the surface not inside the filter function.
I would not worry too much about the performance difference between the two approaches as it is marginal. I would really only optimise this if it proved to be the bottleneck in your application which is unlikely.
Also since the BDFL wanted filter gone from the language then surely that automatically makes list comprehensions more Pythonic ;-)
Since any speed difference is bound to be miniscule, whether to use filters or list comprehensions comes down to a matter of taste. In general I'm inclined to use comprehensions (which seems to agree with most other answers here), but there is one case where I prefer filter.
A very frequent use case is pulling out the values of some iterable X subject to a predicate P(x):
[x for x in X if P(x)]
but sometimes you want to apply some function to the values first:
[f(x) for x in X if P(f(x))]
As a specific example, consider
prime_cubes = [x*x*x for x in range(1000) if prime(x)]
I think this looks slightly better than using filter. But now consider
prime_cubes = [x*x*x for x in range(1000) if prime(x*x*x)]
In this case we want to filter against the post-computed value. Besides the issue of computing the cube twice (imagine a more expensive calculation), there is the issue of writing the expression twice, violating the DRY aesthetic. In this case I'd be apt to use
prime_cubes = filter(prime, [x*x*x for x in range(1000)])
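For completeness, on Python 3.8+ an assignment expression offers a third option that avoids both the double computation and the double write (a hedged sketch, reusing the hypothetical prime() from the examples above):
# Python 3.8+: bind the cube once inside the condition, reuse it as the output.
prime_cubes = [c for x in range(1000) if prime(c := x * x * x)]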
Although filter may be the "faster way", the "Pythonic way" would be not to care about such things unless performance is absolutely critical (in which case you wouldn't be using Python!).
I thought I'd just add that in Python 3, filter() actually returns an iterator, so you'd have to pass your filter call to list() in order to build the filtered list. So in Python 2:
lst_a = range(25) #arbitrary list
lst_b = [num for num in lst_a if num % 2 == 0]
lst_c = filter(lambda num: num % 2 == 0, lst_a)
Lists b and c hold the same values and took about the same time to build, since filter() was equivalent to [x for x in y if z]. However, in 3, this same code would leave list c containing a filter object, not a filtered list. To produce the same values in 3:
lst_a = range(25) #arbitrary list
lst_b = [num for num in lst_a if num % 2 == 0]
lst_c = list(filter(lambda num: num %2 == 0, lst_a))
The problem is that list() takes an iterable as its argument and creates a new list from that argument. The result is that using filter this way in Python 3 takes up to twice as long as the [x for x in y if z] method, because you have to iterate over the output from filter() as well as the original list.
An important difference is that a list comprehension returns a list, while filter returns a filter object, which you cannot manipulate like a list (i.e. you cannot call len on it, which does work with the result of a list comprehension).
My own self-learning brought me to a similar issue.
That being said, if there is a way to get the resulting list from a filter, a bit like you would do in .NET with lst.Where(i => i.something()).ToList(), I am curious to know it.
EDIT: This is the case for Python 3, not 2 (see discussion in comments).
I find the second way more readable. It tells you exactly what the intention is: filter the list.
PS: do not use 'list' as a variable name
Generally, filter is slightly faster when used with a builtin function.
I would expect the list comprehension to be slightly faster in your case.
Filter is just that: it filters out the elements of a list. You can see that the definition says the same (in the official docs link I mentioned before). A list comprehension, on the other hand, produces a new list after acting on something from the previous list. (Both filter and list comprehension create a new list rather than operating in place on the old list. A new list here can even hold an entirely new data type, such as converting integers to strings, etc.)
In your example, it is better to use filter than a list comprehension, as per the definition. However, if you want, say, other_attribute from the list elements to be retrieved as a new list, then you can use a list comprehension:
return [item.other_attribute for item in my_list if item.attribute==value]
This is how I actually remember the difference between filter and list comprehension: to remove a few things from a list and keep the other elements intact, use filter; to apply some logic of your own to the elements and create a watered-down list suitable for some purpose, use a list comprehension.
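A tiny illustration of that rule of thumb (nums is a made-up input):
nums = [1, 2, 3, 4, 5]

# filter: keep some elements, leave them intact.
evens = list(filter(lambda n: n % 2 == 0, nums))      # [2, 4]

# list comprehension: filter and transform in one go.
labels = [f'n={n}' for n in nums if n % 2 == 0]       # ['n=2', 'n=4']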
Here's a short piece I use when I need to filter on something after the list comprehension. Just a combination of filter, lambda, and lists (otherwise known as the loyalty of a cat and the cleanliness of a dog).
In this case I'm reading a file, stripping out blank lines, commented out lines, and anything after a comment on a line:
# Throw out blank lines and comments
with open('file.txt', 'r') as lines:
    # From the inside out:
    # [s.partition('#')[0].strip() for s in lines]... Throws out comments
    # filter(lambda x: x != '', [s.part... Filters out blank lines
    # y for y in filter... Converts filter object to list
    file_contents = [y for y in filter(lambda x: x != '', [s.partition('#')[0].strip() for s in lines])]
It took me some time to get familiar with the higher-order functions filter and map. Once I got used to them I actually liked filter, as it was explicit that it filters by keeping whatever is truthy, and I felt cool that I knew some functional programming terms.
Then I read this passage (from the Fluent Python book):
The map and filter functions are still builtins in Python 3, but since the introduction of list comprehensions and generator expressions, they are not as important. A listcomp or a genexp does the job of map and filter combined, but is more readable.
And now I think: why bother with the concept of filter/map if you can achieve the same with already widely spread idioms like list comprehensions? Furthermore, map and filter require a function argument, and in that case I prefer anonymous lambda functions.
Finally, just for the sake of having it tested, I timed both methods (map and list comprehension) and didn't see any relevant speed difference that would justify making arguments about it.
from timeit import Timer
timeMap = Timer(lambda: list(map(lambda x: x*x, range(10**7))))
print(timeMap.timeit(number=100))
timeListComp = Timer(lambda: [x*x for x in range(10**7)])
print(timeListComp.timeit(number=100))
#Map: 166.95695265199174
#List Comprehension 177.97208347299602
In addition to the accepted answer, there is a corner case where you should use filter instead of a list comprehension: if the list is unhashable you cannot directly process it with a list comprehension. A real-world example is using pyodbc to read results from a database. The fetchall() results from the cursor are an unhashable list. In this situation, to directly manipulate the returned results, filter should be used:
cursor.execute("SELECT * FROM TABLE1;")
data_from_db = cursor.fetchall()
processed_data = filter(lambda s: 'abc' in s.field1 or s.StartTime >= start_date_time, data_from_db)
If you use list comprehension here you will get the error:
TypeError: unhashable type: 'list'
In terms of performance, it depends.
filter does not return a list but an iterator; if you need the list 'immediately', filtering plus list conversion is slower than a list comprehension by about 40% for very large lists (>1M elements). Up to 100K elements there is almost no difference; from 600K onwards the differences start to show.
If you don't convert to a list, filter is practically instantaneous.
More info at: https://blog.finxter.com/python-lists-filter-vs-list-comprehension-which-is-faster/
Curiously on Python 3, I see filter performing faster than list comprehensions.
I always thought that the list comprehensions would be more performant.
Something like:
[name for name in brand_names_db if name is not None]
The bytecode generated is a bit better.
>>> def f1(seq):
...     return list(filter(None, seq))
>>> def f2(seq):
...     return [i for i in seq if i is not None]
>>> disassemble(f1.__code__)
2 0 LOAD_GLOBAL 0 (list)
2 LOAD_GLOBAL 1 (filter)
4 LOAD_CONST 0 (None)
6 LOAD_FAST 0 (seq)
8 CALL_FUNCTION 2
10 CALL_FUNCTION 1
12 RETURN_VALUE
>>> disassemble(f2.__code__)
2 0 LOAD_CONST 1 (<code object <listcomp> at 0x10cfcaa50, file "<stdin>", line 2>)
2 LOAD_CONST 2 ('f2.<locals>.<listcomp>')
4 MAKE_FUNCTION 0
6 LOAD_FAST 0 (seq)
8 GET_ITER
10 CALL_FUNCTION 1
12 RETURN_VALUE
But they are actually slower:
>>> timeit(stmt="f1(range(1000))", setup="from __main__ import f1,f2")
21.177661532000116
>>> timeit(stmt="f2(range(1000))", setup="from __main__ import f1,f2")
42.233950221000214
I would come to the conclusion: use a list comprehension over filter, since it is
more readable
more pythonic
faster (for Python 3.11, see the benchmark below)
Keep in mind that filter returns an iterator, not a list.
python3 -m timeit '[x for x in range(10000000) if x % 2 == 0]'
1 loop, best of 5: 270 msec per loop
python3 -m timeit 'list(filter(lambda x: x % 2 == 0, range(10000000)))'
1 loop, best of 5: 432 msec per loop
Summarizing other answers
Looking through the answers, we have seen a lot of back and forth about whether a list comprehension or filter may be faster, or whether it is even important or pythonic to care about such an issue. In the end, the answer is, as most of the time: it depends.
I just stumbled across this question while optimizing code where this exact question (albeit combined with an in expression, not ==) is very relevant - the filter + lambda expression is taking up a third of my computation time (of multiple minutes).
My case
In my case, the list comprehension is much faster (twice the speed). But I suspect that this varies strongly based on the filter expression as well as the Python interpreter used.
Test it for yourself
Here is a simple code snippet that should be easy to adapt. If you profile it (most IDEs can do that easily), you will be able to decide for your specific case which is the better option:
whitelist = set(range(0, 100000000, 27))
input_list = list(range(0, 100000000))
proximal_list = list(filter(
    lambda x: x in whitelist,
    input_list
))
proximal_list2 = [x for x in input_list if x in whitelist]
print(len(proximal_list))
print(len(proximal_list2))
If you do not have an IDE that lets you profile easily, try this instead (extracted from my codebase, so a bit more complicated). This code snippet will create a profile for you that you can easily visualize using e.g. snakeviz:
import cProfile
from time import time
class BlockProfile:
    def __init__(self, profile_path):
        self.profile_path = profile_path
        self.profiler = None
        self.start_time = None
    def __enter__(self):
        self.profiler = cProfile.Profile()
        self.start_time = time()
        self.profiler.enable()
    def __exit__(self, *args):
        self.profiler.disable()
        exec_time = int((time() - self.start_time) * 1000)
        self.profiler.dump_stats(self.profile_path)
whitelist = set(range(0, 100000000, 27))
input_list = list(range(0, 100000000))
with BlockProfile("/path/to/create/profile/in/profile.pstat"):
    proximal_list = list(filter(
        lambda x: x in whitelist,
        input_list
    ))
    proximal_list2 = [x for x in input_list if x in whitelist]
print(len(proximal_list))
print(len(proximal_list2))
Your question is simple yet interesting. It just shows how flexible Python is as a programming language. One may use any logic and write the program according to their own talent and understanding; it is fine as long as we get the answer.
In your case it is just a simple filtering task, which both can do, but I would prefer the first one, my_list = [x for x in my_list if x.attribute == value], because it seems simpler and does not need any special syntax. Anyone can understand this command and make changes to it if needed.
(Although the second method is also simple, it still has more complexity than the first one for beginner-level programmers.)
