I'm familiar with the for loop in a block-code context. eg:
for c in "word":
    print c
I just came across some examples that use for differently. Rather than beginning with the for statement, they tag it at the end of an expression (and don't involve an indented code-block). eg:
sum(x*x for x in range(10))
Can anyone point me to some documentation that outlines this use of for? I've been able to find examples, but not explanations. All the for documentation I've been able to find describes the previous use (block-code example). I'm not even sure what to call this use, so I apologize if my question's title is unclear.
What you are pointing to is a generator expression in Python. Take a look at:
http://wiki.python.org/moin/Generators
http://www.python.org/dev/peps/pep-0255/
http://docs.python.org/whatsnew/2.5.html#pep-342-new-generator-features
See the documentation: Generator Expressions, which contains exactly the same example you have posted.
From the documentation:
Generators are a simple and powerful tool for creating iterators. They are written like regular functions but use the yield statement whenever they want to return data. Each time next() is called, the generator resumes where it left off (it remembers all the data values and which statement was last executed).
Generators are similar to list comprehensions, except that they use parentheses instead of square brackets, and they are more memory efficient. They don't build the complete list of results all at once; instead they return a generator object. Whenever you invoke next() on the generator object, the generator uses yield to return the next value.
The list comprehension for the above code would look like:
[x * x for x in range(10)]
You can also add a condition at the end of the for to filter out results.
[x * x for x in range(10) if x % 2 != 0]
This returns a list of the squares of the odd numbers in range(10): [1, 9, 25, 49, 81].
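The same condition works in a generator expression; a minimal sketch that sums the squares of the odd numbers instead of building a list:

# Sum the squares of the odd numbers in range(10): 1 + 9 + 25 + 49 + 81
total = sum(x * x for x in range(10) if x % 2 != 0)
print(total)  # 165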
An example of a generator showing the use of yield:
def city_generator():
    yield("Konstanz")
    yield("Zurich")
    yield("Schaffhausen")
    yield("Stuttgart")
>>> x = city_generator()
>>> x.next()
'Konstanz'
>>> x.next()
'Zurich'
>>> x.next()
'Schaffhausen'
>>> x.next()
'Stuttgart'
>>> x.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
So, you see that every call to next() executes the next yield in the generator, and at the end it raises StopIteration.
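Note that x.next() is Python 2 syntax; in Python 3 the method was renamed __next__(), and you call the built-in next() on the generator instead, or simply iterate with a for loop:

x = city_generator()
print(next(x))   # Konstanz
for city in x:   # consumes the remaining values
    print(city)  # Zurich, Schaffhausen, Stuttgart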
Those are generator expressions, and they are related to list comprehensions.
List comprehensions allow for the easy creation of lists. For example, if you wanted to create a list of perfect squares you could do this:
>>> squares = []
>>> for x in range(10):
...     squares.append(x**2)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
But instead you could use a list comprehension:
squares = [x**2 for x in range(10)]
Generator expressions are like list comprehensions, except they return a generator object instead of a list. You can iterate over this generator object in a similar manner to list comprehensions, but you don't have to store the whole list in memory at once, as you would if you created the list in a list comprehension.
Documentation for generator expressions is here: https://www.python.org/dev/peps/pep-0289/
Here is the same code using a generator expression:
list(x**2 for x in range(0,10))
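To see the memory difference directly, here is a minimal sketch comparing the two forms with sys.getsizeof (exact sizes vary by Python version and platform):

import sys

list_comp = [x**2 for x in range(10000)]  # builds the whole list now
gen_expr = (x**2 for x in range(10000))   # builds nothing yet

print(sys.getsizeof(list_comp))  # tens of kilobytes
print(sys.getsizeof(gen_expr))   # a small constant, roughly 100 bytes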
Your specific example is called a generator expression. List comprehensions, dictionary comprehensions, and set comprehensions are similar in meaning (different result types, and generator expressions are lazy) and have the same syntax, modulo being inside other kinds of brackets, and in the case of a dict comprehension having expr1: expr2 instead of a single expression (x*x in your example).
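A quick side-by-side sketch of the four forms, reusing the x*x example:

nums = range(5)
squares_list = [x * x for x in nums]     # list comprehension -> [0, 1, 4, 9, 16]
squares_set = {x * x for x in nums}      # set comprehension  -> {0, 1, 4, 9, 16}
squares_dict = {x: x * x for x in nums}  # dict comprehension -> {0: 0, 1: 1, 2: 4, ...}
squares_gen = (x * x for x in nums)      # generator expression (lazy iterator)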
Related
The following python tutorial says that:
List comprehension is a complete substitute for the lambda function as well as the functions map(), filter() and reduce().
http://python-course.eu/python3_list_comprehension.php
However, it does not give an example of how a list comprehension can substitute for reduce(), and I can't think of how it would be possible.
Can someone please explain how to achieve reduce-like functionality with a list comprehension, or confirm that it isn't possible?
A list comprehension is meant to create a new list. Quoting the official documentation:
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
whereas reduce is used to reduce an iterable to a single value. Quoting the functools.reduce documentation:
Apply function of two arguments cumulatively to the items of sequence, from left to right, so as to reduce the sequence to a single value.
So, list comprehension cannot be used as a drop-in replacement for reduce.
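The difference is easy to see side by side: a comprehension maps an iterable to a new list, while reduce collapses it to one value.

from functools import reduce

data = [1, 2, 3, 4]
print([x * 2 for x in data])             # [2, 4, 6, 8] -- a new list, same length
print(reduce(lambda a, b: a + b, data))  # 10 -- a single value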
I was surprised at first to find that Guido van Rossum, creator of Python, was against reduce. His reasoning was that beyond summing, multiplying, and-ing, and or-ing, using reduce yields an unreadable solution that is better handled by a function that iterates through and updates an accumulator. His article on the matter is here. So no, there isn't a list comprehension alternative to reduce; instead the "pythonic" way is to implement an accumulating function the old-fashioned way:
Instead of:
out = reduce((lambda x,y: x*y),[1,2,3])
Use:
def prod(myList):
    out = 1
    for el in myList:
        out *= el
    return out
Of course, nothing stops you from continuing to use reduce (Python 2) or functools.reduce (Python 3).
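Both spellings give the same result; a quick check using the prod defined above:

from functools import reduce

print(prod([1, 2, 3]))                        # 6
print(reduce(lambda x, y: x * y, [1, 2, 3]))  # 6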
List comprehensions are supposed to return lists. If your reduce is supposed to return a list, then yes, you can replace it with a list comprehension.
But this is no obstacle to providing "reduce-like functionality". Python lists can contain any object. If you'll accept your result contained in a single-item list, then there is a [...][0] list comprehension form that can replace any reduce() whatsoever.
This should be obvious, but that form is
[x for x in [reduce(function, sequence, initial)]][0]
for some binary function and some iterable sequence and some initial value. Or, if you want the initial from the first of the iterable,
[x for x in [reduce(function, sequence)]][0]
Arguably, the above is cheating, and also pointless, since you could just use reduce without the comprehension. So let's try it without reduce.
[stack.append(function(stack.pop(), e)) or stack[0]
 for stack in ([initial],)
 for e in sequence][-1]
This produces a list of all the intermediate values, and we want the last one. [-1] is just as easy as [0]. We need an accumulator to reduce, but can't use assignment statements in a comprehension, hence the stack (which is just a list), but we could have used many other data structures here. The .append() always returns None, so we use or stack[0] to put the value so far in the resulting list.
It's a little more difficult without initial,
[stack.append(function(stack.pop(), e)) or stack[0]
 for it in [iter(sequence)]
 for stack in [[next(it)]]
 for e in it][-1]
Really, you might as well use a for statement at this point.
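For reference, a sketch of what that for statement would look like as a standalone function (my_reduce is a hypothetical name):

def my_reduce(function, sequence):
    it = iter(sequence)
    accumulator = next(it)  # raises StopIteration on an empty sequence
    for element in it:
        accumulator = function(accumulator, element)
    return accumulator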
But the comprehension version takes up memory for the list of intermediate values. For a very long sequence, that might be a problem. But we can avoid that too by using generator expressions.
Doing this is tricky, so let's start with an easier example and work up to it.
stack = [initial]
[stack.append(function(stack.pop(), e)) for e in sequence]
stack.pop() # returns the answer
It computes the answer, but also creates a useless list of Nones. We can avoid that by converting it to a generator expression inside a list comprehension.
stack = [initial]
[_ for _s in (stack.append(function(stack.pop(), e)) or ()
              for e in sequence)
 for _ in _s]
stack.pop()
The list comprehension exhausts the generator that updates the stack, but returns an empty list itself. This is possible because the inner loop always has zero iterations, because _s is always an empty tuple.
We can move the stack.pop() inside if the last _s has one element. It doesn't matter what that element is though. So we chain on a [None] as the final _s.
from itertools import chain
stack = [initial]
[stack.pop()
 for _s in chain((stack.append(function(stack.pop(), e)) or ()
                  for e in sequence),
                 [[None]])
 for _ in _s][0]
Again, we have a single-item list comprehension. We can also implement chain as a generator expression. And you've already seen how to move the stack variable inside using a single-item list.
[stack.pop()
 for stack in [[initial]]
 for _s in (
     x
     for xs in [
         (stack.append(function(stack.pop(), e)) or ()
          for e in sequence),
         [[None]],
     ]
     for x in xs)
 for _ in _s][0]
And we can also get the initial from the sequence for the two-argument reduce.
[stack.pop()
 for it in [iter(sequence)]
 for stack in [[next(it)]]
 for _s in (
     x
     for xs in [
         (stack.append(function(stack.pop(), e)) or ()
          for e in it),
         [[None]],
     ]
     for x in xs)
 for _ in _s][0]
This is insane. But it works. So yes, it's possible to get "reduce-like functionality" with comprehensions. That doesn't mean you should. Seven fors is too hard!
You could accomplish something like a reduce with a comprehension by using a couple of helper functions that I've named last and cofold:
>>> last(r(a+b) for a, b, r in cofold(range(10)))
45
This is functionally equivalent to
>>> reduce(lambda a, b: a+b, range(10))
45
Note that unlike reduce() the comprehension didn't use a lambda.
The trick is to use a generator with a callback to "return" the result of the operator. cofold is the corecursive dual of the reduce (or fold) function.
_sentinel = object()

def cofold(it, initial=_sentinel):
    if initial is _sentinel:
        it = iter(it)
        accumulator = next(it)
    else:
        accumulator = initial
    def callback(result):
        nonlocal accumulator
        accumulator = result
        return result
    for element in it:
        yield accumulator, element, callback
Here's cofold in a list comprehension.
>>> [r(a+b) for a, b, r in cofold(range(10))]
[1, 3, 6, 10, 15, 21, 28, 36, 45]
The elements represent each step in the dual reduction. The last one is our answer. The last function is trivial.
def last(it):
    for e in it:
        pass
    return e
Unlike reduce, cofold is a lazy generator, so it can safely act on infinite iterables when used in a generator expression.
>>> from itertools import islice, count
>>> lazy_results = (r(a+b) for a, b, r in cofold(count()))
>>> [*islice(lazy_results, 0, 9)]
[1, 3, 6, 10, 15, 21, 28, 36, 45]
>>> next(lazy_results)
55
>>> next(lazy_results)
66
primes = [2,3,5,7..] (prime numbers)
map(lambda x:print(x),primes)
It does not print anything.
Why is that?
I've tried
sys.stdout.write(x)
too, but that doesn't work either.
Since lambda x: print(x) is a syntax error in Python < 3, I'm assuming Python 3. That means map returns a lazy map object (an iterator), so to get map to actually call the function on every element of a list, you need to iterate through the resulting iterator.
Fortunately, this can be done easily:
list(map(lambda x:print(x),primes))
Oh, and you can get rid of the lambda too, if you like:
list(map(print,primes))
But, at that point you are better off with letting print handle it:
print(*primes, sep='\n')
NOTE: I said earlier that '\n'.join would be a good idea. That is only true for a list of str objects.
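For a list of ints like primes, you would have to convert first; a minimal sketch:

primes = [2, 3, 5, 7]
print('\n'.join(map(str, primes)))  # join needs strings, so convert each int first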
This works for me:
>>> from __future__ import print_function
>>> map(lambda x: print(x), primes)
2
3
5
7
[None, None, None, None]
Are you using Python 2.x where print is a statement, not a function?
Alternatively, you can unpack it by putting * before map(...) like the following
[*map(...)]
or
{*map(...)}
Choose the output you desire, a list or a set.
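For instance, with the primes list from the question:

primes = [2, 3, 5, 7]
[*map(print, primes)]  # prints each prime; the resulting list is [None, None, None, None]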
Another reason why you could be seeing this is that you're not evaluating the results of the map function. It returns a lazy map object (an iterator) that evaluates your function lazily, not an actual list.
primes = [2,3,5,7]
map(print, primes)  # no output, because it returns a lazy map object
primes = [2,3,5,7]
for i in map(print, primes):
    pass  # prints 2,3,5,7
Alternatively, you can do list(map(print, primes)), which will also force the map object to be evaluated and call the print function on each member of your list.
I am wondering if there is a simple Pythonic way (maybe using generators) to run a function over each item in a list and produce a list of the return values.
Example:
def square_it(x):
    return x*x

x_set = [0,1,2,3,4]
squared_set = square_it(x for x in x_set)
I notice that when I do a line by line debug on this, the object that gets passed into the function is a generator.
Because of this, I get an error:
TypeError: unsupported operand type(s) for *: 'generator' and 'generator'
I understand that this generator expression created a generator to be passed into the function, but I am wondering if there is a cool way to accomplish running the function multiple times only by specifying an iterable as the argument? (without modifying the function to expect an iterable).
It seems to me that this ability would be really useful to cut down on lines of code, because you would not need to create a loop to run the function and a variable to save the output in a list.
Thanks!
You want a list comprehension:
squared_set = [square_it(x) for x in x_set]
There's a builtin function, map(), for this common problem.
>>> map(square_it, x_set)
[0, 1, 4, 9, 16]  # On Python 3, a lazy map object is returned instead of a list.
Alternatively, one can use a generator expression, which is memory-efficient but lazy (meaning the values will not be computed now, only when needed):
>>> (square_it(x) for x in x_set)
<generator object <genexpr> at ...>
Similarly, one can also use a list comprehension, which computes all the values upon creation, returning a list.
Additionally, here's a comparison of generator expressions and list comprehensions.
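As a minimal sketch of that comparison, reusing square_it from the question:

def square_it(x):
    return x * x

x_set = [0, 1, 2, 3, 4]
squared_list = [square_it(x) for x in x_set]  # all values computed immediately
squared_gen = (square_it(x) for x in x_set)   # nothing computed yet

print(squared_list)       # [0, 1, 4, 9, 16]
print(next(squared_gen))  # 0 -- computed on demand
print(next(squared_gen))  # 1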
You want to call the square_it function inside the generator, not on the generator.
squared_set = (square_it(x) for x in x_set)
As the other answers have suggested, I think it is best (most "pythonic") to call your function explicitly on each element, using a list or generator comprehension.
To actually answer the question though, you can wrap your function that operates over scalars with a function that sniffs the input and has different behavior depending on what it sees. For example:
>>> import types
>>> def scaler_over_generator(f):
...     def wrapper(x):
...         if isinstance(x, types.GeneratorType):
...             return [f(i) for i in x]
...         return f(x)
...     return wrapper
>>> def square_it(x):
...     return x * x
>>> square_it_maybe_over = scaler_over_generator(square_it)
>>> square_it_maybe_over(10)
100
>>> square_it_maybe_over(x for x in range(5))
[0, 1, 4, 9, 16]
I wouldn't use this idiom in my code, but it is possible to do.
You could also code it up with a decorator, like so:
>>> @scaler_over_generator
... def square_it(x):
...     return x * x
>>> square_it(x for x in range(5))
[0, 1, 4, 9, 16]
If you didn't want/need a handle to the original function.
Note that there is a difference between list comprehension returning a list
squared_set = [square_it(x) for x in x_set]
and returning a generator that you can iterate over it:
squared_set = (square_it(x) for x in x_set)
I created a line that appends an object to a list in the following manner
>>> foo = list()
>>> def sum(a, b):
...     c = a+b; return c
...
>>> bar_list = [9,8,7,6,5,4,3,2,1,0]
>>> [foo.append(sum(i,x)) for i, x in enumerate(bar_list)]
[None, None, None, None, None, None, None, None, None, None]
>>> foo
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
>>>
The line
[foo.append(sum(i,x)) for i, x in enumerate(bar_list)]
would give a pylint W0106 Expression is assigned to nothing, but since I am already using the foo list to append the values, I don't need to assign the list comprehension line to something.
My questions is more of a matter of programming correctness
Should I drop the list comprehension and just use a simple for loop?
>>> for i, x in enumerate(bar_list):
...     foo.append(sum(i,x))
or is there a correct way to use both list comprehension an assign to nothing?
Answer
Thank you @user2387370, @kindall and @Martijn Pieters. As for the rest of the comments: I use append because I'm not using a list(), and I'm not using i+x because this is just a simplified example.
I left it as the following:
histogramsCtr = hist_impl.HistogramsContainer()
for index, tupl in enumerate(local_ranges_per_histogram_list):
    histogramsCtr.append(doSubHistogramData(index, tupl))
return histogramsCtr
Yes, this is bad style. A list comprehension is to build a list. You're building a list full of Nones and then throwing it away. Your actual desired result is a side effect of this effort.
Why not define foo using the list comprehension in the first place?
foo = [sum(i,x) for i, x in enumerate(bar_list)]
If it is not to be a list but some other container class, as you mentioned in a comment on another answer, write that class to accept an iterable in its constructor (or, if it's not your code, subclass it to do so), then pass it a generator expression:
foo = MyContainer(sum(i, x) for i, x in enumerate(bar_list))
If foo already has some value and you wish to append new items:
foo.extend(sum(i,x) for i, x in enumerate(bar_list))
If you really want to use append() and don't want to use a for loop for some reason then you can use this construction; the generator expression will at least avoid wasting memory and CPU cycles on a list you don't want:
any(foo.append(sum(i, x)) for i, x in enumerate(bar_list))
But this is a good deal less clear than a regular for loop, and there's still some extra work being done: any is testing the return value of foo.append() on each iteration. You can write a function to consume the iterator and eliminate that check; the fastest way uses a zero-length collections.deque:
from collections import deque
do = deque([], maxlen=0).extend
do(foo.append(sum(i, x)) for i, x in enumerate(bar_list))
This is actually fairly readable, but I believe it's not actually any faster than any() and requires an extra import. However, either do() or any() is a little faster than a for loop, if that is a concern.
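If performance matters, a rough timeit sketch for measuring the three approaches yourself (using i + x in place of the question's sum helper; absolute numbers will vary by Python version and input size):

import timeit

setup = """
from collections import deque
bar_list = list(range(1000))
do = deque([], maxlen=0).extend
"""

stmts = [
    "foo = []\nfor i, x in enumerate(bar_list): foo.append(i + x)",
    "foo = []\nany(foo.append(i + x) for i, x in enumerate(bar_list))",
    "foo = []\ndo(foo.append(i + x) for i, x in enumerate(bar_list))",
]
for stmt in stmts:
    print(timeit.timeit(stmt, setup=setup, number=1000))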
I think it's generally frowned upon to use list comprehensions just for side-effects, so I would say a for loop is better in this case.
But in any case, couldn't you just do foo = [sum(i,x) for i, x in enumerate(bar_list)]?
You should definitely drop the list comprehension. End of.
You are confusing anyone reading your code. You are building a list for the side-effects.
You are paying CPU cycles and memory for building a list you are discarding again.
In your simplified case, you are overlooking the fact you could have used a list comprehension directly:
[sum(i,x) for i, x in enumerate(bar_list)]
I have this:
>>> sum( i*i for i in xrange(5))
My question is, in this case am I passing a list comprehension or a generator object to sum ? How do I tell that? Is there a general rule around this?
Also remember sum by itself needs a pair of parentheses to surround its arguments. I'd think that the parentheses above are for sum and not for creating a generator object. Wouldn't you agree?
You are passing in a generator expression.
A list comprehension is specified with square brackets ([...]). A list comprehension builds a list object first, so it uses syntax closely related to the list literal syntax:
list_literal = [1, 2, 3]
list_comprehension = [i for i in range(4) if i > 0]
A generator expression, on the other hand, creates an iterator object. Only when iterating over that object is the contained loop executed and are items produced. The generator expression does not retain those items; there is no list object being built.
A generator expression always uses (...) round parentheses, but when used as the sole argument to a call, the parentheses can be omitted; the following two expressions are equivalent:
sum((i*i for i in xrange(5))) # with parenthesis
sum(i*i for i in xrange(5)) # without parenthesis around the generator
Quoting from the generator expression documentation:
The parentheses can be omitted on calls with only one argument. See section Calls for the detail.
List comprehensions are enclosed in []:
>>> [i*i for i in xrange(5)] # list comprehension
[0, 1, 4, 9, 16]
>>> (i*i for i in xrange(5)) # generator
<generator object <genexpr> at 0x2cee40>
You are passing a generator.
That is a generator:
>>> (i*i for i in xrange(5))
<generator object <genexpr> at 0x01A27A08>
>>>
List comprehensions are enclosed in [].
You might also be asking, "does this syntax truly cause sum to consume a generator one item at a time, or does it secretly create a list of every item in the generator first"? One way to check this is to try it on a very large range and watch memory usage:
sum(i for i in xrange(int(1e8)))
Memory usage for this case is constant, whereas range(int(1e8)) creates the full list and consumes several hundred MB of RAM.
You can test that the parentheses are optional:
def print_it(obj):
    print obj
print_it(i for i in xrange(5))
# prints <generator object <genexpr> at 0x03853C60>
I tried this:
#!/usr/bin/env python
class myclass:
    def __init__(self, arg):
        self.p = arg
        print type(self.p)
        print self.p

if __name__ == '__main__':
    c = myclass(i*i for i in xrange(5))
And this prints:
$ ./genexprorlistcomp.py
<type 'generator'>
<generator object <genexpr> at 0x7f5344c7cf00>
Which is consistent with what Martin and mdscruggs explained in their post.
You are passing a generator object; a list comprehension is surrounded by [].