Most useful list-comprehension construction? - python

Which user-made list-comprehension construction in Python is the most useful?
I have created the following two quantifiers, which I use to do different verification operations:
def every(f, L): return not (False in [f(x) for x in L])
def some(f, L): return True in [f(x) for x in L]
An optimized version (requires Python 2.5+) was proposed below:
def every(f, L): return all(f(x) for x in L)
def some(f, L): return any(f(x) for x in L)
So, how does it work?
"""For all x in [1,4,9] there exists such y from [1,2,3] that x = y**2"""
answer = every(lambda x: some(lambda y: y**2 == x, [1,2,3]), [1,4,9])
Using such operations, you can easily do smart verifications, like:
"""There exists at least one bot in a room which has a life below 30%"""
answer = some(lambda x: x.life < 0.3, bots_in_this_room)
and so on; you can answer even very complicated questions using such quantifiers. Of course, there are no infinite lists in Python (hey, it's not Haskell :) ), but Python's list comprehensions are very practical.
Do you have your own favourite list-comprehension constructions?
PS: I wonder why most people tend not to answer questions but to criticize the presented examples? The question is actually about favourite list-comprehension constructions.

any and all are part of standard Python from 2.5. There's no need to make your own versions of these. Also, the official versions of any and all short-circuit the evaluation where possible, giving a performance improvement. Your versions always build and iterate over the entire list.
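For instance, here's a small illustrative sketch that makes the short-circuiting visible; the predicate prints every time it is evaluated:
def noisy_check(x):
    print("checking %d" % x)
    return x < 3

# all() stops at the first falsy result (5), so only two checks are printed...
print(all(noisy_check(x) for x in [1, 5, 2, 4]))
# ...while the list-based version evaluates the predicate for every element.
print(not (False in [noisy_check(x) for x in [1, 5, 2, 4]]))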
If you want a version that accepts a predicate, use something like this that leverages the existing any and all functions:
def anyWithPredicate(predicate, l): return any(predicate(x) for x in l)
def allWithPredicate(predicate, l): return all(predicate(x) for x in l)
I don't particularly see the need for these functions though, as they don't really save much typing.
Also, hiding existing standard Python functions with your own functions that have the same name but different behaviour is a bad practice.

There aren't all that many cases where a list comprehension (LC for short) will be substantially more useful than the equivalent generator expression (GE for short, i.e., using round parentheses instead of square brackets, to generate one item at a time rather than "all in bulk at the start").
Sometimes you can get a little extra speed by "investing" the extra memory to hold the list all at once, depending on vagaries of optimization and garbage collection on one or another version of Python, but that hardly amounts to substantial extra usefulness of LC vs GE.
Essentially, to get substantial extra use out of the LC as compared to the GE, you need use cases which intrinsically require "more than one pass" on the sequence. In such cases, a GE would require you to generate the sequence once per pass, while, with an LC, you can generate the sequence once, then perform multiple passes on it (paying the generation cost only once). Multiple generation may also be problematic if the GE / LC are based on an underlying iterator that's not trivially restartable (e.g., a "file" that's actually a Unix pipe).
For example, say you are reading a non-empty open text file f which has a bunch of (textual representations of) numbers separated by whitespace (including newlines here and there, empty lines, etc). You could transform it into a sequence of numbers with either a GE:
G = (float(s) for line in f for s in line.split())
or a LC:
L = [float(s) for line in f for s in line.split()]
Which one is better? Depends on what you're doing with it (i.e., the use case!). If all you want is, say, the sum, sum(G) and sum(L) will do just as well. If you want the average, sum(L)/len(L) is fine for the list, but won't work for the generator -- given the difficulty of "restarting" f, to avoid an intermediate list you'll have to do something like:
tot = 0.0
for i, x in enumerate(G): tot += x
return tot/(i+1)
nowhere near as snappy, fast, concise and elegant as return sum(L)/len(L).
Remember that sorted(G) does return a list (inevitably), so L.sort() (which is in-place) is the rough equivalent in this case -- sorted(L) would be supererogatory (as now you have two lists). So when sorting is needed a generator may often be preferred simply due to conciseness.
All in all, since L is identically equivalent to list(G), it's hard to get very excited about the ability to express it via punctuation (square brackets instead of round parentheses) instead of a single, short, pronounceable and obvious word like list;-). And that's all a LC is -- punctuation-based syntax shortcut for list(some_genexp)...!
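To make the "multiple passes" point concrete, here is a small sketch (the file name and contents are hypothetical) of the case where the LC earns its keep:
# A hypothetical whitespace-separated file of numbers.
with open("numbers.txt") as f:
    L = [float(s) for line in f for s in line.split()]

print(sum(L) / len(L))   # average: needs both a full pass and the length
L.sort()                 # another pass, in place, no second list
print(L[0], L[-1])       # smallest and largest value

# With a generator expression each of these passes would need the file
# to be re-read, since a generator (like the file) is exhausted after one pass.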

This solution shadows builtins, which is generally a bad idea. However, the usage feels fairly pythonic, and it preserves the original functionality.
Note there are several ways to potentially optimize this based on testing, including moving the imports out to module level, and changing f's default to None and testing for it instead of using a default lambda as I did.
def any(l, f=lambda x: x):
    from __builtin__ import any as _any
    return _any(f(x) for x in l)

def all(l, f=lambda x: x):
    from __builtin__ import all as _all
    return _all(f(x) for x in l)
Just putting that out there for consideration and to see what people think of doing something so potentially dirty.
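For what it's worth, here is roughly how the shadowed versions would be used (a sketch assuming the definitions above are in the same module; Python 2, because of __builtin__):
lives = [0.9, 0.25, 0.6]
print(any(lives, lambda life: life < 0.3))   # True: some bot is below 30%
print(all(lives, lambda life: life > 0.1))   # True: every bot is above 10%
print(any([0, "", None]))                    # False: the default f is the identity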

For your information, the documentation for the itertools module in Python 3.x lists some pretty nice generator functions.
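As a small taste of what's there (count, islice and takewhile all exist in itertools):
from itertools import count, islice, takewhile

# count() is an infinite lazy counter; islice() bounds it without building a full list.
evens = (n for n in count() if n % 2 == 0)
print(list(islice(evens, 5)))                         # [0, 2, 4, 6, 8]
print(list(takewhile(lambda n: n < 20, count(15))))   # [15, 16, 17, 18, 19]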

Related


Python generators seem too slow to use. Why should I use them, and when?

Recently I got a question about which is the fastest among an iterator, a list comprehension, iter(list comprehension) and a generator.
So I made the simple code below.
n = 1000000
iter_a = iter(range(n))
list_comp_a = [i for i in range(n)]
iter_list_comp_a = iter([i for i in range(n)])
gene_a = (i for i in range(n))
import time
import numpy as np
for xs in [iter_a, list_comp_a, iter_list_comp_a, gene_a]:
    start = time.time()
    np.sum(xs)
    end = time.time()
    print((end-start)*100)
the result is below.
0.04439353942871094 # iterator
9.257078170776367 # list_comprehension
0.006318092346191406 # iterator of list_comprehension
7.491207122802734 # generator
The generator is so much slower than everything else, and I don't know when it is useful.
Generators do not store all elements in memory in one go. They yield one element at a time, and this behaviour makes them memory efficient. Thus you can use them when memory is a constraint.
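A rough illustration of that point (the exact sizes depend on the Python build, but the shape of the result does not):
import sys

n = 1000000
lst = [i for i in range(n)]       # the whole list lives in memory
gen = (i for i in range(n))       # the generator produces items on demand

print(sys.getsizeof(lst))         # several megabytes, grows with n
print(sys.getsizeof(gen))         # a couple of hundred bytes at most, independent of n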
As a preamble: your whole benchmark is just plain wrong - the "list_comp_a" test doesn't test the construction time of a list using a list comprehension (nor does "iter_list_comp_a", fwiw), and the tests using iter() are mostly irrelevant - iter(iterable) is just a shortcut for iterable.__iter__() and is only of any use if you want to manipulate the iterator itself, which is practically quite rare.
If you hope to get some meaningful results, what you want to benchmark is the execution of a list comprehension, a generator expression and a generator function. The simplest way to test their execution is to wrap all three cases in functions: one executing a list comprehension, and the other two building lists from, respectively, a generator expression and a generator built from a generator function. In all cases I used xrange as the real source so we only benchmark the effective differences. We also use timeit.timeit to do the benchmark, as it's more reliable than manually messing with time.time() and is the standard canonical Pythonic way to benchmark small code snippets.
import timeit
# py2 / py3 compat
try:
    xrange
except NameError:
    xrange = range

n = 1000

def test_list_comp():
    return [x for x in xrange(n)]

def test_genexp():
    return list(x for x in xrange(n))

def mygen(n):
    for x in xrange(n):
        yield x

def test_genfunc():
    return list(mygen(n))

for fname in "test_list_comp", "test_genexp", "test_genfunc":
    result = timeit.timeit("fun()", "from __main__ import {} as fun".format(fname), number=10000)
    print("{} : {}".format(fname, result))
Here (Python 2.7.x on a 5+ year old standard desktop) I get the following results:
test_list_comp : 0.254354953766
test_genexp : 0.401108026505
test_genfunc : 0.403750896454
As you can see, list comprehensions are faster, and generator expressions and generator functions are mostly equivalent with a very slight (but constant if you repeat the test) advantage to generator expressions.
Now to answer your main question "why and when would you use generators", the answer is threefold: 1/ memory use, 2/ infinite iterations and 3/ coroutines.
First point: memory use. Actually, you don't need generators here, only lazy iteration, which can be obtained by writing your own iterable / iterator - like, for example, the builtin file type does - in a way that avoids loading everything in memory and only generates values on the fly. Here generator expressions and functions (and the underlying generator class) are a generic way to implement lazy iteration without writing your own iterable / iterator (just like the builtin property class is a generic way to use custom descriptors without writing your own descriptor class).
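For illustration, a minimal sketch of lazy iteration with a generator function (the file name is just a placeholder): the numbers are produced one at a time, so the whole file never has to fit in memory.
def read_numbers(path):
    # Yields the numbers one at a time; nothing is accumulated in memory.
    with open(path) as f:
        for line in f:
            for token in line.split():
                yield float(token)

total = sum(read_numbers("measurements.txt"))   # streams through the file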
Second point: infinite iteration. Here we have something that you can't get from sequence types (lists, tuples, sets, dicts, strings etc) which are, by definition, finite). An example is the itertools.cycle iterator:
Return elements from the iterable until it is exhausted.
Then repeat the sequence indefinitely.
Note that here again this ability comes not from generator functions or expressions but from the iterable/iterator protocol. There are obviously fewer use cases for infinite iteration than for memory use optimisations, but it's still a handy feature when you need it.
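A tiny sketch of an infinite generator, which you have to bound yourself when consuming it:
from itertools import islice

def powers_of_two():
    n = 1
    while True:        # never terminates on its own
        yield n
        n *= 2

print(list(islice(powers_of_two(), 6)))   # [1, 2, 4, 8, 16, 32]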
And finally the third point: coroutines. Well, this is a rather complex concept, especially the first time you read about it, so I'll let someone else do the introduction: https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/
Here you have something that only generators can offer, not a handy shortcut for iterables/iterators.
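For completeness, a very small coroutine-style sketch (a plain generator driven with send(), not async/await):
def running_average():
    total, count, average = 0.0, 0, None
    while True:
        value = yield average     # pauses here until send() delivers a value
        total += value
        count += 1
        average = total / count

avg = running_average()
next(avg)                 # prime the coroutine up to the first yield
print(avg.send(10))       # 10.0
print(avg.send(20))       # 15.0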
I think I may have asked the wrong question.
In the original code, the comparison was not correct because np.sum doesn't work well on iterators: np.sum(iterator) doesn't return the correct answer. So I changed my code like below.
n = 10000
iter_a = iter(range(n))
list_comp_a = [i for i in range(n)]
iter_list_comp_a = iter([i for i in range(n)])
gene_a = (i for i in range(n))
import time
import numpy as np
import timeit
for xs in [iter_a, list_comp_a, iter_list_comp_a, gene_a]:
    start = time.time()
    sum(xs)
    end = time.time()
    print("type: {}, performance: {}".format(type(xs), (end-start)*100))
And then the performance is as below: the list performs best, and the iterators are not as good.
type: <class 'range_iterator'>, performance: 0.021791458129882812
type: <class 'list'>, performance: 0.013279914855957031
type: <class 'list_iterator'>, performance: 0.02429485321044922
type: <class 'generator'>, performance: 0.13570785522460938
And as #Kishor Pawar already mentioned, the list is better for performance, but when memory is limited, summing a list with a very large n makes the whole computer slow, whereas summing an iterator with a very large n may take a lot of time to compute but doesn't make the computer slow.
Thanks for all the answers.
When I have to compute a lot of data, a generator is better.
but,

Functional programming in Python [closed]

Functional programming is one of the programming paradigms in Python. It treats computation as the evaluation of mathematical functions and avoids state and mutable data. I am trying to understand how Python incorporates functional programming.
Consider the following factorial program (factorial.py):
def factorial(n, total):
    if n == 0:
        return total
    else:
        return factorial(n-1, total*n)

num = raw_input("Enter a natural number: ")
print factorial(int(num), 1)
That code avoids mutable data, because we are not changing the value of any variable. We are only recursively calling the factorial function with a new value.
If the example given above for functional programming is correct, then what does avoiding state mean?
Does functional programming only mean that I must use only functions whenever I have computations (as given in the above example)?
If the given example is wrong, what is a simple example with an explanation?
The example is correct for functional programming. And a good example of what not to do in Python because it is inefficient and doesn't scale. Python doesn't have any tail-call optimisation, so recursive calls should not be used solely to avoid imperative loops. If you really start programming in this style in Python, your programs will end with runtime errors eventually.
You are describing pure functional programming which is not something Python could be used for.
Python supports functional programming to some degree in the sense that functions are first class values. That means functions can be passed to other functions and returned as results from functions. And the standard library contains functions also found in most functional programming languages standard libraries, like map(), filter(), reduce(), and the stuff in the functools and itertools modules.
The example below is borrowed from http://ua.pycon.org/static/talks/kachayev/#/8, where a comparison is made between the imperative and functional ways of thinking about a program.
Imperative:
expr, res = "28+32+++32++39", 0
for t in expr.split("+"):
    if t != "":
        res += int(t)
print res
Functional:
from operator import add
expr = "28+32+++32++39"
print reduce(add, map(int, filter(bool, expr.split("+"))))
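Under Python 3 the same functional version needs two small changes, since reduce now lives in functools and print is a function; a sketch of the Python 3 form:
from functools import reduce
from operator import add

expr = "28+32+++32++39"
print(reduce(add, map(int, filter(bool, expr.split("+")))))   # 131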
If the given example is wrong, then kindly provide another simple example with an explanation.
It’s not, and it shows a realistic problem to start with, which you’ll see if you call factorial with a huge number. The maximum recursion depth will be reached. Python doesn’t have tail call optimization.
If the example given above for functional programming is correct, then what does avoiding state mean?
It means (in Python) that once a variable has been assigned, you should not reassign a new value or change the value that you assigned to that variable.
Secondly, does functional programming only mean that I must use only functions whenever I have computations (as given in the above example)?
Functional programming is quite broad. Python is a multi-paradigm language that supports some functional programming concepts.
Functional programming means that all computations should be seen as mathematical functions.
I wrote a post about it that explains all the above in greater detail: Use Functional Programming In Python
Basics
Functional programming is a paradigm where we use only expressions and no statements. Statements do something like an if, whereas expressions evaluate mathematical stuff. We try to avoid mutability (values getting changed), since we only want pure functions. Pure functions don't have side effects, meaning that a function, given the same input, always produces the same output. We want functions to be pure, since they are easier to debug. By doing this, we are describing what a function is, as opposed to giving steps about how to do something and we write a lot less code. Functional programming is often called declarative, while other approaches are called imperative.
Utilities
Know about built-in functions, since they are so useful. For starters: abs, round, pow, max & min, sum, len, sorted, reversed, zip, and range.
Lambdas
Anonymous functions, or lambdas, are pure functions that take input and produce a value. There isn't any return statement, since the value of the expression is what gets returned. You can also give them a name by just assigning them to a variable.
(lambda x, y: x + y)(1, 1)
add = lambda x, y: x + y
add(1, 1)
Ternary operator
Since we are working with expressions, instead of using if statements for logic, we use the ternary operator.
(expression) if (condition) else (expression2)
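A tiny illustrative example:
sign = lambda x: "non-negative" if x >= 0 else "negative"
print(sign(-3))   # negative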
Map, filter & reduce
We also need a way of looping. This comes with list comprehension.
In this example we add one to each array element.
inc = lambda x: [i + 1 for i in x]
We can also operate on elements that meet a condition.
evens = lambda x: [i for i in x if (i % 2) == 0]
Some people say that that is the correct Pythonic way of doing things. But people who are more serious use a different approach: map, filter, and reduce. These are the building blocks of functional programming. While map and filter are built-in functions, reduce used to be, but now is in the functools module. To use it, import it:
from functools import reduce
Map is a function that takes a function and calls it on the elements of an iterable. The result of map is a lazy map object whose printed form is unfortunately unreadable, so you need to turn it into a tuple or a list.
inc = lambda x: tuple(map(lambda y: y + 1, x))
Filter is a function that calls a function on the elements of an iterable, keeps the elements for which it returns True, and removes the ones for which it returns False. Like map, its result isn't directly readable.
evens = lambda x: tuple(filter(lambda y: (y % 2) == 0, x))
Reduce takes a function that has two parameters: one is the result of the last time it was called, and one is the new value. It keeps doing this until the iterable is reduced down to a single value.
from functools import reduce
m_sum = lambda x: reduce(lambda y, z: y + z, x)
Let and do
A lot of functional programming languages have do. Do is a function that takes n arguments, evaluates all of them and returns the value of the last one. Python doesn't have do, but I created one myself.
do = lambda *args: tuple(map(lambda y: y, args))[-1]
Here's an example using do that prints something and exits.
print_err = lambda x: do(print(x), exit())
We use do to get imperative advantages.
Let allows some expression to be bound to a name inside an expression. In Python, most people do that as follows.
def something(x):
    y = x + 10
    return y * 3
Python 3.8 adds the assignment expression operator :=. So that code can now be written as a lambda.
something = lambda x: do(y := x + 10, y * 3)
Working with impure systems
The console, the file system, the web, etc. are stateful and impure, and we need a way to work with them. In some languages, like Haskell, you have monads, which wrap impure computations. Clojure allows the use of impure functions. In Python, a multi-paradigm language, we don't need anything special, since it's not purely functional. print is an impure function.
Recursion
Your example uses recursion. Recursion uses the call stack, which is a limited chunk of memory used for calling functions; it can fill up and crash the program. A lot of functional programming languages use tail-call optimisation (or lazy evaluation) to cope with deep recursion, but Python has neither. So avoid deep recursion as much as possible.
Answers
Avoiding state is being immutable, meaning not changing the value of something.
In functional programming, everything is an expression, which is a function. We don't have that in Python though, since it's multi-paradigm.
Written in a more functional way, your code would look as follows. Note that the original is also not purely functional because it uses an if statement instead of a ternary expression.
from functools import reduce
factorial = lambda x: reduce(lambda y, z: y*z, range(1, x+1), 1)
This produces a range from 1 to x and multiplies all the values together using reduce; the trailing 1 is the initial value, so factorial(0) correctly returns 1.
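A quick sanity check of the lambda above (assuming the version with the explicit initial value of 1 passed to reduce):
print(factorial(5))   # 120
print(factorial(0))   # 1, thanks to the initial value given to reduce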

Is it Pythonic to use list comprehensions for just side effects?

Think about a function that I'm calling for its side effects, not return values (like printing to screen, updating GUI, printing to a file, etc.).
def fun_with_side_effects(x):
    ...side effects...
    return y
Now, is it Pythonic to use list comprehensions to call this func:
[fun_with_side_effects(x) for x in y if (...conditions...)]
Note that I don't save the list anywhere
Or should I call this func like this:
for x in y:
    if (...conditions...):
        fun_with_side_effects(x)
Which is better and why?
It is very anti-Pythonic to do so, and any seasoned Pythonista will give you hell over it. The intermediate list is thrown away after it is created, and it could potentially be very, very large, and therefore expensive to create.
You shouldn't use a list comprehension, because as people have said that will build a large temporary list that you don't need. The following two methods are equivalent:
consume(side_effects(x) for x in xs)
for x in xs:
    side_effects(x)
with the definition of consume from the itertools man page:
import collections
from itertools import islice

def consume(iterator, n=None):
    "Advance the iterator n-steps ahead. If n is None, consume entirely."
    # Use functions that consume iterators at C speed.
    if n is None:
        # feed the entire iterator into a zero-length deque
        collections.deque(iterator, maxlen=0)
    else:
        # advance to the empty slice starting at position n
        next(islice(iterator, n, n), None)
Of course, the latter is clearer and easier to understand.
List comprehensions are for creating lists. And unless you are actually creating a list, you should not use list comprehensions.
So I would go for the second option: just iterate over the list and call the function when the conditions apply.
Second is better.
Think of the person who would need to understand your code. You can get bad karma easily with the first :)
You could go middle between the two by using filter(). Consider the example:
y = [1, 2, 3, 4, 5, 6]
def func(x):
    print "call with %r" % x

for x in filter(lambda x: x > 3, y):
    func(x)
Depends on your goal.
If you are trying to do some operation on each object in a list, the second approach should be adopted.
If you are trying to generate a list from another list, you may use list comprehension.
Explicit is better than implicit.
Simple is better than complex. (Python Zen)
You can do
for z in (fun_with_side_effects(x) for x in y if (...conditions...)): pass
but it's not very pretty.
Using a list comprehension for its side effects is ugly, non-Pythonic, inefficient, and I wouldn't do it. I would use a for loop instead, because a for loop signals a procedural style in which side-effects are important.
But, if you absolutely insist on using a list comprehension for its side effects, you should avoid the inefficiency by using a generator expression instead. If you absolutely insist on this style, do one of these two:
any(fun_with_side_effects(x) and False for x in y if (...conditions...))
or:
all(fun_with_side_effects(x) or True for x in y if (...conditions...))
These are generator expressions, and they do not generate a random list that gets tossed out. I think the all form is perhaps slightly more clear, though I think both of them are confusing and shouldn't be used.
I think this is ugly and I wouldn't actually do it in code. But if you insist on implementing your loops in this fashion, that's how I would do it.
I tend to feel that list comprehensions and their ilk should signal an attempt to use something at least faintly resembling a functional style. Putting things with side effects that break that assumption will cause people to have to read your code more carefully, and I think that's a bad thing.

Does this function have to use reduce() or is there a more pythonic way?

If I have a value, and a list of additional terms I want multiplied to the value:
n = 10
terms = [1,2,3,4]
Is it possible to use a list comprehension to do something like this:
n *= (term for term in terms) #not working...
Or is the only way:
n *= reduce(lambda x,y: x*y, terms)
This is on Python 2.6.2. Thanks!
reduce is the best way to do this IMO, but you don't have to use a lambda; instead, you can use the * operator directly:
import operator
n *= reduce(operator.mul, terms)
n is now 240. See the docs for the operator module for more info.
Reduce is not the only way. You can also write it as a simple loop:
for term in terms:
    n *= term
I think this is much more clear than using reduce, especially when you consider that many Python programmers have never seen reduce and the name does little to convey to people who see it for the first time what it actually does.
Pythonic does not mean write everything as comprehensions or always use a functional style if possible. Python is a multi-paradigm language and writing simple imperative code when appropriate is Pythonic.
Guido van Rossum also doesn't want reduce in Python:
So now reduce(). This is actually the one I've always hated most, because, apart from a few examples involving + or *, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what's actually being fed into that function before I understand what the reduce() is supposed to do. So in my mind, the applicability of reduce() is pretty much limited to associative operators, and in all other cases it's better to write out the accumulation loop explicitly.
There aren't a whole lot of associative operators. (Those are operators X for which (a X b) X c equals a X (b X c).) I think it's just about limited to +, *, &, |, ^, and shortcut and/or. We already have sum(); I'd happily trade reduce() for product(), so that takes care of the two most common uses. [...]
In Python 3 reduce has been moved to the functools module.
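For reference, a small Python 3 sketch of the same computation; math.prod is available from Python 3.8 onwards and is essentially the product() Guido mentions:
from functools import reduce
import operator
import math

n, terms = 10, [1, 2, 3, 4]
print(n * reduce(operator.mul, terms, 1))   # 240
print(n * math.prod(terms))                 # 240 (Python 3.8+)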
Yet another way:
import operator
n = reduce(operator.mul, terms, n)
