I have a large number of arithmetic expressions that I store in a list. For example:
exp_list = [exp1, exp2, ..., exp10000]
I also have the indices of the few expressions I need to evaluate:
inds = [ind1, ind2, ..., ind10]
exp_selected = [exp_list[i] for i in inds]
Is there a way to avoid having to evaluate all the expressions in exp_list?
Suppose you decide to store your expressions as lambdas (to avoid them being evaluated immediately); then you could selectively evaluate them with a simple list comprehension:
exp_list = [lambda: 1+2, lambda: 3+4, lambda: 5+6, lambda: 7+8]
inds = [1, 3]
print([exp() for i, exp in enumerate(exp_list) if i in inds])
Produces:
[7, 15]
If the expressions share some pattern and can be created on the fly, it would be better to use a generator instead of building the whole list, especially if you don't need to remember the results but just want to check whether any (or all) of them are true/false, as in the sketch below.
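A minimal sketch of that idea, assuming the expressions can be generated from a pattern (the squaring predicate here is just an illustrative stand-in):
# Lazily build one lambda per k; nothing is evaluated up front.
exps = (lambda k=k: k * k > 50 for k in range(10000))
# any() short-circuits as soon as one expression evaluates truthy,
# so only the first nine lambdas are ever created and called.
print(any(exp() for exp in exps))  # True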
I have a list like words = [MyOwnClass(), MyOwnClass(), None, MyOwnClass(), None] and I want to delete the None entries.
I tried words.remove(None), but it only worked when I did it twice.
Is this the right way? If not, I'd like to know the Pythonic way.
class Dictionary:
    words = [Word(), Word(), None, Word(), None]

    def delete_None(self):
        self.words.remove(None)  # removes only the first None
        self.words.remove(None)  # now the second None is gone
If you have to do multiple removes, it is better to rebuild from scratch:
words[:] = (x for x in words if x is not None)
This is a single linear pass, whereas each remove call is itself linear because every subsequent element has to be shifted left.
Also note that the slice assignment words[:] = ... mutates the original list object (just like the remove calls would), rather than rebinding the variable, whose effect might only be local. Using a generator expression (...) instead of a list comprehension [...] is more space efficient since it doesn't build an intermediate list, though the comprehension might be slightly faster.
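A quick illustration of the difference, with a throwaway list standing in for the real data:
words = [1, None, 2, None, 3]
alias = words                # another name for the same list object
words[:] = (x for x in words if x is not None)
print(alias)                 # [1, 2, 3] -- the mutation is visible via the alias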
For arbitrary objects, testing equality instead of identity would often be preferable:
lst[:] = (x for x in lst if x != obj_to_remove)
Or for a more functional approach:
from operator import ne # "not equal"
from functools import partial
lst[:] = filter(partial(ne, obj_to_remove), lst)
If you want to remove all the Nones, you can use filter:
words = list(filter(lambda x: x is not None, words))
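For instance, with placeholder values (any objects behave the same way):
words = ["a", None, "b", None]
words = list(filter(lambda x: x is not None, words))
print(words)  # ['a', 'b']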
The following Python tutorial says that:
List comprehension is a complete substitute for the lambda function as well as the functions map(), filter() and reduce().
http://python-course.eu/python3_list_comprehension.php
However, it does not give an example of how a list comprehension can substitute for reduce(), and I can't think of one myself.
Can someone please explain how to achieve reduce-like functionality with a list comprehension, or confirm that it isn't possible?
A list comprehension exists to create a new list. Quoting the official documentation:
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
whereas reduce is used to reduce an iterable to a single value. Quoting the functools.reduce documentation:
Apply function of two arguments cumulatively to the items of sequence, from left to right, so as to reduce the sequence to a single value.
So a list comprehension cannot be used as a drop-in replacement for reduce.
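To make the contrast concrete, here is a minimal side-by-side (the values are arbitrary):
from functools import reduce

print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4]))  # 10 -- one value
print([x + 1 for x in [1, 2, 3, 4]])                 # [2, 3, 4, 5] -- a new list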
I was surprised at first to find that Guido van Rossum, creator of Python, was against reduce. His reasoning was that, beyond summing, multiplying, and-ing, and or-ing, reduce yields an unreadable solution better served by a function that iterates through the sequence and updates an accumulator. His article on the matter is here. So no, there isn't a list comprehension alternative to reduce; instead, the "Pythonic" way is to implement an accumulating function the old-fashioned way:
Instead of:
out = reduce(lambda x, y: x * y, [1, 2, 3])
Use:
def prod(myList):
    out = 1
    for el in myList:
        out *= el
    return out
Of course, nothing stops you from continuing to use reduce (Python 2) or functools.reduce (Python 3):
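For example, both of these compute the same product:
from functools import reduce  # needed on Python 3; reduce is a builtin on Python 2

print(reduce(lambda x, y: x * y, [1, 2, 3]))  # 6
print(prod([1, 2, 3]))                        # 6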
List comprehensions are supposed to return lists. If your reduce is supposed to return a list, then yes, you can replace it with a list comprehension.
But this is no obstacle to providing "reduce-like functionality". Python lists can contain any object. If you'll accept your result contained in a single-item list, then there is a [...][0] list comprehension form that can replace any reduce() whatsoever.
This should be obvious, but that form is
[x for x in [reduce(function, sequence, initial)]][0]
for some binary function, some iterable sequence, and some initial value. Or, if you want the initial from the first of the iterable,
[x for x in [reduce(function, sequence)]][0]
Arguably, the above is cheating, and also pointless, since you could just use reduce without the comprehension. So let's try it without reduce.
[stack.append(function(stack.pop(), e)) or stack[0]
 for stack in ([initial],)
 for e in sequence][-1]
This produces a list of all the intermediate values, and we want the last one; [-1] is just as easy as [0]. We need an accumulator to reduce into, but we can't use assignment statements in a comprehension, hence the stack (which is just a list), though many other data structures would work here. The .append() call always returns None, so or stack[0] puts the value so far into the resulting list.
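As a sanity check, here's the pattern instantiated with addition (the choice of operator.add and these values is just for illustration):
from operator import add

function, sequence, initial = add, [1, 2, 3, 4], 0
print([stack.append(function(stack.pop(), e)) or stack[0]
       for stack in ([initial],)
       for e in sequence][-1])  # 10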
It's a little more difficult without initial,
[stack.append(function(stack.pop(), e)) or stack[0]
 for it in [iter(sequence)]
 for stack in [[next(it)]]
 for e in it][-1]
Really, you might as well use a for statement at this point.
But this takes up memory for the list of intermediate values. For a very long sequence, that might be a problem. But we can avoid that too by using generator expressions.
Doing this is tricky, so let's start with an easier example and work up to it.
stack = [initial]
[stack.append(function(stack.pop(), e)) for e in sequence]
stack.pop() # returns the answer
It computes the answer, but also creates a useless list of Nones. We can avoid that by converting it to a generator expression inside a list comprehension.
stack = [initial]
[_ for _s in (stack.append(function(stack.pop(), e)) or ()
              for e in sequence)
   for _ in _s]
stack.pop()
The list comprehension exhausts the generator that updates the stack, but returns an empty list itself. This is possible because the inner loop always has zero iterations, because _s is always an empty tuple.
We can move the stack.pop() inside if the last _s has one element. It doesn't matter what that element is though. So we chain on a [None] as the final _s.
from itertools import chain
stack = [initial]
[stack.pop()
 for _s in chain((stack.append(function(stack.pop(), e)) or ()
                  for e in sequence),
                 [[None]])
 for _ in _s][0]
Again, we have a single-item list comprehension. We can also implement chain as a generator expression. And you've already seen how to move the stack variable inside using a single-item list.
[stack.pop()
 for stack in [[initial]]
 for _s in (
     x
     for xs in [
         (stack.append(function(stack.pop(), e)) or ()
          for e in sequence),
         [[None]],
     ]
     for x in xs)
 for _ in _s][0]
And we can also get the initial from the sequence for the two-argument reduce.
[stack.pop()
 for it in [iter(sequence)]
 for stack in [[next(it)]]
 for _s in (
     x
     for xs in [
         (stack.append(function(stack.pop(), e)) or ()
          for e in it),
         [[None]],
     ]
     for x in xs)
 for _ in _s][0]
This is insane. But it works. So yes, it's possible to get "reduce-like functionality" with comprehensions. That doesn't mean you should. Seven fors is too hard!
You could accomplish something like a reduce with a comprehension by using a couple of helper functions that I've named last and cofold:
>>> last(r(a+b) for a, b, r in cofold(range(10)))
45
This is functionally equivalent to
>>> reduce(lambda a, b: a+b, range(10))
45
Note that unlike reduce() the comprehension didn't use a lambda.
The trick is to use a generator with a callback to "return" the result of the operator. cofold is the corecursive dual of the reduce (or fold) function.
_sentinel = object()

def cofold(it, initial=_sentinel):
    if initial is _sentinel:
        it = iter(it)
        accumulator = next(it)
    else:
        accumulator = initial

    def callback(result):
        nonlocal accumulator
        accumulator = result
        return result

    for element in it:
        yield accumulator, element, callback
Here's cofold in a list comprehension.
>>> [r(a+b) for a, b, r in cofold(range(10))]
[1, 3, 6, 10, 15, 21, 28, 36, 45]
The elements represent each step in the dual reduction; the last one is our answer. The last function is trivial:
def last(it):
    for e in it:
        pass
    return e
Unlike reduce, cofold is a lazy generator, so it can safely act on infinite iterables when used in a generator expression.
>>> from itertools import islice, count
>>> lazy_results = (r(a+b) for a, b, r in cofold(count()))
>>> [*islice(lazy_results, 0, 9)]
[1, 3, 6, 10, 15, 21, 28, 36, 45]
>>> next(lazy_results)
55
>>> next(lazy_results)
66
# L is a very large list
A = [x/sum(L) for x in L]
When the interpreter evaluates this, how many times will sum(L) be calculated? Just once, or once for each element?
A list comprehension executes the expression for each iteration.
sum(L) is executed for each x in L. Calculate it once outside the list comprehension:
s = sum(L)
A = [x/s for x in L]
Python has no way of knowing that the outcome of sum(L) is stable, and cannot optimize the call away for you.
sum() could be rebound to a different function that returns random values. The elements in L could implement __add__ methods that produce side effects, which the built-in sum() would invoke. L itself could implement a custom __iter__ method that alters the list in place as you iterate, affecting both the list comprehension and the sum() call. Any of those hooks could rebind sum, or give the x elements a __truediv__ (__div__ in Python 2) method that alters sum, and so on.
In other words, Python is too dynamic to accurately predict expression outcomes.
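A contrived demonstration of why the call can't be hoisted automatically (shadowing the builtin on purpose):
L = [1.0, 2.0, 5.0]

def sum(seq):                # deliberately shadows the builtin
    print("sum called")
    return 8.0

A = [x / sum(L) for x in L]  # prints "sum called" three times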
I would opt for Martijn's approach, but thought I'd point out that you can (ab)use a lambda with a default argument and map() if you want to retain a "one-liner", e.g.:
L = range(1, 10)
A = list(map(lambda el, total=sum(L, 0.0): el / total, L))  # list() needed on Python 3, where map() is lazy
I created a line that appends an object to a list in the following manner
>>> foo = list()
>>> def sum(a, b):
...     c = a + b; return c
...
>>> bar_list = [9,8,7,6,5,4,3,2,1,0]
>>> [foo.append(sum(i,x)) for i, x in enumerate(bar_list)]
[None, None, None, None, None, None, None, None, None, None]
>>> foo
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
>>>
The line
[foo.append(sum(i,x)) for i, x in enumerate(bar_list)]
would give pylint warning W1060 (Expression is assigned to nothing), but since I am already using the foo list to append the values, I don't need to assign the list comprehension to anything.
My question is more a matter of programming correctness:
Should I drop the list comprehension and just use a simple for loop?
>>> for i, x in enumerate(bar_list):
...     foo.append(sum(i, x))
Or is there a correct way to use a list comprehension and assign it to nothing?
Answer
Thank you @user2387370, @kindall and @Martijn Pieters. For the rest of the comments: I use append because I'm not using a list(), and I'm not using i+x because this is just a simplified example.
I left it as the following:
histogramsCtr = hist_impl.HistogramsContainer()
for index, tupl in enumerate(local_ranges_per_histogram_list):
    histogramsCtr.append(doSubHistogramData(index, tupl))
return histogramsCtr
Yes, this is bad style. A list comprehension is for building a list. You're building a list full of Nones and then throwing it away; your actual desired result is just a side effect of that effort.
Why not define foo using the list comprehension in the first place?
foo = [sum(i,x) for i, x in enumerate(bar_list)]
If it is not to be a list but some other container class, as you mentioned in a comment on another answer, write that class to accept an iterable in its constructor (or, if it's not your code, subclass it to do so), then pass it a generator expression:
foo = MyContainer(sum(i, x) for i, x in enumerate(bar_list))
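For instance, a minimal sketch of such a class (the name MyContainer comes from the example above; the internals here are an assumption):
class MyContainer:
    def __init__(self, iterable=()):
        # Accept any iterable up front, including a generator expression.
        self._items = list(iterable)

foo = MyContainer(sum(i, x) for i, x in enumerate(bar_list))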
If foo already has some value and you wish to append new items:
foo.extend(sum(i,x) for i, x in enumerate(bar_list))
If you really want to use append() and don't want to use a for loop for some reason then you can use this construction; the generator expression will at least avoid wasting memory and CPU cycles on a list you don't want:
any(foo.append(sum(i, x)) for i, x in enumerate(bar_list))
But this is a good deal less clear than a regular for loop, and there's still some extra work being done: any is testing the return value of foo.append() on each iteration. You can write a function to consume the iterator and eliminate that check; the fastest way uses a zero-length collections.deque:
from collections import deque
do = deque([], maxlen=0).extend
do(foo.append(sum(i, x)) for i, x in enumerate(bar_list))
This is fairly readable, but I believe it's not actually any faster than any(), and it requires an extra import. Still, either do() or any() is a little faster than a for loop, if that is a concern.
I think it's generally frowned upon to use list comprehensions just for side-effects, so I would say a for loop is better in this case.
But in any case, couldn't you just do foo = [sum(i,x) for i, x in enumerate(bar_list)]?
You should definitely drop the list comprehension. End of.
You are confusing anyone reading your code. You are building a list for the side-effects.
You are paying CPU cycles and memory for building a list you are discarding again.
In your simplified case, you are overlooking the fact that you could have used a list comprehension directly:
[sum(i,x) for i, x in enumerate(bar_list)]
Given a list
a = range(10)
You can slice it using expressions such as
a[1]
a[2:4]
However, I want to do this based on a variable set elsewhere in the code. I can easily do this for the first one
i = 1
a[i]
But how do I do this for the other one? I've tried indexing with a list:
i = [2, 3, 4]
a[i]
But that doesn't work. I've also tried using a string:
i = "2:4"
a[i]
But that doesn't work either.
Is this possible?
That's what slice() is for:
a = list(range(10))
s = slice(2, 4)
print(a[s])
That's the same as using a[2:4].
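slice() also accepts the same start/stop/step (and None) values that the colon syntax does; for example:
a = list(range(10))
s = slice(None, None, 2)  # equivalent to a[::2]
print(a[s])               # [0, 2, 4, 6, 8]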
Why does it have to be a single variable? Just use two variables:
i, j = 2, 4
a[i:j]
If it really needs to be a single variable you could use a tuple.
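A quick sketch of the tuple variant:
a = list(range(10))
ij = (2, 4)        # one variable carrying both endpoints
i, j = ij
print(a[i:j])      # [2, 3]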
With the assignments below you are still using the same type of slicing operations you show, but now with variables for the values.
a = list(range(10))
i = 2
j = 4
then
print(a[i:j])
[2, 3]
>>> a = range(10)
>>> i = [2, 3, 4]
>>> a[i[0]:i[-1]]
range(2, 4)
>>> list(a[i[0]:i[-1]])
[2, 3]
I ran across this recently, while looking up how to have the user mimic the usual slice syntax of a:b:c, ::c, etc. via arguments passed on the command line.
The argument is read as a string, and I'd rather not split it on ':', pass the pieces to slice(), and so on. Besides, if the user passes a single integer i, the intended meaning is clearly a[i], yet slice(i) defaults to slice(None, i, None), which isn't the desired result.
In any case, the most straightforward solution I could come up with was to read the string into a variable, say st, and then recover the desired slice as eval(f"a[{st}]").
This uses the eval() builtin and an f-string in which st is interpolated inside the braces. It handles precisely the usual colon-separated slicing syntax, since it just plugs the colon-containing string in as-is.
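For example (the value of st here is just a stand-in for whatever the user typed):
a = list(range(10))
st = "2:8:2"                 # e.g. as read from the command line
print(eval(f"a[{st}]"))      # [2, 4, 6]
st = "3"
print(eval(f"a[{st}]"))      # 3 -- a single index works too
# note: eval() executes arbitrary code, so only use it on trusted input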