Is there a 'foreach' function in Python 3?

Whenever I meet a situation where I would reach for JavaScript's forEach, I think how convenient it would be if Python had a foreach function. By foreach I mean the function described below:
def foreach(fn, iterable):
    for x in iterable:
        fn(x)
It just applies fn to every element without yielding or returning anything. I think it should be a built-in function, and it should be faster than writing it in pure Python, but I didn't find it in the list of built-ins. Is it just called something else, or am I missing some point here?
Maybe I have it wrong: calling a function in Python is expensive, so it's definitely not good practice for this example. Rather than looping on the outside, the function should do the loop inside its body, like the version below, which is already mentioned in many Python coding suggestions:
def fn(*args):
    for x in args:
        do_something(x)  # placeholder for the real work
But I thought foreach would still be welcome, based on two facts:
In normal cases, people just don't care about the performance.
Sometimes the API doesn't accept an iterable object, and you can't rewrite its source.

Every occurrence of "foreach" I've seen (PHP, C#, ...) does basically the same thing as Python's "for" statement.
These are more or less equivalent:
// PHP:
foreach ($array as $val) {
    print($val);
}

// C#
foreach (String val in array) {
    Console.WriteLine(val);
}

# Python
for val in array:
    print(val)
So, yes, there is a "foreach" in python. It's called "for".
What you're describing is an "array map" function. This can be done with a list comprehension in Python:
names = ['tom', 'john', 'simon']
namesCapitalized = [capitalize(n) for n in names]
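(A small addition: capitalize above is assumed to be a user-defined helper. With the built-in str.capitalize method, a minimal sketch of the same mapping in both comprehension and map form would be:)
names = ['tom', 'john', 'simon']
names_capitalized = [n.capitalize() for n in names]   # list comprehension
names_capitalized = list(map(str.capitalize, names))  # equivalent map form
# ['Tom', 'John', 'Simon']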

Python doesn't have a foreach statement per se. It has for loops built into the language.
for element in iterable:
    operate(element)
If you really wanted to, you could define your own foreach function:
def foreach(function, iterable):
    for element in iterable:
        function(element)
As a side note, the for element in iterable syntax comes from the ABC programming language, one of Python's influences.

Other examples:
Python Foreach Loop:
array = ['a', 'b']
for value in array:
    print(value)
# a
# b
Python For Loop:
array = ['a', 'b']
for index in range(len(array)):
    print("index: %s | value: %s" % (index, array[index]))
# index: 0 | value: a
# index: 1 | value: b

map can be used for the situation mentioned in the question.
E.g.
map(len, ['abcd','abc', 'a']) # 4 3 1
For functions that take multiple arguments, more arguments can be given to map:
map(pow, [2, 3], [4,2]) # 16 9
It returns a list in Python 2.x and an iterator in Python 3.
If your function takes multiple arguments and the arguments are already in the form of tuples (or any iterable, since Python 2.6), you can use itertools.starmap, which has a syntax very similar to what you were looking for. It returns an iterator.
E.g.
from itertools import starmap

for num in starmap(pow, [(2, 3), (3, 2)]):
    print(num)
gives us 8 and 9
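For comparison, a minimal sketch (not from the original answer) showing that starmap(f, pairs) is just f applied to each unpacked tuple:
from itertools import starmap

pairs = [(2, 3), (3, 2)]
# starmap unpacks each tuple into pow's two arguments: pow(2, 3), pow(3, 2)
assert list(starmap(pow, pairs)) == [pow(2, 3), pow(3, 2)]  # [8, 9]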

The correct answer is "Python collections do not have a foreach". In native Python we need to resort to the external for element in collection syntax, which is not what the OP is after.
Python is in general quite weak for functional programming. There are a few libraries that mitigate this a bit. I helped author one of them, infixpy:
pip install infixpy  # https://pypi.org/project/infixpy/
from infixpy import Seq
(Seq([1, 2, 3]).foreach(lambda x: print(x)))
# 1
# 2
# 3
Also see: Left to right application of operations on a list in Python 3

Here is an example of a "foreach" construction with simultaneous access to the element indexes in Python:
for idx, val in enumerate([3, 4, 5]):
    print(idx, val)
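As a small addition: enumerate also accepts a start argument if you don't want 0-based indexes:
for idx, val in enumerate([3, 4, 5], start=1):
    print(idx, val)
# 1 3
# 2 4
# 3 5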

Yes, although it uses the same syntax as a for loop.
for x in ['a', 'b']: print(x)

This does a foreach in Python 3:
test = [0, 1, 2, 3, 4, 5, 6, 7, 8, "test"]
for fetch in test:
    print(fetch)

Look at this article. The iterator object nditer from the numpy package, introduced in NumPy 1.6, provides many flexible ways to visit all the elements of one or more arrays in a systematic fashion.
Example:
import random
import numpy as np

ptrs = np.int32([[0, 0], [400, 0], [0, 400], [400, 400]])
for ptr in np.nditer(ptrs, op_flags=['readwrite']):
    # apply a random shift of 1 to each element of the matrix
    ptr += random.choice([-1, 1])
print(ptrs)

d:\>python nditer.py
[[ -1   1]
 [399  -1]
 [  1 399]
 [399 401]]

If I understood you right, you mean that if you have a function func, you want to check for each item in a list whether func(item) returns true; if you get true for all of them, then do something.
You can use 'all'.
For example: I want to get all the prime numbers in the range 0-10 in a list:
from math import sqrt

primes = [x for x in range(10) if x > 1 and all(x % i != 0 for i in range(2, int(sqrt(x)) + 1))]
# [2, 3, 5, 7]

If you really want to, you can do this:
[fn(x) for x in iterable]
But the point of a list comprehension is to create a list; using one for the side effect alone is poor style. The for loop is also less typing:
for x in iterable: fn(x)

I know this is an old thread, but I had a similar question when trying to do a codewars exercise.
I came up with a solution which I believe applies to the question; it replicates a working "for each (x) doThing" statement in most scenarios:
for element in array:
    element.func()

If you're just looking for a more concise syntax you can put the for loop on one line:
array = ['a', 'b']
for value in array: print(value)
Just separate additional statements with a semicolon.
array = ['a', 'b']
for value in array: print(value); print('hello')
This may not conform to your local style guide, but it could make sense to do it like this when you're playing around in the console.

In short, the functional programming way to do this is:
from typing import Callable, Iterable, TypeVar

T = TypeVar('T')

def do_and_return_fn(og_fn: Callable[[T], None]) -> Callable[[T], T]:
    def do_and_return(item: T) -> T:
        og_fn(item)
        return item
    return do_and_return

# where og_fn is the fn referred to by the question,
# i.e. a function that does something on each element but returns nothing
iterable = map(do_and_return_fn(og_fn), iterable)
All of the answers saying "for" loops are the same as "foreach" functions neglect that other functions operating on iterators in Python, such as map, filter, and others in itertools, are lazily evaluated.
Suppose I have an iterable of dictionaries coming from my database, and I want to pop an item off of each dictionary element when the iterator is iterated over. I can't use map because pop returns the popped item, not the original dictionary.
The approach I gave above lets me achieve this if I pass something like lambda x: x.pop(key) as my og_fn.
What would be nice is if python had a built-in lazy function with an interface like I constructed:
foreach(do_fn: Callable[[T], None], iterable: Iterable)
Implemented with the function given before, it would look like:
def foreach(do_fn: Callable[[T], None], iterable: Iterable[T]) -> Iterable[T]:
    return map(do_and_return_fn(do_fn), iterable)

# Being called with my db code.
# Lazily removes the INSERTED_ON_SEC_FIELD key from every element:
doc_iter = foreach(lambda x: x.pop(INSERTED_ON_SEC_FIELD, None), doc_iter)

No, there is no "from functools import foreach" support in Python. However, you can implement it in the same number of lines that the import would take anyway:
foreach = lambda f, iterable: (*map(f, iterable),)
Bonus, variadic support:
foreach = lambda f, iterable, *args: (*map(f, iterable, *args),)
You can be more efficient by avoiding constructing the tuple of Nones, as sketched below.
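On that efficiency point, a sketch of one common approach: the consume recipe from the itertools documentation drains map's iterator without materializing any results:
from collections import deque

def foreach(f, iterable):
    # a deque with maxlen=0 consumes the iterator at C speed and stores nothing
    deque(map(f, iterable), maxlen=0)

foreach(print, [1, 2, 3])  # prints 1, 2, 3 without building a tuple of Nones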

Related

What is a better pythonic version of this conditional deleting?

I am refreshing my Python (2.7) and discovering iterators and generators.
As I understand it, they are an efficient way of navigating over values without consuming too much memory.
The following code does some kind of logical indexing on a list: removing the values of a list L that trigger a False conditional statement, represented here by the function f.
I am not satisfied with my code because I feel it is not optimal, for three reasons:
I read somewhere that it is better to use a for loop than a while loop. However, in the usual for i in range(10), I can't modify the value of i, because the iteration doesn't care.
Logical indexing is pretty strong in matrix-oriented languages, and there should be a way to do the same in Python (by hand, granted, but maybe better than my code).
The third reason is just that I want to use a generator/iterator on this example to help me understand.
TL;DR: Is this code a good pythonic way to do logical indexing?
# f: string -> bool
def f(s):
    return 'c' in s

L = ['', 'a', 'ab', 'abc', 'abcd', 'abcde', 'abde']  # example
length = len(L)
i = 0
while i < length:
    if not f(L[i]):  # f is a conditional statement (input string, output bool)
        del L[i]
        length -= 1  # cut and push leftwise
    else:
        i += 1
print 'Updated list is :', L
print length
This code has a few problems, but the main one is that you must never modify a list you're iterating over. Rather, you create a new list from the elements that match your condition. This can be done simply in a for loop:
newlist = []
for item in L:
    if f(item):
        newlist.append(item)
which can be shortened to a simple list comprehension:
newlist = [item for item in L if f(item)]
It looks like filter() is what you're after:
newlist = filter(f, L)
filter() filters (...) an iterable and only keeps the items for which the predicate returns True. In your case the predicate is f itself, since your loop deletes exactly the items for which f(...) is False.
If you ever need the opposite selection, invert the predicate:
newlist = filter(lambda x: not f(x), L)
See: https://docs.python.org/2/library/functions.html#filter
Never modify a list with del, pop or other methods that mutate the length of the list while iterating over it. Read this for more information.
The "pythonic" way to filter a list is to use reassignment and either a list comprehension or the built-in filter function:
List comprehension:
>>> [item for item in L if f(item)]
['abc', 'abcd', 'abcde']
i want to use generator/iterator on this example to help me understand
The for item in L part is implicitly making use of the iterator protocol. Python lists are iterable, and iter(somelist) returns an iterator.
>>> from collections import Iterable, Iterator
>>> isinstance([], Iterable)
True
>>> isinstance([], Iterator)
False
>>> isinstance(iter([]), Iterator)
True
__iter__ is not only being called when using a traditional for-loop, but also when you use a list comprehension:
>>> class mylist(list):
...     def __iter__(self):
...         print('iter has been called')
...         return super(mylist, self).__iter__()
...
>>> m = mylist([1,2,3])
>>> [x for x in m]
iter has been called
[1, 2, 3]
Filtering:
>>> filter(f, L)
['abc', 'abcd', 'abcde']
In Python3, use list(filter(f, L)) to get a list.
Of course, to filter a list, Python needs to iterate over it, too:
>>> filter(None, mylist())
iter has been called
[]
"The python way" to do it would be to use a generator expression:
# list comprehension
L = [l for l in L if f(l)]
# alternative generator comprehension
L = (l for l in L if f(l))
It depends on your context if a list or a generator is "better" (see e.g. this so question). Because your source data is coming from a list, there is no real benefit of using a generator here.
For simply deleting elements, especially if the original list is no longer needed, just iterate backwards:
Python 2.x:
for i in xrange(len(L) - 1, -1, -1):
    if not f(L[i]):
        del L[i]
Python 3.x:
for i in range(len(L) - 1, -1, -1):
    if not f(L[i]):
        del L[i]
By iterating from the end, the "next" index does not change after a deletion, so a for loop is possible. Note that you should use the lazy xrange object in Python 2, or range in Python 3, to save memory*.
In cases where you must iterate forward, use your given solution above.
*Note that Python 2's xrange will break if there are >= 2 ** 32 - 1 elements. Python 3's range, as well as Python 2's less efficient range, do not have this limitation.
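One more idiom worth noting (an addition, reusing f and L from the question): when the list must be filtered in place, for example because other names reference the same object, slice assignment combines the comprehension with in-place mutation:
L[:] = [item for item in L if f(item)]  # replaces the contents, keeps the same list object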

List comprehension as substitute for reduce() in Python

The following python tutorial says that:
List comprehension is a complete substitute for the lambda function as well as the functions map(), filter() and reduce().
http://python-course.eu/python3_list_comprehension.php
However, it does not give an example of how a list comprehension can substitute for reduce(), and I can't think of one.
Can someone please explain how to achieve reduce-like functionality with a list comprehension, or confirm that it isn't possible?
Ideally, a list comprehension is for creating a new list. Quoting the official documentation,
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
whereas reduce is used to reduce an iterable to a single value. Quoting functools.reduce,
Apply function of two arguments cumulatively to the items of sequence, from left to right, so as to reduce the sequence to a single value.
So, list comprehension cannot be used as a drop-in replacement for reduce.
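For completeness (not from the tutorial): itertools.accumulate is the closest standard-library relative. It yields the running values a reduce would compute, and its final element equals reduce's result:
from functools import reduce
from itertools import accumulate
from operator import mul

nums = [1, 2, 3, 4]
print(reduce(mul, nums))            # 24
print(list(accumulate(nums, mul)))  # [1, 2, 6, 24] -- the last item equals the reduce result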
I was surprised at first to find that Guido van Rossum, creator of Python, was against reduce. His reasoning was that beyond summing, multiplying, and-ing, and or-ing, using reduce yields an unreadable solution that is better expressed as a function that iterates over the sequence and updates an accumulator. His article on the matter is here. So no, there isn't a list comprehension alternative to reduce; instead, the "pythonic" way is to implement an accumulating function the old-fashioned way:
Instead of:
out = reduce((lambda x,y: x*y),[1,2,3])
Use:
def prod(myList):
    out = 1
    for el in myList:
        out *= el
    return out
Of course, nothing stops you from continuing to use reduce (Python 2) or functools.reduce (Python 3).
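(Side note, an addition: for the multiplication case specifically, Python 3.8+ also ships math.prod.)
from math import prod  # Python 3.8+

print(prod([1, 2, 3]))  # 6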
List comprehensions are supposed to return lists. If your reduce is supposed to return a list, then yes, you can replace it with a list comprehension.
But this is no obstacle to providing "reduce-like functionality". Python lists can contain any object. If you'll accept your result contained in a single-item list, then there is a [...][0] list comprehension form that can replace any reduce() whatsoever.
This should be obvious, but that form is
[x for x in [reduce(function, sequence, initial)]][0]
for some binary function, some iterable sequence, and some initial value. Or, if you want the initial value taken from the first element of the iterable,
[x for x in [reduce(function, sequence)]][0]
Arguably, the above is cheating, and also pointless, since you could just use reduce without the comprehension. So let's try it without reduce.
[stack.append(function(stack.pop(), e)) or stack[0]
 for stack in ([initial],)
 for e in sequence][-1]
This produces a list of all the intermediate values, and we want the last one. [-1] is just as easy as [0]. We need an accumulator to reduce, but can't use assignment statements in a comprehension, hence the stack (which is just a list), but we could have used many other data structures here. The .append() always returns None, so we use or stack[0] to put the value so far in the resulting list.
It's a little more difficult without initial,
[stack.append(function(stack.pop(), e)) or stack[0]
 for it in [iter(sequence)]
 for stack in [[next(it)]]
 for e in it][-1]
Really, you might as well use a for statement at this point.
But this takes up memory for the list of intermediate values. For a very long sequence, that might be a problem. But we can avoid that too by using generator expressions.
Doing this is tricky, so let's start with an easier example and work up to it.
stack = [initial]
[stack.append(function(stack.pop(), e)) for e in sequence]
stack.pop() # returns the answer
It computes the answer, but also creates a useless list of Nones. We can avoid that by converting it to a generator expression inside a list comprehension.
stack = [initial]
[_ for _s in (stack.append(function(stack.pop(), e)) or ()
              for e in sequence)
   for _ in _s]
stack.pop()
The list comprehension exhausts the generator that updates the stack, but returns an empty list itself. This is possible because the inner loop always has zero iterations, because _s is always an empty tuple.
We can move the stack.pop() inside if the last _s has one element. It doesn't matter what that element is though. So we chain on a [None] as the final _s.
from itertools import chain

stack = [initial]
[stack.pop()
 for _s in chain((stack.append(function(stack.pop(), e)) or ()
                  for e in sequence),
                 [[None]])
 for _ in _s][0]
Again, we have a single-item list comprehension. We can also implement chain as a generator expression. And you've already seen how to move the stack variable inside using a single-item list.
[stack.pop()
 for stack in [[initial]]
 for _s in (
     x
     for xs in [
         (stack.append(function(stack.pop(), e)) or ()
          for e in sequence),
         [[None]],
     ]
     for x in xs)
 for _ in _s][0]
And we can also get the initial from the sequence for the two-argument reduce.
[stack.pop()
 for it in [iter(sequence)]
 for stack in [[next(it)]]
 for _s in (
     x
     for xs in [
         (stack.append(function(stack.pop(), e)) or ()
          for e in it),
         [[None]],
     ]
     for x in xs)
 for _ in _s][0]
This is insane. But it works. So yes, it's possible to get "reduce-like functionality" with comprehensions. That doesn't mean you should. Seven fors is too hard!
You could accomplish something like a reduce with a comprehension by using a couple of helper functions that I've named last and cofold:
>>> last(r(a+b) for a, b, r in cofold(range(10)))
45
This is functionally equivalent to
>>> reduce(lambda a, b: a+b, range(10))
45
Note that unlike reduce() the comprehension didn't use a lambda.
The trick is to use a generator with a callback to "return" the result of the operator. cofold is the corecursive dual of the reduce (or fold) function.
_sentinel = object()

def cofold(it, initial=_sentinel):
    if initial is _sentinel:
        it = iter(it)
        accumulator = next(it)
    else:
        accumulator = initial

    def callback(result):
        nonlocal accumulator
        accumulator = result
        return result

    for element in it:
        yield accumulator, element, callback
Here's cofold in a list comprehension.
>>> [r(a+b) for a, b, r in cofold(range(10))]
[1, 3, 6, 10, 15, 21, 28, 36, 45]
The elements represent each step in the dual reduction. The last one is our answer. The last function is trivial.
def last(it):
    for e in it:
        pass
    return e
Unlike reduce, cofold is a lazy generator, so it can safely act on infinite iterables when used in a generator expression.
>>> from itertools import islice, count
>>> lazy_results = (r(a+b) for a, b, r in cofold(count()))
>>> [*islice(lazy_results, 0, 9)]
[1, 3, 6, 10, 15, 21, 28, 36, 45]
>>> next(lazy_results)
55
>>> next(lazy_results)
66

Use print inside lambda

I am trying to use print inside a lambda. Something like this:
lambda x: print x
I understand that in Python 2.7 print is not a function. So, basically, my question is: is there a pretty way to use print as a function in Python 2.7?
You can import print_function from __future__ and use print as a function, like this:
from __future__ import print_function
map(print, [1, 2, 3])
# 1
# 2
# 3
The question is about Python 2, but I ended up here from Google trying to use the print function inside a lambda in Python 3. I'm adding this answer for context for others that come here for the same.
If you only want to see the code that works and not how I arrived there, skip to the last code sample at the bottom. I wanted to clearly document what didn't work for learning purposes.
Desired result
Let's suppose you want to define a lambda print_list that prints each item of a list with a newline in between.
lst = [1, 2, 3]
print_list = lambda lst: ...
The desired output is:
1
2
3
And there should be no unused return value.
Attempt 1 - A map doesn't evaluate the print function in Python 3
To start, here's what doesn't work well in Python 3:
map(print, lst)
However, the output is, somewhat counterintuitively, not printed lines, because map in Python 3 returns a lazy iterator instead of an evaluated list.
Output:
n/a
Return value:
<map at 0x111b3a6a0>
Attempt 2 - Evaluate the map iterator
You can realize the printing by passing the map result to list(...), which produces the ideal output, but has the side effect of returning a list of nulls (as evaluated in the REPL).
list(map(print, lst))
Output:
1
2
3
Return value:
[None, None, None]
You could work around this by using the underscore throwaway variable convention:
_ = list(map(print, lst))
A similar approach is calling print inside a list comprehension:
[print(i) for i in lst]
I don't love these approaches because they both still generate an unused return value.
Attempt 3 - Apply the unpacking operator to the map iterator
Like this:
[*map(print, [1, 2, 3])]
(This still returns a list of nulls which is non-ideal.)
In the comments above #thefourtheye suggests using a one-line for loop:
for item in [1, 2, 3]: print(item)
This works fine for most cases and avoids the side effect. However, attempting to put this in a lambda throws a SyntaxError. I tried wrapping it in parens without success; there is probably a way to achieve it, but I haven't figured it out.
(SOLUTION!) Attempt 4 - Apply the unpacking operator inside of the print call
The answer I arrived at is to explode the list inside the print call alongside using the separator arg:
print(*lst, sep='\n')
Output:
1
2
3
This produces the intended result without a return value.
Finally, let's wrap it up in a lambda to use as desired:
print_list = lambda lst: print(*lst, sep='\n')
print_list([1, 2, 3])
This was the best solution for my use case in Python 3.
Related questions
Why map(print, a_list) doesn't work?
Print doesnt print when it's in map, Python
If you don't want to import from __future__ you can just make the lambda write to the standard output:
>>> import sys
>>> l = lambda x: sys.stdout.write(x)
>>> l('hi')
hi
I guess there is another scenario people may be interested in: printing the intermediate values of the lambda's variables.
For instance, say I want to find the character set of a collection of character lists:
instances = [["C","O","c","1","c","c","c","c","c","1","O","C","C","N","C"],
             ["C","C","O","C","(","=","O",")","C","C","(","=","O",")","c"],
             ["C","N","1","C","C","N","(","C","c","2","c","c","c","(","N"],
             ["C","l","c","1","c","c","c","2","c","(","N","C","C","C","["],
             ["C","C","c","1","c","c","c","(","N","C","(","=","S",")","N"]]
One way of doing this is to use reduce:
import functools

def build_charset(instances):
    return list(functools.reduce((lambda x, y: set(y) | x), instances, set()))
In this function, reduce takes a lambda with two variables, x and y. At first I thought x would be the instance and y the accumulated set, but the results told a different story, so I wanted to print their values on the fly. A lambda, however, only takes a single expression, while the print would introduce another one.
Inspired by set(y) | x, I tried this one and it worked:
lambda x, y: print(x, y) or set(y) | x
Note that print() returns None, so you cannot use operations like and or xor that would change the resulting value. But or works just fine in this case.
Hope this would be helpful to those who also want to see what's going on during the procedure.
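A minimal runnable sketch of the trick, using toy data rather than the chemistry example above:
import functools

instances = [["a", "b"], ["b", "c"]]

def build_charset(instances):
    # print(x, y) evaluates to None, so the or falls through to the real expression
    return list(functools.reduce(lambda x, y: print(x, y) or set(y) | x, instances, set()))

print(build_charset(instances))
# set() ['a', 'b']
# {'a', 'b'} ['b', 'c']   (set ordering may vary)
# ['a', 'b', 'c']         (list ordering may vary)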

Python, how to make a function which takes a function as an argument along with two arrays?

For learning purposes, I'm trying to make a function in Python that takes another function and two arrays as parameters and calls the function parameter on each index of each array parameter. So this should call add on a1[0] & a2[0], a1[1] & a2[1], etc. But all I'm getting back is a generator object. What's wrong?
def add(a, b):
    yield a + b

def generator(add, a1, a2):
    for i in range(len(a1)):
        yield add(a1[i], a2[i])

g = generator(add, a1, a2)
print g.next()
I've also tried replacing what I have for yield above with
yield map(add,a1[i],a2[i])
But that works even less. I get this:
TypeError: argument 2 to map() must support iteration
Your definition of add() is at least strange (I'm leaning towards calling it "wrong"). You should return the result, not yield it:
def add(a, b):
    return a + b
Now, your generator() will work, though
map(add, a1, a2)
is an easier and faster way to do (almost) the same thing. (If you want an iterator rather than a list, use itertools.imap() instead of map().)
You get a generator because your add is a generator. It should be just return a + b.
I'm trying to make a function using Python that takes in another function and two arrays as parameters and calls the function parameter on each index of each array parameter.
def my_function(func, array_1, array_2):
    for e_1, e_2 in zip(array_1, array_2):
        yield func(e_1, e_2)
Example:
def add(a, b):
    return a + b

for result in my_function(add, [1, 2, 3], [9, 8, 7]):
    print(result)
will print:
10
10
10
Now, a couple of notes:
The add function can be found in the operator module.
You see that I used zip; take a look at its doc.
What you actually want here is izip(), the generator version of zip(), which doesn't build a list but yields each pair lazily.
my_function is almost like map(); the only difference is that my_function is a generator while map() gives you a list. Once again, the stdlib gives you the generator version of map in the itertools module: imap().
Example: my_function is just like imap:
from operator import add
from itertools import imap

for result in imap(add, [1, 2, 3], [9, 8, 7]):
    print(result)
# 10
# 10
# 10
I obviously assume that the add function was just a quick example; otherwise, check the built-in sum.
As others have said, you are defining add incorrectly and it should return instead of yield. Also, you could import it:
from operator import add
The reason why this doesn't work:
yield map(add, a1[i], a2[i])
is because map works on lists/iterables, not on single values. If add were defined correctly, this could work:
yield map(add, [a1[i]], [a2[i]])
But you shouldn't actually do that because it's more complicated than it needs to be for no good reason (as Sven Marnach's answer shows, your generator function is just an attempt to implement map so it really shouldn't use map even if it is a learning exercise). Finally, if the point is to make a function that takes a function as a parameter, I wouldn't call the parameter "add"; otherwise, what's the point of making it at all?
def generator(f, a1, a2):
    for x, y in zip(a1, a2):
        yield f(x, y)
Speaking of which, take a look at zip.
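A Python 3 footnote (my addition, since the answers above target Python 2): map itself is lazy in Python 3, so it already behaves like the generator versions shown here:
from operator import add

results = map(add, [1, 2, 3], [9, 8, 7])  # a lazy iterator in Python 3
print(list(results))  # [10, 10, 10]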

Why is there no first(iterable) built-in function in Python?

I'm wondering if there's a reason that there's no first(iterable) in the Python built-in functions, somewhat similar to any(iterable) and all(iterable) (it may be tucked in a stdlib module somewhere, but I don't see it in itertools). first would perform a short-circuit generator evaluation so that unnecessary (and a potentially infinite number of) operations can be avoided; i.e.
def identity(item):
    return item

def first(iterable, predicate=identity):
    for item in iterable:
        if predicate(item):
            return item
    raise ValueError('No satisfactory value found')
This way you can express things like:
denominators = (2, 3, 4, 5)
lcd = first(i for i in itertools.count(1)
            if all(i % denominator == 0 for denominator in denominators))
Clearly you can't do list(generator)[0] in that case, since the generator doesn't terminate.
Or if you have a bunch of regexes to match against (useful when they all have the same groupdict interface):
match = first(regex.match(big_text) for regex in regexes)
You save a lot of unnecessary processing by avoiding list(generator)[0] and short-circuiting on a positive match.
In Python 2, if you have an iterator, you can just call its next method. Something like:
>>> (5*x for x in xrange(2,4)).next()
10
In Python 3, you can use the next built-in with an iterator:
>>> next(5*x for x in range(2,4))
10
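next also takes an optional default, which avoids a StopIteration when the iterator is empty (a built-in feature worth noting here):
>>> next((5*x for x in range(0)), 'empty')
'empty'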
There's a PyPI package called "first" that does this:
>>> from first import first
>>> first([0, None, False, [], (), 42])
42
Here's how you would use it to return the first odd number, for example:
>>> first([2, 14, 7, 41, 53], key=lambda x: x % 2 == 1)
7
If you just want to return the first element from the iterator regardless of whether it is true or not, do this:
>>> first([0, None, False, [], (), 42], key=lambda x: True)
0
It's a very small package: it only contains this function, it has no dependencies, and it works on Python 2 and 3. It's a single file, so you don't even have to install it to use it.
In fact, here's almost the entire source code (from version 2.0.1, by Hynek Schlawack, released under the MIT licence):
def first(iterable, default=None, key=None):
    if key is None:
        for el in iterable:
            if el:
                return el
    else:
        for el in iterable:
            if key(el):
                return el
    return default
I asked a similar question recently (it has been marked as a duplicate of this question by now). My concern was also that I'd like to use only built-ins to solve the problem of finding the first true value of a generator. My own solution was this:
x = next((v for v in (f(x) for x in a) if v), False)
For the example of finding the first regexp match (not the first matching pattern!) this would look like this:
import re

patterns = [r'\d+', r'\s+', r'\w+', r'.*']
text = 'abc'
firstMatch = next(
    (match for match in
        (re.match(pattern, text) for pattern in patterns)
     if match),
    False)
It does not evaluate the predicate twice (as you would have to do if just the pattern was returned) and it does not use hacks like locals in comprehensions.
But it has two generators nested where the logic would dictate to use just one. So a better solution would be nice.
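For reference, the itertools documentation includes a first_true recipe that collapses the two nested generators into a single filter call (Python 3, where filter is lazy):
def first_true(iterable, default=False, pred=None):
    # itertools recipe: first truthy value, or first value where pred is true, else default
    return next(filter(pred, iterable), default)
With it, the example above becomes first_true(re.match(p, text) for p in patterns).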
There is a "slice" iterator in itertools. It emulates the slice operations that we're familiar with in python. What you're looking for is something similar to this:
myList = [0,1,2,3,4,5]
firstValue = myList[:1]
The equivalent using itertools for iterators:
from itertools import islice

def MyGenFunc():
    for i in range(5):
        yield i

mygen = MyGenFunc()
firstValue = next(islice(mygen, 0, 1))  # islice returns an iterator; next() pulls out the value
print firstValue
There's some ambiguity in your question. Your definition of first and the regex example imply that there is a boolean test. But the denominators example explicitly has an if clause; so it's only a coincidence that each integer happens to be true.
It looks like the combination of next and itertools.ifilter will give you what you want.
match = next(itertools.ifilter(None, (regex.match(big_text) for regex in regexes)))
Haskell makes use of what you just described, via the function take (or, technically, the partial application take 1). The Python Cookbook has generator wrappers that perform the same functionality as take, takeWhile, and drop in Haskell.
But as to why that's not a built-in, your guess is as good as mine.
