I'm trying to iterate a bunch of data ranges at once, to get the combinations of all their values.
The number of ranges can differ, but I have them collected in a list.
Is there a way to iterate them using list comprehension or a similar clean, pythonic way?
This is what I mean by iterating together:
[print(i, j) for i in r1 for j in r2]
So that's a simple example with two known ranges, but what I need is more like
[print(i, j, ...) for i in r1 for j in r2 for k in r3...]
Note: I don't just need a list of number combinations; the iterables are instances of my own iterator class, which works similarly to range() but also lets me read the current state without calling next(), which would alter the state.
My iterator class sets its value back to the start on StopIteration, so it can be looped through more than once.
Here you can see the class:
from dataclasses import dataclass, field

@dataclass
class Range:
    start: float
    end: float
    step: float = field(default=1)
    includeEnd: bool = field(default=True)

    def __post_init__(self):
        self.value = self.start

    def __next__(self):
        v = self.value
        end = v > self.end if self.includeEnd else v >= self.end
        if not end:
            self.value += self.step
            return v
        else:
            self.value = self.start
            raise StopIteration

    def __iter__(self):
        return self
But how would you get the product of n iterators using itertools.product(), when you have a list of n iterators?
itertools.product(*the_list). Nothing special about product() there. The leading * is general Python syntax for treating a list (more generally, an iterable) as a sequence of individual arguments.
>>> from itertools import product
>>> args = [range(2), range(3), (i**2 for i in [5, 9])]
>>> args
[range(0, 2), range(0, 3), <generator object <genexpr> at 0x000001E2E7A710B0>]
>>> for x in product(*args):
... print(x)
(0, 0, 25)
(0, 0, 81)
(0, 1, 25)
(0, 1, 81)
(0, 2, 25)
(0, 2, 81)
(1, 0, 25)
(1, 0, 81)
(1, 1, 25)
(1, 1, 81)
(1, 2, 25)
(1, 2, 81)
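The same unpacking works with a list of the Range objects from the question. A minimal sketch (the specific Range values are just illustrative): product() completely consumes each input up front into in-memory pools, and since Range resets itself on StopIteration, the objects stay reusable afterwards.
from itertools import product

# hypothetical example ranges built with the Range class defined above
ranges = [Range(0, 2), Range(0, 1, 0.5), Range(10, 30, 10)]
for combo in product(*ranges):
    print(combo)   # (0, 0, 10), (0, 0, 20), (0, 0, 30), (0, 0.5, 10), ...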
I want to write a Rem(a, b) which returns a new tuple that is like a, but with the first appearance of element b removed. For example, Rem((0, 1, 9, 1, 4), 1) will return (0, 9, 1, 4).
I am only allowed to use higher order functions such as lambda, filter, map, and reduce.
I am thinking about using filter, but that deletes all of the matching elements:
def myRem(T, E):
    return tuple(filter(lambda x: (x != E), T))
With myRem((0, 1, 9, 1, 4), 1) I get (0, 9, 4).
The following works (Warning: hacky code):
tuple(map(lambda y: y[1], filter(lambda x: (x[0]!=T.index(E)), enumerate(T))))
But I would never recommend doing this unless the requirements are rigid.
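Wrapped in a function (a sketch with a hypothetical name, hoisting the index lookup out of the lambda so it only runs once):
def myRemHacky(T, E):
    idx = T.index(E)  # position of the first occurrence
    return tuple(map(lambda y: y[1], filter(lambda x: x[0] != idx, enumerate(T))))

print(myRemHacky((0, 1, 9, 1, 4), 1))  # (0, 9, 1, 4)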
Trick with temporary list:
def removeFirst(t, v):
    tmp_lst = [v]
    return tuple(filter(lambda x: (x != v or (not tmp_lst or v != tmp_lst.pop(0))), t))
print(removeFirst((0, 1, 9, 1, 4), 1))
tmp_lst.pop(0) will be called only once, which excludes the first occurrence of the value v.
not tmp_lst ensures all remaining occurrences are included, because the list is empty after that first pop.
The output:
(0, 9, 1, 4)
For fun, using itertools, you can sorta use mostly higher-order functions...
>>> from itertools import *
>>> data = (0, 1, 9, 1, 4)
>>> not1 = (1).__ne__
>>> tuple(chain(takewhile(not1, data), islice(dropwhile(not1, data), 1, None)))
(0, 9, 1, 4)
BTW, here are some timings comparing different approaches for dropping a particular index in a tuple:
>>> timeit.timeit("t[:i] + t[i+1:]", "t = tuple(range(100000)); i=50000", number=10000)
10.42419078599778
>>> timeit.timeit("(*t[:i], *t[i+1:])", "t = tuple(range(100000)); i=50000", number=10000)
20.06185237201862
>>> timeit.timeit("(*islice(t,None, i), *islice(t, i+1, None))", "t = tuple(range(100000)); i=50000; from itertools import islice", number=10000)
>>> timeit.timeit("tuple(chain(islice(t,None, i), islice(t, i+1, None)))", "t = tuple(range(100000)); i=50000; from itertools import islice, chain", number=10000)
19.71128663700074
>>> timeit.timeit("it = iter(t); tuple(chain(islice(it,None, i), islice(it, 1, None)))", "t = tuple(range(100000)); i=50000; from itertools import islice, chain", number=10000)
17.6895881179953
Looks like it is hard to beat the straightforward: t[:i] + t[i+1:], which is not surprising.
Note, this one is shockingly less performant:
>>> timeit.timeit("tuple(j for i, j in enumerate(t) if i != idx)", "t = tuple(range(100000)); idx=50000", number=10000)
111.66658291200292
Which makes me think all these solutions using takewhile, filter and lambda will suffer pretty badly...
Although:
>>> timeit.timeit("not1 = (i).__ne__; tuple(chain(takewhile(not1, t), islice(dropwhile(not1, t), 1, None)))", "t = tuple(range(100000)); i=50000; from itertools import chain, takewhile,dropwhile, islice", number=10000)
62.22159145199112
Almost twice as fast as the generator expression, which goes to show that generator overhead can be quite large. However, takewhile and dropwhile are implemented in C, albeit with some redundancy in this approach (both takewhile and dropwhile traverse the prefix before the removed element, so that part is scanned twice).
Another interesting observation: if we simply substitute a list comprehension for the generator expression, it is significantly faster, despite the fact that the list comprehension + tuple() call iterates over the result twice, compared to only once with the generator expression:
>>> timeit.timeit("tuple([j for i, j in enumerate(t) if i != idx])", "t = tuple(range(100000)); idx=50000", number=10000)
82.59887028901721
Goes to show how steep the generator-expression price can be...
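If the higher-order-function restriction were dropped, the fastest slicing variant from these timings maps back onto the original task roughly like this (a sketch; rem_slice is a hypothetical name):
def rem_slice(t, v):
    i = t.index(v)          # first occurrence
    return t[:i] + t[i+1:]  # splice the tuple around it

print(rem_slice((0, 1, 9, 1, 4), 1))  # (0, 9, 1, 4)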
Here is a solution that only uses lambda, filter(), map(), reduce() and tuple().
from functools import reduce

def myRem(T, E):
    # map the tuple to a sequence of single-element lists of (value, indicator) pairs
    M = map(lambda x: [(x, 1)] if x == E else [(x, 0)], T)
    # make the indicator 0 once the first instance of E is found
    # think of this as a boolean mask of items to remove
    # here the inner reduce can be changed to the sum function
    R = reduce(
        lambda x, y: x + (y if reduce(lambda a, b: a + b, map(lambda z: z[1], x)) < 1
                          else [(y[0][0], 0)]),
        M
    )
    # filter the reduced output based on the indicator
    F = filter(lambda x: x[1] == 0, R)
    # map the output back to the desired format
    O = map(lambda x: x[0], F)
    return tuple(O)
Explanation
A good way to understand what's going on is to print the outputs of the intermediate steps. (In Python 3, map and filter return lazy iterators, so wrap them in list() to print their contents; note that this consumes the iterator, so re-create it before running the next step.)
Step 1: First Map
For each value in the tuple, we return a tuple with the value and a flag to indicate if it's the value to remove. These tuples are encapsulated in a list because it makes combining easier in the next step.
# original example
T = (0, 1, 9, 1, 4)
E = 1
M = map(lambda x: [(x, 1)] if x == E else [(x,0)], T)
print(list(M))
#[[(0, 0)], [(1, 1)], [(9, 0)], [(1, 1)], [(4, 0)]]
Step 2: Reduce
This flattens M into a list of (value, indicator) tuples, where the indicator is 1 only for the first instance of E and 0 everywhere else, including any later instances of E. This is achieved by summing the indicators seen so far (implemented as an inner reduce()).
R = reduce(
    lambda x, y: x + (y if reduce(lambda a, b: a + b, map(lambda z: z[1], x)) < 1
                      else [(y[0][0], 0)]),
    M
)
print(R)
#[(0, 0), (1, 1), (9, 0), (1, 0), (4, 0)]
Now the output is in the form of (value, to_be_removed).
Step 3: Filter
Filter out the value to be removed.
F = filter(lambda x: x[1]==0, R)
print(list(F))
#[(0, 0), (9, 0), (1, 0), (4, 0)]
Step 4: Second map and conversion to tuple
Extract the value from the filtered list, and convert it to a tuple.
O = map(lambda x: x[0], F)
print(tuple(O))
#(0, 9, 1, 4)
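For comparison, the same first-occurrence bookkeeping can also be folded into a single reduce() with a (kept, already_removed) accumulator; a sketch under the same lambda/filter/map/reduce constraint (myRem2 is a hypothetical name):
from functools import reduce

def myRem2(T, E):
    # accumulator is (kept_so_far, already_removed)
    kept, _ = reduce(
        lambda acc, x: (acc[0], True) if x == E and not acc[1]
                       else (acc[0] + (x,), acc[1]),
        T,
        ((), False),
    )
    return kept

print(myRem2((0, 1, 9, 1, 4), 1))  # (0, 9, 1, 4)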
This violates your requirement for "only using higher order functions" - but since it's not clear why this is a requirement, I include the below solution.
def myRem(tup, n):
    idx = tup.index(n)
    return tuple(j for i, j in enumerate(tup) if i != idx)
myRem((0, 1, 9, 1, 4), 1)
# (0, 9, 1, 4)
Here is a numpy solution (still not using higher-order functions):
import numpy as np

def myRem(tup, n):
    tup_arr = np.array(tup)
    return tuple(np.delete(tup_arr, np.min(np.nonzero(tup_arr == n)[0])))
myRem((0, 1, 9, 1, 4), 1)
# (0, 9, 1, 4)
Below is the function call; I'm trying to construct the function fee. I need to map over the tuple using functional programming so that it turns into (6-7)**2 + (7-1)**2 + (1-4)**2, and the last one is (4-6)**2. Then I will sum these and return that value from fee.
fee((6, 7, 1, 4), lambda x, y: (x-y) ** 2)
You can play with Python built-in functions:
>>> def fee(tup):
...     return sum(map(lambda x, y: (x-y)**2, tup, tup[1:] + (tup[0],)))
Demo :
>>> t=(6, 7, 1, 4)
>>> fee(t)
50
You can use the map function to apply the lambda function to the pairs and sum the result. The pairs look like this:
>>> list(zip(t, t[1:] + (t[0],)))
[(6, 7), (7, 1), (1, 4), (4, 6)]
Instead of map, a more efficient way is to use zip and a generator expression within sum:
>>> def fee(tup):
...     return sum((x-y)**2 for x, y in zip(tup, tup[1:] + (tup[0],)))
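It gives the same result as the map-based version above:
>>> fee((6, 7, 1, 4))
50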
You can do this with a combination of zip, map, and sum:
def fee(vals):
    x1 = zip(vals, vals[1:] + vals[:1])
    x2 = map(lambda t: (t[0] - t[1]) ** 2, x1)
    return sum(x2)
Explanation:
zip(vals, vals[1:] + vals[:1]) pairs each value with its successor, wrapping around to the first value at the end.
map(lambda t: (t[0] - t[1]) ** 2, x1) performs the mathematical operation on each 2-tuple element.
sum(x2) sums the results of #2 together.
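A quick check with the tuple from the question (note this version hardcodes the squared-difference operation rather than taking it as a parameter):
print(fee((6, 7, 1, 4)))  # 1 + 36 + 9 + 4 = 50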
This should do it:
from itertools import tee
try:
    from itertools import izip as zip  # Python 2
except ImportError:
    pass  # Python 3

# An itertools recipe
# https://docs.python.org/3/library/itertools.html#itertools-recipes
def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

def fee(args, func):
    last_value = func(args[-1], args[0])
    return sum(func(x, y) for x, y in pairwise(args)) + last_value
print(fee((6, 7, 1, 4), lambda x, y: (x-y) ** 2)) # 50
I'm currently trying to code an equivalent of the built-in min/max functions in Python, and my code raises a pretty weird exception which I don't understand at all:
TypeError: 'generator' object is not subscriptable
when I try it with:
min(abs(i) for i in range(-10, 10))
Here is my code:
def min(*args, **kwargs):
    key = kwargs.get("key", None)
    argv = 0
    for i in args:
        argv += 1
    if argv == 1 and (type(args) is list or type(args) is tuple or type(args) is str):
        min = args[0][0]
        for i in args[0]:
            if key != None:
                if key(i) < key(min):
                    min = i
            else:
                if i < min:
                    min = i
        return min
    else:
        min = args[0]
        for i in args:
            if key != None:
                if key(i) < key(min):
                    min = i
            else:
                if i < min:
                    min = i
        return min
According to the documentation, I should be able to iterate over a generator...
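A minimal sketch of what is actually going wrong (show_args is a hypothetical helper): *args is always packed into a tuple, so the type(args) is tuple check passes even when the single argument is a generator, and min = args[0][0] then tries to subscript that generator:
def show_args(*args):
    # *args is always a tuple; a generator expression arrives as its only element
    print(type(args), type(args[0]))

show_args(abs(i) for i in range(-10, 10))
# <class 'tuple'> <class 'generator'>
# so args[0][0] raises: TypeError: 'generator' object is not subscriptable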
Here is my implementation:
def max(*args, **kwargs):
    key = kwargs.get("key", lambda x: x)
    if len(args) == 1:
        args = args[0]
    maxi = None
    for i in args:
        if maxi == None or key(i) > key(maxi):
            maxi = i
    return maxi

def min(*args, **kwargs):
    key = kwargs.get("key", lambda x: x)
    if len(args) == 1:
        args = args[0]
    mini = None
    for i in args:
        if mini == None or key(i) < key(mini):
            mini = i
    return mini
A little bit more concise than the previous post.
The issue you are having is due to the fact that min has two function signatures. From its docstring:
min(...)
min(iterable[, key=func]) -> value
min(a, b, c, ...[, key=func]) -> value
So, it will accept either a single positional argument (an iterable, whose values you need to compare) or several positional arguments which are the values themselves. I think you need to test which mode you're in at the start of your function. It is pretty easy to turn the one-argument version into the multiple-argument version simply by doing args = args[0].
Here's my attempt to implement the function. key is a keyword-only argument, since it appears after *args.
def min(*args, key=None):     # args is a tuple of the positional arguments initially
    if len(args) == 1:        # if there's just one, assume it's an iterable of values
        args = args[0]        # replace args with the iterable
    it = iter(args)           # get an iterator
    try:
        min_val = next(it)    # take the first value from the iterator
    except StopIteration:
        raise ValueError("min() called with no values")
    if key is None:           # separate loops for key=None and otherwise, for efficiency
        for val in it:        # loop on the iterator, which has already yielded one value
            if val < min_val:
                min_val = val
    else:
        min_keyval = key(min_val)        # initialize the minimum keyval
        for val in it:
            keyval = key(val)
            if keyval < min_keyval:      # compare keyvals, rather than regular values
                min_val = val
                min_keyval = keyval
    return min_val
Here's some testing:
>>> min([4, 5, 3, 2])
2
>>> min([1, 4, 5, 3, 2])
1
>>> min(4, 5, 3, 2)
2
>>> min(4, 5, 3, 2, 1)
1
>>> min(4, 5, 3, 2, key=lambda x: -x)
5
>>> min(4, -5, 3, -2, key=abs)
-2
>>> min(abs(i) for i in range(-10, 10))
0
The functions in question have a lot in common; in fact, the only difference is the comparison (< vs >). In light of this, we can implement a generic function for finding an element, which takes the comparison function as an argument. The min and max example might look as follows:
def lessThan(val1, val2):
    return val1 < val2

def greaterThan(val1, val2):
    return val1 > val2

def find(cmp, *args, **kwargs):
    if len(args) < 1:
        return None
    key = kwargs.get("key", lambda x: x)
    arguments = list(args[0]) if len(args) == 1 else args
    result = arguments[0]
    for val in arguments:
        if cmp(key(val), key(result)):
            result = val
    return result

min = lambda *args, **kwargs: find(lessThan, *args, **kwargs)
max = lambda *args, **kwargs: find(greaterThan, *args, **kwargs)
Some tests:
>>> min(3, 2)
2
>>> max(3, 2)
3
>>> max([1, 2, 0, 3, 4])
4
>>> min("hello")
'e'
>>> max(2.2, 5.6, 5.9, key=int)
5.6
>>> min([[1, 2], [3, 4], [9, 0]], key=lambda x: x[1])
[9, 0]
>>> min((9,))
9
>>> max(range(6))
5
>>> min(abs(i) for i in range(-10, 10))
0
>>> max([1, 2, 3], [5, 6], [7], [0, 0, 0, 1])
[7]
Take for example the Python built-in pow() function.
xs = [1,2,3,4,5,6,7,8]
from functools import partial
list(map(partial(pow,2),xs))
>>> [2, 4, 8, 16, 32, 64, 128, 256]
but how would I raise the xs to the power of 2?
to get [1, 4, 9, 16, 25, 36, 49, 64]
list(map(partial(pow,y=2),xs))
TypeError: pow() takes no keyword arguments
I know list comprehensions would be easier.
No
According to the documentation, partial cannot do this (emphasis my own):
partial.args
The leftmost positional arguments that will be prepended to the positional arguments
You could always just "fix" pow to have keyword args:
_pow = pow
pow = lambda x, y: _pow(x, y)
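With that shim in place, the keyword form from the question works; a quick sketch assuming the redefinition above:
from functools import partial

xs = [1, 2, 3, 4, 5, 6, 7, 8]
# pow here is the lambda wrapper defined above, so y can be bound by keyword
print(list(map(partial(pow, y=2), xs)))  # [1, 4, 9, 16, 25, 36, 49, 64]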
I think I'd just use this simple one-liner:
import itertools
print list(itertools.imap(pow, [1, 2, 3], itertools.repeat(2)))
Update:
I also came up with a solution that is funnier than useful. It's beautiful syntactic sugar, profiting from the fact that the ... literal means Ellipsis in Python 3. It's a modified version of partial, allowing you to omit some positional arguments between the leftmost and rightmost ones. The only drawback is that you can no longer pass Ellipsis as an argument.
import itertools

def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(newfunc.leftmost_args + fargs + newfunc.rightmost_args), **newkeywords)
    newfunc.func = func
    args = iter(args)
    newfunc.leftmost_args = tuple(itertools.takewhile(lambda v: v != Ellipsis, args))
    newfunc.rightmost_args = tuple(args)
    newfunc.keywords = keywords
    return newfunc
>>> print(partial(pow, ..., 2, 3)(5)) # pow(5, 2, 3) == (5**2) % 3
1
>>> print(partial(pow, 2, ..., 3)(5)) # pow(2, 5, 3) == (2**5) % 3
2
>>> print(partial(pow, 2, 3, ...)(5)) # pow(2, 3, 5) == (2**3) % 5
3
>>> print(partial(pow, 2, 3)(5)) # pow(2, 3, 5) == (2**3) % 5
3
So with this version of partial, the solution for the original question would be list(map(partial(pow, ..., 2), xs)).
Why not just create a quick lambda function which reorders the args and partial that?
partial(lambda p, x: pow(x, p), 2)
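Applied to the original example, that might look like (a quick sketch):
from functools import partial

xs = [1, 2, 3, 4, 5, 6, 7, 8]
square = partial(lambda p, x: pow(x, p), 2)  # binds p=2, leaves x free
print(list(map(square, xs)))  # [1, 4, 9, 16, 25, 36, 49, 64]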
You could create a helper function for this:
from functools import wraps
def foo(a, b, c, d, e):
    print('foo(a={}, b={}, c={}, d={}, e={})'.format(a, b, c, d, e))

def partial_at(func, index, value):
    @wraps(func)
    def result(*rest, **kwargs):
        args = []
        args.extend(rest[:index])
        args.append(value)
        args.extend(rest[index:])
        return func(*args, **kwargs)
    return result

if __name__ == '__main__':
    bar = partial_at(foo, 2, 'C')
    bar('A', 'B', 'D', 'E')
    # Prints: foo(a=A, b=B, c=C, d=D, e=E)
Disclaimer: I haven't tested this with keyword arguments so it might blow up because of them somehow. Also I'm not sure if this is what @wraps should be used for, but it seemed right-ish.
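For the pow case from the question it might be used like this (a sketch built on the partial_at helper above):
# value 2 is inserted as the second positional argument (index 1) of pow
square = partial_at(pow, 1, 2)
xs = [1, 2, 3, 4, 5, 6, 7, 8]
print(list(map(square, xs)))   # [1, 4, 9, 16, 25, 36, 49, 64]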
You could use a closure:
xs = [1, 2, 3, 4, 5, 6, 7, 8]

def closure(method, param):
    def t(x):
        return method(x, param)
    return t

f = closure(pow, 2)
f(10)   # 100

f = closure(pow, 3)
f(10)   # 1000
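Applied to the whole list from the question (a quick sketch using the closure and xs defined above):
print(list(map(closure(pow, 2), xs)))  # [1, 4, 9, 16, 25, 36, 49, 64]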
You can do this with lambda, which is more flexible than functools.partial():
pow_two = lambda base: pow(base, 2)
print(pow_two(3)) # 9
More generally:
def bind_skip_first(func, *args, **kwargs):
    return lambda first: func(first, *args, **kwargs)
pow_two = bind_skip_first(pow, 2)
print(pow_two(3)) # 9
One down-side of lambda is that some libraries are not able to serialize it.
One way of doing it would be:
def testfunc1(xs):
    from functools import partial
    def mypow(x, y): return x ** y
    return list(map(partial(mypow, y=2), xs))
but this involves re-defining the pow function.
If the use of partial is not strictly needed, then a simple lambda would do the trick:
def testfunc2(xs):
    return list(map(lambda x: pow(x, 2), xs))
And a specific way to map the pow of 2 would be
def testfunc5(xs):
    from operator import mul
    return list(map(mul, xs, xs))
But none of these directly addresses the problem of partial application in relation to keyword arguments.
Even though this question was already answered, you can get the results you're looking for using itertools.repeat:
from itertools import repeat
xs = list(range(1, 9)) # [1, 2, 3, 4, 5, 6, 7, 8]
xs_pow_2 = list(map(pow, xs, repeat(2))) # [1, 4, 9, 16, 25, 36, 49, 64]
Hopefully this helps someone.
Yes, you can do it, provided the function takes keyword arguments. You just need to know the name.
In the case of pow() (provided you are using Python 3.8 or newer) you need exp instead of y.
Try to do:
from functools import partial

xs = [1, 2, 3, 4, 5, 6, 7, 8]
print(list(map(partial(pow, exp=2), xs)))
# [1, 4, 9, 16, 25, 36, 49, 64]
As already said, that's a limitation of functools.partial if the function you want to partially apply doesn't accept keyword arguments.
If you don't mind using an external library 1 you could use iteration_utilities.partial which has a partial that supports placeholders:
>>> from iteration_utilities import partial
>>> square = partial(pow, partial._, 2) # the partial._ attribute represents a placeholder
>>> list(map(square, xs))
[1, 4, 9, 16, 25, 36, 49, 64]
1 Disclaimer: I'm the author of the iteration_utilities library (installation instructions can be found in the documentation in case you're interested).
The very versatile funcy includes an rpartial function that exactly addresses this problem.
xs = [1,2,3,4,5,6,7,8]
from funcy import rpartial
list(map(rpartial(pow, 2), xs))
# [1, 4, 9, 16, 25, 36, 49, 64]
It's just a lambda under the hood:
def rpartial(func, *args):
    """Partially applies last arguments."""
    return lambda *a: func(*(a + args))
If you can't use lambda functions, you can also write a simple wrapper function that reorders the arguments.
def _pow(y, x):
    return pow(x, y)
and then call
list(map(partial(_pow,2),xs))
>>> [1, 4, 9, 16, 25, 36, 49, 64]
Yes, if you create your own partial class:
class MyPartial:
    def __init__(self, func, *args):
        self._func = func
        self._args = args

    def __call__(self, *args):
        return self._func(*args, *self._args)  # swap ordering
xs = [1,2,3,4,5,6,7,8]
list(map(MyPartial(pow,2),xs))
>>> [1, 4, 9, 16, 25, 36, 49, 64]