How to pass arguments to a function in a "map" function call? - python

I know that map takes a function as its first argument, and the next arguments are iterables to which the function is applied. My question here is: say I have a 2d list like so
l=[[1,2,3],[4,5,6],[7,8,9]]
how can I sort the individual lists in reverse order so my output is
l=[[3,2,1],[6,5,4],[9,8,7]]
I know a potential solution is using a lambda function such as
list(map(lambda x:x[::-1],l))
I want something like this
list(map(sorted, l, 'reverse=True'))
where 'reverse=True' is the keyword argument that sorted takes, e.g.:
>>> newList = [1, 2, 3]
>>> sorted(newList, reverse=True)
[3, 2, 1]
I have seen how to pass arguments to the pow function using itertools.repeat:
map(pow, list, itertools.repeat(x))
where x is the power to which each element of the list must be raised.
I want to know if there is any way to pass arguments to the function inside a map call; in my case, reverse=True for the sorted function.

You can use functools.partial for this:
import functools
new_list = list(map(functools.partial(sorted, reverse=True), l))
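For the list in the question, a quick check of the result:
import functools
l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(list(map(functools.partial(sorted, reverse=True), l)))
# [[3, 2, 1], [6, 5, 4], [9, 8, 7]]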

You can use a lambda to wrap the function:
map(lambda x: sorted(x, reverse=True), l)
or:
map(lambda i, j: pow(i, j), list, itertools.repeat(x))
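A runnable sketch of the second form, with hypothetical values nums = [1, 2, 3] and x = 2 (renaming the list to avoid shadowing the builtin):
import itertools
nums = [1, 2, 3]  # hypothetical input list
x = 2             # hypothetical exponent
print(list(map(lambda i, j: pow(i, j), nums, itertools.repeat(x))))
# [1, 4, 9]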

There are many ways to do it.
You could use functools.partial. It creates a "partial", for lack of a better word, of the function you pass to it: in effect, a new function with some arguments already filled in.
For your example, it would be:
from functools import partial
rev_sort = partial(sorted, reverse=True)
map(rev_sort, l)
The other way is using a simple lambda:
map(lambda arr: sorted(arr, reverse=True), l)
The other other way (my personal choice) is using a generator expression:
(sorted(arr, reverse=True) for arr in l)
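Note the generator expression is lazy; wrap it in list() if you need all the results at once:
l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
result = list(sorted(arr, reverse=True) for arr in l)
print(result)  # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]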

For this specific case, you can also use a list comprehension -
l=[[1,2,3],[4,5,6],[7,8,9]]
l = [list(reversed(sublist)) for sublist in l]
# [[3,2,1],[6,5,4],[9,8,7]]

Related

modifying argument value within function

I am trying to figure out a way to modify the order of a list of tuples within a function without returning the list.
For example:
L = [(2,4),(8,5),(1,3),(9,4)]
def sort_ratios(L):
    L = sorted(L, key=lambda x: float(x[0])/float(x[1]))
    return L
Thus, calling sort_ratios() outputs:
>>> sort_ratios(L)
[(1,3),(2,4),(8,5),(9,4)]
But L would still be [(2,4),(8,5),(1,3),(9,4)]
Instead, I would like to simply modify the value of L without returning anything so that sort_ratios() operates as follows:
>>> sort_ratios(L)
>>> L
[(1,3),(2,4),(8,5),(9,4)]
It seems trivial, but I just can't seem to get the function to operate this way.
Try L.sort(key=lambda x: float(x[0])/float(x[1])) for an in-place sort.
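A minimal sketch of the fixed function; the point is that list.sort mutates the caller's list and returns None:
L = [(2, 4), (8, 5), (1, 3), (9, 4)]

def sort_ratios(L):
    # sorts in place; no return value needed
    L.sort(key=lambda x: float(x[0]) / float(x[1]))

sort_ratios(L)
print(L)  # [(1, 3), (2, 4), (8, 5), (9, 4)]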

List comprehension as substitute for reduce() in Python

The following python tutorial says that:
List comprehension is a complete substitute for the lambda function as well as the functions map(), filter() and reduce().
http://python-course.eu/python3_list_comprehension.php
However, it does not give an example of how a list comprehension can substitute for reduce(), and I can't think of one.
Can someone please explain how to achieve reduce-like functionality with a list comprehension, or confirm that it isn't possible?
Ideally, a list comprehension is for creating a new list. Quoting the official documentation:
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
whereas reduce is used to reduce an iterable to a single value. Quoting functools.reduce,
Apply function of two arguments cumulatively to the items of sequence, from left to right, so as to reduce the sequence to a single value.
So, list comprehension cannot be used as a drop-in replacement for reduce.
I was surprised at first to find that Guido van Rossum, creator of Python, was against reduce. His reasoning was that beyond summing, multiplying, and-ing, and or-ing, using reduce yields an unreadable solution that is better expressed as a function that iterates over the sequence and updates an accumulator. His article on the matter is here. So no, there isn't a list comprehension alternative to reduce; instead, the "pythonic" way is to implement an accumulating function the old-fashioned way:
Instead of:
out = reduce((lambda x,y: x*y),[1,2,3])
Use:
def prod(myList):
    out = 1
    for el in myList:
        out *= el
    return out
Of course, nothing stops you from continuing to use reduce (Python 2) or functools.reduce (Python 3).
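A quick sanity check that the explicit loop matches reduce:
from functools import reduce  # Python 3

assert prod([1, 2, 3]) == reduce(lambda x, y: x * y, [1, 2, 3]) == 6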
List comprehensions are supposed to return lists. If your reduce is supposed to return a list, then yes, you can replace it with a list comprehension.
But this is no obstacle to providing "reduce-like functionality". Python lists can contain any object. If you'll accept your result contained in a single-item list, then there is a [...][0] list comprehension form that can replace any reduce() whatsoever.
This should be obvious, but that form is
[x for x in [reduce(function, sequence, initial)]][0]
for some binary function, some iterable sequence, and some initial value. Or, if you want the initial from the first of the iterable,
[x for x in [reduce(function, sequence)]][0]
Arguably, the above is cheating, and also pointless, since you could just use reduce without the comprehension. So let's try it without reduce.
[stack.append(function(stack.pop(), e)) or stack[0]
for stack in ([initial],)
for e in sequence][-1]
This produces a list of all the intermediate values, and we want the last one. [-1] is just as easy as [0]. We need an accumulator to reduce, but can't use assignment statements in a comprehension, hence the stack (which is just a list), but we could have used many other data structures here. The .append() always returns None, so we use or stack[0] to put the value so far in the resulting list.
It's a little more difficult without initial,
[stack.append(function(stack.pop(), e)) or stack[0]
for it in [iter(sequence)]
for stack in [[next(it)]]
for e in it][-1]
Really, you might as well use a for statement at this point.
But this takes up memory for the list of intermediate values. For a very long sequence, that might be a problem. But we can avoid that too by using generator expressions.
Doing this is tricky, so let's start with an easier example and work up to it.
stack = [initial]
[stack.append(function(stack.pop(), e)) for e in sequence]
stack.pop() # returns the answer
It computes the answer, but also creates a useless list of Nones. We can avoid that by converting it to a generator expression inside a list comprehension.
stack = [initial]
[_ for _s in (stack.append(function(stack.pop(), e)) or ()
for e in sequence)
for _ in _s]
stack.pop()
The list comprehension exhausts the generator that updates the stack, but returns an empty list itself. This is possible because the inner loop always has zero iterations, because _s is always an empty tuple.
We can move the stack.pop() inside if the last _s has one element. It doesn't matter what that element is though. So we chain on a [None] as the final _s.
from itertools import chain
stack = [initial]
[stack.pop()
for _s in chain((stack.append(function(stack.pop(), e)) or ()
for e in sequence),
[[None]])
for _ in _s][0]
Again, we have a single-item list comprehension. We can also implement chain as a generator expression. And you've already seen how to move the stack variable inside using a single-item list.
[stack.pop()
for stack in [[initial]]
for _s in (
x
for xs in [
(stack.append(function(stack.pop(), e)) or ()
for e in sequence),
[[None]],
]
for x in xs)
for _ in _s][0]
And we can also get the initial from the sequence for the two-argument reduce.
[stack.pop()
for it in [iter(sequence)]
for stack in [[next(it)]]
for _s in (
x
for xs in [
(stack.append(function(stack.pop(), e)) or ()
for e in it),
[[None]],
]
for x in xs)
for _ in _s][0]
This is insane. But it works. So yes, it's possible to get "reduce-like functionality" with comprehensions. That doesn't mean you should. Seven fors is too hard!
You could accomplish something like a reduce with a comprehension by using a couple of helper functions that I've named last and cofold:
>>> last(r(a+b) for a, b, r in cofold(range(10)))
45
This is functionally equivalent to
>>> reduce(lambda a, b: a+b, range(10))
45
Note that unlike reduce() the comprehension didn't use a lambda.
The trick is to use a generator with a callback to "return" the result of the operator. cofold is the corecursive dual of the reduce (or fold) function.
_sentinel = object()

def cofold(it, initial=_sentinel):
    if initial is _sentinel:
        it = iter(it)
        accumulator = next(it)
    else:
        accumulator = initial

    def callback(result):
        nonlocal accumulator
        accumulator = result
        return result

    for element in it:
        yield accumulator, element, callback
Here's cofold in a list comprehension.
>>> [r(a+b) for a, b, r in cofold(range(10))]
[1, 3, 6, 10, 15, 21, 28, 36, 45]
The elements represent each step in the dual reduction. The last one is our answer. The last function is trivial.
def last(it):
    for e in it:
        pass
    return e
Unlike reduce, cofold is a lazy generator, so it can safely act on infinite iterables when used in a generator expression.
>>> from itertools import islice, count
>>> lazy_results = (r(a+b) for a, b, r in cofold(count()))
>>> [*islice(lazy_results, 0, 9)]
[1, 3, 6, 10, 15, 21, 28, 36, 45]
>>> next(lazy_results)
55
>>> next(lazy_results)
66

Can lambda in Python iterate a dict?

I had an interview recently. The interviewer asked me about the ways to iterate over a dict in Python. I said that all the ways I knew used a for statement, but he asked: what about lambda?
I am quite confused; I think of lambda as an anonymous function, so how can it iterate a dict? Some code like this:
new_dict = sorted(old_dict.items(), key=lambda x: x[1])  # sorted by value in dict
But in this code, the lambda is only used as a function to provide the sort key. What do you think of this question?
You don't iterate with lambda. These are the ways to iterate an iterable object in Python:
1. The for statement (your answer)
2. A comprehension, including list [x for x in y], dictionary {key: value for key, value in x} and set {x for x in y}
3. A generator expression: (x for x in y)
4. Passing it to a function that will iterate it (map, all, the itertools module)
5. Manually calling the next function until StopIteration is raised
Note: 3 will not iterate anything unless you later iterate over that generator. In the case of 4, it depends on the function.
For iterating specific collections like dict or list there are further techniques, such as while col: with element removal, or index/slicing tricks.
Now lambda comes into the picture. You can use lambdas in some of those functions, for example map(lambda x: x*2, [1, 2, 3]), but the lambda has nothing to do with the iteration process itself; you could just as well pass a regular function: map(func, [1, 2, 3]).
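A compact sketch of ways 2-5 on a small dict (all names here are just for illustration):
d = {'a': 1, 'b': 2}
squares = {k: v * v for k, v in d.items()}  # 2: dict comprehension
gen = (k for k in d)                        # 3: generator expression (lazy)
total = sum(d.values())                     # 4: a consuming function
it = iter(d)
print(next(it))                             # 5: manual iteration -> 'a'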
You can iterate dict using lambda like this:
d = {'a': 1, 'b': 2}
values = map(lambda key: d[key], d.keys())
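In Python 3, map returns a lazy iterator, so wrap it in list to actually see the values: list(values) gives [1, 2].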
Using a plain lambda to iterate anything in Python sounds very wrong. Certainly the most Pythonic way to iterate sequences and collections is to use list comprehensions and generator expressions, as @Andrey presented.
If the interviewer was leaning on the more theoretical/Computer Sciencey answers, it is worth noting that using lambdas to iterate is quite possible, although I must stress that this is not Pythonic nor useful at any context other than academic exercises:
# the legendary Y combinator makes it possible
# to let nameless functions recurse using an indirection
Y = lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args)))
# our iterator lambda
it = lambda f: lambda Lst: (Lst[0], f(Lst[1:])) if Lst else None
# see it in action:
Y(it)([1,2,3])
=> (1, (2, (3, None)))
lambda itself doesn't iterate anything. As you thought, it just defines an anonymous function; aside from the syntactic rule that its body must be a single expression, a lambda does nothing more than a similar function made using def. The code inside the lambda might iterate something, but only in the same ways any other function might (provided those ways are expressions, and so valid inside a lambda).
In the example you mention using sorted, the key function is called on each element of the list being sorted - but it is sorted itself that does this, and which does the iteration. When you provide a key function, sorted does something broadly similar to this:
def sorted(seq, key):
    decorated = [(key(elem), i) for i, elem in enumerate(seq)]
    # Sort using the normal tuple lexicographic comparisons
    decorated.sort()
    return [seq[i] for _, i in decorated]
As you can see, sorted does the iteration here, not the lambda. Indeed, there is no reason why the key has to be a lambda - any function (or any callable) will do as far as sorted is concerned.
At the lowest level, there is only really one way to iterate a dict (or, indeed, any other iterable) in Python, which is to use the iterator protocol. This is what the for loop does behind the scenes, and you could also use a while statement like this:
it = iter(my_iterable)
while True:
    try:
        val = next(it)
    except StopIteration:
        # Run the else clause of the for loop here
        break
    else:
        # Run the for loop body here
        pass
The comments in this aren't strictly part of the iterator protocol, they are instead part of the for loop (but having at least a loop body in there is mostly the point of iterating in the first place).
Other functions and syntax that consume iterables (such as list, set and dict comprehensions, generator expressions or builtins like sum, sorted or max) all use this protocol, by either:
Using a Python for loop,
Doing something like the above while loop (especially for modules written in C),
Delegating to another function or piece of syntax that uses one of these
A class can be made so that its instances become iterable in either of two ways:
Provide the iterator protocol directly. You need a method called __iter__ (called by iter), which returns an iterator. That iterator has a method called __next__ (just next in Python 2) which is called by next and returns the value at the iterator's current location and advances it (or raises StopIteration if it is already at the end); or
Implement part of the sequence protocol (which means behaving like a list or tuple). For forward iteration, it is sufficient to define __getitem__ in such a way that my_sequence[0], my_sequence[1], up to my_sequence[n-1] (where n is the number of items in the sequence) return the items, and higher indexes raise IndexError. You usually want to define __len__ as well, which is used when you do len(my_sequence).
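A minimal sketch of both approaches (class names are hypothetical):
class Count:
    # Iterator protocol: __iter__ returns an object with __next__.
    def __init__(self, n):
        self.n = n
        self.i = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.i >= self.n:
            raise StopIteration
        self.i += 1
        return self.i

class Squares:
    # Sequence protocol: 0-based __getitem__ plus __len__.
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        if not 0 <= i < self.n:
            raise IndexError(i)
        return i * i
    def __len__(self):
        return self.n

print(list(Count(3)))    # [1, 2, 3]
print(list(Squares(4)))  # [0, 1, 4, 9]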
Dictionary iteration using lambda
dct = {1: '1', 2: '2'}
Iterating over the dictionary using lambda:
map(lambda x: str(x[0]) + x[1], dct.iteritems())
where x[0] is the key and x[1] is the value.
Result:
['11', '22']
Filtering the dictionary using lambda:
filter(lambda x: x[0] > 1, dct.iteritems())
Result:
[(2, '2')]
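The snippets above are Python 2 (iteritems, and map/filter returning lists). A Python 3 equivalent, for reference:
dct = {1: '1', 2: '2'}
print(list(map(lambda x: str(x[0]) + x[1], dct.items())))  # ['11', '22']
print(list(filter(lambda x: x[0] > 1, dct.items())))       # [(2, '2')]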
One way to iterate a dict step by step in Python 2 is to bind the dict iterator's next method:
dic = {'a': 1, 'b': 2}
iter_dic = dic.iteritems().next
iter_dic()
...
iter_dic()
But you can also build it with a lambda (note that this version consumes the dict as it goes):
iter_dic = lambda dic: (dic.keys()[0], dic.pop(dic.keys()[0]))
iter_dic(dic)
...
iter_dic(dic)
Could we say
count = {8: 'u', 4: 't', 9: 'z', 10: 'j', 5: 'k', 3: 's'}
count_updated = dict(filter(lambda val: val[0] % 3 == 0,
count.items()))
is iterating a dictionary with a lambda function?
Printing count and then count_updated produces:
{8: 'u', 4: 't', 9: 'z', 10: 'j', 5: 'k', 3: 's'}
{9: 'z', 3: 's'}
from Python dictionary filter + Examples.

Python: Pass 1 argument to a 2d array of functions and get results in 2d array of same size

If I define a 2d array of lambda functions like:
N_gsi = [[lambda gsi:  1/4*(1+gsi[1]), lambda gsi:  1/4*(1+gsi[0])],
         [lambda gsi: -1/4*(1+gsi[1]), lambda gsi:  1/4*(1-gsi[0])],
         [lambda gsi: -1/4*(1-gsi[1]), lambda gsi: -1/4*(1-gsi[0])],
         [lambda gsi:  1/4*(1-gsi[1]), lambda gsi: -1/4*(1+gsi[0])]]
is it then possible to get the result of each function (using the same argument gsi, of course) into an array of the same size in an elegant way (no loop)?
something like:
resultArray = N_gsi(myArgumentGsi)
Yes, but you need to make a small change:
N_gsi = lambda gsi: [[ 1/4*(1+gsi[1]),  1/4*(1+gsi[0])],
                     [-1/4*(1+gsi[1]),  1/4*(1-gsi[0])],
                     [-1/4*(1-gsi[1]), -1/4*(1-gsi[0])],
                     [ 1/4*(1-gsi[1]), -1/4*(1+gsi[0])]]
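For example, with a hypothetical argument gsi = (0.5, -0.5):
myArgumentGsi = (0.5, -0.5)
resultArray = N_gsi(myArgumentGsi)
# [[0.125, 0.375], [-0.125, 0.125], [-0.375, -0.125], [0.375, -0.375]]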
This little function should do the trick for collections with arbitrary nesting depth. I'm not sure if you would count recursion as a loop here.
def apply(fs, arg):
    if callable(fs):
        return fs(arg)
    else:
        return list(map(lambda f: apply(f, arg), fs))
With this you could then write lambda gsi: apply(N_gsi, gsi) for your particular use case.

How do you apply a list of lambda functions to a single element using an iterator?

I want to apply a list of lambda functions to a single element, using an iterable that has to be created with yield.
The list of lambda functions would contain something like:
[<function <lambda> at 0x1d310c8>, <function <lambda> at 0x1d355f0>]
And I want to apply every function, from left to right, to a single element, using yield to construct the iterable.
def apply_all(functions, item):
    for f in functions:
        yield f(item)
Example usage:
functions = [type, id, hex]
for result in apply_all(functions, 55):
    print result
gives
<type 'int'>
20326112
0x37
Give this a shot:
def lambda_apply(unnamed_funcs, element):
    for unnamed in unnamed_funcs:
        yield unnamed(element)
>>> l = [lambda x: x**2, lambda x: 2*x]
>>> el = 5
>>> for y in lambda_apply(l, el):
... print y
...
25
10
Note that this works not only for a list of unnamed functions, but for any list of functions of arity 1. This is because all functions, named or not, are first-class objects in Python. You can store them in a list and use them later, as demonstrated above.
The answer could be formulated as
import numpy as np

def apply_funcs(funcs, val):
    for func in funcs:
        yield func(val)

my_funcs = [lambda x: np.cos(x), np.sin, np.tan]
my_val = 0.1

for res in apply_funcs(my_funcs, my_val):
    print res
where the apply_funcs function does the trick and the rest is just for demonstration purposes.
Do you necessarily need a yield statement? There is another way to create a generator: use parentheses to write a generator expression.
applied_it = (f(item) for f in functions)
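For example, with the same two lambdas as above:
functions = [lambda x: x ** 2, lambda x: 2 * x]
item = 5
applied_it = (f(item) for f in functions)
print(list(applied_it))  # [25, 10]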
def apply(value, lambda_list):
    for function in lambda_list:
        yield function(value)
