I have a chain of operations which needs to occur one after the other and each depends on the previous function's output.
Like this:
out1 = function1(initial_input)
out2 = function2(out1)
out3 = function3(out2)
out4 = function4(out3)
and so on about 10 times. It looks a little ugly in the code.
What is the best way to write it? Is there some way to handle it using some functional programming magic? Is there a better way to call and execute this function chain?
You can use functools.reduce:
out = functools.reduce(lambda x, y : y(x), [f1, f2, f3, f4], initial_value)
Quoting functools.reduce documentation:
Apply a function of two arguments cumulatively to the items of a sequence,
from left to right, so as to reduce the sequence to a single value.
For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
((((1+2)+3)+4)+5). If initial is present, it is placed before the items
of the sequence in the calculation, and serves as a default when the
sequence is empty.
Here we use the fact that functions can be treated like any other value in Python, together with an anonymous function that simply does "apply x to function y".
This "reduce" operation is part of a very general pattern which has been applied successfully to parallelize tasks (see http://en.wikipedia.org/wiki/MapReduce).
Use a loop:
out = initial_input
for func in [function1, function2, function3, function4]:
    out = func(out)
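If the chain appears in several places, the loop is easy to wrap in a small helper; here is a rough sketch (the name pipeline is just illustrative):

def pipeline(value, funcs):
    """Apply each function in funcs to the running value, left to right."""
    for func in funcs:
        value = func(value)
    return value

# out = pipeline(initial_input, [function1, function2, function3, function4])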
To take the functional programming approach a bit further:
In [1]: compose = lambda f, g: lambda arg: f(g(arg))
In [2]: from functools import reduce
In [3]: funcs = [lambda x:x+1, lambda x:x*2]
In [4]: f = reduce(compose, funcs)
In [5]: f(1)
Out[5]: 3
In [6]: f(3)
Out[6]: 7
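One caveat, added here for clarity: reduce(compose, funcs) applies the last function in funcs first. To run a chain left to right, as in the original question, compose the reversed list, roughly like this:

In [7]: chain = reduce(compose, reversed(funcs))

In [8]: chain(1)   # the +1 runs first, then the *2
Out[8]: 4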
You can pass the return values directly to the next function:
out4 = function4(function3(function2(function1(initial_input))))
But this isn't necessarily better, and is perhaps less readable.
I know that map takes a function as its first argument and that the following arguments are iterables to which the passed function is applied. My question is: say I have a 2d list like
l=[[1,2,3],[4,5,6],[7,8,9]]
how can I sort the individual lists in reverse order so my output is
l=[[3,2,1],[6,5,4],[9,8,7]]
I know a potential solution is using a lambda function such as
list(map(lambda x:x[::-1],l))
I want something like this
list(map(sorted, l,'reversed=True'))
where 'reversed=True' is an argument that sorted takes
eg:
>>> newList=[1,2,3]
>>> sorted(newList,reversed='True')
>>> [3,2,1]
I have seen how to pass extra arguments to the pow function using itertools.repeat:
map(pow, list, itertools.repeat(x))
where x is the power to which each element of the list must be raised.
I want to know if there is any way the arguments can be passed in a map function. In my case the 'reverse=True' for the sorted function.
You can use functools.partial for this:
import functools
new_list = list(map(functools.partial(sorted, reverse=True), l))
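For the sample list from the question, this should give:

>>> l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> list(map(functools.partial(sorted, reverse=True), l))
[[3, 2, 1], [6, 5, 4], [9, 8, 7]]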
You can use a lambda to wrap the function:
map(lambda x: sorted(x, reverse=True), l)
or:
map(lambda i, j: pow(i, j), list, itertools.repeat(x))
There are many ways to do it.
You could use functools.partial. It creates a partial application of the function you pass to it: in effect, a new function with some of the parameters already filled in.
For your example, it would be:
from functools import partial
rev_sort = partial(sorted, reverse=True)
map(rev_sort, l)
The other way is using a simple lambda:
map(lambda arr: sorted(arr, reverse=True), l)
The other other way (my personal choice), is using generators:
(sorted(arr, reverse=True) for arr in l)
For this specific case, you can also use a list comprehension -
l=[[1,2,3],[4,5,6],[7,8,9]]
l = [list(reversed(sublist)) for sublist in l]
# [[3,2,1],[6,5,4],[9,8,7]]
If I define a 2d array of lambda functions like:
N_gsi = [ [lambda gsi:1/4*(1+gsi[1]), lambda gsi:1/4*(1+gsi[0])],
[lambda gsi:-1/4*(1+gsi[1]), lambda gsi:1/4*(1-gsi[0])],
[lambda gsi:-1/4*(1-gsi[1]), lambda gsi:-1/4*(1-gsi[0])],
[lambda gsi:1/4*(1-gsi[1]), lambda gsi:-1/4*(1+gsi[0])]]
Is it then possible to get the result of each function (using the same argument gsi, of course) into an array of the same size in an elegant way (no loop)?
something like:
resultArray = N_gsi(myArgumentGsi)
Yes, but you need to make a small change.
N_gsi = lambda gsi: [[ 1/4*(1+gsi[1]),  1/4*(1+gsi[0])],
                     [-1/4*(1+gsi[1]),  1/4*(1-gsi[0])],
                     [-1/4*(1-gsi[1]), -1/4*(1-gsi[0])],
                     [ 1/4*(1-gsi[1]), -1/4*(1+gsi[0])]]
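A quick usage sketch, assuming Python 3 division and a made-up sample point gsi = (0.5, -0.5):

>>> N_gsi((0.5, -0.5))
[[0.125, 0.375], [-0.125, 0.125], [-0.375, -0.125], [0.375, -0.375]]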
This little function should do the trick for collections with arbitrary nesting depth. I'm not sure if you would count recursion as a loop here.
def apply(fs, arg):
    if callable(fs):
        return fs(arg)
    else:
        return list(map(lambda f: apply(f, arg), fs))
With this you could then write lambda gsi: apply(N_gsi, gsi) for your particular use case.
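For illustration, with the original 2d list of lambdas and a made-up sample point (not values from the question):

>>> gsi = (0.5, -0.5)
>>> apply(N_gsi, gsi)
[[0.125, 0.375], [-0.125, 0.125], [-0.375, -0.125], [0.375, -0.375]]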
I need to simplify my code as much as possible: it needs to be one line of code.
I need to put a for loop inside a lambda expression, something like this:
x = lambda x: (for i in x : print i)
Just in case someone is looking for a similar problem...
Most of the solutions given here are one line and are quite readable and simple. I just wanted to add one more that does not need lambda (I am assuming that you are trying to use lambda only for the sake of making it a one-liner).
Instead, you can use a simple list comprehension.
[print(i) for i in x]
By the way, the return value will be a list of Nones.
Since a for loop is a statement (as is print, in Python 2.x), you cannot include it in a lambda expression. Instead, you need to use the write method on sys.stdout along with str.join:
import sys
x = lambda x: sys.stdout.write("\n".join(x) + "\n")
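A quick usage sketch (it assumes an iterable of strings, since str.join won't accept integers):

x(["a", "b", "c"])   # writes a, b and c on separate lines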
To add on to chepner's answer, in Python 3 you can alternatively do:
x = lambda x: list(map(print, x))
Of course, this only works if you are able to use Python 3 or later... It looks a bit cleaner in my opinion, but it also has a weird return value, which you are probably discarding anyway.
I'll just leave this here for reference.
anon and chepner's answers are on the right track. Python 3.x has a print function and this is what you will need if you want to embed print within a function (and, a fortiori, lambdas).
However, you can get the print function very easily in Python 2.x by importing it from the __future__ module. Check it out:
>>> from __future__ import print_function
>>>
>>> iterable = ["a", "b", "c"]
>>> map(print, iterable)
a
b
c
[None, None, None]
>>>
I guess that looks kind of weird, so feel free to assign the return value to _ if you would like to suppress the [None, None, None] output (you are interested in the side effects only, I assume):
>>> _ = map(print, iterable)
a
b
c
>>>
If, like me, you just want to print a sequence within a lambda without getting the return value (a list of Nones):
x = range(3)
from __future__ import print_function # if not python 3
pra = lambda seq=x: map(print,seq) and None # pra for 'print all'
pra()
pra('abc')
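One caveat, added for completeness: in Python 3, map is lazy, so map(print, seq) and None prints nothing unless the map is consumed. A rough Python 3 equivalent would be:

pra = lambda seq: list(map(print, seq)) and None  # list() forces the lazy map, so the printing happens
pra(['a', 'b', 'c'])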
A lambda is just an anonymous function, meaning there is no need to define it with def name():
lambda <inputs>: <expression>
[print(x) for x in a]  # this is the for loop in one line
a = [1,2,3,4]
l = lambda : [print(x) for x in a]
l()
output
1
2
3
4
We can use lambda functions in a for loop.
See the code below:
list1 = [1, 2, 3, 4, 5]
list2 = []
for i in list1:
    f = lambda i: i / 2
    list2.append(f(i))
print(list2)
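For list1 = [1, 2, 3, 4, 5] this should print [0.5, 1.0, 1.5, 2.0, 2.5].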
First of all, it is bad practice to write a lambda function like x = some_lambda_function. Lambda functions are fundamentally meant to be used inline, not stored, so writing x = some_lambda_function is equivalent to
def some_lambda_function():
    pass
Moving to the actual answer. You can map the lambda function to an iterable so something like the following snippet will serve the purpose.
a = map(lambda x : print(x),[1,2,3,4])
list(a)
If you want to use print for debugging purposes inside a reduce, the logical or operator helps you avoid ending up with print's None return value in the accumulator variable.
def test_lam():
    '''printing in lambda within reduce'''
    from functools import reduce
    lam = lambda x, y: print(x, y) or x + y
    print(reduce(lam, [1, 2, 3]))

if __name__ == '__main__':
    test_lam()
This will print the following:
1 2
3 3
6
You can make it a one-liner.
Sample
myList = [1, 2, 3]
print_list = lambda list: [print(f'Item {x}') for x in list]
print_list(myList)
otherList = [11, 12, 13]
print_list(otherList)
Output
Item 1
Item 2
Item 3
Item 11
Item 12
Item 13
What is the most idiomatic way to achieve something like the following, in Haskell:
foldl (+) 0 [1,2,3,4,5]
--> 15
Or its equivalent in Ruby:
[1,2,3,4,5].inject(0) {|m,x| m + x}
#> 15
Obviously, Python provides the reduce function, which is an implementation of fold, exactly as above. However, I was told that the 'pythonic' way of programming is to avoid lambda terms and higher-order functions, preferring list comprehensions where possible. So is there a preferred way of folding a list, or list-like structure, in Python that isn't the reduce function, or is reduce the idiomatic way of achieving this?
The Pythonic way of summing an array is using sum. For other purposes, you can sometimes use some combination of reduce (from the functools module) and the operator module, e.g.:
import operator
from functools import reduce

def product(xs):
    return reduce(operator.mul, xs, 1)
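For example:

product([1, 2, 3, 4])  # 24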
Be aware that reduce is actually a foldl, in Haskell terms. There is no special syntax to perform folds, there's no builtin foldr, and actually using reduce with non-associative operators is considered bad style.
Using higher-order functions is quite pythonic; it makes good use of Python's principle that everything is an object, including functions and classes. You are right that lambdas are frowned upon by some Pythonistas, but mostly because they tend not to be very readable when they get complex.
Starting with Python 3.8 and the introduction of assignment expressions (PEP 572, the := operator), which make it possible to name the result of an expression, we can use a list comprehension to replicate what other languages call fold/foldleft/reduce operations:
Given a list, a reducing function and an accumulator:
items = [1, 2, 3, 4, 5]
f = lambda acc, x: acc * x
accumulator = 1
we can fold items with f in order to obtain the resulting accumulation:
[accumulator := f(accumulator, x) for x in items]
# accumulator = 120
or in a condensed formed:
acc = 1; [acc := acc * x for x in [1, 2, 3, 4, 5]]
# acc = 120
Note that this is actually also a "scanleft" operation as the result of the list comprehension represents the state of the accumulation at each step:
acc = 1
scanned = [acc := acc * x for x in [1, 2, 3, 4, 5]]
# scanned = [1, 2, 6, 24, 120]
# acc = 120
Haskell
foldl (+) 0 [1,2,3,4,5]
Python
reduce(lambda a,b: a+b, [1,2,3,4,5], 0)
Obviously, that is a trivial example to illustrate a point. In Python you would just do sum([1,2,3,4,5]) and even Haskell purists would generally prefer sum [1,2,3,4,5].
For non-trivial scenarios when there is no obvious convenience function, the idiomatic pythonic approach is to explicitly write out the for loop and use mutable variable assignment instead of using reduce or a fold.
That is not at all the functional style, but that is the "pythonic" way. Python is not designed for functional purists. See how Python favors exceptions for flow control to see how non-functional idiomatic python is.
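As a hedged sketch of that idiom (the word-counting example is mine, not the answer's): an explicit loop with a mutable accumulator in place of a fold.

# Aggregate word counts with a plain loop instead of reduce.
words = ["spam", "eggs", "spam"]
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1
# counts == {'spam': 2, 'eggs': 1}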
In Python 3, the reduce built-in has been removed (see the release notes). Nevertheless, you can use the functools module:
import operator, functools

def product(xs):
    return functools.reduce(operator.mul, xs, 1)
On the other hand, the documentation expresses a preference for a for loop over reduce, hence:
def product(xs):
    result = 1
    for i in xs:
        result *= i
    return result
Not really an answer to the question, but here are one-liners for foldl and foldr:
a = [8,3,4]
## Foldl
reduce(lambda x,y: x**y, a)
#68719476736
## Foldr
reduce(lambda x,y: y**x, a[::-1])
#14134776518227074636666380005943348126619871175004951664972849610340958208L
You can reinvent the wheel as well:
def fold(f, l, a):
    """
    f: the function to apply
    l: the list to fold
    a: the accumulator, which is also the 'zero' on the first call
    """
    return a if len(l) == 0 else fold(f, l[1:], f(a, l[0]))
print "Sum:", fold(lambda x, y : x+y, [1,2,3,4,5], 0)
print "Any:", fold(lambda x, y : x or y, [False, True, False], False)
print "All:", fold(lambda x, y : x and y, [False, True, False], True)
# Prove that result can be of a different type of the list's elements
print "Count(x==True):",
print fold(lambda x, y : x+1 if(y) else x, [False, True, True], 0)
The actual answer to this (reduce) problem is: Just use a loop!
initial_value = 0
for x in the_list:
    initial_value += x  # or any function
This will be faster than reduce, and implementations like PyPy can optimize loops like this.
By the way, the sum case is best solved with the built-in sum function.
I believe some of the respondents of this question have missed the broader implication of the fold function as an abstract tool. Yes, sum can do the same thing for a list of integers, but that is a trivial case. fold is more general: it is useful when you have a sequence of data structures of varying shape and want to cleanly express an aggregation. So instead of having to build up a for loop with an aggregate variable and manually recompute it each time, a fold function (or the Python version, which reduce appears to correspond to) lets the programmer express the intent of the aggregation much more plainly by simply providing two things (see the sketch after this list):
A default starting or "seed" value for the aggregation.
A function that takes the current value of the aggregation (starting with the "seed") and the next element in the list, and returns the next aggregation value.
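Here is a hedged illustration of that point (the records and field names are made up): reduce folds a sequence of records of varying shape into one summary, given only the seed and the combining function.

from functools import reduce

records = [
    {"amount": 10},
    {"amount": 5, "discount": 2},
    {"refund": 3},
]

def combine(total, record):
    """Running total plus one record -> new running total."""
    return total + record.get("amount", 0) - record.get("discount", 0) - record.get("refund", 0)

total = reduce(combine, records, 0)  # seed value 0
# total == 10 + (5 - 2) - 3 == 10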
I was wondering whether for most examples it is more 'pythonic' to use lambda or the partial function?
For example, I might want to apply imap on some list, like add 3 to every element using:
imap(lambda x : x + 3, my_list)
Or to use partial:
imap(partial(operator.add, 3), my_list)
I realize in this example a loop could probably accomplish it easier, but I'm thinking about more non-trivial examples.
In Haskell, I would easily choose partial application in the above example, but I'm not sure for Python. To me, the lambda seems the better choice, but I don't know what the prevailing choice is for most Python programmers.
To be truly equivalent to imap, use a generator expression:
(x + 3 for x in mylist)
Like imap, this doesn't immediately construct an entire new list, but instead computes elements of the resulting sequence on-demand (and is thus much more efficient than a list comprehension if you're chaining the result into another iteration).
If you're curious about where partial would be a better option than lambda in the real world, it tends to be when you're dealing with variable numbers of arguments:
>>> from functools import partial
>>> def a(*args):
... return sum(args)
...
>>> b = partial(a, 2, 3)
>>> b(6, 7, 8)
26
The equivalent version using lambda would be...
>>> b = lambda *args: a(2, 3, *args)
>>> b(6, 7, 8)
26
which is slightly less concise - but lambda does give you the option of out-of-order application, which partial does not:
>>> def a(x, y, z):
... return x + y - z
...
>>> b = lambda m, n: a(m, 1, n)
>>> b(2, 5)
-2
In the given example, lambda seems most appropriate. It's also easier on the eyes.
I have never seen the use of partial functions in the wild.
lambda is certainly many times more common. Unless you're doing functional programming in an academic setting, you should probably steer away from functools.
This is pythonic. No library needed, or even builtins, just a simple generator expression.
( x + 3 for x in my_list )
This creates a generator, similar to imap.
If you're going to make a list out of it anyway, use a list comprehension instead:
[ x + 3 for x in my_list ]