I have an array of functions and I'm trying to produce one function which consists of the composition of the elements in my array.
My approach is:
def compose(list):
    if len(list) == 1:
        return lambda x: list[0](x)
    list.reverse()
    final = lambda x: x
    for f in list:
        final = lambda x: f(final(x))
    return final
This method doesn't seem to be working; help would be appreciated.
(I'm reversing the list because that is the order in which I want the functions to be composed.)
The easiest approach would be first to write a composition of 2 functions:
def compose2(f, g):
    return lambda *a, **kw: f(g(*a, **kw))
And then use reduce to compose more functions:
import functools

def compose(*fs):
    return functools.reduce(compose2, fs)
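For example, a quick sanity check with two illustrative lambdas (not part of the answer itself):
add_one = lambda x: x + 1
double = lambda x: x * 2

compose(double, add_one)(3)  # double(add_one(3)) == 8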
Or you can use a library that already contains a compose function.
def compose(*functions):
    def inner(arg):
        for f in reversed(functions):
            arg = f(arg)
        return arg
    return inner
Example:
>>> def square(x):
...     return x ** 2
>>> def increment(x):
...     return x + 1
>>> def half(x):
...     return x / 2
>>> composed = compose(square, increment, half) # square(increment(half(x)))
>>> composed(5) # square(increment(half(5))) = square(increment(2.5)) = square(3.5) = 12.25
12.25
It doesn't work because all the anonymous functions you create in the loop refer to the same loop variable and therefore share its final value.
As a quick fix, you can replace the assignment with:
final = lambda x, f=f, final=final: f(final(x))
Or, you can return the lambda from a function:
def wrap(accum, f):
    return lambda x: f(accum(x))
...
final = wrap(final, f)
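Putting the quick fix back into the question's loop-based approach, a minimal corrected sketch could look like this (the parameter is renamed to avoid shadowing the built-in list; this is just one illustrative rewrite):
def compose(functions):
    final = lambda x: x
    for f in reversed(functions):
        final = lambda x, f=f, final=final: f(final(x))
    return final

# compose([a, b, c])(x) now evaluates a(b(c(x))), the order the question wanted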
To understand what's going on, try this experiment:
>>> l = [lambda: n for n in xrange(10)]
>>> [f() for f in l]
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
This result surprises many people, who expect the result to be [0, 1, 2, ...]. However, all the lambdas point to the same n variable, and all refer to its final value, which is 9. In your case, all the versions of final which are supposed to nest end up referring to the same f and, even worse, to the same final.
The topic of lambdas and for loops in Python has already been covered on SO.
One-liner (note that on Python 3, reduce must be imported from functools):
from functools import reduce

compose = lambda *F: reduce(lambda f, g: lambda x: f(g(x)), F)
Example usage:
f1 = lambda x: x+3
f2 = lambda x: x*2
f3 = lambda x: x-1
g = compose(f1, f2, f3)
assert(g(7) == 15)
Recursive implementation
Here's a fairly elegant recursive implementation, which uses features of Python 3 for clarity:
def strict_compose(*funcs):
    *funcs, penultimate, last = funcs
    if funcs:
        penultimate = strict_compose(*funcs, penultimate)
    return lambda *args, **kwargs: penultimate(last(*args, **kwargs))
Python 2 compatible version:
def strict_compose2(*funcs):
    if len(funcs) > 2:
        penultimate = strict_compose2(*funcs[:-1])
    else:
        penultimate = funcs[-2]
    return lambda *args, **kwargs: penultimate(funcs[-1](*args, **kwargs))
This is an earlier version which uses lazy evaluation of the recursion:
def lazy_recursive_compose(*funcs):
    def inner(*args, _funcs=funcs, **kwargs):
        if len(_funcs) > 1:
            return inner(_funcs[-1](*args, **kwargs), _funcs=_funcs[:-1])
        else:
            return _funcs[0](*args, **kwargs)
    return inner
Both would seem to build a new tuple and dict of arguments on each recursive call.
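A quick usage sketch, reusing the square, increment, and half functions from the example earlier on this page:
composed = strict_compose(square, increment, half)
composed(5)  # square(increment(half(5))) == 12.25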
Comparison of all suggestions:
Let's test some of these implementations to determine which is most performant. First, some single-argument functions (thank you, poke):
def square(x):
    return x ** 2

def increment(x):
    return x + 1

def half(x):
    return x / 2
Here are our implementations. I suspect my iterative version is the second most efficient (manual composition will naturally be fastest), but that may be partly because it sidesteps the difficulty of passing any number of arguments or keyword arguments between functions; in most cases we'll only see the trivial single argument being passed.
from functools import reduce

def strict_recursive_compose(*funcs):
    *funcs, penultimate, last = funcs
    if funcs:
        penultimate = strict_recursive_compose(*funcs, penultimate)
    return lambda *args, **kwargs: penultimate(last(*args, **kwargs))

def strict_recursive_compose2(*funcs):
    if len(funcs) > 2:
        penultimate = strict_recursive_compose2(*funcs[:-1])
    else:
        penultimate = funcs[-2]
    return lambda *args, **kwargs: penultimate(funcs[-1](*args, **kwargs))

def lazy_recursive_compose(*funcs):
    def inner(*args, _funcs=funcs, **kwargs):
        if len(_funcs) > 1:
            return inner(_funcs[-1](*args, **kwargs), _funcs=_funcs[:-1])
        else:
            return _funcs[0](*args, **kwargs)
    return inner

def iterative_compose(*functions):
    """my implementation, only accepts one argument."""
    def inner(arg):
        for f in reversed(functions):
            arg = f(arg)
        return arg
    return inner

def _compose2(f, g):
    return lambda *a, **kw: f(g(*a, **kw))

def reduce_compose1(*fs):
    return reduce(_compose2, fs)

def reduce_compose2(*funcs):
    """bug fixed - added reversed()"""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)
And to test these:
import timeit

def manual_compose(n):
    return square(increment(half(n)))

composes = (strict_recursive_compose, strict_recursive_compose2,
            lazy_recursive_compose, iterative_compose,
            reduce_compose1, reduce_compose2)

print('manual compose', min(timeit.repeat(lambda: manual_compose(5))), manual_compose(5))
for compose in composes:
    fn = compose(square, increment, half)
    result = min(timeit.repeat(lambda: fn(5)))
    print(compose.__name__, result, fn(5))
Results
And we get the following output (same magnitude and proportion in Python 2 and 3):
manual compose 0.4963762479601428 12.25
strict_recursive_compose 0.6564744340721518 12.25
strict_recursive_compose2 0.7216697579715401 12.25
lazy_recursive_compose 1.260614730999805 12.25
iterative_compose 0.614982972969301 12.25
reduce_compose1 0.6768529079854488 12.25
reduce_compose2 0.9890829260693863 12.25
My expectations were confirmed: the fastest is, of course, manual function composition, followed by the iterative implementation. The lazy recursive version is much slower, likely because a new stack frame is created for each function call and a new tuple of functions is created for each composed function.
For a better and perhaps more realistic comparison, if you remove **kwargs and change *args to arg in the functions, the ones that used them will be more performant, and we can better compare apples to apples. Here, aside from manual composition, reduce_compose1 wins, followed by strict_recursive_compose:
manual compose 0.443808660027571 12.25
strict_recursive_compose 0.5409777010791004 12.25
strict_recursive_compose2 0.5698030130006373 12.25
lazy_recursive_compose 1.0381018499610946 12.25
iterative_compose 0.619289995986037 12.25
reduce_compose1 0.49532539502251893 12.25
reduce_compose2 0.9633988010464236 12.25
Functions with just one arg:
def strict_recursive_compose(*funcs):
    *funcs, penultimate, last = funcs
    if funcs:
        penultimate = strict_recursive_compose(*funcs, penultimate)
    return lambda arg: penultimate(last(arg))

def strict_recursive_compose2(*funcs):
    if len(funcs) > 2:
        penultimate = strict_recursive_compose2(*funcs[:-1])
    else:
        penultimate = funcs[-2]
    return lambda arg: penultimate(funcs[-1](arg))

def lazy_recursive_compose(*funcs):
    def inner(arg, _funcs=funcs):
        if len(_funcs) > 1:
            return inner(_funcs[-1](arg), _funcs=_funcs[:-1])
        else:
            return _funcs[0](arg)
    return inner

def iterative_compose(*functions):
    """my implementation, only accepts one argument."""
    def inner(arg):
        for f in reversed(functions):
            arg = f(arg)
        return arg
    return inner

def _compose2(f, g):
    return lambda arg: f(g(arg))

def reduce_compose1(*fs):
    return reduce(_compose2, fs)

def reduce_compose2(*funcs):
    """bug fixed - added reversed()"""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)
The most reliable implementation I have found is in the third-party library toolz. The compose function from this library also handles the docstring of the composed function.
The source code is freely available. Below is a simple example of usage.
from toolz import compose

def f(x):
    return x + 1

def g(x):
    return x * 2

def h(x):
    return x + 3

res = compose(f, g, h)(5)  # 17
You can also create an array of functions and use reduce:
def f1(x): return x+1
def f2(x): return x+2
def f3(x): return x+3
x = 5
# Will print f3(f2(f1(x)))
print reduce(lambda acc, x: x(acc), [f1, f2, f3], x)
# As a function:
def compose(*funcs):
return lambda x: reduce(lambda acc, f: f(acc), funcs, x)
f = compose(f1, f2, f3)
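A quick check of the composed function defined just above:
f(x)  # f3(f2(f1(5))) == 11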
funcoperators is another library that implements this and allows infix notation (pip install funcoperators):
from funcoperators import compose
# display = lambda x: hex(ord(list(x)))
display = hex *compose* ord *compose* list
# also works as a function
display = compose(hex, ord, list)
Install via pip install funcoperators (https://pypi.org/project/funcoperators/).
Disclaimer: I'm the creator of the module.
Suppose you have the following functions:
def square(x):
    return x ** 2

def inc(x):
    return x + 1

def half(x):
    return x / 2
Define a compose function as follows:
import functools

def compose(*functions):
    return functools.reduce(lambda f, g: lambda x: g(f(x)),
                            functions,
                            lambda x: x)
Usage:
composed = compose(square, inc, inc, half)
composed(10)  # 51.0
which executes the functions procedurally in the defined order:
square (= 100)
inc (= 101)
inc (= 102)
half (= 51)
Adapted from https://mathieularose.com/function-composition-in-python/.
I prefer this one due to its readability/simplicity:
from functools import reduce

def compose(*fs):
    apply = lambda arg, f: f(arg)
    composition = lambda x: reduce(apply, [x, *fs])
    return composition
The pipe compose(a, b, c) will first apply a, then b, and then c.
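A quick sanity check of that order, with a couple of illustrative lambdas (not from the answer):
add_three = lambda x: x + 3
double = lambda x: x * 2

compose(add_three, double)(1)  # add_three first, then double: (1 + 3) * 2 == 8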
With regard to maintainability (and debugging), I think this one is actually the easiest to use:
def compose(*fs):
    def composition(x):
        for f in fs:
            x = f(x)
        return x
    return composition
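For instance, because the composition is a plain loop, it is easy to drop in tracing while debugging; a minimal sketch (the print call and the built-ins used here are illustrative additions, not part of the answer):
def traced_compose(*fs):
    def composition(x):
        for f in fs:
            x = f(x)
            print(getattr(f, '__name__', repr(f)), '->', x)  # show each intermediate value
        return x
    return composition

traced_compose(abs, float, str)(-3)  # prints each step and returns '3.0'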
You can use funcy.
Installation:
pip install funcy
Then you can use compose or rcompose as follows:
from funcy import compose, rcompose
def inc(x): return x + 1
def double(x): return x + x
def tripple(x): return x + x + x
print(compose(tripple, double, inc)(1)) # 12
print(rcompose(inc, double, tripple)(1)) # 12
I found this piece of code on GeeksforGeeks for Python 3. I'm not sure how efficient it is, but it is very simple to understand.
# importing reduce() from functools
from functools import reduce

# composite_function accepts N
# functions as arguments and
# then composes them
def composite_function(*func):
    def compose(f, g):
        return lambda x: f(g(x))
    return reduce(compose, func, lambda x: x)
# Function to add 2
def add(x):
    return x + 2

# Function to multiply by 2
def multiply(x):
    return x * 2

# Function to subtract 1
def subtract(x):
    return x - 1
# Here add_subtract_multiply will
# store lambda x: multiply(subtract(add(x)))
add_subtract_multiply = composite_function(multiply,
                                           subtract,
                                           add)

print("Adding 2 to 5, then subtracting 1 and multiplying the result by 2:",
      add_subtract_multiply(5))
You can keep adding more functions to composite_function, e.g.:
print(composite_function(multiply, add, subtract, multiply,subtract, add)(5))
A more general version of Imanol Luengo's solution, from my point of view (Python notebook example):
from functools import reduce
from functools import partial

def f(*argv, **kwargs):
    print('f: {} {}'.format(argv, kwargs))
    return argv, kwargs

def g(*argv, **kwargs):
    print('g: {} {}'.format(argv, kwargs))
    return argv, kwargs

def compose(fs, *argv, **kwargs):
    return reduce(lambda x, y: y(*x[0], **x[1]), fs, (argv, kwargs))

h = partial(compose, [f, g])
h('value', key='value')
output:
f: ('value',) {'key': 'value'}
g: ('value',) {'key': 'value'}
m = partial(compose, [h, f, g])
m('value', key='value')
output:
f: ('value',) {'key': 'value'}
g: ('value',) {'key': 'value'}
f: ('value',) {'key': 'value'}
g: ('value',) {'key': 'value'}
Perfectly good question, but the answers sure are unnecessarily complex. It's just:
def compose(*funs):
    return (lambda x:
            x if len(funs) == 0
            else compose(*funs[:-1])(funs[-1](x)))
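A brief usage sketch with built-ins (my example, not the answer's):
compose(hex, ord)('A')  # hex(ord('A')) == '0x41'
compose()(42)           # the empty composition acts as the identity: 42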
If you want no dependencies, here is a one-liner recursive solution:
def compose(*f):
    return f[0] if len(f) <= 1 else lambda *a, **kw: f[0](compose(*f[1:])(*a, **kw))
N.B. len(f) == 1 might seem more reasonable at first sight, but it would allow writing compose() (i.e. with no arguments) and only throw an error when the empty composition is applied. With len(f) <= 1, compose() throws an error immediately, which is more sensible behavior.
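A brief usage sketch (my example, built-ins only):
compose(hex, ord)('A')  # hex(ord('A')) == '0x41'
compose(abs)(-5)        # a single function is returned as-is: 5
compose()               # raises IndexError immediately, as discussed above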
This is my version (reduce must be imported from functools on Python 3):
from functools import reduce

def compose(*fargs):
    def inner(arg):
        if not arg:
            raise ValueError("Invalid argument")
        if not all([callable(f) for f in fargs]):
            raise TypeError("Function is not callable")
        return reduce(lambda arg, func: func(arg), fargs, arg)
    return inner
An example of how it's used (random, math, and functools.partial must be imported as well):
import math
import random
from functools import partial

def calcMean(iterable):
    return sum(iterable) / len(iterable)

def formatMean(mean):
    return round(float(mean), 2)

def adder(val, value):
    return val + value

def isEven(val):
    return val % 2 == 0

if __name__ == '__main__':
    # Ex1
    rand_range = [random.randint(0, 10000) for x in range(0, 10000)]
    isRandIntEven = compose(calcMean, formatMean,
                            partial(adder, value=0), math.floor.__call__, isEven)
    print(isRandIntEven(rand_range))
I'm trying to write a function in Python that looks like:
def repeated(f, n):
    ...
where f is a function that takes one argument and n is a positive integer.
For example, if I defined square as:
def square(x):
    return x * x
and I called
repeated(square, 2)(3)
this would square 3, two times.
That should do it:
def repeated(f, n):
    def rfun(p):
        return reduce(lambda x, _: f(x), xrange(n), p)
    return rfun

def square(x):
    print "square(%d)" % x
    return x * x

print repeated(square, 5)(3)
output:
square(3)
square(9)
square(81)
square(6561)
square(43046721)
1853020188851841
or lambda-less?
def repeated(f, n):
    def rfun(p):
        acc = p
        for _ in xrange(n):
            acc = f(acc)
        return acc
    return rfun
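A quick check with a smaller count (square as defined above):
repeated(square, 2)(3)  # square(square(3)) == 81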
Using reduce and lambda.
Build a tuple starting with your parameter, followed by all functions you want to call:
>>> import os
>>> path = "/a/b/c/d/e/f"
>>> reduce(lambda val, func: func(val), (path,) + (os.path.dirname,) * 3)
'/a/b/c'
Something like this?
def repeat(f, n):
    if n == 0:
        return (lambda x: x)
    return (lambda x: f(repeat(f, n-1)(x)))
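A quick check, assuming square is defined as in the question:
repeat(square, 2)(3)  # square(square(3)) == 81
repeat(square, 0)(3)  # zero repetitions return the identity: 3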
Use an itertools recipe called repeatfunc that performs this operation.
Given
def square(x):
    """Return the square of a value."""
    return x * x
Code
From itertools recipes:
from itertools import repeat, starmap

def repeatfunc(func, times=None, *args):
    """Repeat calls to func with specified arguments.

    Example: repeatfunc(random.random)
    """
    if times is None:
        return starmap(func, repeat(args))
    return starmap(func, repeat(args, times))
Demo
Optional: You can use a third-party library, more_itertools, that conveniently implements these recipes:
import more_itertools as mit
list(mit.repeatfunc(square, 2, 3))
# [9, 9]
Install via pip install more_itertools.
Using reduce and itertools.repeat (as Marcin suggested):
from itertools import repeat
from functools import reduce  # necessary for Python 3

def repeated(func, n):
    def apply(x, f):
        return f(x)

    def ret(x):
        return reduce(apply, repeat(func, n), x)

    return ret
You can use it as follows:
>>> repeated(os.path.dirname, 3)('/a/b/c/d/e/f')
'/a/b/c'
>>> repeated(square, 5)(3)
1853020188851841
(after importing os or defining square respectively)
I think you want function composition:
def compose(f, x, n):
    if n == 0:
        return x
    return compose(f, f(x), n - 1)

def square(x):
    return pow(x, 2)

y = compose(square, 3, 2)
print y
Here's a recipe using reduce:
def power(f, p, myapply=lambda init, g: g(init)):
    ff = (f,)*p  # tuple of length p containing only f in each slot
    return lambda x: reduce(myapply, ff, x)

def square(x):
    return x * x

power(square, 2)(3)
#=> 81
I call this power, because this is literally what the power function does, with composition replacing multiplication.
(f,)*p creates a tuple of length p filled with f in every index. If you wanted to get fancy, you would use a generator to generate such a sequence (see itertools) - but note it would have to be created inside the lambda.
myapply is defined in the parameter list so that it is only created once.
Is it possible to pass functions with arguments to another function in Python?
Say for something like:
def perform(function):
    return function()
But the functions to be passed will have arguments like:
action1()
action2(p)
action3(p,r)
Do you mean this?
def perform(fun, *args):
    fun(*args)

def action1(args):
    # something
    pass

def action2(args):
    # something
    pass

perform(action1)
perform(action2, p)
perform(action3, p, r)
This is what lambda is for:
def perform(f):
    f()
perform(lambda: action1())
perform(lambda: action2(p))
perform(lambda: action3(p, r))
You can use the partial function from functools like so.
from functools import partial
def perform(f):
    f()
perform(Action1)
perform(partial(Action2, p))
perform(partial(Action3, p, r))
Also works with keywords
perform(partial(Action4, param1=p))
Use functools.partial, not lambdas! And of course perform is a useless function; you can pass around functions directly.
for func in [Action1, partial(Action2, p), partial(Action3, p, r)]:
    func()
These are called partial functions, and there are at least three ways to create them. My favorite way is using lambda because it avoids an extra import and is the least verbose. Assume you have a function add(x, y) and you want to pass add(3, y) to some other function as a parameter, so that the other function decides the value for y.
Use lambda
# generic function takes op and its argument
def runOp(op, val):
    return op(val)

# declare full function
def add(x, y):
    return x + y

# run example
def main():
    f = lambda y: add(3, y)
    result = runOp(f, 1)  # is 4
Create Your Own Wrapper
Here you need to create a function that returns the partial function. This is obviously a lot more verbose.
# generic function takes op and its argument
def runOp(op, val):
    return op(val)

# declare full function
def add(x, y):
    return x + y

# declare partial function
def addPartial(x):
    def _wrapper(y):
        return add(x, y)
    return _wrapper

# run example
def main():
    f = addPartial(3)
    result = runOp(f, 1)  # is 4
Use partial from functools
This is almost identical to the lambda shown above. Then why do we need it? There are a few reasons. In short, partial might be a bit faster in some cases (see its implementation), and you can use it for early binding vs. lambda's late binding.
from functools import partial

# generic function takes op and its argument
def runOp(op, val):
    return op(val)

# declare full function
def add(x, y):
    return x + y

# run example
def main():
    f = partial(add, 3)
    result = runOp(f, 1)  # is 4
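To illustrate the early vs. late binding difference mentioned above, here is a small sketch (my example, not from the answer; add is the same two-argument function used throughout):
from functools import partial

def add(x, y):
    return x + y

# late binding: every lambda closes over the same x, which ends up as 2
lambdas = [lambda y: add(x, y) for x in range(3)]
# early binding: each partial stores its own value of x at creation time
partials = [partial(add, x) for x in range(3)]

print([f(10) for f in lambdas])   # [12, 12, 12]
print([f(10) for f in partials])  # [10, 11, 12]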
(Months later) Here's a tiny real example where lambda is useful and partial is not:
say you want various 1-dimensional cross-sections through a 2-dimensional function,
like slices through a row of hills.
quadf(x, f) takes a 1-d f and calls it for various x.
To call it for vertical cuts at y = -1, 0, 1 and horizontal cuts at x = -1, 0, 1:
fx1 = quadf( x, lambda x: f( x, 1 ))
fx0 = quadf( x, lambda x: f( x, 0 ))
fx_1 = quadf( x, lambda x: f( x, -1 ))
fxy = parabola( y, fx_1, fx0, fx1 )
f_1y = quadf( y, lambda y: f( -1, y ))
f0y = quadf( y, lambda y: f( 0, y ))
f1y = quadf( y, lambda y: f( 1, y ))
fyx = parabola( x, f_1y, f0y, f1y )
As far as I know, partial can't do this --
quadf( y, partial( f, x=1 ))
TypeError: f() got multiple values for keyword argument 'x'
Although all the responses are very accurate and well explained, I want to point out that you can also pass anonymous functions.
def perform(fun, *arg):
    return fun(*arg)
# Pass anonymous function
print(perform(lambda x: x + 1, 3)) # output: 4
print(perform(lambda x, y: x + y + 1, 3, 2)) # output: 6
# Pass defined function
perform(lambda: action1())
perform(lambda: action2(p))
perform(lambda: action3(p, r))
Here is a way to do it with a closure:
def generate_add_mult_func(func):
    def function_generator(x):
        return reduce(func, range(1, x))
    return function_generator

def add(x, y):
    return x + y

def mult(x, y):
    return x * y

adding = generate_add_mult_func(add)
multiplying = generate_add_mult_func(mult)

print adding(10)
print multiplying(10)
I think this is what you're looking for...
def action1(action):
    print(f'doing {action} here!')

def perform(function):
    return function()

perform(lambda: action1('business action'))
The lambda packages up the function and its arguments in a closure and passes it to perform().
Thanks to David Beasley.