Consider the following code, which parses and evaluates strings like "567 + 323" in Python.
import parsy as pr
from parsy import generate

def lex(p):
    return p << pr.regex(r'\s*')

numberP = lex(pr.regex(r'[0-9]+').map(int))

@generate
def sumP():
    a = yield numberP
    yield lex(pr.string('+'))
    b = yield numberP
    return a + b

exp = sumP.parse('567 + 323')
print(exp)
The @generate decorator is a total mystery to me. Does anyone have more information on how that trick works? It does allow us to write in a similar style to Haskell's monadic do notation. Is code reflection needed to make your own @generate, or is there a clever way to interpret that code literally?
Now here comes my main problem: I want to generalize sumP into opP, which also takes an operator symbol and a combinator function:
import parsy as pr
from parsy import generate

def lex(p):
    return p << pr.regex(r'\s*')

numberP = lex(pr.regex(r'[0-9]+').map(int))

@generate
def opP(symbol, f):
    a = yield numberP
    yield lex(pr.string(symbol))
    b = yield numberP
    return f(a, b)

exp = opP('+', lambda x, y: x + y).parse('567 + 323')
print(exp)
This gives an error. It seems that the generated opP already has two arguments, which I do not know how to deal with.
The way that decorators work in Python is that they're functions that are called with the decorated function as an argument; their return value is then assigned to that function's name. In other words, this:
@foo
def bar():
    bla
Is equivalent to this:
def bar():
    bla

bar = foo(bar)
Here foo can do anything it wants with bar. It may wrap it in something, it may introspect its code, it may call it.
What @generate does is to wrap the given function in a parser object. The parser object, when parsing, will call the function without arguments, which is why you get an error about missing arguments when you apply @generate to a function that takes arguments.
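To see why no reflection is needed, here is a minimal sketch of the generator-driving idea. This is not parsy's actual implementation; my_generate and the step callback are made-up names for illustration. The wrapper calls the generator function, and every time the generator yields something, the wrapper evaluates it and sends the result back in with send(), until the generator returns.

# Minimal sketch of the generator trick; my_generate and step are made-up
# names, not parsy's API. The wrapper drives the generator with send(),
# feeding the result of each yielded step back into the generator.
def my_generate(gen_fn):
    def run(step):
        gen = gen_fn()                           # fresh generator per run
        try:
            result = step(next(gen))             # evaluate the first yield
            while True:
                result = step(gen.send(result))  # send result back, evaluate next yield
        except StopIteration as stop:
            return stop.value                    # the generator's return value
    return run

# Toy usage: here "evaluating a step" just doubles the yielded number.
@my_generate
def demo():
    a = yield 5      # receives 10
    b = yield 7      # receives 14
    return a + b

print(demo(lambda v: v * 2))  # 24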
To create parameterized rules, you can apply @generate to an inner 0-argument function and return that:
def opP(symbol, f):
    @generate
    def op():
        a = yield numberP
        yield lex(pr.string(symbol))
        b = yield numberP
        return f(a, b)
    return op
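With this version, the call from the question works as written:

exp = opP('+', lambda x, y: x + y).parse('567 + 323')
print(exp)  # 890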
I'm looking for a nice functional way to do the following:
def add(x, y):
    return x + y

def neg(x):
    return -x

def c(x, y):
    # Apply neg to inputs for add
    _x = neg(x)
    _y = neg(y)
    return add(_x, _y)

neg_sum = c(2, 2)  # -4
It seems related to currying, but all of the examples I can find use functions that only have one input variable. I would like something that looks like this:
def add(x, y):
    return x + y

def neg(x):
    return -x

c = apply(neg, add)
neg_sum = c(2, 2)  # -4
This is a fairly direct way to do it:
def add(x, y):
    return x + y

def neg(x):
    return -x

def apply(g, f):
    # h is a function that returns
    # f(g(arg1), g(arg2), ...)
    def h(*args):
        return f(*map(g, args))
    return h

# or this:
# def apply(g, f):
#     return lambda *args: f(*map(g, args))

c = apply(neg, add)
neg_sum = c(2, 2)  # -4
Note that when you use *myvar as a parameter in a function definition, myvar becomes a tuple of all the positional arguments that are received. And if you call a function with *expression as an argument, then all the items in expression are unpacked and sent as separate arguments to the function. I use these two behaviors to make h accept an unknown number of arguments, then apply function g to each one (with map), then pass all of them as arguments to f.
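A quick illustration of those two behaviors (collect is just a throwaway example function):

def collect(*myvar):        # myvar gathers the positional arguments into a tuple
    return myvar

print(collect(1, 2, 3))     # (1, 2, 3)

args = (4, 5, 6)
print(collect(*args))       # (4, 5, 6) -- *args unpacks the tuple at the call site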
A different approach, depending on how extensible you need this to be, is to create an object which implements your operator methods, which each return the same object, allowing you to chain operators together in arbitrary orders.
If you can cope with it always returning a list, you might be able to make it work.
class mathifier:
    def __init__(self, values):
        self.values = values

    def neg(self):
        self.values = [-value for value in self.values]
        return self

    def add(self):
        self.values = [sum(self.values)]
        return self

print(mathifier([2, 3]).neg().add().values)
And you can still get your named function for any set of chained functions:
neg_add = lambda x : mathifier(x).neg().add()
print(neg_add([2,3]).values)
Following Matthias Fripp's answer, I asked myself: I'd like to compose add and neg both ways, add_neg(*args) and neg_add(*args). This requires adapting Matthias's suggestion a bit. The idea is to get some hint about the arity (number of arguments) of the functions to compose. This information is obtained with a bit of introspection, thanks to the inspect module. With this in mind, we adapt the way args are passed through the chain of functions. The main assumption here is that we deal with real functions in the mathematical sense, i.e. functions returning ONE float and taking at least one argument.
from functools import reduce
from inspect import getfullargspec

def arity_one(func):
    spec = getfullargspec(func)
    return len(spec[0]) == 1 and spec[1] is None

def add(*args):
    return reduce(lambda x, y: x + y, args, 0)

def neg(x):
    return -x

def compose(fun1, fun2):
    def comp(*args):
        if arity_one(fun2):
            return fun1(*map(fun2, args))
        else:
            return fun1(fun2(*args))
    return comp

neg_add = compose(neg, add)
add_neg = compose(add, neg)
print(f"-2+(-3) = {add_neg(2, 3)}")
print(f"-(2+3) = {neg_add(2, 3)}")
The solution is still very ad hoc...
In the code example below, I have two higher-level functions, factory1 and factory2, that produce a function with identical behavior. The first factory, factory1, avoids having to explicitly define two different functions by letting the returned function change behavior based on a boolean from the factory. The usefulness of this is not as obvious in this example, but if the function to be produced were more complex, it would be detrimental to both readability and maintainability to explicitly write out two almost identical copies of the function, as is done in factory2.
However, the factory2 implementation is faster, as can be seen by the timing results.
Is there a way to achieve the performance of factory2 without explicitly defining two alternative functions?
def factory1(condition):
    def fn():
        if condition:
            return "foo"
        else:
            return "bar"
    return fn

def factory2(condition):
    def foo_fn():
        return "foo"
    def bar_fn():
        return "bar"
    if condition:
        return foo_fn
    else:
        return bar_fn

def test1():
    fn = factory1(True)
    for _ in range(1000):
        fn()

def test2():
    fn = factory2(True)
    for _ in range(1000):
        fn()

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test1()", setup="from __main__ import test1"))
    # >>> 62.458039999
    print(timeit.timeit("test2()", setup="from __main__ import test2"))
    # >>> 49.203676939
EDIT: Some more context
The reason I am asking is that I am trying to produce a function that looks something like this:
def function(data):
    data = some_transform(data)
    if condition:
        # condition should be considered invariant at time of definition
        data = transform1(data)
    else:
        data = transform2(data)
    data = yet_another_transform(data)
    return data
Depending on what you mean by "explicitly defining two functions", note that you don't have to execute a def statement until after you check the condition:
def factory3(condition):
    if condition:
        def fn():
            return "foo"
    else:
        def fn():
            return "bar"
    return fn
One might object that this still has to compile two code objects before determining which one gets used to define the function at run-time. In that case, you might fall back on using exec on a dynamically constructed string. NOTE: This needs to be done carefully for anything other than the static example shown here. See the old definition of namedtuple for a good(?) example.
def factory4(condition):
    code = """def fn():\n    return "{}"\n""".format("foo" if condition else "bar")
    namespace = {}
    exec(code, namespace)   # bind fn in an explicit namespace (a bare exec won't create a local)
    return namespace['fn']
A safer alternative might be to use a closure:
def factory5(condition):
    def make_fun(val):
        def _():
            return val
        return _
    if condition:
        return make_fun("foo")
    else:
        return make_fun("bar")
make_fun can be defined outside of factory5 as well, as it doesn't rely on condition at all.
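For instance, the same factory could be written with the helper at module level (factory6 is just a name for this variant):

def make_fun(val):
    def _():
        return val
    return _

def factory6(condition):
    return make_fun("foo" if condition else "bar")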
Based on your edit, I think you are just looking to implement dependency injection. Don't put an if statement inside your function; pass transform1 or transform2 as an argument:
def function(transform):
    def _(data):
        data = some_transform(data)
        data = transform(data)
        data = yet_another_transform(data)
        return data
    return _

if condition:
    thing = function(transform1)
else:
    thing = function(transform2)
I am trying to use currying to make a simple functional add in Python. I found this curry decorator here.
def curry(func):
    def curried(*args, **kwargs):
        if len(args) + len(kwargs) >= func.__code__.co_argcount:
            return func(*args, **kwargs)
        return (lambda *args2, **kwargs2:
                curried(*(args + args2), **dict(kwargs, **kwargs2)))
    return curried

@curry
def foo(a, b, c):
    return a + b + c
Now this is great because I can do some simple currying:
>>> foo(1)(2, 3)
6
>>> foo(1)(2)(3)
6
But this only works for exactly three variables. How do I write the function foo so that it can accept any number of variables and still be able to curry the result? I've tried the simple solution of using *args but it didn't work.
Edit: I've looked at the answers but still can't figure out how to write a function that can perform as shown below:
>>> foo(1)(2, 3)
6
>>> foo(1)(2)(3)
6
>>> foo(1)(2)
3
>>> foo(1)(2)(3)(4)
10
Arguably, explicit is better than implicit:
from functools import partial

def example(*args):
    print("This is an example function that was passed:", args)

one_bound = partial(example, 1)
two_bound = partial(one_bound, 2)
two_bound(3)
@JohnKugelman explained the design problem with what you're trying to do: a call to the curried function would be ambiguous between "add more curried arguments" and "invoke the logic". The reason this isn't a problem in Haskell (where the concept comes from) is that the language evaluates everything lazily, so there isn't a distinction you can meaningfully make between "a function named x that accepts no arguments and simply returns 3" and "a call to the aforementioned function", or even between those and "the integer 3". Python isn't like that. (You could, for example, use a zero-argument call to signify "invoke the logic now"; but that would violate "special cases aren't special enough to break the rules" and require an extra pair of parentheses for simple cases where you don't actually want to do any currying.)
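For the sake of illustration, here is a rough sketch of what that rejected zero-argument-call convention could look like (curry_explicit is a made-up name, not an existing library function):

def curry_explicit(func):
    def accept(args):
        def call(*more):
            if not more:                 # empty call: invoke the logic now
                return func(*args)
            return accept(args + more)   # otherwise keep accumulating arguments
        return call
    return accept(())

@curry_explicit
def add(*args):
    return sum(args)

print(add(1)(2, 3)())  # 6 -- note the extra () even when you don't want currying
print(add(1)(2)(3)())  # 6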
functools.partial is an out-of-box solution for partial application of functions in Python. Unfortunately, repeatedly calling partial to add more "curried" arguments isn't quite as efficient (there will be nested partial objects under the hood). However, it's much more flexible; in particular, you can use it with existing functions that don't have any special decoration.
You can implement the same thing as the functools.partial example for yourself like this:
def curry(prior, *additional):
    def curried(*args):
        return prior(*(args + additional))
    return curried

def add(*args):
    return sum(args)

x = curry(add, 3, 4, 5)
y = curry(x, 100)
print(y(200))
# 312
It may be easier to think of curry as a function factory rather than a decorator; technically that's all a decorator does, but the decorator usage pattern is static, whereas a factory is something you expect to be invoking as part of a chain of operations.
You can see here that I'm starting with add as an argument to curry and not add(1) or something: the factory signature is <callable>, *<args>. That gets around the problem in the comments to the original post.
FACT 1: It is simply impossible to implement an auto-currying function for a variadic function.
FACT 2: You might not be searching for currying if you want the function that will be passed to it *to know* that it's going to be curried, so as to make it behave differently.
In case what you need is a way to curry a variadic function, you should go with something along the lines below (using your own snippet):
def curryN(arity, func):
    """curries a function with a pre-determined number of arguments"""
    def curried(*args, **kwargs):
        if len(args) + len(kwargs) >= arity:
            return func(*args, **kwargs)
        return (lambda *args2, **kwargs2:
                curried(*(args + args2), **dict(kwargs, **kwargs2)))
    return curried

def curry(func):
    """automatically curries a function"""
    return curryN(func.__code__.co_argcount, func)
this way you can do:
def summation(*numbers):
    return sum(numbers)

sum_two_numbers = curryN(2, summation)
sum_three_numbers = curryN(3, summation)
increment = curryN(2, summation)(1)
decrement = curryN(2, summation)(-1)
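For example, the curried helpers then behave as you would expect:

print(sum_two_numbers(1)(2))       # 3
print(sum_three_numbers(1)(2)(3))  # 6
print(increment(41))               # 42
print(decrement(43))               # 42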
I think this is a decent solution:
from copy import copy
import functools
import inspect

def curry(function):
    def inner(*args, **kwargs):
        partial = functools.partial(function, *args, **kwargs)
        signature = inspect.signature(partial.func)
        try:
            signature.bind(*partial.args, **partial.keywords)
        except TypeError:
            return curry(copy(partial))
        else:
            return partial()
    return inner
This just allows you to call functools.partial recursively in an automated way:
def f(x, y, z, info=None):
    if info:
        print(info, end=": ")
    return x + y + z

g = curry(f)
print(g)
print(g())
print(g(2))
print(g(2, 3))
print(g(2)(3))
print(g(2, 3)(4))
print(g(2)(3)(4))
print(g(2)(3, 4))
print(g(2, info="test A")(3, 4))
print(g(2, info="test A")(3, 4, info="test B"))
Outputs:
<function curry.<locals>.inner at 0x7f6019aa6f28>
<function curry.<locals>.inner at 0x7f6019a9a158>
<function curry.<locals>.inner at 0x7f6019a9a158>
<function curry.<locals>.inner at 0x7f6019a9a158>
<function curry.<locals>.inner at 0x7f6019a9a0d0>
9
9
9
test A: 9
test B: 9
I would like to do something like the following:
def getFunction(params):
    f = lambda x:
        do stuff with params and x
    return f
I get invalid syntax on this. What is the Pythonic/correct way to do it?
This way I can call f(x) without having to call f(x,params) which is a little more messy IMO.
A lambda expression is a very limited way of creating a function, you can't have multiple lines/expressions (per the tutorial, "They are syntactically restricted to a single expression"). However, you can nest standard function definitions:
def getFunction(params):
    def to_return(x):
        # do stuff with params and x
    return to_return
Functions are first-class objects in Python, so once defined you can pass to_return around exactly as you can with a function created using lambda, and either way they get access to the "closure" variables (see e.g. Why aren't python nested functions called closures?).
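For example, assuming a toy getFunction that simply adds params to x (an illustration only, not part of the original question), both spellings behave the same way and both close over params:

def get_adder_def(params):
    def to_return(x):
        return x + params       # closes over params
    return to_return

get_adder_lambda = lambda params: (lambda x: x + params)

print(get_adder_def(10)(5))     # 15
print(get_adder_lambda(10)(5))  # 15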
It looks like what you're actually trying to do is partial function application, for which functools provides a solution. For example, if you have a function multiply():
def multiply(a, b):
    return a * b
... then you can create a double() function[1] with one of the arguments pre-filled, like this:
from functools import partial
double = partial(multiply, 2)
... which works as expected:
>>> double(7)
14
[1] Technically a partial object, not a function, but it behaves in the same way.
You can't have a multiline lambda expression in Python, but you can return a lambda or a full function:
def get_function1(x):
    f = lambda y: x + y
    return f

def get_function2(x):
    def f(y):
        return x + y
    return f
I was wondering if it is possible in Python to do the following:
def func1(a, b):
    return func2(c, d)
What I mean is: suppose I do something with a and b that leads to some coefficients that can define a new function. I want to create this function, provided the operations with a and b are indeed possible, and be able to access it outside of func1.
An example would be a simple fourier series, F(x), of a given function f:
def fourier_series(f, N):
    # ...... math here ......
    return F(x)
What I mean by this is that I want to create and store this new function for later use; maybe I want to differentiate it, integrate it, plot it, or whatever else. I do not want to send the point(s) x for evaluation into fourier_series (or func1(...)); I simply mean that fourier_series creates a new function that takes a variable x, and this function can be called later outside, like y = F(3)... if I made myself clear enough?
You should be able to do this by defining a new function inline:
def fourier_series(f, N):
    def F(x):
        ...
    return F
You are not limited to the arguments you pass in to fourier_series:
def f(a):
    def F(b):
        return b + 5
    return F
>>> fun = f(10)
>>> fun(3)
8
You could use a lambda (although I like the other solutions a bit more, I think :) ):
>>> def func2(c, d):
...     return c, d
...
>>> def func1(a, b):
...     c = a + 1
...     d = b + 2
...     return lambda: func2(c, d)
...
>>> result = func1(1, 2)
>>> print result
<function <lambda> at 0x7f3b80a3d848>
>>> print result()
(2, 4)
>>>
While I cannot give you an answer specific to what you plan to do (the math looks out of my league), I can tell you that Python supports first-class functions.
Python can return functions from functions, store functions in collections such as lists, and generally treat them as you would any variable.
Cool things such as defining functions in other functions and returning functions are all possible.
>>> def func():
...     def func2(x, y):
...         return x * y
...     return func2
...
>>> x = func()
>>> x(1, 2)
2
Functions can be assigned to variables and stored in lists, they can be used as arguments for other functions and are as flexible as any other object.
If you define a function inside your outer function, you can use the parameters passed to the outer function in the definition of the inner function and return that inner function as the result of the outer function.
def outer_function(*args, **kwargs):
    def some_function_based_on_args_and_kwargs(new_func_param, new_func_other_param):
        # do stuff here
        pass
    return some_function_based_on_args_and_kwargs
I think what you want to do is:
def fourier_series(f, N):
    # ...... math here ......
    def F(x):
        # ... more math here ...
        import math  # blahblah, pseudo code
        return math.pi  # whatever you want to return from F
    if f + N == 2:  # pseudo, replace with a condition where f, N turn out to be useful
        return F
    else:
        return None
Outside, you can call this like:
F = fourier_series(a, b)
if F:
    ans = F(x)
else:
    print('Fourier is not possible :(')
The important things from Python's point of view are:
- Yes, you can write a function inside a function.
- Yes, you can return a function from a function. Just make sure to return it using return F (which returns the function object), as compared to return F(x), which calls the function and returns the value (see the small example below).
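A tiny illustration of that last point (make_square and make_nine are made-up names):

def make_square():
    def F(x):
        return x * x
    return F            # returns the function object itself

square = make_square()
print(square(3))        # 9 -- and square can keep being called later

def make_nine():
    def F(x):
        return x * x
    return F(3)         # calls F immediately and returns the value 9

print(make_nine())      # 9 -- but there is no function left to call later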
I was looking through some documentation and found this. Here is a snippet like your code:
def constant(a, b):
    def pair(f):
        return f(a, b)
    return pair

a = constant(1, 2)   # printing a displays "<function constant.<locals>.pair at 0x02EC94B0>"
a(lambda a, b: a)    # this returns 1, the first value passed to constant
Now, constant() takes in both a and b and returns the inner function pair, which itself takes in f and calls f with a and b.
This is called a "closure": the returned pair function remembers the values a and b from its enclosing scope even after constant() has returned.
You can define functions inside functions and return them (I think these are technically closures):

def make_f(a, b):
    def x(a, b):
        return a + b
    return x