Python Function Equivalent to * for Expanding Arguments? - python

Is there a function equivalent to the * symbol for expanding function arguments in Python? That's the entire question, but if you want an explanation of why I need it, continue reading.
In our code, we use tuples in certain places to define nested functions/conditions, to evaluate something like f(a, b, g(c, h(d))) at run time. The syntax is something like (fp = function pointer, c = constant):
nestedFunction = (fp1, c1, (fp2, c2, c3), (fp3,))
At run time, under certain conditions, that would be evaluated as:
fp1(c1, fp2(c2, c3), fp3())
Basically the first argument in each tuple is necessarily a function, the rest of the arguments in a tuple can either be constants or tuples representing other functions. The functions are evaluated from the inside out.
Anyways, you can see how the need for argument expansion, in the form of a function, could arise. And it turns out you cannot define something like:
def expand(myTuple):
    return *myTuple
I can work around it by defining my functions carefully, but argument expansion would be nice to not have to hack around the issue. And just FYI, changing this design isn't an option.

You'll need to write your own recursive function that applies arguments to functions in nested tuples:
def recursive_apply(*args):
    for e in args:
        yield e[0](*recursive_apply(*e[1:])) if isinstance(e, tuple) else e
then use that in your function call:
next(recursive_apply(nestedFunction))
The next() is required because recursive_apply() is a generator; you can wrap the next(recursive_apply(...)) expression in a helper function for ease of use. Here I bundled the recursive function into the local namespace:
def apply(nested_structure):
    def recursive_apply(*args):
        for e in args:
            yield e[0](*recursive_apply(*e[1:])) if isinstance(e, tuple) else e
    return next(recursive_apply(nested_structure))
Demo:
>>> def fp(num):
...     def f(*args):
...         res = sum(args)
...         print 'fp{}{} -> {}'.format(num, args, res)
...         return res
...     f.__name__ = 'fp{}'.format(num)
...     return f
...
>>> for i in range(3):
...     f = fp(i + 1)
...     globals()[f.__name__] = f
...
>>> c1, c2, c3 = range(1, 4)
>>> nestedFunction = (fp1, c1, (fp2, c2, c3), (fp3,))
>>> apply(nestedFunction)
fp2(2, 3) -> 5
fp3() -> 0
fp1(1, 5, 0) -> 6
6
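For comparison, the same inside-out evaluation can be written without generators. This is my own sketch (apply_nested, add and nested are made-up names, not part of the answer above):

```python
def apply_nested(spec):
    """Evaluate a (func, arg1, arg2, ...) tuple from the inside out."""
    if not isinstance(spec, tuple):
        return spec  # a plain constant passes through unchanged
    func, *rest = spec
    return func(*(apply_nested(arg) for arg in rest))

# hypothetical demo functions: each just sums its arguments
add = lambda *args: sum(args)
nested = (add, 1, (add, 2, 3), (add,))
print(apply_nested(nested))  # -> 6
```

Both versions implement the same recursion; this one just returns values directly instead of yielding them.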

Related

Function that returns a function that returns a function

I've been given this function. It returns the function pair, and pair in turn returns the result of calling the function f that gets passed to it, I think. That is the part that tricks me: I don't know what f(a, b) is or how to use it.
def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair
To help you understand what is going on, consider the following example:
def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair

def my_func(a, b):
    return a + b

# cons returns a function that takes a function arg and calls it with args (a, b),
# in this case (1, 3). Here a and b are closed-over (closure) variables.
apply_func = cons(1, 3)
print apply_func(my_func)  # prints 4
Let's analyse this from the inside out:
def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair
The innermost level is return f(a, b) - that obviously calls function f with arguments (a, b) and returns whatever the result of that is.
The next level is pair:
def pair(f):
    return f(a, b)
Function pair takes a function as an argument, calls that function with two arguments (a, b) and returns the result. For example:
def plus(x, y):
    return x + y

a = 7
b = 8
pair(plus)  # returns 15
The outermost level is cons - it constructs a version of pair that closes over particular values of a and b, and returns that version of pair. E.g.
pair_2_3 = cons(2, 3)
pair_2_3(plus)  # returns 5, because it calls plus(2, 3)
. . . I don't know what f(a, b) is and how to use it.
f(a, b) is simply a function call. All the code you provided does is define a function that returns a function. The function returned from the first function, itself returns a function. I assume the way it would be used is perhaps something like:
>>> cons(1, 2)(lambda x, y: x + y)
3
>>>
The above code would be equivalent to:
>>> pair_func = cons(1, 2) # return the `pair` function defined in `cons`
>>> f = lambda x, y: x + y
>>> pair_func(f) # apply the `f` function to the arguments passed into `cons`.
3
>>>
It might also help to note that the pair function defined in this case is what's known as a closure. Essentially, a closure is a function which has access to local variables from an enclosing function's scope, even after the enclosing function has finished executing. In your specific case, cons is the enclosing function, pair is the closure, and a and b are the variables the closure is accessing.
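To make the closure concrete, here is a small sketch (my own, reusing the cons from the question) that inspects the captured values directly via the function's __closure__ attribute:

```python
def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair

p = cons(1, 3)
# The closed-over values live in cells attached to the returned function:
print(sorted(c.cell_contents for c in p.__closure__))  # -> [1, 3]
print(p(lambda x, y: x + y))  # -> 4
```

Even though cons has already returned, pair still sees a=1 and b=3; that is exactly what makes it a closure.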
Well, if you could share the complete question then we might be able to help you better. Meanwhile, what I can tell you here is that in the return of pair(f) the program calls the function f, which takes two arguments a and b. f(a, b) is evaluated and its value is returned from pair(f).
Note that f here is simply pair's own parameter, so any callable that accepts two arguments can be passed in; there is no name clash and no error from calling it.

Default values for iterable unpacking

I've often been frustrated by the lack of flexibility in Python's iterable unpacking.
Take the following example:
a, b = range(2)
Works fine. a contains 0 and b contains 1, just as expected. Now let's try this:
a, b = range(1)
Now, we get a ValueError:
ValueError: not enough values to unpack (expected 2, got 1)
Not ideal, when the desired result was 0 in a, and None in b.
There are a number of hacks to get around this. The most elegant I've seen is this:
a, *b = function_with_variable_number_of_return_values()
b = b[0] if b else None
Not pretty, and could be confusing to Python newcomers.
So what's the most Pythonic way to do this? Store the return value in a variable and use an if block? The *varname hack? Something else?
As mentioned in the comments, the best way to do this is to simply have your function return a constant number of values and if your use case is actually more complicated (like argument parsing), use a library for it.
However, your question explicitly asked for a Pythonic way of handling functions that return a variable number of arguments and I believe it can be cleanly accomplished with decorators. They're not super common and most people tend to use them more than create them so here's a down-to-earth tutorial on creating decorators to learn more about them.
Below is a decorated function that does what you're looking for. The decorated function can return a variable number of values, and the result is padded with a default up to a fixed length so that iterable unpacking always succeeds.
def variable_return(max_values, default=None):
    # This decorator is somewhat more complicated because the decorator
    # itself needs to take arguments.
    def decorator(f):
        def wrapper(*args, **kwargs):
            actual_values = f(*args, **kwargs)
            try:
                # This will fail if `actual_values` is a single value,
                # such as a single integer or just `None`.
                actual_values = list(actual_values)
            except TypeError:
                actual_values = [actual_values]
            extra = [default] * (max_values - len(actual_values))
            actual_values.extend(extra)
            return actual_values
        return wrapper
    return decorator

# This would be a function that actually does something.
# It should not return more values than `max_values`.
@variable_return(max_values=3)
def ret_n(n):
    return list(range(n))

a, b, c = ret_n(1)
print(a, b, c)
a, b, c = ret_n(2)
print(a, b, c)
a, b, c = ret_n(3)
print(a, b, c)
Which outputs what you're looking for:
0 None None
0 1 None
0 1 2
The decorator basically takes the decorated function and returns its output along with enough extra values to fill in max_values. The caller can then assume that the function always returns exactly max_values number of arguments and can use fancy unpacking like normal.
Here's an alternative version of the decorator solution by @supersam654, using iterators rather than lists for efficiency:
def variable_return(max_values, default=None):
    def decorator(f):
        def wrapper(*args, **kwargs):
            actual_values = f(*args, **kwargs)
            count = 0  # in case f returns an empty iterable
            try:
                for count, value in enumerate(actual_values, 1):
                    yield value
            except TypeError:
                count = 1
                yield actual_values
            yield from [default] * (max_values - count)
        return wrapper
    return decorator
It's used in the same way:
@variable_return(3)
def ret_n(n):
    return tuple(range(n))

a, b, c = ret_n(2)
This could also be used with non-user-defined functions like so:
a, b, c = variable_return(3)(range)(2)
The shortest version known to me (thanks to @KellyBundy in the comments below):
a, b, c, d, e, *_ = *my_list_or_iterable, *[None]*5
Obviously it's possible to use a default value other than None if necessary.
Also, there is a nice feature in Python 3.10 (structural pattern matching) which comes in handy here when we know the possible numbers of arguments upfront - like when unpacking sys.argv.
Previous method:
import sys
_, x, y, z, *_ = *sys.argv, *[None]*3
New method:
import sys

match sys.argv[1:]:  # slice needed to drop the first value of sys.argv
    case [x]:
        print(f'x={x}')
    case [x, y]:
        print(f'x={x}, y={y}')
    case [x, y, z]:
        print(f'x={x}, y={y}, z={z}')
    case _:
        print('No arguments')

What is the fastest way to create a function if its creation depends upon some other input parameters?

I have an API that supports (unfortunately) too many ways for a user to give inputs. I have to create a function depending upon the type of the inputs. Once the function has been created (let's call it foo), it is run many times (around 10^7 times).
The user first gives 3 inputs - A, B and C - that tell us about the types of input given to our foo function. A can have three possible values, B can have four and C can have five, giving me 60 possible combinations.
Example - if A is a list then the first parameter for foo will be a type list and similarly for B and C.
Now I cannot make these checks inside the function foo for obvious reasons. Hence, I create a function create_foo that checks for all the possible combinations and returns a unique function foo. But the drawback with this approach is that I have to write the foo function 60 times for all the permutations.
An example of this approach -
def create_foo(A, B, C):
    if A == 'a1' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            ...
        return foo
    if A == 'a2' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            ...
        return foo
    if A == 'a3' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            ...
        return foo
    # ... (60 such branches in total) ...
    if A == 'a3' and B == 'b4' and C == 'c5':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            ...
        return foo
The foo function parses the parameters differently every time, but then it performs the same task after parsing.
Now I call the function
f = create_foo(A='a2',B='b3',C='c4')
Now foo is stored in f and f is called many times. Now f is very time efficient but the problem with this approach is the messy code which involves writing the foo function 60 times.
Is there a cleaner way to do this? I cannot compromise on performance, so the new method must not take more time to evaluate than this one.
Currying lambda functions to handle all this takes more time than the above method because of the extra function calls; I cannot afford that.
A, B and C are not used by the function itself; they are only used for deciding how to parse values, and they do not change after foo is created.
For example - if A is of type list, no change is required, but if it's a dict, foo needs to call a function that parses the dict to a list. A, B and C only tell us about the types of the parameters in *args.
It's not 100% clear what you want, but I'll assume A, B, and C are the same as the *args to foo.
Here are some thoughts toward possible solutions:
(1) I wouldn't take it for granted that your 60 if statements necessarily make an impact. If the rest of your function is at all computationally demanding, you might not even notice the proportional increase in running time from 60 if statements.
(2) You could parse/sanity-check the arguments once (slowly/inefficiently) and then pass the sanity-checked versions to a single function foo that runs 10^7 times.
(3) You could write a single function foo that handles every case, but only let it parse/sanity check arguments if a certain optional keyword is provided:
def foo(A, B, C, check=False):
    if check:
        pass  # TODO: implement sanity-checking on A, B and C here
    pass  # TODO: perform computationally-intensive stuff here
Call foo(..., check=True ) once. Then call it without check=True 10^7 times.
(4) If you want to pass around multiple versions of the same callable function, each version configured with different argument values pre-filled, that's what functools.partial is for. This may be what you want, rather than duplicating the same code 60 times.
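To illustrate point (4), here is a hedged sketch of functools.partial pre-filling a configuration argument; foo, foo_for_dicts and the 'dict'/'list' kinds are invented for illustration:

```python
from functools import partial

def foo(kind, data):
    # the real per-call work; `kind` configures how `data` is parsed
    if kind == 'dict':
        data = list(data.items())
    return len(data)

# configure once, call many times; no 60-way code duplication
foo_for_dicts = partial(foo, 'dict')
foo_for_lists = partial(foo, 'list')

print(foo_for_dicts({'a': 1, 'b': 2}))  # -> 2
print(foo_for_lists([1, 2, 3]))         # -> 3
```

Each partial object is an ordinary callable, so the hot loop can call it 10^7 times without re-checking the input types.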
Whoa whoa whoa. You're saying you're duplicating code SIXTY TIMES in your function? No, that will not do. Here's what you do instead.
def coerce_args(f):
    def wrapped(a, b, c):
        if isinstance(a, list):
            pass
        elif isinstance(a, dict):
            a = list(a.items())  # turn it into a list
        # etc. for each argument
        return f(a, b, c)
    return wrapped

@coerce_args
def foo(a, b, c):
    """foo will handle a, b, c in a CONCRETE form"""
    # do stuff
Essentially you're building one decorator that's going to change a, b, and c into a known format for foo to handle the business logic on. You've already built an API where it's acceptable to call this in a number of different ways (which is bad to begin with), so you need to support it. Doing so means internally treating it as the same way every time, and providing a helper to coerce that.
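As a quick, hypothetical check of how such a coercing decorator behaves (the foo body here is invented just to make the effect visible):

```python
def coerce_args(f):
    def wrapped(a, b, c):
        if isinstance(a, dict):
            a = list(a.items())  # normalize dicts to lists of pairs
        return f(a, b, c)
    return wrapped

@coerce_args
def foo(a, b, c):
    return len(a) + b + c  # a is guaranteed list-like in here

print(foo([1, 2], 10, 100))            # -> 112
print(foo({'k': 1, 'j': 2}, 10, 100))  # -> 112
```

Both call styles reach the same foo body in the same concrete form, which is the whole point of the decorator.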
Given your vague explanation, I will make some assumptions and work with them.
I will assume that A, B and C are mutually independent and that they are parsed to the same type; given that, I offer you this:
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        pass  # parse your A arguments here
    if parser_B is not None:
        pass  # parse your B arguments here
    if parser_C is not None:
        pass  # parse your C arguments here
    # do something

def foo_maker(A, B, C):
    parser_A = None
    parser_B = None
    parser_C = None
    if A == "list":  # or isinstance(A, list)
        pass
    elif A == "dict":  # or isinstance(A, dict)
        # put your parser here, for example:
        parser_A = lambda x: x.items()
    ...
    # the same with B and C
    return lambda *argv: _foo(*argv, parser_A=parser_A, parser_B=parser_B, parser_C=parser_C)
simple working example
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        argv = parser_A(argv)
    if parser_B is not None:
        argv = parser_B(argv)
    if parser_C is not None:
        argv = parser_C(argv)
    print(argv)

def foo_maker(A, B, C):
    pa = None
    pb = None
    pc = None
    if A == 1:
        pa = lambda x: (23, 32) + x
    if B == 2:
        pb = lambda x: list(map(str, x))
    if C == 3:
        pc = lambda x: set(x)
    return lambda *x: _foo(*x, parser_A=pa, parser_B=pb, parser_C=pc)
test
>>> f1=foo_maker(1,4,3)
>>> f2=foo_maker(1,2,3)
>>> f1(1,2,3,5,8)
{32, 1, 2, 3, 5, 8, 23}
>>> f2(1,2,3,5,8)
{'8', '23', '2', '3', '5', '1', '32'}
>>> f3=foo_maker(0,0,3)
>>> f3(1,2,3,5,8)
{8, 1, 2, 3, 5}
>>> f4=foo_maker(0,0,0)
>>> f4(1,2,3,5,8)
(1, 2, 3, 5, 8)
>>> f5=foo_maker(1,0,0)
>>> f5(1,2,3,5,8)
(23, 32, 1, 2, 3, 5, 8)
>>>
Because you don't specify what actually happens in each individual foo, there is no way to generalize how the foos are created. But there is a way to speed up the dispatch:
def create_foo(A, B, C, d={}):
    if len(d) == 0:
        def _111():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                ...
            return foo
        d[('a1', 'b1', 'c1')] = _111

        def _211():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                ...
            return foo
        d[('a2', 'b1', 'c1')] = _211

        # ....

        def _345():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                ...
            return foo
        d[('a3', 'b4', 'c5')] = _345
    return d[(A, B, C)]()
The _XXX builder functions are defined only on the first call to create_foo, not re-evaluated on every call. The default value of d is created once, when create_foo itself is defined, so after the first run the code inside the if statement is never entered again and the dictionary already holds every builder, ready to construct and return your foos.
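The same dispatch can also be built once at module level, avoiding the mutable-default-argument trick entirely. This is a sketch with placeholder builder names and bodies I made up:

```python
def _make_foo_a1b1c1():
    def foo(*args):
        # placeholder: parse args the a1/b1/c1 way, then do the real work
        return ('a1b1c1', args)
    return foo

def _make_foo_a2b1c1():
    def foo(*args):
        return ('a2b1c1', args)
    return foo

# built once at import time; each create_foo call is a single dict lookup
_BUILDERS = {
    ('a1', 'b1', 'c1'): _make_foo_a1b1c1,
    ('a2', 'b1', 'c1'): _make_foo_a2b1c1,
    # ... one entry per combination
}

def create_foo(A, B, C):
    return _BUILDERS[(A, B, C)]()

f = create_foo('a2', 'b1', 'c1')
print(f(1, 2))  # -> ('a2b1c1', (1, 2))
```

A missing combination raises KeyError immediately, which is arguably clearer than silently falling through 60 if statements.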
Edit: if the behavior in each foo is the same (as the edit of the question seems to suggest), then rather than passing types of foo's parameters in A,B and C, maybe its better to pass in conversion functions? Then the whole thing becomes:
def create_foo(L1, L2, L3):
    def foo(a, b, c):
        a = L1(a)
        b = L2(b)
        c = L3(c)
        # does something
    return foo
If you want to (for example) convert all 3 parameters to sets from lists, then you can call it as:
f = create_foo(set,set,set)
If you want foo to add 1 to 1st parameter, subtract 5 from second and multiply 3rd by 4, you'd say
f = create_foo(lambda x:x+1, lambda x:x-5, lambda x:x*4)

Creating a new function as return in python function?

I was wondering if it is possible in python to do the following:
def func1(a, b):
    return func2(c, d)
What I mean is: suppose doing something with a and b yields coefficients that define a new function. I want to create this function (when the operation on a and b is in fact possible) and be able to access it outside of func1.
An example would be a simple fourier series, F(x), of a given function f:
def fourier_series(f, N):
    # ...... math here ......
    return F(x)
What I mean by this is that I want to create and store this new function for later use - maybe I want to differentiate it, integrate it, plot it, or whatever else. I do not want to send the point(s) x for evaluation into fourier_series (or func1(..)); I simply want fourier_series to create a new function that takes a variable x, and this function can be called later, outside, like y = F(3)... have I made myself clear enough?
You should be able to do this by defining a new function inline:
def fourier_series(f, N):
    def F(x):
        ...
    return F
You are not limited to the arguments you pass in to fourier_series:
def f(a):
    def F(b):
        return b + 5
    return F
>>> fun = f(10)
>>> fun(3)
8
You could use a lambda (although I like the other solutions a bit more, I think :) ):
>>> def func2(c, d):
...     return c, d
...
>>> def func1(a, b):
...     c = a + 1
...     d = b + 2
...     return lambda: func2(c, d)
...
>>> result = func1(1, 2)
>>> print result
<function <lambda> at 0x7f3b80a3d848>
>>> print result()
(2, 4)
>>>
While I cannot give you an answer specific to what you plan to do (the math looks out of my league), I can tell you that Python supports first-class functions.
Python may return functions from functions, store functions in collections such as lists and generally treat them as you would any variable.
Cool things such as defining functions in other functions and returning functions are all possible.
>>> def func():
...     def func2(x, y):
...         return x * y
...     return func2
...
>>> x = func()
>>> x(1, 2)
2
Functions can be assigned to variables and stored in lists, they can be used as arguments for other functions and are as flexible as any other object.
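For example, storing functions in a list and applying them in sequence (a tiny made-up pipeline):

```python
def double(x):
    return 2 * x

def square(x):
    return x * x

pipeline = [double, square]  # functions stored in a list like any other value
value = 3
for step in pipeline:
    value = step(value)
print(value)  # double(3) = 6, then square(6) = 36
```

Reordering the list reorders the computation, with no changes to the functions themselves.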
If you define a function inside your outer function, you can use the parameters passed to the outer function in the definition of the inner function and return that inner function as the result of the outer function.
def outer_function(*args, **kwargs):
    def some_function_based_on_args_and_kwargs(new_func_param, new_func_other_param):
        # do stuff here
        pass
    return some_function_based_on_args_and_kwargs
I think what you want to do is:
def fourier_series(f, N):
    # ...... math here ......
    def F(x):
        # ... more math here ...
        import math  # blahblah, pseudo code
        return math.pi  # whatever you want to return from F
    if f + N == 2:  # pseudo, replace with a condition where f, N turn out to be useful
        return F
    else:
        return None
Outside, you can call this like:
F = fourier_series(a, b)
if F:
    ans = F(x)
else:
    print 'Fourier is not possible :('
The important thing from Python's point of view are:
Yes, you can write a function inside a function
Yes, you can return a function from a function. Just make sure to return it using return F (which returns the function object) as compared to return F(x) which calls the function and returns the value
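The second point is easy to demonstrate with a small made-up example (make_adder is my own name):

```python
def make_adder(n):
    def F(x):
        return x + n
    return F          # returns the function object itself, not a value

add5 = make_adder(5)  # add5 is a function, not a number
print(callable(add5))  # -> True
print(add5(3))         # -> 8
```

Had make_adder ended with `return F(x)`, it would have tried to call F immediately (and failed, since no x exists there); returning F hands back the function for later use.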
I was looking through some documentation and found this snippet, which is much like your code:
def constant(a, b):
    def pair(f):
        return f(a, b)
    return pair

a = constant(1, 2)   # printing a displays something like
                     # "<function constant.<locals>.pair at 0x02EC94B0>"
a(lambda a, b: a)    # this returns 1, the first value passed to constant
Now, the constant() function takes in both a and b and returns the inner function pair, which itself takes in a function f and calls f with a and b.
This is called a closure: the returned function keeps access to the variables of its enclosing scope.
You can define functions inside functions and return them (they are technically closures when they capture enclosing variables):
def make_f(a, b):
    def x(a, b):
        return a + b
    return x  # return the function itself; `return x(a, b)` would return a value instead

Assigning a name to a pattern-matched item in Python

I'm trying to do the following:
def f( a=(b,c,d) ):
    pass
But the interpreter complains that b is not defined when I do this. Is there a "Pythonic" way of getting the intended result? I know that I could do something like the following:
def f( (b,c,d) ):
    a = (b,c,d)
    pass
But I'd rather a solution that didn't require me to repeat myself. Any ideas?
Edit for clarification: What I am trying to do is have a function that can be called as follows:
f( (1,2,3) )
Then, within the body of the function, the following names are assigned:
a = (1,2,3)
b = 1
c = 2
d = 3
There is no way to do precisely what you want. Moreover, tuple unpacking in formal function arguments is going the way of the dodo in python 3. The suggested replacement is to change
def f((a,b,c)):
    ...
to
def f(a_b_c):
    a, b, c = a_b_c
    ...
(That's the style of new argument name the 2to3 conversion script would generate; obviously you can use whatever sort of name you want.)
In your case, the simplest thing to do would be this:
def f(a):
    b, c, d = a
    ...
That has the minimum repetition.
Instead of having the function accept a tuple, why not have the caller unpack the tuple?
>>> def f(a, b, c):
...     print a, b, c
...
>>> t = (1, 2, 3)
>>> f(*t)
1 2 3
Otherwise, consider collections.namedtuple.
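To illustrate the namedtuple suggestion (a sketch; Triple and f are names I made up), the tuple stays whole under one name while its parts get field names:

```python
from collections import namedtuple

Triple = namedtuple('Triple', ['b', 'c', 'd'])

def f(a):
    t = Triple(*a)  # a stays whole; t.b, t.c, t.d name its parts
    return t.b + t.c + t.d

print(f((1, 2, 3)))  # -> 6
```

This avoids repeating the field list in the function body, at the cost of one extra constructor call.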
