I want to find a clear and efficient way to change a parameter value set in a functools.partial.
Let's see a simple example:
from functools import partial

def fn(a, b, c, d, e):
    print(a, b, c, d, e)

fn12 = partial(fn, 1, 2)
Later, I want to have something like:
fn12 [0] = 7
to replace the value at a specific position without creating a new partial, because the surrounding code is pretty heavy.
Addition: I'm asking about the general possibility of changing a partial's values.
A naive example would be:
def printme(a, b, c, d, e):
    print(a, b, c, d, e)

class my_partial:
    def __init__(self, fn, *args):
        self.__func__ = fn
        self.args = list(args)

    def __call__(self, *next_args):
        call = self.args + list(next_args)
        return self.__func__(*call)

fn12 = my_partial(printme, 1, 2)
fn12(3, 4, 5)
fn12.args[1] = 7
fn12(3, 4, 5)
I need that, for example, for widgets, where an action function is defined like:
rb.config(command = partial(...))
but then I'd like to change some of the parameters given in the partial. I could create a new partial each time, but that looks kinda messy.
If it is permissible to look into the implementation of partial, then using __reduce__ and __setstate__ you can replace the args wholesale:
from functools import partial

def fn(a, b, c, d, e):
    print(a, b, c, d, e)

fn12 = partial(fn, 1, 2)

def replace_args(part, new_args):
    _, _, state = part.__reduce__()
    f, _, k, n = state
    part.__setstate__((f, new_args, k, n))

fn12('c', 'd', 'e')
replace_args(fn12, (7, 2))
fn12('c', 'd', 'e')
Output:
1 2 c d e
7 2 c d e
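For context, here is a quick sketch (my addition, not part of the original answer) showing why the __setstate__ detour is needed in the first place: partial exposes its bound positional arguments via .args, but in CPython that attribute is read-only, so it cannot simply be reassigned:

```python
from functools import partial

def fn(a, b, c, d, e):
    print(a, b, c, d, e)

fn12 = partial(fn, 1, 2)
print(fn12.args)        # (1, 2) -- the bound positional arguments

try:
    fn12.args = (7, 2)  # direct assignment is rejected by CPython
except AttributeError as exc:
    print('cannot assign:', exc)
```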
You can override partial parameters at call time. For example, if you have a function like this:
from functools import partial

def f(a, b):
    return a * b

func = partial(f, b=2)
func(1)  # result: 1*2 = 2
Now, you can update partial parameter b like this:
func(1, b=7) # result: 1*7=7
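Worth noting (a caveat of mine, not part of the answer above): this call-time override only works for arguments the partial binds by keyword. An argument bound positionally cannot be re-passed by keyword; doing so raises a TypeError:

```python
from functools import partial

def f(a, b):
    return a * b

# Keyword-bound argument: can be overridden at call time.
func = partial(f, b=2)
print(func(1, b=7))   # 1*7 = 7

# Positionally bound argument: re-passing it fails.
func_pos = partial(f, 1)   # binds a=1 positionally
try:
    func_pos(3, a=5)
except TypeError as exc:
    print(exc)             # got multiple values for argument 'a'
```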
Related
Suppose there is some code like:
def f(a, b):
    ...  # some code (no return)

def g(c, d, e):
    ...  # some code (no return)
Then, I want to make a function "merge_functions" such that
h = merge_functions(f, g)
works the same as:
def h(a, b, c, d, e):
    f(a, b)
    g(c, d, e)
There could be more or fewer parameters in f and g, and I want to keep the name of the parameters the same.
There is no default value in any parameters.
I have tried:
from inspect import signature

getarg = lambda func: list(signature(func).parameters)
to_arg_form = lambda tup: ', '.join(tup)

def merge_functions(f, g):
    fl = getarg(f)
    gl = getarg(g)
    result = eval(f"lambda {to_arg_form(fl + gl)}: {f.__name__}({to_arg_form(fl)}) and False or {g.__name__}({to_arg_form(gl)})")
    return result
However, I could only use this function in the same file, not as a module.
How can I make the function that can also be used as a module?
You can try something like the code below, which creates a third function that you can use everywhere:
def f1(x1, x2, **args):
    print(f"{x1} {x2}")

def f2(x1, x3, x4, **args):
    print(f"{x1} {x3} {x4}")

def merge(f1, f2):
    return lambda **args: (f1(**args), f2(**args))

f3 = merge(f1, f2)
f3(x1=1, x2=2, x3=3, x4=4)
OK, so I can make a function that calls the functions passed to it, in order, each with the right number of parameters.
I'm not sure that getting the names of the parameters is possible in the general case: there may be functions you want to compose that share parameter names.
from inspect import signature

def merge_functions(*funcs):
    def composite(*args):
        new_args = args[:]
        for f in funcs:
            sigs = signature(f).parameters
            fargs = new_args[:len(sigs)]
            new_args = new_args[len(sigs):]
            f(*fargs)
    return composite

def f(a, b):
    print(f'Inside f({a},{b})')

def g(c, d, e):
    print(f'Inside g({c},{d},{e})')

h = merge_functions(f, g)
h(1, 2, 3, 4, 5)
Output:
Inside f(1,2)
Inside g(3,4,5)
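Since merge_functions accepts *funcs, the same arity-driven dispatch extends to any number of functions. Here is a quick sketch (restating the definitions so it runs standalone, with a hypothetical third function k added by me):

```python
from inspect import signature

def merge_functions(*funcs):
    def composite(*args):
        remaining = args
        for f in funcs:
            # Each function consumes as many arguments as its signature declares.
            n = len(signature(f).parameters)
            f(*remaining[:n])
            remaining = remaining[n:]
    return composite

def f(a, b):
    print(f'Inside f({a},{b})')

def g(c, d, e):
    print(f'Inside g({c},{d},{e})')

def k(x):
    print(f'Inside k({x})')

h = merge_functions(f, g, k)
h(1, 2, 3, 4, 5, 6)
# Inside f(1,2)
# Inside g(3,4,5)
# Inside k(6)
```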
I'm looking for a nice functional way to do the following:
def add(x, y):
    return x + y

def neg(x):
    return -x

def c(x, y):
    # Apply neg to the inputs of add
    _x = neg(x)
    _y = neg(y)
    return add(_x, _y)

neg_sum = c(2, 2)  # -4
It seems related to currying, but all of the examples I can find use functions that take only one input variable. I would like something that looks like this:
def add(x, y):
    return x + y

def neg(x):
    return -x

c = apply(neg, add)
neg_sum = c(2, 2)  # -4
This is a fairly direct way to do it:
def add(x, y):
    return x + y

def neg(x):
    return -x

def apply(g, f):
    # h is a function that returns
    # f(g(arg1), g(arg2), ...)
    def h(*args):
        return f(*map(g, args))
    return h

# or, equivalently:
# def apply(g, f):
#     return lambda *args: f(*map(g, args))

c = apply(neg, add)
neg_sum = c(2, 2)  # -4
Note that when you use *myvar as a parameter in a function definition, myvar becomes a tuple of all the positional arguments received. And when you call a function with *expression as an argument, all the items in expression are unpacked and passed as separate arguments. I use these two behaviors to make h accept any number of arguments, apply g to each one (with map), and then pass all of them as arguments to f.
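A minimal illustration of the two behaviors described above:

```python
def show(*args):
    # *args packs all positional arguments into a tuple
    return args

nums = [1, 2, 3]
print(show(*nums))   # unpacked: same as show(1, 2, 3) -> (1, 2, 3)
print(show(nums))    # not unpacked: one single argument -> ([1, 2, 3],)
```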
A different approach, depending on how extensible you need this to be, is to create an object which implements your operator methods, which each return the same object, allowing you to chain operators together in arbitrary orders.
If you can cope with it always returning a list, you might be able to make it work.
class mathifier:
    def __init__(self, values):
        self.values = values

    def neg(self):
        self.values = [-value for value in self.values]
        return self

    def add(self):
        self.values = [sum(self.values)]
        return self

print(mathifier([2, 3]).neg().add().values)
And you can still get your named function for any set of chained functions:
neg_add = lambda x: mathifier(x).neg().add()
print(neg_add([2, 3]).values)
From Matthias Fripp's answer, I asked myself: I'd like to compose add and neg both ways: add_neg(*args) and neg_add(*args). This requires tweaking Matthias's suggestion a bit. The idea is to get a hint about the arity (number of arguments) of the functions to compose. This information is obtained with a bit of introspection, thanks to the inspect module. With this in mind, we adapt the way args are passed through the chain of functions. The main assumption here is that we deal with functions in the mathematical sense, i.e. functions returning ONE float and taking at least one argument.
from functools import reduce
from inspect import getfullargspec

def arity_one(func):
    spec = getfullargspec(func)
    return len(spec.args) == 1 and spec.varargs is None

def add(*args):
    return reduce(lambda x, y: x + y, args, 0)

def neg(x):
    return -x

def compose(fun1, fun2):
    def comp(*args):
        if arity_one(fun2):
            return fun1(*map(fun2, args))
        else:
            return fun1(fun2(*args))
    return comp

neg_add = compose(neg, add)
add_neg = compose(add, neg)
print(f"-2+(-3) = {add_neg(2, 3)}")
print(f"-(2+3) = {neg_add(2, 3)}")
The solution is still very adhoc...
I've often been frustrated by the lack of flexibility in Python's iterable unpacking.
Take the following example:
a, b = range(2)
Works fine. a contains 0 and b contains 1, just as expected. Now let's try this:
a, b = range(1)
Now, we get a ValueError:
ValueError: not enough values to unpack (expected 2, got 1)
Not ideal, when the desired result was 0 in a, and None in b.
There are a number of hacks to get around this. The most elegant I've seen is this:
a, *b = function_with_variable_number_of_return_values()
b = b[0] if b else None
Not pretty, and could be confusing to Python newcomers.
So what's the most Pythonic way to do this? Store the return value in a variable and use an if block? The *varname hack? Something else?
As mentioned in the comments, the best way to do this is to simply have your function return a constant number of values and if your use case is actually more complicated (like argument parsing), use a library for it.
However, your question explicitly asked for a Pythonic way of handling functions that return a variable number of values, and I believe it can be cleanly accomplished with decorators. They're not super common, and most people tend to use them more than create them, so here's a down-to-earth tutorial on creating decorators if you want to learn more about them.
Below is a decorator that does what you're looking for. The decorated function can return a variable number of values, and the result is padded up to a given length to better accommodate iterable unpacking.
def variable_return(max_values, default=None):
    # This decorator is somewhat more complicated because the decorator
    # itself needs to take arguments.
    def decorator(f):
        def wrapper(*args, **kwargs):
            actual_values = f(*args, **kwargs)
            try:
                # This will fail if `actual_values` is a single value,
                # such as a single integer or just `None`.
                actual_values = list(actual_values)
            except TypeError:
                actual_values = [actual_values]
            extra = [default] * (max_values - len(actual_values))
            actual_values.extend(extra)
            return actual_values
        return wrapper
    return decorator

# This would be a function that actually does something.
# It should not return more values than `max_values`.
@variable_return(max_values=3)
def ret_n(n):
    return list(range(n))
a, b, c = ret_n(1)
print(a, b, c)
a, b, c = ret_n(2)
print(a, b, c)
a, b, c = ret_n(3)
print(a, b, c)
Which outputs what you're looking for:
0 None None
0 1 None
0 1 2
The decorator basically takes the decorated function and returns its output along with enough extra values to fill in max_values. The caller can then assume that the function always returns exactly max_values number of arguments and can use fancy unpacking like normal.
Here's an alternative version of the decorator solution by @supersam654, using iterators rather than lists for efficiency:
def variable_return(max_values, default=None):
    def decorator(f):
        def wrapper(*args, **kwargs):
            actual_values = f(*args, **kwargs)
            count = 0  # guards against an empty iterable
            try:
                for count, value in enumerate(actual_values, 1):
                    yield value
            except TypeError:
                count = 1
                yield actual_values
            yield from [default] * (max_values - count)
        return wrapper
    return decorator
It's used in the same way:
@variable_return(3)
def ret_n(n):
    return tuple(range(n))

a, b, c = ret_n(2)
This could also be used with non-user-defined functions like so:
a, b, c = variable_return(3)(range)(2)
The shortest version known to me (thanks to @KellyBundy in the comments below):
a, b, c, d, e, *_ = *my_list_or_iterable, *[None]*5
Obviously, it's possible to use a default value other than None if necessary.
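For instance (my example, using a 'missing' sentinel instead of None):

```python
values = [1, 2]                          # fewer values than targets
a, b, c, *_ = *values, *['missing'] * 3  # pad with a custom default
print(a, b, c)                           # 1 2 missing
```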
Also, there is one nice feature in Python 3.10 that comes in handy here when we know the possible numbers of arguments upfront - like when unpacking sys.argv.
Previous method:
import sys
_, x, y, z, *_ = *sys.argv, *[None]*3
New method:
import sys

match sys.argv[1:]:  # slice needed to drop the first value of sys.argv
    case [x]:
        print(f'x={x}')
    case [x, y]:
        print(f'x={x}, y={y}')
    case [x, y, z]:
        print(f'x={x}, y={y}, z={z}')
    case _:
        print('No arguments')
Is there a function equivalent to the * symbol, for expanding function arguments, in Python? That's the entire question, but if you want an explanation of why I need it, continue reading.
In our code, we use tuples in certain places to define nested functions/conditions, to evaluate something like f(a, b, g(c, h(d))) at run time. The syntax is something like (fp = function pointer, c = constant):
nestedFunction = (fp1, c1, (fp2, c2, c3), (fp3,))
At run time, under certain conditions, that would be evaluated as:
fp1(c1, fp2(c2, c3), fp3())
Basically the first argument in each tuple is necessarily a function, the rest of the arguments in a tuple can either be constants or tuples representing other functions. The functions are evaluated from the inside out.
Anyways, you can see how the need for argument expansion, in the form of a function, could arise. And it turns out you cannot define something like:
def expand(myTuple):
    return *myTuple  # SyntaxError
I can work around it by defining my functions carefully, but argument expansion would be nice to not have to hack around the issue. And just FYI, changing this design isn't an option.
You'll need to write your own recursive function that applies arguments to functions in nested tuples:
def recursive_apply(*args):
    for e in args:
        yield e[0](*recursive_apply(*e[1:])) if isinstance(e, tuple) else e
then use that in your function call:
next(recursive_apply(nestedFunction))
The next() is required because recursive_apply() is a generator; you can wrap the next(recursive_apply(...)) expression in a helper function to ease use; here I bundled the recursive function in the local namespace:
def apply(nested_structure):
    def recursive_apply(*args):
        for e in args:
            yield e[0](*recursive_apply(*e[1:])) if isinstance(e, tuple) else e
    return next(recursive_apply(nested_structure))
Demo:
>>> def fp(num):
...     def f(*args):
...         res = sum(args)
...         print('fp{}{} -> {}'.format(num, args, res))
...         return res
...     f.__name__ = 'fp{}'.format(num)
...     return f
...
>>> for i in range(3):
...     f = fp(i + 1)
...     globals()[f.__name__] = f
...
>>> c1, c2, c3 = range(1, 4)
>>> nestedFunction = (fp1, c1, (fp2, c2, c3), (fp3,))
>>> apply(nestedFunction)
fp2(2, 3) -> 5
fp3() -> 0
fp1(1, 5, 0) -> 6
6
So I have a numeric value, and two functions can be applied to it.
Let's say I have the number 8.
I want to take its square and then its log, or first the log and then the square.
So my function looks something like this:
def transform(value, transformation_list):
    # value is an int
    # transformation_list = ["log", "square"] or ["square", "log"]
    # apply the square and log functions in the listed order
    return transformed_value
Now, if the first element of the transformation list is "square" and the second is "log", it should first execute square and then log. But if the first function in the list is "log" and the second is "square", it should apply log first and then square.
I don't want an if/else kind of thing, as it will get ugly as I add more transformations.
How should I design this?
Something like the following should work:
import math

func_dict = {'square': lambda x: x**2,
             'cube': lambda x: x**3,
             'log': math.log}

def transform(value, transformation_list):
    for func_name in transformation_list:
        value = func_dict[func_name](value)
    return value
For example:
>>> transform(math.e, ['cube', 'log', 'square'])
9.0
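One nice property of this table-driven design (a sketch of mine, restating the dictionary above): supporting a new transformation is a one-line dictionary update, with no new branching inside transform. The 'half' function below is a hypothetical addition for illustration:

```python
import math

func_dict = {'square': lambda x: x**2,
             'log': math.log}

def transform(value, transformation_list):
    # Look up each named function and apply them left to right.
    for func_name in transformation_list:
        value = func_dict[func_name](value)
    return value

# Registering a new transformation is just a dict update.
func_dict['half'] = lambda x: x / 2
print(transform(8, ['square', 'half']))   # (8**2) / 2 = 32.0
```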
Using this recipe for function composition (or alternatively, using the functional module), you can compose an arbitrary list of functions - no need to pass the names as strings, simply pass the functions:
from functools import reduce

class compose:
    def __init__(self, f, g, *args, **kwargs):
        self.f = f
        self.g = g
        self.pending = args[:]
        self.kwargs = kwargs.copy()

    def __call__(self, *args, **kwargs):
        return self.f(self.g(*args, **kwargs), *self.pending, **self.kwargs)

def transform(value, transformation_list, inverted=False):
    lst = transformation_list if inverted else reversed(transformation_list)
    return reduce(compose, lst)(value)
Now you can call transform like this:
from math import *
value = 2
transformation_list = [sin, sqrt, log]
transform(value, transformation_list)
> -0.047541518047580299
And the above will be equivalent to log(sqrt(sin(2))). If you need to invert the order of function application, for example sin(sqrt(log(2))), then do this:
transform(value, transformation_list, inverted=True)
> 0.73965300649866683
Also, you can define functions in-line. For instance, F.J's example would look like this using my implementation:
transform(e, [lambda x:x**3, log, lambda x:x**2])
> 9.0