python functools partial confusion

Consider the following:

from functools import partial

def add(a, b, c):
    return 100 * a + 10 * b + c

add_part = partial(add, c=2, b=1)
add_part(3)
# 312
Works fine. However:

def foo(x, y, z):
    return x + y + z

bar = partial(foo, y=3)
bar(1, 2)

barfs:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() got multiple values for argument 'y'
Obviously I am missing something obvious, but what?

The (roughly equivalent) definition of partial() given in the official functools documentation is:

def partial(func, /, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = {**keywords, **fkeywords}
        return func(*args, *fargs, **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc
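From this equivalent definition you can also see that call-time keyword arguments override the stored ones, since fkeywords is merged in last. A quick check, reusing the add example from the question:

```python
from functools import partial

def add(a, b, c):
    return 100 * a + 10 * b + c

add_part = partial(add, b=1, c=2)
# Call-time keywords are merged after the stored ones, so b=5 overrides b=1:
print(add_part(3, b=5))  # 352
print(add_part(3))       # 312
```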
That means that in your case partial() returns a wrapper around foo() whose effective signature is modified as follows:

>>> from inspect import signature
>>> signature(bar)
<Signature (x, *, y=3, z)>
To solve your error, you could pass keyword arguments to bar():

def foo(x, y, z):
    return x + y + z

bar = partial(foo, y=3)
bar(x=1, z=2)

From the Python 3.9 documentation:

Return a new partial object which when called will behave like func called with the positional arguments args and keyword arguments keywords. If more arguments are supplied to the call, they are appended to args. If additional keyword arguments are supplied, they extend and override keywords.

def foo(x, y, z):
    return x + y + z

bar = partial(foo, y=3)
print(bar.args)      # ()
print(bar.keywords)  # {'y': 3}

When bar(1, 2) is called, the call-time arguments (1, 2) are appended to bar.args (the stored bar.args itself stays ()), and bar.keywords is still {'y': 3}, so the underlying call is foo(1, 2, y=3).
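You can confirm this by making the equivalent direct call, which fails in exactly the same way:

```python
from functools import partial

def foo(x, y, z):
    return x + y + z

bar = partial(foo, y=3)

# bar(1, 2) appends (1, 2) after partial's stored positional args, so it is
# equivalent to calling foo(1, 2, y=3) directly:
try:
    foo(1, 2, y=3)
except TypeError as e:
    print(e)  # foo() got multiple values for argument 'y'
```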
Taking note of this, the next step is to refer to the "Function Calling Behavior" section of PEP 3102:
When a function is called, the input arguments are assigned to formal parameters as follows:
For each formal parameter, there is a slot which will be used to contain the value of the argument assigned to that parameter.
Slots which have had values assigned to them are marked as 'filled'. Slots which have no value assigned to them yet are considered 'empty'.
Initially, all slots are marked as empty.
Positional arguments are assigned first, followed by keyword arguments.
For each positional argument:
Attempt to bind the argument to the first unfilled parameter slot. If the slot is not a vararg slot, then mark the slot as 'filled'.
If the next unfilled slot is a vararg slot, and it does not have a name, then it is an error.
Otherwise, if the next unfilled slot is a vararg slot then all remaining non-keyword arguments are placed into the vararg slot.
For each keyword argument:
If there is a parameter with the same name as the keyword, then the argument value is assigned to that parameter slot. However, if the parameter slot is already filled, then that is an error.
Otherwise, if there is a 'keyword dictionary' argument, the argument is added to the dictionary using the keyword name as the dictionary key, unless there is already an entry with that key, in which case it is an error.
Otherwise, if there is no keyword dictionary, and no matching named parameter, then it is an error.
Finally:
If the vararg slot is not yet filled, assign an empty tuple as its value.
For each remaining empty slot: if there is a default value for that slot, then fill the slot with the default value. If there is no default value, then it is an error.
I'm not quite sure, though, whether the above is up to date or how to apply it in this situation, because I'm not sure whether bar has positional parameters or varargs.
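inspect's Signature.bind implements essentially these binding rules, so you can use it to watch the binding fail without actually calling foo. A sketch (note the error message from bind can differ from the one raised at call time, but the binding fails at the same step):

```python
from functools import partial
from inspect import signature

def foo(x, y, z):
    return x + y + z

bar = partial(foo, y=3)

sig = signature(bar)  # <Signature (x, *, y=3, z)>
try:
    # The second positional argument cannot bind: y is keyword-only here.
    sig.bind(1, 2)
except TypeError as e:
    print(e)
```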

Related

Check if function can be called with another function's arguments

How can I check whether a function can always be called with the same arguments as another function? For example, b below can be called with all arguments valid for a:

def a(a, b, c=None):
    pass

def b(a, *args, d=4, **kwargs):
    pass
The reason I want this is that I have a base function:

def f(a, b):
    print('f', a, b)

and a list of callbacks:

def g(b, a):
    print('g', a, b)

def h(*args, **kwargs):
    print('h', args, kwargs)

funcs = [g, h]

and a wrapper function that accepts anything:

def call(*args, **kwargs):
    f(*args, **kwargs)
    for func in funcs:
        func(*args, **kwargs)
Now I want to check if all callbacks will accept the arguments provided to call(), assuming they're valid for f().
For performance reasons, I don't want to check the arguments every time call() is called, but rather check each callback before adding it to the list of callbacks.
For example, these calls are OK:

call(1, 2)
call(a=1, b=3)

But this one should fail, because g takes its arguments in the wrong order:

call(1, b=3)
This took a bit of fun research, but I think I've covered the corner cases. A number of them arise from keeping things compatible with Python 2 while new syntax was added.
The most problematic part is that some named (keyword) parameters can be passed in as positional arguments, or be required, depending on the order they are passed in.
For more, see the comments.
The code below will ensure that function b can be called using any possible combination of valid arguments to function a (it does not imply the inverse).
Uncomment the try/except block to get a True/False result instead of an AssertionError.
import inspect

def check_arg_spec(a, b):
    """
    Attrs of the FullArgSpec object:
    sp.args = positional or legacy keyword arguments, keyword ones at the back
    sp.varargs = *args
    sp.varkw = **kwargs
    sp.defaults = default values for the legacy keyword arguments at the back of sp.args
    sp.kwonlyargs = keyword-only arguments (follow *args or *), must be passed by name
    sp.kwonlydefaults = {name: default_val, ...}
    sp.annotations -> currently not in use, except as a standard flag for outside applications

    Consume order:
    (1) positional arguments
    (2) legacy keyword argument = default (can be filled by either a keyword or a positional parameter)
    [
    (3) *args
        [
        (4) keyword-only arguments [=default]
        ]
    ]
    (5) **kwargs
    """
    a_sp = inspect.getfullargspec(a)
    b_sp = inspect.getfullargspec(b)
    kwdfb = b_sp.kwonlydefaults or {}
    kwdfa = a_sp.kwonlydefaults or {}
    kwddefb = b_sp.defaults or []
    kwddefa = a_sp.defaults or []
    # try:
    akwd = a_sp.kwonlyargs
    if len(kwddefa):
        akwd += a_sp.args[-len(kwddefa):]
    bkwd = b_sp.kwonlyargs
    if len(kwddefb):
        bkwd += b_sp.args[-len(kwddefb):]
    # All required arguments in b must have a name match in a's spec.
    for bkey in (set(b_sp.args) ^ set(bkwd)) & set(b_sp.args):
        assert bkey in a_sp.args
    # All required positionals in b can be met by a.
    assert (len(a_sp.args) - len(kwddefa)) >= (len(b_sp.args) - len(kwddefb))
    # If a has a *args spec, so must b.
    assert not (a_sp.varargs and b_sp.varargs is None)
    # If a does not take *args, the max number of positional args passed to a is
    # len(a_sp.args); b must accept at least this many positional args unless it
    # can consume *args.
    if b_sp.varargs is None:
        # If neither a nor b accepts *args, check that the total number of
        # positional plus py2-style keyword arguments in b's signature is at
        # least as many as a can send its way.
        assert len(a_sp.args) <= len(b_sp.args)
    # Keyword-only arguments of b without defaults are required and must be present in a.
    akws = set(a_sp.kwonlyargs) | set(a_sp.args[-len(kwddefa):] if kwddefa else [])
    for nmreq in (set(b_sp.kwonlyargs) ^ set(kwdfb)) & set(b_sp.kwonlyargs):
        assert nmreq in akws
    # If a and b both accept an arbitrary number of positional arguments, or if
    # b can but a cannot, no more checks are necessary here.
    # If a accepts arbitrary **kwargs, then so must b.
    assert not (a_sp.varkw and b_sp.varkw is None)
    if b_sp.varkw is None:
        # Neither a nor b can consume arbitrary keyword arguments, so b must be
        # able to consume every keyword that a can be called with.
        for akw in akwd:
            assert akw in bkwd
    # If b accepts **kwargs but a does not, there is no need to check further.
    # If both accept **kwargs, there is also no need to check further.
    # return True
    #
    # except AssertionError:
    #
    #     return False
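A lighter-weight alternative sketch (not the approach above, and can_accept is a hypothetical helper): since the poster already knows which calls are valid for f, representative sample calls can be probed against each callback's signature with Signature.bind, which raises TypeError when a call would not bind:

```python
import inspect

def can_accept(func, *args, **kwargs):
    # True if func(*args, **kwargs) would bind to func's signature without error.
    try:
        inspect.signature(func).bind(*args, **kwargs)
        return True
    except TypeError:
        return False

def g(b, a):
    print('g', a, b)

def h(*args, **kwargs):
    print('h', args, kwargs)

print(can_accept(g, 1, 2))    # True
print(can_accept(g, 1, b=3))  # False: b is already filled positionally
print(can_accept(h, 1, b=3))  # True
```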
Not sure what you are really looking for and I'm pretty sure your issue could be solved in a better way, but anyway:
from inspect import getargspec

def foo(a, b, c=None):
    pass

def bar(a, d=4, *args, **kwargs):
    pass

def same_args(func1, func2):
    return list(set(getargspec(func1)[0]).intersection(set(getargspec(func2)[0])))

print same_args(foo, bar)
# => ['a']
same_args just checks the argument names of func1 and func2 and returns a new list containing only the arguments common to both.
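On Python 3, where getargspec is deprecated (and removed in 3.11), a roughly equivalent sketch uses inspect.signature:

```python
import inspect

def foo(a, b, c=None):
    pass

def bar(a, d=4, *args, **kwargs):
    pass

def same_args(func1, func2):
    # Intersect the declared parameter names of both functions.
    return sorted(set(inspect.signature(func1).parameters)
                  & set(inspect.signature(func2).parameters))

print(same_args(foo, bar))  # ['a']
```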

Understanding decorators: return type is a function when argument not specified

I am using a single decorator for two separate functions: one with a decorator argument specified, and one without it.
When the optional argument is not passed, calling the decorated function returns a function (specifically, the inner_function from the decorator). However, when the optional argument is passed, it works as expected.
Can you explain what is going on here and why it acts differently in these cases?
def cache_disk(cache_folder="./cache"):
    def wrapper(f):
        def inner_function(*args, **kwargs):
            result = f(*args, **kwargs)
            return result
        return inner_function
    return wrapper

@cache_disk
def func1(data):
    return [d for d in data]

@cache_disk(cache_folder='./cache/')
def func2(data):
    return [d for d in data]

data = [1, 2, 3]
print(func1(data))
print(func2(data))
Result:
<function inner_function at 0x7f1f283d5c08>
[1, 2, 3]
Note that:

@decorator  # no arguments
def func(...):
    ...

is equivalent to:

def func(...):
    ...
func = decorator(func)  # one 'level' of calls

and that:

@decorator(...)  # arguments
def func(...):
    ...

is equivalent to:

def func(...):
    ...
func = decorator(...)(func)  # two 'levels' of calls
In the first case, there is a single argument to the decorator: func itself. In the second case, the arguments to the decorator are the ... from the @ line, and it is the function returned by the decorator that is called with func as an argument.
In your example,

@cache_disk
def func1(data):
    ...

the decorator cache_disk gets a single, callable argument (func1, which is bound to cache_folder) and returns wrapper. Then when you call:

print(func1(data))

wrapper gets a single argument (data, which becomes f) and returns inner_function, which is why the first print shows a function object.
Therefore, you have three choices:

Decorate func1 with @cache_disk() (note the parentheses), passing no arguments to cache_disk itself and func to wrapper;
Alter cache_disk to behave differently depending on whether it's passed a single, callable argument or something else; or
As @o11c pointed out in the comments, use e.g. cache_disk.wrapper = cache_disk() to provide a convenient alias for the parameter-less version, then decorate with @cache_disk.wrapper.
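A minimal sketch of the second choice, assuming the first configuration argument is never itself meant to be a callable (the caching logic is elided here just as in the question):

```python
import functools

def cache_disk(arg=None, cache_folder="./cache"):
    # Dual-mode decorator: detect whether we received the function itself
    # (used as @cache_disk) or configuration (used as @cache_disk(...)).
    def wrapper(f):
        @functools.wraps(f)
        def inner_function(*args, **kwargs):
            return f(*args, **kwargs)
        return inner_function
    if callable(arg):   # used as @cache_disk, arg is the decorated function
        return wrapper(arg)
    return wrapper      # used as @cache_disk() or @cache_disk(cache_folder=...)

@cache_disk
def func1(data):
    return [d for d in data]

@cache_disk(cache_folder='./cache/')
def func2(data):
    return [d for d in data]

print(func1([1, 2, 3]))  # [1, 2, 3]
print(func2([1, 2, 3]))  # [1, 2, 3]
```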
If you want default values, you need to call the function, which returns a decorator:

@cache_disk()
def func1(data):
    return [d for d in data]

How to understand python decorator arguments pass

I am trying to understand Python decorators:

def dec(func):
    def wrap(*args, **kw):
        print args, kw
        return func(*args, **kw)
    return wrap

@dec
def myfunc(a=1, b=2, c=3):
    return a + b + c

>>> myfunc()
() {}
6
>>> myfunc(1, 2, 3)
(1, 2, 3) {}
6
>>> myfunc(1, 2, c=5)
(1, 2) {'c': 5}
8
>>>
When I run myfunc(), args and kw are empty, but when I run myfunc(1, 2, 3) and myfunc(1, 2, c=5), args and kw are passed on to the wrap function.
As I know,

@dec
def myfunc(...)

equals myfunc = dec(myfunc) <-- no arguments are mentioned here.
How are arguments passed to the wrap function in dec? How should I understand this?
I'm not sure if I understand your problem correctly, but the default values for myfunc's arguments are known only to myfunc: your decorator has no knowledge of them, so it cannot print them.
That's why:

myfunc()

results in printing:

() {}

Both *args and **kw are empty for the decorator, but the decorated function will use the default values in this case.
In the second and third case you get the parameters printed, as they are explicitly passed to the decorated function:

def wrap(*args, **kw):        # <- this is what is actually called when you invoke the decorated function; no default values here
    print args, kw
    return func(*args, **kw)  # <- this is the function you decorate;
                              #    if func has default parameter values, those will be used
                              #    when you don't pass them to the wrapper, but only *inside* func
return wrap
Edit:
It looks like you're mistaking calling the decorated function for decorating the function:

myfunc = dec(myfunc)

decorates myfunc using dec and is equivalent to:

@dec
def myfunc(...)

On the other hand, after using either of them:

myfunc(whatever)

calls the wrap function defined in your decorator, which will in turn call the original myfunc.
Another way to think of it is by saying:

def wrapper(some_function):
    def _inner(*args, **kwargs):
        # do some work
        return some_function(*args, **kwargs)
    return _inner

@wrapper
def foo(a, b, c):
    return "Foo called, and I saw %d, %d, %d" % (a, b, c)
...you're getting a result which is roughly similar to the following:

def foo(a, b, c):
    # do some work
    return "Foo called, and I saw %d, %d, %d" % (a, b, c)

This isn't exactly right, because the "do some work" occurs before the actual foo() call, but as an approximation this is what you're getting. For that reason, the wrapper can't really 'see' the default arguments for foo(), if any exist. So a better way to think of it might be:

# always execute this code before calling...
def foo(a, b, c):
    return "Foo called and I saw %d, %d, %d" % (a, b, c)
So something really basic might look like this (the argument dictionary is named kw here to avoid shadowing the loop variable k):

>>> def wrap(f):
...     def _inner(*a, **kw):
...         new_a = (ar * 2 for ar in a)
...         new_kw = {}
...         for k, v in kw.iteritems():
...             new_kw[k] = v * 2
...         return f(*new_a, **new_kw)
...     return _inner
...
>>> def foo(a=2, b=4, c=6):
...     return "%d, %d, %d" % (a, b, c)
...
>>> foo()
'2, 4, 6'
>>> foo(1, 5, 7)
'1, 5, 7'
>>> foo = wrap(foo)  # actually wrapping it here
>>> foo()
'2, 4, 6'
>>> foo(3, 5, 6)
'6, 10, 12'
>>> foo(3, 5, c=7)
'6, 10, 14'
>>>
Decorators are function wrappers. They give back a function that wraps the original one into some pre- and post-processing code, but still need to call the original function (normally with the same argument as you would call it in absence of a decorator).
Python has two kinds of arguments, positional and keyword arguments (this has nothing to do with decorators; it's generic Python basics). * is for positional arguments (internally a tuple), ** for keyword arguments (a dictionary). By specifying both, you allow your decorator to accept all possible combinations of arguments and pass them through to the underlying function. The contract of the call is, however, still defined by your function: e.g. if it only takes keyword arguments, it will fail when the decorator passes through a positional argument.
In your particular example, you have some pre-processing code (i.e. code that will run before the original function is called). For example, in this code you can print out positional arguments in *args that your original function might fail to accept altogether, because it does not take any positional arguments.
You do not necessarily have to pass *args and **kwargs through unchanged. In fact, you can define a decorator which makes some decisions, based on the arguments you pass in, about what to provide to the original function as arguments, e.g.
def dec(fun):
    def wrap(*args, **kwargs):
        if 'a' in kwargs:
            return fun(kwargs['a'])
        else:
            return fun(*args)
    return wrap
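For instance, a hypothetical use of that dec sketch (identity is an example function, not from the question):

```python
def dec(fun):
    def wrap(*args, **kwargs):
        # If 'a' was passed by keyword, forward only that value positionally;
        # otherwise forward the positional arguments unchanged.
        if 'a' in kwargs:
            return fun(kwargs['a'])
        else:
            return fun(*args)
    return wrap

@dec
def identity(x):
    return x

print(identity(5))    # 5
print(identity(a=7))  # 7
```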

Beazley 4e P.E.R: square(x) but he passes **kwargs within the wrapper?

This example is taken from Beazley, Python Essential Reference, 4th ed., p. 101.
How is he doing:

func(*args, **kwargs)

where func is the square function, which takes one argument? Earlier in the chapter he squawks about how the position and number of arguments must match in a call/definition or a TypeError will be raised.
Also,

@trace
def square(x):
    ...

square = trace(square)

trace returns callf, so this is equivalent to writing square = callf, which is fine: since square now refers to a new function object, you can call it with *args, **kwargs. But then in callf he does func(*args, ...). Given that we just made square refer to some other object, how is the original square accessible inside? What mechanism is coming into play?
@trace
def square(x):
    return x * x

enable_tracing = True
if enable_tracing:
    debug_log = open("debug.log", "w")

def trace(func):
    if enable_tracing:
        def callf(*args, **kwargs):
            debug_log.write("Calling %s: %s, %s\n" %
                            (func.__name__, args, kwargs))
            r = func(*args, **kwargs)  # ????????
            debug_log.write("%s returned %s\n" % (func.__name__, r))
            return r
        return callf
    else:
        return func
The *-prefix means, "Use this sequence of values as the positional parameters to the function." The **-prefix means, "Use this dictionary as the named parameters to the function." If the sequence is empty, then no positional parameters are passed. If the dictionary is empty, then no named parameters are passed.
When you define a function with those prefixes, then the unaccounted for positional parameters go into the *-prefixed argument, and the unaccounted for named parameters go into the **-prefixed argument. So if you define a function like this:
def wrapper(*args, **kwargs):
then the function can be invoked with any arguments whatsoever. If that function then calls another function with those arguments, then it will be called however the wrapper was called.
Note that you can call a function with (*args, **kwargs) even if it wasn't defined that way:
>>> def square(x):
... return x*x
...
>>> args = (10,)
>>> kwargs = {}
>>> square(*args, **kwargs)
100
Because kwargs is empty, no named parameters are passed to the function. It gets only the one positional argument in args.
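As for how the original square stays reachable after the rebinding: callf is a closure over the func parameter of trace, so it keeps its own private reference to the original function object no matter what the name square is later rebound to. A minimal sketch:

```python
def trace(func):
    def callf(*args, **kwargs):
        # func here is the closure variable captured when trace() was called;
        # it still refers to the original function object.
        return func(*args, **kwargs)
    return callf

def square(x):
    return x * x

square = trace(square)  # the *name* square now refers to callf
print(square(10))       # 100: callf still reaches the original via its closure
print(square.__closure__ is not None)  # True
```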

How can I treat positional arguments as keyword arguments in Python 2

For a decorator I am writing, I would like to manipulate a specific named parameter of a function. Consider the following decorator:

def square_param(param):
    def func_decorator(func):
        def func_caller(*args, **kwargs):
            kwargs[param] = kwargs[param] * kwargs[param]
            return func(*args, **kwargs)
        return func_caller
    return func_decorator

Applied to the following function:

@square_param('dividend')
def quotient(divisor=1, dividend=0):
    return dividend / divisor
This will work if dividend is passed as a keyword argument, e.g.:

>>> quotient(dividend=2)
4

However, when given as a positional argument this will fail:

>>> quotient(3, 4)
TypeError: quotient() got multiple values for keyword argument 'dividend'
With Python 3 I could solve this by forcing the parameter to always be given as a keyword:

@square_param('dividend')
def quotient(divisor=1, *, dividend=0):
    return dividend / divisor

but I would like to support Python 2, and I would also like to put as few restrictions as possible on the function.
Is there a way to fix this behaviour in my decorator?
Firstly, your square_param decorator doesn't work, because it doesn't return the functions. It needs to be:

def square_param(param):
    def func_decorator(func):
        def func_caller(*args, **kwargs):
            kwargs[param] = kwargs[param] * kwargs[param]
            return func(*args, **kwargs)
        return func_caller
    return func_decorator
Now, I took @Dirk's advice and looked into the inspect module. You can do it by checking first whether the parameter is one of the function's positional arguments, and second whether that positional argument has been supplied, and then modifying that argument position. You also need to make sure you only modify kwargs if the parameter was supplied as a keyword argument.
import inspect

def square_param(param):
    def func_decorator(func):
        def func_caller(*args, **kwargs):
            funparams = inspect.getargspec(func).args
            if param in funparams:
                i = funparams.index(param)
                if len(args) > i:
                    args = list(args)  # make mutable
                    args[i] = args[i] * args[i]
            if param in kwargs:
                kwargs[param] = kwargs[param] * kwargs[param]
            return func(*args, **kwargs)
        return func_caller
    return func_decorator
Even without using inspect, we can get a function's parameters:

>>> func = lambda x, y, args: (x, y, {})
>>> func.func_code.co_argcount
3
>>> func.func_code.co_varnames
('x', 'y', 'args')
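In Python 3 the same code-object attributes are reached via __code__ instead of func_code:

```python
func = lambda x, y, args: (x, y, {})
# Python 2's func.func_code became func.__code__ in Python 3.
print(func.__code__.co_argcount)  # 3
print(func.__code__.co_varnames)  # ('x', 'y', 'args')
```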
This may only be tangentially related, but I found it useful for solving a similar problem. I wanted to meld *args and **kwargs into a single dictionary so that the code that followed could proceed without regard to how the args came in, and I didn't want to mutate the existing kwargs variable (otherwise I would just have used kwargs.update()):

all_args = {**kwargs, **{k: v for k, v in zip(list(inspect.signature(func).parameters), args)}}
# optionally delete `self`
del all_args['self']
Update: while this works, another answer has a better technique. In part:

bound_args = inspect.signature(f).bind(*args, **kwargs)
bound_args.apply_defaults()
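Putting the bind technique together with the original goal, a Python 3-only sketch (this square_param is a hypothetical rewrite, not the code from the answers above):

```python
import inspect

def square_param(param):
    def func_decorator(func):
        sig = inspect.signature(func)
        def func_caller(*args, **kwargs):
            # bind() resolves the parameter whether it arrived positionally
            # or by keyword; apply_defaults() fills in the rest.
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            bound.arguments[param] = bound.arguments[param] ** 2
            return func(*bound.args, **bound.kwargs)
        return func_caller
    return func_decorator

@square_param('dividend')
def quotient(divisor=1, dividend=0):
    return dividend / divisor

print(quotient(dividend=2))  # 4.0
print(quotient(3, 4))        # 16 / 3, no TypeError this time
```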
