I have a base decorator that takes arguments and that other decorators build upon. I can't seem to figure out where to put the functools.wraps in order to preserve the full signature of the decorated function.
import inspect
from functools import wraps

# Base decorator
def _process_arguments(func, *indices):
    """ Apply the pre-processing function to each selected parameter """
    @wraps(func)
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            params = inspect.getargspec(f)[0]
            args_out = list()
            for ind, arg in enumerate(args):
                if ind in indices:
                    args_out.append(func(arg))
                else:
                    args_out.append(arg)
            return f(*args_out)
        return wrapped_f
    return wrap

# Function that will be used to process each parameter
def double(x):
    return x * 2

# Decorator called by end user
def double_selected(*args):
    return _process_arguments(double, *args)

# End-user's function
@double_selected(2, 0)
def say_hello(a1, a2, a3):
    """ doc string for say_hello """
    print('{} {} {}'.format(a1, a2, a3))

say_hello('say', 'hello', 'arguments')
The result of this code should be and is:
saysay hello argumentsarguments
However, running help on say_hello gives me:
say_hello(*args, **kwargs)
    doc string for say_hello
Everything is preserved except the parameter names.
It seems like I just need to add another @wraps() somewhere, but where?
I experimented with this:
>>> from functools import wraps
>>> def x(): print(1)
...
>>> @wraps(x)
... def xyz(a,b,c): return x
>>> xyz.__name__
'x'
>>> help(xyz)
Help on function x in module __main__:
x(a, b, c)
AFAIK, this has nothing to do with wraps itself; it's an issue related to help. help inspects your objects to provide its information, including __doc__ and other attributes, which is why you get this behavior even though your wrapper takes a different argument list. wraps doesn't update the argument list automatically; what it really updates are the attributes in this tuple, plus __dict__, which is technically the object's namespace:
WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__qualname__', '__doc__',
                       '__annotations__')
WRAPPER_UPDATES = ('__dict__',)
If you aren't sure how wraps works, it will probably help to read its source code in the standard library: functools.py.
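To get a feel for what that source does, here is a simplified sketch of update_wrapper (which wraps delegates to); the real functools version has a few more details, but the copying logic is essentially this:

from functools import WRAPPER_ASSIGNMENTS, WRAPPER_UPDATES

def simplified_update_wrapper(wrapper, wrapped):
    # copy __module__, __name__, __qualname__, __doc__ and __annotations__ ...
    for attr in WRAPPER_ASSIGNMENTS:
        try:
            setattr(wrapper, attr, getattr(wrapped, attr))
        except AttributeError:
            pass
    # ... and merge the wrapped function's __dict__ into the wrapper's
    for attr in WRAPPER_UPDATES:
        getattr(wrapper, attr).update(getattr(wrapped, attr, {}))
    # the real update_wrapper also records the original as __wrapped__
    wrapper.__wrapped__ = wrapped
    return wrapper

Note that nothing here touches __code__, which is where the argument list lives.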
It seems like I just need to add another @wraps() somewhere, but where?
No, you don't need to add another wraps to your code; as stated above, help works by inspecting your objects. A function's arguments are associated with its code object (__code__), because that is where they are stored/represented, and wraps has no way to make the wrapper's argument list match the wrapped function's (continuing with the above example):
>>> xyz.__code__.co_varnames
('a', 'b', 'c')
>>> xyz.__code__.co_varnames = x.__code__.co_varnames
AttributeError: readonly attribute
If help displayed xyz's argument list as () instead of (a, b, c), that would clearly be wrong! And the same applies to wraps: changing the wrapper's argument list to match the wrapped function's would be cumbersome, so this should not be a concern at all.
>>> @wraps(x, ("__code__",))
... def xyz(a,b,c): pass
...
>>> help(xyz)
Help on function xyz in module __main__:
xyz()
But calling xyz() now runs x's code:
>>> xyz()
1
For other references, take a look at this related question or the Python documentation: What does functools.wraps do?
direprobs was correct: no amount of functools.wraps would get me there. bravosierra99 pointed me to somewhat related examples. However, I couldn't find a single example of signature preservation on nested decorators in which the outer decorator takes arguments.
The comments on Bruce Eckel's post on decorators with arguments gave me the biggest hints in achieving my desired result.
The key was removing the middle function from within my _process_arguments function and placing its parameter in the next, nested function. It kind of makes sense to me now... but it works:
import inspect
from decorator import decorator  # third-party "decorator" package, not the standard library

# Base decorator
def _process_arguments(func, *indices):
    """ Apply the pre-processing function to each selected parameter """
    @decorator
    def wrapped_f(f, *args):
        params = inspect.getargspec(f)[0]
        args_out = list()
        for ind, arg in enumerate(args):
            if ind in indices:
                args_out.append(func(arg))
            else:
                args_out.append(arg)
        return f(*args_out)
    return wrapped_f

# Function that will be used to process each parameter
def double(x):
    return x * 2

# Decorator called by end user
def double_selected(*args):
    return _process_arguments(double, *args)

# End-user's function
@double_selected(2, 0)
def say_hello(a1, a2, a3):
    """ doc string for say_hello """
    print('{} {} {}'.format(a1, a2, a3))

say_hello('say', 'hello', 'arguments')
print(help(say_hello))
And the result:
saysay hello argumentsarguments
Help on function say_hello in module __main__:

say_hello(a1, a2, a3)
    doc string for say_hello
Related
I have a class. This class has a list of functions that are to be evaluated by a different program.
class SomeClass(object):
    def __init__(self, context):
        self.functions_to_evaluate = []
There is a function that adds functions to an instance of SomeClass, via something like:
new_function = check_number(5)
SomeClassInstance.functions_to_evaluate.append(new_function)
Where check_number is a function that will check whether a number is greater than 10, let's say.
If I take SomeClassInstance.functions_to_evaluate and print it, I get a bunch of python objects, like so:
<some_library.check_number object at 0x07B35B90>
I am wondering if it is possible for me to extract the input given to check_number, via something like SomeClassInstance.functions_to_evaluate[0].python_feature(), which would return 5 or whatever the input to check_number was.
You can use the standard library functools.partial, which creates a new partially applied function *.
>>> from functools import partial
>>> def check_number(input):
... return input > 10
>>> fn = partial(check_number, 5)
>>> fn.args # this attribute gives you back the bound arguments, as a tuple.
(5,)
>>> fn() # calls the function with the bound arguments.
False
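The partial object also keeps the function itself and any keyword arguments bound at creation time, in its func and keywords attributes:

>>> fn.func       # the wrapped function itself
<function check_number at 0x...>
>>> fn.keywords   # keyword arguments bound with partial(..., key=value), as a dict
{}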
*: actually the partial object is not a function instance, but it is callable, and from a duck-typing perspective it's a function.
If new_function = check_number(5) is a closure, then you can extract this value using __closure__[0].cell_contents:
Example:
def foo(x):
    def inn(y):
        return x
    return inn

s = foo(5)
print(s.__closure__[0].cell_contents)
Output:
5
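If the closure captures several variables, __closure__ holds one cell per free variable, in the order given by __code__.co_freevars. A small sketch pairing names with values:

def foo(x, y):
    def inn():
        return x + y
    return inn

s = foo(5, 6)
print(dict(zip(s.__code__.co_freevars,
               (c.cell_contents for c in s.__closure__))))
# -> {'x': 5, 'y': 6}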
I understand your confusion, but:
new_function = check_number(5)
is calling the function, and the new_function variable gets assigned the return value of the function.
If you have this check_number function:
def check_number(input):
    return input > 10
Then it will return False, and new_function will be False. Never <some_library.check_number object at 0x07B35B90>.
If you're getting <some_library.check_number object at 0x07B35B90> then your check_number() function is returning something else.
There are probably several ways to skin this cat. But I'd observe first and foremost that you're not adding Python function objects to the functions_to_evaluate list, you're adding the results of evaluating those functions.
You could simply add a tuple of function, args to the list:
SomeClassInstance.functions_to_evaluate.append((check_number, 5))
And then you can:
for f, args in SomeClassInstance.functions_to_evaluate:
    print(args)
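Evaluating them later is then just as direct. A minimal sketch, assuming each entry is a (function, single_argument) pair as above:

for f, arg in SomeClassInstance.functions_to_evaluate:
    print(f(arg))   # e.g. check_number(5) -> False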
[I am using python 2.7]
I wanted to make a little wrapper function that adds one output to a function. Something like:
def add_output(fct, value):
    return lambda *args, **kargs: (fct(*args, **kargs), value)
Example of use:
def f(a): return a+1

g = add_output(f, 42)
print g(12)   # print: (13, 42)
This is the expected result, but it does not work if the function given to add_output returns more than one output (nor if it returns no output). In that case, the wrapped function returns two outputs: one containing all the outputs of the initial function (or None if it returns no output), and one with the added output:
def f1(a): return a, a+1
def f2(a): pass

g1 = add_output(f1, 42)
g2 = add_output(f2, 42)
print g1(12)   # print: ((12, 13), 42) instead of (12, 13, 42)
print g2(12)   # print: (None, 42) instead of 42
I can see this is related to the impossibility of distinguishing between one output of type tuple and several outputs. But it is disappointing not to be able to do something so simple in a dynamic language like Python...
Does anyone have an idea of a way to achieve this automatically and nicely enough, or am I in a dead end?
Note:
In case this changes anything, my real purpose is wrapping class (instance) methods to look like functions (for workflow stuff). However, it is required to add self to the output (in case its contents are changed):
class C(object):
    def f(self): return 'foo', 'bar'

def wrap(method):
    return lambda self, *args, **kargs: (self, method(self, *args, **kargs))

f = wrap(C.f)
c = C()
f(c)   # returns (c, ('foo', 'bar')) instead of (c, 'foo', 'bar')
I am working with Python 2.7, so I want a solution for this version or else I'll abandon the idea. I am still interested (and maybe future readers will be too) in comments about this issue for Python 3, though.
Your add_output() function is what is called a decorator in Python. Regardless, you can use one of the collections module's ABCs (Abstract Base Classes) to distinguish between the different kinds of results from the function being wrapped. For example:
import collections

def add_output(fct, value):
    def wrapped(*args, **kwargs):
        result = fct(*args, **kwargs)
        if isinstance(result, collections.Sequence):
            return tuple(result) + (value,)
        elif result is None:
            return value
        else:  # non-None and non-sequence
            return (result, value)
    return wrapped

def f1(a): return a, a+1
def f2(a): pass

g1 = add_output(f1, 42)
g2 = add_output(f2, 42)
print g1(12)   # -> (12, 13, 42)
print g2(12)   # -> 42
Depending on what sort of functions you plan on decorating, you might need to use the collections.Iterable ABC instead of, or in addition to, collections.Sequence.
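One caveat if you go the Sequence route: strings are sequences too, so a wrapped function that returns a string would get splatted character by character:

>>> g = add_output(lambda a: "hi", 42)
>>> g(12)   # str is a Sequence, so it is unpacked character by character
('h', 'i', 42)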
I tried to reimplement something like partial (which will later have more behavior). Now, in the following example, lazycall1 seems to work just as well as lazycall2, so I don't understand why the documentation of partial suggests using the longer second version. Any suggestions? Can it get me in trouble?
def lazycall1(func, *args, **kwargs):
    def f():
        func(*args, **kwargs)
    return f

def lazycall2(func, *args, **kwargs):
    def f():
        func(*args, **kwargs)
    f.func = func   # why do I need that?
    f.args = args
    f.kwargs = kwargs
    return f

def A(x):
    print("A", x)

def B(x):
    print("B", x)

a1 = lazycall1(A, 1)
b1 = lazycall1(B, 2)
a1()
b1()

a2 = lazycall2(A, 3)
b2 = lazycall2(B, 4)
a2()
b2()
EDIT: Actually the answers given so far aren't quite right. Even with arguments supplied at both creation time and call time it would work. Is there another reason?
def lazycall(func, *args):
    def f(*args2):
        return func(*(args + args2))
    return f

def sum_up(a, b):
    return a+b

plusone = lazycall(sum_up, 1)
plustwo = lazycall(sum_up, 2)
print(plusone(6))   # 7
print(plustwo(9))   # 11
The only extra thing the second form has is some extra attributes. These might be helpful if you start passing around the functions returned by lazycall2, so that the receiving code can make decisions based on their values.
functools.partial can accept additional arguments - or overriding ones - in the inner, returned function. Your inner f() functions don't, so there's no need for what you're doing in lazycall2. However, if you wanted to do something like this:
def sum(a, b):
    return a+b

plusone = lazycall3(sum, 1)
plusone(6)   # 7
You'd need to do what is shown in those docs.
Look closer at the argument names in the inner function newfunc on the Python documentation page you link to; they are different from those passed to the outer function: args vs. fargs, keywords vs. fkeywords. Their implementation of partial saves the arguments that the outer function was given and adds them to the arguments given to the inner function.
If you reused the exact same argument names in the inner function, the original arguments to the outer function would be shadowed and no longer accessible there.
As for setting the func, args, and kwargs attributes on the outer function: a function is an object in Python, and you can set attributes on it. These attributes let you access the original function and arguments after you have passed them into your lazycall functions. So a1.func will be A and a1.args will be (1,).
If you don't need to keep track of the original function and arguments, you should be fine with your lazycall1.
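Note that functools.partial does this bookkeeping for you, exposing func, args, and keywords on the returned object:

from functools import partial

a3 = partial(A, 5)
print(a3.args, a3.keywords)   # -> (5,) {}
print(a3.func is A)           # -> True
a3()                          # prints: A 5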
Is there a way to check whether a function accepts **kwargs before calling it? e.g.
def FuncA(**kwargs):
    print 'ok'

def FuncB(id = None):
    print 'ok'

def FuncC():
    print 'ok'

args = {'id': '1'}
FuncA(**args)
FuncB(**args)
FuncC(**args)
When I run this, FuncA and FuncB are okay, but FuncC errors with got an unexpected keyword argument 'id', as it doesn't accept any arguments.
try:
    f(**kwargs)
except TypeError:
    # do stuff
    pass
It's easier to ask forgiveness than permission.
def foo(a, b, **kwargs):
    pass

import inspect
args, varargs, varkw, defaults = inspect.getargspec(foo)
assert(varkw == 'kwargs')
This only works for Python functions. Functions defined in C extensions (and built-ins) may be tricky and sometimes interpret their arguments in quite creative ways. There's no way to reliably detect which arguments such functions expect. Refer to function's docstring and other human-readable documentation.
func is the function in question.
With Python 2, it's:
inspect.getargspec(func).keywords is not None
Python 3 is a bit trickier; following PEP 362 (https://www.python.org/dev/peps/pep-0362/), the kind of the parameter must be VAR_KEYWORD:
Parameter.VAR_KEYWORD - a dict of keyword arguments that aren't bound to any other parameter. This corresponds to a "**kwargs" parameter in a Python function definition.
any(param for param in inspect.signature(func).parameters.values() if param.kind == param.VAR_KEYWORD)
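Wrapped up as a small helper (a sketch that assumes a plain Python callable that inspect.signature can handle):

import inspect

def accepts_kwargs(func):
    """Return True if func declares a **kwargs-style parameter."""
    return any(p.kind == p.VAR_KEYWORD
               for p in inspect.signature(func).parameters.values())

def a(**kw): pass
def b(id=None): pass

print(accepts_kwargs(a))   # True
print(accepts_kwargs(b))   # False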
For Python 3 you should use inspect.getfullargspec.
import inspect

def foo(**bar):
    pass

arg_spec = inspect.getfullargspec(foo)
assert arg_spec.varkw and arg_spec.varkw == 'bar'
Seeing that there are a multitude of different answers in this thread, I thought I would give my two cents, using inspect.signature().
Suppose you have this method:
def foo(**kwargs):
    pass
You can test whether **kwargs is in this method's signature:
import inspect

sig = inspect.signature(foo)
params = sig.parameters.values()
has_kwargs = any(p.kind == p.VAR_KEYWORD for p in params)
More
Getting the parameters that a method takes is also possible:
import inspect

sig = inspect.signature(foo)
params = sig.parameters.values()
for param in params:
    print(param.kind)
You can also store them in a variable like so:
kinds = [param.kind for param in params]
# [<_ParameterKind.VAR_KEYWORD: 4>]
Other than just keyword arguments, there are 5 parameter kinds in total, which are as follows:
POSITIONAL_ONLY        # parameters must be positional
POSITIONAL_OR_KEYWORD  # parameters can be positional or keyword (default)
VAR_POSITIONAL         # *args
KEYWORD_ONLY           # parameters must be keyword
VAR_KEYWORD            # **kwargs
Descriptions in the official documentation can be found here.
Examples
POSITIONAL_ONLY
def foo(a, /):
    # the '/' enforces that all preceding parameters must be positional
    pass

foo(1)    # valid
foo(a=1)  # invalid
POSITIONAL_OR_KEYWORD
def foo(a):
    # 'a' can be passed by position or by keyword;
    # this is the default and most common parameter kind
    pass
VAR_POSITIONAL
def foo(*args):
    pass
KEYWORD_ONLY
def foo(*, a):
    # the '*' enforces that all following parameters must be keyword arguments
    pass

foo(a=1)  # valid
foo(1)    # invalid
VAR_KEYWORD
def foo(**kwargs):
    pass
It appears that you want to check whether the function accepts an 'id' keyword argument. You can't really do that by inspection, because the function might not be a normal function, or you might have a situation like this:
def f(*args, **kwargs):
    return h(*args, **kwargs)

g = lambda *a, **kw: h(*a, **kw)

def h(arg1=0, arg2=2):
    pass

f(id=3)   # still fails
Catching TypeError as suggested is the best way to do that, but you can't really figure out what caused the TypeError. For example, this would still raise a TypeError:
def f(id=None):
    return "%d" % id

f(**{'id': '5'})
And that might be an error that you want to debug. And if you're doing the check to avoid some side effects of the function, they might still be present if you catch it. For example:
class A(object):
    def __init__(self): self._items = set([1, 2, 3])
    def f(self, id): return self._items.pop() + id

a = A()
a.f(**{'id': '5'})
My suggestion is to try to identify the functions by another mechanism. For example, pass objects with methods instead of functions, and call only the objects that have a specific method. Or add a flag to the object or the function itself.
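For instance, a minimal sketch of the flag idea (the takes_id attribute name is made up for illustration):

def takes_id(func):
    func.takes_id = True   # hypothetical marker attribute
    return func

@takes_id
def check_number(id):
    return id > 10

def call_all(funcs, id):
    for func in funcs:
        if getattr(func, 'takes_id', False):
            func(id=id)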
According to https://docs.python.org/2/reference/datamodel.html
you should be able to test for use of **kwargs using co_flags:
>>> def blah(a, b, kwargs):
...     pass
>>> def blah2(a, b, **kwargs):
...     pass
>>> (blah.func_code.co_flags & 0x08) != 0
False
>>> (blah2.func_code.co_flags & 0x08) != 0
True
Though, as noted in the reference, this may change in the future, so I would definitely advise being extra careful. Definitely add some unit tests to check that this feature is still in place.
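For reference, in Python 3 func_code is spelled __code__, and the inspect module exposes the same bit as a named constant:

>>> import inspect
>>> def blah2(a, b, **kwargs):
...     pass
>>> (blah2.__code__.co_flags & inspect.CO_VARKEYWORDS) != 0
True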
I've been playing around in depth with attempting to write my own version of a memoizing decorator before I go looking at other people's code. It's more of an exercise in fun, honestly. However, in the course of playing around I've found I can't do something I want with decorators.
def addValue( func, val ):
    def add( x ):
        return func( x ) + val
    return add

@addValue( val=4 )
def computeSomething( x ):
    # function gets defined
If I want to do that I have to do this:
def addTwo( func ):
    return addValue( func, 2 )

@addTwo
def computeSomething( x ):
    # function gets defined
Why can't I use keyword arguments with decorators in this manner? What am I doing wrong and can you show me how I should be doing it?
You need to define a function that returns a decorator:
def addValue(val):
    def decorator(func):
        def add(x):
            return func(x) + val
        return add
    return decorator
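With that in place, the spelling from the question works:

@addValue(val=4)
def computeSomething(x):
    return x

print(computeSomething(1))   # -> 5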
When you write @addTwo, the value of addTwo is directly used as a decorator. However, when you write @addValue(4), first addValue(4) is evaluated by calling the addValue function. Then the result is used as a decorator.
You want to partially apply the function addValue - give the val argument, but not func. There are generally two ways to do this:
The first one is called currying and is used in interjay's answer: instead of a function with two arguments, f(a, b) -> res, you write a function of the first argument that returns another function taking the second argument: g(a) -> (h(b) -> res).
The other way is a functools.partial object. It stores a function along with some pre-supplied arguments (val in your case); when the partial is finally called, the stored arguments are combined with whatever arguments the call provides (func in your case).
from functools import partial

@partial(addValue, val=2)   # you can call this addTwo
def computeSomething( x ):
    return x
Partials are usually a much simpler solution for this partial application problem, especially with more than one argument.
Decorators with any kind of arguments -- named/keyword ones, unnamed/positional ones, or some of each -- essentially, ones you call on the @name line rather than just mention there -- need a double level of nesting (the decorators you just mention have a single level of nesting). That goes even for argument-less ones if you want to call them on the @ line -- here's the simplest, do-nothing, double-nested decorator:
def double():
    def middling(f):   # middling is the actual decorator
        return f       # ...and it does nothing to f
    return middling
You'd use this as
@double()
def whatever ...
Note the parentheses (empty in this case, since no arguments are needed nor wanted): they mean you're calling double, which returns middling, which decorates whatever.
Once you've seen the difference between "calling" and "just mentioning", adding (e.g. optional) named args is not hard:
def doublet(foo=23):
    def middling(f):
        return f
    return middling
usable either as:
@doublet()
def whatever ...
or as:
@doublet(foo=45)
def whatever ...
or equivalently as:
@doublet(45)
def whatever ...