I learned that we can equate one function with another in Python like this:
def func_1(x):
    print("func_1")
    print(x)

def func_2(x):
    print("func_2")
    print(x)

func_1 = func_2
Here, every subsequent call to func_1 actually executes func_2.
However, I read about decorators and the following is a simple code illustrating them:
def our_decorator(func):
    def function_wrapper(x):
        print("Before calling " + func.__name__)
        func(x)
        print("After calling " + func.__name__)
    return function_wrapper
def foo(x):
    print("Hi, foo has been called with " + str(x))

print("We call foo before decoration:")
foo("Hi")

print("We now decorate foo with f:")
foo = our_decorator(foo)

print("We call foo after decoration:")
foo(42)
Here, as we can see in the line

foo = our_decorator(foo)

something like the earlier function assignment is taking place. I thought this is how decorators work, i.e., replacing a call to the decoratee with a call to the decorator.
However, under this impression, if I write code like the following:
def our_decorator():
    def function_wrapper(x):
        print("Before calling ")
        foo(x)
        print("After calling ")
    return function_wrapper
def foo(x):
    print("Hi, foo has been called with " + str(x))

print("We call foo before decoration:")
foo("Hi")

print("We now decorate foo with f:")
foo = our_decorator()

print("We call foo after decoration:")
foo(42)
The above results in infinite recursion, printing "Before calling " an infinite number of times.
So, I could conclude that a decorator must be something which takes a function as an argument.
So the assignment of functions differs in these two cases, namely assigning a function that takes another function as an argument, versus assigning a function that doesn't. How might these two differ in their internal implementation?
What you call "equating functions" is really just variable assignment. A function definition (with def) creates a function object and assigns it to a variable name. After you do func_1 = func_2, you have two variables referring to the same function.
What happens in your decorator examples is a natural consequence of the previous paragraph. Leave a comment if you need further clarification on that.
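To make that concrete, here is a minimal sketch (the names are made up for illustration) showing that assignment only creates a second name for the same function object:

```python
def func_2(x):
    return "func_2 saw " + str(x)

# Plain assignment: func_1 becomes another name for the same function object.
func_1 = func_2

print(func_1 is func_2)   # True: one object, two names
print(func_1("hello"))    # func_2 saw hello
```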
I hope your question can be answered by explaining a few terms.

You are using the term "equating" for what is commonly called "assigning". name = expr is an assignment statement. The name name is given (assigned) to the object which is the result of the expression expr.

In Python, functions are not treated specially. This is sometimes expressed with the sentence "functions are first-class objects", and basically it means that function objects can be assigned to variables (names), passed as arguments etc. the same way as numbers, strings and other objects.
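A short sketch of what "first-class" means in practice (names here are made up for illustration): a function can be stored in a container and passed to another function just like any other value.

```python
def shout(text):
    return text.upper()

def apply_twice(f, value):
    # f arrives here as an ordinary object; we simply call it twice.
    return f(f(value))

handlers = {"loud": shout}                  # stored in a dict like any value
print(apply_twice(handlers["loud"], "hi"))  # HI
```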
A (function) decorator processes another function. It is a function that takes the function to be decorated as its argument and returns a "decorated" (i.e. enhanced or modified in some way) version of it. Sometimes it only registers the function, e.g. as a handler or as part of an API, and returns it unchanged. There is a special syntax for it:
@decorator
def func(...):
    pass
which is equivalent to:
func = decorator(func)
and also:
@decorator(args)
def func(...):
    pass
which is equivalent to:
_real_decorator = decorator(args)
func = _real_decorator(func)
Because this @decorator syntax is so simple to use and easy to read, you normally don't write:
func = decorator(func)
To summarize:
func1 = some_func is a plain assignment, giving another name to some_func.
func2 = create_function() would be called a function factory in some languages; your second example works this way.
func = decorate_function(func) is a decoration of func.
Note: there exist class decorators, they are very similar, but enhance class definitions instead of functions.
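For completeness, a minimal class-decorator sketch (the attribute name is made up for illustration); the decorator receives the class object and may modify or replace it:

```python
def register(cls):
    # A class decorator gets the class object, not a function.
    cls.registered = True   # hypothetical attribute, for illustration only
    return cls

@register
class Widget:
    pass

print(Widget.registered)  # True
```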
A decorator looks like this:
def decorator_with_args(*args, **kwargs):
    def wrapper(f: "the function being decorated"):
        def wrapped(*args, **kwargs):
            # inside here is the code that is actually executed
            # when calling the decorated function. This should
            # always include...
            f(*args, **kwargs)
            # ... and usually return its result
        return wrapped
    return wrapper

# or

def decorator_without_args(f: "the function being decorated"):
    def wrapped(*args, **kwargs):
        # as above
        return f(*args, **kwargs)
    return wrapped
and is used by:
@decorator_with_args("some", "args")
def foo(x):
    print("foo:", x)  # or whatever

@decorator_without_args
def bar(x):
    print("bar:", x)
This is equivalent to defining each function without the @decorator... magic and applying the decorator afterwards:
def baz(x):
    print("baz:", x)

baz = decorator_with_args("some", "arguments")(baz)
# or
baz = decorator_without_args(baz)
In your example code you call foo inside your decorator, then you decorate foo with that decorator, so you end up recursing infinitely. Every time you call foo, it runs your decorator code, which also invokes foo. Every time your decorator invokes foo, it runs your decorator code, which also invokes foo, which runs your decorator code, which also... etc.
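The fix follows from the answers above: have the decorator take the function to wrap as a parameter, so the wrapper closes over that parameter instead of looking the name foo up at call time. A minimal sketch:

```python
def our_decorator(func):
    def function_wrapper(x):
        print("Before calling " + func.__name__)
        func(x)  # calls the original function captured in func, not the global name
        print("After calling " + func.__name__)
    return function_wrapper

def foo(x):
    print("Hi, foo has been called with " + str(x))

foo = our_decorator(foo)  # rebinding foo is now safe: the wrapper holds the original
foo(42)                   # no recursion
```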
I am new to the more advanced features of Python like decorators.
I am unable to understand how the Python interpreter actually understands where to put the original function object in a decorator.
Let's look at an example (examples taken from here):
Simple decorator with no arguments:
def call_counter(func):
    def helper(*args, **kwargs):
        helper.calls += 1
        return func(*args, **kwargs)
    helper.calls = 0
    return helper

@call_counter
def succ(x):
    return x + 1
This makes perfect sense if we can assume that the first/only argument to the decorator call_counter(func) is the function object that needs to be wrapped, i.e. in this case the succ() function.
But things become inconsistent when you are talking about "decorators with parameters". Look at the example below:
Decorator with one argument:
def greeting(expr):  # Shouldn't expr be the function here? Or at least isn't there supposed to be another parameter?
    def greeting_decorator(func):  # How does Python know to pass the function down here?
        def function_wrapper(x):
            print(expr + ", " + func.__name__ + " returns:")
            func(x)
        return function_wrapper
    return greeting_decorator

@greeting("Hello")
def foo(x):
    print(42)

foo("Hi")
Now, Python is dynamically typed, so function parameters give no information about what type of object they will contain.
Am I correct ?
Having said that, let's look at this line from the above example:

def greeting(expr):

If, for decorators, the first argument is the function to be wrapped, then by that logic expr should point to foo(), right? Otherwise there should be at least two parameters in greeting(), like:
def greeting(func, expr):
But instead Python can "magically" understand that the inner function needs to be passed the function reference:
def greeting(expr):
    def greeting_decorator(func):  # How is it correctly put one level down?
The code has no datatypes or type information specified, so how is it that for decorators without arguments the function is passed as the first argument and for decorators with arguments the function is passed to the inner function ?
How can the interpreter detect that ?
What is going on here ?
This seems like "magic" to me.
What happens if I have 5 or 6 levels of nested functions ?
I am pretty sure I am missing something pretty basic here.
Thanks.
Python evaluates the expression after the @ and uses the result as the decorator function.
Python then calls the __call__ method of the object that is the decorator, with the function as its argument.
Using

@call_counter
def succ(x):
    return x + 1

call_counter itself is the object whose __call__ Python looks up and calls with succ as the func argument.
If you use

@greeting("Hello")
def foo(x):
    print(42)

then greeting("Hello") is evaluated first, and its result is the object whose __call__ method Python calls with foo as the func argument.
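The two steps can be spelled out explicitly as ordinary calls; nothing magical is happening (the step1/step2 names are made up for illustration):

```python
def greeting(expr):
    def greeting_decorator(func):
        def function_wrapper(x):
            print(expr + ", " + func.__name__ + " returns:")
            func(x)
        return function_wrapper
    return greeting_decorator

def foo(x):
    print(42)

step1 = greeting("Hello")  # evaluate the expression after @: returns greeting_decorator
step2 = step1(foo)         # call that result with foo: returns function_wrapper
foo = step2                # rebind the name foo to the wrapper
foo("Hi")
```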
I'm having trouble understanding the concept of decorators. Basically, if I understood correctly, decorators are used to extend the behavior of a function without modifying the function's code. The basic example:

I have the decorator function, which takes another function as a parameter and then changes the functionality of the function given as an argument:
def decorator(f):
    def wrapper(*args):
        return "Hello " + str(f(*args))
    return wrapper
And here I have the function which I want to decorate:
@decorator
def text(txt):
    '''function that returns the txt argument'''
    return txt
So if I understand correctly, what actually happens "behind" is:

d = decorator(text)
d('some argument')
My question is, what happens in this case, when we have three nested functions in the decorator:
def my_function(argument):
    def decorator(f):
        def wrapper(*args):
            return "Hello " + str(argument) + str(f(*args))
        return wrapper
    return decorator

@my_function("Name ")
def text(txt):
    return txt
This functionality is awesome, because I can pass an argument to the decorator, but I do not understand what actually happens behind this call:

@my_function("Name ")

Thank you,
It is just another level of indirection; basically the code is equivalent to:
decorator = my_function("Name ")
decorated = decorator(text)
text = decorated
Without arguments, you already have the decorator, so
decorated = my_function(text)
text = decorated
my_function is used to create a closure here. The argument is local to my_function, but due to the closure, the returned decorator keeps a reference to it the whole time. So when you apply the decorator to text, the decorator adds extra functionality, as expected. It can also embed argument in that extra functionality, since it has access to the environment in which it was defined.
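A minimal closure sketch (names made up for illustration): the captured variable survives after the outer function returns, and CPython even exposes it via __closure__.

```python
def make_adder(n):
    # n lives on in add_n's closure after make_adder returns.
    def add_n(x):
        return x + n
    return add_n

add_five = make_adder(5)
print(add_five(10))                           # 15
print(add_five.__closure__[0].cell_contents)  # 5, the captured n
```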
I want to prevent an already defined function from looking up function names in the global scope, and instead always call the versions available when the function was defined.
In the following snippet (http://ideone.com/GvghAy), foo() always looks up add() in the global scope, so when add() is replaced by sub(), foo() calls sub() instead. On the other hand, when bar() is defined, it captures a reference to the original add() function so calls to bar() will always call the original add(), even when add() is redefined.
from operator import add, sub

def foo(a, b):
    return add(a, b)

def create_bar_function():
    original_add = add
    def bar_function(a, b):
        return original_add(a, b)
    return bar_function

bar = create_bar_function()

print("foo before redefinition: " + str(foo(3, 4)))  # 7
print("bar before redefinition: " + str(bar(3, 4)))  # 7

# someone redefines add() with a different function
add = sub

print("foo after redefinition: " + str(foo(3, 4)))  # -1
print("bar after redefinition: " + str(bar(3, 4)))  # 7
Is it possible to create a function decorator such that the decorated function always captures references to the functions available when the decorated function was defined?
The only way to do this is to pass it as an argument to the function. It's not so bad though -- you can make it a keyword argument (which gets a default value "frozen" when the function is created):
def create_bar_function(add=add):
    ...
Please beware though: every Python programmer I know would consider this a serious hack. It isn't something you really want to do. It is better to avoid name clashes in the current namespace, e.g. by importing add under a different name (or importing the module):
from operator import add as _add
# import operator as op # now use op.add ...
Now the user can define a new add and use _add when they want operator.add.
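The default-argument trick works because default values are evaluated once, at definition time. A small sketch of the "freezing" behavior:

```python
from operator import add, sub

def bar(a, b, add=add):
    # The default is evaluated once, when def runs, so this add
    # survives any later rebinding of the global name.
    return add(a, b)

add = sub  # someone redefines add afterwards
print(bar(3, 4))  # still 7, not -1
```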
[Updated]: Answer inline below question
I have an inspecting program and one objective is for logic in a decorator to know whether the function it is decorating is a class method or regular function. This is failing in a strange way. Below is code run in Python 2.6:
def decorate(f):
    print 'decorator thinks function is', f
    return f

class Test(object):
    @decorate
    def test_call(self):
        pass

if __name__ == '__main__':
    Test().test_call()
    print 'main thinks function is', Test().test_call
Then on execution:
decorator thinks function is <function test_call at 0x10041cd70>
main thinks function is <bound method Test.test_call of <__main__.Test object at 0x100425a90>>
Any clue on what's going wrong, and if it is possible for @decorate to correctly infer that test_call is a method?
[Answer]
carl's answer below is nearly perfect. I had a problem when using the decorator on a method that subclasses call. I adapted his code to include an im_func comparison on superclass members:
ismethod = False
for item in inspect.getmro(type(args[0])):
    for x in inspect.getmembers(item):
        if 'im_func' in dir(x[1]):
            ismethod = x[1].im_func == newf
            if ismethod:
                break
    else:
        continue
    break
As others have said, a function is decorated before it is bound, so you cannot directly determine whether it's a 'method' or 'function'.
A reasonable way to determine if a function is a method or not is to check whether 'self' is the first parameter. While not foolproof, most Python code adheres to this convention:
import inspect
ismethod = inspect.getargspec(method).args[0] == 'self'
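Note that inspect.getargspec was removed in Python 3.11; the same convention-based check can be written with inspect.signature. A hedged sketch (the helper name is made up):

```python
import inspect

def looks_like_method(func):
    # Convention check only: is the first parameter named "self"?
    params = list(inspect.signature(func).parameters)
    return bool(params) and params[0] == "self"

def plain(x):
    return x

class C:
    def meth(self):
        pass

print(looks_like_method(plain))   # False
print(looks_like_method(C.meth))  # True; on Python 3, C.meth is a plain function
```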
Here's a convoluted way that seems to automatically figure out whether the function is a bound method or not. It works for a few simple cases on CPython 2.6, but no promises. It decides a function is a method if the first argument in a call is an object with the decorated function bound to it.
import inspect

def decorate(f):
    def detect(*args, **kwargs):
        try:
            members = inspect.getmembers(args[0])
            members = (x[1].im_func for x in members if 'im_func' in dir(x[1]))
            ismethod = detect in members
        except:
            ismethod = False
        print ismethod
        return f(*args, **kwargs)
    return detect

@decorate
def foo():
    pass

class bar(object):
    @decorate
    def baz(self):
        pass

foo()        # prints False
bar().baz()  # prints True
No, this is not possible as you have requested, because there is no inherent difference between bound methods and functions. A method is simply a function wrapped up to get the calling instance as the first argument (using Python descriptors).
A call like:
Test.test_call
which returns an unbound method, translates to
Test.__dict__['test_call'].__get__(None, Test)
which is an unbound method, even though
Test.__dict__['test_call']
is a function. This is because functions are descriptors whose __get__ methods return methods; when Python sees one of these in the lookup chain it calls the __get__ method instead of continuing up the chain.
In effect, the 'bound-methodiness' of a function is determined at runtime, not at define-time!
The decorator simply sees the function as it is defined, without looking it up in a __dict__, so cannot tell whether it is looking at a bound method.
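You can invoke the descriptor machinery by hand to watch a function turn into a method. Python 3 is shown here; there the "unbound" form is simply the plain function, but the bound case works the same way:

```python
class Test:
    def test_call(self):
        return "called"

t = Test()
func = Test.__dict__["test_call"]  # the raw function stored on the class
bound = func.__get__(t, Test)      # descriptor protocol: build a bound method

print(type(func).__name__)   # function
print(type(bound).__name__)  # method
print(bound())               # called; self is already baked in
```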
It might be possible to do this with a class decorator that modifies __getattribute__, but that's a particularly nasty hack. Why must you have this functionality? Surely, since you have to place the decorator on the function yourself, you could pass it an argument that says whether said function is defined within a class?
class Test:
    @decorate(method=True)
    def test_call(self):
        ...

@decorate(method=False)
def test_call():
    ...
Your decorator is run before the function becomes a method. The def keyword inside a class defines a function just like it does anywhere else; the functions defined in the body of a class are then added to the class as methods. The decorator operates on the function before it is processed by the class, which is why your code 'fails'.

There is no way for @decorate to see that the function is actually a method. A workaround would be to decorate the function whatever it is (e.g. adding an attribute do_something_about_me_if_I_am_a_method ;-)) and then process it again after the class is created (iterating over the class members and doing whatever you want with the decorated ones).
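A sketch of that two-step workaround (the attribute and helper names are made up for illustration): the function decorator only tags the function, and a class decorator post-processes the tagged members once the class exists.

```python
def decorate(f):
    # Phase 1: only tag the function; at this point we cannot tell
    # whether it will end up as a method.
    f.i_might_be_a_method = True   # hypothetical marker attribute
    return f

def process_marked(cls):
    # Phase 2: once the class body has run, its functions are members,
    # and the tagged ones can be handled here.
    for name, attr in vars(cls).items():
        if getattr(attr, "i_might_be_a_method", False):
            print(name, "is a method of", cls.__name__)
    return cls

@process_marked
class Test:
    @decorate
    def test_call(self):
        pass
```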
I tried a slightly different example, with one decorated method and one undecorated method.
def decorate(f):
    print 'decorator thinks function is', f
    return f

class Test(object):
    @decorate
    def test_call(self):
        pass

    def test_call_2(self):
        pass

if __name__ == '__main__':
    print 'main thinks function is', Test.test_call
    print 'main thinks function 2 is', Test.test_call_2
Then the output is:
decorator thinks function is <function test_call at 0x100426b18>
main thinks function is <unbound method Test.test_call>
main thinks function 2 is <unbound method Test.test_call_2>
Thus, the decorator saw a different type than the main function did, but the decorator did not change the function's type, or it would be different from the undecorated function.
In this blog article they use the construct:
@measured
def some_func():
    # ...

# Presumably outputs something like "some_func() is finished in 121.333 s" somewhere
This @measured directive doesn't seem to work in raw Python. What is it?

UPDATE: I see from Triptych that @something is valid, but where can I find @measured? Is it in a library somewhere, or is the author of this blog using something from his own private code base?
@measured decorates the some_func() function, using a function or class named measured. The @ is the decorator syntax; measured is the decorator function name.
Decorators can be a bit hard to understand, but they are basically used to either wrap code around a function, or inject code into one.
For example the measured function (used as a decorator) is probably implemented like this...
import time

def measured(orig_function):
    # When you decorate a function, the decorator func is called
    # with the original function as the first argument.
    # You return a new, modified function. This returned function
    # is what the to-be-decorated function becomes.
    print "INFO: This is from the decorator function"
    print "INFO: I am about to decorate %s" % (orig_function)

    # This is what some_func will become:
    def newfunc(*args, **kwargs):
        print "INFO: This is the decorated function being called"
        start = time.time()
        # Execute the old function, passing arguments
        orig_func_return = orig_function(*args, **kwargs)
        end = time.time()
        print "Function took %s seconds to execute" % (end - start)
        return orig_func_return  # return the output of the original function

    # Return the modified function
    return newfunc

@measured
def some_func(arg1):
    print "This is my original function! Argument was %s" % arg1

# We call the now decorated function..
some_func(123)

# .. and we should get (minus the INFO messages):
# This is my original function! Argument was 123
# Function took 7.86781311035e-06 seconds to execute
The decorator syntax is just a shorter and neater way of doing the following:
def some_func():
    print "This is my original function!"

some_func = measured(some_func)
There are some decorators included with Python, for example staticmethod - but measured is not one of them:
>>> type(measured)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'measured' is not defined
Check the project's import statements to see where the function or class is coming from. If it uses from blah import *, you'll need to check all of those files (which is why import * is discouraged), or you could just do something like grep -R "def measured" *
Yes it's real. It's a function decorator.
Function decorators in Python are functions that take a function as their single argument and return a new function in its place.

@classmethod and @staticmethod are two built-in function decorators.
measured is the name of a function that must be defined before that code will work.
In general, any function used as a decorator must accept a function and return a function. The function will be replaced with the result of passing it to the decorator, measured() in this case.